vanderZwan 24 hours ago [-]
> The header is the cost. Not the reflection. The reflection algorithm is fast – asymptotically ~0.07 ms per enumerator, essentially the same as the hand-rolled switch in the X-macro version (~0.06 ms). What makes reflection look expensive is <meta>: just including it costs ~155 ms per TU over the baseline.
So speaking of old ways: I'm not a C++ dev, but a while ago I saw someone comment that they still organize their C++ projects using tips from John Lakos' Large-Scale C++ Software Design from 1997, and that their compile times are incredibly fast. So I decided to find a digital copy on the high seas and read it out of historical curiosity. While I didn't finish it, one wild thing stood out to me: he advised using redundant external include guards around every include, e.g.
The reason for this being that (in 1997) every include required the pre-processor to open the file just to check for an include guard and read it all the way to the end to find the closing #endif, causing potentially O(N²) disk-read overhead (if anyone feels like verifying this, it's explained on pages 85 to 87).
Again, that was in 1997. I have no idea what mitigations for this problem exist in compilers by now, but I hope at least a few, right?
This conclusion is making me wonder if following that advice still would have a positive impact on compile times today after all though. Surely not, right? Can anyone more knowledgeable about this comment on that?
SuperV1234 23 hours ago [-]
This cost is not significant nowadays; what dominates is frontend/parsing time.
You can also use `#pragma once`, which works everywhere, is nicer, and technically needs less work from the compiler, but compilers have optimized for include guards for a long time now.
Yes, I've heard that before, but comments like this one in your linked issue still make me wonder:
> at least for gcc and Visual Studio using #pragma once has a significant impact. The fact is, the compiler does not need to continue parsing the whole file when reaching a #pragma once. otherwise the compiler always needs to do it even if the include guard afterwards will avoid double processing of the content afterwards.
As written, the explanation of these optimizations suggests that both `#pragma once` and the include-guard optimization still require opening and closing the file each time an include is encountered, even if you bail out after parsing the first line. Is that overhead zero? Or are the optimizations explained poorly, and is repeatedly opening/closing the file also avoided?
Either way, do you know what causes the slowdown as a result of including <meta>?
gpderetta 13 hours ago [-]
The compiler doesn't need to open the same file multiple times. It can remember whether a file is guarded every time it sees its name.
My understanding is that this is an optimization that has been available for a very long time now.
The only issue is if a file is referred through multiple names (because of hard links, symlinks, mounts). That might cause the file to be opened again, and can actually break pragma once.
Thank you for explaining and looking up the link, it's appreciated! :)
Quekid5 22 hours ago [-]
The overhead isn't zero, but with SSDs (and filesystem caches in the gigabytes these days) it's damn near insignificant in pure terms of opening files and such.
daemin 20 hours ago [-]
What I found (so far on MSVC) is that #pragma once only processes the file once, whereas include guards still open the file each time it is included. Though it takes almost no time to do so, it still appears in the traces.
I'm going to experiment with other compilers and figure out how they handle it.
sagacity 1 days ago [-]
Oof, that first example (the idiomatic C++26 way) looks so foreign if you're mostly used to C++11.
delegate 1 days ago [-]
I was very curious to see what C++ 26 brings to the table, since I haven't used C++ in a while.
When I saw the 'no boilerplate' example, the very first thought that came to my mind:
This is the ugliest, most cryptic and confusing piece of code I've ever seen.
Calling this 'no boilerplate' is an insult to the word 'boilerplate'.
Yeah, I can parse it for a minute or two and I mostly get it.
But if given the choice, I'd choose the C-macro implementation (which is 30+ years old) over this, every time. Or the good old switch case where I understand what's going on.
I understand that reflection is a powerful capability for C++, but the template-meta-cryptic-insanity is just too much to invite me back to this version of the language.
vanderZwan 23 hours ago [-]
As a developer who doesn't really write C++ code I'm inclined to agree, but I think Herb Sutter's "syntax 2" project might provide a nice way out of that mess eventually.
I played around with cppfront over Christmas and it was a lot more ergonomic than my distant memories of C++11, which I don't even have negative memories of per se.
It is no different from any other language that compiles via C or C++ code generation; it just got sold a bit differently due to his former position at WG21.
vanderZwan 7 hours ago [-]
Well, if you mean "as an official C++ syntax" then I agree, and I suspect Sutter would agree as well. He titled one talk about it "Towards a TypeScript for C++", after all[0].
But I do think it is different than other "compile to C++" languages, because it seems to be more of a personal case study for Sutter to figure out various reflection and metaprogramming features, and then "backport" those worked out ideas to regular C++ via proposals. And the latter don't have to match the CPP2 syntax at all.
In multiple examples he's given in talks the resulting "regular" C++ code is easier to read, mainly because the metaprogramming deals with so much boilerplate.
What Herb Sutter misses in his TypeScript-and-Kotlin-for-C++ metaphor is the actual reality of how those languages integrate, unlike cpp2.
TypeScript is a linter, nothing else: type annotations for JavaScript. The two features that aren't present in JavaScript, enums and namespaces, are considered design mistakes, and the team vowed to focus only on being a linter and a polyfill for older runtimes, when possible (some JS features require runtime support).
Kotlin, meanwhile, emits JVM bytecode, but many language constructs, like coroutines, make interop one-way: it is easy to call Java from Kotlin, while the other way around requires boilerplate code, manipulating the additional classes generated by the Kotlin compiler for its semantics.
vanderZwan 19 minutes ago [-]
My point was that TypeScript isn't exactly about to replace JavaScript, which was what you were arguing. I'm honestly not sure what you're trying to argue now.
Like, yeah, what you say about TS and Kotlin is true about TS and Kotlin. But since you're not explaining what cpp2 does or plans to do differently, and why it matters, I'm not sure where you're going with that. It's probably obvious but I'm not getting it.
The metaphor Sutter was going for, as I see it, is that TS and Kotlin both added missing features to their host language. Most importantly reflection and decorators in TS, which are now becoming a standard in JS as well[0]. cpp2 mainly focuses on experimenting with reflection and metaprogramming as well, adding features currently missing in C++ by being a compiles-to-C++ language. Sutter has written C++ proposals that would give C++ similar reflection and metaprogramming capabilities based on what he discovered by working on cpp2. That's pretty comparable, if you ask me.
> But if given the choice, I'd choose the C-macro implementation (which is 30+ years old) over this, every time.
Why? The implementation is not pretty, but you only need to write it once and then it works for all enums. The actual usage is trivial, it's just a function call.
The C macro version is horrendous in comparison. Why would I want to declare my enums like that just because I might want to print them?
madduci 15 hours ago [-]
Then why isn't it part of the stdlib? Why should everybody maintain their own version?
spacechild1 9 hours ago [-]
Just wait for C++32 :-D. After all, we only got `std::string::starts_with` in C++20 and C++23 finally gave us `std::string::contains`. It's a clown show, you just need to take it with humor.
SuperV1234 1 days ago [-]
It is "cryptic" and "ugly" to you just because you're not familiar with it. You'd pick the macro-based implementation because you are familiar with it.
Seeing this argumentation is so tiresome, because it feels like there is a lack of self-awareness regarding what is "familiar" and what isn't, which is subconsciously translated to "ugly" and "bad".
delegate 1 days ago [-]
Have you ever used other (modern) programming languages ?
In a lot of languages, you achieve the same with 1 line of code. It's not about familiarity, it's about the fact that it's a long and convoluted incantation to get the name of an enum.
Why do I have to be familiar with all those weird symbols just to do a trivial thing ?
Update:
Zig:
const Color = enum { red, green, blue };
const name = @tagName(Color.red); // "red"
Rust (the Display derive comes from a crate such as strum or derive_more):
#[derive(Display)]
enum Color { Red, Green, Blue }
let name = Color::Red.to_string(); // "Red"
Clojure:
(name :red) => "red"
throwaway7356 1 days ago [-]
And what if you want to implement something like Rust's "derive"? That is what the article shows.
As far as I understand you would have to mess with individual parser tokens in Rust instead of high-level structures like "enum" (C++ reflection). It would be much, much uglier to implement anything like "to_enum_string" in Rust as you would have to re-implement parts of the compiler to get the "enum" concept out of a list of tokens.
SuperV1234 1 days ago [-]
C++:
enum Color { Red, Green, Blue };
auto name = to_enum_string(Color::Red); // "Red"
shooly 1 days ago [-]
... and where does that `to_enum_string` come from exactly? It doesn't seem to be built-in, which is the point of the parent comment.
SuperV1234 1 days ago [-]
It's a fair comparison. The parent comment isn't showing the compiler source code for the built-in reflection mechanisms.
You won't have to care about ^^ and [:X:] if you just want to consume reflection-based utils, which was the whole point of my comment.
shooly 22 hours ago [-]
What? No. Parent comment is comparing C++ to modern programming languages, showcasing how they provide commonly used utilities out-of-the-box instead of making every programmer re-implement them again and again and again and again and again.
SuperV1234 22 hours ago [-]
The parent comment is quite clear:
> Why do I have to be familiar with all those weird symbols just to do a trivial thing ?
And my answer demonstrates that you do not have to.
shooly 21 hours ago [-]
> And my answer demonstrates that you do not have to
Then again - "where does that `to_enum_string` come from exactly?".
SuperV1234 11 hours ago [-]
#include "to_enum_string.h"
You don't have to understand it to use it. Even then, it's not that hard to understand, it just looks unfamiliar.
shooly 5 hours ago [-]
So finally, it's NOT built-in, while the parent comment was showing that in other languages it IS built-in. So your code example is NOT correct and the comparison is NOT fair, because you hid the most important part of it, which is the implementation, that the user has to either: a) write themselves, or b) find somewhere on the Internet.
SuperV1234 3 hours ago [-]
So? The original argument was about the "ugly" syntax that the user didn't want to interact with nor read. I proved that there's no need to do so to consume reflection utils.
shooly 1 hours ago [-]
XD.
pjmlp 14 hours ago [-]
A library that you install via vcpkg or conan.
How many libraries do you read the source code of after installing them with a package manager?
shooly 5 hours ago [-]
So it is NOT built-in and the code example shown above is dishonest - @SuperV1234 compares how "lean" two languages are but conveniently hides half of the code in their preferred language to make it seem simpler than it actually is, as otherwise it would look bad in the comparison!
pjmlp 2 hours ago [-]
Reflection is built in, the support is there; anyone can make a left-pad out of such a simple snippet.
shooly 1 hours ago [-]
> Reflection is built in
Can you quote the C++ standard section that specifically talks about the `to_enum_string` function?
pjmlp 33 minutes ago [-]
Some people like to really be obtuse on purpose.
It is in the same place as left-pad in ECMA-262.
wiseowise 12 hours ago [-]
Typical C++ dev schizophrenia. In one thread complain about Node and its death-by-a-thousand-packages, then suggest the same in another.
pjmlp 12 hours ago [-]
Don't put assumptions in others heads.
First of all, the only correct way to use package managers is with validated internal repos; don't vibe-install. That goes for node, and it goes for C++ as well.
Second, this thread was all about how code lands on one's computer.
SuperV1234 11 hours ago [-]
Package? We're suggesting to copy paste 5 lines and stick them into a header.
wiseowise 8 hours ago [-]
You can press the 'parent' button on my comment.
pjmlp 8 hours ago [-]
Header files are libraries as well.
gpderetta 1 days ago [-]
The whole point of reflection is that it doesn't have to be builtin.
wiseowise 12 hours ago [-]
No, it is objectively cryptic and ugly. I honestly don't understand how anyone can keep up with this garbage, but the ship sailed a long time ago. It is just a soup of symbols at this point.
SuperV1234 11 hours ago [-]
No, it objectively isn't objective.
randusername 1 days ago [-]
I was a fool to assume that the same forces shaping the ugliness of C++ syntax would not also be at work in C++ 26.
mananaysiempre 24 hours ago [-]
Reflect/reify, quasiquote/unquote, etc. are the final boss of syntax design. Even Template Haskell looks rather bad.
spacechild1 1 days ago [-]
I find it quite readable. I can understand what it does even though I haven't written reflection code yet myself.
mort96 23 hours ago [-]
I wish I understood the reason for the `std::define_static_array`... Why can't `std::meta::enumerators_of` just return something that can be iterated through????
SuperV1234 23 hours ago [-]
It is kind of weird at first, but the reason is that `std::vector` requires heap allocation, and `constexpr` allocations are only transient: they cannot persist beyond compile-time evaluation. The purpose of `std::define_static_array` is to promote the vector's contents to static storage, eliminating the transient-allocation issue and letting `template for` work properly with the result.
See wg21.link/P3491
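Putting those pieces together, the enum-to-string utility discussed upthread looks roughly like this. This is a hedged sketch following the P2996/P3491 proposals as implemented in GCC 16; `to_enum_string` is a made-up name, exact library spellings may still shift, and this will not compile on pre-reflection toolchains:

```cpp
#include <meta>         // the expensive header the article measures
#include <string_view>
#include <type_traits>

template <typename E>
    requires std::is_enum_v<E>
constexpr std::string_view to_enum_string(E value) {
    // enumerators_of returns a constexpr std::vector<std::meta::info>;
    // define_static_array copies it into static storage, since the
    // transient constexpr allocation cannot survive into the expansion.
    template for (constexpr auto e :
                  std::define_static_array(std::meta::enumerators_of(^^E))) {
        if (value == [:e:])                     // splice the enumerator value
            return std::meta::identifier_of(e); // its source-level name
    }
    return "<unknown>";
}
```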
mort96 23 hours ago [-]
Is there a reason why `std::meta::enumerators_of`, a reflection feature that's surely almost exclusively going to be used in constexpr contexts, returns a value which doesn't work in constexpr contexts?
It seems that this is being worked on, and eventually the `define_static_array` won't be needed anymore
mort96 22 hours ago [-]
Just another example where C++ language features are incompatible with each other, to be fixed "in a later version" which may or may not happen. There are so many of those in C++. I desperately wish they'd just do it properly initially.
pjmlp 14 hours ago [-]
Me too; unfortunately the old guard sees no value in implementation before standardisation for every single feature.
So it is as it is; plenty of software in C++ isn't going to be rewritten into something else.
Maybe someone can do a Claude rewrite from LLVM into something else. /s
bluGill 1 days ago [-]
You realize C++11 is closer in age to C++98 than to C++26?
mananaysiempre 24 hours ago [-]
I’m not sure the nominal publication date of a standard is all that relevant when the implementors’ reaction is as lukewarm as it has been to C++ ≥20.
ginko 1 days ago [-]
Is it? I'm mostly used to (pre-)C++11 and the only unusual operators I see are ^^T (which I presume accesses the metadata info of T) and [:e:] (which I assume somehow casts the enumerator metadata 'e' to a constant value of T).
And `template for`, but I assume that's like `inline for` in Zig.
CamouflagedKiwi 1 days ago [-]
requires is also new (not sure exactly when that appeared, it's after the last time I wrote C++ in anger) although I think it's fairly clear what it means. I can only guess at the other two.
Not familiar with Zig but AFAICT `inline for` is about instructing the compiler to unroll the loop, whereas `template for` means it can be evaluated at compile time and each loop iteration can have a different type for the iteration variable. It's a bit crazy but necessary for reflection to work usefully in the way the language sets it up.
Well yes, but the _effect_ is to unroll the loop for runtime, if the inline-for survives that long.
A for loop executed during comptime is just
const stuff = comptime stuff: {
    for (0..8) |i| {
        // etc, build up some stuff
    }
    break :stuff some_stuff;
};
The difference is that a comptime block won't leave behind runnable 'residue', only whatever data is constructed for later. An inline for might not leave behind an unrolled loop either, but it can.
jsd1982 1 days ago [-]
I think the conclusion section should indicate that its findings are based entirely on GCC 16's behavior and current implementation. We should avoid generalizing from one compiler's behavior and performance. Curious how this same test would behave once Clang ships C++26 reflection.
SuperV1234 1 days ago [-]
I explicitly mentioned that GCC 16.1 was the compiler used in the benchmarking section; do you think I need to add a disclaimer in the conclusion section as well?
Regardless, I don't think things are going to differ much with Clang. Without PCH/modules, standard header inclusion is still the "slow part" of C++ compilation, regardless of the compiler used and the standard library used (libstdc++ vs libc++). `#include` is fundamentally the same on any modern compiler.
Because the reflection feature itself seems quite fast on GCC (compared to the cost of the header), I predict the results will be similar on Clang as well.
pjmlp 14 hours ago [-]
Or VC++ if ever, which has the best modules support, but it is still trailing behind in C++23.
bluGill 1 days ago [-]
I was thinking the same thing. Modules are still not widely used, it is a reasonable guess that there are a lot of optimization opportunities left.
SuperV1234 1 days ago [-]
That is true, but on the other hand Modules were standardized more than 6 years ago.
Promises and claims have been made for longer than that about how Modules would improve compilation times and make everyone's lives easier. In 2026, I have yet to see any real evidence of that, especially when PCH + unity builds are much easier to use (except on damn Bazel, which supports neither) and deliver great results.
If after 6+ years of development Modules are still so far behind, it is fair to question if the problem is with the design/implementability of the feature itself.
spacechild1 1 days ago [-]
> it is fair to question if the problem is with the design/implementability of the feature itself.
The module story is just insane. How was it possible to get such a big feature into the standard without any working reference implementation? Isn't this the requirement for standard proposals to get accepted? If I compare this with how they treated JeanHeyd and his #embed proposal, the difference is staggering. To me it seems like a few powerful comittee members wanted to get modules into C++20 at any cost. This was just irresponsible.
bluGill 21 hours ago [-]
There was one, in Visual Studio, which has had modules (apart from minor details) for a while. The real problem is that tools are needed to make modules work, and those needed a lot of work. The work was already partially there, because it's the same work that Fortran modules need, which tools supported, but there were just enough different details to be annoying. Fortran modules were always an afterthought, and when tools started realizing modules were going to be a big deal, they decided they had to do it right, which took a lot of time too.
Maybe you forget the Hacker News of 10 years ago, but in 2015-2016 everyone was complaining that C++ doesn't have modules and how awful it must be without them. Now that C++ has modules, they're complaining about how it has modules.
spacechild1 19 hours ago [-]
I don't remember because I wasn't there :)
People are not complaining about the fact that C++ has modules, but about their usability and effectiveness. The compile time benefits seem modest, and I have seen reports that they break IntelliSense. (Maybe that's not true anymore?)
As Vittorio said, if it takes compiler vendors so long to implement them properly, maybe the design wasn't that good after all?
My point was: if you add such a big feature, shouldn't the standard require a sufficiently complete implementation? Otherwise, how can they assess whether the proposal actually works in practice and lives up to its promises?
gpderetta 12 hours ago [-]
Agreed that proof of implementation and real-world experience should be a requirement for standardization. But it is a catch-22: implementors are probably not too keen to spend time on a large feature if it is not clear that it will be standardized.
In practice both Clang and VS have had some form of module support for quite a while, but the final standard ended up being different from either implementation (shaped by their experience, and certainly with inevitable last-minute inventions).
I wonder if for some features the committee should vote on general guidelines, then delegate a third party (one or more implementors) to come up with both an implementation and standardese, with the understanding that it will be fast-tracked without too much bike-shedding.
bluGill 19 hours ago [-]
Again, they had a sufficiently complete implementation. That implementation was in Visual Studio, clang had a very different implementation. The standard decided to take the Microsoft version. There are pros and cons to both and I will not fault the decision but either way one of the two had to lose and there is no surprise that for something complex it will take a long time to reimplement it to whatever the new standard is.
spacechild1 18 hours ago [-]
If the implementation really was sufficiently complete, then this is even worse! Why did they choose to vote something into the standard that is very complex and difficult to implement, but does not live up to the promises? Maybe they thought it would improve in the future, but isn't this a huge gamble?
I have heard rumors that certain people in the Visual Studio team have exaggerated the state of their modules implementation to speedrun the standardization process. I have no idea if that is really true, but it would explain a lot of things...
I'm not the only one who is asking these questions:
> I don’t know if they exaggerated their claims at the time, or if they didn’t properly fund the Visual Studio team since or what, but you can’t tell me 8 years wasn’t enough to make syntax highlighting work with modules. And if it is, then maybe there was something deeply wrong in their proposal and the committee should have asked to see the receipts before voting yes.
I've been wondering about the debuggability of code using reflection. X-macros are quite annoying to step through in most debuggers, though possible. Since the code in the first example is evaluated fully at compile time, how would you approach debugging it?
theICEBeardk 1 days ago [-]
The answer is being debated at the moment in C++ papers, building on experience from other languages with extensive compile-time evaluators, like D. One thing that is happening is that we will get compile-time exceptions (a paper aiming to add this to the language in C++29 has come out), which may help us report problems. That will be important, as there are also a lot of papers and talk about extending reflection to allow better output generation, which as far as I know was deferred until reflection had been accepted.
But there is also good news: with JIT-like components for compile-time evaluation in progress, and the likes of CLion having the beginnings of a compile-time debugger, in combination with concepts, there is a chance some help is available and on the way.
However right now you have to rely on compiler errors and static_asserts which is not ideal of course.
SuperV1234 1 days ago [-]
Nothing that makes it straightforward. Testing via `static_assert` is a good strategy, but it's not debugging. I believe there are some ways of printing custom diagnostics during compilation, but I am not aware of any step-by-step debugging tool that runs at compile-time.
In practice, I haven't really needed to ever debug `consteval` functions -- it's quite easy to get the right behavior down thanks to `static_assert`-based testing and thanks to the fact that they do not depend on external state (simpler).
kevin_thibedeau 24 hours ago [-]
Keep macro generated code isolated in self-contained wrapper functions that just return a static object corresponding to an argument. Then you can treat them like black boxes that never fail and never need to be stepped over.
cenamus 1 days ago [-]
I mean it's still C++ that's compiled and executed, surely the compiler would be able to provide a way to hook into that?
usefulcat 1 days ago [-]
I don't recall the source, but I don't believe most (any?) c++ compilers implement compile-time code evaluation by compiling and running code.
For one thing they are required to disallow all undefined behavior for compile time execution, and some forms of UB only occur when the code is run.
pjmlp 14 hours ago [-]
Basically nowadays they ship an interpreter in the box as well.
varispeed 1 days ago [-]
Why people are still using debuggers?
I never felt the need for them when doing TDD.
w4rh4wk5 12 hours ago [-]
It's not that people are _still_ using debuggers; it's that people have actually discovered debuggers and workflows that are more productive than adding print statements, recompiling, and rerunning the program.
If you need to step through with a debugger, it probably means you don't understand the code and cannot step through it in your mind. A good test suite eliminates the need for a debugger too.
w4rh4wk5 6 hours ago [-]
Yeah, in my field this approach is pretty much infeasible.
Typically, I am given an ancient code base that is full of bad decisions and hard-to-read code, with no tests in sight. Sometimes there are assertions, if I am lucky. It's impractical to create a reliable test suite, or to rewrite everything from scratch.
Here, I heavily rely on a debugger just to make sense of the code. Sure, I wish all of this code were sparkling clean, easy to read, free of UB, etc. But that's not the reality I work in, and a good debugger is my number one tool for getting the job done.
And don't even get me started on dealing with closed source implementations where all you could read is disassembly.
mort96 23 hours ago [-]
Because sometimes you have bugs and you haven't narrowed down the cause enough to write a proper test for it?
pjmlp 14 hours ago [-]
[dead]
HarHarVeryFunny 1 days ago [-]
No doubt reflection has been built with other use cases in mind, but it sure would have been nice just to have std::to_string(enum)
bluGill 1 days ago [-]
C++ conference speakers (including keynotes) are now begging everyone to stop using enum-to-string as their example. While it is a simple and easy-to-understand example, reflection is for much more interesting problems. Still, I can't think of any other example that I would type into a comment box or put on a slide.
maccard 1 days ago [-]
Serialization is the canonical example. Being able to turn
struct MyStruct {
    int val = 42;
    std::string name = "my name";
};
into
{
    "val": 42, // if JSON had integers, and comments of course
    "name": "my name"
}
is incredibly powerful. If reflection supported attributes (I can't believe it shipped without them, honestly), then you could also mark members as [[ignore]] and skip them.
(The link above shows ImGui generation, but the same exact logic can be applied for serialization to JSON/YAML/whatever.)
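A hedged sketch of how that member walk might look with C++26 reflection (names follow the P2996 proposal; `to_json` is a made-up helper, escaping and nesting are omitted, and this won't compile on pre-reflection toolchains):

```cpp
#include <meta>
#include <sstream>
#include <string>

// Sketch only: iterate the non-static data members of T at compile time
// and emit "name": value pairs for each one.
template <typename T>
std::string to_json(const T& obj) {
    std::ostringstream out;
    out << "{";
    bool first = true;
    template for (constexpr auto m :
                  std::define_static_array(
                      std::meta::nonstatic_data_members_of(
                          ^^T, std::meta::access_context::unchecked()))) {
        if (!first) out << ", ";
        first = false;
        out << '"' << std::meta::identifier_of(m) // member's source name
            << "\": " << obj.[:m:];               // splice: member access
    }
    out << "}";
    return out.str();
}
```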
maccard 1 days ago [-]
Sure, but
> The magic sauce? Boost.PFR! An incredibly clever library that enables reflections on aggregates, even in C++17.
That's not vanilla C++!
SuperV1234 1 days ago [-]
...so what? It's just a header you have to #include.
maccard 1 days ago [-]
By that logic why would anything have to be standardised?
gpderetta 1 days ago [-]
The question is whether something belong in the language or in a library (possibly the standard library).
A guiding principle of C++ is that if something can be implemented cleanly and efficiently in a library, the language should not be extended to support the use case.
Now boost.pfr is exceedingly clever, but relying on speculative pack expansions or using stateful metaprogramming hacks is not something I would call clean and efficient, so proper reflection is warranted.
I do worry about the compile time impact though.
jcelerier 17 hours ago [-]
The only things that should be standardized are things that cannot be done through libraries in an efficient way. Boost.PFR is great, I built a lot of things on it, but eventually you hit the limits of what a pure library approach can do -> language feature.
SuperV1234 23 hours ago [-]
By your logic we shouldn't ever use external libraries.
PFR has given us reflection since C++14.
I also don't think the Standard Library is particularly well-defined nor well-implemented, as demonstrated by the atrocious compilation times.
bluGill 21 hours ago [-]
The standard library shows its mid-1990s roots, and there are a lot of things we would definitely do differently today. However, it is still extremely well defined compared to most everything else. C++ is one of the few languages where the library actually guarantees how an algorithm works, which is both good and bad. The bad part is that some things that made perfect sense in 1995 simply don't make sense on modern CPUs, where cache is important.
bluGill 1 days ago [-]
It is powerful, but I'm not sure it is a good idea. Other languages have it, and there is lots of experience in all the ways things go wrong in the real world. I'm inclined to say you should hand write this code because eventually you will discover something weird anyway.
electroly 1 days ago [-]
Can you give an example of a language ecosystem that went with reflection-based JSON serialization/deserialization and then went on to regret it? I can't think of any, and don't agree with your conclusion. It works great, and manually writing serialization and matching deserialization code is terrible, annoying, error-prone work.
maccard 1 days ago [-]
I disagree. Rust's defacto default is serde, golang comes with batteries included, dotnet/java have had it for _years_, and all the dynamic languages do it.
SuperV1234 1 days ago [-]
I think this is a very bad take -- once you write it by hand you have to manually keep it in sync with the actual struct and ensure you made no mistakes. Reflection guarantees 1-1 future-proof mapping with the actual C++ struct, avoids boilerplate, and ensures that the serialization logic is correct.
bluGill 1 days ago [-]
The protocol is important though, not the internal structure. When you only have exactly one version of a program talking to the same version of itself you don't care. However when you are mixing versions or worse programming language (and thus can't mix structs which are implementation details of your language) the protocol is what matters.
That is, if you are worried about doing this by hand, reflection is not the answer; something like protobuf, where your data structures are generated, is the answer.
gpderetta 1 days ago [-]
I completely understand your point. Then again you might be able to use reflection to verify that your manually rolled implementation actually serializes all fields.
cogman10 1 days ago [-]
It comes up pretty frequently in Java. Serialization/deserialization, adding capabilities based on type, adding new capabilities to a type, general tuning (for example, adding a timing or logging call onto methods).
Almost all the Java web frameworks are giant balls of reflection. Name a function the right way or add the right magic annotation and the framework will autowire it correctly.
It's a pretty powerful tool. (IDK if C++'s reflection is as capable, but this is what was enabled by java's reflection).
SuperV1234 1 days ago [-]
Java reflection is another beast altogether as it is runtime reflection. C++26 reflection is purely compile-time, which not only means it adds zero runtime cost, but also prevents those kind-of-insane use cases you see in Java and C#.
pjmlp 14 hours ago [-]
I think C++ devs eventually have to update their knowledge of how Java and .NET work when talking about reflection.
Yes, originally they only supported runtime reflection.
Nowadays they have compile time tooling as well, via plugins, annotation processors, and code generators.
Which is exactly how you can have Spring-like frameworks that do all the AOP magic at compile time, for native code with GraalVM or OpenJ9, like Quarkus or Micronaut.
david422 1 days ago [-]
> Almost all the Java web frameworks are giant balls of reflection. Name a function the right way or add the right magic annotation and the framework will autowire it correctly.
I find this to be very powerful, and also very unintuitive/undiscoverable at the same time.
cogman10 1 days ago [-]
Initially, but it very quickly becomes discoverable once you are familiar with how things are working.
Most frameworks in Java are very similar. The ones that aren't are effectively doing what "expressjs" does in terms of setup, which is still pretty discoverable.
Most java frameworks rely on annotations rather than naming schemes which makes everything a lot easier to grok.
kuboble 1 days ago [-]
Reflection is simply syntax vinegar for duck typing.
surajrmal 1 days ago [-]
Arguably, the derive traits Rust has are a good demo.
theICEBeardk 1 days ago [-]
I mean, a readable implementation of tuple with minimal overhead is a great case for me (went from around 1.6k lines to approximately 250 lines). I wrote an implementation, including the normally difficult-to-implement tuple_cat, based on C++26 within a few hours.
My favorite thing is that I will get to remove and replace most of the cryptic template recursion stuff I have with "template for" and maybe a bit of reflection. Debugging the unrolled stuff will be a joy in comparison.
randusername 1 days ago [-]
I can't imagine myself using reflection much, but maybe it will eliminate a lot of feature proposals bogging down the committee and they can focus on harder problems.
It would be cool if the stated goal of C++29 was compile times.
w4rh4wk5 1 days ago [-]
I'd argue reflection is very much a feature for libraries. You wouldn't use it directly, but your JSON / YAML serializer is then built on top of it. So are your bindings for scripting engines like Lua.
SuperV1234 1 days ago [-]
You can already automatically serialize/deserialize arbitrarily nested structs since C++17 (using Boost.PFR). Since C++20, you can also serialize/deserialize the struct data member names automatically.
There are a lot of things that are very very important for a tiny niche. In any non-trivial project you will end up with a lot of custom libraries and some of them really benefit from some obscure feature that no place else in your project would want.
agentultra 1 days ago [-]
Also nice for UI tooling; game tools, debuggers, etc. Pulling apart a struct and displaying it on screen without having to patch the UI tool every time you change the struct is pretty nice.
Never quite understood why people are so obsessed with meta programming capabilities in a language, be it templates, comptime, macros, whatever.
I program mostly in C; if I need 'meta' programming I just write another C program that processes C source code (I've written a simple C parser). Then in my build script I build in two stages: build the meta program, run it, build the rest of the program.
Simple, effective, debuggable (the meta program is just normal C), infinite capabilities - you can nest this to arbitrary depths. Need meta-meta programming? Make a program that generates a meta program.
rddbs 1 days ago [-]
One obvious answer is that people probably don’t want to write a whole parser and wire up new steps in their build pipeline just to do something simple like get the name of enum cases as a string.
Without taking a stance on whether in-language meta programming facilities are good or bad, it’s not hard to find examples of cases where people find it useful to have them.
1 days ago [-]
Panzerschrek 14 hours ago [-]
C++ has templates, which means that some meta-code generation needs to happen for arbitrary types. Doing so with an external tool is impossible.
jstimpfle 24 hours ago [-]
Actually, why even specify the metaprogram as C-like source code? It must be for convenience, but there is little practical use: a good program always models a lot of different representations of more or less the same things, just recombined and processed a little differently. Why would we want to deal with the semantics of C types, for example, if we can model a much clearer and better-constrained universe of types used in e.g. a de/serialization framework? Even pointers alone are quite special, and often only of very immediate use; there is no point in e.g. persisting them to disk or sending them over the network.
ironman1478 1 days ago [-]
Meta programming in C++ can enable you to remove lots of runtime branching in your code at the cost of binary size.
jmalicki 1 days ago [-]
This works for extreme needs.
But you're probably not doing a ton of metaprogramming all the time like you should be, and would with a language that allows it.
The lack of metaprogramming is also why C is so slow compared to C++
uecker 24 hours ago [-]
C is not slow compared to C++. C++ compilation times are slow though.
SuperV1234 10 hours ago [-]
This is a myth; C++ is not inherently slow to compile. It's the standard library that is very bloated and is the main culprit for slow compilation.
jstimpfle 8 hours ago [-]
Many C++ features are very slow to compile, especially templates.
A quick compiling C++ project is most likely extremely conservative in its use of C++ (vs C) features.
SuperV1234 7 hours ago [-]
That's just false. Templates are not slow to compile at all, and you can selectively pick TUs where they're instantiated.
My entire VRSFML codebase compiles from scratch in ~4s and I liberally use C++ features, I just avoid the Standard Library most of the time.
Templates are not inherently slow, people just don't know how to use them and don't know how to control instantiation.
Most people still think that templates have to go in header files, which is also just plainly false.
jstimpfle 6 hours ago [-]
Erm... that's not just false. The point of templates is generic programming, reusable components. If you don't put them in a header, you're not reusing them much. And if you have to "selectively pick TUs where they're instantiated", you're basically admitting that you have to invest effort to reduce compile times. You are refuting the very point you're making.
C++ templates _are_ slow to compile. They require running something like a dynamically typed VM in the compiler.
**** Template sets that took longest to instantiate:
833 ms: sf::base::Optional<$> (911 times, avg 0 ms)
Each individual instantiation of this class is sub 1ms.
Including the header itself takes 3ms.
I'm sure I can optimize it even further if I wanted to.
---
Now to refute your other incorrect claims:
> The point of templates is generic programming, reusable components.
That's ONE use case. A more general use case is just reducing code repetition in a type-safe manner, which is extremely useful even within the same translation unit. Another use case is metaprogramming. And I'm sure I can come up with more. Templates are a versatile tool.
> And if you have to "selectively pick TUs where they're instantiated", you're basically admitting that you have to invest effort to reduce compile times.
...well, yeah? Of course you have to put in effort to reduce compile times. That doesn't undermine my point at all.
C++ templates are not slow to compile.
jstimpfle 24 hours ago [-]
C is not slow compared to C++, that is a strange myth.
jmalicki 23 hours ago [-]
In practice, C means you end up with generic data structures holding pointers to what they contain, rather than storing it inline.
You do see a lot of macro use to deal with this, but that is just primitive, non-typesafe metaprogramming, and it gets unwieldy enough that in practice you see people add an extra pointer. This is why it gets slower.
uecker 14 hours ago [-]
In practice, I see people write very performant C code where it matters, while moving on quickly where it does not. C++ code is often highly templated with annoying compile times, but still often slow because it still does not use the right data structures, and the instruction bloat from specializing everything does not help anything that is not a toy benchmark.
jstimpfle 9 hours ago [-]
This 1000%, sorry for low calories comment.
jstimpfle 23 hours ago [-]
If you need callbacks and generics, you're not writing performance code.
99% of code in the wild is comically inefficient and is doing the wrong thing, using way too generic data structures and algorithms for very concrete problems. C++ templates may be one way to make comically slow code faster by spending a lot of compile time. But it's often much quicker to just write straightforward concrete code that the compiler can easily optimize.
IMO C++ makes for slow programs for the sole fact that it compiles so slow (if you use its modern features), so you have much less time to actually iterate and improve.
jmalicki 22 hours ago [-]
If compilation is even more than 10% of the time it takes you to run your tests, you're probably not writing correct code. Compilation times don't even measure.
jstimpfle 22 hours ago [-]
So every time you compile, you run your test suite? I don't. And you trust that I have experience writing and compiling programs too...?
It should be a goal to keep rebuild times around 1 second (often not quite possible, but 3-5 seconds, even for full rebuilds, is often realistic). I edit, compile, run, edit, compile, run. Editing and running can often take as little as 1-3 seconds, and I sometimes do it dozens of times working in a row, working on a single improvement. That's why there is a 1 second rebuild time goal.
In practice I often work on codebases I don't fully control, but when the build times are excessively high, I will complain and try to improve. Build times longer than 10-15 seconds break the flow, they are a significant productivity hit. But they are quite common with C++ codebases (it can also be bad with C codebases by the way, but C++ is typically much worse because of templates and metaprogramming which is very slow).
> Compilation times don't even measure.
You must be joking. Do you even program?
jmalicki 21 hours ago [-]
You run your code before running tests? IMO that's bad practice.
1 second, seriously? Even the Linux kernel is based on C, and it doesn't even have compilation times approaching that.
I guess I also work on a lot of big data projects, where getting results will take... 48 hours or so, so anything shorter than that is basically some sort of unit test or dry run... so in that context, compilation times do not even register on the things slowing me down.
jstimpfle 9 hours ago [-]
Running the code immediately after making changes is the first line of testing. To run a huge test suite full of tests that are completely unrelated to the current changes would be stupid, it's a huge waste of time and energy.
Yes, seriously, have you ever written a project from scratch? A simple .c file with a thousand lines in it should easily build and start within 100ms. A compiler should be able to do basic parsing and codegen at 1M lines per second per core.
If your runs take 48h, of course you need a strategy to avoid noticing bugs only after dozens of hours running. You can't tell me that it is efficient to make changes and to wait for minutes or even hours before noticing that your code wasn't even syntactically valid, or maybe it did compile but your code had a small oversight and you need to start over building.
The Linux kernel is a HUGE project, one of the biggest around. Yes, a full rebuild takes a long time, depending on configuration. Incremental rebuilds do not, though.
I'm actually working on a Linux kernel module (distributed filesystem client), it's on the order of 40 KLOC. I can do a full rebuild in 10/15 seconds (debug/release), and that includes calling into the kernel's infrastructure and doing a lot of stuff that shouldn't have to be done. An incremental rebuild after changing a single .c file is about 3 seconds. Restarting the module (swapping for the newly built one) takes less than 10 seconds also. And this can be already a stressful bottleneck depending on the task. Say you're improving logging in a particular section of code, this can easily require 5-10 attempts.
I'm working on Desktop GUIs (2D/3D) too. You need a quick turnaround time as much as possible. Many changes are trivial but you want to do many small incremental improvements, recompile, run and test (manually), often with a breakpoint on the code you're currently working on.
The projects I'm working on are written in C or conservative C++, and most have from thousands to hundreds of thousands lines of code. They can be built from scratch in a short amount of time (< 10s for the smaller ones). And all of them do incremental builds in <= 10 seconds except when maybe changing the most central headers which essentially means a full rebuild.
You can also design a C/C++ codebase to always do a full rebuild, compiling everything as a single unit. That can be faster than trying to do incremental builds, for codebases of considerable size. Try out the popular raddebugger project, a complete build after checkout is about 3 seconds. It's ~300 KLOC I think.
pandaman 1 days ago [-]
Writing a C++ parser is much harder than a C parser, to the point that for quite a while just three parsers were used among all C++ compilers. So you'd need to use some library for parsing. Now you are looking into the library parser's compatibility with the compiler you are using (it might not support the C++ standard you are on at all; it can have bugs preventing it from parsing code that the compiler parses just fine), and not just in your code but in the library headers you include. What are you going to do when cindex/libclang or whatever chokes on a libstdc++ header? You also have the issue of builtin macros: are they the same in your library parser? Most likely not. Good luck testing all that.
Two-stage compilation is just a bonus on top: you add a sequential dependency to your build graph, and if you have enough of these parsing programs you are going to wait until they are all built before your build can go wide.
24 hours ago [-]
psyclobe 1 days ago [-]
Absolutely spot on, easier, and way more effective
wat10000 1 days ago [-]
Why would I write a parser that almost-but-not-quite matches the compiler's own parser, when I could just use the compiler's parser directly? I don't want to write a parser, and I especially don't want to debug weird corner cases where my implementation diverges somehow. I just want to write some code that goes like, for each field in T, do X.
C++ metaprogramming is bad, but the problem there is the C++ part, not the metaprogramming-in-the-language part.
psyclobe 23 hours ago [-]
'Cause it's dead simple. Shell out, run a quick sed or something, then compile it in. It's quite amazing what 'magic meta' stuff you can do with that. Meanwhile, 10 years in, we are finally getting reflection...
SleepyMyroslav 1 hours ago [-]
C++ is often cross compiled from mostly identical sources.
Here is an example from Zig [1] that explains why it is not that simple.
I agree with some others in this thread: this example is not great, but I get why it was used: to compare with X-macros. How about something that would require code generation, e.g. via libclang?
My guess is: libclang is more suited for this situation if you care about compile times, even if Python is used.
Panzerschrek 14 hours ago [-]
It's misleading to call it "cost". In the C++ world only runtime cost matters. If using reflection allows you to generate faster code, it doesn't matter how long it takes to compile.
Moldoteck 8 hours ago [-]
Our company doesn't do compile-on-push on the server. It only compiles when approved by a subset of people. The reason is we have a limited number of servers and a compile takes about 40 min/variation. It's very annoying considering at my previous job a compile took about 10 min in total (the project was organized better + better servers) and there wasn't a limit at all -> compile on each push to Gerrit.
I'm now trying to migrate from msbuild to cmake + sccache + PCH for the standard library headers, while also trimming unnecessary includes to reduce suffering in the future - if not for me then at least for future developers. So I would say compile time is important for development. It causes other limitations too (like bugfixing becoming one huge commit with several fixes squashed together to avoid recompiles, messing up git history, or slower context switching when developing several features in parallel).
pjmlp 14 hours ago [-]
It has a direct impact on the amount of emails and slack messages I get to reply to.
SuperV1234 10 hours ago [-]
Utter BS. Compilation times matter for productivity, developer motivation, iteration speed, CI turnaround time, and so on.
I'm sure you wouldn't say "it doesn't matter how long it takes to compile" if it took days. So where do you draw the line? Regardless, it matters.
Panzerschrek 9 hours ago [-]
Even days of compilation may be an acceptable price for good optimization, as long as debug builds or builds with minimal optimizations are fast enough.
drzaiusx11 17 hours ago [-]
No surprise here that the macro + char* approach wins hands down. I'm not really an active C++ user, but I did use a VERY similar trick in my custom C code generator DSL (written in Ruby) just this week. Easy and no "magic" involved.
mentos 1 days ago [-]
Curious to see if Epic Games ever refactors their reflection in Unreal Engine to use C++26 reflection or not.
LugosFergus 19 hours ago [-]
That'll never happen. The engine's entire serialization system is built around their custom reflection layer and UHT. Not to mention how this would affect licensees. PLUS, they just laid off a bunch of people, and the leftovers are focused on Tim's Verse fiasco. I hate to use jargon here, but there's no "business value" to switching.
EDIT: and based on these compilation time results, this would be a major setback for building the engine, which already takes an eternity.
mentos 9 hours ago [-]
Yea from my discussion/research with ChatGPT it seems compilation times would suffer.
dataflow 1 days ago [-]
I don't see how a library like Enchantum could handle everything reflection does. (How) does it figure out duplicate enum values, for example? And (how) does it discover arbitrarily large, discontiguous ranges? And (how) does it do these on MSVC?
SuperV1234 1 days ago [-]
In short, it probes enum values in a pre-defined range (e.g. [-256; 256]), and parses the `__PRETTY_FUNCTION__` macro at compile-time to extract the name of the enumerator.
Once you have that in place, you can easily detect duplicates, etc...
but one could also make it even more compact if one cared.
spacechild1 9 hours ago [-]
That doesn't look any better.
Yes, X-macros have the best compile times, but you can't possibly argue that they are elegant to use compared to the alternatives.
uecker 7 hours ago [-]
It looks better to me than the other macro solution as it is more transparent what is done compared to DEFINE_ENUM. But I agree it is not as succinct as C++'s reflection syntax.
spacechild1 1 hours ago [-]
> It looks better to me than the other macro solution as it is more transparent what is done compared to DEFINE_ENUM.
Fair enough.
SuperV1234 23 hours ago [-]
To be honest, there are ways to make that much nicer. I believe that if you use recursive macros with the __VA_OPT__ feature, you should be able to provide enumerators directly to DEFINE_ENUM as a list.
The underlying machinery implementation is going to be much uglier and complex, though.
Oh, I didn't know about __VA_OPT__(), thanks for that!
That looks much nicer indeed, but I still vastly prefer the other solutions, simply because I can just declare regular enums.
psyclobe 23 hours ago [-]
They are gross but... effective so shrug
psyclobe 23 hours ago [-]
Pretty much. Was hoping it would've been a 'reflection slam dunk' but no... same 'ol same 'ol.
psyclobe 1 days ago [-]
Man that sucks, I was looking forward to some kind of speed improvement. Using magic_enum atm and I guess we'll continue to do so.
C++ build times are a hard pill to swallow when migrating from C. This is just another reason we'll probably stick to writing C at the company where I work. It's like asking someone to give up instant compilation for cleaner, easier-to-read code?
Also, now that we have cleanup handlers in C (destructors), even less of a reason to move...
TZubiri 1 days ago [-]
"Enum to string"
We've come full circle huh?
Why do you need this, logging? In that case I would rather reflect the logging statement to print any variable name, or hell, just write out the string.
If saving to a db, maybe store it as a string; there's more incentive for an enum in the db, but if it's a string you might as well. At any rate, it doesn't seem like a great idea to depend on a variable name; imagine changing a variable name and stuff breaks.
SuperV1234 1 days ago [-]
Logging, debugging, auto-generation of UIs/editors, etc... This is an extremely common operation and for a good reason.
So speaking of old ways, I'm not a C++ dev, but a while ago saw someone comment that they still organize their C++ projects using tips from John Lakos' Large-scale C++ software design from 1997, and that their compile times are incredibly fast. So I decided to find a digital copy on the high seas and read it out of historical curiosity. While I didn't finish it, one wild thing stood out to me: he advised using redundant external include guards around every include, e.g.
The reason for this being that (in 1997) every include required that the preprocessor open the file just to check for an include guard and read it all the way to the end to find the closing #endif, causing potentially O(N*2) disk read overhead (if anyone feels like verifying this, it's explained on pages 85 to 87).
Again, that was in 1997. I have no idea what mitigations for this problem exist in compilers by now, but I hope at least a few, right?
This conclusion is making me wonder if following that advice still would have a positive impact on compile times today after all though. Surely not, right? Can anyone more knowledgeable about this comment on that?
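The code snippet appears to have been stripped from the comment above; the Lakos pattern in question is roughly this (hypothetical `widget.h` names), with every client wrapping the `#include` in the header's own guard so the preprocessor never has to reopen the file:

```cpp
// widget.h -- the usual *internal* include guard:
#ifndef INCLUDED_WIDGET_H
#define INCLUDED_WIDGET_H
// ... declarations ...
#endif

// client.cpp -- Lakos' redundant *external* guard around every include:
#ifndef INCLUDED_WIDGET_H
#include "widget.h"
#endif
```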
You can also use `#pragma once` which works everywhere, is nicer, and technically needs less work by the compiler, but compilers have optimized for include guards since a long time ago.
Some random measurements I found: https://github.com/Return-To-The-Roots/s25client/issues/1073
> at least for gcc and Visual Studio using #pragma once has a significant impact. The fact is, the compiler does not need to continue parsing the whole file when reaching a #pragma once. otherwise the compiler always needs to do it even if the include guard afterwards will avoid double processing of the content afterwards.
As written, the explanation for these optimizations suggests that both `#pragma once` and the include guard optimization still require opening and closing the file each time an include is encountered, even if you bail after parsing the first line. Is that overhead zero? Or are the optimizations explained poorly, and is repeatedly opening/closing the file also avoided?
Either way, do you know what causes the slowdown as a result of including <meta>?
My understanding is that this is an optimization that has been available for a very long time now.
The only issue is if a file is referred through multiple names (because of hard links, symlinks, mounts). That might cause the file to be opened again, and can actually break pragma once.
I'm going to experiment with other compilers and figure out how they handle it.
When I saw the 'no boilerplate' example, the very first thought that came to my mind:
This is the ugliest, most cryptic and confusing piece of code I've ever seen. Calling this 'no boilerplate' is an insult to the word 'boilerplate'.
Yeah, I can parse it for a minute or two and I mostly get it.
But if given the choice, I'd choose the C-macro implementation (which is 30+ years old) over this, every time. Or the good old switch case where I understand what's going on.
I understand that reflection is a powerful capability for C++, but the template-meta-cryptic-insanity is just too much to invite me back to this version of the language.
I played around with cppfront over Christmas and it was a lot more ergonomic than my distant memories of C++11, which I don't even have negative memories of per se.
[0] https://github.com/hsutter/cppfront
It is no different from any other language that compiles via C or C++ code generation, it got sold a bit differently due to his former position at WG21.
But I do think it is different from other "compile to C++" languages, because it seems to be more of a personal case study for Sutter to figure out various reflection and metaprogramming features, and then "backport" those worked-out ideas to regular C++ via proposals. And the latter don't have to match the CPP2 syntax at all.
In multiple examples he's given in talks the resulting "regular" C++ code is easier to read, mainly because the metaprogramming deals with so much boilerplate.
[0] https://www.youtube.com/watch?v=8U3hl8XMm8c
TypeScript is a linter, nothing else: type annotations for JavaScript. The two features that aren't present in JavaScript, enums and namespaces, are considered design mistakes, and the team vowed to focus only on being a linter, plus a polyfill for older runtimes when possible (some JS features require runtime support).
While Kotlin emits JVM bytecode, many language constructs, like coroutines, make interop one-way: it is easy to call Java from Kotlin, while the other way around requires boilerplate code manipulating the additional classes generated by the Kotlin compiler for its semantics.
Like, yeah, what you say about TS and Kotlin is true about TS and Kotlin. But since you're not explaining what cpp2 does or plans to do differently, and why it matters, I'm not sure where you're going with that. It's probably obvious but I'm not getting it.
The metaphor Sutter was going for, as I see it, is that TS and Kotlin both added missing features to their host language. Most importantly reflection and decorators in TS, which are now becoming a standard in JS as well[0]. cpp2 mainly focuses on experimenting with reflection and metaprogramming as well, adding features currently missing in C++ by being a compiles-to-C++ language. Sutter has written C++ proposals that would give C++ similar reflection and metaprogramming capabilities, based on what he discovered by working on cpp2. That's pretty comparable if you ask me.
[0] https://github.com/microsoft/reflect-metadata
Why? The implementation is not pretty, but you only need to write it once and then it works for all enums. The actual usage is trivial, it's just a function call.
The C macro version is horrendous in comparison. Why would I want to declare my enums like that just because I might want to print them?
Seeing this argumentation is so tiresome, because it feels like there is a lack of self-awareness regarding what is "familiar" and what isn't, which is subconsciously translated to "ugly" and "bad".
In a lot of languages, you achieve the same with 1 line of code. It's not about familiarity, it's about the fact that it's a long and convoluted incantation to get the name of an enum.
Why do I have to be familiar with all those weird symbols just to do a trivial thing ?
Update:
Zig:
const Color = enum { red, green, blue };
const name = @tagName(Color.red); // "red"
Rust:
#[derive(strum::Display)] // the Display derive comes from the strum crate, not std
enum Color { Red, Green, Blue }
let name = Color::Red.to_string(); // "Red"
Clojure:
(name :red) => "red"
As far as I understand, you would have to mess with individual parser tokens in Rust instead of high-level structures like "enum" (as in C++ reflection). It would be much, much uglier to implement anything like "to_enum_string" in Rust, as you would have to re-implement parts of the compiler to get the "enum" concept out of a list of tokens.
You won't have to care about ^^ and [:X:] if you just want to consume reflection-based utils, which was the whole point of my comment.
> Why do I have to be familiar with all those weird symbols just to do a trivial thing ?
And my answer demonstrates that you do not have to.
Then again - "where does that `to_enum_string` come from exactly?".
How many libraries do you read the source code after installing them with the package manager?
Can you quote the C++ standard section that specifically talks about the `to_enum_string` function?
It is in the same place as left-pad in ECMA-262.
First of all, the only correct way to use package managers is with validated internal repos; don't vibe-install. That goes for node, and it goes for C++ as well.
Second, this thread was all about how code lands on one's computer.
See wg21.link/P3491
It seems that this is being worked on, and eventually the `define_static_array` won't be needed anymore
So it is as it is; plenty of software in C++ isn't going to be rewritten into something else.
Maybe someone can do a Claude rewrite from LLVM into something else. /s
And `template for`, but I assume that's like `inline for` in Zig.
Not familiar with Zig but AFAICT `inline for` is about instructing the compiler to unroll the loop, whereas `template for` means it can be evaluated at compile time and each loop iteration can have a different type for the iteration variable. It's a bit crazy but necessary for reflection to work usefully in the way the language sets it up.
https://ziglang.org/documentation/master/#inline-for
A for loop executed during comptime is just
The difference is that a comptime block won't leave behind runnable 'residue', only whatever data is constructed for later. An inline for might not leave behind an unrolled loop either, but it can.

Regardless, I don't think things are going to differ much with Clang. Without PCH/modules, standard header inclusion is still the "slow part" of C++ compilation, regardless of the compiler used and the standard library used (libstdc++ vs libc++). `#include` is fundamentally the same on any modern compiler.
Because the reflection feature itself seems quite fast on GCC (compared to the cost of the header), I predict the results will be similar on Clang as well.
Promises and claims have been made for longer than that about how Modules would improve compilation times and make everyone's lives easier. In 2026, I have yet to see any real evidence of that, especially when PCH + unity builds are much easier to use (except on damn Bazel, which supports neither) and deliver great results.
If after 6+ years of development Modules are still so far behind, it is fair to question if the problem is with the design/implementability of the feature itself.
The module story is just insane. How was it possible to get such a big feature into the standard without any working reference implementation? Isn't this the requirement for standard proposals to get accepted? If I compare this with how they treated JeanHeyd and his #embed proposal, the difference is staggering. To me it seems like a few powerful comittee members wanted to get modules into C++20 at any cost. This was just irresponsible.
Maybe you forget Hacker News of 10 years ago, but in 2015-2016, everyone was complaining C++ doesn't have modules and how awful it must be because they're not modules. Now that C++ has modules, they're complaining about how it has modules.
People are not complaining about the fact that C++ has modules, but about their usability and effectiveness. The compile time benefits seem modest and I have seen reports that it breaks Intellisense. (Maybe that's not true anymore?)
As Vittorio said, if it takes compiler vendors so long to implement them properly, maybe the design wasn't that good after all?
My point was: if you add such a big feature, shouldn't the standard require a sufficiently complete implementation? Otherwise, how can they assess whether the proposal actually works in practice and lives up to its promises?
In practice both clang and VS have had some form of module support for quite a while, but the final standard ended up being different from either implementation (shaped by their experience, and certainly with inevitable last minute inventions).
I wonder if for some features the committee should vote on general guidelines, then delegate a third party (one or more implementors) to come up with both an implementation and standardese, with the understanding that it will be fast-tracked without too much bike-shedding.
I have heard rumors that certain people in the Visual Studio team have exaggerated the state of their modules implementation to speedrun the standardization process. I have no idea if that is really true, but it would explain a lot of things...
I'm not the only one who is asking these questions:
> I don’t know if they exaggerated their claims at the time, or if they didn’t properly fund the Visual Studio team since or what, but you can’t tell me 8 years wasn’t enough to make syntax highlighting work with modules. And if it is, then maybe there was something deeply wrong in their proposal and the committee should have asked to see the receipts before voting yes.
https://mropert.github.io/2026/04/13/modules_in_2026/
But there is also good news: with JIT-like components for compile-time evaluation in progress, and the likes of CLion shipping the beginnings of a compile-time debugger, in combination with concepts there is a chance some help is available and on the way.
However, right now you have to rely on compiler errors and static_asserts, which is not ideal of course.
In practice, I haven't really needed to ever debug `consteval` functions -- it's quite easy to get the right behavior down thanks to `static_assert`-based testing and thanks to the fact that they do not depend on external state (simpler).
For one thing they are required to disallow all undefined behavior for compile time execution, and some forms of UB only occur when the code is run.
I never felt the need for them when doing TDD.
Casey has been talking about this some time ago: https://www.youtube.com/watch?v=UzD_Ze6zFKA
Also, John Carmack's perspective: https://www.youtube.com/shorts/PRE51epznT8
Typically, I am given an ancient code base full of bad decisions, hard-to-read code, and no tests in sight. Sometimes there are assertions, if I am lucky. It's impractical to create a reliable test suite, or to rewrite everything from scratch.
Here, I heavily rely on a debugger just to make sense of the code. Sure, I'd wish all of this code were sparkling clean, easy to read, free of UB, etc. But that's not the reality I work in, and a good debugger is my number one tool for getting the job done.
And don't even get me started on dealing with closed source implementations where all you could read is disassembly.
(The link above shows ImGui generation, but the same exact logic can be applied for serialization to JSON/YAML/whatever.)
> The magic sauce? Boost.PFR! An incredibly clever library that enables reflections on aggregates, even in C++17.
That's not vanilla C++!
A guiding principle of C++ is that if something can be implemented cleanly and efficiently in a library, the language should not be extended to support the use case.
Now boost.pfr is exceedingly clever, but relying on speculative pack expansions or using stateful metaprogramming hacks is not something I would call clean and efficient, so proper reflection is warranted.
I do worry about the compile time impact though.
PFR has given us reflection since C++14.
I also don't think the Standard Library is particularly well-defined nor well-implemented, as demonstrated by the atrocious compilation times.
That is, if you are worried about doing this by hand, reflection is not the answer; something like protobuf, where your data structures are generated, is the answer.
Almost all the Java web frameworks are giant balls of reflection. Name a function the right way or add the right magic annotation and the framework will autowire it correctly.
It's a pretty powerful tool. (IDK if C++'s reflection is as capable, but this is what was enabled by java's reflection).
Yes, originally they only supported runtime reflection.
Nowadays they have compile time tooling as well, via plugins, annotation processors, and code generators.
Which is exactly how you can have a Spring like frameworks that do all the AOP magic at compile time, for native code with GraalVM or OpenJ9, like Quarkus or Micronaut.
I find this to be very powerful, and also very unintuitive/undiscoverable at the same time.
Most frameworks in Java are very similar. The ones that aren't are effectively doing what "expressjs" does in terms of setup, which is still pretty discoverable.
Most java frameworks rely on annotations rather than naming schemes which makes everything a lot easier to grok.
My favorite thing is that I will get to remove and replace most of the cryptic template recursion stuff I have with "template for" and maybe a bit of reflection. Debugging the unrolled stuff will be a joy in comparison.
It would be cool if the stated goal of C++29 was compile times.
For many useful use cases, you don't need C++26 reflection at all. E.g. https://www.linkedin.com/posts/vittorioromeo_cpp-gamedev-ref...
I program mostly in C. If I need 'meta' programming I just write another C program that processes C source code (I've written a simple C parser), then in my build script I build in two stages: build the meta program, run it, build the rest of the program.
Simple, effective, debuggable (the meta program is just normal C), infinite capabilities: you can nest this to arbitrary depths. Need meta-meta programming? Write a program that generates a meta program.
Without taking a stance on whether in-language meta programming facilities are good or bad, it’s not hard to find examples of cases where people find it useful to have them.
But you're probably not doing a ton of metaprogramming all the time like you should be, and would be doing with a language that allows it.
The lack of metaprogramming is also why C is so slow compared to C++
A quick compiling C++ project is most likely extremely conservative in its use of C++ (vs C) features.
My entire VRSFML codebase compiles from scratch in ~4s and I liberally use C++ features, I just avoid the Standard Library most of the time.
Templates are not inherently slow, people just don't know how to use them and don't know how to control instantiation.
Most people still think that templates have to go in header files, which is also just plainly false.
C++ templates _are_ slow to compile. They require running something like a dynamically typed VM in the compiler.
This is my `sf::base::Optional<T>` template class, a lightweight replacement for `std::optional` with same semantics: https://github.com/vittorioromeo/VRSFML/blob/master/include/...
This is what ClangBuildAnalyzer reports:
Each individual instantiation of this class is sub-1 ms. Including the header itself takes 3 ms. I'm sure I can optimize it even further if I wanted to.
---
Now to refute your other incorrect claims:
> The point of templates is generic programming, reusable components.
That's ONE use case. A more general use case is just reducing code repetition in a type-safe manner, which is extremely useful even within the same translation unit. Another use case is metaprogramming. And I'm sure I can come up with more. Templates are a versatile tool.
> And if you have to "selectively pick TUs where they're instantiated", you're basically admitting that you have to invest effort to reduce compile times.
...well, yeah? Of course you have to put in effort to reduce compile times. That doesn't undermine my point at all.
C++ templates are not slow to compile.
You do see a lot of macro use to deal with this, but that is just primitive, non-typesafe metaprogramming, and it gets unwieldy enough that in practice people fall back to an extra pointer indirection instead. This is why it gets slower.
99% of code in the wild is comically inefficient and is doing the wrong thing, using way too generic data structures and algorithms for very concrete problems. C++ templates may be one way to make comically slow code faster by spending a lot of compile time. But it's often much quicker to just write straightforward concrete code that the compiler can easily optimize.
IMO C++ makes for slow programs for the sole fact that it compiles so slowly (if you use its modern features), so you have much less time to actually iterate and improve.
It should be a goal to keep rebuild times around 1 second (often not quite possible, but 3-5 seconds, even for full rebuilds, is often realistic). I edit, compile, run, edit, compile, run. Editing and running can often take as little as 1-3 seconds, and I sometimes do it dozens of times in a row while working on a single improvement. That's why the 1 second rebuild goal.
In practice I often work on codebases I don't fully control, but when the build times are excessively high, I will complain and try to improve. Build times longer than 10-15 seconds break the flow, they are a significant productivity hit. But they are quite common with C++ codebases (it can also be bad with C codebases by the way, but C++ is typically much worse because of templates and metaprogramming which is very slow).
> Compilation times don't even measure.
You must be joking. Do you even program?
1 second, seriously? Even the Linux kernel is written in C, and its compilation times don't come anywhere near that.
I guess I also work on a lot of big data projects, where getting results will take... 48 hours or so, so anything shorter than that is basically some sort of unit test or dry run... so in that context, compilation times do not even register on the things slowing me down.
Yes, seriously, have you ever written a project from scratch? A simple .c file with a thousand lines in it should easily build and start within 100ms. A compiler should be able to do basic parsing and codegen at 1M lines per core.
If your runs take 48h, of course you need a strategy to avoid noticing bugs only after dozens of hours running. You can't tell me that it is efficient to make changes and to wait for minutes or even hours before noticing that your code wasn't even syntactically valid, or maybe it did compile but your code had a small oversight and you need to start over building.
The Linux kernel is a HUGE project, one of the biggest around. Yes, a full rebuild takes a long time, depending on configuration. Incremental rebuilds do not, though.
I'm actually working on a Linux kernel module (distributed filesystem client), it's on the order of 40 KLOC. I can do a full rebuild in 10/15 seconds (debug/release), and that includes calling into the kernel's infrastructure and doing a lot of stuff that shouldn't have to be done. An incremental rebuild after changing a single .c file is about 3 seconds. Restarting the module (swapping for the newly built one) takes less than 10 seconds also. And this can be already a stressful bottleneck depending on the task. Say you're improving logging in a particular section of code, this can easily require 5-10 attempts.
I'm working on Desktop GUIs (2D/3D) too. You need a quick turnaround time as much as possible. Many changes are trivial but you want to do many small incremental improvements, recompile, run and test (manually), often with a breakpoint on the code you're currently working on.
The projects I'm working on are written in C or conservative C++, and most have from thousands to hundreds of thousands of lines of code. They can be built from scratch in a short amount of time (< 10s for the smaller ones). And all of them do incremental builds in <= 10 seconds, except when changing the most central headers, which essentially means a full rebuild.
You can also design a C/C++ codebase to always do a full rebuild, compiling everything as a single unit. That can be faster than trying to do incremental builds, for codebases of considerable size. Try out the popular raddebugger project, a complete build after checkout is about 3 seconds. It's ~300 KLOC I think.
Two-stage compilation is just a bonus on top: you add a sequential dependency to your build graph, and if you have enough of these generator programs you are going to wait until they are all built before your build can go wide.
C++ metaprogramming is bad, but the problem there is the C++ part, not the metaprogramming-in-the-language part.
1. https://matklad.github.io/2025/04/19/things-zig-comptime-won...
For example, what does https://miguelmartin.com/blog/nim2-review#implementing-a-sim... look like with C++26's std::meta::info?
My guess is: libclang is more suited for this situation if you care about compile times, even if Python is used.
I'm now trying to migrate from MSBuild to CMake + sccache + PCH for the standard library headers, while also trimming unnecessary includes, to reduce suffering in the future, if not for me then at least for future developers. So I would say compile time is important for development. It causes other limitations too (like bugfixing becoming a huge commit with several fixes squashed together to avoid recompiles, messing up git history, or slower context switching when developing several features in parallel).
I'm sure you wouldn't say "it doesn't matter how long it takes to compile" it if took days. So where do you draw the line? Regardless, it matters.
EDIT: and based on these compilation time results, this would be a major setback for building the engine, which already takes an eternity.
Once you have that in place, you can easily detect duplicates, etc...
Of course, there are major limitations, as it's all a big hack: https://github.com/ZXShady/enchantum/blob/main/docs/limitati...
Similarly interesting is Boost.PFR, which gives you reflection superpowers since C++14: https://github.com/boostorg/pfr
That's the essence of C++: you're basically trading ergonomics for compile times.
Yes, xmacros have the best compile times, but you can't possibly argue that they are elegant to use compared to the alternatives.
Fair enough.
The underlying machinery implementation is going to be much uglier and complex, though.
See https://www.scs.stanford.edu/~dm/blog/va-opt.html
That looks much nicer indeed, but I still vastly prefer the other solutions, simply because I can just declare regular enums.
C++ build times are a hard pill to swallow when migrating from C. This is just another reason we'll probably stick to writing C at the company where I work. It's like asking someone to give up instant compilation for cleaner, easier-to-read code.
Also, now that we have cleanup handlers in C (destructors, essentially), there's even less reason to move...
We've come full circle huh?
Why do you need this, logging? In that case I would rather reflect the logging statement to print any variable name, or hell, just write out the string.
If saving to a DB, maybe store it as a string; if it's a string in the DB anyway, you might as well. At any rate it doesn't seem a great idea to depend on a variable name: imagine changing a variable name and stuff breaks.