Someone explained on the issue they filed why Clone is "broken": https://github.com/JelteF/derive_more/issues/490#issuecommen...
Which links to this blog post explaining the choice in more detail: https://smallcultfollowing.com/babysteps/blog/2022/04/12/imp...
I have a crate with a "perfect" derive macro that generates where clauses from the fields instead of putting them on the generic parameters. It is nice when it works, but yeah, cyclical trait matching is still a real problem. I wound up needing an attribute to manually override the bounds whenever they blow up: https://docs.rs/inpt/latest/inpt/#bounds
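Roughly, the difference between the two strategies looks like this (a sketch, not the exact expansion either macro produces; Pair is a made-up type):

    use std::sync::Arc;

    struct Pair<T> {
        shared: Arc<T>,
        items: Vec<T>,
    }

    // std's derive puts the bound on the parameter:
    //     impl<T: Clone> Clone for Pair<T> { ... }
    // A field-based ("perfect") derive bounds the field types instead:
    impl<T> Clone for Pair<T>
    where
        Arc<T>: Clone, // always holds
        Vec<T>: Clone, // holds iff T: Clone
    {
        fn clone(&self) -> Self {
            Pair { shared: self.shared.clone(), items: self.items.clone() }
        }
    }

    fn main() {
        let p = Pair { shared: Arc::new(1u8), items: vec![2u8] };
        let _q = p.clone();
    }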
I did a similar thing for derive-io. It greatly improved the ergonomics of the macro.
https://docs.rs/derive-io/latest/derive_io/
Being able to handle directly recursive type bounds would be an awesome improvement to the compiler, IMO.
from Niko's post:
> In the past, we were blocked for technical reasons from expanding implied bounds and supporting perfect derive, but I believe we have resolved those issues. So now we have to think a bit about semver and decide how much explicit we want to be.
Automatically deriving Clone is a convenience. You can and should write your own implementation of Clone whenever the automatically derived implementation is insufficient.
But this issue makes it confusing & surprising when an automatically derived clone is sufficient and when it's not. It's a silly extra rule that you have to memorise.
By the way, this issue also affects all of the other derivable traits in std - including PartialEq, Debug and others. Manually implementing all this stuff - especially Debug - is needless pain. Especially as your structs change and you need to (or forget to) maintain all this stuff.
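To illustrate the maintenance hazard with a made-up example:

    #[derive(PartialEq)] // stays correct as fields come and go
    struct Point {
        x: i32,
        y: i32,
    }

    struct Point3 {
        x: i32,
        y: i32,
        z: i32, // added later...
    }

    impl PartialEq for Point3 {
        fn eq(&self, other: &Self) -> bool {
            // ...but nobody updated this impl, so equality silently ignores z.
            self.x == other.x && self.y == other.y
        }
    }

    fn main() {
        assert!(Point { x: 1, y: 2 } == Point { x: 1, y: 2 });
        // These differ in z but compare equal -- the stale-impl bug:
        assert!(Point3 { x: 1, y: 2, z: 3 } == Point3 { x: 1, y: 2, z: 4 });
    }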
Elegant software is measured in the number of lines of code you didn't need to write.
> Elegant software is measured in the number of lines of code you didn't need to write.
Strong disagree. Elegant software is easy to understand when read, without extraneous design elements, but can easily have greater or fewer lines of code than an inelegant solution.
Sure, I generally agree. But you must admit, that description loses a bit of poetry.
I think software - like mathematics - becomes more elegant & beautiful when we find ways to simplify it without making it worse in other ways. Minimising the number of lines of code isn’t the goal. But sometimes making something smaller also makes it easier to understand and sometimes even more powerful at the same time. Those are the days I live for.
Alan Kay says the right point of view is worth 50 IQ points. There are plenty of examples of this in software. Innovations in ways of structuring our programs that make them at once simpler and more beautiful. All without sacrificing anything important in the process. We take them for granted now, but innovation is rarely obvious from the outset. Compilers and high level languages. Sum types. Operating systems. Maybe SQL and ECS. Sinatra’s http handler API. And so on.
It's surprising up to the moment the compilation error tells you that all of the members have to implement the derived trait.
Nevertheless, it would be cool to be able to add #[noderive(Trait)] or something to a field so it's not included in the automatic trait implementation. Especially since foreign types sometimes don't implement certain traits and one has to write lots of boilerplate just to ignore fields of those types.
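For comparison, the manual boilerplate today for one non-Debug foreign field looks something like this (FfiHandle is a stand-in name):

    use std::fmt;

    struct FfiHandle; // stand-in for a foreign type without Debug

    struct Conn {
        id: u32,
        handle: FfiHandle,
    }

    impl fmt::Debug for Conn {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            f.debug_struct("Conn")
                .field("id", &self.id)
                .finish_non_exhaustive() // the ".." in the output stands in for the skipped field
        }
    }

    fn main() {
        println!("{:?}", Conn { id: 7, handle: FfiHandle }); // Conn { id: 7, .. }
    }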
I know of the Derivative crate [1], but it's yet another dependency in an increasingly NPM-like dependency tree of a modern Rust project.
All in all, I resort to manual trait implementations when needed, just as GP.
> It's surprising up to the moment the compilation error tells you …
Unfortunately this problem only shows up when you're combining derive with certain generic parameters for the first time. The first time I saw this, I thought the mistake was mine. It was so surprising and confusing that it took half an hour to figure out what the problem was. I thought it was a compiler bug for a while and went to file it on the rust project - only to find lots of people had beat me to it.
Aside from anything else, it’d be great if rust had better error messages when you run into this issue.
It took you 30 minutes to understand what "could not call Clone::clone because <type> does not satisfy Clone" means? The error message tells you exactly what the problem is and how to fix it.
This is a pet peeve of mine so I'm sorry to be overly dismissive. There are bad error messages out there in the world that are impossible to parse, but this is not one of them. Trying to file a github issue before attempting to understand the error message is insane to me.
Yes it took me 30 minutes. The error message in this case is uncharacteristically bad. Or I found it particularly confusing because of quirks in my understanding of rust’s type system.
Take a look. Do you think this quirk of derive is obvious from the error message alone? Would you have figured it out, in the context of a much more complex program with dozens of other changes that may or may not have been related?
https://play.rust-lang.org/?version=stable&mode=debug&editio...
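(The link is truncated, but the repro was essentially this shape:)

    use std::sync::Arc;

    struct NotClone;

    #[derive(Clone)]
    struct Foo<T> {
        inner: Arc<T>,
    }

    fn main() {
        let foo = Foo { inner: Arc::new(NotClone) };
        // Everything compiles until this call:
        let _copy = foo.clone();
        // error: the method `clone` exists for struct `Foo<NotClone>`,
        // but its trait bounds were not satisfied (`NotClone: Clone`)
    }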
The compiler error says my type didn't implement clone. But #[derive(Clone)] was right there on my type, and my type was obviously cloneable. The error wasn't on the derive - or anywhere nearby. My program even compiled successfully, right up until I tried to actually call clone() on my object. And my type trivially, obviously was cloneable because all the fields were cloneable.
My first thought at the time was that my derive attribute was being ignored by the compiler somehow, but it wasn’t at all obvious why. The compiler’s suggested fix was also wrong and misleading. It suggests adding clone to an irrelevant type that shouldn’t impl clone in the first place. That’s the wrong fix.
In summary, the error message doesn't identify the real problem, which is that the trait bounds added by derive Clone were wrong. And it didn't suggest the proper fix - which was to impl clone manually.
I was very confused for a good while with this one. Get pissy if you want but I find this incredibly counterintuitive and I found the compiler’s error message to be uncharacteristically unhelpful.
If this quirk of derive was truly obvious, this blog post wouldn’t have hit the front page of HN in the first place. I cut my hand on one of rust’s sharp edges. Don’t get mad at me for bleeding a little.
The error message points you to the inner type and tells you to implement clone for it. Like you point out, it's sometimes not the correct fix - but the compiler can't be smart enough to tell you that. I'm actually at a loss for how this is anything but obvious if you know the language and type system. It's not magic!
My pet peeve is the learned helplessness around compiler error messages, particularly ones that go to great lengths to be informative and offer solutions instead of just throwing garbage at you.
> it's sometimes not the correct fix - but the compiler can't be smart enough to tell you that
Everyone said the same thing about most compiler errors until clang came along and showed everyone how much better compiler messages could be.
The compiler message here doesn't suggest the true problem, nor does it suggest the appropriate fix. I'm also clearly not the only one who finds this issue surprising and annoying. This blog post wouldn't have been upvoted and commented on if it didn't strike a nerve.
If nothing else, I think it’s quite obvious that the compiler’s error message could be improved. But better yet, I’d love to see it fixed in the language. The bounds derive places on its traits are simply wrong. There’s no reason why T needs to impl Clone. T is irrelevant.
For what it’s worth, I get what you’re frustrated about. I spent 2 years as a programming teacher. I feel like I spent the first month of every cohort wandering around telling upset students that they forgot a semicolon, and the third month wandering around guiding frustrated students to read the compiler error message, which would have saved them 2 hours of guessing.
After banging my head into a wall for a while on this issue, I made a reduced reproducing test case. Then I went looking in the rust compiler issue tracker. Only there did I finally see a proper explanation of what was going on, amidst a sea of other people struggling with the same thing.
That was half an hour of my life I won’t get back. I’m not helpless. But I am still pretty frustrated by the experience. I see it as a language level bug. It seems to take years and years of bike shedding for rust to fix stuff like this. I’d give even odds this still isn’t fixed a decade from today.
But maybe, at least, we might be able to improve the error message?
> Aside from anything else, it’d be great if rust had better error messages when you run into this issue.
Would you mind filing a ticket detailing what you wish the error had been? Without additional context, the only improvement I can think of is adding a note explaining imperfect derives when hitting a missing trait bound coming from a local crate derived impl.
I mentioned in a sibling comment. The error message doesn’t explain or suggest what the problem is, and it recommends the wrong fix. (It suggests implementing clone for T, whereas here you need to manually implement clone).
Something like this would have helped me immensely:
> Note: even though struct Foo<T> has derive(Clone), Foo<T> does not implement Clone in this case. derive(Clone) may have generated overly restrictive trait bounds (impl<T> Clone for Foo<T> where T: Clone). If this is the case, you may need to manually implement Clone for Foo<T> with less restrictive trait bounds:
    impl<T> Clone for Foo<T> { fn clone(&self) …
Would you mind filing a ticket on GitHub.com/rust-lang/rust with that exact request? (I'm on the go and am not logged on GitHub on this device and wouldn't want this feedback to be lost). This should be relatively easy to add and I agree it would be an improvement.
Sure - will do.
Apparently, Derivative is unmaintained [1], but there are Derive_more [2], Educe [3] and Derive-where [4], if anyone is interested.
[1] https://rustsec.org/advisories/RUSTSEC-2024-0388.html
[2] https://crates.io/crates/derive_more
[3] https://crates.io/crates/educe
[4] https://crates.io/crates/derive-where
The same is true for memory allocation. But I do not think it makes sense that everybody has to write memory allocators from scratch because a few special cases require it.
I disagree; elegant software is explicit. Tbh I wouldn't mind if we got rid of derives tomorrow. Given the ability of LLMs to generate and maintain all that boilerplate for you, I don't see a reason for having "barely enough smarts" heuristic solutions to this.
I'd rather have a simple and explicit language with a bit more typing than a Perl that tries to include 10,000 convenience hacks.
(Something like Uiua is ok too, but their tacitness comes from simplicity not convenience.)
Debug is a great example for this. Is derived debug convenient? Sure. Does it produce good error messages? No. How could it? Only you know what fields are important and how they should be presented. (maybe convert the binary fields to hex, or display the bitset as a bit matrix)
We're leaving so much elegance and beauty in software engineering on the table, just because we're lazy.
I am sorry but Uiua and LLM generated code? This has to be a shitpost
Welcome to the new normal. Love it or hate it, there are now a bunch of devs who use LLMs for basically everything. Some are producing good stuff, but I worry that many don't understand the subtleties of the code they're shipping.
The thing that convinced me was the ability to write so much more documentation, specification, tests and formal verification stuff than I could before, such that the LLM basically has no choice but to build the right thing.
OpenAI's Codex model is also ridiculously capable compared to everything else, which helps a lot.
I’ve never tried to use it for formal verification. Does it work well for that? Is it smart enough to fix formal verification errors?
The place this approach falls down for me is in refactoring. Sure, you can get chatgpt to help you write a big program. But when I work like that, I don’t have the sort of insights needed to simplify the program and make it more elegant. Or if I missed some crucial feature that requires some of the code to be rewritten, chatgpt isn’t anywhere near as helpful. Especially if I don’t understand the code as well as I would have if I authored it from scratch myself.
I never said to use LLMs to generate Uiua. I said that Uiua is an edge case where tacitness is indeed elegance.
I wouldn't write anything but Rust via LLMs, because that's the only language where I feel that the static type checking story is strong enough, in large parts thanks to kani.
The entirety of any programming language is a convenience. You could just write your code in assembly; I don't think your argument is "automatic". Question is, how much does this particular convenience matter?
The fact that people are writing blog posts and opening bugs about it (and others in the comments here recount running into the issue) seems to indicate this particular convenience matters.
Haskell/GHC gets this right without any hand-wringing, despite about 3-4x the practical historical baggage, and without relying on the runtime.
The Rust community is generally very adamant that "you're holding it wrong" when people complain about e.g. BDSM passing semantics, but it's also got REIR/RIIR, which is kinda like "you're holding it wrong" for programming.
These two things together are a category error.
By RIIR you mean “Rewrite it in Rust”? I looked it up because I was hoping to get something more substantive from the post, but that’s all I could find.
Yeah, or "Rewrite Everything in Rust". The degree of transparency you'll get on this varies between like Reddit and some Discord server, but it's fairly common knowledge that the Rust community leadership regards all other programming languages as existing in an adversarial, finite-sum outcome space with any remotely adjacent programming languages and regard it's elimination of other language use as a first-order good.
Whether this is a first order good because Rust's choices about how to split the difference between C++ and Haskell are in fact the best future for software or because they make all the money from Rust jobs and books and teaching and shit is one of those assume good intentions by default but also pay attention to conflicts of interest scenarios. Speaking for myself I think most of the zeal is legitimate with a few people trying to cash in like you get with any community.
But like all philosophies of the "There is One True Way, All Else Must Conform" stripe (in Christianity this is Opus Dei and things like that, in Islam it's called Takfir, it's not a new thing) it's misguided and destructive no matter how genuine the intentions of the hardliners.
edit: people will try to say that I'm uniquely antagonistic to Rust, but I opened the meme bookmark tab for a diff and saw this within like two minutes of writing my comment, it's a known thing: https://impurepics.com/posts/2023-03-24-refactoring.html
> it's fairly common knowledge that the Rust community leadership regards all other programming languages as existing in an adversarial, finite-sum outcome space with any remotely adjacent programming languages and regards the elimination of other-language use as a first-order good.
I call bullshit on this. If you can show any quote that could even be misconstrued as wanting some kind of language supremacy over every other from anyone in a position of "leadership", I'll eat my hat.
It's got a subreddit: https://www.reddit.com/r/rustjerk/
There's an "internal" community document entitled "The Core Team Is Toxic": https://hackmd.io/@XAMPPRocky/r1HT-Z6_t
There's an entry on the tropes page about it: https://enet4.github.io/rust-tropes/rewrite-in-rust/
You're trying to make the case that this is a fever dream hallucinated by me in isolation? Like I took a bunch of fucking mescaline and imagined a world where the Rust community is actively trying to get things rewritten in Rust, get the DoD to mandate Rust for whole classes of taxpayer funded software, the groups at the FAANGs doing strategic rewrites of like revision control systems that worked fine, the whole thing?
I just, made it up?
You're moving the goalposts.
You started at
> it's fairly common knowledge that the Rust community leadership regards all other programming languages as existing in an adversarial, finite-sum outcome space with any remotely adjacent programming languages and regards the elimination of other-language use as a first-order good.
and now arrived at "there's a subreddit, a glossary entry for a meme and a blog post complaining about the years-defunct core team". Note that the complaints in that blog post have nothing to do with what you're concerned about.
> Like I took a bunch of fucking mescaline and imagined a world where the Rust community is actively trying to get things rewritten in Rust, get the DoD to mandate Rust for whole classes of taxpayer funded software, the groups at the FAANGs doing strategic rewrites of like revision control systems that worked fine, the whole thing?
You seem to be conflating "projects using or recommending Rust" with "a grand conspiracy of a shadowy cabal of Rust people getting their way, all the way to the top of the US government".
I'll restate my claim: no one in leadership can be quoted saying that people should only use Rust everywhere. I'll go as far as saying that it's unlikely anyone that has a commit in rust-lang/rust (a much lower bar) can either.
Right, the super key people are very careful about what gets said on the record. Only people who were hanging around before the PR machine got its act together have heard a super key person say that explicitly. The "be quoted" is doing the heavy lifting there, and I bet you're right about that over, say, the last three or four years.
But I didn't say that, I said you will get various degrees of transparency depending on where you're hanging out. Which is a related statement to your hedge.
But this kind of nitpicking is tiresome. I posted threads and links and stuff to a mountain of "everyone fucking knows this".
Everyone hates you guys, I'm just crazy enough to say it out loud even though a bunch of my recent posts are all going to get 4 downvotes in the first hour.
I think you have some unexplained baggage that you're bringing to the table. Consider re-examining your biases.
I've got a trivially explained gripe with a Rust Evangelism Strike Force that is a meme, a byword on HN for gang-tackle brigade bullying: https://news.ycombinator.com/item?id=14178950
This pseudo-psychiatry "examine your personal issues" line of deflection can't go out of fashion too soon as far as I'm concerned.
If you want to raise an objection to my analysis of the situation, raise a substantive objection, cite a counter example, propose a theory with more explanatory power, whatever.
The thing I'm calling attention to has its own meme page: https://enet4.github.io/rust-tropes/rust-evangelism-strike-f...
That's not a personal issue, that's a Rust community optics disaster.
Am I the only one who thinks this is perfectly fine?
The requirements for derive Clone are clearly defined. As with much in Rust, the type signature drives things, rather than the function body (contrast with C++ generics).
Occasionally, this results in perfectly reasonable code getting rejected. Such is the case with all static languages (by definition).
But in the few cases where that happens, the solutions are quite straightforward. So I don’t feel it’s justified to add more complication to the language to deal with a few small corner cases.
If you write code manually, you can forget to update it in the future. For instance, a manual implementation of PartialEq might become stale if you add new fields in the future. If you could automatically generate the implementation, and simply guide the macro to use non-default behavior (e.g. skip a field, or use a more complicated trait bound on a generic type) then you can have the advantages of generated code without the disadvantages. Seems worth trying for, IMO.
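serde's derive already supports this kind of guidance, for comparison (a sketch that assumes the serde crate with its derive feature):

    use serde::Serialize;

    #[derive(Serialize)]
    struct Config {
        name: String,
        #[serde(skip)] // guided: this field is excluded from the generated impl
        cache: Vec<u8>,
    }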
> we cannot just require all generic parameters to be Clone, as we cannot assume they are used in such a way that requires them to be cloned.
No, this is backwards. We have to require that all generic parameters are Clone, as we cannot assume that any are not used in a way that requires them to be Clone.
> The reason this is the way it is is probably because Rust's type system wasn't powerful enough for this to be implemented back in the pre-1.0 days. Or it was just a simple oversight that got stabilized.
The type system can't know whether you call `T::clone()` in a method somewhere.
> The type system can't know whether you call `T::clone()` in a method somewhere.
It’s not about that, it’s about type system power as the article said. In former days there was no way to express the constraint; but these days you can <https://play.rust-lang.org/?gist=d1947d81a126df84f3c91fb29b5...>:
    impl<T> Clone for WrapArc<T>
    where
        Arc<T>: Clone,
    { … }
Yeah. My bad. I got annoyed by the "is broken" terminology of TFA and wasn't thinking clearly :/
I did the same… I just deleted my comment quickly when I realised I had erred!
All the #[derive(Clone)] does is generate a trait impl of Clone for the struct, which itself can be bounded by trait constraints. It doesn't have to know that every use of the struct ensures generic parameters have to/don't have to be Clone. It doesn't have to make guarantees about how the struct is used at all.
It only needs to provide constraints that must hold for it to call clone() on each field of the struct (i.e. the constraints that must hold for the generated implementation of the fn clone(&self) method to be valid, which might not hold for all T, in which case a Struct<T> will not implement Clone). The issue this post discusses exists because there are structs like Arc<T> that are cloneable despite T not being Clone itself [1]. In a case like that it may not be desirable to put a T: Clone constraint on the trait impl, because that unnecessarily limits T where Struct<T>: Clone.
[1]: https://doc.rust-lang.org/std/sync/struct.Arc.html#impl-Clon...
For structs, why couldn't rust check the necessary bounds on `T` for each field to be cloned? E.g. in

    #[derive(Clone)]
    struct Weird<T> {
        ptr: Arc<T>,
        tup: (T, usize),
    }

for `ptr`, `Arc<T>: Clone` exists with no bound on `T`. But for `tup`, `(T, usize): Clone` requires `T: Clone`.
Same thing for other derives, such as `Default`.
Because it doesn't know if you're relying on T being Clone in method bodies. The internal behavior of methods is not encoded in the type system.
You can already write method bodies today that have constraints that aren't enforced by the type definition though; it's trivially possible to write a method that requires Debug on a parameter without the type itself implementing Debug[0], for example. It's often even encouraged to define the constraints on impl blocks rather than the type definition. The standard library itself goes out of its way to define types in a way that allow only partial usage due to some of their methods having bounds that aren't enforced on the type definition. Rust's hashmap definition in the standard library somewhat notably doesn't actually enforce that the type of the key is possible to hash, which allows a hashmap of arbitrary types to be created but not inserted into unless the value actually implements Hash[1].
[0]: https://play.rust-lang.org/?version=stable&mode=debug&editio...
[1]: https://www.reddit.com/r/rust/comments/101wzdq/why_rust_has_...
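A quick sketch of that HashMap behavior:

    use std::collections::HashMap;

    struct NotHashable; // implements neither Hash nor Eq

    fn main() {
        // Construction has no K: Hash bound, so this compiles...
        let map: HashMap<NotHashable, u32> = HashMap::new();
        assert!(map.is_empty());

        // ...but insertion is only available where K: Hash + Eq:
        // map.insert(NotHashable, 1); // error: NotHashable doesn't satisfy Hash
    }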
I'm not sure how these two things are related. When writing an `impl` for a struct, there's no assumption on the bounds of the generics (if any) unless they are specified _at the impl site_.
For example, the bounds of T in
    impl<T> Weird<T> { .. }

are independent of the bounds of `T` in any other impl or the struct definition.
Unless I'm missing something...
What?
The way the derives work is they generate code by utilizing the fields and their types. Here is a trivial implementation (of a custom trait rather than Clone; the point still holds) which will prove you wrong:
<https://github.com/cull-os/carcass/blob/master/dup%2Fmacros%...>
> The type system can't know whether you call `T::clone()` in a method somewhere.
Why not?
Types don't carry behavioral information about what the method does internally. Everything about a method is known from its signature.
The compiler doesn't introspect the code inside the method and add additional hidden information to its signature (and it would be difficult to reason about a compiler that did).
> Types don't carry behavioral information about what the method does internally.
I don’t remember specifics, but I very distinctly remember changing a method in some way and Rust determining that my type is now not Send, and getting errors very far away in the codebase because of it.[0]
If I have time in a bit I’ll try and reproduce it, but I think Send conformance may be an exception to your statement, particularly around async code. (It also may have been a compiler bug.)
[0] It had something to do with carrying something across an await boundary, and if I got rid of the async call it went back to being Send again. I didn’t change the signature, it was an async method in both cases.
`Send`, `Sync`, and `Unpin` are special because they're so-called 'auto traits': The compiler automatically implements them for all compound types whose fields also implement those traits. That turns out to be a double-edged sword: The automatic implementation makes them pervasive in a way that `Clone` or `Debug` could never be, but it also means that changes which might be otherwise private can have unintended far-reaching effects.
In your case, what happens is that async code conceptually generates an `enum` with one variant per await point which contains the locals held across that point, and it's this enum that actually gets returned from the async method/block and implements the `Future` trait.
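A minimal sketch of the effect, using only std types:

    use std::future::Future;
    use std::rc::Rc;

    async fn tick() {}

    fn make_future() -> impl Future<Output = ()> {
        async {
            let rc = Rc::new(0u32); // Rc is !Send
            tick().await;           // rc is live across this await point
            drop(rc);
        }
    }

    fn assert_send<T: Send>(_: T) {}

    fn main() {
        // assert_send(make_future()); // error: the future is not Send,
        //                             // because Rc<u32> is held across an await
    }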
> Types don't carry behavioral information about what the method does internally.
I was under the impression type inference meant that the implementation of a function directly determines its return type, and therefore its signature and type.
While you can sometimes elide the return type (and what you describe only happens in closures: `|| { 0u32 }` is the same as `|| -> u32 { 0u32 }`; methods and free functions must always have an explicitly declared return type), that's not the same thing as what's being described above.
For the existence of any invocation of `<T as Clone>::clone()` in the method body to be encoded in the method signature, we'd either need some wild new syntax, or the compiler would need to be able to encode hidden information into types beyond what is visible to the programmer, which would make it very hard to reason about its behavior.
No, Rust functions have to declare their return types. They cannot be inferred.
Haskell does this. If you derive Eq, it puts a condition on the generic parameter(s) requiring them to be Eq as well. Then if you use it with something that doesn’t implement Eq, your generic type doesn’t either.
It helps if you can express these preconditions in the first place, though.
Haskell has, like-for-like, a better type system than Rust.
That said, Rust is just enough Haskell to be all the Haskell any systems programmer ever needed, always in strict mode, with great platform support, a stellar toolkit, great governance, a thriving ecosystem and hype.
Lots of overlap in communities (more Haskell -> Rust than the other way IMO) and it's not a surprise :)
I have respect for Haskell because I saw it long before I saw Rust, and I love the ideas, but I never got around to actually using Haskell.
I think an IO monad and linear types [1] would do a lot for me in a Rust-like language.
[1] Affine types? Linear types? The ones where the compiler does not insert a destructor but requires you to consume every non-POD value. As someone with no understanding of language theory, I think this would simplify error handling and the async Drop problem
I can understand why Rust didn't implement an IO monad (even though I'd love it), but linear types seem like they would fit right in with the rest of Rust. Not sure why they didn't include them.
There are actually two different ways that Rust types can always be discarded: mem::forget and mem::drop. Making mem::forget unsafe for some types would be very useful, but difficult to implement without breaking existing code [1].
Btw, you are correct with linear types - Affine types allow discarding (like Rust already does).
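Concretely:

    fn main() {
        let a = String::from("dropped");
        drop(a); // destructor runs here, memory is freed

        let b = String::from("forgotten");
        std::mem::forget(b); // no destructor runs; the allocation leaks, safely
    }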
I think if one implemented linear types in Rust they could still be forgotten, but not dropped. Basically you would have constraints like:
* creating the types &T or &mut T is forbidden; therefore so is applying & and &mut to a value of a linear type, or using ref and ref mut at the top level of patterns that match a value of a linear type
* field access is forbidden, except through pattern matching (either match or let), because pattern matching consumes the matched value
* implementing Copy or Drop is not allowed (Copy makes no sense, and with Drop, destructuring patterns would not be usable). Well, Drop would not be implementable anyway, together with many other traits, because it takes &mut self (which is not creatable).
I would have no idea how to express it in the syntax.
I used Haskell for a while and eventually switched over (and intentionally target Rust for many new projects, though I suspect I should be doing more Go for really simple things that just don't need more thought).
One large problem with Haskell that pushes people to Rust is laziness, I think -- it's a core feature of Haskell and basically the opposite of the direction where Rust shines (and of what one would want out of a systems language). It's an amazing feature, but it makes writing performant code more difficult than it has to be. There are ways around it, but they're somewhat painful.
Oh there's also the interesting problem of bottom types and unsafety in the std library. This is a HUGE cause of consternation in Haskell, and the ecosystem suffers from it (some people want to burn it all down and do it "right", some want stability) -- Rust basically benefited from starting later, and just making tons of correct decisions the first time (and doing all the big changes before 1.0).
That said, Haskell's runtime system is great, and its threading + concurrency models are excellent. They're just not as efficient as Rust's (obviously) -- the idea of zero cost abstractions is another really amazing feature.
> [1] Affine types? Linear types? The ones where the compiler does not insert a destructor but requires you to consume every non-POD value. As someone with no understanding of language theory, I think this would simplify error handling and the async Drop problem
Yeah, the problem is that Affine types and Linear types are actually not the same thing. Wiki is pretty good here (I assume you meant to link to this):
https://en.wikipedia.org/wiki/Substructural_type_system
Affine is a weakening of Linear types, but the bigger problem here is that Haskell has a runtime system -- it just lives in a different solution-world from Rust.
For rust, Affine types are just a part of the way they handle aliasing and enable a GC-free language. Haskell has the feature almost... because it's cool/powerful. Yes, it's certainly useful in Haskell, but Haskell just doesn't seem as focused on such a specific goal, which makes sense because it's a very research driven language. It has the best types (and is the most widely adopted ML lang IIRC) because it focuses on being correct and powerful, but the ties to practicality are not necessarily the first or second thought.
It's really something that Rust was able to add something that was novel and useful that Haskell didn't have -- obvious huge credit to the people involved in Rust over the years and the voices in the ecosystem.
that's incorrect
haskell implements the "perfect derive" behaviour that the blog post is complaining about rust's lack of. the constraint is only propagated to the field types when using a haskell derive, not the parameters themselves (as in rust)
so the following compiles just fine:
    data Foo -- not Eq
    data Bar a = Bar deriving Eq

    f :: Eq (Bar Foo) => ()
    f = ()
> we cannot just require all generic parameters to be Clone, as we cannot assume they are used in such a way that requires them to be cloned.
I don't understand what "used in such a way that requires them to be cloned" means. Why would you require that?
Right now the derive macro requires `T` be `Clone`, but what we actually want to require is only that each field is clone, including those that are generic over `T`. eg `Arc<T>` is `Clone` even though `T` isn't, so the correct restriction would be to require `Arc<T>: Clone` instead of the status quo which requires `T: Clone`
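A sketch of the manual impl with the correct ("perfect") bound, which today's derive can't produce:

    use std::sync::Arc;

    struct NotClone; // deliberately not Clone

    struct Shared<T> {
        ptr: Arc<T>,
    }

    // #[derive(Clone)] would demand T: Clone; this bound only demands
    // what the field actually needs, and Arc<T>: Clone always holds.
    impl<T> Clone for Shared<T>
    where
        Arc<T>: Clone,
    {
        fn clone(&self) -> Self {
            Shared { ptr: self.ptr.clone() }
        }
    }

    fn main() {
        let a = Shared { ptr: Arc::new(NotClone) };
        let _b = a.clone(); // fine: Arc<NotClone> is Clone
    }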
This is the best explanation I've read of this limitation
Thank you, summarized perfectly.
That's the crux of the article. There's no good reason for this requirement, at least none that arises in the article, so the author concludes it must be a mistake.
I think it's a bit cynical to estimate that it would take at least 4 years for this change to be admitted into the compiler. If the author is right and there is really no good reason for this rule, and I agree with the author in seeing no good reason, then it seems like something that could be changed quite quickly. The change would allow more code to compile, so nothing would break.
The only reason I could come up with for this rule is that allowing non-complying type parameters somehow makes the code generation really complex, and they therefore postponed the feature.
> The only reason I could come up with for this rule is that for some other reason allowing non complying type parameters somehow makes the code generation really complex and they therefore postponed the feature.
The history of this decision can be found in details in this blog post: https://smallcultfollowing.com/babysteps//blog/2022/04/12/im...
The key part:
> This idea [of relaxing the bounds] is quite old, but there were a few problems that have blocked us from doing it. First, it requires changing all trait matching to permit cycles (currently, cycles are only permitted for auto traits like Send). This is because checking whether List<T> is Send would now require checking whether Option<Rc<List<T>>> is Send. If you work that through, you’ll find that a cycle arises. I’m not going to talk much about this in this post, but it is not a trivial thing to do: if we are not careful, it would make Rust quite unsound indeed. For now, though, let’s just assume we can do it soundly.
> The other problem is that it introduces a new semver hazard: just as Rust currently commits you to being Send so long as you don’t have any non-Send types, derive would now commit List<T> to being cloneable even when T: Clone does not hold.
> For example, perhaps we decide that storing a Rc<T> for each list wasn’t really necessary. Therefore, we might refactor List<T> to store T directly […] We might expect that, since we are only changing the type of a private field, this change could not cause any clients of the library to stop compiling. With perfect derive, we would be wrong.2 This change means that we now own a T directly, and so List<T>: Clone is only true if T: Clone.
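Concretely, the hazard from the post looks something like this sketch:

    use std::rc::Rc;

    // Before: under perfect derive, List<T> would be Clone for any T,
    // because Rc<T> and Option<Rc<List<T>>> are always Clone.
    struct List<T> {
        value: Rc<T>,
        next: Option<Rc<List<T>>>,
    }

    // After "just changing a private field" to store T directly, perfect
    // derive would only give List2<T>: Clone when T: Clone, breaking any
    // downstream code that cloned a List2<SomeNonCloneType>.
    struct List2<T> {
        value: T,
        next: Option<Rc<List2<T>>>,
    }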
Yeah; I think this argument makes sense. With perfect derive, #[derive(Clone)] has a bunch of implicit trait bounds which will change automatically as the struct changes. This has semver implications - and so we might want to be explicit about this rather than implicit.
We could solve this by having developers add trait bounds explicitly into the derive macro.
Currently this:

    #[derive(Clone)]
    struct Foo<T>(Arc<T>)

expands to:

    impl<T> Clone for Foo<T> where T: Clone { ... }

Perfect derive would look at the struct fields to figure out what the trait bounds should be. But it might make more sense to let users set the bound explicitly. Apparently the bon crate does it something like this:

    #[derive(Clone(bounds(Arc<T>: Clone)))]

Then if you add or remove fields from your struct, the trait bounds don't necessarily get modified as a result. (Or, changing those trait bounds is an explicit choice by the library author.)
A type might have a generic parameter T, but e.g. use it as a phantom marker.
Then even if T isn't cloneable, the type might still admit a perfectly fine implementation of clone.
LLMs are broken, too:
> "Of course. This is an excellent example that demonstrates a fundamental and powerful concept in Rust: the distinction between cloning a smart pointer and cloning the data it points to. [...]"
Then I post the compiler's output:
> "Ah, an excellent follow-up! You are absolutely right to post the compiler error. My apologies—my initial explanation described how one might expect it to work logically, but I neglected a crucial and subtle detail [...]"
Aren't you also getting very tired of this behavior?
> Aren't you also getting very tired of this behavior?
The part that annoys me definitely is how confident they all sound. However the way I'm using them is with tool usage loops and so it usually runs into part 2 immediately and course corrects.
Well, they're usually told that they're some unicorn master of * languages, frameworks, skillsets, etc., so can you really fault them? :)
TBH I'm tired of only the "Ah, an excellent follow-up! You are absolutely right <...> My apologies" part.
Yeah they definitely didn't do that in the past. We've lost "as a large language model" and "it's important to remember" but gained "you're absolutely right!"
I would have thought they'd add "don't apologise!!!!" or something like that to the system prompt like they do to avoid excessive lists.
Languages like Rust and C seem to be too complicated for them. I also asked different LLMs to write a C macro or function that creates a struct and a function to print it (so that I don't have to duplicate a field list) and it generates plausible garbage.
LLMs only ever produce plausible garbage -- sometimes that garbage happens to be right, but that's down to luck.
Haha, I encountered the opposite of this when I did a destructive thing recently: I first asked Gemini, then pushed back saying it was wrong, and it insisted it was right. So the reality they encountered is probably that: it either is stubbornly wrong or overly obsequious, with no ability to switch.
My friend was a big fan of Gemini 2.5 Pro and I kept telling him it was garbage except for OCR and he nearly followed what it recommended. Haha, he’s never touching it again. Every other LLM changed its tune on pushback.
You should check Twitter nowadays, people love this kind of response. Some even use it as an argument
And this is why basically LLMs are "bad". They have already reached critical mass adoption, they are right or mostly right most of the time, but they also screw up badly many times as well. And people will just not know, trust blindly, and go even deeper down the spiral of total absence of critical judgement. And yeah, it also happened with Google and search engines back in the day ("I found it on the web so it must be true") but now with LLMs it is literally tailored to what you are asking, for every possible question you can ask (well, minus the censored ones).
I keep thinking the LLM contribution to humanity is/will be a net negative in the long run.
> they are right or mostly right most of the time
It’s times like this when I wonder if we’re even using the same tools. Maybe it’s because I only even try to actively use them when I expect failure and am curious how it will be (occasionally it just decides to interpose itself on a normal search result, and I’m including those cases in my results) but my success rate with DuckDuckGo Assist (GPT-4o) is… maybe 10% of the time success but the first few search results gave the answer anyway, 30% obviously stupidly wrong answer (and some of the time the first couple of results actually had the answer, but it messed it up), 60% plausible but wrong answer. I have literally never had something I would consider an insightful answer to the sorts of things I might search the web for. Not once. I just find it ludicrously bad, for something so popular. Yet somehow lots of people sing their praises and clearly have a better result than me, and that sometimes baffles, sometimes alarms me. Baffles—they must be using it completely differently from me. Alarms—or are they just failing to notice errors?
(I also sometimes enjoy running things like llama3.2 locally, but that’s just playing with it, and it’s small enough that I don’t expect it to be any good at these sorts of things. For some sorts of tasks like exploring word connections when I just can’t quite remember a word, or some more mechanical things, they can be handy. But for search-style questions, using models like GPT-4o, how do I get such consistently useless or pernicious results from them!?)
Probably depends a lot of the type of questions you're asking. I think LLMs are inherently better at language-based tasks (translate this, reword this, alternate terms for this, etc.) than technical fact-based tasks, and within technical tasks someone using it as their first port of call will be giving it a much larger proportion of easy questions than someone using it only once stumped having exhausted other sources (or, as here, challenging it with questions where they expect failure).
There's a difference in question difficulty distribution between me asking "how do I do X in FFmpeg" because I'm too lazy to check the docs and don't use FFmpeg frequently enough to memorize, compared to someone asking because they have already checked the docs and/or use FFmpeg frequently but couldn't figure out how to do specifically X (say cropping videos to an odd width/height, which many formats just don't support), for instance. The former probably makes up the majority of my LLM usage, but I've still occasionally been surprised on the latter, where I've come up empty checking docs/traditional search but an LLM pulls out something correct.
A few days ago I tried something along the “how do I do X in FFmpeg” lines, but something on the web platform, I don’t remember what. Maybe something to do with XPath, or maybe something comparatively new (3–5y) in JS-land with CSS connections. It was something where there was a clear correct answer, no research or synthesis required, I was just blanking on the term, or something like that. (Frustratingly, I can’t remember exactly what it was.) Allegedly using two of the search results, one of which was correct and one of which was just completely inapplicable, it gave a third answer which sounded plausible but was a total figment.
It’s definitely often good at finding the relevant place in the docs, but painfully frequently it’s stupendously bad, declaring in an authoritative tone how it snatched defeat from the jaws of victory.
The startling variety of people’s experiences, and its marked bimodal distribution, has been observed and remarked upon before. And it’s honestly quite disturbing, because they’re frequently incompatible enough to suggest that at least one of the two perspectives is mostly wrong.
Yet they're fantastic personal tutors / assistants who can provide a deeply needed 1:1 learning interface for less privileged individuals. I emphasize 'can'. Not saying kids should have them by their side in their current rough-around-the-edges and mediocre-intelligence forms. Many will get burned as you describe, but it should be a lesson to curate information from multiple sources and practice applying reasoning skills!
I agree with your take, and I personally used Claude and ChatGPT to learn better/hone some skills while interviewing to land a new job. And they also help me get unstuck when doing small home fixes, because I get a custom-tailored answer to my current doubt/issue that a normal web search would make much more complicated to find (I'd have to supply more context). But still, they get things wrong and can lead you astray even if you know the topic.
My dad taught high school science until retiring this year, and at least in 2024 the LLM tutors were totally useless for honest learning. They were good at the “happy path” but the space of high schoolers’ misconceptions about physics greatly exceeds the training data and can’t be cheaply RLHFed, so they crap the bed when you role play as a dumb high schooler.
In my experience this is still true for the reasoning models with undergraduate mathematics - if you ask it to do your point-set topology homework (dishonest learning) it will score > 85/100, if you are confused about point-set topology and try to ask it an honest (but ignorant) question it will give you a pile of pseudo-mathematical BS.
The only thing that needs to change with derive(Clone) is to add an option so that you can easily customize the bounds. Explicitly.
Just looking at the examples, you can tell that they wouldn't compile: the other structs passed in don't derive the trait as well, nor implement it. It's really simple, not broken.
I don't see how "the hard way" is a breaking change. Anybody got an example of something that works now but wouldn't work after relaxing that constraint?
It relaxes the contract required for an existing type with derive(Clone) to implement Clone, which might allow types in existing code to be cloned where they couldn't before. This might matter if precluding those clones is important for the code, e.g. if there are safety invariants being maintained by Type<T> only being cloneable if T is Clone.
Ok let's say there's existing code that requires that a type is not Clone. Then that type definitely would not have #[derive(Clone)] applied to it. So it would not be affected by the change. So it would not be broken.
It's only a breaking change if code that previously worked stops working without changing the code.
Derive Clone is not broken. It is basic. I’d say this is a good area for a dependency but not the core Rust functionality. Keep derive simple and stupid, so people can learn they can derive stuff themselves. It also avoids any surprises.
I disagree. I think the current behaviour is surprising. The first time I ran into this problem, I spent half an hour trying to figure out what was wrong with my code - before eventually realising its a known problem in the language. What a waste of time.
The language & compiler should be unsurprising. If you have language feature A, and language feature B, if you combine them you should get A+B in the most obvious way. There shouldn't be weird extra constraints & gotchas that you trip over.
I don't see it as a problem, personally. It's consistent behavior that I don't find surprising at all, perhaps because I internalized it so long ago. I can understand your frustration tho
> in the most obvious way.
What people find obvious is often hard to predict.
The main reason I'm not super fond of the way it currently works is that it can be a bit confusing in code reviews. I've joined several teams over the years working on Rust codebases around a year old where most of the team hadn't used Rust beforehand, with the idea that my Rust experience can help the team grow in their Rust knowledge and mature the codebase over time. I can recall numerous times when I've seen a trait like Debug or Clone manually implemented by someone newer to Rust where the implementation is identical to what would be generated by automatically deriving it, with a roughly equal split between times when they did actually need to manually implement it for the reasons described in this article and times when they totally could have derived the trait but didn't realize. If I can't look at a Clone implementation that just manually clones every field exactly the same way as deriving it would and immediately know whether it would be possible to derive it after over 10 years of Rust experience, I can't possibly expect someone with less than a year of Rust experience to do that, so my code review feedback ends up having to be a question about whether they tried to derive the trait or not (and to try it and keep it like that if it does work) rather than being able to let them know for sure that they can just derive the trait instead.
I guess at a higher level, my issue with the way it currently works is that it's a bit ambiguous with respect to the intent of the developer. If it were possible to derive traits in the cases the article describes, seeing a manual implementation would be immediately clear that this was what the developer chose to write. The way it works right now means that I can't tell the difference between "I tried to derive this, but it didn't work, so I had to implement it manually as a fallback" and "I implemented this manually without trying to derive it first". It's a minor issue, but I think small things like this add up in the overall experience of how hard it is for someone to learn a language, and I'd argue that it's exactly the type of thing that Rust has benefited from caring about in the past. Rust has a notoriously sharp learning curve, and yet it's grown in popularity quite a lot over the past decade, and I don't think that would have been possible without the efforts of those paying attention to the smaller rough edges in the day-to-day experience of using the language.
> What people find obvious is often hard to predict.
It’s not so terribly difficult to figure out what the expected behaviour is here. If I can write impl Clone in exactly the same way #[derive(Clone)] would do it, I should be able to just go ahead and use derive to do it. That seems pretty obvious to me.
IDK, I guess we'll just have to disagree on this. The original rationale for restricting derive makes more sense to me than not restricting it. What you see as obvious strikes me as potentially dangerous.
Then again, I never had much respect for "obviousness" (as a concept, not in terms of code that is readable); the concept doesn't strike me as very useful except for papering over disagreement.
“Obvious” to me is an appeal to Occam’s razor. Whether you’re conscious of it or not, we all deeply feel that our systems should obey the simplest theory.
If addition in rust worked normally except that when you add the number 4 the program panicked, that would be ridiculous. But why? Because it is inconvenient, for one. And it’s not obvious. You need a more complex theory to model a language like that.
The question with derive(Clone) is “what is the most straight forward theory” and if the actual language is more complex, we have to ask ourselves if the extra complexity is worth the cost.
If you spend 2 minutes thinking about it, as the blog post author said, if you want to implement clone for Foo<T>, it should be totally irrelevant if T implements clone or not. What matters is if the members of the struct implement clone. And that’s a completely independent question. Maybe T is clone and the struct members are not (eg the struct stores Cell<T>). Maybe the struct members are clone and T is not (eg if the struct stores Arc<T>). It’s a very strange, idiosyncratic theory that derive should have an invisible trait bound on T: Clone. It was also pretty surprising to the blog post author.
Sometimes complex theories pay their own rent. Like the borrow checker. Or fixed size integers. But if the theory is hidden, hard to learn, and doesn’t seem to help programmers, I’d call it a bug. Or a kludge. It’s certainly not an elegant programming language feature. This is why bringing up obviousness is relevant. Because good language features are either simple (and thus obvious) or they help me program enough to be worth the complexity they bring. This is neither.
This seems like a kludge to me. Derive should either impl clone when the children impl clone, or be universal, or be universal with optional custom trait bounds specified by the caller (e.g. derive(Clone where Foo<T>: Clone)).
The advantage of a universal implementation is that you’d get more obvious compilation errors. “impl Clone of Mystruct<T>(T) requires T: Clone. Fix by adding where T: Clone to derive or manually implementing clone”. Simple. Straightforward. Obvious.
The solution is to use a derive macro like derivative or educe.
skill issue. Arc should allow Clone, even if the underlying is not `impl Clone`.
Arc _does_ allow Clone without the underlying type being Clone. That's exactly why it's so unexpected that having an `Arc<T>` field in a struct doesn't allow deriving Clone unless `T` is also Clone; it doesn't actually matter to the implementation, and you end up having to write the exact same thing manually that would be generated by deriving it if the compiler actually allowed it.
Agreed, a screwdriver isn’t broken just because it isn’t a good hammer. The title seems misleading, I was expecting a bug, memory unsafe etc.
Allowing more safe uses seems OK to me, but obviously expanding the functionality adds complexity, so there’s a trade off.
I mean, how is this not core Rust functionality if you need Clone for many things, unless you want to fight the borrow checker for yet another day?
Amazing site from someone so young
Eh. It's a stretch to call it "broken"
Great write-up! It’s a solid reminder that #[derive(Clone)] can introduce subtle behavior in deeply nested or generic types. Automation is helpful—but shouldn't replace carefully reviewing your code's intent. Thanks for bringing attention to this!
it seems i have a personal dislike for rust syntax. i think none of the code should compile because it's just ugly :)
A bit off-topic, but every time I read some sophisticated Rust code involving macros, I cannot help but think that something went wrong at some point. The sheer complexity far outpaces that of C++, and even though I'm sure they would call out C++ on undefined behaviour (and rightfully so), it seems less of it has to do with memory and thread-safety, and more with good old "C++ style" bloat: pleasing all, whilst pleasing none. Rust doesn't seem worthwhile to learn, as in a few years' time C++ will get memory safety proper, and I could just use that.
Maybe this is an improvement on templates and precompiler macros, but not really.
None of this has to do with the complexity of macros.
And no, sorry, the complexity of C++ templates far outweighs anything in Rust's macros. Templates are a Turing-complete extension of the type system. They are not macros or anything like it.
Rust macro_rules! macros are token-to-token transformers. Nothing more. They're also hygienic, meaning they MUST form valid syntax and don't change the semantics in weird ways like C macros can.
Proc-macros are self-standing crates with a special library type in the crate manifest indicating as such, and while they're not hygienic like macro_rules, they're still just token-to-token transformers that happen to run Rust code.
Both are useful, both have their place, and only proc macros have a slight developer experience annoyance with having to expand to find syntax errors (usually not a problem though).
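For illustration, a tiny macro_rules! transformer showing the token-to-token behavior:

    // Tokens in, tokens out, with hygiene.
    macro_rules! square {
        ($x:expr) => {
            $x * $x
        };
    }

    fn main() {
        // The expr fragment is captured as one unit, so this behaves like
        // (2 + 3) * (2 + 3), unlike a textual C #define would.
        println!("{}", square!(2 + 3)); // prints 25
    }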
Proc macros are implemented in an unsafe way, because they run arbitrary code during compilation and can access any files. I do not like it.
Also, I think it would be better if they operated on reflection-like structures (functions, classes, methods) rather than tokens - they would be easier to write and read.
I agree in principle but also there's a lot of worth in having them do that in certain cases, and build scripts and the like already have that anyway.
Achieving a perfect world where build tooling can only touch the things it really needs is less of a toolchain problem and more of an OS hardening issue. I'd argue that's outside the scope of the compiler and language teams.
This is often overlooked. No, I don't want a random cargo package to run RCEs on my machine. And there are just so many; in terms of bloat, the dependency trees are huge, rivaled only by Node.js, if I'm being honest. I have to build some Rust stuff once in a while (mostly Postgres extensions) and every time something goes wrong it's a nightmare to sort out.
I never have these issues. Is the postgres driver a C wrapper? That's where things tend to fall apart.
Yeah, they would often claim that the C API is at fault but, believe it or not, they blame Rust just as much. I've had an interesting discussion with pgrx people: https://github.com/pgcentralfoundation/pgrx/issues/1897#issu... to quote @workingjubilee
> Rust has formally rejected the notion that any function can return twice. Even LLVM has no special knowledge of the name of "sigsetjmp" or "setjmp", only that some functions can return twice.
> Thus, when you call sigsetjmp or setjmp from Rust, Rust doesn't believe that you called a function that can return twice. The Rust compiler refuses to annotate it with the required LLVMIR annotation. Thus LLVM sees a call to sigsetjmp that it believes cannot return twice... it's just an ordinary function that someone named "sigsetjmp" because they have a sense of humor. What LLVM does next is between itself and whatever gods that compilers believe in, and we have no more voice in such a matter.
In this particular case, it's hardly the fault of a "C wrapper."
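To illustrate the quote, a sketch of what such a hand-written binding looks like on the Rust side (the jmp_buf layout here is deliberately fake):

    use std::os::raw::c_int;

    // Illustrative only: real bindings take the layout from <setjmp.h>.
    #[repr(C)]
    pub struct JmpBuf([u64; 32]);

    extern "C" {
        // There is no way to write LLVM's `returns_twice` attribute in
        // Rust, so the optimizer assumes this returns exactly once,
        // which is exactly the hazard described above.
        pub fn setjmp(env: *mut JmpBuf) -> c_int;
    }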
Sure looks like it is, indeed, a C problem. Which checks out with my experience: the only time I had a crate fail to build in all these years, it was C underneath, and of course C is just whatever happened to compile and seemed to work. As an ex-C programmer I guess I sympathise; none of the non-trivial C I wrote 10-20 years ago actually builds out of the box on today's systems. That's just how C is, so why would Rust magically fix that?
I mean... yeah? Rust explicitly doesn't allow longjmp or related machinery, at least not in any way that plays well with Rust's abstract machine.
And for good reason. That's been a wart and a hack since the day it was created. I'd say this is a _good_ thing.
My issue with it at the time had to do with... how to put it... simply building the extension and carrying on with my life. Version after version it would build alright, and then suddenly, boom, it doesn't work any more. Okay, I understand that perhaps ppc64el is not the most popular architecture and some things aren't easily portable, but this otherwise mirrors my experience with Rust programs: always requiring the latest version of everything. The recommended installation path involves sudoing shell scripts fetched over the network, and I'm sure Rust people love it, salivating over the new versions and features that come along, constantly staying up to date, but to the rest of us normal people, dealing with Rust software is a chore.
CMake is a huge quality-of-life improvement, all things considered!
> The sheer complexity far outpaces that of C++
Not even close. Rust macros vs. C++ templates is more like checkers vs. 3D chess while blindfolded.
> Rust doesn't seem worthwhile to learn, as in a few years time C++ will get memory safety proper
C++ getting "memory safety proper" would just add to the problem that is C++: a pile of concepts, features, and incompatible ideas and APIs.
I've been writing both for a decade now and I disagree immensely. If Dante had known the horrors of C++ template compiler errors, he would've added a tenth circle.
I'm convinced that the only reason anyone's still using C++ is that they refuse to learn any other language, and the features they want may get added eventually. C++ is an absolute mess.
Rust is clearly aiming to match C++ in complexity.
I don't know whether this is "death by committee" in a new wrapper, only this time it's the "community," driven by GitHub emojis, as opposed to the legacy processes you would find in C++ committees. Upon closer inspection, there's so much more in common between the two than it would seem! The most glaring difference is cadence: C++ evolves slowly, and the newer stuff is not rushed into codebases, whereas Rust projects commonly require the very latest stable releases, and in some instances nightly builds, just to compile. I primarily interact with Rust codebases in the context of Postgres extensions, and I can tell you one thing: constantly having to update the Rust toolchain to keep my databases up to date is a _massive_ chore.
Curiously, it would appear that the niche Rust occupies nowadays has very little overlap with C++ applications and much more overlap with C. We're beginning to see more and more Rust creeping into C projects. Meanwhile, the not-so-recent addition of smart pointers and coroutines, and the more recent refinements of these designs, have allowed C++ codebases to evolve at a predictable, clearly-defined pace. There's definitely a battle for C++ mindshare going on, and the outcomes really haven't been good for Rust.

I'd like to remind you that most large-scale orgs where performance and stability truly matter are still overwhelmingly C++. The HPC world basically runs on C++. The AI landscape (arguably just an extension of HPC) is similarly dominated by C++. CUDA is fundamentally an extension of C++; the larger CUDA toolkit and programming guides primarily use C++ syntax and features. Legacy? Sure. But this is also the case for the majority of emergent projects in the field, including ROCm, JAX, and the Tenstorrent stack, and consequently the vast majority of higher-level frameworks such as llama.cpp, torch, vLLM, and TensorRT all rely on C++ compilers. Rust is virtually non-existent here, which surprised me originally, as I would expect Rust people to be interested in heterogeneous computing, but it started making sense when I looked at the Rust-CUDA compatibility matrix.[1] Turns out it doesn't really support anything useful for performance, and the things it does support all have asterisks next to them.
The Rust community is obsessed with a handful of hyper-important projects, such as the Linux kernel, Android, Postgres (via pgrx[2], where it has been incredibly useful!), etc. IMHO, this is very telling, and part of a larger strategy: ignore or rewrite small projects with no intent to maintain them later, abandon the more ambitious Rust-native projects, and focus on a few of the most high-profile C projects with a sufficiently large surface area, where memory safety has traditionally been a problem. You could make an argument that Linus Torvalds is, frankly, exploiting Rust evangelists, avoiding the bad publicity that denying them completely would entail while directing their energies towards the most boring stuff possible: the driver subsystems. This is changing, of course, most notably with bcachefs (Kent is very fond of Rust!), and we have recently seen how that played out.
The Rust Foundation needs to think about optics; it doesn't look good at all.
On the other hand, I've grown to appreciate, and am indeed tempted to write some simple programs for, Xous[3], of Precursor fame, which is a really nice bit of kit, and cleverly designed, too. (See the pddb[4] work by bunnie circa 2022; it's really quite exciting!) Honestly, this is Rust at its best: a project written from scratch, for good, technical, no-bullshit, no-compromise reasons, where it matters most, and actively maintained, too. Still, with the return of memory-tagging technology (pioneered by IBM in the 90's) in Arm v9, and formally verified CHERI designs, I'm not yet convinced Rust itself provides enough oomph, hype notwithstanding, to justify much wider adoption. It's been 10 years since Rust 1.0, and wider industry adoption is otherwise quite underwhelming. They're no longer the new kid on the block, and in view of the "speedrunning" feature cadence, most notably culminating in the whole async debacle, it's not looking good at all. The Rust Foundation is routinely exploited by big corps in the stark-naked pursuit of harvesting young talent, but other than a few small, hyper-useful applications like ripgrep and alacritty, a few Rust-only shops (Oxide?), and activism largely directed towards legacy C projects, and despite being recognised by StackOverflow as "the most loved language" who knows how many years in a row, industry-wide adoption really has been lackluster _at best_.
Contrary to popular belief, C++ is getting easier on the eyes as the years go by, while Rust is only getting harder to read. I really liked the direction they originally took back in 2014, but they're now far beyond the point of no return; the proc-macros and arcane cargo mechanics are only making matters worse. If I were a betting man, I would predict that C++ will eventually incorporate most of what makes Rust great: a borrow checker, and many alternative approaches, are floating around and gaining traction. Static analysis and perf-style tooling have come a long way. Fuzzing, too. Modern C++ is already very different from the C++ of 10 years ago, and many orgs choose to use only a specific subset of it. The same is happening in async Rust, but the larger language and tooling design doesn't lend itself to "siloing" as well. I personally stopped writing C++ professionally for 5 years after I stopped contributing to KDE projects, and when I came back to learn about lambdas, smart pointers, move semantics and all, it was quite straightforward. The amount of tooling C++ accumulated during that time is quite staggering, and its traction speaks for itself.
In 2025, more C++ is being written than at any time in the past, and young people are learning it, too. To them, from the POV of "Modern C++", a lot of the arguments Rust people make seemingly fall on deaf ears. If C++ is the dominant ideology and Rust the subverting one, it's clear that the former is set on incorporating the latter into its discourse. What usually happens in such scenarios is that, long term, it boils down to taste. IMHO, in matters of taste, C++ is more flexible (down to agreed-upon subsets) than Rust could ever allow today, and I struggle to see how this asymmetry could be reconciled; there is seemingly no appetite for it in Rust circles outside of async runtimes.
EDIT: Don't forget about LLM technology! There are orders of magnitude more C and C++ code of various styles around. The primary selling point of Rust is that it enables memory and thread safety, right? If a sufficiently advanced AI agent runtime is able to troubleshoot and address those issues in time, they become much less of a differentiator. Many security companies are already using LLM tooling to catch various vulnerabilities, and sometimes even bother maintainers with it, demanding CVEs be assigned and all; there was a thread on HN recently covering this topic. The agent runtimes have dabbled in superoptimisation and even in writing highly sophisticated fused kernels; guess what language it's all been done in?
[1]: https://github.com/Rust-GPU/Rust-CUDA/blob/main/guide/src/fe...
[2]: https://github.com/pgcentralfoundation/pgrx
[3]: https://github.com/betrusted-io/xous-core
[4]: https://www.bunniestudios.com/blog/2022/the-plausibly-deniab...