Linux Kernel Rust Code Sees Its First CVE Vulnerability

Posted by weinzierl 17 hours ago


Comments

Comment by tekacs 17 hours ago

To pre-empt the folks who'll come in here and laugh about how Rust should be preventing memory corruption... I'll just directly quote from the mailing list:

  Rust Binder contains the following unsafe operation:
  
   // SAFETY: A `NodeDeath` is never inserted into the death list
   // of any node other than its owner, so it is either in this
   // death list or in no death list.
   unsafe { node_inner.death_list.remove(self) };
  
  This operation is unsafe because when touching the prev/next pointers of
  a list element, we have to ensure that no other thread is also touching
  them in parallel. If the node is present in the list that `remove` is
  called on, then that is fine because we have exclusive access to that
  list. If the node is not in any list, then it's also ok. But if it's
  present in a different list that may be accessed in parallel, then that
  may be a data race on the prev/next pointers.
  
  And unfortunately that is exactly what is happening here. In
  Node::release, we:
  
   1. Take the lock.
   2. Move all items to a local list on the stack.
   3. Drop the lock.
   4. Iterate the local list on the stack.
  
  Combined with threads using the unsafe remove method on the original
  list, this leads to memory corruption of the prev/next pointers. This
  leads to crashes like this one:

Comment by themafia 16 hours ago

So the prediction that incautious and unverified unsafe {} blocks would cause CVEs seems entirely accurate.

Comment by mustache_kimono 15 hours ago

> So the prediction that incautious and unverified unsafe {} blocks would cause CVEs seems entirely accurate.

This is one CVE, the first caused by a mistake made using unsafe Rust. But it was revealed alongside 159 new kernel CVEs found in C code.[0]

It may just be me, but it seems wildly myopic to draw conclusions about Rust, or even unsafe Rust, from one CVE. More CVEs will absolutely happen. But even true Rust haters have to recognize that the tide of CVEs in kernel C code runs at something like 19+ CVEs per day. What kind of case can you make that "incautious and unverified unsafe {} blocks" are worse than that?

[0]: https://social.kernel.org/notice/B1JLrtkxEBazCPQHDM

Comment by uecker 14 hours ago

Github says 0.3% of the kernel code is Rust. But even normalized to lines of code, I think counting CVEs would not measure anything meaningful.

Comment by mustache_kimono 14 hours ago

> Github says 0.3% of the kernel code is Rust. But even normalized to lines of code, I think counting CVEs would not measure anything meaningful.

Your sense seems more than a little unrigorous. 1/160 = 0.00625. So, several orders of magnitude fewer CVEs per line of code.

And remember, this is also the first Rust kernel CVE, and any fair metric would count both any new C kernel CVEs and those which have already accrued against the same C code, if comparing raw lines of code.

But taking a one week snapshot and saying Rust doesn't compare favorably to C, when Rust CVEs are 1/160, and C CVEs are 159/160 is mostly nuts.

Comment by mustache_kimono 8 hours ago

> Your sense seems more than a little unrigorous. 1/160 = 0.00625. So, several orders of magnitude fewer CVEs per line of code.

This is incorrect. Chalk it up to the flu and fever! Sorry.

0.00625 == 0.625%, or about twice the share of Rust code. However, as stated above, these are just the numbers from one patch cycle.

Comment by uecker 3 hours ago

It wasn't me trying to conclude anything from insufficient data.

Comment by taproottap 14 hours ago

It would probably have to be normalized to something slightly different, as the lines of code needed for a feature vary by language. But even given the sad state of CVE quality, I would certainly prefer a language that deflects CVEs for a kernel that runs both in places with no updates and in places with forced updates for every relevant or irrelevant CVE.

Comment by thrance 14 hours ago

To be actually fair, you should probably only look at CVEs concerning new-ish code.

Comment by accelbred 6 hours ago

The kernel policy for CVEs is any patch that is backported, no? So this is just the first Rust patch, post being non-experimental, that was backported?

Comment by K0nserv 16 hours ago

Isn’t it obvious that the primary source of CVEs in Rust programs would be the portions of the program where the human is in charge of correctness instead of the compiler?

The relevant question is whether it results in fewer and less severe CVEs than code written in C. So far the answer seems to be a resounding yes.

Comment by Hemospectrum 15 hours ago

It is not obvious to those who refuse to understand, and who preemptively reject case studies on the grounds that the numbers are surely fabricated.

Comment by woodruffw 16 hours ago

"Cause" seems unsubstantiated: I think to justify "cause," we'd need strong evidence that the equivalent bug (or worse) wouldn't have happened in C.

Or another way to put it: clearly this is bad, and unsafe blocks deserve significant scrutiny. But it's unclear how this would have been made better by the code being entirely unsafe, rather than a particular source of unsafety being incorrect.

Comment by ganelonhb 16 hours ago

The definition of cause is quite clear. “Rust” is obviously not the cause, but it did fail to be the solution, here. You can’t avoid that.

Comment by bigstrat2003 16 hours ago

But it didn't promise to be the solution either. Rust has never claimed, nor have its advocates claimed, that unsafe Rust can eliminate memory bugs. Safe Rust can do that (assuming any unsafe code relied upon is sound), but unsafe cannot be and has never promised to be bug free.

Comment by woodruffw 16 hours ago

Except that it didn't fail to be the solution: the bug is localized to an explicit escape hatch in Rust's safety rules, rather than being a latent property of the system.

(I think the underlying philosophical disagreement here is this: I think software is always going to have bugs, and that Rust can't perfectly eliminate them - and doesn't promise to. Instead, what Rust does promise - and deliver on - is that the entire class of memory safety bugs can be eliminated by construction in safe Rust, and localized, when present, to errors in unsafe Rust. Insofar as that's the promise, Rust has delivered here.)

Comment by uecker 15 hours ago

You can label something an "explicit escape hatch" or a "latent property of the system", but in the end such labels are irrelevant. While I agree that it may be easier to review unsafe blocks in Rust compared to reviewing pointer arithmetic, union accesses, and free in C because "unsafe" is a bit more obvious in the source, I think selling this as a game changer was always an exaggeration.

Comment by woodruffw 15 hours ago

Having written lots of C and C++ before Rust, this kind of local reasoning + correctness by construction is absolutely a game changer. It's just not a silver bullet, and efforts to miscast Rust as incorrectly claiming to be one seem heavy-handed.

Comment by tialaramex 12 hours ago

Google's feedback seems to suggest Rust actually might be a silver bullet, in the specific sense meant in the "No Silver Bullet" essay.

That essay doesn't say that silver bullets are a panacea or cure-all; instead, they're a decimal order of magnitude improvement. The essay gives the example of structured programming, an idea which feels so obvious to us today that it goes unspoken. But it's really true that once upon a time people wrote unstructured programs (today the only "language" where you could even do this is assembly, and nobody does), where you just jump arbitrarily to unrelated code and resume execution. The result is fucking chaos, and languages where you never do that delivered a huge improvement even before I wrote my first line of code in the 1980s.

Google did find that sort of effect in Rust over C++.

Comment by uecker 4 hours ago

As a scientist, I would not trust self reports from the industry too much. Even if those are honest, there are too many things that could bias this.

Comment by crote 2 hours ago

Obviously. If you use a language which inherently makes memory safety bugs in regular code impossible, all memory safety bugs will be contained to the "trust me, I know what I'm doing - no need to check this" bypass sections. Similarly, all drownings happen in the presence of water.

The important thing to remember is that in this context C code is one giant unsafe {} block, and you're more likely to drown in the sea than in a puddle.

Comment by n2d4 16 hours ago

Sure, but that's not really that interesting or controversial.

The more useful question is, how many CVEs were prevented because unsafe {} blocks receive more caution and scrutiny?

Comment by themafia 12 hours ago

If you could find a way to actually measure that it would be useful. I doubt this is actually achievable in our Universe.

If all of C is effectively "unsafe" then wouldn't it receive the _most_ scrutiny?

Since this didn't work, I don't understand Rust's overall strategy.

Comment by goku12 8 hours ago

That's not how it works. A larger codebase to scrutinize means more chance of missing a memory safety bug. If you can keep the Rust unsafe blocks bug-free, you don't need to worry about memory safety anywhere in safe Rust. They're talking about attention getting divided all over the code where this distinction doesn't exist (as in C code). They always have been.

On top of that, there is something else they say. You have to uphold the invariants inside the unsafe blocks. Rust for Linux documents these invariants as well. The invariant was wrong in this case. The reason I mention this is because this practice has forced even C developers to rethink and improve their code.

Rust specifies very clearly what sort of error it eliminates and where it does that. It reduces the surface area of memory safety bugs to unsafe blocks, and gives you clear guidelines on what you need to ensure manually within the unsafe block to avoid any memory safety bugs. And even when you make a human error in that task, Rust makes it easy to identify them.

There are clear advantages here in terms of the effort required to prevent memory safety bugs, and in making your responsibilities explicit. This has been their claim consistently. Yet, I find that these have to be repeated in every discussion about Rust. It feels like some critics don't care about these arguments at all.

Comment by samdoesnothing 16 hours ago

If Rust is so inflexible that it requires the use of unsafe to solve problems, that's still Rust's fault. You have to consider both safe Rust behaviour and the necessary unsafe code.

Comment by woodruffw 16 hours ago

This is sort of the exact opposite of reality: the point of safe Rust is that it's safe so long as Rust's invariants are preserved, which all other safe Rust preserves by construction. So you only need to audit unsafe Rust code to ensure the safety of a Rust codebase.

(The nuance being that sometimes there's a lot of unsafe Rust, because some domains - like kernel programming - necessitate it. But this is still a better state of affairs than having no code be correct by construction, which is the reality with C.)

Comment by gkbrk 16 hours ago

Which domain doesn't necessitate unsafe? Any large Rust project I check has tons of unsafe in its dependency tree.

Comment by woodruffw 16 hours ago

I've written lots of `forbid(unsafe_code)` in Rust; it depends on where in the stack you are and what you're doing.

But as the adjacent commenter notes: having unsafe is not inherently a problem. You need unsafe Rust to interact with C and C++, because they're not safe by construction. This is a good thing!

Comment by informa23 16 hours ago

[flagged]

Comment by woodruffw 16 hours ago

I think unsafe Rust is harder to write than C. However, that's because unsafe Rust makes you think about the invariants that you'd need to preserve in a correct C program, so it's no harder to write than correct C.

In other words: unsafe Rust is harder, but only in an apples-and-oranges sense. If you compare it to the same diligence you'd need to exercise in writing safer C, it would be about the same.

Comment by jackrabbit1997 16 hours ago

How would you describe the aliasing requirements of C and Rust, and do you consider them the same, as well as equally difficult?

Comment by woodruffw 15 hours ago

Safe Rust has more strict aliasing requirements than C, so to write sound unsafe Rust that interoperates with safe Rust you need to do more work than the equivalent C code would involve. But per above, this is the apples-and-oranges comparison: the equivalent C code will compile, but is statistically more likely to be incorrect. Moreover, it's going to be incorrect in a way that isn't localizable.

Comment by jackrabbit1997 9 hours ago

Are you using an LLM to write your posts?

Comment by woodruffw 9 hours ago

No, that’s just how I write. Do you normally insinuate from green accounts?

Comment by gpm 15 hours ago

> in its dependency tree.

Ultimately every program depends on things beyond any compiler's ability to verify: for example, that calls into code not written in that language are correct, or, even more fundamentally, if you're writing some embedded program that has no foreign-code interfaces at all, that the silicon (both the parts handling IO and the parts doing the computation) is correct.

The promise of rust isn't that it can make this fundamentally non-compiler-verifiable (i.e. unsafe) dependency go away, it's that you can wrap the dependency in abstractions that make it safe for users of the dependency if the dependency is written correctly.

In most domains Rust doesn't necessitate writing new unsafe code; you rely on the existing unsafe code in your dependencies, which is shared, battle tested, and reasonably scoped. This is all Rust, or any programming language, can promise. The demand that the dependency tree contain no unsafe isn't the same as the domain necessitating no unsafe; it's the impossible demand that writing the low-level abstractions every domain relies on shouldn't need unsafe.

Comment by bigstrat2003 16 hours ago

Almost all of them. It would be far shorter to list the domains which require unsafe. If you're seeing programmers reach for unsafe in most projects, either you're looking at a lot of low level hardware stuff (which does require unsafe more often than not), or you are seeing cases where unsafe wasn't required but the programmer chose to use it anyway.

Comment by treyd 16 hours ago

And that is fine, because those upstream deps can locally ensure that those sections are correct, without any risk that some unrelated code might unsafely misuse them. There is an actual rigorous mathematical proof of this. You have no such guarantees in C/C++.

Comment by informa23 16 hours ago

[flagged]

Comment by treyd 16 hours ago

> And a bug in one crate can cause UB in another crate if that other crate is not designed well and correctly.

Yes! Failure to uphold invariants of the underlying abstract model in an unsafe block breaks the surrounding code, including other crates! That's exactly consistent with what I said. There's nothing special about the stdlib. Like all software, it can have bugs.

What the proof states is that two independently correct blocks of unsafe code cannot, when used together, be incorrect. So the key value there is that you only have to reason about them in isolation, which is not true for C.

Comment by aw1621107 16 hours ago

I think you're misunderstanding GP. The claim is that the only party responsible for ensuring correctness is the one providing a safe API to unsafe functionality (the upstream dependency in GP's comment). There's no claim that upstream devs are infallible, nor that the consequences of a mistake are necessarily bounded.

Comment by dijit 16 hours ago

Those guys were writing a lot of unsafe rust and bumped into UB.

I sound like an apologist, but the Rust team stated that "memory safety is preserved as long as Rust's invariants are." That feels really clear, yet people keep missing this point, almost as if it's a gotcha that unsafe Rust behaves in the same memory-unsafe way as C/C++, when that's exactly the point.

Your verification surface is smaller and has a boundary.

Comment by mlindner 15 hours ago

Ultimately all software has to touch hardware somewhere. There is no way to verify that the hardware always does what it is supposed to, because reality is not a computer. At the bottom of every dependency tree in any Rust code there always has to be unsafe code. But because Rust is the way it is, those interfaces are the only places you need to check for incorrectly written code. Everywhere else that is just using safe code is automatically correct, as long as the unsafe code was correct.

Comment by samdoesnothing 16 hours ago

It's just moving the goalposts. "If it compiles it works" to "it eliminates all memory bugs" to "well, it's safer than c...".

If Rust doesn't live up to its lofty promises, then it changes the cost-benefit analysis. You might give up almost anything to eliminate all bugs, a lot to eliminate all memory bugs, but what would you give up to eliminate some bugs?

Comment by woodruffw 16 hours ago

Can you show me an example of Rust promising "if it compiles it works"? This seems like an unrealistic thing to believe, and I've never heard anybody working on or in Rust claim that this is something you can just provide with absolute confidence.

The cost-benefit argument for Rust has always been mediated by the fact that Rust will need to interact with (or include) unsafe code in some domains. Per above, that's an explicit goal of Rust: to provide sound abstractions over unsound primitives that can be used soundly by construction.

Comment by qcnguy 2 hours ago

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Examples:

6 days ago: Their experience with Rust was positive for all the commonly cited reasons - if it compiles it works

8 days ago: I have to debug Rust code waaaay less than C, for two reasons: (2) Stronger type system - you get an "if it compiles it works" kind of experience

4 months ago: I've been writing Rust code for a while and generally if it compiles, it works.

5 months ago: If it’s Rust, I can just do stuff and I’ve never broken anything. Unit tests of business logic are all the QA I need. Other than that, if it compiles it works.

9 months ago: But even on a basic level Rust has that "if it compiles it works" experience which Go definitely doesn't.

Some people claim that the quote is hyperbolic because it only covers memory errors. But this bug is a memory error, so ...

Comment by aw1621107 1 hour ago

> Examples:

GP isn't asking for examples of just anyone making that statement. They're asking for examples of Rust making that promise. Something from the docs or the like.

> Some people claim that the quote is hyperbolic because it only covers memory errors. But this bug is a memory error, so ...

It's a memory error involving unsafe code, so it would be out of scope for whatever promises Rust may or may not have made anyways.

Comment by treyd 15 hours ago

> Can you show me an example of Rust promising "if it compiles it works"? [...] and I've never heard anybody working on or in Rust claim that this is something you can just provide with absolute confidence.

I have heard it and I've stated it before. It's never stated in absolute confidence. As I said in another thread, if it was actually true, then Rust wouldn't need an integrated unit testing framework.

It's referring to the experience that Rust learners have, especially when writing relatively simple code, that it tends to be hard to misuse libraries in a way that looks correct and compiles but actually fails at runtime. Rust cannot actually provide this guarantee; it's impossible in any language. However, there are a lot of common simple tasks (where there's not much complex internal logic that could be subtly incorrect) where the interfaces provided by the libraries they're depending on are designed to leverage the type system such that it's difficult to accidentally misuse them.

Take something like initializing an HTTP client: the interfaces make it impossible to obtain an improperly initialized client instance. This is an especially distinct feeling if you're used to dynamic languages, where you often have no assurance at all that you didn't typo a field name.

Comment by kstrauser 14 hours ago

I've also said it, with the implication that the only remaining bugs are likely to be ones in my own logic. Like, suppose I'm writing a budget app and haven't gone to the lengths of making Debit and Credit their own types. I can still accidentally subtract a debit from a balance instead of adding to it. But unless I've gone out of my way to work around Rust's protections, e.g. with unsafe, I know that parts of my code aren't randomly mutating immutables, or opening up subtle use-after-free situations, etc. Now I can spend all my time concentrating on the program's logic instead of tracking those other thousands of gotchas.

Comment by fwip 16 hours ago

I've seen (and said) "if it compiles it works," but only when preceded by softening statements like "In my experience," or "most of the time." Because it really does feel like most of the time, the first time your program compiles, it works exactly the way you meant it to.

I can't imagine anybody seriously making that claim as a property of the language.

(edit: fixed a comma and a forgotten word)

Comment by woodruffw 16 hours ago

Yeah, I think the experiential claim is reasonable. It's certainly my experience that Rust code that compiles is more confidence-inspiring than Python code that syntax-checks!

Comment by Pet_Ant 16 hours ago

It's not moving the goalposts at all. I'm not a Rust programmer, but for years the message has been the same. It's been monotonous and tiring, so I don't know why you think it's new.

Safe Rust code is safe. You know where unsafe code is, because it's marked as unsafe. Yes, you will need some unsafe code in any notable project, but at least you know where it is. If you don't babysit your unsafe code, you get bad things. Someone didn't do the right thing here, and I'm sure there will be a post-mortem and lessons learned.

To be comparable, imagine in C you had to mark potentially UB code with ub{} to compile. Until you get that, Rust is still a clear leader.

Comment by thayne 16 hours ago

That's like saying that if C is so inflexible it requires the use of inline assembly to solve problems, it's C's fault when inline assembly causes undefined behavior.

Comment by bigstrat2003 16 hours ago

> If rust is so inflexible that it requires the use of unsafe to solve problems...

Thankfully, it doesn't. There are very few situations which require unsafe code, though a kernel is going to run into a lot of those by virtue of what it does. But the vast majority of the time, you can write Rust programs without ever once reaching for unsafe.

Comment by ncruces 1 hour ago

It's not a kernel. It's the, admittedly very complicated, concurrent doubly linked list. I say that with no irony.

Comment by kstrauser 16 hours ago

What's the alternative that preserves safe-by-default while still allowing unlimited flexibility to accidentally break things? I mean, Rust allows inline assembly because there are situations where you absolutely must execute specific opcodes, but darned if I want that to be the common case.

Comment by lynndotpy 16 hours ago

Yes. When writing unsafe, you have to assume you can never trust anything coming from safe rust. But you are also provided far fewer rakes to step on when writing unsafe, and you (ideally) are writing far fewer lines of unsafe code in a Rust project than you would for equivalent C.

Rust is written in Rust, and we still want to be able to e.g. call C code from Rust. (It used to be the case that external C code was not always marked unsafe, but this was fixed recently).

Comment by torginus 14 hours ago

Sorry, but this is like saying 'when I am not wrong, I am right 100% of the time'.

The devs didn't write unsafe Rust to experience the thrills of living dangerously, they wrote it because the primitives were impossible to express in safe Rust.

If I were to write a program in C++ that has a thread-safe doubly linked list in it, I'd be willing to bet that the linked list will have safety bugs, not because C++ is an unsafe language, but because multi-threading is hard. In fact, I believe most memory safety errors today occur in the presence of multi-threading.

Rust doesn't offer me any way of making sure my code is safe in this case, I have to do the due diligence of trying my best and still accept that bugs might happen because this is a hard problem.

The difference between Rust and C++ in this case, is that the bad parts of Rust are cordoned off with glowing red lines, while the bad parts of C++ are not.

This might help me in minimizing the attack surface in the future, but I suspect Rust's practical benefits will end up less impactful than advertised, even when the language is fully realized and at its best, because most memory safety issues occur in code that cannot be expressed in safe Rust, and doing it the safe Rust way is not feasible for some technical reason.

Comment by Phelinofist 16 hours ago

I know nothing about Rust. But why is unsafe needed? Kinda sounds like a lock would make this safe?

Comment by aw1621107 16 hours ago

> I know nothing about Rust. But why is unsafe needed?

The short of it is that for fundamental computer science reasons the ability to always reject unsafe programs comes at the cost of sometimes being unable to verify that an actually-safe program is safe. You can deal with this either by accepting this tradeoff as it is and accepting that some actually-safe programs will be impossible to write, or you can add an escape hatch that the compiler is unable to check but allows you to write those unverifiable programs. Rust chose the latter approach.

> Kinda sounds a lock would make this safe?

There was a lock, but it looks like it didn't cover everything it needed to.

Comment by dijit 16 hours ago

I think you missed the parent's point. We all universally acknowledge the need for the unsafe{} keyword in general; what the parent is asking is: given the constraint of a lock, could this code not have obviated the need for an unsafe block entirely, thus rendering the memory-safety issue impossible?

Comment by aw1621107 16 hours ago

Ah, I see that interpretation now that you spelled it out for me.

Here's what `List::remove` says on its safety requirements [0]:

    /// Removes the provided item from this list and returns it.
    ///
    /// This returns `None` if the item is not in the list. (Note that by the safety requirements,
    /// this means that the item is not in any list.)
    ///
    /// # Safety
    ///
    /// `item` must not be in a different linked list (with the same id).
    pub unsafe fn remove(&mut self, item: &T) -> Option<ListArc<T, ID>> {
At least if I'm understanding things correctly, I don't think that that invariant is something that locks can protect in general. I can't say I'm familiar enough with the code to say whether some other code organization would have eliminated the need for the unsafe block in this specific case.

[0]: https://github.com/torvalds/linux/blob/3e0ae02ba831da2b70790...

Comment by Phelinofist 16 hours ago

Yes, that is what I meant - thanks for actually expressing my thoughts better than me.

Comment by n2d4 16 hours ago

I recommend you read Greg Kroah-Hartman's thread instead of this article: https://social.kernel.org/notice/B1JLrtkxEBazCPQHDM

    > Rust is not a "silver bullet" that can solve all security problems, but it sure helps out a lot and will cut out huge swaths of Linux kernel vulnerabilities as it gets used more widely in our codebase.
    
    > That being said, we just assigned our first CVE for some Rust code in the kernel: https://lore.kernel.org/all/2025121614-CVE-2025-68260-558d@gregkh/ where the offending issue just causes a crash, not the ability to take advantage of the memory corruption, a much better thing overall.

    > Note the other 159 kernel CVEs issued today for fixes in the C portion of the codebase, so as always, everyone should be upgrading to newer kernels to remain secure overall.

Comment by anonnon 12 hours ago

[flagged]

Comment by jackrabbit1997 15 hours ago

> > That being said, we just assigned our first CVE for some Rust code in the kernel: https://lore.kernel.org/all/2025121614-CVE-2025-68260-558d@g... where the offending issue just causes a crash, not the ability to take advantage of the memory corruption, a much better thing overall.

That indicates that Greg Kroah-Hartman has a very poor understanding of Rust and the _unsafe_ keyword. The bug can, in fact, exhibit undefined behavior and memory corruption.

His lack of understanding is unfortunate, to put it very mildly.

Comment by n2d4 15 hours ago

What are some compiler flags that would compile the code such that an attacker could take advantage? And what would the attack be?

Or is this just a theoretical argument, "it is hypothetically possible to create a technically-spec-compliant Rust compiler that would compile this into dangerous machine code"? If so it should still be fixed of course, but if I'm patching my Linux kernel I'd rather know what the practical impact is.

Comment by jackrabbit1997 9 hours ago

[flagged]

Comment by aw1621107 13 hours ago

To play a bit of devil's advocate, I don't think the problem is necessarily with the compiler output. It's more that it's not always easy to definitively state the precise consequences of a particular issue, especially when it comes to memory safety-/UB-related issues. For example, consider this Project Zero writeup about using a single NUL byte buffer overflow as part of a root privilege exploit [0] despite some skepticism about whether that overflow was actually exploitable.

To be fair, I'm not saying that Greg KH is definitely wrong; I'm only willing to claim that in the general case observing crashes due to corrupted pointers does not necessarily mean that there's no ability to actually exploit said corruption. Actual exploitability will depend on other factors as well, and I'm far from knowledgeable enough to say anything on the matter.

[0]: https://projectzero.google/2014/08/the-poisoned-nul-byte-201...

Comment by drob518 17 hours ago

Anybody who thought the simple action of rewriting things in Rust would eliminate all bugs was hopelessly naive. Particularly since Rust allows unsafe operations. That doesn’t mean Rust provides no value over C, just that the value is short of total elimination of bugs. Which was never advertised as the value to begin with.

Comment by tensor 17 hours ago

What? I think people think "rust without unsafe" eliminates certain classes of bugs. Are we really going to imply that people don't understand that "unsafe" labeled code is ... uh.. possibly unsafe? I don't believe that these mythical "naive" people exist who think code explicitly labelled unsafe is still safe.

Comment by hnlmorg 17 hours ago

I think the problem lies with the fact that you cannot write kernel code without relying on unsafe blocks of code.

So arguably both camps are correct. Those who advocate Rust rewrites, and those who are against it too.

Comment by jeroenhd 3 hours ago

I don't think this code needed to be unsafe. This code doesn't involve any I/O or kernel pointer/CPU register management; it's just modifying a list.

I'm sure the people who wrote this code had their reasons to write the code like this (probably simplicity or performance), but this type of kernel code could be done safely.

Comment by aw1621107 1 hour ago

As pointed out by yuriks [0], it seems the patch authors are interested in looking into a safer solution [1]:

> The previous patches in this series illustrate why the List::remove method is really dangerous. I think the real takeaway here is to replace the linked lists with a different data structure without this unsafe footgun, but for now we fix the bugs and add a warning to the docs.

[0]: https://news.ycombinator.com/item?id=46307357

[1]: https://news.ycombinator.com/item?id=46307357

Comment by ViewTrick1002 13 hours ago

You can’t write Rust code without relying on unsafe code. Much of the standard library contains unsafe code, parts of which have been formally verified.

I would presume the ratio of safe to unsafe code leads to less unsafe code being written over time, as the full "kernel standard library" gets built out, allowing all other parts to replace their hand-rolled implementations with the standard one.

Comment by bangaladore 17 hours ago

I think part of the problem is people start thinking that unsafe code with a SAFETY comment nearby is probably safe.

Then the safety comment can easily bias the reader into believing that the author has fully understood the problem and all edge cases.

Comment by webstrand 17 hours ago

The SAFETY comment is just a brief description of the important points the author considered when writing the block, and perhaps points you need to consider if you modify it. Do people just blindly assume that comments in an algorithm are correct and not misleading? In other languages they don't; I don't see why Rust would be any different.

Comment by bangaladore 11 hours ago

A SAFETY comment is supposed to justify why the unsafe code is sound. Here it justified the wrong thing. Ownership was not the problem, concurrent mutation was. That is exactly the kind of gap a SAFETY comment can hide by giving a false sense that the hard parts were already considered.

The fact that this survived review is the worrying part. Unsafe blocks are intentionally small and localized in Rust precisely so the safety argument can be checked. If the stated safety argument is incomplete and still passes review, that suggests reviewers are relying on the comment as the proof, rather than rederiving the invariants themselves. Unless, of course, the wrong people are reviewing these changes. Why rewrite in Rust if we don't apply extreme scrutiny to the tiny subset (presumably) that should be scrutinized?

To be clear, I think this is a failure of process, not Rust of course.

Comment by aw1621107 8 hours ago

> Ownership was not the problem, concurrent mutation was.

I think the safety comment might have been more on-point than you think. If you look at the original code, it did something like:

- Take a lock
- Swap a `Node`'s `death_list` (i.e., a list of `NodeDeath`s) with an empty one
- Release the lock
- Iterate over the taken `death_list`

While in another thread, you have a `NodeDeath`:

- Take a lock
- Get its parent's `death_list`
- Remove itself from said list
- Release the lock

The issue is what happens when a `NodeDeath` from the original list tries to remove itself after the parent Node swapped its `death_list`. In that case, the `NodeDeath` grabs the replacement list from its parent node, and the subsequent attempt to remove itself from the replacement list violates the precondition in the safety comment.
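
To make the interleaving concrete, here's a toy, single-threaded sketch of the logic error, with a `Vec` standing in for the kernel's intrusive list (all names here are illustrative, not the actual Binder types). In this model the stale removal merely fails to find the item; in the real intrusive list, `remove` blindly unlinks prev/next pointers, so acting on the wrong list corrupts memory instead of failing gracefully:

```rust
use std::mem;
use std::sync::Mutex;

// Toy stand-in for a Binder Node; a Vec of ids models the death list.
struct Node {
    death_list: Mutex<Vec<u32>>,
}

impl Node {
    // Mirrors the buggy release(): take the whole list under the lock,
    // drop the lock, then iterate the local copy outside it.
    fn release(&self) -> Vec<u32> {
        let mut guard = self.death_list.lock().unwrap();
        mem::take(&mut *guard)
        // guard dropped here; the taken list is processed unlocked
    }

    // Mirrors set_cleared(): a NodeDeath removes itself from its
    // parent's *current* death_list.
    fn remove_death(&self, id: u32) -> bool {
        let mut guard = self.death_list.lock().unwrap();
        if let Some(pos) = guard.iter().position(|&d| d == id) {
            guard.remove(pos);
            true
        } else {
            // The item lives in the swapped-out list, not this one.
            false
        }
    }
}

fn main() {
    let node = Node { death_list: Mutex::new(vec![1, 2, 3]) };
    let taken = node.release(); // release() swaps out the list
    // Meanwhile, "another thread" tries to remove death 2 via the node:
    assert!(!node.remove_death(2)); // not found: it's in `taken`
    assert_eq!(taken, vec![1, 2, 3]); // release() still holds the stale copy
}
```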

> Why rewrite in Rust if we don't apply extreme scrutiny to the tiny subset (presumably) that should be scrutinized.

That "extreme scrutiny" was applied does not guarantee that all possible bugs will be found. Humans are only human, after all.

Comment by Kinrany 16 hours ago

> unsafe code with a SAFETY comment nearby

That's roughly 100% of unsafe code because a lint in the compiler asks for it.

Comment by zamalek 17 hours ago

> Anybody who thought the simple action of rewriting things in Rust would eliminate all bugs was hopelessly naive

"All bugs" is a strawman typically only used by detractors. The correct claim is: safe Rust eliminates certain classes of bugs. I'd wager the design of std eliminates more (e.g. the different string types), but that doesn't really apply to the kernel.

Comment by samdoesnothing 16 hours ago

> "All bugs" is a strawman typically only used by detractors. The correct claim is: safe Rust eliminates certain classes of bugs. I'd wager the design of std eliminates more (e.g. the different string types), but that doesn't really apply to the kernel.

Which is either 1) not true as evidenced by this bug or 2) a tautology whereby Rust eliminates all bugs that it eliminates.

Comment by drob518 16 hours ago

I think the answer is #2, the tautology. But just because it’s a tautology doesn’t mean it’s a worthless thing to say. I think it’s also true, for instance (a corollary), that Rust eliminates more types of bugs than C does. And that may be valuable even if it does not mean that Rust eliminates all bugs.

Comment by PartiallyTyped 16 hours ago

>> safe Rust

> 1) not true as evidenced by this bug

Code used unsafe, putting us out of "safe" rust.

Comment by samdoesnothing 16 hours ago

> Anybody who thought the simple action of rewriting things in Rust would eliminate all bugs was hopelessly naive.

Classic Motte and Bailey. It's often said of Rust that "if it compiles, it runs". When that is obviously not the case, Rust evangelicals claim nobody actually means that and that Rust just eliminates memory bugs. And when even that isn't true, they try to mischaracterize it as "all bugs" when, no, people are expecting it to eliminate all memory bugs, because that's what Rust people claim.

Comment by anon-3988 16 hours ago

> Classic Motte and Bailey. Rust is often said "if it compiles it runs".

That claim is overly broad, but it's a huge, huge part of it. There's no amount of computer science or verification that can prevent a human from writing the wrong software or specification (`let plus_a_b = a - b`, or "why did you give me an orange when I wanted an apple"). Unsafe Rust is markedly different from safe-by-default Rust. This is akin to claiming that C is buggy or broken because people write broken inline ASM. If C can't deal with broken inline ASM, then why bother with C?

Comment by tialaramex 11 hours ago

Yeah. I spent many years getting paid to write C, these days I don't write C (even for myself) but I do write Rust.

I write bugs, because I'm human, and Rust's compiler sure does catch a lot more of my bugs than GCC used to when I was writing C all day.

Stronger typing is a big part of why this happens. For example, in C it's perfectly usual to use the "int" type for a file descriptor, a count of items in some container, and a timeout (in seconds? milliseconds? who knows). We could do better, but we usually don't.

In idiomatic Rust everybody uses three distinct types: OwnedFd, usize and Duration. As a result, while arithmetic on ints must always work in C, the Rust compiler knows that it's reasonable to add two Durations together, that it's nonsense to add a Duration to a size, and that all arithmetic is inappropriate for OwnedFd. Further, it's not reasonable to multiply two Durations together; a Duration multiplied by an integer makes sense (and likewise the other way around), but 5 seconds multiplied by 80 milliseconds is nonsense.
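
A quick sketch of what the compiler accepts and rejects for `std::time::Duration` (the rejected lines are shown as comments; uncommenting them produces compile errors):

```rust
use std::time::Duration;

fn main() {
    let timeout = Duration::from_secs(5);
    let grace = Duration::from_millis(80);

    // Adding two Durations makes sense, and compiles:
    let total = timeout + grace;
    assert_eq!(total, Duration::from_millis(5080));

    // Scaling a Duration by an integer also makes sense:
    let doubled = timeout * 2;
    assert_eq!(doubled, Duration::from_secs(10));

    // These are rejected at compile time:
    // let nonsense = timeout * grace; // can't multiply Duration by Duration
    // let mixed = timeout + 5usize;   // can't add a bare integer to a Duration
}
```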

Comment by AnIrishDuck 16 hours ago

> Classic Motte and Bailey.

For this to be a "classic motte and bailey" you will need to point us to instances where _the original poster_ suggested these (the "bailey", which you characterize as "rust eliminates all bugs") things.

It instead appears that you are attributing _other comments_ to the OP. This is not a fair argumentation technique, and could easily be turned against you to make any of your comments into a "classic motte and bailey".

Comment by themafia 16 hours ago

> That doesn’t mean Rust provides no value over C

The real question is "does it provide this greater value for _less_ effort?"

The answer seems to be: "No."

Comment by drob518 16 hours ago

No, the real question is whether it provides that greater value for a reasonably acceptable, commensurate effort. Looking for greater value with less effort is looking for a free lunch, and we all know TANSTAAFL.

Comment by themafia 14 hours ago

> reasonably acceptable

Have fun defining that in an open source project.

> Looking for greater value with less effort is looking for a free lunch

If you have to switch languages to get that value, then no, this has nothing to do with free lunches.

> and we all know TANSTAAFL.

The church of the acronym. I do not share your apparent faith. Engineering requires you to actually do the work and not rely on simple aphorisms to make decisions.

Comment by timeon 13 hours ago

> The answer seems to be: "No."

It is actually "Yes."

Comment by phendrenad2 16 hours ago

I feel like everyone involved in the Linux Kernel Rust world is ironically woefully unaware of how Rust actually works, and what it's actually capable of. I suspect that Rust gurus agree with me, but don't want to say anything because it would hurt Rust adoption in places where it actually is helpful (encryption algorithms...)

Kernels - and especially the Linux kernel - are high-performance systems that require lots of shared mutable state. Every driver is a glorified while loop waiting for an IRQ so it can copy a chunk of data from one shared mutable buffer to another shared mutable buffer. So there will need to be some level of unsafe in the code.

There's a fallacy that if 95% of the code is safe, and 5% is unsafe, then that code is only 5% as likely to contain memory errors as a comparable C program. But, to reiterate what another commenter said, and something I've predicted for a long time, the tendency for the "unsafe block" to become instrumented by the "safe block" will always exist. People will loosen the API contract between the "safe" and "unsafe" sides until an error in the "safe" side kicks off an error in the "unsafe" side.

Comment by bronson 16 hours ago

> Every driver is a glorified while loop waiting for an IRQ

This is so obviously false that I suspect that's the reason you don't see any Rust gurus agreeing with you.

Drivers do lots of resource and memory management, far more than just spinning on IRQs.

Comment by infamouscow 15 hours ago

I should probably ask what experience you have writing hardware drivers for the Linux kernel, but it's pretty obvious the answer is: none. I actually burst out laughing reading your comment; it's ridiculous.

My anecdotal experience interviewing big tech engineers that used Rust reflects GP's hunch about this astonishing experience gap. Just this year, 4/4 candidates I interviewed couldn't give me the correct answer for what two bytes in base 2 represented in base 10. Not a single candidate asked me about the endianness of the system.

Now that Rust in the kernel doesn't have an "experimental" escape hatch, these motte-and-bailey arguments aren't going to work. Ultimately, I think this is a good thing for Rust in the kernel. Once all of the idiots and buffoons have been sufficiently derided and ousted from public discourse (deservedly so), we can finally begin having serious and productive technical discussions about how to make C and Rust interoperate in the kernel.

Comment by emil-lp 2 hours ago

When you say "base 10", is that "10" written in big-endian or little-endian?

It's as if there's a convention of sorts to how we write numbers (regardless of base).

If you don't state endianness in your exercise, one should assume the convention is followed.

Comment by bronson 14 hours ago

You're saying you believe every Linux driver actually is a glorified while loop?

I guess it makes sense you're having trouble hiring qualified candidates.

Comment by bangaladore 17 hours ago

Correct me if I'm wrong, but this comment is at least partially incorrect, right?

> Since it was in an unsafe block, the error for sure was way easier to find within the codebase than in C. Everything that's not unsafe can be ruled out as a reason for race conditions and the usual memory handling mistakes - that's already a huge win.

The benefit of Rust is you can isolate the possible code that causes an XYZ to an unsafe block*. But that doesn't necessarily mean the error shown is directly related to the unsafe block. Like C++, triggering undefined behavior can in theory cause the program to do anything, including fail spectacularly within seemingly unrelated safe code.

* Excluding cases where safe things are actually possibly unsafe (like some incorrectly marked FFI)

Comment by landr0id 17 hours ago

From my experience UB in Rust can manifest a bit differently than in C or C++, but still generally has enough smoke in the right area.

I believe their point was that they only needed to audit only the unsafe blocks to find the actual root cause of the bug once they had an idea of the problematic area.

Comment by tialaramex 16 hours ago

I guess the problem here is that you and the writer have different understandings of what "the error" means.

The author is thinking about "the error" as some source code that's incorrect. "Your error was not bringing gloves and a hat to the snowball fight" but you're thinking "the error" is some diagnostic result that shows there was a problem. "My error is that I'm covered in freezing snow".

Does that help?

Comment by malcolmgreaves 17 hours ago

The point at which you _could_ start to have undefined behavior is within an `unsafe` block or function. So even if the "failure" occurred in some "safe" part of the code, the conditions to make that failure would start in the unsafe code.

When debugging, we care about where the assumptions we had were violated. Not where we observe a bad effect of these violated assumptions.

I think you get here yourself when you say:

> triggering undefined behavior can in theory cause the program to do anything, including fail spectacularly within seemingly unrelated safe code

The bug isn't where it failed spectacularly. It's where the C++ code triggered undefined behavior.

Put another way: if the undefined behavior _didn't_ cause a crash / corrupted data, the bug _still_ exists. We just haven't observed any bad effects from it.

Comment by lousken 16 hours ago

Why do they allow unsafe parts in linux kernel in the first place? Why rewriting C code into unsafe rust?

Comment by K0nserv 16 hours ago

It's important to note that the `unsafe` keyword is poorly named. What it does is unlock a few more capabilities, at the cost of the programmer having to uphold the invariants the spec requires. It should really be called "assured" or something. The programmer is taking the wheel from the compiler and promising to drive safely.

As for why there is unsafe in the kernel? There are things, especially in a kernel, that cannot be expressed in safe Rust.

Still, having smaller sections of unsafe is a boon because you isolate these locations of elevated power, meaning they are auditable and obvious. Rust also excels at wrapping unsafe in safe abstractions that are impossible to misuse. A common comparison point is that in C your entire program is effectively unsafe, whereas in Rust it's a subset.
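
A minimal sketch of that "safe abstraction over unsafe" pattern (an illustrative type, not a real kernel API): the only constructor checks an invariant once, after which the one unsafe line can rely on it and callers can't violate it.

```rust
// A slice wrapper whose constructor proves non-emptiness, so the
// unsafe fast path inside `first` is sound by construction.
struct NonEmpty<'a, T> {
    slice: &'a [T],
}

impl<'a, T> NonEmpty<'a, T> {
    // The only way to obtain a NonEmpty checks the invariant once.
    fn new(slice: &'a [T]) -> Option<Self> {
        if slice.is_empty() {
            None
        } else {
            Some(NonEmpty { slice })
        }
    }

    fn first(&self) -> &T {
        // SAFETY: `new` rejected empty slices, so index 0 is in bounds.
        unsafe { self.slice.get_unchecked(0) }
    }
}

fn main() {
    let data = [10, 20, 30];
    let ne = NonEmpty::new(&data).unwrap();
    assert_eq!(*ne.first(), 10);

    let empty: &[i32] = &[];
    assert!(NonEmpty::new(empty).is_none());
}
```

The auditing story is the point: to check soundness you only need to read `new` and `first`, not every caller.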

Comment by informa23 16 hours ago

EDIT: Hacker News has limited my ability to respond. Please keep in mind that Rust has a large number of active fans, who may have biases for whatever reasons.

> Still, having smaller sections of unsafe is a boon because you isolate these locations of elevated power, meaning they are auditable and obvious.

The Rustonomicon makes it very clear that it is generally insufficient to only verify correctness of Rust-unsafe blocks. If the absence of UB in a Rust-unsafe block depends on Rust-not-unsafe code in the surrounding module, potentially the whole module has to be verified for correctness. And that assumes that the module has correct encapsulation, otherwise even more may have to be verified. And a single buggy change to Rust-not-unsafe code can cause UB, if a Rust-unsafe block somewhere depends on that code to be correct.

Comment by tialaramex 16 hours ago

Rust is very nice for encapsulation. C isn't great at that work, and of course it can't express the idea that whatever we've encapsulated is now safe to use this way; in C everything looks equally safe/unsafe.

Comment by informa23 16 hours ago

[flagged]

Comment by MyOutfitIsVague 16 hours ago

It's worth noting that "aliasing" in Rust and C typically mean completely unrelated things.

Strict aliasing in C roughly means that if you initialize memory as a particular type, you can only access it as that type or one of a short list of aliasable types like char. Rust has no such restriction, and no concept of strict aliasing like this. In Rust, "type aliasing" is allowed, so long as you respect size, alignment, and representability rules.

Aliasing safety in Rust roughly means that you cannot have an exclusive reference to an object while any other reference to that object is active (reality is a little more involved than that, but not a lot). C has no such rule.

It's very unfortunate that such similar names were given to these different concepts.
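
Both rules from the comment above can be shown in a few lines (the rejected borrow is left as a comment; uncommenting it produces error E0502):

```rust
fn main() {
    // Rust's aliasing rule: no exclusive (&mut) reference while any
    // other reference to the same object is still in use.
    let mut x = 1;
    let shared = &x;
    // let exclusive = &mut x; // error[E0502] if `shared` is used below
    assert_eq!(*shared, 1);
    let exclusive = &mut x; // fine here: `shared` is no longer used
    *exclusive += 1;
    assert_eq!(x, 2);

    // No C-style strict aliasing: reading a u8 buffer as a u32 is
    // allowed, provided size and validity rules hold (read_unaligned
    // also drops the alignment requirement).
    let bytes = [1u8, 0, 0, 0];
    let value = unsafe { (bytes.as_ptr() as *const u32).read_unaligned() };
    assert_eq!(value, u32::from_ne_bytes(bytes));
}
```

The equivalent of that last read through a `uint32_t*` into a `char` buffer would be undefined behavior under C's strict aliasing rule.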

Comment by tialaramex 14 hours ago

No. Aliasing is a single idea, an alias is another name for the same thing. The concept translates well from its usual English meaning.

The C "strict aliasing" rule is that with important exceptions the name for a thing of type T cannot also be an alias to a thing of type S, and char is an important exception. Linux deliberately switches off this rule.

Rust's rule is that there mustn't be mutable aliases. We will see why that's important in a moment.

Aliasing is an impediment to compiler optimisation. If you've been watching Matt's "Advent of Compiler Optimisation" videos (or reading the accompanying text), it's been covered a little bit there. Matt uses C and C++ in those videos, so if you're scared of Rust you needn't fear it in the AoCO.

But why mutation? Well, the optimisations concern modification. The optimiser does its job by rewriting what you asked for as something (possibly not something you could have expressed at all in your chosen language) that has the same effect but is faster or smaller. Rewrites which avoid "spilling" a register (writing its value to memory) often improve both size and speed of the software, but if there is aliasing then spilling will be essential because the other aliases are referring to the same memory. If there's no modification it doesn't matter, copies are all identical anyway.

Comment by speed_spread 16 hours ago

You need unsafe Rust for FFI - interfacing with the rest of the kernel, which is still C, uses raw pointers, has no generics, doesn't track ownership, etc. One day there might be enough Rust in the kernel to have pure-Rust subsystem APIs, which would no longer require unsafe blocks to use. This would reverse the requirements, as C would be a second-class citizen with these APIs (not that C would notice or care). How far Rust gets pushed remains to be seen, but it might take a long time to get there.

Comment by informa23 16 hours ago

[flagged]

Comment by techbrovanguard 8 hours ago

Posting this from a green account is just pathetic, my guy. Go outside, touch some grass.

Comment by informa23 16 hours ago

[flagged]

Comment by aw1621107 16 hours ago

Direct link to the mailing list entry at [0]. The fix for 6.19-rc1 is commit 3e0ae02ba831 [1]. The patch is pretty small (added some extra context since the function it's from is short):

        pub(crate) fn release(&self) {
            let mut guard = self.owner.inner.lock();
            while let Some(work) = self.inner.access_mut(&mut guard).oneway_todo.pop_front() {
                drop(guard);
                work.into_arc().cancel();
                guard = self.owner.inner.lock();
            }

    -       let death_list = core::mem::take(&mut self.inner.access_mut(&mut guard).death_list);
    -       drop(guard);
    -       for death in death_list {
    +       while let Some(death) = self.inner.access_mut(&mut guard).death_list.pop_front() {
    +           drop(guard);
                death.into_arc().set_dead();
    +           guard = self.owner.inner.lock();
            }
        }
And here is the unsafe block mentioned in the commit message with some more context [3]:

    fn set_cleared(self: &DArc<Self>, abort: bool) -> bool {
        // <snip>

        // Remove death notification from node.
        if needs_removal {
            let mut owner_inner = self.node.owner.inner.lock();
            let node_inner = self.node.inner.access_mut(&mut owner_inner);
            // SAFETY: A `NodeDeath` is never inserted into the death list of any node other than
            // its owner, so it is either in this death list or in no death list.
            unsafe { node_inner.death_list.remove(self) };
        }
        needs_queueing
    }
[0]: https://lore.kernel.org/linux-cve-announce/2025121614-CVE-20...

[1]: https://github.com/torvalds/linux/commit/3e0ae02ba831da2b707...

[2]: https://github.com/torvalds/linux/blob/3e0ae02ba831da2b70790...

[3]: https://github.com/torvalds/linux/blob/3e0ae02ba831da2b70790...

Comment by anon-3988 16 hours ago

The interesting part to me is that this bug does not necessarily happen in an unsafe block; only the fix happens in an unsafe block. I think the API should change to avoid this. Perhaps by forcing users to pass a lambda to do stuff instead of having to manually lock and drop?
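
A hedged sketch of what that lambda-shaped API could look like (illustrative names, a `Vec` standing in for the intrusive list): the list owner drives the drain itself, popping one item per lock acquisition - mirroring the actual fix's `pop_front` loop - so callers never hold a detached copy of the list.

```rust
use std::sync::Mutex;

// Hypothetical closure-based drain API: callers never lock, take, or
// unlock manually, so they can't observe a swapped-out list.
struct DeathList {
    inner: Mutex<Vec<u32>>,
}

impl DeathList {
    fn drain_with(&self, mut f: impl FnMut(u32)) {
        loop {
            // Pop one element per lock acquisition; the guard is a
            // temporary dropped at the end of this statement, so the
            // lock is NOT held while the callback runs.
            let next = self.inner.lock().unwrap().pop();
            match next {
                Some(item) => f(item),
                None => break,
            }
        }
    }
}

fn main() {
    let list = DeathList { inner: Mutex::new(vec![1, 2, 3]) };
    let mut seen = Vec::new();
    list.drain_with(|d| seen.push(d));
    assert_eq!(seen, vec![3, 2, 1]); // pop() drains from the back
    assert!(list.inner.lock().unwrap().is_empty());
}
```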

Comment by aw1621107 15 hours ago

The `unsafe` block was present because `List::remove` is marked `unsafe` [0]:

    /// Removes the provided item from this list and returns it.
    ///
    /// This returns `None` if the item is not in the list. (Note that by the safety requirements,
    /// this means that the item is not in any list.)
    ///
    /// # Safety
    ///
    /// `item` must not be in a different linked list (with the same id).
    pub unsafe fn remove(&mut self, item: &T) -> Option<ListArc<T, ID>> {
I think it'd be tricky at best to make this particular API safe since doing so requires reasoning across arbitrary other List instances. At the very least I don't think locks would help here, since temporary exclusive access to a list won't stop you from adding the same element to multiple lists.

[0]: https://github.com/torvalds/linux/blob/3e0ae02ba831da2b70790...

Comment by mlindner 14 hours ago

If the API cannot be made safe then it must be marked unsafe.

Comment by aw1621107 14 hours ago

I mean, remove() is already marked unsafe?

Otherwise there's the question of where exactly the API boundaries are. In the most general case, your unsafe boundary is going to be the module boundary; as long as what you publicly expose is safe modulo bugs, you're good. In this case the fix was in a crate-internal function, so I suppose one could argue that the public API was/is fine.

That being said, I'm not super-familiar with the code in question so I can't definitively say that there's no way to make internal changes to reduce the risk of similar errors.

Comment by mlindner 14 hours ago

Yeah this is a bad fix. It should be impossible to cause incorrect things to happen from safe code, especially from safe code calling safe code.

Comment by yuriks 11 hours ago

The author of the patch does mention that the better thing to do in the long run is to replace the data structure with one that is possible to better encapsulate: https://lore.kernel.org/all/20251111-binder-fix-list-remove-...

Comment by uecker 15 hours ago

Somebody (tsoding?) called "unsafe" in Rust a "blame shifting device". If a bug was in an "unsafe" block, it is not Rust's fault, but solely the responsibility of the programmer, while every bug in a C program is obviously the language's fault alone.

Comment by K0nserv 13 hours ago

Everything is solely the responsibility of the programmer. The strength of Rust as a language is that it helps the programmer check themselves before they wreck themselves. The critique of C would be that it provides far too little support to the programmer, although it was reasonable at the time it was invented.

Unsafe is the one escape hatch where Rust is more like C, but pragmatically it's an important escape hatch.

Comment by uecker 4 hours ago

Yes, but the arguments why we need to replace C code with Rust was not that it is better by "helping the programmer check themselves" but that we need to switch to memory-safe languages because it "removes a whole class of error". (of course, nobody has ever said this, this must be my imagination)

Finally, there are also a lot of ways to improve memory safety in C which are nowhere near exhausted, even in the kernel. As long as that remains the case, I find the argument that there is "too little support for the programmer" quite hollow.

Comment by HumanOstrich 17 hours ago

They're still 90% of the way to their goal. And there's only 90% left to go.

Comment by mimd 13 hours ago

Huzzah! You made it to the big leagues Rust! Come join C over here with the "CookiesVE".

Comment by samdoesnothing 16 hours ago

I think it's pretty telling that there are people trying to pre-empt the expected criticism in this thread. Might be worth thinking about why there might be criticism, and why there wouldn't be if it were a different language.

Comment by greatgib 17 hours ago

"That code can lead to memory corruption of the previous/next pointers and in turn cause a crash."

Oh no, what happened to Rust will save us from retarded legacy languages prone to memory corruption?

Comment by gfna 17 hours ago

Well, the article did mention it was in an unsafe block.

Comment by greatgib 15 hours ago

1) The unsafe block is still "Rust" code that was rewritten in "Rust" to be safe compared to the previous C code. 2) Maybe there is sometimes no way around using "unsafe" blocks in "Rust", so is Rust a lie?

Comment by AndrewDucker 15 hours ago

Rust constrains the code that can do unsafe things to small blocks, greatly reducing the area in which they can happen. Unlike C, where the whole of the code is unsafe.

Comment by gitaarik 8 hours ago

I imagine for some very low level kernel stuff, you might want to turn off Rust's safety features because they get in the way, make things less efficient or something.

But then when you do you should really know what you're doing.

The fact that this bug is because of "unsafe" Rust usage actually affirms the language's safety when using "safe" code. Although with "memory safe" code you can of course still fuck up lots of other things.

Comment by techbrovanguard 8 hours ago

If you do something stupid in a block marked "unsafe", it's probably on you, my guy.

Comment by secondcoming 17 hours ago

[flagged]

Comment by kahlonel 17 hours ago

Now we have vulnerabilities in two different flavors.

Comment by pa7ch 17 hours ago

Honestly, it seems like Zig is shaping up to be a better fit for the kernel. Regardless, the language that attracts skilled kernel devs will matter more than the language itself.

Comment by ameixaseca 7 hours ago

It is not - and given its stance on memory safety, it will hardly ever be.

Comment by postepowanieadm 17 hours ago

It's true only for unsafe rust.

Comment by troglo-byte 16 hours ago

I'll confess I'm a bit less than well-read on this, but I can't help but wonder. How can increasing the number of languages in use within one enormous project possibly reduce critical vulnerabilities over the next decade?

If we're looking beyond one decade, then:

- As with all other utility, the future must be discounted compared to the present.

- A language that might look sterling today might fall behind tomorrow.

- Something something AI by 2035.