Claude Code Found a Linux Vulnerability Hidden for 23 Years

1 Like

I wrote a while ago how just because something is open source doesn’t necessarily mean it’s secure: The secure open source fallacy

With this in mind, open source may actually end up less secure if AI tools find bugs faster than maintainers can fix them… and that seems to be exactly what is happening.

4 Likes

You are suggesting that AI’s ability to parse public code will make hidden codebases more secure than open source alternatives.

The claim that code is more secure when an adversary cannot access it is an old debate in cybersecurity, known as “security through obscurity”. Its efficacy is still argued, but the general consensus is that obscurity alone is weak security.

4 Likes

Not necessarily. It could happen, and the fact is many FOSS projects are outright rejecting LLM contributions, which I find unfortunate.

4 Likes

If you want one that is embracing it, look at SimpleX. Evgeny is a big fan of LLMs for coding.

1 Like

While your criticism of the fallacy is correct, open source provides security and trust by enabling the community to verify and audit the code they run. Making the code open also allows more vulnerabilities to be found and patched, but this requires people to actually do the work. Doing the same for proprietary software is extremely difficult: outsiders must rely on the development team to honestly find and patch vulnerabilities, and for all they know the software could have malicious code hidden inside it, which makes it very hard to trust. Open source’s security requires work: “read it bro,” whereas proprietary software’s security is a promise by the developers: “trust me bro.”

We’ll see, but I don’t think AI will fundamentally change the relative security of open source and proprietary software. AI tools find vulnerabilities faster in open source code, granted, but so do humans. And, like humans, when AI tools find a vulnerability, they can report it, patch it, or exploit it.

AI finding more vulnerabilities in open source code does not mean closed source software has fewer vulnerabilities. On the contrary, AI may be able to learn machine code much faster than most humans can. I wonder whether AI’s capacity to analyze machine code, as a ratio of its capacity to analyze source code, will be much higher than the same ratio for human analysts. If so, malicious AI would pose a greater threat to machine code than humans do, and AI may become an effective tool for finding vulnerabilities in closed source software.

3 Likes

AI contributions could go a long way toward improving open source software, but there may be valid reasons why many projects outright reject them. Receiving high volumes of poor-quality AI contributions is a real problem, and some projects have a moral objection to AI and will never accept AI contributions.

2 Likes

Interesting data point. It may just be a matter of time before AI slop stops being an issue in open source projects.

If AI is detecting bugs at a very high rate, maintainers may get overwhelmed with bug reports whether they are good or bad.

“czar” WTF