Open Source Is Drowning in AI Slop. The Linux Foundation Just Dropped $12.5M.
AI slop escaped the corporate inbox. Now it's flooding the foundations of open source. The Linux Foundation just announced $12.5M in funding. Paid for by the companies selling the tools that caused the mess.

From Workslop to Codeslop
Remember workslop? That AI-generated content that looks like work but accomplishes nothing. The 15-page report nobody actually thought through. ChatGPT-polished emails we pass around like hot potatoes. Well, it's escaped the office. It's now hitting the foundations of the software you use every day without knowing it.
$12.5 Million to Put Out the Fire
On March 18, the Linux Foundation announced a $12.5 million program to protect open source maintainers from the deluge of AI contributions. The project builds on Alpha-Omega and the OpenSSF, two existing security initiatives. So far, nothing shocking. But then you look at the list of funders: Anthropic, AWS, GitHub, Google, Microsoft, OpenAI.
Six companies. The same six selling the code generation tools responsible for the problem. It's like fast-food chains funding an anti-obesity campaign. Technically virtuous. Structurally absurd.
Maintainers at the Breaking Point
To understand why this is urgent, you need to look at what's happening in the trenches. The concrete cases are striking.
cURL, the networking library running on basically every connected device on the planet: Daniel Stenberg, its creator, shut down the bug bounty program in January 2026, after paying out $86,000 in rewards over the years. Why? In 2025, 20% of submissions were AI-generated, and the validity rate dropped to 5%. In January alone: 20 reports in a few days, seven of them within a single 16-hour window. Some mentioned real bugs. None identified an actual vulnerability. Signal drowned in noise.
Ghostty, the terminal by Mitchell Hashimoto (HashiCorp co-founder): AI code banned. His justification fits in one sentence. "This isn't an anti-AI position, it's an anti-idiot position."
tldraw, the collaborative drawing tool: Steve Ruiz enabled automatic closing of all external pull requests. No human filter anymore, the volume made it impossible.
Flux CD, the Kubernetes deployment tool: Stefan Prodan sums up the situation in a single line. "AI slop is DoSing open source maintainers." That's not a cheap technical metaphor. It's literally what's happening: a volume of requests that's paralyzing the system.
Why This Matters, Even to You
These maintainers are often volunteers. They maintain critical software in their spare time, for zero dollars. cURL runs on billions of devices. The Linux kernel powers most web servers, Android phones, connected devices. When these people break, it's not a GitHub project that closes. It's a layer of global digital infrastructure that weakens.
And the domino effect is already visible. Stack Overflow lost 25% of its activity in the six months after ChatGPT launched. Tailwind CSS, a massively used web framework: downloads up, documentation traffic down 40%, revenue down 80%. People are using code they don't understand. When it breaks, they open a ticket. Gentoo Linux and NetBSD got ahead of the problem by banning AI contributions. Linus Torvalds himself had to step in over bogus patches hitting the Linux kernel.
The Arsonist Fire Brigade
Back to that $12.5 million. Greg Kroah-Hartman, Linux kernel maintainer, had the most clear-eyed reaction: "Grants alone won't solve the problem that AI tooling is causing today."
And he's right. The problem is structural. The cost of producing a bug report, a pull request, a piece of code has dropped to zero. Anyone can ask an LLM to "find vulnerabilities" in a project and submit the output. Takes three minutes. The cost of verification hasn't budged. You still need a competent human to read, understand, test, validate. This asymmetry is the core of the problem, and no grant fixes it.
It's exactly the same mechanism as workslop in corporations. Producing costs nothing. Verifying costs everything. And in between, an ocean of noise someone has to sort through.
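The asymmetry can be put in numbers with a toy model. The figures below are illustrative assumptions, not data from the article, except the 5% validity rate, which comes from cURL's experience.

```python
# Toy model of the produce/verify asymmetry (illustrative numbers only).
def triage_hours(submissions: int, minutes_to_verify: float = 30.0) -> float:
    """Human hours needed to review every submission, valid or not."""
    return submissions * minutes_to_verify / 60.0

# Generating a report takes minutes; verifying one takes, say, half an hour.
total = triage_hours(100)   # 50.0 hours of human time for 100 reports
# At cURL's 5% validity rate, 95% of that effort is spent rejecting noise.
wasted = total * 0.95       # 47.5 hours on reports that lead nowhere
```

The model is crude on purpose: whatever the exact verification cost, it is fixed per submission and borne by a human, so triage time scales with volume while production cost stays near zero.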
The Real Cost of "Vibe Coding"
A study from CEU and the Kiel Institute modeled what developers call "vibe coding" (coding with AI without really understanding what you're producing), and the conclusions are stark. The economics create a vicious cycle: the easier it is to contribute, the higher the volume; the higher the volume, the faster maintainers burn out; the more maintainers burn out, the weaker the filters; the weaker the filters, the faster projects degrade.
The Linux Foundation is putting $12.5 million on the table. It's a band-aid. A useful band-aid, on a wound that keeps widening. The real work is rethinking how we produce and filter code. Not just funding the people sorting through the wreckage.
