Author: AMR

A tool installed on 20–50 billion devices just lost one of its best defenses, and it’s not because of a zero-day or a nation-state attack. It’s because hackers got lazy.

On January 21, 2026, Daniel Stenberg, the creator and lead maintainer of cURL, announced the end of the project’s bug bounty program. After six years, $86,000 paid out, and 78 confirmed vulnerabilities fixed, the program is over. The security team was drowning in garbage reports, and shutting the program down was the only way to stop the bleeding.

This is bad news for cURL. More than that, it’s a warning shot for the entire bug bounty ecosystem. As a hacker who uses AI tools regularly and cares about the integrity of the craft, I believe this is a conversation we need to have.

What actually happened

The numbers tell the story. In the first 21 days of 2026, cURL received 20 submissions. Seven of them arrived in a single 16-hour window. Each one required careful analysis by the security team. They had to read the report, attempt to reproduce the claimed vulnerability, investigate the code paths mentioned, and write a response. The result of all that effort? Zero actual vulnerabilities found—not one.

This is not a new problem. By mid-2025, roughly 20% of submissions were what Stenberg calls “AI slop”—reports that sound technical but contain nothing useful. What makes AI slop so insidious is that it does not look like spam; AI reports use technical language, they reference specific functions and code paths, and they describe potential attack scenarios that sound plausible. But when you actually dig in, there’s nothing worthwhile to be found. AI has learned to mimic the structure of security reports without understanding what makes something actually exploitable.
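
Part of what makes this so hard to filter is that all the surface signals look legitimate. One crude pre-filter, sketched below, is to check whether the functions a report names actually exist in the target codebase. To be clear, this is an illustrative heuristic of my own, not something the cURL team has said they use:

```python
import re
import subprocess

def referenced_symbols(report_text: str) -> set[str]:
    """Extract function-like tokens (e.g., curl_easy_setopt) from a report."""
    return set(re.findall(r"\b([A-Za-z_][A-Za-z0-9_]*)\s*\(", report_text))

def symbols_missing_from_repo(symbols: set[str], repo_path: str) -> set[str]:
    """Return the symbols that never appear anywhere in the source tree."""
    missing = set()
    for sym in symbols:
        # `git grep -q` exits non-zero when the pattern is absent.
        result = subprocess.run(
            ["git", "-C", repo_path, "grep", "-q", "--", sym],
            capture_output=True,
        )
        if result.returncode != 0:
            missing.add(sym)
    return missing
```

A report that cites functions which do not exist is almost certainly slop. The reverse proves nothing, of course, which is why a check like this can only ever be a pre-filter ahead of human review.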

By July 2025, submission volume to the cURL project had spiked to eight times the normal rate. Stenberg had already tried an instant-ban policy for AI-generated submissions in May 2025. It didn’t work.

The breaking point came when Stenberg realized that the math no longer added up. Every fake report steals time away from real bug fixes and new feature development. For a project maintained by a small team, this volume was not sustainable. As Stenberg wrote, “We need to make moves to ensure our survival and intact mental health.”

Starting February 1, 2026, all security reports will go directly through GitHub. There’s no money involved—no bounties. Accompanying this change is a very clear warning: submit garbage, and you will be banned and publicly ridiculed.

AI is not the villain; laziness is

Let me be clear about something: I use AI in my security research. It is powerful; it accelerates reconnaissance, it helps me draft reports faster, and it finds things I might miss. When I’m reviewing a codebase, AI can point me toward patterns that warrant closer inspection. When I am writing up findings, it helps me articulate technical details more clearly. All of this saves me time. AI is not inherently the problem here.

The problem is people using AI to replace their brains instead of augmenting them.

Feeding code to an LLM and then validating what it spits out is one thing; copying AI output directly into a bug report without understanding a single line is another entirely. The hackers flooding cURL were not hacking; they were gambling. Paste code, hit submit, hope something sticks, collect bounty—that is not hacking. That is noise.

I see this constantly. People who have never found a vulnerability suddenly submit dozens of reports across multiple programs. Rather than learning the craft or developing their intuition, they play a numbers game in an attempt to replace skill with volume. When platforms reward any valid submission without considering the cost of false positives, the math seems to work in their favor.

I have written before on how AI gives attackers three critical advantages: pattern recognition at scale, generative capabilities for convincing content, and autonomous operation. Those same capabilities can help defenders and hackers, but only when paired with human judgment. Stripped of that judgment, AI just produces slop at scale.

Stenberg has been tracking AI-assisted reports for years, and his conclusion is damning: in six years of monitoring, not a single submission generated by AI alone has uncovered a genuine vulnerability. Zero. The signal-to-noise ratio when AI operates without human oversight is effectively zero.

The right way exists (and it pays)

Here is where the story gets interesting. In September 2025, a hacker named Joshua Rogers submitted a massive list of potential issues to cURL. He used AI-assisted tools, most notably a security scanner called ZeroPath. The result? Roughly 50 real bugs. Stenberg called them “actually, truly awesome findings.”

The difference maker? Rogers used AI as a research assistant while doing proper security work himself. He tested multiple tools, evaluated their output, and filtered the results with his own expertise before submitting anything. What he sent Stenberg was not raw AI output; it was a set of validated issues that Rogers understood, could explain, and could reproduce.

Rogers himself wrote extensively about his methodology. He tested tools like Almanax, Corgea, ZeroPath, Gecko, and Amplify. He compared their false-positive rates, evaluated which types of vulnerabilities each tool handled well, and understood the limitations. This is what responsible AI-assisted security research looks like.
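
The workflow matters more than the tools. As a minimal sketch of that filtering discipline (the data structures below are hypothetical; only the tool names above are real), the gate between scanner output and a maintainer’s inbox might look like this:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A candidate issue emitted by an AI-assisted scanner."""
    tool: str                 # which scanner flagged it
    location: str             # file or function the tool points at
    claim: str                # what the tool says is wrong
    reproduced: bool = False  # set True only after manual verification
    notes: str = ""           # the researcher's own written analysis

def false_positive_rate(findings: list[Finding]) -> dict[str, float]:
    """Per-tool share of candidates that did not survive manual review."""
    rates: dict[str, float] = {}
    for tool in {f.tool for f in findings}:
        batch = [f for f in findings if f.tool == tool]
        rejected = sum(1 for f in batch if not f.reproduced)
        rates[tool] = rejected / len(batch)
    return rates

def submittable(findings: list[Finding]) -> list[Finding]:
    """Nothing leaves the pipeline unless a human reproduced and explained it."""
    return [f for f in findings if f.reproduced and f.notes]
```

The entire value is in the gate: every candidate is assumed bogus until a human reproduces it and can describe it in their own words.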

Stenberg put it perfectly: “This is what an AI can do when wielded by a competent human.”

This is the standard. If you cannot explain a vulnerability, reproduce the behavior, and articulate why it matters, you should not be submitting it. AI can help you find candidates, but AI cannot replace understanding. The moment you outsource comprehension, you become part of the problem.

What I predict programs will do next

The cURL shutdown will not be an isolated event. Other open-source projects are facing the same flood of AI slop: the Python community has reported similar issues, and so have Open Collective and the Mesa project. Expect more programs to implement stricter barriers.

Some possibilities include the following (a sketch of how the reputation and deposit ideas might combine appears after the list):

  • Proof-of-concept requirements that are not easily faked
  • Reputation-weighted submissions where new accounts face higher scrutiny
  • AI detection layers to flag suspicious patterns before human review
  • Deposit systems where reporters stake money that is only refunded for valid submissions
  • Mandatory disclosure of AI tool usage in the research process
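
To make the reputation and deposit ideas concrete, here is a minimal sketch. Every threshold, formula, and queue name below is hypothetical; nothing here describes an announced platform feature:

```python
from dataclasses import dataclass

@dataclass
class Reporter:
    valid_reports: int = 0    # confirmed vulnerabilities to date
    invalid_reports: int = 0  # rejected or unreproducible submissions

    @property
    def reputation(self) -> float:
        """Laplace-smoothed hit rate, so brand-new accounts start at 0.5."""
        total = self.valid_reports + self.invalid_reports
        return (self.valid_reports + 1) / (total + 2)

def triage_priority(reporter: Reporter, deposit_paid: bool) -> str:
    """Route a submission by track record and stake.

    Proven reporters get human attention first; unknowns can buy
    priority by staking a deposit that is refunded only if the
    report turns out to be valid.
    """
    if reporter.valid_reports >= 3 and reporter.reputation >= 0.7:
        return "fast-track"
    if deposit_paid:
        return "standard"
    return "low-priority"
```

The deposit flips the economics of the numbers game: spraying a hundred unverified reports stops being free.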

The days of profitable low-effort submissions are ending. Major platforms will need to evolve or watch programs leave. Stenberg has publicly called on HackerOne to do “something stronger” to address this behavior. He even offered to help them build the infrastructure.

For legitimate hunters, this means adapting. More specifically, build your reputation on quality and methodology and prove you understand what you are reporting. The hackers who invest in real skills will stand out as the noise is filtered.

The cost to real hackers

Every garbage submission makes life harder for legitimate hackers. Programs shutting down means fewer targets to hunt. When triagers become skeptical of all reports, even good ones face extra scrutiny. Payouts shrink as budgets are consumed by processing noise. The incentives that attract top talent are eroding.

This issue is not abstract. cURL’s bug bounty paid $86,000 across 78 confirmed vulnerabilities over six years, amounting to roughly $1,100 per vulnerability. While this is not life-changing money, it is meaningful recognition for meaningful work. Stenberg defended the bug bounty model, pointing out that “no professional would come even close to that cost/performance ratio” compared to hiring dedicated security staff.
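
Stenberg’s claim is easy to sanity-check. In the back-of-the-envelope comparison below, the consultant figures are illustrative assumptions, not quoted rates:

```python
total_paid = 86_000   # USD paid out over six years (Stenberg's figures)
vulns_fixed = 78      # confirmed vulnerabilities

print(f"${total_paid / vulns_fixed:,.0f} per confirmed vulnerability")  # ~$1,103

# Hypothetical comparison: a contractor at $1,500/day averaging a
# week of work per confirmed finding (both numbers are assumptions).
day_rate, days_per_finding = 1_500, 5
print(f"${day_rate * days_per_finding:,} per contractor-found bug")     # $7,500
```

Even under generous assumptions, the bounty model wins by a wide margin, which is exactly why losing it stings.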

Now that model is dead for cURL, and the people who killed it were not adversaries. They were other members of the security community who prioritized volume over value.

The slop submitters are burning down the house for everyone. Every time someone pastes ChatGPT output into a report without verification, they make the ecosystem worse for all of us.

The cost to customers

For organizations running security programs, this creates a real dilemma. Bug bounty programs are supposed to provide cost-effective access to diverse security talent. When the crowd becomes unreliable, what options are left?

Some organizations will shift to private programs with vetted researchers only. This works but limits the diversity of perspectives and skills that makes crowdsourced security valuable. Others will lean harder on expensive penetration testing engagements or internal red teams. These are effective but do not scale the same way.

Smaller companies will face the worst of it. They often lack the budget for dedicated security staff or premium pen testing services. Bug bounties offer them affordable access to skilled researchers. As programs implement stricter controls or shut down entirely, these organizations will lose a critical layer of defense.

If we continue on this trajectory, the security gap will widen. The attackers, unlike bounty hunters, do not need financial incentives to find vulnerabilities.

The scary truth

Here is the uncomfortable reality that nobody wants to discuss: cURL is installed on somewhere between 20 and 50 billion devices. Every time you download a file, call an API, or transfer data over the network, there is a good chance cURL is involved. Google uses it, Apple uses it, and even your car probably uses it.

If hackers stop looking at cURL because there is no incentive to, the bugs will simply accumulate and lie in wait. The people who eventually find them will not be hackers submitting responsible disclosures; they will be threat actors who do not need bounties because their payoff comes from exploitation.

Stenberg has made it clear that cURL will still accept and fix genuine security vulnerabilities reported through GitHub. But without financial incentive, how many skilled researchers will prioritize cURL? The open-source funding model was already in crisis before AI flooded the system with noise.

Where we go from here

AI is here to stay; that is not changing. The tools will get better, their output will become more convincing, and the slop will become harder to detect.

The question is how we adapt.

For hackers: Use AI responsibly or face the consequences. The researchers who treat AI as an assistant while maintaining their own expertise will thrive. The ones who use it to skip the work will get banned, shamed, and forgotten. Stenberg set the standard clearly: understand and reproduce a bug or do not submit it.

For platforms: The filtering problem is yours to solve. If platforms cannot distinguish signal from noise, customers will leave. The platforms that rise above the rest will be the ones that invest in detection, add friction that deters low-effort submissions without punishing legitimate researchers, and build sustainable models.

For organizations: Crowdsourced security still works, but you need the right crowd. Platforms that vet researchers, match skills to targets, and maintain quality standards will become even more valuable. Programs like Bugcrowd’s CrowdMatch that connect customers with researchers who actually know what they are doing will matter more than ever.

The cURL shutdown is a symptom of a larger problem. The economics of bug bounties assumed good-faith participation. AI broke that assumption by making it trivially easy to generate plausible-sounding reports that are not grounded in actual security knowledge. The system is recalibrating. Those who adapt will survive. Those who do not will find themselves locked out of an ecosystem that no longer trusts them.

The bugs are still out there. The question is whether we will find them first or whether we will let the slop submitters hand that advantage to our adversaries.

Let me make the lesson clear: AI did not kill cURL’s bug bounty; humans who refused to do the work did. 


Bugcrowd’s Bug Bounty offering helps customers extend their teams while focusing on what truly matters. Our Platform’s industry-leading managed triage service validates and prioritizes findings quickly, reliably, and at scale. This means customers experience a high signal-to-noise ratio, which is critical for success. Request a demo today to see the Platform in action.