At Bugcrowd, our mission has always been to combine human ingenuity with scalable technology to surface the vulnerabilities that matter most. While AI can accelerate discovery, human operators are ultimately accountable for the quality of each submission, for two primary reasons: professionalism and secure outcomes. Customers expect that the vulnerabilities submitted represent meaningful risk.

We’ve all seen unvalidated web application scan reports, and how quickly trust deteriorates when developers are handed findings nobody has verified. We’re seeing history repeat itself here, but at a scale unlocked by agentic capability.

Unfortunately, some users are increasingly using AI agents to create a huge volume of low-quality, unverified submissions. We call this “sloptimism”: overly optimistic submissions in the form of large volumes of speculative or AI-generated reports filed with minimal to no pre-submission validation and limited context. You, the researcher, are responsible for the accuracy of every submission you make.

Over the past three weeks alone, our queues have grown by more than 334%, even after excluding legitimate reports produced through traditional hacker workflows. A significant portion of these submissions share the same characteristics: thin evidence, templated write-ups, and a high likelihood that the issue was never verified before submission.

Our code of conduct already requires the human researcher to be responsible for the accuracy and value of each submission. Today, we are introducing additional measures to help eliminate “sloptimism” submissions, outlined below.

Sources behind “sloptimism”

Our investigation suggests several primary sources behind this trend:

  1. Firms training AI systems on live targets

A number of offending accounts have been correlated with security organizations that appear to be using Bugcrowd triage services as training input. In these cases, low-detail submissions are repeatedly sent from multiple accounts (sock puppets), using our triage outcomes as reinforcement learning signals to determine what constitutes a valid versus invalid finding. This practice violates our terms of service.

  2. AI-assisted novice researchers

A second group consists of newer researcher accounts deploying AI agents or automated tooling, clearly without manual validation of the generated findings, and in many cases with no demonstrated security impact. These submissions often follow common LLM-generated patterns, pairing few or no reproduction steps with speculative impact assessments.

  3. Automated submission pipelines

A smaller but growing segment appears to originate from automated pipelines that generate bulk submissions. These reports frequently reuse common templates and differ only slightly between submissions, indicating bulk generation rather than researcher-led validation.

Addressing “sloptimism” and its impact

While experimentation with AI in security research is expected and encouraged, we are committed to delivering a professional outcome and experience for our customers. High-volume, low-confidence submissions place significant strain on triage teams and program owners, degrading signal quality across the ecosystem and diverting the time and care required to validate legitimate hacker submissions.

To address this, we are updating submission policies, rate controls, and detection mechanisms to ensure Bugcrowd continues to prioritize validated research and high-signal findings, while discouraging speculative and automated spam.

The Bugcrowd Platform is introducing a number of changes designed to protect both our customers and productive hackers by keeping queues focused on signal and impact.

Initially, these changes include (but may not be limited to):

  • Accounts identified as engaging in submission farming will be permanently banned. Reinstatement will only be considered through a formal appeal process and will require identity verification, confirming the account is operated by an individual.
  • Accounts that submit ≥10 consecutive invalid reports will be reviewed. Where submissions are attributable to automated or AI-generated activity without sufficient validation prior to submission, the account may receive a 30-day suspension, alongside guidance on acceptable submission practices.
  • Accounts that submit ≥10 invalid reports will be required to complete identity verification, confirming the account is owned and operated by an individual hacker before further submissions are permitted.
  • Automation used to “squat” on common finding types at program launch, without demonstrated impact at the time of submission, violates our behavioral standards and will result in enforcement actions designed to correct and prevent repeat occurrences.
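To make the threshold rules above concrete, here is a purely illustrative sketch of how an account could be tracked against them. The two limits come from the bullets above; everything else (the names, the counter reset on a valid report, and the returned action strings) is an assumption for illustration, not Bugcrowd’s actual implementation.

```python
from dataclasses import dataclass

# Thresholds taken from the policy above; the surrounding logic is hypothetical.
CONSECUTIVE_INVALID_LIMIT = 10  # >=10 consecutive invalid -> review / possible 30-day suspension
TOTAL_INVALID_LIMIT = 10        # >=10 invalid overall -> identity verification required


@dataclass
class AccountState:
    """Per-account counters updated as triage resolves each submission."""
    total_invalid: int = 0
    consecutive_invalid: int = 0


def record_submission(state: AccountState, valid: bool) -> list[str]:
    """Record one triage outcome and return any enforcement actions triggered."""
    actions: list[str] = []
    if valid:
        # Assumed behavior: a valid report resets only the consecutive counter.
        state.consecutive_invalid = 0
    else:
        state.consecutive_invalid += 1
        state.total_invalid += 1
        if state.consecutive_invalid >= CONSECUTIVE_INVALID_LIMIT:
            actions.append("review_for_suspension")
        if state.total_invalid >= TOTAL_INVALID_LIMIT:
            actions.append("require_identity_verification")
    return actions
```

Under these assumptions, nine consecutive invalid reports trigger nothing, the tenth trips both checks, and a single valid report afterward clears only the consecutive streak.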

We recognize this represents an important shift in how bug bounty platforms, including Bugcrowd, have historically approached submission policies.

We welcome feedback from the community, including perspectives, alternatives, and examples of cases where these measures may unintentionally impact legitimate research, via the HIVE community or by contacting support@bugcrowd.com.