It’s been nearly a decade and a half since bug bounty platforms first emerged, creating a robust and streamlined vulnerability reporting process. Bug bounty programs bridged an enormous gap between hackers and organizations, both in the private and public sectors. 

AI is continuing to impact the industry, and we are at a turning point in the bug bounty scene. Some hackers are seeing fewer program invites, so we want to provide a straightforward breakdown of the current state of bug bounty. Specifically, we want to answer some tough questions and help researchers become better, more competitive bug hunters. 

First, we’re going to level with you here: the influx of AI slop and AI-generated P5 spam has made some organizations more apprehensive about managing a bug bounty. It has also resulted in unprecedented fatigue across our triage team. Organizations establish bug bounty programs to leverage the creativity of the hacker community and receive reports on attack vectors that AI may not consider. In other words, a bug bounty helps round out a customer’s managed vulnerability reporting approach.

To help organizations continue to benefit from the value of human hackers, we must put our best foot forward and show up collectively. In this blog post, we’ll cover three common issues that hold hackers back. 

Carefully read the brief

While a bug bounty doesn’t follow the same procedure as a pen test, the rules of engagement are just as important. The Crowd is full of incredible talent, but unfortunately, we’re encountering a growing number of submissions that our ASE team finds difficult to reproduce.

Generally speaking, if an asset is not listed as in scope, it should be considered out of scope. Some engagement briefs mention that out-of-scope findings may be considered on a case-by-case basis. Such findings often don’t qualify for a reward and must be accompanied by a very clear explanation of why the impact is high enough to warrant consideration. Bugcrowd staff cannot check with a program owner every time there’s a potential finding that isn’t listed in the brief. If a brief notes that a program may take out-of-scope assets into consideration, think carefully about whether what you’re submitting makes sense from the organization’s point of view.

Polish in-scope submissions before submitting

Once you find a vulnerability that is in scope, it’s time to submit. Be as detailed as possible about the steps to reproduce and the impact the finding has for the customer. Every day, our ASE team receives an overwhelming number of submissions that are outright unclear or loaded with security jargon that doesn’t demonstrate any clear impact. Keep in mind, it’s not just about showing a proof of concept clearly; it’s also about explaining why it’s a genuine security risk. If this can’t be demonstrated or explained clearly, the submission will be marked as “Not Reproducible.” Why? Because it doesn’t carry any demonstrable risk.

Think of it this way: You approach a secured, gated facility, and there’s a hole in the fence that allows your hand through. You call the main office and inform them of this hole, but they may not prioritize or care about it unless you can explain or demonstrate what can actually be done with this fence vulnerability. The hole isn’t supposed to be there, true, but can you actually reach anything of value or perhaps reach through and unlock a gate? If not, then it’s nothing more than a hole in a fence, and patching it doesn’t offer any additional protection for this particular facility. (Here’s a non-technical, classic example of what can happen when impact cannot be clearly articulated.)

Remember: just because you’re able to manipulate or extract a piece of information from a customer’s asset doesn’t mean it’s a risk for the customer. It may be trivial or arbitrary data, or a small exploit that doesn’t result in any security impact. The outcome would be a final submission state of NR, NA, or perhaps P5. However, this doesn’t mean the PoC needs to stop there. We encourage you to tinker more with lower-impact findings. Dive deep and see if you can build an exploit chain or take things to the next level and turn a P5 into a P4 or higher.

Read triage comments carefully

If you don’t like the triage team’s or program owner’s decision for your submission, we ask you to first read the comments carefully and scrutinize your work before using the RaR feature. We’ve processed thousands of RaRs that are, quite simply, the equivalent of “Nuh-uh. Check again.” If you disagree with an outcome, then it is your responsibility to make a nuanced argument with clear steps in your RaR. This is where the difference between insistence and persistence becomes very clear.

Here’s an example of what you should avoid:

Triage: This vulnerability does not show clear impact for the customer and will be marked as NA. 

Submitter: You’re wrong. Check again. It’s in my PoC. It’s so obvious if you read it!

Okay, let’s pretend for a moment that it really is super obvious. Even then, it doesn’t hurt to simply rephrase and state the impact again. Hackers approach an attack surface with many of the same tools, but they often have different ways of thinking about and strategizing around a problem. What may seem painfully obvious to one person may seem like a stretch to another, and vice versa. Approaching communication with respect and without assumptions is key here.

Here is an example of what helpful communication looks like:

Triage: This vulnerability does not show clear impact for the customer and will be marked as NA.

Submitter: The data gathered through this exploit allows a threat actor to steal sensitive PII, including recent customer names and addresses. Because these are new or recent records, exposure may result in regulatory violations and fines for the organization, as well as fraud or identity theft risks for its customers.

This tells triage the specific risks this vulnerability carries and what it would mean for the customer if a threat actor exploited it. This helps facilitate productive conversations between triage and the customer.

Final notes on general Platform behavior

We understand that a lot of hard work goes into bug hunting, and sometimes, circumstances can become frustrating. However, using Support tickets and submission comments to voice complaints with aggressive language toward Bugcrowd staff and customers isn’t tolerated. Derogatory insults, racial slurs, and threats, all of which we’ve encountered, will get your account suspended or permanently banned. We encourage you to review our Code of Conduct and Standard Disclosure Terms, which are agreed upon when creating an account and by participating in any engagement.

Many of our hacking culture champions fought incredibly hard for open dialogue and productive communication with organizations to establish safe vulnerability reporting. We want to both respect and maintain this approach, lest we take steps, or even leaps, backward.