
Essential to a Successful Bounty Brief: Exclusions


In continuing our series on building a bounty brief, we’ve already covered step 0, creating a scope, and touched briefly on focus areas. Now that you have the foundation of what you want researchers to test, it’s time to turn your attention to what you don’t want them to test – which is just as important as, if not more important than, clearly stating what you do want tested. We do this by explicitly noting and drawing the researcher’s attention to our exclusions.

Why is it so important? Simply put, it’s a matter of respecting researchers’ time and effort. If we look at this from a researcher’s point of view, every issue that we clearly exclude on the bounty brief is one they don’t need to waste time testing for or reporting. A brief that doesn’t contain explicit exclusions runs the risk of receiving issues that the program owner doesn’t care about – wasting the time and resources of both the researcher and the program owner. To clearly document these exclusions, we’ve identified the five most common categories to consider when building your program: low impact issues, intended functionality, known issues, accepted risks, and issues resulting from pivoting.

1) Low Hanging, Low Impact Issues

It’s important to explicitly call out common, low impact issues that can typically be found within a few seconds or are usually caught via automated scanners. To help reduce the amount of inbound noise generated from these types of reports, Bugcrowd has built a fairly exhaustive list of these low-hanging/informational report types into our VRT as P5 issues – which receive neither rewards nor kudos points.

The poster child for this type of exclusion is clickjacking. Can it be a problem? Yes. However, we’ve found that many program owners don’t see it as a threat – unless it affects a critical page (e.g. an account transfer page). So, instead of waiting for researchers to submit clickjacking on random, non-sensitive pages, why not make it clear what you won’t accept from the start? In this case, this could easily be addressed by putting something in your brief to the effect of “submissions for clickjacking are out of scope unless they can be exploited against sensitive functionality.”
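As an aside, whether a page is clickjackable at all usually comes down to two response headers. The sketch below (not Bugcrowd tooling – the function name and structure are illustrative) shows the standard check a triager or researcher might apply before reporting: a page that sends `X-Frame-Options: DENY`/`SAMEORIGIN` or a restrictive CSP `frame-ancestors` directive can’t be framed by a third party in modern browsers.

```python
def is_frameable(headers: dict) -> bool:
    """Return True if the response headers permit third-party framing."""
    # Normalize header names to lowercase for case-insensitive lookup.
    headers = {k.lower(): v for k, v in headers.items()}

    # X-Frame-Options: DENY or SAMEORIGIN blocks third-party framing.
    xfo = headers.get("x-frame-options", "").strip().upper()
    if xfo in ("DENY", "SAMEORIGIN"):
        return False

    # CSP frame-ancestors: 'none' or 'self' likewise blocks third parties.
    csp = headers.get("content-security-policy", "")
    for directive in csp.split(";"):
        directive = directive.strip()
        if directive.lower().startswith("frame-ancestors"):
            sources = directive.split()[1:]
            if not sources or all(
                s.strip("'").lower() in ("none", "self") for s in sources
            ):
                return False
    return True
```

A page where `is_frameable` returns `False` shouldn’t generate a clickjacking report in the first place; pairing a brief exclusion with headers like these cuts the noise from both ends.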

2) Intended Functionality

Any intended functionality that could plausibly be misinterpreted by researchers as a vulnerability should also be called out on the brief.

For example: You create a bounty program for your webapp that, in addition to doing a bunch of other things, allows admins to input custom HTML through the admin portal to modify the content of the header/footer/login pages. A researcher finds this functionality, injects <svg onload=alert(1)> into the input field, sees their injection fire in the header/footer/login, submits the finding as stored XSS, and expects to be rewarded.

While the use of the functionality may seem intuitive and clearly intended to you, the reality is that researchers don’t know whether it’s intentional – and rather than let someone else submit it (and potentially get paid for it), it’s in their best interest to be the first to report. By recognizing and understanding this mindset, you can carefully evaluate your target for these sorts of potential hang-ups, and pre-emptively call them out of scope.
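The distinction here is purely one of context: the same payload is intended behavior in a documented raw-HTML field and a real stored XSS everywhere else. A minimal sketch of that distinction, using Python’s standard-library `html.escape` (the field names are illustrative assumptions, not from any real application):

```python
import html

# Fields documented as deliberately accepting raw HTML from admins.
# These are hypothetical names for illustration.
RAW_HTML_FIELDS = {"header_html", "footer_html"}

def render_field(name: str, value: str) -> str:
    """Render admin input: raw for documented HTML fields, escaped otherwise."""
    if name in RAW_HTML_FIELDS:
        # Intended functionality -> call it out of scope on the brief.
        return value
    # Anywhere else, an unescaped payload would be a legitimate XSS finding.
    return html.escape(value)

payload = "<svg onload=alert(1)>"
render_field("header_html", payload)   # returned unchanged: intended raw HTML
render_field("display_name", payload)  # escaped: &lt;svg onload=alert(1)&gt;
```

If your brief names the raw-HTML fields explicitly, a researcher can tell at a glance which of these two cases they’ve found.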

3) Known Issues

This is a commonly disputed and debated item when launching a bounty program. Generally speaking, there are two options if you have known issues: 1) fix them all prior to launching the program (however, this is often simply not feasible); or 2) call out the known issues on the bounty brief. You don’t have to be overly verbose in talking about them, as nobody wants to air their issues in depth, but doing this goes a long way towards establishing trust with the researchers. And if the known issues are severe/damaging enough that you can’t/don’t want to disclose them, you should probably fix them as quickly as possible.

4) Accepted Risks

Additionally, you may have valid reasons for not accepting certain vulnerability types – which is perfectly fine. Again, for the same reasons we’ve outlined previously, just make sure you call that out to researchers in advance. For instance, Google doesn’t reward for open redirects (see: https://www.google.com/about/appsecurity/reward-program/). Even though open redirects are an OWASP Top Ten issue, Google has decided it’s not something they care about, or they have mitigating factors in place. Which, again, is perfectly acceptable – everyone has things they consider acceptable risks – the main takeaway is to make sure researchers know it’s a non-qualifying vulnerability type before they go testing for it or submit it as an issue.
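One common shape such a “mitigating factor” takes is validating redirect targets against an allowlist, so an open redirect parameter can’t actually send users off-site. A hedged sketch of that idea (the hostnames and function name are illustrative, not drawn from any real program):

```python
from urllib.parse import urlparse

# Hypothetical hosts this application is willing to redirect to.
ALLOWED_HOSTS = {"example.com", "accounts.example.com"}

def safe_redirect_target(url: str, default: str = "/") -> str:
    """Return url only if it is a relative path or points at an allowed host."""
    parsed = urlparse(url)
    # Relative paths like "/dashboard" stay on-site and are fine.
    if not parsed.scheme and not parsed.netloc:
        return url
    # Absolute URLs must use http(s) and resolve to an allowlisted host.
    if parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS:
        return url
    # Anything else (foreign hosts, protocol-relative "//evil.com", odd
    # schemes) falls back to a safe landing page.
    return default
```

With a control like this in place, “open redirect” reports carry little impact – which is exactly the kind of context worth stating on the brief when you exclude the vulnerability class.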

5) Issues Resulting from Pivoting

If you don’t want researchers to go for second-level findings, make that clear – and if you do want them to, make that equally clear. If you leave this point ambiguous, you run the risk of researchers either going too far or not going as far as they could have – neither of which is a desired outcome for you or the researcher.

For example: A researcher finds SQLi on your application – and through this vulnerability, they’re also able to determine that your passwords are insecurely hashed and salted. Ask yourself: is this something you want to reward and encourage – allowing researchers to take that extra step and find deeper vulnerabilities? Or do you want them to stop as soon as they have a proof of concept for the first issue? Either way, make this clear when building the program.
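For concreteness, here is what that second-level finding typically looks like: unsalted fast hashes versus a salted, iterated scheme. This is a standard-library sketch for illustration only – the iteration count and parameters are assumptions, not a recommendation tuned for production:

```python
import hashlib
import hmac
import os

def weak_hash(password: str) -> str:
    # Unsalted MD5: identical passwords produce identical hashes,
    # and the digest is cheap to brute-force offline.
    return hashlib.md5(password.encode()).hexdigest()

def strong_hash(password: str, salt: bytes = None) -> tuple:
    # PBKDF2-HMAC-SHA256 with a per-user random salt and a deliberately
    # slow iteration count (200_000 here is illustrative).
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(strong_hash(password, salt)[1], digest)
```

A researcher who pivots through SQLi and sees `weak_hash`-style digests in your users table has found something materially worse than the injection alone – whether you want that reported (and rewarded) is exactly the decision the brief should record.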

In all of the points above, there are two primary goals: saving your time and the researchers’ time, and providing a better experience for researchers. It’s important to remember that you want to build a positive relationship with researchers – there are hundreds of programs they could be working on, but they chose yours. You want them to keep choosing yours, and to keep coming back to your program as a preferred place to exercise their skills. So make it a pleasant experience by being as clear as possible up front regarding exclusions.

If this blog was of interest to you, Shpend and I will be speaking at AppSec EU on July 1st (in Rome) about this very topic of creating a successful bounty program – we encourage you to attend if you’re interested in learning more.


Grant McCracken

Solutions Architect at Bugcrowd.
