As the quote goes, “if you don’t know where you’re going, you’ll end up someplace else.” This clichéd yet valid aphorism rings doubly true when running a crowdsourced security program. If we don’t have a clear idea of what success looks like, or what we’re trying to accomplish, we’re unlikely to achieve it.
As we’ve touched on earlier in this blog series, core to creating a successful program is having a clear and coherent program brief. As a quick reminder, the program brief is a single-page document that outlines the scope, expectations, rewards, prioritization, and any other information hackers need to be successful. Given that it’s the central and key artifact of your program, it’s of paramount importance that the brief contains not just the information needed for your program to be understood, but written so that it cannot possibly be misunderstood – it amounts to a contract between you and the hackers.
In today’s blog, we’ll take a high-level look at all the components of a program brief, and provide some guidance on putting yours together. As a note, for any program managed by Bugcrowd, our Solutions Architect team will provide guidance to all customers looking to launch a crowdsourced program; in the absence of that hands-on guidance, however, this guide is intended to start you off on the right foot.
Scope
As the central and single most important piece of any program, a well-defined scope clearly tells researchers what they can and cannot test within the boundaries of the engagement. It’s therefore absolutely critical to ensure that, as we define our scope, there’s no room for misinterpretation. With this in mind, the scope for any given program is entirely contingent on your goals and objectives for the program.
That said, in terms of guidance when setting up your scope, there are a few things we want to call out as potential points for consideration:
- Too narrow a scope may result in coverage and testing gaps (creating a false sense of security), or may signal to hackers that it’s not worth their time
- For instance, if we have an API with five total endpoints and no credentials for researchers to test with, a good number of testers will look at the extremely narrow scope and decide it’s not worth their time to look at something so small. Generally speaking, less attack surface means fewer opportunities for findings – and bug bounties are all about maximizing the vulnerabilities found (and thereby rewards earned) for the time spent testing.
- If there are few meaningful findings against a narrow scope, you may begin to think you’re reasonably secure. However, this is usually far from the case. There’s no shortage of examples where programs seemed to have no findings until they opened up their scope and received hundreds of submissions almost immediately. Wherever and whenever possible, we strongly advise you to open up your scope to include your entire digital footprint – allowing hackers to better mirror how attackers operate in the wild.
- Try to expose as much of your footprint as possible. If you have certain assets you value more than others, our recommendation is to simply tier those out, where your primary assets have the highest rewards, and secondary assets are rewarded at a slightly lower rate – and so on.
- Always evaluate your scope from the mindset of a hacker. Ask yourself, “how could I misread or misinterpret this information?” A vague or incomplete scope may lead to lost cycles while hackers ask clarifying questions, or worse, move on to another program.
- Think about how you want to handle submissions that are valid, but out of scope. For instance, if there are two sides to your business, and only one side is in scope, what happens if a hacker inadvertently stumbles on an issue that affects the other side of the business? How can they report that finding, and what is your policy? Clearly outlining these contingencies is extremely helpful, both in setting expectations for hackers and in knowing what to do when such findings come in.
- Finally, if your goal is to target a very specific app or asset, as much as we prefer an open scope, it may behoove you in some situations to limit the program scope to just what you want tested. Again, this is only advised if you absolutely need hackers to focus on that one thing: an overly broad scope may ultimately distract time-constrained hackers from focusing on what you need them to.
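To make the tiering point above concrete, here is a minimal sketch of how a tiered scope might be modeled, with primary assets rewarded at the highest rate and secondary assets slightly lower. All hostnames and dollar amounts are hypothetical illustrations, not Bugcrowd defaults.

```python
# Hypothetical tiered scope: primary assets carry the highest reward
# range, secondary assets a slightly lower one. Hosts and amounts are
# illustrative only.
SCOPE_TIERS = {
    "primary": {
        "targets": ["app.example.com", "api.example.com"],
        "reward_range_usd": (600, 15000),
    },
    "secondary": {
        "targets": ["blog.example.com", "*.staging.example.com"],
        "reward_range_usd": (300, 7500),
    },
}

def tier_for(host):
    """Return the tier a host falls into, honoring '*.' wildcards."""
    for tier, info in SCOPE_TIERS.items():
        for target in info["targets"]:
            if target.startswith("*."):
                # "*.staging.example.com" matches any subdomain of it
                if host.endswith(target[1:]):
                    return tier
            elif host == target:
                return tier
    # Anything unmatched is presumptively out of scope
    return None
```

Laying the tiers out this explicitly (whatever format your brief uses) leaves no room for a hacker to misinterpret which assets are worth what.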
Focus Areas / Target Information
After you inform hackers what you’re looking to have tested (i.e., what’s in scope), it’s important to talk about where you’d like them to focus their attention and effort. For instance, if there’s a new feature or attack vector you want to ensure is covered, it’s worth calling those areas out explicitly. Additionally, many program owners find it effective to add point-in-time or long-term bonuses around focus areas.
Of note, focus areas should not include generic vulnerability classes (e.g. XSS, SQLi), as hackers typically look for those types of findings by default. Calling out very specific situations is much more effective – such as advising hackers that you suspect there may be input validation issues on a few specific endpoints or parameters.
In addition to including distinct and clear focus areas, it’s also important that we include some high-level information and all relevant documentation around the in-scope assets. For an API target, that means making sure to include API docs so that researchers are able to test effectively. For a credentialed application of any sort, it’s important to call out how researchers can register, or how they can gain access to any relevant authenticated scope. For a complex, multi-tenanted financial webapp, this means providing high-level guidance around what certain roles are supposed to have access to, etc. For complex or unintuitive targets, this means writing out detailed, step-by-step instructions for getting started, links to resources, and so on.
When building out the target information section, it’s helpful to keep perspective on what you’re providing to researchers. Those reading it are unlikely to see things with the clarity, insight, or understanding that you have – so take a moment to really dive in and give as much information as is needed to help researchers be effective against your assets. Remember, contrary to how it may feel, we want researchers to find vulnerabilities on our program, and providing the information they need to be successful is an integral part of that.
Out of Scope
The inevitable corollary to setting up a program’s scope is to simultaneously call out what is out of scope. Anything not explicitly in scope is presumptively out of scope; however, sometimes there may also be sub-components of the in-scope assets that you prefer hackers not test. For instance, you may not want people sending in support tickets that clog up internal team queues, or you may want to exclude third-party hosts that run on infrastructure other than your own (blogs, forums, etc.). Keep in mind that hackers are penalized for making out-of-scope submissions, so it’s important to set proper expectations around what should and shouldn’t be submitted to the program.
Exclusions / Ratings
Very similar to, but slightly different from the out-of-scope section, is the inclusion of program exclusions. Program exclusions are typically any vulnerabilities or vulnerability classes you don’t want hackers submitting to the program. Historically, this has been a very long list of finding types that most program owners don’t like seeing (e.g. clickjacking, missing security headers, etc.). If you look at a number of bounty programs on the web, you’re likely to see some variant of this – “non-qualifying submissions,” or something to that effect. However, this list very quickly gets long, opaque, and hard for hackers to follow.
In an effort to help standardize this information and these expectations, Bugcrowd has developed the Vulnerability Rating Taxonomy (VRT), which lists commonly submitted vulnerability types and ranks them by technical severity, P1–P5 – where P1 is a critical issue and P5 is an informational, unrewarded finding type (thereby disincentivizing P5s as non-qualifying issues by default). Having this definitive, documented list sets clear expectations for hackers around what they should and shouldn’t be submitting, and removes the need for program owners to draft and update a lengthy, unwieldy list of findings they’re not interested in seeing. For this reason, we highly recommend that all programs follow the Bugcrowd VRT, which has the dual benefit of being advantageous for both program owners and researchers, while also reducing clutter on the program brief.
Rewards
In our last post in this series, we spent a fair amount of time talking about rewards and the prioritization of findings. While we recommend referring to that post for more detail, we’ll quickly cover some high-level points around rewards as a refresher.
- We encourage all clients to offer cash rewards, as that’s the single best motivator for driving activity to your program. However, in offering rewards, it’s also imperative that they match or exceed market value, so as to incentivize researchers to participate in your program.
- All rewards should be linked to a clearly specified priority level on the program brief – where vulnerabilities that fall into a specific category are easily understood as being worth between X and Y dollars. This is made easy and simple by utilizing the VRT and setting designated reward ranges for each priority level.
- Rewards will need to grow over time. It can be tempting to set and forget rewards on your program, but as time goes on and the low-hanging fruit is knocked out, it’s important to constantly re-evaluate and adjust rewards to grow in line with your security maturity.
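The first two points above can be sketched as a simple priority-to-range mapping, where every VRT priority level on the brief maps to a clearly specified payout range. The dollar amounts here are hypothetical illustrations, not recommended market rates.

```python
# Illustrative mapping of VRT technical-severity priorities to reward
# ranges. Amounts are hypothetical; P5 is informational and unrewarded
# by default.
REWARD_RANGES_USD = {
    "P1": (4000, 15000),  # critical
    "P2": (1500, 4000),
    "P3": (600, 1500),
    "P4": (150, 600),
    "P5": (0, 0),         # informational / non-qualifying
}

def reward_range(priority):
    """Look up the payout range for a VRT priority, e.g. 'P1'."""
    try:
        return REWARD_RANGES_USD[priority]
    except KeyError:
        raise ValueError(f"unknown priority: {priority!r}")
```

Publishing ranges like these per priority level means a hacker triaging their own finding against the VRT immediately knows roughly what it’s worth, with no back-and-forth required.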
Disclosure + Rules
Finally, in building our program brief, it’s also important to clearly outline our policy on disclosure, as well as any supplemental rules or guidelines for participation. While Bugcrowd believes public disclosure to be an important part of the vulnerability reporting ecosystem, and encourages our clients to work with hackers to disclose issues once a fix is released, we also understand, recognize, and support our customers’ individual disclosure policies.
By default, on public vulnerability disclosure programs and bug bounty programs, the standard policy is coordinated vulnerability disclosure with the option to select non-disclosure. If you’d prefer a policy of non-disclosure on a public program, we’ll work with you to achieve your needs. For private bug bounty programs, private pre-launch VDPs, and for Next-Gen Pen programs, the default policy is non-disclosure with the option to select coordinated disclosure, if desired. In the absence of a clearly articulated disclosure policy, or if there’s ambiguity, we, as a platform, default to “ask first” for both the hacker submitting the vulnerability, as well as the program owner.
That said, coordinated disclosure has a number of marked benefits – namely:
- It allows other hackers to see and learn from the interesting, published work of other testers
- It shows that your organization welcomes the reporting of security issues, and works effectively with the security community to identify and remediate findings
- It demonstrates a proactive, advanced security posture – where you can actually gain publicity for the quick, timely, and effective remediation of an issue a researcher reported to your organization
Lastly, once all the above points are drafted, it’s worth a final, quick read of the brief to consider your business’ unique use cases and any stipulations or considerations you need researchers to be aware of. The goal is to keep researchers in line with your expectations as they test. Calling these points out explicitly ensures all parties are on the same page going forward.
With all of the above covered, you should now have a pretty solid start on your program brief! Your Bugcrowd SA may have some additional suggestions prior to launch, but so long as you’ve got the above points covered, you have a healthy start and a solid base to work from.