A bug bounty program is a sponsored, organized effort that compensates hackers for surfacing and reporting otherwise unknown network and software security vulnerabilities, thereby enabling digitally connected businesses to manage and reduce their cybersecurity risk.
Previously, the term “bug bounty” was used synonymously with the term “crowdsourced security.” With the arrival of additional ways to engage with a crowd, like penetration testing as a service (PTaaS) and attack surface management, the two terms have now been decoupled. Crowdsourced security is a resourcing model, while bug bounties involve an incentive (“pay for results”) model that encourages the discovery of severe flaws based on the potential for monetary rewards.
For example, if a hacker involved in a bug bounty reports a cross-site scripting vulnerability but the same vulnerability was already noted by the customer’s internal security team or was uncovered by another hacker first, the individual is not paid for that submission. In another example, two hackers may uncover different types of server security misconfigurations. If one is an email spoofing flaw and the other is the use of default credentials, both hackers would be paid, but the latter would command a higher rate due to its greater potential business impact.
This model greatly reduces the average cost per vulnerability and ensures that customers are only paying for value received—which makes security return on investment (ROI) much easier to calculate.
In 1851, Alfred Charles Hobbs was paid 200 gold guineas by a lock manufacturer for rising to the challenge of picking one of its strongest locks. Fast-forward to the mid-90s and early 2000s, when Netscape, iDefense, Mozilla, Google, and Facebook all had their own self-managed bug bounty programs, offering severity-based rewards to anyone who could identify vulnerabilities in their web applications.
The phrase “bug bounty” was coined in early 1995 by a Netscape technical support engineer. Several months later, on October 10, 1995, Netscape launched the first technology bug bounty program for the Netscape Navigator 2.0 Beta browser.
Some organizations with large security teams and at an advanced stage of security maturity may still run their own bug bounty programs, but most that start in self-managed mode eventually migrate to “bug bounty as a service” solutions when they reach a certain scale. Generally, running bug bounty programs is outside the core competencies of most teams.
In crowdsourced security, “the Crowd” is the term used to refer to the massive, global community of hackers (also referred to as security researchers, ethical hackers, or white hats) who participate in bug bounty programs. These individuals are independent actors who work on crowdsourced security programs that they find to be fulfilling, lucrative, or both, either as their sole occupation or as a side hustle. The Crowd is the lifeblood of any crowdsourced security or bug bounty program and the main reason why the approach is so effective.
Hackers can and do hunt bugs on multiple platforms—no provider has an exclusive monopoly on them—so it’s important for a platform to match the right crowd to the end user’s needs at the right time.
Some crowdsourced security vendors boast a high number of hackers working on customers’ programs, but quality is a more important metric to focus on when working with the Crowd. Organizations want to be sure they’re working with a vendor that uses data to source and activate hackers with precisely the right skill sets and experience for their programs to boost engagement and critical findings—not just “throw bodies” at a problem.
Bug bounties are a pay-for-results approach to proactive security testing designed to maximize the discovery of high-impact vulnerabilities. Through managed bug bounty programs, organizations are given access to thousands of highly skilled and thoroughly vetted hackers ready to help organizations find vulnerabilities that other tools miss. The global nature of the Crowd means 24/7 talent availability, with launch timelines that blow traditional utilization-based models out of the water. The ideal provider also offers 24/7 vulnerability visibility and reporting, fine-grained crowd matching to ensure access to the right talent, and seamless business process integration with a development team’s favorite ticketing and vulnerability management solutions.
Unless their security maturity is exceptionally high, most organizations will choose to work with a managed bug bounty provider. Providers differ in robustness, comprehensiveness, and depth, so when comparing them, it’s important to understand each of their approaches.
Bug bounty programs can take on many different forms depending on an organization’s goals, budget, testing timelines, and interest in specific skills. Before engaging with a bug bounty provider, ask the following questions:
While a few large enterprises do have the team required to manage their own bug bounty programs, these are usually highly visible, well-known, and reputable brands that can attract the attention of the broader security community. Organizations of all sizes typically opt for managed programs for a few reasons:
For example, a Bugcrowd customer in the communications space launched its bug bounty program as self-managed but failed to assign competitive payout rates and was lax in responses to submissions. This resulted in a sudden drop in engagement, with just 4 P1 (critical severity) vulnerabilities found over two years. After switching to Bugcrowd’s managed model, the customer received 50 P1s and 90 P2s (high severity) in the following two years.
Choosing a provider to handle a bug bounty program is the right choice for most organizations. Regardless of who is responsible for triage, it’s important that the team involved has the following attributes:
Technical depth and hacker mindset—Ability to replicate vulnerability exploitations to ensure validity, keeping the “attacker view” in mind.
Triage expertise—Ability to properly assign categories based on industry standards, as well as associated rewards.
Deep triage toolbox—Triage can be quite challenging at scale for even the most talented engineers, so access to tools, historical data, and workflow automation is critical.
Strong communication skills—Ability to balance two sometimes conflicting perspectives and needs.
Patience—Triage is quite repetitive in nature but requires constant attention to detail, which can lead to burnout. Hackers also often require constant communication, which can leave those responsible feeling more like a help desk than a valued security engineer.
Commitment to security above all else—Escalations of valid vulnerabilities that aren’t being addressed sometimes require a bit of tough love to push through to acceptance and remediation.
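To make the category-and-reward assignment described above concrete, here is a minimal sketch of a triage step. The priority labels loosely follow a P1–P5 scheme like the one mentioned earlier, but the payout amounts and taxonomy are purely hypothetical; real programs define their own tiers.

```python
# Hypothetical severity-to-reward mapping, for illustration only.
# Real bounty programs publish their own taxonomy and payout tiers.
REWARD_TIERS = {
    "P1": 10_000,  # critical, e.g., remote code execution
    "P2": 4_000,   # high, e.g., stored XSS on a primary application
    "P3": 1_200,   # medium
    "P4": 300,     # low
    "P5": 0,       # informational, no reward
}

def triage(priority: str, is_duplicate: bool) -> int:
    """Return the payout for a validated submission (0 for duplicates)."""
    if is_duplicate:
        # Pay-for-results: only the first valid report of a flaw is rewarded.
        return 0
    return REWARD_TIERS.get(priority, 0)

print(triage("P1", is_duplicate=False))  # first valid critical finding: 10000
print(triage("P1", is_duplicate=True))   # duplicate earns nothing: 0
```

This also captures the earlier point about duplicates: a submission already reported by another hacker or the internal team earns no reward, regardless of severity.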
A scope is the defined set of targets listed by an organization as assets that are to be tested as part of a particular engagement. Things that are listed as “in scope” are eligible for testing, and things that are “out of scope” are not to be tested. Within the context of a bug bounty, what’s in scope is what hackers are incentivized to report (and are rewarded for), and what’s out of scope is off limits, and no compensation is given for findings related to those targets. Generally, it’s best to reach a maturity stage that implements an open scope as quickly as is feasible because attackers have no limits where targets are concerned.
Organizations generally choose between a limited scope, a wide scope, or an open scope for their programs. Once an organization establishes what’s in scope, it can begin writing the “bounty brief” that will help communicate to hackers its targets, priorities, exclusions, and incentive scheme.
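In practice, a bounty brief’s scope often reduces to explicit in-scope and out-of-scope target lists, with exclusions taking precedence. A minimal sketch of checking whether a reported target is eligible for reward (all domain names here are hypothetical placeholders):

```python
from fnmatch import fnmatch

# Hypothetical scope definition; a real bounty brief lists actual assets.
IN_SCOPE = ["*.app.example.com", "api.example.com"]
OUT_OF_SCOPE = ["legacy.app.example.com"]  # exclusions override wildcards

def is_eligible(target: str) -> bool:
    """A finding is rewardable only if its target is in scope and not excluded."""
    if any(fnmatch(target, pattern) for pattern in OUT_OF_SCOPE):
        return False
    return any(fnmatch(target, pattern) for pattern in IN_SCOPE)

print(is_eligible("portal.app.example.com"))  # True: matches in-scope wildcard
print(is_eligible("legacy.app.example.com"))  # False: explicitly excluded
```

A limited-scope program keeps these lists short and specific; widening toward open scope means broadening the in-scope patterns until nearly everything an attacker could touch is fair game.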
After determining what’s in scope, it’s time to consider where in the development lifecycle focused testing is most appropriate. Where possible, we suggest utilizing preproduction/staging environments rather than production. There are many reasons to consider this option, including reduced impact on customers and easier credential provisioning for hackers:
The power of crowdsourced security stems from its numbers. While this can refer to the total number of people involved in a program, it also refers to the broader network of available talent. More thoroughly vetted and continuously ranked hackers mean that organizations will always have the team that best fits their testing environments. Because more people on a program also means more vulnerabilities, Bugcrowd recommends starting small, with invite-only access, until vulnerabilities reach a manageable level and organizations feel comfortable graduating to public access (if appropriate).
Private programs are invite-only programs that target a select group of hackers based on technical and business requirements. No one else in the community, or beyond, will be able to see details on or access these private programs.
With public programs, any registered hacker can see, access, and work on their stated scope of assets. Public programs typically have a much broader scope, which allows for a wider range of potential vulnerabilities to be identified by a larger set of unique skills and experiences. Check out some of Bugcrowd’s public programs on our website.
While the structure of crowdsourced security programs enables continuous testing where it was previously not possible, it may be the case that testing or budget cycles limit an organization to only ~2-week testing sprints.
A strong vulnerability discovery solution is weak without a way to facilitate rapid remediation. While security teams aren’t responsible for providing the fix, they are better served if they can make the remediation process as easy as possible for the development team. Therefore, it is important to ensure that a provider can offer vulnerability-specific remediation advice, and to decide which integrations matter the most to a development team for the presentation of that information. The Bugcrowd Platform offers pre-built integrations with JIRA, GitHub, ServiceNow, Trello, and Slack, in addition to webhooks and APIs, making it easier for security professionals to enqueue prioritized vulnerabilities and for developers to see what should be addressed first, how to go about it, and whether anything else stands in the way. Context is key.
As JIRA is the most common ticketing and management system for most users, it’s important to accommodate the following top three use cases:
Centralized JIRA security project: The AppSec team has one “security” JIRA project to manage its security work. Having one security JIRA project between security and development is a great way to centralize work; it is simple to maintain, as there is no logic needed to understand where tickets are created.
In developer JIRA projects: The AppSec team pushes security tickets into a developer’s JIRA projects while respecting the developer’s ownership. Enterprise organizations typically have more than one development team or business application, which requires more than one JIRA project.
Hybrid: This approach combines both models, with one “security” project and a linked issue in a developer’s JIRA projects. The primary benefit of this approach is maintaining control if development makes edits. This provides an additional layer of accountability and visibility.
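The three routing models above can be sketched as a simple dispatch step in a ticket-creation workflow. The project keys, team names, and payload shape here are hypothetical; a real integration would call the JIRA REST API (or use a platform’s pre-built integration) to create the issues.

```python
# Hypothetical routing of a validated vulnerability to JIRA project(s).
# Project keys and payload fields are illustrative, not a real API contract.
TEAM_PROJECTS = {"payments": "PAY", "mobile": "MOB"}

def route_ticket(vuln: dict, mode: str) -> list[str]:
    """Return the JIRA project key(s) a new ticket should be created in."""
    team_project = TEAM_PROJECTS.get(vuln.get("owning_team", ""), "SEC")
    if mode == "centralized":
        return ["SEC"]                # one shared security project
    if mode == "developer":
        return [team_project]         # ticket lives with the owning dev team
    if mode == "hybrid":
        return ["SEC", team_project]  # security ticket plus linked dev issue
    raise ValueError(f"unknown routing mode: {mode}")

print(route_ticket({"owning_team": "payments"}, "hybrid"))  # ['SEC', 'PAY']
```

The centralized model needs no routing logic at all, which is why it is the simplest to maintain; the developer and hybrid models trade that simplicity for ownership and accountability, respectively.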
Bug bounties can greatly reduce the risk of vulnerabilities to an organization. Leveraging a solution like the Bugcrowd Platform can relieve a lot of the burden, but this doesn’t remove the importance of program owners being active participants in their programs. Bug bounty programs take time to maintain and grow, and broad organizational commitment is required to make them successful. It’s important for security professionals to have an open dialogue with their executive teams about the implications of such a program. This could include potential impacts on budget structures (to accommodate a variable bounty pool), as well as impacts on engineering should a sudden influx of vulnerabilities disrupt current processes.
Additionally, bug bounty program owners must commit to timely platform responses, including accepting validated vulnerabilities or addressing program issues raised by a provider.
Approaching bug bounty programs with a “crawl, walk, run” mindset is a recipe for success for any organization of any size. Big public launches drive press coverage and broader awareness, but these aren’t always appropriate if a security team is not quite ready for that volume. Processing and payment are one matter, but once program owners know about an issue, they should also be prepared and equipped to resolve it promptly. An example of the “crawl, walk, run” approach includes:
Finally, iteration is an important part of any successful bug bounty program. What worked to fuel hacker engagement yesterday might not work today. Bugcrowd has a decade of experience in identifying risks to growth and, as a result, relies on three key “levers” to encourage long-term success.
Bug bounty programs, pen testing, and VDPs are standard offerings of an elite crowdsourced security platform. However, the difference between these three offerings can be a little confusing, especially for organizations looking to combine products as part of a layered security approach.
A VDP is a secure, publicly available channel for anyone to submit security vulnerabilities to organizations, helping them mitigate risk by enabling the disclosure and remediation of vulnerabilities before they are exploited by bad actors.
In contrast to bug bounties, VDP submissions are not incentivized by cash rewards. Publishing a vulnerability report after it has been fixed is another common attribute of VDPs and gives hackers the opportunity to share knowledge and enhance their own reputation in the process.
A VDP is also open scope, meaning that anybody can participate and attempt to find vulnerabilities on any target/asset belonging to an organization.
Whether or not an organization also has a bug bounty program, we highly recommend that every organization leverage a VDP. A VDP should be a baseline security standard for everyone. A VDP establishes a “see something, say something” mindset within an organization that carves out a global channel for vulnerability reports and publicly demonstrates that a company is doing everything possible to protect its customers, partners, and suppliers.
According to the National Institute of Standards and Technology (NIST), pen testing is defined as “security testing in which assessors mimic real-world attacks to identify methods for circumventing the security features of an application, system, or network.”
In other words, pen testing is a simulated cyberattack carried out by authorized third parties (known as pentesters) who test and evaluate the security vulnerabilities of a target organization’s computer systems, networks, and application infrastructure.
Pen tests have three defining characteristics: they are performed by external testers, are typically time bound, and usually follow a testing methodology. Many organizations also expect a final report to demonstrate regulatory compliance to an auditor.
It’s common to conflate bug bounty programs and pen testing because both rely on attacker tools, techniques, and mindsets for vulnerability discovery under a predefined scope. Pen testing and bug bounty programs have very similar goals but differ with respect to the intensity of the assessment. Pen tests are methodology driven and are best for coverage, whereas bug bounties are better for risk reduction.
With this in mind, one can easily envision a layered strategy for both compliance and risk reduction that combines:
Hackers aren’t waiting, so why should you? See how Bugcrowd can quickly improve your security posture.