At Bugcrowd, one of the most common questions we get from customers is, “How do I grow and improve my bug bounty program over time?” And as the program matures, “How do I know what to do next, and when?”

The importance of these questions can’t be overstated. As the famous saying goes, you can’t manage what you can’t measure. By defining the objectives and metrics for your bug bounty program as a first step, you’ll gain a critical tool for understanding whether your program is on track and healthy, as well as when it’s time to make adjustments.

Here are some common success metrics for a managed bug bounty program on the Bugcrowd Security Knowledge Platform™. Note that while these measurements are expressed in months, they can just as easily be applied on a yearly, quarterly, or weekly basis; in practice, though, a monthly cadence is usually the most useful interval for measuring efficacy. Rich analytics and reporting in the Bugcrowd Platform, along with your Customer Success team, help you track all of them as needed:

X submissions per month

This metric is worth tracking if your goal for the program is raw activity; it provides insight into whether researchers on your program (aka the “crowd”) are testing in-scope assets and reporting issues. Regardless of the validity of the submissions, this number tells you whether your crowd is at least making an effort to find issues. It is especially useful for targets that have been thoroughly tested and are unlikely to yield a high number of unique findings per month.

X valid submissions per month

This is a more stringent version of the above that indicates whether your crowd is identifying valid issues. (As a general rule, roughly 30% of all findings are duplicates, 30% are invalid, and 30% are unique, valid issues.) Depending on your program’s scope, maturity, and the number of in-scope assets involved, a reasonable target could be one valid finding per review period, or it could be 50. A rough back-of-the-envelope calculation is sketched below.
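
As a minimal illustration of how that rule of thumb translates into a monthly target, here is a small Python sketch. The 100-submission volume is a hypothetical input, and the percentages are only the rough averages cited above; your program’s actual split will vary.

    # Back-of-the-envelope estimate using the rough ~30/30/30 split cited above.
    monthly_submissions = 100   # hypothetical raw submission volume for your program

    duplicate_rate = 0.30       # approximate share of duplicates
    invalid_rate = 0.30         # approximate share of invalid reports
    valid_unique_rate = 0.30    # approximate share of unique, valid issues

    expected_valid = monthly_submissions * valid_unique_rate
    expected_duplicates = monthly_submissions * duplicate_rate
    expected_invalid = monthly_submissions * invalid_rate

    print(f"Of ~{monthly_submissions} submissions, expect roughly "
          f"{expected_valid:.0f} unique valid, {expected_duplicates:.0f} duplicates, "
          f"and {expected_invalid:.0f} invalid.")

A calculation like this can help you set a “valid submissions per month” goal that is realistic for your current submission volume rather than an arbitrary number.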

X valid P1/P2 findings per month

This is an even more stringent variant of submission measurement that focuses on critical vulnerabilities. For most programs, critical/high findings may be few and far between, so this number may be relatively low; but if finding critical vulns is your goal, it’s the right one to track. (Note that you can track multiple metrics concurrently. For example, you could track this alongside total submissions overall; the choice isn’t either/or, and the right combination depends on your goals.)

X dollars awarded per month

This metric is a reasonable reflection of value for most programs. For instance, if your program awards $500 versus $5,000 over a given period of time, it’s a safe bet that the program paying the latter amount has derived more value from its findings than the one paying the former. The right target will vary depending on budget, departmental objectives, and so on. One novel, self-sustaining approach is to allocate a fixed amount of reward spend per month (say $5,000) and roll whatever isn’t rewarded that month into the subsequent month, increasing the earnable pool accordingly. This allows for predictable budgeting while organically increasing bounty payouts as findings become sparser. The sketch below shows how that rollover works.
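
As a minimal sketch of that rollover model in Python: the $5,000 monthly allocation is the example figure from above, and the month-by-month payout numbers are purely hypothetical.

    # Sketch of the rollover bounty budget described above.
    # The allocation and payouts are illustrative, not prescriptive.
    monthly_allocation = 5000            # fixed budget added to the pool each month
    payouts = [3200, 4100, 1500, 0]      # hypothetical rewards actually paid each month

    pool = 0
    for month, requested in enumerate(payouts, start=1):
        pool += monthly_allocation       # predictable spend: same allocation every month
        paid = min(requested, pool)      # can't pay out more than the pool holds
        pool -= paid                     # anything unspent rolls over to next month
        print(f"Month {month}: paid ${paid}, earnable pool now ${pool}")

Note how a quiet month (like month 4 here) leaves the pool larger for the next period, so rewards can grow organically as findings become harder to come by.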

X targets added per month

If your goal is to expand coverage as a way to reduce risk across your attack surface, this is the ideal measurement because it indicates whether more assets are moving into scope.

Existence of the program

For some organizations, especially those with more mature programs, the goal is often simply that the program exists. At some point, rewards may reach a peak where it’s no longer tenable to keep increasing them, or submission volume may taper off because earlier findings have been remediated and better coding practices have been adopted. At this point, the program serves as something of an “insurance policy” that proactively incentivizes people to report issues, even if relatively few issues are reported otherwise. The fullest expression of this goal is taking the program public: by allowing anyone on the internet to participate, you create the most mature, realistic sample size.

X testers performing verified coverage per month

Finally, some organizations may want or need assurance that researchers are testing, but in the absence of findings, the question becomes “How do we know whether people are testing as deeply as we need them to?” To answer it, we recommend running a penetration test that pays researchers a flat rate to follow a testing methodology against your in-scope assets. Whether testing the login form, implementation issues, or other functionality, the crowd can provide the visible coverage that you and your organization need.

Next steps

After you’ve defined goals and metrics, the next step is to implement them in your program and put yourself in a position to iterate meaningfully, whether by increasing incentives, growing your crowd, adding scope, or any combination of these as needed. To provide a roadmap for that journey, we’ve published a new customer guide that explains everything you need to know about creating, tracking, and responding to bug bounty metrics, illustrated with simple examples. (Or, watch an on-demand webinar on this topic.) If you have any questions or need further help, we’re always here for you!