There are many key performance indicators (KPIs) of a successful bug bounty program: some that matter more to program owners, and some that matter more to researchers. At Bugcrowd, we aim to align the importance of these KPIs across all involved parties to better articulate what is most helpful and valuable to each.
In this post, we will explore the ever-important metric of response time. This value is a key factor both in maintaining a healthy and successful program and in keeping researchers engaged and involved. Communication, in both swiftness and effectiveness, is key to staying on the same page throughout the vulnerability reporting and review process. Our recent post regarding proper escalation paths when communication falls through is proof of that.
Please keep in mind that this post is primarily targeted toward those running ongoing programs, both public and private, although it is also important to review submissions quickly once your On-Demand Program has closed.
A brief overview of the bug submission process:
[Diagram: overview of the bug submission process]
What is a ‘good’ response time?
Response time is measured from the time a customer receives a triaged bug to the time they respond to the submission. We expect all submissions to be responded to within 14 days, but obviously, quicker is better. The winner of our first annual Buggy Awards for ‘Best Response Time,’ Fitbit, had a response time of less than one day, which, while amazing, may be unrealistic for some organizations.
In general, we rarely receive support inquiries for bugs triaged within a week, and 4-5 business days is an ideal and realistic benchmark to aim for.
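The metric above can be sketched in a few lines of code. This is purely illustrative (the function names, field layout, and threshold constants are our own, not part of any Bugcrowd API), and it uses calendar days rather than business days for simplicity:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds based on the benchmarks above.
SLA_TARGET = timedelta(days=5)   # ideal: roughly 4-5 days
SLA_MAX = timedelta(days=14)     # expected upper bound

def response_time(triaged_at: datetime, responded_at: datetime) -> timedelta:
    """Time elapsed from triage to the customer's first response."""
    return responded_at - triaged_at

def sla_status(elapsed: timedelta) -> str:
    """Classify a response time against the benchmarks above."""
    if elapsed <= SLA_TARGET:
        return "on target"
    if elapsed <= SLA_MAX:
        return "acceptable"
    return "overdue"

triaged = datetime(2016, 3, 1, 9, 0)
responded = datetime(2016, 3, 4, 17, 0)
elapsed = response_time(triaged, responded)
print(elapsed.days, sla_status(elapsed))  # 3 on target
```

A real implementation would pull these timestamps from your tracker and account for business days, but the core comparison is this simple.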
We understand that reviewing submissions is not the only task on your plate each day. However, it is incredibly important for the program’s health and success that someone on your team sets aside time every day or so to stay up to date on submissions.
Why is having a good response time critical?
It may go without saying, but reviewing and patching incoming vulnerabilities quickly is crucial: it reduces redundant testing and, of course, shortens your window of exposure. Beyond that, however, consistently reviewing submissions quickly has a trickle-down effect.
Reputation
To researchers, a better-than-average response time is the sign of a well-run program: it shows that the program owners care about the submissions coming in and take their program seriously.
In a recent survey, we asked a segment of researchers which program was their favorite and why. After ‘high rewards’ and ‘difficult targets,’ the most commonly cited positive attribute was a program’s responsiveness.
Cutting through the noise
Positive reputation goes a long way.
Keep in mind that there isn’t an unlimited pool of qualified testers participating in bounty programs. Just as researchers compete to be the first to discover a bug, you are competing to attract top talent. You, as a program owner, need researchers to invest their time and effort into testing your application. By nurturing a healthy relationship and a positive reputation throughout the researcher community, your program will stand out among the bounty programs out there, to your benefit.
Better coverage
As you should understand by now, one of the top benefits of running an ongoing bounty program is continuous testing. To get as close to 24/7 testing as possible, it’s crucial to maintain a good relationship with the researcher community and stand apart from the competition. If researchers know that the effort they invest will be reviewed and acted upon in a reasonable timeframe, there’s a stronger chance that they’ll continuously test.
The key is to be a good member of the bug bounty community.
This is a symbiotic relationship; researchers and customers must work together and understand one another’s needs, which is why we invest heavily in communicating expected behaviors and what it takes to make a program successful. You’ll be seeing more of these types of posts in the future.