In the past several months, bug bounties have gained popularity in the press and have been adopted with increasing velocity by enterprise organizations. Along with this popularity, the bug bounty model has also drawn some criticism, and various actors within the industry have raised some very good questions. In keeping with our commitment to transparency, honesty, and education, we thought it was as good a time as any to address two specific areas that have cropped up in the past several months, quality and impact, by examining some common misconceptions about bug bounties.

Misconception #1: Most automated scanners could find most of the vulnerabilities coming out of bug bounty programs

Since day one, we have been proponents of a layered approach to security, prioritized based on each organization’s specific capabilities, needs, sensitivities, and goals. For many organizations, a large part of that approach is running a variety of vulnerability scanners. We accept this as a general security best practice: automation will always be better than humans at the things automation is good at.

It’s also no secret that, no matter how advanced, automation only goes so far: it can only find what it knows to look for. This leaves a gap that requires human creativity to fill, and crowdsourcing that creativity is by far the most effective way to bring it into the mix. In fact, as the bug bounty model has matured, most organizations have begun to disincentivize the reporting of bugs that most scanners can find, as reflected in our Standard Exclusions list.
To borrow a line from Jim Hebert in our recent podcast with Fitbit, “bug bounties are part of our complete breakfast,” and that breakfast includes automation and penetration testing.

Misconception #2: You’ll never have the same quality of testers working on your bug bounty program as you would with a pen test.

It’s true that for public bug bounty programs there is no way to verify the combined talent or skill being applied at any given time. It is also true that the bug bounty model leverages volume to increase the number of skilled people working on the problem, and in our experience the volume of high-value results improves radically as a result. For customers that need some verification, we run private programs with a small, skills-vetted, and trusted crowd. In both instances, the crowd always wins.

The majority of organizations we’ve helped run bug bounties already had robust security testing programs in place, including automation and penetration testing, yet we still find solid results, usually within the first 24 hours. The crowdsourced model succeeds because penetration testers are limited by their individual skill sets and the number of hours assigned to the project. Compare that with opening the problem up to the creativity of a variety of hackers and incentivizing each to do what they, uniquely, do best. A collaborative contribution model will beat the pay-for-effort model every time.

Learn more about what makes the bug hunting community unique, and what makes the bug bounty model so powerful, in our recent report, ‘Inside the Mind of a Hacker.’

Misconception #3: Bug bounties aren’t just a ‘quick fix’; they are costly, and resources should be spent on eliminating bugs prior to GA.

Bugs start in development. We completely agree that a bug found in the wild is more expensive to fix than one caught in development. Following from what we’ve discussed thus far, we also agree that there is no such thing as 100% secure code: vulnerabilities will always get past vulnerability scanners, your team, and, yes, your pen test firm of choice. Even if you had a full team of the world’s best security talent, you would still have bugs.

We also think that underfunding of the SDLC is a product of companies not understanding the downside of these issues. That is why, while we don’t hold vulnerability discovery out as a silver bullet, we do believe it’s a productive tool for informing how much budget and attention the SDLC should receive.

A side note: Paying $500 for a vulnerability that could be found by an open-source code review tool or scanner is neither economically rational nor sustainable, and the practice should stop.

Another side note: Bug bounties don’t mean writing a blank check to the crowd. They have evolved. If this is news to you, you should definitely get in touch with us; we preempted this objection back in 2012 and have been addressing it for happy customers ever since.

Misconception #4: Bug bounties don’t produce any quality findings.

In our programs alone, a Priority 1 vulnerability (the most severe class of bug) is submitted on average once every 27 hours. That is nearly one critical, drop-everything-and-patch vulnerability per day, or more than 300 a year (8,760 hours in a year ÷ 27 ≈ 324) that never would’ve been caught through automation and, much to our customers’ chagrin, were missed by their trusty quarterly or annual pen test. And that’s just P1s. Of the 75,000+ bugs our researchers have found, thousands of additional high-severity bugs have been fixed by our customers.

We’ve found that until you run a bug bounty yourself, you’ll never truly understand how powerful, creative, and smart the bug hunting community is. But you can take our word for it, or you can check out some of our resources to learn more about bug bounties and the results coming out of them:

  • 2016 State of Bug Bounty 
  • Big Bugs Podcast by Jason Haddix
  • Financial Services Industry Report
  • Bugcrowd Case Study

The unprecedented growth in bug bounties over the past several months is exciting and disruptive, and, if we’re being completely honest, it happened faster than anyone expected. We are excited to continue down this path and welcome any and all feedback from the community.