During RSA, Bugcrowd founder and CTO Casey Ellis sat down with CyberScoop’s Greg Otto to discuss the future of work and how bug bounty programs have always been part of the evolution of penetration testing. Below are a few highlights.
You can watch the full video on cyberscoop.com.
Bugcrowd’s always been in the pen testing space
GO: Talk to me about the strategy of a crowdsourced pen test.
CE: Bugcrowd’s always been in the pen testing space. The initial framing of what we were doing was around the use of crowdsourcing — the use of the whitehat community — to improve things like pen testing through the economics and the resourcing model. What we did was use vulnerability disclosure programs and bug bounty programs as a standalone market to grow that piece, and then take the learnings from that to pivot into the more consulting-style model. The announcement that we made over the last nine months or so around Next Gen Pen Test is really an evolution of that.
Pen testing is a logical application of crowdsourcing
CE: Prior to releasing Next Gen Pen Test, Bugcrowd ran private bug bounty programs for that use. What we were hearing from customers was that they had business requirements for testing that go beyond just the ability to be really good at finding bugs. Things like methodology, attestation, coverage reporting, being able to demonstrate assurance as well as the discovery of bugs, as things start to get fixed.
Pen testing is a logical progression–a logical application of crowdsourcing. This has been part of the game plan the whole time. You get good at understanding what the Crowd can do, get very good at connecting them to as much of the security problem space as we’re capable of, and then incrementally add to that. There’ll be more of that from us because that’s been part of the strategy the whole time.
The golden rule of vulnerability disclosure: Aligning expectations
GO: Let’s talk about vulnerability disclosure, specifically the relationship between researchers and companies. You’re at an interesting vantage point because you’re the middleman when it comes to the actual crowdsourced bug bounties. What conversations are you having with both the researchers and the companies when it comes to how they’re interacting with one another when there is a bug disclosure?
CE: The golden rule, and what we work to help establish, is actively setting reasonable expectations and helping keep both groups aligned as a program changes. You keep the commitments that you made when you started as you go through the program, regardless of what pops up along the way. That’s not just a vendor thing — it’s actually a researcher thing as well.
There is the ability to use disclosure of a good bug as something that can promote a pen test company, an organization, or even an individual. To a large degree I think that’s actually a really good thing. But if that causes a researcher to go ahead and prematurely disclose, or to hype an issue out of context when the business has decided, ‘No, that’s not something that we want to go out and fix just yet,’ then all of a sudden that’s mismatched, it’s imbalanced, and the entire relationship becomes adversarial again.
Education on both sides is coming back into the center
CE: We put a lot of work into trying to get everyone on the same page — hackers and companies have historically sucked at talking to each other. We’ve made some progress on that, which is good. My goal in this is to make sure that that alignment continues going forward and that both sides are actually coming to the party. I think for the last six years most of it’s been actually getting the companies to come onboard and not treat responsible disclosure as a moral loading or some sort of forcing function they can place on the hacker community. Realizing that it’s their responsibility as well to treat this information as important and useful, and then to be responsive with that too. Now, as the researcher community grows, and the velocity of this increases, education on both sides is coming back into the center.
You can’t eat the whole elephant at once
GO: Speaking of education, you recently had an announcement around training. Tell us a little more about that.
CE: We’re about taking human creativity and applying it to as much of the security space as we can. What we’re known for is finding critical issues in production systems at the end of development cycles. But part of what we’ve been doing, and what a part of this announcement focuses on, is shifting left in that process to help in development, UAT, and developer education.
We’ve announced a partnership with Secure Code Warrior. The thing that’s cool about them is that they’ve made what’s fundamentally a really difficult and frankly sometimes boring thing — getting engineers to sit through training and actually understand how to be better at coding securely — really fun.
They’ve approached it with a lot of design thinking, gamification — all of that stuff that we actually do a lot of at Bugcrowd as well — to maximize the likelihood that an engineer is going to go through this stuff, actually get better, and not introduce bugs in the first place. The way the integration works is the Crowd comes in with their perspective, adversarial context, and feedback to steer which parts of that training are most relevant to the organization next. There’s this huge body of things that you can teach an engineering team. You can’t eat the whole elephant at once, so it makes sense to do the things that are most important first. That’s where we’re coming together. I’m pretty excited about that.