One of the most common questions we encounter in conversations around crowdsourced security programs is: “Why would I invite researchers to hack my assets?”, “Why should I trust the crowd?”, or some variant thereof. There are different permutations of this question, but most rest on the assumption that the “crowd” is an unknown, shadowy group of actors whose motives are opaque and possibly nefarious. As a result, there is a lot of trepidation around integrating the crowd into an organization’s security strategy. It’s good to be cautious when considering opening up your assets to hackers, and questions of this nature are a natural part of your due diligence when evaluating Bugcrowd’s services. In this blog we’ll address the most common questions we receive on this front and offer some advice for implementing a crowd-driven security program.
First:
Why, in the name of everything sane, would I invite researchers to hack my assets?
In short: because the bad guys are doing it anyway. Objectively speaking, a black (or grey) hat hacker doesn’t need your permission to look for vulnerabilities in your publicly facing systems. (A quick note on definitions: black hats are nefarious bad actors, white hats are ethical researchers with no nefarious intent, and grey hats are those who play whichever side suits them at the time.) Laws aside, nefarious parties are going to hunt for issues with or without your permission. If there’s any doubt, just ask anyone who has worked in a Security Operations Center (SOC) or Network Operations Center (NOC): the port and application scans are nonstop. On a more macro level, just observe how frequently organizations across the globe are breached in some way or another – we only hear about the most newsworthy cases. However small your company might be, someone is out there looking for a way in. That’s not to say every black hat hacker is explicitly targeting you; rather, they’re driving by the house to see if any windows are open as easy entry points.
By engaging the crowd of researchers to test your assets via a crowdsourced security program, you’re effectively emulating what black/grey hats would do in the wild (i.e., passively or actively looking for vulnerabilities) – except with a twist: instead of learning about vulnerabilities after you’ve been compromised, you can learn about them, and remediate them, before nefarious parties find them.
It’s worth pointing out that crowdsourced security programs are only as effective as the scope of what you are trying to secure. An organization that tightly secures asset A via a crowdsourced security program but leaves vulnerable asset B out of scope leaves asset B open for attack, where hackers can find issues aplenty. An organization’s security posture is only as strong as its weakest link, so we recommend ensuring the entirety of the organization’s attack surface is accounted for in the scope of a crowdsourced security program.
There are additional organizational benefits to having hackers identify vulnerabilities in your assets. For example, if you are struggling to get executive buy-in for a crowdsourced security program, there are few more compelling ways to demonstrate that vulnerabilities not only exist, but that people – regular people – can and will find them. And if researchers can find them, it’s a safe bet that nefarious parties can too. Visibility into what testers are finding in the wild also helps you gauge what’s most likely to be exploited, whether because of how easily a given issue is identified or because of the sheer number of people who independently find it (i.e., duplicates of the original finding). If it’s easy for the crowd to find, it will be easy for bad actors to find as well. Bottom line: a crowdsourced security program can be extremely effective at elevating the conversation inside the office, as well as elevating the security posture of the organization as a whole. Embracing a crowdsourced security program as a live-fire replication of what could, would, and does happen in the wild makes any organization willing to adopt that perspective that much more resilient to attack – not to mention the outsized efficiency of running such a program in terms of ROI, number of issues identified, overall value to the organization, and the marketability of a highly proactive security posture.
How can I trust the crowd? More specifically, how can I trust researchers to report the vulnerabilities to the program, and not sell them on the black/grey market?
An understandable concern, and one with a couple of points worth working through.
The Bugcrowd platform’s CrowdMatch feature finds the best (or nearest) matching researchers for the customer task at hand, taking into account a multitude of factors, including required skill set and trust scores. Researchers with an insufficient trust score are not given access to programs that require privileged insight or access – and those who have earned this privilege would be negligent to abuse it, since it typically represents a substantial amount of income for high-caliber testers, income they would lose if they damaged their trust through nefarious actions. Think of the crowd as a pyramid with multiple tiers (not to be confused with a pyramid scheme): the base is the unverified crowd, who can participate in any public program; the middle tiers are those with experience on the platform (i.e., proven trust over time) and those who have undergone ID verification; and the final tier is made up of those with whom we (Bugcrowd) have built personal relationships, who have gone through background checks and been fully vetted. Internally, Bugcrowd deploys these layers of researchers based on their trust scores, commensurate with the trust they’ve earned on the platform – provided, of course, that such levels of trust are required; if not, refer back to point #1.
Finally:
How can the crowd help if I can only leverage people who are in certain geographies or with certain levels of trust?
As touched on above, using CrowdMatch, Bugcrowd is able to find the right researchers for your needs – even if those needs are as explicit as “we need ID-verified researchers only” or “we need people from specific geographies who have passed a background check”. However, we don’t recommend instituting any of those requirements unless absolutely necessary. Why? Because the crowd is at its most powerful when there are no restrictions on who can participate. Is the NBA more compelling when restricted exclusively to players of Canadian origin, or is the amalgamation of all the players from all the places more compelling? Undoubtedly, players from across the globe make for a far more compelling league than one limited to just Canada, the EU, or any one country or group of countries. In the same way, you can certainly find a lot of talent in a single geography (or any single demographic), but you’ll also miss out on a lot of other talent, and quite possibly the best talent.
We strongly encourage organizations to avoid erecting artificial barriers around the type of talent that can be invited to a program. But where trust requirements are hard and fast, Bugcrowd can meet needs that call for background checks, ID verification, or geographic constraints. If certain specifications must be met to achieve the level of trust your organization requires, Bugcrowd can help put that trust in place. Additionally, it’s worth remembering that the vast majority of the crowd have day jobs as security professionals. They’re not dark, mysterious people who have no past or forms of identification and only exist between certain hours of the day. They’re largely infosec professionals (current or reformed), just like the rest of us, who like breaking things in their free time, love poppin’ shells, getting paid for it, and helping make the internet a safer place, one bug at a time.
At this point we’ve covered three of the most common questions around “how can I trust the crowd?”, and have hopefully provided ample reasoning for how and why you can trust the crowd, both as a whole and as a collection of individuals. In terms of how to think about the crowd, we always recommend: