Why You Can Trust The Crowd

One of the most common questions we encounter in conversations around crowdsourced security programs is: “Why would I invite researchers to hack my assets?”, “Why should I trust the crowd?”, or some variant thereof. There are different permutations of this question, but most are based on the assumption that the “crowd” is an unknown or shadowy group of actors whose motives are opaque and possibly nefarious. As a result, there is a lot of trepidation around integrating the crowd into an organization’s security strategy. It’s good to be cautious when considering opening up your assets to hackers, and questions of this nature are a natural part of your due diligence when considering Bugcrowd’s services. In this blog we’ll address the most common questions we receive on this front and offer some advice for anyone considering implementing a crowd-driven security program.

First:

Why, in the name of everything sane, would I invite researchers to hack my assets?

In short: because the bad guys are doing it anyway. Objectively speaking, a black (or grey) hat hacker doesn’t need your permission to look for vulnerabilities on your publicly facing systems. (A quick note on definitions: black hats are nefarious bad actors, white hats are ethical researchers who aren’t out for nefarious purposes, and grey hats are those who play whichever side suits them at the time.) Laws aside, nefarious parties are going to hunt for issues with or without your permission – if there’s any doubt, just ask anyone who has worked in a Security Operations Center (SOC) or Network Operations Center (NOC): the port and application scans are nonstop. On a more macro level, just observe how frequently organizations across the globe are breached in some way or another – we only hear about the most newsworthy incidents. No matter how small a company you might be, someone is out there looking for a way in. That’s not to say every black hat hacker is explicitly targeting you, so much as they’re driving by the house to see if any windows are open as easy entry points.

By engaging the crowd of researchers to test your assets via a crowdsourced security program, you’re effectively emulating what black/grey hats would do in the wild (i.e., passively or actively looking for vulnerabilities) – except with a twist: instead of learning about vulnerabilities after you’ve been compromised, you can learn about (and remediate) them before nefarious parties find them. The benefits of this are two-fold:

  1. By engaging the crowd to help identify areas of risk, you get the most accurate possible picture of your exposure, both in terms of vulnerabilities and attack surface. Scanners may find some things, and pentesters may find others, but a crowdsourced security program (such as a bug bounty program) brings the value of both human ingenuity and automated testing at scale. Two active testers will typically find more issues than one, ten more than two, five hundred more than fifty, and so on – all made possible as each individual (out of thousands in the crowd) brings a unique set of skills, perspectives, and methodologies. The crowd at scale provides a perspective unrivaled by any technology on the planet, giving the most accurate picture of how attackers think and how they would approach your assets (often finding large numbers of unprotected assets that clients weren’t previously aware of). Additionally, the crowd compounds this advantage by often leveraging highly effective, self-developed tools (that aren’t, say, on the open market) in conjunction with their specific technical expertise.
  2. By quickly identifying and remediating issues, you make yourself a less attractive candidate for attackers. With a more secure attack surface as a result of testing with the crowd, the time it takes an attacker (black/grey hat) to find a net-new valid issue expands substantially, providing a far less appealing ROI than attacking an organization with a less proactive security posture. Put it this way: were you an attacker, would you target a resource that has thousands of people helping secure it, or one that doesn’t? That’s not to say they won’t attack at all, but their life just got a whole lot harder due to the added expertise of the crowd. This, in a nutshell, is why we want to invite researchers to find vulnerabilities in our assets: by leveraging the crowd we can identify (and fix) more issues, more quickly – leading to reduced exposure and fewer opportunities for attackers.

It’s worth pointing out that crowdsourced security programs are only as effective as the scope of what you are trying to secure. An organization that tightly secures asset A via a crowdsourced security program, but leaves vulnerable asset B out of scope, leaves asset B open for attack, where hackers can find issues aplenty. An organization’s security posture is only as strong as its weakest link, and to combat that weakness, we recommend ensuring the entirety of an organization’s attack surface is accounted for in the scope of a crowdsourced security program.

There are some additional organizational benefits to having hackers identify vulnerabilities in your assets. For example, if an organization is struggling to get executive buy-in on implementing a crowdsourced security program, there are few more compelling ways to demonstrate that not only do vulnerabilities exist, but that regular people can, and will, find those issues – and if researchers can find them, it’s a safe bet that nefarious parties can too. Visibility into what testers are finding in the wild can also help provide perspective into what’s most likely to be exploited, based on the ease with which a given issue is identified or the sheer number of people who have identified it (e.g. duplicates of the original finding). If it’s easy for the crowd to find, it will be easy for bad actors to find as well. Bottom line: a crowdsourced security program can be extremely effective at elevating the conversation inside the office, as well as elevating the security posture of the organization as a whole. Embracing a crowdsourced security program as a live-fire replication of what could/would/does happen in the wild makes any organization willing to adopt that perspective that much more resilient to attacks (not to mention the outsized efficiency of running a program of this nature in terms of ROI, number of issues identified, overall value to the organization, and the marketability of a highly proactive security posture).

How can I trust the crowd? More specifically, how can I trust researchers to report vulnerabilities to the program rather than sell them on the black/grey market?

An understandable concern, and one with a couple of points to work through.

  1. To specifically address the notion of researchers selling or exploiting found issues themselves: simply by having a crowdsourced security program (e.g. a bug bounty program), the relative value on the black/grey market of a vulnerability against your asset diminishes substantially. How so? In a world of scarcity, with relatively few people competing to find issues on a given asset (say, one team of hackers targeting your site), a found vulnerability is likely known only to that group or a very small subset of individuals who may intend to use it in a nefarious context. Any finding identified here will ostensibly be usable for a long while, and the odds are relatively low that it will drop off the map (i.e. get remediated) overnight. Now, juxtapose that against a world where a crowdsourced security program is deployed against that same asset. As an attacker, the information that used to be scarce (the vulnerability against the asset) is suddenly in jeopardy – hundreds or thousands of other ethical hackers (researchers) are now capable of finding and reporting that very same issue, and some more than likely have. What was previously scarce is now abundant, and as a result, the value of that issue on the black/grey market drops like a rock. Anyone hunting for vulnerabilities to sell on the black market against programs that are running a bug bounty is picking an extremely steep hill to climb. Anyone who would buy those issues would be buying with the awareness that they could have already been found and reported by 20 other people – or, since there’s no honor among thieves, the seller could report the issue to the bounty program itself the moment they’ve sold it, and collect twice. At the end of the day, if one’s goal is to sell vulnerabilities, the absolute worst place to do so is against a program with an existing bug bounty.
The market value of the findings is diminished and the number of available findings is lower than at a company not running a bug bounty program – an equation that’s far from ideal for anyone trying to make a living that way. Bad actors may (and will) still target you, but if you’re worried about permissioned individuals testing as part of a bug bounty program, it’s safe to say that bounty programs are a terrible basis for nefarious work. Bottom line: if one wants to be nefarious, there are far better ways to go about it.
  2. The above reasoning holds for public assets, but there are, of course, private programs that only offer hunting access to a specific group of researchers. In this case, a researcher may have privileged visibility and access into something a black or grey hat hacker may not have. We will cover this in more detail later, but researchers who are given access to private programs are held to a higher standard across multiple factors: performance (e.g. they have demonstrated competency in the areas of interest) and trust (all researchers on the Bugcrowd platform have a “trust” score based on time on the platform, whether they’ve had a background check or ID verification, the number of issues/escalations uncovered, etc.).

The Bugcrowd platform’s CrowdMatch feature finds the best (or nearest) matching researchers for the customer task at hand, taking into account a multitude of factors, including required skill set and trust scores. Researchers with an insufficient trust score are not given access to programs that require privileged insight or access (and those who have earned this privilege would be negligent to abuse it, since it typically represents a substantial amount of income for high-caliber testers – income they’d lose if they damaged their trust by performing nefarious actions). If one thinks of the crowd as a pyramid with multiple tiers (not to be confused with a pyramid scheme), the base is made up of the unverified crowd who can participate in any public program; the middle tiers are made up of those with experience on the platform (i.e. proven trust over time) and those who have undergone ID verification; and the final tier consists of those with whom we (Bugcrowd) have built personal relationships, and who have gone through background checks and been fully vetted. Internally, Bugcrowd deploys these layers of researchers based on trust scores, corresponding to the trust they’ve earned on the platform (provided, of course, that said levels of trust are required; if not, refer back to point #1).
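Bugcrowd’s actual trust scoring and CrowdMatch logic are proprietary, so as a purely hypothetical illustration (the `Researcher` fields, tier thresholds, and function names below are all assumptions, not the real system), the tiered-access idea described above could be sketched as:

```python
# Hypothetical sketch only: illustrates the "pyramid of tiers" concept,
# where higher-sensitivity programs require higher-trust researchers.
from dataclasses import dataclass

@dataclass
class Researcher:
    name: str
    months_on_platform: int   # proven trust over time (assumed signal)
    id_verified: bool         # middle-tier signal (assumed)
    background_checked: bool  # top-tier signal (assumed)

def trust_tier(r: Researcher) -> int:
    """Map a researcher to a tier: 0 = unverified base of the pyramid,
    1 = experienced or ID-verified middle, 2 = fully vetted top."""
    if r.background_checked:
        return 2
    if r.id_verified or r.months_on_platform >= 12:
        return 1
    return 0

def can_access(r: Researcher, required_tier: int) -> bool:
    """A researcher may join a program only if their tier meets the bar."""
    return trust_tier(r) >= required_tier

alice = Researcher("alice", months_on_platform=24,
                   id_verified=True, background_checked=False)
print(can_access(alice, required_tier=1))  # True: experienced, ID-verified
print(can_access(alice, required_tier=2))  # False: no background check yet
```

The point of the sketch is only the gating structure: public programs set a low (or zero) required tier, while private programs raise the bar.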

Finally:

 How can the crowd help if I can only leverage people who are in certain geographies or with certain levels of trust?

As touched on above, using CrowdMatch, Bugcrowd is able to find the right researchers for your need – even if that need is as explicit as “we need ID-verified researchers only” or “we need people from specific geographies who have passed a background check”. However, we don’t recommend instituting any of those requirements unless absolutely necessary. Why? Because the crowd is at its most powerful when there are no restrictions on who can participate. Is the NBA more compelling when restricted exclusively to players of Canadian origin, or is the amalgamation of all the players from all the places more compelling? Undoubtedly, players from across the globe make for a far more compelling league than one limited to just Canada, the EU, or any one (or group) of countries. In the same way, you can certainly find a lot of talent in one geography (or any demographic), but you’ll also miss out on a lot of other talent – quite possibly the best talent.

We strongly encourage organizations to avoid employing artificial barriers around the type of talent that can be invited to a program. But if/where requirements around trust are hard and fast, Bugcrowd can help meet the needs and demands that call for background checks, ID verification, or geographic constraints, and put the desired level of trust into place for your organization. Additionally, it’s worth remembering that the vast majority of the crowd have day jobs as security professionals. They’re not dark, mysterious people who have no past or forms of identification and only exist between certain hours of the day. They’re largely infosec professionals (current or reformed), just like the rest of us, who like breaking things in their free time, love poppin’ shells, getting paid for it, and helping make the internet a safer place, one bug at a time.

At this point we’ve covered three of the most common questions around “how can I trust the crowd?”, and have hopefully provided ample reasoning for each point as to how and why you can trust the crowd, both as a whole, and as a collection of individuals. In terms of how to think about the crowd, we always recommend:

  1. Remembering they’re inherently human – the crowd isn’t some amorphous, soulless horde, but a group of thinking, feeling individuals who are helping you secure your assets, one bug at a time. And,
  2. Seeing the crowd as an extension of your security team itself – an integral component of your security posture, rather than something analogous to noisy scanner output that gets attended to every once in a while. For many clients, the reality is that the crowd can and does bring expertise to the table that heartily augments their existing team with additional skills, perspectives, and capabilities – and the same can ultimately be true for your organization as well. Good luck and happy hunting!

 


Grant McCracken

Senior Director, Operations: Program Success at Bugcrowd.
