Are you putting in the work when it comes to managing your crowdsourced security program?
It’s well-accepted common sense that to arrive at a desired outcome, one must “put in the work.” Whether your goal is to restore a car, build the next disruptive technology, get into shape, or run a successful crowdsourced security program – you must, in every case, put in the work. Good outcomes don’t just happen – they must be intentionally pursued and earned. Without intention and effort, the best one can hope for is stagnation. What does this have to do with crowdsourced security programs? Everything.
People often approach crowdsourced security programs the same way they’d approach a clock – set it once, then forget about it until it stops working. The necessary approach is much closer to that of a gym membership – you must go consistently, and you only get out of it what you put into it. No gym membership has made anyone healthier simply by existing, just as no book has made anyone wiser simply by sitting on a bookshelf. Crowdsourced security programs require persistent, consistent engagement over the life of the program. In this guide, we’ll cover the Top 5 essentials for the health and success of your program.
But why aren’t bounty programs set-and-forget? You’re paying for the Bugcrowd Platform and services – isn’t it Bugcrowd’s job to deliver results? Going back to the gym analogy: you may pay for the gym membership, and even for a personal trainer, but the results can only come through your own sweat. The gym cannot burn the calories for you, nor can the trainer get you into shape – everything you get out of it is personally earned. Think of the Bugcrowd Platform as your gym and trainer – we make it a whole lot easier to be successful, and we’ll even guide you along the way to help you meet your goals. However, just as a trainer cannot force you to work out and eat healthy, we cannot turn a poorly managed program into a successful one if its owners aren’t willing to put in the work.
But why? Why can’t Bugcrowd do that? We control the crowd, right? Not quite. Crowdsourced security programs run on a single economic engine: the crowd. And the crowd operates on the same fundamental supply-and-demand principles as the rest of the world. If we want an outcome (researchers testing our targets), then we have to meet or exceed the existing market conditions for someone to be willing to spend their time testing on our program. To learn more, see our extensive two-part blog series: Why Isn’t My Program Getting Submissions? Part One and Part Two. If the crowd is generally non-participatory, it’s likely because the program isn’t attractive enough to outweigh all the other options and opportunities facing a given researcher. Every program is competing for an extremely limited resource: researchers’ time.
At the end of the day, bugs don’t just happen – they’re the result of researchers putting in time and effort to identify security issues. And that time and effort is taken away from family, friends, work, games, or other ways they could be spending it. So, if the goal is for researchers to engage with your program, then it’s on us to work together to make sure the program is attractive enough to earn that time.
Thinking about things that way re-frames how one engages with a crowdsourced security program. The crowd is fundamentally made up of individual humans who weigh the motivation and return on investment of participating in a program. Researchers have competing priorities in their lives (e.g., being a parent, friend, or student) and set aside only limited time for hacking. This means further competition for their time and energy among the many other program opportunities available to them.
To use another analogy, one fun way to think about running a program on Bugcrowd is as something akin to a dating website like Match.com. Organizations come to the site and set up a profile in hopes of attracting matches that are a good fit – and in doing so, both sides benefit from the relationship. But just as signing up for a dating account and paying the fee is far from a guarantee of matching with the most attractive person on the planet, the same goes for running a crowdsourced security program. You must put in work not only on your profile, but also in real life. A blank or weakly filled-out profile will get far fewer matches than one that’s articulate, well put together, and shows you’re on top of things. Potential matches (including the ones you really want) can spot a fake a mile away – facades will quickly be exposed, and word will get around. The only solution is to build yourself and your program into something that’s truly attractive and worth people’s time and resources.
So, what can one do? In short: put in the work, consistently, to ensure your program is as attractive (and thereby as successful) as it can be. To help, here are the five areas we recommend focusing on, along with our recommendations and expectations of good program owners in each:
Actioning Submissions Quickly
It is Bugcrowd’s expectation and guidance that a healthy program should ideally action all submissions within five business days (i.e., one business week). Actioning a submission could be as simple as responding to a question from Bugcrowd or a researcher, resolving a blocker, reviewing a disclosure request, moving a submission from “triaged” to “accepted,” or rewarding a submission. Slow movement here is highly correlated with decreased researcher participation over time – nobody likes protracted waits for their work to be recognized.
Keep in mind that many researchers work on the same programs together, and routinely share notes about their experience on a given program. This can create both positive and negative feedback loops – if people consistently have a good experience with your program and share that information with other researchers, the network effect compounds, bringing more and more people to your program because it’s known to be a quality one to work on.
Conversely, word of bad experiences spreads like wildfire, and burning one researcher will often have a much larger impact than just that one individual. Said simply: good experiences lead researchers (and others in the community) to engage with the program in greater depth over a longer period of time, while poor experiences typically hurt long-term participation and engagement.
Treating Researchers with Respect
A healthy program should engage all researchers and personnel with respect, dignity, and positive intent. Few things will turn off people who choose to work on your program faster than feeling like they’re being treated poorly. Keep in mind that these researchers aren’t just anyone – they’re curated via our CrowdMatch Machine Learning (ML) technology as exceptional matches for your program. These are the right people for the specific program you’re running, and as such, individuals we want and need to keep engaged. To that end, as mentioned above, it’s especially important to engage quickly – ideally within three business days, and no more than five. If you must take longer, we strongly recommend over-communicating that: note it on the program brief, and even if you can’t give a full response, at least let people know when they can expect a reply. A little courtesy goes a long way.
Rewarding Quickly and Fairly
For a healthy program, all “triaged” findings should be accepted (or rejected, or replied to) within five business days. As covered above, protracted wait times are highly correlated with lower participation and researcher satisfaction. It’s in everyone’s best interest to reward quickly and fairly – with an emphasis on fairly. Rewarding quickly but unfairly is never a good place to be.
On the topic of rewards, no discussion here would be complete without touching on the need for rewards to also be competitive. Offering $50 for a finding that would net thousands on a comparable program is a fast track to obsolescence and a lack of interest from the crowd. When putting your program together, just as a personal trainer would, your Bugcrowd Technical Customer Success Manager (TCSM) will make direct recommendations about what a program of your nature should reward, along with guidance on growing the program based on maturity, complexity, and a number of other factors. All things being equal, if your fintech program rewards half of what other fintech institutions reward, then returns will be similarly impacted. Once more: you get out what you put in.
Improving Over Time
A healthy program should strive to improve over time by increasing scope (adding new features, assets, domains, etc.), incentives (rewards, bonuses, ad-hoc swag, etc.), and exposure (crowd adds, increased credentials/accessibility, going public, etc.). Programs that don’t do these things are highly correlated with diminished participation and engagement over time. Bugcrowd will regularly reach out to guide program owners here, based on insights from both human expertise and data from Bugcrowd’s Security Knowledge Graph, which organically identifies opportunities for growth and improvement on programs.
Remediating Findings Promptly
A healthy program remediates findings within a reasonable timeframe. It’s highly disengaging, and can be perceived as disingenuous, to work on a program that leaves vulnerabilities open for extended periods of time. While Bugcrowd can’t enforce this, it’s important to be aware that if a researcher’s first finding is marked as a duplicate of something originally reported years ago – but never remediated – they’re unlikely to continue testing for that organization. They now know that many other previously identified security issues likely remain unfixed, leading to wasted time and effort on their part (since researchers won’t receive a payout for finding something that’s already been found).
Stay in Touch
All of the above are easy-to-implement guidelines and expectations meant to help you own and manage a successful crowdsourced security program. By framing things through the lens that it’s our responsibility to make sure researchers want to show up and test on our assets, we can not only create a better atmosphere for testers, but also more value for our program overall.
As always, Bugcrowd is here to help you be successful in all of the above areas and more. There’s no shortage of helpful assets available to you in our resource center. Stay informed by following us on Twitter. If you have any questions, drop us a line at firstname.lastname@example.org. We look forward to helping you and the crowd reach your goals!