Bug bounty programs are only successful when the security researchers working on them are both qualified and motivated. For public programs in particular, creating the right incentives, an appropriate level of challenge, and an environment of mutual trust is the key to unlocking access to the right hunters for the job. And when you add specialized targets like cryptocurrency or blockchain (aka Web3) to the mix, it's even more important to design a program that makes domain-expert researchers feel valued and safe.

In this post, which is based on past experiences with Web3 customers and researchers on the Bugcrowd Security Knowledge Platform, we’ll offer a few simple rules along those lines for building a top-notch Web3 bug bounty program on the platform.

Offer appropriate, impact-based rewards

First, and most important, rewards have to be appropriately sized. Keep in mind that researchers often treat bug bounties as a full-time job, especially the top-tier talent you most want and need to attract here. Such researchers are highly unlikely to spend hours auditing an asset for a potential payout of only a few hundred or a few thousand dollars, so if rewards are too low, don't be surprised when all you get are shallow results from a scanner. Appropriately sized monetary rewards give researchers a clear, motivating incentive to spend quality time on your targets finding high-impact vulns, rather than spending that time on competing programs. (As an extreme example, in April 2022, Aurora awarded a $6 million bounty to a researcher for responsibly disclosing an inflation vulnerability!)

Rewards should also be impact-based. In other words, if your program's main goal is to prevent theft on any tested platform, then any finding that demonstrates theft should be treated and rewarded the same way, regardless of how it's achieved, whether via a web request or by breaking the underlying cryptography.

Read “Setting Up Your Program Reward Ranges” for more information about designing appropriate incentives.

Open the scope

In a domain as broad and as nuanced as finance (whether fiat or cryptocurrency), it's important that the entire organization be in scope for the bug bounty program. Securing your front door with 12 deadbolts is little protection if the window out back is open, and the same holds true when securing financial assets. Attack vectors such as leaked information on GitHub, credentials found on Pastebin, a SQLi vulnerability, or an old server that someone forgot to take down all need to be in scope, because that's how attackers in the wild will approach your organization as a target.

As another example, unless your Ethereum fork is completely different from the original blockchain, it doesn't make much sense to limit the scope to your fork alone while leaving your other assets out. In such scenarios, security deficiencies are far more likely to exist in those other assets than in the blockchain code itself, which has already been thoroughly audited by thousands of pairs of eyes.

Read “Scopes: Where Bigger is Better” for more background on the benefits of open scope, generally.

Be public, not private

We usually recommend that programs with Web3 targets be run in public. Why? Similar to the open-scope guideline above, it's important to maximize not only the attack surface that good actors can see, but also the number of eyeballs seeing it, giving you the best and most effective route to identifying security risks before the bad actors do. Furthermore, a healthy public bug bounty program sends a clear message to your investors, customers, and potential users that you take security seriously and put protecting your customers above everything else.

Make it easy to get started

Onerous setup requirements guarantee that most researchers will avoid your program in favor of one where they can start testing immediately, without having to spend their own money and time deploying multiple services. (Even if you remove those requirements later, most researchers won’t bother giving you a second chance.) So, it’s critical to make it as easy as possible to get started.

For example, one of the best ways to do that is to provide a testnet deployment that researchers can quickly and safely test against.
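To illustrate, here's a minimal sketch of what "easy to get started" can look like from the researcher's side, assuming your program brief publishes a public testnet RPC endpoint plus the deployed contract's address and ABI. The contract name, address, and ABI below are hypothetical placeholders, not real program assets.

```typescript
// Read-only smoke test against a program's published testnet deployment.
// Assumes ethers v6; the RPC URL, contract address, and ABI are illustrative placeholders.
import { JsonRpcProvider, Contract } from "ethers";

// Public Sepolia testnet endpoint (or whatever endpoint the program brief lists).
const provider = new JsonRpcProvider("https://rpc.sepolia.org");

// Replace with the contract address and ABI published in the program brief.
const vaultAddress = "0x0000000000000000000000000000000000000000";
const vaultAbi = ["function totalDeposits() view returns (uint256)"];
const vault = new Contract(vaultAddress, vaultAbi, provider);

async function main(): Promise<void> {
  // A read-only call: nothing of real value is at risk on the testnet.
  const deposits = await vault.totalDeposits();
  console.log(`Total deposits on testnet: ${deposits}`);
}

main().catch(console.error);
```

If a researcher can run something like this within minutes of reading your brief, you've removed the biggest practical barrier to deep, hands-on testing.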

Use familiar currencies for payments

Always offer the option to pay your bounties in USD, Bitcoin, or Ethereum; don’t make the overhead of safely converting niche currencies to mainstream ones a barrier for researcher participation. Again, you want it to be easy and appealing for researchers to participate in the program, removing any reason to take their skills somewhere else. 

Always provide detailed, transparent explanations

Sometimes you'll need to change a submission's priority or pay an unexpected bounty amount due to attack-scenario limitations that only your team knows about. In these cases, the best thing to do is to put yourself in the researcher's shoes: a detailed, transparent explanation goes a long way toward establishing mutual trust. For a researcher, there are few worse experiences than getting negative feedback on hard work for reasons that are never explained.

The more detailed the explanation, the easier it will be for everyone to be on the same page about goals and rewards. This could also lead to researchers identifying additional attack vectors that circumvent your systems, and may even allow them to escalate their findings.

Let researchers publicly disclose findings after remediation

As we said previously, transparency helps build mutual trust between program owners and researchers, and that applies to disclosure policies as well. For that reason, we strongly recommend a "coordinated disclosure" approach in which program owners allow researchers to publish mutually agreed-upon vulnerability information after fixes are complete. That level of transparency about disclosure attracts positive attention from other researchers, and it's good for your "security brand" ("Here's how we quickly identified, remediated, and disclosed a risk for our users").

Read our docs for more information about coordinated disclosure.

Happy Web3 hunting!

There’s no such thing as a sure thing when it comes to bug bounties, but if you follow all these recommendations, your program will be as well positioned as it can be for success on the Bugcrowd Platform!