By: Amit Elazari, Doctoral Candidate at the University of California, Berkeley, and CLTC Grantee (@amitelazari)

Bug bounties are one of the fastest-growing, most popular, and most cost-effective ways for companies to engage with the security community and find unknown security vulnerabilities. Yet, while the vulnerability economy is booming, I’ve argued that some bug bounties and vulnerability disclosure programs still put white hat hackers at legal risk, shifting liability risk onto the hacker instead of authorizing access and creating legal safe harbors.

This lack of clear legal scope puts hackers, companies, and consumers at risk, and while regulators around the world grapple with how to promote ethical vulnerability disclosure, legal chilling effects continue to take their toll.

“The threat of legal action was cited by 60% of researchers as a reason they might not work with a vendor to disclose” – September 2015 National Telecommunications and Information Administration (NTIA) survey of 414 security researchers participating in vulnerability disclosure

The main U.S. federal anti-hacking laws, the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act (DMCA), along with notable public incidents, have had a chilling effect on the security researcher community. The ambiguity of existing laws and the lack of an established framework for “good faith” security testing have sometimes resulted in legal consequences for ethical hackers working to improve global security.

How do we fix this reality? Do we (endlessly) wait for legal reform to clarify the murky anti-hacking legal landscape? I’ve suggested we take action by standardizing safe harbors and other legal terms across the industry – we fix the legal bugs in the system and start paying attention to the fine print.

This booming economy still lacks real consensus on who defines the rules of crowdsourced security when it comes to safeguarding the legal interests of the white hat hacker community — the crowd. But if we don’t actively push for industry-wide adoption of better standards that minimize the legal risks for this community, we risk undermining the fundamental trust relationship needed to sustain it. This effort will require the help of all stakeholders involved: companies, platforms and, of course, the crowd.

Today Bugcrowd launched Disclose.io.

Disclose.io seeks to address these concerns by providing a framework that expands on and unifies the work that Casey Ellis and the Bugcrowd team, CipherLaw’s Open Source Vulnerability Disclosure Framework, my own #legalbugbounty project, and Dropbox have already done to protect security researchers. Establishing clear language before launching a program has a two-fold benefit: organizations feel safe and avoid situations such as extortion or reputational damage, while security researchers who are acting in good faith can report bugs without facing legal repercussions.

The legal landscape for crowdsourced security is currently lacking. Safe harbor is still the exception, not the standard, and tens of thousands of white hat hackers are put in legal harm’s way. Standardization could fix this reality — but for this effort to work we will need all the help we can get.

Currently, around 19 companies running bug bounty and VDP programs have adopted language that follows current DOJ guidelines on legal safe harbor for security research and also addresses the DMCA. Hackers, lawyers, and program owners are encouraged to participate and collaborate on the ongoing Disclose.io project, which can be viewed on GitHub here.

I’m looking forward to continuing to work with the entire bug bounty ecosystem to ensure the protection of organizations and researchers engaged in vulnerability disclosure and bug bounty programs.