By Dave Gerry, Chief Executive Officer, Bugcrowd
At the recent RSA Conference in San Francisco, I was struck by how many conversations revolved around the rapid uptake of generative artificial intelligence technologies by businesses of all kinds. Everyone at the show seemed to have a viewpoint about what the big AI transformation would mean for the future of cybersecurity.
Among recent innovations in the human-computer interface, generative AI is the most exciting development in years. When Bugcrowd published its annual “In the Mind of a Hacker” report in 2020, 78% of global researchers believed they would outperform AI for the next ten years. Given the massive hype now surrounding generative AI, I expect a much lower percentage still feel that way today.
I believe that safety and privacy should continue to be top concerns for any tech company, whether or not it is AI-focused. And when it comes to AI, the priority should be to ensure that the learning model has the necessary safeguards, feedback loops, and, most importantly, the right mechanisms to highlight any safety concerns raised by the broader community.
As organizations rapidly adopt AI for efficiency, productivity, and the democratization of data, it’s important to ensure that there is a reporting mechanism to surface any related concerns. Human oversight and decision-making are crucial to safeguard the accuracy and ethics of these technologies, as well as to provide necessary contextual knowledge and expertise. On the upside, leveraging AI has the potential to significantly improve the productivity and efficiency of security experts. Generative AI technologies are already enabling defenders to rapidly disrupt adversaries. ChatGPT and similar AI technologies can provide leverage by analyzing data, detecting anomalies, and distilling insights to identify threats and point toward potential risks.
It is unlikely that AI will completely take over cybersecurity functions, as human operators bring creativity and ethical decision-making to the task, skills that will be difficult, if not impossible, to fully replace with AI. That said, AI will play an increasingly important role in cybersecurity as it becomes more advanced, and a human-machine combination will be necessary to effectively defend against evolving threats. And while AI won’t replace human creativity and resiliency, it does hold the potential to fill some of the current talent gaps in the industry by automating tasks, allowing human defenders to focus on higher-value activities.
Organizations today face more cyber threats than at any point in history, and this problem is only going to increase as the attack surface expands and hackers continue to adapt. Arming defenders with the capabilities to move faster in the face of these attacks will be critical to leveling the cybersecurity playing field.
Now is the time to set aside competition to work in a tightly coordinated way to promote widespread adoption of best practices that enhance transparency to protect people. To learn more about your hidden vulnerabilities, I encourage you to reach out to us for a conversation.
OpenAI’s collaboration with Bugcrowd
OpenAI has partnered with Bugcrowd to manage the submission and reward process for its bug bounty program. Read more here.
Attending Black Hat Asia? Come visit us at booth #A01.