Over the past couple of years, the RSAC show floor has been dominated by AI buzz. This year will be no different, but most of the conversation will focus on tools, not the deeper structural risks AI is already creating. 

As the Chief Strategy and Trust Officer at Bugcrowd, I spend a lot of time hearing from my network and thinking about common AI assumptions. As we head into the conference, enterprises need to rethink some of these core assumptions: how AI affects their security posture, their development pipelines, and even their decision-making processes.

Assumption #1: AI will make security decisions easier

AI promises to help make decision-making easier and faster. Today’s reality is that AI is breaking enterprise decision-making structures. Machine-speed change management collides with human-speed governance, leaving many organizations unable to trace or validate why an AI system acted the way it did. 

Simply put, change management is moving to machine speed—and enterprises aren’t ready. Adopting AI is a business decision, not a security decision, and it needs to be aligned with a clear sense of purpose. Enterprises must learn to partner with AI in ways that let them understand, instruct, and monitor change at velocity.

Assumption #2: AI will reduce vulnerabilities

For good reason, security teams hope that AI will help them reduce the number of vulnerabilities in their attack surface. Unfortunately, for most of us the opposite will be true, at least initially: that dream has not yet materialized. AI is widening the vulnerability gap. AI-generated code, agent authority, and attacker automation are increasing risk faster than security teams can review or remediate.

I’m concerned that enterprises are facing more vulnerabilities than they have the capacity to patch. The vulnerability gap is about to get bigger, faster; we need to adopt and adapt to meet this shift.

Assumption #3: AI threats will hit only advanced targets

I empathize with teams that assume AI threats will only hit advanced targets. I fear the first major AI-enabled incidents will instead hit the least prepared: ICS/SCADA systems, legacy workflows, and vulnerable populations. As attackers exploit voice cloning, learn to poison training data, manipulate workflows, and orchestrate machine-to-machine deception, many will be left wondering what happened. Realistically, ICS and SCADA have been vulnerable for decades; AI has simply made that attack surface available to a larger population of threat actors.

Keep the conversation going at the Hive

RSAC will be full of conversations about AI’s potential. The real conversation enterprises need to have is about AI’s impact on their attack surface, their development pipeline, and their executive decision-making layer. From there, we can understand how to regain visibility before attackers exploit the gaps. 

I’ll be spending RSAC at The Hive, Bugcrowd’s exclusive networking space just steps away from the conference floor. Stop by and let’s chat about how AI is impacting your security strategy.