In the third quarter of 2025, companies rushed to adopt AI and planned to invest an average of $130 million, reaching an all-time high. But beneath all that excitement, many struggled to keep up with the unique security demands the technology introduced, and the struggle continues to this day.
It’s easy to see why. The same capabilities that make AI powerful can also be weaponized against the businesses using it. Every new AI integration, data connection, and automated decision is a potential entry point for bad actors—and these risks don’t map cleanly onto traditional security frameworks. The data reflects this challenge: In Q4 2025, 80% of leaders cited security as the single biggest barrier to reaching their AI goals, up 12% from Q1 2025.
In this blog post, we’ll break down the cybersecurity concerns that are holding back AI adoption and what teams can do to move forward without leaving the door open to risk.
When it comes to non-AI systems, analyzing risk is pretty straightforward. There’s usually a contained attack surface (e.g., cloud configurations, apps, APIs, or third-party integrations) that teams can easily test to find and validate vulnerabilities. Plus, risk categories are well understood, so there are established controls, playbooks, and compliance frameworks for prioritizing and assessing the risk posed by known vulnerabilities.
However, AI systems are a completely different beast. Testing AI systems isn’t just about testing the safety and reliability of models; teams also need to test the surrounding infrastructure, which expands the attack surface significantly. However, testing this expanded surface isn’t straightforward. Unlike traditional software, AI systems don’t produce the same outputs from the same input. This fundamentally changes how you find and validate vulnerabilities. For security leaders, this inconsistency raises a fundamental question: What does “secure enough” mean when a system behaves differently every time?
Agentic AI adds another layer of complexity. When AI acts with some degree of independence (whether it’s executing tasks or making decisions), it increases the risk of something going wrong, like deleted files, unauthorized access, or compliance violations. These risks aren’t hypothetical. In 2025, Replit reported that one of its agents had accidentally deleted a live database during an active code freeze.
When leaders are under pressure to move fast with AI but lack the right approach to managing these risks, the consequences can compound quickly. For example, if customers believe an AI system can leak their data or produce harmful outputs, this can reduce user trust and damage the brand, stalling adoption. Additionally, bias, unsafe outputs, and inadequate testing can trigger regulatory scrutiny, resulting in fines, investigations, or forced product withdrawals.
Here’s a closer look at the core security concerns for teams when building and deploying AI systems.
AI models can be jailbroken, meaning they can be manipulated to do things they’re not supposed to do. These include revealing sensitive information, generating harmful content, or, in agentic contexts, triggering unauthorized workflows. One of the most well-documented methods of jailbreaking is prompt injection, in which a threat actor embeds malicious instructions into a prompt to override a model’s safety controls and hijack its behavior. In one case, a student used prompt injection to trick Bing’s chatbot into exposing the entire system prompt and revealing security and safety instructions.
Keep in mind that attacks aren’t always text-based either. Because modern generative AI models process images, audio, and video, attackers can embed malicious instructions directly into non-text inputs to bypass safety filters. They can also use those same assets to smuggle sensitive information back out. A zero-click exploit called EchoLeak demonstrated how attackers can create malicious instructions that trick AI systems into sharing sensitive information through image URL parameters.
AI systems are only as good as the underlying data used to train them, and such data often contains biases. Without proper auditing and guardrails, these biases can spill into a model’s outputs, leading to discriminatory decisions, the unfair treatment of users, and reputational and legal exposure. For example, an internal Amazon tool intended to identify promising applicants routinely overlooked qualified women. This bias stemmed from the model’s training data: most resumes were from men.
Many organizations use off-the-shelf models to power their AI systems. To address safety concerns, model developers often subject their work to rigorous testing before release. However, that baseline testing doesn’t hold up once the model has been fine-tuned or modified. Customization can inadvertently strip away built-in safety controls, reintroducing risks it had been hardened against.
Furthermore, testing a model in isolation isn’t enough. Most AI systems are integrated into products via remote services and REST APIs. This means that the attack surface extends beyond a model and includes the RAG pipelines, third-party integrations, and the data ingestion pipelines surrounding it.
To successfully build, launch, and scale AI systems, security leaders need to adopt a nuanced approach toward security. Here are some key principles to keep in mind:
Bugcrowd helps security leaders put these principles into practice. By leveraging the power of the Crowd, organizations can connect with hackers with specialized LLM and infrastructure expertise. Let hackers help you uncover vulnerabilities across your entire AI stack before threat actors do. Organizations can partner with Bugcrowd through three core offerings:
AI adoption shows no signs of slowing, and the threats evolving alongside it don’t either. That pressure is forcing security leaders to fundamentally rethink their approach to security.
Bugcrowd offers security teams a way to launch and scale AI systems without compromising on security. By leveraging the power of the crowd, teams can quickly spin up the right set of traditional and AI-specific security tests to properly secure their expanded attack surface.
Ready to see AI Risk Testing in action? Get started with Bugcrowd’s Platform today.