Post by Oxmoose
The author, a hacker, has been anonymized in this post to respect their digital privacy. In it, we dive into the unexpected renaissance of low-hanging-fruit vulnerabilities and what the age of AI-assisted development means for hackers.
Look, I’m not going to lie: I love vibe coding. You know what I mean—that beautiful, chaotic energy where developers throw caution to the wind and let AI autocomplete their dreams into production. It’s poetry. It’s art. It’s also my retirement fund.
As someone who’s been breaking systems and hunting bugs for the better part of a decade, I’ve watched the security landscape evolve from “We forgot to sanitize inputs” to “We implemented OAuth2 wrong” to “We have a WAF, so we’re basically Fort Knox.” But lately? We’ve entered a new golden age, and it smells like freshly generated code that nobody actually reads before deploying.
Here’s the thing about AI-assisted coding tools: they’re incredible force multipliers. They help developers ship faster, prototype quicker, and solve problems that previously required hours of struggle.
But here’s what’s also true: these same tools have created a generation of code that has vibes but not necessarily validation. An AI suggests something that looks right, feels right, and passes the vibe check—and boom, it’s production time. No one questions whether an API key should be hardcoded or whether an authentication check is actually doing anything. It compiles, the tests pass (if there are tests), and the PR gets merged.
And why would anyone question the code? After all, the AI responded with a green checkmark. It said, “✅ Successfully generated authentication middleware ✅ All tests are passing!” and followed up with “You’re absolutely right to want robust security here!” The developer feels like a 10x engineer. The AI feels like a supportive pair programmer, and the API key feels like it’s on vacation in a JavaScript bundle where anyone can visit.
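To make the “authentication check that isn’t actually doing anything” part concrete, here’s a hypothetical composite in Python/Flask. It isn’t lifted from any real report; it’s just the shape I keep seeing. It compiles, the happy-path test passes, and the only thing it verifies is that a header exists:

```python
# Hypothetical composite, not from any real report: middleware that looks like
# authentication, satisfies the vibe check, and never verifies anything.
from functools import wraps

from flask import Flask, jsonify, request

app = Flask(__name__)

def require_auth(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        token = request.headers.get("Authorization", "")
        # Any non-empty header passes. The token is never decoded, checked
        # against a session store, or verified against anything at all.
        if token:
            return f(*args, **kwargs)
        return jsonify({"error": "unauthorized"}), 401
    return wrapper

@app.route("/api/admin/export", methods=["POST"])
@require_auth
def export_everything():
    # "Protected" endpoint that trusts whoever bothered to send a header.
    return jsonify({"status": "export started"})
```

The diff looks responsible, the chat transcript is full of green checkmarks, and anyone who bothers to send Authorization: lol is effectively an admin.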
This is not a dig at developers. I get it. You’re under pressure to ship, your sprint planning is aggressive, and your PM is breathing down your neck about velocity. And now you have this magical copilot that can generate an entire Express.js server in seconds. Why wouldn’t you use it?
But from where I’m sitting, frantically running my fuzzing scripts and manual recon, dependence on vibe coding is creating an environment so target-rich that I barely know where to start.
Let me paint you a picture with some recent finds. These aren’t sophisticated zero days or novel attack chains that required three months of research. These are the kinds of bugs that would make a first-year security student do a double take.
I was testing a fintech application—you know, the kind that handles actual money. I opened the browser dev tools (as one does), clicked over to the Sources tab, and started poking through the JavaScript bundles. Nothing fancy, I just hit Ctrl+F for “api” to see what I could find.
There it was, line 2,847 of main.bundle.js:
const STRIPE_SECRET_KEY = "sk_live_51K..."
A production Stripe secret key. No, not the publishable key—the secret key. This is the one that can actually charge cards and access customer data. It was just sitting there, minified but not obscured, waiting for anyone with basic curiosity and DevTools.
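For what it’s worth, the fix isn’t exotic: the secret key lives on the server, comes from the environment, and the browser only ever sees the publishable key plus a short-lived client secret. A minimal sketch, assuming Flask and the official stripe Python library (the endpoint name and amount are mine, not the target’s):

```python
# Minimal sketch: keep the Stripe secret key server-side, loaded from the environment.
# Endpoint name and amount are illustrative, not from the target application.
import os

import stripe
from flask import Flask, jsonify, request

app = Flask(__name__)
stripe.api_key = os.environ["STRIPE_SECRET_KEY"]  # never shipped to the client

@app.route("/api/create-payment-intent", methods=["POST"])
def create_payment_intent():
    data = request.get_json(force=True)
    # The server decides the amount; never trust a price sent from the browser.
    intent = stripe.PaymentIntent.create(
        amount=1999,  # e.g., $19.99 in cents
        currency="usd",
        metadata={"order_id": str(data.get("order_id", ""))},
    )
    # Only the client_secret goes back to the browser, paired with the publishable key.
    return jsonify({"client_secret": intent.client_secret})
```

The browser still renders the payment form; it just never holds anything worth stealing.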
Another target, another JavaScript file. This time, it was a React app for a SaaS platform. Buried in a utility file called s3-upload.helper.js were hardcoded AWS credentials that pointed to their customer document storage bucket.
These were not just any credentials; they had full S3 permissions to a bucket containing every customer’s uploaded files—invoices, contracts, identity documents, the works. The permissions were wide open: s3:GetObject, s3:PutObject, s3:DeleteObject on arn:aws:s3:::prod-customer-uploads/*.
The really wild part? The bucket wasn’t even configured with proper access controls. Because the credentials were in the client-side code, they’d designed the bucket to allow those credentials to access any object. There were no pre-signed URLs and no server-side validation of what a user should be able to access—just raw bucket access from the browser.
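The boring, correct version of this design keeps the bucket private and has the server mint short-lived, per-object pre-signed URLs after it checks who’s asking. A minimal sketch with boto3; the bucket name, key layout, and the two helper stubs are assumptions for illustration, not the target’s code:

```python
# Minimal sketch: the server checks authorization, then mints a short-lived
# pre-signed URL for exactly one object. No AWS credentials ever reach the browser.
# Bucket name, key layout, and the helper stubs are illustrative assumptions.
import boto3
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
s3 = boto3.client("s3")  # credentials come from the server's IAM role, not a JS bundle
BUCKET = "prod-customer-uploads"

def authenticate(req):
    # Placeholder for your real session/JWT verification.
    raise NotImplementedError

def user_owns_file(user, file_id):
    # Placeholder for a real ownership check against your database.
    raise NotImplementedError

@app.route("/api/files/<file_id>/download-url")
def download_url(file_id):
    user = authenticate(request)
    if not user_owns_file(user, file_id):
        abort(403)  # server-side authorization, not "whatever the client asks for"
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": f"{user.id}/{file_id}"},
        ExpiresIn=300,  # the link dies in five minutes
    )
    return jsonify({"url": url})
```

Uploads get the same treatment with a pre-signed PUT or POST, and the server’s IAM policy no longer needs to trust anything the browser says.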
The kicker? The file had a comment at the top: // Auto-generated upload handler – Copilot. There was another comment two lines down: // TODO: Move credentials to environment variables.
This one’s a personal favorite. A marketing automation platform had a feature that allowed users to create custom email templates with dynamic variables. You know, the standard {{user.name}}-type stuff.
I decided to test what would happen if I put something a bit spicier in there: {{7*7}}. The email came back with “49” rendered in it, demonstrating server-side template injection—the kind that can escalate to remote code execution if you know what you’re doing.
I went deeper and tried {{config.items()}}. I got back a dump of the application’s configuration, including database connection strings and internal API endpoints. Then I tried executing system commands. And, well… they executed.
What the AI didn’t mention—or what nobody read in the documentation—was that Jinja2 needs to be configured in sandbox mode if you’re rendering user-supplied templates. The AI had simply generated a working template renderer. It processed templates! It substituted variables! “✅ Email templating system implemented successfully!”
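For anyone building the same feature, the gap between that bug and a defensible design is a few lines. A minimal sketch showing both shapes; the vulnerable one is my reconstruction of the pattern, not the target’s actual code:

```python
# Two shapes of the same feature. The first is a reconstruction of the vulnerable
# pattern (not the target's actual code); the second is the defensible version.
from jinja2 import Template
from jinja2.sandbox import SandboxedEnvironment

def render_email_unsafe(user_template: str, context: dict) -> str:
    # User-controlled template + a full Jinja2 environment = {{7*7}},
    # {{config.items()}}, and eventually remote code execution.
    return Template(user_template).render(**context)

# Defensible shape: a sandboxed environment plus an explicit allowlist of variables.
_env = SandboxedEnvironment(autoescape=True)

def render_email(user_template: str, context: dict) -> str:
    allowed = {"user": context.get("user"), "company": context.get("company")}
    return _env.from_string(user_template).render(**allowed)
```

Restricting the context means {{config}} has nothing to resolve to, and the sandboxed environment blocks the attribute-walking tricks that turn template injection into code execution. It’s not a license to render arbitrary user input, but it would have turned my payloads into errors instead of a config dump.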
What I’m seeing across programs isn’t malice. It’s not even negligence in the traditional sense. It’s a new phenomenon: vibe-driven development with production-ready confidence.
The pattern looks like this:

A developer asks the AI for a feature.

The AI produces something plausible, complete with a green checkmark and a compliment.

The code compiles, the happy-path test passes, and it gets a vibe check instead of a security review.

It ships, along with whatever the AI quietly assumed about secrets, validation, and trust.
The genius—and I mean this with a mix of admiration and horror—is in the psychology. These AI tools are designed to be helpful and encouraging. They don’t push back. They don’t say, “Hey, should we really hardcode that?” unless you question it, and they love to root for you: “Great thinking! Here’s your implementation ✅.” This is because their job is to assist, not to audit.
And honestly? That constant positive reinforcement feels amazing. You ask for something, you get it immediately, and the AI makes you feel like you’re making brilliant decisions. “You’re absolutely right!” it says. “This is a great approach!” it confirms. You start to feel like a senior architect who just happens to code at the speed of light.
Oh, and if you mention literally anything about your codebase being messy? The AI lights up like a beaver who just spotted a prime logging opportunity. “I have big plans for this!” it practically squeaks, ready to refactor your entire architecture. You mention one function is a bit long, and suddenly you’re getting a proposal for restructuring your entire monorepo into microservices with a service mesh. The AI doesn’t just want to fix your code—it wants to reimagine it. And in that refactoring frenzy, security checks that were working perfectly fine get “simplified” away, authentication middleware gets “optimized” into oblivion, and validated inputs become “cleaner” by removing all that pesky sanitization.
The AI isn’t writing insecure code maliciously. It’s trained on the entire internet, which includes countless examples of bad security practices, hardcoded credentials in tutorials, and debug code that should never have been public. When you ask it to “Generate a REST API with authentication,” you might get something that technically has authentication but stores session tokens in localStorage, has no CSRF protection, and trusts client-side validation.
And it will deliver all of this with a cheerful “🏁 Authentication system implemented successfully!”
If you’re running a bug bounty program right now, you need to understand what’s happening in your codebase. Your developers aren’t intentionally shipping vulnerabilities, but the force multiplier effect of AI coding assistants cuts both ways.
For every hour of development time you’re saving, you might be creating 10 hours of security debt—debt that hackers like me are more than happy to collect on.
Some questions to ask yourself:

How much of your production code was AI-generated, and does anyone know which parts?

Is that code reviewed with the same scrutiny you’d apply to a human junior’s PR, or does a green checkmark count as review?

Are your repositories and your built client bundles scanned for secrets before every release?

Can user input reach a template engine, a query, or a shell anywhere without validation?
The reality is that bug bounty programs are seeing an uptick in reports for these kinds of issues. This is not because hackers suddenly got better at finding them. There are simply more of them to find.
Before you panic and ban AI coding tools company-wide, let me be clear: that’s not the solution. AI-assisted development is here to stay, and it should be. The productivity gains are real. The problem isn’t the tools—it’s the process.
What we need is a vibe check on our vibe coding.
Here are some practical steps that would have prevented every bug I mentioned above:

Keep secrets out of client-side code, full stop. Secret keys, cloud credentials, and connection strings belong in server-side environment variables or a secrets manager, never in a JavaScript bundle.

Scan for secrets before anything ships. Pre-commit hooks and CI checks catch the “TODO: move to environment variables” commits before they reach production (a throwaway version of that check is sketched below).

Never hand the browser long-lived cloud credentials. Keep buckets private and have the server mint short-lived, pre-signed URLs after it checks authorization.

Sandbox anything that renders user-supplied templates, and treat user input that reaches a template engine, a query, or a shell as hostile by default.

Review AI-generated code the way you’d review a confident junior’s PR. The AI will cheerfully tell you it’s done; a human still has to decide what “done” includes.
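On the scanning point: dedicated tools like gitleaks and trufflehog do this properly, but even a naive check over your built bundles would have flagged both credential bugs above. A throwaway sketch, not a substitute for a real scanner:

```python
# Naive post-build smoke check: fail the pipeline if anything that looks like a
# live secret ends up in the shipped JavaScript. Not a substitute for a real
# secret scanner (gitleaks, trufflehog, etc.), just a cheap last line of defense.
import re
import sys
from pathlib import Path

PATTERNS = {
    "Stripe live secret key": re.compile(r"sk_live_[0-9a-zA-Z]{10,}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(build_dir: str) -> int:
    hits = 0
    for path in Path(build_dir).rglob("*.js"):
        text = path.read_text(errors="ignore")
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                hits += 1
                print(f"{path}: possible {name}: {match.group(0)[:12]}...")
    return hits

if __name__ == "__main__":
    found = scan(sys.argv[1] if len(sys.argv) > 1 else "dist")
    sys.exit(1 if found else 0)
```

Wire it into CI so a sk_live_ string in the build output fails the pipeline instead of failing an audit.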
The cat’s out of the bag on AI coding—and honestly, I don’t want it back in. What I want is for the security and development communities to meet in the middle.
For bug bounty programs, this means the following:
Adjust your expectations. You’re probably going to see an increase in “low-severity” reports for things like information disclosure, hardcoded credentials in client-side code, and misconfigured endpoints. Don’t dismiss them just because they seem obvious. Obvious bugs still get exploited.
Update your remediation processes. When you fix one hardcoded API key, scan for others. These bugs tend to cluster because they come from similar development patterns.
Educate your team. Make sure developers understand that AI coding assistants are powerful tools, not infallible security experts. The AI doesn’t know what’s sensitive in your specific context.
Embrace the feedback loop. Bug bounty hunters are telling you where your AI-assisted processes are breaking down. That’s valuable data for improving both your development practices and your security posture.
To my fellow bug hunters: next year is going to be wild. The vulnerability landscape is shifting, and there’s genuine opportunity here. But let’s not get complacent. The industry will adapt. Security tooling will catch up. The low-hanging fruit won’t hang low forever.
In the meantime? I’ll be over here, running jsluice on every JavaScript bundle I can find and living my best life.
We’re in a transition period. AI-assisted coding is transformative, but we’re still figuring out the guardrails. The bugs I’m finding today are a symptom of growing pains, not a permanent feature of the landscape.
But for right now (December 2025), if you’re a Bugcrowd customer with AI-assisted development in your workflow, you need to be thinking about security differently. The old threat models assumed human developers were making human mistakes. The new models need to account for AI generating code at scale, with patterns and anti-patterns that can replicate across your entire codebase.
Stay curious, stay vigilant, and for the love of all that’s holy, please stop hardcoding your AWS credentials. If you’re a customer, visit Bugcrowd to learn more about securing your code. If you’re a hacker, find more resources here.