Holla! I’m Ads Dawson. I’m a staff AI security researcher by day and chaos wrangler by nature. My philosophy? Harness code to conjure creative chaos—think evil; do good. This means exploring how we as hackers bend, break, and occasionally outwit the very systems built to “outsmart” us.
This time, I want to take you on a ride exploring the messy family tree of artificial intelligence (AI): the everyday assistants we’ve been living with for decades (AI), the over-eager interns writing term papers and code on demand (artificial general intelligence; AGI), and the sci-fi nightmare fuel of sentient machines (artificial super intelligence; ASI). Along the way, we’ll pit Watson against Jeopardy! champs, revisit the day AlphaGo toppled Go masters, and of course, tip our hat to The Terminator.
Why? Not just because “the stochastic shall inherit the Earth one day”; understanding the AI spectrum isn’t just a nerdy trivia exercise. It helps us see where the real hacking primitives, opportunities, and threats are today—and what’s lurking beyond the horizon.
Let’s rewind to 2011. The stage: Jeopardy! The competitors are two of the show’s most dominant human players. The challenger? Not human at all but IBM’s Watson, an AI trained to parse natural language clues, search mountains of data, and spit out answers faster than you can say “How many Rs are there in the word strawberry?” Sound familiar?
Watson won. Handily. And while it was fun to watch a supercomputer dunk on humans, the truth is that an AI like Watson wasn’t new. We’ve been using “narrow AI” for decades through predictive text, search engines ranking results, spam filters blocking junk mail, and the Netflix algorithm recommending yet another murder mystery at 2 am.
Then came AlphaGo in 2016, which shocked the world by defeating Lee Sedol, one of the greatest Go players alive. Go had long been considered beyond computers as too complex, too intuitive. Yet AlphaGo showed that even “narrow” AI can master tasks we thought uniquely human, all while remaining hackable in interesting ways.
So where’s the hacker angle? Narrow AI can be tricked. We’ve seen adversarial attacks where tweaking a few pixels fools image classifiers or poisoning a dataset sends a model off the rails. If Watson were still around today, you’d better believe hackers would be probing its training data for ways to make it confidently answer “What is Nicolas Cage?” to every question or lose a game of Go with a single rogue stone.
Fast forward to now. We’re living in the land of ChatGPTs, Claude, Gemini, and friends. These systems write essays, draft code, pass medical exams, and even pen mediocre love poetry on demand. They’re not truly general intelligence, but they’re close enough to feel like the world’s most over-eager unpaid interns.
This is where things get interesting and messy. When your AI intern writes code, is it also shipping vulnerabilities? When it drafts term papers, is it also fabricating citations? The downstream impacts are massive; industries are being reshaped and productivity is surging, but misinformation is scaling like never before.
From a hacker’s perspective, AGI-adjacent systems are one giant new attack surface. We’ve seen prompt injection, jailbreaks, and data exfiltration as examples via a range of sophisticated crafted inputs to complete exploits that feel almost like AI-on-AI chess. But here’s the twist: not all of these require a 4D hacking brain. Some are trivial zero-effort wins born out of the way AI has been boxed or squished into use cases it was never designed for. Forget Skynet right now; the real danger is your AI intern with access to production executive functions becoming a gullible sidekick for attackers.
Want to hear more about AI vs. AI in the ring? Check out “Rigging the system: The art of AI exploits.”
It’s like bolting a jet engine onto a tricycle and then being surprised when someone tips it over with a pinky. Companies are strapping large language models (LLMs) onto customer support, financial workflows, and even security products, but without reshaping these systems around AI’s quirks. The result? Hackers don’t always need to invent a dazzling new exploit; sometimes, we just nudge the model in the wrong direction and watch the wheels fall off.
And then there’s the big, scary acronym of ASI: the realm where machines aren’t just smart, but they’re smarter than us in every possible way. Movie producers can’t get enough of this stuff, from The Terminator (sentient killing machines) and Ex Machina (robots that manipulate emotions) to Her (AI that can literally break your heart).
In the real world? We’re not there yet. No system today is self-aware, sentient, or plotting humanity’s downfall. But this speculation matters because it shapes how we think about risk. If an ASI—something that could rewrite its own code, improve itself infinitely, and operate beyond human patch cycles—were to emerge, then our entire security model would look like a floppy disk in the age of quantum computing.
But it’s not all doom and gloom. Pop culture has also given us parody takes, like The Simpsons Treehouse of Horror XII where Homer battles the charming Ultrahouse 3000, a smart home gone full HAL-9000 with Pierce Brosnan’s voice. It’s goofy, but it nails the underlying tension of what might happen if we were to hand over trust to systems we don’t fully understand.
Hacker spin? For now, ASI is a thought experiment. But it keeps us sharp, reminding us to ask: What if systems start chaining actions, autonomously testing their own exploits, and hiding traces better than any human red teamer? Suddenly, “Skynet” doesn’t feel so far-fetched.
From Watson’s trivia smackdowns to AlphaGo’s Go wizardry—and from today’s LLM interns to tomorrow’s sci-fi speculation—the story is clear that AI has been impressive, breakable, incredibly exciting, and occasionally terrifying at every stage. Hackers? We’re the necessary troublemakers in this tale. By probing weaknesses today, we help keep the Skynets of tomorrow in check for both fun and profit.
The next time you hear “I’ll be back,” remember that it’s not just Arnold; it’s every AI system waiting for its next iteration. And the job of keeping us safe (and a little bit weird) is ours.
Keep up with Ads’ wacky experiments or follow any of his content here. Feel free to hit me up on LinkedIn or GitHub. Ever in Toronto, Canada? Hit me up for coffee and donuts!