This blog was originally posted on Medium.com by Matt Held, Technical Customer Success Manager at Bugcrowd. 

AI changes the threat landscape, by a lot.

In my daily life as a cybersecurity solutions architect, I see countless vulnerabilities coming into bug bounty programs, vulnerability disclosure programs, pen tests, and every other form of intake for security vulnerabilities. AI has made it easier for hackers to find and report these things, but those are not the risks that break the major bones of our very digital society. It's (mostly) not about vulnerabilities in applications or services; it's about losing the human factor, losing trust in digital media, and potentially losing any privacy at all. Here is my forecast of the most prevalent and concerning challenges arising from artificial intelligence in 2024, and it's not all about cybersecurity.

1. Deepfakes and their Scams

Many of us have seen the Tom Hanks deepfake in which his AI clone promotes a dental health plan, or the TikTok video falsely presenting MrBeast as offering new iPhones for only $2.


Deepfakes are now as easy to produce as ordering a logo online or getting takeout. Plenty of sellers on popular gig-economy platforms will produce realistic-enough deepfakes of anyone (including you, or any other person you have a picture of, a voice sample of, or both) for a few bucks.

Armed with that, a threat actor can make any person, celebrities included, appear to do or say anything. This ranges from promoting fake crypto sites to altering the historical record. That is especially critical at a time when we are rewriting history books for better accuracy, not just from the winning party's point of view.

Exhibit A: the image below took one minute to produce.

Image composed by author with Midjourney.

Deepfake usage more than doubled in 2023 compared to 2022. And according to a 2023 report by Onfido, a leading ID verification company, deepfake fraud attempts increased by a staggering 3,000% over the same year.

With image generators and refined models, one can already create realistic verification images for any type of online identity check, all while keeping the original identifiers of the person intact (looks, biometrics, lighting, and more) and without looking generic.

Image created by Reddit user _harsh_ with Stable Diffusion | Source

Unsurprisingly, the adult entertainment industry has rapidly embraced the technology, leading to a troubling surge in non-consensual explicit content. Whatever you feed an AI, it will create, much to the suffering of people who never consented to having their face and body turned into those of an adult film actor or actress. Tragically, this misuse extends to the creation of illegal and morally reprehensible material involving minors (CSAM). The ease with which AI can be abused for such purposes highlights a significant and distressing problem with the unregulated use of image generators.

On the more economic side of things, AI influencers are already replacing the human ones. No more influencer farms (a shame, I know), as they might be too costly to operate when you can achieve the same outcome, and more, with just a few servers full of powerful GPUs.

While this might not seem as concerning at first glance, a non-person with millions of followers spreading fake news or promoting crypto scams and multi-level marketing schemes is just a post away.

While the use of convincing deepfakes in financial scams is still uncommon to date, it will surge drastically as trained models rapidly evolve, and it will very quickly become harder to distinguish what is real from what is not.

2. Automation of Scams (Deep Scams)

Let's talk more about scams, because there are so many variants of them, and so many flavors of emotional manipulation designed to part people from their hard-earned money.

Romance scams, fake online shops, property scams, pig-butchering scams: the list goes on and on, and every single one of them will be subject to mass automation with AI.

Romance Scams

What looks like realistic photographic evidence, a spontaneous snapshot taken with a smartphone, turns out to be just another generated image.

Image created by Reddit user KudzuEye | Source

Tinder Swindler-style romance scams are skyrocketing, and we can expect even more profiles of potential online romances to consist of fully generated images and text messages.

Fake Online Shops

“Have you heard about the new Apple watch with holo-display? Yeah, pretty nice, and only $499 here on appIe.shop”

(Notice the capital "I" in place of the "l" in "apple.")

AI makes it easy to clone legitimate online shops, create ads for fake products, spam social media and paid advertising services, and rake in the profits.

Image composed by author with Dall-E 3.

A 135% increase in fake online shops in October 2023 alone speaks volumes about how effortless it is to create such sites.

Property (AirBnB, Booking-sites) scams

Looking for a nice getaway? Oh wow, right at the beach, and look, so cheap at the exact dates we’re looking for — let’s book quickly before it is gone!

Airbnb scams like this one, among others, are on the rise. When you arrive at your destination, the property does not look like the advertised images, or it doesn't exist at all. The methods are easy to replicate over and over again, and the platforms are not catching up fast enough in identifying and removing such listings. Sometimes, they don't even seem to care.

Image composed by author with Dall-E

And it's not all about creating realistic-looking visuals; text-based attacks benefit too.

Let's go back to the age-old "Nigerian prince" or 419 scam. According to research by Ronnie Tokazowski, the new-ish way these scammers make bank is Business Email Compromise (BEC) against companies or individuals.

For context: this usually involves using a lookalike domain (my-startup[dot]com instead of mystartup[dot]com) and the same email alias as someone high up in the company, asking another person in the company to transfer money or buy gift cards on their behalf.

This is a pretty low-effort point of entry, and with the help of Large Language Models (LLMs) it can be scaled to an effectively infinite number of messages that all differ in language, wording, emotion, and urgency.
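
From the defender's side, even a crude filter catches many of these lookalikes. Here is a minimal sketch, not a production control, that folds common homoglyphs (like the capital "I" posing as an "l" above) and compares the result against trusted domains; the trusted list, homoglyph map, and similarity threshold are all illustrative assumptions:

```python
# Crude lookalike-domain check: fold common homoglyphs, then compare the
# result against trusted domains. TRUSTED, HOMOGLYPHS, and the threshold
# are illustrative assumptions, not a complete rule set.
from difflib import SequenceMatcher

TRUSTED = ["mystartup.com", "apple.com"]
HOMOGLYPHS = {"I": "l", "1": "l", "0": "o", "rn": "m", "vv": "w"}

def normalize(domain: str) -> str:
    # Replace lookalike characters before lowercasing, so "appIe" -> "apple".
    for fake, real in HOMOGLYPHS.items():
        domain = domain.replace(fake, real)
    return domain.lower()

def is_lookalike(domain: str, threshold: float = 0.85) -> bool:
    d = normalize(domain)
    for trusted in TRUSTED:
        if d == trusted:
            return False  # the genuine domain itself
        name, t_name = d.split(".")[0], trusted.split(".")[0]
        # Same name on a different TLD, or a near-miss spelling: flag it.
        if name == t_name or SequenceMatcher(None, name, t_name).ratio() >= threshold:
            return True
    return False

print(is_lookalike("appIe.shop"))      # True: capital "I" posing as an "l"
print(is_lookalike("my-startup.com"))  # True: near-identical to mystartup.com
print(is_lookalike("apple.com"))       # False: the real thing
```

Real-world BEC defenses layer this kind of check with domain-registration-age lookups and display-name matching, but the core intuition holds: near-identical is more suspicious than completely different.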

More sophisticated threat actors could match the style and wording of the person they are impersonating, and automate the domain registration, email alias creation, social media profile creation, mobile number registration, and so on. Even the initial recon can be done by AI, just by feeding it the company's social profiles, "About Us" pages, marketing emails, gathered invoices, and much more, until the model becomes a fully automated scam machine.

The inner workings of these so-called "pig butchering" scams are sinister and deeply disturbing. Reports suggest these operations are not only involved in routine and extreme criminal activity but are also linked to severe acts of violence, including human sacrifices and armed conflicts.

In conclusion, the staggering efficiency AI brings to every one of these scams, probably increasing their scale a thousandfold, only intensifies the situation for our society.

3. Privacy Erosion

George Orwell's seminal dystopian novel "1984" was published seven decades ago, marking a milestone in literary history. The popular Netflix series "Black Mirror" premiered less than a decade ago. And it really doesn't matter which one we use to compare against the current state of AI surveillance in the world, all conducted under the cloak of public security. We have arrived at the crossroads and have already started to take a step forward: in 2019, the Carnegie Endowment reported a big surge in mass surveillance via AI technology.

Image courtesy of carnegieendowment.org | Source

If you'd like to see an interactive world map of AI Global Surveillance (AIGS) data, you can find one here.

By 2025, global digital data is projected to expand to over 180 zettabytes (roughly 180 billion terabytes). This immense amount of collected data, particularly in the realms of mass surveillance and profiling of internet users and of people just walking down the street, needs to be put into context for whatever analysts want to use it for (for good, I assume, of course). AI technologies are set to play a crucial role in efficiently gathering and analyzing those vast quantities of data. It is true: a trained AI model is faster, cheaper, and often better than a human at:

  • Data collection (AI web scrapers automatically collect data from multiple online sources and handle unstructured data more effectively than traditional tools; they can also ingest and analyze thousands of video feeds at once.)
  • Data mapping (With the right training data, mapping via machine learning is a very fast process, and relationships between entities become apparent very rapidly.)
  • Data quality (Collection and mapping are just the beginning; identifying errors and inconsistencies in large datasets is crucial. AI tools help by detecting anomalies and estimating missing values more accurately and far more efficiently than any human; see the sketch below.)
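
To make the data-quality bullet concrete, here is a minimal sketch, assuming scikit-learn and a toy numeric dataset (all values and parameters are illustrative), of the two workhorse steps named above: estimating missing values and flagging anomalous records.

```python
# Toy data-quality pass: impute missing values, then flag anomalous rows.
# The dataset, missing-value pattern, and contamination rate are illustrative.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
data = rng.normal(loc=50, scale=5, size=(1000, 3))  # "normal" records
data[::97, 0] = np.nan                              # sprinkle missing values
data[5] = [500, -40, 9000]                          # one obvious outlier

# Step 1: estimate missing values (here: simple column medians).
imputer = SimpleImputer(strategy="median")
clean = imputer.fit_transform(data)

# Step 2: flag records that don't fit the overall distribution.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(clean)                # -1 = anomaly, 1 = normal

print(f"missing values imputed: {np.isnan(data).sum()}")
print(f"anomalous rows flagged: {(labels == -1).sum()}")
```

The point is not the specific algorithms but the scale: the same two calls run identically on three columns or three thousand.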

But what are we talking about here? Tracking the movements of terrorists and people who pose a danger to society? Or are we just tracking everyone, simply because we can?
It is already the latter.

Everyone in society plays their part too. Alongside the issue of external mass surveillance, there is a growing trend of individuals voluntarily sharing extensive details of their lives on the internet. Historically, this information has been accessible to governments, data brokers, and advertising agencies.

Or to people with the right capabilities who know where to look. "Fun" fact: this video is 10 years old.

However, with the advent of AI, the process of profiling, tracking, and predicting the behavior of anyone who actively shares things online has become significantly more efficient and sophisticated. George Orwell might have been astonished to learn that it’s not governments, but consumers themselves who are introducing surveillance devices into their homes. People are willingly purchasing these devices from tech companies, effectively paying to have their privacy compromised.

“Alexa, are you listening?”

Imagine funneling all of this data into a trained model that profiles a person based on purchase history, physical movements, social media posts, sleep patterns, intimate behaviors, and more. This level of analysis could reveal insights about a person that they might not even recognize in themselves. Now amplify this process to a mountain of data on countless individuals, monitored through AI surveillance, and you could map out entire societies. What was once possible only for governments or Fortune 500 companies becomes available to the average user.
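
How trivially separate data sources fuse into a profile is easy to demonstrate. Here is a minimal sketch, with entirely fictional records and field names, joining three "leaked" datasets on a shared email address:

```python
# How easily disparate records join into one profile: three made-up
# "datasets" keyed on the same identifier. All data here is fictional.
import pandas as pd

purchases = pd.DataFrame({"email": ["jane@example.com"],
                          "last_purchase": ["pregnancy test"]})
locations = pd.DataFrame({"email": ["jane@example.com"],
                          "frequent_place": ["clinic, 5th Ave"]})
social = pd.DataFrame({"email": ["jane@example.com"],
                       "recent_post": ["feeling tired lately..."]})

# One join per source, and a mosaic emerges that no single dataset contained.
profile = purchases.merge(locations, on="email").merge(social, on="email")
print(profile.to_string(index=False))
```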

With that, the concept of the "Glass Human," an individual whose life is completely transparent, accessible, and analyzable in great detail, is closer to reality than ever before.

4. Automation of Malware and Cyberattacks

Every malware creator's wet dream is self-replicating, auto-infecting, undetectable malicious code. While the automation part came true a long time ago, the undetectable part is now a reality as well.

It's not difficult to imagine that such technology may already be deployed at scale by ransomware groups and Advanced Persistent Threats (APTs), with evasion techniques that make recognizing and detecting the harmful code an impossible task for traditional defense methods. In other words, for a few months now we have had code that replicates itself in a different form, with different evasion techniques, on every machine it infects.

Personally, I believe that by the time I finish writing this article, the above will already be deployed and working in the wild.

In the realm of cyber-attacks, sophistication is increasing rapidly with the help of LLMs, and the possibility of launching multiple attack styles all at once to overwhelm defense teams is no longer a far-fetched scenario.

Instead of tediously hunting for a single vulnerability to gain access to a company's application, or chaining several together, threat actors can launch simultaneous attacks, see which one works best, and leverage the human factor to gain access or control.

Picture a multifaceted cyber-attack scenario:
A company’s marketing website is hit by a Distributed Denial of Service (DDoS) attack. Simultaneously, engineers working to fix this issue face Business Email Compromise (BEC) attacks. While they grapple with these challenges, other employees are targeted with phishing attempts through LinkedIn. Adding to the chaos, these employees also receive a barrage of Multi-Factor Authentication (MFA) spamming requests. These requests are an attempt to exploit their credentials, which may have been compromised in data breaches or obtained through credential stuffing attacks.

All these attacks can be coordinated simply by feeding endpoints, breach data, and contact profiles into a trained AI model.
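
On the defensive side, the countermeasure to this kind of multi-pronged assault is correlation. Here is a minimal sketch, assuming a time-sorted stream of alert events with made-up category labels, that raises a combined incident when several distinct attack categories fire within a short window:

```python
# Toy alert correlator: raises an incident when several *different* attack
# categories fire within one sliding window. Event fields are illustrative.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)
MIN_DISTINCT = 3  # e.g. DDoS + BEC + MFA spamming at once is suspicious

def correlate(events):
    """events: an iterable of (timestamp, category) tuples, sorted by time."""
    recent = []
    for ts, category in events:
        recent.append((ts, category))
        # Keep only events inside the sliding window ending at `ts`.
        recent = [(t, c) for t, c in recent if ts - t <= WINDOW]
        categories = {c for _, c in recent}
        if len(categories) >= MIN_DISTINCT:
            yield ts, sorted(categories)

alerts = [
    (datetime(2024, 1, 1, 9, 0), "ddos"),
    (datetime(2024, 1, 1, 9, 4), "bec_phish"),
    (datetime(2024, 1, 1, 9, 9), "mfa_spam"),
]
for ts, cats in correlate(alerts):
    print(f"{ts}: coordinated attack suspected across {cats}")
```

A real SOC would do this in a SIEM with far richer context, but the principle, treating simultaneous unrelated alerts as one coordinated event, is the point.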

This strategy becomes particularly alarming when it targets vital software supply chains, essential companies, or critical infrastructure.

5. Less Secure Apps

Speaking from years of experience, I can attest to a common trait among us developers: a tendency to seek efficiency, often perceived as laziness *coughs*. It comes as no surprise that the introduction of AI and Copilot apps has made tedious tasks a thing of the past. However, over-reliance on these tools without fully understanding their outputs, or how they fit into the broader context and data flow, can lead to serious issues.

A recent study revealed that users with access to an AI assistant tended to produce less secure code than those without. Notably, those with AI tools were also more prone to overestimating the security of their code, i.e., blindly copying and pasting everything into their codebase.
The study also highlighted a different but crucial finding: users who were more skeptical of the AI and actively engaged with it, by modifying their prompts or adjusting its settings, generally produced code with fewer security vulnerabilities. This underscores the importance of a balanced approach to using AI for coding, blending reliance with critical engagement. At the same time, tools such as GitHub Copilot and Facebook's InCoder tend to instill a false sense of confidence in developers regarding the robustness of their code.
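
The classic failure mode looks something like this. Below is a hypothetical, stripped-down illustration of the kind of injection-prone query an assistant might happily suggest, next to the parameterized version a skeptical developer would write (the table and data are made up):

```python
# Hypothetical example: an insecure, string-built SQL query versus the
# parameterized version. Table, columns, and data are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Insecure: user input interpolated straight into the SQL string.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print("insecure query returned:", rows)       # leaks every row

# Secure: the driver binds the value, so the payload is just a string.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # returns nothing
```

Both versions "work" on the happy path, which is exactly why an over-trusting developer ships the first one.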

But what does that mean in an economy where "move fast and break things" is still a common mantra among startups and established companies alike?
Exactly: we will end up with less secure products riddled with security issues and privacy flaws, and an overall lack of care for users' data.

In Summary

… we need to balance the scales!

We are at a crossroads in technological development, where the creations we bring to life must not only understand humanity but also align with its long-term interests and ethics (we also need better ethics to begin with). The stakes are incredibly high, as the potential benefits of these advancements could be the greatest we’ve ever seen, yet the risks are equally monumental.

Key to navigating this minefield is the preservation of privacy, which, with the right choices made today, can transition from a historical concept to a fundamental right enshrined in both technology and law.

To achieve this, there is an urgent need for global collaboration among governments as well as economic and technological drivers. We need a comprehensive and inclusive council dedicated to overseeing AI ethics, laws, and regulations. This council must be diverse, representing a range of societal and cultural backgrounds, and must have zero financial interest, so that the development and use of AI is not dominated by any single culture or economic power, maintaining a balanced and equitable approach to technological advancement.

Do you have thoughts or questions about this article? Reach out to Matt!