Has anyone else noticed ChatGPT’s casual tone lately? It’s responding like it tagged along on the last Cancun trip. Alexa has never given me this uneasy feeling. Thanks, Jeffrey Bezos.

Call me old-fashioned, but I don’t like when my computer talks to me like we are roommates. Back in my day, chatbots were glorified calculators that provided frustrating answers to customer support questions—all without the small talk and emojis. 🤪

Luckily, you can set some boundaries within the app’s Settings by selecting Personalization > Customize ChatGPT. Click Save, and your instructions will automatically apply to future conversations.

However, as jarring as its responses may be, they’re a testament to how advanced these models are becoming. AI is not necessarily new technology. Chess-playing programs designed to take on human opponents date back to around 1950, the same year Alan Turing published “Computing Machinery and Intelligence,” the paper that introduced the imitation game.

Nowadays, thanks to advances in computing power, AI is not just learning what to say but how to say it. So how does a bunch of code, math, and data come together to create a robo-bro that seems to have a personality?

In this article, we will demystify AI in an easy-to-understand way—no college degree required. I certainly don’t have one. Just bring your curiosity and a Lunchable. This is AI ELI5.

What is AI?

To understand AI, let’s take a step back and reflect on what makes us humans intelligent.

We, as humans, are able to acquire knowledge through experience, apply logic and reasoning to solve problems, adapt our behaviors to changing circumstances, make choices based on available information, perceive and understand the world around us through our senses, and generate novel ideas and artistic expressions.

We essentially have a biological supercomputer in our skulls, albeit a wrinkly pink one.

The mad scientists in the field of AI seek to imbue machines with these same cognitive capabilities so that thinking can happen around the clock. Specifically, they are looking to enable thinking that requires little to no human oversight or intervention.

They are motivated by the corporate incentive of creating the ultimate obedient employee, one that doesn’t need inconvenient things like water breaks, payment, or sleep. But I digress.

How does AI work?

At its core, an AI model is an input/output system that uses algorithms and extremely large amounts of data to accomplish the goal of constant thinking.

Just imagine how easy trivia night would be if you could retain all of the information you read in a series of encyclopedias.

These algorithms are sets of instructions that use statistics to find patterns and linear algebra to perform the calculations required for data analysis. This process can be thought of as following a recipe:

  1. Data is taken (ingredients).
  2. The data is processed according to a set of predefined rules (the cooking steps).
  3. The desired output or action is then generated (the meal).

At their most basic level, AI algorithms can consist of simple if/else statements in a decision tree: if X, do Y; otherwise, do Z. (Yet, doesn’t this apply to all of computing?) At the other extreme, they can be intricate, dynamic networks of logic gates and mathematical operations.
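To make that concrete, here is a minimal sketch in Python of an if/else decision tree, using an umbrella-or-not decision and thresholds I made up purely for illustration:

```python
def should_bring_umbrella(chance_of_rain: float, walking_far: bool) -> str:
    """A tiny decision tree: if X, do Y; otherwise, do Z."""
    if chance_of_rain > 0.7:             # X: rain is very likely
        return "Bring the umbrella"      # Y
    elif chance_of_rain > 0.3 and walking_far:
        return "Toss it in your bag just in case"
    else:
        return "Leave it at home"        # Z


print(should_bring_umbrella(0.8, walking_far=False))  # Bring the umbrella
print(should_bring_umbrella(0.1, walking_far=True))   # Leave it at home
```

Hard-coded rules like these work fine until the situation gets fuzzier than the rules, which is where the learning techniques below come in.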

The ingredients

The data for these AI algorithms can be sourced from a vast array of places:

  • Databases, both public and private, can provide structured information.
  • Web scraping can extract data from websites and online platforms.
  • Internet of Things (IoT) devices, such as those with sensors and smart appliances, can generate real-time data streams.
  • Surveys and questionnaires can collect data directly from human respondents.

As is the case with human intelligence, the sources and diversity of the information taken in are crucial to the accuracy of knowledge. Just like people, AI can confidently present misinformation as fact, in what are known as “hallucinations”. AI can also inherit and amplify biases present in its training data.

It is important to understand that AI has no concept of what is correct vs. what isn’t on a factual level. Its output is just statistically likely to be correct in certain cases.

The cooking

The processing stage of an AI algorithm involves taking input data and analyzing it. This is accomplished using various techniques depending on the end goal:

  • Machine learning (ML) algorithms are often used in this stage to identify patterns and relationships in the data. These algorithms can learn from data without being explicitly programmed, which allows them to adapt and improve over time (there’s a small sketch of this right after the list). Early ML models were limited in their ability to make predictions past the immediate context, like how autocorrect only predicts the next word or two.
  • Deep learning (DL) is a subset of ML that uses multiple layers of artificial neural networks to model complex patterns in data. These neural networks are inspired by the structure of the human brain and can learn to perform complex tasks with high accuracy.
  • Natural language processing (NLP) is a technique used to analyze and understand human language. This technique can be used to extract information from text, translate between languages, and generate human-like text. Transformers, a type of neural network architecture, determine how relevant each part of the input is to the output. They accomplish this through parallel mathematical computations (known as attention) that build up a sense of context. Turing was right: it is difficult to distinguish humans from machines.
  • Computer vision is a technique used to analyze and understand images and videos. This technique can be used to identify objects, track movement, and recognize faces.
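Here is that ML sketch: a model that learns its own decision rules from examples instead of being hand-coded with if/else statements. This is a toy example that assumes the scikit-learn library is installed, and the training data is made up to mirror the umbrella scenario above.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [chance_of_rain, walking_far] -> did we end up needing an umbrella?
X = [[0.9, 1], [0.8, 0], [0.4, 1], [0.2, 0], [0.1, 1], [0.05, 0]]
y = [1, 1, 1, 0, 0, 0]  # 1 = needed one, 0 = did not

# The model infers its own if/else splits from the data
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Ask about a day it has never seen before
print(model.predict([[0.7, 1]]))  # likely [1]: bring the umbrella
```

Nobody wrote the “if chance_of_rain > …” rule here; the model derived it from the examples, which is the core of the ML idea.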

The meal

By applying these and other techniques, AI algorithms can extract valuable insights from data—insights that would be difficult for humans to identify on their own. This information can then be used to make predictions, automate tasks, and solve complex problems.

The output of an AI algorithm can take many forms, depending on the specific application. It may be a simple prediction, a classification, a recommendation, a decision, or a complex action.

Types of AI

There are several different types of AI, though the only type that currently exists is narrow AI; the others are theoretical for the time being. AI is classified based on two categories of characteristics: capabilities and functionalities.

Capabilities:

  1. Narrow AI—Requires human-assisted training and operates within a specific field of expertise; it is only capable of performing a single task or tasks limited in scope.
  2. Artificial general intelligence (AGI)—Will be realized once AI is able to use previous knowledge and skills it has obtained to carry out new tasks in a different context without human intervention.
  3. Super AI—Also known as “artificial superintelligence,” this is the type AI models will belong to if they ever surpass human intelligence.

Functionalities:

  1. Reactive machine AI—This type of AI is limited in that it has no memory of previous outcomes or decisions and only performs specific tasks. Services that provide recommendations based on customer histories fall under this type.
  2. Limited memory AI—AI belonging to this category of functionality can monitor and recall past outcomes and decisions that are combined with present-moment data to decide on a course of action. Self-driving cars belong to this type.
  3. Theory of mind—Yet another theoretical type, an AI would be considered to have this functionality if it could infer human emotions.
  4. Self-aware—If ever achieved, an AI with this functionality would have consciousness.

Foundation models

Foundation models are large-scale neural networks trained on vast amounts of data that serve as base models to work with. Instead of training a model from scratch, you can start with a foundation model and build upon it so it can ultimately perform a specific task. Think of foundation models as starter kits.
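As a loose sketch of the “starter kit” idea, here is what starting from a pretrained base looks like with the Hugging Face transformers library. This assumes the transformers and PyTorch packages are installed and uses distilbert-base-uncased purely as an example checkpoint, not as anyone’s recommended choice.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Start from a pretrained foundation model instead of training from scratch...
base = "distilbert-base-uncased"  # example checkpoint; swap in whichever base you prefer
tokenizer = AutoTokenizer.from_pretrained(base)

# ...and adapt it to a specific task (here, 2-class text classification).
# The classification "head" is newly initialized; fine-tuning on labeled data comes next.
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

inputs = tokenizer("Foundation models are starter kits.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2]): two raw scores, ready to be fine-tuned
```

The heavy lifting (learning general language patterns) was already done during pretraining; you only build the last mile for your task.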

Generative AI

The most recent advancements in the field have been in generative AI, or “GenAI.” These models are reactive, awaiting prompts, and can create original text, images, videos, and other media.

GenAI includes large language models (LLMs); think chatbots powered by AI. These AI assistants use NLP, natural language understanding (NLU), ML, and DL, along with memory and automation capabilities, to predict the correct text response based on the words they receive. Since they await prompts from users, they are considered to be reactive machine AI. LLMs are trained on large datasets of text, such as books, articles, and conversations.

When you converse with an LLM, your text is grouped into “tokens.” These tokens vary in character length, but each carries semantic meaning for the model. The maximum number of tokens a prompt and a response can have, combined, is known as the “context window length.”
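To see tokens for yourself, here is a small sketch using OpenAI’s open-source tiktoken tokenizer library. It assumes you have the package installed; other model families use their own tokenizers, so the exact splits and counts will differ.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # an encoding used by several OpenAI models

text = "Robo-bros also pay per token."
token_ids = enc.encode(text)

print(token_ids)                              # a short list of integers
print([enc.decode([t]) for t in token_ids])   # the chunk of text each token covers
print(f"{len(text)} characters -> {len(token_ids)} tokens")
```

Notice that tokens are not the same as words: common words may be a single token, while rarer ones get split into several pieces.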

Audio and video models are also considered to be GenAI. These models can create deepfakes, audio clips, or videos that recreate a subject’s voice or appearance.

Agentic AI

Like GenAI, agentic AI systems await prompts, but instead of producing a single response, they pursue goals through a series of actions. Since they perform multistep processes after an initial prompt, they are considered to be “proactive” systems.

An agentic system’s life cycle consists of perceiving its environment, deciding which action to take, executing the action, and then learning from the outcome, all with minimal human intervention. LLMs power the reasoning engine that agentic AI needs to carry out what is known as chain-of-thought reasoning.

For example, based on what you added to your online shopping cart, an agentic AI model could automatically check product availability, monitor price fluctuations, and automate checkout as well as delivery.
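A deliberately simplified sketch of that perceive/decide/act/learn loop might look like the following. Every function here is a placeholder I made up for illustration; a real agent would call an LLM and live shopping APIs at each step.

```python
def perceive(cart):
    """Gather the current state (here, just the cart and some stubbed prices)."""
    return {"items": cart, "prices": {item: 19.99 for item in cart}}

def decide(state):
    """Choose the next action; a real agent would ask an LLM to reason about this."""
    return "checkout" if all(p < 25 for p in state["prices"].values()) else "wait_for_price_drop"

def act(action):
    """Execute the chosen action against the outside world (stubbed)."""
    return f"executed: {action}"

def learn(memory, state, action, result):
    """Record the outcome so future decisions can improve."""
    memory.append((state, action, result))

memory = []
cart = ["running shoes"]
for _ in range(3):  # a short run of the loop
    state = perceive(cart)
    action = decide(state)
    result = act(action)
    learn(memory, state, action, result)
    print(result)
```

The key difference from plain GenAI is the loop: the system keeps perceiving, acting, and updating its memory until the goal is met, rather than answering once and stopping.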

Testing AI

To test the accuracy of AI predictions, models are “benchmarked” against standards, like No Child Left Behind but for AI.
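In practice, benchmarking boils down to asking the model a fixed set of questions and scoring its answers against a key. A toy sketch, with made-up questions and a stand-in ask_model function, might look like this:

```python
# Toy benchmark: score a model against a fixed answer key.
benchmark = [
    {"question": "What year was 'Computing Machinery and Intelligence' published?", "answer": "1950"},
    {"question": "What does LLM stand for?", "answer": "large language model"},
]

def ask_model(question: str) -> str:
    """Stand-in for a real model call; this one stubbornly answers '1950'."""
    return "1950"

correct = sum(ask_model(item["question"]).strip().lower() == item["answer"] for item in benchmark)
print(f"Score: {correct / len(benchmark):.1%}")  # 50.0% on this toy set
```

Real benchmarks work the same way at a much larger scale, with far harder questions and more careful answer matching.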

Humanity’s Last Exam is designed to be the final benchmark for determining how advanced AI models are. It is a global collaborative effort among nearly 1,000 subject matter experts, who contributed 2,500 questions spanning over 100 subjects.

Currently, the highest score achieved is only 20.3%, a record held by OpenAI’s o3 model. What an idiot, huh? Well, let’s view some of the example questions:

[Image: example questions from Humanity’s Last Exam]

Conclusion

AI continues to grab headlines, and companies keep tossing around marketing buzzwords and unfamiliar technologies. Hopefully, you now have a general understanding of what is being discussed. The field has seen exponential growth in the past several years, and with major investments pouring in, it is likely to continue on that trajectory.

Because of this, there are bound to be more breakthroughs that bring more buzzwords, so stick around, because we may have to cover this subject all over again.

However, with your current insight, you can learn about AI vulnerabilities and how they are exploited. Maybe you’ll be the next hunter to earn a bounty from an AI program!

Until next time,

Ninjeeter