This is the third article in a series written by Alona Fyshe, Science Communication Fellow-in-Residence at Amii, exploring the ideas and breakthroughs that shaped modern AI—and the questions we’re still asking.
Sometimes when I set out to write an article, I realize I’ve stumbled onto a centuries-old question with no easy answer. Sometimes I take the easy way out and find another topic. But this question is important, so let’s dive in.
What is intelligence? This question has captured the imagination of generations of philosophers and thinkers. I’m going to slightly change the question from “What makes a human intelligent?” to “What would be the bare minimum required for us to call an AI ‘intelligent?’” What are the characteristics without which an AI could not be considered intelligent?
We can start simply: the model must have memory. If an AI cannot store information and can only react to its environment, it may produce fairly complex behavior, but it is not intelligent. Memory is a basic need, but not everything with memory can be considered intelligent. For example, Wikipedia has a huge amount of information stored on its servers, and even comes with an efficient search mechanism. But no one would claim Wikipedia is intelligent, so what is it missing?
One thing that Wikipedia cannot do is learn. For all of the information it holds, it is not equipped to learn from that information to perform a task. Many language models (LMs), however, are trained on Wikipedia, and through that training process they learn to produce sensible sentences. Most importantly, those sentences can differ from the sentences the LM saw during training. This means the LM learns in a way that goes beyond memorization: it learns in a way that allows it to generalize to new sentences.
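To make “beyond memorization” concrete, here is a toy sketch. It is nothing like how modern LMs are built; it is just a bigram model trained on a made-up three-sentence corpus. Because it learns word-to-word transitions rather than storing whole sentences, it can emit sentences that never appear verbatim in its training data.

```python
import random
from collections import defaultdict

# Made-up training corpus (illustrative only).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat slept on the rug",
]

# "Learning": record which words follow which.
follows = defaultdict(list)
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)

# "Generating": sample one learned transition at a time.
def generate(max_words=20):
    word, out = "<s>", []
    while len(out) < max_words:
        word = random.choice(follows[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

print(generate())
```

A run might print “a cat sat on the mat”, a sentence that appears nowhere in the corpus. The same basic idea, learning statistical structure rather than memorizing strings, is what lets a real LM produce novel sentences.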
This generalization is a third characteristic that an intelligent agent must have. To show that an agent really “understands” a concept, it must be able to apply that concept in novel circumstances. This is like how we test students: if we want to make sure a student really understands a math concept, we must present the student with problems that require that concept but are not exactly like any problem the student saw during instruction.
From here, we head into territory where the requirements of intelligence become less clear. For example, some have argued that intelligent agents must have the ability to identify and work towards goals. But what exactly is a goal, and what does it mean to work towards it? In some contexts (e.g., robotics), the goal might be clearer (e.g., navigate to a location), but in other areas, the idea of a goal might be more abstract. For example, is the goal of an LM to participate in dialogue like a human might? That is, indeed, how it’s trained.
Perhaps anything that’s trained has a goal. Every learned system works towards reducing its training error, however that’s defined. But then we find ourselves including simple regression models in our set of possibly intelligent agents: they optimize for training error, store information in their weights (just like a neural network), and learn to generalize to new, unseen examples. But calling a regression model intelligent doesn’t seem right.
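To see why regression checks those first three boxes, here is a minimal sketch. The data are made up (sampled from y = 3x + 1 with a little noise), so the numbers are illustrative, not from the article.

```python
import numpy as np

# Made-up noisy data from y = 3x + 1 (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 3 * x + 1 + rng.normal(scale=0.1, size=50)

# "Learning": choose weights that minimize squared training error.
X = np.stack([x, np.ones_like(x)], axis=1)  # features: [x, bias]
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit

# "Memory": everything the model learned now lives in two numbers.
print(w)  # roughly [3.0, 1.0]

# "Generalization": predictions for inputs never seen in training.
x_new = np.array([0.123, -0.456])
print(x_new * w[0] + w[1])
```

Memory, learning, and generalization are all here, and yet the model can only ever fit a straight line to one dataset.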
It’s hard to argue that a regression model is intelligent because a single regression model can only perform one task. For an agent to be intelligent, it should not just generalize to new examples of the same type, but also to new tasks. The ability to flexibly adapt and learn new tasks is a hallmark of intelligent behavior.
We have identified four necessities of intelligence: memory, learning, generalization, and adaptability. Together, these characteristics draw some pretty clear boundaries between agents that could be intelligent and those that could not. However, they are necessary but not sufficient: not every agent that satisfies these requirements is intelligent, but it’s hard to imagine an intelligent agent that lacks them. Drawing these clear boundaries gives us the foundation for more nuanced conversations about intelligence.
Alona Fyshe is the Science Communication Fellow-in-Residence at Amii, a Canada CIFAR AI Chair, and an Amii Fellow. She also serves as an Associate Professor jointly appointed to Computing Science and Psychology at the University of Alberta.
Alona’s work bridges neuroscience and AI. She applies machine-learning techniques to brain-imaging data gathered while people read text or view images, revealing how the brain encodes meaning. In parallel, she studies how AI models learn comparable representations from language and visual data.

