What wasps can teach us about artificial intelligence

Published

Feb 10, 2026

Categories

Insights

Subject Matter

Research

If you’ve ever seen inside a wasp’s nest or a beehive, you’ve likely marveled at the intricacy of the design. Wasps build nests inside walls, under eaves, and in trees. But no matter the location, the nest is adapted to suit the space, and each nest contains the same familiar combs of hexagonal cells.

But have you ever wondered how a group of wasps or bees works together to build such structures? Who decides how big the nest should be? Or the shape of the walls? The answers to questions like these don’t just teach us more about the intelligence of insects. They also give us new ways to think about artificial intelligence.

When we build a house, it isn’t as if a bunch of construction workers show up without blueprints or leadership. You have a foreman directing things. But there is no insect “in charge” of designing a nest. So how does something so complex get made without a central decision maker?

A wasp nest is initiated by a queen, but the bulk of the work is done by the first generation of wasps that hatch from that initial structure. Each of those wasps has its own internal programming that tells it how to build the hexagonal cells inside the nest. Those same instincts also tell it when to stop building the interior and start building the nest walls.

Each worker wasp carries the same set of simple instructions and works on the nest by executing them. The changes each wasp makes to the nest inform what the next wasp does, a phenomenon called stigmergy. Because every wasp follows the same simple nest-building rules, the colony can construct the nest in a distributed way, without any one wasp being in charge.
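Stigmergy is surprisingly easy to simulate. The Python sketch below is a toy model, with the grid size and building rule invented purely for illustration: identical agents never talk to each other, each one only reads the structure left behind by the others and adds one cell next to it. A nest-like blob grows with no agent in charge.

```python
import random

SIZE = 21
grid = [[0] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = 1          # the queen's initial comb cell

def built_neighbors(r, c):
    """Count already-built cells in the 4-neighborhood."""
    return sum(
        grid[r + dr][c + dc]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
        if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE
    )

def wasp_step():
    """One wasp's entire rule: build on a random empty cell that touches
    the existing nest. The wasp never sees a blueprint; it only reads the
    structure that previous wasps left behind."""
    candidates = [
        (r, c)
        for r in range(SIZE)
        for c in range(SIZE)
        if grid[r][c] == 0 and built_neighbors(r, c) >= 1
    ]
    if candidates:
        r, c = random.choice(candidates)
        grid[r][c] = 1

# Many interchangeable "wasps" take turns; coordination happens only through the grid.
for _ in range(200):
    wasp_step()

print(sum(map(sum, grid)), "cells built without any wasp in charge")
```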

What does this teach us about intelligence, both natural and artificial? First, it reveals our assumption that there has to be a leader to build something complex. This is one of our biases about the nature of intelligence: we tend to assume there is a centralized organization when there is none. It also shows that we tend to assume, mistakenly, that great intelligence must lie behind any complexity we see. But the complexity of a wasp nest comes not from some master plan, but from the interaction of many simple agents executing a few simple rules (not unlike Conway’s Game of Life).
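That comparison can be made concrete. In the Game of Life, every cell obeys the same tiny rule: survive with two or three live neighbors, be born with exactly three. Yet patterns like the "glider" appear to move purposefully across the grid. A minimal implementation, shown here only to make clear how little code the rules take:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count live neighbors for every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth with 3 neighbors; survival with 2 or 3.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A glider: five cells whose purely local rules produce a shape that walks across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))   # the same shape, shifted one step diagonally
```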

The complex from the simple

To study the interaction of insects and the emergence of intelligent behavior from groups of agents, scientists have turned to robots. Using elegant experiments with simple robots, they have illustrated how complex behavior can emerge when a group of agents all follow the same simple program. For example, in this video, Dr. Barbara Webb describes a swarm of robots that produce wall-building behavior like that seen in ants. Each robot has a simple program with only a few rules, but working together, the robots can build a new wall to protect their “nest.” No one robot is in charge, and even if half of the robots were to shut down, the wall-building could still continue. The behavior looks complex, but it is the product of many simple agents working together.

There are older examples of similar intelligent-looking behavior arising from simple machines. Grey Walter’s tortoises were robots programmed to move towards a light source. If a tortoise sensed that it had bumped into something on its way towards the light, it would turn to a random direction and proceed forward; then it would move towards the light again. These two simple behaviors together create something that looks a lot like intelligent navigation. Early Roomba robots operated in a similar way: they turned randomly when they encountered an obstacle and, once their battery ran low, navigated towards the beacon signal of their charging dock. Very few people would have called these early Roombas intelligent, but they were able to (mostly) successfully vacuum the floor.
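To get a feel for how little machinery is involved, here is a loose sketch of those two rules in Python. The light position, the wall, the step size, and the bump check are all invented for illustration; Walter’s actual tortoises were analog electronics, not code.

```python
import math
import random

# A toy phototaxis loop in the spirit of Grey Walter's tortoises.
# Environment details (light position, wall, gap, step size) are made up.
LIGHT = (10.0, 5.0)
WALL_X = 6.0                        # a wall between the robot and the light
x, y, heading = 0.0, 0.0, 0.0       # robot position and heading (radians)

for _ in range(500):
    # Rule 1: steer towards the light.
    heading = math.atan2(LIGHT[1] - y, LIGHT[0] - x)
    nx, ny = x + 0.1 * math.cos(heading), y + 0.1 * math.sin(heading)
    # Rule 2: if the move would bump the wall (below its gap at y = 4),
    # turn to a random direction and proceed forward instead.
    while x < WALL_X <= nx and ny < 4.0:
        heading = random.uniform(0, 2 * math.pi)
        nx, ny = x + 0.1 * math.cos(heading), y + 0.1 * math.sin(heading)
    x, y = nx, ny

# Two rules, no map, no plan: the robot usually finds its way around the wall.
print(f"final position ({x:.1f}, {y:.1f}); light at {LIGHT}")
```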

There are a few lessons here that we can apply to modern AI. Just like the simple agents described above, the neurons in a neural network act locally according to simple rules. No one neuron is intelligent, no one neuron is in charge, but as a group they can produce something that looks like intelligence.
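The “simple rule” each artificial neuron follows is nothing more than a weighted sum, a bias, and a nonlinearity. The sketch below uses arbitrary random weights and made-up layer sizes, purely to show how unremarkable each local step is:

```python
import numpy as np

rng = np.random.default_rng(0)

def neuron_layer(inputs, weights, bias):
    """A layer of neurons, each applying the same local rule:
    weigh the inputs, add a bias, apply a ReLU nonlinearity."""
    return np.maximum(0.0, inputs @ weights + bias)

# A tiny two-layer network with arbitrary random weights.
# No single unit here is intelligent; large models are stacks of this same local rule.
x = rng.normal(size=8)                          # an input vector
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

hidden = neuron_layer(x, W1, b1)                # every hidden unit applies the same rule
output = neuron_layer(hidden, W2, b2)
print(output)
```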

This leads to a kind of conundrum. Groups of very simple agents can work together to create something that looks like organized intelligence. At the same time, a human brain is just a set of simple agents (neurons) working together. How can we differentiate something that creates intelligent-looking behavior from actual intelligence?

Some have argued that intent separates true intelligence from this organized complexity. No single wasp intends to build the entire nest at the outset. Each wasp acts locally, and together they accomplish a complex task without the intent of any one wasp.

On the other hand, though no single neuron in the brain has intent, there is a sense of intent created by the collection of neurons, and that intent can be focused and redirected. Each wasp is at the mercy of its instincts, forced to execute that simple set of instructions. But a human (and indeed many living creatures) can move beyond the instinctual and display complex behavior with directed intent. So the question is not whether the complex behavior is intelligent, but whether the behavior is carried out with intent.

This shifts the goalposts of intelligence for AI. Defining what intent means for an AI is a deep question that can’t be answered by training an LLM to win the International Math Olympiad (IMO); it requires a new way of thinking. We need to show that AI is more than a collection of neurons reacting to input, that there is attention and purpose behind its “behavior,” that it is more than a swarm of wasps building hexagons with no plan for what the nest will be.

Alona Fyshe is the Science Communications Fellow-in-Residence at Amii, a Canada CIFAR AI Chair, and an Amii Fellow. She also serves as an Associate Professor jointly appointed to Computing Science and Psychology at the University of Alberta.

Alona’s work bridges neuroscience and AI. She applies machine-learning techniques to brain-imaging data gathered while people read text or view images, revealing how the brain encodes meaning. In parallel, she studies how AI models learn comparable representations from language and visual data.
