
How do we ensure AI makes safe choices? with Revan MacQueen | Approximately Correct Podcast

Published: Aug 19, 2025
Categories: Insights
Subject Matter: Research

Machine learning offers huge potential in industrial control, helping to increase efficiency and responsiveness in settings like factories, power plants, and other utilities. But when artificial intelligence is managing critical infrastructure, such as a water treatment plant, safety isn't optional. It's the top priority.

In the latest episode of Approximately Correct, Revan MacQueen, an ML Engineer at RL Core and an Amii alum, talks about how to design AI systems that operators can trust.


RL Core, founded by Amii Fellows Martha White and Adam White, specializes in applying reinforcement learning (RL) to optimize processes in industries such as water treatment, where safety is paramount. (You can learn more about the project in our previous episode with Martha.)

“When we are talking applications that are in the real world, involving physical parts that are moving, that's really important to have a human more generally involved. I think that having reactive, robust, safe agents is a top priority,” MacQueen told hosts Alona Fyshe and Scott Lilwall.

Safety Starts with Humans in the Loop

One of the keys to safe AI operation, MacQueen explains, is a close partnership with the people who know the system best. Before an AI agent ever goes live, RL Core works with plant operators to define hard limits for the system, identifying strict boundaries on things like chemical concentrations, pump speeds, and equipment stress levels that could pose risks to both people and equipment. These thresholds are non-negotiable, and the AI is programmed to stay within them.
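To make the idea concrete, here is a minimal sketch of what an operator-defined hard limit could look like in code: a safety layer that clips any setpoint the agent proposes back into the allowed range before it reaches the plant. The names and numbers below (SafetyLimits, clamp_action, the chlorine and pump bounds) are illustrative assumptions, not RL Core's actual implementation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SafetyLimits:
    """Operator-defined hard bounds for one controlled variable.

    Hypothetical example, not RL Core's implementation.
    """
    low: float
    high: float

    def clamp(self, value: float) -> float:
        """Project a proposed setpoint back inside the safe range."""
        return min(max(value, self.low), self.high)


# Illustrative limits an operator might specify for a treatment plant.
LIMITS = {
    "chlorine_dose_mg_per_L": SafetyLimits(low=0.2, high=4.0),
    "pump_speed_rpm": SafetyLimits(low=0.0, high=1750.0),
}


def clamp_action(proposed: dict) -> dict:
    """Apply hard limits to every setpoint the agent proposes.

    The RL agent is free to optimize within these bounds, but any
    proposal outside them is clipped before reaching the plant.
    """
    return {name: LIMITS[name].clamp(value) for name, value in proposed.items()}


# Example: a proposal that exceeds the chlorine limit gets clipped.
safe = clamp_action({"chlorine_dose_mg_per_L": 5.1, "pump_speed_rpm": 1200.0})
print(safe)  # {'chlorine_dose_mg_per_L': 4.0, 'pump_speed_rpm': 1200.0}
```

The point of this kind of design is that the clamp sits outside the learned policy, so the limits hold no matter what the agent proposes, while the agent is still free to optimize performance anywhere inside them.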

MacQueen stresses that this isn't about replacing operators, but about building systems with the input of human experts and keeping those experts in an oversight role.

"The operators are always in control of the water treatment plant. That's a big part of the approach we take to safety and trust—you’ve got to have humans in the loop with these kinds of systems," he said.

He explains that once humans have set the safety zones, the AI learns to fine-tune processes within those boundaries for better performance. 

MacQueen says that they’ve seen efficiency gains of 10–20% in some projects—enough to save thousands in material and energy costs each year, while also reducing wear and tear on equipment.

Just as important, it means water treatment operators can focus on long-term planning and overseeing the system, rather than the time-consuming task of fine-tuning the plant's operations.

“You need operators. Human-in-the-loop is not going anywhere just because of all the different things that can happen in these plants that the agent can't address by itself,” he says.

"That's a big part of the approach we take to safety and trust—you’ve got to have humans in the loop with these kinds of systems."

Revan MacQueen

ML Engineer, RL Core

Critical systems mean critical safety

Reinforcement learning holds a lot of promise in applications like industrial control and optimizing the systems that keep our world running. When dealing with such critical systems, the ability to trust that machine learning approaches are safe is vital.

Earlier this year, Amii partnered with CIFAR, the National Research Council, Mila, and the Vector Institute as part of the Canadian Artificial Intelligence Safety Institute (CAISI). CAISI works with experts across government, business, academia, and research organizations to ensure AI serves society responsibly.

“It’s exciting to see real-world applications, like those led by RL Core, that put safety at their core and drive real innovation, adoption, and trust in AI,” says Alyssa Lefaivre Škopac, Amii’s Director of AI Trust and Safety.

“This solution demonstrates how safety isn’t a barrier to progress, but a pathway for AI to have a positive impact on our lives. When we consider safety implications from the outset, we can be confident in developing, deploying, and adopting these novel solutions at scale.”

The future of safe industrial AI

In addition to the discussion of hard limits and human-in-the-loop design, MacQueen dives deeper into the challenges of creating AI agents that can interact with the world safely. In the full episode, he digs into why real-world challenges can sometimes be easier than simulated ones, and how these methods could be applied to industrial control problems far beyond water treatment.

Check out the full episode, and subscribe to Approximately Correct on YouTube, Spotify, and Apple Podcasts.

Want more from Revan? Check out his One-Minute Research video, where he explains his work on machine-learning self-play in multiplayer games.

