Alberta Machine Intelligence Institute

Why AI isn’t as scary as you think with Michael Littman | Approximately Correct Podcast

Published

Sep 16, 2025

Categories

Insights

Subject Matter

Research

Michael Littman wants you to know: AI isn’t as scary as you think. Despite its serious potential, AI can also be approached with humour and creativity.

On the latest episode of the Approximately Correct podcast, Littman shares how he uses humour and creativity to demystify AI, discusses his research into building human-focused intelligent systems, and offers practical advice on engaging with AI responsibly.

“We're the people in this equation. We're the people, we're in charge here. Programs are not in charge, they're programs,” he tells hosts Alona Fyshe and Scott Lilwall.

Over his career, Littman has had a lot of these types of conversations about AI with students, scientists, politicians, and the general public. He is a celebrated researcher in reinforcement learning and recently served as the Division Director for Information and Intelligent Systems at the National Science Foundation in the United States.

Most recently, he was named the first Associate Provost for Artificial Intelligence at Brown University, where he will guide the university on how to engage with AI responsibly.


The scientist as a sculptor

Littman thinks machine learning is important, but that doesn’t mean it needs to be humourless. He often uses jokes, songs, and other creative methods to help people better connect with the concepts behind AI science — like putting together an a cappella hit to the tune of Michael Jackson’s Thriller to explain the idea of overfitting. 

Littman says that these days, people are flooded with information about artificial intelligence — some of it true, some of it false, and some of it ridiculous. And playfulness can be a great tool for countering unfounded fears about AI.

“It's just a set of ideas. Maybe it's software, but it's not magic. And it's not an evil spirit. And so I think puncturing [those misconceptions] is okay,” he says.

Littman shares his own experience, where a conversation with a sculptor completely shifted the way he thought about machine learning. He says the way the artist talked about the thrill of working with new materials and lighting to create something new reminded Littman of how he felt when programming.

He realized that programming is a creative act.

“It's like, oh, here's a new algorithm. What can I do with that? What has it enabled me to do that I haven't been able to do in the past?” he says.

“This opens a door, this is now a new set of capabilities.”

Building AI for humans

Part of making people more comfortable with AI is making sure the tools we build are actually made to benefit people. One of the major challenges in that approach is how difficult it can be for people to communicate with machines.

AI systems can be very intimidating and difficult for non-experts to understand. But as machine learning becomes a greater part of our daily lives, it needs to be more accessible to people. Much of Littman’s work in reinforcement learning has focused on making it easier for people to communicate their goals and expectations to intelligent systems, and on building AI that is better at interpreting them.

That’s a two-way street. It’s not just about building AI that better achieves human goals. His research is also aimed at increasing our understanding of human intelligence and empowering people. 

Tune in to the Approximately Correct episode for a wide-ranging and fun conversation about all that and much more. And subscribe to Approximately Correct on YouTube, Spotify and Apple Podcasts.
