The race to develop ever more powerful AI models has raised an important question: how open should these models be? And what do we risk by keeping foundational models under lock and key?
On the latest episode of Approximately Correct, we’re joined by Joelle Pineau, who argues that openness is essential to the future of AI development. Pineau is a professor at the School of Computer Science at McGill University and a member of Mila. She is also the former VP of AI research at Meta, where she led its Fundamental AI Research (FAIR) team and the release of its open-source large language model, Llama.
Most recently, she has joined startup Cohere as their first chief AI officer, and has been named a member of the Government of Canada’s AI Strategy Task Force (along with Amii Fellow and Canada CIFAR AI Chair Michael Bowling).
For Pineau, open-source models are more than something to aspire to. They are necessary for building safer, more effective foundational models. Open-weight models, such as FAIR’s Llama, allow the research community to find flaws, mitigate risks, and push artificial intelligence forward in a way that isn’t possible when the inner workings of models are locked away.
“We are all building on each other's contributions. None of us is an island,” she says.
“The more we can share in terms of not just the outcome but the process of our work, the more we can empower others to contribute.”
Even beyond making models stronger, Pineau warns of another danger of closed-source foundational models: algorithmic monoculture.
If the field becomes reliant on a small number of foundational models, built and maintained by just a few organizations, the diversity and usefulness of AI suffers. It leads to a world where all models share the same weaknesses, vulnerabilities, and limitations. Resilient, safe AI requires an open approach.
“If all the agents are very similar to each other, that means they'll all be vulnerable to the same viruses and the same ways to manipulate them,” she says.
Check out the full episode to hear more about Pineau’s fascinating approach to AI research and why she thinks the true future of AI is an open one.
Approximately Correct: An AI Podcast from Amii is hosted by Alona Fyshe and Scott Lilwall. It is produced by Lynda Vang, with video production by Chris Onciul. Subscribe to the podcast on Apple Podcasts or Spotify.

Not Your Average AI Conference
Learn from Leading Minds in AI at Upper Bound
Be one of the thousands of AI professionals, researchers, business leaders, entrepreneurs, investors, and students in Edmonton this spring. Explore new ideas, challenge the status quo, and help shape a positive AI future.