AI Seminar Series 2022: Sriram Ganapathi Subramanian

The AI Seminar is a weekly meeting at the University of Alberta where researchers interested in artificial intelligence (AI) can share their research. Presenters include both local speakers from the University of Alberta and visitors from other institutions. Topics can be related in any way to artificial intelligence, from foundational theoretical work to innovative applications of AI techniques to new fields and problems.

On August 12, Sriram Ganapathi Subramanian, Postdoctoral Research Fellow at the Vector Institute, presented "Decentralized Mean Field Games" at the AI Seminar.

Multi-agent reinforcement learning algorithms have not been widely adopted in large-scale environments with many agents, as they often scale poorly with the number of agents. Prior works have proposed using mean field theory to aggregate agents as a solution to this problem. However, almost all previous work in this area makes the strong assumption of a centralized learning system, where all agents in the environment obtain global observations and/or are effectively indistinguishable from each other (i.e., learn the same policy in the limit). In his talk, Subramanian presents a method that relaxes this requirement for centralized learning protocols and proposes a new mean field system known as Decentralized Mean Field Games, where each agent learns in a decentralized fashion based on its local observations, and can be quite different from the others.

Further, Subramanian provides a theoretical solution concept and establishes a fixed point guarantee for a Q-learning-based iterative algorithm in this system. A practical consequence of this approach is that it can address a "chicken-and-egg" problem in empirical mean field reinforcement learning algorithms. Notably, it is possible to design efficient (function approximation based) Q-learning and actor-critic algorithms that use the decentralized mean field learning approach. Empirically, these algorithms give stronger performance than common baselines in this area. In this setting, agents do not need to be clones of each other and learn in a fully decentralized fashion. Hence, for the first time, mean field learning methods can be extended to fully competitive environments, large-scale continuous action space environments, and other environments with heterogeneous agents. He also presents an application of the mean field method to a ride-sharing problem using a real-world dataset, proposing a decentralized solution that is more practical than the centralized training approaches considered by prior research efforts.
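To make the core idea concrete, the sketch below shows a minimal tabular version of a decentralized mean-field Q-update: each agent holds its own Q-table indexed by its local state, its own action, and a (discretized) local estimate of the neighbourhood's mean action, and updates it only from local observations. This is an illustrative assumption-laden sketch, not Subramanian's actual algorithm; the class name, discretization scheme, and hyperparameters are all hypothetical.

```python
import numpy as np

class MeanFieldQAgent:
    """Hypothetical sketch: one agent's decentralized mean-field Q-learner.

    The Q-table is indexed by (local state, own action, mean-action bin),
    where the mean-action bin is the agent's local, discretized estimate of
    the average action taken by its neighbours. No global observations or
    shared parameters are used, so different agents can learn different
    policies.
    """

    def __init__(self, n_states, n_actions, n_mf_bins, lr=0.1, gamma=0.95):
        self.q = np.zeros((n_states, n_actions, n_mf_bins))
        self.lr = lr
        self.gamma = gamma

    def act(self, state, mf_bin, eps=0.1):
        # Epsilon-greedy over the agent's own Q-values, conditioned on
        # the locally observed mean-action bin.
        if np.random.rand() < eps:
            return int(np.random.randint(self.q.shape[1]))
        return int(np.argmax(self.q[state, :, mf_bin]))

    def update(self, s, a, mf, r, s2, mf2):
        # Standard Q-learning target, but the mean-field term comes from
        # the agent's local estimate rather than a centralized signal.
        target = r + self.gamma * self.q[s2, :, mf2].max()
        self.q[s, a, mf] += self.lr * (target - self.q[s, a, mf])
```

In a simulation loop, each agent would call `act` with its own state and mean-action estimate, then `update` after observing its local reward and next state; no step requires information from agents outside its neighbourhood.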

Watch the full presentation below:

Keep up-to-date on the AI Seminar Series by signing up for the mailing list.

Learn how Amii advances world-leading artificial intelligence and machine learning research: visit our Research page.
