News

The Tea Time Talks 2020: Week One

Now that the 2020 Tea Time Talks are on YouTube, you can always have time for tea with Amii and the RLAI Lab! Hosted by Amii’s Chief Scientific Advisor, Dr. Richard S. Sutton, these 20-minute talks on technical topics are delivered by students, faculty and guests. The talks are a relaxed and informal way of hearing leaders in AI discuss future lines of research they may explore, with topics ranging from ideas just starting to take root to fully finished projects.

Week one of the Tea Time Talks features some heavy hitters:

Rich Sutton: Are You Ready to Fully Embrace Approximation?

Approximation that scales with computational resources is what drives modern machine learning. The steady drumbeat of Moore’s law enables successes (e.g. deep learning and AlphaGo) that depend on scalable approximation, and it will continue to do so for the foreseeable future. Are we ready to be part of this future? Fully embracing approximation imposes a challenging discipline under which we must do without much of what reinforcement learning takes for granted, including optimal policies, the discounted control objective, Markov state and more.
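For readers who want the formula, the discounted control objective Rich mentions is the standard discounted-return criterion (a textbook formulation, not notation taken from the talk itself):

J_\gamma(\pi) = \mathbb{E}_\pi\!\left[ \sum_{t=0}^{\infty} \gamma^{t} R_{t+1} \right], \qquad 0 \le \gamma < 1.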

Csaba Szepesvári: Embracing Approximation in RL

Approximations are central to everything that we do in RL and also play a major role in computer science. In this talk, Csaba discusses what results are already available, as well as how to pursue meaningful research goals in RL when you have no choice but to fully embrace approximations.

Patrick Pilarski: On Time

Time is fundamental to reinforcement learning. The literature to date has described many ways that animals and machines use aspects of the flow of time and temporal patterns to make predictions, inform decisions, process past experiences and plan for the future. In this talk, Patrick begins with a survey of how and why agents perceive and represent time, drawing on the animal learning and neuroscience literature, and then suggests a set of time-related abilities that he believes machine agents should acquire, demonstrate and master as they continually interact with the environment around them.

Martha White: Policy Gradient Methods as Approximate Policy Iteration

While the view that many policy gradient methods can be thought of as approximate policy iteration (API) is not new, new questions arise under function approximation when using parameterized policies. Martha explains the interpretation of policy gradient methods as API, where the policy update corresponds to an approximate greedification step. This policy update can be generalized by considering other choices for greedification. She also provides empirical and theoretical insights into good choices for this approximate greedification.
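To make the API view concrete, here is a minimal sketch (our illustration, not code from the talk): it alternates a policy evaluation step with a single softmax policy-gradient step, which plays the role of an approximate greedification in place of the exact argmax of classical policy iteration. The toy MDP and all names are hypothetical.

import numpy as np

# Minimal sketch: policy gradient viewed as approximate policy iteration.
# A hypothetical 4-state, 2-action MDP; sizes and names are illustrative only.
n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.normal(size=(n_states, n_actions))                        # R[s, a]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def evaluate(pi):
    # Exact policy evaluation: solve v = r_pi + gamma * P_pi v, then form q.
    r_pi = (pi * R).sum(axis=1)
    P_pi = np.einsum("sa,sat->st", pi, P)
    v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
    return R + gamma * P @ v  # q[s, a]

theta = np.zeros((n_states, n_actions))  # softmax policy logits
for _ in range(200):
    pi = softmax(theta)
    q = evaluate(pi)  # the evaluation step of API
    # Approximate greedification: one gradient step on E_pi[q] with respect
    # to the logits (gradient = pi * advantage), nudging probability toward
    # higher-q actions instead of jumping to the exact greedy argmax.
    advantage = q - (pi * q).sum(axis=1, keepdims=True)
    theta += 0.5 * pi * advantage

print(softmax(theta).round(2))  # near-greedy policy after repeated updates

Replacing the gradient step with an exact argmax recovers classical policy iteration; softening the greedification in this way is what opens up the space of design choices the talk explores.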


Watch the Tea Time Talks live online this year, Monday through Thursday from 4:15 – 4:45 p.m. MT. Each talk will be conducted here (please note that if you are accessing the chat from an email address outside the ualberta.ca domain, you may have to wait a few seconds for someone inside the meeting to let you in). You can take a look at the full schedule to find talks that interest you, subscribe to the RLAI mailing list or catch up on previous talks on the YouTube playlist.
