
The Tea Time Talks 2020: Week Seven

Now that the 2020 Tea Time Talks are on YouTube, you can always have time for tea with Amii and the RLAI Lab! Hosted by Amii's Chief Scientific Advisor, Dr. Richard S. Sutton, these 20-minute talks on technical topics are delivered by students, faculty and guests. The talks are a relaxed and informal way of hearing leaders in AI discuss future lines of research they may explore, with topics ranging from ideas just starting to take root to fully finished projects.

Week seven of the Tea Time Talks features:

Matthew Schlegel: A first look at hierarchical predictive coding

Predictions, specifically those of general value functions (GVFs), have led to many lines of research and thought at the RLAI lab. While there have been many new algorithms for learning GVFs in recent years, there are still many questions around their use. In this talk, Matthew introduces the core concepts of hierarchical predictive coding (Rao, 1999), a scheme that uses predictions to inhibit feed-forward signals through corrective feedback. He also discusses an instantiation of the hierarchical predictive coding model using techniques from deep learning.
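To make the idea concrete, the snippet below is a minimal, hypothetical sketch of Rao-style predictive coding in a single linear layer, not the hierarchical deep-learning instantiation from the talk: a top-down prediction is subtracted from the input, and only the resulting error signal is passed forward and used as corrective feedback to update the representation and the generative weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-layer linear predictive coding loop (after Rao, 1999).
# The top-down prediction W @ r inhibits the feed-forward signal; only the
# residual error is propagated and used as corrective feedback.
x = rng.normal(size=8)              # input signal
W = rng.normal(size=(8, 4)) * 0.1   # assumed linear generative (top-down) weights
r = np.zeros(4)                     # higher-level representation

lr_r, lr_W = 0.1, 0.01
for _ in range(200):
    prediction = W @ r              # top-down prediction of the input
    error = x - prediction          # feed-forward signal after inhibition
    r += lr_r * (W.T @ error)       # corrective feedback on the representation
    W += lr_W * np.outer(error, r)  # slow weight update driven by the same error

print("remaining prediction error:", np.linalg.norm(x - W @ r))
```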

Alex Lewandowski: Temporal Abstraction via Recurrent Neural Networks

Environments come preconfigured with hyper-parameters, such as discretization rates and frame-skips, that determine an agent's window of temporal abstraction. In turn, this temporal window influences the magnitude of the action gap and greatly impacts learning. Alex discusses ongoing work that uses a recurrent neural network to flexibly learn action sequences within a temporal window.
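For context, the fixed temporal window described above is usually implemented as a frame-skip, where one action is repeated for k underlying environment steps and the intermediate rewards are summed. The sketch below only illustrates that fixed hyper-parameter; the environment interface is assumed, and this is not the recurrent approach Alex discusses.

```python
def skip_step(env, action, k):
    """Repeat `action` for k underlying environment steps, summing rewards.

    Assumes `env.step(action)` returns (obs, reward, done). The fixed skip k
    pins down the agent's window of temporal abstraction ahead of time, which
    is the rigidity a learned, recurrent temporal window aims to remove.
    """
    obs, total_reward, done = None, 0.0, False
    for _ in range(k):
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return obs, total_reward, done
```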

Shibhansh Dohare: The Interplay of Search and Gradient Descent in Semi-stationary Learning Problems

In this talk, Shibhansh explores the interplay of generate-and-test and gradient-descent techniques for solving supervised learning problems. He starts by introducing a novel idealized setting in which the target function is stationary but much more complex than the learner, and in which the distribution of inputs varies slowly. He then shows that when the target function is more complex than the approximator, tracking is better than any fixed set of weights. Finally, he explains that conventional backpropagation performs poorly in this setting, but that its performance can be improved by using random search to replace low-utility features.
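One toy way to picture this interplay is a single hidden layer trained by backpropagation in which, every so often, the feature with the lowest utility (here crudely measured by the magnitude of its outgoing weight) is reinitialized at random. The sketch below is an illustrative assumption, not the method from the talk; the target function and the utility measure are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 10, 50
W1 = rng.normal(size=(n_hidden, n_in)) * 0.1   # input weights (the features)
w2 = rng.normal(size=n_hidden) * 0.1           # output weights

lr = 0.01
for step in range(10_000):
    x = rng.normal(size=n_in)
    y = np.sin(x).sum()                        # stand-in for a complex target
    h = np.tanh(W1 @ x)
    err = y - h @ w2
    # Gradient descent (backpropagation) on both layers.
    W1 += lr * err * np.outer(w2 * (1 - h**2), x)
    w2 += lr * err * h
    # Generate-and-test: periodically replace the lowest-utility feature.
    if step % 100 == 0:
        worst = np.argmin(np.abs(w2))          # crude utility: |outgoing weight|
        W1[worst] = rng.normal(size=n_in) * 0.1
        w2[worst] = 0.0
```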

Dhawal Gupta: Optimizations for TD

In his talk, Dhawal explores the possibility of using adaptive step-size techniques from the deep learning community for temporal difference (TD) learning. Do adaptive step-size methods offer any respite from TD learning's divergence issues, which arise mainly from the mismatch between behaviour and target policies? Is this even something that merits looking into, or should entirely separate step-size techniques be developed for TD learning?
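One concrete reading of these questions is to plug an Adam-style optimizer into semi-gradient TD(0) with linear function approximation and see how it behaves. The sketch below is only that reading, under an assumed environment interface with reset() and step(); it is on-policy and does not correct for the behaviour/target policy mismatch that drives the divergence issues mentioned above.

```python
import numpy as np

def td0_adam(env, phi, n_features, episodes=100, gamma=0.99,
             alpha=1e-2, beta1=0.9, beta2=0.999, eps=1e-8):
    """Semi-gradient TD(0) with an Adam-style adaptive step size.

    Assumes `env.reset()` returns a state and `env.step()` returns
    (next_state, reward, done) under some fixed behaviour policy, and that
    `phi(state)` returns an n_features feature vector. No off-policy
    correction is applied here.
    """
    w = np.zeros(n_features)
    m = np.zeros(n_features)   # first-moment estimate of the TD semi-gradient
    v = np.zeros(n_features)   # second-moment estimate
    t = 0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            s_next, r, done = env.step()
            target = r if done else r + gamma * (phi(s_next) @ w)
            delta = target - phi(s) @ w
            grad = delta * phi(s)              # TD(0) semi-gradient direction
            t += 1
            m = beta1 * m + (1 - beta1) * grad
            v = beta2 * v + (1 - beta2) * grad ** 2
            m_hat = m / (1 - beta1 ** t)
            v_hat = v / (1 - beta2 ** t)
            w += alpha * m_hat / (np.sqrt(v_hat) + eps)
            s = s_next
    return w
```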


The Tea Time Talks have now concluded for the year, but stay tuned as we will be uploading the remaining talks in the weeks ahead. In the meantime, you can rewatch or catch up on previous talks on our YouTube playlist.
