News
Now that the 2020 Tea Time Talks are on YouTube, you can always have time for tea with Amii and the RLAI Lab! Hosted by Amii's Chief Scientific Advisor Dr. Richard S. Sutton, these 20-minute talks on technical topics are delivered by students, faculty and guests. The talks are a relaxed and informal way of hearing leaders in AI discuss future lines of research they may explore, with topics ranging from ideas just starting to take root to fully finished projects.
Week nine of the Tea Time Talks features:
To effectively perform the task of next-word prediction, long short-term memory networks (LSTMs) keep track of many types of information. Some information is directly related to the next word's identity, but some is more secondary -- for example, discourse-level features or features of downstream words. Correlates of secondary information appear in LSTM representations even though they are not part of an explicitly supervised prediction task. In contrast, reinforcement learning (RL) has found success with techniques that explicitly supervise representations to predict secondary information. In this talk, Qingfeng proposes predictive representation learning (PRL), which explicitly constrains LSTMs to encode specific predictions, such as those that might otherwise only be learned implicitly. He shows that PRL significantly improves on two strong language modeling methods, converging more quickly and performing better when data is limited. This fusion of RL ideas with LSTMs shows that explicitly encoding a simple predictive task can facilitate the search for a more effective language model.
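The core idea described above — supervising an LSTM's representation with an auxiliary prediction alongside the next-word objective — can be sketched as a combined loss over two heads sharing one hidden state. This is a minimal illustrative sketch, not the talk's actual method: the shapes, the heads, and the weighting term `lam` are all assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, target):
    # Negative log-probability of the target class.
    return -np.log(softmax(logits)[target])

# Hypothetical setup: h stands in for an LSTM hidden state;
# two linear heads read out from that shared representation.
rng = np.random.default_rng(0)
h = rng.normal(size=8)            # shared LSTM hidden state
W_lm = rng.normal(size=(20, 8))   # next-word head (vocab size 20, assumed)
W_aux = rng.normal(size=(4, 8))   # auxiliary predictive head (4 targets, assumed)

lm_loss = cross_entropy(W_lm @ h, target=3)    # primary: next-word identity
aux_loss = cross_entropy(W_aux @ h, target=1)  # secondary: explicit prediction
lam = 0.5                                      # auxiliary weight (illustrative)
total_loss = lm_loss + lam * aux_loss
```

Because both heads backpropagate into the same hidden state, the auxiliary term pressures the representation to encode the secondary prediction explicitly rather than leaving it to emerge implicitly.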
In this talk, Banafsheh introduces classical conditioning testbeds for studying the problem of state construction. These testbeds are modelled after tasks in psychology where an animal is exposed to a sequence of stimuli and has to construct an understanding of its state in order to predict what will happen next. The testbeds are proposed to study online multi-step prediction. Banafsheh provides results on the first testbed, characterizing a multitude of approaches including the common modern approaches as well as simpler methods inspired by models in animal learning.
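A classical-conditioning testbed of the kind described above can be sketched as a stimulus stream in which a conditioned stimulus (CS) reliably precedes an unconditioned stimulus (US) by a fixed interval, and the learner must predict the upcoming US online. This sketch is purely illustrative — the interval lengths, jitter, and function name are assumptions, not the testbed from the talk.

```python
import numpy as np

def trace_conditioning_stream(n_steps, isi=5, iti=20, seed=0):
    """Generate binary CS and US stimulus streams: each CS onset is
    followed by a US exactly `isi` steps later, with a jittered
    inter-trial interval between trials. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    cs = np.zeros(n_steps, dtype=int)
    us = np.zeros(n_steps, dtype=int)
    t = 0
    while t < n_steps:
        cs[t] = 1                       # conditioned stimulus onset
        if t + isi < n_steps:
            us[t + isi] = 1             # unconditioned stimulus follows
        t += iti + rng.integers(0, 5)   # jittered inter-trial interval
    return cs, us

cs, us = trace_conditioning_stream(100)
```

A learner observing only the streams step by step must construct state (e.g. a trace of how long ago the CS fired) to predict the US — which is exactly the state-construction problem these testbeds isolate.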
The Tea Time Talks have now concluded for the year, but stay tuned as we will be uploading the remaining talks in the weeks ahead. In the meantime, you can rewatch or catch up on previous talks on our YouTube playlist.
Jul 24th 2024
News
How do we get the best results when AI and human beings work together? In this episode of Approximately Correct, we’re looking into Human-In-The-Loop AI with Amii Fellow and Canada CIFAR AI Chair Matt Taylor.
Jul 22nd 2024
News
Read our monthly update on Alberta’s growing machine intelligence ecosystem and exciting opportunities to get involved.
Jul 18th 2024
News
Amii announces work with Communitech, a Waterloo Region innovation hub, to empower startup founders with the AI tools and resources they need to successfully integrate AI and build in-house capabilities. The collaboration will leverage Amii's leading AI expertise and resources and be centred around Amii's Machine Learning Exploration (ML Exploration) program.
Looking to build AI capacity? Need a speaker at your event?