Amii Papers and Presentations at ICLR 2023

Today marks the first day of the Eleventh International Conference on Learning Representations (ICLR 2023), taking place in Kigali, Rwanda from May 1 to 5.

ICLR is one of the premier conferences on representation learning, a branch of machine learning that focuses on transforming data and extracting features from it with the aim of identifying useful patterns. The conference draws in experts from around the world to present cutting-edge work with applications that extend to areas like computer vision, computational biology, gaming, robotics and more.

Amii's Fellows, Canada CIFAR AI Chairs and students are presenting dozens of posters, papers and workshops at this year's conference. Their work covers a wide variety of topics in representation learning and deep learning: everything from new prompting strategies that enable complex reasoning in large language models, to addressing challenges in weakly supervised learning, to making experience replay more sample-efficient.

For this year's conference, we challenged some of our affiliated students to explain their papers in one minute. Check out the videos below, as well as a breakdown of what Amii researchers are contributing to ICLR 2023.

(Entries marked with a * denote someone supervised by an Amii Fellow and/or Canada CIFAR AI Chair.)

In-Person Poster Presentations

Latent Variable Representation for Reinforcement Learning

Tongzheng Ren · Chenjun Xiao* · Tianjun Zhang · Na Li · Zhaoran Wang · Sujay Sanghavi · Dale Schuurmans · Bo Dai

Any-scale Balanced Samplers for Discrete Space

Haoran Sun · Bo Dai · Charles Sutton · Dale Schuurmans · Hanjun Dai

Spectral Decomposition Representation for Reinforcement Learning

Tongzheng Ren · Tianjun Zhang · Lisa Lee · Joseph E Gonzalez · Dale Schuurmans · Bo Dai

Replay Memory as An Empirical MDP: Combining Conservative Estimation with Experience Replay

Hongming Zhang · Chenjun Xiao* · Han Wang · Jun Jin · Bo Xu · Martin Müller

Least-to-Most Prompting Enables Complex Reasoning in Large Language Models

Denny Zhou · Nathanael Schaerli · Le Hou · Jason Wei · Nathan Scales · Xuezhi Wang · Dale Schuurmans · Claire Cui · Olivier Bousquet · Quoc V Le · Ed H. Chi

Self-Consistency Improves Chain of Thought Reasoning in Language Models

Xuezhi Wang · Jason Wei · Dale Schuurmans · Quoc V Le · Ed H. Chi · Sharan Narang · Aakanksha Chowdhery · Denny Zhou

TEMPERA: Test-Time Prompt Editing via Reinforcement Learning

Tianjun Zhang · Xuezhi Wang · Denny Zhou · Dale Schuurmans · Joseph E Gonzalez

Score-based Continuous-time Discrete Diffusion Models

Haoran Sun · Lijun Yu · Bo Dai · Dale Schuurmans · Hanjun Dai

Mutual Partial Label Learning with Competitive Label Noise

Yan Yan · Yuhong Guo

Optimistic Exploration with Learned Features Provably Solves Markov Decision Processes with Neural Dynamics

Sirui Zheng · Lingxiao Wang · Shuang Qiu · Zuyue Fu · Zhuoran Yang · Csaba Szepesvari · Zhaoran Wang

The In-Sample Softmax for Offline Reinforcement Learning

Chenjun Xiao* · Han Wang · Yangchen Pan · Adam White · Martha White

Efficient Conditionally Invariant Representation Learning

Roman Pogodin · Namrata Deka · Yazhe Li · Danica Sutherland · Victor Veitch · Arthur Gretton

Noise Is Not the Main Factor Behind the Gap Between SGD and Adam on Transformers, But Sign Descent Might Be

Frederik Kunstner · Jacques Chen · Jonathan Lavington · Mark Schmidt

Dichotomy of Control: Separating What You Can Control from What You Cannot

Sherry Yang · Dale Schuurmans · Pieter Abbeel · Ofir Nachum

Neural Episodic Control with State Abstraction

Zhuo Li · Derui Zhu · Yujing Hu · Xiaofei Xie · Lei Ma · Yan Zheng · Yan Song · Yingfeng Chen · Jianjun Zhao

Greedy Actor-Critic: A New Conditional Cross-Entropy Method for Policy Improvement

Samuel Neumann* · Sungsu Lim · Ajin Joseph · Yangchen Pan · Adam White · Martha White
