Most reinforcement learning methods rest on the key assumption that the transition dynamics and reward function are fixed; that is, the underlying Markov decision process (MDP) is stationary. In many practical, real-world applications, however, this assumption is clearly violated.
In this paper, the authors discuss how current methods face inherent limitations in non-stationary MDPs, so that searching for a policy that will be good for the future, unknown MDP requires rethinking the optimization paradigm. To address this problem, they develop a method that builds upon ideas from both counterfactual reasoning and curve-fitting to proactively search for a good future policy, without ever explicitly modeling the underlying non-stationarity. The effectiveness of the proposed method is demonstrated on problems motivated by real-world applications.
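At a high level, the approach combines two ingredients: counterfactual (importance-sampling) estimates of how a candidate policy would have performed on past episodes, and a regression curve fit to those estimates that is extrapolated forward to forecast the policy's future performance. The sketch below illustrates this forecast-then-optimize idea in Python; it is a minimal illustration under stated assumptions, not the authors' implementation, and the function names, toy data, and polynomial forecaster are all hypothetical choices made for exposition.

```python
# Minimal sketch of forecast-then-optimize for a non-stationary bandit-like
# problem. Not the paper's code; all names and data here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def importance_sampled_returns(pi_probs, beh_probs, returns):
    """Counterfactual (importance-sampling) estimate of what policy pi
    *would have* earned on each past episode collected under the
    behavior policy: rho_k * G_k for episode k."""
    rho = pi_probs / beh_probs          # per-episode importance weights
    return rho * returns                # unbiased estimates of pi's past returns

def forecast_next_return(past_estimates, degree=1):
    """Fit a least-squares polynomial (curve-fitting) to the sequence of
    past performance estimates and extrapolate one episode ahead."""
    k = np.arange(len(past_estimates))
    coeffs = np.polyfit(k, past_estimates, deg=degree)
    return np.polyval(coeffs, len(past_estimates))  # predicted future value

# Toy non-stationary data: returns drift upward across episodes.
episodes = 20
beh_probs = np.full(episodes, 0.5)     # behavior policy's action probabilities
returns = np.linspace(0.2, 1.0, episodes) + 0.05 * rng.standard_normal(episodes)

# Rank candidate policies by their *forecasted* future return,
# rather than by their average past return.
for pi_prob in (0.4, 0.7):
    pi_probs = np.full(episodes, pi_prob)
    estimates = importance_sampled_returns(pi_probs, beh_probs, returns)
    print(f"pi_prob={pi_prob}: forecast={forecast_next_return(estimates):.3f}")
```

Scoring candidates by the extrapolated value, rather than by their average past return, is what lets the search anticipate the drift in the MDP instead of lagging behind it.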
This paper was published at the 37th International Conference on Machine Learning (ICML 2020).