Research Post

Optimizing for the Future in Non-Stationary MDPs

Most reinforcement learning methods rest on the key assumption that the transition dynamics and reward function are fixed, i.e., that the underlying Markov decision process (MDP) is stationary. In many practical real-world applications, however, this assumption is clearly violated.

In this paper, the authors discuss how current methods have inherent limitations in non-stationary MDPs, so that searching for a policy that will perform well in the future, unknown MDP requires rethinking the optimization paradigm. To address this problem, they develop a method that combines ideas from counterfactual reasoning and curve fitting to proactively search for a good future policy, without ever explicitly modeling the underlying non-stationarity. The effectiveness of the proposed method is demonstrated on problems motivated by real-world applications.
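
To make the two ingredients concrete, the sketch below is a minimal, illustrative Python example (not the authors' code): past episodes are re-evaluated for a candidate policy with importance sampling (counterfactual reasoning), a least-squares fit over episode index extrapolates that performance one step into the future (curve fitting), and the policy is updated to maximize the forecast. The drifting bandit environment, softmax policy, window size, and finite-difference gradient are all simplifying assumptions made for brevity.

```python
# Minimal sketch, assuming a non-stationary bandit, a softmax policy, and
# finite-difference gradients; the paper itself uses analytic gradients.
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3

def softmax_probs(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def run_episode(theta, t):
    """One episode of a drifting bandit: mean rewards change with time t."""
    probs = softmax_probs(theta)
    a = rng.choice(n_actions, p=probs)
    drifting_means = np.array([0.2, 0.5 + 0.3 * np.sin(0.05 * t), 0.4])
    r = drifting_means[a] + 0.1 * rng.standard_normal()
    return a, r, probs[a]  # action, reward, behavior probability

def counterfactual_returns(theta, history):
    """Importance-sampled estimate of the return theta WOULD have obtained
    in each logged episode (counterfactual reasoning, no model of the drift)."""
    probs = softmax_probs(theta)
    return np.array([(probs[a] / p_b) * r for (a, r, p_b) in history])

def forecast_next(values):
    """Fit a degree-1 polynomial over episode index (curve fitting) and
    extrapolate performance one episode into the future."""
    t = np.arange(len(values))
    coeffs = np.polyfit(t, values, deg=1)
    return np.polyval(coeffs, len(values))

theta = np.zeros(n_actions)
history = []
for t in range(300):
    history.append(run_episode(theta, t))
    if len(history) < 20:
        continue
    # Gradient ascent on the FORECASTED future performance of the policy.
    grad, eps, lr = np.zeros_like(theta), 1e-2, 0.05
    for i in range(n_actions):
        e = np.zeros_like(theta); e[i] = eps
        up = forecast_next(counterfactual_returns(theta + e, history))
        dn = forecast_next(counterfactual_returns(theta - e, history))
        grad[i] = (up - dn) / (2 * eps)
    theta += lr * grad

print("learned action probabilities:", softmax_probs(theta))
```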

This paper was published at the 37th International Conference on Machine Learning (ICML).
