Research Post

Exponential Lower Bounds for Planning in MDPs With Linearly-Realizable Optimal Action-Value Functions


We consider the problem of local planning in fixed-horizon and discounted Markov Decision Processes (MDPs) with linear function approximation and a generative model, under the assumption that the optimal action-value function lies in the span of a feature map that is available to the planner. Previous work has left open the question of whether there exist sound planners that need only poly(H, d) queries regardless of the MDP, where H is the horizon and d is the dimensionality of the features. We answer this question in the negative: we show that any sound planner must query at least min(exp(Ω(d)), Ω(2^H)) samples in the fixed-horizon setting and exp(Ω(d)) samples in the discounted setting. We also show that for any δ > 0, the least-squares value iteration algorithm with O(H^5 d^(H+1) / δ^2) queries can compute a δ-optimal policy in the fixed-horizon setting. We discuss implications and remaining open questions.
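To make the upper-bound side of the abstract concrete, the following is a minimal sketch of least-squares value iteration (LSVI) with linear features and a generative model: at each stage, sampled Bellman backup targets are regressed onto the feature map by ridge regression, working backward from the horizon. All names here (`phi`, `sample_next_state`, `reward`, the regularizer) are illustrative assumptions, not notation from the paper, and the sample counts are not tuned to match the stated query complexity.

```python
import numpy as np

def lsvi(phi, sample_next_state, reward, states, actions, H,
         n_samples=10, reg=1e-3):
    """Sketch of least-squares value iteration with a generative model.

    phi(s, a)              -> d-dimensional feature vector
    sample_next_state(s,a) -> draws s' from the generative model
    reward(s, a)           -> immediate reward

    Returns one weight vector per stage; Q_h(s, a) = phi(s, a) . theta[h].
    """
    d = phi(states[0], actions[0]).shape[0]
    theta = [np.zeros(d) for _ in range(H + 1)]  # theta[H] stays zero

    def q(h, s, a):
        return phi(s, a) @ theta[h]

    # Backward induction: at stage h, regress sampled targets
    # r(s, a) + max_b Q_{h+1}(s', b) onto the features phi(s, a).
    for h in range(H - 1, -1, -1):
        X, y = [], []
        for s in states:
            for a in actions:
                for _ in range(n_samples):
                    s_next = sample_next_state(s, a)
                    target = reward(s, a) + max(q(h + 1, s_next, b)
                                                for b in actions)
                    X.append(phi(s, a))
                    y.append(target)
        X, y = np.asarray(X), np.asarray(y)
        # Ridge regression keeps the normal equations well conditioned.
        theta[h] = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y)
    return theta
```

With one-hot features the sketch reduces to tabular value iteration, which gives a quick sanity check; the interesting (and, per the lower bound, hard) regime is when d is much smaller than the number of state-action pairs.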
