Research Post

Learning with Good Feature Representations in Bandits and in RL with a Generative Model

The construction by Du et al. (2019) implies that even when a learner is given linear features in ℝd that approximate the rewards in a bandit with uniform error ϵ, finding an action that is optimal up to O(ϵ) can require examining essentially all actions. We use the Kiefer-Wolfowitz theorem to prove a positive result: by checking only a few actions, a learner can always find an action whose suboptimality is at most O(ϵ√d). Thus, features are useful when the approximation error is small relative to the dimensionality of the features. The idea is applied to stochastic bandits and to reinforcement learning with a generative model, where the learner has access to d-dimensional linear features that approximate the action-value functions of all policies to accuracy ϵ. For linear bandits, we prove a bound on the regret of order √(dn log(k)) + ϵn√d log(n), with k the number of actions and n the horizon. For RL, we show that approximate policy iteration can learn a policy that is optimal up to an additive error of order ϵ√d/(1−γ)^2 using d/(ϵ^2(1−γ)^4) samples from a generative model. These bounds are independent of the finer details of the features. We also investigate how the structure of the feature set impacts the tradeoff between sample complexity and estimation error.
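The "few actions" in the positive result come from a G-optimal experimental design, whose existence and properties are exactly what the Kiefer-Wolfowitz theorem guarantees: there is a distribution π over actions such that the weighted covariance V(π) = Σ_a π(a) a aᵀ satisfies max_a ‖a‖²_{V(π)⁻¹} = d, and π can be supported on few actions. Below is a minimal sketch (our own illustration, not the paper's code) of the standard Frank-Wolfe iteration for computing such a design; the function name and step-size rule are standard for D/G-optimal design but the specifics here are our assumptions.

```python
import numpy as np

def g_optimal_design(A, iters=1000):
    """Approximate a G-optimal design over the rows of A (shape (k, d)).

    Returns weights pi (a distribution over the k actions) such that
    max_a ||a||^2_{V(pi)^{-1}} approaches d, as the Kiefer-Wolfowitz
    theorem guarantees is achievable.
    """
    k, d = A.shape
    pi = np.full(k, 1.0 / k)  # start from the uniform design
    for _ in range(iters):
        # Weighted covariance V(pi) = sum_a pi(a) a a^T
        V = A.T @ (A * pi[:, None])
        Vinv = np.linalg.inv(V)
        # g[a] = ||a||^2 in the V(pi)^{-1} norm, for every action a
        g = np.einsum('ij,jk,ik->i', A, Vinv, A)
        a_star = np.argmax(g)
        # Frank-Wolfe line-search step toward the worst-covered action;
        # g.max() >= d always, and the step vanishes as g.max() -> d.
        gamma = (g[a_star] / d - 1.0) / (g[a_star] - 1.0)
        pi = (1.0 - gamma) * pi
        pi[a_star] += gamma
    return pi
```

Sampling actions according to π and fitting least squares then estimates every action's reward to within O(ϵ√d) of its true value, which is how the few-actions guarantee in the abstract is obtained.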
