Randomized Exploration in Reinforcement Learning with General Value Function Approximation


We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm as well as the optimism principle. Unlike existing upper-confidence-bound (UCB) based approaches, which are often computationally intractable, our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noise. To attain optimistic value function estimation without resorting to a UCB-style bonus, we introduce an optimistic reward sampling procedure. When the value functions can be represented by a function class F, our algorithm achieves a worst-case regret bound of Õ(poly(d_E·H)·√T), where T is the time elapsed, H is the planning horizon, and d_E is the eluder dimension of F. In the linear setting, our algorithm reduces to LSVI-PHE, a variant of RLSVI, which enjoys Õ(d³H³√T) regret. We complement the theory with an empirical evaluation across known difficult exploration tasks.
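To make the perturbation idea concrete, here is a minimal sketch (not the authors' implementation) of the two ingredients the abstract describes in a toy linear-regression setting: perturbing the training targets with i.i.d. Gaussian noise before solving a least-squares problem, and "optimistic reward sampling" by drawing several independent perturbed estimates and taking the maximum of their predictions. All data, dimensions, and noise scales below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n transitions with d-dimensional features (illustrative only).
n, d = 200, 5
Phi = rng.normal(size=(n, d))                     # feature matrix
theta_true = rng.normal(size=d)
r = Phi @ theta_true + 0.1 * rng.normal(size=n)   # observed rewards/targets

lam, sigma, M = 1.0, 0.5, 10  # ridge weight, perturbation scale, # of samples

def perturbed_lsq(Phi, r, lam, sigma, rng):
    """Ridge regression on targets perturbed with i.i.d. Gaussian noise."""
    n, d = Phi.shape
    r_tilde = r + sigma * rng.normal(size=n)        # perturb training data
    xi = sigma * rng.normal(size=d) / np.sqrt(lam)  # perturb the regularizer
    A = Phi.T @ Phi + lam * np.eye(d)
    return np.linalg.solve(A, Phi.T @ r_tilde + lam * xi)

# Optimistic reward sampling: draw M independent perturbed estimates and use
# the elementwise maximum of their predictions as the optimistic value.
thetas = [perturbed_lsq(Phi, r, lam, sigma, rng) for _ in range(M)]
x = rng.normal(size=d)  # a query state-action feature vector
optimistic_value = max(float(x @ th) for th in thetas)
```

With enough samples M, the maximum over perturbed estimates dominates the unperturbed prediction with high probability, which is what replaces the explicit UCB-style bonus.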
