Research Post
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm as well as the optimism principle. Unlike existing upper-confidence-bound (UCB) based approaches, which are often computationally intractable, our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises. To attain optimistic value function estimation without resorting to a UCB-style bonus, we introduce an optimistic reward sampling procedure. When the value functions can be represented by a function class F, our algorithm achieves a worst-case regret bound of Õ(poly(d_E H)√T), where T is the time elapsed, H is the planning horizon and d_E is the eluder dimension of F. In the linear setting, our algorithm reduces to LSVI-PHE, a variant of RLSVI, that enjoys an Õ(√(d³H³T)) regret. We complement the theory with an empirical evaluation across known difficult exploration tasks.
Feb 15th 2022
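As a rough illustration of the perturbation idea summarized in the abstract above: in the linear setting, each backward-induction step can be read as fitting several regularized least-squares solutions to targets perturbed by i.i.d. Gaussian noise, with optimism obtained by taking the maximum over the resulting estimates rather than adding a UCB-style bonus. The sketch below is a minimal assumption-laden illustration using NumPy; the function and parameter names (perturbed_lsvi_step, noise_std, num_samples) are illustrative and not taken from the paper.

```python
import numpy as np

def perturbed_lsvi_step(phi, rewards, next_values, noise_std=1.0,
                        reg=1.0, num_samples=10, rng=None):
    """One least-squares value-iteration step with perturbed targets.

    Illustrative sketch only: exploration comes from fitting several
    regularized least-squares solutions to targets perturbed by i.i.d.
    Gaussian noise, instead of adding a UCB-style bonus.

    phi         : (n, d) state-action features of past transitions
    rewards     : (n,) observed rewards
    next_values : (n,) estimated values of the observed next states
    Returns a list of num_samples weight vectors.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = phi.shape
    gram = phi.T @ phi + reg * np.eye(d)  # regularized Gram matrix
    weights = []
    for _ in range(num_samples):
        target_noise = noise_std * rng.standard_normal(n)  # i.i.d. scalar noise on the data
        prior_noise = noise_std * rng.standard_normal(d)   # perturbation of the regularizer
        targets = rewards + next_values + target_noise
        w = np.linalg.solve(gram, phi.T @ targets + np.sqrt(reg) * prior_noise)
        weights.append(w)
    return weights

def optimistic_q(phi_sa, weights):
    """Optimistic Q-value: maximum over the ensemble of perturbed fits."""
    return max(float(phi_sa @ w) for w in weights)
```

In a full algorithm, a step of this kind would be applied backwards over the planning horizon h = H, ..., 1, with actions then chosen greedily with respect to the optimistic Q-estimate.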
Research Post
Read this research paper, co-authored by Amii Fellow and Canada CIFAR AI Chair Adam White: Learning Expected Emphatic Traces for Deep RL
Feb 15th 2022
Research Post
Read this research paper, co-authored by Canada CIFAR AI Chair Kevin Leyton-Brown: The Perils of Learning Before Optimizing
Feb 14th 2022
Research Post
Read this research paper, co-authored by Amii Fellows and Canada CIFAR AI Chairs Osmar Zaïane and Lili Mou: Non-Autoregressive Translation with Layer-Wise Prediction and Deep Supervision