Research Post
Representation learning and option discovery are two of the biggest challenges in reinforcement learning (RL). Proto-value functions (PVFs) are a well-known approach for representation learning in MDPs. In this paper we address the option discovery problem by showing how PVFs implicitly define options. We do it by introducing eigenpurposes, intrinsic reward functions derived from the learned representations. The options discovered from eigenpurposes traverse the principal directions of the state space. They are useful for multiple tasks because they are discovered without taking the environment’s rewards into consideration. Moreover, different options act at different time scales, making them helpful for exploration. We demonstrate features of eigenpurposes in traditional tabular domains as well as in Atari 2600 games.
Jan 31st 2023
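The abstract above is from "A Laplacian Framework for Option Discovery in Reinforcement Learning" (Machado, Bellemare and Bowling, ICML 2017). To make the idea concrete, here is a minimal NumPy sketch of the pipeline the abstract describes, for a tabular domain: build the state-transition graph, take an eigenvector of its Laplacian as a proto-value function, turn it into an eigenpurpose (an intrinsic reward), and solve for the resulting option with value iteration. The 4x4 gridworld, the discount factor, and the helper names (idx, step, intrinsic_reward) are illustrative assumptions for this sketch, not code from the paper.

import numpy as np

# Illustrative 4x4 gridworld: states are cells, edges connect 4-neighbours.
N = 4
n_states = N * N

def idx(r, c):
    return r * N + c

# Adjacency matrix of the state-transition graph.
A = np.zeros((n_states, n_states))
for r in range(N):
    for c in range(N):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < N and 0 <= cc < N:
                A[idx(r, c), idx(rr, cc)] = 1.0

# Proto-value functions: eigenvectors of the graph Laplacian L = D - A.
D = np.diag(A.sum(axis=1))
_, eigvecs = np.linalg.eigh(D - A)  # eigenvalues in ascending order

# Eigenpurpose: intrinsic reward r(s, s') = e^T (phi(s') - phi(s)).
# With a tabular (one-hot) representation phi, this is just e[s'] - e[s].
e = eigvecs[:, 1]  # second-smallest eigenvector; a proto-value function

def intrinsic_reward(s, s_next):
    return e[s_next] - e[s]

# Deterministic gridworld dynamics for the four move actions.
actions = ((1, 0), (-1, 0), (0, 1), (0, -1))

def step(s, a):
    r, c = divmod(s, N)
    rr, cc = r + a[0], c + a[1]
    return idx(rr, cc) if 0 <= rr < N and 0 <= cc < N else s

# Value iteration on the intrinsic reward. Clipping V at zero models an
# extra "terminate" action, so the option stops in states where no move
# has positive intrinsic value.
gamma = 0.9
V = np.zeros(n_states)
for _ in range(200):
    Q = np.array([[intrinsic_reward(s, step(s, a)) + gamma * V[step(s, a)]
                   for a in actions] for s in range(n_states)])
    V = np.maximum(Q.max(axis=1), 0.0)

policy = Q.argmax(axis=1)  # greedy policy of the eigenoption
print("option terminates in states:", np.flatnonzero(V == 0.0))

Since the sign of an eigenvector is arbitrary, e and -e define two options that traverse the same principal direction of the state space in opposite ways, and sweeping over further eigenvectors yields options operating at progressively shorter time scales, which is what makes the discovered options useful for exploration.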
Research Post
Read this research paper, co-authored by Canada CIFAR AI Chair Adam White: Learning Expected Emphatic Traces for Deep RL