Research Post
A computer system and method for extending parallelized asynchronous reinforcement learning to train a neural network is described in various embodiments. A plurality of hardware processors or threads operate in coordination, each functioning as a worker agent configured to interact simultaneously with a target computing environment, compute local gradients based on a loss determination, and update global network parameters based at least on those local gradients. The neural network is trained through modifications of weighted interconnections between interconnected computing units as gradient computation proceeds across a plurality of iterations of the target computing environment. The loss determination includes at least a policy loss term (actor), a value loss term (critic), and an auxiliary control loss. Further variations adapt the neural network to include terminal state prediction and action guidance.
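As a rough illustration of the loss determination described above, the sketch below combines a policy (actor) term, a value (critic) term, and an auxiliary control term into a single per-step worker loss. The function name, the squared-error forms, and the weighting coefficients are illustrative assumptions, not the claimed method.

```python
def worker_loss(log_prob, advantage, value_pred, value_target,
                aux_pred, aux_target, value_coef=0.5, aux_coef=0.1):
    """Illustrative per-step loss for one worker agent (assumed forms).

    - policy term (actor): scaled negative log-likelihood of the taken action
    - value term (critic): squared error against the return target
    - auxiliary control term: squared error for an auxiliary prediction,
      e.g. a terminal-state prediction head (hypothetical example)
    """
    policy_loss = -log_prob * advantage            # actor term
    value_loss = (value_target - value_pred) ** 2  # critic term
    aux_loss = (aux_target - aux_pred) ** 2        # auxiliary control term
    return policy_loss + value_coef * value_loss + aux_coef * aux_loss
```

Each worker would differentiate a loss of this general shape locally and apply the resulting gradients to the shared global parameters.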
Feb 15th 2022
Research Post
Read this research paper, co-authored by Amii Fellow and Canada CIFAR AI Chair Adam White: Learning Expected Emphatic Traces for Deep RL
Feb 15th 2022
Research Post
Read this research paper, co-authored by Canada CIFAR AI Chair Kevin Leyton-Brown: The Perils of Learning Before Optimizing
Feb 14th 2022
Research Post
Read this research paper, co-authored by Amii Fellows and Canada CIFAR AI Chairs Osmar Zaïane and Lili Mou: Non-Autoregressive Translation with Layer-Wise Prediction and Deep Supervision