News
The AI Seminar is a weekly meeting at the University of Alberta where researchers interested in artificial intelligence (AI) can share their research. Presenters include both local speakers from the University of Alberta and visitors from other institutions. Topics can be related in any way to artificial intelligence, from foundational theoretical work to innovative applications of AI techniques to new fields and problems.
On Jan 20, Shibhansh Dohare, a PhD student at the University of Alberta, presented "Maintaining Plasticity in Deep Continual Learning" at the AI Seminar.
Abstract: Modern deep-learning systems are specialized to problem settings in which training occurs once and then never again, as opposed to continual-learning settings in which training occurs continually. If deep-learning systems are applied in a continual-learning setting, then it is well known that they may fail catastrophically to remember earlier examples. More fundamental, but less well known, is that they may also lose their ability to adapt to new data, a phenomenon called "loss of plasticity."
In his presentation, Dohare showed loss of plasticity using the MNIST and ImageNet datasets repurposed for continual learning as sequences of tasks. In ImageNet, binary classification performance dropped from 89% correct on an early task to 77%, roughly the level of a linear network, by the 2000th task. Such loss of plasticity occurred with a wide range of deep network architectures, optimizers, and activation functions, and was not eased by batch normalization or dropout.
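Repurposing a classification dataset as a long sequence of binary tasks can be sketched roughly as follows. This is an illustrative setup only, not the exact protocol from the talk: the function name and the choice of sampling two random classes per task are assumptions.

```python
import random

def make_binary_task_sequence(labels, num_tasks, seed=0):
    """Repurpose a multi-class dataset as a sequence of binary tasks.

    Each task pairs two distinct classes drawn at random from `labels`;
    a model is then trained on the resulting tasks one after another,
    so plasticity can be measured as accuracy on later tasks.
    """
    rng = random.Random(seed)
    classes = sorted(labels)
    tasks = []
    for _ in range(num_tasks):
        pos, neg = rng.sample(classes, 2)  # two distinct classes per task
        tasks.append((pos, neg))
    return tasks

# Example: 2000 binary tasks drawn from 1000 ImageNet-style classes
tasks = make_binary_task_sequence(range(1000), num_tasks=2000)
print(len(tasks), tasks[0])
```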
In the experiments, loss of plasticity was correlated with the proliferation of dead units, units with very large weights, and more generally with a loss of unit diversity. Loss of plasticity was substantially eased by L2 regularization, particularly when combined with weight perturbation (Shrink and Perturb). He showed that plasticity can be fully maintained by a new algorithm, called continual backpropagation, which is just like conventional backpropagation except that a small fraction of less-used units are re-initialized after each example. This continual injection of diversity appears to maintain plasticity indefinitely in deep networks.
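The re-initialization step at the heart of continual backpropagation can be sketched as below. This is a minimal illustration under simplifying assumptions, not Dohare's exact algorithm: the class name, the utility measure (a running average of |outgoing weight| × |activation|), and the maturity threshold are all stand-ins for the more careful definitions in the actual work.

```python
import numpy as np

class ContinualBackpropLayer:
    """One hidden layer with a continual-backprop-style maintenance rule:
    after each example, a small fraction of the lowest-utility mature
    units are re-initialized, continually injecting fresh diversity."""

    def __init__(self, n_in, n_hidden, replacement_rate=1e-4,
                 maturity_threshold=100, decay=0.99, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W_in = self.rng.normal(0, 1 / np.sqrt(n_in), (n_hidden, n_in))
        self.w_out = self.rng.normal(0, 1 / np.sqrt(n_hidden), n_hidden)
        self.utility = np.zeros(n_hidden)       # running usefulness estimate
        self.age = np.zeros(n_hidden, dtype=int)
        self.replacement_rate = replacement_rate
        self.maturity_threshold = maturity_threshold
        self.decay = decay
        self._to_replace = 0.0  # fractional replacements accumulated so far

    def forward(self, x):
        h = np.maximum(0.0, self.W_in @ x)      # ReLU hidden activations
        return h, self.w_out @ h

    def update_and_maybe_reinit(self, x):
        h, _ = self.forward(x)
        # Running-average utility: how much each unit contributes to output
        self.utility = (self.decay * self.utility
                        + (1 - self.decay) * np.abs(self.w_out) * np.abs(h))
        self.age += 1
        # Accumulate fractional replacements; act once a whole unit is due
        self._to_replace += self.replacement_rate * len(h)
        while self._to_replace >= 1.0:
            self._to_replace -= 1.0
            mature = np.where(self.age > self.maturity_threshold)[0]
            if len(mature) == 0:
                break
            worst = mature[np.argmin(self.utility[mature])]
            n_in = self.W_in.shape[1]
            # Re-initialize the least-used mature unit
            self.W_in[worst] = self.rng.normal(0, 1 / np.sqrt(n_in), n_in)
            self.w_out[worst] = 0.0  # zero outgoing weight limits disruption
            self.utility[worst] = 0.0
            self.age[worst] = 0

# Drive the layer with random inputs; low-utility mature units get refreshed
layer = ContinualBackpropLayer(n_in=10, n_hidden=32, replacement_rate=0.01)
data_rng = np.random.default_rng(1)
for _ in range(500):
    layer.update_and_maybe_reinit(data_rng.normal(size=10))
```

The key design choice, as described in the talk, is that only a small fraction of units is touched per example, so the network keeps what it has learned while regaining the diversity it needs to adapt. (Gradient updates to the weights would run alongside this rule; they are omitted here for brevity.)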
Watch the full presentation below:
Want to learn how you can kick-start your AI career? Find out more about Amii's Career Accelerator.
Apr 8th 2024
Amii Fellows share tips on how to make the most of your conference experience.
Mar 26th 2024
In this month's episode, Alona talks about how ChatGPT changed the public’s perception of what AI language models can do, instantly making most previous benchmarks seem out of date, and the excitement and intensity of working in a fast-moving field like AI.
Mar 18th 2024
Google.org announces new research grants to support critical AI research in Canada focused on areas such as sustainability and the responsible development of AI. The grant will provide a total of $2.7 million in grant funding to Amii, the Canadian Institute for Advanced Research (CIFAR) and the International Center of Expertise of Montreal on AI (CEIMIA).