AI Seminar Series 2023: Shibhansh Dohare

The AI Seminar is a weekly meeting at the University of Alberta where researchers interested in artificial intelligence (AI) can share their research. Presenters include both local speakers from the University of Alberta and visitors from other institutions. Topics can be related in any way to artificial intelligence, from foundational theoretical work to innovative applications of AI techniques to new fields and problems.

On Jan 20, Shibhansh Dohare, a PhD student at the University of Alberta, presented “Maintaining Plasticity in Deep Continual Learning” at the AI Seminar.

Abstract: Modern deep-learning systems are specialized to problem settings in which training occurs once and then never again, as opposed to continual-learning settings in which training occurs continually. If deep-learning systems are applied in a continual learning setting, then it is well-known that they may fail catastrophically to remember earlier examples. More fundamental, but less well known, is that they may also lose their ability to adapt to new data, a phenomenon called *loss of plasticity*.

In his presentation, Dohare showed loss of plasticity using the MNIST and ImageNet datasets repurposed for continual learning as sequences of tasks. In ImageNet, binary classification performance dropped from 89% correct on an early task to 77%, roughly the level of a linear network, by the 2,000th task. Such loss of plasticity occurred with a wide range of deep network architectures, optimizers, and activation functions, and was not eased by batch normalization or dropout.

In the experiments, loss of plasticity was correlated with the proliferation of dead units, units with very large weights, and, more generally, a loss of unit diversity. Loss of plasticity was substantially eased by L2 regularization, particularly when combined with weight perturbation (Shrink and Perturb). Dohare showed that plasticity can be fully maintained by a new algorithm, called *continual backpropagation*, which is just like conventional backpropagation except that a small fraction of less-used units are re-initialized after each example. This continual injection of diversity appears to maintain plasticity indefinitely in deep networks.
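To make the idea concrete, below is a minimal sketch of that re-initialization step in Python. It assumes a one-hidden-layer ReLU network trained with plain SGD; the utility measure (each unit's recent contribution to the output) and the hyperparameters are simplified stand-ins for illustration, not the exact formulation from Dohare's work.

```python
import numpy as np

rng = np.random.default_rng(0)

class ContinualBackpropSketch:
    """Illustrative sketch: ordinary backpropagation, plus resetting a
    small fraction of the least-used hidden units after each example.

    The utility measure and hyperparameters below are simplified
    assumptions, not the exact algorithm from Dohare's presentation.
    """

    def __init__(self, n_in, n_hidden, n_out,
                 lr=0.01, replacement_rate=1e-4, utility_decay=0.99):
        self.W1 = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, np.sqrt(2.0 / n_hidden), (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr
        self.replacement_rate = replacement_rate
        self.utility_decay = utility_decay
        self.util = np.zeros(n_hidden)   # running utility per hidden unit
        self.to_replace = 0.0            # fractional units owed for replacement

    def step(self, x, y):
        # --- ordinary backpropagation on one example (squared error) ---
        h = np.maximum(0.0, x @ self.W1 + self.b1)        # ReLU activations
        y_hat = h @ self.W2 + self.b2
        err = y_hat - y
        grad_W2 = np.outer(h, err)
        grad_h = (self.W2 @ err) * (h > 0)
        grad_W1 = np.outer(x, grad_h)
        self.W2 -= self.lr * grad_W2
        self.b2 -= self.lr * err
        self.W1 -= self.lr * grad_W1
        self.b1 -= self.lr * grad_h

        # --- running utility: each unit's contribution to the output ---
        contrib = np.abs(h) * np.abs(self.W2).sum(axis=1)
        self.util = (self.utility_decay * self.util
                     + (1 - self.utility_decay) * contrib)

        # --- re-initialize the least-used units ---
        self.to_replace += self.replacement_rate * self.util.size
        n = int(self.to_replace)
        if n > 0:
            self.to_replace -= n
            idx = np.argsort(self.util)[:n]   # lowest-utility units
            n_in = self.W1.shape[0]
            self.W1[:, idx] = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n))
            self.b1[idx] = 0.0
            self.W2[idx, :] = 0.0   # zero outgoing weights: the reset does
                                    # not perturb the network's current output
            self.util[idx] = np.median(self.util)  # crude stand-in for the
                                                   # maturity period that
                                                   # protects fresh units
        return float(err @ err)

# Example: one online update on a random regression example
net = ContinualBackpropSketch(n_in=10, n_hidden=100, n_out=1)
x, y = rng.normal(size=10), rng.normal(size=1)
loss = net.step(x, y)
```

A key design choice worth noting: a reset unit's outgoing weights start at zero, so the replacement never changes the network's current predictions; the fresh unit only begins to contribute as gradient descent grows its new weights.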

Watch the full presentation below:


Want to learn how you can kick-start your AI career? Check out Amii's Career Accelerator to find out more.

