News
The AI Seminar is a weekly meeting at the University of Alberta where researchers interested in artificial intelligence (AI) can share their research. Presenters include both local speakers from the University of Alberta and visitors from other institutions. Topics can be related in any way to artificial intelligence, from foundational theoretical work to innovative applications of AI techniques to new fields and problems.
On Jan 20, Shibhansh Dohare, a PhD student at the University of Alberta, presented “Maintaining Plasticity in Deep Continual Learning” at the AI Seminar.
Abstract: Modern deep-learning systems are specialized to problem settings in which training occurs once and then never again, as opposed to continual-learning settings in which training occurs continually. If deep-learning systems are applied in a continual-learning setting, then it is well known that they may fail catastrophically to remember earlier examples. More fundamental, but less well known, is that they may also lose their ability to adapt to new data, a phenomenon called “loss of plasticity.”
In his presentation, Dohare showed loss of plasticity using the MNIST and ImageNet datasets repurposed for continual learning as sequences of tasks. In ImageNet, binary classification performance dropped from 89% correct on an early task to 77%, about the level of a linear network, by the 2000th task. Such loss of plasticity occurred with a wide range of deep network architectures, optimizers, and activation functions, and was not eased by batch normalization or dropout.
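For a concrete picture of this setup, here is a minimal sketch of how a class-labelled dataset such as MNIST or ImageNet can be repurposed as a long sequence of binary classification tasks. The class-pairing scheme and all names below are illustrative assumptions, not the exact protocol from the talk.

```python
import random

def make_binary_task_sequence(labels, num_tasks, seed=0):
    """Yield one (class_a, class_b) pair per task.

    Each task is a binary discrimination between two freshly drawn
    classes, so the network must keep adapting as the sequence goes on.
    """
    rng = random.Random(seed)
    classes = sorted(labels)
    for _ in range(num_tasks):
        yield tuple(rng.sample(classes, 2))

# Example: 2000 binary tasks drawn from ImageNet's 1000 class labels.
for task_id, (a, b) in enumerate(make_binary_task_sequence(range(1000), 2000)):
    pass  # train on a-vs-b images, record accuracy, move to the next task
```

Because each task draws a fresh pair of classes, performance on later tasks measures how much ability to learn the network has retained, rather than how much it remembers.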
In the experiments, loss of plasticity was correlated with the proliferation of dead units, units with very large weights, and more generally with a loss of unit diversity. Loss of plasticity was substantially eased by L2 regularization, particularly when combined with weight perturbation (Shrink and Perturb). He showed that plasticity can be fully maintained by a new algorithm, called continual backpropagation, which is just like conventional backpropagation except that a small fraction of less-used units are re-initialized after each example. This continual injection of diversity appears to maintain plasticity indefinitely in deep networks.
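The selective re-initialization step at the heart of the algorithm can be sketched in a few lines. The PyTorch sketch below is a simplified illustration under stated assumptions: the utility measure here is a crude running average of absolute activation and the replacement rate is arbitrary, whereas the algorithm described in the talk uses a more refined, contribution-based utility. A toy version of the Shrink and Perturb baseline is included for comparison. All class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class ReinitializingMLP(nn.Module):
    """One-hidden-layer net whose least-used units are slowly re-initialized."""

    def __init__(self, n_in, n_hidden, n_out, replacement_rate=1e-4):
        super().__init__()
        self.fc_in = nn.Linear(n_in, n_hidden)
        self.fc_out = nn.Linear(n_hidden, n_out)
        self.replacement_rate = replacement_rate
        # Running estimate of each hidden unit's usefulness.
        self.register_buffer("utility", torch.zeros(n_hidden))
        self.to_replace = 0.0  # fractional units accumulated for replacement

    def forward(self, x):
        h = torch.relu(self.fc_in(x))
        with torch.no_grad():
            # Decayed average of absolute activation as a stand-in utility.
            self.utility.mul_(0.99).add_(0.01 * h.abs().mean(dim=0))
        return self.fc_out(h)

    @torch.no_grad()
    def reinit_least_used(self):
        """Re-initialize the least-used hidden units at a small, steady rate."""
        self.to_replace += self.replacement_rate * self.utility.numel()
        n = int(self.to_replace)
        if n == 0:
            return
        self.to_replace -= n
        idx = torch.topk(self.utility, n, largest=False).indices
        # Fresh incoming weights; zeroed outgoing weights and utility, so the
        # reset unit cannot disturb the current outputs until it learns.
        new_in = torch.empty(n, self.fc_in.in_features)
        nn.init.kaiming_uniform_(new_in)
        self.fc_in.weight[idx] = new_in
        self.fc_in.bias[idx] = 0.0
        self.fc_out.weight[:, idx] = 0.0
        self.utility[idx] = 0.0

@torch.no_grad()
def shrink_and_perturb(model, shrink=0.999, noise_std=1e-4):
    """Toy Shrink and Perturb: decay all weights slightly and add small noise."""
    for p in model.parameters():
        p.mul_(shrink).add_(noise_std * torch.randn_like(p))
```

A training loop would call reinit_least_used() after each optimizer step; zeroing the outgoing weights of a re-initialized unit keeps the reset from perturbing the network's predictions while the fresh unit starts learning.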
Watch the full presentation below:
Want to kick-start your AI career? Find out more about Amii's Career Accelerator.
Sep 27th 2023
News
A new report by Deloitte Canada on Canada’s national AI ecosystem finds that Canada tops world rankings in talent concentration, with patent growth and per-capita VC investments among the world’s highest.
Sep 25th 2023
News
Amii's Chief Scientific Advisor announces partnership with John Carmack to bring greater focus and urgency to the creation of artificial general intelligence (AGI).
Sep 21st 2023
News
On August 18, Kristen Yu, a PhD Candidate at the University of Alberta, presented “Adventures of AI Directors Early in the Development of Nightingale” at the AI Seminar.