Recently, there have been several attempts to extend Nesterov's accelerated algorithm to smooth stochastic and variance-reduced optimization. In this paper, the authors show that there is a simpler route to acceleration: applying optimistic online learning algorithms and querying the gradient oracle at the online average of the intermediate optimization iterates.
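To make the averaging trick concrete, here is a minimal sketch of the online-to-batch conversion in the spirit of Cutkosky (2019). The online learner below is plain online gradient descent, standing in for any online algorithm; the function names and step size are illustrative, not the paper's. The only change from a standard conversion is that the gradient oracle is queried at the running average rather than at the learner's own iterate.

```python
import numpy as np

def averaged_online_to_batch(grad, w0, T, lr=0.1):
    """Sketch of online-to-batch conversion with iterate averaging.

    grad: first-order oracle for the objective f
    w0:   initial point; T: number of oracle queries; lr: OGD step size
    """
    w = w0.astype(float)          # online learner's iterate w_t
    x = w0.astype(float).copy()   # running average x_t of w_1, ..., w_t
    for t in range(1, T + 1):
        x += (w - x) / t          # x_t = ((t - 1) * x_{t-1} + w_t) / t
        g = grad(x)               # query the oracle at the AVERAGE x_t
        w -= lr * g               # online learner (here: OGD) update
    return x                      # f(x_T) - f(x*) <= Regret_T / T
```

For a quick sanity check, `averaged_online_to_batch(lambda x: x, np.ones(3), 500)` drives the average toward the origin, the minimizer of the quadratic with gradient `x`.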
In particular, they tighten a recent result of Cutkosky (2019) to demonstrate theoretically that online iterate averaging results in a reduced optimization gap, independently of the algorithm involved. They show that carefully combining this technique with existing generic optimistic online learning algorithms yields the optimal accelerated rates for optimizing strongly convex and non-strongly-convex, possibly composite, objectives with deterministic as well as stochastic first-order oracles. The authors further extend this idea to variance-reduced optimization. Finally, they also provide universal algorithms that achieve the optimal rate for smooth and non-smooth composite objectives simultaneously, without further tuning, generalizing the results of Kavis et al. (2019) and solving a number of their open problems.
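The acceleration itself comes from the optimistic component. Below is a hedged illustration assuming the simplest such learner, optimistic online gradient descent with the previous gradient as its hint, plugged into the same averaging wrapper; the paper's actual algorithms use adaptive, possibly composite-aware optimistic learners, so this is a sketch of the mechanism only.

```python
import numpy as np

def optimistic_averaged(grad, w0, T, lr=0.1):
    """Sketch: optimistic OGD (hint = last gradient) + iterate averaging.

    Because consecutive averaged points x_{t-1} and x_t are close when f
    is smooth, the previous gradient is an accurate hint, and low regret
    against such hints is what translates into an accelerated rate.
    """
    z = w0.astype(float)          # "un-hinted" iterate of optimistic OGD
    w = w0.astype(float).copy()   # played iterate w_t = z_t - lr * hint
    x = w0.astype(float).copy()   # running average, where grad is queried
    for t in range(1, T + 1):
        x += (w - x) / t          # update the average with the played point
        g = grad(x)               # oracle query at the average
        z -= lr * g               # base OGD step on the received gradient
        w = z - lr * g            # optimistic step with hint m_{t+1} = g_t
    return x
```

The design point is that the wrapper is agnostic to the learner: swapping OGD for an optimistic or adaptive variant changes only the two update lines, while the averaging and the oracle-query location stay fixed.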
This paper was published at the 37th International Conference on Machine Learning (ICML 2020).