Big data and big models
Mark Schmidt explores the challenges that come with learning complicated models from large datasets. His work is mainly focused on accelerating and verifying fundamental machine learning algorithms. Mark works in the areas of optimization for machine learning, probabilistic machine learning, and computer vision applications, among others. He has published papers on gradient methods, on improving convergence speed, and on combining optimization methods. Through his work, Mark improves the speed, efficiency, and effectiveness of machine learning models. He has applied his work in computer vision to recognizing distinct objects in images, to outdoor image segmentation and depth estimation, and to image restoration and inpainting. He has also developed applications for analysing the propagation of ideas in social networks, for natural language sequence labeling, and for modeling the kinematics of DNA strands.
Mark is a Canada CIFAR AI Chair at Amii, and an Associate Professor in the Laboratory for Computational Intelligence at the University of British Columbia. He is a Faculty Fellow with ElementAI and a consultant for 1QBit. Previously, Mark was a CIFAR Senior Fellow in the Learning in Machines & Brains program, and an Alfred P. Sloan Research Fellow. He has co-authored 85 papers, which have appeared in venues such as the International Conference on Machine Learning (ICML), the Neural Information Processing Systems (NeurIPS) conference, and the International Conference on Artificial Intelligence and Statistics (AISTATS). Since beginning his appointment at the University of British Columbia in 2014, Mark has supervised and co-supervised 20 early-career researchers at the M.Sc. and Ph.D. levels. He has been a Senior Program Committee member or Area Chair for a number of international conferences, including NeurIPS, ICML, ICLR and IJCAI. In 2018, Mark was awarded the Lagrange Prize in Continuous Optimization from the Mathematical Optimization Society.