Research Post
Abstract: How do we perform efficient inference while retaining high translation quality? Existing neural machine translation models, such as the Transformer, achieve high performance, but they decode words one by one, which is inefficient. Recent non-autoregressive translation models speed up inference, but their translation quality remains inferior. In this work, we propose DSLP, a highly efficient and high-performance model for machine translation. The key insight is to train a non-autoregressive Transformer with Deep Supervision and feed additional Layer-wise Predictions. We conducted extensive experiments on four translation tasks (both directions of WMT'14 EN-DE and WMT'16 EN-RO). Results show that our approach consistently improves the BLEU scores compared with the respective base models. Specifically, our best variant outperforms the autoregressive model on three translation tasks, while being 14.8 times more efficient at inference.
Aug 8th 2022
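The abstract above names two ingredients: a supervision signal at every decoder layer (deep supervision) and feeding each layer's token predictions into the next layer (layer-wise prediction). The following is a minimal PyTorch sketch of that idea, not the authors' released code; the module and parameter names (`LayerwiseNATDecoder`, `merge`, `d_model`, and so on) are illustrative assumptions.

```python
# Minimal sketch of the DSLP idea (illustrative only, not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerwiseNATDecoder(nn.Module):
    def __init__(self, num_layers=6, d_model=512, nhead=8, vocab_size=32000):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_layers)
        )
        self.embed = nn.Embedding(vocab_size, d_model)     # target token embedding
        self.classifier = nn.Linear(d_model, vocab_size)   # shared output projection
        self.merge = nn.Linear(2 * d_model, d_model)       # mixes states with fed-back predictions

    def forward(self, x, encoder_out, targets=None):
        # x: initial decoder input (e.g., copied source embeddings), shape (batch, tgt_len, d_model)
        layer_losses = []
        logits = None
        for i, layer in enumerate(self.layers):
            if i > 0:
                # Layer-wise prediction feeding: embed the previous layer's predicted
                # tokens and mix them into this layer's input.
                pred_tokens = logits.argmax(dim=-1)
                x = self.merge(torch.cat([x, self.embed(pred_tokens)], dim=-1))
            x = layer(x, encoder_out)      # no causal mask: all positions decoded in parallel
            logits = self.classifier(x)
            if targets is not None:
                # Deep supervision: a cross-entropy loss at every decoder layer.
                layer_losses.append(F.cross_entropy(logits.transpose(1, 2), targets))
        loss = torch.stack(layer_losses).mean() if layer_losses else None
        return logits, loss
```

At inference time only the final layer's logits are used; because there is no causal mask, all target positions are predicted in a single parallel pass, which is where the speedup over autoregressive decoding comes from.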
Research Post
Read this research paper, co-authored by Canada CIFAR AI Chair Angel Chang: Learning Expected Emphatic Traces for Deep RL
Jul 22nd 2022
Research Post
Read this research paper, co-authored by Canada CIFAR AI Chair Angel Chang: D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding
Jul 7th 2022
Research Post
Read this research paper, co-authored by Fellow & Canada CIFAR AI Chair Russ Greiner: Prediction of Obsessive-Compulsive Disorder: Importance of neurobiology-aided feature design and cross-diagnosis transfer learning