Research Post
In this paper, we present an approach for creating a text corpus based on Wikipedia with exhaustive annotations of entity mentions. Wikipedia editors are expected to add hyperlinks only where they help the reader understand the content and are discouraged from adding links that offer no benefit for understanding an article. As a result, many mentions remain unlinked: mentions of popular entities (such as countries or well-known historical events), mentions of entities already linked earlier in the article, and mentions of the article's own entity. This leaves substantial potential for additional annotations that can be used for downstream NLP tasks such as relation extraction. We show that our annotations are useful for creating distantly supervised datasets for this task. Furthermore, we publish all code necessary to derive such a corpus from a raw Wikipedia dump, so that it can be reproduced by anyone.
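To make the core idea concrete, here is a minimal Python sketch (not the released pipeline code) of how unlinked occurrences of already-known entities could be turned into additional annotations: every anchor text that editors linked somewhere in the article, plus the article's own title, is re-annotated wherever else it appears. The function name annotate_known_mentions and its inputs are illustrative assumptions, not part of the paper.

```python
import re
from typing import Dict, List, Tuple

def annotate_known_mentions(
    article_title: str,
    text: str,
    existing_links: Dict[str, str],  # anchor text -> linked entity (article title)
) -> List[Tuple[int, int, str]]:
    """Return (start, end, entity) spans for every occurrence of a known anchor text."""
    # The article's own entity is almost never linked within its own article,
    # so treat the title as an additional anchor text.
    anchors = dict(existing_links)
    anchors.setdefault(article_title, article_title)

    spans: List[Tuple[int, int, str]] = []
    taken = [False] * len(text)  # avoid nesting shorter anchors inside longer matches
    # Try longer anchor texts first so "European Union" wins over "Union".
    for anchor in sorted(anchors, key=len, reverse=True):
        pattern = r"\b" + re.escape(anchor) + r"\b"
        for match in re.finditer(pattern, text):
            if any(taken[match.start():match.end()]):
                continue
            spans.append((match.start(), match.end(), anchors[anchor]))
            for i in range(match.start(), match.end()):
                taken[i] = True
    return sorted(spans)

if __name__ == "__main__":
    text = ("Canada entered the war in 1939, and Canada's role in the "
            "war grew steadily afterwards.")
    links = {"war": "World War II"}  # the only link an editor actually added
    for start, end, entity in annotate_known_mentions("Canada", text, links):
        print(f"{text[start:end]!r} -> {entity}")
```

In the actual pipeline such spans would of course be resolved against the full Wikipedia link structure rather than a single article's links; the sketch only illustrates why exhaustive annotation yields far more entity mentions than the editor-supplied hyperlinks alone.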
Feb 14th 2022
Research Post
Read this research paper, co-authored by Amii Fellows and Canada CIFAR AI Chairs Osmar Zaïane and Lili Mou: Non-Autoregressive Translation with Layer-Wise Prediction and Deep Supervision
Feb 14th 2022
Research Post
Read this research paper, co-authored by Amii Fellow and Canada CIFAR AI Chair Lili Mou: Search and Learn: Improving Semantic Coverage for Data-to-Text Generation
Feb 14th 2022
Research Post
Read this research paper, co-authored by Amii Fellow and Canada CIFAR AI Chair Lili Mou: Generalized Equivariance and Preferential Labeling for GNN Node Classification