Research Post
Word vector models learn about semantics from text corpora. Convolutional Neural Networks (CNNs) can learn about semantics from images. At the most abstract level, some of the information in these models must be shared, as they model the same real-world phenomena. Here we employ techniques previously used to detect semantic representations in the human brain to detect semantic representations in CNNs. We show the accumulation of semantic information across the layers of a CNN, and discover that, for misclassified images, the correct class can be recovered in intermediate layers.
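As a rough illustration of the kind of brain-decoding-style analysis described above, the sketch below learns a ridge-regression map from a CNN layer's activations to word vectors and scores it with a leave-two-out "2 vs 2" test, a standard evaluation in the brain-decoding literature. This is a minimal sketch, not the paper's exact pipeline: the function names are ours, and the activations and word vectors are random stand-ins; in practice they would come from a pretrained CNN and a word-vector model such as GloVe or word2vec.

```python
# Minimal sketch: probe a CNN layer for semantic information by mapping its
# activations to word vectors and scoring the map with a 2 vs 2 test.
# Data here are random placeholders; real inputs would be CNN layer
# activations per image and the word vector of each image's class label.

import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge


def two_vs_two_accuracy(layer_acts, word_vecs, alpha=1.0):
    """Leave-two-out 2 vs 2 test: for every pair of held-out images, train a
    ridge map on the rest and check whether the predicted word vectors are
    closer (by correlation distance) to their own classes' vectors than to
    the swapped assignment. Chance level is 0.5."""
    n = layer_acts.shape[0]
    correct, total = 0, 0
    for i, j in combinations(range(n), 2):
        train = [k for k in range(n) if k not in (i, j)]
        model = Ridge(alpha=alpha).fit(layer_acts[train], word_vecs[train])
        pred_i, pred_j = model.predict(layer_acts[[i, j]])

        # correlation distance: 1 - Pearson r
        dist = lambda a, b: 1.0 - np.corrcoef(a, b)[0, 1]
        matched = dist(pred_i, word_vecs[i]) + dist(pred_j, word_vecs[j])
        swapped = dist(pred_i, word_vecs[j]) + dist(pred_j, word_vecs[i])

        correct += matched < swapped
        total += 1
    return correct / total


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_images, act_dim, vec_dim = 40, 512, 300
    layer_acts = rng.standard_normal((n_images, act_dim))  # stand-in CNN activations
    word_vecs = rng.standard_normal((n_images, vec_dim))   # stand-in word vectors
    print(f"2 vs 2 accuracy: {two_vs_two_accuracy(layer_acts, word_vecs):.2f}")
```

Running this analysis layer by layer is one way to trace where semantic information accumulates in the network, and applying it to misclassified images is how one could check whether the correct class is still recoverable from intermediate layers.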
Acknowledgments
This research was supported by CIFAR (Canadian Institute for Advanced Research) and NSERC (Natural Sciences and Engineering Research Council). This research was enabled in part by support provided by WestGrid and Compute Canada.