Research Post
The representations generated by many models of language (word embeddings, recurrent neural networks, and transformers) correlate with brain activity recorded while people read. However, these decoding results are usually based on the brain's response to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short-term memory) language model, trained largely on semantically and syntactically intact language, represent a language sample with degraded semantic or syntactic information? Does the LSTM representation still resemble the brain's response? We found that, even for some kinds of nonsensical language, there is a statistically significant relationship between brain activity and the representations of an LSTM. This indicates that, at least in some instances, LSTMs and the human brain handle nonsensical input similarly.
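A common way to test whether model representations "resemble" brain activity is an encoding model: fit a regularized linear map from model features to recorded responses, then score the fit with Pearson correlation. The sketch below illustrates that general approach with synthetic stand-ins; the feature matrix, "brain" responses, and dimensions are all hypothetical and are not the paper's actual data or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (hypothetical): rows are words, columns are
# model features / recording channels. Not real LSTM states or brain data.
n_words, n_feat, n_vox = 200, 50, 10
X = rng.standard_normal((n_words, n_feat))           # stand-in for LSTM hidden states
W_true = rng.standard_normal((n_feat, n_vox))
Y = X @ W_true + 0.1 * rng.standard_normal((n_words, n_vox))  # synthetic responses

# Ridge regression, closed form: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)
Y_hat = X @ W

# Per-channel Pearson correlation between predicted and observed responses
Yc, Yhc = Y - Y.mean(0), Y_hat - Y_hat.mean(0)
r = (Yc * Yhc).sum(0) / (np.linalg.norm(Yc, axis=0) * np.linalg.norm(Yhc, axis=0))
print(r.mean())
```

In practice, significance of such correlations is usually assessed with permutation tests against shuffled stimuli, which is how a "statistically significant relationship" between representations and brain activity would be established.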
Feb 14th 2022
Research Post
Read this research paper, co-authored by Amii Fellows and Canada CIFAR AI Chairs Osmar Zaïane and Lili Mou: Non-Autoregressive Translation with Layer-Wise Prediction and Deep Supervision
Feb 14th 2022
Research Post
Read this research paper, co-authored by Amii Fellow and Canada CIFAR AI Chair Lili Mou: Search and Learn: Improving Semantic Coverage for Data-to-Text Generation
Feb 14th 2022
Research Post
Read this research paper, co-authored by Amii Fellow and Canada CIFAR AI Chair Lili Mou: Generalized Equivariance and Preferential Labeling for GNN Node Classification