
What Excited Us at AAAI 2024

Credit: Payam Mousavi

The 38th Annual AAAI Conference on Artificial Intelligence wrapped up last week and offered a wide-ranging look at the latest advancements in artificial intelligence, including achievements in generative AI, explainable AI, deep neural networks and more.

With the sheer number of topics and presentations at this year's conference, keeping up can be a bit overwhelming. Luckily, technical staff from Amii's Advanced Tech, Industry and Training teams were in Vancouver for the event and are sharing some of the most intriguing and exciting developments they saw. Check them out below:

Justin Stevens: Educational Advancements in AI

Machine Learning Educator, Training Team

In parallel with the conference, the Educational Advancements in AI Symposium showcased the latest work in teaching AI and using AI to improve the work of educators. The symposium featured many different tracks, including using AI for education in the classroom and resources for teaching AI in K-12. It covered methods to teach AI in a more interdisciplinary and inclusive way, along with mentored undergraduate projects on using AI to improve accessibility in communications.

One of the highlights of the symposium was Michael Littman and Charles Isbell receiving the Outstanding Educator Award. The pair spoke about their journey of creating massive open online courses, incorporating comedy and music into teaching, and the responsibility of educators to teach AI now that it has become such a massively growing field.

As a recipient of the AAAI/ACM SIGAI New & Future AI Educator Award, I presented my idea on creating a database of analogies to make teaching AI more accessible to students from many different backgrounds at a Blue Sky Idea session on Saturday, February 24th. The event included presentations from the three other award recipients: Rachel Lugo (McLean School), Shira Wein (Georgetown), and William Agnew (Carnegie Mellon University).

I also co-facilitated the round table discussion session, AI Literacy and CS in the Age of AI: Guidance for Primary and Secondary Education, along with Charlotte Dungan from code.org. This was a unique opportunity to answer key questions about how AI is being adapted to the K-12 classroom and the importance of AI literacy. We discussed methods to avoid students becoming overly reliant on AI technologies and ensure students can access AI tools equitably.

I’m very grateful for the amazing community around AI education that the chairs, Marion Neumann and Stephanie Rosenthal, created at the symposium, and look forward to hopefully returning next year!

Payam Mousavi: AI for Science

Applied Research Scientist, Advanced Tech

Steering clear of the generative AI buzz, I redirected my focus towards the less mainstream yet equally fascinating applications of AI in the sciences and engineering, as well as cooperative multi-agent systems. I found the series of tutorials by Peter Frazier and Jake Gardner on Bayesian optimization applied to material design very informative and practical. They both led hands-on sessions on using the Python packages BoTorch and GPyTorch to apply Gaussian processes (GPs) and Bayesian optimization to multi-objective material-design problems.
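
For a flavour of what those hands-on sessions looked like, here is a minimal single-objective sketch of a BoTorch/GPyTorch optimization loop. The toy objective is a hypothetical stand-in for an expensive material-property measurement; the tutorials extended this pattern to multi-objective acquisition functions.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy stand-in for an expensive material-property measurement (hypothetical)
def measure(x):
    return -((x - 0.3) ** 2).sum(dim=-1, keepdim=True)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
train_x = torch.rand(5, 2, dtype=torch.double)    # initial designs
train_y = measure(train_x)

for _ in range(10):
    gp = SingleTaskGP(train_x, train_y)           # GP surrogate of the objective
    fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
    acq = ExpectedImprovement(gp, best_f=train_y.max())
    candidate, _ = optimize_acqf(acq, bounds=bounds, q=1,
                                 num_restarts=5, raw_samples=32)
    train_x = torch.cat([train_x, candidate])     # evaluate and augment the data
    train_y = torch.cat([train_y, measure(candidate)])
```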

On a somewhat similar track, Grant Rotskoff presented an impressive (and mathematically dense!) introduction to Physics-Informed Neural Networks (PINNs). Initially, the field ambitiously aimed to train large, flexible models end-to-end solely using data. It has now circled back to leveraging centuries of scientific knowledge as inductive biases in training models for scientific domains such as fluid dynamics and physics broadly. This is a fast-growing field with diverse applications in multiple critical industries.
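
To make the inductive-bias point concrete, a physics-informed network bakes the governing equation directly into the loss. Here is a toy sketch of my own (not from the tutorial) that fits the ODE u' = -u with u(0) = 1, whose exact solution is exp(-x), using no labelled data at all:

```python
import torch

# u(x) approximated by a small network; the ODE u' = -u is the inductive bias
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)     # collocation points in [0, 1]
    u = net(x)
    (du_dx,) = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)
    physics_loss = (du_dx + u).pow(2).mean()      # residual of u' + u = 0
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1
    loss = physics_loss + boundary_loss
    opt.zero_grad(); loss.backward(); opt.step()
```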

Another truly inspiring talk was given by Michael Bronstein. In his invited talk, Geometric ML: from Euclid to drug design, Michael started by giving a historical perspective on using physics and geometry to motivate and classify most of today’s neural network architectures in what he refers to as geometric deep learning. Inspired by ideas from differential geometry, he proposed a new class of physics-inspired graph neural networks that are suitable for many applications requiring a level of flexibility not afforded by the traditional discrete graphs.

Finally, I particularly enjoyed the full-day, star-studded workshop on Cooperative Multi-Agent Systems Decision-making and Learning. Giovanni Beltrame's talk on the role of hierarchies in scaling multi-agent systems was a highlight. He elegantly demonstrated through numerical simulations and experiments how hierarchies, often observed in nature, can mitigate complexities and enable a larger number of agents to cooperate more efficiently.

Sahir: Bridge Programs at AAAI

Machine Learning Scientist, Startups Team

Of the nine bridge programs hosted at this year’s conference, I found two particularly intriguing. The first, Collaborative AI and Modeling of Humans, co-organized by Amii Fellow and Canada CIFAR AI Chair Matt Taylor, hosted a series of talks, panels, and poster sessions on its theme. One talk that piqued my interest was on human-AI synergy, in which Eric Horvitz explained the various ways humans and AI can collaborate to outperform either AI or humans alone. While the applications for human-AI collaboration are endless, one of the most striking examples Eric mentioned was in healthcare, where a system built on human-AI collaboration predicted cancer with an error rate of roughly 0%, compared to 3.4% for humans alone.

This example comes from their work on Learning to Complement Humans, which tackles the CAMELYON16 challenge for detecting cancer in lymph nodes. The same work also explored the scientific problem of classifying galaxies as part of the Galaxy Zoo project.

The other interesting bridge program was AI for Financial Services. The keynote by Greg Mori, Challenges and Opportunities in Machine Learning for Financial Services, covered a wide range of machine learning applications in finance and showcased some of the AI applications that Borealis AI (RBC’s research institute) has been working on. A few examples stood out: Aiden, their AI agent for electronic trading, uses deep reinforcement learning to dynamically execute large orders in capital markets; NOMI Forecast, their money-management tool, helps predict a user’s upcoming cash-flow activity; and the Asynchronous Temporal Model (ATOM), their most recent project, is an attempt to build a foundation model from large-scale transactional data, tailored to applications in financial services. Lastly, Bo An presented a talk on how RL is currently applied in quantitative trading and showcased their open-source project, TradeMaster.

David Reid: Explainable AI

Lead Machine Learning Scientist, Industry Team

As more institutions rely on AI to automate or inform decision-making, AI systems are simultaneously becoming less transparent, which makes it harder to correct them when they make erroneous decisions. A practical example of the consequences is the recent ban San Francisco imposed on the self-driving taxi company Cruise, due in part to repeated interference with emergency services.

Explainable AI plays a significant role in shortening these development loops and resolving issues faster. At AAAI, Raquel Urtasun from Waabi showcased how their AI systems predict and visualize the probable trajectories of all identified actors, allowing the system's understanding of its surroundings to be scrutinized in the event of an accident.

A framework that can be applied more generally to reinforcement learning algorithms, including those used in self-driving cars, was presented by Mark Riedl from Georgia Institute of Technology. In his team’s work on human-centered intelligence, they trained a model to provide natural language explanations of the actions taken by human and AI agents completing tasks based on the influence of objects in their surroundings.

A completely different approach to assessing the competence of AI in environments not seen during training came from the Kinds of Intelligence team at the University of Cambridge. The team uses capability-based analysis to rate how demanding a task is along dimensions such as navigation, perception, and planning, and then evaluates how AI agents perform against those same dimensions. By characterizing an unseen task in terms of its required capabilities, a model can then predict with high accuracy how likely an agent is to succeed at the task.
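
As a toy illustration of that last step (with entirely hypothetical data and capability names, not the Cambridge team's actual method), one can fit a simple predictor that maps a task's capability demands to an agent's probability of success:

```python
import torch

# Each task is a vector of required capability levels (navigation, perception,
# planning); the label records whether a given agent succeeded. All hypothetical.
tasks = torch.rand(200, 3)
agent_skill = torch.tensor([0.7, 0.5, 0.6])
success = (tasks < agent_skill).all(dim=1).float()  # succeeds if every demand is met

w = torch.zeros(3, requires_grad=True)              # logistic-regression weights
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([w, b], lr=0.1)
for _ in range(500):
    p = torch.sigmoid(tasks @ w + b)
    loss = torch.nn.functional.binary_cross_entropy(p, success)
    opt.zero_grad(); loss.backward(); opt.step()

# Predict success probability on an unseen task from its capability profile
print(torch.sigmoid(torch.tensor([0.4, 0.3, 0.5]) @ w + b).item())
```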

Atefeh Shahroudnejad: Generative AI

Machine Learning Scientist, Advanced Tech Team

As Generative AI continues to captivate the world by pushing the boundaries of innovation, this year's AAAI buzzed with excitement and energy, centered around the remarkable strides in Gen-AI advancements across different domains.

Raquel Urtasun, CEO and Founder of Waabi, unveiled the potential of Generative AI in revolutionizing every facet of autonomous vehicles — including autonomy, simulation, mapping, hardware, commercial services, and safety. Urtasun argued the integration of Generative AI promises a leap forward—enabling faster, safer, and more scalable advancements in self-driving technology.

Yann LeCun, VP and Chief AI Scientist at Meta, brought attention to the critical absence of key human intelligence components — memory, reasoning, and hierarchical planning — in existing Generative AI models, revealing a substantial gap from achieving human-level intelligence. Introducing an innovative solution, he presented an Objective-Driven AI architecture inspired by cognitive processes, capable of learning, reasoning, and planning while acknowledging the inherent unpredictability of the world. He addressed the limitations of generative world models for images and videos due to uncertainties and proposed the Joint Embedding Predictive Architecture (JEPA) for world models. JEPA, by predicting a representation of the future using past and current state representations, offers a promising approach by leveraging the power of embedding space to simplify complex tasks.
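
To make that idea more concrete, here is a heavily simplified sketch of a JEPA-style objective (the modules and shapes are my own assumptions, not Meta's implementation). Because the prediction target lives in embedding space rather than pixel space, the representation is free to discard unpredictable low-level detail:

```python
import copy
import torch

encoder = torch.nn.Linear(128, 64)        # context encoder: state -> representation
target_encoder = copy.deepcopy(encoder)   # slow-moving copy, updated by EMA only
predictor = torch.nn.Linear(64, 64)       # predicts the future representation

def jepa_loss(state_now, state_future):
    z_now = encoder(state_now)
    with torch.no_grad():
        z_future = target_encoder(state_future)  # target is an embedding, not pixels
    return (predictor(z_now) - z_future).pow(2).mean()

def ema_update(tau=0.99):
    # Keep the target encoder a moving average of the online encoder
    for p, tp in zip(encoder.parameters(), target_encoder.parameters()):
        tp.data.mul_(tau).add_(p.data, alpha=1 - tau)
```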

Pascale Fung, Director of CAiRE at HKUST, delved into the pervasive issue of hallucination, a critical challenge within Generative AI models. Hallucination, characterized by the generation of inaccurate, misleading, or inconsistent responses divorced from actual data or factual information, was explored for its detrimental impacts ranging from discriminatory bias and abusive language to misinformation and privacy concerns. She outlined three levels of hallucination: factual inconsistency, tangentiality, and query inconsistency, where responses neither answer the question nor recall relevant knowledge. The presentation spotlighted the dual sources of hallucination: data-induced and training/inference-induced, along with an insightful discussion on contemporary methods such as Self-Reflection and Knowledge Grounding to mitigate these challenges.

Taher Abbasi: Robust Deep Neural Networks

Machine Learning Scientist, Advanced Tech Team

Nowadays, deep neural networks (DNNs) are everywhere: from robotics to defense to security to voice assistants to medical imaging. It is good news that they are so powerful, but we need more than just superior performance: it is vital to make sure that DNNs are safe and trustworthy under both adversarial attacks and non-adversarial perturbations, including random noise and semantic perturbations. To address this need, AAAI-24 featured a special track on Safe and Robust Artificial Intelligence (SRAI), focused on the theory and practice of safety and robustness in AI-based systems.

To frame the insights and discussion from the track, let's walk through a practical example of how robustness plays out in real-world scenarios.

In the classification context, each class occupies a region in the latent space. A DNN is considered robust if adding a considerable amount of noise to a data sample does not push it out of the boundaries of its corresponding class. Against less robust DNNs, an attacker is therefore interested in finding the minimal perturbation that shifts a data sample out of its class region.
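
As a concrete illustration (a textbook one-step attack, not something presented in the track), the fast gradient sign method nudges the input in the direction that most increases the loss:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One-step adversarial perturbation: push x toward the class boundary."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Stronger attacks search for the smallest eps that flips the prediction
    return (x + eps * x.grad.sign()).detach()
```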

This means that to achieve robust DNNs, we first need to be able to measure a DNN's robustness. We can do so by finding the maximum robustness radius, but that is an NP-complete problem; finding a non-trivial lower bound on the maximum robustness radius r is a practical alternative.
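
A standard formalization of this radius (common in the literature, not specific to any one talk): for a classifier f and input x,

```latex
r(x) = \min_{\delta} \|\delta\|_{p} \quad \text{subject to} \quad f(x + \delta) \neq f(x)
```

Certification methods return a lower bound on r(x); any perturbation smaller than that bound provably cannot change the prediction.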

Tsui-Wei Weng et al. delivered an insightful talk on the history of attempts to quantify and solve this problem, introducing the CLEVER score for evaluating DNN robustness against adversarial attacks. Fast-Lin, CROWN, CNN-Cert, and POPQORN are other algorithms for verifying the robustness of DNNs against adversarial attacks, while Semantify-NN certifies robustness to semantic variations like rotations, lighting, and so on.

Measuring robustness is only the first step. We trust DNNs more when we understand them, especially in high-stakes domains like healthcare or autonomous vehicles. On one hand, we have linear models, which are less accurate but more interpretable; on the other, DNNs are more accurate but less interpretable. So the question is: can we have the best of both worlds?

Apparently yes! Net-Dissect and MILAN are earlier efforts in response to this need. The latest is from Oikarinen et al., who offered CLIP-Dissect as the most computationally efficient interpretability tool for DNNs. In a separate effort, Oikarinen et al. proposed Label-free CBM, a DNN that is interpretable by design.
