
Overcoming challenges in healthcare AI: successfully deploying machine-learned models in medicine

The use of artificial intelligence and machine learning is a fast-growing area in healthcare. However, given the stakes at play, it's also one that requires care and consideration, even as the technology develops at a rapid pace. A series of three recent papers in the Canadian Medical Association Journal, two co-authored by Amii Fellow and Canada CIFAR AI Chair Russ Greiner (Professor, University of Alberta), examines some of the potential challenges to adopting AI in healthcare, as well as possible solutions and approaches to addressing them.

This article is the first in a three-part series on those challenges, starting with a focus on the specific stumbling blocks that come with developing AI to work alongside health care providers. The next part will explore the issues of explainability and trust when it comes to medical AI tools, while the third will look at some real-world lessons from integrating machine learning into a working hospital.

From Greiner’s perspective, the question is not if machine learning will make a big impact in healthcare -- it already has. The number of papers and studies involving AI is rising exponentially, he says, and there’s no sign of it slowing down.

“The positive impact is tremendous. AI tools can figure out effective ways to diagnose diseases, to manage diseases, to treat diseases,” Greiner says. “This is not all just a possibility, but has already started to happen.”

However, he says there is also a reluctance by some health care professionals to fully embrace the potential benefits of AI tools -- not at all surprising, given the stakes involved.

“If I make a mistake in designing a video game, I might lose some money. But if I make a mistake in diagnosing or treating a patient, that could be fatal,” he says. “So I’m glad the medical community is conservative, but it does mean that new ideas take a while to come into the field.”

It is important to be aware of some of the specific challenges that AI diagnostic and treatment tools will face when deployed in the field. In one of the papers, Greiner and his co-authors outline two major obstacles: out-of-distribution errors and incorrect feature attribution. Both issues can cause machine learning tools to produce models that might not work in real-world settings.

Focusing on the best data

A machine learning agent is in many ways like a blank slate; it only knows what it has observed. Out-of-distribution errors happen when an AI is asked to interpret something it hasn’t learned about. A model trained on thousands of lung x-rays will probably do a good job interpreting a new lung image, but not if it is asked to analyze an image of a heart. Worse, it might not even know it has made a mistake. How could it? Having only ever seen lungs, it will quite naturally try to interpret anything it sees as a lung.

More subtle out-of-distribution errors are also a problem. An AI model might be trained on crystal-clear lung x-rays while it is being developed. But in the real world of medicine, x-rays can be overexposed or blurry, which can lead to mistaken diagnoses if care isn’t taken with how the model handles imperfect data. Other issues arise when the AI encounters diseases it hasn’t seen before and tries to fit them into the limited experience it has.
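
To make this concrete, the short Python sketch below shows one common form of safeguard against out-of-distribution inputs: measuring how far a new sample's features sit from the training data and flagging anything unfamiliar for human review. It is an illustration only, not code from the papers; the feature vectors, the distance measure and the threshold are all placeholder assumptions.

```python
# A minimal sketch (not from the papers) of one common out-of-distribution
# check: flag inputs whose features lie far from the training data,
# here measured with a Mahalanobis distance. All data are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are feature vectors extracted from training lung x-rays.
train_features = rng.normal(loc=0.0, scale=1.0, size=(500, 16))

mean = train_features.mean(axis=0)
cov = np.cov(train_features, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))

def mahalanobis_distance(x: np.ndarray) -> float:
    """Distance of one feature vector from the training distribution."""
    diff = x - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

# Threshold chosen from the training data itself (e.g., 99th percentile).
train_distances = np.array([mahalanobis_distance(f) for f in train_features])
threshold = np.percentile(train_distances, 99)

# A "heart image" whose features look nothing like the training lungs.
unfamiliar_sample = rng.normal(loc=5.0, scale=1.0, size=16)

if mahalanobis_distance(unfamiliar_sample) > threshold:
    print("Input looks out-of-distribution: route to a human reader.")
else:
    print("Input resembles the training data: prediction may be usable.")
```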

A similar problem is incorrect feature attribution. Machine learning often finds the “simplest pattern that explains the observed data,” Greiner writes. That can lead a tool to zero in on certain elements while ignoring important information. As an example, imagine a model trained on a set of cardiac images pulled from multiple hospitals, one of which is a centre that specializes in a certain disease. Since that specialized hospital is likely to have a higher proportion of images showing the disease, the AI could learn to make its diagnosis based mostly on which hospital the image came from, while ignoring the details of the specific heart. While such a model will look accurate on that dataset, it won’t generalize.
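
The toy example below illustrates this failure mode with entirely synthetic data (the "hospital ID" feature and the numbers are invented for illustration, not drawn from the paper): a model that leans on which hospital an image came from looks accurate on its own data but falls apart at a new site where that shortcut no longer holds.

```python
# A toy illustration (hypothetical data, not from the paper) of incorrect
# feature attribution: a model that leans on a "which hospital" feature
# looks accurate in-house but fails when that shortcut no longer holds.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# The true disease signal is weak; hospital ID is strongly correlated with
# the label in training because one site specializes in the disease.
disease = rng.integers(0, 2, size=n)
cardiac_measure = disease + rng.normal(0, 2.0, size=n)              # weak real signal
hospital_id = np.where(rng.random(n) < 0.9, disease, 1 - disease)   # strong shortcut

X_train = np.column_stack([cardiac_measure, hospital_id])
model = LogisticRegression().fit(X_train, disease)
print("In-distribution accuracy:", model.score(X_train, disease))

# At a new hospital the shortcut disappears: every image has the same ID.
disease_new = rng.integers(0, 2, size=n)
cardiac_new = disease_new + rng.normal(0, 2.0, size=n)
X_new = np.column_stack([cardiac_new, np.zeros(n)])
print("New-hospital accuracy:", model.score(X_new, disease_new))
```

Running a sketch like this typically shows a high accuracy on the original data and a noticeably lower score at the new hospital, which is the gap between fitting a specific dataset and genuinely learning the disease.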

Better training means better results

While these are serious issues, Greiner says they are not insurmountable. Being careful about how AI tools are trained reduces the chance of these kinds of errors. He notes that most medical datasets are very small and might focus on one geographic location or a single demographic, which makes these problems more likely. A diverse dataset is a good start. And if that isn’t possible, then researchers and doctors need to be aware that the tool might only work on the narrow set of cases it was trained on.

“If you want it to work in Edmonton and Jakarta and Moscow, you need to train it on data from those places,” he says.

“Medical history is full of data taken from Caucasian men and then applied to Black patients and Asian patients, or to women. While the trained model might still work, there is a good chance that it will not.”
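
One practical way to act on that warning, sketched below with made-up numbers, is to report a model's performance separately for each site or demographic group rather than as a single pooled figure, so a population the model was never trained on stands out immediately. The site labels and results here are invented for illustration, not taken from the papers.

```python
# A small sketch (illustrative, synthetic data) of checking whether a
# trained model holds up across sites or groups instead of trusting one
# pooled accuracy number. Group labels and error rates are made up.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

groups = np.array(["site_A"] * 300 + ["site_B"] * 300 + ["site_C"] * 100)
y_true = rng.integers(0, 2, size=groups.size)

# Pretend the model does well on the sites it was trained on (A, B)
# and poorly on an unseen site (C).
y_pred = y_true.copy()
flip = (groups == "site_C") & (rng.random(groups.size) < 0.4)
y_pred[flip] = 1 - y_pred[flip]

for g in np.unique(groups):
    mask = groups == g
    print(g, "accuracy:", round(accuracy_score(y_true[mask], y_pred[mask]), 3))
```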

Other options for increasing the usability of AI in healthcare include devising tests specifically designed to sniff out out-of-distribution errors or incorrect feature attribution before the tool reaches the real world. Explainability, where the reasoning behind an AI’s decision is laid out in a way humans can interpret, can also be an important part of the equation, although it is not without its own challenges, as will be discussed in the next part of this series.
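
As one example of what such a test might look like (a sketch built on the same hypothetical hospital-ID scenario as above, not a method from the papers), permutation importance measures how much a trained model's performance depends on each input feature, which can expose a model that is leaning on where an image was taken rather than what it shows.

```python
# One possible pre-deployment audit (a sketch, not the authors' method):
# permutation importance shows how much each input feature drives the
# model's performance, which can expose reliance on a spurious feature
# such as the hospital an image came from. Data and features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
n = 1500
disease = rng.integers(0, 2, size=n)
cardiac_measure = disease + rng.normal(0, 2.0, size=n)              # weak real signal
hospital_id = np.where(rng.random(n) < 0.9, disease, 1 - disease)   # spurious shortcut
X = np.column_stack([cardiac_measure, hospital_id])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, disease)
# For brevity the audit runs on the same synthetic data used for training.
result = permutation_importance(model, X, disease, n_repeats=10, random_state=0)

for name, importance in zip(["cardiac_measure", "hospital_id"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
# A large importance for hospital_id would be a red flag that the model is
# leaning on where the image was taken rather than what it shows.
```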


Learn how Amii advances world-leading artificial intelligence and machine learning research: visit our Research page.
