News

Meet The Fellows: Xingyu Li

Learn more about the research and work of Xingyu Li, one of the latest Fellows to join Amii’s team of world-class researchers. Xingyu Li is an assistant professor in the Department of Electrical and Computer Engineering at the University of Alberta.

Li's research focuses on computer vision, data analytics and health informatics. Her research outcomes have contributed to projects involving computational medical imaging, cranial implant design, genomic analysis for COVID-19, and security in deep learning.

Check out her conversation with Adam White, Amii’s Director of Scientific Operations, where she talks about her groundbreaking work in visual anomaly detection, how to use AI for healthcare while maintaining patient privacy, and her thoughts on the future of AI models in medicine.

[This transcript has been edited for length and clarity. To see the expanded conversation, watch the video above.]


Adam White:

Welcome, Xingyu. It's really nice to have you here today. I'm really excited to talk about your research.

Xingyu Li:

Thank you very much. Thank you for having me here.

Adam White:

Why don't you tell us a little bit about your research? Give us the 30-second elevator pitch about what you work on.

Xingyu Li:

My current research mainly focuses on data-efficient learning, especially learning good, robust representations.

We know that data scarcity is a general issue, especially in medical imaging and medical data analysis. So learning good, robust representations is more challenging than in traditional supervised learning, or in unsupervised learning with big data.

Right now, I'm especially focused on anomaly detection, which naturally extends to open-set problems. I think it's a good way to pave the road toward processing data in general, real-world cases.

Adam White:

What inspired you to go down this path of research? Like, your particular application area, what drew you in?

Xingyu Li:

Okay, so actually, it's a long story.

During my PhD, my research focused on medical image analysis. We found that it is not easy to gather a lot of data, which makes many things very difficult, especially since we first need to approach the doctors. We get the data, but the [amount of] data itself is very small.

We found that data scarcity is an issue. That's what motivated us to ask, "Okay, what can we do?"

I found that if I focus only on medical image analysis and medical data analysis, a lot of issues cannot be solved. Rather, we need to go deep into the fundamental problems of machine learning. So I transitioned my focus to core machine learning, representation learning, to tackle the problems at their root and then adapt whatever I discover to medical image analysis.

That's why my focus is more on methodology. But I'm still very open to collaborations on medical data analysis as well as healthcare data analysis.

Adam White:

Can you tell me about some of the challenges of collecting data in medical applications, compared to what we usually see in machine learning, where we get data from images on the internet or from video games, like we do in reinforcement learning?

What's it like getting data in the medical analysis world?

Xingyu Li:

Privacy. This is the first thing.

When we collect data, we need to obtain consent from the patient. That's the first step. The second thing is that some of the data we collect is very noisy, so we need to process it into some kind of usable format. The third thing, for medical data, is that compared to natural images, which are usually captured by a camera in RGB format, there are a lot of modalities. And different organs each have their own characteristics.

Of course, we can mix them together, but then we need to tackle the domain issues. There are a lot of other challenges. For example, it's very hard to get annotations, because all the annotations need to come from authorized doctors. But the doctors don't work for us, of course. So if we need labels, we need to ask the doctors to do the labelling alongside their [other] work. It's very, very expensive, and it makes everything very, very scarce.

Adam White:

The machine learning community has been turned upside down by large language models, ChatGPT, all those kinds of things. How is that manifesting in your field and in your research?

Xingyu Li:

Right now, it shows a lot of potential, actually. I think it's a good thing, because by large models I don't necessarily mean language models; there are foundation models such as SAM (Segment Anything Model), CLIP, and so on. In my opinion, all these foundation models can behave like Wikipedia or a dictionary. We need to tackle problems where we don't have a lot of data, but a lot of knowledge is actually condensed in these foundation models.

So efficiently exploiting these foundation models can actually bring a lot of insight to the downstream task.

For example, we need to design strategies and machine learning algorithms that guide the large model to extract the relevant knowledge and insight, and then feed that insight into the downstream task.

We recently submitted a paper to AAAI about using CLIP to do zero-shot anomaly detection. For anomaly detection, usually we say we just need normal samples; we learn what is normal. But with the large amount of knowledge in big data, what is normal is already embedded in the model, because it is trained on a lot of data from the internet, and most of that data, if we had to categorize it, is normal already. So all the information is there. Using this information for zero-shot detection, the performance right now is quite surprising.

To give a specific example: for anomaly detection on a particular dataset, the SOTA performance is 97, and with zero-shot we can get 94 or 95. So zero-shot using the foundation model is quite close. Strictly speaking, when you use the foundation model, the big model, you can't really call it zero-shot; you could say it's another kind of supervision, just without a specific focus.

I think it has improved a lot; I see a lot of potential in using foundation models on open-set problems.
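
As an illustration of the idea Li describes, here is a minimal sketch of prompt-based zero-shot anomaly scoring with a pretrained CLIP model. This is not the method from her submitted paper; it assumes the Hugging Face transformers library, the public "openai/clip-vit-base-patch32" checkpoint, and placeholder prompts, object class ("bottle"), and image file name.

```python
# Minimal sketch: zero-shot anomaly scoring with CLIP via text prompts.
# Assumes the Hugging Face transformers library; the prompts, class name,
# and image path are placeholders, not taken from Li's paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# One prompt describing the normal state and one describing an anomalous state.
prompts = ["a photo of a normal bottle", "a photo of a damaged bottle"]

image = Image.open("sample.png")  # hypothetical test image
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Softmax over image-text similarities; the weight on the anomalous prompt
# acts as the anomaly score, with no task-specific training.
probs = outputs.logits_per_image.softmax(dim=-1)
anomaly_score = probs[0, 1].item()
print(f"anomaly score: {anomaly_score:.3f}")
```

The design choice mirrors Li's point: the notion of "normal" is already embedded in the pretrained model, so the only task-specific input is a pair of text prompts.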

Adam White:

Thank you very much for that wonderful chat. It's been great having you here today.

Xingyu Li:

Thank you very much. Thank you.
