
Unraveling an Artificial Intelligence Mystery — Featuring Dr. Leo Celi of MIT Critical Data



This week, the Paytient Podcast is trying to unravel an artificial intelligence mystery with Dr. Leo Celi.

Dr. Celi is a principal research scientist in IMES at MIT and an associate professor of medicine at Harvard Medical School. He was involved in a recent study published in Lancet Digital Health that focused on AI recognition of a patient's race based on medical imaging. The team found that AI was able to accurately predict the self-reported race of patients from medical imaging alone, even though the images contained no mention of the patient's race. Dr. Celi discusses the mystery surrounding this study — nobody is quite sure how the model was able to do this — and the implications of AI and potential bias in the health and benefits space.

Here are a few highlights from our conversation with Dr. Celi:

The overarching focus of his work

“The mission and the vision is revamping the way we create and validate medical knowledge — knowledge that informs how we diagnose; knowledge that informs treatment decisions. So historically, as we know, treatment guidelines or guidelines for screening and prevention would be informed by research that is performed in a few rich countries. Research is still seen as a luxury in most of the world, and they're waiting for the United States or some Western countries to conduct clinical trials or to perform observational studies, and that would be the ingredients on guidelines that are being published by professional societies.

"That has been the status quo for the longest period of time: Medical knowledge creation and validation is controlled by a few academics, and that to us is a big contributor to why we're seeing suboptimal outcomes in most of the world. It's because the guidelines that are being employed when we see patients — whether you're in Africa or Asia — are informed by research performed in the United States or the United Kingdom, and in those countries, one demographic is overrepresented."

How clinical trials unintentionally lead to suboptimal outcomes

"We know that most clinical trials would have white middle-aged individuals as participants, and our understanding of health and disease unfortunately is centered around the white male demographic. And we think that this is a part of the reason why, for example, we're seeing worse outcomes for women presenting with heart attacks upon admission to the hospital. It's because the research that informs how we care for heart attacks is primarily focused on white males presenting with heart attacks. And this is the reality that we see in most of the world.

"The way we treat sepsis in intensive care units in Brazil or the Philippines would be informed by research in the United States and in the U.K., Germany, and France, and there's no guarantee ... that the findings of that research can be translated and can be generalized to the patients that we see in the rest of the world.

"The numbers are pretty discouraging. I think as of 2016, the majority of clinical trial participants (above 80%) would still be white individuals from rich countries. And given the fact that 75% of the world actually lives in Africa and Asia and is not represented at all in these clinical trials or observational studies, then it's not a surprise to find suboptimal population health outcomes everywhere."

The incredible promise of AI in healthcare

"The excitement around artificial intelligence stems from the opportunity that it affords to augment the way we clinicians can diagnose or make treatment decisions. It's particularly game-changing in areas where you might have limited resources or for populations that are marginalized by health systems. What we do is we take data that is routinely collected in the process of care, and we build algorithms for prediction, for classification, for optimization. These algorithms can be given into the hands of healthcare providers — they could be community health workers, nurses, pharmacists, or doctors — to help them make decisions and to help them make diagnoses.

"That is really why we are thrilled about the possibilities that AI could give in countries where you would have one psychiatrist for every 4 million people, suddenly you could have tools that will allow community health workers to provide a diagnosis or to monitor patients who otherwise would not have access to this specialist. That's a very high-level overview of what AI is supposed to be promising to us."

Using algorithms to detect and eliminate bias in healthcare

"The consensus is leaning toward let's not try to hide these attributes — let's not try to hide the race/ethnicity. In fact, we should collect this information so that we could interrogate algorithms for bias. At present, only the United States does a decent job of collecting this information. This is not even done in other countries because they're afraid that if you collect this information it would disadvantage some groups.

"When you look at electronic health records coming from our European colleagues or colleagues from Australia and New Zealand, they don't even have race/ethnicity information. And we think that that could be a stumbling block for moving the field forward because it's important now to make sure that any algorithm that we develop will not perpetuate or even magnify the disparities that we're seeing now. ... Maybe we should use race/ethnicity to make sure that the predictions and the classifications are not going to lead to inequitable outcomes across patient groups."

Listen to the full discussion via the podcast player below or by clicking this link. To learn more about Dr. Celi and his research, you can follow MIT Critical Data on Twitter, LinkedIn, and Facebook.
