By KIM BELLARD

You may have missed it, but the Association for the Advancement of Artificial Intelligence (AAAI) just announced its first annual Squirrel AI Award winner: Regina Barzilay, a professor at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). In fact, if you’re like me, you may have missed that there was a Squirrel AI Award. But there is, and it’s kind of a big deal, especially for healthcare, as Professor Barzilay’s work illustrates.

The Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity (Squirrel AI is a China-based, AI-powered “adaptive education provider”) “recognizes positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways with long-lived effects.” The award carries a prize of $1,000,000, about the same as a Nobel Prize.

Yolanda Gil, a past president of AAAI, explained the rationale for the new award: “What we wanted to do with the award is to put out to the public that if we treat AI with fear, then we may not pursue the benefits that AI is having for people.”

Dr. Barzilay has impressive credentials, including a MacArthur Fellowship.   Her expertise is in natural language processing (NLP) and machine learning, and she focused her interests on healthcare following a breast cancer diagnosis.  “It was the end of 2014, January 2015, I just came back with a totally new vision about the goals of my research and technology development,” she told The Wall Street Journal. “And from there, I was trying to do something tangible, to change the diagnostics and treatment of breast cancer.”

Since then, Dr. Barzilay has been busy. She has helped apply machine learning to drug development, and she has worked with Massachusetts General Hospital to use AI to identify breast cancer at very early stages. Their new model identifies risk better than the widely used Tyrer-Cuzick risk evaluation model, especially for African-American women.

As she told Will Douglas Heaven in an interview for MIT Technology Review:  “It’s not some kind of miracle—cancer doesn’t grow from yesterday to today. It’s a pretty long process. There are signs in the tissue, but the human eye has limited ability to detect what may be very small patterns.”

This raises one of the big problems with AI: we may not always understand why it makes the decisions it does. Dr. Barzilay observed:

But if you ask a machine, as we increasingly are, to do things that a human can’t, what exactly is the machine going to show you? It’s like a dog, which can smell much better than us, explaining how it can smell something. We just don’t have that capacity.

She firmly believes, though, that we can’t wait for “the perfect AI,” one we fully understand and that will always be right; we just have to figure out “how to use its strengths and avoid its weaknesses.”   As she told Stat News, we have a long way to go: “We have a humongous body of work in AI in health, and very little of it is actually translated into clinics and benefits patients.”

Dr. Barzilay pointed out: “Right now AI is flourishing in places where the cost of failure is very low…But that’s not going to work for a doctor… We need to give doctors reasons to trust AI. The FDA is looking at this problem, but I think it’s very far from solved in the US, or anywhere else in the world.” 

A concern is what happens when AI is wrong. It might predict the wrong thing, fail to identify the right thing, or ignore issues it should have noticed. In other words, the kinds of things that happen every day in healthcare already. With people, we can fire them, sue them, even take away their licenses. With AI, whom, or what, we hold accountable is not at all obvious.

“This is a big mess,” Patrick Lin, director of Ethics and Emerging Sciences Group at California Polytechnic State University, told Quartz. “It’s not clear who would be responsible because the details of why an error or accident happens matters.” 

Wendell Wallach, of Yale University’s Interdisciplinary Center for Bioethics, added: “If the system fails to perform as designed or does something idiosyncratic, that probably goes back to the corporation that marketed the device. If it hasn’t failed, if it’s being misused in the hospital context, liability would fall on who authorized that usage.”

“If it’s unclear who’s responsible, that creates a gap, it could be no one is responsible,” Dr. Lin said. “If that’s the case, there’s no incentive to fix the problem.”  Oh, great, just what healthcare needs: more unaccountable entities.

To really make AI succeed in healthcare, we’re going to have to make radical changes in how we view data, and in how we approach mistakes.

AI needs as much data as it can get. It needs it from diverse sources and on diverse populations. All of those are problematic in our siloed, proprietary, one-step-from-handwritten data systems. Dr. Barzilay nailed it: “I couldn’t imagine any other field where people voluntarily throw away the data that’s available. But that’s what was going on in medicine.”

Despite our vaunted scientific approach to medicine, the fact is that we don’t really know what happens to most people most of the time, and we do a poor job of counting even basic healthcare system interactions, like numbers of procedures, adverse outcomes, even how much things cost. As bad as we are at tracking episodic care, we’re even worse at tracking care, much less health, over time and across different healthcare encounters.

Once AI has data, it is going to start identifying patterns, some of which we know, some of which we should have known, and some of which we wouldn’t have ever guessed.  We’re going to find that we’ve been doing some things wrong, and that we could do many things better.  That’s going to cause some second-guessing and finger-pointing, both of which are unproductive.

Our healthcare system tends to have its head in the sand about identifying errors and mistakes, for fear of malpractice suits (justified or not). Whatever tracking does happen is rarely disclosed to the public. That’s a 20th-century attitude that needs to be updated in an AI age; we should be thinking less about a malpractice model and more about a continuous quality improvement model.

“The first thing that’s important to realise is that AI isn’t magic,” David Champeaux of Cherish Health said recently. It’s not, but neither is what we already do in healthcare. We need to figure out how to demystify both.

Kim is a former emarketing exec at a major Blues plan, editor of the late & lamented Tincture.io, and now regular THCB contributor.



