Will Patients Accept Artificial Intelligence-Driven Medicine?

Does Dr. Bot know best? That’s a question consumers will confront, in one form or another, sooner than you may think. The answer may lie in how artificial intelligence (AI) is integrated into the healthcare picture – whether doctors provide the gateway to AI diagnosis or whether patients interact with it directly.

Consumers already consult Dr. Google without hesitation, but Google is free and more or less anonymous and, last time we checked, doesn’t write prescriptions or have admitting privileges at hospitals. But how will patients feel if they not only seek information from a digital interface but are then expected to act on it?

Those are some of the questions posed in a new study by researchers at Boston University, published recently in the Journal of Consumer Research.

Artificial intelligence is already “manifold and pervasive” in the practice of medicine, the researchers note, with IBM’s Watson detecting heart disease, SkinVision identifying skin cancer, and algorithms diagnosing retinal disorders.

“AI and especially machine-learning algorithms introduce a fundamentally new kind of data analysis into the health-care workflow,” researchers at the University of Virginia wrote in a recent study. “By virtue of their influence on pathologists and other physicians in selection of diagnoses and treatments, the outputs of these algorithms will critically impact patient care.”

AI still mostly used by physicians

But most AI applications, at least in the United States, are intended to be used by physicians. In some countries, most notably the United Kingdom, patients are beginning to interact directly with AI-driven bots, which dispense medical advice for the National Health Service.

There’s little question that AI can detect many diseases more quickly and accurately than humans, while also amassing ever-growing mounds of data that drive research that can help develop and evaluate new treatments.

While patients, and Americans in particular, are reasonably accepting of technology, there’s some question about how readily they will take advice from a machine that may not go out of its way to recognize their individuality, something Americans perhaps prize more highly than people in other countries.

Uniqueness neglect

“The prospect of being cared for by AI providers is more likely to evoke a concern that one’s unique characteristics, circumstances, and symptoms will be neglected,” the authors of the study said. They coined the term “uniqueness neglect” to describe this syndrome.

“Whereas consumers view themselves as unique and different from others, consumers view machines as capable of operating only in a standardized and rote manner that treats every case the same way,” they said.

Across eleven studies of consumer acceptance of AI in medical care, the Boston researchers found that those who consider themselves “more different than others” were more likely to resist AI medical care than those who consider themselves “more similar to others.”

The findings would seem to support the position of the American Medical Association, which has said that the goal of “augmented intelligence,” as the AMA calls it, should be to “supplement and enhance, rather than replace, human doctors’ judgment and wisdom.”

“Combining AI methods and systems with an irreplaceable human clinician can advance the delivery of care in a way that outperforms what either can do alone,” said AMA Trustee Jesse M. Ehrenfeld, MD, MPH. “But we must forthrightly address challenges in the design, evaluation and implementation as this technology is increasingly integrated into physicians’ delivery of care to patients.”