There are many skills in medicine that a machine can't replicate: how to listen to a patient and ask questions that go beyond the current medical issue, how to empathize and develop trust, and many technical skills that machines cannot yet perform. For more routine skills, such as repetitive analysis and knowledge retrieval, the average clinician can be greatly helped by an artificial intelligence (AI) field known as algorithmic medicine, said Joseph Sanford, MD, director of the University of Arkansas for Medical Sciences (UAMS) Institute for Digital Health & Innovation (IDHI).
“Algorithmic medicine brings these analysis and interpretation tools to the bedside, so that clinicians do not have to seek it out,” Sanford said. “It can improve accuracy in diagnosing from medical images. It can nudge clinicians towards care options that the literature supports, or away from those that have been found to be less effective. At its best, algorithmic medicine standardizes and improves a practitioner’s decision making. It does not supplant it.”
The goal is to improve medical decision making by leveraging the vast amounts of data in the electronic medical record with techniques from behavioral economics.
“There is excellent work being done in personalized genomics, medical imaging, population health modeling and health care logistics, among others,” Sanford said. “This is a rapidly evolving field, and I think the most impressive results are ahead of us.”
Trust is an incredibly important issue in medical decision making generally, and Sanford said it can be difficult to get patients to trust these new tools. He said it is crucial to educate patients that these tools are just that, tools.
“They are not making decisions for the clinician, nor can they ever override patient autonomy,” Sanford said. “While not precisely the same, think about using these tools in the same way you might use Maps to plan a trip. It provides more information than you could ever collect on your own about route, duration, recommended stops, traffic conditions, etc. But only you can choose which road you drive down.”
Sanford said the factors that influence trust in AI are the same as those that influence trust in any large, complicated system: transparency, efficacy, control, perceived credibility and ethical alignment with your culture are just a few of the factors that must be considered.
Providers already use algorithms for patient care every day, said Kevin Sexton, UAMS associate chief medical informatics officer for innovation, research and entrepreneurship. "We use them to adjust dosages of medications, evaluate patient risk for a particular condition, among other uses," he said. "The difference here is that the computer is helping make decisions."
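The kinds of everyday algorithms Sexton describes are often simple, rule-based scores. As an illustration only (this example is not from the article and is not medical advice), the widely used CHA2DS2-VASc score for estimating stroke risk in atrial fibrillation can be computed in a few lines:

```python
# Illustrative sketch: the CHA2DS2-VASc stroke-risk score used for patients
# with atrial fibrillation. Weights follow the published scale; this is an
# example of a rule-based clinical risk score, not a tool from the article.

def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 prior_stroke_tia, vascular_disease):
    """Return the CHA2DS2-VASc score (0-9)."""
    score = 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)   # A2 / A: age
    score += 1 if female else 0                            # Sc: sex category
    score += 1 if chf else 0                               # C: heart failure
    score += 1 if hypertension else 0                      # H: hypertension
    score += 1 if diabetes else 0                          # D: diabetes
    score += 2 if prior_stroke_tia else 0                  # S2: prior stroke/TIA
    score += 1 if vascular_disease else 0                  # V: vascular disease
    return score

# Example: a 78-year-old woman with hypertension scores 2 + 1 + 1 = 4.
print(cha2ds2_vasc(78, True, False, True, False, False, False))  # 4
```

A computer running a score like this does nothing a clinician couldn't do by hand; the value, as Sexton notes, is that the machine applies it consistently and surfaces the result at the point of care.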
He finds it interesting that we already trust AI in many parts of our daily lives, from what movies we watch and songs we listen to, to where we go to dinner. It is easier for AI to gain adoption with decisions that have little downside risk.
“It’s this risk and low tolerance for error that also inhibit trust in these systems for clinical decisions,” Sexton said. “That’s why we see the most successful implementations where machines work with the clinicians to make a decision as a partner.”
Another potential benefit is expanding the screening capabilities of the health care system. Sexton said there is great work being done with AI on eye exams to detect the changes caused by illnesses like diabetes and hypertension.
“These types of exams are difficult to train providers to do consistently, but a machine taking pictures for analysis does a very consistent job obtaining this information,” Sexton said. “Also, with more patients wearing monitors at home (anything from a smart watch to a medical device) there is tremendous data for analysis, and health care systems don’t currently have a universal, organized approach to analysis of all of these data streams. Machines are ideal for this use case as well. Ultimately, we think that the future of machines in health care should be to allow more access to care.”
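A minimal sketch of the kind of automated stream analysis Sexton describes (the thresholds and readings here are hypothetical, chosen purely for illustration) is a simple out-of-range check on heart-rate data from a wearable, flagging readings for a clinician to review:

```python
# Hypothetical sketch: flag heart-rate readings from a wearable that fall
# outside a configured range so a clinician can review them. Thresholds
# and data are illustrative assumptions, not clinical recommendations.

NORMAL_RANGE = (50, 110)  # beats per minute; illustrative limits only

def flag_readings(readings, low=NORMAL_RANGE[0], high=NORMAL_RANGE[1]):
    """Return the (timestamp, bpm) pairs that fall outside [low, high]."""
    return [(ts, bpm) for ts, bpm in readings if not (low <= bpm <= high)]

# A short hypothetical stream of (timestamp, bpm) samples.
stream = [("08:00", 62), ("08:05", 130), ("08:10", 71), ("08:15", 45)]
print(flag_readings(stream))  # [('08:05', 130), ('08:15', 45)]
```

Real systems would of course be far more sophisticated, but the principle is the same: a machine tirelessly watches every data stream and routes only the exceptions to a human.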
Both doctors said earning clinicians' trust in an AI system works much the same as it does with patients.
“We share the same biases and concerns about providing good care that patients have about receiving it,” they said. “Some aspects are of more interest to the provider than the patient. Namely, a clinician generally wants to see that it is immediately effective on a variety of patients. They also should have a strong understanding of what’s going on underneath the hood, particularly focusing on where the system is weak, so they can be prepared to independently validate something that appears odd to them. And they must have a strong sense of control as the care of the patient is ultimately their responsibility.”
Health care finances were difficult prior to the pandemic and have not gotten easier since. The UAMS physicians say that algorithmic medicine solutions need to demonstrate added value to clinicians and health care systems for whom any additional expense might mean decreasing care at the bedside.
Where investment has already been made, in the electronic medical record, for example, they are seeing results in medical and research data that would have previously been impossible to analyze. Further out, they hope to see augmented decision making act as a force multiplier, enabling physicians and nurses to care for more patients in the same time span and at a higher average quality than they would without these tools.