The doom and hype cycle reigns supreme in artificial intelligence. “It will wipe us out!” some say. “It will bring utopia!” say others. But my bets are more prosaic: As groundbreaking as it is, AI is still a tool like any other.
The creation of “thinking” machines has long been a dream of science fiction, much like space travel and fusion power. Yet here we are: AI’s potential is real. Tangible. It’s a dream we can grasp.
This is no surprise to those who have studied and followed the development of AI systems. And while I have a graduate degree in the field, I don’t have a crystal ball, so I will try not to contribute to the doom and hype cycle here.
Instead, I want to shine a light on a different “doom” altogether, one that seems clearer and more pressing, at least where healthcare is concerned: Clinicians are leaving the workforce. U.S. hospitals and clinics may be short by as many as 124,000 physicians by the mid-2030s.
The culprit, at least in part, is burnout.
How Can We Combat Clinician Burnout with AI?
The short answer is that there are lots of ideas.
We could use large language models (LLMs), for example, to summarize vast amounts of clinical information. Yet it’s early days, and trusting what LLMs say will take time and effort. Even AI researchers can’t fully explain how these digital minds arrive at their outputs, and that unknowability plays a big role in the doom and hype cycle. However, as I wrote for Medical Economics in July (“Leveraging AI to minimize clinician burnout”), the purpose of any technology is to improve a process or solve a problem, and the key is to strike the right balance between the benefits and the concerns.
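To make that idea concrete, here is a minimal sketch of what LLM-assisted summarization could look like. It assumes the OpenAI Python SDK, a placeholder model name, and a hypothetical clinical note; any real deployment would add de-identification, access controls, and clinician review of every summary.

```python
# Illustrative sketch only: assumes the OpenAI Python SDK with an API key in
# the environment. A production system would require de-identification,
# auditing, and a clinician reviewing every summary before it is used.
from openai import OpenAI

client = OpenAI()

def summarize_note(note_text: str) -> str:
    """Ask an LLM for a brief, plain-language summary of a clinical note."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; choose per your governance policy
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the clinical note in three sentences of plain "
                    "language. Do not add information that is not in the note."
                ),
            },
            {"role": "user", "content": note_text},
        ],
        temperature=0,  # keep the output as consistent as possible
    )
    return response.choices[0].message.content

# Hypothetical usage:
# print(summarize_note("62-year-old with type 2 diabetes presents for follow-up..."))
```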
Do No Harm
The first of science-fiction author Isaac Asimov’s “three laws of robotics” is that robots shall not harm people, whether directly or by failing to act. Asimov’s laws may need retuning now that AI has arrived on the scene, but the first law fittingly resembles the Hippocratic oath taken by physicians to do no harm.
Do no harm. That’s why I am advocating for the responsible use of AI in healthcare.
Medical scribing. Discharge summaries. Paperwork. These are burdensome tasks for clinicians and staff. They contribute to burnout—and they could be tackled by AI.
“Imagine a world,” said U.S. Food and Drug Administration Commissioner Robert Califf, “in which your questions were answered immediately in language appropriate for your literacy and numeracy. Also, your clinician can actually talk with you rather than spending all their time cutting, pasting and writing clinic notes. I could go on and on, but I see the regulation of large language models as critical to our future.”
AI, in other words, is poised to alleviate burnout by streamlining workflows.
Trust Is Everything in Healthcare
However, as we’ve seen, AI systems aren’t perfect. They can “hallucinate,” producing output that sounds plausible but is factually incorrect. “In short,” according to IEEE Spectrum, “you can’t trust what the machine is telling you.” One reason is that a model’s output is only as good as the data it’s fed. This reminds me of the classic “garbage in, garbage out” adage.
Then there are the risks around data use and patient privacy, which are two of healthcare’s universal concerns.
As we begin to use AI as a tool in healthcare, doing no harm means erecting and maintaining guardrails around these concerns so that AI systems draw only on trusted information from reputable sources, just as our solutions for intelligent prescribing and clinical interoperability do.
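As a simple illustration of what such a guardrail might look like in practice, the sketch below admits documents into a model’s context only if they come from an approved source. The source names and data structures are hypothetical placeholders, not a description of any particular product.

```python
# Illustrative "trusted sources only" guardrail: documents reach the model's
# context only if they come from an approved source. Source names are
# hypothetical; a real system would audit-log exclusions rather than print.
from dataclasses import dataclass

APPROVED_SOURCES = {"internal_formulary", "clinical_guidelines", "ehr_problem_list"}

@dataclass
class Document:
    source: str
    text: str

def build_context(documents: list[Document]) -> str:
    """Keep only documents from approved sources; drop and report the rest."""
    trusted = [d for d in documents if d.source in APPROVED_SOURCES]
    rejected = [d for d in documents if d.source not in APPROVED_SOURCES]
    for d in rejected:
        print(f"Excluded untrusted source: {d.source}")
    return "\n\n".join(d.text for d in trusted)
```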
This Q&A with my colleague Judy Hatchett expands on what I mean by guardrails. Hatchett’s job is to make sure that only those who are using data for the good of patients have access to it. This is our focus, and it will remain our focus, whether we use AI or not.
That’s because trust is everything in healthcare.