Welcome to the second of an ongoing series of roundtable discussions among Chartis consulting leaders around the emerging reality of artificial intelligence (AI) in healthcare.

This roundtable focuses on a recent study in JAMA Internal Medicine, in which ChatGPT outperformed physicians in its responses to patient questions, as rated by a clinician panel.

Join Tom Kiesau, Chartis Chief Innovation Officer and Head of Chartis Digital; Jody Cervenak, Chartis Informatics and Technology Practice Leader; Cindy Lee, Chartis Chief Strategy Officer; Julie Massey, MD, Chartis Clinical IT Practice Leader; Chirag Bhargava, Chartis Revenue Cycle Transformation Practice Co-Leader; and Jon Freedman, Partner in Chartis Digital, as they discuss AI, what Chartis is seeing in real time, and what they think is coming next.


Tom Kiesau: Welcome back, everyone. A new study has gained a lot of attention with its finding that ChatGPT’s responses were better than actual physicians’ responses to patient questions nearly 80% of the time. What are your initial reactions to the promise this presents?

Jon Freedman:

This study, along with a wide range of other nascent experimentation with generative AI, is surfacing use cases that have never been on the radar before. While it's far too early to suggest that providers will be wholly replaced by AI, there's increasing evidence of novel AI applications that can help providers deliver care and interact with their patients, applications that weren't seriously considered even a few months ago.

Julie Massey, MD:

A big part of what we’re seeing here is a consequence of what is being asked of clinicians. Demonstrating empathy and providing a more thorough response to patients require time and focus, both of which are in limited supply for clinicians given the demands placed on them and their teams. The physicians’ lower performance in this study is likely a reflection of their challenging schedules and a symptom of the burnout we have seen related to inbox management.

But this is also where the promise and the hope lies. Instead of so much “pajama time” spent on inboxes, organizations could leverage these tools to streamline—or even offload—some of this work for clinicians.

Yet there is still risk, and ensuring clinical accuracy remains a real challenge.

Tom: To that point, clinicians also rated the chatbot responses higher for quality. Given the limitations of this single study, what are the broader considerations and concerns?  

Cindy Lee:  

Many of the questions the chatbot and physicians were responding to were relatively straightforward, common clinical questions. I think there could be great utility in that. However, a chatbot’s responses would look quite different when the clinical care involves greater nuance.

For instance, I’ve been involved in the care of a loved one with a complicated neuromuscular condition. It’s nicknamed the “snowflake disease” because the symptoms and treatments (and what works and doesn’t work) are all very specific to the individual. It would be difficult at this stage for AI to offer a sound recommendation in this type of situation, where the care path is difficult to routinize, unless perhaps the clinician is also documenting at the level the technology needs. Then, in the future, maybe AI would have all the right inputs. But this would be an added documentation burden for clinicians who are already burnt out from documentation.

Julie:  

I agree. AI can be great for augmenting the clinician’s legwork, but a clinician needs to ensure the accuracy and quality of what the patient is told.  

Tom: A fascinating outcome of the study was that the responses coming from actual humans had lower ratings for empathy than the chatbot’s responses. What can we learn from that?

Jon:  

Conventional wisdom has held that despite whatever value AI might add to care delivery and the operational support of care delivery, doctors and other care providers would always be the ones who would—and could—provide the human touch. It's that human touch around empathy, caring, and familiarity paired with clinical expertise that would keep the physician indispensable to the patient's healthcare experience. For all of its flaws, this study has made people think differently about that assumption. 

In a recent podcast, Peter Lee of Microsoft and Eric Topol, MD, suggested that AI tools might be useful as coaches for providers as they communicate with their patients, helping them to interact more effectively and empathetically. They cited a remarkable example of a provider who needed help in sharing very bad news with a patient. Incredibly enough, in addition to providing great advice, the AI-driven chatbot asked the provider how he was doing in dealing with this situation. So, AI may even help clinicians themselves deal with difficult situations, stress, and burnout. 

Jody Cervenak:

How do you measure empathy? Healthcare leaders also need to consider the patient perspective. 

A lot of patients just want to be heard and acknowledged, and to feel cared about because they have been prioritized. An important way of communicating empathy is being responsive—that can be measured by how quickly patient messages get a response on average and how many messages go unanswered. AI can enable clinicians to be more responsive, at least with a first line of response to patients, because no response at all is arguably the worst result from the patient’s perspective.

Tom: What are some use cases that would allow organizations to leverage chatbot strengths?

Julie:  

Anything that enables a clinician to respond more effectively to the patient. For example, scanning through the electronic health record (EHR) and presenting the clinician with the relevant information related to the patient’s question in the correct context can reduce burden and enable better clinician decision-making. It also allows the clinician to personalize their response to the patient, which is a great foundational building block for trust.

Chirag Bhargava:  

Related to personalization in clinical interactions, there are so many nonclinical touchpoints a patient has with a health system—things like inquiries, the revenue cycle, and scheduling. Part of showing empathy is expediting and personalizing these interactions.

Organizations can start documenting patient preferences in the EHR so they know what each patient is looking for and apply those preferences in a thoughtful way. 

Jon:

Given the already proven capabilities of ChatGPT and Bard to communicate in different ways with different people, the implications for improving healthcare literacy are also immense. Providing consumers with useful information at the right reading level; in the right language; and in the right structure, format, syntax, and style for the consumer's individual needs is a terrific opportunity to advance healthcare equity in some respects. 

Tom: What are the challenges that will need to be overcome? This obviously won’t be as easy as picking an AI solution and just “turning it on.”

Jon:  

Health system leaders will need to wrap their heads around who to partner with, how to manage quality and safety, and how to be transparent that AI is behind certain interactions. Consumers are increasingly accustomed to interacting with chatbots, but there may well be a resurgence of unease as chatbots communicate more and more as if they were human. In fact, a recent Pew study found that many consumers are uneasy with AI-driven interactions in their healthcare.  

There are also very real questions about the expertise needed to operate and train AI systems with the appropriate data, and few—if any—health systems will have the in-house talent pool or computing power for the level of sophistication required. Additionally, data science and clinical informatics skills will be important competencies to maintain so organizations can proactively monitor, audit, and resolve potential bias or incorrect outputs from AI tools.

Jody:

Deploying any new technology should always be done in combination with thoughtful business planning, period. AI is a powerful emerging technology that, if deployed thoughtfully and in the right use cases, can yield tangible and scalable results. But health systems need to remember this equation: NT + OO = COO. That is, New Technology + Old Organization just equals a Costly Old Organization. AI is no different in that respect than any other new technology.

Tom: At the end of the day, while the results are spectacular, this is another powerful illustration of what AI could do, but not yet what it can do. Thank you for your great insights and perspectives, as well as your collective guidance to healthcare executives as they plan for their organization’s AI journey. I look forward to our next discussion!

© 2023 The Chartis Group, LLC. All rights reserved. This content draws on the research and experience of Chartis consultants and other sources. It is for general information purposes only and should not be used as a substitute for consultation with professional advisors. It does not constitute legal advice.
