Welcome to the sixth in an ongoing series of roundtable discussions among Chartis consulting leaders on the emerging reality of artificial intelligence (AI) in healthcare.

With financial and workforce performance leading the list of pressing priorities for most healthcare organizations, AI applications present an opportunity to streamline administrative processes. When adopted well, lower-risk AI applications can transform the employee experience and productivity as clinicians and nonclinical staff alike are freed to focus on more meaningful work. In the process, the organization develops important lessons it can apply to other AI applications in the future.

Join Tom Kiesau, Chartis Chief Innovation Officer and Head of Chartis Digital; Julie Massey, MD, Chartis Clinical IT Practice Leader; Chirag Bhargava, Chartis Revenue Cycle Transformation Practice Co-Leader; and Bret Anderson, Partner in Chartis Digital, as they discuss AI, what Chartis is seeing in real time, and what they think is coming next.


Tom Kiesau: Thanks for joining the discussion today. Let’s talk about how ongoing developments in AI are creating immediate opportunities to streamline administrative processes. What are some of the best use cases, and what are their benefits?

CHIRAG BHARGAVA: 

AI in healthcare needs to start on the administrative side. In fact, many AI use cases have been developing in the back offices of healthcare organizations for quite some time.

For instance, revenue cycle staff use AI tools to understand what to prioritize in the vast number of claims they need to look at. Many provider organizations are leaving money on the table by not getting to everything, and AI tools can flag invoices the organization should pursue first, providing scores based on the predicted likelihood of getting paid.
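To make that concrete, here is a minimal sketch of how such a prioritization score might be produced, assuming a historical extract of closed claims and an open work queue; the file and column names are hypothetical, and real propensity-to-pay models are considerably more involved:

```python
# Minimal sketch of a claim-prioritization score: train a model on historical
# claims (hypothetical fields) to predict likelihood of payment, then rank the
# open work queue by expected recoverable dollars. Illustrative only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical historical data: one row per closed claim.
history = pd.read_csv("closed_claims.csv")           # assumed file
features = ["payer_id", "balance", "days_outstanding", "denial_count"]
X = pd.get_dummies(history[features], columns=["payer_id"])
y = history["was_paid"]                               # 1 if the claim was ultimately paid

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")

# Score the open queue and surface the highest expected-value claims first.
open_claims = pd.read_csv("open_claims.csv")          # assumed file
X_open = pd.get_dummies(open_claims[features], columns=["payer_id"]).reindex(
    columns=X.columns, fill_value=0)
open_claims["p_paid"] = model.predict_proba(X_open)[:, 1]
open_claims["expected_value"] = open_claims["p_paid"] * open_claims["balance"]
worklist = open_claims.sort_values("expected_value", ascending=False)
print(worklist[["claim_id", "balance", "p_paid", "expected_value"]].head(10))
```

The point of the ranking is simply to put staff time against the accounts most likely to pay off first, rather than working the queue in arrival order.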

Other high-impact examples include using AI to help submit documentation to payers for prior authorization and to validate eligibility.

And finally, many organizations are planning to use generative AI for simple conversations with patients via the call center, drafting initial responses to patient queries and transcribing clinical documentation in order to improve agent efficiency.

BRET ANDERSON: 

We’re also seeing traction on the financial side beyond the revenue cycle. For instance, account reconciliation each month can be complex and fraught with errors, especially across different settings of care and departments. AI tools can make the process of creating reports much more efficient, freeing up staff to review, validate, and handle other things.
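As a rough illustration of the reporting piece, much of the mechanical matching can be scripted so staff spend their time on the exceptions rather than on building the report. A minimal sketch, assuming two monthly extracts with hypothetical column names:

```python
# Minimal reconciliation sketch: match general-ledger totals to sub-ledger
# totals and report the exceptions for human review. Column names are
# hypothetical; real extracts will differ by system.
import pandas as pd

gl = pd.read_csv("general_ledger.csv")     # assumed: account, period, amount
sub = pd.read_csv("sub_ledger.csv")        # assumed: account, period, amount

gl_totals = gl.groupby(["account", "period"], as_index=False)["amount"].sum()
sub_totals = sub.groupby(["account", "period"], as_index=False)["amount"].sum()

recon = gl_totals.merge(sub_totals, on=["account", "period"],
                        how="outer", suffixes=("_gl", "_sub")).fillna(0)
recon["variance"] = recon["amount_gl"] - recon["amount_sub"]

# Staff review only the lines that don't tie out.
exceptions = recon[recon["variance"].abs() > 0.01]
exceptions.to_csv("reconciliation_exceptions.csv", index=False)
print(f"{len(exceptions)} of {len(recon)} account-periods need review")
```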

Another big opportunity area is automating supply chain management and identifying places to consolidate and renegotiate pricing. AI tools are particularly helpful in navigating the complexities of physician preferences, which are rarely documented anywhere but can be predicted with a high degree of accuracy.

CHIRAG: 

Reporting and analyzing data is an AI specialty. Typically, organizations that invest in data and analytics have staff doing the analysis manually. Generative AI in particular holds the promise of not only doing this more efficiently but also enabling organizations to more efficiently and effectively probe the data analytically.  

AI tools are both helping to identify administrative priorities to address and doing some of the actual administrative work itself.

Tom: In addition to streamlining processes among administrative departments, what is the developing opportunity to reduce the administrative burden for clinicians as well?

JULIE MASSEY, MD: 

We’ve talked in previous roundtables about how AI tools can help reduce clinician “pajama time” in triaging their inboxes and helping draft responses to patients. There’s a real need to unburden clinicians and others from repetitive tasks. Coding and documentation is an interesting area for further reducing the administrative burden because there has been a pendulum shift in recent years from clinicians never touching the coding to now being principally responsible for it.  

There are concerns about fraud and heightened scrutiny of coding practices, especially in the ambulatory environment. When organizations introduce automated tools and AI, it will be important to have policies and governance in place that not only help ensure accuracy but also establish accountability when mistakes inevitably occur.

CHIRAG: 

We are seeing high success rates for procedural coding in emergency medicine, urgent care, diagnostics, radiology, and outpatient surgery. Natural language processing (NLP) builds off clinical notes to generate the right codes. It can automatically generate charts and produce required documentation for things like prior authorization and claim denial appeals.
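As a toy illustration of the idea (not how production coding engines actually work), a simple classifier trained on notes that coders have already labeled can suggest candidate codes for human review; the file and column names here are hypothetical:

```python
# Toy note-to-code sketch: a simple text classifier trained on historical notes
# that coders have already labeled with procedure codes. Production coding
# engines are far more sophisticated; this only shows the shape of the problem.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

notes = pd.read_csv("coded_encounters.csv")         # assumed: note_text, cpt_code
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
).fit(notes["note_text"], notes["cpt_code"])

new_note = "Laceration of left forearm repaired with simple closure, 3.5 cm."
candidates = pd.Series(model.predict_proba([new_note])[0],
                       index=model.classes_).nlargest(3)
print(candidates)   # top code suggestions, routed to a human coder for review
```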

Automation can be deployed in cases where there’s enough of an audit trail to validate whether the results are accurate or not. As Julie said, having the right governance in place with a robust review methodology is essential to ensure the appropriate monitoring and auditing is occurring. These systems are not ready to run autonomously, especially right out of the gate.

Additionally, organizations should have AI-enabled revenue integrity and clinical documentation integrity teams. Organizations need these teams to understand the accuracy of what they are reviewing and how to identify where things are and aren’t working optimally—and make the needed improvements. AI can be a hugely beneficial support tool for these critical staff roles.

Tom: That brings us to mitigating some of the other major risks around leveraging AI to streamline administrative processes. What are some of these risks and the key considerations to ensure the best outcomes?

CHIRAG: 

One of the big risks is not fully understanding what your organization is getting into. First, you need to know what you are going to make better through AI. When leaders don’t understand the existing processes and where the challenges are, it’s just shooting in the dark—hoping that it works. You need to understand exactly what you’re trying to improve so you can develop the return on investment (ROI) framework, make the investment in a meaningful and thoughtful way, and track the results of your efforts.

Next, you have to know where you should invest. You need to understand your full technology portfolio before you add to it—many times, organizations have access in their current tech stacks to capabilities they are not using or even aware of. Talk to your current vendors so you know what new capabilities may be available in the near future, prior to implementing potentially duplicative solutions.

Finally, you need to have a framework for analyzing the ongoing application of AI in your organization to ensure you are using the technology for optimal results. For instance, when it comes to coding applications, your organization will need to assess whether you are coding at the right level to avoid lower reimbursement due to downcoding or undercoding (as well as, obviously, inappropriately overcoding or upcoding).
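One lightweight way to operationalize that ongoing check is to compare the organization’s coding-level mix against a benchmark distribution and flag material gaps for review. A minimal sketch, with placeholder benchmark figures and hypothetical field names:

```python
# Minimal coding-level check: compare the share of visits billed at each E/M
# level against a benchmark mix to surface possible down- or upcoding for a
# revenue integrity team to investigate. Benchmark figures below are
# placeholders, not published norms.
import pandas as pd

coded = pd.read_csv("em_visits.csv")                  # assumed: visit_id, em_level
observed = coded["em_level"].value_counts(normalize=True).sort_index()

benchmark = pd.Series({99212: 0.10, 99213: 0.40,      # placeholder benchmark mix
                       99214: 0.35, 99215: 0.15})

comparison = pd.DataFrame({"observed": observed, "benchmark": benchmark}).fillna(0)
comparison["gap"] = comparison["observed"] - comparison["benchmark"]

# Flag levels where the organization deviates materially from the benchmark.
flagged = comparison[comparison["gap"].abs() > 0.05]
print(flagged)
```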

BRET:

When you are looking at your technology needs, be deliberate in matching the right technology with the use cases you’re trying to address. For example, generative AI introduces the risk of hallucinations that produce incorrect outputs, so it may not be the best choice for problems that can be solved with robotic process automation (RPA), which has a markedly narrower risk profile.

You also need to position your AI initiatives with the right amount of human oversight, which should itself be reviewed by a central organizing body within the health system. Going back to the coding process, for instance: if you’re automating much of the process, you need to define and set up the right human involvement for each use case to ensure quality and achieve stronger outcomes for things like reimbursement, while maintaining accountability up the chain of command for appropriate use.

Compliance and security should also be major concerns, especially with generative AI. Because this technology relies on a lot of open-source data, you need to set security guardrails (especially around sensitive financial and clinical information) so you aren’t exposing sensitive data or protected health information (PHI) and putting your organization at risk.
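One simple form such a guardrail can take is scrubbing obvious identifiers from any text before it leaves the organization for an external generative AI service. The sketch below is illustrative only; real PHI de-identification requires far more robust tooling and review:

```python
# Minimal pre-submission guardrail: scrub recognizable identifiers from text
# before it is sent to any external generative AI service. The patterns below
# are illustrative, not a complete de-identification solution.
import re

PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn":   re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = "Summarize the dispute for MRN 00123456, callback 312-555-0187."
print(redact(prompt))
# -> "Summarize the dispute for [REDACTED-MRN], callback [REDACTED-PHONE]."
```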

Tom: The health system workforce is at the heart of streamlining administrative processes with AI. How should leaders involve them?

CHIRAG: 

Your workforce should be involved from beginning to end. Bring them to the table early to help identify opportunities, and keep them engaged all the way through enterprise-wide messaging.

JULIE: 

Part of bringing the workforce to the table early is ensuring that implementing new AI technology doesn’t inadvertently add to their burden instead of reducing it.

BRET: 

End-users should be part of the early deliberation and selection process and workflow design. This will help foster familiarity with the technology and ownership of its success.

There’s a lot of understandable fear and trepidation that AI is going to automate many jobs, cutting the workforce as a result. In reality, AI tools will help healthcare organizations do more with the workforce they have—enabling them to be more productive and to make more timely and efficient decisions.

Be clear about the value you are trying to generate and how it will be measured. And if workforce savings are part of the value driver, be sure to communicate to staff the plan for how you will realize those workforce savings and what will happen to those who are affected. Help people understand that it’s an enterprise goal, intended to benefit both your organization and your workers. Bringing these factors together will foster buy-in and help achieve the best results.

Tom: A theme of this roundtable series has been the centrality of people when it comes to AI in healthcare—and administrative processes are certainly no exception. Thank you all for the discussion today. I look forward to our next roundtable as we dig further into using AI to deploy clinical resources more efficiently and effectively. 


© 2023 The Chartis Group, LLC. All rights reserved. This content draws on the research and experience of Chartis consultants and other sources. It is for general information purposes only and should not be used as a substitute for consultation with professional advisors. It does not constitute legal advice.
