Ann:
In a nutshell, AI mechanisms, algorithms, and the data behind the scenes make recommendations, but we have very little insight into how and why those recommendations are made. This has always been true to some extent, but now, with large language models (LLMs), true transparency is all the more difficult.
A good example is the recent launch of a wellness chatbot focused on eating disorders that gave blatantly inaccurate advice, frightening people into unhealthy and even dangerous behaviors. It was unclear why the chatbot went down the path it did, or why its creators were unable to prevent that in advance.
Ann:
Some of these LLMs and generative AI models are delivering answers with a speed, accuracy, and depth of insight that was previously unattainable, but there are trade-offs in our current understanding. The fundamental question is: How much should you trust the black box? How much do you have to take it apart and understand it to capture the real value it provides? The more you have to take it apart because of a lack of trust, the less you benefit from the advantages these evolving approaches offer. That's what people have to wrestle with.