Iocane powder. You know the scene from The Princess Bride.

On a hilltop in the Kingdom of Florin, the villain Vizzini—who has kidnapped Princess Buttercup—engages in a battle of wits with our mysterious hero, the Man in Black.

Before them sit two cups of wine on a small table. The Man in Black tells Vizzini that he has put the deadly poison Iocane powder in one and issues a challenge: each will drink from a cup. Vizzini is to choose, and then they will discover “who is right and who is dead.”

Right now, we seem to be facing our own “Where is the poison?” moment—or perhaps more of a “Pick your poison” moment. But instead of Iocane, it’s AI.

In one cup: AI will decimate our workforce and economy as we know them.

In the other: AI offers a promise of more productive, fulfilling work for those who adopt and embrace it, but doing so comes with attendant material risks (hallucinations, inaccuracies, biases, intellectual atrophy, and other unknowns).

But we may discover, as Vizzini did, that winning this existential contest requires an entirely different solution to the game.

The 1st Cup: “I am not a great fool, so clearly I cannot choose the cup in front of ME!”

Over the past few weeks, several powerful pieces have gained traction regarding AI’s potentially devastating impact on the global workforce, economy, and industrial base. 

Two such articles vividly describing this Iocane threat are Matt Shumer’s Something Big is Happening and Citrini Research’s The Global Intelligence Crisis.

Shumer writes:

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" — which has been going on for over a year — is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing.

Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It'll take some time to ripple through the economy, but the underlying ability is arriving now… This is different from every previous wave of automation, and I need you to understand why. AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too.

The experience that tech workers have had over the past year, of watching AI go from "helpful tool" to "does my job better than I do", is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think "less" is more likely.

Where Shumer offers a real-time wake-up call about a future fast approaching, Citrini offers a dysregulating vision written from that future perch in October of 2028.

AI capabilities improved, companies needed fewer workers, white collar layoffs increased, displaced workers spent less, (resulting revenue declines and) margin pressure pushed firms to invest more in AI, AI capabilities improved…

It was a negative feedback loop with no natural brake. The human intelligence displacement spiral. 

The velocity of money flatlined. The human-centric consumer economy, 70% of GDP at the time, withered. We probably could have figured this out sooner if we just asked how much money machines spend on discretionary goods. (Hint: it’s zero.)

With stocks down…and boards demanding answers, the AI-threatened companies did the only thing they could. Cut headcount, redeploy the savings into AI tools, use those tools to maintain output with lower costs.

Each company’s individual response was rational. The collective result was catastrophic. Every dollar saved on headcount flowed into AI capability that made the next round of job cuts possible.

It is a future easy to imagine. We can feel it now…nipping at the periphery. 

The 2nd Cup: “I am not a great fool, so clearly I cannot choose the cup in front of YOU!”

At the same time that Shumer’s and Citrini’s pieces were causing angina, Anthropic published its labor market “study” offering a more sanguine outlook.

This is our other cup. The friendlier-seeming one, but it’s not without its own costs.

Anthropic’s study is largely summarized in this chart, which uses U.S. Bureau of Labor Statistics (BLS) data to suggest that the only work areas in which AI should raise concerns are “customer service” and “cashiers.”


Source: Anthropic, “Labor market impacts of AI: A new measure and early evidence,” March 2026, https://www.anthropic.com/research/labor-market-impacts.

Otherwise, nothing to see here folks. Move along.

The authors go on to claim that predicting the economics of workforce dynamics has historically been a “sky-is-falling” exercise.

The rapid diffusion of AI is generating a wave of research measuring and forecasting its impacts on labor markets. But the track record of past approaches gives reason for humility.

For example, a prominent attempt to measure job offshorability identified roughly a quarter of US jobs as vulnerable, but a decade on, most of those jobs maintained healthy employment growth. The government’s own occupational growth forecasts, while directionally correct, have added little predictive value beyond linear extrapolation of past trends. Even in hindsight, the impact of major economic disruptions on the labor market is often unclear. Studies on the employment effects of industrial robots reach opposing conclusions, and the scale of job losses attributed to the China trade shock continues to be debated.

They conclude that, despite it all, employment rates remain at historic highs. AI will change our work, yes, but it will not eliminate it.

Last month’s Harvard Business Review article AI Doesn’t Reduce Work – It Intensifies It bolsters this point of view. 

In our in-progress research, we discovered that AI tools didn’t reduce work, they consistently intensified it. In an eight-month study of how generative AI changed work habits… we found that employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so…On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.

Ah, that would be a drink from a pleasant cup indeed. Working with AI does not reduce work, it makes it “more rewarding.” 

All well and good in theory, until one factors in that the price for these new levels of fulfilling productivity is the potential “cost of defect” that can accompany AI use.

Where that cost is high, the risk can outweigh the rewards (Exhibit A: Anthropic’s own current battle with the US Department of Defense).

Closer to home in healthcare, recent studies have shown that “despite benchmark accuracies as high as 94.5%, real-world deployments (of AI in diagnostic settings) often reveal performance drops of 15–30% due to population shifts and integration barriers.”

And…are we really eager to work ever faster and extend that work “into more hours of the day”?

Are the costs and potential turbulence worth it?

So, are we on the precipice of an AI-incited workforce collapse and economic crash or at the dawn of a new world of work productivity and fulfillment leading to unimaginable advances and prosperity, with its own attendant risks?

Spoiler alert: Our solution to the Iocane powder problem

Back to our hilltop duel in The Princess Bride.

Vizzini makes his choice and both men drink. Moments later Vizzini—basking in his own certainty, “Plato, Aristotle, Socrates, they’re morons!”—collapses while the Man in Black looks on unperturbed.

And then the reveal: there was no “right” cup for Vizzini to choose. The Man in Black had poisoned both—and survived by spending years inoculating himself against Iocane powder.   

The question we face is not which cup has the poison but rather how we, as leaders, inoculate our organizations against both.

How do we ensure that we do not fall victim to (nor contribute to) unprecedented AI-driven workforce and economic disruption—while also avoiding the “cost of defect” risk from over-indexing on AI? 

And how do we do so while keeping our organizations vibrant and on the vanguard of care delivery? 

Like Vizzini’s predicament, the answer is not where one may first think to look.

It is not in the technology. It does not rest on determining the optimal AI agent, platform, or roadmap.

These matter, of course. But as Clayton Christensen showed in The Innovator’s Dilemma, once technologies achieve a basic performance threshold and become widespread and “good enough,” the technology itself matters less, and competition shifts from the commoditized innovation to the overall business model.

The source of your inoculation will not be your technology; it will be your people.

How that technology is adopted and used, how it is thoughtfully deployed with sound judgment, and how your organization integrates it into your workflows and business models: that is all a function of your people.

Who and what your organization is and will be in the future will be a function of your people.

We have work to do here. While “the people are your future” may feel intuitive, directionally consistent benchmark data suggest the average healthcare organization is spending roughly three times more of its operating budget on AI technology investment (~1.5%) than on workforce training (~0.5%).

Could it be we’re making massive investments in new machinery but not in how our people can best—and most wisely—use it?

This is not the ideal recipe for success—especially in healthcare, the one industry that continues to demonstrate that people are its very core.

In 2025, healthcare added 693,000 jobs to the US economy. Over the same period, all other sectors combined lost 577,000 (Fortune, March 9, 2026).

The time is now to inoculate your organization. Be less vulnerable to what’s in the cups before you.

Elevate your workforce alongside advancing your AI capabilities. For in the end, the key to your successful AI-driven transformation—one where you avail yourself of its promise while avoiding its pitfalls—will be your people.

It’s not inconceivable.

Ken Graboys and David Jarrard

About the authors

Ken Graboys, CEO of Chartis, began his journey not in an office, but in rural Africa, working in communities facing famine and limited access to care. It was there he saw how the foundations of health—food, medicine, compassion—could shape the trajectory of entire communities. In 2001, he co-founded Chartis with the mission to improve the delivery of healthcare in the world. His career path—from fieldwork in Mauritania to healthcare transformation—illustrates a commitment to cultivating healthier communities through thoughtful, sustainable action. 

David Jarrard, Chairman of Jarrard, grew up in Oak Ridge, Tennessee—a town shaped by science and innovation. His early career as a journalist covering human stories gave him firsthand insight into how storytelling can influence public perception and drive change, which naturally connects to healthcare, a field where clear communication can impact patient outcomes and organizational success. This foundation led him to build one of the nation’s leading healthcare communications firms.  
