# 97 Hard Truths About AI in Healthcare
This is a living document.
If something is missing or needs updating, let me know.
[Last update: 2026-03-02]
Thanks to all contributors listed below.
Jan Beger
## Power, Politics & Trust

1. In a system built on expertise and hierarchy, the biggest barrier to AI is not technology. It is attachment to how care has always been delivered.
2. Hospitals do not have an AI adoption problem. They have a control problem. AI shifts decision gravity away from hierarchy and toward data.
3. We talk about AI safety. We rarely talk about AI status loss. One is technical. The other is political. The political one stalls more projects.
4. Healthcare does not buy intelligence. It buys reassurance. AI that threatens reassurance will stall, no matter how good it is.

## Adoption & Scale

5. AI in healthcare is operating at scale. A $600M ambient scribe market, 1,300+ FDA-cleared devices, deployment rates 2.2x other industries. The question now: can you scale without breaking trust?
6. The pilot era is over. The real divide: organizations using AI as point solutions vs. those redesigning their operating model around it.
7. AI adoption is outpacing readiness. The bottleneck shifted from infrastructure to governance. Unapproved tools are spreading, validation is lagging, revenue incentives are distorting clinical tools.
8. Ambient scribes succeeded because they sit in the background. The next wave will demand more: redesigning workflows around AI, not squeezing AI into old ones.
9. AI doesn't slot into one part of the treatment journey. It changes interconnected work processes. Rethink the entire care pathway.
10. If your AI tool needs a champion in every department, it is not infrastructure. It is a feature. Infrastructure does not ask for approval.
11. AI compresses the distance between observation and action. Organizations built on delay will struggle the most.

## Workforce

12. AI is not replacing healthcare professionals, yet. It's augmenting them. But the 'yet' keeps getting shorter. In narrow tasks, AI already outperforms physicians.
13. The real disruption is not AI diagnosing better than doctors.
It is AI reshaping which decisions require a doctor at all.
14. Prepare for the physician retirement wave. When experienced clinicians leave, decades of expertise leave with them. AI is one of the few tools that can preserve it.
15. Staff shortages demand AI support. Kaiser's ambient scribes saved 15,700 physician hours in one year. At Mass General Brigham, burnout dropped from 53% to 31% after 84 days. This isn't theoretical anymore.
16. Healthcare needs people who combine clinical experience, operational understanding, and AI knowledge. With unsanctioned AI use spreading, this hybrid expertise is critical for governance, not just implementation.
17. AI isn't a tech upgrade. It's a workforce lifeline. Clinicians are leaving. AI can ease workloads and restore time for patient care.
18. Leaders who invest in AI as a workforce strategy will retain talent and attract future professionals.

## Agentic AI

19. AI agents are operational. The FDA deployed agentic AI for all staff in December 2025. Microsoft and Epic launched healthcare agent platforms. This is no longer a forecast.
20. Agentic AI breaks the governance playbook. Traditional oversight assumes a human reviews every output. Agents act on their own. If you can't answer 'who's watching the agent?', you're not ready.
21. The human oversight mandate will collide with autonomous AI. The EU requires human oversight for high-risk systems. Agentic AI is designed to minimize it. This tension will define the next decade.
22. Agentic AI platforms will reshape competitive dynamics. Health systems building on major vendor platforms will move fast. AI startups that don't integrate will get shut out.

## Back Office & Operations

23. The real AI revolution is in the back office. While everyone debates clinical AI, administrative tools generate the fastest returns.
24. Ambient AI is healthcare's first breakout category. $600M in 2025. Two unicorns. Thousands of physicians. The growth story is clear. The accountability story isn't.
25.
AI in operations saves more money than AI in diagnosis. Reducing inefficiencies is an easier win than changing clinical decision-making.
26. Rising compensation pressures call for efficiency. As costs climb, AI-driven automation becomes essential to sustain operations.
27. Hospitals expect ROI on AI within a year. That's faster than most drugs, treatments, or new hires.

## Performance & Validation

28. AI outperforms humans in more tasks every year. But the biggest barrier isn't accuracy. It's liability. The EU's revised Product Liability Directive removed the €85M cap and presumes defectiveness if AI violates safety law.
29. AI is excellent at 'what' but terrible at 'why.' Understanding causation, not just correlation, remains the holy grail.
30. Open-source initiatives drive innovation. Just like in computer vision, open datasets and global competitions are accelerating healthcare AI.
31. Clinicians trust AI only if they understand how it works. Black-box models face skepticism in life-and-death scenarios.
32. Black-box AI is a clinical risk. Opaque decision-making undermines trust and allows errors to go unnoticed.
33. Explainability isn't just for techs and clinicians. Patients deserve to understand how AI influences their care. The EU AI Act now makes this law.
34. AI can detect rare diseases. But only if someone listens. Too many flagged conditions go ignored.
35. Evaluate rigorously. Define CTQs, run prospective multi-center validations, monitor continuously. The FDA and EU now mandate lifecycle management. This used to be a recommendation. Now it's regulation.
36. Real-world evidence is the missing piece. Less than 2% of FDA-cleared AI devices are backed by randomized trials. Most healthcare AI runs on faith more than evidence.
37. Clinical performance isn't enough. We've over-indexed on sensitivity and specificity. What matters: did care improve? Did we save time, money, lives?
38. As AI models commoditize, the gap between vendors won't be in the architecture.
It'll be in the three seconds between a finding and the next click.

## Deployment Risks

39. AI hallucinations are an active medical risk. Ambient scribe studies found 1-3% hallucination rates in clinical notes. At scale, that's tens of thousands of fabricated entries per year.
40. Ambient scribes don't just write notes. They optimize billing. Payers fight back with auto-downcoding. We're in an AI-vs-AI billing arms race.
41. AI makes hospitals a bigger target for cyberattacks. More automation, more vulnerabilities. Agentic AI introduces new attack surfaces.
42. An AI that works in the lab could fail in the hospital. Real-world conditions ruin pristine academic models.
43. AI that works in one hospital could fail in another. The microsystem rules all.
44. The best AI insights are ignored if they arrive at the wrong time. If a doctor sees it too late, it's useless.
45. Clinicians still trust a colleague's gut feeling over AI. They might be right. But in more and more cases, AI outperforms humans.
46. AI-powered robotic surgery and pathology are advancing. Adoption remains slow. Trust and workflow fit are key barriers.
47. AI-powered hospital bed management could save lives. It's underused. Too much data, too little action.

## Patients, Regulation & Ethics

48. AI regulation is splitting in two. The EU classifies most medical AI as high-risk with fines up to 7% of global turnover. The US is deregulating, loosening FDA oversight and preempting state laws. A widening fork.
49. Who's responsible for an AI mistake? The EU is answering: everyone in the chain. Joint and several liability for manufacturers, integrators, and AI providers. The US is still largely silent.
50. Ethical AI guidelines look good on paper. Proving you follow them is another matter. Policies exist. Proof of execution doesn't.
51. 'Ethical AI' is meaningless without accountability. It's often a branding exercise, not a real standard.
52. AI governance is a survival requirement, not an ethics exercise.
Fines, liability caps removed, lifecycle mandates. Organizations without governance aren't at ethical risk. They're at legal and financial risk.
53. Patients play a role in AI adoption. Ask not just if AI is used, but how and why. Demand transparency.

## Data

54. Healthcare's data problem isn't quantity. It's context. Models choke on data that lacks real-world meaning.
55. AI doesn't need more data. It needs better data. More isn't always better when it's full of bias and errors.
56. Synthetic data could unlock what real data can't. It helps overcome privacy concerns, reduce bias, and improve models. If used responsibly.
57. Wearables generate oceans of patient data. The FDA's January 2026 guidance loosened wearable oversight, accelerating the flood. Without AI to filter signal from noise, clinicians won't just drown. They'll stop looking.
58. Invest in data infrastructure. Resolve issues of data ownership and trust to unlock AI's potential in healthcare.

## Equity & Bias

59. Most AI models are trained on high-income country data. They perform well there. They perform poorly where they're needed most.
60. AI promises to improve healthcare access. 4.5 billion people still lack essential services. The implementation gap in LMICs isn't closing fast enough.
61. Global AI equity requires local ownership, not just access. The WHO warned explicitly: AI must not become a new frontier for exploitation.
62. Bias in AI doesn't just create unfairness. It kills. Patients from underrepresented groups suffer first.

## Shadow AI & Governance

63. Shadow AI is healthcare's governance crisis. Staff adopt unapproved tools to cope with burnout. Organizations that survive this will channel the energy, not fight it.
64. EHR vendors are becoming AI platforms. For health systems, it's the path of least resistance. For AI startups, it's existential. Distribution beats technology.
65. Context engineering is the new competitive edge.
AI quality depends less on the model and more on what information it receives.

## Collaboration & Specialties

66. Effective AI adoption requires technical, medical, regulatory, and operational teams working together from concept to product.
67. Radiology holds 75-80% of all FDA-authorized AI devices because pixels were the easy win. The harder, higher-impact work lies beyond imaging.
68. Radiology leads in FDA-cleared AI devices. But a lead only holds if you keep pushing. Other specialties are catching up. Coasting is how you lose the advantage you created.
69. AI won't replace specialists. But it will equip generalists to do what only specialists could. A Lancet trial showed AI stethoscopes increased heart failure detection 2.3x and atrial fibrillation 3.5x in primary care. That's the pattern.
70. AI is redefining medical specialties. Traditional boundaries between fields are blurring.

## Prediction & Secondary Use

71. AI accelerates the secondary use of health data. Research benefits faster and with fewer regulatory constraints than clinical applications.
72. AI is pushing healthcare toward preemptive intervention. 71% of US hospitals now run at least one predictive AI tool. It's becoming default infrastructure.
73. AI will predict system failures: staffing shortages, supply chain disruptions, capacity crises. The question is whether decision-makers act on early warnings.

## Interoperability

74. AI can work around interoperability failures. Translating, standardizing, connecting siloed systems. But it's a workaround, not a fix. Structural reform is still needed.

## Emerging Issues

75. The AI hype cycle harms healthcare innovation. Unrealistic expectations lead to disillusionment.
76. The next healthcare privacy crisis will be AI-driven. Model extraction attacks can reconstruct patient data we thought was protected.
77. AI is forcing a rethink of medical education. If residents don't write notes because AI scribes do it, do they develop clinical reasoning?
78.
Medicine needs minds, not just machines. AI can assist, but it can't replace a well-trained mind. Machines follow patterns. Humans catch what doesn't fit.
79. LLMs can't keep pace with the research frontier on static training data alone. RAG and agentic AI are engineering around this. The problem isn't solved, but it's shrinking.
80. AI-driven patient engagement sounds great until no one uses it. Chatbots are ignored just like appointment reminders.
81. AI-based sepsis prediction is promising. False alarms make it frustrating. Precision is everything when minutes matter.
82. AI triage chatbots are helpful but dangerously imperfect. No one wants to be misdiagnosed by a bot.

## AI-Human Teamwork

83. Let AI handle what it's best at, so humans can focus on what they do better.
84. AI started as a tool. In 2025, it became a coworker. The shift from tool to collaborator didn't take a decade. It took two years.
85. Everyone monitors AI for drift. Almost nobody monitors what AI does to the humans using it. Over-reliance, deskilling, shifted judgment. We audit the algorithm. We should be auditing the clinician too.

## Future Direction

86. The future of healthcare AI is multimodal. Single-source models are being replaced by systems that integrate text, images, speech, and sensor data.
87. AI doesn't need to be perfect. Just better than humans. But defining 'better' is tricky when lives are at stake.
88. AI will never fully replace human oversight. At least not while humans are held legally responsible for its mistakes.
89. AI is changing the doctor-patient relationship. The examination room now has a third participant: the algorithm.

## Culture & Change

90. AI adoption requires time to learn, not just tools. Organizations that invest in education now will have a competitive edge.
91. Be clear on the why. Clarify why using AI is core to your organization.
92. Encourage people to play. But inside guardrails. If you don't provide a sandbox, people will play in the wild.
93.
Make new ways of working tangible. Show how AI augments roles.
94. Reward your people for engaging with AI. Recognition drives adoption faster than mandates.
95. The AI talent war in healthcare is over. AI expertise got absorbed by platform vendors. The bottleneck shifted from hiring engineers to training clinicians. The pitch isn't 'come to healthcare.' It's 'build healthcare into the platforms.'
96. Innovate boldly. Challenge the status quo. Defy comfort zones.

## One More Thing

97. The end goal? We stop saying "AI." It's just how healthcare works.
With contributions from …
… and others