Walk through any health tech conference and you’ll hear it: AI is transforming healthcare. Algorithms can read scans, detect sepsis, and even predict who is likely to miss their appointments. The rhetoric is intoxicating.
Yet behind the buzzwords lies an uncomfortable truth: the majority of AI systems being piloted or sold into hospitals have never been peer-reviewed, independently validated, or clinically tested in the real world. It is a credibility gap disguised as innovation.
The Evidence Vacuum in Healthcare AI
In traditional medicine, no drug, device, or diagnostic tool enters practice without rigorous evidence: controlled trials, peer review, and post-market surveillance. But healthcare AI operates in an ethical and scientific grey zone.
A systematic review identified only 44 peer-reviewed studies on AI in healthcare as of mid-2023, highlighting that, while academic validation is increasing, it remains sparse.
The Medical Futurist (TMF) mapped the expanding universe of AI use cases in healthcare, from speculative “on the horizon” ideas to “safe bets” backed by strong evidence.
Source: The Medical Futurist Institute
Only a handful of applications, such as AI-assisted radiology analysis and clinical documentation, have reached the “safe bet” category. Most others, from predictive analytics to digital twins, remain speculative or risky, underscoring a systemic validation deficit.
Regulatory Approval ≠ Clinical Validation
Regulatory approval often gets mistaken for evidence of benefit. The U.S. FDA has cleared more than a thousand AI-enabled devices, but most clearances were based on “substantial equivalence”: proving a product works like something already on the market, not necessarily better.
In other words, AI may be entering healthcare without proving it improves patient outcomes.
And as the World Economic Forum’s Future-Proofing AI in Health report bluntly states: “Without deliberate investment in high-quality evidence, we risk overlooking the most valuable use cases and funding the most dangerous ones.”
The Trust Deficit: When AI Fails to Earn Confidence
This crisis of validation feeds an even more profound crisis of trust. The KPMG and University of Queensland Global Trust in AI Study found that while 85% of people recognise AI’s potential, only half believe the benefits outweigh the risks.
Healthcare remains the most trusted AI domain, but only when clinicians retain oversight. A 2025 JAMA Network Open study found that two-thirds of Americans don’t trust their health systems to use AI responsibly, with distrust running deepest among those who have experienced bias in care.
Trust isn’t just ethical; it’s operational. Without peer review and transparency, AI becomes a black box, forcing clinicians and patients to take its recommendations on faith.
Regulatory actions that would increase trust in AI and the likelihood clinicians would adopt AI tools
Source: American Medical Association – Physician Sentiment Report
Why the Evidence Gap Persists
So why does the AI evidence vacuum remain so wide? There are three main culprits:
- Commercial secrecy – Vendors often hide training data and validation methods behind proprietary walls. Transparency clashes with the venture capital mantra of “move fast and sell first.”
- Academic inertia – Clinical researchers are still adapting trial designs to evaluate algorithms that are continually evolving.
- Policy lag – Most investment flows into product development, not the data and governance frameworks required for safe deployment.
The result? A perverse incentive loop where startups overpromise, hospitals pilot untested tools for publicity, and investors reward speed over rigour. In this environment, peer review becomes an inconvenience rather than a necessity.
The IMPACTS framework evaluates AI across seven domains — from interoperability and governance to cost, safety, and trust — offering a roadmap for responsible implementation.
Source: The Medical Futurist Institute
Rebuilding Trust Through Evidence
To restore confidence, peer review must become healthcare AI’s immune system, filtering out false positives before they infect clinical practice. Three key actions can help rebuild this foundation:
- Mandate transparency – Developers should disclose datasets, architectures, and performance metrics across diverse populations. If intellectual property is a concern, anonymised audit frameworks should be mandatory.
- Invest in real-world validation – AI must face the same evidentiary burden as a new therapeutic. Without rigorous testing in live clinical environments, its promise will crumble.
- Ensure diversity and governance – Validation must include diverse datasets and oversight structures to detect and prevent algorithmic bias, as the sketch below illustrates.
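To make that last point concrete, here is a minimal sketch of what a subgroup performance check might look like. It is illustrative only: the toy dataset, the field names (ethnicity, outcome, prediction), and the audit_subgroups helper are assumptions invented for this example, not part of any framework or product mentioned above.

```python
# Illustrative only: compute sensitivity and specificity per demographic
# subgroup on a held-out validation set. Field names and data are hypothetical.
from dataclasses import dataclass


@dataclass
class SubgroupReport:
    group: str
    n: int
    sensitivity: float
    specificity: float


def audit_subgroups(records, group_key="ethnicity",
                    label_key="outcome", pred_key="prediction"):
    """Group records by a demographic attribute and report per-group metrics."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)

    reports = []
    for name, rows in groups.items():
        tp = sum(1 for r in rows if r[label_key] == 1 and r[pred_key] == 1)
        fn = sum(1 for r in rows if r[label_key] == 1 and r[pred_key] == 0)
        tn = sum(1 for r in rows if r[label_key] == 0 and r[pred_key] == 0)
        fp = sum(1 for r in rows if r[label_key] == 0 and r[pred_key] == 1)
        sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
        specificity = tn / (tn + fp) if (tn + fp) else float("nan")
        reports.append(SubgroupReport(name, len(rows), sensitivity, specificity))
    return reports


if __name__ == "__main__":
    # Toy data standing in for a demographically representative validation set.
    data = [
        {"ethnicity": "Group A", "outcome": 1, "prediction": 1},
        {"ethnicity": "Group A", "outcome": 0, "prediction": 0},
        {"ethnicity": "Group B", "outcome": 1, "prediction": 0},
        {"ethnicity": "Group B", "outcome": 0, "prediction": 0},
    ]
    for rep in audit_subgroups(data):
        print(f"{rep.group}: n={rep.n}, "
              f"sensitivity={rep.sensitivity:.2f}, specificity={rep.specificity:.2f}")
```

A marked gap between subgroups on a representative hold-out set is precisely the kind of signal an oversight structure should surface before deployment, not after.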
Insights from philosophy and sociology highlight that trust in AI depends not only on system performance, but also on the ethics, transparency, and intentions of its creators.
Source: Journal of Medical Internet Research (2025)
Caveat Emptor
AI may one day outperform humans at many medical tasks, but until it can pass the simplest test of scientific scrutiny, it hasn’t earned our trust.
Peer review isn’t bureaucracy. It’s healthcare’s moral contract with the public.
Without it, we risk replacing the clinical trial with the press release and turning hospitals into unwitting laboratories.
Authored by Tom Varghese, Global Product Marketing & Growth Manager at Orion Health.
References
- Dorn, Spencer. 2024. “3 Ways to Manage AI Hype in Healthcare.” Forbes, September 9, 2024.
- Kaushik, Gaurav, and Rachana Kolluru. 2025. “AI and Real-World Data: Advancing Healthcare.” Veradigm Blog, April 8, 2025.
- Mateen, Bilal. 2025. “Future-Proofing AI in Health: Why We Must Prioritize Investments in Evidence, Infrastructure and Equity.” World Economic Forum / VaccinesWork, April 4, 2025.
- Mesko, Bertalan. 2025. “Navigating 20 AI Use Cases in Healthcare: From Hype to Evidence.” The Medical Futurist, April 22, 2025.
- Nong, Paige, and Jodyn Platt. 2025. “Patients’ Trust in Health Systems to Use Artificial Intelligence.” JAMA Network Open 8 (2): e2460628.
- Rahman, Md. Ashrafur, Evangelos Victoros, Julianne Ernest, Rob Davis, Yeasna Shanjana, and Md. Rabiul Islam. 2024. “Impact of Artificial Intelligence (AI) Technology in Healthcare Sector: A Critical Evaluation of Both Sides of the Coin.” Clinical Pathology 17: 1–5.
- Sagona, Madeline, Tinglong Dai, Mario Macis, and Michael Darden. 2025. “Trust in AI-Assisted Health Systems and AI’s Trust in Humans.” npj Health Systems 2 (10): 1–5.
- The University of Auckland. 2025. Maintaining Patient Trust as AI’s Role in Healthcare Grows. Presentation by Rosie Dobson, Melanie Stowell, and Robyn Whittaker, April 14, 2025.
- The University of Queensland and KPMG International. 2023. Trust in Artificial Intelligence: A Global Study. Brisbane: University of Queensland; Sydney: KPMG Australia.
- World Economic Forum. 2025. Future-Proofing AI in Health: Why We Must Prioritize Investments in Evidence, Infrastructure and Equity. Geneva: WEF.