The healthcare industry is experiencing an AI revolution. From diagnostics to workflow optimisation, AI promises transformative change. But beneath the hype, a sobering reality remains: more than 80% of AI projects fail, squandering billions in resources and eroding trust.

In a sector where stakes are literally life and death, that failure rate demands scrutiny. Are we asking the right questions about AI’s role in healthcare? Evidence suggests we aren’t.

AI investment is booming, but are outcomes keeping up?

The global AI investment landscape is surging. In 2024 alone, private investment in AI reached a record $252.3 billion, with healthcare as one of the biggest beneficiaries. Regulatory progress reflects this growth: the number of FDA-approved AI-enabled medical devices has soared from 6 in 2015 to 223 in 2023.

And yet, despite these impressive numbers, the reality on the ground is more complex. AI’s pace of advancement has outstripped traditional development and governance frameworks, exposing cracks in adoption strategies.

[Figure: Global Private Investment in AI by Focus Area, 2023 vs. 2024. Medical and healthcare lead all focus areas, with a significant year-over-year increase. Source: 2025 AI Index Report]

Why AI projects fail: A familiar pattern of missteps

A recurring issue across industries is the “build it, and they will come” mindset. Many organisations focus on technological novelty without considering clinical fit or long-term integration.

This tendency is especially dangerous in healthcare. Models often reproduce the patterns of their historical training data rather than reliably predicting future outcomes, a critical flaw when accuracy can affect lives. Too often, teams chase hype without deeply understanding the clinical workflows and operational challenges they hope to improve.

[Table: Leading Root Causes Resulting in the Failure of AI Projects, including lack of data readiness, poor problem definition, and underinvestment in infrastructure. Source: Ryseff, de Bruhl, and Newberry (2024)]

Problem definition: The first and most common failure point

Poor problem definition is one of the top reasons AI projects falter. Healthcare organisations must ask: Is this truly a problem AI can solve?

AI excels at pattern recognition and prediction but struggles with causal reasoning and contextual understanding. Expecting AI to replace human clinical judgment is not only premature, it’s dangerous. Sophisticated models remain only as good as the data on which they are trained.
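
To make that concrete, here is a toy sketch in Python (entirely synthetic data; `severity` and `ward_id` are hypothetical names, not a real clinical feature set). A model that latches onto a correlate rather than a cause looks accurate in one setting and falls apart in another:

```python
# Toy sketch (synthetic data, hypothetical variables, not a clinical model):
# a classifier learns a spurious shortcut that holds during training and
# breaks when the care setting changes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, corr):
    """`severity` truly drives the outcome; `ward_id` is only a proxy,
    correlated with severity at strength `corr`."""
    severity = rng.normal(size=n)
    ward_id = corr * severity + np.sqrt(1 - corr**2) * rng.normal(size=n)
    outcome = (severity + 0.3 * rng.normal(size=n) > 0).astype(int)
    return ward_id.reshape(-1, 1), outcome  # the model sees only the proxy

# Train where the proxy tracks severity closely...
X_train, y_train = make_cohort(5000, corr=0.9)
model = LogisticRegression().fit(X_train, y_train)

# ...then evaluate at a "site" where the shortcut no longer holds.
X_same, y_same = make_cohort(5000, corr=0.9)
X_shift, y_shift = make_cohort(5000, corr=0.1)
print(f"accuracy, same setting:    {model.score(X_same, y_same):.2f}")   # high, roughly 0.83
print(f"accuracy, shifted setting: {model.score(X_shift, y_shift):.2f}") # near chance
```

Nothing in the model "knows" that severity, not the ward, drives the outcome; that causal knowledge has to come from clinicians and study design, not from the algorithm.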

The hidden risk: Data quality and bias

Gartner estimates that 85% of AI models fail due to poor data quality.

This risk is magnified in healthcare, where data is fragmented across electronic health records (EHRs), labs, and imaging systems, often inconsistently structured and full of gaps.

[Figure: Heat Map of Healthcare Tasks, NLP/NLU Capabilities, and Evaluation Dimensions, showing uneven evaluation effort (internal validation, external validation, human oversight) across AI-enabled clinical applications. Source: JAMA (2025)]

Without rigorous data curation, models may reinforce existing biases or generate clinically unsafe recommendations. Unfortunately, many organisations skip the groundwork of data readiness, diving into implementation before establishing trustworthy data pipelines, a foundational misstep akin to building a skyscraper on sand.
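
As a starting point for that groundwork, a data-readiness audit can be as simple as the following sketch (pandas-based; the EHR extract, column names, and codings are illustrative assumptions, not a standard schema):

```python
# A minimal data-readiness audit over a hypothetical EHR extract.
import pandas as pd

def readiness_report(df: pd.DataFrame, label_col: str) -> pd.DataFrame:
    """Surface the basics that sink models later: gaps, duplicates,
    inconsistent coding, and label imbalance."""
    report = pd.DataFrame({
        "missing_pct": df.isna().mean().round(3) * 100,  # share of gaps per column
        "n_unique": df.nunique(),                        # hints at inconsistent coding
        "dtype": df.dtypes.astype(str),
    })
    print(f"duplicate rows: {df.duplicated().sum()}")
    print("label balance:")
    print(df[label_col].value_counts(normalize=True))
    return report

# Illustrative rows showing the classic traps in one tiny extract.
ehr = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 4],
    "hba1c": [6.1, None, 48.0, 6.5, 6.5],     # mixed units: % vs mmol/mol
    "smoker": ["Y", "yes", "1", None, None],  # three encodings plus gaps
    "readmitted_30d": [0, 0, 1, 0, 0],
})
print(readiness_report(ehr, label_col="readmitted_30d"))
```

Even on five rows, the audit surfaces a duplicate record, a unit mix-up in `hba1c` (6.5% and 48 mmol/mol describe roughly the same patient in different units), and three different encodings of "smoker"; exactly the kind of fragmentation across source systems described above.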

Measuring AI’s value: It’s more than accuracy

Many healthcare AI projects boast about improved diagnostic precision or efficiency gains but stop short of asking more holistic questions (a short sketch after this list shows one way to begin answering the last of them):

  • Does this tool improve patient outcomes?
  • Does it ease the clinician’s workload or add to it?
  • Is it aligned with our principles of equity and quality of care?
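
One hedged sketch of what "more than accuracy" can look like in practice: take a single headline metric and break it down by patient subgroup, since a tool that performs well on average can still under-serve a minority group. The data, group labels, and error rates below are synthetic assumptions for illustration only:

```python
# Break one headline metric (sensitivity) down by subgroup.
# All data here is synthetic; "A"/"B" are placeholder group labels.
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=n)

# Simulate a model that misses more true cases in the under-represented group.
miss_rate = np.where(group == "B", 0.35, 0.10)
y_pred = np.where((y_true == 1) & (rng.random(n) < miss_rate), 0, y_true)

print(f"overall sensitivity: {recall_score(y_true, y_pred):.2f}")  # looks fine, ~0.85
for g in ["A", "B"]:
    mask = group == g
    print(f"  group {g}: {recall_score(y_true[mask], y_pred[mask]):.2f}")
```

A gap like the one this prints, where overall sensitivity looks healthy while group B lags well behind, is precisely the signal the equity question above is meant to surface.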

The 2025 AI Index reports that while 78% of organisations used AI in 2024, most remain in early adoption phases and report modest financial returns.

Building an adaptive AI strategy in healthcare

To succeed, healthcare organisations must shift from rigid procurement cycles to agile, test-and-learn strategies. Budgeting for iteration, learning, and even failure should be part of the AI roadmap. Transparency is also key; organisations must be willing to report AI limitations, not just successes.

Most importantly, healthcare leaders, regulators, and AI developers must align on a new set of rigorous evaluation criteria. This means being honest about:

  • The problem AI is solving
  • The data quality and its biases
  • The measurable impact on patient care
  • The organisation’s capacity to adapt

Asking the right questions is the first step to success

The promise of AI in healthcare is too important to squander on poor execution. To unlock its full potential, the industry must move beyond the hype and start asking tough, fundamental questions.

Getting AI right in healthcare isn’t about building more; it’s about building smarter, with clinical relevance, trust, and long-term impact at the centre.


References