The question seems simple, yet it captures one of the most important tensions in modern healthcare: trust in AI versus trust in clinicians.
In Germany, 45% of people say they consult an AI chatbot before seeking medical advice. 71% of people view AI in medicine positively. More than half say they understand chatbot responses better than conventional internet searches (Beneker, 2026).
On the surface, that looks like a vote of confidence in “Dr AI”.
And yet, when researchers tested how people perceive physicians who disclose they use AI, the results were sobering. In a large US study, doctors who described using AI were rated as less competent, less trustworthy, and less empathetic than those who did not mention it (Reis et al., 2025). Willingness to book an appointment fell when AI was explicitly referenced.
Patients seem ready to trust the machine itself, but they are cautious about doctors who use it.
AI adoption in healthcare is accelerating, but trust is lagging.
Health systems are embedding AI at pace.
In the United States, over a thousand AI-enabled medical devices have been cleared for clinical use. Across South East Asia, AI tools now power symptom checkers, mental health apps and clinical triage systems.
Yet trust is not keeping up with adoption.
In Singapore, 80% of residents report using AI, but trust drops sharply in sensitive domains such as mental health. In the United States, nearly 60% of Americans report feeling uneasy about AI-assisted diagnosis (World Economic Forum, 2025).
This isn’t primarily a technical problem.
It’s a relational one.
AI risk and real-world incidents: what the data shows
Before we talk about trust, we need to talk about risk. The OECD’s AI Incidents and Hazards Monitor (AIM) tracks reported AI-related harms across sectors.
Source: OECD AI Incidents and Hazards Monitor (AIM)
As shown in the graph, incidents have steadily increased across multiple industries, including healthcare, financial services and digital security.
Healthcare, drugs and biotech feature frequently in reported incidents. That does not necessarily mean AI is unsafe; rather, as adoption grows, failures become more visible and more consequential.
Trust cannot grow in the absence of accountability.
Industry differences in AI use and organisational support
AI uptake also varies significantly by sector.
Source: KPMG Trust, Attitudes and Use of AI Global Report (2025)
Information Technology leads in both AI use (85%) and organisational support (76%). Healthcare and social assistance sit much lower on the scale, with 45% reporting AI use at work and 58% reporting organisational support.
This difference is important.
Healthcare is not simply another digital industry. It operates in high-stakes environments where human lives, ethical judgement and relational continuity are central.
Lower adoption rates may reflect caution rather than resistance.
Responsible AI and national readiness: Does governance build trust?
Trust in healthcare has never depended on patients’ understanding every mechanism at play.
As highlighted in the npj Health Systems perspective, trust involves vulnerability and uncertainty. Patients do not need to understand how paracetamol works at a molecular level to trust it. They rely on institutions, regulation, professional standards and the clinician who prescribes it.
AI changes how that trust works.
Many models are opaque, and post-hoc explanations are often approximations rather than faithful accounts of how a model reached its output. In one study of explainable AI in obstetric ultrasound (Nicolson et al., 2025), providing explanations did not reliably build trust and sometimes degraded performance: clinicians felt more confident, but did not always rely on the system appropriately.
Governance, therefore, becomes critical.
Source: Global Index on Responsible AI (GIRAI)
The GIRAI ranks countries on responsible AI governance, human rights protections and institutional capability. The Netherlands, Germany and Ireland lead the index, while Australia and Canada score lower on specific governance dimensions.
The message is clear: responsible AI is not only a matter of technical capability. It also depends on having the right frameworks in place, acting on them, and protecting human rights.
New Zealand released its first national AI strategy, titled Investing with Confidence, in July 2025. However, we do not yet have a dedicated national health AI strategy. Health AI policy is being developed through broader AI and health system programmes.
If trust is the foundation of healthcare, then the way we organise and communicate our approach to AI is just as important as the technology itself.
The clinician burden: when AI misfires
Trust is also shaped at the bedside.
A Scientific American report describes nurses pressured to act on sepsis alerts generated by algorithms that misfired, with overrides and accountability resting squarely on the human clinician.
When alerts cannot explain themselves, the clinician still carries the risk.
Trust in AI cannot come at the expense of trust in clinicians.
Why digital trust determines healthcare effectiveness
The World Economic Forum argues that digital trust is a precondition for effective healthcare. If patients feel uneasy, they may disengage, even when the care itself is accurate.
The New Zealand Medical Journal reinforces this: maintaining patient trust requires transparent governance, robust approvals and equitable deployment.
Overlay geopolitics, and the picture becomes more complex. The 2025 Government AI Readiness Index frames AI as a national capability, measured in compute power, infrastructure and policy maturity.
But healthcare is where the “people-first” principle is tested most strongly.
If AI is introduced primarily as a cost-saving device or as a way to reduce human interaction, patients will feel that.
If AI is presented as a way to support clinical judgement, make reasoning clearer, and give more time for conversation, it is much more likely to be accepted.
So, who do we trust more: AI or our doctor?
Most people do not want to choose. They want a doctor who uses good tools wisely.
They want assurance that AI systems are rigorously validated and equitably designed. They want clarity about accountability if something goes wrong. And they want to know that AI remains visibly subordinate to clinical judgement.
AI does not need to be loved. It needs to be demonstrably safe, responsibly governed and clearly aligned with patient wellbeing.
Trust will not be built through marketing claims or performance metrics alone. It will be built into consultation rooms, into governance frameworks, and into the everyday decisions of clinicians who know when to follow the prompt and when to override it.
In the end, the currency of healthcare is not data. It is trust.
And if we get that wrong, no amount of algorithmic accuracy will compensate.
Building trustworthy AI-enabled health systems
At Orion Health, we believe AI should be built on strong data, clear governance, and workflows that support clinicians rather than replace them.
If you’re exploring how to embed AI safely within interoperable, standards-based health data platforms, learn more about our Amadeus AI platform.
Authored by Tom Varghese, Global Product Marketing & Growth Manager at Orion Health.
References
- Beneker, Christian. “Germans Trust ‘Dr AI’ More Than Doctors Who Use It.” Medscape, February 18, 2026.
- Dobson, Rosie, Melanie Stowell, and Robyn Whittaker. “Maintaining Patient Trust as Artificial Intelligence’s Role in Healthcare Grows.” New Zealand Medical Journal 139, no. 1629 (February 13, 2026): 94–101.
- European Commission. Regulation (EU) 2024/1689 of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). 2024.
- Hoffman, Karen M., et al. “Racial Bias in Pain Assessment and Treatment Recommendations, and False Beliefs about Biological Differences between Blacks and Whites.” Proceedings of the National Academy of Sciences 113, no. 16 (2016): 4296–4301.
- Nicolson, Angus, Elizabeth Bradburn, Yarin Gal, Aris T. Papageorghiou, and J. Alison Noble. “The Human Factor in Explainable Artificial Intelligence: Clinician Variability in Trust, Reliance, and Performance.” npj Digital Medicine 8 (2025): 658.
- Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366, no. 6464 (2019): 447–453.
- Oxford Insights. Government AI Readiness Index 2025. Oxford: Oxford Insights, January 2026.
- Reis, Moritz, Florian Reis, and Wilfried Kunde. “Public Perception of Physicians Who Use Artificial Intelligence.” JAMA Network Open 8, no. 7 (July 17, 2025): e2521643.
- Sagona, Madeline, Tinglong Dai, Mario Macis, and Michael Darden. “Trust in AI-Assisted Health Systems and AI’s Trust in Humans.” npj Health Systems 2 (2025): 10.
- Schellmann, Hilke. “AI Enters the Exam Room: When Alerts Misfire or Can’t Explain Themselves, Nurses Still Carry the Risk.” Scientific American, February 17, 2026.
- World Economic Forum. “The Trust Gap: Why AI in Healthcare Must Feel Safe, Not Just Be Built Safe.” December 5, 2025.