Written by Dr. Chris Hobson, Global Chief Medical Officer at Orion Health
As a primary care physician, what am I looking for when I use AI?
What would pique my interest to such an extent that I would devote my most scarce and valuable resource, namely my time, to learning and applying AI tools in my day-to-day practice?
We know, from experience working with physicians on the adoption of other technologies, that they typically seek answers to the following five key questions:
- What is it I need to do?
- What’s in it for me?
- Can I trust it?
- What training is available?
- Have you covered privacy, security, and safety?
Let’s examine how those questions unfold when it comes to the adoption of AI in frontline healthcare.
What Should Primary Care Physicians Do to Start Using AI?
Choose a tool to work with, possibly one recommended by your organisation or available within your EMR. Then select a facet of your practice that you are interested in improving and that the tool may help with. For instance, a predictive model could identify patients at risk of certain conditions while a disease is still at an early stage, when timely action can prevent it from progressing.
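For readers curious about what sits under the hood of such a tool, the sketch below shows the general shape of risk stratification. It is illustrative only: the features (age, HbA1c, blood pressure), the synthetic data, the model, and the review threshold are all hypothetical placeholders, not a validated clinical model.

```python
# Illustrative only: a toy risk-stratification sketch, not a clinical model.
# Features, data, and threshold are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical patient features: age (years), HbA1c (%), systolic BP (mmHg)
X = np.column_stack([
    rng.normal(55, 12, 500),
    rng.normal(6.0, 1.2, 500),
    rng.normal(130, 15, 500),
])
# Synthetic labels standing in for "went on to develop the condition"
y = (0.04 * X[:, 0] + 0.6 * X[:, 1] + 0.02 * X[:, 2]
     + rng.normal(0, 1, 500)) > 9.0

model = LogisticRegression().fit(X, y)

# Flag patients whose predicted risk exceeds a (hypothetical) review threshold
risk = model.predict_proba(X)[:, 1]
flagged = np.flatnonzero(risk > 0.7)
print(f"{flagged.size} patients flagged for clinician review")
```

In a real product the model would be trained and validated on clinical data and the threshold set through governance processes, but the workflow is the same: score each patient and surface those above a threshold for clinician review.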
AI could also reduce administrative burden or improve the management of patient requests, such as email correspondence. Another hoped-for benefit is a reduction in burnout risk; one way this might be achieved is by using an advanced chatbot to handle basic patient inquiries and requests for information.
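To give a flavour of how such a chatbot might safely divide the labour, here is a deliberately simple, rule-based sketch. Real deployments use far more capable language models and require clinical governance; the intents and canned replies below are hypothetical placeholders.

```python
# Illustrative only: a toy rule-based handler for routine patient requests.
# The intents and responses are hypothetical placeholders.

ROUTINE_RESPONSES = {
    "opening hours": "The practice is open 8am-6pm, Monday to Friday.",
    "repeat prescription": "You can request repeat prescriptions via the patient portal.",
    "test results": "Results are released to the portal once reviewed by your doctor.",
}

def handle_inquiry(message: str) -> str:
    """Answer routine questions; escalate anything else to practice staff."""
    text = message.lower()
    for intent, reply in ROUTINE_RESPONSES.items():
        if intent in text:
            return reply
    # Anything unrecognised, including clinical questions, goes to a human.
    return "I'll pass this to the practice team, who will get back to you."

print(handle_inquiry("How do I order a repeat prescription?"))
```

The key design point survives even in this toy version: the tool answers only recognised routine requests and escalates everything else, including anything clinical, to a human.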
Regardless of the tool you select, it should be user-friendly and capable of addressing the problem you are asking it to solve.
What Are the Benefits of AI in Daily Practice?
There needs to be a clear line of sight regarding how the tool enhances my practice without interfering with my daily responsibilities. In other words, it should integrate seamlessly into my workflow and deliver tangible improvements. Ideally, it should let me see more patients, provide them with better care, and do so with less stress.
If costs are involved, they must be readily justified by the expected financial benefits.
Can I Trust AI Tools in Healthcare? Here’s How to Decide.
What are the governance mechanisms in place?
The justification for physician trust in AI cannot be universal; it must be evaluated case by case. Successful design and implementation of AI require strong governance and administrative mechanisms to mitigate the risks it introduces. As a physician, your trust in any specific AI system should be proportional to the evidence of its reliability, transparency, and alignment with clinical best practice. While we would like to understand how an AI feature arrived at a particular recommendation, this is not always possible. Given that reality, we need to know that the tool has been thoroughly tested, commensurate with the level of risk involved.
When AI systems show consistent credibility, value, and reliability over time, physicians are justified in their trust.
The European Union’s AI Act, whose first prohibitions took effect in February 2025, bans certain AI applications relevant to healthcare, including systems that could manipulate or mislead patients or exploit vulnerable patient groups. Such regulatory frameworks provide external validation that can help justify physician trust in compliant AI systems, while identifying applications that should not be trusted.
As a physician, your trust in AI should be conditional, informed, and evolving rather than absolute. Current evidence suggests that certain AI systems merit trust in specific applications, while others require significant caution. The justification for trust increases with regulatory compliance, demonstrated reliability, transparency, organisational support, and alignment with established clinical practice.
What Training Do Clinicians Need to Use AI Safely?
AI promises real improvements in our efficiency and in the quality of care we deliver. Nevertheless, it is a powerful technology that is, to some extent, still finding its proper place in healthcare. No one should introduce AI into their practice without significant training in its strengths and weaknesses and in best practices for its safe and effective use.
Is AI Safe for Patients? Privacy, Security, and Risk Explained
AI magnifies existing cybersecurity risks, potentially threatening patient privacy and confidentiality, further complicating the trust equation for physicians.
These requirements are not optional. I cannot use a tool I cannot trust, and its output must not cause harm to a patient. Ideally, I should be able to see and understand how the tool arrived at a specific conclusion, especially in the context of clinical decision support. The tool should also be free from bias; for example, it must not discriminate against people of colour. If I later have to justify my actions in caring for a patient, I need to know that using the tool could not have resulted in patient harm.
If we satisfy those five key points, we will be many steps further along in the exciting journey of enhancing our professional lives with AI.
References
- Quinn TP, Senadeera M, Jacobs S, Coghlan S, Le V. Trust and medical AI: the challenges we face and the expertise needed to overcome them. J Am Med Inform Assoc. 2021;28(4):890–894. doi:10.1093/jamia/ocaa268. PMID: 33340404; PMCID: PMC7973477.
- European Parliament and Council of the European Union. Regulation (EU) 2024/1689 of 13 June 2024 on artificial intelligence and amending certain Union legislative acts (Artificial Intelligence Act), Article 5. 2024. https://artificialintelligenceact.eu/article/5/
- European Parliament. EU AI Act: first regulation on artificial intelligence. 1 June 2023. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence