By Kevin Ross, Research Director at Orion Health.

AI. Will it fundamentally change healthcare? Or will it just be another tool?  

When it comes to health, tools like ChatGPT are generating a lot of interest right now. But the implications are more nuanced than many of the headlines would have you believe.

The potential is enormous. But looking practically at the systems AI chatbots need to work within and around, we can see they are not the panacea for healthcare delivery – yet!

At the risk of stating the obvious, when you have a new hammer, everything looks like a nail. We must remember to use tools like ChatGPT for their intended purpose. And assuming that a tool perfect for one use will be equally impressive in another is likely to be a mistake.

With our experience building health technology systems to store, understand and share health records, we can see tools like ChatGPT having a role in the future of healthcare delivery. But there is a huge difference between general health information and personalized healthcare delivery.

ChatGPT appears to be excellent at curating information and producing high-quality written text based on the information it can access.

Where a clinician or health consumer wants to learn about something generally knowable, such as finding and summarizing the latest research or guidance for a condition, ChatGPT could open up an enormous resource.

If someone wants to translate specialist terminology into consumer-friendly information, ChatGPT could also provide a helpful translation service.

But looking beyond that, we can see there is more work to be done, and it all comes back to the data available. 

Think of ChatGPT as four layers: a communication layer (what it presents and how), a knowledge layer (what it knows), which is driven by a learning layer (how it learns), which in turn draws on a data layer (the information all of this is built upon).

Communication layer: ChatGPT is likely to provide new ways to search, summarize, and understand health data and how it links to general knowledge. This is where we anticipate a lot of applications at the intersection of health technology and AI. Health is notoriously difficult for the general population to understand, and ChatGPT offers a number of possibilities for quickly distilling this complexity into more accessible outputs.

Knowledge layer: ChatGPT currently has access to a wealth of information in the public domain. Given that this includes nearly every textbook and research paper ever uploaded to the public internet, it is a helpful resource to have on hand.

Learning layer: The way ChatGPT learns is one of its great strengths. Selecting what data to incorporate and how to translate feedback loops into new knowledge is powerful. At Orion we’re working on algorithms that apply to individual patients, as well as population health tools that are designed to recognize patterns as they emerge. We expect that AI tools will give us new ways of adding to these insights. One of AI’s strengths is the ability to identify patterns that are many steps deep and otherwise not apparent to humans.

Data layer: This is what feeds the learning and knowledge layers. Health providers need to carefully consider how to expose patient data to these tools. It cannot be shared without consent, and without it the tool offers only generic information rather than patient-centric care.

Despite the promise of ChatGPT and its impressive results in more general knowledge situations, a significant limitation in healthcare is that it cannot access your health record, or the records of others in the population. Its knowledge will not stretch to what it cannot see.

Assuming we can isolate and protect patient information, feeding data into an AI-powered environment will almost certainly improve the personalization of healthcare delivery – provided it plugs into the right technology ecosystem, one that is ready to translate this into action.

Medical interpretations are notoriously complex, and nuances in patient profiles can drive large differences in treatment decisions, which in turn can have significant effects on health outcomes.

In the near term, ChatGPT isn’t likely to be a reliable tool for clinical decision-making such as triage, diagnosis, or treatment decisions – but it could provide prompts that help patients, clinicians and others get one step closer to the right answer, or at least exclude obviously incorrect “answers”.

While we continue to work on more personalized long-term benefits, in the short to medium term ChatGPT could offer benefit in preclinical interactions (live chat to establish whether the patient needs to talk to a nurse or a doctor, for example), facilitate the provision of general disease guidance (post-discharge instructions, for instance), and help clinicians distill the vast array of medical research that informs their clinical decisions (“what does the latest research say about disease X with mutation Y?”).

So while many might worry about ChatGPT turning healthcare on its head, the real danger is that we just treat it as another tool. 

AI chatbots can help us in the short term. But without tackling the problem of how to share patient data with them, and what wider technology ecosystems they plug into, we’re limiting the real potential they have within healthcare delivery.