The eHealth Initiative’s two objectives for health IT in the United States in 2021 are COVID-19 and addressing systemic inequity and discrimination.

Chris and Kevin take a deep dive into machine learning with those issues top of mind, drawing on Kevin’s extensive work in this space. The following is a summary of their in-depth discussion.

Applying machine learning to population health

At the heart of population health is an abundance of data. There is data from traditional sources such as medical records and claims, and increasingly there is data coming from places like mobile apps, wearable devices, and even social media. Kevin explains that it would be impossible to expect healthcare professionals to understand and process all of this data themselves.

This is where machine learning steps in. Machine learning is a technology that recognizes patterns in data, matches new records against similar ones, and uses this information to make more informed decisions or inferences. Two examples of machine learning applications in population health are the prediction of future clinical complications and medical image analysis.
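To make the first example concrete, here is a minimal, hypothetical sketch of training a classifier to flag patients at risk of a future complication. The features, the synthetic data, and the outcome definition are all invented for illustration and are not drawn from Kevin’s work.

```python
# Minimal sketch: predicting a future clinical complication from
# routinely collected data. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: age, HbA1c, prior admissions, daily step count
X = np.column_stack([
    rng.normal(65, 12, n),      # age (years)
    rng.normal(7.0, 1.5, n),    # HbA1c (%)
    rng.poisson(1.0, n),        # prior hospital admissions
    rng.normal(5000, 2000, n),  # wearable-derived daily step count
])

# Hypothetical outcome: complication within 12 months, with risk
# rising with age and HbA1c via a logistic relationship
risk_logit = 0.05 * (X[:, 0] - 65) + 0.6 * (X[:, 1] - 7.0)
y = (rng.random(n) < 1 / (1 + np.exp(-risk_logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predicted risk scores could then feed clinical decision support
risk = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, risk):.2f}")
```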

As health systems collect more and more data through channels such as Health Information Exchanges (HIEs), machine learning will have a powerful role to play in helping us achieve better health outcomes.

Addressing bias in machine learning projects

Kevin and Chris discuss the potential for bias in research, and how it links back to inequity in healthcare and beyond. Kevin explains that, from the New Zealand perspective, the goal is always to apply the best of global research to maximize the benefits for those with the most to gain. However, the overwhelming majority of medical research worldwide has traditionally been based on populations that are majority European, and you can’t guarantee that the same gains will be experienced in a different population.

New Zealand has a diverse population with important and substantial minority groups who, like minority groups elsewhere in the world, have historically had poorer health outcomes. Machine learning models are built on data sets, and data sets reflect what you are able to collect and where you are able to look. It’s important to consider this context and ensure that if you’re using data science to change the way care is delivered, you’re changing it in a way that benefits those who need it most.
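One practical way to surface this problem is to compare a training set’s demographic mix against the population the model is meant to serve. The sketch below is purely illustrative: the group labels, counts, and population shares are invented, and the 80% flagging threshold is an arbitrary rule of thumb, not an established standard.

```python
# Minimal sketch: checking whether a training set's demographic mix
# matches the population it will serve. All figures are hypothetical.
from collections import Counter

# Hypothetical ethnicity labels, one per record in a training set
training_labels = (["European"] * 820 + ["Māori"] * 90
                   + ["Pacific"] * 40 + ["Asian"] * 50)

# Hypothetical population shares the model is meant to serve
population_share = {"European": 0.70, "Māori": 0.17,
                    "Pacific": 0.08, "Asian": 0.05}

counts = Counter(training_labels)
total = sum(counts.values())
for group, pop in population_share.items():
    data_share = counts[group] / total
    # Flag any group represented at less than 80% of its population share
    flag = "UNDER-REPRESENTED" if data_share < 0.8 * pop else "ok"
    print(f"{group:10s} data {data_share:5.1%} vs population {pop:5.1%}  {flag}")
```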

To tackle this, Kevin and his team put together collaborations between data scientists, clinicians, consumers, and ethnic groups. This helps them understand the possible implications of using a model in a different context, so that leveraging strong overseas research will, in time, decrease the equity gap.

Applying internationally developed models in New Zealand

The New Zealand COVID-19 response brought internationally developed modelling into its decision making. Kevin’s team deployed the New Zealand Algorithm Hub, which is a freely available library of algorithms that the New Zealand health sector can use to deal with an outbreak.

To manage the hub, a governance committee was set up that included a wide range of representatives from ethics, data science, clinical, consumer, and Māori interest groups. Any model considered for the hub is subject to a rigorous Dragons’ Den-style process.

This begins with a questionnaire covering crucial questions: how the model has been developed and tested, what is known about its outcomes, where it has been used, and its intended and potential unintended uses. Following this, the submitters stand before the committee, which asks questions and provides guidelines (that must be incorporated) before the algorithm is released to the hub.
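As a rough illustration, the questionnaire could be captured as a structured record, as in the hypothetical sketch below. The field names and example values are invented; this is not the hub’s actual schema or process.

```python
# Hypothetical sketch of the questionnaire as a structured record.
# Field names are illustrative, not the hub's actual schema.
# Requires Python 3.9+ for built-in generic annotations (list[str]).
from dataclasses import dataclass, field

@dataclass
class ModelSubmission:
    name: str
    development_summary: str        # how it was developed and tested
    known_outcomes: str             # what is known about its outcomes
    prior_deployments: list[str]    # where it has been used
    intended_uses: list[str]
    potential_unintended_uses: list[str]
    committee_guidelines: list[str] = field(default_factory=list)
    approved: bool = False

# Invented example values, for illustration only
submission = ModelSubmission(
    name="covid-admission-risk",
    development_summary="Trained on overseas admissions data, externally validated",
    known_outcomes="Calibrated in the original population; local calibration unknown",
    prior_deployments=["Overseas pilot"],
    intended_uses=["Triage support during an outbreak"],
    potential_unintended_uses=["Resource rationing without clinical oversight"],
)
```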

Being robust and transparent is essential

It is essential to have a ‘robust but collaborative governance process’ and to find the right balance between genuinely wanting to see the benefits and minimizing any chance of harm. Thinking more broadly, the fundamentals of this will always be the same across jurisdictions; only the context will differ slightly.

To ensure this type of governance approach adequately protects vulnerable populations from bias, being robust, transparent, and explicit about what you want to achieve is key, Kevin explains. Recognizing the potential for bias is the first step. The next is considering where the data behind the models actually comes from, and thinking carefully about how to counter that bias in the way the models, and the healthcare delivery system around them, are designed.
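One concrete way to recognize bias before deployment is to evaluate a model’s performance separately for each demographic group rather than only in aggregate. The sketch below uses synthetic data; the groups, scores, and the choice of AUC as the metric are assumptions for illustration, not details from the discussion.

```python
# Minimal sketch: auditing model performance per demographic group so
# that bias shows up before deployment. Data and groups are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

def audit_by_group(y_true, y_score, groups):
    """Report discrimination (AUC) separately for each group."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        if len(np.unique(y_true[mask])) < 2:
            results[g] = None  # not enough outcome variation to score
        else:
            results[g] = roc_auc_score(y_true[mask], y_score[mask])
    return results

rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=500, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=500)
# Hypothetical scores: deliberately less informative for minority group B
y_score = np.where(groups == "A",
                   y_true * 0.6 + rng.random(500) * 0.4,
                   rng.random(500))

for group, auc in audit_by_group(y_true, y_score, groups).items():
    print(group, "AUC:", "n/a" if auc is None else f"{auc:.2f}")
```

Run as-is, group A scores near-perfectly while group B is no better than chance, which is exactly the gap an aggregate metric would hide.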

In the full podcast, Chris and Kevin talk in depth about these topics, plus privacy and security in the context of machine learning and the role of machine learning in managing the COVID-19 pandemic.

Dr Chris Hobson and Dr Kevin Ross also recently co-authored a white paper on the topic of Removing Data Bias from AI and Machine Learning Tools in Healthcare, published in partnership with HIMSS. Read the full white paper below.