A project that clinically validates an information model
A project is underway to develop a Standard to collect information about Adverse Reactions and make that information available to those who need to access it. Borrowing from the approach used to develop the highly effective FHIR® standard, the project is developing a guide that has had cross-sector input and practical validation.
Adverse Drug Reactions (ADRs) are a significant cause of morbidity and mortality in patients and contribute to the incidence of adverse events. The detection, management, documentation and reporting of ADRs are important for ongoing patient safety.
The project started by identifying the clinical need, and then collecting the requirements from a group of expert users (including clinicians and analysts). Following this we developed a model to collect this information using the FHIR standard.
I’ve long been proposing a process to involve clinicians in more health IT projects (recognising that most of them are more interested in the outcomes of the projects than in the deep technical details).
At a high level this process is:
- Create a number of scenarios that represent what we are trying to achieve in the project.
- Create an information model (or models) to represent the information to be exchanged (regardless of how this is done)
- Validate that the model/s are fit for purpose
- Map that into the FHIR resources required
- Test at a connectathon.
Another benefit of this approach is that it gives clinicians who want to learn more about the underlying technology a channel to do so. Involving them in a project of interest to them is an easier route than studying the specification.
We’ve been using this approach in the Adverse Reactions project in New Zealand, and we have come up with a Logical Model that the domain experts are happy with. Here’s a screenshot of part of that model – you can access the full model here: http://clinfhir.com/logicalModeller.html#s25zx (And thank you to SMILE CDR https://smilecdr.com/ for supplying the FHIR server where this model is stored).
Note that this model is in no way influenced by FHIR resources (though it does use FHIR datatypes) – and also represents all the data that could be captured in an adverse reactions report (my openEHR colleagues would likely call this a ‘maximal data set’) – you certainly wouldn’t expect an average report to include all these elements.
Have a look at the ReactionDescription element. Because the element is coded and has a ValueSet assigned to it (we call this ‘binding’ to the ValueSet), we’re able to show the kinds of data items we might expect.
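As a rough illustration of what such a binding looks like in a FHIR logical model, here is a sketch of an ElementDefinition expressed as a Python dict. Note this is an assumption for illustration only: the element path and ValueSet URL are hypothetical placeholders, not the actual definitions from the Adverse Reactions model.

```python
# Hypothetical sketch of a coded element in a FHIR logical model,
# bound to a ValueSet. The path and ValueSet URL are placeholders,
# not the ones used in the Adverse Reactions model.
element_definition = {
    "path": "AdverseReaction.reactionDescription",
    "short": "Clinical description of the reaction",
    "type": [{"code": "CodeableConcept"}],
    "binding": {
        # other binding strengths: "required", "extensible", "example"
        "strength": "preferred",
        "valueSet": "http://example.org/fhir/ValueSet/reaction-description",
    },
}

def binding_valueset(elem: dict) -> str:
    """Return the ValueSet URL an element is bound to, or '' if unbound."""
    return elem.get("binding", {}).get("valueSet", "")

print(binding_valueset(element_definition))
```

Tooling like the logical modeller can read the binding and present the expected codes to the user, which is what makes the ValueSet assignment useful at review time.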
How do we proceed to step 3? First, we must confirm that the model is correct and contains all the options needed for submitting reports. After all, it was created by a small number of experts, and it would certainly benefit from review by a wider group of people. One way (and the one we usually use) is to send the model out for comment in paper form – transcribe the model into a document and ask people for comments. Then have the expert group update the model based on those comments.
The trouble is, this is a time-consuming and manual process – and not exactly exciting to do. Unless you are really interested in the topic, are you going to put time into it? An alternative approach is to test the model collaboratively at a Connectathon (whether in-person, remote or both) – and this is what the ConMan tool has been developed to facilitate. (Incidentally, the name stands for CONnectathon MANager.)
The way this works is that the tool takes the Logical Model and creates an online form that matches the model, into which sample data can be entered to represent the real-life scenarios identified at the start of the process. There are also places where participants can enter comments or notes about the elements in the model – or the model itself – which are then available for review and serve as the feedback to help validate (or correct) the model.
Here’s a screenshot of an example of a simple amoxicillin allergy:
- In this view, only elements with a sample value or a note are shown (which is handy for large models). There are a number of other views possible.
- In this configuration, we are allowing free text to be entered against an element. Those more familiar with FHIR can enter structured data, which is particularly helpful for coded data, as we want to identify the best ValueSet to use.
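To give a feel for where this is heading (step 4, mapping into FHIR resources), here is a hedged sketch of how an amoxicillin allergy scenario might be represented as a FHIR AllergyIntolerance resource. The codes and references shown are illustrative examples only; the project's actual mapping and ValueSet choices are decided by the expert group, not by this sketch.

```python
# Illustrative AllergyIntolerance resource (as a Python dict) for a
# simple amoxicillin allergy. Codes and references are examples only;
# the project's ValueSet bindings determine the terminology used.
allergy = {
    "resourceType": "AllergyIntolerance",
    "clinicalStatus": {
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/allergyintolerance-clinical",
            "code": "active",
        }]
    },
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "372687004",       # illustrative SNOMED CT code
            "display": "Amoxicillin",
        }],
        "text": "Amoxicillin",
    },
    "patient": {"reference": "Patient/example"},  # placeholder reference
    "reaction": [{
        "manifestation": [{
            "coding": [{
                "system": "http://snomed.info/sct",
                "code": "126485001",   # illustrative SNOMED CT code
                "display": "Urticaria",
            }]
        }],
        "severity": "moderate",
    }],
}
```

Entering structured data like this at the Connectathon helps surface exactly the terminology questions (which code system, which ValueSet) the review is meant to answer.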
Once people have entered their data and notes, it’s possible to quickly view all the notes like this:
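ConMan's internal data format isn't documented in this post, but the idea of collating reviewer notes per model element can be sketched with a hypothetical structure (the element paths and notes below are invented for illustration):

```python
# Hypothetical sketch: collating reviewer notes by model element.
# The actual ConMan data structures are not shown in this post.
from collections import defaultdict

entries = [
    {"element": "AdverseReaction.substance", "note": "Should allow multiple substances"},
    {"element": "AdverseReaction.severity", "note": "Is a 'fatal' value needed?"},
    {"element": "AdverseReaction.substance", "note": "Bind to a medicines terminology"},
]

notes_by_element = defaultdict(list)
for entry in entries:
    notes_by_element[entry["element"]].append(entry["note"])

# A consolidated view like this is what the expert group reviews.
for element, notes in sorted(notes_by_element.items()):
    print(f"{element}: {len(notes)} note(s)")
```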
So, in this way we can collect a review of the model that is both easy to view and (hopefully) a bit more fun than writing a document!
The Adverse Reactions project is significant for a number of reasons. Firstly, it brought together sector participants around a common goal: sharing information across a digital ecosystem, which can – quite literally – save lives. And secondly, it is the first nationally sponsored FHIR-based implementation guide.
We’re going to be testing this at the Health Informatics New Zealand (HiNZ) conference Connectathon on Tuesday 20th November this year.
If you would like to hear more about the Adverse Reactions project in New Zealand please come to our presentation at HiNZ where Dr John Fountain from BPAC and I will discuss this.
HiNZ Conference day 2
Thursday 22 November 11am
Sharing Adverse Reactions Across the Digital Ecosystem
To learn more about this topic read my new white paper – Implementing an Adverse Reaction Reporting Project Utilising the Innovative FHIR Standard.