Five minutes with VP of Engineering Reece Robinson

People say, "Oh, I've already got Databricks, Epic, or Snowflake, so I don't need your Health Data Model (HDM)."

We use overlapping technologies, but they're not our competitors. They all offer some of our health data model's functionality, but none offer all of it, which is why we call ours a supermodel. Others still store data in separate, siloed databases; it isn't unified and standardized into a single schema like ours, so each dataset retains its original character or shape. Having one place for everything to go is great, but it still takes a lot of manual work to get the goodness, like deep, intuitive insights and reporting, out of it. Orchestral HIP lets you pull insights and create reports far more simply, with far less risk of information being missed or misunderstood, than any other engine. With data lakes, you still process data use case by use case, whereas with a health data supermodel each use case is a single step, because the data has already been processed. When you combine a data lake with a health data supermodel, they become exponentially greater than the sum of their parts.
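
To make that contrast concrete, here is a minimal sketch of what "unified and standardized into a single schema" means in practice. The field names and source shapes below are invented for illustration; they are not the actual HDM schema:

```python
# Hypothetical example: two siloed sources with different shapes,
# normalized once into a single canonical record format. Field names
# are illustrative only, not the real HDM schema.

def from_ehr(row):
    """Map a hypothetical EHR export row to the canonical shape."""
    return {
        "patient_id": row["mrn"],
        "event": row["dx_code"],
        "date": row["visit_date"],
        "source": "ehr",
    }

def from_claims(row):
    """Map a hypothetical insurance-claims row to the same shape."""
    return {
        "patient_id": row["member"],
        "event": row["icd10"],
        "date": row["service_dt"],
        "source": "claims",
    }

ehr_rows = [{"mrn": "P1", "dx_code": "J45", "visit_date": "2024-03-01"}]
claims_rows = [{"member": "P1", "icd10": "J45", "service_dt": "2024-03-05"}]

# Normalize once; every later use case queries one schema instead of
# reprocessing each source separately.
unified = [from_ehr(r) for r in ehr_rows] + [from_claims(r) for r in claims_rows]
asthma_events = [r for r in unified if r["event"] == "J45"]
print(len(asthma_events))  # one query answers across both sources
```

The point of the sketch is the shape of the work: the mapping is done once at ingestion, so each new use case is a simple query, not another round of source-by-source processing.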

It supercharges your ability to pull insights in other ways too.

Let’s say you’ve got heaps of different contributors: clinicians, providers, insurers, and patients all generating data. We can layer that together with data from other sources, including social determinants of health: things like access to quality water, or location-based radiation and pollution levels. Orchestral HIP lets you see across those layers of data, from a system level to a population level. That type of insight might look like a state discovering it can dramatically reduce hospital admissions for respiratory diseases by insulating certain types of housing stock in targeted areas. The cost reductions and improved care you could deliver with these sorts of insights are staggering.
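
An insight like the housing example could, in principle, come from layering admissions data over an environmental dataset by region. A toy sketch of that kind of cross-layer aggregation, with invented numbers and column names:

```python
# Toy illustration of layering clinical data over a social-determinants
# layer by region. All values and names are invented for illustration.
from collections import defaultdict

respiratory_admissions = [
    {"region": "north", "patient_id": "P1"},
    {"region": "north", "patient_id": "P2"},
    {"region": "north", "patient_id": "P3"},
    {"region": "south", "patient_id": "P4"},
]
housing = {
    "north": {"uninsulated_share": 0.62},
    "south": {"uninsulated_share": 0.08},
}

# Count admissions per region, then join with the housing layer.
counts = defaultdict(int)
for a in respiratory_admissions:
    counts[a["region"]] += 1

report = {
    region: {"admissions": counts[region], **housing[region]}
    for region in housing
}

# Regions with many admissions AND a high share of uninsulated housing
# are candidate targets for an insulation programme.
targets = [r for r, v in report.items()
           if v["admissions"] >= 3 and v["uninsulated_share"] > 0.5]
print(targets)  # prints ['north']
```

The real system would operate over far richer layers, but the mechanics are the same: once the layers share a common key (here, region), population-level questions become simple joins.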

Something we also wanted to focus on was the capacity of our solution to accommodate change.

Traditional data repositories are built around what their designers know at the time, and that becomes a lifelong constraint, because they don’t know what they don’t know. You can add scaffolding over time, but you end up down a cul-de-sac. Our canonical health data model frees you from that problem: our architecture and tooling are built for change, so the cost of change is minimized. The next version is generated automatically; no human intervention is required. Anything a human does adds value rather than maintaining the base platform, which makes it easy and cheap to run long term.
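
One common way to build for change, sketched below purely as an illustration (the HDM's actual change mechanism isn't described here), is additive, versioned schemas: each new version only adds fields with defaults, so records written against an older version are upgraded mechanically rather than by hand:

```python
# Sketch of additive schema versioning: each version only adds fields
# with defaults, so old records migrate automatically. Illustrative
# only; not the HDM's actual change mechanism.

SCHEMAS = {
    1: {"patient_id": None, "event": None},
    2: {"patient_id": None, "event": None, "date": "unknown"},
    3: {"patient_id": None, "event": None, "date": "unknown",
        "source": "legacy"},
}

def upgrade(record, from_version, to_version):
    """Fill in defaults for every field added between the two versions."""
    out = dict(record)
    for v in range(from_version + 1, to_version + 1):
        for field, default in SCHEMAS[v].items():
            out.setdefault(field, default)
    return out

old = {"patient_id": "P1", "event": "J45"}  # written at schema v1
current = upgrade(old, from_version=1, to_version=3)
print(current["source"])  # prints 'legacy'; no manual migration needed
```

Because every migration step is mechanical, generating "the next version" is cheap, and human effort can go into new value rather than platform maintenance, which is the claim made above.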

Why is it a big deal?

Well, Gartner recently shared their recommendations on data management, or what they call data fabric. They suggest that a well-implemented system can “quadruple” efficiency levels, reducing human-intensive data management tasks by half. That matters because data scientists and analysts typically spend around 60% of their time cleaning and organizing data. Orchestral HIP dramatically reduces that share, so healthcare data analysts and scientists can spend their time on analysis and interpretation, finding the goodness: improving health outcomes and reducing waste.

Harnessing data is a huge challenge for so many organizations –

Because it’s so complex that it’s difficult to talk about unless you’re extremely knowledgeable, and there’s so much of it, especially in healthcare. Even the conversations about which solution to choose for a specific organization’s data, between the people who recommend and the people who have to pay, are tricky, because the most effective, efficient solutions are not one-size-fits-all.

I recently came across a great metaphor for how to think about data: it’s like mud.

Bear with me! If you have too much of it, it can bog you down. If you have to store or transport it, it’s a challenge. If you have to contain it, you might get leaks. Left alone, it’s next to useless. But with the right tools, you can build bricks of structured knowledge, which become greater than the sum of their parts. With HIP, we’ve fast-tracked the process of cleaning and organizing muddy pools into those bricks of structured knowledge. I’d love to tell you how, but that’s a whole other conversation.

Our next chat with Reece covers how the pre-built ingestion pipelines get data into the supermodel, and how they’re applied at scale.