By: Daniel Mechanik
As FHIR gains momentum and recognition among health IT professionals and becomes the hottest buzzword in the industry, it’s important to take a look at some of the most frequently encountered misconceptions around it. It’s far too easy for people, especially managers, to think of it as “just an API” (Application Programming Interface) – something purely technical that is easy to implement when and where it’s needed, and doesn’t require any deep thinking or strategic planning. In our experience at Outburn, this approach is the main obstacle to widespread adoption of the standard.
To understand why this is the case, let’s start by having a look at the bottom line of the official appendix to the FHIR specification, titled “The Role of Informatics in the Shift from Reactive to Proactive Healthcare”:
“The development of digital technology has disrupted other sectors, notably media, retail and manufacturing, and the health sector is unlikely to remain immune. Digitization of biology and health will allow machines to help, lead to a demystification of disease, the democratization of healthcare, and a move from the treatment of disease to the promotion and maintenance of wellness.”
What is evident from this quote is that the main goal of FHIR is to serve as a platform for the anticipated paradigm shift in the way healthcare works – from a highly fragmented and isolated set of disconnected, closed parts, each with its own limited scope and functionality, to a distributed web of interconnected technologies that work together as a whole, sharing knowledge and insights and enabling countless new functionalities and innovations that we currently can’t even imagine.
This paradigm shift is not something to be taken lightly. First and foremost, it means that FHIR was NOT designed just to be an API – it was designed to break the barriers between closed systems (and the people who use them) by defining a framework that encourages open, cross-system functionality. It’s a full-blown interoperability framework, not simply an integration protocol. The API part of FHIR is the primary tool for achieving this greater goal, but it is not the goal in itself.
In other words – FHIR can be (and commonly is) used to define specific interfaces between systems, enabling some pre-defined functionalities, but this is not where it shines. FHIR shines in its ability to support a practically unlimited set of capabilities, achieved by combining well-defined building blocks that have consistent behaviors and relationships. But if our systems were not designed with this goal in mind, how can we enable all of this?
The simple answer is – we can’t. At least not with our current systems. Sooner or later, we will encounter the boundaries of our existing systems and realize that they define the limits of what we can achieve with FHIR. These limitations can arise either from the differences between the way our data is stored internally and the way it should be represented in FHIR, or from the way data entry and data manipulation are handled by the system. As a mid-term solution, we can build FHIR interfaces on top of our existing data formats and supported data manipulation operations by declaring the exact scope of our system’s capabilities in FHIR terms, thus “closing” the door on apps that need any extra capabilities. This is the most common approach to FHIR implementations these days. It means, for example, that our system will support only a specific subset of the resources defined by FHIR, and for those resources only a limited set of elements and operations, and only under certain strict circumstances (for example, only certain terminologies will be allowed). Usually this is done by defining system-specific profiles that allow only certain data elements, and system-specific capability statements that declare, for example, that a certain resource can only be retrieved in a certain way by an outside system, cannot be searched by some search parameters, and can never be updated or created from outside.
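To make this concrete, here is a minimal, hedged sketch (in Python, with the FHIR content expressed as plain dictionary data) of what such a narrowed-down declaration might look like. The profile URL and the chosen resource are illustrative; a real CapabilityStatement would contain far more detail.

```python
# A sketch of a CapabilityStatement fragment that "closes the door":
# Observation is exposed read-only, constrained by a system-specific profile,
# with only two search parameters and no create/update interactions.
capability_fragment = {
    "resourceType": "CapabilityStatement",
    "rest": [{
        "mode": "server",
        "resource": [{
            "type": "Observation",
            # hypothetical system-specific profile URL
            "profile": "https://example.org/fhir/StructureDefinition/my-limited-observation",
            "interaction": [
                {"code": "read"},
                {"code": "search-type"},
            ],
            "searchParam": [
                {"name": "patient", "type": "reference"},
                {"name": "date", "type": "date"},
            ],
            # no "create" or "update" interactions are declared, so any client
            # that needs to write Observations is simply out of scope
        }]
    }]
}
```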
In this common approach, we can only achieve closed functionalities. Re-using and extending the interfaces we created will require significant effort and may sometimes be impossible without changing the way our internal systems work and store data. If all we wanted was to comply with some government regulation – this might be enough (although even in this case, some of the mandated profiles may still require adaptations in the system, for example to handle “must support” elements). But if we ultimately want true interoperability, with fast, bi-directional, lossless and frictionless exchange of information and knowledge, this is not the way to go about it. Too many applications will be left out of the game or will require custom adaptations, because they rely on functionality that we could not expose.
Does this mean that we can’t implement FHIR without re-designing our existing systems? No, but it means that if we want to achieve the true goals of interoperability in healthcare, we should look at it as an evolutionary process during which our systems must adapt to the changing environment by learning a new universal language and “growing new organs” that did not exist before.
Let’s go back to the “just an API” conception. Technically, it’s true. It’s an API. More specifically, it’s a RESTful API. What does RESTful even mean? It comes from REST – Representational State Transfer. It’s an architectural approach in which a client initiates “requests” and a server processes them and returns “responses”, but most importantly – these interactions are tightly coupled to an agreed technical “representation” of “resources” and the “states” they are allowed to be in. In a typical “conversation”, both sides can only exchange information as whole resources, and can only manipulate data by representing it as “states” of those resources. Most RESTful APIs are designed around resources that actually exist in some form in the system that exposes them. For example, Facebook’s APIs are closely coupled to the data entities defined in Facebook’s data model – Users, Posts, Photos, Ads, Groups, Pages, etc. It’s easy to process requests and manipulate data accordingly when the language of the API is the same as the language of the underlying data model, since the client can easily communicate what it wants the server to do, and the server easily translates those requests into actual changes in the state of the corresponding database records.
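As a hedged illustration of this resource-and-state pattern in FHIR terms (the endpoint below is hypothetical), a client reads a whole resource, modifies its representation locally, and transfers the complete new state back:

```python
import requests

base_url = "https://fhir.example.org/fhir"  # hypothetical FHIR endpoint

# Read: the server returns the current representation (state) of Patient/123
patient = requests.get(
    f"{base_url}/Patient/123",
    headers={"Accept": "application/fhir+json"},
).json()

# Modify the representation locally...
patient["active"] = False

# Update: the client transfers the complete new state of the resource,
# rather than patching individual fields in the server's database
requests.put(
    f"{base_url}/Patient/123",
    json=patient,
    headers={"Content-Type": "application/fhir+json"},
)
```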
But with FHIR, what we usually encounter is an internal underlying data model built from an entirely different set of data entities, ones that differ from the resources defined by FHIR in definition, granularity, attributes, relationships, structure, and terminology. This means that the native semantics of the data model cannot easily be expressed with FHIR resources, and more importantly – manipulating the states of the underlying data entities through the exchange of FHIR resources and their different states becomes a very tricky and costly operation. More often than not, this turns out to be a process that relies heavily on the definition of business rules, much more than on technical knowledge or tools. We see the same situation when trying to translate between human languages – often the exact meaning cannot be conveyed just by translating the separate words; some cultural and contextual knowledge must be applied by the translator. Even when a translation perfectly preserves the original meaning, it may require inelegant workarounds (as when a single word in one language becomes an entire complex sentence in another). It is safe to say that language barriers limit the content that can be communicated between parties and make even simple conversations slow, expensive, and lossy. All these pains are avoided when both parties speak the same language natively.
When you relocate to a foreign country, you know you will be surrounded by the native speakers of the local language, so you usually go and learn the new language. You don’t hire a permanent translator. You also don’t expect the locals to learn YOUR language. It’s the same with FHIR – once you realize it’s here to stay, and that everyone around you will soon speak FHIR, it makes perfect sense to teach the systems the new language and not rely on translators. And no matter how nice and organized your local language is (e.g. your internal data model, your existing interfaces etc.), you cannot reasonably expect that everyone will learn it just for the sake of communicating with you.
So how do we teach our systems this new language? First of all, we need to understand that a translation will have to happen somehow, since it’s impossible for a system to change its internal language overnight, so we had better analyze and map exactly where the gaps are in order to assess the required effort. For example, maybe our data model treats clinical conditions as lists of local terms (“codes”) inside a “Visit” entity. In FHIR, each clinical condition should be represented as a separate resource, and that resource must have an identity (a consistent logical identifier assigned by the server) that can be referenced later in separate interactions. In addition, representing the condition code with our local terminology is meaningless for most use cases, so we must translate the local code to at least one standard terminology, and often more than one.
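A hedged sketch of that gap, using hypothetical internal field names, an invented id scheme, and a hard-coded terminology map standing in for a real terminology service:

```python
# A hypothetical internal "Visit" record that stores conditions as local codes,
# transformed into separate FHIR Condition resources, each with its own logical
# id and a translation of the local code into a standard terminology.
visit = {
    "visit_id": "V-2024-0042",
    "patient_id": "P-777",
    "condition_codes": ["DM2", "HTN"],  # local terms, meaningful only internally
}

# Illustrative local-to-standard map (in reality, a terminology service)
LOCAL_TO_SNOMED = {
    "DM2": ("44054006", "Diabetes mellitus type 2"),
    "HTN": ("38341003", "Hypertensive disorder"),
}

def visit_to_conditions(visit: dict) -> list[dict]:
    conditions = []
    for local_code in visit["condition_codes"]:
        snomed_code, display = LOCAL_TO_SNOMED[local_code]
        conditions.append({
            "resourceType": "Condition",
            # consistent logical id so the same condition can be referenced later
            "id": f"{visit['visit_id']}-{local_code}",
            "subject": {"reference": f"Patient/{visit['patient_id']}"},
            "code": {"coding": [
                {"system": "http://snomed.info/sct",
                 "code": snomed_code, "display": display},
                # keep the local code as an additional coding so nothing is lost
                {"system": "https://example.org/fhir/local-codes",
                 "code": local_code},
            ]},
            "encounter": {"reference": f"Encounter/{visit['visit_id']}"},
        })
    return conditions
```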
So, what would happen if we just transform and translate “on the fly”? It means we have two separate challenges to tackle. One is the translation between our internal representation of the data and the FHIR representation, including code translations and identity assignment. The other is the opposite challenge – translating requests from FHIR’s language into queries and operations on our internal database. These are separate efforts that must be coordinated, since we don’t want to lose unmapped incoming data, and we also don’t want to respond with data that contradicts what the client asked for (representation vs. search) – something that can easily happen when trying to maintain separate bits of code that do the same thing in reverse. It’s important to remember that reversibility of translation and transformation functions is rarely easy or straightforward. Sometimes it’s practically impossible to implement the reverse of a function. It is only straightforward when a one-to-one mapping exists between the two data models (which basically means our data model is identical to FHIR’s).
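To illustrate the second direction (again as a hedged sketch with an invented table layout), here is what translating a FHIR search into an internal query might look like; note how easily it can drift out of sync with the outbound transformation above:

```python
# Translate a FHIR search such as
#   GET /Condition?patient=P-777&code=http://snomed.info/sct|44054006
# into a query against a hypothetical internal table. This code must stay
# consistent with the outbound transformation, or search results will
# contradict the resources the server actually returns.

SNOMED_TO_LOCAL = {"44054006": "DM2", "38341003": "HTN"}  # reverse of the earlier map

def fhir_condition_search_to_sql(params: dict) -> tuple[str, list]:
    clauses, args = [], []
    if "patient" in params:
        clauses.append("patient_id = ?")
        args.append(params["patient"])
    if "code" in params:
        _system, _, code = params["code"].partition("|")
        clauses.append("condition_code = ?")
        args.append(SNOMED_TO_LOCAL.get(code))  # reverse terminology translation
    where = " AND ".join(clauses) or "1=1"
    return f"SELECT * FROM visit_conditions WHERE {where}", args
```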
This leads us to the unavoidable conclusion that in the long term, to enable the widest range of functionalities that FHIR supports, we want our internal data model to evolve into one that is similar to FHIR’s, or even identical. This might mean adding support for multiple terminologies for each coded element, adding new elements that do not currently exist, adding support for extensions, adding new data entities that correspond to FHIR resources we may encounter in future integrations, creating new indexes to support search capabilities, creating consistent logical identifiers for each element that corresponds to a “resource” (and tracking those identifiers as required by the specification), shifting some attributes from one entity to another, and the list goes on. How far should you take it? That is up to you to decide.
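As an illustration only (the terminology system URIs and codes are real, everything else is an invented sketch), this is the kind of evolution a single coded field might go through:

```python
# Before: a single local code, no stable identity, no room for extra data
legacy_condition = {"visit_id": "V-2024-0042", "code": "DM2"}

# After: a consistent logical identifier, support for multiple terminologies
# per coded element, and a place for extensions and future elements
evolved_condition = {
    "logical_id": "cond-000123",   # stable id, tracked across updates
    "visit_id": "V-2024-0042",
    "codings": [
        {"system": "local", "code": "DM2"},
        {"system": "http://snomed.info/sct", "code": "44054006"},
        {"system": "http://hl7.org/fhir/sid/icd-10", "code": "E11"},
    ],
    "extensions": {},              # room for FHIR extensions we may need later
}
```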
The ideal solution would be a complete FHIR server, including a persistent database layer that holds FHIR resources in their full representational states, with the existing systems connected to that FHIR server for storing and retrieving data. This does not necessarily mean that we should replace the internal storage of the system with a FHIR server, but it does mean that the system should learn to “speak” FHIR so it can use and process data originating from outside the system, and so that outside systems will be able to consume and understand the data originating from our internal system. This way, the effort of translation is done once – from the internal data model to the FHIR model – and all other capabilities are enabled out-of-the-box without additional effort.
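Continuing the earlier sketch (the endpoint is hypothetical, and the client-assigned-id update shown here assumes the server is configured to allow it), the “translate once” idea looks roughly like this:

```python
import requests

fhir_base = "https://fhir.example.org/fhir"  # hypothetical FHIR server
headers = {"Content-Type": "application/fhir+json", "Accept": "application/fhir+json"}

# One-time translation effort: internal model -> FHIR model, pushed to the server
for condition in visit_to_conditions(visit):   # from the transformation sketch above
    requests.put(f"{fhir_base}/Condition/{condition['id']}",
                 json=condition, headers=headers)

# From here on, any consumer gets standard FHIR behavior with no extra code on
# our side, e.g. searching by patient and standard code:
results = requests.get(
    f"{fhir_base}/Condition",
    params={"patient": "P-777", "code": "http://snomed.info/sct|44054006"},
    headers=headers,
).json()
```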
An intermediate solution could be some kind of central repository where we persist the data in an intermediate model – one that combines our internal model with FHIR’s. With this solution we gain some advantages over the FHIR server approach – it’s easier to keep additional data elements without having to declare them as FHIR extensions, and we may have a wider range of possibilities for storing and retrieving data (SQL queries, for example) – but we will not have all FHIR operations and capabilities enabled out-of-the-box; we will need to write code to do that work.
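A hedged sketch of such a hybrid store, with an entirely illustrative schema: internal operational fields sit next to FHIR-aligned ones, plain SQL remains available, but every FHIR capability we need on top of it has to be coded by hand:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE conditions (
        logical_id   TEXT PRIMARY KEY,  -- FHIR-style resource identity
        patient_id   TEXT,              -- maps to Condition.subject
        snomed_code  TEXT,              -- FHIR-aligned standard coding
        local_code   TEXT,              -- internal field, kept without an extension
        ward         TEXT,              -- purely internal operational data
        recorded_at  TEXT
    )
""")

# Plain SQL stays available, which a pure FHIR REST API would not give us directly...
rows = db.execute(
    "SELECT ward, COUNT(*) FROM conditions WHERE snomed_code = ? GROUP BY ward",
    ("44054006",),
).fetchall()

# ...but FHIR interactions (search parameters, history, _include, etc.) are not
# available out-of-the-box: each one we need has to be implemented in code.
```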
It’s important to say – FHIR’s content model will never entirely replace our internal data models. Those will remain important for the ongoing operational functionality of systems. But it will be the common language for communicating the most important concepts that healthcare operates with, and these concepts are the ones that over time will have to be shared between different systems to achieve true interoperability. So, we better find a way to make our systems communicate these concepts natively in the “language” of FHIR, back and forth, and our systems must grow new “organs” that allow them to access and use external FHIR repositories without losing important information.
To summarize, what we are trying to say is this: treating FHIR as just an API is not a sustainable approach over time, and will not give the organization much benefit in the future. It will cost a lot to implement and maintain, will be almost impossible to extend to new functionalities, and will not make integration any easier down the road. Interfaces will only work for what they were defined to do, just like a proprietary interface. The path to true interoperability will require some internal changes that should be carefully defined and planned. A logical API layer by itself will not do the trick.