Contribution is not always coding. You can clean up the codebase, add documentation, write instructions for others to follow, and so on. Issues and feature requests should be posted under the ‘Issues’ tab, and general discussions under the ‘Discussions’ tab if one is available.
The .devcontainer folder contains the configuration for the Docker container used for development.
The version bump action (if present) will automatically bump the version when a commit message contains one of the terms major, minor, or patch. Avoid these words in commit messages unless you want to trigger the action.
Most repositories have GitHub Actions to generate and deploy the documentation and changelog.
What do I do next?
My repositories (so far) are small enough for beginners to get the big picture and make meaningful contributions.
Don’t be discouraged if you make mistakes. That is how we all learn.
TL;DR: Below is an open-source command-line tool for converting an OHDSI OMOP cohort (defined in ATLAS) to a FHIR bundle and vice versa.
OHDSI OMOP CDM is one of the most popular clinical data models for health data warehouses. Its simple but clinically motivated data structure is intuitively appealing to clinicians, which has led to its wide adoption. In this respect, it has overtaken HL7 V3, which is more robust but has a steeper learning curve, especially for clinicians. The OHDSI OMOP CDM is widely used in the pharmaceutical industry for drug monitoring.
FHIR is emerging as the de facto standard for health system interoperability, owing largely to its simplicity and its use of existing and popular standards such as REST. As NoSQL databases become more and more popular in healthcare, FHIR can also be a good persistence schema. It aligns well with search technologies such as Elasticsearch.
As both standards are popular, conversion from one to the other may be commonly required. Researchers at Georgia Tech have an open-source tool – GT-FHIR2 – for exposing an existing OHDSI OMOP CDM database as a FHIR endpoint. However, conversion between existing systems may not be easy with a full-stack solution.
I have a simpler solution that I believe will be useful in the following scenarios:
To export a cohort to a FHIR based analytics tool.
To load new resources to OMOP CDM databases for incremental ETL.
Omopfhirmap is a command-line tool for mapping an OHDSI cohort, defined in ATLAS, to a FHIR bundle that can optionally be submitted to a FHIR server for processing. Conversely, it can process a FHIR bundle and add its resources to an existing CDM database, ignoring duplicates. Unlike GT-FHIR2, the OMOP on FHIR project at Georgia Tech, omopfhirmap does not expose the OMOP database as FHIR endpoints.
I have used Spring Boot and JPA for easy wiring of services and abstraction of the database, and HAPI FHIR as it is the obvious choice for any Java-based FHIR application. It is still a work in progress, and any help will be appreciated (refer to CONTRIBUTING.md).
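To illustrate the HAPI FHIR side of such a mapping, here is a minimal, hedged sketch (the identifier system, resource content, and server URL are placeholders for illustration, not omopfhirmap's actual code) that wraps a mapped resource in a transaction bundle and submits it to a FHIR server:

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.Patient;

public class BundleSubmitSketch {

    public static void main(String[] args) {
        // FhirContext is expensive to create; HAPI FHIR recommends reusing one instance.
        FhirContext ctx = FhirContext.forR4();

        // A cohort member mapped to a FHIR Patient (illustrative content only).
        Patient patient = new Patient();
        patient.addIdentifier().setSystem("urn:omop:person_id").setValue("12345");

        // Wrap the mapped resources in a transaction bundle.
        Bundle bundle = new Bundle();
        bundle.setType(Bundle.BundleType.TRANSACTION);
        bundle.addEntry()
              .setResource(patient)
              .getRequest()
              .setMethod(Bundle.HTTPVerb.POST)
              .setUrl("Patient");

        // Optionally submit the bundle to a FHIR server (placeholder URL).
        IGenericClient client = ctx.newRestfulGenericClient("http://hapi.fhir.org/baseR4");
        Bundle response = client.transaction().withBundle(bundle).execute();
        System.out.println(ctx.newJsonParser().setPrettyPrint(true)
                              .encodeResourceToString(response));
    }
}
```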
Serverless is the new kid on the block, with services such as AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions. Essentially, it lets users deploy a function (Function as a Service, or FaaS) on the cloud with very little effort. Requirements such as security, privacy, scaling, and availability are taken care of by the framework itself. As healthcare slowly yet steadily progresses towards machine learning and AI, serverless is sure to make a significant impact on health IT. Here I will explain serverless (and some related technologies) for semi-technical clinicians and put forward some architectural best practices for using serverless in healthcare with FHIR as the data interchange format.
Let us say your analyst creates a neural network model, based on a few million patient records, that can predict the risk of MI from BP, blood sugar, and exercise. Let us call this model r = f(bp, bs, e). The model is so good that you want to use it regularly on your patients, and better still, you want to share it with your colleagues. So you contact your IT team to make this happen.
This is what your IT guys currently do: First, they create a web application that takes bp, bs, and e as inputs through a standard interface such as REST and returns r. Next, they rent a virtual machine (VM) from a cloud provider (such as DigitalOcean). Then they convert this application into a container (Docker) and deploy it on the VM. You can now use this application from your browser (Chrome), or your EMR (such as OpenMRS or OSCAR) can access the function directly. You share it with your colleagues, they can access it in their browsers, and you are happy. The VM can support up to 3 users at a time.
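To make this concrete, a minimal REST wrapper around the model might look like the sketch below (Spring Boot; the endpoint path, parameter names, and the riskScore() stub are all hypothetical, not a real trained model):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Minimal REST wrapper around the model r = f(bp, bs, e).
@SpringBootApplication
@RestController
public class RiskService {

    public static void main(String[] args) {
        SpringApplication.run(RiskService.class, args);
    }

    // GET /risk?bp=140&bs=7.2&e=2  ->  risk score r
    @GetMapping("/risk")
    public double risk(@RequestParam double bp,
                       @RequestParam double bs,
                       @RequestParam double e) {
        return riskScore(bp, bs, e);
    }

    // Placeholder for the trained model; in practice this would call the
    // serialized neural network rather than a hand-written formula.
    private double riskScore(double bp, double bs, double e) {
        return 1.0 / (1.0 + Math.exp(-(0.02 * bp + 0.3 * bs - 0.5 * e - 5.0)));
    }
}
```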
In a couple of months, your algorithm becomes so popular that at any one time hundreds of users try to access it and your poor VM crashes most of the time or your users have to wait forever. So you call your IT guys again for a solution. They make 100 copies of your container, but your hospital is reluctant to give you the additional funding required.
Your smart resident notices that your application is being used only in the morning hours, and at night all 100 containers are virtually sleeping. This is not a good use of the funding dollars. You contact your IT guys again, and they set up Kubernetes to orchestrate the containers according to usage. So, what is serverless? Serverless is a framework that makes all of this so easy that you may not even need your IT guys to do it. (Well, maybe that is an exaggeration.)
My personal favourite serverless toolset (if you care) is Kubernetes + Knative + riff. I won't try to explain what the last two are or how to use them. They are so new that they keep changing every day. In essence, your IT team can complete all the above tasks with a few commands typed on the command line on the cloud provider of your choice. The application (function, rather) can even scale to zero! (You don't pay anything when nobody uses it; it adds more containers as users increase and scales down at night, as in your case.)
Best Practices
What are the best practices when you design such useful cloud-based “functions” for healthcare that can be shared by multiple users and organizations? Well, here are my two cents!
First, you need a standard for data exchange. As JSON is the data format for most APIs, FHIR wins hands down here.
Next, APIs need a mechanism to expose their capabilities and properties to the world. For example, r = f(bp, bs, e) needs to tell everyone what it accepts (bp, bs, e) and what it returns (at the bare minimum). FHIR has a resource specifically for this, (not so creatively) named Endpoint. So, a function endpoint should return a FHIR Endpoint resource with information about itself if there is no payload; a rough sketch of such a self-description follows.
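As a minimal sketch (using HAPI FHIR; the function name and address are hypothetical placeholders), the self-description could be built like this:

```java
import ca.uhn.fhir.context.FhirContext;
import org.hl7.fhir.r4.model.Coding;
import org.hl7.fhir.r4.model.Endpoint;

public class SelfDescriptionSketch {

    public static void main(String[] args) {
        // Endpoint resource the function could return when called without a payload.
        Endpoint self = new Endpoint();
        self.setName("mi-risk-function");                          // hypothetical name
        self.setStatus(Endpoint.EndpointStatus.ACTIVE);
        self.setConnectionType(new Coding()
                .setSystem("http://terminology.hl7.org/CodeSystem/endpoint-connection-type")
                .setCode("hl7-fhir-rest"));
        self.setAddress("https://functions.example.org/risk");     // hypothetical address
        self.addPayloadType().setText("Bundle of Observations (bp, bs, e)");

        FhirContext ctx = FhirContext.forR4();
        System.out.println(ctx.newJsonParser().setPrettyPrint(true)
                              .encodeResourceToString(self));
    }
}
```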
What should the payload be? The payload should be a FHIR Bundle containing all the FHIR resources that the function needs (bp, bs, and e as FHIR Observations in your case). The bundle should also include a FHIR Subscription resource that points to the receiving system (maybe your EMR) for the response (r).
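And here is a hedged sketch of the request bundle itself (again with HAPI FHIR; the codes, units, and callback URL are illustrative assumptions, not part of any published profile):

```java
import ca.uhn.fhir.context.FhirContext;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.Observation;
import org.hl7.fhir.r4.model.Quantity;
import org.hl7.fhir.r4.model.Subscription;

public class RiskRequestBundleSketch {

    public static void main(String[] args) {
        FhirContext ctx = FhirContext.forR4();

        Bundle payload = new Bundle();
        payload.setType(Bundle.BundleType.COLLECTION);

        // Inputs to r = f(bp, bs, e) as FHIR Observations.
        payload.addEntry().setResource(
                observation("http://loinc.org", "8480-6", 142, "mm[Hg]"));             // systolic BP
        payload.addEntry().setResource(
                observation("http://loinc.org", "2339-0", 130, "mg/dL"));              // blood glucose
        payload.addEntry().setResource(
                observation("http://example.org/codes", "exercise-hours", 2, "h/wk")); // placeholder code

        // Subscription telling the function where to deliver the response r.
        Subscription callback = new Subscription();
        callback.setStatus(Subscription.SubscriptionStatus.REQUESTED);
        callback.setCriteria("RiskAssessment?subject=Patient/example");     // illustrative criteria
        callback.getChannel()
                .setType(Subscription.SubscriptionChannelType.RESTHOOK)
                .setEndpoint("https://your-emr.example.org/fhir");          // hypothetical EMR endpoint
        payload.addEntry().setResource(callback);

        System.out.println(ctx.newJsonParser().setPrettyPrint(true)
                              .encodeResourceToString(payload));
    }

    private static Observation observation(String system, String code, double value, String unit) {
        Observation obs = new Observation();
        obs.setStatus(Observation.ObservationStatus.FINAL);
        obs.getCode().addCoding().setSystem(system).setCode(code);
        obs.setValue(new Quantity().setValue(value).setUnit(unit));
        return obs;
    }
}
```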
So, what next?
Pick up the phone and call your IT team. Tell them to take Kubernetes + Knative + riff for a spin! I might do the same, and if I do, I will share it here. And last but not least, click on the blue buttons below!
Open-source software (hereafter OSS) is a phenomenon that has revolutionized the software industry. OSS is supported by voluntary programmers who regularly dedicate their time and energy to the common good of all. The question that immediately comes to mind is: how is it sustainable? Will they continue to contribute their social hours forever? Read the programmers' perspective here. But does it make sense for healthcare organizations to always accept their charity? And how can organizations that adopt OSS improve the sustainability of these projects? These are some of the factors to consider:
Do you have enough funding?
OSS supporters are humanists with an emancipatory worldview. OSS is fundamentally not designed for an organization that can sustain a paid product. Firstly, there is the ethical problem of exploiting the OSS community. But more importantly, healthcare organizations with enough funding tend to spend more on the long-term maintenance and customization of OSS than they would on a commercial product. Hence, OSS is generally designed to be an option when you have no other option.
Does the project have a regional focus?
OSS projects generally aim to solve global problems. So be careful when you hear of Canadian OSS or Danish OSS. Regional OSS is mostly just a cheaper local product masquerading as OSS for funding or other reasons. Such projects are unlikely to have the support of the global OSS community and are prone to burnout.
Is the OSS really OSS?
Any OSS worth its salt will be on GitHub. If you cannot find the project on GitHub, you should definitely ask why.
Is it really popular?
Some products that masquerade as OSS claim to have a worldwide network of developers. GitHub stars and forks are a reasonable indicator of popularity. Consider an OSS project for your organization only if it has a thousand stars on the GitHub sky.
Are you looking for a specific workflow support?
Is your workflow generic enough to be supported by a global network of volunteers? Well, an OHIP billing workflow may not be the right process for which to seek OSS support.
Do you need customization?
If you need a lot of customization to support your workflow, then OSS may not be the ideal solution. OSS is ideally suited for situations where you can use it out of the box.
Do you have the time?
Remember that OSS is supported by voluntary programmers. So if you need a feature, you make a request and wait. If your organization is used to demanding, then OSS is not for you. An OSS project is not owned by anyone, so its priorities may be different from yours.
Do you have internal expertise?
It is far easier to use an OSS product if you have someone in your organization supporting the project. The OSS community tends to respect one of its own more than an organization.
Supporting Open-Source Software?
It is crucial for organizations that depend on an OSS project for their day-to-day operations to support it. If the project becomes unsustainable, it affects the organization too. You can support the project in many ways, such as donations, coding support, and infrastructural support.
Do you know what OSS means and stands for?
Does the higher management know what OSS means and stands for? It is common for healthcare organizations to adopt OSS focusing only on the ‘free’ aspect.
“Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.”
Personally, I think the first point is the most important. OSS is designed and intended for use in areas where a paid option is not viable. In other scenarios in healthcare, you are likely to spend more for an open-source product than you spend for a regular product.
Finally, a quick mention of some noteworthy OSS in healthcare. OpenMRS is an open-source EMR started with the mission to improve healthcare delivery in resource-constrained environments. DHIS2 is a web-based, open-source public health information system with awesome visualization features including GIS, charts, and pivot tables.
Hamilton Health Sciences (HHS), which comprises seven hospitals in Ontario, recently introduced a system that integrates vital sign monitors with the electronic health record (EHR). HHS used an integration platform from Iatric Systems that can tie any device to the EHR. Mark Farrow, vice president and CIO at HHS, expects that the use of such an integration platform will help HHS hospitals work smarter.
Communication is vital in care coordination between physicians and other healthcare professionals. Effective information exchange is an important part of clinical workflow. However, very few HIS currently offer HIPAA-compliant support for provider communication. Hence providers still resort to age-old modes of communication such as phone calls and email. Athenahealth recently introduced athenaText, a HIPAA-compliant mobile text messaging system for healthcare professionals. The app hopes to keep providers ubiquitously connected during moments of care. Providers can access athenaText within the Epocrates mobile app or download it for free from the Apple App Store or Google Play Store.
Recently a firmware vulnerability was detected in Macs, which are generally considered impenetrable compared to Windows. The so-called Thunderstrike exploit targets the boot ROM and is difficult to detect and remove. A vast majority of physicians rely on Macs, and this vulnerability could have an impact on patient data privacy and security.
I was an RDF fan even before I used it for dermbase. I promptly signed the Yosemite Manifesto and blogged about it last year. After gaining more experience in regional health information exchange initiatives, I still feel that RDF is important, but in a different way.
Most federated regional clinical viewers query host databases, convert the results into an intermediary format (mostly XML or HL7), apply filters, and then provide a consolidated view in the browser and on mobile as HTML embellished with jQuery. Though this seems like a not-so-scalable technology, it works remarkably well in a regional context. Federated clinical viewers also attempt to create data warehouses on top of the clinical viewer. Such data warehouses have enormous potential in population informatics, and RDF could be an ideal framework for this purpose.
RDF is a proven technology that is schema-agnostic. However, in this context the biggest advantage of RDF is its data-atomic nature, which enables each data element to be queried, changed, or deleted independently of any other data element. RDF blank nodes can be used to effectively anonymize the data. From a data analytics perspective, representation in the RDF format makes data amenable to “reasoning” to discover new knowledge.
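As a minimal sketch of the data-atomic and anonymization points (using Apache Jena; the vocabulary URIs and property names are made up for illustration), a clinical viewer could emit a blood pressure reading as triples hanging off a blank node, so the observation stays queryable without identifying the patient:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class AnonymizedTripleSketch {

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String ex = "http://example.org/vocab#"; // hypothetical vocabulary namespace

        Property hasObservation = model.createProperty(ex, "hasObservation");
        Property code = model.createProperty(ex, "code");
        Property value = model.createProperty(ex, "value");

        // A blank node stands in for the patient: the statements remain
        // queryable, but nothing resolves back to an identifier.
        Resource anonymousPatient = model.createResource();   // blank node
        Resource observation = model.createResource();        // blank node

        anonymousPatient.addProperty(hasObservation, observation);
        observation.addProperty(code, "systolic-blood-pressure");
        observation.addLiteral(value, 142);

        // Each triple is an independent, atomic statement; any one of them
        // can be added, queried, or removed without touching the others.
        model.write(System.out, "TURTLE");
    }
}
```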
Genomic data analytics has revolutionized pre-clinical research. Growing popularity of Health Information Technology (HIT) and Health Information Exchange (HIE) has not yet resulted in a similar impact on population health. There are some fundamental differences between genomic and clinical data.
The fundamental characteristics of genomic data are:
1. The data format is simple though it can be annotated in different ways.
2. Raw data is collected first without consideration of relevance. Hypothesis formulation and analysis come later.
3. The data is mostly anonymous.
4. The format and analysis protocol remain the same.
The clinical data has different characteristics:
1. The data is often complex and hence it is difficult to have a uniform format.
2. Data is collected to prove or disprove a hypothesis/diagnosis. Hence only relevant data is collected.
3. The data is often tagged to an individual.
4. The analysis protocol and data collection depends on the hypothesis/diagnosis.
An RDF framework would allow population data to be abstracted from the normal everyday HIE data used for clinical practice, with both operating within the same ecosystem. The framework would also allow clinical data to take on the analytics-friendly qualities of genomic data. The clinical viewer could push data into the RDF repository without a separate warehousing process, thereby reducing overhead and increasing relevance. New-generation wearable devices and monitors could push anonymized raw data directly into the RDF repository. The privacy and security concerns of this architecture would be minimal.
There is another, hitherto unexplored advantage of such a clinical RDF repository. Temporal data related to climate change and other events such as natural calamities can also be pushed into the “structureless” RDF repository, making it possible to assess the population health impact of such events.
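To illustrate how such cross-domain questions might be asked, here is a hedged sketch (again with Apache Jena, and the same made-up vocabulary URIs) of a SPARQL query that counts anonymized observations recorded after each type of environmental event:

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class PopulationQuerySketch {

    public static void main(String[] args) {
        // In practice this model would be backed by the shared RDF repository;
        // an empty in-memory model is used here just to keep the sketch runnable.
        Model model = ModelFactory.createDefaultModel();

        // Hypothetical vocabulary: clinical observations and environmental events
        // sit side by side in the same "structureless" store.
        String sparql =
            "PREFIX ex: <http://example.org/vocab#>            \n" +
            "SELECT ?eventType (COUNT(?obs) AS ?observations)  \n" +
            "WHERE {                                           \n" +
            "  ?event a ex:EnvironmentalEvent ;                \n" +
            "         ex:eventType ?eventType ;                \n" +
            "         ex:date ?eventDate .                     \n" +
            "  ?obs a ex:Observation ;                         \n" +
            "       ex:date ?obsDate .                         \n" +
            "  FILTER (?obsDate >= ?eventDate)                 \n" +
            "}                                                 \n" +
            "GROUP BY ?eventType";

        try (QueryExecution qexec = QueryExecutionFactory.create(sparql, model)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("eventType") + " -> " + row.get("observations"));
            }
        }
    }
}
```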