Bell Eapen

Physician | HealthIT Developer | Digital Health Consultant

Clinical Quality Language – Part 1

Clinical Quality Language (CQL) is a high-level, domain-specific language for expressing unambiguous clinical quality measures and clinical decision support rules. I am not a CQL expert; these are my notes from a system development perspective (not a clinical author's perspective). I am trying to make sense of this emerging standard and am adding my notes here in the hope that others may find them useful.


Clinical Quality Language is designed to be intuitive for clinicians authoring quality measures and clinical decision support rules. The decision support rules are mostly alert-type rules, calculated from a database at the individual or population level (not usually diagnostic decision support). CQL does not prescribe a data model; you can use whichever data model you prefer.

Here is an example segment of CQL:

define "InDemographic":
    AgeInYearsAt(start of MeasurementPeriod) >= 16 and AgeInYearsAt(start of MeasurementPeriod) < 24
        and "Patient"."gender" in "Female Administrative Sex"

Because Clinical Quality Language has a formal grammar, you can autogenerate lexers, parsers and visitors using ANTLR. In simple terms, CQL's syntax is described by a 'grammar' that ANTLR can read to generate code for processing any CQL in a variety of programming languages, including Java, JavaScript, Python, C# and Go. The CQL grammar files are here: https://cql.hl7.org/08-a-cqlsyntax.html. Incidentally, the CQL grammar imports the FHIRPath grammar.

If you wish to generate code from these files, there are two things to note:

  • Rename CQL.g4 to cql.g4, as the grammar name is case sensitive and must match the filename.
  • Put fhirpath.g4 in the same folder as cql.g4, because the cql grammar imports the fhirpath grammar.
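Once ANTLR has generated the lexer and parser (for example, with antlr4 -Dlanguage=Python3 fhirpath.g4 cql.g4), you can parse a CQL snippet in a few lines. Here is a minimal Python sketch; the generated module names (cqlLexer, cqlParser) follow ANTLR's defaults, and I am assuming 'library' is the grammar's start rule, so verify both against your generated code.

# A hedged sketch, assuming ANTLR-generated Python modules named cqlLexer and
# cqlParser, and that 'library' is the top-level rule of the CQL grammar.
from antlr4 import InputStream, CommonTokenStream
from cqlLexer import cqlLexer
from cqlParser import cqlParser

source = InputStream('define "InDemographic": AgeInYears() >= 16')
lexer = cqlLexer(source)
parser = cqlParser(CommonTokenStream(lexer))
tree = parser.library()                    # parse starting from the top-level rule
print(tree.toStringTree(recog=parser))     # print the parse tree for inspection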

Clinical Quality Language aims to provide a high-level, domain-independent language for clinicians that can be translated into low-level database logic. As CQL does not prescribe a data model, an intermediary format linking CQL to the data management logic is required. That format is the Expression Logical Model (ELM), which we will discuss in Part 2.

Kickstart NLP with UMLS

The UMLS, or Unified Medical Language System, is a set of files and software that brings together many health and biomedical vocabularies and standards to enable interoperability between computer systems.

Natural Language Processing (NLP) on the vast amount of data captured by electronic medical records (EMR) is gaining popularity. The recent advances in machine learning (ML) algorithms and the democratization of high-performance computing (HPC) have reduced the technical challenges in NLP. However, the real challenge is not the technology or the infrastructure, but the lack of interoperability — in this case, the inconsistent use of terminology systems.


NLP tasks start with recognizing medical terms in a corpus of text and mapping them into a standard terminology space such as SNOMED CT or ICD. This requires a terminology service that can perform the mapping in an easy and consistent manner. The Unified Medical Language System (UMLS) terminology server is the most popular resource for integrating and distributing key terminology, classification and coding standards. Consistent use of UMLS resources leads to effective and interoperable biomedical information systems and services, including EMRs.

To make things easier, UMLS provides both REST-based and SOAP-based services that can be integrated into software applications. A high-level library that encapsulates these services and hides the raw REST calls makes it much easier to use these resources efficiently. Umlsjs is one such high-level library around the UMLS REST web services for JavaScript. It is free, open source and available on npm, making it easy to integrate into any browser-based JavaScript or Node.js application.
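Under the hood these are plain REST calls, so non-JavaScript stacks can hit the UMLS search endpoint directly. Here is a minimal Python sketch; the endpoint path, parameters and response shape are my assumptions based on the documented UTS search API, so check them against the current documentation.

import requests

UMLS_API_KEY = "your-uts-api-key"   # from your UTS profile; treat it as a secret

response = requests.get(
    "https://uts-ws.nlm.nih.gov/rest/search/current",
    params={"string": "myocardial infarction", "apiKey": UMLS_API_KEY},
)
response.raise_for_status()

# Each hit carries a CUI (ui) and a preferred name that later mapping steps can use.
for hit in response.json()["result"]["results"]:
    print(hit["ui"], hit["name"])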

The umlsjs package is available on GitHub and npm. It is still a work in progress, and coding and documentation contributions are welcome. Please read the CONTRIBUTING.md file in the repository for instructions. If you use it and find any issues, please report them on GitHub.

How to deploy an H2O AI model using OpenFaaS on DigitalOcean in 2 minutes

H2O is an open-source, distributed and scalable machine learning platform written in Java. H2O supports many widely used statistical and machine learning algorithms, including gradient boosted machines, generalized linear models, deep learning and more. OpenFaaS® (Functions as a Service) is a framework for building serverless functions easily with Docker. Read my previous post to learn more about OpenFaaS and DO.


H2O has a module aptly named Sparkling Water that allows users to combine the machine learning algorithms of H2O with the capabilities of Spark. Integrating these two open-source environments provides a seamless experience for users who want to make a query using Spark SQL, feed the results into H2O to build a model and make predictions, and then use the results again in Spark. For any given problem, better interoperability between tools provides a better experience.

H2O Driverless AI is a commercial package for automatic machine learning that automates some of the most difficult data science and machine learning workflows, such as feature engineering, model validation, model tuning, model selection and model deployment. H2O also has a popular open-source module called AutoML that automates the training and tuning of a large selection of candidate models within a user-specified time limit, making hyperparameter tuning accessible to everyone.
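As a rough illustration, this is what an AutoML run looks like from the H2O Python API; the file name, target column and time limit below are placeholders, not part of any real pipeline.

import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Hypothetical training data; 'outcome' is a binary target column.
frame = h2o.import_file("patients.csv")
frame["outcome"] = frame["outcome"].asfactor()
train, test = frame.split_frame(ratios=[0.8], seed=42)

predictors = [c for c in train.columns if c != "outcome"]
aml = H2OAutoML(max_runtime_secs=300, seed=1)   # user-specified time limit: 5 minutes
aml.train(x=predictors, y="outcome", training_frame=train)

print(aml.leaderboard.head(rows=5))             # best candidate models, ranked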

H2O allows you to export a trained model as either a Plain Old Java Object (POJO) or a Model Object, Optimized (MOJO), both of which can be embedded easily in any Java environment. The only compile-time and runtime dependency for a generated model is the h2o-genmodel.jar file produced as the build output of these packages. You can read more about deploying H2O models here.
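Continuing the AutoML sketch above, exporting the leading model as a MOJO (with h2o-genmodel.jar alongside) is a single call; the output path is arbitrary.

# Writes the MOJO zip and h2o-genmodel.jar to the current directory.
mojo_path = aml.leader.download_mojo(path=".", get_genmodel_jar=True)
print("MOJO written to", mojo_path)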

I have created an OpenFaaS template for deploying the exported MOJO file using a base java container and the dependencies defined in the gradle build file. Using the OpenFaaS CLI (How to Install) pull my template as below:

Copy the exported MOJO zip file to the root folder along with build.gradle and settings.gradle. Make appropriate changes to handle.java as per the needs of the model, as explained here. Add http://digitaloceanIP:8080 to watersplash.yml.

and finally:

That’s it! Congratulations! Your model is up and running! Access it at http://digitaloceanIP:8080/function/watersplash

If you get stuck at any stage, give me a shout below. 

Deploy a fastai image classifier using OpenFaaS for serverless on DigitalOcean in 5 easy steps!

Fastai is a Python library that simplifies training neural nets using modern best practices. See the fastai website and the free online course to get started. Fastai lets you develop and improve various neural network models with little effort. Some deployment strategies are mentioned in the course, but most are not production-ready.

OpenFaaS® (Functions as a Service) is a framework for building serverless functions easily with Docker; the functions can be deployed on multiple infrastructures, including Docker Swarm and Kubernetes, with little boilerplate code. Serverless is a cloud-computing model in which the cloud provider runs the server, dynamically manages the allocation of machine resources and can scale a service to zero when it is not being used. It is interesting to note that OpenFaaS has requirements similar to the new Google Cloud Run and is interoperable with it. Read more about OpenFaaS (and install the CLI) on their website.

DigitalOcean: I host all my websites on DigitalOcean (DO), which offers good (in my opinion) cloud services at a low cost, with data centres in Canada and India. DO supports Kubernetes and Docker Swarm, and it offers a one-click install of OpenFaaS for as little as $5 per month. (You can remove the droplet after the experiment if you like, and you will only be charged for the time you use it.) If you are new to DO, please sign up and set up OpenFaaS as shown here:

In the fastai course, Jeremy creates a dog breed classifier.

As STEP 1, export the trained model to a .pkl file as below.
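Along the lines of the course notebook, here is a hedged sketch with fastai v1; the pets dataset, architecture and number of epochs are placeholders for whatever you actually trained.

from fastai.vision import *

# Train a quick classifier (the course's pets example), then export it.
path = untar_data(URLs.PETS)
fnames = get_image_files(path/'images')
data = ImageDataBunch.from_name_re(
    path/'images', fnames, r'/([^/]+)_\d+.jpg$',
    ds_tfms=get_transforms(), size=224).normalize(imagenet_stats)

learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)

learn.export()   # writes export.pkl to the learner's path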

This creates the export.pkl file that we will use later. To deploy, we need a base container to run the prediction workflow. I have created one with Python 3 and the fastai core and vision dependencies (to keep the size small). It is available here: https://hub.docker.com/r/beapen/fastai-vision. You don't have to use this container directly; my OpenFaaS template makes this easy for you.

STEP 2: Using the OpenFaaS CLI (How to Install) pull my template as below:

STEP 3: Copy export.pkl to the model folder

STEP 4: Add http://digitaloceanIP:8080 to dog-classifier.yml

and finally in STEP 5:

That’s it! Your predictor is up and running! Access it at http://digitaloceanIP:8080/function/dog-classifier

The template has a built-in image uploader interface! If you get stuck at any stage, give me a shout below. More to follow on using OpenFaaS for deploying machine learning workflows!

Serverless on FHIR: Management guidelines for the semi-technical clinician!

Serverless is the new kid on the block, with services such as AWS Lambda, Google Cloud Functions and Microsoft Azure Functions. Essentially, it lets users deploy a function (Function as a Service, or FaaS) to the cloud with very little effort. Requirements such as security, privacy, scaling and availability are taken care of by the framework itself. As healthcare slowly yet steadily progresses towards machine learning and AI, serverless is sure to make a significant impact on health IT. Here I will explain serverless (and some related technologies) for the semi-technical clinician and put forward some architectural best practices for using serverless in healthcare, with FHIR as the data interchange format.


Let us say, your analyst creates a neural network model based on a few million patient records that can predict the risk for MI from BP, blood sugar, and exercise. Let us call this model r = f(bp, bs, e). The model is so good that you want to use it on a regular basis on your patients and better still, you want to share it with your colleagues. So you contact your IT team to make this happen.

This is what your IT guys currently do: first, they create a web application that takes bp, bs and e as inputs through a standard interface such as REST and returns r. Next, they rent a virtual machine (VM) from a cloud provider (such as DigitalOcean). Then they package this application as a container (Docker) and deploy it on the VM. You can now use it from your browser (Chrome), or your EMR (such as OpenMRS or OSCAR) can call the function directly. You share it with your colleagues, they access it in their browsers, and you are happy. The VM can support up to three users at a time.

In a couple of months, your algorithm becomes so popular that at any one time hundreds of users try to access it and your poor VM crashes most of the time or your users have to wait forever. So you call your IT guys again for a solution. They make 100 copies of your container, but your hospital is reluctant to give you the additional funding required.

Your smart resident notices that your application is used only in the morning hours, and at night all 100 containers are virtually sleeping. This is not a good use of the funding dollars. You contact your IT guys again, and they set up Kubernetes to orchestrate the containers according to usage. So, what is serverless? Serverless is a framework that makes all of this so easy that you may not even need your IT guys to do it. (Well, maybe that is an exaggeration.)

My personal favourite serverless toolset (if you care) is Kubernetes + Knative + riff. I won't try to explain what the last two are or how to use them; they are so new that they keep changing every day. In essence, your IT team can complete all the above tasks with a few commands typed on the command line on the cloud provider of your choice. The application (function, rather) can even scale to zero: you pay nothing when nobody uses it, more containers are added as users increase, and it scales back down at night, as in your case.

Best Practices

What are the best practices when you design such useful cloud-based ‘functions’ for healthcare that can be shared by multiple users and organizations? Well, here are my two cents!

First, you need a standard for data exchange. As JSON is the data format for most APIs, FHIR wins hands down here.

Next, APIs need a mechanism to expose their capabilities and properties to the world. For example, r = f(bp, bs, e) needs to tell everyone, at a bare minimum, what it accepts (bp, bs, e) and what it returns. FHIR has a resource specifically for this, (not so creatively) named Endpoint. So, a function should return a FHIR Endpoint resource with information about itself when it is called with no payload.

What should the payload be? The payload should be a FHIR Bundle containing all the FHIR resources the function needs (bp, bs and e as FHIR Observations in your case). The bundle should also include a FHIR Subscription resource that points to the receiving system (maybe your EMR) for the response (r).
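For illustration, here is a hedged sketch of such a payload in Python; the codes, values and the Subscription endpoint are placeholders rather than a validated profile.

import json

# A collection Bundle carrying the three Observations the function needs,
# plus a Subscription pointing back to the EMR for the response.
bundle = {
    "resourceType": "Bundle",
    "type": "collection",
    "entry": [
        {"resource": {"resourceType": "Observation",
                      "code": {"text": "Systolic blood pressure"},
                      "valueQuantity": {"value": 142, "unit": "mmHg"}}},
        {"resource": {"resourceType": "Observation",
                      "code": {"text": "Blood glucose"},
                      "valueQuantity": {"value": 6.1, "unit": "mmol/L"}}},
        {"resource": {"resourceType": "Observation",
                      "code": {"text": "Exercise minutes per week"},
                      "valueQuantity": {"value": 90, "unit": "min/wk"}}},
        {"resource": {"resourceType": "Subscription",
                      "status": "requested",
                      "reason": "Return the MI risk score to the EMR",
                      "criteria": "RiskAssessment?subject=Patient/example",
                      "channel": {"type": "rest-hook",
                                  "endpoint": "https://your-emr.example.org/fhir"}}},
    ],
}

print(json.dumps(bundle, indent=2))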

So, what next?

Pick up the phone and call your IT team. Tell them to take Kubernetes + Knative + riff for a spin! I might do the same, and if I do, I will share it here. And last but not least, click on the blue buttons below! 🙂

Natural language processing (NLP) tools for health analytics

Natural language processing (NLP) is the process of using computer algorithms to identify key elements in language and extract meaning from unstructured spoken or written text. NLP combines artificial intelligence, computational linguistics, and other machine learning disciplines.


In the healthcare industry, NLP has many applications, such as interpreting clinical documents in an electronic health record. Natural language processing supports clinical decision support systems by extracting meaningful information from free-text query interfaces. It may reduce transcription costs by allowing providers to dictate their notes, or generate tailored educational materials for patients ready for discharge. At a high level, NLP includes processes such as structure extraction, tokenization, part-of-speech tagging and lemmatization.
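As a quick illustration of those basic steps (not a clinical-grade pipeline), here is a short Python sketch using spaCy's small general-purpose English model.

import spacy

# Install the model first with: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Patient denies chest pain but reports shortness of breath on exertion.")

for token in doc:
    # tokenization, part-of-speech tagging and lemmatization in one pass
    print(token.text, token.pos_, token.lemma_)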

“cTAKES is a natural language processing system for extraction of information from electronic medical record clinical free-text. Originally developed at the Mayo Clinic, it has expanded to being used by various institutions internationally.”

cTAKES is relatively difficult to install and use, especially if the service needs to be shared by several systems. I have wrapped cTAKES in an easy-to-use Spring Boot application that provides REST web services for clinical document annotation. The repository is here.


You need a UMLS username and password to deploy the application. RysannMD, developed at Ryerson University, is another fast and efficient system for annotating clinical documents. Some of my other experiments with NLP are available here.

Are you working on any NLP projects in medicine?

How to create a Neural Network model for business in 10 minutes

Neural networks and deep learning are the buzzwords lately. Machine learning has been in vogue for some time, but the easy availability of storage and processing power has made it popular. The interest is palpable in business schools as well. ML techniques have not percolated much from IT departments to the business side, but everybody seems to be interested. So, let us build a neural network model in 10 minutes.


This is the scenario:

You have a collection of independent variables (IV) that predict a dependent variable (DV). You have a theoretical model and want to know if it is good enough. Remember, we are not testing the model. We are just checking how good the IVs are in predicting DV. If they are not good predictors to start with, why waste time conjuring a fancy model! Sounds familiar? Let’s get started.

Setup

Do you have some preliminary knowledge of Python? If not, spend another 10 minutes here learning Python. Now you have to spend some time setting up your system once. Just follow these instructions.

Code

The first step is to import a few modules. If you don't know what these are, just copy, paste and ignore them. Consider them a header that you require.

Create a CSV file with your data with the last column as your DV. Now import that file.

nrows and ncols are the numbers of rows and columns. Now separate DV (y) from IVs (X) as below.
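A minimal sketch of the imports, the CSV import and the DV/IV split, assuming a hypothetical mydata.csv with the DV in the last column:

import numpy as np
import pandas as pd

df = pd.read_csv("mydata.csv")      # hypothetical file name
nrows, ncols = df.shape             # numbers of rows and columns

X = df.iloc[:, :-1].values          # independent variables (all but the last column)
y = df.iloc[:, -1].values           # dependent variable (last column)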

In most cases, you will be trying to predict a rare event. So add some oversampling for taste 🙂
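One common way to oversample the rare class is SMOTE from the imbalanced-learn package (pip install imbalanced-learn); a hedged sketch:

from imblearn.over_sampling import SMOTE

# Synthesize extra minority-class rows so the classes are roughly balanced.
X, y = SMOTE(random_state=42).fit_resample(X, y)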

Create, compile and fit the model.
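A hedged Keras sketch; the layer sizes (12, 8, 1) and the training settings are assumptions you can tune, and the output layer assumes a binary DV.

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(12, input_dim=X.shape[1], activation="relu"))   # input layer
model.add(Dense(8, activation="relu"))                          # hidden layer
model.add(Dense(1, activation="sigmoid"))                       # output layer for a binary DV

model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, y, epochs=100, batch_size=10, verbose=0)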

The three model.add statements represent the three layers in the neural network. The number after Dense is the number of neurons in each layer. You can play with these values a bit; these settings should work in most business cases. Read this for more information.

Now evaluate the model.
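For example, a one-line evaluation on the training data (in practice you would hold out a test set):

loss, accuracy = model.evaluate(X, y, verbose=0)
print("Accuracy: %.2f%%" % (accuracy * 100))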

Put this code in a file (say nnet.py) and use it as below.

TL;DR

Just use QRMine. nnet.py is in there.

Operationalizing Neural Network models

Shortly, I will show you how to operationalize a model using Flask.

ASP.NET Core 1: Some useful code snippets

ASP.NET Core is Microsoft's answer to open-source web development platforms. It was probably inevitable, as they realized the futility of fighting the open-source ecosystem given the ever-growing popularity of node, npm and bower. If you can't beat them, join them 🙂

Most healthcare organizations in Canada still use Microsoft products, and hence ASP.NET Core may be a good platform for building business applications. As it is platform-agnostic, you can deploy it on Linux if things change in the future (yes, that is possible with Core). Of late, I have been working on an ASP.NET Core project (an intranet portal) and would like to share some code snippets that may be useful to others.

If you need to export the CRUD index (list) as .csv, here is a useful resource from @damienbod: https://github.com/damienbod/AspNetCoreCsvImportExport

The implementation of the InputFormatter and OutputFormatter classes is specific to a list of simple classes with only properties. If you have more complex classes, map only the properties that you need to serialize, as below:

I could not find any good reference for implementing multiple file upload. Microsoft's documentation in this instance was unfortunately not very clear: https://docs.microsoft.com/en-us/aspnet/core/mvc/models/file-uploads

Here is my improvisation:

 

https://gist.github.com/dermatologist/5f3900074e7383befe5363331de238e6

Hope this helps.