Bell Eapen MD, PhD.

Bringing Digital health & Gen AI research to life!

When GenAI Ideas translate to practice with DHTI

If you’ve ever worked in healthcare, you know the feeling: you have a brilliant idea—something that would save time, reduce frustration, or make patient care smoother—and then… nothing happens. Not because the idea is bad, but because turning it into real software feels like trying to build a spaceship out of sticky notes.

That’s the gap vibe coding is trying to close. With tools like DHTI, that gap is finally starting to shrink, and building software starts to resemble a conversation like this:

npx dhti-cli copilot --model gpt-5.3-codex --skill elixir-generator --prompt "Generate an elixir glycemic_advisor that summarizes diabetic patients' latest lab results and medications"

npx dhti-cli copilot --model gpt-5.3-codex --skill start-dhti --prompt "Start the glycemic_advisor elixir and display in CDS-Hooks sandbox"

Let’s walk through what vibe coding is, why it matters, and how DHTI makes it surprisingly doable—even if you’ve never written a line of code in your life.


So… what exactly is vibe coding?

Think of vibe coding as building software the same way you’d brainstorm with a colleague over coffee. You don’t start with code. You start with the vibe of what you want.

Instead of saying:

“I need a function that queries a FHIR endpoint and transforms the JSON.”

You say:

“I want a little helper that pulls a patient’s meds and tells me if anything looks risky.”

And the system starts shaping that idea into something real.

Vibe coding is:

  • Talking to the computer like you’d talk to a person
  • Iterating as you go
  • Letting the AI handle the technical scaffolding
  • Staying focused on the idea, not the syntax

It’s not magic. It’s just finally letting people who understand healthcare shape the tools they need—without having to become software engineers first.


Why healthcare needs this more than anyone

Healthcare is full of smart, creative people. But it’s also full of complexity: clinical workflows, privacy rules, specialized language, and data standards that feel like they were designed by a committee of cryptographers.

Even when clinicians know exactly what they want, translating that into something a developer can build is… hard. And developers, for their part, often spend more time deciphering clinical nuance than writing code.

Vibe coding cuts out the translation layer.
It lets clinicians express ideas in their own words.
It lets AI turn those ideas into working prototypes.
And it lets developers focus on polishing and deploying—not guessing.

But vibe coding alone isn’t enough. Healthcare needs structure. It needs guardrails. It needs standards.

That’s where DHTI comes in.


Meet DHTI: the “let’s actually build this” engine

DHTI is an open‑source reference architecture built specifically for healthcare GenAI applications. If vibe coding is the conversation, DHTI is the workshop where the ideas get shaped into something sturdy.

DHTI gives you a ready‑made foundation for building GenAI healthcare tools. It understands healthcare standards, provides synthetic data, supports agentic workflows, and helps you turn natural‑language ideas into real, testable applications.

In plain English:
DHTI makes vibe‑coded ideas actually work in healthcare environments.

Here’s how.


DHTI speaks healthcare, so you don’t have to

Most AI tools can generate code, but they don’t understand the rules of healthcare. They don’t know what FHIR is supposed to look like. They don’t know how CDS‑Hooks cards plug into clinical workflows. They don’t know what’s safe, what’s allowed, or what’s interoperable.

DHTI does.

So when someone says:

“Can you build something that checks whether a patient with diabetes is overdue for an A1c?”

DHTI can assemble the pieces:

  • A FHIR query
  • A little reasoning chain
  • A card that could show up in the EHR
  • A test environment to try it out

All without the user needing to know any of those words.


It lets non‑technical users build real workflows

Healthcare tasks aren’t simple. They involve multiple steps, multiple data sources, and multiple decisions. DHTI is built for that.

A clinician might say:

“I want something that looks at a patient’s skin images, compares them to previous ones, and drafts a note.”

DHTI can turn that into:

  • A workflow that loads images
  • A reasoning step that describes changes
  • A draft note
  • A preview card

It’s not just generating text—it’s building a mini‑application.


It makes experimentation safe and fast

One of the biggest barriers in healthcare innovation is simply being able to try things. Real patient data is locked down (as it should be). EHR systems are hard to access. And IT teams are stretched thin.

DHTI solves this by including:

  • Synthetic data that looks realistic but contains no PHI
  • A ready‑to‑use FHIR server
  • Prebuilt agent templates
  • A local environment you can spin up quickly

This means you can test ideas without waiting for approvals, access, or integration.

You can play.
You can explore.
You can see what works.

And that’s where the best ideas come from.


It smooths the path from prototype to production

Prototyping is fun. Deploying is not.

Healthcare IT teams have to think about:

  • Security
  • Compliance
  • Standards
  • Maintenance
  • Integration
  • Auditing

DHTI is built with these realities in mind. Because everything is structured, modular, and standards‑aligned from the start, IT teams don’t have to rebuild the prototype from scratch. They can refine it, secure it, and deploy it.

This is the difference between “cool demo” and “something translatable to practice.”


The Copilot SDK: your agent, packed right into the app

One of the most exciting pieces of this ecosystem is the Copilot SDK, making the AI agent directly available in DHTI—no external tools, no switching windows, no juggling platforms.

You can:

  • Build the agent
  • Test it
  • Tweak it

All in one place.

For vibe coding, this is huge. It means the conversation that creates the tool can happen inside the tool itself. Clinicians can test ideas in the same interface where they’ll eventually use them. Developers can refine behavior without rebuilding infrastructure.

It’s a tight, elegant loop.


Why this moment matters

Healthcare has always been full of ideas. What it hasn’t had is a way to turn those ideas into working software without months of meetings, requirements documents, and integration headaches.

Vibe coding changes the front end of innovation.
DHTI changes the back end.

Together, they make it possible for:

  • Clinicians to prototype ideas
  • Researchers to test hypotheses
  • Developers to build faster
  • IT teams to deploy safely
  • Organizations to innovate sustainably

It’s not just a new tool.
It’s a new way of building.


Last but not least, thank you, Hanson Professional Services, for supporting this project! Version 1 will debut at the Medical Informatics Europe Conference 2026 in Genoa, Italy, taking place May 26–28, 2026. Read more about DHTI and try it today! It is free and open-source. See the repository link below. Please comment/share if you find this useful!

Hanson - DHTI

Why DHTI Chains Matter: Moving Beyond Single LLM Calls in Healthcare AI (Part II)

Large Language Models (LLMs) are powerful, but a single LLM call is rarely enough for real healthcare applications. Out of the box, LLMs lack memory, cannot use tools, and cannot reliably perform multi‑step reasoning—limitations highlighted in multiple analyses of LLM‑powered systems. In clinical settings, where accuracy, context, and structured outputs matter, relying on a single prompt‑response cycle is simply not viable.

Healthcare workflows require the retrieval of patient data, contextual reasoning, validation, and often the structured transformation of model output. A single LLM call cannot orchestrate these steps. This is where chains become essential.


Image credit: FASING Group, CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons

What Are Chains, and Why Do They Matter?

A chain is a structured workflow that connects multiple steps—LLM calls, data transformations, retrieval functions, or even other chains—into a coherent pipeline. LangChain describes chains as “assembly lines for LLM workflows,” enabling multi‑step reasoning and data processing that single calls cannot achieve.

Chains allow developers to:

  • Break complex tasks into smaller, reliable steps
  • Enforce structure and validation
  • Integrate external tools (e.g., FHIR APIs, EMR systems)
  • Maintain deterministic flow in safety‑critical environments

In healthcare, this is crucial. For example, generating a patient‑specific summary may require:

  1. retrieving data from an EMR,
  2. cleaning and structuring it,
  3. generating a clinical narrative, and
  4. validating the output.

A chain handles this entire pipeline.
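Concretely, the four steps above can be sketched as a plain-Python function chain. All names and data here are invented stand-ins; in LangChain each step would be a Runnable.

```python
# Hypothetical sketch of the four-step summary pipeline as a plain
# function chain. In a real system, step 1 would be a FHIR/EMR query
# and step 3 an LLM call; here they are stubs.

def retrieve_emr_data(patient_id):
    # Stand-in for a FHIR/EMR query
    return {"name": "Jane Doe", "labs": [{"code": "A1c", "value": 8.9}]}

def clean_and_structure(record):
    # Keep only the fields the narrative step needs
    return {"name": record["name"],
            "labs": {lab["code"]: lab["value"] for lab in record["labs"]}}

def generate_narrative(structured):
    # Stand-in for the LLM call
    labs = ", ".join(f"{k} = {v}" for k, v in structured["labs"].items())
    return f"{structured['name']}: latest labs {labs}."

def validate_output(narrative):
    # Simple guardrail: the narrative must contain at least one lab value
    assert any(ch.isdigit() for ch in narrative), "no lab values in summary"
    return narrative

def summarize(patient_id):
    steps = [retrieve_emr_data, clean_and_structure,
             generate_narrative, validate_output]
    result = patient_id
    for step in steps:
        result = step(result)
    return result

print(summarize("pat-123"))  # → Jane Doe: latest labs A1c = 8.9.
```

The point is structural: each step has a single responsibility, so each can be validated (or swapped out) independently.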


Sequential, Parallel, and Branch Flows

Modern LLM applications often require more than linear processing. LangChain supports three major flow types:

✅ Sequential Chains

Sequential chains run steps in order, where the output of one step becomes the input to the next. They are ideal for multi‑stage reasoning or data transformation pipelines.
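The pattern is easy to see in miniature. The toy `Step` class below mimics LCEL's pipe operator; it is an illustrative stand-in, not the LangChain API.

```python
# Toy stand-in for LangChain's pipe operator: each Step wraps a function,
# and `a | b` composes them left to right, as LCEL does with Runnables.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: feed this step's output into the next step
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

extract = Step(lambda note: note.lower().split())   # stand-in for an LLM step
count = Step(lambda words: len(words))              # stand-in for a transform

chain = extract | count
print(chain.invoke("Patient reports chest pain"))  # → 4
```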

✅ Parallel Chains

Parallel chains run multiple tasks at the same time—useful when extracting multiple data elements or generating multiple outputs concurrently. LangChain’s RunnableParallel enables this pattern efficiently.
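A rough stand-in for this pattern using only the standard library; the clinical helper functions are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical extraction tasks standing in for parallel chain branches
def get_meds(pid): return ["metformin"]
def get_labs(pid): return {"A1c": 8.9}
def get_allergies(pid): return ["penicillin"]

tasks = {"meds": get_meds, "labs": get_labs, "allergies": get_allergies}

def run_parallel(tasks, pid):
    # Mirrors the shape of RunnableParallel: run every task concurrently,
    # return a dict keyed by task name
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, pid) for name, fn in tasks.items()}
        return {name: fut.result() for name, fut in futures.items()}

print(run_parallel(tasks, "pat-123"))
```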

✅ Branching Chains

Branch flows allow conditional logic—different paths depending on model output or data state. This is essential for clinical decision support, where logic often depends on patient‑specific conditions.
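A minimal sketch of branch logic in plain Python, loosely mirroring LangChain's RunnableBranch: a list of (condition, handler) pairs plus a default. All names are illustrative.

```python
# Toy analogue of RunnableBranch: try each (condition, handler) pair in
# order and fall through to the default if none matches.

def branch(pairs, default):
    def run(x):
        for cond, handler in pairs:
            if cond(x):
                return handler(x)
        return default(x)
    return run

triage = branch(
    [(lambda p: "chest pain" in p["symptoms"], lambda p: "urgent care"),
     (lambda p: p["risk"] == "medium", lambda p: "telehealth")],
    lambda p: "self-care",
)
print(triage({"symptoms": ["chest pain"], "risk": "high"}))  # → urgent care
```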

Together, these patterns allow developers to build robust, production‑grade AI systems that go far beyond simple prompt engineering.


Implementing Chains in LangChain and Hosting Them on LangServe

LangChain provides a clean, modular API for building chains, including prompt templates, LLM wrappers, and runnable components. LangServe extends this by exposing chains as FastAPI‑powered endpoints, making deployment straightforward.

This combination—LangChain + LangServe—gives developers a scalable, observable, and maintainable way to deploy multi‑step GenAI workflows.


DHTI: A Real‑World Example of Chain‑Driven Healthcare AI

DHTI embraces these patterns to build GenAI applications that integrate seamlessly with EMRs. DHTI uses:

  • Chains for multi‑step reasoning
  • LangServe for hosting GenAI services
  • FHIR for standards‑based data retrieval
  • CDS‑Hooks for embedding AI output directly into EMR workflows

This standards‑based approach ensures interoperability and makes it easy to plug GenAI into clinical environments without proprietary lock‑in. DHTI makes sharing chains remarkably simple by packaging each chain as a modular, standards‑based service that can be deployed, reused, or swapped without touching the rest of the system. Because every chain is exposed through LangServe endpoints and integrated using FHIR and CDS‑Hooks conventions, teams can share, version, and plug these chains into different EMRs or projects with minimal friction.

Explore the project here:


Try DHTI and Help Democratize GenAI in Healthcare

DHTI is open‑source, modular, and built on widely adopted standards. Whether you’re a researcher, developer, or clinician, you can use it to prototype safe, interoperable GenAI workflows that work inside real EMRs.

More examples for chains


✅ 1. Clinical Note → Problem List → ICD-10 Coding

Why chaining helps

A single LLM call struggles because:

  • The task is multi‑step: extract problems → normalize → map to ICD‑10.
  • Each step benefits from structured intermediate outputs.
  • Errors compound if the model tries to do everything at once.

Sequential Runnable Example

Step 1: Extract the structured problem list from the free‑text note
Step 2: Normalize problems to standard clinical terminology
Step 3: Map each normalized problem to ICD‑10 codes

This mirrors real clinical coding workflows and allows validation at each step.

Sequential chain sketch

  1. extract_problems(note_text)
  2. normalize_terms(problem_list)
  3. map_to_icd10(normalized_terms)
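The sketch above can be made concrete with stub functions. The note format, normalization rule, and two-entry code table are invented for illustration; a real system would use LLM calls for steps 1–2 and a terminology service for step 3.

```python
# Illustrative two-entry code table; a real step 3 would query a
# terminology service, not a hard-coded dict.
ICD10 = {"type 2 diabetes mellitus": "E11.9", "essential hypertension": "I10"}

def extract_problems(note_text):
    # LLM stand-in: assume the note lists problems after "Problems:",
    # separated by semicolons
    return [p.strip() for p in note_text.split("Problems:")[1].split(";")]

def normalize_terms(problems):
    # LLM stand-in: lowercase and expand one invented abbreviation
    return [p.lower().replace("dm2", "type 2 diabetes mellitus")
            for p in problems]

def map_to_icd10(terms):
    return {t: ICD10.get(t, "unmapped") for t in terms}

note = "Seen today. Problems: DM2; Essential hypertension"
print(map_to_icd10(normalize_terms(extract_problems(note))))
```

Because each stage returns a structured intermediate output, each can be validated (or audited) before the next runs.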

✅ 2. Clinical Decision Support: Medication Recommendation With Safety Checks

Why chaining helps

A single LLM call might hallucinate or skip safety checks. A chain allows:

  • Independent verification steps
  • Parallel evaluation of risks
  • Branching logic based on findings

Parallel Runnable Example

Given a patient with multiple comorbidities:

Parallel tasks:

  • Evaluate renal dosing requirements
  • Check drug–drug interactions
  • Assess contraindications
  • Summarize guideline‑based first‑line therapies

All run simultaneously, then merged.

Parallel chain sketch

{
  renal_check: check_renal_function(patient),
  ddi_check: check_drug_interactions(patient),
  contraindications: check_contraindications(patient),
  guideline: summarize_guidelines(condition)
}
→ combine_and_recommend()

This mirrors how pharmacists and CDS systems work: multiple independent checks feeding into a final recommendation.


✅ 3. Triage Assistant: Symptom Intake → Risk Stratification → Disposition

Why chaining helps

Triage requires conditional logic:

  • If red‑flag symptoms → urgent care
  • If moderate risk → telehealth
  • If low risk → self‑care

A single LLM call tends to blur risk categories. A branching chain enforces structure.

Branch Runnable Example

Step 1: Extract structured symptoms
Step 2: Risk stratification
Branch:

  • High risk → generate urgent-care instructions
  • Medium risk → generate telehealth plan
  • Low risk → generate self‑care guidance

Branch chain sketch

symptoms = extract_symptoms(input)
risk = stratify_risk(symptoms)

if risk == "high":
    return urgent_care_instructions(symptoms)
elif risk == "medium":
    return telehealth_plan(symptoms)
else:
    return self_care_plan(symptoms)

This mirrors real triage protocols (e.g., Schmitt/Thompson).


✅ Summary Table

Scenario | Why a Chain Helps | Best Runnable Pattern
Clinical note → ICD‑10 coding | Multi-step reasoning, structured outputs | Sequential
Medication recommendation with safety checks | Independent safety checks, guideline lookup | Parallel
Triage assistant | Conditional logic, different outputs based on risk | Branch

Bringing Generative AI Into the EHR: Why DHTI Matters (Part I)

Large Language Models (LLMs) are transforming how we think about clinical decision support, documentation, and patient engagement. Yet despite their impressive capabilities, LLMs have a fundamental limitation that becomes especially important in healthcare: LLMs are stateless. They do not remember prior interactions unless that information is explicitly included in the prompt. For clinical use, this means that patient‑specific data must be added to every prompt if we want the model to generate relevant, safe, and context‑aware output.

This is where the real challenge begins.

Image credit: Grzegorz W. Tężycki, CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons

Why Patient Context Matters for Generative AI in Healthcare

Healthcare workflows depend on rich, longitudinal patient data—medications, allergies, labs, imaging, diagnoses, and more. To generate clinically meaningful output, an LLM must be given this context. Without it, the model is essentially guessing.

But adding patient data to prompts is not as simple as it sounds. Extracting structured, reliable data from Electronic Medical Records (EMRs) is notoriously difficult. EMRs were not originally designed with AI integration in mind. Data may be siloed, inconsistently structured, or locked behind proprietary interfaces. Even when APIs exist, authentication, authorization, and data‑mapping complexities can slow down innovation.

FHIR: The Standard That Makes Interoperability Possible

Fortunately, the healthcare ecosystem has rallied around a modern interoperability standard: HL7® FHIR® (Fast Healthcare Interoperability Resources). FHIR provides a consistent, web‑friendly way to represent clinical data, making it easier for external applications—including AI systems—to retrieve patient information.

Most major EMRs now expose FHIR APIs that allow authorized systems to query patient‑specific data such as demographics, medications, conditions, and lab results. This shift has been transformative. Instead of custom integrations for each EMR vendor, developers can rely on a shared standard.

FHIR also underpins many modern interoperability frameworks, including SMART on FHIR and CDS‑Hooks. These standards are now widely adopted across the industry, with CDS‑Hooks explicitly designed to connect EMRs to external decision‑support services using FHIR data.

Displaying AI Output Inside the EMR: The Role of CDS‑Hooks

Retrieving data is only half the problem. Once an AI model generates insights, the output must be displayed inside the clinician’s workflow—not in a separate window, not in a separate app, and not in a place where it will be ignored.

This is where CDS‑Hooks comes in.

CDS‑Hooks is a standard that allows EMRs to call external decision‑support services at specific points in the clinical workflow. When a clinician opens a chart, writes an order, or reviews a medication list, the EMR can trigger a “hook” that sends key context—including the patient ID—to a backend service. That backend can then use FHIR APIs to retrieve the necessary patient data, run AI models, and return actionable “cards” that appear directly inside the EMR interface.

This pattern is powerful because:

  • It keeps clinicians in their workflow
  • It ensures AI output is tied to real‑time patient context
  • It avoids sending large amounts of PHI directly from the EMR to the AI model
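A CDS‑Hooks service's response is plain JSON. Below is a minimal sketch of a single card; the field names follow the CDS Hooks specification, but the text content is invented.

```python
import json

# Minimal CDS-Hooks service response: one "card" the EMR can display.
# Field names follow the CDS Hooks spec; the clinical text is invented.
response = {
    "cards": [
        {
            "summary": "A1c overdue",    # short headline shown in the EMR
            "indicator": "warning",      # one of: info | warning | critical
            "detail": "Last A1c was 13 months ago; "
                      "guideline suggests retesting every 6 months.",
            "source": {"label": "Glycemic advisor (demo)"},
        }
    ]
}
print(json.dumps(response, indent=2))
```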

In short, CDS‑Hooks is the bridge between EMRs and modern AI‑powered decision support.

DHTI: A Reference Architecture for GenAI in Healthcare

As interest in generative AI grows, developers and researchers need a framework that brings all these pieces together—LLMs, FHIR, CDS‑Hooks, EMR integration, and modular AI components. DHTI (Distributed Health Technology Interface) is one such open‑source project.

DHTI embraces the standards that matter:

  • FHIR for structured data exchange
  • CDS‑Hooks for embedding AI output in the EMR
  • LangServe for hosting modular GenAI applications
  • Ollama for local LLM hosting
  • OpenMRS as an open‑source EMR environment

The project’s documentation highlights how CDS‑Hooks is used to send patient context (including patient ID) and how backend services retrieve additional data using FHIR before generating AI‑driven insights. DHTI’s architecture is intentionally modular, allowing developers to prototype new GenAI “elixirs” (backend services) and UI “conches” (frontend components) that plug directly into an EMR environment.

You can explore the project here:

Why This Matters for the Future of Clinical AI

Healthcare AI must be:

  • Context‑aware
  • Integrated into clinical workflows
  • Standards‑based
  • Secure and privacy‑preserving
  • Interoperable across EMRs

LLMs alone cannot meet these requirements. But LLMs combined with FHIR, CDS‑Hooks, and frameworks like DHTI can.

This is how we move from isolated AI demos to real, production‑ready clinical tools.

Try DHTI and Help Democratize GenAI in Healthcare

Come, join us to make generative AI in healthcare more accessible! 

ChatGPT captured the imagination of the healthcare world, though it also fostered the rather misguided belief that all healthcare needs is a chatbot application that can make API calls. A more realistic and practical way to leverage generative AI in healthcare is to focus on specific problems that can benefit from its ability to synthesize and augment data, generate hypotheses and explanations, and enhance communication and education.

Generative AI Image credit: Bovee and Thill, CC BY 2.0 https://creativecommons.org/licenses/by/2.0, via Wikimedia Commons

One of the main challenges of applying generative AI in healthcare is that it requires a high level of technical expertise and resources to develop and deploy solutions. This creates a barrier for many healthcare organizations, especially smaller ones, that do not have the capacity or the budget to build or purchase customized applications. As a result, generative AI applications are often limited to large health systems that can invest in innovation and experimentation. Needless to say, this has widened the already wide digital health disparity.

One of my goals is to use some of the experience that I have gained as part of an early adopter team to increase the use and availability of Gen AI in regions where it can save lives. I think it is essential to incorporate this mission in the design thinking itself if we want to create applications that we can scale everywhere. What I envision is a platform that can host and support a variety of generative AI applications that can be easily accessed and integrated by healthcare organizations and professionals. The platform would provide the necessary infrastructure, tools, and services to enable developers and users to create, customize, and deploy generative AI solutions for various healthcare problems. The platform would also foster a community of practice and collaboration among different stakeholders, such as researchers, clinicians, educators, and patients, who can share their insights, feedback, and best practices. 

I have done some initial work, guided by my experience with OpenMRS, and I have been greatly inspired by Bhamini. The focus is on modular design at both the UI and API layers. OpenMRS O3 and LangServe templates show promise for modular design. I hope to release the first iteration on GitHub in late August 2024.

Do reach out in the comments below, if you wish to join this endeavour, and together we can shape the future of healthcare with generative AI. 

Read Part II

Kedro for multimodal machine learning in healthcare 

Healthcare data is heterogeneous, spanning several types of data such as reports, tabular data, and images. Combining multiple modalities of data into a single model can be challenging for several reasons. One challenge is that the diverse types of data may have different structures, formats, and scales, which can make it difficult to integrate them into a single model. Additionally, some modalities of data may be missing or incomplete, which can make it difficult to train a model effectively. Another challenge is that different modalities of data may require different types of pre-processing and feature extraction techniques, which can further complicate the integration process. Furthermore, the lack of large-scale, annotated datasets that have multiple modalities of data can also be a challenge. Despite these challenges, advances in deep learning, multi-task learning and transfer learning are making it possible to develop models that can effectively combine multiple modalities of data and achieve reliable performance.


Kedro for multimodal machine learning

Kedro is an open-source Python framework that helps data scientists and engineers organize their code, increase productivity and collaboration, and make it easier to deploy their models to production. It is built on top of popular libraries such as Pandas, TensorFlow and PySpark, and follows best practices from software engineering, such as modularity and code reusability. Kedro supplies a standardized structure for organizing code, handling data and configuration, and running experiments. It also includes built-in support for version control, logging, and testing, making it easy to implement reproducible and maintainable pipelines. Additionally, Kedro allows you to easily deploy pipelines on cloud platforms like AWS, GCP or Azure. This makes it a powerful tool for creating robust and scalable data science and data engineering pipelines.
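Kedro's core idea, nodes that read named inputs from a data catalog and write named outputs back, can be mimicked in a few lines of plain Python. This is a conceptual toy to show the shape of a pipeline, not the real Kedro API.

```python
# Toy mimic of Kedro's node/catalog idea (not the real kedro API):
# each node reads named inputs from a catalog dict and writes its
# output back under a named key.

def node(func, inputs, output):
    return {"func": func, "inputs": inputs, "output": output}

def run_pipeline(nodes, catalog):
    for n in nodes:
        args = [catalog[name] for name in n["inputs"]]
        catalog[n["output"]] = n["func"](*args)
    return catalog

pipeline = [
    # Preprocess each modality separately, then fuse (late-fusion stand-in)
    node(lambda img: [v / 255 for v in img], ["image_raw"], "image_features"),
    node(lambda txt: txt.lower().split(), ["report_raw"], "text_features"),
    node(lambda i, t: {"image": i, "text": t},
         ["image_features", "text_features"], "fused"),
]
catalog = {"image_raw": [255, 0], "report_raw": "No acute findings"}
print(run_pipeline(pipeline, catalog)["fused"])
```

Real Kedro adds what the toy omits: a declarative data catalog, dependency resolution between nodes, versioning, and deployment targets.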

I have built a few Kedro packages that can make multi-modal machine learning easy in healthcare. The packages supply prebuilt pipelines for preprocessing image, tabular and text data, and for building fusion models that can be trained on multi-modal data for easy deployment. The text preprocessing package currently supports BERT and CNN-text models. There is also a template that you can copy to build your own pipelines making use of the preprocessing pipelines that I have built. Any number and combination of data types are supported. Additionally, like any other Kedro pipeline, these can be deployed on Kubeflow and Vertex AI. Do comment below if you find these tools useful in your research.


kedro-multimodal by dermatologist

Template for multi-modal machine learning in healthcare using Kedro. Combine reports, tabular data and image using various fusion methods.

OHDSI OMOP CDM ETL Tools in Python, .Net and Go

TL;DR Here are a few OHDSI OMOP CDM tools that may save you time if you are developing ETL tools!

Python: pyomop | pypi
.NET: omopcdmlib | NuGet
Golang: gocdm

OHDSI OMOP CDM Libraries

The COVID-19 pandemic brought to light many of the vulnerabilities in our data collection and analytics workflows. The lack of uniform data models limits the analytical capabilities of public health organizations, and many of them have to re-invent the wheel even for basic analysis. While many other sectors embrace big data and machine learning, many healthcare analysts are still stuck with basic data wrangling in Excel.

The OHDSI OMOP CDM (Common data model) for observational data is a popular initiative for bringing data into a common format that allows for collaborative research, large-scale analytics, and sharing of sophisticated tools and methodologies. Though OHDSI OMOP CDM is primarily for patient-centred observational analysis, mostly for clinical research, it can be used with minor tweaks for public health and epidemiologic data as well. We have written about some of the technical details here.

The OHDSI OMOP CDM is simpler and more intuitive for clinical teams than emerging standards such as FHIR. Though the relational database approach and some of the software tools associated with OHDSI OMOP CDM are a bit old-fashioned, the data model is clinically motivated. There is an ecosystem of analytics tools that can be used out of the box. The Observational Medical Outcomes Partnership (OMOP) CDM, now in version 6.0, has simple but powerful vocabulary management. OHDSI OMOP CDM is a good choice for healthcare organizations moving towards health data warehousing and OLAP.

One weakness of OHDSI is the lack of tools for efficient ETL from existing EHRs and HISs. Converting existing EHR data to the CDM is still a complex task that requires technical expertise. During the additional “home time” of the COVID pandemic, I created three software libraries for ETL tool developers. These libraries, in Python, .NET and Golang, encapsulate the v6.0 CDM and help in writing data to and reading data from a variety of databases with the v6.0 tables. The libraries also support creating the CDM tables in new databases and loading the vocabulary files.

Python: pyomop | pypi
.NET: omopcdmlib | NuGet
Golang: gocdm
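As a rough illustration of what these libraries automate, here is a hand-rolled sketch that creates one trimmed-down CDM-style table and inserts a row. The real CDM defines dozens of tables and many more columns, and the concept IDs shown are illustrative.

```python
import sqlite3

# Hand-rolled sketch of what a CDM library automates: create a trimmed
# "person" table and insert one row. The real CDM DDL covers many more
# columns and roughly forty tables; concept IDs here are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE person (
        person_id INTEGER PRIMARY KEY,
        gender_concept_id INTEGER NOT NULL,
        year_of_birth INTEGER NOT NULL,
        race_concept_id INTEGER NOT NULL,
        ethnicity_concept_id INTEGER NOT NULL
    )""")
conn.execute("INSERT INTO person VALUES (1, 8507, 1980, 8527, 38003564)")
row = conn.execute("SELECT person_id, year_of_birth FROM person").fetchone()
print(row)  # → (1, 1980)
```

The libraries above generate this kind of DDL for the full table set and map rows to typed objects, so ETL scripts never hand-write SQL like this.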

These libraries might save you some time if you are building scripts for ETL to CDM. They are all open-source and free to use in your tools. Do give me a shout if you find these libraries useful and please star the repositories on GitHub.

Serverless on FHIR: Management guidelines for the semi-technical clinician!

Serverless is the new kid on the block, with services such as AWS Lambda, Google Cloud Functions and Microsoft Azure Functions. Essentially, it lets users deploy a function (Function as a Service, or FaaS) on the cloud with very little effort. Requirements such as security, privacy, scaling, and availability are taken care of by the framework itself. As healthcare slowly yet steadily progresses towards machine learning and AI, serverless is sure to make a significant impact on Health IT. Here I will explain serverless (and some related technologies) for the semi-technical clinician and put forward some architectural best practices for using serverless in healthcare with FHIR as the data interchange format.

Serverless on FHIR

Let us say, your analyst creates a neural network model based on a few million patient records that can predict the risk for MI from BP, blood sugar, and exercise. Let us call this model r = f(bp, bs, e). The model is so good that you want to use it on a regular basis on your patients and better still, you want to share it with your colleagues. So you contact your IT team to make this happen.

This is what your IT guys currently do: First, they create a web application that can take bp, bs and e as inputs using a standard interface such as REST and return r. Next, they rent a virtual machine (VM) from a cloud provider (such as DigitalOcean). Then they convert this application into a container (docker) and deploy it in the VM. You now can use this as an application from your browser (chrome) or your EMR (such as OpenMRS or OSCAR) can directly access this function. You can share it with your colleagues and they can access it in their browsers and you are happy. The VM can support up to 3 users at a time.

In a couple of months, your algorithm becomes so popular that at any one time hundreds of users try to access it and your poor VM crashes most of the time or your users have to wait forever. So you call your IT guys again for a solution. They make 100 copies of your container, but your hospital is reluctant to give you the additional funding required.

Your smart resident notices that your application is being used only in the morning hours and in the night all the 100 containers are virtually sleeping. This is not a good use of the funding dollars. You contact your IT guys again, and they set up Kubernetes for orchestrating the containers according to usage. So, what is Serverless? Serverless is a framework that makes all these so easy that you may not even need your IT guys to do this. (Well, maybe that is an exaggeration)

My personal favourite serverless toolset (if you care) is Kubernetes + Knative + riff. I won’t try to explain what the last two are or how to use them; they are so new that they keep changing every day. In essence, your IT team can complete all the above tasks with a few commands typed on the command line on the cloud provider of your choice. The application (function, rather) can even scale to zero! You don’t pay anything when nobody uses it, the framework adds more containers as users increase, and it scales down in the night, as in your case.

Best Practices

What are the best practices when you design such useful cloud-based ‘functions’ for healthcare that can be shared by multiple users and organizations? Well, here are my two cents!

First, you need a standard for data exchange. As JSON is the data format for most APIs, FHIR wins hands down here.

Next, APIs need a mechanism to expose their capabilities and properties to the world. For example, r = f(bp, bs, e) needs to tell everyone, at the bare minimum, what it accepts (bp, bs, e) and what it returns. FHIR has a resource specifically for this that has been (not so creatively) named Endpoint. So, a function should return a FHIR Endpoint resource describing itself when it is called with no payload.

What should the payload be? Payload should be a FHIR Bundle that has all the FHIR Resources that the function needs (bp, bs and e as FHIR Observations in your case). The bundle should also include a FHIR Subscription resource that points to the receiving system (maybe your EMR) for the response ( r ).
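Putting those two suggestions together, the payload might look like the following sketch. The codes, URLs, and field choices are illustrative, not a validated FHIR profile.

```python
import json

# Sketch of the proposed payload: a FHIR Bundle carrying the three input
# Observations plus a Subscription pointing back at the caller's endpoint.
# All codes and URLs are invented for illustration.
def observation(code, value, unit):
    return {"resourceType": "Observation", "status": "final",
            "code": {"text": code},
            "valueQuantity": {"value": value, "unit": unit}}

bundle = {
    "resourceType": "Bundle",
    "type": "collection",
    "entry": [
        {"resource": observation("systolic blood pressure", 142, "mm[Hg]")},
        {"resource": observation("blood glucose", 7.1, "mmol/L")},
        {"resource": observation("exercise minutes per week", 90, "min")},
        # Where the function should send the risk score r
        {"resource": {"resourceType": "Subscription", "status": "requested",
                      "reason": "Return MI risk score",
                      "channel": {"type": "rest-hook",
                                  "endpoint":
                                  "https://emr.example.org/risk-callback"}}},
    ],
}
print(json.dumps(bundle, indent=2))
```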

So, what next?

Pick up the phone and call your IT team. Tell them to take Kubernetes + Knative + riff for a spin! I might do the same, and if I do, I will share it here. And last but not least, click on the blue buttons below! 🙂

10 points to consider before adopting open-source software in eHealth

Open-source software (hereafter OSS) is a phenomenon that has revolutionized the software industry. OSS is supported by voluntary programmers who regularly dedicate their time and energy to the common good. The question that immediately comes to mind is: how is this sustainable? Will they continue to contribute their social hours forever? Read the programmer's perspective here. But does it make sense for healthcare organizations to rely on their charity indefinitely? And how can organizations that adopt OSS improve the sustainability of these projects? Here are some of the factors to consider:


Do you have enough funding?

OSS supporters are humanists with an emancipatory worldview. OSS is fundamentally not designed for an organization that can sustain a paid product. Firstly, there is the ethical problem of exploiting the OSS community. But more importantly, healthcare organizations with enough funding tend to spend more on the long-term maintenance and customization of OSS. Hence, OSS is generally best seen as an option when you have no other option.

Does the project have a regional focus?

OSS projects generally aim to solve global problems. So be careful when you hear of Canadian OSS or Danish OSS. Regional OSS is often just a cheaper local product masquerading as OSS for funding or other reasons. Such projects are unlikely to have the support of the global OSS community and are prone to burnout.

Is the OSS really OSS?

Any OSS worth its salt will be on GitHub. If you cannot find the project on GitHub, you should definitely ask why.

Is it really popular?

Some projects that masquerade as OSS claim to have a worldwide network of developers. GitHub stars and forks are a reasonable indicator of actual popularity. Consider an OSS project for your organization only if it has a thousand stars in the GitHub sky.

Are you looking for a specific workflow support?

Is your workflow generic enough to be supported by a global network of volunteers? An OHIP billing workflow, for example, may not be the right process for which to seek OSS support.

Do you need customization?

If you need a lot of customization to support your workflow, then OSS may not be the ideal solution. OSS is best suited to situations where you can use it out of the box.

Do you have the time?

Remember that OSS is supported by voluntary programmers. So if you need a feature, you make a request and wait. If your organization is used to making demands, then OSS is not for you. An OSS project is not owned by anyone, so its priorities may differ from yours.

Do you have internal expertise?

It is far easier to use an OSS product if you have someone in your organization who supports the project. The OSS community tends to respect one of its own more than it does an organization.

Are you supporting the project?

It is crucial for organizations that depend on OSS for their day-to-day operations to support the project. If the project becomes unsustainable, it affects the organization too. You can support a project in many ways, such as donations, coding support and infrastructural support.

Do you know what OSS means and stands for?

Does higher management know what OSS means and stands for? It is common for healthcare organizations to adopt OSS focusing only on the "free" aspect.

“Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer”. (Richard Stallman, Free Software Foundation)

Personally, I think the first point is the most important. OSS is designed for, and intended to be used in, areas where a paid option is not viable. In other healthcare scenarios, you are likely to spend more on an open-source product than you would on a commercial one.

Finally, a quick mention of some noteworthy OSS in healthcare. OpenMRS is an open-source EMR started with the mission of improving healthcare delivery in resource-constrained environments. DHIS2 is a web-based open-source public health information system with excellent visualization features, including GIS, charts and pivot tables.