The integration of generative AI into healthcare systems promises transformative potential, but realizing this vision requires more than just powerful models. It demands a robust, modular, and developer-friendly framework that can bridge the gap between cutting-edge AI and real-world clinical workflows. This new open-source initiative offers precisely that: a reference architecture designed to accelerate experimentation, deployment, and collaboration in healthcare AI.

Image credit: RMHare, CC0, via Wikimedia Commons
🧩 Modular by Design
At its core, this framework emphasizes modularity and configurability. It is deliberately unopinionated, allowing developers and researchers to plug in their preferred language models, hyperparameters, and data pipelines. The architecture supports installable backend routines (called “elixirs”) and frontend UI components (called “conches”), enabling rapid prototyping and customization.
Elixirs are LangChain-based templates that encapsulate GenAI functionality, while conches are React-based UI modules designed to integrate seamlessly with electronic medical records (EMRs). Together, they form a flexible ecosystem that can be tailored to diverse healthcare use cases—from clinical decision support to patient record summarization.
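Since the framework has not yet been released, the exact elixir interface is not public. As a rough sketch of the plug-in idea described above, a registry that builds backend routines from caller-supplied hyperparameters might look like this (the names `ElixirRegistry` and `summarize_record` are illustrative, not from the project):

```python
# Hypothetical sketch of registering and invoking an "elixir" (backend
# GenAI routine). The real framework uses LangChain-based templates; this
# stub only demonstrates the pluggable, configurable pattern.

class ElixirRegistry:
    """Keeps installable backend routines pluggable and configurable."""

    def __init__(self):
        self._elixirs = {}

    def register(self, name, factory):
        # `factory` builds the routine with caller-supplied hyperparameters,
        # so the framework stays unopinionated about models and settings.
        self._elixirs[name] = factory

    def create(self, name, **hyperparams):
        return self._elixirs[name](**hyperparams)


def summarize_record(model="local-llm", temperature=0.2):
    # A real elixir would assemble a LangChain chain here; this stub
    # simply echoes its configuration.
    def run(text):
        return f"[{model} @ T={temperature}] summary of {len(text)} chars"
    return run


registry = ElixirRegistry()
registry.register("summarize", summarize_record)
chain = registry.create("summarize", temperature=0.0)
print(chain("Patient presents with..."))
```

A conch would then call such a routine from the React frontend, keeping UI and GenAI logic independently swappable.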
🏥 Seamless EMR Integration
The framework integrates with OpenMRS, a widely used open-source EMR, and supports the FHIR data model for interoperability. This ensures that GenAI applications can operate directly within the context of patient records, enabling real-time insights and decision support. Clinical Quality Language (CQL) support further enhances its utility for rule-based reasoning and validation.
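Because the framework speaks FHIR, elixirs can consume patient data as standard FHIR resources. A minimal sketch of reading a FHIR R4 `Patient` resource (the record below is synthetic, not from OpenMRS):

```python
# Parse a synthetic FHIR R4 Patient resource and extract a display name.
# Field layout (resourceType, name[].family/given, birthDate) follows the
# FHIR R4 specification.
import json

patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1980-04-01"
}
"""

patient = json.loads(patient_json)
assert patient["resourceType"] == "Patient"
display_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
print(display_name)  # Jane Doe
```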
A built-in CLI tool streamlines setup and management. With just a few commands, users can spin up a local instance that includes an EMR, LLM server, vector store, monitoring tools, and more—all orchestrated via Docker Compose.
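The composed stack described above might look roughly like the following Compose file. This is an illustration only: the service names, images, and ports are assumptions, not the project's actual configuration.

```yaml
# Illustrative docker-compose sketch of the described stack; not the
# project's real file.
services:
  openmrs:
    image: openmrs/openmrs-core   # EMR backend
    ports: ["8080:8080"]
  ollama:
    image: ollama/ollama          # self-hosted LLM server
    ports: ["11434:11434"]
  redis:
    image: redis/redis-stack      # vector store for RAG
    ports: ["6379:6379"]
  langfuse:
    image: langfuse/langfuse      # LLM observability
    ports: ["3000:3000"]
```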
🔍 Privacy and Monitoring
Recognizing the sensitivity of healthcare data, the architecture supports self-hosted LLMs using tools like Ollama. This allows institutions to maintain control over their data while leveraging powerful AI capabilities. Monitoring is handled via LangFuse, providing transparency into model behavior and performance.
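Talking to a self-hosted Ollama server is a plain HTTP call. The sketch below builds a request for Ollama's documented `/api/generate` endpoint without sending it, so it runs with no server present; the model tag and prompt are placeholders:

```python
# Construct (but do not send) a request for a local Ollama server's
# /api/generate endpoint. The payload fields (model, prompt, stream)
# follow Ollama's documented REST API.
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

payload = {
    "model": "llama3",                  # any locally pulled model tag
    "prompt": "Summarize: fever for 3 days, dry cough, no dyspnea.",
    "stream": False,                    # ask for one complete response
}

body = json.dumps(payload)
print(OLLAMA_URL, len(body))

# To actually send it against a running server:
#   import urllib.request
#   req = urllib.request.Request(OLLAMA_URL, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   print(json.loads(urllib.request.urlopen(req).read())["response"])
```

Because the model runs locally, the prompt (and any patient context in it) never leaves the institution's infrastructure.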
The inclusion of Redis as a vector store enables retrieval-augmented generation (RAG), while Neo4j adds graph-based utilities for complex data relationships. Together, these components form a comprehensive stack for building intelligent, context-aware healthcare applications.
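The retrieval step at the heart of RAG can be illustrated in a few lines: embed documents and a query as vectors, rank by cosine similarity, and pass the best match to the LLM as context. A real deployment would use Redis vector search and a proper embedding model; the tiny 3-d vectors below are made up.

```python
# Toy RAG retrieval: rank documents by cosine similarity to a query vector.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

docs = {
    "discharge summary": [0.9, 0.1, 0.0],
    "lab results":       [0.1, 0.8, 0.2],
    "allergy list":      [0.0, 0.2, 0.9],
}
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "summarize the stay"

best = max(docs, key=lambda name: cosine(docs[name], query_vec))
print(best)  # discharge summary
```

Neo4j complements this by modeling explicit relationships (e.g., patient–encounter–condition links) that pure vector similarity cannot express.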
🧪 For Developers and Researchers
Developers can build and test elixirs and conches locally, injecting dependencies and hyperparameters at runtime. The CLI supports copying working files into containers for seamless testing. Researchers, meanwhile, can use the platform to deploy and evaluate prompts, chains, and agents in realistic clinical scenarios.
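Injecting dependencies and hyperparameters at runtime might look like the following sketch, where caller overrides are merged over defaults and a fake client stands in for a real LLM during local testing (all names here are illustrative, not the framework's API):

```python
# Sketch of runtime dependency and hyperparameter injection for local
# elixir development. Defaults are overridden by caller-supplied values.

DEFAULTS = {"model": "llama3", "temperature": 0.2, "max_tokens": 512}

def build_elixir(llm_client, **overrides):
    """Merge overrides onto defaults and close over the injected client."""
    config = {**DEFAULTS, **overrides}

    def run(prompt):
        return llm_client(prompt, **config)
    return run

# A fake client replaces a real LLM call in tests.
def fake_client(prompt, model, temperature, max_tokens):
    return f"{model}(T={temperature}, max={max_tokens}): {prompt[:20]}"

elixir = build_elixir(fake_client, temperature=0.0)
print(elixir("Summarize the encounter note."))
```

Swapping `fake_client` for a real LangChain model object is then a one-line change, which is what makes local iteration fast.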
Synthetic data generation is built-in, allowing safe experimentation without compromising patient privacy. The roadmap includes tools for fine-tuning models and publishing them to platforms like HuggingFace, fostering open collaboration.
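In the spirit of that built-in synthetic data support, a seeded toy generator keeps experiments reproducible while involving no real patients (field names and value pools below are invented for illustration):

```python
# Toy synthetic-record generator: seeded for reproducibility, no real
# patient data involved.
import random

random.seed(42)  # reproducible cohort across runs

CONDITIONS = ["hypertension", "type 2 diabetes", "asthma", "eczema"]

def synthetic_patient(pid):
    return {
        "id": f"synth-{pid:04d}",
        "age": random.randint(18, 90),
        "condition": random.choice(CONDITIONS),
    }

cohort = [synthetic_patient(i) for i in range(3)]
for p in cohort:
    print(p)
```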
🚀 Getting Started
Setup is remarkably simple. With Docker and Node.js installed, users can clone the repository, install dependencies, and launch the full stack in minutes. Sample elixirs and conches are available to demonstrate functionality, and templates make it easy to build custom modules.
Whether you’re a developer looking to prototype a GenAI-powered diagnostic tool or a researcher exploring prompt engineering in clinical contexts, this framework offers a powerful starting point. It’s a step toward making generative AI equitable, accessible, and impactful in healthcare.
This has not been released yet. The main module will be available at the link below once released. Please check back again in a few days.
https://github.com/dermatologist/dhti
- Building a Modular Framework for Generative AI in Healthcare - July 18, 2025