đ° $170M to govern what's inside, đ¤ Nomad
$125M to manage AI safely. $45M to staff it with clinicians.
Two health tech raises dropped yesterday that, taken together, map out two bets on where the value in clinical AI actually sits.
Qualified Health raised $125 million in a Series B led by NEA, with participation from an Anthropic-backed fund. The company builds the infrastructure layer for enterprise AI at health systems: a connected data foundation that pulls EHR and operational data into an AI-ready schema, builder tooling for developing applications, and a governance framework with monitoring, access controls, and risk alerts. At UTMB, they generated $15M+ in run-rate impact in six months. The pitch: health systems donât need to figure out AI infrastructure themselves. Qualified Health does it for them.
Thesis Care raised $45 million in a Series A led by Oak HC/FT. The company â co-founded by Niren Gandra MD, previously chief commercial officer at Cedar â deploys AI agents for care management backed by a team of clinical experts in the loop. The model is explicit: AI handles the context, the coordination, and the first-pass action; clinical experts handle escalations and anything requiring genuine judgment. The pitch: the agent era in healthcare requires both AI and the clinical expertise to know when the AI is wrong.
Whatâs interesting is that these are not competing bets. Qualified Health builds the data and governance infrastructure for health systems deploying any AI. Thesis Care builds a specific clinical product that runs on top of that kind of infrastructure. Theyâre different layers of the same stack.
The clinician-builder implication across both raises: the infrastructure problem of health AI (clean data, governed deployment, auditable outputs) is now attracting serious capital. The clinical insight problem (what should the AI actually do, what workflow is actually broken, what does correct look like) is still where the domain expertise is irreplaceable. Qualified Health canât tell a health system what to build. Thesis Careâs MD co-founder is the reason the platform knows when to escalate. In both cases, the person who understands the clinical problem is the scarcest input.
đ¤ Haters
âAnother health AI governance platform â this is a solution in search of a problem.â
UTMBâs $15M run-rate impact in six months suggests otherwise. The implementation problem is real; health systems are not equipped to build this internally.
âExpert clinicians in the loop just means the AI is a fancy triage filter.â
Or it means the product actually works because it knows its own failure modes. Most clinical AI fails when it doesnât know what it doesnât know. Building human escalation in at the architectural level is the difference between a demo and a deployment.
âThese raises prove health AI is overfunded.â
Two companies, $170M combined, addressing the governance infrastructure and the care management layer simultaneously â in a sector spending $600B+ annually. Thatâs not frothy.
đĄ 80/20: For clinician-builders inside health systems: the most valuable thing you can do right now is document the workflow you want to automate in terms that an AI governance platform can evaluate. What does correct output look like? Who reviews it? Whatâs the escalation path? If you canât answer those three questions, you canât sell the use case internally â regardless of what platform the health system buys. Try: write a one-page âAI specâ for one clinical tool you want to build. Not the technical spec â the governance spec. Itâll be harder than you expect.
â Qualified Health story ¡ Qualified Health Series B ¡ Thesis Care Series A
Project NOMAD: offline AI + medical knowledge, designed for anywhere
Project NOMAD (N.O.M.A.D — roughly, Network-Optional Medical-grade AI Device) is an open-source, self-contained offline knowledge and AI system designed to run on any Debian-based system. It bundles offline Wikipedia, Project Gutenberg, medical references, repair guides, and local LLMs into a browser-accessible server — no internet required. The LLMs run locally; nothing goes to a cloud, and the medical references sit offline alongside the general knowledge base.
The use case itâs built for is explicitly austere: no internet, no cloud, complete self-sufficiency. For clinicians, the obvious application is wilderness medicine, disaster medicine, resource-limited settings, and rural practice environments where connectivity is intermittent or unavailable. But thereâs a broader clinical AI design question it points at: what does it look like to build clinical decision support that is genuinely offline-capable, with a data footprint that never touches external servers?
â ď¸ Verify: âMedical referencesâ is a broad claim. Before relying on any offline medical knowledge base for clinical decisions, verify what sources are included, how theyâre updated, and what the version/currency lag looks like.
đ¤ Haters
âNo serious clinical deployment should run on an offline LLM with a static knowledge base. Guidelines change; a frozen model gives dangerous answers.â
Valid concern for anything close to the clinical decision point. The verification caveat applies here: offline knowledge has a currency problem that matters most for drug dosing, guideline-based recommendations, and anything where evidence has moved since the model was last updated. The use case is resource-limited environments where an outdated answer is better than no answer â which is a real use case for some clinicians.
âThis is a prepper project with a medical knowledge section, not a clinical AI tool.â
Maybe. The underlying architecture â offline LLM, local medical references, browser-accessible on a local server â is exactly what a clinician-builder would want for a privacy-first, air-gapped clinical tool prototype. The prepper framing doesnât negate the technical pattern.
đĄ 80/20: The pattern here matters more than the specific project: local LLM inference + offline medical knowledge base + browser-accessible interface. That architecture is directly relevant to anyone building clinical decision support that needs to work without internet (rural clinics, austere environments, international deployments, or just privacy-first prototypes on synthetic data). Try: install Ollama locally if you havenât already and run MedGemma or any available clinical model against a local medical reference document. Thatâs the prototype of what Project NOMAD is packaging more formally.
â Full write-up ¡ Project NOMAD on GitHub ¡ projectnomad.us
đŻ Clinician-Builder Tip of the Day
Before you write the first line of code for an AI feature, answer three questions: (1) What does a wrong output look like, and how would you know? (2) Where does a human sit in the execution loop â start, middle, or end? (3) Whatâs the escalation path when the AI isnât sure? If you canât answer all three, you havenât specced the feature â youâve just decided to start building. These arenât compliance theater. Theyâre the difference between a demo that impresses and a tool that actually gets used at 2 AM when someoneâs potassium is 6.8 and the clinical situation is ambiguous. The architecture answers these questions before your code does. Make sure itâs answering them the way youâd want.
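Question (3) is the easiest to make executable. A hedged sketch of an escalation gate — the thresholds, field names, and scoring are placeholders you'd tune against your own review data, not any vendor's API:

```python
"""Sketch of an escalation gate: every AI output either proceeds or routes
to a human -- there is no silent third path. All values are illustrative."""
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85  # assumption: calibrate against human review outcomes

@dataclass
class AgentResult:
    answer: str
    confidence: float  # model- or calibration-derived score
    in_scope: bool     # did the request match a workflow you actually specced?

def route(result: AgentResult) -> str:
    """The three questions, in code: out-of-scope is a wrong-output detector
    (question 1), and both branches put a human at the end of the loop
    (questions 2 and 3)."""
    if not result.in_scope:
        return "escalate:out_of_scope"
    if result.confidence < CONFIDENCE_FLOOR:
        return "escalate:low_confidence"
    return "proceed"

# The ambiguous 2 AM case should escalate, not proceed:
print(route(AgentResult("hold ACE inhibitor", confidence=0.62, in_scope=True)))
# prints "escalate:low_confidence"
```

Ten lines of routing logic, but writing it forces you to answer all three questions before the first real feature ships.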
What are you building this week? Reply and tell me â I read every one.
â Kevin

