The "Two-Door Principle" đȘđȘ, AI governance hits $100M đ°
đĄ Builderâs Radar
Qualified Health is reportedly raising ~$100M â AI governance is becoming its own category.
Axios reported that Qualified Health, which builds an AI evaluation and governance platform for health systems, is raising roughly $100M in Series A funding from NEA and SignalFire. The company previously raised a $30M seed round and has landed the UT System and Jefferson Health as customers. The product wraps LLMs with audit trails, role-based access controls, and policy enforcement — essentially the compliance infrastructure health systems need before they'll let anyone deploy an AI tool in production. Here's the clinician-builder read: the governance layer is becoming a prerequisite, not an afterthought. If you're building something you want a health system to actually use, understanding what Qualified Health (and tools like it) will require of you is as important as understanding the EHR API.
Clinicians are leading AI evaluation now â and that shift matters for what you build.
A post-HIMSS development worth tracking: the Healthcare AI Challenge and its AI Arena platform are putting clinical experts directly in the loop for evaluating AI model outputs across tasks, including agentic workflows. This follows the STAT reporting I covered Wednesday about the validation gap at HIMSS â everyone has agents, nobody has validation frameworks. AI Arena is one answer: let clinicians compare model outputs head-to-head on real clinical tasks. The implication for builders: the tools that win deployment wonât just be the ones that work â theyâll be the ones that can demonstrate they work in a clinician-evaluated, auditable way. If your tool canât be put through a structured evaluation by a clinical expert, itâs going to struggle to get past the governance layer thatâs rapidly forming (see: Qualified Health above).
đ ïž From the Workbench
The âtwo-door principleâ for agent architecture â from Nateâs Newsletter, applicable to clinical tools. Nateâs latest issue introduces a pattern worth stealing: every AI agent extension should have an âagent doorâ and a âhuman doorâ to the same shared data. The agent enters through one interface (conversational, autonomous, background-processing). The human enters through another (visual dashboard, mobile view, structured display). Both read and write to the same underlying data, but each does what itâs best at. If youâre building a clinical tool with an AI agent component â say, a triage assistant, a care coordination tracker, or a medication reconciliation helper â this pattern maps directly. The agent monitors, flags, and prepares. The clinician reviews, decides, and acts. Same data surface, two doors. Itâs a useful mental model for designing the human-AI handoff that regulators and governance frameworks are going to demand anyway.
What are you building this week? Reply and tell me â I read every one.
â Kevin