Utah's medical board pulls the cord on Doctronic 🛑, Wachter says Epic AI is good enough 🏭, A CTDO says academic systems should build their own AI in-house ⁉️
Utah’s medical board called for immediate suspension of the state’s AI-doctor pilot. The line on autonomous prescribing just got drawn.
Reporting on April 24 confirmed that the Utah Physicians Licensing Board has demanded the immediate suspension of a state pilot using an LLM-based bot to renew prescriptions for Utah patients. The same day, the Federation of State Medical Boards posted the board’s letter publicly. The letter’s argument is structural: the pilot was authorized through the executive branch in a way the board considers a circumvention of the medical practice act and of the board’s authority to define what constitutes the practice of medicine in the state.
That letter sits inside a tight one-week stack. Health Affairs Forefront published a critique of the same pilot, framing it as a state operating ahead of federal oversight in a way that exposes patients to safety risks. And JAMA Network Open published a study — Mass General Brigham, 21 leading LLMs, 29 clinical vignettes — that found all 21 models (GPT-5, Claude, Gemini, Grok, all of them) failed 80%+ of differential diagnoses when patient data was incomplete. Final-diagnosis accuracy hit 90%+ when the data was there. The hard part is the iterative reasoning under uncertainty that pulls more data into the picture before settling on a diagnosis. That is what clinicians do. It is also what the Utah pilot was using AI to do, unsupervised.
The framing matters. This was not a vendor deploying a clinical tool that a state board found out about through a complaint. This was a state government, through executive action, declaring that an LLM could practice medicine. The board letter is not about the company; it is about the principle that defining the practice of medicine sits with the medical board, not the executive branch. The FDA has already signaled it will not regulate general-purpose LLMs even when they give health advice. State boards just established that they will fill the regulatory gap. For a clinician-builder, the line is now visible: shipping a tool that helps a clinician prescribe more efficiently is a real product. Shipping a tool that prescribes without a clinician in the loop has a regulatory letter on FSMB letterhead arguing the state cannot — by fiat — declare an LLM a practitioner.
😤 Haters
“This is one state board defending its turf, not a precedent.” Maybe — and the procedural challenge is real. But the FSMB hosting the letter publicly is the move that converts a single-state action into a national template. Other state boards facing similar pilots now have language to copy. The first letter is the precedent; the next five make it a movement.
“AI prescribing is happening anyway through Doximity, OpenEvidence, and ChatGPT for Clinicians — this is just labeling.” That confuses two different things. Decision support tools with a verified human in the loop — even free clinician-facing ones — are not what the Utah pilot was. Watch which states draw that distinction explicitly. The vendors with a human-in-the-loop posture should be glad this happened; it strengthens their position by clarifying it.
“The JAMA paper is just academics being academics — clinicians use Doximity AI all day and it works.” It works because they are the safety layer. The 80%+ failure rate on incomplete data is exactly what gets caught when a clinician reads the model output, notices what is missing, and asks the next question. Take the clinician out, and the failure rate is the actual failure rate. The paper is not arguing against AI in clinical practice; it is mapping where the human still has to be.
💡 80/20: Autonomous prescribing without a clinician in the loop is a product that just got a regulatory ceiling drawn over it, in writing, on FSMB letterhead. Reframe: stop asking “can this AI prescribe by itself?” and start asking “what does the clinician need from this AI in the moment they are deciding to prescribe?” The first question dies; the second one is where the next decade of defensible product gets built.
→ Full write-up
📡 Builder’s Radar
The “good enough” Epic AI debate became a real argument this week. The lesson cuts in both directions.
Bob Wachter published “Damn it, they’re good,” arguing Epic’s increasingly capable AI features are going to slow innovation by giving hospitals a $100K bundled option that undercuts the $1M standalone vendors. Halle Tecco echoed the point on LinkedIn, drawing on a conversation with a CMIO. The pushback in the community channels was sharp: hospitals revealing a preference for cheaper-and-good-enough is the system working. If standalone vendors cannot charge $1M for the marginal upgrade, the marginal upgrade is not worth $1M.
😤 Haters
“This is just incumbents being incumbents — Epic always bundles, the market always survives.” It does survive, but the shape of survival changes. Ambient scribe vendors already pivoted out of pure scribe positioning earlier this month because the bundle made the standalone scribe a feature. The same pattern is coming for decision support, image triage, prior auth automation. The vendor who owned the best version in 2024 is now selling a feature.
“The Wachter framing is anti-patient — bundled good enough is the social-welfare optimum.” Cleanly correct on the macro. The error in Wachter’s piece is treating the standalone vendor’s existence as the thing worth protecting. It is not. The patient who gets the diagnosis right is the thing worth protecting. The standalone vendor that cannot show outcomes commensurate with its pricing premium is pricing itself out, not being squeezed out.
💡 80/20: The defensible position for a 2026 clinical AI product is “thing Epic structurally cannot do” — specialty depth, multi-EHR portability, or a workflow that crosses the EHR boundary into payer/employer/pharma data. Try: write a one-sentence answer to “what can Epic not do here?” If the answer is a feature, the bundle is your competitor. If the answer is a structure, the bundle is your distribution channel.
→ Full write-up
Providence stood up 12 Epic-native AI tools at once. Project Pixel is the Wachter argument made concrete.
Providence — 51 hospitals, headquartered in Renton, Washington — now has 12 AI use cases live inside Epic following an April upgrade, under an internal initiative called Project Pixel. Use cases span ambient documentation, clinical decision support, and operational tooling — categories that, until very recently, had standalone vendors competing for them as separate sales motions. The integration tax is being amortized across 12 features instead of being paid 12 times.
😤 Haters
“12 launched is not 12 actually used. Adoption is the metric.” Right, and Providence has not published outcomes data yet — that is the next thing to watch. But the buyer just declared 12 AI use cases solved by the existing vendor. A 13th vendor knocking is selling against an installed base, not into a procurement window.
“This is one IDN. Not every system is on Epic and not every Epic system has Providence’s internal capacity.” Both true. The constraint is the Epic upgrade cadence and an internal program lead. Both are spreading. The next 18 months are likely to look like a wave: HCA, Ascension, CommonSpirit, Sutter, Banner, UPMC each running their own version with whatever lag their upgrade schedule allows.
Nebraska Medicine’s CTDO says academic systems should build their own AI in-house. The buyer class is changing.
Michael Hasselberg, Chief Transformation and Digital Officer at Nebraska Medicine, argues academic systems now have the talent, the data, and the AI tooling to build internally with better workflow fit than they can buy. His harder claim: the bottleneck is executive alignment, not technology. He also flagged a specific empirical surprise — ambient AI is producing more lift in nursing than in physician documentation, a finding that should rewire any builder still chasing physician-only ambient capture.
😤 Haters
“Build-internal sounds great until the team turns over and the system inherits an unmaintained homegrown stack.” Real risk. The mitigation is a build-with partner who transfers the model, the eval suite, and the maintenance pattern — not a SaaS vendor. The unit economics of services-and-platform are different from SaaS, but the failure mode the hater names is what kills the SaaS-only model in this buyer profile.
“Nebraska Medicine isn’t Mass General. Most academic systems can’t actually build.” Two things. The capacity bar is dropping every quarter as the model + tooling stack matures. And the operator class — CTDO, Chief Transformation Officer, VP AI Roadmap — is being hired in systems that did not have it 18 months ago. The job titles are the leading indicator that the buyer profile is shifting.
💡 80/20: The CMIO who picks one of three vendors after a six-month evaluation is being augmented — and in some systems, replaced — by a CTDO-class operator with an internal platform team that can ship in weeks. Reframe: if your sales motion was built around the long evaluation cycle, you are selling to a buyer who is being routed around. If it was built around being the implementation partner that helps the internal team ship faster, you are selling to the buyer who is being elevated.
→ Full write-up
What are you building this week? Reply and tell me — I read every one.
— Kevin