Doctronic's AI prescribes solo 💊, RSAC locks down agents 🔒, 397B model on a MacBook 💻
🔬 The Big Thing
Doctronic Raises $40M After Becoming the First AI to Legally Prescribe Routine Refills
Here’s the headline: Doctronic raised $40 million in a Series B led by Abstract and Lightspeed, bringing total funding to $65 million in under a year. But the real story happened in January, when Utah’s regulatory sandbox made Doctronic the first AI system in the country to autonomously renew prescriptions. No physician co-sign. No rubber stamp. The AI verifies patient identity, checks for contraindications, and either approves or escalates — on a formulary of roughly 190 non-controlled chronic disease medications. The safeguard: physicians reviewed the first 250 decisions per drug class before the system went autonomous.
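To make that concrete: the decision Doctronic automated is, at its core, a gate where anything short of a clean pass routes to a human instead of an approval. Here's a toy sketch of that shape (hypothetical names, fields, and thresholds, not Doctronic's actual logic):

```python
# Hypothetical sketch, not Doctronic's code: names, fields, and thresholds
# are invented to show the shape of an approve-or-escalate refill gate.
from dataclasses import dataclass

TOY_FORMULARY = {"metformin", "lisinopril", "atorvastatin"}   # stand-in for ~190 drugs

@dataclass
class RefillRequest:
    patient_verified: bool          # identity check passed upstream
    drug: str
    contraindications: list[str]    # flags surfaced by the chart review
    days_since_last_visit: int

def decide(req: RefillRequest) -> str:
    """Approve only when every safeguard passes; any doubt escalates to a human."""
    if not req.patient_verified:
        return "escalate: identity not verified"
    if req.drug not in TOY_FORMULARY:
        return "escalate: drug outside the non-controlled formulary"
    if req.contraindications:
        return "escalate: " + ", ".join(req.contraindications)
    if req.days_since_last_visit > 365:            # arbitrary staleness cutoff
        return "escalate: overdue for clinical review"
    return "approve"

print(decide(RefillRequest(True, "metformin", [], 120)))                     # approve
print(decide(RefillRequest(True, "metformin", ["rising creatinine"], 120)))  # escalate
```

Notice the default: everything except a clean pass falls through to a human, with the 250-decision review per drug class layered on top before the system went autonomous.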
Here’s why this matters if you’re building at the bedside. Doctronic didn’t build a tool that assists a clinician. It built a system that replaces a specific clinical decision — the routine refill — and got a state government to agree that was acceptable. That’s a fundamentally different product category from an ambient scribe or a CDS alert. It’s AI making a prescribing decision, end to end.
I have mixed feelings about this. On one hand, a metformin refill for a stable Type 2 diabetic genuinely doesn’t require a physician’s cognitive load — and the $4 fee versus a $39 telehealth visit removes real friction for patients. On the other hand, the “routine refill” is often the only touchpoint where a physician might catch something new. A potassium that crept up. A weight change nobody mentioned. The clinical judgment isn’t in the refill itself — it’s in everything you notice while doing the refill. I don’t know yet how to think about what gets lost when that touchpoint disappears. But Doctronic is already in talks with Texas, Arizona, and Missouri, so we’re going to find out fast.
📡 Builder’s Radar
RSAC 2026 Opens With AI Agent Security as the Dominant Theme
RSA Conference kicked off today in San Francisco with AI agent security front and center. Microsoft announced new Defender, Entra, and Purview capabilities specifically for governing AI agents, and a startup called Geordie AI made the Innovation Sandbox finals with an agent-native security platform. If you’re building AI tools that touch PHI, the governance tooling is catching up to the deployment pace — and the compliance frameworks for agentic systems handling clinical data are about to become a real conversation.
The “SaaSpocalypse” Is the Clinician-Builder Thesis in Real Time
A growing chorus of voices — including a solid analysis this week — is arguing that AI-driven cost collapse in software development means companies can replace expensive SaaS subscriptions with custom-built internal tools. Sound familiar? This is exactly the thesis of clinicians.build: when building costs drop by 10x, the person who understands the problem becomes more valuable than the person who writes the code. Every health system paying six figures for a tool a pharmacist could now build in a weekend should be paying attention.
New Data on the AI Scribe Billing Shift
The ambient scribe coding intensity story keeps developing. Trilliant Health published a detailed analysis of E/M coding trends at six health systems with AI scribes, adding real nuance to the upward shift in billing codes that’s been getting attention. Their take: the trend toward higher codes is consistent across all six systems and both new and established patients — but the mechanism isn’t simple “AI = upcoding.” Better documentation is capturing complexity that was previously undercoded. For anyone building in the RCM space or deploying scribes, the distinction between accurate capture and coding creep is going to define the next round of payer-provider fights.
🛠️ From the Workbench
Flash-MoE: Run a 397B Parameter Model on a MacBook
Flash-MoE is an inference engine that runs Qwen3.5-397B-A17B — a 397-billion-parameter Mixture-of-Experts model with roughly 17 billion parameters active per token — on a MacBook Pro with 48GB of RAM at 4.4+ tokens/second. It streams the entire 209GB model from SSD through a custom Metal compute pipeline, with no Python or frameworks needed. For anyone running Ollama or LM Studio locally (and thinking about what clinical-grade models could look like on consumer hardware), this is a meaningful proof point for where local inference is heading.
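If you want the intuition for why that's even possible, here's a minimal sketch (toy sizes, NumPy instead of Metal, nothing to do with Flash-MoE's actual code) of the mixture-of-experts trick that makes SSD streaming work: memory-map the expert weights, and only the few experts the router picks for each token ever get read from disk.

```python
# Toy illustration (NumPy, not Flash-MoE): only the experts the router picks
# for a token are ever read from the memory-mapped weight file, so a model
# far bigger than RAM can still run token by token.
import numpy as np

N_EXPERTS, TOP_K, D = 64, 4, 256        # toy sizes; the real model is far larger
WEIGHTS_PATH = "experts.bin"            # hypothetical on-disk expert weights

# Write a small synthetic weight file so the sketch is self-contained.
rng = np.random.default_rng(0)
rng.standard_normal((N_EXPERTS, D, D), dtype=np.float32).tofile(WEIGHTS_PATH)

# Memory-map all experts: nothing hits RAM until a page is actually touched.
experts = np.memmap(WEIGHTS_PATH, dtype=np.float32, mode="r",
                    shape=(N_EXPERTS, D, D))

def moe_layer(x, router_w):
    """Route one token to its top-k experts and mix their outputs."""
    scores = x @ router_w                          # (N_EXPERTS,) router logits
    top = np.argsort(scores)[-TOP_K:]              # indices of the active experts
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over top-k
    # Only these TOP_K weight matrices are paged in from disk.
    return sum(g * (experts[i] @ x) for g, i in zip(gates, top))

token = rng.standard_normal(D, dtype=np.float32)
router = rng.standard_normal((D, N_EXPERTS), dtype=np.float32)
print(moe_layer(token, router).shape)              # (256,)
```

Scale the same idea up to 209GB of experts and a fast SSD and you get the shape of what Flash-MoE is doing, minus all the hard engineering in the Metal pipeline.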
What are you building this week? Reply and tell me — I read every one.
— Kevin

