Three smart people argue about product moat 🏰, a med-student fellowship that ships products 🎓, AI compute costs more than employees? 💸
Three credible voices just published three different theses on the clinical-AI moat. Each one implies a different roadmap.
Brendan Keeler argues the moat is distribution. His read of OpenAI’s ChatGPT-for-Clinicians launch is that the product itself is mostly the same thing OpenAI shipped in January — the news is the go-to-market. PLG (product-led growth) is permissionless: ship it free to verified clinicians, skip the BAA where PHI is not in scope, build the daily-use habit, and let the enterprise contract follow the user. He frames it as a frontal assault on OpenEvidence, whose real moat is not the product — it is the 40%+ of US clinicians who already use it daily. “PLG is permissionless by design. Features are copyable. Distribution isn’t.”
John Lee argues the moat is the integrated data substrate. Writing on the morning of XGM, he says the impressive part of Epic’s announcements is not the agents themselves, it is what sits underneath — Chronicles as the data layer, Cosmos for population signal, Care Everywhere pulling outside records onto the same surface, an agent stitching it together. His showcase example was a pheochromocytoma demo that surfaced scattered signals across years of records with full provenance back to source data — the kind of synthesis humans struggle with on rare longitudinal patterns. Combine that with Agent Factory letting non-engineers spec agentic behavior in markdown, and his punchline lands: if your healthcare AI product does one thing well in a workflow Epic owns, the comparison is no longer to last year’s Epic. It is to an Epic where every adjacent tool gets better at the same time, and where customers themselves will be vibe-coding the niche tools you used to sell them.
Jared Pelo argues the moat is replaceable. His post is the last in a four-part series titled “We Can Replace Epic, and This Is How.” He is self-aware that he sounds like a fraud writing it, but he is a physician-founder who built ambient AI at scale (Nuance/DAX, then Microsoft) and built and rebuilt an ambulatory EHR before that. His thesis: this is not a small, spunky startup play. It requires a stellar team and the right assets — but the assets exist now in a way they did not five years ago, and the foundation models are the unlock.
The three positions are not reconcilable. Keeler thinks the distribution channel is what compounds. Lee thinks the data substrate is what compounds. Pelo thinks the current substrate is the constraint and a clean rebuild beats the integrated one.
Read the three together and the question for a clinician-builder gets sharper. Each thesis implies a different roadmap on a different timescale. If distribution wins (the 12-month story), build the consumer-grade clinician tool, ship free, and let enterprise follow the habit — competing with OpenAI not on polish you cannot match but on a wedge they will not bother to enter. If integration wins (the 3-year story), the choice is build inside Agent Factory and accept the comparison to an Epic where every adjacent tool got better at the same time [seems like a bad idea], or build in the gaps Epic structurally cannot fill — non-Epic data, niches below Epic’s build threshold. If replacement wins (the 10-15 year story), you have a window to ship something that takes the EHR seat itself while foundation models do most of the engineering [very risky]. The strategy does not change because of one quarter of news. It changes because of which timescale you are willing to underwrite — and what your product looks like if you are wrong about the other two.
😤 Haters
“This is just three guys with takes on LinkedIn — none of them have shipped the EHR replacement they’re talking about.” Two of them have shipped enterprise-scale clinical software inside huge orgs and have the scars to back the analysis. The point is not that any of them is right, it is that they are smart and reasonable and looking at the same set of facts and arriving at incompatible conclusions. That is the signal that the answer is not yet in the room. The wrong reaction is to wait. The right reaction is to pick one of the three and commit to the roadmap that follows.
“Epic is going to win regardless — every cycle, the analysts say it’s replaceable, and every cycle Epic absorbs the threat.” Mostly true historically. The thing that is different this cycle is that the foundation models are doing the engineering work an EHR replacement project used to require ten years and a billion dollars to do. [Although AI is still expensive (see last story). Also, there are plenty of players in this space, see elion.health’s EHR section.]
“All three are wrong — the moat is the workflow inside the clinical encounter, not the platform underneath.” That is a fourth thesis and it might be the right one. The interesting tell is that none of the three articulated it. Either the workflow is the answer and three of the smartest observers on this question missed it, or the workflow is downstream of one of the three platforms they are arguing about.
💡 80/20: When credible voices publicly disagree about where a moat lives, the right move is not to wait for the disagreement to resolve — it is to commit to one of the theses and let the work tell you whether it was the right one. Try: write the one-paragraph answer to “if the moat is distribution / if the moat is integration / if the moat is replacement, what am I building this quarter?” Three short paragraphs, on paper, before the next sprint. The exercise forces a roadmap on each thesis, and the one you cannot stomach is the one to take seriously.
→ Full write-up
📡 Builder’s Radar
A new fellowship just opened to put medical students inside a16z bio+health portfolio companies for ten weeks of agentic AI building.
The MD+ Catalyst Fellowship launched today. Sherman Leung (Stanford EM, a16z bio+health) is running it: a ten-week summer initiative for medical students and trainees to build with agentic AI tools alongside digital-health companies and faculty. Sponsors and mentors are exactly the people you would want in a room — a16z bio+health (Vineeta Agarwala), AlleyCorp (Omar and Alexi Nazem), Chamber (Sameer Sheth, MD), Fabric (Aiden Feng, MD MBA), Thalamus (Jason Reminick, MD MBA MS), Visionairy (Mac Singer), Insight Health AI (Jaimal Soni). Trainee applications open next week; sponsor/faculty project submissions are open through mid-May.
😤 Haters
“Med students don’t have the clinical context to build anything useful — this is going to ship demo-ware with a press release.” The traditional version of this critique was correct when the engineering surface was the bottleneck — a med student writing JavaScript was going to ship slow. With agentic AI, the bottleneck moves to the question, the eval, and the workflow knowledge. That is exactly what a med student rotating on six services has more of than a CS senior at Stanford. The thing they will be slow at is the clinical operations and the procurement story; that is what the faculty mentors are there for.
“This is just an a16z talent pipeline dressed up as a fellowship.” Yes, and that is fine. Talent pipelines that put domain experts in front of capital are how this category gets built. The tell is who the faculty mentors are; if they are the same names that show up on the cap tables of the next five clinical-AI companies, the fellowship was a great filter for everyone involved. If they aren’t, it was a low-stakes experiment.
💡 80/20: The category of “med student or trainee who can ship a working product” did not exist as a serious thing two years ago. It does now, and the people building the talent pipelines have noticed. Try: if you are a med student, resident, or fellow, apply when the form opens next week. If you are a faculty member, propose a project — the marginal cost of mentoring a trainee with a working agent stack is much lower than mentoring a research project, and the output is more legible.
→ Full write-up
The cost of running an AI agent now exceeds the salary of the worker it replaces. The story is the inversion, not the absolute number.
Axios reported this weekend that some companies are now spending more on AI than on their own employees. Nvidia’s Bryan Catanzaro put it on the record: “the cost of compute is far beyond the costs of the employees.” Uber’s CTO told the same reporter he blew through his entire 2026 AI budget on Claude Code token costs alone. Worldwide IT spend is forecast at $6.31T this year — up 13.5% YoY (Gartner). Anthropic raised prices in response to demand. The Axios bottom line: when AI labs raise prices, big spending on AI shifts from a flex to a liability.
😤 Haters
“This is a temporary supply-side problem — compute prices fall every year, this normalizes by 2027.” Plausible at the macro, and that is the bet most CFOs are making. The unhedged part of that bet is the demand curve. Agentic coding is using ~1000x more tokens per task than chat, and usage varies 30x across runs on identical tasks. The price-per-token is falling. The tokens-per-task are rising faster. Whether the net cost normalizes depends on which curve wins.
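The two-curves point is just compounding arithmetic. A minimal sketch, with illustrative numbers that are assumptions (not figures from the Axios piece): if price-per-token falls ~30% a year but tokens-per-task grow ~60% a year, net cost per task still rises.

```python
# Illustrative only: price0, tokens0, and both growth rates are assumed
# numbers, not reported figures. The point is the shape, not the values.

def cost_per_task(year, price0=10.0, tokens0=1_000_000,
                  price_decline=0.30, token_growth=0.60):
    """Dollar cost of one agentic task after `year` years.

    price0:  starting cost in $ per 1M tokens
    tokens0: starting tokens consumed per task
    """
    price_per_token = (price0 / 1_000_000) * (1 - price_decline) ** year
    tokens_per_task = tokens0 * (1 + token_growth) ** year
    return price_per_token * tokens_per_task

for y in range(4):
    print(f"year {y}: ${cost_per_task(y):.2f} per task")
# Despite a 30%/yr price drop, cost per task climbs every year,
# because 1.60 * 0.70 = 1.12 > 1: usage growth outruns the discount.
```

Flip the assumptions (price falling faster than usage grows) and the product drops below 1, and costs normalize — which is exactly the CFO bet described above.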
“For health systems, this is not the relevant story — clinical AI is metered by the seat, not by the token.” Today, yes. The thing to watch is whether the seat-priced clinical AI vendors are absorbing token-cost variance on their margin or passing it through. The first time a clinical AI company raises prices mid-contract because of a frontier-model price bump, the seat-pricing model will be in question. Read your renewal terms.
💡 80/20: The labor-vs-compute cost inversion is the macro story behind every clinical AI conversation right now — including the Epic moat debate above. Reframe: stop asking “is this AI cheaper than a human?” and start asking “is the delta between the AI cost and the human cost growing or shrinking, and which direction does my product’s margin live in?” The vendors who win the next two years are the ones whose margin survives the answer.
→ Full write-up
What are you building this week? Reply and tell me — I read every one.
— Kevin


