Anthropic buys biotech 🧬, AI makes every PCP a specialist 🩺, NYC hospital CEO wants to replace radiologists 🤖
The Claude-maker just paid $400M for a 9-person drug discovery startup. What that means for clinician-builders.
Anthropic dropped $400M on nine people and a thesis. Not a product, but a team of computational drug discovery researchers from Genentech who launched a startup eight months ago. The Claude-maker is betting that the most valuable thing in healthcare AI isn't the model. It's the people who know where the biology breaks. If that sounds familiar, it should: it's the same bet clinicians.build is making, just at a different scale.
Anthropic Acquires Coefficient Bio for $400M: Its First Biotech Bet
Anthropic bought Coefficient Bio in an all-stock deal reported at $400M. The startup was eight months old with nine employees; both founders came from Genentech's Prescient Design group, where they worked on computational drug discovery. The team joins Anthropic's healthcare and life sciences group.
The math on this is striking. $400M for nine people is roughly $44M per head. That's not a product acquisition. That's Anthropic saying: the people who understand where biology meets computation are worth more than almost any product we could build ourselves. Wow.
🤔 Haters
"$44M per head for a startup with no product is insane." It is. But Anthropic isn't buying revenue; it's buying the ability to build domain-specific Claude tools for pharma and biotech. Google made the same kind of bet when it built Isomorphic Labs around DeepMind's AlphaFold team. The question isn't whether the price is rational today. It's whether it looks cheap in three years.
"This is drug discovery, not clinical tools; it doesn't affect clinician-builders." Not directly, not yet. But Anthropic is building out a healthcare vertical. Every investment it makes in life sciences infrastructure makes Claude better at understanding clinical data, drug interactions, and biological mechanisms. The model you're building on just got more context about your domain.
"Anthropic is just chasing the next revenue vertical." Maybe. But against its $380B valuation, this is a $400M bet, roughly 0.1% dilution. That's a signal, not a pivot.
💡 80/20: The company behind Claude is building a healthcare-specific bench. Every clinician-builder working with Claude benefits from Anthropic understanding your domain better. Try: if you're evaluating which LLM to build clinical tools on, weight "does the provider invest in healthcare domain expertise?" alongside benchmarks.
→ Full write-up
📡 Builder's Radar
AI Could Turn Every PCP Into a "Generalist-Specialist"
Bob Kocher, Robert Wachter, and Siobhan Nolan-Mangini published a Health Affairs Scholar paper arguing that AI can collapse the boundary between generalist and specialist. Their thesis: AI-augmented primary care physicians could manage full constellations of chronic conditions across disease-based domains (cardiometabolic, infectious, inflammatory) rather than organ-specific specialties. The catch: it requires reforming medical education, malpractice standards, and credentialing frameworks.
🤔 Haters
"We've heard 'AI will replace specialists' before." This paper doesn't say replace. It says augment and redistribute. The claim is narrower and more practical: an AI-equipped PCP handling stable heart failure follow-up frees the cardiologist for the case that actually needs one.
"Credentialing reform and malpractice modernization? Good luck." Fair. These are decade-long fights. But the paper names the specific barriers, which is more useful than hand-waving about "AI transformation."
💡 80/20: If you're building clinical decision support, this paper reframes the user: not "helping a PCP do PCP things" but "giving a PCP specialist-level context for conditions they already manage." Reframe: the most valuable CDS tool isn't the one that helps a specialist go faster; it's the one that helps a generalist go deeper.
→ Full write-up
NYC Hospital CEO: "We Could Replace a Great Deal of Radiologists with AI"
Mitchell Katz, CEO of NYC Health + Hospitals, the nation's largest public hospital system, told a forum that the network is ready to let AI handle first reads on imaging, with radiologists checking abnormals. He framed it as a cost and access play, particularly for breast cancer screening. The statement came from a March 25 forum, but the coverage wave and community reaction hit this week.
🤔 Haters
"Another administrator who doesn't understand radiology making sweeping claims." The pushback from radiologists was immediate and pointed. One called it "undeniable proof that confidently uninformed hospital administrators are a danger to patients." The clinical reality of first-read accuracy, liability, and edge cases is more complex than the CEO's framing suggests.
"AI first-reads would increase access to screening." This part has merit. NYC H+H serves underserved populations where radiologist access is genuinely constrained. The question is whether the access gain outweighs the risk of missed findings, and who's liable when the AI misses a cancer.
💡 80/20: If you're building imaging AI, this is a signal that health system leadership is ahead of regulatory frameworks. Try: build the audit layer (the tool that measures AI first-read accuracy against radiologist reads in your specific patient population) before building the AI reader itself.
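A minimal sketch of what that audit layer could compute, assuming you can pull paired AI and radiologist reads for a cohort; the function and field names here are hypothetical, not any vendor's API:

```python
# Hypothetical audit layer: score AI first reads against radiologist
# reads (treated as ground truth for the audit). Field names are
# illustrative assumptions, not a real system's schema.

def audit_first_reads(cases):
    """cases: list of {"ai": "abnormal"|"normal", "rad": "abnormal"|"normal"}."""
    tp = sum(c["ai"] == "abnormal" and c["rad"] == "abnormal" for c in cases)
    fn = sum(c["ai"] == "normal" and c["rad"] == "abnormal" for c in cases)
    fp = sum(c["ai"] == "abnormal" and c["rad"] == "normal" for c in cases)
    tn = sum(c["ai"] == "normal" and c["rad"] == "normal" for c in cases)
    return {
        # Sensitivity is the number leadership has to confront:
        # every false negative is a potentially missed cancer.
        "sensitivity": tp / (tp + fn) if tp + fn else None,
        "specificity": tn / (tn + fp) if tn + fp else None,
        "missed_abnormals": fn,
    }
```

Run this on your own population before anyone talks about replacing first reads; published accuracy numbers from someone else's patient mix don't transfer.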
→ Full write-up
OpenEvidence Lands First Enterprise Deal at Mount Sinai: Embedded in Epic
OpenEvidence completed its pivot from product-led growth (PLG) to enterprise, landing a B2B deal with Mount Sinai that embeds its clinical evidence tool directly in Epic's EHR. The company had built a following with individual clinicians; now it's selling to health systems. Abridge partnerships are also in play, and clinical trial enrollment is the next adjacency.
🤔 Haters
"Another AI tool embedded in Epic; how is this different?" OpenEvidence competes with UpToDate, not ambient scribes. It surfaces evidence at the point of care, which means it's fighting for the same real estate as the reference tools clinicians already use. The question is whether AI-surfaced evidence displaces the muscle memory of "I'll just look it up on UpToDate."
"PLG to enterprise is where startups go to die." Sometimes. But for clinical tools, enterprise is how you get into the EHR, and the EHR is where clinicians actually live. Individual adoption without system integration creates shadow IT problems.
💡 80/20: The PLG-to-enterprise path is the playbook for clinician-builders who want health system adoption. Try: build for individual clinician delight first, measure usage, then bring those numbers to your CMIO. Mount Sinai didn't buy OpenEvidence cold; it bought a tool its clinicians were already using. It's the same bottom-up adoption play Butterfly ran with handheld ultrasound.
→ Full write-up
Best AI Agent Scored 1 out of 4 on Self-Assessment
Nate tested four prominent "outcome agents" (Anthropic's Cowork, Lindy, Sauna, and Google's Opal) against a framework built on one question: can the agent assess the quality of its own output? The best scored 1/4. The core insight: code has test suites; knowledge work doesn't. Agents that can't self-evaluate can't improve.
🤔 Haters
"This is a sample size of four tools with a subjective rubric." True, it's one person's evaluation. But the framework itself is the value: does the agent know when it's wrong? That's the question every clinician-builder should ask about their own tools.
"Agents are improving fast; this is just a snapshot." Agreed. But the structural problem (knowledge work lacks the equivalent of unit tests) won't be solved by better models. It'll be solved by builders who define what "correct" looks like for their specific clinical workflow.
💡 80/20: Before deploying any AI agent in a clinical workflow, define your test suite. Not "does it work?" but "what does wrong look like, and will I catch it?" Try: write five failure cases for your AI tool before you write a single success metric.
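Here is one way to make "write the failure cases first" concrete, as a tiny Python sketch for a hypothetical discharge-summary tool; the `summarize` stub and the checks are illustrative assumptions, stand-ins for your real tool and your real failure modes:

```python
# Illustrative failure-case suite. `summarize` is a stub standing in for
# whatever your tool actually calls (an LLM API, an agent, etc.).

def summarize(note: str) -> str:
    # Stub so the suite runs end to end; swap in your real tool.
    return "Continue lisinopril 10 mg daily. Follow up in 2 weeks."

# Each case: (what "wrong" looks like, input note, predicate that must hold).
FAILURE_CASES = [
    ("must not invent meds absent from the note",
     "Pt stable on lisinopril 10 mg daily.",
     lambda out: "metformin" not in out.lower()),
    ("must preserve the stated dose",
     "Pt stable on lisinopril 10 mg daily.",
     lambda out: "10 mg" in out),
    ("must not drop the follow-up instruction",
     "Pt stable on lisinopril 10 mg daily. RTC 2 weeks.",
     lambda out: "follow up" in out.lower() or "2 weeks" in out),
]

def run_suite():
    # Returns the descriptions of every failure mode the tool exhibited;
    # an empty list means all defined failure modes were avoided.
    return [desc for desc, note, ok in FAILURE_CASES
            if not ok(summarize(note))]
```

Five lines of lambda won't catch everything, but writing them forces you to name the harms before you ship, which is exactly what the reviewed agents couldn't do for themselves.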
→ Full write-up
What are you building this week? Reply and tell me; I read every one.
– Kevin


