Prior auth data goes public tomorrow 📊, deepfake X-rays fool radiologists 🩻, Advocate Health ships Agent Factory prototypes 🏭
🔬 The Big Thing
Prior Auth Goes Public: CMS Forces Payers to Show Their Cards Tomorrow
Starting March 31, every Medicare Advantage organization, Medicaid managed care plan, CHIP managed care entity, and qualified health plan on the federal exchanges must publicly report its prior authorization metrics on its website. This is the CMS-0057-F Interoperability and Prior Authorization final rule hitting its first real compliance deadline. The data covers the percentage of prior auth requests approved, denied, and approved after appeal, plus the average time between submission and decision. MA organizations report at the contract level, state Medicaid programs at the state level, and managed care and exchange plans at the plan level. The reporting template is standardized.
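If you're planning to ingest these reports, a normalized record might look something like the sketch below. The field names are my guess at a minimal schema, not the CMS template itself:

```python
from dataclasses import dataclass

@dataclass
class PriorAuthMetrics:
    """One payer's prior auth report, normalized into a common shape.

    Field names are illustrative, not the CMS reporting template's.
    """
    payer_name: str
    program: str                    # "MA", "Medicaid MC", "CHIP MC", or "QHP"
    reporting_level: str            # "contract", "state", or "plan"
    pct_approved: float             # share of requests approved
    pct_denied: float               # share of requests denied
    pct_approved_on_appeal: float   # share of denials overturned on appeal
    avg_days_to_decision: float     # mean time from submission to decision
```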
😤 Haters
“This is just a checkbox exercise — payers will bury the data in a PDF on page 47 of their website and no one will find it.” Probably true for the first cycle. But CMS standardized the reporting template, and the data is machine-readable enough that someone will scrape it within a week. The first aggregator to build a prior auth payer comparison dashboard wins a lot of attention from practice managers and referral coordinators.
“The data won’t change payer behavior — they already know their denial rates.” Knowing your denial rate and having it compared publicly to every other payer in your market are different experiences. This is the same dynamic that hospital price transparency created — slow to start, but the comparison tools eventually forced real conversations. And unlike hospital pricing data, prior auth metrics are simpler to compare: approve/deny/appeal is a cleaner signal than a chargemaster spreadsheet.
“Practices don’t have time to analyze payer-level prior auth data.” They don’t need to. The opportunity is for builders: a tool that pulls this data, cross-references it with a practice’s payer mix, and surfaces which payers are the most friction-heavy for specific service lines. That’s actionable on day one.
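Here's a rough sketch of that cross-reference. Every payer name, number, and weighting below is made up; the point is the shape of the calculation, not the specific formula:

```python
# Hypothetical sketch: rank payers by prior-auth friction, weighted by
# how much of the practice's volume each payer represents.
payer_metrics = {
    # payer: (denial rate, avg days to decision) from the public reports
    "Payer A": (0.12, 6.5),
    "Payer B": (0.04, 2.1),
    "Payer C": (0.09, 9.0),
}

practice_payer_mix = {
    # payer: share of the practice's prior auth volume
    "Payer A": 0.50,
    "Payer B": 0.35,
    "Payer C": 0.15,
}

def friction_score(denial_rate: float, avg_days: float) -> float:
    # Crude composite: weight denials heavily, decision delays moderately.
    return denial_rate * 10 + avg_days / 10

ranked = sorted(
    ((payer, friction_score(*m) * practice_payer_mix.get(payer, 0))
     for payer, m in payer_metrics.items()),
    key=lambda item: item[1],
    reverse=True,
)

for payer, weighted in ranked:
    print(f"{payer}: weighted friction {weighted:.2f}")
```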
💡 80/20: This is a builder opportunity hiding in a compliance deadline. The practices that need this data most (small groups, independent specialists, FQHCs) are the least likely to go hunting for it on payer websites. Try: build a simple scraper that aggregates the reports as they go live this week, normalizes the data, and publishes a comparison, along the lines of the skeleton below. First-mover advantage is real here.
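A minimal aggregator skeleton, assuming payers publish machine-readable (JSON) files; the URL and field names are placeholders until the real reports go live:

```python
import csv
import json
import urllib.request

# Placeholder: swap in real report URLs as payers publish them.
REPORT_URLS = {
    "Example Payer": "https://example-payer.com/prior-auth-metrics.json",
}

def fetch_report(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

def normalize(payer: str, raw: dict) -> dict:
    # Map whatever keys the payer uses onto one common schema.
    return {
        "payer": payer,
        "pct_approved": raw.get("approved_pct"),
        "pct_denied": raw.get("denied_pct"),
        "pct_approved_on_appeal": raw.get("appeal_overturn_pct"),
        "avg_days_to_decision": raw.get("avg_decision_days"),
    }

rows = [normalize(payer, fetch_report(url)) for payer, url in REPORT_URLS.items()]

with open("prior_auth_comparison.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```

The hard part won't be the fetch loop; it will be the `normalize` step, because even with a standardized template, payers will publish it in inconsistent formats for the first cycle.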
→ Full write-up
📡 Builder’s Radar
Deepfake X-Rays Are Good Enough to Fool Radiologists — and AI
A study published in Radiology from Mount Sinai found that ChatGPT-generated X-ray images fooled radiologists 25% of the time even after they were warned synthetic images were present. Seventeen radiologists across 12 centers in 6 countries evaluated 264 images. Individual accuracy ranged from 58% to 92%. The multimodal LLMs tested (GPT-4o, GPT-5, Gemini 2.5 Pro, Llama 4 Maverick) did no better — 57% to 85% accuracy. Most concerning: when radiologists weren’t told synthetic images were in the mix, only 41% noticed anything was off. Experience didn’t help — years of practice had no correlation with detection ability.
😤 Haters
“This is a lab exercise — no one is actually injecting fake X-rays into PACS systems.” Not yet. But the researchers specifically flagged cybersecurity risk: if an attacker gains network access to a hospital, synthetic images injected into the imaging pipeline would be functionally undetectable. The litigation fraud angle is also real — fabricated fractures indistinguishable from authentic ones.
“This is a radiology problem, not a builder problem.” If you’re building anything that processes medical images — AI triage, clinical decision support, quality review — you now need to think about image provenance. Watermarking, chain-of-custody metadata, and authentication layers for imaging data just became relevant to your architecture.
💡 80/20: Image authentication is the next infrastructure layer. If you’re building on medical imaging data, start thinking about provenance now — not as a feature, but as a foundation. Reframe: every imaging AI pipeline needs a “was this image real?” check before it becomes a “was this diagnosis real?” question.
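What that check could look like in its simplest form: a hash recorded at acquisition by the modality or a trusted gateway, verified before any downstream AI touches the image. The manifest format and storage here are assumptions for illustration, not an existing standard:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash an image file in chunks so large studies don't blow up memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_provenance(image_path: Path, manifest_path: Path) -> bool:
    """Return True only if the image's hash matches the trusted manifest."""
    manifest = json.loads(manifest_path.read_text())  # {filename: expected hash}
    expected = manifest.get(image_path.name)
    return expected is not None and expected == sha256_of(image_path)

# A downstream pipeline would refuse (or at least flag) anything that fails:
# if not verify_provenance(Path("chest_xr_001.dcm"), Path("manifest.json")):
#     raise ValueError("Provenance check failed; do not score this image.")
```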
→ Full write-up
Advocate Health Ships Epic Agent Factory Prototypes for Pharmacy and Infusion
Advocate Health’s SVP and chief digital and AI officer Andy Crowder described how Advocate is using Epic’s Agent Factory to build four prototypes targeting pharmacy complex-order verification and infusion charting prep. The prototypes came out of a three-day Epic Immersion sprint at the Pearl in Charlotte, with production targeted for July. This is the first concrete implementation story since Epic previewed Agent Factory at HIMSS26 in early March: from demo to working prototypes in under a month.
😤 Haters
“Four prototypes in three days sounds like a hackathon, not a production pipeline.” Fair. But the targets they picked — pharmacy verification and infusion prep — are high-volume, rules-heavy workflows where the error modes are well-understood. These aren’t open-ended AI experiments. They’re automation of specific, repeatable clinical sequences where the human stays in the loop.
“This is just Epic customers building inside Epic’s walled garden.” Yes. And that’s the point. The majority of US hospital workflows run through Epic. An agent framework native to the EHR, with access to the data model and order catalog, is a different beast than bolting an external agent onto FHIR APIs. The walled garden is where the patients are.
💡 80/20: Watch the pharmacy and infusion use cases closely — they’re the canary for agentic workflows in clinical settings. If Advocate ships these in July, the playbook for “3-day sprint → 4-month production path” becomes repeatable. Try: identify one rules-heavy workflow in your system that could be the next sprint candidate.
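If you want a feel for what "rules-heavy with a human in the loop" means in practice, here's a toy sketch. Nothing in it is Epic's Agent Factory; the drug, dose limit, and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class InfusionOrder:
    drug: str
    dose_mg: float
    weight_kg: float
    renal_clearance: float  # mL/min

# Hypothetical weight-based dose limits; a real rule set lives in pharmacy policy.
MAX_DOSE_MG_PER_KG = {"examplumab": 10.0}

def triage(order: InfusionOrder) -> str:
    """Auto-clear routine orders, escalate everything else to a human."""
    limit = MAX_DOSE_MG_PER_KG.get(order.drug)
    if limit is None:
        return "escalate: drug not covered by the rule set"
    if order.dose_mg > limit * order.weight_kg:
        return "escalate: dose exceeds weight-based limit"
    if order.renal_clearance < 30:
        return "escalate: renal function below threshold"
    return "auto-verified: route to pharmacist for final sign-off"

print(triage(InfusionOrder("examplumab", dose_mg=650, weight_kg=70, renal_clearance=80)))
```

The design choice that matters is the last line of `triage`: even the "auto-verified" path ends at a human, which is what makes these workflows a sane first target for agents.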
What are you building this week? Reply and tell me — I read every one.
— Kevin

