Docker sandboxes the blast radius, 81% of docs use AI
Builder's Radar
Codegen is not productivity, and the hidden cost matters more in clinical tools.
A piece on antifound.com (surfaced in TLDR DevOps yesterday) makes an argument worth sitting with: lines of code are a poor measure of programming productivity because programming is primarily about understanding problems and managing complexity. LLMs accelerate code writing while shifting costs to maintenance, comprehension, and collaboration. Read alongside the Big Thing above, the picture is consistent: vibe coding makes the creative parts faster while quietly making the operational parts harder. In a clinical tool, where a misunderstood problem means a wrong medication or a missed diagnosis flag, the maintenance cost isn't just technical debt; it's clinical risk.
81% of physicians now use AI, but the liability frameworks still don't exist.
The AMA released its 2026 Physician Survey on Augmented Intelligence last week. The number everyone will cite: 81% of physicians now use AI professionally, up from 38% in 2023. The number nobody will put in the headline: 88% are concerned that heavy AI use will erode their clinical abilities over time. And the regulatory action physicians want most, above validation requirements and safety standards? Clear liability frameworks. You are building tools for a professional population that has adopted AI fast, is worried about what it's doing to their judgment, and has zero legal clarity on who is responsible when something goes wrong. That gap isn't a detail to footnote. It's either a design constraint (human-in-the-loop, explicit review steps, full audit trails) or a reason to wait until it resolves. It will not be small print.
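What does "design constraint" look like in code? Roughly this: no AI suggestion reaches the chart until a named clinician acts on it, and every decision lands in an exportable log. A minimal Python sketch of that pattern; all names here (`ReviewGate`, `AuditEntry`, the decision vocabulary) are hypothetical illustrations, not any product's API:

```python
import json
import time
from dataclasses import asdict, dataclass, field

@dataclass
class AuditEntry:
    """One record: what the model suggested, who reviewed it, what they decided."""
    suggestion: str
    reviewer: str
    decision: str          # "accepted" | "rejected" | "edited"
    final_text: str
    timestamp: float = field(default_factory=time.time)

class ReviewGate:
    """Human-in-the-loop gate: AI output is held until a clinician signs off."""

    def __init__(self):
        self.audit_log: list[AuditEntry] = []

    def submit(self, ai_suggestion: str, reviewer: str, decision: str,
               final_text: str = "") -> str:
        if decision not in {"accepted", "rejected", "edited"}:
            raise ValueError(f"unknown decision: {decision}")
        if decision == "accepted":
            final = ai_suggestion        # used verbatim, but only after review
        elif decision == "edited":
            final = final_text           # clinician's rewrite wins
        else:
            final = ""                   # rejected: nothing reaches the chart
        self.audit_log.append(AuditEntry(ai_suggestion, reviewer, decision, final))
        return final

    def export_log(self) -> str:
        # Full audit trail, serializable for a compliance review.
        return json.dumps([asdict(e) for e in self.audit_log], indent=2)

gate = ReviewGate()
gate.submit("Start metformin 500 mg BID", reviewer="dr_lee", decision="accepted")
gate.submit("Discontinue lisinopril", reviewer="dr_lee", decision="rejected")
```

The point of the sketch is the shape, not the fields: the model never writes directly, the reviewer is always identified, and the log is append-only from the application's point of view.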
From the Workbench
Docker + Claude Code sandboxes: the practical answer to the blast radius problem.
Kevin flagged this one from TLDR DevOps yesterday, and it's the direct infrastructure answer to the Big Thing. Docker published a guide this week: "Claude Code with Docker: Local Models, MCP, Sandboxes." Three components in one setup:
Docker Model Runner: Run Claude Code locally against your own model instances via an Anthropic-compatible API endpoint. No API calls leaving your machine. Full control over what the model sees.
Docker MCP Toolkit: Connect Claude Code to 300+ pre-built MCP servers (GitHub, filesystems, databases) with one-click deployment. Each server runs in its own container, with scoped access only.
Docker Sandboxes: Each agent session runs in an isolated microVM. When your agent installs packages, modifies files, or runs containers, your host machine is untouched. This is Nate's "blast radius" concept implemented as infrastructure.
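The "scoped access" and "blast radius" ideas in the list above reduce to one rule: the agent can only touch what you explicitly grant, and everything else fails closed. A toy Python sketch of that access model; this illustrates the principle, not Docker's actual implementation, and `ScopedFS` and its paths are hypothetical:

```python
from pathlib import Path

class ScopedFS:
    """Toy access-control wrapper: agent file operations are confined to an
    explicit allowlist of directories, mimicking the scoped-access idea behind
    per-container MCP servers and sandboxed agent sessions."""

    def __init__(self, allowed_roots: list[str]):
        # Resolve once so symlink tricks can't escape the allowlist.
        self.roots = [Path(r).resolve() for r in allowed_roots]

    def _check(self, path: str) -> Path:
        p = Path(path).resolve()
        if not any(p.is_relative_to(root) for root in self.roots):
            raise PermissionError(f"outside sandbox scope: {p}")
        return p

    def read(self, path: str) -> str:
        return self._check(path).read_text()

    def write(self, path: str, content: str) -> None:
        self._check(path).write_text(content)

# Grant the agent only a scratch workspace; anything else raises.
fs = ScopedFS(["/tmp/agent-workspace"])
```

In the real setup the enforcement lives a layer down (container and microVM boundaries rather than application code), which is exactly why it is a stronger guarantee: the agent cannot reason its way around an isolation boundary it never sees.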
For anyone building tools that touch patient data: the combination of local model execution and isolated agent sandboxes directly addresses the two main compliance concerns about AI coding agents (where does the code go, and what can the agent access). It's not HIPAA certification (that's not actually how HIPAA works), but it's the architecture that makes the compliance conversation with your security team tractable. If you're running Ollama locally and building health tools, this is the stack to evaluate next.
Docker Claude Code guide · Docker Sandboxes
What are you building this week? Reply and tell me; I read every one.
- Kevin

