Corporate Surgery Is Not an Edge Case in Healthcare
The healthcare industry averages over 100 M&A transactions per year. When Advocate merges with Aurora, when Intermountain absorbs SCL Health, the first casualty is always the integration layer. And now that layer includes your shiny new AI agents.
A recent CIO piece that nails the "transplantable skeleton" metaphor could not be more timely. If 2025 was the year we gave healthcare AI a brain—LLMs summarizing discharge notes, triaging prior auths, parsing unstructured clinical narratives—then 2026 must be the year we give it a skeleton: a structural framework that survives corporate surgery.
Because in healthcare, corporate surgery is not an edge case. It is the operating model.
The Coupling Problem Nobody Talks About
Most agentic AI deployments I see in health systems are architected like this: a specific LLM provider, wired directly into a specific EHR instance, orchestrated by a vendor-specific workflow engine, reading from a Snowflake account with org-specific schemas. Every layer assumes the current org chart is permanent.
This works beautifully—until the board announces a merger. Then your AI agents become legacy systems overnight. Not because the models are wrong. Not because the logic is flawed. Because the infrastructure assumed permanence in an industry defined by change.
I have watched a 14-month AI deployment—agent-driven revenue cycle optimization, genuinely impressive work—get shelved three weeks into a system integration because it was hardcoded to one Epic instance's custom Z-segments and a data warehouse schema that existed only in the acquired system. Fourteen months of work, binned. Not because of bad engineering, but because of non-portable engineering.
What a Transplantable Skeleton Actually Looks Like
The skeleton metaphor works because skeletons do two things: provide structure and enable movement. Your agentic AI infrastructure needs both.
Abstraction at the data layer. Your agents should never read raw EHR tables directly. FHIR as a canonical data model is not just an interoperability checkbox—it is your portability layer. When you migrate from Epic to Oracle Health, or merge two Epic instances with different configurations, agents that consume FHIR resources survive. Agents hardcoded to Clarity schemas or custom ADT feeds do not. Build your Snowflake landing zone around FHIR resources using dbt models that transform source-specific extracts into canonical FHIR-aligned schemas. When the source changes, you rewrite the staging layer. The agents never know the difference.
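The staging-layer idea can be sketched in a few lines. This is an illustrative Python sketch, not a real Clarity schema or dbt model: the source column names (`PAT_ID`, `last_nm`, and so on) are hypothetical, and the point is only that two different source systems converge on one canonical, FHIR-aligned shape that agents consume.

```python
from dataclasses import dataclass

# Canonical, FHIR-aligned shape the agents consume. Field names follow
# the FHIR Patient resource; everything upstream of this is replaceable.
@dataclass(frozen=True)
class FhirPatient:
    resource_type: str
    id: str
    family_name: str
    birth_date: str  # ISO 8601 date, per FHIR

def stage_source_a_row(row: dict) -> FhirPatient:
    """Staging transform for one source system. Column names are
    illustrative, not an actual Clarity schema. When the source
    changes, only this function is rewritten; downstream agents
    never see the raw columns."""
    return FhirPatient(
        resource_type="Patient",
        id=str(row["PAT_ID"]),
        family_name=row["PAT_LAST_NAME"].title(),
        birth_date=row["BIRTH_DATE"],
    )

def stage_source_b_row(row: dict) -> FhirPatient:
    """The acquired system's warehouse gets its own staging function
    targeting the same canonical shape."""
    return FhirPatient(
        resource_type="Patient",
        id=str(row["patient_key"]),
        family_name=row["last_nm"].title(),
        birth_date=row["dob"],
    )
```

In a real deployment these transforms would live in dbt models over your Snowflake landing zone; the structural point is identical. Two staging functions, one contract, and the merger rewrites only the left-hand side.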
Orchestration decoupled from compute. If your agent workflows are defined inside a vendor platform—embedded in a specific RPA tool or a proprietary automation layer—they die with that contract. Define agent orchestration as code. Use tools like Prefect, Dagster, or Snowflake's native task framework. The point is that the workflow definition must be extractable, version-controlled, and deployable against a different backend without rewriting business logic.
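To make "workflow definition as code" concrete, here is a minimal sketch of the property being described—not Prefect or Dagster themselves, but the shape they share: workflow structure and business logic live in plain, version-controlled code, and the runner is a swappable detail. The revenue-cycle step names and placeholder data are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]

@dataclass
class Workflow:
    name: str
    steps: list[Step] = field(default_factory=list)

    def execute(self, context: dict) -> dict:
        # A trivial local runner. The same Workflow object could be
        # handed to a Prefect, Dagster, or Snowflake-task adapter;
        # the business logic inside each Step never changes.
        for step in self.steps:
            context = step.run(context)
        return context

# Business logic in plain functions, not in a vendor console.
def fetch_claims(ctx: dict) -> dict:
    ctx["claims"] = [{"id": 1, "status": "denied"}]  # placeholder data
    return ctx

def flag_denials(ctx: dict) -> dict:
    ctx["denials"] = [c for c in ctx["claims"] if c["status"] == "denied"]
    return ctx

rev_cycle = Workflow(
    name="revenue-cycle-triage",
    steps=[Step("fetch", fetch_claims), Step("flag", flag_denials)],
)
```

The workflow object is extractable and testable in isolation, which is precisely what a definition locked inside an RPA tool's UI is not.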
Model-agnostic agent interfaces. The LLM wars are far from over. Today it is Claude Opus 4.6 or GPT-5.3. Tomorrow it is something else entirely. Your agents need a clean interface boundary—a contract that defines inputs, outputs, and expected behaviors—with the model call abstracted behind it. Swapping the brain should be a configuration change, not a rewrite. Every direct API call to a specific model provider without an abstraction layer is technical debt with a fuse attached.
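One common way to build that interface boundary is a protocol plus a provider registry, so the model choice is driven by configuration. A minimal sketch follows; the adapter classes are stubs standing in for real vendor SDK calls, and the registry keys are hypothetical.

```python
from typing import Protocol

class ChatModel(Protocol):
    """The contract agents depend on: inputs and outputs, nothing
    vendor-specific."""
    def complete(self, prompt: str) -> str: ...

# Thin per-provider adapters. Real implementations would wrap the
# vendor SDKs; these stubs are illustrative only.
class AnthropicAdapter:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"

class OpenAIAdapter:
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"

REGISTRY: dict[str, type] = {
    "anthropic": AnthropicAdapter,
    "openai": OpenAIAdapter,
}

def build_model(config: dict) -> ChatModel:
    # Swapping the brain is a configuration change, not a rewrite.
    return REGISTRY[config["provider"]]()

def summarize_discharge_note(model: ChatModel, note: str) -> str:
    # Agent logic depends only on the ChatModel contract.
    return model.complete(f"Summarize this discharge note: {note}")
```

Every agent written against `ChatModel` survives the next round of the LLM wars untouched; only the registry grows.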
Identity and access as a portable layer. This one gets overlooked constantly. Healthcare AI agents need access to patient data, which means HIPAA-scoped access controls. If those controls are implemented as org-specific role mappings in a single identity provider, a merger means rebuilding your entire access model. Externalize your RBAC. Define agent permissions in terms of data sensitivity tiers and clinical context, not org-unit hierarchies.
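What "permissions in terms of sensitivity tiers and clinical context" might look like, sketched in Python: the agent names, tier assignments, and contexts below are invented for illustration, and a real policy would come from your compliance team—but notice that nothing in it references an org unit, so it survives a merger intact.

```python
from enum import Enum

class SensitivityTier(Enum):
    DEIDENTIFIED = 1
    LIMITED_DATASET = 2
    FULL_PHI = 3

# Permissions keyed by data sensitivity and clinical context,
# not org-unit hierarchy. Assignments here are illustrative.
AGENT_POLICY = {
    "discharge-summarizer": {
        "max_tier": SensitivityTier.FULL_PHI,
        "contexts": {"inpatient"},
    },
    "population-analytics": {
        "max_tier": SensitivityTier.DEIDENTIFIED,
        "contexts": {"inpatient", "ambulatory"},
    },
}

def is_allowed(agent: str, tier: SensitivityTier, context: str) -> bool:
    policy = AGENT_POLICY.get(agent)
    if policy is None:
        return False  # default-deny for unknown agents
    return (tier.value <= policy["max_tier"].value
            and context in policy["contexts"])
```

Because the policy is expressed against stable concepts, re-homing it after a merger means re-mapping tiers to the new identity provider's roles—one translation table, not a rebuilt access model.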
The Snowflake Angle
For teams already building on Snowflake—and in healthcare data engineering, that is an increasingly large cohort—the platform's architecture lends itself to skeleton-thinking. Snowflake shares, secure views, and cross-account data replication mean you can structure your data layer to survive an org-chart upheaval. Combine that with dbt's declarative transformations and you have a data skeleton that is genuinely transplantable: the models define what the data should look like, not where it comes from.
The new Cortex capabilities push this further. If your agent's intelligence layer runs inside Snowflake—Cortex functions, Snowpark containers, Streamlit interfaces—it travels with the data platform, not with the org. That is not a small thing when the acquiring system's CIO decides to consolidate onto a single Snowflake account.
Build for the Merger You Don't See Coming
Here is the uncomfortable truth: most healthcare data teams are building agentic AI for this organization, this quarter. But the median tenure of a healthcare CIO is under four years, and the average health system undergoes a material organizational change every five to seven years. Your AI infrastructure will almost certainly outlive the org chart it was built for.
The teams that win are not the ones with the most sophisticated models. They are the ones whose infrastructure survives transplant. Build the skeleton first. Make it portable. Make it indifferent to which brain sits on top and which body surrounds it.
If you cannot draw a clean line between your agent's business logic and your current org's infrastructure, you do not have an AI platform. You have a very expensive prototype with an expiration date.