
How Healthcare Systems Can Experiment with AI Without Creating HIPAA Exposure

ibl.ai · May 11, 2026

The CMO approved an AI pilot for clinical decision support. Three departments are already using unapproved tools with patient data. Here's how to enable experimentation safely.

The Shadow AI Problem in Healthcare

Every health system CISO already knows this is happening. Clinical departments, frustrated with slow governance processes, are finding their own AI tools.

A hospitalist uses ChatGPT to draft discharge summaries. A coding team pastes operative notes into an AI tool to verify CPT codes. A nurse educator uses a consumer AI to generate patient education materials.

None of these tools have BAAs. None have been vetted by compliance. Patient data is flowing to third-party servers that the health system has no contractual relationship with.

This isn't hypothetical. It's happening at virtually every health system in the country. And the standard response — ban unapproved AI tools and send a compliance reminder — doesn't work, because the underlying need that drove the behavior hasn't been addressed.

Why Governance Committees Stall

Most health systems respond to AI interest by forming a governance committee. The committee includes the CMO, the CMIO, the CISO, representatives from compliance and legal, and perhaps a few department heads.

This committee then attempts to evaluate AI tools, develop policies, and approve pilots. The process takes months. Sometimes it takes over a year.

During that time, the departments that wanted AI capabilities yesterday are finding workarounds. The governance committee's deliberation isn't preventing AI use — it's preventing governed AI use.

The problem isn't that the committee is unnecessary. Governance matters enormously in healthcare. The problem is that the committee is trying to evaluate individual AI tools one at a time when what they need is a platform decision that enables safe experimentation across the organization.

The Platform Approach: Shared Infrastructure, Department-Specific Agents

Here's the architectural shift that makes experimentation safe.

Instead of evaluating, negotiating BAAs for, and deploying separate AI tools for each department, the health system deploys a single AI platform on its own infrastructure. That platform provides the HIPAA-compliant foundation — secure compute, PHI protection, audit logging, access controls — once.

Individual departments then build or configure agents on that shared infrastructure for their specific workflows.

The medical coding team gets a coding assistant that references their payer-specific guidelines. The nursing education department gets a patient education generator that uses approved clinical content.

The quality improvement team gets an agent that can analyze clinical outcomes data. The prior authorization team gets a tool that drafts appeals using the patient's actual clinical documentation.

Each agent serves a different department. All of them run on the same HIPAA-compliant infrastructure. No additional BAAs needed. No new vendor risk assessments required.

ibl.ai enables this approach — a platform where health systems create department-specific AI agents on shared infrastructure they control. The governance committee approves the platform once, and departments can experiment within governed boundaries.
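The shared-infrastructure model can be pictured as one platform configuration with per-department agent definitions layered on top. A minimal sketch in Python — the `Agent` class, control flags, and data-source names are illustrative, not ibl.ai's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    """A department-specific agent running on the shared HIPAA-compliant platform."""
    department: str
    purpose: str
    data_sources: tuple  # only sources approved at the platform level

# The compliance foundation is configured once, at the platform level.
PLATFORM_CONTROLS = {
    "audit_logging": True,
    "phi_stays_in_house": True,
    "role_based_access": True,
}

# Departments then define agents on top of that shared foundation.
AGENTS = [
    Agent("medical_coding", "verify CPT codes", ("payer_guidelines",)),
    Agent("nursing_education", "generate patient education", ("approved_clinical_content",)),
    Agent("prior_authorization", "draft appeals", ("clinical_documentation",)),
]

# Every agent inherits the same controls — no per-agent BAA or vendor review.
governed = all(PLATFORM_CONTROLS["phi_stays_in_house"] for _ in AGENTS)
```

The point of the structure: adding a fourth department means appending one `Agent` entry, not renegotiating the compliance foundation.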

Organizing Stakeholders for Safe Experimentation

Healthcare AI implementation fails when the organizational structure doesn't match the technology architecture. Here's how to organize the key stakeholders.

The Platform Layer: IT and Security

IT and the CISO own the platform infrastructure. They're responsible for deployment, security configuration, access controls, audit logging, and HIPAA compliance at the infrastructure level.

Their job isn't to evaluate every clinical use case. It's to ensure that any use case running on the platform is secure by default. When PHI never leaves the health system's infrastructure, the security team's role simplifies from "evaluate every vendor" to "maintain our own controls."
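"Secure by default" at the infrastructure level means, among other things, that every agent interaction emits a structured audit event regardless of which department built the agent. A hypothetical record shape, for illustration only (HIPAA's technical safeguards at 45 CFR §164.312(b) require audit controls; the field names here are assumptions):

```python
import json
from datetime import datetime, timezone

def audit_event(user_id: str, agent: str, action: str, phi_accessed: bool) -> str:
    """Build one append-only audit-log entry for an agent interaction."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "agent": agent,
        "action": action,
        "phi_accessed": phi_accessed,
    })

# Logged automatically by the platform, not by each department's agent code.
entry = audit_event("rn-4821", "med-rec-assistant", "generate_summary", phi_accessed=True)
```

Because logging lives in the platform layer, the security team audits one pipeline instead of one per vendor.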

The Governance Layer: Compliance and Clinical Leadership

The CMO, CMIO, and compliance officer define the rules of engagement. Which data sources can AI agents access? What types of clinical recommendations require physician oversight? What audit trails are required for clinical decision support?

These governance decisions apply across all agents on the platform. They're policy decisions, not technology decisions — which means the governance committee can make them without evaluating specific vendor products.

The Application Layer: Department Heads and Clinical Teams

This is where experimentation happens. Department heads identify use cases, clinical teams test agents in their workflows, and feedback drives iteration.

The critical difference from the current model: departments don't need governance approval for each new experiment. They need governance approval for the types of experiments they can run — and the platform enforces those boundaries technically.

A department head who wants to create an agent that helps nurses with medication reconciliation doesn't need to go back to the governance committee. If the agent uses approved data sources, runs on approved infrastructure, and operates within approved guardrails, it's already governed.
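"Approved boundaries enforced technically" can be expressed as a simple pre-flight check the platform runs before an agent goes live. This is a hypothetical sketch of the idea, not ibl.ai's actual enforcement code; the source names and policy sets are assumptions standing in for what a governance committee would configure:

```python
# Policies set once by the governance committee, enforced by the platform.
APPROVED_DATA_SOURCES = {
    "payer_guidelines",
    "approved_clinical_content",
    "clinical_documentation",
}
REQUIRES_PHYSICIAN_REVIEW = {"clinical_recommendation"}

def is_governed(data_sources: set, output_type: str, physician_review: bool) -> bool:
    """Return True if a proposed agent stays inside the approved boundaries."""
    if not data_sources <= APPROVED_DATA_SOURCES:
        return False  # unapproved data source → back to the committee
    if output_type in REQUIRES_PHYSICIAN_REVIEW and not physician_review:
        return False  # clinical recommendations require physician oversight
    return True

# A medication-reconciliation helper on approved sources, with physician
# review configured, is already governed — no new committee cycle needed.
ok = is_governed({"clinical_documentation"}, "clinical_recommendation", physician_review=True)
```

The check is cheap precisely because the hard decisions (which sources, which oversight rules) were made once as policy.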

Implementation Planning for Health Systems

Moving from point-solution AI to platform AI requires careful sequencing. Here's what the transition looks like across four phases.

Phase 1: Infrastructure Deployment (Weeks 1-6)

Deploy the AI platform on the health system's own infrastructure. Configure HIPAA-compliant security controls. Establish audit logging. Set up EHR integration via HL7 FHIR — starting with Epic or Cerner, whichever is the primary EHR.

The CISO validates the deployment against HIPAA technical safeguards. No PHI is processed yet.

Phase 2: Governance Framework (Weeks 4-8, Overlapping)

The governance committee defines policies for agent creation, data access, clinical oversight requirements, and acceptable use. These policies are configured as platform-level controls — not documented in a binder that nobody reads.

The CMIO leads the clinical governance decisions. The CISO leads the security governance decisions. Legal confirms BAA implications (which are minimal when PHI stays in-house).

Phase 3: Pilot Departments (Weeks 6-14)

Two or three departments deploy agents for specific workflows. Typical early candidates: medical coding (high volume, measurable accuracy), patient education (low clinical risk, high staff demand), and prior authorization (time-intensive, rules-based).

Each pilot runs on the shared platform. Results are measured against department-specific KPIs, not generic engagement metrics.

Phase 4: Governed Expansion (Weeks 12+)

Based on pilot results, additional departments create agents within the established governance framework. The governance committee reviews the framework periodically — not individual agents.

The health system now has a scalable AI capability that expands through governed experimentation rather than sequential vendor evaluations.

The Integration Layer: Epic, Cerner, and Beyond

Healthcare AI experimentation is only useful if the AI can access clinical data. This requires EHR integration — and integration strategy matters as much as the AI itself.

Epic: FHIR R4 APIs through Epic's developer ecosystem (formerly App Orchard). The AI platform reads clinical data through standard FHIR resources — Patient, Observation, MedicationRequest, Condition. Write-back capabilities (if needed for clinical decision support alerts) require additional Epic validation.

Cerner/Oracle Health: FHIR R4 APIs through Oracle Health's Millennium platform. Similar resource model to Epic, with platform-specific authentication requirements.

Allscripts and athenahealth: Both support FHIR APIs, though with varying levels of maturity. The AI platform's FHIR integration handles the differences at the connector level.

Meditech: FHIR support is newer but expanding. For Meditech facilities in a multi-hospital system, the AI platform may initially use HL7 v2 ADT feeds supplemented by FHIR as Meditech's API capabilities mature.

The architecture principle: integrate through open standards so the health system isn't locked into any single EHR vendor's AI ecosystem.
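Handling vendor differences "at the connector level" can be sketched as per-vendor connectors that all emit standard FHIR R4 search requests — `GET [base]/[resource]?{params}` — while subclasses carry vendor-specific details. The base URLs and auth labels below are placeholders, not real endpoints:

```python
from urllib.parse import urlencode

class FHIRConnector:
    """Builds vendor-agnostic FHIR R4 search URLs; subclasses add vendor specifics."""
    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def search(self, resource: str, **params) -> str:
        # Standard FHIR search: GET [base]/[resource]?{params}
        return f"{self.base_url}/{resource}?{urlencode(params)}"

class EpicConnector(FHIRConnector):
    auth = "smart-on-fhir-oauth2"     # Epic requires SMART on FHIR app registration

class OracleHealthConnector(FHIRConnector):
    auth = "oauth2-system-account"    # placeholder for Millennium-specific auth

# Placeholder base URL for a hypothetical hospital FHIR endpoint.
epic = EpicConnector("https://ehr.example-hospital.org/fhir/r4")
url = epic.search("Observation", patient="12345", category="laboratory")
```

Because every connector speaks the same FHIR resource model, swapping Meditech's interim HL7 v2 feed for its maturing FHIR API changes one connector, not every agent.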

What This Changes

When a health system has HIPAA-compliant AI infrastructure that departments can experiment on safely, three things change.

First, shadow AI disappears. Clinicians use unapproved tools because approved tools don't exist or take too long to procure. When they can create purpose-built agents on approved infrastructure within weeks, the incentive to use consumer tools evaporates.

Second, the governance committee's role shifts from bottleneck to enabler. Instead of evaluating tools, they're setting policies. Instead of slowing experimentation, they're defining the boundaries within which experimentation is safe.

Third, the health system develops institutional AI capability — not just a collection of vendor subscriptions. Each department's experimentation builds organizational knowledge about what works in clinical AI, which accumulates as a competitive advantage.

The Uncomfortable Truth

The governance committee isn't protecting the health system by moving slowly. It's creating the conditions for shadow AI by failing to provide a safe alternative.

Clinicians and departments will use AI. The question is whether they use it on infrastructure the health system controls and governs, or on consumer tools that create unmanaged HIPAA exposure.

Enabling safe experimentation isn't a concession to impatient departments. It's the responsible governance strategy.
