The Center of Excellence Paradox
Enterprise AI governance typically starts with a sensible idea: create a center of excellence (CoE).
Assemble a cross-functional team of data scientists, engineers, and business analysts. Task them with evaluating AI tools, developing standards, and running approved projects.
Six months later, the CoE has published a governance framework, evaluated fourteen vendors, and deployed two pilot projects.
Meanwhile, twelve business units have signed up for their own AI tools using departmental budgets. The sales team is using one chatbot. Marketing is using another. The legal team built something with a no-code tool. HR is experimenting with a third vendor's onboarding assistant.
Company data — customer records, employee information, proprietary processes — is flowing to servers the CIO has never heard of.
This is the center of excellence paradox. The more centralized and deliberate your AI governance, the more likely business units are to route around it.
Not because they're reckless, but because the CoE can't move fast enough to meet their needs.
Why Centers of Excellence Fail for AI
Centers of excellence work well for technologies with stable, well-defined use cases. A data analytics CoE makes sense because reporting requirements are relatively predictable. A cloud migration CoE makes sense because the migration has a defined end state.
AI is different. The use cases are emergent. Nobody predicted that the HR team would want an AI agent to handle benefits questions until an HR coordinator spent a weekend building one with ChatGPT.
Nobody anticipated that the compliance team would need AI to cross-reference regulatory updates against internal policies until a compliance analyst demonstrated it was possible.
A CoE model assumes the center can anticipate and prioritize all valuable AI use cases.
In practice, the people closest to the work — the HR coordinators, the compliance analysts, the sales managers — are the ones who discover the most valuable applications. A centralized team can't see what they see.
The CoE also creates a bottleneck. When every AI initiative requires CoE approval, review, and resourcing, the queue grows faster than the CoE can process it.
Business units with urgent needs and available budget do the rational thing: they go around the CoE and buy their own tools.
Shadow AI Is a Governance Failure, Not a People Failure
When a business unit deploys an unapproved AI tool, the standard response is to treat it as a compliance violation. IT sends an email. The CISO escalates. The tool is blocked. The employee who deployed it gets a talking-to.
This response treats the symptom without addressing the cause.
The business unit didn't deploy shadow AI because they wanted to violate policy. They deployed it because the approved path was too slow, too restrictive, or didn't address their use case.
Shadow AI is a governance failure. Specifically, it's a failure to provide a legitimate path for business units to experiment with AI at the speed the technology demands.
The governance failure has two dimensions. First, the organization lacks infrastructure that enables rapid experimentation within security boundaries.
Second, the approval process treats AI experimentation like a major procurement decision, requiring weeks of review for something an employee could prototype in an afternoon.
Distributed Ownership on Shared Infrastructure
The solution is a model that centralizes infrastructure while distributing ownership.
IT provides a shared AI platform that any business unit can use to create, deploy, and iterate on AI agents — without purchasing separate tools, without sending data to unapproved vendors, and without waiting for CoE prioritization.
Here's what this looks like operationally.
IT provides the platform. A single AI infrastructure deployed on the enterprise's own cloud (AWS, Azure, or GCP) within the company's existing environment.
The platform supports multiple LLM providers, integrates with enterprise systems through standard protocols, and enforces security controls at the infrastructure level.
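To make that concrete, here is a minimal sketch of what such a configuration could look like. The field names and provider IDs are illustrative, not any specific product's schema; the point is that model access, network egress, and encryption are decided once, at the infrastructure level.

```python
# Illustrative platform configuration. Field names and provider IDs are
# hypothetical, not any specific vendor's schema. The point: model access,
# network egress, and encryption are decided once, at the infrastructure level.
PLATFORM_CONFIG = {
    "deployment": {
        "cloud": "aws",            # runs inside the company's own account/VPC
        "region": "us-east-1",     # data residency pinned to this region
    },
    "model_providers": [
        {"id": "azure-openai", "egress": "private-endpoint"},
        {"id": "bedrock-anthropic", "egress": "vpc-endpoint"},
        {"id": "self-hosted-llama", "egress": "none"},  # own GPUs, no external calls
    ],
    "security": {
        "encryption_at_rest": "kms",
        "audit_logging": True,            # every agent interaction is recorded
        "allowed_egress_domains": [],     # default deny for unapproved endpoints
    },
}
```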
Business units provision their own spaces. Each department gets a sandboxed environment within the platform.
They can create AI agents, upload training content, configure integrations with their specific tools, and iterate without IT involvement for each experiment.
Security and compliance are embedded, not bolted on. Data governance controls are baked into the platform: encryption, access controls, audit logging, data residency rules.
Business units can't accidentally send data to an external server because the platform doesn't allow it. Compliance teams can audit any agent's behavior through the platform's built-in logging.
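A sandbox definition might look something like the sketch below, with all names invented for illustration. What matters is that isolation and retention rules travel with the sandbox itself rather than depending on each team's diligence.

```python
from dataclasses import dataclass

# Hypothetical sketch of a per-department sandbox definition. The names are
# invented; the idea is that isolation and retention rules travel with the
# sandbox instead of depending on each team's diligence.
@dataclass
class Sandbox:
    department: str
    owners: list[str]          # champions who can create and edit agents
    data_sources: list[str]    # the only systems agents here can reach
    pii_allowed: bool = False  # compliance toggles this per sandbox
    retention_days: int = 365  # conversation retention policy

HR_SANDBOX = Sandbox(
    department="hr",
    owners=["group:hr-ai-champions"],  # resolved through the identity provider
    data_sources=["workday", "sharepoint:hr-policies"],
    pii_allowed=True,                  # HR agents legitimately handle employee data
    retention_days=730,
)
```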
Okta handles identity. SSO through Okta (or whatever identity provider the enterprise uses) means employees access the AI platform with their existing credentials.
Access controls are managed through the same groups and roles that govern access to everything else.
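As a hedged illustration, role assignment can reduce to a simple mapping from identity-provider groups to platform roles. The group and role names below are hypothetical.

```python
# Hedged sketch: group claims from the SSO token map to platform roles.
# Group and role names are invented for illustration.
GROUP_TO_ROLE = {
    "hr-ai-champions": "agent_builder",   # can create and publish agents
    "hr-all": "agent_user",               # can chat with published agents
    "compliance-auditors": "auditor",     # read-only access to logs
}

def roles_for(groups: list[str]) -> set[str]:
    """Resolve platform roles from the groups asserted in an SSO token."""
    return {GROUP_TO_ROLE[g] for g in groups if g in GROUP_TO_ROLE}

print(roles_for(["hr-all", "hr-ai-champions"]))  # {'agent_user', 'agent_builder'}
```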
This is the model that organizations building on platforms like ibl.ai have implemented. The platform provides the secure foundation.
Business units build on top of it. IT governs the infrastructure without bottlenecking the experimentation.
How to Organize the Stakeholders
Successful enterprise AI experimentation requires clear roles across four organizational functions.
IT and Infrastructure
IT owns the platform, not the use cases. Their responsibilities include deployment, security configuration, integration with enterprise systems (Workday, SAP SuccessFactors, Oracle HCM, Teams, Slack, SharePoint), model provider management, and infrastructure monitoring.
IT also maintains the integration layer. MCP (Model Context Protocol) connectors that link the AI platform to the enterprise data stack are IT's responsibility to deploy and maintain.
When Workday pushes an API update, IT handles the connector update — not each business unit individually.
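For a sense of scale, a connector can be quite small. The sketch below uses the reference MCP Python SDK; the Workday call is a stub, since the real endpoint, authentication, and response handling would live in IT's connector code.

```python
# Sketch of an IT-maintained connector using the reference MCP Python SDK
# (pip install mcp). The Workday call is a stub; the real endpoint, auth,
# and response handling would live here, in IT's code, in one place.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("workday-connector")

@mcp.tool()
def get_benefits_summary(employee_id: str) -> str:
    """Return a benefits enrollment summary for an employee."""
    # Stub: IT would call Workday's API here, so an API version change
    # is absorbed once instead of in every business unit's agent.
    return f"Benefits summary for {employee_id} (stubbed)"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for the platform to consume
```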
HR and L&D
HR and L&D are typically the highest-volume users of enterprise AI.
Their use cases include onboarding assistants, compliance training agents, benefits Q&A bots, performance review preparation tools, and career development advisors.
The L&D team should own the content that trains these agents — company policies, training materials, competency frameworks — and have the ability to update that content without filing IT tickets.
Compliance and Legal
Compliance defines the guardrails: what data types can be processed, what outputs require human review, what retention policies apply to AI conversations, and what audit trails are required.
These rules should be enforceable at the platform level, not dependent on employee behavior.
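A platform-level guardrail can be as simple as a gate that runs before any prompt leaves the enterprise. The sketch below blocks one obvious PII pattern; the regex stands in for a real PII classifier, and the policy hook shown is an assumption, not a standard API.

```python
import re

# Minimal sketch of a platform-level guardrail, assuming a hook that runs
# before any prompt leaves the enterprise. The SSN regex stands in for a
# real PII classifier; the function name is an invention, not a standard API.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def enforce_outbound_policy(prompt: str, provider_egress: str) -> str:
    """Block text containing an SSN from reaching any external provider."""
    if provider_egress != "none" and SSN_PATTERN.search(prompt):
        raise PermissionError("Blocked: SSN detected in outbound prompt")
    return prompt

enforce_outbound_policy("When does open enrollment start?", "private-endpoint")  # passes
```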
Legal reviews data processing agreements, evaluates model provider terms of service, and ensures the platform's deployment model satisfies GDPR, data sovereignty, and industry-specific regulations.
Business Unit Champions
Each business unit that wants to experiment with AI designates a champion.
This is someone who understands the unit's workflows well enough to identify high-value AI use cases and has the technical comfort to configure agents on the platform.
Champions aren't data scientists. They're power users who can articulate a business problem, configure an AI agent to address it, test the results, and iterate.
The platform should be accessible enough for this profile — if it requires engineering skills to create an agent, the experimentation model won't scale.
Implementation Planning
Rolling out enterprise AI experimentation in a controlled way requires a phased approach.
Phase 1: Platform deployment (weeks 1-4). IT deploys the AI platform on the enterprise's infrastructure, integrates it with Okta, Teams or Slack, and one HRIS system (typically Workday or SAP SuccessFactors), and completes the security review and compliance validation.
Phase 2: Pilot business units (weeks 5-8). Two or three business units with clear use cases — typically HR/L&D, sales enablement, and compliance — receive access to the platform.
Each unit designates a champion who creates initial AI agents. IT provides support for integration and configuration questions.
Phase 3: Governance calibration (weeks 9-12). Based on Phase 2 experience, compliance adjusts guardrails. IT refines access controls and monitoring.
The organization documents patterns that work and anti-patterns to avoid. This phase is critical — governance rules created in theory rarely survive contact with real use cases.
Phase 4: Broad rollout (weeks 13+). Additional business units receive access. Champions from Phase 2 units help onboard new teams.
IT expands integrations to include additional enterprise systems — ADP, Cornerstone, Degreed, LinkedIn Learning, SharePoint.
The CISO's Checklist
Enterprise AI experimentation raises legitimate security concerns. Here's what the CISO should verify before approving the platform.
Data residency. All data processing occurs within the enterprise's infrastructure.
No employee data, customer data, or proprietary information is sent to external servers unless explicitly configured and approved.
Access controls. Business unit sandboxes are isolated. Marketing can't access HR's agents or data. Access follows the principle of least privilege through the enterprise's existing identity provider.
Audit logging. Every interaction with every agent is logged. Logs include who accessed what, when, and what data was processed.
Logs are retained according to the enterprise's data retention policy and are accessible to the compliance team.
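The exact log format will vary by platform. The sketch below shows the shape such a record might take, mirroring the who/what/when questions above; the field names are illustrative, not a vendor format.

```python
import json
from datetime import datetime, timezone

# Illustrative shape of one audit record, mirroring the who/what/when
# questions above. Field names are hypothetical, not a vendor format.
def audit_record(user: str, agent: str, data_sources: list[str]) -> str:
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,                  # identity from the SSO token
        "agent": agent,                # which agent was invoked
        "data_sources": data_sources,  # what data the agent touched
        "retention_policy": "730d",    # inherited from the sandbox config
    })

print(audit_record("jdoe@example.com", "hr-benefits-bot", ["workday"]))
```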
Model provider management. The CISO should know which LLM providers are in use, what data is sent to their APIs, and what the provider's data handling terms are.
A platform that supports running open-source models on the enterprise's own GPUs gives the CISO an option that sends zero data to any external provider.
Incident response. If an AI agent produces a harmful output or processes data it shouldn't have access to, the platform must support immediate deactivation, root cause analysis, and remediation.
This capability must work without depending on a vendor's support team.
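In code terms, deactivation should be a platform primitive the enterprise can call itself. Every name in the sketch below is invented; the requirement it illustrates is that the kill switch runs inside the enterprise, with no vendor ticket in the loop.

```python
# Hypothetical kill switch, sketched to show the requirement: deactivation
# is a platform primitive the enterprise triggers itself. Every name here
# is invented; the in-house control is the point.
AGENT_STATUS: dict[str, str] = {"hr-benefits-bot": "active"}

def deactivate_agent(agent_id: str, reason: str, actor: str) -> None:
    AGENT_STATUS[agent_id] = "disabled"  # takes effect on the next request
    # A real platform would also snapshot the agent's logs for root cause
    # analysis and page security and compliance on-call here.
    print(f"{agent_id} disabled by {actor}: {reason}")

deactivate_agent("hr-benefits-bot", "harmful output reported", "ciso@example.com")
```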
From Experimentation to Production
The goal of organized experimentation isn't permanent experimentation. It's identifying which AI use cases deliver genuine business value and promoting them to production.
The advantage of the distributed ownership model is that this transition is smooth. An agent that starts as an experiment in the L&D team's sandbox — helping new hires navigate benefits enrollment — can be promoted to a production service that every new employee interacts with on their first day.
The infrastructure is the same. The security controls are the same. The only change is the scope of access and the level of monitoring.
This is where the CoE model can re-enter the picture — not as a gatekeeper for experimentation, but as a review board for production promotion.
The CoE evaluates which experiments are ready for enterprise-wide deployment, ensures they meet production standards for reliability and compliance, and coordinates the transition.
The enterprises that get AI right won't be the ones that controlled it most tightly. They'll be the ones that made it safe to experiment — and then had the infrastructure to scale what worked.