AI Agents Just Got the Ability to Pay for Things
On March 18, 2026, Stripe launched the Machine Payments Protocol (MPP) — an open standard that lets AI agents autonomously pay for services, subscribe to APIs, and transact with businesses. Agents can now spin up headless browser sessions, send physical mail, and even order food — all programmatically, without a human entering credit card details.
MPP works over standard HTTP. An agent requests a resource, the service responds with a payment request, the agent authorizes the charge, and the resource is delivered. It integrates with Model Context Protocol (MCP), so any MCP-compatible agent can discover and pay for services as naturally as it calls a tool.
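The request/pay/retry loop described above can be sketched in a few lines. This is an illustrative simulation, not MPP's actual wire format: the 402 status code, the `payment_request` field, and the stub functions are all assumptions standing in for the real protocol.

```python
# Hypothetical sketch of the MPP loop: request a resource, receive a payment
# request, authorize, retry with proof of payment. Field names, the 402
# status, and both stubs are illustrative assumptions, not the real protocol.

def fetch_with_payment(url, request_fn, authorize_fn):
    """Request a resource; if the service demands payment, authorize and retry."""
    resp = request_fn(url, payment_token=None)
    if resp["status"] == 402:                          # service asks for payment
        token = authorize_fn(resp["payment_request"])  # agent authorizes the charge
        resp = request_fn(url, payment_token=token)    # retry with proof of payment
    return resp

# --- stub service and authorizer so the sketch runs end to end ---
def stub_service(url, payment_token=None):
    if payment_token is None:
        return {"status": 402, "payment_request": {"amount": 50, "currency": "usd"}}
    return {"status": 200, "body": f"resource at {url}"}

def stub_authorize(payment_request):
    return f"tok_{payment_request['amount']}_{payment_request['currency']}"

result = fetch_with_payment("https://api.example.com/report",
                            stub_service, stub_authorize)
print(result["status"])
```

The key point the sketch captures: the human never enters the loop — the agent negotiates payment as part of the ordinary request cycle.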
This is a genuine milestone. But it also reveals something important: payments are just one protocol in a much larger stack that agents need to be useful inside organizations.
The Protocol Gap
Consider what an AI agent inside a university or enterprise actually needs to do:
- Query the student information system for enrollment data
- Search the knowledge base for policy documents
- Pull analytics on learner engagement or operational metrics
- Create and train other agents on new course materials or compliance updates
- Schedule meetings, draft communications, manage workflows
- Remember context across sessions — who the user is, what they've asked before, what they're working on
Stripe solved the payment layer. But the data layer, the memory layer, the tool layer, the governance layer — those all need to exist too. And critically, they need to exist inside the organization's own infrastructure, not scattered across third-party SaaS platforms where data governance is an afterthought.
MCP as the Interoperability Standard
This is where Model Context Protocol becomes the connective tissue. MCP is an open standard (originally developed by Anthropic) that provides a uniform way for AI agents to discover and use tools. Instead of hardcoding integrations, agents query MCP servers that expose capabilities — analytics, search, document retrieval, agent management — through a consistent interface.
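Concretely, MCP frames tool use as JSON-RPC 2.0 messages: an agent first asks a server what it offers (`tools/list`), then invokes a capability by name (`tools/call`). A minimal sketch of those two messages — the tool name and arguments are illustrative, not from any real server:

```python
import json

def rpc(method, params, msg_id):
    """Build a JSON-RPC 2.0 request, the framing MCP uses on the wire."""
    return {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}

# Step 1: discover what the server exposes.
discover = json.dumps(rpc("tools/list", {}, 1))

# Step 2: invoke a capability by name. "search_courses" and its arguments
# are hypothetical examples.
invoke = json.dumps(rpc("tools/call",
                        {"name": "search_courses",
                         "arguments": {"query": "data ethics"}}, 2))

print(discover)
print(invoke)
```

Because every server speaks this same shape, an agent integrated once against MCP can use any new server an organization turns on, with no new integration code.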
At ibl.ai, we built our Agentic OS around MCP as the interoperability layer. Here's what that looks like in practice:
Four MCP servers ship with every deployment:
| Server | What It Does |
|---|---|
| Analytics | Learning analytics, mentor usage, LLM costs, engagement metrics |
| Search | Course catalog and mentor discovery across the platform |
| Agent Chat | Programmatic conversations with any configured AI mentor |
| Agent Create | Full lifecycle management — create, configure, train, and deploy agents |
Each server authenticates via platform API keys, supports streamable HTTP transport, and can be connected to external AI tools like Claude Code, Cursor, or any MCP-compatible client. Organizations configure and control which connectors are active, what authentication they require, and what data they expose.
This isn't theoretical. A team using Claude Code can query their institutional analytics, create a new AI mentor trained on specific documents, and have that mentor answer questions — all through MCP, all within the organization's own infrastructure.
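That workflow reduces to three tool calls — one per MCP server. A minimal sketch under stated assumptions: `call_tool()` stands in for a real MCP client, and every server name, tool name, and argument below is hypothetical.

```python
# Hypothetical sketch of the three-step workflow: query analytics, create a
# mentor trained on documents, then chat with it. call_tool() is a stub
# dispatcher so the sketch runs without a live deployment; all names are
# illustrative.

def call_tool(server, name, arguments):
    if name == "engagement_report":
        return {"active_learners": 1240}
    if name == "create_mentor":
        return {"mentor_id": "mentor-42", "trained_on": arguments["documents"]}
    if name == "chat":
        return {"reply": f"answer from {arguments['mentor_id']}"}

# 1. Pull institutional analytics.
metrics = call_tool("analytics", "engagement_report", {"period": "30d"})

# 2. Create a mentor trained on specific documents.
mentor = call_tool("agent-create", "create_mentor",
                   {"name": "Policy Mentor", "documents": ["handbook.pdf"]})

# 3. Ask the new mentor a question.
answer = call_tool("agent-chat", "chat",
                   {"mentor_id": mentor["mentor_id"],
                    "message": "What is the leave policy?"})

print(metrics["active_learners"], mentor["mentor_id"], answer["reply"])
```

The shape matters more than the stubs: each step is the same discover-and-call pattern, so chaining servers requires no bespoke glue code.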
(Watch a walkthrough of connecting MCP servers to AI agents)
Why Ownership Changes the Equation
Stripe's MPP is deliberately open — any agent framework can implement it. But when it comes to the data and systems that agents connect to inside an organization, openness alone isn't enough. You need ownership.
Here's the difference:
- Rented AI: You call an API. The vendor controls the model, the data pipeline, the access policies, and the pricing. If they change terms, restrict use cases, or sunset a feature, you adapt or migrate.
- Owned AI: You deploy the full stack — code, models, connectors, policy engine — on your infrastructure. You configure what agents can access, how they authenticate, what they remember, and who they report to. If you want to swap the underlying LLM from GPT to Llama to Mistral, you change a config line.
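"Swap the LLM by changing a config line" can be made concrete: application code calls a provider-neutral function, and only the config names a model. The provider names and client interface below are illustrative assumptions, not a specific product's API.

```python
# Sketch of config-driven model swapping: complete() never names a vendor,
# so switching from GPT to Llama to Mistral is a one-line config change.
# Provider names and the lambda "clients" are illustrative stand-ins.

CONFIG = {"model_provider": "llama"}   # change this one line to switch models

PROVIDERS = {
    "gpt":     lambda prompt: f"[gpt] {prompt}",
    "llama":   lambda prompt: f"[llama] {prompt}",
    "mistral": lambda prompt: f"[mistral] {prompt}",
}

def complete(prompt):
    # Application code reads the config at call time; no vendor is hardcoded.
    return PROVIDERS[CONFIG["model_provider"]](prompt)

print(complete("Summarize enrollment trends."))
```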
This week, the Pentagon designated Anthropic as a "supply chain risk," arguing that the company could theoretically alter model behavior during operations. Whether you agree with that framing or not, it crystallizes the risk: depending on a single vendor for critical AI infrastructure is a strategic vulnerability.
ibl.ai's approach — full source code access, LLM-agnostic architecture, deploy anywhere (AWS, Azure, GCP, on-prem, air-gapped) — exists because we've seen this play out across 400+ organizations. Universities, enterprises, and government agencies all arrive at the same conclusion: they need to own their agentic infrastructure, not rent it.
The Stack Agents Actually Need
If you're building an agentic strategy for your organization, here's the stack to think about:
Data Layer — Unified access to your SIS, LMS, CRM, ERP, and knowledge bases. Not copies in a vendor's cloud — live connections to your systems of record.
Memory Layer — Per-user context that persists across sessions. What has this person asked before? What are their knowledge gaps? What are they working on? (See how memory works)
Tool Layer — MCP connectors that let agents call external services, run workflows, search catalogs, and create other agents. Configurable, auditable, toggleable.
Payment Layer — Standards like Stripe MPP and the Agentic Commerce Protocol for agent-to-service transactions.
Governance Layer — RBAC, SSO, audit trails, content guardrails, multi-tenancy. Who can do what, and who can see the logs.
Model Layer — LLM-agnostic routing. Use commercial models for some tasks, open-weight models for others. Switch without re-integrating.
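The model layer's routing idea can be sketched as a lookup table: each task class maps to a model, and changing the mapping never touches integration code. The task classes and model names here are illustrative assumptions.

```python
# Sketch of LLM-agnostic routing: cheap open-weight models for routine
# tasks, a commercial model for harder reasoning, with a fallback default.
# Task classes and model names are illustrative, not a real deployment.

ROUTES = {
    "summarize":   "open-weight-small",   # routine, cost-sensitive
    "draft_email": "open-weight-small",
    "analyze":     "commercial-large",    # harder reasoning
}

def route(task, default="commercial-large"):
    """Pick a model for a task class; unknown tasks fall back to the default."""
    return ROUTES.get(task, default)

print(route("summarize"))   # routed to the small open-weight model
print(route("negotiate"))   # unknown task, falls back to the default
```

The design choice this illustrates: because routing is data rather than code, swapping models is an operational decision, not a re-integration project.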
No single vendor will own all six layers. The organizations that thrive will assemble their own stack — choosing best-in-class components at each layer and connecting them through open protocols like MCP and MPP.
What This Means for You
Stripe giving agents wallets is exciting. But a wallet without a workspace is just a payment method looking for a purpose.
The real opportunity is building agentic infrastructure — interconnected agents running inside your organization, wired into your data, operating in sandboxes you control, and governed by policies you set.
That's what ibl.ai's Agentic OS is built for. If you're exploring how to move from AI experiments to institutional AI infrastructure, start here.
ibl.ai is an Agentic AI Operating System deployed by 400+ organizations including NVIDIA, Google, MIT, and Syracuse University. Learn more