Three security models. One platform. The right isolation boundary for every compliance requirement.
AI agents that execute real code, browse the web, and manage files introduce a fundamentally different threat surface than chatbots. ibl.ai addresses this with three purpose-built security models—NanoClaw, IronClaw, and OpenClaw—each calibrated for a different risk tolerance and compliance posture.
Every model enforces strict boundaries between agent workloads and host infrastructure. Whether you need lightweight OS-level container isolation or five independent security layers including a WASM sandbox, ibl.ai gives your security team auditable, enforceable controls without sacrificing agent capability.
Built on the enterprise-hardened OpenClaw framework and battle-tested across 400+ organizations and 1.6M+ users, these security models are production-grade by design—not retrofitted afterthoughts.
Autonomous AI agents are no longer passive responders. They install packages, execute shell commands, access file systems, call external APIs, and act on schedules without human prompting. Deploying them on shared infrastructure without strict isolation is a critical security gap that exposes sensitive data, internal networks, and downstream systems to agent-level compromise.
Most enterprise AI platforms offer no meaningful isolation model at all—agents run in shared, opaque cloud environments with no audit trail and no boundary enforcement. Organizations in regulated industries cannot accept this. They need verifiable isolation, granular permission controls, and the ability to choose where and how agents execute—on their own infrastructure, under their own security policies.
Agents that can run Python, shell, or SQL without sandboxing can exfiltrate data, escalate privileges, or pivot to internal systems if a skill plugin is compromised or a prompt injection attack succeeds.
A single malicious or misconfigured agent action can breach network perimeters, corrupt databases, or expose regulated data—triggering compliance violations and incident response costs.

Vendor-hosted AI platforms run agent workloads in shared environments. Organizations have no visibility into co-tenancy, no control over network egress, and no audit trail of what code actually executed.
Regulated industries—finance, healthcare, defense—cannot demonstrate compliance when agent execution environments are opaque and outside organizational control.

Agents with access to API keys, database credentials, and OAuth tokens can leak secrets through logs, memory artifacts, or compromised skill plugins if credential handling is not isolated and scoped.
Credential leakage from an agent runtime can cascade into full account compromise across integrated SaaS platforms, cloud providers, and internal services.

Autonomous agents operating on cron schedules or reactive triggers can consume unbounded CPU, memory, and network bandwidth—especially when executing long-running code or spawning subprocesses.
Without resource limits, a runaway agent can degrade or take down shared infrastructure, creating availability incidents that affect other workloads and users.

Coarse-grained permission models that grant agents broad access to skills and data cannot satisfy least-privilege requirements. Per-user, per-skill, and per-organization controls are essential for enterprise deployments.
Over-permissioned agents violate zero-trust principles and create audit findings that block enterprise procurement and regulatory approval.

Choose NanoClaw for lightweight OS-level container isolation, IronClaw for five-layer defense-in-depth including WASM sandboxing, or OpenClaw for application-level permission controls. Each model maps to a distinct compliance profile and infrastructure requirement.
Each agent receives its own isolated execution environment—a dedicated Linux container under NanoClaw or a layered sandbox under IronClaw. Environments are provisioned on your infrastructure, on-premises or in your private cloud, with no shared tenancy.
Network policies restrict agent egress to explicitly allowlisted endpoints. IronClaw adds request-level filtering to block unauthorized outbound calls. Agents cannot reach internal network segments beyond their defined scope.
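A deny-by-default egress policy can be sketched as follows. This is an illustrative model only—the hostnames, the `ALLOWLIST` name, and the `egress_allowed` helper are hypothetical, and real enforcement happens at the network layer rather than in application code:

```python
from urllib.parse import urlparse

# Illustrative deny-by-default egress policy (hostnames are hypothetical).
# Request-level filtering like IronClaw's sits on top of network rules.
ALLOWLIST = {"api.openai.com", "internal-proxy.corp.example"}

def egress_allowed(url: str) -> bool:
    """Permit an outbound call only if its host is explicitly allowlisted."""
    return urlparse(url).hostname in ALLOWLIST

# A cloud metadata endpoint -- a classic exfiltration target -- is blocked:
assert egress_allowed("https://api.openai.com/v1/chat/completions")
assert not egress_allowed("http://169.254.169.254/latest/meta-data/")
```

The key property is the default: any endpoint not explicitly listed is unreachable, so an agent compromised by prompt injection cannot open arbitrary outbound connections.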
Credentials and API keys are injected at runtime through scoped secret stores—never stored in agent memory files or accessible across agent boundaries. IronClaw's credential layer enforces per-agent secret scoping with automatic rotation support.
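The scoping behavior can be illustrated with a toy secret store. Class and identifier names here are invented for the sketch; the point is that secrets are keyed to an agent identity, resolved only at call time, and that cross-agent reads fail closed:

```python
class ScopedSecretStore:
    """Toy model of runtime credential injection: secrets are keyed by
    (agent_id, name), handed out only at call time, and a cross-agent
    read fails closed. All names here are illustrative."""

    def __init__(self):
        self._secrets = {}

    def put(self, agent_id: str, name: str, value: str) -> None:
        self._secrets[(agent_id, name)] = value

    def get(self, agent_id: str, name: str) -> str:
        # An agent can only resolve secrets scoped to its own identity.
        try:
            return self._secrets[(agent_id, name)]
        except KeyError:
            raise PermissionError(f"{name!r} is not scoped to {agent_id!r}")

store = ScopedSecretStore()
store.put("agent-a", "DB_PASSWORD", "s3cret")
assert store.get("agent-a", "DB_PASSWORD") == "s3cret"
```

Because resolution requires the agent's own identity, a compromised skill plugin in one agent cannot enumerate or read another agent's credentials.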
Agent skills—Python, R, shell, SQL, browser automation—execute inside the isolated environment. Resource limits cap CPU, memory, and execution time. All package installations and file system writes are contained within the agent's boundary.
Every agent action, tool call, code execution, and external request is logged to an immutable audit trail. Logs are exportable to your SIEM, satisfy SOC 2 and HIPAA audit requirements, and provide forensic-grade traceability for incident response.
Each agent runs in its own Linux container with ~500 lines of fully auditable isolation code. Lightweight enough for high-density deployments, strong enough to enforce hard boundaries between agent workloads and the host system. Ideal for organizations that need verifiable isolation without operational complexity.
Five independent security layers—network isolation, request filtering, credential scoping, WASM sandbox, and Docker container boundaries—provide overlapping controls so that compromise of any single layer does not result in a breach. Designed for the highest-risk agentic workloads.
Per-user, per-skill, and per-organization permission checks enforce least-privilege access across the 5,700+ community plugins available in the OpenClaw ecosystem. Administrators define exactly which skills each agent or user can invoke.
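A least-privilege check of this shape can be sketched as a policy lookup. The organization, user, and skill names below are hypothetical, as is the `can_invoke` helper; the essential property is that anything not explicitly granted is denied:

```python
# Illustrative least-privilege policy table (all identifiers hypothetical).
# A skill call is permitted only when the (org, user, skill) triple is
# explicitly granted; everything else is denied by default.
GRANTS = {
    ("acme-corp", "alice", "python_exec"),
    ("acme-corp", "alice", "web_browse"),
    ("acme-corp", "bob", "sql_query"),
}

def can_invoke(org: str, user: str, skill: str) -> bool:
    return (org, user, skill) in GRANTS

assert can_invoke("acme-corp", "alice", "python_exec")
assert not can_invoke("acme-corp", "bob", "python_exec")  # not granted to bob
```

Evaluating this check on every tool call—rather than once at session start—is what makes the model auditable: each grant decision appears in the log alongside the action it authorized.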
IronClaw's WebAssembly sandbox layer executes untrusted skill code in a memory-safe, capability-restricted environment before it reaches the container layer. This provides a second enforcement point for code that originates from community plugins or user-supplied inputs.
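The underlying idea is capability-based: untrusted code receives no ambient authority, only objects explicitly handed to it. The pure-Python sketch below only illustrates that model—real enforcement requires a WASM runtime, which restricts code at the bytecode level rather than by convention. All names here are invented:

```python
class ReadOnlyWorkspace:
    """Capability object granting read access to one directory and nothing
    else. (Path is illustrative; no real I/O is performed here.)"""
    def __init__(self, root: str):
        self.root = root

def run_untrusted(plugin, capabilities: dict):
    # In a WASM sandbox the plugin has no filesystem, network, or
    # environment access -- only the capability objects passed in.
    # Python cannot enforce this by itself; this merely shows the shape.
    return plugin(capabilities)

caps = {"workspace": ReadOnlyWorkspace("/srv/agent-42")}
granted = run_untrusted(lambda c: sorted(c.keys()), caps)
assert granted == ["workspace"]
```

The sandbox layer matters most for the 5,700+ community plugins: code that was never reviewed by your security team runs with exactly the capabilities an administrator granted it, and nothing more.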
Every agent action—LLM call, tool invocation, file write, network request, code execution—is logged with timestamps, user context, and output hashes. Audit logs are tamper-evident and exportable to Splunk, Datadog, or any SIEM via standard connectors.
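Tamper evidence via hash chaining can be sketched in a few lines. This is a simplified model (the record shape and field names are invented for illustration): each entry embeds the hash of its predecessor, so altering any historical record invalidates the chain on verification:

```python
import hashlib
import json

class AuditLog:
    """Toy tamper-evident log: every entry embeds the hash of the previous
    entry, so altering any record breaks the chain on verification."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev = self.GENESIS

    def append(self, action: dict) -> str:
        record = {"prev": self._prev, "action": action}
        # Hash covers the action plus the previous entry's hash.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        self._prev = record["hash"]
        return record["hash"]

    def verify(self) -> bool:
        prev = self.GENESIS
        for e in self.entries:
            expected = hashlib.sha256(json.dumps(
                {"prev": e["prev"], "action": e["action"]},
                sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False  # chain broken: some record was altered
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"agent": "agent-a", "tool": "python_exec"})
log.append({"agent": "agent-a", "tool": "file_write"})
assert log.verify()
log.entries[0]["action"]["tool"] = "shell_exec"  # tamper with history
assert not log.verify()
```

Verification can run independently of the logging system—for example, inside your SIEM after export—so the audit trail remains trustworthy even if the agent host is later compromised.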
CPU, memory, disk I/O, and execution time limits are enforced at the container level. Administrators set per-agent and per-organization quotas. Runaway agents are automatically terminated and flagged without affecting adjacent workloads.
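The quota-and-terminate behavior can be sketched at the application level. In the actual models, limits are enforced by the container runtime (e.g. via cgroups); the class and quota values below are illustrative bookkeeping only:

```python
# Application-level sketch of per-agent quota accounting (values and names
# illustrative). Real enforcement happens in the container runtime; this
# only shows the automatic-termination behavior described above.
QUOTAS = {"cpu_seconds": 60, "memory_mb": 512}

class AgentRuntime:
    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.usage = {k: 0 for k in QUOTAS}
        self.terminated = False

    def charge(self, resource: str, amount: int) -> None:
        """Record usage; terminate and flag the agent once a quota is hit."""
        if self.terminated:
            return
        self.usage[resource] += amount
        if self.usage[resource] > QUOTAS[resource]:
            self.terminated = True  # runaway agent stopped, peers unaffected

agent = AgentRuntime("agent-7")
agent.charge("cpu_seconds", 59)
assert not agent.terminated
agent.charge("cpu_seconds", 5)   # exceeds the 60-second CPU quota
assert agent.terminated
```

Because termination is scoped to the offending agent's container, a runaway workload becomes a single flagged incident rather than a shared-infrastructure outage.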
All three security models deploy on your infrastructure—on-premises, air-gapped, or private cloud. No agent workloads leave your environment. This is a hard requirement for defense, government, and regulated financial institutions that cannot use shared vendor clouds.
| Aspect | Without | With ibl.ai |
|---|---|---|
| Execution Isolation | Agents share a runtime environment; one compromised agent can access another's memory and files | Each agent runs in its own container or WASM sandbox with hard boundaries enforced at the OS level |
| Network Egress Control | Agents can make arbitrary outbound network calls to any endpoint, including internal services | Egress restricted to allowlisted endpoints; IronClaw adds request-level filtering as a second enforcement point |
| Credential Security | API keys and secrets stored in agent memory files or environment variables accessible across the runtime | Credentials injected at runtime through scoped secret stores; never persisted in agent memory or accessible cross-agent |
| Audit Trail | No structured audit log; no record of what code executed, what data was accessed, or what external calls were made | Immutable, tamper-evident audit log for every action with cryptographic chaining and SIEM export |
| Resource Control | Runaway agents consume unbounded CPU and memory, degrading shared infrastructure for all users | Per-agent CPU, memory, disk, and execution time quotas enforced at the container level with automatic termination |
| Permission Granularity | Binary access model: agents either have access to a skill or they don't, with no per-user or per-org scoping | Per-user, per-skill, per-organization permission checks enforced at every ReAct loop tool call |
| Infrastructure Control | Agent workloads run in vendor-managed shared cloud; no data residency guarantees, no co-tenancy visibility | Self-hosted on any infrastructure—on-premises, air-gapped, or private cloud—with full data residency control |
Meets DISA STIG and IL4/IL5 requirements. Zero data leaves the classified enclave. Full audit trail satisfies chain-of-custody requirements for intelligence products.
Verifiable isolation satisfies FedRAMP and FISMA audit requirements. Self-hosted deployment keeps citizen data within government-controlled infrastructure.
HIPAA-compliant audit trails, data residency enforcement, and least-privilege credential scoping reduce breach risk and satisfy OCR audit requirements.
SOC 2 Type II and PCI-DSS audit trails. Isolated execution prevents model code from accessing or modifying live trading infrastructure.
Enforces attorney-client privilege boundaries programmatically. Audit logs provide defensible records of AI-assisted review for court admissibility questions.
Researchers get full computational freedom. IT maintains hard isolation between research groups. Resource quotas prevent any single experiment from consuming shared capacity.
Zero-trust agent architecture. Credential scoping prevents lateral movement. Audit trails satisfy internal security review and SOC 2 evidence requirements.
See how ibl.ai deploys AI agents you own and control—on your infrastructure, integrated with your systems.