ibl.ai AI Education Blog


The Six Claws: A Field Guide to Open-Source AI Agent Frameworks

Miguel Amigot · February 25, 2026
Premium

Six open-source repos, ranging from 500 lines to 400,000+, each making different bets about what matters most in an AI agent. We walk through every one: architecture, tradeoffs, and who each is built for.

Same DNA, different tradeoffs

In the previous posts, we covered the foundational architecture of AI agents (tool calling, messaging, agent loop) and how memory and skills transform a bare-bones loop into something useful.

Now let us look at six real implementations. Each one builds on the same patterns but makes different bets about what matters most. Think of them as six different cars built from the same engineering principles. One is a go-kart. One is a Tesla. One is an armored truck.

NanoClaw: 500 lines and a container

Repo: github.com/qwibitai/nanoclaw
Language: TypeScript (~500 lines)
Messaging: WhatsApp (via baileys)
Model: Claude via Agent SDK
Philosophy: Minimal code, maximum isolation

NanoClaw is proof that you can build a fully functional AI agent in 500 lines of TypeScript that you can read and understand in an afternoon.

The entire architecture fits on a napkin: WhatsApp (baileys) -> SQLite -> Polling loop -> Container (Claude Agent SDK) -> Response. That is the whole thing. One Node.js process orchestrating everything.
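The pipeline above can be sketched in a few lines. This is an illustrative stand-in, not NanoClaw's actual code (NanoClaw is TypeScript; all names here are hypothetical): messages land in SQLite, a polling pass picks up unanswered rows, hands each to an agent runner (the container boundary in the real system), and stores the reply.

```python
import sqlite3

def init_db(conn):
    conn.execute("""CREATE TABLE IF NOT EXISTS messages (
        id INTEGER PRIMARY KEY, chat_id TEXT, body TEXT, response TEXT)""")

def run_agent(chat_id: str, body: str) -> str:
    # Stand-in for "spawn the group's container and invoke the Claude Agent SDK".
    return f"[{chat_id}] echo: {body}"

def poll_once(conn) -> int:
    """Process every unanswered message; return how many were handled."""
    rows = conn.execute(
        "SELECT id, chat_id, body FROM messages WHERE response IS NULL").fetchall()
    for msg_id, chat_id, body in rows:
        reply = run_agent(chat_id, body)
        conn.execute("UPDATE messages SET response = ? WHERE id = ?", (reply, msg_id))
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
init_db(conn)
conn.execute("INSERT INTO messages (chat_id, body) VALUES (?, ?)", ("family", "hi"))
handled = poll_once(conn)
```

The single-process, poll-the-database design is what keeps the real thing small: no message broker, no job queue, just one loop over one table.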

The real innovation is the security model. NanoClaw gives each WhatsApp group its own isolated Linux container. Not an application-level permission check. A real OS-level boundary. On macOS, it uses Apple Container (lightweight VMs that ship with macOS Tahoe). On Linux, it uses Docker.

Each container has its own filesystem, IPC namespace, and process space. The agent inside Container A literally cannot access files in Container B, regardless of any bugs or prompt injections in the agent's code. The security boundary is the OS, not the application.

The tradeoff? NanoClaw is intentionally opinionated: one LLM (Claude), one platform (WhatsApp), one database (SQLite), one target machine (Mac). It says "no" to generality and says "yes" to simplicity. The philosophy is that with new models arriving every 3-6 months, code does not need to stand the test of time. Better agents will simply rewrite it.

NanoClaw also places a big emphasis on autonomous operation: not just reactive agents that respond to messages, but agents that schedule tasks, run background processes, and take initiative, with whatever controls the user requests.

Best for: People who want to understand exactly what their agent can do. People who care about security isolation. People who want to audit their entire agent in one sitting.

Nanobot: the MCP-first approach

Repo: github.com/HKUDS/nanobot
Language: Python (~4,000 lines)
Messaging: Telegram, Discord, WhatsApp, Slack, + 5 more
Models: Claude, GPT, DeepSeek, Gemini, + 8 more providers
Philosophy: Ultra-lightweight, MCP-first, research-ready

Nanobot comes from the Data Intelligence Lab at the University of Hong Kong. It was designed to answer a question: what is the absolute minimum code needed to build a fully functional multi-platform AI agent?

The answer: about 4,000 lines of Python. That is 99% smaller than OpenClaw.

Where NanoClaw delegates everything to the Claude Agent SDK, Nanobot implements its own agent orchestration with a clean layered architecture. Five major components:

  • AgentLoop (loop.py) runs the core reasoning cycle with a hard 20-iteration cap
  • ContextBuilder (context.py) assembles the prompt from SOUL.md, USER.md, memory, and skills
  • MessageBus (bus/queue.py) uses asyncio queues for non-blocking pub-sub routing
  • SkillsLoader manages always-loaded and on-demand skills
  • MemoryStore converts conversations into searchable facts
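The AgentLoop's reasoning cycle can be sketched as follows. This is a hedged sketch of the pattern, not Nanobot's actual API (the function names, message shapes, and fake model are all illustrative): call the model, execute any tool call it requests, feed the result back, and stop when the model answers in plain text or the hard 20-iteration cap is hit.

```python
MAX_ITERATIONS = 20  # mirrors the hard cap in loop.py

def agent_loop(llm, tools, prompt):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(MAX_ITERATIONS):
        reply = llm(messages)
        if reply.get("tool") is None:                 # plain answer: done
            return reply["content"]
        result = tools[reply["tool"]](reply["args"])  # execute the tool call
        messages.append({"role": "tool", "content": str(result)})
    return "(stopped: iteration cap reached)"

# Fake model for demonstration: requests one tool call, then answers.
def fake_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": (2, 3)}
    return {"tool": None, "content": f"The sum is {messages[-1]['content']}"}

answer = agent_loop(fake_llm, {"add": lambda args: args[0] + args[1]}, "what is 2+3?")
```

The cap is the important detail: a runaway model that keeps requesting tools burns at most 20 LLM calls before the loop bails out.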

The main design decision is MCP-first architecture. Nanobot acts as a thin orchestrator. The interesting stuff happens in the MCP tools you plug into it. Web search, file operations, image generation, code execution: these are all external MCP tool servers that the agent connects to at startup. Adding a new capability means plugging in a new MCP server, not modifying the core codebase.
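The MCP-first idea reduces to this shape (an illustrative sketch, not the actual MCP protocol or Nanobot's code): each server advertises the tools it offers, and the orchestrator core merely merges those registries at startup.

```python
class ToolServer:
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools          # tool name -> callable

class Orchestrator:
    def __init__(self):
        self.registry = {}

    def connect(self, server):
        # Adding a capability = connecting a server; core code is untouched.
        self.registry.update(server.tools)

    def call(self, tool, *args):
        return self.registry[tool](*args)

bot = Orchestrator()
bot.connect(ToolServer("search", {"web_search": lambda q: f"results for {q!r}"}))
bot.connect(ToolServer("files",  {"read_file": lambda p: f"contents of {p}"}))
result = bot.call("web_search", "agents")
```

The core stays thin because it never grows a new code path per capability; it only forwards calls to whatever servers were plugged in.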

Performance: ~100MB RAM, 0.8-second startup. At 4,000 lines, a competent developer can understand the entire codebase in a few days.

Best for: Developers who want to understand agent architecture by reading source code. Researchers who need a clean, hackable codebase. Anyone who wants multi-platform messaging without the weight of OpenClaw.

OpenClaw: the full platform

Repo: github.com/openclaw/openclaw
Language: TypeScript (~400,000+ lines)
Messaging: 11+ platforms (WhatsApp, Telegram, Discord, Slack, Signal, iMessage, Matrix, Teams, Google Chat, Zalo, WebChat)
Models: Claude, GPT, DeepSeek, Ollama, Mistral, + more
Philosophy: Feature-complete, production-ready, batteries included

OpenClaw is the original. Created by Peter Steinberger, it sparked the entire "claw" ecosystem. 200,000+ GitHub stars. 5,700+ community-built skills on ClawHub. macOS, iOS, and Android companion apps.

The architecture is a three-layer hub-and-spoke design:

Layer 1: The Gateway is a WebSocket server (port 18789) that manages sessions, routes messages, coordinates channels, serves the web UI, and handles configuration.

Layer 2: Channel Adapters connect to messaging platforms. Each adapter runs independently (one failing channel does not crash the Gateway) and standardizes messages into a unified format.

Layer 3: The Agent Runtime executes the AI loop, assembling context, invoking the LLM, executing tool calls, and streaming responses back.
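The Layer 2 normalization step is the glue that makes hub-and-spoke work. A minimal sketch (field names and payload shapes are illustrative, not OpenClaw's actual adapter code): each adapter translates its platform's payload into one unified message shape before anything reaches the Gateway.

```python
def from_telegram(update):
    # Telegram-shaped payload -> unified message
    return {"channel": "telegram",
            "sender": update["message"]["from"]["id"],
            "text": update["message"]["text"]}

def from_slack(event):
    # Slack-shaped payload -> unified message
    return {"channel": "slack",
            "sender": event["user"],
            "text": event["text"]}

tg = from_telegram({"message": {"from": {"id": 42}, "text": "hello"}})
sl = from_slack({"user": "U123", "text": "hello"})
# Both now share the same shape, so the Gateway routes them identically.
```

Because everything downstream sees only the unified shape, adding platform number twelve means writing one adapter, not touching the Gateway or the Agent Runtime.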

The skill ecosystem is where OpenClaw pulls away from the others. With 5,700+ skills on ClawHub, your agent can manage Gmail and Calendar, summarize web pages, automate GitHub workflows, generate images, control smart home devices, track crypto portfolios. Skills install with a single command.

On February 14, 2026, Steinberger announced he is joining OpenAI and transitioning the project to an open-source foundation. The project will be governed by the community rather than a single person.

The tradeoff is complexity. Over 400,000 lines of TypeScript across 52+ modules takes weeks or months to fully understand. Startup takes ~6 seconds. Memory usage hovers around 1.5GB. Karpathy put it bluntly: he is "a bit sus'd" to give his private data and keys to what he called a 400K-line vibe-coded monster that is being actively attacked at scale.

Best for: People who want the most complete AI agent platform available. People who value a large ecosystem and community support. People who do not need to understand every line of code.

IronClaw: security first, written in Rust

Repo: github.com/nearai/ironclaw
Language: Rust
Database: PostgreSQL + pgvector
Philosophy: Privacy-first, defense-in-depth, zero trust

IronClaw is what happens when security researchers look at the agent ecosystem and decide to build it right. It is a direct answer to Karpathy's concerns about exposed instances, RCE vulnerabilities, and supply chain poisoning.

The security architecture has five layers:

  1. Network: TLS 1.3 encryption, SSRF protection, rate limiting per tool
  2. Request filtering: Endpoint allowlisting, prompt injection pattern detection, content sanitization
  3. Credential management: Secrets encrypted with AES-256-GCM, injected at host boundaries. Tools never see raw credentials. 22 regex patterns with Aho-Corasick optimization scan all requests and responses for credential leaks in real-time
  4. WASM sandbox: Untrusted tools run in isolated WebAssembly containers with capability-based permissions
  5. Docker isolation: Intensive tasks run in Docker containers with per-job resource limits
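Layer 3's leak scanning is the easiest to picture in code. A toy sketch, not IronClaw's implementation (IronClaw is Rust and runs 22 patterns through an Aho-Corasick automaton for speed; this illustrative version uses two plain regexes): every outbound payload is scanned and matches are redacted before anything leaves the host.

```python
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
]

def redact(payload: str) -> str:
    """Replace anything that looks like a credential before it leaves the host."""
    for pat in SECRET_PATTERNS:
        payload = pat.sub("[REDACTED]", payload)
    return payload

out = redact("key is sk-abcdefghijklmnopqrstuv, region us-east-1")
```

The point of running this at the host boundary, rather than inside tools, is that a compromised or prompt-injected tool cannot opt out of the scan.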

The numbers: 3.4MB binary, <10ms startup, ~7.8MB memory usage. Compare that to OpenClaw's 6-second startup and 1.5GB memory.

Best for: High-security environments. People handling sensitive data. Production deployments where prompt injection and data exfiltration are real threats.

PicoClaw: the $10 agent

Repo: github.com/sipeed/picoclaw
Language: Go (95% AI-bootstrapped)
Messaging: Telegram, Discord, QQ, DingTalk, LINE, WeChat
Philosophy: Run anywhere, on anything, for almost nothing

PicoClaw asks a straightforward question: what if your AI agent could run on a $10 piece of hardware?

Written in Go for cross-platform single-binary compilation, PicoClaw runs on <10MB of RAM and starts in under a second on 0.6GHz processors. It targets RISC-V, ARM, and x86 architectures, meaning it runs on a LicheeRV-Nano, a Raspberry Pi, an old phone, or a cloud server.

The wild part is how it was built: 95% of the codebase was written by AI agents. The development team used an AI-bootstrapped approach where agents drove the architectural migration and code optimization, with humans providing feedback and course corrections.

The personality system uses seven markdown files that define everything about the agent's behavior. Changing your agent's personality means editing a text file, not writing code.
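Markdown-as-personality reduces to concatenating files into the system prompt at startup. A hedged sketch (PicoClaw is Go, and the file names below are illustrative, not its actual seven):

```python
import tempfile
from pathlib import Path

def build_system_prompt(personality_dir: Path) -> str:
    """Concatenate every personality markdown file into one system prompt."""
    parts = []
    for md in sorted(personality_dir.glob("*.md")):
        parts.append(f"## {md.stem}\n{md.read_text().strip()}")
    return "\n\n".join(parts)

# Changing the agent's personality = editing a text file, not writing code.
tmp = Path(tempfile.mkdtemp())
(tmp / "identity.md").write_text("You are a terse assistant.")
(tmp / "tone.md").write_text("Reply in one sentence.")
prompt = build_system_prompt(tmp)
```

Nothing here needs recompilation, which matters on a $10 board where rebuilding a binary is the expensive operation.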

Best for: Edge computing and IoT deployments. Resource-constrained environments. People who want to experiment with AI agents on unusual hardware.

ZeroClaw: swap everything

Repo: github.com/zeroclaw-labs/zeroclaw
Language: Rust
Database: SQLite (embedded, no external deps)
Philosophy: Trait-driven architecture, zero vendor lock-in

ZeroClaw's tagline could be: "what if you could swap every component without changing code?"

Built in Rust with a trait-driven architecture, ZeroClaw defines 13 core traits, abstractions that decouple every subsystem from its implementation. The Provider trait abstracts 22+ LLM providers. The Channel trait normalizes messaging platforms. The Memory trait abstracts storage backends. The Tool trait enables plugin execution.

The practical result: you can swap your LLM provider from Claude to a local Ollama model, change your memory backend from SQLite to PostgreSQL, add a new messaging channel, or implement a custom security policy. All through configuration. No code changes.
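In Python terms, the trait-driven idea looks like this (a sketch of the pattern, not ZeroClaw's Rust code; the class and config names are illustrative): a Provider interface decouples the agent from any one LLM vendor, so the concrete backend is chosen from configuration.

```python
from typing import Protocol

class Provider(Protocol):
    """The 'trait': anything with a complete() method can serve as the LLM."""
    def complete(self, prompt: str) -> str: ...

class ClaudeProvider:
    def complete(self, prompt: str) -> str:
        return f"claude: {prompt}"

class OllamaProvider:
    def complete(self, prompt: str) -> str:
        return f"ollama: {prompt}"

PROVIDERS = {"claude": ClaudeProvider, "ollama": OllamaProvider}

def make_provider(config: dict) -> Provider:
    # Swapping providers is a config change, not a code change.
    return PROVIDERS[config["provider"]]()

reply = make_provider({"provider": "ollama"}).complete("hi")
```

Rust traits give the same decoupling with compile-time guarantees, which is why ZeroClaw can abstract 22+ providers behind one interface.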

The memory system is worth calling out. ZeroClaw implements hybrid vector + keyword search entirely within SQLite. No Pinecone, no Weaviate, no external vector database. Everything runs locally in a single file.
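Hybrid retrieval inside a single SQLite file can be sketched like this. Everything below is illustrative, not ZeroClaw's implementation: toy 2-d embeddings stored as JSON, cosine similarity computed in the application, and a flat keyword boost standing in for real keyword ranking.

```python
import json
import math
import sqlite3

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

conn = sqlite3.connect(":memory:")  # one local file (or in-memory) - no external vector DB
conn.execute("CREATE TABLE memory (text TEXT, embedding TEXT)")
facts = [("user prefers dark mode", [0.9, 0.1]),
         ("meeting moved to friday", [0.1, 0.9])]
conn.executemany("INSERT INTO memory VALUES (?, ?)",
                 [(t, json.dumps(e)) for t, e in facts])

def hybrid_search(keyword, query_vec):
    """Score each row by vector similarity, with a flat boost on keyword hits."""
    scored = []
    for text, emb in conn.execute("SELECT text, embedding FROM memory"):
        score = cosine(query_vec, json.loads(emb))
        if keyword.lower() in text.lower():
            score += 0.5
        scored.append((score, text))
    return max(scored)[1]

best = hybrid_search("friday", [0.2, 0.8])
```

The design choice is the same one ZeroClaw makes: accept a linear scan (fine at personal-agent scale) in exchange for zero infrastructure beyond a single file.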

Performance matches IronClaw: 3.4MB binary, <10ms startup, <5MB runtime memory.

Best for: Teams with diverse infrastructure needs. Anyone who wants to start with one provider and easily switch to another. People allergic to vendor lock-in.

The full picture

All six repos build on the same patterns: tool calling, messaging channels, the agent loop, memory as markdown, skills as extensions. The differences come down to what each team decided to optimize for.

| Framework | Lines of Code | Startup | Memory | Channels | Security Model |
| --- | --- | --- | --- | --- | --- |
| NanoClaw | ~500 | ~1s | ~50MB | WhatsApp | OS containers |
| Nanobot | ~4,000 | 0.8s | ~100MB | 9+ | Application-level |
| OpenClaw | ~400,000 | ~6s | ~1.5GB | 11+ | Application + Docker |
| IronClaw | Rust | <10ms | ~7.8MB | Configurable | 5-layer defense |
| PicoClaw | Go | <1s | <10MB | 6+ | Basic |
| ZeroClaw | Rust | <10ms | <5MB | Configurable | Trait-based |

In the next post, we take a deep dive into the security question. When you give an agent your API keys, email access, and filesystem permissions, how do you keep it from going off the rails? The answers vary wildly across these six repos, and they matter more than most people realize.
