Platform Architecture
Data Flow & Infrastructure Diagram
Clients
All clients connect to a shared authentication service
Login Application
Centralized auth service — all clients redirect here
api.iblai.app — Load Balancer
First point of connection in the backend · Distributes requests across App Scalable pool · Only external entry point via port 443
App Scalable Servers
64 GB+ RAM each (min. recommended) · Ubuntu (latest) · Docker · Horizontally scalable · N instances
App Scalable 1
App Scalable 2
App Scalable N
Audio / Video Servers
Separate server pool · Ubuntu (latest) · Docker · Horizontally scalable · N instances
A/V Server 1
A/V Server 2
A/V Server N
Agent Sandboxes
8 GB+ RAM each · Ubuntu (latest) · 1 per agent · Unlimited sandboxes
Agent A
Agent B
Agent N
App Single Server — Persistence Layer
64 GB+ RAM (min. recommended) · Ubuntu (latest) · Docker · Single instance · Read replicas supported
PostgreSQL
- App data
- User records
- Permissions
- Read replicas
MySQL
- Course data
- LMS records
- Read replicas
MongoDB
- Documents
- Logs
- Unstructured data
- Read replicas
Redis
- Sessions
- Cache
- Rate limiting
Object Storage
- File storage
- Media assets
- User uploads
- Backups
Shared Filesystem
- Configurations
- Static files
Complete Data Flow Sequence
1. User opens any ibl.ai client (Web, iOS, Android, Mac, Surface, API, MCP)
2. Client redirects to the centralized Login Application — user authenticates via SSO, password, or email OTP
3. Login Application returns Access Tokens + RBAC Permissions — user is redirected back to the client
4. Client makes authenticated API calls to api.iblai.app (Bearer token + RBAC)
5. api.iblai.app routes to Learn (ibl-edx-pro) or Data Manager (ibl-dm-pro)
6. Load Balancer distributes requests across the App Scalable servers (64 GB+ RAM each)
7. Data Manager opens an ASGI connection → routes to the appropriate Agent Sandbox (8 GB+ RAM, 1 per agent) — Sandboxes communicate with App Scalable servers exclusively via port 443
8. App Scalable servers connect to the App Single Server persistence layer — PostgreSQL (:5432), MySQL (:3306), MongoDB (:27017), Redis (:6379), Object Storage (:443), Shared Filesystem (:2049)
9. Internal tooling: LLM Analytics (prompt tracing, costs) + Audio/Video (calls, voice AI)
10. Responses return along the reverse path: Agent Sandbox → Data Manager → api.iblai.app → User
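Step 4 of the sequence can be sketched as a bearer-token request against the gateway. The path and token value below are illustrative placeholders, not the actual ibl.ai API surface:

```python
import urllib.request

API_BASE = "https://api.iblai.app"  # only external entry point, port 443


def build_authenticated_request(path: str, access_token: str) -> urllib.request.Request:
    """Attach the access token returned by the Login Application.

    The gateway validates the bearer token and enforces the RBAC
    permissions associated with it before routing the call onward.
    """
    return urllib.request.Request(
        API_BASE + path,
        headers={"Authorization": f"Bearer {access_token}"},
    )


# Hypothetical path, for illustration only
req = build_authenticated_request("/api/example/", "example-token")
```

The client never talks to App Scalable servers directly; every call enters through the load balancer on port 443.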
Scaling Model
| Component | Scaling | Details |
|---|---|---|
| Clients | Static | CDN / app stores / API / MCP |
| Login Application | Shared | Centralized auth for all clients |
| api.iblai.app | Gateway | Routes to Learn + Data Manager |
| App Scalable Servers | Horizontal ↔ | N instances, 64 GB+ RAM each (min. recommended), Ubuntu (latest), Docker |
| Audio / Video Servers | Horizontal ↔ | N instances, separate from App Scalable, Ubuntu (latest), Docker |
| Agent Sandboxes | Horizontal ↔ | Unlimited instances, 8 GB+ RAM (min. recommended), Ubuntu (latest), port 443 → App Scalable |
| App Single Server | Vertical ↕ | 1 instance, 64 GB+ RAM (min. recommended), Ubuntu (latest), Docker, read replicas |
| Load Balancer | Single | Only external entry point (:443) — all other servers firewalled from external access |
Agent Sandbox Security Model
Defense-in-depth approach — multiple overlapping protective layers, so compromising any single layer does not defeat the system
Isolated Execution
Each agent runs in its own sandboxed environment with strict process isolation. Agents cannot access other agents' memory, filesystem, or network connections. Container-level separation ensures that a compromised agent cannot affect neighboring workloads.
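Container-level separation of this kind is typically expressed as hardening flags on the container runtime. A minimal sketch, using generic Docker options rather than ibl.ai's actual sandbox configuration:

```python
def sandbox_run_args(agent_id: str, image: str) -> list[str]:
    """Build an illustrative hardened docker-run command for one agent."""
    return [
        "docker", "run", "--rm",
        "--name", f"agent-{agent_id}",  # one container per agent
        "--read-only",                  # immutable root filesystem
        "--pids-limit", "256",          # bound the process count
        "--cap-drop", "ALL",            # drop all Linux capabilities
        image,
    ]
```

Each agent gets its own container, so a compromised agent has no path to another agent's memory, filesystem, or network connections.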
Resource Limits
Memory, CPU, and execution time are capped per agent sandbox. This prevents any single agent from monopolizing shared infrastructure, mitigates denial-of-service from runaway processes, and ensures predictable performance across all concurrent agents.
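Per-sandbox caps like these can be sketched with POSIX resource limits, applied in the child process before the agent's code starts. The values are examples aligned with the 8 GB sandbox spec above, not ibl.ai's actual limits:

```python
import resource

MEMORY_LIMIT_BYTES = 8 * 1024**3  # 8 GB, matching the sandbox spec
CPU_SECONDS = 300                 # example CPU-time cap per execution


def apply_sandbox_limits() -> None:
    """Cap address space and CPU time for the current process.

    Intended to run inside the sandbox process (e.g. as a subprocess
    preexec_fn) so a runaway agent is killed by the kernel rather
    than starving neighboring workloads.
    """
    resource.setrlimit(resource.RLIMIT_AS, (MEMORY_LIMIT_BYTES, MEMORY_LIMIT_BYTES))
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECONDS, CPU_SECONDS))
```

Wall-clock timeouts and CPU-share throttling would be enforced separately by the supervisor.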
Endpoint Allowlisting
Outbound HTTP requests from agent sandboxes are restricted to pre-approved hosts and paths only. Agents cannot reach arbitrary external services — every network call must match an explicit allowlist, blocking data exfiltration and unauthorized API access.
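A host-plus-path-prefix allowlist check like the one described might look as follows; the allowlist entries are hypothetical, configured per deployment:

```python
from urllib.parse import urlsplit

# Hypothetical allowlist: (host, path prefix) pairs
ALLOWED = {
    ("api.iblai.app", "/api/"),
    ("llm-provider.example", "/v1/"),
}


def is_allowed(url: str) -> bool:
    """Permit an outbound call only if its host and path prefix
    match an explicit allowlist entry; everything else is blocked."""
    parts = urlsplit(url)
    return any(
        parts.hostname == host and parts.path.startswith(prefix)
        for host, prefix in ALLOWED
    )
```

Because the default is deny, an agent cannot exfiltrate data to an arbitrary endpoint even if its prompt or code is compromised.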
Credential Injection
Secrets and API keys are injected at the boundary — never exposed directly to agent code. The sandbox runtime attaches credentials to approved outbound requests at the last mile, so agents can use authenticated services without ever seeing or storing the raw tokens.
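Last-mile credential attachment can be sketched as a boundary function the sandbox runtime calls on approved outbound requests; the store and token below are placeholders:

```python
# Hypothetical secret store, readable only by the sandbox runtime,
# never by agent code
SECRET_STORE = {"api.iblai.app": "real-token-agent-never-sees"}


def attach_credentials(host: str, headers: dict) -> dict:
    """Inject the credential for an approved host at the boundary.

    The agent builds requests without secrets; the runtime adds the
    bearer token just before the request leaves the sandbox.
    """
    out = dict(headers)  # never mutate the agent-supplied headers
    token = SECRET_STORE.get(host)
    if token is not None:
        out["Authorization"] = f"Bearer {token}"
    return out
```

Since the raw token exists only on the runtime side of the boundary, a prompt-injected or buggy agent has nothing to leak.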
Leak Detection
All inbound and outbound traffic is scanned for secret exfiltration attempts. If an agent tries to embed credentials, tokens, or sensitive data in requests, responses, or tool outputs, the system blocks the transmission and flags it for review.
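A pattern-based exfiltration scan can be sketched with a few regexes over traffic payloads; production scanners use far larger pattern sets plus entropy checks:

```python
import re

# Example patterns for common secret shapes (illustrative subset)
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),  # JWT shape
]


def contains_secret(payload: str) -> bool:
    """Return True if any known secret pattern appears in the payload,
    in which case the transmission is blocked and flagged for review."""
    return any(p.search(payload) for p in SECRET_PATTERNS)
```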
Audit Logging
Every tool execution, network request, and resource access is recorded in a tamper-resistant audit log. Administrators can trace the full history of any agent's actions, enabling forensic analysis, compliance reporting, and real-time anomaly detection.
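One common way to make an audit log tamper-resistant is hash chaining, where each entry commits to the previous one; this is a generic sketch, not necessarily ibl.ai's mechanism:

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only log: each entry hashes the previous entry's digest,
    so any retroactive edit breaks the chain on verification."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._prev = "0" * 64  # genesis digest

    def record(self, agent: str, action: str, detail: str) -> None:
        entry = {"agent": agent, "action": action, "detail": detail,
                 "ts": time.time(), "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev = digest
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; False means an entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Recording tool executions, network requests, and resource accesses through such a log gives administrators a verifiable full history of every agent's actions.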
Prompt Injection Defense
External content ingested by agents passes through pattern-based injection detection, content sanitization, and policy-driven severity rules (Block / Warn / Review / Sanitize). Tool outputs are wrapped before re-entering the LLM context, preventing untrusted data from hijacking the agent's instructions.
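The policy-driven severity rules and output wrapping described above can be sketched as follows; the patterns and the wrapper tag are toy examples, and a real detector would be far more extensive:

```python
import re

# Toy detection rules mapped to the policy actions named above
RULES = [
    (re.compile(r"ignore (all )?previous instructions", re.I), "Block"),
    (re.compile(r"you are now", re.I), "Review"),
]


def screen_external_content(text: str) -> str:
    """Return the policy action (Block / Warn / Review / Sanitize)
    for ingested content, or "Allow" if nothing matches."""
    for pattern, action in RULES:
        if pattern.search(text):
            return action
    return "Allow"


def wrap_tool_output(text: str) -> str:
    """Delimit untrusted data before it re-enters the LLM context,
    so the model treats it as data rather than instructions."""
    return f"<untrusted_tool_output>\n{text}\n</untrusted_tool_output>"
```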