
Platform Architecture

Data Flow & Infrastructure Diagram

Client Layer

Clients

All clients connect to a shared authentication service

Web
iOS
Android
Mac
Surface
API
MCP
Authentication Layer

Login Application

Centralized auth service — all clients redirect here

SSO (SAML / OIDC) · Username / Password · Email OTP · Other Providers
Returns Access Tokens + RBAC Permissions (Role-Based Access Control)
Authenticated API Calls (Bearer Token + RBAC)
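As a minimal sketch, every client call carries the returned access token in a standard Bearer header; the helper below is hypothetical, but the header shape is the usual one for a Bearer-token API (the RBAC decision itself happens server-side against the token's claims).

```python
def auth_headers(access_token: str) -> dict:
    """Build the standard Bearer Authorization header for authenticated calls.
    RBAC permissions are evaluated server-side from claims tied to this token."""
    return {"Authorization": f"Bearer {access_token}"}

# Example (hypothetical usage):
# requests.get("https://api.iblai.app", headers=auth_headers(token))
```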
Back-End Infrastructure

api.iblai.app — Load Balancer

First point of connection in the backend · Distributes requests across App Scalable pool · Only external entry point via port 443

Firewall: Only the Load Balancer is exposed to external traffic (port 443). All other servers are closed off from external access.
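The distribution step can be sketched as plain round-robin over the pool; the instance names below are illustrative placeholders, and a production load balancer would typically add health checks and connection-aware strategies on top.

```python
from itertools import cycle

# Round-robin sketch of the Load Balancer's distribution step.
# Instance names are illustrative, not actual host names.
POOL = ["app-scalable-1", "app-scalable-2", "app-scalable-3"]

def backend_iter(pool=POOL):
    """Rotate through backends, one request at a time."""
    return cycle(pool)
```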

App Scalable Servers

64 GB+ RAM each (min. recommended) · Ubuntu (latest) · Docker · Horizontally scalable · N instances

App Scalable 1

Users · Agents · Content · Analytics · Notifications · Multitenancy

App Scalable 2

Users · Agents · Content · Analytics · Notifications · Multitenancy

App Scalable N

Users · Agents · Content · Analytics · Notifications · Multitenancy
Each App Scalable Server connects to:
All A/V Servers · All Agent Sandboxes (:443) · App Single Server
App Scalable ↔ Audio / Video Servers

Audio / Video Servers

Separate server pool · Ubuntu (latest) · Docker · Horizontally scalable · N instances

A/V Server 1

Real-Time Audio · Real-Time Video

A/V Server 2

Real-Time Audio · Real-Time Video

A/V Server N

Real-Time Audio · Real-Time Video
App Scalable ↔ Agent Sandboxes · Port 443

Agent Sandboxes

8 GB+ RAM each · Ubuntu (latest) · 1 per agent · Unlimited sandboxes

Agent A

Isolated Execution · Resource Limits · Endpoint Allowlisting · Credential Injection · Leak Detection · Audit Logging

Agent B

Isolated Execution · Resource Limits · Endpoint Allowlisting · Credential Injection · Leak Detection · Audit Logging

Agent N

Isolated Execution · Resource Limits · Endpoint Allowlisting · Credential Injection · Leak Detection · Audit Logging
Scales to any number of concurrent agent sandboxes on demand
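The one-sandbox-per-agent model can be sketched as a lazy allocator: a sandbox is created the first time an agent needs one and reused afterward. The class and field names below are illustrative; in production each record would correspond to an actual container.

```python
# Hypothetical on-demand allocator: one isolated sandbox per agent,
# created lazily, with no upper bound on the number of sandboxes.
class SandboxPool:
    def __init__(self, ram_gb: int = 8):
        self.ram_gb = ram_gb      # min. recommended RAM per sandbox
        self._sandboxes = {}      # agent_id -> sandbox record

    def get(self, agent_id: str) -> dict:
        """Return the agent's sandbox, creating it on first use (1 per agent)."""
        if agent_id not in self._sandboxes:
            self._sandboxes[agent_id] = {
                "agent": agent_id,
                "ram_gb": self.ram_gb,
                "port": 443,      # sole channel back to App Scalable servers
            }
        return self._sandboxes[agent_id]

    def count(self) -> int:
        return len(self._sandboxes)
```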
App Scalable → App Single Server (direct)

App Single Server — Persistence Layer

64 GB+ RAM (min. recommended) · Ubuntu (latest) · Docker · Single instance · Read replicas supported

PostgreSQL

Port 5432
  • App data
  • User records
  • Permissions
  • Read replicas
  • pgVector embeddings

MySQL

Port 3306
  • Course data
  • LMS records
  • Read replicas

MongoDB

Port 27017
  • Documents
  • Logs
  • Unstructured data
  • Read replicas

Redis

Port 6379
  • Sessions
  • Cache
  • Rate limiting
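The rate-limiting role can be sketched as a fixed-window counter. A plain dict stands in for Redis here; with Redis itself this is typically an `INCR` plus `EXPIRE` on the same key. Keys, limits, and the window size are illustrative.

```python
import time

# Fixed-window rate limiter sketch (in-memory stand-in for Redis).
_counters = {}

def allow(key: str, limit: int, window_s: int = 60, now=None) -> bool:
    """Return True while `key` has made no more than `limit` calls this window."""
    now = time.time() if now is None else now
    bucket = (key, int(now // window_s))   # one counter per key per window
    _counters[bucket] = _counters.get(bucket, 0) + 1
    return _counters[bucket] <= limit
```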

Object Storage

Port 443
e.g. AWS S3
  • File storage
  • Media assets
  • User uploads
  • Backups

Shared Filesystem

Port 2049 (NFS)
  • Configurations
  • Static files
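For quick reference, the persistence-layer services and ports above can be collected into one connection map. Hostnames are deliberately omitted since they depend on the deployment; the ports are the ones documented for each service.

```python
# The persistence layer as a single service -> port map.
PERSISTENCE = {
    "postgresql":        5432,   # app data, user records, permissions, pgVector embeddings
    "mysql":             3306,   # course data, LMS records
    "mongodb":          27017,   # documents, logs, unstructured data
    "redis":             6379,   # sessions, cache, rate limiting
    "object_storage":     443,   # e.g. AWS S3: files, media, uploads, backups
    "shared_filesystem": 2049,   # NFS: configurations, static files
}
```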
Data Flow

Complete Data Flow Sequence

  1. User opens any ibl.ai client (Web, iOS, Android, Mac, Surface, API, MCP)
  2. Client redirects to the centralized Login Application — user authenticates via SSO, password, or email OTP
  3. Login Application returns Access Tokens + RBAC Permissions — user is redirected back to the client
  4. Client makes authenticated API calls to api.iblai.app (Bearer token + RBAC)
  5. api.iblai.app routes to Learn (ibl-edx-pro) or Data Manager (ibl-dm-pro)
  6. Load Balancer distributes requests across App Scalable servers (64 GB+ RAM each)
  7. Data Manager opens an ASGI connection → routes to the appropriate Agent Sandbox (8 GB+ RAM, 1 per agent) — Sandboxes communicate with App Scalable servers exclusively via port 443
  8. App Scalable servers connect to the App Single Server persistence layer — PostgreSQL (:5432), MySQL (:3306), MongoDB (:27017), Redis (:6379), Object Storage (:443), Shared Filesystem (:2049)
  9. Internal tooling: LLM Analytics (prompt tracing, costs) + Audio/Video (calls, voice AI)
  10. Response returns: Agent Sandbox → Data Manager → api.iblai.app → User
Scaling

Scaling Model

Component             | Scaling      | Details
Clients               | Static       | CDN / app stores / API / MCP
Login Application     | Shared       | Centralized auth for all clients
api.iblai.app         | Gateway      | Routes to Learn + Data Manager
App Scalable Servers  | Horizontal ↔ | N instances, 64 GB+ RAM each (min. recommended), Ubuntu (latest), Docker
Audio / Video Servers | Horizontal ↔ | N instances, separate from App Scalable, Ubuntu (latest), Docker
Agent Sandboxes       | Horizontal ↔ | Unlimited instances, 8 GB+ RAM (min. recommended), Ubuntu (latest), port 443 → App Scalable
App Single Server     | Vertical ↕   | 1 instance, 64 GB+ RAM (min. recommended), Ubuntu (latest), Docker, read replicas
Load Balancer         | Single       | Only external entry point (:443) — all other servers firewalled from external access
Security

Agent Sandbox Security Model

Defense-in-depth approach — multiple protective layers ensure no single point of compromise

Isolated Execution

Each agent runs in its own sandboxed environment with strict process isolation. Agents cannot access other agents' memory, filesystem, or network connections. Container-level separation ensures that a compromised agent cannot affect neighboring workloads.
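A hypothetical sandbox launcher might enforce this container-level separation with standard Docker flags. The image name, network name, and numeric values below are illustrative assumptions, not ibl.ai's actual configuration; only the flags themselves are real Docker options.

```python
def sandbox_run_args(agent_id: str, image: str = "agent-sandbox:latest") -> list:
    """Assemble `docker run` flags enforcing per-agent isolation (sketch)."""
    return [
        "docker", "run", "--rm",
        "--name", f"sandbox-{agent_id}",
        "--network", "sandbox-net",  # internal-only network; egress via allowlist proxy
        "--read-only",               # immutable root filesystem
        "--pids-limit", "256",       # cap process count inside the sandbox
        "--memory", "8g",            # matches the 8 GB+ sandbox sizing above
        "--cpus", "2",
        image,
    ]
```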

Resource Limits

Memory, CPU, and execution time are capped per agent sandbox. This prevents any single agent from monopolizing shared infrastructure, mitigates denial-of-service from runaway processes, and ensures predictable performance across all concurrent agents.
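Inside a sandbox process, caps like these can also be applied in-process with POSIX rlimits before agent code runs; the numbers below are illustrative, not ibl.ai's actual limits.

```python
import resource

# Illustrative per-sandbox caps (not actual production values).
LIMITS = {
    "cpu_seconds": 300,            # hard wall on CPU time
    "memory_bytes": 8 * 1024**3,   # 8 GB, matching the sandbox sizing
    "open_files": 1024,
}

def apply_limits(limits=LIMITS):
    """Cap CPU time, address space, and open files for the current process."""
    resource.setrlimit(resource.RLIMIT_CPU, (limits["cpu_seconds"],) * 2)
    resource.setrlimit(resource.RLIMIT_AS, (limits["memory_bytes"],) * 2)
    resource.setrlimit(resource.RLIMIT_NOFILE, (limits["open_files"],) * 2)
```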

Endpoint Allowlisting

Outbound HTTP requests from agent sandboxes are restricted to pre-approved hosts and paths only. Agents cannot reach arbitrary external services — every network call must match an explicit allowlist, blocking data exfiltration and unauthorized API access.
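The allowlist check can be sketched as a host plus path-prefix match; the hosts and paths below are illustrative entries, not ibl.ai's actual allowlist.

```python
from urllib.parse import urlsplit

# Example allowlist: approved host -> allowed path prefixes (illustrative).
ALLOWLIST = {
    "api.openai.com": ["/v1/"],
    "api.iblai.app":  ["/"],
}

def is_allowed(url: str) -> bool:
    """Permit an outbound request only if its host and path prefix are approved."""
    parts = urlsplit(url)
    prefixes = ALLOWLIST.get(parts.hostname, [])
    return any(parts.path.startswith(p) for p in prefixes)
```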

Credential Injection

Secrets and API keys are injected at the boundary — never exposed directly to agent code. The sandbox runtime attaches credentials to approved outbound requests at the last mile, so agents can use authenticated services without ever seeing or storing the raw tokens.
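Last-mile injection can be sketched as a runtime hook that holds the secrets and attaches them only to approved hosts, so agent code never touches raw tokens. The vault contents and host names are illustrative.

```python
# Hypothetical secret store, held by the sandbox runtime, never by agent code.
_VAULT = {"api.openai.com": "sk-example-secret"}

def inject_credentials(host: str, headers: dict) -> dict:
    """Return a copy of `headers` with the host's credential attached, if approved."""
    out = dict(headers)
    token = _VAULT.get(host)
    if token:
        out["Authorization"] = f"Bearer {token}"
    return out
```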

Leak Detection

All inbound and outbound traffic is scanned for secret exfiltration attempts. If an agent tries to embed credentials, tokens, or sensitive data in requests, responses, or tool outputs, the system blocks the transmission and flags it for review.
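A leak scanner can be sketched as pattern matching for secret-shaped strings in payloads. The two patterns below are common public formats (AWS access key IDs, bearer-style tokens), standing in for whatever rules a real deployment would use.

```python
import re

# Illustrative secret-shape patterns, not an exhaustive detection ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key ID format
    re.compile(r"Bearer\s+[A-Za-z0-9._-]{20,}"),  # bearer-style token
]

def leaks_secret(payload: str) -> bool:
    """True if the payload appears to contain a credential."""
    return any(p.search(payload) for p in SECRET_PATTERNS)
```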

Audit Logging

Every tool execution, network request, and resource access is recorded in a tamper-resistant audit log. Administrators can trace the full history of any agent's actions, enabling forensic analysis, compliance reporting, and real-time anomaly detection.
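One common way to make a log tamper-evident is hash chaining: each entry's hash covers the previous entry's hash, so editing history breaks the chain. The sketch below illustrates that technique with hypothetical field names; it is not ibl.ai's actual log format.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first entry

class AuditLog:
    """Append-only, hash-chained audit log sketch."""
    def __init__(self):
        self.entries = []
        self._last_hash = GENESIS

    def record(self, agent: str, action: str) -> dict:
        entry = {"agent": agent, "action": action, "prev": self._last_hash}
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks it."""
        prev = GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("agent", "action", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```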

Prompt Injection Defense

External content ingested by agents passes through pattern-based injection detection, content sanitization, and policy-driven severity rules (Block / Warn / Review / Sanitize). Tool outputs are wrapped before re-entering the LLM context, preventing untrusted data from hijacking the agent's instructions.
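The pattern-based screening step can be sketched as rules mapped to severity actions, with the strictest triggered action winning. The patterns and the severity assignments below are illustrative stand-ins for policy-driven rules, not the platform's actual detection set.

```python
import re

# Illustrative injection patterns -> severity actions (subset of Block/Warn/Review/Sanitize).
RULES = [
    (re.compile(r"ignore (all )?previous instructions", re.I), "Block"),
    (re.compile(r"system prompt", re.I), "Review"),
]

def screen(content: str) -> str:
    """Return the strictest action triggered by ingested content, or 'Allow'."""
    order = {"Block": 0, "Review": 1, "Allow": 2}
    hits = [action for pattern, action in RULES if pattern.search(content)] or ["Allow"]
    return min(hits, key=order.__getitem__)
```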