
Air-Gapped AI

Run ibl.ai's entire Agentic OS on air-gapped Ubuntu servers with NVIDIA GPUs. Local models via NIM, Ollama, or vLLM. Zero external API calls, complete data sovereignty for your institution. No need to choose build vs. buy — you get both.

Air-Gapped AI - Local Models, Maximum Control

Deploy ibl.ai's full Agentic OS on air-gapped infrastructure where no data ever leaves your network. Models run locally on Ubuntu servers with NVIDIA GPUs via NIM, Ollama, or vLLM.

ibl.ai's forward-deployed engineers install the entire stack on your hardware. You get the same AI agent capabilities as our cloud deployment—mentors, course generation, credentialing, analytics—with zero external API calls and complete data sovereignty.

What This Is

Air-Gapped AI is ibl.ai's on-premise deployment option. The entire Agentic OS—agent runtime, model serving, vector databases, orchestration layer—runs on Ubuntu servers inside your network with no internet connectivity required after initial setup.

Models are served locally through NVIDIA NIM, Ollama, or vLLM on your NVIDIA GPUs. You choose from models by NVIDIA, Meta (Llama), Google (Gemma), Microsoft (Phi), Mistral, and others. Every inference request stays within your security perimeter.

ibl.ai's forward-deployed engineers configure the stack, optimize model performance for your hardware, integrate with your campus systems, and transfer full operational knowledge to your team.

Every configuration file, every model weight, every integration adapter belongs to your institution.

Why Air-Gapped for Higher Education

Complete Data Sovereignty: No data leaves your network. No API calls to OpenAI, Anthropic, Google, or any external service. Student records, research data, and institutional knowledge stay within your security perimeter at all times.
FERPA Compliance by Architecture: Air-gapped deployment eliminates the compliance complexity of third-party data processing agreements. Student data never crosses a network boundary. Audit trails are local and complete.
Research Data Protection: Sensitive research data—grant proposals, unpublished findings, IRB-protected datasets—never leaves your infrastructure. Researchers can use AI agents without risking data exposure to external providers.
Model Choice and Flexibility: Run any open model that fits your GPUs. Switch between Llama, Gemma, Phi, Mistral, or NVIDIA NeMo models without changing your agent configurations. No vendor lock-in to any single model provider.
Same Capabilities as Cloud: Air-gapped deployment runs the full ibl.ai Agentic OS. AI mentors, course generation, credentialing, analytics, multi-channel deployment—every feature works identically to the cloud version.

Supported Models and Inference Engines

NVIDIA NIM: GPU-optimized inference microservices for maximum throughput on NVIDIA hardware. Supports Llama, Mistral, and NVIDIA NeMo models with TensorRT-LLM acceleration. Best for high-throughput production workloads.
Ollama: Lightweight model serving for rapid deployment and testing. Supports a broad catalog of open models with simple configuration. Ideal for development environments and smaller-scale deployments.
vLLM: High-performance inference engine with PagedAttention for efficient memory management. Supports continuous batching for maximum GPU utilization. Production-grade serving for large-scale deployments.
Model Catalog: Meta Llama (8B, 70B, 405B), Google Gemma (2B, 7B, 27B), Microsoft Phi (3.5, 4), Mistral (7B, 8x7B, Large), NVIDIA NeMo models, and any Hugging Face-compatible model. New models added as they release.
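Both vLLM and Ollama expose OpenAI-compatible HTTP endpoints, so agents can point at a local server instead of an external API by changing only the base URL. The sketch below illustrates this, assuming a hypothetical in-network server at `http://localhost:8000/v1` and an example model name; your actual host, port, and model will differ.

```python
import json
import urllib.request


def build_chat_payload(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def chat(prompt: str,
         base_url: str = "http://localhost:8000/v1",
         model: str = "meta-llama/Llama-3.1-8B-Instruct") -> str:
    """Send a chat request to a local OpenAI-compatible server (vLLM/Ollama).

    No external API is involved: base_url resolves only inside your network.
    """
    body = json.dumps(build_chat_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Because the request format matches the OpenAI API, existing agent code migrates to air-gapped serving with a configuration change rather than a rewrite.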

Infrastructure Requirements

Operating System: Ubuntu 22.04 LTS or later. Standard server installation with NVIDIA drivers and CUDA toolkit. No specialized OS or kernel modifications required.
GPU Requirements: NVIDIA GPUs with sufficient VRAM for your chosen models. A single A100 80GB runs Llama 70B. Smaller models like Phi-3.5 or Gemma 7B run on consumer-grade GPUs. We right-size recommendations to your workload.
Network: No internet connectivity required after initial setup. Internal network access to campus systems (LMS, SIS, IdP) for integrations. All model weights and dependencies are pre-loaded during installation.
Storage: SSD storage for model weights, vector databases, and agent state. Capacity depends on the number of models deployed. Typical installations require 500GB to 2TB of fast storage.
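A back-of-envelope way to right-size GPUs: weight memory is roughly parameter count times bytes per weight, plus overhead for KV cache and activations. The sketch below assumes a 20% overhead factor, which is illustrative, not ibl.ai's actual sizing tool; it also shows why a 70B model fits a single A100 80GB only when quantized.

```python
def estimate_vram_gb(params_billion: float,
                     bits_per_weight: int = 16,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate for serving a model.

    Weights take params * (bits / 8) GB; `overhead` (assumed 20%)
    covers KV cache and activations. A rule of thumb, not a guarantee.
    """
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb * overhead


# Llama 70B in fp16 needs multiple GPUs (~168 GB); 4-bit quantization
# (~42 GB) fits on a single A100 80GB.
fp16_gb = estimate_vram_gb(70, bits_per_weight=16)
int4_gb = estimate_vram_gb(70, bits_per_weight=4)
```

Real sizing also depends on context length, batch size, and the inference engine's memory manager, which is why the assessment phase measures against your actual workload.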

Security and Compliance

ITAR Compatible: Air-gapped deployment meets ITAR requirements for research institutions handling export-controlled data. No data transmission to foreign servers. Complete physical and logical isolation.
FedRAMP Aligned: On-premise deployment within your ATO boundary. All NIST 800-53 controls are addressable locally. No shared infrastructure, no multi-tenant risks, no cloud provider dependencies.
HIPAA Ready: For institutions with medical programs or health research. PHI never leaves your facility. Local model serving eliminates BAA requirements with external AI providers.
FERPA by Default: Student educational records stay on your infrastructure. No third-party data processing. Audit logs capture every agent interaction locally for compliance reporting.

Deployment Options

Single Server: Entire stack on one Ubuntu server with NVIDIA GPUs. Suitable for departments, research groups, or pilot programs. Simple to operate and maintain.
Multi-Node Cluster: Distributed deployment across multiple servers for higher throughput and redundancy. Kubernetes orchestration with Helm charts. Scales to institution-wide usage.
Hybrid (Air-Gapped + Cloud): Sensitive workloads on air-gapped servers, general-purpose agents on ibl.ai cloud. Consistent agent configurations across both environments. Migrate workloads between them as policies evolve.
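In a hybrid deployment, the key design decision is the routing policy: which workloads may touch the cloud and which must stay on-prem. A minimal sketch of such a policy, with hypothetical endpoint URLs and sensitivity tiers (your institution's actual policy and hostnames would differ):

```python
from enum import Enum


class Sensitivity(Enum):
    PUBLIC = "public"        # e.g. public course catalog Q&A
    INTERNAL = "internal"    # institutional but non-regulated data
    REGULATED = "regulated"  # FERPA / HIPAA / ITAR-scoped data


# Hypothetical endpoints; the air-gapped URL resolves only inside your network.
AIR_GAPPED_URL = "http://inference.campus.internal/v1"
CLOUD_URL = "https://cloud.example.ibl.ai/v1"


def route(sensitivity: Sensitivity) -> str:
    """Keep regulated and internal workloads on-prem; public ones may use cloud."""
    if sensitivity is Sensitivity.PUBLIC:
        return CLOUD_URL
    return AIR_GAPPED_URL
```

Because agent configurations are consistent across both environments, tightening the policy (for example, routing INTERNAL workloads to cloud during a pilot, then on-prem in production) is a one-line change rather than a migration project.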

What You Own

Complete Agentic OS installation on your hardware with all agent configurations and model settings documented
Local model weights for all deployed models—pre-downloaded and optimized for your GPU hardware
Inference engine configurations (NIM, Ollama, or vLLM) tuned for your specific hardware and workload
Campus system integration adapters (LMS, SIS, IdP) with full source code
Infrastructure as Code (Ansible/Helm) for repeatable deployments and disaster recovery
Operational runbooks covering model updates, GPU monitoring, backup procedures, and troubleshooting
Security documentation for your compliance team—architecture diagrams, data flow maps, control matrices

Engagement Model

Infrastructure Assessment (1 week): Evaluate your server hardware, GPU inventory, network topology, and integration requirements. Right-size model recommendations to your compute capacity.
Installation and Configuration (2-4 weeks): Forward-deployed engineers install the Agentic OS, configure inference engines, load model weights, build campus integrations, and validate the full stack in your environment.
Agent Development (2-3 weeks): Build your first set of AI agents—academic advisors, research assistants, administrative aids. Configure guardrails, knowledge bases, and tool integrations specific to your use cases.
Knowledge Transfer (1-2 weeks): Train your IT team on model management, agent configuration, GPU monitoring, and operational procedures. Your team operates independently after handoff.

Get Started

Hardware Assessment: Free 30-minute session to evaluate your existing GPU infrastructure and recommend a deployment configuration.
Proof of Concept: Deploy the Agentic OS on a single server with one or two agents to validate the approach before committing to full-scale deployment.
Full Deployment: Complete air-gapped installation with campus integrations, agent library, operational documentation, and knowledge transfer.

What our partners say about us

Chris Gabriel

Chris Gabriel | Google

Lorena Barba

Lorena Barba | George Washington University

Dr. Juana Mendenhall

Dr. Juana Mendenhall | Morehouse College

Julie Diop

Julie Diop | MIT

Adam Tetelman

Adam Tetelman | NVIDIA

Jason Dom

Jason Dom | American Public University System

Benjamin Breyer

Benjamin Breyer | Columbia University

Ken Fujiuchi

Ken Fujiuchi | SUNY

Erika Digirolamo

Erika Digirolamo | Monroe College

David Flaten

David Flaten | SUNY

David Vise

David Vise | Modern States Education Alliance

Linda Wood

Linda Wood | ARM Institute (U.S. Department of Defense)
