# Air-Gapped AI - Local Models, Maximum Control

> Source: https://ibl.ai/service/air-gapped-ai/higher-education

Deploy ibl.ai's full Agentic OS on air-gapped infrastructure where no data ever leaves your network. Models run locally on Ubuntu servers with NVIDIA GPUs via NIM, Ollama, or vLLM.

ibl.ai's forward-deployed engineers install the entire stack on your hardware. You get the same AI agent capabilities as our cloud deployment—mentors, course generation, credentialing, analytics—with zero external API calls and complete data sovereignty.

## What This Is

Air-Gapped AI is ibl.ai's on-premise deployment option. The entire Agentic OS—agent runtime, model serving, vector databases, orchestration layer—runs on Ubuntu servers inside your network with no internet connectivity required after initial setup.

Models are served locally through NVIDIA NIM, Ollama, or vLLM on your NVIDIA GPUs. You choose from models by NVIDIA, Meta (Llama), Google (Gemma), Microsoft (Phi), Mistral, and others. Every inference request stays within your security perimeter.

ibl.ai's forward-deployed engineers configure the stack, optimize model performance for your hardware, integrate with your campus systems, and transfer full operational knowledge to your team.

Every configuration file, every model weight, every integration adapter belongs to your institution.

## Why Air-Gapped for Higher Education

### Complete Data Sovereignty

No data leaves your network. No API calls to OpenAI, Anthropic, Google, or any external service. Student records, research data, and institutional knowledge stay within your security perimeter at all times.

### FERPA Compliance by Architecture

Air-gapped deployment eliminates the compliance complexity of third-party data processing agreements. Student data never crosses a network boundary. Audit trails are local and complete.

### Research Data Protection

Sensitive research data—grant proposals, unpublished findings, IRB-protected datasets—never leaves your infrastructure. Researchers can use AI agents without risking data exposure to external providers.

### Model Choice and Flexibility

Run any open model that fits your GPUs. Switch between Llama, Gemma, Phi, Mistral, or NVIDIA NeMo models without changing your agent configurations. No vendor lock-in to any single model provider.
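
Model switching without touching agent configurations is possible because vLLM, Ollama, and NVIDIA NIM can all expose an OpenAI-compatible HTTP API on the local network, so an agent only needs a base URL and a model name. The sketch below illustrates that idea; the endpoint paths, ports, and configuration shape are illustrative assumptions, not ibl.ai's actual schema.

```python
# Sketch: engine-agnostic model selection via OpenAI-compatible endpoints.
# The ports and model names below are illustrative defaults, not ibl.ai's
# actual configuration schema.

def agent_model_config(engine: str, model: str) -> dict:
    """Return the connection settings an agent needs for a local engine."""
    # vLLM, Ollama, and NVIDIA NIM can all serve an OpenAI-compatible /v1
    # API on localhost, so swapping engines or models changes only this
    # config, not the agent logic.
    base_urls = {
        "vllm": "http://localhost:8000/v1",
        "ollama": "http://localhost:11434/v1",
        "nim": "http://localhost:8080/v1",
    }
    return {"base_url": base_urls[engine], "model": model}

llama = agent_model_config("vllm", "meta-llama/Llama-3.1-70B-Instruct")
gemma = agent_model_config("ollama", "gemma2:27b")
print(llama["base_url"])  # http://localhost:8000/v1
```

Because every request targets a localhost URL, swapping Llama for Gemma (or vLLM for Ollama) is a one-line configuration change and no traffic crosses the security perimeter.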

### Same Capabilities as Cloud

Air-gapped deployment runs the full ibl.ai Agentic OS. AI mentors, course generation, credentialing, analytics, multi-channel deployment—every feature works identically to the cloud version.

## Supported Models and Inference Engines

### NVIDIA NIM

GPU-optimized inference microservices for maximum throughput on NVIDIA hardware. Supports Llama, Mistral, and NVIDIA NeMo models with TensorRT-LLM acceleration. Best for high-throughput production workloads.

### Ollama

Lightweight model serving for rapid deployment and testing. Supports a broad catalog of open models with simple configuration. Ideal for development environments and smaller-scale deployments.

### vLLM

High-performance inference engine with PagedAttention for efficient memory management. Supports continuous batching for maximum GPU utilization. Production-grade serving for large-scale deployments.

### Model Catalog

Meta Llama (8B, 70B, 405B), Google Gemma (2B, 7B, 27B), Microsoft Phi (3.5, 4), Mistral (7B, 8x7B, Large), NVIDIA NeMo models, and any Hugging Face-compatible model. New models are added as they are released.

## Infrastructure Requirements

### Operating System

Ubuntu 22.04 LTS or later. Standard server installation with NVIDIA drivers and CUDA toolkit. No specialized OS or kernel modifications required.

### GPU Requirements

NVIDIA GPUs with sufficient VRAM for your chosen models. A single A100 80GB serves Llama 70B with quantized (e.g., 4-bit) weights; full-precision FP16 serving of 70B-class models requires multiple GPUs. Smaller models like Phi-3.5 or Gemma 7B run on consumer-grade GPUs. We right-size recommendations to your workload.
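
As a rough sizing heuristic (not an official ibl.ai sizing tool), serving VRAM is approximately parameters × bytes per parameter, plus headroom for KV cache and activations. The 20% overhead factor below is an assumption for illustration.

```python
# Back-of-the-envelope VRAM sizing for local model serving (a rough
# heuristic, not an official ibl.ai sizing tool).

def est_vram_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Estimate serving VRAM in GB: weights (params * bits/8) plus ~20%
    headroom for KV cache and activations (assumed overhead factor)."""
    weight_gb = params_b * bits / 8  # 1B params at 8 bits is about 1 GB
    return round(weight_gb * overhead, 1)

# Llama 70B: FP16 needs multiple GPUs; 4-bit fits a single A100 80GB.
print(est_vram_gb(70, 16))  # 168.0
print(est_vram_gb(70, 4))   # 42.0
print(est_vram_gb(3.8, 16)) # Phi-3.5-mini class model on a consumer GPU
```

Numbers like these are why model choice and quantization level drive the hardware recommendation more than any other factor.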

### Network

No internet connectivity required after initial setup. Internal network access to campus systems (LMS, SIS, IdP) for integrations. All model weights and dependencies are pre-loaded during installation.

### Storage

SSD storage for model weights, vector databases, and agent state. Capacity depends on the number of models deployed. Typical installations require 500GB to 2TB of fast storage.
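
On-disk weight storage scales the same way, without the runtime overhead: parameters × bytes per parameter, summed over every model you keep loaded. A minimal sketch, assuming an illustrative model mix (vector databases, agent state, and logs add to this):

```python
# Rough disk sizing for model weights (illustrative; actual capacity
# planning must also cover vector databases, agent state, and logs).

def weights_disk_gb(models: list[tuple[float, int]]) -> float:
    """Sum on-disk weight sizes for (params_in_billions, bits) pairs."""
    return round(sum(params_b * bits / 8 for params_b, bits in models), 1)

# Hypothetical catalog: Llama 70B + 8B (FP16), Gemma 27B (8-bit),
# Mistral 7B (4-bit).
catalog = [(70, 16), (8, 16), (27, 8), (7, 4)]
print(weights_disk_gb(catalog))  # 186.5 -- within the 500GB-2TB range
```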

## Security and Compliance

### ITAR Compatible

Air-gapped deployment meets ITAR requirements for research institutions handling export-controlled data. No data transmission to foreign servers. Complete physical and logical isolation.

### FedRAMP Aligned

On-premise deployment within your ATO boundary. All NIST 800-53 controls are addressable locally. No shared infrastructure, no multi-tenant risks, no cloud provider dependencies.

### HIPAA Ready

For institutions with medical programs or health research. PHI never leaves your facility. Local model serving eliminates BAA requirements with external AI providers.

### FERPA by Default

Student educational records stay on your infrastructure. No third-party data processing. Audit logs capture every agent interaction locally for compliance reporting.

## Deployment Options

### Single Server

Entire stack on one Ubuntu server with NVIDIA GPUs. Suitable for departments, research groups, or pilot programs. Simple to operate and maintain.

### Multi-Node Cluster

Distributed deployment across multiple servers for higher throughput and redundancy. Kubernetes orchestration with Helm charts. Scales to institution-wide usage.

### Hybrid (Air-Gapped + Cloud)

Sensitive workloads on air-gapped servers, general-purpose agents on ibl.ai cloud. Consistent agent configurations across both environments. Migrate workloads between them as policies evolve.
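
Because agent configurations are consistent across both environments, hybrid routing can be as simple as choosing the serving endpoint by data classification. A minimal sketch, where the endpoint URLs and classification tags are hypothetical placeholders, not ibl.ai's actual API:

```python
# Sketch of policy-based routing for a hybrid deployment: requests tagged
# as sensitive stay on the air-gapped endpoint, everything else may use
# cloud. Endpoint URLs and tags below are hypothetical.

AIRGAPPED = "http://airgap.campus.internal:8000/v1"  # placeholder URL
CLOUD = "https://cloud.example.edu/v1"               # placeholder URL

def route(data_classification: str) -> str:
    """Return the serving endpoint for a request's data classification."""
    sensitive = {"student-records", "phi", "export-controlled", "irb"}
    return AIRGAPPED if data_classification in sensitive else CLOUD

print(route("student-records"))  # air-gapped endpoint
print(route("public-faq"))       # cloud endpoint
```

As policies evolve, migrating a workload between environments is a matter of reclassifying it, not rebuilding the agent.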

## What You Own

- Complete Agentic OS installation on your hardware with all agent configurations and model settings documented
- Local model weights for all deployed models, pre-downloaded and optimized for your GPU hardware
- Inference engine configurations (NIM, Ollama, or vLLM) tuned for your specific hardware and workload
- Campus system integration adapters (LMS, SIS, IdP) with full source code
- Infrastructure as Code (Ansible/Helm) for repeatable deployments and disaster recovery
- Operational runbooks covering model updates, GPU monitoring, backup procedures, and troubleshooting
- Security documentation for your compliance team: architecture diagrams, data flow maps, control matrices

## Engagement Model

### Infrastructure Assessment (1 week):

Evaluate your server hardware, GPU inventory, network topology, and integration requirements. Right-size model recommendations to your compute capacity.

### Installation and Configuration (2-4 weeks):

Forward-deployed engineers install the Agentic OS, configure inference engines, load model weights, build campus integrations, and validate the full stack in your environment.

### Agent Development (2-3 weeks):

Build your first set of AI agents: academic advisors, research assistants, administrative assistants. Configure guardrails, knowledge bases, and tool integrations specific to your use cases.

### Knowledge Transfer (1-2 weeks):

Train your IT team on model management, agent configuration, GPU monitoring, and operational procedures. Your team operates independently after handoff.

## Get Started

### Hardware Assessment:

Free 30-minute session to evaluate your existing GPU infrastructure and recommend a deployment configuration.

### Proof of Concept:

Deploy the Agentic OS on a single server with one or two agents to validate the approach before committing to full-scale deployment.

### Full Deployment:

Complete air-gapped installation with campus integrations, agent library, operational documentation, and knowledge transfer.

---

*[View on ibl.ai](https://ibl.ai/service/air-gapped-ai/higher-education)*