# Air-Gapped AI - Local Models, Maximum Control for Enterprise

> Source: https://ibl.ai/service/air-gapped-ai/corporate

Run ibl.ai's entire Agentic OS on air-gapped Ubuntu servers with NVIDIA GPUs. Local models via NIM, Ollama, or vLLM. Zero external API calls and complete data sovereignty for your organization.

ibl.ai's forward-deployed engineers install the entire stack on your hardware. You get the same AI agent capabilities as our cloud deployment—workforce training, compliance automation, skills development—with zero external API calls and complete data sovereignty.

## What This Is

Air-Gapped AI is ibl.ai's on-premise deployment option. The entire Agentic OS—agent runtime, model serving, vector databases, orchestration layer—runs on Ubuntu servers inside your network with no internet connectivity required after initial setup.

Models are served locally through NVIDIA NIM, Ollama, or vLLM on your NVIDIA GPUs. You choose from models by NVIDIA, Meta (Llama), Google (Gemma), Microsoft (Phi), Mistral, and others. Every inference request stays within your security perimeter.

ibl.ai's forward-deployed engineers configure the stack, optimize model performance for your hardware, integrate with your enterprise systems, and transfer full operational knowledge to your team.

Every configuration file, every model weight, every integration adapter belongs to your organization.

## Why Air-Gapped for Enterprise

### Complete Data Sovereignty

No data leaves your network. No API calls to OpenAI, Anthropic, Google, or any external service. Trade secrets, employee data, and proprietary knowledge stay within your security perimeter at all times.

### Regulatory Compliance by Architecture

Air-gapped deployment eliminates the compliance complexity of third-party data processing. SOC 2, SOX, HIPAA, and GDPR obligations are simplified when data never crosses a network boundary.

### Intellectual Property Protection

Proprietary training materials, competitive intelligence, and internal processes never leave your infrastructure. Employees use AI agents without risking IP exposure to external providers.

### Model Choice and Flexibility

Run any open model that fits your GPUs. Switch between Llama, Gemma, Phi, Mistral, or NVIDIA NeMo models without changing agent configurations. No vendor lock-in to any single model provider.

### Same Capabilities as Cloud

Air-gapped deployment runs the full ibl.ai Agentic OS. AI mentors, course generation, compliance training, analytics, multi-channel deployment—every feature works identically to the cloud version.

## Supported Models and Inference Engines

### NVIDIA NIM

GPU-optimized inference microservices for maximum throughput on NVIDIA hardware. Supports Llama, Mistral, and NVIDIA NeMo models with TensorRT-LLM acceleration. Best for high-throughput production workloads.

### Ollama

Lightweight model serving for rapid deployment and testing. Supports a broad catalog of open models with simple configuration. Ideal for development environments and smaller-scale deployments.

### vLLM

High-performance inference engine with PagedAttention for efficient memory management. Supports continuous batching for maximum GPU utilization. Production-grade serving for large-scale deployments.
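vLLM's PagedAttention manages the KV cache in fixed-size blocks drawn from a shared pool, much like virtual-memory pages, which is what lets continuous batching pack many sequences into GPU memory without fragmentation. A toy sketch of the block-table idea (block and pool sizes are arbitrary, and real vLLM operates on GPU tensors, not Python lists):

```python
class PagedKVCache:
    """Toy model of paged KV-cache allocation: fixed-size blocks from a shared pool."""

    def __init__(self, num_blocks: int, block_size: int):
        self.block_size = block_size
        self.free = list(range(num_blocks))        # pool of physical block ids
        self.tables: dict[str, list[int]] = {}     # sequence id -> its block table
        self.lengths: dict[str, int] = {}          # tokens stored per sequence

    def append_token(self, seq_id: str) -> None:
        """Grow a sequence by one token, grabbing a new block only on a block boundary."""
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:               # current block full (or first token)
            if not self.free:
                raise MemoryError("KV cache exhausted")
            self.tables.setdefault(seq_id, []).append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def release(self, seq_id: str) -> None:
        """A finished sequence returns its blocks to the shared pool immediately."""
        self.free.extend(self.tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)
```

Because blocks return to the pool the moment a sequence finishes, short and long requests can share the same GPU memory, which is the core of vLLM's high utilization.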

### Model Catalog

Meta Llama (8B, 70B, 405B), Google Gemma (2B, 7B, 27B), Microsoft Phi (3.5, 4), Mistral (7B, 8x7B, Large), NVIDIA NeMo models, and any Hugging Face-compatible model. New models are added as they are released.
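All three inference engines can expose an OpenAI-compatible HTTP endpoint on the local network, so agents can switch engines or models by changing only a base URL and model name. A minimal sketch of building such a request against a local server (the port and model name below are illustrative assumptions, not a fixed configuration):

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completion request for a local, air-gapped endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Same request shape whether the engine is NIM, Ollama, or vLLM;
# only the base URL and model name change (example values):
req = build_chat_request("http://localhost:11434", "llama3:70b", "Summarize our PTO policy.")
```

Since the endpoint lives on the internal network, no request ever crosses the security perimeter.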

## Infrastructure Requirements

### Operating System

Ubuntu 22.04 LTS or later. Standard server installation with NVIDIA drivers and CUDA toolkit. No specialized OS or kernel modifications required.

### GPU Requirements

NVIDIA GPUs with sufficient VRAM for your chosen models. A single A100 80GB runs a quantized Llama 70B. Smaller models like Phi-3.5 or Gemma 7B run on consumer-grade GPUs. We right-size recommendations to your workload.
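One way to sanity-check right-sizing is a back-of-envelope VRAM estimate from parameter count and quantization width, plus headroom for KV cache and activations. A rough sketch (the 20% overhead factor is an assumed rule of thumb, not a measured figure):

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 0.2) -> float:
    """Rough VRAM estimate: weight bytes plus a fractional allowance for KV cache/activations."""
    weight_gb = params_billion * bits_per_weight / 8   # e.g. 70B at 16-bit -> 140 GB of weights
    return weight_gb * (1 + overhead)                  # assumed 20% headroom by default

# Llama 70B in fp16 exceeds a single A100 80GB...
print(round(estimate_vram_gb(70, 16)))  # 168
# ...but a 4-bit quantized build fits comfortably:
print(round(estimate_vram_gb(70, 4)))   # 42
```

Real requirements also depend on context length and batch size, which is why the assessment phase measures against your actual workload.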

### Network

No internet connectivity required after initial setup. Internal network access to enterprise systems (HRIS, LMS, IdP) for integrations. All model weights and dependencies are pre-loaded during installation.
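Because the servers stay offline after setup, model weights and dependencies arrive on physical media, and it is worth verifying them against a hash manifest before serving. A minimal integrity-check sketch (the manifest shape and file names are illustrative assumptions):

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB weight files never load fully into RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest: dict[str, str], root: Path) -> list[str]:
    """Return the relative paths whose on-disk hash does not match the manifest."""
    return [rel for rel, digest in manifest.items()
            if sha256_file(root / rel) != digest]
```

Running such a check after transfer, and again before each model update, gives the compliance team an auditable record that the served weights match what was approved.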

### Storage

SSD storage for model weights, vector databases, and agent state. Capacity depends on the number of models deployed. Typical installations require 500GB to 2TB of fast storage.

## Security and Compliance

### SOC 2 / SOX Ready

On-premise deployment with complete audit trails. All data processing happens within your security boundary. No third-party subprocessors for AI inference.

### HIPAA Compliant

For organizations handling protected health information. PHI never leaves your facility. Local model serving eliminates BAA requirements with external AI providers.

### GDPR Aligned

Data residency requirements are met by default when all processing happens on your hardware in your jurisdiction. No cross-border data transfers for AI operations.

### ITAR Compatible

For defense contractors and organizations handling export-controlled data. No data transmission to external servers. Complete physical and logical isolation.

## Deployment Options

### Single Server

Entire stack on one Ubuntu server with NVIDIA GPUs. Suitable for departments, business units, or pilot programs. Simple to operate and maintain.

### Multi-Node Cluster

Distributed deployment across multiple servers for higher throughput and redundancy. Kubernetes orchestration with Helm charts. Scales to organization-wide usage.

### Hybrid (Air-Gapped + Cloud)

Sensitive workloads on air-gapped servers, general-purpose agents on ibl.ai cloud. Consistent agent configurations across both environments. Migrate workloads as policies evolve.

## What You Own

- Complete Agentic OS installation on your hardware with all agent configurations and model settings documented
- Local model weights for all deployed models—pre-downloaded and optimized for your GPU hardware
- Inference engine configurations (NIM, Ollama, or vLLM) tuned for your specific hardware and workload
- Enterprise system integration adapters (HRIS, LMS, IdP) with full source code
- Infrastructure as Code (Ansible/Helm) for repeatable deployments and disaster recovery
- Operational runbooks covering model updates, GPU monitoring, backup procedures, and troubleshooting
- Security documentation for your compliance team—architecture diagrams, data flow maps, control matrices

## Engagement Model

### Infrastructure Assessment (1 week)

Evaluate your server hardware, GPU inventory, network topology, and integration requirements. Right-size model recommendations to your compute capacity.

### Installation and Configuration (2-4 weeks)

Forward-deployed engineers install the Agentic OS, configure inference engines, load model weights, build enterprise integrations, and validate the full stack in your environment.

### Agent Development (2-3 weeks)

Build your first set of AI agents—compliance trainers, onboarding coaches, skills-gap analyzers. Configure guardrails, knowledge bases, and tool integrations specific to your use cases.

### Knowledge Transfer (1-2 weeks)

Train your IT team on model management, agent configuration, GPU monitoring, and operational procedures. Your team operates independently after handoff.

## Get Started

### Hardware Assessment

Free 30-minute session to evaluate your existing GPU infrastructure and recommend a deployment configuration.

### Proof of Concept

Deploy the Agentic OS on a single server with one or two agents to validate the approach before committing to full-scale deployment.

### Full Deployment

Complete air-gapped installation with enterprise integrations, agent library, operational documentation, and knowledge transfer.

---

*[View on ibl.ai](https://ibl.ai/service/air-gapped-ai/corporate)*