
How mentorAI Integrates with Google Cloud Platform

Jeremy Weaver · May 7, 2025

mentorAI deploys its microservices on GKE Autopilot and streams student queries through Vertex AI Model Garden, letting campuses route each request to Gemini 2.0 Flash, Gemini 1.5 Pro, or other models with up to a 2M-token multimodal context, all without owning GPUs and while maintaining sub-second latency for real-time tutoring. Tenant data stays inside VPC Service Controls perimeters; usage and latency feed Cloud Monitoring dashboards for cost governance; and faculty can fine-tune open-weight Gemma or Llama 3 right in Model Garden, making the integration FERPA-aligned, transparent, and future-proof with a simple config switch.

mentorAI harnesses Google Cloud Platform (GCP) to deliver fast, secure, and research‑ready generative AI for higher education. At the center is Vertex AI, Google’s serverless platform that now offers Gemini 1.5 Pro, Gemini 1.5 Flash, and the new Gemini 2.0 Flash family via a single API. Paired with Google’s managed compute, database, and observability stack, mentorAI scales from a pilot course to an entire university while meeting strict data‑privacy requirements.
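
To make the "single API" point concrete, here is a minimal sketch of calling a Gemini model through the Vertex AI Python SDK. The project ID and region are placeholders, and switching between the models named above is a one-string change.

```python
# Minimal sketch, assuming the google-cloud-aiplatform SDK is installed and
# application-default credentials are configured. Project/region are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-campus-project", location="us-central1")

# The same call path works for "gemini-1.5-pro", "gemini-1.5-flash",
# "gemini-2.0-flash", and other Model Garden models.
model = GenerativeModel("gemini-2.0-flash")
response = model.generate_content("Explain photosynthesis to a first-year student.")
print(response.text)
```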


Key GCP Building Blocks

  • Vertex AI Model Garden – one endpoint for Gemini, PaLM 2, Gemma, Llama 3, Mistral, and more. mentorAI calls the same API for text, vision, audio, and RAG workflows.
  • Vertex AI Agent Builder & RAG Engine – lets mentorAI chain multi‑agent workflows and attach campus knowledge bases for retrieval‑augmented answers.
  • Google Kubernetes Engine (GKE) – container home for mentorAI’s microservices (API, orchestration engine, background jobs). Autopilot mode keeps ops light.
  • Cloud SQL / Spanner – relational store for user data and transcripts. Multi‑tenant schemas or per‑database silos meet FERPA needs.
  • VPC Service Controls + IAM – fence each university’s data with private networking and least‑privilege roles.
  • Cloud Storage – durable object store for lecture files, embeddings, and backups, partitioned by tenant prefix (a sketch of this partitioning follows the list).
  • Cloud Monitoring & Logging – central dashboards, error alerts, and SLO tracking; integrates with Vertex observability for model latency.
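
As an illustration of the tenant-prefix partitioning mentioned above, the sketch below uploads a lecture file under a per-university prefix. The bucket and tenant names are hypothetical, not mentorAI's actual naming scheme.

```python
# Illustrative sketch only: one bucket, one object-name prefix per university.
# Bucket and tenant names are hypothetical.
from google.cloud import storage

def upload_lecture(tenant_id: str, local_path: str, filename: str) -> str:
    client = storage.Client()
    bucket = client.bucket("mentorai-course-assets")
    # Each tenant's objects live under its own prefix, e.g. "state-u/lectures/week1.pdf".
    blob = bucket.blob(f"{tenant_id}/lectures/{filename}")
    blob.upload_from_filename(local_path)
    return blob.name

print(upload_lecture("state-u", "/tmp/week1.pdf", "week1.pdf"))
```

Per-tenant access to those prefixes can then be enforced with IAM Conditions that match on the object name, complementing the VPC Service Controls perimeter.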

How mentorAI Uses GCP Day‑to‑Day

1. Student question arrives. An HTTPS request hits a Cloud Load Balancer and lands in a GKE pod.
2. Model selection. The orchestration layer calls Vertex AI, choosing (or letting Vertex auto-route) between Gemini 2.0 Flash for live chat or Gemini 1.5 Pro for deep analysis (sketched below).
3. Context enrichment. Course docs are fetched from Cloud Storage / Cloud SQL and injected via Vertex RAG Engine.
4. Response & telemetry. The answer returns in <1 s; tokens, latency, and cost stream to Cloud Monitoring dashboards.
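
A hypothetical sketch of steps 2 and 4: the function below routes a request to a fast or a deep model and measures latency. The mode names and routing table are illustrative assumptions rather than mentorAI's published logic, and it assumes vertexai.init() has already run as in the earlier sketch.

```python
# Hypothetical routing sketch for steps 2 and 4. Mode names and the routing
# table are assumptions; mentorAI's actual orchestration logic is not public.
import time
from vertexai.generative_models import GenerativeModel

MODEL_BY_MODE = {
    "live_chat": "gemini-2.0-flash",    # low-latency tutoring
    "deep_analysis": "gemini-1.5-pro",  # long-context reasoning
}

def answer(question: str, mode: str = "live_chat") -> str:
    model = GenerativeModel(MODEL_BY_MODE[mode])
    start = time.monotonic()
    response = model.generate_content(question)
    latency_ms = (time.monotonic() - start) * 1000
    # In production, latency and token counts would be exported to Cloud
    # Monitoring as custom metrics rather than printed.
    print(f"mode={mode} latency_ms={latency_ms:.0f} "
          f"tokens={response.usage_metadata.total_token_count}")
    return response.text
```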

Why GCP Matters to Universities

  • Cutting‑edge multimodal LLMs – Gemini models handle text, images, and audio with up to 2M-token context windows.
  • Serverless scale – Vertex AI auto‑scales model endpoints; GKE Autopilot scales app containers without manual node ops.
  • Data governance – VPC Service Controls and IAM Conditions keep each tenant’s data isolated and audit‑logged.
  • Cost control – pay‑per‑request for models; cluster autoscaling shrinks spend after peak weeks.
  • Research flexibility – faculty can fine‑tune Gemma or open‑weight Llama 3 right in Model Garden, then wire them into mentorAI without code changes (a config-switch sketch follows this list).
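
To illustrate that config switch, the sketch below maps tenants to model identifiers. For tuned Gemini models, the Vertex endpoint resource name can be passed straight to GenerativeModel; open-weight Gemma or Llama deployments from Model Garden may instead sit behind a Vertex endpoint with a different call path. All project, endpoint, and tenant names here are placeholders.

```python
# Hypothetical config-switch sketch: the model a tenant uses is data, not code.
# All project, endpoint, and tenant names are placeholders.
from vertexai.generative_models import GenerativeModel

TENANT_MODELS = {
    "default": "gemini-2.0-flash",
    # A faculty-tuned model deployed to a Vertex endpoint (placeholder name):
    "state-u": "projects/my-campus-project/locations/us-central1/endpoints/1234567890",
}

def model_for(tenant_id: str) -> GenerativeModel:
    # Falling back to the default keeps new tenants working with zero config.
    return GenerativeModel(TENANT_MODELS.get(tenant_id, TENANT_MODELS["default"]))
```
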
By combining Vertex AI's managed LLMs with Google Cloud's secure, elastic backbone, mentorAI lets campuses deploy real-time, multimodal tutoring while keeping budgets, data, and compliance firmly under control. Learn more at https://ibl.ai.

Related Articles

How mentorAI Integrates with Blackboard

mentorAI integrates with Blackboard Learn using LTI 1.3 Advantage, so every click on a mentorAI link triggers an OIDC launch that passes a signed JWT containing the user’s ID, role, and course context—providing seamless single-sign-on with no extra passwords or roster uploads. Leveraging the Names & Roles Provisioning Service, Deep Linking, and the Assignment & Grade Services, the tool auto-syncs class lists, lets instructors drop AI activities straight into modules, and pushes rubric-aligned scores back to Grade Center in real time.

Jeremy Weaver · May 7, 2025

How mentorAI Integrates with Brightspace

mentorAI plugs into Brightspace via LTI 1.3 Advantage, letting the LMS issue an OIDC-signed JWT at launch so every student or instructor is auto-authenticated with their exact course, role, and context—no extra passwords or roster uploads. Thanks to the Names & Roles Provisioning Service, Deep Linking, and the Assignment & Grade Services, rosters stay in sync, AI activities drop straight into content modules, and rubric-aligned scores flow back to the Brightspace gradebook in real time.

Jeremy Weaver · May 7, 2025

How mentorAI Integrates with Microsoft

mentorAI launches as a one-click Azure Marketplace app, runs its APIs on AKS, and routes prompts to Azure OpenAI Service models like GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo, and Phi-3—letting universities tap enterprise LLMs without owning GPUs. Traffic and data stay inside each tenant’s VNet with Entra ID SSO, Azure Content Safety filtering, AKS auto-scaling, and full Azure Monitor telemetry, so campuses meet FERPA-level privacy while paying only per token and compute they actually use.

Jeremy Weaver · May 7, 2025

How mentorAI Integrates with Groq

mentorAI plugs into Groq’s OpenAI-compatible LPU API so universities can route any mentor to ultra-fast models like Llama 4 Maverick or Gemma 2 9B that stream ~185 tokens per second with deterministic sub-100 ms latency. Admins simply swap the base URL or point at an on-prem GroqRack, while mentorAI enforces LlamaGuard safety and quota tracking across cloud or self-hosted endpoints such as Bedrock, Vertex, and Azure—no code rewrites.

Jeremy Weaver · May 7, 2025