
American University of Sharjah × ibl.ai: Course-Tuned AI Mentors for Calculus & Physics

Jeremy Weaver · September 18, 2025

AUS and ibl.ai are launching a fall pilot of course-tuned AI mentors for Calculus and Physics that use a code interpreter to compute, visualize, and cite instructor-approved resources—helping students learn reliably and transparently.

We’re excited to share that ibl.ai is partnering with the American University of Sharjah (AUS) on a focused Fall 2025 pilot of mentorAI in two gateway STEM courses—Calculus I (Math 103) and Physics 101. The pilot is designed to validate instructional impact, technical fit, and day-to-day faculty and student workflows before AUS considers a broader rollout.


What We’re Building Together

  • Two course-specific student mentors tuned for Math 103 and Physics 101 with AUS-specific prompts, tone, and guardrails. Each mentor is grounded in faculty-approved texts/OER and returns inline citations to those sources to keep learning transparent and verifiable.
  • Model-agnostic setup using AUS-provided API keys by default, with a pre-selected secondary LLM ready as a fallback if service quality fluctuates—no mentor changes required. A per-student usage cap (initially 50 messages/term) helps AUS manage consumption and can be adjusted during the pilot.
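
As a rough illustration of how that fallback routing and per-student cap could fit together, here is a minimal sketch in Python. The class name, the .complete() method, and the client objects are hypothetical placeholders, not the actual mentorAI implementation; only the 50-message default comes from the pilot description above.

from dataclasses import dataclass, field


@dataclass
class MentorGateway:
    primary: object                  # client for the AUS-selected primary LLM (assumed interface)
    secondary: object                # pre-selected fallback LLM client (assumed interface)
    per_student_cap: int = 50        # messages per term; adjustable during the pilot
    usage: dict = field(default_factory=dict)

    def send(self, student_id: str, prompt: str) -> str:
        # Enforce the per-student cap before any model call.
        used = self.usage.get(student_id, 0)
        if used >= self.per_student_cap:
            return "Message limit reached for this term. Please contact your instructor."
        self.usage[student_id] = used + 1

        # Try the primary model; route to the fallback if it errors or degrades,
        # with no change to the mentor configuration itself.
        try:
            return self.primary.complete(prompt)
        except Exception:
            return self.secondary.complete(prompt)

Because the cap is a single field, raising or lowering it mid-term would be a configuration change rather than a code change.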

Why The Code Interpreter Matters For STEM Reliability

To be genuinely useful in math-heavy classes, an assistant has to compute and visualize, not just chat. AUS’s mentors will use a secure code-execution environment (“code interpreter”) to:
  • Plot functions and render precise graphs of equations and vector fields (as images students can reference later).
  • Check work numerically (e.g., verify limits/derivatives, evaluate integrals, test boundary conditions).
  • Sanity-check symbolic steps by sampling values, spotting algebraic slips, and comparing equivalent forms.
This dramatically reduces “plausible-sounding but wrong” answers, and it gives students clear, visual feedback—especially vital in early calculus and mechanics.
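
To make this concrete, here is a minimal sketch of the kind of check such an environment might run: differentiate a function symbolically, spot-check the result numerically at sampled points, and save a graph the student can reference later. The use of SymPy, NumPy, and Matplotlib is an assumption for illustration; the actual sandboxed interpreter and its libraries may differ.

import numpy as np
import sympy as sp
import matplotlib.pyplot as plt

x = sp.symbols("x")
f = x**3 - 3 * x          # example Calculus I function
df = sp.diff(f, x)        # symbolic derivative: 3*x**2 - 3

# Spot-check the symbolic derivative against a central-difference estimate.
f_num = sp.lambdify(x, f, "numpy")
df_num = sp.lambdify(x, df, "numpy")
samples = np.linspace(-2.0, 2.0, 9)
h = 1e-6
numeric = (f_num(samples + h) - f_num(samples - h)) / (2 * h)
assert np.allclose(numeric, df_num(samples), atol=1e-4)

# Render a graph of the function and its derivative as a reusable image.
xs = np.linspace(-2.5, 2.5, 400)
plt.plot(xs, f_num(xs), label="f(x) = x^3 - 3x")
plt.plot(xs, df_num(xs), label="f'(x)")
plt.legend()
plt.savefig("derivative_check.png")

The numerical spot-check is exactly the kind of step that catches an algebraic slip before it reaches the student, and the saved plot gives them something concrete to study from.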

Simple Student Access In AUS’s LMS

To make the mentors easy to reach where students already work, we’ll provide LMS integration options—secure links or LTI—plus lightweight onboarding guidance. Technical items such as HTTPS, CSP allow-listing, and passing standard LTI claims (user/role/course) are covered up front so access is smooth across browsers and sections.
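
For reference, a minimal sketch of reading the standard LTI 1.3 claims (user, roles, course context) from an already-verified launch token might look like the following. The function name and dict-based payload are illustrative assumptions, and JWT signature and nonce validation are omitted here.

LTI_ROLES = "https://purl.imsglobal.org/spec/lti/claim/roles"
LTI_CONTEXT = "https://purl.imsglobal.org/spec/lti/claim/context"


def extract_launch_info(id_token_payload: dict) -> dict:
    """Pull the user, role, and course fields the mentor needs from an LTI launch."""
    return {
        "user_id": id_token_payload["sub"],                        # stable LMS user id
        "roles": id_token_payload.get(LTI_ROLES, []),              # e.g., Learner / Instructor role URIs
        "course_id": id_token_payload.get(LTI_CONTEXT, {}).get("id"),
        "course_title": id_token_payload.get(LTI_CONTEXT, {}).get("title"),
    }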

How We’ll Measure Impact

The pilot focuses on a few concrete targets and a tight feedback loop:
  • Graphing accuracy: ≥95% pass on a weekly 25-item checklist, with critical issues resolved within five business days.
  • Explanation quality: Monthly sampling scored on correctness, clarity, and alignment to sources (avg ≥4.2/5), and ≥80% student “helpful/very helpful.”
  • Adoption & engagement: ≥70% of enrolled students use the mentor at least once (tracking unique users, sessions, messages/session).
Quality is continuously monitored via monthly response audits, targeted spot-checks for graphing, and in-product flagging (faculty—and optionally students—can flag any response). Issues are triaged and addressed through precise prompt/dataset tweaks and tracked in a shared log.
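
As an illustration of how these targets could be computed from simple usage and audit logs, here is a hedged sketch; the log field names and function names are assumptions, while the thresholds mirror the targets listed above.

def adoption_rate(enrolled_ids: set[str], session_log: list[dict]) -> float:
    """Share of enrolled students with at least one mentor session (target: >= 0.70)."""
    active = {s["student_id"] for s in session_log if s["student_id"] in enrolled_ids}
    return len(active) / len(enrolled_ids) if enrolled_ids else 0.0


def checklist_pass_rate(results: list[bool]) -> float:
    """Weekly 25-item graphing checklist pass rate (target: >= 0.95)."""
    return sum(results) / len(results) if results else 0.0


def explanation_quality(scores: list[float]) -> float:
    """Average of monthly audit scores on a 1-5 scale (target: >= 4.2)."""
    return sum(scores) / len(scores) if scores else 0.0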

Faculty Enablement And Support

AUS instructors will receive up to two working sessions per course (setup, testing, and deployment strategies), plus asynchronous support during the term for prompt tuning, dataset adjustments, and minor configuration toggles. At term’s end, we’ll host a debrief and deliver a brief pilot report (≤5 pages) summarizing methods, usage, satisfaction, notable accuracy issues/resolutions, and recommendations—with cost/scale implications for Spring 2026.

Roles, Responsibilities, And Risk Management

AUS will provide API keys and approve source materials (or OER substitutes), identify a lead faculty member for each course, and coordinate internal approvals. We’ll operate to clear service levels: rapid acknowledgement for issues, defined response targets by severity, and a straightforward resolution path (contain → fix → verify → log).

What Success Looks Like—And What’s Next

Success is defined by accuracy, adoption, and instructional fit (a brief rubric on alignment, ease of in-class use, out-of-class study support, and time/overhead). If targets are met, AUS and ibl.ai will agree on a path to expansion for Spring 2026 across additional courses/programs based on evidence from this pilot.

Conclusion

We’re honored to collaborate with the American University of Sharjah on this measured, student-first approach to AI mentoring—and we look forward to sharing what we learn together this fall. If you’re interested in a mentorAI pilot for your institution, visit ibl.ai/contact to learn more!