American University of Sharjah × ibl.ai: Course-Tuned AI Mentors for Calculus & Physics
AUS and ibl.ai are launching a fall pilot of course-tuned AI mentors for Calculus and Physics that use a code interpreter to compute, visualize, and cite instructor-approved resources—helping students learn reliably and transparently.
We’re excited to share that ibl.ai is partnering with the American University of Sharjah (AUS) on a focused Fall 2025 pilot of mentorAI in two gateway STEM courses—Calculus I (Math 103) and Physics 101. The pilot is designed to validate instructional impact, technical fit, and day-to-day faculty and student workflows before AUS considers a broader rollout.
What We’re Building Together
- Two course-specific student mentors tuned for Math 103 and Physics 101 with AUS-specific prompts, tone, and guardrails. Each mentor is grounded in faculty-approved texts/OER and returns inline citations to those sources to keep learning transparent and verifiable.
- Model-agnostic setup using AUS-provided API keys by default, with a pre-selected secondary LLM ready as a fallback if service quality fluctuates—no mentor changes required. A per-student usage cap (initially 50 messages/term) helps AUS manage consumption and can be adjusted during the pilot.
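For concreteness, here is a minimal sketch of the fallback-and-quota pattern described above, assuming a simple in-memory counter and an illustrative call_llm() helper; the model names, cap handling, and provider call are placeholders rather than ibl.ai’s actual implementation.

```python
# Minimal sketch of the fallback-and-quota pattern described above.
# The model names, the in-memory usage store, and call_llm() are illustrative
# placeholders, not ibl.ai's actual API.

PRIMARY_MODEL = "primary-llm"      # served with AUS-provided API keys
FALLBACK_MODEL = "secondary-llm"   # pre-selected backup model
TERM_MESSAGE_CAP = 50              # per-student cap, adjustable during the pilot

usage = {}                         # maps student_id -> messages sent this term

def call_llm(model: str, prompt: str) -> str:
    # Placeholder for the real provider call.
    return f"[{model}] response to: {prompt}"

def send_to_mentor(student_id: str, prompt: str) -> str:
    """Route one student message, enforcing the cap and falling back if needed."""
    if usage.get(student_id, 0) >= TERM_MESSAGE_CAP:
        return "You have reached this term's message limit for the mentor."
    usage[student_id] = usage.get(student_id, 0) + 1
    try:
        return call_llm(PRIMARY_MODEL, prompt)
    except Exception:
        # Service-quality issue on the primary model: retry on the fallback,
        # with no change to the mentor's prompts or configuration.
        return call_llm(FALLBACK_MODEL, prompt)

print(send_to_mentor("student-001", "What is the derivative of x^2 sin x?"))
```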
Why The Code Interpreter Matters For STEM Reliability
To be genuinely useful in math-heavy classes, an assistant has to compute and visualize, not just chat. AUS’s mentors will use a secure code-execution environment (“code interpreter”) to:
- Plot functions and render precise graphs of equations and vector fields (as images students can reference later).
- Check work numerically (e.g., verify limits/derivatives, evaluate integrals, test boundary conditions).
- Sanity-check symbolic steps by sampling values, spotting algebraic slips, and comparing equivalent forms.
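To make the numerical-checking idea concrete, here is a small, self-contained illustration (example code, not the mentor’s implementation) of two of the checks above: verifying a symbolic derivative against a finite-difference estimate, and confirming that two supposedly equivalent algebraic forms agree on sampled values.

```python
# Illustration of code-interpreter-style numerical checks (example code only).
import math
import random

def f(x):
    return x**2 * math.sin(x)

def claimed_derivative(x):
    # Symbolic answer to check: d/dx [x^2 sin x] = 2x sin x + x^2 cos x
    return 2 * x * math.sin(x) + x**2 * math.cos(x)

def numeric_derivative(func, x, h=1e-6):
    # Central finite-difference estimate of func'(x)
    return (func(x + h) - func(x - h)) / (2 * h)

# 1) Verify the derivative at several sample points
for x in (0.5, 1.0, 2.3, -1.7):
    assert abs(claimed_derivative(x) - numeric_derivative(f, x)) < 1e-5

# 2) Sanity-check that two "equivalent" forms really agree, by random sampling
def lhs(x):
    return (x**2 - 1) / (x - 1)   # undefined at x = 1

def rhs(x):
    return x + 1

for _ in range(100):
    x = random.uniform(-10, 10)
    if abs(x - 1) > 1e-3:         # avoid the removable singularity
        assert abs(lhs(x) - rhs(x)) < 1e-9

print("All numerical checks passed.")
```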
Simple Student Access In AUS’s LMS
To make the mentors easy to reach where students already work, we’ll provide LMS integration options—secure links or LTI—plus lightweight onboarding guidance. Technical items such as HTTPS, CSP allow-listing, and passing standard LTI claims (user/role/course) are covered up front so access is smooth across browsers and sections.
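As a rough illustration of the LTI piece, the sketch below shows how a tool might read the standard LTI 1.3 claims (user, roles, and course context) from an already-verified launch payload. The claim URIs are the standard ones; the function and field mapping are assumptions for illustration, and signature and nonce validation are omitted.

```python
# Illustrative sketch (not ibl.ai's implementation) of reading the standard
# LTI 1.3 claims -- user, roles, and course context -- from a decoded launch
# token. Signature verification and nonce checks are omitted for brevity.

LTI_ROLES = "https://purl.imsglobal.org/spec/lti/claim/roles"
LTI_CONTEXT = "https://purl.imsglobal.org/spec/lti/claim/context"

def extract_launch_info(id_token_claims: dict) -> dict:
    """Map an already-verified LTI launch payload to the fields a mentor needs."""
    context = id_token_claims.get(LTI_CONTEXT, {})
    return {
        "user_id": id_token_claims.get("sub"),        # stable LMS user identifier
        "roles": id_token_claims.get(LTI_ROLES, []),   # e.g., Learner vs. Instructor
        "course_id": context.get("id"),                # course/section identifier
        "course_label": context.get("label"),          # e.g., "MTH 103"
    }

# Example payload shape (all values are placeholders):
claims = {
    "sub": "student-123",
    LTI_ROLES: ["http://purl.imsglobal.org/vocab/lis/v2/membership#Learner"],
    LTI_CONTEXT: {"id": "course-456", "label": "MTH 103"},
}
print(extract_launch_info(claims))
```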
How We’ll Measure Impact
The pilot focuses on a few concrete targets and a tight feedback loop:
- Graphing accuracy: ≥95% pass on a weekly 25-item checklist, with critical issues resolved within five business days.
- Explanation quality: Monthly sampling scored on correctness, clarity, and alignment to sources (avg ≥4.2/5), and ≥80% student “helpful/very helpful.”
- Adoption & engagement: ≥70% of enrolled students use the mentor at least once (tracking unique users, sessions, messages/session).
Faculty Enablement And Support
AUS instructors will receive up to two working sessions per course (setup, testing, and deployment strategies), plus asynchronous support during the term for prompt tuning, dataset adjustments, and minor configuration toggles. At term’s end, we’ll host a debrief and deliver a brief pilot report (≤5 pages) summarizing methods, usage, satisfaction, notable accuracy issues/resolutions, and recommendations—with cost/scale implications for Spring 2026.
Roles, Responsibilities, And Risk Management
AUS will provide API keys and approve source materials (or OER substitutes), identify a lead faculty member for each course, and coordinate internal approvals. We’ll operate to clear service levels: rapid acknowledgement for issues, defined response targets by severity, and a straightforward resolution path (contain → fix → verify → log).
What Success Looks Like—And What’s Next
Success is defined by accuracy, adoption, and instructional fit (assessed with a brief rubric covering alignment, ease of in-class use, out-of-class study support, and time/overhead). If targets are met, AUS and ibl.ai will agree on a path to expansion across additional courses and programs for Spring 2026, based on evidence from this pilot.
Conclusion
We’re honored to collaborate with the American University of Sharjah on this measured, student-first approach to AI mentoring—and we look forward to sharing what we learn together this fall. If you’re interested in a mentorAI pilot for your institution, visit ibl.ai/contact to learn more!