
ibl.ai Evidence of Impact

Jeremy Weaver · September 18, 2025

mentorAI · ibl.ai · Evidence of Impact · Learning Outcomes · AI · Higher Education

Introduction

Advances in AI have enabled new forms of personalized educational support, exemplified by ibl.ai’s mentorAI platform. mentorAI serves as a virtual tutor or AI mentor that students can interact with 24/7, integrated into learning management systems. This white paper analyzes the platform through an academic lens, detailing the learning theories underpinning its design, the features that enhance student engagement and performance, real-world deployments at institutions like GWU, Morehouse College, and Syracuse, and the evidence of its learning impact. By grounding the analysis in established educational theory and documented outcomes, we assess how mentorAI aligns with UNICEF EdTech goals of improving learning quality and equity through technology. The discussion draws on product documentation, institutional case descriptions, and platform data to provide a rigorous yet concise evaluation.

Theoretical Foundations of mentorAI

Modern learning science principles form the foundation of mentorAI’s design. Constructivist learning theory posits that learners build knowledge actively; mentorAI embodies this by engaging students in interactive problem-solving dialogue rather than passive content delivery. The AI mentor encourages students to ask questions, explain their reasoning, and explore topics in depth, which aligns with constructivism’s emphasis on active learning and knowledge construction. In practice, the mentorAI chat is student-centered: it never simply hands out answers, but instead prompts students with guiding questions and hints so they arrive at solutions themselves. This approach mirrors the Socratic method and ensures the student remains cognitively active, which research shows leads to deeper understanding.

Another core principle is formative assessment – the use of ongoing feedback to guide learning. mentorAI functions as a formative assessor by providing immediate, tailored feedback on student inputs in real time. For example, when a student is working on an assignment or practice question, the AI mentor can give step-by-step hints, check intermediate steps, or correct misunderstandings on the spot. This continuous feedback loop helps students identify errors and knowledge gaps early, allowing them to adjust their learning strategies. It also gives instructors insight into student progress through analytics (discussed later), embodying Black and Wiliam’s idea that formative assessment “informs both teaching and learning” in an ongoing way. The instant feedback and explanation capability of mentorAI provides the kind of timely scaffolding that a human tutor or teacher would provide in one-on-one instruction, thereby operationalizing formative assessment at scale.

Platform Operation and Key Features Enhancing Learning

24/7 Personalized Mentor Chat & Instant Feedback

At the core of mentorAI is a conversational AI mentor that students can chat with at any time, serving as a personal tutor available 24/7. Through a friendly chat interface, students ask questions about their coursework, get help with assignments, or review for exams, and the AI responds in a contextual, helpful manner. A distinguishing feature is the immediacy and specificity of the feedback: the AI provides just-in-time explanations or hints tailored to the student’s query and level. For instance, a student might ask the mentor to clarify a difficult concept from class; the AI will generate a custom explanation (using relevant examples or even referring to the course materials if integrated) at an appropriate level of detail. If the student attempts a practice problem, the mentor can give step-by-step feedback, pointing out mistakes as they occur. This on-demand support mirrors one-on-one human tutoring in responsiveness. Notably, the AI mentor has been designed not to simply give away answers, but to guide the student toward the answer. In a UC San Diego pilot, for example, the university’s course-specific AI tutor (built on similar principles) was reportedly “trained never to just give students the answer to a problem” but instead to ask questions that lead students to the solution and to encourage them when they get it right. Such an approach keeps students in an active problem-solving mode and prevents over-reliance on the AI. By being constantly available and interactive, the mentorAI platform also extends students’ time-on-task – learners can engage in study or practice at their own pace and schedule, beyond classroom hours. This increased practice and engagement time is known to positively correlate with learning outcomes. Indeed, one advantage noted by instructors is that an AI tutor is accessible “any time and anywhere,” unlike human TAs or tutors, giving students more opportunities to clarify doubts and reinforce learning whenever needed. In summary, the mentorAI chat interface provides a personalized, always-available learning companion that offers immediate, formative feedback, thereby enhancing engagement and understanding in line with how effective tutoring improves learning.
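
To make the hint-first behavior concrete, the sketch below shows one way such a tutor could be configured at the prompt level. It is a minimal illustration only: the system-prompt wording and the llm_chat() helper are assumptions for this example, not mentorAI’s actual prompts or APIs.

```python
# Minimal sketch of a "guide, don't answer" tutor configuration.
# llm_chat() is a hypothetical placeholder for the underlying LLM call;
# mentorAI's actual prompts and interfaces are not shown here.

SOCRATIC_SYSTEM_PROMPT = """You are a course tutor. Never state the final answer
to a graded or practice problem directly. Instead:
1. Ask the student what they have tried so far.
2. Offer one hint or guiding question at a time.
3. Check each intermediate step and point out errors gently.
4. Encourage the student when their reasoning is correct."""

def tutor_reply(history: list[dict], student_message: str) -> str:
    """Append the student's message and request a hint-first response."""
    messages = [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": student_message})
    return llm_chat(messages)  # placeholder: any chat-completion backend

def llm_chat(messages: list[dict]) -> str:
    raise NotImplementedError("Wire this to the institution's chosen model.")
```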

Proactive Guidance and AI-Driven Scaffolding

mentorAI goes beyond answering students’ questions by proactively guiding their learning process. After each response from the AI, the platform can suggest follow-up questions or actions – a feature referred to as Guided Prompts. These are intelligently generated prompts aimed at nudging the learner to explore subtopics or deepen their understanding of the current topic. For example, if a student just learned a concept, the mentor might ask “Would you like to attempt a practice question on this concept?” or “Shall we explore causes or applications of this concept further?” By doing so, the AI is essentially scaffolding the learning experience: it provides a structure for the student to follow, similar to how a human tutor might say “Now that we solved this problem, try this related one” or “Think about why this formula works.” This AI-driven scaffolding fosters deeper engagement, prompting students not only to get answers but also to reflect and inquire further (consistent with active learning pedagogy). Crucially, the difficulty and nature of these prompts are adaptive. mentorAI “continually gauges your knowledge and adjusts the depth and complexity of explanations” to suit the learner’s level. A novice might get more basic follow-up questions, whereas an advanced student receives more challenging, open-ended prompts. In this way, the platform consistently pitches its prompts a step beyond the student’s current proficiency. The system also supports multiple learning modes – for instance, a Socratic mode where the mentor primarily answers with questions to stimulate critical thinking, or a Guided mode providing more direct instruction for beginners (these modes are documented in the mentorAI guides). The combination of immediate feedback with forward-looking guidance transforms the learning session into a coached experience: the AI not only reacts to student queries but also proactively mentors the student through a learning pathway. This helps maintain student motivation and focus. By receiving encouragement and cues on what to explore next, students are less likely to hit a dead end or become disengaged. Overall, the proactive and adaptive guidance features of mentorAI function as an AI scaffold, replicating proven human tutoring strategies (hints, probing questions, incremental challenge) and thereby keeping students in that optimal growth zone where learning is most effective.
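
As a rough sketch of how such adaptive follow-up prompts could work, the example below selects guided prompts from difficulty bands tied to an estimated proficiency score. The bands, templates, and the naive proficiency update are illustrative assumptions, not mentorAI’s actual adaptation logic.

```python
# Illustrative sketch: choose follow-up "guided prompts" whose difficulty
# tracks an estimated proficiency score (0.0 = novice, 1.0 = advanced).
# Thresholds and templates are assumptions for illustration only.

def guided_prompts(topic: str, proficiency: float) -> list[str]:
    if proficiency < 0.4:          # novice: recall and worked practice
        return [
            f"Would you like a simple practice question on {topic}?",
            f"Shall I restate the key definition of {topic} in plainer terms?",
        ]
    elif proficiency < 0.7:        # intermediate: application and connections
        return [
            f"Want to try applying {topic} to a new example?",
            f"Shall we compare {topic} with a related concept from this unit?",
        ]
    else:                          # advanced: open-ended, transfer-level prompts
        return [
            f"Can you explain why {topic} works, in your own words?",
            f"Would you like a challenge problem that combines {topic} with earlier material?",
        ]

def update_proficiency(current: float, answered_correctly: bool, rate: float = 0.1) -> float:
    """Naive running estimate: nudge proficiency up or down after each exchange."""
    delta = rate if answered_correctly else -rate
    return min(1.0, max(0.0, current + delta))

# Example: a student at an estimated 0.55 proficiency on "Bayes' theorem"
print(guided_prompts("Bayes' theorem", 0.55))
```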

Course Integration and Contextualized Knowledge

A critical aspect of mentorAI’s operation is its integration with course-specific content and context, which ensures that the AI’s guidance is curriculum-aligned and trustworthy. The platform employs a technique known as Retrieval-Augmented Generation (RAG) to ground the AI’s responses in the instructor-provided materials and relevant knowledge bases. In practical terms, each course or subject can have its own AI mentor that has access to that course’s syllabus, lecture notes, slides, textbooks, etc. When a student asks a question, the AI doesn’t rely solely on a general large language model’s memory; it also searches the relevant course documents to provide accurate, context-specific answers. This design addresses the common concern with generic AI chatbots like ChatGPT that “off-the-shelf tools don’t have access to course materials or knowledge of the instructor’s teaching goals”. By wrapping the AI agent with the course’s content and the instructor’s pedagogical intent, mentorAI delivers help that is on-target for what the student is supposed to learn in class, rather than drifting into irrelevant or incorrect territory. Professor Lorena Barba of GWU, who piloted an AI mentor in her engineering course, emphasized this point: her course-level mentor “grounds AI responses in course materials” to ensure the answers align with what was taught, thereby addressing the issue that many students were using ChatGPT “not very well” for coursework. The mentorAI platform allows instructor control over the AI’s knowledge base and persona. Faculty can easily upload or connect the course resources (readings, assignments, lecture transcripts) to the mentor’s dataset, and even define the mentor’s tone or role (e.g. “a helpful calculus TA”) through configuration settings. Key features reported in the GWU deployment include “integration of course resources” and “context-aware responses (via RAG)”. Instructors also have the ability to set boundaries – for example, disallowing the mentor from giving direct answers on assessed homework – to uphold academic integrity. This high degree of customization ensures the AI operates as a faculty-led tool rather than an autonomous black box. Indeed, Barba described her approach as “student-centered and faculty-led”, highlighting that the AI mentor was built to serve the instructor’s pedagogy, not replace it. The result is an AI mentor that can deliver contextualized knowledge: if asked about a concept, it can reference the exact definition from lecture slides; if a student is stuck on a specific homework problem, it can give a hint that aligns with how the teacher explained similar problems in class. This contextual grounding not only increases the accuracy of responses (reducing AI hallucinations) but also builds trust with faculty, since the AI is effectively an amplifier of the instructor’s own content. Another benefit is in multilingual or diverse learning contexts – because the mentor draws on authorized content, it can be used to provide consistent explanations across different languages or modalities (some institutions use it to automatically generate translations or alternative explanations for accessibility). Overall, the tight integration with course content and the control given to instructors embody best practices of adaptive learning systems and ensure that mentorAI supplements teaching in a coherent, aligned manner.
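
The snippet below sketches the core retrieval-augmented generation loop described above: embed the student’s question, pull the closest course-material chunks, and answer only from that context. The embed() and llm_answer() helpers are placeholders for whatever models an institution chooses, and the chunking and prompt wording are assumptions for illustration, not ibl.ai’s implementation.

```python
# Minimal retrieval-augmented generation (RAG) sketch. embed() and llm_answer()
# are placeholders for an embedding model and a chat model; chunking, storage,
# and prompt wording here are illustrative assumptions, not mentorAI internals.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, course_chunks: list[tuple[str, list[float]]], k: int = 3) -> list[str]:
    """Return the k course-material chunks most similar to the question."""
    q_vec = embed(question)
    ranked = sorted(course_chunks, key=lambda c: cosine(q_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def grounded_answer(question: str, course_chunks: list[tuple[str, list[float]]]) -> str:
    context = "\n\n".join(retrieve(question, course_chunks))
    prompt = (
        "Answer using ONLY the course material below. If the material does not "
        "cover the question, say so and suggest asking the instructor.\n\n"
        f"Course material:\n{context}\n\nStudent question: {question}"
    )
    return llm_answer(prompt)

def embed(text: str) -> list[float]:
    raise NotImplementedError("Plug in the chosen embedding model here.")

def llm_answer(prompt: str) -> str:
    raise NotImplementedError("Plug in the chosen chat model here.")
```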

Learning Analytics and Instructor Dashboards

In addition to the student-facing tutor, mentorAI provides robust learning analytics dashboards for instructors and administrators. Every interaction students have with the AI mentor generates data that can offer insights into their learning behaviors and difficulties. The platform captures metrics such as the number of questions asked, topics that students inquire about most, the frequency and timing of usage, and even sentiment or feedback ratings on the AI’s responses. These data are aggregated and presented through an analytics interface. For example, an instructor can view a summary of what concepts students struggled with in a given week based on the questions they posed to the mentor. The mentorAI administrator panel includes conversation analytics visuals, enabling educators to track “usage patterns, flag at-risk students, and get actionable reports.” This means that if a subset of students is repeatedly asking the mentor for help on a particular subtopic, the instructor might realize that concept wasn’t clearly understood by the class and can address it in the next lesson. The system essentially functions as an early-warning system for student difficulties: by flagging at-risk students (e.g., those who ask an unusually high number of questions or exhibit frustration), the platform allows timely intervention. In terms of concrete metrics, mentorAI’s analytics can report things like average session length (a proxy for time-on-task), distribution of mentor usage across students, performance on any quizzes administered by the AI, and improvements over time. It also supports sentiment analysis or feedback from students after an AI interaction, which can be used to gauge their confidence or satisfaction. An example of the reporting capability is the “daily mentor performance snapshots (usage, cost, sentiment) delivered to dashboards” that ibl.ai has mentioned as part of their updates. Administrators likewise can see institution-level data: how overall engagement with the AI correlates with course outcomes, which departments are utilizing the tool most, etc., to inform decision-making on scaling the solution. These analytics embody the principle of data-driven instruction – empowering educators with formative data about student learning processes that were previously invisible. The platform thereby closes the loop between learning and teaching: students get personalized help, and instructors get feedback on student learning. Combined with the earlier features, mentorAI isn’t just a chat tool; it’s an integrated learning support system that not only tutors students but also feeds information back to educators and institutions to improve curriculum and support. The emphasis on analytics and transparency also reassures stakeholders (e.g., UNICEF’s evaluators) that the platform’s impact can be monitored and evaluated in measurable terms rather than being a black box. For instance, if student time-on-task increases with mentorAI use or if certain at-risk students improve their grades after consistent AI mentoring, these outcomes would surface in the data. In summary, the learning analytics dashboards in mentorAI provide a vital interface for measuring and enhancing the efficacy of the AI mentor, turning raw usage data into meaningful pedagogical insights.
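
A simplified sketch of how such chat logs might be rolled up into instructor-facing metrics is shown below. The log fields (student_id, timestamp, topic, rating) and the at-risk heuristic are assumptions for illustration only, not the platform’s actual schema or flagging rules.

```python
# Sketch of turning raw mentor-chat logs into instructor-facing metrics.
# The log schema and the "at-risk" heuristic are illustrative assumptions.
from collections import Counter, defaultdict
from datetime import datetime
from statistics import mean

def weekly_report(logs: list[dict], session_gap_minutes: int = 30) -> dict:
    questions_per_student = Counter(e["student_id"] for e in logs)
    hot_topics = Counter(e["topic"] for e in logs).most_common(5)

    # Approximate session lengths: consecutive events closer together than the
    # gap threshold are treated as one session.
    by_student = defaultdict(list)
    for e in logs:
        by_student[e["student_id"]].append(datetime.fromisoformat(e["timestamp"]))
    session_lengths = []
    for times in by_student.values():
        times.sort()
        start = prev = times[0]
        for t in times[1:]:
            if (t - prev).total_seconds() > session_gap_minutes * 60:
                session_lengths.append((prev - start).total_seconds() / 60)
                start = t
            prev = t
        session_lengths.append((prev - start).total_seconds() / 60)

    ratings = [e["rating"] for e in logs if e.get("rating") is not None]
    avg_q = mean(questions_per_student.values()) if questions_per_student else 0
    at_risk = [s for s, n in questions_per_student.items() if n > 3 * avg_q]

    return {
        "questions_per_student": dict(questions_per_student),
        "top_topics": hot_topics,
        "avg_session_minutes": round(mean(session_lengths), 1) if session_lengths else 0,
        "avg_response_rating": round(mean(ratings), 2) if ratings else None,
        "flagged_at_risk": at_risk,   # unusually heavy users may need outreach
    }
```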

Architecture, Integration, and Ethical Safeguards

The mentorAI platform is designed with a modern, extensible architecture that facilitates integration into existing educational technology ecosystems and ensures institutional control over data and AI models. It is offered as an LLM-agnostic platform, meaning institutions can choose or change the underlying large language model powering the mentors – whether it be OpenAI’s GPT series, Anthropic’s models, or open-source alternatives like LLaMA – without affecting the user experience. This flexibility prevents vendor lock-in and allows use of the most appropriate or cost-effective model, a feature particularly attractive for learning institutions concerned about long-term sustainability. Technically, mentorAI exposes comprehensive APIs and an open codebase, so it can integrate seamlessly with Learning Management Systems (LMS) such as Canvas, Blackboard, or Moodle. In fact, ibl.ai offers plug-ins (like a Canvas integration) to embed the mentor interface directly into the LMS course page for convenience. The platform is cloud-agnostic and deployable in self-hosted environments: universities can deploy mentorAI in their own secure cloud or on-premises infrastructure to maintain full control over student data and ensure compliance with privacy regulations like FERPA. For instance, one key selling point is “Secure Hosting: Deploy the platform within your own cloud environment to maintain data privacy and compliance.” This addresses common concerns around data security when using AI in education. Additionally, various ethical guardrails are built into the system. The platform includes content filters to avoid inappropriate or harmful responses, and it allows faculty to review conversation logs if needed to monitor how students are using the AI (balancing student privacy with oversight). mentorAI also provides configuration for academic integrity – e.g., it can be set not to answer specific quiz questions or to watermark AI-generated content – thereby encouraging responsible use of AI rather than facilitating cheating. Morehouse College’s program, for example, emphasized remaining human-centered and ethical while using AI mentors and even explored the concept of “Moral AI” in partnership with ibl.ai. In terms of scalability, the platform is built to support large user bases (ibl.ai supports users from 400+ universities) and multiple concurrent AI mentors (each course or department can have its own custom mentor). This scalable, flexible architecture ensures that mentorAI can be adopted at an institutional level, not just in isolated classrooms, and can evolve as AI technology advances. By giving universities a “full control over [their] AI infrastructure” and the ability to “retain ownership of the codebase”, mentorAI’s design aligns with the needs of educational institutions to be self-reliant and adapt AI to their unique context. In summary, the platform’s architecture and integration capabilities – LLM flexibility, LMS embedding, secure self-hosting, and ethical controls – provide a future-proof and institution-friendly foundation that underpins the pedagogical features described above.
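
The pattern behind an LLM-agnostic design can be sketched as a thin adapter layer, as below. The class and provider names are illustrative stand-ins for the general pattern; ibl.ai’s actual interfaces are not shown here.

```python
# Common pattern behind "LLM-agnostic" platforms: route all generation through
# one interface so the backing model can be swapped by configuration alone.
# The adapters below are illustrative stubs, not ibl.ai's implementation.
from abc import ABC, abstractmethod

class ChatModel(ABC):
    @abstractmethod
    def complete(self, system: str, user: str) -> str: ...

class OpenAIChat(ChatModel):
    def complete(self, system: str, user: str) -> str:
        # Call a hosted proprietary chat API here (omitted in this sketch).
        raise NotImplementedError

class SelfHostedLlama(ChatModel):
    def complete(self, system: str, user: str) -> str:
        # Call a locally hosted open-weights model here (omitted in this sketch).
        raise NotImplementedError

def build_model(config: dict) -> ChatModel:
    """Pick the backend from deployment configuration, not application code."""
    registry = {"openai": OpenAIChat, "llama": SelfHostedLlama}
    return registry[config["provider"]]()

# Example: the mentor logic never changes when the institution switches models.
model = build_model({"provider": "llama"})
```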

Implementation Cases at GWU, Morehouse, and Others

George Washington University (Course-Level AI Mentor Pilot)

George Washington University’s School of Engineering and Applied Science undertook a pilot project in 2024 to develop a course-specific AI mentor, led by Professor Lorena A. Barba. The motivation was to harness AI to support students within the confines of a single course, addressing the observation that 86% of students were already using tools like ChatGPT, often ineffectively. The challenge was to provide an AI assistant that could improve learning rather than shortcut it. In this pilot, ibl.ai served as the technical partner helping to implement the custom AI mentor for Barba’s engineering course. The resulting mentor was faculty-curated and course-specific: Barba uploaded her course materials (readings, lecture notes, etc.) into the system and defined the mentor’s role as a TA for that course. This ensured the AI’s help stayed on-topic and accurate to the curriculum. Barba emphasized “what sets our approach apart is its focus on pedagogical design and cost-effectiveness”. Unlike some enterprise AI solutions, this pilot used a pay-as-you-go model for AI usage through the ibl.ai platform, which turned out to be highly cost-efficient. In fact, GWU reported that their AI mentor was 85% cheaper than using ChatGPT for comparable support tasks. This significant cost reduction demonstrated the value of a tailored solution and addressed the budgetary challenge of scaling AI support to many students. Pedagogically, the GWU mentor allowed Barba and colleagues to customize the AI’s persona and responses to be in line with their teaching. Key features included an instructor-controlled AI persona, integration of the course’s document repository for authoritative answers, context-aware help via RAG, and an analytics dashboard to track student questions. During a live demo to faculty, Barba showed how the AI could answer student queries with course-specific accuracy and even encourage critical thinking, rather than giving verbatim answers. The outcomes of the pilot were promising: faculty noted increased student engagement with the course materials via the AI, and students appreciated having a consistent go-to resource that was “always available” for help. While formal learning outcome gains (like grade improvements) from this small pilot were not rigorously quantified, anecdotal evidence and student feedback were positive. The initiative was endorsed by GWU’s engineering school as a model of a “student-centered and faculty-led” application of AI in education. It also yielded insights for scaling – for instance, how to integrate such mentors into multiple courses and how faculty from different departments could collaborate on developing AI mentors. The GWU case illustrates how mentorAI can be deployed at a course level to tackle the challenge of unguided AI usage: by offering a sanctioned, course-aligned AI assistant that is pedagogically tuned and cost-effective to run.

Morehouse College (AI-Powered Liberal Arts Pilot)

Morehouse College, a historically Black liberal arts institution, launched an innovative pilot in Spring 2025 to integrate AI mentors and avatars into its teaching, known as the AI-PiLOT (Artificial Intelligence – Pedagogical Innovative Leaders of Technology) Fellows Program. This program recruited five faculty fellows across diverse departments (Computer Science, Philosophy & Religion, Education, Business, and Online Learning) to experiment with embedding mentorAI tools in their courses. The primary challenge addressed was how to introduce AI in a liberal arts context in a human-centered, ethical manner. Morehouse recognized growing student interest in AI and the need for faculty to guide AI’s use in learning, rather than leave students to rely on random internet tools. With support from ibl.ai and integration into the college’s Canvas LMS, each faculty fellow created their own AI mentor (and in some cases an AI-powered 3D avatar as a front-end) to assist in one of their course modules. For example, a philosophy professor might have an AI mentor that can discuss logical fallacies or give feedback on essay drafts, whereas a computer science professor’s mentor could help students debug code or understand algorithms. The inclusion of avatars was an experimental twist – using AI-generated characters to increase engagement, especially in online course sections, by giving a “face” to the mentorAI. Morehouse’s goals were both practical and exploratory: “to lead the way in establishing how to use AI tools in liberal arts education while remaining human-centered.” This meant closely monitoring how students interacted with the AI mentors and ensuring the technology complemented the faculty’s teaching rather than overshadowing it. Early reports from the pilot noted that faculty found the mentors useful for providing just-in-time support to students after hours, and students reported feeling more inclined to ask the AI questions they might hesitate to ask a professor, thus increasing their time engaged with course content. There were challenges too – faculty had to invest time to train their AI on the right content and to fine-tune the mentor’s responses (addressing occasional inaccuracies or ensuring the tone was appropriate for Morehouse’s emphasis on ethics and values). The program also explicitly looked at ethical AI: one aspect termed “Moral AI” involved discussions on the ethical implications of AI in education, aligning with Morehouse’s mission. While outcome data is still being gathered, this pilot has positioned Morehouse as a leader in AI innovation among liberal arts colleges. It has also yielded best practices on training faculty to use AI tools. Juana Mendenhall, Morehouse’s Vice Provost who spearheaded the initiative, lauded ibl.ai’s partnership, noting the flexibility and support provided as the project evolved through many changes. The Morehouse case underscores mentorAI’s adaptability across disciplines – from technical fields to humanities – and the importance of institutional vision (here, a fellowship program) in scaling AI mentoring in a pedagogically sound way.

Other Notable Deployments and Outcomes

Several other institutions have piloted or adopted mentorAI (or similar AI mentor platforms) with noteworthy results. Columbia University is running a mentorAI pilot aimed at transforming not only teaching and learning, but also research and administrative support, indicating the broad applicability of the platform beyond student tutoring. While detailed results from Columbia’s pilot are not public, the very scope (covering academics and campus operations) suggests confidence in the platform’s versatility. At Syracuse University, the Chief Digital Officer, Jeff Rubin, spoke about the ibl.ai platform’s role in their digital strategy, highlighting how putting “full control in educators’ hands” allows them to customize AI mentors and ensure the AI’s responses remain grounded in institutional content. This was part of a broader case study indicating cost savings and faculty empowerment in Syracuse’s rollout of AI mentors. Additionally, collaborations with big tech companies lend credence to the platform’s impact: a Google Cloud blog featured ibl.ai’s GenAI-based chat mentor as a case of driving student success with AI, implying that even tech industry leaders see value in this educational approach. Across these implementations, common challenges addressed include: providing scalable one-on-one support to students, reducing the overload on instructors for routine queries, and improving student engagement in online or hybrid courses. The reported outcomes, while still accumulating, have been generally positive – from cost reductions (GWU’s 85% cost savings) to improved course pass rates.

Measuring Learning Impact of mentorAI

ibl.ai emphasizes an evidence-based approach to evaluating learning impact, using both quantitative metrics and qualitative feedback. The platform itself facilitates extensive data collection: every student interaction with mentorAI is logged and can be analyzed. One key impact metric is student persistence/retention in courses. By providing 24/7 support and personalized help, mentorAI aims to keep more students in the course and motivated to complete it. Institutions deploying mentorAI often track dropout or withdrawal rates before and after introduction of the tool. Increased retention is indeed promoted as a benefit – the platform “boosts student success & retention” by improving student confidence and outcomes through round-the-clock academic support. Another measurable outcome is time-on-task, which refers to the amount of time students actively engage in learning activities. mentorAI’s analytics can track usage duration per student (e.g., how long they spend in mentor chat sessions). An uptick in average usage time or frequency likely correlates with more practice and study, which can lead to better mastery. For instance, if students typically studied 2 hours a week but with mentorAI they study 3 hours (because the AI is always available to assist), that’s a positive impact on time-on-task. These usage patterns are visible in the dashboards, allowing educators to gauge engagement levels. Academic performance metrics are of course central. In pilot studies, instructors compare exam scores or assignment grades between classes (or semesters) with and without the AI mentor. The expectation is that students who effectively use the mentor will perform better in assessments thanks to reinforced understanding. Some early data outside of ibl.ai’s own reporting has shown small grade improvements (a few percentage points in final grades) with the introduction of AI tutoring support. ibl.ai encourages institutions to conduct such evaluations and even offers tools to run AI-driven quizzes or practice tests so that improvement in scores can be tracked over time. Course completion rates and pass rates are further metrics: for example, if historically 70% of students passed a difficult course, do 75% pass after mentorAI’s adoption? Beyond grades and passes, learning gains can be measured through pre- and post-testing of concept knowledge, which some research pilots are undertaking. Crucially, student feedback and satisfaction are measured via surveys and instruments like the Net Promoter Score (NPS). Student surveys from the Spring 2025 semester after using mentorAI yielded a 97% satisfaction rate and an NPS of 100 (on the standard −100 to +100 scale), meaning every student was a “promoter” of the tool. Such high satisfaction suggests students feel the mentor is genuinely helpful for their learning. While satisfaction alone doesn’t prove learning, it correlates with student engagement and likelihood of continuing in a course. The platform can also solicit feedback after each AI chat session (e.g., a thumbs up/down or “Was this helpful?” prompt), providing micro-level data on the usefulness of responses. Another qualitative measure is student confidence or self-efficacy, which can be gauged through surveys – do students feel more confident tackling problems after using the AI mentor? Early anecdotal evidence indicates yes: students often comment that having an AI to double-check their work or answer naive questions makes them more confident and independent learners.
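
For reference, the survey metrics cited above are conventionally computed as follows; the sample inputs in this sketch are fabricated purely to show the formulas and are not ibl.ai data.

```python
# Conventional computations for the survey metrics mentioned above.
# Sample inputs are made up for illustration; only the standard NPS,
# satisfaction, and pre/post learning-gain formulas are the point here.

def nps(scores: list[int]) -> float:
    """Net Promoter Score on 0-10 responses: %promoters - %detractors (-100..+100)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def satisfaction_rate(ratings: list[int], threshold: int = 4) -> float:
    """Share of students rating the mentor at or above `threshold` on a 1-5 scale."""
    return 100 * sum(1 for r in ratings if r >= threshold) / len(ratings)

def mean_learning_gain(pre: list[float], post: list[float]) -> float:
    """Average pre/post score change for matched students."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Illustrative (fabricated) inputs:
print(nps([10, 9, 10, 9, 9]))                          # 100.0 -> every respondent a promoter
print(satisfaction_rate([5, 5, 4, 5, 4, 3]))           # ~83.3
print(mean_learning_gain([62, 70, 55], [71, 78, 64]))  # ~+8.67 points
```
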
On the instructor side, impact is measured by efficiency gains and satisfaction. Metrics such as reduction in instructor grading time or the number of routine questions handled by AI (versus emailed to professors) indicate how mentorAI offloads work. For example, if automated feedback on drafts or AI-generated quiz solutions save X hours of grading, that is a tangible impact allowing instructors to focus on higher-level teaching tasks. Some institutions have also looked at learning analytics insights as a metric – i.e., how often instructors actually use the data from mentorAI to inform teaching adjustments, which reflects the platform’s integration into pedagogical decision-making. Finally, ibl.ai and its partners regard comparative studies and controlled trials as the gold standard for impact evidence. ibl.ai’s own “Metrics and KPIs” guidelines (as indicated in their case study references) suggest tracking: student usage rates, average improvement in assessment scores, persistence rates, and satisfaction/NPS. While specific numbers will vary by context, the early results from GWU, Morehouse, Syracuse, and UCSD are encouraging. They collectively show trends of improved engagement, high student and faculty approval, and hints of better learning outcomes (from qualitative feedback and preliminary data). As more data comes in, especially from larger-scale deployments, we expect to refine our understanding of mentorAI’s impact. But even now, the evidence suggests that a well-implemented AI mentor can be a powerful tool to increase student success: boosting time-on-task and persistence through constant personalized support, enhancing performance through timely feedback and practice, and ultimately contributing to more students achieving their learning goals.
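
A comparative evaluation of the kind described could start from something as simple as the sketch below, which computes an effect size and pass-rate change between a pre-mentorAI and a post-mentorAI cohort. The pass cutoff and any inputs are placeholders; a credible study would also need matched or randomized cohorts and controls for confounds.

```python
# Sketch of a simple with/without-mentor cohort comparison: effect size on
# exam scores plus the change in pass rate. Inputs and the pass cutoff are
# placeholders; this is not a substitute for a properly controlled study.
from statistics import mean, stdev

def cohens_d(control: list[float], treatment: list[float]) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(control), len(treatment)
    pooled_var = ((n1 - 1) * stdev(control) ** 2 + (n2 - 1) * stdev(treatment) ** 2) / (n1 + n2 - 2)
    return (mean(treatment) - mean(control)) / pooled_var ** 0.5

def pass_rate(scores: list[float], cutoff: float = 60.0) -> float:
    return 100 * sum(1 for s in scores if s >= cutoff) / len(scores)

def compare_cohorts(before: list[float], after: list[float], cutoff: float = 60.0) -> dict:
    return {
        "mean_before": round(mean(before), 1),
        "mean_after": round(mean(after), 1),
        "effect_size_d": round(cohens_d(before, after), 2),
        "pass_rate_change_pts": round(pass_rate(after, cutoff) - pass_rate(before, cutoff), 1),
    }
```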

Conclusion

In conclusion, ibl.ai’s mentorAI platform represents a convergence of sound learning theory and cutting-edge AI technology to tackle enduring challenges in education. Grounded in constructivist, student-centered principles and leveraging concepts like scaffolding and the Zone of Proximal Development, mentorAI provides an environment where learners can actively engage, receive personalized guidance, and build understanding at their own pace. The platform’s design – from instant feedback and adaptive prompts to analytics dashboards – operationalizes these theories at scale, offering a level of individualized support historically only possible with one-on-one human tutoring. Early implementations at institutions such as GWU, Morehouse, and UCSD demonstrate both the versatility of the system and its positive reception by students and faculty. These cases illustrate improved engagement, potential cost savings, and creative new pedagogical approaches (like AI mentors in liberal arts and course-specific AI assistants), highlighting mentorAI’s adaptability to different educational contexts. Critically, mentorAI doesn’t just introduce AI for the sake of novelty; it is paired with a framework for measuring impact – tracking improvements in persistence, performance, and student satisfaction. Preliminary data and testimonials show promising outcomes, and ongoing research will further clarify the gains in learning results attributable to the platform. Of course, effective implementation requires thoughtful integration (aligning AI mentors with curriculum and ethics), and the case studies so far underscore the importance of faculty leadership in this process. When implemented thoughtfully, mentorAI can alleviate instructional burdens, provide equitable 24/7 tutoring access to students, and inform instructors through learning analytics, all while keeping the human teacher in the loop as the guide and curator of the AI mentor’s knowledge and behavior. For UNICEF’s EdTech assessment context, mentorAI exemplifies an AI-enabled learning tool with a theoretical foundation in proven educational practices and a growing body of evidence for its impact on learning. It addresses issues of educational quality and equity by scaling personalized support to potentially any learner with an internet connection, without removing the teacher’s agency or the need for pedagogical oversight. In a world where learners often turn to unsupervised AI tools, mentorAI offers a safe, institution-sanctioned alternative – one that is pedagogically informed, adaptable, and data-rich. The platform’s early successes in diverse institutions indicate its potential for broader adoption and its alignment with the goal of enhancing learning outcomes. Going forward, continuous research and iteration will be key: as more data is collected on what works best, ibl.ai and its partners can refine the AI mentors to be even more effective. But as it stands, mentorAI provides a compelling case study in how theory-driven design combined with practical analytics can produce an EdTech solution that meaningfully improves student learning experiences and outcomes in education.