Pearson: Asking to Learn
Pearson’s analysis of 128,000 student queries to an AI study tool uncovers a surprising share of higher-order questions—evidence that thoughtful AI integration can push learners beyond rote memorization.
What Happens When Students “Ask to Learn”?
Pearson’s paper, “*[Asking to Learn: What student queries to Generative AI reveal about cognitive engagement](https://plc.pearson.com/sites/pearson-corp/files/asking-to-learn.pdf)*,” examines 128,725 queries posed by 8,681 students using the “Explain” feature of an AI-powered eTextbook. Each free-form question offers a window into learners’ thought processes, and the results challenge the assumption that AI tools encourage only surface-level inquiry.
Key Findings at a Glance
- 80% Basic Recall & Conceptual Understanding: Most questions sought definitions or straightforward explanations, as expected for an introductory biology course.
- One in Five Showed Advanced Complexity: About 20% of queries reached Analyze or higher on Bloom’s Taxonomy, indicating genuine higher-order thinking (a classification sketch follows this list).
- Active vs. Passive Learning Signals: The presence of sophisticated questions suggests many students are proactively framing problems rather than passively consuming information.
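The headline numbers come from coding each query against Bloom’s Taxonomy. As a rough illustration of what such coding can look like, the Python sketch below tags queries by question-stem keywords; the level names are standard Bloom’s, but the stems, the precedence order, and the default bucket are assumptions made for illustration, not the paper’s actual method.

```python
# Illustrative only: a keyword-stem heuristic for tagging student queries
# with a Bloom's Taxonomy level. The stems and precedence order are
# assumptions; Pearson's coding procedure is not reproduced here.
from collections import Counter

BLOOM_STEMS = {
    "Remember":   ("what is", "define", "list", "who discovered", "name the"),
    "Understand": ("explain", "describe", "summarize", "what does"),
    "Apply":      ("how do i", "how would", "calculate", "solve"),
    "Analyze":    ("why", "compare", "what causes", "relate"),
    "Evaluate":   ("which is better", "should", "justify", "critique"),
    "Create":     ("design", "propose", "what if", "predict"),
}

def bloom_level(query: str) -> str:
    """Tag a query with the highest Bloom's level whose stems it contains."""
    q = query.lower()
    # Check higher-order levels first so "why ..." is not swallowed by "what is".
    for level in ("Create", "Evaluate", "Analyze", "Apply", "Understand", "Remember"):
        if any(stem in q for stem in BLOOM_STEMS[level]):
            return level
    return "Remember"  # default bucket for bare factual queries

queries = [
    "What is osmosis?",
    "Why does ATP hydrolysis release energy?",
    "Which is better for oxygen transport, hemoglobin or myoglobin?",
]
print(Counter(bloom_level(q) for q in queries))
# Counter({'Remember': 1, 'Analyze': 1, 'Evaluate': 1})
```

At the scale of the study, a heuristic like this would likely be replaced by an LLM or trained classifier, but the output, a distribution of queries over Bloom’s levels, is the same kind of evidence the paper reports.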
From Insights to Innovation: The “Go Deeper” Feature
Armed with these findings, Pearson developed “Go Deeper,” a new capability that recommends follow-up prompts designed to nudge students toward analysis, synthesis, and evaluation. By scaffolding curiosity, the tool aims to shift more queries into higher cognitive territory.
Implications for Educators and EdTech Designers
1. AI Can Foster Depth, Not Just Speed: When thoughtfully integrated, generative AI can encourage students to explore relationships, critiques, and applications, not merely memorize facts.
2. Training students to craft better questions may be as important as teaching content itself.
3. Continuous analysis of real-world usage fuels data-informed improvements; that is exactly how Pearson arrived at Go Deeper.
4. Platforms should embed structures that push learners up Bloom’s ladder, much like mentor systems (e.g., [ibl.ai’s AI Mentor](https://ibl.ai/product/mentor-ai-higher-ed)) that guide users from recall to reasoning, as sketched below.
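To make that scaffolding pattern concrete, here is a minimal sketch of a Go Deeper-style recommender that nudges a learner one rung up Bloom’s ladder. Pearson has not published the feature’s internals, so the ladder ordering follows standard Bloom’s, while the `go_deeper` function and its prompt templates are illustrative assumptions.

```python
# Illustrative only: suggest a follow-up prompt one Bloom's rung above the
# level of the student's current query. Templates are assumptions, not
# Pearson's actual Go Deeper prompts.
LADDER = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]

FOLLOW_UPS = {
    "Understand": "Explain in your own words how {topic} works.",
    "Apply":      "Where might {topic} show up in a real-world scenario?",
    "Analyze":    "How does {topic} relate to a concept from an earlier chapter?",
    "Evaluate":   "What are the strengths and limits of {topic} as an explanation?",
    "Create":     "Predict what would change if the conditions behind {topic} were different.",
}

def go_deeper(topic: str, current_level: str) -> str:
    """Suggest a follow-up prompt one Bloom's rung above the current level."""
    idx = LADDER.index(current_level)
    next_level = LADDER[min(idx + 1, len(LADDER) - 1)]  # clamp at "Create"
    return FOLLOW_UPS[next_level].format(topic=topic)

# A basic-recall query about osmosis gets nudged toward Understand:
print(go_deeper("osmosis", "Remember"))
# Explain in your own words how osmosis works.
```

In a production system the follow-ups would more plausibly be generated by a language model conditioned on the textbook passage; fixed templates are used here only to keep the sketch self-contained.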
Takeaways for Institutions
- Monitor Question Quality to gauge engagement and adjust instruction (a minimal aggregation sketch follows this list).
- Embed AI Tools that prompt reflection, not just delivery of answers.
- Support Faculty Development so instructors can leverage AI analytics to tailor lessons.
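As one way to act on the first takeaway, a course team could track the share of higher-order queries over time. The sketch below aggregates hypothetical (date, Bloom’s level) pairs into a weekly summary; the data schema and the higher-order cutoff at Analyze are assumptions, not a description of Pearson’s analytics.

```python
# Illustrative only: roll tagged queries up into a weekly share of
# higher-order questions an instructor could review. Schema and cutoff
# are assumptions.
from collections import defaultdict
from datetime import date

HIGHER_ORDER = {"Analyze", "Evaluate", "Create"}

# (submission date, Bloom's level) pairs, e.g. output of a classifier.
tagged = [
    (date(2025, 2, 3), "Remember"),
    (date(2025, 2, 4), "Analyze"),
    (date(2025, 2, 12), "Evaluate"),
    (date(2025, 2, 13), "Remember"),
]

weekly = defaultdict(lambda: [0, 0])  # ISO week -> [total, higher-order]
for day, level in tagged:
    week = day.isocalendar()[1]
    weekly[week][0] += 1
    weekly[week][1] += level in HIGHER_ORDER  # True counts as 1

for week, (total, deep) in sorted(weekly.items()):
    print(f"week {week}: {deep}/{total} higher-order ({deep / total:.0%})")
# week 6: 1/2 higher-order (50%)
# week 7: 1/2 higher-order (50%)
```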
Final Thoughts
Pearson’s study reinforces a critical insight: AI’s educational value depends on how students interact with it. By designing features that prompt deeper inquiry, EdTech can transform generative models from answer engines into catalysts for critical thinking. As AI continues to permeate classrooms, the real differentiator will be tools that help learners not only ask questions, but ask *better* ones.