University of Cologne: AI Meets the Classroom – When Does ChatGPT Harm Learning?
LLMs can aid coding education when used as personal tutors by explaining concepts, but over-reliance on them for solving exercises—especially via copy-and-paste—can impair actual learning and lead students to overestimate their progress.
Summary of https://arxiv.org/pdf/2409.09047
This paper explores the effects of large language models (LLMs) on student learning in coding classes. Across three studies, the authors find both positive and negative effects on learning outcomes.
Using LLMs as personal tutors by asking for explanations was found to improve learning, while relying on them to solve exercises hindered it.
Copy-and-paste functionality emerged as a key factor shaping how students use LLMs and, in turn, how that usage affects learning. The research also shows that students may overestimate their learning progress when using LLMs.
Finally, results indicated that less skilled students may benefit more from LLMs when learning to code.
Here are five key takeaways on the use of LLMs in learning to code, according to the source:
- LLMs can have both positive and negative effects on learning outcomes. Using LLMs as personal tutors by asking for explanations can improve learning, but relying on them excessively to solve practice exercises can impair learning.
- Copy-and-paste functionality plays a significant role in how LLMs are used. It enables solution-seeking behavior, which can decrease learning.
- Students with less prior domain knowledge may benefit more from LLM access. However, those new to LLMs may be more prone to over-reliance.
- LLMs can increase students’ perceived learning progress, even when controlling for actual progress. This suggests that LLMs may lead to an overestimation of one’s own abilities.
- Whether LLM usage helps or harms learning depends on the balance between relying on LLM-generated solutions and using LLMs as personal tutors; the net effect varies across learners and contexts.