ibl.ai Agentic AI Blog


University of Cologne: AI Meets the Classroom – When Does ChatGPT Harm Learning?

Jeremy Weaver · February 17, 2025

LLMs can aid coding education when used as personal tutors by explaining concepts, but over-reliance on them for solving exercises—especially via copy-and-paste—can impair actual learning and lead students to overestimate their progress.

Summary

This paper explores the effects of large language models (LLMs) on student learning in coding classes. Three studies were conducted to analyze how LLMs impact learning outcomes, revealing both positive and negative effects.

Using LLMs as personal tutors by asking for explanations was found to improve learning, while relying on them to solve exercises hindered it.

Copy-and-paste functionality was identified as a key factor shaping how students use LLMs and, in turn, how LLM use affects learning. The research also shows that students may overestimate their learning progress when using LLMs, highlighting a potential pitfall.

Finally, results indicated that less skilled students may benefit more from LLMs when learning to code.

Here are five key takeaways from the paper on using LLMs when learning to code:

  • LLMs can have both positive and negative effects on learning outcomes. Using LLMs as personal tutors by asking for explanations can improve learning, but relying on them excessively to solve practice exercises can impair learning.
  • Copy-and-paste functionality plays a significant role in how LLMs are used. It enables solution-seeking behavior, which can decrease learning.
  • Students with less prior domain knowledge may benefit more from LLM access. However, those new to LLMs may be more prone to over-reliance.
  • LLMs can increase students’ perceived learning progress, even when controlling for actual progress. This suggests that LLMs may lead to an overestimation of one’s own abilities.
  • The effect of LLM usage on learning hinges on the balance between relying on LLM-generated solutions and using LLMs as personal tutors, and it varies from case to case.
