University College London: How Human-AI Feedback Loops Alter Human Perceptual, Emotional and Social Judgements
This study finds that AI systems can amplify human biases when trained on slightly skewed data. Interactions with biased AI can further increase human bias, particularly when users view AI as more authoritative. However, accurate AI systems have the potential to improve human judgment.
Summary
This research investigates how interactions between humans and AI can create feedback loops that amplify biases. The study reveals that AI algorithms trained on slightly biased human data not only adopt these biases but also magnify them.
When humans then interact with these biased AI systems, their own biases increase, demonstrating a concerning feedback mechanism. The researchers found this effect to be stronger in human-AI interactions than in human-human interactions, and that humans often underestimate the influence of AI on their judgments.
The study demonstrated that using an AI system like Stable Diffusion can increase social bias. Critically, the study shows that accurate AI can improve judgement, while flawed AI amplifies human biases.
Here are five key takeaways from the study on human-AI interaction:
- AI systems can amplify biases present in human data. When AI algorithms are trained on data that contains even slight human biases, the algorithms not only adopt these biases but often amplify them.
- Human interaction with biased AI increases human bias. Repeated interaction with biased AI systems leads humans to internalize and adopt the AI's biases, potentially creating a feedback loop where human judgment becomes increasingly skewed. This effect is stronger in human-AI interactions than in human-human interactions.
- The perception of AI influences its impact. Humans may be more susceptible to bias from AI systems if they perceive the AI as superior or authoritative. The study showed that even when participants were in fact interacting with an AI, those who believed they were interacting with a human adopted less of its bias than those who knew it was an AI.
- Humans underestimate AI's biasing influence. People are often unaware of the extent to which AI systems affect their judgments, which can make them more vulnerable to adopting AI-driven biases.
- Accurate AI improves human judgment. The study also demonstrated that interaction with accurate AI systems can improve human decision-making, suggesting that reducing algorithmic bias has the potential to enhance the quality of human judgment.
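The feedback loop described above can be illustrated with a toy simulation. This is a minimal sketch, not the study's actual experimental design or model: the amplification factor, the number of rounds, and the rate at which humans shift toward the AI are all illustrative assumptions.

```python
import random

random.seed(0)

def human_labels(n, bias):
    # Each stimulus is truly ambiguous (50/50); humans label it "A"
    # with probability 0.5 + bias, encoding a slight skew.
    return ["A" if random.random() < 0.5 + bias else "B" for _ in range(n)]

def train_ai(labels):
    # A caricature of a model trained on skewed labels: it learns the
    # dominant label and exaggerates the skew (amplification factor
    # of 2 is an illustrative assumption, not the study's estimate).
    frac_a = labels.count("A") / len(labels)
    return min(max(0.5 + 2 * (frac_a - 0.5), 0.0), 1.0)

human_bias = 0.03  # slight initial human skew toward "A"
for round_num in range(5):
    labels = human_labels(1000, human_bias)
    ai_prob_a = train_ai(labels)
    # Humans shift their judgements partway toward the AI's tendency
    # (the 0.3 learning rate is likewise an illustrative assumption).
    human_bias += 0.3 * ((ai_prob_a - 0.5) - human_bias)
    print(f"round {round_num}: human bias toward 'A' = {human_bias:.3f}")
```

Because the simulated AI amplifies the skew it was trained on, and the simulated humans then drift toward the AI, the bias compounds across rounds rather than washing out, which is the qualitative pattern the study reports.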