---
title: "University of California Irvine: What Large Language Models Know and What People Think They Know"
slug: "university-of-california-irvine-what-large-language-models-know-and-what-people-think-they-know"
author: "Jeremy Weaver"
date: "2025-02-17 18:46:10"
category: "Premium"
topics: "LLM Communication of Uncertainty, Calibration and Discrimination Gaps, Impact of Explanation Length, Influence of Uncertainty Language, Tailoring Explanations for Trustworthy AI"
summary: "The study reveals that users tend to overestimate large language models' accuracy due to discrepancies between the models' internal confidence and the users' interpretation, with longer explanations and specific uncertainty language boosting user confidence regardless of actual accuracy. Tailoring LLM responses to better reflect internal uncertainty can help bridge this calibration gap, improving trustworthiness in AI-assisted decisions."
banner: ""
thumbnail: ""
---
University of California Irvine: What Large Language Models Know and What People Think They Know
Summary
This study investigates how well large language models (LLMs) communicate their uncertainty to users and how human perception aligns with the LLMs' actual confidence. The research identifies a "calibration gap" where users overestimate LLM accuracy, especially with default explanations.
Longer explanations increase user confidence without improving accuracy, indicating shallow processing. By tailoring explanations to reflect the LLM's internal confidence, the study demonstrates a reduction in both the calibration and discrimination gaps, leading to improved user perception of LLM reliability.
The study underscores the importance of transparent uncertainty communication for trustworthy AI-assisted decision-making, advocating for explanations aligned with model confidence.
In short, the paper measures the gap between what LLMs "know" (their internal confidence) and what people think they know, and tests whether rewriting explanations to match that internal confidence narrows the gap.
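To make the calibration gap concrete, here is a minimal, hypothetical sketch (the function, variable names, and toy numbers are illustrative, not taken from the study). It treats calibration error as the absolute difference between mean stated confidence and actual accuracy, then compares the model's error to the humans' error:

```python
# Illustrative sketch of the "calibration gap": how much further human
# confidence in an LLM's answers sits from actual accuracy than the
# LLM's own internal confidence does. All values below are toy data.

def calibration_error(confidences, correct):
    """Absolute gap between mean stated confidence and actual accuracy."""
    accuracy = sum(correct) / len(correct)
    mean_conf = sum(confidences) / len(confidences)
    return abs(mean_conf - accuracy)

# Per-question confidence (0-1) and correctness (1 = correct answer).
model_conf = [0.92, 0.85, 0.78, 0.95]   # LLM's internal confidence
human_conf = [0.98, 0.95, 0.90, 0.99]   # humans' confidence in the LLM
correct    = [1, 1, 0, 1]               # actual accuracy = 0.75

model_error = calibration_error(model_conf, correct)   # 0.125
human_error = calibration_error(human_conf, correct)   # 0.205
calibration_gap = human_error - model_error            # positive: humans overestimate
```

A positive `calibration_gap` reflects the study's core finding: people's confidence in the model exceeds the model's own (better-calibrated) confidence, and explanation style can shrink that difference.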
Here are 5 key takeaways: