The Impact of LLM-Generated Disaster Explanations on the Perceived Reliability and Clarity of Crisis News.
Issue Date
2025-07-14
Language
en
Abstract
In times of crisis, clear and trustworthy communication is crucial. With the rise of Large Language Models (LLMs) like ChatGPT, it is increasingly relevant to assess whether AI-generated explanations (rationales) for crisis-related news are perceived as reliable and clear. This study compared the perceived reliability and clarity of human- versus LLM-generated rationales for crisis tweets.
A total of 114 participants were randomly assigned to read either a human- or an LLM-generated rationale. They rated the clarity and reliability of the rationale they read. An independent samples t-test revealed no significant differences in perceived reliability or clarity between the two conditions. Human-generated rationales (M = 13.7) and LLM-generated rationales (M = 14.1) received similar reliability ratings, as did clarity scores (human: M = 9.9; LLM: M = 9.6).
A significant positive correlation (r = .63, p < .001) suggests that rationales perceived as more reliable were also perceived as clearer.
Faculty
Faculteit der Letteren
