The Role of Prior AI Experience in Evaluating Human vs. LLM-Generated Rationales for Disaster Tweet Classification.

Issue Date

2025-06-20

Language

en

Abstract

The rapid growth of large language models (LLMs) has led many industries to adopt LLMs in the workplace. Rationales, the reasoning given for AI-generated decisions, play a critical role in building trust and transparency in human-AI collaboration. The current study examines how an individual's prior experience with AI influences the perceived reliability and consistency of rationales in a disaster-related context. 114 participants completed an online survey via Qualtrics in which they were presented with tweets, classifications, and rationales. No significant interaction effect between prior AI experience and rationale type was found on perceived reliability or consistency, suggesting that people with different levels of prior AI experience do not evaluate the reliability and consistency of rationales differently, regardless of whether a human or an LLM generated them. An unexpected finding was that people with high prior AI experience evaluated the consistency of the rationales more positively than those with low prior AI experience.

Faculty

Faculteit der Letteren