The role of explainability and trust in the adoption of AI-generated financial forecasts.

Issue Date

2025-07-07

Language

en

Abstract

This study investigates how explainability and trust influence financial professionals' acceptance of artificial intelligence (AI)-generated financial forecasts. Drawing on survey data from 44 financial practitioners, it extends the Technology Acceptance Model (TAM) with explainability and trust as core constructs and evaluates a mediation model. Structural equation modelling and regression analysis were used to examine the relationships between perceived explainability, trust, perceived usefulness (PU), perceived ease of use (PEOU), and intention to adopt. The results underline trust as a fundamental predictor of adoption, although the direct effects of explainability on trust and adoption were not statistically significant. Sensitivity analyses nevertheless indicated the practical relevance of explainability's links to trust and adoption. PU remained the strongest driver of adoption, whereas PEOU showed an unexpected negative association, suggesting that professionals may prioritise dependability over simplicity. The findings imply that transparent, context-sensitive AI systems must actively foster trust. This study contributes to the literature on AI adoption in the financial sector and offers practical guidance for organisations seeking to deploy explainable AI solutions that comply with professional standards and accountability requirements.
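The mediation structure the abstract describes (explainability influencing adoption intention via trust, alongside the TAM constructs PU and PEOU) can be sketched with ordinary regressions. The following is a minimal illustration on synthetic data; the variable names, effect sizes, and sample size are invented for demonstration and are not the study's actual data or results.

```python
# Hypothetical sketch of the mediation chain: explainability -> trust -> adoption.
# All data below is synthetic; coefficients are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200  # illustrative sample size (the study itself surveyed 44 practitioners)

expl = rng.normal(0, 1, n)                            # perceived explainability
trust = 0.5 * expl + rng.normal(0, 1, n)              # trust, partly driven by explainability
pu = rng.normal(0, 1, n)                              # perceived usefulness
peou = rng.normal(0, 1, n)                            # perceived ease of use
adopt = 0.6 * trust + 0.4 * pu + rng.normal(0, 1, n)  # intention to adopt

df = pd.DataFrame({"expl": expl, "trust": trust, "pu": pu,
                   "peou": peou, "adopt": adopt})

# Path a of the mediation chain: does explainability predict trust?
m_trust = smf.ols("trust ~ expl", data=df).fit()

# Path b: does trust predict adoption, controlling for the other TAM constructs?
m_adopt = smf.ols("adopt ~ expl + trust + pu + peou", data=df).fit()

print(m_trust.params["expl"])   # estimated explainability -> trust path
print(m_adopt.params["trust"])  # estimated trust -> adoption path
```

In a full mediation test one would also bootstrap the indirect effect (the product of the two paths); a structural equation modelling package would estimate both paths simultaneously rather than in separate regressions.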

Faculty

Faculteit der Managementwetenschappen