Predicting Speech: How Semantic Context and Visual Cues Modulate Audiovisual Speech Processing

Issue Date

2014-08-01

Language

en

Abstract

Spoken language communication usually happens face-to-face. Both the content of what a speaker has already said and her visible mouth movements (visemes) can help us predict which word we will hear next, because both of these cues precede the acoustic onset of the upcoming word. However, it is not clear whether and how these two types of predictions interact when we perceive speech audiovisually. We orthogonally manipulated contextual constraint and viseme saliency to investigate whether a previously found auditory facilitation effect caused by salient visemes would be modulated by semantic context. Our hypotheses were that a strong semantic context prediction would either add to or dominate the viseme prediction. Results are most consistent with the latter, indicating that sentence context and viseme predictions operate on different levels in the predictive processing hierarchy. We conclude that salient visemes facilitate early auditory speech processing only when there is a high amount of uncertainty about the upcoming word.

Faculty

Faculteit der Sociale Wetenschappen