Gesture-speech coupling: A role of iconic gestures in predicting semantic content during sentence comprehension?

Issue Date
2020-07-01
Language
en
Abstract
During natural conversations (i.e., face-to-face dialog), speakers convey information through visual and auditory signals, such as speech and manual gestures (e.g., McNeill, 1992). Semantic information provided by iconic gestures is tightly linked with that in speech. Gestures, however, often temporally precede their corresponding speech units to varying degrees (from 6 ms up to 4300 ms). On this basis, it has been argued that gestures may be used as cues during prediction processes (e.g., Schegloff, 1984; Ferré, 2010), which are thought to be a fundamental principle of natural language processing (Clark, 2013). The proposed study aims to investigate the role of gestures in predicting semantic content during sentence comprehension. To answer this question, EEG data will be recorded from 80 Dutch speakers while they watch videos of an actress speaking and gesturing. The stimuli will comprise a target word (an object noun) at the end of the discourse that is, depending on the preceding sentence context, either predictable or non-predictable, as well as iconic gestures presented such that the gesture stroke starts either 520 ms ("early" condition) or 130 ms ("late" condition) before target word onset. To test for differences in EEG activity in response to the early and late gesture presentation, cluster-based permutation analyses will be used. We hypothesize that gestures are especially relevant in semantically predictive discourses, where they might further facilitate lexico-semantic processing of the target noun. Specifically, we expect this facilitation to be more efficient (i.e., highly automatic) when gestures precede target words by a long interval compared to when they only slightly precede them.
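
Note: As a rough illustration of the planned statistics, the sketch below shows how a cluster-based permutation test contrasting the early and late gesture conditions could be set up in MNE-Python. It is not the registered analysis pipeline; the placeholder data, array names, channel choice, and parameter values are assumptions made purely for illustration.

    # Sketch of a cluster-based permutation test on the within-subject
    # difference between "early" and "late" gesture conditions (MNE-Python).
    # Array names and dimensions are hypothetical.
    import numpy as np
    from mne.stats import permutation_cluster_1samp_test

    n_subjects, n_times = 80, 500          # hypothetical: 80 subjects, 500 samples
    rng = np.random.default_rng(0)
    erp_early = rng.normal(size=(n_subjects, n_times))  # placeholder ERP time courses
    erp_late = rng.normal(size=(n_subjects, n_times))   # e.g., one centro-parietal channel

    # Paired contrast: test whether the early-minus-late difference deviates
    # from zero anywhere in the epoch; sign-flip permutations build the null
    # distribution of the maximal cluster-level statistic.
    diff = erp_early - erp_late
    t_obs, clusters, cluster_p_values, h0 = permutation_cluster_1samp_test(
        diff,
        n_permutations=1000,   # number of sign-flip permutations
        tail=0,                # two-tailed test
        out_type="indices",    # clusters returned as index arrays
        seed=0,
    )

    for cluster, p_val in zip(clusters, cluster_p_values):
        if p_val < 0.05:       # clusters surviving correction at alpha = .05
            print(f"cluster time indices {cluster[0]}, corrected p = {p_val:.3f}")

A paired one-sample test on the condition difference is sketched here because the early/late manipulation is within subjects; an independent-samples permutation_cluster_test would fit a between-subjects design instead.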
Faculty
Faculteit der Letteren