Gaze allocation during gestural enhancement of degraded speech in native and non-native listeners

Issue Date
2018-08-28
Language
en
Abstract
In natural communication, humans focus mostly on the speaker’s face and devote very little visual attention to gestures, even though gestures play a significant role in language comprehension. To date, explicit attention to gestures has only been studied under optimal listening conditions, and only with native speakers of a language. We investigated how native and non-native listeners allocate attention to speech-relevant visual information (visible speech and iconic gestures) when speech is clear and when it is degraded. We recorded participants’ eye movements while they watched videos of an actress who uttered a high-frequency Dutch verb (e.g., rijden, ‘to drive’) in either clear speech or 6-band noise-vocoded speech (a moderate degradation of the speech signal), and who either did or did not perform the accompanying iconic gesture. We found that the face was the locus of sustained attention in both clear and degraded speech, suggesting that gestures, when present, attracted little overt attention from either native or non-native listeners. While both listener groups gazed at gestures more in clear than in degraded speech, non-native listeners overall gazed at gestures significantly more than native listeners did. Taken together, the results suggest that explicit attention to the face is functional for extracting information from the visible speech articulators, whereas gestural information can be extracted peripherally. Yet the fact that gestures attract more visual attention from non-native listeners than would normally be expected may suggest that non-native listeners treat the visual semantic information conveyed by gestures as an especially rich source of information that can aid language comprehension under both good and suboptimal listening conditions.
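
For readers unfamiliar with the degradation method, below is a minimal Python sketch of 6-band noise vocoding: the signal is split into six frequency bands, the slow amplitude envelope of each band is extracted, and those envelopes modulate band-limited noise, which is then summed. The band edges, filter order, and envelope cutoff are illustrative assumptions (the abstract does not state the study’s exact synthesis parameters), and the function name noise_vocode is hypothetical.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=6, f_lo=50.0, f_hi=8000.0, env_cutoff=30.0):
    """Noise-vocode `signal`: keep each band's amplitude envelope but
    replace its fine spectral detail with band-limited noise.
    Assumes fs > 2 * f_hi (e.g., 44.1 kHz audio)."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(signal))
    env_sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)               # band-limited speech
        env = sosfiltfilt(env_sos, np.abs(hilbert(band)))  # smoothed envelope
        carrier = sosfiltfilt(band_sos, noise)             # noise in the same band
        out += np.clip(env, 0.0, None) * carrier           # envelope-modulated noise
    # Roughly match the loudness of the original signal.
    return out * np.sqrt(np.mean(signal**2) / (np.mean(out**2) + 1e-12))

With six bands, the output preserves the temporal envelope cues that keep speech partly intelligible while discarding fine spectral structure, which is why the abstract describes this condition as a moderate degradation of the speech signal.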
Faculty
Faculteit der Letteren