Testing the prerequisites for hand gestures playing a potential predictive role in communication: A corpus study of speech-gesture timing in English conversation
Natural, face-to-face communication often involves rapid turn-taking, with an average transition time of about 200 ms between turns. This is remarkably fast considering the roughly 600 ms speakers need to plan speech production. Language processing in conversation must therefore be both fast and predictive to resolve this timing constraint. For this to be possible, interlocutors must rely on both verbal and visual signals to predict the message and plan the upcoming turn. This has led to the assumption that gestures should be produced in anticipation of speech to facilitate predictive language processing. This study set out to 1) investigate speech-gesture asynchrony for both representational and non-representational gestures, and 2) test the role of speech-gesture timing in predictive language processing. Based on 10 dyadic conversations from an English corpus, manual gestures associated with question-response (QR) sequences were annotated, along with the verbal units closest in meaning to these gestures. The time gap between gesture onset and speech onset was calculated to examine the anticipatory effect of gestures. Next, the relationship between speech-gesture asynchrony and response time in QR pairs with gestures was tested, to provide evidence for the potential role of preceding gestures in predictive language processing. The findings revealed that representational gestures and their strokes started before their lexical affiliates, whereas non-representational gestures followed their corresponding speech. Furthermore, no predictive effect of speech-gesture asynchrony on response time in QR sequences was detected. These results thus provide further evidence for the timing relationship between gestures and their corresponding speech.
However, further studies are needed to verify speech-gesture asynchrony in both representational and non-representational gestures, as well as the link between speech-gesture asynchrony and language processing time.

Keywords: multimodal communication, predictive language processing, co-speech gesture, gesture-speech asynchrony
Faculteit der Letteren