Human face-to-face communication entails the rapid integration of auditory and visual signals. In this thesis, we examined whether such a multimodal context offers a benefit during language processing and whether empathy influences this process. In a shadowing task, participants were required to repeat (shadow) speech as they heard it (i.e., the shadower began repeating a fragment before having heard all of it). The stimuli consisted of natural dyadic conversations and were presented in three conditions: audio only (AO), audiovisual with a blurred mouth (AB), and audiovisual (AV). Alongside the shadowing experiment, participants were asked to fill out the Empathy Quotient questionnaire. Results showed that participants made fewer errors in the shadowing task as more visual context was present, implying that a multimodal advantage for language processing can indeed be found.
Faculteit der Letteren