Predicting Speech: How Semantic Context and Visual Cues Modulate Audiovisual Speech Processing

dc.contributor.advisor: McQueen, James
dc.contributor.advisor: de Lange, Floris
dc.contributor.author: Solberg Økland, Heidi
dc.date.issued: 2014-08-01
dc.description.abstract: Spoken language communication usually happens face-to-face. Both the content of what a speaker has already said and her visible mouth movements (visemes) can help us predict which word we will hear next, because both of these cues precede the acoustic onset of the upcoming word. However, it is not clear whether and how these two types of predictions interact when we perceive speech audiovisually. We orthogonally manipulated contextual constraint and viseme saliency to investigate whether a previously found auditory facilitation effect caused by salient visemes would be modulated by semantic context. Our hypotheses were that a strong semantic context prediction would either add to or dominate the viseme prediction. Results are most consistent with the latter, indicating that sentence context and viseme predictions operate on different levels in the predictive processing hierarchy. We conclude that salient visemes facilitate early auditory speech processing only when there is a high amount of uncertainty about the upcoming word.
dc.embargo.lift: 2039-08-01
dc.identifier.uri: http://theses.ubn.ru.nl/handle/123456789/5193
dc.language.iso: en
dc.thesis.faculty: Faculteit der Sociale Wetenschappen
dc.thesis.specialisation: Researchmaster Cognitive Neuroscience
dc.thesis.studyprogramme: Researchmaster Cognitive Neuroscience
dc.thesis.type: Researchmaster
dc.title: Predicting Speech: How Semantic Context and Visual Cues Modulate Audiovisual Speech Processing
Files
Original bundle
Name: Solberg Okland, H 12 TH.pdf
Size: 1.46 MB
Format: Adobe Portable Document Format
Description: thesis text