Predicting Speech: How Semantic Context and Visual Cues Modulate Audiovisual Speech Processing
dc.contributor.advisor | McQueen, James | |
dc.contributor.advisor | Lange, Floris de | |
dc.contributor.author | Solberg Økland, Heidi | |
dc.date.issued | 2014-08-01 | |
dc.description.abstract | Spoken language communication usually happens face-to-face. Both the content of what a speaker has already said and her visible mouth movements (visemes) can help us predict which word we will hear next, because both of these cues precede the acoustic onset of the upcoming word. However, it is not clear whether and how these two types of predictions interact when we perceive speech audiovisually. We orthogonally manipulated contextual constraint and viseme saliency to investigate whether a previously found auditory facilitation effect caused by salient visemes would be modulated by semantic context. Our hypotheses were that a strong semantic context prediction would either add to or dominate the viseme prediction. Results are most consistent with the latter, indicating that sentence context and viseme predictions operate on different levels in the predictive processing hierarchy. We conclude that salient visemes facilitate early auditory speech processing only when there is a high amount of uncertainty about the upcoming word. | en_US |
dc.embargo.lift | 2039-08-01 | |
dc.identifier.uri | http://theses.ubn.ru.nl/handle/123456789/5193 | |
dc.language.iso | en | en_US |
dc.thesis.faculty | Faculteit der Sociale Wetenschappen | en_US |
dc.thesis.specialisation | Researchmaster Cognitive Neuroscience | en_US |
dc.thesis.studyprogramme | Researchmaster Cognitive Neuroscience | en_US |
dc.thesis.type | Researchmaster | en_US |
dc.title | Predicting Speech: How Semantic Context and Visual Cues Modulate Audiovisual Speech Processing | en_US |
Files
Original bundle
- Name: Solberg Okland, H 12 TH.pdf
- Size: 1.46 MB
- Format: Adobe Portable Document Format
- Description: thesis text (scriptietekst)