Listeners Normalise to Speaker Rate Dynamics Irrespective of Selective Attention
Temporal characteristics are fundamental to speech perception, and this perception is often relative to contextual information rather than absolute. For example, the surrounding speech context contrastively modulates the perception of subsequent words, a phenomenon known as the context effect. This normalisation to the temporal properties of contextual speech appears to be supported by the entrainment of neural oscillations to fast vs. slow syllabic rhythms. Moreover, we often find ourselves surrounded by multiple speakers, requiring that we “tune in” to relevant speech while inhibiting attention to distracting speakers. Do listeners, then, normalise words only to the speech rate of an attended talker, or does an unattended speech rate also influence speech perception? Further, is there a relationship between successful attention and the modulation of these context effects? In a magnetoencephalography study, participants were instructed to attend to one of two dichotically presented, rate-matched or rate-mismatched sentences, and then categorised ambiguous target words. As a neural signature of success in selective auditory attention, we computed an alpha (~10 Hz) power lateralisation index. Context effects were found following matching rates but not mismatching rates, suggesting that rate normalisation takes the global listening environment into account. Further, our findings support previous research interpreting alpha lateralisation as a neural index of attentional demands rather than of successful attention. These findings contribute to our understanding of the properties of speech that precede attentional stream segregation, and could in turn inform the development of modern hearing aids that already attempt to exploit electrophysiological research methods.
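The abstract does not spell out how the alpha power lateralisation index was computed. A common convention in the selective-attention literature is a normalised difference of alpha-band (~8–12 Hz) power between the two hemispheres. The sketch below illustrates that general idea on synthetic data; the exact band limits, sign convention, and use of Welch's method are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import welch


def alpha_power(signal, fs):
    # Welch power spectral density; mean power in an assumed 8-12 Hz alpha band
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()


def alpha_lateralisation_index(left, right, fs):
    # Normalised hemispheric difference: (right - left) / (right + left).
    # Bounded in (-1, 1); the sign convention here is an assumption.
    p_left = alpha_power(left, fs)
    p_right = alpha_power(right, fs)
    return (p_right - p_left) / (p_right + p_left)


# Synthetic demo: a stronger 10 Hz oscillation over the "right" sensors
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
left = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.1, t.size)
right = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.1, t.size)
print(alpha_lateralisation_index(left, right, fs))
```

With the stronger right-hemisphere alpha in the demo, the index comes out positive; in attention studies the index is typically compared across attend-left vs. attend-right conditions rather than interpreted in isolation.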
Faculteit der Sociale Wetenschappen (Faculty of Social Sciences)