Brainwaves synchronize to the speed of talking, influencing the way we hear words


Have you ever found yourself finishing someone else's sentences, even though you don't really know them that well? Fortunately, the ability to predict what someone is going to say next isn't the preserve of turtle doves or those in long-term relationships. Our brain processes all kinds of information to estimate what's going to come next, and the speed at which the speaker is talking, or speech rate, plays an important role.

A new study, published in the journal Current Biology, delved deeper to find out what happens at the neural level. "The findings show neural dynamics predict the timing of future items based on past speech rate, and this influences how ongoing words are heard," says Anne Kösem of the Max Planck Institute for Psycholinguistics (MPI) and the Lyon Neuroscience Research Center, first author of the research paper.

Speech rhythms and perception

"We asked native Dutch participants to listen to Dutch sentences that suddenly changed in speech rates: The beginning of the was either compressed or expanded in duration, leading to a fast or a slow speech rate, while the final three words were consistently presented at the original recorded speech rate," says Kösem.

The final word of the sentence contained an ambiguous vowel, which could be interpreted, for example, as either a short "a" or a long "aa" vowel. Crucially, the speed of the beginning of the sentence could influence the way this ambiguous vowel is heard, leading to the perception of words with radically different meanings. For example, in Dutch, the ambiguous word is more likely to be perceived as a long "aa" word (e.g. taak, Dutch for "task") when the speaker was initially talking quickly, and as a short "a" word (tak, Dutch for "branch") when the speaker was initially talking slowly.
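To make this rate-normalization idea concrete, here is a minimal Python sketch (not the authors' stimuli or model; the durations and the decision threshold are invented for illustration) in which the very same ambiguous vowel is classified as long or short depending on how fast the preceding context was:

    # Toy illustration of speech-rate normalization; all numbers are hypothetical.
    def perceived_vowel(ambiguous_vowel_ms: float, context_syllable_ms: float) -> str:
        """Judge an ambiguous Dutch a/aa vowel relative to the context speech rate."""
        relative_duration = ambiguous_vowel_ms / context_syllable_ms
        # Arbitrary decision boundary, chosen only for this example.
        if relative_duration > 0.75:
            return "long 'aa' -> taak ('task')"
        return "short 'a' -> tak ('branch')"

    vowel_ms = 120.0                       # identical acoustic token in both sentences
    print(perceived_vowel(vowel_ms, 130))  # fast context: long 'aa' -> taak ('task')
    print(perceived_vowel(vowel_ms, 220))  # slow context: short 'a' -> tak ('branch')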

Participants reported how they perceived the last word of the sentence. The team recorded participants' brain activity with magnetoencephalography (MEG) while they listened to the sentences, and investigated whether neural activity synchronized to the initial speech rate and whether that influenced how participants comprehended the last word.
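As a rough illustration of what "synchronized to the speech rate" can mean in such an analysis (a simulated toy example, not the study's MEG pipeline), one can compute the spectral coherence between a speech amplitude envelope and a brain signal and look for a peak at the syllable rate:

    # Simulated sketch: coherence between a speech envelope and a "neural" signal.
    # Not the study's MEG analysis; signals and parameters are made up.
    import numpy as np
    from scipy.signal import coherence

    fs = 200.0                     # sampling rate in Hz
    t = np.arange(0, 20, 1 / fs)   # 20 s of signal
    syllable_rate = 4.0            # roughly four syllables per second

    rng = np.random.default_rng(0)
    # Fake speech envelope: a slow rhythm at the syllable rate plus noise.
    envelope = np.sin(2 * np.pi * syllable_rate * t) + 0.5 * rng.standard_normal(t.size)
    # Fake neural signal: partly follows that rhythm, partly unrelated activity.
    neural = 0.6 * np.sin(2 * np.pi * syllable_rate * t + 0.3) + rng.standard_normal(t.size)

    f, cxy = coherence(envelope, neural, fs=fs, nperseg=1024)
    print(f"Coherence peaks near {f[np.argmax(cxy)]:.1f} Hz (expected ~{syllable_rate} Hz)")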

Just like riding a bike

The study showed that our brain keeps following past speech rhythms after a change in speech rate. If it synchronizes to the preceding slow speech rate, we are more likely to hear the final ambiguous word with a short vowel; if it synchronizes to the preceding fast speech rate, we are more likely to hear it with a long vowel. "Our findings suggest that the neural tracking of speech dynamics is a predictive mechanism, which directly influences perception," adds Kösem.

"Imagine the brain acting like a bicycle wheel. The wheel turns at the speed imposed by pedaling, but it continues rolling for some time after pedaling has stopped because it is dependent on the past pedaling speed." This sustained synchronisation between brainwaves and helps us predict the length of future syllables, ultimately causally influencing the way we process and hear words.

Fundamental research with future potential

The team believes that, in the future, these findings may help researchers improve speech perception in adverse listening conditions and for the hearing impaired. Kösem adds: "One follow-up study currently being performed tests if word perception can be modulated by directly modifying brain oscillatory activity with transcranial alternating current stimulation."

More information: Kösem, A., Bosker, H. R., Takashima, A., Meyer, A. S., Jensen, O., & Hagoort, P. (2018). Neural entrainment determines the words we hear. Current Biology.

Journal information: Current Biology
Provided by Max Planck Society
