According to a new study, researchers have identified a region of the brain that is sensitive to the timing of speech, a region that plays an important role in human language.
To understand what others are saying, the brain needs to interpret signals at several different timescales, so timing plays an important part in human speech.
Speech is made up of units of different durations. Phonemes, the shortest units of speech, last between 30 and 60 milliseconds; syllables last between 200 and 300 milliseconds; and whole words are longer still.
To process all of this information, the auditory system samples it in chunks roughly equivalent in length to an average consonant or syllable.
For the study, the researchers mimicked this process by cutting recordings of foreign speech into chunks ranging from 30 to 960 milliseconds in length. They then reassembled the chunks using a new algorithm to create what they call "speech quilts."
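The article does not spell out the reassembly algorithm, but the core idea of quilting can be sketched: chop a recording into fixed-length chunks and splice them back together in a new order. Below is a minimal Python sketch under that assumption; the function name make_speech_quilt and the placeholder audio are invented for illustration, and a real implementation would also need to smooth the seams between chunks, which this sketch ignores.

```python
import numpy as np

def make_speech_quilt(signal, sample_rate, chunk_ms, seed=0):
    """Cut a 1-D audio signal into chunk_ms-long segments and splice
    them back together in random order (a crude 'speech quilt')."""
    chunk_len = int(sample_rate * chunk_ms / 1000)   # samples per chunk
    n_chunks = len(signal) // chunk_len              # drop the ragged tail
    chunks = [signal[i * chunk_len:(i + 1) * chunk_len]
              for i in range(n_chunks)]
    order = np.random.default_rng(seed).permutation(n_chunks)
    return np.concatenate([chunks[i] for i in order])

# A 30 ms quilt scrambles speech well below the syllable scale, while a
# 960 ms quilt leaves much more of the original structure intact.
five_seconds = np.random.randn(44100 * 5)  # stand-in for 5 s of audio at 44.1 kHz
quilt_30 = make_speech_quilt(five_seconds, 44100, 30)
quilt_960 = make_speech_quilt(five_seconds, 44100, 960)
```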
The researchers found that the shorter a quilt's chunks, the greater the disruption to the speech's original structure.
The researchers then played the speech quilts to participants while scanning their brains. They found that a region called the superior temporal sulcus (STS) became highly active when the 480- and 960-millisecond quilts were played, but not during the 30-millisecond quilts.
Tobias Overath, an assistant research professor of psychology and neuroscience at Duke, said, “That was pretty exciting. We knew we were onto something.”
This is the first time the STS, which integrates auditory and other sensory information, has been shown to respond to the timing of speech.
To back up their findings, the researchers also tested control sounds that mimicked speech. The control stimuli were likewise arranged into quilts and played to participants, and the brain region did not respond to the control quilts.
Overath said, “We really went to great lengths to be certain that the effect we were seeing in STS was due to speech-specific processing and not due to some other explanation, for example, pitch in the sound or it being a natural sound as opposed to some computer-generated sound.”