Is There a Musical Method for Interpreting Speech?
Researchers evaluated whether musicians had an advantage in understanding and reciting degraded speech as compared to nonmusicians.
WASHINGTON, D.C., December 7, 2017 -- Cochlear implants are a common treatment for sensorineural hearing loss in individuals with damage to the brain, inner ear, or auditory nerves. The implanted device uses an electrode array, inserted into the cochlea, to stimulate auditory nerve fibers directly. However, the speech heard through a cochlear implant is often spectrally degraded and can be difficult to understand. Vocoded speech, or distorted speech that imitates voice transduction by a cochlear implant, is used throughout acoustic and auditory research to explore speech comprehension under various conditions.
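For readers curious what "vocoding" involves, the general technique can be sketched in a few lines. This is a minimal noise-vocoder, a common simulation of cochlear-implant hearing, and an illustration of the general method rather than the researchers' actual stimulus pipeline; the channel count and band edges here are assumptions chosen for the example:

```python
# Minimal noise-vocoder sketch (illustrative, not the study's actual code):
# split the signal into frequency bands, extract each band's amplitude
# envelope, and use those envelopes to modulate band-limited noise.
# The result keeps speech rhythm but discards fine spectral detail.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=7000.0):
    """Return a noise-vocoded version of a 1-D signal at sample rate fs."""
    # Logarithmically spaced band edges across the speech range.
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, signal)         # analysis band of the input
        envelope = np.abs(hilbert(band))      # slow amplitude envelope
        noise = rng.standard_normal(len(signal))
        carrier = filtfilt(b, a, noise)       # band-limited noise carrier
        out += envelope * carrier             # envelope-modulated noise
    return out
```

With only a handful of channels, the output preserves the temporal patterns of the original signal while smearing its spectral content, which is why rhythm, rather than pitch, carries much of the remaining intelligibility.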
Researchers Kieran E. Laursen, Sara L. Prostko and Terry L. Gottfried from Lawrence University, along with collaborators Iain C. Williams and Tahnee Marquardt from the University of North Carolina at Wilmington and the University of Oxford, respectively, will present their work on the effect of musical experience on the ability to understand vocoded speech at the 174th Meeting of the Acoustical Society of America, being held Dec. 4-8, 2017, in New Orleans, Louisiana.
Musical ability, reflected in a person’s aptitude for playing an instrument, interpreting sound patterns, or recognizing different tones, has long been linked to higher cognitive capacity and better communication skills.
“We are testing to see if someone’s musicality or levels of musical experience affects their perceptions of vocoded speech,” Laursen said in an email. “So, the question lies in how does music affect one’s abilities to hear different pitches, intonations, and rhythms within distorted speech.”
“The acoustic information in vocoded speech is quite different from that of natural speech in the presence of noise,” said Gottfried. The rhythmic patterns of natural speech are often maintained in vocoded speech, so musicians may have the upper hand at interpretation due to their experience with rhythm production. However, musicians may also fare similarly to nonmusicians because of the information lost during vocoding.
Gottfried has been researching speech perception and its relation to music since he was in graduate school. “Over the years, I’ve continued my studies of this relation between speech and music perception, and there’s been considerable recent research that suggests musical experience is related not only to improved second language speech perception, but also to improved phonetic perception in one’s first language and in better recognition of speech in noise,” he said, referring to a study of nonnative listeners’ perception of Mandarin tones.
Using a commercially available program called SuperLab, research participants (both musicians and nonmusicians) were asked to transcribe vocoded sentences and words. They were then assigned to a training method on either vocoded or natural speech and asked to again transcribe vocoded sentences. The initial results showed that musicians had no significant advantage over nonmusicians in interpreting vocoded speech patterns, but this may be due to limited sample variation.
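Transcription tasks like this one are typically scored as the proportion of target words a listener reports correctly. The helper below is a hypothetical sketch of that kind of scoring, not the researchers' actual analysis code; its name and normalization choices are assumptions for illustration:

```python
# Hypothetical transcription-scoring sketch (not the study's actual analysis):
# score a response as the fraction of target words reported correctly,
# ignoring case and punctuation.
import string

def words_correct(target: str, response: str) -> float:
    """Fraction of words in `target` that appear in `response`."""
    def clean(s):
        # Lowercase, strip punctuation, and split into words.
        return s.lower().translate(str.maketrans("", "", string.punctuation)).split()
    target_words = clean(target)
    response_words = set(clean(response))
    if not target_words:
        return 0.0
    hits = sum(1 for w in target_words if w in response_words)
    return hits / len(target_words)

# e.g. words_correct("the boat sailed away", "a boat sailed") -> 0.5
```

Averaging such per-sentence scores before and after training would give a simple measure of learning, and comparing those averages between musicians and nonmusicians is the kind of contrast the study describes.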
“Both groups scored well above chance on the Musical Ear Test, so it’s possible that, if we tested listeners with very poor musical ears, they would also not do so well on the vocoded speech,” Gottfried said. He also noted that the results are still useful in assessing the extent to which musical experience may relate to the perception of degraded speech.
The applications of this research extend beyond vocoded speech to a variety of acoustical interpretation tasks. Understanding normal speech in a noisy environment depends on interpreting rhythmic patterns and is acoustically similar to understanding vocoded speech. If musical experience improves comprehension of vocoded speech, it may also aid day-to-day speech interpretation in noisy environments.
Abstract: 4pSC10: “Effect of musical experience on learning to understand vocoded speech,” by Kieran E. Laursen, Iain C. Williams, Tahnee Marquardt, Sara L. Prostko and Terry L. Gottfried, is at 1:00-4:00 p.m. CST, Thursday, Dec. 7, 2017, in Studios Foyer in the New Orleans Marriott. https://asa2017fall.abstractcentral.com/s/u/OBlFa5aSFG8
-----------------------MORE MEETING INFORMATION-----------------------
Main meeting website: http://acousticalsociety.org/content/174th-meeting-acoustical-society-america
Technical program: https://asa2017fall.abstractcentral.com/index.jsp
Meeting/Hotel site: http://acousticalsociety.org/content/174th-meeting-acoustical-society-america#hotel
Press Room: http://acoustics.org/world-wide-press-room/
We will grant free registration to credentialed journalists and professional freelance journalists. If you are a reporter and would like to attend, contact Julia Majors (firstname.lastname@example.org, 301-209-3090), who can also help with setting up interviews and obtaining images, sound clips or background information.
LIVE MEDIA WEBCAST
A press briefing will be webcast live from the conference Tuesday, Dec. 5, 2017, in room Studio 1 at the New Orleans Marriott. Time to be announced. Register at https://www1.webcastcanada.ca/webcast/registration/asa617.php to watch the live webcast.
ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America exists to generate, disseminate, and promote the knowledge and practical applications of acoustics. Two society meetings are held each year throughout the U.S. and Canada where acousticians can exchange information with various other researchers. For more information: http://acousticalsociety.org/