How Do Children Hear Anger?
Researchers pair acoustical analysis with brain mapping to understand how children process emotion in speech and how it might influence their development.
Newswise — Washington, D.C., December 1, 2016 -- Even if they don’t understand the words, infants react to the way their mother speaks and the emotions conveyed through speech. What exactly they react to, and how, has yet to be fully deciphered, but it could have a significant impact on a child’s development. Researchers in acoustics and psychology teamed up to better define and study this impact.
Peter Moriarty, a graduate researcher at Pennsylvania State University, will present the results of these studies, conducted with Michelle Vigeant, professor of acoustics and architectural engineering, and Pamela Cole, professor of psychology, at the Acoustical Society of America and Acoustical Society of Japan joint meeting being held Nov. 28-Dec. 2 in Honolulu, Hawaii.
The team used functional magnetic resonance imaging (fMRI) to capture real-time information about the brain activity of children while they listened to samples of their mothers’ voices with different affects -- or non-verbal emotional cues. Acoustic analysis of the voice samples was performed in conjunction with the fMRI data to correlate brain activity with quantifiable acoustical characteristics.
“We’re using acoustic analysis and fMRI to look at the interaction and specifically how the child’s brain responds to specific acoustic cues in their mother’s speech,” Moriarty said. Children in the study heard 15-second voice samples of the same words or sentences, each conveying anger, happiness, or a neutral affect for control purposes. The emotional affects were defined and predicted quantitatively by a set of acoustic parameters.
“Most of these acoustic parameters are fairly well established,” Moriarty said. “We’re talking about things like the pitch of speech as a function of time... They have been used in hundreds of studies.” In a more general sense, they are looking at what’s called prosody, or the intonations of voice.
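The "pitch of speech as a function of time" Moriarty mentions is a pitch (fundamental frequency) contour. As a rough illustration of the idea -- not the study's actual pipeline, which the release does not describe -- here is a minimal frame-wise autocorrelation pitch tracker in Python with NumPy; research studies typically use more robust estimators such as YIN or Praat's algorithm.

```python
import numpy as np

def pitch_track(signal, sr, frame_len=2048, hop=512, fmin=75.0, fmax=400.0):
    """Estimate pitch (F0) over time via frame-wise autocorrelation.

    A simplified stand-in for the pitch-as-a-function-of-time
    parameter described in the article.
    """
    lag_min = int(sr / fmax)          # smallest lag searched (highest pitch)
    lag_max = int(sr / fmin)          # largest lag searched (lowest pitch)
    contour = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag = lag_min + np.argmax(ac[lag_min:lag_max])
        contour.append(sr / lag)      # lag of strongest periodicity -> F0 in Hz
    return np.array(contour)

# Synthetic check: a 220 Hz tone should yield a flat contour near 220 Hz.
sr = 16000
t = np.arange(sr) / sr               # one second of audio
tone = np.sin(2 * np.pi * 220.0 * t)
f0 = pitch_track(tone, sr)
print(f"median F0: {np.median(f0):.1f} Hz")  # near 220 Hz
```

On real speech, the shape of this contour over time -- rising, falling, flat -- is one of the prosodic cues the researchers relate to emotional affect.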
However, there are many acoustic parameters relevant to speech. Understanding patterns within various sets of these parameters, and how they relate to emotion and emotional processing, is far from straightforward.
“You can’t just talk to Siri [referring to Apple’s virtual assistant] and Siri knows that you’re angry or not. There’s a very complicated model that you have to produce in order to make these judgements,” Moriarty explained. “The problem is that there’s a very complicated interaction between these acoustic parameters and the type of emotion … and the negativity or positivity we’d associate with some of these emotions.”
This work is a pilot study done as an early stage of a larger project called the Processing of the Emotional Environment Project (PEEP). In this early stage, the team is looking for the best set of variables to predict these emotions, as well as the effects these emotions have on processes in the brain. “[We want] an acoustic number or numbers doing a good job at predicting that we’re saying, ‘yes, we can say quantitatively that this was angry or this was happy,’” Vigeant said.
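To make the idea of "predicting emotion from acoustic numbers" concrete, here is a bare-bones nearest-centroid sketch in Python. The feature values are invented for illustration; the study's actual features, data, and model are not published in this release.

```python
import numpy as np

# Toy feature vectors: [mean pitch in Hz, pitch variability in Hz].
# These numbers are hypothetical, chosen only to make the example run.
samples = {
    "angry":   np.array([[260, 55], [250, 60], [270, 50]], float),
    "happy":   np.array([[280, 40], [290, 35], [275, 45]], float),
    "neutral": np.array([[200, 15], [195, 20], [205, 10]], float),
}
centroids = {label: feats.mean(axis=0) for label, feats in samples.items()}

def predict_affect(features):
    """Assign the affect whose centroid is nearest in feature space --
    a drastic simplification of the 'complicated model' Moriarty mentions."""
    features = np.asarray(features, float)
    return min(centroids, key=lambda lab: np.linalg.norm(features - centroids[lab]))

print(predict_affect([198, 17]))  # -> neutral
```

The real difficulty the researchers describe is that no small, fixed set of features separates emotions this cleanly: the parameters interact, and the same acoustic pattern can carry different emotional weight in different contexts.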
In the work to be presented, the team has demonstrated the importance of looking at lower frequency characteristics in voice spectra; the patterns that appear over many seconds of speech or the voice sample as a whole. These patterns, they report, may play a significant role in understanding the resulting brain activity and differentiating the information relevant to emotional processing.
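One common way to capture spectral trends over many seconds of speech, rather than moment-to-moment detail, is a long-term average spectrum: magnitude spectra of successive frames averaged over the whole sample. The sketch below, assuming NumPy and two synthetic signals, illustrates the general technique; the study's actual analysis parameters are not given in the release.

```python
import numpy as np

def long_term_spectrum(signal, sr, frame_len=4096):
    """Average the magnitude spectra of successive windowed frames.

    Averaging over the whole sample emphasizes the slow, low-frequency
    characteristics described in the article (illustrative sketch only).
    """
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    window = np.hanning(frame_len)
    spectra = np.abs(np.fft.rfft(frames * window, axis=1))
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    return freqs, spectra.mean(axis=0)

# Two synthetic "voices": one with extra energy below 100 Hz.
sr = 16000
t = np.arange(15 * sr) / sr                       # a 15-second sample, as in the study
plain = np.sin(2 * np.pi * 200.0 * t)
low_heavy = plain + 0.8 * np.sin(2 * np.pi * 90.0 * t)
freqs, ltas_plain = long_term_spectrum(plain, sr)
_, ltas_low = long_term_spectrum(low_heavy, sr)
band = freqs < 150
print(ltas_low[band].sum() > ltas_plain[band].sum())  # True: more low-band energy
```

Comparing such long-term spectra across the angry, happy, and neutral samples is one way low-frequency differences like these could be quantified and then related to the fMRI responses.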
With effective predictors and fMRI analysis of effects on the brain, the ultimate goal of PEEP is to learn how a toddler who has not yet developed language processes emotion through prosody, and how the environment affects their development. “A long term goal is really to understand prosodic processing, because that is what young children are responding to before they can actually process and integrate the verbal content,” Cole said.
Toddlers, however, are somewhat harder to image in an fMRI device, as it requires them to be mostly motionless for long periods of time. So for now, the team is studying older children aged 6-10 -- though wriggling still poses some challenges.
“We’re essentially trying to validate this type of procedure and look at whether or not we’re able to get meaningful results out of studying children that are so young. This really hasn’t been done at this age group in the past and that’s largely due to the difficulty of having children remain somewhat immobile in the scanner.”
Presentation 4aAA11, "Low frequency analysis of acoustical parameters of emotional speech for use with functional magnetic resonance imaging," by Peter M. Moriarty is at 11:15 a.m. HAST, Dec. 1, 2016 in Room Lehua.
-----------------------MORE MEETING INFORMATION-----------------------
The 172nd Meeting of the Acoustical Society of America
The meeting is being held November 28-December 2, 2016 in Honolulu, Hawaii.
USEFUL LINKS
Main meeting website: http://acousticalsociety.org/content/5th-joint-meeting-acoustical-society-america-and-acoustical-society-japan
Technical program: http://acousticalsociety.org/asa2016fall.abstractcentral.com/planner.jsp
Meeting/Hotel site: http://acousticalsociety.org/content/5th-joint-meeting-acoustical-society-america-and-acoustical-society-japan#hotel
Press Room: http://acoustics.org/world-wide-press-room/
WORLD WIDE PRESS ROOM
In the coming weeks, ASA’s World Wide Press Room will be updated with additional tips on dozens of newsworthy stories and with lay-language papers, which are 300-1200 word summaries of presentations written by scientists for a general audience and accompanied by photos, audio, and video. You can visit the site during the meeting at: http://acoustics.org/world-wide-press-room/.

PRESS REGISTRATION
We will grant free registration to credentialed journalists and professional freelance journalists. If you are a reporter and would like to attend, contact Emilie Lorditch (firstname.lastname@example.org, 301-209-3029), who can also help with setting up interviews and obtaining images, sound clips, or background information.
LIVE MEDIA WEBCAST
A press briefing featuring the acoustics of snapping shrimp and coconut beetles, plus how speech sounds influence female vocal attractiveness, will be webcast live from the conference Wednesday, Nov. 30 from 10 – 11 a.m. HAST in room Iolani I.
ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world's leading journal on acoustics), Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. For more information about ASA, visit our website at http://www.acousticalsociety.org.