Newswise — WASHINGTON, D.C., May 22, 2015 -- The room is loud with chatter. Glasses clink. Soft music, perhaps light jazz or strings, fills the air. Amidst all of these background sounds, it can be difficult to understand what an adjacent person is saying. A depressed individual, brought to this cocktail party by a well-meaning friend, can slide further into himself, his inability to hear and communicate compounding his sense of isolation.

“A lot of research has suggested that these people with elevated depression symptoms have a bias towards negative perception of information in this kind of environment,” said Zilong Xie, a graduate student in the SoundBrain Lab in the Department of Communication Sciences and Disorders at the University of Texas at Austin.

When a listener has difficulty understanding someone else's speech, the source of disruption can be placed into one of two categories: energetic masking or informational masking. In energetic masking, sounds from peripheral sources such as construction sites or passing airplanes interfere with speech perception. In informational masking, the interference comes from linguistic and cognitive sources, such as the background din of human conversation. Informational masking tends to place greater stress on executive function than does energetic masking, thereby turning a cocktail party, or a lecture hall, into a potentially isolating experience.

Psychoacoustics identifies five basic types of emotional speech – angry, fearful, happy, sad and neutral. “A lot of studies published in JASA [The Journal of the Acoustical Society of America] only look at neutral speech, speech without emotive content,” Xie said. “If we want to fully understand what’s going on with speech perception, particularly in a multi-talker condition, which very often happens in our daily lives, we need to look at those kinds of emotional speech.”

From previous studies, Xie and his colleagues predicted that the bias of people with elevated depression symptoms toward remembering sad information might lead them to detect negative information more easily in these environments. To test this hypothesis, the researchers recruited students with either low or elevated symptoms of depression and gauged their speech perception in the presence of either energetic masking or informational masking.

The researchers tested the volunteers' ability to perceive speech in various conditions by having them listen to a recording of a target sentence, featuring one of the five types of emotional speech, mixed with noise. The students then typed out the target sentence, which was compared with the actual sentence to determine how accurately they had heard it. The test was performed fifty times with each volunteer, covering ten unique sentences of each emotional type.
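The press release does not specify how the typed responses were scored, but a common approach in speech-in-noise studies is word-level accuracy: the fraction of target words the listener reproduced in order. A minimal sketch of that idea (the function name and greedy in-order matching are illustrative assumptions, not the study's actual method):

```python
def transcription_accuracy(target: str, response: str) -> float:
    """Fraction of target words found, in order, in the typed response.

    Hypothetical scoring sketch: each target word is searched for in the
    response starting after the previously matched word, so word order counts.
    """
    target_words = target.lower().split()
    response_words = response.lower().split()
    if not target_words:
        return 0.0

    matched = 0
    pos = 0  # search position in the response; advances past each match
    for word in target_words:
        try:
            found = response_words.index(word, pos)
        except ValueError:
            continue  # word missing or misheard; keep scanning for later words
        matched += 1
        pos = found + 1

    return matched / len(target_words)

# A perfect transcription scores 1.0; "goat" for "boat" and "noon" for
# "dawn" leaves 3 of 5 target words correct.
print(transcription_accuracy("the boat sailed at dawn", "the boat sailed at dawn"))  # 1.0
print(transcription_accuracy("the boat sailed at dawn", "the goat sailed at noon"))  # 0.6
```

Averaging such scores over the fifty trials per listener, grouped by emotion type and masking condition, would yield the per-condition accuracy comparisons the researchers describe.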

"We found that people with elevated depression symptoms are generally poorer at hearing all types of emotional speech relative to people with low depression symptoms," Xie said. Contrary to the researchers’ expectation, subjects with elevated depression symptoms did not understand negative sentences under informational masking any better than subjects with low symptoms. They performed poorly regardless of the emotional content of the sentences. Both groups, however, performed comparably when the sentences were presented under energetic masking conditions.

Xie and his colleagues will present their findings at the 169th meeting of the Acoustical Society of America (ASA), held May 18–22, 2015 in Pittsburgh.

Future work for the researchers involves expanding the scope of their study to include individuals with a broader range of major depressive disorders.

Presentation #5aSC12, "Elevated depressive symptoms associate with an emotion-general deficit in speech perception at a cocktail party," will be presented by Zilong Xie, W. Todd Maddox and Bharath Chandrasekaran during a poster session on Friday, May 22, 2015 between 8:00 AM and 12:00 noon in Ballroom 2 at the Wyndham Grand Pittsburgh Downtown Hotel. The abstract can be found by searching for the presentation number here:


ABOUT THE MEETING
The 169th Meeting of the Acoustical Society of America (ASA) will be held May 18-22, 2015, at the Wyndham Grand Pittsburgh Downtown Hotel. It will feature nearly 1,000 presentations on sound and its applications in physics, engineering, music, architecture and medicine. Reporters are invited to cover the meeting remotely or attend in person for free.

PRESS REGISTRATION
We will grant free registration to credentialed journalists and professional freelance journalists. If you are a reporter and would like to attend, contact Jason Bardi (240-535-4954), who can also help with setting up interviews and obtaining images, sound clips, or background information.


WORLD WIDE PRESS ROOM
ASA’s World Wide Press Room is being updated with additional tips on dozens of newsworthy stories and with lay-language papers, which are 300-1,200 word summaries of presentations written by scientists for a general audience and accompanied by photos, audio, and video.

LIVE MEDIA WEBCAST
A press briefing featuring a selection of newsworthy research will be webcast live from the conference on Tuesday, May 19. Topics and times to be announced. To register, visit

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world's leading journal on acoustics), Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. For more information about ASA, visit our website at