Newswise — Although we are largely unaware of doing it, we spend much of our lives filtering out many of the sounds around us while focusing acutely on others – a phenomenon known as auditory selective attention. In research that could someday lead to improved devices that let users control things like wheelchairs through thought alone, hearing scientists at the University of Washington (UW) are attempting to tease the process apart. The work will be presented at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), the Acoustical Society of China, the Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics.
Auditory selective attention is extremely important in everyday life, notes UW postdoctoral researcher Ross Maddox. “In situations as mundane as ordering your morning cup of coffee, you must focus on the barista while tuning out the loud hiss of the espresso machine and the annoying cell phone conversation happening in line right behind you,” says Maddox. “However, the mechanisms behind selective attention are still not well understood.” In addition, some individuals suffer from Central Auditory Processing Disorder (CAPD), “which means they have normal hearing when tested by an audiologist,” he says, “but they are completely lost in loud settings like restaurants and airports.”
To determine how auditory selective attention works – and perhaps how it fails in people with CAPD – Maddox, along with Adrian K.C. Lee, an assistant professor of speech and hearing sciences, and colleague Willy Cheung, created laboratory situations that promoted the breakdown of the process. The researchers had 10 subjects try to focus their attention on just one target sound – a continuously repeating utterance of a single letter – among a total of 4, 6, 8, or 12 such sounds. The subjects had to determine when an “oddball” item (the letter “R,” chosen because it doesn’t rhyme with any other letter) was inserted into the target sound stream.
“Most studies systematically degrade sounds and measure the effects on listeners’ performance,” Maddox explains. “Here, we made the target sound as easy to distinguish from all the other sounds present as possible, and tested the upper limit on the number of sounds a listener could tune out, given all these acoustical advantages.”
Unsurprisingly, it was harder to tune in to just one stream as the number of streams increased. However, study subjects did better than expected – successfully identifying the target 70 percent of the time in the most difficult conditions. Repeating letters faster did make the task harder, although with faster repetition listeners more quickly learn what their target letter sounds like, “so there is a tradeoff involved when deciding on repetition speed,” Maddox says.
The work, Maddox and colleagues say, is a first step toward developing an auditory brain-computer interface (BCI) – a device that reads brain activity to allow users to control computers or machines such as wheelchairs. “We hope to create a system that presents a user with an auditory ‘menu’ of sounds – similar to the letter streams here – and allows the listener to make a choice by reading their brainwaves to determine which sound they are focusing on. The more sound streams a user is able to tune out, the more menu options we can present at a single time.”
The 163rd Meeting of the Acoustical Society of America (ASA) will feature more than 1,300 presentations on the science of sound and its impact on physics, engineering, and medicine. This international acoustics meeting will be held jointly with the 8th Meeting of the Acoustical Society of China and the 11th Western Pacific Acoustics Conference. It is organized by the Hong Kong Institute of Acoustics and will take place May 13-18, 2012, at the Hong Kong Convention and Exhibition Centre.
WORLD WIDE PRESS ROOM ASA's World Wide Press Room (www.acoustics.org/press) will contain additional tips on newsworthy stories and lay-language papers, which are 300- to 1,200-word summaries of presentations written by scientists for a general audience and accompanied by photos, audio, and video.
This news release was prepared for the Acoustical Society of America (ASA) by the American Institute of Physics (AIP).
ABOUT THE ACOUSTICAL SOCIETY OF AMERICA The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world's leading journal on acoustics), Acoustics Today magazine, ECHOES newsletter, books, and standards on acoustics. The society also holds two major scientific meetings each year. For more information about ASA, visit our website at http://www.acousticalsociety.org.