Newswise — The 159th meeting of the Acoustical Society of America (ASA) will convene jointly with NOISE-CON 2010, the 26th annual conference of the Institute of Noise Control Engineering (INCE-USA) April 19-23, 2010 at the Baltimore Marriott Waterfront Hotel, Baltimore, MD. During the meeting, the world's foremost experts in acoustics will report on research that draws from scientific disciplines as diverse as medicine, music, psychology, engineering, speech communication, noise control, and marine biology.

Journalists are invited to attend the joint meeting free of charge, and registration information can be found at the end of this release.

HIGHLIGHTS OF MEETING TALKS
The following are just a few of the meeting's many interesting talks. Additional highlights of the joint meeting are also available and may be obtained by contacting Jason Bardi ([email protected]).

1) Voice Analysis: DETECTING EMOTIONAL STATE
2) Acoustics and Sports: MEASURING CROWD NOISE
3) Neuroscience: HEARING THROUGH NOISE
4) Musical Acoustics: THE PHYSICS OF BANJOS
5) Building Acoustics: GREEN BUILDINGS
6) Acoustic Ecology: CENSUS BY SOUND
7) Classroom Acoustics: IMPROVING THE LEARNING ENVIRONMENT
8) Medicine: PORTABLE ULTRASOUND
9) Engineering: MEASURING TURBULENCE
10) Neuroscience: THE VOICE AND THE BRAIN
11) Animal Acoustics: HOW BATS FORAGE FOR FOOD
12) Medicine: ANTIFUNGAL ULTRASOUND
13) More Highlights -- OTHER INTERESTING SESSIONS
14) More Information for Journalists

-------------------------------------------------------------------------
1) Voice Analysis: Detecting Emotional State
ANALYSIS OF RADIO TRANSMISSIONS FROM THE VALDEZ OIL SPILL AND '94 USAir CRASH

On March 24, 1989, the Exxon Valdez dumped 10 million gallons of crude oil into Alaska's Prince William Sound. Toxicology results indicated alcohol in the captain's blood 10.5 hours after the accident, but eyewitnesses from the time of the accident provided conflicting information on the captain's state.

Malcolm Brenner, a National Transportation Safety Board (NTSB) human performance investigator, analyzed Coast Guard recordings of the captain's radio transmissions in the minutes before and after the accident and compared them to transmissions the captain had made 33 hours earlier and 9 hours later. Brenner looked at four speech characteristics -- rate, errors, slurring, and overall quality -- and found degradation around the time of the accident consistent with alcohol impairment. The NTSB, relying in part on speech evidence, determined that the captain's impairment by alcohol was a factor in the accident.
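Of Brenner's four characteristics, speaking rate is the most directly quantifiable: syllables per unit of speaking time, compared against the same speaker's own baseline recordings. The sketch below illustrates only the shape of that comparison; the syllable counts and the functions are invented for illustration and are not from the NTSB analysis.

```python
def speaking_rate(syllables: int, seconds: float) -> float:
    """Syllables per second over a stretch of speech."""
    return syllables / seconds

def rate_change(baseline: float, sample: float) -> float:
    """Fractional slowdown (positive) or speedup (negative) vs. baseline."""
    return (baseline - sample) / baseline

# Hypothetical counts: transmissions 33 hours earlier vs. near the accident.
baseline = speaking_rate(250, 60.0)
accident = speaking_rate(175, 60.0)
print(f"{rate_change(baseline, accident):.0%} slower")  # 30% slower
```

The same baseline-vs-sample structure extends naturally to the other three characteristics (errors, slurring, overall quality), which require phonetic annotation rather than simple counting.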

The findings are part of NTSB's official report and have been public record for years. Brenner is presenting in Baltimore this year to solicit new collaborations with the broader scientific community and to discuss NTSB interest in new research directions, including better means of analyzing speech for signs of human fatigue.

"A part of our congressional mandate is looking into new investigative techniques that can help us in our work," said Brenner, who will also be discussing speech evidence from an analysis of cockpit recordings made just before the crash in Pittsburgh of a USAir flight on September 8, 1994 that killed all 131 people aboard. The subsequent NTSB investigation, the longest in the agency's history, hinged in part on Brenner's work to map sounds of the pilot wrestling with the controls to the precise moment of the suspected failure of the rudder system.

Audio recordings from the Exxon Valdez investigation will be played at Brenner's Thursday afternoon talk and made available to the media on CD.

Talk #4pSCa1, "Speech analysis in accident investigation" is at 1:05 p.m. on Thursday, April 22. Abstract:

-------------------------------------------------------------------------
2) Acoustics and Sports: Measuring Crowd Noise
QUANTIFYING THE "HOME-FIELD" ADVANTAGE OF COLLEGE FOOTBALL

Acoustics researchers at Pennsylvania State University have quantified their team's home-field advantage in college football by making sound measurements at Beaver Stadium in State College, PA -- the largest college football stadium in the United States, with a capacity of 107,282. The measurements reveal a stark difference in on-field noise, and the quarterbacks of Penn State's Nittany Lions enjoy the advantage: they are subjected to crowd noise about 30 decibels (dB) lower than what visiting quarterbacks have blasting at them -- noise intended to disrupt on-field communication and encourage penalties.

"That 30 dB difference reduces the effective communication distance of the quarterback -- while shouting -- from about 20 feet to less than 1 foot," explains Andrew Barnard, a Ph.D. candidate in Penn State's Graduate Program in Acoustics who led the investigation. Measurements were taken during two fall 2009 games with 12 sound level meters at various positions coupled with additional acoustic arrays. Results showed that the corner of the field in front of the student section is loudest -- 110 decibels or more when the opposing team has the ball.

"That's like standing right in front of one of those giant speaker columns at a rock concert," Barnard says, adding that the university's athletic department has been happy with the results. "In college football, it's a badge of pride to have a stadium that's loud and a student section that is really loud," says Barnard. "We have both, and the data to prove it.''

Barnard will continue taking measurements, and he hopes other schools will contribute data to learn more about football crowd noise and strategic uses of disruptive sound. He won't measure every game, however -- he's a fan first. Asked about his own contribution to the noise, he admits it is minor. "I like to be able to talk after the game," he says.

Talk #1pNCa5, "Evaluation of crowd noise in Beaver Stadium during Penn State football games" is at 2:00 p.m. on Monday, April 19. Abstract:

-------------------------------------------------------------------------
3) Neuroscience: Hearing Through Noise
SCANS SHOW BRAIN'S SIGNAL-FROM-NOISE FILTER AT WORK

Stop and listen when walking into a crowded cocktail party. Notice how the room's soundscape gradually transitions from white noise to discernible categories of familiar sounds -- maybe the laughter of a friend in attendance here or the bars of a favorite song playing softly on the host's sound system over there.

This adjustment seems natural, like pupils shrinking when confronted with especially bright light. In fact, this ability hinges on the brain's ability to wrest control -- by way of higher learning and memory systems -- from its more hardwired automatic processes.

Johns Hopkins University researcher Mounya Elhilali studies just what happens in the brain when this top-down ability to focus on distinct sounds overrides its bottom-up abilities, which are mostly limited to creating neural facsimiles of undifferentiated noise. In Baltimore, Elhilali will present results of a recent experiment in which she and her colleagues watched how volunteers' magnetic brain activity changed when they were asked to focus on one sound amid a noisy background.

"Our study shows that when we focus our attention to one sound among a number of competing background sounds, our brain boosts the representation of this target sound relative to all other sounds," says Elhilali, an assistant professor of electrical and computer engineering. "The brain responds more vigorously to this object of attention and also causes different populations of neurons to all respond at the same time to the target sound."

Elhilali's research is relevant to more than those seeking to better navigate the party circuit. Her data may eventually be useful to doctors seeking to understand why the ability to make sense of our environment aurally goes awry due to cognitive impairment and aging. And in the more distant future, her field may help open new frontiers for hearing technologies, such as automatic surveillance of soundscapes, diagnostic systems, brain-machine robotics interfaces and hearing prostheses.

Talk #5pAB2, "The role of innate and attentional mechanisms in parsing complex acoustic scenes: A neural and behavioral study" is at 1:25 p.m. on Friday, April 23. Abstract:

-------------------------------------------------------------------------
4) Musical Acoustics
THE PHYSICS OF BANJOS

The banjo has many more tunable parts than most string instruments, but much less is known about its physical properties than about those of violins or guitars. Session 3aMU looks at a number of banjo properties, beginning with a talk by Thomas Rossing of Stanford University, editor of a forthcoming book on the science behind string instruments. Other talks examine the motion of the banjo's head and bridge and compare banjo-like instruments from North America, West Africa, the Middle East, Japan, and China. Following the formal talks, a short concert of bluegrass music will feature a five-string banjo.

Session 3aMU, "Musical Acoustics, Signal Processing in Acoustics, and Engineering Acoustics: Measurement and Modeling of the Acoustic Properties of the Banjo" is at 8:30 a.m. on Wednesday, April 21.

-------------------------------------------------------------------------
5) Building Acoustics: Green Buildings
IMPROVING THE ACOUSTICS OF GREEN BUILDINGS

The Public Buildings Service (PBS) of the U.S. General Services Administration (GSA) is an $8 billion agency responsible for the work environments of more than a million employees at 60 different federal agencies. In October 2009, Executive Order 13514 challenged agencies to lead by example in energy and environmental performance and to meet 2020 greenhouse-gas reduction goals and other standards for efficient, sustainable buildings. As the leading provider of federal workspace, PBS is aggressively seeking out and incorporating greener, more efficient building technologies and practices -- some of which will be described in a session on Monday, April 19.

New studies by Kevin Powell at the GSA have already been put to use to increase occupant satisfaction in the renovation of green buildings that follow a set of standards known as Leadership in Energy and Environmental Design (LEED). GSA achieved its first LEED-certified building in 2002, and forty-eight additional LEED-certified buildings have been completed since. Based on studies of these buildings conducted in 2006 and 2009, GSA concluded that they deliver economic and environmental advantages -- saving money, energy, and water -- and provide spaces whose occupants are more satisfied overall.

However, other GSA studies found that occupant satisfaction with workplace acoustics scored low compared with other workplace characteristics of LEED-certified buildings. Since GSA's ultimate objective is to create spaces that provide an optimal work environment, it sought to better understand the needs of those working in these new spaces. It found that the new mechanical systems in these buildings were actually too quiet -- some background noise was needed to make occupants comfortable -- and that new designs needed to account for the lack of absorptive surfaces in LEED-certified buildings. Privacy was also a major concern among occupants, who felt that even areas designated for private conversation and meetings offered inadequate privacy for normal conversations.

"New designs and materials create unintended impacts on the acoustics of LEED certified buildings," says Powell. "But by understanding the expectations and work styles of the occupants we can develop acoustical standards for these buildings that help the people who work in them be as efficient as the buildings themselves."

Talk #1aAAa4 "Unintended consequences: Acoustical outcomes in LEED (Leadership in Energy and Environmental Design) certified offices" is at 8:50 a.m. on Monday, April 19. Abstract:

-------------------------------------------------------------------------
6) Acoustic Ecology: A Census by Sound
USING SOUND TO SPOT RARE, SECRETIVE SPECIES

If we can't see animals because of their naturally secretive behavior, how do we know they are really there -- and in what numbers? How can responsible conservation decisions be made? In Antarctica, acoustic behavioral ecologist Tracey Rogers and a team from the University of New South Wales in Sydney, Australia set out to answer these questions using case examples of Antarctic pack-ice seals and the leopard seal.

"For marine species that are rarely sighted, estimating abundance and spatial-use behavior can be challenging," Rogers explains. "This situation is exacerbated in the polar regions due to the peculiar logistical difficulties of working in the pack ice, which makes survey effort enormously expensive." To overcome these challenges to studying shy or secretive marine animals, Rogers' group devised a simple, cost-effective tool for "seeing the animals with sound." They used vocalizations as a proxy for field sightings. By developing a new means of modeling sounds per animal over unit of time they arrived at a relative population index for a given species -- a sound census. The model requires information on the production of vocalizations as well as data about the detection range. Vocalizations include seasonal calling patterns, daily calling patterns, and vocalizations that identify individuals and gender. Notes Rogers: "Our case study shows that with the advent of more sophisticated marine engineering, coupled with pertinent parameters of acoustic behavioral ecology, this approach is feasible. In a cost-effective way using sound, we can glean information about the behavior of rare, secretive, or low-density species."

Information yielded by this new model includes data about population density, natural history, and habitat use—all foundational elements of conservation policy.

Talk #1pAO4, "Are they really not there? Using passive acoustics to overcome false absences in the study of vocal species that are rare, secretive, or distributed at low densities" is at 2:00 p.m. on Monday, April 19. Abstract:

-------------------------------------------------------------------------
7) Classroom Acoustics
USING A VIRTUAL CLASSROOM TO IMPROVE THE LEARNING ENVIRONMENT

Anyone who has endured the challenge of trying to converse at a loud cocktail party will know the difficulty of focusing on just one speaker with other people all around. This may produce nothing worse than a faux pas in social situations, but in the classroom, it can be a real obstacle to learning. Daniel Valente and Dawna Lewis at Boys Town National Research Hospital in Omaha, Nebraska are using a virtual classroom to analyze typical learning environments and determine how adults and children learn best -- and why.

Valente and colleagues used high-tech methods to test comprehension in both elementary-school-age children and adults who were listening to a story read by talkers at different locations around the room, as compared with a single, frontally located talker. This allowed them to simulate two plausible classroom interactions: a teacher/student discussion vs. a teacher-only lesson.

They created an environment with the background noise and reverberation typical of a classroom, calibrated so that both children and adults scored 95 percent on a sentence-perception test, and used a gyroscopic head tracker to monitor subjects' head movements during the study. Adults scored significantly higher on comprehension tests in both the single- and multi-talker scenarios, and while adults scored about the same in both listening conditions, children did significantly worse in the multi-talker environment. A somewhat surprising result was that even though children turned their heads toward the active talker more often than adults, their comprehension scores were not improved by that adaptive behavior.

"This could indicate that the simple act of tracking the talker diverts focus of cognitive resources away from listening and comprehending," states Valente.

The team at Boys Town National Research Hospital is already using this virtual classroom and experimental technique to expand their knowledge of the factors that affect comprehension and of how an environment can be optimized to enhance learning for both normal-hearing and hearing-impaired populations.

Talk #2pAAa9, "Comparing head rotation angle, visual localization, and recall-proficiency of school-aged children 8-12 while listening to a story by multiple discreet talkers in a virtual classroom" is at 3:55 p.m. on Tuesday, April 20. Abstract:

-------------------------------------------------------------------------
8) Medicine: Portable Devices
POCKET-SIZED ULTRASOUND TESTED FOR A VARIETY OF APPLICATIONS

George Lewis of Cornell University has redesigned ultrasound devices, shrinking them to the size of a cell phone. By improving the energy efficiency of the electronics inside ultrasound devices, he hopes to open the door to a new generation of cheap, portable therapeutic technologies to treat diseases independently or in combination with other therapeutic regimes such as chemotherapy.

At low energies, the interaction of ultrasonic waves with soft mammalian tissues can enhance tissue permeability, increasing drug uptake as a result. In his laboratory, Lewis has shown his portable ultrasound technology can significantly enhance the delivery and efficacy of drugs used to treat brain gliomas. At higher energies, the ultrasonic waves can cause tissue necrosis and cell death, opening the door to treatments that destroy diseased tissue directly.

At the meeting, Lewis will detail the progress made by collaborators currently testing his device for a variety of clinical applications -- from soothing painful joints to treating aggressive brain cancers. Vascular surgeons at Weill Cornell Medical Center are testing the technology to non-invasively cauterize veins as a novel treatment for varicose veins. Physicians at Cayuga Medical Center are using it to develop a more reliable fetal monitor that is less sensitive to motion. Lewis's lithium-ion battery-powered ultrasound may even be powerful enough for military medics to use in the field to cauterize gunshot wounds.

Talk #1pBB14, "Pocket-sized ultrasonic surgical and rehabilitation solutions: From the lab bench to clinical trials" is at 4:45 p.m. on Monday, April 19. Abstract:

-------------------------------------------------------------------------
9) Engineering: Measuring Turbulence
TINY MICROPHONE ARRAY MAY HELP TAME EFFECTS OF TURBULENCE

Turbulence is not just an issue for air travelers with weak stomachs. The phenomenon, in which airflow around a plane's fuselage is churned chaotically, has been the subject of study by aircraft designers for more than a half century. What these designers want most is to mitigate the effects of turbulence, mild forms of which can be felt on almost every air flight, and stronger forms of which can cause everything from discomfort to deafening noise. Extreme turbulence is even worse, and it can wreak severe damage on aircraft.

Taming turbulence requires understanding it mathematically. This in turn depends on the ability to build instruments that are at once sensitive, given the nuanced data required, and robust, given the airflow maelstrom that comes with hurtling through atmosphere at several hundred miles per hour. This is the focus of Joshua Krause, a mechanical engineering doctoral student at Tufts University.

Working in the Tufts Micro and Nano Fabrication Facility, Krause and his colleagues succeeded in building a hypersensitive microelectromechanical system (MEMS) array that packs 64 microphones onto a chip measuring just one centimeter on a side. The array may give the most fine-grained look yet at the various forces encountered by a jet aircraft as it cuts through the atmosphere. Early results indicate the device may be among the most sensitive yet at measuring both the low-wavenumber airflows most associated with structural rattling and cabin noise and the high-wavenumber flows that pack the greatest energy -- and are potentially the most dangerous.
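For a sense of scale: if the 64 microphones are laid out as a uniform 8 x 8 grid (our assumption; the release states only the microphone count and chip size), the sensor pitch and the highest spatial frequency the array can sample without aliasing follow directly:

```python
import math

def spatial_nyquist(chip_m: float, mics_per_side: int):
    """Sensor pitch and spatial-Nyquist wavenumber (pi/pitch, rad/m)
    for a uniform square microphone grid spanning a chip of side chip_m."""
    pitch = chip_m / (mics_per_side - 1)
    return pitch, math.pi / pitch

pitch, k_max = spatial_nyquist(0.01, 8)  # 64 mics on a 1 cm chip, 8 per side
print(f"pitch = {pitch * 1e3:.2f} mm, max wavenumber = {k_max:.0f} rad/m")
```

The dense packing is what lets the chip resolve short-wavelength pressure fluctuations that larger, sparser arrays alias.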

Krause stresses that his device is a prototype only, and that years of further work remain, including wind tunnel testing and ongoing electrical tweaking of the microphones.

"I enjoy creating MEMS devices and think the problem of turbulence is one of the most challenging problems possible," said Krause, who will present his work in Baltimore. "Combine the two and you have an excellent project that one could work on their entire life."

Talk #4pEA6, "MEMS (microelectromechanical systems) microphone array on a chip" is at 2:40 p.m. on Thursday, April 22. Abstract:

-------------------------------------------------------------------------
10) Neuroscience: The Voice and the Brain
FINDING OUR VOICE -- OR AT LEAST WHERE IN OUR BRAIN IT IS RECOGNIZED

Johns Hopkins professor Xiaoqin Wang began learning English 30 years ago as a freshman at Sichuan University in China. He remembers reciting words aloud with his English teacher after each class while walking with him to a bus station, where the teacher would catch a bus to his home far away from the university.

Wang himself has traveled far in the decades since and is today a professor whose expertise cuts across medicine, engineering and neuroscience. Yet he occasionally thinks back to these walks in Chengdu when he describes some of his current research, which concerns how an individual's brain processes his or her own voice while speaking. The neurological mechanism is the linchpin of learning new words in one's native tongue or mastering a new song.

Working with collaborators, Wang observed the brain activity of marmosets fitted with headphones that allowed the animals to hear their own voices. Marmosets are a highly vocal primate species that emits a range of sounds making up, in effect, a rudimentary vocabulary. When an animal heard its voice being distorted -- either shifted in frequency or embedded in a loud noise -- neurons in its auditory cortex signaled the mismatch between what was heard and what was said.

Wang's results bolster what we intuitively expect to be true -- when we speak our brain recognizes the voice and knows it's not someone else speaking. More significantly, Wang's work represents the first ever pinpointing of the precise region in the brain that handles this word-by-word processing.

"I have vivid memories of how each word and sentence is learned through many repetitions," said Wang, who will present his results in a Friday afternoon session and hopes that this work will eventually help people with hearing or speaking deficiencies.

Talk #5pAB5, "Top-down vocal feedback control in active hearing" is at 2:40 p.m. on Friday, April 23. Abstract:

-------------------------------------------------------------------------
11) Animal Acoustics: How Bats Forage for Food
IN FORAGING, BATS MAY BUILD MENTAL MAPS OF THEIR LOCAL ENVIRONMENT

For big brown bats, whose bodies measure just three or four inches, an impressive built-in sonar system makes up for other sensory shortcomings. Though their vision is poor, they are able to lurch wildly through the air, careening around to gobble up mosquitoes and wasps. One foraging approach would be to fly randomly, changing course only to capture prey or avoid imminent collision.

An alternative strategy would be to memorize the layout of the forest trees so that they can focus exclusively on aerial hunting, which is important since these bats need to eat up to their own body weight in insects each night. What it boils down to, says Brown University neuroscience doctoral student Jonathan Barchi, is how much a small bat brain, which weighs less than a peanut, can handle.

Barchi investigated these questions as part of a broader inquiry into the neural and evolutionary roots of animal behavior. He did so by observing real Big Brown bats in a laboratory environment, one large enough for the animals to maneuver easily through but small enough that scattered plastic chains hanging floor to ceiling presented real hazards that could not be ignored. Each bat's motion was recorded with a pair of thermal imaging cameras, which revealed consistent looping paths in the bats' flights.

"These patterns are very consistent -- sometimes within centimeters of each other in a space that is several meters on each side -- and the consistency persists during and across flights," said Barchi, who will present his work in a Friday afternoon session in Baltimore. "The degree to which these loops resemble each other and persist during flight suggests that the bat is using detailed memory of the space in addition to sensory cues."

Talk #5pAB8, "Bioacoustic and behavioral correlates of spatial memory in echolocating bats" is at 3:50 p.m. on Friday, April 23. Abstract:

-------------------------------------------------------------------------
12) Medicine: Antifungal Ultrasound
ULTRASOUND FOR TREATMENT OF NAIL DISEASES

Onychomycosis is a fungal infection that affects the toenails or fingernails. Although not life threatening, it can be painful and disfiguring, producing serious physical and occupational limitations.

Now Danielle Abadi, an undergraduate at George Washington University (GWU), and her mentor Vesna Zderic, an assistant professor of electrical engineering at GWU, are developing a new device for treating nail fungal disorders that uses ultrasound to increase the permeability of the nail surface and improve drug delivery to the nail. The device may one day accelerate treatment of onychomycosis: doctors would use low-frequency ultrasound to create microscopic "cavitations," or small bubbles, on the surface of the nail bed of a patient's foot, increasing the permeability of the nail. The collapse of these bubbles against the nail causes temporary pits that allow more of a topical drug called Penlac to reach affected areas.

The device is still in development and has yet to undergo clinical trials; it must prove safe and effective before it can be approved for widespread use in people.

Talk #4aBB12, "Ultrasound-mediated nail drug delivery system to treat fungal disorders" is at 11:50 a.m. on Thursday, April 22. Abstract:

-------------------------------------------------------------------------
13) Other Highlights -- Interesting Sessions
In addition to the highlighted talks above, there are many other interesting talks and sessions at the meeting -- some of which are listed below. For a complete list of abstracts for any of these sessions, go to the searchable index for the 159th Meeting ( and enter the session number with an asterisk (e.g., 1aNSa*).

MONDAY
- Acoustics of Green Buildings (1aAAa*), 8:00 a.m. - noon
- Noise-Induced Hearing Loss (1aPP*), 8:00 a.m. - noon
- Diagnostic Applications of Ultrasound (1aBB*), 8:00 a.m. - 11:30 a.m.
- Musical Acoustics: Stringed Instruments (1aMU*), 9:30 a.m. - 12:15 p.m.
- Wind Turbine Noise (1pNSd*), 1:15 p.m. - 2:40 p.m.
- Community Noise (1pNCa*), 1:00 p.m. - 3:15 p.m.

TUESDAY
- Blast-Induced Traumatic Brain Injury (2aBB*), 8:30 a.m. - noon
- Musical Acoustics: The Contemporary Traditional Violin (2aMU*), 8:00 a.m. - noon
- Space Vehicle Vibroacoustics (2aSAb*), 10:30 a.m. - 11:30 a.m.
- Acoustics and Public Policy (2pNSc*), 1:00 p.m. - 4:10 p.m.
- Estimating Spatial Density of Animal Populations with Passive Acoustics (2pAAb*), 1:00 p.m. - 3:00 p.m.

WEDNESDAY
- Animal Hearing and Vocalization (3aABb*), 10:30 a.m. - noon
- Measurement and Modeling of the Acoustic Properties of the Banjo (3aMU*), 8:30 a.m. - noon
- Sound Source Localization (3aPP*), 9:00 a.m. - 11:30 a.m.
- Military Noise Environments (3aNSa*), 8:10 a.m. - noon

THURSDAY
- Architectural Acoustics: Hidden Gems (4aAAb*), 8:30 a.m. - 9:25 a.m.
- Biomedical Ultrasound/Bioresponse to Vibration (4aBB*), 8:00 a.m. - noon
- Music Processing: Neural Mechanisms and Hearing Impairment (4aPP*), 8:00 a.m. - 11:40 a.m.
- The Interface Between the Human Rights and Scientific Communities (4pID*), 2:00 p.m. - 3:25 p.m.
- Speech for Tracking Human Health State, Performance, and Emotional State (4pSCa*), 1:00 p.m. - 4:00 p.m.

FRIDAY
- Auditory Attention, Learning and Memory: From Neurons to Behavior (5pAB*), 1:00 p.m. - 4:20 p.m.
- Ultrasonic Characterization of Bone (5pBB*), 1:00 p.m. - 5:00 p.m.
- Outdoor Sound Propagation (5pPAa*), 1:00 p.m. - 3:15 p.m.

*******************************************************
14) MORE INFORMATION FOR JOURNALISTS
The 159th Meeting of the Acoustical Society of America is being held in conjunction with NOISE-CON 2010, the 26th annual conference of the Institute of Noise Control Engineering (INCE-USA). Both meetings take place at the Baltimore Marriott Waterfront Hotel in Baltimore, MD. The ASA meeting will be held Monday through Friday, April 19-23, and NOISE-CON 2010 will be held Monday through Wednesday, April 19-21.

The Baltimore Marriott Waterfront Hotel is located at 700 Aliceanna Street in Baltimore, MD 21202. The hotel main numbers are 1-410-385-3000 and toll free: 1-800-228-9290.

USEFUL LINKS:
Main meeting website:
Meeting program:
Searchable index:
Hotel site:

WORLD WIDE PRESS ROOM
In the coming weeks, ASA's World Wide Press Room ( will be updated with additional tips on dozens of newsworthy stories and with lay-language papers, which are 300-1200 word summaries of presentations written by scientists for a general audience and accompanied by photos, audio, and video.

PRESS REGISTRATION
We will grant free registration to credentialed full-time journalists and professional freelance journalists working on assignment for major news outlets. If you are a reporter and would like to attend, please contact Jason Bardi ([email protected], 301-209-3091), who can also help with setting up interviews and obtaining images, sound clips, or background information.

****************************
ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,500 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world's leading journal on acoustics), Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. For more information about ASA, visit our website at
