Newswise — Underwater Wi-Fi, music over the internet, pitch perception in the brain, discovering how whales find their favorite salmon, detecting dangerous swimmers, helping people who have undergone laryngectomy, rhythm and movement disorders, visualizing the sound of musical instruments, and finding a possible way to save manatees from collisions with boats.

These are a few of the topics that will be covered at the 156th meeting of the Acoustical Society of America (ASA) next month in Miami, FL. Convening at the Doral Golf Resort and Spa in Miami, acoustical scientists and engineers will present some 660 talks and posters related to acoustics, a field that cuts across disciplines as diverse as architecture, underwater research, psychology, physics, animal bioacoustics, medicine, music, noise control, and speech.

Journalists are invited to cover the upcoming meeting either onsite in Miami or online through the meeting's World Wide Press Room. Registration instructions, contacts for reporters, and more information can be found at the end of this release.

HIGHLIGHTS OF 156th ACOUSTICAL SOCIETY MEETING

1) Killer Whales Picky about their Salmon
2) Giving a New Voice to Laryngectomy Patients
3) Underwater Wi-Fi
4) Musicians Hear More Particularly
5) Saving Manatees from Boat Collisions
6) Microbubbles May be Key to Drug Delivery in the Brain
7) Detecting a Human Swimmer
8) Visualizing Vibrations in Musical Instruments
9) Understanding Rhythm may Help People with Movement Disorders
10) Virtual Harmony: Synchronizing Music Over the Net
11) Bug Surveillance Reveals Moths' Stealthy Secrets
12) Laptop Music
13) Communicating Through Walls
14) Understanding Pitch Perception May Enhance Learning

******************************************************************
1) KILLER WHALES PICKY ABOUT THEIR SALMON

Killer whales swimming the waters off British Columbia and Washington State are known for their predilection for Chinook salmon, even in the months when the Chinook make up just 10 to 15 percent of the salmon swimming in the waters. How is it that the killer whales, with males weighing up to 12,000 pounds, can sort through the schools of salmon and find the Chinook swimming among the Coho and Sockeye salmon?

That question intrigued bioacoustician Whitlow Au, of the Hawaii Institute of Marine Biology, Kailua, and colleagues John Horne and Christopher Jones, of the University of Washington, Seattle. The discriminating taste of the killer whales suggested that the marine mammals were using echolocation to select their dinner, and the scientists wanted to determine why a Chinook sounds different from a Coho or a Sockeye. Au's team used simulated killer whale echolocation signals and measured the structure of the echoes as they bounced back from the three salmon species.

The results indicate that "the echo structure from similar sized but different species of salmon were different and probably recognizable by foraging killer whales." Au said that the radiographic images of the echoes show differences in the "swimbladder shape and volume" in the different species, and the whales can use that to pick the fish they like. The results, Au said, suggest that an echo-sounder could be developed that, when pointed down into the water, could be used to discriminate among salmon species.
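
To make the idea concrete, here is a minimal sketch (my illustration, not the researchers' code) of how echo structure alone could separate species. It assumes, purely for illustration, that each species returns a pair of echo highlights whose spacing is set by swimbladder geometry; all delay values are hypothetical.

    # A minimal sketch (not the researchers' code): if each species returns
    # echo highlights whose spacing is set by swimbladder geometry, that
    # spacing can serve as a classification feature. All echo-delay values
    # below are hypothetical, for illustration only.
    import numpy as np

    fs = 500_000                                   # sample rate, Hz
    t = np.arange(0, 0.002, 1 / fs)                # 2 ms observation window
    click = np.sin(2 * np.pi * 50_000 * t) * np.exp(-((t - 1e-4) / 4e-5) ** 2)

    def echo(highlight_spacing_s):
        """Echo = body highlight plus a delayed, weaker swimbladder highlight."""
        d = int(highlight_spacing_s * fs)
        e = click.copy()
        e[d:] += 0.7 * click[:-d]
        return e

    # Hypothetical highlight spacings for three species (seconds):
    library = {"chinook": 3.0e-4, "coho": 2.0e-4, "sockeye": 1.2e-4}

    def classify(received):
        """Pick the species whose template echo correlates best with the input."""
        scores = {sp: np.max(np.correlate(received, echo(dt), mode="full"))
                  for sp, dt in library.items()}
        return max(scores, key=scores.get)

    print(classify(echo(3.0e-4)))                  # -> chinook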

The talk, "Backscatter measurements of three species of salmon using simulated killer whale echolocation signals" (2pAB2) by Whitlow W. L. Au, John K. Horne, and Christopher D. Jones will be at 1:50 p.m. on Tuesday, November 11.

******************************************************************
2) GIVING A NEW VOICE TO LARYNGECTOMY PATIENTS

Prosthetics for patients with cancer have improved significantly using new technologies, materials, and methods such that their disease and its effects are not apparent to those around them. However, this is not the case for many people who lose the ability to speak after surgical removal of the larynx (laryngectomy) to treat advanced cancer. The electrolarynx (EL), a hand-held device used by approximately half of these individuals after surgery, provides speech that is mechanical and non-human sounding, can be hard to understand, and draws undesirable attention to the user -- all of which can limit communication and negatively impact quality of life.

Researchers at Harvard, MIT and Massachusetts General Hospital are working together to improve the sound quality of EL speech by creating an approach that can automatically add missing pitch variation to the speech. This is a significant advance because listeners rate EL speech that includes pitch variation to be significantly more natural/human sounding than regular (monotone) EL speech. The research team is now utilizing this new understanding to begin correcting other acoustic factors that contribute to the poor quality and reduced intelligibility of EL speech. Ultimately, they intend to collaborate with companies that make EL devices to implement their improvements in systems that will make real differences in the lives of laryngectomy patients. The capability for more natural/human-sounding (and ultimately more intelligible) EL speech will allow laryngectomy patients to communicate more effectively, giving them a new voice after cancer surgery and enhancing their quality of life.

This study has also furthered the basic understanding concerning which aspects of normal speech production contribute effectively to creating natural speech sounds -- an understanding that can be useful to researchers in speech science and in addressing other speech disorders.

The talk, "F0 control in electrolarynx speech" (3aSC8) by Yoko Saikachi, Kenneth Stevens, and Robert Hillman is at 11:15 a.m. on Wednesday, November 12.

******************************************************************
3) UNDERWATER Wi-Fi

Underwater communication has always been a challenge, but a new modeling technique could help users go wireless by automatically setting acoustic modems for maximum speed. Many applications, such as monitoring a submerged volcano or listening to whale songs, require underwater sensors, but using cables to retrieve this data can be expensive and impractical. An alternative for staying connected would be to send acoustic signals through the water, in the same way that Wi-Fi signals travel through the air. However, these sub-surface messages tend to become garbled as the sound waves scatter off various objects in the water. Much of this unwanted echo comes from the ocean surface, where the sound reflects in constantly changing directions due to the undulating water waves.

To help underwater wireless communicators, Geoffrey Edelmann of the U.S. Naval Research Laboratory and his colleagues have developed a method for predicting how good the acoustics are in a given stretch of ocean. The software first constructs a model of the current sea surface using input from a floating buoy or other device. It then takes this simulated ocean and calculates all the different sound wave paths and how they interfere with each other. The results tell a user exactly how fast they can reliably send their data at the present time. The research team plans to test their model this coming summer in a real ocean setting.
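
A toy two-ray calculation (my sketch under simplifying assumptions, not the NRL software) shows why the surface bounce matters: the direct and surface-reflected paths interfere, carving fades into the channel that limit how fast data can reliably be pushed through. The geometry values are hypothetical.

    # A toy two-ray model (not the NRL software): a direct path and a
    # surface-reflected path interfere, notching the channel's frequency
    # response. Geometry and band are hypothetical illustration inputs.
    import numpy as np

    c = 1500.0              # nominal speed of sound in seawater, m/s
    r = 500.0               # horizontal source-receiver range, m
    zs, zr = 20.0, 30.0     # source and receiver depths, m

    d_direct = np.hypot(r, zr - zs)
    d_surface = np.hypot(r, zr + zs)           # path via the image source
    tau = (d_surface - d_direct) / c           # multipath delay spread, s

    f = np.linspace(5e3, 30e3, 1000)           # candidate acoustic band, Hz
    # The surface flips the phase (reflection coefficient ~ -1), and the
    # longer path is slightly weaker due to spreading:
    H = 1.0 - (d_direct / d_surface) * np.exp(-2j * np.pi * f * tau)

    usable = np.mean(20 * np.log10(np.abs(H)) > -10)   # above a -10 dB fade
    print(f"delay spread {tau * 1e3:.2f} ms, usable band fraction {usable:.0%}")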

The talk, "Ocean surface degradation of shallow water acoustic communication" (5aSP1) by Geoffrey Edelmann, Shaun Anderson, and Paul Gendron is at 9:00 a.m. on Friday, November 14.

******************************************************************
4) MUSICIANS HEAR MORE PARTICULARLY

A new comparison of how musicians and non-musicians listen to sound shows that the benefits of musical training are not limited to music, but may help people listen in a variety of situations. Musical training enhances and sharpens the nervous system's response to speech, particularly in background noise, and to emotionally significant sounds such as the cries of babies. Nina Kraus will describe a series of studies using electrodes placed on the scalp in which the neural transcription of music, and subtle changes in the nervous system, could be examined in detail.

"The cross-domain transfer from music to speech," says Kraus, "strengthens the case for keeping and enhancing music education in schools."

For example, her work has shown that some poor readers demonstrate a diminished neural transcription of those very sound elements important in reading. These readers don't suffer from a general disruption of neural transcription. Instead, the problem lies in those particular transcriptions -- such as having a good sense of timing and of timbre -- which are important for hearing and understanding speech sounds, and which are enhanced in musicians. The implications are that musical experience may enhance everyday listening situations and possibly help ameliorate language problems such as dyslexia. (http://www.brainvolts.northwestern.edu).

The talk, "Dynamic encoding of pitch, timing, and timbre" (2aMU1) by Nina Kraus is at 8:35 a.m. on Tuesday, November 11.

******************************************************************
5) SAVING MANATEES FROM BOAT COLLISIONS

Last year, 73 manatees were killed in Florida when they were struck by boats. In 2006, boats plying Florida's bays and inland waterways killed 69 of the large mammals, which are typically 10 feet long and weigh upwards of 800 pounds. The standing response of marine authorities to deaths from boat collisions has been to impose low speed limits on boats. However, after more than a decade of slow speed regulations, the number of manatee mortalities and injuries from boats has still increased to record highs.

In an effort to reduce the manatee deaths and injuries, Edmund Gerstein, director of marine mammal research and behavior at Florida Atlantic University in Boca Raton, FL, set out in 1991 to investigate the underlying cause of these collisions. Gerstein disagreed with the prevailing belief that manatees were simply too slow and unable to learn to get out of the way of boats, and that the boats therefore had to slow down. He noted manatees have the "cognitive prowess to learn and remember as well as dolphins and killer whales," and when motivated "manatees can explode with a burst of power and move about 21 feet in a second." He asked a simple question: "After a manatee has been hit more than once (some have been hit up to 50 different times), why doesn't the animal learn to get out of the way?" Is it possible manatees are not aware of, or cannot hear, the sounds of an approaching boat?

After a comprehensive series of hearing studies, Gerstein's research revealed that manatees cannot hear the dominant low-frequency sounds of boats and that those sounds do not propagate well in shallow water. "Ironically, slow speed zones result in quieter and lower frequency sounds which manatees cannot hear and locate in many shallow water conditions." In Florida's murky waters, where boaters and manatees cannot see each other, slow speed zones exacerbate the risks of collisions by making these boats inaudible to manatees and increasing their transit times through manatee habitats.

Gerstein and his late colleague, Dr. Joe Blue, developed a narrowly focused alarm that emits a high-frequency signal in front of boats and other watercraft to alert manatees that may be in the path of an oncoming craft.

Gerstein has been testing this alarm in a NASA wildlife refuge where controlled studies are possible. He has reported that 100 percent of the controlled approaches toward manatees by a boat with the alarm have resulted in the manatees avoiding the boat up to 30 yards away. Only 3 percent of the manatees approached by the same boat, without the alarm, moved to avoid the boat. Gerstein notes that the alarm is tailored to exploit the manatees' best hearing abilities, so the alarm is not loud. It is also narrowly focused in front of the boat, so that only manatees in the direct path of the boat are alerted and the marine environment is not disturbed.

The talk, "Field tests of a directional parametric acoustic alarm designed to alert manatees of approaching boats" (4aAB6) by Edmund Gerstein, Laura Gerstein, Joseph Blue, Josiah Greenewald, and Narayan Elasmar is at 10:30 a.m. on Thursday, November 13.

******************************************************************
6) MICROBUBBLES MAY BE KEY TO DRUG DELIVERY IN THE BRAIN

Many neurological disorders and neurodegenerative diseases, such as Alzheimer's and Parkinson's, remain difficult to treat because of the impermeability of the blood-brain barrier (BBB). Opening of the BBB noninvasively, transiently and with high spatial selectivity has been demonstrated using a combination of focused ultrasound (FUS) and microbubbles, allowing large molecules to infuse entire regions of the brain relevant in diseases such as Alzheimer's. However, while researchers know that this combination works, the mechanism of BBB opening remains unknown.

A team of researchers at Columbia University is trying to determine the bubble behavior -- i.e., bubble oscillation and/or collapse -- during focused ultrasound to gain an understanding not only of what occurs to open the blood-brain barrier but how it happens. Ultimately the goal of this research is to be able to effectively deliver drugs to the brain region where they are needed without affecting the surrounding healthy brain. This research demonstrates that the microbubble is as important to BBB opening as the ultrasound itself, since the amount and extent of the region undergoing BBB opening depend on the type, size, and stability of the microbubble used. Both bubble oscillation and collapse can result at the ultrasound pressures used for BBB opening, but selection of the right type of microbubble can determine which phenomenon occurs and the resulting tissue and cellular effects.
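
For context (this rule of thumb comes from the wider ultrasound literature, not from the talk itself), practitioners often gauge the likelihood of bubble collapse versus gentle oscillation with the mechanical index: the peak negative pressure in MPa divided by the square root of the frequency in MHz.

    # A standard rule of thumb (not from the talk itself): the mechanical
    # index MI = P_neg / sqrt(f) gauges whether bubbles are likely to
    # oscillate gently (low MI) or collapse inertially (high MI).
    def mechanical_index(peak_negative_pressure_mpa, frequency_mhz):
        return peak_negative_pressure_mpa / frequency_mhz ** 0.5

    # Hypothetical focused-ultrasound settings, for illustration:
    print(mechanical_index(0.45, 1.5))   # ~0.37: gentle oscillation likely
    print(mechanical_index(1.20, 1.5))   # ~0.98: collapse becomes plausible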

Impacting disciplines from drug development and drug delivery to neuroscience, blood-brain barrier behavior, neurology and neurosurgery, this new understanding will have fundamental implications for the long-term functional treatment of neurological and neurodegenerative diseases. Using this enhanced understanding of the blood-brain barrier and the role of microbubbles, it may be possible to develop a noninvasive means of successfully delivering potent pharmacological agents to treat neurological and neurodegenerative diseases that are currently not treatable.

The talk, "Identifying the Inertial Cavitation Threshold in a Vessel Phantom Using Focused Ultrasound and Microbubbles" (2pBB6) by Yao-Sheng Tung, James Choi, Shougang Wang, Jameel Feshitan, Mark Borden and Elisa Konofagou is at 2:45 p.m. on Tuesday, November 11.

******************************************************************
7) DETECTING A HUMAN SWIMMER

Protecting military ships, oil tankers, and pipelines from terrorists wearing scuba gear requires a sonar system that can distinguish a human swimmer from rocks, buoys, fish, and marine mammals. The Swimmer Detection Sonar Network (SDSN), developed by Scientific Solutions, Inc., does this with narrow sonar beams that cut out the clutter of a harbor environment.

The SDSN sonar beams are formed by parabolic reflectors, which also record the return echo. As many as 12 reflectors pointing in different directions can be mounted together as a node on a pier or bulkhead. The SDSN strategy is less expensive and more effective than other swimmer detection techniques that use phased arrays, says Peter Stein from Scientific Solutions. It also has the advantage of being modular, in that additional sonar nodes can easily be added to increase area coverage.

Stein and his colleagues have run multiple trials in harbors and found that SDSN has a near 100% probability of identifying human swimmers at a distance of one kilometer or more. However, in some places the detection range is limited by temperature gradients that deflect sound waves. The best solution here is to add more low-cost nodes to illuminate the other side of the sound barrier. The company has begun making SDSN commercially available.
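
A back-of-the-envelope active sonar budget (my sketch with illustrative dB values, not SDSN's actual model) shows how a kilometer-scale detection range arises: detection holds roughly as long as the signal excess stays positive.

    # A back-of-the-envelope active sonar budget (not SDSN's model): the
    # signal excess SE = SL - 2*TL + TS - (NL - DI) - DT, with spherical
    # spreading plus absorption. All dB values are illustrative guesses.
    import numpy as np

    SL, TS, NL, DI, DT = 210.0, -15.0, 60.0, 20.0, 10.0   # dB
    alpha = 0.01                                 # absorption, dB per meter

    r = np.arange(10.0, 3000.0, 10.0)            # candidate ranges, m
    TL = 20 * np.log10(r) + alpha * r            # one-way transmission loss
    SE = SL - 2 * TL + TS - (NL - DI) - DT
    print(f"max detection range ~ {r[SE >= 0].max():.0f} m")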

The talk, "Impact of the harbor environment on the performance of a swimmer" (4aUW8) by Peter Stein is at 9:45 a.m. on Thursday, November 13.

******************************************************************
8) VISUALIZING VIBRATIONS IN MUSICAL INSTRUMENTS

Musical instruments emit the sounds we recognize as music by vibrating in numerous normal modes. At session 3aMU, the first speaker, Thomas Rossing of Stanford University, provides an overview of how normal modes are studied. The modes can be studied through experiments -- such as holographic interferometry, speckle pattern interferometry, or scanning laser vibrometry -- or they can be studied mathematically by solving, or partially solving, the complicated equations governing the vibrations. Papers 2, 3, and 5 at the session are notable in that they report on modal analysis of musical instruments using inexpensive speckle pattern interferometers built by faculty and students at small undergraduate colleges. In this approach, many short-exposure images of the vibrating instruments are taken and then processed together to form a single, cleaner picture. The images they produce rival those obtained with expensive holographic interferometers. Papers 4 and 8 present examples of mathematical modal analysis of musical instruments using a finite element computer program, which sidesteps solving the complicated differential equations governing the modes through the use of numerical methods.
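
In the spirit of that numerical approach (though far simpler than a real finite element model of an instrument), the sketch below finds the normal-mode frequencies of an ideal fixed-fixed string by solving a discrete eigenvalue problem; the string parameters are illustrative.

    # A minimal numerical modal analysis, far simpler than a real finite
    # element model of an instrument: the normal modes of an ideal
    # fixed-fixed string fall out of a matrix eigenvalue problem.
    import numpy as np

    N, L, c = 200, 0.65, 329.0       # interior grid points, length (m),
                                     # wave speed (m/s); illustrative values
    h = L / (N + 1)
    # Discrete 1D Laplacian with fixed (zero-displacement) ends:
    D = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2

    eigvals = np.linalg.eigvalsh(-D)             # positive, ascending
    freqs = c * np.sqrt(eigvals) / (2 * np.pi)   # modal frequencies, Hz

    # Ideal-string theory predicts f_n = n * c / (2L): 253, 506, 759 Hz ...
    print(freqs[:3])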

The session "Musical Acoustics and Structural Acoustics and Vibration: Structural Vibrations in Musical Instruments" (3aMU) takes place from 8:00 a.m. to 11:45 a.m. on Wednesday, November 12.

******************************************************************
9) UNDERSTANDING RHYTHM MAY HELP PEOPLE WITH MOVEMENT DISORDERS

The ability to move rhythmically to a musical beat is found in all human cultures. While tapping your foot to the beat of music may seem effortless, the mechanism that actually allows us to do this remains a mystery. Dr. John Iversen at the Neuroscience Institute in San Diego is working to uncover the brain mechanisms that enable us to hear the beat, and thus allow us to dance, sing, and clap in time to music. Such knowledge could help people with movement disorders such as Parkinson's disease.

To understand beat perception better, Dr. Iversen is working together with Dr. Aniruddh Patel to measure the response of people's brains while they listen to rhythms. In fact, hearing the beat in music is an active, creative process -- the beat isn't actually in the music, but is created by our brains. It is that creative process that the researchers are trying to understand. Listeners were asked to hear a simple rhythm (two notes followed by a rest: a "swing" pattern) with the beat in different places. Hearing the beat on the first note yields a pattern that sounds like DA-dum DA-dum..., while hearing the beat on the second makes exactly the same rhythm sound different, like da-DUM da-DUM. Dr. Iversen found that how one hears the beat causes a large change in the brain's responses to sound. The changes were in a range of brain responses called the beta band (20 to 30 oscillations per second), which is known from past studies to be associated with activity in the brain's motor centers. This suggests that even the simple, seemingly passive act of listening to music involves motor processes, even in the absence of overt movement.
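
A minimal sketch of this kind of analysis (my illustration, not the authors' pipeline) band-passes a neural recording to the beta band and tracks its power envelope; here a synthetic signal stands in for real data.

    # A minimal sketch of this kind of analysis (not the authors' pipeline):
    # band-pass a neural signal to the beta band (20-30 Hz) and track its
    # power envelope. A synthetic signal stands in for real recordings.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000.0                                 # sample rate, Hz
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(0)
    # Noise plus a 25 Hz oscillation whose amplitude pulses at 2 Hz (the beat):
    signal = rng.standard_normal(t.size)
    signal += (1 + np.cos(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 25 * t)

    b, a = butter(4, [20 / (fs / 2), 30 / (fs / 2)], btype="band")
    beta = filtfilt(b, a, signal)               # zero-phase 20-30 Hz filter
    power = np.abs(hilbert(beta)) ** 2          # instantaneous beta power

    # Beta power should wax and wane at the 2 Hz beat rate:
    spectrum = np.abs(np.fft.rfft(power - power.mean()))
    freqs = np.fft.rfftfreq(power.size, 1 / fs)
    print(f"beta-power modulation peaks at {freqs[spectrum.argmax()]:.1f} Hz")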

When the ability to move is impaired, such as in Parkinson's disease, the effect can be catastrophic, often making basic rhythmic motor activities like walking impossible. However, researchers believe that the deep connection between music and movement can be used to assist those with movement disorders, as demonstrated by the field of music therapy where using music with a beat helps enable some patients with Parkinson's to walk smoothly again. By providing a basic understanding of the brain mechanisms underlying this process, more effective treatments could be developed for people with movement disorders, ultimately improving their quality of life.

The talk, "Neural dynamics of beat perception and production" (1aMU3) by Dr. John Iversen will be presented at 9:35 a.m. on Monday, November 10.

******************************************************************
10) VIRTUAL HARMONY: SYNCHRONIZING MUSIC OVER THE NET

High-speed internet makes it possible for a cellist in Canada to play Mozart with a violist in California. The SoundWIRE research group at Stanford University has been working on ways to synchronize a network performance.

Over the last few years, SoundWIRE has brought together musicians from New York, Norway and China, without anyone having to get on a plane. Although this offers a wider range of collaboration, there are special challenges to overcome. The biggest is the time delay, which typically lasts between 50 and 100 milliseconds depending on geographical distance. Time delays -- even those as short as 20 milliseconds -- can pose a problem. One way to deal with this is to pick music that is not heavily affected by a time lag, such as jazz improvisation or slow instrumental pieces. However, a rhythmically tight repertoire performed by a string quartet is especially sensitive to time delay. The musicians end up waiting for each other, causing the tempo of the music to slow down (an example of this can be heard at http://ccrma.stanford.edu/groups/soundwire/research/).
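
Rough arithmetic (my illustration, not SoundWIRE's measurements) shows why those delays are unavoidable: even at the speed of light in optical fiber, about 200 km per millisecond, long-haul one-way latency plus audio buffering lands squarely in the musically noticeable range.

    # Rough latency arithmetic (my illustration, not SoundWIRE's figures):
    # light in optical fiber covers about 200 km per millisecond, and audio
    # buffering and routing add more. Hop counts and buffers are guesses.
    def one_way_delay_ms(distance_km, buffer_ms=5.0, hops=10, per_hop_ms=0.5):
        fiber_km_per_ms = 200.0          # ~2/3 the vacuum speed of light
        return distance_km / fiber_km_per_ms + buffer_ms + hops * per_hop_ms

    print(f"New York -> San Francisco: {one_way_delay_ms(4700):.0f} ms one way")
    print(f"New York -> Oslo:          {one_way_delay_ms(5900):.0f} ms one way")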

The SoundWIRE group is exploring ways to use the delay as a structural musical element. Once the musicians become adapted to it, they can create entirely new works that may sound different on each side of the internet divide. "I think that since this practice is fairly new, most people need to experience it to realize all this potential, and once they do they are hooked," says group member Juan-Pablo Caceres. A CD of various SoundWIRE performances is due out in the coming year.

The talk, "Synchronization and acoustics in network performance" (2pMU2) by Juan-Pablo Caceres is at 1:55 p.m. on Tuesday, November 11.

******************************************************************
11) BUG SURVEILLANCE REVEALS MOTHS' STEALTHY SECRETS

Insect flight has previously been studied with slow motion video and strobe lights, but a new non-invasive technique could reveal much more about how bugs fly. This ultrasound-based system was designed primarily as a way of identifying agricultural pests, but it might also be used one day to listen in on faint bug whispers.

In preliminary lab trials, David Swanson and his colleagues at Pennsylvania State University placed a gypsy moth in a 200 kHz ultrasonic beam. They chose this frequency because anything below about 40 kHz would scare the moth into thinking a bat was attacking it. The team placed a receiver at different locations in order to measure the ultrasound waves bouncing off the moth. The signal showed distinct modulations that represented the wing beat frequency and body vibrations of the insect.
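
The signal processing behind this can be sketched simply (my illustration, not the Penn State processing chain): a beating wing amplitude-modulates the 200 kHz echo, so stripping off the carrier and examining the envelope recovers the wing-beat frequency. The 40 Hz wing-beat rate below is an assumed illustration value.

    # A sketch of the underlying idea (not the Penn State processing chain):
    # a beating wing amplitude-modulates the 200 kHz echo, so the envelope
    # of the received carrier reveals the wing-beat frequency. The 40 Hz
    # wing-beat rate here is an assumed illustration value.
    import numpy as np
    from scipy.signal import hilbert

    fs = 1_000_000                          # sample rate, Hz
    t = np.arange(0, 0.5, 1 / fs)
    f_carrier, f_wing = 200_000, 40.0

    received = (1 + 0.3 * np.sin(2 * np.pi * f_wing * t)) \
        * np.sin(2 * np.pi * f_carrier * t)

    envelope = np.abs(hilbert(received))    # strip off the 200 kHz carrier
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
    print(f"recovered wing-beat frequency: {freqs[spectrum.argmax()]:.0f} Hz")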

"We are getting to hear what the moth engine sounds like," Swanson says. His team found that the reflected signal was weakest when the moth was pointed away from the transmitter - a "stealth" advantage that likely explains why moths turn tail when they hear a bat's chirp.

Ultrasonic surveillance could later be used to listen to a bee hive or an ant colony without disturbing their natural behavior. This could reveal new forms of insect communication that have so far been unobserved because the sounds only travel a few centimeters before dying out.

The talk, "Ultrasonic characterization of insect wing reflectivity and wing-beat motion at 200 kHz" by David Swanson, Tom Baker, and Ryan terMeulen is at 3:00 p.m. on Thursday, November 13.

******************************************************************
12) LAPTOP MUSIC

The Internet allows musicians to perform together remotely and also to carry out real-time processing of the music being produced. It even allows anonymous users to participate in the ongoing sound product. Doug Van Nort (McGill University), who considers himself both an engineer and a musician, has been helping to facilitate internet concerts for several years, and is currently preparing a "concert" to be performed in the popular Second Life virtual reality website. Van Nort is especially interested in improvisations arising from the sonic and visual feedback (including gestures by the performers) afforded by the Internet environment and by the inherent time delay imposed on performance. All of these, Van Nort says, alter the way musical rhythm and timbre are perceived. He also hopes to tap the inherent musical ability of those who aren't otherwise equipped or trained to perform by developing a system that encourages their improvisational music making. (http://www.music.mcgill.ca/~doug/)

The talk, "Creating systems for collaborative network-based digital music performance" (2pMU4) by Doug Van Nort is at 2:45 p.m. on Tuesday, November 11.

******************************************************************
13) COMMUNICATING THROUGH WALLS

The thick walls that surround pressure vessels not only keep the contents from escaping, they can also -- unfortunately -- keep any information about the contents from reaching the outside. To get around this, an ongoing project is using acoustic signals to communicate through several inches of steel wall.

The goal is to avoid drilling a hole in a pressure vessel, which so far has been the only way to learn the temperature or chemical state inside. Such holes weaken the vessel structure and raise the possibility of leaks. Henry Scarton of Rensselaer Polytechnic Institute and his colleagues have developed an alternative method. They attach ultrasonic crystals to both sides of a steel wall and essentially let them tap out a signal -- not unlike two prisoners in adjoining cells communicating in Morse code.

In previous experiments, the researchers used a three-crystal system, but they now have a way to communicate with just two crystals. The first one, located outside the vessel, vibrates at 1 MHz, causing an ultrasonic wave to travel through the wall. On the other side, the second crystal vibrates in a way that modulates how much of the initial wave reflects back to the first crystal. It is through these modulations that the second crystal encodes whatever measurements an internal sensor has recorded, and the first crystal is able to retrieve this data by detecting the modulations. The researchers have shown that this setup can send 50,000 bits per second across the wall. Moreover, all of the internal electronics, including crystal and sensor, can be powered by skimming off a portion of the energy in the ultrasonic waves coming from the first crystal outside the vessel.
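
The two-crystal scheme works much like RFID backscatter. Here is a toy model (my illustration, not RPI's actual modulation scheme) in which the inner crystal toggles the reflectivity seen by the outer crystal, and the receiver decodes bits from the reflected amplitude; rates and levels are illustrative.

    # A toy model of reflection-modulated signaling (not RPI's actual
    # scheme): the inner crystal switches the reflectivity seen by the
    # outer crystal, which reads bits from the reflected amplitude -- much
    # like RFID backscatter. Rates and levels are illustrative.
    import numpy as np

    bits = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1])

    fs, bit_rate, f_c = 10_000_000, 50_000, 1_000_000   # Hz
    n = fs // bit_rate                    # samples per bit (200)
    t = np.arange(bits.size * n) / fs

    # Reflectivity toggles between two states depending on the current bit:
    reflectivity = np.where(np.repeat(bits, n) == 1, 0.9, 0.4)
    echo = reflectivity * np.sin(2 * np.pi * f_c * t)

    # Receiver: rectify, average over each bit period, threshold at midpoint
    level = np.abs(echo).reshape(bits.size, n).mean(axis=1)
    decoded = (level > level.mean()).astype(int)
    print("decoded without error:", np.array_equal(decoded, bits))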

The talk, "Two ultrasonic transducer through-wall communication system analysis" (2pEA5) by Henry A. Scarton is at 2:30 p.m. on Tuesday, November 11.

******************************************************************
14) UNDERSTANDING PITCH PERCEPTION MAY ENHANCE LEARNING

A team of researchers from Northwestern University has investigated how native speakers of tone and non-tone languages perceive musical pitch. Tone languages like Mandarin Chinese use pitch to differentiate words. This study and its results may provide further insight into how humans process auditory input, and the possible impact of experience on auditory processing -- insight that could be used in the classroom to enhance auditory-based learning.

In Mandarin Chinese, as in other so-called "tone languages," the pitch of a word (its "tone") determines its meaning, while non-tonal languages, such as English, do not use pitch to signal the meaning of a word. For example, in English, the word "dog" always means "dog," regardless of its pitch. Given their experience using word-level pitch, the research team surmised that native tone-language speakers might process music pitch differently than non-tone-language speakers. To investigate this difference, native speakers of Mandarin Chinese and native speakers of English were asked to identify short music melodies by matching the melody they heard to its visual representation (a series of arrows that indicated pitch rises and falls). They were also asked to discriminate between the melodies by indicating whether any two were identical or different.

Relative to the English speakers, the Mandarin speakers more easily discriminated, but less easily identified, the music melodies. This difference in performance across the two language groups may be taken to support the notion that music-pitch and speech-pitch are processed via common cognitive mechanisms. In addition, the research team posited that the pitch identification task -- where listeners had to match musical pitch patterns to visual representations of those patterns -- was subject to the influence of existing linguistic pitch categories for the Mandarin listeners. In contrast, the English listeners could perform the musical pitch pattern identification task without interference from any pre-existing pitch categories. However, in the pitch discrimination task -- where listeners do not compare the input to stored categories but instead focus on small acoustic differences between pairs -- the Mandarin listeners' discrimination seemed to be enhanced relative to that of the English listeners, possibly due to their experience with linguistic tone discrimination.

The results of this study could be used by teachers and students, who might use their experience with linguistic pitch to tailor their approaches to teaching and learning about pitch in music, and vice-versa. An intriguing future direction is to investigate whether different types of music training or tone-language backgrounds might affect pitch perception differently. For instance, a future study might compare the identification and discrimination of music pitches among pianists (who manually produce discrete pitches via an external instrument), string players (who manually produce individual and gliding pitches via an external instrument), and singers (who produce discrete and gliding pitches with their voices), as well as perception of those same music- and tone-pitch stimuli in speakers of different tone languages (e.g., Cantonese, with six tones, and Thai, with five tones). Such follow-up studies would potentially deepen our understanding of the effect of experience on the processing of pitch in music and speech and its impacts on auditory learning.

The talk, "Music melody perception in tone-language- and non-tone-language speakers" (2pSC8) by Jennifer Alexander, Ann Bradlow, Richard Ashley, and Patrick Wong, will be presented at 1:30 p.m. on Tuesday, November 11.

******************************************************************
PRESS REGISTRATION

Journalists are welcome to attend the conference free of charge. We will grant free registration to any credentialed full-time journalist or professional freelance journalist working on assignment for a major publication or outlet. If you are a reporter and would like to attend, please contact Jason Bardi ([email protected], 301-209-3091).

Meeting staff will be available to help you set up interviews with presenters, obtain background information on a story, and provide any other resources you may need.

WEBSITES OF INTEREST/MORE INFORMATION

- Main meeting web site: http://asa.aip.org/miami/information.html
- Downloadable meeting program: http://asa.aip.org/miami/program.html
- Meeting abstract search form: http://asa.aip.org/asasearch.html
- Hotel and other travel information: http://www.marriott.com/hotels/travel/miadl-doral-golf-resort-and-spa-a-marriott-resort/?groupCode=asaasaa&app=resvlink

WORLD-WIDE PRESS ROOM

ASA's World Wide Press Room will contain tips on dozens of stories as well as lay-language papers detailing some of the most newsworthy results at the meeting. Lay-language papers are roughly 500-word summaries written for a general audience by the authors of individual presentations with accompanying graphics and multimedia files. They serve as starting points for journalists who are interested in covering the meeting but cannot attend in person.

By the end of the first week of November, the World Wide Press Room (http://www.acoustics.org/press) will be updated with the new content for the 156th ASA meeting in Miami.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA

The Acoustical Society of America (ASA) is the premier international scientific society devoted to the science and technology of sound. Its 7,500 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America -- the world's leading journal on acoustics -- Acoustics Today magazine, and books and standards on acoustics. The Society also holds two major scientific meetings each year. For more information about the Society, visit our Web site, http://asa.aip.org.