Highlights from Upcoming 158th Meeting of Acoustical Society of America in San Antonio, Texas

Article ID: 557144

Released: 7-Oct-2009 4:45 PM EDT

Source Newsroom: American Institute of Physics (AIP)

Newswise — How does a woman's voice differ from a man's? Can being a good listener help a whale survive? Do babies who briefly hear a foreign language start to babble in that language?

These are just a few of the questions related to acoustics -- the science of sound -- that will be discussed at the 158th meeting of the Acoustical Society of America (ASA), which convenes from October 26-30, 2009 at the Hyatt Regency in San Antonio, Texas. The world's foremost experts in acoustics will present more than 650 talks and posters that draw from scientific disciplines as diverse as psychology, physics, animal bioacoustics, medicine, music, noise control, and speech.

Journalists are invited to cover the upcoming meeting either onsite in San Antonio or online through the meeting's World Wide Press Room. Information on how to obtain complimentary journalist registration can be found at the end of the release.

For additional highlights and meeting information, see:




Singing During Pregnancy May be Harder Due to Hormones
Homeland Security in the Hudson River
How Video Game Sounds Trigger Panic Attacks
Changes to the Voice When a Man Becomes a Woman
Are Whales Polite Listeners?
Baby Babbling as a Second Language
Portable Brain Scanner Detects Battlefield Shrapnel with Ultrasound
Why Teacher Talk Strains Voices, Especially for Women
The Sound of Ocean Warming After Half a Century
You Can Hear Me Now: Clearer, More Capable Cell Phones
Photoacoustic Imaging and Therapy Treats with Light and Sound
Acidifying Oceans Getting Noisier
Native English as a Second Language
Food in the Windpipe
Disappearing Vowels "Caught" on Tape in US Midwest



The question of how hormones affect a woman's voice is relevant to professional singers because hormonal fluctuations may place them at risk of injury. Knowing when the risks are greatest would help singers avoid performing at those times -- in the same way that a track star with a bad knee will sit out a competition.

One of the most dramatic hormonal fluctuations occurs during pregnancy, and many professional singers have experienced difficulty singing while pregnant. However, scientists do not know if this effect is due to hormones or to some other cause, such as decreased lung capacity as the baby grows.

In order to assess the effect of hormones on a pregnant singer's voice, Filipa Lã of Aveiro University in Portugal followed a professionally trained Portuguese singer through 12 weeks of pregnancy and for 12 weeks after birth. Once a week -- including just two days after the baby was born -- Lã recorded the singer reading and singing into a device that measures the pressure exerted to make each sound. Then Lã collaborated with Johan Sundberg of KTH in Stockholm, Sweden, to compare the data with measurements of the singer's hormone levels.

This was the first longitudinal study of the effect of hormones on a singer's voice during pregnancy, and Lã and Sundberg found that the increased levels of hormones correlated with changes to the singer's vocal folds. Though temporary, the changes forced the singer to exert more pressure from her lungs to make the same notes.

"It seems that it's harder work during pregnancy to sing," says Lã. She adds, however, that this is preliminary research based on a single case study and that larger studies would be needed before doctors could give solid advice to professional singers.

The talk "Observations of the singing voice during pregnancy. A case study" (3aMU7) by Filipa Lã is at 11:15 a.m. on Wednesday, October 28.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa355.html



Monitoring the daily ship traffic of a busy waterway like the Hudson River isn't an easy task for the Department of Homeland Security. The biggest ships are required to carry an Automatic Identification System that broadcasts information about their identity and location, but boats weighing less than 300 tons are often an invisible security risk.

Alexander Sutin and a team of acoustics experts at Stevens Institute of Technology in New Jersey are developing a system that tracks the traffic by listening to the noise it produces. They will present experimental data demonstrating the technology's ability to pick out and classify the sound of each boat in the throng.

As part of research conducted in the Center for Secure and Resilient Maritime Commerce (CSR), the national DHS Center of Excellence for Marine, Island and Port Security, Sutin and his team placed several underwater microphones ("hydrophones") in the Hudson River. These microphones recorded the din of engine and propeller noise produced by the ships above. They developed a computer algorithm that isolated each individual boat's sound and tracked its location based on how long the sound took to travel to each microphone.
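The time-difference-of-arrival idea behind this kind of tracking can be sketched in a few lines. Everything below is illustrative, not the Stevens team's algorithm: the hydrophone layout, the assumed sound speed of 1500 m/s, and the brute-force grid search are all stand-in assumptions.

```python
import math

SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s (assumed)

# Hypothetical hydrophone positions in metres (x, y) -- not the actual
# Hudson River deployment.
HYDROPHONES = [(0.0, 0.0), (500.0, 0.0), (0.0, 500.0), (500.0, 500.0)]

def arrival_times(source):
    """Time for sound from `source` to reach each hydrophone."""
    return [math.dist(source, h) / SOUND_SPEED for h in HYDROPHONES]

def locate(measured_times, grid_step=5.0, extent=600.0):
    """Grid search for the position whose time-difference-of-arrival
    (TDOA) pattern best matches the measured arrival times."""
    # Differences relative to the first hydrophone cancel the unknown
    # emission time of the ship's noise.
    meas_tdoa = [t - measured_times[0] for t in measured_times]
    best, best_err = None, float("inf")
    steps = int(extent / grid_step) + 1
    for i in range(steps):
        for j in range(steps):
            p = (i * grid_step, j * grid_step)
            times = arrival_times(p)
            tdoa = [t - times[0] for t in times]
            err = sum((a - b) ** 2 for a, b in zip(tdoa, meas_tdoa))
            if err < best_err:
                best, best_err = p, err
    return best

# Simulate a boat at a known spot and recover its position from timing alone.
true_pos = (230.0, 410.0)
estimate = locate(arrival_times(true_pos))
print(estimate)
```

In practice the arrival-time differences would come from cross-correlating the hydrophone signals, as the first talk's title suggests, rather than from a clean simulated source.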

The group was also able to classify each ship based on signature characteristics in its noise. Video cameras at the surface confirmed the accuracy of their technique.

"Classification parameters can be used like fingerprints to identify what class a ship is," says Sutin.

The propellers of slow-moving boats like barges, for example, generate low-frequency modulation, while fast-moving speedboats produce high-frequency modulation. The team used special signal-analysis techniques to extract these modulation patterns from the recorded noise.

The team hopes to develop a database that keeps track of every individual ship's identity to assist various agencies, including the U.S. Coast Guard, with their missions.

The talk "Cross correlation of ship noise for water traffic monitoring" (3aUW7) by Laurent Fillinger is at 10:15 a.m. on Wednesday, October 28.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa421.html

The talk "Passive acoustic classification of vessels in the Hudson River" (3aUW9) by Michael Zucker is at 10:45 a.m. on Wednesday, October 28.

See: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa423.html



The fast-paced graphics and pounding soundtracks of video games are designed to give the gamer a rush -- they quicken breathing, accelerate heart rate, and flush the skin. In rare cases, though, the anxiety that video games cause has been implicated in triggering panic attacks in susceptible individuals.

Judith Lauter and colleagues at Stephen F. Austin State University in Nacogdoches, Texas explored the neurological link between video game soundtracks and panic attacks. Their results suggest that the brainstem -- a normally imperturbable region of the brain that controls many of the body's automatic functions -- may respond to the intense sound of heavy breathing.

In the study, 12 subjects listened to a series of clicks, a standard test that neurologists use to assess brainstem function. Along with the clicks, the subjects also heard either the sound of calm breathing or the sound of stressed breathing, modeled after an action video game in which the character is frightened, wounded, and running.

Compared to the calm breathing, the stressed breathing decreased the brainstem's reaction to the clicks in every individual tested -- and the one male in the group was affected almost twice as much as any of the females. According to Lauter, the brainstem may interpret the breathing as a warning, "as a kind of 'Danger, Will Robinson!' alerting signal" and turns down responses to other sounds to focus on the intense breathing.

The experiment provides the first clue to the events that may lead to a panic attack. It is the first in a series of tests that will examine the sounds' effect on physiology and behavior -- from blood pressure and speech production to other areas of the brain such as the cerebral cortex.

The talk "How can a video game cause panic attacks? I. Effects of an auditory stressor on the human brainstem" (2aPP7) by Judith Lauter is at 9:45 a.m. on Tuesday, October 27.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa218.html



How does the voice of a woman differ from the voice of a man? You might think that pitch is the big difference, but according to speech pathologist James Dembowski, you would be wrong. And he should know -- for the last year, he has been working with a middle-aged transgender woman, teaching "Ms. J" to use her male vocal anatomy to speak in a womanly way.

"She did not want a high-pitched voice or some kind of ditzy-sounding voice," says Dembowski. "As a successful academic in a local university, she posed an interesting challenge. She wanted to sound feminine but not stereotypical."

When boys reach puberty, rising levels of testosterone change the anatomy of their vocal cords, causing the voice to crack and drop. Older female-to-male transgendered individuals taking hormone supplements experience similar changes. But when a man decides to become a woman, his vocal anatomy stays fixed and old speech habits have to be unlearned.

It's true that men and women do tend to speak at different pitches -- men around 130 hertz, women around 200 hertz.

But plenty of women also have lower voices, says Dembowski, who has been focusing on other characteristics drawn from the scientific literature. Women, on average, speak slower, while men often have more rasp and croak in their voices. And men tend to emphasize words by speaking louder, while women vary their pitch.

By recording his client as the therapy progressed and analyzing the speech quantitatively, Dembowski has learned that some of the features are easier to change than others.

So does he think that it is possible to teach someone who is biologically male to have a woman's voice?

"Yes, but it takes a lot of work," says Dembowski.

The talk "Acoustic changes associated with transgender speech therapy: A case study" (1aSC6) by James Dembowski is on Monday, October 26.

See: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa39.html



What do a West African drummer and a sperm whale have in common? According to some reports, they can both spot rhythms in the chatter of an ocean crowded with the calls of marine mammals -- a feat impossible for the untrained human ear.

Now a group of marine biologists at the Littoral Acoustic Demonstration Center has developed a tool that can spot these rhythms and identify individual animals. Their results suggest that whales make a specific effort to keep their calls from overlapping.

George Ioup at the University of New Orleans and colleagues have developed a way to analyze calls produced by marine mammals. Their technique, which follows principles similar to how the human ear picks out a voice at a crowded cocktail party, groups similar-sounding clicks to isolate the calls of individual animals.

Natalia Sidorovskaia of the University of Louisiana at Lafayette and colleagues have discovered that whales change the intervals between these echolocating clicks in a way that seems to keep the echoes of their calls from cluttering one another.

"In other words, whales are polite listeners; they do not interrupt each other," writes Sidorovskaia. She suspects that this communication strategy would allow groups of whales to explore their environment faster and more efficiently.

The talk "Identifying individual clicking whales acoustically I. From click properties" (1aSP3) by George Ioup is at 9:30 a.m. on Monday, October 26.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa50.html

The talk "Identifying individual clicking whales acoustically II. From click rhythms" (1aSP4) by Natalia Sidorovskaia is at 9:50 a.m. on Monday, October 26.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa51.html



When babies are six months old, they begin to develop the ability to understand their parents' speech, long before they themselves can actually speak. Researchers have shown that as babies acquire the ability to perceive their native language, they become especially attuned to its subtle nuances in sound. At the same time, they lose the ability to perceive differences between foreign sounds.

Researchers have also shown, however, that even short-term exposure to a foreign language can change this. Limited experience with a foreign language can affect a child's ability to process the sounds of that language. But scientists do not know if the same thing is true of language production. Can exposing infants to foreign languages influence how they babble, for instance?

Hoping to address this, Nancy Ward and Megha Sundara of the University of California, Los Angeles, together with their colleagues Patricia Kuhl and Barbara Conboy at the University of Washington, designed a study of 13 one-year-old children from English-speaking households. The babies were "trained" in five one-hour sessions over several weeks, during which they interacted with a native Spanish speaker. The babies were then recorded while interacting with a parent in English or with a Spanish speaker, and these recordings were played to adult listeners, who attempted to identify which babbles were from English-speaking babies.

The results -- to be announced for the first time at the meeting in San Antonio -- may show whether limited exposure to a foreign language can influence the way that a child babbles and address the question of how important imitation is to learning a new language.

The talk "Consequences of short term language exposure in infancy on babbling" by Nancy Ward is on Friday, October 30.

See: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa627.html



A CT scanner is a great tool for spotting shrapnel that has pierced the skull of a soldier, but it is far too cumbersome to lug into the heat of battle. So the Army is sponsoring a project to develop a portable scanner, based on ultrasound technology used to image babies in the womb.

The scanner is designed to be light, handheld, and operable by a soldier with no special medical knowledge.

"We're powering the device off of just two AA batteries," says Caleb Farny, who is designing and building it at Brigham and Women's Hospital in Boston, Massachusetts.

The device currently consists of two transducers that are moved along the outside of the head. They generate ultrasonic pulses and look for echoes, reflections produced by solid materials. The pulses are similar to those used in prenatal consultations, but lower in frequency -- which gives them the ability to penetrate the bony skull but limits the size of the foreign objects that can be detected.

"We can detect down to 2.5 millimeters, and I'm pretty sure we can go smaller than that," says Farny, who has used the device to detect bits of stainless steel embedded in a human skull filled with a substance that simulates brain tissue.
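The pulse-echo principle the device relies on is easy to illustrate: the time between emitting a pulse and hearing its reflection gives the reflector's depth. The tissue sound speed and echo time below are textbook-style assumptions for the sketch, not the scanner's actual specifications.

```python
# Illustrative pulse-echo ranging. 1540 m/s is the conventional
# soft-tissue sound speed used in medical ultrasound (an assumption
# here, not a figure from the device).
SOFT_TISSUE_SPEED = 1540.0  # m/s

def echo_depth_mm(round_trip_us):
    """Depth of a reflector given the round-trip echo time in microseconds."""
    round_trip_s = round_trip_us * 1e-6
    one_way_m = SOFT_TISSUE_SPEED * round_trip_s / 2  # pulse travels there and back
    return one_way_m * 1000

# An echo arriving 65 microseconds after the pulse implies a reflector
# roughly 50 mm deep.
print(f"{echo_depth_mm(65):.1f} mm")
```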

On the battlefield, the system would be used to detect the presence -- but not necessarily the location -- of a foreign object in the brain or, potentially, in any part of the body. An injured soldier could then be taken to a more sophisticated facility for imaging and treatment.

The talk "A low-frequency array system for transcranial foreign body detection" (2pBB5) by Caleb Farny is at 2:00 p.m. on Tuesday, October 27.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa258.html



Teachers tend to spend more time speaking than most professionals, putting them at a greater risk for hurting their voices -- they're 32 times more likely to experience voice problems, according to one study. And unlike singers or actors, teachers can't take a day off when their voices hurt.

Now a new study by the National Center for Voice and Speech (NCVS) reveals how teachers use their voices at work and at home and uncovers differences between male and female teachers.

Eric Hunter, deputy director of the NCVS, and colleagues equipped teachers with the NCVS voice dosimeter, a device that captures voicing characteristics such as pitch and loudness rather than actual speech. The dosimeter sampled their voices 33 times per second, and the researchers analyzed 20 million of these samples, collected during waking hours over a 14-day period for each teacher.

Female teachers used their voices about 10 percent more than males when teaching and 7 percent more when not teaching. The data also indicated that female teachers speak louder than male teachers at work.

"These results may indicate an underlying reason for female teachers' increased voice problems," writes Hunter.

All of the teachers spoke about 50 percent more when at work, at both a higher pitch and a higher volume (about 3 decibels louder). Instead of resting their overworked voices at home, the teachers also spent significant amounts of time speaking outside of work.

The talk "Variations in intensity, fundamental frequency, and voicing for teachers in occupational versus non-occupational settings" (1aSC14) by Eric Hunter is on Monday, October 26.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa47.html



Climate change affects the entire ecosystem: atmospheric conditions, oceans, continents, plants, animals, and human activities. Some of these changes can be monitored using sound waves. Changes in the temperature of the oceans, for example, can be measured using acoustics. Several talks at the ASA meeting will be reporting on how climate trends affect various acoustic phenomena.

The warming of the ocean is one such trend. Brian Dushaw of the University of Washington in Seattle will revisit a half-century-old experiment to show how acoustics could be used to monitor climate. In 1960 a powerful underwater sound signal was sent from a spot in the Indian Ocean near Perth, Australia, to a detector in Bermuda, thousands of miles away in the Atlantic Ocean. The acoustic pulse's travel time in traversing these "antipodal" points on opposite sides of the Earth was measured to be 13,382 seconds, or 3 hours and 43 minutes (a simulation of this event can be seen at this website: http://909ers.apl.washington.edu/~dushaw/Perth/pb.gif).

Since the speed of sound in water depends on the temperature of the water, Dushaw hopes to use the acoustic pulse's travel time to establish, through computer modeling, a temperature for the world's oceans for 1960. This in turn would serve as a baseline for newer measurements of ocean temperature.
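A back-of-envelope version of this idea is easy to run. The numbers below are loose assumptions, not Dushaw's modeling: the path is taken as exactly half of Earth's circumference, and a crude linear sound-speed/temperature relation is used in place of a full ocean-acoustics model, ignoring salinity, depth, and the actual refracted path.

```python
# Toy calculation: infer a path-average sound speed, and from it a
# rough temperature, from the 1960 Perth-to-Bermuda travel time.
EARTH_CIRCUMFERENCE_M = 4.0e7            # ~40,000 km (assumed round figure)
path_length_m = EARTH_CIRCUMFERENCE_M / 2  # antipodal points: half the circumference
travel_time_s = 13382                     # measured travel time from the article

avg_speed = path_length_m / travel_time_s
print(f"average sound speed: {avg_speed:.0f} m/s")

# Simplified near-surface relation c ~ 1449 + 4.6*T (T in deg C),
# dropping all salinity and depth terms -- illustration only.
temp_c = (avg_speed - 1449.0) / 4.6
print(f"implied path-average temperature: {temp_c:.1f} deg C")
```

The point of the sketch is only the sensitivity: because speed depends on temperature, a change of a fraction of a degree across the whole path shifts the travel time by many seconds, which is what makes such measurements useful for monitoring warming.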

The talk "Antipodal acoustic propagation and a half-century of ocean warming" (1aAO8) by Brian Dushaw is at 10:30 a.m. on Monday, October 26.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa16.html



The computing power of today's cell phone rivals that of home computers of a decade ago, allowing modern phones to receive more data and run an ever-expanding set of applications. At the ASA meeting, two groups of researchers will present their ideas for improving the functionality and sound quality of cell phones.

Yu-Hao Hsieh of National Cheng Kung University in Taiwan will discuss efforts to reduce noise and distortion in voices on 3G phones using a technique called time reversal signal processing. The technique boosts the signal-to-noise ratio by comparing the waves sent out to those reflected back. Hsieh will present results from an array of micro-electro-mechanical-system (MEMS) microphones that simulate a cell phone and discuss the enhancement levels achieved.

The talk "Research on the applications of time reversal method for noise control by use of micro-electro-mechanical-system array microphones for cell phones" (1pSPb3) by Yu-Hao Hsieh is at 4:00 p.m. on Monday, October 26.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa145.html

Cheol Hong Kim of Chonnam National University in Korea is interested in developing phones that automatically adjust the volume of their ringtones based on the loudness of the surrounding environment. He will present preliminary results from a study that looks at the relationship between the environment (temperature, vibration, ambient noise, and humidity) and the loudness of the ringtone. Kim will also present experimental results from a second project that makes use of a cell phone's vibration motors to enhance mp3-playing capabilities.

The talk "Analysis of the environmental effects on the ringtone volume" (5aNS11) by Cheol Hong Kim is at 11:50 a.m. on Friday, October 30.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa619.html

The talk "Improving mp3 capability of mobile phones by linking acoustic information with vibrations" (1pSPb4) by Cheol Hong Kim is at 4:15 p.m. on Monday, October 26.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa146.html



Atherosclerosis, the build-up of waxy plaques in the arteries, can turn deadly when certain plaques rupture, clotting downstream blood vessels and causing heart attacks and strokes. Physicians treating people with atherosclerosis would like to know which plaques are vulnerable to rupture, but there is currently no good way to tell.

Now Stanislav Emelianov and his colleagues at the University of Texas at Austin are developing a new technique that combines intravascular ultrasound imaging with photoacoustic imaging -- a combination that Emelianov refers to as "thunder and lightning." It may one day help doctors detect and characterize vulnerable atherosclerotic plaques, allowing a patient-tailored treatment of atherosclerosis.

In this technique, a combined photoacoustic and ultrasound imaging catheter is inserted into a blood vessel. The catheter first delivers short pulses of laser light. These pulses deposit energy into tissue, which then thermally expands ever so slightly, producing acoustic waves detected by the ultrasound imaging catheter. The acoustic response varies depending on the optical properties of the material in the plaque and on the wavelength of the laser light. By imaging with several different optical wavelengths, the researchers have shown that they can differentiate atherosclerotic plaques in rabbits that are at risk of rupturing. They were also able to visualize the placement of coronary stents.

In a related study, Emelianov and his colleagues showed that small nanoparticles that absorb the laser light and target molecular signatures of breast cancer tumors and melanomas can be used for image-guided photothermal therapy. Exciting the nanoparticles photoacoustically allows the tumors to be imaged. Moreover, if enough particles accumulate within the tumor, the laser radiation will selectively heat and kill the tumors -- an effect that can be monitored and guided by watching the photoacoustic signal. So far they have shown that this effect works in mice.

The talk "Intravascular ultrasound and photoacoustic imaging" (3aPA4) by Stanislav Emelianov is at 9:05 a.m. on Wednesday, October 28.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa359.html

The talk "Biomedical photoacoustics: From sensing to imaging to therapy" (3pID1) by Stanislav Emelianov is at 1:05 p.m. on Wednesday, October 28.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa427.html



Each year some 30 percent of the carbon dioxide released into the air from burning fossil fuel finds its way into the ocean; the rate today is about 1 million tons of carbon dioxide per hour. Carbon dioxide then forms carbonic acid, which in turn dissociates into bicarbonate, carbonate, and hydrogen ions. This has led to a measurable increase in the acidity of the ocean, or equivalently to a drop in the ocean’s pH value.

Acousticians are interested in this issue because the increased acidity can change how sound travels through the oceans. At a lower pH, dissolved borate in seawater can change into a form that absorbs less energy from underwater sound waves. The change in borate causes sounds to travel farther in the seas than they used to, especially sounds at lower frequencies, such as 1-3 kilohertz. The ocean is becoming more transparent to sound at frequencies used for human and marine animal communication.

How big is the change? Peter G. Brewer of the Monterey Bay Aquarium Research Institute says that a lowering of ocean pH by 0.3 (the expected mid-century consequence of a doubling of CO2 levels) will reduce absorption by about 40 percent, allowing some sound waves to travel as much as 70 percent farther than under current conditions. Brewer will report on the chemical mechanisms at work in the ocean that affect sound absorption, on the likely consequences for underwater sound, and on the possibility of using acoustic measurements to keep track of the changes over large scales.
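The scaling in Brewer's numbers can be checked with a one-line model. The assumption here, mine rather than Brewer's, is that absorption dominates over geometric spreading, so the range at which a sound falls below a given level scales inversely with the absorption coefficient.

```python
# Illustrative check: if low-frequency absorption drops by ~40 percent,
# how much farther does sound carry before suffering the same
# absorption loss? (Spreading losses ignored -- a deliberate
# simplification for this sketch.)
absorption_reduction = 0.40          # ~40% less absorption at pH -0.3
range_factor = 1.0 / (1.0 - absorption_reduction)
extra_range_pct = (range_factor - 1.0) * 100
print(f"sound travels about {extra_range_pct:.0f}% farther")
```

The result of this naive scaling lands in the same ballpark as the "as much as 70 percent farther" figure quoted above.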

The talk "Rapidly changing ocean pH and the increasing transparency of the ocean to sound" (2aAOb2) by Peter Brewer is at 10:45 a.m. on Tuesday, October 27.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa181.html



Linguists have known for some time that when someone who is fluent in one language becomes very fluent in another, the pendulum can swing too far in the opposite direction. The way they pronounce certain words in their first language changes -- making a native English speaker, for instance, sound foreign. This "phonetic drift" often happens to people who move to other countries and become assimilated in those cultures. They lose some of their native language because they don't use it much.

Now Charles Chang of the University of California, Berkeley has shown that this effect can happen faster than was previously thought. He has studied a group of 20 Americans who were taking a six-week Korean immersion program in Chuncheon, South Korea. He observed that the way these native English speakers pronounced certain sounds in English could change in as little as one week. Says Chang, this small study raises fundamental questions about the neuroscience of language -- how the brain keeps sounds from different languages distinct and how learning a new language affects the memory of another one.

The talk "Native language phonetic drift in beginning second language acquisition" (5aSC7) by Charles Chang is on Friday, October 30.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa632.html



Swallowed food and inhaled air both pass through the throat, but each must go down the right pipe: food to the stomach and air to the lungs. In healthy people this usually gets sorted out. But occasionally food goes down "the wrong way," and a coughing fit ensues.

Sometimes food or drink will get lodged on the vocal folds of the larynx, also called the "voicebox" owing to its role in producing spoken or sung sounds. Speech and song are produced through a complicated coordination of air from the lungs and the vibrations of the vocal folds, throat, nasal passages, and mouth cavity. Shanmugam Murugappan and his colleagues at the University of Cincinnati have studied how the presence of a foreign substance, such as swallowed food, changes the quality of speech. He believes this knowledge will help physicians diagnose patients with swallowing or certain breathing problems.

The talk "Characterization of acoustic behavior when prandial material is present in the larynx" (3pSC5) by Shanmugam Murugappan is on Wednesday, October 28.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa441.html



Try to pronounce the words "caught" and "cot." If you're a New Yorker by birth, the two words will sound as different as their spellings. But if you grew up in California, you probably pronounce them identically.

American English is slowly changing; across the nation, the two "low-back" vowel sounds in these words are merging, region by region. Now Christina Esposito of Macalester College has tracked the change sweeping eastwards across the Midwest into Minnesota.

Working with graduate students Hannah Kinney and Kaitlyn Arctander, she asked Minnesotans to read a list of 100 words that contain these vowels, recorded the speech, and analyzed patterns within the recordings.

"We make a visual representation of the speech, a spectrogram," says Esposito. "Every single vowel has its own unique frequencies, like a fingerprint."

Unlike past studies of other areas of the country, which relied on interviewing people over the telephone and judging differences by ear, Esposito's experiment recorded and dissected the speech quantitatively. Her results suggest that 30 percent of Minnesotans have lost the distinction between the two vowel sounds.

The talk "Low-back vowel merger in Minnesotan English" (1aSC4) by Christina Esposito is on Monday, October 26.

Abstract: http://asa.aip.org/web2/asa/abstracts/search.oct09/asa37.html



Main meeting website: http://asa.aip.org/sanantonio/sanantonio.html

Full meeting program: http://asa.aip.org/sanantonio/program.html

Searchable index: http://asa.aip.org/asasearch.html


In October, ASA's World Wide Press Room (www.acoustics.org/press) will be updated with additional tips on dozens of newsworthy stories and with lay-language papers, which are ~500-word summaries of presentations written by scientists for a general audience and accompanied by photos, audio, and video.


We will grant free registration to credentialed full-time journalists and professional freelance journalists working on assignment for major news outlets. If you are a reporter and would like to attend, please contact Jason Bardi (jbardi@aip.org, 301-209-3091), who can also help with setting up interviews and obtaining images, sound clips, or background information.


The Acoustical Society of America (ASA) is the premier international scientific society in acoustics, devoted to the science and technology of sound. Its 7,500 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world's leading journal on acoustics), Acoustics Today magazine, and books and standards on acoustics. The society also holds two major scientific meetings each year. For more information about ASA, visit our website at http://asa.aip.org.

