The Acoustical Society of America and the Canadian Acoustical Association are co-hosting a joint meeting May 13-17 at the Shaw Centre/Westin Ottawa Hotel.
Hannah White and her colleagues at Macquarie University researched how creaky voice is reflected in Australian English used in Sydney, and what factors influence its prevalence.
Xinyang Wu from the Hong Kong University of Science and Technology has designed a computer algorithm to intelligently create mashups using the drum tracks from one song and the vocals and instrumentals from another. The algorithm mimics the process used by professionals, identifying the most dynamic moments to adjust the tempo of the instrumental tracks and add the drum beat mashup at exactly the right moment for maximum effect. The result is a unique blend of pleasing lyrics and exciting instrumentals with wide-ranging appeal.
James Boland, an acoustician for SLR Consulting, employed insights from the field of sensory criminology to better understand the unique acoustic needs inside prison environments. By focusing on speech intelligibility, strategic reduction of noise levels, and the incorporation of privacy considerations, acoustic design can significantly improve the overall prison environment. Creating distinct zones within the prison and balancing moments of quiet with activity are essential to fostering a more comfortable and secure space.
Gea Oswah Fatah Parikesit and their team at Universitas Gadjah Mada have investigated the physics behind why the bundengan, a portable shelter woven from bamboo that features a collection of strings and bamboo bars, sounds better when played in the rain. The bundengan is constructed by weaving bamboo splits, which are covered by overlapping bamboo culm sheaths with ropes to secure everything in place. When wet, the culm sheaths seek to return to their curled form, but tied down in their planar formation, they instead press into each other. The resulting tension allows the sheaths to vibrate together.
Phoebe Peng, an Engineering Honours student at the University of Sydney, is researching ways to allow people with low vision and blindness to play pingpong using sound. The process uses neuromorphic cameras and an array of loudspeakers, designed to allow players to track the ball and movements based on sound. Using two perfectly positioned cameras, Peng could identify and track a ball in 3D in real time. She then fed that data into an algorithm controlling loudspeakers along the sides of the table, which created a sound field matching the position of the ball.
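Peng's exact mapping from ball position to loudspeaker output isn't described here, but the core idea of steering a sound image along an array of speakers can be sketched with simple pairwise amplitude panning. The linear gain law and the speaker positions below are illustrative assumptions, not her implementation:

```python
import numpy as np

def speaker_gains(ball_x, speaker_xs):
    """Map the ball's position along the table to per-speaker gains using
    simple pairwise amplitude panning, so the sound image moves between
    the two loudspeakers nearest the ball."""
    xs = np.asarray(speaker_xs, dtype=float)
    gains = np.zeros(len(xs))
    if ball_x <= xs[0]:
        gains[0] = 1.0                     # ball before the first speaker
    elif ball_x >= xs[-1]:
        gains[-1] = 1.0                    # ball past the last speaker
    else:
        i = np.searchsorted(xs, ball_x) - 1
        frac = (ball_x - xs[i]) / (xs[i + 1] - xs[i])
        gains[i], gains[i + 1] = 1.0 - frac, frac
    return gains
```

In a real system these gains would be updated on every camera frame, so the rendered sound source tracks the ball in real time.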
Joe Wolfe and John Smith from the University of New South Wales conducted acoustic experiments to study the didjeridu’s unusual and complicated performance techniques.
Parag Chitnis of George Mason University led a team that developed a wearable ultrasound system that can produce clinically relevant information about muscle function during dynamic physical activity. The system uses a patented approach that uses long-duration chirps and ultrasound sensing, and it allowed the team to design a simpler, cheaper system that could be miniaturized and powered by batteries. The result is an ultrasound monitor with a small, portable form factor that can be attached to a patient.
As humans develop, our eyes adjust based on how we use them, growing or shortening to focus where needed. We now know that blurred input while the eye is still growing causes myopia.

Darcy Dunn-Lawless, a doctoral student at the University of Oxford, is investigating the potential of painless, needle-free vaccine delivery via ultrasound. The method uses cavitation, the formation and popping of bubbles in response to a sound wave. Though initial in vivo tests showed the cavitation approach delivered 700 times fewer vaccine molecules than conventional injection, it produced a higher immune response. The researchers theorize this could be due to the immune-rich skin the ultrasonic delivery targets. The result is a more efficient delivery method that could help reduce costs and increase efficacy.
Press conferences for Acoustics 2023 Sydney will be held virtually at 8:00 a.m. AEDT, Dec. 6 and Dec. 7. Topics will focus on a wide range of newsworthy sessions from the upcoming meeting, which runs Dec. 4-8 in Sydney, Australia.
The Acoustical Society of America and the Australian Acoustical Society are co-hosting Acoustics 2023 Sydney, Dec. 4-8. The scientific conference brings together acousticians, researchers, musicians, and more experts from around the world.
To reach the stratosphere, Daniel Bowman of Sandia National Laboratories and his collaborators build relatively simple, solar-powered balloons that span 6 to 7 meters across. After releasing the balloons, they track their routes using GPS and use them to collect data and detect low-frequency sound with microbarometers. Rarely disturbed by planes or turbulence, the microphones on the balloons pick up a variety of sounds unheard anywhere else. Bowman will present his findings using these solar-powered balloons to eavesdrop on stratospheric sounds at the upcoming 184th ASA Meeting.
Acoustic monitoring is the go-to solution for locating a leak in a large urban pipe network, as the sounds from leaks are unique and travel far in water, but even this method struggles in complex systems. To tackle the problem, Pranav Agrawal and Sriram Narasimhan from UCLA developed algorithms that operate on acoustic signals collected via hydrophones mounted on fire hydrants. In doing so, the team can avoid costly excavation and reposition the devices as needed. Combined with novel probabilistic and machine-learning techniques to analyze the signals and pinpoint leaks, this technology could support water conservation efforts.
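The researchers' specific algorithms aren't detailed here, but a classic building block for hydrophone-based leak localization is cross-correlating the signals from two sensors: the lag of the correlation peak gives the difference in arrival times of the leak noise, which places the leak along the pipe between them. A minimal NumPy sketch, with function name, sensor spacing, and wave speed as illustrative assumptions:

```python
import numpy as np

def locate_leak(sig_a, sig_b, fs, sensor_distance, wave_speed):
    """Estimate a leak's position between two hydrophones from the time
    difference of arrival (TDOA) of the leak noise.

    sig_a, sig_b    : time-aligned signals from hydrophones A and B
    fs              : sampling rate (Hz)
    sensor_distance : pipe distance L between the sensors (m)
    wave_speed      : speed of sound c in the water-filled pipe (m/s)
    Returns the leak's distance d from sensor A (m).
    """
    # Cross-correlation peak lag = number of samples A lags behind B.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    tdoa = lag / fs
    # Travel times satisfy d/c - (L - d)/c = tdoa  =>  d = (L + c*tdoa)/2
    return (sensor_distance + wave_speed * tdoa) / 2
```

With sensors mounted on fire hydrants, repositioning just means re-running the same estimate with a new sensor pair, rather than excavating.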
Scientists can harness sound on other worlds to learn about properties that might otherwise require a lot of expensive equipment, like the chemical composition of rocks, how atmospheric temperature changes, or the roughness of the ground. Extraterrestrial sounds could also be used in the search for life. Timothy G. Leighton from the University of Southampton has designed a software program that produces extraterrestrial environmental sounds and predicts how human voices might change in distant worlds. He will demonstrate his work at the upcoming 184th ASA Meeting.
At the 184th ASA Meeting, Colin Malloy of Ocean Network Canada will present his method to transform ocean data into captivating, solo percussion songs. He employs sound from hydrophones and introduces elements inspired by ocean-related data such as temperature, acidity, and oxygenation. For example, in his piece, Oil & Water, Malloy represents the impact of oil production on the oceans. He plays an eerily catchy melody on steel drums and inserts noise to represent oil production over the past 120 years.
At the 184th ASA Meeting, Yolanda Holt of East Carolina University will describe aspects of the systematic variation between African American English and white American English speech production in children. Holt and her team examined final consonant clusters in 4- and 5-year-olds. Using instrumental acoustic phonetic analysis, they discovered that the variation in final consonant production in AAE is likely not a wholesale elimination of word endings but perhaps a difference in aspects of articulation. Professional understanding of the difference between typical variation and errors is the first step toward accurately identifying speech and language disorders.
Spread across 106 acres in southcentral Utah, the Pando aspen grove resembles a forest but is actually a single organism with more than 47,000 genetically identical aspen stems connected at the root. As an artist-in-residence for the nonprofit group Friends of Pando, Jeff Rice used a variety of microphones to record Pando’s leaves, birds, and weather. As part of the 184th ASA Meeting, Rice and Lance Oditt will describe their work to reveal a unique acoustic portrait of this botanical wonder.
By understanding how insects perceive sound and using 3D-printing technology to create custom materials, it is possible to develop miniature, bio-inspired microphones.
With optoacoustic tomography emerging as an effective breast cancer screening method, Seonyeong Park of the University of Illinois Urbana-Champaign and her team wanted to determine its reliability in patients with darker skin. They simulated a range of skin colors and tumor locations using digital breasts to make rapid and cost-effective evaluations, and the results confirmed that tumors could be harder to locate in individuals with darker skin. Park has developed a virtual framework that allows for more comprehensive investigations and can serve as a tool for evaluating and optimizing new OAT imaging systems in their early stages of development.
At the 184th ASA Meeting, Georgia Zellou and Michelle Cohn of the University of California, Davis will describe experiments to investigate how speech and comprehension change when humans communicate with AI. They examined how people adjust their voice when communicating with an AI system compared to talking with another human and, on the listening side, how the sound of a device's voice affects how well listeners understand it.
The coqui frog, one of Puerto Rico's most iconic animals, gets its name from its distinctive two-note call, "co-qui," which can be heard throughout the island every night. The males produce these calls to mark their territory and ward away rivals, but scientists can use them to study the changing climate. At the 184th ASA Meeting, Peter Narins of the University of California, Los Angeles will describe changes in the coqui calls over a 23-year period. Every frog call had grown higher in pitch, a sign that the frogs had shifted to higher, cooler elevations: a mini-migration that corresponds with the temperature shift induced by climate change.
At the 184th ASA Meeting, Ashley Alva of the Georgia Institute of Technology will describe how attaching microbubbles to macrophages, a type of white blood cell, can create high-resolution and sensitive tracking images useful for disease diagnosis. Because of the attached microbubbles, the cells sent back an echo when hit with ultrasound, which is nonionizing and noninvasive and has great depth of penetration. This allowed the team to visualize the macrophages in vivo with high resolution and sensitivity. Visualizing macrophages in vivo could also provide a powerful tool for understanding immune responses and monitoring therapeutic efficacy.
Imagine a cocktail party full of 3D-printed, humanoid robots listening and talking to each other. That seemingly sci-fi scene is the goal of the Augmented Listening Laboratory at the University of Illinois Urbana-Champaign. With precise control over the simulated subjects, the researchers can adjust the parameters of the experiment and even set the machines in motion to simulate neck movements. They will describe the talking human head simulators, and their work investigating how humans receive sound and developing audio technology, at the 184th ASA Meeting.
At the 184th ASA Meeting, Emily Sandgren and Joshua Alexander of Purdue University will describe experiments to determine the best hearing aids for listening to music. To test and compare, they took over 200 recordings of music samples as processed by hearing aids from seven popular manufacturers. They asked study participants to rate the sound quality of these recordings and found that the hearing aids had lower ratings for music than their control stimuli. The researchers found bigger differences in music quality between hearing aid brands than between speech and music programs.
The 184th ASA Meeting will include three press conferences on Tuesday, May 9. The in-person presentations will also be livestreamed and recorded. Topics will focus on a wide range of newsworthy sessions, including 3D-printing head simulators, tracking immune cells with ultrasound, investigating the impact of skin color on breast cancer diagnosis, mimicking insects to create miniature microphones, and locating leaks in water networks. Reporters can register for in-person or virtual attendance.
ASA will hold its 184th meeting May 8-12 in Chicago, offering in-person and hybrid sessions throughout the week. The scientific conference brings together acousticians, researchers, musicians, and more from around the world, who will describe their work on topics that include measuring the calls of Puerto Rican coqui frogs, communicating with artificial intelligence, capturing the sounds of the stratosphere, simulating sounds on other planets, and ensuring linguistic justice by considering the unique aspects of African American English. Conference highlights can be found on social media by searching the #ASA184 hashtag and reporters are invited to attend in-person and hybrid sessions at no cost.
The Acoustical Society of America offers Science Communication Awards in Acoustics to recognize excellence in the communication of acoustics-related topics to a popular audience. The 2023 award cycle will accept content created between Jan. 1, 2021, and Dec. 31, 2022; if you have seen, heard, or created something acoustics-related during this time frame, please nominate it! Each nominated entry will be judged according to its general accessibility, relevance to acoustics, accuracy, and quality. Nominations will be accepted until March 15, 2023.
Studying whether animals possess additional language-related skills can help us understand what it takes to learn speech and reveal the history of its evolution. Andrea Ravignani and colleagues studied seal pups' vocal plasticity, or how well they can adjust their own voices to compensate for their environment, and found that seal pups can change the pitch and volume of their voices, much like humans can. Ravignani will discuss his work linking vocal learning with vocal plasticity and rhythmic capacity at the 183rd ASA Meeting.
At the 183rd ASA Meeting, Kenton Hummel will describe how soundscape research in day cares can improve child and provider outcomes and experiences. He and his team collaborated with experts in engineering, sensing, early child care, and health to monitor three day care centers for 48-hour periods. High noise levels and long periods of loud fluctuating sound can negatively impact children and staff by increasing the effort it takes to communicate. In contrast, a low background noise level allows for meaningful speech, which is essential for language, brain, cognitive, and social/emotional development.
At the 183rd ASA Meeting, Brendan Smith will describe how hydrophones can listen to the sounds of deep-sea hydrothermal vents, informing assessments of the environmental impacts of deep-sea mining and assisting with interplanetary exploration. He and his supervisor David Barclay have developed noninvasive ways to study the vents that are sustainable in the long term because they work from a safe distance. Understanding the acoustics in the vicinity could help predict and prevent environmental impacts.
Modern movie sound mixing uses techniques like impulse responses to reproduce dialogue and other sounds. These methods are crucial to align what moviegoers see and hear and keep them engaged in the story. At the 183rd ASA meeting, Jeffrey Reed of Taproot Audio Design will demonstrate the behind-the-scenes audio engineering required to re-create the acoustics of movie sets and locations, sharing short clips of film to compare the original recording to the studio mixed product.
"I am sitting in a room, different from the one you are in now." With these words, Alvin Lucier begins a fascinating recording where his voice warps and becomes indistinguishable over time, solely because of how sound reflects in the room. For physics students, this audio can be used to reveal details of the surrounding room and teach important lessons about acoustic resonance. Andy Piacsek, of Central Washington University, will discuss how he employs Lucier's project in the classroom during his talk, "Students are sitting in a room."
For musicians, sound designers, and other audio professionals, a text-to-audio model opens avenues of creative application and exploration and provides workflow-enhancing tools. At the 183rd ASA Meeting, Zach Evans will present his team's early success in generating coherent and relevant music and sound from text. They employed data compression methods to generate the audio with reduced training time and improved output quality, and they plan to expand to larger data sets and release their model as an open-source option for others to use and improve.
Louis Urtecho and his team hope to study dust devils in the Mojave Desert on Earth, then scale the analysis to account for the different atmosphere on Mars. Based on microbarometer data from the Mojave, they built an algorithm to look for the pressure activity indicative of a dust devil. The vortices have a distinct drop in pressure near their centers, and their pressure trace over time resembles an electrocardiogram signal. The team hopes to learn more about the convective vortices and how they move, which will improve the accuracy of Martian weather models.
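The team's detector isn't spelled out here, but the signature they describe, a distinct pressure drop near the vortex center, suggests the basic shape of such an algorithm: subtract a slowly varying background from the microbarometer trace and flag short-lived dips. A toy sketch, with the window length and drop threshold as illustrative assumptions:

```python
import numpy as np

def find_pressure_drops(pressure, fs, window_s=60.0, drop_pa=0.5):
    """Flag candidate dust-devil vortices as short-lived dips below a
    slowly varying background pressure.

    pressure : microbarometer samples (Pa)
    fs       : sampling rate (Hz)
    window_s : moving-average background window length (s)
    drop_pa  : minimum dip below background to count as a candidate (Pa)
    Returns the sample indices where the dip exceeds the threshold.
    """
    w = max(1, int(window_s * fs))
    background = np.convolve(pressure, np.ones(w) / w, mode="same")
    residual = pressure - background
    return np.flatnonzero(residual < -drop_pa)
```

A production version would also test the dip's duration and shape against the expected electrocardiogram-like profile before accepting it as a vortex.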
At the 183rd ASA Meeting, researchers will describe "The evolution of Blackbird Studio C," a space designed to provide an accurate and immersive mixing and production environment. They wanted a unique space that would allow ambient sound to decay equally across different frequencies and remain free from interfering reflections, making it sound like an indoor forest. So they covered the walls and ceiling with primitive root diffusers, a technology that causes sound energy to diffuse and radiate in many directions.
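Primitive root diffusers get their scattering behavior from number theory: for a prime p with primitive root g, the sequence s_n = g^n mod p visits every nonzero residue exactly once, and the diffuser's well depths are made proportional to s_n (commonly scaled by the design wavelength). A small sketch of the sequence generator, with the function name as an illustrative choice:

```python
def primitive_root_sequence(p, g):
    """Well-depth sequence for a primitive root diffuser (PRD).

    For a prime p and a primitive root g of p, s_n = g^n mod p
    (n = 1 .. p-1) permutes the nonzero residues; well depths are
    proportional to s_n, commonly scaled as s_n * lambda0 / (2 * p)
    for a design wavelength lambda0.
    """
    seq = [pow(g, n, p) for n in range(1, p)]
    # Sanity check: a true primitive root visits every residue 1..p-1 once.
    if sorted(seq) != list(range(1, p)):
        raise ValueError(f"{g} is not a primitive root of {p}")
    return seq
```

For example, `primitive_root_sequence(7, 3)` yields the permutation `[3, 2, 6, 4, 5, 1]`; a real studio design would use a much larger prime so the wells span the frequency range of interest.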
At the 183rd ASA Meeting, Markus Mueller-Trapet will describe experiments designed to simulate and measure the perceived annoyance experienced from noisy neighbors in multi-unit residential buildings. He and his team provided a living room-like situation and recorded impact sounds of objects dropping and people walking. They then presented the recordings to study participants, using different playback techniques and virtual reality, and created an online survey. The team hopes to provide guidance to architects and building code developers.
Researchers describe how a noninvasive microphone sensor could identify bowel diseases without collecting any identifiable information. They tested the technique on audio data from online sources, transforming each audio sample of an excretion event into a spectrogram, which essentially captures the sound in an image. The images were fed to a machine learning algorithm that learned to classify each event based on its features. The algorithm's performance was tested against data with and without background noises.
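The exact features and classifier aren't specified here, but the described pipeline (audio sample → spectrogram "image" → machine learning classifier) can be sketched with a plain NumPy short-time Fourier transform. The window sizes and the mean-spectrum feature below are illustrative choices, not the authors' settings:

```python
import numpy as np

def log_spectrogram(signal, win=256, hop=128):
    """Log-magnitude short-time Fourier transform -- the 'image'
    representation that would be fed to the classifier."""
    window = np.hanning(win)
    frames = np.array([signal[i:i + win] * window
                       for i in range(0, len(signal) - win + 1, hop)])
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

def spectrogram_features(signal, win=256, hop=128):
    """Collapse the spectrogram to a fixed-length feature vector
    (mean spectrum over time) so clips of any length are comparable."""
    return log_spectrogram(signal, win, hop).mean(axis=0)
```

Feature vectors like these can be handed to any standard classifier, which is then scored on held-out audio with and without background noise, as the researchers did.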
Researchers have developed a machine learning algorithm to identify cough sounds and determine whether the subject is suffering from pneumonia. Because every room and recording device is different, they augmented their recordings with room impulse responses, which measure how the acoustics of a space react to different sound frequencies. By combining this data with the recorded cough sounds, the algorithm can work in any environment.
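Augmenting a recording with a room impulse response amounts to convolving the dry signal with that response, which stamps the room's acoustics onto the cough. A minimal sketch, where the truncation to the original length and the peak normalization are illustrative choices:

```python
import numpy as np

def augment_with_rir(dry, rir):
    """Simulate how a dry cough recording would sound in a given room by
    convolving it with that room's impulse response, then normalizing so
    augmented clips share a common peak level."""
    wet = np.convolve(dry, rir)[: len(dry)]
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet
```

Running every cough through many different measured impulse responses multiplies the training set, so the classifier learns features of the cough itself rather than of any one room or recording device.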
Researchers present an app that creates playlists to help listeners care for their emotions through music. The app could be used by people who may not want to receive counseling or treatment because of feelings of shame, inadequacy, or distrust and aims to leave them more positive and focused than they were when they began. Users take three self-led questionnaires to measure their emotional status and the app then creates a customized playlist of songs using one of three strategies: consoling, relaxing, or uplifting.
In a crowded restaurant, the sounds of conversations bounce off walls, creating background noise. Each individual wants to be heard, so they end up talking a little bit louder, which increases the overall din. Eventually – barring an interruption – the system gets loud enough to reach the limit of the human voice. Braxton Boren will discuss this cycle, called the Lombard effect, and how it can be disrupted in his presentation, "A game theory model of the Lombard effect in public spaces."
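The feedback loop described above can be captured in a toy iterative model (not Boren's game-theoretic formulation): each talker raises their level to stay a fixed margin above the background produced by everyone else, until the ceiling of the human voice caps the escalation. All parameter values below are illustrative assumptions:

```python
import math

def lombard_equilibrium(n_talkers, ambient_db=45.0, margin_db=3.0,
                        vocal_limit_db=88.0, iters=200):
    """Iterate a toy Lombard-effect loop and return the talker level (dB)
    the room settles at. Each round, every talker sets their level to
    margin_db above the background they hear, capped at the vocal limit."""
    level = ambient_db
    for _ in range(iters):
        # Background at a listener: ambient noise plus the other talkers,
        # summed as acoustic power, then converted back to dB.
        others = 10 * math.log10((n_talkers - 1) * 10 ** (level / 10)
                                 + 10 ** (ambient_db / 10))
        level = min(others + margin_db, vocal_limit_db)
    return level
```

With more than one talker the loop has no interior equilibrium, so the model climbs until everyone is shouting at the vocal limit, which is exactly the runaway din the Lombard effect produces in real rooms.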
The 183rd ASA Meeting will include an urban sound walk, in which media are invited to explore Nashville, its sounds, and efforts to design projects that enhance the sonic environment and mitigate noise. Following the walk, ASA will host a workshop on soundscape design and how planning can be used to create sustainable, walkable, livable urban environments. The walk is an opportunity for media and anyone interested in urban soundscapes, while the workshop is intended for city planners, architects, officials, and others whose work lies on the interface between sound and the community. All are welcome.
The Acoustical Society of America (ASA) will hold its 183rd meeting Dec. 5-9 at the Grand Hyatt Nashville Hotel. ASA183 will be an in-person meeting with several hybrid sessions where remote attendance will also be possible. Reporters are invited to attend the meeting at no cost.
When an offshore wind farm pops up, there is a period of noisy but well-studied and in most cases regulated construction. Once the turbines are operational, they provide a valuable source of renewable energy while emitting a constant lower level of sound.
For someone using an assistive listening device in a crowded place, it might make little difference whether the device is on or off. Nearby conversation directed at the user might be drowned out by distant conversation between other people, ambient noise from the environment, or music or speech piped through a loudspeaker system.
Corey and his colleagues worked to eliminate at least one source of noise, the one emanating from loudspeakers or other broadcast systems.
When a coral reef is healthy, the hundreds of thousands of animals living there sound altogether like static on the radio, or the snap, crackle, and pop of a bowl of Rice Krispies as you pour milk on the cereal. For reefs that are not healthy, the sound changes, becoming quieter and less diverse.
Infrasound waves can probe some of the most complex weather patterns hidden to normal observations, but finding a powerful enough source of infrasound waves can be a challenge unless there is a munitions factory nearby.
Atherosclerosis, a buildup of plaque, can lead to heart disease, artery disease, and chronic kidney disease and is traditionally treated by inserting and inflating a balloon to expand the artery. During the 182nd ASA Meeting, Rohit Singh, of the University of Kansas, will present a method that combines a low-power laser with ultrasound to remove arterial plaque safely and efficiently.