Fever was found to be the most common non-respiratory feature of infection with SARS-CoV-2, the virus that causes COVID-19, according to research published at the ATS 2023 International Conference.
In a randomized controlled trial, ICUconnect helped ICU physicians reduce the unmet palliative care needs of critically ill patients and their families more effectively than standard care, according to research published at the ATS 2023 International Conference.
For children with asthma residing in urban areas, the neighborhood they live in is a stronger predictor of whether they will have exacerbations (asthma attacks) than their family’s income or their parents’ level of educational attainment, according to research published at the ATS 2023 International Conference.
Stony Brook University will honor the life and legacy of eminent paleoanthropologist, conservationist and politician Richard E. Leakey by hosting “Africa: The Human Cradle: An International Conference Paying Tribute to Richard E. Leakey” from June 5–9, 2023, at the university’s Charles B. Wang Center. The Turkana Basin Institute (TBI) and Stony Brook are hosting the conference in partnership with the National Geographic Society. Thought leaders from around the world will celebrate Leakey’s immeasurable, lifelong contributions to furthering appreciation of Africa’s centrality in the narrative of human evolution.
Researchers at Rutgers University have found a major flaw in the way that algorithms designed to detect "fake news" evaluate the credibility of online news stories. Most of these algorithms rely on a credibility score for the "source" of the article, rather than assessing the credibility of each individual article, the researchers said.
To reach the stratosphere, Daniel Bowman of Sandia National Laboratories and his collaborators build relatively simple, solar-powered balloons that span 6 to 7 meters across. After releasing the balloons, they track their routes using GPS and use them to collect data and detect low-frequency sound with microbarometers. Rarely disturbed by planes or turbulence, the microphones on the balloons pick up a variety of sounds unheard anywhere else. Bowman will present his findings from using these solar-powered balloons to eavesdrop on stratospheric sounds at the upcoming 184th ASA Meeting.
Acoustic monitoring is the go-to solution for locating a leak in a large urban pipe network, as the sounds from leaks are unique and travel far in water, but even this method struggles in complex systems. To tackle the problem, Pranav Agrawal and Sriram Narasimhan from UCLA developed algorithms that operate on acoustic signals collected via hydrophones mounted on fire hydrants, allowing the team to avoid costly excavation and reposition the devices as needed. Combined with novel probabilistic and machine-learning techniques to analyze the signals and pinpoint leaks, this technology could support water conservation efforts.
Scientists can harness sound on other worlds to learn about properties that might otherwise require a lot of expensive equipment, like the chemical composition of rocks, how atmospheric temperature changes, or the roughness of the ground. Extraterrestrial sounds could also be used in the search for life. Timothy G. Leighton from the University of Southampton has designed a software program that produces extraterrestrial environmental sounds and predicts how human voices might change in distant worlds. He will demonstrate his work at the upcoming 184th ASA Meeting.
At the 184th ASA Meeting, Colin Malloy of Ocean Networks Canada will present his method for transforming ocean data into captivating, solo percussion songs. He employs sound from hydrophones and introduces elements inspired by ocean-related data such as temperature, acidity, and oxygenation. For example, in his piece Oil & Water, Malloy represents the impact of oil production on the oceans: he plays an eerily catchy melody on steel drums and inserts noise to represent oil production over the past 120 years.
At the 184th ASA Meeting, Yolanda Holt of East Carolina University will describe aspects of the systematic variation between African American English and white American English speech production in children. Holt and her team examined final consonant cluster production in 4- and 5-year-olds. Using instrumental acoustic phonetic analysis, they discovered that the variation in final consonant production in AAE is likely not a wholesale elimination of word endings but perhaps a difference in aspects of articulation. Professional understanding of the difference between typical variation and errors is the first step toward accurately identifying speech and language disorders.
Spread across 106 acres in southcentral Utah, the Pando aspen grove resembles a forest but is actually a single organism with more than 47,000 genetically identical aspen stems connected at the root. As an artist-in-residence for the nonprofit group Friends of Pando, Jeff Rice used a variety of microphones to record Pando’s leaves, birds, and weather. As part of the 184th ASA Meeting, Rice and Lance Oditt will describe their work to reveal a unique acoustic portrait of this botanical wonder.
By understanding how insects perceive sound and using 3D-printing technology to create custom materials, it is possible to develop miniature, bio-inspired microphones.
With optoacoustic tomography (OAT) emerging as an effective breast cancer screening method, Seonyeong Park of the University of Illinois Urbana-Champaign and her team wanted to determine its reliability in patients with darker skin. They simulated a range of skin colors and tumor locations using digital breast models to make rapid and cost-effective evaluations, and the results confirmed that tumors could be harder to locate in individuals with darker skin. Park has developed a virtual framework that allows for more comprehensive investigations and can serve as a tool for evaluating and optimizing new OAT imaging systems in their early stages of development.
At the 184th ASA Meeting, Georgia Zellou and Michelle Cohn of the University of California, Davis will describe experiments to investigate how speech and comprehension change when humans communicate with AI. They examined how people adjust their voice when communicating with an AI system compared to talking with another human and, on the listening side, how what a device sounds like impacts how well listeners will understand it.
Join us July 22-25 in Boston for an exciting lineup of scientific symposia, educational sessions, hot-topic discussions, and award lectures covering the latest developments in nutrition science.
Synthetic data from large language models can mimic human responses in interviews and questionnaires. Research data from popular crowdsourcing platforms may now contain fake responses that cannot be reliably detected, raising the risk of poisoned data.
The coqui frog, one of Puerto Rico’s most iconic animals, gets its name from its distinctive two-note call, “co-qui,” which can be heard throughout the island every night. The males produce these calls to mark their territory and ward off rivals, but scientists can use them to study the changing climate. At the 184th ASA Meeting, Peter Narins of the University of California, Los Angeles will describe changes in coqui calls over a 23-year period. Every frog call had grown higher in pitch, indicating a mini-migration that corresponds with the temperature shift induced by climate change.
At the 184th ASA Meeting, Ashley Alva of the Georgia Institute of Technology will describe how attaching microbubbles to macrophages, a type of white blood cell, can create high-resolution and sensitive tracking images useful for disease diagnosis. Because of the attached microbubbles, the cells sent back an echo when hit with ultrasound, which is nonionizing and noninvasive and penetrates deeply into tissue. This allowed the team to visualize the macrophages in vivo with high resolution and sensitivity. Visualizing macrophages in vivo could also provide a powerful tool for understanding immune responses and monitoring therapeutic efficacy.
Imagine a cocktail party full of 3D-printed, humanoid robots listening and talking to each other. That seemingly sci-fi scene is the goal of the Augmented Listening Laboratory at the University of Illinois Urbana-Champaign. With precise control over the simulated subjects, the researchers can adjust the parameters of the experiment and even set the machines in motion to simulate neck movements. At the 184th ASA Meeting, they will describe the talking human head simulators and their work investigating how humans receive sound and developing audio technology.
At the 184th ASA Meeting, Emily Sandgren and Joshua Alexander of Purdue University will describe experiments to determine the best hearing aids for listening to music. To test and compare, they took over 200 recordings of music samples as processed by hearing aids from seven popular manufacturers. They asked study participants to rate the sound quality of these recordings and found that the hearing aids received lower ratings for music than the control stimuli did. The researchers found greater differences in music quality between hearing aid brands than between speech and music programs.