i, Solo

Artificial intelligence and a free computer app allow classical musicians to perform solo with the accompaniment of a virtual orchestra

Article ID: 617203

Released: 2-May-2014 2:30 PM EDT

Source Newsroom: Acoustical Society of America (ASA)

Newswise — WASHINGTON, D.C., May 7, 2014 -- Musicians can now perform as the soloist with a full philharmonic orchestra from the comfort of their own living rooms, thanks to a new computer system that will be described in a presentation at the 167th meeting of the Acoustical Society of America, to be held May 5-9 in Providence, Rhode Island.

“Classical musicians spend untold hours learning to play the solo literature featuring their instrument, but very few ever perform this music with the accompanying ensemble. The reason is it takes many players to make an orchestra, but only one to be a soloist,” said Christopher Raphael, chair of Computer Science in the School of Informatics and Computing at Indiana University, Bloomington, and a former professional oboist. “While the oboe is not the favorite solo instrument of composers or audiences, I have performed as a soloist 10 or 15 times. The experience was thrilling, so I wanted to find a way to replicate the feeling of this experience, and to share it with others,” he said.

“I have worked on this for many years,” Raphael added, “since it represents a kind of grand challenge, bringing together an application domain I care deeply about and some areas of computer science in which I have expertise.”

In musical accompaniment, or, for that matter, in any ensemble performance, each musician must form ongoing predictions about the way the music will evolve and continually revise these predictions based on what they hear. To emulate this process with a computer model, Raphael developed a so-called Bayesian Belief Network, which is “a simple model for musical timing that understands the nominal note values from the score and what they imply about duration, and the way tempo changes fluidly in a performance,” he explained.
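The predict-then-revise cycle Raphael describes can be illustrated with a toy timing model. This is a minimal sketch under simplifying assumptions, not the Bayesian belief network the system actually uses: the state here is just an onset time and a local tempo in seconds per beat, each note's score duration predicts the next onset, and each observed onset nudges the tempo estimate by a fixed blending gain.

```python
# A toy sketch (not Raphael's actual model) of predictive timing:
# the score's note values predict the next onset; observed onsets
# continually revise the tempo estimate.

def predict(onset, tempo, beats):
    """Predicted onset of the next note from the score's note value."""
    return onset + beats * tempo

def update(tempo, predicted, observed, beats, gain=0.5):
    """Blend the prior tempo with the tempo implied by the observation."""
    if beats == 0:
        return tempo
    implied = tempo + (observed - predicted) / beats
    return (1 - gain) * tempo + gain * implied

# Score: note durations in beats; observed onsets from a slowing soloist.
score_beats = [1, 1, 1, 1]
observed_onsets = [0.0, 0.5, 1.05, 1.65, 2.30]

tempo = 0.5  # initial guess: 0.5 s per beat (120 BPM)
onset = observed_onsets[0]
for beats, obs in zip(score_beats, observed_onsets[1:]):
    pred = predict(onset, tempo, beats)
    tempo = update(tempo, pred, obs, beats)
    onset = obs

print(round(tempo, 3))  # tempo estimate has slowed toward 0.6 s/beat
```

The real model additionally handles expressive rubato and per-note timing noise within a single probabilistic graph; the fixed `gain` here stands in for the learned uncertainties that would weight each revision.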

To model the hearing of the accompanists—and thus be able to identify, and respond to, the notes played by the soloist, and when they occur—the system uses an algorithm known as a hidden Markov model, which is commonly employed in speech-recognition technologies.
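A hidden Markov model for score following can be shown in miniature. In this hypothetical sketch (the production system works on audio features, not clean pitch symbols), the hidden states are positions in the score, the observations are per-frame pitch estimates, and Viterbi decoding recovers a plausible position path even when one frame's pitch is misdetected.

```python
import math

# Toy HMM score follower: hidden states are score positions,
# observations are detected pitches, Viterbi finds the best path.

score = [60, 62, 64, 65]           # expected MIDI pitches, in order
frames = [60, 60, 62, 63, 64, 65]  # detected pitches; 63 is an error

STAY, ADVANCE = math.log(0.5), math.log(0.5)

def emit(state, pitch):
    # Emission: the scored pitch is likely; anything else is noise.
    return math.log(0.9) if score[state] == pitch else math.log(0.05)

def viterbi(frames):
    V = [[-math.inf] * len(score) for _ in frames]
    back = [[0] * len(score) for _ in frames]
    V[0][0] = emit(0, frames[0])
    for t in range(1, len(frames)):
        for s in range(len(score)):
            stay = V[t - 1][s] + STAY
            adv = V[t - 1][s - 1] + ADVANCE if s > 0 else -math.inf
            V[t][s] = max(stay, adv) + emit(s, frames[t])
            back[t][s] = s if stay >= adv else s - 1
    # Trace back the most likely sequence of score positions.
    s = max(range(len(score)), key=lambda s: V[-1][s])
    path = [s]
    for t in range(len(frames) - 1, 0, -1):
        s = back[t][s]
        path.append(s)
    return path[::-1]

print(viterbi(frames))  # → [0, 0, 1, 2, 2, 3]
```

Note that the misdetected frame (pitch 63) is absorbed by the model: the decoded path still advances cleanly through the score, which is exactly the robustness that makes HMMs attractive for following a live, imperfect signal.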

The simulated orchestra is synthesized from a prerecorded orchestra, which means there is no limit to the number of instruments involved, “though it isn't always easy to find a recording of a concerto minus the soloist,” Raphael said.

The system—which Raphael has dubbed the “Informatics Philharmonic”—is designed to understand the “imprecise nature” of humans, and, like an artificial intelligence system, can learn to adapt to the soloist’s interpretation of the music. The model can be automatically trained from past performances (and must be trained for each individual soloist), “thus capturing the essence of the human rehearsal process in which one learns from example,” he said.

Technically, he added, the program has a symbolic score of the piece that gives the basic information, more or less as it is presented on the page: pitches and rhythmic values. This is the information used to "hear" the soloist. To create the accompaniment, the program uses a prerecorded accompaniment performance that is matched to the symbolic score offline, which allows the accompaniment to be played back with its timing warped so that it synchronizes with the soloist.
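The offline match and online warping can be sketched as follows. This is a hypothetical illustration, not the program's code: assume the offline alignment has already paired each score beat with a time in the orchestra recording; during performance, the soloist's current beat is mapped through that alignment, by piecewise-linear interpolation, to a playback position in the recording.

```python
import bisect

# Sketch of time-warped playback: an offline alignment maps score
# beats to times in the prerecorded accompaniment; at run time the
# soloist's current beat is converted to a recording position.

# Offline alignment: score beat -> time (seconds) in the recording.
beats = [0, 1, 2, 3, 4]
rec_times = [0.0, 0.6, 1.2, 1.7, 2.2]  # recording's own tempo varies

def playback_position(soloist_beat):
    """Piecewise-linear interpolation of the recording time."""
    i = bisect.bisect_right(beats, soloist_beat) - 1
    i = max(0, min(i, len(beats) - 2))          # clamp to a valid segment
    frac = (soloist_beat - beats[i]) / (beats[i + 1] - beats[i])
    return rec_times[i] + frac * (rec_times[i + 1] - rec_times[i])

# Halfway between beats 2 and 3 maps halfway between 1.2 s and 1.7 s.
print(round(playback_position(2.5), 3))  # → 1.45
```

In a real system this mapping would drive a variable-rate audio renderer, and the soloist's current beat would itself come from the score follower's estimate rather than being given directly.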

Presentation #4pMUa4, "The informatics philharmonic," by Christopher Raphael, will be presented at 3:15 p.m. ET on Thursday, May 8, 2014, in Ballroom C of the Rhode Island Convention Center.

MORE INFORMATION
Free app for Mac: https://itunes.apple.com/us/app/cadenza-orchestra-that-follows/id525322035
Videos explaining how the program works: http://www.music.informatics.indiana.edu/~craphael/info_phil/info_phil_2012

Performance clips from the 2013 Informatics Philharmonic concert with the Summer String Academy students at Indiana: http://www.music.informatics.indiana.edu/~craphael/info_phil/info_phil_2013


ABOUT THE MEETING
The 167th Meeting of the Acoustical Society of America (ASA) will be held May 5-9, 2014, at the Rhode Island Convention Center and Omni Providence Hotel. It will feature more than 1,100 presentations on sound and its applications in physics, engineering, and medicine.

PRESS REGISTRATION
Reporters are invited to attend in person for free. If you are a reporter and would like to attend, contact Jennifer Lauren Lee (jlee@aip.org, 301-209-3099), who can also help with setting up interviews and obtaining images, sound clips, or background information.

Journalists may also remotely access meeting information via ASA’s World Wide Press Room: http://www.acoustics.org/press

WEBCAST
A live media webcast featuring this and other newsworthy research presented at the ASA meeting will take place at 3:30 p.m. ET on Wednesday, May 7, 2014. Register and watch at: http://www.aipwebcasting.com

USEFUL LINKS
ASA World Wide Press Room: http://www.acoustics.org/press
Webcast on May 7: http://www.aipwebcasting.com
Main meeting website: http://acousticalsociety.org/meetings/providence
Technical program: http://asa2014spring.abstractcentral.com/planner.jsp

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world's leading journal on acoustics), Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. For more information about ASA, visit our website at http://www.acousticalsociety.org
