AES Conference to Target Immersive Audio

NEW YORK—Top audio engineers will come together in Hollywood next week to hammer out the steps necessary to deliver immersive audio, in which discrete sounds come at a listener from nearly any part of the room or auditorium.

The Audio Engineering Society will hold its 57th International Conference March 6–8 in Hollywood, Calif., at the TCL Chinese 6 Theatres. This marks the first “Future of Audio Entertainment Technology Conference” for AES, and it aims to bring together researchers, acousticians and engineers to address the current and future audio needs of the cinema, television and online entertainment industries.

Immersive sound is on the forefront of audio technologies gaining ground in cinema and television. Unlike mono, stereo or even 5.1 surround sound, immersive audio is not dictated by the number of channels feeding a likewise number of speakers.

“At the foundation of immersive audio is object-based elements,” said Dolby’s Jeff Riedmiller at the Hollywood Post Alliance Tech Retreat earlier this month. “You break all ties with channels. In mixing channel-based audio, channels were fixed. In this case, you’re mixing for the space instead of the specific channel location. In object-based audio, you can think of metadata as the flight plan.”

The metadata in this case defines the perceived location of the various discrete sounds—a helicopter overhead, a car approaching from behind, a beatbox from one side, a busy New York sidewalk in all directions.
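The “flight plan” idea can be illustrated with a minimal sketch: each audio object carries positional metadata over time, and a renderer interpolates the object’s perceived location for whatever speaker layout is present. The field names and keyframe scheme below are hypothetical for illustration; real systems (Dolby Atmos, DTS:X, MPEG-H) define their own metadata formats.

```python
from dataclasses import dataclass, field

@dataclass
class PositionKeyframe:
    time_s: float  # timestamp in seconds
    x: float       # left (-1) to right (+1)
    y: float       # rear (-1) to front (+1)
    z: float       # floor (0) to ceiling (+1)

@dataclass
class AudioObject:
    name: str
    keyframes: list = field(default_factory=list)

    def position_at(self, t: float):
        """Linearly interpolate the object's position at time t."""
        kfs = sorted(self.keyframes, key=lambda k: k.time_s)
        if t <= kfs[0].time_s:
            k = kfs[0]
            return (k.x, k.y, k.z)
        if t >= kfs[-1].time_s:
            k = kfs[-1]
            return (k.x, k.y, k.z)
        for a, b in zip(kfs, kfs[1:]):
            if a.time_s <= t <= b.time_s:
                f = (t - a.time_s) / (b.time_s - a.time_s)
                return (a.x + f * (b.x - a.x),
                        a.y + f * (b.y - a.y),
                        a.z + f * (b.z - a.z))

# A helicopter flying diagonally overhead, front-left to rear-right:
heli = AudioObject("helicopter", [
    PositionKeyframe(0.0, -1.0,  1.0, 1.0),
    PositionKeyframe(4.0,  1.0, -1.0, 1.0),
])
print(heli.position_at(2.0))  # halfway along the path: (0.0, 0.0, 1.0)
```

The key point is that nothing in this metadata names a channel or a speaker; the renderer at playback time maps the interpolated position onto whatever speakers the room actually has.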

AES conference organizers said that “as the next wave of immersive audio technologies arrives, it is incumbent on the industry to rally around an organized and focused set of standards.”

Immersive audio workshops, led by spatial-audio specialist Dr. Francis Rumsey, will include a session on Delivery Standards and Methods, co-sponsored by SMPTE; Integrating Object-Based and Conventional Audio for Delivery to the Home; and the Rendering of Immersive Audio in the Home, with stakeholders including Dolby Labs, DTS-MDA, Auro-3D, Iosono, Fraunhofer, MPEG-H and NHK 22.2.

At the end of the third conference day, organizers said the delegates will be asked to develop a list of action items that will need to be addressed by the AES, and perhaps other standards bodies.

The three-day conference, led by co-chairs Brian McCarty and Dr. Sean Olive, will cover several other issues, including dialog intelligibility, loudness and frequency response in the streaming domain.

Dr. Peter Mapp, described by organizers as “one of the world’s leading experts on dialog intelligibility,” will explain the scientific issues surrounding these topics and then lead a workshop that includes multi-Oscar-winning sound mixer Lon Bender in an analysis of the challenges faced by the creators of sound for picture.

Loudness and noise-induced hearing impairment will be examined in detail, including input from medical experts. While audio in streamed media is approaching a more equal footing with video, it comes with its own challenges, such as a wide range of available codecs that often render inconsistent results. Loudness in streaming has become better controlled in recent years, however, with the implementation of the CALM Act and technologies that allow for more precise measurement and management of loudness.

Frequency response in the streaming domain, though, remains less comprehensively addressed. A workshop, chaired by Roger Charlesworth of the DTV Audio Group, will tackle this issue with engineers from Starz Entertainment, Meridian Audio and the Telos Alliance. This day’s panels will also focus on headphones, a consumer electronics category that has seen explosive growth in the last few years. Delivery of immersive sound and binaural techniques with headphones will be demonstrated with conference partner Sennheiser Electronic.

More information on the AES 57th Conference on The Future of Audio Entertainment Technology, including registration, travel and technical program details, is available at http://www.aes.org/conferences/57/. An AES member discount is available.

See…
January 26, 2015
“All-Around Sound: Mark Richer on ATSC 3.0 Audio”
Immersive audio for ATSC 3.0 comprises two sound enhancements over the current ATSC 1.0 system: first, personalization, the ability to customize the audio program based on the viewer’s unique needs, environment or device; and second, enhanced surround sound, bringing a much more enveloping experience to both the home theater and headphone listener.