Exploring Broadcast Audio at AES


SAN FRANCISCO
The Audio Engineering Society will convene its 129th AES Convention at the Moscone Center here, Nov. 4–7, with four days of technical papers, workshops and tours, plus three days of exhibits opening Nov. 5.

There'll be plenty for the broadcast community to see and do. Among the many offerings is a track of "Broadcast and Streaming" workshop sessions geared specifically towards the issues facing broadcasters today—streaming, DTV, sports, news, facility design and careers.

David K. Bialik, a systems engineering consultant in New York and chair of the "Broadcast and Media Streaming" sessions for the past 22 years, said the workshops in this category have expanded and evolved over the years into the premier forum for audio implementation issues in TV and radio. "These sessions are about technology and techniques with discussions among peers," Bialik said.

WHAT TO EXPECT

Thursday sessions begin with the Broadcast Facility Design session "Attending to the Details," chaired by architect and acoustician John Storyk, co-principal of Walters-Storyk Design Group. A panel of network representatives and other acousticians, architects and project managers will focus on the pesky details that must be considered for successful design and implementation, including acoustical requirements, end-user expectations, workflow, construction details and materials.

Any live event poses its own set of audio challenges, and the Olympic Games arguably set the gold standard for audio production. Host broadcaster CTV produced the entire Vancouver 2010 Winter Olympic Games in 5.1 surround. In the session "Audio for the Olympic Broadcast," Michael Nunan and Joshua Tidsbury from the CTV Operations and Engineering group will offer a behind-the-scenes sonic tour of how they developed 2,450 hours of programming for 12 TV stations and 20 radio stations in 22 languages over 17 days, from preproduction to the Closing Ceremony.

While audio for sports has received much attention at this and past AES conventions, "Audio for Newsgathering" is a first this year. In this session, Skip Pizzi, a media technology consultant, will lead a panel discussing the tools, technologies, skills and processes needed to capture raw live sound for news in often tumultuous settings.

DTV

DTV brings the promise and often the delivery of an improved and immersive audio experience, but there's still work to be done. "The Lip Sync Issue" panel with moderator Jonathan S. Abrams, chief technical engineer at Nutmeg Post in New York, will update attendees on a problem that just won't go away. Audio/video latencies still lurk, but where and why? What's the latest in measurement and correction techniques? Where should corrections be inserted?
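
On the measurement side, one widely used approach is cross-correlation: slide a reference signal against the delayed copy and find the lag where the two line up best. The Python sketch below only illustrates that general idea; the test signal, function names and 20 ms offset are hypothetical, and it does not represent the specific techniques the panel will present.

```python
# Hedged sketch: estimate an audio delay by cross-correlation.
# Real lip-sync meters compare an audio mark against a video timing mark;
# here both signals are audio to keep the example self-contained.
import numpy as np

def measure_delay(reference: np.ndarray, delayed: np.ndarray, rate: int) -> float:
    """Return the delay of `delayed` relative to `reference`, in milliseconds."""
    corr = np.correlate(delayed, reference, mode="full")
    lag = np.argmax(corr) - (len(reference) - 1)   # peak position -> lag in samples
    return 1000.0 * lag / rate

if __name__ == "__main__":
    rate = 48000
    rng = np.random.default_rng(0)
    ref = rng.standard_normal(rate // 4)            # 250 ms of noise
    shifted = np.concatenate([np.zeros(960), ref])  # same signal, 20 ms late
    print(f"measured delay: {measure_delay(ref, shifted, rate):.1f} ms")
```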

After lip sync, loudness is the other hot DTV audio topic. DTV has the potential to equalize loudness levels among different program sources, but implementations haven't always gotten it right. And now the federal government is stepping in.

The workshop "Loudness, Metadata, and other Audio Concerns for DTV," chaired by Tomlinson Holman, School of Cinematic Arts and Viterbi School of Engineering at USC, will present updates on the audio features for DTV, implementation issues, and why audio turned out to be more complex than perhaps originally envisioned.

How can loudness be easily quantified and compared for different program material, taking into consideration variations in dynamic range? Bringing in panels of listeners for subjective testing isn't really practical outside of a lab setting. Various means for objective loudness measurements are on the market, but there's always room for improvement.
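
As a rough illustration of what an objective meter does, the sketch below computes a gated, block-averaged level for a mono signal: measure short blocks, throw away the near-silent ones, and average the energy of the rest. It is deliberately simplified, and the block length and gate threshold are assumptions; real broadcast meters follow ITU-R BS.1770, which adds K-weighting and a more elaborate gating scheme.

```python
# Toy loudness sketch (hypothetical parameters): gated block-RMS level in dBFS.
import numpy as np

def block_levels_db(samples: np.ndarray, rate: int, block_s: float = 0.4) -> np.ndarray:
    """Mean-square level of each block, in dB relative to full scale."""
    block = int(rate * block_s)
    n_blocks = len(samples) // block
    trimmed = samples[: n_blocks * block].reshape(n_blocks, block)
    mean_sq = np.mean(trimmed ** 2, axis=1)
    return 10 * np.log10(np.maximum(mean_sq, 1e-12))

def integrated_level_db(samples: np.ndarray, rate: int, gate_db: float = -70.0) -> float:
    """Average the energy of blocks above an absolute gate, so long stretches
    of silence don't drag the long-term figure down."""
    levels = block_levels_db(samples, rate)
    loud = levels[levels > gate_db]
    if loud.size == 0:
        return float("-inf")
    return 10 * np.log10(np.mean(10 ** (loud / 10)))

if __name__ == "__main__":
    rate = 48000
    t = np.arange(rate * 5) / rate
    tone = 0.1 * np.sin(2 * np.pi * 1000 * t)   # quiet 1 kHz tone, 5 seconds
    tone[rate * 2 : rate * 3] = 0.0             # one second of silence
    print(f"integrated level: {integrated_level_db(tone, rate):.1f} dBFS")
```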

"Innovations in Digital TV," a panel chaired by David Wilson, CEA, will include representatives from NAB, Cox Communications, NBC, ATSC and companies DTS and THAT Corp., and will cover the range from 3DTV to Mobile DTV and what that means for the audio portion of these new delivery methods.

AUDIO STREAMING

As audio streaming plays a greater role in content delivery, whether standalone or as part of a video piece, it has become clear that processing and treatment for these low bit-rate feeds differ from those used in over-the-air broadcasting. Understanding Internet protocols and streaming formats is now essential for an audio engineer, and a series of related streaming sessions will help in that education.

The "Audio Processing for Streaming" session, moderated by Bill Sacks, Optimod.FM, promises to explain the language of IT and protocols so that audio engineers can speak with IT professionals on their own terms, and also acquire the ability to educate their IT counterparts about specific audio requirements.

Related streaming sessions include "Audio Performance in Streaming" with moderators David Prentice, Dale Pro Audio, and Ray Archie, CBS, and "Stream Formats for Content Delivery Networks" with Archie as moderator.

More audio gear is now IP-enabled, interconnecting over a computer network rather than through traditional audio routers. In "Audio over IP: A Tutorial," Steve Church, president of Telos Systems, and Skip Pizzi will explain the technology behind this shift and how to implement it.
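
The core idea is simple enough to sketch: chop the audio into small frames, add a header, and push the packets across the network. The Python below is only a toy built on that description; the frame size, port number and header layout are hypothetical, and it is not Livewire, AES67 or any shipping protocol, all of which add clock synchronization, RTP framing and quality-of-service management.

```python
# Minimal audio-over-IP sketch: packetize 16-bit PCM frames and send them over UDP.
# Purely illustrative; run a receiver on DEST first or the packets are simply dropped.
import socket
import struct
import numpy as np

RATE = 48000                  # samples per second
FRAME = 240                   # 5 ms of audio per packet at 48 kHz
DEST = ("127.0.0.1", 50004)   # hypothetical receiver address

def packets(samples: np.ndarray):
    """Yield (sequence_number, payload) pairs of 16-bit PCM frames."""
    pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype("<i2")
    for seq, start in enumerate(range(0, len(pcm) - FRAME + 1, FRAME)):
        header = struct.pack("!I", seq)              # 4-byte sequence number
        yield seq, header + pcm[start:start + FRAME].tobytes()

if __name__ == "__main__":
    t = np.arange(RATE) / RATE
    tone = 0.25 * np.sin(2 * np.pi * 440 * t)        # one second of A440
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, payload in packets(tone):
        sock.sendto(payload, DEST)
    print(f"sent {seq + 1} packets of {FRAME} samples each")
```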

Audio is processed for a variety of quality and aesthetic reasons, but how often is listener fatigue considered? And just what is listener fatigue? Expect a diversity of opinions and lively discussion on this topic, Bialik said, in the session "Listener Fatigue and Retention." Moderated by David Wilson, the panel includes noted recording engineer and audio innovator George Massenburg along with representatives from Harman, Omnia, DTS and SIA.

Comment on this or any story. Write to tvtech@nbmedia.com with “Letter to the Editor” in the subject line.