Consoles and mixers

Consoles sit at the center of every recording and broadcast project, routing and processing audio in the studio and in the field. Although technology has developed significantly, the role of the console has remained fundamentally consistent for the last 75 years or so.

As with manufacturers of every other class of equipment, however, companies that build boards must keep listening to their clients. Are audio engineers asking for new feature sets? What will the next generation of consoles look like? Representatives from some of the top console manufacturers spoke with Broadcast Engineering and shared where this key industry is in 2012 and what changes consumers and the industry might look forward to in coming years.

Shown here aboard a Telegenic OB truck, the Calrec Apollo console allows users the option to output LtRt phase-encoded signals directly from the console’s main outputs.

Broadcast Engineering: In the 1990s, digital audio workstations began including consoles as part of software packages. Are we beginning to see a push back in the area of console design? Specifically, can we expect to see consoles that host third-party plug-in processors?

Piers Plaskitt, Solid State Logic CEO: “It’s arguable that a digital audio workstation and a digital console are built from the same pieces; i.e. a control surface, I/O and digital processing. However, the console is optimized for scale, performance and control, with short latencies and good hands-on operability. A digital audio workstation is designed to be screen-, keyboard- and pointer-controlled and should be optimized for editing and cost, with a modest amount of I/O. The technologies may be similar between each, but the applications and performance of each are quite different.”

Phil Owens, Wheatstone Eastern U.S. sales manager: “Only in a pure production environment. Live boards typically have feature sets (mix-minus, IFB, delays) that music production boards lack, and music production is where things like Waves plug-ins are most helpful. Some of our competition implements some control over DAWs through the Mackie HUI protocol, but that doesn’t allow for plug-in control. So, the distinction hasn’t really blurred all that much. It’s still a case of the right tool for the job.”

Andy Trott, Studer by Harman vice president and general manager, mixers, microphones, headphones: “It’s a question of workflow and personal choice. DAW systems are obviously used in the creation process, while the console is used alongside for production, but standalone in delivery.”

Broadcast Engineering: The replacement of miles of analog snake with digital matrices has brought great change to live audio production. What impact has this had on console manufacturing?

Henry Goodman, Calrec Audio head of sales and marketing: “Network technologies allow hundreds of audio signals to be transferred in both directions along a single connection. At its simplest, a densely packed I/O box can be located anywhere it is required and only one single connection (or two for redundancy) needs to be made back to the audio desk in order to send and receive signals.

“A smart system design allows us to create massively scalable networks where all resources are available to any client on the network. For the user, it’s fantastic. You simply find a source on the network and connect it to a destination with a few button presses. It doesn’t matter where on the network the source and destination are, or how many matrices the signal has to pass through. All of the internal routes are managed for you and created instantly.
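The routing model Goodman describes (find a source, pick a destination, and let the system set the intermediate routes) can be sketched as a path search over a graph of matrices. The sketch below is purely illustrative; the class and method names are assumptions, not any manufacturer's actual API.

```python
from collections import deque

# Illustrative sketch of "route a source to a destination across
# multiple matrices": the network is a graph of matrices/I-O boxes,
# and making a connection means finding a path and setting a
# crosspoint at each hop. All names here are hypothetical.

class AudioNetwork:
    def __init__(self):
        self.links = {}        # matrix -> set of directly linked matrices
        self.crosspoints = []  # (matrix, signal) routes set so far

    def add_link(self, a, b):
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def connect(self, src_matrix, dst_matrix, signal):
        """Breadth-first search for the shortest chain of matrices,
        then record a crosspoint for the signal at every hop."""
        queue = deque([[src_matrix]])
        seen = {src_matrix}
        while queue:
            path = queue.popleft()
            if path[-1] == dst_matrix:
                for hop in path:
                    self.crosspoints.append((hop, signal))
                return path
            for nxt in self.links.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # no route available

net = AudioNetwork()
net.add_link("stagebox", "router_a")
net.add_link("router_a", "desk")
path = net.connect("stagebox", "desk", "mic_1")  # ["stagebox", "router_a", "desk"]
```

The point of the sketch is the user-facing simplicity Goodman highlights: the operator names a source and a destination, and the intermediate crosspoints are created automatically.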

“But, this technology goes way beyond simple routing. Complex access rights management, I/O aliasing for effortless movement between studios, flexible virtual patchbays and integration with broadcast control systems all contribute to incredibly powerful systems that were either not possible, or simply unmanageable with traditional patch bays.

Shown here, the Dimension One unit installed at KOMO-DT, in Seattle, WA, is one of Wheatstone’s flagship consoles.

“In terms of the impact on console design, it makes sense to integrate all of this smart network technology into the heart of each mixing desk. The desk becomes a client of the network, and user operation is designed around accessing shared, remote resources. All of this smart network technology is purely an enabling technology, though. A mixing desk’s primary job is to process and mix audio; it should behave like an audio mixer. It should not be an audio router with some processing capability bolted on. It needs to be designed around the workflow and requirements of a human audio engineer, not an IT technician.”

Andy Trott: “In a broadcast scenario, the sources all feed a central location where the stagebox would be located — where all the XLRs and other format connections are made. There are already numerous digital audio formats reaching this point. Much of a broadcast infrastructure may well be already using AES or SDI for signal distribution, so analog snakes are already rare, except in a fly-away system used for location live production, for example.”

Piers Plaskitt: “Analog snakes in TV broadcast are almost an extinct reptile. Multichannel digital connections over fiber and copper, such as MADI, are the evolution of the species.”


Broadcast Engineering: What new features are users asking for?

Andy Trott: “The obvious topics these days are based around workflow, loudness monitoring, remote control and AoIP.”

Piers Plaskitt: “They are asking for more ‘intelligent’ products that are simpler to integrate and cost less money.”

Phil Owens: “We’ve been asked for the ability to de-embed SDI audio, and also for MADI I/O. With the approaching CALM Act deadline, we are also seeing requests for integrated loudness metering solutions using the LKFS scale.”
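The LKFS scale Owens mentions comes from ITU-R BS.1770, which converts the mean square of a K-weighted signal to an absolute loudness number. The sketch below shows only that final arithmetic; the K-weighting pre-filter and the gating stages required by the full standard are omitted for brevity, so treat this as an illustration rather than a compliant meter.

```python
import math

# Simplified LKFS arithmetic in the spirit of ITU-R BS.1770.
# The real measurement K-weights the signal and applies gating;
# both are omitted here, so this only illustrates the core
# mean-square-to-LKFS conversion.

def loudness_lkfs(samples):
    """Return loudness in LKFS for one channel of (pre-weighted) samples."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return -0.691 + 10.0 * math.log10(mean_square)

# A full-scale sine has a mean square of 0.5, which this formula
# places at about -3.7 LKFS.
tone = [math.sin(2 * math.pi * 997 * n / 48000) for n in range(48000)]
print(round(loudness_lkfs(tone), 1))  # prints -3.7
```

Integrated meters of the kind Owens describes average this measurement over the whole program, with gating to ignore silence.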

Henry Goodman: “More often, users are requesting features that simplify their workflow. They want facilities to provide clear and concise information about the state of the system and the ability to quickly track down any issues. Integration with external control systems is important, as is remote control of desk parameters. Labor-saving devices such as automixers are becoming more popular, allowing engineers to focus on the more complex and artistic qualities of a mix.”

Shown here in the DutchView DV8 OB van, a Studer Vista 9 console is part of the production package built for both low- and high-end productions.

Broadcast Engineering: Have you noticed any changes in broadcast production, and if so, do they influence console design? What differences are there between live mixing in surround and stereo, and how has the advancement of surround sound changed the way consoles are built?

Phil Owens: “Many small to mid-market stations have moved to production automation with varying degrees of success. There is still some resistance to moving away from a traditional audio operator mixing on a console because of the nature of live TV. Moving to automation doesn’t wave a magic wand over many day-to-day audio level problems.

“All broadcast consoles have to handle 5.1 content in combination with stereo and mono sources. 5.1 content usually comes from network feeds, while local content is typically stereo. A typical broadcast day will have a combination of both.”

Henry Goodman: “A critical point in mixing for different multichannel formats is in the correct monitoring of the full width and various downmixed versions of the program. Integration of the monitoring system with the various decoders is vital in understanding how a mix will translate to the viewer in different listening configurations.

“In terms of constructing the mix, working with multichannel signals should, in general, be as simple as working with mono or stereo signals.”

Piers Plaskitt: “Probably the biggest recent change in TV broadcast production, from a console perspective, has been the adoption of SDI and its associated embedded audio/video routing infrastructure. This means that audio consoles often need to de-embed and embed audio to/from SDI streams.

“The difference with live mixing in surround and stereo is that there is sound coming directly from behind your ears, and a console needs a way to allow the operator to manage working with and listening to these extra channels. This increases the complexity of the console’s channel panning, bussing and central monitoring sections.”

Andy Trott: “Productions are far more complex with more remote location feeds, multi-platform delivery and tri-media for TV, radio and online, and adding surround brings more challenges. Surround means six or more discrete channels to handle on the fader surface.”


Broadcast Engineering: Have new technologies been developed recently to simplify the mix process? If so, how do they affect console design?

Piers Plaskitt: “To create a mix in surround, the console needs a way to position sounds across multiple busses. It must also be able to conveniently handle incoming surround pre-mixes and monitor all the various mono, stereo and surround listening permutations. To this end, ‘surround ready’ consoles are equipped with 5.1 and 7.1 channels, as well as complex panning features on every channel. Consoles must also fold down from surround to stereo, to cater for stereo feeds. Designing a console to manage all of these complexities is mandatory if you’re going to do multi-format mixing properly. So, yes, surround audio affects console designs considerably.

The “multipurpose” room at MSG Media’s facility, in New York, houses the Solid State Logic C100 HDS console. The board’s primary use is for live audio mix, voice-over and SAP.

“Another development is automated balance control, which allows multiple console channels to be automatically adjusted so the levels deliver a consistent mix. This makes balancing a fast-paced talk show a much more straightforward process.”

Henry Goodman: “Downmixing is an integral part of contemporary broadcast workflows. Various versions of the final mix are sent to different broadcasters and are used in different contexts, be it host transmissions or clean feeds to other countries, for example. Some of these feeds do not require the original 5.1 mix, so a quick and effective method of turning it into a stereo mix had to be developed within the console.

“Left only/right only downmixes are mainly used for local monitoring — people in and around the facility who have stereo or mono monitors.

“Left matrix total/right matrix total is used for transmission. As LtRt can be listened to as stereo, or decoded back to surround, the same path can be sent to all consumers. If they have surround decoders, the signal decodes back to 5.1; if not, they hear stereo.

“Another benefit to LtRt is the ability to send 5.1 over two channels. Until relatively recently, it has been difficult to send more than two channels and keep them all time-aligned, which is why Dolby E became popular in the pro environment. Transmitting two encoded channels also takes up less bandwidth than transmitting six discrete channels.”
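The stereo fold-down Goodman describes can be sketched with the conventional coefficients used for Lo/Ro downmixes (in the style of ITU-R BS.775): center and surrounds folded in at -3 dB. Real consoles make these coefficients user-adjustable, and an LtRt encode additionally phase-shifts the surround sum before folding it in; both refinements are omitted from this illustrative sketch.

```python
# Hedged sketch of a conventional Lo/Ro stereo downmix of 5.1.
# Coefficients follow the common -3 dB defaults; actual console
# downmix gains are adjustable. The LtRt phase-shift network is
# omitted, so this shows only the Lo/Ro case.

ATT = 10 ** (-3 / 20)  # -3 dB, approximately 0.707

def downmix_lo_ro(l, r, c, ls, rs):
    """Fold one 5.1 sample frame (LFE discarded) down to stereo."""
    lo = l + ATT * c + ATT * ls
    ro = r + ATT * c + ATT * rs
    return lo, ro

# Example frame: a center-weighted source with some right surround.
lo, ro = downmix_lo_ro(l=1.0, r=0.5, c=0.2, ls=0.0, rs=0.4)
```

Note that the LFE channel is normally excluded from the fold-down, which is why a 5.1 mix that leans on the subwoofer can translate poorly to stereo.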

Phil Owens: “Advances in DSP silicon have enabled us to design extremely powerful mixing and processing cores that use significantly less power and occupy about one-third the space of previous designs. Today’s consoles must do upmixing and downmixing: 5.1 content must be downmixed for feeds to stereo program and IFB outputs. Stereo and mono content must be upmixed and positioned for inclusion in 5.1 program outputs.”

Gary Eskow is a composer, journalist and project studio owner.