Audio Mixing in the Age of Remote Production

Ross Production Services (RPS), a division of Ross Video, recently upgraded its Connecticut facility by integrating three new 60-fader Argo S consoles. The Argos are housed within the facility’s three REMI control rooms that produce events for clients like CBS, ESPN, Athlete’s Unlimited and EA Sports. (Image credit: Calrec)

Audio mixing is, and will always be, a key component of live broadcast production. How, where and on what kind of equipment it will be carried out in the future is not entirely certain, but emerging trends and established technology already give some indication of how the audio console is evolving to meet broadcasters’ changing requirements.

For the time being at least, there is still a definite need for big, physical, multi-fader sound desks, especially on large-scale, prestige broadcasts such as premium sports events and entertainment shows. The difference, according to Henry Goodman, director of product management at Calrec Audio, is where the mixing surface is located and where the processing takes place.

‘Distributed Production’
“What we’re seeing is a break in the geographical connection of the control room having to be where the studio is and the operator having to be at the venue,” he says. “If you don’t have to physically tie your operator to the venue or a truck, you can put them in a nicely built studio where they have the space to monitor properly with all the necessary equipment. That changes how you can manage the operator and the equipment, as well as providing greater consistency in the mixing.”

Software and the cloud play major roles in this new arrangement, which Goodman prefers to call distributed production rather than remote production.

“It’s not just remote, it is distributing the different elements of the production in different places,” he explains. “The cloud part is another step along that way. Instead of having your physical DSP processing next to the console or in a central control room, you have it either in a public cloud or on COTS [commercial off-the-shelf] hardware that’s under your control, replacing the traditional DSP engine with a software-based engine.”


“Quite a few of the broadcasters we’re talking to are not necessarily totally sold on public clouds, so building their own private cloud and running software on that in an agile way is quite appealing,” he added.

Calrec’s parent group, Audiotonix, has produced a technology Proof of Concept (PoC) for audio cloud processing that is now providing what is described as the “backbone” of live broadcast consoles being developed by both Calrec and fellow subsidiary Solid State Logic (SSL). 

For Calrec, it involves an RP1 remote production unit at the venue linked to an Argo mixing surface in a control room over Dante Connect, with an Audiotonix New Heights audio DSP mix engine in the AWS Cloud.

“When we started thinking about what we needed for cloud processing we looked across the group for technology we could use, which is why it’s seen as an Audiotonix development,” Goodman says. “Calrec and SSL are working on it at the moment because we’re both operating in the broadcast sector and it’s mainly broadcasters that are driving the need to get audio processing in the cloud.”
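The “software-based engine” Goodman describes is, at its core, conventional DSP math running on general-purpose hardware rather than dedicated chips. As a rough illustration only (not Audiotonix’s actual code), a minimal mono mix bus, per-channel fader gain followed by summing, might look like this:

```python
import numpy as np

def mix_engine(channels: np.ndarray, gains_db: np.ndarray) -> np.ndarray:
    """Apply per-channel fader gain and sum onto a mono mix bus.

    channels: (num_channels, num_samples) float32 audio
    gains_db: (num_channels,) fader positions in dB
    """
    gains = 10.0 ** (gains_db / 20.0)           # dB -> linear gain
    mixed = (channels * gains[:, None]).sum(axis=0)
    return np.clip(mixed, -1.0, 1.0)            # hard-limit the bus

# Two channels: one at unity (0 dB), one pulled down 6 dB
ch = np.full((2, 4), 0.25, dtype=np.float32)
out = mix_engine(ch, np.array([0.0, -6.0]))
```

A real cloud engine adds routing, EQ, dynamics and redundancy on top, but the point stands: once the processing is arithmetic on a server, it can run anywhere the audio can reach.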

Best of Both Worlds
Other leading broadcast audio console manufacturers are also adding their own takes on how to provide more flexibility by splitting up the various aspects of the mix process and putting a substantial part of it in the cloud. Wheatstone’s Layers Software Suite brings mixing, processing and streaming capabilities to any server, either on premises or in an AWS or other cloud data center. 

Wheatstone’s Layers Software Suite.  (Image credit: Wheatstone)

Wheatstone Senior Sales Engineer Phil Owens notes that in a typical console system today, which will be based on audio over IP (AoIP), some elements can be easily virtualized while others will remain physical.

“But you want the part that’s physical to work in your dream system in the cloud at some point,” he says. “For this we have virtualized the guts of some of our consoles. By that I mean when you sit at a physical console and push a switch or raise a fader, there’s CPU hardware in that console that’s telling the rest of the system what you did. 

“When you virtualize that, you’re still going to push a switch or raise a fader but it’s going to be on a touchscreen, and those commands still need a CPU that tells the rest of the system you took those actions,” Owens continued. “So we have the console virtualized to the extent that it will run on a Linux server and you interface to it via a touchscreen, which can communicate with a server in the next room, the next town or in the cloud.”
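The split Owens describes, surface events on one machine and processing on another, comes down to small control messages traveling over IP. A hypothetical sketch of such a message (the names and fields here are invented for illustration and are not Wheatstone’s actual protocol):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ControlMessage:
    """One surface event -- a fader move or switch press -- sent to a
    mix engine that may be in the next room or in the cloud."""
    control: str    # e.g. "fader" or "switch" (illustrative names)
    channel: int    # which channel strip the event belongs to
    value: float    # fader level in dB, or 1.0/0.0 for a switch

def encode(msg: ControlMessage) -> bytes:
    """Serialize for transport over any IP link (TCP, WebSocket, etc.)."""
    return json.dumps(asdict(msg)).encode("utf-8")

def decode(raw: bytes) -> ControlMessage:
    return ControlMessage(**json.loads(raw.decode("utf-8")))

# A touchscreen fader move travels as a small JSON payload
wire = encode(ControlMessage("fader", channel=3, value=-12.0))
roundtrip = decode(wire)
```

Because the payload is tiny compared with the audio itself, the distance between surface and engine mostly shows up as control latency, which is far easier to tolerate than audio latency.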

Lawo has developed the HOME IP management platform as the basis of its audio mixing systems, with various apps for different requirements.


Christian Struck (Image credit: Lawo)

“We have not decided to nudge our customers in any particular direction,” says Christian Struck, senior product manager for audio production. “While these containerized microservices can run in the public cloud if users so wish, they are as effective on standard servers in a data center as on-prem. We like to call data centers that can be accessed from just about anywhere in the world a ‘private cloud,’ which is easier to protect and more affordable with respect to ingress and egress costs.”

Just Getting Started
Even with this activity, these are still early days for software and cloud-based mixing, as Martin Dyster, vice president of business development for TV at Telos Alliance, observes.


Martin Dyster (Image credit: Telos Alliance)

Dyster highlights the Audiotonix New Heights project and AWS’s involvement with audio companies, adding Telos is also working with the cloud platform but more for its virtual intercom system. 

“Audio has been left behind when it comes to cloud production and we’re all playing catch up right now,” he says. “I’ve been involved with the cloud for about three years through the comms platform and have become very aware that the landscape around us for cloud-based mixing has been pretty sparse. 

“A lot of broadcasters we talked to early on were using things like REAPER [digital audio workstation] but we weren’t seeing the major console brands you might expect. That’s starting to change now but it’s still not a well populated landscape and it will be interesting to see what develops over the next five years.”

Dyster notes that the concept of a virtualized console is picking up more in radio, with Telos’s Axia Altus virtual cloud mixer now being used for some applications on that side of broadcasting. While TV and radio sound desks are different animals, there is now more crossover between the two areas due to the growth of visualization in radio, with cameras now becoming more common in on-air studios. 

“Features like automix are an absolute standard now,” he says. “Automation control from third party orchestration layers is more requested, particularly with visual radio and algorithms that can automate based on the schedule of the show, so the console cuts itself to some extent, with more audio-follow-video features.”
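Automix features of this kind are often built on gain sharing, in which each microphone’s gain is weighted by its share of the total input level so that the overall system gain stays roughly constant. A minimal sketch of the idea (a simplification in the spirit of the classic Dugan approach, not any specific console’s implementation):

```python
import numpy as np

def automix_gains(levels: np.ndarray) -> np.ndarray:
    """Gain-sharing automix: each mic's gain is its share of the
    total measured level, so the gains always sum to unity and the
    system's total gain stays constant as talkers come and go."""
    total = levels.sum()
    if total <= 0:
        return np.zeros_like(levels)    # silence: close all channels
    return levels / total

# Three mics: one active talker, two picking up spill
gains = automix_gains(np.array([0.8, 0.1, 0.1]))
```

The active talker gets most of the gain while the spill mics are ducked, which is why automix holds up so well on panel shows and visual radio where no operator is riding individual faders.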

What About AI?
As with all broadcast production technologies, many are now considering what influence or impact artificial intelligence (AI) might have on audio consoles. Dyster points to specialized mixing systems from Salsa Sound and LAMA Mix, which provide features such as ball tracking for sports coverage, automixing and monitoring for language recognition. 

Lawo’s Christian Struck adds that “there is no doubt AI will find its way into future audio mixing consoles or their DSP engines,” while Wheatstone’s Phil Owens says that although AI is working its way into the broadcast workflow, it hasn’t hit audio yet. 

But it does have potential. 

“What AI can do for sound is provide the basis for plug-ins that perform noise cancellation. When you apply AI to that job, it gets better at doing it, recognizing noise as opposed to signal,” Owens said.
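The classical, non-learned ancestor of such plug-ins is spectral subtraction: estimate the noise spectrum, then attenuate it in each audio frame. A trained model essentially replaces the fixed noise estimate with a learned, per-frame prediction of what is noise and what is signal. A minimal sketch of the classical version:

```python
import numpy as np

def spectral_gate(frame: np.ndarray, noise_mag: np.ndarray,
                  over_subtract: float = 2.0) -> np.ndarray:
    """Spectral subtraction: remove an estimated noise magnitude from
    each frequency bin, keeping the original phase. An AI denoiser
    replaces the fixed noise_mag with a per-frame learned mask."""
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    cleaned = np.maximum(mag - over_subtract * noise_mag, 0.0)
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(frame))

# 256-sample frame of a 440 Hz tone at 48 kHz; with a zero noise
# estimate the frame passes through unchanged
frame = np.sin(2 * np.pi * 440 * np.arange(256) / 48000)
out = spectral_gate(frame, np.zeros(129))   # rfft of 256 samples -> 129 bins
```

Where the classical method smears speech whenever the noise estimate is wrong, a learned model adapts its estimate frame by frame, which is the improvement Owens is pointing to.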

Henry Goodman at Calrec agrees it is an interesting area and one people are looking at to see what benefits it can bring.

“On the console side, the area that interests us is having facilities providing assistive mechanisms for the operators, whether that’s balancing external feeds coming in or standardized EQ for specific microphones,” he said. “It’s something we’re keeping a close eye on but it’s not at the forefront of our development right now.”

All of which makes the audio console the gear to keep watching, if only to see where it ends up in the broadcast center.