IP for signal management

With the wider adoption of IT-based infrastructures for playout facilities, it's natural to consider the scope for moving to IP-based signal management in production studios. This is typically a much more demanding environment than playout for real-time signal processing and routing, especially with respect to multichannel audio handling.

When it comes to moving video and audio signals efficiently around a television facility, there are many challenges, especially now that HDTV viewers expect broadcasters to deliver theater-quality sound. Establishing this capability in video production facilities calls for more complex audio mixing equipment and production switchers, and often recording consoles. It also requires monitoring equipment to evaluate quality, as well as the ability to switch and control mono audio channels, discrete AES-3 signals, AES-3 signals with non-PCM payload and possibly even MADI signals for bulk audio transport. (See Figure 1.)

Moving to IP?

When considering a move to IP for signal management in studios, broadcasters have to consider both the IP model and the use of Ethernet as a common physical layer for audio and video. Let's assume that audio and video would be switched using the best available IP switch. Even with high-performance switches, a key issue is that HD video in studios runs at 1.5Gb/s and is moving to 3Gb/s, which exceeds the bandwidth of the affordable GigE IP physical layer. Video mezzanine compression can be used to reduce the channel bandwidth requirements, but this adds cost and introduces an additional, bothersome delay that must be managed.
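As a rough back-of-envelope check (the exact rates depend on the video format, and the usable Ethernet payload fraction is an assumption), the gap between uncompressed studio video and affordable IP links looks like this:

```python
# Rough bandwidth comparison: uncompressed HD-SDI vs. common IP links.
# Video rates are nominal SMPTE serial rates; usable Ethernet throughput is
# assumed to be ~95% of line rate after protocol overhead (illustrative only).

video_rates_gbps = {"HD-SDI (1.5G)": 1.485, "3G-SDI": 2.970}
link_rates_gbps = {"GigE": 1.0, "10GigE": 10.0}
usable_fraction = 0.95  # assumed usable payload fraction

for vname, vrate in video_rates_gbps.items():
    for lname, lrate in link_rates_gbps.items():
        usable = lrate * usable_fraction
        if vrate <= usable:
            print(f"{vname} fits on {lname} ({vrate:.3f} of {usable:.2f} Gb/s usable)")
        else:
            ratio = vrate / usable
            print(f"{vname} needs ~{ratio:.1f}:1 mezzanine compression for {lname}")
```

The calculation simply makes concrete why GigE, the affordable option, cannot carry uncompressed studio video without mezzanine compression.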

There is also the problem of determinism inside the IP physical layer. One might contemplate keeping an SDI video layer at full bandwidth and using an IP layer for audio. But here the problems are, once again, determinism in the audio signal layer and the bridge between the IP and SDI physical layers. An embedder or de-embedder is still required; it has just changed its form slightly.

Another possible solution would be to use IP for audio only. This puts all the de-embedding, and possibly embedding, into the final or output piece of equipment, which reduces the system cost of embedders and de-embedders, but it does not solve the timing issues. It also introduces its own delay for IP buffer management, which could be problematic for identical A and B chain playout backup.

These delay issues are important because any quality audio production requires that its source audio signals, or more precisely their samples, remain exactly in phase. Connecting multiple IP switches and devices into even the most carefully designed IP system can result in slipped sample alignment and significant audio program degradation. Furthermore, today's facilities tend to incorporate routers with matrices ranging from 200 × 400 up to 500 × 1000. Given these dimensions, and the deterministic timing requirements for synchronism and low latency, Ethernet infrastructures and IP are simply not a plausible solution in the production environment.
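To put the scale in perspective, a short sketch of the aggregate capacity a 500 × 1000 matrix must switch, alongside the audio sample period that defines "exactly in phase", might look like the following (the per-signal rate and sample rate are assumed typical values):

```python
# Illustrative scale of a large production router versus audio timing tolerance.
inputs, outputs = 500, 1000          # example matrix size from the text
video_rate_gbps = 1.485              # assumed HD-SDI rate per signal
audio_sample_rate_hz = 48_000        # typical broadcast audio sample rate

aggregate_gbps = outputs * video_rate_gbps    # every output carries a full-rate signal
sample_period_us = 1e6 / audio_sample_rate_hz

print(f"Aggregate switched bandwidth: ~{aggregate_gbps / 1000:.2f} Tb/s")
print(f"One audio sample period: {sample_period_us:.1f} microseconds")
# Keeping hundreds of audio paths aligned to well under one ~20.8 us sample,
# across multiple switches and buffers, is the determinism problem IP faces.
```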

The move to hybrid routing

The solution to improving signal management in production environments involves a switch fabric that is both synchronous and deterministic for audio and video. This requires a baseband “hybrid” router with 3Gb/s/HD/SD switching (using embedded audio) and integrated audio processing. These types of routers are now being adopted by studios, and they can handle multiple formats and functions within a single frame, including embedding and de-embedding audio, handling mismatched audio channels, audio shuffling and audio breakaways. (See Figure 2.)

Hybrid router frame architecture currently follows the same general approach as traditional baseband routers, although there are some important differences. For instance, in such a router, one crosspoint card switches both the video signals and the audio signals. The video signals are switched traditionally with a crossbar matrix chip, and the audio signals are switched in the time domain using a shared memory architecture. It is critical that audio delay is minimized during this switch process. The video signals and audio TDM streams are then fed to their corresponding output cards.
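A minimal sketch of the time-division, shared-memory idea (a conceptual model, not any vendor's actual implementation): during each sample period, every input channel writes its sample into shared memory, and each output reads the sample of whichever input it is mapped to.

```python
# Simplified model of a TDM shared-memory audio crosspoint.
# Within one 48 kHz sample period, all input samples are written into shared
# memory, then each output slot reads the sample of the input it is mapped to.

def tdm_switch(input_samples, output_map):
    """input_samples: one sample per input channel for this period.
    output_map: for each output channel, the index of its source input."""
    shared_memory = list(input_samples)                # write phase (one TDM frame)
    return [shared_memory[src] for src in output_map]  # read phase

# Example: 4 mono inputs routed to 4 outputs, with a shuffle and a breakaway.
samples = [0.10, -0.25, 0.40, 0.05]   # one sample per input this period
routing = [2, 2, 0, 1]                # outputs 0 and 1 both take input 2, etc.
print(tdm_switch(samples, routing))   # -> [0.4, 0.4, 0.1, -0.25]
```

Because the entire mapping is re-read every sample period, any mono channel can be shuffled or broken away without disturbing the others, which is what makes the audio switching deterministic and sample-accurate.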

Critical timing parameters

It is imperative that the delay of video and audio through a production studio router is as short as possible. With careful design of the output embedder, the video delay of a hybrid router can be just a few pixels. This is accomplished by always leaving the video signal in the serial domain and embedding audio data “on-the-fly” in what is, effectively, a bit-by-bit mode of operation. Short video delays, ideally much less than half of a video line, ensure that plant system timing is simplified. This is especially true when using a hybrid router for preselection of inputs to a video production switcher.

The maximum audio delay through a hybrid router is set by the fact that every embedded signal fed to the router has a different audio sample distribution. Therefore, differing buffer depths must be managed for each de-embedder and embedder in the router so that any mono audio signal de-embedded from any video signal may be embedded into a common video output. This results in a one-line minimum to two-line maximum delay. Add in ±half a video line of input HD-SDI timing uncertainty, and the audio delay becomes three lines maximum. This is more than satisfactory to ensure that, even after multiple re-entries, lip sync will be preserved.
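A quick worked delay budget, assuming 1080i/59.94 raster timing (the specific format is an assumption for illustration), shows why these figures are comfortable:

```python
# Worked delay budget for a hybrid router, assuming 1080i/59.94 raster timing.
frame_rate = 30000 / 1001            # 29.97 Hz
total_lines = 1125                   # lines per frame in the 1080-line raster
line_period_us = 1e6 / (frame_rate * total_lines)

pixel_clock_hz = 74.25e6 / 1.001     # HD-SDI pixel clock
video_delay_us = 4 / pixel_clock_hz * 1e6   # "a few pixels" -> assume 4

audio_delay_us = 3 * line_period_us  # three-line worst case from the text

print(f"Line period:            {line_period_us:.1f} us")
print(f"Video delay (~4 px):    {video_delay_us:.3f} us")
print(f"Audio delay (3 lines):  {audio_delay_us:.1f} us")
# Roughly 89 us of worst-case audio delay is orders of magnitude below the
# tens of milliseconds at which lip-sync errors become noticeable, so
# multiple re-entries through the router remain safe.
```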

In production applications, the hybrid router provides a direct interface with embedded video signals, audio signals and MADI signals. Every audio input is made available as mono channel audio. MADI is connected directly to the router inputs and outputs, providing a single-cable, low-cost connection for the audio production switcher or mixing console, and all embedded audio is de-embedded from video inputs. Because the system is synchronized, the switching between inputs and outputs is deterministic and sample-accurate.

Care must be taken to ensure full preservation of the multichannel phase coherence, or audio image. Embedder sample distribution will vary between video signals, and audio sample timing slips can be generated when switching audio from one embedded input into a different embedded output. Recall that even one sample slip in time alignment is a significant phase error, which degrades the surround sound image of the program audio. With 16-channel audio embedders, it is possible to have image-accurate audio transport within a single video signal. If more than 16 channels of audio need to be exactly in phase, MADI is the better signal transport. Since Dolby E is a common production signal, it also needs to be handled correctly within the router, with switch points that comply with SMPTE RP 168 and fall within the Dolby E guard band. This capability should be available simultaneously for HD and SD signals in the same frame.
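To see why a single slipped sample matters, consider the inter-channel phase error it introduces at a 48kHz sample rate (a simple illustration of the point above):

```python
# Phase error caused by a one-sample timing slip between audio channels at 48 kHz.
sample_rate = 48_000
slip_seconds = 1 / sample_rate       # one-sample misalignment (~20.8 us)

for freq in (100, 1_000, 5_000, 10_000):
    phase_deg = 360 * freq * slip_seconds
    print(f"{freq:>6} Hz: {phase_deg:6.1f} degrees of inter-channel phase error")
# Even a few degrees of error between channels can smear a surround image;
# at 10 kHz a single-sample slip is already 75 degrees.
```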

Ingest and DHP

Another popular application for a hybrid router is ingest. In this case, the router affords complete flexibility to shuffle and route any single de-embedded mono input to any other embedded mono output. Hybrid routers may also offer dynamic hybrid pathfinding (DHP). This involves populating a portion of the router with hybrid inputs and outputs that are connected to the sources that need frequent channel reassignment. Another smaller router partition is populated with additional hybrid input and output modules that are fed by, and fed back into, the router. This is the same pooled resource topology used by discrete de-embedders and embedders in an external modular equipment frame. The balance of the router can be filled with standard video or MADI input and output cards.

When sized correctly, DHP may reduce a router's cost by approximately 20 percent, and it also reduces the overall hybrid card count for the core router. Hybrid cards used for pathfinding are often significantly less expensive than external terminal equipment. Importantly, the hybrid pooled resource provides full mono audio routing between the pool and the core of the router, something that is not even possible with external embedders and de-embedders.

With DHP, an audio breakaway route can be made automatically between signals on standard input and output cards. The router control system finds an available output/input path, which re-enters the router through the hybrid cards described earlier, and it generates the additional takes for those cards itself. What would have been four takes becomes just one.
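A hypothetical sketch of that controller behavior (the class, names and the split of internal routes are illustrative only, not a real router control API): the control system hides the extra pool routes behind the single take the operator makes.

```python
# Hypothetical sketch of DHP take consolidation; names and topology are
# illustrative, and real router control systems will differ.

class DHPController:
    def __init__(self, pool_paths):
        self.free_pool = list(pool_paths)   # unused hybrid re-entry paths

    def audio_breakaway(self, video_src, audio_src, dest):
        """One operator take; the controller issues the pool routes itself."""
        if not self.free_pool:
            raise RuntimeError("no free hybrid pool path available")
        path = self.free_pool.pop()         # find an available re-entry path
        hidden_routes = [
            (audio_src, path + "-in"),      # audio-carrying signal into the pool
            (path + "-out", dest),          # shuffled/re-embedded audio onward
        ]
        return {"operator_take": (video_src, dest), "hidden": hidden_routes}

# Example: CAM-2 audio carried under CAM-1 video to a record output.
ctrl = DHPController(["pool-1", "pool-2"])
print(ctrl.audio_breakaway("CAM-1", "CAM-2", "REC-3"))
```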

Conclusion

In summary, the most successful technology for high-quality, real-time signal management in live production environments is hybrid routing. It avoids signal timing problems and offers the ability to save costs by dramatically reducing equipment needs. In essence, hybrid routing provides the highest possible performance for combined A/V signal switching in production and ingest operations, either on land or on wheels.

Neil Sharpe is vice president of marketing at Miranda Technologies.