With the wide variety of audio and video formats in a digital broadcast facility, routing systems have evolved beyond their fundamental function: to allow switching of real-time signals from any source to any destination. Conversion between analog and digital, embedding and de-embedding audio, up- and downconversion, transitions, and auto failover capabilities can now be incorporated into routing equipment, simplifying workflows and reducing the need for discrete conversion equipment.
Yet, with the use of compression technology, traditional baseband facility signal routing has been augmented by the addition of a new dimension: file-based audio and video routing over a media network. In fact, with file-based acquisition, multicast channels and Web distribution, content may never exist in the uncompressed domain. This integration of IT and broadcast systems has extended the routing and distribution infrastructure; media networks must be considered in audio and video routing system design.
Early SDI routing systems were designed for SD 601 serial digital signals. As HD-SDI found its way into broadcast operations, the first HD-capable routers only supported the 1.485Gb/s HD data rate; separate SD and HD distribution was required. Similarly, SDTI, SMPTE 310 and ASI routing was either not possible or required a dedicated routing infrastructure.
Over time, auto-sensing capabilities were implemented, and SDI routers became SD/HD-agnostic. As broadcasters demanded more versatility, router I/O cards became available to support ASI, SMPTE 310 and SDTI.
SDI speeds continue to increase. 1080p60 has spawned 3Gb/s standards. Vendors have addressed the 3Gb/s requirement by exploiting the modular design of their routing systems. Existing routers with backplane and connection schemes capable of supporting 3Gb/s bandwidth can be upgraded by replacing input, output and cross-point circuit boards in existing frames.
A limitation to increased data rates, however, is the existing cable infrastructure. Due to eye pattern degradation, existing coaxial cable run lengths may not support 3Gb/s data rates. This may create problems if a facility migrates to 1080p60-based production or desires to implement faster than real-time HD-SDI signal distribution.
1080p60 signals may still be distributable over existing cabling, but cable lengths that support HD-SDI will be roughly cut in half for 3Gb/s signals, and installing reclocking distribution amplifiers along existing cable runs may not be possible. Additionally, these are RF signals in the L and S bands, where existing cable crimps, wire nicks and tight bends can easily degrade a signal.
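As a rough back-of-the-envelope check (not a vendor calculation): coax loss from skin effect grows approximately with the square root of frequency, so a fixed receiver-equalization budget implies shorter runs at higher data rates. The function name and the 140m reference length below are illustrative assumptions, not figures from any standard.

```python
import math

def max_run_length_m(ref_length_m, ref_rate_gbps, new_rate_gbps):
    """Estimate the maximum coax run at a higher data rate, assuming
    cable loss (in dB) scales with the square root of frequency.
    A simplification; real receiver equalizer budgets are tighter."""
    return ref_length_m * math.sqrt(ref_rate_gbps / new_rate_gbps)

# Illustrative: if a given coax run supports ~140m of 1.485Gb/s HD-SDI,
# the simple model predicts roughly 99m at 2.97Gb/s.
estimate = max_run_length_m(140, 1.485, 2.97)
```

The square-root model alone predicts about 70 percent of the HD-SDI length; practical receiver margins push usable lengths closer to the "cut in half" rule of thumb above.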
Maximizing source availability is desired in broadcast operations. This has led to a distribution infrastructure where one large house router is fed every source and feeds every destination, as shown in Figure 1, above. Frequently, the physical router is partitioned into physically dispersed frames, with redundant signal paths. In this way, if one portion of the router fails, signals can be routed through a secondary path, ensuring uninterrupted operations.
There is a trend to augment the centralized router with small local routers that serve control rooms, quality control, ingest, edit and graphic areas. (See Figure 2) Local control panels are configured with limited source and destination cross-point control.
Taking this approach further leads to distributed baseband routing systems where there is no centralized house router. Instead, many smaller routers are strategically interconnected based on workflow and scheduling. Figure 3 (next page) illustrates a brute-force implementation of distributed routing. The local routers are connected in a fully meshed network, so the number of dedicated interconnections grows quadratically as the number of local routers increases. The result is a significant decrease in the number of available source/destination connections, because ports must be dedicated solely to router interconnection. This is not a practical real-world implementation.
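The port arithmetic behind the mesh problem can be sketched in a few lines (function names and port counts are illustrative):

```python
def mesh_links(n):
    """Dedicated point-to-point links needed to fully mesh n routers:
    n * (n - 1) / 2, which grows quadratically with n."""
    return n * (n - 1) // 2

def usable_ports(n, ports_per_router, links_per_neighbor=1):
    """Source/destination ports remaining after each router gives up
    one port (or more) per neighbor for interconnection."""
    return n * (ports_per_router - (n - 1) * links_per_neighbor)
```

For example, fully meshing eight hypothetical 32-port routers consumes 28 links and leaves 8 × 25 = 200 of the 256 ports for actual sources and destinations; at 16 routers, the interconnect alone needs 120 links.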
Let there be light
Consider a distributed routing system where each SDI input and output port can handle a full 1.5Gb/s SMPTE 292 serial signal. Now add an “uplink” capability, a fiber port with data rates of 10Gb/s, 40Gb/s or higher.
Rather than dedicated baseband interconnection, Figure 4A below shows how the use of a dedicated high-speed meshed optical core can solve the interconnect problem. All ports on the router are now source/destination; only the uplink handles inter-router distribution.
The key is to guarantee data rates by using time division multiplexing (TDM). Each group of data (cell) from an HD-SDI signal is assigned a data slot (time slice) in the stream carried on a single optical wavelength. Multiple aggregate HD-SDI streams can then be combined using coarse wavelength-division multiplexing (CWDM), increasing the number of real-time HD-SDI signals that can be transferred between distributed routers.
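A minimal sketch of such a slot map, assuming a fixed number of HD-SDI slots per wavelength (all names and wavelength values are hypothetical):

```python
def assign_slots(signals, slots_per_wavelength, wavelengths):
    """Map each real-time signal to a (wavelength, time-slot) pair.
    Total capacity is len(wavelengths) * slots_per_wavelength: TDM
    guarantees each signal its slot, and CWDM multiplies capacity."""
    capacity = slots_per_wavelength * len(wavelengths)
    if len(signals) > capacity:
        raise ValueError("optical core saturated")
    return {
        sig: (wavelengths[i // slots_per_wavelength], i % slots_per_wavelength)
        for i, sig in enumerate(signals)
    }
```

For instance, if one 10Gb/s wavelength carries six 1.485Gb/s HD-SDI streams, eight CWDM wavelengths carry 48 streams over a single fiber.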
As a backup in case the high-speed core becomes saturated, a few router ports can be dedicated to baseband interconnect of the local routers. Figure 4B shows a ring interconnection topology. In a worst-case scenario, baseband signals can be routed anywhere over the ring.
As acquisition and production move away from tape-based workflows, there is an opportunity to distribute compressed content in file format. Many cameras now support file-based acquisition; play to air is increasingly from media servers. As content is repurposed over the Web and included in DTV multicasts, it may never exist as baseband audio and video.
Figure 5 on the next page illustrates an integrated traditional and IT routing system. Ingest and playout servers are already connected to the media network. The key additions are codecs with network interfaces that are IP-compatible. Content may be transferred from source to destination over the media network. Centralized storage is accessible from the network, and content can be routed there as well.
Naturally, there are issues to resolve. Content consists of essence and metadata. The essence/metadata association must be managed during content movement. There can be problems with real-time delivery of content to servers. Network congestion must be avoided by verifying bandwidth with rigorous testing, and efficient routing protocols that guarantee Quality of Service are necessary. Compression, decompression and transcoding will add processing delays and affect lip sync.
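One way to keep the media network out of congestion is simple admission control: accept a file transfer only when total demand fits within the link's capacity minus a reserved headroom. A minimal sketch; the function name and 20 percent headroom figure are assumptions, not a specific protocol:

```python
def admit_transfer(new_mbps, active_mbps, link_mbps, headroom=0.2):
    """Admit a file transfer only if total demand stays within link
    capacity less a reserved headroom fraction -- a simple sketch of
    bandwidth verification before content movement begins."""
    return active_mbps + new_mbps <= link_mbps * (1 - headroom)
```

On a 1Gb/s link with 500Mb/s already active, a 200Mb/s transfer fits within the 800Mb/s usable budget, but a 400Mb/s transfer does not.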
Configuration and control
Distributed routing and the inclusion of the media network will make system configuration and control significantly more challenging. In a centralized routing topology, an input is sent directly to an output — a straightforward source A to destination B switch. In a distributed scenario, a route from point A to point B may make a number of “hops” from router to router over a switched connection.
Optical TDM routing is complicated. Each switching node in a distributed architecture must demultiplex the aggregate, then switch the signal of interest into a time slice whose destination is the desired output port. A route may entail a number of hops, each requiring a demux, switch and remux. This makes SDI routing look more and more like IP packet routing; with TDM, however, packet jitter is nonexistent, although latency increases with each hop.
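The latency behavior is easy to quantify: every hop adds a fixed demux, switch and remux delay, and because slot timing is deterministic there is no jitter term at all. The delay values below are placeholders, not measured figures:

```python
def tdm_path_latency_us(hops, demux_us, switch_us, remux_us):
    """End-to-end latency over a TDM optical core. Latency grows
    linearly with hop count; jitter is zero because every cell
    occupies a deterministic time slot."""
    return hops * (demux_us + switch_us + remux_us)

# Hypothetical three-hop path, 5 microseconds of processing per hop.
latency = tdm_path_latency_us(3, 2.0, 1.0, 2.0)
```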
Design-friendly configuration applications will be the only effective way to configure a distributed routing system. There will be the need to view systems as physical sources and destinations managed by logical “level” assignments across distributed resources. This will require careful planning and an understanding of workflows.
IP addressing schemes for devices and control panels must be carefully planned so that they do not conflict with existing network addresses and so that there are sufficient addresses reserved for future expansion. Suffice it to say that static IP addresses will be necessary.
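Python's standard `ipaddress` module can help with this kind of planning; the address block and subnet assignments below are hypothetical:

```python
import ipaddress

def carve_subnets(reserved_block, new_prefix):
    """Split a reserved address block into equal device subnets, so
    some can be assigned now and the rest held for future expansion."""
    return list(ipaddress.ip_network(reserved_block).subnets(new_prefix=new_prefix))

# Hypothetical plan: a /22 yields four /24s -- e.g. control panels,
# router frames, codecs, and one spare reserved for growth.
subnets = carve_subnets("10.20.0.0/22", 24)
```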
Ready for prime time?
Although not presently available on the broadcast equipment market, there have been discussions, white papers and R&D efforts to develop BE/IT hybrid router systems. However, distribution and routing of uncompressed signals will not disappear in the near future. There is also a great deal of legacy analog audio and video material to keep in mind.
Switched GigE has enabled distribution of compressed content to workstations and servers. However, 1GigE bandwidth is limited: at best, four 200Mb/s streams of content can be transferred simultaneously, with 20 percent of the link provisioned for header, routing, control and file-validation information.
If 10GigE paths are switched to servers and workstations, more than one uncompressed 1.5Gb/s HD stream could be supported. 100Gb/s Ethernet is close to standardization and commercial deployment and may support 50 1.5Gb/s streams.
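The stream counts above follow from simple arithmetic; a sketch that reuses the 20 percent overhead provision (the function name is illustrative):

```python
def stream_capacity(link_gbps, stream_gbps, overhead=0.2):
    """Real-time streams a link can carry after reserving a fraction
    of its bandwidth for headers, routing and control traffic."""
    usable = link_gbps * (1 - overhead)
    return int(usable / stream_gbps + 1e-9)  # guard against float rounding
```

By this model, 1GigE carries four 200Mb/s compressed streams, 10GigE carries five uncompressed 1.485Gb/s HD-SDI streams, and 100GigE about 53; the round figure of 50 quoted above leaves additional margin.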
With 10GigE paths to devices on a 100GigE network, the integration of SDI and IP routing in broadcast facilities becomes attainable. The limitations of dedicated ports to distributed routing nodes can be resolved with 100Gb/s single-cable interconnects. It is not all that far-fetched; 40Gb/s Infiniband and OC 768 exist now. A 100Gb/s link has recently been implemented between New York and metro Washington, DC.
It is time to plan ahead for the inevitable all-file-based production workflow. The routing of compressed content should be considered as part of the overall facility routing system.
Phil Cianci is a design engineer for Communications Engineering, Inc.