Digital networks: Piecing it together

The conceptual design of video facilities has begun to move away from the paradigm that sufficed for analog facilities and for the transitional "digital baseband" facilities commonly being implemented today. Viewed at a macro level, most digital facilities replicate NTSC approaches over SMPTE 259 or 292 bearers. At the edge of the topological map you will find new facility techniques intended both for uncompressed component digital video (SDTV and HDTV) and for emerging classes of media that are beginning to take advantage of other methods of interconnection appropriate to the media content.

Central to this emerging theme is the widespread use of compressed video as an object in both production and distribution. Analog techniques do not apply, and questions of time dependency can now be evaluated to find the best fit to available technology and operational techniques. Real-time playout is no longer the only way to deliver to the transmission point. Rather, a "best effort" facility can deliver the media to a buffer at the transmission point, where the final program is assembled from media streams (or files) sent at any speed (faster or slower than clock time), so long as an intelligent buffer understands the transmission requirements. Using SDTI-CP on either SDI or HD-SDI bearers can permit a wide range of options, as can other topologies. Conventional computer networking techniques can achieve the same result depending on the network topology available. The faster the network, the wider the topological options available on both ends of a circuit.
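The buffer-and-burst idea above can be sketched as a simple calculation. The function name and the rates used here are illustrative assumptions, not anything specified in the article: the point is only that a circuit slower than real time still works if the intelligent buffer pre-rolls enough media before playout starts.

```python
def min_preroll_seconds(ingest_rate_mbps, playout_rate_mbps, duration_s):
    """Seconds of media to pre-buffer before starting real-time playout.

    If ingest is at least as fast as playout (faster than clock time),
    no pre-roll is needed; otherwise the deficit accumulated over the
    programme must be buffered up front.
    """
    if ingest_rate_mbps >= playout_rate_mbps:
        return 0.0
    deficit_mb = (playout_rate_mbps - ingest_rate_mbps) * duration_s
    return deficit_mb / playout_rate_mbps

# A 30-minute programme that plays out at 25Mb/s:
# over a 50Mb/s circuit it can start immediately (faster than real time);
# over a 20Mb/s circuit, six minutes must be buffered first.
print(min_preroll_seconds(50, 25, 1800))  # 0.0
print(min_preroll_seconds(20, 25, 1800))  # 360.0
```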

In the past, a video facility was often based around layers, usually with conventional routing in each layer. For instance, there might have been discrete hardware for video, key signals, RGB component analog signals, system control and multiple analog audio layers. An eight-level routing system would not have been unusual. In such designs it is often necessary to plan a facility in which some levels are not fully utilized. This may be done for simplicity, though it can waste crosspoints in routing systems. In Figure 1, crosspoints for which no logical connection is correct are shown in white.

Multiple levels cost more to assemble than a single level, except when the crosspoint count in a single router is driven higher by increasing I/Os. Such costs grow as the product of the I/Os, and take step-wise increases when a new router or distribution frame size is required. Size does matter, and multiple frames will cost more.
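The crosspoint arithmetic is easy to see with a toy calculation. The frame sizes below are invented for illustration; the takeaway is simply that crosspoint count grows as the product of the I/Os, so collapsing many small levels into one large frame multiplies cost even though the wiring is simpler.

```python
def crosspoints(inputs, outputs, levels=1):
    """Total crosspoints: each level is its own inputs x outputs matrix."""
    return inputs * outputs * levels

# Eight segregated 32x32 levels versus one 256x256 frame:
print(crosspoints(32, 32, 8))    # 8192 crosspoints across eight frames
print(crosspoints(256, 256, 1))  # 65536 crosspoints in a single frame
```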

Even with the advent of a number of new formats in the last 25 years (SDI, ASI, HD-SDI, AES, Dolby E, AC-3 and other compressed audio, SSI, SDTI, HD-SDTI, etc.), the number of levels necessary to carry a multiplicity of formats can be reduced to perhaps three: 270Mb, 1.5Gb and AES. Unlike conventional design topology, where segregating signals into levels was necessary to ensure that compatible signals connected in logical circuits (and to reduce total crosspoint count creep), it is now possible to use sophisticated control systems to accomplish the same thing. Some of the complexity is moved from wiring to the control system. Using components that are capable of handling both SMPTE 259M (270/360Mb) and SMPTE 292M signals can yield a very efficient topology where fewer components are underutilized.
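Moving compatibility enforcement from wiring into the control system can be sketched as a simple rule table. The format names and rules below are illustrative assumptions, not a product API: the control system refuses any crosspoint that would not form a logical circuit, doing in software what segregated levels once did physically.

```python
# Which destination formats form a logical circuit with each source format.
COMPATIBLE = {
    "SMPTE259M": {"SMPTE259M"},  # 270/360Mb SDI routes only to SDI
    "SMPTE292M": {"SMPTE292M"},  # 1.5Gb HD-SDI routes only to HD-SDI
    "AES3":      {"AES3"},       # audio stays on the audio plane
}

def can_route(source_format, destination_format):
    """Allow a crosspoint only when the formats form a logical circuit."""
    return destination_format in COMPATIBLE.get(source_format, set())

print(can_route("SMPTE292M", "SMPTE292M"))  # True
print(can_route("SMPTE259M", "AES3"))       # False
```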

This approach can radically change the complexity of the physical connections. Instead of multiple signal levels, designers can execute multi-use planes, including the physical media planes (SDI, HD-SDI, AES), system control planes (proprietary, TCP/IP or other structures on a variety of physical bearers), as well as a device control plane (RS-422, Ethernet or other methods). Despite the increase in the types of signals carried, the low number of planes reduces complexity. Extensions into the wide area can extend each plane using standard computer network architectures with appropriate interfaces.

Note that in Figure 2, metadata first shows up as a media type. Though facilities have carried limited metadata for many years (closed captioning, for instance, both embedded in and parallel with the relevant content), handling metadata as a unique media element is now not uncommon. It is important to note that AES-3 has been chosen as the bearer to eliminate the need for an additional plane. Not shown is the control plane, which intelligently moves media to where the content is needed, controlling multiple classes of record and playback, processing, and routing equipment.