SMPTE to Standardize HD Over IP

ORANGE, CONN.
Transporting uncompressed SD and HD video across IP networks has been done routinely for several years, but always as a vendor-specific solution. Work is now underway inside SMPTE technical committees on new standards that cover uncompressed and variable-rate video. SMPTE's first set of standards for IP video was published in 2007 for transporting constant bit rate MPEG-2 video signals over IP networks. These standards, which were based on previous work done in the Pro-MPEG Forum and the Video Services Forum, have seen widespread adoption by a variety of manufacturers.

STANDARDS NEEDED
New standards are needed for a couple of reasons. First, more broadcasters and service providers are working to transport uncompressed video over IP networks as the bandwidth of these networks improves and as local access bottlenecks are resolved. Also, existing IP video standards (from the IETF and other bodies) have shown a preference for separating the video, audio and other data content into separate packet streams. While this makes sense for networks that must serve a wide variety of users with different levels of access bandwidth, it makes life more difficult for broadcasters, who would prefer to manage one stream with all the required video, audio and metadata delivered in a single, intact package.

Another major goal for these new standards is to permit interoperability between IP transmission equipment supplied by different vendors. Uncompressed video over IP products currently on the market don't work across vendor boundaries. In contrast, there are a number of companies today that supply mutually compatible IP video equipment that is COP-3 compliant (a specification that has now evolved into SMPTE 2022-2), but that format requires the video to be carried inside an MPEG-2 transport stream. Hopefully, the new SMPTE standards will allow similar interchangeability for uncompressed video.

Uncompressed SD video circuits using fiber optic and coaxial cable links have long been used inside the studio and between facilities located in the same city. They are easy to use, very high quality, and supported by a wide variety of equipment. Several long distance carriers also offer uncompressed SD services, even though many of these links end up transporting compressed video packaged within DVB/ASI signals. Uncompressed HD is also well entrenched inside the studio, although it is not as widely used between facilities. As IP network bandwidth grows and costs decline, the incentive to transport uncompressed SD and HD over IP networks will certainly increase.

SMPTE has already published three standards for IP: 2022-1-2007 Forward Error Correction for Real-Time Video/Audio Transport Over IP Networks, 2022-2-2007 Unidirectional Transport of Constant Bit Rate MPEG-2 Transport Streams on IP Networks, and 2022-3-2010 Unidirectional Transport of Variable Bit Rate MPEG-2 Transport Streams on IP Networks. These standards define many of the principal technologies that the forthcoming standards will be based upon.

The most interesting technology included in these standards is the concept of Row/Column Forward Error Correction (FEC). (For a more detailed discussion of this technology, see "Correcting IP Network Errors With FEC" in the May 2, 2007 issue of TV Technology.) This scheme groups the datagrams (as IP packets are wont to be called in the standards world) that carry the video payload into a series of matrices. These matrices can be either one-dimensional (for row-only FEC) or two-dimensional (for row/column FEC). For each row and each column, an extra datagram is calculated and sent by the source to provide the ability to correct errors. Each FEC datagram can be used to correct for the loss of at most one datagram in the corresponding row or column; if more than one error occurs, intersecting rows or columns can be used to recover lost datagrams. When Row/Column FEC is used, burst errors consisting of multiple consecutive lost datagrams can be corrected, provided that the burst is shorter than one row of the FEC matrix. In the new standards, the limits on the lengths of the rows have been greatly expanded, allowing the FEC to operate even when datagram loss bursts exceed 1 millisecond, which could be several hundred datagrams in a stream carrying uncompressed 1080p traffic at 3 Gbps.
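
To make the mechanism concrete, here is a minimal sketch of row/column XOR parity in Python. It is only an illustration of the idea, not the real packet format: RTP headers, sequence numbers and the 2022-1 FEC header are left out, and every payload is assumed to be the same length.

```python
# Minimal sketch of row/column XOR parity in the spirit of SMPTE 2022-1 FEC.
# Not the real wire format: headers and sequence numbering are omitted, and
# all payloads are assumed to be the same length.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def build_fec(datagrams, cols):
    """Arrange datagrams row by row into a matrix 'cols' wide and return
    one XOR parity datagram per row and one per column."""
    rows = [datagrams[i:i + cols] for i in range(0, len(datagrams), cols)]
    row_fec = []
    for row in rows:
        parity = row[0]
        for d in row[1:]:
            parity = xor_bytes(parity, d)
        row_fec.append(parity)
    col_fec = []
    for c in range(cols):
        parity = rows[0][c]
        for row in rows[1:]:
            parity = xor_bytes(parity, row[c])
        col_fec.append(parity)
    return row_fec, col_fec

def recover_one(surviving, parity):
    """Rebuild a single missing datagram from the survivors in its row or
    column plus the matching parity datagram."""
    for d in surviving:
        parity = xor_bytes(parity, d)
    return parity

# Tiny demonstration: a 2 x 4 matrix of 4-byte payloads, one datagram lost.
data = [bytes([i] * 4) for i in range(8)]
row_fec, col_fec = build_fec(data, cols=4)
# Datagram 5 sits in column 1 (along with datagram 1); pretend it was dropped.
recovered = recover_one([data[1]], col_fec[1])
assert recovered == data[5]
```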

KEY FEATURES OF THE NEW STANDARDS
With the new standards, it will be possible to transport a complete video stream, carrying every bit that is present in a SMPTE 259M or 292 signal, including the full HANC and VANC plus any embedded audio, time code, metadata, or any other signals that are contained within the 270 Mbps or 1.5 Gbps source signal. This simple (but bandwidth-intensive) approach greatly simplifies the work needed to ensure interoperability, because of the wide acceptance of these signal formats. When finalized, the standard should even include the ability to transport uncompressed 1080p signals at 3 Gbps.
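
For a rough sense of scale, the sketch below estimates how many datagrams per second those bit rates imply. The 1,400-byte payload is an illustrative assumption chosen for the example, not a value taken from the draft standards, and real streams add RTP/UDP/IP overhead on top of each payload.

```python
# Back-of-envelope packet rates for carrying full SDI signals over IP.
# The 1,400-byte payload size is an illustrative assumption.

PAYLOAD_BYTES = 1400

for name, rate_bps in [("SD (SMPTE 259M)", 270e6),
                       ("HD (SMPTE 292)", 1.485e9),
                       ("1080p", 2.97e9)]:
    datagrams_per_sec = rate_bps / (PAYLOAD_BYTES * 8)
    print(f"{name}: {datagrams_per_sec:,.0f} datagrams/s, "
          f"{datagrams_per_sec / 1000:.0f} per millisecond")
```

On these assumptions, a 3 Gbps 1080p stream works out to a few hundred datagrams per millisecond, which is consistent with the burst-length figures discussed above.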

Another addition is the use of a flag to signal the last datagram of each video frame, with a fresh datagram started at the beginning of each new video frame. This capability, which wasn't feasible with MPEG-2 transport streams, allows receiving devices to easily determine where each video frame starts and finishes. This flag could also make it easier for systems to be more resilient to extremely large gaps in the datagram sequence by using techniques such as video frame replication in the receiver to mask corrupted video frames.
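
A receiver that honors such a flag might reassemble frames along the lines sketched below. The tuple layout, flag handling and frame-replication policy here are purely illustrative, since the actual packet format is still being defined.

```python
# Illustrative receiver logic: use an end-of-frame flag to delimit frames and
# repeat the last good frame when a gap too large for the FEC to repair occurs.
# Each datagram is modeled as (payload, end_of_frame, lost).

def reassemble_frames(datagrams):
    frames, current, damaged, last_good = [], [], False, None
    for payload, end_of_frame, lost in datagrams:
        if lost:
            damaged = True                    # gap the FEC could not correct
        else:
            current.append(payload)
        if end_of_frame:
            frame = b"".join(current)
            if damaged and last_good is not None:
                frame = last_good             # frame replication masks the loss
            else:
                last_good = frame
            frames.append(frame)
            current, damaged = [], False
    return frames

# Example: frame 1 arrives intact; frame 2 loses a datagram and is replaced
# by a copy of frame 1.
packets = [(b"A1", False, False), (b"A2", True, False),
           (b"B1", False, False), (None, True, True)]
assert reassemble_frames(packets) == [b"A1A2", b"A1A2"]
```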

TECHNICAL CHALLENGES
In order to perform an FEC correction, the receiving device needs to collect and buffer all of the datagrams in a given row or column, along with the associated FEC datagram. This buffer adds delay to the end-to-end signal transmission, with the tradeoff that larger buffers mean more error resiliency. Implementers are faced with a dilemma: short rows and columns for low delays, or larger rows and columns for better protection against burst errors? Since each network and application is different, the standards will likely allow a wide range of settings for both row and column lengths.
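
The back-of-envelope calculation below shows how quickly that tradeoff plays out. The 1,400-byte payload and the matrix dimensions are illustrative assumptions, not values from the draft standards.

```python
# Rough estimate of the buffering delay a row/column FEC matrix adds at the
# receiver: the whole matrix must arrive before column corrections can be
# applied. Payload size and matrix dimensions are illustrative choices.

PAYLOAD_BYTES = 1400

def fec_buffer_delay_ms(stream_bps, rows, cols):
    datagrams_per_sec = stream_bps / (PAYLOAD_BYTES * 8)
    return (rows * cols) / datagrams_per_sec * 1000

# Small matrix on an SD stream: low delay, but protection only for short bursts.
print(fec_buffer_delay_ms(270e6, rows=4, cols=4))     # roughly 0.7 ms
# Larger matrix: better burst protection, noticeably more delay.
print(fec_buffer_delay_ms(270e6, rows=20, cols=20))   # roughly 17 ms
```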

Another challenge for the creators of these standards is developing enhanced methods to handle variable-rate bit streams. Particularly in the case of low bit-rate streams, it may take a long time to fill up a buffer, meaning that signals could become excessively delayed. To solve this, additional null packets can be added to the buffer whenever the (user-controlled) maximum delay limit is reached. That way, when variable rate bit streams drop to a low data rate (such as when sending a still image), the overall delay through the end-to-end system will still be controlled.
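
One way a sender might enforce such a delay ceiling is sketched below; the matrix size, timeout value and the get_next_datagram helper are all hypothetical, used only to illustrate the null-padding idea.

```python
# Sketch of bounding delay for variable bit-rate streams: if the FEC matrix
# has not filled within the user-set maximum wait, pad it with null datagrams
# so the block (and the video it protects) can be sent without further delay.
# get_next_datagram is a hypothetical callback that returns None on timeout.

import time

MATRIX_SIZE = 100          # datagrams per FEC matrix (rows x cols), illustrative
MAX_DELAY_S = 0.050        # user-controlled ceiling on buffering delay
NULL_DATAGRAM = bytes(1400)

def fill_matrix(get_next_datagram):
    """Collect datagrams for one FEC matrix; if the delay ceiling is hit
    before the matrix fills (e.g. during a still image on a variable-rate
    stream), pad the remainder with null datagrams."""
    matrix = []
    deadline = time.monotonic() + MAX_DELAY_S
    while len(matrix) < MATRIX_SIZE:
        remaining = deadline - time.monotonic()
        d = get_next_datagram(timeout=remaining) if remaining > 0 else None
        if d is None:                              # delay limit reached
            matrix.extend([NULL_DATAGRAM] * (MATRIX_SIZE - len(matrix)))
            break
        matrix.append(d)
    return matrix
```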

FUTURE DIRECTION
With luck, these standards should be approved and released within the next 6-12 months. Already, some work is being done within the Video Services Forum to set up sessions for interoperability testing between vendors who are developing equipment that is intended to meet these new standards. The upcoming VidTrans conference, to be held Feb. 1-3, 2011, in Marina Del Rey, Calif., may be the first occasion where vendors publicly show this equipment, and, just possibly, the first public demonstration of interoperability. It's also fairly likely that by this time next year, it will be possible to buy standards-based uncompressed video over IP gear on the open market.

Disclaimer: Keep in mind that much of the information in this article is about standards that have not yet been fully approved and ratified. Until that process is complete, it would be foolish indeed to develop or release products that are based on concepts that could be revised or discarded at any time.

Wes Simpson

Wes Simpson is President of Telecom Product Consulting, an independent consulting firm that focuses on video and telecommunications products. He has 30 years of experience in the design, development and marketing of products for telecommunication applications. He is a frequent speaker at industry events such as IBC, NAB and VidTrans, is the author of the book "Video Over IP," and is a frequent contributor to TV Tech. Wes is a founding member of the Video Services Forum.