Efficient digital distribution of media

Today’s media landscape is characterized by a rapidly expanding volume of content; new opportunities and accompanying demand for wider distribution and syndication; and broadening, increasingly diverse distribution outlets. These trends bring with them the need for easier and more efficient mechanisms to deliver media between content providers, contributors, aggregators, affiliates and distribution partners.

Traditionally, the delivery of content from studios and post facilities to broadcast networks and affiliates relied on physical media and transportation, shipping videotapes or hard drives by courier. Expensive, time-consuming and labor-intensive, physical distribution is no longer efficient and has gradually been giving way to electronic distribution, although physical delivery has not yet been completely replaced. Fully digital distribution via terrestrial IP-based networks or satellite offers increased automation, greater immediacy, higher security and tremendous cost savings over tape-and-truck transport. For the benefits to be fully realized, however, performance impediments must be overcome, and technical and workflow inefficiencies must be further reduced.

The breadth of distribution far exceeds the past scenario of broadcasters and affiliates. Distribution partners may also span multiple platforms and channels, including VOD providers, electronic sell-through, mobile providers and digital cinema — each of which may have specific formatting requirements. Many of these recipients do not have the dedicated network lines or satellite downlink access that traditional broadcast affiliates have had and may only be reachable over public networks. Overcoming inherent performance limitations in these networks is an essential aspect of increasing distribution efficiency, but just one of multiple factors.

In any individual transfer, speed is often limited by the receiver’s bandwidth, leaving much of the sender’s bandwidth unused. Optimizing the concurrency of unicast distribution can make fuller use of that bandwidth, while multicasting where possible and using intermediate distribution points shared by multiple recipients can minimize the amount of data sent across costly network links. Meanwhile, recipients’ varying requirements for file conformance, compression, container and metadata formats must be addressed in an efficient and extensible manner, ideally minimizing the delivery of multiple variants while avoiding the need for multiple tools at the receiving edge. Upon receipt, workflow processes including discretionary QC, review and approval must be handled consistently and efficiently as part of the distribution system.

In this two-part article, we will take a layered approach to exploring these challenges and solutions. We will first briefly look at how low-level network transport considerations affect individual transfers. In the second part, which will be published in the August issue, we will consider optimizing overall distribution architectures to leverage concurrency in a number of scenarios. Finally, we will look at the media being moved over these architectures and at conforming it to the receivers’ requirements.

The basic bottleneck — the transport protocol

No distribution system can operate at optimal efficiency without maximizing the performance of its underlying transport mechanisms. Much as even the fastest courier vehicles are limited in speed when forced to drive on dirt roads in heavy traffic, the distribution of media files over IP-based networks is limited by inherent performance impediments in the underlying communications protocols. These impediments can limit transfers to a fraction of their potential speed and reliability even on private networks, and they can all but cripple them over the public Internet, which is necessary for reaching many recipients in today’s converged workflows. The inefficiencies grow with bandwidth and distance, so they are most pronounced when delivering media across the country or between countries and continents. Multicast-based transmission will be touched on in the next article when we discuss concurrency, but to understand the fundamental challenges of Transmission Control Protocol (TCP), we will first look at unicast TCP transfers between two points.

The root of these performance limitations is the nature of TCP, the protocol for IP networks upon which transfer mechanisms such as FTP are based. With TCP, transmitted data must be delivered to the receiving application in the correct order. To achieve this, data is sent sequentially, up to the size of the recipient’s receive window (buffer). (See Figure 1.) When the receive window is full, the receiver sends an acknowledgement back to the sender. (See Figure 2.) This acknowledgement must arrive at the sender before more data can be transmitted.
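
As a practical aside, the receive window a TCP receiver can advertise is bounded by the size of its socket receive buffer, which the receiving application can ask the operating system to enlarge. The short Python sketch below is purely illustrative (whether, and how fully, the request is honored depends on the operating system); it simply shows where the window in Figure 1 comes from in practice.

import socket

# The receive window described above is bounded by the socket's receive
# buffer. Requesting a larger buffer lets the receiver advertise a larger
# window, so more data can be in flight before an acknowledgement is needed.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)  # ask for ~4MB

# The operating system may grant less (or more) than requested.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("Receive buffer granted:", granted, "bytes")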

This round-trip time, the time between the sender transmitting data and receiving acknowledgement of its receipt, is referred to as latency, and it can dramatically limit transfer speeds irrespective of available bandwidth. While local area networks and intra-area links (such as those within a major city) typically have latencies of less than 10ms, latency is particularly problematic over long distances. Transmission over the public Internet between the West and East Coasts of the United States may have latency between 80ms and 100ms. Links between continents typically have latency in excess of 120ms; a tested connection between sites in Toronto and China during the writing of this article reported more than 280ms. Many factors influence this latency: the number of network “hops” between the sender and receiver, the characteristics of each hop, the configuration of the routers along the transfer path, the sender’s and receiver’s effective proximity to high-speed backbones, their particular connectivity and so on. The practical result is the same: The latency between any two transfer points is a significant factor in the overall throughput of TCP-based transfers such as FTP.
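
The impact of latency is easy to quantify with an idealized calculation: even with no loss at all, the sender can have at most one receive window of data in flight per round trip, so throughput is capped at roughly the window size divided by the round-trip time. The sketch below assumes a 64KB window, a common default, purely for illustration.

# Idealized ceiling on TCP throughput: one receive window per round trip.
WINDOW_BYTES = 64 * 1024  # assumed 64KB receive window (illustrative default)

def max_throughput_mbps(rtt_seconds):
    """Best-case throughput in Mb/s when one window is sent per round trip."""
    return (WINDOW_BYTES * 8) / rtt_seconds / 1e6

for label, rtt in [("local link, 10ms", 0.010),
                   ("US coast to coast, 100ms", 0.100),
                   ("Toronto to China, 280ms", 0.280)]:
    print(label, "->", round(max_throughput_mbps(rtt), 1), "Mb/s ceiling")

Under these assumptions, the 10ms path tops out around 52Mb/s, the 100ms path around 5Mb/s, and the 280ms path below 2Mb/s, no matter how much bandwidth is provisioned.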

Another key factor constraining network throughput is packet loss, when a transmitted packet does not reach its destination. Loss can be caused by many factors, but the most common is network congestion: essentially, the overloading of a point somewhere in the network between the sender and receiver. The receiver may request retransmission of a lost packet if it recognizes this condition, or the sender may automatically retransmit if no acknowledgement has been received within a defined time period, which clearly must be at least as long as the network latency or every packet would be retransmitted. With basic TCP (without selective acknowledgements), a single lost packet, or a lost acknowledgement, can force retransmission of everything from that point to the end of the receive window, which further reduces efficiency and wastes large amounts of bandwidth resending data that had already been successfully received. In addition, these retransmissions must occur before subsequent data can be delivered, further slowing throughput.
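
The cost of this behavior can be illustrated with a simple model (not a description of any particular TCP implementation): if one packet in a window is lost, everything from that packet to the end of the window may be sent again, even though all but one of those packets arrived intact.

# Simple model of the retransmission waste described above. Assumes
# 1460-byte packets and a 45-packet (roughly 64KB) window; both figures
# are illustrative.
PACKET_BYTES = 1460
WINDOW_PACKETS = 45

def wasted_bytes(lost_index):
    """Bytes re-sent that the receiver had in fact already received."""
    retransmitted = WINDOW_PACKETS - lost_index   # lost packet to end of window
    return (retransmitted - 1) * PACKET_BYTES     # all but the lost packet arrived fine

print(wasted_bytes(0))    # losing the first packet: 64,240 bytes re-sent needlessly
print(wasted_bytes(44))   # losing the last packet: nothing wasted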

TCP responds to packet loss (or perceived packet loss, which may actually be the result of increased latency or other factors) by significantly lowering its transmission rate, on the assumption that the loss was caused by congestion, although that is not always the cause. TCP’s “slow start” and gradual ramp-up, necessary to avoid congestion with other sessions competing for bandwidth, mean that it does not restore its performance nearly as aggressively as it throttles it. This creates further inefficiencies, particularly for short sessions. In scenarios where multiple TCP sessions are contending for resources, over-subscription (exceeding the capacity the network can sustain reliably) is also extremely common.
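
A toy sketch of that asymmetry, under the usual textbook model in which the rate is halved on loss and then grows by roughly one segment per round trip, shows why recovery is so slow. The figures are illustrative only.

# Toy model of TCP's cut-fast, recover-slowly behavior: the sending window
# is halved when loss is detected, then grows by about one segment per
# round trip.
window = 40.0                 # segments in flight just before a loss event
target = window

window /= 2                   # multiplicative decrease on detecting loss
rtts = 0
while window < target:
    window += 1               # additive increase: ~1 segment per round trip
    rtts += 1

print("Round trips needed to recover from one loss event:", rtts)
# On a 100ms path, those 20 round trips represent about 2 seconds of reduced throughput.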

High amounts of loss can cause TCP-based applications such as FTP to fail completely. Combining the effects of latency and loss, FTP transfer throughput may max out at 1.5Mb/s or less on typical nonlocal networks even with 45Mb/s of available bandwidth. Most notably, increasing the bandwidth available will not help because it will not reduce the loss or latency that is responsible for slowing down the transmission.
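
A commonly cited approximation of steady-state TCP throughput (due to Mathis et al.) makes the combined effect concrete: throughput is roughly 1.22 × MSS / (RTT × √p), where MSS is the segment size and p is the packet-loss rate. The values below are illustrative, but they show how a modest loss rate on a long-haul path lands near the figure quoted above, regardless of the 45Mb/s of available bandwidth.

import math

def tcp_throughput_mbps(mss_bytes, rtt_seconds, loss_rate):
    """Approximate steady-state TCP throughput (Mathis et al. approximation)."""
    return (mss_bytes * 8 / rtt_seconds) * (1.22 / math.sqrt(loss_rate)) / 1e6

# 1460-byte segments, 100ms round trip, 1 percent packet loss:
print(round(tcp_throughput_mbps(1460, 0.100, 0.01), 1), "Mb/s")   # about 1.4 Mb/s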

There are many other considerations that affect the throughput and efficiency of network transfers, but the above is likely sufficient to convey the overall inefficient nature of TCP for high-speed transfer of large amounts of data such as media files. The net result is poor bandwidth use — in general, less than 30 percent and often less than 10 percent for long-distance delivery.

One approach to overcoming these limitations and maximizing the throughput and efficiency of file transfers over IP networks is to use the User Datagram Protocol (UDP) rather than TCP. UDP transmission does not require acknowledgements from the receiver at the network level; any reliability management and error checking must be performed at the application level. Because packets can be sent continually without waiting for an acknowledgement from the receiver, the problem of network latency is overcome, and the sender is able to use a much greater percentage (in fact, almost all) of the available network bandwidth.
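
As a rough sketch of the sending side under these assumptions (the file name, destination address and the simple sequence-number header are all illustrative, not a description of any particular product), the sender simply numbers each datagram and sends it immediately:

import socket
import struct

# Sketch of a UDP sender: datagrams are numbered and sent back to back
# without waiting for acknowledgements, so latency no longer gates the rate.
# A real sender would also pace its output; see the rate-control discussion below.
DEST = ("receiver.example.com", 9000)    # illustrative address and port
CHUNK = 1400                             # payload size kept under a typical MTU

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
with open("program.mxf", "rb") as media:                     # illustrative media file
    seq = 0
    while chunk := media.read(CHUNK):
        sock.sendto(struct.pack("!Q", seq) + chunk, DEST)    # 8-byte sequence number
        seq += 1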

Because UDP does not ensure that packets will be delivered in their original transmission order, it is up to the receiving application to reassemble them in the correct order to reconstruct the transferred file. The transfer application is also responsible for handling packet loss, as UDP has no inherent retransmission. When the receiving application determines that a packet has likely been lost, retransmission can be requested as part of periodic status updates sent back to the sender. Only the lost data is resent, minimizing overhead.
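
A matching sketch of the receiving side (again with illustrative port, header and update interval) indexes each payload by its sequence number and periodically reports the gaps it has detected so that only the missing packets are re-sent:

import socket
import struct

# Sketch of a UDP receiver: payloads are stored by sequence number so they
# can be reassembled in order, and gaps are reported back periodically so
# the sender retransmits only what was actually lost.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 9000))                      # illustrative port

received = {}                              # sequence number -> payload
highest = -1

for _ in range(100000):                    # sketch only; a real receiver would stop on an end-of-transfer marker
    packet, sender = sock.recvfrom(2048)
    seq = struct.unpack("!Q", packet[:8])[0]
    received[seq] = packet[8:]
    highest = max(highest, seq)

    if seq % 1000 == 0:                    # periodic status update back to the sender
        missing = [s for s in range(highest + 1) if s not in received]
        if missing:                        # request only the packets that were lost
            sock.sendto(struct.pack("!%dQ" % len(missing), *missing), sender)

# Reassemble the file in sequence order once the transfer is complete.
with open("program.mxf", "wb") as out:
    for seq in sorted(received):
        out.write(received[seq])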

Because UDP lacks the acknowledgement-based flow control of TCP, UDP transmission can saturate the available network capacity, creating congestion and thus causing packet loss. Repeatedly doing so recreates one of the key performance inhibitors that we are attempting to overcome. The impact of heavy loss can outweigh the performance gained by overcoming latency, so the result may actually be slower than TCP-based FTP. Many UDP-based transfer solutions throttle transmission based on measured loss, but because they continually push the rate upward to maximize throughput, they effectively create loss in order to measure it. Beyond the impact on the transfer itself, this congestion and the resulting loss are also unfriendly to other traffic on shared networks. To avoid this, packet loss should not be the sole measurement used to determine the transfer rate; additional network conditions should be monitored, and the rate adjusted accordingly.
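
One sketch of such an approach, with entirely illustrative thresholds and step sizes, treats a rising round-trip time as an early sign of queuing and eases off before loss occurs, while still cutting harder if loss does appear:

# Sketch of rate control that does not rely on loss alone: growth in measured
# round-trip time is treated as an early sign of congestion (queues building),
# while actual loss triggers a firmer cut. Thresholds and steps are illustrative.
def next_rate_mbps(current, base_rtt_ms, measured_rtt_ms, loss_rate):
    if loss_rate > 0.001:
        return current * 0.7           # real loss observed: back off firmly
    if measured_rtt_ms > base_rtt_ms * 1.25:
        return current * 0.9           # delay rising: ease off before loss occurs
    return current * 1.05              # path looks clear: probe upward gently

rate = 100.0                                                        # Mb/s
rate = next_rate_mbps(rate, base_rtt_ms=80, measured_rtt_ms=115, loss_rate=0.0)
print("New target rate:", round(rate), "Mb/s")                      # eases back to 90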

A deeper look at techniques for increasing transfer efficiency beyond standard TCP is outside the scope of this article. The key takeaway is that overcoming the inherent limitations of TCP-based transmission is a necessary component of improving digital content distribution over public networks, but it is just one aspect of maximizing overall distribution efficiency. In the next article, we’ll look at concurrency optimization and conforming content at the point of receipt.

Brian Stevenson is director of product management, and Mike Nann is director of marketing at Digital Rapids.