Concurrency & conforming

In part one of this look at distributing digital media files, we discussed overcoming impediments inherent in the TCP/IP protocol to improve the performance of individual file transfers. (See “Efficient digital distribution of media” in the July issue.) Having improved the performance of any given transfer between two points, we must now extend this into real-world applications, where content is distributed to many — even hundreds — of destinations. We will look next at optimizing distribution architectures for delivery concurrency and at addressing recipients' differing media format requirements.

The second stage — concurrency

Ideally, multicast could be used to significantly reduce the total amount of data transferred. While multicast is viable on private terrestrial IP or satellite networks, it is not practical over the public Internet. As a result, delivery to multiple recipients reachable only over the public Internet generally involves multiple IP unicast transfers.

For a content owner or provider delivering content to large numbers of distribution partners, the next best thing to the ideal of multicast is a hybrid model — multicast over satellite to those who can receive it, multicast over IP networks where possible (such as private networks) and multiple IP unicast transfers to the remaining recipients. The transfer application should dynamically acquire information about the receiving capabilities of each destination point or user and concurrently manage a hybrid mix of transfer types to reach all destinations optimally.
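
As a simple illustration of how such an application might classify destinations, consider the sketch below. The destination names and capability flags are hypothetical, not any particular product's API.

```python
from dataclasses import dataclass

@dataclass
class Destination:
    name: str
    satellite_multicast: bool = False  # can receive the satellite multicast feed
    ip_multicast: bool = False         # on a multicast-capable private IP network

def plan_delivery(destinations):
    """Group destinations by the most efficient transfer type each can receive."""
    plan = {"satellite": [], "ip_multicast": [], "unicast": []}
    for d in destinations:
        if d.satellite_multicast:
            plan["satellite"].append(d.name)     # one transmission covers all of these
        elif d.ip_multicast:
            plan["ip_multicast"].append(d.name)  # one stream per multicast-enabled network
        else:
            plan["unicast"].append(d.name)       # one unicast transfer per recipient
    return plan

destinations = [
    Destination("affiliate-1", satellite_multicast=True),
    Destination("affiliate-2", ip_multicast=True),
    Destination("web-partner"),  # reachable over the public Internet only
]
print(plan_delivery(destinations))
# {'satellite': ['affiliate-1'], 'ip_multicast': ['affiliate-2'], 'unicast': ['web-partner']}
```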

With multiple IP unicast transfers almost inevitable in broad distribution schemes, optimizing transfer concurrency is a key to efficiency. It is also significant that a location that typically functions as a receiver may also sometimes function as a sender. For example, a local broadcast television affiliate receiving national network content may also be a contributor of local news coverage to other affiliated stations in the region. In this case, while multiple stations receive content from a central point, there are also transfers directly between some of these stations.

To demonstrate ways in which transfer concurrency can be made more efficient, consider a number of different scenarios for multiple unicast transfers. Each of the scenarios below involves a source location, A, delivering a file to three destinations: B, C and D. Assume that the outgoing available bandwidth at A is at least as great as the incoming bandwidth of each receiver, as this is typically the case for senders that frequently distribute large amounts of content. For simplicity, assume that connections between any of these points share the same packet-loss characteristics and that an optimized transfer system is being used to overcome latency and achieve high utilization on the network.
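
Under these assumptions, the scenarios that follow can be compared with one piece of arithmetic: the time to move a file over a link is the file size divided by the slower end of the connection. A minimal sketch of that idealized model (ignoring protocol overhead and packet loss):

```python
def transfer_time_s(file_gb, sender_mbps, receiver_mbps):
    """Idealized seconds to move a file over one link; the slower end
    of the connection is the bottleneck. Overhead and loss are ignored."""
    file_megabits = file_gb * 8000  # 1GB ~= 8000 megabits
    return file_megabits / min(sender_mbps, receiver_mbps)
```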

Scenario 1: Consecutive (nonconcurrent) transfers

In this scenario, content is first transferred from A to B, then A to C and finally A to D. This is clearly the least efficient, as there is no concurrency. Such an approach could be acceptable if all points have equivalent bandwidth or if the sender's outgoing bandwidth is lower than the incoming bandwidth of each recipient. In such cases, the sender's outgoing bandwidth is the constraint on overall performance. This is unlikely when content is regularly distributed from a centralized source, however, as the sender will typically have considerable bandwidth dedicated to distribution.

As such, transfer rates are typically limited by receiver bandwidths; if the sender has 45Mb/s outgoing bandwidth and a receiver only 10Mb/s, utilization of the sender's outgoing bandwidth can be less than 25 percent during that transfer. Another problem is that the total time before all destinations have received the file increases with the number of recipients, irrespective of the available bandwidth at the sender. Requirements for timeliness of content may make this impractical.
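
Plugging the figures from this paragraph into the model above, with a hypothetical 10GB file, illustrates both problems:

```python
# Scenario 1: consecutive transfers of a hypothetical 10GB file from a
# 45Mb/s sender to three 10Mb/s receivers.
FILE_GB, SENDER_MBPS, RECEIVER_MBPS, RECIPIENTS = 10, 45, 10, 3

per_transfer_s = (FILE_GB * 8000) / min(SENDER_MBPS, RECEIVER_MBPS)
total_s = per_transfer_s * RECIPIENTS  # transfers run one after another

print(f"each transfer: {per_transfer_s / 3600:.1f} h; all three done: {total_s / 3600:.1f} h")
print(f"sender uplink utilization: {RECEIVER_MBPS / SENDER_MBPS:.0%}")  # ~22 percent
```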

Scenario 2: Star (central server)

In this architecture, similar to a star network topology, all senders and receivers access a central server. A content file is uploaded to the server and is distributed to all destinations. (See Figure 1.)

In this example, the file would first be transmitted from A to the server; the server then delivers the file to the three recipients. This may be advantageous from the perspective of the sender, as the total amount of “good” data (ignoring overhead, resends of lost packets, etc.) transmitted from A is the size of the file, similar to a multicast transmission. From the overall system perspective, however, the transfer to the server increases the total amount of good data transferred by the size of the file. This could be negligible overall if there are a large number of unicast recipients (with more than 100 recipients, the extra transmission adds less than 1 percent overall), but it will be significant with a small number (33 percent additional overhead in this example).
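
Because the extra hop to the server is a fixed cost of one file size, its relative overhead shrinks as the recipient count grows. A quick check of the figures above:

```python
def star_overhead(recipients):
    """Extra 'good' data from staging on a central server, as a fraction of
    the unicast minimum: one additional full-size transfer (A to server)."""
    return 1 / recipients

print(f"{star_overhead(3):.0%}")    # 33% with the three recipients in this example
print(f"{star_overhead(100):.1%}")  # 1.0%; beyond 100 recipients it falls below 1%
```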

This still could be beneficial if the bandwidth of sender A is the overall bottleneck, as it only delivers the file to one recipient directly. Again, though, where A is a common distribution source, it will likely have considerable dedicated bandwidth. Another scenario in which this topology may be beneficial is if the link from A has high transit costs (such as a private network link) or unreliable performance. In such cases, minimizing the transfer from A can result in cost savings or increased overall throughput. Also note that as the central server is linked to all senders and recipients, it can work to optimize the overall throughput of the system.

However, all that is achieved in this scenario is to defer the concurrency problem, moving it from A to the central server. This topology does not offer a direct connection between the transferring locations. An additional file transfer intended to move from B only to C would need to go through the server, doubling the data transfer required.

Scenario 3: Network mesh

An improved scenario employs a network mesh topology in which all points can connect directly to each other. (See Figure 2.) File transfers take place directly between sending and receiving locations. By separating system-level management from the transfers themselves, a central server can still be used to monitor the network links between each point and control each point to optimize individual transfers for overall throughput but without the server actually performing any file transfers.

This topology can use all available bandwidth at A and keeps the total overall amount of good data transferred to the unicast minimum while reducing the overall time before all recipients have received the file. This architecture also allows the formation of a star-like topology within the mesh, where a receiver also functions in a fashion similar to the server in Scenario 2. (See Figure 3.) The transfer to that receiver isn't additional overhead, however, because it needs to receive the file anyway. Furthermore, the potential advantages of Scenario 2 can be maintained, such as when A has limited bandwidth or high outbound transit costs. Finally, additional transfers directly between points can be performed easily.
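
Using the same hypothetical figures as in Scenario 1 (a 10GB file, a 45Mb/s sender and three 10Mb/s receivers), the mesh lets A run all three unicast transfers concurrently within its uplink:

```python
# Scenario 3 versus Scenario 1: three concurrent 10Mb/s transfers fit
# within A's 45Mb/s uplink, so every receiver finishes together.
FILE_GB, SENDER_MBPS, RECEIVER_MBPS, RECIPIENTS = 10, 45, 10, 3

assert RECIPIENTS * RECEIVER_MBPS <= SENDER_MBPS  # concurrency is feasible
concurrent_s = (FILE_GB * 8000) / RECEIVER_MBPS   # all transfers run in parallel
consecutive_s = concurrent_s * RECIPIENTS         # Scenario 1, for comparison

print(f"mesh (concurrent): {concurrent_s / 3600:.1f} h")   # ~2.2 h
print(f"consecutive:       {consecutive_s / 3600:.1f} h")  # ~6.7 h
```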

Scenario 4: Network mesh and shared reception points

As a further extension to Scenario 3, consider A, B, C and D as master transfer points (“engines”) in separate local areas. As previously discussed, transfers on a LAN or within a metropolitan area can be more efficient than those over significant distances because of factors such as lower packet loss. If private networks are being used (as opposed to the public Internet) for long-distance transit, the cost of transfer on local network links can also be much less than that over the long-distance links (or free in the case of a LAN). In this scenario, multiple recipients (clients) within the same local area can share a particular engine using a star-like topology. (See Figure 4.) The engine acts as the server for that local area, with the file transferred once to the engine and then from the engine to the local clients. Unless the engine is one of the final recipients, this does add to the total overall data transferred but minimizes the amount transferred on the lower-performance or more costly external network (as opposed to sending directly to each client from the origin).
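
A back-of-the-envelope comparison with hypothetical site and client counts shows the saving on the external network:

```python
# Scenario 4: one long-distance transfer per site's engine, followed by
# local fan-out, versus sending to every client directly from the origin.
# The site and client counts below are hypothetical.
def wan_transfers(sites, clients_per_site):
    direct = sites * clients_per_site  # origin sends to each client individually
    via_engines = sites                # one external transfer per local engine
    return direct, via_engines

direct, via_engines = wan_transfers(sites=4, clients_per_site=5)
print(f"external transfers: {direct} direct vs {via_engines} via engines "
      f"({1 - via_engines / direct:.0%} less long-distance traffic)")
```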

Conforming to recipient requirements

The preceding sections can be applied to transferring any type of data. With digital media, additional considerations affect the overall distribution beyond raw data delivery. Recipients may have varying conformance requirements for compression format, media container format and metadata. Even among broadcast affiliates alone, there will be a variety of brands and models of playout servers in use, each with its own requirements.

Delivering variants of the content directly from the sender in each required output format can hinder distribution efficiency and increase the amount of data transferred, particularly where multicast could have been used. (See Figure 5.) Because rewrapping media essence in a different container format can be lossless, the ability to conform a transmitted media package at the receiving end (such as rewrapping it from MXF to LXF) allows the sending of a common master format without any visual degradation, reducing the number of variants that must be sent. Content delivery solutions now exist that integrate functions such as rewrapping and transcoding, as well as output to tape, directly within the receiving appliance.
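
A sketch of the receiver-side decision follows; the per-recipient container table is hypothetical, with the MXF and LXF names taken from the example above:

```python
# Receiver-side conforming: rewrap the common master container only when a
# recipient's playout server requires a different wrapper. Rewrapping changes
# the container alone, leaving the compressed essence untouched (lossless).
MASTER_CONTAINER = "MXF"

RECIPIENT_CONTAINER = {   # hypothetical per-recipient requirements
    "affiliate-1": "MXF",
    "affiliate-2": "LXF",
}

for recipient, wanted in RECIPIENT_CONTAINER.items():
    if wanted == MASTER_CONTAINER:
        print(f"{recipient}: deliver the master as-is")
    else:
        print(f"{recipient}: rewrap {MASTER_CONTAINER} -> {wanted} (essence unchanged)")
```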

There may also be a requirement to transcode into a different compression format, frame size or bit rate. For example, some distribution partners and affiliates may hold the rights to the content for both local television and their own website. While transcoding between compression formats for content intended for the same display size and type may not be ideal for quality because of recompression artifacts, the use of a source with a sufficiently high bit rate can mitigate these issues (such as transcoding a 50Mb/s MPEG-2 source down to 6Mb/s H.264). Similarly, content destined for lower resolutions and bit rates (such as a website) can typically be transcoded from a higher source without significant loss of visual quality. Repurposing the asset at the receiving end eliminates the need for transmitting separate versions for each platform from the source, increasing delivery efficiency. Of course, sending multiple variants from the source can still be done when maximum quality must be maintained.
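
A sketch of repurposing at the receiving end from a single high-bit-rate master; the target ladder is hypothetical, with the 50Mb/s MPEG-2 to 6Mb/s H.264 step taken from the example above:

```python
# One mezzanine-quality master is transcoded down per target platform at the
# receiving end, instead of the source transmitting a version for each.
MASTER = {"codec": "MPEG-2", "bitrate_mbps": 50}

TARGETS = {  # hypothetical platform requirements
    "broadcast": {"codec": "MPEG-2", "bitrate_mbps": 50},
    "web":       {"codec": "H.264",  "bitrate_mbps": 6},
    "mobile":    {"codec": "H.264",  "bitrate_mbps": 1.5},
}

for platform, spec in TARGETS.items():
    if spec == MASTER:
        print(f"{platform}: use the master directly")
    else:
        print(f"{platform}: transcode {MASTER['codec']} {MASTER['bitrate_mbps']}Mb/s"
              f" -> {spec['codec']} {spec['bitrate_mbps']}Mb/s at the receiver")
```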

As target output platforms go beyond just broadcast servers, there may also be unique metadata required by some recipients. For example, Web or mobile publishing platforms may require additional descriptive or technical metadata beyond what is normally associated with broadcast content. As such, the metadata set supported by the distribution system must be extensible with these unique additions, and the metadata distributed with the master package should be a superset of all requirements across all recipients. This may require that the metadata be sent in a format such as XML rather than within a standard media container.
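
For instance, a superset metadata document can carry the core broadcast fields alongside platform-specific extensions that broadcast recipients simply ignore. The element names below are illustrative, not drawn from any particular standard:

```python
# Building an extensible, superset metadata document as XML.
import xml.etree.ElementTree as ET

meta = ET.Element("package_metadata")

core = ET.SubElement(meta, "core")  # fields every recipient understands
ET.SubElement(core, "title").text = "Evening News"
ET.SubElement(core, "duration").text = "00:28:30"

web = ET.SubElement(meta, "extension", platform="web")  # web-only additions
ET.SubElement(web, "seo_description").text = "Top stories and local coverage"
ET.SubElement(web, "category").text = "news"

print(ET.tostring(meta, encoding="unicode"))
```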

Conclusion

Maximizing efficiency in digital content distribution systems is more than just accelerating transfer performance; concurrency optimization and local conforming are key contributors to achieving the benefits that digital delivery offers.

Brian Stevenson is director of product management and Mike Nann is director of marketing/communications at Digital Rapids.