There is little sign of recession in the content delivery network (CDN) service market, which is expanding at 22 percent to 25 percent per annum, according to most surveys, and is set to reach $5 billion by 2016. Growth is being driven mostly by the proliferation of OTT video services, which demand not just quality of service and reliability, but increasingly also the levels of security and content protection that operators have come to expect within "walled garden" pay-TV infrastructures.
With a growing expectation in the industry that OTT deployments will generate a spike in video piracy, particularly around tablets, content owners and operators are looking to CDN providers to play their part in defending against unauthorized use and redistribution of content while it is within their domains, and also to participate in end-to-end security. As matters stand, the major studios may prohibit premium content from being distributed over OTT, especially to tablets. Yet given the anticipated boom in tablet content consumption, it is in the interests of rights holders as much as service providers to ensure that HD content can be delivered to all devices whose owners are willing to pay for it either directly or through advertising. For these reasons vendors of CDN platforms or services are integrating security with their delivery platforms, and we will hear much more about this aspect of CDNs over the coming year.
There will be a trend toward a standard model unifying security and delivery, almost certainly around the MPEG-DASH standard, which was ratified at an MPEG meeting in December 2011, with deployments likely to start toward mid-2012. The standard will specify a common adaptive streaming mechanism over HTTP, the common file format (CFF) for file-based video transmission, and common encryption schemes. Over time it will also include a multi-DRM (digital rights management) model aligned with the one adopted by the Digital Entertainment Content Ecosystem (DECE) for the UltraViolet digital locker.
In the wider sense security is part of the overall CDN objective of generating new revenues through distribution of content to Internet-connected devices, including smart TVs as well as PCs, tablets and smartphones. So far, OTT has failed to make much money for traditional pay-TV operators, which have viewed it as something they have to do to keep up with competitors in the hope that revenues will come in time. CDN service providers have done their bit by attempting to cater to the various possible business models through their architectures, with four main possibilities.
First, a CDN service, an example being Ericsson's Media Delivery Network, can support a pay-TV operator's own infrastructure to deliver content to subscribers, reducing costs, ensuring QoS, and expanding reach to connected devices outside the existing service footprint. Second, the CDN can be provided on a wholesale basis, where the capacity is resold by a service provider to broadcasters, content owners or, indeed, smaller pay-TV operators. Third is the model sometimes called OTT caching, which is the one feared by pay-TV operators because it threatens to cut them out of the distribution loop. Under this model, the CDN is a transparent layer connecting content owners or rights holders directly to consumers. (See IPTV Figure 1.) This model is being exploited by some big makers of Internet-connected TVs as a way of getting into the content value chain, rather than just being providers of dumb displays. The final model, somewhat different, is the federated CDN, connecting multiple CDNs together to cover a larger geographical or service area, facilitating global delivery of services or content.
These models can be implemented using a variety of technologies, protocols and architectures, so that the CDN providers themselves face deployment decisions that will depend on the target market, geographical factors, and nature of the service, such as whether it is mostly live, linear, on demand, or a combination — as it usually will be.
The need for CDNs arises from the use of the Internet or public IP networks to deliver content, and from the unpredictability of these OTT services, where operators can no longer predict demand, and increasingly lack control over the end devices, often having no idea of the breakdown between, for instance, tablets, PCs, web-connected TVs and smartphones. The video is now being transported over networks with varying traffic levels and, even when there is plenty of capacity, there is a risk of unpredictable traffic surges causing packet drops or delays, leading to glitches in QoS.
CDNs have evolved to overcome the congestion problem through a combination of caching, adaptive streaming and other traffic optimization techniques. Typically a CDN has two layers, a set of core servers responsible for managing the CDN, also acting as ultimate sources of content for the second layer, which comprises edge servers dealing directly with client requests. It is the edge servers that deliver the content to the end devices, and so they tend to be distributed geographically closer to the users. This architecture brings scalability because the edge servers can be expanded independently according to the number of users in their area. The big advantage is that the number of routing hops between the immediate source of content, the edge server, and the consuming device is reduced, which in turn cuts down on transmission delay and makes packet loss less likely.
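The two-layer arrangement described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names, not any vendor's implementation: a core server acts as the ultimate source of content, and an edge server answers client requests from a local cache, going back to the core only on a miss.

```python
class CoreServer:
    """Core layer: ultimate source of content for the edge servers."""
    def __init__(self, catalog):
        self.catalog = catalog          # content_id -> bytes

    def fetch(self, content_id):
        return self.catalog[content_id]


class EdgeServer:
    """Edge layer: serves clients in one region, caching what it fetches."""
    def __init__(self, core):
        self.core = core
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def serve(self, content_id):
        if content_id in self.cache:
            self.hits += 1              # served locally: fewer hops, less delay
        else:
            self.misses += 1            # one trip back to the core, then cached
            self.cache[content_id] = self.core.fetch(content_id)
        return self.cache[content_id]


core = CoreServer({"ep1": b"video-bytes"})
edge = EdgeServer(core)
for _ in range(3):
    edge.serve("ep1")
print(edge.hits, edge.misses)           # -> 2 1
```

Only the first request travels back to the core; every later request for the same item stays at the edge, which is where the reduction in routing hops, delay and packet loss comes from.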
It is important to realize that congestion can occur not just in the network but also in the servers, given that OTT opens up the possibility of delivering video on a unicast one-to-one basis to huge numbers of users. Even with edge servers, OTT services that include free content accessible by anybody can be overwhelmed to the extent that a server may not be able to satisfy demand. In that event there are three possibilities. First, service can be denied to some people so that it can be provided at full bit rate to others. Second, the bit rate can be reduced to the point where all people can receive service but at lower quality. This was one of the motives for Adaptive Bit Rate Streaming (ABRS). A third option in which there is a growing interest is the hybrid CDN/peer-to-peer (P2P) model, where clients themselves become servers of content to other clients. This in effect adds a third layer to the distribution architecture beyond the edge servers, comprising the clients themselves.
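The trade-off between the first two overload responses is simple arithmetic. The following sketch uses illustrative numbers (the capacity and stream rates are assumptions, not figures from any operator) to show how a server with fixed egress capacity either admits fewer clients at full rate or serves everyone at a reduced rate.

```python
# Illustrative figures only: an edge server with fixed egress capacity.
capacity_mbps = 100.0
full_rate_mbps = 5.0      # full-quality stream
clients = 40              # demand exceeds the 100/5 = 20 full-rate slots

# Option 1: deny service to some clients, full rate for the rest.
admitted = int(capacity_mbps // full_rate_mbps)
denied = clients - admitted

# Option 2: serve every client at a uniformly reduced bit rate.
reduced_rate_mbps = capacity_mbps / clients

print(admitted, denied)       # -> 20 20
print(reduced_rate_mbps)      # -> 2.5
```

Under these numbers, half the audience is turned away in the first case, while in the second everyone watches but at half the bit rate, which is exactly the quality trade-off that motivated ABRS.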
These methods all have advantages and disadvantages. The P2P model saves bandwidth within the core network and reduces the amount of output capacity needed in the server, but requires cooperation of clients, and also upstream access bandwidth for the P2P transmission. Reducing the bit rate can enable more clients to be served, but can deliver noticeable variations in quality, especially if it is poorly designed or implemented. But ABRS has another major function, which is to enable content to be delivered to end points with different capabilities. Streams are encoded at different bit rates and transmitted in parallel, so that end devices can obtain video at a suitable resolution. This has led to another innovation, the mezzanine format, so called because it is an intermediate between the contribution resolution and that of the end devices. The idea is to avoid the complexity of encoding at different resolutions and bit rates, and instead send all content in just one master mezzanine format, which can serve all devices. But this format has to encode at five to 10 times the bit rate of the highest quality target devices in order to ensure QoS. It is therefore quite wasteful of bandwidth itself, and imposes complexity on the target device, which has to transcode from the mezzanine into its own format.
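The parallel-encoding idea behind ABRS can be sketched as a client-side selection over a bit-rate "ladder." The ladder values and safety margin below are illustrative assumptions, not taken from any standard: the client measures its throughput and picks the highest encoding it can sustain.

```python
# Hypothetical bit-rate ladder: the same content encoded at several rates.
LADDER_KBPS = [400, 800, 1500, 3000, 6000]

def pick_rate(throughput_kbps, ladder=LADDER_KBPS, headroom=0.8):
    """Choose the highest encoding that fits within a safety margin."""
    budget = throughput_kbps * headroom
    best = ladder[0]              # always fall back to the lowest rung
    for rate in ladder:
        if rate <= budget:
            best = rate
    return best

print(pick_rate(2500))   # -> 1500  (2500 * 0.8 = 2000, so 1500 fits)
print(pick_rate(300))    # -> 400   (below the ladder: take the minimum)
```

A device on a weak mobile link lands on a low rung, a connected TV on fiber lands on the top one, which is how a single set of parallel streams serves end points with very different capabilities.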
The other important function of ABRS is to smooth out variations in IP network performance by breaking video streams or files into small chunks that are then reassembled at the end. This reduces the likelihood and impact of congestion because it makes it easier to share the load across multiple links within the IP network, diverting chunks in real time before serious traffic buildup occurs at any one point.
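The chunk-and-reassemble mechanism can be shown in miniature. This is a schematic sketch, not a streaming protocol implementation: a stream is split into small numbered segments that may travel over different links and arrive out of order, then are reassembled by sequence number at the client.

```python
import random

def split(data, chunk_size):
    """Break a stream into (sequence_number, payload) chunks."""
    return [(i, data[pos:pos + chunk_size])
            for i, pos in enumerate(range(0, len(data), chunk_size))]

def reassemble(chunks):
    """Restore the original byte order from the sequence numbers."""
    return b"".join(payload for _, payload in sorted(chunks))

stream = b"0123456789" * 3
chunks = split(stream, 7)
random.shuffle(chunks)            # simulate out-of-order arrival
assert reassemble(chunks) == stream
print(len(chunks))                # -> 5
```

Because each small chunk is independently routable, the network can steer them around a building traffic hot spot without the viewer noticing, which is the load-sharing effect described above.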
Choosing a CDN
The most suitable CDN design depends particularly on the nature of the content, and pay-TV operators or broadcasters should consider the mix between linear, near-live catch-up, and longer-term on-demand. The last option is easiest to deal with and can be served centrally if it is niche content, or cached closer to subscribers if it is popular.
The interesting situation concerns the growing amount of catch-up content, often consumed just a short time after the scheduled broadcast. Until now, cable TV operators in particular, and some IPTV ones, have deployed multicast technology to avoid flooding their whole access infrastructure with content being watched at a given time by just a few people in one or two segments or service areas. But now such operators are facing increasing demand for catch-up content at varying times just after it has been broadcast, so conventional multicast, which really means selective broadcast to nodes where there are people watching downstream, does not work so well.
Now an alternative to multicast called the "caching hierarchy," or sometimes the "content distribution tree," has emerged within the CDN world. Caching hierarchy can work for linear or live content as well. This reverses the usual multicast approach where content is pushed out from the center and pruned on the way so that it only reaches users that have elected to watch it. Instead the first user to request some content that is not broadcast effectively pulls it out from the center. The content is then transmitted to that user, but fills caches on the way to the end device. The effect is like multicasting, but works better for on-demand services because the content can be retained in the caches it has filled for some time after the first user has watched it. The length of this period can be set by the CDN operator on the basis of how likely it is that more people will want to watch it. For instance, in the case of a popular drama series the content might stay in these caches for around a week until the next episode goes out. This approach is gaining favor for CDNs handling a lot of on-demand content.
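The pull-through behavior described above can be sketched as a chain of caches, each with an operator-set retention period. The class names, tiers and timings here are illustrative assumptions: the first request for an item is pulled from the origin and fills every cache on the path, and later requests are served from the nearest filled cache until the retention period expires.

```python
import time

class Origin:
    """Central source of content at the root of the distribution tree."""
    def __init__(self, catalog):
        self.catalog = catalog

    def get(self, content_id, now=None):
        return self.catalog[content_id], "origin"


class Cache:
    """One tier in the caching hierarchy, filled on the way through."""
    def __init__(self, name, parent, ttl_seconds):
        self.name, self.parent, self.ttl = name, parent, ttl_seconds
        self.store = {}               # content_id -> (payload, expiry)

    def get(self, content_id, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(content_id)
        if entry and entry[1] > now:
            return entry[0], self.name            # served from this tier
        payload, source = self.parent.get(content_id, now)
        self.store[content_id] = (payload, now + self.ttl)  # fill en route
        return payload, source


origin = Origin({"drama-ep1": b"episode-bytes"})
regional = Cache("regional", origin, ttl_seconds=7 * 24 * 3600)  # ~a week
edge = Cache("edge", regional, ttl_seconds=7 * 24 * 3600)

print(edge.get("drama-ep1")[1])   # -> origin  (first viewer pulls it down)
print(edge.get("drama-ep1")[1])   # -> edge    (later viewers hit the cache)
```

Setting the retention period to roughly a week matches the drama-series example in the text: the episode stays cached until the next one goes out, so every catch-up viewer after the first is served locally.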
Some service providers object that all this caching associated with CDNs is expensive and offsets the benefits. But there is an alternative approach making much less use of caching promoted by, among a few others, Swedish media transport specialist Net Insight. This company calls its approach service-aware networking, which it claims delivers totally lossless IP routing across a CDN, and therefore avoids or greatly reduces the need for cache storage. In effect, the network itself takes care of any caching needed, through routers specially designed for handling video, which perform traffic shaping to avoid congestion, and forward error correction on a link-by-link basis to minimize packet loss. This solution delivers impressive performance for linear applications, with low latency and packet loss, but does require purchase of routers dedicated to video rather than generic data routers. The likes of Cisco would argue their products are becoming increasingly video-aware themselves, but Net Insight is finding a market for its routers among some large operators such as TeliaSonera in the Nordic countries, Tata in India, and KPN in the Netherlands.
Another Swedish company, EdgeWare, is also promoting video-aware CDN technology, in order to provide CDN operators with service feedback such as the level of viewer engagement and QoS, information that can help their customers, such as pay-TV operators or content owners, monetize their services. Without video awareness, it is often difficult with an OTT service to know what is going on, given that the content is broken into chunks during transmission.
The CDN field is moving fast and is creating a new opportunity associated with the evolution of cloud services in the broadcasting world, providing pay-TV operators the option of shedding responsibility for delivery and focusing purely on the content package and higher-level quality of experience.
Philip Hunter writes the Beyond the Headlines Europe e-newsletter.