There are signs of convergence between mobile video and the common standards already shared by fixed networks. But, for now, operators have to blend an array of technologies to deploy rich media services on any scale. For mobile video, there are also different sections of the delivery chain to consider, with an obvious distinction between the radio access network, the backhaul serving the cell towers and the core fixed-line network behind that. The backhaul itself can be wireless using microwave, but is increasingly served by fiber, given the high bandwidth now needed to carry fast-growing video traffic. For radio access, there will be two options over the coming years, each with different technical requirements: 4G LTE within cellular networks, and Wi-Fi within the home and in public hotspots.
The two main challenges are the same as for fixed-line Over The Top (OTT) video delivery: quality of experience and bandwidth efficiency. These are related to the extent that infinite bandwidth can deliver perfect quality in theory but is not practical or affordable, meaning a compromise has to be reached. It is this factor that has led to use of adaptive bit-rate streaming (ABRS) combined with caching out in the network, to achieve the best quality possible over limited and varying bandwidth. On this count, the issues for mobile video delivery are essentially the same as for OTT as far out as the backhaul network or even the cell tower (or Wi-Fi router), with both converging around ABRS.
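The adaptive part of ABRS can be sketched in a few lines: the player measures recent download throughput and requests the highest-quality rendition that fits, stepping down as bandwidth varies. The bitrate ladder and safety margin below are hypothetical, not drawn from any particular player.

```python
# A minimal sketch of adaptive bit-rate selection: pick the
# highest rendition that fits within measured throughput, with
# a safety margin to absorb bandwidth fluctuation.
# The ladder values are illustrative assumptions.

LADDER_KBPS = [400, 800, 1500, 3000, 6000]  # hypothetical renditions

def pick_rendition(measured_kbps, safety=0.8):
    """Choose the highest bitrate that fits within a safety margin."""
    budget = measured_kbps * safety
    usable = [r for r in LADDER_KBPS if r <= budget]
    # Fall back to the lowest rendition when even that exceeds budget
    return usable[-1] if usable else LADDER_KBPS[0]

print(pick_rendition(2000))  # -> 1500
print(pick_rendition(300))   # -> 400
```

The safety factor is the compromise the article describes: the player deliberately undershoots the measured bandwidth so that quality degrades gracefully rather than stalling.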
The story began with Real Time Streaming Protocol (RTSP), which was widely implemented for streaming on early video-capable mobile devices. Meanwhile, for fixed-line OTT, Adobe’s Flash came to dominate, bringing its own proprietary protocol, Real Time Messaging Protocol (RTMP). Then, this was rudely shoved aside by Apple on the back of the iPhone and iPad boom, with Apple’s proprietary version of HTTP streaming, called HTTP Live Streaming (HLS).
At the least, this established HTTP streaming as the agreed method for video delivery to both fixed and mobile devices, but with several variants. Adobe fell into line by introducing its version, HTTP Dynamic Streaming (HDS), when it brought out Flash 10.1, while Microsoft introduced Smooth Streaming for Silverlight. These are now being folded into the ISO-MPEG standard initiative called MPEG-DASH, which is being adopted by both fixed and mobile operators. (See Figure 1.)
“MPEG-DASH is where we will likely end up for OTT or mobile delivery, eventually,” said David Springall, chief technical officer at mobile video platform vendor Yospace, whose cloud-based service is being used in the U.S. by Hearst-based TV stations for delivering short-form content.
However, MPEG-DASH was only ratified early this year. In the meantime, service providers have to work with existing streaming protocols while they consider how to migrate to the common standard. Yospace is among the vendors providing help, releasing a Software Development Kit (SDK) that allows app developers to enable smartphones, or PCs using Flash players, to receive HLS content. Yospace will follow with a version that also supports MPEG-DASH, which will be significant because it will help service providers migrate to the common converged platform. This will involve replacing Flash with HTML5, which has become the commonly agreed mark-up language for formatting and presenting multimedia content across multiple web-connected platforms. HTML5 was designed to run efficiently on low-powered, battery-driven devices, and this has led to rapid adoption on mobile handsets. One survey from Strategy Analytics predicts there will be 1 billion HTML5-compatible handsets by 2013, which does not leave much room for Flash or anything else.
Edge caching and multicast
One important aspect of streaming protocols like MPEG-DASH is that they change the fundamental nature of video delivery, and, to some extent, end the old debate over the relative merits of unicast, multicast and broadcast packet delivery — at least as far as the mobile or Wi-Fi cell. At first, all OTT and mobile video was unicast as it had to be, since it required a one-to-one path between a content server and end-user device to deliver the audio and video as requested on demand. But, it quickly became clear this was not scalable to large numbers of users all consuming popular content, whether via live streaming or on demand, and especially not at the higher bit rates required for broadcast-quality HD. This is where streaming came in, breaking video down into chunks and distributing these out to edge caches close to the points of consumption to avoid clogging core bandwidth with multiple unicast streams or downloads.
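The chunking that HTTP streaming relies on can be illustrated with a toy playlist generator in the style of HLS. The segment naming, six-second duration and minimal tag subset are illustrative assumptions rather than any packager's actual output.

```python
# Toy illustration of chunked HTTP streaming: the stream is cut
# into short fixed-duration segments, and a text manifest lists
# them for the player to fetch one at a time over plain HTTP.
# Segment names and durations are hypothetical.

SEGMENT_SECONDS = 6

def manifest(total_seconds, bitrate_kbps):
    """Return a toy HLS-style playlist for one rendition."""
    count = -(-total_seconds // SEGMENT_SECONDS)  # ceiling division
    lines = ["#EXTM3U", f"#EXT-X-TARGETDURATION:{SEGMENT_SECONDS}"]
    for i in range(count):
        lines.append(f"#EXTINF:{SEGMENT_SECONDS},")
        lines.append(f"seg_{bitrate_kbps}k_{i:05d}.ts")
    return "\n".join(lines)

print(manifest(20, 1500))
```

Because every segment is just a small file fetched over HTTP, it can sit in any ordinary web cache near the viewer, which is what makes the edge-caching model described above possible.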
An alternative approach for linear content consumed simultaneously by many users is IP multicast implemented within the network. It transmits video just once over each network hop, and only when at least one user downstream is consuming the content on a TV or other end device at that moment. IP multicast has been deployed by some IPTV operators but is of no use for on-demand content, which has to be unicast because people consume it at different times. This makes IP multicast useless for mobile video applications, where most content is consumed on demand.
However, adaptive streaming with edge caching can save dramatically on core bandwidth for both linear and on-demand content. In either case, video can be transmitted just once to a network of edge caches. This way, when users request it, the core network does not have to be touched. Also, latency is reduced because the content has a shorter distance to travel, traversing fewer router hops. This, as Springall pointed out, means that MPEG-DASH avoids the need for multicast because it provides the same function of avoiding unnecessary bandwidth consumption, but at a higher application level rather than at the IP network level.
“Protocols like MPEG-DASH change the meaning of ‘unicast’ because with edge caches, it becomes a layer 7 multicast, which is much easier to implement and does not need big changes in the layer 3 network,” Springall said.
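A small sketch makes Springall's point concrete: with an edge cache, only the first request for a segment crosses the core network, and every subsequent viewer is served locally. The class below is a deliberately simplified model, not a real cache implementation.

```python
# Why edge caching behaves like "layer 7 multicast": the first
# request for a segment pulls it over the core network once;
# every later viewer of the same segment is served from the edge.
# Simplified model for illustration only.

class EdgeCache:
    def __init__(self):
        self.store = {}
        self.origin_fetches = 0

    def get(self, segment_url):
        if segment_url not in self.store:
            self.origin_fetches += 1              # core network touched
            self.store[segment_url] = f"<bytes of {segment_url}>"
        return self.store[segment_url]            # edge hit thereafter

cache = EdgeCache()
for viewer in range(1000):          # 1,000 viewers request the same segment
    cache.get("seg_00042.ts")
print(cache.origin_fetches)  # -> 1
```

The saving happens at the application layer (HTTP), which is why no change to layer 3 routing is needed.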
But, there is no way of avoiding unicast delivery of on-demand video within a mobile cell or Wi-Fi network, with the potential to use up the available spectrum and ruin performance. The only alternative is mobile broadcast, but this only applies to popular linear channels watched by a number of people at the same time within a given cell. So far, the history of mobile broadcast is littered with the corpses of failed technologies such as DVB-H and Qualcomm’s MediaFLO in the U.S. But, this was largely because sufficient bandwidth was not available, and devices were incapable of displaying at suitable quality. The advent of 4G LTE services, combined with the arrival of larger-screened smartphones and tablets, has renewed interest in mobile broadcast as a way of avoiding a spectrum crunch. But, the area is still contentious, with a dividing line between traditional broadcasters (who argue that unicast delivery of linear content within the radio access network is a ludicrous waste of spectrum) and the mobile operator camp (which contends that there is limited demand for broadcast TV on mobile handsets).
Broadcast to mobile
One U.S. mobile operator, MetroPCS, which has almost 10 million subscribers, has come down in the broadcast camp, after announcing this past January that it would sell an Android-based Samsung smartphone later in 2012. The device will be equipped with an ATSC chip, the hardware needed to receive broadcast mobile television signals in the U.S. The phone will be preloaded with the Dyle mobile television app from Mobile Content Venture (MCV) — a group of 12 broadcasters that came together in 2011 to provide a nationwide mobile TV service. But, Springall does not believe such services will gain much traction, not least because users are unlikely to watch much long-form content on mobiles.
“On-demand works well for mobile, content snacking and so on, while live or ‘event’ television is more social,” Springall said. “The big game is better enjoyed on a big screen with your friends.”
Springall’s view is that sending chunks of video via some form of HTTP streaming, as in MPEG-DASH, will work well enough for delivering the limited amount of broadcast content that will be required.
That view, however, is not shared by everybody in the mobile video world, given the proliferation of tablets that are better suited to live TV with their larger screens than mobile phones, as well as growing video consumption on laptop and netbook computers.
But, nearly all video consumed on these devices is currently delivered over Wi-Fi, which is installed in most homes and generally offers more bandwidth and better quality than 3G, and often than 4G LTE. In both cases, however, quality depends on the number of users sharing a given cell, and this comes back to the broadcast argument. Within a home Wi-Fi network, there is usually enough capacity to support one or two users consuming video on laptops or tablets. Within a public hot spot, however — at an airport, for example — there may be a number of people all watching the same popular channel. This can bring the network to its knees, resulting in slow response for other users, as well as diminished video quality for those watching TV.
An experiment by the University of Wisconsin-Madison found that as few as three users attempting to watch an HD video stream over Wi-Fi caused a sharp drop in performance and quality. This is partly because the video packets are sent separately in unicast mode to each of the three users, even though the content is the same. But, the study identified another factor: the Wi-Fi network places equal priority on all packets without knowing how important they are to the application. This is inefficient because, for video, some packets can be dropped with much less effect on quality than others. This led UW-Madison to develop software called Medusa, which implements video broadcast over Wi-Fi. It incorporates priority mechanisms that put greater effort into delivering important packets, such as those carrying bits of the I-frames within compressed video, which contain information needed to display subsequent frames. Medusa has been demonstrated delivering HD video consistently at 20Mb/s over Wi-Fi — more than enough for full broadcast quality.
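The priority idea behind such schemes can be sketched as a load-shedding function: under congestion, packets carrying B- and P-frames are dropped before packets carrying I-frames that later frames depend on. The frame labels and drop policy below are illustrative assumptions, not Medusa's actual algorithm.

```python
# Illustration of priority-aware packet shedding for video:
# when the link is congested, drop the least important packets
# (B-frames, then P-frames) before touching I-frame packets,
# since other frames are decoded relative to the I-frames.
# The policy here is an illustrative assumption.

PRIORITY = {"I": 0, "P": 1, "B": 2}  # lower number = more important

def shed_load(packets, capacity):
    """Keep at most `capacity` packets, shedding least important first."""
    ranked = sorted(packets, key=lambda p: PRIORITY[p["frame"]])
    return ranked[:capacity]

# A hypothetical group of pictures: one I-frame, two P-frames, six B-frames
packets = [{"seq": i, "frame": f} for i, f in enumerate("IBBPBBPBB")]
kept = shed_load(packets, 3)
print([p["frame"] for p in kept])  # -> ['I', 'P', 'P']
```

Equal-priority Wi-Fi delivery, by contrast, is as likely to lose the I-frame packet as any other, which is why the study found quality collapsing so quickly.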
It is quite likely, then, that broadcast techniques such as Medusa will be deployed within Wi-Fi hot spots that entertain many users, even if not for 3G or 4G cellular services. The ability to cut in and start broadcasting as soon as two or more people are consuming the same video would have a large impact on quality of service, not just for the people watching TV but for other users on the network as well.
It may well be in any case that more video is consumed over public Wi-Fi than 4G, given the widespread availability of hot spots in places where people spend their time while on the move — transportation terminals, restaurants, etc. In that case, it is quite likely that mobile broadcast will at last become an important medium, even if not over cellular networks as was originally intended.
—Philip Hunter writes the “Beyond the Headlines Europe” e-newsletter.