A media container is a “wrapper” that holds video, audio and data elements, and can function either as a file entity or as an encapsulation method for a live stream. Because container formats are now starting to appear in broadcast situations, both OTA and online, it is useful to consider the various ways compressed video (and audio) are carried within them, both over RF transmission and over the Internet. Earlier this year, we looked at how several container formats support different compression formats. This month, we'll look at a related development that will impact content distribution: HTML5.
Web browsing and broadcasting crossing over
The ubiquitous Web browser is a tool that users have come to rely on for accessing the Internet. Broadcasters already make use of this for their online presence, by authoring and repurposing content specifically for Internet consumption. But browsing capability will come to OTA broadcast as well, once features like non-real-time (NRT) content distribution are implemented. For example, by using the ATSC NRT specification, now under development, television receivers can be built that support different compression formats for cached video, including AVC video and MP3 audio, and different container file formats, such as the MP4 Multimedia Container Format. It is envisioned that these receivers will act as integrated live-and-cached content managers, which will invariably involve support for multiple containers and codecs. For this reason, we need to understand how browsers and containers — two seemingly different technologies — are related in the way they handle content.
Several container formats currently provide encapsulation for video and audio, including MPEG Transport Stream, Microsoft Advanced Systems Format (ASF), Audio Video Interleave (AVI) and Apple QuickTime. While not a container format per se, the new HTML5 language for browsers nonetheless has the capability of “encapsulating” video and audio for presentation to a user. Older versions of HTML had no convention for playing video and audio on a webpage; most video and audio has been played through plug-ins that integrate the media with the browser. However, not all browsers support the same plug-ins. HTML5 changes that by specifying a standard way to include video and audio, using dedicated video and audio “elements.”
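As a simple illustration, the HTML5 video element embeds a clip directly in a page, with no plug-in required. This is a minimal sketch; the file name and dimensions are illustrative:

```html
<!-- A basic HTML5 video element with built-in playback controls.
     The source file name and frame size here are hypothetical. -->
<video src="newscast.mp4" width="640" height="360" controls>
  Text shown by browsers that do not support the video element.
</video>
```

Browsers that understand HTML5 render their own playback controls for the clip; older browsers fall back to the text inside the element.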
The HTML5 Working Group includes AOL, Apple, Google, IBM, Microsoft, Mozilla, Nokia, Opera and many other vendors. The working group has garnered support for including multiple video codecs (and container formats) within the specification, such as Ogg Theora, Google's VP8 and H.264. However, there is currently no default video codec defined for HTML5. Ideally, the working group believes, a default video format should offer good compression, good image quality and a low processor load when decoding; it should be royalty-free as well.
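Because no default codec is mandated, a content provider can offer the same clip in several codecs and let each browser play the first source it supports. A sketch of that fallback arrangement, with illustrative file names and MIME/codec strings:

```html
<!-- The browser walks the source list in order and plays the first
     format it can decode. File names and codec strings are hypothetical. -->
<video width="640" height="360" controls>
  <source src="clip.mp4"  type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
  <source src="clip.webm" type='video/webm; codecs="vp8, vorbis"'>
  <source src="clip.ogv"  type='video/ogg; codecs="theora, vorbis"'>
  Text shown by browsers that do not support the video element.
</video>
```

The trade-off is that the provider must encode and store each clip several times to cover browsers with different codec support.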
Multiple codecs present complex choices
HTML5 thus presents a potential solution for manufacturers and content providers that want to avoid licensed formats such as Adobe Flash (FLV), preferring instead the partially license-free H.264 (i.e., for Internet Broadcast AVC Video) and the fully license-free VP8, Theora and other codecs. Flash, which has become popular on the Internet, most often contains video encoded with H.264, Sorenson Spark or On2's VP6 compression. The licensing agent MPEG-LA does not charge royalties for H.264 video delivered over the Internet free of charge, but companies that develop products and services that encode and decode H.264 video do pay royalties. Adobe nonetheless provides a free license to the Flash Player decoder.
Years ago, content developers predicted the crossover of television and Internet. With standard codecs, container formats and specifications like HTML5, integration of the two media will soon be common.
Aldo Cugnini is a consultant in the digital television industry.
Send questions and comments to: firstname.lastname@example.org