Graphics and captioning

In the analog television world, graphics were generated at the production facility and inserted into the analog signal as part of the video. Captions were later added to the vertical blanking interval, and this evolved into the digital captions available on digital transmission systems today. But the graphics portion has remained embedded in the video itself, until now. The evolution of mobile TV systems has brought about the development of several specifications aimed at bringing a rich multimedia experience to handheld devices. Both ATSC-M/H and DVB-H have incorporated specifications originally developed for the cellular telephone environment, enabling the compatible presentation of graphical elements across a wide range of devices.

Formed in June 2002, the Open Mobile Alliance (OMA) is a specification-writing organization focused on enabling interoperable mobile services across countries, operators and mobile terminals. The membership of the OMA includes nearly 200 companies representing the world's leading mobile operators, device and network suppliers, information technology companies, and content providers. Two key OMA specifications define a common toolkit for providing graphics to mobile devices: OMA Mobile Broadcast Services Enabler Suite (OMA-BCAST) provides an electronic service guide, and OMA Rich Media Environment (OMA-RME) provides a rich graphical multimedia experience. It should be noted that while the OMA is not a standards organization per se, its specifications are developed through a consensus-based process carried out by its member companies. Thus, the specifications essentially carry the weight of standards, with careful attention paid to the evolution and backward compatibility of the technical requirements. Both ATSC-M/H and DVB-H specify a service guide based on the OMA-BCAST Service Guide and graphics based on OMA-RME.

XML provides a standard way to convey data

As with MPEG transport, descriptors are used to define the various elements of both OMA-BCAST and OMA-RME. However, the various mobile broadcast schemes have now moved toward an IP-based transport mechanism. As such, the OMA-based data descriptors are conveyed using XML, which provides a standard (and IP-friendly) way to define a data set. (See Figure 1.)

Being an open standard, XML has gained popularity in the Internet world; one well-known data language based on XML is RSS, used to automatically feed Web pages and content to a browser. One advantage of using XML is that an XML document is written in plain text, i.e., the data are inherently readable by humans. (See Figure 2.) Also, because the language is highly structured, it is straightforward to take a data set from multiple file formats, e.g., a TV broadcast schedule, and convert it into an XML file that can be transmitted to multiple entities, including a receiving device. In addition, the textual nature of XML makes it highly amenable to compression for storage and transmission. The receiver will not display this data exactly as shown, of course, but will parse the data and use it to generate a user-friendly electronic service guide.
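As a sketch of that conversion step, the short ECMAScript (JavaScript) fragment below serializes a simple broadcast-schedule record into an XML string. The record fields and element names here are illustrative only, not taken from any broadcast schema.

```javascript
// Hypothetical schedule record, e.g. as exported from a traffic system.
const program = {
  title: "Evening News",
  start: "2010-06-01T18:00:00Z",
  duration: "PT30M"
};

// Escape the characters that XML reserves in text content.
function escapeXml(s) {
  return s.replace(/[<>&'"]/g, c => ({
    "<": "&lt;", ">": "&gt;", "&": "&amp;", "'": "&apos;", '"': "&quot;"
  }[c]));
}

// Serialize the record as a plain-text XML fragment.
function toXml(p) {
  return [
    "<programme>",
    "  <title>" + escapeXml(p.title) + "</title>",
    "  <start>" + escapeXml(p.start) + "</start>",
    "  <duration>" + escapeXml(p.duration) + "</duration>",
    "</programme>"
  ].join("\n");
}

console.log(toXml(program));
```

The result is human-readable text that any receiver with an XML parser can interpret, regardless of the file format the schedule originally lived in.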

A number of data “fragments” are defined by OMA-BCAST using XML schema. The service, schedule and content descriptions all apply to a service guide, while the access, session and provisioning fragments apply to the way content is controlled, such as free or pay viewing, limited viewing times, and purchase of or subscription to content and services.
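To give a feel for the shape of such a fragment, here is a simplified, hypothetical service description; the element names and structure are illustrative only and do not reproduce the actual OMA-BCAST schema, which uses its own namespaces and element definitions.

```xml
<!-- Hypothetical, simplified service fragment; the real OMA-BCAST
     schema uses different namespaces and element names. -->
<Service id="urn:example:service:1" version="1">
  <Name>Channel One Mobile</Name>
  <Description>24-hour news and weather</Description>
  <ServiceType>TV</ServiceType>
</Service>
```

A receiver would combine fragments like this with the corresponding schedule and content fragments to assemble its on-screen guide.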

Electronic service guides can be customized

The various standards describe how to send a service guide over the broadcast transmission stream. All or part of the service guide can also be sent on an out-of-band interaction channel, and there are elements in OMA-BCAST to indicate to the receiver which portions of the guide are available through an alternative access URL. This presents some interesting options to service providers, in that custom service guides can be delivered to individual receivers.

OMA-RME provides graphical functionality by integrating a number of standardized elements, including the Scalable Vector Graphics (SVG) language for graphical object creation and ECMAScript for script support. RME content consists of scenes of objects such as video, images, animation, text and audio that are composed together. By defining each object separately, the presentation can follow scripts that control how each object appears, disappears and animates. In addition to providing creative flexibility, this reduces both the transmission bandwidth and the device processing power required, as updates are needed only for new objects or animations.

SVG, an open standard, is a vector-based alternative to raster formats such as JPEG or GIF, and it provides a way to generate and render both static and dynamic (animated) graphical elements on display devices; most current Web browsers support SVG directly. As the name suggests, objects can be scaled (resized) easily to different presentation resolutions without aliasing or other artifacts. This makes SVG well suited to an environment with a wide range of display resolutions.
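As an illustration, the short SVG document below draws a circle and a text label. Because the geometry is defined against an abstract viewBox rather than fixed pixels, a renderer can map the same markup onto any display resolution without aliasing; the content itself is arbitrary.

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100">
  <!-- Coordinates are in abstract user units; the renderer scales
       the 200x100 viewBox to whatever pixel area is available. -->
  <circle cx="50" cy="50" r="40" fill="navy"/>
  <text x="110" y="55" font-size="20">Mobile TV</text>
</svg>
```

The same file renders crisply on a small handheld screen or a large monitor, which is precisely the property that makes SVG attractive for mobile broadcast graphics.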

Like MPEG, SVG is defined with multiple profiles; in fact, MPEG-4 Part 20 (Lightweight Application Scene Representation or LASeR) is based on a profile of SVG 1.1 called SVG Tiny (SVGT). SVGT was defined with mobile phones as the target application, so it is limited to the horsepower and graphical constraints of small battery-powered devices. A higher-performance profile, SVG Basic, extends capabilities to devices such as PDAs.

ECMAScript is the standardized version of JavaScript, the scripting language originally developed by Netscape. It gives Web applications a compact environment in which they can run computationally intensive programs, scripts and the like. As an example, a graphical applet can be designed using SVG and ECMAScript to render a functioning calculator on a mobile video device. Similarly, the formatting, scripting and timing of scene descriptions, content rendering and user interface graphical elements can all be defined in a creative and distinctive manner.
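A sketch of the calculator idea in plain ECMAScript: the function below evaluates the sequence of key presses that a calculator graphic might generate (digits, the four basic operators and an equals key). In a real RME application, a script like this would be wired to SVG button objects; that wiring is omitted here, and the function itself is only an illustration.

```javascript
// Evaluate a sequence of calculator key presses, e.g. ["7","+","5","="].
// Supports multi-digit numbers and the operators + - * /.
function calculate(keys) {
  let acc = 0;   // running total
  let op = "+";  // pending operator
  let num = "";  // digits of the operand being entered
  const apply = () => {
    const n = Number(num);
    if (op === "+") acc += n;
    else if (op === "-") acc -= n;
    else if (op === "*") acc *= n;
    else if (op === "/") acc /= n;
    num = "";
  };
  for (const k of keys) {
    if (k >= "0" && k <= "9") {
      num += k;                  // accumulate digits
    } else {
      apply();                   // fold the operand into the total
      op = (k === "=") ? op : k; // "=" just forces the final apply
    }
  }
  return acc;
}

console.log(calculate(["7", "+", "5", "="])); // 12
```

Logic of this kind runs entirely on the handset, so the broadcast needs to carry only the scene description and the script, not a rendered image of every screen state.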


There are multiple ways to distribute captions

Captions transmitted in ATSC A/153, as described in CEA-708, can be carried using descriptors defined in ATSC A/65, constrained as per ATSC A/72. The captions are listed, together with video, audio and other services, in the Service Map Table, or SMT-MH. Captions could also be carried by other means. The DVB-H IP Datacasting Electronic Service Guide (ESG) Implementation Guideline A112 describes how captions can be carried within the ESG of that type of transmission. Captions could be carried similarly in the ESG or other auxiliary elements of an ATSC-M/H transmission by using the framework of OMA-RME.

These technologies continue to remind us that the processing power of handheld devices has progressed so far that it is now practical to depend on the devices to render some pretty complex graphics — something that formerly could only be done at the production or transmission side. As broadcasters ramp up mobile services, it is increasingly important to keep abreast of the state of the art in consumer devices to maximize the opportunities for content development and distribution.

Aldo Cugnini is a consultant in the digital television industry.

Send questions and comments to: aldo.cugnini@penton.com

DIGITAL HANDBOOK

[Figure: sample electronic service guide entry, showing a program synopsis, year (2010), genres (Animation, Comedy) and rating (G).]