Accessible TV

Closed-caption subtitling and audio description are two increasingly popular service additions to modern TV channel output. Providing access for people with hearing and sight impairments is not the only driving factor. Programs subtitled with multiple languages enable TV channels to increase their audience figures through wider distribution, which is especially useful in Eastern Europe, where many languages are spoken.

The cost of adding multiple languages to the broadcast signal through closed captioning (using line 21, teletext or DVB) is relatively low, and these additional services can be highly automated.

The past

Historically, there was an occasional requirement to add closed-caption subtitles to selected programs, typically high-profile dramas and documentaries. Consequently, subtitle management developed as a manual process. A videotape recording would be dubbed to produce a VHS copy, and this would be delivered to the subtitle authoring department for file preparation.

The completed subtitle file, containing subtitle text and associated time code reference, would be delivered back to the transmission area where an engineer would load the file into a subtitle server. This server would be time code-locked to the transmission system, be it a tape-based cart machine or a playout server.
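At its core, such a file is simple: each cue pairs a time span with the text to display. As an illustrative sketch (using the widely known SRT layout as a stand-in for the proprietary subtitle formats of the era), a cue can be modeled and parsed like this:

```python
import re
from dataclasses import dataclass

@dataclass
class Cue:
    index: int
    start: str   # timecode, e.g. "00:00:01,000"
    end: str
    text: str

# Matches one SRT-style cue: an index line, a "start --> end" line,
# then one or more lines of subtitle text.
CUE_RE = re.compile(
    r"(\d+)\s*\n(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\s*\n(.+)",
    re.DOTALL,
)

def parse_cue(block: str) -> Cue:
    m = CUE_RE.match(block.strip())
    if m is None:
        raise ValueError("not a valid cue")
    idx, start, end, text = m.groups()
    return Cue(int(idx), start, end, text.strip())

sample = """1
00:00:01,000 --> 00:00:03,500
Hello, world."""

cue = parse_cue(sample)
print(cue.start, "->", cue.end, ":", cue.text)
```

The subtitle server's job is then to watch the transmission timecode and raise or clear each cue as its start and end times pass.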

The system would cue the subtitle file matching the video file name on the transmission system, and the two files would play out in sync. A hardware insertion device would insert the subtitles in real time, typically onto line 335 using the World System Teletext (WST).

To ensure redundancy, there would typically be two hot-swappable inserters, possibly with an automated failover capability. For occasional program subtitling, this system operated successfully, although not entirely without problems.

Things have changed

UK government legislation has incrementally raised the level of subtitling required from terrestrial channels. The BBC has committed itself to providing subtitles on all of its output by 2008. This includes subtitling live programming, such as news and sport, which brings with it a whole set of other issues beyond the scope of this article. When a high percentage of programs must carry closed captioning, the manual process becomes inefficient and prone to human error.

Add to this the desire to offer subtitling in a number of languages, and workflow management demands an automated system with sophisticated monitoring to ensure reliable service provision. Add audio description as well, and the complexity of access services provision starts to become clear.

There are a number of issues regarding the transmission of multiple-language subtitle files within a single program. A WST system can carry as many as eight separate subtitle files, supporting eight different languages, but time delays and page refresh rates must be considered, so there is no hard-and-fast rule.

The requirements for managing subtitle and audio description files can be designed into file-based transmission systems to take advantage of the inherent workflow enhancements. To ensure maximum efficiency, this should be designed as part of the complete system and not simply treated as an afterthought.

With the shift to file-based technology and the growing access service requirement, there is a need for an access service management system. It should work either as a standalone application or as a specialized task-management module, designed around file-based media, within a modern media management system.

File-based embedding

Rather than embedding at the point of transmission, subtitles can be added in the file domain. This redefines the offline closed-caption transmission workflow, eliminating the need for downstream hardware inserters and their associated redundancy schemes.

Subtitle embedding writes the subtitle data directly into the media file. (The precise location varies depending on the type of file and playout server in use.) The numerous advantages of this software-based process include:

  • Embedding takes place prior to transmission, so the final playout file is complete and no downstream processing is required.
  • The embedded subtitle file can be viewed for quality assurance before transmission. This allows the user to play back the broadcast file, decoding the closed captions into visible captions for review. This is particularly useful when an existing subtitle file will be used with a program that has been through minor editorial changes, e.g., for prewatershed transmission.
  • The closed captions become an integral part of the program file. When the material is archived, it contains the embedded subtitles, so the process doesn't need to be repeated when the material is recalled for subsequent transmission.

In addition to server-specific capabilities, software can also embed subtitles into generic IMX-encoded media, which allows the technique to operate with any playout server capable of transmitting IMX files. IMX files carry VBI data within the picture as a waveform in the nonvisible lines, so the IMX variant writes the subtitle data as a VBI waveform directly into the MPEG picture. Only the VBI lines are touched, so there is no picture degradation during the process.
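The principle, writing only into the non-visible lines and leaving the active picture untouched, can be shown with a toy model. This is not IMX-specific; the frame is simply a list of scan lines, and the line numbers are assumptions chosen for illustration:

```python
# Toy model of VBI embedding: a frame is a list of scan lines, and the
# subtitle waveform is written only into designated VBI lines, leaving
# the active picture untouched. The line numbering is illustrative.
ACTIVE_START = 23   # assumed first visible line in this toy model

def embed_vbi(frame, vbi_payloads):
    """Return a new frame with only the given VBI lines overwritten."""
    out = list(frame)
    for line_no, waveform in vbi_payloads.items():
        if line_no >= ACTIVE_START:
            raise ValueError("refusing to write into the active picture")
        out[line_no] = waveform
    return out

frame = ["picture"] * 30
embedded = embed_vbi(frame, {7: "teletext-waveform", 8: "teletext-waveform"})

# Every active picture line is unchanged:
print(all(embedded[i] == "picture" for i in range(ACTIVE_START, 30)))
```

The guard against writing into the active region is the whole point: the process cannot degrade the picture because it never touches it.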

By comparison, server-specific versions add subtitle data to the broadcast file set, and the server is responsible for creating a VBI waveform from that data on playout. Additional functionality includes the ability to encode the V-Chip information used in North America to mark a program with appropriate content labelling.

Software products have been developed to embed subtitles into transport stream encoded material in the form of either teletext or DVB subtitles. This is aimed at broadcasters who wish to include access services in material repurposed for use in areas such as VOD or IPTV.

Audio description

The required format of the audio description file will vary depending on the transmission system employed. The file can be supplied as a separate commentary WAV file with associated control track (that includes pan and fade information) or as a premixed audio file containing normal program sound with the audio description commentary added.

Software utilities can transcode description files into the various forms required, and therefore file conversion is simply another process that can be designed into the transmission system workflow. Audio description is either incorporated into the transmission media as a separate audio file that end users control via their set-top boxes, or it is transmitted as another complete audio track (premixed) that users select, as they would with an alternative language audio track.
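The premixed variant can be sketched as a simple sample-by-sample mix, with the control track reduced to a per-sample commentary level that also ducks the program sound. The function names, the sample representation and the ducking scheme here are all illustrative assumptions, not any particular product's algorithm:

```python
def premix(program, commentary, fade):
    """Mix audio description commentary over program audio.

    program, commentary: sample lists in the range -1.0..1.0
    fade: per-sample commentary level 0.0..1.0, taken from the
          control track; the program sound is ducked in proportion.
    """
    assert len(program) == len(commentary) == len(fade)
    mixed = []
    for p, c, f in zip(program, commentary, fade):
        # Duck the program by up to half its level while commentary plays.
        mixed.append(p * (1.0 - 0.5 * f) + c * f)
    return mixed

program = [0.8, 0.8, 0.8, 0.8]
commentary = [0.0, 0.4, 0.4, 0.0]
fade = [0.0, 1.0, 1.0, 0.0]

mixed = premix(program, commentary, fade)
print(mixed)
```

A real control track would also carry pan information and smooth fade ramps rather than the step values used here.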

Workflow considerations

In the example workflow in Figure 1, the subtitle file requirement list is generated directly from the original program schedule. The delivery of files to fulfill this requirement is monitored from either an in-house department or an external specialist supplier.

Automated QC is applied to the delivered files to minimize possible errors, such as overlapping time codes or incorrectly labeled language files. The latter is especially important when used in a system with multiple-language access service files, as the final transmission monitoring staff may not be able to interpret the many languages that need to be broadcast simultaneously. In other words, it may be evident that a subtitle file exists at the appropriate position in the broadcast package, but is it the correct language?
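The overlapping-timecode check in particular is straightforward to automate. A minimal sketch, treating each cue as a (start, end) pair in milliseconds and assuming the list is sorted by start time:

```python
def find_overlaps(cues):
    """Return pairs of adjacent cues whose time ranges overlap.

    cues: list of (start, end) times, assumed sorted by start time.
    """
    overlaps = []
    for a, b in zip(cues, cues[1:]):
        if b[0] < a[1]:   # next cue starts before this one ends
            overlaps.append((a, b))
    return overlaps

cues = [(0, 2000), (1500, 3000), (3200, 4000)]  # times in milliseconds
print(find_overlaps(cues))  # the first two cues collide
```

Language verification is harder to automate fully, but even a simple character-set or dictionary check against the declared language label would catch a Polish file mislabeled as Hungarian.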

This figure also includes monitoring of file delivery status by comparison to the transmission requirement derived from the scheduling system. This ensures that all missing files are automatically tracked and the people responsible for operation are informed well in advance. This includes in-house transmission staff, who are warned of the nonavailability of the file, as well as the author of the access service files. Both groups can be automatically e-mailed to inform them of missing files or file errors.
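At its simplest, the missing-file check is a comparison between what the schedule requires and what has been delivered. The file-naming convention and the notification step below are illustrative assumptions:

```python
def missing_files(required, delivered):
    """Return required access-service files not yet delivered, in schedule order."""
    delivered = set(delivered)
    return [f for f in required if f not in delivered]

# Hypothetical file names: programme ID plus language code.
required = ["prog001_eng.stl", "prog001_pol.stl", "prog002_eng.stl"]
delivered = ["prog001_eng.stl", "prog002_eng.stl"]

for f in missing_files(required, delivered):
    # In a real system this would trigger the e-mail alerts described
    # above, to both transmission staff and the subtitle author.
    print("MISSING:", f)
```

Run periodically against the schedule horizon, this is enough to give both groups the advance warning the article describes.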

When the appropriate access service files are available to the transmission system, they can be embedded into the broadcast media while it is held on nearline storage, before the final wrapped file is moved to the transmission server under the control of the traditional transmission automation.

The future

My humble attempts to predict the future developments in this area of technology err on the side of caution. However, I believe it is very safe to predict the following:

  • Legislation for broadcasters to include access services to some degree will expand internationally.
  • The requirement for carrying access services on a higher percentage of programs will continue to increase.
  • Broadcasters will continue to want to broaden the market appeal for their output with channels carrying closed captioning in multiple languages.
  • Broadcast transmission systems will become increasingly file-based with conventional videotape being used only for final archive storage.
  • Increased reliability of transmission is a goal for all providers, and service level agreements will become increasingly demanding.
  • Media management systems will increase in complexity and coverage of transmission systems.
  • High-level broadcast monitoring systems will increasingly be used to maximize reliability, warn of impending problems and identify exact points in the workflow where failures occurred. These systems will demand sophisticated two-way communication with task management systems for access services.

All of these developments will result in more of the transmission workflow being automated and monitored. Consequently, solutions to manage and monitor access services will become increasingly important to broadcasters and suppliers of video content.

Peter Blatchford is sales and marketing director at Starfish Technologies.