Recently, a lot of time and energy has been invested in squeezing the complexity out of the television playout chain. A key initiative has been the ongoing development of integrated playout systems, also known as IT-based playout or channel-in-a-box systems, which have now advanced to the point where they can handle virtually the entire playout process. Even complex television playout, with rich graphics, multilingual audio and a combination of live and pre-prepared content, can be readily delivered by these systems, as they now offer ever greater functional integration spanning automation, switching, signal processing and graphics.
With this substantial streamlining of the “back end” driven by integrated playout systems, the focus of engineers is starting to shift to upstream processes, where there are now greater productivity gains to be realized. Indeed, the “front end” processes of content reception and preparation now represent some of the biggest challenges facing broadcasters, and they are consuming an increasing proportion of the resources in a typical facility. (See Figure 1.)
Two key factors have driven up the complexity, time and cost involved in content preparation: the rapid growth in the number of file formats handled at a typical playout facility, and the near-exponential growth in nonlinear content delivery. Both challenges call for greater simplification and automation, and fortunately there is a new generation of tools and processes that can address this situation and make these processes more manageable and scalable.
Growth in file formats
The growth in file-based content reception has been swift, and now more than 90 percent of incoming content arrives as files at a typical multichannel facility, with more than 50 different formats to manage, spanning multiple codec and file wrapper formats. All these files need to be ingested, checked and normalized to match the house style, and it's a big job to ensure there is no missing or misplaced metadata.
Traditionally, much of the signal processing in the playout chain has been performed in real time during live playout, using signal processing modules to prevent typical problems, such as aspect ratio errors, excessive loudness and audio track issues. However, this approach involves tangible risk because the process is live, and some problems, such as loudness, can be challenging to correct using real-time processing alone. Real-time signal processing is also difficult to monitor and scale effectively because of the complexity and live nature of the processing.
This has led to a requirement to simplify and remove risk from the process by moving content normalization to the front end using off-line, file-based processing and monitoring for content preparation. With tight integration to the playout automation system, this approach to normalization can simplify workflows significantly and reduce the manning requirements for QC, while also improving the QoS through fewer errors.
A key factor in achieving effective automated, file-based processing and workflows is adapting the automated normalization process to suit the origin of the content. For instance, with known content, such as internally produced or network content that has been standardized, there is often no need for manual evaluation of the content ahead of file-based processing. In this case, new files can be sent straight to the processing grid for automated review and rules-based conversion to address any errors, such as excessive loudness. (See Figure 2.)
In contrast, wild files from file delivery services, which typically arrive in an unknown format, must be quickly and manually assessed to determine the requirement for optimal file-based normalization using a rapid file assessment tool. This involves quickly reviewing key parameters like the format, audio tracks and loudness to determine the processing required to bring the file into house format, and then automatically transferring this requirement to the processing grid.
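The two routes described above, fully automated handling of known content and rapid assessment of wild files, can be sketched as a simple triage-and-rules process. The origins, LKFS target, tolerance and job names below are illustrative assumptions, not part of any specific product:

```python
from dataclasses import dataclass

# Hypothetical origins a facility might trust for fully automated processing.
TRUSTED_ORIGINS = {"internal", "network"}

@dataclass
class IncomingFile:
    name: str
    origin: str           # e.g. "internal", "network", "file_delivery"
    loudness_lkfs: float  # integrated loudness measured on ingest
    wrapper: str          # e.g. "MXF", "MOV"

def triage(f: IncomingFile) -> str:
    """Send known content straight to the processing grid;
    queue wild files for rapid manual assessment."""
    if f.origin in TRUSTED_ORIGINS:
        return "processing_grid"
    return "rapid_assessment"

def normalization_jobs(f: IncomingFile, target_lkfs: float = -24.0,
                       house_wrapper: str = "MXF") -> list[str]:
    """Rules-based plan to bring a file into house format.
    The 2 LU loudness tolerance is an assumed house rule."""
    jobs = []
    if abs(f.loudness_lkfs - target_lkfs) > 2.0:
        jobs.append(f"loudness_correct to {target_lkfs} LKFS")
    if f.wrapper != house_wrapper:
        jobs.append(f"rewrap to {house_wrapper}")
    return jobs
```

In a real facility the assessed parameters of a wild file would be captured by the rapid file assessment tool and handed to the grid as a job description, rather than hard-coded as here.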
By using a combination of fully automated and semi-automated workflows, the normalization can be performed faster than real time with significantly lower operating costs, and any errors can be spotted and addressed before they go on-air.
VOD content preparation
The other major content preparation problem for many playout facilities, namely VOD content generation, is typically an even bigger concern than the large volume of file formats entering a facility. The need to deliver content to the ever increasing range of VOD, catch-up and OTT services is stretching broadcasters and service providers, who often do not have the budget to increase manning to address the problem. Unfortunately, this is making the production of services like catch-up or just-aired VOD a costly drain on resources.
The low efficiency of on-demand content generation is typically due to the fact that nonlinear content has a separate workflow from traditional linear television. Many broadcasters have operated their playout and new media operations independently, with the latter being responsible for generating all on-demand content. This approach worked just fine some years back when there were only a few media platforms. However, the volume of on-demand content has grown dramatically, with the upsurge of new formats, including cable and satellite VOD distribution, owned and syndicated Web delivery, mobile device services, and download to own/rent (DTO/DTR) media portals.
Many broadcasters have embraced this VOD growth wholeheartedly, with their TV Everywhere-style initiatives. These have sought to retain viewers and maintain revenues by allowing their audiences to watch their favorite programs through their preferred medium at a time that best suits them. Unfortunately, the net result of all this growth in VOD content is a serious overloading of broadcasters' existing on-demand content generation processes. The complexity of VOD generation has also increased, with many broadcasters needing to offer their content in multiple languages, and add elements like captions and subtitles, while also controlling audio loudness, adding AFD and inserting SCTE triggers. All these factors have contributed to the demands of content generation, and have resulted in some convoluted, overly manual workflows.
The workload of VOD content generation is also exacerbated by the fact that the revenue streams from many new platforms are typically low, although some broadcasters are able to claim advertising credit for catch-up content to bolster their income on linear TV services. In fact, we are seeing this as a key driver for a VOD capability that is intrinsically linked to the playout schedule. However, catch-up VOD requires many different versions of content to be generated quickly to earn viewership credits for nonlinear content.
For instance, the Nielsen C3 format, which covers content viewed within three days of the original broadcast, is presented with the same commercials as the original broadcast. The C7 format, for delivery four to seven days after the original broadcast, allows for the substitution of advertising to create additional revenues, and premium VOD will play without advertising or markers for targeted advertisements.
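The window logic above lends itself to simple automation. The following is a minimal sketch of mapping days-since-broadcast to an advertising treatment; the policy names are hypothetical and the window boundaries follow the C3/C7 conventions just described:

```python
def vod_ad_policy(days_since_broadcast: int) -> str:
    """Pick the advertising treatment for a catch-up VOD version,
    per the simplified C3/C7/premium rules described in the text."""
    if days_since_broadcast <= 3:
        return "original_ads"     # C3: same commercials as the broadcast
    if days_since_broadcast <= 7:
        return "substituted_ads"  # C7: ads may be replaced for new revenue
    return "no_ads"               # premium VOD: no ads or ad markers
```

A schedule-aware VOD system would evaluate this per title and per platform, since individual platform agreements can override the default windows.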
Branding in the VOD and catch-up TV world is another major challenge, and it's as important to audience retention as branding on a traditional linear playout channel. However, VOD content is obviously not consumed in a linear fashion, and hence the traditional “up-next” promos just aren't relevant. There needs to be a new and highly flexible in-show promo strategy to generate new types of graphics, such as, “See a new episode every Thursday.” Traditionally, these promo graphics elements have been packaged manually on NLE systems, which introduces additional latency and costs to the VOD operations.
Furthermore, since most program content is the same across versions, the transcoding systems used to encode VOD files typically end up spending the majority of their time reprocessing the same content. Again, this introduces additional, unnecessary latency and cost. Unsurprisingly, the net result of all this complexity has been many broadcasters publishing nonlinear content with minimal or zero branding.
In this situation, automated VOD preparation and cross-platform branding are the only viable solutions to these critical problems. This demands a new type of VOD content preparation system, with exceptional integration of linear and on-demand content preparation to speed production and minimize costs. The same assets, workflow and scheduling processes need to be shared across VOD and linear television operations. (See Figure 3.) With full schedule awareness, this type of system can offer significant performance improvements over traditional transcoding systems by avoiding unnecessary processing of the same content across different VOD versions.
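The deduplication idea can be illustrated with a content-addressed cache: since the program segments are identical across C3, C7 and premium versions, only the differing ad segments need fresh encodes. This is a sketch under assumed names, not a description of any particular transcoding product:

```python
import hashlib

class TranscodeCache:
    """Sketch: skip re-encoding segments already rendered for
    another VOD version of the same schedule item."""

    def __init__(self):
        self._rendered: dict[str, str] = {}  # content key -> output path
        self.encodes = 0                     # count of real encodes performed

    def _key(self, segment: bytes, profile: str) -> str:
        # Key on both the media content and the encode profile.
        return hashlib.sha256(segment + profile.encode()).hexdigest()

    def encode(self, segment: bytes, profile: str) -> str:
        key = self._key(segment, profile)
        if key not in self._rendered:
            self.encodes += 1  # a real transcoder would render here
            self._rendered[key] = f"/cache/{key[:12]}.ts"  # hypothetical path
        return self._rendered[key]
```

Building a C7 version after a C3 version then re-encodes only the substituted commercials, while every shared program segment is served from the cache, which is where the faster-than-real-time gains come from.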
There's also a need for integrated support for Nielsen watermarking, closed captions, AFD and V-Chip metadata, as well as rich channel branding. Additionally, there also needs to be full integration with program management and traffic systems (with support for BXF). This wider integration enables sources for VOD generation to be automatically fetched from generic storage, video servers, NLE systems, archiving systems, content delivery systems, data or videotapes, as well as from live video sources.
By using the latest generation of VOD generation systems with these capabilities, broadcasters can meet their delivery commitments without hiring a new team of operators, while also ensuring all content maximizes cross-promotion opportunities.
In summary, the big front-end problems facing broadcasters and playout facilities can now be tamed with automated content preparation. The net effect is that playout can be better scaled and costs better controlled; it's where facilities must go to stay competitive.
Stephane Blondin is senior product manager at Miranda Technologies.