Centralized and distributed broadcasting
A year ago, centralized broadcasting practically monopolized trade publications, industry gatherings, and the boardrooms of station groups, networks and equipment manufacturers alike. Centralized broadcasting is simple in concept: a single location controls multiple stations over a given geographic region. The degree of control at the main facility can vary depending on the objectives of the network or station group.
Signal distribution and monitoring equipment plays an important role at Maison Radio Canada, the origination facility for the centralized broadcasting of the Canadian Broadcasting Corporation’s French-language services across Canada. Photo courtesy Miranda.
Now, 12 months later, the subject of centralized broadcasting has somewhat lost the spotlight, but consolidation projects continue as station groups and networks work to streamline operations and save costs while working on models to offer new and more diverse services.
Degrees of centralization
There are a number of models for centralized broadcasting, but they all have the same basic objectives: take advantage of economies of scale and increase productivity through collaboration. Rather than having every station in a broadcast group duplicate all functions at the local level, the group can consolidate certain key functions, such as graphics, traffic, sales and archive management. Likewise, rather than investing in highly priced digital equipment at each individual site, a central facility can, in many cases, house the core infrastructure. Thus, remote stations can function with a smaller equipment investment and with a smaller operational staff. In certain cases, this may help with the transition to digital because it can free some stations from having to make the capital investment in new equipment. The number of functions and amount of equipment a particular broadcast group centralizes depends on the structure and objectives of the group.
There are many different scenarios for consolidating operations. The two models that represent polar opposites on the continuum are the centralized-playout model and the distributed-playout model. In the centralized-playout model, the central facility carries out as many functions as possible. In the distributed-playout model, most functions remain at the local level, with only some management and control activities transferred to the central facility.
In the centralized-playout model, a central location houses most of the equipment and functions (see Figure 1). The central facility handles all network program reception, syndicated program ingest, commercial insertion, master control, branding and presentation switching functions, and all monitoring. Essentially, the central facility streams a ready-for-air signal to each of the stations in the group. Other than originating local news and the like, the remote stations have few on-air functions beyond moving the ready-for-air signal out to the local transmitter for broadcast.
Figure 1. In the centralized-playout model, a central location houses most of the equipment and functions.
The remote stations file-transfer non-live local content such as commercials, taped programs and promos to the central facility for playout. Live local content such as news is either switched in locally under remote control from the central facility, or, in some cases, backhauled to the central site. The central site then switches it in master control and sends it right back to the local station, fully integrated with other programming and commercial elements.
Pros and cons
The main benefits of this model come from the co-location of most of the technical infrastructure and from the simplicity of the automation and content management. Automation is simpler because all playout and air devices are located within the same facility. In fact, the operations and technical infrastructure of such a central facility are similar to the multichannel origination facilities operated by specialty channel providers, DBS and other multichannel originators. The technology for multichannel server, automation and highly integrated master control for these types of operations has existed for years. Since this model centrally locates all equipment and media, it easily achieves redundancy and protection.
The disadvantages of this model lie in the cost of distributing the ready-for-air streams to each of the individual stations from the central location, as well as in the risk involved in relying on these communication links. The key new system element is remote monitoring to allow the central facility to monitor not only what it is sending, but also what each of the remote cities is actually airing. Such centralized operations use remote signal telemetry and streaming video over standard IP networks extensively to provide remote monitoring.
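The off-air confirmation logic behind such remote monitoring can be sketched in a few lines. This is a minimal, hypothetical illustration (the station names, field names and report format are all assumptions, not any vendor's actual telemetry protocol): each remote site periodically reports what it is airing over the IP network, and the central facility raises an alarm for any station that has stopped reporting or is not carrying the expected feed.

```python
# Hypothetical centralized off-air monitoring check. Each remote station
# periodically sends a small telemetry record over the IP network; the
# central facility compares what each site is airing against what it sent.

def find_alarms(reports, expected_feed):
    """Return (station, reason) pairs for sites that have stopped
    reporting or are not airing the feed the central facility sent."""
    alarms = []
    for report in reports:
        if not report.get("online", False):
            alarms.append((report["station"], "no telemetry"))
        elif report.get("airing") != expected_feed:
            alarms.append((report["station"], "off air / wrong feed"))
    return alarms

reports = [
    {"station": "Station A", "online": True, "airing": "net-feed-1"},
    {"station": "Station B", "online": True, "airing": "local-origination"},
    {"station": "Station C", "online": False},
]
print(find_alarms(reports, "net-feed-1"))
# [('Station B', 'off air / wrong feed'), ('Station C', 'no telemetry')]
```

In practice the alarm state would be plotted on a national status map, as the CBC does, with streamed video confirming what each site is airing.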
Centralization at the CBC
Canadian broadcasters have chosen the centralized-playout model for practically all of their consolidation projects in the past two years. Originally employed by regional networks that operated four to six stations in a single provincial region, the model has now been deployed on a national level as well. The Canadian Broadcasting Corporation (CBC) has recently completed an important consolidation project, centralizing all of its English-language network operations in Toronto. The French-language network Radio Canada will soon do the same, consolidating its operations in Montreal. The two centers are linked and, in the event of a catastrophic failure at either center, will be able to act as backup with a reduced number of feeds.
Figure 2. The main status page of the CBC’s centralized monitoring of remote operations features a national map with alarm and off-air video display capability streamed back from each location via a TCP/IP network.
The CBC has deployed a fully centralized model, uplinking a multiplex of 15 ready-for-air streams, mainly by satellite, to regional centers located throughout the country. The system streams local programming, news and special events from each regional center to Toronto through a combination of telco lines and satellite return paths (see Figure 2). A network command center (NCC) was constructed in Toronto to integrate and monitor the 19 English-language services. One of the most important reasons the CBC decided to loop local programming through Toronto and distribute a ready-for-air signal by satellite was so it could eventually feed isolated transmitters serving remote communities by satellite, thereby reducing operating cost. The principal enabling technology for this system was the compression and multiplexing technology to uplink 15 streams on a single multiplex.
The centralized-playout model described above has all but fallen out of favor in the United States in the past year. U.S.-based groups that studied the model were not able to balance the cost savings of consolidating master-control operations against the high cost of operating the real-time video links required to carry ready-for-air streams to each remote station. In some cases, especially in smaller cities where reduction of local infrastructure makes most sense, there may not have been sufficient telco access. In addition to the lack of sufficient cost savings, U.S. group owners were concerned with the risks of removing master control and master-control operations from the local station and relying exclusively on those distribution links to get the final signal to air.
U.S. group owners have instead opted in increasing numbers for the distributed-playout model, which falls toward the opposite end of the spectrum of consolidation models. In the distributed-playout model, most of the equipment and the primary station functions remain at the local station level (see Figure 3). Some content creation, ingest and preparation, traffic, and automation can be centralized, but the local station generates the final, ready-for-air signal. While the central station can push some material to the local stations using file transfers, playout to air occurs from the local station server. Master control and the responsibility for ensuring continuity in the programming – particularly during sporting events and other live events where break points cannot be predicted – remain in the hands of a local master-control operator.
Figure 3. In the distributed-playout model, most of the equipment and the primary station functions remain at the local station level.
Among the benefits of the more distributed approach is a reduced reliance on distribution links. High-capacity links are still required, but these links are now carrying files transferred in non-real time, rather than real-time streaming video. The performance requirements on the network links are less demanding, the range of available network providers and transfer methods is wider, and the costs are lower. Losing the link for a moment does not take the signal off the air. Instead, the temporary loss of connection may require a resend of a file or small portion of a file. There are now modern protocol extensions designed specifically for this purpose that allow the resend to happen automatically and transparently to the users.
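The resume-after-interruption idea behind those protocol extensions is simple: the receiver keeps whatever portion of the file arrived, and the sender restarts from that byte offset rather than from zero. The sketch below simulates this with local files (a stand-in for the real network link; the file names are hypothetical), but the same offset logic is what mechanisms such as FTP restart or HTTP range requests provide.

```python
# Sketch of resumable file transfer: the receiver's partial file tells
# the sender where to restart, so a dropped link costs only a resend of
# the missing tail, never the whole file. Local-file simulation only.
import os

CHUNK = 4096

def resume_transfer(source_path, dest_path):
    """Append to dest starting from however many bytes it already
    holds; after a lost link, simply calling this again completes it."""
    offset = os.path.getsize(dest_path) if os.path.exists(dest_path) else 0
    with open(source_path, "rb") as src, open(dest_path, "ab") as dst:
        src.seek(offset)                    # skip what already arrived
        while chunk := src.read(CHUNK):
            dst.write(chunk)

# Simulate an interrupted transfer that left a partial file behind.
with open("program.mpg", "wb") as f:
    f.write(b"x" * 10000)                   # the full 10,000-byte file
with open("partial.mpg", "wb") as f:
    f.write(b"x" * 3000)                    # link dropped after 3,000 bytes

resume_transfer("program.mpg", "partial.mpg")   # resends only 7,000 bytes
print(os.path.getsize("partial.mpg"))           # 10000
```

Because computers at both ends handle this automatically, no operator ever has to notice that the link went down.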
Another advantage of the distributed-playout model comes from the fact that the local stations remain whole and somewhat autonomous. In addition to being safer, this autonomy is beneficial in the context of changing ownership rules and the lifting of restrictions on duopolies. A station that is still whole and autonomous can be sold or traded more easily.
Among the disadvantages of this model is an increase in the complexity of the automation and content management and distribution, because server and other on-air elements are distributed.
Where’s the beef?
While studying the costs of their consolidation options, group owners analyzed workflows and operations in all of their facilities. In analyzing the workflows at the individual stations, they realized that there were more significant costs in non-live but daily tasks such as production, graphics creation, traffic management, ingest and QA, than in master control. Moreover, they found that if they looked at all of the group facilities, there was a great deal of duplication in those daily tasks. They recognized that many people were ingesting the same content, performing QA, cataloging operations and entering the metadata necessary for traffic and automation. Group owners saw that they could generate significant savings by consolidating these time-intensive, repetitive operations on non-live material. They concluded that they could get the greatest possible benefit without incurring the telco cost and the risks of total centralization.
The key is to consolidate non-live portions of the workflow and to leverage content distribution from centralized servers to servers at individual stations using file-transfer techniques over terrestrial or satellite networks. (See Table 1.)
Table 1. Non-live content can often be created or ingested at a central location and then transferred to local stations, while live content is often produced locally.
Handling syndicated programs is an excellent example of a time- and labor-intensive process that is repeated (duplicated) at dozens, if not hundreds, of TV stations every day. Each facility does the same thing: It aims the dish, tunes the receiver and records the show’s content. Once the station checks the recording for errors and quality, it reviews the recording to identify commercial-break insertion points and usually produces a promo clip based on excerpts from the show. The station then transfers the program and promotional materials to a server or cassette and enters metadata required by the automation and traffic systems. This same linear process is performed dozens of times each day. By contrast, a consolidated operation using the distributed model can perform this process at one location and, after ingest and QA, can file-transfer the material to multiple servers in multiple cities. If a large number of stations are involved, then the file transfer can be accomplished over satellite using IP over MPEG-2. Satellite-based delivery bypasses the limitations of currently deployed WANs in terms of capacity and ability to handle multiple receive points (multicast). Broadcast groups need not tackle the consolidation of syndicated programs on their own. Modern video-service providers have begun to ingest and prep syndicated programs and deliver them by satellite to edge servers acting as electronic mail boxes located at local TV stations.
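The ingest-once, distribute-many workflow just described can be sketched in miniature. In this hypothetical simulation (the station call letters, file names and directory layout are all illustrative assumptions), a per-station "mailbox" directory stands in for the edge server at each station, and a simple file copy stands in for the satellite or WAN file transfer.

```python
# Sketch of consolidated syndicated-program handling: the program is
# ingested and QA'd once at the central facility, then the finished file
# is replicated to every station's edge-server mailbox. The copy stands
# in for an IP-over-MPEG-2 multicast or WAN file transfer.
import os
import shutil

def distribute(program_file, stations, mailbox_root="mailboxes"):
    """Fan one centrally prepared file out to each station's inbox."""
    for station in stations:
        inbox = os.path.join(mailbox_root, station)
        os.makedirs(inbox, exist_ok=True)
        shutil.copy(program_file, inbox)

# One ingest/QA pass at the central facility...
with open("show_0412.mpg", "wb") as f:
    f.write(b"syndicated program payload")

# ...replaces the same work being repeated at every station.
distribute("show_0412.mpg", ["WAAA", "KBBB", "WCCC"])
print(sorted(os.listdir("mailboxes")))   # ['KBBB', 'WAAA', 'WCCC']
```

With satellite multicast, the fan-out above is a single transmission received by all stations at once, which is what makes the model scale to dozens or hundreds of receive points.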
The Holy Grail is to have material arrive at a facility and have the metadata automatically formatted for the local station’s automation and traffic systems so that material can be directly transferred to a station’s server and inserted in the lineup with minimal operator intervention. By offering a standard way of specifying metadata, MXF, a new standard soon to be approved by SMPTE, promises to facilitate such direct transfers.
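The metadata hand-off described above amounts to a translation step. The sketch below illustrates the idea with a hypothetical translation table (the field names are invented for illustration and are not the actual MXF schema or any real traffic system's keys): a record in a standard descriptive format is renamed into whatever keys a particular station's automation and traffic systems expect, so the clip can drop into the lineup without operator re-keying.

```python
# Sketch of automatic metadata formatting on arrival: a standard record
# is mapped to a station's local traffic/automation field names. All
# field names here are hypothetical, for illustration only.

STATION_FIELD_MAP = {            # per-station translation table
    "title": "EventTitle",
    "house_id": "MediaID",
    "duration_frames": "Dur",
}

def to_station_schema(standard_record, field_map=STATION_FIELD_MAP):
    """Rename standard fields to local ones; fields the local system
    does not use are simply dropped."""
    return {field_map[k]: v for k, v in standard_record.items() if k in field_map}

record = {"title": "Evening Promo", "house_id": "PRM-1138",
          "duration_frames": 900, "ingest_site": "central"}
print(to_station_schema(record))
# {'EventTitle': 'Evening Promo', 'MediaID': 'PRM-1138', 'Dur': 900}
```

A standard such as MXF promises to fix the left-hand side of that table once for everyone, leaving only the per-station mapping to maintain.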
Hyper local news
The Sinclair Broadcast Group recently detailed some of its centralization plans in a series of press releases. Sinclair’s thinking has been to use the advantages of both models (real-time streaming and store-and-forward of files) while minimizing the disadvantages of both methods. With an eye on consolidation, Sinclair built a 15,000-square-foot production facility at its corporate headquarters in Baltimore. Here, both live and non-live material is produced for its stations. It broadcasts live national news out to its client stations in real time over satellite, while it file-transfers near-real-time material like the weather and other non-time-sensitive content over the Sinclair WAN to the stations for insertion. Sinclair is continuing to build out the distribution network, and it plans on linking all 40 of its stations to this network.
Among the many tasks the group chose to consolidate, the most unusual and interesting was consolidating local weather reports for some of its stations at its centralized production facility in Baltimore. On-air weather talent in Baltimore produce multiple “local weather” segments. These segments are file-transferred to the local stations on a near-real-time basis using Telestream clip mail boxes (like an e-mail message with a large attachment), where they are inserted into the news. This arrangement has allowed the group to upgrade and improve its ability to deliver local weather to its stations. And the cost savings in consolidating the use of expensive weather equipment and graphics systems are considerable. Although we think of the weather report as presented live from the studio of the station, Sinclair maintains that it matters little whether the studio is in the local market or hundreds of miles away. What is important is the quality of the data and the professionalism of the presentation. Sinclair has partnered with Accuweather to provide this data. Under these circumstances, weather is a perfect candidate for consolidation. The company will be able to keep up with the latest advances in weather graphics and presentation techniques without having to continuously upgrade equipment at each local station.
At the heart of the distributed playout model lies the important notion that non-live material does not have to be handled as a real-time, synchronous video stream; it can be treated as a file instead. Files are easier to deal with than streams. They can be transferred asynchronously across a wider range of networks. In this case, asynchronous translates into two important concepts: 1) the transfer can happen slower or faster than real time, depending on the link and, 2) the transfer can be achieved without the need for operator intervention. Like an e-mail, once initiated, a file transfer just happens. Computers at both ends take care of the details. The network constraints are simplified and the workflow becomes nonlinear – two important ingredients for dramatic simplification of the overall process.
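The faster-or-slower-than-real-time point can be made concrete with simple arithmetic. The figures below are illustrative assumptions (a half-hour program encoded at 8 Mb/s, sent over links of two different capacities), not measurements from any particular deployment.

```python
# Illustrative transfer-time arithmetic for a non-live program moved as
# a file rather than streamed. Overheads are ignored for simplicity.

def transfer_minutes(program_minutes, encode_mbps, link_mbps):
    """Minutes needed to move the encoded program over a given link."""
    megabits = program_minutes * 60 * encode_mbps   # total file size in Mb
    return megabits / link_mbps / 60                # seconds -> minutes

# A 30-minute program at 8 Mb/s is 14,400 Mb (1.8 GB).
print(round(transfer_minutes(30, 8, 45), 1))  # 45 Mb/s link: 5.3 min, faster than real time
print(round(transfer_minutes(30, 8, 5), 1))   # 5 Mb/s link: 48.0 min, slower, but off-line
```

Either way the program arrives intact and unattended; only a live stream would have to match the link rate to the encode rate in real time.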
Until recently, exchanges based on file transfers were not practical for many reasons. First, legacy video servers usually were proprietary boxes; the only way to place content on the server was to stream it in through one of the server’s video ports. Second, networks linking video servers, particularly WANs, were not fast enough or consistent enough to handle large video files. Third, it was not possible to transfer files between servers from different vendors because even though they may have used common compression formats, they did not use consistent file headers. It was also not possible to transfer transparently any metadata describing the attributes of the file from one server to another, unless of course both servers were from the same vendor.
But the situation has been changing rapidly lately, and the barriers listed above have started to disappear. Several factors have converged to make file-based operations across geographies and across server platforms a reality. These factors include recent advances in storage technology; high-capacity, wide-area data networking over land-based lines or satellite; acceptance of MPEG-2 as a universal compression format; and the emergence of MXF, an important new metadata exchange format. Simplification of infrastructure and de-linearization of the workflow will have a major impact on TV facilities and on the way they distribute TV content.
In the context of centralization and consolidation, the simple fact is that most material aired by local TV stations is not live. (By popular estimates, less than 20 percent is live.) This ratio means that broadcasters can fully leverage the simplified workflow and infrastructure surrounding file-based operations.
At the moment, there are two chief disadvantages to the distributed-playout model: the cost of putting complex server systems at each and every facility, and the complexity of content distribution and remote automation. Proponents of the centralized-playout model maintain that it is simpler to manage the operation when all of the equipment and the media are in one place. Well, video servers are no longer complex or expensive. Consider what some are calling an edge server: a low-cost, highly integrated box that incorporates video server and switching functionality. This is a stand-alone device that can be remotely controlled and whose content can be remotely loaded through signaling and data embedded in the network signal feeding the station.
As for the complexity of media distribution and automation, all that remains is for TV automation systems and media asset management systems to evolve to the same level as they have in other file-intensive industries. Once this happens, users will no longer have to worry about where to put that material and from where it will be played out. The content and the intelligence to play it will be truly distributed. Having simplified the entire operation, we will be able to leverage our systems to further specialize our advertising and programming without incurring any additional cost or complexity.
Michel Proulx is vice president of product development at Miranda Technologies.