BBC Broadcast’s Broadcast Centre


The Broadcast Centre uses projection multiviewers from Christie Digital. Photo courtesy Matt Wain Photography.

Last year after NAB2003, BBC Broadcast awarded Television Systems Ltd (TSL) a systems integration contract as a fundamental part of the establishment of the Broadcast Centre in London. The contract for the design, installation, commissioning and integration of its playout and broadcast automation systems represents one of the largest migration projects in broadcasting history.

This contract has entailed the complete relocation of playout and channel management operations from BBC Television Centre to the purpose-built Broadcast Centre in a new complex in White City, a seismic shift in the operations of BBC Broadcast. With the majority of TSL's work under the contract now complete, it is a good time to look at just what has been achieved.

BBC Broadcast's primary shareholder is the BBC, so its first purpose is to bring benefits to the corporation in both savings and the ability to launch new services into new markets. Its second role is to engage with external clients and customers to do the same for them, bringing business into BBC Broadcast and thereby feeding profits and earnings back into the BBC to fund program making.

The motivating forces were wide-ranging. From a technical point of view, the driver was to support new business processes. What does that constitute? Accommodation is obviously an important factor: having the right space to carry out the right tasks. In the old playout facility at Television Centre, even simple things such as cabling, infrastructure changes and the provision of new services were difficult and expensive, and room for expansion was at a premium.


Media server systems from Omneon Video Networks are used for ingest and playout.

An internal feasibility study was initiated approximately four-and-a-half years ago into consolidating what were then Broadcast and Presentation's functions into new accommodation with new technology. The Broadcast Centre project assessed that feasibility: current and future business processes were examined in detail, and a migration strategy was developed. That took more than a year, and construction of the building took 18 months.

People have been talking a great deal about IT and broadcast convergence, and that is important. The key, however, is business convergence: the compression of processes. If you have a piece of media, you need to be able to manipulate it, process it, modify it and send it to its delivery mechanism in short timescales, efficiently and, where required, in multiple ways. That is what this building is about: ingest the material (or receive it as a file), manipulate it, convert it to whatever format is required for delivery, and send it to the delivery mechanism. It is also about future-proofing, which is important. (Creative Services is also part of this process, based around generationQ from Quantel.)
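As a purely conceptual sketch of that compressed workflow, the chain can be thought of as three steps. The stage names, formats and fields below are illustrative assumptions, not BBC Broadcast's actual software:

```python
# A minimal, hypothetical sketch of the ingest-manipulate-deliver chain.
# Stage names, formats and fields are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class MediaItem:
    title: str
    fmt: str  # current format of the material

def ingest(title: str, fmt: str) -> MediaItem:
    # Material arrives by tape ingest or directly as a file.
    return MediaItem(title, fmt)

def manipulate(item: MediaItem, target_fmt: str) -> MediaItem:
    # Process, modify and convert to the desired delivery format.
    return MediaItem(item.title, target_fmt)

def deliver(item: MediaItem, mechanism: str) -> None:
    # Hand the finished item to its delivery mechanism.
    print(f"Sending '{item.title}' ({item.fmt}) via {mechanism}")

deliver(manipulate(ingest("promo", "file/DV"), "D10"), "playout server")
```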

TSL has worked on several projects with the BBC, and when it came to appointing a systems integrator for the Broadcast Centre, that work stood TSL in good stead. The company also shared the BBC’s vision.

In terms of key technology decisions, some were made largely by BBC Broadcast — OmniBus for automation and Omneon Video Networks for the ingest and playout server technology. Other decisions were largely handled by TSL — Christie Digital for the high-quality projection multiviewers and Axon Digital Design for its Synapse modular media system. However, none of these decisions were made in isolation.

Taking the structure of the building first, the broadcast central apparatus room (BCA) is on the ground floor. It is bombproof and has space for 680 racks. Above the racks is a mezzanine level for cable management, and water- and gas-impermeable Brattberg seals enclose the cabling where it passes through to the first floor.

BBC Broadcast plays out a wide range of services, so although there is a great deal of flexibility within the playout areas, and much of the overall architecture underpins all of them, they are not all equipped in the same way. Flexibility is key because the facility wanted to be able to offer clients services at either end of the scale.

In terms of the deployment of the playout areas, a process of de-risking was employed as much as possible. For example, Playout Two manages BBC World, a complex channel to handle because of its multi-feed global nature, while Playout One manages the BBC national channels. Because the technology level used in the two areas is similar, Playout Two was built before Playout One, so that lessons learned there de-risked the migration of the national channels.

Playout Three was actually constructed first because it was closest to the multi-streaming area (MSA) that the broadcaster had used as a test bed. The MSA was a facility built on the back of the digital transmission area at Television Centre. It was subsequently used for the playout of BBC3, BBC4, CBeebies and CBBC, as well as the broadcaster's interactive channels, which were the first to migrate to the Broadcast Centre. It was during that project that OmniBus automation and Omneon video server technology were first used by BBC Broadcast.


BBC Technology’s Colledia monitoring system, shown here in Playout Three, allows BBC Broadcast to pinpoint and monitor every key component.

The facility wanted a wrapped, turnkey SI solution: the responsibility for the integration, and for the system ultimately working, handled through a single contractual line. When a set of complex technologies is being put together, having one point of responsibility is necessary to achieve efficient delivery. Contractually, TSL is there to make the system work for the playout facility, and it has done so.

OmniBus Systems automation, the latest modular G3 front end on a Colossus backbone, is fundamental to the playout facility. For operators, the functionality and inherent flexibility of this solution are essential, offering the ability to tailor each desktop to the requirements of a given channel. It is also a robust, distributed, network-based architecture.

Not only does BBC Broadcast have different channels and different propositions for different clients, but those propositions also change over time. Parts of schedules are fairly reactive and need the support of individual operators and directors; at other times of the day they are not so critical. What G3 offers is the ability to provide a flexible staffing model around the channels and to share resources across them.

In terms of automation, the systems integrator was careful with its introduction. Not only did the company use a new facility at its base in Maidenhead, UK, to test all parts of the chain over time, but it also introduced the automation to operators in a structured way so that any initial bugs could be appreciated for what they were, rather than engendering a long-term negative attitude. This has proved to be a successful way of operating. An ongoing test facility exists at the Broadcast Centre for software upgrades and advanced training.


The broadcast central apparatus room houses servers from SGI.

On the first floor are the playout rooms, nine in total, with five equipped under this contract. The playout racks are located as close as possible beneath their respective playout suites. Media Management Operations (MMO), where all ingest onto servers occurs, is also situated on the first floor.

Omneon media servers are used for both ingest and playout, with an SGI cluster in between that is used by Creative Services. Currently, the system uses D10 I-frame as its video format, but one of the core reasons Omneon was selected was the system's ability to handle a range of current and predicted file formats. Again, flexibility is the key. The API, and the level of control it provides, were also appropriate.
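To illustrate why format flexibility matters at the server layer, here is a hypothetical sketch of format-aware ingest routing: material already in the house format passes straight to the server, while anything else is flagged for conversion. The format names, the supported-input list and the transcode step are assumptions for illustration, not Omneon's actual behavior or API:

```python
# Hypothetical sketch of format-aware ingest routing.
# Format names and the transcode step are invented for illustration.
HOUSE_FORMAT = "D10_IFRAME"
SUPPORTED_INPUTS = {"D10_IFRAME", "DV25", "MPEG2_LONG_GOP"}

def route_ingest(clip_id: str, fmt: str) -> str:
    if fmt == HOUSE_FORMAT:
        # Already in the house format: straight onto the playout server.
        return f"{clip_id}: copy to playout server as-is"
    if fmt in SUPPORTED_INPUTS:
        # Known format: convert to the house format first.
        return f"{clip_id}: transcode {fmt} -> {HOUSE_FORMAT}, then copy"
    raise ValueError(f"{clip_id}: unsupported format {fmt}")

print(route_ingest("clip001", "DV25"))
```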

The facility uses separate audio servers supplied by Pharos Communications, and this illustrates another major benefit that the structure brings. The playout facility is now able to separate out audio and treat it as it wishes, as it does with graphics. (Pixel Power Clarity graphics systems are central to this.) These streams are then pulled together by the automation system at playout. Again, this is required because of the range of channels that the facility works with.
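Conceptually, the automation's job at transmission time is to cue the separately stored elements together against the schedule. The sketch below illustrates that idea only; the event fields and element names are invented and do not reflect the OmniBus data model:

```python
# Conceptual sketch of automation pulling separately stored video,
# audio and graphics together at playout. Fields are illustrative
# assumptions, not the OmniBus data model.
from dataclasses import dataclass, field

@dataclass
class PlayoutEvent:
    start: str                                      # scheduled start time
    video_clip: str                                 # ID on the video server
    audio_clips: list = field(default_factory=list) # audio server IDs
    graphics: list = field(default_factory=list)    # graphics templates

def assemble(event: PlayoutEvent) -> None:
    # At the scheduled time, cue each element from its own server and
    # play them out together as one composed output.
    print(f"{event.start}: roll video {event.video_clip}")
    for a in event.audio_clips:
        print(f"{event.start}: mix audio {a}")
    for g in event.graphics:
        print(f"{event.start}: key graphic {g}")

assemble(PlayoutEvent("20:00:00", "vid_0042",
                      audio_clips=["sting_01"], graphics=["channel_bug"]))
```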

One of the core decisions made with the systems integrator and network specialist Osprey was to flood-wire the building with CAT6E cabling. Looking toward the future, that provides a vast amount of capacity. The network is split into a series of tightly controlled VLANs, essentially one VLAN per core manufacturer's solution, and the commands passing between those VLANs are also tightly controlled. A firm grip on traffic flow is necessary, and VLANs are the best way to achieve it.
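The sketch below illustrates the one-VLAN-per-vendor split with an explicit whitelist of permitted inter-VLAN flows. The VLAN IDs and allowed flows are invented for illustration and do not describe the Broadcast Centre's actual network policy:

```python
# Illustrative one-VLAN-per-vendor-system split with a whitelist
# controlling inter-VLAN traffic. IDs and flows are invented.
VLANS = {
    "automation": 10,     # OmniBus
    "video_servers": 20,  # Omneon
    "audio_servers": 30,  # Pharos
    "monitoring": 40,     # Colledia
}

# Only explicitly listed (source, destination) pairs may exchange traffic.
ALLOWED_FLOWS = {
    (VLANS["automation"], VLANS["video_servers"]),
    (VLANS["automation"], VLANS["audio_servers"]),
    (VLANS["monitoring"], VLANS["video_servers"]),
}

def permit(src_vlan: int, dst_vlan: int) -> bool:
    # Traffic within a VLAN is unrestricted; traffic between VLANs
    # must be on the whitelist.
    return src_vlan == dst_vlan or (src_vlan, dst_vlan) in ALLOWED_FLOWS

assert permit(10, 20)      # automation may command the video servers
assert not permit(30, 10)  # audio servers may not reach automation
```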

Perhaps the last core element is BBC Technology’s Colledia monitoring system. It enables the facility to pinpoint and monitor every key component, which is essential. Again, because of the variety of levels of service that the facility provides, it is able to tailor the alarms to service criteria, enabling staff to react quickly if something does go wrong.
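The idea of tailoring alarms to service criteria can be illustrated as follows; the service tiers, fault types and severities are assumptions for illustration, not the Colledia configuration:

```python
# Hypothetical sketch of service-criteria alarm tailoring: the same
# fault raises a different severity depending on the affected channel's
# service tier. Tiers, faults and severities are invented.
SERVICE_TIERS = {
    "national_channel": "premium",
    "client_channel_x": "standard",
}

SEVERITY = {
    ("signal_loss", "premium"): "critical",
    ("signal_loss", "standard"): "major",
    ("audio_silence", "premium"): "major",
    ("audio_silence", "standard"): "minor",
}

def raise_alarm(channel: str, fault: str) -> str:
    tier = SERVICE_TIERS.get(channel, "standard")
    sev = SEVERITY.get((fault, tier), "warning")
    return f"{sev.upper()}: {fault} on {channel} ({tier} service)"

print(raise_alarm("national_channel", "signal_loss"))
```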

This is still a work in progress. MXF will have a role to play in the future, and the amount of content ingested from tape will decrease. BBC Broadcast is extremely pleased with the systems that it has in place and the overall architecture.

Chris Howe is head of technology for BBC Broadcast.