Master control and channel branding

Master control technology has changed significantly over the last few years to support the ever-increasing demands being placed on master control operators. Switching itself has been an area of extensive development in automation technology, yielding advanced devices that automate routing changes and require little or no operator intervention. With the exception of live events, most master control functionality is driven by automation. In most master control environments, one or two operators are busy 24 hours a day to guarantee the uninterrupted operation of channels. They face the often daunting task of monitoring the quality and accuracy of channels that can number from one to hundreds, ensuring that transmissions conform to regulations, overseeing the operation's response to equipment problems and preparing content for future distribution.

This article will look at the issues surrounding stereoscopic 3-D playout in relation to master control — particularly the positioning of stereoscopic graphics and the real-time possibilities. It will examine these concerns in the wider context of the relationship between modern master control technologies and channel branding.

With the increasing proliferation of new TV channels, master control technology must also fulfill a new role: delivering all of the above while enabling even small broadcasters to increase their channel count cost-effectively. To satisfy this new brief while still allowing highly dynamic channel branding, master control units must be able to provide that branding across all channels without requiring separate devices.

Early adopters

Added to all this is the more recent arrival of stereoscopic 3-D. Stereoscopic 3-D is an interesting proposition for the broadcast industry, particularly arriving as it has against a backdrop of only slowly improving overall economics. On the one hand, it is to be celebrated as genuine innovation; on the other, a realistic approach has to be taken in terms of market penetration and short-to-medium term revenues throughout the chain. It would be extremely churlish not to celebrate the market-leading efforts of ESPN and BSkyB, but at the same time it was refreshing to see a more realistic overall market approach being adopted at IBC2010.

Stereoscopic television is currently much tried but not really tested. Stereoscopic 3-D must be viewed alongside a raft of other real changes in playout demands and workflows — particularly the need to reduce cost while increasing capabilities per rack unit.

The most obvious advantage is that stereoscopic television can be deployed throughout the playout chain with minimal technology upgrades. This is because it can be implemented by carrying two signals through the same HD chain rather than a single 2-D feed, though this has ramifications for compression. But that's not the whole story. While the core technical challenges appear manageable based on what we know so far, from a creative perspective, there are some subjects that continue to need exploration and development.
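As a purely illustrative sketch of this idea, the Python snippet below packs left- and right-eye frames into a single frame-compatible, side-by-side HD raster. The packing format, frame size and function names are assumptions for illustration only; the horizontal subsampling it performs is one source of the compression and resolution ramifications mentioned above.

```python
import numpy as np

def pack_side_by_side(left, right):
    """Pack left/right eye frames (H x W x 3) into one frame-compatible,
    side-by-side raster of the same dimensions.

    Each eye is horizontally subsampled to half width, which halves the
    horizontal resolution delivered to the viewer -- one of the
    compression/quality trade-offs noted above.
    """
    h, w, _ = left.shape
    half = w // 2
    packed = np.empty_like(left)
    packed[:, :half] = left[:, ::2]    # left eye occupies the left half
    packed[:, half:] = right[:, ::2]   # right eye occupies the right half
    return packed

# Example: two synthetic 1080p eye frames carried in one HD-sized signal
left_eye = np.zeros((1080, 1920, 3), dtype=np.uint8)
right_eye = np.full((1080, 1920, 3), 255, dtype=np.uint8)
frame = pack_side_by_side(left_eye, right_eye)
print(frame.shape)  # (1080, 1920, 3) -- still a single HD frame
```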

Once you move from 2-D to 3-D, anything else that appears on-screen is also affected. In a prerecorded environment, this is not an issue for obvious reasons, but during a live broadcast, the situation is very different.

How to manage the Z-plane

While many of the issues and solutions lie at the production end of the spectrum, the introduction of dynamic/live graphics is not one of them. This poses a real challenge for broadcasters and playout providers, as they are dealing with Z-plane positioning, which is new territory and still something of a black art. So what does stereoscopic mean for the positioning of graphics, both during a show/event and for branding purposes? How can broadcasters prevent clashes between 3-D on-screen “furniture” and video, and therefore significant disturbance to viewers?

There is an acknowledgement and understanding among graphics suppliers and broadcasters that overlay graphics — which are generally inserted in real time rather than created in post production, though the same applies to post-produced graphics — have to be positioned so that they don't upset the live action, film or whatever the content happens to be. It's vital that they don't cause visual disturbance to the brain. In basic terms, less is more. Graphics need to be sympathetic to the underlying content. Stereoscopic 3-D is not about flying saucers coming out of the screen; it has to be subtler than that, as broadcasters have recognized.

When broadcasting in 3-D, graphics have to be correctly placed in the Z-plane, and this can be a significant problem. The issue comes to the fore as the action moves from shot to shot, therefore altering the depth of field and the stereoscopic scope. Taking golf coverage as an example, if the operator doesn't position the graphics far enough forward, then during the long flight of the ball, it could end up in front of the graphics in the Z-space. But the viewer wouldn't actually see it in front of the graphics because they have been laid over the top. This would be disconcerting for the brain, as it would be seeing something that it has never seen in real life. For sports fans, losing sight of the ball would be annoying. So as the video image moves in the Z-plane, the graphics must either move too or be in a fixed position that will allow the flight of the ball to be captured undisturbed. It's going to take some innovative thinking to make the best use of graphics as an overlay in this environment.
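To make the Z-plane positioning concrete, here is a minimal sketch of the underlying geometry, assuming a simplified parallel-viewing model and illustrative figures of a 65mm eye separation and a 2.5m viewing distance (neither value comes from the article). The horizontal disparity between the left- and right-eye images of a graphic determines where the viewer perceives it in depth, so keeping a strap "in front of" the ball ultimately means choosing the right disparity.

```python
def perceived_depth(disparity_m, eye_sep_m=0.065, view_dist_m=2.5):
    """Apparent distance of an object from the viewer, given its on-screen
    horizontal disparity (positive = behind the screen plane, negative = in
    front), using a simplified parallel-viewing geometry.
    """
    if disparity_m >= eye_sep_m:
        return float('inf')  # disparity at/beyond eye separation forces divergence
    return view_dist_m * eye_sep_m / (eye_sep_m - disparity_m)

def required_disparity(target_depth_m, eye_sep_m=0.065, view_dist_m=2.5):
    """Disparity needed to place a graphic at a chosen apparent depth."""
    return eye_sep_m * (1.0 - view_dist_m / target_depth_m)

# Place a score strap 0.5 m in front of a screen viewed from 2.5 m away
print(required_disparity(2.0))  # negative: the graphic floats in front of the screen
```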

3D graphics

3D graphics in a 2-D broadcast have been possible for a long time now, but in a stereoscopic 3-D space, the issues are different. When a graphics artist designs and animates a 3D strap for some football scores, for example, where that strap is positioned in the Z-plane hasn't mattered at all in a 2-D broadcast. Whether you place the object a long, long way away and zoom in, or very close and zoom out, the result for the viewer is similar. But when broadcasters start working in a stereoscopic 3-D world, where that “furniture” is placed in the Z-plane matters in relation to the stereoscopic depth of the video over which the graphics are being placed.

So how can broadcasters cope? Here are three possible ways to cater for this potential shot-by-shot variation.


First, introduce some form of metadata into the broadcast stream — or as a separate file, a bit like a subtitle file — that has depth information. The graphics device can then use that depth information to position the graphics in the Z-plane so they don't look unsuitable. For this to be truly widespread and economic, there would need to be a global standardization effort, and that will take time. SMPTE is continuing work on its 3D Home Contents Master Standard project.
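This first option could look something like the following sketch. The sidecar format, field names and safety margin are entirely hypothetical (no such standard exists yet): each cue simply records, for a span of time, how far forward the nearest video element sits, and the graphics device keeps its overlay a margin in front of that.

```python
from dataclasses import dataclass

@dataclass
class DepthCue:
    start_s: float           # cue start time in seconds
    end_s: float             # cue end time in seconds
    max_disparity_px: float  # nearest (most negative) video disparity in the shot

def parse_depth_cues(text):
    """Parse a hypothetical, subtitle-style sidecar file with one cue per
    line: '<start> <end> <max_disparity_px>', e.g. '12.0 18.5 -14'.
    """
    cues = []
    for line in text.strip().splitlines():
        start, end, disp = line.split()
        cues.append(DepthCue(float(start), float(end), float(disp)))
    return cues

def graphic_disparity_at(cues, t_s, margin_px=4.0):
    """Position the graphic a safety margin in front of the nearest video
    element active at time t_s, so live action never crosses the overlay."""
    for cue in cues:
        if cue.start_s <= t_s < cue.end_s:
            return cue.max_disparity_px - margin_px
    return 0.0  # no cue: keep the graphic at the screen plane

cues = parse_depth_cues("""
0.0 12.0 -2
12.0 18.5 -14
""")
print(graphic_disparity_at(cues, 14.0))  # -18.0 px: ahead of the nearest video element
```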

Second, analyze the incoming feed in real time and then make automatic adjustments based on that. Technology can achieve this, but it's expensive to incorporate into a graphics device and may well require a separate unit. This runs counter to another imperative across the industry: the reduction of the number of devices in the playout chain.
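As an illustration only, and not a description of any real product, the sketch below estimates the nearest disparity present in a live stereo pair by crude block matching; a device could then keep its graphics a few pixels in front of that estimate. A practical implementation would need far more robustness and, as noted, probably dedicated hardware.

```python
import numpy as np

def nearest_disparity(left, right, search=24, block=32):
    """Crude block-matching estimate of the nearest (most negative, i.e. most
    'in front of screen') horizontal disparity in a stereo pair of luma
    images. Purely illustrative of driving graphic placement from analysis.
    """
    h, w = left.shape
    best = 0
    for y in range(0, h - block, block):
        for x in range(search, w - block - search, block):
            ref = left[y:y + block, x:x + block].astype(np.int32)
            costs = [
                np.abs(ref - right[y:y + block, x + d:x + d + block]).mean()
                for d in range(-search, search + 1)
            ]
            d_best = int(np.argmin(costs)) - search
            best = min(best, d_best)
    return best  # most negative disparity found across the frame

def safe_graphic_disparity(left, right, margin_px=4):
    """Keep the overlay a few pixels in front of whatever the analysis reports."""
    return nearest_disparity(left, right) - margin_px
```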

Finally — and this is likely to remain the most feasible solution for some time — operators will have to manually adjust the graphics in a master control environment on a shot-by-shot basis. (See Figure 1.) Technology available today allows operators to adjust the graphics shot-by-shot in the Z-plane. Master control operators will need to be trained to do this, something that as a manufacturer we recognize. While it is a cost, it is also necessary to achieve the benefits of stereoscopic coverage.

There is another aspect to this that most people are only just beginning to think about, and that is the relationship between audio and stereoscopic 3-D. If the illusion of depth is being created, then the audio has to tie in with that illusion. If there are more distant objects, any associated audio will need to be quieter than that for objects further forward in order to enhance that depth of field. A typical example is a now-and-next graphic on-screen with an associated voice-over; that audio has to be in the same space as the graphic because it is the graphic that is “talking” to you. If there's then a graphic that flies in and a sound effect that goes with it, then that sound effect has to track the graphic in the Z-plane; otherwise, the viewer could be confused.
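As a naive illustration of this audio/depth link, the sketch below uses a simple inverse-distance model (roughly 6dB per doubling of apparent distance) to derive a gain offset for a voice-over or sound effect from the depth its associated graphic occupies. The model and the reference distance are assumptions for illustration; a real mix would be far more subtle.

```python
import math

def depth_linked_gain_db(graphic_depth_m, reference_depth_m=2.5):
    """Attenuate (or boost) an associated audio element as its graphic moves
    in the Z-plane, using a simple inverse-distance (about 6 dB per doubling)
    model relative to an assumed screen-plane distance of 2.5 m.
    """
    return -20.0 * math.log10(graphic_depth_m / reference_depth_m)

# A now-and-next graphic pushed back to 5 m pulls its voice-over down about 6 dB
print(round(depth_linked_gain_db(5.0), 1))   # -6.0
# A graphic flying forward to 1.25 m lifts its sound effect by about 6 dB
print(round(depth_linked_gain_db(1.25), 1))  # 6.0
```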

There are two consequences that spring from the necessary increase in the complexity of master control systems. Master control must support an even wider range of functionality — switching, graphics, stereoscopic, audio — while not increasing operational or capital costs. The control of this increased level of functionality needs to be available to operators in a familiar yet flexible form, allowing them to seamlessly incorporate new operations and tailor the control options to meet their requirements.

With the advent of stereoscopic broadcasting, the requirements of master control are being stretched ever further. In today's broadcast environment, multifaceted functionality, high levels of integration and flexible control are essential ingredients of a modern master control system.

James Gilbert is joint managing director of Pixel Power.