Video processing

When I discuss video processing with engineers, their specific needs vary widely. But one common thread does exist: video processing typically falls into two categories, one-time setup and operational. One-time setup is usually carried out during plant commissioning or studio setup. Although presets may change as programming shifts throughout the week, there are typically few modifications after the initial configuration. The second category of video processing allows for operational control. Back in the good old days, this generally entailed simple stuff like proc amp controls.

But the move to digital has complicated processing functions. Going 100-percent digital simplifies matters (notice I said 100 percent); it is the mixed-mode transition era that makes operations so difficult. Let me give you a simple example using audio. 

In analog, there were many “plant standards,” which made it rough to move audio from plant to plant. In digital, it’s simple: Full scale is 0dBFS — not +10dB, not even +4dB; all 1s are defined at 0dBFS. The trick here is to always keep the audio in digital — right from the mixing board. The minute it hits analog again, the signal needs to go through analog-to-digital converters, which is where the nightmare begins; both levels and phasing are bound to get messed up. The same goes for NTSC and PAL composite video; someone’s fingers are typically on the input proc before the converter to “fix up” the video before conversion to digital.
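To illustrate why digital levels travel so well between plants, the level of any sample relative to full scale follows directly from the definition of 0dBFS. Here is a minimal Python sketch, assuming 16-bit PCM where the peak sample value ("all 1s" magnitude) is 32767:

```python
import math

FULL_SCALE = 32767  # peak sample magnitude for 16-bit PCM (assumption)

def dbfs(sample: int) -> float:
    """Level of a peak sample relative to digital full scale (0 dBFS)."""
    return 20 * math.log10(abs(sample) / FULL_SCALE)

# Full scale is 0 dBFS by definition; half scale sits about 6 dB below it.
print(round(dbfs(32767), 1))  # 0.0
print(round(dbfs(16384), 1))  # -6.0
```

There is no plant-to-plant ambiguity in this math, which is exactly the point: the reference is baked into the format, not into house practice.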

Figure 1. Shown here is the processing paradigm shift. Today, more products incorporate processing functions, eliminating the need for intervention by engineers and leading to a paradigm shift in plant design.

So let’s zoom ahead a little to your future plant being 100 percent digital. Will you still need proc functions? Of course, technical directors will always want their show to look its best and will tweak a little here and there, but that is where it should stop. Once the show has been created, there is no reason to change it for standard television distribution, short of adding some localized branding. The key is in the setup. Normalize as much as possible within your plant, and give operations people the chance to fix bad feeds when needed (which should not be often, except in live situations).

Keeping this consistency in plant digital video means that video processing (including timing) can increasingly be of the one-time-setup variety. Technology advances have eliminated the need for an engineer to adjust every process continually, enabling hands-free operation — and creating a clear paradigm shift (as shown in Figure 1) in plant design.

The new breed of router incorporates video processing circuitry on the input side, enabling all outputs of a single source to be corrected together.

The server paradigm shift

Let’s look at this paradigm shift in more detail by using video playout servers as an example. Ingested video can be stored in many formats: 480i, 720p, 1080i. The trick here is to define at the output port what format is needed. Let’s say it is 720p. Video stored in 480i will be scaled, converted in color space and pillarboxed (or stretched) based on AFD codes and the user’s preset parameters, all automatic and hands-free. Software-based video processing running in real time on multiple CPUs in the video server makes this conversion possible. In some cases, branding and multiviewer functions are also incorporated into servers.
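The pillarbox arithmetic behind that conversion is simple to sketch. The function below is purely illustrative (not any server's actual API), and it uses a square-pixel 640x480 raster to stand in for a 4:3 SD picture:

```python
def fit_pillarbox(src_w, src_h, dst_w, dst_h):
    """Scale a source raster to fit a destination while preserving aspect
    ratio, padding the sides (pillarbox) or top/bottom (letterbox)."""
    scale = min(dst_w / src_w, dst_h / src_h)
    out_w, out_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst_w - out_w) // 2  # black bars left/right
    pad_y = (dst_h - out_h) // 2  # black bars top/bottom
    return out_w, out_h, pad_x, pad_y

# A 4:3 SD picture placed in a 720p raster: scaled to 960x720,
# with 160-pixel pillars on each side.
print(fit_pillarbox(640, 480, 1280, 720))  # (960, 720, 160, 0)
```

In a real server, the AFD code would select between this fit, a stretch, or a crop; the geometry itself is the easy part.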

The router paradigm shift

Multiviewers integrated directly into routers reduce the need for external video processors.

Functioning with a single routing switcher and one set of patch panels was inconceivable in the analog days. Although the switch to digital enabled broadcasters to keep audio and video together on a single coax, the advance came at a price. Looking again at the adoption curve, it was a royal pain to have outboard embedders, de-embedders and separate AES routers to handle the initial wave of serial digital — not to mention all the fun with processing the occasional audio breakaway.

For fully HD plants, having every signal needed (including timecode) on a single coax was a dream come true — especially when the new breed of routers started to have breakaway audio capabilities. The next logical step was adding video processing into routers. This type of processing circuitry is typically on the input side of the router, which enables all outputs of a single source to be corrected together and is akin to having a proc amp on incoming router feeds. Processing such as optical-to-electrical and electrical-to-optical conversion is also incorporated directly into the router as an I/O option.

Multiviewers typically feature extensive video-processing capabilities to scale dozens of pictures onto large flat-screen displays. The integration of multiviewer technology into routers takes advantage of internal routing and eliminates the need for external cabling. Many of these multiviewers are used for on-air operations employing external master-control switching. Master-control systems typically have to perform considerable processing in order to seamlessly mix signals before going on-air. As most of the sources typically come from the router, newer additions such as clean/quiet switchers incorporated directly in the router are ideal for simple switches or master-control backups.
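As a rough illustration of the multiviewer's scaling job, the hypothetical sketch below picks a grid and cell size for a number of sources on a 1080-line display; real multiviewers offer far more flexible, mixed-size layouts:

```python
import math

def mv_grid(n_sources, screen_w=1920, screen_h=1080):
    """Choose a rows x cols grid for n sources and the cell size
    each picture is scaled down into."""
    cols = math.ceil(math.sqrt(n_sources))  # near-square grid
    rows = math.ceil(n_sources / cols)
    return rows, cols, screen_w // cols, screen_h // rows

# 12 sources on a 1920x1080 display: 3 rows x 4 cols of 480x360 cells.
print(mv_grid(12))  # (3, 4, 480, 360)
```

Every one of those cells implies a real-time downscale, which is why multiviewers carry so much processing horsepower.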

The modular-platform paradigm shift

Literally thousands of different types of processing modules have been developed in the broadcast industry alone. Typically, these are standalone modules, perhaps with some type of centralized controller; they usually are connected with cables on the back of the frame to cascade functions. In some cases, manufacturers even produced mini routing-switcher cards to go in the frame.

Thanks to Moore’s Law, we are getting more and more complex processing functions on a single module — in many cases, even multiple channels of video running on the same module.
This sophistication has now stepped up yet another level. The designers of newer platforms that now combine traditional studio processing with transmission and compression functions have also figured out how to seamlessly integrate video routing and IP routing as well. External cables are not required to interconnect modules; it is all done with built-in configuration tools — providing total integrated solutions from a stack of modules.

The processing paradigm shift

Again, thanks to Moore’s Law of processing capability doubling roughly every two years, both video and audio processing are now distributed between software and hardware, operating in standalone and integrated applications. Proc amp controls exist in practically everything, but what about the proc amp knobs?

Gone are the days of running a panel per feed into the control room. Control systems are highly functional and intelligent — able to discern crosspoint information, source IDs and the metadata for each essence. The operator no longer needs to understand the complete signal path.
It is up to the plant designer to ensure that I/O loops are not created, such as one operator changing input levels on the router and another correcting it by changing the output levels. These are all standard processing details that need to be kept in mind to avoid loss of signal quality.
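That loop hazard can be made concrete with a small audit sketch (entirely hypothetical, not a product feature): sum the gain offsets applied along one signal path and flag paths whose net gain is zero yet still contain nonzero stages, a telltale sign that two operators are fighting each other:

```python
def audit_gain_path(stages):
    """stages: list of (name, gain_db) adjustments along one signal path.
    Returns (net_gain_db, fighting), where fighting is True when the
    net gain is ~0 dB but individual stages are still nonzero."""
    net = sum(gain for _, gain in stages)
    fighting = abs(net) < 0.1 and any(abs(g) >= 0.1 for _, g in stages)
    return net, fighting

# Router input bumped +3 dB; master control pulled -3 dB to compensate.
print(audit_gain_path([("router in", 3.0), ("mc out", -3.0)]))  # (0.0, True)
```

The signal ends up back where it started, but it has passed through two needless gain stages, each a small opportunity for quality loss.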

In summary, whether you are undertaking a complete new plant design, adding in a new area, performing an upgrade or just switching up to HD, it is worthwhile to take note of this video-processing paradigm shift. The shift moves bidirectionally; for example, while processing functionality has moved into video servers and routers, routing has also moved into processing platforms.

The complete switch to digital — along with Moore’s Law — has made this possible. Although plant designs continue to become more complex on a functional level, these new advancements are making installations and reconfigurations much simpler, and greener too!

Stan Moote is vice president of business development at Harris Broadcast.