As in many areas of technology, we are entering a time of particularly strong change in how video is processed. It is clear that analog processing is a dead issue, or nearly so. The exception, of course, is the processing that happens in a camera, where analog signals (light) are converted into varying voltages as the sensor is read out. In addition to properly handling the high-bandwidth analog signals that result, analog optical filters are typically employed to manage aliasing in the sampled output of the sensor. Beyond this stage, little analog processing remains, and that is a good thing, since few engineers these days are trained in the subtleties of analog video signal management and degradation. Even in cameras, most of the processing is done digitally.
Technology has also brought us to a new point in digital video processing. To be clear, I consider general-purpose software processing to be quite different from processing done in either hard-coded silicon or reprogrammable devices. Nor is this article focused on compression, which is done in yet another class of special-purpose programmable device. One could argue that compression is video processing, but the other two choices, general-purpose platforms and specialized graphics processors, offer interesting possibilities, each with its own characteristics.
General-purpose computing platforms can do a lot of video processing, as well as generate video for character generators, weather graphics and art creation. That is nothing new, of course, since general-purpose computers loaded with one or more video frame buffers have been available for a couple of decades now. Early Sun, SGI, Mac and PC platforms transformed portions of station workflow.
But with the power of multiple-core, multiple-processor systems, we now see a plethora of real-time systems based on computers that are in most respects quite ordinary. That much speed makes generating and processing video much easier. Such garden-variety computing platforms can be quite cost-effective and can serve as the basis of editing, transcoding, ingest and special effects systems, to name only a few. I do not mean to imply they are necessarily cheap, for, as in all things, performance comes at a price. One need only contrast a Yugo and a Porsche to see that there is no free lunch in technology. Both get you to your destination, but they are oh so different.
Some video processing today is virtually free. Consider for a moment the resizing and processing that is done in smartphones and tablets. Twenty-five years ago, performing a 2-D rotation on a moving image and rescaling it to a different screen size would have been possible only in digital effects systems costing 50 to 100 times what a phone costs today. I sometimes watch rented movies on my tablet on airplanes; the quality is astounding, and I cannot help but smile when I rotate the device and the picture stays level to the floor. The resizing engine doing the processing has all the same requirements the ADO of the '80s had, namely 2-D filtering of the image, resizing and smooth controls to make it feel and look “natural.” (Hint: Nothing in nature can do this.)
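To make the geometry of such a rotate-and-rescale concrete, here is a minimal sketch using inverse mapping, where each output pixel is traced back to a source location. The function name is my own, and nearest-neighbor sampling stands in for the multi-tap polyphase filtering a real resizing engine uses to control aliasing; this shows only the mapping, not production-quality filtering.

```python
import math

def rotate_resize(src, angle_deg, scale):
    """Rotate a 2-D image (list of rows) about its center and rescale it,
    using inverse mapping with nearest-neighbor sampling. Real resizing
    engines apply polyphase filters here; this is a geometry sketch only."""
    h, w = len(src), len(src[0])
    oh, ow = round(h * scale), round(w * scale)
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0        # source center
    ocy, ocx = (oh - 1) / 2.0, (ow - 1) / 2.0    # output center
    out = []
    for oy in range(oh):
        row = []
        for ox in range(ow):
            # Map each output pixel back into source coordinates:
            # inverse scale, then inverse rotation about the center.
            dy, dx = (oy - ocy) / scale, (ox - ocx) / scale
            sy = cy + (-sin_a * dx + cos_a * dy)
            sx = cx + (cos_a * dx + sin_a * dy)
            iy, ix = round(sy), round(sx)
            row.append(src[iy][ix] if 0 <= iy < h and 0 <= ix < w else 0)
        out.append(row)
    return out
```

A 0-degree call with a scale of 1.0 returns the image unchanged; a 90-degree call rotates it a quarter turn, which is exactly the tablet-rotation case described above, minus the smooth interpolated motion.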
Carrying this thought a bit further, the processing in portable electronics is similar to that in virtually every display device you can buy for video today. Every monitor and TV receiver has video processing that can resize images with excellent results. Those of us with enough grey hair can amuse ourselves by thinking of early standards converters, which occupied a couple of racks and did at most two conversions (525 to 625 and the reverse). Modern resizing engines have to handle multiple frame rates and scanning standards, often including the ability to display computer outputs directly. To do all that and still be affordable in a CE device requires a very high ratio of performance to price.
I think, though, it is more exciting to look under the hood of devices that use GPU power. Graphics processing units have been with us for a couple of decades, but only recently have we seen them used for general-purpose video processing. I did some consulting with a U.S.-based startup that wanted to do fairly high-end processing: noise reduction, dust and grain removal, cadence repair, standards conversion and other complex processes. By using GPUs as the engine, the company was able to cut development time significantly and push capability well beyond what some competitors with long-established pedigrees were providing. Indeed, GPUs are a key element in many real-time graphics processes, like weather graphics.
Perhaps the most interesting part of this technology is that our use of it is in its earliest stages of development. It is easy to imagine applications like virtual sets using GPU power, but what about master control station-in-a-box systems? With the kind of power GPUs can bring to bear on a processing problem, we might see rich 3-D graphics applications with fully configurable windows. Each station could create a unique look for its interstitials and transitions, setting itself apart in the marketplace.
By integrating GPU-enhanced devices in studio plants, processing that was once quite expensive might become just one more service called up as needed. Sophisticated processes that are not generally affordable today could become common, for instance, preprocessing to improve the output of compression hardware for both ATSC and ATSC M/H signals. Noisy ENG shots could be significantly cleaned up. Old movie packages could be scaled to HD and improved in quality while bit rates are optimized.
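As one illustration of the kind of preprocessing mentioned above, a motion-adaptive temporal filter can clean noise out of static areas of a picture (the noisy ENG case) while leaving moving detail alone, which in turn lets a downstream encoder spend its bits on real content rather than noise. This is only a sketch; the function name, blend factor and motion threshold are illustrative values I have chosen, and a real preprocessor would use far more sophisticated motion detection.

```python
def temporal_denoise(frames, alpha=0.5, motion_threshold=30):
    """Recursive temporal noise reduction over a list of 2-D frames.
    Each pixel is blended with its running average unless the
    frame-to-frame difference exceeds a motion threshold, in which
    case the new value passes through to avoid smearing motion."""
    avg = [row[:] for row in frames[0]]      # running average starts at frame 0
    out = [[row[:] for row in avg]]
    for frame in frames[1:]:
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                if abs(v - avg[y][x]) > motion_threshold:
                    avg[y][x] = v            # likely real motion: pass through
                else:
                    avg[y][x] = alpha * avg[y][x] + (1 - alpha) * v
        out.append([row[:] for row in avg])
    return out
```

Run over a static but noisy shot, the filter steadily averages the noise away; a large jump at any pixel is treated as motion and left untouched, the basic trade-off every motion-adaptive noise reducer manages.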
I have intentionally omitted discussion of processing that is more commonplace, though not to minimize its importance to the industry. Frame synchronizers, for instance, are a class of video processing most readers are thoroughly familiar with. Quite often, a frame sync will contain a resizing engine and use hard-coded software to make short work of functions that once required many separate boxes, including audio track assignment manipulation, color gamut control and color space conversion, sync delay adjustment, and other important processing.
John Luff is a television technology consultant.
Send questions and comments to: firstname.lastname@example.org