The time is approaching when all displays will process images, with many displaying more than one input at a time. In fact, like it or not, essentially all displays now have image processing, including frame rate, aspect ratio and pixel map changes. This arises not from a crying need in the professional marketplace, but rather from consumer applications requiring home displays to support a wide variety of input standards.
Beginning with the first flat-screen televisions almost 15 years ago, display processors became a necessary evil. Many of the early processors did only the minimum necessary to display SD and HD images on the same 16:9 display. Quality was, well, only so-so. But, as flat-screen technology has matured, so too has the technology of scaling engines that are employed in displays, as well as in other devices.
Out of that simple, or seemingly simple, addition to displays has sprung the entire class of multi-image display processors and, I might add, a host of changes coming for monitors themselves. The basic idea embodied in a display processor is to map an incoming pixel map (and sequence of time samples, or frame rate) into a different pixel map on the output. The simplest case is taking a smaller frame size, say 720 × 486 interlace, and mapping it onto a larger pixel map for display, perhaps a 1920 × 1080 interlace screen. The display processor interpolates new samples between the original input samples, effectively creating data for pixel positions that were never captured. Going the other way (HD to SD with the same frame rate and sampling structure, progressive or interlace) is the reverse, with pixels discarded to decimate the picture down to the smaller pixel map. Simple enough! But throw in more complicated problems — for instance, mixing interlace and progressive — and the display processor must be reprogrammable to do more jobs.
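The remapping described above can be sketched in a few lines. The hypothetical `resample_line` function below linearly interpolates one scanline to a new width; real display processors use multi-tap polyphase filters rather than simple two-point blending, but the core idea of mapping each output pixel back to a fractional input position and blending its neighbors is the same.

```python
def resample_line(samples, out_len):
    """Interpolate (or decimate) one scanline to out_len samples.

    Illustrative sketch only: a production scaling engine would use a
    multi-tap polyphase filter, but the mapping of each output pixel to
    a fractional source position is the heart of any display processor.
    """
    in_len = len(samples)
    if out_len == in_len:
        return list(samples)
    out = []
    for i in range(out_len):
        # Fractional source position for this output pixel.
        pos = i * (in_len - 1) / (out_len - 1) if out_len > 1 else 0.0
        left = int(pos)
        right = min(left + 1, in_len - 1)
        frac = pos - left
        # Blend the two nearest input samples.
        out.append(samples[left] * (1 - frac) + samples[right] * frac)
    return out

# Upconversion: 4 input pixels stretched across 7 output pixels.
print(resample_line([0, 30, 60, 90], 7))
# -> [0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0]
```

Running the same function with a smaller `out_len` than the input performs the decimation case the text mentions, discarding information as it maps down to the smaller pixel map.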
In a multi-image processor, the downconversion decimation is done on multiple inputs, which are then combined into a single output. The trick, and where the secret sauce is found, is in preserving image quality. Part of the display remapping is dealing with the overlapping spectra of the input and output sampling structures. When the filters are not designed well, the resulting aliasing can be quite objectionable indeed.
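The aliasing problem is easy to demonstrate. In the sketch below (my own illustration, not any particular product's design), a hypothetical `decimate` function reduces one scanline by an integer factor; a crude box average stands in for the multi-tap low-pass filters a real processor would apply. Dropping samples without filtering turns the finest possible alternating pattern into a flat field, which is exactly the kind of artifact poor filtering produces.

```python
def decimate(samples, factor, prefilter=True):
    """Decimate one scanline by an integer factor.

    With prefilter=True, each output pixel is the average of a block of
    `factor` input pixels (a crude box low-pass filter). With
    prefilter=False, samples are simply dropped, and detail above the
    new Nyquist limit folds back into the picture as aliasing.
    """
    if not prefilter:
        return samples[::factor]
    return [
        sum(samples[i:i + factor]) / factor
        for i in range(0, len(samples) - factor + 1, factor)
    ]

# The finest vertical-line pattern: alternating black and white pixels.
detail = [0, 255] * 8

# Naive 2:1 decimation keeps only the black pixels; the pattern
# vanishes into a flat black field instead of averaging to gray.
print(decimate(detail, 2, prefilter=False))  # [0, 0, 0, 0, 0, 0, 0, 0]

# Filtering first preserves the scene's average brightness.
print(decimate(detail, 2))  # [127.5, 127.5, ..., 127.5]
```

The same folding of high-frequency detail happens in two dimensions at once when a multi-image processor shrinks several full-resolution inputs into small windows, which is why filter quality is where the products differentiate themselves.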
This technology is not new. The first digital effects units had to have multiple-tap filters to allow variable image transforms. They also included geometric effects, viewing a frame displaced in position and angle from the dead-center, head-on mapping that happens in most display processors. We all have direct visceral experience with display processors that can be reprogrammed on the fly in our pockets — in the form of smartphones. Think critically about what the display processor in your phone is doing when you rotate the display and the picture resizes and scales automatically, and then think about the Ampex ADO of 20 years ago.
So, these systems now take multiple images, scale each with a discrete individual processor to a potentially different size, and feed the results to a combiner that delivers a single composite image to the output device. Fair enough, but is that all? Not today. We expect to have the ability to pick the background images onto which we place the video windows. We also expect to be able to place names on each window, along with tally information that must be decoded from data streams derived from production and routing switchers. But in recent years, display processors have had to handle lots of other information as well.
Multi-image systems now are expected to decode closed captions (and/or subtitles) and to display level bar graphs for embedded and discrete audio channels. They also are expected to decode other metadata such as V-chip ratings; frame size and frame rate standards; and perhaps emergency alerts, PSIP data and AFD codes.
Some manufacturers have chosen to add support for waveform displays, which blurs the line between a “waveform rasterizer” and a multi-image processor. A rasterizer may well provide more than one image at a time, and it is likely in the future that a combination of waveform generation and multi-image processing will allow a video operator to have camera repeat monitors and the waveforms associated with them all in one display processor output. That saves real estate in a monitor wall and reduces the amount of wiring needed, which is a double win in my book.
Multi-image processing systems can support many outputs as well. By combining a routing switcher internally, the user can have the flexibility to reprogram a monitor wall at will — when a new SNG truck signal is received, for instance — even during a show. The size of the router is arbitrary, of course, as is the number of outputs that can be supported. Some systems support literally dozens of outputs and many more than a hundred inputs.
For some applications, this has resulted in tight integration between multi-image processors and external routing switchers. By taking outputs directly from the crosspoint matrix in a router, it is possible in some cases to feed a display system from the router without increasing the physical size of the routing matrix. Doing this saves money and allows interconnection with multiple digital streams. The display processor appears to the router like another control panel making requests to switch outputs. This tight integration in some cases is between products from a single manufacturer, though two manufacturers can easily weave their products together seamlessly if customers strongly encourage them.
Finally, some display manufacturers have begun tiptoeing into the multi-image world directly. One manufacturer showed a monitor at the Hollywood Post Alliance Technology Retreat in February that had both rasterizer and direct inputs tightly integrated into cards resident inside the monitor. It would not be surprising to see further integration of scaling technology inside professional monitors, and perhaps consumer sets as well.
John Luff is a television technology consultant.
Send questions and comments to: firstname.lastname@example.org