Does anyone out there still have a dot matrix printer? It seems they have gone the way of the horse-and-buggy and round tube TVs, a term that the consumer electronics industry is using to put the last nail in the shipping crates for CRT-based displays.
In 1984, I bought my first computer and first printer, a dot matrix machine that could print at 72 dots per inch (DPI), which also happened to be the resolution of the original Macintosh display. The term "what you see is what you get" (WYSIWYG) was coined to describe the benefits of a graphical user interface paired with a printer that could reproduce what you saw on the computer screen. Unfortunately, the results had more in common with the mechanical TV demonstrated by John Logie Baird in 1926 than with a modern ink jet or laser printer.
Much the same can be said about the evolution of TV from the early experimental days through the worldwide deployment of 525-line (NTSC) and 625-line (PAL/SECAM) systems to today's SDTV and HDTV systems with 480, 576, 720 and 1080 lines. We are only a decade into the era of HDTV, yet some folks are already talking about moving beyond HDTV. In fact, the consumer electronics industry is now promoting 1080p displays, despite the fact that nobody is broadcasting 1080 at 50p or 60p, or is planning to in the near future.
The trouble with dot matrix printers, and legacy TV systems, is that they did not offer enough resolution to deliver high-quality, alias-free images. Despite this shortcoming, however, TV has prospered, based primarily upon the content that is delivered, rather than the quality of the pictures. While digital television can deliver outstanding quality pictures, the unfortunate reality is that what we watch today is often compromised by the decision of program distributors to focus on quantity rather than quality.
So what gives with the seeming disconnect between the quality of the images that go into the transmitter and the push for displays that offer far more resolution than a broadcast can deliver? And why are people talking about even higher resolutions for TV image acquisition?
Decoupling and oversampling
Baird's television system used a camera with a rotating disk to sample the image; the display used a similar rotating disk to recreate those samples. The major advancement that made commercial TV a practical reality was the development of the round tube TV, aka the scanning CRT display. The camera electronically scanned the image on a pickup tube from top to bottom, and the display painted this image onto the surface of the round tube display. We could delve deeper into interlaced vs. progressive scanning, but for the purpose of this discussion, the important takeaway is that everything was tightly coupled: The camera and the receiver/display all operated synchronously, using the same scanning parameters.
With digital television, all that has changed. Image acquisition (the camera), image processing (production, master control and transmission) and reception/display are completely decoupled. Today, a TV program may be displayed on a 65in HDTV display, a notebook computer, an iPod or a cell phone. Low-quality images can be viewed on high-quality displays, and high-quality images can easily be encoded for delivery at multiple resolutions.
In most cases, the limiting factor in delivered image quality is not the camera or the TV. It is the bandwidth of the channel that is being used to deliver the compressed bits to the receiver, which in turn must decode these bits and present them on a display — any display, no matter the resolution.
A modern digital TV receiver must deal with video formats at multiple resolutions, converting them to the resolution of the local display for presentation to the viewer. And that DTV display may be connected to additional sources of bits, which have different characteristics from a video signal, like video game consoles and multimedia PCs. Some TVs are even shipping with integrated Web browsers that bring the world of Internet content to the big screen in the family room.
There is one golden rule when it comes to the creation and delivery of high-quality imagery, whether it is video, digital photography or computer-generated images: It is desirable to start with higher quality than you plan to deliver. The term for this is oversampling. In layman's terms, it is the difference between a dot matrix printer that creates characters and images at 72DPI versus a laser printer that produces characters and images at 300DPI. More samples allow the digital imaging system to reproduce higher frequencies without aliasing artifacts.
Oversampling is highly beneficial in cameras as it helps to eliminate sampling errors and minimize the effect of noise on the sampling process. When we oversample and then resample to a lower resolution, we improve the overall accuracy of the image. This pays significant dividends when the imagery is digitally compressed for emission. Reducing the number of samples that must be encoded, along with improving the accuracy of the samples, makes it possible to deliver higher quality images in bandwidth-constrained channels.
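The noise-averaging benefit of oversampling can be sketched numerically. In this toy example (the ramp "scene," the sample counts and the noise level are all invented for illustration), one scan line is sampled at 4x the delivery resolution and averaged down, then compared with a line sampled directly at delivery resolution:

```python
import random

def sample_line(n_samples, noise=0.1):
    """Sample one scan line of a synthetic scene (a simple brightness ramp),
    with random sensor noise added to every sample."""
    return [i / n_samples + random.uniform(-noise, noise)
            for i in range(n_samples)]

def downsample(samples, factor):
    """Resample to a lower resolution by averaging groups of samples.
    Averaging cancels much of the random noise within each group."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples), factor)]

def mean_abs_error(samples):
    """Average deviation from the noise-free ramp at this resolution."""
    n = len(samples)
    return sum(abs(s - i / n) for i, s in enumerate(samples)) / n

random.seed(42)
direct = sample_line(480)                       # sampled at delivery resolution
oversampled = downsample(sample_line(1920), 4)  # 4x oversampled, then resampled

print(mean_abs_error(direct))       # noisier samples
print(mean_abs_error(oversampled))  # more accurate: noise averaged out
```

The oversampled-then-resampled line lands measurably closer to the true scene, which is exactly the kind of cleaner, more compressible sample the encoder wants to see.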
You do not want to push the sampling system and the compression system to the limits. But this is what many broadcasters have chosen to do with HDTV. By selecting the interlaced 1080i format, they are compromising the acquisition system in two ways. First, interlace is a spatial/temporal undersampling system. This adds stress to the MPEG compression system, which is optimized for frame-based images (progressive scan). Second, most current camera designs cannot oversample the 1920 × 1080 format. Many of the less expensive cameras use sensors that only have 960 to 1440 samples per line. This results in less accurate samples that require more bits to compress.
Progressive image acquisition and processing improve the quality of the samples, which reduces stress on the MPEG encoder. And the lower sample rate leaves more room in the 19.3Mb/s ATSC payload to handle peak bit-rate requirements. Oversampling to create 720p further improves the image quality.
In the decoupled world of DTV, oversampling plays a significant role in display quality as well. One of the major limitations of NTSC and PAL (in addition to the use of interlace) is that the scanning lines become visible as the display size increases, and the artifacts associated with interlace accentuate the perception of the scanning lines in the raster. NTSC was designed to produce a sharp picture on a 19in display viewed at seven picture heights. As larger CRT displays became more common, the perception of both image artifacts and the raster was more apparent. The move to HDTV was driven primarily by the need for more samples to create sharp images on larger displays.
Today we find many display resolutions in the marketplace, some of which are different from any of the ATSC formats. The good news is that virtually all flat-panel and projection displays being sold today offer progressive scanning. But the consumer electronics industry is pushing consumers to purchase displays with 1080p resolution, often in display sizes where the viewer will not be able to resolve this level of detail.
As a rule of thumb, 1280 × 720 resolution is adequate for displays with diagonals up to 50in; 1920 × 1080 is only needed for displays larger than 50in at typical entertainment viewing distances. But that excess resolution is not necessarily a bad thing; it helps to suppress the perception of the display raster, and it may be useful for other applications, such as viewing a Web page on the TV display.
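This rule of thumb can be checked with a little geometry. Assuming a viewer with 20/20 vision resolves roughly 60 pixels per degree of visual angle (a common figure, not from this column), the sketch below computes how many vertical pixels a display packs into each degree at a typical 9ft living-room viewing distance:

```python
import math

def pixels_per_degree(lines, diagonal_in, distance_in, aspect=16/9):
    """Vertical pixels the display packs into one degree of visual angle
    at the given viewing distance (all dimensions in inches)."""
    height_in = diagonal_in / math.sqrt(1 + aspect**2)   # picture height
    degrees = 2 * math.degrees(math.atan(height_in / 2 / distance_in))
    return lines / degrees

# Typical living-room viewing distance of about 9ft (108in):
for diag, lines in [(42, 720), (42, 1080), (65, 1080)]:
    ppd = pixels_per_degree(lines, diag, 108)
    print(f"{diag}in, {lines} lines: {ppd:.0f} pixels/degree")
```

At 9ft, a 42in 720-line picture already exceeds the ~60 pixels/degree the eye can resolve; the extra detail of 1080 lines only becomes resolvable on much larger screens or at closer viewing distances, which is consistent with the 50in rule of thumb above.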
In this, modern TVs are much like computer displays. They can present properly sampled video images with high quality, and they can present computer-generated samples that would cause aliasing in video images. Thus a smaller (e.g., 32in) 1080p display may have more resolution than the viewer can see when sitting at five to seven picture heights, but that resolution may be useful when viewing a Web page — albeit by moving closer to the screen to see the additional details.
We are also seeing HDTV displays that oversample in the temporal domain as well. In Europe, 100Hz displays have been common for many years to help minimize the perception of 50Hz flicker. Many manufacturers are now selling 120Hz HDTV displays, with image processing engines that can improve the quality of 24p source images.
In the United States, we have lived with improper presentation of 24p film since the inception of TV service at 60Hz, which became 59.94Hz when color was added. Mapping 24 frames onto 60 fields requires a frame repetition technique known as 3:2 pulldown: One film frame is repeated for three field periods, the next for two field periods, and so on. In a theater, by contrast, the projector typically operates at 48Hz or 72Hz, displaying each frame two or three times evenly. The uneven 3:2 cadence disrupts motion, adding to the problems that exist with the low 24p acquisition rate.
New 120Hz HDTV displays allow each film frame to be displayed five times, eliminating the uneven motion rendition of 3:2 pulldown. But these displays go a step further, creating in-between frames using sophisticated motion-compensated prediction techniques. The result is smoother motion than one would see in a theater.
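The two cadences are easy to compare numerically. In this minimal sketch, frame numbers stand in for real images:

```python
def pulldown_32(frames):
    """3:2 pulldown: repeat film frames for 3, 2, 3, 2, ... field
    periods to map 24 frames/s onto 60 fields/s. The cadence is uneven:
    alternate frames are held for different lengths of time."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

def repeat_5(frames):
    """At 120Hz, every film frame is shown exactly five times:
    24 x 5 = 120, so the cadence is perfectly even."""
    return [shown for frame in frames for shown in [frame] * 5]

film = list(range(24))            # one second of 24p film frames
print(len(pulldown_32(film)))     # 60 fields: 12 x (3 + 2)
print(len(repeat_5(film)))        # 120 frames, all held equally long
```

The arithmetic is the whole story: 60 is not an integer multiple of 24, so some frames must be held longer than others, while 120 divides evenly and every frame gets identical screen time.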
There is a great deal of interest in new imaging systems that offer resolutions well beyond today's HDTV. For applications where the imagery is presented on very large screens, this is both desirable and practical. For a consumer television system, however, there are many things we need more than increased resolution.
First and foremost, we need to deliver high-quality samples, even if this means reducing the resolution slightly to make the content fit the channel's bit budget. The vast majority of digital content offered today is significantly over-compressed. Needless to say, trying to squeeze 1080 at 60p into an existing ATSC channel is only going to worsen the delivered image quality.
Next, we need to get rid of interlace, both as an image acquisition format and as an emission format. And finally, we need to take full advantage of the benefits of oversampling, both during image acquisition and at the display. If we do these things, we will move beyond the limitations of what is rapidly becoming a legacy HDTV system.
Craig Birkmaier is a technology consultant at Pcube Labs.
Send questions and comments to: firstname.lastname@example.org