Cameras can be thought of as optical analog-to-digital (A/D) converters these days. Light passes through the lens as waves, and the sensor converts photons (arguably acting like particles) into stored electrical charge. That is a sampling process, pure and simple. More specifically, it is a three-dimensional sampling technology: spatial samples taken at the frame (or field) rate, in a succession of pictures that can only approximate the original scene.
Unlike compression, the goal of this kind of sampling is to reproduce as accurate a rendition of the scene as possible. MPEG is generally coded at 8 bits per sample, but cameras commonly sample at 10 to 14 bits. This has a clear and direct effect on the capture range and noise floor of the camera. More is better: better contrast, more detail in the light and dark areas, lower noise, more accurate color rendition, and more natural-looking pictures. Though a camera cannot reproduce the full range of light values possible (think starlight to nuclear explosion), partly because the display cannot accomplish the other end of the acquisition/display process, a camera with more potential range goes a long way toward making natural images look, well, natural.
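To make the bit-depth point concrete, here is a minimal sketch of the arithmetic: each extra bit doubles the number of distinguishable code values and adds roughly 6.02 dB of theoretical dynamic range for an ideal converter. The helper name is my own, and the figures are idealized quantization limits, not measurements from any particular camera.

```python
# Each added bit doubles the code values and adds ~6.02 dB of ideal
# quantization dynamic range. Illustrative only; real cameras are
# limited by sensor noise well before these theoretical ceilings.

def quantization_stats(bits):
    levels = 2 ** bits               # distinct code values
    dynamic_range_db = 6.02 * bits   # ideal converter dynamic range
    return levels, dynamic_range_db

for bits in (8, 10, 12, 14):
    levels, dr = quantization_stats(bits)
    print(f"{bits:2d} bits -> {levels:6d} levels, ~{dr:.1f} dB")
```

The jump from MPEG's 8 bits to a 14-bit camera front end is thus a factor of 64 in code values, which is where the extra highlight and shadow detail comes from.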
In a way, many modern camera sensors are similar to oversampling audio A/D converters. They use more pixels in the sensor than the image format requires. Individual output pixels are formed by combining sensor pixels, either at the time of A/D conversion or in post processing. This allows a camera with more pixels to support a multitude of scan standards. For instance, one manufacturer's sensor is 4320 × 1920 pixels (lines × samples per line). By combining six samples vertically into one, a 720-line format is created; combining four vertical samples into one makes a 1080-line format. The same is true in the horizontal direction: the 1920 native samples can be used as-is, or three samples can be combined into two output samples to yield a 1280-sample structure, creating a native 1280 × 720 image.
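The pixel-combining arithmetic above can be sketched in a few lines. The helper name is my own invention; real cameras do this binning in hardware or DSP, but the ratios work out the same way.

```python
# Sketch of the binning arithmetic for the 4320-line x 1920-sample
# sensor described in the text. Hypothetical helper, not a camera API.

def binned_format(native_lines, native_samples, v_combine, h_ratio):
    """h_ratio is (input_samples, output_samples), e.g. (3, 2)."""
    out_lines = native_lines // v_combine
    out_samples = native_samples * h_ratio[1] // h_ratio[0]
    return out_lines, out_samples

# 6 vertical samples -> 1 line; 3 horizontal samples -> 2 samples
print(binned_format(4320, 1920, 6, (3, 2)))  # (720, 1280)
# 4 vertical samples -> 1 line; horizontal used 1:1
print(binned_format(4320, 1920, 4, (1, 1)))  # (1080, 1920)
```

One sensor, two broadcast formats, no format-specific silicon.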
Cameras intended for digital cinema production do the same thing, with the result that varying resolutions and aspect ratios can be created without building purpose-specific cameras. It would be prohibitively expensive to create a multitude of chips for each output format, and from a practical standpoint, an agile camera is a much smarter approach. From a manufacturer's perspective, it may be the only way to make cost-effective cameras. From a user's viewpoint, it allows some future flexibility in how the camera is used.
It is important to realize that we have begun to approach the limits of physics in the design of practical HD cameras. A 2/3in sensor is actually about 11mm diagonal, or roughly 5.4mm × 9.6mm. On that chip, each pixel is about a .005mm square. To achieve full resolution, an HD camera system, lens and camera, must produce about 81 line pairs per mm. Now consider the future standard for UHDTV, developed by NHK in the last decade and standardized by SMPTE recently. The picture is a whopping 7680 × 4320 pixels! A 2/3in sensor would now have pixels only a little over .001mm, one-quarter of the size we have today. And the lens/camera combination must deliver nearly 400 line pairs per mm to achieve the same performance. That would be a stunning optical system indeed; otherwise, the imager must become much larger to allow practical optics to be used.
But let's look at the other end of the scale. Consumer camcorders have many different sensor sizes, from 1/2in on high-end crossover cameras to as small as 1/6in, with many in the range of 1/3in. Extrapolating the numbers above means that on a 1/3in sensor, the pixels for 1080 images are about .0025mm square, and the lens system performance would have to double to achieve the same results. But the truth is that inexpensive cameras use inexpensive optics. Performance has to suffer. The pictures are demonstrably better than those from SD cameras, but cannot equal the quality of a studio camera and lens.
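The geometry in the last two paragraphs can be checked with simple division. A caveat on the numbers: the code below computes the ideal Nyquist limit (one line pair per two pixels), which comes out somewhat higher than the practical figures quoted in the text, since those allow for real-world lens and system losses. The sensor widths are the approximations used above.

```python
# Back-of-envelope sensor geometry: pixel pitch and ideal Nyquist
# resolution. Widths are approximate active-area figures from the text.

def pixel_pitch_mm(width_mm, h_pixels):
    return width_mm / h_pixels

def nyquist_lp_per_mm(pitch_mm):
    # One line pair spans two pixels at the ideal sampling limit.
    return 1.0 / (2.0 * pitch_mm)

for label, width_mm, h_pixels in [
    ("2/3in HD (1920)",    9.6, 1920),
    ("2/3in UHDTV (7680)", 9.6, 7680),
    ("1/3in HD (1920)",    4.8, 1920),
]:
    pitch = pixel_pitch_mm(width_mm, h_pixels)
    print(f"{label}: pitch {pitch * 1000:.2f} um, "
          f"Nyquist limit {nyquist_lp_per_mm(pitch):.0f} lp/mm")
```

The UHDTV case lands at the "nearly 400 line pairs per mm" the text cites, and the 1/3in case confirms the doubling of required lens performance.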
This gives rise to an interesting dynamic. In order to produce higher volumes of HD content, the natural order of economics moves people toward less expensive hardware. The resulting content cannot match the quality of that shot with more expensive hardware. So in the rush to get more HD content to air, we are in fact often choosing to produce less than full HD performance. News cameras might be high quality, or they might be consumer crossover cameras. One network recently announced it was giving reporters HD consumer cameras barely bigger than a cell phone so they can capture images when a crew is not present. The images might technically be 720p or 1080i, but the resolution will clearly suffer a dramatic degradation. That is acceptable because the content may be more impactful and therefore worth the trade-off. Intercutting HD and pseudo-HD images creates a change in quality discernible to average viewers. The key is that consumers understand not to expect the same production approach as is taken for the Super Bowl, so the difference is readily accepted.
Nonetheless, the rise of consumer crossover products is important to understanding the camera marketplace. It is clear that DVCPRO, which was based on the research done for the consumer DV camcorder marketplace, changed the way camera technology is developed and deployed worldwide. The technology arises in the professional market in no small measure because it can leverage the research paid for by a high-volume consumer product delivery chain.
In the last year, we have seen the advent of digital SLRs with video capture capability, extending the range of options to a new class of cameras. Most are oversampled, some with up to 21 million pixels (on a single chip, rather than the three sensors many video cameras use). But the range of optical choices creates an interesting dynamic of its own. I wonder how we will fit a 7in viewfinder to a 1.8lb camera? Will it work on a Steadicam? Production professionals have already found innovative ways to use this interesting possibility and will continue to do so. NBC's “Saturday Night Live” shot a recent segment entirely with SLRs. Can news crews be far behind?
John Luff is a broadcast technology consultant.
Send questions and comments to: firstname.lastname@example.org