
Camera imaging

Capturing images has always started at the most elemental level, in fact all the way down to elementary particles. For nearly 200 years, chemical photography relied on the effect that photons have on chemical bonds. Moving pictures were no different: they were just a series of short exposures played in rapid sequence, but still captured the same way. The fact that the camera shutter is open only a brief time is a form of sampling, averaging the image over the length of the exposure.

Modern camera sensors

Although we might want to think of modern and highly sophisticated electronic cameras as fundamentally different, they still rely on the effect that photons have on a sampling medium. The big advantage of electronic sensors is that they don't require out-of-camera processing to access the latent image. Instead they extract the image, erase the sensor's memory of the previous integration period and get ready for another image, all in a pretty short period of time.

This quantum conversion is critical to practical imaging. For decades, chemists worked to make better emulsions for motion picture film, with great success. The best films are both sharp (which one might think of as sensor density) and sensitive (quantum efficiency). Of course, the chemistry is much more complex than we mere mortals can understand, as the shape and size of the grains of emulsion have a huge effect.

In electronic sensors, the chemistry is no less complicated, and the physical arrangement of the cells in an array sensor can greatly affect the way visible light is converted. Just as with film, the light is integrated over an exposure period, though it is never interrupted by a mechanical shutter. Unlike film, which produces an essentially continuous frequency spectrum that falls off at higher spatial frequencies, electronic sensors have discrete sample positions in regular patterns. Scene detail above the sampling frequency thus interferes with the sample grid, appearing as spurious low-frequency patterns. To prevent such aliasing, optical low-pass filters are inserted ahead of the sensor.
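The folding effect of a regular sample grid can be sketched numerically. This plain-Python sketch (illustrative values, not any real sensor's pitch) samples a 7-cycle test pattern at 10 samples per unit, above the Nyquist limit of 5, and shows that the result is indistinguishable from a 3-cycle pattern:

```python
import math

fs = 10            # samples per unit distance (stand-in for sensor pitch)
nyquist = fs / 2   # highest frequency the sample grid can represent

def sample(freq, n_samples=20):
    """Sample a sinusoidal test pattern on the regular grid."""
    return [math.sin(2 * math.pi * freq * n / fs) for n in range(n_samples)]

high  = sample(7)   # detail above Nyquist (7 > 5)
alias = sample(-3)  # the folded frequency: 7 - fs = -3

# After sampling, the two patterns produce identical values.
max_diff = max(abs(a - b) for a, b in zip(high, alias))
print(max_diff)  # ~0: the 7-cycle detail masquerades as 3-cycle detail
```

This is exactly what the optical low-pass filter prevents: detail above the sensor's Nyquist limit never reaches the cells, so no false low-frequency pattern can form.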

Even more interesting, film is inherently resolution-independent. It works at SD and HD resolutions, with its limits set as much by the optics as by anything else. Electronic sensors are optimized for one resolution, though interesting approaches allow processing groups of cells into virtual pixels that do not exist on the physical sensor. Thomson, for example, uses a sensor in some of its cameras that is 1920 × 4320. By grouping cells vertically in groups of four, it can get 1080 lines (4320 divided by four). By using groupings of six vertically, it can get 720 lines (4320 divided by six). Of course, horizontally it needs to combine cells from the 1920 to get the 1280 needed for 720p images (not a neat integer ratio at 1.5) while maintaining aspect ratio, but the concept is quite clear.
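The vertical grouping described above is simple to sketch. Assuming the cell array is held as a NumPy array (a sketch only, not Thomson's actual processing), averaging groups of rows yields both line counts; the horizontal step is a non-integer resampling and is left out here:

```python
import numpy as np

CELLS_W, CELLS_H = 1920, 4320  # cell array dimensions from the article

def bin_rows(cells: np.ndarray, group: int) -> np.ndarray:
    """Average vertical groups of sensor cells into virtual pixels."""
    h, w = cells.shape
    assert h % group == 0, "group size must divide the row count"
    return cells.reshape(h // group, group, w).mean(axis=1)

sensor = np.random.rand(CELLS_H, CELLS_W)  # stand-in for one exposure

hd1080 = bin_rows(sensor, 4)  # 4320 / 4 = 1080 lines
hd720  = bin_rows(sensor, 6)  # 4320 / 6 = 720 lines

print(hd1080.shape, hd720.shape)  # (1080, 1920) (720, 1920)
```

Because the horizontal factor for 720p is not an integer, that direction calls for a filtered resampling stage rather than simple cell grouping.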

What happens to the low-pass filter in such a circumstance? Seemingly, it cannot be optimized. In reality, however, the number of original samples has not changed, and any filtering needed to correct the image data can be done as part of the management of the combined pixels in the signal processing and scaling. This moves the issue from the physical domain to the abstracted electronic domain.
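Filtering in the electronic domain might look like this in outline: a small low-pass kernel applied before samples are dropped, so the combined pixels carry no frequencies beyond their own Nyquist limit. This is a plain-Python sketch with an assumed [1, 2, 1]/4 kernel, not any camera's actual pipeline:

```python
def lowpass_121(line):
    """Apply a simple [1, 2, 1]/4 low-pass kernel along one scan line."""
    padded = [line[0]] + list(line) + [line[-1]]  # replicate edge samples
    return [(padded[i] + 2 * padded[i + 1] + padded[i + 2]) / 4
            for i in range(len(line))]

def decimate(line, factor):
    """Filter, then keep every 'factor'-th sample."""
    return lowpass_121(line)[::factor]

scan_line = [float(i % 2) for i in range(16)]  # worst case: full-rate detail
half_rate = decimate(scan_line, 2)

print(len(half_rate))  # 8 samples
# Without the filter, the alternating 0/1 pattern would alias; with it,
# the pattern is smoothed toward its mean before samples are dropped.
```

The point is the ordering: filter first, discard samples second, which is the digital analogue of the optical low-pass filter in front of the sensor.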

Television camera sensors need to store photons, or rather the electrons generated by photons striking the sensor. Those electrons must be transported back to the signal electronics, which we often assume are digital in modern cameras. But fundamentally, television cameras are analog devices with digital signal processing on the tail end of the sensor. This arises directly out of the analog nature of the light and of the lenses used to capture the scene.

We choose to process digitally for all the right reasons, including the flexibility it offers and the ability to transmit the image over long distances without degradation. Doing so with HDTV signals in an analog format would limit high-quality links to a few dozen feet, and maintaining quality would be nearly impossible compared with digital plants. But inside the front end of cameras lurks an analog core.

Amazingly, modern cameras have become so ubiquitous that HD cameras for POV applications now cost less than $2000 with an integral lens. Chip sizes are even more of a shock in today's camera systems. Sensors vary from barely 1/6in to a full inch in size and even larger for some D-cinema cameras. Barely a quarter goes by without a new format's introduction.

Image quality is so good in high-end applications that the limiting factor is no longer the sensor or the electronics behind it, but rather the optics in front of it. Simply put, even diffraction-limited optics cannot match the capabilities of some of the cameras they attach to today. As cameras continue to improve in performance and features, optics will become even more expensive, as optical engineering reaches the point of diminishing returns.


Also affecting modern camera systems is renewed interest in interconnection strategies. At the World's Fair in 1939, when David Sarnoff kicked off the first demonstration of live television in a public venue in North America, the interconnection was a few dozen wires with complex signals driving the scanning of the image tubes directly. By the 1970s, the number of conductors was up to 81 for a color camera and 101 for some cameras made in Europe, carrying extremely high voltages.

A huge improvement came with the introduction of triax cameras, allowing a simple connector and camera cables of thousands of feet without degradation. But HD cameras make triax a difficult proposition. With the introduction of the standardized SMPTE 311/304 fiber camera cable and connectors, much was gained.

Transmission distances now can count in kilometers instead of the meters available seven decades ago, with no change in signal quality. Even HD viewfinder return signals are practical because of the ability to send standardized digital HD signals over standardized media. One might ask if compression will ever become so good that we can transmit a camera's pictures in compressed format with lower bandwidth, while at the same time wondering how we will achieve interconnection of the 8K cameras now in early development for D-cinema and other purposes.

John Luff is a broadcast technology consultant.
