CCDs vs. CMOS

High-quality broadcast television cameras have been using CCDs as imaging devices for more than 20 years. Now on the market is an HD systems camera based on a CMOS sensor, a technology that has finally come of age in terms of image quality. It seems a good moment to consider the relative merits of modern imagers.

HD cameras are available with frame transfer CCDs, interline transfer CCDs and now CMOS imagers. Each sensor type has its good points and some potential weaknesses. (See Figure 1.) How do manufacturers make the choice, and what should you consider when shopping for new cameras?

Before looking at the different types of CCDs, let's spend some time on the basic functionality. CCDs are called charge coupled devices because they have two sets of components that are coupled.

The photosensitive element captures photons and converts them into an electrical charge. This is a linear effect: The more photons that hit the photosite, the greater the charge, in direct proportion. Once the photons are captured (after a frame, in video terms), the charge from each photodiode is transferred to its coupled readout register, from which it is read by the signal processing circuitry.
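
To make that proportionality concrete, here is a minimal Python sketch; the quantum-efficiency figure is an illustrative assumption, not a value taken from any particular sensor.

    # Minimal sketch of the linear photon-to-charge relationship.
    ELECTRON_CHARGE = 1.602e-19  # coulombs per electron

    def photosite_charge(photons_captured, quantum_efficiency=0.5):
        # Doubling the photon count doubles the charge: the direct
        # proportionality described above.
        electrons = photons_captured * quantum_efficiency
        return electrons * ELECTRON_CHARGE

    print(photosite_charge(10_000))  # ~8.01e-16 coulombs
    print(photosite_charge(20_000))  # twice the charge: ~1.602e-15 coulombs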

There are two important points to remember here. First is that the readout register is effectively the same material as the photodiode, so it will be affected by light falling on it. Second, the charge is transferred from one to the other. Once the charge is shifted, the photodiode is empty, and the process starts again.

Frame transfer CCD

In a frame transfer CCD, as its name suggests, the whole frame is transferred from the light-sensitive area of the chip to a storage area on the chip. This gives over the largest possible surface area to light capture, which delivers the best dynamic range, freedom from smearing and good aliasing performance.

The disadvantage is that it needs a mechanical shutter to ensure that no light hits the chip while the transfer is taking place. This is a manufacturing issue that has had to be addressed, and it adds cost and complexity to the camera front end. But it does produce the best images.

Interline CCD

An interline CCD, again as its name suggests, has the shift register for each row of the sensor alongside the photosensitive component, but masked by a fine metal strip. Transfers are swift and secure, and there is no need for a shutter as the readout register is masked.

The disadvantages include the fact that the light-sensitive area is much reduced, to only about 40 percent of the surface area. Although microlenses on the surface of the sensor increase the sensitivity and thus reduce noise, dynamic range and aliasing are not improved and remain worse than those of a frame transfer chip.

Another issue is that the masking is not perfect, so some light is collected by the shift register, either directly or through internal reflections. This is likely to affect highlights, as will the related problem of a photosite becoming overloaded. The chip is designed so that excess charge leaks away into the substrate, but it is likely that some will spill into both the related readout register and those around it.

You may remember the performance of early interline CCD cameras, which showed marked vertical smear on highlights, an artifact we would certainly not find acceptable today. Modern interline CCDs are much better, although if you use them at higher frame rates (in a slow-motion camera, for example) or at shorter exposure times, the smear becomes visible.

CMOS imager

This brings us to the CMOS sensor. It differs from the CCD in that it incorporates the front line of processing on the chip itself, rather than being a largely passive device that moves the signal into the readout register to be collected. This means that each photosite has its own amplifier, built from several transistors alongside it.

As a result, the light-sensitive area is, like that of the interline transfer CCDs, relatively small, which is a significant disadvantage compared with frame transfer sensors. CMOS sensors for high-quality video use microlenses to achieve a good level of sensitivity. One important difference from CCDs to note, though, is that light leaking into any of the amplifiers is not transferred along the whole line of photosites connected to it, so vertical smear cannot occur.

A CCD transfers the charge from the photodiode to the readout register, emptying the photodiode in the process. In a CMOS device, there is no such shift, so each photosite's amplifier can be sampled at any time during the frame. This is important for two reasons.

First, it gives an active means of controlling an issue known as fixed pattern noise. Second, this same ability to read the signal at any time in the frame will be used to create a camera with a huge — effectively unlimited — dynamic range.

Imagine that the maximum amplified output of a photosite is 100 units. With a CCD camera, if you get 100 units, it's unclear whether that is a true measure of the light falling on the sensor or whether you have simply hit the limit of the system. But in a CMOS camera, if parts of the picture read 100 units after 20ms (the frame sampling time), you can measure those photosites earlier.

If you measure at 10ms and find 80 units, then you know (because the output is directly proportional to the light falling on the imager) that the full frame should have been 160 units. You can use that information to adjust either the onboard amplifier or the downstream processing to change that particular pixel's gain to give you the correct level. If you sample at 10ms and it is still 100 units, then sample at 5ms, and so on.
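
The sampling logic just described can be sketched in a few lines of Python. The 100-unit ceiling, the 20ms frame time and the shorter sample points come from the example above; read_photosite() is a hypothetical stand-in for the actual sensor readout, not a real camera API.

    FULL_SCALE = 100      # maximum amplified output of a photosite
    FRAME_TIME_MS = 20    # frame sampling time in the example above

    def true_level(read_photosite, sample_times_ms=(20, 10, 5, 2.5)):
        # Output is directly proportional to exposure time, so a reading
        # taken at t ms scales to the full frame by FRAME_TIME_MS / t.
        for t in sample_times_ms:
            level = read_photosite(t)
            if level < FULL_SCALE:   # not clipped at this exposure
                return level * FRAME_TIME_MS / t
        return None                  # clipped even at the shortest sample

    # With the numbers above: 80 units at 10ms implies 160 units per frame.
    print(true_level(lambda t: 80 if t <= 10 else 100))  # -> 160.0

The same scale factor is what would drive the per-pixel gain adjustment, whether it is applied in the onboard amplifier or in the downstream processing.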

CCD television cameras have a dynamic range of eight or nine f-stops. Digital cinematography cameras are a little better, perhaps reaching 11 f-stops. A CMOS sensor can deliver several times more dynamic range than the best CCD sensor, although practical applications for this increased dynamic range have yet to be developed.
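
As a reminder of the standard photographic arithmetic behind those figures, each f-stop is a doubling of light, so a dynamic range of N stops corresponds to a contrast ratio of 2 to the power N. The stop counts come from the paragraph above; the ratios are simply that conversion.

    # Each f-stop doubles the light, so N stops of dynamic range is a
    # contrast ratio of 2**N to 1.
    for stops in (8, 9, 11):
        print(f"{stops} f-stops is roughly a {2 ** stops}:1 contrast ratio")
    # 8 f-stops is roughly a 256:1 contrast ratio
    # 9 f-stops is roughly a 512:1 contrast ratio
    # 11 f-stops is roughly a 2048:1 contrast ratio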

Broadcast engineers sometimes look at the still picture industry with a little envy, noting that all the top-quality cameras have already migrated to CMOS. Still-camera manufacturers can do that because of economies of scale: Consumer pressure and the massive numbers of products sold mean that they can introduce a new generation of sensors every couple of years. In the broadcast business, our revolutions come every five to 10 years.

A look ahead

The current state of play is that most broadcast cameras use either interline or frame transfer CCDs for imaging. Both have their advantages and their disadvantages.

Now a dedicated CMOS alternative has made an appearance. It is more efficient to fabricate, and so today it is being used to provide an economical solution for broadcasters and production companies that need to move to high-quality HD production. In the future, the CMOS imager holds out the promise of dynamic range handling far beyond anything we have seen.

Klaus Weber is director of product marketing cameras for Grass Valley.