Broadcast camera developments

You might get the impression that there is a revolution going on in the world of cameras. However, SD is still with us, HD is in its growing phase, and the Internet is not ready for broadcast. In 2007, around half of the cameras sold worldwide were still SD.

Institutions such as parliaments, high-end security and other audiovisual services buy broadcast cameras as well, which keeps the market for SD alive.

The majority of broadcast cameras now are 1920 × 1080/50i/60i, have a 16:9 aspect ratio with HD-SDI and HD analog outputs, and can be switched between popular formats and ratios. As a kind of insurance, SDI and analog composite video outputs are also provided. That means users can replace their SD cameras with HD ones without replacing their complete broadcast chain, allowing broadcasters to upgrade their studios to HD operation gradually.

Where are we going?

We are moving from SD 720 × 576 to HD 1920 × 1080 resolution, and in a few years, the industry could move beyond HDTV. Japanese broadcaster NHK offers Ultra High Definition Television (UHDTV) with 7680 × 4320 resolution. In sports, there is a strong demand for high-speed, slow motion in HD, and this will be expected from UHDTV.

Another issue is 2-D versus 3-D. At IBC2007, 3-D displays were demonstrated, but for professional use only. There is potential for growth here.

The film industry is exploring digital acquisition, and digital cinematography cameras are increasingly used as replacements for 16mm and 35mm film cameras. The latest HDTV cameras produce not only 1080i but also progressive-scan pictures.

Computers offer processing speed combined with high-capacity memories, which makes them ideal for use in post production. The idea that future cameras will consist of a computer with a lens and a memory stick is not valid, however. The reliability of operating software may be acceptable for computing, but it is far from what is acceptable in a live broadcast camera.

The developments of UHDTV and 3-D indicate that camera development is heading toward higher pixel counts, more frames per second and higher bit rates. The existing CCD architecture was introduced in the late 1980s and has served broadcast's needs well. Over the years, problems such as lag, smear, fixed-pattern noise and leaking pixels have been solved.

CCD technology is mature now, and it will be around for another few years. However, it is clear that another architecture should be in place in that same time frame. The time to market for a new type of sensor technology is around five to eight years.

Camera applications

Whatever the application, is a revolution in camera technology in the offing? The best way to find out is to examine the unit's components, starting with the lens, moving to the optical input and ending with the base station or recorder as the electronic output. Here's a closer look at each component:

  • Lenses: Since the turret of fixed focal length lenses was replaced by the zoom, the evolution has been toward longer zoom ranges, digital interfacing and focus assist. The revolution will start when more camera electronics are integrated into the lens. This has already happened with black-and-white security and traffic control cameras; cameras for these specific applications are really lenses with a video output, much like a webcam.
  • Camera housing: Cameras have evolved from a large body to a portable unit (with hip-pack and portable processing unit) to the shoulder-mounted camera we know today. The weight has dropped from 25kg to less than 7kg. However, cameras are still front-heavy and roll off your shoulder. There are two possible solutions. One is a well-fitting, adjustable shoulder pad with balancing weights in the back of the camera. A second option is to create a small handheld camera. In the prosumer market, the HD handheld weighs around 6.5lb; in the consumer market, HD palmtops weigh about 1lb.
  • Filters: In the past, a camera carried neutral density filters plus a cap. Today, it is common to include four-, five- and six-point stars in addition to mist filters. Apart from the matte box in front of the lens, cameras generally carry two filter wheels between the lens and prism. Some cameras have memories on the video processing board where electronic effect filter parameters can be stored. A revolution would be to move all effect filtering into the camera at the RGB level, including the effects normally carried out in the production switcher. There are no contours at that level, so quality would benefit. A remote input controlled by the vision mixer would be needed.
  • Beam-splitter: Since the mirror cross was replaced by a three-way RGB prism block, only minor changes have been made to the beam-splitter design, mainly driven by quality issues. There is a move toward a single sensor with a Bayer filter, but the disadvantage is a loss in sensitivity of two stops, which is not acceptable for many applications.
  • Imagers: Tubes have been replaced by CCDs, which in turn will be replaced by CMOS with system on chip (SoC). Camera video processing is integrated: light is captured, and bits come out. The CMOS sensor with SoC can be seen as a digital device, and the way to a single integrated circuit (IC) broadcast camera is open.
  • Video processing: Nuvistors and tubes were replaced by discrete transistors, and then by ICs, including ASICs, EPLDs and FPLAs. Processing has changed from dedicated analog circuitry to digital with embedded software. Integration with the sensor is on its way. The future could see the introduction of a wide-gamut color space like the new xvYCC standard.
  • Power: Power consumption on the CCD camera side has grown to the point where batteries must either increase in size or run for less time, and the maximum cable length in an OB operation is decreasing. Digital processing consumes more power, and the higher the clock rates, the higher the current. More power means more heat, more fans and more noise. Adding fill-in light, displays, tallies and floor monitoring does not help either. CMOS consumes far less than CCD, and by integrating more video processing, power consumption should decrease further.
  • Audio: Audio quality for camera audio channels has improved with the move to digital, so much so that they can be used for more than just ambient sound. The problem is synchronization between video and audio, because digital video processing adds delay.
  • Intercom and tally: Digital intercom offers far better quality than the old analog systems, and digital processing allows extras such as a message box for the camera operator. The tally is triggered by the video mixer's preset or program bus, but digital circuits can cause noticeable delay.
  • Viewfinder: What was once a monochrome CRT has evolved into a full-color LCD. This revolution, however, is not for the best. Focusing an HD picture is already a problem with a black-and-white CRT; with an LCD, it is even worse. This area is ripe for development.
  • Camera output: The camera output has evolved from multicore copper cable to triax and on to hybrid fiber, which provides the bandwidth for HD signals plus a power supply to the head. Developments in this area must deliver long cable lengths with wide bandwidth, but cost, reliability and field repair are major issues.
  • CCU/base station: The CCU/base station has shrunk from a full-height rack to 2RU for the latest cameras. The ability to fit more cameras in a given space has created a revolution in the OB world, enabling soccer coverage, for example, to expand from five cameras to the 24 now expected for premier matches.
  • Output formats: Apart from standards used within the broadcast industry like SDI, smaller camcorders can include FireWire and HDMI, with Ethernet starting to appear as well.
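The audio/video offset caused by processing delay (noted under Audio above) is simple to quantify. A minimal sketch follows; the two-frame delay and the 25fps (50i) frame rate are illustrative assumptions, not figures from the text:

```python
def audio_delay_ms(video_delay_frames: float, frame_rate_hz: float) -> float:
    """Delay to apply to the audio so it lands back in sync with video
    that the camera's digital processing has delayed."""
    return 1000.0 * video_delay_frames / frame_rate_hz

# E.g. two frames of video processing delay in a 25fps (50i) chain:
print(f"{audio_delay_ms(2, 25.0):.0f} ms")  # 80 ms
```

In practice, the matching delay is inserted in the audio path at the camera or CCU so downstream equipment sees lip-sync material.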

The coming camera revolution

The leading edge is the architecture of the imager. Almost all broadcast cameras use CCDs, but a few are already starting to use CMOS sensors. (See Tables 1 and 2.) Three CMOS sensors mounted on or glued to an optical beam splitter divide the incoming light into RGB video information. The standard lens mount is B4, and the imagers are 2/3in.

The quality of the camera can never be better than its imager. Parameters like noise, sensitivity, resolution, dynamic range and aliasing mainly depend on imager specifications.

A closer look at noise and sensitivity

These are some signal-to-noise ratio (SNR) numbers, with camera sensitivity 2000lux at f8:

  • SD camera specifications are 62dB, with a 720 × 525 imager.
  • HD camera specifications are 54dB, with a 1920 × 1080 imager.
  • UHD camera specifications are expected to be <45dB, with a 7680 × 4320 imager.

The numbers mentioned above are specification numbers and are useful as a comparison between cameras. Depending on gain, gamma and contour settings, the numbers will be lower.

An average 12-bit SD camera under normal operational conditions will be 52dB to 56dB, and a 14-bit HD camera will be 50dB to 52dB. The SNR in a UHD camera will be around 45dB; noise created by the video processing will be far below the sensor noise floor.

Sensitivity is a trade-off between SNR and gain. The standard setting for a three-chip, 2/3in broadcast camera is f8/2000lux/3200K, at which the SNR for an HD camera is 54dB in Y.

At HD (54dB) and UHD (45dB), the SNRs are relatively low. Losing another 12dB to Bayer filtering is not realistic. One stop less sensitivity and a bigger sensor is an option, but then what would you do with the installed base of 2/3in B4 lenses?
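A quick shot-noise sketch makes these numbers plausible. Assuming the same 2/3in sensor area is divided into ever more pixels and the camera is shot-noise limited (assumptions for the example, not statements from the manufacturers' specs), the SNR falls by 10·log10 of the pixel-count ratio, and one f-stop corresponds to 6dB of signal:

```python
import math

def snr_delta_db(pixels_ref: int, pixels_new: int) -> float:
    """Predicted SNR change when the same sensor area is divided into
    more pixels, assuming shot-noise-limited operation: signal scales
    with pixel area, noise with its square root, so SNR (in dB) moves
    by 10 * log10 of the pixel-count ratio."""
    return 10 * math.log10(pixels_ref / pixels_new)

SD, HD, UHD = 720 * 576, 1920 * 1080, 7680 * 4320

print(f"SD  -> HD : {snr_delta_db(SD, HD):+.1f} dB")   # -7.0 dB (spec sheets show about -8)
print(f"HD  -> UHD: {snr_delta_db(HD, UHD):+.1f} dB")  # -12.0 dB (54dB -> roughly 42dB)

# One f-stop is a factor of two in light, i.e. 20*log10(2), about 6 dB
# of signal, so the two-stop Bayer penalty works out to about 12 dB.
print(f"Two stops: {20 * math.log10(2 ** 2):.0f} dB")
```

The predicted drops line up reasonably with the 62dB/54dB/<45dB specification numbers above, and they show why giving up a further two stops to a Bayer filter is hard to accept.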

Beam splitting

A smaller-sized camera with less weight and lower power consumption requires another concept in beam splitting. The f1.4 prism, as widely used in our industry, consumes too much space in the camera. On the other hand, a single chip with a Bayer filter will cost sensitivity.

An example of a new beam-splitting assembly is the organic optical layered imager (OOLI). Visible light spans wavelengths from 400nm to 700nm, with blue at 470nm, green at 535nm and red at 610nm. Each blue, green or red layer should be sensitive to its own spectral band and pass the rest. Such a design cannot be built from conventional light-sensitive semiconductor material; it will probably require an organic material. The Foveon chip indicates the direction of such concepts.


Look for trends in CCD and CMOS imagers, developments in HD and ultra HD, plus the specialist high-speed and slow-motion cameras. Make yourself a checklist:

  • Check the noise and contours in the darks.
  • Test whether whites are truly colorless.
  • Move the camera and look at the dynamic behavior.
  • Assess the ease of operation.
  • Feel the body at the end of the day.
  • Listen to the noise at the end of the day.
  • Ask for specs, prices, delivery time and where the service is located.
  • Make a comparison.
  • If you want an undisturbed close look, go to the stand of a lens manufacturer.
  • If you are really interested, ask for a demo.

Outlook for the future

CCD will be around for the coming years. (See Figure 1.) CMOS probably needs the same process steps as CCD, which could mean that the price will end up the same, but by that time CMOS should outperform CCD.

The three-chip CMOS SoC camera has a promising architecture. (See Figure 2.) However, the imager needs some modifications. The integration of camera electronics into the optical chip would lower power consumption. High bit and frame rates are possible, so it is suited to SD, HD, UHD and high-speed cameras.

The single-imager SoC camera using a Bayer filter has a problem with sensitivity or noise, but a CMOS SoC could nevertheless be an interim step. The camera can be compact and suited to SD, HD, UHD and digital cinematography.

The camera of the future will be a lens with a small adapter. The adapter will host a new sensor with fully integrated camera electronics. (See Figure 3.) A viewfinder will be either a part of the lens or a screen with optical ultra HDMI or wireless interface. The lens and its adapter will be powered locally by battery, so the connection with the OB truck could be a single-mode, dual-window fiber carrying the full video bit rates, controls, returns, audio and intercom.

Because no remote power is needed, the CCU or base station will be just an interface between the fiber and the connectivity of the system. There's no need for a hybrid fiber, and there are no cable-length limitations other than the optical loss budget. Such a camera would be suitable for ENG, EFP, drama, sports and digital cinema.
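The loss-budget limit can be sketched with a few lines of arithmetic. All numbers below (the 13dB budget, 0.35dB/km attenuation, connector losses and margin) are illustrative assumptions for the example, not figures from the text:

```python
def max_fiber_length_km(power_budget_db: float,
                        fiber_loss_db_per_km: float,
                        fixed_losses_db: float,
                        safety_margin_db: float) -> float:
    """Longest run for which the total attenuation still fits inside
    the optical loss budget (transmitter power minus receiver
    sensitivity), after fixed losses and a safety margin."""
    usable_db = power_budget_db - fixed_losses_db - safety_margin_db
    return usable_db / fiber_loss_db_per_km

# Example: 13 dB budget, single-mode fiber at 0.35 dB/km (1310 nm
# window), two 0.5 dB connectors, 3 dB ageing/repair margin.
print(f"{max_fiber_length_km(13.0, 0.35, 2 * 0.5, 3.0):.0f} km")  # 26 km
```

Even with these conservative example numbers, the reach is far beyond anything a powered hybrid cable can offer, which is the point of dropping remote power.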

Berry Ebben worked for Philips on the LDK 3 camera, was a member of the Viper design team and senior product manager for Thomson Grass Valley cameras, and now works as a consultant on broadcast, digital cinematography and conference systems.

Table 1. Comparison of CCD and CMOS system on chip (SoC) for broadcast

| Factors | CCD frame transfer | CCD frame interline transfer | CMOS SoC |
| Architecture | Read-out field store | Read-out field store | Line scanning |
| Output | Analog | Analog | Digital, 2 × 12 bit fixed |
| 16:9/2.37:1 switching | Native | In FPGA | In FPGA |
| Power consumption | High | High | Low |
| Temperature range | Standard | Standard | Standard |
| Dynamic range | 600 percent | 600 percent | 400 percent |
| Highlight handling | Adjustable in preamp | Adjustable in preamp | Fixed on the chip |
| Dark areas | Good | Good | Noisy* |
| Fixed-pattern noise | Not visible | Not visible | To be solved* |
| Noise | Well below operational needs | Well below operational needs | Could be better* |
| High-speed possibilities | 2X | 3X | 3X and higher |
| Aliasing performance | Very good | Good | Good |
| Resolution | Good | Good | Good |
| Sensitivity | Standard | Standard | Standard |
| Smear | No | Negligible | No |

*CMOS SoC has the possibility to host electronic circuitry, so a choice regarding what to do in the camera chain can be made so that the end result equals or outperforms the CCD.

Table 2. Camera-related factors

| Factors | With CCD frame transfer | With CCD frame interline transfer | With CMOS SoC |
| Power consumption | High | High | Low |
| Size | Too big | Too big | Decreases |
| Weight | Acceptable | Acceptable | Decreases |
| Ergonomics | Front-heavy | Front-heavy | Front-heavy |
| Price | High | High/medium | Low |