The Ultimate Video-Compression System - TvTechnology

There have been many mentions in the press recently of a laser-based video display said to be so good, so inexpensive and with such a small appetite for power that it will render obsolete all other large-screen technologies, including LCD and plasma. And then there’s the ultimate video-compression system.

According to some of those reports, the new display is the world’s first laser television. That wouldn’t count the Sony/Silicon Light Machines grating light valve (said to blow DLP away), the Laser Display Technologie system shown in 1997 at the Internationale Funkausstellung (Europe’s international consumer electronics show), or any of the other systems dating back to laser-inventor Dr. Theodore Maiman’s hang-on-the-wall Laser Video rear-projection apparatus in 1972.

One of those other systems is associated with display pioneer Dr. William Glenn, who has also done work on video compression. The more-than-50:1 compression ratio of today’s HDTV broadcasts doesn’t come close to the limit theorized by Glenn in one paper. MPEG-4 AVC gets closer. Other systems, such as FrameFree Technologies’ interpolative version, achieve even greater compression ratios. And then there’s that ultimate compression system.

It comes from a thought experiment conducted by Ed Fraticelli, vice president of technology at Production Masters (PMI) in Pittsburgh. What is the maximum number of video frames that can ever be shot?

That might sound like a trick question. How could anyone know what people will do in the future? But it isn’t.

Given the trend towards higher-resolution, large-format digital cinematography, suppose that each frame has a maximum of 4096 active pixels per line and up to 4096 active lines per picture. Suppose that each pixel can have roughly 68.7 billion different colors: 12 bits (4096 possible levels) each for red, green and blue, or 36 bits in all. Frame rate doesn’t matter; this is about the individual images, not how they’re combined.

Any video frame that is 4096 x 4096 or smaller and has no more than 36 bits per pixel must be part of a set comprising 4096 x 4096 x 4096 x 4096 x 4096 different frames. That’s a lot of frames—1,152,921,504,606,846,976, to be exact, or, to be less precise, about 1.2 quintillion. But that’s all of them.
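That counting is easy to verify. Here is a quick Python sketch of the article's own arithmetic: five factors of 4096, one each for the lines, the pixels per line, and the levels of red, green and blue.

```python
# Reproduce the article's frame count: five factors of 4096.
LEVELS = 4096  # 2**12: lines, pixels per line, and R, G and B levels

frame_count = LEVELS ** 5
print(f"{frame_count:,}")      # 1,152,921,504,606,846,976
print(frame_count == 2 ** 60)  # True: exactly two to the 60th power
```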

It includes frames of meaningless noise patterns. It also includes frames from moving-image sequences that have yet to be shot, with actors who have yet to be born, wearing clothes that have yet to be designed, riding in vehicles that have yet to be invented. It also includes every form of cropping, squeezing or downconversion to HDTV or standard definition that anyone might ever dream up.

Each of those frames contains a lot of information. There are 16,777,216 pixels, every one of which has 36 bits of color and brightness information, or 603,979,776 bits per frame.
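The per-frame figure follows directly from the chosen parameters:

```python
# Bits of information in one frame at the article's chosen parameters.
pixels_per_frame = 4096 * 4096   # 16,777,216 pixels
bits_per_pixel = 12 * 3          # 12 bits each for red, green and blue

bits_per_frame = pixels_per_frame * bits_per_pixel
print(f"{bits_per_frame:,}")     # 603,979,776 bits per frame
```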

About 14,563 of those frames could fit on a future one-terabyte holographic disk the size of a CD or DVD. It would take 79,167,857,213,957 such disks to store every possible frame that could ever be shot (within the chosen parameters). That’s within the realm of possibility, but they wouldn’t easily all fit in a single room.
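Those storage figures can be reproduced in a few lines, assuming a binary terabyte (2^40 bytes, which is what matches the article's numbers) and rounding the disk count up so that every frame is stored somewhere:

```python
# Storage arithmetic for the hypothetical one-terabyte holographic disk.
bits_per_frame = 4096 * 4096 * 36            # 603,979,776
disk_bits = 8 * 2 ** 40                      # one binary terabyte, in bits

frames_per_disk = disk_bits // bits_per_frame
frame_count = 4096 ** 5

disks = -(frame_count // -frames_per_disk)   # ceiling division
print(f"{frames_per_disk:,}")                # 14,563
print(f"{disks:,}")                          # 79,167,857,213,957
```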

About 1.2 quintillion index numbers would be required, one for each frame. That’s tough to think about—much harder than 79.2 trillion. But the total is exactly two raised to the 60th power, so any single index can be expressed in just 60 bits. Simply by matching each frame being shot to its counterpart in the all-frames library, one could theoretically achieve totally lossless compression from 603,979,776 bits per frame to just 60, a ratio in excess of 10 million to one!
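The 60-bit index and the resulting ratio both check out; a sketch, deriving the index width from the frame count rather than asserting it:

```python
# A library of 2**60 frames needs indices 0 .. 2**60 - 1, i.e. 60 bits.
frame_count = 4096 ** 5
index_bits = (frame_count - 1).bit_length()

bits_per_frame = 4096 * 4096 * 36        # 603,979,776

ratio = bits_per_frame / index_bits
print(index_bits)                        # 60
print(f"{ratio:,.1f}")                   # 10,066,329.6 -- over 10 million to one
```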

Of course, a few areas still need some work. A frame-matching system that can quickly compare each shot frame to each library frame is necessary. Even at just one billionth of a second per comparison, a search would have to check half the library on average, so the mean duration of a single match would be more than 18 years. That’s quite some latency! There’s also the need for high-bandwidth links between each camera, editing and playout system and the master library.
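Under those stated assumptions—a linear scan of the library at one comparison per nanosecond—the latency figure holds up:

```python
# Mean time to find a frame by linear search of the all-frames library.
frame_count = 4096 ** 5             # 2**60 library frames
seconds_per_comparison = 1e-9       # one billionth of a second each

# On average, half the library is scanned before a match is found.
mean_seconds = (frame_count / 2) * seconds_per_comparison
mean_years = mean_seconds / (365.25 * 24 * 3600)
print(round(mean_years, 1))         # about 18.3 years per match
```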

That’s why, even though a library of every frame that will ever be shot can be achieved, there’s no indication it will be achieved. That’s worth remembering when scientific breakthroughs are announced.

Maybe 2007’s displays will finally surmount all issues that have previously kept laser-based video from becoming commonplace. If not, it’s always nice to dream.

Mark Schubin is an engineering consultant with a diverse range of clients, from the Metropolitan Opera to Sesame Workshop.