
As for TV Technology, It’s All an Illusion

You might not have noticed that there’s no such thing as negative light. Or, if there is, no one is letting us lay folks know about it.

I point this out on account of black being visible on TV. I don’t care whether it’s a tuxedo jacket, a penguin’s feathers, or even a swatch of black velvet. TV has no problem showing you any of those. But it’s a physical impossibility.

Grab a flashlight, and aim it at a TV screen. Describe the color of what you see. It might be gray, green or brown. So here’s my question: Without using negative light, how do you get black pictures from gray, green or brown screens?

Here’s a long answer. At a SMPTE convention in Pasadena, Calif., a few years back, a guy from Kodak showed a slide of a woman with a yellow towel around her neck. Then he showed the same slide with a blue filter positioned over the towel. It turned green. Then he covered the whole slide, not just the towel, with the same blue filter. That time the towel looked yellow.


TV technology ain’t brain surgery. TV technology ain’t rocket science. TV technology is all about illusions. The most important thing ain’t what’s actually in the pictures or sounds but what folks see and hear in them.

For instance, look at the loudness of commercials. Maybe I should have said, “Listen to the loudness of commercials.”

I don’t care whether you use a VU meter or a PPM (peak program meter) or look at peak-to-peak traces on a scope. With any of those, you’re measuring level. Maybe it’s level averaged over some short period of time (a whopping 0.3 seconds in a VU meter), but it’s level nonetheless. Level ain’t the same thing as perceived loudness.
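To see how far apart two level readings can sit on the very same audio, here’s a toy Python sketch. The clip is made up, and the 300-millisecond running average is a crude stand-in for a real VU meter’s electromechanical ballistics, so take the numbers as illustrative:

```python
import math

RATE = 48000
HALF_SEC = RATE // 2

# A made-up clip: a 1 ms spike at 0.9, then half a second of quiet 440 Hz tone.
click = [0.9 if i < 48 else 0.0 for i in range(HALF_SEC)]
tone = [0.2 * math.sin(2 * math.pi * 440 * i / RATE) for i in range(HALF_SEC)]
clip = click + tone

# A peak meter reads the spike.
peak = max(abs(s) for s in clip)

# A VU-style reading: magnitude averaged over roughly 0.3 seconds (a crude
# model -- a real VU meter's ballistics aren't a simple running mean).
window = int(0.3 * RATE)
vu = sum(abs(s) for s in clip[:window]) / window

print(f"peak level: {peak:.3f}, 0.3 s averaged level: {vu:.4f}")
```

Same audio, two wildly different numbers — and neither one tells you how loud it sounds.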

Try recording yourself speaking normally. Then speak more and more softly until you’re just barely whispering. Then speak a word normally. That last word will probably shock you with its loudness, but it shouldn’t be any higher in level than your first word. On the other hand, if you raise your voice a little at a time until you’re orating away, by the time you get to the end of the Declaration of Independence, even though you’re at a pretty high level, you might not sound very loud.


As long as I’m on the subject of sound, let me tackle multiple channels just a bit. I’ll start with the minimum of two. It’s surely possible for a tone in a room (maybe something a flute player is doing) to bounce around and end up with reversed polarities at two stereo mics. That ain’t a big problem for a listener who’s got stereo headphones. A listener to a single speaker fed the combined stereo channels gets nothing.

You can be transmitting the stereo perfectly, but you’re screwing your mono listeners. It ain’t about signals; it’s about perception.
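Here’s a little Python sketch of that mono wipeout, assuming an idealized 1 kHz tone arriving at the two mics with exactly opposite polarity:

```python
import math

RATE = 48000
N = RATE // 100  # 10 ms of a 1 kHz flute-ish tone

# Reversed polarity at the two mics: the right channel is the left inverted.
left = [math.sin(2 * math.pi * 1000 * i / RATE) for i in range(N)]
right = [-s for s in left]

# Stereo listener: each ear gets a healthy signal.
stereo_peak = max(abs(s) for s in left)

# Mono listener: the single speaker gets (L + R) / 2 -- dead silence.
mono = [(l + r) / 2 for l, r in zip(left, right)]
mono_peak = max(abs(s) for s in mono)

print(f"stereo peak: {stereo_peak:.3f}, mono peak: {mono_peak:.3f}")
```

Real rooms won’t cancel quite that perfectly, but partial cancellation is plenty to hollow out a mono mix.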

Now I’ll stick in a few more channels. Maybe you want to broadcast a movie shot in surround sound. The movie has 5.1 channels; your audio encoder has 5.1 channels. You carefully match the movie’s left front, center, right front, left surround, right surround, and low-frequency effects to the appropriate encoder inputs. Does it sound good to your audience? Maybe. Maybe not.

Ever been to a movie theater? Ever seen a center speaker? You can’t see it on account of it’s behind the screen. In any home setup short of a theater with a perforated screen, the center speaker ain’t behind the screen.

Now then, when you go to a movie theater these days you do see some speakers. Every last one of them—even the ones on either side of the screen—is a surround speaker. That means that folks in a movie theater get much (if not all) of their surround from speakers in front of them. Folks at home, despite Wendy Carlos’s instructions to the contrary, usually have their surround speakers behind them. Does it make a difference? Maybe.

How about when you shoot your own 5.1-channel stuff? Heck, home video camcorders have 5.1 channel pickup systems these days (or anyhow say they do). Ever been in a studio with a mic boom? That sucker whips the mic back and forth in a flash. So, if you replace the mono mic on the boom with a 5.1-channel mic, the acoustic perspective of the surround goes whipping around just as fast. Is that a good thing, or will that induce nausea in listeners? And what if your best news shooter whips around with a camera-mounted surround-sound microphone?


It ain’t just sound. Think about video levels. Over a fairly broad range, we human-type folks are sensitive to a change of about 1 percent in contrast level.

So figure an 8-bit system with 256 levels. At level 100, a move of one level to either 99 or 101 represents that 1 percent change. All’s right with the world.

Up at level 200, a 1 percent shift would jump two levels to either 202 or 198. Either way, it looks like you’re wasting quanta. But down at level 10, a one-level change is a shift of 10 percent. To get that step back down to the 1 percent threshold, you’d need 10 times more levels, or something like 12 bits. And down at level 1, you’d need another 10 times more levels.
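The arithmetic is simple enough to sketch in Python:

```python
# Relative size of a one-code-value step at various points on a linear scale.
# Against a roughly 1 percent contrast sensitivity, 200 wastes codes and
# 10 and 1 are nowhere near fine enough.
for level in (200, 100, 10, 1):
    step_pct = 100.0 / level  # one step as a percentage of the level itself
    print(f"level {level:3d}: one step = {step_pct:6.1f}% change")
```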

That’s a good reason why video gets encoded nonlinearly, something that some computer folks don’t seem to understand. But which nonlinearity should we use? Different video standards have different gammas, and HDTV and digital cinema are different from all of them (and from each other). And video ops can do their own things.
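Here’s a sketch of the idea using a plain power-law curve. Real transfer functions (Rec. 709’s, for one) add a linear segment near black, so the exact numbers are illustrative, but the effect is the point: the nonlinearity hands the scarce near-black linear values a generous share of the code range.

```python
# A simplified pure power-law encode (real standards add a linear toe near
# black, so this is a sketch, not any actual standard's transfer curve).
GAMMA = 0.45

def encode(linear, bits=8):
    """Quantize a 0..1 linear-light value through a power-law curve."""
    return round((linear ** GAMMA) * (2 ** bits - 1))

print(encode(0.01))  # 1% linear light lands well up the code scale
print(encode(0.5))
print(encode(1.0))   # full scale -> code 255
```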

Don’t get me started on color. The primaries of the different video standards ain’t very far apart, but the matrix equations sure are. And then there’s compression, where you throw away 98 percent of the info in a 50:1 compression ratio. Is it the right 98 percent? Maybe it’d be worthwhile to check.
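For a taste of how far apart those matrix equations sit, compare the standard luma weights of SD (Rec. 601) and HD (Rec. 709) in a quick Python sketch:

```python
# Luma weights from the two standards' matrix equations.
REC601 = (0.299, 0.587, 0.114)     # SD video
REC709 = (0.2126, 0.7152, 0.0722)  # HDTV

def luma(rgb, coeffs):
    """Weighted sum of R, G, B under a given standard's matrix."""
    return sum(c * v for c, v in zip(coeffs, rgb))

# The same full-strength green carries noticeably different luma in each.
green = (0.0, 1.0, 0.0)
print(luma(green, REC601), luma(green, REC709))  # 0.587 vs. 0.7152
```

Decode with the wrong matrix and every saturated color shifts in brightness — which is exactly the kind of thing nobody checks until a viewer sees it.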