Audio 101 for Video Folks: Levels

A couple months back, I taught a seminar about audio for a joint session of the Connecticut Broadcasters Association and the local chapter of the Society of Broadcast Engineers. Only about 10 percent of the assembled multitude was directly involved with audio; the vast majority were "video guys."

My presentation was about dialnorm, metadata and some of the related issues I've been ranting about in this column for the past few months. At the lunch break after my presentation, a couple of the participants told me they wished I had done a lot more on the basics of audio. They said they had problems with some really simple things and were at a point where, if the meters moved, they figured the audio was good. They wanted to do better than that.

In response to that request, I'm going to devote a couple of columns to audio basics for video folks. I hope you find it useful, or (if you're an "audio guy/gal") that you can pass the column on to some of your video colleagues. This month we'll talk about audio levels.

There are three basic physical aspects to an audio signal: its magnitude (how big it is), its spectrum (what frequencies are included) and its phase (its placement in time relative to other signals). Here we are going to address the magnitude of the signal, what we usually refer to as its amplitude or its "level."

WHAT AMPLITUDE MEANS

The magnitude of an audio signal is an expression of how large the range of energy swings is, from least pressure to greatest pressure, as the signal oscillates over time. That "pressure" can be air pressure (for sound), voltage (for analog audio) or numerical sample values (for digital audio). Take a look at Fig. 1 for more information.

Fig. 1: One cycle of a 1 kHz sine wave. Note the peak and RMS amplitudes. From my book, "Total Recordings," courtesy of KIQ Productions.

The first thing to know is that the range of possible audio magnitudes is huge, about 10,000,000:1.

The second thing to know is that at the smallest magnitudes (the narrowest pressure variations), a signal becomes merged with, or subsumed by, noise (the so-called "noise floor"), and at the largest magnitudes it runs into distortion.

The third thing to know is that you mainly need to worry about large magnitudes, the area where we do most of our work.

The fourth thing to know is that there are several ways to measure the size of a signal. They all have their uses. None of them are ideal for everything.

THE IMPORTANCE OF RMS

When an audio signal voltage swings back and forth between positive and negative voltages, it reaches positive and negative peaks on each cycle, but usually it is not at the peak voltages all the time. However, if we simply took an average of voltage readings over time, that average would converge on zero volts, which wouldn't be a meaningful expression of the magnitude of the signal. So, we use a fairly tedious but simple calculation called "Root-Mean-Square" (RMS): square each voltage reading (which makes every value positive for calculation purposes), average those squares over time, and then take the square root of that average.

This value, which is almost always less than the peak value, is called the RMS value, and it represents a continuing "average" magnitude of the audio signal over time. However, it does not indicate the peak values that the signal reaches; so, keep in mind that peak values and RMS values will be different for a given signal and are used for different purposes.
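If it helps to see the arithmetic spelled out, here is a minimal Python sketch (the function name and the 48 kHz sample rate are just illustrative choices, not anything standard) that computes both the peak and the RMS value of one cycle of a sine wave like the one in Fig. 1. For a pure sine wave, the RMS value works out to about 0.707 times the peak.

```python
import math

def peak_and_rms(samples):
    """Return the (peak, RMS) amplitude of a list of sample values."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak, rms

# One cycle of a 1 kHz sine wave sampled at 48 kHz, peak amplitude 1.0
sample_rate = 48000
samples = [math.sin(2 * math.pi * 1000 * n / sample_rate) for n in range(48)]

peak, rms = peak_and_rms(samples)
print(f"peak = {peak:.3f}, RMS = {rms:.3f}")  # RMS is about 0.707 for a sine wave
```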

THE SENSATION OF LOUDNESS

In general, the greater the RMS level of a signal is, the "louder" it will sound to us humans; in other words, a rough correlation exists between amplitude and loudness. If we triple the RMS amplitude of a signal, it will sound approximately "twice" as loud. Really short signals, called "transients," don't sound as loud as sustained ones of equal amplitude. Similarly, signals with a broad spectrum (many frequencies) sound louder than signals with very little spectrum (like a sine wave, or single frequency, for instance). Keep in mind that loudness is a "subjective" auditory sensation; amplitude is a physical magnitude. Although they are related, they aren't the same.

UNDERSTANDING SPECS

A decibel (dB) is a numerical expression based on logarithms. We use it because the range of magnitudes we wish to deal with is too large to be easily handled by normal numbers. A pressure change of 1 dB is a 12 percent change in amplitude; a pressure change of 6 dB is a doubling of amplitude; 20 dB is 10 times the amplitude; 60 dB is 1,000 times the amplitude. Meanwhile, -60 dB is 0.001 times the amplitude. Got it?
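If you want to check those numbers yourself, the conversion between decibels and an amplitude ratio takes only a couple of lines. Here is a small Python sketch (the function names are just for illustration):

```python
import math

def db_to_ratio(db):
    """Convert a level change in dB to an amplitude (voltage or pressure) ratio."""
    return 10 ** (db / 20)

def ratio_to_db(ratio):
    """Convert an amplitude ratio to a level change in dB."""
    return 20 * math.log10(ratio)

for db in (1, 6, 20, 60, -60):
    print(f"{db:+d} dB -> x{db_to_ratio(db):.3f}")
# +1 dB -> x1.122 (about a 12 percent change)
# +6 dB -> x1.995 (roughly a doubling)
# +20 dB -> x10.000, +60 dB -> x1000.000, -60 dB -> x0.001
```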

To make decibels more useful, we tie them to various reference values. Zero decibel Sound Pressure Level is equal to 0.0002 microbars (the threshold of human hearing). Zero dBFS is the maximum possible level of a sine wave in any digital audio system. Zero dBu is equal to 0.775 Volts RMS, and so on.

Nominal Levels are arbitrary levels (usually RMS values) that we establish for maintaining good audio-level management; a typical one is +4 dBu. What it means is that we would like our audio signal to "hover" around that nominal level, which we also use to calibrate the system.
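To see how a reference value turns a dB figure into a real-world quantity, here is a short sketch that converts between dBu and volts RMS, using the 0.775 V reference mentioned above (the names and the sample values are just illustrative). The +4 dBu nominal level comes out to roughly 1.23 V RMS.

```python
import math

DBU_REF_VOLTS = 0.775  # 0 dBu reference level, in volts RMS

def dbu_to_volts(dbu):
    """Convert a level in dBu to volts RMS."""
    return DBU_REF_VOLTS * 10 ** (dbu / 20)

def volts_to_dbu(volts):
    """Convert volts RMS to a level in dBu."""
    return 20 * math.log10(volts / DBU_REF_VOLTS)

print(f"+4 dBu = {dbu_to_volts(4):.3f} V RMS")    # about 1.228 V
print(f" 0 dBu = {dbu_to_volts(0):.3f} V RMS")    # 0.775 V by definition
print(f"1.000 V = {volts_to_dbu(1.0):+.2f} dBu")  # about +2.2 dBu
```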

Signal-to-Noise ratio is the range, in dB, between our signal level (or our nominal level) and the noise floor of the audio system.

Headroom is the range, in dB, between our signal level and the maximum undistorted level of the audio system.

Dynamic Range is the range, in dB, between the noise floor and the maximum undistorted signal level of the system. It is the sum of the signal-to-noise ratio and the headroom of a signal.
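To tie those three definitions together, here is a quick sketch; the level figures are made-up example numbers, not the spec of any particular system:

```python
# Hypothetical example levels, in dBu, chosen only to show the relationships
noise_floor = -80      # level at which the signal disappears into noise
nominal_level = 4      # our chosen operating level (+4 dBu)
max_undistorted = 24   # level at which clipping begins

signal_to_noise = nominal_level - noise_floor   # 84 dB
headroom = max_undistorted - nominal_level      # 20 dB
dynamic_range = max_undistorted - noise_floor   # 104 dB

# Dynamic range is the sum of signal-to-noise ratio and headroom
assert dynamic_range == signal_to_noise + headroom
print(f"S/N = {signal_to_noise} dB, headroom = {headroom} dB, "
      f"dynamic range = {dynamic_range} dB")
```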

Clipping is the badness that occurs when the signal exceeds the maximum undistorted level of the system. The audio signal peaks are lopped off, hence the term "clipping," and the resulting sound is called harmonic distortion. Clipping is generally to be avoided, except when distortion is desired (as in electric guitars). It's usually a painfully audible manifestation of bad audio.
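Numerically, clipping is nothing more exotic than the signal being hard-limited at the system's maximum value. A tiny sketch of the idea (the function name and sample values are made up for illustration):

```python
def clip(sample, limit=1.0):
    """Hard-limit a sample to +/- limit; anything beyond gets lopped off."""
    return max(-limit, min(limit, sample))

# A peak that "wants" to reach 1.5 gets flattened at 1.0, and that flattening
# is what adds harmonic distortion to the original waveform.
print([clip(s) for s in (0.3, 0.9, 1.5, -1.2)])  # [0.3, 0.9, 1.0, -1.0]
```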

MANAGING LEVELS

Decent control of levels is the most important part of good audio engineering for broadcast. Good level management involves taking into account the digital, analog and acoustic level relationships, as well as the balance of diverse program elements such as voices, FX, music and so on. It also involves careful, ongoing adjustment of these levels to (a) keep the audio from being distorted, (b) keep it well above the noise floor, and (c) keep it sounding natural and realistic.

Turns out it ain't easy; it takes practice and some careful use of your ears. But it can be done, by anybody. You don't have to be an audio guru; trust me.

I'm happy to take questions on any of this stuff. Got an audio question? Send me an e-mail at dmoulton.ma.ultranet@rcn.com.

Next month, I'll take a look at the fundamentals of equalization.

Thanks for listening.

Dave Moulton