Testing audio systems

The performance of an audio system element (e.g., an audio mixing console) or of a complete system (a number of individual system elements connected in a typical operational configuration) is expressed in terms of measured values of performance-indicative parameters. Audio performance-indicative parameters are grouped into three major categories: linear distortions, nonlinear distortions and noise. This article deals with performance test concepts as developed by North American broadcasting organizations. A future article will discuss dynamic range, with reference to analog and digital systems, as well as the implications of various concepts of audio signal level monitoring (PPM and VU meter).

Linear distortions

Linear distortions are modifications of the electrical signal waveform that are independent of the signal amplitude. It is assumed that the amplitude of the electrical signal does not exceed the clipping level of the equipment under test. Two major types of linear distortion are encountered in practice: non-uniform frequency response and non-uniform phase response.

Amplitude vs. frequency response is the peak-to-peak variation, over a specified frequency range, of the measured amplitude of an audio signal, expressed in dB with reference to the signal level at a specified frequency (usually 1kHz). The input port of the object under test is fed a 1kHz signal at the standard operating level (SOL), i.e., +8dBu or +4dBu for high-level inputs and, typically, -60dBu or -70dBu for microphone inputs.


Figure 1. The measurement of frequency response.

The gains are adjusted to obtain SOL (+8dBu or +4dBu) at the output. The audio analyzer is calibrated to read 0dB at the reference frequency. The input signal frequency is varied in discrete steps, or continuously, and readings in dB, with reference to 0dB, are taken at specific frequencies. The measured frequency range is usually 20Hz to 20kHz. Figure 1 shows the typical setup for frequency response measurements.
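To make the arithmetic concrete, the short sketch below converts a set of swept-frequency output readings into response values in dB relative to the 1kHz reading and reports the peak-to-peak variation. The frequencies and levels are hypothetical illustration data, not measurements from any particular device.

```python
# Sketch: express swept-frequency output levels in dB relative to the 1kHz reading.
# All readings below are hypothetical illustration values.

measurements_dbu = {   # frequency (Hz) -> measured output level (dBu)
    20: 7.2, 100: 7.9, 1000: 8.0, 10000: 7.8, 20000: 7.1,
}

ref_dbu = measurements_dbu[1000]   # analyzer calibrated to 0dB at the 1kHz reference

for freq in sorted(measurements_dbu):
    print(f"{freq:>6}Hz: {measurements_dbu[freq] - ref_dbu:+.1f}dB re 1kHz")

# Peak-to-peak variation over the measured band:
deviations = [lvl - ref_dbu for lvl in measurements_dbu.values()]
print(f"response: {max(deviations) - min(deviations):.1f}dB peak-to-peak")
```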

Phase vs. frequency response is the variation of the phase shift occurring in a system as a function of frequency within a given band. The input of the object under test is fed a signal of variable frequency, and a calibrated phase meter is connected at the output. A plot of phase vs. frequency is taken over the frequency band of interest.

Nonlinear distortions

Nonlinear distortions of an electrical signal are caused by deviations from a linear relationship between the input and the output of a given piece of equipment or system. Two types of nonlinear distortion are encountered in practice: harmonic distortion and intermodulation distortion.

Harmonic distortion occurs when a system whose input is fed with a pure sine-wave signal of frequency ƒ produces at its output a signal of frequency ƒ plus a set of signals with frequencies (2ƒ, 3ƒ, … nƒ) harmonically related to the input frequency. The distortion factor of a signal is the ratio of the total RMS voltage of all harmonics to the total RMS voltage of the complete signal (fundamental plus harmonics). The performance of audio amplifying devices is expressed in terms of percentage of total harmonic distortion (THD) at a specified output level. For professional studio-quality equipment, THD is measured at an output level 10dB above SOL (+18dBu or +14dBu). This level is called the maximum operating level (MOL). The percentage of THD is the distortion factor multiplied by 100. The mathematical expression is:


Figure 2. The measurement of total harmonic distortion.

THD = √(E2f² + E3f² + … + Enf²) / √(Ef² + E2f² + E3f² + … + Enf²) × 100

where:

Ef = amplitude of fundamental voltage

E2f = amplitude of second harmonic

Enf = amplitude of nth harmonic voltage

To measure the THD, the audio analyzer removes the fundamental (first harmonic) component of the distorted signal present at the output of the object under test, and all the remaining energy, including noise and harmonics, is measured. The measurement bandwidth is usually limited to 20kHz. Because of the contribution of noise to the measured results, the method is better described as total harmonic distortion and noise (THD + N). The tests are carried out at several frequencies, such as 50Hz, 100Hz, 1kHz, 5kHz, 7.5kHz and 10kHz. THD measurements at frequencies above 10kHz are irrelevant because the harmonics generated by the object under test are above the audio bandwidth. Figure 2 shows the typical setup for THD measurements.
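For illustration, here is a minimal sketch of the formula itself, computing the distortion factor from a set of component amplitudes. The function mirrors the ratio-of-RMS-sums definition above, not any analyzer's internal algorithm, and the amplitudes are hypothetical.

```python
import math

def thd_percent(e_f: float, harmonics: list[float]) -> float:
    """Distortion factor x 100: RMS sum of harmonics over RMS sum of all components."""
    harmonic_power = sum(e * e for e in harmonics)    # E2f^2 + E3f^2 + ... + Enf^2
    total_power = e_f * e_f + harmonic_power          # Ef^2 + E2f^2 + ... + Enf^2
    return math.sqrt(harmonic_power) / math.sqrt(total_power) * 100

# Hypothetical amplitudes in volts: 1V fundamental, small 2nd and 3rd harmonics.
print(f"THD = {thd_percent(1.0, [0.010, 0.005]):.2f}%")   # ~1.12%
```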


Figure 3. The measurement of intermodulation distortion.

Intermodulation distortion (IMD) occurs when a system whose input is fed with two signals of frequencies ƒ1 and ƒ2 generates at its output, in addition to the signals at the input frequencies, signals having frequencies equal to sums and differences of integer multiples of the input frequencies. The SMPTE IMD test specifies a test signal consisting of two separate frequencies (ƒ1 = 60Hz and ƒ2 = 7kHz) with an amplitude ratio of 4:1. The nonlinearity causes the 7kHz “carrier” to be modulated by the 60Hz signal, generating sidebands above and below the 7kHz carrier spaced at 60Hz and its harmonics. The IMD is computed as:

IMD = (demodulated signal amplitude / Ef2) × 100

where: Ef2 = amplitude of the 7kHz component.

Figure 3 shows the typical setup for IMD measurements.
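As a rough numeric sketch, one common way to estimate the demodulated-signal amplitude is the RMS sum of the sideband components around the 7kHz carrier; that reading of the formula, and all the amplitudes below, are assumptions for illustration only.

```python
import math

def smpte_imd_percent(sidebands: list[float], carrier_7khz: float) -> float:
    """IMD: RMS sum of the 60Hz-related sidebands relative to the 7kHz carrier amplitude."""
    demodulated = math.sqrt(sum(a * a for a in sidebands))
    return demodulated / carrier_7khz * 100

# Hypothetical readings in volts: 0.25V carrier, two sideband pairs.
print(f"IMD = {smpte_imd_percent([0.002, 0.001], 0.25):.2f}%")   # ~0.89%
```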

Noise

Audio signals are affected by noise, which is best defined as an unwanted disturbance superimposed on a useful signal. The noise level is usually expressed in dB relative to a reference value and is commonly referred to as the signal-to-noise ratio (SNR). In professional studio equipment, the reference level for SNR measurements is the maximum operating level (MOL), which is typically the output level at which the THD is 1 percent. Usually, MOL is 10dB above SOL.


Figure 4. The measurement of signal-to-noise ratio.

The main source of random noise is the thermal agitation of electrons. Given R, the resistive component of an impedance Z, the mean square value of the thermal noise voltage is given by En² = 4kTBR, where:

En = The noise voltage

k = Boltzmann's constant (1.38 × 10⁻²³ joules/kelvin)

T = The absolute temperature in Kelvin

B = The bandwidth in hertz

T is usually assigned a value such that 1.38 × T = 400 (i.e., T ≈ 290K), corresponding to about 17°C. The SNR at the output of a system depends on the noise generated by the resistive component of the signal source (e.g., the microphone) and the noise generated by the earliest amplifier stage in the chain. Assuming B = 20kHz and a microphone with a resistive component R = 150Ω, En = 0.219µV. This is the theoretical thermal noise of the microphone input circuit.
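That worked figure can be checked directly; a minimal sketch:

```python
import math

k = 1.38e-23   # Boltzmann's constant, joules/kelvin
T = 290        # absolute temperature in kelvin (about 17 degrees C, so 1.38T = 400)
B = 20_000     # measurement bandwidth in hertz
R = 150        # resistive component of the microphone impedance, ohms

En = math.sqrt(4 * k * T * B * R)   # from En^2 = 4kTBR
print(f"En = {En * 1e6:.3f}uV")     # ~0.219uV, the theoretical thermal noise
```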

The microphone preamplifier contributes its own random noise, which considerably degrades the SNR of the system. The situation can be visualized as an ideal noiseless amplifier whose input is fed by a noise generator. This fictitious noise is called the equivalent input noise (EIN) of the amplifier. The difference, in dB, between the EIN and the calculated theoretical thermal noise level of the audio signal source is called the noise figure of the amplifier.

The measurement of SNR is a rather involved procedure, and the accuracy of the results depends on a strict adherence to a set of rules. The routine test procedure illustrated in Figure 4 and described below is suitable for the SNR measurements of an audio mixer:

  • Step 1: Disable all inputs except the one in the measurement path. Disable all compressors and equalizers. Feed a 1kHz audio signal at the rated input level (e.g., -70dBu) at the microphone input and adjust input sensitivity, channel gain and master gain for SOL at the output (+8dBu or +4dBu).
  • Step 2: Remove the input signal source and substitute a low-noise 150Ω resistor. Measure the noise at the output with the audio analyzer, in dBu, in a 20kHz bandwidth. An optional noise-weighting network may be used to simulate the ear's frequency response.

The SNR is given by the difference, in dB, between MOL in dBu and the measured noise in dBu. The use of a weighting network will produce SNR values that may differ by 10dB or more from flat 20kHz bandwidth measurements.
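The final subtraction is simple bookkeeping. The sketch below uses hypothetical readings (a +18dBu MOL and a -68dBu unweighted noise floor) and includes a dBu-to-volts conversion for reference.

```python
def dbu_to_volts(dbu: float) -> float:
    """0dBu corresponds to 0.7746V RMS (1mW into 600 ohms)."""
    return 0.7746 * 10 ** (dbu / 20)

mol_dbu = 18.0      # maximum operating level, 10dB above a +8dBu SOL
noise_dbu = -68.0   # hypothetical unweighted noise reading, 20kHz bandwidth

print(f"SNR = {mol_dbu - noise_dbu:.0f}dB")                        # 86dB
print(f"noise floor = {dbu_to_volts(noise_dbu) * 1e6:.0f}uV RMS")  # ~308uV
```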

Periodic noise is generated outside the equipment and coupled in some manner into it. Unlike random noise, periodic noise can be eliminated by good engineering practice. The main type of periodic noise, commonly called hum, consists of the 60Hz power-line frequency and its harmonics. The measurement of the signal-to-periodic-noise ratio is similar to the measurement of the signal-to-random-noise ratio, except that a 200Hz low-pass filter is used. A spectrum analyzer or oscilloscope may be added to help identify the frequency of the periodic noise.

Crosstalk is defined as the injection of an unwanted signal from a neighboring circuit via a mutual impedance (e.g., between signal sources in an audio mixer). The measurement is quite involved. It includes feeding an MOL signal to the unwanted (crosstalking) input and measuring its effect at the wanted path, whose input is loaded with its characteristic source impedance. The two paths have to be adjusted for normal operating conditions. The audio analyzer is connected to the wanted path output, and the input of the crosstalking path is fed with a constant amplitude signal whose frequency is varied in discrete steps or continuously in the bandwidth of interest. The signal-to-crosstalk ratio is expressed in dB with reference to MOL.
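The resulting figure is the same kind of dB bookkeeping as the SNR case; a minimal sketch with hypothetical readings:

```python
mol_dbu = 18.0         # MOL signal fed to the crosstalking input
crosstalk_dbu = -62.0  # hypothetical residue measured at the wanted path's output

print(f"signal-to-crosstalk ratio = {mol_dbu - crosstalk_dbu:.0f}dB re MOL")  # 80dB
```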

Michael Robin, a fellow of the SMPTE and former engineer with the Canadian Broadcasting Corp.'s engineering headquarters, is an independent broadcast consultant located in Montreal. He is co-author of “Digital Television Fundamentals,” published by McGraw-Hill and translated into Chinese and Japanese.

Send questions and comments to: michael.robin@penton.com