Using microphones


The transition to digital is necessitating a transformation in the industry's infrastructure. The microphone will remain the one analog link in the chain. The good news is that today's microphones are ready for the transition to digital broadcasting. Microphones developed for the music market, with the dynamic range and low noise floors a digital environment requires, are already in use.

Due to the audio requirements for DTV, it is reasonable to expect that stations and networks will have to start using better microphones across the board. In fact, if there is one overarching consideration for microphone usage in a digital broadcast landscape, it is that microphones of higher sensitivity will increasingly come into regular use in systems that will be less tolerant of noise than ever before. As noise floors fall and available gain rises in a digital audio environment, audio engineers will have to become more aware of ambient noise in the studio.

The process of reconciling noise and sensitivity already is under way, led, interestingly enough, by the increased prominence of live music on television today. For instance, CBS' “The Late Show with David Letterman,” which features live music from the CBS Orchestra, already is using hand-held versions of Audio-Technica 4055 and 4054 studio microphones. Such mics are studio staples, but until now have been relatively rare in broadcast.

Wireless tips

Wireless transmitters will remain vulnerable to hazards including metal, which can absorb or deflect radio waves, and moisture, whose presence often reveals itself as crackling. One preventive measure: when placing small wireless microphones, make sure no conductive material is in contact with them.

Part of covering any activity programming is getting the microphone closer to the sound source. Lavalier miking can make this easier.


Microphone systems will have to adapt, and even the microphones themselves will evolve to some degree. (There have been claims of so-called “digital microphones” over the last few years, but in reality these are microphones with the A/D converter integrated into the body itself.) In terms of microphone techniques, however, most of what worked well in the analog age will translate seamlessly to the digital era.

One area that will see some significant change is in the broadcast of multichannel sound. Not only does digital television offer the bandwidth to do it, but audiences who are accustomed to 5.1 and 7.1 audio in public and home theaters will likely demand it. Will that simply be an extension of stereo? We will see.

Stereo broadcasts have been in use for some time, mainly for music, but increasingly to add dramatic effect to other types of programming, including ENG. The most commonly used approach to getting stereo on location is the X-Y approach, using either cardioid or supercardioid polar patterns. (See Figure 1.)

The setup is basically a pair of similar mics crossed at an angle between 60 and 120 degrees. The angle determines the spread of the stereo image, and an even 90 degrees is a good starting reference point. Digital television's greater bandwidth and lower noise floor may tempt audio engineers toward wider images. However, beyond 120 degrees the X-Y technique begins to create a hole in the center of the image and a loss of mono compatibility.
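The relationship between pair angle and stereo spread can be seen in the inter-channel level difference an X-Y pair produces for an off-center source. The sketch below is illustrative only: it assumes an ideal cardioid response of 0.5(1 + cos θ) and ignores real-world factors such as capsule spacing and frequency-dependent directivity; the function names are invented for this example.

```python
import math

def cardioid_gain(theta_deg):
    """Ideal cardioid polar response: 0.5 * (1 + cos(theta))."""
    return 0.5 * (1 + math.cos(math.radians(theta_deg)))

def xy_level_difference_db(source_deg, pair_angle_deg=90):
    """Left-minus-right level difference (dB) for a source at source_deg
    (0 = straight ahead) picked up by an X-Y cardioid pair crossed at
    pair_angle_deg. Negative values mean the image pulls to the right."""
    half = pair_angle_deg / 2
    left = cardioid_gain(source_deg + half)   # left capsule aimed at -half
    right = cardioid_gain(source_deg - half)  # right capsule aimed at +half
    return 20 * math.log10(left / right)
```

With these assumptions, a centered source gives a 0 dB difference at any pair angle, and widening the pair from 90 to 120 degrees increases the level difference for the same off-center source, which is the wider image the text describes.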

Less common in broadcast — although that will likely change in the digital era — is M-S (mid-side), a technique that employs one microphone, typically a cardioid, as its M or mid component and one bidirectional (figure-eight) microphone as the S or side component. (See Figure 2.) The two are resolved into a conventional stereo signal by a sum-and-difference matrix network, producing left as M+S and right as M-S.

The greatest benefit of the M-S approach is its absolute mono compatibility. This works because when you sum the left and right channels, the side components cancel — (M+S) + (M-S) = 2M — leaving only the mid component. This also significantly reduces the ambient audio component, because the side signals are nulled and the mid component is on axis with the centerline of the signal source. Thus, the listener gets natural-sounding, location-specific audio, but the voice component is never crowded off center stage.
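The sum-and-difference matrix and its mono fold-down amount to a couple of lines of arithmetic per sample. A minimal sketch (the function names are invented for illustration; real consoles and plug-ins do the same math on continuous audio):

```python
def ms_decode(mid, side):
    """Sum-and-difference matrix: left = M + S, right = M - S,
    applied sample by sample to equal-length signal lists."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

def mono_sum(left, right):
    """Mono fold-down: L + R = (M+S) + (M-S) = 2M.
    The side component cancels completely."""
    return [l + r for l, r in zip(left, right)]
```

Because the mono sum is exactly twice the mid signal regardless of what the side microphone picked up, the ambience lives only in the stereo difference and vanishes for mono listeners — the absolute mono compatibility described above.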

A few field sound engineers have reported success using one of a few new microphones that have a dual-capsule stereo matrix built in, such as the Shure VP-88. It is a bit too elaborate for most news stories, but very useful for others, where, for instance, cars moving across the screen can be buttressed by the sound moving too. It is not a big jump from stereo to surround in that context, and these same techniques can be applied to multichannel audio, especially the M-S technique, which can provide the information for a matrixed L-C-R array. Another consideration would be to use either X-Y or M-S techniques for the stereo or surround channels, and use a dead-on mono source, such as a shotgun, for the center channel.

If human nature and the entertainment industry have taught us anything, it is that if something can be done, it will be done. As DTV provides a more sophisticated and high-resolution canvas for broadcast audio, engineers and program producers naturally will want to push the edges of what the format can accommodate. But the best advice is for broadcast to always keep the center channel paramount. Even in multichannel music recording and mixing, after much experimentation on the subject, the consensus seems to be that the center channel is the best one for conveying critical, direct information, whether it is a song lyric or a line of dialogue or news copy. The other channels — stereo or surround — will always play a supporting role.

Dan Daley is a widely published journalist covering the pro audio industry.