Often, articles like this start by heralding new eras of technology in which paradigms shift and deliver a sea change destined to alter the workflow and deliver — well, you get the point. There are products that change the game and deliver functionality that was at one time unforeseen.
Video compression was first touted in technical literature as bit rate reduction and was often discussed in terms of lossless and lossy algorithms. Those early discussions, as long ago as the 1970s, were way in advance of the practical introduction of compression in the early 1990s. As often happens, the new technology was based on a desire to reduce transmission bandwidth, which equates to an economic benefit. It also comes with limitations that, once conquered, permanently change how we approach our craft.
Splicing streams, at a cost
MPEG changes deterministic pictures with fixed pixel locations and a fixed number of pixels per second (or frame) into a statistical representation of the picture with sufficient quality to fool the viewer into believing the original picture has been faithfully transmitted.
Of course, nothing could be further from the truth. In the ATSC standard, barely more than 1.5 percent of the original bit rate is used in the bit stream that represents 1080i30 content. MPEG frames are always variable in length. Even with constant bit rate content, null packets are inserted when the encoder calculates there is no meaningful picture data to transmit, stuffing the bit rate up to a fixed value.
This statistical nature of the bit stream means that deterministic switching of streams on frame boundaries is not possible in the same way that baseband signals allow.
This is for two main reasons. First, it is not possible to know precisely when the next frame will begin unless a sequence is inserted, giving a warning when the switch might happen. Second, the use of B-, I- and P-frames and the rules for their construction mean that a switch on an arbitrarily selected frame boundary to a similarly arbitrarily selected incoming frame could mean that a B-frame might be followed by a P-frame from a different bit stream. (See Figure 1.) This makes decoding impossible because of the forward and backward references each frame carries.
The workaround, developed by SMPTE and others several years ago, is a sequence that is inserted into the transport stream, giving a warning when the switch can happen. This is an inelegant solution, but one that works so well that it was adopted by SCTE for the standard that facilitates the insertion of commercials in almost all content on cable systems today (SCTE 35). Multiple splice points can be announced, and SCTE 35 allows messages to be sent in the stream, notifying the splicer when to cut.
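The frame-type rules above can be sketched in a few lines. This is a hypothetical simplification (real splicers work on coded-order streams and full SCTE 35 cue messages, and the GOP pattern, function name and splice rule here are illustrative assumptions): a splice is only safe where the incoming stream starts on an I-frame and the outgoing stream is not cut mid-reference after a B-frame.

```python
# Hypothetical sketch: why an arbitrary frame-boundary splice breaks decoding.
# B- and P-frames reference neighbouring frames, so a legal splice must land
# where the outgoing stream leaves no open references and the incoming stream
# can decode from scratch -- the kind of point an SCTE 35 cue announces.

OUT_STREAM = list("IBBPBBPBBP")   # simplified GOP frame types, one char per frame
IN_STREAM  = list("IBBPBBPBBP")

def splice_is_safe(out_frames, cut_at, in_frames):
    """Safe only if the incoming stream begins on an I-frame and the last
    outgoing frame is not a B-frame still awaiting its forward reference."""
    if in_frames[0] != "I":
        return False
    return out_frames[cut_at - 1] != "B"

print(splice_is_safe(OUT_STREAM, 5, IN_STREAM))  # cut after a B-frame -> False
print(splice_is_safe(OUT_STREAM, 7, IN_STREAM))  # cut after a P-frame -> True
```

The point of announcing splice points in advance is exactly to steer the cut toward the second case rather than the first.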
However, this allows streams to be spliced without regard to other parameters. For example, it is entirely possible, though perhaps not useful, to splice a piece of 720p content into a 1080i stream. The resulting stream would be legal, but unlikely to be decoded without errors in display.
A more likely issue is splicing streams of two different bit rates together. This can easily be handled by adjusting the bit rate of the new content to match the old content more closely. For example, if a bit stream is running at 14Mb/s and a commercial is spliced in running at only 4Mb/s, a decoder would be quite happy. If, on the other hand, the master stream was 4Mb/s and the commercial was 12Mb/s, it is quite possible the buffer in the decoder could overflow before the spot is over. It is also likely that the allowed bandwidth in the transmission channel could be exceeded, resulting in a truncated stream or worse.
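The arithmetic of the overflow case is simple enough to sketch. This is a back-of-the-envelope model, not the real MPEG decoder buffer (T-STD) behavior; the buffer size and drain rate below are assumed values for illustration. The buffer grows by the rate mismatch each second until it overflows.

```python
# Hypothetical sketch of the buffer mismatch above: a decoder provisioned
# around a 4Mb/s master stream receives a 12Mb/s commercial. Occupancy
# grows by the surplus rate each second until the buffer overflows.

BUFFER_CAPACITY_MB = 8.0   # assumed decoder buffer size, in megabits
DRAIN_RATE_MBPS = 4.0      # rate the decoder consumes (master stream rate)

def seconds_until_overflow(incoming_mbps, capacity=BUFFER_CAPACITY_MB,
                           drain=DRAIN_RATE_MBPS):
    """Return seconds before the buffer overflows, or None if it never does."""
    surplus = incoming_mbps - drain
    if surplus <= 0:
        return None            # buffer never grows: no overflow
    return capacity / surplus

print(seconds_until_overflow(4.0))   # matched rates -> None
print(seconds_until_overflow(12.0))  # 12Mb/s spot into a 4Mb/s channel -> 1.0
```

With these assumed numbers, a 12Mb/s spot overflows an 8Mb buffer in one second, which is why the mismatch matters long before the commercial ends.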
The solution is transrating
This is easily fixed today. The solution is transrating, or adjusting the bit rate on the fly to a value that does not exceed the capacity of the channel. (See Figure 2.) In the simplest terms, the quantization tables are changed to make the compression more aggressive, thus lowering the bit rate.
In Figure 2, several feeds are sent to a mux, exceeding the capacity of the channel. After transrating, however, the aggregate bandwidth fits in the available channel. Not all of the feeds would necessarily be scaled, and perhaps one of the feeds contains considerable null packets that can simply be dropped to reduce the capacity needed.
Reducing the bandwidth in this way affects quality. Some consumer delivery services heavily modify incoming streams to minimize bandwidth and maximize channel count. This meets consumer demand for more channels, though at the expense of maximum quality. You could, in principle, reverse the process and increase the bit rate, but because the original content has already been discarded, doing so would only add null packets and would not improve the pictures. You can also create a stat mux from a group of unrelated feeds by calculating the bandwidth required at all times and adjusting the individual feeds to minimize the impact on the quality of the final multiplexed stream.
At one time it was assumed that MPEG streams couldn't be edited. However, various techniques have been developed that make what was intended as a consumer distribution chain technology work in many applications, including editing. Today, complete workflow solutions can be created in the compressed domain. The slow march of technology will certainly make many new techniques possible in the future.
I once told a futurist in our industry he was nuts when he said you could put MPEG encoders in cameras and make transmission over long distances cheaper. Although not practical today, I would no longer question the sanity of such an approach.
John Luff is a broadcast technology consultant.
Send questions and comments to: email@example.com