Increase your understanding of this important piece of broadcasting equipment.
Multiviewers have become ubiquitous in broadcast control rooms. The flexibility of displaying multiple images at various resolutions in an active, situationally aware display has many benefits. Building on key points from the last article, “Understanding blocking capacitor effects” (August 2011), let's continue our review of broadcast and production equipment design considerations, this time with respect to multiviewers.
As you may remember, my first task in the video broadcast industry was to build an optical-to-electrical converter and to evaluate small form-factor pluggable (SFP) transceivers from different vendors. That assignment was promptly interrupted due to a change in priority, and I moved to the multiviewer group to add manpower. At that time, I had little first-hand information about multiviewers, and Wikipedia didn't help much either. So I had to learn the hard way. Fortunately, as I look back on the experience, I can say that we had a talented team, and the product we developed turned out to be a great success.
A full discussion of the multiviewer by itself could take more space than this entire issue, so I will attempt to explain it from a 10,000ft view. If portions of this technology generate questions or you want to discuss these issues, please let me know, and in my blog or through Broadcast Engineering's Disqus, I can explain things in more detail.
The global functionality of the multiviewer can be summarized like this: Take multiple video and audio signals and combine them to create attractive and flexible layouts. In addition, some multiviewers also have the ability to analyze each input signal to be sure the signals are correct and without errors.
Basically, multiviewers are composed of several types of common cards: input video card, input audio card, GPIO card, output card, interconnect card and, in some cases, router cards. (See Figure 1.)
The input video card
The input video card is responsible for receiving the multiple video feeds. The number of inputs typically varies from four to 16, depending on the size of the multiviewer. The first task of the input video card is to receive the signals and perform any needed equalization or optical-to-electrical conversion. One of the new trends is to use SFP modules to handle this task. Because of the variety of signal types — coaxial SDI, optical SDI, composite, DVI, component video — and the pressure to release products faster and faster, it makes sense for manufacturers to use a generic connector (the SFP cage), which enables them to support all video formats without creating a variety of different cards.
In the first stage, the shape of the signal is restored and the signal is converted to SDI. After Stage 1, the serial data is converted to parallel data for processing. Why? The reason lies in the speed of the SDI signal. To process the data with cost-justifiable technology, it is advantageous to first convert the serial stream into parallel words. This is the deserializer's job: take the serial data and parallelize it to ease further processing of the video.
The video can now be upscaled or downscaled more easily. We will soon have the technology to process the data without deserializing it, but for now, the data must be processed in parallel form. The next processing steps are now feasible: deinterlacing, upscaling, downscaling, color conversion, audio analysis, audio extraction, etc. All of these functions are possible on the parallel video data.
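The serial-to-parallel idea behind the deserializer can be shown in a few lines. This is only a conceptual sketch in software; a real SerDes does this in dedicated hardware at gigabit rates, and the word width and bit order here are illustrative assumptions:

```python
def deserialize(bits, word_width=20):
    """Group a serial bitstream into parallel words (MSB first).

    Illustrative only: a hardware deserializer performs this at
    multi-gigabit line rates, typically 10 or 20 bits at a time.
    """
    words = []
    for i in range(0, len(bits) - word_width + 1, word_width):
        word = 0
        for bit in bits[i:i + word_width]:
            word = (word << 1) | bit
        words.append(word)
    return words

# 40 serial bits become two 20-bit parallel words,
# each clocked at 1/20th of the serial rate.
stream = [1, 0] * 20
print([bin(w) for w in deserialize(stream)])
```

Once the data is in parallel form, each processing stage only has to run at the (much lower) word clock rather than the serial bit rate, which is exactly why the scaling and deinterlacing steps become feasible.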
Depending on the multiviewer's architecture, the functions implemented by the input video card vary from deserialization only to full scaling. To better understand the input card functionality, let's reconsider the example multiviewer input card. (See Figure 2.)
In this case, each of the eight video feeds on each input card represents a 3G-SDI, 1080p signal. Each signal is first equalized/converted at Stage 1 and then deserialized at Stage 2. This represents 8 × 2.97Gb/s, a total of 23.76Gb/s of aggregate data.
The third stage is to probe and/or extract auxiliary data, which may include audio, teletext, closed captioning, etc. This is an optional step. The video data itself is processed in the fourth stage, where the signal may be deinterlaced (not required for 1080p) and then scaled. This stage is crucial, and each multiviewer manufacturer has its own secret sauce to maintain the quality of the content.
Let's assume that in Stage 4 we downscale every input by a factor of eight so that all eight inputs appear at the same size on one display. The aggregate data between this input card and the output card is now 8 × 2.97Gb/s ÷ 8 = 2.97Gb/s.
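The input-card arithmetic above can be checked with a short script. The only figure it relies on is the 3G-SDI rate of 2.97Gb/s from the example:

```python
SDI_3G = 2.97        # Gb/s per 1080p 3G-SDI feed (from the example)
num_inputs = 8       # feeds per input card
scale_factor = 8     # each input downscaled by 8x in Stage 4

aggregate_in = num_inputs * SDI_3G          # raw data after deserialization
aggregate_out = aggregate_in / scale_factor # data toward the output card

print(f"aggregate into card:   {aggregate_in:.2f} Gb/s")   # 23.76
print(f"aggregate after scale: {aggregate_out:.2f} Gb/s")  # 2.97
```

The same two lines make it easy to see why an unscaled pass-through mode forces the interconnect to carry the full 23.76Gb/s.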
To send this data to the output card for final scaling, color correction and picture-in-picture, along with closed captioning, VU meters, VITC, etc., the data channel needs to support from almost 6Gb/s (when SD-SDI signals are downscaled) up to 23.76Gb/s for eight unscaled inputs. To accomplish this in a cost-effective manner, the input card usually reserializes the data (Stage 5) before sending it onto the interconnect card. An important factor when designing a multiviewer input card is to perform this step with uniform latency: any delay induced by processing must be constant across all inputs. This becomes even more important when processing 3-D imagery that originates as two separate sources.
In many cases, the interconnect card is a passive card. It can be seen as a large array of cables that connect every input card and every GPI card to every output card. This card also contains the communication links to control every other component in the multiviewer: input cards, output cards, GPIO cards and audio cards. One of the most important links on the interconnect card is the synchronization link. This includes Hsync, Vsync, Fsync, clock and proprietary links to synchronize all the cards together.
Different architectures have been implemented to connect inputs to outputs. These implementations are often thought of as daisy chain, bus or point-to-point. However, with the increase in signal speed, the point-to-point architecture has become the most common design for video links. (See Figure 3.)
For the communication channel, a daisy-chain approach can be used. (See Figure 4.) But, as with the video links, increasing data speeds have made this more difficult. Usually, the internal communication link is composed of two or more communication paths to ensure system redundancy. In our example (see Figure 5), output card 4 is the origin of the master path (colored red), and output card 3 is the origin of the redundant path (colored orange). The communication link also carries external communication to allow the user to take full control of the multiviewer. Nowadays, the external communication link is typically Ethernet (with SNMP), allowing multiple users to share the same control path and statuses.
Finally, the synchronization link is composed of Hsync, Vsync and other synchronization signals. This link can be a daisy chain, like that shown with the blue line in Figure 4. However, the delay from the first card in the system to the last should be nearly identical. The delay on a printed circuit board is typically 150ps to 180ps per inch, which is not a significant factor, but buffers in the path are critical. Let's assume a standard rack-unit chassis where the printed circuit board is 19in long. (See Figure 6.)
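Putting numbers on that trace delay shows why it is negligible. Using the article's 150ps-to-180ps-per-inch figure across a 19in board:

```python
BOARD_LENGTH_IN = 19              # standard rack-width PCB, per the example
DELAY_PS_PER_IN = (150, 180)      # typical PCB propagation delay range

for delay in DELAY_PS_PER_IN:
    total_ns = BOARD_LENGTH_IN * delay / 1000  # ps -> ns
    print(f"{delay} ps/in -> {total_ns:.2f} ns end to end")
# 150 ps/in -> 2.85 ns; 180 ps/in -> 3.42 ns
```

A few nanoseconds end to end is tiny next to a video line time, which is why the buffers in the sync path, not the copper itself, are what the designer has to watch.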
If this delay is not acceptable (architecture dependent), a point-to-point architecture can be used instead.
For the current crop of multiviewers, each vendor does it differently, and the interconnect card can be created in a thousand different ways. Let's use Figure 3 as the reference for our multiviewer discussion.
Each input card sends one video link to each output card, and there are four output cards. This means the interconnect card carries 4 × 4 high-speed links for video signals. But this is theoretical, and it depends on the bandwidth between input cards and output cards. In practice, two or more links are used per video path. Let's now take a look at the output card.
Output video card
In our example, each output card can receive video from four input cards, each carrying eight video inputs. Under a worst-case scenario, each output card will receive the entire video stream from all input cards (a 32-input stream). Let's assume each output card drives one monitor with a maximum resolution of 4K (3840 × 2160 pixels).
Because we can have 4 × 1080p images displayed on this monitor, the worst-case aggregate bandwidth will then be 4 × 2.97Gb/s = 11.88Gb/s. We will ignore any saved bandwidth from removing the blanking period. (Only the active picture is usually transmitted between the input and output cards to save bandwidth.)
The question the system designer needs to answer is: Can all 4 × 1080p images come from one input card? If so, the bandwidth of each input card is now 11.88Gb/s maximum. With today's FPGAs, 11.88Gb/s can be achieved with one high-speed link, commonly called a serializer/deserializer, or SerDes.
Each input card can send one or two links to the output card for processing, so each output card receives up to eight links from the input cards. The processing power of the output card therefore must handle 4 × 11.88Gb/s, or 47.52Gb/s, of video data. This is a large amount of data to process in real time. Often the video data is encoded with a well-known scheme such as 8b/10b to ease clock recovery during deserialization. Using 8b/10b, the final line rate becomes 47.52Gb/s × 1.25, or 59.4Gb/s. This requires that the output card be equipped with an FPGA powerful enough to process this amount of video data.
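The output-card budget, including the 8b/10b overhead, follows directly from the figures above. 8b/10b maps every 8 payload bits onto 10 line bits, so the line rate is 10/8 (1.25×) the payload rate:

```python
SDI_3G = 2.97        # Gb/s per 1080p feed (from the example)
INPUT_CARDS = 4      # input cards feeding one output card
FEEDS_PER_CARD = 4   # worst case: 4 x 1080p per input card toward this output

payload = INPUT_CARDS * FEEDS_PER_CARD * SDI_3G  # video data to process
line_rate = payload * 10 / 8                     # 8b/10b: 10 line bits per 8 data bits

print(f"payload:   {payload:.2f} Gb/s")   # 47.52
print(f"line rate: {line_rate:.2f} Gb/s") # 59.40
```

The 25-percent penalty is the price paid for DC balance and easy clock recovery on the backplane links, and it is why the FPGA's transceivers must be rated well above the raw video payload.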
The first stage performs the deserializing process. (See Figure 7.) Once this is done, further data processing can be done in Stage 2, such as final scaling and color conversion (from YCbCr to RGB). Following the second stage is the positioning and the graphic muxing. This is where the images are placed at the correct location in the final layout. Graphics are also muxed to allow the user to add transparency, pictures-in-pictures and fancy layouts like rotating images. Multiviewers also must be capable of displaying closed captioning, VU meters, tallies, and other signals and alerts.
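The YCbCr-to-RGB step in Stage 2 can be sketched with a minimal full-range BT.709 conversion. This is a floating-point illustration with assumed BT.709 coefficients; a real output card runs the matrix per pixel in fixed-point FPGA logic, and the exact coefficients depend on the color standard in use:

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one full-range 8-bit BT.709 YCbCr sample to RGB.

    Sketch only: output cards implement this in fixed point,
    with coefficients chosen for the signal's color standard.
    """
    cb -= 128  # chroma samples are centered on 128 in 8-bit video
    cr -= 128
    r = y + 1.5748 * cr
    g = y - 0.1873 * cb - 0.4681 * cr
    b = y + 1.8556 * cb
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

print(ycbcr_to_rgb(128, 128, 128))  # neutral gray stays gray
```

Because the conversion is a fixed 3×3 matrix per pixel, it pipelines cleanly in hardware and adds only a constant, uniform latency, consistent with the design constraint noted for the input cards.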
The final stage is another serialization, this time to format the data to be received by the monitor or to send to another destination (SDI, streaming, etc.). The graphic objects are generally created by an internal CPU that controls the layout of the objects and the system behavior, SNMP, etc.
The software is a key component of any multiviewer. This applies not only to the graphical objects and final layout, but also to any external devices the multiviewer may need to control. Such software should remain easy to understand and use. With remote access, it should run on different platforms and sometimes from different locations throughout the world, thanks to the Internet and thumbnail streaming.
The multiviewer's complementary cards include any other supporting features needed to make this device work in the real world. While these cards are important, they are also often less complex. Let's review some of the more common cards used on multiviewers.
The GPIO card is used to add even more functionality to a multiviewer. These cards control the tallies and visually report alarms. The GPIO card is typically equipped with either 64 or 128 general-purpose inputs and outputs to control other devices in the studio (such as microphone mute, communication request, ring alert, starting a play-out system, red lights, studio warning lights, etc.). The inputs are tolerant of 24V, and the outputs are isolated by optocouplers to permit easy interfacing.
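Logically, such a card is little more than a bank of addressable output lines. The sketch below is a toy software model, with hypothetical pin assignments invented for illustration; a real GPIO card drives these lines through its optocoupler-isolated outputs:

```python
# Hypothetical pin assignments -- purely illustrative.
TALLY_RED = 0
MIC_MUTE = 1
STUDIO_WARNING = 2

class GpioCard:
    """Toy model of a 64-output GPIO card held as a bitmask."""

    def __init__(self, num_outputs=64):
        self.num_outputs = num_outputs
        self.outputs = 0  # one bit per output line

    def set_output(self, pin, active):
        if not 0 <= pin < self.num_outputs:
            raise ValueError("pin out of range")
        if active:
            self.outputs |= 1 << pin
        else:
            self.outputs &= ~(1 << pin)

    def is_active(self, pin):
        return bool(self.outputs >> pin & 1)

card = GpioCard()
card.set_output(TALLY_RED, True)  # e.g. camera 1 goes on air
print(card.is_active(TALLY_RED))
```

In practice the multiviewer's alarm engine drives these outputs, so a detected fault (frozen video, silent audio) can close a contact and light a warning lamp without operator intervention.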
The genlock card is used to synchronize the output of the multiviewer to the reference signal of the studio. The genlock card can also create all the clocks used in the multiviewer to ensure that all boards receive the same clock (remove the drift between clocks) and maintain clock phase.
The need for audio monitoring is obvious. A multiviewer provides a convenient platform to see visually what is happening on both video and audio sources. Also, multichannel operation precludes operators from hearing the audio from perhaps hundreds of channels. Visual displays, along with automated silence sensors, make the process far less cumbersome. Instead of requiring a separate device to analyze the audio sources, the multiviewer can do it and alert the operator only when necessary.
Multiviewers and IP
As IP technology moves into the traditional serial digital video master control room, multiviewers will need to adapt. Some multiviewers already accept encoded IP streams directly; others are able to receive thumbnails for monitoring.
Another growing trend is to embed the multiviewer in routers and other equipment. Some vendors already embed the monitoring router function inside the multiviewer. For some applications, it may make sense to minimize the interconnect between the router and the multiviewer by locating the devices close together, even making the multiviewer a component inside the router. When this approach is taken, manufacturers will need to make it a robust device, because the router is the most crucial device in any broadcast or production environment.
Renaud Lavoie is president and CEO of Embrionix Design.