Stan Moote, Creative Planet
05.20.2014 03:00 PM
Will IP Take Over Traditional Broadcast Facilities?
This revolution brought to you by the Internet
MULTIPLE CITIES—Ask broadcasters about IP in their facility and you are bound to get conflicting stories. These stories will range from “I never put plant video over IP” to “I use IP everywhere, 100 percent.” My take is that neither extreme is true. To understand this, let’s look at how IP crept into the die-hard, baseband-centric broadcast facility. We’ll examine where IP is fully entrenched today, where it isn’t in use, and why.
Figure 1. Plant wiring and equipment placement are traditionally based on workflows.


First off, let’s get everyone on the same page. We all know IP is “Internet protocol,” but it is easily confused with IT, “information technology.” This leads to confusion about networking, and Ethernet in particular. Broadcasters who deal with IP on a daily basis often still think with a coax mindset: getting signals from source A to destination B. The differences go beyond the electrical specs. Ethernet is full-duplex and packet-based, and packets can be routed in a seemingly random manner, arriving out of order and introducing significant jitter and timing difficulties in genlocked facilities.
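To make the contrast concrete, here is a minimal Python sketch of two chores a coax plant never had: re-ordering packets by sequence number, and estimating interarrival jitter with the running estimate RTP receivers use (RFC 3550). The packet values are invented for illustration.

```python
# Minimal sketch: re-ordering packets by sequence number, as an RTP-style
# receiver must, and estimating interarrival jitter per RFC 3550.
# The packet timing values below are invented for illustration.

def reorder(packets):
    """Return packets sorted by sequence number (a tiny de-jitter buffer)."""
    return sorted(packets, key=lambda p: p["seq"])

def interarrival_jitter(packets):
    """RFC 3550 running jitter estimate: J += (|D| - J) / 16."""
    jitter, prev = 0.0, None
    for p in sorted(packets, key=lambda p: p["seq"]):
        if prev is not None:
            # D = change in (arrival time - media timestamp) between packets
            d = (p["arrival"] - p["ts"]) - (prev["arrival"] - prev["ts"])
            jitter += (abs(d) - jitter) / 16.0
        prev = p
    return jitter

# Packets arrive out of order with uneven spacing (values in milliseconds).
pkts = [
    {"seq": 2, "ts": 40, "arrival": 47},
    {"seq": 1, "ts": 20, "arrival": 29},
    {"seq": 3, "ts": 60, "arrival": 61},
]
print([p["seq"] for p in reorder(pkts)])    # [1, 2, 3]
print(round(interarrival_jitter(pkts), 2))  # ~0.49 ms the receiver must smooth
```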

Breaking down a facility into a few key functions helps to pinpoint where IP is in use today and better understand how the shift to 100 percent IP will ultimately happen over time.

Business systems were the early adopters, and by this I mean back-office systems like scheduling, ad sales, traffic and billing, which have been part of the IT world for decades. They are connected using IP with either proprietary interfaces or standardized protocols such as BXF (Broadcast eXchange Format, or SMPTE-2021).

Those dealing in written communication, logs, alerts and even task and project delegation quickly adopted e-mail as the central tool, driving the demand for LANs to connect desktop computers and mail servers. Although this was a positive force, the shift also had negative ramifications for IP in broadcast facilities, as I will describe shortly.

Editing and graphics systems were the first to actually get IP into workflows. Now these suites are rapidly becoming 100 percent file-based operations. Sure, a few VTRs are used here and there, but this is no longer the norm. Files are transferred around using IP, from editing systems to archive and playout servers, using MXF and other file formats.
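As a rough illustration of this kind of file-based handoff, here is a minimal Python sketch that pushes finished MXF files from an edit suite’s export folder into a playout server’s watch folder over a mounted network share. The paths, and the copy-then-rename pattern used to avoid half-written files, are assumptions about a typical setup, not any particular vendor’s interface.

```python
import shutil
from pathlib import Path

# Hypothetical mount points for the edit export and playout watch folders.
EDIT_EXPORT = Path("/mnt/edit/exports")
PLAYOUT_WATCH = Path("/mnt/playout/ingest")

def push_finished_clips():
    for clip in EDIT_EXPORT.glob("*.mxf"):
        tmp = PLAYOUT_WATCH / (clip.name + ".part")
        shutil.copy2(clip, tmp)                 # copy first...
        tmp.rename(PLAYOUT_WATCH / clip.name)   # ...then rename, so the playout
        # server's watch folder never picks up a half-written file.

push_finished_clips()
```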

News operations also quickly got into the mix, using IP-interconnected systems to run teleprompters, keep track of stories, operate cameras and studio lighting remotely, and manage both video and graphics. MOS has been the interface of choice here.
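MOS is XML exchanged over TCP between the newsroom computer system (NCS) and its connected devices. The sketch below shows the general shape of a MOS-style heartbeat; the element names follow the MOS pattern only loosely, and the host, IDs and port are hypothetical.

```python
import socket
from datetime import datetime, timezone

MOS_PORT = 10540  # commonly cited MOS "lower port"; treat as an assumption

def heartbeat_xml(mos_id: str, ncs_id: str) -> str:
    # Illustrative MOS-shaped message, not a complete or validated schema.
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")
    return (
        "<mos>"
        f"<mosID>{mos_id}</mosID>"
        f"<ncsID>{ncs_id}</ncsID>"
        f"<heartbeat><time>{now}</time></heartbeat>"
        "</mos>"
    )

msg = heartbeat_xml("playout.station.example", "ncs.station.example")
with socket.create_connection(("ncs.station.example", MOS_PORT), timeout=5) as s:
    s.sendall(msg.encode("utf-8"))
```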

Plant ingest and playout are interesting hybrid animals. In looking at what is really going on, it is no wonder there is uncertainty about IP. File-based video comes into the plant via IP or satellite, often into a catch server. This server transfers clips, often via baseband SDI, into archive and playout systems. More advanced operations stick with files and move these directly into the server for playout.

Why do so many facilities still transfer these files in the baseband domain? I pin it down to two reasons: comfort and the lack of file format compatibility. There are so many flavors of file formats with different audio and video encoding schemes that most servers cannot guarantee 100 percent playout reliability. So the solution is pretty simple. If you want to keep operations reliable, use SDI and suffer through the multiple-pass encode/decode artifacts.
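One pragmatic middle ground is to probe each incoming file and pass only known-good codecs straight through as files, dubbing everything else via SDI. Here is a sketch using ffprobe (part of FFmpeg); the “known good” list is hypothetical and would mirror a real playout server’s actual capabilities.

```python
import subprocess

# Codecs the (hypothetical) playout server is trusted to play natively.
KNOWN_GOOD = {"mpeg2video", "dvvideo", "prores"}

def video_codec(path: str) -> str:
    """Ask ffprobe for the first video stream's codec name."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

def ingest(path: str) -> str:
    codec = video_codec(path)
    if codec in KNOWN_GOOD:
        return f"{path}: {codec} is trusted, transfer the file directly"
    return f"{path}: {codec} is unproven, dub via SDI (extra encode/decode pass)"

print(ingest("incoming/promo_0231.mxf"))
```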

Why do we have so many different compression formats? This is both historical and application-dependent. Some applications require low latency; others don’t care about latency at all. Some are concerned about bandwidth and storage consumption, while others are focused on maintaining quality.

On a practical note, the aforementioned uses of IP don’t involve real-time, full-bandwidth video streams. IP is being used mainly for files, control and other data moving around the plant.

It is essential to look at some realistic calculations to understand the value of IP technology for transporting full-bandwidth SD, HD and UHD signals around a facility. Weighing the full-bandwidth streaming video rates in plants today, and those coming, against Ethernet rates, you can see that GigE is only practical for standard-definition video running at 270 Mb/s (see Table 1). To be realistic, a facility needs, at a minimum, 10 GigE network capacity.

 

            1 GigE   10 GigE   40 GigE   100 GigE
SD             3        37       148        370
HD             0         6        26         66
1080p60        0         3        13         33
UHDTV1p24      0         1         6         16
UHDTV1p60      0         0         3          8



Table 1. Number of full-bandwidth video streams that fit on a single Ethernet link.
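The arithmetic behind Table 1 is plain division: streams per link equals link rate divided by video rate, rounded down. The Python sketch below reproduces the figures using the nominal rates broadcasters quote (270 Mb/s SD, 1.5 Gb/s HD, 3 Gb/s 1080p60, 6 and 12 Gb/s for UHD); real links lose a few percent more to packet overhead, which this ignores.

```python
# Streams per Ethernet link = floor(link rate / video rate), nominal rates.
VIDEO_GBPS = {"SD": 0.27, "HD": 1.5, "1080p60": 3.0,
              "UHDTV1p24": 6.0, "UHDTV1p60": 12.0}
LINK_GBPS = {"1 GigE": 1, "10 GigE": 10, "40 GigE": 40, "100 GigE": 100}

for fmt, rate in VIDEO_GBPS.items():
    counts = {link: int(capacity // rate) for link, capacity in LINK_GBPS.items()}
    print(f"{fmt:>10}: {counts}")
```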

10 GigE is still pretty new, but then HD is also still pretty new from a broadcaster’s perspective. All this aside, a new plant should be designed for 1080p running at 3 Gb/s at a minimum. Video products with 3G interfaces have been around for years. They are stable and reliable.

So why even move over to IP? The answer is future-proofing and streamlined workflows. Figure 1 shows a snapshot of how systems are wired and configured for ingest, edit and playout. Signal flows dictate the equipment and wiring for today’s operations.

We have no idea of the future mix between live, streaming, compressed and file-based operations. If somehow we can run them all together using an IP backbone, our future-proof comfort level is high. (See Figure 2.)

Figure 2. Having an IP backbone makes workflows highly configurable.
In researching some of the world’s most technically advanced broadcast facilities and their newer system designs, I learned that they all have a clear mandate to move to IP for video streams inside their plants. Serious consideration is given to compression formats, to IP networking that can handle both streaming and file-based operations, and to control. Running full-bandwidth HD video over IP is far from a simple task, but the transmission is free from compression artifacts and latency issues. In a perfect world, all I/Os would be plugged into the facility backbone, configuration and setup could be done anywhere, bandwidth would appear infinite and non-blocking thanks to provisioning, and single connections would carry multiple feeds rather than point-to-point, jack-field-style connections.
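On such a backbone, each source would typically be a multicast stream that any device can join on demand, rather than a dedicated wire. Here is a minimal receive-side sketch in Python; the group address and port are hypothetical.

```python
import socket
import struct

# Receive side of an IP video backbone: a device "patches in" to a source by
# joining its multicast group instead of being wired to a router output.
GROUP, PORT = "239.10.20.30", 5004  # hypothetical group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
sock.settimeout(5)

# Ask the network (via IGMP) to start forwarding this group to us.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, src = sock.recvfrom(2048)  # one media packet from the backbone
print(f"received {len(packet)} bytes from {src}")
```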
 
Now I will get back to my point about e-mail having had a negative impact. E-mail systems are generally thought of as unreliable. Clients crash, servers get overloaded, LANs get saturated and e-mails often mysteriously get “lost.” This always seems to happen during on-air crunches. Chief engineers have these bad experiences repeatedly, prompting a negative association with the technology. And it is their day-to-day reactions that keep stations on air.

Many of the new plant designs that have embraced IP started out with an assumption that they would not just have “islands of IP” but would be only IP, and were designed accordingly. When the designs were finalized, the engineers at the management level got nervous—perhaps because of the previous e-mail failures they had experienced. They just couldn’t trust IP yet. I found this amazing as I never saw this happen during the transition from analog to digital (perhaps because digital used coax and BNCs!).

This apprehension resulted in what I would call a “dual layer” approach: the buildings are wired and designed to work using IP, and a second layer of coax wiring with routing is added to handle HD over a 3G-SDI infrastructure. All major functional points in the IP design can be switched over to SDI, making the engineering department less anxious.
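Expressed as control logic, the dual-layer idea is simple: watch the IP path and, if it degrades, command the SDI router to take the same functional point. The sketch below uses entirely hypothetical stub functions for the health check and router control, standing in for whatever monitoring and router protocol a given plant uses.

```python
import time

LOSS_THRESHOLD = 0.001  # acceptable packet-loss ratio (an assumption)

def ip_packet_loss(feed: str) -> float:
    """Hypothetical hook: would read the IP receiver's RTP/RTCP loss stats."""
    return 0.002  # stubbed value so the sketch runs end to end

def route_sdi(destination: str, source: str) -> None:
    """Hypothetical hook: would send a take command to the SDI router."""
    print(f"SDI take: {source} -> {destination}")

def watchdog(feed: str, destination: str, sdi_source: str, checks: int = 10) -> None:
    for _ in range(checks):
        if ip_packet_loss(feed) > LOSS_THRESHOLD:
            route_sdi(destination, sdi_source)  # drop back to the coax layer
            return
        time.sleep(1)

watchdog("pgm-1-ip", "playout-in-1", "pgm-1-sdi")
```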

The bottom line is that broadcast facilities are already heavily reliant on IP for business systems, file-based activities, lower-resolution proxies, backhauls, and real-time control of newsrooms, cameras, lighting, intercoms and automation, yet they use it only sparingly for full-bandwidth streaming. Table 1 is daunting precisely because it means putting more than one full-bandwidth video on a single IP connection, yet we have accepted exactly that in every other area of the plant. Once we overcome this fear and are convinced there is a solid benefit in moving a plant’s 3G infrastructure to a reliable, cost-effective IP backbone, everything will progress quickly. It’s not here yet, but it’s just around the corner.

Based in Toronto, Canada, Stan Moote has served in different capacities within the broadcast industry for the past three decades. He has won multiple technology Emmy Awards and holds several patents.


Comments

Posted by: Michael Drzymkowski
My eyes were opened a few years back when we first integrated our ShowMakerPro TV Automation software to run entirely in the IPTV realm on the Thomson Sapphire. It is really cool to have dozens of channels on a single wire, then "cherry pick" from the MUX the channel you want to manipulate (i.e., record, insert to, brand, etc.). It made me realize that in a single rack unit we could replace an entire rack of equipment (routing switchers, demodulators, A>D and D>A converters, etc.). Truly amazing. A 32x32 routing switcher is a nice thing to virtualize; however, with 1080p signals limited to a total of 3 per 10 GigE, getting those 32 signals into something is going to chew up 11 lines on your 10 GigE switch... still a savings of cabling and space. Realistically it only makes sense to skip directly to 100 GigE. Now those 32 signals are on a single line -- some amazing streamlining happening. As a group that manufactures TV automation software, we find this tech extremely exciting and we are enjoying our IPTV projects. The bi-directional nature of a connected device via a single Cat 5/6 cable is pretty cool! At first it takes some getting used to the idea that, for example, a server doesn't per se have dedicated lines for inputs and/or outputs. The main challenges I'm thinking of are getting the average house techs up to par on setting up the configuration and keeping it maintained; interfacing with the inevitable non-IP gear we find in most every facility, such as satellite receivers, multi-viewers, HDMI TVs, and I/O for the studio production switcher; and having boatloads of redundancy to the connected devices and the "main" Gig switch, considering that if that switch goes down....



