Where Is the Broadcast Industry Heading?

Prognostications on how an all-IP future will revolutionize media distribution

I could fill this entire article with quotes from the smartest, most knowledgeable people of bygone eras whose predictions were completely wrong, even laughable today. And the fact that I have spent more than 50 years in broadcasting—the last half of those working for Belden—gives me no real clairvoyance about the future of broadcasting. But allow me, dear readers, to tell you what I think will be happening to our industry. Your job is to tell me where you think I am wrong. If you are reading this article in 2025 or 2050, some of my predictions might be equally laughable.

YOU ARE NOT ALONE


Fig. 1: The broadcast station of the future?

As of the end of 2017, there were a little more than 33,000 broadcasters in the United States of America. This includes AM broadcasters, FM broadcasters, VHF broadcasters, UHF broadcasters and many low-power broadcasters, translators and other licensed players. It is very clear that this industry is going through a transformation where some will thrive, some will survive, some will morph into something else and some will cease to exist.

One of the problems is the size of broadcasting as an industry. If you compare it to the data industry, broadcasting is less than 1 percent the size of the data industry. And while the data industry is growing, it has its own problems, which may or may not relate to what broadcasters need. But let me start with my first obvious prediction: 

DIGITAL HAS WON

Sure, our eyes and ears are analog—until we have direct-to-brain connections, I think analog will be around for a long time. But once the originating signal has been converted from photons to electrons (video) or from air pressure to electrons (audio), it makes no sense to continue in analog. There used to be an argument that analog was better than digital, but I never hear that argument anymore. And if the quality of the final product needs to be better, we can just throw more bits into the pot. Already we can process both video and audio well beyond our ability to see or hear the result. Then it's just a question of computing power and the cost involved. Which leads to my second prediction:
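An aside for the bit-counters: "throwing more bits into the pot" has concrete arithmetic behind it. For audio, each added bit of converter resolution buys roughly 6 dB of dynamic range (the textbook 6.02N + 1.76 dB figure for an ideal N-bit converter), which is why 24-bit audio already exceeds the roughly 120 dB span of human hearing. A quick sketch:

```python
# Dynamic range of an ideal N-bit converter: ~6.02*N + 1.76 dB.
# Human hearing spans roughly 120 dB, so 24 bits is already
# beyond what our ears can resolve.
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

for bits in (16, 24, 32):
    print(f"{bits}-bit: {dynamic_range_db(bits):.2f} dB")
# 16-bit: 98.08 dB, 24-bit: 146.24 dB, 32-bit: 194.40 dB
```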

EVERYTHING IS GOING ETHERNET

This is not to say that Ethernet is the ideal format for sending and receiving absolutely everything, but that it can be modified to do a better and better job until it finally passes whatever ultimate test you wish to require. And this means that everything will look like Fig. 1.

So if you agree with me, everything that produces or sends a signal will look like the data center in Fig. 1. In other words, that picture could just as easily be captioned "TV Station," "Radio Station," "Post-Production Facility" or even "Recording Studio." Of course, the number of racks for each of these might vary widely, depending on the computing power required.

This picture could also represent 100,000 racks in Antarctica or in a geostationary satellite. (Maybe the "cloud" really will be up above the clouds!) Which leads to my next prediction:

YOU WON’T CARE

If you want to aggregate data (songs, commercials) and spoken words (air talent), and you want it to ultimately end up where 7 billion people can consume it, do you really care how it gets there? And if you could be offered quality, reliability and resiliency that you never had before, then what exactly is your problem?

So if your data goes through that data center in Fig. 1, some of the time you will need one or two racks of equipment. If you're doing something really complex, maybe a third rack for a few hours. So you don't want to own them; you just want to rent them. You'll still get a bill at the end of the month, but it's not your stuff, so you won't need a bunch of engineers.


Fig. 2: Relay rack from 1911

And this leads directly to my next prediction: 

THIS HAS ALREADY STARTED

And the first salvo of this new technology is called OCP, aka the "Open Compute Project." This started with an idea from Mark Zuckerberg, who asked his data team at Facebook (essentially): "What if you didn't need to use all the existing equipment? What if you could design a data center from the ground up? What would it look like?"

The discussion covers so much ground at this point that I would be hard-pressed to mention even the tips of the major icebergs, much less any detail (Google "OCP" if you must). It starts with the rack that holds the equipment, as shown in Fig. 2, a relay rack from 1911. For 10 points: What kind of signal was "relayed" by this electrical marvel? If you said "railroad," give yourself 10 points. This rack relayed the track-switching information in a train yard. And these racks were commonly 19 inches wide (sometimes 24 inches), using 10-32 screws (sometimes 10-24 screws).

Is this beginning to sound familiar? Like many other technologies, we have locked ourselves into a system that has no connection to our current needs and desires.


Fig. 3: Part of Facebook’s OCP project

Part of the OCP project is shown in Fig. 3.

Here is a very short list that makes this rack unique:

1. No screws. Everything in the rack is sled-mounted (slides right in);

2. No power supplies in the rack; power comes from hot rails in the back;

3. Hot swapping;

4. Power supplies are in another rack, even in another room;

5. Every module, every server, is identical (more on that below);

6. Compute power is maximized, and

7. Cooling is also maximized (flows from bottom to top—hot air rises).

Since all the servers are the same, if one fails, you just hot-swap replace it. Since all these servers are sharing signals, you (the user) won’t even know that anything failed and was replaced. These servers have within them a little bit of everything: some processing, some memory, some switching.

The whole point is to make every piece of equipment identical; then you would have thousands (or even millions) of identical pieces, and the cost of each server drops dramatically. If you needed more computing power or more memory than a single unit provides, you would use two or twenty or twenty thousand, and you would only pay for the time you used them. These servers would look something like Fig. 4.


Fig. 4: Server farm of the future?
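The "pay only for the time you used them" model reduces to simple arithmetic: identical servers times hours times an hourly rate. A sketch with invented numbers (the rate below is an assumption for illustration, not a real cloud price):

```python
# Hypothetical pay-per-use pricing for identical, rented servers.
# The hourly rate is invented for illustration only.
RATE_PER_SERVER_HOUR = 0.10  # dollars (assumed)

def bill(servers, hours):
    return servers * hours * RATE_PER_SERVER_HOUR

# Two racks' worth of servers all month, plus one extra
# for a few hours of really complex work:
base = bill(servers=2, hours=720)   # $144.00
burst = bill(servers=1, hours=6)    # $0.60
print(f"Monthly total: ${base + burst:.2f}")  # Monthly total: $144.60
```

The point of the sketch: the "burst" capacity you only need occasionally costs almost nothing compared with owning a third rack outright.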

You might recall that the data world is 100 times bigger than the broadcast world—but that 1 percent is still quite significant. The people who design these servers would like to maximize the number of customers who can use them, so if there's something you want that is different from regular old Ethernet, now would be a very good time to say something! Low latency? Zero bit errors over long distances? You should become part of the OCP team and voice your opinions! Growing the market by 1 percent simply by making some additions or changes to existing hardware? That's a no-brainer! Which leads to the next prediction:

THE REAL DIFFERENTIATOR WILL BE IN SOFTWARE

You, your industry and your needs define the software. This could change on the fly in the data center; all you need is hardware that supports it. The one thing you might need is a box at your transmitter site that can receive the data (or multiple data streams, to give you resiliency). This would decrypt your signal and feed it to your transmitter(s). From the data center, you could feed the same signal, or a modified version, to your online connections.
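That "multiple data streams for resiliency" idea can be sketched as a receiver that accepts two identical packet streams over different paths and keeps whichever copy of each packet arrives first, in the spirit of SMPTE ST 2022-7 seamless protection switching. The packet format and function names here are invented for illustration:

```python
# Sketch of redundant-stream reception: two identical streams of
# (sequence_number, payload) packets arrive over different paths.
# The receiver keeps the first copy of each sequence number and
# drops duplicates, so a loss on one path is covered by the other.
def merge_streams(path_a, path_b):
    seen = set()
    output = []
    # Interleave the two paths as packets arrive (simulated by zip).
    for pkt in (p for pair in zip(path_a, path_b) for p in pair):
        seq, payload = pkt
        if seq not in seen:        # first arrival wins
            seen.add(seq)
            output.append(payload)
    return output

# Path B lags path A by one packet, yet between them the merged
# output covers every sequence number with no gaps:
a = [(1, "frame1"), (2, "frame2"), (3, "frame3")]
b = [(0, "frame0"), (1, "frame1"), (2, "frame2")]
print(merge_streams(a, b))
```

A real receiver would also reorder by sequence number and buffer against path skew; this sketch only shows the duplicate-suppression idea.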

This will put the "broad" into broadcasting like never before. There will be no limit to where your content can go. Sure, you will have a million competitors, so you'd better do something unique and different. But that's your "sweet spot," isn't it?

Steve Lampen is a consultant for Belden.