Who Needs the Cloud?


JOHNSTON, IOWA—Since returning from NAB, I have been giving a lot of thought to “the cloud” and how it may impact the workflow at Iowa Public Television.

First, I would like to explain my interpretation of “cloud.” In the early 1970s, I started writing computer programs in high school to solve math equations, initially using punch cards and a light reader that were fed into a small computer called the Comp-U-Core. I moved from machine language to Fortran to Basic, where programming was via a Teletype Corporation ASR33 terminal with a paper tape reader/punch connected by a dedicated phone circuit to a Hewlett-Packard 2000C time-share computer... this was my first experience with “cloud-based” computing. So from my point of view, the concept of cloud-based services is no newer than the wireless delivery of television programs to receivers.

IS CENTRALCAST ALREADY THERE?

Within the PBS member station community, there are a couple of very high-profile centralcasting projects being implemented in New York and Florida. From the point of view of the participating stations, these would be cloud services. Functions such as switching and graphics insertion take place outside the station (in the cloud), and the final mixed product is delivered from the cloud to the station for distribution via its local transmitter system as well as cable and satellite services. Local content is created on-site and then uploaded into the cloud for inclusion with national network and syndicated programming.

As I understand it, the national and syndicated programming is actually captured and stored at the cloud facility rather than at the local station, so that all of the stations using this cloud-based service work from a single (but fully redundant) copy of shared content rather than each station storing a copy of the same content at its local facility. In this environment, the shared master control treats each individual station’s content stream as a separate, individually processed channel and pulls common shows from a single library.
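To make the single-copy, many-channels idea concrete, here is a minimal Python sketch of how such a shared library might be modeled. Everything in it is invented for illustration; the class names, storage paths and call sign are hypothetical and are not drawn from the actual New York or Florida implementations.

```python
# Hypothetical sketch of the "single shared copy" model described above.
# All names and paths are illustrative, not from any real centralcast system.

class SharedLibrary:
    """One (redundant) store of national/syndicated content for all stations."""
    def __init__(self):
        self._assets = {}  # asset_id -> media reference

    def ingest(self, asset_id, media_ref):
        self._assets[asset_id] = media_ref  # stored once, shared by everyone

    def get(self, asset_id):
        return self._assets[asset_id]

class StationChannel:
    """A station's playout channel, processed individually in the cloud."""
    def __init__(self, call_sign, shared_library):
        self.call_sign = call_sign
        self.shared = shared_library
        self.local_assets = {}  # content uploaded by this station only

    def upload_local(self, asset_id, media_ref):
        self.local_assets[asset_id] = media_ref

    def resolve(self, asset_id):
        # Station-specific content wins; otherwise pull the single shared copy.
        return self.local_assets.get(asset_id) or self.shared.get(asset_id)

# One library, many channels: shared shows are stored exactly once.
library = SharedLibrary()
library.ingest("national-ep-512", "store://shared/national-ep-512.mxf")

channel = StationChannel("KDIN", library)
channel.upload_local("pledge-break-03", "store://kdin/pledge-break-03.mxf")

print(channel.resolve("national-ep-512"))   # pulled from the shared copy
print(channel.resolve("pledge-break-03"))   # this station's own upload
```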

This is no doubt an oversimplification of the cloud-based facilities, but I think it is conceptually correct. The benefit to the individual stations is increased efficiency and cost savings by eliminating the need for standalone master controls at every station. I remember discussing this same concept while working at NBC affiliates in Honolulu and West Virginia in the 1980s and ’90s, after the network had switched to satellite delivery and was providing real-time feeds to all time zones other than Hawaii and Alaska, where we were required to manually tape-delay East Coast feeds. This was still better than the old process of getting tapes of the network programs in Honolulu, airing them a week later than they aired on the mainland, and then bicycling them up to Alaska, where they aired two and then three weeks later.

The primary objection from the network affiliates that I remember was that if there was an error at the network, it went out nationally, not just to the Eastern and Central time zones. In this case, 30 Rock became the cloud, and the affiliates weren’t comfortable with the concept, especially the occasional Saturday Night Live F-bomb that would make headlines in the East and yet never be heard west of the Mississippi. Obviously times, technology and broadcasting have changed, so centralcasting is once again a hot topic.

LOW-HANGING FRUIT

The concern I have with the centralcast model is that it looks for savings in an area where most stations have very little potential left. Since the dawn of primitive sequencing and early automation, master control has been the “low-hanging fruit,” and I am not convinced that there is all that much more money to save. All of the facilities I have been involved with have automated master controls that are run by a single person or are unattended. Even when the room is manned, the operator is seldom running master control; they are performing numerous other station functions, most of which do not go away if master control is moved to a cloud. The local operator’s primary interface with master control is to deal with exceptions, and those are becoming less and less frequent.

In addition, based on my preliminary research into companies offering master control as a service, I am hearing figures between $20K and $40K a month, not including the last-mile connection to the cloud. I am not convinced that there is that much money to be saved in the average master control operation.
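Annualizing those figures makes the gap easier to see. The quick calculation below uses the quoted fee range; the offsetting staffing savings is a purely hypothetical number inserted for illustration, not a figure from any station’s books.

```python
# Back-of-the-envelope check on the quoted service fees. The fee range comes
# from the vendor conversations mentioned above; the staff-savings figure is
# an assumption for illustration only.
low, high = 20_000, 40_000              # quoted monthly service fee ($)
annual_low, annual_high = low * 12, high * 12
print(f"Annual service cost: ${annual_low:,} to ${annual_high:,}")
# -> $240,000 to $480,000, before the last-mile connection

assumed_staff_savings = 150_000         # hypothetical fully loaded savings/yr
print(f"Shortfall at the low end: ${annual_low - assumed_staff_savings:,}")
```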

I’d like to do more serious research on the idea of moving the computational part of the master control system into the cloud, with a fairly simple internet appliance-type controller attached to my local storage and switching infrastructure. In his 1996 book “Only the Paranoid Survive: How to Exploit the Crisis Points That Challenge Every Company,” Andrew Grove spoke about a “connection co-op” and internet appliances. While he was dubious about the usefulness of these less sophisticated, limited devices, the book was written almost 20 years ago and a lot has changed since then. A cloud-based high-end processing system that understands the capabilities of, and orchestrates the functions of, a myriad of simple device controllers seems quite possible. You only have to look at the work going on within the IEEE and its Smart Grid initiative to see the potential.
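As a thought experiment, here is a rough Python sketch of that division of labor: the heavy computation stays in the cloud, and a simple appliance-style controller at the station only executes small, timestamped commands against the local plant. The command vocabulary, message shape and function names are all invented for illustration; no actual product or protocol is implied.

```python
# Sketch of cloud orchestration driving a thin station appliance.
# The "take" command and JSON shape are hypothetical.
import json
import time

def cloud_orchestrator(schedule):
    """Cloud side: does all the planning, emits tiny timestamped commands."""
    for event in sorted(schedule, key=lambda e: e["at"]):
        yield json.dumps({"at": event["at"], "cmd": "take",
                          "source": event["source"]})

class ApplianceController:
    """Station side: no schedule logic, just executes switch commands
    against the local storage and switching infrastructure."""
    def __init__(self, router):
        self.router = router  # callable that switches the local router

    def execute(self, message):
        cmd = json.loads(message)
        # A real device would arm the event and fire it at cmd["at"];
        # here we act immediately for brevity.
        if cmd["cmd"] == "take":
            self.router(cmd["source"])

# Wiring it together with a stand-in for the local router.
appliance = ApplianceController(router=lambda src: print(f"TAKE {src}"))
schedule = [{"at": time.time(),     "source": "local-storage/program-1"},
            {"at": time.time() + 1, "source": "local-storage/break-1"}]
for message in cloud_orchestrator(schedule):
    appliance.execute(message)
```

The point of the split is that the expensive, fast-moving part (the computation) can be upgraded centrally, while the device at the station stays dumb, cheap and stable.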

LOCAL STORAGE

Local storage seems to make sense to me for a couple of reasons. First, storage is already commoditized, so as an expense item it is relatively cheap. There are also concerns regarding cloud storage, such as security, meaning the protection of your intellectual property, and the more subtle question of where in the world your content is actually stored. Not everyone is comfortable with the idea of their data being stored on foreign soil. I understand that these issues can be dealt with, but my concern is whether the benefit will be worth the hassle.

In addition, originating content from cloud storage, rather than from cloud-controlled local storage, requires a high-bandwidth path between the station and the cloud facility, an added expense that may not be inconsequential. A simple terminal at the station that monitors the cloud-based control system would not need a lot of bandwidth, and the high-bandwidth path for the local distribution system already exists. The local terminal equipment would also allow for local control when the need arises, such as during a pledge program, where at my stations we dynamically slide our program breaks as we follow the mood and flow of our audience.
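The bandwidth asymmetry is easy to quantify in rough terms. The sketch below uses the standard 19.39 Mb/s ATSC transport stream rate as a floor for what program origination from the cloud would require, and assumes, purely for illustration, that monitoring and control traffic amounts to a few kilobits per second.

```python
# Rough comparison behind the argument above. The ATSC payload rate is the
# standard 19.39 Mb/s figure; the control-traffic estimate is an assumption
# (small command messages plus status polling).
playout_mbps = 19.39      # full ATSC transport stream, if origination had to
                          # come down from the cloud
control_kbps = 10         # assumed: small commands and monitoring traffic
ratio = (playout_mbps * 1000) / control_kbps
print(f"Playout needs roughly {ratio:,.0f}x the bandwidth of control traffic")
# -> on the order of 2,000x, which is why a monitoring/control circuit is cheap
```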

It just appears to me that the improvements centralcasting can make in the efficiency and costs of most local television station master control operations are marginal, and may not be as effective as envisioned in an area that has undergone repeated iterations of automation and redesign, all focused on reducing costs and personnel requirements. The real opportunity may be in developing an architecture that allows the costs and benefits of improvements in the speed and sophistication of the computational platform to be shared among all the users, independent of changes in the simple control devices and systems at the individual local stations.

Bill Hayes is the director of engineering for Iowa Public Television.

Bill Hayes

Bill Hayes is the former director of engineering and technology for Iowa PBS and has been at the forefront of broadcast TV technology for more than 40 years. He is a former president of IEEE’s Broadcast Technology Society, a Partnership Board member of the International Broadcasting Convention (IBC) and has contributed extensively to SMPTE and ATSC. He is a recipient of Future’s 2021 Tech Leadership Award and an SMPTE Fellow.