Supply Chain Television: Using a Retail Approach

In these days of meteoric technological development, does in-depth knowledge about a subject actually become a hindrance to real progress?

At what point do our risk aversion and its associated behaviors conspire to become the biggest risk of all?

How do we avoid being transformed from ingenious and invigorating iconoclasts pursuing radical new ideas into dinosaurs with "legacy think" albatrosses weighing heavily across our thought processes?

These and other similar questions came into focus about 10 months ago in the Enterprise Technology Department at PBS. We had recently completed the merger of the IT and broadcast engineering areas and were still getting used to each other's lingo, acronyms and personalities when we were presented with a tremendous opportunity to enact change.

Economic pressures across the system were at an all-time high, several stations were at risk of insolvency and, compounding it all, we were (and still are) facing the dual-headed capital investment monsters of digital migration and the expiration of our 15-year-old transponder leases.

SUPPLY CHAIN MODEL

It was at that point that a crucial transformation occurred. We started looking at our entire set of processes as a supply chain environment with strong similarities to any other manufacturing industry. We have multiple suppliers (content providers), a distribution entity (PBS), about 177 retail outlets (our member stations) and finally the consumers (viewers).

Looking at our system through that lens made it obvious that momentous, system-wide change needed to occur. To optimize a content supply chain, one must put in place operational tenets and expectations that each partner then executes flawlessly for the benefit of the entire system. That means each partner performing all of its appropriate tasks, utmost reliability in upstream quality-control processes, accuracy of content and its associated descriptors (metadata), integrity of the distribution mechanisms and, finally, flawless execution at the retail level.
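
To make the notion of upstream quality control a little more concrete, here is a minimal sketch, in Python, of the kind of check a supplier or distributor might run before content moves downstream. The ContentItem fields, the two-second tolerance and the sample program are illustrative assumptions on my part, not a description of PBS's actual schema or tools.

    # A hypothetical upstream quality-control gate for content and its metadata.
    from dataclasses import dataclass

    @dataclass
    class ContentItem:
        program_id: str            # identifier assigned by the supplier
        title: str                 # human-readable title
        duration_sec: int          # measured running time of the media file
        stated_duration_sec: int   # duration promised in the schedule metadata

    def qc_problems(item: ContentItem, tolerance_sec: int = 2) -> list:
        """Return a list of problems; an empty list means the item may move downstream."""
        problems = []
        if not item.program_id:
            problems.append("missing program_id")
        if not item.title:
            problems.append("missing title")
        if abs(item.duration_sec - item.stated_duration_sec) > tolerance_sec:
            problems.append("media duration disagrees with schedule metadata")
        return problems

    # A clip whose measured length disagrees with its metadata is caught upstream,
    # rather than being discovered at the "retail" (station) end of the chain.
    clip = ContentItem("NOVA-3104", "NOVA", duration_sec=3421, stated_duration_sec=3360)
    print(qc_problems(clip))   # ['media duration disagrees with schedule metadata']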

Over the next few columns, I will expound on our projects designed to accomplish all these tasks, but today I will focus on our development of an optimized "retail" environment.

THE PARAMETERS

We set out to create a broadcast infrastructure that would allow our member stations to:

  • Reduce capital costs by designing and deploying a "standard station";
  • Reduce operational costs by automating channel operations;
  • Leverage IT technologies to automate local systems monitoring;
  • Use remote exception monitoring, problem identification, escalation and resolution (a sketch of this approach follows the list);
  • Seamlessly meld national content with local programs and interstitials;
  • Minimize or eliminate on-air discrepancies by offering 99.99 percent availability and flawless, upstream quality-controlled content and metadata.
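
To give a rough sense of what the last three objectives imply in practice, the sketch below (Python, with hypothetical channel names, status probe and escalation hook rather than our vendors' real interfaces) shows the shape of exception-based monitoring: nothing is reported while a channel is healthy, and any fault is escalated automatically. It also works out how little outage a 99.99 percent availability target actually leaves, roughly 53 minutes per year.

    # Exception-based remote monitoring, sketched with hypothetical stand-in functions.
    SECONDS_PER_YEAR = 365 * 24 * 3600
    AVAILABILITY_TARGET = 0.9999   # the 99.99 percent figure above
    downtime_budget_min = SECONDS_PER_YEAR * (1 - AVAILABILITY_TARGET) / 60
    print(f"99.99% availability allows about {downtime_budget_min:.0f} minutes of outage per year")

    def get_channel_status(channel):
        """Hypothetical probe; imagine an IP query to the station's master control."""
        return {"on_air": True, "video_present": True, "audio_present": True}

    def notify_noc(channel, problem):
        """Hypothetical escalation hook to a network operations center."""
        print(f"ESCALATE: {channel}: {problem}")

    def check_once(channels):
        # Stay silent while everything is normal; escalate only on exceptions.
        for channel in channels:
            for check, ok in get_channel_status(channel).items():
                if not ok:
                    notify_noc(channel, f"{check} failed")

    check_once(["SD-1", "SD-2", "SD-3", "SD-4", "HD-1"])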



It wasn't easy! Most of the concepts associated with the first four objectives were absolute anathema to the system's standard operating procedures, and almost everybody thought that the last two would be rendered impossible by automating channel operations. To this day, the thought of having no one watching the signal as it goes out to the transmitter still represents a major leap of faith that only demonstrated performance will eventually justify.

All of this was further complicated by our own impossible deadlines. In order to accommodate the needs of several stations heavily involved in the process of moving into new broadcast facilities, we had to design, procure and deploy a fully operational, demonstrable "model station" in less than six months.

We set out to identify the set of vendors that offered not only the necessary functionality to accomplish our objectives but also demonstrated the vision required to carry the effort into multiple future generations of this operational model. We wanted a system that abided by both broadcast engineering and IT best practices, and ended up settling on a few baseline requirements:

  • Maximize the use of TCP/IP networking rather than point-to-point serial connections;
  • Enable resource sharing among channels, including an automated failover channel (sketched after this list);
  • Utilize software solutions rather than proprietary hardware;
  • Consolidate protocols, servers, OSs, databases and messaging environments.
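
The shared failover channel in the second requirement is essentially an "N+1" arrangement: several program channels backed by a single protection channel that takes over whichever output fails. The sketch below shows only that assignment logic, with hypothetical channel names and a simple health flag; in a real plant the automation and master-control systems would detect the fault and drive the switch. It is the same idea behind the "4+1" SD configuration described below.

    # N+1 failover: map a failed program channel onto the single protection channel.
    def choose_failover(channel_health, protection="PROT-1"):
        """Assign the protection channel to the first failed channel, if any."""
        assignments = {}
        for channel, healthy in channel_health.items():
            if not healthy and protection not in assignments.values():
                assignments[channel] = protection   # protection channel takes over
        return assignments

    # Four SD channels sharing one protection channel; only one can be covered at a time.
    health = {"SD-1": True, "SD-2": False, "SD-3": True, "SD-4": True}
    print(choose_failover(health))   # {'SD-2': 'PROT-1'}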



By the end of NAB2003, and after thorough due diligence, we had assembled a consortium of eight vendors offering the best combination of demonstrated capabilities and credentials.

  • Accenture as the program management and integration services provider;
  • BroadView Software as the supplier of traffic and scheduling software;
  • Intel as the supplier of critical server and desktop components and software enabling services;
  • Microsoft as the supplier of key software components;
  • Miranda Technologies as the supplier of network-based monitoring and master control;
  • Omneon Video Networks as the supplier of networked server infrastructure;
  • Omnibus Systems as the supplier of network-based automation;
  • SES Americom as the supplier of satellite communication services.



The results have been nothing short of astonishing. By mid-September, with tremendous help and collaboration from our consortium members and CEI, we had assembled at PBS in Alexandria, Va., a fully functional digital broadcast facility with six channels in a 4+1 SD and 1 HD configuration. Built by a combined team of broadcast engineering and IT staff and vendors, it has been operating flawlessly and completely unattended while we continue to improve it with software and hardware upgrades. We call it ACE, which for once is not an acronym.

Four months later, five PBS member stations have committed to deploy the ACE architecture, and we expect the first one to be "on-air" by this summer.

Meanwhile, because sometimes ignorance is bliss, we are going to continue to ask a lot of "Why?" and "Why not?" questions as we relentlessly strive to optimize the Public Broadcasting content supply chain. You can count on IT!