I was recently reading a thread online about a broadcast program feed that had a couple of problems. One was a signal breakup that varied with station location; it turned out to be weather-related rain fade on the downlink.
A number of stations experienced the problem, but the majority did not. Alongside those reports were others from stations that, although they did not experience the breakup, found their recordings of the program cut short: the timing information supplied to them was incorrect, and their automation systems stopped recording when the clock reached the end of the scheduled window. Some stations dealt with the timing inaccuracies by programming a pad at the start and end of their automated recordings. My favorite comment was from a station that pointed out that it is not automated; because humans perform the recording, no one pushes the stop button until the program actually ends. They closed their comments with “Humans-1, Computers-0!” My first thought was to contact this person with the message that the timing error was probably the result of a human operator’s entry mistake on the front end of the process, and the poor computers were only doing what they were told. To me, that makes the score zero to zero, and although it’s a tie, everybody actually lost.
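The padding workaround those stations described can be sketched in a few lines. This is a hypothetical illustration, not any station’s actual automation code: the recorder simply rolls early and stops late so that modest errors in the supplied schedule do not clip the program.

```python
from datetime import datetime, timedelta

def padded_record_window(scheduled_start, scheduled_end, pad_seconds=120):
    """Widen a scheduled recording window by a fixed pad on each side.

    Hypothetical sketch of the pad described above: a two-minute
    cushion before and after the scheduled times absorbs small
    timing errors in the program log.
    """
    pad = timedelta(seconds=pad_seconds)
    return scheduled_start - pad, scheduled_end + pad

# An 8:00-9:00 p.m. program actually records from 7:58 to 9:02.
start, stop = padded_record_window(
    datetime(2012, 5, 1, 20, 0, 0),
    datetime(2012, 5, 1, 21, 0, 0),
)
```

The obvious cost is that every padded recording consumes extra storage and can overlap an adjacent event, which is exactly why a pad is a workaround rather than a fix for bad timing data.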
But there is a fact here that cannot be ignored. In this particular case, a human was able to look at the data and at the actual feed and make a better decision than the computer. How is that possible? Simple: the human had access to more real-time information than the computer and was therefore able to adjust the decision process to the real-world situation, not the theoretical situation that was programmed.
COMING UP SHORT
This is one of the oldest problems in automated systems. Since my earliest days in broadcasting and dealing with automated workflows in both radio and television I have envisioned a time when the program/traffic log would seamlessly drive the automation controlling the hardware in real time. Content would flow into the system from the network, the syndicators, the advertising agencies and local production. It would report itself to the automated workflow system with all of the pertinent data necessary to allow the traffic component to schedule it accurately and let the automated playback system find it and after playing, log the exact time and duration that it played and create an as-run log that reflected accurately how the content aired. Virtually no human intervention would be involved in the process under normal circumstances. Only during live events or when there was an actual equipment failure would people, with their natural ability to make decisions based on current conditions be called into action.
Every few years I hear about, or become involved in, a project that ostensibly promises this kind of automated workflow, and so far every attempt has come up short. My experience has been that when concerns are raised about the actual implementation versus the promised system’s goals or target performance specifications, they are met with phrases like “managed expectations,” “implementation issues” or “next-version solutions.” As an example, I was recently involved in a conversation in which the system provider declared that the system performance criteria had been met. When one of the test users of the product pointed out that only one small part of the criteria had actually been satisfied, there was considerable discussion and backtracking to clarify what the provider actually meant. As it turned out, the end user’s point was well taken, and the provider agreed that although progress on the entire system had been made, it was premature to announce success. I have no doubt that had this user not raised a fuss about the overstatement of performance, a lot of unnecessary grief would have been generated as other potential users proceeded with their planning under the assumption that everything was working fine when in fact much work remained to be done.
For me, these types of issues are what differentiate automation from automated workflow. In the good old days of isolated systems, we could autonomously automate a process and the change to the workflow was compensated for by the humans that acted as the interfaces between the processes. Because they could make decisions based on the situation rather than just on the data, the humans would adjust the input and output between the systems. In essence, even though all of the systems within their silos were being automated, humans were driving or providing the interface between the automated processes.
Another significant change is the business we’re working in. Many of the products we deliver are available from other sources, similar products abound, and what we create must now be delivered via other mediums as well. The old assembly-line model that was broadcasting has changed, and the infrastructure must change to support it. This environment is much broader and more dynamic than our original business, and it requires a level of flexibility and adaptability that broadcasting has never needed before.
SERVICE ORIENTED ARCHITECTURE
I think this new reality is what is driving our industry toward the service oriented architecture. Depending on whose book you read or which vendor you talk with, the formal definition of SOA has some subtle differences. I like to look at it as a “shock absorber” between automated systems that smooths out the road content takes on its journey from creation to consumer.
However you want to look at it, SOA is a methodology that can be used to decouple the various processes within the system to allow for modifications to existing processes and addition of new processes without shutting down the overall service. I haven’t talked to any station yet that hasn’t had the experience of doing an upgrade or update to one of their automated products only to find that the change has rippled through the system and created unforeseen problems in other systems.
The heart of the issue is finding the solutions that allow for the decoupling of processes and services to permit modifications, additions and deletions while the overall system continues to perform. In the silo world of early automation, I can remember working with systems until they were stable and then never touching them again. As long as they did the job, they were left alone. And when the job was as simple as getting content through the one assembly line that was the station, this worked. In the environment where we are now delivering products through a growing multitude of additional distribution channels, with different needs and dynamics, this process doesn’t work. Service Oriented Architecture may offer stations the ability to compete and stay relevant in a very different and changing media environment.
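The decoupling idea at the heart of this can be sketched with a minimal publish/subscribe pattern. This is a toy illustration, not a real SOA product: the point is that traffic, playout and as-run logging talk only to a shared bus, never directly to each other, so any one of them can be upgraded or replaced without the change rippling into the rest of the system.

```python
class MessageBus:
    """Minimal publish/subscribe bus (hypothetical illustration).

    Producers and consumers know only the bus and the topic names,
    never each other -- the 'shock absorber' between processes.
    """

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        """Register a callable to receive messages on a topic."""
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        for handler in self._subscribers.get(topic, []):
            handler(message)

# Hypothetical topic and payload: playout publishes an 'aired'
# event; an as-run logger consumes it without knowing who sent it.
bus = MessageBus()
as_run_log = []
bus.subscribe("aired", as_run_log.append)
bus.publish("aired", {"id": "PROMO-001", "duration_sec": 30})
```

Swapping the as-run logger for a new version here means only re-subscribing a different handler; the publisher is untouched, which is the property that lets an overall service keep running while individual processes are modified, added or deleted.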
Bill Hayes is the director of engineering for Iowa Public Television. He can be reached via TV Technology.