Much ink has been spilled over the last few years in articles about huge paradigm shifts in technology. The debates at IBC and NAB have all focused on these changes, talking about “file-based infrastructures” and “service-oriented architectures” taking us away from the comfort of SDI on BNCs and audio on XLRs.
But there is an even more fundamental shift that, while some are reluctant to make it, is ultimately inevitable. This is the shift from a technology-driven approach to a business-driven approach, in which the hardware (the servers, the transcode farms, the asset management) is simply an enabling element.
In this view, the requirements will be defined in terms of outcomes and cost per unit. The systems engineering will have to support those business requirements. That is a big shift from the traditional, engineering-led television station. It is the only way that the challenges we all now face can be addressed.
One major international content producer found that, in the space of a year, its audiences moved from choosing 30,000 hours of online content to 750,000 hours — a 25-fold increase. The inescapable fact is that consumers have the ability to watch what they want, when they want, on the device they want, and they like that ability. All content owners need to find a response.
No one would argue that delivering to multiple platforms can be done without seamless automation. There are simply too many combinations of resolution, codec, wrapper, metadata and streaming for it to be possible to create all of the required deliverables by hand.
In turn, that means that metadata has to do much more than simply be accumulated. Workflows have to be automated, making intelligent decisions based on the metadata to determine what happens to the material. We now routinely hear of the “workflow engine.” The critical point here is that the workflow engine must be capable of rapid configuration by the user to meet developing needs.
In this content factory model, the workflow engine is alerted to the presence of new material by the asset management registry. It then interrogates the rights management database to see what the content is, what can be done with it and when it should be available. That triggers a series of workflows that ensure that the media ends up in the right form at the right place at the right time.
The important point to underline, though, is that once the various workflows are established, then content will move from ingest to being delivered on multiple platforms with zero human intervention. The workflow engine alone moves the content, making its decisions based on commercial rules, all the way to the playout suite, the content distribution network or the mobile streaming drivers. That is what I mean by seamless automation.
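As a sketch of that decision-making, the engine's first commercial check might look like the following. Everything here is illustrative: the `Rights` record, the function names and the day-number availability rule are assumptions for the example, and a real rights management database is far richer.

```python
from dataclasses import dataclass

# Hypothetical rights record; real rights databases carry far more detail.
@dataclass
class Rights:
    platforms: set       # platforms the content may be offered on
    available_from: int  # first day (as a day number) it may be delivered

def plan_workflow(asset_id, rights, platform, today):
    """Decide, from commercial rules alone, whether to trigger a
    packaging workflow for this asset on this platform."""
    if platform not in rights.platforms:
        return None  # rights do not permit delivery on this platform
    if today < rights.available_from:
        return None  # embargoed: not yet available
    # A real engine would enqueue a configured workflow here.
    return f"package:{asset_id}:{platform}"

# Example: a film cleared for web and mobile, available from day 100.
rights = Rights(platforms={"web", "mobile"}, available_from=100)
print(plan_workflow("film-42", rights, "mobile", today=120))
# -> package:film-42:mobile
```

The point of the sketch is that no operator appears anywhere: once the rules are configured, the decision and the dispatch are entirely automatic.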
Each delivery format will require a new packaging process and thus a new workflow. The logical path for such a workflow might include:
- Identify the content and the intellectual property rights attached to it, to determine if it can be offered on this platform;
- Determine the resolution and frame rate of the content, and if necessary modify them to suit the target device;
- Encode it using the appropriate codec and bit rate (or, in the case of mobile devices, bit rates for adaptive delivery);
- Perform quality control checks on the content;
- Select or set the required metadata and reformat for the target delivery platform;
- Perform quality control checks on the metadata;
- Bundle the essence and the metadata in the appropriate wrapper;
- Deliver the content to the buffer store or to the content delivery network. (See Figure 1.)
Figure 1. Workflows have to be automated, making intelligent decisions based on the metadata to determine what happens to the material. The workflow engine must be capable of rapid configuration by the user to meet developing needs.
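The steps above can be sketched as an ordered pipeline of processing stages. This is a toy model, assuming hypothetical stage functions and target parameters; in practice each stage would invoke a real transcoder, QC tool or wrapper service.

```python
# Illustrative packaging pipeline: each function stands in for a real
# transcoder, QC tool or wrapper the workflow engine would invoke.

def conform(item, target):
    # Modify resolution and frame rate to suit the target device.
    item["resolution"] = target["resolution"]
    item["frame_rate"] = target["frame_rate"]
    return item

def encode(item, target):
    item["codec"] = target["codec"]
    item["bitrates"] = target["bitrates"]  # several rates for adaptive delivery
    return item

def qc_essence(item, target):
    assert item["codec"] == target["codec"], "essence QC failed"
    return item

def add_metadata(item, target):
    item["metadata"] = {"title": item["title"], "format": target["name"]}
    return item

def qc_metadata(item, target):
    assert "title" in item["metadata"], "metadata QC failed"
    return item

def wrap(item, target):
    item["wrapper"] = target["wrapper"]  # bundle essence and metadata
    return item

def deliver(item, target):
    item["delivered_to"] = target["endpoint"]
    return item

PIPELINE = [conform, encode, qc_essence, add_metadata, qc_metadata, wrap, deliver]

def package(item, target):
    """Run one piece of content through every packaging stage in order."""
    for step in PIPELINE:
        item = step(item, target)
    return item

# A hypothetical mobile adaptive-streaming target.
target = {"name": "mobile-hls", "resolution": "1280x720", "frame_rate": 25,
          "codec": "h264", "bitrates": [800, 1600, 3200],
          "wrapper": "fmp4", "endpoint": "cdn://origin/mobile"}
result = package({"title": "News at Six"}, target)
print(result["wrapper"], result["delivered_to"])
# -> fmp4 cdn://origin/mobile
```

Adding a new delivery platform then means adding a new target description and, where needed, new stage configurations, not rebuilding the chain.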
Each step requires the content to be routed to a specialist device, which might be a piece of dedicated hardware, or it might be software running on a standard server or processor farm. It might even be a cloud service. So as well as pushing the content down the chain, the workflow engine has to consider priorities in those devices. What happens if there is congestion in any part of the workflow? Again, intelligent decision making will resolve the issue. If congestion is routine, then the system should report the fact, suggesting the need for further capital investment.
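One way such congestion handling might work is sketched below: jobs are routed to the least-loaded device in a pool, and persistent queuing is surfaced as a capacity report rather than silently retried. The class and its thresholds are assumptions for illustration, not a description of any real product.

```python
# Hypothetical congestion-aware dispatcher for a pool of processing devices.

class Dispatcher:
    def __init__(self, pool, capacity):
        self.queues = {device: 0 for device in pool}  # queued jobs per device
        self.capacity = capacity  # queue depth at which a device is congested
        self.congestion_events = 0

    def dispatch(self, job):
        """Route a job to the least-loaded device in the pool."""
        device = min(self.queues, key=self.queues.get)
        self.queues[device] += 1
        if self.queues[device] > self.capacity:
            self.congestion_events += 1
        return device

    def complete(self, device):
        """A device finished a job; shrink its queue."""
        self.queues[device] -= 1

    def needs_investment(self, threshold=5):
        # If congestion is routine rather than occasional, report it:
        # the remedy is more capacity, not more retries.
        return self.congestion_events >= threshold

# Two transcode nodes, each comfortable with two queued jobs.
d = Dispatcher(pool=["transcode-1", "transcode-2"], capacity=2)
for job in range(6):
    d.dispatch(job)
print(d.queues)                          # -> {'transcode-1': 3, 'transcode-2': 3}
print(d.needs_investment(threshold=2))   # -> True
```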
The obvious technical route to achieve this is the service-oriented architecture (SOA), and I would argue that is the best way to do it. It is important to remember, though, that the SOA is not a technology; as the name suggests, it is an architecture that binds the technology together. Its value is in linking the islands of processing to create a robust and reliable system that will work day in, day out, and if necessary, create its own workarounds should any element fail.
I return to my key point, though, which is that the technology platform is there to support the business requirements. The other benefit of the service-oriented architecture, therefore, is that it can generate business analytics: information that can be delivered to the enterprise management system and on which commercial decisions can be made.
A few paragraphs back I outlined the workflow that needs to be established every time there is a new delivery platform — a new model of tablet or smartphone, for example. The workflow requires the use of a number of technical resources along the way to package the content correctly.
How much of each resource will be required? Will this new workflow create bottlenecks? Will it put established workflows at risk? Business analytics, as part of the workflow engine, will give you the answers.
Above all, though, it should answer the most important question: how much does it cost to implement this workflow? If we want to serve a new device, what are the financial implications? How will we recover those costs? Will the income from the new service exceed its cost? Is it financially viable to do this?
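That viability test can be reduced to a few lines. The per-step costs below are invented numbers purely for illustration, assuming the analytics layer can report a processing cost per hour of content for each workflow step.

```python
# Illustrative per-step costs per hour of content, in dollars.
# These figures are assumptions for the example, not real tariffs.
STEP_COSTS = {
    "conform": 0.40,
    "encode":  1.20,
    "qc":      0.30,
    "wrap":    0.10,
    "deliver": 0.80,
}

def workflow_cost(steps, hours):
    """Total cost of running the given steps over a volume of content."""
    return sum(STEP_COSTS[s] for s in steps) * hours

def is_viable(steps, hours, projected_income):
    """Will income from the new service exceed the cost of serving it?"""
    return projected_income > workflow_cost(steps, hours)

steps = ["conform", "encode", "qc", "wrap", "deliver"]
print(workflow_cost(steps, hours=1000))               # -> 2800.0
print(is_viable(steps, 1000, projected_income=5000))  # -> True
```

The arithmetic is trivial; what matters is that the cost figures come from the workflow engine's own analytics rather than from guesswork.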
We are now seeing broadcasters using business analytics to put real numbers into their return on investment calculations. Irish national broadcaster RTÉ has just implemented a comprehensive file-based architecture, on the basis that it can develop its own workflows using the simple tools in its digital and media asset management platform.
The broadcaster is saying publicly that it will save around $600,000 in the first year alone. That is value created across the enterprise, which again is a new way of looking at investment in broadcast engineering that until now has been focused on the cost of implementation. It’s another shift in attitude, driven by service-oriented technology.
—Tony Taylor is chairman and CEO, TMD.