Building file-based workflows

Understand the true essence of workflow.

You don't have to look far into broadcast technology today to see a fundamental paradigm shift at work. For decades, since the first audio recorders were used in radio broadcasts, we have held to the conception that physical media contain the programs we broadcast. Millions of hours of “content” sit on shelves in stations and in deep archives in all parts of the world. We think of the media as the actual program, a convenient construct but at best an imprecise one.

The original work in our business is by definition ethereal, unlike, for instance, works of art that hang in museums. What we really have is no more than a representation of the original work sampled by either analog or digital means and stored on media that we can use to reproduce a simulation of the content at a convenient time. The same is true of books, where the author's original copy is used to create many copies for consumption by an arbitrary universe of users. Each book appears to be the original work, but of course we know that is not true.

You probably wonder why this matters, to which I would tell you, “elementary, my dear Watson.” A file-based workflow is all about moving and processing versions of content for air, not about processing the original works. A few key concepts need to be understood. In this article, instances refer to exact copies of a piece of content: clones that cannot be distinguished from the original or from one another. Parent and child refer to original content and the derivative works made from it. Let me further explain the meaning and implications.

Two concepts for understanding workflows

You might have two instances of precisely the same piece of content in redundant servers, with redundant storage used for protection of air signals. No matter which copy plays, you cannot determine a difference because the copies are in fact two instances of the same thing. Think of it as multiple copies of a book at your local bookstore. It doesn't matter which one you read, because each was produced in precisely the same “workflow.”

But managing multiple instances raises questions whose answers are not always obvious. How do you distinguish between the copy in the archive, the backup data tape on a shelf and the copy in the air server? You must know where copies live if you are to manage them effectively, and you must be able to recognize them as copies and not original works.
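As an illustration only (the column prescribes no particular system, and all names here are hypothetical), a minimal instance registry might record a content ID, a location and a checksum for each physical copy. Two entries with the same checksum are true clones; the location field answers the question of where each copy lives:

```python
import hashlib

# Hypothetical registry: every physical copy of a piece of content is
# recorded as an instance with a location and a checksum of its essence.
registry = []

def register_instance(content_id, location, essence_bytes):
    """Record one instance of a piece of content at a given location."""
    entry = {
        "content_id": content_id,
        "location": location,
        "checksum": hashlib.sha256(essence_bytes).hexdigest(),
    }
    registry.append(entry)
    return entry

def find_instances(content_id):
    """List every known location holding a copy of this content."""
    return [e["location"] for e in registry if e["content_id"] == content_id]

def are_clones(a, b):
    """Instances are indistinguishable if their essence matches bit for bit."""
    return a["checksum"] == b["checksum"]

essence = b"program essence bytes"
archive = register_instance("EP-101", "archive", essence)
air = register_instance("EP-101", "air-server", essence)

print(find_instances("EP-101"))   # ['archive', 'air-server']
print(are_clones(archive, air))   # True
```

The point of the sketch is that the copies themselves carry no distinguishing marks; only the external record tells you which one sits in the archive and which one is on air.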

The other concept, parent and child, is quite different. The parent, perhaps the original of a program, can be used to make a copy, a child if you will, with perhaps different bit rates or audio tracks for release for other purposes. Indeed, a derivative of a PBS program might have different underwriting credits, or a child of a syndicated program might be edited to allow an extra commercial break. It is the same program in the most generic sense, but differs in critical ways, making it essentially a new program derived from the parent. Another example is a down-converted copy of an HD program displayed in a letterbox on a 4:3 screen. It contains the same content, but in a different form, and is easily identified as a derivative of the original.

Defining a workflow

File-based workflow is all about facilitating the processes that are possible while maintaining control over the content through managing metadata and tracking instances and relationships between media in external databases. Doing so allows orderly media management that enhances the value of the original asset and avoids destroying the original work to create a derivative.
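To make the idea of tracking relationships concrete, here is a sketch of how parent/child links might be kept in such an external database. The field names and asset IDs are illustrative, not drawn from any real media asset management product; the point is that a child record points at its parent, so any derivative can be traced back to the untouched original:

```python
# Illustrative asset records: each child carries a pointer to its parent,
# so a derivative (a new bit rate, different underwriting credits, a
# letterboxed down-conversion) can always be traced back to the original
# work without that original ever being altered.
assets = {}

def add_asset(asset_id, title, parent_id=None, **metadata):
    """Register an asset, optionally as a child of an existing parent."""
    assets[asset_id] = {"title": title, "parent": parent_id, "meta": metadata}

def lineage(asset_id):
    """Walk from a derivative back up to the original parent."""
    chain = []
    while asset_id is not None:
        chain.append(asset_id)
        asset_id = assets[asset_id]["parent"]
    return chain

add_asset("MASTER-1", "Program master", aspect="16:9")
add_asset("SYND-1", "Syndication cut", parent_id="MASTER-1",
          note="edited for an extra commercial break")
add_asset("SD-1", "Letterboxed down-conversion", parent_id="SYND-1",
          aspect="4:3 letterbox")

print(lineage("SD-1"))  # ['SD-1', 'SYND-1', 'MASTER-1']
```

Because every derivative is a new record rather than an edit of the parent, the original asset keeps its value and the database preserves the full family tree.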

To be clear, there is nothing new about these concepts. Tape-based workflow 40 years ago when I worked on “The Johnny Cash Show” was only different in the sense that the media was sampled using analog means, and the tracking was done with file cards containing notes we call metadata today. We created instances to be used for protection masters and audio sweetening that — while not true clones — were as close as technology would permit. We could trace back to the original recordings if something happened to the edit master because there were technical differences, but the workflow shared many of the same attributes. We delivered copies to ABC in New York that were dubs with commercials physically cut into the dub. They were clearly children of the edit master. In fact, we had two protection masters on the shelf, made at the same time, in the event the plane went down carrying tapes to New York. Those were truly second instances and not derivatives.

File-based workflow today allows us to create, often using automated processes, the derivatives and additional instances we need for production and air of content. When a motion picture airs today, it is a derivative transferred from the original work (film), and then protected and distributed as additional instances after editing a new child from the parent media as delivered from the studio.

Let me try to be precise about why this matters. Though technology enables new processes and the different workflows that files make possible, the most essential step is to define a workflow based not on the fact that it is file-based but rather on the manipulation and copying of the original essence of the media. If you can define a process and the intended result, you can build a workflow that uses file-based content to great advantage.

It is not, however, an improvement in workflow to replicate workflow from old paradigms in a newer technological era. The temptation to do so is great, but resist that temptation at all costs. Nothing is gained if you take that easy route. Rather, it is appropriate to throw out all assumptions about what “has to be done” and instead look at what needs to be delivered. Commercials can be delivered as files to edge servers in a station and then played out as analog video to be recorded in a video server for integration into the concatenated air signal. That treats the instance of the content in the edge server as original essence with another instance created by conventional means.

It is substantially better to clone the content directly to the air server via FTP transfer, with perhaps new wrappers and specs. While that might be a new derivative, it allows for the best of digital file-based workflow to enhance quality and improve tracking of the metadata resident in the edge server that is not transferred with an analog dub. Keeping these concepts in mind should help you to understand the true essence of workflow in modern file-based systems.

John Luff is a broadcast technology consultant.

Send questions and comments to: john.luff@penton.com