Metadata standards

File-based workflows feed off metadata. Anyone who has put a workflow together knows that making it all work depends on knowing information about the media. Sometimes it’s simple: Is the file SD or HD? Sometimes it’s more complicated: What is the episode and series title? Sometimes it’s dynamic: Should I use English or Spanish on audio channels 3 and 4? Many of these issues can be solved by investing a fortune in a MAM that runs your whole business.

But is there a lighter touch that can solve 80 percent of the problems for 20 percent of the effort? What if you could carry some of that metadata in the file? What if that file could then describe itself to your transcoders and QC tools? What if some of that metadata were standardized?

This article explores some of the possibilities for efficiency gains and cost savings through embedded metadata, as specified by application specifications such as the UK Digital Production Partnership (DPP) delivery specification, which is based on AS-11 from the Advanced Media Workflow Association (AMWA).

Anyone who tells you that file-based workflows are simple is either clueless or someone who has really got their head around metadata. If you visit enough facilities and look at file-based workflows, you start to notice a few common themes bubbling up:

  • Increased efficiency and cost reduction are actually happening.
  • The magnitude of efficiency increase is less than desired.
  • The things that go wrong are often surprising to everyone.
  • The more automated processes become, the more efficient the facility.
  • Achieving high levels of automation involves knowing lots of information about the media you have and the media that you need to make.

“Information about media” is just another way of saying metadata, and metadata is the key to good file-based automation. In a closed environment, keeping track of metadata can be quite simple: Buy everything from a single vendor with a complete end-to-end management system, and then hope that the vendor stays nice to you, doesn’t go out of business and doesn’t pivot in a direction that doesn’t suit your business.

In reality, most businesses need to spread their risks, choose a variety of best-of-breed vendors and then attempt to get all the software working together. With formats such as MXF and QuickTime, the format in which the media is stored is pretty much vendor-agnostic now. The same is not true for metadata. Not only does metadata storage vary from company to company; the definitions of many of the metadata terms do as well.

To a large extent, this problem is rooted deep in the history of the media industry. A couple of decades ago, we were battling against physics to build electronic SD and HD processing equipment. Most countries had one or maybe two dominant broadcasters, and the market for interchanging content between broadcasters was tiny compared to today. The concept of “monetizing an archive” was unheard of because “repeat programming” was what you did to fill the low-revenue slots in the schedule.

Metadata was written on cards and filed in cabinets or tape boxes, and there was no cost benefit in adopting anyone else’s system because there was no need to exchange the data. There was always a human about to re-key or rewrite the card with the “metadata” on it.

Today, moving media between businesses and between business units drives the industry. Anyone who has built a multiplatform transcode farm knows that creating the XML sidecar file for delivery is often more difficult than creating the media itself. Why is this? Why can’t the metadata be “just right?” Often, this is because it is not present or has not been validated until late in the media’s lifecycle.

A few years ago, a friend of mine at a conference explained the issue succinctly. He stood up and said to the audience, “Metadata is like an upstream tax. I run a post house, and you want me to get all the metadata right for free, so that you [expletive-deleted]ers can increase your levels of automation downstream.” It got the laugh from the audience, but the point was well made: Accurate metadata upstream dramatically increases automation and accuracy downstream. But how do we store and propagate that metadata?

The classic answer is to use a MAM or DAM system, but this only works within a single facility. At the edges of a facility, the metadata needs to be transferred to another business. XML sidecar files are the de facto solution today. They are easy to change and can be validated for integrity against a schema (.xsd file; see the tutorials at w3schools.com if this is new to you), but in practice this feature of XML is underused in our industry.
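To make the sidecar idea concrete, here is a minimal sketch of checking a delivery sidecar for required fields using only Python's standard library. The element names are hypothetical, not taken from any real delivery specification, and a production workflow would validate against the actual .xsd with a schema-aware validator rather than a hand-rolled field check:

```python
import xml.etree.ElementTree as ET

# A hypothetical delivery sidecar. Element names are illustrative only,
# not the terms used by any real broadcaster's specification.
SIDECAR = """\
<Delivery>
  <SeriesTitle>Example Series</SeriesTitle>
  <ProgrammeTitle>Episode One</ProgrammeTitle>
  <AudioTrackLayout>Stereo</AudioTrackLayout>
</Delivery>
"""

REQUIRED = ["SeriesTitle", "ProgrammeTitle", "AudioTrackLayout"]

def missing_fields(xml_text, required):
    """Return the names of required elements that are absent or empty."""
    root = ET.fromstring(xml_text)
    return [name for name in required
            if root.findtext(name) in (None, "")]

print(missing_fields(SIDECAR, REQUIRED))  # prints []
```

A real QC step would reject the delivery (or route it to an operator) when the returned list is non-empty, which is exactly the kind of validation XML schemas make automatic but that remains underused in practice.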

There are times when an XML sidecar file is the perfect answer: It suits componentized media formats (like AS-02 and QuickTime reference files) and local interchange between equipment. But sometimes it would be better to put the metadata inside the file itself, in a way that allows interchange between businesses.

A project in AMWA has done just that with the new AS-11 application specification, which has been adopted by the UK’s Digital Production Partnership (DPP) for the interchange of content within the UK market. The AS-11 specification gives a metadata framework that allows “shims” to be defined that place application-specific metadata within the MXF file. The DPP (digitalproductionpartnership.co.uk) has defined a standardized set of metadata that describes the titles, contents and segmentation of the file. In other words, it is the basic metadata required to bring media into a facility so that higher levels of downstream automation can be achieved.
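To illustrate the kind of editorial and segmentation metadata such a shim carries, here is a small sketch modeling it as plain data structures. The field names and the shim identifier are invented for illustration; they are not the actual AS-11 or DPP element names:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    part: int          # part number within the programme
    start_tc: str      # timecode of the first frame of the part
    duration_tc: str   # duration of the part as a timecode

@dataclass
class ShimMetadata:
    shim_name: str     # which shim the file claims to conform to (hypothetical name below)
    series_title: str
    programme_title: str
    episode_number: int
    segments: list = field(default_factory=list)

meta = ShimMetadata(
    shim_name="Example-Broadcast-Shim-v1",
    series_title="Example Series",
    programme_title="Episode One",
    episode_number=1,
    segments=[Segment(1, "10:00:00:00", "00:14:30:00"),
              Segment(2, "10:15:00:00", "00:13:45:00")],
)

print(len(meta.segments))  # prints 2
```

Because this information travels inside the MXF file rather than in a detachable sidecar, a downstream transcoder or QC tool can read the segmentation directly and, for example, insert breaks or trim parts without an operator re-keying anything.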

Will this format solve all of the world’s problems? No. But it will solve more than 80 percent of the problems for less than 20 percent of the effort. For this reason, manufacturers, broadcasters and post houses are all looking to AS-11 and the DPP to lead the way in creating commercially effective file-based interchange.

Bruce Devlin is chief technology officer, AmberFin.