Metadata Harvested with AI Creates New Revenue Opportunities

Most content owners would agree that manual tagging and cataloging of assets is a time-consuming and expensive activity that can stymie the usefulness and profitability of media assets. Today, however, audio-visual content can be transformed into dynamic assets using metadata enriched with a combination of Artificial Intelligence (AI) techniques and manual processes.

Beyond basic tags that allow for different variations of the same word (e.g., singular and plural) or multiple tags for the same concept, automated AI technology offers faster and more powerful capabilities that aid content discovery for easy re-use and greater productivity. For example, content can be auto-tagged using Object Recognition as well as Face/Location Recognition technology to further enrich metadata. AI-generated metadata can also flag the specific areas of a program where compliance issues arise. Improvements in Speech-to-Text and Image Recognition algorithms are further automating parts of the metadata process, making it increasingly useful for both new and legacy content use cases.
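To make the auto-tagging step concrete, here is a minimal sketch, assuming AWS Rekognition (one of the cloud AI services named later in this article) is used to label a single extracted video frame. The frame file, tag format, and function name are illustrative assumptions, not any specific product's workflow.

```python
# Illustrative sketch only: auto-tag one extracted video frame with object
# labels using AWS Rekognition, then shape the results into searchable tags.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def auto_tag_frame(image_bytes, min_confidence=75.0):
    """Return a list of {tag, confidence} dicts for one extracted frame."""
    response = rekognition.detect_labels(
        Image={"Bytes": image_bytes},
        MaxLabels=20,
        MinConfidence=min_confidence,
    )
    return [
        {"tag": label["Name"].lower(), "confidence": label["Confidence"]}
        for label in response["Labels"]
    ]

# Tags for one frame might include "person", "car", or "street sign".
with open("frame_00432.jpg", "rb") as f:
    print(auto_tag_frame(f.read()))
```

In practice, a call like this would run per sampled frame or per shot, with the resulting tags indexed alongside manually logged metadata.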

AI-ENABLED MEDIA ASSET MANAGEMENT

Here is an example of how AI technologies are already starting to become an integral part of commercially available products and services. AI functionality has recently been integrated into CLEAR, a cloud-based Media Asset Management (MAM) solution by Prime Focus Technologies. It helps Media & Entertainment (M&E) enterprises leverage both AI-generated and manually logged metadata to maximize content discovery. This makes it faster and easier for users to sort, locate, and re-use their assets for creating compelling stories from a vast content repository.

Importantly, CLEAR gives users the freedom to choose from the full range of AI/machine-learning services (AWS, Azure, Google, and IBM Watson, to name a few), as appropriate for their downstream use case.
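As a purely hypothetical illustration of that flexibility, the sketch below shows how a MAM layer might route image-tagging requests to different cloud AI back ends behind one interface. The class and function names are invented for this example and do not represent CLEAR's actual API.

```python
# Hypothetical provider abstraction: the AI back end is chosen per use case.
from abc import ABC, abstractmethod

class VisionProvider(ABC):
    """Common interface so the AI service can be swapped without touching callers."""

    @abstractmethod
    def label_image(self, image_bytes: bytes) -> list:
        """Return plain-text tags for one image."""

class AwsRekognitionProvider(VisionProvider):
    def label_image(self, image_bytes: bytes) -> list:
        # Call Rekognition's detect_labels here and map the results to tags.
        return []

class GoogleVisionProvider(VisionProvider):
    def label_image(self, image_bytes: bytes) -> list:
        # Call Google Cloud Vision label detection here and map the results to tags.
        return []

PROVIDERS = {
    "aws": AwsRekognitionProvider,
    "google": GoogleVisionProvider,
}

def tag_with(provider_name: str, image_bytes: bytes) -> list:
    """Route a tagging request to whichever provider suits the downstream use case."""
    return PROVIDERS[provider_name]().label_image(image_bytes)
```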

Useful functions for AI-driven visual analysis include character identification as well as recognition of objects and actions. Ideally, AI video analysis should also recognize text within the image, such as street signs, advertisements, and products. The video AI functionality in CLEAR, for example, can recognize human faces, along with their genders and approximate ages. It also recognizes brands and logos, well-known landmarks, and, presently to some degree, other objects such as phones, tables, vehicles, and animals. The system can also recognize basic interactions between these objects, such as a person running, dancing, or smoking, and can detect general facial emotions and the mood of events, for instance telling the difference between a festive party and an intense group discussion.
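To illustrate how such detections might be stored, the following sketch shows one possible time-coded metadata record. The schema and field names are assumptions made for this article, not a documented CLEAR format.

```python
# Illustrative data structure: time-coded AI detections stored per asset so
# editors can search faces, logos, objects, actions, and emotions by timecode.
from dataclasses import dataclass, field

@dataclass
class Detection:
    kind: str          # "face", "logo", "object", "action", "emotion"
    label: str         # e.g. "Jane Doe", "Acme Cola", "vehicle", "running"
    confidence: float  # 0.0 - 1.0, as reported by the AI service
    start_tc: str      # timecode where the detection begins
    end_tc: str        # timecode where the detection ends

@dataclass
class AssetMetadata:
    asset_id: str
    detections: list = field(default_factory=list)

    def find(self, kind, label):
        """Return every detection of a given kind and label."""
        return [d for d in self.detections
                if d.kind == kind and d.label.lower() == label.lower()]

# Example usage: locate every appearance of a (fictional) logo in an episode.
asset = AssetMetadata(asset_id="EP_0412")
asset.detections.append(Detection("logo", "Acme Cola", 0.92, "00:12:04:10", "00:12:09:02"))
print(asset.find("logo", "acme cola"))
```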

While all AI-based functionalities at present require human review and quality control, one of the key characteristics of AI is its ability to learn and improve over time, so we expect these functions to continue to evolve going forward.

METADATA IMPLICATIONS

If a collection of media assets could be compared to a file cabinet full of important papers, then metadata would be the labels on the folders. While the papers themselves are important and valuable, it is the labels that make it possible to locate the particular paper you need quickly and effectively.

In a similar way, as content is analyzed and then augmented with metadata, value is added because it becomes easier to locate the desired content and to understand its important features. Metadata is even more valuable today because the quantity of content that already exists, along with what is now being created, is so extensive. If AI techniques can help human evaluators review new or existing content and create valuable metadata, they can help organizations unlock additional value in these assets.

There are several ways that using AI to improve metadata processes can strengthen an organization's business case. Making the most of digital content often requires M&E enterprises to move beyond basic asset management and embrace modern, cloud-based MAM solutions. Such solutions, with extensive data models and built-in tools for cataloging and metadata tagging, help organize content in useful ways, make it easily accessible, and open up new avenues to monetize assets. Ultimately, they streamline and improve the overall efficiency of digital content creation, boost monetization, and drive creative enablement.

Here is a recent example of how more detailed and accurate metadata gave some content providers an advantage when broadcast content regulations changed. Whenever such regulations change, a time-consuming and stressful burden is placed on the M&E industry to come into compliance quickly. In this case, India legislated that all smoking scenes must be prefaced with a warning message about the hazards of smoking. When this regulation came into effect, detailed and accurate metadata let some content providers quickly identify every scene that included smoking. Armed with this information, broadcasters could then choose to either remove the scene or add the required warning message.
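Continuing the illustrative schema sketched earlier, a hypothetical compliance pass over such metadata might look like the following; the function and field names remain assumptions made for this example.

```python
def scenes_needing_smoking_warning(assets):
    """Return (asset_id, start_tc, end_tc) for every detected smoking scene."""
    flagged = []
    for asset in assets:
        # "action"/"smoking" detections come from the AI analysis described above.
        for det in asset.find("action", "smoking"):
            flagged.append((asset.asset_id, det.start_tc, det.end_tc))
    return flagged

# Broadcasters can review the flagged list scene by scene and decide whether
# to cut the footage or insert the mandated health warning before it.
```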

Looking at content from another angle, content creators can also depend on metadata to locate existing material for new projects. Documentary filmmakers, news organizations, and investigative reporters are just a few of the content creators that frequently look for footage relating to specific themes, public figures, locations, and similar categories. When a famous or influential person passes away, or a current event warrants a special report, detailed and accurate metadata describing existing footage and media about the topic can make the search for this material far faster and more effective.

CONCLUSION

The integration of MAM solutions with metadata management tools, extensive data models, and AI-based functionalities, hand-in-hand with human quality control and judgment, will help organizations effectively label vast quantities of existing and new content. In turn, the improved accuracy and efficiency will deliver better business results across cataloging, editing, and mastering operations. By making it possible to find content faster, more easily, and more accurately, AI-enhanced systems will create new revenue opportunities.

T Shobhana is Vice President and Head of Global Marketing & Communications for Prime Focus Technologies.