Captioning for television has come a long way since it was first introduced in 1972, when the most popular cooking show of the time, “The French Chef” with Julia Child, was captioned.
The idea of captioning was quickly embraced by deaf and hard-of-hearing viewers and grew in popularity with general audiences as well, since it helped viewers clearly follow their favorite programs. Closed captioning steadily evolved from conventional methods to voice writing to what is now a far more automated process. Its applications have also evolved: captioning now improves the discoverability of video content and supports cognitive modeling (simulating human problem solving in a computerized model) for automated analysis of broadcast content.
Creators of news and sports content face new challenges with the latest FCC regulations, which give video clips of live and near-live TV programming published online a grace period of up to 12 hours and 8 hours, respectively, for posting closed captions after the programming has appeared on TV. Existing FCC closed-captioning quality rules also require captions for non-live programming to be accurate, complete and in sync with the dialogue. While content producers may view it as a challenge to stream video content that complies with these rules, multiple captioning services can be used to ensure regulatory compliance while simultaneously improving the user experience.
One way to quickly comply with the new FCC rules is to repurpose broadcast captions using a fully automated, cloud-based caption synchronization service. These services detect unsynchronized broadcast captions, send them to an application for automatic synchronization, and generate multiple formats suitable for publishing the content online. This technology coordinates all pre-recorded and online video content through an automated process in the cloud, providing a suite of options for clipping, data transfer and caption formats, as well as integration directly with the customer’s video platform. Since cloud-based technology does not lock a provider into a specific vendor, users can integrate it into their existing workflows via APIs.
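To illustrate the core of such a synchronization step, the sketch below shifts caption cue timings by an offset and renders the result as SubRip (SRT) output, one of the common publishing formats. It is a minimal illustration that assumes the timing offset has already been derived; a real service computes it automatically by aligning the caption text against the program audio.

```python
def shift_cues(cues, offset_seconds):
    """Shift caption cue timings by a fixed offset in seconds.

    `cues` is a list of (start_s, end_s, text) tuples. A real
    synchronization service derives the offset automatically by
    aligning caption text against the program audio; here it is
    supplied by hand for illustration.
    """
    return [(max(0.0, s + offset_seconds), max(0.0, e + offset_seconds), t)
            for s, e, t in cues]

def to_srt(cues):
    """Render cues in SubRip (SRT) format, one of several formats a
    format-conversion step might emit for online publishing."""
    def ts(seconds):
        total_ms = round(seconds * 1000)
        h, rem = divmod(total_ms, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1_000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"
    blocks = [f"{i}\n{ts(s)} --> {ts(e)}\n{t}\n"
              for i, (s, e, t) in enumerate(cues, 1)]
    return "\n".join(blocks)

# Broadcast captions often trail the audio; shift them 1.2 s earlier.
cues = [(3.2, 5.0, "Welcome back to the show."),
        (5.4, 7.1, "Tonight: souffles.")]
print(to_srt(shift_cues(cues, -1.2)))
```

The same shifted cue list could just as easily be rendered to WebVTT or other formats required by a given online video platform.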
Another challenge broadcasters face when publishing media online is the unavailability of captions. For instance, broadcasters may lose track of the original broadcast captions and need to regenerate captions for the content to be published online. A caption lookup service can quickly identify the captions associated with specific broadcast content. Typically, such a service uses audio-fingerprinting technology to precisely identify the air-time of a piece of media and then looks up the associated captions based on that air-time.
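The lookup idea can be sketched in a few lines. The example below is a toy illustration: a byte hash stands in for a robust acoustic fingerprint (production systems use spectral features that survive re-encoding), and the index and caption archive are hypothetical data.

```python
import hashlib

def fingerprint(audio_bytes):
    """Toy stand-in for an acoustic fingerprint. A real system uses a
    perceptual audio fingerprint, not a byte hash."""
    return hashlib.sha256(audio_bytes).hexdigest()[:16]

# Index built at broadcast time: fingerprint of each short audio
# window -> (channel, air-time in seconds). Hypothetical data.
index = {
    fingerprint(b"window-0"): ("WXYZ", 0.0),
    fingerprint(b"window-1"): ("WXYZ", 5.0),
}

# Caption archive keyed by channel, as (air-time, text) cues.
captions = {"WXYZ": [(0.0, "Good evening."), (5.0, "Our top story...")]}

def lookup_captions(clip_audio, duration):
    """Identify a clip's original air-time from its fingerprint, then
    return the captions that aired in that window, re-timed so the
    clip starts at zero."""
    hit = index.get(fingerprint(clip_audio))
    if hit is None:
        return None  # clip not recognized; captions must be recreated
    channel, air_time = hit
    return [(t - air_time, text) for t, text in captions[channel]
            if air_time <= t < air_time + duration]

print(lookup_captions(b"window-1", 10.0))
```

Once retrieved, the cues would go through the same synchronization and format-conversion steps as any other repurposed broadcast captions.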
Using a caption lookup service, the original captions from the broadcast program are retrieved, synchronized and applied to online clips. Caption lookup, combined with synchronization, offers great advantages to production houses and streaming video providers that need to re-sync captions after editing content for air in multiple countries. Since the original captions are retrieved through the lookup process, there is no need to redo the subtitles and captions in their entirety; the synchronization and format conversion necessary to meet all requirements are performed automatically. Broadcasters looking to rebroadcast clips or montages can likewise use the caption lookup service to retrieve captions from the original broadcast.
A smart captioning workflow can deliver highly accurate captions for content that is streamed online and distributed on social media channels. It also provides a better user experience, not only for the deaf and hard of hearing but also for the millions of people who watch videos on their smartphones and tablets every day without audio. In environments where users on laptops and mobile devices don’t want to, or can’t, turn on the volume, closed captioning lets them watch a show without sound, expanding the provider’s reach to a much larger online audience.
With so much content on streaming services like Amazon, Netflix, YouTube, Hulu, Vimeo and the like, closed captioning technology is set to take off. Streaming services source content from around the world, so to reach a global audience, providers must include caption services. Thanks to closed captioning, good programs are accessible regardless of the language spoken, opening avenues for content providers to reach a global audience.
Another factor driving the use of closed captioning is the creation of metadata. The metadata derived from closed captions increases the searchability of a video asset, facilitating SEO. For content owners, this increases the visibility of their video and lets users locate the content they want with ease. Enterprise video platforms within large corporations are another growing area where closed captioning and the resulting metadata make it easier to locate desired video assets.
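As a rough illustration of how caption text becomes searchable metadata, the sketch below builds an inverted index (word to asset IDs) from hypothetical caption tracks; the asset names and caption text are invented for the example.

```python
import re
from collections import defaultdict

# Hypothetical caption tracks for two video assets.
assets = {
    "clip-001": "The Fed raised interest rates by a quarter point.",
    "clip-002": "Highlights from tonight's championship game.",
}

def build_index(assets):
    """Build a word -> asset-IDs inverted index from caption text,
    the kind of metadata that makes a video library searchable."""
    index = defaultdict(set)
    for asset_id, text in assets.items():
        for word in re.findall(r"[a-z']+", text.lower()):
            index[word].add(asset_id)
    return index

index = build_index(assets)
print(sorted(index["rates"]))  # assets whose captions mention "rates"
```

A production system would add stemming, stop-word handling and timecodes per word, so a search can jump to the exact moment a term is spoken.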
Recently, the use of online video in the enterprise has seen exponential growth, driven mainly by improved bandwidth and processing power. A typical information-driven workplace uses on-demand and live-streamed video as part of its regular executive and HR communications, marketing and training activities. Unified-communication solutions that feature video are also growing fast, intensifying this trend. Today, it’s safe to assume that an average large corporation sits on a library of more than 10,000 hours of internally created video, representing roughly 20 to 30 terabytes of storage. As video creation intensifies, archiving needs will only increase. Adding closed captions to such enterprise videos adds value: the videos become easier to comprehend, and good metadata makes it easy to locate a clip in such an extensive library.
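A quick back-of-the-envelope check shows those figures are consistent. Assuming an average encoding bitrate of about 5 Mbit/s (an assumption for the sake of the calculation, not a figure from the article), 10,000 hours lands inside the 20-to-30-terabyte range:

```python
# Storage ~= hours x bitrate. Bitrate of 5 Mbit/s is an assumed
# mid-range HD streaming figure, not a number from the article.
hours = 10_000
bitrate_mbps = 5                                # megabits per second
gb_per_hour = bitrate_mbps * 3600 / 8 / 1000    # -> 2.25 GB per hour
total_tb = hours * gb_per_hour / 1000
print(f"{total_tb:.1f} TB")                     # prints "22.5 TB"
```

Bitrates of roughly 4.5 to 6.7 Mbit/s would span the full 20-to-30 TB range quoted above.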
Online video distributors (OVDs) and subscription video-on-demand (SVOD) services with large amounts of footage can benefit from cloud-based subtitling by using automated speech-to-text capabilities for increased efficiency and high-volume handling, along with the ability to deliver multiformat or customized versions for integration directly into existing workflows. With Digital Nirvana’s Video Logging service, media production houses and producers with large amounts of footage can improve their editing efficiency, as well as organize content and improve its discoverability within their data centers. The company’s cloud-based Closed Captioning service uses audio fingerprinting to automate near-live synchronization of live broadcast captions, with the ability to revise the text. Automated speech-to-text conversion, coupled with a state-of-the-art workflow and experienced captioners, reduces the time and cost to publish and provides better search-engine discoverability, all while complying with broadcast guidelines.
By using an automated captioning service, content creators can not only comply with all FCC guidelines but also reduce the time and cost to publish, while providing a greatly improved user experience and better search-engine discoverability.
Hiren Hindocha is CEO and President of Digital Nirvana.