Why would you want to repurpose material that has already been seen? Do you really think your viewers want to see things that were shot last year? If this is the case, why do we have film crews roaming the country? Couldn't we just show edited stock footage for every newscast?
Of course not.
Last month I had the opportunity to sit with some friends while they went through a huge vault of material. The topic of the day was how to continue to monetize content collected over the years.
We discussed a number of issues, from royalty payments to whether certain technologies fit better with specific content. After talking through the technical aspects of streaming media, DVD, HDTV and other delivery methods, we determined that it is not necessarily the content, but the user experience, that determines which technology should be used.
As we looked more closely at the specifics, we realized that the technologies themselves actually had different sociological impacts. For instance, DVD content will most likely be enjoyed in a family room on a television with multiple people watching, whereas Internet streaming media (remember, broadcast is the original streaming media) will most likely be viewed by a single individual on a computer screen at close range.
So why is this important for repurposing content?
In addition to all the technical aspects, some content is better suited to certain technologies. In the case of streaming, content that speaks to the individual works best.
Given this scenario, streaming content requires that extremely careful attention be paid to the encoding of the material. As with all material that is encoded, the quality of the original is extremely important. Today, choosing material that is more or less static, thereby reducing the deltas between frames, will produce better-quality streaming video at very low bandwidths. Minimal movement is the key to streaming success. Be aware that even though all the Internet pundits talk about "broadband," the simple fact is that most Internet users still connect with a 28.8Kb/s modem.
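The rule of thumb about frame deltas can be made concrete. The sketch below scores a clip by the average per-pixel change between consecutive frames, using tiny made-up 2x2 grayscale "frames" purely for illustration; real encoders are far more sophisticated, but the intuition is the same: the smaller the delta, the better the clip survives a 28.8Kb/s pipe.

```python
def mean_frame_delta(frames):
    """Average per-pixel absolute difference between consecutive frames.

    A low score suggests mostly static material (a talking head), which
    compresses well at modem-era bitrates; a high score suggests heavy
    motion, which does not.
    """
    total = count = 0
    for a, b in zip(frames, frames[1:]):
        for row_a, row_b in zip(a, b):
            for pa, pb in zip(row_a, row_b):
                total += abs(pa - pb)
                count += 1
    return total / count

# Tiny 2x2 grayscale frames, invented for the example.
static = [[[100, 100], [100, 100]],
          [[101, 101], [101, 101]],
          [[100, 100], [100, 100]]]   # barely changes frame to frame
busy = [[[0, 0], [0, 0]],
        [[255, 255], [255, 255]],
        [[0, 0], [0, 0]]]             # every pixel flips every frame

print(mean_frame_delta(static))  # 1.0   -> streams well
print(mean_frame_delta(busy))    # 255.0 -> will fall apart at low bitrates
```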
But wait, who wants to watch more talking heads? That's boring. Here is where the real power of the Internet/network comes in: metadata.
Metadata is additional information such as URLs that can be incorporated into your content to allow the user to interact with the stream, and even give boring material exciting new twists.
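At its simplest, timed metadata is just a list of events keyed to points in the stream, which the player fires as playback advances. The event format and URLs below are invented for illustration; real players of the era used their own proprietary schemes for URL flipping.

```python
# Hypothetical timed metadata: (timestamp in seconds, URL) events
# attached to a clip. None of this reflects any vendor's actual schema.
events = [
    (0.0,  "http://example.com/intro"),       # shown as the clip starts
    (12.5, "http://example.com/background"),  # deeper background material
    (40.0, "http://example.com/buy"),         # call to action near the end
]

def events_due(events, last_t, now_t):
    """Return the URLs whose timestamps fall in the window (last_t, now_t].

    A player would call this once per tick and push each returned URL
    to the viewer's browser frame alongside the video.
    """
    return [url for t, url in events if last_t < t <= now_t]

print(events_due(events, 10.0, 15.0))  # ['http://example.com/background']
```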
Creating metadata to be streamed along with your content was a fairly tedious manual task until recently. Excalibur Technologies (www.excalib.com) has introduced a suite of tools called the Screening Room.
The Screening Room Client automatically captures, analyzes and storyboards analog or digital video assets - including live feeds and associated speech, closed caption text, and annotations. It will automatically generate video storyboards by selecting the most important key frames.
Further, it recognizes and detects cuts, fades, shifts, pans, tilts, blank screens and dissolves (as well as optional salient frames) for highly accurate storyboard representations. Users can also manually select keyframes on demand for full control over the storyboarding process.
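Excalibur's actual detection algorithms are proprietary, but the textbook approach to cut detection gives a feel for how it works: compare a coarse gray-level histogram of each frame against the next, and flag a cut wherever the distributions differ sharply. The sketch below assumes frames are lists of rows of 0-255 pixel values; the threshold is an illustrative guess.

```python
def histogram(frame, bins=8):
    """Coarse gray-level histogram of a frame (rows of 0-255 pixels)."""
    counts = [0] * bins
    for row in frame:
        for p in row:
            counts[min(p * bins // 256, bins - 1)] += 1
    return counts

def find_cuts(frames, threshold=0.5):
    """Flag a hard cut wherever consecutive histograms differ sharply.

    The difference is normalized to [0, 1]; fades and dissolves need a
    gentler, multi-frame test, which this sketch does not attempt.
    """
    cuts = []
    for i, (a, b) in enumerate(zip(frames, frames[1:]), start=1):
        ha, hb = histogram(a), histogram(b)
        total = sum(ha)
        diff = sum(abs(x - y) for x, y in zip(ha, hb)) / (2 * total)
        if diff > threshold:
            cuts.append(i)  # frame i begins a new shot
    return cuts

dark = [[10, 10], [10, 10]]        # tiny stand-in frames
bright = [[240, 240], [240, 240]]
print(find_cuts([dark, dark, bright, bright]))  # [2]
```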
Screening Room extracts closed caption text in several standard formats for concept and pattern searching, and lets users add user-defined metadata and annotations. Lastly, it converts speech to text when closed caption text is not available for indexing and searching. Annotations can also be added using voice recognition.
It has all the standard capture and management tools you would expect, but its real muscle is its indexing and speech-to-text capability. This is an awesome feature. The product is built from modular components and actually uses multiple speech-to-text engines.
Imagine being able to convert your vast video library into a searchable archive. Your library then becomes a valuable asset that can be monetized much the same way news subscriptions are sold on the Web today (the Wall Street Journal, for example).
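Once speech-to-text or caption extraction yields a transcript per clip, "searchable archive" boils down to a classic inverted index: a map from each word to the clips that contain it. The clip names and transcripts below are made up for the example.

```python
from collections import defaultdict

# Hypothetical clip transcripts, as speech-to-text might produce them.
transcripts = {
    "clip_001": "the senator discussed the budget vote",
    "clip_002": "highlights from the championship game",
    "clip_003": "budget talks stalled again in the senate",
}

# Build the inverted index: word -> set of clips containing it.
index = defaultdict(set)
for clip, text in transcripts.items():
    for word in text.split():
        index[word].add(clip)

def search(word):
    """Return the clips whose transcript contains the given word."""
    return sorted(index[word.lower()])

print(search("budget"))  # ['clip_001', 'clip_003']
```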
CNN recently selected and installed the Agility Enterprise encoding system from Anystream, Inc. to handle the encoding of completed story clips. This system is promoted as "any input, any output." Anystream automates as much of the process of encoding as possible to insulate the human operator from the underlying complexities. At CNN this has helped to eliminate a significant number of steps from the production process and provides encoding for multiple formats faster than real time.
Agility Enterprise's features include a unified encoding language that is codec-, device- and platform-independent. The programming model is distributed computing built on Internet standards such as TCP/IP and HTTP. According to Darian Germain of Anystream, this allows the system to be fault-tolerant, provide high-performance distributed encoding, offer unlimited scalability, and allow sophisticated remote management and control. The software offers customization in areas such as billing systems, database systems, media-asset management systems, and network and system management applications.
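The "any input, any output" idea can be sketched as a fan-out: one completed story clip is submitted once and encoded into every target format in parallel. The `encode` stand-in and format names below are invented; a real system like Agility dispatches jobs to networked encoding nodes and invokes actual codec toolchains rather than local threads.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative target formats; not Anystream's actual profile names.
FORMATS = ["real_28k", "real_80k", "wmv_28k", "wmv_80k", "quicktime"]

def encode(source, fmt):
    """Stand-in for an encode job; returns the output file name."""
    return f"{source}.{fmt}"

def encode_all(source, formats=FORMATS):
    """Fan one source out to all target formats in parallel.

    Parallel jobs are how a farm of encoders turns out multiple
    formats faster than real time, with no extra operator steps.
    """
    with ThreadPoolExecutor(max_workers=len(formats)) as pool:
        return list(pool.map(lambda f: encode(source, f), formats))

print(encode_all("story_0427"))
```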
The Agility product allowed CNN Interactive to eliminate multiple analog-to-digital-to-analog passes. Implementing an all-digital system reduced image degradation and maximized image quality. CNN has estimated that its streaming-media production output has more than doubled without any increase in staff headcount. According to Tom Gerstal, CNN Sports Illustrated has been using the Anystream system since Oct. 1, 2000. "They have gone from having regular encoding glitches [with the old method] to not having one complaint since the Anystream system started. It has tremendously improved reliability here," explains Gerstal.
System design: Jonathan Perkes, Director, Engineering, GWNS
Development and installation: Engineering Department, GWNS (Peter Zackowski, Joe Olejniczak, Richard Rosevalt, David Bingham, Tom McWilliams, George Parson, Andy Solywoda, Clifton Beckford, Jay LaPrise, Rick Holman, Tom Douglas, Ripton Hazel, Philip Laskowski, Bill Malchow, Tom Alessi)
Operating concepts/procedures: David Lostracco, Director, Operations, GWNS; Joe Romaniello; Ken Breitenstein; Bob Summa; Heather Weber; Brian Hitney
SeaChange Broadcast MediaCluster 1237
Oxtel IS2 Imagestore automated master control & channel branding, with Easyplay, Easysound, Squeezy DVE
Oxtel EK1 Easykey switcher/downstream keyer
Miranda Kaleido multi-image display system
EVS Broadcast Equipment Super Split multi-image display system
Evertz 7760 AVM serial digital video, audio, CC and V-chip monitoring
Evertz 8084 closed-captioning encoders
Tektronix PQM 300 program QoS monitor
Videotek VTM multiformat on-screen monitor
Philips Venus routing
Sony DVW A510 Digital Betacam players
ADC DV6000 fiber optic transport
Crestron touch-panel control system
Avocent DS1800 Remote computer control system
Straylight Digital Cluster Sync
Sony DXC-950 video camera with Fujinon VCL-714 BXDEA lens
RMC-950 camera remote control
PVM-8040 8" monitor
SVO-1630 B50D DSS receivers
Videotek VTM-183 rasterizer
VDA-16 video distribution amp
RS-12A 12X1 stereo audio/video router
Shure Brothers U4D UHF diversity receivers
U1 UHF transmitter bodypacks
FP-410 4-channel auto-mixer
Vega RMT-10 single-channel IFB transmitter
PL-2S bodypack transmitter
ESI Model 180 controller
DPT-133 pan-tilt head
Kino-Flo Diva 400 D4 120 lights
QTV WinCue prompter software
FDP-11 prompter hardware
Aphex 320A Compellor
Studio Technologies Model 81 stereo audio distribution amp
Electrograph DTS25 25" plasma monitors
ADC ICON I-WA QPC cross connect
James J. Skvaril, Production Craft, Inc., Project Manager
Mark Sarantakos, MTS Systems, system design
Jon M. Salzman, Architect, Eastlake Studio
Morningstar Managers: Stephanie Kerch, Kathy Habiger
Michael Laszuk, Studio Manager
Construction: Tom Murray, Larry O'Connell, Scott Pillsbury