Servers Enable Newsroom Workflow Changes

The advent of solid-state, optical and spinning magnetic disks for the capture of field content is moving newsroom editorial functions into an improved, high-speed workflow that changes how news is prepared and multipurposed. Technologies enabling this change include the physical media (e.g., Blu-ray, P2, or direct hard drive recording); the camcorder systems that generate proxies as low-resolution copies and/or as thumbnail renderings; and the servers that manage essence and metadata throughout the process.

The result is editing systems that support collaborative editing and allow content to move from field to air in a smoother, more efficient manner.

METADATA SERVERS

This workflow process requires new forms of servers: servers that collect content in the field and move it to laptop or purpose-specific editors; servers that cache thumbnails or low-resolution proxies from which a reviewer can make pre-production decisions; and servers that collect higher-resolution content (e.g., IMX, DVCPRO-HD or DNxHD) and then marry those images with their proxies and associated metadata in preparation for editing and play to air. Beyond video- and audio-centric servers, metadata servers manage various sets of information related to the content, then translate or manipulate that data for applications such as newsroom computer systems, rundown automation systems for play-to-air, or archive directors that manage content for other purposes.
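The "marriage" of high-res essence, proxy, thumbnail and metadata described above can be sketched as a simple association record keyed on a shared clip ID. This is a hypothetical illustration; the field names and the `marry_proxy` helper are illustrative, not any vendor's actual schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class ClipRecord:
    """One record a metadata server might keep per clip (illustrative)."""
    clip_id: str                 # unique ID assigned at acquisition
    essence_path: str            # high-resolution file (e.g., DVCPRO-HD)
    proxy_path: str              # low-resolution proxy for review
    thumbnail_path: str          # still frame for browsing
    metadata: dict = field(default_factory=dict)  # slug, reporter, timecode, etc.

def marry_proxy(records, proxies):
    """Attach proxies to their high-res records via the shared clip ID."""
    by_id = {r.clip_id: r for r in records}
    for clip_id, proxy_path in proxies.items():
        if clip_id in by_id:
            by_id[clip_id].proxy_path = proxy_path
    return records
```

Keying everything on one clip ID is what lets the proxy travel ahead of the high-res essence and still link back to it later in the workflow.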


Fig. 1: Field acquisition and transfer workflow.

Content may be further aggregated for Web publishing, streaming media and RSS feeds. Metadata is then pushed into an asset management database, where information associated with this ancillary material is linked with content from previous stories, allowing for user interaction and searching. Finally, one must not forget the mobile handheld device, whose content is generated by rate shaping and reformatting to give the message its new up-close-and-personal version, one that can then be promoted with advertising, yielding new revenue to support those new technological aspirations.

All these processes require both a collaborative and an associative workflow environment; the latter topic we will introduce this month and carry forward in greater detail over future installments. We'll start with the collaborative workflow, which we will define by example, using applications currently deployed by broadcast and news service organizations.

COLLABORATIVE WORKFLOW

First, content gathered in the field is selectively segmented from outtakes, possibly tagged by version or slug, and then organized for preview on either a portable or laptop editing system. A reviewer (possibly the journalist, the field producer, or someone at the newsroom home base) will select clips and determine which are appropriate for the final edited segments. Metadata is formatted for the home-base editing system and the newsroom computer system, then readied for transmission, along with the proxies or thumbnails, to home base (see Fig. 1).

Depending upon features in the camera capture platform, reviewers can set in points and out points, add to metadata that may already have been captured by the camcorder, and then automatically associate those items with the physical media acquired during the field capture and field-reviewing processes. This happens shortly after acquisition, given that camcorder systems may partition metadata and proxies from the high-resolution content and allow them to be exported from the camera platform to the review platform with little human intervention.
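The reviewer's marking step above can be sketched as a small function that stamps in/out points and merges additional metadata onto a clip entry exported from the camera. This is a hypothetical sketch, not a real camcorder API; the clip is treated as a plain dictionary and the original is left unmodified.

```python
def mark_clip(clip, in_tc, out_tc, **extra_metadata):
    """Return a copy of the clip with in/out points set and reviewer
    metadata merged into whatever the camcorder already captured."""
    marked = dict(clip)                    # shallow copy; original untouched
    marked["in_point"] = in_tc
    marked["out_point"] = out_tc
    # Merge reviewer-added fields over the camera-supplied metadata
    marked["metadata"] = {**clip.get("metadata", {}), **extra_metadata}
    return marked
```

Returning a marked copy rather than mutating in place mirrors the workflow itself: the review decisions travel with the proxy while the original media stays untouched in the camcorder.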

Once the content’s metadata, thumbnails and/or their low-res proxies are downloaded to a reviewer’s laptop, the larger files on the physical media—still in the camcorder’s portable memory—are marked with a record lockout to prevent accidental deletion. Once protected, the reporter/camera team can disconnect and move on to the next assignment.

Using proxies or thumbnails, content is then further reviewed on the laptop, where it may be assembled into packages consisting of proxies and metadata. AirCards, cell phones or the Internet provide the transport of this content to the newsroom. These rough elements can now take their place in the next step of the production process.

Higher-resolution content, still held on the camcorder memory, will be transferred later in the workflow. By the time high-res files are transferred to the newsroom-editing server (via microwave or sneakernet), only files previously identified by the reviewer, or the rough-cut editor, need to be transferred. Other files may be deleted or later moved to archive when or if needed.

The idea is to minimize the amount of physical data transferred, and hence the time it takes for transfer—all the while allowing the workflow to progress without encumbering other tasks or assignments. Eventually, the high-res files catch up in the workflow as editors seamlessly find their full resolution content appearing on the editing servers, ready for cutting as a final story.
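The selective-transfer step described above can be sketched as a simple split of the clip pool: only the clips the reviewer flagged are queued for high-res transfer, and the byte count of the held-back material shows how much transfer time the selection saves. This is an illustrative sketch; `plan_transfer` and its field names are hypothetical.

```python
def plan_transfer(clips, selected_ids):
    """Split clips into those queued for high-res transfer and those
    held back on the camcorder media, returning the bytes saved."""
    to_transfer = [c for c in clips if c["clip_id"] in selected_ids]
    held_back = [c for c in clips if c["clip_id"] not in selected_ids]
    saved_bytes = sum(c["size_bytes"] for c in held_back)
    return to_transfer, saved_bytes
```

Even a rough model like this makes the trade-off visible: every clip the reviewer cuts in the field is bandwidth and time recovered at the microwave or sneakernet stage.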

GETTING IT READY

Second, in our collaborative workflow process, content needs to be utilized by multiple persons to assemble various other elements such as teases, previews or promotional elements—without having to make copies of the same content, and without waiting for the final cut story to be completed. This makes for faster and more flexible utilization of content long before the story airs on the 5 o’clock news.

Historically, as in videotape-only models, each tape element was physically brought to the edit bay and copied so that other editors on different transports could edit their piece. This model vanished as nonlinear newsroom editing servers surfaced, giving way to the next-generation field-tape/NLE-server model, whereby multiple persons edited different stories from the same material. Here, the drawback came from the linear nature of videotape, which forced all the unscanned content to be ingested, cataloged and then tagged for in points and out points. Metadata was handwritten on notebook paper and carried to the next portion of the editorial process.

In this model, ingest happened in real time, and although some server-based NLE systems allowed editing to begin shortly after ingest, the lack of the pre-review functionality prominent in today’s solid-state or random-access disc systems still created a bottleneck.

Today, videotape/NLE-server models are slowly evaporating in favor of the new demands for instant availability. The rule seems to be that the faster content can move from image capture to preview to finished segment, the more opportunity there is for associative uses of that content. Station Web sites are proving that people will accept stories or information in a less than polished form—in turn bolstering viewership and building loyalty. As content becomes available 24/7 and the audience evolves, cell phones, PDAs or laptops will provide the new means of distributing information. This, in turn, is where an adjunct workflow requirement begins.

In the next installment, we’ll explore this associative workflow concept, the second part of the fundamental change in the news environment.

Karl Paulsen

Karl Paulsen is the CTO for Diversified, the global leader in media-related technologies, innovations and systems integration. Karl provides subject matter expertise and visionary perspectives related to advanced networking and IP technologies, workflow design and assessment, media asset management, and storage technologies. Karl is a SMPTE Life Fellow, an SBE Life Member and Certified Professional Broadcast Engineer, and the author of hundreds of articles focused on industry advances in cloud, storage, workflow, and media technologies. For over 25 years he has continually featured topics in TV Tech magazine—penning the magazine’s Storage and Media Technologies and its Cloudspotter’s Journal columns.