Networking for Production and Storage

As the onward march of IT-based equipment into broadcast television accelerates, there is an increasing need to look carefully at the way items connect and work together. Networking and storage networking are the keys. They can change, and hopefully improve, workflow and take full advantage of the possibilities that the new IT technology makes available.

While some analog systems continue to linger, most have already moved to component digital operation. Standard definition (SD) video is digitized according to ITU-R BT.601 and this, along with AES/EBU digital audio, is carried over familiar coax cable up to 200m using the Serial Digital Interface (SDI), which has a data rate (bandwidth) of 270Mb/s. Modern digital studio equipment comes complete with SDI connections, so all the components of a facility — cameras, switchers, DVEs, recorders, routers, etc. — can be quickly hooked up.

SDI is designed just for television. It is real-time “streaming” with very low latency (i.e. delay) — material can be instantly used as it arrives. Communication is “best effort” and one-way, so it does not have the degree of error correction of many IT-based schemes, but this is no problem for the type of material involved.

There is also IT-based equipment in use, especially in graphics and editing, that may or may not have an SDI connection but will connect over an IT network. Rapid change is moving the balance toward IT-based equipment and networking. Two years ago some post houses counted more network than video connections. Some are now (almost) completely IT-based.

Few would doubt the potential benefits of the changes afoot. However, handling video and audio — television content — is far from the native “data” environment of IT-based equipment. More than ever, correct system design is essential. To better understand this requires some knowledge of both networking and disk storage applied to television. These technologies lead to the currently “hot” area of storage networking. In storage networking, storage is shared, as opposed to networking, where various stores exchange data. Both depend on network technology.

Networking

Networking can be defined as communication using a series of data packets. SDI does not work this way and so cannot make a network, but its extension, SDTI (Serial Digital Transport Interface), can, by carrying packetized data over SDI infrastructure. This transports material such as MPEG, DV and even HDCAM compressed video in real time (DV can be carried at up to four times real time). However, it has limited general IT application. A further extension, SDTI-CP, standardizes the format of the data sent down the cable.

With television so well provided for by SDI and SDTI, why look at IT-style networking? Growth in services could continue by adding SDI interfaces to all new IT-based equipment. The truth is that networking is IT's native way of communicating. Moving away from that would spoil some of the advantages the equipment brings — including the ability to work with shared storage. As IT has many applications, it also has many methods of networking. Only those appropriate to television are mentioned here.

Networking offers many advantages:

  • It offers cost savings in infrastructure and operation.
  • It provides transport of all required information: video, audio, metadata, control, talkback — virtually anything you wish to put down it.
  • It is the only way to create shared access to data.
  • It has plug-and-play capabilities for easy use.
  • With standard platforms, networking can be easier and cheaper than SDI.
  • You need a network for all the desktop computers in broadcast, so why not use it for video and audio too?
  • It can handle live SD video (although this is not straightforward).

Types of networks

There are many types of network in use but LAN and WAN are the most common. The local area network (LAN) is spread throughout a building and may have thousands of connections. It is of particular interest as it can be directly applied to studio/post house needs. Breaking out of one location forms a wide area network (WAN). Two LAN sites can be connected by a WAN. The connection normally is rented from a telco or ISP.

Others include metropolitan area networks (MAN), which are generally telco/ISP systems for handling traffic in a city or suburb. Personal area networks (PAN) are just appearing. These provide very short-range wireless networks (less than 30 feet for Bluetooth) — for instance, between your laptop and PDA.

For general networking information, see www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/

Ethernet. Ethernet (IEEE 802.3) is ubiquitous and remains the choice for data exchange between IT equipment — traditional networking. It has undergone continuous development since its early 1980s 10Mb/s origins. 100Mb/s is in general use and 1Gb/s is also well established, while 10Gb/s is on the way. Note that the numbers always need careful interpretation: here a 1Gb/s data speed is actually 1.25Gb/s on the wire, coded 8B/10B. (Every eight-bit data byte for transmission is converted into a 10-bit Transmission Character to improve the transmission characteristics for more accuracy and better error handling.) All standards above 10Mb/s are capable of full-duplex operation — full data rate in both directions simultaneously.
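
The arithmetic is simple enough to show directly. The sketch below (Python, using only the figures quoted above) converts a line rate into a usable data rate for an 8B/10B-coded link; it also covers the Fibre Channel numbers discussed below.

    # Minimal sketch: usable data rate of an 8B/10B-coded link.
    # Every eight data bits travel as 10 line bits, so payload
    # efficiency is 8/10 = 80 percent.
    def data_rate(line_rate_bps):
        return line_rate_bps * 8 / 10

    # Gigabit Ethernet: 1.25Gb/s on the wire carries 1Gb/s of data.
    print(data_rate(1.25e9) / 1e9, "Gb/s")   # -> 1.0 Gb/s

    # Fibre Channel: roughly 1Gb/s on the wire carries about 800Mb/s.
    print(data_rate(1.0e9) / 1e6, "Mb/s")    # -> 800.0 Mb/s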

Ethernet is a connectionless architecture. Each data packet, of between 64 and 1518 bytes, has a destination address, and all connected devices listen for this and decide whether it is for them or not. A device (e.g. a PC) waits for the line to be quiet before starting its transmission. A mechanism called Carrier Sense Multiple Access with Collision Detect (CSMA/CD) handles cases where two stations attempt to transmit at the same time. It follows the rules of polite conversation; they simply wait a random length of time before starting again.
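
The "random length of time" is, in classic Ethernet, a truncated binary exponential backoff. A minimal Python sketch of the idea (the 51.2μs slot time is the 10Mb/s figure; the details here are illustrative rather than lifted from the standard):

    import random

    SLOT_TIME_US = 51.2   # slot time for 10Mb/s Ethernet

    def backoff_us(collisions):
        """After the nth collision, wait a random number of slot
        times drawn from 0 .. 2^n - 1 (exponent capped at 10)."""
        n = min(collisions, 10)
        return random.randint(0, 2**n - 1) * SLOT_TIME_US

    for n in range(1, 5):
        print(f"collision {n}: wait {backoff_us(n):7.1f} us")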

For further information see www.standards.ieee.org

Fibre Channel. Today, most FC runs at 1Gb/s transmission speed which, again due to 8B/10B encoding, gives an 800Mb/s maximum data speed. A newer standard of 2Gb/s has existed for some years but is only now coming into general usage. Both are capable of full duplex. Despite its name, FC can run over copper as well as fiber connections. Because of its close association with disk drives, its TV application is mostly, but not always, in the creation of storage networking.

The two primary ways of interconnecting FC devices are via Fibre Channel-Arbitrated Loop (FC-AL) or the more powerful fabric switching (see Infrastructure Devices). Like Ethernet, FC is also a connectionless protocol and uses an arbitration sequence (not CSMA/CD) to ensure access before transmission.

As with all networking, Fibre Channel is defined in layers, here labeled FC-0 to FC-4, which range from a definition of the physical media (FC-0) up to the protocol mappings (FC-4), which most importantly include SCSI, the widely used disk interface. This is key to its operation in storage networking.

For further information, see www.iol.unh.edu/training/fc/fc_tutorial.html and www.t11.org/index.html

ATM. Asynchronous Transfer Mode (ATM) provides excellent, if expensive, connections for reliable transfer of streaming data, such as television, with speeds ranging up to those of telecom backbones (10Gb/s). It is mostly used by telcos. The speeds most appropriate to TV operations are 155Mb/s and 622Mb/s. Unlike Ethernet and FC, ATM is connection-based: a path is established through the system before data is sent. A strong point is its Quality of Service (see later).

There are sophisticated lower ATM Adaptation Layers (AAL) offering connections through the network on which higher layers of the protocol run. AAL1 supports constant-bit rate, time-dependent traffic such as voice and video. AAL3/4 supports variable-bit rate, delay-tolerant data traffic requiring some sequencing and/or error detection. AAL5 supports variable-bit rate, delay-tolerant connection-oriented data traffic requiring minimal sequencing or error detection support. This is often used for general data transfers.

See www.atmforum.com/

IEEE 1394. IEEE 1394, branded as FireWire by Apple Computer and i.LINK by Sony, is unusual. It provides both asynchronous (no guarantee of time taken — like Ethernet) and isochronous (guaranteed within a time frame — like some ATM) data transfer modes. This is because it is aimed at AV applications, and it is widely used in prosumer and consumer products.

It runs at 100-, 200- and 400Mb/s, is simple and cheap to plug together and uses an arbitration technique to access the bus bandwidth between connected devices. However, it is currently restricted to short cables of 4.5m or 10m maximum. The upcoming IEEE-1394b standard offers higher speeds and longer cables (see Future).

See excellent article at www.computer.org/multimedia/articles/firewire.html

And www.zayante.com/html/IEEEinfo/IEEE.com/html

IP. The network protocols carrying the data lie on top of the physical networks and connections. Of the many, attention is focused here on two types — IP, the de facto standard, and the protocols that run on Fibre Channel.

Internet Protocol (IP) is the most widely used protocol in IT. Besides its Internet use, it is also the main open network protocol supported by all major computer operating systems. IP, or specifically IPv4, describes the packet format for sending data using a 32-bit address to identify each device on the network — four eight-bit numbers separated by dots, e.g. 192.96.64.1. Each packet contains a source and destination address.
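
A short Python sketch makes the point that the dotted notation is just a convenience — the address is one 32-bit number:

    import socket, struct

    def to_int(dotted):
        """Dotted quad -> 32-bit integer."""
        return struct.unpack("!I", socket.inet_aton(dotted))[0]

    def to_dotted(value):
        """32-bit integer -> dotted quad."""
        return socket.inet_ntoa(struct.pack("!I", value))

    n = to_int("192.96.64.1")
    print(n)             # the address as a single 32-bit number
    print(to_dotted(n))  # -> 192.96.64.1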

Above IP are two transport layers: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). TCP provides reliable data delivery, efficient flow control, full-duplex operation and multiplexing (simultaneous work with many sources and destinations). It establishes a connection, detects corrupt or lost packets at the receiver and re-sends them. This combination, TCP/IP, is the most common form of IP. It is used for general data transport but is slow and generally not ideal for video.

UDP uses a series of “ports” to connect data to an application. Unlike TCP, UDP adds no reliability, flow-control or error-recovery functions, but it can detect and discard corrupt packets using checksums. Because of UDP's simplicity, its headers contain fewer bytes and consume less network overhead than TCP's. This makes it useful for streaming video and audio, where maintaining a continuous flow is more important than replacing corrupt packets.
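
The streaming idea is easy to see in code. This minimal Python sketch paces UDP datagrams out at frame rate with no acknowledgments — the address, port and packet size are placeholder assumptions, not part of any standard:

    import socket, time

    DEST = ("127.0.0.1", 5004)   # hypothetical receiver
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    payload = bytes(1400)        # one packet-sized chunk of "video"
    for seq in range(25):        # one second of 25fps "frames"
        # a sequence number lets the receiver spot (not recover) losses
        sock.sendto(seq.to_bytes(4, "big") + payload, DEST)
        time.sleep(0.04)         # pace the stream; lost packets stay lost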

There are various other IP applications that live above these protocols such as File Transfer Protocol (FTP), Telnet for terminal sessions, Network File System (NFS), Simple Mail Transfer Protocol (SMTP) and many more.

Other protocols, such as SCSI, are often mapped onto networks, such as Fibre Channel, to act as a protocol layer. The aim is to carry a protocol targeted at a specific function — disk interfaces in the case of SCSI — over the network at maximum efficiency. This is why FC-SCSI is so important to storage networking. IP is a general-purpose network protocol designed for any application. This flexibility means it is less efficient than the targeted mappings.

Network topologies

Today, most networks are connected in a star configuration with connections to the various networked devices radiating from a central unit, hub or switch. Some networks, notably Fibre Channel, can be arranged as a loop of devices. Figure 1 shows a ring topology.

Stars offer the benefits of easy removal and reconnection of devices and fault isolation and, given the right network devices, they can be faster than a ring topology. Interestingly, Fibre Channel devices are usually arranged as a star (not a ring) for these reasons. Figure 2 shows star network topology.

For more information on topologies, see www.techweb.com/encyclopedia/defineterm?term=topology

Infrastructure devices

Devices arranged in a star need to connect with a network device in the middle. There are three general types. The most basic are hubs. How these work differs for each of the network types and some, such as ATM, do not support hubs. Ethernet hubs terminate and repeat the signals from one network spoke onto all the others — so all connected devices see all network traffic. For Fibre Channel, hubs make it easier to add and remove devices from their arbitrated loop.

Switches (also called fabric switches) are far more intelligent. They inspect the destination address of each data packet and, knowing the locations of all devices, send it down the appropriate spoke. This gives a massive performance improvement, as traffic not meant for a device does not clog its spoke's bandwidth.

Switches have fast hardware for packet inspection as well as a huge backplane bandwidth to send all the traffic to the correct ports. They are measured by their packet-per-second routing capability and the bandwidth of their internal switching backplane. Wire-speed, or non-blocking, switches pass all network data without missing anything. Many such switches exist today for the high-speed networks used for television.
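
The packet-per-second figures are easy to derive. At wire speed on a 1Gb/s port, the worst case is a stream of minimum-size frames; the sketch below shows why switch vendors quote numbers around 1.5 million packets per second per Gigabit port:

    LINE_RATE = 1e9            # bits per second
    # minimum frame (64 bytes) + preamble (8) + inter-frame gap (12)
    WIRE_BYTES = 64 + 8 + 12

    pps = LINE_RATE / (WIRE_BYTES * 8)
    print(f"{pps/1e6:.2f} Mpps")   # -> about 1.49 Mpps per port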

Routers or gateways can be combined with switches. A router handles the packets that need to pass from one network to another. For instance, if your plant had a LAN that wanted to connect to the Internet, it would use a router.

Types of transfer

There are several ways data can be transferred between devices over networks. Here, without referring to protocols, three “transfer styles” are reviewed.

With an IT-style transfer, the receiver can do nothing with a file until the transfer is complete. This is not normally a problem for smaller files and documents, but for large video or audio files it may cause a serious delay. So some broadcast manufacturers provide “broadcast” file transfers allowing file access as soon as the transfer starts. This allows editing, or even playout, of a file during transfer. These are proprietary systems but nevertheless very useful.

The third method of AV transfer, especially suitable where the receiving device wants to play the information soon, is streaming. Here data is sent as a continuous stream, often without error correction, at a constant data rate. Streaming is similar to an SDI connection but may have a large, variable delay.

Quality of Service

The broadcast industry grew up on reliable connections — via a patch panel or router — knowing that the video/audio will get through this dedicated connection instantly. Heaven forbid that someone else should even think of muscling in on the same cable! Welcome to networking.

Network switches can ensure that the data goes from one source to another but there may still be bottlenecks where traffic shares a single connection between two areas or switches. This traffic aggregation is one of the benefits of networking, but if too many streams try to use one connection something suffers and in video that means missed frames. It is a triumph of marketing over adversity that this problem is referred to as Quality of Service (QoS). To be fair, it usually means definable or good QoS, but it highlights that care is needed.

ATM was designed with QoS in mind and does this job well, allowing detailed characteristics to be set for any connection. In contrast, IP is having QoS grafted on and it has taken some time for this to be generally implemented. Even now, most IP networks have no built-in global QoS, although it can be done.

There are three defined levels of QoS in IP: Best-Effort, Integrated Services (IntServ) and Differentiated Services (DiffServ). ATM has its own QoS defined by the AAL layer definition.

Best-Effort/AAL-5 is self-explanatory and is what you get with most Ethernet-based networks. There are no guarantees on data delivery or bandwidth and all traffic has equal access to the network. In overload conditions data is either delayed or lost and has to be retransmitted. IntServ/AAL-1 provides applications with a guaranteed level of service by negotiating the required bandwidth across the network between the two ends that want to communicate. But DiffServ is more popular. It classifies IP data so that higher-priority traffic is given preference over lower-priority traffic — which may get delayed or lost in busy periods.
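
From an application's point of view, DiffServ amounts to marking packets and trusting the network to act on the marks. A minimal Python sketch (the Expedited Forwarding code point, DSCP 46, is a common choice for real-time traffic — an assumption here, not something specified above):

    import socket

    EF_DSCP = 46   # "Expedited Forwarding" - high-priority traffic
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # The DSCP occupies the top six bits of the old IP TOS byte,
    # so shift it left two bits before setting the socket option.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
    # Packets sent on this socket now carry the EF marking; whether
    # they actually get priority depends on the network's QoS setup.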

Disk storage

Storing video and audio on computer disk drives is common today. It would be easy to assume that it is better than tape, which it is in many respects. A more accurate view is that it is different from tape — not everything is positive and there are limits to disk-based performance. One huge advantage is that disks allow breaking away from the rigors of totally real-time operation, making video operation with IT equipment possible.

Unlike tape, disks provide random (nonlinear) access to storage, millions of accurate read/write cycles without any deterioration and, being digital, their fidelity is assured. Well-known downsides are that they are usually not removable, they are susceptible to shock damage and they are limited in capacity due to relatively high costs compared with tape.

However, there remain some fundamental barriers that can mainly be attributed to applying a computer peripheral to television. Computers generally require short bursts of data, files of a few kilobytes, from disks. A single channel of uncompressed eight-bit SD video requires 21MB/s (31MB/s for RGB) continuously for the whole length of the item — which may be hours. It is only within the last year that a single drive has become available that can sustain such performance (not for HD, which requires over seven times the data). Also, there needs to be some failsafe protection and, in editing, more than one video channel is desirable.
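
The 21MB/s and 31MB/s figures follow directly from the sampling. A quick check in Python, assuming 625-line SD (720x576 at 25 frames/s; 4:2:2 sampling averages two bytes per pixel, RGB three):

    width, height, fps = 720, 576, 25

    ycbcr = width * height * 2 * fps   # 4:2:2 component video
    rgb   = width * height * 3 * fps   # full RGB

    print(ycbcr / 1e6)   # -> about 20.7 MB/s ("21MB/s")
    print(rgb / 1e6)     # -> about 31.1 MB/s ("31MB/s")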

The solution is to group drives and aggregate their performance. Usually this is done with a redundant array of independent (or inexpensive) disks (RAID), which also offers data protection should a drive fail. There are many configurations, or levels, but RAID 3 is usually accepted as most suitable for real-time video. To provide the continuous data speeds required, these are not off-the-shelf items but are specifically designed for video. Such RAIDs may be used as stores for stand-alone systems, such as edit workstations, or as storage blocks in SANs.

Such stores offer performance tailored to needs. Maintaining a flawless 24-hour, high-level performance is not straightforward, as the fundamentals of disks impose limits.

Disk drives have fewer moving parts than VTRs — only two: the disk platters themselves and the arm used to position the read/write heads. (See Figure 3.) A modern high-performance drive spins the disks at 10,000RPM, taking 6ms/revolution. To access required data the disk must spin to its start — an average of 3ms (latency) — and the arm must be positioned over the correct track — this positioning time averages about 6ms (worst case, edge to center, ~15ms; best, between adjacent tracks, ~1ms). Having all the video data held on adjacent tracks is most efficient but, as work progresses, with deletions and new recordings, the data becomes progressively fragmented around the disk and access times increase. This leaves less time to read the data and the data rate suffers.
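
These figures can be turned into an estimate of what fragmentation costs. A sketch, assuming the access times above and a 30MB/s on-track transfer rate (the figure quoted later under Performance): as the chunks of contiguous data get smaller, the drive spends more of its time seeking than reading.

    LATENCY_MS = 3.0     # average rotational latency at 10,000RPM
    SEEK_MS = 6.0        # average head positioning time
    ON_TRACK_MBPS = 30.0 # transfer rate once the heads are on track

    def effective_rate(chunk_mb):
        """Delivered data rate when every chunk needs a fresh access."""
        access_s = (LATENCY_MS + SEEK_MS) / 1000
        transfer_s = chunk_mb / ON_TRACK_MBPS
        return chunk_mb / (access_s + transfer_s)

    for chunk in (0.1, 1.0, 10.0):
        print(f"{chunk:5.1f}MB chunks -> {effective_rate(chunk):4.1f} MB/s")
    # -> 0.1MB chunks deliver ~8MB/s; 10MB chunks nearly the full 30MB/s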

Video servers that stream only long clips of video, such as in transmission playout, may record most or all of their material on contiguous tracks and replay them in the same order. But those working with editing workstations need random access down to frame level, preferably in real time, causing the store to rapidly fragment. Interestingly, analysis shows that a single server store, using disks as above, is limited to around 20 simultaneous real-time random-access video channels, as the access time, not the bandwidth requirement, maxes out the performance. Although you may consider this a harsh requirement, it illustrates that there are limitations to disk-based performance and that fragmentation remains an important issue. If servers are to maintain continuous 24-hour performance, stores will eventually fragment and there may be no time to defragment them, though some do run defragmentation routines when the workload allows.

There are two basic types of storage applications. The first is the record and playout of long-form elements with limited or no editing — as in the transmission example above. This typically relies on compressed video — up to 50Mb/s per channel (compatible with IMX and DVCPRO50 VTRs). The second application focuses on editing, where uncompressed video at 21- or 31MB/s per channel is needed. Multiple channels of real-time random access to every frame are often required here.
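
Keeping the units straight matters here: playout rates are quoted in megabits per second, editing rates in megabytes. A sketch of what that means for storage hours per terabyte:

    TB = 1e12   # bytes

    def hours(store_bytes, rate_bytes_per_s):
        return store_bytes / rate_bytes_per_s / 3600

    print(hours(TB, 50e6 / 8))   # 50Mb/s playout -> about 44 hours/TB
    print(hours(TB, 21e6))       # 21MB/s editing -> about 13 hours/TB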

Making a store work requires much more than the disks. You also have to provide some form of database management to keep track of where all the clips, or even individual frames, are stored, and some thought is needed as to the interface to the outside world. Sometimes the latter is presented as an IT-style network connection, sometimes as TV-style SDI or SDTI and sometimes both. Running a server multiplies these needs. For instance, a server providing 20 real-time connections must run at 20x speed in all respects — including database and data bandwidth access. A telling test is to verify that all the listed connections can run together!

Storage networking

Networking is closely associated with storage — where else does all this data come from? One network technology in particular, Fibre Channel, plays a special role in storage networking. See www.searchstorage.techtarget.com/bestWebLinks

Desktop computers provide the most basic form of network storage, with the ability for one machine to make its local disk visible to other computers via a network — usually Ethernet. This is very useful, allowing transfers between machines, but it is not what is really meant by storage networking. The first level is a server that uses a general-purpose computer to provide storage that workstations can access. These can range from old PCs recycled as servers using Linux and Samba software right up to multi-processor PCs or Sun servers with RAID controllers connected to a large group of disks. Other tasks may be handled as well, such as running the centralized mail or handling some networking tasks. There are no fixed rules.

Network Attached Storage (NAS) describes a dedicated file server. It differs from general-purpose servers in that it runs a stripped-down operating system and its sole job is to provide network storage. It runs over existing networks and so may well be unsuitable for data-intensive applications such as video. See www.techweb.com/encyclopedia/defineterm?

Storage Area Networks (SAN) are a whole new ball game, especially with regard to networks. Their importance is huge as now they are the most common method of providing shared video storage. The design recognizes that moving large amounts of data is inconsistent with normal network general data traffic. SANs therefore form a separate network dedicated to connecting data-hungry workstations to a large, fast array of disks. (See Figure 4.) While SANs could use any network technology, Fibre Channel predominates. Its 800Mb/s data rate and disks with direct FC connections are ideal for making large, fast storage networks. In practice, basic networking and storage networking are used side-by-side to offer wide scope for sharing and transferring material. Besides disks, essential items are FC switches (if FC is used to connect storage) and software for file sharing and management. See www.techweb.com/encyclopedia/defineterm?term=SAN

Exactly how SANs are applied varies among broadcast manufacturers (see later), but they often provide the storage to supplement, or totally replace, workstations' local video storage. Thus the workstations can operate directly from a common, shared storage pool. Not only does this promote work sharing but it also leads to other efficiencies, such as eradicating the dead time required to load new material. This can now be off-loaded from the main editing areas to a dedicated loading station. Backups can become more straightforward too.

Video servers

The prime aim of a video server is to supply multiple channels of real-time video, often via SDI or SDTI connections. Even so, no video server can ignore network connections. GVG's Profile, one of the earliest systems, uses Fibre Channel to allow files to be copied between Profiles and for third-party access. Avid's Pluto AirSpace server has a Gigabit Ethernet connection, as do Quantel's Clipbox systems. Besides offering direct connections with IT-based equipment, these may allow faster-than-real-time transfer of files with third-party applications.

Performance

Between the networking and storage there are a large number of elements all, hopefully, working together. The whole ethos of networking is sharing, so predicting performance is not straightforward unless specific steps are taken to take charge of capacity — going against the ethos but guaranteeing performance where it is needed.

A chain is only as strong as its weakest link, so every step of a network needs attention. Starting with the disks themselves, a modern high-performance drive may quote an average data transfer rate of around 30MB/s, but this is not constant. The data rate from near the circumference is considerably greater and that from near the center is much less. Also, since constantly high data rates are required, the time taken to make random accesses significantly affects data delivery. A good design will add drives and management to ensure required specifications are met.

The use of non-blocking switches and QoS features does not mean that the workstation performance on a network will be anywhere near its wire speed. The problem is complex, depending on the physical network characteristics, the protocol used, the Network Interface Card (NIC — or Host Adapter) and the workstation power. For instance, Fibre Channel excels in SAN systems because mapping SCSI protocol onto FC works so well and, with an NIC tuned for SCSI, performance near FC wire speed is possible. However, run TCP/IP instead of FC-SCSI and performance drops dramatically.

Conversely, Gigabit Ethernet is mainly used with TCP/IP, so the NICs and the workstation software are tuned to this, making it much faster than FC-TCP/IP. However, performance is far short of the 1Gb/s Ethernet wire speed. Due to the small data packets and the overheads of the TCP/IP protocol, around 400Mb/s is reported on a modern PC/NIC. Also, the quality and power of the NIC will determine how much load is made on the processor to handle the network data transfers. Even so, real-time performance at SD is achievable with Gigabit Ethernet if the system is put together with care.
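
Checking such figures needs nothing more than two machines and a few lines of code. A minimal Python sketch (host and port are placeholders; run the receiver first, then the sender pointed at it):

    import socket, time

    def receiver(port=9000):
        srv = socket.socket()
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(65536)
            if not data:
                break
            total += len(data)
        secs = time.time() - start
        print(f"{total * 8 / secs / 1e6:.0f} Mb/s payload")

    def sender(host, port=9000, megabytes=100):
        sock = socket.create_connection((host, port))
        sock.sendall(bytes(1 << 20) * megabytes)
        sock.close()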

End-to-end system operation under normal working conditions gives the only true measure of performance. The network, protocol, server, switch, NIC, workstation processor power and the application are all parts of the puzzle. The numerous items, many incompatible, come from different suppliers, which means only a highly skilled IT workforce has any chance of building a system. For this reason many broadcast suppliers provide a one-stop-shop approach to their systems and the associated networking. Although this removes the chance of an open choice of components, it does provide a complete solution.

Practical issues

Ultimately the systems have to work in busy, pressured operational environments. There are more issues to consider. For example, when was the last time your SDI router failed or a VTR broke? How long did it take you to get something working again? What were the consequences of the failure — bad and maybe job threatening? Networks and disks are more complex than SDI routers and very different from VTRs. It is likely they will fail, possibly in more complex ways. The good news is that solutions exist to make your network and SAN 100 percent reliable — but at a price. The bad news is that the complexity rises with every extra piece that you add. It may be reliable, but does anyone understand the system anymore?

What about support? Analog video and SDI are well understood, but who can talk TCP/IP subnets, RAIDs and black-and-burst? Support staff needs to understand video as well as solve network and storage problems. The job just got a whole lot more interesting — or difficult. For some systems, especially SANs, the network can be considered as a separate unit, which often makes support easier.

Upgrading is important, but can parts of the network be upgraded while it is on-air? This could be helped by compartmentalizing the networks in the same way as SDI routers do today. This helps maintenance, support, reliability and ease of installation.

Available systems

Looking at the offerings from a few manufacturers illustrates what can actually be done with the technology today. As the technology is moving fast, this is only a snapshot in time, so expect things to be different tomorrow.

The sharing of work is clearly useful in areas such as news, sports, editing and post production. Although these are prime targets for companies offering systems that connect their products together, servers are most commonly found in transmission/playout areas.

Transmission/playout. For most broadcasters, revenue depends on the successful airing of commercials. Video servers are rapidly displacing cumbersome tape cart machines in this area. These are handling compressed video and are not expected to create edits. The former reduces the data rate and the latter means that files can be stored in groups of pictures, rather than the picture-per-file basis needed for editing — thereby reducing the database management overhead.

Transmission is a popular application for Omneon's Network Content Server. In the Omneon product, two stores provide 80 hours of material at 25Mb/s (other bit rates can be used) with Fibre Channel connections to the Director. Somewhere, systems using disk storage — which is file-based and asynchronous — have to make the video data fit with television's regular line and frame rates — which are synchronous. Omneon chose 1394, as it allows attaching synchronous equipment to a file-based world. The Director interfaces between the Mediaports and the file system. The Mediaports translate the 1394 data into video, audio and data (carrying all three on one connection saves cabling) for the various video applications. Mediaports are not always required: the 1394 can connect directly to video applications such as FAST's purple. NLE. The system is expandable with more storage, 1394 connections and Mediaports.

Above all, the need is for reliability and so, although there may be spare bandwidth — or even actual SDI connections — on production servers, typically the on-air device is kept separate. This is to maximize reliability and avoid being blocked out by other demands on server bandwidth. A different approach is offered by SGI, where their Guaranteed Rate I/O (GRIO) ensures that a designated area of their SAN always has sufficient resources guaranteeing its bandwidth at all times. While not offering any form of equipment redundancy, this approach may be attractive to some as it also offers rapid transfers to the transmission area from adjacent storage.

Post production. Avid and Discreet offer server products for editing and post production. Here, the need is often for dual-channel support for a number of editors with real-time uncompressed video, which makes heavy demands on bandwidth. Avid's popular Unity MediaNet SAN-based system (see Figure 5) uses Fibre Channel-connected disks and supports up to 25 dual-stream clients over a wide variety of Avid products. While the server provides work sharing, system management also makes a big contribution. For example, the Unity Administration tool creates dynamic virtual storage so the SAN space may appear as a single “disk” of 7.3TB or many disks allocated to each workstation. Should one need more space, a suitably privileged user can re-allocate any surplus from one workstation to another to make the most efficient use of all available space. For more information on Avid's Unity, see www.avid.com/products/unity_medianet/index.html

Discreet has a SAN, Stone and Wire, for its high-end systems. This combines a Fibre Channel-connected storage system, Stone, with a HIPPI-based client-to-client network connection, Wire. The new jobnet pro offers a SAN environment for up to 10 NT-based edit workstations with dual-stream uncompressed video supplied directly over Fibre Channel. Maximum storage is 7.7TB, or 108 hours. Figure 6 shows a quite typical mix of FC-connected SAN and 100Mb/s peer-to-peer Ethernet. As with many systems, tasks are divided and here jobnet producer software runs on a PC to provide browse-level functions such as shot logging, storyboard editing, approval, etc. See www2.discreet.com/products/d_products2.html?prod=infrastr&cat=storage

These systems are proprietary. Open systems do exist that make a shared SAN appear to be just another disk on a workstation. However they are not in widespread use in the editing arena, partly because of the service and support issues of a mixed-supplier environment.

Where there is 3D animation, shared storage is obviously present, but the demands on the system are lower than those of streaming video. Good-quality networking such as 100Mb/s or Gigabit Ethernet can handle such systems.

News. News is the harshest broadcast environment for servers. There is a constant flow of material progressing through the system with many workstations involved — nearly all of which want access to the video, audio and text for the journalists. Graphics are also involved. Work is highly parallel — many people working at once, possibly on the same story. The period up to on-air time is always frantically busy, with many demanding instant access to everything; finally there is playout into the bulletin. Much of video server system design is about supplying adequate bandwidth and removing bottlenecks. Any shortcomings will be noticed, as the whole system has to operate smoothly under extreme conditions that occur every day.

Of the over 100 workstations that may need to share the video, most are for journalists, whose needs are met with compressed versions of the clips for making their edit decisions. Quantel's digital news production system (see Figure 7) often features two separate networks: Ethernet serving the journalists' stations with browse-quality video and audio from the browse server, and another buried inside the Clipbox Power (central production) server, providing the broadcast-quality material. The latter is a SAN “system-in-a-box” structure with no FC but an extremely fast internal bus connecting its RAID storage, presenting 14 editable SDTI channels (the production edit suites operate directly on the server store) and 1Gb/s Ethernet to the edit stations and other news facilities. Figure 7 shows a full news system. Note the use of a separate server for transmission, with backup from the production server.

Pinnacle Systems' VorteX Networked News solution (see Figure 8) uses a SAN-based, FC-connected MediaCore as its main shared storage but breaks out from that with Gigabit or 100Mb/s Ethernet. Using standard, but tuned, TCP/IP (achieving 70Mb/s payload data over 100Mb/s Ethernet) significantly reduces infrastructure costs (vs. Fibre Channel) and yet achieves the required broadcast-quality performance with DV or MPEG compression. The 1Gb/100Mb Ethernet mix can be varied to suit specific requirements. Here again, several networks are employed: the SAN storage, broadcast-quality equipment and browse “proxy” quality for the many journalist workstations. Much of the equipment uses standard IT platforms, following Pinnacle's open technology principles.

See www.pinnaclesys.com/docloader.asp?templ=7&doclink=/bsd/solutions/networkednews/doc/index.html

Future

The technologies employed in networking and storage networking are rapidly developing. Such changes are bound to boost the efficiency and performance of networking and storage.

Disks. Disk drive capacity has always been cited as a limitation but its importance continues to recede. Historically capacity has doubled every two years (41 percent pa), but recent development has been nearer 60 percent pa. (See Figure 9.) Current in-use drives are up to 73Gbytes (approx. one hour of uncompressed SD) but 180Gbytes is already available.

HD imposes roughly seven times the demand for data (~560GB/h), yet disk stores have already been built to provide dual-channel, uncompressed support. Such rapid progress ensures that disk-based stores will increasingly dominate television operations into the future.

Much of the increased storage capacity comes from increasing the track density (TPI, tracks/inch: 18,000) and the recording density (BPI, bits/inch: 342,000) — the linear data density along the tracks — making an overall gain in areal density (figures shown for a high-performance 73GB drive). Even the compact 1.6-inch high, 3.5-inch drives may have as many as 12 stacked platters. Note that increases in recording density affect both capacity and data rate. Another way of augmenting data rate is to increase the RPM — spindle speed. Currently 10,000RPM is fast and there are some 15,000RPM models available. Faster rotation also reduces the latency — in turn reducing the time taken to reach required data. Despite the pitch to which drives have already progressed, this pattern of development is expected to continue towards 2010.
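
Both the density figures and the growth rates are easy to check. A sketch, using the numbers above (the 73GB starting point is the drive quoted earlier):

    # Areal density from the quoted figures: tracks/inch x bits/inch.
    print(18_000 * 342_000 / 1e9, "Gbits per square inch")   # ~6.2

    # Compound capacity growth at the two quoted annual rates.
    def project(start_gb, annual_growth, years):
        return start_gb * (1 + annual_growth) ** years

    for rate in (0.41, 0.60):
        caps = [round(project(73, rate, y)) for y in (0, 2, 4, 6, 8)]
        print(f"{rate:.0%} pa:", caps, "GB")
    # 41% pa doubles roughly every two years, as the text notes.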

IP. The 32-bit address space of IPv4 is not enough to support future development, and workarounds are already in use. The Internet Engineering Task Force (IETF) proposed a new standard, IPv6, in 1998. This massively expands addressing capabilities from 32 to 128 bits. There is also better QoS with a new implementation of DiffServ. Authentication, data integrity and confidentiality are supported, and the handling of common packets becomes easier and faster. There are also extensions to multicast and multi-homing IP addresses.

The change to IPv6 may well be driven by telcos, as the European 3G cell phone system requires two globally unique fixed IP addresses for each mobile device to be provided via IPv6.

Gigabit Ethernet and IP. The commodity Ethernet products running the open standard protocol IP do a great job but presently cannot provide reliable high performance networking for multiple uncompressed SD or HD video streams. This will change. Again, the mighty Telecom market sees packet switching networks and IP as the way to go. They need multi-vendor working QoS solutions to get voice, and ultimately video, reliably through their systems. Sources say this goal is close.

The IT sector uses IP, and demand for bandwidth and data is growing. Top-end NIC cards are offloading ever more of the IP protocol handling to improve network performance and lighten the load on the workstations' processors.

10Gb/s Ethernet is around the corner, with initial use expected to be for switch-to-switch interconnects. NIC cards for high-end servers will offload most of the IP protocol, as Ethernet packets arriving every 1.2μs present far too heavy an interrupt load for a processor doing other work.
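
The 1.2μs figure is simply the time a full-size frame occupies a 10Gb/s wire:

    # 1518-byte frame + 8-byte preamble + 12-byte inter-frame gap
    wire_bits = (1518 + 8 + 12) * 8
    print(wire_bits / 10e9 * 1e6, "microseconds")   # -> about 1.23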

IEEE-1394b. Networking is encouraging but it focuses on files and storage. What about live TV? Can cameras and vision mixers ever have their synchronous SDI replaced by a network connection?

IEEE-1394a with its isochronous transfers has guaranteed bandwidth and timing. The upcoming IEEE-1394b, with longer cables (100m over fiber), may offer a new option for broadcast. The current 400Mb/s is fine for SD video and compressed HD, but 1394b defines 800Mb/s and 1.6Gb/s rates — covering HD in all its current forms. IEEE-1394 is one to watch.

Although “IT-based” and “open” are often taken as synonymous, this is hardly the case with storage networking. Self-built SANs are not easy, so many wisely choose proprietary offerings. However, connecting to someone else's Fibre Channel is not the same thing as plugging in SDI. Maybe it will happen by default, but there is a definite need for standards to truly open up this technology to the television industry.

Bob Pank is a television industry journalist. He can be reached at bob@pank.demon.co.uk.

Jon Smith is principal consultant for Three Steps Forward Ltd. He can be reached at jon.smith@threestepsforward.com.