Production room infrastructure

“Help me,” screamed the engineer from the graphics suite. His 20GB file had just failed twice to transfer to the post room storage, and Bobby, the editor, was screaming that our network sucked and was nothing like the one at ABCD Productions. “Who created this garbage network anyway — a bunch of monkeys?”

Believe it or not, I recently heard a similar conversation as I was touring a friend's facility, looking at their new DVD production suites. Once we were out of the fray, I turned to him and asked what was going on. To me this looked like a new facility. After he gave me the history of the place — very interesting, it was actually an old firehouse — we got down to the network question.

I discovered that because of the way the facility's network had grown up over the years, it was actually a collection of networks rather than a single cohesive network. Sure, you could move files around, but in the worst case they went from an AppleTalk network through a gateway to a switch, out to a router, then to a hub and finally to an artist's desktop. He had a mix of twisted pair, coax, Fibre Channel, IEEE 1394 and a Gigabit Ethernet switch they had just gotten in. What a mess! Does this sound familiar?

Networks that were once built to carry text are now being asked to carry everything from graphics to real-time interactive HDTV. Walk into any post production/production room in the world and you'll see more computers than cameras. Whether video is initially created in digital form or not, digitization is a key part of virtually every production and post production process. We once thought AppleTalk could do it all, only to find out that 10 Base-T is useless and 100 Base-T is not much better. Gigabit Ethernet and ATM switches are where post production networks are happening today. More and more, our post networks look a lot like high-end telephone company backbones.
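To make that concrete, here is a quick back-of-the-envelope comparison of nominal link rates against common uncompressed video rates. The figures are textbook line rates, not measurements from any particular facility, and compressed formats obviously need far less:

```python
# Nominal link rates vs. common uncompressed video rates (Mb/s).
# These are textbook line rates; usable throughput is lower once protocol
# overhead and contention are counted, and compressed video needs far less.

LINKS_MBPS = {
    "10 Base-T": 10,
    "100 Base-T": 100,
    "Gigabit Ethernet": 1000,
    "ATM OC-3": 155,
    "ATM OC-12": 622,
}

STREAMS_MBPS = {
    "Uncompressed SD (270 Mb/s SDI)": 270,
    "Uncompressed HD (1.485 Gb/s HD-SDI)": 1485,
}

for stream, need in STREAMS_MBPS.items():
    print(stream)
    for link, rate in LINKS_MBPS.items():
        verdict = "fits" if rate > need else "does not fit"
        print(f"  {link:<18} {rate:>5} Mb/s -> {verdict}")
```

Nothing below Gigabit Ethernet or an OC-12 even gets close to an uncompressed SD feed, which is why the per-island bandwidth question drives everything that follows.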

So my friend saw me snickering as he was explaining all this and asked if I had a suggestion. In my most obnoxious voice I said, “Throw it all out and start over.” Of course, management would never think of throwing out a device they paid $100k for, even if it is nearly worthless today. Why should they, they figure, if it isn't broken?

The challenge here was to take an overall view of the facility. We looked at their technology plan (amazing that they actually had one, but all facilities should) and at where their customers were going to take them in the future. There seemed to be a growing need for more high-def work and even for wide area connections to a planned remote facility and to external partners.

So, over the course of the next two days, we worked out a plan to better utilize what equipment he already had and put a proposal together to migrate to a unified network. The first thing we did was segment his production processes into “islands” mainly based on bandwidth needs. This was actually quite obvious once we had a clean sheet to work with.

So we drew out a 100 Base-T Ethernet network for the mostly ProTools audio suites, using existing equipment from around the facility. This included a 16-port switch and a number of hubs. We dedicated three switch ports to each of the four audio rooms and added a 16-port hub for each room (a total of 18 ports per room). The remaining four switch ports were reserved for gateway functions.
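As a sanity check on the port arithmetic (assuming, on my part, that each room's hub hangs off one of its three switch ports as an uplink), the budget works out like this:

```python
# Port budget for the 100 Base-T audio island.
# Assumption: each room's 16-port hub uplinks through one of that room's
# three switch ports.

SWITCH_PORTS = 16
ROOMS = 4
SWITCH_PORTS_PER_ROOM = 3
HUB_PORTS_PER_ROOM = 16

room_ports_on_switch = ROOMS * SWITCH_PORTS_PER_ROOM    # 12
gateway_ports = SWITCH_PORTS - room_ports_on_switch     # 4 left for gateways

# Per room: one switch port feeds the hub, leaving two direct switch ports
# plus sixteen hub ports for devices.
usable_per_room = (SWITCH_PORTS_PER_ROOM - 1) + HUB_PORTS_PER_ROOM  # 18

print(f"Switch ports taken by rooms:    {room_ports_on_switch}")
print(f"Switch ports left for gateways: {gateway_ports}")
print(f"Usable device ports per room:   {usable_per_room}")
```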

The DVD room was already a self-contained network based on Gigabit Ethernet so we left it alone.

Next were the mighty editing suites — mostly digital equipment, but still with a little bit of analog gear just to make the old-timers feel at home. This is where we actually had the most trouble. There was a lot of dedicated gear with dedicated interfaces and protocols: media file servers, NLEs, routers, switchers (the video kind) and effects boxes. Here he had Fibre Channel for the media servers, Ethernet and ATM for the NLEs and effects, and 10/100 Base-T for the routers and switchers (for status and control).

We decided to split this problem up even finer and put the media servers on their own network. This was to allow the greatest bandwidth between these devices without their having to first go out to a gateway device. The rest of the gear was networked using Ethernet in a configuration similar to the audio rooms. The few devices that were capable of ATM were included in the following network segment.
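To put rough numbers on why the media servers get their own segment, consider how long a 20GB file, like the one that started all the shouting, takes at nominal line rates. These are best-case figures that ignore protocol overhead, disk speed and contention, so real transfers are slower:

```python
# Best-case time to move a 20GB file at nominal line rates
# (ignores protocol overhead, disk speed and contention, so real
# transfers take longer).

FILE_GB = 20
file_bits = FILE_GB * 1_000_000_000 * 8  # treating GB as 10^9 bytes

LINKS_MBPS = {
    "10 Base-T": 10,
    "100 Base-T": 100,
    "ATM OC-12": 622,
    "Gigabit Ethernet": 1000,
}

for link, mbps in LINKS_MBPS.items():
    minutes = file_bits / (mbps * 1_000_000) / 60
    print(f"{link:<18} {minutes:6.1f} minutes")
```

At 100 Base-T that is nearly half an hour per attempt in the best case, which goes a long way toward explaining the shouting in the graphics suite.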

Last we looked at the two graphics rooms, where there was an interesting mix of equipment: PCs, Macs, SGIs and one Sun. The Sun and SGI equipment had ATM connections as well as 100 Base-T, although only the Ethernet was being used. Here we decided to have a little fun and pull the whole facility together under one unified network.

I priced out ATM cards for the PCs and a network switch that was capable of ATM, Gigabit Ethernet, 10/100 Base-T and, I was hoping, Fibre Channel. As it turned out, I found a few suitable ones but none with Fibre Channel. The cost of the ATM switch with all the proper interfaces was not much more than the Gigabit Ethernet switch they had just gotten in. We discussed this and thought it could still be returned, even if we had to pay a restocking fee.

So we finished off this high-bandwidth network with a switch at the center that was natively capable of handling the various interfaces and protocols. The added benefit of this ATM hub-and-spoke topology was that any single island could be upgraded as technology forces networking to higher bandwidths. Also, by using ATM we were able to put a plan in place for connectivity to the outside world.
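One piece of arithmetic worth keeping in mind when sizing ATM links: every 53-byte cell carries only 48 bytes of payload, so roughly 10 percent of the line rate goes to cell headers before any AAL framing is counted. A minimal sketch (the stream bitrates are illustrative figures, not numbers from this facility):

```python
# ATM "cell tax": every 53-byte cell carries 48 bytes of payload, so the
# line rate must be at least payload_rate * 53/48 -- before AAL framing,
# signaling or idle cells are counted.

def atm_line_rate_mbps(payload_mbps: float) -> float:
    return payload_mbps * 53 / 48

for label, mbps in [
    ("Uncompressed SD video (270 Mb/s)", 270),
    ("MPEG-2 contribution feed (~50 Mb/s, illustrative)", 50),
]:
    print(f"{label}: about {atm_line_rate_mbps(mbps):.0f} Mb/s on the wire")
```

By this math an OC-12 (622 Mb/s) spoke has comfortable headroom for a single uncompressed SD feed, but nowhere near enough for uncompressed HD.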

Connecting to the outside world over a WAN has its own issues. Video sources generate high-bandwidth, bursty traffic, and video streams require very low cell loss rates and low cell delay variation. Although ATM networks provide the required QoS, when competing for bandwidth on a public WAN the video stream may still see the service it needs degraded. More on this in the future.

Just in: Sony is to release networkable VTRs — watch out world. Soon everything will talk to everything and each device will be “aware.”

Steven M. Blumenfeld is currently the GM/CTO of AOL — Nullsoft, the creators of Winamp and SHOUTcast.

Media 100's iFinish4

Media 100 recently announced a streaming media production system for Windows 2000, iFinish4.

iFinish4 is an interactive streaming media solution optimized for corporate networks. The package enables creative professionals to develop high-quality interactive streaming media (video and audio) content for delivery on the Web. Using EventStream, Web designers can create content that viewers interact with directly: it lets them embed interactive instructions into streaming media programs to trigger highly visual, content-rich capabilities, including graphics, Flash animations and Java applications, all synchronized with the video on the site.

Designers can define hot spots, URL flips and chapter marks that let viewers interact with the stream by clicking on objects to gather information, launch related websites from the video or even purchase items depicted in the content. Such capabilities are advancing interactive online advertising and e-commerce, and they raise interesting revenue-model questions for broadcast stations looking to make money from streaming their content.
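To give a feel for the idea, here is a purely hypothetical sketch of a timed event track. It is not Media 100's actual EventStream format or API, just an illustration of how triggers such as chapter marks, URL flips and hot spots can be thought of as timecoded entries attached to a stream:

```python
# Hypothetical illustration only -- NOT Media 100's EventStream format.
# A timed "event track" attached to a streaming clip: each entry fires
# when playback reaches its timecode.

events = [
    {"at": "00:00:05.00", "type": "chapter",  "label": "Product intro"},
    {"at": "00:00:12.50", "type": "url_flip", "url": "https://example.com/specs"},
    {"at": "00:00:30.00", "type": "hot_spot", "region": (120, 80, 200, 160),
     "action": "add_to_cart"},
]

def due_events(playhead: str):
    """Events at or before the playhead (fixed-width timecodes compare as strings)."""
    return [e for e in events if e["at"] <= playhead]

print(due_events("00:00:15.00"))
```

The real product presumably handles all of this inside the authoring tool; the point is only that the triggers travel with the stream and fire against its timeline.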

For more information, go to Media 100's website at www.media100.com.