Using the cloud

Server virtualization, hypervisors and virtual machine managers (VMMs) have matured rapidly, making the use of virtual servers indistinguishable from the real thing to end users and software alike. Many of the most popular software services in use today, by both consumers and businesses, run on virtual or cloud-based platforms, often without the end user even being aware of it.

If you work for a large corporation with a significant IT department provisioning IT services for a large workforce, there is a high probability that at least some of those services are delivered through virtualized servers, cloud-based platforms or some form of virtual appliance. These are mature products in constant day-to-day use at some of the world's biggest organizations.

It is this ubiquitous, connected nature of cloud computing that makes it so powerful. Applications, once tied to the corporate network or requiring complicated installation on an end user's computer, can now offer their services to those same users without any of these previous restrictions.

Let's consider in more detail how these cloud or virtual services are provisioned, created and maintained. (See Figure 1.) Several companies and organizations are involved in the provisioning of cloud-based services and virtualization platforms. As the technology progressed over the second half of the last decade, so did the need to move virtual machines (VMs) from one platform to another. VMs are just software and as such can be encapsulated in a single file. That file can contain not only the virtual representation of the physical hardware itself, but also a fully installed operating system, service packs, applications, tweaks and everything else needed to make a given application work. This encapsulation of the virtual hardware, OS and installed applications is known as a virtual appliance. In the early days of virtualization, each vendor of VM software had its own format for how a virtual appliance was represented and stored.

With the introduction of standards such as the Open Virtualization Format (OVF) for the interchange of virtual machines and appliances, work on the platforms that support them has been able to move ahead at a faster pace. Support within VMMs for features such as taking a currently running physical machine and capturing a virtual image of it has made it easier for IT managers to deploy virtualized environments for their 24x7 core applications.
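To see how literally "a single file" can be taken, the short sketch below (the file name is a placeholder) uses Python's standard tarfile module to list the contents of an OVA package, the single-file form of OVF, which is simply a tar archive holding the XML descriptor, a manifest and the virtual disk images.

import tarfile

# An OVA package is a plain tar archive: one .ovf XML descriptor,
# an optional .mf manifest of checksums, and one or more disk images
# (commonly .vmdk or .vhd). "appliance.ova" is a hypothetical file name.
with tarfile.open("appliance.ova", "r") as ova:
    for member in ova.getmembers():
        print(f"{member.name:40s} {member.size:>12d} bytes")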


In the past, to create an e-mail server, an IT manager would need to purchase a server; have it physically installed; and provision power, network, cooling and so on for it. Once installed, the process of installing and patching the OS would begin, which could take several days. Only after a good, solid OS was in place could the actual application installation begin, which, depending on the complexity, might take several more days. Configuration of the application and migration to the production environment might take even longer.

Using a virtual appliance running on a cloud-based platform, an IT manager can simply deploy the new server as a single encapsulated file and boot it instantly. Today, these virtual appliances can be purchased as pre-built systems: with a credit card, it is possible to go online, order a new web server or e-mail server, and have it instantly deployed on your favorite cloud platform. Contrast this process of around two minutes with the 10 days required in our previous example.

Leading on from this, many of the key cloud providers have now developed data center automation systems, which allow automatic start-up, shutdown and movement of running virtual servers based on rules, scripts and load-balancing algorithms. This allows a cloud-based application to start additional servers automatically when the load becomes high. The technique is used to great effect by Amazon as it scales its active servers to match the current demand for its services.
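As a hedged illustration (the thresholds and names here are assumptions, not any provider's API), the kind of rule such automation applies can be as simple as this:

def desired_instance_count(current_count, avg_cpu_percent,
                           scale_up_at=75, scale_down_at=25,
                           min_count=2, max_count=20):
    """Toy load-balancing rule: add a server when average CPU is high,
    remove one when it is low, and stay within the allowed pool size."""
    if avg_cpu_percent > scale_up_at:
        return min(current_count + 1, max_count)
    if avg_cpu_percent < scale_down_at:
        return max(current_count - 1, min_count)
    return current_count

# Example: four running servers at 82 percent average CPU, so the rule asks for five.
print(desired_instance_count(4, 82))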

Using the above techniques, there is no reason why a broadcaster can't provision relevant media services in a similar way. This technology can already make a useful contribution to the broadcast technology industry.

Today, several broadcast vendors are marketing solutions based on these technologies, which will develop further over the next few years to provide an important component in the way we architect and deploy future broadcast platforms. They have the potential to change the face of broadcast technology forever, empowering broadcasters to provision additional services more quickly and at lower cost than at any time in the past.

To take advantage of these technologies, however, a next generation of systems integration is required. Integration at the network and software interface level will become an important differentiator of systems integrators in the future.

Applying cloud infrastructure to broadcast applications

VMs have enabled a new public utility of cloud-based computing, providing IT resources that are charged based on usage. This new utility has transformed the way IT companies deploy applications hosted on x86 server architecture, allowing them to quickly deploy new software and services.

More recently, vendors of broadcast technology have extended their use of the x86 Intel server architecture as they move towards file-based workflows. In the early days, it was back-room systems such as Traffic and Scheduling that made the most use of an x86 architecture. As IT servers become ever more powerful, many broadcasting tasks that historically were handled by unique, proprietary DSPs can now be carried out in software on standard IT servers. If we look at an entire workflow — ingest, transmission and uplink — of a typical publisher-broadcaster, we will find IT servers running applications at every stage.

Ingest

Baseband video is always going to be a problem for a cloud-based platform. The whole point is that a cloud-based platform uses standardized IT interfaces such as Cat 5 cabling and IP, not broadcast-specific standards such as SMPTE 424M (3Gb/s SDI).

In a cloud-based application, the essence of a broadcast transaction will remain the same, but the interface is likely to change. Today, its foundations rest on broadcast standards — for example, SMPTE 424M, which covers the transmission of 3G digital video signals over a single coaxial cable with BNC connectors. In the cloud, the essence will still be based on those broadcast standards, but the interface will be an IT interface.

This has already happened in the coding and MUX world, where signals were traditionally passed around as a DVB-compliant transport stream (TS) on ASI via coax and BNC connectors. Today, the same DVB-compliant TS conforms to the same broadcast standards, but it is carried over UDP/IP or RTP/IP on standard Cat 5 twisted pair.
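An MPEG transport stream is made of fixed 188-byte packets, and in practice seven of them (1,316 bytes) are commonly grouped into each UDP datagram so the result fits inside a standard Ethernet frame. The sketch below is a minimal illustration of that carriage, assuming a hypothetical multicast address and a captured TS file; a real sender would also pace the datagrams to the stream's bitrate rather than sending them as fast as the file can be read.

import socket

TS_PACKET_SIZE = 188          # fixed MPEG-2 TS packet size, sync byte 0x47
PACKETS_PER_DATAGRAM = 7      # 7 x 188 = 1,316 bytes, fits a standard MTU
DEST = ("239.1.1.1", 1234)    # hypothetical multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

with open("channel.ts", "rb") as ts:   # hypothetical TS capture file
    while True:
        chunk = ts.read(TS_PACKET_SIZE * PACKETS_PER_DATAGRAM)
        if not chunk:
            break
        sock.sendto(chunk, DEST)       # the same DVB TS, now carried over UDP/IP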

However, because the direct ingest of media as files is now commonplace, the need for conversion from baseband material is effectively negated. If baseband is unavoidable, products equipped with Sony's e-VTR interface and protocol allow a stream to be sent directly from the tape onto a network. Up to 70 percent of broadcasters' content is now being delivered by file rather than tape. Over the last 12 months, during which the tsunami in Japan made tape products scarce, there was a significant switch to file-based delivery — allowing the migration to the cloud to happen that bit more swiftly.

Even live feeds coming in from news or sports events can be accommodated. More recently designed DSNG vans and OB vehicles can deliver live material using IP delivery protocols. Today, when these signals reach the broadcast center, they are often converted to a baseband SDI signal, but usually only to satisfy the needs of legacy equipment. In a fully cloud-based system, this wouldn't need to happen, allowing the live signal to remain a stream.

Transmission

Even the complex functionality of a traditional broadcast chain — including the video server, presentation, logo generation, graphics and ARC — can be replicated exactly using software running on a generic x86 server. Channel-in-a-box systems have already shown this concept to be true. There are even broadcast vendors selling solutions today that, in addition to providing channel-in-a-box services, carry out live encoding of the DVB TS. This removes the requirement for traditional PCI-based conversion cards and allows the software to be deployed in a virtual machine or, to put it another way, in the cloud. Instead of a channel-in-a-box, we get the channel-in-the-cloud. The output from such devices is no longer coax cable and a BNC connector; rather, it uses standard routed IT infrastructure to deliver the encoded signal using UDP or RTP over IP.
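To make that output concrete, the following sketch shows one way the TS datagrams described above could be wrapped in RTP. It assumes nothing about any particular vendor's product; it simply builds the standard 12-byte RTP header, using payload type 33, the type registered for MPEG-2 transport streams in RFC 3551.

import struct

RTP_VERSION = 2
PT_MP2T = 33          # static RTP payload type for MPEG-2 TS (RFC 3551)

def rtp_packet(ts_payload: bytes, seq: int, timestamp: int, ssrc: int) -> bytes:
    """Prefix a chunk of TS packets with a minimal 12-byte RTP header."""
    header = struct.pack(
        "!BBHII",
        RTP_VERSION << 6,          # version 2, no padding, extension or CSRC list
        PT_MP2T,                   # marker bit clear, payload type 33
        seq & 0xFFFF,              # wrapping 16-bit sequence number
        timestamp & 0xFFFFFFFF,    # 90 kHz media clock
        ssrc & 0xFFFFFFFF,         # stream identifier
    )
    return header + ts_payload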

We now start to see the emergence of a stream-based workflow, enabled by the underlying file-based workflow. It is the next logical step as the broadcast industry moves toward a generic base for its platforms.

The standardized, agile channel-in-a-box software described above can now be cloud-hosted and created in an instant by broadcasters that have invested in the technology. Think about a TV company that has just secured permission to transmit a major sporting event, perhaps winning the rights from a competing broadcaster. This broadcaster will need to create additional channels quickly and at minimum cost to ensure a credible return on investment.

Today, this would be a major problem for any TV channel in this situation. It would have to ask if it could afford the physical deployment of hardware. Can the services be built in time? Does it have the space to house the equipment in its existing buildings? What will happen to the new channels if it doesn't manage to retain the rights in future years?

Using a stream-based workflow, which can run on virtual appliances and be provisioned using cloud technologies, this would be a much easier exercise. With our channel-in-the-cloud, we can use the online environment to order a new channel with just a few mouse clicks and a credit card. In the same way that IT professionals were able to create new MS Exchange servers in a few clicks, so it can be for broadcast professionals, creating a new channel by configuring options from the channel-in-the-cloud website — aspect ratio, HD or SD, HD format, closed or open captions, RTP/UDP stream delivery point, etc. The broadcaster's operators would be able to access their thin-client GUIs within minutes of the order being completed to start the work of uploading content, scheduling the channels, managing content, compliance editing, QC, and ultimately controlling and monitoring transmission.
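As an illustration only (the field names and values below are assumptions, not any vendor's actual ordering interface), the options listed above could be captured in a configuration structure as simple as this:

# Hypothetical channel order for a "channel-in-the-cloud" service.
channel_order = {
    "channel_name": "Sports Pop-Up 1",
    "definition":   "HD",             # "HD" or "SD"
    "hd_format":    "1080i50",        # only meaningful when definition is "HD"
    "aspect_ratio": "16:9",
    "captions":     "closed",         # "closed", "open" or "none"
    "delivery": {
        "protocol":    "RTP",         # "RTP" or "UDP"
        "destination": "203.0.113.10",  # placeholder address for the stream delivery point
        "port":        5004,
    },
}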

Uplink

The stream-based workflow can be extended even to the uplink site, with the signal exiting the cloud infrastructure as a real-time, MPEG-2 or MPEG-4 encoded, DVB-compliant transport stream delivered over RTP/UDP over IP. Broadcasters are already delivering to their uplink providers using routed MPLS networks, even though they may sometimes be unaware of it themselves.

Through this type of routed infrastructure, the uplink site itself is effectively in the cloud and can receive DVB streams sent via RTP/UDP over IP for re-multiplexing or uplinking directly to satellite.

Redundancy and economic disaster recovery

The implication of cloud-connected uplink sites is the ability to simultaneously “route” packets to two geographically separate uplink sites. (See Figure 2.) These packets could be provided by virtual appliances dual-hosted in two separate cloud domains that might physically be on different continents. Even the total loss of a broadcaster's primary facility would not result in any significant loss of service. Every thin-client-attached operator would be simultaneously updating both cloud domains, keeping them perfectly in sync.
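A minimal sketch of that dual routing, with placeholder addresses standing in for the two uplink sites: every outgoing datagram is simply sent twice, once to each site, so either one can carry the service alone if the other is lost.

import socket

# Hypothetical uplink sites in two separate cloud domains.
UPLINK_SITES = [("198.51.100.20", 5004), ("203.0.113.40", 5004)]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_to_all(datagram: bytes) -> None:
    """Duplicate every TS/RTP datagram to both geographically separate sites."""
    for site in UPLINK_SITES:
        sock.sendto(datagram, site)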

In the event of a total facility loss, operators would only need to get access to the Internet to continue their day-to-day operations. An entire cloud-based broadcast operation could be running in a disaster scenario from a handful of downtown coffee shops with good connectivity.

For broadcasters wanting economical disaster recovery (DR), the second cloud domain doesn't even need to be active — it can simply spring to life when required. That activation can even be automated, triggered by the loss of other services. Cloud resources are paid for only when they are consumed and are usually billed by the second.
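The automated decision can be as simple as a repeated health check against the primary domain. The sketch below is an assumption about how such a trigger might look; the endpoint is a placeholder, and start_secondary_domain stands in for whatever activation call the chosen cloud platform actually provides.

import socket
import time

PRIMARY = ("primary.example.net", 443)   # hypothetical monitoring endpoint in the primary domain
FAILURES_BEFORE_FAILOVER = 3

def primary_alive(timeout: float = 2.0) -> bool:
    """Crude liveness test: can we open a TCP connection to the primary domain?"""
    try:
        with socket.create_connection(PRIMARY, timeout=timeout):
            return True
    except OSError:
        return False

def start_secondary_domain() -> None:
    # Placeholder: in practice this would call the cloud provider's API
    # to boot the dormant virtual appliances in the second domain.
    print("Activating secondary cloud domain...")

failures = 0
while True:
    failures = 0 if primary_alive() else failures + 1
    if failures >= FAILURES_BEFORE_FAILOVER:
        start_secondary_domain()
        break
    time.sleep(10)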

The astounding thing about this approach is the ability to have DR that you pay for only when you need it. Contrast that with today's prohibitively expensive DR sites — sites that, although necessary, are often not implemented due to cost and return-on-investment concerns. (See Figure 3.)

The next generation of systems integrators

The technology, software and hardware to provide the above services and solutions already exist today. Some systems integrators have already run proof-of-concept trials of this type of solution, integrating several manufacturers' existing off-the-shelf software products into fully cloud-based solutions. It is this new approach to systems integration that will bring these isolated technologies together into a coherent whole.

Andrew Davies is General Manager at TSL Middle East.