Taking advantage of cloud-based processing
It’s impossible to avoid mention of “cloud” when discussing changes in broadcast workflows, yet many organizations are cautious about investing in new technologies when the benefits are not entirely proven. Whether cloud-based processing is right for an organization, or for a specific functional area within it, depends on the overall workflow and environment in which it operates.
Small organizations with limited budgets for IT overheads, keen to manage costs tightly, are usually happy with a dedicated server model, in which one server or workstation is used by one person at a time for one application at a time. Even then, such organizations are likely already interacting with cloud-based resources via collaborative tools such as Aspera and Signiant, albeit in a fairly limited way, focused mainly on the content-transfer element of the workflow.
Increasingly, broadcasters and media organizations are using server farms or clusters, as well as virtualized resources. Depending on the management software implemented, these allow users to gain speed by running multiple jobs in parallel, assuming each server is licensed for the appropriate application.
However, a growing number of broadcast organizations have gone beyond the cluster/Virtual Machine (VM) model and are starting to use cloud technologies, which offer a dynamic and flexible means of configuring processing resources. Organizations have a choice between a private cloud, in which all hardware is located within their own facilities (possibly spread across multiple sites) and runs their own cloud layer and VMs, and a public cloud, in which the user contracts with a service provider to access resources on demand.