HOLLYWOOD, CALIF.—Shortly before the start of the SMPTE 2015 Annual Technical Conference & Exhibition, TV Technology spoke with Richard Welsh about his upcoming session “Utilising Massive Compute Resource in Public Cloud for Complex Image Processing Applications.”
TV TECHNOLOGY: What kind of compute power are we talking here, and what is untenable for the average post-production facility?
RICHARD WELSH: A recent job we had, processing a 50-minute UHD natural history documentary, ran for four hours on just over 5,000 processors. We could have run that job in real time if we had split it up further, which would have taken us to more than 20,000 processors for one hour (though there was no rush to get the job back in real time).
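The arithmetic behind those figures can be sketched as a simple back-of-envelope calculation, assuming near-linear scaling (i.e. the total processor-hours stay roughly constant however the job is split); the function name here is purely illustrative, not part of Sundog's platform:

```python
def processors_needed(total_processor_hours: float, wall_clock_hours: float) -> float:
    """Processors required to finish a fixed amount of work in a given wall-clock time,
    assuming the work parallelizes near-linearly."""
    return total_processor_hours / wall_clock_hours

# The job described: four hours on just over 5,000 processors,
# so roughly 20,000 processor-hours of work in total.
total_work = 4 * 5_000

print(processors_needed(total_work, 4.0))  # 5000.0  (as the job was actually run)
print(processors_needed(total_work, 1.0))  # 20000.0 (the one-hour figure quoted)
```

The same invariant explains why "more processors" trades directly against "less wall-clock time" until overheads break the linear-scaling assumption.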
A really big VFX house may have 20,000 processors in a render farm, and several such farms around the world, but they are running 24/7 on multi-million-pound VFX jobs. Try soaking up their entire render farm for the odd hour here or there when you feel like it: unlikely. For the average facility, the capital expenditure required for a render farm with many tens of thousands of processors, plus all the required storage, networking, power, cooling and ongoing maintenance costs, is pretty much impossible to justify. Post-production margins are squeezed tighter than ever, and in that environment, when it comes to compute and storage expansion, all roads now tend to lead to the cloud.
TVT: What are some applications and options that cloud computing can provide that were previously unavailable?
WELSH: We're seeing that customers are really positive about the on-demand nature of cloud and the elasticity it offers. You can dip in and out on tiny jobs without risking capital expenditure on a box that gathers dust 90 percent of the time. Then, when the time comes, you can scale to meet huge demand for a short period, where the capital spend to meet that demand in-house would be enormous for the sake of one or two weeks' work.
We're seeing huge uptake of transcoding services in the cloud, as an example. It makes a lot of sense to be able to trickle jobs through transcoders during quiet periods, and yet not have to manage the flood every time a new title or library comes in that needs a generic process. Also, on the video-on-demand/IPTV side, the ability to deploy new channels or upgrade existing ones (from HD to UHD, for example) is really compelling. Of course, you can't start another Netflix tomorrow, but the ability to effectively have "pop-up" video channels in the cloud is grabbing the attention of traditional broadcasters. It's almost a saturated market at the top end, so it has reached maturity very quickly compared to other areas.
At our end of the market (post-production), take-up of cloud has been somewhat slower. We're seeing that change now, with big post-production hardware names making moves into the cloud. In setting up Sundog a couple of years ago, we made a conscious decision to go for the post-production niche, and part of that rationale was the ability to productize new processes that were effectively R&D code. We knew that building a platform was a big task, so building all the tools from scratch on top of it didn't make sense. We spoke to lots of big-name manufacturers about putting their tools into Sundog. Normally that product cycle takes many months, if not years; we have gotten to the point now where we can sometimes deploy new tools in weeks. We have several third-party manufacturers already in the system, including Dolby and Digital Vision. A couple of the new processes we have that are not available elsewhere are RealD's TrueImage process and a high-accuracy stereoscopic disparity-mapping technique, also from RealD. Those tools would have taken much longer to come to market without cloud. Additionally, deploying these tools in a wider platform has enabled us to build other tools around them, such as fully automatic 3D subtitle placement for 3D movies.
TVT: You are using “an iterative motion-compensated, super-resolution pixel reconstruction technique” by way of illustration. How much of a demand is there for this particular technique?
WELSH: The technique referred to is the RealD TrueImage process. This is brand new to the industry so it's too early to tell what the mature market will look like. However, we're currently processing jobs on a couple of big Hollywood feature films, with more in the pipeline. At the same time, we've run the process on restoration footage, low-budget indie movies, and wildlife documentaries. Evolutions Television in Bristol, U.K. is a leading post house for natural history work and they have really embraced this process as a core part of their work on high-end content, particularly UHD/4K work. Evolutions just ran TrueImage on "Jago: A Life Underwater", which won the Jackson Hole Wildlife Film Festival's "Grand Teton" prize. So we're seeing take-up from all sides where people are really concerned with getting the best possible quality out of their pictures.
The technique is hugely sophisticated but delivers spectacular results. The development team at RealD cut their teeth with TrueImage on Peter Jackson's “Hobbit” movies (before Sundog was involved). Those productions were struggling with camera noise that was exacerbated by shooting at high frame rate. This is bread and butter for TrueImage: in simple terms, it takes noise out while reconstructing detail in the image. You have to see it to believe it; we've had nothing but positive feedback so far. The downside, on the face of it, is that the process is so complex that it looked impractical in a normal post-production environment. However, when we looked at deploying it in the cloud, it became apparent that this was a way to keep the complexity that makes TrueImage so good while still delivering in reasonable timescales.
TVT: You say this type of processing is not without “pitfalls” and “practical limits.” Such as?
WELSH: The sophistication of the TrueImage technique lies in the fact that each pixel is reconstructed by reference to millions of other pixels in space, in time and across color channels. The result is that each image frame references trillions of data points during processing. That means hardcore machines and a tonne of attached RAM. We use Amazon Web Services' most RAM-intensive machines for this process, and it turns out that when you start looking to spin up hundreds of these for an hour or two here and there, you have to pick and choose your data centers. The total RAM used on the aforementioned natural history documentary came to over 15 terabytes. We have been processing some 120fps 3D 4K footage, and an equivalent length of that content would require 50,000 processors with 150 terabytes of RAM. We don't control, or have visibility of, our customers' workflows, so we need to be ready for the worst-case scenario, because big jobs always seem to come along at the same time. We want to be comfortable that the automatic scaling functions in the Sundog platform won't hit the end stops. For that reason, we prefer to run in the largest Amazon data center available in the same geography as our customer's facility. That gives us the flexibility to call on such massive resources at short notice.
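The quoted figures imply a roughly constant RAM-per-processor ratio as jobs scale up. A hedged back-of-envelope sketch of that relationship follows; the 10x factor is inferred from the numbers quoted above (50,000 vs. 5,000 processors, 150 vs. 15 TB), not a measured benchmark, and the function is illustrative rather than anything from the Sundog platform:

```python
def scale_requirements(base_procs: int, base_ram_tb: int, factor: int) -> tuple[int, int]:
    """Scale a known job's resource footprint linearly, keeping the
    RAM-per-processor ratio constant (the stated assumption here)."""
    return base_procs * factor, base_ram_tb * factor

# Known job: UHD documentary on ~5,000 processors with ~15 TB of total RAM.
# Inferred: 120fps 3D 4K of equivalent length is ~10x heavier.
procs, ram_tb = scale_requirements(5_000, 15, 10)
print(procs, ram_tb)  # 50000 150

# RAM per processor stays fixed at ~3 GB either way.
print(15 * 1024 / 5_000)  # ~3.07 GB per processor
```

This constant ratio is why provisioning for the worst case reduces to picking a data center that can supply enough of one machine type, rather than re-planning the memory-to-CPU balance per job.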
TVT: What aspects of post production are most likely to be cloud-based going forward?
WELSH: There isn't really anything that's off the table in the future. Right now, connectivity is one of the big limiting factors, but the cost is falling rapidly, available bandwidth is going up, and the business models of enterprise-class telecoms providers are changing to match the on-demand nature of cloud with on-demand bandwidth.
I think it will still be a few years before we see live color grading or playback of uncompressed 4K high frame-rate content where all the processing and data are in the cloud. I have no doubt that eventually that will become mainstream. But in the meantime there are so many new formats coming down the line that all need more compute, more storage, more network capacity. Scalability of all these things is a core benefit of cloud infrastructure.
With super-high resolution, high frame rate, high dynamic range and wide color gamut, virtual reality, augmented reality, holographic imaging and so much more all coming down the track at speed, cloud is going to be at the cutting edge of the next generation of processing and delivery of entertainment media. What we will see is that it makes more and more sense to push cloud applications further back up the process, towards content capture and creation. Where that stops remains to be seen, but there's no reason not to expect even high-end cinematography cameras to stream their data straight to the cloud in the future. Then the content stays in the cloud until it goes to the audience, be that in the cinema, the home, tablets, phones, eyewear or something else we haven't thought of yet!
Richard Welsh is co-founder and CEO of Sundog Media Toolkit Ltd. He serves on the board of the Society of Motion Picture and Television Engineers (SMPTE) as International Governor for EMEA, Central and South America. Welsh has worked in the cinema industry since 1999. He has been involved in various technology development projects during this time, primarily in digital cinema, and is named on patents in the area of visual perception of 3D. He started cloud software company Sundog in 2013; it specializes in scalable post-production software tools aimed at high-end broadcast and movie productions.