As buzzwords go, “microservices” is big right now, so let’s take a step back and try to understand why, starting with what the term actually means.
A microservice architecture comprises a collection of services, each one focussed on a single business function, self-contained and lightweight. Because they are independent, microservices can be maintained and developed as required without recompiling and debugging an entire application. There is, however, still debate over how small a business function must be for a service to qualify as “micro.” By contrast, traditional “monolithic” software may be modular internally, but it is packaged and deployed as a single entity, or monolith.
While microservices-based software development can offer clear benefits in many applications, as usual there are also drawbacks. Simply claiming that a product is microservices-based is meaningless if the benefits the approach delivers are not relevant to the job at hand. So we’ve been asking what the relevance is to playout. And the answers are interesting.
Microservices can make it easier to scale functionality. Between 2 a.m. and 5 a.m., when there is hardly any activity, you could theoretically have a single microservice managing a system within your facility, then scale it up to perhaps 10 instances to support the peak load later in the day. It goes without saying that it is easier to scale a small application than a large one that encompasses a lot of functionality. Similarly, if one of those instances fails, a new one is very easy to spin up.
Scaling requires additional hardware (compute and memory) to be available to run the extra instances, so an installation with a fixed amount of hardware may see less benefit. This is where the cloud comes in, providing more flexible compute that allows hardware and cost to be reduced during quieter periods and expanded on demand. But extra compute is not enough on its own: you also need some intelligence to know when to scale each part of the system. This increases the complexity of your system, and you will need the right tools to manage it.
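As a minimal sketch of what that scaling intelligence might boil down to, consider a simple sizing rule that maps current load to an instance count. All names and thresholds here are illustrative assumptions, not taken from any real playout or orchestration product:

```python
# Illustrative autoscaling rule: pick an instance count from current load.
# The function name, thresholds and capacities are hypothetical examples.

def desired_instances(queue_depth: int, jobs_per_instance: int = 50,
                      min_instances: int = 1, max_instances: int = 10) -> int:
    """Return how many service instances should run for the current load."""
    # Ceiling division: enough instances to cover all queued work.
    needed = -(-queue_depth // jobs_per_instance) if queue_depth else 0
    # Never drop below the floor, never exceed the configured ceiling.
    return max(min_instances, min(max_instances, needed))

# Quiet overnight period: a single instance suffices.
print(desired_instances(queue_depth=3))    # 1
# Daytime peak: scale out towards the ceiling.
print(desired_instances(queue_depth=480))  # 10
```

In a real deployment, a rule like this would sit inside an orchestrator’s control loop rather than in application code, but the principle is the same: the system, not an operator, decides when to grow and shrink.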
It’s easy to see how, for example, a microservices-based MAM system would be better at servicing a queue of workflow tasks that fluctuates with demand. With a playout system, however, you are not generally expecting massive variations in load, and your peak is relatively easy to predict. Let’s not forget that at the heart of your playout solution sits a video processing engine for 24/7 real-time playback, and this is not a small application, let alone a “micro” one. And if you’re looking to handle 4K video frames at high data rates, you are likely to see performance issues and increased complexity when splitting each component out into its own process.
Rather than fixating on the latest buzzword, let’s shift the debate to what will really deliver the resilience, efficiency and sustainability you need. For playout this means a system that is centrally monitored and self-healing, with open APIs, modularity and proper versioning of software that is correctly deployed. The system must interface well with third parties and, where required, must incorporate dedicated video processing hardware that delivers focussed functionality with reduced overhead on the underlying host system.
Perhaps a better approach, therefore, is to deploy an architecture that uses the appropriate tools for the specific job in hand. Some of these may be microservices, but where it counts, a modular, or even monolithic, application may be the best approach to deliver the reliability and speed the task demands.
Looking further into the future, beyond microservices, it would theoretically be possible to deploy a serverless architecture, which takes the granularity down to the level of individual functions and no longer requires the broadcaster to manage hardware. Say you wanted to do a playlist import on a cloud-native system: you would pay just to run that one import function. That’s a single API call to run a single job, which completes quickly and costs micro-cents. You could even go as far as paying tiny sums for individual validations of the playlist, and running thousands of these would still be relatively cheap compared to having a box sitting there all the time. The question is, how much variability in cost is desirable? How easy is it to budget and forecast for this? And can you really predict your usage to this level? Food for thought! A word of warning, though: you may find yourself locked into a specific cloud provider, as some of the larger public cloud providers have proprietary interfaces for their serverless technologies.
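The cost trade-off above can be sketched with some back-of-envelope arithmetic. The prices below are illustrative assumptions chosen only to show the shape of the comparison, not quotes from any provider:

```python
# Back-of-envelope comparison: pay-per-invocation serverless cost versus
# an always-on machine. Both prices are hypothetical, for illustration only.

COST_PER_INVOCATION = 0.0002   # a short function run: fractions of a cent
HOURLY_BOX_COST = 0.50         # an always-on VM, per hour

def monthly_serverless_cost(invocations_per_month: int) -> float:
    """Total monthly cost when you pay only per function invocation."""
    return invocations_per_month * COST_PER_INVOCATION

def monthly_box_cost(hours: float = 24 * 30) -> float:
    """Total monthly cost of a box sitting there all the time."""
    return hours * HOURLY_BOX_COST

# Thousands of playlist validations per month stay cheap...
print(round(monthly_serverless_cost(10_000), 2))  # 2.0
# ...compared to the always-on alternative.
print(round(monthly_box_cost(), 2))               # 360.0
```

Of course, the same arithmetic cuts the other way at high, steady volumes, which is exactly why the budgeting and forecasting questions matter.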
While the microservices approach offers some tangible benefits, it’s not a panacea for every challenge in playout. Here at Pebble we would argue that the closer a module is to doing the actual playout, the “bigger” the service should be. Equally, the closer we are to the management and automation logic, the greater the relevance of more lightweight and modular services. In this service-based architecture, every service is appropriately sized for the task it is responsible for. The services are connected by an open API, which supports independent versioning, easy deployment and ultimate flexibility, whether virtual or physical, cloud or on-premises.
It can be easy to get lost in the “nano/micro/macro” debate, but in the end you as the broadcaster need to understand the goals you want to achieve with your system and ensure the vendors you work with can provide a solution that meets them, no matter what system architecture they use.
Stuart Wood is a product owner at Pebble Beach Systems.