Within the next week and a half, “Broadcast Engineering” will be co-producing an HD Technology Summit in Los Angeles with “Broadcasting & Cable” and “Multichannel News.”
One of the participants in the upcoming event is Harris, which offers broadcasters a wide-ranging line of technology to help them complete the transition from analog to digital HD successfully.
HD Technology Update spoke with Andrew Warman, product marketing manager of servers at Harris, in preparation for the summit. He provides some insight into where the industry stands with compression performance and where it needs to go.
HD Technology Update: Can broadcasters remain competitive in the HD realm given that they are locked into MPEG-2 by law as part of the ATSC standard for their primary free OTA channel?
Andrew Warman: I think the answer is yes. For our part, we have switched our services in the last couple of years to focus on software-based coding. For us, that means we are able to add optimizations for most media formats, which helps downstream when you’re encoding for final output.
Speaking from our point of view as a vendor of baseband server and ad delivery systems that feed encoding systems, we are able to keep quality surprisingly high even at lower bit rates, which we obviously pass on downstream. A lot of the AVC coding systems we’ve seen really aren’t that good in quality in the 10Mb/s to 12Mb/s range. Frankly, it’s fairly disappointing, though there is one notable exception.
HDTU: So, you haven’t liked what you’ve seen from MPEG-4 AVC at this point?
AW: We have seen a variety of AVC encoding systems. Based on what I saw yesterday, I can say there is a lot of improvement that can be made. The vendor I saw yesterday has made a significant jump over the others in terms of their end quality. Clearly, there is a lot of room for maneuvering in there for everybody.
We see the same sort of thing when we work on the software-based encoding side, taking raw coding algorithms and doing a lot of optimization on our own to improve speed and image quality.
Our experience so far tells us that there’s still room in there for further optimization, so you know it’s going to get better.
HDTU: How does Moore’s Law play into the equation?
AW: It will only help. One of the problems with MPEG-4 is that it doesn’t scale very gracefully. The larger image is significantly harder to process, however you approach coding it. So the improvement in processing power is huge, because it will make the job of dealing with MPEG-4 and MPEG-4 AVC a lot easier down the road.
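To put the scaling point in rough numbers, a simple pixel-count comparison shows why HD is so much harder on the encoder. This is an illustrative sketch only; it assumes 1920x1080 HD versus 720x480 NTSC SD rasters, and real codec complexity also depends on factors such as motion-estimation range and reference frames.

```python
# Rough, illustrative comparison of per-frame pixel counts.
# Assumed rasters: 1080-line HD (1920x1080) vs. NTSC SD (720x480).
def pixels(width, height):
    """Pixels per frame for a given raster."""
    return width * height

sd = pixels(720, 480)      # 345,600 pixels
hd = pixels(1920, 1080)    # 2,073,600 pixels
print(hd / sd)             # 6.0 -- an HD frame carries six times the pixels
```

Even before accounting for AVC’s heavier per-pixel toolset, the encoder is handling roughly six times the raw data per frame, which is why gains in processing power translate so directly into encoding headroom.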
HDTU: What is the best strategy for broadcasters to employ to feed out SD and HD channels during this final stage of the ongoing analog to digital transition?
AW: A lot of this comes down to customer preference. We store only one version of the essence in our system regardless of what the output is going to be. If you want to output SD and HD at the same time and the essence of a particular piece of media happens to be SD, it gets upconverted on the designated HD output.
It can also be crossconverted or downconverted. If customers decide that a single playout server is the right approach, and they designate output ports for SD and HD, they can use the same essence media file from storage and output it at two resolutions. The output on each server port will be up-, down- or crossconverted depending on the resolution of a particular piece of essence, and ARC (aspect ratio conversion) can then be used to apply the correct aspect ratio on output.
If they want to use two separate playout servers, they can still share the same piece of essence across the playout servers by mirroring, file transfer or other data movement, but again, they still use up-, down- or crossconversion on output to get the required SD or HD signal out.
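The per-port conversion decision described above can be sketched as a small selection function. This is a hypothetical illustration, not a Harris API; the format names and the `select_conversion` helper are assumptions made for the example.

```python
# Hypothetical sketch of the output-port logic: one stored essence,
# with up-, down- or crossconversion chosen per designated output port.
# Format names and function names are illustrative, not vendor APIs.
def select_conversion(essence_res, port_res):
    """Pick the conversion needed to play a piece of essence on a port."""
    if essence_res == port_res:
        return "passthrough"
    # Rank formats by definition class: SD below the HD formats.
    rank = {"SD": 0, "HD720p": 1, "HD1080i": 1}
    if rank[essence_res] < rank[port_res]:
        return "upconvert"        # e.g. SD essence on an HD output port
    if rank[essence_res] > rank[port_res]:
        return "downconvert"      # e.g. HD essence on an SD output port
    return "crossconvert"         # e.g. 720p essence on a 1080i port

print(select_conversion("SD", "HD1080i"))      # upconvert
print(select_conversion("HD720p", "HD1080i"))  # crossconvert
print(select_conversion("HD1080i", "SD"))      # downconvert
```

The key design point from the interview is that the stored essence never changes; only the output path applies the conversion, so the same file serves every port.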
We tend to direct customers whose workflows are appropriate toward shared storage. There are lots of redundancy, mirroring and partial mirroring techniques that can be applied to reach the level of protection the customer would like, within budget. And at the end of the day, the SAN helps because there’s only one pot to put everything in that you have to look after.
In some scenarios that doesn’t work, particularly in a hub-and-spoke environment spread across physically separate locations, or where older systems at a station connect to newer ones and data therefore migrates from one to the other. MXF and other technologies, such as transcoding, can be applied at that point to turn that material around.
Not having to touch the essence is the ideal thing because we don’t suffer image quality loss.
Tell us what you think!
HDTU invites response from our readers. Please submit your comments to firstname.lastname@example.org. We'll follow up with your comments in an upcoming issue.