IBC 2018 opens in Amsterdam in a few days, and IP video transport and networking are sure to be on the minds of many attendees.
NewTek will be at IBC 2018 showing its latest NDI developments. Before the show opens, I had the chance to speak with NewTek President and CTO Dr. Andrew Cross about what the company has in store for the RAI.
Besides discussing the important additions NewTek is making to its NDI SDK, Cross offered thoughts on how the market has accepted NDI and the direction in which the video industry is headed.
(An edited transcript.)
Phil Kurz: What’s new with NDI for IBC 2018?
Andrew Cross: We are going to be announcing NDI 3.7 at IBC, and I kind of trapped myself by releasing 3.5 at NAB. I don’t want to call it 4.0, but it’s the biggest reasonable step I could think of. That’s why it got to 3.7.
It’s a big step forward. There’s a whole load of improvements. The first, which is probably going to have the biggest market impact, is we are announcing what we are calling the NDI Embedded SDK. It’s a version of the software development kit for people who wish to use NDI in hardware devices.
All along, NDI has been great for software, and our focus (and this goes back to what our company does) has always been on making it accessible to software users. But that’s only half of the world. So, making NDI something that can truly be used in hardware products has been missing for some time, and it is something we have been working to fix with this embedded SDK, which allows people to build actual, real NDI hardware. With it we include the tools to run NDI on your own device, and we are also making available our own FPGA implementation of NDI compression.
We are actually releasing what we use for our own NDI Spot Connect Pro, which is our own 4K-HDMI-to-NDI converter. We are making this so people can build their own hardware products around NDI. That fills a big gap.
PK: What other NDI releases will NewTek make at IBC?
AC: We are also releasing something we have been asked for a lot: a utility called NDI Stream Analyzer, which lets people get detailed information about an NDI stream. Obviously, our goal with NDI has been to make it just work for everybody. But at the same time, there are a lot of people and a lot of setups that want more information about how well a stream is working: whether something is wrong with it, its bitrate, and its timing information. So, we are releasing tools that do this, and we are giving them away for free.
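The analyzer's internals aren't described in the interview, but the kind of measurement Cross mentions (bitrate and stream timing) can be sketched in a few lines. Everything below is a hypothetical illustration, not NDI Stream Analyzer code; the function and field names are invented.

```python
from typing import Dict, List, Tuple

def analyze(samples: List[Tuple[float, int]]) -> Dict[str, float]:
    """Given (arrival_time_s, frame_size_bytes) samples from a stream,
    report the average bitrate, the mean frame interval, and the peak
    jitter (largest deviation of any interval from the mean)."""
    if len(samples) < 2:
        raise ValueError("need at least two samples")
    times = [t for t, _ in samples]
    span = times[-1] - times[0]
    # Count bits delivered after the first arrival, over the span.
    total_bits = 8 * sum(size for _, size in samples[1:])
    intervals = [b - a for a, b in zip(times, times[1:])]
    mean = sum(intervals) / len(intervals)
    jitter = max(abs(i - mean) for i in intervals)
    return {"bitrate_bps": total_bits / span,
            "mean_interval_s": mean,
            "peak_jitter_s": jitter}
```

A steady stream shows near-zero peak jitter; a struggling network shows it climbing toward the frame interval itself.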
At the SDK level we are also making big improvements. Honestly, the biggest one (although it might not sound that glitzy, it is really important) is that we have put a frame synchronizer into the SDK. You know, when people are building computer software, nobody even thinks about the concept of genlock. So, your computer monitor is running at 60Hz, but that almost certainly doesn’t match the 60Hz the camera is running at.
It is nontrivial to take the 60Hz the camera is running at and display it correctly on a computer screen. A lot of people think you just get tearing, but you get a lot of other problems related to the clocks not matching. And the same is true of audio, of course.
Our frame synchronizer takes all of the complicated work of analyzing the timing and applying corrective steps for the differences between your local computer clock and the remote clock, and it does all of that for people for free.
This makes it so much easier to build computer applications that display and use video and audio without having to worry about timing issues. It saves people so much time, and putting it in the SDK makes the SDK much more usable.
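The idea Cross describes, mapping frames timed by a remote camera clock onto a free-running local display clock, can be sketched conceptually. This is not NDI SDK code; it is a minimal illustration with invented names, showing the repeat-or-drop behavior a frame synchronizer provides.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Frame:
    timestamp: float  # seconds on the *source* (camera) clock
    data: object

class FrameSynchronizer:
    """Map frames timed by a remote clock onto a free-running local
    display clock, repeating or dropping frames as the clocks drift."""

    def __init__(self) -> None:
        self._queue: List[Frame] = []
        self._offset: Optional[float] = None  # source time minus local time
        self._last: Optional[Frame] = None

    def push(self, frame: Frame, local_now: float) -> None:
        # Estimate the clock offset with a slow-moving average so
        # network jitter does not whipsaw the mapping.
        sample = frame.timestamp - local_now
        self._offset = sample if self._offset is None else (
            0.95 * self._offset + 0.05 * sample)
        self._queue.append(frame)

    def capture(self, local_now: float) -> Optional[Frame]:
        # Translate the display clock into source time, then hand back
        # the newest frame that is due: older frames are dropped, and
        # if nothing new is due yet, the previous frame is repeated.
        if self._offset is None:
            return self._last
        due = local_now + self._offset
        while self._queue and self._queue[0].timestamp <= due:
            self._last = self._queue.pop(0)
        return self._last
```

The display loop simply calls `capture()` on every vsync; it always gets a frame to draw, regardless of how the camera's clock wanders relative to the monitor's.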
Beyond that there are a lot of other improvements, such as significant improvements in the way we send data over the network. Personally, that has been the bane of my existence for the last three months: finding better ways to do this, because getting UDP with forward error correction working well on a real-world network turns out to be really quite difficult.
There are all sorts of weird wrinkles in the real world that books don’t tell you about. So, we have significantly improved all of that.
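Cross doesn't describe NDI's actual FEC scheme, but the simplest form of the technique, a single XOR parity packet per group of UDP packets, which can repair any one loss in the group without retransmission, can be sketched as follows. All names here are hypothetical.

```python
from typing import List, Optional

def make_parity(packets: List[bytes]) -> bytes:
    """XOR every packet in a group into a single parity packet.
    Shorter packets are treated as zero-padded to the longest length."""
    size = max(len(p) for p in packets)
    parity = bytearray(size)
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received: List[Optional[bytes]], parity: bytes) -> List[bytes]:
    """Rebuild a packet group in which at most one packet was lost
    (marked None). XORing the parity with every surviving packet
    yields the missing one; two or more losses cannot be repaired."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) > 1:
        raise ValueError("single-parity FEC can repair only one loss")
    if not missing:
        return [p for p in received]
    rebuilt = bytearray(parity)
    for p in received:
        if p is not None:
            for i, b in enumerate(p):
                rebuilt[i] ^= b
    out = [p for p in received]
    out[missing[0]] = bytes(rebuilt)  # padded to the parity length
    return out
```

The real-world wrinkles he alludes to show up precisely here: choosing the group size, interleaving groups so burst losses don't take out two packets from one group, and deciding how long to wait for stragglers before declaring a packet lost.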
PK: Tell me more about FPGA support and the SDK.
AC: You don’t even have to buy the FPGA from us. You can buy a Xilinx test board, which is where anyone working on an FPGA project would probably start; they would buy a Xilinx or Altera development kit.
We are producing in the SDK an SD card image that you can put onto one of these dev kits, and it will include our FPGA SDK, so it will serve as a testbed for people to build NDI encoders. So, if you bought one of these FPGA dev kits and put the SDK on it, you would have the basics of a working piece of NDI hardware. Now obviously people will adapt that for whatever their actual project is, but you get a great starting point.
And we allow people to get the SDK and build NDI encoders pretty much for free. To take them into a commercial application, we need to work out a licensing agreement with them. But our goal is to allow people to build something, play with it and worry about the details when they want to go to market.
PK: How would you characterize the demand from hardware developers for an NDI SDK solution?
AC: This is a common request, and the embedded SDK answers a good number of the requests we have had from people interested in doing a hardware-based NDI product. But with so many people and so many requests, we will be talking about an embedded SDK 2.0 and 3.0 soon, with all of the additional capabilities people will want.
We are basically a software-focused company, so I am really happy that we are taking seriously the need to make NDI available to hardware vendors. That’s a good position to be in.
PK: How would you assess where NDI is today among developers?
AC: The honest answer is that I have lost track. What I mean by that is there are so many people using it for so many things that it’s hard to keep track.
Something I have seen is we are about three years in. In the first two years, all of the big software vendors were using it. But if I look at the people signing up for the SDK now, we are seeing more and more startup companies needing to interact with IP video and individuals who have cool projects.
So, it has gone from being something the big vendors use to being the de facto way startups quickly use IP video. It has gone one tier further, from “this is what the industry uses” to “this is the way people experiment with IP video and build products,” even at the lower end of the market, and I find that to be very cool.
PK: Has NDI become a de facto standard for IP video?
AC: Obviously, it is self-serving to say yes, but I am not saying yes just for the sake of marketing spin. My sense is certainly that it has become very widely adopted, and I believe the answer at this point is yes.
If you walk around at IBC and in the media and product tools halls and you tick off the companies that support and don’t support NDI, you end up with more than 50 percent of the exhibitors supporting NDI, which is quite remarkable.
I am not saying (nor have I ever said) that it solves all problems for all people, but I certainly think it has become commonly seen as solving a good number of problems for a good number of people in the real world.
PK: Are you or NewTek presenting any technical papers or speaking at IBC?
AC: Personally, I am giving a talk at the IABM meeting about IP video. It is more about IP video and the place of computers in it than about NDI, although NDI is intertwined.
PK: Can you offer a few details?
AC: I get asked a lot of smart questions, and over the past few years that has helped me realize that the real revolution going on in our industry is not just about networked video. It is about a broader trend in the market: how computing technology, the combination of computer software and networking, has put together the pieces needed to produce video.
We have gone from a place where computers and the associated pieces of technology weren’t sufficient to produce broadcast quality video to one in which they are.
So, the real revolution going on in our industry, and NDI is part of it, is one in which we are able to use general purpose computing products to produce real-time video.
Our industry as a whole tends to look at it this way (and I hope I’ve excluded NewTek from this): “Look, these computer things are getting fast, and you can do video with them. So, let’s steal the core things computers do and put them into our products.”
That’s the way many people think, but I am encouraging people to think the other way around. Instead of “we’re from the video industry; how do we steal from the computer?” it should be: The world is computing. We are computing people. How do we take all of this incredible technology and bring it into the video world?
The best analogy I can give is what happened in the phone market. When the iPhone came out, it was a computing company, Apple, deciding it could take a computer and turn it into a phone. At the time, the traditional phone vendors probably thought, “These guys have no idea what they are doing. This isn’t a phone. It can’t even dial 911 reliably. Nobody is ever going to use this.”
Meanwhile, the phone industry said, “Let’s put cool screens on our traditional phones.” And a few years later the phone market is dominated by computer manufacturers: Google with Android, Apple with iOS, and Samsung. Three computer companies own the phone industry now.
That is the exact same shift our video industry is going to go through. The reason is that technology has gotten cheaper and cheaper, computers have gotten smaller and smaller, and chip sizes have shrunk and shrunk. We can do more and more for less money, and it is all connected together via Ethernet and the internet. We need to look at how, instead of stealing just the pieces of it that we want, we can embrace all of it and build something newer and bigger than what the video industry can do on its own.
This is why we created NDI: to allow people to use software and their existing networks on the computers they have today. That is where the future of the industry is going to be.
Phil Kurz is a contributing editor to TV Tech. He has written about TV and video technology for more than 30 years and served as editor of three leading industry magazines. He earned a Bachelor of Journalism and a Master’s Degree in Journalism from the University of Missouri-Columbia School of Journalism.