The reality of IPTV may be just around the corner for many North American viewers. And hundreds of thousands of viewers in France and Hong Kong already receive broadcast-quality video delivered via IP to their homes.
IPTV is not a revolutionary concept. It is an evolutionary transition of broadcast television that presents some significant challenges despite the marriage of two well-understood technologies — IP and video compression. For service providers, it is an opportunity to gain additional revenue using existing infrastructure and to develop a growth strategy around well-defined technologies and an established service model.
In most cases, IPTV will not be offered as a standalone service, but as part of a triple-play package. The IPTV element of the triple-play market is relatively immature when compared to other transmission infrastructures supporting video (i.e. cable and terrestrial broadcast systems). The focus, therefore, in initial network pilot deployments and during new service rollouts, has been on getting the systems to work.
IP is now widely deployed for internal distribution in broadcast and network operations facilities. Test and analysis for internal distribution can be distinctly different from that required for IP delivery directly to the home (i.e. IPTV). Test strategies during IPTV initiation, establishment and rollout will vary, but all will be targeted to ensure quality of service.
The organizations involved in IPTV deployments are at varying points along the implementation curve. But in general, those responsible for the system face a three-part problem of design, deployment and management.
In the process of designing components for use in the network and in the home, including set-top boxes, designers and manufacturers must ensure standards compliance and interoperability. There are too many standards, formats and sources of network traffic to leave this to chance or to take shortcuts in the process.
A triple-play network needs to assure availability of network resources and bandwidth to deliver video services. However, overall network bandwidth is finite, and so it is equally important that unnecessary pathways can be torn down successfully. This requires test equipment capable of establishing and testing the IP pathway and providing statistics on network jitter and packet loss.
In the process of new service deployments, the initial concern is getting the system to work. Once the basic network design considerations are met, the next step is looking at what goes in and comes out of the network.
Having established that the IP pathway can be reliably set up, it is then essential that the video data pushed into and received from the pathway is correct. This requires monitoring and analyzing of transport streams (TS) at the output of encoders, multiplexers and headends to ensure the source material is correct.
At the receiver end, similar monitoring and analysis will ensure that there has been no degradation to the video as it passes through the system. Those responsible for installation and commissioning will require video expertise and trusted tools to make the correct compressed video measurements for both multi-program transport stream (MPTS) and single-program transport stream (SPTS) systems.
As initial problems of early deployment are solved and operations scale up for full IPTV system use, the need for system management and test requirements will shift toward networkwide monitoring as part of a large service assurance system. Test strategies and equipment must lead to fast and correct solutions to ensure that the highest quality content is always delivered.
QoS measurement using MDI
The media delivery index (MDI) for IPTV networks predicts expected video quality based on network level (or IP) measurements. It is independent of the video encoding scheme and is a lightweight, scalable alternative to measurements that decode and examine the video itself.
MDI comprises two measurements: the media delay factor (DF) and the media loss rate (MLR), the number of media packets lost in one second. The DF indicates how long a data stream must be buffered (i.e. delayed) at its nominal bit rate to prevent packet loss. MDI is typically displayed as the two values separated by a colon, in the form DF:MLR.
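As a hedged illustration (not from the article), the DF and MLR calculations described in RFC 4445 can be sketched as follows. The packet records, the 3.75Mb/s media rate and the helper names are all hypothetical:

```python
# Sketch of RFC 4445-style MDI computation from packet arrival records.
# Each record is (arrival_time_s, payload_bytes); the nominal media rate
# is assumed known. Names and inputs are illustrative, not a real probe API.

def media_delay_factor(packets, media_rate_bps):
    """DF in milliseconds: span of a virtual buffer that fills on each
    packet arrival and drains continuously at the nominal media rate."""
    buffer_bits = 0.0
    vb_min = vb_max = 0.0
    last_t = packets[0][0]
    for t, payload_bytes in packets:
        buffer_bits -= media_rate_bps * (t - last_t)   # continuous drain
        vb_min = min(vb_min, buffer_bits)              # sample before arrival
        buffer_bits += payload_bytes * 8               # instantaneous fill
        vb_max = max(vb_max, buffer_bits)              # sample after arrival
        last_t = t
    return 1000.0 * (vb_max - vb_min) / media_rate_bps

def media_loss_rate(lost_packets, interval_s):
    """MLR: media packets lost per second over the measurement interval."""
    return lost_packets / interval_s

# Example: seven 1316-byte payloads (seven TS packets each) on a
# 3.75Mb/s stream, with slight arrival jitter on two packets.
pkts = [(i * 1316 * 8 / 3.75e6 + j, 1316)
        for i, j in enumerate([0, 0, 0.001, 0, 0.002, 0, 0])]
df = media_delay_factor(pkts, 3.75e6)
mlr = media_loss_rate(0, 1.0)
print(f"MDI = {df:.2f}:{mlr:g}")   # reported as DF:MLR
```

With no loss, the MLR term is zero and the DF reflects only the buffering needed to absorb the arrival jitter.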
MDI is defined by the Internet Engineering Task Force (IETF) in RFC 4445. It recognizes impairments in the IP layer and offers a general idea of network jitter. While MDI infers video quality, it does not provide specific measurements related to video quality.
The MDI-DF can give a measure of congestion in a network by showing usage level and detecting when queuing happens in network components. It is also useful to know how much of the queue buildup is due to video packet bunching and to have an indication of whether the session bit rate can handle the required MPEG TS bit rate.
A good MDI does not mean a faultless IPTV transmission. And a bad MDI can be unrelated to video quality. The best solution is to use MDI and MPEG layer protocol tests in conjunction (i.e. as cross-layer measurements).
A 2001 ETSI technical report, TR 101 290, defines a set of standard evaluation tests for digital video systems that can be performed repeatedly and provide consistent results assuming reasonable management of variables. The tests to detect transport stream corruption and network jitter are useful MPEG layer tests.
It is best to perform these in conjunction with IP tests throughout the network. Cross-layer testing is useful when monitoring both the IP and MPEG domains to trace and track performance degradation before the problem gets too serious. (See Figure 1.)
It is possible to detect video errors at a user site with MOS techniques or even a visual check. However, these methods provide no indication of the problem's root cause. A problem that appears in one layer, medium or device may have been caused by a variety of seemingly unrelated problems elsewhere in the system. The root causes may include:
- dropped or out-of-order packets;
- traffic congestion on the IP network at aggregation, core or access; and
- a satellite feed problem due to rain.
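For the first of these root causes, a brief sketch: IPTV streams are commonly carried as RTP over UDP, and the 16-bit RTP sequence number lets a probe distinguish a genuinely dropped packet from one that merely arrived late. This is an illustrative reconstruction, not tooling described in the article:

```python
# Sketch: classifying dropped vs. out-of-order delivery from RTP
# sequence numbers (16-bit, wrapping). Assumes the sequence numbers
# have already been parsed from RTP headers; duplicates are ignored
# for simplicity.

def classify_gaps(seq_numbers):
    dropped, reordered = 0, 0
    expected = seq_numbers[0]
    for seq in seq_numbers:
        delta = (seq - expected) & 0xFFFF    # wrap-safe difference
        if delta == 0:                       # arrived in order
            expected = (seq + 1) & 0xFFFF
        elif delta < 0x8000:                 # jumped ahead: gap of `delta`
            dropped += delta
            expected = (seq + 1) & 0xFFFF
        else:                                # behind expected: late arrival
            reordered += 1
            dropped -= 1                     # it wasn't lost after all
    return dropped, reordered

print(classify_gaps([1, 2, 4, 3, 5]))  # → (0, 1): one late packet, none lost
```

The halfway-point test (`delta < 0x8000`) is the usual heuristic for deciding whether a wrapping counter moved forward or backward.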
Test and measurement solutions that provide information, not just data, offer insight to the analysis process in a variety of ways, including graphing and quantifying jitter within the MPEG stream or at the IP layer.
Viewing graphical plots helps correlate events over time as they happen on one layer and allows users to see whether they affect one another. (See Figure 2.) This helps pin down the root cause to a specific layer, allowing a fault to be isolated and rectified.
Time stamping alarms, trace files and fault logs improve the operator's ability to backtrack intermittent faults to see whether an MPEG stream had an internal transport stream fault or whether the fault was related to an IP event. (See Figure 3.) The TR 101 290 tests analyze many parameters on the MPEG layer. The specific parameters affected by dropped and out-of-order packets on the IP layer are the sync byte, continuity count (a four-bit rolling counter on a packet identifier basis) and cyclic redundancy check (CRC).
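As an illustrative sketch (the function names and test data are assumptions, not from the article), the continuity count test can be applied directly to raw 188-byte TS packets, since the four-bit counter lives in the low nibble of each packet's fourth byte:

```python
# Sketch: per-PID continuity_counter check on 188-byte TS packets,
# mirroring the TR 101 290 Continuity_count_error test (simplified:
# the allowed duplicate-packet case is not modeled). The counter
# increments only when the packet carries a payload.

def continuity_errors(ts_bytes):
    last_cc = {}            # PID -> last continuity_counter seen
    errors = 0
    for off in range(0, len(ts_bytes) - 187, 188):
        pkt = ts_bytes[off:off + 188]
        if pkt[0] != 0x47:                        # sync byte lost
            errors += 1
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        afc = (pkt[3] >> 4) & 0x3                 # adaptation_field_control
        cc = pkt[3] & 0x0F
        if pid == 0x1FFF or not (afc & 0x1):      # null packet / no payload
            continue
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            errors += 1
        last_cc[pid] = cc
    return errors

# Two packets in order, then a skip from CC=1 to CC=3 on the same PID:
def mk(pid, cc):
    return bytes([0x47, (pid >> 8) & 0x1F, pid & 0xFF, 0x10 | cc]) + b'\x00' * 184

stream = mk(0x100, 0) + mk(0x100, 1) + mk(0x100, 3)
print(continuity_errors(stream))   # → 1 continuity_count error
```

A dropped IP packet typically removes seven consecutive TS packets, so it shows up here as a continuity jump on every affected PID.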
Most MPEG streams contain a built-in timing packet called the program clock reference (PCR). Graphing PCR inaccuracy and overall jitter gives a good indicator of a stream that may be suffering timing distortions due to packet burstiness or jitter. When combined with the simultaneous display of IP packet inter-arrival timing, an engineer can time-correlate the signal with PCR and with presentation time stamp (PTS) on elementary streams. This cross-layer check is possible even if the TS does not contain a PCR. (See Figure 2.)
PCR overall jitter (PCR-OJ) and PCR frequency offset (PCR-FO) can be compared with packet arrival timing stability to assess whether jitter was introduced at the IP or MPEG layer. There are defined acceptable limits for PCR but none for the IP arrival interval; the latter are user defined, and test equipment should allow users to set them.
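A hedged sketch of how a probe might recover the 27MHz PCR and compare its spacing against packet arrival spacing; the field layout follows ISO/IEC 13818-1, but the helper names are assumptions:

```python
# Sketch: extracting the PCR from a TS packet's adaptation field and
# estimating per-sample jitter as the mismatch between PCR spacing
# and packet arrival spacing.

def extract_pcr(pkt):
    """Return the PCR in 27 MHz ticks, or None if the packet has none."""
    afc = (pkt[3] >> 4) & 0x3
    if not (afc & 0x2) or pkt[4] == 0:      # no adaptation field present
        return None
    if not (pkt[5] & 0x10):                 # PCR_flag clear
        return None
    base = ((pkt[6] << 25) | (pkt[7] << 17) | (pkt[8] << 9)
            | (pkt[9] << 1) | (pkt[10] >> 7))          # 33-bit base, 90 kHz
    ext = ((pkt[10] & 0x01) << 8) | pkt[11]            # 9-bit ext, 27 MHz
    return base * 300 + ext

def pcr_jitter_s(pcr_ticks, arrival_s):
    """Per-sample jitter: arrival interval minus PCR interval (seconds)."""
    return [(arrival_s[i] - arrival_s[i - 1])
            - (pcr_ticks[i] - pcr_ticks[i - 1]) / 27e6
            for i in range(1, len(pcr_ticks))]

# A minimal packet carrying PCR base = 90,000 (one second at 90 kHz):
pkt = bytes([0x47, 0x01, 0x00, 0x30, 7, 0x10, 0, 0, 175, 200, 0, 0]) + b'\x00' * 176
print(extract_pcr(pkt))          # → 27000000 (90,000 * 300 ticks)
```

Where the jitter values track the IP arrival graph, the impairment was introduced by the network; where they diverge, the timing error was already present in the MPEG stream.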
For MPEG-4 transports that don't strictly need PCR, some operators still establish a PCR feed, as it takes up little bandwidth and offers a fast indication of timing health.
Finally, a cross-layer test can extend to RF layers, where off-air content is acquired for delivery over IP networks. RF test probes can demodulate, decode and perform MPEG tests to detect whether timing (PCR) and jitter are already present on the downlink.
IPTV-specific video test and measurement operational parameters vary from network to network and operator to operator. They depend on many variables, including some not within the network operator's control. Establishing the correct operating parameters is an iterative process.
As organizations move into the challenging new space of IPTV, they will need to empirically develop a test strategy that delivers the QoS levels demanded by viewers. It's essential to select test and measurement tools with the appropriate depth of capability and user definable test parameters. It is critical to establish a correlation between events in one layer of the pipeline and another to provide a warning of impending disaster, or in the worst case, rapid disaster recovery.
Jon Hammarstrom is senior manager of video global marketing at Tektronix.
Large-scale IPTV deployments are still in the early stages
Today, only a handful of network operators globally have deployed IPTV systems with more than 250,000 subscribers. For the most part, IPTV deployments have been limited in scale and targeted to select markets, but they are being positioned for significant expansion. There is still a lot to be learned about ensuring quality of service in large-scale systems, while rolling out significant new services, incorporating new technologies, and especially when combining video with voice and data over a single IP network.
Triple-play networks have yet to fully converge
Video, data and voice are generally handled in separate networks and by separate teams, only converging deep into the network. Point solutions will allow operators to monitor and test the most critical points within their IPTV network. And because IPTV is a technology that will force IP network engineers to consider video issues and video engineers to consider IP network issues, it is important to choose a test and monitoring solution that provides visibility of both domains and correlation between them. This will enable faster problem identification, isolation and resolution with minimal impact on the end-user experience.
Delivering video presents unique challenges
Video has intensive bandwidth requirements in a best-effort network. Video is intolerant of packet loss, out-of-order packets and resends. These issues will only increase as more video is carried through IP networks and HD and other services begin to demand increasing amounts of bandwidth. There is a lot of complexity associated with ingesting, storing, encoding, transcoding and multiplexing video streams for delivery to the home. There is little margin for error. And any errors that do occur have a high impact on the user experience.
The advantages of cross-layer correlation
- Graph packet arrival interval (to show burstiness)
- Time-correlate packet arrival, PCR and PTS arrival interval graphs
- Identify underlying IP transport errors (CRC, dropped packet, out-of-order packet)
- Identify all TR 101 290 errors
- Time-correlate errors at IP, TS and RF levels to identify root cause
- Timestamp errors with layer identifiers in error logs