QoE for IPTV end users

Delivering TV services over an IP-based network presents a unique set of technical and commercial challenges. Ensuring that end users receive a TV experience that compares favorably with services offered by cable and satellite pay-TV operators is one of the biggest challenges IPTV service providers face. At the most basic level, IPTV operators need to ensure that customers immediately receive the channel they request via the remote control and that high-quality video content is displayed on screen. From a subscriber's perspective this is a baseline requirement, but implementing it, particularly on a large IPTV network, can be problematic for network operators.

QoE and QoS defined

One of the core requirements of achieving high satisfaction levels is to implement a quality of experience (QoE) measurement system that will closely monitor and benchmark the perception that an IPTV end user has of the service. The term QoE refers to the experience associated with watching an IPTV service. This not only relates to picture quality but can also cover other areas such as responsiveness and usability. This contrasts with quality of service (QoS), where the measurements are purely based on networking parameters such as jitter, packet loss and delays. Before examining both topics in greater detail, it is first helpful to understand some of the factors that reduce QoE and QoS levels.

There are several factors that negatively affect an end user while interacting with IP-based TV services. Packet queuing and video jitter are two of the most common issues faced by providers when attempting to maximize QoE and QoS levels.

Packet queuing

At its most basic level, the queuing of IPTV packets involves a router or a switch storing packets of IP data for a short period, while the network device is busy processing other IP packets. Once these IP packets are processed, the network device selects the next packet in the buffer queue. The number of queues supported by routers will vary according to the number of services that are supported across the network. The process of queuing packets is illustrated by the following example.

Suppose a broadcast IP streaming server sends an H.264 compressed IPTV multicast channel onto a distribution network at a constant bit rate (CBR) of 2.5Mb/s. Some of the potential network traffic patterns between the streaming server and the residential gateway located at an IPTV end user's home are illustrated in Figure 1.

The first scenario illustrates the situation when network conditions are ideal and the packets are separated by a time difference of 1.5ms. In this environment, the intrapacket gap stays the same as the stream traverses each of the network hops en route to the residential gateway. This scenario is unlikely to occur across a live network.
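For a CBR stream, the intrapacket gap follows directly from the packet size and the bit rate. As a rough sketch (the article does not state a packet size, so the figure below is inferred, not quoted): a 1.5ms gap at 2.5Mb/s corresponds to packets of roughly 469 bytes.

```python
def intrapacket_gap_ms(packet_bytes, bitrate_bps):
    """Time between consecutive packets of a constant-bit-rate stream."""
    return packet_bytes * 8 / bitrate_bps * 1000

# Working backward from the scenario above: a 1.5 ms gap at 2.5 Mb/s
# implies packets of about 469 bytes (an inference, not a stated figure).
print(round(2.5e6 * 1.5e-3 / 8))        # ~469 bytes per packet
print(intrapacket_gap_ms(469, 2.5e6))   # ~1.5 ms
```

Larger packets at the same bit rate simply widen the gap proportionally, which is why the gap pattern is a useful fingerprint of how a CBR stream has been disturbed in transit.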

In the second scenario, two of the IPTV packets have been injected onto the distribution network with no time gap between them, and the third packet trails the pair by 3.5ms. Although no router queuing has yet occurred, injecting the packets back-to-back effectively forms a mini-queue.

The third scenario is more representative of what occurs across a network that is used to transport triple-play IP services. In this final scenario, the VoIP, IPTV and Web servers have simultaneously injected IP packets into the network. Once on the network, the intrapacket delays of all three streams are more or less the same, and they arrive on three different router interfaces at the same time. The router is unable to process three streams simultaneously, so a queue forms, and packets must wait a finite period in memory before being forwarded to the correct output port. In this example, the queue overflows, and a packet carrying Internet access content is discarded. Meanwhile, the VoIP and IPTV traffic receive preferential treatment in the queue, and their intrapacket time gap remains the same at the output of the router.

This is a simplistic model of the types of network patterns that could develop across an IPTV network.
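The overflow behavior in the third scenario can be sketched as a tiny priority queue. This is a deliberately minimal model (the class names and two-slot buffer are invented for illustration; real routers use per-class queues and more elaborate drop policies): real-time traffic is kept, and the best-effort packet is discarded when the buffer overflows.

```python
# Assumed priority ordering for the three triple-play classes
# (lower value = served first; the mapping is illustrative only).
PRIORITY = {"voip": 0, "iptv": 1, "web": 2}

def enqueue(queue, packet, capacity):
    """Insert a packet by priority; on overflow, drop the lowest-priority one."""
    queue.append(packet)
    queue.sort(key=lambda p: PRIORITY[p])
    if len(queue) > capacity:
        return queue.pop()  # lowest-priority packet sits at the end
    return None

# Three packets arrive at once at a router with only two buffer slots.
q = []
dropped = [enqueue(q, p, capacity=2) for p in ("web", "voip", "iptv")]
print(q)        # ['voip', 'iptv'] -- real-time traffic survives
print(dropped)  # [None, None, 'web'] -- the Web packet is discarded
```

The key point the model captures is that the drop decision is driven by traffic class, not arrival order: the Web packet arrived first but is still the one discarded.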

Video jitter

IPTV services are particularly sensitive to delays caused by overloaded servers, routing, network congestion and queuing as the IP video packets traverse the network. The quality of a video signal depends on the delivery of a lossless IP stream at a constant bit rate. The decoder in the IPTV consumer device requires a steady, dependable incoming IP stream. This is achieved through a sophisticated synchronization and clocking process that occurs between the decoder in the IPTV consumer device and the encoder at the IPTV data center. Variations associated with packets arriving too early or too late result in a behavior called jitter. Figure 2 depicts the effect that jitter can have on an IPTV stream.

As illustrated, the first stream pattern is considered ideal and will deliver a high-quality TV signal to the end user. The packets are evenly spaced, and the bit rate remains constant throughout the network path.

In the second scenario, picture viewing problems are more likely because the buffer does not fill fast enough, and the feed to the decoder does not happen at a constant rate.

In the third scenario illustrated, the stream of IP packets is flowing too quickly. Thus, the IPTV consumer device buffer overflows, and packets are lost. Once this starts to occur, jitter effects, such as flicker on the TV screen and lockups of the IPTV consumer device, become obvious to the end user. This effect can be minimized or offset by using large memory buffers in the IPTV consumer device or by increasing the bandwidth capacity on the broadband network. Although the use of larger buffers helps to minimize jitter, it does introduce delay, because it takes extra time to fill the buffer with packets. The size of the buffer varies between different types of IPTV consumer devices. For example, IP set-top box buffers can store between five and 40 seconds of video before forwarding it to the decoder. The jitter on an efficient IPTV network is generally measured in milliseconds.
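The millisecond-scale jitter mentioned above can be estimated from packet arrival times. The sketch below uses a simplified form of the smoothed estimator defined for RTP in RFC 3550 (J += (|D| - J)/16); here the deviation D is measured against the nominal CBR gap rather than against RTP transit times, and the timestamps are invented for illustration.

```python
def interarrival_jitter(arrival_ms, expected_gap_ms):
    """Smoothed jitter estimate (RFC 3550 style) over a list of arrival times."""
    j = 0.0
    for prev, cur in zip(arrival_ms, arrival_ms[1:]):
        d = abs((cur - prev) - expected_gap_ms)  # deviation from the nominal gap
        j += (d - j) / 16                        # exponential smoothing, gain 1/16
    return j

# A perfectly paced 1.5 ms stream shows zero jitter...
print(interarrival_jitter([0, 1.5, 3.0, 4.5], 1.5))  # 0.0
# ...while irregular arrivals yield a nonzero, millisecond-scale estimate.
print(interarrival_jitter([0, 1.2, 3.6, 4.5], 1.5))
```

The 1/16 gain means a single late packet barely moves the estimate, while sustained irregularity steadily raises it, which matches how a set-top box buffer actually experiences the network.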

Other factors such as poor quality source content, encoding mechanisms, IP packet issues, latency, incorrect configuration parameters and server congestion all contribute to a decrease in QoE levels and a corresponding increase in customer complaints.

Apply a QoS mechanism

Given the sheer variety of factors that can degrade video quality, it is critical that a QoS system be applied to the networking infrastructure to improve end users' viewing experience. The differentiated services (DiffServ) architecture and Multiprotocol Label Switching (MPLS) DiffServ are becoming increasingly common means of improving the delivery performance of both time-sensitive unicast and multicast IPTV streams.

Some IPTV providers use the DiffServ architecture to manage and guarantee a particular level of QoS for subscribers. Implementing this architecture essentially means that IPTV traffic is given priority over other types of IP-based traffic. The specification for DiffServ was published in 1998 by the IETF and can be found in RFC 2475.
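In practice, that prioritization is signaled by marking each packet with a DiffServ code point (DSCP) in the upper six bits of the IP header's traffic-class byte. The class assignments below are a common operator convention, not something mandated by RFC 2475 itself.

```python
# DSCP code points often used for triple-play traffic classes.
# The per-service mapping is an operator choice (assumption), but the
# code point values themselves are standard.
EF   = 0b101110  # 46 -- Expedited Forwarding, typical for VoIP
AF41 = 0b100010  # 34 -- Assured Forwarding class 4, often used for IPTV video
BE   = 0b000000  # 0  -- best effort, e.g. Web browsing

def dscp_to_tos(dscp):
    """DSCP occupies the upper six bits of the legacy IPv4 TOS byte."""
    return dscp << 2

print(dscp_to_tos(EF), dscp_to_tos(AF41), dscp_to_tos(BE))  # 184 136 0
```

Routers along the path read these markings and map each class onto the appropriate queue, which is what turns the priority policy described above into actual forwarding behavior.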

MPLS is a traffic engineering system that boosts the efficiency of IP routing networks. By combining the many benefits of MPLS with the QoS guarantees of DiffServ, network operators can deploy services that require strict performance guarantees such as IPTV. The mechanisms used by MPLS-DiffServ QoS systems are defined in RFC 3270.

IPTV network operators who combine adequate network resources with enforcement of QoS techniques such as the DiffServ architecture and MPLS-DiffServ will help ensure that end users enjoy a high QoE.

Keep an eye on QoE

Although a properly implemented QoS system will help to ensure that IPTV streams travel from the source server to the destination IPTV consumer device unhindered, service providers still need to monitor how end users perceive the quality of the IPTV experience. QoE measurement models are typically used to gauge the satisfaction levels of IPTV end users. Three primary models are used by IPTV quality measurement systems to identify the presence of IPTV stream impairments: full reference, zero reference and partial reference.

Full reference

This system makes a copy of the stream at the IPTV consumer device and compares it with a reference signal obtained from the source video content. The reference signal's size varies between measurement systems, but it is typically uncompressed and quite large. The measurement determines the distortion and degradation that occurred during the encoding and transfer of the original video content across the network. Figure 3 illustrates a basic full reference topology used to compare the original signal with the signal after transmission across the IP distribution network.
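One widely used full reference metric is peak signal-to-noise ratio (PSNR), which scores the received picture against the source pixel by pixel. The article does not name a specific metric, so treat this as one representative example; the frames below are tiny invented pixel sequences rather than real video.

```python
import math

def psnr(reference, received, peak=255):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, received)) / len(reference)
    if mse == 0:
        return float("inf")  # identical signals: no measurable distortion
    return 10 * math.log10(peak ** 2 / mse)

ref = [100, 120, 140, 160]   # source-side reference samples (illustrative)
ok  = [101, 119, 141, 159]   # mild degradation after transport
bad = [80, 150, 100, 200]    # heavy degradation after transport
print(psnr(ref, ok) > psnr(ref, bad))  # True: higher PSNR = better quality
```

Because the comparison needs the full source signal at the measurement point, this approach is accurate but expensive, which is exactly the trade-off the zero and partial reference models below relax.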

Zero reference

There are also systems available that do not require a reference video signal to score the video quality of an IPTV stream. Instead, a sample compressed signal is obtained from the IPTV consumer device in real time but not from the source. Zero reference systems are particularly suited to measuring real-time IPTV streams because they analyze fewer factors compared with full reference systems.

Partial reference

Similar to the approach used by full reference systems, partial reference measurement equipment is designed to take a sample at the source and at the destination IPTV consumer device, compare the signals and output a metric. Partial reference systems are less computationally complex than full reference systems and use a smaller reference sample of the IPTV stream when comparing the two signals.
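The "smaller reference sample" idea can be sketched as extracting a tiny per-frame feature at the source and comparing it with the same feature computed at the destination. The mean-luma feature below is a hypothetical choice for illustration; real partial (reduced) reference systems use richer, standardized feature sets.

```python
def frame_signature(frame):
    """Tiny per-frame feature (mean luma) carried alongside the stream.
    Mean luma is an assumed, illustrative feature choice."""
    return sum(frame) / len(frame)

def partial_reference_score(source_signatures, received_frames):
    """Average deviation between source signatures and destination features;
    0.0 means the features match exactly, larger values mean more distortion."""
    devs = [abs(sig - frame_signature(f))
            for sig, f in zip(source_signatures, received_frames)]
    return sum(devs) / len(devs)

frames = [[100, 120], [140, 160]]               # illustrative source frames
sigs = [frame_signature(f) for f in frames]     # compact data sent to the probe
print(partial_reference_score(sigs, frames))    # 0.0 for an undistorted stream
print(partial_reference_score(sigs, [[90, 120], [140, 150]]))  # > 0 when degraded
```

Only the signatures travel to the measurement point, not the full uncompressed signal, which is what makes this model cheaper than full reference while still anchoring the score to the source.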


An increased understanding by network administrators of QoE is critical to ensuring that an end-to-end IPTV system operates effectively. Only when this occurs will companies start to truly unlock the potential of IPTV technologies.

Gerard O'Driscoll is an international telecommunications expert, entrepreneur and author of “Next Generation IPTV Services and Technologies.”