Aloman Alexandru, Draft 05.21.2015 [310023]

This work has been done in the framework of the “COoperative INtelligence Schemes for Future Internet emerging technologies” [anonimizat]écnica de Cartagena. One of the focuses of this project is “quality of service/[anonimizat]-to-end network performance”. The key to improved wireless transmission of rich media content lies in increasing channel reliability and error resilience without sacrificing bandwidth efficiency. We must consider that some new network cooperation approaches for video transmission are focusing on resource allocation based on a QoE (Quality of Experience) perspective rather than on classical QoS (Quality of Service), with promising results. There are previous works in the related literature that exploit cooperative transmission in the communication layers following a classical QoS approach; they address the dynamic allocation of network resources through medium access control (MAC)/scheduling and/[anonimizat] a given transmission (optimized by allocating many cooperating nodes) for network cost (power, interference, coordination overhead and delay). However, there is little work on resource allocation in cooperative networks from a QoE perspective. Indeed, [anonimizat]-plane management for intensive traffic flow transmissions. Similarly, traffic identification and classification as well as traffic management issues are of special interest in challenging scenarios such as cooperative networks. [anonimizat]500 emulator will help to better understand the relationship between QoS and QoE in a [anonimizat] i) to propose and analyze QoS and QoE measurement techniques and models to derive their relationship and ii) to design and assess cooperative QoE protocols for intensive traffic flow systems in our ongoing research.

1. Introduction

Video streaming has become the most important way to share video and audio over a network. Video quality is greatly affected by packet loss and delay in the network, which in turn lowers the user's perception of the quality of the received videos. In this paper I [anonimizat]'s experience. I [anonimizat]-T P.1201.1 (10/2012). [anonimizat]. Also, I [anonimizat].

Long Term Evolution (LTE) is emerging as a major candidate for 4G [anonimizat].

Cellular networks are on the verge of a third phase of growth. [anonimizat]. The increasing demand for multimedia-based communications is made viable by increased computational resources in mobile phones, special-purpose video processing chips, the evolution of video services to the mobile segment, and the evolution to new mobile broadband standards like 3G LTE and LTE-Advanced. Service and network providers are exploring the opportunity to further enhance their current offerings and to increase revenues by catering for the demand for rich multimedia services from both mobile and fixed users using cellular networks such as LTE.

Video streaming over the Internet has been increasing in recent years. It has been particularly important because of its adoption by many users around the world, and the growing demand to deliver rich multimedia content over the network has made video streaming an interesting area for research. Several impairments have to be taken care of in video streaming, including delay and packet loss. It is well known that these impairments appear in every network, causing rebuffering events that result in jerky playback, deform the video's temporal structure, and may degrade the video quality and lead to a poor quality of experience.

Online video exists in the form of live streaming and Video on Demand (VOD) and has multiple consumption points. Popular interfaces include the desktop (PC), smart TVs, the mobile web, mobile applications, the iPad, gaming consoles and IP-connected media players attached to a TV.

Research Questions

The analysis of delay and packet loss on video streaming gives a better view of the user's QoE. In order to perform this analysis, research will be conducted to answer the following questions:

1. What is the effect of packet loss and delay variation on video QoE?

2. What is the relation between MOS and Delay/packet loss for the videos?

3. What are the differences between MPEG DASH and RTSP?

2. Streaming video

Streaming video is content sent in compressed form over the Internet and displayed by the viewer in real time. With streaming video or streaming media, a Web user does not have to wait to download a file to play it. Instead, the media is sent in a continuous stream of data and is played as it arrives. The user needs a player, which is a special program that decompresses and sends video data to the display and audio data to speakers. A player can be either an integral part of a browser or downloaded from the software maker's Web site.

A few years ago, the only way to view a video was to download the complete file, which meant waiting several minutes, depending on the connection speed, before viewing was possible. Nowadays, service providers have the mission of ensuring a seamless viewing experience for their subscribers, but the task is made more complex by network variations and device capabilities. In the video delivery domain we find three models that are technically different but play the same role: progressive download, RTMP/RTSP streaming and adaptive streaming.

2.1 Progressive Download

With the earlier simple download, users could not start playback until the entire file was downloaded. Progressive download was a significant improvement: it reduces the waiting time for playback. The content is downloaded over standard HTTP over TCP, requires no special web servers and is very easy to implement. Playback begins as soon as enough of the file has been downloaded to fill the buffer. Progressive download works on the standard browser port, but it raises concerns: security issues due to local caching of downloaded content, restriction to linear playback, the inability to jump ahead unless that portion is already downloaded, the absence of monitoring and quality adjustment, and the amount of bandwidth wasted on non-viewed content.

2.2 RTMP/RTSP chunk based delivery

RTMP (Real Time Messaging Protocol) and RTSP (Real Time Streaming Protocol) are content delivery mechanisms that use a specialized streaming server, such as Wowza. The media player consumes transferred chunks of media without any local caching, which provides security for the content. Because this is a stateful, connected protocol, the user experience can be improved by a mechanism that detects environment changes. RTSP saves bandwidth by downloading content just in time, and it enables fast forward and rewind of the video. The technology can achieve much faster transfer rates if delivery is done over UDP. Playback starts immediately thanks to an initial burst of content, and trick modes are supported.

2.3 Adaptive Streaming

Adaptive streaming is one of the most popular forms of streaming. It uses HTTP for delivery and has the benefit of quality switching. It relies on manifest files, which contain an index of chunks and their locations, to aggregate parts of videos encoded at multiple bit rates; these are indexed by the client. After downloading the manifest file, the client uses manifest lookups to request the highest quality of video the available bandwidth allows. Playback is uninterrupted because adaptive streaming can flexibly switch video segments to match network conditions and CPU availability. Hence, adaptive streaming can offer the best viewing experience on low- or high-performance devices and under slow or fast network conditions. It ensures continuous playback by evaluating its environment every second: if the network connection is good, it requests a higher bit-rate stream; if not, it degrades to a lower bit rate.
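The decision loop described above can be sketched in a few lines. This is a minimal illustration, not any player's actual algorithm; the bitrate ladder and the safety factor are made-up values chosen for the example.

```python
# Illustrative sketch of an adaptive-streaming rate decision.
# The bitrate ladder and safety factor are hypothetical, not from any spec.

BITRATES_KBPS = [400, 800, 1600, 3200]  # hypothetical multi-bitrate encodings

def pick_bitrate(measured_bandwidth_kbps, safety_factor=0.8):
    """Return the highest encoded bitrate the client can sustain.

    A safety margin (safety_factor) is kept so that small bandwidth
    fluctuations do not immediately cause rebuffering.
    """
    usable = measured_bandwidth_kbps * safety_factor
    candidates = [b for b in BITRATES_KBPS if b <= usable]
    return candidates[-1] if candidates else BITRATES_KBPS[0]

# The player re-evaluates its environment periodically, e.g. every second:
for bandwidth in [3000, 5000, 900]:
    print(pick_bitrate(bandwidth))
```

With the values above, the client steps up to 3200 kbps only when roughly 4 Mbps is measured, and falls back to the lowest rung rather than stalling when bandwidth collapses.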

2.4 Comparison between the three types of streaming

Fig. 2.1. Table with comparison between streaming types

3. QoS and QoE

In future generation networks, video streaming sessions will be even more present in our professional and personal activities. The key to attracting and keeping customers, while increasing the profits of providers and optimizing network resources, is to understand QoS and QoE and their impact on network and video quality.

From the network point of view, the challenge is to use efficient control techniques for distributing and sharing wired and wireless resources. From the user point of view, QoE must provide end users with accessible video streaming sessions anytime, anywhere.

3.1 QoS

Quality of service (QoS) is the overall performance of a telephony or computer network, particularly the performance seen by the users of the network.

Current approaches provide QoS assurances for video streaming sessions according to network/packet-based metrics. Examples of these metrics are throughput, packet loss, delay and jitter, which do not indicate the real impact on the video quality level from the user's point of view.

QoS control schemes must take into account the current network conditions, video characteristics and QoE support in order to optimize the usage of network resources and increase the video quality level. With this goal in mind, an approach that combines QoS adaptation control, multimedia CODECs and human perception is required.

3.2 QoE

The QoE concept is concerned with the overall experience the consumer has when accessing and using the provided services, and it differs from QoS, which is focused more on the performance of the network. The quality of a video is a subjective notion to most people, but in the telecommunications and multimedia world, quality has been measured objectively for decades. The tool that helps us measure the QoE of a video is the Mean Opinion Score (MOS), a numeric value between 1 and 5, with 5 representing the highest video quality.

Fig. 3.1. MOS values

At first, MOS was used for audio content, for example in voice-over-IP systems, and it is still used as a measure of speech quality, but the same scoring system has been applied to video content. When the concept of MOS was introduced, groups of people were invited to listen to and watch a set of different clips. Each person then gave a numeric score reflecting the quality of the clip, and the MOS for that clip was the average of the scores across the group. MOS is a relative measurement that is properly used as a comparative measure of quality.
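The averaging described above is straightforward; a minimal sketch, using made-up panel scores purely as example data:

```python
# MOS as the mean of a subjective test panel's scores (1..5 scale),
# as described in the text. The panel scores below are example data.

def mean_opinion_score(scores):
    """Average a panel's 1-5 opinion scores into a MOS."""
    if not all(1 <= s <= 5 for s in scores):
        raise ValueError("MOS scores must lie between 1 and 5")
    return sum(scores) / len(scores)

panel = [4, 5, 3, 4, 4]           # hypothetical panel of five viewers
print(mean_opinion_score(panel))  # 4.0
```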

Many quality issues experienced by end users of mobile video are caused by issues within the transport network. Older mobile telephony networks are not well-suited for video transmission, because of the restricted bandwidth and old protocols used.

In video streaming over mobile networks, the quality issues are introduced at the following times: during video creation, during transcoding, during video transmission and when displaying video on a device.

Video creation

Nowadays, video content can be generated from a variety of sources, from professionally produced content created by a movie studio to content created by anyone who has a video camera on the back of their phone, so the quality varies widely. Content created on recording devices that do not produce high quality has low pixel counts. Other quality issues can be traced to the environment in which the content was recorded or to some impairment of the person doing the recording. If a video is recorded on a poor-quality device or with shaky hands, the quality of the video will be poor.

Video transcoding

To ensure that video is transmitted efficiently through a network, it is typically encoded and compressed. We must underline that there is some inherent loss of quality each time a video is compressed, because the best-performing compression algorithms require a lot of processing power. Once a video is encoded, it must be decoded before it can be played. Even though most devices support a wide variety of compression algorithms, it is still necessary to translate from one coding algorithm to another at some point between video capture and video display. This translation is called transcoding and typically consists of decoding the video and then re-encoding it into the format required by the player, which causes a loss of some video fidelity, reducing video quality.

Video transmission

In today's IP networks, any time information is sent, some IP packets can be lost or delayed, which can lower the quality of the multimedia content. Network impairments such as packet loss, delay and jitter have always existed in voice networks, but their impact was less important there, because the human ear can recover from gaps in voice transmissions. In video communication the impairments must have low values, because this service is more sensitive and the human eye can detect small glitches.
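The sensitivity of video quality to packet loss can be illustrated with a toy model. To be clear, this is purely illustrative: the exponential shape is a common assumption in the QoE literature, but the anchor MOS and the sensitivity constant below are made up and are not coefficients from ITU-T P.1201.1 or any calibrated model.

```python
import math

def illustrative_mos(packet_loss_pct, sensitivity=0.35):
    """Map packet loss (%) to an estimated MOS on the 1-5 scale.

    Purely illustrative: an exponential decay from MOS 4.5 toward 1,
    with a made-up sensitivity constant (NOT an ITU-T P.1201 model).
    """
    return 1.0 + 3.5 * math.exp(-sensitivity * packet_loss_pct)

for loss_pct in [0, 1, 5, 10]:
    print(loss_pct, round(illustrative_mos(loss_pct), 2))
```

Even this crude sketch reproduces the qualitative behaviour the text describes: quality collapses quickly as loss grows, which is why video tolerates far less impairment than voice.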

Displaying video on a device

Mobile video can be displayed on small screens with dead pixels, so the video quality is often judged to be inferior to the same content displayed on a higher-quality device. Mobile phones were created to provide a high-quality voice service, and video capabilities were a secondary consideration. Another problem is that vendors offer longer battery life by sacrificing display quality in the mobile device.

Here are some of the most important QoE indicator examples:

Black or frozen screen

Missing programs

Low or high audio levels

Chroma/color issues

Program guide problems

4. LTE

Long Term Evolution (LTE) is emerging as a major candidate for 4G cellular networks to satisfy the increasing demand for mobile broadband services, particularly multimedia delivery. It has been several years since the first LTE networks appeared online, and nowadays most cell phones support 4G LTE. The LTE network simplifies the infrastructure of network operators, reduces costs and improves quality for subscribers. It is the most advanced and most easily deployable network technology, offering low latencies and high speeds over long distances.

4.1 Wireless Technology Evolution

Due to the growing number of mobile subscribers in the last few years, continuous development in wireless technology has become a necessity. Throughout this chapter, I will present the evolution of wireless technology, as well as an explanation of how LTE works as a radio technology.

Fig. 4.1. Evolution of wireless technology

The level of maturity of the underlying technology can be used as a factor for grouping the evolution of wireless telephone technologies into generations. However, this does not represent a strict demarcation, given that there is no standard, concerning any parameter, applicable to the classification into generations. In spite of this, the grouping is treated as an unwritten standard by both academia and industry, because of its commonly agreed-upon perspective.

Figure 4.2 presents a pictorial depiction of the following discussion and describes the evolution of applications and services in step with the evolution of the underlying wireless access technology.

Fig. 4.2. Evolution of application and services

The first generation, also referred to as 1G, was represented by an analog wireless access system focused on voice traffic. This generation contained TACS (Total Access Communication System) and AMPS (Advanced Mobile Phone System), used in most parts of Europe and in the United States, respectively. Even though the analog channel did not provide any protection against eavesdropping on the shared medium and was also sensitive to static noise, "cellular" technology, which pioneered the use of small hexagonal service areas supporting interference-free frequency re-use across the "cells", owes its foundation to AMPS.

The second generation, 2G, replaced the first-generation technology by changing the analog radio network into a digital radio network. Because digitized data can be subjected to superior processing techniques, rendering it less susceptible to noise, digital technology was regarded as vastly superior to its analog counterpart. Moreover, digital devices, which use discrete bi-level signals and are easier to calibrate and maintain, are much cheaper than analog devices, which use continuous analog signals. 2G technologies can be further classified into Time Division Multiple Access (TDMA) based and Code Division Multiple Access (CDMA) based. Europe mostly adopted the TDMA technology called GSM (Global System for Mobile communications or, originally, Groupe Spécial Mobile). The USA adopted the CDMA-based technology, called CDMAone and standardized as IS-95a. CDMA, in comparison to GSM, makes better use of the spectrum and hence has an advantage in supporting more users. In CDMA, which is a spread-spectrum technology, each user is allowed to transmit over the whole spectrum by using a distinct orthogonal code of ones and zeros to represent a one and a zero at the other end. All the codes are orthogonal to each other and hence do not interfere. As long as neighboring cells use different codes, they may reuse the same frequency band without interfering, allowing better use of the available spectrum. CDMAone supported digital data transfer rates varying between 4.8 and 14.4 kbps, while IS-95b (sometimes called CDMAtwo) supported data rates of roughly 115.2 kbps.
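The orthogonality property that CDMA relies on can be demonstrated in a few lines. A minimal sketch with length-4 Walsh codes (the codes and bits are example values): two users transmit simultaneously, the channel adds their chips, and correlating with one user's code cancels the other user out.

```python
# Two orthogonal spreading codes (length-4 Walsh codes, in +1/-1 form).
CODE_A = [ 1,  1,  1,  1]
CODE_B = [ 1, -1,  1, -1]

def dot(x, y):
    """Correlation (dot product) of two chip sequences."""
    return sum(a * b for a, b in zip(x, y))

def spread(bit, code):
    """Spread one data bit (+1 or -1) over the whole code."""
    return [bit * c for c in code]

# Both users transmit at once; the channel simply adds the chips.
composite = [a + b for a, b in zip(spread(+1, CODE_A), spread(-1, CODE_B))]

# Correlating with a user's own code recovers that user's bit,
# because dot(CODE_A, CODE_B) == 0 (the codes are orthogonal).
print(dot(composite, CODE_A) // len(CODE_A))  # user A's bit: 1
print(dot(composite, CODE_B) // len(CODE_B))  # user B's bit: -1
```

Because the cross-correlation of orthogonal codes is exactly zero, each despreading step sees only its own user's energy; this is the mechanism behind the "distinct code per user over the whole spectrum" description above.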

2G technology led to the interim generation of 2.5G, meaning the implementation of a packet-switched domain in addition to the circuit-switched domain of the 2G system. GSM adopted the 2.5G technology known as General Packet Radio Service (GPRS), which provides a packet-switched service over GSM offering data speeds between 56 and 114 kbps. Although Enhanced Data Rates for GSM Evolution (EDGE) over GSM and CDMA2000 1xRTT over CDMA reach the 144 kbps data rate required to qualify as 3G, they are usually considered 2.75G technologies because their data rates remained far below those of actual 3G technologies. EDGE provides data rates of 236.8 kbps, while CDMA2000 1xRTT deployments limit the data rates to 144 kbps.

The interim period led to the appearance of 3G technology, also known as the third generation of mobile technology. The minimum data rate of 144 kbps for any technology to qualify as 3G was imposed by the International Telecommunication Union (ITU) under the International Mobile Telecommunications program. However, the minimum limit is surpassed by far by most technologies in this category, with data rates generally varying between 5 and 10 Mbps. Higher data rates and enhanced services are attainable because 3G technologies reach better spectral efficiency over wide-area cellular telephone networks. Japan, followed by South Korea, was the first to install pre-commercial and commercial 3G technology. Europe adopted the 3G technology of the Universal Mobile Telecommunication System (UMTS), using W-CDMA (Wideband Code Division Multiple Access) as the air interface. To emphasize that UMTS is the 3rd-generation technology succeeding GSM, it is sometimes called 3GSM. CDMA-based technologies evolved to 3G through the CDMA2000 family of protocols, in particular EV-DO (Evolution-Data Optimized), which uses multiplexing techniques including TDMA and CDMA to increase per-user as well as system throughput.

With HSDPA (High Speed Downlink Packet Access), UMTS based 3G technologies have allowed data rates up to 7.2 Mbps thus raising themselves to 3.5G.

The European efforts of 3GPP 4th generation technology development have been given the brand name of LTE (Long Term Evolution) whereas the brand name for similar efforts in North America by 3GPP2 is UMB (Ultra-Mobile Broadband). Data rates are 100 Mbps for downlink and 50 Mbps for uplink.

Up until this point, I have presented a historical roadmap with some evolutionary details. In the next part of the chapter, I will provide a brief introduction of the LTE project along with a glimpse of the different technical challenges and adopted technologies which contribute to the success of the 4G efforts.

4.2. LTE Signaling

In communication systems, a common channel is used by all users to exchange data. There are three types of transmission techniques, depending on whether data is transmitted and received simultaneously: simplex, half duplex and full duplex. In cellular networks the spectrum must be shared between all users, and we have full-duplex communication. Note that a half-duplex channel can carry a full-duplex service, such as a phone conversation. This can be done using two methods:

1. TDD: communication uses one frequency, with different times for transmitting and receiving. This emulates full-duplex communication over a half-duplex link.

2. FDD: two frequencies are used for communication, and transmitting and receiving are simultaneous.

The advantages of TDD are observed in asymmetrical uplink and downlink data transmissions. Another advantage is that channel estimates for beamforming apply to both uplink and downlink. TDD has a disadvantage, however: it needs guard periods between transmissions.

The advantages of FDD are observed in symmetrical uplink and downlink data transmissions. With FDD, the interference between two radio base stations is lower and the spectral efficiency is better than using TDD.

The most commonly used version of LTE is FDD, which uses separate frequencies for downlink and uplink in the form of a band pair.

4.3. LTE key parameters

Fig. 4.3. Parameters of LTE

This image provides an overview of LTE key parameters. First, the frequency range: LTE can be operated in the FDD and TDD bands of UMTS. The channel bandwidths defined for LTE are 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz and 20 MHz, and each bandwidth is composed of a certain number of resource blocks, as shown in the table. A resource block is 180 kHz wide and represents the smallest entity for resource assignment: a terminal or subscriber can be assigned one resource block or a multiple of it, and this holds for both uplink and downlink. So 1.4 MHz is represented by 6 resource blocks, and so on, up to 20 MHz being represented by 100 resource blocks. For every 5 MHz of spectrum allocated per cell, LTE can support up to 200 data clients. The 200 clients per 5 MHz ratio is the optimal configuration, but there are ways to increase the number of clients by sacrificing speed and capacity.
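The bandwidth-to-resource-block mapping above can be restated as a small lookup table. The text explicitly gives the 1.4 MHz → 6 RB and 20 MHz → 100 RB endpoints; the intermediate counts used here are the standard 3GPP values, stated as an assumption since the referenced table is not reproduced.

```python
# Channel bandwidth (MHz) -> number of resource blocks.
# Endpoints from the text; intermediate values are standard 3GPP numerology.
RESOURCE_BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

RB_WIDTH_KHZ = 180  # each resource block spans 180 kHz

for bw_mhz, n_rb in RESOURCE_BLOCKS.items():
    occupied_khz = n_rb * RB_WIDTH_KHZ
    print(f"{bw_mhz} MHz -> {n_rb} RBs ({occupied_khz} kHz occupied)")
```

Note that the occupied spectrum (e.g. 6 × 180 kHz = 1080 kHz for the 1.4 MHz channel) is always somewhat less than the nominal channel bandwidth, the remainder acting as guard spectrum at the channel edges.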

The modulation schemes available for LTE are QPSK, 16-QAM and 64-QAM. In the uplink, 64-QAM modulation is optional for the handset to support. The multiple-access schemes for LTE are OFDMA (Orthogonal Frequency Division Multiple Access) in the downlink and SC-FDMA (Single Carrier Frequency Division Multiple Access) in the uplink. LTE supports MIMO; in fact, MIMO antenna technology is essential for LTE in order to meet the high data-rate and throughput requirements. Regarding peak data rates: in the downlink we can achieve 150 Mbps with a 2×2 MIMO configuration (UE category 4), and even a 300 Mbps peak data rate with a category 5 UE (a 4×4 MIMO setup). Both values hold for 20 MHz operation. In the uplink, a 75 Mbps peak data rate is possible in 20 MHz operation. Note that peak data rates are largely theoretical values: they are unlikely to be achieved in real-life scenarios, where there are certain cell loads and radio-link situations. Nevertheless, peak data rates are useful to know because they give a reference for what is theoretically possible with the system.

4.4 Operating Bands

Fig. 4.4. Operating bands

As you can see in these tables, bands 1 to 17 are defined for FDD operation and bands 33 to 40 for TDD operation. Some prioritizations are already defined: there is a certain prioritization for band 1 (the existing UMTS band) and for bands 3, 7 and 13. For TDD, bands 38 and 40 have a certain prioritization for network deployment and for the first commercial deployments, but LTE can be operated in all of these frequency bands.

4.5 OFDM

Fig. 4.5. Subdivided bandwidth in an OFDM system

In an OFDM system the available bandwidth is subdivided into multiple sub-carriers, each of which can be independently modulated. Typically there are several hundred sub-carriers in a given bandwidth, with a constant spacing of x kHz. The figure shows an example for a 5 MHz bandwidth, but the OFDM principle scales easily to higher bandwidths. Compare OFDM transmission with single-carrier transmission, for example wideband CDMA: since the multiple sub-carriers in OFDM transmit in parallel, each one can transmit with a low symbol rate, improving the robustness of the technology under mobile propagation conditions.

Fig. 4.6. OFDM signal generation chain

This picture shows the OFDM signal generation chain. The modulated data symbols to be transmitted are first parallelized and then used as inputs to an IFFT operation, which takes place on the transmitter side of an OFDM system. This operation produces OFDM symbols, so we actually have a conversion from the frequency domain to the time domain, and the OFDM symbols are then transmitted in the time domain as the OFDM signal. On the receiver side, an FFT operation receives the symbols and converts them back into the frequency domain.
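The IFFT/FFT round trip described above can be sketched with the standard library alone. A naive DFT stands in for the FFT here (same mathematics, slower), and the four QPSK symbols are a hypothetical payload.

```python
# Sketch of the OFDM chain: frequency-domain symbols -> IDFT -> time-domain
# OFDM symbol at the transmitter; DFT at the receiver recovers the symbols.
import cmath

def idft(freq_symbols):
    """Inverse DFT: one modulated symbol per sub-carrier -> time samples."""
    n = len(freq_symbols)
    return [sum(X * cmath.exp(2j * cmath.pi * k * t / n)
                for k, X in enumerate(freq_symbols)) / n
            for t in range(n)]

def dft(time_samples):
    """Forward DFT: time samples -> per-sub-carrier symbols."""
    n = len(time_samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(time_samples))
            for k in range(n)]

# Four QPSK symbols, one per sub-carrier (hypothetical payload).
subcarriers = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]

ofdm_symbol = idft(subcarriers)  # transmitter: frequency -> time
recovered = dft(ofdm_symbol)     # receiver: time -> frequency

print(all(abs(a - b) < 1e-9 for a, b in zip(recovered, subcarriers)))  # True
```

A real LTE transmitter also inserts a cyclic prefix between symbols (see the frame-structure discussion later in the chapter); that step is omitted here to keep the conversion itself visible.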

Fig. 4.7. OFDM and OFDMA

OFDM allocates users in the time domain only. In the left picture you can see that users 1, 2 and 3 are separated only in the time domain; on the right side, OFDMA allocates users in both the time and frequency domains. User 1 uses part of the available bandwidth, as does user 2, so users can share the available bandwidth.

OFDM helps extend wireless access systems over wide areas. It looks similar to a frequency-division multiplexing scheme, where the frequency channel is divided into multiple smaller sub-channels and guard bands are required to avoid interference between channels. The main difference between FDM and OFDM is that the latter divides the frequency bandwidth into sub-carriers: narrow, orthogonal parts. The set of sub-carriers is composed of a DC sub-carrier, which marks the centre of the channel, data carriers (used to carry data) and pilot carriers, which are used for channel-sensing purposes.

Fig. 4.8. OFDMA scheme

LTE uses a conventional OFDMA scheme. The picture shows an example for a 5 MHz bandwidth and a representation of the signal in the time and frequency domains. In the frequency domain you see the sub-carriers; in LTE they have a constant spacing of 15 kHz in the regular configuration. As we have seen, each sub-carrier can be independently modulated.

The "LTE signaling" firmware application (option R&S CMW-KS500) makes it possible to emulate an E-UTRAN cell and to communicate with the UE under test. The UE can synchronize to the downlink signal and attach to the PS domain. A connection can be set up (3GPP-compliant RMC or user-defined channel).

The current release supports duplex mode FDD for one antenna configurations.

4.6. Test Setup

The basic test setup for a standard cell scenario uses a bidirectional RF connection between the tester and the device under test (DUT), carrying both the downlink and the uplink signal:

● The R&S CMW500 transmits the downlink signal to which the DUT can synchronize in order to perform an attach. The downlink signal is used to transfer signaling messages and user data to the DUT.

● The DUT transmits an uplink signal that the R&S CMW500 can receive and decode in order to set up a connection and perform various measurements.

For this setup the DUT is connected to one of the bidirectional RF COM connectors at the front panel of the R&S CMW500. No additional cabling and no external trigger is needed. The input level ranges of all RF COM connectors are identical.

Fig. 4.9. Initiating signaling tests

The signal generator of the "LTE signaling" application is controlled like any real-time signal generator.

The LTE downlink signal is turned on as long as the "LTE Signaling" softkey indicates the "ON" state.

When DL signal transmission has been turned on, the connection states can be controlled via hotkeys at the R&S CMW500 and via actions at the UE.

The default settings of the R&S CMW500 generally ensure a DL signal with suitable characteristics for connection setup. The most important settings can be modified directly in the main view.

Connection States

An LTE core network provides only Packet Switched (PS) services. The main PS connection states are described in the following table.

Fig. 4.10. PS connection states

A number of control commands initiated by the instrument or by the UE switch between the listed states. The following figure shows possible state transitions. Dashed lines correspond to actions initiated by the UE, solid lines to actions initiated by the instrument. The Disconnect action can be initiated by both.

Fig. 4.11. Connection states

In addition to the main states shown in the table and the figure the instrument indicates the following transitory states:

● Signaling in Progress: displayed e.g. during attach or when the channel changes for an established connection.

● Connection in Progress: displayed during connection setup.

● Disconnect in Progress

Resources in Time and Frequency Domain

The DL radio resources in an LTE system are divided into time-frequency units called resource elements. In the time domain a resource element corresponds to one OFDM symbol. In the frequency domain it corresponds to one subcarrier (see next figure).

For the mapping of physical channels to resources, the resource elements are grouped into resource blocks (RB). Each RB consists of 12 consecutive subcarriers (180 kHz) and 6 or 7 consecutive OFDM symbols (0.5 ms).

Fig. 4.12. Resource element grid (7 OFDM symbols per RB, 1 TX antenna)

The positions of resource elements carrying reference signals (pilots) are standardized and depend on the number of transmit antennas. The figure above applies to one-antenna configurations. If more than one transmit antenna is used, each antenna uses different resource elements for its reference signals; these resource elements are reserved for that antenna and not used at all by the others.

The smallest resource unit that can be assigned to a UE consists of two resource blocks (180 kHz, 1 ms). The assignment of resources to a UE may vary in time and frequency domain.

Fig. 4.13. Assignment of Resource Blocks (RB) to UEs

In the time domain the additional units radio frame, subframe and slot (containing the OFDM symbols) are defined, see figure below. A guard time called Cyclic Prefix (CP) is added to each OFDM symbol. Depending on the duration of the guard time, it is either called normal CP or extended CP and the slot contains either 7 or 6 OFDM symbols.

Fig. 4.14. LTE DL frame structure for FDD
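The time- and frequency-domain units introduced above can be cross-checked with simple arithmetic. A sketch using the values given in the text (10 ms frames, 0.5 ms slots with 7 or 6 OFDM symbols, 12 sub-carriers per RB at the 15 kHz spacing mentioned earlier):

```python
# Arithmetic sketch of the LTE time/frequency resource units.

FRAME_MS = 10
SUBFRAMES_PER_FRAME = 10
SLOTS_PER_SUBFRAME = 2
SLOT_MS = FRAME_MS / (SUBFRAMES_PER_FRAME * SLOTS_PER_SUBFRAME)  # 0.5 ms

# Normal CP: 7 OFDM symbols per slot; extended CP: 6.
SYMBOLS_PER_SLOT = {"normal CP": 7, "extended CP": 6}

SUBCARRIERS_PER_RB = 12
SUBCARRIER_SPACING_KHZ = 15

print(SLOT_MS)                                      # 0.5 (ms per slot)
print(SUBCARRIERS_PER_RB * SUBCARRIER_SPACING_KHZ)  # 180 (kHz per RB)

# Resource elements per resource block (one slot):
for cp, n_symbols in SYMBOLS_PER_SLOT.items():
    print(cp, SUBCARRIERS_PER_RB * n_symbols)
```

This confirms the figures quoted earlier: 12 × 15 kHz reproduces the 180 kHz RB width, and a normal-CP resource block contains 12 × 7 = 84 resource elements per slot.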

Physical Channel Overview

A downlink physical channel corresponds to a set of resource elements carrying information originating from higher layers. Physical channels can be either broadcast channels or shared channels. Dedicated channels are not used for LTE.

Broadcast channels carry messages that are not directed at a particular UE; they are point-to-multipoint channels. Shared channels are shared by several UEs. At a given time, a shared channel is assigned to one UE only, but the assignment may change within a few timeslots.

An overview of the physical channels of the generated downlink signal is given in the following table.

Fig. 4.15. Physical channel overview

Scheduling Types

The signaling application supports two categories of channels that can be scheduled.

●3GPP Compliant RMCs

●User Defined Channels

3GPP Compliant RMCs

Reference Measurement Channels (RMC) as defined in 3GPP TS 36.521 are required for various transmitter and receiver conformance tests.

An RMC can be defined via the following set of parameters:

● Channel bandwidth

● Number of allocated Resource Blocks (RB)

● Modulation type

● Transport block size index

The 3GPP compliant combinations supported by the R&S CMW500 are listed in the following tables. Many other parameters are indirectly determined by these four parameters. Refer to 3GPP TS 36.521 for detailed tables. Some of these parameters are also displayed at the GUI for information.

The allocated resource blocks are contiguous and located at one end of the channel bandwidth. Which end is used can be configured.

Fig. 4.15. DL RMCs for FDD, one TX antenna

User Defined Channels

In addition to 3GPP compliant RMCs, the signaling application supports channels that do not conform to 3GPP (option R&S CMW-KS510 required).

Similar to RMCs these channels are defined via a set of parameters:

● Channel bandwidth

● Number of allocated Resource Blocks (RB)

● Position of the first allocated RB

● Modulation type

● Transport block size index

The allowed combinations are much more flexible than for 3GPP compliant RMCs. The allocated resource blocks are contiguous and the position of the first RB can be configured. The supported modulation types are independent of the selected number of allocated RBs and for each modulation type a range of transport block size indices is available.

Supported number of RB / position of first RB (Downlink)

In a SISO downlink the number of allocated resource blocks is only restricted by an upper limit. Any number of RBs between 1 and this upper limit can be allocated. The maximum number of RBs depends on the channel bandwidth and is listed in the table below. The position of the first allocated RB is freely selectable within the channel bandwidth. Thus allowed positions are 0 to <Maximum no of RBs> – 1.
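As an illustration, the bandwidth-to-RB mapping from 3GPP TS 36.101 and the resulting first-RB positions can be sketched as follows (the dictionary holds the standard maxima; the function name is my own):

```python
# Channel bandwidth in MHz -> maximum number of resource blocks,
# per 3GPP TS 36.101.
MAX_RB = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def allowed_first_rb_positions(bandwidth_mhz):
    """Allowed positions of the first allocated RB: 0 .. max_RB - 1."""
    return range(0, MAX_RB[bandwidth_mhz])
```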

The carrier frequencies for LTE signals are defined in 3GPP TS 36.101. Each operating band contains a number of carrier frequencies identified by channel numbers (EARFCN, E-UTRA Absolute Radio Frequency Channel Number).

The tables below provide an overview of all bands for uplink and downlink signals. For each band they list FOffset, NOffset, channel numbers N and carrier center frequencies F. The table for uplink signals also lists the separation between uplink carrier frequency and downlink carrier frequency (the frequency pair for one UE in FDD mode).
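The channel-number-to-frequency relation defined in 3GPP TS 36.101 is linear, F = FOffset + 0.1 MHz · (N − NOffset). A minimal sketch for band 1, whose offsets are taken from TS 36.101 (other bands would be added analogously):

```python
# EARFCN-to-frequency mapping per 3GPP TS 36.101:
#   F [MHz] = F_offset + 0.1 * (N - N_offset)
# Only band 1 is included here as an example.

BANDS_DL = {1: (2110.0, 0)}       # band -> (F_offset in MHz, N_offset)
BANDS_UL = {1: (1920.0, 18000)}

def earfcn_to_mhz(band, n, direction="dl"):
    f_off, n_off = (BANDS_DL if direction == "dl" else BANDS_UL)[band]
    return f_off + 0.1 * (n - n_off)
```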

Fig. 4.16. Operating bands for uplink signals

Fig. 4.17. Operating bands for downlink signals

4.7. RF Settings

The parameters in this section provide general signal settings and configure the RF input and output path.

Fig. 4.18. RF settings

RF Output > Routing

Selects the output path for the generated RF signal, i.e. the output connector and the TX module to be used.

Depending on your hardware configuration there may be dependencies between both parameters. Select the RF connector first. The "Converter" parameter offers only values compatible to the selected RF connector.

RF Output > External Attenuation

Defines the value of an external attenuation (or gain, if the value is negative) in the output path. With an external attenuation of x dB, the power of the generated signal is increased by x dB. The actual generated levels are equal to the displayed values plus the external attenuation.

If a correction table for frequency-dependent attenuation is active for the used connector, the table name and a button "FDCorr!" are displayed to the right of this parameter. Press the button to display the table entries.

RF Input > Routing

Selects the input path for the measured RF signal, i.e. the input connector and the RX module to be used.

Depending on your hardware configuration there may be dependencies between both parameters. Select the RF connector first. The "Converter" parameter offers only values compatible to the selected RF connector.

RF Input > External Attenuation

Defines the value of an external attenuation (or gain, if the value is negative) in the input path. The power readings of the R&S CMW500 are corrected by the external attenuation value.

The external attenuation also enters into the internal calculation of the maximum input power that the R&S CMW500 can measure.

If a correction table for frequency-dependent attenuation is active for the used connector, the table name and a button "FDCorr!" are displayed to the right of this parameter. Press the button to display the table entries.

RF Frequency > …

"UL Channel" specifies the center frequency of the RF analyzer and "DL Channel" the center frequency of the generated LTE signal.

The relation between operating band, frequency and channel number and the UL/DL separation are defined by 3GPP.

To specify the center frequencies, select an operating band first, then enter a valid channel number or frequency for uplink or downlink. The related frequency or channel number and the parameters for the other direction are calculated automatically.

The DL and UL channels can be changed in all main connection states including "Connection Established" (intra-band inter-frequency handover; the R&S CMW500 performs a physical channel reconfiguration).

RF Power Uplink > …

These parameters configure the expected UL power. Two modes are available:

●According to UL Power Control Settings

The UL power is calculated automatically from the UL power control settings. The resulting expected nominal power is displayed for information. The displayed reference level is calculated as:

Reference Level = Expected Nominal Power + 12 dB Margin

●Manual

In manual mode the expected nominal power and a margin can be defined manually.

The displayed reference level is calculated as:

Reference Level = Expected Nominal Power + Margin

The margin is used to account for the known variations (crest factor) of the RF input signal power.

Downlink Power Levels

This section defines power levels of physical downlink channels and physical downlink signals.

The parameters in this section except the power offset PA can be changed in all main connection states including "Connection Established".

Fig. 4.19. DL power levels

RS EPRE

Defines the Energy Per Resource Element (EPRE) of the Reference Signal (RS). The RS EPRE corresponds to the DL power averaged over all resource elements carrying cell-specific reference signals within one subcarrier (15 kHz).

Additionally the "Full Cell BW Power" resulting from the RS EPRE is displayed. It is calculated assuming that the full DL cell bandwidth is used (all subcarriers) and all power offsets equal 0 dB.
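Assuming all power offsets are 0 dB, the displayed "Full Cell BW Power" follows from the RS EPRE and the number of occupied subcarriers (12 per RB). A minimal sketch:

```python
import math

# "Full Cell BW Power" as described above: the RS EPRE plus the number
# of occupied subcarriers (12 per resource block), expressed in dB,
# assuming all power offsets equal 0 dB.

def full_cell_bw_power(rs_epre_dbm, n_rb):
    return rs_epre_dbm + 10 * math.log10(12 * n_rb)
```

For example, an RS EPRE of -85 dBm/15 kHz over 50 RBs (10 MHz) yields roughly -57.2 dBm of total cell power.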

PSS Power Offset

Power level of a Primary Synchronization Signal (PSS) resource element relative to the RS EPRE.

SSS Power Offset

Power level of a Secondary Synchronization Signal (SSS) resource element relative to the RS EPRE.

PBCH Power Offset

Power level of a Physical Broadcast Channel (PBCH) resource element relative to the RS EPRE.

PCFICH Power Offset

Power level of a Physical Control Format Indicator Channel (PCFICH) resource element relative to the RS EPRE.

PHICH Power Offset

Power level of a Physical Hybrid ARQ Indicator Channel (PHICH) resource element relative to the RS EPRE.

PDCCH Power Offset

Power level of a Physical Downlink Control Channel (PDCCH) resource element relative to the RS EPRE.

OCNG

Enables or disables the OFDMA Channel Noise Generator (OCNG).

When the OCNG is enabled, it uses all unallocated Resource Blocks (RB) of the cell bandwidth, so that the full cell bandwidth is used. Example: if for a bandwidth of 10 MHz only 16 RBs are used by the RMC, the remaining 34 RBs are used by the OCNG.

The power level of the OCNG is chosen automatically so that the displayed "Full Cell BW Power" is reached. Thus the overall DL power is constant in each transmission time interval.

PDSCH > …

These parameters define the power level of the Physical Downlink Shared Channel (PDSCH). According to 3GPP TS 36.213 the power level of a PDSCH resource element relative to the RS EPRE is denoted by:

● ρA (rhoA) if no RS resource elements are transmitted simultaneously on other subcarriers

● ρB (rhoB) if RS resource elements are transmitted simultaneously on other subcarriers

The power offset PA and the power ratio index PB are related to these ratios as follows:

● PA = ρA

● PB = 0, 1, 2, 3 corresponds to ρB/ρA = 1, 4/5, 3/5, 2/5 for SISO; 5/4, 1, 3/4, 1/2 for MIMO

The displayed ratios "rhoA" and "rhoB" are calculated from the parameters "PA" and "PB".
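The PA/PB mapping above translates directly into code; a sketch computing rhoB in dB from PA and the PB index (function and table names are my own):

```python
import math

# rhoA equals PA (in dB); rhoB follows from PA and the PB index,
# using the rhoB/rhoA ratios quoted above from 3GPP TS 36.213.

RHO_B_OVER_A = {  # PB index -> rhoB/rhoA (linear ratio)
    "siso": {0: 1.0, 1: 4/5, 2: 3/5, 3: 2/5},
    "mimo": {0: 5/4, 1: 1.0, 2: 3/4, 3: 1/2},
}

def rho_b_db(pa_db, pb_index, mode="siso"):
    return pa_db + 10 * math.log10(RHO_B_OVER_A[mode][pb_index])
```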

AWGN

Total level of the Additional White Gaussian Noise (AWGN) interferer in dBm/15 kHz (the spectral density integrated across one subcarrier). If enabled, the AWGN signal is added to the DL LTE signal for the entire cell bandwidth.

Option R&S CMW-KS510 required.

Uplink Power Control

This section defines parameters related to the control of the UE uplink power by the instrument during connection setup and during an established connection.

The parameters of the "TX Power Control" section can be changed in all main connection states including "Connection Established".

Fig. 4.20. UL power control

PUSCH > Open Loop Nominal Power

Defines a cell specific nominal power value for full resource block allocation in the UL (entire channel bandwidth used). From this value the cell specific nominal power value related to one resource block is determined and sent to all UEs via broadcast.

The UE shall use this value for calculation of the average power of an SC-FDMA symbol in which the PUSCH is transmitted.

This power control procedure is performed during connection setup. Afterwards the power shall remain constant until changed by TPC commands.
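A sketch of the per-RB derivation, assuming the full-allocation nominal power is distributed equally over all uplink resource blocks:

```python
import math

# Per-RB nominal power, assuming the cell-specific nominal power for
# full resource block allocation is split equally across all UL RBs.

def per_rb_nominal_power(full_power_dbm, n_rb):
    return full_power_dbm - 10 * math.log10(n_rb)
```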

TX Power Control (TPC) > Active TPC Setup

Select the TPC setup to be executed. All setups use relative (accumulative) TPC commands and are only executed when a connection has been established. During connection setup 0 dB commands are sent.

● Max Power, Min Power, Constant Power

The UE is commanded to maximum power (+1 dB steps), to minimum power (-1 dB steps) or to keep the UL power constant (0 dB).

If one of these setups is selected, it is executed automatically without additional user action.

● Single Pattern

A user defined TPC pattern (see next parameter) is sent to the UE when the "Execute" button is pressed. Before and after execution of the pattern the power is kept constant (0 dB commands).

● Closed Loop

The UE is commanded to the configured target power. This is done by sending +1 dB or -1 dB commands until the difference between the measured UL power and the target power is less than 1 dB. Afterwards the power is kept constant (0 dB commands).

This setup is executed automatically without additional user action. Even the measurement of the uplink power is performed automatically in the background. No additional measurement application is needed.
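The closed-loop behaviour can be modeled in a few lines, assuming an ideal UE that applies each TPC command exactly:

```python
# Minimal simulation of the "Closed Loop" TPC setup: +1 dB / -1 dB
# commands are sent until the measured UL power is within 1 dB of the
# target, then 0 dB commands keep the power constant.

def closed_loop_tpc(measured_dbm, target_dbm, max_steps=100):
    commands = []
    for _ in range(max_steps):
        diff = target_dbm - measured_dbm
        if abs(diff) < 1:
            commands.append(0)           # keep power constant
            break
        step = 1 if diff > 0 else -1     # +1 dB up / -1 dB down command
        measured_dbm += step             # ideal UE applies the command exactly
        commands.append(step)
    return measured_dbm, commands
```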

TX Power Control (TPC) > Single Pattern

Defines a pattern for the TPC setup "Single Pattern". The pattern consists of 1 to 5 up (+1 dB) or down (-1 dB) commands.

TX Power Control (TPC) > Closed Loop Target Power

Defines the target power for the TPC setup "Closed Loop".

Max. Allowed Power P-Max

Specifies the maximum allowed output power for the UE in the cell. This value is signaled to the UE. The UE output power shall neither exceed this value nor the maximum power value resulting from the UE power class.

Physical Cell Setup

This section defines physical layer attributes of the simulated cell.

Fig. 4.21. Cell setup parameters

DL / UL Cell Bandwidth

Define the DL and UL cell bandwidths, also called channel bandwidth by 3GPP. In the current release the two values are identical.

The resulting maximum number of DL resource blocks is indicated for information. It is smaller than the DL bandwidth divided by 180 kHz, because some space at the channel borders must not be occupied by resource blocks.

Physical Cell ID

The cell ID is used for generation of the physical synchronization signals. During cell search the UE determines the cell ID from the primary and secondary synchronization signal. The physical cell ID can be set independent of the E-UTRAN cell identifier sent to the UE via broadcast.
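During cell search the UE combines the PSS index (0 to 2) and the SSS group (0 to 167) into one of 504 physical cell IDs, as defined in 3GPP TS 36.211; a one-line sketch:

```python
# Physical cell ID derived from the synchronization signals:
# PCI = 3 * (SSS group, 0..167) + (PSS index, 0..2), giving 504 IDs.

def physical_cell_id(sss_group, pss_index):
    assert 0 <= sss_group <= 167 and 0 <= pss_index <= 2
    return 3 * sss_group + pss_index
```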

Cyclic Prefix

Defines whether a normal or extended Cyclic Prefix (CP) is added to each DL OFDM symbol. The current release supports only normal CP.

PRACH > Configuration Index

Displays the (fixed) PRACH configuration index, defining the preamble format and other PRACH signal properties, e.g. which resources in the time domain are allowed for transmission of preambles. This value is broadcasted to the UE.

PRACH > Frequency Offset

The frequency offset shall be used by the UE to calculate the location of the 6 preamble Resource Blocks (RB) within the channel bandwidth. The value is broadcasted to the UE.

PRACH > Logical Root Sequ.Idx

Specifies the logical root sequence index to be used by the UE for generation of the preamble sequence.

PRACH > Zero Corr. Zone Conf.

The zero correlation zone config determines which NCS value of an NCS set has to be used by the UE for generation of the preamble sequence. The value is broadcasted to the UE.

4.8. Network Settings

The "Network" settings configure parameters of the simulated radio network.

Fig. 4.22. Network settings

Identity Settings

The "Identity" settings configure identity parameters of the simulated radio network. The values are transferred to the UE under test via broadcast.

Fig. 4.23. Network identity settings

MCC

Specifies the 3-digit Mobile Country Code (MCC).

MNC

Specifies the Mobile Network Code (MNC). A two or three-digit MNC can be set.

TAC

Specifies the Tracking Area Code (TAC).

E-UTRAN Cell Identifier

Specifies the E-UTRAN cell identifier, unique within a PLMN. It is sent to the UE via broadcast and can be set independent of the physical cell ID.

Security Settings

The "Security Settings" configure parameters related to the authentication procedure and other security procedures.

Fig. 4.24. Security settings

Authentication

Enables or disables authentication, to be performed during the attach procedure. Authentication requires a test USIM. An appropriate 3GPP R8 USIM can be obtained from Rohde & Schwarz (R&S CMW-Z03, stock no. 1202.9503.02).

NAS Security

Enables or disables Non-Access Stratum (NAS) security. With enabled NAS security the UE performs integrity protection of NAS signaling. This setting is only relevant if authentication is enabled.

AS Security

Enables or disables Access Stratum (AS) security. With enabled AS security the UE performs integrity protection of RRC signaling. This setting is only relevant if authentication is enabled.

Integrity Algorithm

Selects an algorithm for integrity protection. NULL means that integrity protection is disabled. Use this setting for UEs which do not support the SNOW3G (EIA1) algorithm.

Milenage

Enable this parameter to use a USIM with MILENAGE algorithm set.

OPc

The key OPc is used for authentication and integrity check procedures with the MILENAGE algorithm set (parameter "Milenage" enabled). The value is entered as a 32-digit hexadecimal number.

Secret Key

The secret key K is used for the authentication procedure (including a possible integrity check). The value is entered as a 32-digit hexadecimal number.

The integrity check fails unless the secret key set by this parameter is equal to the value stored on the test USIM of the UE under test. The test USIM R&S CMW-Z03 is compatible with the default setting of this parameter. The secret key is ignored if authentication is switched off.

UE Identity

The "UE Identity" settings configure the default IMSI.

Fig. 4.25. UE identity settings

Default IMSI

15-digit International Mobile Subscriber Identity (IMSI). The default IMSI is required for mobile terminated connection setups without previous attach of the UE.

5. Scenario

The experiment setup consists of a video server and a client. Between them is the CMW500, a Rohde & Schwarz instrument that generates the LTE signal connecting the client to the server and also introduces packet loss and delay. The design of the experiment is shown in Fig. 5.1.

Fig. 5.1. Design of experiment

Fig. 5.2. Design of experiment: interfaces

As the figure shows, the mobile device is connected to the CMW500 via LTE and to the video server via Ethernet. For this purpose, the R&S instrument provides a dedicated port called LAN DAU.

6. Data Application Unit

The Data Application Unit (DAU, option R&S CMW-B450A, -B450B or -B450D) offers a common data testing solution for supported radio access technologies. It allows testing End-to-End (E2E) IP data transfer and performing user plane (U-plane) tests for an IP connection to a mobile, set up via a signaling application or a protocol test application.

The DAU is independent of the underlying radio access network. It provides a common user plane handling and ensures data continuity during handover from one radio access technology to another one. The DAU also hosts IP services that have been optimized for high throughput and are running in an isolated controlled environment to ensure reproducible test results.

The following internal IP services are currently available:

● File transfer via File Transfer Protocol (FTP)

● Web browsing via Hypertext Transport Protocol (HTTP)

● IP Multimedia Subsystem (IMS) server supporting voice calls and SMS over IMS (R&S CMW-KAA20 required)

● DNS server supporting DNS requests of type A, AAAA and SRV

If connected to an external network, the DAU acts as an IP gateway, separating the R&S CMW500 internal IP network from the external IP network. The mobile can use both the embedded IP services provided by the DAU and the IP services provided by the external network. For example, it can access web servers and DNS servers both in the internal network and in the external network.

For DAU measurements, option R&S CMW-KM050 is required. It provides the following measurement applications for testing the properties of an IP connection to the mobile:

● Ping measurement, testing the network latency

● IPerf measurement, testing the throughput and reliability, using TCP/IP and UDP/IP

● Throughput measurement, indicating the total throughput at the DAU on IP level

● DNS request measurement, monitoring all DNS queries addressed to the internal DNS server

● IP logging application, creating log files of the IP traffic at the LAN DAU connector or between DAU and mobile

● IP analysis application, monitoring and analyzing the uplink and downlink IP traffic of the mobile (R&S CMW-KM051 required in addition to R&S CMW-KM050)

● IP replay application, replaying IP traffic from packet capture files

The DAU supports the internet protocols IPv4 and IPv6. Both can be used individually or in parallel, depending on the IP connection established by the Radio Access Network. IPv4 requires option R&S CMW-KA100.

A standalone test setup comprises the mobile, the R&S CMW500 and at least one RF connection between them. The mobile can access embedded DAU services, for example browse pages of the built-in web server or exchange files with the built-in FTP server.

Whether one or more RF connections are required and which RF connectors can be used depends on the application used to set up the IP connection and its configuration.

The RF path used (connector and converter) is known to the application, but not to the DAU. The DAU accesses the established IP connection via an internal Ethernet connection to the signaling units.

For scenarios involving only one downlink carrier and one uplink carrier, you typically use an RF COM connector, e.g. RF 1 COM. If you want to access external services with the mobile, connect the external network to the LAN DAU connector at the rear panel of the instrument. Note that only the LAN DAU connector allows the DAU, and thus the mobile, to access an external network and can be used for U-plane tests. The other LAN connectors of the instrument cannot be used for this purpose.

Fig. 6.1. DAU menu

Fig. 6.2. DAU IP configuration menu

For this scenario, I configured the DAU with the IP address 192.168.1.117, the server with 192.168.1.34 and the phone with 192.168.1.19. As you can see, all the addresses are in the same network, with the same subnet mask: 255.255.255.0.
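A quick sanity check of this addressing plan with Python's standard ipaddress module:

```python
import ipaddress

# Verify that the three hosts of the test setup share the same /24
# subnet (192.168.1.0 with mask 255.255.255.0).

network = ipaddress.ip_network("192.168.1.0/24")
hosts = {
    "DAU": "192.168.1.117",
    "server": "192.168.1.34",
    "phone": "192.168.1.19",
}
same_subnet = all(ipaddress.ip_address(a) in network for a in hosts.values())
```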

Data Application Measurements

Data application measurements require option R&S CMW-KM050. Activate the measurements via the "Measurement Controller" dialog. You can then access the main view of the measurements dialog from the task bar. The "Data Application Measurements" dialog contains common elements at the top, one overview tab and one tab per DAU measurement.

RAN Selection

DAU measurements can be performed for connections established via signaling applications. Via "Select RAN" at the top of the "Data Application Measurements" dialog, you can select a signaling application. As a consequence, the expected maximum throughput for this signaling application is displayed and quick access softkeys for this signaling application are activated.

Overview Tab

The tab provides an overview of the DAU measurements, including the state of each measurement and selected measurement results.

Fig. 6.3. Overview tab

Ping Tab

The "Ping" measurement sends ping requests to a configurable IP address and displays the resulting ping latency in a diagram. The "Ping" tab allows you to configure the properties of the ping command and displays the result diagram.

Fig. 6.4. Ping tab

Config

Configures the properties of the ping command.

● "Destination IP": IP address that you want to ping

● "Interval": pinging interval

● "Timeout": timeout for unanswered ping requests

● "Payload": packet size used as probe

● "Ping Count": number of Internet Control Message Protocol (ICMP) echo request packets to be sent

You can change values that are not grayed out even during the measurement, for example, the "Interval".
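Outside the instrument, a comparable latency measurement can be scripted around the ordinary ping command; a sketch of the output parsing (the sample line below is illustrative, not captured from this setup):

```python
import re

# Extract the round-trip time from one line of typical ping output,
# e.g. "64 bytes from 192.168.1.117: icmp_seq=1 ttl=64 time=23.4 ms".

def parse_ping_latency(line):
    """Return the round-trip time in ms, or None if the line has none."""
    m = re.search(r"time=([\d.]+)\s*ms", line)
    return float(m.group(1)) if m else None

sample = "64 bytes from 192.168.1.117: icmp_seq=1 ttl=64 time=23.4 ms"
```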

IPerf Tab

The "IPerf" measurement uses the open source tool IPerf to evaluate the throughput and reliability of a connection in uplink and/or downlink direction.

For a measurement, the IPerf tool must be running at both ends of a connection. At the end sending data, it is configured as "client" and at the end receiving data, as "server".

Fig. 6.5. IPerf tool scheme

The "IPerf" tab allows you to configure up to 8 server instances and 8 client instances at the DAU in parallel.

Fig. 6.6. IPerf tab

Fig. 6.7. IPerf settings

Results

The bar graph indicates up to three result bars per instance:

● "Uplink": bit rate measured in uplink direction for active server instances

● "Downlink": bit rate transmitted in downlink direction for active client instances

● "Lost Packets": percentage of packets lost during the measurement, for active server instances with protocol type UDP

The uplink and downlink bit rates are also indicated in the instance table above the bar graph. The lost packet percentage is also indicated below the bar graph.

Packet Size

Defines the packet size for IPerf tests (payload bytes), applicable to UDP and TCP. The packet size is especially important if you want to use a low UDP bit rate. In order to reach this bit rate, configure the packet size according to the following rules:

● Convert the bit rate to byte/s and divide it by an integer number (divisor n) to derive the packet size.

● Use the smallest possible divisor n, resulting in an integer packet size.

● Ensure that the packet size is smaller than or equal to the maximum payload size resulting from the configured MTU (MTU minus overhead).
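The packet-size rules above translate directly into code; a sketch, assuming a maximum payload of 1460 bytes (a typical value for a 1500-byte MTU minus IP/TCP overhead):

```python
# Find the packet size for a low UDP bit rate following the rules above:
# convert the bit rate to byte/s, then use the smallest integer divisor
# that yields an integer packet size not exceeding the maximum payload.

def udp_packet_size(bit_rate_bps, max_payload=1460):
    byte_rate = bit_rate_bps // 8
    for n in range(1, byte_rate + 1):
        if byte_rate % n == 0 and byte_rate // n <= max_payload:
            return byte_rate // n
    return None
```

For a target rate of 1 Mbit/s (125000 byte/s), the smallest suitable divisor is 100, giving 1250-byte packets.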

Server Settings

Defines the properties of the server instances for uplink direction (DAU receives data from the mobile):

● "Use"

Specifies whether the server instance is used.

● "UDP or TCP"

Selects the protocol type to be used.

● "Port"

Specifies the LAN DAU port number for the connection. This value must match the mobile's client port settings.

● "Window size"

Specifies the TCP receiving window size (for TCP only)

Client Settings

Defines the properties of the client instances for downlink direction (DAU sends data to the mobile).

● "Use"

Specifies whether the client instance is used.

● "UDP or TCP"

Selects the protocol type to be used.

● "Port"

Specifies the LAN DAU port number for the connection. This value must match the mobile's server port settings.

● "UE IP Address"

Specifies the IP address of the mobile.

● "Window size"

Specifies the size of the NACK window (in kByte), for TCP only.

● "Parallel Connections"

Specifies the number of parallel connections for the selected port, for TCP only.

● "Bit rate"

Specifies the maximum bit rate to be transferred, for UDP only.

If the bit rate is not reached, check the packet size.

Throughput Tab

The "Throughput" measurement reports the overall throughput at the DAU on IP level, in uplink and downlink direction. The entire packets are considered, including the IP header – in contrast to IPerf measurements that consider only the payload. Several mobile connections may contribute to the overall throughput.

Please note that the measured downlink data rate on IP level and the downlink data rate on lower layers might differ. The lower layers might not be able to transfer the set IP data rates and discard IP packets.

Fig. 6.8. Throughput tab

Results

The diagram shows the overall throughput on IP level vs. time.

"Uplink" indicates the received bit rate measured at the DAU while "Downlink" indicates the bit rate transmitted to the mobile. In downlink direction, the throughput values on RLC or MAC level can differ from those on IP level.

Below the diagram the measured/transmitted current, minimum and maximum throughput values are displayed.

IP Analysis Tab

The "IP Analysis" application monitors all IP traffic from or to the mobile. This includes traffic between the mobile and the DAU as well as traffic between the mobile and an external destination, passing the DAU.

Typical use cases are for example:

● Analysis of reasons for too low throughput. Use the "TCP Analysis" view to check the number of retransmissions, the TCP window size and the overhead.

● Debugging of applications, for example smartphone apps, by analysis of the IP traffic caused by the applications.

● In general: getting an overview / a statistical evaluation of all IP connections

Option R&S CMW-KM051 is required for "IP Analysis". Without this option, the "IP Analysis" tab is hidden.

The results of the "IP Analysis" application are displayed in an overview and several detailed views. The overview summarizes the most important results. If you are interested in a specific part of the overview, expand this part by opening the related detailed view.

Fig. 6.9. IP analysis tab

Opening another view

Use the "Select View" hotkey to display a specific detailed view or to return to the overview. Alternatively, you can select a part of the overview and press ENTER or the rotary knob to open the related detailed view.

Detailed View: Data Pie Charts

This view analyzes the amount of data transported since the measurement was started. The results are displayed as a list and as a pie chart. For each list entry, the transported data is given as absolute number and as percentage of the total transported data. All values indicate the sum of downlink and uplink. The list and the pie chart are linked. Select an entry in the list to highlight the corresponding part of the pie chart. At the top of the view, you can select a view variant "Data per …". The specifics of the individual categories are described in the following sections.

Data Per Connection

The view provides an overview of the amount of data transported via the individual connections since the measurement was started.

Fig. 6.10. Data per connection

The term "Remote Destination" refers to the partner destination of the mobile (mobile at one end of the connection, "Remote Destination" at the other end).

Data Per Protocol

The view provides an overview of the amount of data transported via the individual protocols since the measurement was started.

Fig. 6.11. Data per protocol

Network Impairments

Real networks are usually not perfect. IP packets may for example be lost, delayed, reordered, duplicated or corrupted. Such network impairments can be simulated by the DAU. They can be applied to all IP traffic from the DAU to a specific destination IP address. The destination can for example be a mobile or a host at the LAN DAU connector. The network impairments can be configured and controlled from all measurement tabs.
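A toy model of the loss and delay impairments, applied to a stream of sequence numbers (the real DAU applies them per bearer; the parameter names are my own):

```python
import random

# Simulate two of the impairments described above: random packet loss
# at a given rate and a fixed added delay for every delivered packet.

def impair(packets, loss_rate=0.1, delay_ms=50, seed=42):
    rng = random.Random(seed)        # seeded for reproducible test runs
    delivered = []
    for seq in packets:
        if rng.random() < loss_rate:
            continue                 # packet lost
        delivered.append((seq, delay_ms))  # packet delayed by a fixed amount
    return delivered

out = impair(range(100))             # impaired stream of 100 packets
```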

Fig. 6.12. Network impairments

The configuration table contains seven columns, each providing a set of impairments. A set of impairments is applied either to a specific default bearer, identified via the destination IP address, or to a specific dedicated bearer, identified via the destination IP address plus a port range.

To apply network impairments to a bearer, configure one of the columns according to your needs: Specify the destination IP address and the port range of the bearer and configure the impairments. Enable the column via the checkbox at the top and press the button "Impairm. ON" at the bottom of the dialog.

The DAU can simulate network impairments for any outgoing IP traffic, for example traffic from the DAU to a mobile or traffic from the DAU to a host reachable via the LAN DAU connector.

The DAU applications "IP Analysis" and "IP Logging" monitor the outgoing traffic after the network impairments have been applied. They have no information about the unimpaired traffic.

In order to recognize a retransmitted packet, the "IP Analysis" application must receive both the initially transmitted packet and the retransmitted packet.

If the initially transmitted packet gets lost after passing the application, the retransmitted packet is counted correctly by the "IP Analysis" application (for example packet loss on the RF connection to a mobile).

If the initially transmitted packet gets lost before passing the application, the retransmitted packet is counted as initial transmission, not as retransmission (for example packet loss rate configured as network impairment).

In the latter case you may want to use the "IP Logging" application to analyze the retransmissions, for example by checking for retransmission requests.
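The counting behaviour described above can be modeled in a few lines: a packet is classified as a retransmission only if the analyzer has already seen the same sequence number:

```python
# Toy model of the "IP Analysis" retransmission counting: a packet whose
# sequence number was already observed counts as a retransmission;
# otherwise it is recorded as the initial transmission.

def count_retransmissions(observed_seqs):
    seen, retrans = set(), 0
    for seq in observed_seqs:
        if seq in retrans_safe_view(seen):
            retrans += 1     # initial copy was observed -> counted correctly
        else:
            seen.add(seq)    # first sighting counts as the initial transmission
    return retrans

def retrans_safe_view(seen):
    """Trivial helper returning the set of already-observed sequence numbers."""
    return seen
```

If the initial copy of sequence 2 is lost before reaching the analyzer, only its retransmission arrives and is counted as an initial transmission, matching the behaviour described above.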

7. Streaming server

Streaming is a method of delivering video content over the Internet in a continuous flow, allowing viewers to begin watching while the remaining data is sent. The streaming server application that I used in this paper is Wowza.

The server is powered by an Intel® Core™ i5 M480 CPU @ 2.67 GHz (4 CPUs) with 4096 MB of DDR2 RAM and a 500 GB SATA hard drive for storage; the display adapter is an NVIDIA GeForce GT 425M, and the network interfaces are an Atheros AR9287 wireless adapter and a Marvell Yukon 88E8057 PCI-E Gigabit Ethernet controller. It runs the Windows 7 Ultimate 64-bit operating system.

“Wowza Media Systems enables organizations to harness the power of streaming by reducing the complexities of video and audio delivery to any device.

Fortune 100 companies, small to mid-sized businesses, leading Content Delivery Networks (CDNs), educational institutions, and government entities in more than 150 countries depend on leading-edge Wowza software to build, deploy, and manage streaming solutions for the delivery of high-quality and engaging live and on-demand experiences.”

Wowza supports simultaneous playback on multiple types of clients and devices: Adobe Flash Player, Microsoft Silverlight Player, Apple QuickTime Player, and devices such as Apple mobile phones, game consoles and popular STBs.

The great advantage of Wowza servers is that they do not require specific encoders or players to run, unlike other streaming platforms.

For my tests, I used MPEG-DASH and RTSP players. Dynamic Adaptive Streaming over HTTP (MPEG-DASH) is a chunk-based streaming technology that uses HTTP for delivery and is similar to proprietary adaptive streaming technologies such as Adobe HDS, Apple HLS and Microsoft Smooth Streaming. The streaming engine performs all the media chunking and packaging necessary to deliver a stream using this technology.
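The rate-adaptation idea behind chunk-based streaming can be sketched as follows: per chunk, the client picks the highest advertised representation whose bit rate fits the measured throughput. The list of representations here is illustrative, not taken from an actual MPD:

```python
# Minimal sketch of MPEG-DASH-style rate adaptation. The representation
# bit rates are illustrative placeholders, not values from a real manifest.

REPRESENTATIONS_KBPS = [570, 1157, 2300]  # bit rates the server advertises

def pick_representation(measured_throughput_kbps):
    """Return the highest bit rate not exceeding the measured throughput,
    falling back to the lowest representation when none fits."""
    candidates = [r for r in REPRESENTATIONS_KBPS
                  if r <= measured_throughput_kbps]
    return max(candidates) if candidates else min(REPRESENTATIONS_KBPS)
```

Real DASH clients add buffer-level heuristics and smoothing on top of this basic throughput rule, but the downshift-on-congestion behavior observed later in the measurements follows directly from it.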

The Real Time Streaming Protocol (RTSP) establishes and controls one or more synchronized streams of audio or video data; it acts as a network remote control for multimedia servers. RTSP itself is connectionless: instead of a persistent connection, the server maintains a session associated with an identifier. In most cases RTSP uses TCP for player control data and UDP for the audio and video data, although TCP may also be used for the media if necessary. During an RTSP session, a client can open and close several transport links to the server to satisfy the needs of the protocol.

Intentionally, the protocol is similar in syntax and operation to HTTP, so that most extension mechanisms added to HTTP can, in many cases, also be added to RTSP. However, RTSP differs from HTTP in a significant number of aspects:
     RTSP introduces new methods and has a different protocol identifier.
     An RTSP server needs to maintain session state, unlike HTTP.
     Both the server and the client can issue requests.
     The media data is transported by a different protocol.
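The HTTP-like syntax is easy to see by constructing an RTSP request as plain text. The URL below is a placeholder, not a real Wowza endpoint, and the helper function is an illustration, not part of any library:

```python
# Sketch showing how close RTSP/1.0 request syntax is to HTTP: a request
# line, headers, and a blank line. The URL is a hypothetical placeholder.

def rtsp_request(method, url, cseq, extra_headers=None):
    """Format an RTSP/1.0 request; the RTSP-specific parts are the
    rtsp:// scheme, the protocol identifier, and the CSeq header."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for key, value in (extra_headers or {}).items():
        lines.append(f"{key}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"

req = rtsp_request("DESCRIBE", "rtsp://example.local/vod/movie.mp4", 1,
                   {"Accept": "application/sdp"})
```

A session typically proceeds through DESCRIBE, SETUP and PLAY requests built in exactly this shape, with the server echoing the CSeq value in each response.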

8. Video quality estimation module

“Recommendation ITU-T P.1201.1 specifies the algorithmic model for the lower resolution application area of Recommendation ITU-T P.1201. The ITU-T P.1201 series of Recommendations specifies models for monitoring the audio, video and audiovisual quality of IP-based video services based on packet header information.”

In this paper, I used only the video model to determine the MOS of MPEG-DASH and RTSP videos. The video quality estimation module is shown in the next figure:

Fig. 8.1. Video quality estimation module

Depending on the degradation type, there are three video quality estimation elements which determine the final video MOS:

Video quality due to compression (V_MOSC)

Video quality due to packet-loss (V_MOSP)

Video quality due to rebuffering (V_MOSR)

Video quality due to compression (V_MOSC) is calculated with the formula given in the Recommendation; the resulting score is bounded between 1 (worst quality) and 5 (best quality). The formula uses:

the normalized video bit rate, a modified value of the video bit rate, in kbit/s;

the video content complexity factor, which describes the content's spatio-temporal complexity, has a maximum value of 1 and an initial value of 0.5, and is calculated from the average number of bytes per I-frame;

coefficients specified in the tables of the Recommendation.

Video quality due to packet loss (V_MOSP) is calculated from the video quality distortion due to packet loss (V_DP), for which ITU-T P.1201.1 specifies pseudocode with two cases: one formula when the degradation mode is slicing (there are no rebuffering events) and a different formula otherwise. The inputs are:

the average impairment rate per video frame (the sum of the impairment rates per video frame divided by the number of damaged video frames),

the impairment rate of the video stream, and

the video packet-loss event frequency,

together with coefficients specified in the tables of the Recommendation.

Video quality due to rebuffering (V_MOSR) is calculated from the video quality distortion due to rebuffering, which is a function of:

the number of rebuffering events, and

the average rebuffering length.

The final MOS is calculated as shown below:

V_MOS = V_MOSC (no packet loss and no rebuffering)

V_MOS = V_MOSP (packet loss and no rebuffering)

V_MOS = V_MOSR (rebuffering)
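The case selection can be written as a short sketch. It assumes that rebuffering dominates the score whenever it occurs, even without packet loss, which the three cases in the text do not spell out explicitly; the numeric inputs are placeholders:

```python
# Sketch of the final-MOS selection rule: the score associated with the
# dominant degradation type is reported. The assumption that rebuffering
# takes precedence over packet loss is illustrative.

def final_video_mos(v_mosc, v_mosp, v_mosr, has_loss, has_rebuffering):
    """Pick the final V_MOS from the per-degradation scores."""
    if has_rebuffering:
        return v_mosr      # rebuffering present
    if has_loss:
        return v_mosp      # packet loss, no rebuffering
    return v_mosc          # compression only
```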

9. Hardware configuration

Fig. 9.1. Hardware configuration

This configuration was used to perform all tests on the two types of streaming in order to determine the relation between QoS and QoE. The CMW500 base station is connected via Ethernet to the Wowza video server and via an LTE RF signal to the mobile phone, which is used to view the videos.

To establish the connection with the Wowza server and with the mobile device, I configured all of the equipment with IP addresses from the same network, as follows:

192.168.1.117/24 for DAU

192.168.1.32/24 for Wowza server

192.168.1.19/24 for phone

The following pictures illustrate the steps required to connect the mobile phone to the base station:

Fig. 9.2. LTE Parameter setup

Fig. 9.3. LTE signal generation

Fig. 9.4. Finding the network by the mobile phone and realizing the connection with the base station

Fig. 9.5. Finding the network by the mobile phone and realizing the connection with the base station

Although the phone is connected to the base station, it cannot access the video server until a virtual private network (VPN) is configured. This allows the mobile phone to send and receive data over the public network to which it is connected, i.e., LTE.

10. Results and discussions

The purpose of the measurements is to compare the two streaming protocols and to highlight how network impairments affect video quality, as quantified by the calculated MOS.

For these tests we chose two videos, each with a different bit rate: a cartoon and a movie. The cartoon's bit rate is 570 kbps and the movie's bit rate is 1157 kbps, both at 25 frames/second. These were tested using both streaming technologies (MPEG-DASH and RTSP) under various network conditions. With the CMW500 network equipment we introduced different levels of packet loss (0%, 0.5%, 1.5%, 3%, 6%) and delay (0 ms, 25 ms, 100 ms, 200 ms and 400 ms) to observe the impact on video quality.
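The measurement campaign described above is a full grid over the impairment levels; a short sketch makes the size of that grid explicit. The clip bit rates and impairment levels are taken from the text:

```python
# Enumerate every (protocol, clip, packet loss, delay) combination tested.
from itertools import product

PROTOCOLS = ["MPEG-DASH", "RTSP"]
CLIPS = {"cartoon": 570, "movie": 1157}   # clip bit rates in kbps
LOSS_PCT = [0, 0.5, 1.5, 3, 6]            # configured packet loss levels
DELAY_MS = [0, 25, 100, 200, 400]         # configured delay levels

test_cases = list(product(PROTOCOLS, CLIPS, LOSS_PCT, DELAY_MS))
# 2 protocols x 2 clips x 5 loss levels x 5 delays = 100 test cases
```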

Before presenting the measurement graphs, it is important to observe which transport-layer protocols are used by each player. With the Wireshark program I observed that MPEG-DASH uses TCP, while RTSP uses UDP.

Fig 10.1. Capture from Wireshark: MPEG DASH using TCP

Fig 10.2. Capture from Wireshark: RTSP using UDP

It is known that TCP (Transmission Control Protocol) is connection-oriented, meaning that a handshake is used to initialize communication between systems. The main features of TCP are:

Reliability: TCP manages acknowledgments, retransmissions and time-outs. It makes several attempts to deliver a message; if something is lost on the way, the receiver asks again for the lost part. There is no middle ground: either every packet arrives, or the connection is considered lost.

Ordering: if two messages are sent in succession, they will be received in the order they were sent. If packets arrive out of order, TCP stores the out-of-order data until all packets have arrived, then reorders and delivers them to the application.

Streaming: data is read as a stream of bytes; there are no indicators showing segment boundaries. TCP exchanges three packets (the three-way handshake) to initialize a socket connection, and only then can data be sent. TCP also handles reliability and congestion control.

UDP is a simple, connectionless protocol. Connectionless protocols do not initialize a dedicated connection between the endpoints; communication is done by transmitting information in one direction without checking the status or availability of the receiver. However, a strong advantage of UDP is speed. UDP's features are:

Unreliability: when a message is sent, it is not known whether it will reach its destination; it may be lost on the way. There are no acknowledgment, retransmission or timeout mechanisms.

No ordering: if two successive messages are sent to the same receiver, the order in which they arrive cannot be predicted. Operation is datagram-based: packets are sent individually and are checked for integrity only if they arrive at their destination. Packets have well-defined boundaries.

No congestion control: UDP does not avoid congestion by itself, so high-speed applications can lead to congestion collapse if congestion-control methods are not implemented at the application level.
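The connectionless, datagram-oriented model described above can be demonstrated with a minimal loopback example: no handshake is performed, the sender simply transmits, and the receiver reads whole datagrams with well-defined boundaries.

```python
# Minimal loopback demonstration of UDP's connectionless model: no
# connect() handshake, no acknowledgment, one datagram per read.
import socket

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))           # let the OS pick a free port
recv_sock.settimeout(2)                    # avoid blocking forever
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"frame-1", addr)         # fire and forget, no ACK expected

data, _ = recv_sock.recvfrom(2048)         # one datagram, one read

send_sock.close()
recv_sock.close()
```

On the loopback interface the datagram arrives reliably, but over a lossy RF link nothing in this code would detect or repair a lost datagram, which is exactly why the RTSP video quality degrades under packet loss in the measurements below.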

In this paper we have tried to emphasize the connection between the three levels of QoS: network QoS, application QoS and user QoS (QoE). "To characterize the relationship between network QoS and application QoS, previous works [17], [21] performed analytical studies to model the video streaming performance. An algorithm was proposed to estimate the receiver buffer requirement based on the model in [17]." The specific application-QoS parameters recorded during the tests are the initial buffering time and the number of rebuffering events encountered. The relationship between the two levels of QoS is highlighted in the following graphics, which express how variations of delay and packet loss affect the application-QoS parameters.

With the Rohde & Schwarz CMW500 equipment we measured the throughput for each test under the influence of different packet loss and delay values and noticed that it decreases as the network impairments increase.

Fig. 10.3.1. Packet Loss Rate – Throughput; Delay – Throughput (cartoon)

Fig. 10.3.2. Packet Loss Rate – Throughput; Delay – Throughput (movie)

Fig. 10.4.1. Packet Loss Rate – Initial Time (s); Packet Loss Rate – Frequency of Rebuffering Events (cartoon)

Fig. 10.4.2. Packet Loss Rate – Initial Time (s); Packet Loss Rate – Frequency of Rebuffering Events (movie)

Fig. 10.5.1. Delay – Initial Time (s); Delay – Frequency of Rebuffering Events (cartoon)

Fig. 10.5.2. Delay – Initial Time (s); Delay – Frequency of Rebuffering Events (movie)

In the charts above it can be observed how throughput is affected by network impairments. It can also be noticed that the throughput is inversely proportional to the initial buffering time and the rebuffering frequency. When the throughput falls below the video bit rate, the initial time increases greatly under the influence of network impairments compared to the baseline value (measured with 0% packet loss and 0 ms delay). When the throughput drops below 570 kbps (for the cartoon clip) or 1157 kbps (for the movie clip), the first rebuffering events appear.
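The threshold behavior observed above can be stated as a one-line rule: a stream stalls once sustained throughput drops below the clip's bit rate. The bit rates used here are those of the two test clips:

```python
# Sketch of the observed rebuffering condition: playback consumes data at
# the clip's bit rate, so a sustained throughput below it must stall.

CLIP_BITRATE_KBPS = {"cartoon": 570, "movie": 1157}

def expect_rebuffering(throughput_kbps, clip):
    """True when arriving data cannot keep up with the playback rate."""
    return throughput_kbps < CLIP_BITRATE_KBPS[clip]
```

This also explains why the movie clip rebuffers first as impairments grow: its higher bit rate means the throughput crosses its threshold earlier.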

Not only is the initial buffering time influenced by both delay and packet loss, but a higher number of rebuffering events, along with a slower starting time, is also attributed to the larger-bit-rate video. These particularities apply only to DASH streaming, whereas no application-QoS parameter is noticeably affected when RTSP is used.

The difference between MPEG-DASH and RTSP is that the transfer rate in the case of adaptive streaming (DASH) automatically changes in response to the transfer conditions: if the receiver is not able to keep up with a higher data rate, the sender drops to a lower data rate and quality. RTSP streaming, by contrast, is carried over UDP and designed specifically for real-time transfers. Its throughput is not greatly influenced by growing packet loss and delay, but, unlike with MPEG-DASH, the video quality suffers greatly; this happens because UDP does not attempt to retransmit damaged packets. An advantage of this type of streaming is that the initial buffering time is very short and does not change under the influence of network impairments.

Fig. 10.6. Packet Loss Rate – MOS; Delay – MOS

It can generally be observed in the graphs above that the smaller the amount of dropped packets, the better the MOS, and hence the better the user's perception of the received video quality. Based on ITU-T P.1201.1 (10/2012), we calculated the MOS for both streaming protocols. In the case of MPEG-DASH we obtained better values than for RTSP. Even though in every test we introduced a higher amount of packet loss and delay, the video streamed with MPEG-DASH had a better quality than expected. The only negative effect that the network impairments had on the video was on the initial buffering time. Although it retained its image quality, beyond a certain threshold, when packet loss and delay were high, the video became almost impossible to watch given the high number of rebuffering events that appeared. The duration of a rebuffering event is directly proportional to the initial buffering time. In the case of RTSP, network impairments did not affect the initial buffering time, but the image quality suffered under the influence of packet loss. In the graphs we can see that delay does not have as big an impact on the MOS as packet loss does. The low MOS in the case of RTSP is explained by the multiple rebuffering events. The advantage of this streaming protocol is that the video can run even under the influence of 6% packet loss and 400 ms delay, but with very poor quality. In contrast, the video streamed using MPEG-DASH remains blocked under these values, because a rebuffering event can last up to one minute.

Given that the connection between the mobile phone and the CMW500 was implemented using LTE, we considered it important to check whether the LTE parameters have an impact on the MOS.

The first parameter we tested was the channel bandwidth. For the operating bands we chose, we created scenarios using 5 MHz and 20 MHz channel bandwidths. Although a single video transmission takes up very little of the channel capacity, it can be noticed that the video quality is slightly affected by this parameter, for both MPEG-DASH and RTSP. Note that these tests were done under the influence of different values of packet loss and delay.

Fig. 10.7. Cell bandwidth – MOS

Another important parameter we analyzed was modulation. LTE uses three types of modulation: QPSK, 16-QAM and 64-QAM. For testing we chose only two of the three types of modulation, namely QPSK and 64-QAM.

QPSK modulation (2 bits/symbol) is robust but less efficient, so less information is transmitted. It has four symbols, a single amplitude and four different phases, so 2 bits can be coded with one symbol. Because it is the least susceptible to interference, it is used at the edge of the cell, where the signal is weak.

64-QAM modulation (6 bits/symbol) is less robust, but more efficient than the others. It has 64 symbols, 9 different amplitudes and 52 phases.

Lower-order modulations such as QPSK do not require as large a signal-to-noise ratio, but cannot send data as fast. Only when there is a sufficient signal-to-noise ratio can a higher-order modulation format be used.
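This link-adaptation trade-off can be sketched as a simple threshold rule. The SNR thresholds below are hypothetical illustration values, not figures from the 3GPP specifications or from the CMW500 configuration:

```python
# Illustrative sketch of SNR-driven modulation selection: higher-order
# modulation is only usable above an SNR threshold. Threshold values
# are assumptions for illustration, not standardized numbers.

def select_modulation(snr_db):
    if snr_db >= 18:       # assumed threshold for 64-QAM
        return "64-QAM"    # 6 bits/symbol, most efficient
    if snr_db >= 10:       # assumed threshold for 16-QAM
        return "16-QAM"    # 4 bits/symbol
    return "QPSK"          # 2 bits/symbol, most robust
```

In a real LTE network the eNodeB performs this adaptation continuously based on channel quality reports, which is why cell-edge users typically end up on QPSK.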

Fig. 10.8. Modulation – MOS

11. Conclusions
