University "Politehnica" of Bucharest
[anonimizat] a Decentralized Cloud Platform using 5G [anonimizat]
S.l. Dr. Ing. [anonimizat]
2016
Copyright © 2016, Codrin-Alexandru Burlă / Beia Consult International
All rights reserved.
The author hereby grants to UPB permission to reproduce and to distribute publicly paper and electronic copies of this thesis document in whole or in part.
Abbreviation list
5G = 5th generation mobile networks
ADC = Analog to Digital Converter
API = Application Programming Interface
BPSK = Binary phase shift keying
BTS = Base transceiver station
CD = Compact Disk
CDMA = Code division multiple access
CPU = Central Processing Units
CRC = Cyclic redundancy check
CRM = Customer Relationship Management
DAC = Digital to Analog Converter
DDC = Digital Down Converter
DSL = Digital Subscriber Line
DSP = Digital Signal Processors
DUC = Digital Up Converter
DVB = Digital Video Broadcasting
ERP = Enterprise resource planning
EU = European Union
FDM = Frequency-division multiplexing
FFT = Fast Fourier Transform
FPGA = Field-Programmable Gate Arrays
GPRS = General Packet Radio Service
GPS = Global Positioning System
GPSDO = Global Positioning System disciplined oscillator
GPU = Graphic Processing Units
GSM = Global System for Mobile Communications
GUI = Graphical User Interface
HA = Home agent
HetNet = Heterogeneous Network
HSPA = High Speed Packet Access
IaaS = Infrastructure as a Service
IF = Intermediate frequency
IFFT = Inverse Fast Fourier Transform
IFIN-HH = Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering
IoT = Internet of Things
IP = Internet Protocol
ISM = Industrial, Scientific and Medical
IT = Information Technology
ITU = International Telecommunication Union
LAN = Local Area Network
LTE = Long Term Evolution
M2M = Machine to Machine
MCC = Mobile Cloud Computing
METIS = Mobile and wireless communications Enablers for Twenty-twenty Information Society
MIMO = Multiple-Input Multiple-Output
MTC = Machine Type Communications
NAS = Network-Attached Storage
NAT = Network Address Translation
NCO = Numerically Controlled Oscillator
NFC = Near Field Communication
NFV = Network function virtualization
NIST = National Institute of Standards and Technology
OFDM = Orthogonal Frequency-Division Multiplexing
OS = Operating System
P2P = Peer to Peer
PaaS = Platform as a Service
PCS = Personal communications service
QoS = Quality of Service
QPSK = Quadrature Phase Shift Keying
R&D = Research and Development
RF = Radio Frequency
RTU = Remote Terminal Unit
SaaS = Software as a Service
SAN = Storage Area Networks
SD = Secure Digital
SDN = Software Defined Network
SDR = Software Defined Radio
SPI = Service-platform-infrastructure
SRAM = Static Random Access Memory
SWIG = Simplified Wrapper and Interface Generator
TCP = Transmission Control Protocol
TCXO = Temperature Compensated Crystal Oscillator
UHD = USRP Hardware Driver
UMTS = Universal Mobile Telecommunications System
USB = Universal Serial Bus
USRP = Universal Software Radio Peripheral
VM = Virtual Machine
WCDMA = Wideband Code Division Multiple Access
WLAN = Wireless Local Area Network
XCP = Xen Cloud Platform
Introduction
Cloud Computing, in simple terms, is accessing and storing information over the Internet from any computer in any remote location, instead of accessing it on our own computer's storage. Computing involves storing data or running programs, but for it to be called cloud computing we need to access that information or those programs via the Internet.
The National Institute of Standards and Technology (NIST) gives the formal definition of cloud computing: "Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."
Cloud Computing is motivated by the fact that information can be processed more efficiently using large farms of computers and storage space over the Internet. "The cloud is just a metaphor for the Internet."
Cloud Computing is a platform that can be used to expand mobile device applications. Mobile devices are constrained by processing power and storage, while cloud computing provides theoretically infinite computing resources. Mobile Cloud Computing (MCC) is a concept that combines mobile devices and cloud computing, creating a new infrastructure where the cloud performs computing tasks and stores huge amounts of data. The three main technologies involved in Mobile Cloud Computing are: Mobile Computing, Wireless Networks and Cloud Computing.
With the exponentially increased capabilities of 5th generation (5G) mobile networks, MCC will become even more powerful and is anticipated to develop to such an extent that it will change people's lifestyles and patterns. As of today there are over one trillion connected devices that can benefit from cloud-based applications. "The evolution towards 5G is considered to be the convergence of Internet services with legacy mobile networking standards, leading to what is commonly referred to as the 'mobile Internet' over Heterogeneous Networks (HetNets), with very high connectivity speeds."
General Aspects of Cloud Computing
Purpose of Cloud Computing
Modern computing today involves a multitude of devices, including desktops, laptops, tablets and smartphones, which are used to access data stored at a remote location via the Internet. Typical applications include e-mail, social networks, video streaming websites, etc.
To accomplish the above scenario, we need an infrastructure consisting of hardware, networking devices, storage and software. The user would have to buy these resources, assign physical space to them, maintain them and keep them operational, all of which implies additional cost. These requirements grow dramatically when dealing with enterprise solutions.
"Cloud computing is a mechanism of bringing–hiring or getting the services of the computing power or infrastructure to an organizational or individual level to the extent required and paying only for the consumed services." Therefore, cloud computing is a very efficient way of reducing operating costs.
Another advantage of cloud computing is that even if the physical device (e.g. a laptop) is lost, our data is not, as it is safely located at a remote site. In addition, security measures can be enforced when accessing these remote locations. Figure 1.1 represents various cloud computing applications.
Figure 1.1 – Cloud Computing Applications
Cloud Computing replaces the physical storage media of the past, such as USB flash drives and CDs, which had to be carried to another physical location. Saving a file in the cloud enables us to access it on any computer with Internet connectivity and makes it very easy to share it with other people and collaborate.
Cloud Computing Fundamentals
The formal definition enacted by the National Institute of Standards and Technology (NIST): "Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models" is not complete without recalling those essential characteristics, service models and deployment models.
Essential characteristics of Cloud Computing:
NIST defines five characteristics of Cloud Computing:
On-demand self-service: Storage and computing power is allocated automatically as per user requirements
Broad network access: Resources are available to many types of devices (smartphones, tablets, laptops, desktops, etc.)
Resource pooling: The provider’s resources are shared with multiple consumers. Resources are assigned dynamically depending on consumer usage
Rapid elasticity: The cloud network is fully scalable, meaning that capabilities can be easily expanded such that, from the point of view of the consumer, the resources appear unlimited
Measured service: Usage of resources is monitored and controlled continuously. Statistics regarding usage and consumption are available for every customer and can be used for billing purposes.
“A cloud infrastructure is the collection of hardware and software that enables the five essential characteristics of cloud.” The five characteristics are depicted in the figure below:
Figure 1.2 – The essential characteristics of cloud computing
Cloud Deployment Models
Deployment Models show how the cloud service can be made available to the end user, taking into account user location and type of users that have access to it.
Private Cloud: Cloud destined for exclusive use by a single entity. It can be managed by the entity or by a third party and it can be located anywhere.
Public Cloud: Cloud destined for general public use. It can be managed by any entity, and it is located at the cloud provider.
Community Cloud: Cloud shared by multiple entities having a common domain of activity. It does not have specific management or location.
Hybrid Cloud: A combination of at least two of the above models (private, community, or public), connected by a common standard or technology.
Cloud Service Models
NIST describes three kinds of service models that are available to end users of cloud computing applications:
Software as a Service (SaaS): The consumer can access applications running on the provider's infrastructure (virtual machines, servers, storage). The application can be accessed through a web browser or a dedicated program. Typical examples of applications offered as a service are ticketing software, reporting tools and monitoring tools. Example of a SaaS offering: Microsoft Dynamics CRM.
Platform as a Service (PaaS): The consumer has access to a cloud infrastructure that they can use to deploy proprietary or acquired applications. The provider offers services and tools to support this, and is responsible for the entire physical infrastructure (servers, storage, etc.) and for maintenance. Examples of PaaS providers: Google Play Store, Apple App Store.
Infrastructure as a Service (IaaS): The provider offers storage, servers, computing power and other infrastructure resources to the consumer. The consumer can deploy any software on the provided infrastructure including applications and virtual machines. The provider manages and maintains the entire cloud infrastructure, but the consumer can have access to the network configuration (e.g. firewall). Example of IaaS provider: Amazon Web Services.
These services compose the service-platform-infrastructure (SPI) model.
Cloud Computing Architecture and Structure
The Cloud Architecture describes the way in which the cloud works, including the components and services that are used. The cloud is a technology that is completely dependent on an Internet connection to function. The cloud can be divided into four layers:
Layer 1 – User/Client Layer
The first layer represents the place where the client/user initiates the connection to the cloud. The client can be any device that can access a web application.
Layer 2 – Network Layer
This layer facilitates the connection of the user to the cloud and represents the whole infrastructure used to establish that connection. In the case of a public cloud the infrastructure can be the Internet; when a private cloud is used, a Local Area Network (LAN) can provide access to the cloud in a secure way.
Layer 3 – Cloud Management Layer
This layer represents all software used in managing a cloud. The software is typically an operating system (OS) that acts as an interface between the data center (where the actual resources are) and the user, or it can be dedicated management software.
Layer 4 – Hardware Resource Layer
The Hardware Resource Layer represents the actual hardware resources available, for example a data center. A data center is a location for the storage, dissemination and management of information. A data center implies a high-speed network and an interface for transferring data to and from it. A cloud can involve multiple data centers, and multiple clouds can share the same data center.
Any cloud application must respect the above architecture in order to function.
Figure 1.3 – Cloud Architecture
Cloud Computing Advantages & Disadvantages
Supporting a large computing infrastructure has become more difficult over the years. Users found it difficult to locate a system capable of running a given application. Upgrading and downgrading systems based on workload, as well as recovering after a system crash, was quite challenging for the end user.
On the other hand, providers found it difficult to manage and maintain a large amount of physical equipment while at the same time providing a guaranteed QoS (Quality of Service). This involved high cost both for the provider and the end user.
Cloud Computing solves all of the issues mentioned above. Cloud Computing implies low infrastructure and maintenance cost for the provider, but also for the user that is being billed only for the used infrastructure. “Users benefit from the potential to reduce the execution time of compute-intensive and data-intensive applications through parallelization. “ The workload can be split among available servers in the cloud, thus improving performance greatly.
Applications not suited for cloud computing are in general applications that need to communicate between parallel threads of execution, so their workload cannot be split among different servers. "Communication and memory-intensive applications may not exhibit the performance levels shown when running on supercomputers with low latency and high-bandwidth interconnects."
Cloud Computing Advantages
Accessibility
The main benefit that end users observe when using Cloud Computing is accessibility. If our data and applications are "on the cloud" and not on some local server or flash storage, we can access and modify them from any location. Cloud Computing also enables easy sharing of information with business partners or third parties.
Significantly reduced costs
Cloud Computing implements a "pay as you go" system: the customer or user is billed according to exactly how many resources or how much computing power has been used. Thus the initial investment in infrastructure, along with maintenance and operation costs, is the responsibility of the provider or vendor.
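As a toy illustration of this metered billing model, with entirely hypothetical unit rates and resource names, a few lines of Python are enough to turn measured usage into a monthly bill:

# "Pay as you go": the customer is charged only for what was metered.
# All rates and resource names below are made up for illustration.
UNIT_RATES = {
    "vcpu_hours": 0.04,         # price per virtual CPU hour
    "storage_gb_months": 0.02,  # price per GB stored per month
    "egress_gb": 0.09,          # price per GB of outbound traffic
}

def monthly_bill(usage: dict) -> float:
    """Sum each metered resource amount times its unit rate."""
    return sum(UNIT_RATES[resource] * amount
               for resource, amount in usage.items())

# A small VM running all month with modest storage and traffic:
print(monthly_bill({"vcpu_hours": 720, "storage_gb_months": 50, "egress_gb": 10}))
# -> 30.7  (720*0.04 + 50*0.02 + 10*0.09)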
Costs involving personnel are greatly reduced, since there is no more need for technicians to maintain and run the data centers, or for staff to manage server and equipment procurement. Utility bills shrink as well, as there is no server power and cooling to pay for. Personnel in charge of tracking licenses and leases are now on the payroll of the service provider.
Hardware such as servers have a limited lifespan (usually four to five years) and renewal falls in the responsibility of the provider.
“Low monthly costs, plus the convenience of having someone else update, patch, troubleshoot and maintain your cloud computing infrastructure, are what make managed services so appealing.”
Management of resources
Cloud Computing manages resources efficiently by allocating them dynamically. Providers make resources available on demand: customers get as much computing power as they need and are billed accordingly. This makes Cloud Computing very cost-effective.
Scalability
Scalability is another great benefit of Cloud Computing, giving access to theoretically infinite resources from the point of view of the user. Operational and maintenance costs are shifted entirely to the provider, which also brings a certain level of reliability.
Virtualization and disaster recovery
Cloud Computing makes full use of virtualization features. Instead of having to move data from physical servers, data can be restored or backed up automatically. “Separate backup systems, with cloud disaster recovery strategies, provide another layer of dependability and reliability.”
Cloud Computing as a green alternative
The use of paper is strongly discouraged by Cloud Computing, and the hardware required on the premises is reduced to a minimum. Another aspect that makes Cloud Computing a green alternative is that not only does it reduce power consumption by using less hardware, but it also "can reduce the environmental impact of building, shipping, housing, and ultimately destroying (or recycling) computer equipment as no one is going to own many such systems in their premises and managing the offices with fewer computers that consume less energy comparatively"
Flexibility
Another benefit of a Cloud Computing architecture is flexibility: the ability to make fast changes and to adapt to changes in network and structure. A new solution can be implemented rapidly and easily in a Cloud Computing environment without impacting the customer's resources, as the provider has the responsibility of creating and managing the infrastructure.
Cloud Computing Drawbacks
Dependency on Internet connectivity
The main drawback of Cloud Computing is obvious: dependency on Internet connectivity. Without Internet connectivity to the Cloud, we lose the link to our data and applications.
Security
Another challenge of Cloud Computing is security. Our entire data and applications are stored at a remote location (cloud provider) and security compliance is totally dependent on the provider’s policy.
Drawbacks of scalability
Scalability is a great benefit, but at the same time the user does not have control of these resources as they are owned by the provider.
Availability and compatibility of services and applications
Users may face limitations on the provider's side with regard to available services, operating systems and infrastructure options. Some development tools may not be available through the Cloud simply because the Cloud provider is not able to implement such solutions.
A major drawback of Cloud Computing is incompatibility of applications: certain applications may not be able to share data and communicate in a Cloud Computing environment.
Reliability
Reliability is a major concern in Cloud Computing, because we are dealing with a large number of nodes computing at the same time. The failure probability of every node now adds to the failure probability of the whole process.
Latency
Latency (the time it takes a packet to travel from one point to another) is a concern for certain applications. Applications that require very low latency (a few milliseconds) will surely not migrate to a Cloud Computing scheme. Latency value and stability depend on Internet connection performance and also on the destination (usually a good latency is achieved for locations situated at short distances).
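For intuition, the latency toward a given destination can be estimated, for example, as the round-trip time of a TCP connection setup. The following Python sketch prints such a measurement; the host and port are only examples:

# Rough latency estimate: time a TCP handshake to a remote host.
import socket
import time

def rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass                                    # connect, then close immediately
    return (time.perf_counter() - start) * 1000

print(f"{rtt_ms('example.com'):.1f} ms")        # tends to grow with distance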
Logging and troubleshooting
Logging in a Cloud Computing environment must be done for every software instance of the respective node. "Logging is typically done using instance storage preserved only for the lifetime of the instance." Troubleshooting is consequently harder, because it is very difficult to identify the source of an error in a Cloud Computing environment that contains a multitude of nodes.
Migration is not always easy
A user’s architecture might be incompatible with the architecture of the Cloud provider. An application might depend on sensitive internal databases, or might not be supported by the Cloud provider. Also, because of network or structural complexity, migration might involve a long downtime for the user and for the user’s clients. As such, migration won’t be an easy decision to make.
Virtualization – Core technology of Cloud Computing
Definition and Implementation
“Virtualization is a proven software technology that makes it possible to run multiple operating systems and applications on the same server at the same time. It’s transforming the IT landscape and fundamentally changing the way that people utilize technology.” In other words, “Virtualization is a technology that enables the single physical infrastructure to function as a multiple logical infrastructure or resources.” Virtualization does not deal only with hardware but it can involve: networks, operating systems, applications, processors.
Practically, virtualization is a way to reduce costs drastically, and it introduces an easier way to manage resources such as computing power.
Virtualization enhances security and performance through the isolation of services. Applications can migrate easily between platforms, which improves reliability and performance. As shown in the sections below, there are multiple forms of virtualization: processor virtualization, memory virtualization, storage virtualization, network virtualization, data virtualization and application virtualization.
A classical computing scenario is shown in Figure 1.4, while a computing scenario improved by virtualization is shown in Figure 1.5.
Figure 1.4 – Computing System
Figure 1.5 – Computing System after virtualization
Advantages and Disadvantages of using Virtualization technology
The main advantage of adopting virtualization is efficiency of resource management and utilization. As such, virtualization is more profitable, achieving the same results while being cheaper than a physical infrastructure in terms of initial investment, maintenance and operation costs. It provides easier administration of resources, makes resource sharing possible and improves disaster recovery.
The main disadvantage of virtualization is reliability, as it introduces a single point of failure. The infrastructure demands will always be high-end in order to keep up with the latest technologies that users adopt. In some scenarios virtualization leads to lower performance, being capable of "virtualizing" only a limited range of functions, devices or applications. Virtualization also means having personnel with high IT proficiency, which implies higher costs.
Forms of Virtualization
Different resources that are part of the Cloud can be virtualized. The sections below give a few examples of virtualization.
Processor Virtualization
Virtual processors are created from physical processors found in the base infrastructure. Virtualization enables Virtual Machines to share those virtual processors based on requirements and demands. Processor virtualization can also be implemented on multiple servers. Figure 1.6 shows an example of processor virtualization.
Figure 1.6 – Processor virtualization
Memory virtualization
Memory virtualization means combining all available physical memory into a virtual main memory available to the Virtual Machines. This is done by mapping physical memory to virtual memory: "The main idea of main memory virtualization is to map the virtual page numbers to the physical page numbers."
Another practice used in data centers is that all unused memory across virtual servers is combined into a virtual memory pool and provided to the Virtual Machines. An example of memory virtualization is presented in Figure 1.7.
Figure 1.7 – Memory virtualization
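The page-number mapping quoted above can be made concrete with a small Python sketch; the page table contents and page size below are toy values, not taken from any real hypervisor:

# Minimal sketch of memory virtualization: a per-VM "page table"
# translates virtual page numbers to physical page numbers.
PAGE_SIZE = 4096  # bytes

# Hypothetical page table for one VM: virtual page -> physical page
page_table = {0: 17, 1: 3, 2: 42}

def virtual_to_physical(vaddr: int) -> int:
    """Translate a virtual address to a physical address."""
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    ppage = page_table[vpage]          # a missing key would be a "page fault"
    return ppage * PAGE_SIZE + offset

print(hex(virtual_to_physical(0x1004)))  # virtual page 1 -> physical page 3 -> 0x3004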
Storage Virtualization
Storage virtualization uses the same mechanism as memory virtualization: physical storage disks are combined into a pool of virtual storage disks, which is provided to the Virtual Machines.
Storage virtualization is essential in backup applications. It ensures that storage resources are managed efficiently by making all free storage space available to the Virtual Machines involved. Common storage virtualization methods are network-attached storage (NAS) and storage area networks (SAN).
An example of storage virtualization is illustrated in Figure 1.8.
Figure 1.8 – Storage Virtualization
Network Virtualization
Network Virtualization represents transforming the physical network into a virtual network. Physical network devices like routers and switches are referred to as virtual network components and are controlled by virtualization software. Network virtualization can be applied to an internal network or used to bridge several external networks. An important aspect of network virtualization is that it allows Virtual Machines on the same physical network to communicate. A virtual network has multiple modes of functioning, which include:
Network Address Translation: The Virtual Machines receive an IP from a virtual DHCP server. That IP is part of a Network Address Translation (NAT) network. The virtual network adapter translates every IP from the NAT network (Virtual Machine IP) to a public address that is used to access a specific network or even the Internet (a toy sketch of this translation follows Figure 1.9).
Bridged Network: The Virtual Machine is on the same network as the host. The VM can get an IP address from the same DHCP server that the host is using. An advantage over NAT is that every VM has a unique IP address and can be identified from the external network.
Host Only: This type means that the VM will be on a virtual network with the host, but it will not have access to LAN (Local Area Network), Internet or any other network. Similar to NAT, the VM will receive an IP (different from IPs from NAT type networks) from the virtual DHCP server.
An example of Network virtualization is presented in the figure below:
Figure 1.9 – Network Virtualization
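As a toy sketch of the NAT mode described above (all addresses and ports are made up), the virtual network adapter can be imagined as a table mapping each VM's private address and port to a port on a single public address:

# Toy NAT: rewrite each (private IP, port) pair to a port on the
# host's single public address, and route replies back.
PUBLIC_IP = "203.0.113.10"

nat_table: dict[int, tuple[str, int]] = {}   # public port -> (VM IP, VM port)
next_port = 40000

def translate_outbound(vm_ip: str, vm_port: int) -> tuple[str, int]:
    """Map a VM's private source address to a public one."""
    global next_port
    for pub_port, src in nat_table.items():
        if src == (vm_ip, vm_port):
            return PUBLIC_IP, pub_port        # reuse an existing mapping
    nat_table[next_port] = (vm_ip, vm_port)
    next_port += 1
    return PUBLIC_IP, next_port - 1

def translate_inbound(pub_port: int) -> tuple[str, int]:
    """Route a reply arriving on a public port back to the right VM."""
    return nat_table[pub_port]

print(translate_outbound("192.168.56.101", 51000))  # ('203.0.113.10', 40000)
print(translate_inbound(40000))                     # ('192.168.56.101', 51000)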
Data Virtualization
Data Virtualization means concentrating all data into a single logical (virtual) volume of data. From the point of view of the user, data is retrieved without knowing where it is located. In this way, logical data is accessed by any application or service in the same way. "It also ensures the single point access to data by aggregating data from different sources." An example of Data virtualization is shown in Figure 1.10.
Figure 1.10 – Data Virtualization
Application Virtualization
“Application virtualization is the enabling technology for SaaS of cloud computing.” Application virtualization means the ability to run a specific application or tool without having to install any software. The application will be hosted on a central server. After the application is virtualized, the user will be given access to a separate virtual copy of the application. The mechanism is illustrated in Figure 1.11.
Figure 1.11 – Application Virtualization
Open Source Cloud Platforms
Working with an open source platform brings many benefits. Open source attracts broad support, especially from developers, because they are able to modify or upgrade the platform as they like, customize it and tailor it to every customer's needs. Open source also means that the code is public, so anyone can inspect it and report bugs or vulnerabilities it may contain. Finally, if a product no longer has support from the original developer, anyone can take over the project and continue to release new versions or improvements.
An Open Source IaaS Cloud Platform – OpenStack
OpenStack is an Open Source Cloud platform that is easy to implement and offers a rich feature set. OpenStack is designed to be very scalable and to provide interoperability between Clouds. It is licensed under Apache 2.0 and is considered to be "the Linux of the Cloud".
Figure 1.12 shows the architecture of OpenStack.
Figure 1.12 – Open Stack Architecture
OpenStack features seven components that it requires in order to work (a usage sketch follows the list):
Object storage: enables users to store or get files. It is referred to as swift architecture.
Image: location where virtual disk images are stored. These images are used by OpenStack Compute. The image component is called glance.
Compute: creates and allocates virtual servers based on user demand. It is called nova.
Dashboard: user interface used by OpenStack. It is a web based interface and it is called horizon.
Identity: is in charge of authentication and authorization of every user that wants to use any OpenStack service. Its codename is keystone.
Network: provides network connectivity to the other OpenStack services. Users can create their own networks and attach network interfaces to them. Its codename is neutron.
Block storage: represents persistent block storage for Virtual Machines. It is called cinder.
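To make the division of labor between these components concrete, the sketch below uses the openstacksdk Python client to boot a server. The cloud name, image, flavor and network names are placeholders, and the sketch assumes a cloud entry is already configured in clouds.yaml:

# Sketch: Identity (keystone) authenticates the connection, Image
# (glance) supplies the disk image, Network (neutron) provides
# connectivity, and Compute (nova) boots the server.
import openstack

conn = openstack.connect(cloud="my-cloud")        # credentials from clouds.yaml

image = conn.image.find_image("ubuntu-16.04")     # served by glance
flavor = conn.compute.find_flavor("m1.small")     # a nova flavor
network = conn.network.find_network("private")    # a neutron network

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # ACTIVE once nova has scheduled and booted the VM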
An Open Source PaaS Platform – Xen Cloud Platform
“The Xen Cloud Platform (XCP) manages storage, VMs, and the network in a cloud.” Its main role is configuration and maintenance of Clouds. Xen Cloud Platform has only an infrastructure management role and does not have knowledge about the general cloud architecture, hence it does not have an interface for the end user. It is a tool designed for administrators and a great Application Programming Interface (API) for developers of Cloud operating systems.
An Open Source SaaS Platform – Google Drive
Google Drive is an online service provided by Google that offers storage, sharing and simultaneous editing of files. Google Drive lets users access their documents from anywhere. Authentication is done using an account protected by a password; some shared documents can be accessed with no registration. Google Drive has support for any computer or mobile operating system. If a file is modified from any device (computer or mobile device), it is updated on every device where Google Drive is installed.
An Open Source Research Platform – CloudSim
CloudSim is an application that enables us to simulate a Cloud so we can experiment with different setups and configurations. It is very customizable and allows modification of policies regarding the software stack, which makes it a great research tool.
The architecture of CloudSim is shown in the figure below:
Figure 1.13 – CloudSim Architecture
Cloud Computing in the future
Mobile Cloud Computing is a part of people's daily life. With hundreds of mobile applications emerging every day, and with the popularity of cloud computing, MCC has become the implementation of Cloud Computing in the mobile world. The main feature of MCC is that it reduces energy consumption and application loading times, while expanding storage and battery life. There are, however, several aspects of MCC that need to be improved, such as quality of service (QoS), convergence, interfaces, bandwidth and network management.
Enhancement of bandwidth – 5G and Small Cells
MCC bandwidth is a concern that will be addressed by 5G networks and small cell deployments. Higher bandwidth will mean faster execution, while the use of small cells will improve signal strength. The only concern is that 5G involves heterogeneity. "Densification arises due to the use of small cells, which in turn causes interference management problems. Research is required in this field to solve the problems of densification and interference management."
The femtocell inclusion in MCC can solve coverage and bandwidth issues. “Hay Systems Ltd has combined femtocells and cloud computing to offer a scalable, secure, and economical network service for mobile operators. Femtocell can remove various obstacles such as low signal strength and low bandwidth.” The femtocell integration is shown in Figure 1.14.
Figure 1.14 – MCC setup using femtocell base station
Improving Network Access Management – Cognitive Radio
Cognitive radio technology will be used to improve network access management. Cognitive radio is a system that selects the best wireless channel available: it scans the available radio channels and makes the selection by avoiding occupied channels. Practically, cognitive radio is embedded for efficient spectrum utilization and relies on spectrum sensing methods.
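A minimal sketch of the spectrum sensing idea, using energy detection over synthetic samples (a real cognitive radio would sense live RF captures from an SDR), could look as follows:

# Energy detection: estimate the energy in each candidate channel
# and pick the least occupied one. The samples are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def channel_energy(samples: np.ndarray) -> float:
    """Average signal energy of the complex baseband samples."""
    return float(np.mean(np.abs(samples) ** 2))

# Three channels: two carry a strong signal, one is mostly noise.
channels = {
    1: rng.normal(0, 1.0, 1024) + 1j * rng.normal(0, 1.0, 1024),  # busy
    2: rng.normal(0, 0.1, 1024) + 1j * rng.normal(0, 0.1, 1024),  # idle
    3: rng.normal(0, 0.8, 1024) + 1j * rng.normal(0, 0.8, 1024),  # busy
}

energies = {ch: channel_energy(s) for ch, s in channels.items()}
best = min(energies, key=energies.get)
print(f"selected channel {best}, energies: {energies}")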
Improving QoS – Cloudlet
A cloudlet represents one or more computers connected to the Internet and accessible to mobile devices that are nearby. Instead of offloading to the Cloud, a mobile device can opt to use a nearby cloudlet. This reduces latency, by using a single-hop path, and provides high-speed wireless access to the cloudlet.
Cloudlets have their own drawbacks, which include dependence on the provider's willingness to offer cloudlets and limited compatibility with Virtual Machines.
Cross-Cloud Communication – Mobile sky computing
A single Cloud provider may not meet the needs of a customer. In this case, mobile sky computing can be a solution.
Mobile sky computing represents a combination of mobile computing and sky computing. It presents providers with a solution for implementing cross-cloud communication. The direction of cross-cloud communication is heading toward negotiating latency and bandwidth between resources situated at different providers. Figure 1.15 depicts the architecture of mobile sky computing.
Figure 1.15 – Mobile sky computing architecture
Interface in MCC
The most used interface in MCC is the web interface. Web interfaces are not optimized for mobile devices; improving the interface for mobile devices would raise productivity and increase the popularity of MCC. Compatibility of interfaces with the variety of mobile devices is another issue. An interface compatible with popular mobile operating systems like Android and iOS should be developed.
MCC Security – Encryption and biometric identification
It is difficult to secure a mobile device due to its low power capacity and average computing power, so security should be provided for the MCC setup as a whole. The data is stored in the Cloud, so security should be enforced at the level of the provider. Privacy issues may arise because location information can be transferred into the Cloud, meaning a user could be tracked.
Data could be stored in different locations and countries that have different policies and laws regarding information security. A solution to data security is encryption: by encrypting our data before uploading it to the Cloud, we can ensure that no one else can read it.
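As a minimal sketch of such client-side encryption, using the symmetric Fernet scheme from the Python "cryptography" package (key management, i.e. where the key is kept, is deliberately left out):

# Encrypt locally; only the opaque token is uploaded to the Cloud.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this secret, never upload it
cipher = Fernet(key)

plaintext = b"confidential report"
token = cipher.encrypt(plaintext)  # this token is what goes to the Cloud

# Only someone holding the key can recover the data:
assert cipher.decrypt(token) == plaintext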
Another method of securing our data is password protection. This method is, however, considered weak, because there are several ways of obtaining or bypassing a password (e.g. a brute force attack). For this reason, biometric authentication seems a strong security measure. With authentication based only on the device, the data is forever lost if the mobile device is destroyed or stolen. Using biometric identification, we can ensure that our data is never lost and can be accessed from anywhere and from any device.
Disaster Recovery
Some options of designing a disaster recovery plan are creating redundant data or implementing the Cloud in different sites. Future research is needed in this matter.
The number of MCC users is increasing by the minute, hence the research effort should remain high and keep up with the latest trends and technologies. MCC will continue to grow rapidly because it supports new technologies and concepts like the Internet of Things, machine learning, M2M, etc.
SlapOS – A Decentralized Cloud Platform
SlapOS Introduction
SlapOS (Simple Language for Accounting and Provisioning Operating System) is an open-source Cloud Computing operating system that can be used to implement a decentralized Cloud Platform. "SlapOS can be described as a cloud operating system in which 'everything is a process', unlike Unix, in which 'everything is a file'."
SlapOS is capable of managing multiple resources provided by multiple providers in different locations, creating a distributed Cloud. Interconnection can be done using IPv6 or IPv4.
SlapOS is more resilient and redundant than classical Cloud operating systems because it interconnects providers of public services (e.g. electricity), networks and storage solutions. In this way SlapOS can recover from disasters such as natural disasters, wars or financial crises.
SlapOS Architecture
In order for a SlapOS deployment to function two major components must be installed: a SlapOS Master and multiple SlapOS Nodes. The SlapOS Master is in charge of storing node configurations and assigning tasks to different nodes. The SlapOS nodes represent the actual Cloud Computing resources available and controlled by the SlapOS Master. The principle described is represented in Figure 2.1.
Figure 2.1 – SlapOS Architecture
SlapOS is based on a community technology called buildout, a Python-based build system used for developing and deploying applications. It is a tool used for reproducing software using a buildout configuration. SlapOS is also based on the GNU project which provides full compatibility with almost all operating systems (Windows, Linux, MacOS, etc.), programming languages (Python, Java, Ruby, Perl, PHP, etc.), application server or container, SQL database and almost any frontend available.
One of SlapOS’s main features is that it can be deployed in merely a few hours. This makes it a great competitor to popular Cloud Platforms developed by providers such as Amazon (Amazon Web Services), Google (Google Cloud), Microsoft (Microsoft Azure), etc.
A SlapOS Node can handle up to 200 databases at the same time and has support for virtual machines which makes it compatible with legacy software. A Node’s structure is illustrated in the figure below:
Figure 2.2 – SlapOS Node Structure
A SlapOS Master is capable of implementing Enterprise Resource Planning and an online store. SlapOS can handle all matters of billing, accounting and customer accounts, and it is highly customizable in this regard. Practically, a Master Node contains: Clients and Suppliers, Allocation requests, Available Capacity & Price and the entire Software Catalog. "With SlapOS, everyone can run a Cloud Business in 24 h." A SlapOS Master Node is shown in Figure 2.3.
Figure 2.3 – SlapOS Master Node Structure
SlapOS Installation and Deployment
The basic requirements for installing a SlapOS Node are a GNU/Linux server and the GNU wget utility. SlapOS is based on Python, so Python 2.6 or 2.7 must be installed. SlapOS supports various distributions of Linux, including Debian, Ubuntu, CentOS and OpenSUSE. SlapOS can also be installed from source; for this, the "make" and "patch" tools and the gcc and g++ compilers are needed. The Linux headers, "uml-utilities" for tunneling and "bridge-utils" for bridging are also required. After the requirements are fulfilled, the following command must be executed:
# apt-get install python gcc g++ make uml-utilities bridge-utils linux-headers-$(uname -r) patch wget
A SlapOS Slave Node contains computer partitions, each containing a UNIX user, a home directory, a TAP interface and an IPv6 address. A computer partition can contain a software instance installed by a SlapOS Master. A SlapOS Master searches for a free partition across all registered Nodes and chooses one based on specific parameters set by the user. A partition can be freed and made available again if it is no longer in use.
In order to install the SlapOS bootstrap the following shell script should be executed in a terminal:
mkdir -p /opt/slapos/log/
cd /opt/slapos/
echo "[buildout]
extends = https://lab.nexedi.com/nexedi/slapos/raw/master/component/slapos/buildout.cfg" > buildout.cfg
unset PYTHONPATH
unset PYTHONDONTWRITEBYTECODE
unset CONFIG_SITE
python -S -c 'import urllib2;print urllib2.urlopen("https://raw.github.com/buildout/buildout/1/bootstrap/bootstrap.py").read()' | python -S -
bin/buildout
As observed from the last line of the script, SlapOS is deployed using buildout, which makes sure all required dependencies and components are installed. The script first creates the directory in which the SlapOS software will be installed. After changing into that directory, it creates the buildout file pointing at the respective URL. The Python one-liner that follows downloads the bootstrap script, which reads the buildout configuration file and installs SlapOS accordingly.
SlapOS Registration and Execution
This subchapter covers the following operations: registration of the server, network configuration and execution of the system.
Registration of the SlapOS Server
The first step is to register the server in the SlapOS community Cloud. After registration, an X.509 certificate and a key are obtained, which are needed for configuration. A security token obtained from the SlapOS Master ensures secure authentication. After successful registration, the screen captured in Figure 2.4 should appear.
Figure 2.4 – SlapOS Registration and Certificate
SlapOS uses IPv6, but compatibility with IPv4 can be assured by implementing an IPv6 tunnel. The configuration process involves selecting an IPv6 interface.
Running SlapOS
To create the configuration files, we execute the following command, keeping in mind that the token obtained in the previous step will be requested first:
# slapos node register --interface-name lo --partition-number 20 COMPUTER_NAME
The command will result in the following files being generated:
/slapos.cfg: Configuration file for the SlapOS Node
/ssl/certificate: The server’s SSL Certificate
/ssl/key: The server’s SSL Private Key
To finish the SlapOS configuration, SlapOS needs to be run using the command:
# slapos node format --alter_user=True --now
SlapOS Client Software
The SlapOS client is a set of tools used for managing SlapOS Nodes, instances and the SlapOS Master. It is included in the standard SlapOS Node installation.
It allows us to install Software Releases on nodes locally via the terminal instead of having to access slapos.org. The following steps should be executed in order to install SlapOS Client:
Registration and generation of a security token
A Security Token must be generated from the SlapOS Master.
Running the SlapOS Configuration Client
In order to create the required configuration files, we input the Token obtained and issue the following command:
# slapos configure client
The above command will generate the following files as shown by the terminal:
$HOME/.slapos/slapos-client.cfg: Client configuration file
$HOME/.slapos/certificate: The user’s SSL Certificate
$HOME/.slapos/key: The user’s SSL Private Key
The SlapOS configuration file (slapos-client.cfg) must be edited to include the following line:
alias = webrunner http://git.erp5.org/gitweb/slapos.git/blob_plain/slapos-0.204:/software/slaprunner/software.cfg
To check the certificate and key validity we can run the following commands:
$ /opt/slapos/parts/openssl/bin/openssl x509 -noout -in $HOME/.slapos/client.crt: for certificate validity
$ /opt/slapos/parts/openssl/bin/openssl rsa -noout -in $HOME/.slapos/client.key -check: for key validity
In order to install a specific software on a SlapOS Node we use the following command:
slapos supply webrunner <computer number>
In order to remove a specific software on a SlapOS Node we use the following command:
slapos remove webrunner <computer number>
If we want to make a request for a new or existing instance, we can invoke the command below. Deleting an instance via the command-line is not possible.
slapos request <instance name> <type of instance>
The SlapOS console is a Python command line that is linked to all slap modules installed. In order to open the SlapOS Console we use the command:
slapos console
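A hedged sketch of what a console session might look like, using the slapos.core client library (the Master URL is only an example, and the file paths and exact method usage should be checked against the installed SlapOS version):

# Sketch of a SlapOS console session via the slapos.core "slap" client.
# Master URL, paths and the software release alias are assumptions.
import os
from slapos import slap

client = slap.slap()
client.initializeConnection(
    "https://slap.vifib.com",                       # SlapOS Master (example)
    key_file=os.path.expanduser("~/.slapos/key"),
    cert_file=os.path.expanduser("~/.slapos/certificate"),
)

# Request (or retrieve, if it already exists) a webrunner instance:
instance = client.registerOpenOrder().request(
    software_release="http://git.erp5.org/gitweb/slapos.git/blob_plain/"
                     "slapos-0.204:/software/slaprunner/software.cfg",
    partition_reference="my-webrunner",
)
print(instance.getConnectionParameter("url"))       # e.g. the instance URL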
SlapOS Software Concepts
A Software Release represents an installation of a piece of software. It does not include configuration files, so it is not ready to be executed "out of the box".
A Software Instance uses a freshly installed Software Release to create specific configuration files and wrappers.
First, a Software Release is installed on a machine. Using the Software Release, a Software Instance is created, containing configuration files and the disk image. This way, we have a single Software Release with multiple Software Instances running in parallel, saving space and avoiding the need to install a new Release every time a new instance is needed.
Earlier, we introduced the notion of Computer Partitions. Every Computer Partition will contain a Software Instance of a given type.
The workflow for all these operations is the following: a new Software Instance is requested through the SlapOS Master's order page. The SlapOS Master finds a free computer to host the Software Instance. After the Software Instance is up, the SlapOS Master checks for service availability. Options for stopping or destroying the instance are then available.
After registering a server at slapos.org, we are given the option of installing new software. The process of installing new software is initiated as shown in Figure 2.5.
Figure 2.5 – Installing new Software
A list of available software which can be installed on the SlapOS Node is presented (as shown in Figure 2.6).
Figure 2.6 – Selecting a Software
After selecting a specific software, a list of Software Releases is presented (Figure 2.7). A SlapOS Node can support multiple releases of the same software. The list can contain experimental or beta releases, but for stability purposes it is recommended to install the latest stable release.
Figure 2.7 – Selecting a Software Release
Logs that contain all the software installed on a SlapOS Node can be viewed by executing the following command:
# tail -f /opt/slapos/log/slapos-node-software.log
“If the Software Release to install is not yet available to your distribution, it will be automatically compiled, so it may require a lot of time to finish – from 20 minutes to several hours.”
5G Networking Concepts
Introduction to 5th generation mobile networks
Wireless technologies have become a part of our daily life and have a profound impact on our daily tasks, giving us access to a full range of services: multimedia (videos, video-conferences, images), information services (encyclopedias, academic content) and applications for e-commerce and health emergencies. "If analysts' prognostications are correct, just about every physical object we see (e.g. clothes, cars, trains, etc.) will also be connected to the networks by the end of the decade (Internet of Things)." In comparison to 4G, "5G is an evolution considered to be the convergence of Internet services with legacy mobile networking standards leading to what is commonly referred to as the 'mobile Internet' over Heterogeneous Networks (HetNets), with very high-speed broadband." In this sense, 5G is not only an upgrade in speed but also a way to provide wide-area network coverage using HetNets in a low-cost and power-efficient way.
Standards and Features
The Next Generation Mobile Networks (NGMN) Alliance defines in its 5G white paper the requirements for a fully functioning 5G network. In summary, the NGMN suggests the following:
Data rates of up to 1 Gb/s should be supported in specific environments, such as indoor offices, while at least 50 Mb/s shall be available everywhere, cost-effectively.
The 5G system should provide 10 ms E2E latency (the duration between the transmission of a small data packet from the application layer at the source node and its successful reception at the application layer at the destination node, plus the equivalent time needed to carry the response back) in general, and 1 ms E2E latency for cases that require very low latency. The end user should have the perception of being always connected. The establishment of the initial connection to the network should be instantaneous from the perspective of the user.
In the case of mobility, 5G should not assume mobility support for all devices and services but provide mobility on demand only to those devices and services that need it.
Other considered requirements are that spectral efficiency should be increased significantly compared to current 4G networks, coverage should be increased and also signal efficiency should be greatly enhanced.
All these aspects are represented in Figure 3.1.
Figure 3.1 – 5G Parameters
5G is still in the research stage, but 5G networks are expected to be operational around Q4 2020. According to the NGMN, the current timeline for 5G is represented in the figure below:
Figure 3.2 – 5G Roadmap
Meanwhile, several important vendors began research and development for 5G in 2013, and 5G laboratory trials started in 2015.
5G Architecture
5G will be a fully converged system supporting a multitude of applications, ranging from data, voice and multimedia to critical communications, the Internet of Things, low-latency applications (for example, driverless cars) and, due to increased mobility, moving platforms (for example, trains).
Figure 3.3 – 5G Architecture
The Network architecture of 5G can provide the following capabilities:
Integrates the Radio Access Network (RAN) in various frequency bands; the radio frequency range will vary from 6 GHz up to 100 GHz. According to the 5G requirements, the RAN will provide virtually zero latency.
Flexible deployments can be implemented using wireless and relying on optical technologies.
HetNet Implementation
Cloud Computing can be applied to the RAN; this capability is combined with the transformation to cloud-based radio access.
Virtualization of network functions will optimize network resources, which improves scalability. This will be done in communication with data centers and will enhance Software Defined Networking (SDN) capabilities.
Full usage of SDN capabilities
"Networks will become self-aware, cognitive, and implement extensive automation and continuous and predictive learning."
Internet of Things (IoT) integration
Heterogeneous networks (HetNets)
A heterogeneous network is a network in which multiple radio access technologies are used (e.g. GSM, WCDMA, LTE) along with base stations that vary in size. A heterogeneous network is an efficient way of expanding mobile network capacity.
A heterogeneous network (HetNet) is made of two components: macro cells (which provide wide coverage and mobility) and small cells (which add capacity), as detailed in the node list below. A HetNet is an evolution of a mobile access network in which an operator can add cell capacity as demanded. A HetNet can extend closer to the end user by positioning low-cost and low-power access nodes indoors or outdoors (e.g. roadside posts, corporate buildings). To facilitate deployments, 3G, LTE, 5G and Wi-Fi interfaces can be embedded within cells.
Figure 3.4 – HetNet Architecture
The HetNet access nodes are as follows:
Macro/Micro Cells – Macro and micro cells provide universal coverage, having an inter-site distance of more than 500 meters
Small Cells – Small Cells are better suited for cloud applications due to higher speed demand. Small cells include:
Pico Cells – Pico Cells must be placed at about 200 meters or less
Femto Cells – The coverage range of a Femto Cell is about 100 meters
Distributed Antenna System – A network of spatially distributed antennas connected to a common source via a wireless link
Relay Nodes – Base stations that extend the coverage/capacity of macro cells. Relay Nodes are connected to a Donor eNodeB through a radio interface
The figure below exemplifies some small cell types:
Figure 3.5 – Small Cell Types
OFDM
Definition and use of OFDM
Orthogonal frequency-division multiplexing (OFDM) is a type of frequency-division multiplexing (FDM) scheme that uses multiple carrier frequencies for encoding. In order to achieve data transmission over a greater bandwidth, multiple orthogonal frequencies are used.
OFDM is used in common applications such as wireless networks (802.11a), DSL, digital television (DVB), digital radio (DAB) and 4G networks (LTE).
Working Principle
OFDM uses the IFFT (Inverse Fast Fourier Transform) on the transmitter side and the FFT (Fast Fourier Transform) on the receiver side, which reduces system complexity and has made OFDM easy to adopt widely. The block diagrams of the transmitter and the receiver are illustrated below:
Figure 3.6 – OFDM Transmitter and Receiver
Traditional FDMA transmission consists of dividing a channel into multiple sub-channels in order to transmit data streams in parallel, the sub-channels being separated by a group of filters found at the receiver. Guard bands are required between sub-channels, making the spectral efficiency low. In OFDM, subcarriers are overlapping and orthogonal, thus greatly improving the spectral efficiency, as shown in the figure below:
Figure 3.7 – Difference between FDM and OFDM
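A minimal numpy sketch of the OFDM transmit and receive path from Figure 3.6 (the subcarrier count and cyclic prefix length are illustrative) shows how a single IFFT/FFT pair replaces the per-subcarrier filtering of classical FDM:

# OFDM sketch: QPSK symbols on orthogonal subcarriers, IFFT at the
# transmitter, cyclic prefix, then FFT at the receiver.
import numpy as np

N_SUBCARRIERS = 64
CYCLIC_PREFIX = 16

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 2 * N_SUBCARRIERS)

# QPSK: two bits per symbol, mapped onto {±1 ± 1j}/sqrt(2)
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# One IFFT turns the parallel subcarrier symbols into a time-domain block
time_block = np.fft.ifft(symbols, N_SUBCARRIERS)

# Cyclic prefix (last samples copied in front) guards against
# inter-symbol interference from multipath
tx = np.concatenate([time_block[-CYCLIC_PREFIX:], time_block])

# Receiver: drop the prefix, apply the FFT, recover the symbols
rx_symbols = np.fft.fft(tx[CYCLIC_PREFIX:], N_SUBCARRIERS)
assert np.allclose(rx_symbols, symbols)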
5G Applications
Software Defined Network
The Software Defined Networking approach is composed of a logically centralized entity called the Controller, which manages the associated network data plane using an Application Programming Interface (API) that allows the configuration of parameters such as the forwarding tables of network equipment (e.g. routers, switches).
Figure 3.8 – SDN Approach
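The controller/data-plane split can be illustrated with a toy Python model; all names and addresses below are made up, and real deployments use protocols such as OpenFlow for the controller-to-switch API:

# Toy SDN model: a centralized controller installs match/action rules
# into switch forwarding tables; switches only match and forward.
class Switch:
    def __init__(self, name: str):
        self.name = name
        self.flow_table: dict[str, str] = {}   # dst address -> output port

    def forward(self, dst: str) -> str:
        return self.flow_table.get(dst, "controller")  # miss -> punt up

class Controller:
    """Logically centralized control plane with a global network view."""
    def install_rule(self, switch: Switch, dst: str, port: str) -> None:
        switch.flow_table[dst] = port          # the "API" to the data plane

s1 = Switch("s1")
ctrl = Controller()
ctrl.install_rule(s1, "10.0.0.2", "port2")

print(s1.forward("10.0.0.2"))  # port2 (handled in the data plane)
print(s1.forward("10.0.0.9"))  # controller (table miss, sent to controller)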
As presented in Figure 3.9, 5G can benefit from the programmability and scalability of SDN and NFV (Network Function Virtualization) technologies. "As such, the 5G architecture is a native SDN/NFV architecture covering aspects ranging from devices, (mobile/fixed) infrastructure, network functions, value enabling capabilities and all the management functions to orchestrate the 5G system. APIs are provided on the relevant reference points to support multiple use cases, value creation and business models."
Figure 3.9 – 5G SDN
Mobile Cloud Computing
Cloud applications have witnessed exponential growth in the last decade. The Cloud network has expanded and it now contains millions of cloud nodes, all interconnected.
A Cloud-based resource is typically a service provided by the vendor to the consumer (e.g. a Cloud application) that consumes resources. In Cloud Computing, multiple nodes provide computational power, which is aggregated into a cloud resource pool. The location of the nodes is not relevant to the consumer, as the service provider has the means to optimize node location, placement and configuration. An example of a Cloud Computing application that uses 5G is a weather application, shown in Figure 3.10.
Mobile Cloud Computing has been introduced, transforming mobile devices into nodes that are part of a cloud resource pool. MCC is a great technology for offloading demanding tasks to the Cloud. Today's mobile phones have the computational power of the high-end desktop computers of a decade ago. "This inclusion is similar to the expansion of the network-centric paradigm of client-server communications into a peer-to-peer architecture, which, nonetheless, still relies on the client server principals by requiring individual nodes of the P2P network to simultaneously function as clients and servers (or clients and service providers of a conglomerated resource cloud)."
Figure 3.10 – Example of MCC – Weather application
Architecture of Mobile Cloud Computing
The architecture of MCC can be summarized in three basic layers:
Mobile Network
Internet Service
Cloud Service
An illustration of the above architecture is described in the figure below:
Figure 3.11 – MCC Architecture
Mobile network:
A mobile network is composed of telecom operators and mobile devices, which can include smartphones, tablets, laptops, etc. Devices connect to the operator using base transceiver stations (BTS), access points or satellites, which establish and manage the link between the mobile device and the operator. The operator's central processor and servers receive the device's ID and location. Based on the home agent (HA) and the subscriber data found in the operator's database, the user is provided with services such as authentication, authorization and accounting (AAA).
Internet service
The Internet service establishes a bridge between the Cloud and the mobile network. The requests of subscribers are delivered via Internet to the Cloud. This is accomplished using wired connections or various wireless technologies of 3G, 4G, 5G (HSPA, UMTS, WCDMA, LTE, etc.).
Cloud service
The Cloud service is managed by a Cloud controller. The controller has the role of processing the requests and providing the appropriate services. The Cloud is composed of several layers, as illustrated by Figure 3.12.
Figure 3.12 – Service Model of Cloud Computing
Data Center Layer
The layer is composed of the hardware and the infrastructure of the Cloud. A data center represents a place with a multitude of servers that are connected using high speed links.
Infrastructure as a service
IaaS provides hardware, servers and storage to customers on a "pay as you go" scheme. The infrastructure can be adjusted dynamically depending on requirements (e.g. Amazon Elastic Compute Cloud, EC2).
Platform as a service
PaaS provides an embedded platform that users can use to build, execute and deploy applications or tools. Popular platforms include Python, Java and PHP (e.g. Google Play Store, Microsoft Azure).
Software as a service
SaaS is a model used to deliver applications. Applications are hosted on the Cloud and provided as ready-made solutions. The software can be executed by the user without installing any software or tools (e.g. CRM, ERP).
Reviewing the above architecture, we can conclude that in Mobile Cloud Computing the user no longer has to worry about the processing power or battery life of mobile devices, because storage and computation are performed on the Cloud. The user has an on-demand service that works seamlessly.
Redundancy is improved because cloud resources are distributed across multiple geographical areas. Flexibility is also gained because mobile devices support different potential cloud services. An overview of multiple resource pools using mobile devices is illustrated in Figure 3.13.
Figure 3.13 – MCC Resource Sharing
Software Resources
Operating systems define the operation of a node, including low-level interfacing. Examples of OSs for mobile devices (nodes) are Android and iOS.
Non-serviceable software, which is mostly pre-installed on mobile devices, can monitor device load and report the results to the provider.
User applications can be installed depending on user requirements and needs. Applications can also be shared throughout the Cloud and can communicate with other applications.
Hardware Resources
Computational resources include Central Processing Units (CPU), Graphic Processing Units (GPU), Field-Programmable Gate Arrays (FPGA) or Digital Signal Processors (DSP).
Storage resources can be volatile (Random Access Memory) or non-volatile, like flash memory.
Sensors can be of different types, including location, microphone, camera and temperature sensors.
A mobile device’s speakers, notification light, flash and display represent actuators.
An energy resource can be represented by the mobile phone’s battery or even solar panels connected to the mobile phone.
Networking Resources
Mobile devices contain a generous range of communication interfaces, from long-range to short-range antennas, as well as wired connectivity.
Cellular communications such as 5G provide always-on connectivity, provided there is service coverage. The number of users and the demand are constantly increasing, making 5G almost a necessity.
Wireless LANs are common and very popular, especially for larger devices such as laptops. The WLAN interface is usually used to offload traffic from cellular networks when transferring large files, and can also be used in ad hoc mode between multiple mobile devices.
Bluetooth is a popular communication choice because it requires very little power, thanks to the Bluetooth Low Energy implementation.
Infrared is an optical interface, but it is unpopular due to its low flexibility and modest speeds.
Near Field Communication (NFC) is becoming a very popular choice, and mobile devices now ship with NFC out of the box. An example of an NFC implementation is Google Wallet, a solution that simplifies payment by using NFC to transmit payment information.
Wired interfaces can also be attached to mobile devices, directly or through extensions called dongles. The most common examples are tablets that feature ports for wired connections.
Each interface has its own trade-offs in data bandwidth, range and power consumption. When implementing a mobile Cloud, we must carefully analyze which interface best suits our needs.
5G and the Internet
The Internet today impacts our personal and professional lives through a multitude of features and services: entertainment (multiplayer video games, video and audio streaming, online movies), location services, commercial services (online stores, e-commerce) and safety applications (e-Health, first-responder teams). According to the International Telecommunication Union (ITU), more than 3.2 billion people had access to the global Internet as of May 2015. Cisco predicts that by 2017 video communications will make up 80-90% of total IP traffic. The vision of connecting every object (automobiles, clothes, trains, etc.) is described by the Internet of Things (IoT). “The drivers of the future Internet are all kinds of services and applications, from low throughput rates (e.g. sensor and IoT data) to higher ones (e.g. high-definition video streaming), that need to be compatible to support various latencies and devices.”
5G and the Cloud
Models of Cloud Computing (IaaS, PaaS, SaaS) are hosted on the Internet. SaaS is a model where data is stored on the Cloud and can be accessed through a web browser via the Internet (e.g. Google Docs). PaaS enables us to develop and customize applications without having to install large software packages on our own machines (e.g. Google Play Store). IaaS makes network infrastructure available to users on a “pay-as-you-go” model (e.g. Amazon Elastic Compute Cloud). The Cloud from the user’s point of view is presented in Figure 3.14.
Figure 3.14 – Cloud Services
Cloud Computing means shifting workloads and data from client devices to the Cloud. “If we assume that the network convergence and cloud have already happened and look forward, we will view the future Internet not as network, cloud, storage or devices, but as the execution environment for smart applications, services, interaction, experience and data.”
With the increased capacities of 5G, Mobile Cloud Computing is expected to become a new pillar of mobile services, with an impact on people’s lifestyles and usage patterns. Billions of cloud-capable devices already exist that could benefit from the features Mobile Cloud Computing brings.
“Many technical challenges still remain to be addressed in the related areas, ranging from MCC architecture/5G network design, resource/mobility management, security enhancement and privacy protection, to networking protocol development and new MCC service provisioning.”
5G in Europe
Past research projects developed in Europe include 2G GSM, the Universal Mobile Telecommunications System (UMTS) and LTE. “Timely development of the 5G technology is now of paramount importance for Europe to drive the economy, strengthen the industry’s competitiveness, and create new job opportunities.”
The development of 5G is of vital importance for the European Union (EU) because of its role in economic growth. “As a whole, the ICT sector represents approximately 5% of EU GDP, with an annual value of €660 billion. It generates 25% of total business expenditure in Research and Development (R&D), and investments in ICT account for 50% of all European productivity growth.” 5G is also a great opportunity to create new jobs in Europe.
METIS Project
METIS (Mobile and wireless communications Enablers for Twenty‐twenty Information Society) is a research project on 5G with a total planned investment of €28.7 million. It is led by Ericsson and involves 29 partners, including telecom vendors, network operators and academic institutions.
The project aims to deliver scalability, efficiency and versatility by researching technologies to support the functioning of the system. The architecture of the project is shown in Figure 3.15.
Figure 3.15 – METIS Project
The target of the project is to design a system meeting the following requirements:
• 1000x higher area capacity
• 10 to 100x higher number of connected devices
• 10 to 100x higher typical user data rate
• 10x longer battery life for low power MTC
• 5x reduced end‐to‐end latency, compared to LTE‐A
5G Case Study: “Understanding 5G: Perspectives on future technological advancements in mobile”
To better understand the benefits, impact and future perspectives of 5G mobile networks, GSMA Intelligence has published a short analysis that presents the definition of 5G, real use cases and scenarios, and the future impact on mobile operators.
The 5G potential is described as “the prospect of being considerably faster than existing technologies, 5G holds the promise of applications with high social and economic value, leading to a ‘hyperconnected society’ in which mobile will play an ever more important role in people’s lives.” [8]
As illustrated in the above chapters, the low latency of 5G (under 1 ms) and its high bandwidth (on the order of 1 Gbps) are highlighted.
The services currently offered by previous generation mobile networks are shown in Figure 3.16.
Figure 3.16 – Evolution of mobile network generations
Two present views of 5G exist:
A hyper-connected vision: existing technologies (3G, 4G, Wi-Fi) would co-exist in order to maintain or extend coverage. This would also enable connectivity for more devices and provide an excellent environment for Internet of Things (IoT) and Machine-to-Machine (M2M) technologies.
Next-generation radio access technology: specific targets for data rates and latency must be achieved by newly produced radio devices, and all technologies and concepts will be classified as either 5G compliant or not.
The 5G features described above will enable the rise of new technologies and fulfill certain network requirements, as illustrated in Figure 3.17.
Figure 3.17 – Use Cases of 5G
Virtual Reality
5G will have multiple uses in multimedia and entertainment. Virtual Reality systems are still in an early development phase, but 5G will enable them to communicate at higher speeds with devices like motion sensors or head-up displays (HUDs).
Self-driving cars
For a self-driving car, communication must happen almost instantaneously. This is where 5G’s low latency comes into play, making it the only mobile technology suitable for such an implementation. Other 5G features essential for self-driving cars are mobility and reliability.
Wireless Office
A current business requirement is the ability to work from anywhere. With 5G it becomes possible to work remotely with large data sets and Cloud services that require high bandwidth and low latency. Another application of 5G is high-definition videoconferencing.
Machine-to-machine
“Our forecasts predict that the number of cellular M2M connections worldwide will grow from 250 million this year to between 1 billion and 2 billion by 2020, dependent on the extent to which the industry and its regulators are able to establish the necessary frameworks to fully take advantage of the cellular M2M opportunity.”
M2M applications will be strongly empowered by 5G. Typical M2M applications include smart-home devices such as smart lighting, smart thermostats and smart smoke detectors.
Challenges for mobile operators
Operators and researchers are seeking technical solutions for implementing 5G at frequencies between 6 and 300 GHz. One challenge operators face is the small coverage obtained at these frequencies. Beamforming is a candidate solution: concentrating the whole radio signal into a beam that can be transported over greater distances. This implies that the beam must be oriented toward the end device and, to support mobility, must track the device, which would entail huge costs in a large-scale deployment.
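To make the beam-steering idea concrete, the following minimal NumPy sketch computes the array factor of a uniform linear array steered toward a chosen direction. The element count, spacing and steering angle are illustrative assumptions, not parameters from the cited study.

import numpy as np

# Array factor of an N-element uniform linear array, steered to theta0.
# Element count, half-wavelength spacing and angle are assumed values.
N = 16                       # antenna elements
d = 0.5                      # element spacing in wavelengths
theta0 = np.deg2rad(20.0)    # desired beam direction
n = np.arange(N)
weights = np.exp(-2j * np.pi * d * n * np.sin(theta0))   # steering phases
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)          # scan angles
steer = np.exp(2j * np.pi * d * np.outer(np.sin(theta), n))
af = np.abs(steer @ weights) / N   # normalized pattern; peaks at theta0
print("Peak at", np.rad2deg(theta[np.argmax(af)]), "degrees")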
By implementing an array of antennas in a device, we can use a method called high-order MIMO. This, however, introduces significant radio interference, so a solution to minimize interference must be found.
“Today, inter-operator interconnect points are relatively sparse, but to support a 5G service with 1 millisecond delay, there would likely need to be interconnection at every base station, thus impacting the topological structure of the core network.”
5G Development
5G has full support from the European Union (EU), as Europe has fallen behind in mobile technology. “European governments are particularly keen to get ahead of the curve in the 5G space and there have been a number of announcements from Neelie Kroes, European Commission (EC) Vice President for Digital Agenda, on the subject going back to Mobile World Congress 2013.” A map of recent developments is shown in Figure 3.18.
Figure 3.18 – 5G Development
Scalable Radio Transceiver for Instrumental Wireless Sensor Networks – SARAT Project
About Beia Consult International
Beia Consult International, founded in 1991, is one of the top distributors and providers of telecommunication equipment in Romania. Its solutions are mainly used for enterprise cloud transmissions and telemetry. Beia is a Siemens authorized distributor and has partnerships with Siemens and Alcatel. “The company’s references include over 5,000 turn-key projects for advanced IT and communications solutions. BEIA is certified ISO 9001, 14001, 18001 and 27001.”
Beia also has academic partnerships with University Politehnica of Bucharest, the Romanian Space Agency, the National Institute for Research and Development in Electrical Engineering and the Romanian Academy (Research Institute for Artificial Intelligence). Beia also serves as an auditor in the Romanian-German Chamber of Commerce (AHK), is a member of the Romanian Association for Electronic and Software Industry (ARIES) and leads the NEM Romanian Mirror Group.
“BEIA has R&D expertise in Cloud and embedded M2M (Machine 2 Machine) tele monitoring applications, one of the R&D results consist in “IP-Wireless-Telemetry” experimental system: Remote Terminal Unit (RTU stand alone, GPS location, data acquisition and processing, command and control, GPRS/CDMA on-line data transmission with TCP/IP embedded); Field Interface Unit – communications server; client application (fleet management, data monitoring and command), human machine interface.”
Beia has ongoing projects in the fields of M2M, Cloud and IoT (Internet of Things) that will greatly benefit the Cloud communication domain.
The SARAT Project
Purpose and project description
The SARAT Project is a research project carried out in collaboration with University Politehnica of Bucharest and IFIN-HH (Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering).
The purpose of this project is to design and implement a radio transmitter capable of very high speeds and of handling multiple communications at the same time. The platform is intended to be very flexible, since the implementation is done entirely in software.
Software Defined Radio
A Software Defined Radio system replaces hardware components (such as filters, mixers, modulators and demodulators) with a single system or a computer that performs the same functions. The simplest SDR system is a computer equipped with a sound card acting as an input, plus Radio Frequency equipment (e.g. an antenna).
Equipment and tools
USRP
The USRP N210 Networked Series (Figure 4.1) is hardware designed by Ettus Research and used by research labs and universities. The USRP (Universal Software Radio Peripheral) is equipment capable of implementing Software Defined Radios (SDR).
The USRP is a platform capable of supporting different types of transmission (WLAN, Bluetooth, DECT, ZigBee) on the same hardware. The only configurable part is the software, which defines the protocol layers and the functions of each layer. The most important aspect is that signal processing is done not by hardware but by software. In this way we can minimize the cost and time of designing, deploying and testing new systems.
USRPs contain ADCs (Analog to Digital Converters), DACs (Digital to Analog Converters), RF (Radio Frequency) front-end circuits and FPGAs (Field-Programmable Gate Arrays).
In summary, a USRP is an interface between the analog (RF) and digital (computer) domains.
Figure 4.1 – USRP N210
Features:
“• Use with GNU Radio, LabVIEW™ and Simulink™
• Modular Architecture: DC-6 GHz
• Dual 100 MS/s, 14-bit ADC
• Dual 400 MS/s, 16-bit DAC
• DDC/DUC with 25 MHz Resolution
• Up to 50 MS/s Gigabit Ethernet Streaming
• Fully-Coherent MIMO Capability
• Gigabit Ethernet Interface to Host
• 2 Gbps Expansion Interface
• Spartan 3A-DSP 3400 FPGA (N210)
• 1 MB High-Speed SRAM
• Auxiliary Analog and Digital I/O
• 2.5 ppm TCXO Frequency Reference
• 0.01 ppm w/ GPSDO Option”
The modular approach of the N210 allows operation at frequencies from DC to 6 GHz, depending on the daughterboard used. The FPGA handles general processing functions such as decimation, interpolation and digital conversion, while the computer handles signal-processing functions such as filtering, modulation and demodulation. An SD card slot contains the firmware and can be used for firmware upgrades.
VERT900 Antenna
The VERT900 is an omnidirectional antenna operating in the 824-960 MHz and 1710-1990 MHz quad-band cellular/PCS and ISM bands, with a gain of 3 dBi.
Figure 4.2 – VERT900 Antenna
GNU Radio
GNU Radio is open-source software that provides a set of tools for implementing Software Defined Radios. It contains signal-processing blocks as well as virtual sources and virtual equipment for emulating real hardware. GNU Radio can be used with external hardware (e.g. a USRP) or purely for simulation.
GNU Radio applications are written in the Python programming language, while the core signal-processing functions are implemented in C++.
GNU Radio has a GUI (Graphical User Interface) called GNU Radio Companion, which makes it very user-friendly and makes application development fast and easy. The user builds the radio system by creating a flow graph, in which nodes represent signal-processing blocks and links represent the data streams between them. The processing blocks are written in C++, the links are created using Python scripts, and the bindings between the two are generated with the SWIG compiler.
Signal blocks process streams of data from their input ports to their output ports. The attributes of a processing block are the types of data it supports and its number of input and output ports. Data types include complex, float and short. GNU Radio contains about 100 such blocks. A block’s properties can be defined by parameters, set either statically on a given scale of values or dynamically, based on variables such as the sampling frequency. A variable can be changed during application execution using graphical widgets such as sliders.
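A minimal flow graph written directly in Python might look like the sketch below, which connects a signal source to a file sink. It assumes a GNU Radio 3.7-era API; the sample rate, tone frequency and file name are illustrative.

from gnuradio import gr, analog, blocks

class SineToFile(gr.top_block):
    """Minimal flow graph: complex sine source -> throttle -> head -> file sink."""
    def __init__(self):
        gr.top_block.__init__(self)
        samp_rate = 32000
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 1000, 1.0)
        # The throttle paces the graph in real time when no hardware does.
        throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
        # Stop after 5 seconds' worth of samples so run() can return.
        head = blocks.head(gr.sizeof_gr_complex, samp_rate * 5)
        sink = blocks.file_sink(gr.sizeof_gr_complex, "sine_samples.dat")
        self.connect(src, throttle, head, sink)

if __name__ == "__main__":
    tb = SineToFile()
    tb.run()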
Working principle
The interface to the analog domain is represented by the antennas connected to the USRP’s ports. An analog RF signal can be received or transmitted by the connected antenna, at operating frequencies of up to 5.9 GHz. The signal is passed to the motherboard, where analog signals are converted into digital signals and mixed down to baseband in the FPGA, and the sampling frequency is modified. The signal produced by the FPGA is then transmitted to the computer via USB (Universal Serial Bus) or Gigabit Ethernet (up to 1 Gbps). Once the signal reaches the computer, it is processed by GNU Radio.
On the USRP2 motherboard, an ADC (Analog to Digital Converter) samples the received signal and converts it into digital values with a dynamic range of 14 bits. The number of measurements per second is set by the ADC’s sampling frequency: at 100 megasamples per second, this yields 100 million conversions per second.
The digital sample values are sent to the FPGA and processed by Digital Down Converters (DDC), which produce the exact frequency and sampling rate required at the output.
The samples from the ADC are mixed at the desired intermediate frequency (IF) by multiplying them with a sine and a cosine function, yielding an in-phase path (I) and a quadrature path (Q). The mixing frequency is generated by a Numerically Controlled Oscillator (NCO) in the FPGA, which synthesizes a waveform that is discrete in time and amplitude. Using the NCO, rapid frequency hops are possible.
The sampling rate is then decimated by a factor N; the sampling frequency divided by N is the output sampling frequency sent to the computer. For transmission, the same process runs in reverse, using Digital Up Converters (DUC) and DACs. The FPGA supports time-dependent applications such as TDMA, and an internal clock keeps track of the timing for sending samples.
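The DDC principle can be illustrated with a few lines of NumPy: the ADC samples are multiplied by the NCO’s complex exponential to obtain the I and Q paths, then decimated. The IF and decimation factor below are assumed for the example, and the low-pass filter that would precede decimation in the FPGA is omitted.

import numpy as np

fs = 100e6          # ADC sample rate (100 MS/s, as on the N210)
f_if = 20e6         # assumed intermediate frequency for this example
N_dec = 10          # assumed decimation factor
n = np.arange(10000)
adc = np.cos(2 * np.pi * f_if * n / fs)        # real-valued ADC samples
nco = np.exp(-2j * np.pi * f_if * n / fs)      # NCO output: cos - j*sin
iq = adc * nco      # mixing: I is the real part, Q the imaginary part
# (the FPGA low-pass filters here; omitted for brevity)
out = iq[::N_dec]   # decimate by N -> 10 MS/s toward the host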
The USRP2 uses Gigabit Ethernet, which is significantly faster than USB. With complex samples of 4 bytes (16 bits for I and 16 bits for Q) and respecting the Nyquist criterion, a USB link yields a usable spectral band of 8 MHz, while Gigabit Ethernet, with a theoretical speed of 125 MB/s, allows a theoretical RF band of 31.25 MHz. Therefore, a band of 25 MHz can safely be achieved.
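The bandwidth figures above follow from simple arithmetic, reproduced below as a sanity check; the 32 MB/s USB figure is an assumption consistent with the 8 MHz band quoted in the text.

BYTES_PER_SAMPLE = 4          # 16-bit I + 16-bit Q per complex sample
USB_RATE = 32e6               # bytes/s (assumed effective USB throughput)
GIGE_RATE = 125e6             # bytes/s (1 Gbps / 8, theoretical)

# With complex (I/Q) sampling, usable RF bandwidth equals the sample rate.
usb_band = USB_RATE / BYTES_PER_SAMPLE    # 8e6   -> 8 MHz
gige_band = GIGE_RATE / BYTES_PER_SAMPLE  # 31.25e6 -> 31.25 MHz
print(f"USB: {usb_band / 1e6} MHz, GigE: {gige_band / 1e6} MHz")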
OFDM Transmitter and Receiver in GNU Radio
An OFDM transmitter designed in GNU Radio is presented in Figure 4.3.
The data stream to be transmitted is taken from a source file and first processed by the Stream to Tagged Stream block, which performs a serial-to-parallel conversion, forming packets of 96 bytes. A CRC parity field is then appended to each packet by the Stream CRC32 block, in order to detect errors. A header is added to the resulting packet by the Packet Header Generator block, and the payload is prepared for modulation by the Repack Bits block. The bit streams corresponding to the header and the payload are then mapped to transmit symbols by two Chunks to Symbols blocks: the header uses BPSK modulation, while the payload uses QPSK. The header and payload symbols are multiplexed by the Tagged Stream Mux block and allocated to different sub-carrier frequencies by the OFDM Carrier Allocator block, which distributes symbols in the time and frequency domains and adds the pilot symbols. The OFDM Cyclic Prefixer then adds the cyclic prefix needed to prevent inter-symbol interference. The processed data is sent to the USRP by the UHD: USRP Sink block, which connects to the actual device through an IP address specified in the block’s parameters.
Figure 4.3 – OFDM Transmitter in GNU Radio
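For reference, an equivalent transmitter can be sketched in Python using GNU Radio’s hierarchical OFDM block, which bundles most of the stages named above (header generation, BPSK/QPSK mapping, carrier allocation, cyclic prefix). This is a minimal sketch assuming a GNU Radio 3.7-era API; the device address, sample rate and FFT parameters are illustrative, and the standalone CRC stage of the flow graph above is not reproduced.

from gnuradio import gr, blocks, digital, uhd

class OfdmTransmitter(gr.top_block):
    def __init__(self, freq=5.1e9, gain=25, dev="addr=192.168.10.2"):
        gr.top_block.__init__(self)
        src = blocks.file_source(gr.sizeof_char, "input.txt", False)
        # Serial-to-packet conversion: tag the byte stream into 96-byte packets.
        tagger = blocks.stream_to_tagged_stream(
            gr.sizeof_char, 1, 96, "packet_len")
        # Hierarchical OFDM TX; header is BPSK by default, bps_payload=2 -> QPSK.
        ofdm = digital.ofdm_tx(fft_len=64, cp_len=16,
                               packet_length_tag_key="packet_len",
                               bps_payload=2)
        sink = uhd.usrp_sink(dev, uhd.stream_args(cpu_format="fc32"))
        sink.set_samp_rate(1e6)
        sink.set_center_freq(freq)
        sink.set_gain(gain)
        self.connect(src, tagger, ofdm, sink)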
Figure 4.4 – OFDM Receiver in GNU Radio
An OFDM receiver is presented in Figure 4.4.
The data stream is received through the USRP’s radio interface and read by the computer using the UHD: USRP Source block. Time-domain synchronization of the received OFDM symbols is performed by the Schmidl & Cox OFDM Sync block, which implements the Schmidl-Cox algorithm. The Header/Payload Demux block separates the header from the payload, using the information in the header to determine the length of the data packet. The OFDM Channel Estimation and OFDM Frame Equalizer blocks attenuate the losses the signal suffers while propagating over the radio interface. Reversing the operations performed at transmission yields a final stream of symbols, which is decoded by the Constellation Decoder block. The CRC parity fields of the resulting byte stream are verified by the Stream CRC32 block, which at the receiver operates in Check CRC mode. The data is finally saved to a destination file by the File Sink block.
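Mirroring the transmitter, a receiver sketch under the same assumptions (GNU Radio 3.7-era hierarchical block, illustrative device address and parameters) could be:

from gnuradio import gr, blocks, digital, uhd

class OfdmReceiver(gr.top_block):
    def __init__(self, freq=5.1e9, gain=25, dev="addr=192.168.10.3"):
        gr.top_block.__init__(self)
        src = uhd.usrp_source(dev, uhd.stream_args(cpu_format="fc32"))
        src.set_samp_rate(1e6)
        src.set_center_freq(freq)
        src.set_gain(gain)
        # Hierarchical OFDM RX: Schmidl & Cox sync, channel estimation,
        # equalization and demodulation bundled into one block.
        ofdm = digital.ofdm_rx(fft_len=64, cp_len=16,
                               packet_length_tag_key="packet_len",
                               bps_payload=2)
        sink = blocks.file_sink(gr.sizeof_char, "output.txt")
        self.connect(src, ofdm, sink)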
Using the above transmission and reception chains, transmissions were successfully performed between two N210 USRPs equipped with XCVR2450 RF daughterboards and operating at a frequency of 5.1 GHz.
Testing and Results
For testing purposes, a text file was sent (Figure 4.5).
Figure 4.5 – Text file sent
After compiling and executing the above systems, we can make a few observations.
During execution, the computer’s resources are fully used (as illustrated in Figure 4.6), meaning that such a system places high demands on computer hardware.
Figure 4.6 – Computer Resource Usage during execution
The first run was performed in the 5 GHz band (central frequency of 5.1 GHz), with emission and reception gains equal to 25 dB. The results can be seen in Figures 4.7 and 4.8.
Figure 4.7 – Amplitude as a function of time, and spectrum, at emission
Figure 4.8 – Amplitude as a function of time, and spectrum, at reception
The received file is shown in Figure 4.9. From the number of erroneous packets, a packet error rate was calculated (packets with errors per 100 received packets).
Figure 4.9 – Text file received
The table below shows the values obtained at different antenna gain settings.
Figure 4.10 – Results Table
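The packet error rate in the table follows the definition given above; a short helper makes the computation explicit (the counts below are hypothetical, for illustration only):

def packet_error_rate(errored_packets, received_packets):
    """Packets with errors per 100 received packets."""
    return 100.0 * errored_packets / received_packets

# Hypothetical counts, for illustration only:
print(packet_error_rate(8, 100))   # -> 8.0 (%)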
In conclusion, the best result (an 8% packet error rate) was obtained at a transmission and reception gain of 25 dB.
Conclusions and future directions for the project
In this project, we designed an OFDM system (transmitter and receiver) and implemented it with Software Defined Radio, using the GNU Radio platform and USRP hardware. We then tested the OFDM system with different parameter values (e.g. gain) in order to obtain a setup with better results.
For the data we used QPSK modulation, and for the header BPSK.
After analyzing all the results, we can conclude that this system is very sensitive to changes in certain parameters. As the reception gain decreases, the error rate rises significantly. Taking into account that the project is in an experimental phase, the results can be considered acceptable.
In conclusion, using Software Defined Radio, this platform accomplishes the functions of radio equipment at a lower cost. The system is highly flexible, since any parameter can be changed. To improve performance, more research is needed on OFDM systems running on USRPs and on 5G networks.
The successful implementation of the system opens many future paths involving MIMO (Multiple-Input Multiple-Output), block spreading and channel coding. The system can be deployed in various setups, including outdoors, and can be adapted to different standards such as IEEE 802.11 (WLAN) or DVB.
Conclusions and future applications
With the emergence of 5G mobile networks, Mobile Cloud Computing will be empowered to become an important technology that impacts everyone’s lifestyle and daily patterns. Mobile Cloud Computing will be able to retrieve and send information for us in an instant, making use of 5G’s remarkable speeds.
The proposed implementation, using the USRP hardware platform and GNU Radio software, proved to be a very cost-effective platform with remarkable flexibility and surprising performance given its low cost. The advantage I am most excited about is that the platform is highly customizable: with little effort (a click of a button or a slider) and no changes to the platform’s architecture, parameters such as frequency and gain can be modified. Thanks to GNU Radio’s ease of use, changes such as the modulation method or multiple-access method can be made by swapping a single block in the scheme. In my opinion, this makes GNU Radio a great tool for experimenting with and testing cutting-edge technologies like 5G before they enter production.
For managing the Cloud, I chose a decentralized Cloud operating system called SlapOS. The main reasons for choosing SlapOS were its ease of use and installation and its efficiency. It is also part of the GNU Project, which means it has full compatibility with any existing operating system. Using SlapOS, we create a fully flexible platform that can scale from managing a few Cloud nodes up to managing an entire global Cloud network.
In conclusion, by combining concepts and technologies like 5G, Software Defined Radio and Mobile Cloud Computing with powerful hardware like the USRP and software like GNU Radio and SlapOS, we can implement a powerful, flexible and cost-effective Cloud platform that achieves high performance and speed by making full use of 5G mobile networks.
Because the required research goes beyond the scope of a diploma thesis, a final product could be delivered as part of a future dissertation thesis, once research on the 5G standard and its implementation is completed.