
University “Politehnica” of Bucharest
Faculty of Electronics, Telecommunications and Information Technology

Design and implementation of Cloud based Internet services

Diploma Thesis
submitted in partial fulfillment of the requirements for the Degree of
Engineer in the domain of Electronic Engineering and Telecommunications,
study program Technologies and Systems of Telecommunications

Thesis Advisor: S.l. Dr. Ing. Șerban OBREJA
Student: Iulian-Valentin MOHORA

2017

Table of Contents

Table of Contents ..................................................... 4
List of figures ....................................................... 5
List of acronyms ...................................................... 6
Introduction .......................................................... 7
Chapter 1 – Cloud Computing ........................................... 8
    1.1 Cloud computing characteristics ............................... 8
        1.1.1 Cloud Architectures ..................................... 9
        1.1.2 Deployment models ...................................... 10
        1.1.3 Virtualization ......................................... 11
    1.2 Web .......................................................... 12
    1.3 LAMP Concept ................................................. 12
        1.3.1 Linux Operating System ................................. 13
        1.3.2 Apache Service ......................................... 14
        1.3.3 MySQL .................................................. 15
        1.3.4 PHP .................................................... 15
    1.4 Mail servers ................................................. 15
        1.4.1 SMTP protocol .......................................... 16
        1.4.2 POP3 protocol .......................................... 17
        1.4.3 IMAP protocol .......................................... 18
        1.4.4 Mail addresses ......................................... 18
    1.5 DNS servers .................................................. 18
    1.6 iRedMail ..................................................... 19
Chapter 2 – Security in the Cloud .................................... 20
    2.1 General aspects .............................................. 20
    2.2 Cloud Computing risks ........................................ 20
    2.3 IPTables technology .......................................... 22
Chapter 3 – Practical Implementation ................................. 24
Conclusion ........................................................... 28
Bibliography ......................................................... 29

List of figures

Figure 1.1 – Characteristics of Cloud computing ....................... 8
Figure 1.2 – Cloud Computing Architecture ............................. 9
Figure 1.3 – Cloud deployment model .................................. 10
Figure 1.4 – LAMP concept ............................................ 12
Figure 1.5 – Linux components ........................................ 13
Figure 1.6 – Linux architecture ...................................... 14
Figure 1.7 – SMTP in use ............................................. 16
Figure 1.8 – POP3 architecture ....................................... 17
Figure 1.9 – iRedMail installation structure ......................... 19
Figure 2.1 – Packet flow in Netfilter and General Networking ......... 23

List of acronyms

AWS Amazon Web Services
DB Database
DNS Domain Name System
EC2 Elastic Compute Cloud
GB Gigabyte
HTTP HyperText Transfer Protocol
IMAP Internet Message Access Protocol
IP Internet Protocol
ISP Internet Service Provider
LAMP Linux-Apache-MySQL-PHP
MDA Mail Delivery Agent
MSA Mail Submission Agent
MTA Mail Transfer Agent
MUA Mail User Agent
OS Operating System
PHP PHP: Hypertext Preprocessor
POP Post Office Protocol
RAM Random Access Memory
SMTP Simple Mail Transfer Protocol
SQL Structured Query Language
SSH Secure Shell
TCP Transmission Control Protocol
VM Virtual Machine

Introduction

This project aims to deploy a set of services, such as e-mail (Postfix + Dovecot) and web (LAMP), with redundancy. The final product could be used by companies that run a public site with client accounts and employee mail addresses.
To ensure that our services stay online 24 hours a day and that a system failure does not affect the main features we develop, we will use a platform with a backup solution that runs in the background at all times and takes over when needed.
The design could be improved by using more servers connected to the same network, but that would be more expensive and harder to implement, so I decided to start with this idea and see later whether anything else is needed. I will use a single computer running several virtual machines that behave like separate machines, each having its own resources and controlled by a master (host) operating system.

To realize this project, we will use the following technologies:

● 1 physical server with a minimum of 8 GB of RAM
● an operating system at the bare-metal level, with type 1 virtualization technology; two variants were considered:
 a) CentOS operating system + KVM virtualization (free, open source, no license needed)
 b) ESXi operating system with virtualization capability (license conditions must be analysed on the VMware site)
● 8 Linux servers (CentOS), distributed as follows:
 ➢ 2 DNS servers: ns1.example.com, ns2.example.com
 ➢ 2 web servers (HTTPS + PHP): web1.example.com, web2.example.com
 ➢ 2 mail servers (Dovecot + Postfix): mail1.example.com, mail2.example.com
 ➢ 2 database servers (MariaDB): db1.example.com, db2.example.com

To offer High Availability, all services will be configured for redundancy, permitting complete functioning of the stack even if 50% of the servers are down (assuming that one server in each layer – DNS, web, mail, database – remains up).
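As an illustration of the failover idea behind this redundancy, the sketch below (in Python, with hypothetical host names and an injectable probe function) picks the first reachable server in a pool; a real client would probe with an actual TCP connection to the service port.

```python
def first_available(hosts, probe):
    """Return the first host for which probe(host) succeeds.

    `probe` is any callable returning True when the host answers; in a
    real deployment it could attempt a TCP connection to the service port.
    """
    for host in hosts:
        if probe(host):
            return host
    raise RuntimeError("no server in the pool is reachable")

# Hypothetical pool matching the topology above: two servers per layer.
WEB_POOL = ["web1.example.com", "web2.example.com"]

# Simulate web1 being down: only web2 answers the probe.
up = {"web2.example.com"}
print(first_available(WEB_POOL, lambda h: h in up))  # web2.example.com
```

As long as at least one server per layer stays up, the probe always finds a working host, which is exactly the 50% failure tolerance described above.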

After discussing with my thesis advisor, we decided to use the free variant of the implementation, based on CentOS, so I started learning how to use this operating system before borrowing a server on which to start the project.

Project Overview:
After this Introduction, three more chapters follow:
Chapter 1 – Cloud Computing: the idea behind this concept and part of the technologies used (hardware and software), where it comes from and why it is so widely used nowadays.
Chapter 2 – Security in the Cloud: problems that could appear and solutions we may implement in order to have a safe platform.
Chapter 3 – Practical Implementation: step-by-step explanations of my work on this project.

Chapter 1 – Cloud Computing

1.1 Cloud computing characteristics

The main Cloud characteristics include broad network access, resource pooling, rapid scalability, and market and service orientation. These main characteristics are depicted in the next figure. The available service models are classified into SaaS (Software-as-a-Service), PaaS (Platform-as-a-Service), and IaaS (Infrastructure-as-a-Service), while the cloud deployment models are categorized into public, private, community, and hybrid Clouds.

Figure 1.1 – Characteristics of Cloud computing

Virtualization provides an efficient approach to managing resources: it allows them to be viewed as a pool of unified resources and allows applications to be cleanly separated from the hardware.
Virtualization gives Clouds a major benefit, namely scalability. Scalability is the ability of Clouds to scale resources up or down in a matter of minutes or seconds, in order to avoid over- or under-provisioning of the resources they lease.
The pay-per-use utility model refers to the fact that pricing fluctuates according to the expected QoS (Quality of Service), which means that consumers are only required to pay for the services they use, while providers can monetize poorly utilized resources.
Clouds exhibit autonomic behaviour in order to provide highly reliable services, fault tolerance and performance degradation management.

1.1.1 Cloud Architectures

As depicted in the next figure, the cloud computing system delivers several core services, namely infrastructure, platform, and software (application) services, known in industry as IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service), which are made available to consumers as subscription-based services.

Figure 1.2 – Cloud Computing Architecture

Infrastructure-as-a-Service (IaaS)
When using the Infrastructure-as-a-Service model, the consumer is given the opportunity to deploy and run the desired software on the fundamental computing resources provided: processing, storage and networking. The consumer does not control the underlying cloud infrastructure, but does control the operating systems, storage and deployed applications. The IaaS model gives access to virtual servers within minutes, and consumers benefit from pay-per-use billing. An example of IaaS is Amazon EC2, which provides users with a variety of resources such as CPU, memory, OS and storage to suit their particular needs. Additionally, API access to the infrastructure may be offered as an option.

Software-as-a-Service (SaaS)
SaaS gives the user the capability of using the provider's applications, available through an interface or a web browser and running on a cloud infrastructure. As in IaaS, the consumer does not control or manage the underlying cloud infrastructure; moreover, control and management of the OS, storage, network and servers is not granted either. Also, some user-specific application configuration settings can be limited.

Platform-as-a-Service (PaaS)
PaaS gives the user the capability of deploying onto the cloud infrastructure applications built with tools supported by the provider. As in SaaS, the consumer is not given control or management of the underlying cloud infrastructure, including network, storage, OS and servers. On the other hand, the consumer does have control over the deployed applications.

1.1.2 Deployment models

The Cloud middleware is deployed on physical infrastructures and delivers various services to consumers. In the literature, three commonly used deployment models are defined: hybrid, public and private cloud, as depicted in the figure below. There is also a fourth deployment model, the community cloud, but it is less commonly used.

Figure 1.3 – Cloud deployment model

Private cloud
The private cloud is designed for exclusive use, which means it is best suited for organizations that may wish to maintain their own specialized environment to meet their demands. An example is the health care industry, which needs to keep its confidential data private. Because of this privacy, a limitation in scalability may appear. However, consumers are given greater control over the infrastructure, which improves security.

Public cloud
Public clouds are designed to provide availability and open use to the general public and are most appealing for cutting IT costs. Some of the most popular public clouds are Amazon Web Services, Google AppEngine and Microsoft Azure. Besides being able to host individual services, the public cloud offers the possibility of using collections of services.

Hybrid cloud
The hybrid cloud emerged from combining the advantages of both the private and the public cloud. In this deployment model, organizations can outsource the information considered non-critical, while keeping their sensitive data private. Moreover, organizations can use the hybrid cloud as a private one and resort to the public model whenever they need to auto-scale their resources.

Community cloud
The community cloud is a special cloud environment which is shared and managed across several organizations, and can be administered by either third-party providers or organizational IT resources.

1.1.3 Virtualization

One of the most advantageous ways of reducing energy consumption is by virtualization. Virtualization is a technique through which multiple independent virtual operating systems can be run on a single physical machine. Thus, hardware independence, isolation of the guest operating system and encapsulation are provided. By encapsulation, all virtual machines are grouped into a single resource pool which can be altered or allocated dynamically. The simulated environment is called a virtual machine (VM).
By increasing the percentage of physical machine utilization, virtualization allows the same amount of processing to occur, but on a reduced number of servers. Consequently, because of the decreased number of necessary servers, the size and the consumption of the necessary cooling equipment will be drastically reduced.
Virtualization techniques can be divided as follows:
1) Emulation
Emulation is a virtualization approach which provides flexibility due to the fact that the hardware behaviour can be reproduced by a software program.
2) Hypervisor
Also known as a virtual machine monitor, the hypervisor is an intermediate layer between the operating system and the hardware, used for monitoring server resources according to consumer needs. The hypervisor controls the flow of instructions between the guest OS and hardware such as CPU, memory and storage.
3) Full virtualization
Full virtualization is a technique in which various operating systems and the applications they contain run on top of virtual hardware. The hypervisor manages the guest operating systems, represented by the virtual machines.
4) Paravirtualization
Unlike full virtualization, the guest OSs are aware of one another. Paravirtualization has poor portability and compatibility, as it cannot support unmodified operating systems. In a paravirtualized environment, various operating systems can run simultaneously.

1.2 Web

The Web is an information space where documents and other web resources are identified by Uniform Resource Locators (URLs), interlinked by hypertext links, and can be accessed using an Internet connection. It was invented in 1989. The word "web" is sometimes used loosely to refer to the Internet itself.
In 1988 the first direct IP connection between Europe and North America was made, and Berners-Lee began to openly discuss the possibility of a web-like system at CERN. "Imagine, then, the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document you could skip to them with a click of the mouse," said Berners-Lee of his concept for creating the web.
In 1990, with the help of his colleague Robert Cailliau, Berners-Lee published the "WorldWideWeb" project as a web hypertext document that could be opened in a browser. The first browser had the same name, and the first web server was a NeXT computer. The day of publication was December 20th.
The principal technologies described in Berners-Lee's book "Weaving the Web" were:
● a system of globally unique identifiers for resources on the Web and elsewhere, the universal document identifier (UDI), later known as the uniform resource locator (URL) and uniform resource identifier (URI);
● the publishing language: HyperText Markup Language (HTML);
● the HyperText Transfer Protocol (HTTP).

These technologies are still in use today, of course with many improvements: HTML has reached version 5, which can offer video streaming services, and HTTP has a secure version (HTTPS) that is becoming the default for all users. All social media sites, banks and transaction services use an HTTPS connection, which cannot easily be intercepted from outside. URLs are approximately the same: www.example.com (dots separate the names by domains or subdomains, WWW being the acronym of "WorldWideWeb").

1.3 LAMP Concept

Figure 1 .4 – LAMP concept

LAMP (Linux-Apache-MySQL-PHP) is an open source Web development platform that uses Linux as the operating system, Apache as the Web server, MySQL as the relational database management system and PHP as the object-oriented scripting language. Because the platform has four layers, LAMP is considered to be a stack.
This stack is used by a majority of websites because of its simplicity of configuration and use. We will discuss each part in the next subchapters.

1.3.1 Linux Operating System

Linux is a Unix-like computer operating system assembled under the model of free and open-source software development and distribution. The name comes from Linus Torvalds, who created the first Linux kernel on September 17th, 1991. It was originally developed for personal computers based on the Intel x86 architecture, but it has since been ported to more platforms and is the base of other operating systems such as Android or ChromeOS.
There are several hundred Linux distributions available, some of the most popular being Arch Linux, CentOS, Debian, Fedora, Linux Mint and Ubuntu (as free versions), and Red Hat Enterprise Linux and SUSE Linux Enterprise Server (as commercial ones).

The Linux operating system has primarily three components:
● Kernel – the core part. It is responsible for all major activities of the operating system. It consists of various modules and interacts directly with the underlying hardware. The kernel provides the abstraction required to hide low-level hardware details from system or application programs.
● System Library – special functions or programs used to access the kernel's features. These libraries implement most of the functionality of the operating system and do not require kernel-level access rights.
● System Utility – programs responsible for specialized, individual-level tasks.

Figure 1.5 – Linux components

Architecture:
● Hardware layer – processor, RAM, storage and the computer's other components;
● Kernel – core component of the OS, which interacts directly with the hardware and provides low-level services to upper-layer components;
● Shell – an interface to the kernel, hiding the complexity of the kernel's functions from users;
● Utilities – programs that provide most functionality to users.

Figure 1.6 – Linux architec ture

Linux is widely used for servers because of its advantages over other platforms such as Windows:
➢ Stability – It has been tested and demonstrated that Linux can run for years without failure, and it does not require a reboot for most configuration changes;
➢ Security – There are few viruses, malware programs or vulnerabilities on Linux, and only root users have administrative privileges, which keeps the kernel protected;
➢ Hardware requirements – Being slim, flexible and scalable, Linux can be tailored to your computer configuration by stopping the services you do not need.

I have chosen CentOS because it is similar to the commercial Red Hat distribution and it is free. 'The CentOS Project is a community-driven free software effort focused on delivering a robust open source ecosystem. For users, we offer a consistent manageable platform that suits a wide variety of deployments. For open source communities, we offer a solid, predictable base to build upon, along with extensive resources to build, test, release, and maintain their code.'1

1.3.2 Apache service

Apache is an open-source HTTP server built for Unix-like systems as well as Windows. It serves more than 100 million websites nowadays. It supports a variety of features, many implemented as compiled modules which extend the core functionality.

1 Official CentOS site – http://centos.org/, 2017

1.3.3 MySQL

MySQL is an open-source relational database management system (RDBMS). Its name is a combination of "My", the name of co-founder Michael Widenius' daughter, and "SQL", the abbreviation for Structured Query Language. It is written in C and C++ and it works on all platforms from AIX up to Windows.
Released in 1995, MySQL grew from year to year as a free platform, which made it widely used for online databases, up to 2008, when Sun Microsystems bought it for one billion dollars. After that, Oracle purchased Sun Microsystems (in 2009), so MySQL now belongs to Oracle. Because the main creator of MySQL wanted it to remain free, he started to develop another platform which is essentially the same under another name: MariaDB. MariaDB is the 'non-Oracle' SQL server we will use on our machine, because it has the same functionality and its license is not a problem.
A database is an organized collection of data including schemas, tables, queries, reports, views and other objects. MySQL (or MariaDB in our case) interacts with the user, who can add, remove or change these data using commands (or through an interface, where one is present).
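As a minimal illustration of such commands, the snippet below uses Python's built-in sqlite3 module as a stand-in for MariaDB (the basic SQL statements are the same); the `clients` table and its columns are invented for the example.

```python
import sqlite3

# An in-memory SQLite database stands in for MariaDB here;
# the CREATE/INSERT/SELECT statements are plain SQL.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
cur.execute("INSERT INTO clients (name, email) VALUES (?, ?)",
            ("Iulian", "iulian@etti.com"))
conn.commit()

cur.execute("SELECT name, email FROM clients WHERE name = ?", ("Iulian",))
row = cur.fetchone()
print(row)  # ('Iulian', 'iulian@etti.com')
```

Against a real MariaDB server the same statements would be issued through a client library or the `mysql` command-line interface instead.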

1.3.4 PHP

PHP (PHP: Hypertext Preprocessor) is a widely-used open source general-purpose scripting language that is especially suited for web development and can be embedded into HTML.
What distinguishes PHP from something like client-side JavaScript is that the code is executed on the server, generating HTML which is then sent to the client. The client receives the results of running that script, but does not know what the underlying code was. You can even configure your web server to process all your HTML files with PHP, and then there's really no way that users can tell what you have up your sleeve.
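The same separation can be imitated in any server-side language; the hypothetical Python function below plays the role of a PHP script: only its HTML output would ever reach the browser, never the logic that produced it.

```python
import html

def render_page(username: str) -> str:
    """Build the HTML a client would receive; the logic stays on the server."""
    # Escape user input, much as PHP's htmlspecialchars() does.
    safe = html.escape(username)
    return f"<html><body><p>Hello, {safe}!</p></body></html>"

# The client sees only the finished markup:
print(render_page("Iulian"))
```

Escaping the input before embedding it in HTML is the same precaution a PHP page would take against script injection.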

1.4 Mail servers

A typical mail server consists of many software components, each providing a specific function. Each component must be configured and tuned to work nicely with the others to provide a fully functioning mail server. Because they have so many moving parts, mail servers can become complex and difficult to set up. The main components of a mail server are:
• Mail Transfer Agent
• Mail Delivery Agent
• IMAP and/or POP3 Server

The Mail Transfer Agent (MTA) is used to send mail from our users to an external MTA and to receive mail from others. It handles SMTP (Simple Mail Transfer Protocol) traffic, which we will describe in the next subchapter. On Linux, Postfix and Sendmail are well-known MTA software.

The Mail Delivery Agent (MDA) retrieves mail from the MTA and places it in the user's mailbox. It is also called a Local Delivery Agent because it works only with local storage on our server. We can also use Postfix here, but Dovecot is available as well.
IMAP and POP3 are protocols used by mail clients to read or send messages. Software that does this job includes Courier, Dovecot and Zimbra.

1.4.1 SMTP protocol

Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (e-mail) transmission defined by RFC 821 in 1982. It was last updated in 2008 with Extended SMTP additions by RFC 5321, which is the protocol in widespread use today.
SMTP communication between mail servers uses TCP port 25. Mail clients, on the other hand, often submit outgoing e-mail to a mail server on port 587. Despite being deprecated, mail providers sometimes still permit the use of the nonstandard port 465 for this purpose.
E-mail is submitted by a mail client to a mail server using SMTP on TCP port 587. Most
mailbox providers still allow submission on traditional port 25. Often, these two agents are instances
of the same software launched with different options on the same machine. Local processing can be
done either on a single machine, or split among multiple machines; mail agent processes on one
machine can share files, but if processing is on multiple machines, they transfer messages between
each other using SMTP, where each machine is configured to use the next machine as a smart host.
Each process is an MTA (an SMTP server) in its own right.
Next, we have a sketch of an SMTP working diagram, where:
– MUA = Mail User Agent (mail client);
– MSA = Mail Submission Agent (mail server);
and the other acronyms are ones we already know:

Figure 1.7 – SMTP in use
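The MUA-to-MSA submission step from the diagram can be sketched with Python's standard smtplib. The host name, account and addresses are placeholders, and the actual submission call is left commented out because it needs a live mail server listening on port 587.

```python
import smtplib
from email.message import EmailMessage

def build_message(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """Assemble the message the MUA hands to the MSA."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def submit(msg: EmailMessage, host: str = "mail1.example.com") -> None:
    """Submit via port 587; STARTTLS upgrades the connection before login."""
    with smtplib.SMTP(host, 587) as smtp:
        smtp.starttls()
        smtp.login("user", "password")  # placeholder credentials
        smtp.send_message(msg)

msg = build_message("iulian@etti.com", "admin@etti.com", "Test", "Hello from the MUA.")
# submit(msg)  # not executed here: it requires a reachable MSA
```

From the MSA onward, relaying between MTAs proceeds over port 25 exactly as the text describes.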

1.4.2 POP3 protocol

Post Office Protocol (POP) is an application-layer Internet standard protocol used by local e-mail clients to retrieve e-mail from a remote server over a TCP/IP connection. POP has been developed through several versions, with version 3 (POP3) being the one used nowadays.
POP3 listens on TCP port 110 for connections from e-mail clients, authenticates the client, and manages the connection with the client.
The authentication store is the repository of user information needed to authenticate the user. The store can be the Active Directory database, the local SAM database, or the encrypted password file for the user. The authentication module accesses the authentication store to verify the credentials submitted by the client to the POP3 service.
The Mail Storage Access API is the common interface to the mail store for all processes. The POP3 service, the SMTP delivery service for POP3, and the POP3 Server Administrator use the API to access the mail store.
The mail store uses the file system for storage. The mail store is typically located on the same server as the POP3 service, but it should be located on a different local or network volume than the operating system to avoid potential disk space problems. For large mail stores, the mail store can be placed on a Network Attached Storage (NAS) device and accessed by one or more servers running the POP3 service. Even though the mail store is contained in the file system, it is accessed by using the Mail Storage Access API.
The SMTP delivery service for POP3 is the component that transfers e-mail from the SMTP service to the user mailboxes. The delivery service is notified by the SMTP service when new e-mail arrives. New e-mail is delivered to the mail store by means of the Mail Storage Access API.
The next figure shows how this protocol works:

Figure 1.8 – POP3 architecture
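The client side of this retrieval cycle can be sketched with Python's standard poplib; the connection details are placeholders and the call at the bottom is commented out since it needs a live POP3 server.

```python
import poplib

POP3_PORT, POP3S_PORT = 110, 995  # plain POP3 vs. TLS-wrapped POP3

def pop3_class_and_port(use_ssl: bool):
    """Pick the poplib class and port for a plain or TLS connection."""
    return (poplib.POP3_SSL, POP3S_PORT) if use_ssl else (poplib.POP3, POP3_PORT)

def download_inbox(host: str, user: str, password: str, use_ssl: bool = True):
    """POP3 style: authenticate, count messages, download them all."""
    cls, port = pop3_class_and_port(use_ssl)
    conn = cls(host, port)
    conn.user(user)
    conn.pass_(password)
    count, _size = conn.stat()
    messages = [b"\n".join(conn.retr(i + 1)[1]) for i in range(count)]
    conn.quit()
    return messages

# download_inbox("mail1.example.com", "iulian", "secret")  # needs a live server
```

Note that a typical POP3 client would also delete the messages after retrieval, which is exactly the limitation the next subchapter's protocol addresses.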

1.4.3 IMAP protocol

Internet Message Access Protocol (IMAP) was introduced as a more advanced alternative to POP3, because some users wanted to do more than download messages from the server. If mail could remain on the server, it could be accessed from multiple machines (for example, if you want to read your e-mail from a PC and from your smartphone, POP3 is not what you need).
IMAP is a more advanced protocol which solves these problems. It stores all the messages on the server and lets us organize them in folders. Using a client, we can access these folders and search live in them, making our lives easier.
This protocol uses port 143 and is offered by well-known mail providers such as Gmail or Yahoo.
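A rough sketch of that server-side model, using Python's standard imaplib with placeholder connection details; note that the messages are only listed, not downloaded or deleted, which is the key difference from POP3.

```python
import imaplib

def list_folder(host: str, user: str, password: str, folder: str = "INBOX"):
    """Connect over IMAPS (port 993; plain IMAP uses 143), select a folder
    read-only, and return the UIDs of the messages living on the server."""
    conn = imaplib.IMAP4_SSL(host)
    conn.login(user, password)
    conn.select(folder, readonly=True)  # messages stay on the server
    status, data = conn.uid("SEARCH", None, "ALL")
    conn.logout()
    return data[0].split() if status == "OK" else []

# list_folder("mail1.example.com", "iulian", "secret")  # needs a live server
```

Because every client sees the same server-side folders, the PC and the smartphone from the example above stay in sync automatically.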

1.4.4 Mail addresses

The general format of an email address is name@domain, for example iulian@etti.com. An address consists of two parts separated by an @ character. The part before the @ symbol is called the local-part and identifies the name of a mailbox. This is often the username of the recipient, iulian in our case. There may be several users on the server, each having his or her own mailbox. The part after the @ symbol (the domain) is a domain name that represents the administrative authority for the mailbox, for example a company's or university's domain name.
The local-part may use any ASCII characters (Latin letters, digits, special characters, dots and spaces). Comments are allowed in parentheses: (me)iulian@etti.com is the same as iulian@etti.com. Some servers do not allow certain characters or do not distinguish between uppercase and lowercase.
The domain name part has to conform to strict guidelines: it must match the requirements for a hostname, a list of dot-separated DNS labels, each label being limited to a length of 63 characters and consisting of letters, digits and hyphens (-), the hyphen not being the first or last character. This rule is known as the LDH rule (letters, digits, hyphen). In addition, the domain may be an IP address literal, surrounded by square brackets [].
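These two rules (splitting at the @ sign, and the LDH constraint on domain labels) can be expressed directly in code. The sketch below handles only plain unquoted ASCII addresses; it is not a full RFC validator.

```python
def split_address(address: str):
    """Split a plain address into (local-part, domain) at the last '@'."""
    local, sep, domain = address.rpartition("@")
    if not sep or not local or not domain:
        raise ValueError(f"not a valid address: {address!r}")
    return local, domain

def _label_ok(label: str) -> bool:
    """One DNS label: 1-63 chars, letters/digits/hyphen, hyphen not at an end."""
    if not (0 < len(label) <= 63):
        return False
    if label[0] == "-" or label[-1] == "-":
        return False
    return all(c.isascii() and (c.isalnum() or c == "-") for c in label)

def domain_is_ldh(domain: str) -> bool:
    """Apply the LDH rule to every dot-separated label of the domain."""
    return all(_label_ok(label) for label in domain.split("."))

print(split_address("iulian@etti.com"))  # ('iulian', 'etti.com')
print(domain_is_ldh("etti.com"))         # True
print(domain_is_ldh("-bad-.com"))        # False
```

Quoted local-parts, comments and IP address literals would all need extra handling on top of this.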

1.5 DNS server s

The Domain Name System (DNS) is a TCP/IP protocol service that translates domain names and hostnames into the corresponding IP addresses. It helps us remember websites and mail addresses, because it is easier to memorize a company name than the series of numbers representing its IP address. A DNS server is a machine that contains a database of IP addresses associated with their hostnames.
Google runs a public DNS service, one of the largest in the world, which resolves almost any host on the Internet and picks up new domains every day. OpenDNS is another widely used international DNS service, and every ISP also runs DNS servers for its users.
Using the BIND software, we will configure a private DNS server which will communicate with the public ones, announcing our addresses to them, and we will configure some security measures on our machine.
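To make the translation concrete, the sketch below builds the raw query packet a resolver such as BIND would answer, following the RFC 1035 wire format (a 12-byte header plus one question); only construction is shown, no packet is actually sent.

```python
import struct

def build_query(name: str, qtype: int = 1, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query (RFC 1035): 12-byte header + one question.

    qtype 1 asks for an A record (an IPv4 address). The packet could be
    sent over UDP port 53 to a resolver, but here it is only constructed.
    """
    # Header: id, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; a zero byte terminates the name.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

packet = build_query("example.com")
print(len(packet))  # 29: 12 header + 13 QNAME + 4 QTYPE/QCLASS
```

The server's answer reuses the same header layout and echoes the question before the resource records, which is what tools like `dig` decode for us.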

1.6 iRedMail

Configuring all the services and files needed for a mail server can seem difficult for beginners like most of us. To help, a team created an automation script which performs these steps, asking only for the main settings such as server name, country and administrator account.
'iRedMail is designed to be deployed on a FRESH server system, which means your server does NOT have mail related components installed, e.g. MySQL, OpenLDAP, Postfix, Dovecot, Amavisd, etc. iRedMail will install and configure them for you automatically. Otherwise it may override your existing files/configurations although it will backup files before modifying, and it may not be working as expected,'2 they say.
Using their program and their tutorial, we can set up our mail server, which consists of the components shown in the figure below:

Figure 1.9 – iRedMail installation structure

2 Official iRedMail website – http://iredmail.org, 2017

Chapter 2 – Security in the Cloud

2.1 General aspects

Several current cloud computing applications involve services provided to end-users, including mail and social networks. These services collect and store terabytes of data, including personal information, in data centers in countries all over the world.
Personal data protection and confidentiality management can determine the success or failure
of many cloud services.
Security and confidentiality problems are complicated by data being stored on different servers with different levels of protection. Interception and other surveillance activities by government agencies are also considered an issue. There have been instances in the past when information-gathering activities bordered on commercial espionage. On the other hand, legitimate access for law enforcement agencies can be difficult. Protection of intellectual property, especially copyright protection, will also be a problem. There have already been attempts to make Internet Service Providers (ISPs) responsible for preventing unauthorized sharing of copyrighted materials. It is not yet clear how these issues will be resolved in the cloud.
Ensuring information security in cloud computing requires three levels of security: network security, server security and application security. These security needs are present in the internal infrastructure and are directly affected by the access policies and workflows of the entity that holds and manages its resources.
When an entity moves to cloud computing, security challenges arise at each of the three levels, as well as challenges concerning the operation of the business and the people involved in system management. Although these security challenges are exacerbated by cloud computing, they are not created by it.
Encryption cannot be a complete solution, because the data must be decrypted in certain situations, so that it can be processed and so that the normal functions of data management, indexing and sorting can be carried out. Thus, although data in transit and stored data may well be encrypted, the cloud service provider's general need to decrypt can be a security problem.
However, cloud services can be secured by e-mail filtering (including back-up and spam filtering), web content filtering and vulnerability management; all of these can improve security. Some threats are better handled by large data centers (for example, Distributed Denial of Service attacks, which attempt to prevent an Internet site or service from operating). Cloud-based applications are less vulnerable to such attacks.
Identity and access management, and the related policies for the use of cloud services, must be equivalent to an organization's current practices and must be able to interoperate with existing applications.

2.2 Cloud Computing risks

Many of the risks frequently associated with Cloud platforms are not new; they can be found in many of today's companies. Efficient planning of risk management activities is crucial in ensuring that information is available but at the same time protected.

Business processes and procedures must take security into account, and information security managers must adjust their companies' policies and procedures to meet the needs of the business.
Given the dynamic business environment focused on globalization, there are very few companies that expose information about their operations externally. Engaging in a relationship with a third party means not only using the services and technology of the cloud platform provider, but also considering how the provider operates, the architecture of the offered solutions, and the cultural and national policies of the provider. Some examples of risks that need to be managed are:
1) An organization must be very careful when choosing a provider. The provider's reputation, history and sustainability are factors to consider. Sustainability is of great importance to ensure that services will remain available and the data can be tracked.
2) The cloud platform provider should take responsibility for managing the information, which is a key point. Failure to achieve a certain level of service quality can have a major impact not only on data confidentiality but also on data availability, seriously affecting workflows.
3) The dynamic nature of cloud technology can lead to confusion as to where the information actually resides, which may create delays when information needs to be retrieved.
4) Third-party access to sensitive information creates a risk of compromising confidential information. In cloud computing, this can pose a significant threat to the protection of intellectual property and trade secrets.
5) Cloud platforms allow systems to provide high-availability levels that are often impossible to achieve in private networks, except with massive investments. The disadvantage is the possibility of information being mixed with that of other Cloud customers, including competitors. Complying with rules and laws in different geographic regions can also be a challenge for companies. At this moment, there are few legal precedents regarding the assumption of responsibility for Cloud platforms. It is extremely important to obtain adequate legal support to ensure that the contract clearly stipulates the areas where the Cloud platform provider is responsible.
6) Due to the dynamic nature of Cloud platforms, information may not be locatable immediately in case of a disaster. Business continuity and disaster recovery plans should be well documented and tested. The Cloud provider must understand its role in terms of backup, incident management and recovery services. The recovery time objectives should be written down in the contract.

These risks, as well as any others that an organization identifies, must be managed efficiently. An organization must put in place a robust risk management process that is flexible enough to cover them. In an environment where confidentiality has become extremely important for enterprise customers, unauthorized access to data in the Cloud is a critical concern.
When signing a contract with a Cloud services provider, a customer must have an inventory of their data and ensure that the data are classified and labelled properly. This helps determine what needs to be specified when developing a Service Level Agreement (SLA) – usually signed as part of the service contract – identifying the need for encryption of stored or transmitted data, as well as additional controls for sensitive information.
The SLA also defines the relationship between the customer and the Cloud services provider, and it is one of the most effective tools for ensuring that information stored in the Cloud platform is adequately protected. The SLA is the instrument by which customers can specify which control models will be used and describe their expectations regarding external audits. In addition, the requirements for business continuity and disaster recovery (discussed above) are set out in this agreement.
The implementation and use of Cloud services must be considered not only in the context of "internal" vs. "external", which refers to the physical location of data, resources and information, but also when defining consumer services and the responsibilities for their governance and security, as well as compliance with policies and standards.
This is not to suggest that the location (on or off the premises) of an asset, resource, or piece of data does not affect how security and risk are handled, but rather that the risk depends on:
1) The types of data, resources and information managed
2) Who manages them and how
3) The selected controls and how they are integrated
4) Compliance issues

For example, a LAMP solution installed on Amazon AWS EC2 would be classified as a public, off-premise IaaS solution managed by a third party, even if the instances and their applications/data are managed by the consumer or by a third party. A custom solution that serves multiple departments, installed on "Eucalyptus" under the management of a corporation, could be described as a private, on-premise, self-managed SaaS application. Both examples exhibit the elasticity and the "self-service" capability of Cloud platforms. The Cloud Cube model highlights the challenges of understanding and applying cloud standards such as ISO/IEC 27002, a standard which offers "a series of guidelines and general principles for initiating, implementing, maintaining and improving information security within an organization."
One of the major advantages of cloud computing is cost efficiency, a direct result of savings from solution scalability, a high level of reuse, and standardization. To achieve these economies, Cloud providers must offer a high level of flexibility in order to address a large segment of the market. Unfortunately, integrating security into these solutions is often perceived as making them more rigid. This rigidity often makes it impossible to maintain parity between the security controls available in Cloud environments and those in traditional IT environments.
This stems largely from the abstraction of the infrastructure, as well as from the lack of visibility and the inability to integrate many familiar security controls, especially at the network layer.

2.3 IPTables technology

To provide network protection on our platform, we must deny all connections except the needed ones. This can be done using IPTables, which comes with the Linux OS and is efficient enough for creating our rules for accessing the platform.
As we can deduce from its name, IPTables is based on tables containing chains of rules for the treatment of packets. The origin of a packet determines which chain it traverses first. There are five predefined chains:

1) PREROUTING – Packets will enter this chain before a routing decision is made.

2) INPUT – Packet is going to be delivered locally. This does not depend on any process having an open socket; local delivery is controlled by the "local-delivery" routing table: 'ip route show table local'.
3) FORWARD – All packets that have been routed and were not for local delivery will
traverse this chain.
4) OUTPUT – Packets sent from the machine itself will be visiting this chain.
5) POSTROUTING – The routing decision has been made. Packets enter this chain just before being handed off to the hardware.

A good explanation of the data flow in the network and of how the firewall handles it is shown in the figure below:

Figure 2.1 – Packet flow in Netfilter and General Networking

Netfilter is a framework provided by the Linux kernel and used by the IPTables utility.
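The traversal order of these chains can be summarized as follows (a simplified sketch of the Netfilter figure above; the NAT and mangle tables are omitted):

```text
# Which predefined chain handles which packet (summary of the list above):
#   incoming, addressed to this host : PREROUTING -> INPUT -> local process
#   incoming, to be forwarded        : PREROUTING -> FORWARD -> POSTROUTING
#   generated by a local process     : OUTPUT -> POSTROUTING
```

In a default-deny setup, each service that must be reachable therefore needs its own ACCEPT rule in the INPUT chain.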

Chapter 3 – Practical Implementation

My learning process started with an Amazon Cloud Server, because initially I did not have a powerful computer on which to develop this project. I created a free account, accessed my VM by SSH (using PuTTY), and began to install the needed services to learn how to configure them.
From there I learnt how to install an Apache server for the website and MariaDB (a community-developed fork of MySQL available for Linux). But the most important thing I learnt is that a machine must be secured very well when it is accessible from the Internet.
How did I discover this? I was a victim of hacking. My VM had SSH port 22 open, and the root password was probably brute-forced. At the end of May I had a bill of over 300$, and I had to call Amazon support to explain what had happened and to ask them to waive the charges, because I was a beginner, I did not know what could happen, and it was a large amount of money relative to my income.
After solving this problem, I gave up on this idea. I closed my account and borrowed a laptop with the minimum hardware requirements for realizing my project: an HP ProBook 6470b with an Intel Core i5 CPU and 8GB of RAM.
The first step was to install the host operating system – CentOS 7, 64-bit version. I downloaded a disk image from the official website, burnt it to a CD and started the installation.
After installing the host OS, I downloaded the KVM software and configured the virtual machines using the wizard as follows: to leave 1GB of RAM for the host OS, all 8 machines together should total 7GB. Four of them (one for each role, the master ones) each have 1GB of RAM, and the other four (the backup machines) have 768MB (0.75GB), because they will be used only on demand, when a master fails. All of them can use all 4 logical CPUs (the physical CPU has 2 cores and 4 threads), and each has 30GB of storage.
The next step was to configure the local network between the host and the guest machines. I used only IPv4, because it is easier and many big services have not yet moved to IPv6. By default, KVM uses the 192.168.122.0 network with mask 24 (255.255.255.0), but it also comes with a DHCP server configured, so at every reboot a machine could receive a different IP address. This could create problems in our project, so I had to configure the addresses manually using the 'ifcfg-eth0' file, with the 'NetworkManager' service disabled. Because the host machine is automatically seen as 192.168.122.1 in the KVM structure and 192.168.122.255 is the broadcast address, I chose the machines' addresses in the range .101-.108. Before setting up my own DNS servers, I used Google's open DNS service (8.8.8.8 and 8.8.4.4) to have Internet access on my machines. The configuration file looks as follows:

[root@DB1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO=none
IPADDR=192.168.122.101
PREFIX=24
GATEWAY=192.168.122.1
DNS1=192.168.122.103
DNS2=192.168.122.104
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT=no
NAME="eth0"
UUID="baedf189-305f-49dc-bd56-15c44d540c7d"
DEVICE="eth0"
ONBOOT="yes"

All machines have the same configuration (only the IPADDR parameter differs), except the DNS ones, which do not have DNS1 and DNS2 because they are themselves the DNS servers.
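Because the files differ only in IPADDR, the eight fragments can be generated mechanically; the loop below is a sketch (it omits the UUID, NAME and DNS lines, which vary per machine, and on the real machines each file was edited by hand):

```shell
#!/bin/sh
# Generate minimal ifcfg-eth0 fragments for the guests .101-.108 (sketch).
for i in $(seq 101 108); do
  cat > "ifcfg-eth0.$i" <<EOF
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.122.$i
PREFIX=24
GATEWAY=192.168.122.1
ONBOOT=yes
EOF
done
grep IPADDR ifcfg-eth0.101
# Prints: IPADDR=192.168.122.101
```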
With the internal network configured, I began to install the services on each machine, working in parallel two by two (master and slave) for each service.
Trying to access the machines' ports via the terminal, I discovered that all of them were blocked. So I decided to turn off the firewall (IPTables) for the moment and to configure it in the final steps, since it was more difficult than the LAMP part of the project.
The first and easiest part was the MariaDB configuration. I used the YUM installer to download and install the MySQL server and client packages, and I ran the 'mysql_secure_installation' script to automate the configuration process. I tested the remote connection via the terminal, and afterwards I installed an Apache server with PHP support on the same machines, for easier access via the phpMyAdmin web interface.
For the second part, because I had already configured Apache, MySQL and PHP on 2 machines and had gained some experience with them, I installed these services on the WEB machines (only the client part of MySQL, because they will connect to the mail database remotely). Using online tutorials and my gained experience, I set up the WordPress platform and the MyBB forum on our machines.
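For the WEB machines to reach the database remotely, the MariaDB masters must hold an account that is allowed to connect from the local network. A sketch of such a grant, run in the mysql client on a DB master, is shown below; the user name, password and database name are placeholders, not the real credentials:

```sql
-- Illustrative only: allow a web-tier account to connect from 192.168.122.0/24
CREATE USER 'webuser'@'192.168.122.%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON forum_db.* TO 'webuser'@'192.168.122.%';
FLUSH PRIVILEGES;
```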
Leaving the mail part for the end, because I found it very hard, I started to configure the DNS servers using the BIND9 service. For better security and an easy setup, I moved the configuration from the 'named' service to the 'named-chroot' one, and used files that look like this:

• 'named.conf', which is the main configuration file where the service looks for our settings (it is set to listen on port 53 on any IP and to allow queries from any host; the final lines contain the domain configurations: iulian.com and the reverse requests):

cat /var/named/chroot/etc/named.conf

options {
listen-on port 53 { any; };
#listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };

/*
- If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
- If you are building a RECURSIVE (caching) DNS server, you need to enable
  recursion.
- If your recursive DNS server has a public IP address, you MUST enable access
  control to limit queries to your legitimate users. Failing to do so will
  cause your server to become part of large scale DNS amplification
  attacks. Implementing BCP38 within your network would greatly
  reduce such attack surface
*/
recursion yes;

dnssec-enable yes;
dnssec-validation yes;

/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";

pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";
};

logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};

zone "." IN {
type hint;
file "named.ca";
};

zone "iulian.com" {
type master;
file "iulian.com.zone";
};

zone "0.122.168.192.in -addr.arpa" IN {
type master;
file "192.168.122.0.zone";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

• 'iulian.com.zone', where I configured the nameserver and mail server records, and the A records (which map names to IP addresses) for the web and database machines:

cat /var/named/chroot/var/named/iulian.com.zone
;
; Addresses and other host information.
;
$TTL 86400
@ IN SOA iulian.com. root.iulian.com. (
2017082501 ; Serial
43200 ; Refresh
3600 ; Retry
3600000 ; Expire
2592000 ) ; Minimum

; Define the nameservers and the mail servers

IN NS ns1.iulian.com.
IN NS ns2.iulian.com.
IN A 192.168.122.103
IN A 192.168.122.104
IN MX 10 mx1.iulian.com.
IN MX 10 mx2.iulian.com.

server IN A 192.168.122.103
server IN A 192.168.122.104
mx1 IN A 192.168.122.105
mx2 IN A 192.168.122.106
ns1 IN A 192.168.122.103
ns2 IN A 192.168.122.104
www IN A 192.168.122.107
www IN A 192.168.122.108
mail IN A 192.168.122.105
mail IN A 192.168.122.106
db IN A 192.168.122.101
db IN A 192.168.122.102


• '192.168.122.0.zone', a file which contains the reverse PTR records that map the IP addresses back to the specified domain names:

cat /var/named/chroot/var/named/192.168.122.0.zone
$TTL 86400
@ IN SOA iulian.com. root.iulian.com. (
2017082401 ; Serial
43200 ; Refresh
3600 ; Retry
3600000 ; Expire
2592000 ) ; Minimum

0.122.168.192.in-addr.arpa. IN NS server.iulian.com.

105 IN PTR mx1.iulian.com.
106 IN PTR mx2.iulian.com.
103 IN PTR ns1.iulian.com.
104 IN PTR ns2.iulian.com.
107 IN PTR www.iulian.com.
108 IN PTR www.iulian.com.
105 IN PTR mail.iulian.com.
106 IN PTR mail.iulian.com.
101 IN PTR db.iulian.com.
102 IN PTR db.iulian.com.

The mail part created a lot of problems for me: I started by installing Dovecot and Postfix and configured them using some online tutorials, but the service could not be accessed via Mozilla Thunderbird, RoundCube or other applications. After some research, I discovered iRedMail, an application which automatically installs all the needed packages and configures them after asking us for a minimum of information. Everything worked fine, and after rebooting the machines the mail service was still working. I added some test accounts via the iRedAdmin web platform and logged in via the RoundCube interface. The generated files for the SMTP and POP3 configuration are saved on my project CD, because they have hundreds of lines and would take up a lot of space here.
The last configuration part was the firewall, followed by some tests. I disabled the 'firewalld' service from CentOS 7, because its settings are harder to manage and it does the same job as the old IPTables, with which we are familiar. After this, I enabled and started the 'iptables' service and added the rules needed to access our platform (locally, for the moment). The commands looked like:
iptables -I INPUT -s 192.168.122.0/24 -p tcp --dport 80 -j ACCEPT
where '-I' comes from Insert; INPUT/OUTPUT indicates the direction of the requests (for example, mail needs rules in both directions); with '-s' I specified that this rule applies to all IPs in that network with that mask; '-p' indicates the protocol used (TCP or UDP); '--dport' is used to select the port; and '-j' comes from jump to this target. ACCEPT/DROP is the final word, which sets whether this rule will accept or drop the specified traffic.
After adding the rules, the 'service iptables save' command is used to save the configuration and keep it after a restart.
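After 'service iptables save', the rules end up in /etc/sysconfig/iptables. A possible saved file for this platform is sketched below; the port list is reconstructed from the services described in this chapter, not copied from the real machines:

```text
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -s 192.168.122.0/24 -p tcp --dport 80 -j ACCEPT   # HTTP: site and forum
-A INPUT -s 192.168.122.0/24 -p udp --dport 53 -j ACCEPT   # DNS queries
-A INPUT -s 192.168.122.0/24 -p tcp --dport 25 -j ACCEPT   # SMTP
-A INPUT -s 192.168.122.0/24 -p tcp --dport 110 -j ACCEPT  # POP3
-A INPUT -s 192.168.122.0/24 -p tcp --dport 143 -j ACCEPT  # IMAP
-A INPUT -s 192.168.122.0/24 -p tcp --dport 3306 -j ACCEPT # MariaDB (WEB -> DB)
COMMIT
```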
The testing part consisted of checking the open ports with the nmap utility from the host machine, pinging all the nameservers and connecting to them with the designated applications (I used Firefox as the web browser to test my site, forum, mail and database connection).
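A sketch of the kind of checks performed from the host is shown below (the commands are illustrative, assume nmap and bind-utils are installed, and require the live platform to return results):

```shell
nmap -p 22,25,53,80,110,143,3306 192.168.122.101-108   # port scan of all guests
ping -c 3 ns1.iulian.com                               # reachability of a nameserver
dig @192.168.122.103 www.iulian.com +short             # should list .107 and .108
```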
Everything worked fine locally, so my purpose was achieved. The next target could be to buy a public domain and to configure it using my ISP's public IP address (I could pay for a static one).

Conclusion

After some months of working on this project, I have learnt about the main Cloud perspectives, such as the LAMP platform and mail services. I was put in the situation of learning the hard way what security means for a server and how we must protect it. This solution could be scaled up onto more physical machines in a network.
More and more companies are using this configuration nowadays, because it is the era of the Internet: from fruit producers who could sell their products on a website, up to big telecommunications companies – all of them use Internet and e-mail services.
The advantage of this solution is that, by using virtualization and high availability, we can keep our services up 24 hours per day and use them from everywhere over an Internet connection.
The Cloud platform should have a structure that provides access to Cloud data only after a security check has been passed. The remediation and recommendation solutions explored in this thesis concern the delivery mechanism, and the user interfaces should provide flexible security options. We can create multiple security levels in relation to the users to obtain a better security setup.
In conclusion, the project was finished successfully and the targets that I had established were achieved. I look forward to gaining more experience in this direction by getting a job at a company which uses these services.

Bibliography

[1] – S. Kumar Garg and R. Buyya, “Green Cloud Computing and Environmental Sustainability”, 2012.
[2] – M. Bertoncini, B. Pernici, I. Salomie and S. Wesner, “GAMES: Green Active Management of Energy in IT Service centres”, 2012.
[3] – A. Uchechukwu, K. Li and Y. Shen, “Energy Consumption in Cloud Computing Data Centers”, 2014.
[4] – A. Murtazaev and S. Oh, “Sercon: Server Consolidation Algorithm using live migration of virtual machines for Green Computing”, 2011.
[5] – R. Buyya, S. Kumar Garg and R. Calheiros, “SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions”, 2011.
[6] – “History of the World Wide Web”, 2017, https://en.wikipedia.org/wiki/History_of_the_World_Wide_Web – accessed on 20.06.2017
[7] – “Operating System – Linux”, 2017, https://www.tutorialspoint.com/operating_system/os_linux.htm – accessed on 20.06.2017
[8] – “Five Reasons Linux Beats Windows for Servers”, 2010, http://www.pcworld.com/article/204423/why_linux_beats_windows_for_servers.html – accessed on 21.06.2017
[9] – “What is PHP?”, 2017, http://php.net/manual/en/intro-whatis.php – accessed on 01.07.2017
[10] – “Why You May Not Want to Run Your Own Mail Server”, 2014, https://www.digitalocean.com/community/tutorials/why-you-may-not-want-to-run-your-own-mail-server – accessed on 02.07.2017
[11] – “The History of Electronic Mail”, Tom Van Vleck, 2013, http://www.multicians.org/thvv/mail-history.html – accessed on 02.07.2017
[12] – “How E-mail Works”, 2017, http://computer.howstuffworks.com/e-mail-messaging/ – accessed on 02.07.2017
[13] – “iRedMail Documentation”, 2017, http://www.iredmail.org/docs/ – accessed on 01.08.2017
