Performance and scalability of
Android Low-pass filter
Agnes Bolovan
Faculty of Automatic Control and Computers
University POLITEHNICA of Bucharest
Bucharest, Romania
Abstract— The Cloud is developing fast and new services are constantly being made available. In order to be efficient and effective, Cloud solutions must satisfy several standards: elasticity, high availability, robustness, etc. In this article we focus on the first one. We prove the elasticity of the Cloud by demonstrating the scalability of the Android Low-pass filter.
In order to achieve that, we divided our program into three main sections and ran each of them in a distributed environment. The tests we made revealed the evolution of the execution time as the number of machines increases. Based on the results, we were able to determine the optimal number of machines for each section of the algorithm.
The conclusion we reached at the end of our experiment is that the Android Low-pass filter runs optimally using different amounts of resources at different points of its execution, depending on the complexity of the operations.
Keywords— android; low-pass filter; cloud; elasticity; accelerometer; scalability; distributed systems
I. INTRODUCTION
We will start by explaining what a "low-pass filter" is and by introducing the basic concepts.
In theory, a low-pass filter attempts to pass low-frequency signals unchanged, while attenuating signals whose frequency is higher than a so-called "cutoff frequency". A simple example is illustrated in the figures below. Let's assume we have two sine waves, one at 3 Hz and the other at 6 Hz. They might look like this:
Figure 1. Two signals, at 3 Hz and 6 Hz. Source: 1
By applying an ideal low-pass filter with a cutoff frequency of 4 Hz, the result would be to eliminate the 6 Hz signal, while leaving the 3 Hz signal unchanged, as shown in Figure 2.
Figure 2. 3 Hz signal. Source: 1
However, in reality things are slightly different, because a low-pass filter will not eliminate the 6 Hz signal completely. Instead, it will attenuate it.
Figure 3. Previous signals after applying a low-pass filter. Source: 1
A low-pass filter can be used in image processing, audio filtering and digital signal processing.
In this paper we use the Android Developer low-pass filter to calculate the linear acceleration from the output of the accelerometer integrated in smartphones. The sensor provides the values of the acceleration and other forces that act on the device over the three axes: x, y and z. In order to obtain the linear acceleration (or, in other words, the real acceleration) we must isolate the force of gravity. The solution to this problem consists in applying a low-pass filter to the original input values.
Several transformations are applied to the input, as shown below. By convention, the array "gravity" stores the force of gravity and "input" holds the initial values. The coefficient "alpha", also named the "filter coefficient", determines how much weight should be applied to the signal. Depending on this value, the filter can be more or less restrictive.
Alpha is calculated with the following formula:
α = T / (T + dt)
where T is the low-pass filter's time constant and dt is the delivery rate.
gravity[0] = alpha * gravity[0] + (1 - alpha) * input[0];
gravity[1] = alpha * gravity[1] + (1 - alpha) * input[1];
gravity[2] = alpha * gravity[2] + (1 - alpha) * input[2];   (*)

output[0] = input[0] - gravity[0];
output[1] = input[1] - gravity[1];
output[2] = input[2] - gravity[2];   (**)
The array "output" holds the values of the linear acceleration over the axes x, y and z.
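As a concrete illustration, the fragment below is a minimal C sketch (our own, not the exact code used in our experiments) that combines the alpha formula with the two transformations above for a single accelerometer sample; the sample values and the time constant are hypothetical.

#include <stdio.h>

/* Apply the low-pass filter to one accelerometer sample and
 * derive the linear acceleration (illustrative sketch). */
static void lowpass_step(const double input[3], double gravity[3],
                         double output[3], double T, double dt)
{
    double alpha = T / (T + dt);              /* filter coefficient */
    for (int i = 0; i < 3; i++) {
        /* (*)  isolate the force of gravity */
        gravity[i] = alpha * gravity[i] + (1.0 - alpha) * input[i];
        /* (**) linear acceleration = raw input minus gravity */
        output[i] = input[i] - gravity[i];
    }
}

int main(void)
{
    double gravity[3] = {0.0, 0.0, 0.0};
    double output[3];
    double sample[3] = {0.2, 0.1, 9.6};       /* hypothetical accelerometer reading */

    lowpass_step(sample, gravity, output, 0.18, 0.02);
    printf("linear acceleration: %.3f %.3f %.3f\n",
           output[0], output[1], output[2]);
    return 0;
}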
A low-pass filter is useful not only when working with Android sensors, but also in areas like image and audio processing. In the next section we focus on its other uses. Moreover, we present similar papers, which test an algorithm's scalability.
II. RELATED WORK
One of the main selling arguments for Cloud solutions is high availability. This characteristic is also included in most service level agreements (SLAs) [1]. Another important argument is the robustness of service composition, because it complements reliability and availability [2]. There are several algorithms that evaluate a system's reliability, such as scalable algorithms based on non-sequential Monte Carlo simulation [3], or hybrid methods, which combine MTTF/MTTR and CTMC models [4].
When we talk about modern systems, the primary criterion for appraising their superiority is the capacity to satisfy the increasing demand for high performance and energy saving [5], [6]. Various resources, such as CPU and memory storage, are allocated to multiple virtual machines, so the problem of resource sharing has a significant impact on both performance and energy consumption. There is always a trade-off between energy consumption and task completion time when the VMs execute a task [7]. Many techniques have been proposed to control energy consumption: dynamic voltage and frequency scaling [5], virtual resource management [8], and distributed resource sharing technology [9].
In this paper, however, we focus on two characteristics of cloud services: elasticity and scalability. Even though many people use these terms interchangeably, there is a difference between them. Elasticity describes how well the system's architecture can adapt to the workload in real time. Elasticity adapts to both workload increases and workload decreases; scalability, on the other hand, adapts only to workload increases, by providing resources in an incremental manner. It means increasing the capacity to serve an environment when the workload grows. We can say that a system design is scalable if it can be deployed at a range of scales, in an economic way, in both small and large configurations. A new measure of scalability for distributed systems, called P-scalability, has even been defined [10].
The key characteristic of the cloud environment is elasticity. On one hand, it allows applications to adjust to changing demands by acquiring or releasing resources dynamically. On the other hand, the difficult part consists in deciding the right amount of resources on a pay-as-you-go basis. Several auto-scaling techniques have been proposed. In [11] the authors classify these techniques into five main categories: static threshold-based rules, control theory, reinforcement learning, queuing theory and time series analysis.
Reference [12] presents a cost-aware system, named Kingfisher, which provides support for elasticity in the cloud. It uses an optimized selection of the virtual server configuration in order to minimize cost and, through various mechanisms, it reduces the time needed to transition to new configurations. It also illustrates alternatives to the trade-off between the cost of server resources and the time required to scale the application.
Using Cloud infrastructure for scalable applications also involves several challenges, such as evaluating the performance of Cloud provisioning policies and resource performance models under customized user configurations and requirements. Paper [13] exposes a way to overcome these problems by proposing a simulation toolkit (CloudSim), which is able to simulate Cloud computing systems and application provisioning environments. CloudSim provides custom interfaces for implementing policies and provisioning techniques.
A. Singh and M. Malhotra [14] proposed an agent-based framework for scalability in Cloud Computing, supported by algorithms for finding another available cloud when the current cloud becomes overloaded. The framework makes use of mobile agents and associates a mobile agent with each public/private cloud. In [15] the authors present a framework that automatically manages the elastic deployment of component-based applications. The proposed mechanisms provide application scaling in a public cloud computing cluster.
Scalability is essential in a system because it contributes to efficiency, quality and competitiveness. Cloud scalability is critical to cloud/SaaS vendors, because it assures the quality of cloud elasticity needed to support SaaS and cloud services [16]. Applications are offered the cloud platform's resources to scale during runtime; however, in [17] the authors claim that just-in-time scalability is not achieved by simply deploying the application on the platform, and that developers are forced to rewrite their applications in order to achieve the resource utilization that is being demanded.
In the last part of this section we present similar works, which focus on demonstrating the scalability of other commonly used types of filters. Unlike us, in [18] the authors used a parallelized version of the Kalman filter. It was used for developing and testing an Ocean General Circulation Model, called Poseidon. Reference [19] proposes scalable Bloom filters. Such filters are useful for providing space-efficient storage of sets, where the cost is defined by the probability of false positives on membership queries. The scalable implementation is able to adapt dynamically to the number of elements that are stored. Nowadays, filters are used more and more in many technological fields, such as video compression or data analysis and reduction, just like in our case. Another example of a filter used for data reduction is the spam filter presented in [20]. The filter is used for global social email networks and concentrates on providing an efficient, distributed manner to search for a user's content in a network, while keeping a minimal traffic cost on the network. In [21] the authors use a motion-compensated temporal filter for video coding, as it provides high scalability.
III. EXPERIMENTAL METHODOLOGY
As we briefly presented in the introduction, the purpose of this paper is to test the performance of the Android Low-pass filter in a distributed environment. In order to do that, we measured the execution time and obtained a set of measurements, which were subsequently analyzed and interpreted. We used the data received from the accelerometer integrated in a smartphone, over the three axes, as input. The accelerometer output was stored in a text file.
Algorithm division
We divided the algorithm into three main parts and evaluated the performance of each separately.
The three sections of our program are:
- isolate the force of gravity;
- calculate the linear acceleration;
- calculate the standard deviation for each variable.
Next, we describe each section and specify the type and complexity of the operations that are executed. Our decision is based on the fact that during the program's execution the amount of resources allocated differs from one stage to another, depending on the complexity of the operations. Because of this, the tests we made allowed us to prove the elasticity of the system when running on multiple machines.
The first section of the program consists in isolating the force of gravity. This involves arithmetic operations, such as addition and multiplication. The corresponding source code fragment is marked with (*) in the introduction. At the end of this stage we obtain an array representing the force of gravity over the three axes, which we use in the second section of the program.
The second section calculates the linear acceleration by subtracting the gravity values calculated in the previous section from the initial input (source code fragment marked with (**) in the introduction).
The third and last section of our program calculates the standard deviation of each of the variables x, y and z, associated with the three axes. We chose this as part of our program because it proves to be very handy when it comes to determining the outliers in the distribution of the data.
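As an illustration of this use (our own example, not part of the measured program), a sample can be flagged as an outlier when it lies more than, say, three standard deviations away from the average:

/* Flag values lying more than 3 standard deviations from the mean
 * as outliers (illustrative threshold). */
static int is_outlier(double value, double mean, double stddev)
{
    double deviation = value > mean ? value - mean : mean - value;
    return deviation > 3.0 * stddev;
}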
Hardware and software capabilities
Our demo source program is written in C and uses the Message Passing Interface (MPI) to run the algorithm on multiple machines. For each part of the program mentioned above we measured the execution time. The tests were run on Intel Xeon E5630 machines (2.53 GHz, 16 cores, 32 GB of memory).
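The general structure of such an MPI program, sketched below under our own naming assumptions rather than taken from the actual sources, is that the process with rank 0 acts as the master and the remaining ranks act as slaves:

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this machine's id  */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);  /* number of machines */

    if (rank == 0) {
        /* master: read the accelerometer file, scatter chunks,
         * collect partial results (omitted in this sketch) */
    } else {
        /* slaves: receive a chunk and process it (omitted) */
    }

    MPI_Finalize();
    return 0;
}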
Measurement procedure
Besides the data processing time, the program also requires communication between machines. For example, at the beginning the data was read from the text file by the master and then divided into small chunks and sent to the slaves. In order to obtain a more accurate and relevant result, we only measured the actual processing time, without taking into account the time used for data transfer.
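A minimal sketch of how such a measurement can be isolated with MPI is shown below; it is our own illustration, not the exact instrumentation used in the experiments, and process_chunk is a hypothetical placeholder for the per-rank work. The timer brackets only the local computation, and the slowest rank defines the section's processing time.

#include <mpi.h>

/* Stand-in for the per-rank computation of one program section. */
static void process_chunk(double *chunk, int n)
{
    for (int i = 0; i < n; i++)
        chunk[i] *= 0.5;
}

/* Time only the local processing step, excluding communication. */
static double timed_processing(double *chunk, int n)
{
    MPI_Barrier(MPI_COMM_WORLD);            /* align ranks before timing */
    double t0 = MPI_Wtime();
    process_chunk(chunk, n);
    double local = MPI_Wtime() - t0;

    double max_time = 0.0;
    /* The slowest rank determines the section's processing time. */
    MPI_Reduce(&local, &max_time, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    return max_time;                        /* meaningful on rank 0 only */
}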
Data was read from the text file as subsets of rows and then divided into smaller chunks of equal size, the exception being the last chunk, which can hold a larger amount of data if the number of rows is not exactly divisible by the number of machines used for processing.
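For example, a chunk split consistent with this description could be computed as in the short sketch below (our own assumption; the paper does not list this code):

/* Split "rows" rows among "nprocs" machines: every rank gets
 * rows / nprocs rows, and the last rank also takes the remainder. */
static int chunk_rows(int rows, int nprocs, int rank)
{
    int base = rows / nprocs;
    if (rank == nprocs - 1)
        return base + rows % nprocs;   /* last chunk may be larger */
    return base;
}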
Of all the sections of the program, the third one requires the largest amount of communication between machines. In order to calculate the standard deviation we first had to calculate the variance of each variable and then extract the square root. The formula for calculating a variable's variance is given below:
S² = Σ(x - med)² / (n - 1)
where x is the analyzed variable, med is the variable's average and n is the number of observations registered.
The steps of this part are listed below (a simplified MPI sketch of this exchange follows the list):
- the master divides the original data and sends small chunks to the slaves;
- each slave calculates its partial average and sends it to the master;
- the master calculates the final average and sends it to the slaves;
- the slaves calculate their partial sums of squared deviations and send them to the master;
- the master sums up the values and calculates the final variance;
- the master extracts the square root and obtains the standard deviation.
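The following C fragment is a minimal MPI sketch of this two-pass exchange for a single variable. It is our own simplified illustration, not the program used in the experiments: each rank is assumed to already hold its chunk of the data, and the slave-to-master-to-slave exchange of the average is collapsed into a single MPI_Allreduce for brevity.

#include <math.h>
#include <mpi.h>

/* Two-pass distributed standard deviation for one variable. */
static double distributed_stddev(const double *chunk, int chunk_len, int total_n)
{
    /* Pass 1: partial sums -> global average, shared with every rank. */
    double local_sum = 0.0, global_sum = 0.0;
    for (int i = 0; i < chunk_len; i++)
        local_sum += chunk[i];
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);
    double mean = global_sum / total_n;

    /* Pass 2: partial sums of squared deviations, gathered on the master. */
    double local_sq = 0.0, global_sq = 0.0;
    for (int i = 0; i < chunk_len; i++)
        local_sq += (chunk[i] - mean) * (chunk[i] - mean);
    MPI_Reduce(&local_sq, &global_sq, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    /* The master computes variance = sum / (n - 1) and its square root. */
    return sqrt(global_sq / (total_n - 1));   /* meaningful on rank 0 only */
}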
Running tests
The number of virtual machines used was incremented sequentially and multiple consecutive tests were run for each test case. The average of the results was used in the interpretation, in order to ensure the most accurate result.
In the following section we present in detail the results of the tests we ran and the conclusions we reached by analyzing those results.
IV. EXPERIMENTAL RESULTS
In this section we analyze and interpret the results of our tests by graphically illustrating the correlation between the number of machines used for testing and the execution time of the program. The X and Y axes represent the number of machines and the execution time in seconds, respectively.
For the first section of the algorithm the results are presented below, in Figure 4.
Figure 4. First section of the program: isolate the force of gravity
Incrementing the number of machines brings a significant improvement in the execution time up to the 4th machine. From the 4th machine to the 12th, the improvement becomes less noticeable. We can observe that, starting with the 13th machine, the execution time does not show a significant improvement over the previous case, when 12 machines were used. In fact, the values recorded are quite similar. Starting with the 16th machine we do not observe any improvement in the execution time. That means that using more than 16 machines does not bring any benefit and only means extra resources consumed for no effect.
Now let's analyze the evolution of the execution time when using between 12 and 16 machines. As shown in Figure 4, there is a small but visible improvement. However, the effect does not compensate for the effort: we face a trade-off between the amount of resources allocated and the improvement visible in the execution time. Which option is more cost-efficient? Taking all this into account, we conclude that the optimal number of machines for section one of the algorithm is 12.
For the second section of the algorithm we obtained the following results, illustrated in Figure 5.
Figure 5. Second section of the program: calculate the linear acceleration
For the second section of the program the execution time shows a roughly exponential decrease. We observe that it evolves in a similar way to the first case, discussed above. Unlike the previous case, however, using between 12 and 16 machines still brings improvements in the execution time. The similarity with the first case is more noticeable from the 16th machine up to the 20th, because the execution time does not record any evolution, remaining constant all the way. Because of this, we consider 16 to be the optimal number of machines for the second section.
The execution of the last section of the program provides the results shown in Figure 6.
Figure 6. Third section of the program: calculate the standard deviation
In this case, we observe that using either 8 or 12 machines is almost the same thing in terms of the evolution of the execution time. A small improvement is visible starting with the 13th machine; the execution time continues to decrease until the 16th machine, after which a constant evolution is estimated. A steep descent takes place up to the 8th machine; thus, the line plot shows that the optimal number of machines is 8, since going beyond 8 machines does not produce any efficient results.
The interpretations we made show that, depending on the complexity of the operations, a program runs optimally in a distributed environment using a different number of machines at different moments of its execution.
For the Android Low-pass filter that we tested, each part of the program yielded a different value for the optimal number of machines that should be provided for execution. The final results are visible in Figure 7.
Figure 7. Optimal number of machines for each section of the program
V. CONCLUSIONS
The purpose of our paper was to demonstrate the scalability of the Android Low-pass filter and, thus, the elasticity of the Cloud.
The tests we made prove that specific algorithms, when run in a distributed environment, need different amounts of resources at different moments during execution in order to work properly and, most importantly, in a cost-efficient manner. The statistics presented in Figure 7 summarize our experiment and demonstrate our hypothesis.
References
[1] Nygard, M. (2007). Release It!: Design and Deploy Production-Ready Software. Pragmatic Bookshelf.
[2] Chauvel, F., Song, H., Ferry, N., & Fleurey, F. (2015). Evaluating robustness of cloud-based systems. Journal of Cloud Computing, 4(1), 1-17.
[3] Snyder, B., Ringenberg, J., Green, R., Devabhaktuni, V., & Alam, M. (2015). Evaluation and design of highly reliable and highly utilized cloud computing systems. Journal of Cloud Computing: Advances, Systems and Applications, 4(1), 11.
[4] Xuejie, Z., Zhijian, W., & Feng, X. (2013). Reliability evaluation of cloud computing systems using hybrid methods. Intelligent Automation & Soft Computing, 19(2), 165-174.
[5] Aydin, H., Melhem, R., Mossé, D., & Mejía-Alvarez, P. (2004). Power-aware scheduling for periodic real-time tasks. IEEE Transactions on Computers, 53(5), 584-600.
[6] Hanning, W., Weixiang, X., Yang, J., Wei, L., & Chaolong, J. (2013). Efficient processing of continuous skyline query over smarter traffic data stream for cloud computing. Discrete Dynamics in Nature and Society, 2013.
[7] Zhang, H., Li, P., & Zhou, Z. (2015). A Correlated Model for Evaluating Performance and Energy of Cloud System Given System Reliability. Discrete Dynamics in Nature and Society, 2015.
[8] Nguyen Van, H., Dang Tran, F., & Menaud, J. M. (2009, May). Autonomic virtual resource management for service hosting platforms. In Proceedings of the 2009 ICSE Workshop on Software Engineering Challenges of Cloud Computing (pp. 1-8). IEEE Computer Society.
[9] Foster, I., Kesselman, C., Nick, J. M., & Tuecke, S. (2002). Grid services for distributed system integration. Computer, 35(6), 37-46.
[10] Jogalekar, P., & Woodside, M. (2000). Evaluating the scalability of distributed systems. IEEE Transactions on Parallel and Distributed Systems, 11(6), 589-603.
[11] Lorido-Botran, T., Miguel-Alonso, J., & Lozano, J. A. (2014). A review of auto-scaling techniques for elastic applications in cloud environments. Journal of Grid Computing, 12(4), 559-592.
[12] Sharma, U., Shenoy, P., Sahu, S., & Shaikh, A. (2011, June). A cost-aware elasticity provisioning system for the cloud. In Distributed Computing Systems (ICDCS), 2011 31st International Conference on (pp. 559-570). IEEE.
[13] Calheiros, R. N., Ranjan, R., Beloglazov, A., De Rose, C. A., & Buyya, R. (2011). CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Software: Practice and Experience, 41(1), 23-50.
[14] Singh, A., & Malhotra, M. (2012). Agent based framework for scalability in cloud computing. International Journal of Computer Science & Engineering Technology (IJCSET), 3(4), 41-45.
[15] Kächele, S., & Hauck, F. J. (2013, April). Component-based scalability for cloud applications. In Proceedings of the 3rd International Workshop on Cloud Data and Platforms (pp. 19-24). ACM.
[16] Gao, J., Bai, X., & Tsai, W. T. (2011). Cloud testing: issues, challenges, needs and practice. Software Engineering: An International Journal, 1(1), 9-23.
[17] Yang, J., Qiu, J., & Li, Y. (2009, September). A profile-based approach to just-in-time scalability for cloud applications. In Cloud Computing, 2009. CLOUD'09. IEEE International Conference on (pp. 9-16). IEEE.
[18] Keppenne, C. L., & Rienecker, M. M. (2002). Initial testing of a massively parallel ensemble Kalman filter with the Poseidon isopycnal ocean general circulation model. Monthly Weather Review, 130(12), 2951-2965.
[19] Almeida, P. S., Baquero, C., Preguiça, N., & Hutchison, D. (2007). Scalable Bloom filters. Information Processing Letters, 101(6), 255-261.
[20] Kong, J. S., Boykin, P. O., Rezaei, B. A., Sarshar, N., & Roychowdhury, V. P. (2005, July). Scalable and Reliable Collaborative Spam Filters: Harnessing the Global Social Email Networks. In CEAS.
[21] Golwelkar, A. V., & Woods, J. W. (2003, June). Scalable video compression using longer motion compensated temporal filters. In Visual Communications and Image Processing 2003 (pp. 1406-1416). International Society for Optics and Photonics.