Full length Article
Extended study of network capability for cloud based control systems
Jan Schlechtendahl a,*, Felix Kretschmer a, Zhiqian Sang b, Armin Lechler a, Xun Xu b
aInstitute for Control Engineering of Machine Tools and Manufacturing Units (ISW), University of Stuttgart, 70174 Stuttgart, Germany
bDepartment of Mechanical Engineering, University of Auckland, Auckland 1010, New Zealand
Article info

Article history: Received 31 March 2015; received in revised form 7 July 2015; accepted 21 October 2015

Keywords: Cloud based manufacturing; Cyber physical system; Realtime communication

* Corresponding author. Fax: +49 711 685 82808. E-mail address: jan.schlechtendahl@isw.uni-stuttgart.de (J. Schlechtendahl).

Abstract
Current control systems are limited from a technical viewpoint in areas such as scalability, start-up and reconfiguration time, and computational complexity of algorithms. These limitations call for a new concept for control systems to address current and future requirements. It has been suggested that the physical location of the control system be moved from the machine to a cloud, i.e. control system as a service (CSaaS). In this way, the control system becomes scalable and can handle highly complex computational tasks while keeping the process know-how. Utilizing the capabilities of modern Wide Area Networks (WAN) and Local Area Networks (LAN), the control system can be connected with the rest of the machine, e.g. drives, sensors, devices and HMI. This approach, however, presents new challenges, i.e. the requirement to integrate network, cloud computing and control system expertise. This paper focuses on the communication requirements of a cloud based control system.

© 2015 Elsevier Ltd. All rights reserved.
1. Introduction
“Intelligence is the ability to adapt to change.” – Stephen Hawking
This quote from Stephen Hawking is also valid for production systems today. Only intelligent production systems can meet the requirements of the flexible production of the 21st century, with its increasing demand for versatility and scalability. Intelligent production systems can be developed only if the control system is intelligent, while current machine controls are not. Limited in areas such as reconfiguration ability [1], security [2] and computational power [3], the machine control demands a radically new concept. In recent years, cloud computing has been dramatically changing how enterprises and industries organize their business. Cloud manufacturing adopts the concept of cloud computing, i.e. virtualized manufacturing resources are distributed through the Internet as services, and encourages resource sharing and collaboration, especially among small and medium sized enterprises [4]. Researchers in the cloud manufacturing area mainly focus on the system architecture [5], service management [6], and enabling technologies of cloud manufacturing, such as the virtualization of manufacturing resources [7] and service oriented technology [8].
The approach chosen in the joint research project between the Institute for Control Engineering of Machine Tools and Manufacturing Units (ISW), University of Stuttgart, Germany, and the Department of Mechanical Engineering, University of Auckland, New Zealand, is to provide a control system as a service (CSaaS) from a cloud environment. In this way, the control system becomes scalable and can handle highly complex computational tasks while retaining the process know-how. Utilizing the capabilities of modern Wide Area Networks (WAN) and Local Area Networks (LAN), the control system can be connected with the rest of the machine, e.g. drives, sensors, devices and HMI. For the owner of the machine, there is no difference to a conventional machine control. This approach, however, presents new challenges, i.e. requirements for the integration of network, cloud computing and control system expertise.
A Networked Control System (NCS) is a control system in which the components, including controller, sensors, actuators and other system components, exchange information over a shared medium or network [9]. For several decades, researchers have investigated the influence of network imperfections, including delay and dropout, on system performance, e.g. scalability [10], stability [11], and quality of control (QoC) [12]. The system performance is also application dependent; for example, the sampling rate [9] and whether the sampling is time-varying or constant [13] will influence the system performance. Among industrial control networks, realtime Ethernet (RTE) has gained popularity in the manufacturing area because Ethernet is faster, more capable of transmitting large quantities of data and cheaper in hardware [14]. Apart from that, it maintains the possibility of migrating from home and office Ethernet to RTE, using standard hardware/software components and permitting the coexistence of Ethernet and RTE
on the same cable [15]. Three realtime classes are identified: soft
real-time with a scalable cycle time used for shop-floor monitoring, hard realtime with cycle times of 1–10 ms used for process control, and isochronous realtime with cycle times of 250 μs to 1 ms and a jitter below 1 μs used for motion control [16]. The Ethernet is based on
the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) mechanism, which cannot guarantee time-deterministic transmission [17]. To deal with that issue, many adopted solutions involve modifications to the Ethernet, which can be categorized into three different approaches. The first one modifies the top layer rather than the TCP/UDP/IP protocols and is called “on top of TCP/IP”. Modbus/TCP, EtherNet/IP, P-Net and Vnet/IP belong to that category. The second one bypasses the TCP/UDP/IP protocols (“on top of Ethernet”). Ethernet Powerlink (EPL), Time-Critical Control Network (TCnet), Ethernet for Plant Automation (EPA) and PROFINET CBA are typical examples. The last one modifies the Ethernet mechanism and infrastructure to reach real-time performance (modified Ethernet), which includes SERCOS III, EtherCAT and PROFINET IO [15]. However, in the cloud based control scenario, the communication between the cloud and the machine goes through different Internet service providers' Wide Area Networks (WANs), which may use different technologies, e.g. fiber, xDSL or coaxial cable, with different architectures in Layer 1 and Layer 2 of the Open Systems Interconnection (OSI) model. At the boundary of WANs and Local Area Networks (LANs), hubs, switches, routers and firewalls are deployed. To enable data transmission through the Internet, no modification to Layers 1–3 of the OSI model may be performed [18]. With the first solution mentioned above, communication over the Internet is possible; however, it may require exclusive use of the network. For better realtime performance, it may also utilize the IEEE 1588 [19] protocol, whose packets cannot travel through ordinary routers. The second solution modifies the TCP/UDP/IP protocols, which makes it hardly possible to pass through firewalls. The last solution relies on modified Ethernet hardware; even general Ethernet hardware and software components cannot meet its specification.
Providing a control system as a service over networks creates additional challenges resulting from the nature of the IT infrastructure. Communication errors can occur along a communication route and have to be handled by a communication protocol [20]. The errors “repetition”, “loss”, “insertion”, “wrong sequence”, “falsification” and “delay” are also relevant for CSaaS and have to be detected and resolved by a communication protocol or even by the cloud machine control.
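As an illustration of how several of these error classes can be detected, the following sketch gives each telegram a header with a consecutive counter, a send timestamp and a payload checksum. The field layout and the delay limit are illustrative assumptions, not the protocol used in this work, and the delay check assumes reasonably synchronized clocks:

```python
import struct
import time
import zlib

# Assumed header layout: sequence number, send timestamp, CRC32 of payload.
HEADER = struct.Struct("!IdI")

def pack_telegram(seq: int, payload: bytes) -> bytes:
    return HEADER.pack(seq, time.time(), zlib.crc32(payload)) + payload

def classify(telegram: bytes, expected_seq: int, max_delay_s: float = 0.1) -> str:
    """Return the detected error class of a received telegram, or 'ok'."""
    seq, sent_at, crc = HEADER.unpack_from(telegram)
    payload = telegram[HEADER.size:]
    if zlib.crc32(payload) != crc:
        return "falsification"           # payload no longer matches its checksum
    if seq < expected_seq:
        return "repetition"              # counter already seen
    if seq > expected_seq:
        return "loss_or_wrong_sequence"  # gap: telegrams missing or reordered
    if time.time() - sent_at > max_delay_s:
        return "delay"                   # arrived too late to be useful
    return "ok"
```

"Insertion" would additionally require authenticating the sender, which a plain checksum cannot provide.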
As a first test of CSaaS, a communication analysis was carried out between two servers located in Germany. The results, which support the idea of a CSaaS, were presented at the SPS/IPC/Drives congress in Nuremberg, 2013. However, the very short communication path and the limited period (only 1–2 h) could not provide sufficient data to identify the challenges a WAN poses to CSaaS.
This paper presents an opportunity analysis for the communication of control data between a machine and a cloud-based control. The opportunity analysis is based on two scenarios. In the first scenario, the cloud-based control is located in Stuttgart, Germany, whereas the “machine” is located in Auckland, New Zealand. In the second scenario, the cloud-based control is also located in Stuttgart, Germany, but the “machine” has been moved to a Google cloud center located in Europe.
In the first section of the paper, requirements for the communication between cloud and machine are defined based on two use cases. Building on the use cases, a network test setup is described in the second section of the paper, which locates the control system's communication module in Stuttgart and the dummy communication module of a simulated machine in Auckland or at the Google cloud Europe. The test setup is extended by a monitoring solution to analyze the network behavior. In the third section of this paper, the communication monitoring results are discussed with respect to parameters that have a big impact on the production process. In the final section, strategies are presented for both use cases that could resolve some of the communication challenges of Section 3. The paper ends with a conclusion on whether a cloud-based machine control could be possible.
2. Use cases and communication requirements
As a first step towards a control system as a service, a general understanding of the data transferred between control system and machine has to be developed. It is important to know what type and amount of data is transferred at which cycle time. Further knowledge is needed about the impact of the data on the process results and whether the data is part of a control loop. This analysis has been carried out based on two use cases. One use case is based on a five-axis milling machine located in Stuttgart, the other on a three-axis milling machine in Auckland.
2.1. Use case 1: five-axis milling machine
For the first use case, an Exeron HSC 5-axis milling machine is used. For the Exeron HSC 600, three types of data streams could be identified, all of which involve two-way communication with the control system:

●Data that is exchanged between spindle/axis drives and the control system. This data includes the drive control and status word, setpoint and actual position. The origin and destination of these data is the computerized numerical control (CNC).

●Data that is exchanged between the machine control and I/O terminals. This data is linked to the programmable logic controller (PLC), where, e.g., pumps are controlled and sensor information is evaluated for plausibility checks [21].

The third data stream originates and ends in the human machine interface (HMI). Actions taken by the machine user have to be transferred to the machine control (e.g. start NC program), and feedback values (e.g. current line of the executed NC program and axis positions) have to be transmitted back for visualization.
Table 1 shows the data streams with the corresponding amount of data and cycle time. The direction is relative to the machine control. These values are the ones configured by the vendor of the machine.
A closer look at the transferred data, especially for the I/O terminals, reveals that a cycle time of 1 ms is not necessarily required. The internal bus of the I/O module and the clamp used can increase the reaction time dramatically.
As a second step, control loops in the machine control that depend on status values coming from the machine have to be identified. For the Exeron HSC 600 machine, the most critical control loops result from the axes and the CNC system. The CNC performs the following:

●Check if the actual position is within a monitoring window around the commanded position. If this is not the case, an error is set. This behavior results in the requirement to either deactivate or widen the monitoring window, or to do the comparison within a short cycle time, for example by shifting this checking procedure closer to the machine.
●Check if changes to the drive control are executed according to the expectations of the CNC and correctly reported by the drive over the drive status. A possible solution would either be a deactivation of the checking procedure or always reporting a good drive status to the CNC.
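The first check above can be sketched as follows. The function names and numbers are hypothetical; the sketch only illustrates why a cloud control with delayed feedback would have to widen the window: an axis moving at feed rate v with feedback up to d seconds old may legitimately appear to lag by v·d:

```python
def within_monitoring_window(commanded: float, actual: float, window: float) -> bool:
    """CNC-side check: the actual position must stay near the commanded one."""
    return abs(commanded - actual) <= window

def widened_window(base_window: float, feed_rate: float, max_delay_s: float) -> float:
    """With cloud control the feedback is delayed, so the axis may appear to
    lag by up to feed_rate * max_delay_s; widen the window accordingly."""
    return base_window + feed_rate * max_delay_s

# Illustrative values: axis at 100 mm/s, feedback up to 50 ms old,
# 0.05 mm base window -> effective window of 5.05 mm.
w = widened_window(0.05, 100.0, 0.05)
```

Such a widened window tolerates the network delay, but at the price of reacting much later to a real following error, which is why shifting the check closer to the machine is the alternative named above.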
For the PLC, control loops are normally not critical, since most components controlled by the PLC are activated and their actual status is then checked before further steps are initiated (e.g. activate a pump over an output clamp and check over an input clamp whether the pump started working). Adding extra delays to these loops will decrease the machine performance but will have no influence on the quality of the work piece, for example.

Since HMIs are mostly based on the Windows operating system, no real-time requirements arise. A reaction to user input is expected within 200 ms. If the reaction time is longer, user satisfaction decreases, but without influence on the work piece quality.
2.2. Use case 2: three-axis milling machine
As the second use case, a Sherline 2000 series 3-axis milling machine is evaluated. A real-time Ethernet fieldbus, EtherMAC, is used to control the motors and I/O points. Although the control of the machine tool is moved to the cloud, a local control of the machine tool is still needed, especially for setting up coordinate and inspection systems. A local control system is proposed as shown in Fig. 1.
The upper adapter receives the network packages from the cloud, which contain the interpolated data, and extracts the motor control command for every cycle, which is then “translated” by the lower adapter following the EtherMAC protocol. The lower adapter sends and receives packages to and from the EtherMAC controller to control the motors and set I/O outputs, and reads in axis feedback and I/O inputs. The local interpolator takes charge of some motion control functions, for instance manual operation and setting coordinate systems. The local HMI receives instructions from the local operator and notifies the lower adapter to execute control commands from either the upper adapter or the local interpolator. It also displays information useful for local operators, including current coordinate information, feed rate, spindle speed, tool information, each I/O status and maintenance information. The process monitor keeps an eye on the communication and the machining process. A serial number is attached to every package transmitted from the cloud to the local control. When an error occurs, the process monitor will stop the machining and send the serial number back to the cloud, based on which the error can be located in the part program. The data package will be discarded thereafter.
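The behavior of the process monitor described above can be sketched as follows. Class and callback names are illustrative, not the actual implementation; the sketch shows only the serial-number check, the stop, the report back to the cloud and the discarding of the package:

```python
class ProcessMonitor:
    """Watches the serial numbers of packages arriving from the cloud.
    On a gap it stops the machining and reports the serial number back,
    so the cloud can locate the error in the part program; the faulty
    package itself is discarded."""

    def __init__(self, send_to_cloud, stop_machining):
        self.send_to_cloud = send_to_cloud    # callback: report a serial number
        self.stop_machining = stop_machining  # callback: halt the machine
        self.expected = 0

    def on_package(self, serial: int, payload: bytes):
        if serial != self.expected:
            self.stop_machining()
            self.send_to_cloud(serial)  # cloud maps serial -> part program line
            return None                 # discard the package
        self.expected = serial + 1
        return payload                  # hand on to the lower adapter
```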
Data stream requirements are shown in Table 2. The amount of data comes from the EtherMAC protocol and only includes the data essential for controlling the machine tool. The data for controlling the axes, the spindle and the I/O terminals, as well as the feedback information from the machine, are transmitted every four milliseconds. HMI data has no real-time requirement; some information is transmitted only when necessary. The data transmitted back to the cloud has no direct effect on the control. However, as the CNC in the cloud has all the feedback information from the machine, an error will be triggered if the difference between command data and feedback data exceeds a certain threshold.

A conclusion from this analysis is that every machine whose control is to be provided as a service has to be analyzed with regard to the transferred data and the control loops that exist between machine control and machine tool.
3. Test setup and monitoring solution
As a next step on the path to a cloud based control system, the influence of the Wide Area Network (WAN) and the Local Area Network (LAN) has to be evaluated. For this purpose, a communication test setup has been developed, as shown in Fig. 2. One part of the test setup – the “cloud communication module” – located in Stuttgart, Germany, creates the data that is transferred from the cloud to the machine and transmits it over WAN and LAN. The machine communication module, which receives the data from Stuttgart, is located either in Auckland, New Zealand, or in a Google cloud center located in Europe. In Auckland and at the Google cloud center the data is received and logged, and as a second step the “machine communication module” creates and transmits the data
from the machine to the cloud system. Both communication

Table 1
Data streams: amount of data and cycle time of the HSC Exeron machine.

Type and direction of data stream     Amount of data (Bytes)     Cycle time (ms)
To axis/spindle                       46                         1
From axis/spindle                     96                         1
To IO system                          50                         1
From IO system                        52                         1
To HMI                                Variable                   Variable
From HMI                              Variable                   Variable
Fig. 1. System structure of the local control.

Table 2
Data streams: amount of data and cycle time of the Sherline machine.

Type and direction of data stream     Amount of data (Bytes)     Cycle time (ms)
To machine:
– Axis/spindle/IO/serial number       53                         4
– HMI                                 Variable                   Variable
To cloud:
– Axis/IO                             18                         4
– HMI                                 Variable                   Variable

channels – from Stuttgart and to Stuttgart – can be configured to use different protocols, namely the User Datagram Protocol (UDP), the Transmission Control Protocol (TCP) and the WebSocket protocol [22].
Both communication modules run on non-real-time operating systems, which do not guarantee the deterministic transmission and reception of telegrams at the cycle times of the use cases. To achieve a similar behavior, the communication modules have an internal timer, which allows them to recognize the elapsed time since the last execution. If cycles are missed, the communication modules regain the lost cycles by executing the send and receive procedure multiple times.
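The catch-up mechanism can be sketched as below; the function name and timing values are illustrative, not the modules' actual code. The internal timer computes how many cycles should have elapsed and re-executes the step until the count is regained:

```python
import time

def run_cyclic(step, cycle_s: float, duration_s: float) -> int:
    """Call step(cycle_index) once per cycle. On a non-real-time OS the loop
    may oversleep; the internal timer detects missed cycles and regains them
    by executing `step` several times in a row."""
    start = time.monotonic()
    executed = 0
    while (now := time.monotonic()) - start < duration_s:
        due = int((now - start) / cycle_s) + 1  # cycles that should have run by now
        while executed < due:                   # catch up on any missed cycles
            step(executed)
            executed += 1
        time.sleep(cycle_s / 10)                # coarse sleep; the timer decides
    return executed
```

Note that regained cycles are executed back to back, so the telegrams of missed cycles arrive as a burst rather than evenly spaced.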
For monitoring the network behavior, a monitoring solution has been integrated into the communication modules. The monitoring solution includes the following information in the transmitted data:

●Cloud communication module: consecutive counter, which increases every execution cycle.
●Cloud communication module: current operating system time.
●Machine communication module: consecutive counter of received telegrams.
●Machine communication module: current operating system time.
●Machine communication module: consecutive counter of sent telegrams.
After receiving a telegram, all information is stored in log files. Furthermore, the monitoring solution evaluates the following online for easier analysis:

●Maximal latency in a defined time interval.
●Minimal latency in a defined time interval.
●Average latency in a defined time interval.
●Number of maximal consecutive telegram losses in a defined time interval (UDP only).
●Sum of telegrams lost in a defined time interval.
●Length of queue (TCP only).
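A minimal sketch of how such per-interval statistics can be derived from the embedded counters and timestamps (names are illustrative, not the monitoring solution's actual code; gaps in the consecutive counter are counted as lost telegrams):

```python
from dataclasses import dataclass

@dataclass
class IntervalStats:
    max_latency: float = 0.0
    min_latency: float = float("inf")
    sum_latency: float = 0.0
    received: int = 0
    lost: int = 0                  # sum of telegrams lost in the interval
    max_consecutive_lost: int = 0  # longest run of missing counters (UDP only)
    _expected: int = 0

    def on_telegram(self, counter: int, latency: float):
        gap = counter - self._expected
        if gap > 0:                # counters were skipped -> telegrams lost
            self.lost += gap
            self.max_consecutive_lost = max(self.max_consecutive_lost, gap)
        self._expected = counter + 1
        self.received += 1
        self.max_latency = max(self.max_latency, latency)
        self.min_latency = min(self.min_latency, latency)
        self.sum_latency += latency

    @property
    def avg_latency(self) -> float:
        return self.sum_latency / self.received if self.received else 0.0
```

One such object per defined time interval yields exactly the maximal, minimal and average latency and the loss figures listed above.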
Based on this information, it is possible to evaluate which challenges have to be expected for a cloud-based control located in Germany and a machine located at the respective counterpart.
4. Monitoring results of use cases
For the two use cases, the following measurements have been created with the test setup. Both measurements used the amount of data and the cycle time of the use cases described above.

4.1. Scenario 1: communication between Stuttgart and Auckland
In the first scenario, it was discovered that the capability to send data is limited in Auckland: it is not possible to transmit 100 Bytes of data at millisecond intervals without heavy telegram losses. To reduce the impact of this limited transmission capability, the bigger amount of data is always sent from Stuttgart to Auckland and the smaller amount of data is always received.
4.1.1. Use case 1: five axis milling machine
For the first measurement, 85,752,629 UDP packages were transmitted, with a payload of 100 Bytes to Auckland and 50 Bytes to Stuttgart. On the path to Auckland, 1% (863,924 telegrams) of the data was lost. On the way back to Stuttgart, 2.9% (2,488,668 telegrams) of the data was lost. On average, the round trip time (RTT) was 0.3174 s, but with big peaks, as shown in Fig. 3. Quite often, the average RTT, scaled to 10 s intervals, increased up to 2 s, with an overall peak of 17 s.
Looking at the consecutive telegram losses in Fig. 4, it can be seen that the number of losses in certain 10-s intervals increased to almost 10,000 telegrams.
Further, looking at the TCP communication, it can be stated that TCP is not usable for cloud-to-machine communication in this use case between New Zealand and Germany. Even though TCP has integrated quality of service mechanisms, which result in no telegram losses, the transmission of telegrams is slower than with UDP. In the measurement, it was not possible to transmit more than 305 telegrams per second. In certain time intervals, where not enough bandwidth is available, the send queue in Auckland grows, which also influences the RTT. As shown in Fig. 5, the RTT at the end of the measurement increased up to 3500 s.
4.1.2. Use case 2: three-axis milling machine
During a continuous test of about 24 h, 27,015,516 UDP packages were sent, of which 2.64% (715,237) were lost. Similar to the first use case, most package loss happened during transmission from Auckland to Stuttgart (2.31%, compared to 0.33% from Stuttgart to Auckland). The peak number of consecutive package losses in 10 s intervals, more than 450 packages, occurred 21 h after the start of the test. Nearly at the same time, the peak package loss from Auckland to Stuttgart, nearly 2500 packages, was observed. The overall average RTT was 0.3466 s, and the maximum RTT was about 20 s; however, most of the RTT maxima were below 11 s. The performance also seemed to alternate between better and worse periods. From the transmission point of view, the UDP connection can transmit 313 packages per second, which fulfills the requirement of the second use case. However, the package loss is an essential issue: missing packages will cause the machining to fail and may even damage the machine tool.
Fig. 2. Test setup communication.

In another continuous 24-h test, 27,000,269 TCP packages were transmitted. The maximum RTT was about 85 s, and a great number of RTTs were above 10 s. Due to the limited capacity of the TCP communication, there were, from time to time, long queues of packages waiting to be transmitted, sometimes containing more packages than are needed for machining small parts, which renders the whole system deficient.
4.2. Scenario 2: communication between Stuttgart and Google Cloud
Center Europe
4.2.1. Use case 1: five axis milling machine
Fig. 6 shows the measurement of 100 Bytes of data via a Websocket connection between a local client in Stuttgart and a remote server at the Google Cloud Center Europe. The cycle time of each generated package with a random payload was 1 ms. The 59,795,451 transferred packages had an average RTT of 42.91 ms, with peaks at 150 ms. During the measurement, packages from the Cloud Center to Stuttgart arrived in collections (the sender received several packages at the same time), which increased the RTT for each telegram within a collection from the minimum of 25 ms up to the maximum RTT of 150 ms.
4.2.2. Use case 2: three-axis milling machine
The measurement of the second use case sent 19,290,892 packages with an average RTT of 47.16 ms, as seen in Fig. 7. Depending on the use of parts of the connection by other clients, the measurement shows peaks of over 3500 ms. As in the first use case, packages were also received in collections, which increased the RTT for each package within a collection.
5. Strategies for communication challenges
It has been shown in the previous sections that a TCP connection is not applicable for a control system as a service (CSaaS) between Auckland, NZ and Stuttgart, Germany. The missing bandwidth capability results in queues and increased RTTs, which are not acceptable for either use case.

UDP connections have an average telegram loss of 2.64–3.9%, but with peaks of up to 10,000 consecutive telegram losses. This means that the machine does not receive telegrams for almost 10 s. This might be solved through buffers located in the communication modules. To refill the buffers, mechanisms need to be implemented that stop the machine and allow the buffers to be refilled. Furthermore, the machine communication module – depending on the type of data transferred – might interpolate single missing telegrams.
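The buffer strategy can be sketched as follows. Class name, refill level and the interpolation rule are illustrative assumptions: a single missing setpoint telegram is bridged by linear interpolation between its neighbours, and when the buffer runs dry the machine pauses until the buffer has been refilled:

```python
from collections import deque

class SetpointBuffer:
    """Buffers position setpoints arriving from the cloud. A single missing
    telegram is interpolated between its neighbours; when the buffer runs
    dry, the machine pauses until the buffer has been refilled."""

    def __init__(self, refill_level: int = 50):
        self.queue = deque()
        self.refill_level = refill_level
        self.paused = False
        self.expected = 0
        self.previous = None   # setpoint of the last pushed telegram

    def push(self, seq: int, setpoint: float):
        gap = seq - self.expected
        if gap == 1 and self.previous is not None:
            # exactly one telegram lost: bridge it by linear interpolation
            self.queue.append((self.previous + setpoint) / 2.0)
        # longer gaps cannot be interpolated; pausing/refilling handles them
        self.queue.append(setpoint)
        self.expected = seq + 1
        self.previous = setpoint
        if self.paused and len(self.queue) >= self.refill_level:
            self.paused = False   # buffer refilled -> machining may resume

    def pop(self):
        """Called every machine cycle; returns None while paused/refilling."""
        if not self.queue:
            self.paused = True
        if self.paused:
            return None
        return self.queue.popleft()
```

Whether interpolation is admissible depends on the type of data: it is plausible for position setpoints, but not for control words or I/O states.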
Another possible way to deal with package loss is adding serial
numbers to every package. If receiving a non-consecutive data
Fig. 3. Scenario 1 – use case 1 – UDP – round trip time (measurement 08.01.2014, 00:30–23:30; round trip time scaled to 10 s intervals).

Fig. 4. Scenario 1 – use case 1 – UDP – consecutive telegram losses (measurement 08.01.2014, 00:30–23:30; maximal consecutive package losses scaled to 10 s intervals).

package, the receiver can request the packages with certain serial numbers. However, if the network is experiencing performance issues, the repeated requesting of missing packages may increase the burden on the network and make the performance even worse.
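One way to request retransmissions without amplifying congestion is to cap the number of requests in flight. The following sketch (names and the cap value are illustrative assumptions, not a protocol from this work) detects serial-number gaps and asks for missing packages only while the budget allows:

```python
class RetransmissionRequester:
    """Detects gaps in the serial numbers of received packages and asks the
    sender to repeat them, but caps the number of outstanding requests so a
    congested network is not burdened even further."""

    def __init__(self, request, max_outstanding: int = 16):
        self.request = request            # callback: ask the cloud for one serial
        self.max_outstanding = max_outstanding
        self.missing = set()              # known-lost serials, not yet requested
        self.outstanding = set()          # requested, not yet re-received
        self.expected = 0

    def on_package(self, serial: int):
        self.outstanding.discard(serial)  # a requested package finally arrived
        self.missing.discard(serial)
        self.missing.update(range(self.expected, serial))  # newly detected gap
        self.expected = max(self.expected, serial + 1)
        # request missing serials, but never flood a struggling network
        while self.missing and len(self.outstanding) < self.max_outstanding:
            s = min(self.missing)
            self.missing.discard(s)
            self.outstanding.add(s)
            self.request(s)
```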
Websocket connections between Stuttgart and a Google Cloud Center in Europe show better results: the average RTT is usable for CSaaS. Due to the design of the Websocket protocol, problems with missing telegrams did not occur. Only the maximum RTT peaks of 3500 ms during use case two are not acceptable.
A network performance monitoring mechanism and strategies for dealing with performance issues may be needed by both the CSaaS and the machine tool. When performance deterioration is detected or predicted, the machine tool can make decisions based on strategies such as adaptively pausing the machining in the middle of executing a toolpath or at the connection point of adjacent toolpaths, which also requires additional information. When the network performance recovers and enough packages are received and stored in the local cache, the machining can be resumed.

6. Summary
In this paper, two use cases based on two milling machines have been described. The data transferred between control system and machine in these use cases has been analyzed. This data was then used to analyze whether a control system as a service (CSaaS) is possible. To this end, a test environment was set up in which the data defined by the use cases could be transferred between Auckland, New Zealand, or the Google Cloud Center in Europe, and Stuttgart, Germany. The data transfer has been monitored and analyzed in the last part of the paper. It can be concluded that CSaaS between New Zealand and Germany is not possible, since the network challenges are too big; the control system should be located closer to the machine. CSaaS between the Google Cloud Center in Europe and Germany is possible for slow cycle times and depends on the usage of the connections by other clients. The Google Cloud platform is not suitable for control systems that require high performance connections. Due to technical conditions (e.g. load balancers, routing mechanisms, etc.), the transferred packages are received in collections by the local clients. Slower cycle times will probably reduce the number of collections, but result in less performance for the control system.
Fig. 5. Scenario 1 – use case 1 – TCP – round trip time (measurement 09.01.2014 00:14 – 10.01.2014 00:10; round trip time scaled to 10 s intervals).
Fig. 6. Scenario 2 – use case 1 – Websocket – round trip time (average RTT: 42.91 ms).

The existing general TCP/UDP/Websocket protocols are not suitable for interpolated data to be transmitted through a WAN connection. The unpredictability of the WAN connection requires additional mechanisms and strategies to be developed for both the CSaaS and the machine tool.
Acknowledgments
The research presented in this paper results from the project “pICASSO”, which is funded by the German Federal Ministry of Education and Research. The authors are grateful for the financial support of the China Scholarship Council – University of Auckland joint scholarship.
References
[1] T. Lorenzer, S. Weikert, K. Wegener, Mit Rekonfigurierbarkeit gewinnt der Anwender (Engl.: Gain Users with Reconfiguration Ability), Eidgenössische Technische Hochschule Zürich, Institut für Werkzeugmaschinen und Fertigung, Zürich, 2010.
[2] M. Birkhold, A. Verl, Post-Stuxnet: Sicherheitslücken bedrohen weiterhin Produktionsanlagen (Engl.: Post-Stuxnet: security bugs still put production sites at risk), ZWF Zeitschrift für wirtschaftlichen Fabrikbetrieb, April 2011, pp. 237–240.
[3] M. Keinert, A. Verl, System platform requirements for high-performance CNCs, in: Proceedings of FAIM 2012, 22nd International Conference on Flexible Automation and Intelligent Manufacturing, Helsinki, Finland, June 2012.
[4]X. Xu, From cloud computing to cloud manufacturing, Robot. Comput. Integr.
Manuf. 28 (1) (2012) 75 –86.
[5]X. Vincent Wang, X.W. Xu, An interoperable solution for cloud manufacturing,
Robot. Comput. Integr. Manuf. 29 (4) (2013) 232 –247.
[6] W. He, L. Xu, A state-of-the-art survey of cloud manufacturing, Int. J. Comput. Integr. Manuf. (2014) 1–12.
[7] N. Liu, X. Li, A Resource Virtualization Mechanism for Cloud Manufacturing Systems, Enterprise Interoperability, Springer, 2012.
[8] O.F. Valilai, M. Houshmand, A collaborative and integrated platform to support distributed manufacturing system using a service-oriented approach based on cloud computing paradigm, Robot. Comput. Integr. Manuf. 29 (1) (2013) 110–127.
[9]R.A. Gupta, M.-Y. Chow, Networked control system: overview and research
trends, IEEE Trans. Ind. Electron. 57 (7) (2010) 2527 –2535 .
[10] M.S. Branicky, V. Liberatore, S.M. Phillips, Networked control system co-simulation for co-design, in: Proceedings of the 2003 American Control Conference, 2003, pp. 3341–3346.
[11] W. Zhang, M.S. Branicky, S.M. Phillips, Stability of networked control systems,
IEEE Control. Syst. 21 (1) (2001) 84 –99.
[12] K. Ji, W.-j Kim, Real-time control of networked control systems via Ethernet,
Int. J. Control. Autom. Syst. 3 (4) (2005) 591 .
[13] L. Zhang, H. Gao, O. Kaynak, Network-induced constraints in networked control systems – a survey, IEEE Trans. Ind. Inform. 9 (1) (2013) 403–416.
[14] J.D. Decotignie, Ethernet-based real-time and industrial communications,
Proc. IEEE 93 (6) (2005) 1102 –1117.
[15] M. Felser, Real-time ethernet –industry prospective, Proc. IEEE 93 (6) (2005)
1118–1129 .
[16] P. Neumann, Communication in industrial automation —What is going on?
Control Eng. Pract. 15 (11) (2007) 1332 –1347 .
[17] F.A. Tobagi, V. Bruce Hunt, Performance analysis of carrier sense multiple access with collision detection, Comput. Netw. 4 (5) (1980) 245–259.
[18] J. Schlechtendahl, F. Kretschmer, A. Lechler, A. Verl, Communication mechanisms for cloud based machine controls, Procedia CIRP 17 (2014) 830–834.
[19] IEEE, IEEE1588:2008 Standard for a Precision Clock Synchronization Protocol
for Networked Measurement and Control Systems, 2008.
[20] N.N., Sicherheitsrelevantes Konstruieren von Druck- und Papierbearbeitungsmaschinen – Elektrische Ausrüstung und Steuerungen (Engl.: Safety Relevant Construction of Printing and Paper Processing Machines – Electrical Equipment and Controls), BG Druck und Papierverarbeitung, June 2004, p. 79.
[21] M. Weck, C. Brecher, Werkzeugmaschinen 4 – Automatisierung von Maschinen und Anlagen (Engl.: Machine Tools 4 – Automation of Machines and Plants), 6th ed., Springer, Berlin/Heidelberg, 2006.
[22] Internet Engineering Task Force (IETF): The WebSocket Protocol RFC 6455,
〈http://tools.ietf.org/html/rfc6455 〉(accessed 27.03.15).
Fig. 7. Scenario 2 – use case 2 – Websocket – round trip time (average RTT: 47.16 ms).
Please cite this article as: J. Schlechtendahl, et al., Extended study of network capability for cloud based control systems, Robotics and Computer-Integrated Manufacturing (2015), http://dx.doi.org/10.1016/j.rcim.2015.10.012
