SITE: The Simple Internet of Things Enabler For Smart Homes

Journal: IEEE Access
Manuscript ID: Access-2016-02635
Manuscript Type: Original Manuscript
Date Submitted by the Author: 28-Nov-2016
Complete List of Authors: Hafidh, Basim; Al Osman, Hussein (Member, IEEE); Dong, Haiwei (Senior Member, IEEE); Arteaga-Falconi, Juan; El Saddik, Abdulmotaleb (Fellow, IEEE), all with the University of Ottawa, School of Electrical Engineering and Computer Science
Keywords: Human computer interaction, Internet of Things, Smart homes
Subject Category: Sensors, Instrumentation and measurement, Computers and information processing
Additional Manuscript Keywords: End user development, Smart objects, Usability
Abstract—This paper presents the Simple Internet of Things
Enabler (SITE), a smart home solution that allows users to
specify and centrally control IoT smart objects. Unlike most
existing systems, SITE supports End-User Development.
Hence, it defines a simple language for the specification of control rules for smart objects. It also provides a user interface
to graphically illustrate data received from smart objects. To
assess the usability of SITE, we conduct an empirical study involving 20 participants belonging to two user groups: users with technical training (IT users) and users without technical
training (Non-IT users). We demonstrate that both user groups
can satisfactorily build smart objects and define control rules in a smart home environment using SITE.
Index Terms—End user development, human-computer interaction, internet of
things, smart homes, smart objects, usability.
I. INTRODUCTION

The Internet of Things (IoT) is a worldwide network of
smart objects or “things” [1, 2]. Through the Internet, these
objects can communicate their sensory data while being remotely monitored and controlled by users or autonomous
applications [1]. The IoT concept was initially popularized by
the MIT Auto-ID labs where researchers proposed the use of a
wireless sensor network and Radio Frequency Identification
(RFID) technology to realize object localization [3]. Later, the
International Telecommunication Union (ITU) in their 2005
report formally defined the IoT as the set of all objects that can
communicate with each other via networks [4]. The IoT paradigm can be applied to a multitude of domains such as
healthcare and industrial automation, smart energy monitoring
and control, elderly assistance, public security, urban
management, infrastructure construction, business services, and
smart homes [5]. In this paper, we are interested in the latter
application of IoT.
The term smart home has evolved from exclusively referring
to the centralized and semi-automated control of environmental
systems, such as heat and lights, to the use of technology to monitor and control any compatible object in the home
environment. Typically, the goal of smart home systems is to
provide comfort, health care, security, safety, and energy consumption reduction services [6, 7].

The authors are with the Multimedia Communication Research Laboratory
(MCRLab), School of Electrical Engineering and Computer Science,
University of Ottawa, Ottawa, ON, K1N 6N5, Canada (email: bhafi014@uottawa.ca; halosman@uottawa.ca; jarte060@uottawa.ca; hdong@uottawa.ca; elsaddik@uottawa.ca).
Smart Objects (SOs) are the foundational building blocks of
IoT. However, most “regular” objects in our everyday environments are not smart (i.e., not equipped with remote
monitoring and control capacities). To extend the IoT beyond
inherently smart objects, we have to append transducers (sensors or actuators) and network connectivity components to
“regular” objects. Such an operation would add a level of
intelligence to these objects by allowing them to communicate their status (e.g. temperature or pressure applied). Also, they
can be potentially controlled remotely through actuators (e.g.
a switch to turn an electrical device on or off).
Today, the logic pertaining to the control of SOs in smart
homes is programmed by highly technical divisions using programing languages that are not accessible to most end-users.
The relatively recent proliferation of technological devices into
our daily lives has compelled users, including those who are not technically trained, to seek an active role in technical
development. This allows them to design, configure, modify, or
realize technologies that are better tuned to their individual
needs. This phenomenon has been dubbed in the literature as
End-User Development (EUD). EUD refers to a set of
activities, techniques, and tools that allow end users to configure, modify and control software and hardware artifacts
[8-10]. These artifacts should be fully “plug and play” [10]. We
propose a system that enables EUD for smart homes. Individuals, after all, personalize all aspects of their homes, and
therefore, it is valuable to equip them with the necessary tools
to configure, modify, and control their smart home systems. Furthermore, the latter tools would allow them to adapt these
systems as their needs or preferences change over time.
In this paper, we introduce the Simple Internet of Things
Enabler (SITE), a complete system, composed of hardware and
software components, that allows end-users to design and configure a smart home system that responds to their needs. The
system is designed to support two broad classes of users: IT and
Non-IT users. We define IT users as those that possess an undergraduate degree in a discipline that includes intermediate
or advanced courses in software development, hardware
development, or both. Non-IT users are those who have not undergone any training in software and hardware development.
To support Non-IT users, we propose the Simple Control
Language (SCL) for the central control of SOs in a smart home. We demonstrate through a usability study that both user classes
can:
1) Build SOs out of everyday objects using the General
Purpose Transducers Network (GPTN) presented in [11];
and
2) Specify SCL rules that permit a central server to interact
with these SOs.
The remainder of this paper is organized as follows: Section II
presents an overview of related works. Section III details the proposed system design. Section IV showcases the user interface of the proposed system. Section V presents an
empirical study that evaluates the usability of the system. The
obtained results are analyzed and discussed in this section.
Finally, Section VI concludes this paper.
II. BACKGROUND AND RELATED WORK
A. Abstract View of Smart Home architecture
A typical smart home is composed of several SOs that
communicate with a central application. This application is
referred to in various works as the Distributed Services
Oriented Middleware [12], E-Servant [13], Controller [14],
Home Gateway [15], Gateway and Integrator [16], ZigBee-based Intelligent Self-Adjusting Sensor (ZiSAS) [17], etc.
However, the principal functions of such an application are similar
and can be summarized as follows:
1) It receives sensory information from the SOs deployed in
the smart home environment.
2) It controls the smart environment through commands sent
(typically wirelessly) to a subset of the existing SOs
deployed in that environment. These commands are
usually sent in response to sensory information obtained from the environment, a user command entered into the
system, or a pre-configured timer.
Also, this kind of application optionally allows the user to
visualize the collected sensory information at various levels of
granularity. In this paper, we will refer to such an application as
the Central Visualization and Control (CVC) application.
B. Existing Smart Home Systems
Several works presented smart home systems for monitoring
and controlling SOs [12-14, 16]. These systems are composed
of hardware and software components and typically allow the
user to seamlessly, locally or remotely, control the house brightness, ventilation, temperature, humidity, doors and
windows, and so forth. Some of these works presented an IoT
middleware (a software layer linking the infrastructure and the
CVC applications using it) [12]. Other researchers used mobile
devices to execute the CVC application through which the user
can remotely control and monitor the house SOs [14-16]. In these works, a home gateway or hub is used to aggregate all the
data coming from the SOs before being forwarded as a single
stream to the CVC application.
Developing a CVC application to interact with SOs is often
a programming task. However, Nichols and Myers [18]
presented a method to automatically generate user interfaces
that expose CVC functions. In particular, they developed the
User Interface Descriptive Language (UIDL) to describe the
functionality of SOs. Using these descriptions, they devised a scheme to automatically generate interfaces to monitor and
control corresponding SOs. Similarly, Mayer et al. [19]
presented a modality-independent user interface generation
method for IoT. Interface generation is enabled by detailed descriptions of the atomic interactive components of SOs that
also capture the semantics of that interaction.
C. Smart Objects
In general, a SO is an item that can interact with other
computerized items or humans in its environment. SOs can be
employed within the home environment for automation (e.g.
automatically adjusting the heat in each room), monitoring (e.g.
measuring carbon monoxide levels), and control (e.g. switching
lights through mobile phone) to achieve a so-called smart home
that optimizes comfort, security, and energy savings.
Most objects in our environment do not fall under the above-
presented definition of a SO. Hence, developing mechanisms to
make these objects compatible with smart homes is necessary.
Some researchers have tackled this challenge. For instance,
several works describe general purpose sensor/actuator boards that can be attached to everyday objects. For example, in the
“smart-Its” project, Holmquist et al. [20] developed a small
“stick-on” computer that can be attached to un-assembled furniture parts. The purpose is to gather information about the
assembly process. Hence, two separate boards were designed:
the core board which is composed of a processor and wireless transceiver and the sensors board which includes light, sound,
pressure, acceleration, and temperature sensors. Two more
sensors can be optionally added: camera and gas sensors. Tapia et al. [21] introduced a low-cost sensor called “state-change
sensor”. This is a tap-on sensor that measures a change in an
object’s state in a home environment to recognize a user’s
activity. As an example, various tap-on sensors were added to a
bed to track the movement and position of the user. A similar
approach was taken by Kameas et al. [22]. They embedded
hardware boards in objects (e.g., chairs, lamps, coffee jars,
alarm clocks, and desks) to enable interaction between the
objects and the user.
As opposed to creating a general purpose sensor board, some
researchers focused on appending sensors and/or actuators to a particular object with the purpose of employing it in a specific
application. For instance, Antifakos et al. [23] designed a smart
door handle for user identification and room access control.
They used accelerometers on the door handle and the user’s
wrist. The person’s identity is detected by measuring the
correlation between the two accelerometer signals. Also, the
“Mediacup” project described in [24] presents a regular coffee
cup augmented with temperature and motion detection sensors,
a processor, and a communication module. It studies the capture and communication of the cup’s status, such as moving,
stationary, full, and empty. All hardware components were
integrated into a board located at the base of the cup.
D. End User Development (EUD)
The EUD paradigm has been adopted for several smart home
solutions. Examples of these systems are iCAP [25], CAMP
[26], and SPOK [27]. iCAP [25] allows end-users to visually
design smart home applications using a pen-based interface. It
permits users to specify input and output devices and setup
behavioral logic using “if-then” rules that describe a condition
and its associated action. CAMP [26] provides an interface
consisting of words that can be clustered into rules that denote
four pieces of information: who, what, where, and when. These
pieces of information describe what actions the system will
command in response to what stimulus and at what time. SPOK
[27] uses a pseudo-natural language (combining rule-based and
imperative programming) to define event-condition-action rules
to configure and control appliances.
E. Summary
Although the concept of IoT can be applied to many
applications, smart home systems are some of the most studied
by researchers. Smart homes typically rely on SOs to realize the
intelligence needed to monitor and control the home
environment. We have surveyed several SO technologies and focused on general purpose sensor/actuator boards that can be
attached to objects [11, 21, 22]. These technologies are
comparable to the SO specification feature of the proposed
system. However, the existing technologies have limitations on
the number and type of sensors that a board can support.
Furthermore, we surveyed several typical smart home
systems [12, 16] in which the CVC logic responsible for controlling the
SOs is defined programmatically. This makes the update or maintenance of such systems difficult and costly. Furthermore,
it prevents Non-IT users from constructing such
systems. The interface generation schemes presented by [18] and [19] allow users to directly monitor and control SOs.
However, these systems do not support automated control of
SOs based on rules of actuation.
To respond to the limitations of programmatically defined
CVC logic, several EUD systems have been proposed [25-27].
These systems support rule-based behavior description. The rules define principally a condition and an action. The
conditions are tests on sensor values and the actions are
commands to be sent to actuators. Existing EUD rules use crisp sensor and actuator values to define conditions and actions [25,
27].
This paper presents a complete smart home monitoring and
control system. We call this system the Simple Internet of
Things Enabler (SITE). SITE resolves the challenges stated above as follows:
1) SOs can be realized by appending sensor/actuator clusters
built through the GPTN [11] onto “regular” objects. We
describe the GPTN in Section III-B.
2) SOs can be controlled through the CVC using SCL or fuzzy
logic rules. The SCL rules are designed to simplify the
control process. We describe the CVC, along with the
proposed rule specification schemes, in Section III-C.
3) SCL advocates the use of qualitative expressions as
opposed to quantitative measures for rule definition.
Humans assess their environment qualitatively. For
example, to describe the temperature of an object, they use
qualifiers such as very hot, hot, cold, etc. Hence, intuitive
qualitative expressions can simplify rule construction for
end-users.
III. THE PROPOSED SITE SYSTEM
A. SITE Overview
SITE interacts with two types of entities: users and SOs. We
define a user as an individual that creates SOs using the GPTN,
configures a smart environment using the SITE CVC, sends
commands to SOs through the CVC, and/or visualizes the
information produced by SOs using the SITE CVC. To configure a smart environment, the user specifies what SITE
CVC should do in response to the data received from the SOs’
sensors. For example, consider the following SO: an office
chair equipped with a pressure sensor and a vibration actuator,
both integrated into its seat cushion. The user can configure
SITE CVC to send a command to the actuator to vibrate if the
pressure is detected to be high (i.e. a person is sitting on the chair) for a couple of hours. The vibration would remind the
seated person to take a walk. Fig. 1 shows the high-level use
cases supported by SITE. To set up a smart environment, the user performs the following:
1) Build SOs using the GPTN and deploy them in the
environment.
2) Use the SITE CVC to:
a) Register the available SOs by supplying their name, IP
address, and geographical coordinates.
b) Configure all or a subset of the registered SOs by
defining rules to control them through the CVC.
c) Visualize SO sensor information.
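For instance, the office chair scenario above could be expressed with a single SCL control rule of roughly the following form (the names Chair.pressure1 and Chair.actuator1 are illustrative; the SCL syntax itself is detailed in Section III-C):

when sensor Chair.pressure1 is high for 2 hours then turn actuator Chair.actuator1 on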

B. The General Purpose Transducer Network (GPTN)

In this section, we describe the GPTN presented in [11], which
allows the creation of network enabled clusters of transducers
that can be attached to various objects to form SOs. Each cluster
is associated with a single network address. The clusters are
formed through a plug and play mechanism where a variety of
transducers can be connected to a main board (see Table I)
through wires using a serial communication protocol called I2C
(Inter-Integrated Circuit). Therefore, the main board can
recognize the added and removed transducers dynamically. The
network supports commonly used sensors such as temperature,
pressure, light, acceleration, and CO as shown in Table I.
All components used to create a SO are designed as small
blocks that can be interconnected in many possible
configurations as shown in Fig. 2. Each cluster organizes its
transmitted data into Sensor Model Language (SensorML) and receives control data in Actuator Model Language
(ActuatorML) format [11]. Five possible media of
communication can be used with the proposed transducer node, including USB (RS232), Bluetooth technology, Ethernet,
WIFI, and ZigBee. However, we have improved the GPTN
presented in [11] to support WIFI in order to communicate with SOs through the internet.
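To make the plug-and-play mechanism concrete, the following is a minimal Python sketch of how a main board could discover attached transducers by probing the reserved I2C ID ranges of Table I. It assumes a generic smbus2-style bus object; the GPTN firmware itself is not written in Python, and the bus handle and address map below are illustrative assumptions rather than the actual implementation.

ID_RANGES = {                        # reserved I2C ID ranges from Table I
    "pressure sensor":        range(1, 11),
    "vibro-tactile actuator": range(11, 21),
    "on/off actuator":        range(21, 31),
    "dimming actuator":       range(31, 41),
    "light sensor":           (41, 57),
    "temperature sensor":     range(72, 76),
    "accelerometer":          range(83, 85),
}

def discover_transducers(bus):
    """Return {i2c_id: transducer_type} for every device that acknowledges."""
    found = {}
    for kind, ids in ID_RANGES.items():
        for i2c_id in ids:
            try:
                bus.read_byte(i2c_id)   # a device answering at this ID is of this kind
                found[i2c_id] = kind
            except OSError:             # no acknowledgement: nothing plugged in here
                pass
    return found

# Example usage on a board exposing an I2C bus:
#   from smbus2 import SMBus
#   with SMBus(1) as bus:
#       print(discover_transducers(bus))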
Fig. 1. SITE UML use case diagram. The actors are the User, the Smart Object, and the General-Purpose Transducers’ Network; the use cases are Build SO, Register SO, Specify SO Actuation Control Rule, Visualize SO Sensor Information, Send Control Command, Send Sensor Data, and Receive Actuator Data.

C. SITE CVC Architecture

Fig. 3 illustrates a high-level architecture of the proposed
SITE CVC which is composed of several functional components that we describe in the following sections.

1) SO Registrar
The SOs available to the SITE CVC are registered by the
user through the SO Registrar. During registration, the user
specifies the SO name and IP address. All collected information
is stored in the Static Information Database (SID).
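A minimal sketch of the registrar's bookkeeping is shown below, using an in-memory dictionary as a stand-in for the SID; the record fields follow the registration data named in this paper (name, IP address, geographical coordinates), but the class and storage layout are illustrative assumptions, not SITE's actual database schema.

from dataclasses import dataclass

@dataclass
class SmartObjectRecord:
    name: str
    ip_address: str
    coordinates: tuple  # (latitude, longitude), as supplied at registration

class SORegistrar:
    """Stores static SO information in an in-memory stand-in for the SID."""
    def __init__(self):
        self.sid = {}  # Static Information Database: name -> record

    def register(self, name, ip_address, coordinates):
        self.sid[name] = SmartObjectRecord(name, ip_address, coordinates)

    def registered_objects(self):
        return list(self.sid.values())

# Example usage:
#   registrar = SORegistrar()
#   registrar.register("Chair", "192.168.1.20", (45.42, -75.68))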

Fig. 3. SITE CVC high-level architecture.

TABLE I
THE MAINBOARD AND COMMONLY USED TRANSDUCERS

I2C ID    Transducer                         Remarks
N/A       Wireless Node’s Core (Mainboard)   Data collection, processing, and communication
1-10      Pressure Sensor                    I2C IDs 1-10 are reserved for pressure sensors (FSRs)
72-75     Temperature Sensor                 I2C IDs 72-75 are reserved for temperature sensors (TMP102)
83-84     Accelerometer                      I2C IDs 83-84 are reserved for accelerometers (ADXL345)
41, 57    Light Sensor                       I2C IDs 41, 57 are reserved for light sensors (TSL2561)
11-20     Vibro-tactile Actuator             I2C IDs 11-20 are reserved for vibro-tactile actuators
21-30     On/Off Actuator                    I2C IDs 21-30 are reserved for on/off actuators
31-40     Dimming Actuator                   I2C IDs 31-40 are reserved for dimming actuators
N/A       Extension                          Connected to the I2C bus when needed

Fig. 2. SOs with reconfigurable transducers. The transducers are interconnected to form clusters. Clusters interact wirelessly with the CVC.

2) SO Selector
The SO Selector allows the user to select a subset of the
registered SOs, whose information is stored in the SID, for a particular configuration profile. A configuration profile
specifies which available SOs are monitored and controlled and
how they are actuated. The selected SOs’ sensor data can be used to specify the rules of SO actuation through the Configurator.
Furthermore, the SO Selector sends a message, through the
Data Transmitter, querying the selected SOs about their dynamic information. Dynamic information refers to the
number and type of each SO’s sensors and actuators (since
transducers can be added or removed dynamically through a
plug and play mechanism [11]) and real time sensor
measurements. All received dynamic information is stored in
the Dynamic Information Database (DID).
3) Configurator
SO control definition in SITE is based on fuzzy logic
theory. Fuzzy set theory [28] is designed to mimic the human
reasoning mechanism. For example, it is much easier for
humans to think about room temperature in qualifying terms
such as hot or cold, as opposed to specifying crisp thresholds that describe the room’s thermal state. Therefore, from a
usability perspective, using fuzzy logic will render the
configuration of a smart environment more intuitive, especially to users with little to no computer programming background.
There is a plethora of fuzzy controller languages;
however, these controllers are suitable for engineering applications and require technical experience [29]. In contrast,
SITE must support users with no formal technical training.
Therefore, we introduce SCL, a rule based language that allows
users to define actions that are performed by actuators in
response to sensor data or user commands. SCL will be described in Sections III-C-4 and III-C-5.
The user can set up SO control rules using one of three
modes: Form-Based, Editor-Based, and Advanced. We estimate that users that possess a background in programming
or IT in general can learn SCL or fuzzy logic rule creation much
faster than other users. However, the goal of SITE is to support the largest set of possible users. Nonetheless, the advanced
mode provides the user with more fine-tuned control
capabilities.
In the Form-Based mode, the user defines the control rules
using SCL. However, instead of providing the user with a text
editor to write SCL, a graphical editor is employed. This editor allows the user to build SCL rules incrementally by filling a
form. Hence, users do not have to learn the structure of SCL. In
the Editor-Based mode, the user specifies SCL using a text editor. The SITE controller is based completely on fuzzy logic.
Hence, as a last step, in both the Form- and Editor-Based
modes, the SCL rules are automatically translated into fuzzy
rules associated with membership functions through the Fuzzy
Generator. In the Advanced mode, the user specifies fuzzy rules
and creates membership functions directly without using SCL.
Hence, the Fuzzy Generator is not used in the latter mode. In
any case, at the end of the configuration, the fuzzy rules and
membership functions are saved in the Fuzzy Database (FD).
4) SCL Syntax Verifier
This component (see Fig. 4) is responsible for verifying the
syntax of the SCL rules before the Fuzzy Generator translates
them into fuzzy rules and membership functions. Hence, the
SCL rules are processed through two stages by the modules:
1) Lexical Analyser: to convert the series of characters in
SCL into tokens using regular expressions, and
2) Syntax Analyser: to ensure that the SCL rules comply with
the SCL context-free grammar specified in Listing I and
build a parse tree.
Note that in the context-free grammar of Listing I, terminals
are surrounded by quotations.

5) Fuzzy Generator
This component is responsible for generating the
membership functions and fuzzy rules based on the SCL rules.
Its inputs are the SCL rules, organized in a parse tree by
the SCL Syntax Verifier, and the list of available SOs, along with
their sensors and actuators, from the DID. The Fuzzy Generator
component produces fuzzy membership functions for the
sensors, user commands, and actuators referenced in SCL and
then produces fuzzy rules that match the logic described in the
SCL.
a) Generating Membership Functions
The procedure of generating membership functions for the
sensors is listed in Algorithm 1. The most common types of
membership functions (triangle and trapezoid) are supported.
Each transducer can have up to 6 levels in its associated membership function. The range of the membership levels (Ca
and Cb) for each sensor is calculated according to the number
of membership functions per sensor and the maximum

Fig. 4. SCL syntax verifier block diagram.

LISTING I
SCL CONTEXT-FREE GRAMMAR

scl ::= 'when' condition 'then' action
condition ::= sensorCondition | commandCondition
commandCondition ::= 'user command is' commandId
commandId ::= alphabet numbers
sensorCondition ::= 'sensor' sensorName 'is' sensorLevel period? conditionPrime
conditionPrime ::= operator condition | ε
operator ::= 'and' | 'or'
period ::= 'for' positiveInteger timeUnit
timeUnit ::= 'seconds' | 'minutes' | 'hours'
action ::= (msg | actuate) ('also' (msg | actuate))*
actuate ::= 'turn actuator' actuatorName actuatorLevel
msg ::= 'send message:' message 'to' msgDestination
msgDestination ::= phoneNumber | email
sensorName ::= SOName '.' sensorType positiveInteger
sensorType ::= 'pressure' | 'humidity' | 'carbonMonoxide' | 'light' | 'acceleration'
actuatorName ::= SOName '.actuator' positiveInteger
sensorLevel ::= 'high' | 'medium' | 'low'
actuatorLevel ::= 'on' | 'off' | 'high' | 'medium' | 'low'
digit ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'
positiveInteger ::= digit+
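To illustrate the two verification stages, the following is a minimal Python sketch of a lexical analyser and a coarse structural check for a small fragment of the grammar in Listing I. It is not the SITE parser: the token set is deliberately reduced and the structural check only confirms the 'when ... then ...' skeleton.

import re

# Token patterns for a fragment of SCL (Listing I); order matters.
TOKEN_SPEC = [
    ("KEYWORD",  r"\b(when|then|sensor|is|for|and|or|also|turn|actuator|send|message|to)\b"),
    ("LEVEL",    r"\b(high|medium|low|on|off)\b"),
    ("TIMEUNIT", r"\b(seconds|minutes|hours)\b"),
    ("NUMBER",   r"\d+"),
    ("NAME",     r"[A-Za-z_][\w.]*"),
    ("STRING",   r"\"[^\"]*\""),
    ("SKIP",     r"[\s(),:]+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(rule):
    """Lexical Analyser: convert an SCL rule into (type, value) tokens."""
    return [(m.lastgroup, m.group()) for m in MASTER.finditer(rule)
            if m.lastgroup != "SKIP"]

def check_structure(tokens):
    """Coarse syntax check: a rule must read 'when' <condition> 'then' <action>."""
    values = [v for _, v in tokens]
    return len(values) > 3 and values[0] == "when" and "then" in values

rule = "when sensor Chair.pressure1 is high for 2 hours then turn actuator Chair.actuator1 on"
print(check_structure(tokenize(rule)))   # True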
measurable value of the sensor (SMAX). SMAX is retrieved
from the DID.
Because the actuators (switches, vibro-tactile actuators,
warning messages, timeouts) have only 2 controlled values (for
example “on” and “off” in case of switches and vibro-tactile
actuators), their membership functions are of the “singleton” type and the number of these membership functions per actuator
is set to 2. However, in the case of the Advanced mode, the
actuators membership functions can be of any of the common types (singleton, triangle or trapezoid). Examples of these
membership functions are illustrated in Fig. 5. As shown in the
figure, the timeout has both input and output membership functions. Timeouts are used in SITE to trigger actuation events
in response to the passage of time.
b) Generating Fuzzy Rules
The Fuzzy Generator produces fuzzy rules based primarily
on the logical statement in the SCL rule. However, it also produces rules necessary for starting and stopping timers. Also,
the Fuzzy Generator creates rules for stopping actuation when
the condition that triggered its start is no longer valid. The latter details are not specified in SCL to increase its usability. Hence,
when using SCL, the user does not have to specify logic for
starting, stopping, and resetting timers or actuators. The Fuzzy

Fig. 5. Input and output generated membership functions.
Algorithm 1. Membership Function Generation

Input:  SMAX[n]    // maximum sensor value; n is the number of sensors
        TMF[n][L]  // membership function type; L is the number of levels
Output: MF[n][L]   // membership functions
Procedure:
for i = 0 to n-1 do
  for j = 0 to L-1 do
    Ca ← (2j - 1) * SMAX[i] / (2L)   // the minimum value of the membership function level
    Cb ← (2j + 3) * SMAX[i] / (2L)   // the maximum value of the membership function level
    if (TMF[i][j] type is Triangle) then
      P1 ← Ca
      P3 ← Cb
      if (TMF is the first (leftmost)) then   // i.e. j = 0
        P2 ← P1
      else if (TMF is the last (rightmost)) then   // i.e. j = L-1
        P2 ← P3
      else   // TMF is one of the middle levels
        P2 ← Ca + (1/2)(Cb - Ca)
      end if
      MF[i][j](x; P1, P2, P3) ← max(min((x - P1)/(P2 - P1), (P3 - x)/(P3 - P2)), 0)
    else if (TMF[i][j] type is Trapezoid) then
      P1 ← Ca
      P4 ← Cb
      if (TMF is the first (leftmost)) then   // i.e. j = 0
        P2 ← P1
        P3 ← Ca + (3/4)(Cb - Ca)
      else if (TMF is the last (rightmost)) then   // i.e. j = L-1
        P2 ← Ca + (1/4)(Cb - Ca)
        P3 ← P4
      else   // TMF is one of the middle levels
        P2 ← Ca + (1/4)(Cb - Ca)
        P3 ← Ca + (3/4)(Cb - Ca)
      end if
      MF[i][j](x; P1, P2, P3, P4) ← max(min((x - P1)/(P2 - P1), 1, (P4 - x)/(P4 - P3)), 0)
    end if
  end for
end for
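A compact Python sketch of Algorithm 1 for a single sensor follows. It is illustrative only (not the SITE implementation); the level bounds Ca and Cb are computed from SMAX and the number of levels L as in Algorithm 1 above.

def triangle(x, p1, p2, p3):
    """Triangular membership value, with shoulders at the extreme levels."""
    left = (1.0 if x >= p1 else 0.0) if p2 == p1 else (x - p1) / (p2 - p1)
    right = (1.0 if x <= p3 else 0.0) if p3 == p2 else (p3 - x) / (p3 - p2)
    return max(min(left, right), 0.0)

def trapezoid(x, p1, p2, p3, p4):
    """Trapezoidal membership value, with shoulders at the extreme levels."""
    left = (1.0 if x >= p1 else 0.0) if p2 == p1 else (x - p1) / (p2 - p1)
    right = (1.0 if x <= p4 else 0.0) if p4 == p3 else (p4 - x) / (p4 - p3)
    return max(min(left, 1.0, right), 0.0)

def generate_membership_functions(s_max, num_levels, shape="triangle"):
    """Return one membership function per level for a sensor with range [0, s_max]."""
    functions, L = [], num_levels
    for j in range(L):
        ca = (2 * j - 1) * s_max / (2 * L)   # level lower bound (Ca)
        cb = (2 * j + 3) * s_max / (2 * L)   # level upper bound (Cb)
        if shape == "triangle":
            p1, p3 = ca, cb
            if j == 0:        p2 = p1                     # leftmost level
            elif j == L - 1:  p2 = p3                     # rightmost level
            else:             p2 = ca + (cb - ca) / 2     # middle levels
            functions.append(lambda x, a=p1, b=p2, c=p3: triangle(x, a, b, c))
        else:
            p1, p4 = ca, cb
            p2 = p1 if j == 0 else ca + (cb - ca) / 4
            p3 = p4 if j == L - 1 else ca + 3 * (cb - ca) / 4
            functions.append(lambda x, a=p1, b=p2, c=p3, d=p4: trapezoid(x, a, b, c, d))
    return functions

# Example: three levels (low, medium, high) for a 10-bit sensor.
low, medium, high = generate_membership_functions(s_max=1023, num_levels=3)
print(round(medium(512), 2))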

Algorithm 2. Fuzzy Rules Generation

Input:  SCL[s]   // the list of SCL rules; s is the total number of SCL rules
Output: R[q]     // fuzzy rules
Procedure:
k ← 0   // gives a number to each timeout
c ← 0   // gives a number to each fuzzy rule
for i = 0 to s-1 do
  condition ← SCL[i].getCondition()   // returns the SCL condition part
  action ← SCL[i].getAction()         // returns the SCL action part
  reverseAction ← SCL[i].getReverseAction()   // these actions are the opposite of the SCL rule's actions, such as turn vibrator off, don't send email, and so on
  sensor[n] ← condition.getSensors()          // returns the list of sensors in the condition part
  sensorMF[n] ← condition.getSensorsMFs()     // returns the list of membership functions associated with the sensors
  actuator[m] ← action.getActuators()         // returns the list of actuators in the action part
  actuatorMF[m] ← action.getActuatorsMFs()    // returns the list of actuator membership functions associated with the actuators
  fuzzyCondition ← condition.convertToFuzzyCondition()   // removes unnecessary keywords and attaches timer labels to timers, among other simple adjustments, to render the SCL condition compatible with the fuzzy format
  for j = 0 to n-1 do
    if (sensor[j] has timeout) then
      R[c++] ← IF sensor[j] IS sensorMF[j] THEN timeout[k] IS start
      R[c++] ← IF sensor[j] IS complement(sensorMF[j]) THEN timeout[k] IS reset
      k ← k + 1
    end if
  end for
  R[c++] ← IF fuzzyCondition THEN action()
  R[c++] ← IF complement(fuzzyCondition) THEN reverseAction()
end for
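The sketch below mirrors the timer-related part of Algorithm 2 in Python, using plain tuples in place of SITE's internal parse-tree objects; the data structures and the complement() convention are assumptions made for illustration only.

COMPLEMENT = {"high": "low", "low": "high", "medium": "medium", "on": "off", "off": "on"}

def timer_rules(sensor_terms):
    """Generate start/reset timeout rules for every timed sensor term.

    sensor_terms: list of (sensor_name, level, has_timeout) tuples taken
    from the condition part of one SCL rule."""
    rules, k = [], 1
    for name, level, has_timeout in sensor_terms:
        if has_timeout:
            rules.append(f"IF {name} IS {level} THEN Timeout{k} IS start")
            rules.append(f"IF {name} IS {COMPLEMENT[level]} THEN Timeout{k} IS reset")
            k += 1
    return rules

# Mirrors the timer rules (1-6) generated for the third SCL rule in Table II.
terms = [("S1", "high", True), ("S2", "high", True), ("S3", "low", False), ("S4", "high", True)]
for rule in timer_rules(terms):
    print(rule)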
Generator automatically generates fuzzy rules to cover these
tasks. Algorithm 2 details the steps of extracting the relevant
information from SCL rules to generate corresponding fuzzy
rules. Table II depicts examples of three SCL rules and their
translation to fuzzy rules.

6) Controller
This component receives raw data from the SOs’ sensors,
fuzzifies it using the associated membership functions,
generates the fuzzy output using the fuzzy rules stored in the
FD, and defuzzifies the output into crisp results. The crisp
results are sent to the associated actuator through the Data
Transmitter. In addition to traditional actuators such as vibrators or electric switches, SITE supports soft actuators in
the form of message generators. Hence, these soft actuators can
send warnings or informative messages to the user in the form
of email or SMS.
Fig. 6 illustrates the behavior of the Controller using a UML
activity diagram. This component loads the fuzzy rules from the
FD and requests real-time sensor measurement data from the
SOs. Once it receives the requested data, it processes it and generates relevant actuation commands whenever necessary
(per the fuzzy rules). The Data Transmitter component
transmits these commands to the assigned SOs.
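For readers unfamiliar with the fuzzification, rule evaluation, and defuzzification cycle, the following is a minimal, generic Python sketch with singleton outputs and weighted-average defuzzification. It is not SITE's Controller; the membership functions, rule encoding, and sensor range below are illustrative assumptions.

def fuzzify(value, mfs):
    """Degree of membership of a crisp sensor value in each linguistic level."""
    return {level: mf(value) for level, mf in mfs.items()}

def infer_and_defuzzify(memberships, rules, output_singletons):
    """Evaluate 'IF sensor IS level THEN actuator IS setting' rules and
    defuzzify with a weighted average of singleton output values."""
    num = den = 0.0
    for level, setting in rules:
        w = memberships.get(level, 0.0)          # firing strength of the rule
        num += w * output_singletons[setting]
        den += w
    return num / den if den else 0.0

# Illustrative membership functions for a pressure sensor reading in [0, 1023].
pressure_mfs = {
    "low":  lambda x: max(min(1.0, (600 - x) / 300), 0.0),
    "high": lambda x: max(min((x - 300) / 300, 1.0), 0.0),
}
rules = [("high", "on"), ("low", "off")]          # IF pressure IS high THEN vibrator IS on
vibrator_levels = {"on": 1.0, "off": 0.0}         # singleton output values

m = fuzzify(550, pressure_mfs)
print(infer_and_defuzzify(m, rules, vibrator_levels))   # ≈ 0.83, i.e. mostly "on"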

7) Visualizer
This component sends a read request to the SOs selected by
the user, through the Data Transmitter, and displays their real-
time sensor(s) measurements on the screen. It also displays
sensor(s) information, such as the sensors’ type, manufacturer, and sampling rate.

8) Data Transmitter and Receiver
These components are responsible for exchanging
information between the SITE CVC, user, and SOs. The
information exchanged is packaged into SensorML and
ActuatorML messages [11]. Note that for simplicity, we
package the user command into SensorML messages where the
value field is populated with the command ID. SITE CVC sends
three types of requests to the SOs: read SO information, read
sensor measurement, and send actuation signal. Once data is received from the SOs, it is forwarded by the Data Receiver to
the DID.
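As a rough illustration of how a reading or a command could be packaged for exchange, the sketch below builds small SensorML-style and ActuatorML-style XML strings with the Python standard library. The element and attribute names are assumptions for illustration; the actual SensorML/ActuatorML schemas used by SITE are those of [11] and are not reproduced here.

import xml.etree.ElementTree as ET

def sensor_reading_message(so_name, sensor_id, value):
    """Package one sensor reading (or a user command, with the command ID
    placed in the value field) into a SensorML-style XML string."""
    root = ET.Element("SensorML")
    member = ET.SubElement(root, "member", {"object": so_name, "sensor": str(sensor_id)})
    ET.SubElement(member, "value").text = str(value)
    return ET.tostring(root, encoding="unicode")

def actuation_message(so_name, actuator_id, setting):
    """Package one actuation command into an ActuatorML-style XML string."""
    root = ET.Element("ActuatorML")
    member = ET.SubElement(root, "member", {"object": so_name, "actuator": str(actuator_id)})
    ET.SubElement(member, "setting").text = str(setting)
    return ET.tostring(root, encoding="unicode")

print(sensor_reading_message("Chair", 1, 742))
print(actuation_message("Chair", 11, "on"))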
IV. SITE CVC USER INTERFACE
In the previous section, we focused on the design and
behavior of the SITE system. In this section, we illustrate the
principal user interface components of the SITE CVC. The user
interface is developed with the help of the LabVIEW programming environment.

TABLE II
SAMPLE SCL RULES WITH THEIR FUZZY RULE SET TRANSLATION

SCL Rule 1: when userCommand is Cmd1 then turn actuator A on
Fuzzy rule set:
  1. IF Cmd1 IS on THEN A IS on
  2. IF Cmd1 IS off THEN A IS off

SCL Rule 2: when sensor S1 is high and (sensor S2 is low for 10 minutes) then send this message: “This is a test message” to 613-123-4567
Fuzzy rule set:
  1. IF S2 IS high THEN Timeout1 IS start
  2. IF S2 IS low THEN Timeout1 IS reset
  3. IF S1 IS high AND Timeout1 IS reached THEN Message1 IS set
  4. IF S1 IS low AND S2 IS high THEN Message1 IS reset

SCL Rule 3: when (sensor S1 is high for 2 minutes or sensor S2 is high for 10 minutes) and (sensor S3 is low or sensor S4 is high for 30 seconds) then turn actuator A on and send this message: “This is a test message” to bhafi014@uottawa.ca
Fuzzy rule set:
  1. IF S1 IS high THEN Timeout1 IS start
  2. IF S1 IS low THEN Timeout1 IS reset
  3. IF S2 IS high THEN Timeout2 IS start
  4. IF S2 IS low THEN Timeout2 IS reset
  5. IF S4 IS high THEN Timeout3 IS start
  6. IF S4 IS low THEN Timeout3 IS reset
  7. IF (Timeout1 IS reached OR Timeout2 IS reached) AND S3 IS low AND Timeout3 IS reached THEN A IS on ALSO Message1 IS set
  8. IF S1 IS low AND S2 IS low AND S3 IS high AND S4 IS low THEN A IS off ALSO Message1 IS reset

Fig. 6. The controller activity diagram.
Fig. 7 depicts the flowchart of the
SITE CVC user interface. When the user runs the program, the
SO Registrar lists all the registered SOs in the “Registered Objects” field. The user can select the desired SOs and add them
to the “Selected Objects” field. Once the SOs are selected, SITE
CVC displays relevant information about them in the “Objects’ Dynamic Information” field (see the main window in Fig. 8). To monitor real-time sensor measurements for the selected
SOs, the user can run the visualizer by clicking on the
“Visualizer” button. Fig. 9 shows an example of the visualizer
window showing two sensors’ measurements for a registered object. The type of the displayed measurement meter is
specified in the sensorML messages received from the SO.

Fig. 7. Flowchart of the SITE CVC user interface.

Fig. 8. SITE CVC main window.

Fig. 9. The visualizer window showing the real-time measurements for two sensors of a registered object.

Fig. 10. Configuration modes: (a) Form-Based configuration mode, (b) Editor-Based configuration mode, and (c) Advanced configuration mode.

The main window in Fig. 8 contains buttons that correspond
to the three “Configurator” modes: Form-Based, Editor-Based
and Advanced. The Form-Based and Editor-Based
configurators (shown in Figs. 10(a) and 10(b)) are SCL based. The Form-Based mode is designed for novice users and allows
the specification of rules using a simplified Graphical User
Interface (GUI). The user simply builds SCL rules by completing a form. In the Editor-Based mode, the user enters
SCL textually.
The Advanced configurator mode is the most expressive as
it allows the user to directly specify fuzzy rules and
membership functions. Hence, the user is not restricted in the
level, range, and shape of the membership functions. Fig. 10(c)
illustrates a screenshot of this configurator mode.

V. SITE USABILITY STUDY
In this section we present an empirical study that assesses the
usability of the SITE system. We propose the following
hypothesis: “Given that users individually receive a 15-minute video
tutorial about using the SITE system, they will be able to
successfully complete the tasks of:
1) Building SOs using the GPTN framework;
2) Visualizing the data generated by the SOs; and
3) Defining rules to control SOs through the SITE CVC.”
The experimental procedure is composed of 5 main phases as
shown in Fig. 11. Each of these phases is explained in Section
V-B.
A. Participants
We conducted a user trial involving 20 adults (10 males and
10 females) with a mean age of (29.45 ± 5.3) years. All
participants signed an informed consent form.

B. Procedure
1) Background Questions
The subjects were first asked four questions centered on
their experience in information technology. Table III presents the distribution of their answers. Based on the answers, the
participants were divided into two broad user groups: IT (5
males and 5 females), and non-IT (5 males and 5 females). A description of the user groups is shown in Table IV.

2) Tutorial
Participants individually received a 15-minute video
tutorial describing how to build a SO using the GPTN and
interact with the SITE CVC. The tutorial is designed to teach
the subjects by example. Hence, it shows them how to build a
smart chair (described in the next section). Also, the subjects were
given an instruction manual that covers the same material as the
video tutorial. They were told that they can reference the
manual whenever needed during the evaluation.
3) Usability Scenarios
Participants were directed to build SOs in three selected
scenarios. For the first scenario (Fig. 12(a)), the users were
asked to realize a smart chair that vibrates when the user
continuously sits for a predefined period. The chair is also equipped with a basic posture monitoring mechanism to ensure
that the seated person does not lean forward for a prolonged
period. This scenario was covered by the tutorial. Hence, the
users were required to reproduce the steps shown in the tutorial.
To build the smart chair, the subjects had to append:
1) A pressure sensor on the seat to sense the presence of a
seated person
2) A light sensor on the back to detect whether the seated
person is leaning forward
3) A vibro-tactile actuator on the back to convey a haptic
warning regarding sedentary behavior
For the second scenario (Fig. 12(b)), the users were asked
to realize a smart fridge that sends a notification message in the
form of email or SMS when the eggs container is empty. It also
Fig. 11. Empirical study phases.

TABLE III
BACKGROUND QUESTION ANSWERS DISTRIBUTION

Q1: Education. High School: 6 (30%); Bachelor: 6 (30%); Masters: 8 (40%); Doctorate: 0 (0%).
Q2: General expertise with computers. None: 0 (0%); Basic: 6 (30%); Intermediate: 4 (20%); Excellent: 10 (50%).
Q3: Formal software/hardware design training. None: 10 (50%); Basic: 7 (35%); Intermediate: 3 (15%); Excellent: 0 (0%).
Q4: Fuzzy logic knowledge. None: 18 (90%); Basic: 1 (5%); Intermediate: 1 (5%); Excellent: 0 (0%).

TABLE IV
GROUPS OF PARTICIPANTS

IT (sample size: 10): Users that possess an undergraduate degree in a discipline that includes intermediate or advanced courses in software development, hardware development, or both.
Non-IT (sample size: 10): Users that have not undergone any training in software and hardware development.
activates a buzzer if the fridge door is open for a predefined
period. To build the latter SO, the subjects had to append:
1) A pressure sensor underneath the eggs container to detect
the presence of eggs
2) A light sensor inside the fridge to detect when the door
opens
3) A buzzer inside the fridge to remind users to close the
fridge’s door
For the third scenario (Fig. 12(c)), the users were asked to
realize a smart living room where a TV set is turned on when
the user sits on the couch, a lamp’s light intensity changes depending on the room’s lighting level, and a fan’s speed changes
based on the room’s temperature. To build the necessary SOs
for this scenario, the subjects had to append:
1) A pressure sensor to the couch seat cushion to sense the
presence of a seated person
2) Temperature and light sensors inside the room to
measure the room’s temperature and light intensity
3) An On-Off actuator to control the TV power
4) Two dimmer actuators for the lamp and fan
Each scenario was divided into multiple tasks, as shown in
Table V. Subjects were asked to select any Controller
Configurator mode. They all selected the Form-Based mode.
C. Evaluation Metrics
We assessed the usability of the proposed system using the
metrics presented in the following sections.

1) Error Score
The error score corresponds to the severity of mistakes
committed by a subject during task performance [30, 31]. Each task is given an error score between 0 and 1 as follows:
1) Very High Severity (1): Errors preempted task
completion.
2) High Severity (0.75): Errors led to significant
difficulties in task completion.
3) Medium Severity (0.5): Errors required substantive
remedial actions.
4) Low Severity (0.25): Errors required minor remedial
actions.
5) No Error (0): Task was completed without error.
Every task was given a score between 0 and 1. Hence, if all the
participants completed the tasks without error, the sum of error
scores would be 0. In contrast, if all of them failed to complete all
the tasks, the sum of error scores would be 300 (20 participants × 15 tasks).

2) Efficiency
ISO-9241 standard defines the efficiency as: “resources
spent by user in order to ensure accurate and complete
achievement of the goal” [32]. Hence, in software and information systems, the resource is considered to be the time spent
by the user to accurately achieve the goal. This time-based
efficiency can be calculated using the following equation [33]:

TimeBasedEfficiency = \frac{\sum_{j=1}^{R} \sum_{i=1}^{N} \frac{n_{ij}}{t_{ij}}}{N \cdot R}    (1)

where:

Fig. 12. The three usability study scenarios. (a) The smart office chair scenario: the hardware prototype composed of the mainboard, pressure sensor, light sensor, and vibro-tactile actuator. (b) The smart fridge scenario: (left) a pressure sensor located underneath the eggs container; (right) the mainboard, light sensor, and buzzer mounted on the fridge door. (c) The smart living room scenario, composed of four GPTN nodes: one node (MCRLab1) for the sensors (pressure, temperature, and light) and three nodes (MCRLab2, 3, and 4) for the lamp, fan, and TV actuators.

TABLE V
THE THREE SCENARIOS OF THE EXPERIMENTAL STUDY

Scenario 1 (Smart Chair):
  Task 1: Building the chair SO
  Task 2: Writing SCL rules for prolonged seating detection and user warning
  Task 3: Writing SCL rules for basic posture monitoring and user warning
  Task 4: Running the controller and testing the system
Scenario 2 (Smart Fridge):
  Task 1: Building the fridge SO
  Task 2: Writing SCL rules for eggs absence and user warning
  Task 3: Writing SCL rules for open door detection and user warning
  Task 4: Running the controller and testing the system
Scenario 3 (Smart Living Room):
  Task 1: Building the lamp SO
  Task 2: Building the fan SO
  Task 3: Building the TV SO
  Task 4: Writing SCL rules for controlling the lamp's intensity
  Task 5: Writing SCL rules for controlling the fan's speed
  Task 6: Writing SCL rules for controlling the TV on/off switch
  Task 7: Running the controller and testing the system
N = number of tasks
R = 10 (number of participants per group)
n_ij = the result of task i by participant j (1 if the task is completed successfully and 0 if not)
t_ij = the time elapsed for completing task i by participant j.
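The metric of Equation (1) can be computed directly from the per-task completion flags and times, as in the small Python sketch below; the numbers used are placeholders for illustration, not the study's data.

def time_based_efficiency(n, t):
    """Equation (1): n[i][j] is 1 if participant j completed task i (else 0),
    and t[i][j] is the time (seconds) participant j spent on task i."""
    N = len(n)       # number of tasks
    R = len(n[0])    # number of participants
    total = sum(n[i][j] / t[i][j] for i in range(N) for j in range(R))
    return total / (N * R)

# Illustrative values only: 3 tasks, 2 participants.
n = [[1, 1], [1, 0], [1, 1]]
t = [[120, 150], [200, 240], [90, 100]]
print(round(time_based_efficiency(n, t), 4))   # goals completed per second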

3) Survey
In addition to the quantifiable metrics, a survey was used to
measure the subjects’ satisfaction level. A total of 11 questions,
7 Likert scale and 4 open ended, were answered by the subjects
at the end of the evaluation. These questions, along with the
subjects’ answers, are provided in the next section.

D. Results
Table VI presents the sum of error score and the %error score
for the IT users, Non-IT users, and all participants. For scenario
1, the sum of error score and %error score metrics are very comparable across both user groups. A 1-tailed Mann-Whitney
test indicates that the sum of error score for scenario 1 is greater
for Non-IT users (3.25) compared to IT users (1.75), U = 40.5, p = 0.481. The difference between both groups is not
statistically significant. There is a slight increase of less than
2% in the %error score for Non-IT users. For scenario 2, the
error score of the Non-IT users (4.5) is almost twice as high as
that of the IT users (2.5). However, it is still a somewhat
inconsequential score, considering that the maximum possible sum of error score is 40 (10 participants × 4 tasks). A 1-tailed
Mann-Whitney test reveals that the error score difference

TABLE VI
THE ERROR SCORES FOR THE THREE SCENARIOS

             Sum of Error Score        % Error Score (sum of errors / (#participants × #tasks))
             IT     Non-IT   Total     IT       Non-IT    Total
Scenario 1   1.75   3.25     5         2.18%    4.06%     3.125%
Scenario 2   2.5    4.5      7         6.25%    11.25%    8.75%
Scenario 3   8.25   10.75    19        11.78%   15.35%    13.57%

Fig. 13. Sum of error score per task for (a) scenario 1, (b) scenario 2, and (c) scenario 3.
Fig. 14. (a) The mean elapsed time for the two groups for each scenario (in seconds): IT 308.2, 230.9, 540.7; Non-IT 415.9, 319.3, 607.6; average 362.05, 275.1, 574.15 for scenarios 1, 2, and 3, respectively. (b) Time-based efficiency for the two groups for each scenario (in goals/sec): IT 0.003278, 0.00446, 0.00187; Non-IT 0.002666, 0.003541, 0.00165; average 0.002972, 0.004001, 0.00167.
between both groups for scenario 2 is statistically insignificant,
U = 33.0, p = 0.218. The error score for Non-IT users in
scenario 3 (10.75) is also slightly higher than that of IT users
(8.25) with a 4% increase in %error score. The maximum
possible error score is 70 (10 participants × 7 tasks). The 1-
tailed Mann-Whitney test reveals that the error score difference
between both groups is statistically insignificant, U = 34.0, p = 0.247. Note that the Mann-Whitney non-parametric statistical
test of difference was employed for the three scenarios since the
normality assumption could not be satisfied. Fig. 13 shows the sum of error score per task for each scenario.
Fig. 14(a) compares the mean elapsed time for each
scenario. The Non-IT users required more time to complete the
tasks for all three scenarios compared to the IT users. A 1-tailed
Mann-Whitney test indicates that the elapsed time is
significantly greater for Non-IT users compared to IT users for
scenarios 1 and 3, with U = 16.5, p = 0.009 for scenario 1, U =
28.0, p = 0.105 for scenario 2, and U = 11.0, p = 0.002 for scenario 3. This is also reflected in Fig. 14(b) that shows that
the calculated time-based efficiency for Non-IT users is below
that of IT users by 18.6% for s cenario 1, 25.9% for scenario 2,
and 13.3% for scenario 3. Note that the Mann-Whitney non-
parametric statistical test of difference was employed since the
normality assumption could not be satisfied. Table VII summarizes the results of the Mann-Whitney tests.
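For reproducibility, a comparison of this kind can be run with a one-tailed Mann-Whitney U test as sketched below; the per-participant vectors shown are placeholders for illustration, not the study's raw data.

from scipy.stats import mannwhitneyu

# Placeholder per-participant elapsed times (seconds) for one scenario.
it_times    = [300, 310, 295, 320, 305, 315, 290, 308, 312, 298]
nonit_times = [410, 395, 430, 405, 420, 415, 400, 425, 418, 402]

# One-tailed test of whether Non-IT times tend to be greater than IT times.
u_stat, p_value = mannwhitneyu(nonit_times, it_times, alternative="greater")
print(u_stat, p_value)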
In terms of participants’ satisfaction, we present the results
of the survey conducted after test completion. The first 7 questions were Likert-scale based and prompted participants to
specify their opinion on various interaction aspects by selecting
one of 5 options. Table VIII lists these questions. Fig. 15 graphically summarizes the percentage of the IT and Non-IT
participants’ satisfaction associated with them. Table VIII also
lists the level of agreement for each question, which corresponds to the
median value of its responses. The results show that there is an
overall very high level of agreement with the positive statement
of each question.
Participants were also asked four open-ended questions.
These questions are listed in Table IX. When answering these
questions, most of the participants gave positive responses. The
following is a sample of the answers to question 8. Most of the
participants repeated the word “simple”:
Feedback#1 : “I like the organization of the interface
components and the display of sensors information. The way I can set the rules is very simple and organized. I can say, it is
clear and very easy to use”
Feedback#2: “…very helpful in dealing with different situations that require interaction with sensors”
Feedback#3: “User friendly GUI. Simple and
Straightforward steps”
Feedback#4: “It is easy to use and friendly, no need for any
computer or programing skills”
Feedback#5: “I like its simplicity and user friendliness”

TABLE VII
RESULTS FROM THE 1-TAILED MANN-WHITNEY TEST

Metric                 Scenario     Statistically significant?   U value   p value
Sum of error scores    Scenario 1   no                           40.5      0.481
                       Scenario 2   no                           33.0      0.218
                       Scenario 3   no                           34.0      0.247
Time elapsed           Scenario 1   yes                          16.5      0.009
                       Scenario 2   no                           28.0      0.105
                       Scenario 3   yes                          11.0      0.002

Fig. 15. Answer distribution over the satisfaction questionnaire for (a) IT and (b) Non-IT participants.

TABLE VIII
PARTICIPANTS’ LEVEL OF AGREEMENT RESULTS

Likert-scale question                                                                                            Level of agreement
Q1  The blocks were easy to connect together                                                                     4.7, 4.5
Q2  The setup of the controller on the computer program was easy to understand                                   4.7, 4.8
Q3  The rules of the controller on the computer program were easy to write                                       5.0, 4.5
Q4  The computer program was easy to use                                                                         4.9, 4.3
Q5  The blocks that you had to connect together for the second and third scenarios were easy to build            4.4, 4.4
Q6  The rules of the controller on the computer program for the second and third scenarios were easy to write    4.5, 4.4
Q7  After these tutorials, I can build several objects without needing technical help                            4.5, 4.2

TABLE IX
OPEN-ENDED QUESTIONS

8   Please indicate what you like about the computer program
9   Please indicate what you dislike about the setup of the controller in the computer program
10  Please provide any suggestions to improve the SITE system
11  Please provide any feedback about this study itself, so I may improve it
Feedback#6: “The program was easy to use; the labels on
the program matched the ones in the video, making it easy to
follow”
Feedback#7: “Simple, no extra buttons that could confuse
me”
Feedback#8: “The rules were clear and easy to use”
The following selected feedback items describe what
participants disliked about the SITE system:
Feedback#9: “Although it works perfectly, it’s slow when I
click to run the controller”
Feedback#10: “The buttons can be enhanced by making
them larger in size with more interesting icons, like the iPhone
buttons”
The third open-ended question asked whether they had any
suggestions to improve the SITE software system. The following are samples of their suggestions:
Suggestion#1: “Instead of using the words “low” and
“high” for the values of the sensors, for example, for the light
sensor, use the words “dark” and “bright” or add a description
of what "high" and "low" mean with a small example embedded
in the software”
Suggestion#2: “I think the Advanced configurator button in
the main window should be located somewhere else, so that the
user doesn’t click on it mistakenly and confuse her/himself”
Suggestion#3: “The screen should be adjustable in size, font
selection can be added, other languages should be supported,
and pictures of sensors instead of labels should be used”
The last open-ended question asked participants whether they had
any suggestions to improve the study. Most of the suggestions
focused on improvements to the tutorial. There were a few suggestions for various services:
Suggestion#4: “… this system can be linked directly to the
grocery store in the case of Scenario 2”

E. Discussion
We elected to permit users to choose the Controller
Configuration mode of their choice. They all chose the Form-
Based mode for rule specification. This is unsurprising given that this mode does not require users to remember SCL syntax
or develop fuzzy logic expertise. All of our subjects were first
time users of SITE. We suppose that increased familiarity with
the system after prolonged interaction might encourage users to
use the Editor-Based mode. This mode might offer a faster
method of entering rules into the system given that the user has memorized the syntax after repeated use. The Advanced mode
will probably only benefit expert users familiar with fuzzy
logic. Although this might be a small user group, we developed this mode to allow interested users to fine-tune
control rules more effectively.
Although there were small differences in the performance of
IT and Non-IT users in terms of error score, elapsed time, and
time-efficiency, all subjects successfully completed the three
scenarios. We obtained a statistically significant difference
across groups in the elapsed time for scenarios 1 and 3.
However, we did not find a statistically significant difference in the error score across groups for the scenarios. Hence, we can
deduce that although Non-IT users were spending more time
performing the tasks, they were not committing significantly
more errors. IT users had a particular advantage in both scenarios 2 and 3 since subjects were not assisted in the
development and configuration of both the smart fridge and
smart living room. Although the completion of these scenarios
did not require technical skills beyond those that were covered in the tutorial, they involved some design tasks pertaining to the
choice of sensors, their placem ent (subjects were given hints on
sensor placement), and the development of SCL rules.
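For readers who wish to reproduce this kind of group comparison, the sketch below shows one way the elapsed-time difference could be tested. The paper does not state which statistical test was used, so the choice of Welch's t-test here, and the numbers, are assumptions for illustration only.

```python
# Illustrative only: the specific test (Welch's t-test) and the values below
# are assumptions, not the study's actual analysis or data.

from scipy import stats

it_elapsed_s     = [310, 295, 330, 280, 305, 290, 315, 300, 325, 285]   # hypothetical
non_it_elapsed_s = [380, 420, 395, 410, 370, 405, 430, 390, 415, 400]   # hypothetical

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(it_elapsed_s, non_it_elapsed_s, equal_var=False)
print(f"elapsed time: t = {t_stat:.2f}, p = {p_value:.4f}")

# Repeating the same call on per-scenario error scores would be the analogous
# check for the (non-significant) error-score comparison reported above.
```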
Both types of users were highly satisfied with the interaction with SITE, as evidenced by the results of the survey. These results are encouraging as they reflect the potential desire of Non-IT users to not only benefit from smart home systems, but also participate in their development.
The hypothesis stated at the beginning of Section V is validated given that all participants were able to successfully complete the experiment’s tasks.
F. Limitations
We identify limitations related to the size and scope of the
study. We list them as follows:
1) Number of participants: There is an on-going debate about
the appropriate sample size in usability studies. Nielsen
[34] recommends the recruitment of 20 subjects for
quantitative usability studies. We have followed this
recommendation.
2) Age range of the participants: None of the subjects in either user group was above 42 years of age. Older individuals are potentially less comfortable with technology [35] and hence might attain a lower level of success, especially among Non-IT users. Further studies are needed to assess the usability of the system with other age groups.
3) Scope of features investigated: All subjects chose the Form-Based mode for rule specification. This left the other configuration modes untested. Further investigations are required to assess the usability of the other supported modes.
VI. CONCLUSIONS
In this paper, we propose the SITE system that interacts with two types of entities: users and SOs. SITE allows the development of a smart environment by using the GPTN to create SOs and the SITE CVC to specify SO control. The system supports users with varying levels of technology expertise. Hence, three modes of SO control rule specification are provided (Form-Based, Editor-Based, and Advanced). The Form-Based and Editor-Based modes are SCL-based and designed for novice and expert users, respectively. The Advanced mode allows more refined rule specification and is designed for technology-savvy users with basic knowledge of fuzzy logic.
An empirical study was conducted to evaluate the usability of the SITE system. Twenty participants (IT and Non-IT) were directed to build SOs and interact with the SITE CVC. Although some differences in performance were observed across the groups, with the IT user group achieving a slightly smaller error score and a higher time-efficiency, all users were able to complete the experimental tasks. Furthermore, both user groups expressed their satisfaction with the system. These encouraging results signify that Non-IT users might be
interested in taking on the role of smart home system designer
in addition to end-user.
REFERENCES

[1] L. Atzori, A. Iera, and G. Morabito, "The internet of things: A survey," Computer Networks, vol. 54, no. 15, pp. 2787-2805, 2010.
[2] D. Bandyopadhyay and J. Sen, "Internet of Things: Applications and challenges in technology and standardization," Wireless Personal Communications, vol. 58, no. 1, pp. 49-69, 2011.
[3] I. Bose and R. Pal, "Auto-ID: managing anything, anywhere, anytime in the supply chain," Communications of the ACM, vol. 48, no. 8, pp. 100-106, 2005.
[4] I. Peña-López, "ITU Internet report 2005: The internet of things," 2005.
[5] A. Zanella, N. Bui, A. Castellani, L. Vangelista, and M. Zorzi, "Internet of Things for smart cities," IEEE Internet of Things Journal, vol. 1, no. 1, pp. 22-32, 2014.
[6] G. Fortino, A. Guerrieri, and W. Russo, "Agent-oriented smart objects development," in 2012 IEEE 16th International Conference on Computer Supported Cooperative Work in Design (CSCWD), 2012, pp. 907-912.
[7] M. R. Alam, M. B. I. Reaz, and M. A. M. Ali, "A review of smart homes—past, present, and future," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 42, no. 6, pp. 1190-1203, 2012.
[8] J. A. Macías, "Development of end-user-centered EUD software," in Proceedings of the 13th International Conference on Interacción Persona-Ordenador, 2012, p. 24.
[9] H. Lieberman, F. Paternò, M. Klann, and V. Wulf, "End-user development: An emerging paradigm," in End User Development. Springer, 2006, pp. 1-8.
[10] R. Dautriche, C. Lenoir, A. Demeure, C. Gérard, J. Coutaz, and P. Reignier, "End-user-development for smart homes: relevance and challenges," in Proceedings of the Workshop "EUD for Supporting Sustainability in Maker Communities," 4th International Symposium on End-User Development (IS-EUD), 2013, p. 6.
[11] B. Hafidh, H. Al Osman, H. Dong, and A. El Saddik, "A framework of reconfigurable transducer nodes for smart home environments," IEEE Embedded Systems Letters, vol. 7, no. 3, pp. 81-84, 2015.
[12] A. Souza and J. R. A. Amazonas, "A novel smart home application using an Internet of Things middleware," in Proceedings of the 2013 European Conference on Smart Objects, Systems and Technologies (SmartSysTech), 2013, pp. 1-7.
[13] R. Blasco, Á. Marco, R. Casas, D. Cirujano, and R. Picking, "A smart kitchen for ambient assisted living," Sensors, vol. 14, no. 1, pp. 1629-1653, 2014.
[14] R. Piyare and M. Tazil, "Bluetooth based home automation system using cell phone," 2011, pp. 192-195.
[15] R. Piyare and S. R. Lee, "Smart home control and monitoring system using smart phone," in 1st International Conference on Convergence and its Application (ICCA), 2013.
[16] M. Yan and H. Shi, "Smart living using Bluetooth-based Android smartphone," International Journal of Wireless & Mobile Networks, vol. 5, no. 1, p. 65, 2013.
[17] J. Byun, B. Jeon, J. Noh, Y. Kim, and S. Park, "An intelligent self-adjusting sensor for smart home services based on ZigBee communications," IEEE Transactions on Consumer Electronics, vol. 58, no. 3, pp. 794-802, 2012.
[18] J. Nichols and B. A. Myers, "Creating a lightweight user interface description language: An overview and analysis of the personal universal controller project," ACM Transactions on Computer-Human Interaction (TOCHI), vol. 16, no. 4, p. 17, 2009.
[19] S. Mayer, A. Tschofen, A. K. Dey, and F. Mattern, "User interfaces for smart things–A generative approach with semantic interaction descriptions," ACM Transactions on Computer-Human Interaction (TOCHI), vol. 21, no. 2, p. 12, 2014.
[20] L. E. Holmquist et al., "Building intelligent environments with smart-its," IEEE Computer Graphics and Applications, vol. 24, no. 1, pp. 56-64, 2004.
[21] E. M. Tapia, S. S. Intille, and K. Larson, "Activity recognition in the home using simple and ubiquitous sensors," in Proceedings of Pervasive, 2004, pp. 158-175.
[22] A. Kameas, I. Mavrommati, and P. Markopoulos, "Computing in tangible: using artifacts as components of ambient intelligence environments," Ambient Intelligence, pp. 121-142, 2005.
[23] S. Antifakos, B. Schiele, and L. E. Holmquist, "Grouping mechanisms for smart objects based on implicit interaction and context proximity," in Adjunct Proceedings of the International Conference on Ubiquitous Computing (UbiComp), Seattle, USA, 2003.
[24] M. Beigl, H.-W. Gellersen, and A. Schmidt, "Mediacups: experience with design and use of computer-augmented everyday artefacts," Computer Networks, vol. 35, no. 4, pp. 401-409, 2001.
[25] A. K. Dey, T. Sohn, S. Streng, and J. Kodama, "iCAP: Interactive prototyping of context-aware applications," in International Conference on Pervasive Computing, 2006, pp. 254-271.
[26] K. N. Truong, E. M. Huang, and G. D. Abowd, "CAMP: A magnetic poetry interface for end-user programming of capture applications for the home," in International Conference on Ubiquitous Computing, 2004, pp. 143-160.
[27] J. Coutaz, A. Demeure, S. Caffiau, and J. L. Crowley, "Early lessons from the development of SPOK, an end-user development environment for smart homes," in Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, 2014, pp. 895-902.
[28] L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, no. 3, pp. 338-353, 1965.
[29] P. Cingolani and J. Alcala-Fdez, "jFuzzyLogic: a robust and flexible Fuzzy-Logic inference system language implementation," in FUZZ-IEEE, 2012, pp. 1-8.
[30] H. I. H. Aljamaan, "Model-Oriented Tracing Language: Producing Execution Traces from Tracepoints Injected into," University of Ottawa, 2015.
[31] W. Albert and T. Tullis, Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics. Newnes, 2013.
[32] ISO, "ISO 9241-11: Ergonomic requirements for office work with visual display terminals (VDTs)," International Organization for Standardization, 1998.
[33] A. Sergeev, "User interfaces design and UX/usability evaluation," 2010. [Online]. Available: http://ui-designer.net/usability/effectiveness.htm (accessed 12/01/2016).
[34] J. Nielsen, "Quantitative studies: How many users to test?," Alertbox, Jun. 26, 2006.
[35] A. Smith, "Older adults and technology use," Pew Research Center, 2014.

Basim Hafidh received his B.A.Sc. in Electrical Engineering from the University of Baghdad, Baghdad, Iraq, in 1981. He then received his M.A.Sc. in Electrical Engineering from the same university in 1985. He received a second M.A.Sc. in Electrical and Computer Engineering from the University of Ottawa in 2012. He is currently a Ph.D. candidate with the Multimedia Communication Research Laboratory (MCRLab), School of Electrical Engineering and Computer Science (SEECS) at the University of Ottawa. He has received the Queen Elizabeth II Graduate Scholarship twice. His research interests include tangible user interfaces, multimodal interaction with the environment, IoT, smart homes, smart cities, and serious gaming.

Hussein Al Osman (M’12) received the B.A.Sc. (Hons.) (summa cum laude) degree in computer engineering, the M.A.Sc. degree in electrical engineering, and the Ph.D. degree in electrical engineering from the University of Ottawa, in 2007, 2007, and 2014, respectively. He is currently an Assistant Professor with the School of Electrical Engineering and Computer Science, University of Ottawa. Over the course of his academic journey, he has received several scholarships and awards (3 NSERC scholarships, the Queen Elizabeth II Graduate Scholarship, the best paper award at the DS-RT 2008 conference, and the Part-Time Professor Award). He has authored/co-authored over 30 research articles. His current research interests include cloud gaming, software-defined networking, affective computing, and human–computer interaction. He is a member of ACM.

Juan Sebastian Arteaga-Falconi (M’09, S’12) is currently a Ph.D. candidate in Electrical and Computer Engineering at the University of Ottawa, Ottawa, ON, Canada. He received the Engineering degree in Electronics from the Politecnica Salesiana University, Cuenca, AZ, Ecuador, in 2008, and the M.A.Sc. degree in Electrical and Computer Engineering from the University of Ottawa in 2013. From 2008 to 2011, he was with SODETEL Co. Ltd., Cuenca, AZ, Ecuador, where he was a co-founder and served as General Manager. He joined the MCRLab at the University of Ottawa in 2012 and is currently a Teaching Assistant at the same university. His research interests are biometrics, signal processing, system security, and machine learning. He served as Treasurer of the IEEE Ecuadorian Section ExCom from 2010 to 2012. He received the 2011 and 2013 SENESCYT Ecuadorian Scholarships for graduate studies.

Haiwei Dong (M’12–SM’16) received the D.Eng. degree in computer science and systems engineering and the M.Eng. degree in control theory and control engineering from Kobe University (Japan) and Shanghai Jiao Tong University (China), in 2010 and 2008, respectively. He is currently with the University of Ottawa. Prior to that, he was appointed as a Post-Doctoral Fellow with New York University Abu Dhabi, a Research Associate with the University of Toronto, a Research Fellow (PD) at the Japan Society for the Promotion of Science, a Science Technology Researcher with Kobe University, and a Science Promotion Researcher with the Kobe Biotechnology Research and Human Resource Development Center. His research interests include robotics, haptics, control, and multimedia.

Abdulmotaleb El Saddik (F’09) is Distinguished University Professor and University Research Chair in the School of Electrical Engineering and Computer Science at the University of Ottawa. His research focus is on multimodal interaction with sensory information in smart cities. He has authored and co-authored four books and more than 550 publications. He has chaired more than 40 conferences and workshops and has received research grants and contracts totalling more than $18 million. He has supervised more than 120 researchers. He has received several international awards, among them ACM Distinguished Scientist, Fellow of the Engineering Institute of Canada, Fellow of the Canadian Academy of Engineering, Fellow of the IEEE, the IEEE I&M Technical Achievement Award, and the IEEE Canada Computer Medal.