
Ministry of National Education
”OVIDIUS” University of Constanța
Faculty of Mathematics and Computer Science
Degree Program: Computer Science
Home Assistant for Elderly People
Scientific Adviser:
Conf. dr. Pelican Elena
Student: [anonimizat]
2018

Outline
Outline
List of Figures
List of Tables
1 Introduction
2 Content
2.1 Reminder
2.2 Chatbot
2.3 Face Recognition
3 Application
4 Conclusions
4.1 Future works
References

Abstract
Regardless of age, many people suffer from memory loss, severe or less severe. This problem is much more common among the elderly. A 2017 study [1] shows that, worldwide, the number of people who have Alzheimer's disease or a related dementia is estimated at 44 million. Only 1 in 4 people with Alzheimer's has been diagnosed. More than 1 in 6 Alzheimer's and dementia caregivers had to give up work entirely, either to become a caregiver in the first place or because the duties became too burdensome. One in 3 caregivers is 65 or older. Between 2000 and 2015, deaths from heart disease decreased by 11%, while deaths from Alzheimer's disease increased by 123%. This cruel disease kills more people than breast cancer and prostate cancer combined. This paper aims to ease the lives of Alzheimer's patients and beyond. The application was built using elements of Machine Learning and database management systems.

List of Figures
1.1 Alzheimer statistic, US, 2017
2.1 Reminders table schema
2.2 Symbolic Reduction
2.3 Divide and Conquer
2.4 Spelling and Grammar correction
2.5 Keywords
2.6 Conditionals
2.7 Synonyms
2.8 Flowchart of the algorithm of the Eigenfaces method
2.9 Project block diagram
3.1 Main Activity
3.2 Reminder Activity
3.3 Reminder Add Activity
3.4 Reminder Activity – with some reminders in the database
3.5 Chatbot conversation
3.6 Chatbot contact feature
3.7 Face Recognition Activity
3.8 The group ”visitors”
3.9 Two people, single recognition
3.10 Two people, both recognized
3.11 No recognition

List of Tables
2.1 Comparison of some work related to face recognition
4.1 Aletheia pros and cons

Chapter 1
Introduction
This idea came to life from the desire to ease the lives of elderly people. How will it ease their lives? Simple! The application is designed around the needs of the elderly, having features that will help them throughout the day. This paper stands as the foundation for the mobile application that I named ”Aletheia”, whose purpose is to help elderly people remember events, activities, details or, in extreme cases, even people. Because this disease is one that needs lots of attention, I integrated a chatbot to socialize with the users or help them whenever they are in need and no one is around. I know that this application will not miraculously heal the people who suffer from this disease, but I do hope that, at least, it will ease their day-to-day life.
This paper is structured in 4 chapters:
⊿Chapter 1 – Introduction: this chapter, as the name suggests, is an introduction to the chosen theme and application.
⊿Chapter 2 – Content: this chapter presents the technologies that I used for the application.
⊿Chapter 3 – Application: this chapter presents the mobile application, its features and a small ”how to use it” guide, although I made sure that it is user friendly.
⊿Chapter 4 – Conclusions: contains the final conclusions and some future work.
Figure 1.1 : Alzheimer statistic, US, 2017
As is clearly visible in the figure above, only 4% of the people who suffer from Alzheimer's in the United States are under 65. If you think about it, some of these people are taken care of by their families, but the caregivers have jobs or children to take into consideration as well. What I want to underline here is that, by using this application, these Alzheimer's patients will regain a bit of their independence.

Chapter 2
Content
2.1 Reminder
For the creation of this feature I needed a database, and after some research I came to the conclusion that SQLite was the database that would fulfill my requirements. First, and most important, I might add, it is a mobile database – it runs on the mobile device itself. I recommend using a mobile database because it offers a full offline mode for apps that depend on stored data, its performance is independent of the network, it allows you to store personal data with the user, and it is frugal on bandwidth for apps that depend on stored data. Second, it does not have any dependencies and is included with both Android and iOS. Last but not least, SQLite supports most of the SQL query language. Although it mostly has pros, it has one con: on Android it has a 1 MB limitation on BLOBs, and it is up to the developer to decide whether this con outweighs the pros.
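As a hedged sketch of what such a table might look like, the snippet below creates and queries a reminders table with Python's sqlite3 module (which uses the same SQLite engine). The column names are assumptions that mirror the reminder fields described in Chapter 3; the actual schema used by Aletheia is the one shown in Figure 2.1.

```python
import sqlite3

# Hypothetical schema: columns mirror the fields the Reminder screen
# exposes (date, time, repeat flag, repetition interval and type).
conn = sqlite3.connect(":memory:")  # on Android the file lives on the device
conn.execute("""
    CREATE TABLE reminders (
        id          INTEGER PRIMARY KEY AUTOINCREMENT,
        title       TEXT    NOT NULL,
        date        TEXT    NOT NULL,            -- e.g. '2018-06-21'
        time        TEXT    NOT NULL,            -- e.g. '09:30'
        repeats     INTEGER NOT NULL DEFAULT 0,  -- 0 = one-shot, 1 = repeating
        interval    INTEGER,                     -- repetition interval
        repeat_type TEXT                         -- 'hourly', 'daily', 'weekly', ...
    )
""")
conn.execute(
    "INSERT INTO reminders (title, date, time, repeats, interval, repeat_type) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("Take morning pills", "2018-06-21", "09:30", 1, 1, "daily"),
)
rows = conn.execute("SELECT title, repeat_type FROM reminders").fetchall()
print(rows)  # [('Take morning pills', 'daily')]
```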
Because the minimum SDK required for the application is API 15 – I needed to be certain that it works on most devices, and this API level runs on 99.9% of devices – and because WakefulBroadcastReceiver cannot run on Android API 23 or higher, for background tasks I had to use both WakefulBroadcastReceiver and JobIntentService, the latter being the replacement for the former as of Android O.
Figure 2.1 : Reminders table schema
2.2 Chatbot
A chatbot (or bot) is a conversational software program designed to
chat with humans via voice or text.
Chatbots can be deployed in a variety of channels including popular voice
and messaging platforms. The use cases are virtually endless, from au-
tomating common customer service queries, to providing touch points
along the customer journey, to optimizing internal IT processes, to learn-
ing applications like language and enterprise soft skills, to games, toys,
entertainment, and more.
For this feature I used Program AB, also known as the Ab.jar library, which is responsible for processing the AIML files. Linked to the application, it makes it possible to add the chatbot feature. Using this library you can add code specific to your application by using customizable AIML tags. Program AB is the reference implementation of the AIML specification draft.
AIML, or Artificial Intelligence Markup Language, is XML-based, which makes its structure very easy to understand and therefore to use. It is a widely adopted standard for creating chatbots and mobile virtual assistants. These AIML files compose the chatbot dictionary. Some of the most important tags that can't be missing from an AIML file are:
⊿the <aiml> tag – marks the beginning and the end of an AIML file
⊿the <category> tag – a category of knowledge in the chatbot or virtual assistant knowledge base
⊿the <pattern> tag – contains a pattern that matches what the input of the user may be
⊿the <template> tag – the chatbot response to the pattern
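For illustration, a minimal AIML file combining these tags might look like the following (the category below is invented for this example; it is not taken from Aletheia's dictionary):

```xml
<aiml version="2.0">
  <!-- one unit of knowledge: a pattern and its reply -->
  <category>
    <pattern>HELLO *</pattern>
    <template>Hello! How are you feeling today?</template>
  </category>
</aiml>
```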
Despite its simplicity this language powers some of the most complex con-
versational agents on the market.
The AIML 2.0 specification (released as an initial draft in 2013) intro-
duced a number of new features to the language that dramatically improve
on AIML 1.x and the natural language processing power of chatbots. These
features include:
⊿Zero+ Wildcards
⊿Highest Priority Matching
⊿Migration from XML attributes to tags
⊿AIML Sets
⊿AIML Maps
⊿Loops
⊿Local variables
⊿Sraix
⊿Denormalization
⊿OOB (Out of Band) Tags
The basic unit of knowledge in AIML is called a category. Each cat-
egory consists of an input question, an output answer, and an optional
context. The question, or stimulus, is called the pattern. The answer,
or response, is called the template. The two primary types of optional context are called ”that” and ”topic”. The AIML pattern language is simple, consisting only of words, spaces, and the wildcard symbols _ and *. The words may consist of letters and numerals, but no other characters. The pattern language is case invariant. Words are separated by a single space, and the wildcard characters function like words.
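To make the matching idea concrete, here is a toy Python sketch of the pattern language described above (words, spaces, case-invariant matching, and the _ and * wildcards standing for one or more words). This is only an illustration, not Program AB's actual matching algorithm, which also handles priorities, sets and zero+ wildcards.

```python
import re

def pattern_to_regex(pattern: str) -> str:
    # Translate an AIML-style pattern into a regular expression:
    # each wildcard matches one or more whitespace-separated words.
    parts = []
    for token in pattern.split():
        if token in ("_", "*"):
            parts.append(r"\S+(?: \S+)*")
        else:
            parts.append(re.escape(token))
    return "^" + " ".join(parts) + "$"

def matches(pattern: str, user_input: str) -> bool:
    # The pattern language is case invariant, so normalize the input.
    return re.match(pattern_to_regex(pattern), user_input.upper()) is not None

print(matches("WHO IS *", "who is Socrates"))  # True
print(matches("WHO IS *", "who is"))           # False: * needs at least one word
```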
The first versions of AIML allowed only one wild card character per
pattern. The current AIML standard permits multiple wildcards in each
pattern, but the language is designed to be as simple as possible for the
task at hand, simpler even than regular expressions. The template is the
AIML response or reply. In its simplest form, the template consists of only
plain, unmarked text.
More generally, AIML tags transform the reply into a mini computer
program which can save data, activate other programs, give conditional
responses, and recursively call the pattern matcher to insert the responses
from other categories. Most AIML tags in fact belong to this template
side sub-language.
AIML 1.x versions support two ways to interface with other languages and systems. The <system> tag executes any program accessible as an operating system shell command, and inserts the results in the reply. Similarly, the <javascript> tag allows arbitrary scripting inside the templates. (Note that Pandorabots supports AIML 2.0, which does not implement these tags.)
The optional context portion of the category consists of two variants, called <that> and <topic>. The <that> tag appears inside the category, and its pattern must match the robot's last utterance. Remembering the last utterance is important if the chatbot asks a question. The <topic> tag appears outside the category, and collects a group of categories together. The topic may be set inside any template.
AIML is not exactly the same as a simple database of questions and answers. The pattern matching ”query” language is much simpler than something like SQL. But a category template may contain the recursive <srai> tag, so that the output depends not only on one matched category, but also on any others recursively reached through <srai>.
AIML implements recursion with the <srai> operator. No agreement exists about the meaning of the acronym. The ”A.I.” stands for artificial intelligence, but ”S.R.” may mean ”stimulus-response”, ”syntactic rewrite”, ”symbolic reduction”, ”simple recursion”, or ”synonym resolution”. The disagreement over the acronym reflects the variety of applications for <srai> in AIML. Each of these is described in more detail in a subsection below:
⊿Symbolic Reduction: Reduce complex grammatical forms to simpler
ones.
⊿Divide and Conquer: Split an input into two or more sub-parts, and
combine the responses to each.
⊿Synonyms: Map different ways of saying the same thing to the same
reply.
⊿Spelling or grammar corrections
⊿Detecting keywords anywhere in the input.
⊿Conditionals: Certain forms of branching may be implemented with
<srai>
Symbolic Reduction
Symbolic reduction refers to the process of simplifying complex grammatical forms into simpler ones. Usually, the atomic patterns in categories storing chatbot knowledge are stated in the simplest possible terms; for example, we tend to prefer patterns like ”WHO IS SOCRATES” to ones like ”DO YOU KNOW WHO SOCRATES IS” when storing biographical information about Socrates. The simplest form of expressing the fundamental meaning of an utterance is often also known as the intent.
Many of the more complex forms reduce to simpler forms using AIML cat-
egories designed for symbolic reduction:
Figure 2.2 : Symbolic Reduction
Divide and Conquer
Many individual sentences may be reduced to two or more subsentences, and the reply formed by combining the replies to each. A sentence beginning with the word ”Yes”, for example, if it has more than one word, may be treated as the subsentence ”Yes” plus whatever follows it.
Figure 2.3 : Divide and Conquer
Spelling and Grammar correction
The single most common client spelling mistake is the use of ”your” when ”you're” or ”you are” is intended. Not every occurrence of ”your”, however, should be turned into ”you're”. A small amount of grammatical context is usually necessary to catch this error:
Figure 2.4 : Spelling and Grammar correction
Keywords
Frequently we would like to write an AIML template which is activated by
the appearance of a keyword anywhere in the input sentence. The general
format of four AIML categories is illustrated by this example:
The first category both detects the keyword when it appears by itself,
and provides the generic response. The second category detects the key-
word as the suffix of a sentence. The third detects it as the prefix of an
input sentence, and finally the last category detects the keyword as an
infix. Each of the last three categories uses <srai> to link to the first, so
that all four cases produce the same reply, but it needs to be written and
stored only once.
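A hedged sketch of the four-category keyword idiom described above (the keyword FAMILY and its reply are invented for this example):

```xml
<aiml version="2.0">
  <!-- keyword alone: carries the actual reply -->
  <category>
    <pattern>FAMILY</pattern>
    <template>Family is important. Would you like to call someone?</template>
  </category>
  <!-- keyword as the suffix of a sentence -->
  <category>
    <pattern>_ FAMILY</pattern>
    <template><srai>FAMILY</srai></template>
  </category>
  <!-- keyword as the prefix of a sentence -->
  <category>
    <pattern>FAMILY _</pattern>
    <template><srai>FAMILY</srai></template>
  </category>
  <!-- keyword as an infix -->
  <category>
    <pattern>_ FAMILY _</pattern>
    <template><srai>FAMILY</srai></template>
  </category>
</aiml>
```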
Figure 2.5 : Keywords
Conditionals
It is possible to write conditional branches in AIML, using only the <srai> tag. Consider three categories:
Provided that the predicate ”he” is initialized to ”Unknown”, the categories execute a conditional branch depending on whether ”he” has been set. As a convenience to the botmaster, AIML also provides the equivalent function through the <condition> tag.
Figure 2.6 : Conditionals
Synonyms
AIML does not permit more than one pattern per category. Synonyms are perhaps the most common application of <srai>: many ways of saying the same thing reduce to one category, which contains the reply:
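As an invented illustration of the synonym idiom, several greetings can all be reduced to the one category that holds the reply:

```xml
<aiml version="2.0">
  <!-- the single category that contains the actual reply -->
  <category>
    <pattern>HELLO</pattern>
    <template>Hi there!</template>
  </category>
  <!-- synonyms reduce to it via <srai> -->
  <category>
    <pattern>HI</pattern>
    <template><srai>HELLO</srai></template>
  </category>
  <category>
    <pattern>GOOD MORNING</pattern>
    <template><srai>HELLO</srai></template>
  </category>
</aiml>
```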
The layout consists of a list view which allows the user to see his own text and the response. This feature needs two permissions to function correctly: the WRITE_EXTERNAL_STORAGE and the MOUNT_UNMOUNT_FILESYSTEMS permission. If granted, the first one lets the application write to external storage, in this case the chatbot knowledge base, and the second one permits mounting and unmounting file systems for removable storage.
Figure 2.7 : Synonyms
2.3 Face Recognition
For this feature I used OpenCV for the integration of the Eigenfaces algorithm in C++. OpenCV is an open source library available in multiple programming languages and supported on many operating systems, including Android and iOS. For the matrix operations I used the Eigen library, because it does not have any dependencies and can be used on any hardware platform. Besides the Eigen library, I also used the RedSVD library.
Principal Component Analysis, or PCA, is the foundation of the Eigenfaces method. Eigenfaces and PCA were used by Sirovich and Kirby to represent face images efficiently. They started with a group
of original face images and calculated the best vector system for image compression. Then Turk and Pentland applied the Eigenfaces method to the recognition problem.
PCA is a mathematical procedure through which, from an initial large data set, you can obtain another data set of smaller dimensions. PCA is a projection method onto a subspace, widely used in pattern recognition. One of PCA's objectives is the replacement of correlated vectors of large dimensions with uncorrelated vectors of smaller dimensions. Another objective is calculating a basis for the data set. The main advantages of PCA are its low sensitivity to noise, the reduction of memory and capacity requirements, and the increase in efficiency due to operating in a space of smaller dimensions.
The Eigenfaces strategy is to extract the characteristic features from the face and to represent the face as a linear combination of so-called eigenfaces obtained from the feature extraction. The principal components of the faces in the training set are calculated. Recognition is achieved by projecting the face into the space formed by the eigenfaces. A comparison based on the Euclidean distance between the eigenface coefficients of the image in question and those in the database is made. If this distance is small enough, the person is identified. On the other hand, if the distance is too large, the image is regarded as one that belongs to an individual for which the system has yet to be trained.
To determine the principal components we first obtain the covariance matrix. If the covariance is positive, then the vectors/patterns (for which it was computed) depend on one another: if one grows, the other grows too, and if one drops, the other drops too. If the covariance is negative, then when one of the patterns/vectors grows the other one drops, and vice versa. If the covariance is 0, the two vectors (patterns) are independent.
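A small NumPy illustration of this sign interpretation (the vectors below are made up for the example):

```python
import numpy as np

# The sign of the covariance tells us how two patterns move together.
x = np.array([1.0, 2.0, 3.0, 4.0])
y_up = np.array([2.0, 4.0, 6.0, 8.0])    # grows as x grows
y_down = np.array([8.0, 6.0, 4.0, 2.0])  # drops as x grows

cov_up = np.cov(x, y_up)[0, 1]           # positive: patterns move together
cov_down = np.cov(x, y_down)[0, 1]       # negative: patterns move oppositely
print(cov_up > 0, cov_down < 0)  # True True
```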
All the data in the table below was taken from [10].
Method                            | Images in training set | Success rate            | References
Principal Component Analysis      | 400                    | 79.65%                  | [7]
PCA + Relevant Component Analysis | 400                    | 92.34%                  | [7]
Independent Component Analysis    | 170                    | tanh function – 69.40%  | [6]
Independent Component Analysis    | 40                     | Gauss function – 81.35% | [6]
Hidden Markov Model               | 200                    | 84%                     | [13]
Active Shape Model                | 100                    | 78.12 – 92.05%          | [14], [5]
Wavelet Transform                 | 100                    | 80 – 91%                | [11]
Support Vector Machine            | –                      | 85 – 92.1%              | [9], [8]
Neural Networks                   | –                      | 93.7%                   | [4]
Eigenfaces Method                 | 70                     | 92 – 100%               | [2]
Table 2.1 : Comparison of some work related to face recognition
From the table above one can see the reason why I chose to use the Eigenfaces Method: on small datasets it had nearly perfect results. The user's dataset will be small (I presume it will contain at most 10 persons), so the results should approach 100%.
As a starting point, the training images of dimensions N × N are read and converted to N² × 1 vectors. A training set of dimensions N² × M is thus created, where M is the number of sample images. The average of the image set is calculated as
Ψ = (1/M) ∑_{i=1}^{M} Γ_i,
where Ψ is the average image, M the number of images and Γ_i is the i-th image vector.
The eigenfaces corresponding to the highest eigenvalues are retained. Those eigenfaces define the face space. The eigenspace is created by projecting the image onto the face space formed by the eigenfaces. Thus the weight vectors are calculated. The dimensions of the images are adjusted to
meet the specification and the image is enhanced in the processing steps of recognition. The weight vector of the image and the weight vectors of the faces in the database are compared.

Figure 2.8 : Flowchart of the algorithm of the Eigenfaces method
The average face is calculated and subtracted from each face in the training set. A matrix A is formed using the results of the subtraction operation. The difference between each image and the average image is calculated as
φ_i = Γ_i − Ψ, i = 1, 2, …, M,
where φ_i is the difference between the image and the average image.
The matrix A obtained by the subtraction operation is multiplied by its transpose, and thus the covariance matrix C is formed:
C = A Aᵀ,
where A is formed by the difference vectors, i.e.
A = [φ_1, φ_2, …, φ_M].
The dimensions of C are N² × N², and M images are used to form it. In practice one works with the M × M matrix Aᵀ A instead; since the rank of A is M, only M of the N² eigenvectors are nonzero.
The eigenvalues of the covariance matrix are calculated. The eigenfaces are created by using a number of eigenvectors equal to the number of training images minus the number of classes (the total number of people). The selected set of eigenvectors is multiplied by the matrix A to create a reduced eigenface subspace.
The eigenvectors of smaller eigenvalues correspond to smaller variations in the covariance matrix. The discriminating features of the face are retained. The number of eigenvectors depends on the accuracy with which the dataset is defined, and it can be optimized. The group of selected eigenvectors is called the eigenfaces. Once the eigenfaces have been obtained, the images in the dataset are projected onto the eigenface space and the weights of each image in that space are stored. To determine the identity of an image, its eigen coefficients are compared with the eigen coefficients in the dataset.
The eigenface of the image in question is formed. The Euclidean distances between the eigenface of the image and the eigenfaces stored previously are calculated.
The Eigenfaces algorithm
Step 1: All images are transformed into vectors: Γ_1, Γ_2, …, Γ_N.
Step 2: The mean vector is calculated:
Ψ = (1/N) ∑_{i=1}^{N} Γ_i
Step 3: The mean vector is subtracted from all the vectors in the dataset:
φ_i = Γ_i − Ψ, i = 1, …, N
Step 4: We obtain the covariance matrix:
C = (1/N) ∑_{i=1}^{N} φ_i φ_iᵀ = (1/N) A Aᵀ
Step 5: We obtain the eigenvectors u_i, i = 1, …, N, of the matrix C and keep the first k vectors, which correspond to the k largest eigenvalues.
Step 6: We then obtain the vectors
Ω_iᵀ = [ω_1^i, ω_2^i, …, ω_k^i],
with
φ_i ≈ φ̂_i = ∑_{j=1}^{k} ω_j^i u_j
Step 7: Given an image Γ, it is normalized:
φ = Γ − Ψ
Step 8: φ is projected onto the space of eigenvectors:
φ̂ = ∑_{j=1}^{k} ω_j u_j
Step 9: φ̂ is represented as
Ωᵀ = [ω_1, ω_2, …, ω_k]
Step 10: We search for an i_0 ∈ {1, …, N} that satisfies
‖Γ − Γ_{i_0}‖ = min_{1≤i≤N} ‖Γ − Γ_i‖
The Eigenfaces algorithm was taken from [3] and [12].
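The ten steps above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the C++/OpenCV implementation used in the application: random vectors stand in for real face images, and the small Aᵀ A matrix is diagonalized instead of the huge N² × N² covariance matrix, as discussed earlier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: M images of size N x N, flattened to N^2-vectors (step 1).
N, M, k = 8, 5, 3
images = rng.random((M, N * N))           # rows are flattened images Gamma_i

psi = images.mean(axis=0)                 # mean image Psi (step 2)
A = (images - psi).T                      # N^2 x M matrix of differences phi_i (step 3)

# Instead of the huge N^2 x N^2 covariance A @ A.T (step 4), diagonalize
# the small M x M matrix A.T @ A and map its eigenvectors back through A.
vals, vecs = np.linalg.eigh(A.T @ A)      # ascending eigenvalues
order = np.argsort(vals)[::-1][:k]        # keep the k largest (step 5)
eigenfaces = A @ vecs[:, order]           # N^2 x k face-space basis
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)

# Weight vectors Omega_i of every training image (step 6).
weights = eigenfaces.T @ A                # k x M

# Recognize a probe image: here a slightly noisy copy of training image 2.
probe = images[2] + 0.01 * rng.random(N * N)
w = eigenfaces.T @ (probe - psi)          # normalize and project (steps 7-9)
# Nearest weight vector by Euclidean distance (step 10).
best = int(np.argmin(np.linalg.norm(weights - w[:, None], axis=0)))
print(best)  # 2: the probe is matched to training image 2
```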
Figure 2.9 : Project block diagram
Reasons to choose the Eigenfaces algorithm for face recognition:
⊿it has a higher success rate compared to other algorithms
⊿its recognition speed is better than that of the other algorithms
⊿it is independent of the facial geometry
Challenges for the face recognition feature:
⊿scaling and shifting of the image
⊿the lighting
⊿different pose, angle, hairstyle
In order to reduce the complexity of the facial recognition problem, the significant local and global ”features” of faces must be found and evaluated.
These features do not necessarily correlate to the intuitive notion of facial
features such as eyes, nose, hair, etc. Instead, they should be extracted
from the comparison of a large variety of faces as the features that vary the
most from one face to another. Mathematically speaking, the procedure
is to find the principal components of the distribution of faces.
In other words, if each face is treated as a point/vector in a very high
dimensional space, we wish to find the eigenvectors of the covariance ma-
trix of a set of said face points/vectors. Each eigenvector corresponds to a
single mathematical feature, and can be displayed as an eigenface. A linear
combination of all the eigenfaces produces a unique face. Furthermore, we can define an M-dimensional subspace – the ”face space” – as the span of the best M eigenfaces. Each face can then be approximated as a linear combination of these M eigenfaces. With this initialization, we can reduce
each face down to a set of M dimensional weights, by which the original
face can be approximated.
To summarize, we perform the following steps to initialize the system:
1. Acquire the training image set.
2. Calculate the eigenfaces from the training set, keep the M highest eigenvectors and define the face space. (This step can be repeated when necessary to update the face database.)
3. Calculate the weight space spanning M dimensions for each of the faces in the database by projecting their face images onto the face space.
With the initialized system, we can then recognize new face images by:
1. Calculate a set of weights based on the input image and the M eigenfaces by projecting the input image onto each of the eigenfaces.
2. Calculate the face's distance to the face space and determine if it is within a preset boundary (determine whether or not it is a face).
3. If a face is identified, classify it as known or unknown.

Chapter 3
Application
I gave this application the name ”Aletheia”, which is a Greek word that can be translated as truth or disclosure. But there is also a literal meaning to the word: the state of not being hidden, of being evident. The reason why I chose this word is that, being the opposite of the word lethe, which can be translated as forgetfulness, oblivion or concealment, I believe it captures the essence of what this application is built for.
General
The minimum SDK version for the application is 15, and the target version is 22. For the design of the application I used icons from Google Icons, button styling from the getbase library and the datetimepicker from the wdullaer library. The application was developed using Android Studio and Java. For now, it is available only on Android.
Figure 3.1 : Main Activity
The figure above shows the main activity – the first page that pops up when entering the application. It has a navigation drawer menu, which is closed at first, and the only thing that can be seen besides the header title and the menu button is a background image.
Reminder
I designed the application to have a reminder activity that will help the user remember activities or events. For example, this part of the application could be used for taking pills, or for checking every night before bedtime that the doors and windows are closed. The reminder form offers:
⊿a date picker, so the user can choose the data of a certain event or
activity
⊿a time picker, so the user can choose the time of the said activity or
event
⊿a repeat function, so the user can choose if the activity he is creating
is a repeating one or not
⊿a repetition interval, so the user can choose the interval of the repeating
activity
⊿the type of repetition, e.g. hourly, daily, weekly, etc.
Figure 3.2 : Reminder Activity
Figure 3.2 represents the layout of the reminder feature after installation. The design is minimal.
Figure 3.3 : Reminder Add Activity
Figure 3.3 represents the layout both for adding a reminder and for editing an existing one. All the fields must be completed, otherwise a notification will appear. As can be seen, the user has the possibility of choosing the date, the time, whether the activity is repetitive or not, the repetition interval and the type of repetition. After completing the necessary details, the user also has the possibility of saving the activity or canceling it.
Figure 3.4 : Reminder Activity – with some reminders in the database
Figure 3.4 is practically Figure 3.2, the only difference being that this figure shows two reminders already set by the user. Of course, after the first install you will see the layout of Figure 3.2. You can manage reminders from this layout. If you want to erase a reminder, just long-press it and two buttons will appear in the header, one for completing the action and the other for canceling it.
Chatbot
The application integrates a chatbot functionality, mainly to entertain the elders, help them in different activities and socialize with them. The chatbot feature consists of:
⊿the list layout that allows the conversation to be seen
⊿an adapter that helps the interaction between the bot and the user
⊿the dictionary of the bot
Figure 3.5 : Chatbot conversation
Figure 3.5 shows a simple exchange between the user and the bot. The chatbot dictionary is still small, having only about 200-300 words. In the future its vocabulary will be further developed.
Figure 3.6 : Chatbot contact feature
The chatbot has a feature that can add and store contacts. In the picture above you can see an example of two contacts, one that is already stored and another that is about to be stored.
Face recognition
The last functionality is for a rather extreme case. I thought about integrating facial recognition into the application so that it can be used mostly by people who suffer from Alzheimer's and cannot recognize even their family. The feature offers:
⊿you can take a picture or you can choose one from the gallery
⊿you can add persons to the ”visitors” group or you can create a new
group
⊿the identification of the person
After the user takes a picture, the recognition algorithm runs, and when it finishes it returns one of the possible messages:
1. the name of the person – the image is clear and the algorithm returned the name of the person;
2. ”Unknown” – the person is not in the dataset.
Figure 3.7 : Face Recognition Activity
The face recognition layout has 3 portions:
⊿the upper portion – the input portion – where the user can either take a picture or choose one from the gallery
⊿the middle portion – the dataset portion – where the user adds one or more persons to a group
⊿the lower portion – the result portion – where the algorithm returns its status
Figure 3.8 : The group ”visitors”
I created the group ”visitors” and added 3 people in order to test the algorithm. One of those people is me, so I can test it not only using gallery images but also by taking a picture on the spot.
Figure 3.9 : Two people single recognition
In the figure above it is noticeable that only one person was recognized; the other one, not being in the group, returned ”unknown person”.
Figure 3.10 : Two people, both recognized
This is the easiest case possible: we have two persons in the picture, both of them in the group, so both of them are recognized.
Figure 3.11 : No recognition
The last case is the opposite of the previous one. We have two persons in the picture, and neither of them is in the group, so the algorithm returns the unknown person status.

Chapter 4
Conclusions
Pros                                | Cons
it is user friendly                 | talking with the chatbot is via text
it has face recognition             | it is not real-time face recognition
you can set the repetition interval | it has only one face recognition algorithm
Table 4.1 : Aletheia pros and cons
4.1 Future works
In the future I plan to integrate:
⊿speech recognition, so the user can communicate easily with the chatbot without needing to write. I plan to integrate both speech-to-text and text-to-speech, making the communication between man and machine more enjoyable.
⊿Google Maps, so that the users can find the path back home, to the doctor, to the store, or anywhere else. This feature will have a table that contains crucial information such as the home address, the doctor's address, the store address and the park address. With this data I will draw the minimum-distance route between two points: for example, if the user wants to go to the store from home, he will select the destination and the app will shortly show the most favorable route from the current location.
⊿a medical log, so that the user knows which pills they are allergic to, etc.
⊿home orientation
⊿iOS availability
⊿home orientation
⊿iOS availability

References
[1] https://www.alzheimers.net/resources/alzheimers-statistics/.
[2] I. Atalay and M. Gokmen. Face Recognition Using Eigenfaces. SIU1996, 1996.
[3] Elena Pelican și Lăcrămioara Liță. Algoritmi pentru recunoașterea fețelor. Matrix Rom, București, 2015.
[4] H. Ergezer. Face Recognition: Eigenfaces, Neural Networks, Gabor Wavelet Transform Methods. Baskent University, 2003.
[5] F. Kahraman, B. Kurt, and M. Gokmen. Face Recognition Based on Active Shape Model. SIU2005, 2005.
[6] I. Yazar, H.S. Yavuz, and M.A. Cay. Face Recognition Performance Comparisons by Using Tanh and Gauss Functions in the ICA Method. IATS, 2009.
[7] B. Karaduman. Relevant Component Analysis. Yildiz Technical University, 2008.
[8] F. Karagulle. Face Finding Using Support Vector Machines. Trakya University, 2008.
[9] B. Kepenekci and G.B. Akar. Face Classification with Support Vector Machines. SIU2004, 2004.
[10] Müge Çarıkçı and Figen Özen. A Face Recognition System Based on Eigenfaces Method. Halic University, 2012.
[11] A. Ozdemir. Recognition of Frontal Face Images by Applying the Wavelet Transform. Kahramanmaras Sutcu Imam University, 2007.
[12] Elena Pelican. Course support – Algoritmi de calcul științific. Recunoașterea formelor (fețe și cifre).
[13] F.S. Samaria and A.C. Harter. Parameterization of a Stochastic Model for Human Face Identification. 1994.
[14] C. Tirkaz and S. Albayrak. Face Recognition using Active Shape Model. SIU2009, 2009.