Smart car alarm and accident monitoring system

DIPLOMA THESIS

Graduate: Tufiși Răzvan Manuel

Supervisors: As.Eng. [anonimizat]

Smart car alarm and accident monitoring system

Project proposal: Development of an application which acts as an S.O.S. beacon in case of a car accident. The location of the accident will be displayed on a map using the Google Maps API. A big data algorithm will be used to process and index data which would otherwise be inconclusive.

Project contents: Introduction, [anonimizat], [anonimizat], [anonimizat], Conclusions, References

Place of documentation: [anonimizat], Automations department

Consultants: As.Eng. [anonimizat]

Issue date: November 1, 2016

Handover date: July 11, 2017

Declaration of sole responsibility regarding

the authenticity of the diploma project

The undersigned [anonimizat], holder of identity card series VX no. 467250,

CNP [anonimizat], author of the work: Smart car alarm and accident monitoring system,

elaborated for the graduation examination of [anonimizat]-Napoca, in the session of the 2016-2017 academic year, [anonimizat], [anonimizat], and in the bibliography.

I declare, [anonimizat] of the international conventions regarding copyright.

I declare, [anonimizat] has been previously presented before another bachelor's degree examination committee.

In the case of subsequent discovery of [anonimizat], respectively, the annulment of the bachelor's examination.

Date Tufiși Răzvan Manuel

1 Introduction

1.1 Purpose and Scope

Car accidents claim thousands of lives every year. [anonimizat].

We may be talking about a road made slippery by weather, the road layout (e.g. a bend, hill, or narrow road), or construction works or a deposit on one side of the road. These can all be factors which can cause a car crash [1]. It is clear that in most cases we cannot modify the existing infrastructure or landscape for economic and environmental reasons, nor can we force a person to choose another road because of the presumed danger. The only course of action we can take is to warn drivers and try to prevent further car crashes.

[anonimizat], but many of them tend to be processed and a [anonimizat] a fast decision or a decision based on the yearly government funding. These decisions can be sped up if the country's Ministry of Transportation and Infrastructure is aware of these problems in due time.

[anonimizat] access. Using this application we can gather data in real time and send it to the web application, which refines it and generates a result with high computational speed and precision.

1.2 Project Objectives

The project consists of two parts: a data acquisition application, which is meant to register the position of a car crash when one occurs, and a web application, which processes the data gathered by the first and stores it in a database. The second application is also designed to provide the user with a visual representation of the processed data.

The first application is not user configurable. It is meant to be installed on a car as part of the car's features. The Raspberry Pi board will have a sensor which simulates the car crashing and will be connected to a smartphone running the Android operating system. The smartphone receives the distress signal, obtains a set of coordinates using its built-in GPS and, using an internet connection, sends the signal on to the second application for processing.

The web application is used to process the data obtained from the crash sites and display it to a user. After the data is processed, it is categorized in small chunks, so that it can be displayed and analyzed by the user using a digital map.

The user can be a government employee or an emergency services worker, such as police or hospital staff. Their job is to monitor any change which can occur in the system, observe where the car crash has occurred using the map, and act according to the situation.

1.3 Specifications

The data acquisition part is meant to solve the problem of providing the location of the crash. It imitates a real car crash in which a pressure sensor is triggered, causing a signal to be sent to the Raspberry Pi board. This sensor is connected directly to the board; the signal is processed immediately by the application running in the background on the board and then sent to the Android application. The Android application can obtain its current location using its integrated GPS module.

The main goal of this operation is to keep the entire process as short as possible. This time can be calculated as the time elapsed from when the sensor is triggered to the moment when the request is sent to the web application. It should be as short as possible, but a series of complications can occur. One such case is that the car crash causes the Raspberry Pi board to malfunction before the signal from the sensor is processed.

Another case is similar to the first one and is related to the way sensors are connected in the car. Presuming that multiple sensors are distributed in the car so that every crash scenario is covered, there is a probability that the impact severs the connection to the board. A systematic distribution of the sensors is needed so that when one fails, another can cover its function, but this adds extra cost for the manufacturer.

Even with all of these taken into account, it is not nearly enough. Another issue is the Android application, which needs to run on the personal phone of each driver. If it is not reliable and robust, draining the phone battery quickly or interfering with the performance of the phone or other apps, it can be rejected by the consumer even if it is meant for their safety. Also, the fact that the application is currently made only for the Android platform creates a gap, because there are many users of iOS systems, Windows phones etc.

The third issue is the connection to the internet, which could be limited for some users; in that case the application should use the services provided by the mobile network operator.

These are issues which concern the first part of the application, which is more focused on hardware than on software. For this proof of concept, the most basic setup is used: a single sensor, a Raspberry Pi board, and a device running Android 3.0 or later.

The second application focuses on receiving the requests from the first application. Car crashes may occur by the thousands each minute, causing the server to overload and crash, or slowing down the rendering of the map. The algorithm used for processing is a big data algorithm which requires a lot of computing power and time; optimizing this process may require additional costs for creating distributed clusters of nodes which share their computing power, or for using an existing cloud computing configuration from a provider. The main functionality of this part is to group and classify data into small regions called clusters, on which the geo-profiling algorithm can then be run independently. The time required by the algorithm to finish the processing depends on the amount of data provided as input.

Another problem is precision, which can vary from 0.001 to 0.00001 for two points within roughly one kilometer of each other, so generating a random set of data with values close to the specified ones is particularly difficult.

The interface is user friendly and requires no previous experience. As long as the user is familiar with the algorithm's tuning factors and has the specific data stored in a comma-separated values format or a database, the algorithm can be used regularly to determine any changes which may have occurred.

In this proof of concept, data is generated by a tool which takes five parameters: the minimum and maximum latitude and longitude bounds, and the number of random coordinates required. The coordinates are saved in a comma-separated values format and then asynchronously processed by the application; this way the user is free to use the application for other purposes. After the algorithm finishes processing, we have an overview of the so-called 'hot zones', which span from the area where the pin is placed on the map to the surroundings within the given precision value.

The application creates a theoretical model, using patterns and geo-profiling of the areas where crashes occur often, so that an investigation can subsequently be conducted in that area to establish whether or not extensive measures must be taken to prevent other incidents.

2 Bibliographic study

2.1 Big Data

To get a quick overview of the application it is imperative to understand the concept of big data. Big data refers to sets of data larger than traditional ones (e.g. millions or billions of records). Big data is used for user behavior analytics, predictive analytics, and other advanced data analytics methods to offer certain results.

The challenges of big data analysis include acquisition, storage, analysis etc. Acquiring this data can prove difficult because it is usually not provided by one system but by many distributed systems, while storing it can prove even more difficult due to the large amounts of information (e.g. from a few megabytes for data from a factory sensor to terabytes for storing the information in a DNA sequence). Analysis of this data can also be complex because of the previous factors and the computational power required by the algorithm, but the information obtained is invaluable due to its potential.

The term was introduced in the 1990s by John Mashey [2] and focuses mostly on unstructured data, which can be modeled and adapted to various needs according to the requirements of the user. The most important characteristics are Volume, Variety and Velocity.

To understand whether or not big data is required for this project, other related concepts must be understood: data mining, warehousing and clustering.

2.1.1 Data Mining

Data mining is the process used to discover patterns in large data sets. It uses methods from artificial intelligence, machine learning, statistics etc. to extract information and transform it into a model for future use.

The main idea behind data mining is finding patterns. The earliest methods of extracting patterns included Bayes' theorem (1700s) and regression analysis (1800s), and these still form the starting point for machine learning. [5][6]

There are three basic stages of a data mining process [4]:

Pre-processing – Creating sets of information suitable for analysis and cleaning the redundant ones.

Data Mining – Processing the data, making use of the following methods: association, clustering, classification, regression and summarization.

Validation – Verifying the discoveries and results.

The main focus will be on the clustering method, what it implies, and on data storage.

2.1.2 Clustering

Clustering represents the process of discovering patterns which group data in a specific structure, without any previous knowledge about these structures. Members of a cluster aggregate by their characteristics and similarities, and are partitioned using a particular algorithm which needs to adapt to the provided input. This makes it impossible for an item to be part of more than one cluster.

The most important algorithm models are [3]:

Centroid – each cluster is represented by a single mean vector, and an object's value is compared to these mean values, e.g. the k-means algorithm [7]

Distribution – the cluster is built using statistical distributions

Connectivity – the connectivity on these models is based on a distance function between elements

Group – algorithms have only group information

Graph – cluster organization and relationship between members is defined by a graph linked structure

Density – members of the cluster are grouped by regions where observations are dense and similar e.g.: DBSCAN and OPTICS

For the application we use an algorithm based on DBSCAN, and for validation and analysis we can switch to the K-means algorithm.

2.1.2.1 DBSCAN algorithm

DBSCAN, or Density-Based Spatial Clustering of Applications with Noise, is a density-based clustering algorithm. Besides the set of data which needs to be analyzed, it takes as input parameters the minimum number of points a cluster can have (minPts) and the maximum distance between these points (eps).

It is best suited for the analysis of the data received from the first application because it creates clusters based on the precision we provide and on the minimum number of points a cluster can contain. This is suitable because the set of data is not complex, containing only the minimum knowledge. The main interest is to determine the distance between two points, so that the density between them has a reference.

The Manhattan distance criterion is used rather than the traditional Euclidean distance because it works better with large vectors of data and eliminates the squared error. The absolute value used by the Manhattan distance gives more robust results.

The algorithm pseudo-code:

Take a point from the dataset; if the point has already been visited, take another one. Mark the chosen point as visited.

Find the neighbors of that point and return them, creating a region.

If the region contains enough points it can be inserted into a cluster; otherwise it is marked as NOISE.

Expand the cluster by finding all the neighbors of each point already in the cluster and merging the two lists.

Repeat the process until all points in the dataset are marked as visited or NOISE.

The complete algorithm can be found in reference [8].
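A minimal Java sketch of the steps above, using the Manhattan distance over latitude/longitude pairs (the class and method names are illustrative, not the thesis code):

import java.util.ArrayList;
import java.util.List;

// Naive DBSCAN over 2-D points; label 0 means noise, labels >= 1 are cluster ids.
public class Dbscan {

    public static int[] cluster(double[][] points, double eps, int minPts) {
        int n = points.length;
        int[] label = new int[n];          // 0 = unassigned / NOISE
        boolean[] visited = new boolean[n];
        int clusterId = 0;

        for (int i = 0; i < n; i++) {
            if (visited[i]) continue;      // step 1: skip already visited points
            visited[i] = true;
            List<Integer> region = regionQuery(points, i, eps); // step 2
            if (region.size() < minPts) {
                label[i] = 0;              // step 3: mark as NOISE
                continue;
            }
            clusterId++;
            label[i] = clusterId;
            // step 4: expand the cluster by visiting every neighbor and
            // merging its own neighborhood into the work list
            for (int k = 0; k < region.size(); k++) {
                int j = region.get(k);
                if (!visited[j]) {
                    visited[j] = true;
                    List<Integer> neighbors = regionQuery(points, j, eps);
                    if (neighbors.size() >= minPts) region.addAll(neighbors);
                }
                if (label[j] == 0) label[j] = clusterId;
            }
        }
        return label;                      // step 5: every point is visited or NOISE
    }

    private static List<Integer> regionQuery(double[][] pts, int i, double eps) {
        List<Integer> result = new ArrayList<>();
        for (int j = 0; j < pts.length; j++) {
            if (manhattan(pts[i], pts[j]) <= eps) result.add(j);
        }
        return result;
    }

    // Manhattan distance, preferred here over the Euclidean one
    private static double manhattan(double[] a, double[] b) {
        return Math.abs(a[0] - b[0]) + Math.abs(a[1] - b[1]);
    }
}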

2.1.2.2 K-means algorithm

The K-means algorithm is centroid-based and can be used for comparing results with the DBSCAN algorithm. It is not suitable for use in the application itself because it requires specifying the number of clusters in advance.

The algorithm is:

1. Because it is centroid-based, we need to provide a set of random centroids; then each point is assigned to the closest centroid.

2. After all points are assigned to a cluster, the clusters are recomputed: their centroids are reassigned by summing their members and taking the mean value. Then each point computes its distance to all centroids and is reassigned if needed.

3. Repeat step 2 until convergence, which means that the points no longer move between clusters.

The algorithm also uses the Manhattan metric to compute distances. K-means can be run on specific datasets; we can determine the number of clusters it creates and compare it with the number of clusters created by DBSCAN.

This can be used to study if the precision is optimal for the DBSCAN algorithm, and determine how much the number of points affects the clustering process.
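A compact Java sketch of the three steps (illustrative names, Manhattan metric as above):

import java.util.Arrays;
import java.util.Random;

// Minimal k-means over 2-D points with the Manhattan metric.
public class KMeans {

    public static int[] cluster(double[][] pts, int k, Random rnd) {
        double[][] centroids = new double[k][];
        for (int c = 0; c < k; c++)                      // step 1: random centroids
            centroids[c] = pts[rnd.nextInt(pts.length)].clone();

        int[] assign = new int[pts.length];
        Arrays.fill(assign, -1);
        boolean moved = true;
        while (moved) {                                  // step 3: until convergence
            moved = false;
            for (int i = 0; i < pts.length; i++) {       // assign to closest centroid
                int best = 0;
                for (int c = 1; c < k; c++)
                    if (dist(pts[i], centroids[c]) < dist(pts[i], centroids[best]))
                        best = c;
                if (assign[i] != best) { assign[i] = best; moved = true; }
            }
            for (int c = 0; c < k; c++) {                // step 2: recompute centroids
                double lat = 0, lon = 0; int count = 0;
                for (int i = 0; i < pts.length; i++)
                    if (assign[i] == c) { lat += pts[i][0]; lon += pts[i][1]; count++; }
                if (count > 0) centroids[c] = new double[] { lat / count, lon / count };
            }
        }
        return assign;
    }

    private static double dist(double[] a, double[] b) {
        return Math.abs(a[0] - b[0]) + Math.abs(a[1] - b[1]);
    }
}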

2.1.3 Data warehousing

Data warehousing is a relative term. It is used to describe the method used to store the data and the results. For large applications, databases must be distributed and accessed via a common warehouse. [9]

The application stores data directly in the database and periodically runs the processing algorithm.

2.2 Rossmo’s formula

Rossmo’s formula is a geographic profiling formula used in forensic science for predicting the location of a serial killer. The formula has been used besides forensic science in determining the behavior of predatory animals and epidemiology [10].

The basic idea behind this equation is to find the source of a problem using its cause or effect. In this case the effect is the car accident and the source is yet to be discovered. The first term of the equation describes the idea of probability decreasing with increasing distance. The second term describes the idea of a buffer zone. Adapting this to the application: the farther one departs from a site where accidents occur often, the higher the chance of avoiding one. The second concept is responsible for extending the search area outside the main zone of the car crashes and decreasing the chance of obtaining an incomplete area.

These concepts are also tuned by the constants included in the equation: f and g are exponents which control how fast the probability decays inside and outside the predicted area, and B is the radius of the buffer zone.
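For reference, the published form of Rossmo's formula [10], written with the Manhattan distance between a map point (x_i, y_j) and the n-th recorded incident (x_n, y_n), is:

p_{i,j} = k \sum_{n=1}^{T} \left[ \frac{\phi}{\left(|x_i - x_n| + |y_j - y_n|\right)^{f}} + \frac{(1-\phi)\, B^{\,g-f}}{\left(2B - |x_i - x_n| - |y_j - y_n|\right)^{g}} \right]

where k is a normalizing constant, T is the number of incidents, \phi equals 1 when the point lies outside the buffer zone and 0 inside it, B is the buffer radius and f, g are the tuning exponents.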

3 Software Architecture

3.1 MVC pattern

MVC or Model View Controller is a software architectural pattern widespread in the domain of computer science. Initially, in 1979, it was called Thing-Model-View-Editor, later being renamed MVC. It aims to separate concerns in a web application, giving a single responsibility to a single component (e.g. separating the data access layer from the layer which processes data, the business logic). This can add a certain degree of complexity to the application, but the developer benefits from clean code, perfectly suited for unit tests, extension or further adjustment. The MVC pattern can be found in applications written in Java, C++ and C#, and also in many frameworks.

3.1.1 Spring 4 MVC

Spring 4 is a framework which provides the necessary services for building enterprise applications in Java. It is built using Java, so we can benefit from all the features the language can offer.

One of the advantages of using Java is that the application can run anywhere. Java's platform independence makes it possible for the program to run anywhere once it is compiled. This is possible because, unlike C or C++, Java produces bytecode which can be interpreted by any device with a JVM installed.

Java packs applications into an executable format called Java Archive (.jar) which contains all Java classes and metadata (e.g. images, text etc.) in one file. Java archives are built using the ZIP format and may contain a manifest file at the path "META-INF/MANIFEST.MF" which provides special instructions on how the program should work. The same principle applies to the Web application Archive (.war), which contains, besides the already enumerated features of the ".jar", the necessary web descriptors under the "WEB-INF" path. These descriptors can be the web.xml, which describes the structure of the application, a Java Server Faces configuration, or different XMLs which define a configuration or a resource to be used by the application.

The most important characteristic of Java is the fact it is completely Object Oriented. The code will benefit from all the features of the language and the framework.

MVC is easy and intuitive to use with Spring, and it is designed as a standard for all web applications. The pattern isolates the business logic from the user interface, making the application more reliable and easier to test and maintain. The three levels of the application are also well defined, eliminating unwanted dependencies.

The MVC pattern has the following representation:

Model: the object which we are interested in changing. In many cases this can be considered the object which encapsulates the characteristics stored in a database and which we can modify using the business logic.

View: This corresponds to the interface presented to the user. It is composed of dynamic HTML documents and has the role to transform the data processed by the server into a visual representation.

Controller: this component represents the glue between the View and the Model. It manages the communication between the two components, detects changes on the user side, and notifies the model.

A visual representation of the MVC pattern can be seen in Figure 2.1.

Spring MVC is the best option when the following features for the application need to be achieved:

A high degree of control over the generated HTML files. Through the controller, the server can control the data sent to and received from the view, and the state of the representation.

Easy unit testing.

Separation of concerns. Some applications tend to have components tightly coupled. By using MVC we can decouple those components giving the developer more control over the application and making it easy to maintain.

MVC with Spring is easy to use because of the user-friendly interface provided by the framework. By convention, the controller classes are decorated with the @Controller annotation and the model classes are decorated with an annotation for each individual component in the corresponding layer of the architecture (e.g. @Repository, @Service, @Entity etc.).

Another important feature of the MVC pattern is the routing engine. The framework contains one such engine, which maps requests sent by the user side to the corresponding process in the controller. A request is handled through the @RequestMapping annotation, in which we specify the path of the resource, the web method the mapping responds to (e.g. GET, POST etc.) and the result it needs to return. This facilitates the bidirectional nature of the MVC pattern.
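A minimal sketch of such a controller is shown below; the endpoint, model and service names are illustrative rather than the exact thesis code, and @RestController is Spring 4's shorthand for a @Controller whose methods return response bodies directly:

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/positions")
public class PositionController {

    @Autowired
    private PositionService positionService; // injected service-layer bean

    // GET /positions returns the stored positions as JSON
    @RequestMapping(method = RequestMethod.GET)
    public List<Position> getAll() {
        return positionService.findAll();
    }

    // POST /positions binds the JSON request body to a Position object
    @RequestMapping(method = RequestMethod.POST)
    public Position save(@RequestBody Position position) {
        return positionService.save(position);
    }
}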

3.1.2 JSON

JSON or JavaScript Object Notation is a standard format in which information is exchanged between different consumers. It is language independent and most languages can parse it. JavaScript objects can be serialized into JSON and sent via AJAX requests; the MVC framework can interpret them and transform them into Java objects.

JSON is usually used instead of the older XML representation because it provides a lightweight data exchange between server and browser, without the need for a code interpreter.
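As an illustration, a crash report exchanged between the Android application and the server could have the following shape (the field names and values are illustrative):

{
    "latitude": 46.7712,
    "longitude": 23.6236,
    "message": "S.O.S"
}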

3.1.3 RESTful Web Services

REST or Representational State Transfer was first introduced in 2000 by Roy Fielding. REST is an architectural style for distributed systems. It is composed of a set of constraints such as having a client/server relation, a uniform interface and being stateless. [11]

Principles of REST:

Resource: resources are easy to identify by URI. This facilitates the interaction with clients.

Representation: data is transferred in the form of JSON objects in request and response bodies.

Messages: HTTP methods are used for decoupling the responsibilities on a resource endpoint.

Stateless: interactions store no client data on the server; the client holds the session state.

The Spring framework provides all these features, together with a series of helpers, so that the application can be considered RESTful.

Figure 2.2 describes the lifecycle of a RESTful web service.

3.2 Three-tier architecture

Three-tier architecture is a software concept which implies that an application should separate concerns and divide responsibility so that each layer of the application is specialized in performing just one task. This means that the application will become more reliable and easy to test and maintain. The layers are separated so each of them is responsible for doing just a part of the logic needed for the data transformation.

The Three-Tier model implies we have three layers:

Presentation tier: responsible for presentation and user interaction. This layer is responsible for accessing the second layer in a secure and intuitive manner, and it can support several client types. Clients cannot access the lower layers directly; their requests are passed to the second layer for processing.

Business tier: called also the application logic tier, offers all the operations which are needed by the first tier to process the data. It can be accessed by multiple clients simultaneously and needs to manage its own transactions.

Data tier: or database access layer. Here the database can be accessed, but only by the second layer. This layer is usually secured by the previous tier, so protected data is accessed only within the secure network.

Figure 2.3 is a representation of the three-tier architecture which also incorporates the MVC pattern.

Java and the Spring framework offer full support for developing applications which respect these principles. While Java provides the necessary means to create a hierarchical project structure based on packages, classes, interfaces and resource files, the Spring framework offers its services in the form of third-party libraries, each specialized to perform an operation.

3.3 Server features

3.3.1 Spring Dependency Injection

3.3.1.1 Dependency Injection

In an application we tend to eliminate tightly coupled components. When we create an instance of a component inside another component, we create a dependency between them. To eliminate these dependencies we should not create instances directly, but inject them via constructors, setters or interfaces which set an instance of that component. The high-level components then become responsible for creating instances and passing them down to lower-level components. The problem is that the complexity of the application is always increasing and these components become hard to maintain, so we need specialized components responsible for this part.

A simple solution would be the Factory pattern, which creates objects related to each other by a level of abstraction without exposing the creation logic. This can work for a small application, but it cannot resolve the dependencies at the top level of the application, nor at the lowest level, because there is no degree of abstraction between them. Another problem is that the client is responsible for creating instances and managing their lifecycle.

The Service Locator pattern is similar in functionality with the factory pattern, but it provides a cache in which instances of the objects are stored. The initial context which creates objects is needed just once and then the cache is responsible for providing already existing objects. This makes the service locator responsible for creating instances and not the client.

These two patterns follow a so-called "pull model", which gives the high-level component the responsibility of obtaining its dependencies on the other objects. [11]

DI or dependency injection takes the opposite approach, adopting a "push model". Inversion of control is the term used to describe this technique, and dependency injection is one implementation of it. With this approach, the dependencies are pushed into a high-level component at runtime: a specialized class called the injection container is responsible for providing the instance of the object, while the high-level component has knowledge only of the interface that the object implements. (Figure 2.4)

3.3.1.2 Spring IoC container and Dependency Injection

The Spring IoC container is the core of the Spring Framework. The container is responsible for creating objects, configuring them and managing their lifecycle and destruction. These objects are called Spring beans. The container knows which beans must be added by using metadata from XML configurations, annotations or Java code.

The Bean is a general definition for an object which resides in the container. It benefits from all the features provided by the container, including the injection methods. This bean offers the component the possibility of modifying the scope (number of instances), the lifecycle or the injection method.

The @Autowired annotation is used to configure these beans and inject them into specific properties or constructors.

As I have mentioned earlier Spring manages dependencies in a user-friendly manner so it provides us with annotations for each layer, which define beans to be stored in the container. They may seem the same at first, but each of them has a specific feature (e.g.: one may require multiple instances, one must be destroyed after first use etc.).

For the model objects the @Entity annotation is used, which means that the bean is used for operations which include the database and data persistence.

The Data Access layer classes have a @Repository annotation which means they are stored in the container and have a single instance.

The services layer contains the annotation @Service so that instances will be created. The controller which has the annotation @Controller is also a bean stored in the container. All of these, in composition with the interfaces each one implements, create the perfect environment for the dependency injection pattern.

Besides the annotation for dependency injection, we can also have annotations for behaviors, configurations and predefined messages (e.g.: @Override).
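A minimal sketch of this layering follows (the class names are illustrative; the container creates both beans and pushes the repository into the service's constructor):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

@Repository
class PositionRepository {
    // single container-managed instance for data access
}

@Service
class PositionService {

    private final PositionRepository repository;

    // Constructor injection: the IoC container resolves the dependency
    // at runtime, so the service never instantiates the repository itself.
    @Autowired
    PositionService(PositionRepository repository) {
        this.repository = repository;
    }
}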

3.3.2 Java Persistence API

Java Persistence API (JPA) is the framework responsible for object-relational mapping in Java; Hibernate provides the implementation used here. It has its own querying language, the Java Persistence Query Language, for complex tasks.

The main use of this framework is to provide a real representation of the database tables stored in POJO (plain old java object) and use them to manipulate data.

From an architectural point of view, it is composed of entities which are managed by an EntityManager. The EntityManager is provided by an EntityManagerFactory, which is responsible for creating instances of the former. The EntityManager also provides functionality for transactions and queries on the database.

The relationship between these components is described in Figure 2.5. The EntityManager can handle only one transaction at a time, but it can have multiple queries. All these mechanisms, together with the entities they manage, form a persistence unit.

3.3.2.1 JPA Entity Classes

As mentioned earlier, the @Entity annotation is used by the classes whose instances are stored in the database. To store data using JPA, the defined entities need to represent the data object model.

Fields in the database are represented by the class attributes, and are mapped with respect to the database.

A JPA entity should:

be a top-level class, not a nested/inner class;

have a default constructor;

not be final or have final methods.

The names of these classes should be unique and should match the ones in the database. Java-defined classes are mapped to the corresponding types that the database contains. Entities can have constraints, which are also applied to the database, and they can be mapped to other entities, creating the relational structure of the database.

3.3.2.2 Entity mapping

Another feature of JPA is the ORM, which allows the user to create complex relations between entities in the same manner that relationships are established between tables in a database.

Annotations such as @OneToMany, @ManyToOne, @ManyToMany and @OneToOne are used to establish relationships between entities. The relationships are established by creating fields in each entity with the type of the other. These must be a single object or a collection of objects, depending on the relationship between the two. The column where the foreign key is to be added must be declared in the @JoinColumn annotation. Data retrieval strategies such as lazy loading (fetch only when needed) and eager loading (fetch immediately) are available for the entities thanks to JPA.
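A sketch of the two entities used later in the project, Region and Position, with a one-to-many mapping between them (field names are illustrative; each class would normally be public and live in its own file):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
class Region {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // one region groups many recorded crash positions
    @OneToMany(mappedBy = "region", fetch = FetchType.LAZY)
    private List<Position> positions;

    Region() { } // default constructor required by JPA
}

@Entity
class Position {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private double latitude;
    private double longitude;

    // owning side: the foreign key column lives in the position table
    @ManyToOne
    @JoinColumn(name = "region_id")
    private Region region;

    Position() { }
}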

3.3.2.3 Code first approach

Another feature of JPA is the so-called "code first" approach. It is a concept which means that we do not need to create the database and then the model, but instead use the model to generate the database. This is available in JPA and Hibernate due to the ddl-auto properties in both frameworks, which can be used to create, update and validate a database.

This approach is preferred by most developers as a simplification of the process of creating a database using a database engine or SQL queries.

3.3.2.4 PostgreSQL

PostgreSQL is an advanced open-source relational database management system which is standards-compliant and extendable. It is capable of handling many tasks and has support for concurrency. The driver for Postgres is not available in the standard JPA framework and must be imported from the org.postgresql JAR.

3.3.3 Spring Boot

Spring Boot is a feature offered by the Spring platform. It can create stand-alone, production-grade applications in the form of standard Java applications. All that is needed is the third-party library “spring-boot-starter-web” and the declaration of the spring boot starter parent.

Unlike traditional applications, where the developer needed to configure data sources, security and the server, Spring Boot brings forth the concept of an "auto-configuring" project. The XML configurations are eliminated; the developer just declares the necessary attributes in the application.properties file and Spring Boot automatically configures the project.

Spring boot comes with an embedded Tomcat server as default and can also be configured to work with other servers.
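The entry point is then a single class, as in the minimal sketch below (the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// @SpringBootApplication enables component scanning and auto-configuration;
// running main() starts the embedded Tomcat server.
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}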

3.3.4 Apache Maven

All the third-party libraries, plug-ins and build directives in a large project must be managed by a system. This is the task of Apache Maven. The core functionalities of Maven describe the build process of the application and add its dependencies.

An XML file (pom.xml) is present in all Maven projects and describes the functionalities just enumerated. It dynamically loads the Java libraries and plug-ins from the specified repositories.

Maven is also used as a project management system, and can be used to add different Java projects with different functionalities to the same application and use them for the benefit of the new project. They are treated as dependencies and imported by the new project.

Other benefits of Maven are standardization, reuse, build lifecycle and dependency management, consistency and scalability.

3.3.5 Lombok

Lombok is a third party library which offers support for clean coding. It offers the possibility to eliminate boilerplate code by replacing the standard POJO methods and features with an annotation. It keeps the class clean with only the fields and required annotations.

It also comes with a Builder pattern template which can be applied to a POJO, making its creation more specific and adding more visibility to the code.
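A sketch of a Lombok-annotated POJO (the class is illustrative; @Data generates the getters, setters, equals/hashCode and toString, while @Builder adds the builder template):

import lombok.Builder;
import lombok.Data;

@Data
@Builder
public class CrashReport {
    private double latitude;
    private double longitude;
}

An instance can then be created as CrashReport.builder().latitude(46.77).longitude(23.62).build(), which keeps object creation explicit and readable.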

3.3.6 Commons CSV

Commons CSV is a third-party library from Apache which makes working with ".csv" files simple, by parsing or writing objects directly from a record using the provided mapping of the header.
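A minimal reading sketch, assuming a file with a "latitude,longitude" header row (file and column names are illustrative):

import java.io.FileReader;
import java.io.Reader;

import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVRecord;

public class CsvImport {

    public static void main(String[] args) throws Exception {
        try (Reader in = new FileReader("positions.csv")) {
            // the first record is treated as the header and used for field lookups
            for (CSVRecord record : CSVFormat.DEFAULT.withFirstRecordAsHeader().parse(in)) {
                double latitude = Double.parseDouble(record.get("latitude"));
                double longitude = Double.parseDouble(record.get("longitude"));
                System.out.println(latitude + ", " + longitude);
            }
        }
    }
}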

3.4 Web Client Application

3.4.1 Node.js

Node.js is an open source, cross-platform, server-side JavaScript run-time environment [13]. Traditional JavaScript is used on the client side to create dynamic web pages and is interpreted by the browser. Node.js extends these capabilities by providing the means to produce dynamic content before the page arrives in the browser. It has a powerful event-driven architecture.

Npm is the largest package manager for JavaScript and the world’s largest registry. It can discover packages of reusable code and assemble them to create new applications. Its main goal is to automate dependency and package management.

With npm the most important libraries of the project are imported:

Babel for transpiling ES6 to ES5 so the browser understands it;

Webpack module bundler for minifying and transforming all the JavaScript code into a single javascript file which is used in the browser;

Mocha for testing the code;

ESlint to create an environment which is aware of mistakes in the code;

One important feature of npm is that, besides the modules we request in the package.json, the additional dependencies of those modules are also fetched, providing a safe way to resolve dependencies that may come loose in a manual import process.

In conclusion, using npm provides the user with a set of benefits over other management tools such as gulp or grunt:

Simple to learn;

Less abstract;

No dependency on separate plugins;

Simple debugging;

Better documented.

3.4.2 Webpack module bundler

Webpack bundles our code into a single JavaScript file. All the Webpack configurations are stored in the webpack.config.dev.js file, and a production bundler is also set up in the webpack.config.prod.js file.

These configuration files describe the process of bundling files. They contain a debug property set to true so that the application gives us information about issues. Another important part is the entry array, where multiple entry points are added to facilitate development; here middlewares such as webpack-hot-reloading are declared. "Hot reloading" is a concept used in computer science to describe an application which is aware of any changes made during development and can instantly integrate them for testing. The target is web, so that webpack knows we target the browser. The output property defines the place where the final bundle should be placed.

The plug-ins section is used to define plug-ins, which are essential for enhancing the power of webpack. These are the HotModuleReplacementPlugin, for adding hot reloading capabilities to our application; the NoErrorsPlugin, so that hot reloading is not broken by errors; the HtmlWebpackPlugin, which creates references to the bundled CSS and JS files and also defines a minified version; and a DefinePlugin, where global constants that can be used at compile time are defined.

The last important property of the configuration file is the module section, where loaders are configured to tell webpack what kinds of files we are working with. Here we include the JavaScript files, CSS and Sass files, JSON files, image files and also the files that Bootstrap uses for fonts. The CSS files are handled by a PostCSS autoprefixer, which ensures that the prefixes for every browser are set so they can be interpreted.

Other properties are also defined for development or packaging purposes, but the most important features have been enumerated. Other plug-ins can be found in the production mode configuration; they are used mostly for security and performance reasons.

3.4.3 Babel 6.0

Babel is used to solve compatibility issues by transpiling ES6, which contains a new set of user-friendly declarations, functions and modules for JavaScript, into ES5, which is rather more primitive. Babel is configured in the ".babelrc" file and the principle is simple: the user just needs to define the presets being used and Babel will know how to convert them. The environment can also be configured and special features added to it.

3.4.4 Browsersync Server

Browsersync is used to create a JavaScript-based server which runs the front-end application. It is configured to work with the webpack middleware and contains a path to the index.html file where all the files are bundled. An instance of Browsersync is created for the development mode and one for the production mode, with separate configurations for each.

3.4.5 ESlint

ESlint is used by developers to enforce the coding standards of an application. This is done by defining rules in the ".eslintrc" file, which the application then uses to show comments when these standards are not respected.

These standards are set so that the application accepts react and ES6 syntax, but there is also an environment section to help ESlint to expect certain global variables. Most of the rules are defined for react and have a certain degree of severity: 0 means off, 1 means Warning and 2 means Error. The user can configure the application to break in case of error.

3.4.6 Mocha

Mocha is a JavaScript test framework running on Node.js and in the browser. Mocha unit tests run easily in a virtual DOM defined using webpack and offer a simple and elegant solution for detecting bugs and creating a clean structure of the project.

The tests are found in each subfolder of the "src" directory and cover any unit which may need testing. They run before the build and ensure that changes do not break it.

3.4.7 Chalk library

Chalk.js is a JavaScript library which gives the user the possibility to create a visually friendly set of messages in the console. It is configured in the "chalkConfig.js" file and then used in every other configuration file in "tools" to create messages.

3.4.8 Npm commands for application

The application is built so that multiple commands run at the same time. They are defined in the scripts section of the package.json file. They have distinctive labels, so each one is differentiated, and these labels are used to form new commands.

The most important commands:

“npm start” starts the application in development mode;

“npm run build” bundles the application for production uses;

These scripts are not self-contained, but rather are used as a reference for many smaller scripts, each performing a particular task. These tasks can be running and watching the tests, the ESlint watch for errors, starting the server, removing files or printing messages.

They are all glued together by the npm-run-all command, which can take a flag for running them in parallel, so no time is wasted.

The build for production will fail if ESlint displays errors or tests are failing.

3.4.9 CSS

CSS or Cascading Style Sheets is a language used for formatting and styling a markup language document. It is useful for creating visual effects which give the application a dynamic and stylish aspect. It can be applied to HTML, XHTML and XML files.

It is easy to use due to the fact it is applied directly to the markup tags belonging to the language and any browser can interpret them. For more flexibility the so-called classes and ids are used.

It can be integrated with JavaScript to create spectacular visual effects or complex objects which can be reused.

CSS is highly reusable, reliable and open for everyone to use because it requires no special parser, just a regular browser.

3.4.9.1 Bootstrap

Twitter Bootstrap, or Bootstrap, is a popular open source framework for CSS/HTML and JavaScript. It combines the power of all three languages to help users with limited experience create interfaces with a high degree of complexity and less effort.

It was designed to be extended and reused, so it can be applied to different projects. It is also designed to be responsive, so it can run on different screens, such as tablets, mobile phones, big screens etc., and can adapt itself according to the screen's resolution.

3.4.9.2 Sass

Sass (Syntactically Awesome Style Sheets) is an extension of CSS that allows the use of variables, mix-ins, nested rules etc. and are fully CSS-compatible. It helps large style sheets to become more organized and run quickly.

The problem with standard CSS is that after some time the application's complexity increases and conventional organization no longer works, causing the code to become disorganized and difficult to read and maintain. This is resolved in Sass by adding a hierarchical structure to the code and the opportunity to reuse it.

3.4.10 React.js

React.js is a JavaScript library for building complex user interfaces. It was developed by Facebook and provides the means to build interactive, stateful and reusable UI components. The innovation that React.js brings is called the virtual DOM: instead of re-rendering the entire DOM of an HTML document, it compares the two and changes only the elements which have to change. This is done by running a "diffing" algorithm which identifies changes and updates the DOM with the results of the diff.

One feature of React.js is that it mixes JavaScript and HTML to create a JSX component which is highly reusable. These JSX files are transpiled via Babel. Traditionally, a React component in standard form is a JavaScript object which encapsulates all the logic required for processing. This can become hard to track, and complex data structures may appear. With ES6 we can separate components and make them stateless or stateful with simple declarations.

Component changes are triggered using states and events. The method adopted by React.js is a unidirectional data flow called Flux. The concept is simple: a view triggers an event, which updates a model, and the model triggers an event from which the view knows which parts need updating.

The model for the unidirectional data flow is described in Figure 2.6.

Stores are the place where all the logic behind the data transformation is applied. They are mostly responsible for asynchronous calls to the server and for operations which change the states.

Actions are a sophisticated way to describe events. When an event occurs, all the data is packed into an action which is sent to the dispatcher. The application flow reaches the store where the logic is applied and then they reach the view in the form of callbacks.

The dispatcher can be considered a registry of callbacks for the actions that are registered. These callbacks are used after a store finishes the data transformation, when the new objects need to be redirected to the view.

3.4.10.1 Reflux.js

Reflux.js is an implementation of the Flux pattern. The only difference is that actions take over the role of the dispatcher, the latter being removed from the pattern. Stores listen to actions, and they can also listen to other stores. This offers the possibility to aggregate data between the stores.

If the store has any actions registered for listening, it can respond by triggering a defined state to change, which the application will recognize and will take immediate action.

3. 5 Android application

The Android application consists of two parts: the server part, which is responsible for subscribing to the Raspberry Pi application, and the client part, which is responsible for sending requests to the web application server. It can be considered the bridge between the hardware part of the project and the web application, working as a receiver-transmitter but also as a location provider. The Raspberry Pi board does not include a GPS provider or internet access in its default form, while most mobile phones in existence today have these two functions incorporated in their default configuration.

The Android application is written in Java using the Android framework and specific libraries. The application uses Android forms and simple elements, such as activities and intents. The purpose is to keep it simple and provide the user with the minimum viable environment for this application to run.

The most important components used for communication are the Retrofit and the Paho MQTT libraries.

3.5.1 Retrofit

Retrofit is a Java Android library used for HTTP request communication with the web server. It is configured to use endpoints from a single address and to send objects using the JSON format. This can also be modified to support other kinds of communication formats. It is simple to use and can manipulate HTTP calls with ease.
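A minimal sketch, assuming Retrofit 2 with the Gson converter (the endpoint, base URL and model are illustrative, not the thesis code):

import retrofit2.Call;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;
import retrofit2.http.Body;
import retrofit2.http.POST;

// illustrative model serialized to the JSON request body
class Position {
    double latitude;
    double longitude;
}

// describes the web server endpoint the Android client calls
interface CrashApi {

    @POST("positions")
    Call<Void> sendPosition(@Body Position position);
}

class ApiClient {

    static CrashApi create() {
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("http://192.168.0.10:8080/") // illustrative server address
                .addConverterFactory(GsonConverterFactory.create())
                .build();
        return retrofit.create(CrashApi.class);
    }
}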

3.5.2 Paho MQTT Server

This library is written in Java and is compatible with any device which supports a JVM, such as Android devices. The purpose is to create an asynchronous client for a messaging mechanism which uses a topic model: when a client publishes a message on a topic, multiple subscribers can receive it, making it easy for multiple devices to be notified and to process the data.

This technology is widely used in the domain of Machine-Machine communication and implicitly in the IoT domain. This provides a level of decoupling between the applications and facilitates an easy and reliable way to communicate.
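A minimal subscriber sketch for the Android side (the broker address and topic name are illustrative assumptions):

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;

public class SosSubscriber {

    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("tcp://192.168.0.20:1883",
                MqttClient.generateClientId());
        client.connect();
        // react whenever the board publishes on the crash topic
        client.subscribe("car/crash", (topic, message) -> {
            String payload = new String(message.getPayload());
            if ("S.O.S".equals(payload)) {
                // obtain the GPS coordinates and forward them to the web server
            }
        });
    }
}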

3.6 Raspberry PI application

The Raspberry Pi application is a program written in core Java. The operating system of the board is a distribution of the Debian Linux operating system adapted to run on the technical specifications of the Raspberry Pi board. Because Java is cross-platform and the system contains a JVM installed by default, the application written in Java can run on the board.

The purpose of the application is to communicate with the Android application via the TCP protocol. This is done using the MQTT (MQ Telemetry Transport) protocol, which uses a topic to publish messages when an event occurs on the Raspberry Pi. This can be considered the client side of the Android-Raspberry Pi data transmission, because it listens to the events from a sensor and sends a set of data forward for processing.

The Raspberry Pi can support a series of digital sensors and triggers because of the digital pins located on the board. Data from these pins is processed using a Java library called Pi4J, which is included in the application. For the communication, another library called Paho MQTT is used.

3.6.1 PI4J library

Pi4J is a library written in Java and used for getting data from the Raspberry Pi board. It can access the Raspberry Pi board pins and ports and other electronic data for obtaining telemetry for the application. The library can listen to interrupts on the board, configure GPIO pins and set their states, and handle I2C and even serial communication.

This makes the manipulation of data on the Raspberry Pi board a simple task for a Java programmer. All that is required is to know how to connect the board and the elements.

For this application the main use is to listen to the state change on the button, which is used as a mock for a sensor which records the crash.

3.6.2 Paho MQTT Client

The MQTT client sends a message once an event occurs on the button. The event is wrapped into a message and published to all the listeners. It can contain a set of data provided by the board, which can be used further by the upstream application for processing, so that complex logic can be created.

In the existing application the message is a simple S.O.S which is used to trigger the chain of events from the android application and save the location into the database.
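A combined sketch of the two libraries, assuming the mock button is wired to GPIO pin 2 and the broker address and topic match the Android subscriber (all names are illustrative):

import com.pi4j.io.gpio.GpioController;
import com.pi4j.io.gpio.GpioFactory;
import com.pi4j.io.gpio.GpioPinDigitalInput;
import com.pi4j.io.gpio.PinPullResistance;
import com.pi4j.io.gpio.RaspiPin;
import com.pi4j.io.gpio.event.GpioPinListenerDigital;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class CrashPublisher {

    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://192.168.0.20:1883",
                MqttClient.generateClientId());
        client.connect();

        GpioController gpio = GpioFactory.getInstance();
        GpioPinDigitalInput button = gpio.provisionDigitalInputPin(
                RaspiPin.GPIO_02, PinPullResistance.PULL_DOWN);

        // the button stands in for the crash sensor: on a rising edge,
        // publish the S.O.S message the Android application listens for
        button.addListener((GpioPinListenerDigital) event -> {
            if (event.getState().isHigh()) {
                try {
                    client.publish("car/crash", new MqttMessage("S.O.S".getBytes()));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });

        Thread.sleep(Long.MAX_VALUE); // keep the listener alive
    }
}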

4 Detailed Design

4.1 Use Cases

The project is composed of two applications, each one destined for a specific user. Even if some parts of the application are not meant to be used by regular people and are presented as a product feature, the others must have their commitment. These users are a consumer, who is likely to use the mobile application on a personal phone, and a specialized user, who is directly responsible for monitoring and drawing technical conclusions from the web application and the algorithm results.

In the figure below we can observe the function each user has, according to their role:

The consumer must ensure that the application runs in the background of a device that uses an Android operating system. This way the user is not constrained to use only the mobile phone and can also use a tablet or any other system that can be integrated into the car and contains a version of the Android OS. Also, this type of user must have a connection configured with the Raspberry Pi board. This can be factory configured, and the user can connect to it via the application. The normal user should not have the possibility to configure the board, because this could result in a set of malfunctions which can later leave the car exposed.

The second user is a specialized one who has the task of monitoring the application and collecting theoretical data. This data can later be used for conducting further tests regarding tracking and preventing car crashes. It is also possible for this user to import data on which to run tests, and to add data manually to observe whether the algorithm is influenced by these changes.

4.2 Software detailed architecture

The system is composed of four components which communicate and interact with each other: the web server application, the front-end web application, the Android application and the application running on the Raspberry Pi.

These interactions form a cyber-physical system, described in the figure below:

Cyber-physical system is a relatively new term which describes a mechanism controlled or monitored by computer-based algorithms [14]. This system lacks the control part, being oriented toward collecting and monitoring data. The first two parts of the application communicate through TCP/IP, the protocol of the Internet, because it is fast and reliable. It uses the client-server model, in which the server listens for requests from the client. When it receives one, the processing starts and the client receives a response at the end.

The HTTP protocol is used for communication between the server and the Android application, and between the client and the front-end web application, through a web API. The API offers an endpoint where the user can connect while receiving the correct set of data. Each of these applications runs on a specific port. Figure 4.3 describes the port each application uses:

By further decoupling the application the user obtains the flexibility needed for further improvements. This transforms the application into a fully reusable system which can be accessed by programs written in other languages than Java, giving no constraints for the developer.

4.3 Database structure

For the database, PostgreSQL was used. It is a reliable, open source and extendable database with high computational power. The user-friendly interface is another reason to choose Postgres. The org.postgresql Java library is necessary for creating and accessing the database.

The database structure is simple. It has only two tables, with a one-to-many relationship between them, and it stores the data obtained from the mobile application together with the experimental data obtained through the algorithm.

The database is generated using the code first approach. In Figure 4.4 we can observe the database structure.

The position table is where all the requests are stored. It is linked to the region table, which holds a region definition and a presumed location. This location represents the point where the car crashes have their origin. The region table is not always populated, due to the fact that some locations are scarce in data and are not processed by the algorithm.

4.4 Java application

The application structure is simple, composed of six packages with descriptive names. They are useful for maintaining and reusing the code in the application. Besides the packages which describe the three-tier architecture, there are two additional ones: an algorithm package, which contains the full implementation of the clustering and deterministic algorithms, and a dto package, where simple models used for communication with the front-end application are stored. These models offer some flexibility to the application in places where the standard model is too heavy and only a few fields of information are needed.

The package diagram is shown in the figure below:

The clean structure of the project leaves no room for circular dependencies to be introduced.

In the model package the two persistable models, Region and Position, are defined alongside the mappings. In the repository package, the repository interfaces extend CrudRepository<E, ID>, a master interface which the DI container can resolve for each one. In the service package the business logic of the application is defined, and the repositories are injected using the @Autowired annotation.

The ImportantDataService has a special method, savePosition(), decorated with the @Async annotation. Asynchronicity is a concept widely spread in computer programming as well as in other domains, and the Spring framework supports asynchronous methods. The purpose of asynchronicity in this method is to create a concurrent method which saves and processes data. Large amounts of data can block the user interface while processing and can cause the application to crash due to timeouts and other server issues. An asynchronous method creates a separate task (thread) for the method responsible for processing the data, which is disposed of after all the data is processed.

In this manner blocks due to complexity or volume can be avoided.
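A sketch of such a method follows (the body is elided; @EnableAsync must be declared on a configuration class for Spring to run the method on a separate task, and the Position model is the one sketched earlier):

import java.util.List;

import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class ImportantDataService {

    // Runs on a background task: the request that delivered the data
    // returns immediately while saving and clustering continue here.
    @Async
    public void savePosition(List<Position> positions) {
        // persist the positions, then run the clustering algorithm
    }
}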

The algorithm package is the place where the two clustering algorithms are stored. There are two additional sub-packages: model, which contains details about the algorithm implementation, and forensic, for the deterministic algorithm.

The sequence diagram for a data import can be observed in the following figure (4.5):

The model sub-package contains two interfaces. One is implemented by the algorithms, making it easy to decouple the code and instantiate each algorithm with its specific implementation. The other interface, DataPoint, is used for the model that the algorithm can process. The current implementation is a mathematical/geographical point with latitude and longitude, but other implementations can also be added.

The rest package contains the controller classes with their specific mappings and the injected services. The dto package contains the models for which operations are made on the front-end. The purpose of dto is to decouple the application further by eliminating unused information from the models, making data transfer more lightweight.

4.5 Front end application

The front end application is configured with Node.js and built using the Reflux.js library. It can be found under the path "SCA\ FrontEndApplication\src\main\resources\".

It is built using node commands and runs on a Browser sync server. Webpack is responsible for gluing all the plug-ins and files in the project and also on applying the configurations on them.

Webpack is configured with two modes, development and production, each with its own configuration file and each responsible for performing its specific tasks.

The development mode configuration is located in the webpack.config.dev.js file and contains a series of plug-ins which help a developer keep track of errors and changes in the code. It is configured to add all the files in the src path into the index.ejs file, which is run on the server. Using the hot reload middleware and plug-in, the developer can view code changes directly in the running application, meaning the code is integrated in real time. This file also configures the loaders for additional files such as style sheets, images and Sass files; this is handled automatically, because webpack has a development mode (webpack-dev) created for handling such files. Some of the output is also minified, in order to keep the served files clean.

The production configuration, located in webpack.config.prod.js, is similar to the development one; the difference is that it builds a single bundle containing all the information of the web application, minified and uglified, so that its size is reduced and it can be easily deployed to any web host with a Node environment.

The commands for running the application are located in the package.json file. They combine npm calls for running tasks in parallel with commands specific to a Node.js module. The configuration for each module command can be found in the “/tools” directory.

The “npm start” command is responsible for starting the development mode. It is a composite command that runs three other commands in parallel:

test-watch: runs the Mocha unit tests in watch mode. It uses the “testSetup.js” configuration, in which all media files are excluded and Babel is used for transpiling the code.

open:src: runs the Browsersync tool on port 3000 and creates a UI server on port 3001. This synchronizes the application with the browser and also introduces the hot-reloading middleware and the dev middleware. The middleware is where all the configurations for the webpack bundler are introduced, along with other settings.

lint:watch: the linter uses the configuration files to detect coding issues in the project files.

The “npm run build” command is responsible for starting the production mode. It is a composite command that runs two other commands in parallel:

build: minifies the product and builds the new application files in the dist folder. It uses the configuration from “build.js”, located in the “tools” directory.

open:dist: starts a dist server on the same port as the development mode, with no middleware. The configuration can be found in the “distServer.js” file.

4.5.1 Reflux architecture

As mentioned before, Reflux is a powerful one-way binding architecture inspired by Flux, the default architecture of the React.js library. Traditional variables and scopes are replaced by state, which is responsible for updating the UI. The state can be changed in different components; this way the virtual DOM knows when a change occurred and where it should be reflected. These state changes are triggered by events, which in this architecture are called actions. Actions describe a change event; they may or may not take parameters, and they act as a broker between the view and the store.

In the traditional Flux architecture a dispatcher is used to map the actions to their specific callback functions. These callbacks reach the stores, where all the logic is performed, and return a set of changes. The changes flow back into the component state at the point where the event occurred, and the user interface is updated.

In Reflux these state changes are done directly in the store, and the dispatcher is eliminated. This means the actions do not require registered callbacks: a simple function is mapped to each action, and this function is responsible for updating the state and the UI.

The stores are kept under the path “src\stores” in the project directory. They are responsible for the API calls, which access the application endpoints. The “position” store handles the action calls for creating and viewing a point. The calls are made using the jQuery AJAX API for asynchronous requests.

jQuery is a robust JavaScript library for manipulating the DOM of an application. It provides a mechanism for making asynchronous calls to a web API, which means the calls do not block the UI and the response is handled when it arrives. This also gives the user the possibility to access other features while large amounts of data are awaited.

The “region” store is responsible for calls on the already processed set of data. A large amount of data is split into smaller sets, each corresponding to a cluster, called a region. These regions are displayed on the map.

The actions are kept under the path “src\actions” and define the events which are used to update the UI. Unit tests can be applied here to verify the functionality of a store.

The layout of the front end application is declared in the “components” folder and contains the basic layout on which the pages are built. It combines different views to create a single-page application, which is easy to debug and open for improvements. The API key of the Google map module is added here, and the React router is defined here as well. The router uses the settings from routes.js to create a complete routing system available to the user.

The last folder is “pages”, where the views of the application are maintained. Here all the different tabs are created, each with a single responsibility. These pages are also used by the router to display the view corresponding to a route.

All these features, combined with the helper tools mentioned in the previous chapter, create a simple and flexible environment which a developer can use to build complex, long-term applications with the React library.

4.6 Android application

The Android application is built using the Android framework for Java. It contains a single activity, MainActivity, and additional helper and service classes.

In the MainActivity the GPS settings are configured. A LocationManager is created and a listener is registered to check for changes in the location status. Additional permissions are required to access the GPS module; they are added in the AndroidManifest.xml:

ACCESS_COARSE_LOCATION

ACCESS_NETWORK_STATE

ACCESS_FINE_LOCATION

These permissions are checked again at run time, when the listener is configured, and the GPS module cannot be used without this check. Once the listener is registered, the application is aware of any change in the device location.
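A minimal sketch of the run-time check and listener registration (ContextCompat comes from the Android support library; the onNewLocation helper is an assumption):

import android.Manifest;
import android.content.pm.PackageManager;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;
import androidx.core.content.ContextCompat;

// Inside MainActivity
private void startLocationUpdates() {
    LocationManager manager = (LocationManager) getSystemService(LOCATION_SERVICE);

    // The manifest entry alone is not enough: the permission is re-checked here.
    if (ContextCompat.checkSelfPermission(this,
            Manifest.permission.ACCESS_FINE_LOCATION) != PackageManager.PERMISSION_GRANTED) {
        return; // the listener cannot be registered without the permission
    }

    manager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 1000L, 1.0f,
            new LocationListener() {
                @Override
                public void onLocationChanged(Location location) {
                    // any change in the device location arrives here
                    onNewLocation(location.getLatitude(), location.getLongitude());
                }
                @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
                @Override public void onProviderEnabled(String provider) { }
                @Override public void onProviderDisabled(String provider) { }
            });
}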

The next part is the MQTT service. Declared in the manifest file and implemented in the “mqtt” package, it is composed of an “MqttServiceProvider” class which extends Service and is responsible for binding it to the main activity. The service is bound through an intent configured in the MainActivity class. The service creates a Thread inner class, called “MQTTService”, which instantiates the “MqttClientConnection” and makes it subscribe to events on the topic. “MqttClientConnection” is responsible for creating the connection to the topic: here the address of the broker is set, the connection is established, and the client is instructed to listen for topic messages.
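A sketch of the connection class, using the Eclipse Paho client (the broker address and topic name are examples, not the project's actual values):

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttClientConnection {

    public void connectAndSubscribe() throws MqttException {
        // The broker runs on the Raspberry PI board; the address is an example.
        MqttClient client = new MqttClient("tcp://192.168.1.10:1883",
                MqttClient.generateClientId());

        client.setCallback(new MqttCallback() {
            @Override
            public void messageArrived(String topic, MqttMessage message) {
                // A crash signal arrived on the topic: start the location flow.
                onCrashSignal(new String(message.getPayload()));
            }

            @Override public void connectionLost(Throwable cause) { }
            @Override public void deliveryComplete(IMqttDeliveryToken token) { }
        });

        client.connect();
        client.subscribe("sca/crash"); // topic name is an assumption
    }

    private void onCrashSignal(String payload) {
        // notify the bound MainActivity (omitted)
    }
}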

The second component needed for the flow is the Retrofit library. The configuration classes can be found in the “retrofit” package. A RetrofitProvider class is defined there, which takes as parameters the location of the Web API server and the object converter. The class follows the singleton pattern, creating a single instance of the service for the entire application. The LocationService class is used by Retrofit as a mapper, where the calls to the endpoint are declared; a @POST method for “/position” is defined here. In the MainActivity a position request is created, which uses the Retrofit service and sends the object model produced by the triggered event. The callback receives the response and translates it into a message; in case of error, another message is shown.

Because Retrofit communicates over HTTP, an additional permission is required: INTERNET. This permission is also added in the manifest file.
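A sketch of this Retrofit wiring (the base URL is an example; the class names follow the descriptions above):

import retrofit2.Call;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;
import retrofit2.http.Body;
import retrofit2.http.POST;

// LocationService -- maps the "/position" endpoint
interface LocationService {
    @POST("/position")
    Call<Void> sendPosition(@Body PositionDto position);
}

// RetrofitProvider -- singleton, one Retrofit instance for the whole application
public class RetrofitProvider {

    private static Retrofit instance;

    public static synchronized Retrofit get() {
        if (instance == null) {
            instance = new Retrofit.Builder()
                    .baseUrl("http://192.168.1.20:8080/") // Web API server address (example)
                    .addConverterFactory(GsonConverterFactory.create())
                    .build();
        }
        return instance;
    }
}

A call is then obtained with RetrofitProvider.get().create(LocationService.class) and enqueued with a Callback, whose onResponse and onFailure branches produce the success and error messages mentioned above.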

The view of the application is defined in activity_main.xml in the layout folder. It contains a ScrollView with a simple TextView, to which the information from the event is bound, and a Button which can be used as a fallback in case the Raspberry PI board cannot send the signal.

4.7 Raspberry PI application

The Raspberry PI application can be found under the path “SCA\RaspberryPI”. It consists of a single class, packaged into a JAR using the Maven assembly plug-in so that all dependencies are included.

The main purpose is to use the “pi4j” library to connect to the GPIO pin on the Raspberry to which the button is wired. The event listener waits for an event on a pin configured with “PullResistance.PULL_DOWN” and uses the MQTT topic to publish a message. MQTT is a lightweight messaging protocol based on a publish/subscribe model: messages pushed to a topic are delivered to the clients currently subscribed to it. The messaging is not persisted here, so if a client is not active at the moment a message is sent, the message is lost. The application requires the MQTT broker to be installed on the Raspberry PI board, so that messages can reach the exterior.
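A condensed sketch of this flow, combining the pi4j v2 API (which matches the PullResistance naming above) with a Paho MQTT publisher; the pin address, topic name and client id are assumptions:

import com.pi4j.Pi4J;
import com.pi4j.context.Context;
import com.pi4j.io.gpio.digital.DigitalInput;
import com.pi4j.io.gpio.digital.DigitalState;
import com.pi4j.io.gpio.digital.PullResistance;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class CrashButtonApp {

    public static void main(String[] args) throws Exception {
        // The mosquitto broker runs locally on the board.
        MqttClient mqtt = new MqttClient("tcp://localhost:1883", "sca-board");
        mqtt.connect();

        Context pi4j = Pi4J.newAutoContext();
        DigitalInput button = pi4j.create(DigitalInput.newConfigBuilder(pi4j)
                .id("crash-button")
                .address(2)                      // GPIO2, where the button's OUT pin is wired
                .pull(PullResistance.PULL_DOWN)
                .build());

        button.addListener(event -> {
            if (event.state() == DigitalState.HIGH) {      // button pressed
                try {
                    mqtt.publish("sca/crash", new MqttMessage("crash".getBytes()));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });

        Thread.currentThread().join(); // run indefinitely, as described above
    }
}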

Because the topic allows multiple clients to subscribe to it, several devices can run the Android application. The only thing a client needs to configure is which messages it subscribes to. This feature is described in figure 4.6.

5 Testing and Validation

For testing the first part of the application, the three devices need to be running simultaneously, because all three need to communicate. A wireless router is also required so that the three applications are in the same subnetwork and no additional configuration is needed for the handshake. In this setup all three devices share the same network prefix; their IP addresses differ only in the host part, where the last byte identifies each device.

The testing is done locally using the Android Studio IDE with the Genymotion plug-in installed for the Android application. The Genymotion plug-in is used to simulate a mobile device; the chosen device is a Sony Xperia Z running Android 4.3 as its operating system. For the web application the IntelliJ IDEA IDE is used, and the application is started using the Maven command “mvn spring-boot:run”.

The front-end application is started from its root folder, using a bash terminal or shell, with the command “npm start -s”, where “-s” means that the application will display minimal messages, or noise.

The Raspberry Pi application needs to be packaged into a “.jar” file, using the “mvn clean install” command. The resulting file then needs to be transferred to the Raspbian operating system and run using the command “sudo java -jar [name of jar with .jar]”. One way to do this is to copy the file to a flash drive and transfer it to the board.

The approach used for testing is to open an SSH connection to the Raspberry PI system with PuTTY; Linux commands can then be used to navigate and run the .jar file. For transferring the file the SFTP protocol is used, which is available in the WinSCP tool for Windows. This allows a user to transfer files directly to the board over a network connection. To use this method of testing, the board and the device which sends the data need to be in the same subnet, so a router or switch can be used.

For the first flow of the application, the “.jar” file needs to be transferred to the Raspberry PI board via WinSCP. The IP address of the board is needed, alongside a user name and password. The default super user of the Raspbian operating system is “pi”, and the password is “raspberry”.

The IP address is allocated by the router and shares the same network prefix with all the devices connected in the subnet; the last byte is the unique identifier of each device. To find the board's address, open a terminal on the Raspberry, type the command “ifconfig” and locate the address of the connection. Another method is to open the router settings and list all connected devices.

The next step is to connect PuTTY to this address. PuTTY uses the SSH protocol to connect to the shell of the Linux operating system and can send commands to the terminal. These commands are essential for running the Java application on the board.

The console prints messages when the button is pushed, as well as at the start of the program. The messages refer to the button push state and are displayed each time an action occurs. The program works in an infinite loop and does not require intervention once started, which makes it easy for an on-board car computer to start and run it. The program cannot be closed by a regular user, only by a command or by restarting the board. These features can be observed in Figure 5.1.

Figure 5.1 Raspberry PI connection and program running mode

The next step is to deploy and start the Android application. The application can be deployed on a real device, but for testing purposes the device is simulated using an emulator. The Genymotion emulator is used because it is faster than the default emulator available in Android Studio. The simulated device is a Sony Xperia Z with Android 4.4.4, a robust device with a minimal set of technical characteristics, which makes it an ideal candidate for testing.

Once the button is pushed, the message appears in the PuTTY connection to the board and the process of sending data starts. The Android application sends the location configured in the GPS simulator and displays it in the TextView on the application screen. If the location reaches the web server, the message “Location sent!” is displayed; otherwise “Location not sent. Check connection” is displayed. These features are illustrated in the pictures below.

The event handler in the Android program continuously displays the location of the device, making it trackable from the beginning.

The button in the Android application can be used to trigger the same event in a different manner. This gives the user a similar flow in case the connection to the board is severed and can no longer be accessed.

The last step in the registration flow is the view. The web application contains a Google Map under the latest changes tab, which displays all the changes made in the last hour. This gives the user an overview of the latest changes that occur in the system.

The web application also contains a Position recorder tab, in which a location name can be entered and recorded to the database. This feature can be used when a known location can balance the algorithm, or simply to check the precision of a recorded position.

All these features form the first part of the application, which handles the registration and monitoring of car accidents. They can be considered the data gathering process.

For the second part of the application the data is used by the clustering and processing algorithms to form regions and generate experimental models.

The data import and the generation of the experimental model are done by importing a “.csv” file located under the path “\SCA\data”, which must be named “CoordinatesSet”.

Figure 5.5 Position recorder

Figure 5.6 Import data window and Region view before populating

Figure 5.7 Region view after populating

The tuning parameters are located in the Import Data tab. They are experimental values gathered from a study and are applied directly to the algorithm. The first two parameters are given as percentages and are applied to the deterministic algorithm. The remaining parameter refers to the precision of the clustering process: it is given in kilometers and is automatically converted to the approximation, in the Manhattan metric, of a kilometer on the Google map.
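A minimal sketch of such a conversion, assuming the standard approximation of roughly 111 km per degree of latitude (the exact factor used in the project is not reproduced here):

// Converts the precision given in kilometers into an approximate value in
// degrees, usable as the clustering distance threshold on the map.
public final class PrecisionConverter {

    // One degree of latitude corresponds to roughly 111 km on Earth.
    private static final double KM_PER_DEGREE = 111.0;

    public static double kilometersToDegrees(double kilometers) {
        return kilometers / KM_PER_DEGREE;
    }
}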

5.1 Comparison of Clustering Algorithms

The comparison is done using the two clustering algorithms, K-Means and DBSCAN. Even though DBSCAN is used as the main clustering method, comparing the two algorithms can give a user information about the performance on a dataset or about the precision chosen to work with.

The data gathered through experimental testing is indexed in the following table. The same input is used on the same dataset, in order to obtain sets of parameters with different characteristics.

Although the inputs of the two algorithms differ, the experimental data gathered for one model can be used as input for the other, in order to compare performance parameters.

So, after running a DBSCAN clustering with a given precision, the resulting number of clusters can be passed as input to the K-Means algorithm.
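A sketch of this chaining, with hypothetical algorithm classes and parameters standing in for the project's implementations:

// Run DBSCAN first, then reuse its cluster count as K for K-Means.
List<DataPoint> points = loadDataSet();                       // hypothetical loader
List<List<DataPoint>> dbscanClusters =
        new DbscanAlgorithm(epsilonDegrees, minPoints).cluster(points);

int k = dbscanClusters.size();                                // number of regions found
List<List<DataPoint>> kmeansClusters =
        new KMeansAlgorithm(k).cluster(points);               // same data, seeded with k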

The running time is approximately the same in both cases, except for large numbers of clusters in the case of K-Means, where the average clustering time exceeds 30 minutes, making it impractical for tasks which require high precision.

The average number of points per cluster is larger for K-Means, which means residual points are also included in a cluster. This is due to its lack of precision: K-Means does not use a distance threshold for clustering and simply gathers points around a given centroid.

Figure 5.8 Clustering experimental observations

All this experimental data gives the user a picture of how the algorithms differ and why they can be applied in different situations, depending on the desired results. In a situation where the exact number of regions is known, K-Means can be used, because the residual data does not distort the experiment; it can only offer additional precision.

When there is no prior knowledge about the data, DBSCAN can be used, because it is more reliable when the model is shaped by a constant distance threshold.

6 Hardware Architecture

Figure 6.1 Raspberry PI 3 wiring

The hardware part consists of a Raspberry PI board and a push button. The button is of the “brick” type and has a simple configuration: a VCC pin, which needs to be connected to the 5V pin on the Raspberry; a ground pin; and an OUT pin, which is connected directly to the GPIO2 pin on the Raspberry.

General purpose input-output (GPIO) is a pin on an integrated circuit whose behavior, whether input or output, is controllable by the user at run time.

Also, the built-in wireless module of the Raspberry means that no additional networking components need to be connected to the board.

7 User Manual and installation guidelines

7.1 Installation

For the Raspberry Pi application, a memory card containing Raspbian must be available. The operating system takes about 512 MB, and the application requires an additional 10 MB of storage. Next, the mosquitto broker must be installed in order for MQTT to work. This can be done by typing the commands “sudo apt-get update” and “sudo apt-get install mosquitto mosquitto-clients” in the terminal. The command “service mosquitto status” can be used to verify that the broker is ready to use.

The next step is to install the Android application. This can be done on any Android device running at least Android 4.0 (Ice Cream Sandwich). The device must have a GPS module and a wireless connection to the internet.

The last step is to compile and set up both web applications. The front-end application can either be run in development mode, or compiled in production mode and placed under the path “src\main\resources\public” of the web application. The same goes for the web server application, which can either be run in development mode or packaged into a WAR file alongside the front-end application and deployed under a Tomcat server. The configuration of the server is automatically resolved by Spring Boot using the “application.properties” file.

7.2 User Manual

The flow for a normal user is restricted by the character of the application. The only operations a normal user can perform are ensuring that the Android application runs on a device and triggering the S.O.S. button in case of emergency.

The specialized user has additional operations available. He can start a data import using the Import data tab, with an imposed set of parameters. A specialized user can also add a location on the map using its name rather than its coordinates, and view the changes made in the last hour.

The most important feature is the ability to view the results of the algorithm on a real map. This gives the user the possibility to observe and take notes on the evolution of the areas most affected by accidents.

8 Conclusion

A large number of technologies and frameworks have been used to create the application. This is because the evolution of technology obliges a developer to continuously integrate new technologies, architectures and trends.

Data mining and artificial intelligence have been widely used in recent years because of their ability to predict events using large amounts of unorganized data from different users. This has made systems faster, more accurate and more reliable.

Safety is one of the basic needs of a human being.

Making a product safe contributes to its marketability and publicity, as well as to its development and further improvement. Another factor which makes the product reliable is machine-to-machine (M2M) and IoT communication, which gives a car continuous contact with the Internet and makes it easier to equip it with smart senses.

The application can help the automotive industry make a car safer in case of emergency, and it can also offer a specialized user a concise overview of the areas where accidents occur. This can be helpful when searching for patterns in car accident areas, places where safety needs to be increased.

Considering all these facts, we can say that SCA is an application designed for both normal and specialized users, one that deserves to be accepted and included in every automobile so that it can serve the safety of individuals.

8.1 Future improvements

As a future improvement we can reconsider the communication flow. It would be easy to eliminate the additional router or switch and replace it with Bluetooth, resulting in a Bluetooth client-server communication.

If a SIM card module with low data consumption and a fair price can be added to the Raspberry PI, the Android part can be eliminated, making the product simpler.

The web application can be extended with additional functionality. One addition would be a distance calculator between points, to verify the precision of the algorithm. Additional import algorithms could be supported, with their parameters supplied by the user; this is the method preferred by scientists to obtain all the necessary comparisons. Also, sets of real data must be used to verify the authenticity and validity of the algorithm.

Additional parameters for calculating the probabilities of the algorithm can also be included. Detailed research on Rossmo's algorithm can be conducted to find out whether the buffer zone is a feature which can yield more accurate calculations.

The algorithm can be further improved in many ways, making it a prime candidate for analysis and further development.
