SCIENTIFIC RESEARCH PROJECT
Cercetări privind controlul dispozitivelor inteligente, utilizând interfețe ”Brain-Computer”
(Researches Regarding Control of Intelligent Devices using Brain – Computer Interfaces)
TABLE OF CONTENTS
Chapter 1. Introduction
Chapter 1.1: Research methods to be used
Chapter 2. History and development of robots
Chapter 2.1: The emergence and history of robots
Chapter 2.2: Mobile robots used in BCI systems
Chapter 3. Brain-Computer Interfaces
Chapter 3.1: Introduction to BCI
Chapter 3.2: BCI system basic principle
Chapter 3.3: Applications of BCI systems
Chapter 3.4: Brain signals and methods used in a BCI system
Chapter 4. LabVIEW program and the N.I. Starter Kit 2.0 robot
Chapter 4.1: The robot and its parameters
Chapter 4.2: Ultrasonic sensor accuracy measurement using the N.I. Starter Kit 2.0's Ping))) sensor
Chapter 5. Control and learning methods used in robots in BCI applications
Chapter 5.1: Neural Networks
Chapter 5.1.1: The learning process of the Neural Networks
Chapter 5.1.2: The mathematical model
Chapter 5.2: The NeuroSolutions program
Conclusions
References
Chapter 1. Introduction
The choice of this theme for my PhD thesis is rooted in the fact that I have long studied, and am passionate about, the "knowledge of the future": the methods, models, techniques and algorithms used in the Brain-Computer Interface (BCI) domain.
Combining my passion for helping people with the field of computer science, I identified the opportunity to create something new, namely the BCI system mentioned above.
In approaching this theme for my PhD thesis, I considered first the recent progress in the development of new models, methods, techniques and algorithms used in this domain, without forgetting the complementary domains and the desire to help people.
By carrying out this research project, I aim to open new fields in the use of robots, employing them as "aid robots" for people in need, and thereby to raise the level of social comfort through BCI applications.
Such BCI applications are a necessity at the national and European level; the goal is to evaluate more precisely the effects of this work and its application, and to reduce the number of people who need the help and attention of others 24 hours a day, 7 days a week.
By realizing this, we will obtain:
A better quality of BCI systems;
A higher efficiency in using the BCI systems;
Shortening the time to implement, learn and use the BCI systems;
Increased social comfort.
The specialty treatises and the literature of this domain, published in the most prestigious venues (I.S.I., B.D.I., N.U.R.C.), include extensive references on the benefits for users with various movement impairments who use BCI systems, on quality indicators of BCI systems, on new methods and directions for using robots in this domain, and on modeling brain signals applied to BCI systems.
Chapter 1.1: Research methods to be used
Choosing and using adequate scientific research methods requires an understanding of the fundamentals of science.
Research methodology – as an integrated system of methods – represents all the steps that are needed to reveal and demonstrate a scientific idea, to produce scientific knowledge and thus enhance and enrich science as a whole.
Resources used for research:
So far, the resources targeted for carrying out the research are:
Material Resources: N.I. Starter Kit 2.0 robot and the existing BCI system;
Existing software: N.I. LabVIEW Robotics – 2012.
Journals and libraries:
– IEEE Transactions on Reliability;
– database of the Library of the University of Oradea;
– database of the County Library ”Gheorghe Șincai”.
Databases, electronic libraries and websites used:
ISI Web of Knowledge: http://sub3.webofknowledge.com/;
ScienceDirect Journals: http://www.sciencedirect.com/science/journal;
IEEE Journals: http://www.ieee.org/publications_standards/publications/periodicals/index.html;
IEEE Xplore: http://ieeexplore.ieee.org/;
Springer Journals: http://www.springer.com/librarians/;
ISOGRAPH: http://www.isograph-software.com/;
SCOPUS: http://www.scopus.com/home.url;
Google Scholar: http://scholar.google.ro/.
Chapter 2. History and development of robots
Chapter 2.1.: The emergence and history of robots
The term "robot" (derived from the Czech word "robota") was first used by Karel Čapek and Josef Čapek in their science-fiction works in the early twentieth century.
The word "robot" is of Slavic origin and its translation means "work, group work or hard labor". Before the emergence of the term "robot", the terms "automaton" and "semi-automaton" were used.
The history of today's robots begins long before our era. The first models of "robots" would more accurately be called "automata". These automata could execute only one task, because they were constrained by their design and construction.
One of the first automata (robots) was built by the Greek mathematician Archytas: a wooden dove powered by steam, which could fly on its own, being filled with pressurized vapor (generated from heated water) and fitted with a valve.
Figure 2.1: The dove of the mathematician Archytas
Source: http://www.samosin.gr/exhibition/exhibits_uk.html, accessed in 29.11.2013
In the centuries that followed, many other models of "robots" or "automata" appeared. Some of them eased or reduced people's work (robot operators), while others served for people's amusement.
After the invention of the mechanical clock in the fourteenth century, the way was opened to new possibilities: movements could follow one another automatically/mechanically, without the need for manual intervention in the system.
The development of electrical engineering in the twentieth century enabled the development of robotics. Among the first mobile robots were "Elmer" and "Elsie", built in 1948 by William Grey Walter. These machines could turn toward a light source and could detect collisions with objects in their surroundings.
The "birth year" of industrial robots is considered to be 1956 – the year associated with George Devol's US patent application for the "programmed article transfer". A few years later, Devol built the machine together with Joseph Engelberger and called it "UNIMATE".
Figure 2.2.a (left): Elsie robot built by William Grey Walter in 1948
Source: http://www.extremenxt.com/walter.htm, accessed in 29.11.2013
Figure 2.2.b (right): “UNIMATE” Industrial robot built by George Devol and Joseph Engelberger in 1956 – Source: http://spectrum.ieee.org/automaton/robotics/industrial-robots/george-devol-a-life-devoted-to-invention-and-robots, accessed in 29.11.2013
The scientific domain, which deals with the conception, design and construction of robots is called “robotics”.
Robots are made most often by the combination of other disciplines such as mechanics, electrical engineering and computer science. The linkage created between these three areas is called “mechatronics”.
The most important components of robots are sensors of different types (acoustic, optical, proximity, etc.), which provide the information needed for the mobility of robots in the physical environment and for more precise external control of them.
Figure 2.3: Optical sensor examples
Source: http://www.adelaida.ro/tcrt5000-senzor-optic.html, respectively http://www.sursedetensiune.ro/en/spd/38/Senzor-optic-Fotek-CDR-30X-M12, accessed in 29.11.2013
Figure 2.4: Acoustic sensor examples
Source: http://www.pacndt.com/index.aspx?go=products&focus=sensors.htm, respectively http://spanish.alibaba.com/product-gs-img/emisi-n-ac-stica-del-sensor-432958462.html, accessed in 29.11.2013
Figure 2.5: Proximity sensor example
Source: http://electroblue.ro/Senzori-de-proximitate-inductivi–Seria-AK,p-347.html, accessed in 29.11.2013
A robot may or may not be able to act autonomously; this is how autonomous robots are distinguished from remote-controlled ones (drones).
Figure 2.6.a-b: Example of an autonomous robot and a remote-controlled robot
Source: http://science.howstuffworks.com/robot4.htm, respectively http://www.ziarmm.ro/cele-mai-importante-stiri-ignorate-in-2010-acestea-pot-schimba-lumea/, accessed in 29.11.2013
The term "robot" describes a broad field with multiple applications, so robots are classified into several categories, such as:
• autonomous mobile robot (example: mobile vacuum cleaner);
• controlled mobile robot (example: toy carts guided by radio signals);
• humanoid robot (example: ASIMO developed by Honda);
• service robot (example: blender, mixer, vacuum cleaner, etc.);
• industrial robot (example: non-mobile robots in the automotive industry);
• explorer robot (example: robots used in emergency situations in search of survivors);
• walking robot (to simulate human gait – usually humanoid robots);
• medical robot (example: used in hospitals for lifting patients);
• toy robot (example: toy carts guided by radio signals);
• military robot (drone, robot for finding and / or destruction of artisanal bombs, etc.).
Chapter 2.2: Mobile robots used in BCI systems
The robots described above are very useful to people in their everyday work. The command of these robots can be (pre)programmed (a programmer anticipates the situations that may occur during the robot's use, and each situation has a procedure the robot must follow), manual (using joysticks, a keyboard and/or mouse, etc.) or, more recently, driven by the user's brainwaves, without the user touching the robot or any other control peripheral.
Unfortunately, in some cases the user cannot use the peripherals (keyboard and/or mouse) for various reasons: a broken arm, an injured hand, or an inability to move the hand even when it is neither broken nor wounded. For example, a person may be fully aware of the environment but, after an accident that broke his or her neck, the commands sent by the brain to the muscles can no longer reach them because of the injury. The same situation exists in people suffering from amyotrophic lateral sclerosis (ALS): they are fully conscious but cannot move their limbs (lower or upper), and in some cases they cannot even move their eyes.
Robots can be used to help people with this disease (ALS) or with other diseases that have similarly devastating effects on the patient, the patient's family and society as a whole. Examples of robots and/or applications for people who cannot move at all are as follows:
Figure 2.7: Example of robot used in BCI application
Source: http://www.upmc.com/media/media-kit/bci/pages/default.aspx, accessed in 30.11.2013
The authors of the picture wrote: "Jan Scheuermann, who has quadriplegia, brings a chocolate bar to her mouth using a robot arm she is guiding with her thoughts". Without such robots, this patient would have to be helped by family or by a third person (people who cannot take care of her 24 hours a day); instead, she is assisted by a robotic arm in some basic activities (eating, drinking, etc.). The robotic arm does not tire over time and does not need sleep or rest, so the patient can take care of herself to a certain degree and does not need another person to care for her around the clock.
Figure 2.8: Another example of robot used in BCI application
Source: http://www.upmc.com/media/media-kit/bci/Pages/images.aspx, accessed in 30.11.2013
Other people need help too, such as Mr. Hemmes:
Figure 2.9: Mr. Hemmes practicing with a robotic arm (using a BCI system for controlling the robotic hand)
Source: http://www.upmc.com/media/media-kit/bci/Pages/images.aspx, accessed in 30.11.2013
In addition to commanding a robotic hand, the applications used to help people in need can also serve to move the patient (for example, an electric wheelchair controlled with brainwaves through a BCI); likewise, people who have "locked-in" syndrome and cannot speak can use these systems to communicate with the outside world.
Figure 2.10: Mr. Hemmes using mind power to guide a virtual ball on the screen
Source: http://www.upmc.com/media/media-kit/bci/Pages/images.aspx, accessed in 30.11.2013
Chapter 3. Brain Computer Interfaces
Chapter 3.1: Introduction to BCI
What is a BCI?
The Brain-Computer Interface (BCI), also called "Mind-Machine Interface" or "Brain-Machine Interface", is a direct communication link between the human brain and an external electronic device. In a BCI, the communication does not rely on peripheral nerves and muscles.
A BCI offers communication and control capabilities to people. It typically uses electroencephalographic (EEG) signals recorded from the scalp, although invasive methods for recording brain activity also exist.
One example of a BCI application is the EEG-based brain-controlled mobile robot, which can serve as a powerful aid for severely disabled people, especially people with "locked-in" syndrome, helping them in daily life to move voluntarily and to communicate with the external world, as presented in Chapter 2.
BCI operation depends on the interaction of two adaptive controllers: the first is the user, who encodes commands in the input provided to the BCI; the second is the BCI, which recognizes the commands and translates them into device control. This is why BCI use is a skill that both the user and the system must acquire and maintain over time. The mutual adaptation of user to system and system to user is the fundamental principle of BCI operation.
Electrical signals produced by brain activity were first recorded from the cortical surface of animals by Richard Caton in 1875 and from the human scalp by Hans Berger in 1929.
Recent interest and activity in BCI
This recent interest and activity around BCI is focused on four factors. The first factor is the increased appreciation of the needs and abilities of people who are severely affected by paralysis or by motor disorders such as the "locked-in" syndrome.
The second factor is the greatly increased understanding of the nature and biology of the brain.
The third factor is the availability of powerful, low-cost computer hardware.
The fourth factor is recognition of the remarkable adaptive capacities of the human central nervous system (CNS), both in normal life and in response to damage or disease [1].
Current trends in BCI articles show that BCI researchers select EEG rather than invasive methods [2].
BCI technology has increasingly been applied to new application fields.
Figure 3.1: The number of published brain-computer interface (BCI) articles for each year from 2007 to 2011 [2]
Figure 3.2. The numbers of EEG-based brain–computer interface (BCI) applications developed in BCI articles introduced from 2007 to 2011 [2], [5]
Chapter 3.2: BCI system basic principle
Chapter 3.2.1: General description
A BCI uses brain signals to control a device or to adjust the communication between user and a device [1].
Figure 3.3 shows the basic design and operation mode/concept of BCI.
Figure 3.3. The BCI System – basic concept
The BCI translates the features recorded from the scalp or from the cortex into commands that will operate a device in realtime.
Generally, the electrodes are placed according to the international 10-20 standard system.
Figure 3.4. The international 10-20 system, Source: http://www.bem.fi/book/13/13.htm, accessed in 31. 11. 2013.
There are many techniques and methodologies for recording brain signals for BCI. They can be divided into two categories:
noninvasive record methods;
invasive record methods.
The noninvasive recording methods include: recording of electrical or magnetic fields (like electroencephalography [EEG], magnetoencephalography [MEG]), functional magnetic resonance imaging [fMRI] [7], positron emission tomography [PET], infrared [IR] imaging, near-infrared spectroscopy [NIRS], fetal magnetoencephalography [fMEG] and single photon emission computed tomography [SPECT] [2].
Other BCI researchers have used invasive record methods: electrocorticography [ECoG] or microelectrode arrays [MEAs] [2].
EEG-based BCIs
To date, three different kinds of EEG-based BCIs have been tested on humans. The difference lies in the particular EEG features that convey the user's intentions. Common BCI techniques and inputs include [11]:
motor imagery;
event related potentials;
steady state evoked potentials.
Suitable Brain Signals used in BCIs can be:
1) P300 potentials.
2) Steady state visually evoked potential (SSVEP).
3) Event-related (de)synchronization (ERD / ERS) [15].
A P300-based BCI: the P300 BCI system described by Donchin's group flashes letters or other symbols in rapid succession; the symbol the user attends to elicits a P300 response that the system detects.
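One standard way to make an event-related potential such as the P300 visible is to average EEG epochs time-locked to the stimuli: background activity, being random with respect to the flashes, averages toward zero, while the stimulus-locked component remains. The sketch below is illustrative Python, not code from the cited systems; the window length and sampling rate are assumptions.

```python
import numpy as np

def average_erp(eeg, stim_onsets, fs, window_s=0.8):
    """Average EEG epochs time-locked to stimulus onsets (given as
    sample indices). Random background EEG cancels in the average;
    a stimulus-locked component such as the P300 remains visible."""
    n = int(window_s * fs)  # samples per epoch
    epochs = [eeg[s:s + n] for s in stim_onsets if s + n <= len(eeg)]
    return np.mean(epochs, axis=0)
```

In a P300 speller, the row/column whose averaged epoch shows the largest deflection roughly 300 ms after the flash would be taken as the attended one.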
Brain-controlled mobile robots can be divided into two categories according to their operational modes: 1) "direct control by the BCI", where the BCI translates EEG signals directly into motion commands that control the robot;
2) “shared control”, where a user and an intelligent controller share the control over the robot or wheelchair.
Signal acquisition: EEG signals can be collected with electrodes placed on the scalp's surface. The most widely used electrodes are silver/silver chloride (Ag/AgCl), because of their low cost, low contact impedance and relatively good stability. Some researchers have been exploring "dry" electrodes, which require neither gel nor skin cleaning.
Figure 3.5. Dry electrode examples, Source: http://openi.nlm.nih.gov/detailedresult.php?img=3231409_sensors-11-05819f1&req=4, accessed in 31.11.2013
Signal processing
The acquired signals are first preprocessed to remove artifacts such as power-line noise, electrocardiogram (ECG), electrooculogram (EOG) and electromyogram (EMG) interference, as well as any body-movement artifacts. Features are then extracted from the preprocessed signals. Finally, the classifier translates these extracted features into the commands that the subject intends to output [4].
A BCI records brain signals, extracts features and then translates those features into device commands.
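The preprocess → extract-features → translate chain described above can be sketched in Python. This is an illustrative sketch, not part of the project: the FFT-mask filter, the mu/beta band choices and the threshold rule are all assumptions, standing in for the proper filter design and classifier a real BCI would use.

```python
import numpy as np

def bandpass(signal, fs, low, high):
    """Illustrative preprocessing: crude band-pass filtering by zeroing
    FFT bins outside [low, high] Hz (a real system would use a proper
    filter design plus artifact removal)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def extract_features(epoch, fs):
    """Feature extraction: log band power in the mu (8-12 Hz) and
    beta (18-26 Hz) bands, commonly used for motor imagery."""
    feats = []
    for low, high in ((8.0, 12.0), (18.0, 26.0)):
        filtered = bandpass(epoch, fs, low, high)
        feats.append(np.log(np.mean(filtered ** 2) + 1e-12))
    return np.array(feats)

def translate(features, mu_threshold):
    """Toy 'translation algorithm': a drop of mu-band power (ERD)
    below a calibrated threshold is read as a movement command."""
    return "MOVE" if features[0] < mu_threshold else "REST"
```

The threshold would be calibrated per user during training, reflecting the mutual user/system adaptation discussed earlier.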
Figure 3.6. Recording sites for electrophysiological signals used by BCIs [3], page 31
Evaluation metrics
These can be classified into two major categories:
1) “Task metrics” – which focuses on the question “How well specified tasks can be performed with the brain-controlled robots?”.
2) “Ergonomic metrics” – representing the state of the user rather than his/her performance.
Two other ergonomic metrics include “learnability”, representing the ease of learning to use the robot, and “level of confidence” experienced by the participants [4].
In the beginning, BCI technology was developed as a communication device for "locked-in" users, but the scope of research has broadened to include non-medical applications as well; today, the first commercial products are available for home users. As a result, new disciplines are entering the BCI community and new lines of research are being introduced.
Chapter 3.3: Applications of BCI systems
Device control
One of the most important goals in developing BCIs was to give users who have lost control of their limbs access to devices and to communication with other people. Users can already benefit from BCI devices, as mentioned above, even though speed, accuracy and efficiency are still limited. For healthy users, a BCI currently cannot act as a competitive source of control signals, because of its limited bandwidth and accuracy compared with standard muscular control [6].
Communication
People with amyotrophic lateral sclerosis (ALS), multiple sclerosis (MS) or stroke need to communicate their needs to the external world. Today they can do so using BCI systems. Examples of such communication include controlling a cursor on the screen, selecting letters from a virtual keyboard, browsing the internet, etc.
User state monitoring
User-machine interfaces of the future will need to understand the user's current state and the user's intentions or commands. These future implementations will require systems that gather and interpret information on mental states such as emotions, attention, workload, stress, and even mistakes [1].
Evaluation
Evaluation applications can be used online and offline. Neuromarketing and neuroergonomics are only two examples. Neuroergonomics is linked to Human-Computer Interaction: it evaluates how well a technology matches human capabilities and limitations [1].
Training and education
A lot of aspects of training are related to the brain and its plasticity. Measuring this plasticity and the afferent changes in the brain can help to improve training methods in general. Indicators like learning state and rate of progress are useful for automated training systems and virtual instructors. Currently, this application area is in a theoretical phase with limited experimental evidence [1].
Gaming and entertainment
The entertainment industry is very often the front runner in introducing new concepts – among others in human-computer interaction for consumers. In the last few years new games have been developed that are exclusively for use with an EEG headset by companies like Neurosky, Emotiv, Uncle Milton, MindGames, Mattel, Microsoft, Hitachi, Sega, IBM, etc. [1].
Safety and security
EEG, alone or combined with other measures, could support the detection of abnormal behavior and suspicious objects. When one or more observers are watching CCTV, using EEG together with eye movements might help identify potential targets that would otherwise not be noticed consciously [1].
Chapter 3.4: Brain signals and methods used in a BCI system
The electrical fields produced by brain activity can be recorded from the scalp, using EEG; from the cortical surface, using electrocorticography (ECoG); or from within the brain, by recording local field potentials (LFPs) or neuronal action potentials (spikes). Each recording method has its own advantages and disadvantages.
EEG-Based BCI paradigms descriptions
The techniques used to build EEG-based BCI systems are presented in [2], where the EEG-based BCI articles are classified into seven categories according to the experimental paradigm used to elicit different kinds of brain activity:
motor imagery;
visual P300;
steady-state visual evoked potential (SSVEP);
nonmotor mental imagery;
auditory;
hybrid;
other paradigms.
The detailed descriptions of each BCI paradigm can be found in [2].
In [5] we can find a description list of the recording technologies used to record brain activity: MEG, MRI, fMRI, PET, SPECT, TMS.
TABLE I
EEG SIGNALS CLASSIFICATION [5]
Among the different ERP types, P300 has been used the most frequently, and a few BCI studies used other ERPs such as N100, N200, P100, P200, movement-related ERP, and error-related ERP [2].
In addition to the main brain signals that are mentioned previously, two additional types of brain signals that are used to develop brain-controlled mobile robots are:
1) error-related potential (ErrP), which occurs after a user becomes aware of an error made by himself/herself or by another entity;
2) the synchronization of alpha rhythms, which significantly occurs in the visual cortex when the eyes are closed [15].
Figure 3.7. The BCI system concept and the above mentioned process steps [16]
Chapter 4. LabVIEW program and the N.I. Starter Kit 2.0 robot
The practical part of this research project should be preceded by a presentation of the program in which we will work and of the robot that we will use and on which we will make some measurements.
NI LabVIEW, a graphical environment for developing systems that has been in use for over 28 years, has revolutionized the development of test, measurement and control applications. Regardless of experience level (beginner or advanced), engineers and scientists can use it to rapidly and effectively interface with acquisition and control hardware, analyze data and design distributed systems.
Chapter 4.1: The robot and its parameters
The National Instruments Starter Kit 2.0 is a robot with three wheels, two of which are motorized; it has a rotatable ultrasonic sensor used to detect obstacles in front of the robot, an FPGA development board, and other accessories, described below.
Figure 4.1: The NI Starter Kit 2.0 robot – diagonal view
Source: http://www.ni.com/white-paper/11564/en/, accessed in 04.12.2013
Figure 4.2: The NI Starter Kit 2.0 robot – front view
Source: http://www.pitsco.com/About/?art=5000, accessed in 04.12.2013
Figure 4.3: The NI Starter Kit 2.0 robot – view from above
Source: http://www.pitsco.com/About/?art=5000, accessed in 04.12.2013
Figure 4.4: The NI Starter Kit 2.0 robot – side view
Source: http://www.difi.net/top_menu/27/mid_menu/83/0/left_menu/119, accessed in 04.12.2013
NI’s Starter Kit 2.0 robot has the following electrical and electronic parameters (parameters found in “NI-Datasheet-ds-217.PDF” downloaded from the NI website on 04/12/2013):
Completely assembled mobile robot base (starter kit);
Ultrasonic sensor, encoders, motors, batteries and charger included;
Controller based on NI Single-Board RIO;
Real-time decision making and data processing using FPGA I/O;
180-day evaluation of the LabVIEW Robotics, LabVIEW Real-Time and LabVIEW FPGA software modules (at the University of Oradea, the LabVIEW FPGA module license is unlimited);
Connects easily to a variety of robotic sensors and actuators;
Can execute an obstacle-avoidance program.
System requirements and compatibility:
Compatible operating systems: Windows;
Information / about driver: NI-RIO;
Compatibility Software: LabVIEW, LabVIEW FPGA Module, LabVIEW Real-Time Module, LabVIEW Robotics Module.
Description of the program and robot: the NI LabVIEW Robotics Starter Kit is an "out-of-the-box" mobile robot platform, with sensors, motors and NI Single-Board RIO hardware for embedded control. The LabVIEW Robotics software has features both for beginners and for more experienced users. If you are new to LabVIEW, you can use the high-level "LabVIEW Robotics Starter Kit API" to quickly write a program and control the robot in real time. If you are an advanced user, you can access the FPGA and perform low-level hardware customizations.
LabVIEW Robotics Starter Kit features:
• "Pitsco Education" 12 VDC motors, offering 152 rpm;
• optical quadrature encoders with 400 pulses per revolution;
• "PING)))" ultrasonic distance sensor, with a range between 2 cm and 3 m;
• mounting bracket for the "PING)))" ultrasonic distance sensor, allowing a 180-degree environmental scan;
• two "Pitsco Education Tetrix" motors driving the wheels and an "omni" wheel for steering.
Overview of the NI sbRIO-9632 chip:
The NI sbRIO-9632 embedded acquisition and control device integrates a real-time processor, a reconfigurable gate array (FPGA, Field-Programmable Gate Array) and I/O on a single printed circuit board (PCB). It features an industrial 400 MHz processor, a 2M-gate Xilinx Spartan FPGA, 110 digital I/O lines at 3.3 V (5 V/TTL compatible), 32 single-ended / 16 differential channels of 16-bit analog input at 250 kS/s, and four channels of 16-bit analog output at 100 kS/s. It also has three connectors for expansion with C Series board-level I/O modules. The sbRIO-9632 offers an operating temperature range of -20 to +55 °C, a wide input voltage range of 19 to 30 VDC, 128 MB of DRAM for running the embedded operating system, and 256 MB of nonvolatile memory for storing programs and logging data.
This device also has a 10/100 Mbit/s Ethernet controller, which can be used for programming, communication and networking, and it hosts Web (HTTP) and file (FTP) servers. The RS232 serial port can also be used to control peripheral devices.
The robot platform has the following physical dimensions:
Dimensions: 405 mm x 368 mm x 150 mm (15.9 in x 14.5 in x 5.9 in);
Weight: 3.6 kg (7.9 lb);
Battery charge time: 1.7 hours;
Battery life (motors running): 1 hour;
Battery life (motors off): 4 hours.
The ultrasonic sensor:
The "Parallax PING)))" ultrasonic sensor detects objects by emitting a short ultrasonic burst and then "listening" for the echo. Under the control of a host microcontroller (which provides the trigger pulse), the sensor emits a short burst at 40 kHz (ultrasound). This burst travels through the air at about 1130 feet per second (about 344.42 m/s), hits an object and then bounces back to the sensor. The "PING)))" sensor provides an output pulse to the host that ends when the echo is detected; the width of this pulse therefore corresponds to the distance to the target.
Power supply: 5 VDC;
Current consumption: 30 mA typical, 35 mA max;
Range: from 2 cm to 3 m;
Trigger input: positive TTL pulse, 2 µs min, 5 µs typical;
Echo pulse: positive TTL pulse, from 115 µs to 18.5 ms;
Echo holdoff: 750 µs after the end of the trigger pulse;
Burst frequency: 40 kHz for 200 µs;
Activity indicator: LED shows sensor activity;
Delay before next measurement: 200 µs;
Dimensions (H x W x D): 22 mm x 46 mm x 16 mm (0.84 in x 1.8 in x 0.6 in).
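Given the principle and the timing parameters above, the conversion from echo pulse width to distance can be sketched in Python. This is an illustrative helper, not vendor code; the speed-of-sound value is the one quoted in the text, and the function name is an assumption.

```python
def ping_distance_m(echo_pulse_s, speed_of_sound_mps=344.42):
    """Convert a PING))) echo pulse width (in seconds) into a one-way
    distance (in metres). The pulse width spans the round trip to the
    obstacle and back, so the time-of-flight product is halved."""
    return speed_of_sound_mps * echo_pulse_s / 2.0
```

For example, an 11.6 ms echo pulse corresponds to roughly 2 m, while the minimum 115 µs pulse corresponds to about the 2 cm lower end of the sensor's range.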
Chapter 4.2: Ultrasonic sensor accuracy measurement using the N.I. Starter Kit 2.0’s Ping))) sensor
This part is based on the reflection and/or absorption of ultrasound (US) waves in some materials that can be found in indoor environment.
Sound and ultrasound waves are defined as longitudinal pressure waves in the medium in which they are travelling. This medium can be air, water, steel, concrete, granite blocks, human body, etc.
Targets/subjects whose dimensions are larger than the wavelength of the incident sound waves will reflect those waves; the reflected waves are called the "echo". Using the time-of-flight (TOF) technique, we can measure the distance between the US system and the target. However, some materials can absorb the US waves; in that case the system detects no obstacle even though one is actually there.
Chapter 4.2.1: Introduction
Ultrasound (US) sensing is not genuinely a human invention, because some animals, such as bats and dolphins, used this technique long before humans observed or (re)invented it. Bats and dolphins have long navigated with the help of the US "transducers" embedded in their bodies.
Based on the US principle of operation, people created artificial transducers, which can be used for different purposes, such as robot navigation, distance measurement, internal flaw detection, and medical and safety applications. US measurements are widely used in many domains.
US applications appear under several names, such as "real-time ultrasound" (RTU) in [17], "non-destructive evaluation" (NDE) in [18], or "ultrasonic distance measurer" (UDM) in [19].
US sensors are generally used for anti-collision and rangefinder purposes by measuring the distance to an obstacle [20]; some application ideas where US sensors can be used are: security systems, parking assistant systems, interactive animated exhibits and robotic navigation.
Chap. 4.2.2.: Sound and ultrasound principles
Sound is a mechanical vibration transmitted by an elastic medium (usually air). The range of frequencies that human beings are able to hear is approximately 20 Hz to 20,000 Hz. This range is by definition “the audible spectrum”, and it varies between individuals. Sound above 20,000 Hz is known as “ultrasound”, and sound below 20 Hz is called “infrasound”.
In air, the US speed is approximately 345 m/s, in water 1500 m/s, and in a bar of steel 5000 m/s [19], but these values depend on the physical parameters of the medium: humidity, temperature and atmospheric pressure in the case of air; temperature, salinity and pressure in the case of water; and steel type, carbon content, etc. in the case of steel. The propagation velocity of ultrasonic waves in air is the same as that of audible sound.
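The temperature dependence of the speed of sound in air can be sketched with the common linear approximation c ≈ 331.3 + 0.606·T (T in °C); the constants here are standard textbook values, not taken from the cited sources:

```python
def sound_speed_air(temp_c):
    """Approximate speed of sound in dry air, in m/s, using the
    common linear approximation c = 331.3 + 0.606 * T (T in deg C)."""
    return 331.3 + 0.606 * temp_c

# At 20 deg C this gives about 343 m/s, consistent with the
# ~345 m/s figure quoted above for air.
```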
US measurement is widely used since it is a noninvasive technique and the equipment is relatively inexpensive and compact [21].
US sensors can be found in a wide range of frequencies ranging from 20 kHz to a few hundreds of MHz [22]; US transducers with frequencies of 2.25MHz, 5MHz, 10MHz, 20MHz, 50MHz and 100MHz were used in [23].
As it travels through a material, the ultrasonic pulse can be reflected, refracted, scattered or transmitted by the material's (in)homogeneities [24].
The industrial community has used ultrasonic time-of-flight (TOF) and phase-shift (PS) methods to measure the distance to objects within ±0.05 mm [25], which shows how precise US measurement can be.
The US system is presented in Figure 4.5 and works as follows: a short burst signal is emitted by the transmitter. A silent period follows; this period, called the “response time”, is the time spent waiting for reflected waves. The emitted acoustic signal may or may not encounter an obstacle. If it does, the signal bounces back from the obstacle; this bounced-back signal is called the “echo”. The echo is picked up by the receiving transducer and converted into an electrical signal, which is usually amplified, filtered and possibly converted into digital form [24]. From the elapsed time between transmission and reception, the distance between the US system and the obstacle/object can be calculated.
Figure 4.5: Schematic principle of US System
As ultrasound spreads through the air it suffers considerable attenuation, and the degree of attenuation is directly proportional to frequency: when high resolution is required, a high-frequency short-range sensor should be chosen, while long-distance applications use low-frequency sensors [26].
Modern ultrasound machines are lightweight, extremely portable and battery-operated systems, which are capable of making complex imaging presentations and are available for use in the field [27].
The frequency of ultrasonic wave around 40 kHz has the best transmission efficiency [28].
Some surfaces can generate multiple simultaneous echo responses [22].
Technologies other than ultrasound include vision cameras, time-of-flight cameras and infrared sensors [22].
The properties of the reflecting object [19], as shown in Figure 4.6, are:
Surface: An ideal target's surface is hard and smooth. Such a surface reflects the greatest amount of signal.
Distance: The shorter the distance from the ultrasonic sensor to an object, the stronger the returning echo is.
Size: A large object has more surface to reflect the signal than a small one.
Angle: The portion of the object perpendicular to the sensor returns the echo.
Pros and cons of using US sensors and US systems:
Pro US: Ultrasound offers strong directivity, nondestructive testing, speed, robustness, versatility of use, accuracy, lower cost than other technologies, and propagation over long distances in a suitable medium.
The material is unaffected by the propagation phenomenon of the US, allowing the sample to be tested a number of times without becoming deformed [29].
Other US sensor characteristics are: the transmission speed in air and in water is much slower than that of light, which makes signal processing easier [20]. US can propagate through steel, which light cannot. US is not affected by the color of an object, and it can be used to measure the distance from the sensor to a transparent body, such as a glass object or transparent plastic.
Figure 4.6: US reflecting objects properties
Low cost and accuracy as well as speed are important in most of the applications [30]. Ultrasonic sensors are quite fast (fast enough) for the most of the common applications.
US is a good methodology for low labor continuous remote monitoring.
Con US: US can consume more energy than other technologies, it has a shorter range, and the speed of sound depends on the environmental temperature and humidity [31]. US has low directionality compared to optical sensors [20]. Other shortcomings of US are the inaccuracies, crosstalk and spurious readings that can appear [32].
Generally, high frequencies are desirable for good spatial resolution in US systems, but highly sound-attenuating composite materials require low frequencies for good penetration. In such cases a technique called “Distance Amplitude Correction” (DAC), integrated in every modern ultrasonic device, is used. It can amplify the signal by as much as 1 dB/mm, depending on the material [33].
The application of ultrasound in air is less developed than in other media such as liquids or solids [22].
Some applications need coupling materials to obtain adequate acoustic contact.
Chap. 4.2.3.: Time Of Flight (TOF)
The distance travelled by a sound wave is directly proportional to the time-of-flight (TOF).
The time between the sending of an ultrasonic signal by the transmitter (T) and receiving the signal by the receiver (R) of a T/R pair represents the time for sound to travel from the T/R pair to the target and, after it, back again to the T/R pair.
The sound wave travels from the transmitter to the target and is then reflected back over an equal distance to the receiver, so the distance between the T/R unit and the target can be computed as the speed of sound multiplied by the time-of-flight (TOF), divided by 2 [25].
After receiving the reflected pulses, Δt is calculated and the distance of the object can be found, using equation (1):
Δt = tR-tT , (1)
where tR is the received time of US wave and tT is the transmission time.
The distance d between the US sensor and target is proportional to the time Δt (TOF). The distance is calculated, using equation (2):
d = (c×Δt)/2 , (2)
where c is the sound velocity [34].
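Equations (1) and (2) combine into a short distance-computation sketch (a minimal illustration; the function name and defaults are ours, not from the cited sources):

```python
def tof_distance(t_transmit, t_receive, c=343.0):
    """Distance to the target from time-of-flight measurements.
    Equation (1): delta_t = tR - tT (the round-trip time).
    Equation (2): d = (c * delta_t) / 2, halved because the wave
    travels to the target and back."""
    delta_t = t_receive - t_transmit
    return c * delta_t / 2.0
```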
Chap. 4.2.4.: Domains of use of US sensors
The fields of application of US sensors and US waves are quite varied. The list below presents only a few examples; in reality there are many more.
The use of US can be divided in different categories, like:
Ultrasonic sensor research: [30] shows how the accuracy of the measured distance depends on the separation between the ultrasonic transmitter and receiver.
Automotive/safety: [35] presents an advanced accident avoidance system for automobiles, whose main objective is a collision-avoidance system that provides security in bad weather.
Agriculture: [36] presents an ultrasonic system for weed detection in cereal crops, separating weed-infested from non-infested areas with up to 92.8% success.
Metallurgy and mechanical testing: [37] presents a thorough study of the micro-structural and mechanical behavior of nitrogen-alloyed type 316L austenitic stainless steel.
Medical: [38] describes the ultrasonic measurement of normal spleen size in infants and children in paediatrics, and establishes a correlation of spleen size with age, height and weight.
Industry: [39] presents two approaches to ultrasonic measurement of temperature in aqueous solutions.
Material research: [40] uses US to measure corrosion depth in concrete exposed to an acidic environment.
Geographical: [41] reports measurements of the velocities and absorption of acoustic waves in minerals at elevated pressures and temperatures, carried out to interpret seismic information.
Food industry/veterinary: [25] presents ultrasonic measurement of pre-parturition restlessness in crated sows.
Mixed projects: [31] presents a localization system for Wireless Sensor Networks (WSN) based on ultrasonic (US) Time-of-Flight (ToF) measurements.
Robotics (movement) control: [32] presents a new method of obstacle avoidance for service robots in indoor environments.
Chap. 4.2.5.: Ultrasonic Sensor Description
The N.I. Starter Kit 2.0 uses a US sensor whose datasheet can be found in [42].
This sensor is called “PING))) Ultrasonic Distance Sensor”, part number: #28015, made by Parallax.
This ultrasonic distance sensor provides precise, non-contact distance measurements from about 2 cm to 3 meters. It is very easy to connect to microcontrollers, requiring only one I/O pin.
The PING))) Ultrasonic Distance Sensor’s features:
Range: 2 cm to 3 m (0.8 in to 3.3 yd);
Burst indicator LED shows sensor activity;
Bidirectional TTL pulse interface on a single I/O pin can communicate with 5 V TTL or 3.3 V CMOS microcontrollers;
Input trigger: positive TTL pulse, 2 μs min, 5 μs typ;
Echo pulse: positive TTL pulse, 115 μs minimum to 18.5 ms maximum.
The PING))) Ultrasonic Distance Sensor’s key specifications:
Supply voltage: +5 VDC;
Supply current: 30 mA typ; 35 mA max;
Communication: Positive TTL pulse;
Package: 3-pin SIP, 0.1” spacing (ground, power, signal);
Operating temperature: 0 – 70 °C;
Size: 22 mm H x 46 mm W x 16 mm D;
Weight: 9 g.
All of these specification data come from the PING))) Ultrasonic Distance Sensor's manual.
The key features of this US sensor are:
provides precise, non-contact distance measurements within a 2 cm to ~3 m range;
US measurements work in any lighting condition, making this a good choice to supplement infrared object detectors;
simple pulse in / pulse out communication requires just one I/O pin;
3-pin header makes it easy to connect to a development board, directly or with an extension cable, hence no soldering required.
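Based on the datasheet figures above, the conversion from echo pulse width to distance can be sketched as follows (a hypothetical helper of ours, not the actual LabVIEW block used later):

```python
def ping_pulse_to_distance(echo_pulse_s, c=343.0):
    """Convert a PING))) echo pulse width (in seconds) to a distance
    in meters.  The pulse width equals the round-trip time, so it is
    halved; widths outside the datasheet window of 115 us - 18.5 ms
    are rejected as invalid readings."""
    if not (115e-6 <= echo_pulse_s <= 18.5e-3):
        return None
    return c * echo_pulse_s / 2.0

# The datasheet window maps neatly onto the stated range:
# 115 us  -> ~0.02 m (the 2 cm minimum range)
# 18.5 ms -> ~3.2 m  (about the 3 m maximum range)
```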
Chap. 4.2.6.: Measurement Description
This section presents an ultrasonic sensor measurement test on different materials, using the NI Starter Kit 2.0's ultrasonic sensor, shown in Figure 4.7. The materials used are: cardboard sheet, sheet glass, wood, sponge, sheet of plastic, expanded polystyrene, porous gum, metal sheet and fur.
These materials were placed at different distances from the US emitter-receiver pair, first with a 0° (perpendicular) angle of incidence of the US wave, as shown in Figure 4.8.a.; in the second set of measurements the angle of incidence was approximately 45°, as shown in Figure 4.8.b.
Figure 4.7: PING))) Ultrasonic sensor front and back
Figure 4.8.a. (above): 0° angle of the incidental US wave
Figure 4.8.b. (below): 45° angle of the incidental US wave
The tested materials were positioned at 35 mm, 50 mm, 100 mm, 200 mm, 350 mm, 500 mm, 750 mm and 1000 mm from the NI Starter Kit 2.0's US sensor.
The measurements were realized using N.I.'s LabVIEW program (www.ni.com/labview); specifically, LabVIEW Robotics, version 2012.
The application continuously reads the distance sensor mounted on the robot, using the “Read PING))) sensor” block, and displays the results on the front panel using the “WaveForm Chart” block. The measured distance lies between 0.02 m (2 cm) and 1 m (100 cm).
Figure 4.9: The LabVIEW Robotics program realized for measuring the distance between the US sensor and the different target materials used in this research project
The US sensor reading program will use the specific block for this sensor, namely the “Read PING))) sensor” block.
The distance is determined by measuring the return time of the wave transmitted by the sensor, as described in the theory above. The block automatically converts the readings and outputs the distance directly in meters.
The waveform data is displayed with the “WaveForm Chart” block, which continuously draws a graph on the program's front panel from the data obtained from the distance sensor.
As can be seen in Table II, for the sponge, fur and gum the distances measured by the US sensor are not correct at short ranges (35 mm – 350 mm from the sensor). The measured distances are larger – in some cases almost double the real distance: for the sponge, the real distance was 35 mm and the measured distance was 70 mm; for the fur, the real distance was 35 mm and the measured distance was 66 mm. This measurement error does not appear for the other materials.
For sponge, fur and gum, this problem disappears (or becomes irrelevant) as the distance between the material and the US sensor grows. From 500 mm onward, the US sensor measures the distance correctly for all materials, with only small errors; by “small errors” we mean errors within ±10%.
When the ultrasonic wave is incident at 45°, as shown in Figure 4.8.b., the measured values (shown in Table III) change considerably compared to the 0° case.
In Table III, a “-” is written in many places instead of a measured value. This is because the measured distance values were unclear and unstable. One reason can be that some materials reflected the incident ultrasonic waves at 45°, so the sensor was in reality measuring distances other than the distance between the tested material and the PING))) ultrasonic sensor, as drawn in Figure 4.10.
Another reason can be that some of the materials used in this test – specifically sponge and fur – absorb the US waves.
During the tests, several observations were made: metal is a very good ultrasound reflector (like a mirror for light), although distance-measurement errors were frequent (unstable distance values); glass is also like a mirror for ultrasound, with only small errors (relatively stable distance values); cardboard is a worse reflector than metal or glass (it probably absorbs part of the signal's amplitude), but its reflectivity remains observable; the plastic sheet was a good reflector, better than cardboard but worse than the metal or glass sheet; wood also showed reflecting properties; the porous gum produced large oscillations in the measured distance; the expanded polystyrene was mediocre – neither the best nor the worst reflector. These results were largely predictable. The interesting case was the sponge, which absorbed the entire ultrasonic wave.
Fig. 4.10.a. and 4.10.b.: Ultrasonic distance measurement differences appear when the person who holds the materials stays closer or farther from the target
We assumed the temperature is 20°C, so the velocity of ultrasound in the air is 343 m/s. The travel distance is very short so the travel time is little affected by temperature.
Autonomous mobile robots require many kinds and large numbers of sensors to measure the distance, velocity, and scale of objects for environment recognition.
The N.I. Starter Kit 2.0 robot with the presented US sensor will be used in the field of B.C.I. (Brain – Computer Interface), where it will be controlled by the user's brain activity (the user's will). As this case study shows, some “emergency stop” functions/commands issued by the user's brain will need to be introduced, because letting the robot move around relying only on the US sensor's measurements is not safe enough: during long use, an unattended robot will certainly hit something.
An observation of ours is that US waves can be used in security applications, instead of laser / ultraviolet / infrared light. US waves can be used as a “sound barrier” – that means US can be used for counting persons or objects or for surveillance.
Chapter 5.: Control and learning methods used in robots in BCI applications
In BCI applications robots must execute the orders they receive. These orders may be based on a program – the user selects what he or she wishes from a list of (pre)written programs – in which case the BCI system is an active system. Another method is full control by the user, in which case the system effectively behaves as a simple executor – a passive “behaving” BCI system. The third method is the hybrid system: the user selects what he wants and the robot starts executing the program, but the user can stop the execution at any time or change the program's parameters “on the fly”.
Active BCI systems can be based on predefined programs or they can be adaptive. If the BCI system is adaptive, Bayesian networks or neural networks can be used.
Bayesian networks are well suited to decision problems under uncertainty. They are used in applications that require decision making in a very short time. A viable alternative to Bayesian networks are neural networks: mathematical constructions that optimize by minimizing errors, and whose application/implementation requires relatively low effort.
Neural networks require two phases:
• training (learning);
• simulation.
The output of a neuron can be connected to one or more inputs of neurons in the next layer. There are numerous ways of interconnecting neurons, but two main classes of architectures can be identified:
• feedforward network architecture – the information propagates from input to output only;
• recurrent networks / feedback network architecture – the information propagates from input to output and from output back to input.
Figure 5.1: Feedforward (left) and feedback (right) network architectures
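A feedforward pass, in which information propagates from input to output only, can be sketched as follows (an illustrative Python example under our own conventions; each neuron row stores its input weights followed by a bias):

```python
import math

def sigmoid(s):
    """The typical squashing activation function."""
    return 1.0 / (1.0 + math.exp(-s))

def feedforward(layers, inputs):
    """Propagate the inputs through the network, layer by layer.
    `layers` is a list of weight matrices; each row of a matrix holds
    one neuron's input weights, with the bias as the last entry."""
    activations = list(inputs)
    for weight_matrix in layers:
        activations = [
            sigmoid(sum(w * a for w, a in zip(row[:-1], activations)) + row[-1])
            for row in weight_matrix
        ]
    return activations
```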
A major drawback of neural networks is the lack of a theory specifying the network type, the number of elementary neurons and the interconnection method. Several “learn and grow” techniques exist, but they are still under research.
Artificial neural networks with feedforward and backpropagation algorithms have been used: the feedforward algorithm computes the output for a specific input pattern, while the backpropagation algorithm is used for training the network.
The main distinguishing feature of neural networks is their ability to learn from interaction with the environment and to improve their performance over time, based on the number of attempts.
These algorithms fall into two major classes: supervised learning and unsupervised learning. Lately a third class has emerged: learning algorithms using a critic, inspired by experimental observations made on animals – they are of the reward/punishment type.
Chapter. 5.1.: Neural Networks
Neural Networks have become increasingly popular over the last decade as an alternative to standard statistical approaches for the classification of multispectral remote sensing data [44]. Contrary to statistical classifiers, neural networks do not rely on an “a priori” model of the data distributions. These systems are often used as black boxes. In order to obtain good results, various parameters need to be chosen carefully and correctly; examples include the network type (feedback or feedforward), size and architecture, training step size, learning algorithm, stopping criterion and data representation.
A Neural Network is considered in which there are input, hidden and output neurons (layers) and where all possible connections between these units are allowed, specifically including recursive connections and self-feedback connections; it can be viewed as a “general network with no hidden units and requirements on symmetry of the weight matrix, and the typical feedforward networks with hidden units” [45].
One type of network “sees” the nodes of a system as “artificial neurons”; these are called “artificial neural networks” (ANNs). An artificial neuron is essentially a computational model inspired by natural neurons. Natural neurons receive signals through synapses located on the dendrites or the membrane of the neuron [48]. If the received signals are strong enough (surpassing a certain threshold), the neuron is activated and emits a signal through its axon. This outgoing signal may be sent to another synapse and may activate other neurons [48].
The complexity of real neurons is highly abstracted when modeling artificial neurons [48]. An artificial neuron consists of inputs – like synapses – multiplied by weights (the strengths of the respective signals), which are then combined by a mathematical function that determines the activation or deactivation of the neuron. Another function computes the artificial neuron's output (its value depends on a certain threshold).
ANNs use artificial neurons in order to process information. Neural Networks offer much improved performance over conventional technologies in areas including: Machine Vision, Robust Pattern Detection, Virtual Reality, Signal Filtering, Data Segmentation, Data Compression, Text Mining, Data Mining, Artificial Life, Adaptive Control, Complex Mapping, Optimization and Scheduling and more [48].
Using Neural Networks to simulate various kinds of processes is nowadays recognized in multiple areas of research. One of these areas is biological modeling, which brings together cognitive scientists, biologically oriented AI researchers and neurophysiologists. The main research interest in this domain is to construct models of complete agents fully equipped with sensors, motor effectors and a set of behaviours. The aim of such studies is twofold: first, to work toward a neural network theory for autonomous agents; second, to increase our understanding of the mechanisms which underlie biological networks.
Basically, an ANN is a system: a structure that receives one or more inputs, processes the input data, and provides an output according to the input. Usually the input is a data array, which can be anything that can be represented as an array – an image file, a sound wave or any other kind of data. Once an input is fed into the NN and a corresponding desired response (or target response) is set at the output, an error is computed simply as the difference between the desired response and the real system response.
Artificial neural networks are among the newest signal processing technologies, and the field is highly interdisciplinary. An artificial neural network is developed through a systematic, step-by-step procedure, which optimizes a criterion commonly known as “the learning rule”.
The input/output training data is fundamental for these neural networks, because it conveys the information necessary to discover their optimal operating point. In addition, their nonlinear nature makes neural network processing elements a flexible system [48].
As a result, the structure of such a network is not imposed by external stimuli, but is capable of evolving from within. This evolving process is what we can recognize as the ontogenesis of a neural network.
This perspective on a Neural Network introduces some constraints: the goal is an autonomous system, so the learning must be unsupervised and incremental; in addition, the system must work with a continuous flow of inputs.
Adapting a Neural Network can be realised by modifying its weights, its architecture, or even its learning rules.
An ANN is a system based on replicating the operation of real biological neural networks; in other words, it is an emulation of the biological neural system.
Computing is nowadays truly advanced, but there are specific tasks that a program written for a common microprocessor simply cannot perform; even so, software based on the implementation of a neural network can be realised, with its own advantages and disadvantages.
Advantages of ANNs [48]:
A neural network is able to perform tasks that a linear program cannot realise;
When an element of the neural network fails, the system can continue without any problem because of their parallel nature;
A neural network learns and does not need to be (re)programmed;
It can be implemented in any application and without any problem.
Disadvantages of ANNs [48]:
The neural network needs to be trained before it can operate;
The architecture of a NN is different from the architecture of usual microprocessors, therefore it needs to be emulated;
Large neural networks require long processing times.
A typical Neural Network consists of a set of nodes grouped in layers: input, output and hidden layers. Between the nodes there is a set of directed connections, each with a weight. The nodes compute by integrating their inputs through an activation function and passing on their activation as output to the next nodes/neurons. Neural Networks compute by accepting external inputs at their input nodes and computing the activation of each node in turn.
Neural Networks are concerned firstly with the modeling of parts (neurons, even whole sectors) of the human brain and nervous system, described by/in a mathematical or computer based context.
Through the intense study of biological processes – like vision, perception and memory – complex networks were created to allow computer vision, efficient organisation of information and even natural language processing. Many of these tasks appear very simple to humans (when observed in the day to day living of biological organisms), but the mathematical and electrical models of these processes reveal how incredibly complex these systems really are.
The fundamental component of the Neural Network is the neuron, found biologically in the brains and nervous systems of living organisms. One of the earliest studies in the domain of modelling a neuron, which ultimately laid the foundations of the field, was conducted by McCulloch and Pitts in 1943; they argued that a neuron could be modeled as a threshold function applied to the sum of its weighted inputs.
Since the development of the McCulloch–Pitts “TLU” neuron, a vast number of models have been proposed; every model seeks to expand the application domain of Neural Networks and to represent the underlying biological mechanisms more faithfully and correctly. To permit any kind of modelling, Neural Networks are often developed as highly abstracted representations of a biological system. The biological behaviour may not be deterministic, but the language of representation of Neural Networks is usually mathematics, because it permits concise descriptions of a system and the ability to reason formally about its behavior.
Real (biological) neurons can be defined as follows: “Neurons are basic signaling units of the nervous system of a living being, in which each neuron is a discrete cell with several processes arising from its cell body” [48]. The biological neuron has four main structural regions: the cell body – also called the “soma” – has two kinds of offshoots: the dendrites, and the axon, which ends in pre-synaptic terminals. The cell body is the “heart” of the cell: it contains the nucleus and maintains protein synthesis. A neuron usually has many dendrites, which look like a tree structure and through which it receives signals from other neurons.
Biological neurons can be classified by their function in the neurological system into three categories: the first group, the sensory neurons, provide all the information for perception and motor coordination; the second group, the motor neurons, carry information to muscles and glands; the last group, the interneurons, contains all other types of neurons and has two subclasses: the “relay” or projection interneurons, usually found in the Central Nervous System – CNS (in the brain), which connect different parts of it; and the “local interneurons”, which are only used in local circuits.
Figure 5.2: A simple Neural Network and a simple Bottlenecked Network
A bottleneck network – in its simplest form – is a network with inputs and outputs both of size x, linked by several “hidden” middle layers, with the property that the inner layers have a size smaller than x.
Bottleneck networks have a wide variety of domains of uses in Neural Network systems, mostly in the field of image and signal encoding and noise reductions. An example bottleneck network is shown in Figure 5.2. The network implemented in the model is a simple multi-layer network.
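The bottleneck property described above can be expressed as a small check over a list of layer sizes (an illustrative helper of ours, not from the cited sources):

```python
def is_bottleneck(layer_sizes):
    """True when the first and last layers have the same size x and
    every hidden layer in between is strictly smaller than x."""
    if len(layer_sizes) < 3 or layer_sizes[0] != layer_sizes[-1]:
        return False
    x = layer_sizes[0]
    return all(hidden < x for hidden in layer_sizes[1:-1])

# An 8-4-2-4-8 encoder/decoder shape satisfies the property;
# an 8-9-8 shape does not.
```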
A node integrates its inputs with:
yi = fi ( Σj=1..n wij · xij − θi ) , (1)
where: yi is the output of node i;
fi is the activation function (typically a sigmoid function);
n is the number of inputs to the node;
wij is the weight of the connection between nodes i and j;
xij is the j-th input to node i;
θi is a threshold (or bias).
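Equation (1) maps directly onto a single-node computation (a minimal sketch, using a sigmoid as the typical activation function):

```python
import math

def node_output(weights, inputs, theta):
    """Equation (1): y_i = f_i( sum_j w_ij * x_ij - theta_i ),
    with a sigmoid as the activation function f_i."""
    s = sum(w * x for w, x in zip(weights, inputs)) - theta
    return 1.0 / (1.0 + math.exp(-s))
```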
Evolution can be applied at 3 levels: with Weights, with Architecture (connectivity: which nodes are connected to which ones, activation functions: how does the nodes compute outputs, plasticity: which nodes can be updated) and with Learning rules modification.
Most NN learning algorithms are based on gradient descent, including the best known: backpropagation (BP).
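The core gradient-descent update behind these algorithms can be sketched as follows (illustrative only; full backpropagation also computes the gradients via the chain rule):

```python
def gradient_descent_step(weights, gradients, learning_rate=0.1):
    """Move each weight a small step against its error gradient;
    repeated over many patterns, this drives the network's error down."""
    return [w - learning_rate * g for w, g in zip(weights, gradients)]
```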
Generalisation of a Neural Network is its ability to train with one data set and then successfully classify independent test sets. Continued training will increase the training-set accuracy, but there is the danger that the test-set accuracy decreases after a certain point. This is attributed to overfitting, since only individual training samples are available rather than the whole underlying distribution, which can cause distortions or displacements of the decision boundaries.
In [45] a new method is presented to dynamically adapt the topology of a Neural Network, using only the information in the learning set. The algorithm proposed in [45] eliminates unneeded connections from an initial fully connected network, and exhibits some characteristics of the so-called “critical period”. Preliminary experimental results were presented which proved the effectiveness of the proposed algorithm.
Chap. 5.1.1: Learning process of the Neural Networks
In the traditional approach, the topology of the Neural Network is chosen from a set of known models, and its dimensions are fixed with little reference to the particular problem to be solved. The solution is obtained by adapting the synaptic weights of the Neural Network. The efficiency of the network can be poor, and several configurations, and several dimensions of a chosen configuration, must be tried to obtain an acceptable solution.
If the topology is too small, the input-output mapping cannot be learned with satisfactory accuracy by the Neural Network. If the network is too large, it will – after a long training phase – learn the given set correctly, but will generalize poorly because it overfits the learning data. This is why the goal when training a network is to “find a topology large enough to learn the mapping and as small as possible to generalize correctly” [45]. A procedure based on training several topologies, without taking advantage of previous network learning results, is heavy and time consuming.
Figure 5.3.: The learning process, where:
Ig,t – Input at generation g and time t;
Og,t – Output;
Fg,t – Feedback (either Neural Network error or fitness).
A Neural Network with fewer nodes is less expressive and fits the training data more loosely; with more nodes it is more expressive and fits the data more closely. If there are too few nodes, the Neural Network underfits the data; if there are too many, it overfits the data.
Evolution has its own advantages: it does not require continuously differentiable functions, and the same method can be used for different types of network (feedforward, recurrent, or higher-order networks).
The architecture of a Neural Network has an important impact on the results: it can determine whether the network under- or over-fits the data. Designing it by hand is a hard, expert trial-and-error process.
In the training process, the architectures and problems used must be representative of the test set. To obtain general rules, the Neural Network has to be trained on general problems/architectures, not just one kind. To obtain a rule for a specific architecture or problem type, the system should train only on that specific architecture or problem type.
To construct a Neural Network system, as few principles as possible should be needed. These principles minimize the need for human preprogramming and introduce generality into the system.
For the design of a self-organizing, evolving, and minimal neural learning system three basic principles are required [47]: (1) Spatial chunking,
(2) Temporal chunking, and
(3) Learning modulation.
By (1) we reach the traditional achievements of neural networks. Principle (2) allows encapsulation (or categorization) of events and sequences, and, finally, (3) is necessary as a means to direct the learning. These mechanisms should operate all over the network, and hence not be isolated in different modules.
Figure 5.5: The evolution of architectures
Evolving network:
An evolving network must have a basis from which to evolve [47]. The task of evolution is to move this platform toward more and more complex structures. To achieve this, there must be: (a) a way for the network to search for new structures, and (b) a mechanism that can maintain the progress by moving the platform.
The search is realized through spontaneous neuronal activity and through the network architecture.
Principles of a network capable of evolving the complexity of its categories: the network is based on two fundamental ingredients, spontaneous activity (or noise) and correlation of activity. By noise, a constant level of spontaneous activity must be understood. It can seem strange to add noise to a system, because noise is usually something unwanted and harmful; here, however, noise has a central function in the network. If the noise is not added, the network enters a state of “silence” and no activity takes place.
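The role of spontaneous activity can be sketched with a single leaky unit whose activation decays each step: without injected noise it falls into “silence”, while a constant noise level sustains activity. The decay and noise parameters below are illustrative assumptions, not values taken from [47].

```python
import random

def run_unit(steps, noise_level, decay=0.5, seed=0):
    """Iterate a leaky unit: each step the activation decays, then a
    small random amount of spontaneous activity (noise) is added."""
    rng = random.Random(seed)
    a = 1.0  # initial activation
    for _ in range(steps):
        a = decay * a + rng.uniform(0.0, noise_level)
    return a

silent = run_unit(50, noise_level=0.0)   # decays toward zero: "silence"
active = run_unit(50, noise_level=0.2)   # noise sustains a base activity level
print(silent, active)
```

Without noise the activation shrinks geometrically toward zero; with noise it fluctuates around a nonzero level, which is exactly the sustained spontaneous activity the search process needs.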
Chap. 5.1.2: The mathematical model
Three basic components must be taken into account when constructing an artificial functional model of the biological neuron:
1) Synapses of the biological neuron are modeled as weights. For an artificial neuron, this weight is a number, and represents the synapse; a negative weight means an inhibitory connection, while positive values show excitatory connections.
2) The following component of the neural model represents the actual activity of the neuron cell: all the inputs are summed together, modified by their weights. This activity is referred to as a “linear combination”.
3) An activation function controls the amplitude of the output, e.g.: an acceptable range of output of the neurons is usually between 0 and 1, or – in some cases – it could be -1 and 1.
Figure 5.6: The mathematical model of a neuron
From this model, the internal activity of the neuron can be shown to be:
vk = Σ (j = 1 … m) wkj · xj		(X)
where xj are the inputs and wkj are the corresponding synaptic weights of neuron k.
The output of the neuron, yk, will be the outcome of the activation function on the value of vk.
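A minimal sketch of this neuron model in Python, assuming a logistic sigmoid as the activation function (the weights and inputs below are illustrative):

```python
import math

def neuron(inputs, weights):
    # 1) synapses as weights: negative = inhibitory, positive = excitatory
    # 2) linear combination of the weighted inputs (the internal activity vk)
    v = sum(w * x for w, x in zip(weights, inputs))
    # 3) the activation function squashes the output yk into (0, 1)
    return 1.0 / (1.0 + math.exp(-v))

# The inhibitory second weight pulls the activation down:
y = neuron([1.0, 1.0], [0.8, -0.5])
print(round(y, 3))  # sigmoid(0.3) ≈ 0.574
```

The weighted sum plays the role of vk in equation (X), and the returned value is the output yk produced by the activation function.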
Activation functions: the activation function acts as a squashing function, such that the output of a neuron in an NN lies between certain values (0 and 1, or -1 and 1).
In general, there are three types of activation functions, denoted by Φ (.):
the Threshold Function, which takes the value 0 if the summed input v is less than a certain threshold, and the value 1 if the summed input is greater than or equal to the threshold.
Φ(v) = 1, if v ≥ 0;  Φ(v) = 0, if v < 0		(X+1)
the Piecewise-Linear function: it can take the values 0 or 1, but it can also take values between 0 and 1, depending on the amplification factor in a certain region of linear operation.
Φ(v) = 1, if v ≥ +1/2;  Φ(v) = v + 1/2, if −1/2 < v < +1/2;  Φ(v) = 0, if v ≤ −1/2		(X+2)
the sigmoid function: its range is between 0 and 1, but sometimes it is useful to use the -1 to 1 range; an example of a sigmoid function with that range is the hyperbolic tangent function.
Φ(v) = 1 / (1 + exp(−a·v)), where a is the slope parameter		(X+3)
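The three activation functions described above can be sketched as follows; the unit slope and the ±1/2 breakpoints of the piecewise-linear function are the conventional illustrative choices, not values prescribed by the source.

```python
import math

def threshold(v):
    # 1 once the summed input reaches the threshold (here 0), else 0
    return 1.0 if v >= 0.0 else 0.0

def piecewise_linear(v):
    # saturates at 0 and 1, linear (amplification factor 1) in between
    if v >= 0.5:
        return 1.0
    if v <= -0.5:
        return 0.0
    return v + 0.5

def sigmoid(v, a=1.0):
    # smooth squashing into (0, 1); `a` is the slope parameter
    return 1.0 / (1.0 + math.exp(-a * v))

for f in (threshold, piecewise_linear, sigmoid):
    print(f.__name__, f(-2.0), f(0.0), f(2.0))
```

All three squash an unbounded summed input into the [0, 1] range; only the smoothness of the transition differs.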
The artificial neural networks described here are all variations of the parallel distributed processing (PDP) idea [48]. Each neural network’s architecture is based on very similar building blocks, which perform the processing.
Processing units
Each unit (neuron) performs a relatively simple job: it receives input from neighbours or from external sources and uses it to compute an output signal, which is propagated to other units (the next hidden layer or the output layer). A second task of the neurons is the adjustment of the weights. The NN system is inherently parallel, because many units can carry out their computations at the same time.
During operation, the neurons can be updated synchronously or asynchronously. With synchronous updating, all units update their activation at the same time (simultaneously); with asynchronous updating, each unit has a (usually fixed) probability of updating its activation at a time t, and usually just one unit does so at a time. In some cases the asynchronous model has advantages.
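The difference between the two update modes can be seen on a pair of mutually inhibitory threshold units (the weights and thresholds below are illustrative assumptions): synchronous updating makes this pair oscillate, while asynchronous updating lets it settle.

```python
# Each unit receives -1 times the activation of the other unit.
W = [[0, -1], [-1, 0]]

def step(state, i):
    net = sum(W[i][j] * state[j] for j in range(len(state)))
    return 1 if net >= 0 else 0

def synchronous(state):
    # every unit computes its new activation from the OLD state
    return [step(state, i) for i in range(len(state))]

def asynchronous(state):
    # units update one at a time, each seeing the freshest values
    state = list(state)
    for i in range(len(state)):
        state[i] = step(state, i)
    return state

s = [1, 1]
print(synchronous(synchronous(s)))   # back to [1, 1]: the pair oscillates
print(asynchronous(asynchronous(s))) # settles at [0, 1] and stays there
```

This is one of the cases alluded to above in which the asynchronous model has an advantage: it reaches a stable state that the synchronous schedule never finds.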
Based on their pattern of connections, NNs can be divided into:
1) feed-forward neural networks, where the data flow from input to output units is strictly feed-forward. The data processing can extend over multiple (layers of) units, but there are no feedback connections.
2) recurrent neural networks, which do have feedback connections, so that the dynamical properties of the network become important. In some cases, the activation values of the units undergo a relaxation process, such that the neural network evolves to a stable state in which the activations no longer change. In other cases, the changes of the activation values of the output neurons are significant, and this dynamical behaviour constitutes the output of the neural network.
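The relaxation process described in 2) can be sketched with a tiny Hopfield-style recurrent network (an illustrative example, not a model from the source): units are updated until the activations stop changing, i.e. a stable state is reached.

```python
def sign(x):
    return 1 if x >= 0 else -1

stored = [1, -1, 1, -1]
n = len(stored)
# Hebbian weights for the single stored pattern, no self-connections:
W = [[0 if i == j else stored[i] * stored[j] for j in range(n)]
     for i in range(n)]

def relax(state, max_sweeps=10):
    """Asynchronously update units until no activation changes."""
    state = list(state)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            new = sign(sum(W[i][j] * state[j] for j in range(n)))
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:  # activations no longer change: stable state
            break
    return state

noisy = [1, 1, 1, -1]  # the stored pattern with one unit flipped
print(relax(noisy))    # recovers the stored pattern [1, -1, 1, -1]
```

The stable state the network relaxes into is the stored pattern, which is why such recurrent networks can act as content-addressable memories.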
Chapter. 5.2.: NeuroSolutions program
The NeuroSolutions program can be found at http://www.neurosolutions.com/, where the customer can choose between several products of the NeuroSolutions company, as follows:
NeuroSolutions Infinity:
“NeuroSolutions Infinity neural network software offers reliable, scalable, distributed processing of large data across clusters of computers to create highly accurate predictive models for data mining and analysis. It is designed to scale up from a single computer to thousands of machines, each offering local computation.”
NeuroSolutions Products:
NeuroSolutions:
“NeuroSolutions is an easy-to-use neural network software package for Windows. It combines a modular, icon-based network design interface with an implementation of advanced artificial intelligence and learning algorithms using intuitive wizards or an easy-to-use Excel™ interface.”
NeuroSolutions for MATLAB:
“The NeuroSolutions for MATLAB neural network toolbox is a valuable addition to MATLAB's technical computing capabilities allowing users to leverage the power of NeuroSolutions inside MATLAB. The toolbox features 16 neural models, 7 learning algorithms and a host of useful utilities integrated in an easy-to-use interface, which requires "next to no knowledge" of neural networks to begin using the product.”
Add-on Products:
NeuroSolutions Accelerator
“NeuroSolutions, NeuroSolutions Infinity and NeuroSolutions for MATLAB neural network software can now harness the massive processing power of multi-core CPU's and graphics cards (GPU's) from AMD, Intel and NVIDIA through parallel computing with the NeuroSolutions Accelerator add-on.”
NeuroSolutions Infinity QuickDeploy
“QuickDeploy is a revolutionary program that takes a neural network created with NeuroSolutions Infinity and automatically generates a definition file and associated Windows-based Dynamic Link Library files (DLL), which can then be embedded into your own Visual Basic .NET, Visual C#, and Microsoft Excel.”
Custom Solution Wizard
“The Custom Solution Wizard is an add-on that takes a neural network created with NeuroSolutions and automatically generates and compiles a Windows-based Dynamic Link Library (DLL), which can then be embedded into your own Visual Basic .NET, Microsoft Excel, Microsoft Access, Visual C# or Visual C++ application.”
C++ Code Generation for Windows
“The C++ Code Generation for Windows along with NeuroSolutions Pro allows you to generate ANSI C++ compliant code for the neural networks you create within NeuroSolutions.”
C++ Code Generation for Non-Windows
“The C++ Code Generation for Non-Windows along with NeuroSolutions Pro allows you to generate ANSI C++ compliant code for the neural networks you create within NeuroSolutions that can be compiled for non-Windows based systems such as Linux, Unix, Sun to name a few.”
Other Products:
Neural Network Courses
“NeuroDimension teaches an interactive course on neural networks and how this powerful technology can best be utilized within the NeuroSolutions platforms. During the course, each student is provided with his/her own computer and much of the learning takes place by actually performing the steps that the instructor describes.”
Interactive Book
“The interactive book "Neural and Adaptive Systems: Fundamentals Through Simulations" by Principe, Euliano, and Lefebvre, is a softback book that combines theory with practice with the highly graphical simulation environment of NeuroSolutions to produce a revolutionary teaching tool. The book contains over 200 interactive experiments in NeuroSolutions to elucidate the fundamentals of neural networks and adaptive systems.”
TradingSolutions
“TradingSolutions is a software product that helps you make better trading decisions by combining traditional technical analysis with state-of-the-art artificial intelligence technologies. Use any combination of financial indicators in conjunction with advanced neural networks and genetic algorithms to create trading models that are remarkably effective.”
Trader68
“Trader68 is a fully automated order routing system which allows you to trade your real-time signals live – as soon as they are generated! Trade orders are routed from your system to your broker without any action required on your part.”
OptiGen Library
“OptiGen Library provides a general purpose API for genetic algorithm design. OptiGen Library for COM is an ActiveX component which can be used from languages capable of calling ActiveX components, such as Visual Basic and VBA in Microsoft Office products. OptiGen Library for C++ is a C++ library compiled for use from Visual C++ versions VC6, VS2003, VS2005, VS2008, and VS2010 Beta 2. OptiGen Library for .NET is a .NET component using .NET Framework 3.5.”
All the descriptions and generic program images above are taken from http://www.neurosolutions.com/products/.
The program installed by us is the Free Trial Version of the NeuroSolutions 7, downloaded from http://www.neurosolutions.com/neurosolutions/.
Figure 5.6: NeuroSolutions 7
In its description of how the program works, it is written: “Neural networks are long, complicated mathematical equations and NeuroSolutions is designed to make the technology easy and accessible to both novice and advanced neural network developers. There are three basic phases in neural network analysis: training the network on your data, testing the network for accuracy and making predictions/classifying from new data. Only the Express Builder in the NeuroSolutions Excel interface can accomplish all of this automatically in one simple step!”
This program is compatible with Excel Interface and Excel files: “With NeuroSolutions Excel interface, it has never been easier to get started quickly in solving your problem. The Excel interface in NeuroSolutions provides an easy-to-use and intuitive interface for users to easily setup a simulation that automatically builds, trains and tests multiple neural network topologies and generates an easy to read report of the results including the best performing model.”
NeuroSolutions can be upgraded/extended later: “NeuroSolutions features several add-ons for neural network deployment and training speed improvements through parallel computing with NVDIA CUDA™ and OpenCL™ gpu processing.
NeuroSolutions Accelerator enables NeuroSolutions and NeuroSolutions Infinity to harness the massive processing power of multi-core processors and graphics cards (GPU's) from AMD, Intel and NVIDIA through parallel computing. It enables training time improvements from hours to minutes when compared to traditional processors on neural networks using Levenberg-Marquardt.
NeuroSolutions also features robust deployment options that will allow you to embed your custom neural network solution into your own application. The easiest and most popular method is through the Custom Solution Wizard which encapsulates the neural network into a Windows DLL (Dynamic Link Library) and embedding it into a sample Excel, Access, Visual Basic, Visual C++ application or even an ASP webpage.
Alternatively, we also offer C++ Code Generation for Windows or for All Platforms which allows you to generate ANSI C++ compliant code which can be compiled on Windows or other platforms such as Linux.”
NeuroSolutions also has a “Student version”:
The Student version of NeuroSolutions provides students and faculty an inexpensive entry point into neural networks using NeuroSolutions, with all of the features and capabilities of the base-level NeuroSolutions. There are a few small differences between the Student version and the professional version of NeuroSolutions, including the indication that the software is for "non-commercial use".
With the Student version of NeuroSolutions you can excel in your course work, have fun with projects and build important career skills to help you in the work force.
“NeuroSolutions Infinity is the easiest, most powerful neural network software of the NeuroSolutions family. It streamlines the data mining process by automatically taking care of the entire neural network development process – everything from accessing, cleaning, and arranging your data, to intelligently trying potential inputs, preprocessing, and network types, to selecting the best neural network and verifying the results.”
“NeuroSolutions Infinity is the next generation of NeuroSolutions. It will automatically search through the raw input variables and preprocess them using up to 50 different mathematical functions, in conjunction with intelligently searching through up to 34 neural network topologies with varied neural network components and parameters. And, it does all of this using as much processing power as you want to give it – starting with all of your computer's processing cores and up to one additional computer, and expandable to include many more computers and even graphics cards.”
NeuroSolutions installed:
After installing the downloaded free trial version of NeuroSolutions, the shortcut and the program's starting image are as shown in Figures 5.7 and 5.8:
Figure 5.7: The NeuroSolutions shortcut on the Desktop
Figure 5.8: The NeuroSolutions’s starting image (it’s main menu)
Conclusions
This thesis will be devoted to a problem of great interest in the current environment: the goals of increasing the quality, reliability and classification performance of the B.C.I. systems in use. It is clear that interest has risen in helping people who suffer from the ”locked-in” or similar syndromes, which have disastrous consequences for the living conditions of these ill people. In addition, because of its novelty, some healthy people will also embrace this technology, out of curiosity or because of the very early existence of games controlled by B.C.I. systems.
In this project of realising a B.C.I. system, neural networks are a key component. Neural networks provide better predictions than conventional parametric methods and are easier to use than recalibration methods. Estimates made by the neural network method are further improved by "combining" data through an arithmetic average, instead of the traditional approach of using group means. Moreover, neural network estimates are not dependent on previously known models.
In conclusion, the research results provided in this thesis can be used to increase the quality and reliability of B.C.I. systems.
References
V. R. Pavitrakar, “Survey of Brain Computer Interaction”, in International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, Vol. 2, Issue 4, April 2013.
H. J. Hwang, S. Kim, C. H. Im, “EEG-Based Brain-Computer Interfaces: A Thorough Literature Survey”, Intl. Journal of Human–Computer Interaction, 2013, pp. 814–826.
B. Graimann, B. Allison, G. Pfurtscheller, “Brain-Computer Interfaces”, The Frontiers Collection, Springer, 2010, pp. 31.
L. Bi, X. A. Fan, Y. Liu, “EEG-Based Brain-Controlled Mobile Robots: A Survey,” IEEE Transaction on human-machine systems, vol. 43, no. 2, March 2013, pp. 161.
V. Vashisht, Dr. T. V. Prasad, Dr. S. V. A. V. Prasad, “Technology boon: EEG based brain computer interface – a survey,” International Journal of Computer Science and Mobile Computing, IJCSMC, Vol. 2, Issue. 4, April 2013, pp. 447 – 454
A. Nijholt, et al., “Brain–computer interfacing for intelligent systems,” IEEE Intell. Syst., vol. 23, no. 3, May/Jun. 2008, pp. 72–79.
N. Weiskopf, et al., “Principles of a brain–computer interface (BCI) based on real-time functional magnetic resonance imaging (fMRI),” IEEE Trans. Biomed. Eng., vol. 51, no. 6, June 2004, pp. 966–970.
G. Pfurtscheller and F. H. Lopes da Silva, “Event-related EEG/MEG synchronization and desynchronization: Basic principles,” Clin. Neurophysiol., vol. 110, no. 11, Nov. 1999, pp. 1842–1857.
X. Perrin, R. Chavarriaga, F. Colas, R. Siegwart, and J. d. R. Millán, “Brain-coupled interaction for semi-autonomous navigation of an assistive robot,” Robot. Autonom. Syst., vol. 58, no. 12, 2010, pp. 1246–1255.
A. Ferreira, R. L. Silva, W. C. Celeste, T. F. Bastos, and M. S. Filho, “Human–machine interface based on muscular and brain signals applied to a robotic wheelchair,” J. Phys.: Conf. Ser., vol. 90, no. 1, 2007, pp. 1–8.
J. Jin, E. W. Sellers, Y. Zhanga, I. Dalyd, X. Wanga, A. Cichockic, “Whether generic model works for rapid ERP-based BCI calibration”, Journal of Neuroscience Methods, 2013, pp. 94-99
A. Nijholt, D. P. O. Bos, B. Reuderink, “Turning shortcomings into challenges: Brain–computer interfaces for games” Entertainment Computing vol. 1, 2009, pp. 85-94.
J. Jin, B. Z. Allison, X. Wanga, C. Neuper, ”A combined brain–computer interface based on P300 potentials and motion-onset visual evoked potentials,” Journal of Neuroscience Methods, 205, 2012, pp. 265-276.
I. Käthner, C. A. Ruf, E. Pasqualotto, C. Braun, N. Birbaumer, S. Halder, ”A portable auditory P300 brain–computer interface with directional cues,” Clinical Neurophysiology 124, 2013, pp. 327–338.
S. Fazli et al., “Enhanced performance by a hybrid NIRS-EEG brain computer interface,” NeuroImage 59, 2012, pp. 519-529.
R. B. Nagy, F. Popentiu, C. Tarca, “Survey of Brain Computer Interface Systems”, Dec. 2013 – Submitted for publication.
D. H. Lee, H. C. Kim, “Genetic Relationship between Ultrasonic and Carcass Measurements for Meat Qualities in Korean Steers”, Asian-Aust. J. Anim. Sci., Vol 17, No. 1., pp. 6-12, 2004.
M. Darmon, S. Chatillon, “Main Features of a Complete Ultrasonic Measurement Model: Formal Aspects of Modeling of Both Transducers Radiation and Ultrasonic Flaws Responses”, Open Journal of Acoustics 3, pp. 43-53, 2013.
L. P. Palma, “Ultrasonic Distance Measurer Implemented with the MC9RS08KA2”, Freescale Semiconductor, Application Note, Document Number: AN3481, Febr. 2008.
M. Ishihara, M. Shiina, S. Suzuki, “Evaluation of Method of Measuring Distance Between Object and Walls Using Ultrasonic Sensors”, Journal of Asian Electric Vehicles, Volume 7, Number 1, June 2009.
L. Liu, K. Funamoto, T. Hayase, “Numerical Experiment for Ultrasonic-Measurement – Integrated Simulation of Developed Laminar Pipe Flow Using Axisymmetric Model”, Journal of Biomechanical Science and Engineering, Vol. 3, No. 2, 2008.
E. G. Sarabia, J. R. Llata, S. Robla, C. T. Ferrero, J. P. Oria, “Accurate Estimation of Airborne Ultrasonic Time-of-Flight for Overlapping Echoes”, Sensors 2013, 13, pp. 15465-15488, Nov. 2013.
H. Saiki, Y. Marumo, L. Ruan, T. Matsukawa, Z.H. Zhan, Y. Sakata, “Examination of conditions in contact interface using ultrasonic measurement”, Archives of Materials Science and Engineering, Vol. 28., Issue 2, pp. 113-118, February 2007.
Y. B. Gandole, “Simulation and data processing in ultrasonic measurements”, Anadolu University Journal of Science and Technology, Vol.:12, No: 2, pp. 119-127, 2011.
J. S. Wang, Y. S. Huang, M. C. Wu, Y. Y. Lai, H. L. Chang, M. S. Young, “Quantification of Pre-parturition Restlessness in Crated Sows Using Ultrasonic Measurement”, Asian-Aust. J. Anim. Sci., Vol 18, No. 6., pp. 780-786, 2005.
X. Chen, C. Wu, “Ultrasonic Measurement System with Infrared Communication Technology”, Journal of computers, vol. 6, no. 11, pp. 2468-2475, November 2011.
E. M. Stringer, M. K. Stoskopf, T. Simons, A. F. O’Connell, A. Waldstein, “Ultrasonic Measurement of Body Fat as a Means of Assessing Body Condition in Free-Ranging Raccoons (Procyon lotor)”, International Journal of Zoology, Volume 2010, Article ID 972380, 2010.
P. Rodge, P.W. Kulkarni, “ARM 11 Based Advance Safety System in Vehicle”, IJESRT – International Journal of Engineering Sciences & Research Technology, pp. 167-172, July 2014.
F. G. R. de Oliveira, J. Anadia O. de Campos, A. Sales, “Ultrasonic Measurements In Brazilian Hardwood”, Materials Research, Vol. 5, No. 1, pp. 51-55, 2002.
A. K. Shrivastava, A. Verma, S. P. Singh, “Effect of variation of separation between the ultrasonic transmitter and receiver on the accuracy of distance measurement”, International Journal of Computer science & Information Technology (IJCSIT), Vol 1, No 2, pp. 19-28, November 2009.
O. Bischoff, N. Heidmann, J. Rust, S. Paul, “Design and Implementation of an Ultrasonic Localization System for Wireless Sensor Networks using Angle-of-Arrival and Distance Measurement”, Procedia Engineering 47, pp. 953 – 956, 2012.
W. Budiharto, A. Santoso, D. Purwanto, A. Jazidie, “A New Method of Obstacle Avoidance for Service Robots in Indoor Environments” , ITB J. Eng. Sci., Vol. 44, No. 2, pp. 148-167, 2012.
U. Pfeiffer, W. Hillger, “Spectral Distance Amplitude Control for Ultrasonic Inspection of Composite Components”, ECNDT 2006 – Mo.2.6.4, 2006.
J. S. Wang, M. C. Wu, H. L. Chang, M. S. Young, “Predicting Parturition Time through Ultrasonic Measurement of Posture Changing Rate in Crated Landrace Sows”, Asian-Aust. J. Anim. Sci., Vol. 20, No. 5, pp. 682 – 692, May 2007.
T. U. A. S. Kumar, J. Mrudula, “Advanced Accident Avoidance System for Automobiles”, International Journal of Computer Trends and Technology (IJCTT), Vol. 6 Num. 2, pp.79-83, Dec. 2013.
D. Andújar, M. Weis, R. Gerhards, “An Ultrasonic System for Weed Detection in Cereal Crops”, Sensors 2012, 12, pp. 17343-17357, 2012.
P. Palanichamy, V.S. Srinivasan, T. Jayakumar, V. Rajendran, “Microstructural Characterization of Fatigue and Creep-Fatigue Damaged 316L(N) Stainless Steel Through Ultrasonic Measurements”, Procedia Engineering 55, pp. 154 – 159, 2013.
N.A. Tanna, M.V. Ambiye, V.A. Tanna, H.A. Joshi, “Ultrasonic Measurement of Normal Splenic Size in Infants and Children in Paediatric Indian Population,” Natl J Community Med., 3(3), pp. 529-533, 2012.
A. Afaneh, S. Alzebda, V. Ivchenko, A. N. Kalashnikov, “Ultrasonic Measurements of Temperature in Aqueous Solutions: Why and How”, Physics Research International, Volume 2011, Article ID 156396, 2011.
F. Yingfang, H. Zhiqiang, L. Jianglin, ”Ultrasonic Measurement of Corrosion Depth Development in Concrete Exposed to Acidic Environment”, International Journal of Corrosion, Volume 2012, Article ID 749185, 2012.
N. Chigareva, P. Zinin, D. Mounier, A. Bulou, A. Zerr, L.C. Ming, V. Gusev, “Laser ultrasonic measurements in a diamond anvil cell on Fe and the KBr pressure medium”, 2nd International Symposium on Laser-Ultrasonics – Science, Technology and Applications IOP Publishing, Journal of Physics: Conference Series 278, 2011.
http://www.parallax.com/sites/default/files/downloads/28015-PING-Sensor-Product-Guide-v2.0.pdf, accessed in 02.21.2013.
http://www.ni.com/white-paper/11564/en/, accessed in 02.12.2013.