AUTONOMOUS MOBILE ROBOT NAVIGATION
USING OPTICAL FLOW
1Caius Suliman, 1Florin Moldoveanu, 1Mihai Cernat
1Transylvania University of Brasov, Romania
Abstract
This paper presents an algorithm for vision-based navigation of a mobile robot in a virtual reality scenario. As input for the proposed algorithm we use a sequence of images captured from the virtual reality environment. Then, with the help of Horn and Schunck's optical flow algorithm, information is extracted from the sequence of images in order to be used in the navigation algorithm. As a result of the computation of the optical flow we can gather very useful information about the robot's environment, such as the disposition of obstacles, the time to collision and an estimate of the focus of expansion.
The navigation strategy consists of two major steps: first, we compute the optical flow and determine the time to collision of the robot with the obstacles in its path; second, a balance strategy is applied in order to avoid the obstacles so that the robot can navigate in the environment without any collision.
Key Words: mobile robot, autonomous navigation, optical flow, time to collision, virtual reality.
1. Introduction
Autonomous mobile robots are being seen in
more and more real-world applications like
surveillance, transportation, etc. The
navigation task basically consists of two
problems: the localisation problem and the
obstacle avoidance problem. We focus our
attention on solving the second one.
Many methods have been developed to find
moving objects in a sequence of images. Rett
and Dias [7] describe a methodology for
mobile robot navigation (the navigation task
implies the detection of objects in the robot's path), based on the log-polar transform of the image and optical flow. Mochizuki and Imiya [6] used the disparity of optical flow on spherical images to compute the robot's direction for
navigation. They used an omnidirectional
vision system that allows the robot to observe
the back view in which it has safely navigated without colliding with obstacles. In his paper, Duchon [3], inspired by insects that also use optical flow for obstacle detection and avoidance, proposes a method for maze navigation. This is a perception-based method
that uses optical flow to detect obstacles so
that the robot can take the right decision (turn
left or turn right). In their paper Caldeira and
Schneebeli [1] propose a system based on
optical flow and time to contact calculation, which enables a mobile robot to react in the
presence of obstacles when navigating in an
unstructured environment. They implemented
an algorithm for motion segmentation, based
on the calculation of the optical flow, which
produces the map of depths used by the robot to avoid obstacles. All the methods presented
here are applied for indoor navigation.
The aim of this work is the development of an
algorithm applied in the navigation of an autonomous mobile robot. The system's input
is a sequence of images captured from the
robot's camera while it is in motion. The
sequence of images is taken from a virtual reality environment in which the robot's view is simulated. The virtual robot tries to understand its
environment by extracting important features
from the sequence of images. Then it uses this
information as its guide for motion. The
adopted strategy for avoiding obstacles during
navigation is the balance between the left and
the right side optical flow vectors.
Section 2 of the paper presents the computation of the optical flow from the sequence of images. To determine the robot's orientation, it is necessary to compute the focus of expansion (FOE) in the image plane. Then, from the optical flow and from the FOE, the time to contact (TTC) is computed. Section 3 deals with the experimental implementation and the obtained results. Section 4 closes with a few conclusions.
2. Optical flow
Optical flow is the distribution of apparent
velocities of movement of brightness patterns
in an image. Optical flow can arise from
relative motion of objects and the viewer.
Consequently, optical flow can give important
information about the spatial arrangement of
the objects viewed and the rate of change of
this arrangement.
Most of the existing methods for motion estimation fall into four categories: correlation-based methods, energy-based methods, parametric model-based methods and differential methods. We have opted for a differential technique.
The optical flow can be computed from a
sequence of images by making assumptions
about the variations of the scene brightness. One such assumption is represented by the
following equation [4]:
E(x, y, t) = E(x + ∂x, y + ∂y, t + ∂t)        (2.1)

where E(x, y, t) represents the luminance of the pixel (x, y) at time t and (∂x, ∂y) represents the displacement occurring at pixel (x, y) during ∂t.
Performing a Taylor expansion limited to the first order, the result is the following:

E(x + ∂x, y + ∂y, t + ∂t) = E(x, y, t) + (∂E/∂x)∂x + (∂E/∂y)∂y + (∂E/∂t)∂t        (2.2)

For simplification we will use the following notations:

E_x = ∂E/∂x,   E_y = ∂E/∂y,   E_t = ∂E/∂t        (2.3)

u = ∂x/∂t,   v = ∂y/∂t        (2.4)

where E_x, E_y and E_t (see (2.3)) are the first partial derivatives of E with respect to x, y and t, and u and v (see (2.4)) are the optical flow components in the x and y directions. Combining (2.1) with (2.2) and dividing by ∂t yields the optical flow constraint equation:

E_x u + E_y v + E_t = 0

The constraint equation provides only the normal velocity component.
So we are only able to measure the component of optical flow that is in the direction of the intensity gradient. The system is underdetermined because we have one equation and two unknowns. To resolve this problem it is necessary to add additional constraints; Horn and Schunck [4] do so by imposing a global smoothness constraint on the flow field.
After the optical flow is computed, it is used for navigation decisions, such as
trying to balance the amount of left and right
side flow to avoid obstacles. If optical flow is
detected, then the robot should change the
forces produced by its effectors so as to
minimize this flow, according to a control law:

∆F = f(∆flow)        (2.5)
The change in the robot's internal forces is a function of the change in the optical flow. The optical flow contains information about the layout of surfaces, about the direction of the point of observation, called the focus of expansion, and about the time to contact.
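The flow itself is computed with Horn and Schunck's algorithm (the paper uses a Matlab implementation, see section 3). Purely as an illustration, the Python sketch below shows one common form of the Horn and Schunck iteration; the derivative kernels, the smoothness weight alpha and the iteration count are our own illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow between two grayscale frames.

    alpha  -- smoothness weight (illustrative value, not from the paper)
    n_iter -- number of iterative flow updates
    """
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)

    # Spatial (E_x, E_y) and temporal (E_t) derivatives from simple 2x2 kernels.
    kx = np.array([[-0.25, 0.25], [-0.25, 0.25]])
    ky = np.array([[-0.25, -0.25], [0.25, 0.25]])
    kt = np.array([[0.25, 0.25], [0.25, 0.25]])
    Ex = convolve(im1, kx) + convolve(im2, kx)
    Ey = convolve(im1, ky) + convolve(im2, ky)
    Et = convolve(im2, kt) - convolve(im1, kt)

    # Kernel computing the local flow average used in the update rule.
    avg = np.array([[1/12, 1/6, 1/12],
                    [1/6,  0.0, 1/6],
                    [1/12, 1/6, 1/12]])

    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Update derived from minimizing the constraint plus smoothness energy.
        num = Ex * u_bar + Ey * v_bar + Et
        den = alpha**2 + Ex**2 + Ey**2
        u = u_bar - Ex * num / den
        v = v_bar - Ey * num / den
    return u, v
```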
2.1 Estimation of the FOE
As one moves through a world of static objects, the visual world as projected on the retina seems to flow past. For a given direction of translational motion and direction of gaze, the world seems to be flowing out of one particular retinal point. Each direction of motion and gaze induces a unique FOE, which may be a point at infinity if the motion is parallel to the image plane. The exact determination of the FOE is not the focus of this paper, so we only estimated the position of the FOE from the optical flow. If we take a look at figure 1, we can see that the x coordinate of the FOE (on the horizontal) corresponds to the column where the magnitude of the x component of the optical flow has its minimum. In a similar manner we can determine the y coordinate of the FOE from the y component of the flow.
Fig.1 The FOE.
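A minimal sketch of this estimate is given below; averaging the flow magnitudes over columns and rows before taking the minimum is our own assumption, since the paper only states that the minimum of the component magnitude is used.

```python
import numpy as np

def estimate_foe(u, v):
    """Rough FOE estimate from a dense flow field (u, v).

    The x coordinate is taken as the column where the mean |u| is smallest,
    and the y coordinate as the row where the mean |v| is smallest.
    """
    x_foe = int(np.argmin(np.abs(u).mean(axis=0)))  # minimum over columns
    y_foe = int(np.argmin(np.abs(v).mean(axis=1)))  # minimum over rows
    return x_foe, y_foe
```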
Our entire application was developed in the Matlab environment. In our application we have divided the captured image into three sub-images and we have determined the FOE for each one. The determined FOEs will be used in the step where we determine the time to contact for each of the sub-images. The FOEs for one frame can be seen in the next figure.
Fig. 2 The determined FOEs for the three sub-images.
In the above image (taken from the virtual reality environment), the green dot represents the FOE for the left sub-image, the red dot represents the FOE for the central sub-image and the blue dot represents the FOE for the right sub-image. The two magenta lines are drawn only to show the boundaries of the three sub-images more clearly.
2.2 Estimation of the TTC
A primary use of optical flow in robot vision is collision detection, in particular time to contact (TTC), also known as time to collision or time to crash. The theory of TTC was first introduced by Lee [5]. Lee conducted many studies on humans and birds showing evidence that TTC is a critical component used in the timing of motion and actions. In our case there are three TTCs that need to be computed. In a first step we have determined the optical flow for the three sub-images. By using the determined optical flow and the estimated FOEs, we can now determine the TTC for each of the sub-images. This can be done with the following equation:
TTC = √((x_C − x_FOE)² + (y_C − y_FOE)²) / √(u² + v²)        (2.6)
where x_C and y_C are the coordinates of the center of the region under consideration, u and v are the components of the optical flow vector in that region, and x_FOE and y_FOE are the coordinates of the FOE in the image.
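A sketch of equation (2.6) for one sub-image is shown below; aggregating u and v through their mean magnitude over the sub-image is our own assumption, as the paper does not state how the flow vector of a region is obtained.

```python
import numpy as np

def time_to_contact(u, v, foe, center):
    """TTC (in frames) for one sub-image, following equation (2.6).

    u, v   -- optical flow components over the sub-image
    foe    -- (x_FOE, y_FOE) estimated for that sub-image
    center -- (x_C, y_C), the center of the sub-image
    """
    dist = np.hypot(center[0] - foe[0], center[1] - foe[1])  # distance to the FOE
    speed = np.mean(np.hypot(u, v))                          # mean flow magnitude
    return dist / max(speed, 1e-6)                           # guard against zero flow
```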
Fig. 3 Image taken before impact.
Fig. 4 The TTCs for the three sub-images.
Figure 3 presents a scenario where the robot is set on a collision course with an object from the environment (a crate). The system warns us when the TTC is under a predefined threshold. The threshold, in our case, was set to 7. The measurement unit for the TTC is frames remaining to contact. Figure 4 presents the three TTCs. The green plot corresponds to the TTC for the left sub-image, the red one corresponds to the TTC for the center sub-image and the blue one corresponds to the TTC for the right sub-image. From the images it can be seen that when the robot approaches the crate, the left and the center sub-images are the closest to the crate and their TTCs (green line and red line) are decreasing. The TTC for the right sub-image is almost constant, because the robot is moving almost parallel to the wall. When any of the TTCs drops below the predefined threshold, the system warns us that the robot needs to take action in order to avoid the obstacle.
2.3 The balance strategy
The main idea behind the adopted strategy [8]
is that when the robot is translating, closer
objects give rise to faster motion across the
retina than farther objects. It also takes
advantage of perspective in that closer objects
also take up more of the field of view, biasing the average towards their associated flow. For
this purpose the captured image from the
virtual reality was divided into two halves. The
control law that we have used is formulated
by:
∆(F_L − F_R) = (Σ w_L − Σ w_R) / (Σ w_L + Σ w_R)        (2.7)
where ∆(F_L − F_R) represents the difference in forces on the two sides of the robot's body, and Σ w is the sum of the magnitudes of the optical flow in one half of the robot's visual field. If the optical flow in one half is bigger than the one in the other half, the robot must turn away from the half with the greater flow.
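The following sketch restates equation (2.7) in Python; the small constant added to the denominator is our own guard against a zero flow field and is not part of the paper.

```python
import numpy as np

def balance(u, v):
    """Balance term of equation (2.7): normalized difference between the summed
    flow magnitudes of the left and right halves of the image."""
    mag = np.hypot(u, v)
    half = mag.shape[1] // 2
    w_left = mag[:, :half].sum()
    w_right = mag[:, half:].sum()
    # A positive value means more flow on the left half, so the robot should
    # turn away from the left (i.e. turn right), and vice versa.
    return (w_left - w_right) / (w_left + w_right + 1e-6)
```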
3. Experimental evaluation
The first step in applying the above theory was to create the virtual world in which the robot should navigate without colliding with obstacles. The virtual environment was created with the help of Matlab's toolbox for virtual reality. The toolbox makes it possible not only to create or to visualize a virtual world, but also to capture it into an image from a specified position, orientation and rotation.
Fig. 5 The virtual environment of the robot.
In the created virtual environment we have placed some randomly positioned immovable obstacles, such as crates.
In the following experiment we have tested the virtual robot's capacity to navigate through the virtual environment without colliding with obstacles, using the adopted balance strategy. The block diagram of the navigation algorithm is presented in figure 6.
Fig. 6 The obstacle avoidance algorithm.
For computing the optical flow from two successive camera images, we have used a Matlab implementation of Horn and Schunck's optical flow algorithm.
Based on this optical flow field, we have estimated the FOEs for the three sub-images. Next, with the help of the estimated FOEs and the determined optical flow, we determined the TTC for each of the sub-images. If the TTC drops under the predefined threshold, the robot knows that there is an obstacle in front and it needs to apply the balance strategy. For this purpose the flow magnitudes of the right and left halves of the captured image are calculated. The computed flow magnitudes of the two halves are then used to apply the balance strategy: if the right flow is larger than the left flow, the robot turns left, otherwise it turns right.
First, if one of the TTCs is below the predefined threshold, the robot will rotate by a 30° angle and turn away from the side with the greater flow (see fig. ). If two or all of the TTCs are below the threshold, the robot will rotate by a 120° angle. This is presented in the following figures.
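The decision rule described above can be summarized by the hypothetical sketch below; the function interface and the sign convention for the turn angle are our own assumptions for illustration.

```python
def choose_turn(ttc_left, ttc_center, ttc_right, balance_value, threshold=7.0):
    """Turning decision from the three TTCs and the balance term of (2.7).

    threshold      -- TTC threshold in frames (the paper uses 7)
    balance_value  -- positive when the left half carries more flow
    Returns an angle in degrees: 0 means keep going straight, positive means
    turn right (away from the side with the greater flow), negative means
    turn left.
    """
    below = sum(ttc < threshold for ttc in (ttc_left, ttc_center, ttc_right))
    if below == 0:
        return 0.0                       # no obstacle close enough
    angle = 30.0 if below == 1 else 120.0
    return angle if balance_value > 0 else -angle
```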
Figure 7 presents the case in which the robot encounters an obstacle like the one in figure 3. In the TTC plot it can be seen that the TTC for the left sub-image was the one under the predefined threshold; the robot successfully applied the balance strategy and turned away from the side with the greater flow. Next, the robot was heading straight towards the wall. The TTC corresponding to the right sub-image decreases, and when it drops below the threshold the robot needs to apply the balance strategy again. In figure 7 we have also presented the value of the optical flow on the two halves of the image, the current frame, and the value of ∆(F_L − F_R) together with its plot.
4. Conclusion
In this paper we describe how, with the help of optical flow and the use of a strategy called the "balance strategy", a virtual robot received the ability to avoid obstacles in a virtual environment. The main goal is the detection of objects close to the robot based on the information of the movement of the image brightness. The experimental results have shown that the robot is effectively able to avoid the immovable obstacles based only on the information from the optical flow, the FOE and the TTC. Improvement of the adopted method is possible by incorporating other sensors on the robot, such as sonar, infrared, etc., working in collaboration with the camera already mounted on the robot.
5. Acknowledgements
This paper is supported by the Sectoral
Operational Programme Human Resources
Development (SOP HRD), financed from the
European Social Fund and by the Romanian
Government under the contract number
POSDRU/6/1.5/S/6.
6. References
[1] Caldeira, E.M.O., Schneebeli, H.J.A., 2007. An Optical Flow-Based Sensing System for Reactive Mobile Robot Navigation. Revista Controle & Automação, Vol. 18 (3), September 2007, pp. 265-277.
[2] Duchon, A.P., Warren, W.H., Kaelbling, L.P., 1998. Ecological Robotics. Adaptive Behaviour, Vol. 6 (3-4), pp. 473-507.
[3] Duchon, A.P., 1996. Maze Navigation Using Optical Flow. In: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior, MIT Press, September 1996, pp. 224-232.
[4] Horn, B.K.P. & Schunck, B.G., 1981. Determining Optical Flow. Artificial Intelligence, Vol. 17, pp. 185-203.
[5] Lee, D.N. & Young, D.S., 1985. Visual Timing of Interceptive Action. Brain Mechanisms and Spatial Vision, Vol. 19, pp. 1-30.
[6] Mochizuki, Y., Imiya, A., 2008. Featureless Visual Navigation Using Optical Flow of Omni-directional Image Sequence. In: Workshop Proceedings of Simulation, Modelling and Programming for Autonomous Robots, Venice (Italy), November 2008, pp. 307-318.
[7] Rett, J., Dias, J., 2004. Autonomous Robot Navigation – A Study Using Optical Flow and Log-Polar Image Representation. In: Proceedings of the Colloquium of Automation, Salzhausen (Germany), November 2004.
[8] Souhila, K., Karim, A., 2007. Optical Flow Based Robot Obstacle Avoidance. International Journal of Advanced Robotic Systems, Vol. 4 (1), pp. 13-16.
[9] Tresilian, J., 1990. Perceptual Information for the Timing of Interceptive Action. Perception, Vol. 19, pp. 223-239.