ROMANIA
MINISTRY OF NATIONAL EDUCATION AND RESEARCH
THE ANNALS OF "DUNAREA DE JOS"
UNIVERSITY OF GALATI
Fascicle III
ELECTROTECHNICS
ELECTRONICS
AUTOMATIC CONTROL
INFORMATICS
ISSN 1221-454X
2009: Volume 32, Number 2
Table of Contents
Sabina MUNTEANU: Major Approaches to Medical Diagnosis and their Drawbacks
Adina COCU, Marian Viorel CRACIUN, Bogdan COCU: Learning the Structure of Bayesian
Network from Small Amount of Data
George PECHERLE, Cornelia GYORODI, Ovidiu BUKSA, Stefan MOGYOROSI: Solution for an
Improved WEB Server
Emilia PECHEANU, Diana STEFANESCU, Adrian ISTRATE: On Modeling the Instructional
Content in Computer Assisted Education
Ana DOBRESCU, Gheorghe MORARIU, Mihai MIRON: Bipolar Disk Microstrip Antenna.
Theoretical and Experimental Considerations
Mihai VLASE, Dan MUNTEANU: Ranking Patents for Better Search Capabilities
Cornelia TUDORIE: Intelligent Interfaces for Database Fuzzy Querying
Gabriel DANCIU, Iuliu SZEKELY: Methods of Object Tracking
Razvan SOLEA, Urbano NUNES, Adrian FILIPESCU, Daniela CERNEGA: Sliding Mode Control
for Trajectory Tracking of an Intelligent Wheelchair
P. Vijaya KUMAR, S. Rama REDDY: Simulation Results of Double Forward Converter
Marian GAICEANU: Speed Estimation Method for AC Drives
P. Usha RANI, S. Rama REDDY: Digital Simulation of a Dynamic Voltage Restorer System
Caius SULIMAN, Dan PUIU, Florin MOLDOVEANU: Single Camera Calibration in 3D Vision
THE ANNALS OF “DUNAREA DE JOS” UNIVERSITY OF GALATI
FASCICLE III, 2009, Vol.32, No.2, ISSN 1221-454X
ELECTROTECHNICS, ELECTRONICS, AUTOMATIC CONTROL, INFORMATICS
This paper was recommended for publication by Rustem Popa
SINGLE CAMERA CALIBRATION IN 3D VISION
Caius Suliman, Dan Puiu, Florin Moldoveanu
Department of Automation
Transilvania University of Brasov
Eroilor 29, 69121 Brasov, Romania
{caius.suliman, puiudan, moldof}@unitbv.ro
Abstract: Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when its parameters are known (i.e. principal distance, lens distortion, focal length, etc.). In this paper we deal with a single-camera calibration method and, with its help, we determine the intrinsic and extrinsic camera parameters. The method was implemented successfully in the Matlab programming and simulation environment.
Keywords: camera calibration, 3D vision, calibration pattern, intrinsic parameters,
extrinsic parameters.
1. INTRODUCTION
Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when its parameters are known (i.e. principal distance, lens distortion, focal length, etc.). For this purpose, many calibration algorithms have been developed in the computer vision community over the last twenty years. These algorithms are generally based on perspective camera models. Among the most popular is Roger Tsai's calibration algorithm (Horn, 2000), (Tsai, 1987). His algorithm is based on the pinhole model of perspective projection. The model proposed by Tsai assumes that some of the camera's parameters are given by the manufacturer, in order to reduce the number of parameters that have to be estimated initially. The algorithm requires n feature points (n > 8) per image and solves the calibration problem with a set of n linear equations based on the radial alignment constraint. A second-order radial distortion model is used, while no decentering distortion terms are considered. This method can be used with either a single image or multiple images of a planar or 3D calibration grid.
1 This paper is supported by the Sectoral Operational Programme Human Resources Development (SOP HRD), financed from the European Social Fund and by the Romanian Government under the contract number POSDRU/6/1.5/S/6.
Another important and very popular calibration method has been developed by Zhengyou Zhang (Zhang, 1998). His method requires a planar checkerboard grid to be placed in front of the camera at different orientations. The algorithm uses the extracted corner points of the checkerboard to calculate a projective transformation between the image points of the different images. The camera's intrinsic and extrinsic parameters are recovered using a closed-form solution, while the radial distortion terms are recovered with a linear least-squares solution. The final step is a non-linear minimization of the reprojection error that refines all the recovered parameters. Zhang's method is similar to the one proposed by Triggs (Triggs, 1998). Zhang's algorithm is the basis of some popular open-source implementations of camera calibration (e.g. Intel's OpenCV and the Camera Calibration Toolbox for Matlab).
In this paper we present the implementation of a camera calibration method based on the calibration procedure presented by Trucco and Verri (Trucco and Verri, 1998).
2. CAMERA CALIBRATION
The key idea behind calibration is to write the projection equations linking the known coordinates of a set of 3D points and their projections, and to solve for the camera parameters. In order to know the coordinates of some 3D points, camera calibration methods rely on one or more images of a calibration pattern: a 3D object of known geometry, possibly located in a known position in space, which generates image features that can be located accurately. In the adopted method we use the perspective camera model, also known as the pinhole camera model.
The method presented here consists of two stages:
• estimation of the projection matrix linking world and image coordinates;
• computation of the camera's parameters as closed-form functions of the entries of the projection matrix.
2.1. Estimation of the projection matrix
The relation between the 3D coordinates $(X_i^w, Y_i^w, Z_i^w)$ of a point in space and the 2D coordinates $(x_i, y_i)$ of its projection on the image plane can be written by means of a 3×4 projection matrix, M, as follows:
(1)  $\begin{pmatrix} u_i \\ v_i \\ w_i \end{pmatrix} = M \begin{pmatrix} X_i^w \\ Y_i^w \\ Z_i^w \\ 1 \end{pmatrix},$

with:

(2a)  $x_i = \dfrac{u_i}{w_i} = \dfrac{m_{11} X_i^w + m_{12} Y_i^w + m_{13} Z_i^w + m_{14}}{m_{31} X_i^w + m_{32} Y_i^w + m_{33} Z_i^w + m_{34}},$

(2b)  $y_i = \dfrac{v_i}{w_i} = \dfrac{m_{21} X_i^w + m_{22} Y_i^w + m_{23} Z_i^w + m_{24}}{m_{31} X_i^w + m_{32} Y_i^w + m_{33} Z_i^w + m_{34}}.$

The matrix M is defined only up to an arbitrary scale factor, and its entries can be determined through a homogeneous linear system formed by writing (2) for at least 6 world-image point correspondences. With the help of a calibration pattern like the one presented in Fig. 1, many more correspondences and equations can be obtained, and M can be estimated through least-squares techniques.
The projection matrix M can be estimated by solving the following homogeneous linear system:

(3)  $A\mathbf{m} = 0,$

where A is the $2N \times 12$ coefficient matrix obtained by writing (2) for the N correspondences, and m is the vector of the entries of M:

$\mathbf{m} = (m_{11}, m_{12}, \ldots, m_{33}, m_{34})^T.$

The non-trivial solution of the homogeneous equation $A\mathbf{m} = 0$ is found through the singular value decomposition (SVD) of the matrix A, $A = UDV^T$: the vector m is the column of V corresponding to the smallest singular value, i.e. the last column of V. In agreement with the above definition of M, this means that the entries of M are obtained up to an unknown scale factor.
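A minimal Matlab sketch of this estimation step is given below; the function and variable names (XYZ for the N-by-3 world coordinates, xy for the N-by-2 image coordinates) are our own illustration and are not prescribed by the paper.

% Sketch: estimate the 3x4 projection matrix M from N >= 6 correspondences.
function M = estimate_projection_matrix(XYZ, xy)
    N = size(XYZ, 1);
    A = zeros(2*N, 12);
    for i = 1:N
        X = [XYZ(i, :), 1];                 % homogeneous world point (1x4)
        x = xy(i, 1);  y = xy(i, 2);
        % Two rows per point, obtained by clearing the denominators in (2a)-(2b)
        A(2*i-1, :) = [X, zeros(1, 4), -x * X];
        A(2*i,   :) = [zeros(1, 4), X, -y * X];
    end
    [U, D, V] = svd(A);                     % A = U*D*V'
    m = V(:, end);                          % singular vector of the smallest
                                            % singular value
    M = reshape(m, 4, 3)';                  % the rows of M were stacked in m
end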
2.2. Camera parameters from the projection matrix
In practical applications it is not always sufficient to have estimated the projection matrix M; the camera's intrinsic parameters ($f_x$, $f_y$, $o_x$, $o_y$) and extrinsic parameters ($R$, $T$) are also needed. We assume that the projection matrix has been estimated with the procedure from the previous section. The estimated projection matrix is denoted by $\hat{M}$:

$\hat{M} = \begin{pmatrix} \hat{m}_{11} & \hat{m}_{12} & \hat{m}_{13} & \hat{m}_{14} \\ \hat{m}_{21} & \hat{m}_{22} & \hat{m}_{23} & \hat{m}_{24} \\ \hat{m}_{31} & \hat{m}_{32} & \hat{m}_{33} & \hat{m}_{34} \end{pmatrix}.$

First we rewrite the full expression for the entries of the projection matrix M in terms of the camera parameters:

(4)  $M = \begin{pmatrix} -f_x r_{11} + o_x r_{31} & -f_x r_{12} + o_x r_{32} & -f_x r_{13} + o_x r_{33} & -f_x T_x + o_x T_z \\ -f_y r_{21} + o_y r_{31} & -f_y r_{22} + o_y r_{32} & -f_y r_{23} + o_y r_{33} & -f_y T_y + o_y T_z \\ r_{31} & r_{32} & r_{33} & T_z \end{pmatrix}.$

In this case we are trying to find the camera parameters $f_x$ and $f_y$ (the focal length expressed in horizontal and vertical pixel size units), $o_x$ and $o_y$ (the image center coordinates), $R$ (the rotation matrix) and the translation vector $T$ (with $T_x$, $T_y$, $T_z$ being its elements). For this purpose we also need the following 3D vectors:

$\mathbf{q}_1 = (\hat{m}_{11}, \hat{m}_{12}, \hat{m}_{13})^T$,  $\mathbf{q}_2 = (\hat{m}_{21}, \hat{m}_{22}, \hat{m}_{23})^T$,
$\mathbf{q}_3 = (\hat{m}_{31}, \hat{m}_{32}, \hat{m}_{33})^T$,  $\mathbf{q}_4 = (\hat{m}_{14}, \hat{m}_{24}, \hat{m}_{34})^T$.

Now the estimated projection matrix can be written as $\hat{M} = \gamma M$, where $|\gamma| = \sqrt{\mathbf{q}_3^T \mathbf{q}_3}$, because $(r_{31}, r_{32}, r_{33})$, the last row of the rotation matrix R, has unit norm. The next step is the
division of the $\hat{M}$ matrix by $|\gamma|$. From the last row of (4) we have:

$T_z = \sigma \hat{m}_{34}$  and  $r_{3i} = \sigma \hat{m}_{3i}$, $i = 1, 2, 3$,

with $\sigma = \pm 1$. By taking the dot product of $\mathbf{q}_3$ with $\mathbf{q}_1$ and $\mathbf{q}_2$ it results that:

$o_x = \mathbf{q}_1^T \mathbf{q}_3$  and  $o_y = \mathbf{q}_2^T \mathbf{q}_3$.

Then we can compute $f_x$ and $f_y$ as:

$f_x = \sqrt{\mathbf{q}_1^T \mathbf{q}_1 - o_x^2}$  and  $f_y = \sqrt{\mathbf{q}_2^T \mathbf{q}_2 - o_y^2}$.

Until now we have computed the intrinsic parameters of the camera. Now we can compute the extrinsic parameters:

$r_{1i} = \sigma (o_x \hat{m}_{3i} - \hat{m}_{1i}) / f_x$, $i = 1, 2, 3$,
$r_{2i} = \sigma (o_y \hat{m}_{3i} - \hat{m}_{2i}) / f_y$, $i = 1, 2, 3$,
$T_x = \sigma (o_x T_z - \hat{m}_{14}) / f_x$,
$T_y = \sigma (o_y T_z - \hat{m}_{24}) / f_y$.

The estimated rotation matrix $\hat{R}$ obtained by this procedure is not truly orthogonal. Therefore we must compute the rotation matrix that is closest to the estimated matrix $\hat{R}$. By using SVD we have:

$\hat{R} = UDV^T$  and  $R = UV^T$.

Now we have all the intrinsic and extrinsic parameters. The only thing left to determine is the sign of $\sigma$. It can be obtained very easily from $T_z = \sigma \hat{m}_{34}$: if the origin of the world frame is in front of the camera, then the sign of $\sigma$ is "+", otherwise it is "−".
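The whole recovery can be condensed into a short Matlab sketch (our own illustration of the steps above; the function name and the assumption that the world origin lies in front of the camera, so that $T_z > 0$, are ours):

% Sketch: recover the camera parameters from the estimated 3x4 matrix Mhat.
function [fx, fy, ox, oy, R, T] = decompose_projection_matrix(Mhat)
    Mhat = Mhat / norm(Mhat(3, 1:3));       % divide by |gamma| = sqrt(q3'*q3)
    q1 = Mhat(1, 1:3)';  q2 = Mhat(2, 1:3)';
    q3 = Mhat(3, 1:3)';  q4 = Mhat(:, 4);
    sigma = sign(q4(3));                    % assumes the world origin is in
                                            % front of the camera (Tz > 0)
    ox = q1' * q3;       oy = q2' * q3;     % image centre
    fx = sqrt(q1' * q1 - ox^2);             % focal lengths in pixel units
    fy = sqrt(q2' * q2 - oy^2);
    r1 = sigma * (ox * q3 - q1) / fx;       % rows of the rotation matrix
    r2 = sigma * (oy * q3 - q2) / fy;
    r3 = sigma * q3;
    [U, D, V] = svd([r1'; r2'; r3']);       % closest orthogonal matrix
    R = U * V';
    Tz = sigma * q4(3);
    Tx = sigma * (ox * Tz - q4(1)) / fx;
    Ty = sigma * (oy * Tz - q4(2)) / fy;
    T = [Tx; Ty; Tz];
end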
3. EXPERIMENTAL EVALUATION
For the experimental evaluation of the above method
we have used a wireless CMOS camera. Because the images taken by the camera were very noisy, we needed a filter to remove that noise. The filter used for this purpose was a simple one, the median filter.
The calibration object used for the process consists of two perpendicular planes; in our case we have used two sides of a box. On the two sides we added a calibration pattern consisting of square tiles whose sides are 1 cm long. Another important step in the calibration process is to choose a convenient world coordinate frame (see Fig. 1). The first step of the calibration process is to set up the 3D matrix XYZ, which contains the 3D coordinates of the calibration points. We have chosen those points as follows: since the sides of the tiles are 1 cm, the XYZ coordinates of a point near the origin of the world frame are (-1, 0, 1). Then we need to set up the 2D matrix xy, which contains the 2D coordinates of the calibration points. For this purpose we have developed an interactive program that collects the 2D coordinates of these points (see Fig. 2).
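For illustration, this interactive step could be done with Matlab's ginput, as in the sketch below; the image file name, the number of points and the matrix layout are assumptions, not details taken from the paper.

% Sketch: collect the calibration data. XYZ holds the known 3D coordinates of
% the selected tile corners (in cm, in the world frame of Fig. 1); xy holds
% the image coordinates of the same corners, clicked by the user.
img = imread('calibration_image.png');      % hypothetical file name
imshow(img); hold on;
N = 25;                                     % 20-30 points are recommended
XYZ = zeros(N, 3);                          % to be filled with the known tile
                                            % corner coordinates, e.g. (-1, 0, 1)
[u, v] = ginput(N);                         % click the same corners in the image
xy = [u, v];
plot(u, v, 'r+');                           % mark the collected points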
Fig.1. The calibration object and the chosen world
coordinate frame.
Fig.2. The calibration object and resulting calibration
points.
For an accurate calibration we need between 20 and 30 points. After this step is completed, the estimation of the projection matrix M can be done by applying the method presented earlier in this work.
To show that the calibration process was accurate, we have considered some cubes with sides equal to the sides of the tiles and placed them over the calibration pattern. The correctness of the projection matrix estimate can easily be observed (see Fig. 3).
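Assuming the estimated matrix M and the 3D coordinates of the cube corners are available, this check can be reproduced by reprojecting the corners with equations (1)-(2) and drawing them over the image, as in the sketch below (cube_XYZ and the plotting details are our own):

% Sketch: reproject known 3D points (e.g. cube corners lying on the tiles)
% with the estimated projection matrix M and overlay them on the image.
function uv = reproject_points(M, XYZ)
    Xh = [XYZ, ones(size(XYZ, 1), 1)]';     % 4-by-N homogeneous world points
    p  = M * Xh;                            % apply equation (1)
    uv = [p(1, :) ./ p(3, :); p(2, :) ./ p(3, :)]';   % divide by w, as in (2)
end

% Example use (cube_XYZ is an assumed 8-by-3 matrix of corner coordinates):
% uv = reproject_points(M, cube_XYZ);
% imshow(img); hold on; plot(uv(:, 1), uv(:, 2), 'g+');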
Fig.3. The calibrated camera allows for 3D objects to
be drawn in the scene.
In the final step we have computed the estimates for
the camera’s intrinsic and extrinsic parameters from
the projection matrix. The resulting intrinsic parameters are:
$f_x = 321.0655$ px,  $f_y = 329.5092$ px,
$o_x = 156.8256$,  $o_y = 164.7667$.

The camera's extrinsic parameters obtained from the calibration process are:

$R = \begin{pmatrix} 0.0192 & 0.0070 & 0.9998 \\ 0.7375 & 0.6751 & 0.0189 \\ 0.6750 & 0.7377 & 0.0078 \end{pmatrix}$,  $T = \begin{pmatrix} 6.2695 \\ 0.6508 \\ 23.9378 \end{pmatrix}$.

The entire calibration process has been implemented in the Matlab programming and simulation environment.
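As an additional sanity check (not reported in the paper), one can verify that the recovered rotation matrix is orthogonal and measure the mean reprojection error of the calibration points with the estimated M, reusing the reprojection sketch above:

% Sketch: simple numerical checks on the recovered parameters.
orth_err   = norm(R * R' - eye(3));                 % should be close to 0
uv         = reproject_points(M, XYZ);              % helper sketched earlier
reproj_err = mean(sqrt(sum((uv - xy).^2, 2)));      % mean error in pixels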
4. CONCLUSIONS
The method presented here is a simple calibration method that gets the job done. The precision of the calibration depends on how accurately the image and world reference points are located, and the errors in the parameter estimates propagate to the results of the application. The acceptable calibration error ultimately depends on the accuracy requirements of the target application: in industrial applications, sub-millimeter accuracy may be required, while in other applications even errors of centimeters are acceptable.
5. REFERENCES
Horn, B.K.P. (2000). Tsai's camera calibration method revisited. Technical report, MIT Artificial Intelligence Laboratory.
Triggs, B. (1998). Autocalibration from planar scenes. In: Proc. of the Fifth European Conference on Computer Vision, pp. 89-105.
Trucco, E. and Verri, A. (1998). Introductory Techniques for 3-D Computer Vision. Prentice-Hall, Inc., Upper Saddle River, New Jersey, ISBN 0-13-261108-2.
Tsai, R.Y. (1987). A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, pp. 323-344.
Zhang, Z. (1998). A Flexible New Technique for Camera Calibration. Technical Report MSR-TR-98-71, Microsoft Research.