A Robust Feature Point Registration Method for Retinal Images Using
Kirsch Template and Genetic Algorithms
Fatiha Meskine1, Nasreddine TALEB2
1 Department of Electrical Engineering, Faculty of Science and Technology,
University of Mascara, Mascara, Algeria, 29000
E-mail: [anonimizat]
2 Department of Electronics, RCAM Laboratory, Djillali Liabes University
of Sidi Bel Abbes, Sidi Bel Abbes, Algeria, 22000
E-mail: ne_taleb@univ-sba.dz
Abstract. Image registration is a fundamental problem in several image processing and computer vision
applications. In medical imaging, disease diagnosis and treatment planning are often supported by
multiple images acquired from the same patient. Image registration techniques are hence needed in order to
integrate the information gained from several images and obtain a comprehensive understanding. A broad range of
image registration methods have been proposed for different medical imaging applications, including retinal
image registration. In this paper, we have developed a robust registration algorithm for estimating a rigid
transformation using genetic algorithms. The aim is to develop a highly accurate and automated feature point
registration method for retinal images. Feature detection is first performed by blood vessel
segmentation using the Kirsch edge operator, followed by a matching process. For registration, we have
adopted a genetic algorithm as the optimization method to estimate the optimal parameters of the rigid
transformation. The simulation results show that this method is efficient for the registration of retinal
images.
Key Words. Image registration, retinal images, vessel extraction, Kirsch edge, genetic algorithms.
1 Introduction
Image registration (IR) is an important research topic in the field of image processing and computer
vision [1,2]. The problem has been studied in various contexts due to its significance in a wide range
of areas, including medical imaging [3,4], remote sensing [5,6,7], recognition, tracking, mosaicing and
so on. In its most general formulation, image registration is the task of aligning two or more images in
order to establish a spatial correspondence of their common content. Such images us ually have the
same or a similar object but have been acquired under different conditions, such as time and
viewpoint, or by multiple sensors.
Image registration has become extensively popular within the medical community as a powerful
diagnostic tool. A broad range of image registration methods have been proposed for different medical
imaging applications, including retinal image registration. Various criteria, e.g., modalities,
dimensionalities, and elasticity of the transformation, have been proposed to categorize registration
methods [1,8]. Registration tasks may involve inter-modal registration (e.g., generating retinal fundus
maps), or registering images taken over a period of time as a monitoring procedure. Recently, multi-modal
registration has been widely adopted since the fusion of different modalities can offer much
greater diagnostic information when analysing structural components, due to their different
representations in the images, for example, registering Magnetic Resonance (MR) images with
Computed Tomography (CT) images.
In general, existing image registration techniques can be categorized into two classes: area-based and
feature-based methods. Area-based techniques generally rely on pixel intensities and optimize an
objective function such as the least mean square error, cross-correlation, phase correlation, or
mutual information. In the case of retinal image registration, area-based approaches are often used in
multimodal or temporal image registration applications [9,10]. Typically, two major factors
may degrade the performance of area-based methods: inconsistent/non-uniform contrast within
an image and large homogeneous/textureless areas.
On the other hand, feature-based methods utilize extracted features and match the common
structures of two images. There are two important steps in feature-based techniques: the first is
feature extraction and the second is finding feature correspondences. Most of these methods
are based on the vessel structure of the retina [11,12]. Many works have addressed this area of research;
a brief review of the most relevant ones follows. In [13], the bifurcation points of a vascular tree,
also called landmark points, are labeled with surrounding vessel orientations. In [14], the similarity
matrix for all possible correspondences is computed based on the orientations of vascular centerlines,
and the similarity measure is converted to a prior probability. In [15], the dual-bootstrap iterative
closest point (dual-bootstrap ICP) algorithm was introduced. The approach starts from one or more
initial, low-order estimates that are only accurate in small image regions called bootstrap regions. In
each bootstrap region, the method iteratively refines the transformation estimate, expands the
bootstrap region, and tests whether a higher-order model can be used. The performance of feature-based
methods largely depends on sufficient and reliable correspondences, especially when the
overlapping part of an image pair is very limited or when there are mismatched correspondences.
Feature-based and area-based techniques can also be combined effectively to form hybrid
techniques. In [16], a hybrid retinal image registration approach is developed using an entropy-based
thresholding technique for extracting the vascular tree, and a zeroth-order translation is estimated by
maximizing mutual information on the binary image pair. The authors of [17] proposed elastic
matching algorithms based on reconstructed vascular trees. Chen et al. [18] presented a
partial intensity invariant feature descriptor (PIIFD) for multimodal registration; Deng et al. [19]
introduced a graph matching method for retinal image registration, and they also combined graph-based
matching with ICP to generate a registration framework called GM-ICP. Although many
approaches exist, several challenges still remain in retinal image registration. In the
study in [20], a hybrid technique combining gray scale information and geometric transformation
was implemented.
Three components characterize an IR method: the kind of transformation that is used to relate the
images, the similarity metric that measures the quality of alignment, and the optimization procedure
that performs the search for a suitable transformation. A comprehensive review of the existing
literature on image registration methods is given in [21]. Independently of their nature, IR
techniques involve an iterative optimization procedure that explores the space of possible
transformations. Registration approaches based on evolutionary algorithms have proven to be a
promising solution to overcome the drawbacks of traditional optimization algorithms. In fact, they are
considered global optimization approaches able to perform a robust search in complex spaces such as
those arising in IR, as shown in [22], which reviews the application of these methods to medical
image registration.
Genetic Algorithms (GAs) represent one of the most famous families of evolutionary algorithms. Numerous
researchers have attempted to apply GAs to search over the complex landscapes arising in image
registration [6], but few works have addressed retinal IR based on GAs. In this paper, our
contribution focuses on developing a rigid feature registration method for retinal images based on the
Kirsch edge operator and GAs. We use the Kirsch template to extract the vessel features and the
feature point sets from retinal images, while a GA searches for the global estimate of the
transformation parameters from the extracted data sets.
2 Feature extraction using Kirsch edge
2.1 Proposed feature extraction method
The first step in our algorithm is to extract the two data sets from the two retinal images. In this paper,
the Kirsch edge operator is used for the extraction of blood vessels from retinal images.
The Kirsch operator is one of the discrete versions of the first derivative used for edge enhancement
and detection. It can adjust the related threshold value automatically according to the image characteristics;
therefore, the Kirsch gradient operator is chosen to extract the contours of objects [23]. The Kirsch
algorithm works well for images having a clear distinction between foreground and background.
Since the retinal blood vessels can be considered the required foreground information in fundus
images, the Kirsch algorithm can be applied efficiently [24,25].
The gradient is computed by convolving the image with eight template impulse response arrays at each
pixel, which yields the gradient in eight different directions. The final gradient is obtained by
combining the enhanced edges over all directions, considering the RGB channels
rather than any single channel only. The Kirsch edge detection algorithm uses a single mask of size
3×3 and rotates it in 45 degree increments through all 8 directions, as shown below [25]:
[ 5  5  5]   [ 5  5 -3]   [ 5 -3 -3]
[-3  0 -3]   [ 5  0 -3]   [ 5  0 -3]
[-3 -3 -3]   [-3 -3 -3]   [ 5 -3 -3]

[-3 -3 -3]   [-3 -3 -3]   [-3 -3 -3]
[ 5  0 -3]   [-3  0 -3]   [-3  0  5]
[ 5  5 -3]   [ 5  5  5]   [-3  5  5]

[-3 -3  5]   [-3  5  5]
[-3  0  5]   [-3  0  5]
[-3 -3  5]   [-3 -3 -3]
The templates are convolved with a scaling factor of 1/15. Except for the outermost rows and columns,
every pixel and its eight neighbors in a given image are convolved with these eight templates,
respectively, so that every pixel has eight outputs. The maximum output of the eight templates is chosen
as the edge magnitude [24,25].
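The operator described above can be sketched in NumPy. This is an illustrative sketch, not the authors' MATLAB implementation: the masks are the standard Kirsch compass kernels (values 5 and -3, each mask summing to zero), generated by rotating the border of a base mask in 45 degree steps, with the 1/15 scale factor applied to the maximum response.

```python
import numpy as np

# Base (north-facing) Kirsch mask; the other 7 masks are obtained by
# rotating its border values around the centre in 45-degree steps.
KIRSCH_NORTH = np.array([[ 5,  5,  5],
                         [-3,  0, -3],
                         [-3, -3, -3]])

def kirsch_kernels():
    """Return the 8 directional masks obtained by rotating the base mask."""
    border = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [KIRSCH_NORTH[r, c] for r, c in border]
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3), dtype=int)
        for (r, c), v in zip(border, vals[shift:] + vals[:shift]):
            k[r, c] = v
        kernels.append(k)
    return kernels

def kirsch_magnitude(gray, scale=1.0 / 15.0):
    """Edge magnitude: maximum response over the 8 masks, skipping the
    outermost rows/columns as described in the text."""
    gray = gray.astype(float)
    out = np.zeros_like(gray)
    kernels = kirsch_kernels()
    for r in range(1, gray.shape[0] - 1):
        for c in range(1, gray.shape[1] - 1):
            patch = gray[r - 1:r + 2, c - 1:c + 2]
            out[r, c] = max(np.sum(patch * k) for k in kernels)
    return scale * out
```

Because each mask sums to zero, the response vanishes on uniform regions and peaks along intensity transitions such as vessel boundaries.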
In our work, we have followed these steps: the retinal RGB image is converted to a grayscale image;
then we apply the vessel extraction algorithm based on the Kirsch edge operator. To enhance the
image, we apply morphological operations, and we then select some feature points from
the resulting binary image in order to form the point set.
The feature extraction algorithm steps are summarized as follows:
1. Convert the image to a grayscale image.
2. Perform the Kirsch edge algorithm:
a. Define the eight filter templates with a 3×3 mask.
b. Fix the scale factor at 1/15.
c. Compute the gradient convolution for all these templates over the retinal image.
d. Given a threshold value, choose the maximum magnitude of all of them as the Kirsch
edge magnitude.
3. Create a binary mask in order to eliminate the optic disc of the retina and to define the zone of
interest.
4. Apply a morphological opening operation in order to refine the edges. As a result, a binary
image of the blood vessel structure is generated.
5. In order to limit the size of the point set, choose one point location from each ten
neighboring points of the segmented image. At the end, a data set of points is created.
An example of feature point extraction is shown in Fig.1.
Fig.1. Feature point extraction: (a) retinal input image, (b) gray intensity image, (c) edge image
with the Kirsch template, (d) blood vessel extraction after applying a mask, (e) segmented image after
applying the morphological operation, (f) selected feature points marked with a green cross.
2.2 Feature point matching
After the two point sets have been selected from both images, a correspondence mechanism between
these two feature point sets must be established to obtain accurate point pairs. The objective is that each
feature point in the reference image is paired with its correspondent in the sensed image. In this work,
the matching process is performed by computing the correlation coefficient between the two sets; the
matched points are those that yield the maximum correlation coefficient value.
The point matching algorithm can be summarized in three main steps:
1. For every control point pi ∈ P, choose a moving circular template w1 of radius r centered at
this point in the reference image.
2. Calculate the correlation coefficient between the window template w1 and the corresponding
template w2 centered at qj in the sensed image.
3. The control point qj ∈ Q that corresponds to the candidate point pi ∈ P in the
reference image is determined by the shift giving the maximum correlation.
In our experiment, we have used a circular window with a radius of 11 pixels.
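The matching steps above can be sketched as follows. This is an illustrative sketch that uses the normalized cross-correlation coefficient as the similarity score and assumes every point lies at least `radius` pixels from the image border; the candidate set would come from the feature extraction stage.

```python
import numpy as np

def circular_mask(radius):
    """Boolean mask selecting the pixels of a circular window."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x**2 + y**2 <= radius**2

def ncc(a, b):
    """Normalized cross-correlation coefficient of two flattened windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def match_point(ref, sen, p, candidates, radius=11):
    """Pair reference point p with the candidate in the sensed image whose
    circular window correlates best with the window around p."""
    mask = circular_mask(radius)

    def window(img, q):
        r, c = q
        return img[r - radius:r + radius + 1, c - radius:c + radius + 1][mask]

    wp = window(ref, p)
    scores = [ncc(wp, window(sen, q)) for q in candidates]
    return candidates[int(np.argmax(scores))]
```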
3 Genetic algorithms based image registration
Once the matching process is complete, and for a given number of feature points from both images to
be compared, the transformation parameters can be estimated using GAs. A GA is a global stochastic
search and optimization technique that can explore a large solution space and concentrate the search in
the regions which lead to fitter structures, and hence to better solutions of the problem [26]. Although
GAs are randomized, they exploit historical information to direct the search into regions of better
performance within the search space. The power of GAs comes from their ability to combine
exploration and exploitation in an optimal way.
A GA borrows its name from the natural genetic system [27]. According to natural selection
theory, individuals that are better fit to a given environment are more likely to survive. GAs use the
same tools: points in the search space, called strings, are encoded to form a population, and each string
is given a fitness value according to a certain criterion. The selection process chooses high-fitness
strings, which means that strings with higher fitness values have a higher probability of reproducing
new strings in the next generation. Two operators, crossover and mutation, introduce new individuals
into the population. The term crossover is used for the operation of generating new individuals by
combining the chromosomes of other individuals, while mutation refers to applying a small variation to
the chromosome of an individual. These operations enable GAs to use relatively few samples to search
large spaces. All of these operations are inherently stochastic, and both crossover and mutation are
applied with certain probabilities. The algorithm iterates until a suitable solution has been found or
a certain number of generations has passed, depending on the needs of the programmer. The
pseudocode of a GA is as follows:
BEGIN
INITIALISE population with random candidate solutions
EVALUATE the fitness of each candidate
REPEAT
SELECT parents
RECOMBINE pairs of parents
MUTATE the resulting offspring
EVALUATE new candidates
UNTIL TERMINATION CONDITION is satisfied
END
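The pseudocode can be sketched in Python as below. This is an illustrative sketch, not the paper's MATLAB implementation: binary tournament selection is an assumption (the paper does not state its selection scheme), while the one-point crossover and bit-flip mutation match the operators named later in the GA settings.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=80, generations=100,
                      cx_rate=0.85, mut_rate=0.02, seed=0):
    """Minimal binary GA: tournament selection, one-point crossover,
    bit-flip mutation. `fitness` is minimised."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def select():                          # binary tournament selection
        a, b = rng.sample(pop, 2)
        return a if fitness(a) < fitness(b) else b

    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = select()[:], select()[:]
            if rng.random() < cx_rate:     # one-point crossover
                cut = rng.randrange(1, length)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):         # bit-flip mutation
                offspring.append([1 - g if rng.random() < mut_rate else g
                                  for g in child])
        pop = offspring[:pop_size]
    return min(pop, key=fitness)
```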
Basically, a GA requires two elements for a given problem:
- an encoding of candidate structures (solutions);
- a method of evaluating the relative performance of candidate structures, for
identifying the better solutions.
In the following, the formulations of the chromosome and the fitness function adopted in this work are
described.
3.1 Objective Function
To evaluate the image registration performance, a fitness function must be defined for this problem.
The GA should try to find a chromosome with the minimum Euclidean distance between each
correspondence pair [28].
Given the two point sets to be matched, the model set Q = {q1, q2, ..., qn}, which refers to the
reference image, and the scene set P = {p1, p2, ..., pm}, which refers to the input image, where m is not
necessarily equal to n, our registration method finds the parameters of a transformation which
minimizes the cost function. The study is focused on a rigid registration that consists in determining
the parameters of a rotation and a translation. Under these transformation parameters, we define the
corresponding transformed point Pt of a given source point pi by the following mapping [4]:

Pt = R*pi + T    (1)

where

R = [cos θ  -sin θ
     sin θ   cos θ]   is a rotation matrix, and

T = [Tx
     Ty]              is a translation vector.

The best possible correspondence of the point Pt in Q is determined as depicted in equation (2):

d(Pt, Q) = min over qj in Q of ||Pt - qj||    (2)

where ||.|| denotes the Euclidean distance.
Thus the objective is to minimize the Euclidean distance between the transformed point R*pi + T and q
in Q, so the process searches the closest point in the data set Q for every point in P. To this end, the
vector of distances over all correspondence pairs is considered, and we define the fitness function to
be minimized as

f(R, Tx, Ty) = median over i of ( min over qj in Q of ||R*pi + T - qj|| )    (3)
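The fitness function, the median over the transformed scene points of the distance to the closest model point, can be sketched directly in NumPy; this is an illustrative sketch under the rigid model of Eq. (1).

```python
import numpy as np

def rigid_transform(P, theta_deg, tx, ty):
    """Rotate an (m, 2) point set by theta (degrees), then translate it."""
    t = np.radians(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return P @ R.T + np.array([tx, ty])

def fitness(params, P, Q):
    """Median over all points of P of the Euclidean distance to the
    closest point of Q after the rigid transform (to be minimised)."""
    Pt = rigid_transform(P, *params)
    dists = np.linalg.norm(Pt[:, None, :] - Q[None, :, :], axis=2)  # (m, n)
    return float(np.median(dists.min(axis=1)))
```

With the true parameters the transformed scene points land exactly on their model counterparts, so the fitness drops to zero.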
3.2 Chromosome Encoding
In relation to image registration, every individual represents a combination of all transformation
parameters, which describes an image transformation. Therefore, the vector of parameters (R, Tx, Ty) is
used as a chromosome that participates in the iterative process. The chromosome string is thus
constituted of three genes, each one representing a parameter of the transformation: the rotation and
the shifts along the x and y axes.
In this work, we have adopted a binary coding to encode the chromosome. An 8-bit field is used to
represent the possible relative rotation R; likewise, six bits are used to express the translation in each
direction. All representations are in signed magnitude, using one bit for the sign and the rest of the bits to
represent the magnitude of the rotation or translation. Thus, the relative rotation has a range of ±127
degrees, while the relative translation in the x (or y) direction has a range of ±31 pixels. The
total length of the chromosome is therefore 20 bits, which can represent 2^20 different elements.
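The signed-magnitude decoding of this 8 + 6 + 6 bit layout can be sketched as follows (an illustrative sketch of the encoding described above).

```python
def decode_signed(bits):
    """Signed-magnitude decoding: first bit is the sign, the rest the magnitude."""
    magnitude = int("".join(str(b) for b in bits[1:]), 2)
    return -magnitude if bits[0] else magnitude

def decode_chromosome(bits):
    """Split a 20-bit chromosome into (R, Tx, Ty): 8 + 6 + 6 bits."""
    assert len(bits) == 20
    return (decode_signed(bits[:8]),      # rotation, range +/-127 degrees
            decode_signed(bits[8:14]),    # x translation, range +/-31 pixels
            decode_signed(bits[14:20]))   # y translation, range +/-31 pixels
```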
4 Simulation results
In our implementation, the experiment is run on 2D retinal images of size 576×768 pixels. In order to
demonstrate the capabilities of the proposed algorithm for image registration, a transformed image is
generated from an original image using some known geometric transformations (translation,
rotation); bilinear interpolation is used for the rotation. The simulation results have been obtained under
the MATLAB programming environment.
We note that in our simulation we have used an elitism technique in order to enhance the capability and
performance of the GA. Elitism consists of replacing the weakest individuals of the population with
the strongest ones of the previous population.
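The elitism step can be sketched as follows; this is an illustrative sketch in which fitness is minimised and `rate` plays the role of the elitism rate listed in the GA settings.

```python
def apply_elitism(prev_pop, new_pop, fitness, rate=0.05):
    """Replace the weakest individuals of the new population with the
    strongest of the previous one (fitness is minimised)."""
    k = max(1, int(rate * len(new_pop)))
    elites = sorted(prev_pop, key=fitness)[:k]                    # best of previous
    survivors = sorted(new_pop, key=fitness)[:len(new_pop) - k]   # drop weakest
    return survivors + elites
```

This guarantees that the best solution found so far is never lost between generations, which typically makes the fitness curve monotonically non-increasing.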
The GA parameters used in this work are set as follows:
GAs Parameters:
Population size: 80 individuals
Crossover type: one point crossover
Crossover rate: 0.85
Mutation type: binary mutation
Mutation rate: 0.02
Generation gap: 100
Elitism rate: 5%
The main objective of the GA is to minimize the median distance between the two point sets. Fig.2
shows the progress of the fitness value during the GA cycle. As can be seen, this value, median(dist), is
minimized from one generation to the next until the optimal fitness value is found, which corresponds
to the optimal solution.
Fig.2. Evolution of the fitness value during GA process.
Also, Fig.3 illustrates the evolution of the best solution (R, Tx, Ty) during the runs of the GA. The red
dashed lines show the initial test parameters and the blue lines show the optimal parameters found during
the GA process with our registration method.
Fig.3. Evolution of the best solution (R,X,Y) during the GA run.
[Plots: rotation R, translation X, and translation Y versus generations (initial values Rinit = -5, Xinit = -10, Yinit = 15), and evolution of the best fitness median(dist) over 300 generations.]
The analytical results are presented in Table 1. It can be seen from Table 1 and Fig.3 that the estimated
transformation parameters are very close to the true ones. This demonstrates the efficiency and
robustness of the proposed registration algorithm.
Table 1. Analytical results of registration of retinal images
The two images to be aligned, the reference and transformed images, and the resulting registered image
obtained using the proposed registration algorithm are shown in Fig.4.
Fig.4. Results of retinal image registration: (a) source or reference image, which corresponds to
the model set, (b) transformed or tested image, which corresponds to the scene set P, and (c) registered
image obtained with our proposed GA-based registration method.
Real Application
To show the robustness and efficiency of the proposed registration method, we have applied our
algorithm to register retinal images of size 768×676 pixels acquired for a fusion application.
The experiment is conducted with the same GA parameters mentioned previously. Fig.5
shows a real example of an image registration result. It can be seen clearly that we were able to align
the image pair with our proposed registration method.
Transformation parameters   R (degrees)   X (pixels)   Y (pixels)
True parameters                  -5           -10           15
Obtained parameters              -5           -10           14
Fig.5. Registration results of retinal images: (a) reference image, (b) input image, (c) corrected image
obtained with the proposed registration method.
5 Conclusion
IR is a fundamental step in medical image analysis and a very active research field due to the steady
improvement of medical imaging technology. It is the task of aligning two or more images in order to
combine the information they contain. Despite the effort of the scientific community over the last
decades, there is still a need for improvement of current image registration techniques. In particular,
retinal image registration techniques are becoming increasingly important in the clinical diagnosis and
treatment of human retinal diseases. In this work, we have developed a feature-based registration
method for retinal images based on GAs and the Kirsch edge operator.
The basic procedure involves feature identification, matching of corresponding features, and the
alignment of these matches by evaluating a distance. A feature point extraction technique based on the
Kirsch template is employed to determine the data point sets. A GA is then run to estimate
the rigid transformation parameters between the two data sets, achieving the image registration task by
minimizing the distance between the two sets. The obtained results are very promising, as shown in the
simulations, which demonstrates the efficiency and accuracy of the proposed method for the
registration of retinal images.
Further opportunities exist for extending the capability and application of the proposed point
registration approach to retinal images. Several enhancements can be explored, specifically the feature
point selection and the extension of the search space from 2D to 3D, which considers rotations and
translations along three axes.
References
1. L. G. Brown, 'A survey of image registration techniques', ACM Comput. Surv., 24, pp.325-376 (1992).
2. B. Zitova, J. Flusser, 'Image registration methods: a survey', Image and Vision Computing, 21(11), pp.977-1000 (2003).
3. D. L. G. Hill, P. G. Batchelor, M. Holden, D. J. Hawkes, 'Medical image registration', Phys. Med. Biol., 46(3), R1-R45 (2001).
4. F. Meskine, N. Taleb, A. Almhdie, 'A Feature Point Based Image Registration Using Genetic Algorithms', Mediterranean Telecommunication Journal, 2(2), pp.148-153 (2012).
5. Y. Bentoutou, N. Taleb, K. Kpalma, J. Ronsin, 'An automatic image registration for applications in remote sensing', IEEE Trans. Geosci. Remote Sens., 43, pp.2127-2137 (2005).
6. F. Meskine, M. Chikr El Mezouar, N. Taleb, 'A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms', Sensors, 10, pp.8553-8571 (2010).
7. F. Meskine, N. Taleb, A. Almhdie-Imjabber, 'A 2D Rigid Point Registration for Satellite Imaging Using Genetic Algorithms', The 5th International Conference on Image and Signal Processing ICISP, Lecture Notes in Computer Science LNCS 7340, Springer, Agadir, Morocco, June 28-30, pp.442-450 (2012).
8. J. B. A. Maintz, M. A. Viergever, 'A survey of medical image registration', Medical Image Analysis, Oxford, U.K.: Oxford Univ. Press (1998).
9. N. Mouravliansky, G. K. Matsopoulos, K. Delibasis, K. S. Nikita, 'Automatic retinal registration using global optimization techniques', Proc. Int. Conf. Eng. Med. Biol. Society, vol. 2, pp.567-570 (1998).
10. G. K. Matsopoulos, N. A. Mouravliansky, K. K. Delibasis, K. S. Nikita, 'Automatic retinal image registration scheme using global optimization techniques', IEEE Transactions on Information Technology in Biomedicine, 3(1), pp.47-60 (1999).
11. A. Can, C. V. Stewart, B. Roysam, H. L. Tanenbaum, 'A feature-based, robust, hierarchical algorithm for registering pairs of images of the curved human retina', IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(3), pp.347-364 (2002).
12. F. Laliberte, L. Gagnon, Y. Sheng, 'Registration and fusion of retinal images - an evaluation study', IEEE Transactions on Medical Imaging, 22(5), pp.661-673 (2003).
13. E. Jung, K. Hong, 'Automatic retinal vasculature structure tracing and vascular landmark extraction from human eye image', Proceedings of the IEEE International Conference on Hybrid Information Technology, vol. 12, pp.161-167 (2006).
14. A. M. Mendonca, A. Campilho, 'Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction', IEEE Transactions on Medical Imaging, 25(9), pp.1200-1213 (2006).
15. C. V. Stewart, C.-L. Tsai, B. Roysam, 'The dual-bootstrap iterative closest point algorithm with application to retinal image registration', IEEE Transactions on Medical Imaging, 22(11), pp.1379-1394 (2003).
16. T. Chanwimaluang, G. Fan, S. R. Fransen, 'Hybrid retinal image registration', IEEE Transactions on Information Technology in Biomedicine, 10(1), pp.129-142 (2006).
17. B. Fang, Y. Y. Tang, 'Elastic registration for retinal images based on reconstructed vascular trees', IEEE Transactions on Biomedical Engineering, 53(6), pp.1183-1187 (2006).
18. J. Chen, J. Tian, N. Lee, J. Zheng, R. Smith, A. F. Laine, 'A partial intensity invariant feature descriptor for multimodal retinal image registration', IEEE Trans. Biomed. Eng., 57(7), pp.1707-1718 (2010).
19. K. Deng, J. Tian, J. Zheng, X. Zhang, et al., 'Retinal Fundus Image Registration via Vascular Structure Graph Matching', International Journal of Biomedical Imaging (2010), Article ID 906067.
20. P. Palraj, L. Vennila, 'A Hybrid Retinal Image Registration Using Mutual Information', American Journal of Applied Sciences, 10(11), pp.1386-1391 (2013).
21. M. V. Wyawahare, P. M. Patil, H. K. Abhyankar, 'Image Registration Techniques: An overview', International Journal of Signal Processing, Image Processing and Pattern Recognition, 2(3) (2009).
22. S. Damas, O. Cordon, J. Santamaria, 'Medical Image Registration Using Evolutionary Computation', IEEE Computational Intelligence Magazine, 6(4), pp.26-42 (2011).
23. P. Gopikannan, S. Grahalakshmi, C. Subramanian, 'Identification of Vessels from retinal image analysis', International Journal of Innovative Research in Science, Engineering and Technology (IJIRSET), 3(3), pp.1057-1063 (2014).
24. N. Nithin, M. B. Anupkumar, R. Jayakrishna, 'Detection of Retinal Blood Vessel using Kirsch Algorithm', Journal of Computing, 5(1), pp.55-57 (2013).
25. H. S. Bhadauria, S. S. Bisht, A. Singh, 'Vessel Extraction from Retinal images', IOSR Journal of Electronics and Communication Engineering (IOSR-JECE), 6(3), pp.79-82 (2013).
26. D. E. Goldberg, 'Genetic Algorithms in Search, Optimization and Machine Learning', Addison Wesley (1989).
27. J. H. Holland, 'Adaptation in Natural and Artificial Systems', MIT Press edition; first edition: University of Michigan Press (1975).
28. C. K. Chow, H. T. Tsui, T. Lee, 'Surface Registration using a Dynamic Genetic Algorithm', Pattern Recognition, 37, pp.105-117 (2004).