Pamfiloiu Nicolae Thesis 08.07.2020 [624108]
“LUCIAN BLAGA” UNIVERSITY OF SIBIU
ENGINEERING FACULTY
DEPARTMENT OF COMPUTER SCIENCE, ELECTRICAL
AND ELECTRONICS ENGINEERING
DISSERTATION
SCIENTIFIC ADVISOR: Professor PhD. Eng. Volovici Daniel
GRADUATE:
Pamfiloiu Nicolae
Advanced Computing Systems
– Sibiu, 2020 –
“LUCIAN BLAGA” UNIVERSITY OF SIBIU
ENGINEERING FACULTY
DEPARTMENT OF COMPUTER SCIENCE, ELECTRICAL
AND ELECTRONICS ENGINEERING
Study of Image Fusion methods,
benchmarks and software tools
comparison
SCIENTIFIC ADVISOR: Professor PhD. Eng. Volovici Daniel
GRADUATE:
Pamfiloiu Nicolae
Advanced Computing Systems
Contents
1. Abstract
2. Introduction
3. Theoretical substantiation
3.1 Image processing fundamental concepts
3.1.1 Image acquisition and formation
3.1.2 Image processing based concepts
3.1.3 Image enhancement methods
3.2 Digital image fusion based techniques
3.2.1 Image fusion characteristics
3.2.2 Spatial domain image fusion techniques
3.2.2.1 Image fusion technique based on average method
3.2.2.2 Image fusion technique based on high pass filtering method
3.2.2.3 HSI transform based image fusion
3.2.2.4 Image fusion using Principal Component Analysis (PCA) method
3.2.3 Image fusion techniques based on frequency domain
3.2.3.1 Laplacian pyramid based image fusion
3.2.3.2 Image fusion based on discrete cosine transform
3.2.3.3 Image fusion based on wavelets techniques
3.2.3.3.1 Image fusion based on discrete wavelet transform (DWT)
3.2.3.3.2 Stationary wavelet transform based image fusion
4. Image fusion benchmarks, results and software applications comparison
4.1 Preprocessing steps usually used before image fusion
4.1.1 Noise reduction
4.1.2 Image registration process
4.1.3 Measurement methods used for quality of fused images
4.3.1 Characteristics
4.3.2 Benchmark images used for fusion of digital imaging
4.3.3 Image fusion usage in the medical imaging domain
4.3.4 Image fusion benchmarks in the medical imaging domain
4.3.5 Used software applications and results for image fusion
4.3.6 Advantages and disadvantages of image fusion
5. Conclusion
6. References
1. Abstract
In a world where more and more processes are being automated and many complex problems are being solved by machines, in most cases computers, the image processing field is no exception: it plays a very important part in solving complex problems by using a computer together with cameras and sensors that acquire images from the real world.
Image fusion can be considered a process of merging significant information from two or more images into one improved and more informative image. More precisely, the image fusion procedure can be defined as the integration of information from a collection of images of the same view or scene into a single, more informative image without introducing any distortion.
The applications of image fusion are vast and reach the most important areas of our society, such as the medical domain, where we can include medical image acquisition and medical diagnosis.
Other important domains where the image fusion process is used are navigation guidance and military and civilian surveillance. [46]
Also, in the robotics field, fused images are mostly used to analyze the frequency variations in the view of images. [46]
Image fusion is also used in satellite imaging for remote sensing, for example to compare satellite images of an area acquired at different times and thus observe how that area evolved, mostly from the aerial point of view.
In this paper, I am going to present and analyze some of the most used image fusion methods in the digital image domain. I will then present some pre-processing steps required before applying the image fusion process, such as image noise reduction and image registration, so that the image fusion process gives the best results. I will also present some image metrics used to evaluate image quality, in our case the quality of the fused image. In the final chapter, I am going to present some of the most used image fusion benchmark sets for digital images, as well as some benchmark sets for image fusion in the medical field. Afterwards, I will do a short review of some image fusion tools and packages that I found on the web: I will evaluate which fusion techniques they implement, what results are obtained by using those tools, and what the tools can be used for further, for example for testing image fusion techniques in your own project or in a future research study.
Among the image fusion research directions, many papers are available regarding the comparison of image fusion algorithms, and there is also a growing research interest in the medical field for using image fusion techniques, because many images are acquired by different medical devices, especially in radiology, by devices like MRI, CT or PET. Image fusion techniques can be used successfully for combining images from different sources in order to obtain an improved image that can help medical specialists reach a medical diagnosis more easily, and more importantly, a correct one.
Regarding the comparison of image fusion software applications, from what I have searched on the web I can say that a study like this has not been available so far.
2. Introduction
Digital image fusion is part of the image processing subject. It represents a technique where two or more images of the same scene are fused by applying a specific algorithm in order to obtain more and better details about image entities or targeted image areas, and a higher resolution image as output.
Regarding image fusion, after the images are acquired, many complex operations are used to interpret and process those images.
These operations applied to images are part of the large domain of image processing, and they refer to different techniques and algorithms.
So we can conclude that image processing is the study of algorithms that take an image as input and return a matrix of features as output.
Furthermore, as explained above, we can conclude that in an electronic system like a computer, images are represented as a matrix with rows and columns, where every element of the image/matrix is called a pixel.
Some applications of image processing include image restoration and increasing the brightness of an image; image processing is also used in the medical field for performing medical diagnosis based on the images acquired by medical devices like CT or MRI scanners.
In the next chapters of this paper, we are going to briefly approach some basic image processing and image enhancement characteristics, and after that we are going to study in depth the basic image fusion techniques and algorithms.
We are also going to analyze some image fusion applications and techniques for digital images and, going further, some of the most promising image fusion applications and techniques in the medical area, where research is still advancing.
At last, some advantages and disadvantages of the image fusion techniques will be presented, along with some conclusions about these techniques.
3. Theoretical substantiation
3.1 Image processing fundamental concepts
3.1.1 Image acquisition and formation
The main aim of image acquisition is to transform an optical image, the image that is taken from the real world, into an array of numerical data which can be easily manipulated on a computer or by a computer. [1]
The images in which we tend to have an interest are generated by the combination of an illumination source and the reflection of energy from that source by the elements of the scene that is to be imaged.
The illumination may originate from a source of electromagnetic energy such as radar, infrared, X-ray, or ultrasound systems, or even a computer-generated illumination pattern.
In order to perform image acquisition, we need at least a sensor; the sensor which performs image acquisition is called a photodiode, and with the use of a filter in front of the sensor the image selectivity is improved.
In order to generate an image using only one sensor, there must be relative displacement in both the x and y coordinate directions between the sensor and the area that is to be pictured.
Figure 1
In digital cameras, a CCD array of sensors is used for image formation. A CCD, or charge-coupled device, is an image sensor that converts the captured image into an electric signal. A CCD is like a matrix where each cell contains a small sensor that senses the intensity of incident photons. [2]
Figure 2
Also, as in analog cameras, in the digital case too, when light falls on the object, the light reflects after striking the object and is allowed to enter inside the camera. [2]
3.1.2 Image processing based concepts
Image processing is a form of signal processing for which the input is an image, and the output of image processing can also be an image, one that usually has many improved characteristics resulting from the processing. [3]
Digital image processing is the use of computer processing speed and algorithms for performing processing on numerical or digital images. There are some benefits of digital image processing, enumerated below:
a high-quality image is obtained
image processing is a low-cost process
the process itself gives the ability to easily manipulate all aspects of the image.
There are various image processing techniques, which are explained one by one below:
1. Image Representation – an image is represented as a function of two real variables f(x,y), where f represents the amplitude or brightness of the image at the real coordinate position (x, y). The image represented by f(x,y) is divided into N rows and M columns, where the intersection of a row and a column is called a pixel. [3]
2. Image Pre-processing – it is used to remove noise, which is unwanted information that can result from the image acquisition process, and to eliminate other irrelevant or unnecessary information. Other processes included in this category are image scaling, image rotation, and image mosaicking. Image scaling represents the process of resizing an image [3], which means that by magnifying or zooming the interesting part of the image we can have a closer look at some details [4]. However, too much zooming can lead to the loss of some clear details from the image, so the resizing process must be done taking the image resolution into account. Another method applied to images is the mosaic. A mosaic is a process of mixing two or more images to form a single large image. Mosaicking is required to get a synoptic view of an entire area that would otherwise be captured as small separate images. [4] Image rotation is a common image processing routine with applications in matching, alignment, and other image-based algorithms. The input to an image rotation routine is an image, the rotation angle θ, and a point about which rotation is done. [5]
3. Image restoration – it refers to the removal or minimization of degradations in an image. This includes de-blurring of images degraded by the limitations of a sensor or its environment, noise filtering, and correction of geometric distortion or non-linearity due to sensors. It is the process of taking an image with some known, or estimated, degradation and restoring it to its original appearance. [3]
4. Image Analysis – it consists of making measurements on the image in order to produce a description of the data that the image contains, and to determine exactly the information necessary to help solve a computer imaging problem like object detection or image classification. [3]
5. Image reconstruction – it is a process used to retrieve image information that has been lost in the process of image acquisition and image formation. [3]
6. Image data compression – it involves reducing the amount of data needed to represent an image, usually the data that is visually unnecessary, by taking advantage of the redundancy that is present in most images. [3]
7. Image segmentation – it is the process that decomposes an image into its constituent parts or objects, and it is usually used to recognize the objects and characteristics in the input image. Segmentation is considered to be one of the key problems in image processing. [4]
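To make the segmentation idea above concrete, here is a minimal Python/NumPy sketch (the function names are my own, not taken from the cited sources) that picks a global threshold by iterating between the two class means and then decomposes the image into object and background regions:

```python
import numpy as np

def isodata_threshold(img, eps=0.5):
    """Pick a global threshold by iterating between the two class means."""
    t = img.mean()                             # start from the global mean
    while True:
        fg, bg = img[img >= t], img[img < t]
        if fg.size == 0 or bg.size == 0:       # degenerate split: keep current t
            return t
        new_t = (fg.mean() + bg.mean()) / 2.0  # midpoint of the class means
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

def segment(img, t):
    """Decompose the image into object (1) and background (0) regions."""
    return (img >= t).astype(np.uint8)
```

This is only one of many segmentation strategies; the survey text above also covers segmentation driven by object characteristics rather than raw intensity.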
3.1.3 Image enhancement methods
The aim of image enhancement is to improve the interpretability or perception of information in images for human viewers, or to provide better input for other automated image processing techniques [6]. Image enhancement techniques can be divided into two broad categories:
Spatial domain techniques – these techniques operate directly on the pixels of an image [6].
Frequency domain techniques – these techniques operate on the Fourier transform of an image; applied to an image, the frequency refers to the rate of change of pixel values [6].
Image enhancement methods are an integral part of image processing, and they involve taking an image and improving it visually, typically by taking advantage of the human visual system's responses. [3]
Some of the basic and best-known image enhancement methods are contrast adjustment, amplitude scaling, histogram changes, including adaptive and non-adaptive histogram changes, and image frequency filtering methods, which are shortly described below:
contrast adjustment – adjusting the contrast of the input image is done by rescaling the intensity (gray level) of each pixel. The first step involves determining the minimum and maximum gray levels of the given image. Then, for each pixel, the value is scaled so that it lies in the new intensity range. The output image will not have all the levels in the input image range, and some gray-level transitions will be larger than in the original image. This processing can result in a contouring effect in gray levels, producing an image that can be more easily analyzed by humans or specialized software. [7]
amplitude scaling – a digitally processed image may occupy a different range from the original image range. In fact, the numerical range of a processed image may include negative values, which cannot be transferred directly into the light intensity range. One method is to map the extreme values of the resulting image amplitude onto the maximum and minimum limits of the original image. This technique is preferred especially in situations where the image contains a small number of pixels that exceed the limits. The pixel limits of an image are between 0 and 255. [7]
histogram changes – the luminance histogram of a natural image that has been linearly quantized usually contains considerably many dark levels; most pixels in the image have a below-average luminance. In such images, the details of the dark regions are not at all perceptible. One means of improving these types of images is the technique called "histogram modification", in which the original image is rescaled in such a way that the histogram of the improved image follows a desired shape. Histogram equalization is a non-adaptive histogram modification method and has the role of highlighting information that can be difficult to identify in the original image. [7] Adaptive histogram modification consists of applying spatially adaptive transformations, modifying the histogram at each point in the image based on the histogram of the neighboring points located inside a window. This technique is computationally complex because it requires computing the histogram, computing the modification function, and applying this function at each point in the image. [7]
frequency filtering – frequency filters process an image in the frequency domain. The image is Fourier-transformed into the frequency domain, some image operations are performed there, and the result is transformed back into the spatial domain. Attenuating high frequencies results in a smoother image in the spatial domain, while attenuating low frequencies enhances the edges. [8]
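Two of the enhancement methods described above, min-max contrast adjustment and frequency-domain filtering, can be sketched in a few lines of Python with NumPy. This is an illustrative sketch, not an implementation from the cited references; the function names and the circular cutoff are my own choices:

```python
import numpy as np

def stretch_contrast(img, new_min=0, new_max=255):
    """Linearly rescale pixel intensities to [new_min, new_max]."""
    img = img.astype(np.float64)
    old_min, old_max = img.min(), img.max()    # step 1: find the gray-level range
    if old_max == old_min:                     # flat image: nothing to stretch
        return np.full_like(img, new_min, dtype=np.uint8)
    # step 2: scale each pixel into the new intensity range
    out = (img - old_min) / (old_max - old_min) * (new_max - new_min) + new_min
    return out.astype(np.uint8)

def lowpass_filter(img, cutoff=30):
    """Smooth an image by attenuating high frequencies in the Fourier domain."""
    F = np.fft.fftshift(np.fft.fft2(img))      # transform; move DC to the centre
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    F[dist > cutoff] = 0                       # zero out the high frequencies
    # back to the spatial domain; residual imaginary parts are numerical noise
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```

Keeping only the low frequencies smooths the image, as the text notes; a high-pass variant would instead zero the frequencies inside the cutoff radius to enhance edges.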
3.2 Digital image fusion based techniques
3.2.1 Image fusion characteristics
Image fusion represents a process by which two or more images are combined into a single image while keeping the important features of the original images. [9]
Another definition is that image fusion mainly represents a method of combining significant information from two or more images into a single new image. The new image, which represents the output image, should be more informative than any of the input images; the output image should also be enhanced in quality, which means it should have better accuracy and better visualization. [10]
Image fusion techniques are often used in visual sensor networks, where multiple images, mainly of the same scene, are acquired by different visual sensors and fused together in order to form a single image that contains the relevant information and highlights useful features, but the fusion process must be done without introducing inconsistencies into the fused image. [11]
For image fusion applications, depending on the application type, there are different fusion levels with different fusion algorithms. The main fusion categories include multiview fusion, meaning that several images of the same scene are taken by the same sensor but from different viewpoints, and these images are fused in order to obtain an image of higher resolution than the sensor resolution itself. Another fusion category is multimodal fusion, which means fusion of images from different sensors placed in different devices such as CT, MRI or NMR scanners. Going on, another fusion category is multitemporal fusion, whose aim is to identify dissimilarities over time. [10]
For example, images of the same scene are acquired at different times in order to evaluate changes in the scene; this method is often used in medical imaging to evaluate, for example, how a tumor changes over time, or in remote sensing for monitoring land or forest exploitation and how they evolve over time.
Another fusion category, and our last, where image fusion techniques are used is multi-focus image fusion: for example, a multi-focus fusion of images of a 3D scene taken repeatedly with various focal lengths in order to obtain a higher resolution and better-detailed image as output. [10]
In conclusion, the purpose of image fusion, and its most important characteristic, consists of decreasing the volume of acquired data, holding significant features, removing artifacts, and providing an output image that is more suitable for human, but more importantly for machine, interpretation. [12]
3.2.2 Spatial domain image fusion techniques
The subsections that follow present some image fusion techniques based on the spatial domain.
3.2.2.1 Image fusion technique based on average method
Image fusion based on the average method is considered to be one of the simplest image fusion techniques. The fused image is obtained by averaging every pair of corresponding pixels of the input images. [13]
The regions of an image that are in focus have higher pixel intensity, so with this algorithm we can obtain an output image that has all regions in focus. The values of the pixels P(i,j) of the source images are added and divided by 2 to obtain the average value, which is assigned to the corresponding pixel of the output image using the following equation [13]:
f(i,j) = (X(i,j) + Y(i,j)) / 2
where X(i,j) and Y(i,j) are the two input images.
The averaging fusion method has a limitation, since we know that a single image does not focus on all regions in a scene. [13] So the input images can be modified in order to have the regions that interest us in focus, and then the averaging fusion algorithm can be applied.
The first method that can be applied is called the Maximum Selection Method, which consists of selecting the maximum intensity values of the corresponding pixels. [10]
Applying this algorithm to the source images highlights the pixels of higher intensity, so the important characteristics of the scene become more obvious; in other words, some objects in the scene are in better focus.
Another method that can be applied to the source images is the Minimum Selection Method, which acts like the Maximum Selection Method but does the opposite: it highlights the minimum values of the corresponding pixels it is applied to. [10]
So, by applying the Maximum Selection Method together with the Minimum Selection Method on the input images, we can obtain different images with the high pixel values and with the low pixel values. After these images are obtained, we can apply the average image fusion method in order to obtain a better fused output image, one which better highlights the details available in the input images.
In conclusion, the image fusion averaging method highlights the regions of the images that are in focus, which have higher pixel-level intensity compared to the other regions of the images. The average method of fusion is mainly used to obtain an output image in which all regions are in focus. The average value obtained is assigned to the corresponding pixel of the output image. [11]
The main advantages of this method are that it is easy to implement and fast to run. The major disadvantage is that clear objects are not obtained by using this method. [11]
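The averaging, maximum-selection, and minimum-selection rules described above reduce to pixel-wise NumPy operations. A minimal sketch, assuming two registered, equal-size grayscale images (the function names are illustrative):

```python
import numpy as np

def average_fusion(img_a, img_b):
    """Pixel-wise average of two registered source images: (X + Y) / 2."""
    return (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0

def max_selection(img_a, img_b):
    """Keep the brighter (higher-intensity) pixel at each location."""
    return np.maximum(img_a, img_b)

def min_selection(img_a, img_b):
    """Keep the darker (lower-intensity) pixel at each location."""
    return np.minimum(img_a, img_b)
```

As the text suggests, the max- and min-selected images can themselves be fed into `average_fusion` to better preserve the details of both extremes.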
3.2.2.2 Image fusion technique based on high pass filtering method
Image fusion based on high-pass filtering is used for high-resolution multispectral imagery. The high-frequency information from the high-resolution images is added to the low-resolution multispectral images to obtain the fused output image. [15]
The high-pass filter (HPF) resolution merge function allows combining high-resolution panchromatic images with low-resolution multispectral images, so that the resulting fused output image has both the excellent detail of the panchromatic images and a realistic representation of the spectral content of the original multispectral scene. [16]
The high-pass filtering fusion algorithm involves a convolution using the HPF on the high-resolution images and then combining the obtained images with the lower resolution multispectral images. [16]
The HPF algorithm steps are as follows:
read the pixel sizes from the image files and calculate R, the ratio of the multispectral image cell size to the high-resolution cell size.
high-pass filter the high spatial resolution image.
resample the multispectral image to the pixel size of the high-pass image.
add the HPF image to each multispectral band. The HPF image is weighted relative to the global standard deviation of the multispectral band.
stretch the new multispectral image to match the mean and standard deviation of the original multispectral image. [16]
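The steps above can be sketched in Python/NumPy for a single band. This is a simplified illustration, not the cited HPF resolution merge itself: it assumes the panchromatic image and the multispectral band are already resampled to the same size (so the ratio and resampling steps are skipped), and it uses a basic box-blur-based high-pass in place of a tuned convolution kernel:

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k x k mean filter (edges handled by replicating border pixels)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hpf_fusion(pan, ms_band, weight=0.5):
    """Inject high-frequency pan detail into one (already resampled) MS band."""
    pan = pan.astype(np.float64)
    ms = ms_band.astype(np.float64)
    high = pan - box_blur(pan)                 # high-pass = original - low-pass
    fused = ms + weight * high                 # add weighted detail to the band
    # stretch the fused band back to the mean / std of the original MS band
    fused = (fused - fused.mean()) / (fused.std() + 1e-12) * ms.std() + ms.mean()
    return fused
```

The final stretch mirrors the last algorithm step: it keeps the fused band's statistics aligned with the original multispectral band so the spectral character is approximately preserved.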
The output fused image has a lighter tone than the input multispectral image. Also, the texture of the image is very smooth and the sharpness is lower, which results in less clarity of objects. [16]
Figure 3.
Fused image using HPF method
The main advantage of this method is that the 3D details present in the image are preserved. [12] This method is best used in systems that acquire spatial and spectral images, so that the fusion serves to preserve all these details in a single image.
3.2.2.3 HSI transform based image fusion
The IHS (intensity, hue, saturation) technique is a standard procedure in image fusion, but this procedure has a major limitation, which is that only three bands are involved. [18]
Originally, it was based on the RGB true color space, but the IHS representation offers the advantage that its separate channels outline certain color properties. This color space is often chosen because the visual cognitive system of human beings tends to treat these three components as orthogonal perceptual axes. [18]
The IHS technique usually comprises four steps: [18]
transform the red, green, and blue (RGB) channels (corresponding to three multispectral bands) to IHS components
match the histogram of the panchromatic image with the intensity component
replace the intensity component with the stretched panchromatic image
inverse-transform the IHS channels to RGB channels.
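The four steps above can be sketched with a common additive shortcut: for the simple linear intensity definition I = (R + G + B) / 3, replacing I with the pan band and inverse-transforming is algebraically equivalent to adding (pan − I) to each band. This is an illustrative sketch only; the histogram-matching step is omitted here, and the function name is my own:

```python
import numpy as np

def fast_ihs_fusion(rgb, pan):
    """Additive IHS-style fusion of an RGB stack (H, W, 3) with a pan band (H, W).

    Assumes the images are co-registered and uses the linear intensity
    I = (R + G + B) / 3; adding (pan - I) to every band is equivalent to
    substituting the intensity and applying the inverse transform.
    """
    rgb = rgb.astype(np.float64)
    intensity = rgb.mean(axis=2)                # step 1: forward transform (I only)
    delta = pan.astype(np.float64) - intensity  # step 3: replace I with pan
    return rgb + delta[..., None]               # step 4: inverse transform
```

Skipping the histogram-matching step (step 2) is what makes this a sketch: in practice the pan band is first stretched to the intensity component's statistics to limit the color distortion mentioned below.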
The IHS technique is the earliest technique used for image fusion. Intensity, hue, and saturation represent three basic properties of a color which give a visual representation of an image. The IHS is a color space where hue represents the wavelength of a color; more precisely, the hue represents the color itself in the form of an angle in [0 deg, 360 deg], where 0 deg means red, 120 deg means green, 240 deg means blue, 60 deg means yellow, and 300 deg means magenta. Saturation represents the total amount of white light in a color, and intensity is the overall lightness or brightness of the color. [17] [18] In other words, this method transforms the RGB values of an image into an IHS image, and then the reverse transformation is applied to get an RGB image as output.
In the RGB color model, colors are represented by the amounts of red, green, and blue light reflected from an image, expressed numerically with values ranging from 0 to 255, where (0,0,0) is the lowest value and represents black and (255,255,255) is the highest value and represents white. [17]
The HSI model represents every color by its hue, saturation, and intensity, which need to be meticulously controlled because they contain almost all the spectral data. [9]
In the fusion of high-resolution spatial images and multispectral images, the data of high spatial resolution is fused with the spectral information. The IHS technique is based on the principle of replacing one of the three components (I, H, or S) of one data set with another image.
The IHS transform is applied to the low spatial resolution images, and then the intensity component is replaced by the high spatial resolution image.
The inverse IHS transform is applied to the new set of components to form the fused image. The IHS technique is one of the most frequently used fusion methods for sharpening. [9]
For image analysis, the most widely used perceptual color model is the IHS model. In the classical IHS model, the intensity, saturation, and hue expressions are [19]:
I = (R + G + B) / 3
S = 1 − 3·min(R, G, B) / (R + G + B)
H = cos⁻¹{ [(R − G) + (R − B)] / [2·sqrt((R − G)² + (R − B)(G − B))] }
The advantages of the IHS transform for image fusion are that it is simple, efficient, and fast to process. The main disadvantage is that it results in color distortion in the output fused image. [13]
3.2.2.4 Image fusion using Principal Component Analysis (PCA) method
The Principal Component Analysis, or shortly PCA, is a statistical method that performs a linear mapping to extract optimal features from an input distribution in the mean-squared error sense. The input data for the PCA algorithm are represented as input vectors expressed in terms of the eigenvectors of their covariance matrix Rx. [20]
The PCA can be implemented as a neural network. For a network with n input units and m output units, the activation of an output unit k is given by the following formula:
yk = wkT·x + Σj uj,k·yj
where yk denotes the activation of the output unit k, x = [x1, x2, …, xn]T denotes the input vector, wk = [w1,k, w2,k, …, wn,k]T denotes the weight vector of the output unit k coming from the input layer, and uj,k denotes the weight of the connection from the output unit j to the output unit k. [20]
The weights between the layers are updated iteratively so that the weight vectors converge to the eigenvectors of Rx, the covariance matrix created from the input data.
PCA is an unsupervised dimension reduction technique in which we seek an orthonormal basis W = (w1, w2, …, wd) with d << MN, such that each individual image can be adequately represented as a linear combination of this basis. This requires that the error obtained when the input vector is reconstructed from its low dimensional representation A is minimal. We achieve this goal as follows. Given a training set of K input vectors ak, k ∈ {1, 2, …, K}, we seek the directions that have the largest variances in the MN-dimensional input space. The sub-space is reduced to a low dimension d by discarding those directions along which the training vectors have a small variance. [19]
So, the PCA image fusion method uses the pixel values of all source images at each pixel location, adds a weight factor to each pixel value, and takes an average of the weighted pixel values to produce the result for the fused image at the same pixel location. The optimal weighting factors are determined by the PCA technique. [21]
It is a statistical technique that transforms a multivariate data set of inter-correlated variables into a data set of new, uncorrelated linear combinations of the original variables. It generates a new set of axes which are orthogonal. By using this method, the redundancy of the image data can be decreased. [21]
Principal Component Analysis (PCA) is also known as the Hotelling transform. It transforms a number of correlated variables into a number of uncorrelated variables called principal components. This property of the PCA is used in image fusion. The PCA is used to reduce the dimensionality of the input data set with very little loss of data. [22]
The steps of the PCA algorithm for image fusion are the following: [22]
the two images to be fused are arranged in two column vectors.
compute the empirical mean along each column.
next, subtract the empirical mean from each column of the data matrix respectively.
the covariance matrix 𝐶 is obtained from the mean-subtracted data matrix. It will be of dimension 2×2.
obtain the eigenvalues 𝐷 and eigenvectors 𝑉 from the covariance matrix and sort them by decreasing eigenvalues. The resulting matrix is of dimension 2×2.
then compute the normalized components by using P1 = V(1)/ΣV and P2 = V(2)/ΣV.
the fused image is obtained by If(i,j) = P1·I1(i,j) + P2·I2(i,j).
Figure 4 Block diagram for image fusion algorithm
The PCA algorithm adopted for image fusion projects the data from the original space to the eigenspace, improving its variance and minimizing the covariance by preserving the components corresponding to the significant eigenvalues and discarding the others, so as to enhance the signal-to-noise ratio. The PCA is a statistical technique that is used to transform the multivariate dataset of correlated variables into a dataset of uncorrelated linear combinations of the original variables. The input images (images to be fused) are arranged in two column vectors and their empirical means are subtracted. The eigenvectors and eigenvalues of the resulting vector are computed, and the eigenvector corresponding to the larger eigenvalue is obtained. The normalized components P1 and P2 are computed from the obtained eigenvector. The fused image is obtained by I = P1*i1(i,j) + P2*i2(i,j). [23]
The PCA technique determines the weights by calculating the eigenvalues and the
eigenvectors from the image matrices. The images to be fused, I1 and I2, are arranged in two
column vectors and their empirical means are subtracted. The resulting vector has a dimension
N x 2, where N is the length of each image vector. The eigenvectors and eigenvalues of the
resulting vector are computed and the eigenvector corresponding to the larger eigenvalue is
obtained. The normalized components P1 and P2 (with P1 + P2 = 1) are computed from the
obtained eigenvector. [17]
The obtained fused image is If(i, j) = P1·I1(i, j) + P2·I2(i, j).
In the fusion process, the PCA method generates uncorrelated images (PC1, PC2, …, PCn,
where n is the number of input multispectral bands). The first principal component (PC1) is
replaced with the panchromatic band, which has a higher spatial resolution than the multispectral
images. Afterward, the inverse PCA transformation is applied to obtain back the image in the
RGB color model. [25]
The advantages of the PCA based image fusion method are that the method is very simple to
implement and understand, is computationally efficient, has a faster processing time, and
provides high spatial quality. [24]
The disadvantages of PCA based image fusion are that it results in high spectral degradation
and color distortion. [24]
3.2.3 Image fusion techniques based on frequency domain
In the subsections that follow, I am going to present some image fusion techniques based
on the frequency domain.
3.2.3.1 Laplacian pyramid based image fusion
The image pyramid is a representation of an image on multiple scales. It is constructed by
performing repeated smoothing and subsampling on the image. It is used to extract the features
of interest, attenuate noise, reduce redundancy, and enhance the image for efficient coding. [17]
The basic principle of the Laplacian pyramid based image fusion technique is to
decompose the input image into a pyramid structure. A pyramid structure consists of many levels
of source images that are obtained repeatedly by filtering the lower-level image with a low pass
filter. [13]
Afterward, the fusion algorithm is applied at each level of the pyramid using feature
selection, which selects the most significant pattern from the source images and discards the least
significant pattern; then, the inverse pyramid transform is applied to get the resultant fused
image. [13]
The Laplacian pyramid is a pattern-selective approach to the image fusion process. In this
method, fusion takes place at the feature level: image pyramids represent image features at
different levels of resolution, and different scales require different filters. [26]
The Laplacian pyramid works on the difference between low pass and high pass filtered
versions of the image. A strength measure is used to decide which source contributes the pixels
at each specific sample location. The two pyramids are combined level by level; the resulting
image at each level is a simple average of the two low-resolution images. Decoding of the
image is done by expanding, then summing, all the levels of the fused pyramid obtained by
simple averaging. [26]
The Laplacian pyramid is basically a sequence of increasingly filtered and downsampled
versions of an image. The Laplacian is then computed as the difference between the original
image and the low pass filtered image. This process is continued to obtain a set of band-pass
filtered images. Thus the Laplacian pyramid is a set of band-pass filters. [26]
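A minimal NumPy sketch of this decompose / select / decode cycle is shown below. It is a simplified stand-in, not the classical Burt–Adelson implementation: the smoothing is a plain 2×2 block average instead of a Gaussian kernel, expansion is pixel replication, and the detail levels are combined with a max-magnitude strength measure; all function names are my own.

```python
import numpy as np

def down(img):
    """Smooth-and-subsample: 2x2 block averaging (a simple low pass stand-in)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img, shape):
    """Expand back to `shape` by pixel replication."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Band-pass levels are differences between the image and its blurred copy."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = down(cur)
        pyr.append(cur - up(small, cur.shape))  # detail (Laplacian) level
        cur = small
    pyr.append(cur)                             # coarsest approximation
    return pyr

def fuse_laplacian(a, b, levels=3):
    """Max-magnitude selection on detail levels, averaging on the top level."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2)         # average the coarsest level
    # Decode: expand and sum the levels from coarse to fine.
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = up(out, lvl.shape) + lvl
    return out
```

Because each detail level is defined as the exact difference between an image and its expanded blurred copy, the decode step reconstructs a source image perfectly when both inputs are identical (for even image dimensions).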
Figure 5. Laplacian pyramid decomposition
Image pyramids are a multiresolution analysis model. A schematic diagram of the
Laplacian pyramid fusion method is shown below: [25]
Figure 6. Block scheme of the Laplacian pyramid fusion method
The Laplacian pyramid uses several modes of combination, such as selection or
averaging. In the first one, the combination process selects the component pattern from
the source and copies it to the composite pyramid, while discarding the less significant
pattern. In the second one, the process averages the source patterns. [25]
In conclusion, it has been found that the standard pyramid fusion methods
perform well, and the pyramid offers a useful image representation for a number of
tasks. [26]
One of the main advantages of the pyramid methods is that they provide
good visual quality of the fused image for multi-focus images. The
disadvantage of the pyramid methods is that all the pyramid decomposition based fusion
methods produce more or less similar output, and the number of decomposition levels
affects the image fusion result. [24]
3.2.3.2 Image fusion based on discrete cosine transform
Discrete Cosine Transform (DCT) is a technique in which the fused images are
represented in the frequency domain. Finite data points of the fused images are
represented in terms of a sum of cosine functions of different frequencies. This
technique finds its major application in MPEG and JPEG, as it reduces the
complexity by decomposing the input images into a series of waveforms. [13]
The images to be fused are divided into blocks of size N×N. The DCT coefficients
are then calculated by different methods. Further, the IDCT technique is applied to the
fused coefficients to get the fused image. The same procedure is repeated for each
block. [13]
The fusion rules used to calculate the fused DCT coefficients are the following:
average coefficients – in this method, the DCT coefficients obtained
from the corresponding blocks are averaged to get the
fused DCT coefficients. [13]
maximum value coefficient – the direct current (DC) components from both
image blocks are averaged together, and the largest magnitude
alternating current (AC) coefficients are chosen; the AC
coefficients are the detail coefficients that correspond
to sharp brightness changes in the images, such as edges and
object boundaries, but the quality of the fused image depends on
the block size, which should not be less than 8×8. [13]
Figure 7. Block Diagram of DCT based image fusion method
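The block-wise maximum value rule can be sketched as follows in NumPy. This is an illustrative sketch under stated assumptions: an orthonormal DCT-II basis matrix is built directly (rather than calling a library DCT), the image dimensions are assumed to be multiples of the block size, the DC term is averaged and the remaining coefficients are chosen by magnitude; the function names are my own.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def fuse_dct(a, b, n=8):
    """Block-wise DCT fusion: average DC terms, keep max-magnitude AC terms."""
    c = dct_matrix(n)
    out = np.zeros_like(a, dtype=float)
    for i in range(0, a.shape[0], n):
        for j in range(0, a.shape[1], n):
            da = c @ a[i:i+n, j:j+n] @ c.T        # 2-D DCT of each block
            db = c @ b[i:i+n, j:j+n] @ c.T
            df = np.where(np.abs(da) >= np.abs(db), da, db)
            df[0, 0] = (da[0, 0] + db[0, 0]) / 2  # DC coefficient: average
            out[i:i+n, j:j+n] = c.T @ df @ c      # inverse DCT (C is orthogonal)
    return out
```

Since the basis matrix is orthonormal, the forward/inverse pair round-trips exactly, so fusing an image with itself returns the image unchanged.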
In conclusion, the DCT has reduced complexity and decomposes the images into a series
of waveforms; this algorithm is very suitable for real applications due to its reduced
complexity, although one major disadvantage of this method, as mentioned above, is that the
fused image is not of good quality if the block size is less than 8×8 or equal to the image
size itself. [24]
3.2.3.3 Image fusion based on wavelets techniques
The wavelet transform is a mathematical tool that can be used to decompose two-dimensional
(2D) signals, such as 2D image signals of color or gray-scale images, into various
resolution levels for multi-resolution image analysis. [27]
Also, the wavelet transform is considered an alternative to the short-time Fourier
transform, because one advantage of the wavelet is that this kind of transform provides
the desired resolution in the time domain as well as in the frequency domain, in comparison to
the Fourier transform, which gives a good resolution only in the frequency domain. In the
Fourier transform, the signal is decomposed into sine waves of different frequencies,
unlike the wavelet transform, which decomposes the signal into scaled and shifted forms of
the given wavelet function. [15]
Wavelets are finite-duration, finite-energy oscillatory functions with zero
average value. Wavelets can be defined using two functions: the father wavelet, or
scaling function, and the mother wavelet, which is the wavelet function itself. [22]
Wavelets are the foundation for representing images at various degrees of
resolution, and they also provide a new and powerful approach to signal
processing and analysis called multi-resolution theory. [28]
Also, for image fusion, the most common transform-type algorithms
are the wavelet fusion algorithms, because of their simplicity and their ability to preserve the
time and frequency details of the images to be fused. [28]
3.2.3.3.1 Image fusion based on discrete wavelet transform (DWT)
The wavelet transform is a special case of sub-band coding and is becoming very
popular for image and video coding. Although sub-band coding of images is based on
frequency analysis, the wavelet transform is based on approximation theory. However,
for natural images that are locally smooth and can be modeled as piecewise polynomials,
a properly chosen polynomial function can lead to a frequency domain analysis like that of
sub-band coding. In fact, wavelets provide an efficient means for approximating such functions
with a small number of basis elements. [29]
Mathematically, a wavelet transform of a square-integrable function x(t) is its
decomposition into a set of basis functions, such as [29]

Xw(a, b) = ∫ x(t) ψ*a,b(t) dt

where ψa,b(t) is known as the basis function, which is a time dilation and translation
version of a band-pass signal Ψ(t), called the mother wavelet, and is defined as [29]

ψa,b(t) = (1/√|a|) Ψ((t − b)/a)

where a and b are the time dilation and translation parameters,
respectively.
As the wavelet transform maps a one-dimensional signal x(t) into a two-dimensional
function Xw(a, b), this increase in dimensionality makes it extremely redundant, and the original
signal can be recovered from the wavelet transform computed on discrete values of a and b.
The parameter a can be made discrete by choosing a = a0^m, with a0 > 1 and m an integer. As a increases,
the bandwidth of the basis function (or frequency resolution) decreases, and hence more
resolution cells are needed to cover the region. Similarly, making b discrete corresponds to
sampling in time (the sampling frequency depends on the bandwidth of the signal to be sampled,
which in turn is inversely proportional to a); it can be chosen as b = n·b0·a0^m. For a0 = 2 and
b0 = 1, there are choices of Ψ(t) such that the functions Ψm,n(t) form an orthonormal basis of the
space of square-integrable functions. This implies that any square-integrable function x(t) can be
represented as a linear combination of the basis functions as

x(t) = Σm,n αm,n Ψm,n(t)

where the αm,n are known as the wavelet transform coefficients of x(t) and are obtained from the
following equation [29]

αm,n = ∫ x(t) Ψ*m,n(t) dt
For the fusion of images, the discrete wavelet transform (DWT) provides a good
resolution in both the time and the frequency domain by using low pass filters and high pass filters. [11]
The discrete wavelet transform (DWT) is a time-scale representation of the digital signal
obtained by using digital filtering techniques. The discrete wavelet transform (DWT)
decomposes the input image into spatial frequency bands of various levels, such as the low-high
(LH), high-low (HL), high-high (HH) and low-low (LL) groups. [23]
Figure 8. Multi-band wavelets
In this method, the input images are decomposed into two kinds of sub-bands, low sub-bands and
high sub-bands, using the discrete wavelet transform, and then these sub-bands are merged. The
last step is to apply the inverse discrete wavelet transform on the merged coefficients of the low and
high sub-bands in order to obtain the output fused image. [11]
Figure 8. Wavelet transform applied on an input image
A general image fusion rule when using the discrete wavelet transform is to select the
coefficients whose values are higher and which have many dominant features visible at each scale, in
order for them to be preserved in a new multi-resolution image representation; then a new image is
constructed by performing an inverse discrete wavelet transform in order to obtain the fused
image. [23]
In the discrete wavelet transform image fusion algorithm, the input images are down-sampled
after each level of transformation using the DWT. Down-sampling is performed by
keeping one out of every two rows and columns, making the transformed image one
quarter of the original image size and half of the original image resolution. [23] The scheme of
image fusion using the DWT is given below:
Figure 9. Discre te wavelet fusion scheme
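The decompose / merge / inverse-transform pipeline can be sketched in NumPy using the Haar wavelet, the simplest possible choice. This is a single-level sketch under stated assumptions (even image dimensions, average rule for the LL band, max-magnitude rule for the detail bands); the function names are my own, not a library API.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar DWT: returns LL, LH, HL, HH sub-bands."""
    a = img.astype(float)
    # Rows: low pass = pairwise average, high pass = pairwise difference.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # Columns: repeat the same split on both row outputs.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh
    hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def fuse_dwt(a, b):
    """Average the LL band, take max-magnitude coefficients in detail bands."""
    ca, cb = haar2d(a), haar2d(b)
    fused = [(ca[0] + cb[0]) / 2]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(np.where(np.abs(da) >= np.abs(db), da, db))
    return ihaar2d(*fused)
```

In practice a library such as PyWavelets offers many wavelet families and multiple decomposition levels; the Haar pair above is just the smallest self-contained example of the scheme in Figure 9.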
3.2.3.3.2 Stationary wavelet transform based image fusion
The stationary wavelet transform (SWT) is very similar to the discrete wavelet
transform; the only difference is that the process of downsampling is suppressed, which is why the
stationary wavelet transform is translation invariant. [30]
It does so by suppressing the down-sampling step of the decimated algorithm and
instead up-sampling the filters by inserting zeros between the filter coefficients.
Algorithms in which the filter is up-sampled are called “à trous”, meaning “with holes”.
[30]
As with the decimated algorithm, the filters are applied first to the rows and then
to the columns. In this case, however, although the four images produced (one
approximation and three detail images) are at half the resolution of the original, they are
the same size as the original image. [30]
The scheme of image fusion using stationary wavelet transform (SWT) is given
below:
Figure 10. Stationary wavelet fusion scheme
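A heavily simplified, additive "à trous" sketch in NumPy is shown below. It is not the full SWT with the four LL/LH/HL/HH bands; instead it uses a single undecimated box smoothing whose taps are spread apart by a growing step ("with holes"), takes each detail plane as the difference between successive smoothings, and fuses the same-size detail planes with a max-magnitude rule. Function names and the box kernel are my own assumptions.

```python
import numpy as np

def atrous_level(img, step):
    """Undecimated low pass: average over a 3x3 neighborhood whose taps are
    `step` pixels apart ('with holes'), using wrap-around at the borders."""
    out = np.zeros_like(img, dtype=float)
    for dy in (-step, 0, step):
        for dx in (-step, 0, step):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / 9.0

def fuse_atrous(a, b, levels=2):
    """Additive à trous fusion: max-magnitude detail planes, averaged residual."""
    fused = np.zeros_like(a, dtype=float)
    ca, cb = a.astype(float), b.astype(float)
    for lvl in range(levels):
        sa, sb = atrous_level(ca, 2 ** lvl), atrous_level(cb, 2 ** lvl)
        da, db = ca - sa, cb - sb           # same-size detail planes
        fused += np.where(np.abs(da) >= np.abs(db), da, db)
        ca, cb = sa, sb
    return fused + (ca + cb) / 2            # add the averaged smooth residual
```

Because every plane keeps the full image size, no coefficient position ever moves when the input shifts, which is the translation-invariance property described above; the telescoping detail planes also make the scheme reconstruct an image exactly when fused with itself.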
The advantages of the wavelet image fusion techniques are that the methods
provide good quality of the fused images, a better signal to noise ratio, and minimized spectral
distortion; on the other hand, the fused images have less spatial resolution and the wavelet
methods are time-consuming during processing, these being a few of the disadvantages of the
wavelet based image fusion methods. [24]
4. Image fusion benchmarks, results and software applications comparison
4.1 Preprocessing steps usually used before image fusion
4.1.1 Noise reduction
Noise is treated as a random variation of image intensity and is visible as a
neighborhood of grains within the image. It can cause unwanted effects in the output image
due to physical effects of the environment, mostly light or thermal energy, which can
lead to defects in the imaging sensors inside a device like a digital camera; the imaging
sensors could also acquire defects after a long period of usage. The noise may be produced in the
image capturing process, or it could also be produced during image transmission between
imaging devices. [31]
Noise implies that the pixels within the image show intensity values that differ
from those of the true image. Noise removal algorithms, on the other hand, are considered
methods of removing or reducing as much as possible the noise from an input image.
The noise removal algorithms reduce or remove the visibility of noise by smoothing the
entire image, leaving out the areas near contrast boundaries. [31]
The common forms of noise that arise within an image are impulse noise, often
called salt-and-pepper noise, and additive noise, which is also known as Gaussian noise.
Different image noises have their own characteristics that make them distinguishable
from the others. [31]
Figure 1. First image being affected by Gaussian noise and the second image being affected by
Salt-and-pepper noise.
Many noise removal algorithms work by filtering the input image; among
the filters used, we can list the linear filters, adaptive filters, and median filters. Linear filters are
designed to eliminate some types of image noise from an input image; averaging or Gaussian
filters are appropriate for this purpose.
Linear filters additionally tend to blur sharp edges, destroy lines and other fine
image details, and perform poorly in the presence of signal-dependent
noise. Due to this blurring, linear filters are rarely employed in practice for noise reduction;
they are, however, typically used as a basis for nonlinear noise reduction filters. [31]
The adaptive filter is more selective than a comparable linear filter, preserving edges
and other high-frequency elements of an image, and is also straightforward to implement.
Another strategy for removing noise is to evolve the image under a smoothing partial
differential equation similar to the heat equation, which is called anisotropic diffusion. [31]
A median filter is a nonlinear filter in which each output sample is computed as the
median value of the input samples under the window; that is, the result is the middle value after
the input values have been sorted. [32]
One of the major advantages of most filtering algorithms for noise removal is that the
filters are easy to implement and can be used for de-noising the different types of noise to
which most images are exposed; on the other hand, a major disadvantage of the filtering
algorithms is that the performance is not always satisfactory, leading in consequence to the
removal of some image details. [31]
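The median filter just described can be sketched in a few lines of NumPy. This is an illustrative 3×3 version with edge-replicated padding; the function name is my own, and in practice one would reach for a library routine such as SciPy's median filter.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter; border pixels use edge-replicated padding."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the 9 shifted views of the padded image, then take the median.
    stack = np.stack([p[dy:dy+h, dx:dx+w] for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0)
```

A single salt pixel (an isolated maximum-intensity outlier) is always outvoted by its eight neighbors, which is exactly why the median filter suits salt-and-pepper noise.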
4.1.2 Image registration process
Image registration is the procedure of aligning and arranging more than one image of
an identical scene according to a coordinate system. In this process, one of the source images
is taken as a reference image, also known as the fixed image, and then geometric
transformations are applied to the remaining source images to align them with the
reference image. [33]
For image fusion, the image registration process is one of the most important
steps, because the images to be fused must have at least the same dimensions in
terms of width and height in order for the fusion process to be successful; otherwise, because of
different image dimensions or geometric transformations, some important
details from the source images could be lost in the resultant fused image during fusion.
In this paper, we restrict image registration mainly to two input
images, the reference image being the first input image.
There are many software libraries and tools which can perform image registration
for unregistered input images.
Pre-compiled and easy to use software libraries for image registration are made
available in programming languages or environments like Python or
MATLAB.
Further on, in this section we are going to discuss image registration in the MATLAB
environment, because it offers a pre-compiled, ready and easy to use way of performing image
registration through the Image Processing Toolbox, a toolbox that is fully integrated into
MATLAB.
Also, image registration through MATLAB gives the possibility to perform a kind
of image fusion by allowing the user to align multiple scenes into a single image using image
registration.
Image registration is often used in medical and satellite imagery to align images from
different camera sources. Digital cameras use image registration to connect and align
adjacent images into a single panoramic image. [34]
Examples of image registration in the MATLAB Image Processing Toolbox are
presented below:
Figure 2. Automatic Image Registration using feature matching in MATLAB
Figure 3. Automatic registration of multi -modal medical images using image registration in MATLAB
The image registration method is the procedure of aligning two or more images of the same
scene. This method involves designating one image as the reference image, also called the fixed
image, and applying geometric transformations or local displacements to the other images so that
they align with the reference. Images can be misaligned for a variety of reasons: the images may
be captured under variable conditions that change the camera perspective or the content of the
scene, or the misalignment can result from lens and sensor distortions or differences between capture
devices. [35]
Image registration is often used as a preliminary step in other image processing
applications, like aligning different satellite images, or to analyze images captured by different
diagnostic modalities, like MRI or SPECT, based on the comparison of the common features
available in the different images captured by those diagnostic modalities. [35]
For example, it might be discovered how a river has migrated or how an area became
flooded, or whether a tumor is visible in an MRI or SPECT image. [35]
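To make the registration idea concrete outside of MATLAB, one classic and very simple technique is phase correlation, which recovers a pure translation between two images from the peak of the normalized cross-power spectrum. The NumPy sketch below assumes circular (wrap-around) shifts and a translation-only misalignment; it is not what the Registration Estimator app does internally, and the function name is my own.

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the circular translation (dy, dx) such that
    `moving` is approximately np.roll(ref, (dy, dx), axis=(0, 1))."""
    f1, f2 = np.fft.fft2(ref), np.fft.fft2(moving)
    cross = f1 * np.conj(f2)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real          # sharp peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert the peak position to a signed shift (wrap-around aware).
    shift = []
    for p, size in zip(peak, corr.shape):
        s = (-p) % size
        shift.append(s - size if s > size // 2 else s)
    return tuple(shift)
```

Once the shift is known, the moving image can be aligned with `np.roll(moving, (-dy, -dx), axis=(0, 1))` before fusion; real registration tools additionally estimate rotation, scaling, and non-rigid deformations.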
The image registration software made available in the MATLAB Image Processing
Toolbox is called the Registration Estimator app, which is fully available and ready to use
under the MATLAB Image Processing Toolbox.
So, the Registration Estimator app available in MATLAB gives the users the
possibility to register images fast and easily. The Registration Estimator app will also compare
completely different registration techniques, tune settings, and visualize the registered images.
The app provides a quantitative measure of quality, and it returns the registered image and
the transformation matrix. The app also generates code for your hand-picked registration
technique and settings, so that you are able to apply a consistent transformation to multiple
images. [35]
Figure 4. Registration estimator app MATLAB
A short conclusion about the image registration process and the software presented above is
that image registration is an important operation that needs to be applied right before
applying image fusion; otherwise, some major details will be lost during the fusion process. As
presented above, image registration can be done quite well using the Registration Estimator
app made available in MATLAB, due to the features that are already available and also well
documented in [34] and [35].
4.1.3 Measurement methods used for quality of fused images
The general requirements of an image fusing process are that it shall preserve all
valid and useful pattern details from the source images, but at the same time, artifacts shall
not be introduced which could interfere with and affect the analyses. The performance
measures can provide a quantitative comparison of different fusion methods; the
quality of fusion can be measured with respect to an ideal image, when a reference
image exists, or it can simply be measured against one of the source images, in the cases where
there isn’t a reference image. Some of the quality metrics are presented below: [9][13]
Mean Square Error (MSE) – a criterion used to measure the quality of the
fused image; a higher MSE means that the image is poorer in quality. It is
given by the formula:
MSE = (1/(m·n)) Σi Σj [I1(i, j) − I2(i, j)]²
where I1 is the reference image, I2 the
fused image, i the pixel row index, j the pixel column index, and m, n the number of rows and
columns.
Peak signal to noise ratio (PSNR) – the ratio between the maximum possible
power of a signal and the power of the corrupting noise that affects the fidelity of its
representation. The PSNR measure is given by:
PSNR = 10 · log10(L² / MSE)
where L is the maximum possible pixel value of the image (255 for 8-bit images); the higher
the value of the PSNR, the better the fused image.
The correlation coefficient (CC) – the correlation coefficient is a measure of the
closeness or similarity between the original and the fused images. It can vary
between −1 and +1. Values closer to +1 indicate that the reference and fused
images are highly similar, while values closer to −1 indicate that the images
are highly dissimilar. [10]
Standard deviation – measures the contrast of the fused image. Fused images
with higher contrast have a higher standard deviation. [13]
Entropy (EN) – a guide to evaluate the quantity of information contained in an
image. If the value of the entropy becomes elevated after fusing, it indicates that the
information is enhanced and the fusion performance is better.
Structure Similarity Index Metric (SSIM) – a method that combines a
comparison of luminance, contrast and structure, applied locally in an 8×8
square window; the window is moved pixel by pixel over the entire image and
the SSIM is calculated for each window. The values are between 0 and 1, and
values close to 1 show a high correspondence with the original images. [36]
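The simpler of these metrics translate directly into NumPy, as sketched below; the formulas follow the definitions above (MSE, PSNR with L the peak pixel value, Pearson correlation, histogram entropy in bits), while SSIM is left to a library implementation such as scikit-image's.

```python
import numpy as np

def mse(ref, fused):
    """Mean square error between the reference and the fused image."""
    ref, fused = np.asarray(ref, float), np.asarray(fused, float)
    return np.mean((ref - fused) ** 2)

def psnr(ref, fused, peak=255.0):
    """Peak signal to noise ratio in dB (higher is better)."""
    e = mse(ref, fused)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def correlation_coefficient(ref, fused):
    """Pearson correlation between the two images, in [-1, +1]."""
    return np.corrcoef(np.ravel(ref), np.ravel(fused))[0, 1]

def entropy(img, bins=256):
    """Shannon entropy (bits) of the image histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))
```

For example, two 8-bit images differing by a constant offset of 16 gray levels have MSE = 256 and PSNR = 10·log10(255²/256) ≈ 24.05 dB.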
4.3 Image fusion applications for digital images
4.3.1 Characteristics
Data fusion is a process dealing with data from multiple sources to achieve
refined or improved information for decision making. A general definition of image
fusion is given as “Image fusion is the combination of two or more different images to
form a new image by using a certain algorithm”. [37]
The resulting image after the image fusion procedure is a single image that is much
more suitable for human and machine perception or for further image processing tasks.
In this paper, we treat image fusion at the pixel level, which means the fusion
of images at the lowest processing level, referring to the merging of the physical
characteristics acquired at the pixel level, respectively the physical parameters within an
image. [37]
In pixel-level image fusion, some general requirements and characteristics
automatically apply to the fused image result. General characteristics that almost
all fused images should have are enumerated below: [38]
the fusion process shall preserve as much as possible of the available
information, including the salient information available in the source images.
the fusion procedure shall not introduce any artifacts or inconsistencies.
the fusion process should be shift-invariant.
This last characteristic is very important in the image fusion process, because a shift-variant
fusion process leads to unstable or flickering results; the human visual system and the
image processing systems which process these images are primarily sensitive to these
moving artifacts, and this could lead to unwanted errors. [38]
Even if all the characteristics enumerated above are fulfilled completely by the
resulting fused image, one system-level issue still remains, and that is real-time operation.
Many systems, like digital sensors, CT or MRI scanners, which are using image
fusion in their operations to acquire multiple images, must do all the fusion operations
and fulfill all the characteristics of a fused image in real time.
The large input data generated by multisensor arrays makes the realization of a
real-time fusion system very difficult. [38]
Also for digital images, another usage of image fusion is multi-focus image
fusion, or image restoration. [42]
The original input image can be divided into regions, and image fusion can be applied in
order to make every region in the fused image be in focus in at least one channel; the
aim is for every region of the output image to be in focus, and this can be done by
identifying the regions in focus and combining them together. [42]
Image fusion for image restoration starts from the observation that each image has some
parts that are visible and others which are degraded, and the degraded parts can be
removed by fusion, see e.g. below. [42]
Figure 5. Image restoration using image fusion
4.3.2 Benchmark images used for the fusion of digital images
In this chapter, I am going to show some of the most used benchmarks in the
image fusion research field. Regarding image fusion for digital imaging, the
benchmark images are mostly used for pixel-level multi-focus image fusion,
and also for fusing the details from two images of the same scene
into a single, more detailed image.
Also, in this chapter, only a part of the most used multi-focus or same-scene image
fusion benchmarks is going to be presented, because the research regarding
image fusion started approximately twenty or thirty years ago, so in the real
world there are thousands of benchmark image sets that have been used in this research field.
Some of the most used benchmark sets of images regarding image fusion
research in the field of digital multi-focus imaging are presented below:
Benchmark image pairs used in digital image fusion
(Image pairs 1 to 13, shown as figures.)
The benchmark sets from image 1 to image 6 are greyscale images that were used, and still
are used, in research papers related to multi-focus image fusion algorithms for checking the
implemented algorithms and especially for testing the efficiency of the algorithms applied to
greyscale images.
The benchmark sets from image 7 to image 9 are color images, also used in research
papers, for implementing and testing multi-focus image fusion algorithms for color
images.
Regarding the multi-focus image fusion research benchmarks, at least from my point of
view and based on the image benchmark sets I found, there are apparently about twice as many
benchmark sets formed only from greyscale images as there are sets of color images. This
suggests that more of the available image fusion research was done on greyscale images,
because it is much easier to work with greyscale images, especially in terms of implementation,
processing and testing time.
In the benchmark image sets from image 10 to image 12, the first image of each set is a
detailed color image in terms of the objects that are present; this image requires a light source to
provide the details available in the scene. The second image of each set is an infrared
image, shortly an IR image, captured by an IR sensor, which is a special sensor that senses the
radiant energy emitted by objects; this radiant energy cannot be detected by the human eye, as the
wavelength of the emitted radiation falls in the infrared region of the electromagnetic spectrum.
So, in the image fusion between these two images, the fused output image can bring new details
that are not otherwise perceived with the naked human eye.
In the last benchmark image set, number 13, the first image is a PAN image acquired
from a satellite, a panchromatic, higher resolution image that brings out the spatial resolution
details of the scene; the second image is a multispectral image of the same scene. The
multispectral image contains details for a specific wavelength range of the electromagnetic
spectrum, usually the spectral details of the image in the visible spectrum. In the image fusion
between these two images, the fused output image can contain both the spatial and the spectral details of a
scene. This fusion technique is used especially in satellite imagery to highlight the
evolution of a scene, like a city or a forest, in time; this kind of
image fusion is known as multi-temporal image fusion, because the images taken over time
are fused and the details regarding the scene evolution are highlighted.
In this chapter, the most used image fusion benchmark sets were presented, together
with a short description of the characteristics and useful applications of these
image benchmark sets.
4.3.3 Image fusion usage in the medical imaging domain
Medical image fusion has been a popular topic since the early 2000s, or maybe earlier,
and it still is, due to the growing appeal of this research
area, which can be seen from the large number of scientific papers published in journals
and magazines. [39]
One major application of the digital image fusion methods is in the
medical field, more precisely, the fusion of images of the same organ where the images are
acquired by different medical devices, like CT and MRI; these two images are fused
in order to obtain a more detailed output image that helps the medical professional
set a diagnosis more precisely, based on the fused information provided by the
different image sources.
Medical image fusion is a method of registering and combining multiple images from one or multiple imaging modalities in order to boost image quality and reduce randomness and redundancy, thereby extending the clinical relevance of medical images for the diagnosis and assessment of medical issues. Multi-modal medical image fusion algorithms and devices have shown notable achievements in raising the clinical accuracy of decisions supported by medical images. [39]
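At its simplest, the combining step can be a pixel-wise weighted average of two registered, same-size images. The tools discussed later in this chapter are MATLAB programs; purely as an illustration of the idea, a minimal NumPy sketch, with toy 2x2 arrays standing in for registered CT and MRI slices:

```python
import numpy as np

def average_fusion(img_a, img_b, w=0.5):
    """Pixel-wise weighted average of two registered, equally sized images."""
    return w * img_a.astype(np.float64) + (1.0 - w) * img_b.astype(np.float64)

# Toy 2x2 arrays standing in for registered CT and MRI slices.
ct = np.array([[100, 200], [50, 250]], dtype=np.uint8)
mri = np.array([[200, 100], [150, 50]], dtype=np.uint8)
fused = average_fusion(ct, mri)  # fused[0, 0] is 150.0
```

Registration is assumed to have been done beforehand; real fusion methods replace this averaging rule with the more elaborate rules discussed in chapter 3.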
The most well-known medical devices used to acquire medical images are Magnetic Resonance Imaging (MRI), Computerized Tomography (CT), and Positron Emission Tomography (PET) scanners.
Magnetic Resonance Imaging (MRI) has a vital role in the non-invasive diagnosis of brain tumors and is one of the most widely used imaging modalities in medical studies and trusted clinical settings. [39]
MR images used together with other modalities and modern image fusion methods have been shown to improve imaging accuracy and practical clinical applicability. Several studies attempt to combine MRI with other modalities using image fusion methods, for example: MRI-CT, MRI-PET, or MRI-SPECT. [39]
Computerized tomography (CT) is a medical imaging technique that has had an outstanding impact on medical diagnosis and assessment. It is a popular modality in multi-modal medical image fusion. Like MR images, CT images are used in a huge range of medical applications under practical clinical conditions. Another application of image fusion using CT images has been assistance in surgical training and guidance, fusing images obtained from a tracked endoscope with surfaces derived from CT data. Fusion combinations in which CT is one of the main modalities include MRI-CT-PET-SPECT, MRI-CT, and SPECT-CT. [39]
Positron emission tomography, known as PET imaging or a PET scan, is a useful type of nuclear medicine imaging. Similar to CT and MRI, a major application of PET is in radiology studies for brain diagnosis and treatment. There is a wide range of applications of image fusion using PET, mostly for cancer treatments. The use of PET data in combination with existing modalities through image fusion techniques includes MRI-CT-PET-SPECT, MRI-CT-PET, MRI-SPECT-PET, and MRI-PET. [39]
There are also other medical devices used in radiology for performing image fusion, such as single-photon emission computed tomography (SPECT), ultrasound, or infrared imaging.
The medical image fusion process, and mainly the multi-modal medical image fusion procedure, aims to improve imaging quality by reducing redundancy, in order to boost the clinical applicability of medical images in diagnosis and medical problem assessment. It is also essential to know that image fusion algorithms are input dependent; therefore, designing an image fusion algorithm for medical image fusion depends on three main aspects: firstly, the imaging modality used; secondly, the organs that are imaged; and, based on these two aspects, the algorithm implementation, which must take into account the two inputs enumerated above. [39]
4.3.4 Image fusion benchmarks in the medical imaging domain
In this chapter, I am going to show some of the most used benchmarks in the image fusion research field within the medical imaging domain.
Also, in this sub-chapter, only a part of the most used multi-modal image fusion benchmarks (images of the same scene taken by different complex medical devices) are going to be presented. There are dozens of image benchmark sets available for multi-modal image fusion research in medical imaging, though fewer in comparison with other image fusion research domains, because research in this field is quite new among the image fusion research fields and is also harder and more time-consuming.
Some of the most used benchmark sets of images for multi-modal image fusion research in the medical field are presented below:
1. [benchmark image set; figure not reproduced]
2. [benchmark image set; figure not reproduced]
3. [benchmark image set; figure not reproduced]
4. [benchmark image set; figure not reproduced]
5. [benchmark image set; figure not reproduced]
6. [benchmark image set; figure not reproduced]
The benchmark image sets from number 1 to number 4 are used in various medical image fusion research articles which study different medical imaging approaches, capturing medical images with different modalities, such as MRI and CT, in order to obtain complementary information about the targeted organ.
So, all the necessary information from these two modalities has to be
integrated into a single image for better diagnosis and treatment of a patient. [39]
Also, I have to mention that these benchmark images are used in the following scientific papers: R. Vijayarajan and S. Muttan, "Discrete wavelet transform based principal component averaging fusion for medical images", AEU, Volume 69, Issue 6 (2015), Pages 896-902; and D. P. Bavirisetti, V. Kollu, X. Gang, R. Dhuli, "Fusion of MRI and CT images using guided image filter and image statistics", Int. J. Imaging Syst. Technol. 2017;27:227-237, https://doi.org/10.1002/ima.22228.
The benchmark image set from number 5 is used in various medical image fusion research articles that study different image acquisition modalities, such as PET and MRI, in order to obtain complementary information about the targeted organ for a better medical diagnosis.
The benchmark image set from number 6 is used in a research article, available at the following link http://www.rb.org.br/imprimir.asp?id=798, which studies brain images acquired by MRI and PET devices from people undergoing treatment for epilepsy. The images are acquired over time in order to monitor the effect of the epilepsy treatment, and the fusion of the two images gives additional information about the effectiveness of the treatment over time.
In this chapter, the most used medical image fusion benchmark sets, as you read above, were presented and shortly described with regard to their applications and the importance of image fusion in the medical imaging domain, particularly in medical imaging research.
4.3.5 Used software applications and results for image fusion
In this chapter, I am going to present some software applications and scripts available on the web for performing image fusion, using in particular the benchmark sets presented in the previous chapters or other benchmarks available on the web.
The software tools and scripts presented in this chapter are just a few tools chosen by me, based on the results obtained in the research papers for which they were made, on how they behave during running and testing, and on how many image fusion methods they integrate.
The software and scripts are presented one after the other, in the form of a list, in the sections that follow.
1. DWTPCAv fusion
This is a MATLAB script whose code implements the discrete wavelet transform based principal component averaging fusion for any number of source images. It is available at the following link https://www.mathworks.com/matlabcentral/fileexchange/60774-dwt-based-principal-component-averaging-fusion-dwtpcav and is used in the following research paper: R. Vijayarajan and S. Muttan, "Discrete wavelet transform based principal component averaging fusion for medical images", AEU, Volume 69, Issue 6 (2015), Pages 896-902, a paper which studies image fusion methods for multi-modality medical images.
The paper studies image fusion techniques based on multimodal medical images which render a considerable enhancement in the quality of fused images. An effective image fusion technique produces output images by preserving all the viable and prominent information gathered from the source images, without introducing flaws or unnecessary distortions.
In order to successfully perform image fusion using this script, the input images have to be registered; otherwise, input errors are raised and the algorithm does not work.
Also, what I consider very useful about this script is that the number of input images is not limited to only two. When the script is run, you, as a user, are asked how many images you are using for the fusion.
An example of image fusion using this script is presented below.
Figure 6. Image fusion based on the DWTPCAv fusion script
The first two figures are the two input images, and the third is the fused output image.
The script works very well and can be applied in various research regarding image fusion. A disadvantage for research is that the script does not automatically calculate image quality parameters between the input images and the fused image, parameters like PSNR, the correlation coefficient, or SSIM; it also works only for grayscale images.
So, given that multiple images can be used to perform the fusion and that the script is allowed to be modified for improvements, I think it can serve as a starting point for testing and future research topics in image fusion, especially in the medical image fusion field.
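Since the script does not compute quality metrics itself, they are easy to add externally. A minimal NumPy sketch of two of the metrics mentioned above, MSE/PSNR and the correlation coefficient; the function names are my own illustration, not part of the DWTPCAv script:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two same-size images."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(a, b)
    return float('inf') if m == 0.0 else 10.0 * np.log10(peak ** 2 / m)

def correlation_coefficient(a, b):
    """Pearson correlation coefficient between the flattened images."""
    return float(np.corrcoef(a.astype(np.float64).ravel(),
                             b.astype(np.float64).ravel())[0, 1])
```

SSIM is more involved (local means, variances, and covariances over sliding windows), which is exactly why ready-made metric implementations are valuable for comparison papers.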
2. DCT domain using variance and energy of Laplacian (DCT_VOL)
fusion
This is also a MATLAB script, available on the MATLAB File Exchange website, related to the following research paper: Amin-Naji and A. Aghagolzadeh, "Multi-Focus Image Fusion in DCT Domain using Variance and Energy of Laplacian and Correlation Coefficient for Visual Sensor Networks," Journal of AI and Data Mining, vol. 6, no. 2, 2018, pp. 233-250.
According to this research paper, its aim is to develop an efficient multi-focus image fusion algorithm using the DCT transform; by using the DCT methods for image fusion, the quality of the fused image is improved, and the error related to the fixed 8x8 blocks is reduced by using a convolution with a 3x3 mask over the blocks.
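The block-selection idea behind this kind of DCT-domain multi-focus fusion can be sketched as follows. This is a simplified NumPy illustration of the principle (plain variance of the DCT AC coefficients as the focus measure, no 3x3 consistency mask), not the authors' actual implementation; image sides are assumed to be multiples of the block size:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def block_activity(block, C):
    """Variance of the DCT AC coefficients: a rough sharpness measure."""
    d = C @ block @ C.T
    d[0, 0] = 0.0  # ignore the DC coefficient
    return np.var(d)

def dct_variance_fusion(a, b, bs=8):
    """For each bs x bs block, copy into the output the block from
    whichever source image looks sharper (higher AC variance)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    C = dct_matrix(bs)
    out = np.empty_like(a)
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            ba = a[i:i + bs, j:j + bs]
            bb = b[i:i + bs, j:j + bs]
            out[i:i + bs, j:j + bs] = (
                ba if block_activity(ba, C) >= block_activity(bb, C) else bb)
    return out
```

An out-of-focus (blurred) region has little high-frequency energy, so its DCT AC coefficients are small; the block with the larger AC variance is taken to be the in-focus one.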
Next, I am going to present a result obtained after I ran the script, as you can see below.
Figure 7. Image fusion using DCT_VOL script
Also, in order to successfully perform image fusion using this script, the input images must be registered.
The script works very well and can be applied especially for research regarding multi-focus image fusion. A disadvantage for research is that the script does not automatically calculate image quality parameters between the input images and the fused image, parameters like PSNR, the correlation coefficient, or SSIM; it also works only for grayscale images.
This script works only with two input images, and it displays titles for the source images and the output fused image.
Also, this script is allowed to be modified for improvements and can be used for other image fusion testing benchmarks or for future research topics, especially in the multi-focus image fusion field; there are also other variations of this script made by the authors and made available for usage.
3. Multispectral image fusion script
This is a MATLAB script available on the MATLAB File Exchange website, made by Abbas Hussien Miry and called Multispectral image fusion by using IHS and Brovey Transform. The script can be used for image fusion between a PAN image and a multispectral image; the two images also come along with the script, a PAN image and a multispectral image, which were presented as benchmarks in the previous chapter of this paper.
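The Brovey transform itself is a simple band-ratio computation: each multispectral band is rescaled by the ratio of the PAN image to the sum of the multispectral bands. A minimal NumPy sketch of the transform (my own illustration of the general method, not Miry's script):

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Brovey transform pan-sharpening.
    ms  : multispectral image, shape (H, W, bands)
    pan : panchromatic image of the same scene, shape (H, W)
    Each band is multiplied by pan / sum(bands) at every pixel."""
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    ratio = pan / (ms.sum(axis=2) + eps)  # eps avoids division by zero
    return ms * ratio[..., None]
```

The result injects the high spatial detail of the PAN image into each band while keeping the relative spectral proportions of the multispectral input.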
Below is an image that describes how image fusion is applied using this script.
Figure 8. Description of the steps performed by the fusion algorithm
Next, I am going to present a result obtained after I ran the script, as you can see below.
Figure 9. Image fusion performed using this script
Also, in order to successfully perform image fusion using this script, the input images must be registered.
The script works well and can be applied especially for research regarding multi-temporal image fusion. A disadvantage for research is that the script does not automatically calculate image quality parameters between the input images and the fused image, parameters like PSNR, the correlation coefficient, or SSIM; on the other hand, it works for any input images, grayscale and even color.
This script works only with two input images, and it displays titles for the source images and the output fused image.
Also, this script is allowed to be modified for improvements and can be used for other image fusion testing benchmarks or for future research topics, especially in the multi-temporal image fusion field, in order to study different satellite images of an area, like a forest, and to see how things evolved over time.
4. Wavelet Analyzer
The Wavelet Analyzer application is an interactive tool which uses wavelets to visualize and analyze signals and images. [40]
With this application, the following operations can be performed [40]:
- performing wavelet and wavelet packet analysis
- denoising and compressing signals and images
- estimating density and regression
- performing matching pursuit analysis
- performing image fusion
This application can be accessed once MATLAB is running, by introducing the waveletAnalyzer command in the MATLAB command window. After the command is successfully entered, the Wavelet Analyzer window opens, as below:
Figure 10. Wavelet Analyzer window in MATLAB
So, in other words, with this tool, 1-D and 2-D signal or image analysis can be performed using wavelets, and, of course, we can also perform image fusion, which is done using wavelet-based image fusion methods.
When using this tool, you have to take into account that the principle of image fusion using wavelets is to merge the wavelet decompositions of the two original images, using fusion methods applied to the approximation coefficients and the detail coefficients. [41]
Also, the two images on which we want to perform fusion must be registered, meaning they must be the same size, and are supposed to be associated with indexed images on a common colormap.
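This principle can be illustrated with a hand-rolled one-level Haar decomposition in NumPy: the approximation coefficients of the two images are fused with a mean rule and the detail coefficients with a maximum-absolute-value rule. This is a simplified sketch of the general idea (the Wavelet Analyzer offers several rules and wavelet families); image sides are assumed even:

```python
import numpy as np

def haar2_decompose(x):
    """One-level 2-D Haar transform: returns the approximation (LL)
    and the three detail subbands (LH, HL, HH)."""
    x = x.astype(np.float64)
    a = (x[::2, :] + x[1::2, :]) / 2.0  # rows: low pass
    d = (x[::2, :] - x[1::2, :]) / 2.0  # rows: high pass
    LL = (a[:, ::2] + a[:, 1::2]) / 2.0
    LH = (a[:, ::2] - a[:, 1::2]) / 2.0
    HL = (d[:, ::2] + d[:, 1::2]) / 2.0
    HH = (d[:, ::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar2_reconstruct(LL, LH, HL, HH):
    """Inverse of haar2_decompose."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, ::2] = LL + LH
    a[:, 1::2] = LL - LH
    d[:, ::2] = HL + HH
    d[:, 1::2] = HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[::2, :] = a + d
    x[1::2, :] = a - d
    return x

def wavelet_fusion(img1, img2):
    """Mean rule for approximations, max-absolute rule for details."""
    c1 = haar2_decompose(img1)
    c2 = haar2_decompose(img2)
    LL = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(p) >= np.abs(q), p, q)
               for p, q in zip(c1[1:], c2[1:])]
    return haar2_reconstruct(LL, *details)
```

Keeping the larger-magnitude detail coefficient at each position favors whichever source image has the stronger edge there, which is why this rule is popular for fusing the detail subbands.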
In order to effectively perform image fusion using this tool, you have to load the two images which are to be fused and then perform a wavelet decomposition of the input images.
Figure 11. Image decomposition using the Wavelet Analyzer for image fusion
Using the Wavelet and Level menus located in the upper right, determine the wavelet family, the wavelet type, and the number of levels to be used for the analysis; then press the Decompose button, and after a pause for computation, the tool displays the two analyses, as can be seen below. [41]
Next, in the fusion method frame, select, for example, the item mean for both the approximations section and the details section, and then click the Apply button. [41]
Figure 12. Image fusion process using the image fusion tool from the Wavelet Analyzer package
After that, the synthesized image and its decomposition (which is equal to the fusion of the chosen decomposition levels) appear, and the new image produced by the fusion clearly exhibits features from the two original ones. The synthesized image is a restored, good-quality version of the common underlying original image; in other words, the resulting synthesized image is the fused image, and the image fusion tool also lets you save it to disk. [41]
The Image Fusion tool from the Wavelet Analyzer available in MATLAB works well and can be used especially for research on image fusion methods based on wavelets. Some of the wavelet methods available are the discrete wavelet transform, the Haar wavelet transform, and the stationary wavelet transform, transforms that were also mentioned in this paper; other wavelet transforms not mentioned in this paper are available as well.
Among the advantages of the Image Fusion tool are that many image fusion methods based explicitly on wavelet transformations are available, and that image fusion can also be performed on color images, not only on grayscale ones. Another big advantage is that the tool gives the possibility to adjust image parameters like brightness and contrast and to apply filters on the output fused image. These advantages make the tool very useful and easy to use in the research field, because of the options put at the user's disposal.
Among the disadvantages, for research usage and purposes, no automatically calculated image quality parameters like PSNR, the correlation coefficient, or SSIM are available, and the tool works with two input images only.
Since this tool is integrated into MATLAB, it can be used only with a MATLAB license; it is a standalone tool that cannot be modified.
So, as a conclusion, this tool is suitable for testing image fusion algorithms based on wavelet transformations and for applied research on such algorithms.
5. C# ImFuse
C# ImFuse is a software tool developed in C# for a research study related to the research paper Implementation and Validation of Visual and Infrared Image Fusion Techniques in C#.NET Environment, the paper mentioned at number 17 in the References section of this paper.
This tool implements many image fusion techniques, starting from alpha blending and continuing with the Laplacian pyramid, principal component analysis (PCA), and the discrete wavelet transform.
Figure 13. The user interface of C# ImFuse
Also, as can be seen below, the tool calculates image quality metrics between the input images and the fused output image.
The tool is designed to work with EO (electro-optical) and infrared images. According to [17], the EO imaging sensor is a CMOS sensor that senses the reflected energy, with the frequency of the reflected energy giving an image. It requires a light source to provide the image and is more sensitive than the eye.
The IR imaging sensor senses based on the radiant energy emitted by objects. This cannot be detected by the human eye, as the wavelength of the emitted radiation falls in the infrared region of the electromagnetic spectrum.
But, as I tested the tool, it works with other images too, like grayscale images and also color images, as can be seen below.
Figure 14. Image fusion testing using the C# ImFuse tool for color images
Also, as can be seen in the image above, this tool performs image fusion between two input images and calculates the image quality metrics between the input images and the fused output image.
In the paper regarding this tool, the fusion operations are applied on two image benchmark sets, benchmarks presented in the Benchmarks for digital images chapter at index 10 and index 12.
Next, I am going to run the tool on one benchmark set presented in the scientific paper that came along with this tool, as can be seen below.
Figure 15. Image fusion using DWT level 1 in C# ImFuse
Regarding the scientific paper, it is shown that, using the image fusion techniques implemented in this tool, the following results are obtained: [17]
Figure 16. Image fusion quality metrics results applied on the above benchmark image set
As can be seen from the above table, the best results are obtained with the Laplacian pyramid image fusion method; judging from quality metrics like standard deviation, entropy, cross-entropy, average luminance, and the SNR value, it can be concluded that the Laplacian pyramid is the best choice of fusion method, at least for this image benchmark set.
Among the first advantages of the C# ImFuse tool are that many image fusion methods are available and that image fusion can also be performed on color images, not only grayscale ones. The biggest advantage of this tool is that it gives you the possibility to automatically compare the input images against the output fused image; in this way, it can be evaluated which image fusion method fits best in some research applications and in practical implementations which use image fusion techniques. These advantages make the tool very useful and easy to use, in the research field and in practical algorithm implementations alike, because of the options put at the end user's disposal.
A disadvantage of this software tool is that, once you have selected a set of images for fusion, you, as a user, cannot choose another benchmark set after processing; you have to close the application and open it again, which, after a long time of usage, can be frustrating and annoying.
The last disadvantage of this tool is that, from time to time, it crashes and does not perform the image fusion correctly; it has to be closed, opened again, and the same steps repeated, which, after a long time of usage, can also be frustrating and annoying.
So, as a conclusion, this tool is suitable for testing and comparing image fusion algorithms based on the available image quality metrics. It has to be taken into account that this tool is the result of scientific work and still has some defects; but, in comparison with what it offers, I can say that it is quite good and useful for researching and comparing image fusion algorithms and for developing them further for your own research applications and interests.
6. MITO
The Medical Imaging Toolkit, or MITO for short, is a software tool developed by the Institute for High-Performance Computing and Networking of the National Research Council of Italy and the Institute of Biostructure and Bioimaging of Italy, as a research project for studying medical imaging. The tool makes it possible to fetch radiological information and images stored in the DICOM file format and provides the end user with basic functionalities such as 2D and 3D visualization, image segmentation, and image fusion. As I said above, the tool supports the DICOM (Digital Imaging and Communications in Medicine) file format, which is the standard for the communication and management of medical imaging information and related data, especially in the transmission of medical information used by complex medical devices like CT or MRI scanners. [43]
Figure 17. User interface of MITO
In order to successfully use this tool, you have to use the DICOM file format, files that have the extensions .dicom or .dcm.
Also, many research websites in medical imaging and radiology are available on the web, public websites and public studies which put at users' disposal the databases of DICOM files used.
One website which puts a large database of medical images at the user's disposal is https://demo.softneta.com/search.html, which has dozens of images acquired by medical equipment, covering almost any anatomical part of the human body, from the brain to the lungs and feet, as you can see below.
Figure 18. Visualizing a DICOM image using MITO
Also, the website mentioned above, which is called medDream, is a research project co-funded by the European Union. It is another great tool for image fusion, because it also gives the user the possibility to analyze the available images using the visualization tool and the image fusion functionality integrated right on the website [44], as you can see below.
Figure 19. Screenshot of the medDream website
So, in my opinion, the MITO tool and the medDream online demo [44] are suitable for medical research and for multi-modal image fusion research and testing, and why not for a future research paper in the medical field.
As a disadvantage, from my side, neither platform offers image comparison results, at least for research purposes; on the other hand, both the tool and the website offer the possibility to visualize the images from different angles, which, from my point of view, is a very good thing.
Below is an example of image fusion using the MITO tool.
Figure 20. Image fusion performed with MITO tool
7. Image Fusion Demo Tool
The Image Fusion Demo Tool is a tool that I implemented using the MATLAB App Designer.
The tool implements some of the most used image fusion methods, from the so-called simple image fusion, which strictly combines information from the two input images, to more complex methods like principal component analysis (PCA) [47] and the stationary wavelet transform with one and two levels, SWT level 1 and SWT level 2 for short [48]. It should be mentioned that implementations of the above-specified image fusion methods are already available on the web; starting from those available implementations and in order to go further, I implemented combinations of these fusion methods, namely combinations of the stationary wavelet transform and principal component analysis algorithms: SWT level 1 + PCA and SWT level 2 + PCA.
According to [45], the image fusion technique gives better results by using a combination of those two techniques.
The main advantage of the SWT is translation invariance, and the advantage of PCA is that it is able to reduce the redundancy present in the data. Due to the presence of up-samplers and down-samplers in the DWT, it lacks translation invariance; this drawback of the discrete wavelet transform is overcome in the stationary wavelet transform by removing the up-samplers and down-samplers, so the SWT is also called the translation-invariant wavelet transform. The output of the SWT contains the same number of samples as the input, so it is also called the redundant wavelet transform.
Figure 21. Flow diagram of the SWT and PCA methods
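The PCA weighting step in such a scheme can be sketched as follows: the fusion weights come from the leading eigenvector of the 2x2 covariance matrix of the two (flattened) source images, so the image carrying more of the common variance receives the larger weight. This is an illustrative NumPy sketch of plain PCA fusion, not the exact code of my App Designer tool:

```python
import numpy as np

def pca_fusion(img1, img2):
    """PCA-weighted fusion of two registered, same-size images.
    Returns the fused image and the pair of weights used."""
    a = img1.astype(np.float64).ravel()
    b = img2.astype(np.float64).ravel()
    cov = np.cov(np.stack([a, b]))    # 2x2 covariance of the two images
    vals, vecs = np.linalg.eigh(cov)  # eigh sorts eigenvalues ascending
    v = np.abs(vecs[:, -1])           # leading eigenvector
    w = v / v.sum()                   # normalize the weights to sum to 1
    fused = w[0] * img1.astype(np.float64) + w[1] * img2.astype(np.float64)
    return fused, w
```

In the combined SWT + PCA scheme, this weighting is applied to subband coefficients rather than to raw pixels, which is what the flow diagram above depicts.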
Next, I am going to present the tool, apply image fusion on one image benchmark set, and compare the results of the methods.
Below is presented the user interface of the tool that I developed, which also implements calculations like PSNR and standard deviation.
Figure 22. The user interface of the image fusion demo tool
Using the above-presented benchmark, I obtained the following results for the comparison between the first input image and the fused output image, for the image fusion techniques provided by this tool, as you can see in the table below:
Fusion method   MSE     Correlation Coefficient   PSNR (dB)   Standard Deviation   SNR (dB)   SSIM
PCA             67.6    0.9853                    14.92       45.71                29.83      0.8441
SWT level 1     73.22   0.9838                    14.74       46.22                29.83      0.8532
SWT level 2     74.85   0.9834                    14.69       46.43                29.39      0.8539
SWT1 + PCA      73.22   0.9838                    14.74       46.22                29.48      0.8532
SWT2 + PCA      74.85   0.9834                    14.69       46.43                29.39      0.8539
So, as can be seen above, by analyzing the structural similarity index metric (SSIM), the metric which measures the similarity of the images, we can conclude that the SWT + PCA algorithm shows promising results compared to the other methods, due to the combination of the spatial characteristics that the PCA algorithm preserves and the spectral characteristics that the stationary wavelet transform preserves.
The input images and the fused output image obtained using the SWT + PCA algorithm can be seen below:
As you can see above, the fused image is very clear and intact using this algorithm; the obtained results are promising, as explained in [43].
Regarding the tool, one advantage is that it integrates some newer image fusion approaches among the image fusion algorithms available in the image processing and digital image domain.
Another advantage is that it also provides automatic image parameter calculation for image comparison; the disadvantage is that the comparison is done only between the first input image and the fused output image, and because of this, some results may not be very conclusive.
One big disadvantage of this tool is that it is still unstable: for some images, the processing stops due to image errors, so it is not fully optimized, and some work still must be done.
As a conclusion, this tool is suitable for testing image fusion algorithms for visualization purposes, and less so for comparison. It has to be taken into account that this tool is the result of scientific work and still has some defects, but it can be modified and improved for future scientific research.
4.3.6 Advantages and disadvantages of image fusion
Among the advantages of image fusion, we can list the following [46]:
- extraction of all the beneficial information from the source images into a single, more detailed image
- fusion operations are robust to imperfections such as misregistration
- the fusion of images can improve reliability and capability through complementary information
- it is convenient for identification and recognition
- also, the fusion process reduces data storage and, as a consequence, data transmission time
Among the disadvantages of image fusion, we can list the following [46]:
- during the fusion procedure, noise can alter the fused image
- some artifacts, like color artifacts, can be formed, depending on the transformations used in the fusion techniques
- the dissimilar illumination problem of the fused images is also a big disadvantage
- processing of the data is slow and heavy during the fusion process
- more than one source image is required for the fusion procedure
5. Conclusion
Using image fusion methods and concepts, I carried out a short scientific study of the most popular image fusion techniques: fusion concepts based on spatial image characteristics, like the average fusion method, the high pass filtering method, the hue-saturation-intensity (HSI) image fusion method, and the principal component analysis (PCA) fusion technique.
Other concepts that I used are the fusion concepts based on spectral image characteristics, like Laplacian pyramid based fusion, discrete cosine transform (DCT) based image fusion, and the wavelet methods for image fusion, like discrete wavelet transform (DWT) based fusion and stationary wavelet transform (SWT) based image fusion.
Also, the methods applied to the images that are to be fused are very important; methods like image registration or noise reduction can greatly increase the quality of the fused image.
Regarding the development of this project, I consider that I gained new knowledge about a very promising image enhancement method, the image fusion concept, and that I also acquired knowledge about developing a comparison research paper of this kind.
The last thing I achieved is that I also learned new methods of analyzing and implementing concepts that I did not know existed in the image processing domain, and learned how this domain interacts with other domains, like the medical field, to improve the process of medical diagnosis.
Regarding the software tool comparison for image fusion, I think that the best tool to choose depends on what it will be used for in a future research paper or in practical testing. The number of image fusion algorithms that the tool integrates can also be taken into account, but the most important characteristics are the quality and the popularity of the image fusion algorithms which that software integrates.
67
The last criteria in choosing an image fusion software tool are the time spent during processing, how reliably the software behaves while running, that is, whether it gets stuck or misbehaves, and, at least for research comparison papers, whether it offers the user automatically calculated image quality metrics.
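One of the simplest such metrics is the peak signal-to-noise ratio (PSNR) between a reference image and the fused result. A minimal sketch in Python/NumPy, assuming 8-bit images (the helper below is my own illustration, not code from any of the compared tools):

```python
import numpy as np

def psnr(reference, fused, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-size 8-bit images."""
    err = reference.astype(np.float64) - fused.astype(np.float64)
    mse = np.mean(err ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((4, 4), 100, dtype=np.uint8)
out = ref.copy()
out[0, 0] = 110  # a single pixel differs by 10, so MSE = 100 / 16 = 6.25
print(round(psnr(ref, out), 2))  # → 40.17
```

Higher PSNR values indicate a fused image closer to the reference; in fusion comparisons the metric is typically computed against each input image in turn, since no single ground-truth reference exists.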
As a future development of this research paper, because of the large number of methods developed over the years around the image fusion concept, there are many other techniques that I did not include in this work; the domain is very vast, and after analyzing the available research articles I decided to include in this paper only the techniques that are widely used. This research can therefore be continued in order to cover the other available image fusion concepts as well.
Another future research direction can be a tool comparison dedicated only to medical image fusion concepts, as well as a study of tools that can accept more than two input images for fusion.
The last extension of this paper would be to apply image fusion to all the presented benchmark sets and also to present any other available benchmarks.
Regarding the developed MATLAB demo tool for image fusion, this tool can also be further developed, to add new image fusion methods, to fix the existing application running issues, and to improve the computation of the image quality metrics calculated between the two input images and the output fused image.
6. References
1. Vikas Kumar Mishra, Shobhit Kumar, Neeraj Shukla – Image Acquisition and Techniques to Perform Image Acquisition, S-JPSET: Vol. 9, Issue 1, ISSN: 2229-7111 (Print) & ISSN: 2454-5767 (Online), Link: https://www.researchgate.net/publication/318500799_Image_Acquisition_and_Techniques_to_Perform_Image_Acquisition
2. https://www.tutorialspoint.com/dip/image_formation_on_camera.htm, Accessed 04.05.2020.
3. https://www.uotechnology.edu.iq/dep-cs/mypdf/subjects/4sw/4ip.pdf, Accessed 04.05.2020.
4. Neetu Rani – Image Processing Techniques: A Review, Article Link: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=14&ved=2ahUKEwje8a7n5prpAhXMBhAIHf5XCCgQFjANegQIBhAB&url=https%3A%2F%2Fjotitt.chitkara.edu.in%2Findex.php%2Fjotitt%2Farticle%2Fdownload%2F36%2F22%2F&usg=AOvVaw1JtWC6zMcnXZdfW7lYcnyS
5. https://www.sciencedirect.com/topics/computer-science/image-rotation, Accessed 04.05.2020.
6. Ravinder Kaur, Taqdir – Image Enhancement Techniques – A Review, International Research Journal of Engineering and Technology (IRJET), e-ISSN: 2395-0056; p-ISSN: 2395-0072, Article Link: https://www.irjet.net/archives/V3/i3/IRJET-V3I3276.pdf
7. Remus Brad – Procesarea imaginilor si elemente de computer vision, Editura Universitatii "Lucian Blaga", Sibiu 2003, ISBN 973-651-739-X
8. https://homepages.inf.ed.ac.uk/rbf/HIPR2/freqfilt.htm, Accessed 04.05.2020.
9. Nayera Nahvi, Onkar Chand Sharma – Comparative Analysis of Various Image Fusion Techniques For Biomedical Images: A Review, International Journal of Engineering Research and Applications (IJERA), Vol. 4, Issue 5 (Version 5), May 2014, pp. 81-86, ISSN: 2248-9622
10. Ms. Maninder Kaur, Ms. Pooja – Review on: Image Fusion, International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), Volume 4, Issue 2, February 2015, ISSN: 2278-1323
11. Shalima, Dr. Rajinder Virk – Review of Image Fusion Techniques, International Research Journal of Engineering and Technology (IRJET), Volume 02, Issue 03, June 2015, e-ISSN: 2395-0056; p-ISSN: 2395-0072
12. Saleha Masood, Muhammad Sharif, Mussarat Yasmin, Muhammad Alyas Shahid, Amjad Rehman – Image Fusion Methods: A Survey, Journal of Engineering Science and Technology Review (JESTR), December 2017, ISSN: 1791-2377
13. Vibha Gupta, Sakshi Mehra – Image Fusion Techniques – A Comparative Study, International Journal of Engineering Trends and Technology (IJETT), Vol. 32, No. 2, February 2016, ISSN: 2231-5381
14. Shivsubramani Krishnamoorthy, K.P. Soman – Implementation and Comparative Study of Image Fusion Algorithms, International Journal of Computer Applications, Volume 9, No. 2, November 2010, ISSN: 0975-8887
15. Mamta Sharma – A Review: Image Fusion Techniques and Applications, International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 7, No. 3, 2016, ISSN: 0975-9646
16. Susheela Dahiya, Pradeep Kumar Garg, Mahesh K. Jat – A comparative study of various pixel-based image fusion techniques as applied to an urban environment, International Journal of Image and Data Fusion, 10 April 2013
17. B. Hela Saraswathi, VPS Naidu – Implementation and Validation of Visual and Infrared Journal (CADFEJL), Vol. 1, No. 2, pp. 27-39, Mar-Apr 2017
18. Susheela Dahiya, Pradeep Kumar Garg, Mahesh K. Jat – A comparative study of various pixel-based image fusion techniques as applied to an urban environment (Review Paper), International Journal of Image and Data Fusion, September 2013, Article Link: https://www.tandfonline.com/doi/abs/10.1080/19479832.2013.778335
19. H.B. Mitchell – Image Fusion – Theories, Techniques and Applications, Springer 2010, ISBN 978-3-642-11215-7, e-ISBN 978-3-642-11216-4.
20. Lakhmi C. Jain, V. Rao Vemuri – Industrial Applications of Neural Networks, CRC Press LLC, 1998, ISBN: 0849398029.
21. Sicong Zheng – Pixel-level Image Fusion Algorithms for Multi-camera Imaging System, A Thesis Presented for the Master of Science Degree, The University of Tennessee, Knoxville, December 2010.
22. Medha Balachandra Mule, Padmavathi N. B. – Basic Medical Image Fusion Methods, International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), Volume 4, Issue 3, March 2015, ISSN: 2278-1323
23. M. D. Nandeesh, Dr. M. Meenakshi – Image Fusion Algorithms for Medical Images - A Comparison, Bonfring International Journal of Advances in Image Processing, Vol. 5, No. 3, July 2015, ISSN: 2277-503X
24. Dhirendra Mishra PhD, Bhakti Palkar PhD Scholar – Image Fusion Techniques: A Review, International Journal of Computer Applications, Volume 130, No. 9, November 2015, ISSN: 0975-8887
25. R. Johnson Suthakar, J. Monica Esther, D. Annapoorani, F. Richard Singh Samuel – Study of Image Fusion – Techniques, Method and Applications, International Journal of Computer Science and Mobile Computing (IJCSMC), Vol. 3, Issue 11, November 2014, pg. 469-476, ISSN: 2320-088X
26. Beeta Narayan, Dr. Priya S, Ms. Sheena K. V. – Reconstructing an image from multi exposure images fusion using pyramid techniques, International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), Volume 8, Issue 5, May 2019, ISSN: 2278-1323
27. Anjali Babu, Padmavathi N. B. – Image Fusion and Secure Transmission of Medical Images, International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), Volume 4, Issue 3, March 2015, ISSN: 2278-1323
28. Nikita D. Rane, Prof. Bhagwat Kakde, Prof. Dr. Manish Jain – Comparative study of Image Fusion Methods: A Review, International Journal of Engineering and Applied Sciences (IJEAS), Volume 4, Issue 10, October 2017, ISSN: 2394-3661
29. Mohammed Ghanbari – Standard Codecs: Image Compression to Advanced Video Coding, Institution of Electrical Engineers, 2003, ISBN: 0852967101
30. Devyani P. Deshmukh, Prof. A. V. Malviya – Image Fusion an Application of Digital Image Processing using Wavelet Transform, International Journal of Scientific & Engineering Research, Volume 6, Issue 11, November 2015, ISSN: 2229-5518
31. Abdalla Mohamed Hambal, Dr. Zhijun Pei, Faustini Libent Ishabailu – Image Noise Reduction and Filtering Techniques, International Journal of Science and Research (IJSR), Volume 6, Issue 3, March 2017, ISSN (Online): 2319-7064, https://ijsr.net/archive/v3i8/MDkwNzE0MTE=.pdf
32. https://www.sciencedirect.com/topics/computer-science/median-filter, Accessed 01.06.2020.
33. Chapter 1 – Introduction to Image Fusion, resume paper, Link: https://shodhganga.inflibnet.ac.in/bitstream/10603/151753/9/09_chapter%201.pdf, Accessed 01.06.2020.
34. https://www.mathworks.com/discovery/image-registration.html, Accessed 01.06.2020.
35. https://www.mathworks.com/help/images/approaches-to-registering-images.html, Accessed 01.06.2020.
36. Sascha Klonus, Manfred Ehlers – Performance of evaluation methods in image fusion, 12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9, 2009, ISBN: 978-0-9824438-0-4, copyright 2009 ISIF.
37. Prof. Talat Ahmad, Prof. Davesh K Sinha, Prof. P. P. Chakraborty, Dr. Atiqur Rahman, Dr. Iqbal Imam – Remote Sensing and GIS, https://www.researchgate.net/publication/330383485_Remote_Sensing_and_GIS_Digital_Image_Fusion
38. https://scialert.net/fulltext/?doi=itj.2007.1224.1230, Accessed 12.06.2020.
39. A. P. James, B. V. Dasarathy – Medical Image Fusion: A survey of the state of the art, Information Fusion, 2014, https://arxiv.org/ftp/arxiv/papers/1401/1401.0166.pdf
40. https://www.mathworks.com/help/wavelet/ref/waveletanalyzer-app.html, Accessed 12.06.2020.
41. https://www.mathworks.com/help/wavelet/gs/image-fusion.html, Accessed 12.06.2020.
42. Jan Flusser, Filip Sroubek, Barbara Zitova – Image Fusion: Principles, Methods, and Applications, Lecture Notes, Tutorial EUSIPCO 2007, Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic.
43. https://www.medfloss.org/node/648, Accessed 13.06.2020.
44. https://demo.softneta.com/search.html, https://www.softneta.com/free-dicom-viewer/, medDream online platform, by Softneta Company, Accessed 13.06.2020.
45. S.B.G. Tilak Babu, K.H.K. Prasad, Jyothirmai Gandeti, Devi Bhavani Kadali, V. Satyanarayana, K. Pavani – Image Fusion using Eigen Features and Stationary Wavelet Transform, International Journal of Innovative Technology and Exploring Engineering (IJITEE), Volume 8, Issue 8S3, June 2019, ISSN: 2278-3075.
46. Heba M. El-Hoseny, El-Sayed M. El-Rabaie, Wael Abd Elrahman, Osama S. Faragallah, Fathi E. Abd El-Samie – Medical Image Fusion: A Literature Review Present Solutions and Future Directions, Minufiya J. of Electronic Engineering Research (MJEER), Vol. 26, No. 2, July 2017.
47. https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/submissions/31338/versions/1/previews/pcaimfuse/PCAimfuse_demo.m/index.html, Accessed 22.06.2020.
48. https://stackoverflow.com/questions/29146015/image-fusion-using-stationary-wavelet-transform-swt-matlab, Accessed 22.06.2020.