“Lucian Blaga” University of Sibiu – Romania
Faculty of Engineering
Master program: Advanced Computing Systems
DISSERTATION
Graduate: Teodora BRANIȘTE
Scientific coordinator: Prof. Dr. Ing. Remus BRAD
– Sibiu, 2017 –
“Lucian Blaga” University of Sibiu – Romania
Faculty of Engineering
Master program: Advanced Computing Systems
A STUDY OF
CORONARY ARTERY
CENTERLINE EXTRACTION
ALGORITHMS
Graduate: Teodora BRANIȘTE
Scientific coordinator: Prof. Dr. Ing. Remus BRAD
– Sibiu, 2017 –
Abstract

According to data from the US Centers for Disease Control and Prevention, heart disease has for years been among the leading causes of death in the entire world. Automatic tracking of vessels in cardiac CT (Computed Tomography) scans is an important step toward early detection of plaques, aneurysms, stenoses and abnormal configurations of the coronary arteries, which could potentially lead to heart failure.

Even though visual inspection of 2D slices is currently the most common way of making a diagnosis based on cardiac CT scans, reliable methods for tracking vessels in 3D imagery could eventually make the process less labor-intensive by helping radiologists correlate the information from different slices. Automatic coronary extraction has great clinical importance for the effective handling and visualization of large amounts of 3D data. Despite tremendous previous research, coronary extraction remains a difficult challenge: the centerline tracing procedure may terminate early at a severe occlusion, or an anatomically inconsistent centerline course may be generated. The major requirements for the development of methods that detect and extract the coronary tree are simplicity of applicability, clinical validity and reproducibility.

This thesis provides an insight into the field of extraction of coronary artery centerlines from computed tomography angiography (CTA) data. Over the years, numerous algorithms have been proposed in this direction, all of which aim at more accurate results. For example, in recent years over 20 articles have been submitted to an online evaluation tool (the Rotterdam Coronary Artery Algorithm Evaluation Framework) [13] provided by Rotterdam University.

The initial requirements for this paper include presenting some elements of image processing and computer vision, with a focus on the field of coronary artery centerline extraction. Furthermore, the basics of centerline extraction algorithms and a literature review on this subject are presented, demonstrating the diversity of techniques available for solving the CT image segmentation problem. The next part of the paper describes the implementation of two algorithms for coronary artery segmentation, together with several algorithms for the pre-processing step of the segmentation. The last chapter consists of the results of the implemented approaches, as well as a comparison between several results using metrics that measure the quality of the output, computed in the evaluation tool of Rotterdam University mentioned before. I believe that the final ranking in the results chapter gives a good indication of the relative performance of the different methods.
Contents

Abstract
Contents
List of Figures
List of Tables
1. Introduction
  1.1 Motivation
  1.2 Heart anatomy
  1.3 Coronary artery disease
  1.4 Cardiac Diagnostics
  1.5 Cardiac imaging
2. State of the art
  2.1 Pattern Recognition Techniques
    2.1.1 Multi-scale-based Approach
    2.1.2 Skeleton-based Approach
    2.1.3 Region growing-based Approach
    2.1.4 Ridge-based Approach
    2.1.5 Differential geometry-based Approach
    2.1.6 Matching filters-based Approach
  2.2 Model-based Approaches
    2.2.1 Deformable models
    2.2.2 Parametric models
    2.2.3 Template matching
  2.3 Artificial Intelligence
  2.4 Machine Learning
  2.5 MICCAI 2008 workshop
    2.5.1 Automatic extraction
    2.5.2 Extraction with minimal user-interaction
    2.5.3 Interactive extraction
3. A study of coronary detection algorithms
  3.1 Pre-processing
    3.1.1 Hounsfield Unit Conversion
    3.1.2 Gradient vector field
    3.1.3 Gradient vector flow
    3.1.4 Gaussian Smoothing
    3.1.5 Non-local Means
  3.2 Tubular detection filter
    3.2.1 Eigenvectors and Eigenvalues
    3.2.2 Frangi vesselness filter
    3.2.3 Circle-fitting TDF
  3.3 Ridge traversal
  3.4 Grouping and linking
  3.5 Inverse gradient flow tracking
  3.6 Avoidance of false positives
4. Results
  4.1 Rotterdam Coronary Artery Algorithm Evaluation Framework
  4.2 Results of the proposed implementation
  4.3 MICCAI Workshop Results
5. Discussion
  5.1 Tube Detection Filters
    5.1.1 Circle Fitting
    5.1.2 Vesselness Filter
  5.2 Noise Filters
    5.2.1 Gaussian Smoothing
    5.2.2 Non-Local Means
  5.3 Extracted Centerlines
    5.3.1 Ridge Traversal
    5.3.2 Grouping and Linking
  5.4 Segmentation Results
6. Conclusion
  6.1 Goal achievement
  6.2 Further work
Bibliography
List of Figures

Figure 1.1: Anatomy of coronary arteries and heart
Figure 1.2: Cross-section of artery
Figure 1.3: Example of soft-plaques and calcified-plaques
Figure 1.4: Normal and abnormal examples of ECG
Figure 1.5: Example of an echocardiogram
Figure 1.6: Example of a CTA
Figure 1.7: Coronary angiography vessel dye injection
Figure 2.1: The region growing algorithm
Figure 2.2: The multiple hypothesis tracking principle
Figure 2.3: Generating the centerline mean shape and region-of-interest mesh
Figure 2.4: Core point sets
Figure 2.5: Tube detection step
Figure 2.6: Coronary tree extraction results by Frangi's vesselness filter
Figure 2.7: Examples of shape matching
Figure 3.1: Diagram of the algorithms
Figure 3.2: Gradients of a sliced, ideal tube
Figure 3.3: Gradient vector flow illustration
Figure 3.4: Slice views of ideal tubular structures and gradients
Figure 3.5: Graph of first order derivatives
Figure 3.6: Graphs of second order derivatives
Figure 3.7: Ideal tubular structure with eigenvectors
Figure 3.8: Circle Fitting illustration
Figure 4.1: Centerline extraction with Vesselness Filter, no denoising
Figure 4.2: Centerline extraction with Vesselness Filter, Gaussian Smoothing Low Parameters
Figure 4.3: Centerline extraction with Vesselness Filter, Gaussian Smoothing High Parameters
Figure 4.4: Centerline extraction with Vesselness Filter, Gaussian Smoothing Low Parameters and Non-local Means
Figure 4.5: Centerline extraction with Vesselness Filter, Gaussian Smoothing High Parameters and Non-local Means
Figure 4.6: Segmentation with Vesselness Filter, no denoising
Figure 4.7: Segmentation with Vesselness Filter, Gaussian Smoothing Low Parameters
Figure 4.8: Segmentation with Vesselness Filter, Gaussian Smoothing High Parameters
Figure 4.9: Segmentation with Vesselness Filter, Gaussian Smoothing Low Parameters and Non-local Means
Figure 4.10: Segmentation with Vesselness Filter, Gaussian Smoothing High Parameters and Non-local Means
Figure 4.11: Centerline extraction with Circle Fitting, no denoising
Figure 4.12: Centerline extraction with Circle Fitting, Gaussian Smoothing Low Parameters
Figure 4.13: Centerline extraction with Circle Fitting, Gaussian Smoothing High Parameters
Figure 4.14: Centerline extraction with Circle Fitting, Gaussian Smoothing Low Parameters and Non-local Means
Figure 4.15: Centerline extraction with Circle Fitting, Gaussian Smoothing High Parameters and Non-local Means
Figure 4.16: Segmentation with Circle Fitting, no denoising
Figure 4.17: Segmentation with Circle Fitting, Gaussian Smoothing Low Parameters
Figure 4.18: Segmentation with Circle Fitting, Gaussian Smoothing High Parameters
Figure 4.19: Segmentation with Circle Fitting, Gaussian Smoothing Low Parameters and Non-local Means
Figure 4.20: Segmentation with Circle Fitting, Gaussian Smoothing High Parameters and Non-local Means
List of Tables

Table 3.1: Table of tubular shapes
Table 4.1: Table of testing results with Vesselness Filter
Table 4.2: Table of testing results with Circle Fitting TDF
Table 4.3: Overlap measures
Table 4.4: Accuracy measures
1. Introduction

1.1 Motivation

Coronary artery disease has one of the highest mortality rates in Romania and in the entire world. Coronary artery disease is one type of cardiovascular disease. It is currently estimated that cardiovascular diseases cause around 17 million deaths every year, and this number is expected to increase to 23.6 million by 2030 [24]. Coronary artery disease is the most common cause, accounting for 8.14 million deaths globally [43]. At most hospitals the standard procedure for diagnosing coronary artery disease is an initial screening with non-invasive CT angiography (CTA), followed by an invasive coronary angiography (ICA) if the initial CTA indicates that coronary artery disease might be present.
ICA is considered to be the gold standard for determining the presence, severity and location of coronary artery disease. The measurement used to determine coronary artery disease is the fractional flow reserve (FFR). FFR is measured by inserting a pressure wire at the entrance of the coronary arteries in order to determine the maximal coronary blood flow through that specific artery. This estimate of maximal blood flow is then compared with a standard value for healthy arteries.
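For orientation, FFR is commonly defined as the ratio of the pressure measured distal to a stenosis to the aortic pressure, both taken during maximal blood flow; a value of about 0.80 or lower is the threshold typically taken to indicate a hemodynamically significant narrowing:

```latex
\mathrm{FFR} = \frac{P_d}{P_a}
```

where $P_d$ is the pressure distal to the stenosis and $P_a$ the aortic (proximal) pressure.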
Since the initial CTA does not involve a pressure wire, a maximal blood flow measurement cannot be obtained with this technology. Computational FFR is an alternative approach for estimating the maximal blood flow in the coronary arteries. This method uses only the initial CTA to produce an estimated value for the maximal blood flow. One of the components required to perform a computational FFR is an accurate segmentation of the coronary arteries. This segmentation can be done manually, but manual segmentation is a very time-consuming process. To make computational FFR practical in a medical environment, the segmentation needs to be fully automatic, produced quickly, and very reliable and exact.

Several different methods for coronary artery segmentation have already been implemented, but most of them are computationally very expensive and require a long time to produce a segmentation.

In this thesis I implement a method for coronary artery segmentation, test it on the available benchmark images and compare it with the results of existing approaches.
1.2 Heart anatomy

The coronary arteries run on the surface of the heart, and their function is to supply blood to the heart muscle. Like all other muscles, the heart muscle needs oxygen-enriched blood to function and oxygen-depleted blood to be carried away. There are two main coronary arteries: the left main and the right coronary artery [12]. The right coronary artery supplies blood to the right ventricle, the right atrium and the sinoatrial and atrioventricular nodes. The right atrium is one of the two blood collection chambers of the heart; the right ventricle is one of its two ventricles, and it pumps oxygen-depleted blood to the lungs [12]. The right coronary artery branches off into smaller arteries, including the acute marginal artery and the right posterior descending artery. The left main coronary artery supplies blood to the left heart muscle, the left atrium and the left ventricle [12]. It then branches off into two arteries, the left anterior descending artery and the circumflex artery. The circumflex artery encircles the heart muscle and supplies blood to the outer and back side of the heart, while the left anterior descending artery supplies blood to the front and left side of the heart [12].
Figure 1.1: Anatomy of coronary arteries and heart
1.3 Coronary artery disease

Coronary artery disease (CAD) is a group of diseases under the larger group of cardiovascular diseases (CVD). The most common CADs are stable angina, unstable angina and myocardial infarction [4,42]. The risk of death from CAD has decreased from 1980 to 2017, especially in developed countries [18], but despite this, CAD is still one of the most common causes of death globally, with around 7.4-8.1 million deaths annually [43,27]. In general, men are more prone to develop CAD, especially at ages over 65 [4,42]. The symptoms of CAD are often unclear and difficult for the patient to recognize. The most common symptom of stable angina is chest pain that occurs regularly during physical activity or after other activities that strain or put pressure on the heart; this is associated with narrowing of the coronary arteries. The most common symptom of unstable angina is also chest pain, but in the unstable form the pain can change in intensity, frequency or character [4,42]. It is estimated that about 30% of adults who go to the hospital with an unclear cause of chest pain have pain due to CAD [4]. There are several risk factors associated with CAD, so in the absence of clear symptoms the risk factors are used to estimate the likelihood of the presence of CAD and to decide on further investigation or treatment. The most common risk factors include family history, obesity, diabetes, smoking, lack of exercise, stress, high blood pressure and high blood lipids [4,43,23,27]. Smoking is by far the most common non-inherited risk factor, as about 36% of CAD patients are current or former smokers [4]. Obesity is also a significant factor, as 20% of CAD patients are reported as obese [4].
In the presence of CAD, the myocardium is not adequately supplied with blood due to an insufficient blood flow within the coronary arteries. This insufficient blood flow is caused by atherosclerotic plaques, which are lesions on the innermost layer of the wall of large and medium-sized arteries. The plaques contain lipids, collagen, inflammatory cells and other substances, and can impede blood flow or rupture, leading to serious problems such as heart attack or stroke.

In principle, there are two major types of plaques:
Soft-plaques: plaques with a very low amount of calcium, consisting mainly of lipid molecules.
Calcified-plaques: plaques with a very high amount of calcium.
Figure 1.3: Example of soft-plaques (left image) and calcified-plaques (right image) built up on the arteries
In order to identify these plaques, to quantify them and to obtain exact information about the stage of the disease, the first step is to process the images and correctly identify the coronary artery tree. This is the main challenge for algorithms that are supposed to support a diagnosis from CT images: the coronary artery tree differs from patient to patient, it does not have a well-defined structure, and the branches of the arteries are irregular.
1.4 Cardiac Diagnostics

There are several ways to diagnose CAD. Some of these methods are not definitive on their own, but they can provide an indication or reinforce the suspicion that CAD might be present. The most used methods are the CT angiogram, echocardiogram, electrocardiogram and stress test. What follows is a short explanation of these diagnostic methods:
Electrocardiogram: An electrocardiogram (ECG) records electrical activity in the heart using electrodes placed on the body. The electrodes record tiny electrical variations on the surface of the skin that arise from the heart muscle during each heartbeat.
Figure 1.4: Normal and abnormal examples of ECG
Echocardiogram: uses ultrasound imaging of the heart. If any parts of the heart are moving weakly or irregularly, it may indicate that these parts are receiving too little oxygen.
Figure 1.5: Example of an echocardiogram
Stress test: A stress test is, in this instance, a stress test of the heart. It can be done in several different ways and can involve several kinds of additional measurements. One way is simply to observe whether the patient experiences pain during heavy heart activity; the test can also involve an echocardiogram or electrocardiogram to obtain more information. This test alone is not enough to diagnose CAD, but it might reveal a problem and give an indication for further testing.
CT angiogram (CTA): a heart imaging technique used to determine whether plaque buildup is obstructing blood flow in the coronary arteries. To see the blood flow on a CT image, contrast material is required. The contrast material is injected in two portions: the first establishes how long the injected contrast needs to travel to the coronary arteries, and the second is used for the actual angiogram.
Figure 1.6: Example of a CTA
Cardiac catheterization: This is not a test on its own but rather a technique where a flexible, long, thin tube called a catheter is inserted into a blood vessel in the arm, upper thigh or neck and threaded to the heart. When the catheter is in place, a variety of tests can be conducted. It can serve as a delivery mechanism for contrast material during a CT angiogram, or it can be used while an echocardiogram is conducted to detect blockages or obstructions in the coronary arteries. It can also be used to measure the loss of blood pressure caused by obstructions in the arteries. This kind of testing is more invasive than the other alternatives, but it has a much higher chance of successfully detecting and diagnosing CAD.
Figure 1.7: Coronary angiography vessel dye injection
1.5 Cardiac imaging

This thesis is about segmenting the coronary arteries from CT images. The CT imaging technology, what it shows and how those images are generated play a major part in how the algorithm for coronary artery segmentation is designed. This section gives a simple, general explanation of how CT imaging works from a practical standpoint.
Conventional X-ray

A CT scan uses a combination of many X-ray images taken from different angles to create cross-sectional images; therefore, to properly explain how CT works, we first need to explain X-rays. X-rays are created by firing electrons at a metal plate. The electrons interact with the atoms in the metal, and photons with a very high frequency are emitted. The photons are then directed at a body. Denser material in the body absorbs most of the photons, while less dense material absorbs fewer. The photons that pass through the body hit a film behind it: locations where photons hit become darker, and the more photons hit the same location, the darker that location becomes on the film. The film becomes the X-ray image [1].
Some materials in the body absorb more of the photons than others. Bone and teeth absorb a lot, meaning that the areas on the final image that are almost completely white are bone or teeth. The other, less dense parts form shadows or silhouettes on the image. Because of this, X-rays are usually used to examine teeth and bone, as the image simply does not provide enough contrast in the remaining tissue to yield useful information [1].
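Quantitatively, this absorption follows the Beer–Lambert law: the intensity $I$ of a beam with initial intensity $I_0$ decays exponentially with the attenuation coefficient $\mu$ integrated along its path $L$ through the body,

```latex
I = I_0 \, \exp\!\left( - \int_{L} \mu(s) \, ds \right)
```

so dense materials such as bone, with large $\mu$, leave correspondingly bright regions on the film.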
Computed Tomography

An X-ray computed tomography scan (CT scan) uses computer-processed combinations of several X-ray images taken from different angles to create a 3D image. In this way, some of the disadvantages of conventional 2D X-ray images are diminished: CT can display more than just shadows, and it is better at displaying contrast between different soft tissues. Because of the multidirectional X-rays, CT can image the absorption at each location of a 2D slice through the body. This is an advantage over plain X-ray, which can only display the sum of all absorption along a specific direction through the body. By interpolating all the different X-ray signals in the frequency domain, a 2D frequency-domain image can be assembled, and the actual image can be retrieved by applying the inverse Fourier transform [10].
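The reconstruction step rests on the projection-slice theorem: the 1D Fourier transform of a parallel projection $p_\theta$ of the image $f$ at angle $\theta$ equals the slice of the 2D Fourier transform $\hat{f}$ through the origin at that same angle,

```latex
\mathcal{F}_{1\mathrm{D}}\{ p_\theta \}(\omega) \;=\; \hat{f}(\omega \cos\theta,\; \omega \sin\theta)
```

which is why collecting projections over many angles fills the frequency domain and allows the image to be recovered by an inverse transform.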
As stated earlier, a CT scanner uses several 2D slices to form a 3D volume. Each voxel at any specific location holds a number indicating the amount of X-ray absorption at that location. This number is usually stored in Hounsfield Units (HU). The Hounsfield Unit scale is a linear transformation of the absorption value and is defined so that distilled water at 1 bar and 25 °C has 0 HU [40].
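In practice, scanners store raw integer values together with a linear rescale. A minimal sketch of the conversion to HU, assuming DICOM-style RescaleSlope/RescaleIntercept metadata is available (the function and parameter names here are illustrative, not part of any particular library):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Convert raw scanner values to Hounsfield Units using the linear
// rescale stored with the dataset (DICOM tags RescaleSlope/RescaleIntercept).
std::vector<float> toHounsfield(const std::vector<int16_t>& raw,
                                float rescaleSlope,      // e.g. 1.0
                                float rescaleIntercept)  // e.g. -1024.0
{
    std::vector<float> hu(raw.size());
    for (std::size_t i = 0; i < raw.size(); ++i) {
        hu[i] = raw[i] * rescaleSlope + rescaleIntercept;
    }
    return hu;
}
```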
2. State of the art

Efficiently obtaining a reliable centerline is relevant in clinical practice because centerlines can serve as a starting point for lumen segmentation, stenosis grading and plaque quantification. More than 30 papers have been published in the last years alone that present and/or evaluate techniques for the centerline segmentation and extraction of human coronary arteries in cardiac CTA datasets. A segmentation is in fact a labeling of each voxel in the CT image that determines whether the voxel is part of the coronary arteries or not. There are two notable reviews on this topic, by Kirbas et al. [19] and Dehkordi et al. [11]; both compare the existing algorithms for this task and reach similar conclusions.
Kirbas and Dehkordi classify the different algorithmic approaches to coronary artery segmentation into several categories: pattern recognition, model-based tracking and propagation, neural network, fuzzy, artificial intelligence, and miscellaneous tube-like object detection based methods. They cover a large number of approaches and compare the different methods based on their results. At the 11th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2008 workshop "3D Segmentation in the Clinic: A Grand Challenge II" [7], a framework for validation and testing of coronary artery centerline extraction was presented. This framework consists of testing software and 30 coronary artery centerline extractions done manually by professionals and validated by professionals within the field. It eventually received the name Rotterdam Coronary Artery Algorithm Evaluation Framework [13]. The framework allows coronary artery centerline extraction algorithms to be scored and validated on independent data, and the tested algorithms are all ranked against each other with public scores. The framework has stayed relevant since its release in 2008 and is still being used today.

In this chapter I give a basic introduction to the most common approaches to coronary artery segmentation and centerline extraction. The review by Kirbas et al. [19] is used as the primary source of categorization, with the review by Dehkordi et al. [11] in support, particularly within the artificial intelligence, machine learning and neural network categories. The ranking from MICCAI'08 [7,14] provides the final, up-to-date reference for centerline extraction algorithms.
2.1 Pattern Recognition Techniques

The reviews by Kirbas et al. [19] and Dehkordi et al. [11] cover a large number of pattern recognition based methods. Pattern recognition is a branch of machine learning that focuses on the recognition of patterns and regularities in data, although it is in some cases considered to be nearly synonymous with machine learning. Pattern recognition systems are in many cases trained from labeled "training" data (supervised learning), but when no labeled data are available, other algorithms can be used to discover previously unknown patterns (unsupervised learning). The reviews divide pattern recognition algorithms into several categories: multi-scale, skeleton-based, region growing-based, ridge-based, differential geometry-based, and matching filters-based approaches.
2.1.1 Multi-scale-based Approach

Scale is an important parameter of images. Different objects or image structures (e.g. edges and corners) can appear at different scales, and each is meaningful only over a limited range of scales. Multi-scale analysis has been widely used in image processing and computer vision, serving as the basis for many high-level image analysis systems. The multi-scale approach is usually combined with other methods of vessel detection and/or segmentation that perform the actual segmentation, as the multi-scale technique is used primarily to decrease processing time. Kirbas et al. [19] refer to two implementations where the multi-scale approach is used.
2.1.2 Skeleton-based Approach

A skeleton is a set of "medial axis" data that retains the connectivity, or topology, of the original pattern shape. A number of applications indicate the usefulness of reducing a pattern to its skeleton representation: a skeleton representation can reduce the amount of data, and shape analysis can be made more easily on skeleton subpatterns, called skeleton branches. High-quality skeletonization algorithms still need to be developed, and obtaining a suitable data structure for pattern recognition based on a skeleton representation is also a problem. Different approaches are used to extract the centerline structure; some of these are: thresholding followed by object connectivity analysis, thresholding followed by a thinning procedure, and extraction based on a graph description. Both Kirbas et al. [19] and Dehkordi et al. [11] present several methods where centerline extraction is used.

A method by Niki et al. [26] uses a short-scan cone-beam filtered back-projection reconstruction algorithm aided by a graph description of the blood vessels to extract the centerline, and then uses thresholding and an object connectivity procedure to perform the segmentation.
Another method, whose goal is to segment the airways in the lungs, uses thresholding and then a 3D thinning algorithm to extract the centerlines.

A more complex approach is presented by Sorantin et al. [37], where a five-step algorithm is applied to a CT angiogram. First, the laryngo-tracheal tract (LTT) is extracted using fuzzy connectedness based on user-supplied seed points; 3D dilation is applied to avoid uncertain boundary points due to the partial volume effect, and the resulting 3D volume is converted into cubic voxels by interpolation. The second step applies a 3D thinning procedure to the volume from the first step. The third step uses a shortest-path search algorithm on the thinned volume from step two; this step requires the user to manually set the start and end points on the estimated central path. The fourth step smooths the result, and the final step calculates a cross-sectional profile along the medial axis of the smoothed result.

The list of skeleton-based methods reviewed by both Kirbas et al. [19] and Dehkordi et al. [11] continues, but most of the algorithms use methods fairly similar to the first and second methods presented here.
2.1.3 Region growing-based Approach

Region-based segmentation is the partitioning of an image into similar/homogeneous areas of connected pixels through the application of homogeneity/similarity criteria among candidate sets of pixels. Each of the pixels in a region is similar with respect to some characteristic or computed property, such as colour, intensity and/or texture. Failure to adjust the homogeneity/similarity criteria accordingly will produce undesirable results, among them: a segmented region smaller or larger than the actual one; over- or under-segmentation of the image (spurious or missing objects); and fragmentation. Both Kirbas et al. [19] and Dehkordi et al. [11] review methods based on the region growing approach.
O'Brien and Ezquerra [28] use a region growing approach to extract centerlines of coronary arteries from CT images. Their algorithm starts with a low-pass filtering of the initial image as pre-processing. Then an initial segmentation by region growing is performed based on user-provided seed points. After that, the centerlines are extracted from this initial segmentation by employing a balloon test. Undetected vessel segments are then located by a spatial expansion algorithm, and finally graph theory is applied to determine which centerlines to include in the extraction and which to ignore.

Both Kirbas et al. [19] and Dehkordi et al. [11] review several more methods based on region growing. Common to all of the methods covered is that they require user input for the seed points and that they use these points either to generate a centerline or to directly segment the target vessel; a minimal sketch of the growing loop itself is shown after Figure 2.1.
Figure 2.1: The region growing algorithm
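As an illustration of the core idea, here is a minimal sketch of intensity-based region growing on a 3D volume; the 6-connectivity and the fixed intensity interval [lo, hi] are simplifying assumptions, not the criteria of any specific reviewed method:

```cpp
#include <array>
#include <cstdint>
#include <queue>
#include <vector>

// Minimal 3D volume with linear voxel indexing.
struct Volume {
    int nx, ny, nz;
    std::vector<float> data;  // voxel intensities, size nx*ny*nz
    int index(int x, int y, int z) const { return (z * ny + y) * nx + x; }
};

// Grow a region from a seed voxel, accepting 6-connected neighbors whose
// intensity lies inside [lo, hi]. Returns a binary label volume.
std::vector<uint8_t> regionGrow(const Volume& vol, std::array<int, 3> seed,
                                float lo, float hi)
{
    std::vector<uint8_t> label(vol.data.size(), 0);
    std::queue<std::array<int, 3>> frontier;
    label[vol.index(seed[0], seed[1], seed[2])] = 1;
    frontier.push(seed);
    const int off[6][3] = {{1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}};
    while (!frontier.empty()) {
        auto [x, y, z] = frontier.front();
        frontier.pop();
        for (const auto& d : off) {
            int X = x + d[0], Y = y + d[1], Z = z + d[2];
            if (X < 0 || Y < 0 || Z < 0 || X >= vol.nx || Y >= vol.ny || Z >= vol.nz)
                continue;
            int i = vol.index(X, Y, Z);
            // Homogeneity criterion: intensity inside the accepted interval.
            if (!label[i] && vol.data[i] >= lo && vol.data[i] <= hi) {
                label[i] = 1;
                frontier.push({X, Y, Z});
            }
        }
    }
    return label;
}
```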
2.1.4 Ridge-based Approach

In the ridge-based approach, gray scale images are treated as 3D elevation maps whose intensity ridges trace the structure of the object to be detected. In the intensity map, ridge points are local peaks in the direction of maximal surface gradient. The ridge-based approach can be used with different image modalities since it is invariant to affine transforms, and it mostly finds application in medical image registration. Both Kirbas et al. [19] and Dehkordi et al. [11] review several methods that utilize a ridge-based approach.
One of the main methods where the ridge-based approach is utilized is that of Bullitt and Aylward [3]. Their method relies on manually selected seed points for each vessel to be extracted. An intensity ridge map is constructed, and for each seed the closest ridge is traversed. This results in a line of points generated by the traversal, and for each point an estimate of the vessel diameter can also be calculated from the intensity map. Aylward et al. [2] use the same technique but with further refinements for more accurate extraction of centerlines: they apply the cores method [30] on the intensity ridges, and from manual seed points they locate the ridges to be traversed by using a conjugate directions search with respect to the Hessian matrix.
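As a point of reference, the height-ridge definition commonly used in such methods (stated here in generic notation, not that of the cited papers) says that a point of a 3D image $I$ lies on a 1D intensity ridge when the gradient has no component in the cross-sectional plane spanned by the two Hessian eigenvectors with the most negative eigenvalues:

```latex
\nabla I \cdot v_1 = 0, \qquad \nabla I \cdot v_2 = 0, \qquad \lambda_1 \le \lambda_2 < 0
```

so the intensity is locally maximal across the vessel and varies only along the remaining eigenvector $v_3$, the tube direction.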
2.1.5 Differential geometry-based Approach

In differential geometry (DG) based methods, images are treated as hypersurfaces, and features are extracted using the curvature and the crest lines of the hypersurface. The crest points give the centerlines of the blood vessels. 2D and 3D images are treated as 3D and 4D hypersurfaces, respectively. In the DG-based approach, a 3D surface is described by two principal curvatures and their corresponding orthogonal directions, called the principal directions. The features extracted using this technique can be used for medical image registration. Crest points possess the properties of the hypersurface, and centerlines are obtained by linking these crest points. Kirbas et al. [19] review several methods where differential geometry is used.

A method by Krissian et al. [20] uses Directional Anisotropic Diffusion, derived from Gaussian convolution, to reduce image noise. Their method is based on differentiating the diffusion along the directions of the gradient and of the minimum and maximum curvatures. One of the primary benefits of Directional Anisotropic Diffusion is that it effectively reduces noise without introducing significant blurring. The method is applied to a set of phantom images and produces an image in which the vessels are significantly enhanced and well suited for a subsequent extraction algorithm.

A method by Prinet et al. [32] utilizes the approach described above: a cylindrical mathematical model is used to identify and represent the vessels in the 3D/4D hypersurface generated by the general method presented. The method requires no additional knowledge and is fully automatic. Since it generates a centerline, it can also be classified as a skeleton-based method.
2.1.6 Matching filters-based Approach

The matching filters (MF) approach convolves the image with several matched filters in order to extract the objects of interest. The order of the filter and its orientation have great significance in the extraction of the vessel structure, and the computational load is determined by the convolution kernel. The use of matched filters is usually followed by other image processing operations such as thinning. Kirbas et al. [19] review a few methods where the matching filters principle is applied.

A method presented by Poli and Valli [31] uses a set of multiple oriented linear filters, obtained as linear combinations of properly shifted Gaussian kernels, to detect vessels in real time. The filters are sensitive to vessels of different orientations and thicknesses. In addition, they use convolution masks to obtain maximum efficiency, and they use the orientation and scale information obtained from the linear filters to extract only vessels and no other structures. This results in a centerline, and they use thresholding with hysteresis [8] to perform the segmentation based on it.
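As a representative example of such a kernel (the classic two-dimensional matched filter of Chaudhuri et al., given here for illustration rather than as Poli and Valli's exact filter), a vessel segment running along the $y$-axis is matched by an inverted Gaussian profile across the vessel,

```latex
K(x, y) = -\exp\!\left( -\frac{x^2}{2\sigma^2} \right), \qquad |y| \le L/2
```

rotated to a set of discrete orientations, where $\sigma$ models the expected vessel width and $L$ the length over which the vessel is assumed to keep a fixed orientation.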
2.2 Model-based Approaches

Model-based tracking and propagation approaches attempt to apply vessel models to extract or identify vessel structures in medical images. Kirbas et al. [19] divide the different model-based and propagation approaches into four general categories: deformable models, parametric models, template matching and generalized cylinders.
2.2.1 Deformable models

Deformable models are a generalization of methods where one or more deformable objects interact with image characteristics through internal and external forces. The active contour model, also called a snake, is probably the most popular method within the deformable models category. The snake method is a general method where a line, or "snake", of connected points is affected by external and internal forces. The internal forces impose smoothness and connectivity on the snake: they prevent the line from breaking apart or bending too sharply. The external forces pull the points of the line towards the desired image features. Together, these two kinds of forces make the deformable line robust to both image noise and artifacts. The main disadvantage of this model is the initialization: the position of the snake and its length play an important role in the end result. Each snake needs to be handled individually, but it is also necessary to avoid collision and overlap between the snakes. Therefore, it is advantageous to have a limited number of active snakes, at good positions, so that the results are satisfactory and the computational load is manageable. Kirbas et al. [19] review several implementations of this and very similar methods.
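For reference, the classic snake formulation that these methods build on evolves a curve $\mathbf{x}(s)$ to minimize an energy of the form

```latex
E = \int_0^1 \tfrac{1}{2}\left( \alpha \,\lVert \mathbf{x}'(s) \rVert^2
    + \beta \,\lVert \mathbf{x}''(s) \rVert^2 \right)
    + E_{\mathrm{ext}}\!\left( \mathbf{x}(s) \right) ds
```

where the two internal terms penalize stretching and bending (weighted by $\alpha$ and $\beta$), and the external term $E_{\mathrm{ext}}$, typically derived from image gradients, pulls the curve toward the desired image features.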
2.2.2 Parametric models

The parametric approach tries to identify objects of interest parametrically. In general, for vessels and tubular objects, the assumption is made that their general shape is a set of overlapping ellipsoids; this is normally referred to as the circular vessel model. One of the problems with this model is irregular vessel shapes: vessel junctions, irregular vessels in general, or vessel stenoses might not fit the ellipsoid shape assumption and may negatively affect the results produced. Therefore, in practice, this method is usually teamed with other methods to detect the shapes that fall outside of the overall assumption. Kirbas et al. [19] review a few methods where this approach is used; most of these are teamed with either an artificial intelligence method or a pattern recognition method.
2.2.3 Template matching

The template matching approach attempts to recognize a structure or structures in an image. This method normally uses a priori knowledge, referred to as a context or a template, to match against the potential structures in the input image. For coronary arteries this context is usually a series of points or nodes that is deformed, within some restrictions, to fit the structures detected in the image. Kirbas et al. [19] review a few methods where this approach is used; they vary in the methods used for the deformation that fits the template to the objects to be identified. Petrocelli et al. [29] use dynamic programming to handle the deformation, while Summers and Bhalerao [38] use an estimation system where features like flow direction, vessel angle and diameter are estimated to perform the deformation.
2.3 Artificial Intelligence
The Artificial Intelligence category is very broad. Kirbas et al. [19] define it as methods that utilize knowledge to guide the segmentation or detection process and to delineate vessel structures. This knowledge can be general knowledge about the imaging technique used, the area captured in the image, knowledge about the patient or the patient's condition, and much more. In practice many, if not all, of the methods covered here can be classified as AI methods. Smets et al. [35] present 11 general rules about the appearance of blood vessels. The main disadvantage Kirbas et al. [19] found is the computational complexity of most AI detection and segmentation methods; therefore, artificial intelligence approaches are usually combined with other, less complex methods that remove some of the unwanted information and reduce the computational load the AI method needs to process. As mentioned earlier, the review by Kirbas et al. [19] was done in 2003, and since then there has been significant development within the AI field when it comes to medical image analysis.

Several newer methods tested with the MICCAI'08 [7,13,14] framework utilize AI techniques and methods. Some algorithms, like the one presented by Schaap et al. [34] and the one presented by Kitamura et al. [47], are based on machine learning, while others, like Zheng et al. [46], Szymczak et al. [39], Lesage et al. [23], Yang et al. [45], Cetin et al. [9], Krissian et al. [22], and Bauer et al. [6,5], are based on expert knowledge and intelligent searching.
A method presented by Friman et al. [17] utilizes a multiple hypothesis tracking approach, complemented by a minimal path search when needed. They describe their algorithm as an "interactive approach to the identification of coronary arteries", referring to the fact that their method is not fully automatic. How much user interaction is required varies depending on how well the multiple hypothesis tracking algorithm performs, but some interaction is always required: both start and end points need to be supplied by the user. If the hypothesis tracking algorithm (Figure 2.2) performs well, in that it produces a complete connected centerline network, no further user input is required. The minimal path search works as a backup solution: the user can supply more start and end points, and the minimal path search will attempt to connect them. Finally, if the centerline network is still incomplete, their design allows the user to manually trace the centerline.
Figure 2.2: Illustration of the multiple hypothesis tracking principle. From the start point, several hypothetical paths forward are evaluated before deciding on the next step.
A method presented by Zheng et al. [46] uses a model-driven approach. Their algorithm first locates the general area of the coronary arteries by detecting the heart chambers, then places a mean centerline in close relation to the heart chambers. The mean centerline is then transformed iteratively, based on image information, to fit the coronary arteries. This transformation is done by analyzing each point/coordinate of the mean centerline, searching for the best candidate among a field of neighbors based on a cost function, its distance from the best points already located in previous iterations, and a balancing parameter. This method also utilizes machine learning for the creation of the mean centerline, for balancing the cost function, and for weighting distance versus cost score within the overall cost function.
Figure 2.3: Generating the centerline mean shape and region-of-interest mesh for the left anterior descending artery from a set of aligned training centerlines. (a) Aligned centerline point cloud and the coarse mean centerline (red line). (b) The refined mean (blue line).
The centerline extraction algorithm of Szymczak et al. [39] consists of two major steps. The first step is to identify core points, which are centers of intensity in two-dimensional slices through the input image. Then a weighted core graph is constructed by connecting nearby core points with edges, using a weighted shortest-path search. When all the core points have been processed, the output is the shortest path connecting the start point and the end points. The start and end points can be detected automatically (e.g. a core point with only one connection) or they can be user-supplied.
Figure 2.4: Core point sets used by the algorithm for a few of the input datasets. Notice the streaks of points running along the centerlines of the vessels.
Yang et al. [45], Krissian et al. [22] and Bauer et al. [6,5] all have strong similarities in their methods for extraction of the coronary artery centerlines. All three methods use variations of the tubular detection filter of Frangi et al. [15] to detect potential points within the arteries. Frangi's vesselness filter [15] measures the similarity to a tubular structure using the Hessian matrix and has been widely used for vessel enhancement in medical images. Bauer et al. [6,5] use the conventional Frangi vesselness filter, while Krissian et al. [22] use a vesselness estimation filter similar to a circle-fitting filter.
Figure 2.5: Tube detection step. For visualization, just the image regions in proximity of the coronary artery trees are shown. (a) MIP of the original dataset. (b) Extracted coronary artery trees. (c) Provided reference centerlines. (d) Tube detection result on the initial vector field Fn. (e) Tube detection result on the normalized GVF field Vn. (f) Combined tube detection result T.
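For reference, Frangi's vesselness measure for a bright 3D tubular structure, with Hessian eigenvalues ordered $|\lambda_1| \le |\lambda_2| \le |\lambda_3|$, is zero when $\lambda_2 > 0$ or $\lambda_3 > 0$ and otherwise

```latex
\mathcal{V} = \left( 1 - e^{-\mathcal{R}_A^2 / 2\alpha^2} \right)
              e^{-\mathcal{R}_B^2 / 2\beta^2}
              \left( 1 - e^{-S^2 / 2c^2} \right),
\quad
\mathcal{R}_A = \frac{|\lambda_2|}{|\lambda_3|}, \;
\mathcal{R}_B = \frac{|\lambda_1|}{\sqrt{|\lambda_2 \lambda_3|}}, \;
S = \sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}
```

where $\mathcal{R}_A$ separates plate-like from line-like structures, $\mathcal{R}_B$ penalizes blob-like structures, $S$ suppresses background noise, and $\alpha$, $\beta$, $c$ are sensitivity parameters.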
The vesselness filter of Yang et al. [45] builds on the filter of Frangi et al. [15], but removes points at the boundaries of the cardiac chambers. The points and scores created by the vesselness filter are then processed to create the centerlines: Yang et al. [45] choose the most promising points and use these as start points in a ridge-traversal-style search. The search is supported by bio-mechanical metrics, to avoid errors such as big and sudden direction changes or invalid connections.
Figure 2.6: Coronary tree extraction results by Frangi's vesselness filter with thresholds of 80 (a) and 140 (b) respectively, and with the improved Frangi vesselness filter (c). The aorta was extracted by the aorta detection algorithm. The red circle marks the false extraction of the right atrium. Increasing the threshold could alleviate the false extraction, but the distal parts and side branches were not extracted at the threshold of 140, as shown in (b).
Krissian et al. [22] use a region growing approach based on the most prominent vesselness points. The most prominent points are selected based on their vesselness score and their location relative to each other and to the aorta. Region growing is then performed from the two best points. The region growing algorithm is limited by the estimated radius, to avoid leakage and extraction of the aorta, and it is limited by range and step size. This results in a segmentation, and the centerline is extracted by simply tracking the center of the segmentation.
Bauer et al. [6,5] use a ridge traversal approach based on the Hessian matrix generated from the intensity changes in the original input image. Several of the most promising points from the vesselness algorithm are selected, and for each point the closest ridge is traversed in both directions. The ridge traversal is made robust by avoiding sudden shifts in direction, with some leniency over a limited distance to reduce noise problems. After all the promising points have been traversed, the shortest traversals are discarded. The remaining lines are then linked up based on bio-mechanical metrics in a cost function focusing on the angle between the lines to be connected and their distance.

All of these methods finish with a post-processing routine in which the longest connected centerline whose average radius and HU values lie within given thresholds is selected as the complete centerline of the coronary arteries. All of these methods produce results that are very close to each other; a simplified sketch of the ridge traversal loop they share is shown below.
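The following sketch makes the traversal step concrete; the step size, the angle threshold, and the assumption that tubeDirection returns a unit-length Hessian eigenvector along the vessel are all illustrative choices, not the exact procedure of any of the cited papers:

```cpp
#include <array>
#include <functional>
#include <vector>

using Vec3 = std::array<float, 3>;

static float dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Follow an intensity ridge from a seed point. 'tubeDirection' is assumed
// to return the unit Hessian eigenvector with the smallest-magnitude
// eigenvalue (the direction along the vessel) at a given position.
// Traversal stops when the direction turns too sharply, which usually
// indicates that the ridge has been lost.
std::vector<Vec3> traverseRidge(Vec3 seed,
                                const std::function<Vec3(const Vec3&)>& tubeDirection,
                                float stepSize, int maxSteps, float minCosAngle)
{
    std::vector<Vec3> line{seed};
    Vec3 pos = seed;
    Vec3 dir = tubeDirection(seed);
    for (int i = 0; i < maxSteps; ++i) {
        // Step along the current tube direction.
        for (int k = 0; k < 3; ++k) pos[k] += stepSize * dir[k];
        Vec3 next = tubeDirection(pos);
        // Eigenvectors have no canonical sign: flip to stay consistent.
        if (dot(next, dir) < 0) for (auto& c : next) c = -c;
        // Reject sudden direction changes (likely noise or a lost ridge).
        if (dot(next, dir) < minCosAngle) break;
        dir = next;
        line.push_back(pos);
    }
    return line;
}
```

To extract a full branch, such a loop would be run from the seed in both directions (by negating the initial direction) and the two halves concatenated.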
In practice, it seems that the use of expert knowledge is a requirement for achieving accurate results in coronary artery centerline detection and extraction.
2.4 Machine Learning
Machine learning is a broad term describing algorithms that in some way improve or adapt based on data or training, and there are several subcategories of approaches to it. The review by Kirbas et al. [19] only covers neural-network-based machine learning approaches, and the review by Dehkordi et al. [11] only mentions machine learning. Yet two of the algorithms tested on the MICCAI'08 [7,13,14] framework utilize machine learning, and their performance is comparable to that of the other algorithms ranked by the framework.
The algorithm of Kitamura et al. [47] uses the Hessian matrix combined with machine learning to detect candidate points within the coronary arteries. They describe the machine learning component as a classifier, as its purpose is to determine whether a structure is part of the vessel tree or not. The classifier is trained with the Adaptive Boosting algorithm [16] by exposing it to both positive and negative training samples. The positive training samples are manually labeled centerlines and vessel contours. The adaptive boosting algorithm learns a classifier that discriminates the positive and negative samples by sequentially combining the feature vectors of the samples; the feature vectors were extracted using Haar-like filtering [41]. The candidate points are then connected through a graph reconstruction algorithm consisting of three steps: shape model, matching method and energy matching. The shape model is constructed by placing 30 relative connected points, whose relative positions are decided through the same machine learning algorithm mentioned earlier. The extracted candidate points are then matched to this model by applying an energy function that works in a similar manner to the snake algorithm described in the model-based approach chapter. This ensures that the original candidate points keep their general shape while still adapting to the model shape. The result is a centerline for the coronary arteries.
Figure 2.7: Examples of shape matching. (a) is an assumed model. The topology term takes a large penalty value when the correspondence is assigned to different branches (b), in comparison to the same branch (c).
Schaap et al. [34] propose an algorithm fairly similar to the one presented by Kitamura et al. [47]. Schaap et al. [34] utilize machine learning to establish a general shape of the vessel structures normally observed in the coronary arteries. Their method is aimed at improving an already existing centerline, which they describe as a rough centerline that can either be manually labeled or delivered by some other centerline extraction algorithm. The algorithm then improves upon the rough centerline in a similar fashion to Kitamura et al. [47], deforming it based on several factors so that it more closely resembles the learned general shape of the vessel tree.
2.5 MICCAI 2008 workshop

The papers presenting algorithms for coronary artery tree segmentation published at the MICCAI workshop were also classified in another way, emphasizing the user interaction needed to perform the segmentation. Three categories of algorithms can be discerned:

2.5.1 Automatic extraction

For these methods no user interaction is needed; the extraction is fully automatic. Several of the algorithms at the workshop use automatic extraction:
AutoCoronaryTree (Tek et al., 2008; Gulsun and Tek, 2008): "The full centerline tree of the coronary arteries is extracted via a multi-scale medialness-based vessel tree extraction algorithm which starts a tracking process from the ostia locations until all coronary branches are reached" [7].
CocomoBeach (Kitslaar et al., 2008): This method starts by segmenting the ascending aorta and the heart. Candidate coronary regions are obtained using connected component analysis and the masking of large structures. Using these components, a region growing scheme, starting in the aorta, segments the complete tree. Finally, centerlines within the pre-segmented tree are obtained using the WaveProp (Marquering et al., 2005) method [7].
DepthFirstModelFit (Zambal et al., 2008): Coronary artery centerline extraction is accomplished by fitting models of shape and appearance. A large-scale model of the complete heart in combination with symmetry features is used for detecting coronary artery seeds. To fully extract the coronary artery tree, two small-scale cylinder-like models are matched via depth-first search [7].
GVFTube'n'Linkage (Bauer and Bischof, 2008): This method uses a Gradient Vector Flow (Xu et al., 1998) based tube detection procedure for identification of vessels surrounded by arbitrary tissues (Bauer and Bischof, 2008a,b). Vessel centerlines are extracted using ridge traversal and linked to form complete tree structures. For selection of coronary arteries, gray value information and centerline length are used [7].
VirtualContrast (Wang and Smedby, 2008): This method segments the coronary arteries based on the connectivity of the contrast agent in the vessel lumen, using a competing fuzzy connectedness tree algorithm (Wang et al., 2007). Automatic rib cage removal and ascending aorta tracing are included to initialize the segmentation. Centerline extraction is based on the skeletonization of the tree structure [7].

2.5.2 Extraction with minimal user-interaction

Extraction methods with minimal user interaction are allowed to use one point per vessel as input for the algorithm. The following semi-automatic algorithms were published at the workshop:
AxialSymmetry (Dikici et al., 2008): This method finds a minimum cost path connecting the aorta to a user-supplied distal endpoint. Firstly, the aorta surface is extracted. Then, a two-stage Hough-like election scheme detects the high axial symmetry points in the image. Via these, a sparse graph is constructed. This graph is used to determine the optimal path connecting the user-supplied seed point and the aorta [7].
CoronaryTreeMorphoRec (Castro et al., 2008): This method generates the coronary tree iteratively from point S. Pre-processing steps are performed in order to segment the aorta, remove unwanted structures in the background and detect calcium. Centerline points are chosen in each iteration depending on the previous vessel direction and a local gray scale morphological 3D reconstruction [7].
KnowledgeBasedMinPath (Krissian et al., 2008): For each voxel, the probability of belonging to a coronary vessel is estimated from a feature space, and a vesselness measure is used to obtain a cost function. The vessel starting point is obtained automatically, while the end point is provided by the user. Finally, the centerline is obtained as the minimal cost path between both points [7].
2.5.3 Interactive extraction

All methods that require more user interaction than one point per vessel as input are part of the third category.
3DInteractiveTrack (Zhang et al., 2008): This method calculates a local cost for each voxel based on eigenvalue analysis of the Hessian matrix. When a user selects a point, the method calculates the cost of linking this point to all other voxels. If a user then moves to any voxel, the path with minimum overall cost is displayed. The user is able to inspect and modify the tracking to improve performance [7].
ElasticModel (Hoyos et al., 2008): After manual selection of a background-intensity threshold and one point per vessel, centerline points are added by prediction and refinement. Prediction uses the local vessel orientation, estimated by eigen-analysis of the inertia matrix. Refinement uses centroid information and is restricted by continuity and smoothness constraints of the model (Hernández Hoyos et al., 2005) [7].
MHT (Friman et al., 2008): In this method, vessel branches are found using a Multiple Hypothesis Tracking (MHT) framework. A feature of the MHT framework is that it can traverse difficult passages by evaluating several hypothetical paths. A minimal path algorithm based on Fast Marching is used to bridge gaps where the MHT terminates prematurely [7].
Tracer (Szymczak, 2008): This method finds the set of core points (centers of intensity plateaus in 2D slices) that concentrate near vessel centerlines. A weighted graph is formed by connecting nearby core points. Low weights are given to edges of the graph that are likely to follow a vessel. The output is the shortest path connecting point S and point E [7].
TwoPointMinCost (Metz et al., 2008): This method finds a minimum cost path between point S and point E using Dijkstra’s algorithm. The cost to travel through a voxel is based on Gaussian error functions of the image intensity and a Hessian-based vesselness measure (Frangi et al., 1998), calculated on a single scale [7].
3. A study of coronary detection algorithms
This chapter describes an implementation of two coronary artery segmentation and centerline extraction algorithms. I have chosen to base the implementation on the methods presented by Bauer et al. [6,5] and Krissian et al. [22,21]. The implementation was done in the existing framework FAST [36]. This framework was made with the purpose of supporting parallel implementations on GPUs. The framework utilizes C++ and OpenCL as programming languages, therefore these two languages were also used for the implementation.
The method proposed by Bauer et al. [6,5] consists of a collection of steps: pre-processing, tubular detection filtering, ridge traversal, grouping and linking, and segmentation. The first step, pre-processing, is itself a collection of operations and algorithms, including optional de-noising, creation of the gradient vector field, gradient vector flow and conversion of Hounsfield units. The framework FAST [36] provides two filters for noise reduction: the Gaussian smoothing filter and Non-local Means. Although the paper by Bauer et al. [6,5] presents these filters only as optional, in this implementation I compared the results of the algorithms with and without noise filters; the results of the comparison are presented in the discussion and conclusion chapters. All of the pre-processing algorithms are presented in detail in chapter 3.1.
The tubular detection step requires running the tubular detection filter twice: the first run is on the gradient vector field, and the second on the same field after gradient vector flow has been applied. This double run has the purpose of detecting both small and large tubular forms. Bauer et al. [6,5] use for the detection a vesselness filter proposed by Frangi [15]; in this implementation I have also used this filter. The framework FAST provides an alternative detection filter, known as the circle fitting tubular detection filter, proposed by Krissian et al. [22,21]. This tubular detection filter is thought to be an improvement, but it is computationally more expensive to execute; on the other hand, if it provides fewer but more accurate points, the overall execution time can be lowered. The presented implementation can utilize both of these filters, and in the results and discussion chapter I will present results for both tubular detection filters.
The ridge traversal step and the linking and grouping step are dependent on each other. The ridge traversal algorithm proposed by Bauer et al. [6,5] produces several centerlines. Most of these centerlines should form a tree graph, but because of the nature of the coronary arteries there will probably exist some smaller vessels that are not connected to the overall tree. The grouping and linking step is performed after the ridge traversal and attempts to connect the vessels that do not yet belong to the centerline tree. Both of these steps are implemented according to the description by Bauer et al. [6,5].
The final step is the segmentation step. The inverse gradient flow tracking algorithm is described by Bauer et al. [6,5] but not utilized for segmentation of the coronary arteries, as the overall objective of their proposal was to produce an accurate centerline extraction. The inverse gradient flow tracking algorithm utilizes the gradient vector field that has already been computed in a previous step.
Figure 3.1: Diagram of the algorithms
3.1 Pre-processing
3.1.1 Hounsfield Unit Conversion
“The intensity values of X-ray and CT images are measured in Hounsfield Units. These values correspond to the radiation absorption amount of the tissue at a specific location and can therefore be used to distinguish different types of tissue. Air is estimated to be around -1000 HU, while bone is roughly between 700 and 3000 HU” [6,5]. The objective of this thesis is to extract the centerlines of the coronary arteries from CTA images. A contrast fluid is injected into the vessels in question in order to make them more visible in the images. It is also known that vessels containing this fluid have a HU value around 200. Any HU value in the input images that lies significantly outside this range plays no role and is irrelevant for the implementation. Because the results of the gradient depend on the intensity of the image (as presented in chapters 3.1.2 and 3.1.3), the amount of false positives needs to be reduced. This can be done by decreasing all HU values in the image that are greater than 500 to this value. The same is done with the negative values, which are all set to 0. These two bounds are labeled HUmin and HUmax. The remaining values between 0 and 500 are scaled so that the range of intensity values is converted to a floating point number between 0.0 and 1.0. The equation used for this conversion is the following:
$$I(\vec{v}) = \begin{cases} 1.0 & \text{if } I(\vec{v}) \geq HU_{max} \\[4pt] \dfrac{I(\vec{v}) - HU_{min}}{HU_{max} - HU_{min}} & \text{else} \end{cases} \quad (3.1)$$
3.1.2 Gradient vector field
The next step in the pre-processing stage is to calculate the initial gradient vector field and normalize it. Recall that the gradient of an image is a vector that describes the intensity change (derivatives of the image intensity) at a specific point in the image. The angle of the vectors describes the direction of the maximum intensity change, and the magnitude of the gradient vector describes how big the change is.
Figure 3.2: Gradients of a sliced, ideal tube
Hessian-based TDFs are TDFs that use the Hessian matrix for shape analysis. The values of the Hessian matrix represent the second-order derivative information at a specific voxel position, whereas the gradient is the first-order derivative information at a specific voxel in each direction. The vectors of the first-order derivative correspond to the change in intensity values in all three directions at the position of the voxel: the direction of these vectors indicates the direction of the largest intensity change, and the magnitude of the vector indicates the amount of intensity change in that direction. The second-order derivative information needed for the Hessian matrix can be extracted from the first-order derivatives by calculating the gradients again on each component of the first-order derivative. The Hessian matrix H(v⃗) is a matrix of these three gradients. Each of these gradients describes the change of intensity in its respective direction.
$$H(\vec{v}) = \begin{bmatrix} \nabla(\nabla I(\vec{v})_x) \\ \nabla(\nabla I(\vec{v})_y) \\ \nabla(\nabla I(\vec{v})_z) \end{bmatrix} = \begin{bmatrix} \dfrac{\partial^2 I(\vec{v})}{\partial x \partial x} & \dfrac{\partial^2 I(\vec{v})}{\partial x \partial y} & \dfrac{\partial^2 I(\vec{v})}{\partial x \partial z} \\[6pt] \dfrac{\partial^2 I(\vec{v})}{\partial y \partial x} & \dfrac{\partial^2 I(\vec{v})}{\partial y \partial y} & \dfrac{\partial^2 I(\vec{v})}{\partial y \partial z} \\[6pt] \dfrac{\partial^2 I(\vec{v})}{\partial z \partial x} & \dfrac{\partial^2 I(\vec{v})}{\partial z \partial y} & \dfrac{\partial^2 I(\vec{v})}{\partial z \partial z} \end{bmatrix} \quad (3.2)$$
The Hessian matrix is needed for both of the implemented TDFs presented in chapter 3.2. To compute it, every voxel in the image is processed; the gradient field obtained after all voxels have been processed is also the input for computing the gradient vector flow. In this implementation I have calculated the gradients using a central difference scheme, which looks at the two neighboring voxels in each direction and calculates the variation as shown below (x, y, z indicate direction):
$$\nabla I(\vec{v})_x = \frac{I(\vec{v}+(1,0,0)) - I(\vec{v}-(1,0,0))}{2} \quad (3.3)$$
$$\nabla I(\vec{v})_y = \frac{I(\vec{v}+(0,1,0)) - I(\vec{v}-(0,1,0))}{2} \quad (3.4)$$
$$\nabla I(\vec{v})_z = \frac{I(\vec{v}+(0,0,1)) - I(\vec{v}-(0,0,1))}{2} \quad (3.5)$$
To further prepare the gradients for the next step in pre-processing (gradient vector flow), they need to be normalized. The normalization makes the result contrast-invariant: if a vessel has low contrast between the lumen and the wall, it should get just as good a TDF result as a vessel with high contrast. The normalization is also necessary to make sure that the important gradient information is maintained in the GVF process. The parameter Fmax is used as a threshold for the gradient magnitudes: all magnitudes above this value are scaled to unit length. This is helpful for the gradient vector flow step, because it makes the gradient vector flow less sensitive to noise. The following equation gives the normalized gradient vector field:
$$\vec{V}_n(\vec{v}) = \begin{cases} \dfrac{\vec{V}(\vec{v})}{|\vec{V}(\vec{v})|} & \text{if } |\vec{V}(\vec{v})| \geq F_{max} \\[6pt] \dfrac{\vec{V}(\vec{v})}{F_{max}} & \text{else} \end{cases} \quad (3.6)$$
3.1.3 Gradient vector flow
Opposite to Gaussian smoothing, which diffuses the intensity of the original image, Gradient Vector Flow (GVF) diffuses the image gradients instead. GVF was originally introduced by Xu and Prince [44] as a new external force field for active contours. “This feature of the tubular detection is also known as edge preservation. This is because all the Hessian-based TDFs detect tubular structures based on the assumption that the edge of a structure will either be at a valley or a ridge, meaning that the edge will be where two or more gradients point towards each other. In this case, the purpose of using gradient vector flow is to enable gradient information at positions further away from an edge while preserving the original edge information” [48]. This is necessary for the detection of larger structures and for the segmentation algorithm described in chapter 3.5.
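To make the diffusion concrete, here is a minimal sketch of one Jacobi-style iteration of the Xu-Prince GVF update for a single vector component, assuming a flat-array volume; mu and dt are the diffusion weight and time step, and FAST's actual GPU implementation differs from this simplified CPU loop.

```cpp
#include <vector>

// One GVF iteration for one component of the vector field:
//   v <- v + dt * (mu * laplacian(v) - |f|^2 * (v - f)),
// where f is the same component of the initial normalized gradient field
// and fmag2 holds the squared magnitude of the full initial field.
void gvfIteration(std::vector<float>& v, const std::vector<float>& f,
                  const std::vector<float>& fmag2,
                  int nx, int ny, int nz, float mu, float dt) {
    std::vector<float> next(v);  // border values are left unchanged
    auto idx = [&](int x, int y, int z) {
        return (static_cast<std::size_t>(z) * ny + y) * nx + x;
    };
    for (int z = 1; z < nz - 1; ++z)
        for (int y = 1; y < ny - 1; ++y)
            for (int x = 1; x < nx - 1; ++x) {
                std::size_t i = idx(x, y, z);
                // 6-neighbor discrete Laplacian
                float lap = v[idx(x+1,y,z)] + v[idx(x-1,y,z)]
                          + v[idx(x,y+1,z)] + v[idx(x,y-1,z)]
                          + v[idx(x,y,z+1)] + v[idx(x,y,z-1)] - 6.0f * v[i];
                // diffusion balanced against fidelity near strong edges
                next[i] = v[i] + dt * (mu * lap - fmag2[i] * (v[i] - f[i]));
            }
    v.swap(next);
}
```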
Figure 3.3: Gradient vector flow illustration
3.1.4 Gaussian Smoothing
Gaussian smoothing is a very frequently used algorithm when a reduction of noise details in an image is needed, and it has many applications. Smoothing the dataset is necessary because of the noise in the CT images. If the data is not smoothed, Gradient Vector Flow will have problems propagating the gradient information from the edge of the vessels to the center. Recall that this information is necessary at the center to calculate the eigenvectors needed to perform the TDF. Smoothing too much can also be a problem, because important edge information might be lost. Gaussian smoothing is done by convolving the image with a Gaussian kernel with standard deviation σ. The Gaussian smoothing algorithm utilized in this implementation is included in FAST and performs discrete convolution, calculating a new value for each voxel based on the weighted sum of the neighboring voxels. The size of the neighborhood is determined by σ as N = 2[3σ] + 1, giving an N×N×N neighborhood. The weight for the current voxel and for each neighboring voxel is calculated with the equation shown below, where W is a normalization factor equal to the sum of the weights.
$$(I * G_\sigma)(\vec{v}) = \frac{1}{W} \sum_{x=-[3\sigma]}^{[3\sigma]} \sum_{y=-[3\sigma]}^{[3\sigma]} \sum_{z=-[3\sigma]}^{[3\sigma]} I(\vec{v} + (x, y, z))\, e^{-\frac{x^2 + y^2 + z^2}{2\sigma^2}} \quad (3.7)$$
3.1.5 Non-local Means
“Non-Local Means is an algorithm used to reduce noise. It does this on a per voxel basis, making it ideal to be executed on a GPU” [48]. The algorithm examines, for each voxel, several neighborhoods, trying to find the ones that are similar to the original neighborhood of the voxel. A weight value is calculated for each examined neighborhood, expressing the similarity between it and the original neighborhood.
As the name of the algorithm indicates, the search for similar voxels should in principle be done in the entire image and not only in one location. In practice, though, this would involve an expensive computation, which is why a search-window limitation has been introduced. “The window (Wsize) limits the search area around the voxel to be de-noised. Each voxel inside the window acts as a center point for an environment. These environments are normally referred to as groups. The group (Gsize) around the voxel to be de-noised and each group around each voxel in the window are compared in the search for viable voxels to include in the averaging” [48]. There are many ways to perform the comparison between groups. One solution is to calculate the average of the voxel values of all the voxels except the center voxel. Another involves using a Gaussian kernel. Because Gaussian smoothing is already used in this implementation, the average method was chosen. An additional de-noising parameter (NLMs) was introduced, meant to balance the effect of the algorithm. A sketch of this per-voxel Non-Local Means computation is presented below.
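Since the original pseudocode figure could not be reproduced, the following C++ sketch shows the averaging variant described above; wSize and gSize are the window and group half-widths, h plays the role of the NLMs parameter, and the caller must keep the indices away from the volume border.

```cpp
#include <cmath>
#include <vector>

// Simplified Non-Local Means for one voxel of a flat 3D volume.
// Compares the group around (x,y,z) with the group around every voxel in
// the search window and forms a similarity-weighted average.
float nlmVoxel(const std::vector<float>& I, int x, int y, int z,
               int nx, int ny, int wSize, int gSize, float h) {
    auto at = [&](int xi, int yi, int zi) {
        return I[(static_cast<std::size_t>(zi) * ny + yi) * nx + xi];
    };
    float weightSum = 0.0f, valueSum = 0.0f;
    for (int dz = -wSize; dz <= wSize; ++dz)
    for (int dy = -wSize; dy <= wSize; ++dy)
    for (int dx = -wSize; dx <= wSize; ++dx) {
        // squared distance between the two groups
        float dist2 = 0.0f;
        for (int gz = -gSize; gz <= gSize; ++gz)
        for (int gy = -gSize; gy <= gSize; ++gy)
        for (int gx = -gSize; gx <= gSize; ++gx) {
            float d = at(x+gx, y+gy, z+gz) - at(x+dx+gx, y+dy+gy, z+dz+gz);
            dist2 += d * d;
        }
        float w = std::exp(-dist2 / (h * h));  // similarity weight
        weightSum += w;
        valueSum  += w * at(x + dx, y + dy, z + dz);
    }
    return valueSum / weightSum;  // weighted average replaces the voxel
}
```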
3.2 Tubular detection filter
As mentioned earlier, the two TDFs that will be utilized in this thesis are Frangi’s vesselness filter, also used by Bauer et al. [6,5], and the Circle Fitting TDF by Krissian et al. [22,21]. These two detection filters both belong to the category of tubular detection filters. There are two subcategories of TDFs: central TDFs and offset TDFs. Central TDFs use information, as the name shows, only close to the current voxel. On the other hand, offset TDFs use information from the current voxel but also from voxels at a predefined offset from the current voxel. Because the offset TDFs use a wider space around the voxel, they can utilize more information than the central TDFs. This can lead to more accurate results, but also to increased execution time due to the complexity of the algorithm.
Both the vesselness filter and the circle fitting TDF are, as mentioned earlier, Hessian-based TDFs. That means that they utilize the Hessian matrix in an attempt to determine whether a voxel is part of a tubular structure or not. This is based on four basic observations about tubular structures and their corresponding first and second order derivatives.
Figure 3.4: Slice views of ideal tubular structures and gradients.
Figure 3.5: Graph of first order derivatives.
Figure 3.6: Graphs of second order derivatives.
The four observations can be deduced from figures 3.4, 3.5 and 3.6:
1. The smallest change in intensity is in the same direction as the direction of the tube (Figure 3.4).
2. The largest change in intensity is in the cross-section plane of the tube (Figure 3.4).
3. The gradient, or first-order derivative, creates a valley or ridge at the center of the tube, depending on whether the tube is dark or light (Figures 3.4 and 3.5).
4. The largest second-order change in intensity, ∂²I(v⃗), is at the center of the tube, because the gradients there change direction (Figure 3.5).
These four observations about the derivative information of tubes can be used to detect them. This could be done by checking all possible tube directions and checking the derivatives, but that would be very computationally inefficient. Frangi et al. showed how to use the eigenvalues of the Hessian to efficiently determine locally the likelihood that a tube is present at the current position.
3.2.1 Eigenvectors and Eigenvalues
The N eigenvectors of an N×N matrix are non-zero vectors with N components. These eigenvectors have the property that, when multiplied with the matrix, they remain parallel to their original direction. Each eigenvector e⃗i has a corresponding eigenvalue λi, which is the factor the eigenvector is scaled by when multiplied with the matrix, as shown in Eq. 3.10:
$$H\vec{e}_i = \lambda_i \vec{e}_i \quad (3.10)$$
Because the Hessian matrix is a 3×3 symmetric matrix, it has 3 eigenvectors that are orthonormal, meaning that they are all normal to each other. The eigenvectors of the Hessian also have a geometric interpretation: they correspond to the principal directions of the second-order derivatives, which are the directions in the volume where the curvature is maximal and minimal. “If we sort the eigenvalues and their corresponding eigenvectors so that we have the relation |λ1| < |λ2| < |λ3|, the direction of the tube will be given by e⃗1, because it is the eigenvector with the smallest magnitude. The eigenvalues correspond to the principal curvature (in this case, change in intensity change), which means that they correlate to the amount of curvature” [48]. Looking one more time at observation 1, it can be said that the eigenvector with the smallest eigenvalue will be in the direction of the tubular structure. Based on this, it can also be said that e⃗2 and e⃗3 have high eigenvalues and will therefore lie in the cross-sectional plane of the tube. As mentioned earlier, this is because “the change in intensity is higher in the cross-sectional plane of the tube and because all the eigenvectors are orthonormal” [48]. The figure below illustrates how the eigenvectors relate to each other and to a tube.
Figure 3.7: Ideal tubular structure with eigenvectors.
In this implementation I have used the absolute value of the eigenvalues. The sign of the eigenvalues is not relevant for this algorithm, because it only indicates the direction of the gradient; in practice, it only determines whether the tube and the background are white or black. If the tube is white and the background is black, λ2 and λ3 will be negative; in the reverse case, both λ2 and λ3 will be positive. Ideally, based on the figure and the observations mentioned earlier, a tubular structure with its corresponding eigenvectors and eigenvalues should have these qualities: the first eigenvalue should be close to 0, indicating the direction of the tube, while the second and third eigenvalues should have similar values, both significantly greater than the first:
$$|\lambda_1| \approx 0 \quad (3.11)$$
$$|\lambda_1| \ll |\lambda_2| \quad (3.12)$$
$$|\lambda_2| \approx |\lambda_3| \quad (3.13)$$
Table 3.1 shows which type of structure corresponds to each configuration of eigenvalues.
λ1   λ2   λ3   Structure
-H   -H   -H   Blob (Bright)
+H   +H   +H   Blob (Dark)
L    +H   +H   Tubular (Dark)
L    -H   -H   Tubular (Bright)
L    L    +H   Plate (Dark)
L    L    -H   Plate (Bright)
Table 3.1: Table of tubular shapes (H = high magnitude, L = low magnitude; the sign follows the gradient direction)
Both the Hessian matrix and the eigen-analysis rely on the information resulting from the gradient vectors. In the center of a tubular structure, where the intensity barely changes, the Hessian matrix cannot be used directly; another method is needed in order to solve this problem, and the gradient vector flow helps here.
Two of the most common algorithms for calculating the eigenvalues and eigenvectors of a matrix are the QR and QL algorithms, which are very similar. In this work I decided to use the QL algorithm. The basic idea of this algorithm is that any real matrix can be decomposed into the form:
$$H = Q \cdot L \quad (3.14)$$
where Q is an orthogonal matrix and L is a lower triangular matrix. The Householder transformation is used to obtain this decomposition. QL is an iterative algorithm that performs a sequence of transformations that eventually converge to the eigenvalues and eigenvectors. QL starts with H0 = H; then, for each iteration i, it finds the orthogonal and lower triangular factors of the current matrix Hi and generates the next matrix Hi+1 by applying the equation below:
$$H_{i+1} = L_i \cdot Q_i \quad (3.15)$$
After several iterations the sequence converges to a diagonal matrix whose diagonal holds the eigenvalues, while the accumulated product of the orthogonal factors Qi holds the eigenvectors in its columns.
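As a sketch of this idea, the closely related QR iteration (the QL variant differs only in using lower instead of upper triangular factors) can be written for a symmetric 3×3 matrix as follows; the Gram-Schmidt factorization and the fixed iteration count are simplifications rather than the Householder-based routine described above.

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

static Mat3 matmul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Gram-Schmidt QR factorization: A = Q * R, Q orthogonal, R upper triangular.
static void qr3(const Mat3& A, Mat3& Q, Mat3& R) {
    R = Mat3{};
    for (int j = 0; j < 3; ++j) {
        double v[3] = {A[0][j], A[1][j], A[2][j]};
        for (int k = 0; k < j; ++k) {
            double dot = 0.0;
            for (int i = 0; i < 3; ++i) dot += Q[i][k] * A[i][j];
            R[k][j] = dot;
            for (int i = 0; i < 3; ++i) v[i] -= dot * Q[i][k];
        }
        double n = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        R[j][j] = n;
        for (int i = 0; i < 3; ++i) Q[i][j] = v[i] / n;
    }
}

// QR iteration: H converges toward a diagonal matrix of eigenvalues; the
// accumulated product V of the Q factors holds the eigenvectors as columns.
void eigen3(Mat3 H, Mat3& V, std::array<double, 3>& lambda) {
    V = Mat3{};
    for (int i = 0; i < 3; ++i) V[i][i] = 1.0;  // start from the identity
    for (int it = 0; it < 50; ++it) {
        Mat3 Q{}, R{};
        qr3(H, Q, R);
        H = matmul(R, Q);   // next iterate, analogous to Eq. (3.15)
        V = matmul(V, Q);   // accumulate eigenvectors
    }
    for (int i = 0; i < 3; ++i) lambda[i] = H[i][i];
}
```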
3.2.2 Frangi vesselness filter
The Frangi vesselness filter [15] is a very popular method for detecting tubular structures. It uses three variables, a, b and c, in addition to the eigenvalues. These three variables are used for weighting three criteria: Ra, Rb and S. Ra and Rb are geometric criteria that try to detect the shape of the structure in the current neighborhood, while S was introduced to handle the noise that is present. Each of these criteria is weighted by one of the three variables, which gives the filter flexibility and adaptability to varying circumstances. “Ra’s purpose is to distinguish between plate-like and line-like structures, while Rb indicates blob-like structures. The purpose of S, as mentioned earlier, is to provide robustness against noise” [48].
$$R_a = \frac{|\lambda_2|}{|\lambda_3|} \quad (3.16)$$
$$R_b = \frac{|\lambda_1|}{\sqrt{|\lambda_2||\lambda_3|}} \quad (3.17)$$
$$S = \sqrt{|\lambda_1|^2 + |\lambda_2|^2 + |\lambda_3|^2} \quad (3.18)$$
If the eigenvalues are considered the lengths of the axes of an ellipsoid, these equations become easier to understand. The eigenvalues are again sorted |λ1| < |λ2| < |λ3|. This makes it easy to determine the shape of the ellipsoid: if |λ1| is very small and |λ2| and |λ3| are large, the ellipsoid is flat, with the short axis corresponding to |λ1| pointing in the direction of the tube. Frangi et al. [15] combine Ra, Rb and S into the expression below, where a tube-like structure should have a high Ra and a low Rb.
$$V = \left(1 - \exp\left(-\frac{R_a^2}{2a^2}\right)\right) \exp\left(-\frac{R_b^2}{2b^2}\right) \left(1 - \exp\left(-\frac{S^2}{2c^2}\right)\right) \quad (3.19)$$
Because it is difficult to detect both small and large tubular structures in a single pass, the Frangi vesselness filter [15] needs to be run twice. The first run is applied on the gradients produced by the gradient vector algorithm and detects the large tubes. The second run is applied after the original gradient has been refined with the gradient vector flow algorithm. The result of the Frangi vesselness filter [15] is thus two outputs for every voxel; both will be utilized later by the ridge traversal algorithm (chapter 3.3).
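As a sketch, the per-voxel vesselness response of Eq. (3.19) can be computed from the magnitude-sorted eigenvalues as follows; the sign test for bright tubes follows Table 3.1, and the parameter names mirror the weights a, b and c.

```cpp
#include <cmath>

// Frangi-style vesselness from eigenvalues sorted by magnitude,
// |l1| <= |l2| <= |l3|. Returns 0 for sign configurations that cannot be
// a bright tube (bright vessels on a dark background have negative l2, l3).
float vesselness(float l1, float l2, float l3, float a, float b, float c) {
    if (l2 > 0.0f || l3 > 0.0f) return 0.0f;  // not a bright tubular structure
    float Ra = std::fabs(l2) / std::fabs(l3);                  // Eq. (3.16)
    float Rb = std::fabs(l1) / std::sqrt(std::fabs(l2 * l3));  // Eq. (3.17)
    float S  = std::sqrt(l1 * l1 + l2 * l2 + l3 * l3);         // Eq. (3.18)
    return (1.0f - std::exp(-(Ra * Ra) / (2.0f * a * a)))      // Eq. (3.19)
         * std::exp(-(Rb * Rb) / (2.0f * b * b))
         * (1.0f - std::exp(-(S * S) / (2.0f * c * c)));
}
```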
3.2.3 Circle-fitting TDF
Compared to the previous TDF algorithm, the Circle Fitting TDF proposed by Krissian et al. [22,21] is a more expensive algorithm, but it could lead to better results. I will show in the results and discussion chapter whether the increased computational cost of the Circle Fitting TDF is worth it. This method constructs a circle in the cross-sectional plane of the tube and increases the radius until a best fit is found. As explained in the previous chapter, an ideal tubular structure should have a very small |λ1| value and larger |λ2| and |λ3| values. The cross-sectional plane used by this method is defined by the two eigenvectors e⃗2 and e⃗3 of the Hessian matrix. First a very small radius is used, defined by a minRadius parameter. Then the gradient is sampled at N positions along the circle of radius r. The individual direction to each position, d⃗i, is calculated in the following way:
$$\vec{d}_i = \vec{e}_2 \sin\frac{2\pi i}{N} + \vec{e}_3 \cos\frac{2\pi i}{N} \quad (3.20)$$
Equation (3.20) gives the individual direction to each sample position on the circle. The TDF value T is calculated as the average dot product between all the sampled gradients V⃗ and the inward normals n⃗ of the circle at each point, for a given radius (Eq. 3.21). The result keeps increasing as long as the gradients keep increasing in length; when it settles at a certain value, the border of the circle has been reached, and the last TDF value forms the result of the algorithm. This requires a step-wise approach in which the average is calculated for each radius. A maxRadius limitation was also set for the circle expansion, in order to limit the growing process and to avoid selecting vessels outside the structure of interest. As mentioned earlier, the average dot product between all the sampled gradients and the inward normals makes up the TDF score for the current radius. The complete formula to calculate the average for each radius is below:
$$T(\vec{v}, r, N) = \frac{1}{N} \sum_{i=0}^{N-1} \vec{V}(\vec{v} + r\vec{d}_i) \cdot (-\vec{d}_i) \quad (3.21)$$
Figure 3.8: Circle Fitting illustration
In FAST this is implemented to run on a GPU, where each voxel position is processed individually. Below is a sketch of the computation for a given voxel v⃗:
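This is a simplified CPU sketch rather than FAST's GPU kernel; the gradient lookup is passed in as a function, and N, minRadius, maxRadius and the radius step are the parameters described above. The early exit once the response stops increasing is the border test from the text.

```cpp
#include <array>
#include <cmath>
#include <functional>

using Vec3 = std::array<float, 3>;

// Circle-fitting TDF for one voxel v. e2 and e3 are the Hessian eigenvectors
// spanning the cross-sectional plane; sampleGradient looks up the GVF field.
float circleFittingTDF(const Vec3& v, const Vec3& e2, const Vec3& e3,
                       const std::function<Vec3(const Vec3&)>& sampleGradient,
                       int N, float minRadius, float maxRadius, float step) {
    const float PI = 3.14159265358979f;
    float best = 0.0f;
    for (float r = minRadius; r <= maxRadius; r += step) {
        float sum = 0.0f;
        for (int i = 0; i < N; ++i) {
            float angle = 2.0f * PI * i / N;
            Vec3 d, p;
            for (int k = 0; k < 3; ++k) {
                d[k] = e2[k] * std::sin(angle) + e3[k] * std::cos(angle); // Eq. (3.20)
                p[k] = v[k] + r * d[k];
            }
            Vec3 g = sampleGradient(p);
            sum -= g[0] * d[0] + g[1] * d[1] + g[2] * d[2];  // V . (-d), Eq. (3.21)
        }
        float T = sum / N;
        if (T > best) best = T;  // response still increasing: keep growing
        else break;              // response dropped: circle border reached
    }
    return best;
}
```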
3.3 Ridge traversal
While the TDF output could be used as a centerline directly, it is difficult to decide where to place the threshold for which TDF responses to include, and to sort away responses that are not relevant for the current problem. Both TDFs also return unreliable results at vessel junctions and for vessels with irregular shapes, which can be counteracted by a ridge traversal algorithm. Using a ridge traversal algorithm is possible when the TDF has a medialness property. Medialness describes how “in the center” of a tubular structure a point is. Both Frangi’s vesselness filter [15] and Krissian’s Circle Fitting TDF [21] have this property: the largest TDF responses from both TDFs will be in the center of the structure in question, as points close to any of the edges will have a lower TDF output score.
“The Centerline Extraction method starts by automatically creating a stack of candidate seed voxels C. The candidate seeds are all voxels that have a TDF value above a certain threshold called Thigh. Also, the voxel has to have the highest TDF value of all of its closest 26 neighboring voxels. After all voxels that satisfy these criteria have been added to this stack, it is sorted according to their TDF value, so that the voxel with the highest TDF value is located at the top of the stack C. This is done to ensure that the centerlines of the extracted seeds are in the center of the vessel. Centerlines are then extracted from these candidate seed points as explained earlier in the tubular detection chapter. The candidate seed points are handled in order, starting with the one with the highest TDF value” [48]. The traversal procedure continues as long as the TDF value of the next point does not drop below Tlow and no already extracted centerline is hit. The new centerline is accepted if the following conditions hold:
1. The centerline length is above Dmin.
2. The average TDF value of each point on the centerline is above Tmean.
3. The centerline is not connected to any other centerline, OR connected to one other centerline, OR connected to two different previously extracted centerlines.
If the new centerline hits some previously found centerlines, they are all connected. The output is the largest connected centerline.
When all the possible TDF points have been collected, the ridge traversal algorithm takes the top point (the point with the current highest TDF score) and uses it as a start point for traversal. From this start point x0 we use the eigenvector with the smallest eigenvalue, e⃗1, to determine the general direction of the tube. Then, starting from x0, the traversal is performed independently in both directions t0 and -t0. The next point in the traversal is chosen based on the direction from the current point to the candidate point and on the TDF score of the candidate point. The directional restriction avoids large angle changes by requiring that the step toward a candidate point xn agrees with the current tangent: $\vec{x_i x_n} \cdot \vec{t}_i > 0$. All the candidate points that fulfil the directional requirement are considered, and the one with the largest TDF value is chosen. The direction is then updated based on this new point, $\vec{t}_{i+1} = \mathrm{sign}(\vec{x_i x_{i+1}} \cdot \vec{t}_{i+1})\, \vec{t}_{i+1}$, so that the tangent keeps pointing forward for the next step of the traversal. This is repeated until one of the following events occurs: the TDF values of all the candidate points drop below a given threshold (Tlow), the traversal hits an existing centerline, or the traversal hits its own centerline and creates a loop. In the event that the traversal terminates because it hit an existing centerline, the current centerline is included in the centerline that was hit, and the ridge traversal starts over with the next point from the priority list. A sketch of the candidate point selection and of the traversal loop follows.
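Since the original pseudocode (algorithms 4 and 5) could not be reproduced, the following is a minimal sketch of the candidate point selection and of one traversal direction; the neighbor lookup, the visited bookkeeping and the stopping tests against existing centerlines are simplified into assumptions.

```cpp
#include <array>
#include <vector>

using Vec3 = std::array<float, 3>;

static float dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

struct RidgePoint {
    Vec3 pos;      // voxel position
    Vec3 tangent;  // e1, the eigenvector with the smallest eigenvalue
    float tdf;     // TDF response at this point
};

// Among the neighbor candidates, keep those lying "forward" of the current
// tangent and above the tLow threshold, and pick the highest TDF response.
const RidgePoint* selectNext(const RidgePoint& cur, const Vec3& t,
                             const std::vector<RidgePoint>& neighbors,
                             float tLow) {
    const RidgePoint* best = nullptr;
    for (const auto& n : neighbors) {
        Vec3 step = {n.pos[0] - cur.pos[0], n.pos[1] - cur.pos[1],
                     n.pos[2] - cur.pos[2]};
        if (dot(step, t) <= 0.0f || n.tdf < tLow) continue;
        if (!best || n.tdf > best->tdf) best = &n;
    }
    return best;
}

// Traverse one direction from the seed until no acceptable candidate remains.
std::vector<Vec3> traverse(const RidgePoint& seed, Vec3 t,
                           const std::vector<RidgePoint>& neighbors,
                           float tLow, std::size_t maxSteps) {
    std::vector<Vec3> centerline = {seed.pos};
    const RidgePoint* cur = &seed;
    for (std::size_t s = 0; s < maxSteps; ++s) {
        const RidgePoint* next = selectNext(*cur, t, neighbors, tLow);
        if (!next) break;
        centerline.push_back(next->pos);
        Vec3 step = {next->pos[0] - cur->pos[0], next->pos[1] - cur->pos[1],
                     next->pos[2] - cur->pos[2]};
        t = next->tangent;
        if (dot(step, t) < 0.0f)           // keep the tangent pointing forward
            for (auto& c : t) c = -c;
        cur = next;
    }
    return centerline;
}
```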
3.4 Grouping and linking
Grouping and linking is an additional method presented by Bauer et al. [6,5]. The purpose of this algorithm is to solve some of the problems that the TDF and ridge traversal methods present. Because the vessels mostly have an irregular shape, or because of the noise present in the images, the centerlines extracted in the previous step may present some fractures. “The grouping and linking approach presented by Bauer et al. [6,5] is a bio-mechanical geometric approach, where each centerline’s endpoints are considered. Endpoints are the final point added to the traversal in both directions. The endpoint of a centerline that terminated because it hit an already existing centerline or itself is not considered for grouping and linking. The grouping and linking approach presented by Bauer et al. [6,5] is part of a larger, more general approach, but for coronary arteries specifically some parts of the general approach are not used.” [48]
The method works by identifying and selecting the endpoints of individual centerlines. For each endpoint xs, a search outward in the tangent direction ts of the centerline is performed for potential connections to other centerlines. If a candidate point is found within some distance limit (dHigh), a connection cost calculation is performed. If the difference in gray value (from the original input image) between the candidate point and the endpoint is too high, |I(xs) - I(xe)| > gMax, the candidate is rejected. If the gray value difference is within the limit, a connection cost C(xs, ts, xe) is calculated. The connection cost represents a trade-off between the distance between the two points xs, xe and the angle between the existing centerline and the target centerline, as seen in the function below.
$$C(x_s, t_s, x_e) = \frac{\|x_s - x_e\|}{\exp\left(-\dfrac{\angle(\vec{x_s x_e},\, t_s)^2}{2p^2}\right)} \quad (3.22)$$
An opening-angle variable p is introduced to allow a variable input; in this case p is set to 0.30. Potential connections with a cost above cMax are discarded. Finally, if a candidate has fulfilled all the requirements, a shortest path traversal is performed between the endpoint and the candidate point, and the now connected centerline is added to the candidate centerline.
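A small C++ sketch of this connection cost could look as follows; the squared angle in the Gaussian-like term follows the reconstruction of Eq. (3.22) above and is therefore an assumption.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

static float dot3(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
static float norm3(const Vec3& a) { return std::sqrt(dot3(a, a)); }

// Connection cost between an endpoint xs (with tangent ts) and a candidate
// endpoint xe: the distance, penalized by the angular deviation from the
// tangent. p is the opening-angle parameter (0.30 in this implementation).
float connectionCost(const Vec3& xs, const Vec3& ts, const Vec3& xe,
                     float p = 0.30f) {
    Vec3 d = {xe[0] - xs[0], xe[1] - xs[1], xe[2] - xs[2]};
    float dist = norm3(d);
    // angle between the endpoint tangent and the direction to the candidate
    float cosA = dot3(d, ts) / (dist * norm3(ts));
    cosA = std::fmax(-1.0f, std::fmin(1.0f, cosA));  // numerical clamp
    float angle = std::acos(cosA);
    return dist / std::exp(-(angle * angle) / (2.0f * p * p));
}
```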
3.5 Inverse gradient flow tracking
The segmentation method works by growing out from the extracted centerlines and is based on the Inverse Gradient Flow Tracking method by Bauer et al. [6,5]. The method starts by dilating the centerline and adding it to the segmentation result. This is done because the centerline might not be exactly at the center, and for this method to work, the center voxels where the gradient vectors change direction have to be part of the initial segmentation. The next step is to create a queue and add the neighbors of the initial segmentation to this queue. This queue is then processed one voxel at a time.
The inverse gradient segmentation algorithm grows the segmentation from the centerline produced by the previous steps. It does this by growing from the centerline in the inverse direction of the gradients produced by the gradient vector flow. As long as the length of the next gradient vector is larger than the previous length, it keeps growing. As mentioned earlier, the change in intensity should be largest at the edge of the tubular structure, therefore the gradient magnitude at the edge should also be at its largest.
To make the algorithm more robust and to make up for potential errors produced in the previous steps, the segmentation algorithm dilates the centerline slightly before the actual segmentation is performed, in order to avoid false termination in case the centerline is not exactly in the center. Each voxel in the centerline is added to a queue and processed individually. If a neighboring voxel is not already part of the segmentation and its gradient magnitude is larger than that of the previous (current) voxel, it is added to the segmentation. A sketch of the inverse gradient flow segmentation follows.
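Since the pseudocode figure could not be reproduced, here is a minimal sketch of the queue-based growing, assuming a flat nx*ny*nz volume of GVF magnitudes; border handling of the flat indexing is omitted.

```cpp
#include <queue>
#include <vector>

// Inverse gradient flow tracking sketch: grow outward from the (dilated)
// centerline seeds, accepting a neighbor while the gradient magnitude keeps
// increasing along the path; growth stops at the vessel edge, where the
// magnitude peaks.
std::vector<bool> growSegmentation(const std::vector<float>& gradMag,
                                   const std::vector<int>& centerlineSeeds,
                                   int nx, int ny, int nz) {
    const int total = nx * ny * nz;
    std::vector<bool> seg(total, false);
    std::queue<int> q;
    for (int s : centerlineSeeds) { seg[s] = true; q.push(s); }
    const int off[6] = {1, -1, nx, -nx, nx * ny, -(nx * ny)};  // 6-neighborhood
    while (!q.empty()) {
        int cur = q.front(); q.pop();
        for (int o : off) {
            int nb = cur + o;
            if (nb < 0 || nb >= total || seg[nb]) continue;
            if (gradMag[nb] > gradMag[cur]) {  // still moving toward the edge
                seg[nb] = true;
                q.push(nb);
            }
        }
    }
    return seg;
}
```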
3.6 Avoidance of false positives
There are multiple vessels and tubular structures in a CTA of the chest region. Bauer et al. [6,5] discuss a few methods to remove some of the vessels that are not part of the coronary arteries. Some of the false positives are removed by the thresholding described earlier (chapter 3.1.1). Bauer et al. [6,5] then select the two largest connected centerlines. With the thresholding, the two largest centerlines are in most cases the coronary arteries, but the method is vulnerable to excluding parts of the coronary arteries in the event that some of the vessels are not connected.
4. Results
4.1 Rotterdam Coronary Artery Algorithm Evaluation Framework
The purpose of the Rotterdam Coronary Artery Algorithm Evaluation Framework [7,13,14] is to make an evaluation software available that provides a standardised method of qualification and comparison for coronary artery centerline extraction algorithms. The benchmark of the framework consists of training and testing data in .raw format. The training data-set consists of 8 CTA images with the manually extracted centerlines included. The testing data-set consists of 24 CTA images for which the manual centerlines are undisclosed.
The evaluation framework measures the performance of an algorithm by three percentage grades [7,13,14]:
OF: “Overlap until first error, measures the continuous centerline length produced without producing a centerline voxel outside the error range.”
OT: “Overlap with clinically relevant parts of the vessels”, i.e., vessel parts with a diameter above 1.5 mm.
OV: “Overall overlap within the error range.”
A fourth measurement is the average distance (AI); its value shows how close the detected points are to the actual centerline. The AI gives an average distance in millimeters between the produced centerlines and the reference centerlines. A point does not have to be at exactly the same position as the reference centerline, but the nearer it is, the better the AI result and the final ranking of the algorithm will be.
4.2 Results of the proposed implementation
In this section the centerline extraction results produced by the methods described in chapter 3 are presented: the results produced by Frangi’s Vesselness Filter and by Circle Fitting, each combined with Non-Local Means and Gaussian smoothing de-noising.
Figure 4.1: Centerline extraction with Vesselness Filter, no de-noising.
Figure 4.2: Centerline extraction with Vesselness Filter, Gaussian Smoothing, low parameters.
Figure 4.3: Centerline extraction with Vesselness Filter, Gaussian Smoothing, high parameters.
Figure 4.4: Centerline extraction with Vesselness Filter, Gaussian Smoothing, low parameters, and Non-local Means.
Figure 4.5: Centerline extraction with Vesselness Filter, Gaussian Smoothing, high parameters, and Non-local Means.
Figure 4.6: Segmentation with Vesselness Filter, no de-noising.
Figure 4.7: Segmentation with Vesselness Filter, Gaussian Smoothing, low parameters.
Figure 4.8: Segmentation with Vesselness Filter, Gaussian Smoothing, high parameters.
Figure 4.9: Segmentation with Vesselness Filter, Gaussian Smoothing, low parameters, and Non-local Means.
Figure 4.10: Segmentation with Vesselness Filter, Gaussian Smoothing, high parameters, and Non-local Means.
Case  TDF      GS-L  GS-H  NLM  OV     OF     OT     AI (mm)
1     Central  No    No    No   86.3%  64.4%  90.1%  0.6
2     Central  Yes   No    No   87.4%  65.0%  90.2%  0.6
3     Central  No    Yes   No   89.7%  64.1%  92.9%  0.5
4     Central  Yes   No    Yes  87.6%  63.9%  90.4%  0.5
5     Central  No    Yes   Yes  85.9%  65.3%  87.7%  0.6
Table 4.1: Table of testing results with the Vesselness Filter
Figure 4.11: Centerline extraction with Circle Fitting, no de-noising.
Figure 4.12: Centerline extraction with Circle Fitting, Gaussian Smoothing, low parameters.
Figure 4.13: Centerline extraction with Circle Fitting, Gaussian Smoothing, high parameters.
Figure 4.14: Centerline extraction with Circle Fitting, Gaussian Smoothing, low parameters, and Non-local Means.
Figure 4.15: Centerline extraction with Circle Fitting, Gaussian Smoothing, high parameters, and Non-local Means.
Figure 4.16: Segmentation with Circle Fitting, no de-noising.
Figure 4.17: Segmentation with Circle Fitting, Gaussian Smoothing, low parameters.
Figure 4.18: Segmentation with Circle Fitting, Gaussian Smoothing, high parameters.
Figure 4.19: Segmentation with Circle Fitting, Gaussian Smoothing, low parameters, and Non-local Means.
Figure 4.20: Segmentation with Circle Fitting, Gaussian Smoothing, high parameters, and Non-local Means.
Case  TDF     GS-L  GS-H  NLM  OV     OF     OT     AI (mm)
1     Circle  No    No    No   63.1%  31.9%  70.0%  0.8
2     Circle  Yes   No    No   62.5%  29.4%  66.9%  0.7
3     Circle  No    Yes   No   61.1%  27.9%  66.3%  0.8
4     Circle  Yes   No    Yes  62.3%  32.0%  69.1%  0.8
5     Circle  No    Yes   Yes  60.1%  29.9%  67.0%  0.8
Table 4.2: Table of testing results with the Circle Fitting TDF
4.3 MICCAI Workshop Results
Method                                    | OV: % / score / rank | OF: % / score / rank | OT: % / score / rank
MHT (Friman et al.)                       | 98.5 / 84.0 / 3.13   | 83.1 / 72.8 / 4.94   | 98.7 / 84.5 / 3.21
ShapeRegression (Schaap et al.)           | 96.9 / 79.2 / 4.64   | 72.5 / 66.3 / 6.39   | 97.1 / 79.2 / 4.90
ModelDrivenCenterline (Zheng et al.)      | 93.5 / 53.4 / 11.7   | 76.5 / 54.9 / 8.72   | 95.6 / 70.0 / 7.98
VirtualContrast2b (Wang and Smedby)       | 96.7 / 73.0 / 6.03   | 74.5 / 63.3 / 7.40   | 96.9 / 74.7 / 5.95
Tracer (Szymczak)                         | 95.1 / 71.0 / 7.51   | 63.5 / 52.0 / 10.1   | 95.5 / 70.2 / 8.00
BayesianMaxPaths (Lesage et al.)          | 97.5 / 81.6 / 3.69   | 78.8 / 70.8 / 5.22   | 97.7 / 81.5 / 4.00
SupervisedExtraction (Kitamura et al.)    | 90.6 / 53.8 / 13.6   | 70.9 / 49.0 / 11.1   | 92.5 / 61.2 / 11.2
GFVCoronaryExtractor (Yang et al.)        | 93.7 / 55.9 / 11.5   | 74.2 / 52.9 / 9.61   | 95.9 / 68.5 / 7.71
DepthFirstModelFit (Zambal et al.)        | 84.7 / 48.6 / 15.1   | 65.3 / 49.2 / 10.7   | 87.0 / 60.1 / 11.7
HOT tractography (Cetin et al.)           | 97.3 / 72.9 / 5.67   | 77.4 / 63.2 / 7.01   | 97.7 / 75.4 / 5.30
VesselTractography (Cetin et al.)         | 96.4 / 64.3 / 8.14   | 69.9 / 51.6 / 9.59   | 97.0 / 70.3 / 7.22
KnowledgeBasedMinPath (Krissian et al.)   | 88.0 / 67.4 / 8.49   | 74.2 / 61.1 / 8.41   | 88.5 / 70.0 / 7.95
AutoCoronaryTree (Tek et al.)             | 84.7 / 46.5 / 16.8   | 59.5 / 36.1 / 15.0   | 86.2 / 50.3 / 15.9
GVFTube'n'Linkage (Bauer and Bischof)     | 92.7 / 52.3 / 13.1   | 71.9 / 51.4 / 10.9   | 95.3 / 67.0 / 9.46
CocomoBeach (Kitslaar et al.)             | 78.8 / 42.5 / 18.6   | 64.4 / 40.0 / 14.9   | 81.2 / 46.9 / 17.7
TwoPointMinCost (Metz et al.)             | 91.9 / 64.5 / 9.67   | 56.4 / 45.6 / 12.2   | 92.5 / 64.5 / 10.0
StatisticalTracking (Wang et al.)         | 81.5 / 45.5 / 16.0   | 59.1 / 46.7 / 12.0   | 84.6 / 59.5 / 11.8
VirtualContrast (Wang and Smedby)         | 75.6 / 39.2 / 19.2   | 56.1 / 34.5 / 15.6   | 78.7 / 45.6 / 17.1
AxialSymmetry (Dikici et al.)             | 90.8 / 56.8 / 13.0   | 48.9 / 35.6 / 15.5   | 91.7 / 55.9 / 13.9
TubSurfGradFlow (Mohan et al.)            | 91.1 / 53.3 / 13.9   | 65.3 / 41.9 / 12.9   | 92.2 / 53.9 / 14.2
ElasticModel (Hernández Hoyos et al.)     | 77.0 / 40.5 / 18.9   | 52.1 / 31.5 / 16.7   | 79.0 / 45.3 / 17.9
3DinteractiveTrack (Zhang et al.)         | 89.6 / 51.1 / 14.7   | 49.9 / 30.5 / 16.6   | 90.6 / 52.4 / 14.9
CoronaryTreeMorphoRec (Castro et al.)     | 67.0 / 34.5 / 21.0   | 36.3 / 20.5 / 18.3   | 69.1 / 36.7 / 20.6
Table 4.3: Overlap measures
Method                                    | AI (mm) | score | rank
MHT (Friman et al.)                       | 0.23    | 47.9  | 3.89
ShapeRegression (Schaap et al.)           | 0.23    | 49.6  | 3.13
ModelDrivenCenterline (Zheng et al.)      | 0.20    | 51.6  | 2.49
VirtualContrast2b (Wang and Smedby)       | 0.27    | 41.6  | 7.63
Tracer (Szymczak)                         | 0.26    | 44.4  | 6.21
BayesianMaxPaths (Lesage et al.)          | 0.29    | 37.0  | 10.54
SupervisedExtraction (Kitamura et al.)    | 0.25    | 47.3  | 4.30
GFVCoronaryExtractor (Yang et al.)        | 0.30    | 37.1  | 10.57
DepthFirstModelFit (Zambal et al.)        | 0.28    | 41.9  | 7.72
HOT tractography (Cetin et al.)           | 0.34    | 32.0  | 14.39
VesselTractography (Cetin et al.)         | 0.36    | 30.7  | 15.67
KnowledgeBasedMinPath (Krissian et al.)   | 0.39    | 29.2  | 16.91
AutoCoronaryTree (Tek et al.)             | 0.34    | 35.3  | 11.60
GVFTube'n'Linkage (Bauer and Bischof)     | 0.37    | 29.8  | 16.42
CocomoBeach (Kitslaar et al.)             | 0.29    | 37.7  | 10.94
TwoPointMinCost (Metz et al.)             | 0.46    | 28.0  | 17.71
StatisticalTracking (Wang et al.)         | 0.51    | 25.1  | 19.04
VirtualContrast (Wang and Smedby)         | 0.39    | 30.6  | 15.55
AxialSymmetry (Dikici et al.)             | 0.46    | 26.4  | 18.90
TubSurfGradFlow (Mohan et al.)            | 0.47    | 24.8  | 20.01
ElasticModel (Hernández Hoyos et al.)     | 0.40    | 29.3  | 16.94
3DinteractiveTrack (Zhang et al.)         | 0.51    | 24.2  | 20.78
CoronaryTreeMorphoRec (Castro et al.)     | 0.59    | 20.7  | 21.70
Table 4.4: Accuracy measure
5. Discussion
In this chapter I will discuss the results produced in chapter 4, with focus on the two TDFs, the effect of the noise filters, and the produced centerline extraction and segmentation.
Even if the framework from the Rotterdam University was very helpful for the verification of the results, it also has some weaknesses. If the points are relatively close to the reference centerline, the framework performs well in matching, detecting and grading those points. But the framework only produces a grade for how much of the produced centerlines is right. If every single vessel and tubular structure in the CTA were detected and included in the produced centerline, the grades for both OT and OV would be very good; the wrongly detected vessels are not counted here. This is an obvious weakness of the testing framework, as it means that additional, wrong or false centerlines are not heavily penalized in the achieved grades. As long as the right centerline was detected, the grades will be good.
5.1 Tube Detection Filters
5.1.1 Circle Fitting
The Circle Fitting TDF performs fairly well in terms of run-time, only slightly longer than the less complex Vesselness Filter, as was actually expected. It works well when detecting larger parts of the coronary arteries, but it has problems when it comes to detecting the smaller sections (<3 mm). This occurs because of the approach of the Circle Fitting TDF: the smaller vessels do not always have a tubular shape like the larger ones in the CTA datasets. Because the algorithm of the Circle Fitting TDF works stepwise by expanding the radius of a circle, in the case of not perfectly tubular shapes it will terminate before the actual edge is reached.
5.1.2 Vesselness Filter
The Vesselness Filter performs well in terms of run-time. Compared to the Circle Fitting TDF, it seems to have no problems detecting both small and large tubular structures, and it produces both better and more consistent results. One drawback of the Vesselness Filter is that it produces a large amount of false positives: many incorrect vessels are going to be selected using this method. If the incorrect centerlines produced are not sorted out before the segmentation, many of the vessels in the lungs will be included. The threshold for candidate points for the ridge traversal needs to be set high because of the large amount of candidate points.
5.2 Noise Filters
5.2.1 Gaussian Smoothing
The Gaussian Smoothing filter performs very well in terms of run-time. The effect of this filter is best seen when a high σ is used and is less significant in other cases. To achieve similar results, another approach is to iterate the GVF more times. It is also important to be aware of the risk of Gaussian smoothing: if it is used too much, this method can hide some important information in the original image. On the other hand, using low smoothing, that is, a low σ in the algorithm, the positive impact on the results will be minor, and the grades resulting from the testing framework will be more or less the same with or without it.
Gaussian Smoothing can also be used to help the GVF algorithm perform faster. Bauer et al. [6,5] recommend 500 iterations of GVF for getting an image ready for the segmentation. Although these 500 iterations produce good results, they also take a long time to execute. When using Gaussian Smoothing in combination with the GVF, the number of iterations the GVF needs for promising results seems to decrease.
5.2.2 Non-Local Means
The Non-Local Means algorithm is also very slow, and it does not have a positive impact on the results of either of the TDF algorithms. The results are actually surprising, because the algorithm was expected to improve the result. If it is run together with the Gaussian smoothing filter, the impact on the results is really minor.
5.3 Extracted Centerlines
5.3.1 Ridge Traversal
The results of the ridge traversal algorithm depend strongly on the performance and results of the tube detection filters. If the input for the ridge traversal consists of many accurate centerline points detected by the TDF, only a few of them will be needed as candidates for the actual tracking. A disadvantage of the algorithm is the fact that it relies on continuous and long centerlines.
5.3.2 Grouping and Linking
The grouping and linking approach presents both an advantage and a disadvantage. The advantage is that it can solve the problem of gaps existing in the centerlines produced in the previous step by the ridge traversal. The disadvantage is that it can also produce many false positives. Because the linking and grouping is done only based on the direction and distance between the endpoints of the centerlines, it is possible that some vessels that are actually not connected in the original image get linked at this step. Detecting these incorrectly linked vessels would be very complicated, because they usually occur between very small vessels that are really close to each other while still being only partially detected.
5.4 Segmentation Results
The better the centerline extraction algorithm and the gradient vector flow work, the better the quality of the segmentation will be. If the GVF is not performing well and produces a large amount of noise and variations inside a small area, the segmentation will contain some unusual edges that are not similar to a vessel. This problem can be resolved by applying the Gaussian smoothing filter before the GVF or by running the GVF more times.
If the ridge traversal or the grouping and linking algorithms produce centerlines that are actually not inside a vessel, the result of the segmentation can contain multiple gaps. This can also happen if the centerline is off-center. This problem can be solved by applying more initial dilation, but the dilation can cause an over-segmentation of the smaller, dilated vessels. The holes can also be fixed by applying some post-processing methods.
6. Conclusion
6.1 Goal achievement
The main objective of this thesis was to study the state of the art of existing algorithms for coronary artery segmentation and to compare the results of the different approaches. For the comparison, I chose two of the most promising methods presented at the MICCAI workshop that use a similar technique and whose implementations were tested using the framework. This way, the images resulting from the two algorithms could be interpreted in order to decide which one works better.
6.2 Further work
In chapter 5 I already presented some of the problems and weaknesses of the two proposed algorithms. Some of the main issues are the noise in the original images, low TDF results, the possibility of a broken or partial centerline in the images, the robustness of the ridge traversal, and the possibility of linking and grouping incorrect vessels.
Next I would like to discuss each of these problems and present some improvements that could be made:
The results could improve by reducing more noise. Even if the run-time would be increased by adding more filters, less noise would result in a better and more reliable result. Sometimes the only way to determine the quality of the segmentation and the centerline extraction is to look manually at the CTA images and to see where the artery is present. It would be worthwhile to implement a more intelligent approach to noise reduction that results in a better centerline segmentation. There are already several approaches to this topic: a Bayesian filter by Sanches et al. [33], an anisotropic filter presented by Bauer [5], a hybrid diffusion filter by Mendrik et al. [25] and others.
The disadvantage of both of the implemented algorithms is that they do not respond very well to vessels with irregular shape. This problem is bigger for the Circle Fitting TDF, but the Vesselness Filter also presents problems with some vessels. The problem could be partially solved by using the grouping and linking algorithm presented earlier. This could help detect more vessels, but it could also lead to incorrect connections between them or to wrong centerline detection. A possible solution could be to use one extra TDF algorithm for the detection of irregular vessels and junctions and use it in the ridge traversal part. The next step would be to run the grouping and linking algorithm in order to connect some of the smaller detected vessels.
The results of the ridge traversal algorithm depend on the quality of the TDF algorithm. If the response of the TDF is incorrect, the ridge traversal might produce a wrong centerline or might simply stop. Deciding when the ridge traversal encounters one of these problems is very hard without any manual aid. The solution of Bauer et al. [6,5] is to use as many points as possible, to produce as many centerlines as possible, and only after that to try to group them together. This helps the algorithm not to terminate early, but it does not solve the problem of detecting incorrect centerlines. A solution could be to add one more detection algorithm presented in the state of the art chapter, or to simply replace this algorithm with a better one. Many of the approaches presented in the state of the art chapter utilize candidate points in their extraction of centerlines; the TDFs can be used to generate these points, and the segmentation could be done with another algorithm. The region growing approach presented by Krissian et al. [21] or the refined vesselness approach by Yang et al. [45] could be good alternative or complementary methods for the purpose of creating a more reliable centerline extraction.
The selection of the centerline presented in this thesis uses a primitive implementation that actually has a significant weakness. This is mentioned in chapter 5.4 and concerns the selection of the longest produced centerlines, more exactly the 3-4 longest. If the centerlines produced by the ridge traversal and the grouping and linking algorithms are disconnected, the result of this selection might be vessels that do not belong to the coronary arteries. A solution for this problem could be the approach presented by Zheng et al. [46]: they propose to first locate the heart in the images and then select only the centerlines that are close to it. This approach could work better and appears more reliable and appropriate than a selection relying on the length of the centerlines.
The result of the inverse gradient tracking segmentation depends on the precision of the centerlines produced in the previous steps. It can also be influenced by noise, if the gradients produced by the GVF and used here are affected by it. But if the detection step produces a good centerline, this method might also work very well. An alternative solution for this segmentation could be the region growing algorithm presented by Krissian et al. [21], but it is still uncertain whether it would produce a better result than the inverse gradient method. If the approach could be parallelized, the overall run-time could benefit from this. But the time needed for the segmentation is very low compared to that used for the GVF or the extraction algorithm, which is why it is actually not worth investing much in the improvement of this method.
Bibliography
1. Aichinger, H., Dierker, J., Joite-Barfuss, S., Sabel, M. – Radiation Exposure and Image Quality in X-Ray Diagnostic Radiology. Springer, 2011.
2. Aylward, S., Pizer, S., Bullitt, E., Eberly, D. – Intensity ridge and widths for tubular object segmentation and description. Proceedings of the Workshop on Math. Methods in Biomed. Image Analysis, pages 131–138.
3. Aylward, S., Bullitt, E. – Analysis of the parameter space of a metric for registering 3D vascular images. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, 2001.
4. Barsness, G.W., Holmes, D.R. – Coronary Artery Disease. Springer, 2011.
5. Bauer, C. – Segmentation of 3D Tubular Tree Structures in Medical Images. PhD thesis, Graz University of Technology, 2010.
6. Bauer, C., Bischof, H. – Edge based tube detection for coronary artery centerline extraction. The Midas Journal, July 2008.
7. Metz, C., Schaap, M., van Walsum, T., van der Giessen, A., Weustink, A., Mollet, N., Krestin, G., Niessen, W. – 3D segmentation in the clinic: a grand challenge II – coronary artery tracking. In MICCAI, MICCAI'08, 2008.
8. Canny, J. – Finding edges and lines in images. Tech. Rep. 720, MIT AI Lab, 1983.
9. Cetin, S., Unal, G. – A higher-order tensor vessel tractography for segmentation of vascular structures. IEEE Transactions on Medical Imaging, pages 2172–2185, 2015.
10. Cierniak, R. – X-Ray Computed Tomography in Biomedical Engineering. Springer, 2011.
11. Dehkordi, M., Sadri, S., Doosthoseini, A. – A review of coronary artery segmentation algorithms. Journal of Medical Signals and Sensors, pages 49–54, 2011.
12. Filipoiu, F. – Atlas of Heart Anatomy and Development. Springer, 2013.
13. Rotterdam Coronary Artery Algorithm Evaluation Framework. Main page, June 2017.
14. Rotterdam Coronary Artery Algorithm Evaluation Framework. Results, June 2017.
15. Frangi, A.F., Niessen, W.J., Vincken, K.L., Viergever, M.A. – Multiscale vessel enhancement filtering. Springer Berlin Heidelberg, 1998.
16. Friedman, J., Hastie, T., Tibshirani, R. – Additive logistic regression: a statistical view of boosting. 1998.
17. Friman, O., Kuehnel, C., Peitgen, H. – Coronary centerline extraction using multiple hypothesis tracking and minimal paths. The Midas Journal, July 2008.
18. Gaziano, T.A., Bitton, A., Anand, S., Abrahams-Gessel, S., Murphy, A. – Growing epidemic of coronary heart disease in low- and middle-income countries. Current Problems in Cardiology, pages 72–115, 2010.
19. Kirbas, C., Quek, F. – A review of vessel extraction techniques and algorithms. ACM Computing Surveys, pages 81–121, 2004.
20. Krissian, K., Malandain, G., Ayache, N. – Directional anisotropic diffusion applied to segmentation of vessels in 3D images. Tech. Rep. 3064, INRIA, 1996.
21. Krissian, K., Malandain, G., Ayache, N. – Model-based detection of tubular structures in 3D images. Computer Vision and Image Understanding, pages 130–171, 2000.
22. Krissian, K., Bogunovic, H., Pozo, J.M., Villa-Uriol, M.C., Frangi, A. – Minimally interactive knowledge-based coronary tracking in CTA using a minimal cost path. The Midas Journal, July 2008.
23. Lesage, D., Angelini, E.D., Bloch, I., Funka-Lea, G. – Bayesian Maximal Paths for Coronary Artery Segmentation from 3D CT Angiograms. Springer Berlin Heidelberg, 2009.
24. McCullough, P.A. – Coronary artery disease. Clinical Journal of the American Society of Nephrology, pages 611–616, 2007.
25. Mendrik, A.M., Vonken, E.J., Rutten, A., Viergever, M.A., van Ginneken, B. – Noise reduction in computed tomography scans using 3-D anisotropic hybrid diffusion with continuous switch. IEEE Transactions on Medical Imaging, pages 1585–1594, 2009.
26. Niki, N., Kawata, Y., Sato, H., Kumazaki, T. – 3D imaging of blood vessels using X-ray rotational angiographic system. IEEE Med. Imaging Conf., pages 1873–1877.
27. World Health Organization. Cardiovascular diseases (CVDs), June 2017.
28. O'Brien, J.F., Ezquerra, N.F. – Automated segmentation of coronary vessels in angiographic image sequences utilizing temporal, spatial and structural constraints. In Proc. SPIE Conf. Visualization in Biomed. Computing.
29. Petrocelli, R., Elion, J., Manbeck, K.M. – A new method for structure recognition in unsubtracted digital angiograms. IEEE Computers in Cardiology, pages 207–210.
30. Pizer, S., Morse, B., Fritsch, D. – Zoom-invariant vision of figural shape: the mathematics of cores. Computer Vision and Image Understanding, pages 55–71.
31. Poli, R., Valli, G. – An algorithm for real-time vessel enhancement and detection. Comp. Methods and Prog. in Biomed., pages 1–22.
32. Prinet, V., Monga, O., Ge, C., Xie, S., Ma, S. – Thin network extraction in 3D images: application to medical angiograms. Proc. Int. Conf. Pattern Rec., pages 386–390.
33. Sanches, J.M., Micó, L., Cardoso, J. – Pattern Recognition and Image Analysis. Springer Berlin Heidelberg, 2013.
34. Schaap, van Walsum, Neefjes, Metz, Capuano, de Bruijne, Niessen. – Robust shape regression for supervised vessel segmentation and its application to coronary segmentation in CTA. IEEE Transactions on Medical Imaging, pages 1974–1986, 2011.
35. Smets, C., Verbeeck, G., Suetens, P., Oosterlinck, A. – A knowledge-based system for the delineation of blood vessels on subtraction angiograms. Pattern Rec. Lett., pages 113–121.
36. Smistad, E., Bozorgi, M., Lindseth, F. – FAST: framework for heterogeneous medical image computing and visualization. International Journal of Computer Assisted Radiology and Surgery, pages 1811–1822, 2015.
37. Sorantin, E., Halmai, C., Erdohelyi, B., Palagyi, K., Nyul, L., Olle, K., Geiger, B., Lindbichler, F., Friedrich, G., Kiesler, K. – Spiral-CT-based assessment of tracheal stenoses using 3-D skeletonization. IEEE Trans. on Med. Img., pages 263–273, 2002.
38. Summers, P., Bhalerao, A. – Derivation of pressure gradients from magnetic resonance angiography using multi-resolution segmentation. International Conference on Image Processing and its Applications, pages 404–408.
39. Szymczak, A. – Vessel tracking by connecting the dots. The Midas Journal, 2008.
40. Thie, J. – Nuclear Medicine Imaging: An Encyclopedic Dictionary. Springer, 2012.
41. Viola, P., Jones, M. – Rapid object detection using a boosted cascade of simple features. In Computer Vision and Pattern Recognition, Proceedings of the 2001 IEEE Computer Society Conference, 2001.
42. Vlodaver, Z., Wilson, R., Garry, D. – Coronary Heart Disease: Clinical, Pathological, Imaging, and Molecular Profiles. Springer, 2012.
43. Wong, N.D. – Epidemiological studies of CHD and the evolution of preventive cardiology. Nature Reviews Cardiology, pages 276–289, 2014.
44. Xu, C., Prince, J. – Snakes, shapes, and gradient vector flow. IEEE Transactions on Image Processing, pages 359–369, 1998.
45. Yang, G., Kitslaar, P., Frenay, M., Broersen, A., Boogers, M., Bax, J., Dijkstra, J. – Automatic centerline extraction of coronary arteries in coronary computed tomographic angiography. The International Journal of Cardiovascular Imaging, pages 921–933, 2011.
46. Zheng, Y., Tek, H., Funka-Lea, G. – Robust and accurate coronary artery centerline extraction in CTA by combining model-driven and data-driven approaches.
47. Kitamura, Y., Li, Y., Ito, W. – Automatic coronary extraction by supervised detection and shape matching.
48. Barlund, Z.M., Smistad, E. – Segmentation and Centerline Extraction of the Coronary Arteries with GPU; GPU-Based Airway Tree Segmentation and Centerline Extraction.