
Computer-aided diagnosis system for ulcer detection
in wireless capsule endoscopy videos
Said Charfi
LabSIV , Department of Computer Science,
Faculty of Sciences, Ibn Zohr University,
BP 8106, 80000 Agadir, Morocco.
Email: charfisaid@gmail.com
Mohamed El Ansari
LabSIV , Department of Computer Science,
Faculty of Sciences, Ibn Zohr University,
BP 8106, 80000 Agadir, Morocco.
Email: [anonymized]
Abstract—In this paper, we present a new feature descriptor for the automatic recognition of frames containing ulcers in Wireless Capsule Endoscopy (WCE) images. The new approach is based on the fact that the ulcer disease exhibits various features that cannot be captured with a single descriptor. Hence, we combine two state-of-the-art descriptors in order to obtain a more powerful one. The Complete Local Binary Pattern (CLBP) descriptor is used to capture the texture information in the image. In parallel, the Global Local Oriented Edge Magnitude Pattern (Global LOEMP) descriptor is employed to extract the color features. Finally, we concatenate the two feature vectors to obtain a more discriminating one. Experiments were conducted and the results are satisfactory.
I. INTRODUCTION
Wireless Capsule Endoscopy (WCE) can image parts of the human gastrointestinal tract that are unreachable by traditional endoscopes. A major drawback of this technology is that a large amount of data must be analysed in order to detect a disease, which is time-consuming and a burden for clinicians. Consequently, Computer-Aided Diagnosis (CAD) systems have been developed to assist them. WCE has been adopted in place of traditional endoscopy as it enables non-invasive, painless, disposable and effective diagnosis of the whole small bowel. The WCE system consists of an ingestible pill camera (26 mm × 11 mm), a data recorder and computer software for interpretation. As shown in Fig. 1, the capsule is a pill-shaped device which consists of a short focal-length CMOS camera, four light sources, a battery, a radio transmitter and some other miniature components. Many efforts have been made in the literature on automatic ulcer detection. Baopu et al. [3], [4] proposed two methods, inspired by wavelet-based and curvelet-based local binary patterns respectively, to distinguish ulcer regions from
normal ones in patches selected from CE images. Furthermore,
an approach for capsule endoscopy image analysis using
texture information from various color models was proposed
in [5]. This approach focuses on color texture features in
order to investigate how the structure information of healthy
and abnormal tissue is distributed on RGB, HSV and CIE
lab color spaces. The WCE images are pre-processed using
bi-dimensional ensemble empirical mode decomposition to
facilitate differential lacunarity analysis to extract the texture
patterns of normal and ulcerous regions. In [6], a method based
on Contourlet transform and Log Gabor filter to distinguish
ulcer regions from normal regions was proposed. Color and
textural features were proposed in [7] to determine the status
of the small intestine and detect bleeding and ulcer in wireless
capsule endoscopy images. An improved bag-of-features method for automatic polyp detection and a method for bleeding detection in wireless capsule endoscopy images were presented in [8], [9]. The authors in [10] proposed a pixel-based bleeding detection method for WCE videos using a support vector machine. A recent
two-stage work for automatic ulcer detection was presented in [11], and another for the automated detection of lesions in WCE images in [12]. Yuji Iwahori et al. [13] proposed a Hessian filter and HOG features to distinguish between polyp and non-polyp regions, then used K-means++ in the classification phase. Recently, a method based on directional wavelet features was presented in [14]. In [15], [16], systems for small intestine
motility characterization and bleeding detection, based on
Deep Convolutional Neural Networks were introduced. In [17],
capsule endoscopy and deep enteroscopy in irritable bowel
disease were investigated. A study of the effectiveness of colon capsule endoscopy in detecting colon polyps was presented in [18].
In this work, we propose a novel CAD system for the recognition of ulcerous CE images. Unlike previous studies, we extract the texture features using the CLBP descriptor. However, a single descriptor may not be effective enough to represent the complex appearance of ulcers in CE images, so it is also necessary to exploit the color features of the mucosa. Hence, we propose concatenating the color descriptor (i.e., Global LOEMP) with the previously extracted CLBP features. First, the CE image is pre-processed in order to reduce the illumination effect and speed up the analysis. Next, CLBP is employed to extract the texture features. Then, we extract the color features using Global LOEMP in the HSV color space. Finally, the feature vectors are concatenated and fed to the classifier. In addition, we
have investigated the performances of two classifiers (i.e.,
Support Vector Machine and Multilayer Perceptron) on our
classification experiments. Extensive experiments carried out
demonstrate the effectiveness of the proposed method.
The structure of the paper is as follows: Section II describes
the proposed feature extraction scheme. In Section III, we
present the experimental results as well as the evaluation and
comparison of the proposed scheme with existing approaches.
Finally, conclusions of this study are summarized in Section IV.
II. PROPOSED APPROACH
In this section, we present the new approach to extract
the features describing ulcer in WCE images. The derived

Fig. 1: Capsule endoscopy.
features are provided to two classifiers (i.e., Support Vector Machine (SVM) and Multilayer Perceptron (MLP)) to decide which of them is better suited to WCE analysis. In this work,
the first step is to pre-process the CE image taken from WCE
video for noise removal, reducing the illumination effect and
masking unnecessary regions. Next, the CE image is passed
to the CLBP descriptor for texture feature extraction. Then,
we apply Global LOEMP to CE image represented in the
HSV color space in order to derive color features. Further, we
integrate the feature vectors extracted by the CLBP and LOEMP descriptors. Finally, the combined feature vector is fed to a classifier to decide whether the image is normal or contains an ulcerous region. The scheme of the proposed approach is depicted in Fig. 2. The rest of this section details all the steps of the proposed ulcer detection process.
Fig. 2: Scheme of the proposed approach.
A. Preprocessing
Capsule images are filtered using 3-D median filtering in a 3×3×3 neighborhood around each pixel for noise removal, and certain regions are masked in order to speed up the analysis. In particular, we apply an automatic illumination correction scheme [21] to reduce the effect of illumination.
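For illustration, the denoising step can be sketched in a few lines of Python; this is a minimal sketch assuming `numpy` and `scipy` are available, and it deliberately omits the region masking and the illumination-correction scheme of [21]. The function name `preprocess_frame` is ours, not part of the original implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_frame(rgb_image: np.ndarray) -> np.ndarray:
    """Denoise an H x W x 3 capsule frame with a 3x3x3 median filter.

    The filter runs over the two spatial dimensions and the colour
    dimension, as described in Section II-A; illumination correction
    and region masking are not reproduced here.
    """
    return median_filter(rgb_image, size=(3, 3, 3))

# Example on a dummy frame (real frames come from the WCE video).
frame = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
denoised = preprocess_frame(frame)
```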
B. Texture Feature Extraction
Complete Local Binary Pattern (CLBP) is known as a generalized version of LBP. Owing to its capability to represent discriminant information of the local structure that simple LBP can miss [23], CLBP has proved to be effective for texture analysis.
Given a central pixel $g_c$ and its $P$ circularly and evenly spaced neighbours $g_p$, $p = 0, 1, \ldots, P-1$, the difference between each neighbour and the central pixel is straightforwardly computed as $d_p = g_p - g_c$.
In CLBP, a local region is represented by its center pixel together with the local differences, which are decomposed into signs and magnitudes by the so-called Local Difference Sign-Magnitude Transform (LDSMT). Three operators are derived from this representation: CLBP-Center (CLBP_C), which compares the gray level of the center pixel with the average gray level of the whole image; CLBP-Sign (CLBP_S), which encodes the sign (positive or negative) of the difference between each neighbour and the center pixel (the conventional LBP operator uses only this sign component of $d_p$); and CLBP-Magnitude (CLBP_M), which encodes the magnitude of that difference. Since CLBP_C only reflects the overall intensity level of the central pixel, it is not required for our texture feature extraction. Thus, only CLBP_S and CLBP_M are jointly combined, and the resulting feature vector is concatenated with the LOEMP one in the presented approach.
$d_p$ can be further decomposed into two components:
$$d_p = s_p \cdot m_p, \qquad s_p = \mathrm{sign}(d_p), \qquad m_p = |d_p| \qquad (1)$$
where $s_p$ is the sign of $d_p$ and $m_p$ its magnitude. For additional discriminant power, CLBP also considers the intensity of the central pixel, $g_c$. The CLBP_M operator is defined as:
$$\mathrm{CLBP\_M}_{P,R} = \sum_{p=0}^{P-1} t(m_p, c)\, 2^p, \qquad t(x, c) = \begin{cases} 1, & x \ge c \\ 0, & x < c \end{cases} \qquad (2)$$
where $c$ is a threshold to be determined; in practice, $c$ is set to the mean value of $m_p$ over the whole image. The CLBP_C operator is defined as:
$$\mathrm{CLBP\_C}_{P,R} = t(g_c, c_I) \qquad (3)$$
where $t$ is defined as in CLBP_M and the threshold $c_I$ is set to the average gray level of the whole image.
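For illustration, the following Python sketch computes the CLBP_S and CLBP_M code maps of Equations (1) and (2) for P = 8, R = 1. It samples the eight grid neighbours directly instead of using circular interpolation, and the function and variable names are ours; it is a simplified stand-in for, not a reproduction of, the authors' implementation.

```python
import numpy as np

# Offsets of the P = 8, R = 1 neighbourhood (no sub-pixel interpolation).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def clbp_s_m(gray):
    """Return the CLBP_S and CLBP_M code maps of a grayscale image.

    CLBP_S encodes the sign of d_p = g_p - g_c (Eq. 1); CLBP_M thresholds
    the magnitude m_p = |d_p| against its mean over the image (Eq. 2).
    """
    g = np.asarray(gray, dtype=np.float64)
    h, w = g.shape
    center = g[1:-1, 1:-1]
    # d_p for every interior pixel, one slice per neighbour: shape (8, h-2, w-2)
    d = np.stack([g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] - center
                  for dy, dx in OFFSETS])
    m = np.abs(d)
    c = m.mean()                                      # threshold c of Eq. (2)
    weights = (1 << np.arange(8)).reshape(8, 1, 1)    # 2 ** p
    clbp_s = ((d >= 0) * weights).sum(axis=0)         # sign code per pixel
    clbp_m = ((m >= c) * weights).sum(axis=0)         # magnitude code per pixel
    return clbp_s, clbp_m
```

How the two code maps are turned into the 256-dimensional joint histogram used in Section II-D is not detailed in the text; histogramming each 8-bit code per image is one straightforward option.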
C. Color Feature Extraction
The color Global LOEMP is a framework that is able
to effectively combine color, global spatial structure, global
direction structure, and local shape information and balance
the two concerns of distinctiveness and robustness [24]. The
color Global LOEMP feature possesses the following three
properties.
1) Color angle patterns that are able to encode the discriminative features derived from the spatio-chromatic patterns of different spectral channels within a certain local region. This enables it to contain richer information than other LBP-based features.
2) A framework that is able to effectively combine color, global spatial structures, global direction structures, and local shape information. This enables it to contain richer image information.
3) A global-level rotation compensation method, which shifts
the principal orientation of HOG to the first position, making
Color Global LOEMP robust to rotations. The first two prop-
erties allow the features to convey rich image information,
and the third one allows the algorithm to be robust to exterior
variations.
As demonstrated in [11], the second component of the trans-
formed WCE images in the HSV color space highlights the
ulcer regions and separates ulcer mucosa tissues from the
uninformative parts. Therefore, we extract the color features
using the LOEMP descriptor for all the WCE images in this
color plane.
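Global LOEMP itself combines colour angle patterns with a HOG-based global orientation compensation [24] and is not reproduced here. The short sketch below only illustrates the colour-space step described above, assuming OpenCV (`cv2`) is available and that the "second component" corresponds to the saturation channel of the HSV representation.

```python
import cv2
import numpy as np

def ulcer_color_plane(bgr_image: np.ndarray) -> np.ndarray:
    """Return the second HSV component (saturation) of a capsule frame.

    As noted in Section II-C (and in [11]), this plane tends to highlight
    ulcerous mucosa; the Global LOEMP descriptor would then be computed
    on it.  OpenCV loads and stores colour images in BGR order.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    return hsv[:, :, 1]  # channel 0 = H, 1 = S, 2 = V
```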
D. Feature Integration
It is very important to integrate the extracted features in an effective way to better describe ulcers. We apply the CLBP descriptor to the WCE image represented in the RGB color space and then jointly combine CLBP_S and CLBP_M; the resulting joint histogram is a 256-dimensional vector. Likewise, we apply the Global LOEMP descriptor to the CE image, represented in the HSV color space, to compute the color features, which are represented by a 140-dimensional vector. Finally, the CLBP and LOEMP vectors are concatenated to form a combined 396-dimensional vector that serves as the descriptor of the CE image.
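The integration step itself is a plain concatenation. A minimal sketch follows; the way the 256-dimensional texture histogram is built from the CLBP_S/CLBP_M code maps is not spelled out in the paper, so the `clbp_histogram` helper below is only one plausible choice, and `loemp_hist` is assumed to come from a separate Global LOEMP implementation.

```python
import numpy as np

def clbp_histogram(clbp_s, clbp_m):
    """One possible 256-D texture vector: the normalised sum of the
    256-bin histograms of the CLBP_S and CLBP_M code maps."""
    h_s = np.bincount(clbp_s.ravel().astype(int), minlength=256)
    h_m = np.bincount(clbp_m.ravel().astype(int), minlength=256)
    h = (h_s + h_m).astype(np.float64)
    return h / h.sum()

def image_descriptor(clbp_hist, loemp_hist):
    """Concatenate the 256-D texture and 140-D colour histograms into
    the 396-D descriptor fed to the classifier (Section II-D)."""
    return np.concatenate([clbp_hist, loemp_hist])  # 256 + 140 = 396
```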
III. EXPERIMENTAL RESULTS
A set of experimental data was built from image sequences acquired by WCE from different patients and downloaded from [25]. The dataset is composed of 2333 CE images extracted from the video clips of 16 patients with ulcers and 7 normal patients. It consists of 733 normal and 1600 ulcer images that were randomly divided into two sets: 1400 WCE images for training (1000 with ulcer and 400 normal) and 933 WCE images for testing (600 with ulcer and 333 normal). The proposed method was first applied to the training dataset, and the testing dataset was kept untouched.
A validation set was implicitly created from the training set and used to tune the parameters of the classifiers. P = 8 and R = 1 were empirically chosen for the calculation of the CLBP components (i.e., CLBP_S and CLBP_M), and the thresholds c and c_I (Equations 2 and 3) are set to the mean value of m_p and the average gray level of the whole image, respectively. Samples of normal and abnormal CE frames are depicted in Fig. 3. The resolution of the images is 243×424 pixels.
The original images were manually labelled to provide the ground truth. Images containing any abnormal region are labelled as positive samples; otherwise, they are labelled as negative samples. In order to prevent over-fitting of the classification, we use three-fold cross-validation for all the classification experiments. In order to evaluate the
discrimination ability of the proposed approach, we provide the
features to SVM (linear kernel) and MLP (number of hidden
neurons ranging from 5 to 50) classifiers and compare their
performances with the methods presented in [11], [26]. The
classification results are assessed in terms of the accuracy,
specificity and sensitivity measures, which are defined as
follows:
Fig. 3: Two normal images (top) and two images with ulcer.
TABLE I: Comparison with state-of-the-art methods (%).
Methods Acc Sens Spec
Proposed method (SVM) 94.07 96.86 91.14
Proposed method (MLP) 93.93 95.50 92.29
Method [11] 92.65 94.12 91.18
Method [26] 87.27 88.64 85.75
$$\mathrm{Sensitivity} = \frac{\text{No. of correct positive predictions}}{\text{No. of positives}} \qquad (4)$$
$$\mathrm{Specificity} = \frac{\text{No. of correct negative predictions}}{\text{No. of negatives}} \qquad (5)$$
$$\mathrm{Accuracy} = \frac{\text{No. of correct predictions}}{\text{Total no. of samples}} \qquad (6)$$
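As an illustration of this evaluation protocol, the scikit-learn sketch below runs three-fold cross-validation with a linear-kernel SVM or an MLP and computes the measures of Equations (4)-(6) from the pooled confusion matrix. `X` is assumed to be a NumPy array of 396-dimensional descriptors and `y` the binary labels (1 = ulcer); the MLP settings are illustrative, since the exact configuration used by the authors is not fully specified.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

def evaluate(clf, X, y, n_splits=3):
    """Three-fold cross-validation reporting accuracy, sensitivity, specificity."""
    tp = tn = fp = fn = 0
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in folds.split(X, y):
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        c = confusion_matrix(y[test_idx], pred, labels=[0, 1])
        tn += c[0, 0]; fp += c[0, 1]; fn += c[1, 0]; tp += c[1, 1]
    sensitivity = tp / (tp + fn)                   # Eq. (4)
    specificity = tn / (tn + fp)                   # Eq. (5)
    accuracy = (tp + tn) / (tp + tn + fp + fn)     # Eq. (6)
    return accuracy, sensitivity, specificity

# Classifiers as described in Section III; the hidden-layer size is one
# value from the 5-50 range explored by the authors.
svm = SVC(kernel="linear")
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)
```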
Table I presents the classification results of the proposed
approach compared with the state-of-the-art methods.
From Table I, we can conclude that the proposed algorithm surpasses the approach of [11] with improvements of 1.42% and 2.74% in accuracy and sensitivity, respectively, while the specificity is nearly the same for both approaches when the SVM classifier is used. With the MLP as the classifier, there are improvements of 1.28%, 1.38% and 1.11% in terms of accuracy, sensitivity and specificity, respectively.
Compared with [26], the proposed method gives better results, with improvements of 6.8%, 8.22% and 5.39% in accuracy, sensitivity and specificity, respectively, when the SVM classifier is used in the proposed approach. When the MLP is used as the classifier, the improvements over the method of [26] are 6.66%, 6.86% and 6.54% in terms of accuracy, sensitivity and specificity, respectively, as shown in Table I.

Fig. 4 depicts a detected image with ulcer (Fig. 4(a)) and
a detected normal image (Fig. 4(b)).
Fig. 4: Illustration of visual detection results.
IV. C ONCLUSION
In this paper, we have presented a novel and effective two-stage approach for ulcer detection in wireless capsule endoscopy videos. It is based on the combination of a texture feature extraction approach (i.e., CLBP) and a color feature extraction method (i.e., Global LOEMP) to better characterize wireless capsule endoscopy images. The proposed approach was experimentally evaluated on a realistic dataset with normal and abnormal frames from wireless capsule endoscopy videos. The experimental results show that it detects ulcers with an accuracy of 94.07%. For future work, we envisage experimenting with larger datasets and testing the proposed method on other types of abnormalities.
ACKNOWLEDGMENT
We gratefully acknowledge and thank the National Center for Scientific and Technical Research (CNRST) in Rabat for its research grant.
REFERENCES
[1] B. Li and M. Q. H. Meng, “Texture analysis for ulcer
detection in capsule endoscopy images,” Image Vision Comput. ,
vol. 27, no. 9, pp. 1336–1342, Aug. 2009. [Online]. Available:
http://dx.doi.org/10.1016/j.imavis.2008.12.003
[2] B. Li, M. Q.-H. Meng, and J. Y. W. Lau, “Computer-aided small
bowel tumor detection for capsule endoscopy.” Artif. Intell. Med. ,
vol. 52, no. 1, pp. 11–16, 2011. [Online]. Available: http://dblp.uni-trier.de/db/journals/artmed/artmed52.html#LiML11
[3] V. S. Charisis, L. J. Hadjileontiadis, C. N. Liatsos, C. C. Mavrogiannis,
and G. D. Sergiadis, “Capsule endoscopy image analysis using texture
information from various colour models.” Comput. Meth. Prog. Bio. ,
vol. 107, no. 1, pp. 61–74, 2012. [Online]. Available: http://dblp.uni-trier.de/db/journals/cmpb/cmpb107.html#CharisisHLMS12
[4] N. E. Koshy and V. P. Gopi, “A new method for ulcer detection
in endoscopic images,” in Electronics and Communication Systems
(ICECS), 2015 2nd International Conference on . IEEE, 2015, pp.
1725–1729.
[5] J.-Y. Yeh, T.-H. Wu, and W.-J. Tsai, “Bleeding and ulcer detection using
wireless capsule endoscopy images,” Journal of Software Engineering
and Applications, vol. 7, no. 5, p. 422, 2014.
[6] Y. Yuan, B. Li, and M. Q. Meng, “Improved bag of feature for
automatic polyp detection in wireless capsule endoscopy images,”
IEEE Trans. Autom. Sci. Eng. , vol. 13, no. 2, pp. 529–535, 2016.
[Online]. Available: http://dx.doi.org/10.1109/TASE.2015.2395429
[7] ——, “Bleeding frame and region detection in the wireless
capsule endoscopy video,” IEEE J. Biomed. Health Inform. ,
vol. 20, no. 2, pp. 624–630, 2016. [Online]. Available:
http://dx.doi.org/10.1109/JBHI.2015.2399502
[8] M. A. Usman, G. Satrya, M. R. Usman, and S. Shin, “Detection of
small colon bleeding in wireless capsule endoscopy videos,” Comput.
Med. Imag. Grap. , 2016.
[9] Y. Yuan, J. Wang, B. Li, and M. Q.-H. Meng, “Saliency based ulcer
detection for wireless capsule endoscopy diagnosis.” IEEE Trans. Med.
Imag. , vol. 34, no. 10, pp. 2046–2057, 2015. [Online]. Available:
http://dblp.uni-trier.de/db/journals/tmi/tmi34.html#YuanWLM15
[10] D. K. Iakovidis and A. Koulaouzidis, “Automatic lesion detection
in wireless capsule endoscopy – A simple solution for a complex
problem,” in 2014 IEEE International Conference on Image Processing,
ICIP 2014, Paris, France, October 27-30, 2014 , 2014, pp. 2236–2240.
[Online]. Available: http://dx.doi.org/10.1109/ICIP.2014.7025453
[11] Y. Iwahori, A. Hattori, Y. Adachi, M. K. Bhuyan, R. J. Woodham, and
K. Kasugai, “Automatic detection of polyp using hessian filter and hog
features.” in KES, vol. 60, 2015, pp. 730–739.
[12] G. Wimmer, T. Tamaki, J. J. W. Tischendorf, M. Häfner, S. Yoshida,
S. Tanaka, and A. Uhl, “Directional wavelet based features for colonic
polyp classification,” Med. Image Anal. , vol. 31, pp. 16–36, 2016.
[13] S. Seguí, M. Drozdzal, G. Pascual, P. Radeva, C. Malagelada,
F. Azpiroz, and J. Vitrià, “Generic feature learning for wireless capsule
endoscopy analysis,” Comput. Biol. Med. , 2016.
[14] X. Jia and M. Q. Meng, “A deep convolutional neural network
for bleeding detection in wireless capsule endoscopy images,” in
38th Annual International Conference of the IEEE Engineering
in Medicine and Biology Society, EMBC 2016, Orlando, FL,
USA, August 16-20, 2016 , 2016, pp. 639–642. [Online]. Available:
http://dx.doi.org/10.1109/EMBC.2016.7590783
[15] U. Kopylov, D. Carter, and A. R. Eliakim, “Capsule endoscopy and deep
enteroscopy in irritable bowel disease,” Gastrointest. Endosc. Clin. N.
Am., vol. 26, no. 4, pp. 611–627, 2016.
[16] T. Rokkas, K. Papaxoinis, K. Triantafyllou, and S. D. Ladas, “A meta-
analysis evaluating the accuracy of colon capsule endoscopy in detecting
colon polyps,” Gastrointest. Endosc. , vol. 71, no. 4, pp. 792–798, 2010.
[17] Y. Zheng, J. Yu, S. B. Kang, S. Lin, and C. Kambhamettu, “Single-
image vignetting correction using radial gradient symmetry,” in Com-
puter Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Con-
ference on . IEEE, 2008, pp. 1–8.
[18] T. Ojala, M. Pietikäinen, and T. Mäenpää, “Multiresolution gray-scale
and rotation invariant texture classification with local binary patterns,”
IEEE Trans. Pattern Anal. Mach. Intell. , vol. 24, no. 7, pp. 971–987,
Jul. 2002.
[19] X. Yuan, X. Hao, H. Chen, and X. Wei, “Robust traffic sign recognition
based on color global and local oriented edge magnitude patterns,” IEEE
Trans. Intell. Transp. Syst. , vol. 15, no. 4, pp. 1466–1477, 2014.
[20] (2016). [Online]. Available:
http://www.chp.gov.hk/en/content/9/25/51.html
[21] T. Ghosh, A. Das, and R. Sayed, “Automatic small intestinal ulcer de-
tection in capsule endoscopy images,” International Journal of Scientific
and Engineering Research , vol. 7, no. 10, pp. 737–741, October 2016.
