
Abstract— The objective of my work was to implement an image processing application using the Vivado Design Suite, which is then downloaded onto a Nexys 4 development board. This paper addresses three elementary image processing operations, namely: the negative image, the grayscale image and the black and white image. After the project has been downloaded onto the FPGA board, the images can be viewed and interpreted with the help of a VGA monitor. A VGA controller described in VHDL generates the timing of the signals required to interface with the monitor.

Keywords — Binarization, Grayscale, Image processing, Field Programmable Gate Array (FPGA), Nexys4, Vivado Design Suite
I. INTRODUCTION
IMAGE processing is considered to be one of the areas that aim to increase the interpretation and perception of information, to modify the attributes of an image so that it becomes more suitable for a specific observer and a certain task, and also to increase machine autonomy, leading to independent operation and therefore replacing the human component. [1]
The demand for image processing performance has increased steadily, and with it the required computing power, especially when it comes to the real-time use of information extracted from images, imaging applications being resource-consuming (computing power, memory).
To speed up image processing there are different alternatives, one of the architectures that offers a compromise between flexibility and performance being the FPGA.
Because FPGAs can operate in parallel configurations and can act as custom computing machines, interest in them for digital image processing has increased, so that these technologies have become a viable target for implementing imaging algorithms in fields such as medicine, the military, engineering, industry, astronomy, etc. [2]
In the literature there is a multitude of research in the field of image processing, classified according to the hardware topologies and the algorithms implemented.
From 1964 to the present, image processing has made tremendous advances. In order to optimize analysis and interpretation by the human observer, these processing methods have been applied even in space programs, where they have been used to correct and visually improve the images received from the Apollo and Surveyor space missions. [3]
In medicine, for example, computerized processing methods make it possible to improve contrast or to encode the intensities (gray levels) of monochrome images in colors, in order to facilitate the interpretation of radiographs or other biomedical imagery. Improvement and restoration methods are also used to process degraded images of valuable objects (for example, paintings) or of experiments that are too costly to repeat.
Another eloquent example demonstrating the applicability of image processing using FPGAs is presented in [4]. The cited article presents a real-time configurable system in which the images are processed by switching between a series of filters and by changing the filter parameters, thus improving the image considerably. Another example is presented in [5], which describes the deployment of the Sobel edge detection operator applied to images in order to obtain a surveillance application.
This paper presents an application that addresses three fundamental image processing operations that find applicability especially in fields such as medicine and the military. Thus, depending on the user's selection, the original image (which is to be processed) is displayed on the screen together with one of its processed versions: the negative image, the black-and-white image or the grayscale image.
II. THEORETICAL CONSIDERATIONS
A. Image processing
Image processing is a method of performing operations on an image in order to obtain an improved image or to extract useful information from it. Recent studies show that an FPGA architecture is a much more efficient platform for image processing than a microprocessor, so FPGAs have often come to be used in real-time imaging applications. [6]
From the point of view of the programs used, a significant
number of such processing programs are currently available on
the market. In combination with other software packages (such
as computerized graphics, for example), they are a very useful
starting point for solving specific image processing problems.
Solutions obtained through software deployment are then
transferred (ported) to specialized hardware processing boards
to obtain a higher speed.
For any digital processing to be performed on an image, it
must first be stored in the computer in an appropriate form so
that it can be manipulated by a processing program.
The most practical way to do this is to divide the image into
a collection of discrete cells known as pixels.
Typically, the image is divided into a rectangular pixel
network so that each pixel is a small rectangle. Once this has
been done, each pixel has a value, representing the color of that
pixel. A color pixel in a picture is a combination of three colors:
Red, Green and Blue, together forming the RGB space. RGB
color values are represented in three XYZ dimensions,
illustrated by brightness, chroma, and hue attributes.
To represent the color images, the red, green and blue
components must be specified separately for each pixel
(assuming an RGB color space), and so the pixel "value" is
actually a three-dimensional vector.
Due to the desire to replace the human observer with devices that process information faster, image processing and analysis has developed as an interdisciplinary and widely used domain based on rigorous mathematical theory. This paper addresses three elementary imaging processes, namely: the negative image, the grayscale image and the black and white image.
1) The negative of an image
The most basic feature in digital image processing is the one that calculates the negative of an image. The negative image is obtained by inverting all gray levels. To calculate the negative of an image, each pixel value in the original image is subtracted from 255 (if we use 8 bits/color images) or from 15 (if we use 4 bits/color images).
The equation describing this algorithm is (1).

s = intensity − r                                             (1)

Where:
• s represents the pixel value of the processed image
• r represents the value of the gray levels of the pixels
in the original image
• intensity is 255 or 15 (depending on the image used)
[7]

Negative images are useful for enhancing white or gray detail embedded in the dark regions of an image, and such inversions are very useful when analyzing medical images whose content needs to be inverted for automated analysis. [8] Reversing a certain portion of the brightness range produces an effect that can be used to highlight details in shadowed and saturated areas.
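As a minimal illustration of equation (1), the inversion can be described in VHDL as a purely combinational operation. The sketch below assumes 4 bits per color component (intensity = 15), as in the implementation described later in this paper; the entity and signal names are chosen here only for illustration.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity negative_pixel is
  port (
    r_in, g_in, b_in    : in  std_logic_vector(3 downto 0);  -- original 4-bit components
    r_out, g_out, b_out : out std_logic_vector(3 downto 0)   -- inverted components
  );
end entity;

architecture rtl of negative_pixel is
  constant INTENSITY : unsigned(3 downto 0) := to_unsigned(15, 4);
begin
  -- s = intensity - r, applied independently to each color component
  r_out <= std_logic_vector(INTENSITY - unsigned(r_in));
  g_out <= std_logic_vector(INTENSITY - unsigned(g_in));
  b_out <= std_logic_vector(INTENSITY - unsigned(b_in));
end architecture;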
2) Grayscale image
A grayscale image uses a color-coding scheme that contains only intensity information. The color structure is composed exclusively of shades of gray: from black (the weakest intensity) to white (the strongest intensity).
The reason for differentiating such images from any other type of color image is that less information needs to be stored per pixel, so the main benefit of grayscale images is that each pixel value can be represented on a single byte instead of the three bytes required for RGB encoding. [9]
The most common algorithm used to convert a color image into a grayscale image is shown in equation (2).

Y = (0.299 · R) + (0.59 · G) + (0.114 · B)                    (2)

Where:
• Y represents the pixel luminance
• R, G and B represent the values of the three color
components [7]
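For example (with values chosen here only for illustration), a pixel with R = 200, G = 150 and B = 50 gives Y = 0.299·200 + 0.59·150 + 0.114·50 = 59.8 + 88.5 + 5.7 ≈ 154, i.e. a fairly bright gray level.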

3) Binary image
Binary images are images whose pixels have two possible intensity values. Numerically, the two values are 0 for black and
1 or 255 for white, so binary images are normally displayed as
black and white images.
Binary images are often produced by converting a grayscale image (with gray levels from 0 to 255) into a black and white image in order to separate an object from the background. The object color (usually white) is referred to as the foreground color, and the rest (usually black) is the background color. However, depending on the image used, this polarity can be reversed, in which case the object is displayed with 0 (black) and the background with a value other than zero. [10] This paper uses the global binarization method, in which a fixed threshold value is used to assign 0 or 1 to every pixel position in the image, as described in equation (3). [11], [12]

g(x, y) = { 1, if f(x, y) ≥ T
          { 0, otherwise                                      (3)

In the equation presented:
• T is the threshold value
• g(x, y) and f(x, y) represent the processed image and the original image, respectively
• x and y represent the pixel coordinates.
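For example (illustrative values only): with a threshold T = 128 on an 8-bit grayscale image, a pixel with f(x, y) = 200 is mapped to g(x, y) = 1 (white), while a pixel with f(x, y) = 90 is mapped to g(x, y) = 0 (black).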
B. FPGA programmable circuits
The first Field Programmable Gate Array (FPGA) was invented by Xilinx co-founder Ross Freeman. An FPGA circuit is a reconfigurable silicon chip, having much of the flexibility of software running on a processor-based system.
FPGAs are by their nature parallel devices: different processing operations are assigned to dedicated sections of the chip, which can operate autonomously without influencing other ongoing processes. FPGAs can also be programmed to replace any digital circuit or system, and in the case of small or medium production volumes these circuits can provide cheap and fast solutions.
Recently, FPGA technologies have become widely used to implement image processing algorithms, especially due to their modular architecture. Studies show that using FPGA circuits to perform image processing is much more efficient than using a conventional processor, FPGAs proving to be extremely useful for tasks requiring quick completion. Applications for FPGAs are typically written in hardware description languages such as VHDL or Verilog.
An FPGA chip consists of a very large number of programmable logic blocks. In turn, the entire structure is surrounded by programmable I/O blocks.
Normally, the FPGA includes:
• Programmable logic blocks that implement logical functions
• Programmable routing/interconnections that connect these logical functions
• I/O blocks that are connected to the logic blocks via the routing interconnections and which perform off-chip connections
A generalized example of an FPGA is shown in Figure 1,
where Configurable Logic Blocks (CLB) are evenly distributed over the entire face of the chip. These blocks can be connected using programmable interconnections.

Figure 1. FPGA structure
To be able to perform a function, the Xilinx architecture uses CLBs, I/O blocks, switch matrices, and a configuration memory, the latter being used to store the information that defines the logic blocks and their interconnections. Therefore, the device can be reprogrammed by simply modifying the data stored in the configuration memory.
C. VGA Controller
Video Graphics Array (VGA) is a graphical standard for personal computers which refers to the display hardware, developed by IBM (International Business Machines Corporation). Due to its widespread use, VGA can refer to the standard itself, to the 15-pin VGA connector (Figure 2) or to the 640×480 resolution.

Figure 2. VGA Connector [8]

The main difference between VGA and previous graphics standards is that VGA uses analog rather than digital signals, the VGA cable carrying the Red, Green and Blue video signals, as well as the two synchronization signals: horizontal synchronization and vertical synchronization.
VGA systems offer a resolution of 720×400 in text mode and a 640×480 resolution in graphics mode, with 16 colors or monochrome. Today, this interface is also used for high-definition screens and large resolutions, including 1080p and higher.
The VGA controller is a programmable logic component that generates the timing of the signals required to interface with a VGA monitor, so that the user only needs to provide the clock signal and, of course, the image source. The VGA controller provides the synchronization signals (horizontal and vertical), the pixel coordinates, and the display enable needed to produce the image at the right time. The horizontal and vertical sync signals are digital waveforms that define the timing of lines and frames during monitor operation. Being digital, these are provided directly by the FPGA.
As can be seen in Figure 2, the VGA video signal assembly contains 5 active signals:
• Horizontal synchronization: a digital signal used for line synchronization
• Vertical sync: a digital signal used for frame
synchronization
• Red (R): Analog signal used to control the intensity of
the red component
• Green (G): Analog signal used to control the intensity
of the green component
• Blue (B): Analog signal used to control the intensity
of the blue component
The horizontal sync signal is responsible for starting a new line of the screen. Similarly, the vertical sync signal starts a new frame on the screen. The intensity of the three colors (red, green and blue) is controlled by three vectors of a defined number of bits, which define the colors R, G and B. The vertical and horizontal sync signals determine the screen resolution (e.g. 1024×768 pixels), where the color of each pixel is determined by the R, G and B signals, each color being a combination of the three primary colors: Red, Green and Blue.
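As a quick check on the pixel clock used later in this paper: assuming the standard XGA blanking intervals (1344 pixel clocks per line and 806 lines per frame), a 1024×768 display refreshed at 60 Hz requires a pixel clock of about 1344 × 806 × 60 ≈ 65 MHz.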
III. IMPLEMENTATION OF THE ADOPTED SOLUTION
The application that is downloaded to the FPGA board loads an image into a ROM memory, processes the image according to the user's choice (the user can choose the negative image, the binarized image or the grayscale image), and a VGA driver sends it to the monitor to be displayed next to the original image.
For the development of the application, the Vivado HLS 2017.4 software platform was used. The BitGen package generates the bitstream file containing the configuration bits, which is needed to program the FPGA on the Nexys 4 development board (Figure 3).

Figure 3. Nexys 4 Development Board [6]
Nexys 4 is a development system based on Xilinx's Artix-7 FPGA technology. With its high-performance capability, generous external memories, USB, Ethernet, and other port collections, Nexys 4 can accommodate projects or applications ranging from simple combinational circuits to the incorporation of powerful processors. Several built-in peripherals, including an accelerometer, a temperature sensor, a MEMS digital microphone, and a variety of I/O devices, allow Nexys 4 to be used for a wide range of applications without the need for other components.
The top-level module is "static_vga". It is responsible for instantiating the other three modules: clk_wiz_0, vga_sync and img_gen. In turn, the img_gen module instantiates four other modules: blk_mem_gen_0, blk_mem_gen_1, blk_mem_gen_2 and img_proc.

Figure 4. VHDL project hierarchy
The ports of the "static_vga" module are connected to the development board with the help of a constraints file. In this way, signals and pins from the VHDL description can be directly mapped to physical terminals of the FPGA.
The clk_wiz_0 module performs the frequency division in this application (deriving the pixel clock required by the chosen resolution from the nominal frequency of the board), the vga_sync module provides the color information and the horizontal and vertical addresses required in the on-screen display process, and the img_gen module describes the processing operations performed on an image.
A. The clk_wiz_0 module
This application uses a VGA screen resolution of 1024×768 pixels, therefore the pixel clock signal must be 65 MHz. Because the nominal frequency of the on-board oscillator providing the main FPGA clock signal is 100 MHz, a frequency divider is needed to bring the 100 MHz input down to 65 MHz. This divider is instantiated with the help of the Clocking Wizard IP.
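As a sketch of how this core fits into the design (the port names clk_in1, clk_out1, reset and locked are the Clocking Wizard defaults and are an assumption here, since the paper does not list them; elaborating the wrapper requires the core generated in Vivado):

library ieee;
use ieee.std_logic_1164.all;

-- Thin wrapper around the generated Clocking Wizard core.
entity pixel_clock is
  port (
    clk100 : in  std_logic;   -- 100 MHz on-board oscillator
    rst    : in  std_logic;
    clk65  : out std_logic    -- 65 MHz pixel clock for 1024x768 @ 60 Hz
  );
end entity;

architecture structural of pixel_clock is
  component clk_wiz_0
    port (
      clk_in1  : in  std_logic;
      reset    : in  std_logic;
      clk_out1 : out std_logic;
      locked   : out std_logic
    );
  end component;
begin
  u_clk : clk_wiz_0
    port map (
      clk_in1  => clk100,
      reset    => rst,
      clk_out1 => clk65,
      locked   => open
    );
end architecture;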
B. The vga_sync module
The vga_sync module controls the interface between the board and the monitor. All the synchronization signals necessary for the entire application are generated here. As mentioned above, to ensure a resolution of 1024×768 pixels and a 60 Hz refresh rate, the pixel clock frequency must be 65 MHz.
Two internal counters can be found in this module. The horizontal counter is incremented along the x-axis of the screen for each pixel, as long as its value is within the limits of the visible VGA range, and the vertical counter is incremented for each completed line, so that its value is equal to the y-axis position of the current pixel. The same algorithm is repeated for the whole screen. Thus, the vertical counter controls the vertical synchronization signal, and the horizontal counter controls the horizontal synchronization signal.
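A minimal sketch of these two counters is shown below, assuming the standard XGA totals (1344 pixel clocks per line and 806 lines per frame, which match the counter limits reported in the experimental results); the signal names are illustrative, and in the actual design the vertical counter is advanced on the rising edge of the horizontal sync pulse rather than at the line wrap-around.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity vga_counters is
  port (
    clk65 : in  std_logic;               -- 65 MHz pixel clock
    addrH : out unsigned(10 downto 0);   -- horizontal pixel counter (0..1343)
    addrV : out unsigned(9 downto 0)     -- vertical line counter (0..805)
  );
end entity;

architecture rtl of vga_counters is
  constant H_TOTAL : integer := 1344;    -- 1024 visible + 24 FP + 136 sync + 160 BP
  constant V_TOTAL : integer := 806;     -- 768 visible + 3 FP + 6 sync + 29 BP
  signal h_cnt : unsigned(10 downto 0) := (others => '0');
  signal v_cnt : unsigned(9 downto 0)  := (others => '0');
begin
  process (clk65)
  begin
    if rising_edge(clk65) then
      if h_cnt = H_TOTAL - 1 then        -- end of line: wrap and count one more line
        h_cnt <= (others => '0');
        if v_cnt = V_TOTAL - 1 then      -- end of frame: wrap the vertical counter
          v_cnt <= (others => '0');
        else
          v_cnt <= v_cnt + 1;
        end if;
      else
        h_cnt <= h_cnt + 1;
      end if;
    end if;
  end process;

  addrH <= h_cnt;
  addrV <= v_cnt;
end architecture;

The sync pulses themselves are then simple comparisons against these counters (for example, the horizontal pulse is active while the horizontal counter is inside the 136-clock sync interval).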
C. The img_gen module
The img_gen module performs multiple functions: storing the original image (which is to be processed) in a ROM memory, the actual processing, storing the processed image into a RAM memory, and displaying one of the edits (negative, binarization, grayscale). To display the image on the VGA monitor using the FPGA, the image must be stored in the FPGA's memory, so a ROM memory is used to hold the image data that is to be processed. After the image has been stored in the FPGA memory, it is processed, stored in a RAM memory, and then transmitted through the VGA port to the screen on which it will be displayed.
Two pictures will be displayed on the VGA screen: the original image and its processed version, depending on the user's selection. One can choose from a black and white image, a negative image and a grayscale image.
One approach to obtaining the black and white image is to compute the average of the three color components (red, green and blue) of a pixel and to compare it with a threshold. In this processing I considered thresholds of different values. The thresholding operation is performed by scanning the value of each pixel in the image to be processed and replacing the corresponding pixel with 0 (if the average of the three components is less than or equal to the threshold against which the comparison is made) or 1 (if the average of the three components is higher than the threshold used).
Replacing all pixels with '0' results in a completely black
image, and if all pixels are replaced by '1', the image will be
completely white.
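A minimal sketch of this thresholding step, assuming 4-bit color components and a threshold input; the names are chosen here for illustration, and comparing the sum of the components with three times the threshold is used as an equivalent way of comparing their average with the threshold without an explicit division by 3.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity binarize_pixel is
  port (
    r_in, g_in, b_in : in  std_logic_vector(3 downto 0);  -- original 4-bit components
    threshold        : in  unsigned(3 downto 0);           -- e.g. 5 or 10 (see the results)
    bw               : out std_logic                       -- '1' = white, '0' = black
  );
end entity;

architecture rtl of binarize_pixel is
  signal sum : unsigned(5 downto 0);   -- at most 3 * 15 = 45, which fits in 6 bits
begin
  sum <= resize(unsigned(r_in), 6) + resize(unsigned(g_in), 6) + resize(unsigned(b_in), 6);
  -- average > threshold  <=>  sum > 3 * threshold
  bw  <= '1' when to_integer(sum) > 3 * to_integer(threshold) else '0';
end architecture;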
To obtain the negative of the original image, each pixel value in the image is subtracted from 15 (the original image having 4 bits/color, this being a limitation of the DAC of the development board's VGA interface).
To obtain the grayscale image, I applied a weighted average operator, so that all pixels have equal intensity in RGB space; it is therefore necessary to specify a single intensity value for each pixel. To optimize the use of FPGA resources, equation (2) was implemented using only divisions by powers of 2 (the division is done by shifting to the right by as many positions as the power of 2 indicates). Thus, the factors that multiply each color component are approximated by the nearest power of 2. For the red component, R, the factor 0.299 is approximated by 0.25, which corresponds to a division by 4. Similarly, the factor 0.59 that multiplies the green component, G, is approximated by 0.5, i.e. a division by 2. For the blue component, B, the factor 0.114 is approximated by 0.125, requiring a division by 8.
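A minimal sketch of this approximation (Y ≈ R/4 + G/2 + B/8) on 4-bit components, with illustrative names; in the design the resulting value would then be written to all three color components of the processed pixel so that it appears gray on screen.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity grayscale_pixel is
  port (
    r_in, g_in, b_in : in  std_logic_vector(3 downto 0);  -- original 4-bit components
    y_out            : out std_logic_vector(3 downto 0)   -- approximated luminance
  );
end entity;

architecture rtl of grayscale_pixel is
  signal y : unsigned(3 downto 0);
begin
  -- Right shifts implement the divisions: R/4, G/2 and B/8.
  -- Worst case: 15/4 + 15/2 + 15/8 = 3 + 7 + 1 = 11, so 4 bits are enough.
  y <= shift_right(unsigned(r_in), 2)
     + shift_right(unsigned(g_in), 1)
     + shift_right(unsigned(b_in), 3);
  y_out <= std_logic_vector(y);
end architecture;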
IV. EXPERIMENTAL RESULTS
The application downloaded on the Nexys 4 loads a 400×300 pixel image into a defined and initialized ROM memory and processes it according to the user's selection. After processing, the original image along with the modified image is sent to the 1024×768 pixel monitor via the VGA driver.
The user selects the processed image that is to be displayed using three switches.

• if the switches are in the state "000", the original image will be displayed twice on the screen (Figure 5), as no processing is performed, this step verifying the correct operation of the RAM and of the display chain

Figure 5. The original image
• if the switches are in the state "001", the grayscale image will be displayed on screen next to the original image (Figure 6)

Figure 6. The original image and the grayscale image
To obtain the image in grayscale, the brightness of the three color components R, G and B is extracted, each component having a certain weight. Because the density of receptor cells in the human eye is not the same for the three colors (green density > red density > blue density), the theory requires the following coefficients: 0.299 for the red component, 0.59 for the green component and 0.11 for the blue component, the sum of the three coefficients being close to 1.
• if the switches are in the state “010” a black and white
image (Figure 7) will be shown on screen, using a
threshold equal to 5

Figure 7. The original image and the black and white image
using a threshold equal to 5
After averaging the color components, the result is compared to the threshold of 5 out of the maximum value of 15. For a luminance less than or equal to 5, a black pixel is displayed, and for a brightness greater than 5, a white pixel.
• if the switches are in the state “011” a black and white
image (Figure 8) will be shown on screen, using a
threshold equal to 10
Figure 8. The original image and the black and white image
using a threshold equal to 10
It can be seen that the image binarized with the threshold T = 5 has more shades of white than the image binarized with the threshold T = 10. When the threshold T is 10, the image has more black shades, because more pixels have an average brightness less than 10.
• if the switches are in the state “100”, the negative of the original image (Figure 9) will be shown next to it

Figure 9. The original image and the negative of the original
image
To obtain the negative image, each pixel value in the original image is subtracted from 15 (because we use 4 bits/color images).
• if the switches are in the state “101”, a red square will be displayed. To obtain the red square, the first 4 bits of each pixel (the red component) are set to '1', the other 8 bits corresponding to the green and blue colors being '0'.
• if the switches are in the state “110”, a green square will be displayed.
• if the switches are in the state “111”, a blue square will be displayed.
Similarly, to obtain the green square, the next 4 bits (bits 7, 6, 5 and 4), corresponding to the green component, are set to '1', the other 8 bits corresponding to the red and blue colors being '0'. To obtain the blue square, the last 4 bits (bits 3, 2, 1 and 0) are set to '1', and the bits corresponding to the other two colors to '0'. A sketch of this selection logic is given below.
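A minimal sketch of the selection performed on the three switches, assuming a 12-bit RGB output vector (bits 11 downto 8 for red, 7 downto 4 for green, 3 downto 0 for blue) and illustrative names for the processed pixel streams (they are not taken from the paper):

library ieee;
use ieee.std_logic_1164.all;

entity output_select is
  port (
    sw                     : in  std_logic_vector(2 downto 0);   -- the three user switches
    rgb_original, rgb_gray : in  std_logic_vector(11 downto 0);
    rgb_bw5, rgb_bw10      : in  std_logic_vector(11 downto 0);
    rgb_negative           : in  std_logic_vector(11 downto 0);
    rgb                    : out std_logic_vector(11 downto 0)   -- pixel sent to the VGA DAC
  );
end entity;

architecture rtl of output_select is
begin
  process (sw, rgb_original, rgb_gray, rgb_bw5, rgb_bw10, rgb_negative)
  begin
    case sw is
      when "000"  => rgb <= rgb_original;   -- original image (no processing)
      when "001"  => rgb <= rgb_gray;       -- grayscale image
      when "010"  => rgb <= rgb_bw5;        -- black and white, threshold 5
      when "011"  => rgb <= rgb_bw10;       -- black and white, threshold 10
      when "100"  => rgb <= rgb_negative;   -- negative image
      when "101"  => rgb <= x"F00";         -- red square: red bits '1', the rest '0'
      when "110"  => rgb <= x"0F0";         -- green square
      when others => rgb <= x"00F";         -- blue square (state "111")
    end case;
  end process;
end architecture;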
The system leaves room for other pixel-level processing; only the code has to be adjusted and the circuit resynthesized, the three squares being only a placeholder for possible later processing steps.
Prior to synthesis and implementation, the platform allows simulation of the project and, implicitly, visualization of all the signals in its composition. Since very high-frequency signals are simulated over a wide window, it is difficult to highlight all the details of low-frequency signals such as syncH, which has a negative pulse at the end of each frame (60 pulses/second compared to the 15.3 ns clock period).
In order to use the resolution of 1024×768 pixels, the reference clock frequency must be 65 MHz. Because the nominal frequency of the Nexys 4 is 100 MHz, it must be scaled by a factor of 0.65. To verify the correctness of the frequency divider, the reference clock period and the period of the clock signal generated by the
oscillator of the development board are measured. The two periods can be measured by means of two cursors each, the period of each signal being the difference between the two measurements made by the cursors placed on the respective signal. In Figure 10 it can be seen that the clock period of the board, CK, is equal to 10 ns (19,000 ns − 18,990 ns), which translates to 100 MHz, and the period of the reference clock signal, CKpix, is equal to 15.3 ns (18,962.78 ns − 18,947.48 ns), a period that translates to approximately 65.3 MHz. Therefore the division was done correctly.

Figure 10. Frequency divider
Figure 11 highlights the horizontal sync signal, syncH, and the line pixel counter. As can be seen, addrH increments on each CKpix cycle, and when it reaches pixel 1343 (a horizontal line of 1344 pixels), it resets to 0, thus starting a new line. Counting runs from 0 to 1343, covering the 1344 pixel clocks of a complete line.

Figure 11. The signal of a horizontal line

As can be seen in Figure 12, the vertical address, addrV, increments once every 1184 pixels of the horizontal line (1024 visible pixels per line + 24 pixels of horizontal front porch + 136 pixels of sync pulse), on the positive edge of syncH.

Figure 12. The signal of a vertical line

Figure 13 shows the color signals read from the ROM memory, the reading being dependent on the horizontal and vertical addresses of the current pixel.

Figure 13. The color signals read as hexadecimal values from the ROM memory
A. Resources used
The application downloaded on the FPGA obviously consumes resources. The amount of resources consumed by the entire application is calculated automatically and displayed as a chart and a table, which can be viewed after running the flow. The resource usage is reported in terms of the number of LookUp Tables (LUT), flip-flops (FF), occupied Block RAMs (BRAM), DSP blocks, BUFGs and the percentage of I/O pins used.
The resource usage is reported both for the synthesis of the model and for the implementation. Because the implementation performs an optimization of the entire design, the values may differ between the two cases.
Figure 14 shows the graph of resources used post-synthesis, and Figure 15 the graph of resources used post-implementation. As can be seen in Figure 15, of the total of 63,400 LUTs, only 0.87% are used. In the case of FFs, only 0.09% of the 126,800 available are used. Of the 210 available I/O pins, only 19, or 9.05% of the total, are used. Also, 6.25% of the BUFGs and 16.67% of the MMCMs are used. Block RAM is occupied in the highest proportion, namely 91.11%.

Figure 14. Resources used post-synthesis


Figure 15. Resources used post-implementation
The BRAM memory is almost entirely occupied because its approximately 5 Mbits hold three 400×300 images, namely: the original image, a temporarily stored copy of the original image and the processed image. The I/O ports used are CK, R, the 3 VGA color ports, addrH, addrV, syncH and syncV. According to Figure 15, 653 LUTs and 118 flip-flops are used as a result of the implementation.
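As a rough cross-check (assuming 12 bits per pixel, i.e. 4 bits per color): one 400×300 image occupies 400 × 300 × 12 = 1.44 Mbit, so the three images occupy about 4.32 Mbit. Mapped onto 36-Kbit block RAMs, this corresponds to roughly 120–123 of the 135 blocks available on the Artix-7 XC7A100T used by the Nexys 4, which is consistent with the reported 91.11% BRAM utilization.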
V. CONCLUSIONS
In conclusion, it can be said that this image processing application successfully highlights the features and advantages of an FPGA circuit, especially due to the high processing speed, but also due to the low cost of the application: to implement it, the user only needs a Nexys 4 development board, a VGA monitor and a PC.
Following the experimental results, we can conclude that the developed application reproduces the functionality of an image processing block; in addition, the presence of the switches makes its use considerably easier, the user being able to select which processed image is displayed. The number of processing operations can also be extended, the application allowing this.
This paper is a starting point for a further deepening of image
processing techniques using FPGAs, techniques that are
increasingly used. Because this application has a high degree of
flexibility, we can include research directions such as:
• facial recognition
• fingerprint reconstruction
• systems based on neural networks
REFERENCES

[1] D. M. Harvey, S. P. Kshirsagar, and C. Hobson, “Low Cost Scaleable Parallel Image Processing System,” Microprocessors and Microsystems, vol. 25, pp. 143–157, May 2001.
[2] Muthukumar K. & Daggu V., "Image Processing Algorithms on Reconfigurable Architecture Using Handel-C", Journal of Engineering and Applied Sciences, Volume 1, Number 2, pp. 103–111, 2006.
[3] Jain, A. K., Fundamentals of Digital Image Processing, Prentice-Hall, London, 1989.
[4] A. Fijany and F. Hosseini, “Image Processing Applications on a Low Power Highly Parallel SIMD Architecture,” in Aerospace Conference, 2011 IEEE, pp. 1–12, March 2011.
[5] D. Crookes, “Architectures for High Performance Image Processing: The Future,” Journal of Systems Architecture, vol. 45, no. 10, pp. 739–748, 1999.
[6] Sparsh Mittal, S. G. (2008). FPGA: An Efficient and Promising Platform for Real-Time Image Processing Applications. Proceedings of the National Conference on Research and Development in Hardware & Systems, 4.
[7] Raman Maini and Himanshu Aggarwal, A Comprehensive Review of Image Enhancement Techniques, Journal of Computing, Volume 2, Issue 3, March 2010, ISSN 2151-9617.
[8] Raman Maini and Himanshu Aggarwal, A Comprehensive Review of Image Enhancement Techniques, Journal of Computing, Volume 2, Issue 3, March 2010, ISSN 2151-9617.
[9] Mark Grundland, Neil A. Dodgson, Decolorize: Fast, Contrast Enhancing, Color to Grayscale Conversion, 40(11), 2007, pp. 2891–2896.
[10] M. Sezgin, B. Sankur, “Survey over image thresholding techniques and quantitative performance evaluation”, Journal of Electronic Imaging 13 (1) (2004) 146–168.
[11] M. I. Sezan, “A peak detection algorithm and its application to histogram-based image data reduction”, Computer Vision, Graphics, and Image Processing 49 (1) (1990) 36–51.
[12] A. Rosenfeld, P. de la Torre, “Histogram concavity analysis as an aid in threshold selection”, IEEE Transactions on Systems, Man, and Cybernetics 13 (1983) 231–235.
