Digital Image Processing

CHAPTER 1

INTRODUCTION

Digital image processing comprises three words: digital, image, and processing. An image is a two-dimensional function f(x, y), where x and y are the spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. A digital image is composed of a finite number of elements called pixels, each of which has a particular location and value. Digital image processing refers to the processing of digital images by means of a digital computer. [1]

1.1 Image Processing

Image processing is a method of converting an image into digital form and performing operations on it in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or a set of characteristics associated with that image. An image processing system usually treats images as two-dimensional signals and applies established signal-processing methods to them. Image processing is among the most rapidly growing technologies today, with applications in many aspects of business, and it forms a core research area within the engineering and computer science disciplines.

In imaging science, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. Image processing usually refers to digital image processing, but optical and analog image processing are also possible; this chapter concerns general techniques that apply to all of them. The acquisition of images is referred to as imaging.

Image processing refers to the processing of a two-dimensional picture by a computer. An image defined in the “real world” is considered to be a function of two real variables, for example a(x, y), with a as the amplitude (e.g., brightness) of the image at the real coordinate position (x, y). Modern digital technology has made it possible to manipulate multi-dimensional signals with systems that range from simple digital circuits to advanced parallel computers.

An image may be considered to contain sub-images, sometimes referred to as regions of interest (ROIs), or simply regions. This concept reflects the fact that images frequently contain collections of objects, each of which can be the basis for a region. In a sophisticated image processing system it should be possible to apply specific image processing operations to selected regions. Thus one part of an image might be processed to suppress motion blur while another part might be processed to improve color rendition. The typical sequence of image processing is as follows.

The most basic requirement for image processing is that the images be available in digitized form, that is, as arrays of finite-length binary words. For digitization, the given image is sampled on a discrete grid and each sample, or pixel, is quantized using a finite number of bits. The digitized image is then processed by a computer. To display a digital image, it is first converted back into an analog signal, which is scanned onto a display.
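As a concrete illustration, the short Python sketch below (an illustrative example, not part of any cited system; it assumes NumPy is available) quantizes a smooth intensity ramp to a finite number of gray levels; sampling is implicit in the discrete grid on which the ramp is generated.

```python
import numpy as np

# Minimal sketch of quantization: map continuous intensities in [0, 1]
# to a finite number of discrete levels (here 2**bits).
def quantize(image, bits=8):
    levels = 2 ** bits
    # Scale to [0, levels-1], round to the nearest integer level,
    # and store in a finite-length binary word (uint8 for bits <= 8).
    return np.clip(np.round(image * (levels - 1)), 0, levels - 1).astype(np.uint8)

# A toy "analog" image: a smooth horizontal intensity ramp,
# already sampled on a 64 x 256 discrete grid.
analog = np.tile(np.linspace(0.0, 1.0, 256), (64, 1))
digital = quantize(analog, bits=4)   # 16 gray levels
print(digital.min(), digital.max())  # 0 15
```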

Closely related to image processing are computer graphics and computer vision. In computer graphics, images are manually made from physical models of objects, environments, and lighting, instead of being acquired from natural scenes, as in most animated movies. Computer vision, on the other hand, is often considered high-level image processing, in which a machine, computer, or software intends to decipher the physical contents of an image or a sequence of images.

In modern science and technology, images also take on a much broader scope, owing to the ever-growing importance of scientific visualization. Examples include microarray data in genetic research and real-time multi-asset portfolio trading in finance.

Before an image is processed, it is converted into digital form. Digitization includes sampling of the image and quantization of the sampled values. After the image has been converted into bit information, processing is performed. The processing technique may be image enhancement, image restoration, or image compression.

Image enhancement:

It refers to the accentuation, or sharpening, of image features such as boundaries or contrast to make a graphic display more useful for display and analysis. This process does not increase the inherent information content of the data. It includes gray-level and contrast manipulation, noise reduction, edge crispening and sharpening, filtering, interpolation and magnification, pseudo-coloring, and so on.
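A minimal example of one such enhancement operation, linear contrast stretching, is sketched below in Python; the function name and test image are illustrative assumptions, not taken from the references.

```python
import numpy as np

# Minimal sketch of linear contrast stretching: rescale the occupied
# gray-level range [lo, hi] to the full display range [0, 255].
def contrast_stretch(img):
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return img.copy()
    out = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)

# A low-contrast test image whose values occupy only [100, 139].
low_contrast = np.random.randint(100, 140, size=(64, 64), dtype=np.uint8)
enhanced = contrast_stretch(low_contrast)
print(enhanced.min(), enhanced.max())  # 0 255
```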

Image restoration:

It is concerned with filtering the observed image to minimize the effect of degradations. The effectiveness of image restoration depends on the extent and accuracy of the knowledge of the degradation process as well as on the filter design. Image restoration differs from image enhancement in that the latter is concerned with the extraction or accentuation of image features.

Image compression:

It is concerned with minimizing the number of bits required to represent an image. Applications of compression include broadcast TV, remote sensing via satellite, military communication via aircraft, radar, teleconferencing, facsimile transmission, educational and business documents, medical images arising in computed tomography, magnetic resonance imaging, and digital radiology, motion pictures, satellite images, weather maps, geological surveys, and so on.

Text compression – CCITT Group 3 & Group 4

Still image compression – JPEG

Video compression – MPEG

Image processing basically includes the following three steps.

1. Importing the image with an optical scanner or by digital photography.

2. Analyzing and manipulating the image, which includes data compression, image enhancement, and spotting patterns that are not visible to the human eye, as in satellite photographs.

3. Output, the last stage, in which the result can be an altered image or a report based on the image analysis.

1.1.1. Purpose of Image Processing

The purpose of image processing can be divided into five groups:

1. Visualization – Observe objects that are not visible.

2. Image sharpening and restoration – Create a better image.

3. Image retrieval – Search for an image of interest.

4. Measurement of pattern – Measure various objects in an image.

5. Image recognition – Distinguish the objects in an image.

Image processing, then, is a process that converts an image into digital form and performs operations on it to obtain an enhanced image or to extract useful information from it; it is the study of any algorithm that takes an image as input and returns an image as output. It is a form of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. An image processing system treats images as two-dimensional signals and applies sets of signal-processing methods to them.

The acquisition of images is referred to as imaging. Image processing is also known as digital image processing.

Optical and analog image processing are also possible. There are several related fields: computer graphics, where images are created; image processing, where images are manipulated and enhanced; and computer vision, where images are analyzed.

Image processing can also be defined as the discipline in which both the input and the output are images. An image processing operation defines a new image y in terms of an existing image x. An image can be transformed in two ways:

Domain Transformation

Range Transformation
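The sketch below illustrates the distinction with two hypothetical one-liners (the array and gamma value are assumptions chosen only for demonstration): a horizontal flip changes only the pixel coordinates (a domain transformation), while a gamma correction changes only the pixel values (a range transformation).

```python
import numpy as np

img = np.arange(16, dtype=np.uint8).reshape(4, 4)

# Domain transformation: the coordinates change, the values do not.
# Here, a horizontal flip maps (x, y) -> (x, width - 1 - y).
flipped = img[:, ::-1]

# Range transformation: the coordinates stay fixed, the values change.
# Here, a gamma correction y = 255 * (x / 255) ** gamma.
gamma = 0.5
corrected = (255.0 * (img / 255.0) ** gamma).astype(np.uint8)
```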

1.1.2. Fundamental Steps of Digital Image Processing:

The fundamental steps used in digital image processing are given below:

Fig. 1.1: Fundamental Steps of Digital Image Processing

The fundamental steps are described as follows. The problem domain provides the input to image acquisition.

Image Acquisition: The image is captured by a sensor and, if it is in analog form, digitized with the help of an analog-to-digital converter.

Preprocessing: Preprocessing, which includes image enhancement and restoration, is performed before segmentation. It brings the image into a form from which the later stages can extract components and represent shapes properly.

Segmentation: Segmentation partitions the image into its constituent parts or objects. Segmentation stops when the objects of interest in an application have been isolated.

a) Representation: This step follows the output of segmentation. In representation, a decision is made as to which data should be used: the boundary or the complete region.

Boundary representation describes external characteristics, such as corners.

Complete (regional) representation describes internal characteristics, such as texture.

Representation transforms raw data into a form suitable for processing.

b) Description: It describes the features of the selected representation in the form of attributes.

Recognition and Interpretation: This step assigns labels to objects based on information provided by their descriptors.

Knowledge Base: Knowledge about the problem domain is coded into the image processing system in the form of a knowledge database.
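As a rough illustration of the early stages of this pipeline, the following Python sketch (using NumPy and SciPy; the synthetic image, smoothing sigma, and threshold are assumptions for demonstration only) runs acquisition, preprocessing, and segmentation on a toy image, then labels the segmented objects as a simple stand-in for representation.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# "Acquired" image: a bright square object on a dark background plus noise.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0
image += rng.normal(0.0, 0.2, image.shape)

# Preprocessing: Gaussian smoothing to suppress the acquisition noise.
smooth = ndimage.gaussian_filter(image, sigma=1.5)

# Segmentation: threshold midway between background and object levels.
segmented = smooth > 0.5

# Representation/description: label connected objects and count them.
labels, num_objects = ndimage.label(segmented)
print(num_objects)  # expect 1 object
```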

1.1.3. Components of the Image Processing System:

The main components of an image processing system are as follows:

1. Image Sensors

2. Specialized image processing hardware

3. Computers

4. Image processing software

5. Mass storage capability

6. Image Display

7. Hardcopy devices

8. Networking

Fig. 1.2: Components of Image Processing System

Figure 1.2 shows the components of an image processing system. Their description is as follows:

Image Sensors: Two kinds of devices are needed to acquire a digital image. The first is a physical device that is sensitive to the radiant energy from the object to be imaged. The second, called a digitizer, converts the output of the physical device into digital form.

Specialized Image Processing Hardware: This is the front-end subsystem. It usually consists of the digitizer plus hardware that performs primitive arithmetic and logical operations in parallel, providing the fast data processing that the main computer cannot handle.

Computer: The computer in an image processing system is a general-purpose machine and can range from a PC to a supercomputer, depending on the level of performance the application requires.

Image Processing Software: The software performs the specialized image processing tasks. It typically consists of specialized modules, together with a facility for writing code to perform particular tasks.

Mass Storage Capability: The storage capacity required is determined by the size of the images in pixels. Three categories of storage are used: short-term storage during processing, on-line storage for fast recall, and archival storage for infrequent access.

Image Displays: The displays used are mainly color TV monitors, driven by a graphics card that is an internal part of the computer. The monitor displays the output image.

Hardcopy Devices: These are used for recording images and include cameras, printers, and digital units such as inkjet devices.

Networking: The main problem in networking for digital image processing is transmission bandwidth, because of the large amount of data involved. Communication with remote sites via the Internet is not always efficient.

1.1.4. Advantages of Digital Image Processing:

A few advantages of digital image processing are as follows:

1. Digital images are easy to manipulate.

2. They can easily be divided into pieces.

3. They permit compact storage.

4. A wide range of algorithms can be applied to them easily.

1.1.5. Disadvantages of Digital Image Processing:

A few disadvantages are as follows:

It is a very expensive process.

Its cost depends upon the number of detectors used.

It is difficult to handle.

It is a time-consuming process.

There is a lack of trained professionals.

1.1.6. Applications of Digital Image Processing:

Some applications of digital image processing are as follow:

Agricultural: Digital image processing is widely used in agriculture, for example in harvest control, fruit grading, seeding, and food picking. In fruit grading, fresh fruits are sorted and graded with the help of image processing and fuzzy-logic analysis. Image processing is also applied to weed detection. X-ray techniques are used for the measurement of irrigated land.

Communication: Image processing is also used in the field of communication, for example in telecom, video conferencing, and compression. In video conferencing it is used for face detection, and coding is used for the interpretation of verbal communication.

Character Recognition: With the help of image processing, handwritten and printed documents can be recognized easily.

Commercial: In the commercial area it is used in the banking sector for signature verification and bar coding. By identifying signatures and barcodes, authenticity can be checked easily.

Medical: In the medical field, images give information about the function and shape of human body organs for diagnosis. Image processing is used, for example, in head radiography and chest radiography.

Visual Inspection: This area has three related categories: computer vision, machine vision, and visual inspection. Computer vision combines artificial intelligence and image processing and is based upon the analysis of one or more images; its output is an interpretation of the image content. Machine vision automatically interprets images of a scene in order to locate and identify objects.

1.1.7. Thinning: Thinning is an image processing operation in which binary-valued image regions are reduced to lines that approximate the center skeletons of the regions. For each image region it is usually required that the lines of the thinned result be connected, so that they can be used to infer the shape and topology of the original image. A main motivation for thinning is to simplify higher-level analysis and recognition in the preprocessing stage for applications such as optical character recognition, fingerprint analysis, diagram understanding, and feature detection for computer vision [5]. The skeleton of a binary image is an integral representation for shape analysis and is useful for many pattern recognition applications. The skeleton of an object is a line connecting points midway between the boundaries [3]. Thinning techniques have been applied in many fields such as automated industrial inspection, pattern recognition, biological shape description, and image coding. The main objectives of thinning are to improve efficiency and to reduce transmission time; the skeleton can be thought of as the bone of an image [5].

One of the fundamental requirements is to represent the structural shape of regions in a digital image. This can be done by reducing a region to a graph, and the reduction may be accomplished by obtaining the skeleton of the region via skeletonization, also known as thinning. Skeletonization is the process of extracting skeletons from an object in a digital image. It is a morphological operation that deletes black (foreground) pixels iteratively, layer by layer, until a skeleton of one-pixel width is obtained. Skeletonization is essentially a “pre-processing” step used in many image analysis techniques [2]. It is a process of reducing an object in a digital image to the minimum size necessary for machine recognition of that object [3]. Skeletonization is usually applied to binary images, which consist of black (foreground) and white (background) pixels. It takes a binary image as input and produces another binary image as output, as shown in Fig. 1:

Fig. 1 General concept of skeletonization

Skeletonization has been used in a wide variety of applications, such as optical character recognition (OCR) [2,5], pattern recognition [3], fingerprint classification [4], biometric authentication [5], signature verification [5], and medical imaging [4].
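For orientation, the following minimal Python sketch shows skeletonization in practice using scikit-image's skeletonize function, one common off-the-shelf implementation rather than the algorithm proposed later in this work; the test shape is an assumption for illustration.

```python
import numpy as np
from skimage.morphology import skeletonize

# skeletonize takes a binary image (foreground True) and returns a
# binary image whose foreground is reduced to one-pixel width.
blob = np.zeros((40, 40), dtype=bool)
blob[10:30, 15:25] = True          # a thick vertical bar

skeleton = skeletonize(blob)
print(blob.sum(), skeleton.sum())  # many foreground pixels -> a thin line
```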

1.2 Skeletonization algorithms

All skeletonization algorithms can be classified into two broad categories:

1) Iterative thinning algorithm [3]

2) Non-iterative thinning algorithm [3]

1. Iterative (pixel-based): These thinning algorithms produce a skeleton by examining and deleting contour pixels through an iterative process, in either a sequential or a parallel way [3]. Sequential thinning algorithms examine the contour pixels of an object in a predetermined order, accomplished either by raster scanning or by following the contour of the image. In parallel thinning algorithms, pixels are deleted on the basis of results obtained only from the previous iteration; hence parallel thinning algorithms are suitable for implementation on parallel processors [3].

a) Sequential thinning: A sequential algorithm inspects the contour points of an object in a predetermined order, accomplished either by raster scanning or by contour following.

b) Parallel thinning: In this type of algorithm, pixels are inspected for deletion on the basis of the results of the previous iteration only.

2. Non-iterative (non-pixel-based): Non-iterative thinning is not based on examining individual pixels. Without examining all individual pixels, these algorithms produce a median or center line of the pattern directly, in one pass. Popular non-pixel-based methods include medial axis transforms, distance transforms, and determination of centerlines by line following. Medial axis transforms often use gray-level images in which pixel intensity represents the distance to the boundary of the object [3]. Distance-transform-based methods compute the distance to the image background for each object pixel and use this information to determine which pixels are part of the skeleton.
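A small Python sketch of the distance-transform idea follows, using SciPy's Euclidean distance transform and scikit-image's medial_axis as stand-ins for the methods discussed above; the rectangular test shape is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import medial_axis

# Each object pixel stores its distance to the background; ridge pixels
# of this map form the medial axis, with no pixel-by-pixel peeling.
shape = np.zeros((40, 60), dtype=bool)
shape[10:30, 10:50] = True

dist = distance_transform_edt(shape)            # distance to background
skel, dist2 = medial_axis(shape, return_distance=True)

# dist2[skel] gives the local half-width along the skeleton, which is
# enough information to approximately reconstruct the original shape.
print(dist.max(), skel.sum())
```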

1.3 Need of skeletonization

Skeletonization is an important step in many image processing applications, such as pattern recognition [2], optical character recognition [2,5], and fingerprint classification [4]; therefore, it is an active area of research. There is always a need for good skeletonization algorithms with respect to the following requirements:

To reduce the amount of data required to be processed, since less data takes less time to process [4].

To reduce processing time.

Extraction of critical features such as end-points, junction-points, and connection among the components is helpful in many applications [4].

By reducing an object to only a skeleton, unimportant features and image noise can be filtered out.

Skeletonization is commonly used for higher-level analysis and recognition in applications such as diagram understanding, optical character recognition (OCR) [2,5], feature detection, and fingerprint analysis [4].

1.4 Applications of Skeletonization

Skeletonization has been used for a wide variety of image processing applications, such as:

Optical character recognition (OCR) [2,5]

Pattern recognition [3]

Fingerprint classification [4]

Biometric authentication [5]

Signature verification [5]

Medical imaging [4]

1.5 Introduction to Neural networks

Neural networks are composed of simple elements, called neurons, operating in parallel; a simple neuron is shown in Fig. 2. The neural network approach is inspired by biological nervous systems. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly.

Commonly, neural networks are adjusted, or trained, so that a particular input leads to a specific target output. We can train a neural network to perform a particular function by adjusting the values of the connections (weights) between elements. Such a situation is shown in Fig. 3: the network is adjusted, based on a comparison of the output and the target, until the network output matches the target. Typically, many such input/target pairs are used in this supervised learning to train a network [7].

Fig 2: A simple neuron [6]

Fig 3. Scenario of neural networks [7]

1.5.1 MODEL OF A NEURON:

A neuron is an information-processing unit that is fundamental to the operation of a neural network. The three basic elements of the neuron model are:

Synaptic Weights

Linear Combiner

Activation function

The brief study of these elements is given below:

Synaptic Weights: A set of synapses, each characterized by a weight or strength of its own; these are also known as connecting links.

Linear Combiner: An adder for summing the input signals, weighted by the respective synapses of the neuron.

Activation Function: A function for limiting the amplitude of the output of the neuron; it is sometimes called a squashing function [24].

Figure 4: Non Linear Model of Neuron
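A minimal numerical sketch of this neuron model follows in Python; the sigmoid squashing function and the weight, bias, and input values are illustrative assumptions.

```python
import numpy as np

# Nonlinear neuron model: synaptic weights, a linear combiner (adder),
# and a squashing activation function.
def neuron(x, w, b):
    v = np.dot(w, x) + b             # linear combiner plus bias
    return 1.0 / (1.0 + np.exp(-v))  # sigmoid squashing function

x = np.array([0.5, -1.0, 2.0])       # input signals
w = np.array([0.4, 0.6, -0.2])       # synaptic weights
print(neuron(x, w, b=0.1))
```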

1.6 NETWORK ARCHITECTURE

The manner in which the neurons of a neural network are structured is intimately linked with the learning algorithm used to train the network; the learning algorithm is therefore taken into account in the design of the network architecture. In general there are different classes of network architecture:

Single Layer Feed Forward Networks

Multilayer Feed Forward Networks

1.6.1 Single Layer Feed Forward Networks: Figure 5 shows a single-layer feedforward network. In a layered network, the neurons are organized in the form of layers. An input layer of source nodes projects directly onto an output layer of neurons, but not vice versa; the network is therefore acyclic, or of the feedforward type. No hidden layer is present. It is called a single-layer feedforward network because the layer of source nodes is not counted: no computation is performed there, so only the output layer of neurons counts.

Figure 5: Feedforward (acyclic) network with a single layer of neurons

1.6.2 Multilayer Feed Forward Networks: In the second class of feedforward networks, one or more hidden layers are present, whose computation nodes are correspondingly called hidden neurons or hidden units. By adding one or more hidden layers, the network is able to extract higher-order statistics. The source nodes constitute the input layer, and their signals are applied to the computation nodes of the second layer (the first hidden layer).

Figure 6: Fully connected feedforward network with one hidden layer
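The following Python sketch illustrates the forward pass of such a network; the 3-4-2 layer sizes and random weights are arbitrary assumptions chosen only for demonstration.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Forward pass of a 3-4-2 feedforward network: an input layer of source
# nodes, one hidden layer of computation nodes, and an output layer.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # hidden -> output

x = np.array([0.2, -0.5, 1.0])    # signals from the source nodes
h = sigmoid(W1 @ x + b1)          # hidden neurons (hidden units)
y = sigmoid(W2 @ h + b2)          # output layer
print(y)
```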

1.7 ADVANTAGES OF NEURAL NETWORK

There are many advantages of neural network which are given below:

A neural network can perform tasks that a linear program cannot.

When an element of the neural network fails, it can continue to work because of its parallel nature.

A neural network learns and does not need to be reprogrammed.

It can be implemented in any application.

It can be implemented without any major problem.

1.7.1 DISADVANTAGES OF NEURAL NETWORK

There are some disadvantages of neural networks which are discussed as following:

Neural networks need training to operate.

The architecture of a neural network is different from the architecture of a microprocessor, so it needs to be emulated.

Large neural networks require high processing time. [23]

1.8 APPLICATIONS OF NEURAL NETWORKS

Table 1.1: Applications of neural networks

1.9 LEARNING

Learning is a process by which the free parameters of a neural network are adapted through a process of stimulation by the environment in which the network is embedded. Different learning algorithms are used for different types of learning.

Table 1.2: Different types of learning

Figure 7: Block Diagram of neural network with error correction
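A toy Python sketch of error-correction learning is given below; it uses the classical Widrow-Hoff (LMS) update on a single linear neuron, with the logical OR function as an assumed example task rather than any experiment from this work.

```python
import numpy as np

# Error-correction (Widrow-Hoff / LMS) learning for a single linear
# neuron: weights are adjusted in proportion to the error signal
# e = target - output, as in the block diagram above.
rng = np.random.default_rng(1)
w, b, eta = rng.normal(size=2), 0.0, 0.1

# Toy supervised task: learn the OR function from input/target pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 1.0])

for epoch in range(500):
    for x, t in zip(X, T):
        y = w @ x + b            # linear combiner output
        e = t - y                # error signal drives the correction
        w += eta * e * x         # LMS weight update
        b += eta * e

print(np.round(X @ w + b))       # approximately [0. 1. 1. 1.]
```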

1.10 Neural networks for skeletonization

Neural networks have a remarkable ability to derive meaning from complicated data; they can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" at analysing the information it has been trained on.

The thinning problem requires two tasks to be implemented: (a) peeling the thick pixels off, and (b) stopping the peeling process when the pattern width reduces to exactly one pixel. The first can be achieved with relative ease. The main difficulty arises in the second part, because the stopping decision must be made automatically. This can be achieved by training a real-time cellular neural network. Most conventional thinning approaches suffer from noise sensitivity and rotation dependency; with the use of neural networks, thinning can be made invariant under arbitrary rotations. [8]

A neural network takes a different approach from conventional thinning. Conventional thinning is algorithmic: the computer follows a specific set of instructions or conditions in order to delete the unwanted pixels, and unless the specific steps the computer needs to follow are known, the pixels cannot be deleted. That restricts the problem-solving capability of conventional thinning. A neural network, by contrast, processes information the way the human brain does: neural networks learn by example and can be trained to perform thinning operations effectively. They reduce the number of instructions to be executed, and hence take less execution time and are faster than conventional thinning approaches.
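To make this idea concrete, the sketch below trains a small multilayer perceptron (scikit-learn's MLPClassifier) to reproduce the deletion rule of one Zhang-Suen sub-iteration from all 256 possible 3x3 neighbourhoods. This is a toy illustration of learning a thinning decision by example; it is not the method of [8] nor the technique proposed in this work.

```python
import numpy as np
from itertools import product
from sklearn.neural_network import MLPClassifier

def zhang_suen_delete(n):
    # n = (P2..P9), the 8 neighbours of a foreground pixel in circular
    # order; this is the deletion test of Zhang-Suen sub-iteration 1.
    b = sum(n)
    seq = list(n) + [n[0]]
    a = sum(seq[i] == 0 and seq[i + 1] == 1 for i in range(8))
    p2, p4, p6, p8 = n[0], n[2], n[4], n[6]
    return 2 <= b <= 6 and a == 1 and p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0

X = np.array(list(product([0, 1], repeat=8)))        # all 256 neighbourhoods
y = np.array([zhang_suen_delete(tuple(n)) for n in X])

# Learn "delete this contour pixel?" by example instead of by rule.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```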

An important aspect of a skeletonization algorithm is noise immunity. With the use of neural networks we can handle both types of noise in images, i.e., boundary noise and object noise, and make the algorithms more robust, which is not possible with conventional thinning techniques. [9]

CHAPTER 2

LITERATURE SURVEY

Skeletonization is the process of extracting skeletons from an object in a digital image. It is a morphological operation that deletes black (foreground) pixels iteratively, layer by layer, until a skeleton of one-pixel width is obtained; it is essentially a “pre-processing” step used in many image analysis techniques [2]. The present chapter reviews some papers on skeletonization.

In [2] the author proposes a new skeletonization algorithm that combines sequential and parallel approaches, and thus falls under the iterative category. The algorithm is conducted in three stages: the first two stages extract the skeleton, and the third optimizes the skeleton to one-pixel width. Experimental results show that the proposed algorithm produces better results than previous skeletonization algorithms.

In [3] the author proposes two new iterative algorithms for thinning binary images. In the first algorithm, thinning is done using two operations: edge detection and subtraction. The second algorithm is based on repeatedly deleting pixels until a one-pixel-thick pattern is obtained; erosion conditions are devised to ensure that connectivity is preserved. Experimental results show that the edge-based iterative thinning algorithm is time-consuming compared with the optimized skeletonization algorithm.

In [4] the author discusses a wide range of skeletonization algorithms for binary images, including pixel-based and non-pixel-based deletion methods. The algorithms are discussed in detail and the relationships between them are explored. Skeletons obtained from the various skeletonization algorithms are compared on the basis of subjective and objective criteria.

In [5] the author introduces a framework for making thinning algorithms robust against noise in sketch images. The framework estimates the optimal filtering scale automatically and adaptively to the input image. Experimental results show that this framework is robust against the typical types of noise that exist in sketch images, mainly contour noise and scratches.

In [10] the author performs thinning of binary images by repeating two sub-iterations: one deletes the south-east boundary points and the north-west corner points, while the other deletes the north-west boundary points and the south-east corner points. Points are deleted according to a specific set of rules, and the two sub-iterations are repeated until no more points satisfy the deletion rules.

In [11] the author proposes a new sequential algorithm that uses a flag map and a bitmap simultaneously to decide whether a boundary pixel should be deleted. Three performance criteria are proposed for comparing the proposed algorithm with other algorithms. Experimental results show that the skeletons produced by the proposed sequential algorithm are not only one pixel thick, perfectly connected, and well defined, but also immune to noise.

In [12] the author presents a novel rule-based system for skeletonization. A formal mathematical derivation shows how the central lines are obtained and how the shape of the symbol remains connected. Experimental results are presented on symbols, characters, and letters written in different languages, and on rotated, flipped, and noisy symbols. The results show that the developed method is effective and fast and can thin any symbol in any language, irrespective of the direction of rotation.

In [13] the author presents an algorithm to overcome deficiencies in the algorithm presented by Ahmed and Ward [12]. The author shows examples where that algorithm fails on two-pixel-wide lines and proposes a modified method, based on graph connectivity, that corrects this shortcoming.

In [14] the author addresses several closely related aspects of skeletonization and discusses a wide range of skeletonization algorithms for binary images. The author proposes a new skeletonization algorithm, K3M, which exhibits very interesting properties in terms of processing quality and algorithmic clarity, enriched with examples.

In [15] the author describes a new two-pass parallel algorithm for binary images. The algorithm thins the image to one-pixel width while preserving the connectivity of components, including 8-neighbour connectivity. The proposed algorithm shows better performance in terms of connectivity and one-pixel thickness, and produces higher-quality skeletons than previous skeletonization algorithms.

In [16] the author presents two algorithms that use two sub-iterations: (1) alternately deleting north and east and then south and west boundary pixels, and (2) alternately applying a thinning operator to one of two subfields. Both approaches produce very thin medial curves, and the second achieves the fastest overall parallel thinning.

In [17] the author focuses on performance measurements of different thinning algorithms used mainly for digitizing maps with road infrastructure. Three criteria for measuring the performance of thinning algorithms are presented. The results show that the Z-S algorithm preserves connectivity and performs well on the NS measurement, but its thinning rate is not good for the vectorization of road maps, and probably for all tasks that depend on a thin skeleton.

In [18] the author proposes a robust parallel thinning algorithm that preserves the connectivity of the binarized fingerprint image while making the skeleton one pixel wide and extremely close to the medial axis. The proposed thinning method repeats three sub-iterations. Results show that it produces better skeletons than previous algorithms.

In [19] the author describes a novel scheme for thinning using a connected-component approach. The proposed algorithm measures the value of the connected components, which makes it automatic and removes the need for human interaction, in contrast to existing algorithms. It is independent of shape and font and does not require any preprocessing. Experimental results demonstrate this advantage: because the algorithm is automatic and requires no human interaction, it improves on the existing algorithms.

2.1 SKELETONIZATION WITH NEURAL NETWORKS

Neural networks have been used to perform various important image processing tasks such as edge detection, segmentation, feature extraction, and pattern recognition. The present chapter reviews some papers on skeletonization with neural networks:

In [20] the author proposes a thinning algorithm based on clustering the image data. The algorithm employs the ART2 network, which is a self-organizing neural network. The skeleton is generated by plotting the cluster centres and connecting adjacent clusters by straight lines.

In [21] the author proposes a new thinning algorithm for a class of binary images using two PCNNs (pulse-coupled neural networks). The pulse-meeting criterion and the stopping criterion are given, and the determination of the PCNNs' parameters is also described.

In [22] the author presents a coarse-to-fine binary image thinning algorithm based on a proposed template-based pulse-coupled neural network model. Experimental results show that the proposed algorithm is fast in terms of thinning speed and can achieve a high thinning rate (TR).

In [23] the author focuses on extracting features obtained by a binarization technique for the recognition of handwritten characters of the English language. Recognition of the handwritten character images is done using a multi-layered feedforward artificial neural network as a classifier.

In [24] the author presents a new application of a neural network model to the thinning of handwritten-numeral binary patterns. Skeletons are produced by applying iterative and non-iterative techniques. The proposed method requires a fixed number of points to be linked to construct skeletons, and the final cluster centers are determined by a method called the self-organizing feature graph (SOG), implemented as a neural network in order to exploit distributed processing systems. Experimental results show that the method produces good skeletons by generating topological maps, without physically removing pixels from the pattern or performing readjustment, and that the skeletons preserve the topology of the digit patterns.

In [25] the author describes an image thinning technique based on a neural network. The network architecture and activation functions are designed so that neurons at four different layers remove the respective types of boundary pixels. Experimental results show that the proposed technique is more efficient at medial-axis representation and more robust to boundary noise than other thinning algorithms.

In [26] the author proposes a new approach to binary image thinning that uses the pulse-transmission characteristic of a PCNN; the thinning result is obtained when pulses meet. Criteria for pulse meeting and for thinning completion are presented, and the proposed method is compared with previous thinning algorithms. Experimental results show that the skeleton obtained retains more information from the original image, and that the method is faster than previous thinning algorithms.

In [27] the author describes a PCNN-based square-and-triangle-template method for binary fingerprint image thinning. The algorithm is iterative in nature, combining sequential and parallel approaches. When a neuron satisfies the square template, the pixel corresponding to that neuron is marked during the process and deleted at the end of the iteration; on the other hand, if a neuron meets a triangle template, it is removed directly. In addition, the proposed algorithm is effective for fingerprint thinning regardless of direction. Experimental results show that the proposed algorithm is fast, since it is iterative and combines sequential and parallel approaches to delete border pixels.

In [28] the author describes a modified image thinning algorithm using local connectivity judgment. It adopts an additional local connectivity test to avoid the undesirable edge disconnection caused by removing object pixels, thereby preserving the original edge connectivity. Experimental results show that the proposed algorithm removes the unexpected edge disconnections produced in the pixel-removal process by the previous binary image thinning algorithm based on a template-based pulse-coupled neural network (PCNN).

CHAPTER 3

PROBLEM FORMULATION

3.1 DESCRIPTION:

Skeletonization is an important step in many image processing applications, such as fingerprint analysis [4], optical character recognition [2,5], and medical imaging [4]; therefore, it is an active area of research. There is always a need for good skeletonization algorithms with respect to the following requirements:

To reduce the amount of data required to be processed, since less data takes less time to process [4].

To reduce processing time.

Extraction of critical features such as end-points, junction-points, and connection among the components is helpful in many applications [4].

By reducing an object to only a skeleton, unimportant features and image noise can be filtered out.

Skeletonization is commonly used in higher-level analysis and recognition for applications such as diagram understanding, OCR [2,5], feature detection, and fingerprint analysis [4].

However, most skeletonization algorithms suffer from traditional problems, such as reducing the skeleton to one-pixel width while preserving geometrical and topological properties. Many algorithms produce discontinuities in the images, while several techniques fail to preserve the shape topology and are not reconstructable. Spurious tails and rotation of the text shape are other serious problems, and many thinning methods fail because of them.

The thinning problem requires two tasks to be implemented: (a) peeling the thick pixels off, and (b) stopping the peeling process when the pattern width reduces to exactly one pixel. The first can be achieved with relative ease. The main difficulty arises in the second part, because the stopping decision must be made automatically. This can be achieved by training a real-time cellular neural network. Most conventional thinning approaches suffer from noise sensitivity and rotation dependency; with the use of neural networks, thinning can be made invariant under arbitrary rotations. [8]

Most neural network training has been directed at recognizing characters of different languages, pattern classification, and so on; less work has been done on training neural networks to perform skeletonization. A neural network takes a different approach from conventional thinning. Conventional thinning is algorithmic: the computer follows a specific set of instructions or conditions in order to delete the unwanted pixels, and unless the specific steps the computer needs to follow are known, the pixels cannot be deleted. That restricts the problem-solving capability of conventional thinning. A neural network, by contrast, processes information the way the human brain does: neural networks learn by example and can be trained to perform thinning operations effectively. They reduce the number of instructions to be executed, and hence take less execution time and are faster than conventional thinning approaches.

An important aspect of a skeletonization algorithm is noise immunity. With the use of neural networks we can handle both types of noise in images, i.e., boundary noise and object noise, and make the algorithms more robust, which is not possible with conventional thinning techniques. [9]

CHAPTER 4

OBJECTIVES

4.1 THE PRESENT WORK EMPHASIZES THE FOLLOWING OBJECTIVES:

To design and implement some existing skeletonization algorithms.

To propose a new method for skeletonization based on performance evaluation parameters.

Performance evaluation of the proposed technique in comparison with existing algorithms, in terms of parameters such as execution time, thinning rate, number of connected components, PSNR, and MSE.

CHAPTER 5

METHODOLOGY

5.1 PROPOSED METHODOLOGY:

Fig. 4 Proposed Methodology

Fig. 5 Proposed Methodology

5.1.1 The main steps are described below:

Create a dataset of Gurumukhi script.

Implementing existing algorithms: To implement the existing skeletonization algorithms using neural networks.

Proposing a new Method for skeletonization using neural networks.

Evaluating Performance: To evaluate the performance of the existing algorithms and of the newly proposed skeletonization method using neural networks, on the basis of the following performance measures:

a. Execution Time: The time taken to obtain the output skeletons for a particular image.

b. Thinning Rate: The degree to which an object is said to be thinned or completely thinned can be measured in terms of thinning rate.
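MSE and PSNR, also reported in the results chapter, are standard measures; a minimal sketch of how they might be computed for 8-bit images (PSNR = 10 log10(255^2 / MSE)) follows, with the test arrays as assumptions for illustration.

```python
import numpy as np

# Standard formulas: MSE is the mean squared pixel difference, and
# PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit images.
def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, max_val=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

original = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
thinned = original.copy()
thinned[::4, ::4] = 0                 # pretend some pixels were removed
print(mse(original, thinned), psnr(original, thinned))
```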

CHAPTER 6

IMPLEMENTATION

6.1 RESULTS:

Fig 1: Image loaded

As illustrated in figure 1, the first test image is loaded for thinning. Thinning is a technique for removing unwanted data from an image; thinning operators are used to remove it. The Zhang-Suen algorithm is applied first, and its output is measured in terms of MSE, PSNR, and thinning rate: the MSE is 6856.76, the PSNR is 9.80, and the TR is 0.50.

Fig 2: Applying the back-propagation algorithm

As shown in figure 2, to improve the output of the Zhang-Suen algorithm in terms of PSNR, MSE, and TR, an enhancement based on the back-propagation algorithm is proposed; here the back-propagation algorithm is executed together with the Zhang-Suen algorithm.

Fig 3: Applying the back-propagation algorithm

As shown in figure 3, the thinned image obtained after applying the back-propagation algorithm gives better results than the existing algorithm.

Fig 4: Applying the back-propagation algorithm

As shown in figure 4, after applying the back-propagation algorithm the MSE, PSNR, and TR values are 125.91, 27.16, and 0.82 respectively.

Fig 1: Image loaded

As illustrated in figure 1, the second test image is loaded for thinning. The Zhang-Suen algorithm is applied first, and its output is measured in terms of MSE, PSNR, and thinning rate: the MSE is 4411.20, the PSNR is 11.72, and the TR is 0.50.

Fig 2: Applying the back-propagation algorithm

As shown in figure 2, the proposed back-propagation enhancement is again executed together with the Zhang-Suen algorithm to improve the output in terms of PSNR, MSE, and TR.

Fig 3: Applying the back-propagation algorithm

As shown in figure 3, the thinned image obtained after applying the back-propagation algorithm gives better results than the existing algorithm.

Fig 4: Applying the back-propagation algorithm

As shown in figure 4, after applying the back-propagation algorithm the MSE, PSNR, and TR values are 113.32, 27.16, and 0.83 respectively.

Fig 1: Image loaded

As illustrated in figure 1, the third test image is loaded for thinning. The Zhang-Suen algorithm is applied first, and its output is measured in terms of MSE, PSNR, and thinning rate: the MSE is 1172.12, the PSNR is 17.48, and the TR is 0.50.

Fig 2: Applying the back-propagation algorithm

As shown in figure 2, the proposed back-propagation enhancement is again executed together with the Zhang-Suen algorithm to improve the output in terms of PSNR, MSE, and TR.

Fig 3: Applying the back-propagation algorithm

As shown in figure 3, the thinned image obtained after applying the back-propagation algorithm gives better results than the existing algorithm.

Fig 4: Applying the back-propagation algorithm

As shown in figure 4, after applying the back-propagation algorithm the MSE, PSNR, and TR values are 111.57, 27.69, and 0.74 respectively.

Fig 1: Image loaded

As illustrated in figure 1, the fourth test image is loaded for thinning. The Zhang-Suen algorithm is applied first, and its output is measured in terms of MSE, PSNR, and thinning rate: the MSE is 109.25, the PSNR is 27.78, and the TR is 0.50.

Fig 2: Applying the back-propagation algorithm

As shown in figure 2, the proposed back-propagation enhancement is again executed together with the Zhang-Suen algorithm to improve the output in terms of PSNR, MSE, and TR.

Fig 3: Applying the back-propagation algorithm

As shown in figure 3, the thinned image obtained after applying the back-propagation algorithm is displayed.

Fig 4: Applying the back-propagation algorithm

As shown in figure 4, after applying the back-propagation algorithm the MSE, PSNR, and TR values are 110.87, 23.72, and 0.51 respectively.

CHAPTER 7

REFERENCES

[1] Gonzalez, R.C. and Woods, R.E., "Digital Image Processing", 2nd Ed., Prentice Hall, 2002.

[2] Abu-Ain, W., et al., "Skeletonization Algorithm for Binary Images", The 4th International Conference on Electrical Engineering and Informatics (ICEEI 2013), pp. 704-709.

[3] Padole, G.V. and Pokle, S.B., "New Iterative Algorithms for Thinning Binary Images", Third International Conference on Emerging Trends in Engineering and Technology, IEEE, 2010, pp. 166-171.

[4] Lam, L., et al., "Thinning Methodologies: A Comprehensive Survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 9, September 1992, pp. 869-885.

[5] Chatbri, H., et al., "Using Scale Space Filtering to Make Thinning Algorithms Robust Against Noise in Sketch Images", Pattern Recognition Letters, 42 (2014), pp. 1-10.

[6] Sharma, V., et al., "A Comprehensive Study of Artificial Neural Networks", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2, Issue 10, 2012, pp. 278-284.

[7] Demuth, H., et al., "Neural Network Toolbox for Use with MATLAB", The MathWorks, July 2002.

[8] Chua, L.O. and Yokohama, T., "Image Thinning with a Cellular Neural Network", IEEE Transactions on Circuits and Systems, Vol. 37, Issue 5, 1990, pp. 638-640.

[9] Datta, A., et al., "Shape Extraction: A Comparative Study Between Neural Network-Based and Conventional Techniques", Neural Computing & Applications, Springer, 1998, pp. 343-355.

[10] Zhang, T.Y. and Suen, C.Y., "A Fast Parallel Algorithm for Thinning Digital Patterns", Communications of the ACM, Vol. 27, No. 3, 1984, pp. 236-239.

[11] Zhou, R.W., et al., "A Novel Single-Pass Thinning Algorithm and an Effective Set of Performance Criteria", Pattern Recognition Letters, Elsevier Science, 1995, pp. 1267-1275.

[12] Ahmed, M. and Ward, R., "A Rotation Invariant Rule-Based Thinning Algorithm for Character Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 12, December 2002, pp. 1672-1678.

[13] Rockett, P., "An Improved Rotation-Invariant Thinning Algorithm", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 10, October 2005, pp. 1671-1674.

[14] Saeed, K., et al., "K3M: A Universal Algorithm for Image Skeletonization and a Review of Thinning Techniques", International Journal of Applied Mathematics & Computer Science, Vol. 20, No. 2, 2010, pp. 317-335.

[15] Jagna, A. and Kamakshiprasad, V., "New Parallel Binary Image Thinning Algorithm", ARPN Journal of Engineering and Applied Sciences, Vol. 5, No. 4, April 2010, pp. 64-67.

[16] Guo, Z. and Hall, R.W., "Parallel Thinning with Two-Subiteration Algorithms", Communications of the ACM, Vol. 32, No. 3, March 1989, pp. 359-373.

[17] Tarabek, P., "Performance Measurements of Thinning Algorithms", Journal of Information, Control and Management Systems, Vol. 6 (2008), No. 2.

[18] Kwon, J., "Improved Parallel Thinning Algorithm to Obtain Unit-Width Skeleton", The International Journal of Multimedia & Its Applications (IJMA), Vol. 5, No. 2, April 2013, pp. 1-14.

[19] Kumar, V., et al., "A New Skeletonization Method Based on Connected Component Approach", IJCSNS International Journal of Computer Science and Network Security, Vol. 8, No. 2, February 2008, pp. 133-137.

[20] Altuwaijri, M. and Bayoumi, M., "A Thinning Algorithm for Arabic Characters Using ART2 Neural Network", IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, Vol. 45, No. 2, February 1998, pp. 260-264.

[21] Shang, L. and Yi, Z., "A Class of Binary Images Thinning Using Two PCNNs", Neurocomputing, 70 (2007), pp. 1096-1101.

[22] Ji, L., et al., "Binary Fingerprint Image Thinning Using Template-Based PCNNs", IEEE Transactions on Systems, Man, and Cybernetics, Vol. 37, No. 5, October 2007, pp. 1407-1413.

[23] Choudhary, A., et al., "Off-Line Handwritten Character Recognition Using Features Extracted from Binarization Technique", AASRI Conference on Intelligent Systems and Control, 2013, pp. 306-312.

[24] Ahmed, P., "A Neural Network Based Dedicated Thinning Method", Pattern Recognition Letters, Vol. 16, Elsevier Science, June 1995, pp. 585-590.

[25] Datta, A., et al., "Image Thinning by Neural Networks", Neural Computing and Applications, Vol. 11, Issue 2, October 2002, pp. 122-128.

[26] Gu, X., et al., "Image Thinning Using PCNN", Pattern Recognition Letters, Vol. 25, Issue 9, Elsevier Science, July 2004, pp. 1075-1084.

[27] Xu, D., "A Novel Approach Based on PCNNs Template for Fingerprint Image Thinning", Computer and Information Science, IEEE, 2009, pp. 115-119.

[28] Li, Z., et al., "Modified Binary Image Thinning Using Template-Based PCNN", International Conference on Information Technology and Software Engineering, Vol. 212, 2013, pp. 731-740.
