Robust Experimental Evaluation of Two Algorithms' Performance
Abstract.
The experimental evaluation of the running time of two algorithms implemented in different ways is a common task. In many cases, several experiments are run for each algorithm, their running times are measured, the averages are computed, and the averages are compared. If the average running times differ, the algorithm with the smaller average is considered the faster one. We consider that this simple approach, and some others, suffers from several problems: for example, the variability of the experiments is not taken into account. In our opinion, this can sometimes lead even to erroneous conclusions.
In this paper, we propose a novel, statistically founded benchmarking algorithm for comparing the running times of two algorithms. We consider that our proposal results in a more accurate and robust comparison of the measured performance of the algorithms. To prove the effectiveness of our proposal, we carried out a case study on two imaging algorithms for processing noisy fingerprint images that give relatively similar running times. Using the proposed benchmarking algorithm, we compared the algorithms from the running-time point of view.
Keywords: algorithm performance, statistics, computational complexity, fingerprint, imaging algorithms
1. Introduction
Many times, the performance of two algorithms is established in different ways [????? imaging articles in which the performance of several algorithms is compared; references from journals as good as possible, including recent articles; for these articles we should download what we can: the whole article, or at least the abstract or the conclusions section].
One of the usual approaches to comparison consists in calculating the average running time of both algorithms over several simulation runs and comparing the results. This can sometimes lead even to erroneous conclusions. The simulations produce some numerical values for each algorithm, but repeating the experiments may yield different results. This can happen in the case of heuristic and metaheuristic algorithms, such as genetic algorithms [m3, m4] and the ACO algorithms proposed by Marco Dorigo in 1992 in his PhD thesis [m1, m2].
Another aspect that, in our view, can influence the results is the presence of outlier values that sometimes appear. We call an outlier a rare, not statistically representative numerical result of a simulation that can nevertheless influence the performance evaluation of the algorithms in a statistically significant way. Situations in which such values appear include: very rare configurations of the problem data that the algorithm does not handle well, or something happening in the computing system that produces a very different value.
Our consideration is that two algorithms can give different average performance values while being statistically equal. Simply comparing the two values and concluding that the smaller one performs better can therefore result in an erroneous conclusion.
To eliminate the previously mentioned difficulties, which can lead even to erroneous conclusions, we propose a novel benchmarking algorithm for the comparison of the performance of two algorithms. To prove the effectiveness of the benchmarking algorithm, we carried out a case study using two algorithms for processing noisy fingerprint images.
The remainder of the paper is organized as follows. Section 2 presents different approaches for the experimental evaluation of algorithm performance. Section 3 presents our proposed benchmarking algorithm, carries out the case study comparing two fingerprint image processing algorithms, and discusses the results. Section 4 presents the conclusions related to the proposed benchmarking algorithm.
2. Results presented in the scientific literature addressing aspects of algorithm performance
[? Article 1………
? Article n……………………..
and I will search…. At the end we will make a selection.]
3. The novel benchmarking algorithm
3.1. Description of the proposed algorithm
In the following, we present a novel benchmarking algorithm proposed for the comparison of the performance of two algorithms. The algorithm is called in the following "Comparison of two Algorithms Performance on Paired Data", abbreviated as BentchAlgPair.
We consider two algorithms, denoted in the following Alg1 and Alg2, that solve the same type/class of problems, denoted in the following ClassP. We consider testing the algorithms Alg1 and Alg2 on the same set of test data (solving problems from ClassP, each problem having some specific data), obtaining the performance results denoted Set1={A1t1, A1t2, …, A1tn} and Set2={A2t1, A2t2, …, A2tn}. In the following, we consider performance from the problem-solving-time point of view; in the same way, a different measure of performance, such as memory usage, could be used. Table 1 presents the experimental conditions: the data and the corresponding simulation results for both algorithms. We use the following notation: Data1, Data2, …, Datan denote the datasets used for testing both algorithms; A1t1, A1t2, …, A1tn are the running times of Alg1, where A1ti is the running time for processing Datai; A2t1, A2t2, …, A2tn are the running times of Alg2, defined analogously; Avg1 and Avg2 are the average running times of the two algorithms, Avg1=Average(A1t1, A1t2, …, A1tn) and Avg2=Average(A2t1, A2t2, …, A2tn).
Table 1. Simulation results. Paired data.
The BentchAlgPair algorithm compares the running times of the two algorithms on a set of test data. It checks whether the results are statistically equal or statistically different. We call the null hypothesis, denoted H0, the statement that the performance of the two algorithms Alg1 and Alg2 is statistically equal. We denote by H1 the alternative hypothesis that the performance of the algorithms is statistically different. Further details related to the algorithm are presented in the section Case Study on Complex Images Processing.
Algorithm BentchAlgPair – Comparison of two Algorithms Performance on Paired Data
IN:
Set1={A1t1, A1t2, …….., A1tn}
Set2={A2t1, A2t2, …….., A2tn}
OUT: the decision regarding the statistical equality of the two data samples.
Step1.
@Compute descriptive statistics.
@Calculate the CV (Coefficient of Variation).
@Analyse the homogeneity-heterogeneity of the data based on the CV.
Step2.
@If the data is heterogeneous, the human experimenter is asked whether to apply the outlier detection test on each data sample.
Step3.
If (the application of the outlier detection test is selected) then
Begin
@Apply the outlier test on both data samples.
@Because the datasets Set1 and Set2 are paired, if a value in one of the sets is identified as an outlier, it is eliminated, and the corresponding value in the other set is eliminated as well.
End
Step4.
@Verify if Set1 is normally distributed (sampled from a Gaussian population).
@Verify if Set2 is normally distributed (sampled from a Gaussian population).
If (both Set1 and Set2 are normally distributed) then
Begin
@Apply the parametric paired T test.
@The P-value is obtained as the test result.
End
Else
Begin
//at least one of the data sets is not sampled from a Gaussian population
@Apply the nonparametric Wilcoxon paired test.
@The P-value is obtained as the test result.
End
EndBentchAlgPair
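To make the decision flow concrete, the following is a minimal Python/SciPy sketch of Steps 1 and 4 of BentchAlgPair (our case-study experiments were run in Java; this sketch and the name bentch_alg_pair are our own). The optional outlier handling of Steps 2 and 3 is sketched separately in Section 3.5. Note that the Kolmogorov-Smirnov test with the mean and standard deviation estimated from the sample is only approximate.

import numpy as np
from scipy import stats

def bentch_alg_pair(set1, set2, alpha=0.05):
    """Compare two paired samples of running times; return the P-value."""
    a = np.asarray(set1, dtype=float)
    b = np.asarray(set2, dtype=float)
    # Step 1: coefficient of variation, formula (CV2), in percent.
    cv = lambda x: 100 * np.std(x, ddof=1) / np.mean(x)
    print("CV: %.1f%% (Alg1), %.1f%% (Alg2)" % (cv(a), cv(b)))  # >= 30%: heterogeneous
    # (Steps 2-3, optional outlier elimination on heterogeneous data, omitted here.)
    # Step 4: Kolmogorov-Smirnov normality check on the standardized samples.
    def looks_normal(x):
        z = (x - np.mean(x)) / np.std(x, ddof=1)
        return stats.kstest(z, "norm").pvalue > alpha
    if looks_normal(a) and looks_normal(b):
        p_value = stats.ttest_rel(a, b).pvalue   # parametric paired t-test
    else:
        p_value = stats.wilcoxon(a, b).pvalue    # non-parametric Wilcoxon paired test
    return p_value  # P-value > alpha: accept H0 (statistically equal)

A P-value below α then triggers the rejection of H0, exactly as in the pseudocode above.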
3.2. Case study: imaging algorithms
Fingerprint identification
[Here there should be a description (somewhat different from the SIC article, so that it is not 100% the same).]
Identifying individuals based on fingerprints has many applications, including many applications for mobile phones, laptops, etc.
One of the applications consists in security intelligence. The FBI will soon replace the old IAFIS with the Next Generation Identification system [SIC10], [SIC11], developed by Lockheed Martin [SIC12] in partnership with Safran.
A first step in analyzing a fingerprint image consists in preprocessing and filtering. Sometimes this is a very difficult process, because the images may be noisy. If the result of this step is not of good quality, the identification process can fail.
The two proposed algorithms are adaptations of classical algorithms based on image properties. The two algorithms were proposed for the successful completion of the digital image filtering, creating good prerequisites for authentication methods based on fingerprints.
We use digital images corresponding to fingerprints taken with ink, of various dimensions. Images from the FVC2000, FVC2002, and FVC2004 databases are cut to different sizes, between 150 and 300 pixels in width and height.
The set of filters was built with the help of two algorithms, coded A2 and A4.
[The two algorithms should be explained here.
We should start with the presentation of the general quartiles algorithm, presented in the form of an algorithm:
Algorithm ……is
Input data
Output data
End algorithm.
For now we have kept the notation A2 and A4, but we will call them Alg1 and Alg2 at the end.
A2 and A4 should be described in a bit more detail, and we should emphasize the novelty with respect to the quantile algorithm and how they differ. We should also have a justification of the purpose of modifying the algorithm. See how you handle this part; here we have not only A2 and A4.]
A2 – uses the principle of quartiles over the entire image, in conjunction with applying quartiles calculated "locally" on small areas;
A3 – achieves the extrapolation of the values that are in the "immediate" neighborhood of the extremes (0 and 255), performed in a "local" manner;
A4 – uses the quartile principle applied globally, in conjunction with the thresholds referred to in A3;
A9 – the most developed algorithm, provides additional processing for particular situations. It applies global quartiles (Q1, Q2, Q3) in conjunction with the following rules:
- if initial_value < Q1, then initial_value = 0;
- if initial_value > Q3, then initial_value = 255; the gray values between Q1 and Q3 are processed by other algorithms in order to eliminate the "salt and pepper" effect and to deal with the alleged membership of a pixel to details.
This subset includes the following situations:
- if initial_value is black and its neighborhood (3 x 3 pixels) is only gray, then the whole neighborhood turns black;
- if initial_value is white and its neighborhood (3 x 3 pixels) is in grayscale, then the whole neighborhood turns into grayscale;
- if initial_value is white and its neighborhood (3 x 3 pixels) is all black, then the entire neighborhood turns black;
- if initial_value is gray and everything in its neighborhood (3 x 3 pixels) is white, then initial_value turns white;
- if in the neighborhood (3 x 3 pixels) three neighbors are gray and the rest are white, then the middle pixel turns white;
- if the vicinity of (5 x 5 pixels) is composed entirely of gray, then the whole (3 x 3 pixels) vicinity of the same pixel turns black;
- if the vicinity is like in Figure nr.1,
Figure nr.1 –
where m is the column of the matrix corresponding to the digital image and "gray" is any tone between 0 and 255 (exclusive): if the gray value of column m is less than the value of column (m + 1), then the value of column m becomes 0 and the corresponding pixel values of columns m - 1 and m + 1 become 255 (white), as shown in Figure nr.2;
Figure nr.2 –
- if the vicinity is like in Figure nr.3, and the gray value of column m is less than that of column (m + 1), then the value of column m becomes 0 and the value of column m + 1 becomes 255, as shown in Figure nr.4, and vice versa otherwise;
- if the vicinity is like in Figure nr.5,
Figure nr.5 –
and the pixel value at column m is the minimum of columns m - 1, m, and m + 1, then the value at position m becomes 0 and the values at m - 1 and m + 1 become 255, as in Figure nr.6;
Figure nr.6 –
or, if the minimum value is at column m - 1 or m + 1, respectively, the result is as shown in Figures nr.7 and nr.8:
Figure nr.7 –
Figure nr.8 –
In the set of filters we made use of: the classical theory of quartiles in the algorithms A2 and A4, the theory of thresholds in A4, and a combination of these with rules related to the vicinity of certain pixels in A7, A8, and A9.
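For illustration, below is a minimal sketch of the global quartile thresholding step that the filters share (the first two A9 rules), assuming 8-bit grayscale images stored as NumPy arrays; the function name is our own, and the local/neighborhood rules described above are deliberately omitted.

import numpy as np

def quartile_threshold(img):
    """img: 2-D uint8 grayscale array; returns a filtered copy."""
    q1, q3 = np.percentile(img, [25, 75])  # global quartiles Q1 and Q3
    out = img.copy()
    out[img < q1] = 0      # darker than Q1 -> black
    out[img > q3] = 255    # brighter than Q3 -> white
    # Values in [Q1, Q3] would next pass through the local/neighborhood rules
    # (salt-and-pepper removal and the 3 x 3 vicinity rules described above).
    return out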
3.3. Case Study on Complex Images Processing
For the comparison of the filtering algorithms denoted Alg1 and Alg2, we ran tests in Java on a computer with the following resources: processor Intel(R) Core(TM)2 Duo CPU T5800 @ 2.00 GHz, 3 GB of installed RAM, operating system Windows 7 Ultimate. For the testing and validation of the algorithms, we used a set of n=32 fingerprint images. A few fingerprint images are presented in the annex.
We consider testing the algorithms Alg1 and Alg2 on the same set of test data, denoted in the following Img={Img1, …, Imgn}, obtaining the performance results denoted Set1={A1t1, A1t2, …, A1tn} and Set2={A2t1, A2t2, …, A2tn}, with |Set1|=|Set2|=32. The simulation results are presented in Table 2, columns Alg1 and Alg2.
The obtained simulation data is paired. We denote in the following the pairs PerformancePairs={(A1t1, A2t1), (A1t2, A2t2), …, (A1tn, A2tn)}. For example, in the case of the image denoted Img1, the result of algorithm Alg1 is A1t1=0.414374 and that of Alg2 is A2t1=1.663553; A1t1 and A2t1 form a pair. Based on the paired data, the number of degrees of freedom is n-1=31.
Table 2. Simulation results
** This column indicates which values can be identified as outliers by an outlier detection test.
* Indicates the identified outliers.
#1: Both values, 1.836701 (Alg1) and 7.05183 (Alg2), are identified as outliers at the first application of the outlier detection test. Both of them are eliminated if their elimination is decided.
#2: 1.214258 (Alg1) is not identified as an outlier. 6.488691 (Alg2) is identified as an outlier at the second application of the outlier detection test. Because the data is paired, if the elimination of the outliers is decided, both values should be eliminated.
#3: 1.170027 (Alg1) is not identified as an outlier. 5.605859 (Alg2) is identified as an outlier at the third application of the outlier detection test. Because the data is paired, if the elimination of the outliers is decided, both values should be eliminated.
During the statistical analysis, as a first step we computed descriptive statistics [mm9, mm10]: the Mean, Standard Error, Median, Mode, Standard Deviation, Sample Variance, Kurtosis, Skewness, Min, Max, and the Confidence Level together with the Lower and Upper confidence interval limits (in most cases we propose choosing 95.0% confidence, or, expressed as a significance level, α=0.05); we also calculated the Coefficient of Variation, denoted in the following CV. Table 3 presents the results of the descriptive statistics.
CV should be calculated using one of the formulas:
CV = Standard deviation / Mean, (CV1)
CV = 100 × (Standard deviation / Mean). (CV2)
In formula (CV2), the ratio Standard deviation/Mean is multiplied by 100 to express the result as a percentage. For example, if Standard deviation/Mean = 0.2, multiplying by 100 gives 20%.
We used the CV value to analyse the homogeneity-heterogeneity of the data [b1]. We consider the following classification of the data based on variability, as described by Marusteri [mm13]: CV in [0%, 10%] indicates homogeneous data; CV in (10%, 30%) indicates relatively homogeneous data; CV ≥ 30% indicates heterogeneous data. For the two considered algorithms, both sets of simulation data were heterogeneous, with approximately the same degree of heterogeneity, CV(Alg1) ≈ CV(Alg2).
Often, for sample data, the mean is an important statistical indicator. We consider identifying the homogeneity-heterogeneity of the data important because, if CV ≥ 30% (the data is heterogeneous), the mean is not a representative statistical indicator.
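As an illustration, the following is a small sketch of the CV computation (formula (CV2)) and of the homogeneity classification described by Marusteri [mm13]; it assumes NumPy, and the function name is our own.

import numpy as np

def classify_cv(sample):
    """Return the CV (in percent) and its homogeneity class per [mm13]."""
    cv = 100 * np.std(sample, ddof=1) / np.mean(sample)  # formula (CV2)
    if cv <= 10:
        label = "homogeneous"
    elif cv < 30:
        label = "relatively homogeneous"
    else:
        label = "heterogeneous"  # the mean is not representative
    return cv, label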
For testing the data normality, we applied the Kolmogorov-Smirnov goodness-of-fit test [mm14]. We chose the Kolmogorov-Smirnov test because of the small number of data points (small number of degrees of freedom). We chose the significance level α=0.05. The results of the normality test are presented in Table 4.
Table 3. Descriptive statistics
Table 4. Normality test
3.4. Application of the benchmarking algorithm on heterogeneous data
Based on the CV values of the data of both experiments, CV ≥ 30%, we concluded that both data sets are heterogeneous. In this section, at Step 2 of the BentchAlgPair algorithm, we decided not to apply a test for outlier detection. Instead, we apply a non-parametric statistical test for paired data.
For our data, we considered the Wilcoxon signed-rank test the most appropriate: a non-parametric statistical hypothesis test used when comparing two matched samples [mm11, mm12]. It can be used as an alternative to the paired Student's t-test (the t-test for matched pairs) [reffer].
We chose to apply the two-tailed Wilcoxon test. We considered the significance level α=0.05 the most appropriate. In the interpretation of the Wilcoxon test result, a P-value > 0.05 indicates the acceptance of the null hypothesis: the simulation results are statistically equal. A P-value ≤ 0.05 indicates the rejection of the null hypothesis.
Figure 1. Calculation results of Wilcoxon test
Figure 1 shows some calculation details of the Wilcoxon test. In the case of our two algorithms Alg1 and Alg2, the Wilcoxon test gave a P-value < 0.0001, based on which we can conclude the rejection of the null hypothesis (H0) and the acceptance of the alternative hypothesis (H1).
The final conclusion that can be drawn is that the algorithm Alg1, with the smaller average time, performs better.
3.5. Application of the benchmarking algorithm for relatively homogeneous data
The CV values for both algorithms indicated heterogeneous data. At Step 2 of the BentchAlgPair algorithm, we decided to apply a test for outlier detection. Many tests for outlier detection are described in the scientific literature, such as: Grubbs [mm8], Chauvenet's criterion [B5, B6], Peirce's criterion [B8], and Dixon's Q test [b9]. We chose the Grubbs test for outlier detection, with significance level α=0.05. For each data set, Set1 and Set2, we applied the test several consecutive times, until all the outliers were detected. In Table 5 we indicate the outlier values identified by the Grubbs test and the application of the test at which each value was identified.
Table 5. Grubbs test for outlier detection, α=0.05
Because the data sets Set1 and Set2 are paired, we eliminated the data in pairs. Examples of cases for the elimination or not of a certain pair (A1ti, A2ti) (a code sketch follows the list):
if A1ti and A2ti are both outliers, then both of them should be eliminated;
if A1ti is an outlier and A2ti is not, then, because they form a pair, both of them should be eliminated;
if A2ti is an outlier and A1ti is not, then, because they form a pair, both of them should be eliminated;
if neither A1ti nor A2ti is an outlier, then neither of them is eliminated.
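The following is a sketch of this procedure, under our assumptions: the two-sided Grubbs statistic G = max_i |x_i - mean| / s is compared with the critical value G_crit = ((n-1)/sqrt(n)) * sqrt(t^2 / (n-2+t^2)), where t is the upper α/(2n) quantile of Student's t distribution with n-2 degrees of freedom; the test is repeated until no outlier remains, and the elimination is always done in pairs. It assumes SciPy, and the function names are our own.

import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.05):
    """Return the index of a two-sided Grubbs outlier, or None."""
    n = len(x)
    if n < 3:
        return None
    mean, sd = np.mean(x), np.std(x, ddof=1)
    i = int(np.argmax(np.abs(x - mean)))
    g = abs(x[i] - mean) / sd
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return i if g > g_crit else None

def drop_outlier_pairs(set1, set2, alpha=0.05):
    """Apply Grubbs repeatedly to each sample; drop the whole pair."""
    a, b = list(set1), list(set2)
    while True:
        i = grubbs_outlier(np.array(a), alpha)
        if i is None:
            i = grubbs_outlier(np.array(b), alpha)
        if i is None:
            return np.array(a), np.array(b)
        del a[i], b[i]  # paired data: eliminate both members of the pair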
After the elimination of the outliers, we computed new descriptive statistics (Table 6) and tested the data normality. For testing the data normality, we applied the Kolmogorov-Smirnov goodness-of-fit test. The normality test results are presented in Table 7.
Table 6. The new descriptive statistics, after the outlier elimination
Table 7. Normality test
Because the data passed the normality test, we chose a parametric test: the two-sample t-test for paired data [reffer]. We considered the significance level α=0.05.
In the interpretation of the t-test result, a P-value > 0.05 indicates the acceptance of the null hypothesis (H0), i.e., the simulation results are statistically equal. A P-value ≤ 0.05 indicates the rejection of the null hypothesis (H0) and the acceptance of the alternative hypothesis (H1).
Figure 2 shows some calculation details of the paired t-test. The two-tailed P-value < 0.0001 allows the conclusion that the difference between the means of the two running times is extremely significant; the null hypothesis (H0) is rejected and the alternative hypothesis (H1) is accepted: the processing times are different, and the algorithm Alg1, with the smaller average time, performs better.
Figure 2. Calculation results of the paired T-test
We also tested whether the pairing/matching was effective. The results are presented in Figure 3. Effective pairing results in a significant correlation between the columns. The one-tailed P-value is < 0.0001, which indicates extremely significant pairing (the pairing is effective). The Pearson coefficient of correlation was calculated [mm5, mm6, mm7]. We chose the Pearson coefficient of correlation based on the normality of the data: the data was sampled from a Gaussian population, which indicated the application of a parametric test.
Figure 3. Verification of the pairing effectiveness
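A minimal sketch of this pairing check follows, using the Pearson correlation from SciPy (scipy.stats.pearsonr returns a two-sided P-value, which we halve for the one-tailed test when the correlation is positive); this is our illustration, not the exact tool that produced Figure 3.

from scipy import stats

def pairing_effective(set1, set2, alpha=0.05):
    """Check whether the pairing is effective via the Pearson correlation."""
    r, p_two_sided = stats.pearsonr(set1, set2)
    # One-tailed P-value for a positive correlation (effective pairing).
    p_one_tailed = p_two_sided / 2 if r > 0 else 1 - p_two_sided / 2
    return r, p_one_tailed, p_one_tailed < alpha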
3.6. Discussion of the obtained results
For validation purposes of the proposed benchmarking algorithm for paired data, denoted BentchAlgPair, we carried out a case study. The purpose was to establish the effectiveness of the proposed benchmarking algorithm in the performance comparison of two algorithms, denoted Alg1 and Alg2. We considered performance from the running-time point of view. In the case study, we considered Alg1 and Alg2, algorithms with some novel elements used for filtering noisy images. The decision to choose these two algorithms was based on the fact that they give very similar/close simulation results, as we estimated theoretically. The calculation results, based on the simulation data sets denoted Set1 for Alg1 and Set2 for Alg2, showed a small difference between the average running times of the algorithms.
In the BentchAlgPair algorithm, the data homogeneity-heterogeneity is verified at Step 2. It is left to the latitude of the human experimenter, based on the homogeneity-heterogeneity report of the algorithm, to decide whether to apply an outlier test or to decline its application. For this reason, the algorithm is a hybrid one: during the run, if necessary, the intervention of a human is requested for the decision.
The algorithm could have this option implemented implicitly: for example, if the data is heterogeneous, apply the outlier detection test without asking the human experimenter. The decision related to the hybridization must be based on the existence or not of domain-specific knowledge, which should be constructed based on considerations such as: the experimental conditions, the specificity of the computing system, whether outliers can appear, etc.
At Step 4, the data normality is verified. If both data sets are normally distributed, the two-sample t-test for paired data (a parametric test) is applied; otherwise, the Wilcoxon test for paired data (a non-parametric test) is applied.
In our case study, the simulation data for both algorithms was heterogeneous, CV ≈ 40%. We decided to follow both approaches for the decision at Step 2 (presented in Sections 3.4 and 3.5):
the first approach (Section 3.4) consists in keeping the outliers; the data did not pass the normality test, so we applied the Wilcoxon test for paired data;
the second approach (Section 3.5) consists in the elimination of the outliers; after the paired elimination, both data sets became relatively homogeneous, CV ≈ 23%; the normality test indicated that the data passed, so we applied the two-sample t-test for paired data.
In both cases (with and without elimination of the outliers) we obtained the same result, which indicated a difference in the problem-solving times of Alg1 and Alg2. Because the average problem-solving time of Set1 is smaller than that of Set2, we concluded that the algorithm Alg1 performs better than Alg2. Our conclusion related to the outliers, in the case of our data, was that they were not a factor influencing the formulation of the correct conclusion.
We make the observation that our proposal detects only the statistical equality or difference between two sets of simulation data. The choice of the better-performing algorithm out of the two analysed ones is left to the latitude of the experimenter, or is established as an implicit option of the algorithm (in case the data is heterogeneous, opting or not for the detection and elimination of the outliers).
4. Conclusions
In this paper, we have proposed a novel technique that allows the comparison of the performance of two algorithms that run on the same computing system with the same resources and solve the same class/type of problems.
One of the motivations of our proposal is the fact that some simulations (problem solvings) can produce results very different from the rest, results that can influence the correctness of the conclusion. We call such values outliers of the simulation. Outliers that are extremely different from the other values can be detected even visually if graphically represented, although this can be difficult for an enormously large set of data; and visual detection is not appropriate if the differences are not so large (a value can be an outlier relative to the other values without being detectable visually). As examples of situations in which outliers can appear, we mention: the apparition of an unexpected situation in the computing system (for example, an exception, or multitasking effects) that changes the running time very much; a solved problem's data being very different from the majority of cases (a very rare problem belonging to the same class of problems as the others experimented); etc.
Another consideration consists in the fact that the simulations can have a variability related to different aspects, such as: the simulation conditions, the parameters of the algorithms, the realized simulations, etc. If these are not identified and handled appropriately, different simulations can give different results. For example, in one set of experiments an algorithm denoted Algorithm1 is slightly faster than an algorithm denoted Algorithm2, but, on repeating the experiments, Algorithm2 is slightly faster than Algorithm1.
For validation purposes of the proposed benchmarking algorithm, we carried out a case study on noisy-image filtering algorithms. Applying the benchmarking algorithm, we proved the performance difference of the two algorithms. Details related to the experimental setup, the obtained results, and the analysis are given in the previous sections.
We consider our proposed benchmarking algorithm original and appropriate for the accurate differentiation of two problem-solving algorithms in the case of paired data (in our experimental setup, the same images were given to both algorithms, and a pair was considered to be the simulation data of both algorithms on the same image). We propose it mostly for heuristic and metaheuristic algorithms.
Acknowledgements
This work was possible with the financial support of the Sectoral Operational Programme for Human Resources Development 2007-2013, co-financed by the European Social Fund, under the project number POSDRU/159/1.5/S/132400, with the title "Young researchers of success – professional development in the interdisciplinary and international context".
References
[m1] A. Colorni, M. Dorigo and V. Maniezzo, Distributed Optimization by Ant Colonies, Proceedings of the First European Conference on Artificial Life, Paris, France, Elsevier Publishing, 134-142, 1991.
[m2] M. Dorigo, Optimization, Learning and Natural Algorithms, PhD thesis, Politecnico di Milano, Italy, 1992.
[m3] Mitchell, Melanie (1996). An Introduction to Genetic Algorithms. Cambridge, MA: MIT Press.
[m4] Goldberg, David (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley Professional.
[mm5] Galton, F. (1886), Regression towards mediocrity in hereditary stature, Journal of the Anthropological Institute of Great Britain and Ireland, 15: 246–263.
[mm6]Karl Pearson (1895) "Notes on regression and inheritance in the case of two parents," Proceedings of the Royal Society of London, 58 : 240–242.
[mm7]Stigler, Stephen M. (1989). Francis Galton's Account of the Invention of Correlation. Statistical Science 4 (2): 73–79.
[mm8] Barnett, V. and Lewis, T. (1994). Outliers in Statistical Data (3rd edition), Wiley.
[mm9] Mann, Prem S. (1995). Introductory Statistics (2nd ed.). Wiley.
[mm10] Nick, Todd G. (2007). Descriptive Statistics. Topics in Biostatistics. Methods in Molecular Biology 404. New York: Springer. pp. 33–52.
[mm11] Wilcoxon, Frank (Dec 1945). Individual comparisons by ranking methods. Biometrics Bulletin 1 (6): 80–83
[mm12] Siegel, Sidney (1956). Non-parametric statistics for the behavioral sciences. New York: McGraw-Hill. pp. 75–83.
[mm13] Marius Marusteri, Fundamentals in biostatistics: lecture notes. University of Medicine Press, Targu Mures, 2006.
[mm14] Chakravarti, I.M., Laha, R.G., Roy, J. (1967) Handbook of Methods of Applied Statistics, vol. I, John Wiley & Sons, pp. 392-394.
[b1]Everitt, Brian (1998). The Cambridge Dictionary of Statistics. Cambridge, UK New York: Cambridge University Press
[B5] Ross, S.M. (2003). Peirce's Criterion for the Elimination of Suspect Experimental Data. Journal of Engineering Technology. https://www.eol.ucar.edu/system/files/piercescriterion.pdf
[B6] Zerbet, A., Nikulin, M. (2003). A new statistics for detecting outliers in exponential case, Communications in Statistics: Theory and Methods, vol. 32, pp. 573–584.
[B8] Stigler, S.M. (1978). Mathematical statistics in the early states, The Annals of Statistics, vol. 6, no. 2, p. 246.
[b9] Dean, R.B. and Dixon, W.J. (1951). Simplified Statistics for Small Numbers of Observations. Anal. Chem., 23(4), 636–638.
[reffer] Lowry, Richard. "Concepts & Applications of Inferential Statistics". Retrieved 24 March 2011. http://vassarstats.net/textbook/
[I will see what I will do with this part.]
[SIC11] Dizard III, Wilson P. "FBI plans major database upgrade". Government Computer News, 28 August 2006.
[SIC12] "FBI — Next Generation Identification". Fbi.gov.
[SIC13] Lipowicz, Alice "FBI's new fingerprint ID system is faster and more accurate, agency says – GCN". Government Computer News, Mar 09, 2011.
POPESCU, C., A Secure and Efficient Off-line Electronic Transaction Protocol, Studies in Informatics and Control, ICI Publishing House, vol. 19, no. 1, 2010, pp. 27-34.
M.L. Costin, Tehnologia Identificării Biometrice, ISBN 978-973-166-363-0, Editura Lumen, 2013, p. 23.
I. Ispas, The image recognition and classification, a four-step modeling, Proc. of the 2nd International Conf. on European Integration – Between Tradition and Modernity, pp. 124-132, Petru Maior University, Tirgu Mures, Sept. 20-21, 2007.
J.M. Guo, Y.F. Liu, J.Y. Chang, J.D. Lee, Fingerprint classification based on decision tree from singular points and orientation field, Expert Systems with Applications, vol. 41, no. 2, February 2014, pp. 752–764.
T. Khan, M.A.U. Khan, Y. Kong, Fingerprint image enhancement using multi-scale DDFB based diffusion filters and modified Hong filters, Optik – International Journal for Light and Electron Optics, vol. 125, no. 16, August 2014, pp. 4206–4214.
J. Wayman, A. Jain, D. Maltoni, D. Maio, Biometric Systems – Technology, Design and Performance Evaluation, Springer, 2005.
L. Hong, Y. Wan and A.K. Jain, Fingerprint image enhancement: algorithms and performance evaluation, IEEE Trans. Pattern Analysis and Machine Intelligence, 20(8), 777-789, August 1998.
D. Maltoni, D. Maio, A.K. Jain and S. Prabhakar, Handbook of Fingerprint Recognition, 2nd Edition, Springer, London, 2009.
S. Bleay, D. Charlton, Fingerprint Identification, Encyclopedia of Criminology and Criminal Justice, Springer New York, 2014, pp. 1648-1664, ISBN 978-1-4614-5689-6.
Marusteri, M., Bacarea, V. (2010) Comparing groups for statistical differences: how to choose the right statistical test? Biochemia Medica, 20(1):15-32.
Sokal, R.R., and Rohlf, F.J. (1995) Biometry: The Principles and Practice of Statistics in Biological Research. 2nd ed. New York: W. H. Freeman.
Annex (Example of wide tables – inserted in a borderless text box)
Table .
