Contents
List of figures ................................................................. 2
List of tables .................................................................. 3
Abbreviations ................................................................... 4
Summary ......................................................................... 5
1. Work planning ............................................................... 11
2. State of the art ............................................................ 12
3. Theoretical Fundamentals .................................................... 16
   3.1 OpenCV (Open Source Computer Vision) .................................... 16
      3.1.1 Structure of the OpenCV library .................................... 17
      3.1.2 Face detection ..................................................... 17
      3.1.3 Face recognition ................................................... 18
   3.2 Python .................................................................. 25
   3.3 Raspberry Pi ............................................................ 25
4. Implementation .............................................................. 26
   4.1 Installing Raspbian OS .................................................. 26
      4.1.1 Configuring the operating system ................................... 27
   4.2 Installing the OpenCV library ........................................... 29
   4.3 Creating the Facial Recognition application ............................. 35
5. Experimental results ........................................................ 40
   5.1 Tests using the own database ............................................ 40
   5.2 Tests using the MIT-CBCL database ....................................... 43
   5.3 Tests using the Caltech database ........................................ 45
   5.4 Results comparison ...................................................... 46
6. Conclusions ................................................................. 47
7. References .................................................................. 48
8. Appendix .................................................................... 49
   8.1 The source code ......................................................... 49
      8.1.1 The “dataSet_creator.py” file ...................................... 49
      8.1.2 The “trainer.py” file .............................................. 51
      8.1.3 The “recognizer.py” file ........................................... 52
List of figures Rare ș-Marin BOZDOG
2
List of figures
Figure 1. OpenCV library structure [Flo13] ..................................... 17
Figure 2. Face detection example ............................................... 18
Figure 3. Jet colormap of Eigenfaces samples [Ope17] ........................... 20
Figure 4. Eigenfaces reconstruction [Ope17] .................................... 20
Figure 5. Fisherfaces reconstruction [Ope17] ................................... 22
Figure 6. LBP example .......................................................... 23
Figure 7. How LBP works [Ope17] ................................................ 24
Figure 8. Block scheme ......................................................... 26
Figure 9. Raspbian OS installation using Etcher ................................ 27
Figure 10. Entering the “Raspberry Pi Configuration” menu ...................... 27
Figure 11. Changing the Raspberry Pi password .................................. 28
Figure 12. Enabling interfaces ................................................. 28
Figure 13. VNC remote connection from an iPhone ................................ 29
Figure 14. WnetWatcher application interface ................................... 30
Figure 15. The “Raspberry Pi Software Configuration Tool” menu ................. 30
Figure 16. Inside/Outside of the “cv” virtual environment ...................... 33
Figure 17. OpenCV compilation parameters ....................................... 34
Figure 18. OpenCV working on the Raspberry Pi .................................. 35
Figure 19. SQLite3 database structure .......................................... 39
Figure 20. Creating the samples ................................................ 40
Figure 21. The SQLite3 own database ............................................ 41
Figure 22. Samples preview ..................................................... 41
Figure 23. Content of a “training_data.yml” file ............................... 42
Figure 24. Face recognition example ............................................ 42
Figure 25. MIT-CBCL SQLite3 database ........................................... 43
Figure 26. Caltech SQLite3 database ............................................ 45
Figure 27. Experimental results comparison ..................................... 46
List of tables
Table 1. Comparison between Raspberry Pi versions .............................. 14
Table 2. Own database results .................................................. 43
Table 3. Confidence levels for the MIT-CBCL database ........................... 44
Table 4. Confusion rates for the MIT-CBCL database ............................. 44
Table 5. Confidence levels for the Caltech database ............................ 45
Table 6. Confusion rates for the Caltech database .............................. 46
Abbreviations
BSD      Berkeley Software Distribution
CALTECH  California Institute of Technology
CBCL     Center for Biological & Computational Learning
CLI      Command-Line Interface
CSI      Camera Serial Interface
CV       Computer Vision
DARPA    Defense Advanced Research Projects Agency
DHSMV    Department of Highway Safety and Motor Vehicles
DSI      Display Serial Interface
FERET    Face Recognition Technology
GPIO     General-Purpose Input/Output
GPU      Graphics Processing Unit
GUI      Graphical User Interface
HDMI     High-Definition Multimedia Interface
I/O      Input/Output
ID       Identifier
IEEE     Institute of Electrical and Electronics Engineers
IP       Internet Protocol
LBPH     Local Binary Patterns Histograms
LCD      Liquid Crystal Display
LDA      Linear Discriminant Analysis
MAC      Media Access Control
MIT      Massachusetts Institute of Technology
MLL      Machine Learning Library
NSA      National Security Agency
OEM      Original Equipment Manufacturer
OpenCV   Open Source Computer Vision
OS       Operating System
PCA      Principal Component Analysis
PoE      Power over Ethernet
RAM      Random Access Memory
ROI      Region of Interest
SBC      Single-Board Computer
Sci-Fi   Science Fiction
SD       Secure Digital
SSH      Secure Shell
TCP      Transmission Control Protocol
USB      Universal Serial Bus
VNC      Virtual Network Computing
Summary
1. Introduction
To protect their data or even their personal belongings, people rely on the old technology of passwords as a security method. This security method has been available on computers since the 1960s, so over time many efficient password-cracking methods have appeared, generating thousands of combinations per second and helping hackers steal data and even personal property. Because of this problem, researchers have found a solution based on biometrics. These biometric security systems are: fingerprint recognition, iris recognition and facial recognition.
A facial recognition system is an application that detects and then recognizes a person's face in a digital image or in real time from the frames captured by a video camera. This is possible by comparing a person's facial features against a database created by the user. Such applications are used in modern security systems and work much like fingerprint or iris recognition.
Facial recognition based on the geometric features of the face appears to be the most effective way for a computer to recognize human faces. According to the literature [Sta11], the first automatic facial recognition system was described by the Japanese computer scientist Takeo Kanade in 1973: points of interest (the position of the mouth, nose, eyes and ears) were used to build a feature vector (the distances between the points of interest and the angles between them). Recognition was performed by computing the Euclidean distances between the feature vectors of a test image and those of a reference image. This method is very robust to illumination variations, but unfortunately registering the points of interest accurately is very difficult.
Applications involving facial recognition can be implemented on small, portable devices (such as the iPhone X) that do not necessarily need a lot of computing power. The Raspberry Pi 3 Model B is a small, very cheap device that can successfully run such an application, which can be used for home security or for protecting personal data.
2. Theoretical background
This part of the thesis presents the software and hardware resources used to implement the adopted solution, as well as the reasons for choosing them. The main topic is the OpenCV library, since it is the component that contributes the most to this facial recognition system through the algorithms it makes available to the user. Equations, block schemes and diagrams explaining how the face detection and recognition algorithms in the OpenCV library work are also presented.
As stated in [Flo13], OpenCV is a free library, originally written in the C and C++ programming languages, but also compatible with Android and Python. It was designed to be an efficient tool for real-time computer vision applications. The library contains over 500 functions covering fields such as medicine, robotics, camera calibration, stereoscopic vision (known as 3D) and security. In addition, it contains several machine learning functions.
The purpose of this library is to give users an easy-to-use image processing infrastructure on which highly complex applications can be built. It is free for anyone to use and is released under the BSD license. It is structured in four main parts:
Figure 1. Structure of the OpenCV library [Flo13]
Face detection is used in a multitude of applications to identify faces in images or video sequences. It is similar to object detection, and the algorithms are based on forward-looking faces, because those are the easiest to detect.
Systems that include facial recognition are able to detect and then recognize the identity of a person in a digital image or a video sequence. For a facial recognition system to be effective, face detection must be accurate. Real-time face detection is much more effective than detection in static images, but the computing power required for this kind of application is also higher.
According to [Doc17], the Haar cascade classifier is an algorithm for detecting both objects and faces. It is the most widely used algorithm when it comes to face detection. Other applications of this algorithm could be detecting pieces of furniture in a room or the license plates of cars driving down the street. The algorithm is based on the machine learning concept of training with a set of positive images (images containing faces) and negative images (images without faces). This approach was introduced by P. Viola and M. Jones in the 2001 paper "Rapid Object Detection using a Boosted Cascade of Simple Features".
As the article [Ope17] points out, facial recognition comes naturally to humans. Even a few-months-old baby is able to distinguish familiar faces. So the question is: how hard can it be for a computer to do the same thing? The OpenCV library is built around three facial recognition algorithms: Eigenfaces, Fisherfaces and Local Binary Patterns Histograms.
In the Eigenfaces algorithm, any grayscale image is a matrix of p × q elements that can be represented as a linear vector with m = pq dimensions, so a 100 × 100 pixel image results in a 10,000-dimensional vector. Using all 10,000 dimensions for facial recognition is pointless, because many errors would occur. In fact, all human faces are similar, which leads to very similar vectors.
The vectors describing the subject's face are processed with Principal Component Analysis (PCA). Researchers tried to obtain a better representation of the face by looking for the vectors that influence the image distribution the most. Those vectors form the face space, which is a much better representation than the whole image space, which contains a lot of useless information.
These principal vectors can be seen as a set of general features of the images in the database. When the portraits are scaled to a standard size, they can be treated as one-dimensional vectors of pixel values. Each image has an exact representation as a linear combination of these principal vectors.
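The flattening-plus-PCA idea described above can be illustrated with plain NumPy. A sketch under stated assumptions: the random matrix below stands in for a real face database, and the image size is shrunk from 100×100 to 10×10 to keep it small:

```python
# Eigenfaces sketch: flatten each p×q face image into a pq-dimensional
# vector, then keep only the principal components (the "eigenfaces").
import numpy as np

rng = np.random.default_rng(0)
n_images, p, q = 20, 10, 10             # tiny stand-in for 100×100 faces
faces = rng.random((n_images, p * q))   # each row is one flattened image

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The principal components are obtained here via SVD of the centered
# data matrix; each row of `components` is one eigenface.
_, _, components = np.linalg.svd(centered, full_matrices=False)

k = 5                                   # keep the k strongest eigenfaces
eigenfaces = components[:k]

# Any face is then approximated as the mean face plus a linear
# combination of the k eigenfaces (the projection into "face space").
weights = centered @ eigenfaces.T
reconstruction = mean_face + weights @ eigenfaces
```

Recognition then compares the low-dimensional weight vectors instead of the raw 10,000-dimensional pixel vectors.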
As an improvement on the previous algorithm, Fisherfaces was developed to be independent of facial expression and illumination variations. In a classification-model analysis, each pixel of an image is considered a coordinate in a multidimensional space. Because faces have shadows, the image deviates from the linear subspace. To avoid having to explain these deviations, it is preferable to project the image linearly into a subspace in which the regions with significant deviations are neglected. The projection method is based on Linear Discriminant Analysis (LDA), which produces well-separated classes in a low-dimensional subspace even under strong variations of illumination and facial expression.
The first version of the Local Binary Pattern algorithm labels each pixel of an image using a threshold given by the central pixel of a 3 × 3 neighbourhood: if the value of a neighbouring pixel is greater than the threshold, it is overwritten with 1, and if it is smaller, it is overwritten with 0; a binary number is thus obtained. This method is considered an efficient way of describing texture.
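The thresholding step described above can be written out directly. A minimal sketch in NumPy; the clockwise neighbour ordering and the >= handling of ties are convention choices, not fixed by the algorithm:

```python
# Basic LBP operator: threshold a 3×3 neighbourhood against its centre
# pixel and read the 8 results as a binary number.
import numpy as np

def lbp_code(patch):
    """patch: 3×3 array of pixel values; returns the LBP code (0-255)."""
    center = patch[1, 1]
    # Clockwise neighbours starting at the top-left corner.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    # First neighbour becomes the least significant bit.
    return sum(bit << i for i, bit in enumerate(bits))

patch = np.array([[5, 9, 1],
                  [4, 4, 6],
                  [7, 2, 3]])
code = lbp_code(patch)   # bits 1,1,0,1,0,0,1,1 -> 203
```

Sliding this operator over the whole image produces the labelled image whose regional histograms the LBPH method uses next.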
The LBP-based facial recognition method uses the texture and shape of the images to represent the facial images. The facial surface is divided into smaller, equally sized regions called templates, from which the histogram of binary patterns is extracted; these histograms are then concatenated into a single spatial histogram with a much more precise characteristic. Experimental results have shown this method to be much more effective than the others.
3. Implementation of the adopted solution
Raspbian is a Linux operating system modified to be compatible with the Raspberry Pi platform. Installing it requires a microSD memory card, preferably with a capacity of 16 GB or more, so as not to run out of storage during the project. The creators of the Raspberry Pi recommend a class 10 speed card in order to take full advantage of the platform's transfer speeds.
The installation requires a program called Etcher, recommended by the official Raspberry Pi website. The operating system image can also be found on the official site; after downloading, it must be written to the memory card using the program mentioned above.
Before starting to configure the system, it is recommended to run the update commands. The operating system is configured from the configuration menu, either in the terminal or in the graphical interface. As a data-protection measure, it is recommended to set a password for the moments when the device will be accessed remotely over SSH. From this menu the SSH and VNC remote-control services are also enabled, as well as the camera port.
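The update and configuration steps above can be sketched as terminal commands. This is a sketch for Raspbian's raspi-config tool; the non-interactive (nonint) option names are assumptions that vary between raspi-config versions:

```shell
# Update the package lists and the installed packages first.
sudo apt-get update
sudo apt-get upgrade

# Open the text-mode configuration menu: change the password here and
# enable the SSH, VNC and camera interfaces.
sudo raspi-config

# Recent raspi-config versions can also toggle individual interfaces
# non-interactively (0 = enable):
sudo raspi-config nonint do_ssh 0
sudo raspi-config nonint do_vnc 0
sudo raspi-config nonint do_camera 0
```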
Installing the OpenCV library is a process that requires time and patience, because it has many dependencies that must also be installed. For this part it is recommended to use the SSH service rather than the graphical interface, because during the installation the board's processor is pushed to its limit, and using the graphical interface would lead to overheating and freezing. Using the SSH service requires connecting the platform to a local network. The IP address used to establish the connection can be found with a dedicated application that scans the devices present on the local network. This application detects both wirelessly connected devices and devices connected to the network by cable.
Figure 2. WnetWatcher application interface
The connection to the Raspberry Pi can be established with the Putty application. After connecting, the password set when configuring the operating system must be entered, and the terminal session starts. The installation of the OpenCV library proceeds as follows: configuring the storage space, installing the dependencies, compiling and installing the library.
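The stages listed above can be outlined as shell commands. This is only a sketch of a typical OpenCV 3.3.0 source build on Raspbian; the dependency list is abbreviated, and package names and paths are illustrative rather than the exact commands used here:

```shell
# 1. Build tools and a few representative library dependencies.
sudo apt-get install build-essential cmake pkg-config
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev

# 2. Fetch the sources, including the contrib modules that provide the
#    cv2.face recognizers used by the application.
wget -O opencv.zip https://github.com/opencv/opencv/archive/3.3.0.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/3.3.0.zip
unzip opencv.zip && unzip opencv_contrib.zip

# 3. Configure (inside the "cv" virtual environment) and compile.
cd opencv-3.3.0 && mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules ..
make -j4
sudo make install && sudo ldconfig
```

On a Raspberry Pi the compile step is the part that takes hours and keeps the CPU at full load, which is why SSH is preferred over the graphical interface here.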
The facial recognition application is created in three steps: building a face-detection program based on the Haar Cascade Classifier algorithm, which captures a number of face images in real time and stores them in the database; creating a trainer which, using the LBPH facial recognition algorithm together with the face-detection algorithm, extracts the information needed for recognition from the previously captured images and saves it to a YML file; and building the application responsible for detecting and recognizing faces in real time.
4. Experimental results
The experimental results were obtained by testing the facial recognition system with three different databases. The first database is the author's own, containing 3 subjects; the second is provided by MIT-CBCL and contains 10 subjects; and the third is provided by Caltech and contains 27 subjects. The first database had 50 samples as reference images for each person, and the second had 100 samples per subject. For the Caltech database, the number of samples was increased to 200, but because the Raspberry Pi platform has limited computing power, only 10 subjects were selected for testing. When testing was attempted with all 27 subjects, and even when they were reduced to 15, the application would not start. The remaining 17 subjects were used to test the confusion rate with people outside the databases.
For easier identification of the subjects, an SQLite3 database was created in which each subject had a unique identifier and other information such as name, gender and age. Recognition is done by setting a threshold on the confidence level after testing the application. The lower the confidence value, the higher the certainty that the recognized person is the right one. The tests were carried out independently for each database.
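The decision rule described above (a lower confidence value means a closer match) can be sketched in a few lines; the threshold value and the name lookup below are illustrative:

```python
# Accept a prediction only when its confidence (distance) is below a
# threshold; otherwise treat the face as unknown.
CONFIDENCE_THRESHOLD = 70.0   # tuned experimentally for each database

def identify(label, confidence, names):
    """Map a (label, confidence) prediction to a display name."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Known subject: look the label up in the subject table.
        return names.get(label, "Unknown ID")
    return "Unknown"          # too uncertain: outside the database

names = {1: "Subject A", 2: "Subject B"}  # would come from the SQLite table
```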
The comparison of the results can be seen in the graph below:
Figure 3. Experimental results comparison (own database: 92%; MIT-CBCL database: 70%; Caltech database: 70%)
5. Conclusions
The purpose of this project is to demonstrate that a facial recognition security system can be implemented on a small device with relatively low computing power. The solution had to be cheap, which is why only free software was used.
While carrying out this project I studied the face detection and recognition algorithms provided by the OpenCV library. The facial recognition method is based on the Local Binary Patterns Histograms algorithm, and face detection is based on the Haar Cascade Classifier algorithm.
The implemented solution consists of two main components: hardware and software. The hardware component comprises the Raspberry Pi 3 Model B development board and the original Raspberry Pi camera, version 2. The software component consists of the official Raspberry Pi operating system (Raspbian), the OpenCV library version 3.3.0, the SQLite3 database, and Python 3.5 as the chosen programming language.
The experiments were carried out with three databases in order to determine the accuracy of the system and to identify its errors. During the tests, the number of subjects had to be reduced for one of the databases, because the Raspberry Pi platform could not start the application. This happened because the LBPH facial recognition algorithm analyses each sample separately, while Eigenfaces treats all the samples as a whole. Of course, the Eigenfaces or Fisherfaces algorithms could have been used to increase the number of subjects in the database, but in that case the recognition accuracy would have dropped significantly. Since the role of this application is to serve as a security system, a large database is not needed.
Because confusions between subjects appeared during the tests with the own database when recognizing them simultaneously, simultaneous recognition was removed from the application to solve this problem. No person outside the database was recognized as being registered in it. The recognition accuracy for this database was determined to be 92%. For the larger databases, the tests were carried out with photographs of the subjects. Because of reflections, confusions appeared both among those present in the database and with people not registered in it. The recognition accuracy in these cases was determined to be 70%. Because training for the own database was performed with real people, none of the people were recognized in photographs. For a security system this is vital. Tests were also carried out with training on photographs of the subjects, and none of the people were recognized in reality; they were recognized only in photographs.
As future work on the system, I intend to identify other problems that occur in face detection or recognition, to solve them and to increase the system's accuracy. I would also like to create a graphical interface for the application and to test it in real conditions, for example on a mobile phone or at the entrance to a house's yard.
1. Work planning
Task name                                               Start        End
Documentation / State of the art                        01.11.2017   30.11.2017
Developing the theoretical framework (face detection,
face recognition) and selecting the tools               01.12.2017   31.12.2017
Installing the platform (Raspberry Pi, OpenCV)          01.01.2018   31.01.2018
Creating the databases
(datasets, test images, SQLite database)                01.02.2018   28.02.2018
Software development                                    01.03.2018   31.03.2018
Preliminary tests                                       01.04.2018   30.04.2018
Running the experiments                                 01.05.2018   31.05.2018
Writing the paper                                       01.06.2018   30.06.2018
Final checks                                            01.07.2018   10.07.2018
2. State of the art
When talking about facial recognition, the mind jumps to something from the future, high-tech, almost Sci-Fi, something seen only in Star Wars or Star Trek movies. The truth, however, is that the beginnings of this technology date back to the early 1960s. In those days there were many limitations due to the processing power of computers, but by now the technology has advanced so far that facial recognition is implemented as a security system in mobile phones and other tiny devices that have enormous computing power compared to the computers of the 1960s.
According to [Fac17], Woodrow Wilson Bledsoe is regarded as the main parent of facial recognition because in the 1960s he developed a system, also known as the RAND tablet, that could manually classify photos containing faces. The system was a device on which people could enter vertical and horizontal coordinates using a stylus that emitted electromagnetic pulses. These coordinates could mark the location of various facial features such as the nose, eyes and mouth.
The 1970s were the years when the accuracy of Bledsoe's system was increased thanks to L.D. Harmon, A.B. Lesk and A.J. Goldstein, who used 21 specific subjective facial markers such as hair color and lip thickness, although the measurements and locations (known as biometrics) were still computed manually.
After that, in the late 1980s, more precisely in 1988, L. Sirovich and M. Kirby started applying linear algebra to face recognition. In the early 1990s this approach became known as Eigenfaces. In 1991, M. Turk and A. Pentland improved the Eigenfaces algorithm when they discovered how to detect faces within images. These were the first steps toward automatic face recognition systems. The technology of the time was limited and affected by environmental factors, but this was very significant proof that an automatic face recognition system is possible.
From 1993 to the 2000s there was a research program initiated by The Defense Advanced Research Projects Agency (DARPA) and the National Institute of Standards and Technology, called FERET (Face Recognition Technology), which introduced face recognition as a business opportunity in order to encourage the development of a more powerful technology. For the project, a large database of images of human faces was created; in 2003 it was updated with high-resolution 24-bit color pictures, containing over 2,000 face images representing more than 800 persons. The database was created for testing the technology, in the hope that tests with such a large amount of information would be helpful in developing a more advanced facial recognition system.
One of the very first public tests of facial recognition technology was made in 2001 at Super Bowl XXXV. The test was declared a failure because a large number of people in the crowd were falsely flagged as criminals. This showed that the technology was not yet ready to serve as a security system in such big crowds.
A success for facial recognition came in 2009, when the Pinellas County Sheriff's Office created a database from the photo archives of the DHSMV (Department of Highway Safety and Motor Vehicles). By 2011, almost 170 officers had been equipped with video cameras connected to the database in order to compare suspects with the criminals present in it. The result was a large number of investigations and arrests of true criminals, which until then had not been possible.
Facebook, the biggest social media company of all time, started implementing facial recognition in 2010, a feature known today as tagging. It was heavily criticized by the media over privacy concerns, but users seem to very much like being able to tag their friends in their own photos. The result is that these days Facebook can automatically recognize the persons in photos.
Until 2011 there were many successful uses of facial recognition, such as security systems in airports or the identification of Osama bin Laden. In 2017, Apple, the biggest company in the world, launched the iPhone X, which has a very powerful and advanced face recognition security system they call Face ID. According to [App17], the sensor reads over 30,000 infrared dots on the user's face, and the chance that another person can unlock the phone with their face is 1 in 1,000,000. They also eliminated Touch ID, which used the fingerprint as a security system with a declared error rate of 1 in 50,000, so Face ID, as they put it, is a far better security system.
The article [Ope17] describes OpenCV (Open Source Computer Vision) as one of the most widely used computer vision libraries. It was started by Intel in 1999 and is now focused on real-time image processing, including face recognition, gesture recognition and many other tasks. It supports C, C++ and Python, and can also be used on Android. The main advantage of this library is that it is open source. Python is the best-liked programming language when it comes to OpenCV, because it is much simpler than C, C++ or Android development. OpenCV currently provides three algorithms for face recognition: Eigenfaces, Fisherfaces and LBPH (Local Binary Patterns Histograms).
With Eigenfaces and Fisherfaces, the image information is converted into vectors: each face image becomes a vector in a high-dimensional image space, and high dimensionality is known to be problematic. One solution to this problem is to apply Linear Discriminant Analysis. Nothing is perfect, so perfect environmental conditions cannot be obtained, and for good face recognition more than 10 photos are needed for each individual face.
[Swa03] says that Python is a powerful yet simple high-level programming language. It is object-oriented and has a very easy-to-understand syntax. It was first released in 1991 by its Dutch creator, Guido van Rossum. The name Python comes from the BBC show "Monty Python's Flying Circus". Its conception started in the late 1980s, and Van Rossum began the implementation in December 1989. Because of its simple syntax and its very large library collection, it is a perfect programming language for complex programs that involve a large amount of code.
As described in [Wik18], the Raspberry Pi is an SBC (Single-Board Computer), a tiny device about the size of a credit card that aims to promote the learning of basic computer concepts in schools. In 2006 the first Raspberry Pi concept was based on an Atmel ATmega644 microcontroller. The representative of the Raspberry Pi Foundation, Eben Upton, brought together a group of academic professors and computer enthusiasts with the objective of building a computer that would inspire kids. The latest version, the Raspberry Pi 3 Model B+, was launched on 14 March 2018 and its price is less than $40. It has a 1.4 GHz 32/64-bit quad-core ARM Cortex-A53 processor, 1 GB of RAM at 900 MHz, a 400 MHz Broadcom VideoCore IV GPU, built-in HDMI, a headphone jack, Bluetooth, Wi-Fi, 4 USB 2.0 ports, two special ports (one dedicated to the OEM video camera, the other to an LCD screen) and 40 GPIO (general-purpose input/output) pins. Storage is provided by a microSD card, and the board is powered by a micro USB power supply rated at 5.1 V and 2.5 A. It also has an Ethernet port that supports PoE (Power over Ethernet), and it supports more than 20 operating systems, including Windows. Since the launch of the very first Raspberry Pi, many uses have been found for the device, including security systems based on face recognition.
Table 1 presents a comparison between all Raspberry Pi versions, from the first launch to the latest version from March 2018, excluding the Raspberry Pi Zero models, which are much smaller and less powerful but cheaper than the regular Raspberry Pi versions. The Zero versions can be used in applications that have little space available but still need sufficient computing power. All the Raspberry Pi boards presented in the table are OEM boards designed by the Raspberry Pi Foundation.
Generation             Model A: 1, 1+;  Model B: 1, 1+, 2, 2 ver 1.2, 3, 3+
Release date           A: 2013; A+: 2014; B: 2012; B+: 2014; 2: 2015; 2 ver 1.2: 2016; 3: 2016; 3+: 2018
Price                  A: $25; A+: $20; B: $35; B+: $25; 2, 2 ver 1.2, 3, 3+: $35
CPU                    700 MHz SC, 32b (generation 1); 900 MHz QC, 32b (2); 900 MHz QC, 64b (2 ver 1.2); 1.2 GHz QC, 64b (3); 1.4 GHz QC, 64b (3+)
GPU                    Broadcom VideoCore IV at 400 MHz (all models)
RAM (shared with GPU)  256 MB (A, A+); 512 MB (B, B+); 1 GB (2, 2 ver 1.2, 3, 3+)
USB 2.0 ports          1 (A, A+); 2 (B); 4 (B+, 2, 2 ver 1.2, 3, 3+)
Video camera port      15-pin camera interface (CSI) connector; some USB webcams compatible with Linux also work (all models)
Video output           One HDMI port and one display interface port for raw LCD panels (all models)
Audio output           Analog via 3.5 mm headphone jack and digital via HDMI (all models)
Storage                SD card (A, B); microSD card (A+, B+, 2, 2 ver 1.2); microSD card and USB boot mode (3, 3+)
Network                None (A, A+); Ethernet port (B, B+, 2, 2 ver 1.2); Ethernet port, Wi-Fi, Bluetooth (3, 3+)
Power supply           5.1 V, 2.5 A; PoE supported on the 3+
Table 1. Comparison between Raspberry Pi versions (SC = single-core, QC = quad-core, b = bits) [Wik18]
The [Cry17] article states that SQLite is the most used relational database system in the world. According to its developers, it is used more than all other relational database systems combined, and it is among the top 5 most deployed software libraries. It can be found in every mobile phone running Android or iOS, in every Mac or Windows 10 machine, in every browser such as Firefox, Chrome or Safari, in Skype, iTunes and Dropbox, in every Apple device, on the Airbus A350 airplanes, and in most of the in-car multimedia systems produced nowadays.
The main advantage of this database is that it is "stand-alone" and "self-contained". This means it has no dependencies except a few C libraries. It can run on every operating system, and the whole library can be stored in a single file of just over 6 MB.
Another advantage of the database is that it is "serverless". It does not run as a client-server system; there are no processes communicating with each other through TCP/IP. The process that wants to access the database reads and writes directly to the .db file on the disk.
It is also zero-configuration, which means it does not have to be installed before being used. No process must be started and there is no configuration file; it is usually embedded in the final application. This database is recommended for situations such as: Internet of Things, embedded devices, websites, data analysis and so on.
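The "serverless" usage described above can be illustrated with Python's built-in sqlite3 module (a minimal sketch; the table and values are made up, and an in-memory database stands in for a .db file on disk):

```python
import sqlite3

# No server process is started: the whole database lives in one file
# (":memory:" here; a path such as "faces.db" would work the same way).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE persons (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO persons (name) VALUES (?)", ("Alice",))
row = conn.execute("SELECT name FROM persons").fetchone()
print(row[0])  # Alice
conn.close()
```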
To protect their data or even their personal property, people still rely on the old password technology as a security method. Many efficient methods of breaking those passwords have appeared, and hackers can easily steal people's personal data or even their property, using programs that generate thousands of possible combinations per second. Because of this problem, researchers came up with a solution based on biometrics. Such biometric systems can use fingerprint recognition, iris recognition or face recognition.
A facial recognition system is a computer-based application that detects and recognizes a person's face in a digital image or a live camera feed. This is possible by comparing the facial characteristics of a person with a database created by the user. Applications like this work in the same way as fingerprint and iris recognition systems do.
Facial recognition based on geometric characteristics seems to be the most natural way for a computer to do what for humans is an innate skill; even babies only a few months old can distinguish between known faces. According to [Sta11], the first automatic facial recognition system was described in 1973 by the Japanese computer scientist Takeo Kanade. In his system, points of interest (the positions of the mouth, nose, eyes and ears) were used to build a characteristic vector (the distances between the interest points and the angles between them). Face recognition was performed by calculating the Euclidean distance between the characteristic vector of a test image and that of a reference image. This method is robust to lighting variations, but its main disadvantage is that the exact registration of the interest points is very complicated.
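The recognition rule described above can be sketched as a nearest-neighbor search (the feature vectors below are made-up distance/angle values for two hypothetical enrolled subjects, not Kanade's actual measurements):

```python
import numpy as np

# Hypothetical enrolled subjects, each with a characteristic vector of
# distances between interest points and angles (illustrative values).
reference = {
    "alice": np.array([42.0, 31.5, 18.2, 0.61]),
    "bob":   np.array([39.0, 35.0, 21.0, 0.48]),
}
test_vec = np.array([41.5, 31.0, 18.5, 0.60])  # vector from a test image

# Pick the subject whose vector minimizes the Euclidean distance.
best = min(reference, key=lambda name: np.linalg.norm(reference[name] - test_vec))
print(best)  # alice
```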
This kind of face recognition application can be implemented on small and portable devices (like the iPhone X) that do not necessarily have a huge computing power. The Raspberry Pi 3 Model B is a very cheap device that has enough power to handle such an application, which can be used for home security, protecting personal data and many other purposes.
3. Theoretical Fundamentals
The following sections present the software and the hardware on which the solution proposed in this paper is implemented. The focus is on the OpenCV library, because it is the central component of this facial recognition system. The equations, block schemes and diagrams that explain the face recognition algorithms used by the OpenCV library will also be presented.
3.1 OpenCV (Open Source Computer Vision)
According to [Flo13], OpenCV is officially a library of open source functions, originally written in C and C++, but also compatible with Android and Python. It was designed to be a computationally efficient tool for applications that use real-time computer vision. The library contains over 500 functions covering many areas such as medicine, robotics, camera calibration, stereoscopic vision (known as 3D) and security. Additionally, it also contains a set of machine learning functions.
The purpose of this library is to provide its users with an easy-to-use image processing infrastructure that can be used to develop very complex applications. It is open source for whoever wants to use it and is released under the BSD license.
First launched in 1999 by Intel, the OpenCV project was an initiative of Intel's researchers for CPU-intensive applications. In the original project the targets were 3D display walls and ray tracing. The main contributors to the project were Intel's programmers and part of Intel Russia's experts in image processing and analysis.
At the start of the project, the main objectives were:
• To advance research in the computer vision field, offering potential future users not only functional code, but also optimized code, as a fundamental platform for visual information processing
• To standardize programming knowledge in the image processing field, offering programmers a portable and easy-to-use platform
• To support advanced commercial applications that use image processing, by keeping the code optimized and portable.
The first alpha version of the OpenCV library was launched at the "IEEE Conference on Computer Vision and Pattern Recognition" in 2000, and 5 beta versions were launched between 2001 and 2005. The first 1.0 version was launched in 2006, and around the middle of 2008 OpenCV obtained support from the Willow Garage corporation and was under active development again. Version 1.1 "pre-release" was launched in October 2008, and a book called "Learning OpenCV" was published in the same month.
A second version, called OpenCV 2, was released in October 2009 and includes major changes in the C++ interface, offering new functions and a better implementation of the already existing functions, especially for multi-core systems.
OpenCV is now available for free at [Sou18], and it can be used on Windows, Linux and Mac OS X with C, C++, Android and Python.
3.1.1 Structure of the OpenCV library
The OpenCV library contains more than 500 functions for many areas that use computer vision, and it is structured in four principal components:
Figure 1. OpenCV library structure [Flo13]
1. CV: Contains the basic algorithms for image processing (e.g. geometric transformations of images).
2. MLL: Represents the "machine learning" library and contains many statistical classifiers and clustering functions (e.g. training data, decision trees, k-nearest neighbors).
3. HighGUI: Contains the I/O functions and the functions for loading and storing images and video files (e.g. video capture, video writer).
4. CXCORE: Contains the basic data types and structures (e.g. base cascade classifier, align exposures).
3.1.2 Face detection
Face detection is used in many applications for identifying faces in images or video sequences. Face detection is similar to object detection, and the algorithms are based on faces that look forward, because those are easier to detect.
Systems that include facial recognition are capable of detecting and then recognizing the identity of a specific person from a digital image or video sequence. The face recognition method is based on comparing specific characteristics present in a digital image or video sequence with an image stored in a database.
To build a good facial recognition system, a very precise face detection is needed. Real-time face detection in video sequences or live video is easier and more precise than face detection in still images, although the required processing power is higher, because the faces are in continuous movement while the environment is static. Using methods that eliminate the static components from the frames, a high detection accuracy can be obtained.
As the [Doc17] article says, the Haar cascade classifier is a face and object detection algorithm used in most face detection applications. This algorithm is in fact a machine learning approach based on a cascade function that is trained with many positive images (containing faces) and negative images (without faces).
Figure 2. Face detection example
This concept was first introduced by P. Viola and M. Jones in their 2001 paper [Pau01], "Rapid Object Detection using a Boosted Cascade of Simple Features". The integral image at the coordinates (x, y) contains the sum of the pixels to the left of and above the current pixel. The integral image can therefore be written as:

intX(x, y) = \sum_{x' \le x,\; y' \le y} X(x', y')    (1)

where intX represents the integral image and X is the original image. The integral image reduces the number of operations performed on the images. In image processing it is generally necessary to extract features from a specific region, not from the whole image; this region is called the Region of Interest (ROI).
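Equation (1) can be illustrated with cumulative sums (a minimal NumPy sketch on a made-up 2×2 "image"):

```python
import numpy as np

# Integral image via cumulative sums along both axes (eq. 1).
X = np.array([[1, 2],
              [3, 4]], dtype=np.int64)
intX = X.cumsum(axis=0).cumsum(axis=1)
print(intX)
# The bottom-right entry is the sum of all pixels: 1 + 2 + 3 + 4 = 10.
```

With the integral image precomputed, the sum over any rectangular ROI can be obtained from just four lookups, which is what makes Haar feature evaluation fast.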
3.1.3 Face recognition
As the [Ope17] article says, recognizing known faces seems very easy for people; even newborn babies can recognize and distinguish known faces. T. Wiesel and D. Hubel found that the human brain has special nerve cells that respond to specific features our eyes see in different scenes, such as angles, edges, lines and movement. Our brain combines these different pieces of information into something useful, and this is the main reason humans do not perceive the world as scattered fragments. So the question is: how hard can it be for a computer to recognize a face?
The following presents the three methods that OpenCV uses for face recognition applications: Eigenfaces, Fisherfaces and Local Binary Pattern Histograms. For each of them, an algorithmic description will be given, together with examples showing how images containing human faces are seen by computer vision in order to recognize different persons without confusing them with other persons, present or not in the database.
1. Eigenfaces
Any grayscale image is a matrix of p × q elements, which can be represented as a linear vector of dimension m = pq; a 100 × 100 pixel image therefore results in a 10,000-dimensional vector. Using all 10,000 dimensions for face recognition is impractical and error-prone: all human faces are broadly similar, which results in very similar vectors. The vectors describing the subjects' faces are correlated, and this is exploited using Principal Component Analysis (PCA). Researchers have tried to obtain a better representation of the faces by searching for the vectors that influence the image distribution the most. These vectors create a facial space that represents faces better than the image space, which contains a lot of unnecessary information.
These principal vectors can be seen as a set of general characteristics of the variations among the images present in the database. When the portraits are brought to a standard dimension, they can be treated as one-dimensional vectors of pixel values. Every image has an exact representation as a linear combination of these principal vectors.
Algorithmic description of Eigenfaces method:
Let X = \{x_1, x_2, \ldots, x_n\} be a random vector with x_i \in \mathbb{R}^d.
Compute the mean \mu:

\mu = \frac{1}{n} \sum_{i=1}^{n} x_i    (2)

Compute the covariance matrix S:

S = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T    (3)

Compute the eigenvalues \lambda_i and eigenvectors v_i of S:

S v_i = \lambda_i v_i, \quad i = 1, 2, \ldots, n    (4)

Then order the eigenvectors descending by their eigenvalue. The k principal components are the eigenvectors corresponding to the k largest eigenvalues.
The k principal components of the observed vector x are then given by:

y = W^T (x - \mu)    (5)

The reconstruction from the PCA basis is given by:

x = W y + \mu    (6)

where W = (v_1, v_2, \ldots, v_k).
The Eigenfaces method performs face recognition in the following way:
• Project all training samples into the PCA subspace
• Project the query image into the PCA subspace
• Find the nearest neighbor between the projected training images and the projected query
image.
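The three steps above can be sketched with NumPy on toy data (a minimal sketch using synthetic random vectors in place of real face images; the variable names follow equations (2)-(5)):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training images": n samples, each a d-dimensional pixel vector.
n, d, k = 20, 100, 5
X = rng.normal(size=(n, d))

mu = X.mean(axis=0)                        # eq. (2): mean
C = X - mu
S = C.T @ C / n                            # eq. (3): covariance matrix
evals, evecs = np.linalg.eigh(S)           # eq. (4): eigendecomposition
W = evecs[:, np.argsort(evals)[::-1][:k]]  # k leading eigenvectors

def project(x):
    return W.T @ (x - mu)                  # eq. (5): projection

# Recognition: nearest neighbor in the PCA subspace.
train_proj = np.array([project(x) for x in X])
query = X[3] + 0.01 * rng.normal(size=d)   # a slightly perturbed sample
dists = np.linalg.norm(train_proj - project(query), axis=1)
print(int(np.argmin(dists)))  # 3: the query matches training sample 3
```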
The following picture uses the jet colormap to show that Eigenfaces encode not only facial features but also the illumination present in the images.
Figure 3. Jet colormap of Eigenfaces samples [Ope17]
The following picture shows how Eigenfaces reconstructs an image. Clearly, 10 eigenvectors cannot produce a good reconstruction, but this is only an example of how the algorithm works. Tests showed that approximately 300 eigenvectors are needed for a good reconstruction.
Figure 4. Eigenfaces reconstruction [Ope17]
2. Fisherfaces
After Eigenfaces, a face recognition algorithm was developed that is independent of facial expressions and of variations in lighting. In an analysis of the classification model, each pixel of an image is considered a coordinate in a multidimensional space. Because faces have shadows, the images deviate from the linear subspace. Instead of modeling those deviations, it is preferred to project the image linearly into a subspace in which the regions with large deviations are neglected. The projection method is based on the linear discriminant of the great statistician Sir R. A. Fisher, called Linear Discriminant Analysis (LDA), which produces well-separated classes in a lower-dimensional subspace, even when there are strong variations of light and facial expressions. The Fisherfaces method is similar to Eigenfaces, but experimental results revealed that Fisherfaces has a significantly lower error rate than the Eigenfaces method.
Algorithmic description of Fisherfaces method:
Let X be a random vector with samples drawn from c classes:

X = \{X_1, X_2, \ldots, X_c\}
X_i = \{x_1, x_2, \ldots, x_n\}

The scatter matrices S_B and S_W are calculated as:

S_B = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T    (7)

S_W = \sum_{i=1}^{c} \sum_{x_j \in X_i} (x_j - \mu_i)(x_j - \mu_i)^T    (8)

where \mu is the total mean:

\mu = \frac{1}{n} \sum_{i=1}^{n} x_i    (9)

and \mu_i is the mean of class i \in \{1, \ldots, c\}:

\mu_i = \frac{1}{|X_i|} \sum_{x_j \in X_i} x_j    (10)

Fisher's classic algorithm then looks for a projection W that maximizes the class separability criterion:

W_{opt} = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|}    (11)

As in [Pet97], to solve this optimization problem, the general eigenvalue problem must be solved:

S_B v_i = \lambda_i S_W v_i    (12)

S_W^{-1} S_B v_i = \lambda_i v_i    (13)
The rank of the scatter matrix S_W is at most (N - c), with N samples and c classes. In pattern recognition problems the number of samples N is almost always smaller than the dimension of the input data (the number of pixels), so according to [Sar91], S_W becomes singular. In [Pet97] this problem was solved by performing a PCA on the data and projecting the resulting samples into the (N - c)-dimensional space. After that, an LDA was performed on the reduced data, because the S_W matrix is no longer singular.
The optimization problem can then be rewritten as:

W_{pca} = \arg\max_W |W^T S_T W|    (14)

W_{fld} = \arg\max_W \frac{|W^T W_{pca}^T S_B W_{pca} W|}{|W^T W_{pca}^T S_W W_{pca} W|}    (15)

The transformation matrix W that projects a sample into the (c - 1)-dimensional space is then given by:

W = W_{fld}^T W_{pca}^T    (16)
The following picture shows how Fisherfaces reconstruction works. To the human eye the differences between the samples can be hard to distinguish, but on closer inspection they become visible.
Figure 5. Fisherfaces reconstruction [Ope17]
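Equations (7)-(10) and (13) can be sketched on toy two-dimensional data (a minimal illustration with made-up samples from two classes, not a full Fisherfaces implementation):

```python
import numpy as np

# Two classes of 2-D samples (illustrative values, chosen so S_W is invertible).
X = [np.array([[1.0, 2.0], [2.0, 2.0]]),   # class 1
     np.array([[6.0, 5.0], [6.0, 7.0]])]   # class 2

all_x = np.vstack(X)
mu = all_x.mean(axis=0)                     # eq. (9): total mean
mus = [Xi.mean(axis=0) for Xi in X]         # eq. (10): class means

# eq. (7): between-class scatter; eq. (8): within-class scatter.
S_B = sum(len(Xi) * np.outer(mi - mu, mi - mu) for Xi, mi in zip(X, mus))
S_W = sum(np.outer(x - mi, x - mi) for Xi, mi in zip(X, mus) for x in Xi)

# eq. (13): the LDA directions are the eigenvectors of S_W^{-1} S_B.
evals, evecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
w = evecs[:, np.argmax(evals.real)].real    # most discriminative direction
print(np.round(w, 3))
```

Projecting the samples onto w separates the two classes maximally in one dimension; for real face data the PCA step of equations (14)-(16) must precede this, since S_W would otherwise be singular.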
3. Local Binary Pattern Histograms (LBPH)
The first version of LBP was introduced by T. Ojala and his collaborators in 1996, and it is considered an efficient way of describing texture. This method labels each pixel of an image using a threshold given by the central pixel of a 3×3 neighborhood: if the value of a neighboring pixel is greater than or equal to the threshold, that value is overwritten with 1, and if it is less than the threshold it is overwritten with 0, so a binary number is obtained.
Figure 6. LBP example
This face recognition method uses the texture and the shape of the images to represent facial images. The facial surface is divided into smaller, equal regions called templates, from which the histograms of the binary patterns are extracted; these are then concatenated into one single spatial histogram with a more precise characteristic. Experimental results proved that this method performs better than the others; the research considered lighting, facial expressions and the aging of the persons. Using this method, the information in a given image can be extracted very easily.
Over the years, more and more facial recognition systems have appeared, which shows that the science has evolved significantly. However, this technology is still under development because it still has a high error rate, so the main objective for researchers is to increase the accuracy. For this purpose a database was created to compare the accuracy of different facial recognition algorithms under changes in position and illumination.
Algorithmic description of LBPH method:
The LBP operator can be given as:

LBP(x_c, y_c) = \sum_{p=0}^{P-1} 2^p \, s(i_p - i_c)    (17)

where (x_c, y_c) is the central pixel with intensity i_c, and i_p is the intensity of the neighboring pixel p. The sign function s is defined as:

s(x) = \begin{cases} 1, & \text{if } x \ge 0 \\ 0, & \text{else} \end{cases}    (18)
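The operator of equations (17)-(18) can be sketched for a single 3×3 patch (the clockwise ordering of the neighbors below is an assumption for illustration; implementations differ in where they start the sampling):

```python
import numpy as np

def lbp_code(patch):
    """LBP code of the center pixel of a 3x3 patch (eqs. 17-18)."""
    center = patch[1, 1]
    # Clockwise neighbor coordinates starting at the top-left corner.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for p, (r, c) in enumerate(coords):
        if patch[r, c] >= center:  # s(i_p - i_c) = 1 when i_p >= i_c
            code |= 1 << p         # weight 2^p from eq. (17)
    return code

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))
```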
This method allows very fine details in an image to be captured. After a while it was discovered that a fixed neighborhood fails to encode details that differ in scale. So, as in [Tim04], the solution was to extend the operator to use a variable neighborhood: aligning an arbitrary number of neighbors on a circle with a variable radius makes it possible to capture neighborhoods at different scales.
For a given point (x_c, y_c), the position of the neighbor (x_p, y_p) is calculated by:

x_p = x_c + R \cos\left(\frac{2\pi p}{P}\right)    (19)

y_p = y_c + R \sin\left(\frac{2\pi p}{P}\right)    (20)

where p \in P; P is the number of sample points and R is the radius of the circle.
The operator used by this method is known as an extension of the original LBP code, sometimes called Extended LBP. If a point's coordinates do not fall exactly on image coordinates, interpolation is applied. In the OpenCV implementation, bilinear interpolation is used:

f(x, y) \approx \begin{bmatrix} 1-x & x \end{bmatrix} \begin{bmatrix} f(0,0) & f(0,1) \\ f(1,0) & f(1,1) \end{bmatrix} \begin{bmatrix} 1-y \\ y \end{bmatrix}    (21)
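Equation (21) can be checked numerically (a minimal sketch with made-up corner values):

```python
import numpy as np

def bilinear(F, x, y):
    # F is the 2x2 matrix [[f(0,0), f(0,1)], [f(1,0), f(1,1)]] from eq. (21).
    return np.array([1 - x, x]) @ F @ np.array([1 - y, y])

F = np.array([[10.0, 20.0], [30.0, 40.0]])
print(bilinear(F, 0.0, 0.0))  # 10.0: a corner reproduces its own value
print(bilinear(F, 0.5, 0.5))  # 25.0: the center is the average of the corners
```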
The following figure shows an example of how LBP works. As can be seen, the LBP operator is robust against grayscale transformations of the image.
Figure 7. How LBP works [Ope17]
For this project the LBPH algorithm has been chosen, because it is the most used algorithm in face recognition applications, having the highest accuracy (i.e. the lowest error rate) of the three algorithms offered by the OpenCV library.
3.2 Python
Python was launched by its Dutch creator Guido van Rossum in 1991 and has become a very popular programming language because it is easy to learn and easy to use. It is an interpreted language, not a compiled one, which means that an application written in Python under Windows can also run without any problems under Linux.
According to [Pie16], Python is a high-level language for general-purpose programming with a very clear syntax. Because it is interpreted, it was not very popular at the beginning, due to software and hardware limitations, but with the advance of technology Python became one of the most popular programming languages. Using Python, applications as complex and powerful as in C/C++ can be written in far fewer lines of code. Like Java, it has an internal garbage collector that frees up memory automatically.
Python is a programming language that offers a lot of functionality; it can be learned very quickly by programmers who already know languages like Java and C/C++, and it is also ideal for beginners. It supports OpenCV and other libraries that can be used for computer vision and image processing applications.
This programming language was chosen for this project because it is available on many platforms, including the Raspberry Pi, it supports OpenCV, and it has many useful tools for applications based on face recognition.
3.3 Raspberry Pi
According to [Wik18], the Raspberry Pi board is a very cheap computer, not very powerful but sufficient for its purpose. Eben Upton, the CEO of the Raspberry Pi Foundation, wanted to create a small device powerful enough to inspire children and encourage them to learn about computer science. After almost 7 years of research, with a team formed of university professors, researchers and computer enthusiasts, they finally managed to launch the first Raspberry Pi board in 2012. Since then, a number of models have been launched, and their applications are unlimited, from robotics to security systems, smart homes and much more.
The Raspberry Pi 3 Model B was launched in February 2016 at a price of 35$. Its specifications are: a 1.2 GHz 32/64-bit quad-core ARM Cortex-A53 processor, a Broadcom VideoCore IV GPU at 400 MHz, 1 GB of RAM (shared with the GPU), 4 USB ports, HDMI and DSI ports for video output, a 15-pin CSI port for the OEM Raspberry Pi video cameras, a 3.5 mm jack for analog audio output and HDMI for digital audio, an Ethernet port, built-in Wi-Fi and Bluetooth, and a microSD card slot for storage; it is powered via micro USB with a 5.1 V, 2.5 A adapter. Its size is similar to a credit card.
The Raspberry Pi 3 Model B was chosen for this project because, when the project was started, it was the latest and most powerful version available on the market. A Raspberry Pi board was chosen because the objective is to implement a facial recognition system on a device small enough to fit in a pocket, like a mobile phone. There are many alternatives on the market, such as the Asus Tinker Board, but that one is much more expensive and less suitable than the Raspberry Pi. An Arduino board was not chosen because it does not have an operating system or interfaces such as USB or CSI that would allow the facial recognition system to be implemented.
4. Implementation
The chosen solution was implemented as follows: installing the Raspbian OS on the Raspberry Pi board, configuring the operating system, installing the OpenCV library, and creating the facial recognition application. The main steps will be presented, with the relevant Linux commands and suggestive figures.
Figure 8. Block scheme (hardware: the Raspberry Pi board with the camera connected via the CSI port, peripherals via USB and HDMI, and a laptop/PC for VNC remote control over the Internet; software: the OpenCV library, the face recognition application and the training database of face models/datasets)
4.1 Installing Raspbian OS
Raspbian is the official operating system for the Raspberry Pi boards. It is a version of the Linux distribution Debian, modified to be compatible with the Raspberry Pi platform.
For this operation a microSD card is needed, preferably one with a capacity of 16 GB or more, to be sure it will not run out of storage. The Raspberry Pi official site recommends a class 10 microSD card for the highest transfer speeds, so a SanDisk Ultra class 10 microSDHC memory card with 16 GB of storage was chosen.
To install the OS on the microSD card, a piece of software called Etcher is needed, also recommended by the Raspberry Pi official site. The Raspbian image can be found on the Raspberry Pi official website, in two versions: one called "Raspbian Stretch Lite" and the other called "Raspbian Stretch with Desktop". The Lite version comes without a graphical interface; because live facial recognition needs a graphical interface, the version with desktop was used. Stretch is the name of the latest Raspbian version. The downloaded file is a .ZIP archive that Etcher automatically recognizes as an image. After selecting the .ZIP image, the memory card on which the operating system will be installed must be selected, and the "Flash!" button starts the installation.
Figure 9. Raspbian OS installation using Etcher
4.1.1 Configuring the operating system
After the image has been flashed onto the memory card, the card can be inserted into the Raspberry Pi board. For its first use, a mouse, keyboard and display must be connected, and optionally (but recommended) an Ethernet cable. When it is plugged in, the Raspberry Pi will start automatically in desktop mode. Now the configuration of the operating system can begin.
First of all, it is recommended to open the "Terminal" and run the following command:
$ sudo apt-get update && sudo apt-get upgrade
This command updates the package lists and upgrades the operating system and the existing packages.
After this, it is recommended to restart the Raspberry Pi so that everything works correctly:
$ sudo reboot
When the system boots up again, the "Raspberry Pi Configuration" menu must be accessed in order to start the proper configuration. The following figure shows how to do that:
Figure 10. Entering the "Raspberry Pi Configuration" menu
The first tab in this menu is called "System"; there you can change the hostname, switch the boot mode from Desktop to CLI (Command-Line Interface), enable or disable auto login, enable or disable the network boot option, set the screen resolution and set a password. Setting a password is important, because the default password is "raspberry"; knowing this, anyone else could access your Raspberry Pi without problems, even remotely. To avoid that and protect your work, setting a good password is highly recommended.
The following figure shows how to change the password: press the "Change Password…" button, then, in the window that appears, enter and confirm the desired password:
Figure 11. Changing the Raspberry Pi password
The next tab, called "Interfaces", lets you enable or disable different features available on the Pi board. For this project the "Camera" feature must be enabled in order to work with the Pi camera. It is also recommended to enable the "SSH" and "VNC" features for connecting to the Raspberry Pi remotely. The SSH feature allows you to access the terminal from other devices, running the same or another operating system. To use this feature, you must install on the other machine a program called PuTTY or another similar program that uses the SSH protocol. With this protocol only the terminal can be accessed, not the graphical interface.
Figure 12. Enabling interfaces
VNC (Virtual Network Computing) is a program that allows the user to remotely access the graphical interface of the Raspberry Pi from another device running another operating system. It is available on many platforms, including Windows and Mac OS X, but it must be paid for. For the Raspberry Pi this application is offered for free by its creators, to encourage computer science learning. To use it, you should create an account on their official site, then enter it in the VNC application available on Raspbian. On the remote device, an application called VNC Viewer must be installed; enter the same account there, and you can connect to the Raspberry Pi with a graphical interface, even from a mobile phone.
Figure 13. VNC remote connection from an iPhone
The third tab in the "Raspberry Pi Configuration" menu is called "Performance"; it is recommended to leave the default settings, because increasing the default performance of the Raspberry Pi can lead to overheating and cause significant damage, even its destruction.
The last tab, called "Localisation", is where you can set the location, the time zone, the keyboard layout and the Wi-Fi country. It is recommended to configure these settings in order to avoid compatibility problems with some apps, devices and peripherals, or even with the Wi-Fi networks in your country if you want to connect the Raspberry Pi to the Internet wirelessly.
4.2 Installing the OpenCV library
This part is probably the most time-consuming step before creating the face recognition application, because there are many dependencies and prerequisites that must be installed. The whole operation can be done without using the graphical interface. It is recommended to use the SSH protocol and install OpenCV from another device, because the build consumes a lot of resources, and using the graphical interface at the same time can lead to overheating and freezing of the Raspberry Pi.
To do this, PuTTY was installed on a laptop running Windows. The IP address of the Raspberry Pi must be known; it can be found in the Pi's network settings using the graphical interface, or with an application called WnetWatcher, which finds all the devices connected to the same local network and shows their IP address, MAC address and network adapter manufacturer, to help you identify the desired device. The advantage of this application is that it is free and also discovers the devices connected by cable, not only the wireless ones.
Figure 14. WnetWatcher application interface
Now the remote connection via SSH is possible: enter in the PuTTY application the IP address of the Raspberry Pi found with the application mentioned above, select the SSH protocol, and leave the port at 22, the default one. After the connection starts, you will be asked to enter the user name, which is "pi", and the password that you set during the operating system configuration. The connection to the terminal begins, and the installation of the OpenCV library can be started.
Raspbian somewhat limits the space you can use on the microSD card, but this problem can be solved by first running the following command:
$ sudo raspi-config
This command opens a menu called "Raspberry Pi Software Configuration Tool", as shown in the next figure:
Figure 15. The "Raspberry Pi Software Configuration Tool" menu
The next step is to enter the "Advanced Options" tab, followed by "Expand Filesystem". You will then be asked whether you are sure you want to perform this operation; select Yes and then Finish. Finally, the device must be rebooted for the changes to take effect. Then connect to the terminal again using PuTTY and type the following command to see the usage of the space on the memory card:
$ df -h
The command returns the following:
Filesystem Size Used Avail Use% Mounted on
/dev/root 15G 4.4G 9.5G 32% /
devtmpfs 434M 0 434M 0% /dev
tmpfs 438M 0 438M 0% /dev/shm
tmpfs 438M 17M 422M 4% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 438M 0 438M 0% /sys/fs/cgroup
/dev/mmcblk0p1 42M 21M 21M 51% /boot
tmpfs 88M 0 88M 0% /run/user/1000
As can be seen, 4.4 GB are already used. To be sure that you will not run out of space, some apps that are pre-installed with Raspbian but not needed for this project can be removed. So, in order to do this, run the following commands:
$ sudo apt-get purge wolfram-engine
$ sudo apt-get purge libreoffice*
$ sudo apt-get clean
$ sudo apt-get autoremove
After freeing up some space, to be sure it will not be a problem during the next steps of the implementation, the installation of the OpenCV library can proceed:
1. Installing the dependencies: Some developer tools are needed, including CMake, which is the main tool in the OpenCV build process. In a new terminal window, the following command will be run:
$ sudo apt-get install build-essential cmake pkg-config
Next is an I/O package that allows loading image file formats from storage:
$ sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
An I/O package is also needed for video file formats and for live video streaming:
$ sudo apt-get install libavcodec-dev libavformat-dev
$ sudo apt-get install libswscale-dev libv4l-dev
$ sudo apt-get install libxvidcore-dev libx264-dev
The OpenCV library comes with a sub-module called “highgui”, which is used to display images on the screen and build basic GUIs. To compile this module, the GTK development library must be installed:
$ sudo apt-get install libgtk2.0-dev libgtk-3-dev
Some matrix operations that the OpenCV library performs can be optimized by installing the following dependencies:
$ sudo apt-get install libatlas-base-dev gfortran
The last step of the dependency installation is to install the Python bindings in order to compile OpenCV with Python. Now it is time to decide which version of Python will be used for the face recognition application; for this project it is better to use Python 3.5, for its features, instead of Python 2.7. The following command will be run in the terminal:
$ sudo apt-get install python3-dev
At this point the installation of the dependencies is done. In case of errors during the download or the installation process, all the commands above must be run again from the beginning, or just the commands that gave the error. Now you can move on to the next step, which is downloading the OpenCV library.
2. Downloading the OpenCV library: With all the dependencies successfully installed, it is time to download OpenCV from its official repository. The following commands will be run:
$ cd ~
$ wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.3.0.zip
$ unzip opencv.zip
In order to have access to all the OpenCV features, it is also needed to download and unzip the OpenCV contrib repository:
$ wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/3.3.0.zip
$ unzip opencv_contrib.zip
3. Installing the Python package manager: Before compiling the OpenCV library on the Raspberry Pi, the Python package manager called “pip” must be installed:
$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python3 get-pip.py
4. Creating the Python virtual environment: Now it is time to create a virtual environment for this project. The main purpose of Python virtual environments is to create an isolated environment for every project. This means that each project can have its own dependencies, regardless of what dependencies every other project has. So, to install the virtual environment tools, run the following command and then remove the pip cache:
$ sudo pip install virtualenv virtualenvwrapper
$ sudo rm -rf ~/.cache/pip
With the virtual environment tools installed, it is time to modify the ~/.profile file to include the following lines at the bottom of the file:
$ echo -e "\n# virtualenv and virtualenvwrapper" >> ~/.profile
$ echo "export WORKON_HOME=$HOME/.virtualenvs" >> ~/.profile
$ echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.profile
Now that the “~/.profile” file has been updated, it must be reloaded to make sure the changes take effect:
$ source ~/.profile
Because we are working with computer vision, the virtual environment will be called “cv”:
$ mkvirtualenv cv -p python3
After this, the command to access the computer vision virtual environment just created is:
$ workon cv
After every restart of the Raspberry Pi, or when opening a new terminal, you must use the above command to re-enter the computer vision virtual environment.
Here is an example of how to check whether you are in the computer vision virtual environment:
Figure 16. Inside/outside of the “cv” virtual environment
The following commands, until OpenCV is installed, must be executed in the “cv” virtual environment.
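Besides the shell prompt shown in Figure 16, the check can also be done from inside Python. This is a small sketch, assuming a virtualenv- or venv-created environment: such environments either set sys.real_prefix (older virtualenv) or make sys.prefix differ from sys.base_prefix.

```python
import sys

# virtualenv-created environments either set sys.real_prefix (older virtualenv)
# or make sys.prefix differ from sys.base_prefix (venv / newer virtualenv)
in_venv = (hasattr(sys, "real_prefix")
           or sys.prefix != getattr(sys, "base_prefix", sys.prefix))
print("inside a virtual environment:", in_venv)
```

Running this inside “cv” should print True; from the system Python it prints False.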
The next Python package, NumPy, is used for numerical processing and is also needed for image processing applications such as facial recognition. So, to install it, run the following command in the terminal after making sure that you are in the computer vision virtual environment:
$ pip install numpy
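To illustrate why NumPy matters for this project: OpenCV represents a grayscale frame as a two-dimensional array of unsigned bytes, and face cropping later in the application is plain array slicing. A toy sketch (the 4x6 “image” is made up for illustration):

```python
import numpy as np

# a toy 4x6 "grayscale image": one unsigned byte per pixel,
# similar to how OpenCV hands grayscale frames to Python
img = np.arange(24, dtype=np.uint8).reshape(4, 6)

# cropping a region works by slicing, just like gray[y:y+h, x:x+w]
# in the dataset creator later on
y, x, h, w = 1, 2, 2, 3
face = img[y:y+h, x:x+w]
```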
5. Compiling and installing the OpenCV library: Now everything is prepared for compiling and installing OpenCV. Once again, make sure that the following commands are executed in the computer vision virtual environment; if everything is OK, the setup of the build can begin. First, a folder called “build” must be created inside the unzipped OpenCV folder, and after this the “cmake” command can be executed to configure the OpenCV build:
$ cd ~/opencv-3.3.0/
$ mkdir build
$ cd build
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
    -D BUILD_EXAMPLES=ON ..
After the configuration is done, scroll to the section titled “Python 3” and make sure that the “Interpreter” entry points to the “python3.5” binary inside the “cv” virtual environment, and that the “numpy” entry points to the NumPy installation of the same environment. If the cv virtual environment is missing from those variables, the virtual environment must be activated and the “cmake” command executed again. If everything is all right, the Python 3 section should look as in the following picture:
Figure 17. OpenCV compilation parameters
Now comes the most time-consuming part. Because the Raspberry Pi is not such a powerful machine, building the OpenCV library will take no less than 4 hours. There are some solutions for this problem, which involve using all four cores for this process, but the risk of overheating and freezing the board is high. If this happens, the only solution is to unplug the Raspberry Pi and then do the compilation and the building all over again from the beginning, so it is better to be patient. To begin the build process, the following command must be executed in the terminal:
$ make
After about 4 hours of waiting, the build process should end successfully, without any errors, and the installation of OpenCV can begin. From here the following commands must be executed to install the OpenCV library:
$ sudo make install
$ sudo ldconfig
6. Testing if the OpenCV library is working: To be sure that everything is working, open a new terminal and run the following commands:
$ source ~/.profile
$ workon cv
$ python3
>>> import cv2
>>> cv2.__version__
If the output looks as in the picture below, everything is ready to create the face recognition application.
Figure 18. OpenCV working on the Raspberry Pi
4.3 Creating the Facial Recognition application
To make a face recognition application, a certain logic must be respected. First, a dataset creator must be built, which will capture and store a set number of face samples, with a unique ID entered by the user, for each face that is to be stored in the database. After this, to perform a face recognition, the recognizer must be trained using the pre-labeled datasets; for this a trainer needs to be created. Finally, a recognizer will be created that will use the information extracted by the trainer from the face samples to recognize the face in front of the camera. With this said, the application will be divided into three main parts: the Dataset creator, the Trainer and the Recognizer.
1. The Dataset creator:
The first part is to import the necessary libraries:
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
import os
import numpy as np
import sqlite3
A video capture object and a cascade classifier object for face detection will be created:
faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))
time.sleep(0.1)
The dataset creator will capture and store a set number of face samples, assigning a unique ID entered by the user for each face, in a folder called “dataSet”:
def assure_path_exists(path):
    dir = os.path.dirname(path)
    if not os.path.exists(dir):
        os.makedirs(dir)

path = ('dataSet')
assure_path_exists("dataSet/")
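The helper only has to be safe to call repeatedly. A quick sanity check against a throwaway temporary directory (the temp path is illustrative; the application itself passes "dataSet/"):

```python
import os
import tempfile

def assure_path_exists(path):
    dir = os.path.dirname(path)
    if not os.path.exists(dir):
        os.makedirs(dir)

base = tempfile.mkdtemp()                   # throwaway directory for the check
target = os.path.join(base, "dataSet", "")  # trailing separator: dirname() keeps "dataSet"
assure_path_exists(target)
assure_path_exists(target)                  # second call finds the folder and does nothing
created = os.path.isdir(os.path.join(base, "dataSet"))
```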
Enter the ID of the face and the name of the person, then initialize a counter variable for numbering the samples:
id = input('Enter face ID:')
name = input('Enter person name:')
insertOrUpdate(id, name)
sampleNum = 0
Connect to the SQLite3 database and write the name of the person and the ID in the table (by accessing the database, additional information about the subject can be written):
def insertOrUpdate(Id, Name):
    conn = sqlite3.connect("FacesDatabase.db")
    cmd = "SELECT * FROM People WHERE ID=" + str(Id)
    cursor = conn.execute(cmd)
    isRecordExist = 0
    for row in cursor:
        isRecordExist = 1
    if isRecordExist == 1:
        cmd = "UPDATE People SET Name='" + str(Name) + "' WHERE ID=" + str(Id)
    else:
        cmd = "INSERT INTO People(ID,Name) VALUES(" + str(Id) + ",'" + str(Name) + "')"
    conn.execute(cmd)
    conn.commit()
    conn.close()
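The listing assumes a People table already exists in FacesDatabase.db. A minimal sketch of creating it (columns as in Figure 19) and exercising the same insert-or-update logic against a throwaway in-memory database; here the queries are parameterized, which keeps text values such as names correctly quoted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE People ("
             "ID INTEGER PRIMARY KEY, Name TEXT NOT NULL, "
             "Age INTEGER, Gender TEXT)")

def insert_or_update(conn, face_id, name):
    # same logic as insertOrUpdate(): update if the ID exists, insert otherwise
    exists = conn.execute("SELECT 1 FROM People WHERE ID=?", (face_id,)).fetchone()
    if exists:
        conn.execute("UPDATE People SET Name=? WHERE ID=?", (name, face_id))
    else:
        conn.execute("INSERT INTO People(ID, Name) VALUES(?, ?)", (face_id, name))
    conn.commit()

insert_or_update(conn, 1, "Rares")
insert_or_update(conn, 1, "Paula")   # same ID: the row is updated, not duplicated
rows = conn.execute("SELECT ID, Name FROM People").fetchall()
```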
To capture frames from the Raspberry Pi camera, the following lines of code must be added:
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image_frame = frame.array
    gray = cv2.cvtColor(image_frame, cv2.COLOR_BGR2GRAY)
The face in front of the camera will be detected and bounded by a green rectangle, and the set number of samples will be stored in JPG format. In the images just the face bounded by the green rectangle will be stored, not the background as well. (x, y) is the top-left corner of the green rectangle and w, h are the width and height of the detected face, in pixels. When the number of samples reaches the set threshold, the loop will break and the program will stop taking pictures:
    faces = faceCascade.detectMultiScale(gray, 1.1, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(image_frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        sampleNum = sampleNum + 1
        cv2.imwrite("dataSet/Face." + str(id) + "." + str(sampleNum) + ".jpg", gray[y:y+h, x:x+w])
    cv2.imshow('Camera', image_frame)
    key = cv2.waitKey(1) & 0xFF
    rawCapture.truncate(0)
    if key == ord("q"):
        break
    elif sampleNum > 100:
        print("Done")
        break
2. The Trainer:
Importing the necessary libraries:
import cv2
import os
import numpy as np
from PIL import Image
The face recognition and the face detection algorithms will be initialized:
recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
A function will be created that will grab the training images from the “dataSet” folder and the corresponding ID from their file names:
def getImagesAndLabels(path):
Inside the function the training images will be loaded. For this, the path of each image must be built:
imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
To collect the faces and the IDs from the training images, two empty lists will be created:
faceSamples = []
ids = []
Now a loop for adding the images and the IDs to those lists will be created. In the code below the image is loaded and, because it is a PIL image, it must be converted to a NumPy array:
PIL_img = Image.open(imagePath).convert('L')
img_numpy = np.array(PIL_img, 'uint8')
A split of the sample file name will be made to get the ID from it:
id = int(os.path.split(imagePath)[-1].split(".")[1])
The detector will extract the faces and append them to the faceSamples list, together with the ID:
faces = detector.detectMultiScale(img_numpy)
for (x, y, w, h) in faces:
    faceSamples.append(img_numpy[y:y+h, x:x+w])
    ids.append(id)
return faceSamples, ids
Finally, the function has to be called, and the ”training_data.yml” file will be created and filled:
faces, ids = getImagesAndLabels('dataSet/')
recognizer.train(faces, np.array(ids))
recognizer.write('trainer/training_data.yml')
To be sure that the trainer doesn't take other files than JPG images, the following lines of code will be added after the empty lists are created:
if os.path.split(imagePath)[-1].split(".")[-1] != 'jpg':
    continue
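Both the ID extraction and the JPG filter above operate only on the sample file name written by the dataset creator (“Face.&lt;ID&gt;.&lt;sample&gt;.jpg”). A quick check with hypothetical file names:

```python
# hypothetical directory listing of the "dataSet" folder
samples = ["Face.7.23.jpg", "Face.2.5.jpg", "notes.txt"]

# keep only JPG files, as the trainer's guard does
jpgs = [s for s in samples if s.split(".")[-1] == 'jpg']

# the face ID is the second dot-separated field of the name
ids = [int(s.split(".")[1]) for s in jpgs]
```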
3. The Recognizer:
Importing the necessary libraries:
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
import numpy as np
import os
import sqlite3
Loading the training data and the LBPH recognizer algorithm:
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/training_data.yml')
The cascade classifier for face detection will be created:
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
Now the video capture object will be created:
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))
Because the name of the recognized person will be written on the image a font is needed:
font = cv2.FONT_HERSHEY_SIMPLEX
The connection with the SQLite3 database will be opened and the information will be extracted from it:
def getProfile(id):
    conn = sqlite3.connect("FacesDatabase.db")
    cmd = "SELECT * FROM People WHERE ID=" + str(id)
    cursor = conn.execute(cmd)
    profile = None
    for row in cursor:
        profile = row
    conn.close()
    return profile
The main loop will: start capturing frames from the camera object, convert them to grayscale, detect and extract the faces from the live images, use the face recognition algorithm to recognize the face in front of the camera and get its ID, put the face in a green rectangle, and display the information from the SQLite3 database under the green rectangle.
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    im = frame.array
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, 1.1, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(im, (x, y), (x+w, y+h), (0, 255, 0), 2)
        id, conf = recognizer.predict(gray[y:y+h, x:x+w])
        print("ID=" + str(id) + " Confidence level= " + str(conf))
        if conf > 40 and conf < 70:
            profile = getProfile(id)
        else:
            id = 0
            profile = getProfile(id)
        if profile != None:
            cv2.putText(im, "Name: " + str(profile[1]), (x, y+h+30), font, 1, (255, 0, 0), 2)
            cv2.putText(im, "Age: " + str(profile[2]), (x, y+h+60), font, 1, (255, 0, 0), 2)
            cv2.putText(im, "Gender: " + str(profile[3]), (x, y+h+90), font, 1, (255, 0, 0), 2)
    cv2.imshow('Camera', im)
    key = cv2.waitKey(1) & 0xFF
    rawCapture.truncate(0)
    if key == ord("q"):
        cv2.destroyAllWindows()
        break
When the "q" key is pressed, the loop will break and the recognition will stop. The confidence threshold must be adjusted after testing the application, for an accurate face recognition.
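The acceptance rule from the loop above can be isolated into a small helper for tuning (the helper name is illustrative). Note that the LBPH "confidence" is actually a distance, so lower values mean a closer match; the 40/70 bounds are the ones from the listing and will likely need adjusting per installation:

```python
def accept(conf, low=40, high=70):
    # conf is the LBPH distance returned by recognizer.predict();
    # accept only matches inside the tuned window
    return low < conf < high

accepted = accept(55)
too_far = accept(75)    # too dissimilar: treated as unknown (ID 0)
too_close = accept(30)  # below the window: also rejected by the listing's rule
```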
The full source code explained with comments for every line of code can be found in the Appendix.
Figure 19. SQLite3 database structure
The database has 4 columns, in which are stored: the ID, the name of the person, the age and the person's gender. The ID can't be null and it is also the primary key, which means that it can't be a duplicate; every person must have a unique ID. The name field also can't be null, but it can be a duplicate. The age and gender fields can be left empty. More columns with additional information about the recognized person can be added.
5. Experimental results
The following experimental results were obtained after performing a series of tests to determine the efficiency of the created application. Three databases were used: the first one is an own database with 3 persons and 50 samples per person. The second database is provided by the Center for Biological & Computational Learning from the Massachusetts Institute of Technology (MIT-CBCL) and it has 10 subjects with 100 samples per subject. The last database is from the California Institute of Technology (Caltech) and has 10 subjects with 200 samples per subject. The subjects were registered in a database created using SQLiteStudio, for an easier identification of each subject.
5.1 Tests using the own database
The first step of this operation is to create the samples of each face that will be present in the database. To do that, “dataSet_creator.py” is run, and a unique ID and the name of the person are entered in the console. After the name is entered, a window called “Camera” will appear and a green rectangle will show that the face in the live images is detected. It takes about 30 seconds for the 50 photos to be created.
Figure 20. Creating the samples
The step above must be repeated for each face that is to be stored in the database. Now, in the SQLite3 database, a table will appear with the ID and the name of the person that will be recognized. In that table additional information about the person can also be written, which will appear in the camera window under the green rectangle. In the test table, the additional information about the recognized person consists of the gender and the age, but more fields can be added, with whatever information should be displayed about the recognized person. The ID field can't be null and can't be a duplicate or modified from the SQLite3 database, because it indicates which samples correspond to which person. The name can be modified and can be a duplicate, but it also can't be null. The age and the gender can be null.
Figure 21. The SQLite3 own database
The following figure presents a set of 10 samples out of those 50, for each person registered in the database (Figure 21); they can be found in the “dataSet” folder. It can be observed that the images are grayscale and that the dataset creator captures pictures just from the green rectangle that bounds the detected face, not including the background and other useless details. This means that the algorithm used for the face detection, the Haar Cascade Classifier, is very precise. However, when creating the samples it is recommended to have a clean background, because in some lighting conditions false positive faces are detected and stored in the database.
Figure 22. Samples preview
The second step in performing a face recognition is training the recognizer. The “trainer.py” script will grab the training images from the “dataSet” folder and will also get the corresponding unique ID for each face from the sample's file name. Finally, the trainer puts them in a list of face samples with the corresponding IDs and stores this information in a YML file. The following picture shows the content of a “training_data.yml” file:
Figure 23. Content of a “training_data.yml” file
The last step is to run the “recognizer.py” file and observe the results. The console will print the confidence level, which represents the distance between the image on the live camera and the most similar photo from the samples present in the “dataSet” folder. The shorter the distance, the greater the certainty that the recognized person is the right one.
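The recognizer's decision thus reduces to a nearest-neighbour rule over LBPH histogram distances. A toy illustration, with made-up distances per stored ID:

```python
# hypothetical distances from the live face to the closest sample of each stored ID
distances = {1: 52.3, 2: 47.8, 3: 61.0}

# the shortest distance wins: that ID is the most similar stored face
best_id = min(distances, key=distances.get)
best_dist = distances[best_id]
```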
Figure 24. Face recognition example
Subjects   Confidence level                Accuracy
           Minimum   Maximum   Average
Rares      24        67        49          90 %
Paula      28        59        47          90 %
Mirel      30        55        41          95 %
Table 2. Own database results
After the tests it was observed that, individually, none of the persons present in the database were confused with each other. If the recognition was made simultaneously, with all three persons at the same time, sometimes Paula was confused with Mirel and Rares was confused with Paula. Because of this problem the simultaneous recognition was eliminated, and the application will recognize just one person at a time. Also, none of the subjects were confused with persons that are not registered in the database.
5.2 Tests using the MIT-CBCL database
In the “dataSet” folder 1000 samples were stored, 100 per subject. After this, 10 rows with IDs from 1 to 10 were created in the SQLite3 database. For each subject the gender was recorded, and the subjects were named as follows:
Figure 25. MIT-CBCL SQLite3 database
The tests were made with photos of the subjects provided by the database creators. 10 more subjects were used to test the confusion rate with persons that are not registered in the database. Based on the results, an accuracy percentage was calculated for each subject registered in the database.
Subjects     Confidence level
             Minimum   Maximum   Average
Subject 1    48        68        62
Subject 2    58        68        65
Subject 3    27        64        48
Subject 4    39        64        59
Subject 5    34        54        43
Subject 6    41        65        53
Subject 7    30        66        48
Subject 8    38        57        50
Subject 9    33        58        45
Subject 10   22        58        48
Table 3. Confidence levels for the MIT-CBCL database
Subjects     Recognized in/out of   Confused with persons    Confused with persons    Accuracy
             (test photos)          inside the database      outside the database
Subject 1    5/5                    –                        1                        75 %
Subject 2    4/5                    –                        –                        75 %
Subject 3    6/6                    –                        1                        80 %
Subject 4    4/5                    –                        –                        75 %
Subject 5    5/6                    –                        –                        80 %
Subject 6    7/8                    1 time with S4           –                        66 %
Subject 7    5/6                    –                        –                        80 %
Subject 8    4/5                    1 time with S7           1                        33 %
Subject 9    7/7                    1 time with S1 and S7    –                        60 %
Subject 10   6/6                    1 time with S8           –                        80 %
Table 4. Confusion rates for the MIT-CBCL database
The accuracy of the recognition was calculated by determining an error rate for each subject, based on the results above, and then subtracting that error rate from 100%. Because these databases offer a different number of test images for each subject, in some cases the accuracy of the recognition decreased significantly. Most of the errors during the tests appeared because of illumination changes and reflections in the test photos. With real persons, for the same number of samples and subjects, the accuracy of the recognition would increase.
5.3 Tests using the Caltech database
For this database 2000 samples were made, 200 per subject. 17 subjects outside of the database were chosen, to measure the confusion rate. For this number of samples per subject the Raspberry Pi was almost overheating, and the application worked significantly slower in comparison with the other databases.
Figure 26. Caltech SQLite3 database
Subjects     Confidence level
             Minimum   Maximum   Average
Subject 1    49        73        62
Subject 2    30        74        60
Subject 3    49        72        63
Subject 4    50        72        61
Subject 5    50        72        61
Subject 6    57        81        61
Subject 7    39        75        62
Subject 8    53        65        60
Subject 9    46        60        56
Subject 10   47        64        56
Table 5. Confidence levels for the Caltech database
Subjects     Recognized in/out of   Confused with persons    Confused with persons    Accuracy
             (test photos)          inside the database      outside the database
Subject 1    19/21                  2 times with S4          –                        76 %
Subject 2    19/20                  –                        3                        75 %
Subject 3    5/5                    –                        –                        100 %
Subject 4    21/22                  –                        4                        72 %
Subject 5    19/21                  –                        2                        76 %
Subject 6    22/23                  –                        1                        91 %
Subject 7    19/20                  –                        –                        95 %
Subject 8    18/20                  –                        7                        18 %
Subject 9    4/5                    –                        –                        75 %
Subject 10   20/21                  –                        8                        25 %
Table 6. Confusion rates for the Caltech database
5.4 Results comparison
Figure 27. Experimental results comparison
As can be seen, as the database grows, the accuracy of the system decreases. Because the purpose of this application is to serve as a security system, the results above are considered good. In the case of the small database the accuracy was very high, and for security systems a large database is not needed. An attempt was made to increase the database to more than 10 subjects, but because of the low computing power the application couldn't start the recognition.
(Recognition accuracy in Figure 27: own database 92 %, MIT-CBCL database 70 %, Caltech database 70 %)
6. Conclusions
The purpose of this project was to demonstrate the possibility of implementing a security system based on face recognition on a tiny device with a relatively low computing power. The solution had to be cheap, so only open source software solutions were used.
During this project I studied the face detection and face recognition algorithms that are integrated in the OpenCV library, and I implemented a face recognition solution on a small device, the Raspberry Pi. The face detection method is based on the Haar Cascade Classifier algorithm and the face recognition method is based on the Local Binary Pattern Histograms algorithm.
The implemented solution consists of two main components: hardware and software. The hardware component contains the Raspberry Pi 3 Model B and the Raspberry Pi OEM video camera version 2. The software component comprises the Raspberry Pi official operating system (Raspbian), the OpenCV library version 3.3.0, the SQLite3 database and, as the programming language, Python 3.5.
The tests made in the experimental part of the project were performed on three databases, in order to determine the system's accuracy and to identify the errors. During the tests it was necessary to reduce the number of subjects in one of the databases from 27 to 10, because the Raspberry Pi couldn't start the application. This happened because the LBPH algorithm analyzes every face sample separately, while the Eigenfaces algorithm analyzes them as a whole. Of course, Eigenfaces or Fisherfaces could have been used as face recognition algorithms, for the possibility of increasing the database, but the accuracy of the recognition would have decreased significantly. Because the purpose of this implementation is to serve as a security system, a large database was not needed. During the tests with the own database it was observed that, when the recognition was made simultaneously, the subjects were confused with each other, so the simultaneous recognition was eliminated from the application. Fortunately, none of the subjects were confused with persons outside of the database. Because the tests were made with real persons, the subjects were not recognized from photos, and for a security system this is good. Also, tests were made in which the training was done using photos of the subjects, and none of them were recognized in person; they were recognized just in photos. The recognition accuracy for this database was determined at 92%. In the case of the larger databases, confusions were made with persons that were not stored in the database, but because the tests were made using photos of the subjects the main reason for this problem was the reflections. Also, confusions were made between the subjects at the individual recognition, but the main reason for this problem was also the reflections. The recognition accuracy for these databases was determined at 70%.
As future work, I want to improve the system's accuracy by identifying and fixing more of the issues that appear during the face detection and face recognition. After this, I want to create a graphical user interface for the application and test it in real conditions, as a security system on a mobile device or at the entrance of the house yard.
7. References
[App17] “About Face ID advanced technology”, apple.com, 2017. [Online]. Available: https://support.apple.com/en-us/HT208108
[Cry17] “SQLite – Cel mai folosit sistem de baze de date din lume”, crystalmind.ro, 2017. [Online]. Available: https://www.crystalmind.ro/wordpress/sqlite-cel-mai-folosit-sistem-de-baze-de-date-din-lume/
[Doc17] “Face Detection using Haar Cascades”, opencv.org, 2017. [Online]. Available: https://docs.opencv.org/3.3.0/d7/d8b/tutorial_py_face_detection.html
[Fac17] “A Brief History of Face Recognition”, facefirst.com, 2017. [Online]. Available: https://www.facefirst.com/blog/brief-history-of-face-recognition-software/
[Flo13] L. Florea, C. Florea, “Sisteme software pentru prelucrarea imaginilor”, Editura Politehnica Press, Bucuresti, 2013, ISBN 978-606-515-474-2
[Ope17] “Face Recognition with OpenCV”, opencv.org, 2017. [Online]. Available: https://docs.opencv.org/3.3.0/da/d60/tutorial_face_main.html
[Pau01] P. Viola, M. Jones, “Rapid Object Detection using a Boosted Cascade of Simple Features”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2001
[Pet97] Peter N. Belhumeur, Joao P. Hespanha, David Kriegman, “Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711-720, 1997
[Pie16] P. Riti, “What You Need to Know about Python”, Packt Publishing, Livery Place, 35 Livery Street, Birmingham B3 2PB, UK, 2016
[Sar91] Sarunas J. Raudys, Anil K. Jain, “Small sample size effects in statistical pattern recognition: Recommendations for practitioners”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(3):252-264, 1991
[Sou18] “OpenCV download link”, sourceforge.net, 2018. [Online]. Available: https://sourceforge.net/projects/opencvlibrary/
[Sta11] Stan Z. Li, Anil K. Jain, “Handbook of Face Recognition”, Springer Science & Business Media, 22 Aug. 2011
[Swa03] Swaroop C H, “A Byte of Python”, Creative Commons Attribution-NonCommercial-ShareAlike License 2.0, 2003, last updated 2014
[Tim04] Timo Ahonen, Abdenour Hadid, Matti Pietikäinen, “Face recognition with local binary patterns”, in Computer Vision – ECCV 2004, pages 469-481, Springer, 2004
[Wik18] “About Raspberry Pi”, wikipedia.org, 2018. [Online]. Available: https://en.wikipedia.org/wiki/Raspberry_Pi
8. Appendix
8.1 The source code
In the following, my contribution to writing the face recognition application will be presented. The application is written in Python and is based on three main files: dataSet_creator.py, trainer.py and recognizer.py. The OpenCV library is responsible for the face detection and face recognition algorithms. The source code is configured for the Raspberry Pi OEM camera, but it can also be configured for any video camera and can run on any device and any operating system.
8.1.1 The “dataSet_creator.py” file
#####################################################################################
#Import the live image array library
from picamera.array import PiRGBArray
#Import the RaspberryPI Camera library
from picamera import PiCamera
#Import the time library for the camera warmup
import time
#Import the OpenCV library for image processing
import cv2
#Import the os library for file paths
import os
#Import the NumPy library for matrix calculations
import numpy as np
#Import the SQLite3 library for the database
import sqlite3

#Assure that the path exists
def assure_path_exists(path):
    dir = os.path.dirname(path)
    if not os.path.exists(dir):
        os.makedirs(dir)

path = ('dataSet')
assure_path_exists("dataSet/")

#Use the OpenCV prebuilt frontal face training model for face detection
faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

#Initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

#Allow the camera to warm up
time.sleep(0.1)
Appendix Rare ș-Marin BOZDOG
50
#Connect to the SQLite3 database and write in the table the ID and the Name of the
person
def insertOrUpdate (Id,Name):
conn =sqlite3.connect("FacesDatabase.db" )
cmd ="SELECT * FROM People WHERE ID=" +str(Id)
cursor=conn.execute(cmd)
isRecordExist =0
for row in cursor:
isRecordExist =1
if(isRecordExist ==1):
cmd="UPDATE People SET Name=" +str(Name)+" WHERE ID=" +str(Id)
else:
cmd="INSERT INTO People(ID,Name) Values(" +str(Id)+","+str(Name)+")"
conn.execute(cmd)
conn.commit()
conn.close()
#Enter the unique ID for each person
id=input('Enter face ID:' )
#Enter person name
name=input('Enter person name:' )
insertOrUpdate (id,name)
#Start the samples counter
sampleNum =0;
#Capture frames from the RaspberryPI Camera
for frame in camera.capture_continuous (rawCapture , format="bgr",
use_video_port =True):
#Grab the raw numpy array representing the image, then initialize the timestamp and
occupied/unoccupied text
image_frame = frame.array
gray = cv2.cvtColor (image_frame ,cv2.COLOR_BGR2GRAY )
#Get the face from the video frame
faces = faceCascade .detectMultiScale (gray, 1.1,5)
areas = []
#Loops for each face, detects the nearest face
for(x,y,w,h) in faces:
areas.append(w*h)
if len(faces)>0:
j = areas.index(max(areas))
x,y,w,h = faces[j]
#Create a green rectangle around the face
cv2.rectangle (image_frame , (x,y), (x+w,y+h), (0,255,0), 2)
#Increment sample face image number
sampleNum =sampleNum +1;
#Save the captured images into the datasets folder
cv2.imwrite("dataSet/Face." +str(id)+"."+str(sampleNum )+".jpg",gray[y:y+h,x:x+w])
#Display the video frame
cv2.imshow('Camera' , image_frame )
key = cv2.waitKey(1) & 0xFF
Appendix Rare ș-Marin BOZDOG
51
#Clear the stream in preparation for the next frame
rawCapture .truncate (0)
#If the `q` key is pressed than stop taking pictures
if key == ord("q"):
cv2.destroyAllWindows ()
break
#If images taken reach 100, stop taking pictures
elif sampleNum >99:
print("Done")
cv2.destroyAllWindows ()
break
############################################################################# ########
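The nearest-face selection used above simply keeps the detection whose bounding box has the largest area. The same logic can be isolated and checked with plain tuples (the sample rectangles below are made up for illustration):

```python
#Pick the detection with the largest area (w*h), i.e. the nearest face.
#Rectangles are (x, y, w, h) tuples, the same layout detectMultiScale returns.
def nearest_face(faces):
    if len(faces) == 0:
        return None
    areas = [w*h for (x, y, w, h) in faces]
    return faces[areas.index(max(areas))]

#Hypothetical detections: the second rectangle has the largest area
faces = [(10, 10, 40, 40), (100, 80, 120, 120), (200, 30, 60, 60)]
print(nearest_face(faces))  # → (100, 80, 120, 120)
```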
8.1.2 The "trainer.py" file
#####################################################################################
#Import the OpenCV library for image processing
import cv2
#Import the os library for file path handling
import os
#Import the NumPy library for matrix calculations
import numpy as np
#Import the Python Imaging Library (PIL) for image conversions
from PIL import Image

#Make sure that the given path exists
def assure_path_exists(path):
    dir = os.path.dirname(path)
    if not os.path.exists(dir):
        os.makedirs(dir)

#Create the Local Binary Patterns Histograms recognizer for face recognition
recognizer = cv2.face.LBPHFaceRecognizer_create()
#Use the OpenCV prebuilt frontal face model for face detection
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

#Get the training images and their labels
def getImagesAndLabels(path):
    #Load the image paths
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    #Initialize an empty list for the face samples
    faceSamples = []
    #Initialize an empty list for the IDs
    ids = []
    #Loop over all the file paths
    for imagePath in imagePaths:
        #Ignore files that do not have the .jpg extension
        if os.path.split(imagePath)[-1].split(".")[-1] != 'jpg':
            continue
        #Open the image and convert it to grayscale
        PIL_img = Image.open(imagePath).convert('L')
        #Convert the PIL image to a NumPy array
        img_numpy = np.array(PIL_img, 'uint8')
        #Get the image ID from the file name
        id = int(os.path.split(imagePath)[-1].split(".")[1])
        #Detect the face in the training image
        faces = detector.detectMultiScale(img_numpy)
        #Append each detected face together with its ID
        for (x, y, w, h) in faces:
            #Add the face region to the face samples list
            faceSamples.append(img_numpy[y:y+h, x:x+w])
            #Add the ID to the IDs list
            ids.append(id)
    #Return the face samples list and the IDs list
    return faceSamples, ids

#Get the faces and the IDs
faces, ids = getImagesAndLabels('dataSet/')
print("ID = " + str(ids))
print("Training...")
#Train the model using the faces and the IDs
recognizer.train(faces, np.array(ids))
#Save the model into the training_data.yml file
assure_path_exists('trainer/')
recognizer.write('trainer/training_data.yml')
print("Done")
#####################################################################################
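trainer.py recovers the numeric label from the sample file name, which dataSet_creator.py writes in the form Face.&lt;id&gt;.&lt;sample&gt;.jpg. The parsing step can be sketched on its own:

```python
import os

#Extract the numeric face ID from a sample path such as "dataSet/Face.3.27.jpg".
#The Face.<id>.<sample>.jpg layout is the one used by dataSet_creator.py.
def id_from_path(imagePath):
    return int(os.path.split(imagePath)[-1].split(".")[1])

print(id_from_path("dataSet/Face.3.27.jpg"))  # → 3
```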
8.1.3 The "recognizer.py" file
#####################################################################################
#Import the live image array library
from picamera.array import PiRGBArray
#Import the Raspberry Pi camera library
from picamera import PiCamera
#Import the time library for the camera warmup
import time
#Import the OpenCV library for image processing
import cv2
#Import the NumPy library for matrix calculations
import numpy as np
#Import the os library for file path handling
import os
#Import the SQLite3 library for the database
import sqlite3

#Make sure that the given path exists
def assure_path_exists(path):
    dir = os.path.dirname(path)
    if not os.path.exists(dir):
        os.makedirs(dir)

assure_path_exists("trainer/")

#Initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))
#Allow the camera to warm up
time.sleep(0.1)

#Create the Local Binary Patterns Histograms recognizer for face recognition
recognizer = cv2.face.LBPHFaceRecognizer_create()
#Load the training data
recognizer.read('trainer/training_data.yml')
#Use the OpenCV prebuilt frontal face model for face detection
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
#Choose the font used to display the information about the recognized face
font = cv2.FONT_HERSHEY_SIMPLEX

#Get the profile stored in the SQLite3 database for the given ID
def getProfile(id):
    conn = sqlite3.connect("FacesDatabase.db")
    cmd = "SELECT * FROM People WHERE ID=" + str(id)
    cursor = conn.execute(cmd)
    profile = None
    for row in cursor:
        profile = row
    conn.close()
    return profile

#Capture frames from the Raspberry Pi camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    #Grab the raw NumPy array representing the image
    im = frame.array
    #Convert the frame to grayscale
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    #Detect the faces in the video frame
    faces = faceCascade.detectMultiScale(gray, 1.1, 5)
    areas = []
    #Compute the area of each detected face
    for (x, y, w, h) in faces:
        areas.append(w*h)
    if len(faces) > 0:
        #Keep only the largest (nearest) face
        j = areas.index(max(areas))
        x, y, w, h = faces[j]
        #Draw a green rectangle around the face
        cv2.rectangle(im, (x, y), (x+w, y+h), (0, 255, 0), 2)
        #Predict the ID of the face
        id, conf = recognizer.predict(gray[y:y+h, x:x+w])
        print("ID=" + str(id) + " Confidence level=" + str(conf))
        #If the confidence value falls outside the 30-70 window, the face is
        #considered unknown and ID 0 is used instead
        if not (conf > 30 and conf < 70):
            id = 0
        #Display the information available in the SQLite3 database
        profile = getProfile(id)
        if profile != None:
            cv2.putText(im, "Name: " + str(profile[1]), (x, y+h+30), font, 1, (255, 0, 0), 2)
            cv2.putText(im, "Age: " + str(profile[2]), (x, y+h+60), font, 1, (255, 0, 0), 2)
            cv2.putText(im, "Gender: " + str(profile[3]), (x, y+h+90), font, 1, (255, 0, 0), 2)
    #Display the video frame
    cv2.imshow('Camera', im)
    key = cv2.waitKey(1) & 0xFF
    #Clear the stream in preparation for the next frame
    rawCapture.truncate(0)
    #If the `q` key is pressed, stop
    if key == ord("q"):
        cv2.destroyAllWindows()
        break
#####################################################################################
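The decision of whether a prediction is trusted depends only on the confidence value returned by predict(); with LBPH, lower values mean a closer match. The gating used in recognizer.py, with its 30-70 acceptance window, can be expressed as a small helper (the function name is mine; the thresholds are the ones used above):

```python
#Return the ID to display: the predicted one when the LBPH confidence
#falls inside the accepted window, or 0 (unknown) otherwise.
#The 30-70 window matches the thresholds used in recognizer.py.
def display_id(predicted_id, conf, low=30, high=70):
    if low < conf < high:
        return predicted_id
    return 0

print(display_id(5, 45.2))  # → 5
print(display_id(5, 85.0))  # → 0
```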