TOPLICEANU Mihai-Adrian [602738]
UNIVERSITY “POLITEHNICA” OF BUCHAREST
FACULTY OF ENGINEERING IN FOREIGN LANGUAGES
ELECTRONIC ENGINEERING AND TELECOMMUNICATIONS
DIPLOMA PROJECT
PROJECT COORDINATOR:
Conf. Dr. Ing. Bujor PĂVĂLOIU
STUDENT: [anonimizat]
2017
UNIVERSITY “POLITEHNICA” OF BUCHAREST
FACULTY OF ENGINEERING
IN FOREIGN LANGUAGES
ELECTRONIC ENGINEERING AND
TELECOMMUNICATIONS
3D Reconstruction of an object using a turntable,
a webcam and laser diodes
Project Coordinator:
Conf. Dr. Ing. Bujor PĂVĂLOIU
Student: [anonimizat]
2017
UNIVERSITY “POLITEHNICA” OF BUCHAREST
FACULTY OF ENGINEERING IN FOREIGN LANGUAGES
ELECTRONIC ENGINEERING AND TELECOMMUNICATIONS
Approved
Director of department:
Prof. Dr. Ing. George Drăgoi
DIPLOMA PROJECT THEME FOR:
TOPLICEANU MIHAI-ADRIAN
1. Theme title:
-3D reconstruction of an object using a turntable, a webcam and laser diodes
2. Initial design data:
-3D scanner using structured light/laser line
-laser diodes with a servo/stepper motor
-image acquisition using the webcam
3. Student: [anonimizat]:
-bibliographical research
-design of the device
-software development
4. Compulsory graphical material:
-image acquisition
-3D reconstruction from point cloud
-design of the device
5. The paper is based on the knowledge obtained at the following study courses:
-Image processing
-Digital signal processing
6. Paper development environment:
-Arduino IDE
-MATLAB R2013b
7. The paper serves as:
-Research
8. Paper preparation date:
-June 2017
Project coordinator                                                  Student: [anonimizat]

I, Topliceanu Mihai-Adrian, hereby declare that the work with the title “3D Reconstruction
of an object using a turntable, a webcam and laser diodes”, to be openly defended in front of the
diploma theses examination commission at the Faculty of Engineering in Foreign Languages,
University “Politehnica” of Bucharest, as a partial requirement for obtaining the title of Engineer, is
the result of my own work, based on my own research.
The thesis, simulations, experiments and measurements presented here were made
entirely by me under the guidance of my scientific adviser, without the involvement of persons who
are not credited by name and contribution in the Acknowledgements part.
The thesis has never been presented to a higher education institution or research board in
the country or abroad.
All the information used, including that from the Internet, was obtained from sources that are cited
and indicated in the notes and in the bibliography, in accordance with ethical standards. I understand that
plagiarism is an offense and is punishable under law.
The results of the simulations, experiments and measurements are genuine. I understand
that the falsification of data and results constitutes fraud and is punished according to regulations.
Topliceanu Mihai-Adrian                                                  4.07.2017
Table of Contents
Introduction ............................................................. 1
1. Opto-mechatronic systems .............................................. 3
1.1 Mechatronic System ................................................... 3
1.2 The Opto-mechatronic system .......................................... 4
1.3 Applications of opto-mechatronic systems ............................. 8
2. Reconstruction in the 3D virtual environment ......................... 10
2.1 Defining the problem ................................................ 10
2.2 The origin of distance measurement .................................. 10
2.3 3D scanner systems .................................................. 12
2.4 3D scanning technologies ............................................ 14
2.5 Classification of 3D scanning technologies .......................... 15
3. Analysis of 3D scanning technologies ................................. 17
3.1 The main types of 3D scanners with light radiation .................. 17
3.2 Active 3D scanning methods .......................................... 21
3.3 3D laser triangulation scanning ..................................... 23
3.4 3D laser scanner precision .......................................... 25
4. Developing a 3D Laser Scanner ........................................ 29
4.1 Hardware design and implementation .................................. 29
4.2 Software design and implementation .................................. 38
4.3 Software testing .................................................... 51
4.4 Further development ................................................. 54
4.5 Other scanning programs ............................................. 55
Conclusions ............................................................. 61
References .............................................................. 63
Annexes ................................................................. 65
Diploma Project, Topliceanu Mihai-Adrian, Faculty of Engineering in Foreign Languages, UPB, 2017
Introduction
The main purpose of this project is to explore the possibility of designing a prototype of a
3D scanner using a turntable, a webcam and laser diodes.
Digitizing, or 3D scanning, is a procedure that uses a contact or non-contact sensor to
capture an object’s shape and recreate it in a virtual environment as a very dense network of
points in a 3D graphical representation. Data is collected as points which together form the
so-called “point cloud”. This information can be saved in different file types, the most common
being STL (from “stereolithography”, also expanded as Standard Tessellation Language).
This technology has been known for over 15 years, yet it is still applied as a relatively new
technique. 3D scanning is becoming an increasingly common method nowadays, assisting humans
in a wide range of domains, for example medicine and entertainment.
3D models have various uses, such as making animations or object representations.
Analyses, comparisons or prototypes can be made that can later be modified to build a new
product.
With a variety of technologies available, we can capture objects inside a room or outside of
it, day or night. Dimensions may vary according to our needs: we can capture both small items
(jewelry, telephones, etc.) and large-scale objects (buildings, bridges, etc.). Some scanners
are portable for complete scanning of cavities or coated surfaces, and others can scan the ground
and its shapes.
The main purpose of a 3D scanner is to create point clouds describing the shape of an object or
surface. A form can later be extracted from these points through a process called reconstruction.
If information about the object’s colors has been collected, the color can also be determined in the
reconstruction process.
3D scanners share some attributes with a camcorder. Like most cameras, they have a
cone-like field of view and cannot collect information about the hidden surfaces of the object.
While a camera collects color information, a 3D scanner collects information about the distance
to the object’s surface within its field of view. The image produced by a 3D scanner thus
describes the distance to each point on the surface.
In most cases, a single scan cannot produce a complete model of the subject. Multiple scans
from multiple angles are required to complete a full reconstruction of the object. These scans have
to be brought into a common reference system and then merged to form a complete model.
This process is called alignment.
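For a turntable scanner, the alignment step has a particularly simple form: each scan is rotated back by the known turntable angle. The sketch below, with hypothetical angle and point values, shows such a rigid transformation about the vertical (Z) axis.

```python
import math

# Illustrative sketch of the "alignment" step: each partial scan is brought
# into a common reference system by a rigid transformation. For a turntable
# setup this is a rotation around the turntable (Z) axis. Values are examples.

def rotate_z(point, angle_deg):
    """Rotate a 3D point around the Z axis by angle_deg degrees."""
    a = math.radians(angle_deg)
    x, y, z = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

# A point captured with the turntable rotated by 90 degrees is mapped
# back into the reference frame of the first scan:
p = (1.0, 0.0, 0.2)
aligned = rotate_z(p, 90.0)  # -> approximately (0.0, 1.0, 0.2)
```

Real alignment of free-form scans additionally estimates the transformation itself (e.g. with iterative closest point), but the transformation applied is of this rigid form.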
In the next chapters, we will learn more about 3D scanners: their history, the idea behind
them, their working principles, and their evolution and usefulness in today’s engineering field.
1. Opto-mechatronic systems
1.1 Mechatronic System
The objectives initially embedded in the concept of mechatronics correspond to the creation
of a unified framework between mechanical, electronic and software engineering in the
development of new systems capable of performing heterogeneous functions of movement, control
and decision-making.
Figure 1 shows some examples of mechatronic systems that are very present in our
lives today.
Fig. 1 -Mechatronic systems
Definitions of mechatronics:
“Synergistic integration of mechanical engineering with electronics and intelligent
computer control in the design and manufacturing of industrial products and processes.” (F.
Harashima, M. Tomizuka, and T. Fukuda, 1996)
“Field of study involving the analysis, design, synthesis, and selection of systems that
combine electronics and mechanical components with modern controls and microprocessors.” (D.
G. Alciatore and M. B. Histand, 1998)
“Integration of electronics, control engineering, and mechanical engineering.” (W. Bolton,
1995)
Fig. 2 -Description of a mechatronic system
1.2 The Opto-mechatronic system
Integrated optics in mechatronic technology has its roots in the development of
mechatronics and opto-mechatronics. “Their revolution took place in the 1960s with the
integration of transistors and other semiconductor components into monolithic circuits, which
became possible due to the invention of the transistor in 1948. This was followed by the emergence
of microprocessors, invented in 1971 by means of semiconductor technology. In the 1980s,
semiconductor technology also created microelectromechanical systems, adding a new
dimension to systems: a reduction in their size” (Hyungsuck Cho, 2002).
This has had a huge impact on a wide range of technologies in this area. In particular,
developers have combined hardware and software technologies synergistically. This merger
made it possible for machines to convert analog signals into digital signals, to solve calculations
and draw conclusions based on computational outcomes and software algorithms, and ultimately
to act correctly on the basis of these conclusions and the knowledge accumulated in their own
memory. These new functionalities have given machines/systems features such
as flexibility and adaptability.
“A new technological revolution, also known as opto-electronic integration, has continued
for 40 years since the invention of the laser in 1960 by the American professor Theodore Harold
Maiman. One year after the invention of the first laser in the world, Ion I. Agârbiceanu entered
the history of physics with an original achievement: in 1961 he built the first infrared radiation
(helium-neon) laser. By focusing the light beam produced by the monochromatic laser, enormous
radiation densities are obtained on very small surfaces.
This was made possible by advanced manufacturing methods such as chemical vapor
deposition, molecular beam epitaxy and micromachining with focused ion beams” (Jon Rigelsford,
2003).
These methods allowed the integration of optics, electronics and electronic components
into a single compact device.
The CCD (charge-coupled device) sensor not only generated computer vision
technologies but also opened the door to a new era of optical technologies and optical fiber sensors.
The development of optical components and devices has many favorable features: these
components do not require direct contact and are non-invasive (non-aggressive), have a large
radius of perception, are insensitive to electrical noise, allow distributed communication, and have
a long wavelength.
As a matter of fact, these optical features have begun to be integrated with mechatronic
elements, leading to the construction of very high-performance systems. When a machine is
integrated optically, mechanically and electronically, it is called an opto-mechatronic system.
“Lithography, a technology that belongs to opto-mechatronic systems, produces integrated
circuits and semiconductor components. It is based on several mirrors that deflect the light beam,
optical units and stepper/servo mechanisms, which change the direction with great precision.
Another device is the optical pickup, which entered production in 1982. The pickup
reads the information from a rotating disk, controlling both the up-down and left-right directions
of the read head, which carries a low-power laser diode focused on the disc grooves. Since then,
a considerable number of opto-mechatronic products, machines and systems have been launched
at a rapid pace, because optical components have allowed significant results to be achieved.
Atomic force microscopy, microelectromechanical systems and humanoid robots have
been created since the 1990s.” (IEEE, 2005)
The main features of an opto-mechatronic system can be classified into several areas:
1. Illumination: illumination is the photometric radiant energy transmitted to the surface
of the target object. It produces a variety of effects through reflection, absorption and
transmission, depending on the properties of the material and the surface of the object being
illuminated.
2. Perception: optical sensors can capture fundamental information about the object, such
as strength, temperature and pressure, but also geometric quantities such as angles, velocity, etc.
This information is obtained by the optical sensor using different optical phenomena such as
reflection, refraction, interference, diffraction, etc. Typically, these perception systems are
composed of a light source and a photosensitive sensor, as well as optical components such as
lenses, beam splitters and optical fibers. Recently, more and more sensors are using the advantages
of fiber optics in different areas. Optical technology can also contribute to material science: the
chemical composition can be analyzed by spectrophotometry, which recognizes the characteristics
of the light spectrum reflected, transmitted and radiated by the target material.
3. Action: light can change the physical properties of a material by increasing its
temperature or by affecting its electrical environment.
4. Data storage: digital data composed of 0s and 1s may be stored and read optically.
The optical recording principle uses changes in the reflection properties of the recording medium.
Information is engraved by changing the optical properties of the storage medium by means of a
laser. To read the information, the optical properties of the medium are probed using optical
perception sensors.
5. Data transmission: because the wavelength of light is unaffected by external
electromagnetic noise, which is unpredictable, light is a very good medium for data transmission.
The laser, as a light source, has a long wavelength and can transmit a large
amount of data at the same time.
6. Image transmission: information is best perceived by the user through visual stimuli.
To present an image or graphic to the user, we have a variety of devices that can reproduce an
image (LCD, LED, OLED, plasma, etc.), all based on pixels. Each pixel is built from
3 cells that reproduce the primary colors red, green and blue. By combining the
three, all colors can be obtained, including white.
7. Computation: optical computation can be achieved with switches, logic gates and
bistable elements performing logical operations, as in digital electronic computation. Optical
switches can be built using opto-mechanical, opto-electrical, acousto-optical and magneto-optical
technologies, and can change their state extremely quickly. A logic gate can be constructed from
optical transistors. For an optical computer, a variety of other elements are needed besides the
interconnected optical switches.
8. Changes in material properties: when a laser is focused at one point using optical
components, the laser power is concentrated in a small focal area. This process results in changes
of the material in the laser-lit area. Material processing methods use pulsed laser beam technology
and can be classified into two groups:
a. Changing the shape of the material
b. Changing the physical state of the material
1.3 Applications of opto-mechatronic systems
Examples of opto-mechatronic systems can be found in control and instrumentation,
inspection and testing, optics, manufacturing, consumer products, and the industrial manufacture
of products such as automobiles, as well as in biological applications and many other areas of
engineering.
Camera
Cameras are devices operated by opto-mechatronic components. For example, a
high-performance camera is equipped with an aperture control and a focus adjustment system that
uses an illuminometer designed to operate independently of ambient light. With these system
configurations, new functionality has been created to increase camera performance.
Fig. 3 -A camcorder or video camera
In Figure 3, we see the main components of a camcorder. Camcorders have three major
components: a lens that gathers and focuses light, an imager that converts light into an electrical
signal, and a recorder that converts electrical signals into digital video and encodes them for
storage. The image is focused and exposed on the electronic sensor through a series of lenses that
zoom in and focus on the subject. Changing the lens position changes the image size and the focus
area. The amount of light entering through the lens is measured by the image sensor and
controlled by the aperture and
shutter speed. Recently, CMOS sensors have been used for autofocus, together with controlled
focusing lenses.
The optical disc drive
An optical disc drive is an opto-mechatronic system. As seen in Fig. 4, an optical
disc drive is composed of an optical readout head containing a laser diode, a focus servo that
permanently keeps the laser beam focused, and a tracking servo that drives the head very precisely
towards the desired location.
Fig. 4-Optical disc drive
The surface of the disk is covered by a sensitive layer, protected by a dielectric layer, and
rotates under a modulated laser beam focused on the disk surface at the diffraction limit.
Other applications of opto-mechatronic systems:
● Tunable laser device (barcode scanner)
● Atomic force microscopy
● Optical sensory feedback washing machine
● Optical coordinate measuring machine
● Non-contact 3D scanners
2. Reconstruction in the 3D virtual environment
2.1 Defining the problem
The problem that has arisen is how to represent objects from the natural environment
in the virtual environment.
Initial status:
▪ real (physical)
▪ initial object status
▪ original features
▪ the property of the object
Final status:
▪ virtual object (3D model)
▪ three-dimensional representation
▪ properties of object representation
▪ final features
2.2 The origin of distance measurement
Metrology is a field that was born in antiquity and lies at the intersection of mathematics
and engineering. Even with primitive units, the development of geometry was revolutionary
through the ability to accurately measure distance.
Around 240 BC, Eratosthenes estimated the Earth’s circumference without leaving
Egypt. He knew the distance between Syene and Alexandria, the city he was in, which was equal
to 5000 stadia (a primitive unit, ~0.18 km), i.e. about 900 km. He knew that the direction from Syene to
Alexandria was north and also that Syene lay on the Tropic. During the summer
solstice at 12:00, Syene aligns with the Sun along the direction of the Tropic, the Sun being just above
it.
This means that an object positioned there will cast no shadow, the Sun being right
above the object. In Alexandria, at the same time (12:00), sunlight falls at a different angle
than at Syene, so Eratosthenes measured the angle of the shadow on the ground using a
straight object, such as a stick. The measured angle was approximately 7.2°.
Looking from outside, light falls on the surface of the Earth in parallel lines. Because of
the shape of the Earth, a ray falls perfectly vertically over Syene, so an object placed
on the ground there has no shadow, while in Alexandria the object casts a shadow at an
angle of 7.2°, or 1/50 of a circle: 360° (a full circle) divided by 7.2° (the measured angle) = 50. Using
alternate angles, he deduced that the angle formed between the line drawn from the center
of the Earth to the object placed in Alexandria and the line drawn from the center of the Earth
to the object placed in Syene is equal to the previously measured angle of the shadow on the
Earth’s surface, which is 7.2°. Hence, the distance between Alexandria and Syene is 1/50 of the
Earth’s circumference. He multiplied the distance between Alexandria and Syene by 50 to
obtain the circumference of the Earth: 5000 stadia × 50 = 250,000 stadia ≈ 45,000
km. The real circumference of the Earth is about 40,075 km, so the estimate was remarkably close.
Fig. 5 -Eratosthenes’ method of measuring the Earth’s circumference
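The computation described above can be reproduced numerically; the figures below (5000 stadia, ~0.18 km per stadion, 7.2°) are taken directly from the text.

```python
# Eratosthenes' estimate of the Earth's circumference, using the figures
# quoted in the text above.

stadion_km = 0.18        # approximate length of one stadion, as in the text
distance_stadia = 5000   # Syene-Alexandria distance
angle_deg = 7.2          # shadow angle measured in Alexandria

fraction = 360.0 / angle_deg                          # -> 50: the arc is 1/50 of a circle
circumference_stadia = distance_stadia * fraction     # -> 250,000 stadia
circumference_km = circumference_stadia * stadion_km  # -> about 45,000 km
```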
Throughout these historical developments, measuring tools evolved from mathematical
knowledge toward practical needs. Primary methods require direct contact with the surface of the
object (e.g. a ruler). The pantograph, invented in 1603 by Christoph Scheiner, used a special
motion-linking mechanism with a touch probe that could faithfully mimic the course of a pencil.
Modern Coordinate Measurement Machines (CMMs) work similarly, recording the tip position of
a probe as it moves over the surface of a rigid object (Fig. 6).
Fig. 6 – Measuring forms using a method based on direct contact
Even though they are very effective, these contact-based methods can affect fragile
objects and require long periods of time to reconstruct a precise 3D model. Non-contact scanners
also have their limitations, related to observation conditions, control during scanning, and the
interaction of light with the object.
2.3 3D scanner systems
A 3D scanner is a device that analyzes real-world objects or the environment to collect
information about their shape and appearance (e.g. color). The collected information can be used to
reconstruct the object in the three-dimensional digital environment.
There are many technologies that can be used to build a 3D scanner. Each technology
comes with its own limitations, advantages and costs. Many limitations are encountered in the
reconstruction of the object; for example, optical technologies struggle when it comes to scanning
shiny or transparent objects and mirrors.
The applicability of 3D scanning covers a wide range of domains, such as:
Engineering:
● robot control
● 1:1 drawings of bridges, buildings and monuments
● technical documentation of archaeological sites
● quality assurance
● quantity surveying
● remodeling
● different forms of testing and surface-level analysis
● creating maps
Design process:
● increasing accuracy in working with complex objects and forms
● coordinating a product design using components from multiple sources
● replacing old or missing parts
Entertainment:
● movies
● video games
● virtual world
Reverse engineering:
● copying objects with high precision
● quality check and metrology
● precision of geometric dimensions
● assembly
● testing the finished product
● deviation assessment
2.4 3D scanning technologies
There is a variety of technologies for capturing the shape of an object and translating it
into the virtual environment. They can be classified into two main categories: contact and
non-contact. In turn, the non-contact methods can be divided into two sub-categories: active and passive.
Active scanners need a source of radiation to determine the shape of the object and capture
point clouds, while passive ones use radiation already existing in the environment, such as
sunlight, to recover the shape of the object.
3D scanning contact technology
Contact scanners use either continuous scanning, articulated-arm probing or point
probing. The touch probe reaches the measured sample while the object rests on a
precision surface plate, polished to a specified maximum roughness. If the object to be
scanned is not flat or cannot be placed stably on a flat surface, it is supported and held firmly in
place by a fixture. A Coordinate Measurement Machine (CMM) is the best example of a contact
3D scanner. It is mainly used in manufacturing and can be very precise, but it has some drawbacks:
depending on the nature, shape, texture and materials of which the object is composed, it may
suffer deformations, wear and/or damage.
Non-contact 3D scanning technology
While contact 3D scanning techniques use a touch probe to perform scanning,
non-destructive, non-contact technologies use optical sensors (laser touch probes), laser light sources, or a
combination of the two, for accurate surface reproduction. These are the most advanced
technologies for non-contact and contact scanning. Other non-contact scanning methods include
photogrammetry, X-rays, computed tomography and magnetic resonance imaging.
Non-contact laser and visual sensors have been developed as an alternative to contact sensors
where physical contact is not feasible: fine, delicate, super-finished or easily damaged surfaces
and sharp edges.
2.5 Classification of 3D scanning technologies
A. Contact
1. Coordinate measurement machine
B. Active non-contact
1. Mobile laser scanner
2. Structured-light scanner
3. Modulated-light scanner
4. Volume scanner
C. Passive non-contact
1. Image-based modeling
A. 1. A Coordinate Measurement Machine (CMM) is most commonly used in
manufacturing and is very precise. The disadvantage of this system is that it requires physical
contact with the object being scanned, so the object may suffer physical distortions during the
scanning process. Care is especially important when scanning delicate or precious items such as
artifacts. Another disadvantage is that the scanning process is very slow compared to the other
methods.
B. 1. Active scanners emit a type of radiation or light and detect its reflection from an
object or environment. Mobile laser scanners create a 3D image using the triangulation method: a
point or a laser line is projected onto an object from a mobile point in space, and a sensor (CCD,
charge-coupled device, or PSD, position-sensitive detector) measures the distance to the
surface. The information is collected in accordance with an internal localization system in space.
To collect data with a mobile scanner, we therefore need to know its position in space for error-free
capture.
This method is called triangulation because the laser spot, the camera and the laser source form a
triangle.
B. 2. A structured-light scanner is a device that measures the shape of three-dimensional
objects. It is composed of a projector that projects light patterns and a camera, slightly inclined
relative to the projector. The light pattern is projected onto the subject; the camera analyzes
the pattern projected on the object and calculates the distance of each point in the field of view.
Projecting a narrow band of light onto a three-dimensional object creates a line on the
surface that appears distorted when viewed from an angle different from the projector’s. A faster
approach is to project a pattern of multiple bands. Seen from several viewpoints, the pattern appears
distorted by the shape of the object, and the displacement of the bands allows exact coordinates to
be computed in three-dimensional space for any type of surface (except mirrors and glass).
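The depth computation hinted at above can be sketched with a pinhole-camera model: the lateral displacement of the projected stripe in the image plays the role of stereo disparity. The focal length, baseline and displacement below are hypothetical values, not parameters of any particular scanner.

```python
# Illustrative sketch (not from the thesis): in a structured-light or laser-line
# setup, the lateral displacement of a projected stripe, seen by a camera at
# baseline b from the projector with focal length f (in pixels), encodes depth
# much like stereo disparity: z = f * b / d, where d is the displacement.

def depth_from_displacement(f_px, baseline_m, disparity_px):
    """Depth of a surface point from the observed stripe displacement."""
    return f_px * baseline_m / disparity_px

z = depth_from_displacement(f_px=800.0, baseline_m=0.1, disparity_px=40.0)
# 800 * 0.1 / 40 = 2.0 meters
```

A real decoder also has to identify which band each pixel belongs to (e.g. with Gray-coded patterns) before this formula can be applied.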
B. 3. The modulated-light scanner illuminates the subject with continuously changing light;
usually the light source varies its intensity in a sinusoidal pattern. A camera detects the reflected
light and how much the pattern has shifted. Modulated light allows the scanner to ignore sources
of light other than the laser, so there is no interference.
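The phase-shift principle behind modulated-light scanning can be sketched as follows; the modulation frequency and the phase value used here are hypothetical.

```python
import math

# Sketch of the modulated-light (phase-shift) principle: the source intensity
# varies sinusoidally at f_mod, and the phase shift of the reflected signal
# encodes the round-trip distance. Values are illustrative.

c = 299_792_458.0  # speed of light, m/s

def distance_from_phase(phase_rad, f_mod):
    """Distance from the measured phase shift of a sinusoidally modulated beam."""
    # The round trip covers 2*d, and a full 2*pi shift corresponds to c / f_mod.
    return c * phase_rad / (4.0 * math.pi * f_mod)

# A phase shift of pi at 10 MHz modulation corresponds to half of the
# unambiguous range c / (2 * f_mod), i.e. about 7.5 m:
d = distance_from_phase(math.pi, 10e6)
```

Because the phase wraps around every 2π, such scanners have a limited unambiguous range and often combine several modulation frequencies.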
B. 4. The volume scanner is most commonly used in medicine. For example, computed
tomography is a method of creating a three-dimensional model of the interior of an object from a
multitude of 2D images made using X-rays. Similarly, magnetic resonance imaging (MRI) produces a
better contrast between the soft tissues of the body than tomography, being very useful in neurology,
imaging of muscles and the skeleton, and the cardiovascular and oncological fields. These techniques
produce a volumetric representation that can be directly visualized, manipulated or transformed
into a 3D surface using surface-extraction algorithms.
C. 1. Passive 3D scanners do not emit radiation but rely on ambient light. Most
solutions detect visible light because it is an already existing source. Another type of radiation that
can be used is infrared. These methods can be very cheap because there is no need for special
components, just a simple camera.
3. Analysis of 3D scanning technologies
In order to choose the best way to solve the problem, we compared scanners
according to the technology used. They all have strengths; some are better in a particular field of
expertise but have other drawbacks. If we want precision, we choose a slow scanner that captures
many cloud points for a detailed scan of the object. If time is important and we want to scan
quickly, we choose a scanner that captures a smaller number of cloud points and sweeps the
object at a higher speed. But not all objects can be scanned with the same technology, no matter
whether time or detail is important in scanning. Some objects do not allow us to interact with them
directly; in that case a non-contact method must be chosen.
3.1 The main types of 3D scanners with light radiation
The main types of laser 3D scanners are:
1. "Time-of-Flight" 3D laser scanner
2. 3D laser scanner using triangulation
3. Mobile 3D laser scanner
4. 3D laser scanner with holographic conoscopy
5. Structured-light scanner
1. "Time-of-Flight" 3D Laser Scanner
"This is an active scanner that uses the laser to probe the subject. At its center, there is a
laser rangefinder that measures the flight time of the laser beam. The rangefinder measures the
distance to the scanned object by calculating the return time of a laser pulse. Since the speed of
light c is known, the return time determines the distance traveled by the light, which is twice the
distance between the scanner and the surface of the object. If t is the round-trip time of the laser
beam, then the distance d = c · t / 2. The precision of a time-of-flight laser scanner depends on how
precisely t is measured: the time required for light to travel 1 millimeter is approximately
3.3 × 10⁻¹² seconds." (Yan Cui, Sebastian Schuon, 2010)
Fig. 7 -A ToF scanner and its working principle
These scanners can measure between 10,000 and 100,000 points per second. The main
advantage of this type of scanner is the very large scanning distance, which makes it ideal for
buildings or geographic features. Its disadvantage is lower accuracy: because of the very high
speed of light, timing the round trip precisely is difficult.
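The quoted formula d = c · t / 2 and the picosecond figure can be checked with a few lines of code; the example pulse time is invented for illustration.

```python
# Time-of-flight distance from the quoted formula d = c * t / 2,
# where t is the round-trip time of the laser pulse.

c = 299_792_458.0  # speed of light, m/s

def tof_distance(t_seconds):
    """Distance to the target from the round-trip time of the laser pulse."""
    return c * t_seconds / 2.0

d = tof_distance(2e-6)  # a 2-microsecond round trip -> about 300 m

# A 1 mm change in one-way distance changes the round-trip time by only
# about 6.7 picoseconds, which is why timing precision limits accuracy:
dt_per_mm = 2 * 0.001 / c
```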
2. 3D Laser scanner using triangulation
“The triangulation method has been used for hundreds of years to create maps and roads.
The process is based on determining the dimensions and geometry of real objects.
Triangulation uses at least one camera as the receiver, the distances and angles between the camera
images and the projected light (laser or LED) forming the base of the triangle” (LMI Technologies, 2016).
The angle between the projected and the reflected light on the object's surface closes the
triangle in which the 3D coordinates are calculated. Applying this principle repeatedly, a 3D
representation of the object is formed.
Fig. 8 -Triangulation principle in scanning
3. Mobile 3D Laser Scanner
Mobile scanners create the 3D image of the object by the same triangulation principle
described above: a laser beam (dot or line) is projected onto the object by means of a hand-held
device, and a sensor measures the distance to the surface.
The information is collected relative to an internal coordinate system. Therefore, in order
to collect correct information while the device is moving, its position in space must be
determined. The position is determined using reference markers on the surface of the object to be
scanned, or using an external tracking system. Usually, an external tracking system has a laser that
determines the position of the sensor and an integrated camera to determine its orientation. This
method uses infrared diodes attached to the scanner that are seen by the camera through filters.
The information is collected in three-dimensional space; after processing, it can
be transformed into triangulated polygons. Mobile scanners can combine this information with
passive ambient light sensors to capture textures and colors for a complete 3D model reconstruction.
Fig. 9 -A mobile scanner in action
4. 3D laser sca nner with holographic conoscopy
Holographic conoscopy is a holographic method based on the propagation of light (laser)
through a uniaxial crystal. It was thought that its sole purpose was to be a three -dimensional sensor
for precise no n-contact distance measurements.
“Based on crystal optics, conoscopy is a technique built on polarized-light
interference. In the elementary setup, a beam of light is projected onto the surface
of an object. The beam creates a light point on the target object, which scatters light in all directions
by reflection. A complete analysis of the diffuse light is performed. The measurement process returns
the distance of the light point from a reference plane. This system of determining three-dimensional
measurements is the basis of holographic conoscopy” (Gabriel Y. Sirat, Freddy PAZ,
2005).
Fig. 10 -Holographic conoscopy laser
5. Structured Light Scanner
A structured light digital projector can be used to eliminate the mechanical movement
required to sweep the light plane across the surface of the object. The projector could display
a single column (or row) of white pixels translated across a black background to approach
the scanning quality of laser triangulation. However, such a plane-sweep sequence does not
exploit the projector's full capability, which can display full color images.
Fig. 11 -Structured light scanning
“Structured light sequences have been developed that solve this problem, establishing
projector-camera correspondence with just a few frames. In general, the identity of each plane
can be encoded spatially (single frame) or temporally (multiple frames), or by a combination of the two,
spatial and temporal coding. For example, spatially coded patterns allow the use of a single pattern for
reconstruction, enabling the capture of dynamic scenes. Alternatively, temporal encoding is more
reliable, minimizing artifacts that occur during scanning.” (Douglas Lanman, Gabriel Taubin, 2009)
3.2. Active 3D scanning methods
The issue of correspondence
Given two or more images of the same 3D scene, taken from different points of view,
the correspondence problem is the task of finding the set of points in one image that can be
identified as the same points in another image. To do this, the points or features of
one image must be matched to the points or features of the other image.
The correspondence problem occurs in stereo situations where two or more images
of the same scene are used. In another case, N cameras can be used at the same time, or a single
moving camera whose position changes relative to the scene. The problem becomes harder when the
objects themselves are moving relative to the cameras.
A common application of the correspondence problem is found in creating panoramas, or
image stitching. In this case, we need to identify matching points between image
pairs in order to compute the transformation that aligns one image with the others.
Active 3D scanners
Active scanners overcome the correspondence problem using controlled light.
Compared to passive non-contact methods, controlled lighting works, in most cases, regardless
of the surface material. The exception is objects whose surface is translucent or
highly reflective. Many such scanners attempt to solve the problem of
correspondence by replacing one of the cameras of a stereoscopic system with a controlled light
source.
In the 1970s, the first dot laser scanners began to appear. A series of fixed and moving
mirrors was used to sweep the point across the surface of the object.
A camera records the moving point; with appropriate calibration, the 2D projection of the
point defines the line connecting the image point and the center of the camera. The depth is
deduced by intersecting this line with the ray from the laser source to the projected point,
whose direction is given by the deflection of the beam by the mirrors. As a result, these single-point
laser scanners are the optical equivalent of coordinate measuring machines.
Scanning only one point at a time, as a Coordinate Measuring Machine (CMM) does, makes
the process very slow. With the development of high-quality CCD sensors, their prices have
fallen, and in the 1980s the first planar scanners appeared. In this model, a laser projector creates
a single plane of light that is mechanically swept across the surface of the object. As in the previous
model, the laser plane deflected on the surface of the object determines a 3D curve. Depth
is recovered by the intersection of this plane with the set of lines that link the center of the
projection camera to the 3D curve of the object. By removing a dimension, these scanners
determine the shape of an object much more quickly.
3.3 3D laser triangulation scanning
One of the most common 3D scanning methods is laser triangulation, because of its
simplicity and robust construction. Its design relies on simple trigonometry. The
image captured by the camera is 2D, and the depth cannot be determined from the image alone.
To determine the depth, we need the laser. In laser triangulation, the beam is projected onto
the object, and the image is captured by a camera. Since the cost of the components used affects
scanner accuracy, high-priced, high-quality components will produce better results than a low-cost
component scanner. The distances between the laser, object, and camera form a triangle,
hence the name of the triangulation scanning method.
In laser triangulation, the laser beam is projected onto the object, a picture is
captured by the camera, and the depth is computed from the triangle formed between the three
points (laser, object, camera).
Stereoscopic scanning, planar scanning, and structured light scanning recover the
3D shape of objects in the same way. First, the problem of correspondence
is solved, either by a passive correspondence algorithm or by a spatial identification method (e.g.
projection of a known line, plane or pattern). Once the correspondence between two or more points
of view (e.g. a calibrated pair of two or more cameras) has been established, triangulation recovers
the depth of the scene. In stereoscopic or multi-view systems, a point is reconstructed from the
intersection of two or more lines of correspondence. In structured light systems, a
point is recovered by intersecting the lines of correspondence with the projected planes.
In trigonometry and geometry, “triangulation is the division of a surface or plane polygon
into a set of triangles, usually with the restriction that each triangle side is entirely shared by two
adjacent triangles. It was proved in 1925 that every surface has a triangulation, but it might require
an infinite number of triangles and the proof is difficult (Francis and Weeks, 1999).”
Determining the distance to a point by measuring two fixed angles
Coordinates and distances can be determined by calculating the length of one side of a
triangle, given measurements of the angles and sides of the triangle formed by this point and
two other reference points. The method becomes inaccurate when distances approach the scale of the
Earth's curvature, but it can then be replaced by spherical trigonometry.
Euclidean geometry consists of two fundamental types of measurement: angles and
distances. The angle scale is absolute, and Euclid uses the right angle (90°) as the main computing
unit; for example, an angle of 45° is referred to as half a right angle. The distance scale is
relative: an arbitrary line segment of non-zero length is chosen as the
unit of measurement, and every other distance is expressed relative to the chosen unit.
Measurements of surface or volume derive from distance determination. For example, a
rectangle with a width of 3 units and a length of 4 units has an area of 12 square units; these geometric
interpretations are limited to three dimensions.
Three-dimensional space is a geometric representation in which three values (called
parameters) are required to determine the position of an element (e.g. a point). In physics and
mathematics, a sequence of n numbers can be understood as a location in n-dimensional space. When
n = 3, the set of locations is called three-dimensional Euclidean space. It is represented mostly by the
symbol ℝ³. It serves as a model of the physical universe (without
considering time as the 4th dimension) in which all known matter exists.
Computation of distances
Fig. 12 -Determining the distance between 2 points
Let 𝑙 be the distance between A and B. Then:
𝑙 = 𝑑/tan 𝛼 + 𝑑/tan 𝛽
Taking into consideration that:
tan 𝛼 = sin 𝛼 / cos 𝛼
sin(𝛼 + 𝛽) = sin 𝛼 · cos 𝛽 + cos 𝛼 · sin 𝛽, we obtain:
𝑙 = 𝑑 (cos 𝛼 / sin 𝛼 + cos 𝛽 / sin 𝛽)
𝑙 = 𝑑 · sin(𝛼 + 𝛽) / (sin 𝛼 · sin 𝛽)
Finally, the result can be expressed as:
𝑑 = 𝑙 · sin 𝛼 · sin 𝛽 / sin(𝛼 + 𝛽)
This determines the distance to an unknown point by observing it from two reference
points a known distance apart, measuring the two angles from the baseline, and finally computing
its coordinates.
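The final formula d = l·sin α·sin β / sin(α + β) can be exercised with a minimal numeric sketch (Python used for illustration; the function name is mine, not from the thesis):

```python
import math

def triangulated_distance(l, alpha, beta):
    """Perpendicular distance d from the baseline AB to the observed point,
    given the baseline length l and the two observation angles alpha and
    beta (in radians): d = l * sin(alpha) * sin(beta) / sin(alpha + beta)."""
    return l * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

# With a 1 m baseline and both angles at 45 degrees, the point lies 0.5 m
# from the baseline: sin(45)*sin(45) = 0.5 and sin(90) = 1.
```

The symmetric 45°/45° case is a convenient check, since every term in the formula reduces to a simple value.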
3.4 3D laser scanner precision
In modern engineering, the term "laser scanning" has two meanings: one refers to bar code
scanning devices and the other to the controlled deflection of laser beams.
“Laser scanning uses laser radiation, projected as a dot or a line, to capture
the shape of objects, buildings and the environment. The main advantages of this method
are that the laser can reveal the smallest cracks of the scanned surface. Another advantage is the
scanning speed. At the same time, prototypes can be reproduced very quickly, measured and
compared to the designed model. Using this technology, one can intervene in the manufacturing
process to eliminate some of the causes of manufacturing defects.” (Wolfgang Boehler, Andreas
Marbs, 2003)
Any point cloud produced by a laser scanner contains a considerable number of erroneous
points, so the quality of a delivered cloud should be verified rather than assumed.
Angular accuracy
“The angle of inclination of the laser emitter towards the receiver may cause errors. Any
error of axis alignment or angular reading will result in errors perpendicular to the
propagation path. Since point positioning is difficult to verify, some investigations of this issue
have been made. Errors can be detected by measuring short vertical and horizontal distances
between objects located at equal distances from the scanner and comparing the measurements.”
(Wolfgang Boehler, Andreas Marbs, 2003)
Range accuracy
“Triangulation scanners determine range by means of the triangle formed
by the three points of the system (the laser, the point of reflection on the object and the receiver),
positioned at precise distances from the object. The camera is used to determine the direction of
the reflected laser beam. Range errors can be observed when the distances measured
with the scanner are compared to known reference distances.” (Wolfgang Boehler, Andreas Marbs, 2003)
Resolution
The term resolution is used when we discuss the performance of the laser
scanner. From a user's point of view, the resolution describes the ability to detect small objects or
object features in the point cloud. “Technically speaking, two different specifications contribute to
this ability of the scanners: the smallest possible increment of the angle between two successive points,
and the width of the laser beam on the object surface.” (Wolfgang Boehler, Andreas Marbs, 2003)
Since the angular increment and the laser beam width jointly determine the
resolution on the object, a small sample item with fine elements and holes can be used to
determine the resolution.
Edge Effect
Even if the laser is very well focused on the surface of the object, the laser beam will have
a certain width. When the laser point or line reaches the target surface at one of its edges, only part
of it is reflected. The rest is reflected by the adjacent surface, by another surface behind the
edge, or not at all (when there is no surface within the scanner's range). Both types of scanners,
time-of-flight and triangulation, produce a variety of wrong points in the vicinity of edges. The wrong
points (artifacts or phantom points) are found along the reflected laser beam. These errors can
range from a fraction of a millimeter to a few decimeters. Therefore, the object reconstructed from
the point cloud will appear larger than in reality, since the bad points are also recorded at the time of
scanning.
Influence of surface reflection
“Triangulation laser scanners rely on the reflection of the light beam from the surface of the
object being captured by the camera. The power of the return signal is influenced (among other
factors such as distance, atmospheric conditions or incidence angle) by the reflection
properties of the surface.” (Wolfgang Boehler, Andreas Marbs, 2003) White surfaces have a high degree of
reflection, while darker areas reflect less. The effect on colored surfaces depends on the
spectral characteristics of the laser beam (green, red, blue, near infrared). Shiny surfaces are
usually hard to scan.
It has been observed that surfaces with different degrees of reflection produce errors in the
measured range. For some materials, these errors can result in a representation of the scanned
object much larger than the real one. For objects that combine different colors and materials,
errors in the scanning process are therefore to be expected. These errors can be
avoided by temporarily covering the object with a uniform surface of a single color, but this is
not applicable in most cases.
Environmental conditions
Temperature: Any scanner will only work properly if it is used within a certain temperature
range. Even within that range, deviations may occur, especially in the measured distance. Keep in mind
that the internal temperature will, in most cases, be higher than the ambient temperature due to
internal heating of the components and/or external radiation (sunlight). Over time, temperature may
lead to changes in the system.
Atmosphere: As with any optical distance measurement, variations in temperature
and pressure change the propagation speed of light through the environment. For
short ranges this is negligible. Also, dust or steam in the environment can produce an
effect similar to the edge effect described above.
Radiation interference: Lasers operate in a very narrow frequency band. Because of this,
filters can be applied to limit the receiver (camera) to the laser frequency only. If the
radiation of another light source (sunlight, lamp) is strong compared to the signal, some of the ambient
radiation will pass through this filter and will reduce accuracy or even prevent measurements.
4. Developing a 3D Laser Scanner
Following the research and documentation on how to solve the proposed problem, I chose
to build a 3D line-laser scanner based on the principle of laser triangulation. The system has
been optimized for space, cost, future development and mobility.
As a source of inspiration, I used the model built by bqLabs, Cyclops, a laser scanner
designed for educational and research purposes. It is composed of an Arduino Uno, two line red laser
diodes (5 mW, 650 nm), a webcam (5 MP), a stepper motor, a stepper motor driver and a
custom-made stand that fits all the parts. The stand is made of wood and aluminum for
portability purposes.
I chose Cyclops as a starting point and then developed and modified it to increase mobility,
reduce the overall size, increase motor power and the maximum weight of the scanned object, improve
ease of use and allow further development.
4.1 Hardware design and implementation
Analyzing the problem, the best choice is to scan the object through a non-destructive,
non-contact method. Of these scanning methods, I chose linear laser scanning. I designed the
scanner to minimize costs and developed the scanning software.
The main function of the system is scanning the object over 360°. The mobile
platform rotates in 200 steps (1.8° per step) in a full scan, so it covers all faces of the object.
The scanning procedure can be stopped at any time during the process, but this is not
recommended.
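The step arithmetic above can be sketched as follows (an illustrative Python fragment, not part of the thesis's MATLAB code; the constants mirror the 200-step turntable described in the text):

```python
# Turntable step arithmetic: 200 steps per full rotation, 1.8 deg per step.
STEPS_PER_REV = 200
DEG_PER_STEP = 360.0 / STEPS_PER_REV  # 1.8 degrees

def steps_for_angle(angle_deg):
    """Number of motor steps closest to the requested platform rotation."""
    return round(angle_deg / DEG_PER_STEP)
```

A full 360° scan therefore maps to exactly 200 step commands, one per captured frame pair.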
Another important function implemented through the design is the
mobility of the entire system. It was built to be very easy to transport and use by
anyone.
Bill of materials
As mentioned before, the scanner is composed of the following hardware parts:
a webcam (Logitech C310), an Arduino Uno R3 board, a stepper motor (28BYJ-48),
a motor driver (A4988), two line red laser diodes and the stand that was specially designed
to fit the respective parts.
The total cost of the entire project was 110 RON, as follows:
• 1xLogitech C310 Webcam -Free
• 1xArduino Uno R3 -40 RON
• 1xStepper Motor -30 RON
• 1xMotor Driver -25 RON
• 2xLine Laser Diodes -15 RON
• 1xStand -Free (was made out of spare parts)
Camera
The camera used in this project is a Logitech C310. It is positioned at a height
of 20 cm so that the laser trajectory on the surface of the rotating platform and the
background behind the platform are visible. The distance between the camera
and the turntable is known to be around 30 cm.
The position along the OX, OY and OZ axes, as well as the rotation around the OZ axis,
was adjusted. The camera is attached directly to the scanner stand, so the two form a single unit.
Fig. 13 -Logitech C310 HD Webcam
The webcam has the following specifications:
• Video Resolution: 1280×720
• Photo Resolution: 5 MP
• PC connection: USB
• Dimensions: 55 x 20 x 15 mm
• Manual focus
• Frame rate: 30 fps
• Sensor: CMOS
In our case the most important features are the video and photo resolutions.
Arduino Uno R3 board
In my design, I used an Arduino Uno development board, which is well suited
for creative electronics projects. The difference from the original Arduino Uno board
is the package of the ATmega328P microcontroller and the fact that it uses the CH340
integrated circuit for the USB-serial connection used when programming it.
Fig. 14 -Arduino Uno R3 Board
Technical specifications:
• Operating voltage: 5 V
• Supply voltage: 7-12 V
• I/O pins: 14
• PWM pins: 6 (out of the 14 I/O)
• ADC pins: 8
• Flash memory: 32 kB (0.5 kB occupied by the bootloader)
• TWI, SPI and UART communication
• Operating frequency: 16 MHz
The board, to which all components of the system are connected, is powered
by a 12 V source. The connected components include:
• The stepper motor driver
• The stepper motor
• The line lasers
Arduino Uno Board Pinout:
Fig. 15 -Board Pinout
In my case, I use only 4 digital pins to control the entire system. Digital
pins 2 and 3 are used to control the two line laser diodes, and pins 12 and 13 are used to
control the driver which commands the motor.
Pin 12 corresponds to the first pin (MISO) of the ICSP (In-Circuit Serial
Programming) header and is connected to the STEP pin of the motor driver, enabling
the motor to rotate step by step. Pin 13 corresponds to the third pin of the
ICSP header (SCK) and is connected to the DIR pin of the driver, setting the direction of
rotation.
Other pins used are digital pin 9, which serves as ground for one of the laser
diodes, the power pins to which the current source is connected, and the ground pins.
Stepper Motor
I chose a powerful, high-end stepper motor to extend the range of objects that
can be scanned with this system.
A stepper motor works differently from a normal DC motor, which rotates continuously
when a voltage is applied to its terminals. The stepper motor, on the other hand, has
multiple toothed electromagnets arranged around a central, gear-shaped rotor. The
electromagnets are powered by an external current, commanded by a driver and a
microcontroller. To make the motor turn, the first electromagnet is energized,
magnetically attracting the teeth of the rotor towards its own row of teeth.
When the rotor teeth are aligned with the first electromagnet, they are slightly offset
from the next electromagnet. So, when the next electromagnet is powered and the previous
one switched off, the rotor rotates towards it and the process is repeated. Each of these rotations
is called a "step", with an integer number of steps forming a complete rotation. In
this way, the motor can be rotated precisely.
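The coil-by-coil energizing order described above can be illustrated with the common half-step sequence used for 4-phase motors such as the 28BYJ-48 (an illustrative Python fragment; in this project the phase sequencing is handled by the driver chip, not by software):

```python
# Half-step sequence for a 4-coil stepper: each row is the energized state
# (1 = on) of coils A, B, C, D; successive rows attract the rotor by one
# half-step, and the whole table repeats for continuous rotation.
HALF_STEP = [
    (1, 0, 0, 0),
    (1, 1, 0, 0),
    (0, 1, 0, 0),
    (0, 1, 1, 0),
    (0, 0, 1, 0),
    (0, 0, 1, 1),
    (0, 0, 0, 1),
    (1, 0, 0, 1),
]

def coil_state(step_index):
    """Coil pattern to apply at a given step, cycling through the sequence."""
    return HALF_STEP[step_index % len(HALF_STEP)]
```

Note that either one or two coils are energized at any moment, which is what gives the half-step mode its doubled resolution.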
Fig. 16 -28BYJ -48 stepper motor
The picture above shows a 28BYJ-48 stepper motor. Its coils are driven through several
Darlington transistors. A Darlington transistor is made
up of two cascaded transistors, in order to control as large a current
as possible with the smallest possible command current.
Stepper motor technical features:
• Recommended power supply voltage: 5 V
• Number of phases: 4
• Approximate gear reduction: 1:64
• 64 steps / revolution (internal rotor)
• Resistance / phase: 50 ohms
• Minimum torque: 34.3 mN·m
• Insulation class: A
Motor Driver
The motor driver is an A4988 microstepping bipolar stepper motor driver,
with adjustable current limiting, over-current and over-temperature
protection, and five different microstep resolutions (down to 1/16-step). It operates
from 8 V to 35 V and can deliver up to approximately 1 A per phase without a heat
sink or forced air flow (it is rated for 2 A per coil with sufficient additional cooling).
Fig. 16 -A4988 driver
Fig. 17 -Motor-Driver-Arduino connections
The picture above presents the connections made between the Arduino
board, the driver and the stepper motor. The VDD and GND pins of the driver are
connected to the power pins of the Arduino, and the STEP and DIR pins are connected
to the ICSP pins of the Arduino (pin 1 and pin 3). Pins 1A, 2A, 1B and 2B make the
connection with the stepper motor. VMOT and GND are the power and
ground of the motor.
Driver’s key features:
• Simple step and direction control interface
• Five different step resolutions: full -step, half -step, quarter -step, eighth –
step, and sixteenth -step
• Adjustable current control lets you set the maximum current output
with a potentiometer, which lets you use voltages above your stepper
motor's rated voltage to achieve higher step rates
• Intelligent chopping control that automatically selects the correct
current decay mode (fast decay or slow decay)
• Over -temperature thermal shutdown, under -voltage lockout, and
crossover -current protection
• Short -to-ground and shorted -load protection
Line Laser Diodes
The two line laser diodes are positioned on the OX axis, at the same height
and distance from the platform as the camera. Each laser, together with the camera and
the turntable, forms a triangle. Like the camera, the diodes are mounted on the
stand.
Laser Features:
• Weight: 6.3g
• Diameter: 10mm
• Length: 31mm
• Operating voltage: 2.8 – 5.2 VDC
• Maximum current amplitude: 25mA
• Operating temperature: -10 °C to 45 °C
Fig. 18 -Line laser diodes
The stand
An essential element of the system is the stand. It is the unifying base of all
the components and must allow the scanning process to run as required.
It must ensure the safe transport of all components, preserve their
original positions, and not deform as a result of the scanning process.
The inspiration for this architectural design was the bqLabs scanner, Cyclops.
The stand is made of wood and aluminum for ease of transport. All the
elements are fixed to the stand; the only mobile element is the rotating platform.
Fig. 19 -Side view
The Arduino board, the motor driver and the stepper motor are housed inside
the box. As you can see, the box is made of aluminum and has holes for the power
supply and the USB cable, as well as for ventilation. On top of the box sits
the turntable, which is driven by the stepper motor and on which the
object to be scanned is placed.
Fig. 20 -Top view
The box is connected by an aluminum arm, at a fixed distance, to
a support that holds the two lasers and the camera. The support is shaped as a trapezoid,
with the large base up and the small base down. The lasers are placed at the ends
and the camera at the center of the support.
4.2 Software design and implementation
The software was designed and developed by myself so that it works seamlessly
with the hardware model that I built. Another reason I developed it myself is that I
wanted it to be user friendly and very easy to operate by anyone without a technical
background.
The program was written in MATLAB, using MATLAB R2013b as the working
environment. I used this version because it is the last one that does not require
the user to install drivers for the image acquisition tool, making it
easier to work with.
Why MATLAB? One of the reasons I chose MATLAB is that it is very good
for data acquisition and image processing. It has a very large (and growing) library of built-in
algorithms for image processing and computer vision applications. It also allows you to test
algorithms immediately, without recompilation: you can type something at the command line or
execute a section in the editor and immediately see the results, which greatly facilitates algorithm
development. The ability to process both images and videos was another argument for choosing
MATLAB.
The software was developed specifically for our hardware setup. Arduino3DScanner.fig is
the main GUI file and can be launched from the MATLAB console by typing "guide" and selecting
the Arduino3DScanner project from the browse field. Arduino3DScanner.m contains all the code
for the application and implements the scanning algorithm. There are basic utility functions that
are self-explanatory, in the sense that they control the operation of the lasers, the stepper motor
and the camera. The cameraParams.mat file contains the calibration data for my setup.
The program can be configured for any camera model (webcam, DSLR, industrial cameras).
The resulting file is a PLY, which contains the coordinates and color of each point on the scanned
surface. It can be opened in a CAD program such as the Autodesk products or an open-source
program like MeshLab.
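The PLY output mentioned above is a simple text format: a short header declaring the vertex properties, followed by one line per point. A minimal sketch of an ASCII PLY writer with per-vertex color (Python for illustration only; the thesis exporter is part of the MATLAB code and is not reproduced here):

```python
def write_ascii_ply(path, points):
    """Write a list of (x, y, z, r, g, b) tuples as an ASCII PLY file."""
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
```

A file written this way opens directly in MeshLab as a colored point cloud.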
Fig. 21 -Software functionalities
As stated before, the program is made up of a main function, Arduino3DScanner, and other
functions that command the hardware, such as start_laser, stop_laser, rotateMotor and
generatePoints. We are going to present each function separately in order to see its
importance to the entire program.
Before explaining what each function does, it is necessary to understand that everything
is controlled by the developed MATLAB software; we are not running any custom code on the
Arduino board.
MATLAB connection to Arduino
In order to create a connection between MATLAB and Arduino, I downloaded and installed
the Arduino hardware support package from MathWorks (https://www.mathworks.com/hardware-support/arduino-matlab.html). The package contains the install file and some libraries that must
be uploaded onto the board in order to establish the desired connection.
I installed the package and uploaded the adios.pde library onto the board, which
enables me to program and command the digital pins of the board. After these steps, I was able to
create a connection between MATLAB and Arduino via the USB cable.
The command used to connect to the board is a = arduino('COM3'), where a is the Arduino
hardware connection created using arduino, specified as an object, and COM3 is the port to which
the board is connected. To find out which COM port the board is connected to, the user should
open Start - Control Panel - Device Manager - Ports (COM & LPT).
To learn more about the Arduino package, I typed arduino at the command line and
from there found the list of methods that I needed in order to program my board from
MATLAB.
Programming the Arduino board
To program the Arduino board from MATLAB I used only two methods from the Arduino
support package: pinMode() and digitalWrite(). These methods helped me program the pins of
the board.
• pinMode(a, pin, str) - reads or sets the I/O mode of a digital pin
a - the Arduino class object
pin - the number of the pin to be programmed
str - a string that specifies the pin mode: 'INPUT' or 'OUTPUT'
• digitalWrite(a, pin, value) - performs digital output; it is used to set pin values
a - the Arduino class object
pin - the number of the pin to be programmed
value - the status of the pin: 1 (HIGH) means the pin is active, 0 (LOW) means it is passive
MATLAB program functions
• rotateMotor(a, angle) - This function is used to command the motor. angle is a
variable giving the angle of rotation of the platform and a is the Arduino class
object. Signals are sent to pins 12 and 13 of the Digital PWM section on the Arduino board. These pins
correspond to pins 1 and 3 of the ICSP header, which are connected to the STEP and DIR pins, respectively,
on the motor driver. When, for example, the STEP pin receives a HIGH (1) impulse, the motor
is commanded to rotate one step.
STEP - this control line drives the stepper motor. When we apply a pulse to this line
(000011110000), the driver moves the stepper motor by one step when the line transitions from 1
to 0 (the falling edge of the pulse). We connect this line to pin 12 of the Arduino and
write a small MATLAB function to send a pulse on this I/O pin.
DIR - this control line decides whether the stepper motor rotates in the clockwise or
counter-clockwise direction. We connect this line to pin 13 of the Arduino. Setting this line
to logical 0 makes the stepper rotate in the clockwise direction and logical 1 makes it rotate in the
counter-clockwise direction.
function rotateMotor(a, angle)
pinMode(a, 12, 'OUTPUT')        % declare pins 12 (STEP) and 13 (DIR) as outputs
pinMode(a, 13, 'OUTPUT')
digitalWrite(a, 13, 0)          % initialize pin 13 (DIR) as 0 (LOW): clockwise
for i = 1:round(angle/1.8)      % one pulse per 1.8-degree step (200 steps = 360 degrees)
    digitalWrite(a, 12, 1)      % raise STEP high
    pause(.01)
    digitalWrite(a, 12, 0)      % falling edge makes the stepper rotate one step
    pause(.01)
end
end
• start_laser(a) - The start_laser function turns on the two line-laser diodes. It is a
simple function: pins 2 and 3 are declared as outputs with pinMode() and then driven HIGH
with digitalWrite(), which turns on the lasers.
function start_laser(a)
pinMode(a, 2, 'OUTPUT' ) % declare digital pins 2 and 3 as OUTPUT
pinMode(a, 3, 'OUTPUT' )
digitalWrite(a, 2, 1) % turn on laser diodes
digitalWrite(a, 3, 1)
pause(.01);
end
• stop_laser(a) - It is used to turn off the laser diodes once the scanning process
is done.
function stop_laser(a)
digitalWrite(a, 2, 0) % turns off the lasers by making the outputs LOW
digitalWrite(a, 3, 0)
pause(.01);
end
• generatePoints(img, orig_img, angle) - This function is used to generate the
point cloud.
function [points, colors] = generatePoints(img, orig_img, angle)
[orig_h, orig_w] = size(img);
midpoint = floor(orig_w/2 + 0.5);
img_truncated = img(:, midpoint:midpoint+300);      % keep a 300-pixel band right of center
orig_img_truncated = orig_img(:, midpoint:midpoint+300, :);
[height, width] = size(img_truncated);
theta = 31.5925;                                    % angle between laser plane and camera axis
points = [];
colors = [];
for row = 1:height
    scanline = double(img_truncated(row, :));
    [peaks, x] = findpeaks(scanline);               % locate the laser line in this row
    if ~isempty(x)
        if size(x, 2) > 1
            x = sum(x)/size(x, 2);                  % average multiple detections
        end
        y = x/tand(theta);                          % depth by laser triangulation
        point = [x, y, height - row];
        color = [orig_img_truncated(row, floor(x + 0.5), 1) ...
                 orig_img_truncated(row, floor(x + 0.5), 2) ...
                 orig_img_truncated(row, floor(x + 0.5), 3)];
        points = [points; point];
        colors = [colors; color];
    end
end
if ~isempty(points)
    R = rotz(angle);                                % rotate the profile by the turntable angle
    points = points*R;
end
end
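The geometry inside generatePoints can be sketched in Python with NumPy (illustrative values only; theta matches the constant assumed in the MATLAB code): the lateral offset x of the laser peak gives the depth y = x / tan(θ), and each extracted profile is then rotated about the vertical axis by the current turntable angle.

```python
import numpy as np

THETA_DEG = 31.5925  # laser-to-camera angle assumed in generatePoints

def depth_from_offset(x_offset, theta_deg=THETA_DEG):
    """Laser triangulation: depth of a point from its horizontal pixel offset."""
    return x_offset / np.tan(np.radians(theta_deg))

def rotate_about_z(points, angle_deg):
    """Rotate an (N, 3) array of points about the z axis by angle_deg degrees."""
    c, s = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

# one scanned point: laser peak offset 100 px, image row mapped to height 50
profile = np.array([[100.0, depth_from_offset(100.0), 50.0]])
turned = rotate_about_z(profile, 90.0)  # platform advanced by 90 degrees
```

Stacking the rotated profiles from all 200 turntable positions is what builds up the full cylindrical point cloud.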
Camera calibration
Camera calibration is a very important step in this project. We need the camera to be
precise and accurate in order to generate a better-looking point cloud. To calibrate the camera, we
use the Camera Calibration Toolbox from MATLAB.
We can use the camera calibrator application to estimate camera intrinsics, extrinsics, and
lens distortion parameters. These camera parameters are used in various computer vision
applications, such as removing the effects of lens distortion from an image,
measuring planar objects, or reconstructing 3-D scenes from multiple cameras.
The Camera Calibration tool is opened by simply typing cameraCalibrator in the command
window. From there we can start calibrating the camera and adjusting the desired parameters.
For calibration, we use a checkerboard pattern with 12 columns and 7 rows. Each checkerboard
square has a size of 15 mm.
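What the calibrator estimates can be illustrated with the pinhole camera model (a NumPy sketch with made-up intrinsics, not the parameters obtained from this calibration): the intrinsic matrix K maps a 3D point in the camera frame to pixel coordinates.

```python
import numpy as np

# Hypothetical intrinsics: focal lengths fx, fy and principal point (cx, cy), in pixels
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K, point_cam):
    """Project a 3D point given in the camera frame to pixel coordinates."""
    p = K @ point_cam
    return p[:2] / p[2]  # perspective division by depth

# a point 2 m in front of the camera and 0.1 m to the right lands right of center
u, v = project(K, np.array([0.1, 0.0, 2.0]))
```

Calibration fits K (plus distortion coefficients and per-image extrinsics) so that projections of the known checkerboard corners match their detected pixel locations; the residual mismatch is the reprojection error discussed below.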
To begin calibration, we must add images, either saved images from a folder or
images taken directly from a camera. The calibrator analyzes the images to ensure they meet the
calibrator requirements and then detects the calibration points.
For best results, I used between 10 and 20 images of the calibration pattern, which I took
using my mobile phone. The calibrator requires at least three images in order to calibrate the
camera. The calibration pattern and the camera setup must satisfy a set of requirements to work
with the calibrator; for instance, use uncompressed images or lossless compression formats
such as PNG.
Fig. 22-Camera calibration results
We can evaluate calibration accuracy by examining the reprojection errors and the camera
extrinsics, and by viewing the undistorted image. For best calibration results, I used all three
methods of evaluation.
Fig. 23 -Camera parameters
The result of the calibration can be exported and saved as a .mat file.
Image Acquisition
Together, MATLAB, the Image Acquisition Toolbox, and the Image Processing Toolbox (and,
optionally, the Computer Vision System Toolbox) provide a complete environment for developing
customized imaging solutions. We can acquire images and video, visualize data, and develop
processing algorithms and analysis techniques. The image acquisition engine enables us to acquire
frames as fast as the camera and PC can support, allowing high-speed imaging.
Fig. 24 - Image Processing Toolbox
In our case, I used the toolbox to acquire the images needed to reconstruct the point
cloud. The camera records images and splits them into frames that are later processed.
To access the Image Acquisition Toolbox, just type imaqtool in the command window of
MATLAB and the application window will appear.
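The acquisition loop captures one frame with the laser off and one with it on; the detection step applied to that pair (imabsdiff followed by im2bw in the MATLAB code) can be sketched with NumPy on synthetic frames:

```python
import numpy as np

def laser_mask(frame_on, frame_off, threshold=0.1):
    """Binary mask of the laser line: absolute frame difference, then threshold.

    Frames are grayscale float arrays in [0, 1]; this mirrors the
    imabsdiff + im2bw steps of the scanner.
    """
    return np.abs(frame_on - frame_off) > threshold

# synthetic 4x4 frames: the laser brightens column 2 by 0.5
off = np.zeros((4, 4))
on = off.copy()
on[:, 2] = 0.5
mask = laser_mask(on, off)
```

Differencing against a laser-off baseline, rather than thresholding a single frame, makes the detection robust to ambient lighting and the object's own colors.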
Graphical User Interface
GUIs (also known as graphical user interfaces or UIs) provide point-and-click control of
software applications, eliminating the need to learn a language or type commands in order to run
the application.
GUIDE (GUI development environment) provides tools to design user interfaces for
custom apps. Using the GUIDE Layout Editor, we can graphically design a UI. GUIDE then
automatically generates the MATLAB code for constructing the UI, which we can modify to
program the behavior of the app.
To access GUIDE, just type guide in the MATLAB command window and the
GUIDE Layout Editor will appear.
Fig. 25 -GUIDE Layout Editor
The GUI of the Arduino3DScanner program was built using the GUIDE Layout
Editor. The purpose was to design a user-friendly GUI that can be handled by anyone.
When running the program, the GUI is the first thing to appear. It has just three buttons, for the
three functionalities that I developed: START, STOP and SAVE.
Fig. 26-Arduino3DScanner GUI
When the START button is pressed, the camera, lasers and stepper motor turn on.
Fig. 27 - After pressing the START button
When STOP is pressed, all the above processes stop. The STOP button is usually used when
a problem is encountered during the scanning process; otherwise the user should allow the
scanning process to finish naturally.
Fig. 28 - After the STOP button was pressed
The SAVE button is used only after the scanning process has finished. The resulting file is a
PLY file, which contains the coordinates and colors of each point on the scanned surface. It can
be used in a CAD program such as Autodesk products or an open-source program like MeshLab.
MeshLab processing
MeshLab is an open-source system for processing and editing 3D triangular meshes. It
provides a set of tools for editing, cleaning, healing, inspecting, rendering, texturing and
converting meshes. It offers features for processing raw data produced by 3D digitization
tools/devices and for preparing models for 3D printing.
The automatic mesh cleaning filters include removal of duplicated and unreferenced vertices,
non-manifold edges and vertices, and null faces. Remeshing tools support high-quality simplification
based on the quadric error measure, various kinds of subdivision surfaces, and two surface
reconstruction algorithms from point clouds, based on the ball-pivoting technique and on the
Poisson surface reconstruction approach. For the removal of noise, usually present in acquired
surfaces, MeshLab supports various kinds of smoothing filters and tools for curvature analysis and
visualization.
It includes a tool for the registration of multiple range maps based on the iterative closest
point algorithm. MeshLab also includes an interactive direct paint-on-mesh system that allows
the user to interactively change the color of a mesh, define selections, and directly smooth out noise and
small features.
Fig. 29 - MeshLab working environment
We use MeshLab in order to create a more realistic 3D model. MeshLab helps us by
processing and connecting the point cloud of our scanned model. It also gives the user the
possibility of applying texture to the 3D model making it look more natural.
4.3 Software testing
For testing, I chose different objects, each with different properties, to highlight some of
the effects mentioned in the paper, but also to test the scanner's capability and usefulness.
We will notice that some colors could not be scanned and reconstructed in the 3D model.
This is because these colors absorb too much laser radiation and do not reflect enough light back
to the camera to close the triangle and achieve triangulation of the cloud points. Another problem
is the edge effect, which occurs when the laser beam is reflected on the surface of the object in
a direction outside the camera's field of view.
The shape of the object or surface must also be taken into consideration when
attempting a scan. If an object has many curves, or if its composition includes different colors
with different refractive indices, it can create problems in the reconstruction process.
Test 1
Fig. 30
Test 2
Fig. 31
Test 3
Fig. 32
4.4 Further development
The developed 3D scanner is just a prototype, so further development is indeed
required in order to improve the scanning accuracy and precision of the device. Below is
a list of changes that may improve the device; I have taken into account both hardware
and software changes.
To improve object scanning, I can modify the system as follows:
• attaching an industrial green line laser;
• enlarging the rotating platform to scan larger objects;
• adding a diffuse light source from three angles: left, right and center.
The object must have the following characteristics:
• smaller size than the field of view
• medium or low complexity
• matte or medium-reflective material
The software can be improved by implementing more complex algorithms in
order to generate a better point cloud and process the entire image. Another idea
could be to filter the image and, respectively, the point cloud (cleaning the point cloud).
This will help when connecting the points and applying the texture by creating a
more complex mesh.
4.5 Other scanning programs
3D scanning software using photogrammetry technology
These programs reconstruct the 3D model with the help of images of the environment
or of an object taken from several angles and positions: they analyze the common points, calculate the spatial
positions from which the images were taken, create the cloud of points, and finally create the
polygonal links, resulting in a 3D model of the scanned surface.
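The core computation behind these tools, recovering a 3D point from its projections in two cameras with known poses, can be sketched with NumPy (the two projection matrices below are made up for the example):

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two camera views.

    P1, P2 are 3x4 projection matrices; uv1, uv2 the observed pixel coordinates.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)  # the solution is the null vector of A
    X = vt[-1]
    return X[:3] / X[3]          # dehomogenize

def project(P, X):
    """Project a 3D point with a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# two toy cameras: one at the origin, one shifted along the x axis
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 2.0])
X_est = triangulate_point(P1, P2, project(P1, X_true), project(P2, X_true))
```

Photogrammetry packages repeat this triangulation for thousands of matched feature points, after first estimating each camera's projection matrix from the image set itself.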
123D Catch
Fig. 33 - 3D scanning using 123D Catch
This program is a simple way to scan 3D objects and the environment. It is a free program
available on iOS, Android, Windows Mobile, and Windows Desktop platforms. The only tool you
need to do a scan is a camera or a cell phone. The process is simple: the object or environment is
photographed from several angles, and the more angles we have, the more accurate the model will
be. The photos are loaded into the program, which performs the 3D reconstruction of the
object or of the scanned environment.
Hypr3D
Hypr3D is a web-based software service that allows everyone to create 3D models from
a series of images or a movie of the desired object or scene. Each model is assigned its own
page with an interactive panel, and all digital models are available for free download. You can
export the model in STL, DAE and PLY formats, which can then be sent to a 3D printer.
In a test performed with a model, the following were observed:
● there were problems uploading images
● the second, successful attempt was made possible with 23 preselected images
● the third attempt pushed the limit of 40 images, but due to the excess of information
the reconstruction could not be carried out
Insight3D
Insight3D is also part of the category of programs that create 3D models from a set of
images. A scene or real object is photographed from multiple angles; the program loads these images,
automatically matches them, calculates the position in the scene from which each image was taken,
and represents the 3D scene with the generated cloud of points.
You can use this program to create textured polygonal 3D models. The main purpose
of this program is strictly educational.
Fig. 34 - 3D representation of the cloud of points in the Insight3D program
3D scanning programs using structured light technology
The principle of this technology is to project a narrow band of light onto a three-dimensional
surface, which produces a line of light that appears distorted when viewed from a perspective
other than that of the projector. This distortion can be used for the exact reconstruction of
the geometry of the surface where the light is reflected.
3D Underworld
Recently, there has been an increase in 3D representations of real-world objects in the
virtual world. Many methods and systems have already been proposed to address this issue, involving
active sensor technologies, passive sensor technologies, and combinations of both.
Fig. 35 - Positioning System Elements for SLS Scanning
Structured Light Systems (SLS) use active sensors, such as projectors and laser emitters,
to produce light in a known pattern. Scanning involves projecting this light pattern onto the
surface of the object and capturing the imprinted pattern with one or more cameras.
Fig. 36 - Reconstruction of a 3D model using the 3D Underworld program
This technique is quite general, but there are some accepted positions for the cameras. The
positioning depends on the position of the scanned object, as well as on the number of cameras and
the type of lens used. However, as the angle between the cameras increases, the area of the object
that can be reconstructed decreases, since the common viewing area of the cameras is reduced. It
is recommended to position the projector in the middle, between the cameras, as represented in Fig. 35.
3D Underworld is an automatic modeling environment for cloud point information.
Initially, the information is preprocessed, then distributed over the polygon network using an
unsupervised clustering algorithm.
Dependencies:
● Canon SDK
● OpenCV 2.4
It is an open-source program, not commercialized, intended for research and education
purposes only.
3D scanning programs using laser technology
MakerScanner
Another open-source program is MakerScanner. The program operates as follows:
● projecting a laser line onto an object
● recording the position of the line
● rebuilding the 3D geometry
The program is available on Windows and Linux platforms. On Windows it is installed with an
executable, while on Linux we have to install the following dependencies:
● OpenCV
● wxWidgets
To perform a scan, the laser line must be vertical, perpendicular to the ground. In a band
25 px high at the top of the image there must be a flat background (above the green line in Fig. 37).
This is the reference for the depth reconstruction.
Fig. 37 - A 3D model reconstruction with MakerScanner
The camera must be fixed and must not move during scanning. The program uses a
difference-comparison technique to detect the laser, and errors can occur during the scan.
Classification of 3D scanning software by platform, creator and license:

Software        Platform                      Creator                                                               License
123D Catch      Windows/OSX/Linux/Web-based   Autodesk                                                              Free
Hypr3D          Web-based                     Viztu Technologies                                                    Proprietary
Insight3D       Windows/Linux                 Lukas Mach                                                            AGPL 3
3D Underworld   -                             Immersive & Creative Technologies, Cyprus University of Technology    Open Source
MakerScanner    Windows/Linux                 MakerBot                                                              Open Source

Table 1 - 3D scanning software
Conclusions
Following the testing results, I determined how effective my prototype is and what objects
I can scan with it. I have seen how hard it is to keep the shape of a complex object and how difficult
it is to get a good 3D model.
Surfaces influence the outcome of the reconstruction. A surface reflection level that is too
high, as with a mirror, or too low, as with glass, will result in distorted reconstructions of the 3D
model or in missing areas.
Colors are also very important in this context, as they are very useful in reconstructing
the object from the point cloud.
Light is also an essential factor in the quality of the color shades obtained in the 3D
reconstruction of objects. Artificial white light or sunlight is enough to capture the color
information of the scanned object.
To solve many of these problems, we can perform multiple scans of the object from multiple
angles, bring them into an environment with the same reference system, and merge them, forming
the complete 3D model of the object.
In my case, I can say that the 3D scanner prototype I built closely resembles, in its
hardware, the 3D scanners that are already on the market. The mechanical design of the
project was built in accordance with the models that served as a source of inspiration. From an
architectural point of view, my design does not necessarily bring innovative ideas but
follows in the steps of already existing designs, the only difference being the materials
used in its construction. However, the software part proposes a new approach. Using
MATLAB as the coding language and working environment opened up many possibilities and
solutions for solving the point cloud problem. First of all, the use of MATLAB rendered
the use of another coding language, for example the Arduino language, unnecessary. Many 3D
scanner software programs are split into two parts: a master program and a board program.
MATLAB, however, allows you to program the Arduino board directly, without any need for a board
program. This helped me by eliminating any issues that might have appeared between a master
and a board program. Another advantage is that MATLAB has much more to offer for
image acquisition and processing than other software. A lot of useful toolboxes made my life
easier when I implemented the code. Unfortunately, the code is still at a testing level of
implementation, which is one of the reasons why the prototype is not yet working at full capacity.
In conclusion, it can be said that this prototype represents a good endeavor in the field of
object reconstruction and 3D modelling, and that with further improvement it can be a really good
example for those who want to enter and pursue this field.
References
1. F. Harshama, M. Tomizuka, and T. Fukuda, "Mechatronics - what is it, why, and how? An editorial,"
IEEE/ASME Trans. on Mechatronics, 1996
2. D. G. Alciatore and M. B. Histand, Introduction to Mechatronics and Measurement Systems, McGraw-Hill, 1998
3. W. Bolton, Mechatronics: Electronic Control Systems in Mechanical Engineering, Longman, 1995
4. Hyungsuck Cho, Opto-Mechatronic Systems Handbook: Techniques and Applications, 2002
5. Jon Rigelsford, "Opto-Mechatronic Systems Handbook: Techniques and Applications", Assembly
Automation, 2003
6. IEEE, Optomechatronic technology: The characteristics and perspectives, 2005
7. LMI Technologies, A simple guide to understanding 3D scanning technologies
8. Gabriel Y. Sirat, Freddy Paz, Conoscopic Holography, 2005
9. Douglas Lanman, Gabriel Taubin, Build Your Own 3D Scanner: 3D Photography for Beginners, 2009
10. Wolfgang Boehler, Andreas Marbs, Investigating Laser Scanner Accuracy, 2003
11. Kyriakos Herakleous, Charalambos Poullis, 3DUNDERWORLD-SLS: An Open-Source
Structured-Light Scanning System for Rapid Geometry Acquisition, 2014
13. M. Callieri, P. Cignoni, M. Dellepiane, R. Scopigno, Pushing Time-of-Flight Scanners to the Limit, 2009
14. Will Strober, Laser Triangulation 3D Scanner, 2011
15. Akash Malhotra, Kunal Gupta, Kamal Kant, Laser Triangulation for 3D Profiling of Target, 2011
16. Yan Cui, Sebastian Schuon, 3D Shape Scanning with a Time-of-Flight Camera
17. Adrian-Cătălin Voicu, Gheorghe I. Gheorghe, Măsurarea 3D a reperelor complexe din industria auto
utilizând scanere laser, 2013
18. http://www.123dapp.com/catch
19. http://www.solidsmack.com/cad-design-news/hypr3d-photo-video-3d-print/
20. http://insight3d.sourceforge.net/
21. https://en.wikipedia.org/wiki/Structured-light_3D_scanner
22. http://reprap.org/wiki/3D_scanning
23. https://en.wikipedia.org/wiki/Structured-light_3D_scanner#Software
24. http://abarry.org/makerscaner/1-makerscaner.html
26. en.wikipedia.org/wiki/3D_scanner#Triangulation
27. en.wikipedia.org/wiki/3D_scanner#Applications
28. https://en.wikipedia.org/wiki/Euclidean_geometry
29. https://en.wikipedia.org/wiki/Three-dimensional_space_(mathematics)
30. en.wikipedia.org/wiki/Correspondence_problem
31. http://www.instructables.com/id/DIY-3D-scaner-based-on-structured-light-and-stereo-vision
32. http://makerzone.mathworks.com/resources/raspberry-pi-matlab-based-3d-scanner
33. https://www.thingiverse.com/thing:740357
34. https://www.mathworks.com/
Annexes
The main function of the software:
function varargout = Arduino3DScanner(varargin)
% ARDUINO3DSCANNER MATLAB code for Arduino3DScanner.fig
%   ARDUINO3DSCANNER, by itself, creates a new ARDUINO3DSCANNER or raises
%   the existing singleton*.
%   H = ARDUINO3DSCANNER returns the handle to a new ARDUINO3DSCANNER or
%   the handle to the existing singleton*.
%   ARDUINO3DSCANNER('CALLBACK',hObject,eventData,handles,...) calls the
%   local function named CALLBACK in ARDUINO3DSCANNER.M with the given
%   input arguments.
%   ARDUINO3DSCANNER('Property','Value',...) creates a new
%   ARDUINO3DSCANNER or raises the existing singleton*. Starting from the
%   left, property value pairs are applied to the GUI before
%   Arduino3DScanner_OpeningFcn gets called. An unrecognized property name
%   or invalid value makes property application stop. All inputs are
%   passed to Arduino3DScanner_OpeningFcn via varargin.
% Begin initialization code
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @Arduino3DScanner_OpeningFcn, ...
                   'gui_OutputFcn',  @Arduino3DScanner_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code
% --- Executes just before Arduino3DScanner is made visible.
function Arduino3DScanner_OpeningFcn(hObject, ~, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject   handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles   structure with handles and user data (see GUIDATA)
% varargin  command line arguments to Arduino3DScanner (see VARARGIN)
% Choose default command line output for Arduino3DScanner
handles.output = hObject;
% connect to the Arduino board
handles.a = arduino('COM3');
s = load('cameraParams.mat');
handles.cameraParams = s.cameraParams;
handles.vidobj = imaq.VideoDevice('winvideo', 1);
% initialize the scanning data
handles.stepAngle = 1.8;
handles.stepSize = 1;
handles.points = [];
handles.colors = [];
handles.filename = 'output';
handles.break = false;
% Update handles structure
guidata(hObject, handles);
% UIWAIT makes Arduino3DScanner wait for user response (see UIRESUME)
% uiwait(handles.figure1);
% --- Outputs from this function are returned to the command line.
function varargout = Arduino3DScanner_OutputFcn(~, ~, handles)
% Get default command line output from handles structure
varargout{1} = handles.output;

function stepsize_Callback(hObject, ~, handles)
% Hints: get(hObject,'String') returns contents of stepsize as text
%        str2double(get(hObject,'String')) returns contents of stepsize as a double
stepSize = str2double(get(hObject, 'String'));
if stepSize > 0 && stepSize < 50
    handles.stepSize = stepSize;
    handles.stepAngle = handles.stepAngle * handles.stepSize;
end
guidata(hObject, handles);
% --- Executes during object creation, after setting all properties.
function stepsize_CreateFcn(hObject, ~, ~)
% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject, 'BackgroundColor'), ...
        get(0, 'defaultUicontrolBackgroundColor'))
    set(hObject, 'BackgroundColor', 'white');
end
function edit2_Callback(hObject, ~, handles)
% Hints: get(hObject,'String') returns contents of edit2 as text
%        str2double(get(hObject,'String')) returns contents of edit2 as a double
handles.filename = get(hObject, 'String');
guidata(hObject, handles);
% --- Executes during object creation, after setting all properties.
function edit2_CreateFcn(hObject, ~, ~)
% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject, 'BackgroundColor'), ...
        get(0, 'defaultUicontrolBackgroundColor'))
    set(hObject, 'BackgroundColor', 'white');
end
% --- Executes on button press in start.
function start_Callback(hObject, ~, handles)
for currentAngle = 0:handles.stepAngle:360
    stored = guidata(hObject);             % refresh the stored state so a STOP press is seen
    if stored.break
        break;
    end
    frame = step(handles.vidobj);
    image_without_laser = imrotate(frame, 90);
    imshow(image_without_laser, []);
    start_laser(handles.a);
    frame = step(handles.vidobj);
    image_with_laser = imrotate(frame, 90);
    imshow(image_with_laser, []);
    stop_laser(handles.a);                 % lasers off before the next baseline frame
    rotateMotor(handles.a, handles.stepAngle);
    image_difference = imabsdiff(image_with_laser, image_without_laser);
    bw_img = im2bw(image_difference, 0.1); % threshold the difference image
    bw_img = bwareaopen(bw_img, 20, 8);    % remove small noise blobs
    [points, colors] = generatePoints(bw_img, image_without_laser, currentAngle);
    handles.points = [handles.points; points];
    handles.colors = [handles.colors; colors];
    scatter3(handles.axes1, handles.points(:,1), handles.points(:,2), ...
        handles.points(:,3), 10, 'filled');
end
pcshow(handles.points, handles.colors, 'Parent', handles.axes1, 'MarkerSize', 20);
handles.output = pointCloud(handles.points, 'Color', handles.colors);
assignin('base', 'output', handles.output);
guidata(hObject, handles);
% --- Executes on button press in stop.
function stop_Callback(hObject, ~, handles)
% hObject   handle to stop (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles   structure with handles and user data (see GUIDATA)
handles.break = true;
guidata(hObject, handles);
stop_laser(handles.a)
% --- Executes on button press in save.
function save_Callback(~, ~, handles)
pcwrite(handles.output, handles.filename, 'PLYFormat', 'ascii');

% --- Executes during object creation, after setting all properties.
function save_CreateFcn(~, ~, ~)
% --- Executes during object creation, after setting all properties.
function axes1_CreateFcn(hObject, ~, ~)
% Hint: place code in OpeningFcn to populate axes1
hObject.XTick = [];
hObject.YTick = [];
hObject.ZTick = [];
hObject.XTickLabel = [];
hObject.YTickLabel = [];
hObject.ZTickLabel = [];
hObject.XColor = 'none';
hObject.YColor = 'none';
hObject.ZColor = 'none';
hObject.Color = 'black';
rotate3d(hObject, 'on');