UNIVERSITY “POLITEHNICA” OF BUCHAREST
FACULTY OF ENGINEERING IN FOREIGN LANGUAGES
ELECTRONIC ENGINEERING AND TELECOMMUNICATIONS
DIPLOMA PROJECT
PROJECT COORDINATOR:
Conf. Dr. Ing. Bujor PĂVĂLOIU
STUDENT: [anonimizat]
2017
3D Reconstruction of an object using a turntable, a webcam and laser diodes
Project Coordinator:
Conf. Dr. Ing. Bujor PĂVĂLOIU
Student: [anonimizat]
2017
UNIVERSITY “POLITEHNICA” OF BUCHAREST
FACULTY OF ENGINEERING IN FOREIGN LANGUAGES
ELECTRONIC ENGINEERING AND TELECOMMUNICATIONS
Approved
Director of department:
Prof. Dr. Ing. George Drăgoi
DIPLOMA PROJECT THEME FOR:
[anonimizat]:
-3D reconstruction of an object using a turntable, a webcam and laser diodes
Initial design data:
-3D scanner using structured light/[anonimizat] a servo/[anonimizat]: [anonimizat]:
-[anonimizat]
-software development
Compulsory graphical material:
-image acquisition
-3D [anonimizat]:
-[anonimizat]:
-[anonimizat] R2013b
The paper serves as:
-Research
Paper preparation date:
-June 2017
[anonimizat]
I, [anonimizat], hereby declare that the work with the title “3D Reconstruction of an object using a turntable, a webcam and laser diodes”, [anonimizat] "Politehnica" [anonimizat], based on my research.
[anonimizat], [anonimizat].
The thesis has never been presented to a higher education institution or research board in the country or abroad.
[anonimizat], [anonimizat]. I understand that plagiarism is an offense and is punishable under law.
[anonimizat]. I understand that the falsification of data and results constitutes fraud and is punished according to regulations.
[anonimizat] 4.07.2017
Introduction
The main purpose of this project is to explore the possibility of designing a prototype of a 3D scanner using a turntable, a webcam and laser diodes.
Digitizing or 3D scanning is a procedure which uses a [anonimizat]’s shape and recreates it in a virtual environment as a very dense network of points in a 3D graphical representation. Data is collected as points which together form the so-called “point cloud”. [anonimizat] (STL, Standard Tessellation Language).
Although this technology has been known for over 15 years, it is still applied as a relatively new technique. 3D scanning is becoming an increasingly common method, assisting humans in a large range of domains, for example medicine and entertainment.
3D models have various uses, such as making animations or object representations. Analyses, comparisons or prototypes can be made, which can later be modified to build a new product.
With a variety of technologies available, we can capture objects inside a room or outside of it, day or night. Dimensions may vary according to our needs: we can capture both small items (jewelry, telephones, etc.) and large-scale objects (buildings, bridges, etc.). Some scanners are portable for complete scanning of cavities or covered surfaces, and others can scan the ground and its relief.
The main purpose of a 3D scanner is to create point clouds describing the shape of an object or surface. These points can then be used to extract the object's shape through a process called reconstruction. If information about the object's colors has been collected, the colors can also be recovered in the reconstruction process.
3D scanners share some attributes with cameras. Like most cameras, they have a cone-like field of view and cannot collect information about the hidden surfaces of the object. While a camera collects color information, a 3D scanner collects information about the distance to the object's surface within its field of view. The image produced by a 3D scanner thus describes the distance to each point on the surface.
In most cases, a single scan cannot produce a complete model of the subject. Multiple scans from multiple angles are required to complete a full reconstruction of the object. These scans have to be brought into a common reference system and then linked together to form a complete model. This process is called alignment.
In the next chapters, we will learn more about 3D scanners, their history, the idea behind them, their working principles and their evolution and usefulness in today’s engineering field.
1. Opto-mechatronic systems
1.1 Mechatronic System
The objectives initially embedded in the concept of mechatronics correspond to the creation of a unitary framework between mechanical, electronic and software engineering in the development of new systems capable of performing heterogeneous functions of movement, control and decision-making.
In Figure 1 we have some examples of mechatronic systems that are very present in our lives today.
Fig. 1-Mechatronics Systems [23]
Definitions of mechatronics:
“Synergistic integration of mechanical engineering with electronics and intelligent computer control in the design and manufacturing of industrial products and processes.” [1]
“Field of study involving the analysis, design, synthesis, and selection of systems that combine electronics and mechanical components with modern controls and microprocessors.” [2]
“Integration of electronics, control engineering, and mechanical engineering.” [3]
Fig. 2-Description of a mechatronic system [24]
1.2 The Opto-mechatronic system
Integrated optics in mechatronic technology has roots in the development of mechatronics and opto-mechatronics. “Their revolution took place in the 1960s with the integration of transistors and other semiconductor components into monolithic circuits, which was made possible by the invention of the transistor in 1948. There followed the emergence of microprocessors, invented in 1971 on the basis of semiconductor technology. In the 1980s, semiconductor technology also created microelectromechanical systems, adding a new dimension to such systems: a reduction in their size.” [4]
This has had a huge impact on a wide range of technologies in this area. In particular, developers have combined hardware and software technologies synergistically. The merger made it possible for machines to convert analog signals into digital signals, to solve calculations and draw conclusions based on computational outcomes and software algorithms, and ultimately to act correctly on the basis of these conclusions and the knowledge accumulated in their own memory. These new functionalities have endowed machines/systems with features such as flexibility and adaptability.
“A new technological revolution, also known as opto-electronic integration, has continued for over 40 years, since the invention of the laser in 1960 by the American physicist Theodore Harold Maiman. Shortly after the invention of the first laser in the world, Ion I. Agârbiceanu entered the history of physics with an original discovery: in 1961 he built the first infrared radiation (helium-neon) laser. By focusing the light beam produced by the monochromatic laser, enormous radiation densities are obtained on very small surfaces.” [5]
“This was made possible by advanced manufacturing methods such as chemical vapor deposition, molecular beam epitaxy and micromachining with focused ion beams.” [5]
These methods allowed the integration of optical, mechanical and electronic components into a single compact device.
“The CCD (charge-coupled device) sensor not only generated computer viewing technologies but has opened the door to a new era of optical technologies and optical fiber sensors.” [5]
The development of optical components and devices has many favorable features. These components do not require direct contact and are non-invasive (non-aggressive), have a large perception radius, are insensitive to electrical noise, allow distributed communication, and have a long wavelength.
As a matter of fact, these optical features have begun to integrate with the mechatronic elements and have led to the construction of very high-performance systems. When a machine is integrated optically, mechanically and electronically, it is called an opto-mechatronic system.
“Lithography produces integrated circuits and semiconductor components, a technology that belongs to opto-mechatronic systems. It is based on several mirrors that deflect the light beam, optical units and stepper/servo mechanisms which change the direction with great precision. Another device is the optical pickup, introduced into production in 1982. The pickup reads information from a rotating disc, controlling both the up-down and left-right directions of the read head, which carries a low-power laser diode focused on the disc grooves. Since then, a considerable number of opto-mechatronic products, machines and systems have been launched at an accelerating pace, because optical components have made significant results possible. Atomic force microscopy, microelectromechanical systems and humanoid robots have been created since the 1990s.” [6]
The main features of an opto-mechatronic system can be classified into several areas:
1. “Illumination: illumination is the transmission of photometric radiant energy onto the surface of the target object. It produces a variety of effects through reflection, absorption and transmission, depending on the properties of the material and of the surface of the object being illuminated.” [6]
2. “Perception: optical sensors can capture fundamental information about the object, such as force, temperature or pressure, but also geometric quantities, such as angles, velocity, etc. This information is obtained by the optical sensor using different optical phenomena such as reflection, refraction, interference and diffraction. Typically, these perception systems are composed of a light source and a photosensitive sensor, as well as optical components such as lenses, beam splitters and optical fibers. Recently, more and more sensors use the advantages of optical fibers in different areas. Optical technology can also contribute to materials science: the chemical composition can be analyzed by spectrophotometry, which recognizes the characteristics of the light spectrum reflected, transmitted and radiated by the target material.” [6]
3. “Action: light can change the physical properties of the material by increasing the temperature of the material or by affecting the electrical environment.” [6]
4. “Data storage: digital data composed of 0s and 1s may be stored and read optically. The optical recording principle uses changes in the reflection properties of the recording medium. Information is engraved by changing the optical properties of the storage medium by means of a laser. For reading the information, the optical properties of the medium are checked using optical perception sensors.” [6]
5. “Data transmission: because its wavelength is unaffected by unpredictable external electromagnetic noise, light is a very good medium for data transmission. The laser, as a light source, has a long wavelength and can transmit a large amount of data at the same time.” [6]
6. “Image transmission: information is best perceived by the user through visual stimuli. To provide an image or graphic to the user, we have a variety of devices that can reproduce an image: LCD, LED, OLED, plasma, etc. All of them are based on pixels to produce the image. Each pixel is built from 3 cells that reproduce the primary colors: red, green and blue. By combining the three, all colors can be obtained, including white.” [6]
7. “Calculation: optical calculation can be achieved with switches, logic gates and bistables (flip-flops) in logical operations, just as in digital electronic calculation. Optical switches can be built using opto-mechanical, opto-electrical, acousto-optical and magneto-optical technologies, and can change their state extremely quickly. A logic gate is constructed from optical transistors. For an optical computer, a variety of other elements are needed besides the interconnected optical switches.” [6]
8. “Changes in the material properties: when a laser is focused at one point using optical components, the laser power is concentrated in a small focal area. This process results in changes of the material in the laser-lit area. Material processing methods use pulsed laser beam technology and can be classified into two groups:” [6]
a. Changing the shape of the material
b. Changing the physical state of the material
1.3. Applications of opto-mechatronic systems
Examples of opto-mechatronic systems are found in control and instrumentation, inspection and testing, optics, manufacturing, consumer products, industrial products such as automobiles, biological applications, and many other areas of engineering.
Camera
Cameras are devices that are operated by opto-mechatronic components. For example, a high-performance camera is equipped with an aperture control and a focus adjustment system that uses an illuminometer designed to operate independently of ambient light. With these system configurations, new functionality has been created to increase camera performance.
Fig.3-A camcorder or video camera [25]
In Figure 3, we see the main components of a camcorder. Camcorders have three major components: a lens that gathers and focuses light, an imager that converts light into an electrical signal, and a recorder that converts electrical signals into digital video and encodes them for storage. The image is focused and exposed on the electronic sensor by a series of lenses that zoom in and focus on the subject. Changing the lens position changes the image size and focus area. The amount of light entering through the lens is controlled by the aperture and the shutter speed. Recently, CMOS sensors have been used for autofocus using controlled focusing lenses.
The optical disc drive
An optical disc drive is an opto-mechatronic system. As can be seen in Fig. 4, an optical disc drive is composed of an optical readout head containing a laser diode, a focus servo that keeps the laser beam permanently focused, and a tracking servo that drives the head very precisely towards the desired location.
Fig. 4-Optical disc drive [26]
The surface of the disc is covered by a sensitive layer, protected by a dielectric layer, and rotates under a modulated laser beam focused on the disc surface at the diffraction limit.
Other applications of opto-mechatronic systems:
● Tunable laser device (barcode scanner)
● Atomic force microscopy
● Optical sensory feedback washing machine
● Optical coordinate measuring machine
● Non-contact 3D scanners
2. Reconstruction in the 3D virtual environment
2.1 Defining the problem
The problem that arises is how to represent objects from the natural (physical) environment in the virtual environment.
Initial status:
real (physical)
initial object status
original features
the property of the object
Final status:
virtual object (3D model)
three-dimensional representation
properties of object representation
final features
2.2 The origin of distance measurement
“Metrology is an area that was born in antiquity and is at the intersection of mathematics and engineering. Even with primary units, the development of geometry has been a revolution through the ability to accurately measure distance.” [7]
“Around 240 BC, Eratosthenes estimated the Earth's circumference without leaving Egypt. He knew the distance between Syene and Alexandria, the city he was in: 5000 stadia (one stadion ≈ 0.18 km), i.e. about 900 km. He knew that Alexandria lay due north of Syene, and also that Syene was on the Tropic. During the summer solstice at 12:00, the Sun stands directly above Syene, on the line of the Tropic.” [7]
“This means that an object positioned there will cast no shadow, the Sun being directly above it. In Alexandria, at the same time (12:00), sunlight falls at a different angle than at Syene, and so Eratosthenes measured the angle of the shadow cast on the ground by a straight object, such as a stick. The measured angle was approximately 7.2°.” [7]
“Looking from outside, sunlight falls on the surface of the Earth in parallel rays. Because of the Earth's shape, one ray falls perfectly vertically over Syene, so that an object placed on the ground there casts no shadow, while in Alexandria an object casts a shadow at an angle of 7.2°, or 1/50 of a circle: 360° (a full circle) divided by 7.2° (the measured angle) = 50. Using alternate angles, he knew that the angle formed at the center of the Earth, between the line drawn to the object in Alexandria and the line drawn to the object in Syene, equals the previously measured shadow angle of 7.2°. Hence, the distance between Alexandria and Syene is 1/50 of the Earth's circumference. He multiplied this distance by 50 to obtain the circumference of the Earth: 5000 stadia × 50 = 250,000 stadia = 45,000 km. The real circumference of the Earth is about 40,000 km.” [7]
Fig. 5-Eratosthenes method of measuring the Earth’s circumference [7]
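To make the arithmetic explicit, the estimate can be reproduced in a few MATLAB lines (the stadion length is the approximate value given above):
angle_deg = 7.2;                 % shadow angle measured in Alexandria [degrees]
dist_stadia = 5000;              % known distance Syene-Alexandria [stadia]
stadion_km = 0.18;               % approximate length of one stadion [km]
fraction = 360 / angle_deg;      % the arc Syene-Alexandria is 1/50 of a circle
circumference_km = dist_stadia * stadion_km * fraction
% returns 45,000 km (250,000 stadia), against a true value of about 40,000 km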
Throughout these historical developments, measuring tools evolved alongside both mathematical knowledge and practical needs. The earliest methods require direct contact with the surface of the object (e.g. a ruler).
“The pantograph, invented in 1603 by Christoph Scheiner, used a special motion-linking mechanism with a touch probe, which could faithfully reproduce the path of a pencil. Modern Coordinate Measurement Machines (CMM) work similarly, recording the tip position of a probe as it moves over the surface of a rigid object (Fig. 6).” [8]
Fig. 6- Measuring forms using a method based on direct contact [8]
Even if they are very effective, these contact-based methods can damage fragile objects and require long periods of time to produce a precise 3D model. Non-contact scanners also have their limitations: they rely only on observation and on the interaction of light with the object, with no control over the object during scanning.
2.3 3D scanner systems
A 3D scanner is a device that analyzes real-world objects or the environment to collect information about its shape and appearance (e.g. color). The collected information can be used to reconstruct the object in the three-dimensional digital environment. [9]
There are many technologies that can be used to build a 3D scanner. Each technology comes with its own limitations, advantages and costs. Many limitations are encountered in the reconstruction of the object, for example, optical technologies get stuck when it comes to scanning bright, transparent objects and mirrors.
The applicability of 3D scans covers a wide range of domains, such as:
Engineering:
● robot control
● 1: 1 drawings of bridges, buildings and monuments
● technical documentation of archaeological sites
● quality assurance
● quantity surveying
● remodeling
● different forms of testing and surface-level analysis
● creating maps
Design process:
● increasing accuracy in working with complex objects and forms
● coordinating a product design using components from multiple sources
● replacing old or missing parts
Entertainment:
● movies
● video games
● virtual world
Reverse engineering:
● copying objects with high precision
● quality check and metrology
● precision of geometric dimensions
● assembly
● testing the finished product
● deviation assessment
2.4 3D scanning technologies
There is a variety of technologies for recognizing the shape of an object and translating it into the virtual environment. They can be classified into two main categories: contact and non-contact. In turn, the non-contact method can be divided into two sub-categories: active and passive.
Active scanners emit a source of radiation to determine the shape of the object and capture point clouds, while passive ones use radiation already existing in the environment, such as sunlight, to recover the shape of the object.
3D scanning contact technology
Contact scanners use either continuous surface scanning, articulated-arm probing, or point-by-point probing. The touch probe reaches the measured sample while the object rests on a flat precision surface, polished to a specific maximum roughness. If the object to be scanned is not flat or cannot be stably placed on a flat surface, it is supported and held firmly in place by a fixture. [11]
A Coordinate Measurement Machine (CMM) is the best example of a contact 3D scanner. It is mainly used in manufacturing and can be very precise, but it has some drawbacks. Depending on the nature, shape, texture and materials of which the object is composed, it may suffer deformations, wear and / or damage.
Non-contact 3D scanning technology
While contact 3D scanning techniques use a touch probe to perform the scan, non-contact technologies use optical sensors (laser probes), laser light sources, or a combination of the two for accurate surface reproduction. These are among the most advanced scanning technologies. Other non-contact scanning methods include photogrammetry, X-rays, computed tomography and magnetic resonance imaging. Non-contact laser and visual sensors have been developed as an alternative to contact probes where physical contact is not feasible, such as for fine, delicate or super-finished surfaces and sharp edges. [11]
2.5 Classification of 3D scanning technologies
A. Contact
1. Coordinate capture machine
B. Active non-contact
1. Mobile laser scanner
2. Scanner with structured light
3. Scanner with modulated light
4. Volume scanner
C. Passive non-contact
1. Based on modeling after images
A. 1. “A Coordinate Measurement Machine (CMM) is most commonly used in manufacturing and is very precise. The disadvantage of this system is that it requires physical contact with the object being scanned, so the object may suffer physical distortions during the process. It is therefore very important to be careful when scanning delicate or precious items such as artifacts. Another disadvantage is that the scanning process is very slow compared to the other methods.” [10]
B. 1. “Active scanners emit a type of radiation or light and detect its reflection through an object or environment. Mobile laser scanners create a 3D image using the triangulation method. A point or a laser line is projected onto an object from a mobile point in space, and a sensor (CCD = Charge Coupled Device or PSD = Position Sensitive Detector) measures the distance to the surface. The information is collected in accordance with an internal localization system in space. To collect data with a mobile scanner, we need to know its position in space for error-free capture. This method is called triangulation because the laser point, camera, and laser source form a triangle.” [11]
B. 2. “A structured light scanner is a device that measures the shape of three-dimensional objects. It is composed of a projector that projects light patterns and a camera, slightly inclined relative to the projector. The light pattern is projected on the subject; the camera analyzes the pattern deformed by the object and calculates the distance of each point in the field of view. Projecting a narrow band of light onto a three-dimensional object creates a distorted line on the surface when viewed from an angle different from that of the projector. A faster approach is to project a pattern of many bands. Seen from several points, the pattern appears distorted by the shape of the object. The displacement of the bands allows exact coordinates to be computed in three-dimensional space for any type of surface (except mirrors and glass).” [11]
B. 3. “The modulated light scanner illuminates the subject with continually changing light. Usually, the light source varies its pattern sinusoidally over time. A camera detects the reflected light and relates it to the changing pattern. Modulated light allows all light sources other than the laser to be ignored, so there is no interference.” [11]
B. 4. “The volume scanner is most commonly used in medicine. For example, tomography is a method of creating a three-dimensional model of the interior of an object consisting of a multitude of 2D images using X-rays. Similarly, magnetic resonance imaging (MRI) produces a better contrast between soft tissue of the body than tomography, being very useful in neurology, visibility of muscles and skeleton, cardiovascular and oncological field. This technique produces a volumetric representation that can be directly visualized, manipulated or transformed into a 3D surface using surface and level algorithms.” [11]
C. 1. “Passive 3D scanners do not emit radiation but are based on ambient light. Most solutions detect visible light because it is an already existing source. Another type of radiation that can be used is infrared. These methods can be very cheap because there is no need for special components, just a simple camera.” [11]
3. Analysis of 3D scanning technologies
In order to choose the best way to solve the problem, we compared scanners according to the technology used. They all have strengths; some are better in a particular field, but have other drawbacks. If we want precision, we choose a slow scanner that captures many points for a detailed scan of the object. If time is important and we want to scan quickly, we choose a scanner that captures fewer points and sweeps the object at a higher speed. But not all objects can be scanned with the same technology, regardless of whether time or detail matters more: some objects must not be touched directly, and then a non-contact method has to be chosen.
3.1. The main types of 3D scanners with light radiation
The main types of laser 3D scanners are:
1. "Time-of-Flight" 3D Laser Scanner
2. 3D Laser scanner using triangulation
3. Mobile 3D Laser Scanner
4. 3D laser scanner with holographic conoscopy
5. Structured light scanner
1. “Time-of-Flight” 3D Laser Scanner
“This is an active scanner that uses the laser to probe the subject. At its core is a laser rangefinder that measures the flight time of the laser beam. The rangefinder measures the distance to the scanned object by timing the return of a laser pulse. Since the speed of light c is known, the return time determines the distance traveled by the light, which is twice the distance between the scanner and the surface of the object. If t is the round-trip time of the laser beam, then the distance is d = c · t / 2. The precision of a time-of-flight laser scanner depends on how precisely t can be measured: the time required for light to travel 1 millimeter is approximately 3.3 × 10⁻¹² seconds.” [12]
Fig. 7-A ToF scanner and its working principle [9]
These scanners can measure between 10,000 and 100,000 dots per second. The main advantage of this type of scanner is the very large scanning distance, being ideal for buildings or geographic features. Its disadvantage is low accuracy due to the high velocity of light, which makes round-trip timing difficult. [13]
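As a quick check of the relation d = c · t / 2, the following MATLAB lines compute the range for a made-up round-trip time and show why timing precision limits depth accuracy:
c = 299792458;            % speed of light [m/s]
t = 66.7e-9;              % example round-trip time of the laser pulse [s]
d = c * t / 2             % distance from scanner to object, here about 10 m
dt_per_mm = 2 * 1e-3 / c  % ~6.7e-12 s: timing shift caused by a 1 mm depth change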
2. 3D Laser scanner using triangulation
“The triangulation method has been used for hundreds of years to create maps and roads. The process is based on the determination of the dimensions and geometry of the actual objects. Triangulation uses at least one camera as the receiver, the distance and angles between the images and the projected light (laser or LED) forming the base of the triangle.” [11]
“The angle between the projected and reflected light on the object's surface closes the triangle where the 3D coordinates are calculated. Applying this principle repeatedly, a 3D representation of the object is formed.” [14]
Fig. 8-Triangulation principle in scanning [9]
3. Mobile 3D Laser Scanner
Mobile scanners create the 3D image of the object by the same principle of triangulation described above: a laser beam (dot or line) is projected on the object from a mobile device, and a sensor measures the distance to the surface.
The information is collected in relation to an internal coordinate system. Therefore, in order to collect correct information while the device is moving, its position in space must be determined. The position is determined using a reference system on the surface of the object to be scanned, or using an external tracking system. Usually, an external tracking system has a laser that determines the position of the sensor and an integrated camera to determine its orientation. This method uses infrared diodes attached to the scanner, whose light is received by the camera through filters.
The information is collected in three-dimensional space; after processing, it can be transformed into triangulated polygons. Mobile scanners can combine this information with passive ambient light sensors to capture textures and colors for complete 3D model construction.
Fig. 9-A mobile scanner in action [9]
4. 3D laser scanner with holographic conoscopy
Holographic conoscopy is a holographic method based on the propagation of light (laser) through a uniaxial crystal. It was conceived as a three-dimensional sensor for precise non-contact distance measurements.
“Based on crystal optics, conoscopy is a technique built on the interference of polarized light. In the basic setup, a beam of light is projected onto the surface of an object. The beam creates a light point on the target object, which scatters light in all directions. A complete analysis of this diffuse light is performed, and the measurement process returns the distance of the light point from a reference plane. This system of determining three-dimensional measurements is the basis of holographic conoscopy.” [15]
Fig. 10-Holographic conoscopy laser [28]
5. Structured Light Scanner
A structured light digital scanner can be used to eliminate the mechanical motion required to sweep a plane of light across the surface of the object. The projector could be used to create a single column (or row) of white pixels translated across a black background, approaching the scanning quality of laser triangulation. However, such a plane-sweep sequence does not use the projector's full capability, since it can display arbitrary color images.
Fig. 11-Structured light scanning [29]
“Structured light sequences were developed to solve this problem, establishing the projector-camera correspondence with just a few frames. In general, the identity of each plane can be encoded spatially (single frame), temporally (multiple frames), or by a combination of spatial and temporal coding. For example, spatially-coded patterns allow the use of a single pattern for reconstruction, allowing the capture of dynamic scenes. Alternatively, temporal encoding is more robust, minimizing artifacts that occur during scanning.” [16]
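To make the idea of temporal coding concrete, the following sketch (my own illustration, not taken from any particular scanner) generates a binary-reflected Gray-code stripe sequence for a hypothetical 1024×768 projector; each bit plane becomes one projected frame:
ncols = 1024;                    % assumed projector width [pixels]
nrows = 768;                     % assumed projector height [pixels]
nbits = log2(ncols);             % 10 frames uniquely encode every column
cols = 0:ncols-1;
gray = bitxor(cols, bitshift(cols, -1));  % Gray code of each column index
patterns = false(nrows, ncols, nbits);
for b = 1:nbits
    stripe = logical(bitget(gray, nbits - b + 1));   % one bit plane
    patterns(:, :, b) = repmat(stripe, nrows, 1);    % vertical stripe image
end
% a camera pixel that decodes the observed bit sequence recovers the projector
% column that illuminated it, solving the correspondence problem directly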
3.2. Active 3D scanning methods
The issue of correspondence
Given two or more images of the same 3D scene, taken from different points of view, the correspondence problem refers to the task of finding a set of points in one image which can be identified as the same points in another image. [17]
To do this, the points or features of one image must be matched with the corresponding points or features of the other image.
The correspondence problem occurs in stereo situations, where two or more images of the same scene are used. Alternatively, N cameras can be used simultaneously, or a single moving camera whose position changes relative to the scene. The problem becomes harder when objects move relative to the cameras. [17]
A common application of the correspondence problem is found in creating panoramas or stitching images. In this case, we need to establish point correspondences between image pairs in order to calculate the transformation that aligns one image with the others. [17]
Active 3D scanners
Active scanners overcome the correspondence problem by using controlled light. Compared to passive non-contact methods, controlled lighting depends, in most cases, less on the surface texture of the material. The exception is objects whose surfaces are translucent or highly reflective. Many such scanners solve the correspondence problem by replacing one of the cameras of a stereoscopic system with a controlled light source.
“In the 1970s, the first dot laser scanners began to appear. A series of fixed and mobile mirrors were needed to move the point on the surface of the object.” [11]
“A camera records the moving point; using the calibration, the 2D projection of the point defines a line connecting the imaged laser spot and the center of the camera. The depth is deduced from the intersection of this line with the ray going from the laser source to the projected point, which is known from the deflection of the beam by the mirrors. As a result, these single-point laser scanners are the equivalent of optical coordinate measuring machines.” [11]
For a Coordinate Measurement Machine (CMM), a single scan point at a time is too little, so scanning proceeds very slowly. With the development of high-quality CCD sensors, prices fell, and in the 1980s the first planar (slit) scanners appeared. In this model, a laser projector creates a single plane of light which is mechanically swept over the surface of the object. As in the previous model, the deformation of the laser line on the surface of the object determines the 3D shape. Depth is recovered by intersecting this plane with the set of lines that link the camera's center of projection to the imaged 2D curve on the object. By removing one dimension, these scanners determine the shape of an object much faster. [10]
3.3 3D laser triangulation scanning
“One of the most common 3D scanning methods is laser triangulation, because of its simplicity and robust construction. Its design relies on simple trigonometry. The image captured by the camera is 2D, and depth cannot be determined from the image alone; to determine depth, we need the laser. In laser triangulation, the beam is projected onto the object and the image is captured by a camera. Since the cost of the components affects scanner accuracy, high-quality components will produce better results than a scanner built from low-cost parts. The laser, object and camera form a triangle, hence the name of the triangulation scanning method.” [18]
In laser triangulation, the laser beam is thus projected onto the object, a picture is captured by the camera, and the depth is recovered from the triangle formed by the three points (laser, object, camera).
Stereoscopic scanning, planar scanning, and structured light scanning recover the 3D shape of objects in the same way. First, the correspondence problem is solved, either by a passive correspondence algorithm or by a spatial identification method (e.g. projection of a known line, plane or pattern). Once the correspondence between two or more views (e.g. a calibrated pair of cameras) has been established, triangulation recovers the depth of the scene. In stereoscopic or multi-view systems, a point is reconstructed from the intersection of two or more corresponding rays. In structured light systems, a point is recovered by intersecting the corresponding rays and planes.
In trigonometry and geometry, “triangulation is the division of a surface or plane polygon into a set of triangles, usually with the restriction that each triangle side is entirely shared by two adjacent triangles. It was proved in 1925 that every surface has a triangulation, but it might require an infinite number of triangles and the proof is difficult.” [19]
The distance to a point by measuring two fixed angles
The coordinates of and distance to a point can be determined by calculating the length of one side of a triangle, given measurements of the angles and sides of the triangle formed by that point and two other known reference points. The method becomes inaccurate when distances approach the scale of the Earth's curvature, where spherical trigonometry must be used instead.
“Euclidean geometry consists of two fundamental types of measurement: angles and distances. The angle scale is absolute, and Euclid uses the right angle (90°) as his basic unit, so that, for example, a 45° angle is referred to as half of a right angle. The distance scale is relative: an arbitrary line segment with non-zero length is chosen as the unit of measurement, and every other distance is expressed relative to it.” [20]
Measurements of area and volume derive from distance determination. For example, a rectangle with a width of 3 units and a length of 4 units has an area of 12 square units; these geometric interpretations extend up to three dimensions.
“Three-dimensional space is a geometric representation in which three values (called parameters) are required to determine the position of an element (e.g. a point). In physics and mathematics, a sequence of n numbers can be understood as a location in n-dimensional space. When n = 3, the set of locations is called three-dimensional Euclidean space, commonly denoted by the symbol ℝ³. It serves as the model of the physical universe (without considering time as a fourth dimension) in which all known matter exists.” [21]
Computation of distances
Fig. 12-Determining the distance between 2 angles [30]
Let ℓ = the distance between A and B, and let α and β be the angles measured at A and B between the baseline AB and the directions to the unknown point C. The perpendicular distance d from AB to C then satisfies:
ℓ = d / tan(α) + d / tan(β)
Taking into consideration that:
1 / tan(α) + 1 / tan(β) = (cos α · sin β + sin α · cos β) / (sin α · sin β) = sin(α + β) / (sin α · sin β)
we obtain that, finally, the result can be expressed as:
d = ℓ · sin(α) · sin(β) / sin(α + β)
This determines the distance to an unknown point by observing it from two known reference points; from the distance and angles, its offsets from the baseline and, finally, its coordinates follow.
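A direct MATLAB transcription of this formula, with illustrative (made-up) values for the baseline and the two angles:
l = 0.30;                  % baseline AB between the two observation points [m]
alpha = 63.0;              % angle measured at A [degrees]
beta = 78.5;               % angle measured at B [degrees]
d = l * sind(alpha) * sind(beta) / sind(alpha + beta)
% d is the perpendicular distance from the baseline AB to the observed point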
3.4 3D laser scanner precision
In modern engineering, the term "laser scanning" has two meanings: one refers to bar code scanning devices, and the other to the controlled steering of laser beams.
“Laser scanning uses laser radiation, in the form of a dot or line, to capture the shape of objects, buildings and the environment. The main advantage of this method is that the laser can reveal the smallest cracks of the scanned surface. Another advantage is the scanning speed. At the same time, prototypes can be reproduced very quickly, measured, and compared to the designed model. Using this technology, one can intervene in the manufacturing process to eliminate some of the causes of manufacturing defects.” [22]
Any point cloud produced by a laser scanner contains a considerable number of erroneous points. If the cloud is delivered as a specified product, its quality must be guaranteed.
Angular accuracy
“The angle of inclination of the laser emitter towards the receiver may cause errors. Any error of axis alignment or angular reading will result in errors perpendicular to the propagation path. Since point positioning is difficult to verify, some investigations of this issue have been made. Errors can be detected by measuring short vertical and horizontal distances between objects located at equal distances from the scanner and comparing the measurements.” [22]
Range accuracy
“Triangulation scanners determine the range through the triangle formed by the three points of the system (the laser, the point of reflection on the object and the receiver), positioned at precise distances from each other. The camera is used to determine the direction of the reflected laser beam. Range errors can be observed by comparing known distances with the ranges measured by the scanner.” [22]
Resolution
“The term resolution is used when discussing the performance of a laser scanner. From a user's point of view, resolution describes the ability to detect small objects or object features in the point cloud. Technically, two different specifications contribute to this ability: the smallest possible angular increment between two successive points, and the width of the laser beam on the object surface.” [22]
Since the angular increment and the width of the laser beam jointly determine the resolution on the object, a small sample piece with fine features and holes can be used to determine the effective resolution.
Edge Effect
“Even if the laser is very well focused on the surface of the object, the laser beam will have a certain size. When the laser point or line reaches the target surface at one of its edges, only part of it is reflected there. The rest is reflected by the adjacent surface, by another surface behind the edge, or not at all (when there is no surface within the scanner's range). Both types of scanners, time-of-flight and triangulation, produce a variety of wrong points in the vicinity of edges. The wrong points (artifacts or phantom points) are found around the reflected laser beam. These errors can range from a fraction of a millimeter to a few decimeters. Therefore, the object rebuilt from the point cloud will appear larger than in reality, since the bad points are also recorded at the time of scanning.” [22]
Influence of surface reflection
“Triangular laser scanners are based on reflection of the light beam on the surface of the object to be captured by the camera. The power of the return signal is influenced (among other factors such as distance, atmospheric conditions or incidence angle) by the surface reflection properties.” [22]
White surfaces have a high degree of reflection, while darker surfaces reflect less. The effect on colored surfaces depends on the spectral characteristics of the laser beam (green, red, blue, near infrared). Shiny surfaces are usually hard to scan.
It has been observed that surfaces with different degrees of reflection produce erroneous range results. For some materials, these errors can make the scanned object appear much larger than the real one. For objects that combine different colors and materials, errors in the scanning process are therefore to be expected. These errors can be avoided by temporarily covering the object with a uniform surface of a single color, but this is not applicable in most cases.
Environmental conditions
Temperature: Any scanner will only work properly within a certain temperature range. Even within this range, deviations may occur, especially in the measured distance. Keep in mind that the internal temperature will, in most cases, be higher than the ambient temperature, due to internal heating of the components and/or external radiation (sunlight). Temperature may, over time, lead to changes in the system. [22]
Atmosphere: As with any optical distance measurement, variations in temperature and pressure can change the propagation speed of light through the medium. For short distances this is negligible. Also, dust or steam in the environment can produce an effect similar to the edge effect described above. [22]
Radiation interference: Lasers operate in a very limited frequency band. Because of this, filters can be applied to limit the receiver (camera) only to the frequency of the laser. If the radiation of the light source (sunlight, lamp) is strong compared to the signal, some of the ambient radiation will pass through this filter and will influence accuracy or even prevent measurements. [22]
4. Developing a 3D Laser Scanner
Following the research and documentation on how to solve the proposed problem, I chose to build a 3D line-laser scanner based on the principle of laser triangulation. The system has been optimized for space, cost, future development and mobility.
As a source of inspiration, I started from the model built by bqLabs, Cyclops, a laser scanner designed for educational and research purposes. It is composed of an Arduino Uno, two red line-laser diodes (5 mW, 650 nm), a webcam (5 Mpx), a stepper motor, a stepper motor driver and a custom-made stand that holds all the parts. The stand is made out of wood and aluminum for portability.
I chose Cyclops as a starting point and then developed and modified it to increase mobility, reduce the overall footprint, increase motor power and the maximum weight of the scanned object, improve ease of use, and allow further development.
4.1 Hardware design and implementation
Analyzing the problem, the best choice is to scan the object through a non-destructive, non-contact method. Of these types of scanning methods, I chose linear laser scanning. I set up the scanner, minimized costs, and developed the scanning software.
The main function of the system is scanning the object through 360°. The turntable rotates in 200 steps (1.8° each) during a full scan, so that all faces of the object are covered. The scanning procedure can be stopped at any point in the process, but this is not recommended.
Another important function that was implemented through the design is the mobility of the entire system. It was built to be very easy to transport and use by anyone.
Bill of materials
As mentioned before, the scanner is composed of the following hardware parts: a webcam (Logitech C310), an Arduino Uno R3 board, a stepper motor (28BYJ-48), a motor driver (A4988), two red line-laser diodes, and the stand that was specially designed to hold these parts.
The total cost of the entire project was 110 RON, as follows:
1x Logitech C310 Webcam - free
1x Arduino Uno R3 - 40 RON
1x Stepper Motor - 30 RON
1x Motor Driver - 25 RON
2x Line Laser Diodes - 15 RON
1x Stand - free (made out of spare parts)
Camera
The camera used in this project is a Logitech C310. It is positioned at a height of 20 cm, so that the laser trajectory on the surface of the rotating platform and the background behind the platform are visible. The distance between the camera and the turntable is known to be around 30 cm.
The position along the OX, OY and OZ axes, as well as the rotation around the OZ axis, was adjusted. The camera is attached directly to the scanner stand, so the two form a single unit.
Fig. 13-Logitech C310 HD Webcam [31]
The webcam has the following specifications:
Video Resolution: 1280×720
Photo Resolution: 5 MP
PC connection: USB
Dimensions: 55 × 20 × 15 mm
Focus: manual
Frame rate: 30 fps
Sensor: CMOS
In our case the most important features are the video and photo resolutions.
Arduino Uno R3 board
In my design, I used an Arduino Uno development board. The board is ideal for creative electronics projects. The differences from the original Arduino Uno board are the package of the ATmega328p microcontroller and the fact that USB communication is handled by the CH340 integrated circuit.
Fig. 14-Arduino Uno R3 Board [32]
Technical specifications:
Operating voltage: 5V
Supply voltage: 7-12V
I/O pins: 14
PWM pins: 6 (out of the 14 I/O)
ADC pins: 8
Flash memory: 32 kB (0.5 kB occupied by the bootloader)
TWI, SPI and UART communication
Operating Frequency: 16MHz
The board, to which all components of the system are connected, is powered by a 12 V source. This also powers the:
Step-by-step motor driver
Stepper motor
Linear lasers
Arduino Uno Board Pinout:
Fig. 15-Board Pinout [33]
In my case, I use only four digital pins to control the entire system. Digital pins 2 and 3 control the two line-laser diodes, and pins 12 and 13 control the driver which commands the motor.
Pin 12 corresponds to the first pin (MISO) of the ICSP (In-Circuit Serial Programming) header and is connected to the STEP pin on the motor driver, enabling the motor to rotate step by step. Pin 13 corresponds to the third pin of the ICSP header (SCK) and is connected to the DIR pin on the driver, setting the direction of rotation.
Other pins used are digital pin 9, which serves as ground for one of the laser diodes, the power pins to which the current source is connected, and the ground pins.
Stepper Motor
I chose a powerful stepper motor in order to scale up the range of objects that can be scanned with this system.
A stepper motor works differently from a normal DC motor, which rotates continuously when a voltage is applied to its terminals. The stepper motor has multiple toothed electromagnets arranged around a central, gear-shaped rotor. The electromagnets are powered by an external source, commanded by a driver and a microcontroller. To make the motor turn, the first electromagnet is powered, magnetically attracting the teeth of the rotor. When the rotor teeth are aligned with the first electromagnet, they are slightly offset from the next electromagnet. So when the next electromagnet is powered and the previous one is switched off, the rotor turns towards it, and the process repeats. Each of these rotations is called a "step", and an integer number of steps forms a complete rotation. In this way, the motor can be turned with precision.
Fig. 16-28BYJ-48 stepper motor [34]
The picture above shows a 28BYJ-48 stepper motor. Its coils are driven through several Darlington transistors, one for each coil. A Darlington transistor is made up of two cascaded transistors, allowing a large current to be controlled with a very small command current.
Stepper motor technical features:
Recommended power supply voltage: 5V
Number of phases: 4
Approximate reduction: 1:64
Resolution: 64 steps / revolution
Resistance / phase: 50 ohms
Minimum torque: 34.3 mN·m
Degree of insulation: A
Motor Driver
The motor driver is an A4988 microstepping bipolar stepper motor driver, with adjustable current limiting, over-current and over-temperature protection, and five different microstep resolutions (down to 1/16 step). It operates from 8 V to 35 V and can deliver up to approximately 1 A per phase without a heat sink or forced airflow (it is rated for 2 A per coil with sufficient additional cooling).
Fig. 17-A4988 driver [35]
Fig. 18-Arduino-Driver-Motor connections [36]
In the picture above, the connections between the Arduino board, the driver and the stepper motor are presented. The VDD and GND pins of the driver are connected to the power pins of the Arduino, and the STEP and DIR pins are connected to the ICSP pins of the Arduino (pin 1 and pin 3). Pins 1A, 2A, 1B and 2B connect to the stepper motor. VMOT and GND are the power and ground of the motor supply.
Driver’s key features:
Simple step and direction control interface
Five different step resolutions: full-step, half-step, quarter-step, eighth-step, and sixteenth-step
Adjustable current control lets you set the maximum current output with a potentiometer, which lets you use voltages above your stepper motor’s rated voltage to achieve higher step rates
Intelligent chopping control that automatically selects the correct current decay mode (fast decay or slow decay)
Over-temperature thermal shutdown, under-voltage lockout, and crossover-current protection
Short-to-ground and shorted-load protection
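For reference, the five microstep resolutions are selected through the logic levels applied to the driver's MS1, MS2 and MS3 pins (per the A4988 datasheet):
MS1 MS2 MS3 - Resolution
L   L   L  - Full step
H   L   L  - Half step
L   H   L  - Quarter step
H   H   L  - Eighth step
H   H   H  - Sixteenth step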
Line Laser Diodes
The two line-laser diodes are positioned on the OX axis, at the same height and distance from the platform as the camera. Each laser, together with the camera and the turntable, forms a triangle. Like the camera, the diodes are placed on the stand.
Laser Features:
Weight: 6.3g
Diameter: 10mm
Length: 31mm
Operating voltage: 2.8 – 5.2 VDC
Maximum current amplitude: 25mA
Operating temperature: −10 °C to 45 °C
Fig. 19-Line laser diodes [37]
The stand
An essential element of the system is the stand. It is the unifying basis of all the components and ensures the process is carried out according to the requirements. It must allow the safe transport of all components, preserve their relative positions, and not shift during the scanning process.
The inspiration for this design was the bqLabs scanner, Cyclops. The stand is made out of wood and aluminum for ease of transportation. All the elements are fixed to the stand; the only mobile element is the rotating platform.
Fig. 20-Sideview
The Arduino board, the motor driver and the stepper motor are housed inside the box. As can be seen, the box is made of aluminum and has holes for the power supply and the USB cable, as well as for ventilation. On top of the box sits the turntable, which is driven by the stepper motor and on which the object to be scanned is placed.
Fig. 21-Top view
The box is connected at a fixed distance, by an arm also made of aluminum, to a support that holds the two lasers and the camera. The support has a trapezoidal shape, with the long base up and the short base down. The lasers are placed at the two ends, and the camera in the center of the support.
4.2 Software design and implementation
The software was designed and developed by myself, so that it works seamlessly with the hardware model that I built. One of the reasons I developed it myself is that I wanted it to be user friendly and very easy to operate by anyone without a technical background.
The program was written in the MATLAB programming language, using MATLAB R2013b as the working environment. The reason I used this version is that it is the last version that does not require the user to install drivers for the image acquisition tool, making it easier to work with.
Why MATLAB? One of the reasons for which I chose MATLAB is because it is very good for data acquisition and image processing. It has a very large (and growing) database of built-in algorithms for image processing and computer vision applications. Also, it allows you to test algorithms immediately without recompilation. You can type something at the command line or execute a section in the editor and immediately see the results, greatly facilitating algorithm development. The ability to process both images and videos was another argument for why I chose MATLAB. [38]
The software was developed specifically for our hardware setup. Arduino3DScanner.fig is the main GUI file and can be launched from the MATLAB console by typing “guide” and selecting the Arduino3DScanner project from the browse field. Arduino3DScanner.m contains all the code for the application and implements the scanning algorithm. There are basic utility functions that are self-explanatory, in the sense that they control the functioning of the lasers, the stepper motor and the camera. The cameraParams.mat file contains calibration data from my setup.
The program can be configured for any camera model (webcam, DSLR, industrial cameras). The resulting file is a PLY, which contains the coordinates and colors of each point on the scanned surface. It can be used in a CAD program such as the Autodesk products, or in an open-source program like MeshLab.
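For reference, a minimal ASCII PLY header matching this structure looks as follows (the vertex count and the sample data line are placeholders):
ply
format ascii 1.0
element vertex 12345
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header
-0.012 0.087 0.154 201 188 176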
Fig. 22-Software functionalities
As stated before, the program is made up of a main file, Arduino3DScanner, and functions that command the hardware: start_laser, stop_laser, rotateMotor and generatePoints. We will present each function separately in order to see its importance for the program as a whole.
Before explaining what each function does, it is necessary to understand that everything is controlled by the MATLAB software we developed; no application-specific sketch runs on the Arduino board, which only carries the I/O firmware described below.
MATLAB connection to Arduino
In order to create a connection between MATLAB and Arduino, I downloaded and installed the Arduino hardware support package from MathWorks (https://www.mathworks.com/hardware-support/arduino-matlab.html). The package contains the install file and libraries that must be uploaded to the board in order to establish the connection.
I installed the package and uploaded the adios.pde library to the board, which enables me to program and command the digital pins of the board. After these steps, I was able to create a connection between MATLAB and Arduino via the USB cable.
The command used to connect to the board is a = arduino('COM3'), where a is the Arduino connection object and COM3 is the port to which the board is connected. To find out which COM port the board is connected to, the user should open Start → Control Panel → Device Manager → Ports (COM & LPT).
In order to learn more about the Arduino package, I typed arduino in the command line, and from there I found the list of methods that I needed in order to program my board from MATLAB.
Programming the Arduino board
To program the Arduino board from MATLAB, I used only two methods from the Arduino support package: pinMode() and digitalWrite(). These methods helped me program the pins of the board.
pinMode(a, pin, str) - a method that reads or sets the I/O mode of a digital pin
a - the Arduino class object
pin - the number of the pin to be programmed
str - a string that specifies the pin mode: 'INPUT' or 'OUTPUT' [39]
digitalWrite (a, pin, value) = is a method that performs digital output. It is used to set pins values.
a-is the Arduino class object
pin-the number of the pin to be programed
value-represents the status of the pin: 1 and 0, HIGH and LOW. When a pin is 1 it means it’s active, 0 it means is passive. [40]
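As a minimal usage sketch of these two methods (pin 2 is chosen here only because it matches one of the laser pins used later; any digital pin would do):
pinMode(a, 2, 'OUTPUT') % configure digital pin 2 as an output
digitalWrite(a, 2, 1)   % drive the pin HIGH (active)
pause(1)
digitalWrite(a, 2, 0)   % drive the pin LOW (inactive)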
MATLAB program functions
rotateMotor(a, angle) - This function is used to command the motor. angle is a variable giving the angle of rotation of the platform and a is the Arduino class object. Signals are sent to pins 12 and 13 of the Digital PWM header on the Arduino board. These pins correspond to pins 1 and 3 of the ICSP header, which are connected to the STEP and DIR pins, respectively, on the motor driver. When, for example, the STEP pin receives a HIGH (1) impulse, the motor is commanded to rotate one step.
STEP - this control line drives the stepper motor. When we apply a pulse to this line (000011110000), the driver moves the stepper motor by one step on the rising edge of the pulse, i.e., when the line transitions from 0 to 1 (the edge on which the A4988 driver steps). We connect this line to pin 12 of the Arduino and write a small MATLAB function to send a pulse on this I/O pin.
DIR - this control line decides whether the stepper motor rotates in the clockwise or counter-clockwise direction. We connect this line to pin 13 of the Arduino. Setting this line to logical 0 makes the stepper rotate in the clockwise direction and logical 1 makes it rotate in the counter-clockwise direction.
function rotateMotor(a, angle)
pinMode(a, 12, 'OUTPUT')      % declare pins 12 (STEP) and 13 (DIR) as outputs
pinMode(a, 13, 'OUTPUT')
digitalWrite(a, 13, 0)        % DIR = 0 (LOW) selects clockwise rotation
steps = round(angle/1.8);     % the motor moves 1.8 degrees per step
for i = 1:steps
    digitalWrite(a, 12, 1)    % pulse STEP high ...
    pause(.01)
    digitalWrite(a, 12, 0)    % ... then low: one full pulse = one step
    pause(.01)
end
end
start_laser(a) - The start_laser function turns on the two line laser diodes. It is a simple function: we declare pins 2 and 3 as outputs and then write them HIGH with digitalWrite(), which turns on the lasers.
function start_laser(a)
pinMode(a, 2, 'OUTPUT') % declare digital pins 2 and 3 as OUTPUT
pinMode(a, 3, 'OUTPUT')
digitalWrite(a, 2, 1) % turn on laser diodes
digitalWrite(a, 3, 1)
pause(.01);
end
stop_laser(a)-It is used to turn off the laser diodes once the scanning process is done.
function stop_laser(a)
digitalWrite(a, 2, 0) %turns off the lasers by making the outputs LOW
digitalWrite(a, 3, 0)
pause(.01);
end
generatePoints(img, orig_img, angle) - This function generates the points of the cloud for the current frame. It scans each row of the binary difference image for the laser line, triangulates the depth of each detected pixel from its lateral offset using the known laser-camera angle, reads the corresponding color from the original image, and finally rotates the points according to the current platform angle.
function [points,colors] = generatePoints(img,orig_img,angle)
[orig_h,orig_w] = size(img);
midpoint = floor(orig_w/2 + 0.5);
% keep only a 300-pixel band right of the image center, where the laser line falls
img_truncated = img(:,midpoint:midpoint+300);
orig_img_truncated = orig_img(:,midpoint:midpoint+300,:);
[height,width] = size(img_truncated);
theta = 31.5925;                        % laser-camera angle, in degrees
points = [];
colors = [];
for row = 1:height
    scanline = double(img_truncated(row,:));
    [peaks,x] = findpeaks(scanline);    % locate the laser line on this row
    if ~isempty(x)
        if size(x,2) > 1
            x = mean(x);                % several peaks: use their average position
        end
        y = x/tand(theta);              % triangulate depth from the lateral offset
        point = [x,y,height-row];
        color = [orig_img_truncated(row,floor(x + 0.5),1) ...
                 orig_img_truncated(row,floor(x + 0.5),2) ...
                 orig_img_truncated(row,floor(x + 0.5),3)];
        points = [points; point];
        colors = [colors; color];
    end
end
if ~isempty(points)
    R = rotz(angle);                    % rotate by the current platform angle
    points = points*R;
end
end
Camera calibration
Camera calibration is a very important step in this project. We need our camera to be precise and accurate in order to generate a better-looking point cloud. To calibrate the camera, we use the Camera Calibrator app from MATLAB's Computer Vision System Toolbox.
We can use the camera calibrator application to estimate camera intrinsic, extrinsic, and lens distortion parameters. We can use these camera parameters for various computer vision applications. These applications include removing the effects of lens distortion from an image, measuring planar objects, or reconstructing 3-D scenes from multiple cameras.
The Camera Calibrator is opened by simply typing cameraCalibrator in the command window. From there we can start calibrating our camera and adjusting the desired parameters.
For calibration, we use a checkerboard pattern with 12 columns and 7 rows. Each checkerboard square has a size of 15 mm.
To begin calibration, we must add images. We can add saved images from a folder or add images directly from a camera. The calibrator analyzes the images to ensure they meet the calibrator requirements and then detects the points.
For best results, I used between 10 and 20 images of the calibration pattern which I took using my mobile phone. The calibrator requires at least three images in order to calibrate your camera. The calibration pattern and the camera setup must satisfy a set of requirements to work with the calibrator. For instance, try to use uncompressed images or lossless compression formats such as PNG.
Fig. 23-Camera calibration results
We can evaluate calibration accuracy by examining the reprojection errors and the camera extrinsics, and by viewing the undistorted image. For best calibration results, I used all three evaluation methods.
Fig. 24 -Camera parameters
The result of the calibration can be exported and saved as a .mat file (cameraParams.mat, which the scanning program loads at startup).
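For reference, the same calibration can also be scripted. The following is a minimal sketch, assuming the Computer Vision System Toolbox calibration functions and a hypothetical folder calib_images holding the checkerboard photos:
files = dir(fullfile('calib_images','*.png'));
imageFileNames = fullfile('calib_images', {files.name});
[imagePoints, boardSize] = detectCheckerboardPoints(imageFileNames);
squareSize = 15; % size of one checkerboard square, in millimeters
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
cameraParams = estimateCameraParameters(imagePoints, worldPoints);
showReprojectionErrors(cameraParams);     % evaluate the reprojection errors
I = imread(imageFileNames{1});
J = undistortImage(I, cameraParams);      % inspect the undistorted image
save('cameraParams.mat', 'cameraParams'); % the file loaded by the scanner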
Image Acquisition
Together, MATLAB, Image Acquisition Toolbox, and Image Processing Toolbox (and, optionally, Computer Vision System Toolbox) provide a complete environment for developing customized imaging solutions. We can acquire images and video, visualize data, and develop processing algorithms and analysis techniques. The image acquisition engine enables us to acquire frames as fast as the camera and PC can support, which makes high-speed imaging possible.
Fig. 25 -Image Processing Toolbox
In this project, I used the toolbox to acquire the images needed to reconstruct the point cloud. The camera records video that is split into frames, which are later processed.
To access the Image Acquisition Toolbox just type imaqtool in the command window of MATLAB and the application window will appear.
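For example, grabbing a single frame from the webcam takes only a few lines (a minimal sketch using the same winvideo adaptor and device ID as in the annex code; these values may differ on another machine):
vidobj = imaq.VideoDevice('winvideo', 1); % first webcam on the winvideo adaptor
frame = step(vidobj);                     % acquire one frame
imshow(frame);
release(vidobj);                          % free the device when done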
Graphical User Interface
GUIs (also known as graphical user interfaces or UIs) provide point-and-click control of software applications, eliminating the need to learn a language or type commands in order to run the application.
GUIDE (GUI development environment) provides tools to design user interfaces for custom apps. Using the GUIDE Layout Editor, we can graphically design a UI. GUIDE then automatically generates the MATLAB code for constructing the UI, which can be modified to program the behavior of the app.
In order to access GUIDE just type guide in the MATLAB command window and the GUIDE Layout Editor will appear.
Fig. 26 -GUIDE Layout Editor
The GUI of the Arduino3DScanner program has been built using the GUIDE Layout Editor. The purpose was to design a user-friendly GUI that can be handled by anyone.
When running the program, the GUI is the first to appear. It has just three buttons for the three functionalities that I developed: START, STOP and SAVE.
Fig. 27-Arduino3DScanner GUI
When pressing the START button, the camera, the lasers and the stepper motor turn on.
Fig. 28 -After pressing START button
When pressing STOP, all of the above processes stop. The STOP button is usually used when a problem is encountered in the scanning process; otherwise the user should allow the scanning process to finish naturally.
Fig. 29 -After STOP button was pressed
The SAVE button is used only after the scanning process has finished. It writes the point cloud to the PLY output file described earlier, ready to be used in a CAD program or in MeshLab.
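The PLY round trip can be checked directly in MATLAB (a short sketch; pcwrite and pcread are Computer Vision System Toolbox functions, and output is the point cloud object produced by the program):
pcwrite(output, 'scan', 'PLYFormat', 'ascii'); % writes scan.ply
pc = pcread('scan.ply');                       % read the file back
pcshow(pc);                                    % display the colored point cloud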
MeshLab processing
MeshLab is an open source system for processing and editing 3D triangular meshes. It provides a set of tools for editing, cleaning, healing, inspecting, rendering, texturing and converting meshes. It offers features for processing raw data produced by 3D digitization tools/devices and for preparing models for 3D printing.
The automatic mesh cleaning filters include the removal of duplicated and unreferenced vertices, non-manifold edges and null faces. Remeshing tools support high-quality simplification based on the quadric error measure, various kinds of subdivision surfaces, and two surface reconstruction algorithms from point clouds, based on the ball-pivoting technique and on the Poisson surface reconstruction approach. For the removal of noise, usually present in acquired surfaces, MeshLab supports various kinds of smoothing filters and tools for curvature analysis and visualization.
It includes a tool for the registration of multiple range maps based on the iterative closest point algorithm. MeshLab also includes an interactive direct paint-on-mesh system that allows the user to interactively change the color of a mesh, define selections and directly smooth out noise and small features.
Fig. 30-MeshLab working environment [41]
We use MeshLab to create a more realistic 3D model. MeshLab helps by processing the point cloud of our scanned model and connecting its points into a mesh. It also gives the user the possibility of applying texture to the 3D model, making it look more natural.
4.3 Further development
The developed 3D scanner is just a prototype, so further development is indeed required to improve the scanning accuracy and precision of the device. Below is a list of changes that may improve the device; I have taken into account both hardware and software changes.
To improve object scanning, I could modify the system as follows:
attaching an industrial green line laser;
enlarging the rotating platform to scan larger objects;
adding diffuse light sources from three angles: left, right and center.
The object must have the following characteristics:
smaller size than the field of view
medium or low complexity
matte or medium-reflective material
The software can be improved by implementing more complex algorithms in order to generate a better point cloud and process the entire image. Another idea is to filter the image and, respectively, the point cloud (cleaning the point cloud). This will help when connecting the points and applying the texture, by producing a cleaner mesh.
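As an illustration of the proposed cleaning step, here is a minimal sketch assuming the pcdenoise function from the Computer Vision System Toolbox is available; the parameter values are assumptions that would need tuning:
pc = pointCloud(points, 'Color', colors);                     % raw scan data from generatePoints
pcClean = pcdenoise(pc, 'NumNeighbors', 4, 'Threshold', 1.0); % drop isolated outliers
pcshow(pcClean);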
4.4 Other scanning programs
3D scanning software using photogrammetry technology
These programs reconstruct the 3D model with the help of images of the environment or of an object, taken from several angles and positions. They analyze the common points, calculate the spatial position from which each image was taken, create the point cloud and finally create the polygonal links, resulting in a 3D model of the scanned surface.
123D Catch
Fig. 31-3D scanning using 123D Catch [42]
This program offers a simple way to scan 3D objects and environments. It is a free program available on iOS, Android, Windows Mobile and Windows Desktop platforms. The only tool you need to perform a scan is a camera or a mobile phone. The process is simple: the object or environment is photographed from several angles, and the more angles we capture, the more accurate the model becomes. The photos are loaded into the program, which then performs the 3D reconstruction of the object or of the scanned environment.
Insight3D
Insight3D is also part of the category of programs that create 3D models from a set of images. A scene or real object is photographed from multiple angles; the user uploads these images into the program, which automatically matches them, calculates the position in the scene from which each image was taken, and represents the 3D scene with the generated point cloud.
You can use this program to create 3D polygonal textures of a model. The main purpose of this program is strictly educational.
Fig. 32-3D representation of cloud points in the Insight3D program [43]
3D scanning programs using structured light technology
The principle of this technology is to project a narrow band of light onto a three-dimensional surface, which produces a line of light that appears distorted from any perspective other than that of the projector; this distortion can be used for an exact reconstruction of the geometry of the surface on which the light is reflected.
3D Underworld
Recently, there has been an increase in 3D representations of real-world objects in the virtual world. Many methods and systems have already been proposed to address this issue, involving active sensor technologies, passive sensor technologies and combinations of both.
Fig. 33- Positioning System Elements for SLS Scanning [44]
Structured Light Systems (SLS) use active sensors, such as projectors and laser emitters, to produce light in a known pattern. Scanning involves projecting this light pattern onto the surface of the object and capturing the imprinted pattern with one or more cameras.
Fig. 34-Reconstruction of a 3D model using the 3D Underworld program [44]
This technique is quite general, but there are some accepted positions for the cameras. The positioning depends on the position of the scanned object, as well as on the number of cameras and the type of lens used. However, as the angle between the cameras increases, the area of the object that can be reconstructed decreases, since the common viewing area of the cameras is reduced. It is recommended to position the projector between the cameras, as represented in Fig. 33.
3D Underworld is an automatic modeling environment for point cloud information. Initially, the information is preprocessed, then distributed over the polygon network using an unsupervised clustering algorithm.
Dependencies:
● Canon SDK
● OpenCV2.4
It is an open-source, non-commercial program, intended for research and education purposes only.
3D scanning programs using laser technology
MakerScanner
Another open-source program is MakerScanner. The program operates as follows:
● projecting a laser line onto an object
● recording the position of the line
● rebuilding the 3D geometry
The program is available on Windows and Linux platforms. On Windows it is installed from an executable, while on Linux we have to install the following dependencies:
● OpenCV
● wxWidgets
To perform a scan, the laser line must be vertical, perpendicular to the ground. A band about 25 px high at the top of the image must contain a flat background (above the green line shown by the program); this serves as the reference for the depth reconstruction.
Fig. 35-A 3D model reconstruction with Makerscanner [45]
The camera must be fixed and is not allowed to move during scanning. The program uses a difference-comparison technique to detect the laser, and errors can occur during the scan.
Classification of 3D scanning software by platform, creator and license:
Table 1 - 3D scanning software classification
Conclusions
Following the research, I determined how effective my prototype is and what objects I can scan with it. I have seen how hard it is to preserve the shape of a complex object and how difficult it is to obtain a good 3D model.
Surfaces influence the outcome of the reconstruction. A surface with very high reflectivity, such as a mirror, or very low reflectivity, such as glass, will result in a distorted reconstruction of the 3D model or in missing areas.
Colors are also very important in this context as they are very useful in the object reconstruction from the point cloud.
Light is also an essential factor in the quality of the color shades obtained in the 3D reconstruction of objects. Artificial white light or sunlight is sufficient to capture the color information of the scanned object.
To solve many of these problems, we can perform multiple scans of the object from multiple angles, bring them into an environment with the same reference system and unite them, forming the complete 3D model of the object.
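A minimal sketch of such a merging step, assuming the ICP-based pcregrigid and pcmerge functions from the Computer Vision System Toolbox and two hypothetical partial scans:
pcA = pcread('scan_front.ply');
pcB = pcread('scan_back.ply');
tform = pcregrigid(pcB, pcA, 'Metric', 'pointToPoint'); % align B to A with ICP
pcBAligned = pctransform(pcB, tform);
merged = pcmerge(pcA, pcBAligned, 0.5);                 % unite on a 0.5 mm grid (assumption)
pcshow(merged);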
In my case, I can say that the 3D scanner prototype I built closely resembles, in its hardware, the 3D scanners already on the market. The mechanical design of the project was built in accordance with the models that served as a source of inspiration. From an architectural point of view, my design does not necessarily bring innovative ideas, but follows in the steps of the existing designs, the only difference being the materials used in its construction. However, the software part proposes a new approach. By using MATLAB as both coding language and working environment, many possibilities and solutions for the point cloud problem opened up to me. First of all, the use of MATLAB made the use of another coding language, for example the Arduino language, unnecessary. Many 3D scanner software programs are split into two parts: a master program and a board program.
MATLAB allows you to program the Arduino board directly, without any need for a board program. This eliminated any issues that might have appeared between a master and a board program. Another advantage is that MATLAB has more to offer for image acquisition and processing than other software packages. A number of useful toolboxes made my life easier when I implemented the code. Unfortunately, the code is still at a testing level of implementation, which is one of the reasons why the prototype is not yet working at full capacity.
In conclusion, it can be said that this prototype represents a good endeavor in the field of object reconstruction and 3D modelling and that with further improvement it can be a really good example for those who want to enter and pursue this field.
References
[1]. F. Harshama, M. Tomizuka, and T. Fukuda, “Mechatronics-what is it, why, and how? -and editorial,” IEEE/ASME Trans. on Mechatronics, 1996
[2]. D. G. Alciatore and M. B. Histand, Introduction to Mechatronics and Measurement Systems, McGraw Hill, 1998
[3]. W. Bolton, Mechatronics: Electronic Control Systems in Mechanical Engineering, Longman, 1995
[4]. Hyungsuck Cho. Opto-Mechatronic Systems Handbook: Techniques and Applications, 2002
[5]. Jon Rigelsford, "Opto-Mechatronic Systems Handbook: Techniques and Applications", Assembly Automation, 2003
[6]. IEEE. Optomechatronic technology: The characteristics and perspectives, 2005.
[7]. https://en.wikipedia.org/wiki/Eratosthenes#.22Father_of_geography.22
[8]. https://en.wikipedia.org/wiki/Pantograph
[9]. https://en.wikipedia.org/wiki/3D_scanner
[10]. https://en.wikipedia.org/wiki/Coordinate-measuring_machine
[11]. LMI Technologies. A simple guide to understanding 3D scanning technologies, 2013- http://gomini3d.com/sites/default/files/EBOOK_A_Simple_Guide_To_3D.pdf
[12]. Yan Cui, Sebastian Schuon. 3D Shape Scanning with a Time-of-Flight Camera.
[13]. M. Callieri, P. Cignoni, M. Dellepiane, R. Scopigno, Pushing Time-of-Flight scanners to the limit, 2009
[14]. Will Strober. Laser Triangulation 3D scanner, 2011.
[15]. Gabriel Y. Sirat, Freddy PAZ. Conoscopic Holography, 2005
[16]. Douglas Lanman, Gabriel Taubin. Build Your Own 3D scanner: 3D Photography for Beginners, 2009
[17]. en.wikipedia.org/wiki/Correspondence_problem
[18]. Akash Malhotra, Kunal Gupta, Kamal Kant. Laser Triangulation for 3D Profiling of Target, 2011.
[19]. Francis, G. K. and Weeks, J. R. "Conway's ZIP Proof.", 1999.
[20]. https://en.wikipedia.org/wiki/Euclidean_geometry
[21]. https://en.wikipedia.org/wiki/Three-dimensional_space
[22]. Wolfgang Boehler, Andreas Marbs. Investigating Laser Scanner Accuracy, 2003.
[23]. https://www.slideshare.net/umarjamil10000/mechatronics-systems
[24]. http://www.azorobotics.com/Article.aspx?ArticleID=7
[25]. http://www.fao.org/docrep/009/a0406e/a0406e07.htm
[26]. https://www.slideshare.net/rajandas/00-hardware-of-personal-computer-v1-1
[27]. http://www.astronomy.ohio-state.edu/~thompson/1101/lecture_aristarchus.html
[28]. http://opticalengineering.spiedigitallibrary.org/article.aspx?articleid=1088709
[29]. http://mesh.brown.edu/byo3d/source.html
[30]. https://en.wikipedia.org/wiki/Triangulation_(surveying)
[31]. http://www.logitech.com/en-in/product/hd-webcam-c310
[32]. https://ledcolor.ro/arduino-uno-r3-clona.html
[33]. http://arduino.ru/forum/obshchii/pin-mapping-sootvetstvie-vyvodov-i-registrov
[34]. https://www.geeetech.com/wiki/index.php/Stepper_Motor_5V_4-Phase_5-Wire_%26_ULN2003_Driver_Board_for_Arduino
[35]. http://www.ebay.com/itm/RAMPS-Pololu-A4988-StepStick-stepper-motor-driver-with-heatsink-for-Sanguinololu-/191098138506
[36]. https://www.pololu.com/product/1182
[37]. https://www.amazon.com/Focusable-650nm-ModuledriverPlastic/dp/B012V3U3KK
[38]. https://www.mathworks.com/matlabcentral/answers/132436-image-processing-3d-and-2d
[39]. https://www.mathworks.com/help/supportpkg/arduinoio/ref/configurepin.html
[40]. https://www.mathworks.com/help/supportpkg/arduinoio/ref/writedigitalpin.html
[41]. http://makerzone.mathworks.com/resources/raspberry-pi-matlab-based-3d-scanner/
[42]. http://www.123dapp.com/catch
[43]. http://insight3d.sourceforge.net/
[44]. Kyriakos Herakleous, Charalambos Poullis. 3DUNDERWORLD-SLS: An Open-Source Structured-Light Scanning System for Rapid Geometry Acquisition, 2014
[45]. http://abarry.org/makerscaner/1-makerscaner.html
Annexes
The main file of the software:
function varargout = Arduino3DScanner(varargin)
% ARDUINO3DSCANNER MATLAB code for Arduino3DScanner.fig ARDUINO3DSCANNER, by itself, creates a new ARDUINO3DSCANNER or raises the existing singleton*.
% H = ARDUINO3DSCANNER returns the handle to a new ARDUINO3DSCANNER or the handle to the existing singleton*
% ARDUINO3DSCANNER('CALLBACK',hObject,eventData,handles,…) calls the local function named CALLBACK in ARDUINO3DSCANNER.M with the given input arguments.
% ARDUINO3DSCANNER('Property','Value',…) creates a new ARDUINO3DSCANNER or raises the existing singleton*. Starting from the left, property value pairs are applied to the GUI before Arduino3DScanner_OpeningFcn gets called. An unrecognized property name or invalid value makes property application stop. All inputs are passed to Arduino3DScanner_OpeningFcn via varargin.
% Begin initialization code
gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
    'gui_Singleton', gui_Singleton, ...
    'gui_OpeningFcn', @Arduino3DScanner_OpeningFcn, ...
    'gui_OutputFcn', @Arduino3DScanner_OutputFcn, ...
    'gui_LayoutFcn', [] , ...
    'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code
% – Executes just before Arduino3DScanner is made visible.
function Arduino3DScanner_OpeningFcn(hObject, ~, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved – to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to Arduino3DScanner (see VARARGIN)
% Choose default command line output for Arduino3DScanner
handles.output = hObject;
% connect to the Arduino board
handles.a = arduino('COM3');
% load the camera calibration data saved by the Camera Calibrator
s = load('cameraParams.mat');
handles.cameraParams = s.cameraParams;
% connect to the webcam
handles.vidobj = imaq.VideoDevice('winvideo', 1);
% initialize the scanning data
handles.stepAngle = 1.8;   % motor step angle, in degrees
handles.stepSize = 1;
handles.points = [];
handles.colors = [];
handles.filename = 'output';
handles.break = false;
% Update handles structure
guidata(hObject, handles);
% UIWAIT makes Arduino3DScanner wait for user response (see UIRESUME)
% uiwait(handles.figure1);
% – Outputs from this function are returned to the command line.
function varargout = Arduino3DScanner_OutputFcn(~, ~, handles)
% Get default command line output from handles structure
varargout{1} = handles.output;
function stepsize_Callback(hObject, ~, handles)
% Hints: get(hObject,'String') returns contents of stepsize as text
%        str2double(get(hObject,'String')) returns contents of stepsize as a double
stepSize = str2double(get(hObject,'String'));
if stepSize > 0 && stepSize < 50
    handles.stepSize = stepSize;
    handles.stepAngle = 1.8 * handles.stepSize; % recompute from the base step angle
end
guidata(hObject, handles);
% – Executes during object creation, after setting all properties.
function stepsize_CreateFcn(hObject, ~, ~)
% Hint: edit controls usually have a white background on Windows.
% See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
set(hObject,'BackgroundColor','white');
end
function edit2_Callback(hObject, ~, handles)
% Hints: get(hObject,'String') returns contents of edit2 as text
handles.filename = get(hObject,'String');
guidata(hObject, handles);
% – Executes during object creation, after setting all properties.
function edit2_CreateFcn(hObject, ~, ~)
% Hint: edit controls usually have a white background on Windows.
% See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
set(hObject,'BackgroundColor','white');
end
% – Executes on button press in start.
function start_Callback(hObject, ~, handles)
handles.break = false;                     % clear any previous STOP request
for currentAngle = 0:handles.stepAngle:360
    h = guidata(hObject);                  % re-read shared data to check the STOP flag
    if h.break
        break;
    end
    % capture the reference frame, with the lasers off
    frame = step(handles.vidobj);
    image_without_laser = imrotate(frame, 90);
    imshow(image_without_laser,[]);
    % capture the laser frame at the same platform position
    start_laser(handles.a);
    frame = step(handles.vidobj);
    image_with_laser = imrotate(frame, 90);
    imshow(image_with_laser,[]);
    stop_laser(handles.a);                 % lasers off again for the next reference frame
    % isolate the laser line and generate the points for this angle
    image_difference = imabsdiff(image_with_laser,image_without_laser);
    bw_img = im2bw(image_difference,0.1);
    bw_img = bwareaopen(bw_img,20,8);      % remove small noise blobs
    [points,colors] = generatePoints(bw_img,image_without_laser,currentAngle);
    handles.points = [handles.points; points];
    handles.colors = [handles.colors; colors];
    if ~isempty(handles.points)
        scatter3(handles.axes1,handles.points(:,1),handles.points(:,2),handles.points(:,3),10,'filled');
    end
    rotateMotor(handles.a, handles.stepAngle);  % advance the platform one step
    drawnow;                               % let pending callbacks (STOP) execute
end
pcshow(handles.points,handles.colors,'Parent',handles.axes1,'MarkerSize',20);
handles.output = pointCloud(handles.points,'Color',handles.colors);
assignin('base','output',handles.output);
guidata(hObject, handles);
% – Executes on button press in stop.
function stop_Callback(hObject, ~, handles)
% hObject handle to stop (see GCBO)
% eventdata reserved – to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
handles.break = true;
guidata(hObject, handles);
stop_laser(handles.a)
% – Executes on button press in save.
function save_Callback(~, ~, handles)
pcwrite(handles.output,handles.filename,'PLYFormat','ascii');
% – Executes during object creation, after setting all properties.
function save_CreateFcn(~, ~, ~)
% – Executes during object creation, after setting all properties.
function axes1_CreateFcn(hObject, ~, ~)
% Hint: place code in OpeningFcn to populate axes1
hObject.XTick = [];
hObject.YTick = [];
hObject.ZTick = [];
hObject.XTickLabel = [];
hObject.YTickLabel = [];
hObject.ZTickLabel = [];
hObject.XColor = 'none';
hObject.YColor = 'none';
hObject.ZColor = 'none';
hObject.Color = 'black';
rotate3d(hObject,'on');