A Novel Building Post-Construction Quality Assessment
Robot: Design and Prototyping
Rui-Jun Yan1, Erdal Kayacan1, Senior Member, IEEE, I-Ming Chen1, Fellow, IEEE, and Lee Kong Tiong2
Abstract — This paper describes the design and development of an automated construction quality assessment robot system (QuicaBot) for hollowness, crack, evenness, alignment and inclination problems. To the best of our knowledge, this work is the first attempt to pave the way towards a fully autonomous robotic system for post-construction quality assessment of buildings. The main goal of the novel robot is twofold: to systematize the manual inspection work through automation, resulting in more reliable and objective inspection reports, and to speed up the inspection process, resulting in a cost reduction. Based on our initial on-site tests, the developed robot increases the overall efficiency for all five of the aforementioned problems.
I. INTRODUCTION
Post-construction quality assessment of buildings is an indispensable procedure in the construction industry which is currently carried out by human inspectors. In a standard daily operation, 2-3 inspectors are needed to complete the manual assessment procedure. Such fully manual inspection is subject to several errors caused by incorrect execution of the procedure or the use of inaccurate inspection tools. What is more, the inspection accuracy may decrease over time. Last but not least, manual inspection usually has to be done during the daytime. Motivated by this time-consuming, tiresome and unexciting procedure, we propose a novel automated post-construction quality assessment robot system, as shown in Fig. 1. The proposed
system consists of a cloud-based mobile robot, a 2D laser
scanner, a color camera, a thermal camera, a heater and
an inclinometer, which have been technically validated for
their assessment capabilities [1]. Even though aerial vehicles
are usually used to inspect a dam [2] or vessel [3], a
mobile vehicle is selected in this research considering the
loading capacity and inspection stability. To the best of
our knowledge, this work is the first attempt to pave the
way towards a fully autonomous robotic system for post
construction quality assessment of buildings.
Although there are more problems in construction quality assessment operations which are still manually inspected, only five of them are considered in this investigation: hollowness, cracks, evenness, alignment, and inclination. Motivated
by the developments in infrared thermography technology
1Rui-Jun Yan, Erdal Kayacan, and I-Ming Chen are with the
Robotics Research Center, School of Mechanical and Aerospace
Engineering, Nanyang Technological University, Singapore,
639798 hityrj@outlook.com; erdal@ntu.edu.sg;
michen@ntu.edu.sg
2Lee Kong Tiong is with the School of Civil and Environmen-
tal Engineering, Nanyang Technological University, Singapore, 639798
clktiong@ntu.edu.sg
Fig. 1: A novel post-construction quality assessment robot. Labeled components: 4.5 kW heater, battery, NI DAQs, thermal camera, color camera, inclinometer, laser scanner, industrial PC, linear actuator, cloud-based mobile robot.
[4], [5], a thermal camera is preferred for hollowness as-
sessment. For automatic crack assessment, many studies have proposed the use of color cameras for some specific
environments, such as subway tunnel [6], flexible pavement
surfaces [7], [8], and bridge decks [9]. Motivated by these
successful applications, a color camera is preferred for the
crack detection in this investigation. By using a 2D laser
scanner, evenness is assessed by checking the deviation of the extracted line segments, which have mostly been used to build a 2D environment map and localize a mobile robot [10], [11]. In this investigation, we also propose an assessment methodology which can accurately give the angle between two walls. To calculate this angle, a number of plane extraction methods [12]–[14] have been proposed in the literature to extract a plane from the sensor data. Finally, an inclinometer is used to assess the ground inclination. The five aforementioned defects and their corresponding automated assessment algorithms are elaborated and experimentally validated to demonstrate the robustness of the proposed novel assessment robot system.
The rest of this paper is organized as follows: the sensors
and their related mechanisms are briefly introduced in Sec-
tion II and the proposed algorithms are presented in Section
III. The experimental results are given in Section IV. Finally,
some conclusions are drawn from this study in Section V.
II. SENSORS AND MECHANISMS
The selected sensors for QuicaBot are shown in Fig. 2
and the related mechanisms are shown in Fig. 3. The field
of view for the FLIR thermal camera is 25 × 18.8 degrees, and its working frequency is around 30 Hz.

Fig. 2: Selected sensors for QuicaBot and their assessment items: RGB camera (cracks on the ground and walls); LMS500 laser scanner (evenness of the ground and walls, and alignment of two walls); A310 thermal camera (hollowness of the ground and walls); AGS005 inclinometer (inclination of the ground).
This thermal
camera is used to capture thermal images after heating the
assessed environment for a short time by using a heater. To
cover a larger capture area at a fixed location, this thermal
camera is mounted on a pan-tilt device as shown in Fig.
3a. Based on the thermal images, the hollow defects can be
extracted successfully. However, in the manual assessment,
the inspector needs to hold a metal rod and slide it inch by inch while keeping tight contact with the assessed tiles. During
this process, the hollowness is assessed by distinguishing
the generated sound from the friction of the metal rod and
the tiles. Even for an experienced inspector, a long assessment time is needed to complete the hollowness assessment of a room unit. Moreover, the hollow shape of a tile cannot be assessed by using only the friction sound. In the proposed autonomous robot, on the other hand, the hollow shape can be clearly extracted from the captured thermal image.
A low cost color camera is mounted on the top of the
thermal camera to capture color images for crack assessment.
Nine color images are captured in different directions by
controlling the pan-tilt device while the robot is stationary.
Currently, high-resolution color cameras are the most common means of crack assessment and have been used successfully to detect cracks for vessel [3] and bridge [15] maintenance. When compared to the visual checking of manual assessment, automatic crack assessment not only achieves a more accurate result in a relatively shorter time, but also yields the length and shape of the crack.
A SICK LMS500 laser scanner is selected to assess the
evenness of the ground and walls as well as the alignment
of walls. This laser scanner has a scanning range of 190
degrees with a measurement accuracy of 6 millimeters,
and its maximum working frequency is 100 Hz with a measurement resolution of 1 degree. In our following set
of experiments, we choose the angular resolution of 0.3333 degrees with a working frequency of 50 Hz.
Fig. 3: 3D model for the designed mechanisms: (a) Pan-tilt device; (b) Linear actuator.
For evenness
assessment, this laser scanner is mounted on a device which
has a rotation range of 90 degrees. By controlling the device rotation and scanning at an interval angle of 2 degrees, 45 scans can be obtained during the assessment process, which yields many more samples than the five samples taken in the manual evenness assessment of one wall. This laser scanner
with the rotation device can also be used to construct a 3D
model for the assessment environment [16].
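As a rough illustration of how such tilted 2D scans could be assembled into 3D points (a sketch under assumed conventions; the tilt-axis choice and function names below are ours, not taken from the paper or from [16]):

import numpy as np

def scan_to_points_3d(ranges, angles, tilt_deg):
    # Lift one 2D scan (ranges in meters, bearing angles in radians) taken at a
    # given tilt of the rotation device into 3D points. The tilt is assumed to be
    # a pure rotation about the scanner's x axis; adapt to the real mechanism geometry.
    x = ranges * np.cos(angles)
    y = ranges * np.sin(angles)
    points = np.stack([x, y, np.zeros_like(x)], axis=1)
    t = np.radians(tilt_deg)
    rot_x = np.array([[1.0, 0.0, 0.0],
                      [0.0, np.cos(t), -np.sin(t)],
                      [0.0, np.sin(t), np.cos(t)]])
    return points @ rot_x.T

# Example: 45 scans taken at 2-degree intervals over the 90-degree rotation range
# cloud = np.vstack([scan_to_points_3d(r, a, 2.0 * k) for k, (r, a) in enumerate(scans)])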
To realize the alignment assessment, the rotation device
with the laser scanner is mounted on a linear actuator
as shown in Fig. 3b to obtain four 2D scans at different
heights. The first two scans and the remaining two scans are
respectively used to extract planes from the scanned walls
for calculating the angles of adjacent walls. Traditionally,
the inspectors need to hold a set square horizontally [17] while one edge is kept in tight contact with one wall. The
alignment of the two walls is assessed by checking the gap
between the other edge of the set square and the second
wall. An error may easily be introduced if the set square
is not accurately held in a horizontal direction because of the inspector's exhaustion or lack of experience.
Finally, a POSITAL AGS005 inclinometer is selected
to assess the inclination of the ground. When compared to the spirit level used in manual assessment and the strain sensors [18] used for inclination estimation, this inclinometer is more suitable for installation on the proposed QuicaBot.
This inclinometer has a measurement resolution of 0.001
degree and measurement accuracy of 0.01 degree with the
measurement range of 8 degrees. In addition, the inclination
angles on both x and y axes can be simultaneously obtained.
III. ASSESSMENT ALGORITHMS
A. Hollowness assessment
Since the temperature increase on tiles with and without hollowness has significantly different characteristics, captured thermal images can be used to extract the hollow feature. In Algorithm 1, the pseudo-code of the proposed hollowness assessment algorithm is presented. A sample image in Fig. 4
is used to illustrate the detailed procedures of the algorithm.
The original image is shown in Fig. 4(a), from which the maximum red value of the image is obtained, since the red color in a thermal image represents high temperature.
After blurring the image, the pixels with lower temperature
than a threshold value are set as white and the contrast of
the remaining pixels is reduced as in Fig. 4(b).
Algorithm 1: Proposed hollowness assessment algorithm
input : A thermal image T
output: The sum of the hollowness areas
BlurImage ← blur(T);
MaxRed ← FindMaxRGB(BlurImage);
for i-th row in BlurImage do
    for j-th column in BlurImage do
        if BlurImage(i, j) < (MaxRed - threshold) then
            BlurImage(i, j).SetWhite();
        else
            BlurImage(i, j).ReduceContrast();
        end
    end
end
BlurImage.border.SetWhite();
BlurImage.TopHat();
GreyImage ← ColorToGrey(BlurImage);
NewBlurImage ← blur(GreyImage);
ContourSet ← ContourExtraction(NewBlurImage);
HollowArea = 0.0;
for i-th contour in ContourSet do
    if ContourSet[i].isclose() && ContourSet[i].size() > LimNum then
        T.plot(ContourSet[i]);
        HollowArea += ContourSet[i].area();
    end
end
To obtain the closed boundaries for the hollow defects,
theTopHat method is used, and the contours of the corre-
sponding grey image are extracted. However, even though the
image is blurred, not all the extracted contours are closed as
in Fig. 4(c). As a matter of fact, the contour of a real hollow area should be closed, especially after the image borders are filled with white. A closure-state checking method is developed to
check whether the extracted contour is closed or not. If the
contour is closed and its length is bigger than a threshold
value, the area located in the interior of this contour is
considered as being a hollow defect. Eventually, the hollow areas of all the features satisfying these hollow conditions are summed. The original image with the plotted hollow contours is shown in Fig. 4(d).
Fig. 4: (a) Original thermal image; (b) Pixels with high temperature; (c) Extracted contours; (d) Final contours plotted on the original image.
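For illustration, a minimal OpenCV sketch of the same blur, top-hat, and contour chain is given below; the kernel sizes, the red-channel margin, and the Otsu binarization are assumed parameters rather than the paper's values, and since OpenCV's findContours returns closed boundaries by construction, the explicit closure check reduces here to a contour-length filter.

import cv2
import numpy as np

def hollow_area_from_thermal(thermal_bgr, red_margin=40, min_contour_len=100):
    # Rough sketch of Algorithm 1: total hollow area (in pixels) from a thermal
    # image rendered with a red-hot palette. Parameters are illustrative only.
    blur = cv2.GaussianBlur(thermal_bgr, (5, 5), 0)
    red = blur[:, :, 2]                               # red channel tracks temperature
    max_red = int(red.max())
    work = blur.copy()
    work[red < (max_red - red_margin)] = 255          # whiten cool pixels
    work[[0, -1], :] = 255                            # whiten borders so contours can close
    work[:, [0, -1]] = 255
    grey = cv2.cvtColor(work, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(grey, cv2.MORPH_TOPHAT, kernel)
    tophat = cv2.GaussianBlur(tophat, (5, 5), 0)
    _, binary = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    hollow_area = 0.0
    for c in contours:
        if cv2.arcLength(c, True) > min_contour_len:  # keep only sufficiently long contours
            hollow_area += cv2.contourArea(c)
    return hollow_area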
B. Crack assessment
When compared to hollowness assessment, crack assessment is more difficult because of the small size of the crack features. A detailed image processing algorithm for crack extraction is proposed in Algorithm 2, and the corresponding example for illustrating the process is shown in Fig. 5. When assessing a crack located on tiles, the joint between two adjacent tiles is a confusing feature because it has a similar size and color to a real crack. Therefore, it may lead to an incorrect assessment result. To eliminate these types
of confusions, line segments are firstly extracted from the
image in Fig. 5(a) and plotted onto the original image as
shown in Fig. 5(b). In the line segments extraction process,
a blurred color image is firstly converted to a grey image and
the edges of this image are detected. These extracted edges
are taken as the input of the process of the line extraction.
Algorithm 2: Proposed crack assessment algorithm
input : A color image C
output: Boundaries of the crack defects
BlurImage ← blur(C);
GreyImage ← ColorToGrey(BlurImage);
EdgeImage ← EdgeDetection(GreyImage);
LineSet ← LineExtraction(EdgeImage);
C.plot(LineSet);
C.BlackHat();
NewBlurImage ← blur(C);
BinaryImage ← GreyToBinary(NewBlurImage);
ContourSet ← ContourExtraction(BinaryImage);
for i-th contour in ContourSet do
    if ContourSet[i].size() > LimNum then
        C.plot(ContourSet[i]);
    end
end
Based on the image in Fig. 5(b), a BlackHat method
is applied to extract the crack related features. To reduce
the redundant features, the output image is converted into
a binary image by comparing the pixel grey value with a
threshold value. The binary image with the extracted features
is shown in Fig. 5(c), from which it can be seen that
some small segments are incorrectly considered as cracks.
A further filter method is used to neglect the small segments
by checking the continuous pixel size of a candidate feature.
Eventually, the contours of all the segments are extracted
and the contour size is filtered with a threshold value. The final output is the original image with the filtered contours of cracks, as shown in Fig. 5(d).
Fig. 5: (a) Original color image; (b) Blurred image with extracted line segments; (c) Binary image; (d) Extracted contours for the crack features.
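As a hedged illustration of this pipeline, the sketch below uses Canny edges with a probabilistic Hough transform as a stand-in for the paper's edge and line extraction, followed by a black-hat filter and contour filtering; all thresholds and kernel sizes are assumed values.

import cv2
import numpy as np

def crack_contours(color_bgr, min_contour_pixels=80):
    # Rough sketch of Algorithm 2: extract crack-like contours from a color image.
    blur = cv2.GaussianBlur(color_bgr, (5, 5), 0)
    grey = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(grey, 50, 150)
    # Detect long straight tile joints and paint them out so they are not taken as cracks
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=5)
    work = blur.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(work, (int(x1), int(y1)), (int(x2), int(y2)), (255, 255, 255), 5)
    # Black-hat highlights thin dark structures (cracks) against the lighter background
    grey2 = cv2.cvtColor(work, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
    blackhat = cv2.morphologyEx(grey2, cv2.MORPH_BLACKHAT, kernel)
    _, binary = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Discard small segments unlikely to be real cracks
    return [c for c in contours if len(c) > min_contour_pixels]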
C. Evenness assessment
The raw sensor data coming from the 2D laser scanner should first be divided into different groups because they may belong to different objects. In Algorithm 3, the raw data are initially separated into groups in two cases: one is when the distance between two contiguous points shows a big difference, and the other is when the angle between the two vectors constituted by four contiguous points is beyond a limit value. For the raw sensor
data in each group, a line segment is extracted by using
the least squares method. The reader is encouraged to refer
to [10] for the detailed derivation of the equations for the
line extraction. The average deviation of the extracted line
segment is calculated and the point having the maximum
deviation is selected. Each data group is further segmented into two at this selected point when the average deviation of its extracted line segment is not less than a limit value.
After extracting a line segment for each group of sensor
data, the contiguous line segments need to be merged to
avoid over-segmentation if these line segments belong to
the same wall or ground. This merging process is realized
by checking the minimum distance and angle of the two
line segments. However, short line segments extracted from fewer than five data points may exist after the previous segmentation. Such a short line segment and its neighboring line segment may form a large angle, even though the two segments may actually belong to the same scanned object. As a further merging step, two noncontiguous line segments satisfying the merging conditions are merged into one if at most two short line segments are located between them.
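A hedged sketch of such a merging test is shown below; the endpoint-distance simplification and the tolerance values are assumptions, not the paper's exact criteria.

import numpy as np

def should_merge(seg_a, seg_b, dist_tol=0.03, angle_tol_deg=5.0):
    # Two 2D line segments (each a pair of endpoints) are merged when their closest
    # endpoints are near each other and their directions are nearly parallel.
    a0, a1 = np.asarray(seg_a, dtype=float)
    b0, b1 = np.asarray(seg_b, dtype=float)
    min_dist = min(np.linalg.norm(p - q) for p in (a0, a1) for q in (b0, b1))
    da = (a1 - a0) / np.linalg.norm(a1 - a0)
    db = (b1 - b0) / np.linalg.norm(b1 - b0)
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(da, db)), 0.0, 1.0)))
    return min_dist < dist_tol and angle < angle_tol_deg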
Algorithm 3: Proposed evenness assessment algorithm
input : A 2D sensor scan s
output: Line segments with eligible or ineligible flag
GroupSet ← GroupSegment(s);
while i-th group in {GroupSet} do
    TempLine ← LineExt(GroupSet[i]);
    if TempLine.AveDev < LimDev then
        LineSet.add(TempLine);
    else
        (g1, g2) ← FurtherSegment(GroupSet[i]);
        GroupSet.swap(GroupSet[i], [g1, g2]);
        i = i - 1;
    end
end
LineSet ← LineMerge(LineSet);
LineSet ← FurtherMerge(LineSet);
while j-th line lj in {LineSet} do
    if lj.MaxDev > LimMaxDev || lj.AveDev > LimAveDev then
        lj.flag = true;
    else
        lj.flag = false;
    end
end
When the previous segmentation and merging steps are
finished, the evenness of the scanned object can be assessed
by checking the average deviation or maximum deviation
of the corresponding line segment. If one of these two
deviations is beyond the corresponding limit value, this wall or ground is considered uneven.
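A simplified sketch of the line fit and deviation check follows; a total-least-squares fit via SVD stands in for the derivation in [10], and the limit values are placeholders loosely based on the millimeter-level figures reported in Section IV.

import numpy as np

def fit_line_and_deviations(points):
    # Fit a line to a group of 2D scan points (Nx2) and return the orthogonal
    # deviations used for the evenness check.
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)   # principal direction of the points
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])
    deviations = np.abs((pts - centroid) @ normal)
    return direction, centroid, deviations.mean(), deviations.max()

def is_uneven(points, lim_ave=0.0032, lim_max=0.006):
    # Flag a wall/ground segment as uneven if either deviation exceeds its limit
    # (limits in meters are illustrative placeholders).
    _, _, ave_dev, max_dev = fit_line_and_deviations(points)
    return ave_dev > lim_ave or max_dev > lim_max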
D. Alignment assessment
Alignment assessment with a laser scanner needs two 2D
scans which have to be obtained at different translational heights or tilt angles. In Algorithm 4, the assumption is
that the two scans are obtained and the line segments are
extracted from each scan by using Algorithm 3. In the
beginning, line segments of each scan with a length longer than 0.5 meter are selected, because narrow walls are also neglected in manual inspection. Then, planes are constituted from these two groups of line segments, and each line segment is used only once: line segments that have already been used to construct a plane are disabled in the following constitution process.
For two spatial line segments, coplanarity is firstly checked
by calculating the distance between one of their four endpoints and a plane constituted by the other three endpoints. If this
point is far from the temporarily constructed plane, these two
line segments cannot constitute a plane. For a wall having
an opening like a door, two line segments are extracted from the raw sensor data obtained by scanning across the door. If both scans cover the door opening, the coplanarity condition may be satisfied for all combinations of these four line segments, which is incorrect.
Algorithm 4: Proposed alignment assessment algorithm
input : Two line sets SetOne, SetTwo
output: All possible angles AngleSet
NewSetOne ← LineSelect(SetOne);
NewSetTwo ← LineSelect(SetTwo);
for j-th line lj in {NewSetTwo} do
    dj.f = false;
end
for i-th line li in {NewSetOne} do
    for j-th line lj in {NewSetTwo} do
        f1 ← IsOnSamePlane(li, lj);
        f2 ← IsOverlap(li, lj);
        dij ← MinDis(li, lj);
        if f1 && f2 && (dij < d) && (!dj.f) then
            NewPlane ← PlaneExt(li, lj);
            PlaneSet.add(NewPlane);
            dj.f = true;
            break;
        end
    end
end
for k-th plane pk in {PlaneSet} do
    t = k + 1;
    for t-th plane pt in {PlaneSet} do
        f ← IsClose(pk, pt);
        TempAngle ← CalAngle(pk, pt);
        if f && TempAngle < LimAngle then
            AngleSet.add(TempAngle);
        end
    end
end

To avoid this case, only two line segments having an overlapping segment are allowed to constitute a plane. However, for two parallel walls, line segments extracted from
two parallel scans obtained in different translational heights
may incorrectly construct a plane. To avoid this case, the
minimum distance of the two line segments is calculated,
because the two line segments far from each other may form
two different walls. When all these conditions are satisfied, a
new plane is extracted from the two candidate line segments.
With the extracted planes, the angles between a plane and
its two neighbor planes can be calculated. In the previous
plane extraction process, the four corners of each plane are
obtained by projecting the four endpoints of the two line
segments onto the extracted plane. To guarantee that the two planes are close, the minimum distance between them is obtained by calculating the distances between the four corners of one plane and those of the other plane. Even though all possible angles between any two close planes are obtained, only the selected angles are stored, because they are closer to the true angle between the two walls needed for the alignment assessment.
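The geometric tests described above can be sketched as follows; the distance tolerance and the normal-based angle computation are our assumptions rather than the paper's exact formulation.

import numpy as np

def are_coplanar(seg_a, seg_b, dist_tol=0.01):
    # Coplanarity test for two 3D line segments (each a pair of endpoints): fit a
    # plane through three of the four endpoints and check the fourth endpoint's
    # distance against dist_tol (meters, illustrative).
    p1, p2 = np.asarray(seg_a, dtype=float)
    p3, p4 = np.asarray(seg_b, dtype=float)
    normal = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(normal)
    if norm < 1e-9:                            # near-collinear points: degenerate case
        return True
    return abs(np.dot(p4 - p1, normal / norm)) < dist_tol

def plane_angle_deg(normal_a, normal_b):
    # Angle (degrees) between two wall planes from their normals; depending on the
    # normals' orientation, the interior corner angle is this value or its supplement.
    na = normal_a / np.linalg.norm(normal_a)
    nb = normal_b / np.linalg.norm(normal_b)
    return np.degrees(np.arccos(np.clip(np.dot(na, nb), -1.0, 1.0)))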
E. Inclination assessment
Ground inclination is assessed after all the previously explained assessments are finished, because the inclinometer measurement needs some time to stabilize after the movements of the robot system. The outputs of the inclinometer are the inclination angles about the X and Y axes. To have
a more accurate inclination assessment result, many groups
of inclination angles in both axes are obtained and their
average value is considered as the inclination angle at the
fixed location. Finally, the inclination status of this location
is assessed by comparing the inclination angle with a limit
inclination value which is defined based on the experimental
results in the following section.
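A minimal sketch of this averaging and thresholding step is given below, assuming an N x 2 array of (X, Y) readings in degrees; the 0.8-degree limit is only a placeholder taken from the limit range quoted in the experiments.

import numpy as np

def assess_inclination(samples_xy, limit_deg=0.8):
    # Average repeated inclinometer readings at one location and flag the ground as
    # inclined if either averaged axis exceeds limit_deg (placeholder value).
    samples = np.asarray(samples_xy, dtype=float)
    mean_x, mean_y = samples.mean(axis=0)
    inclined = abs(mean_x) > limit_deg or abs(mean_y) > limit_deg
    return mean_x, mean_y, inclined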
IV. EXPERIMENTAL RESULTS
A. Assessment of a constructed testbed
To test the designed novel robot and to validate the
proposed algorithms in this paper, a testbed consisting of
two walls with a size of 2 m × 2 m each and a floor with a size of 2.2 m × 2.2 m is constructed as shown in Fig. 6. The
defects of this testbed are manually generated throughout the
construction process. The angle of the two walls is manually
inspected as 91.24 degrees, and the hollow defects and cracks
are constructed at predefined positions. One flooring tile
on the ground is also constructed as a bulgy tile. These
predefined artificially generated problems on the testbed are
the ground truth data for the proposed algorithms.
Before the autonomous assessment procedure starts, the
two walls are heated for 30 seconds, and then the thermal
images are captured while the thermal camera is 1.5 m away from the walls. The extracted contours for the thermal
images are shown in Fig. 7 and the percentages of the
hollowness areas with respect to the whole image are also
described. As can be seen from the images in Fig. 7, the
tiles with hollowness have a higher temperature value than the perfect walls, i.e., those without any hollowness. These results are encouraging in that a thermal camera is one of the efficient ways of assessing hollowness.
Fig. 6: Constructed testbed with QuicaBot.
When capturing the thermal images, color images are si-
multaneously captured. One color image with crack features
is processed in Fig. 8 where the original image is shown in
Fig. 8(a). It can be found that the cracks in the image are quite thin and their color is quite close to the color of the tile joints and the shadows. By using Algorithm 2, the extracted features are expressed as a binary image in Fig. 8(b). The contours of the filtered crack features are shown in Fig. 8(c), which still include some incorrect features. These features can be neglected by setting a bigger threshold value; however, real cracks may also be removed by this operation. To include as many real cracks as possible, the threshold value must be set properly. As experimental tests in different environments accumulate, the assessment result can be improved by using a learning and classification method based on a large number of images.
To check the performance of the laser scanner under
different conditions, it is individually tested with different
scanning angles by setting the inclination angle of the base
platform.
Fig. 7: Hollowness assessment results for the constructed testbed: (a) hollowness area 14.62%; (b) hollowness area 23.14%; (c) hollowness area 72.45%; (d) hollowness area 13.25%.
Fig. 8: Crack assessment for the constructed testbed: (a) Original image; (b) Binary image; (c) Color image with extracted contours.
For each 2D scan, two line segments are extracted
from the two scanned walls. In Fig. 9, four different cases
of evenness assessment are illustrated with a schematic dia-
gram, experiment setup, and the average deviation value for
the two extracted line segments over 200 scans.
Fig. 9: Evenness assessment: (a) The laser scanner is parallel with the floor; (b) The laser scanner is mounted on a platform with an inclination angle of 4.9 degrees; (c) This inclination angle is equal to 8.13 degrees; (d) The laser scanner is moved up about 15 centimeters with the same scanning direction as in (d). The schematic diagram and the figures of the experimental setup are shown at the top-left and top-right respectively for each case. The average deviations of the extracted line segments for 200 scans are shown at the bottom.
By comparing the average deviation of the two line segments, it can be seen that the average deviations of wall 1, plotted with red
squares are bigger than that of wall 2 plotted with blue
dots, for all these scans in each case. The average of the 200 deviation values for wall 1 is close to 4 millimeters in all four cases, and that for wall 2 is smaller than 3 millimeters. Most of the deviation values of wall 2 are smaller than 3.2 millimeters, which is defined as the limit value in the following assessment of the CONQUAS room.
The alignment of the two walls for the testbed is assessed
by obtaining two 2D scans at different heights. In Fig. 10,
the schematic diagram of the scanning area and an example
of a constituted plane from the extracted line segments are
shown. 22 groups of data are obtained, and the calculated
angles of the extracted planes from these data are presented
in Fig. 11. All these angle values lie between 91.3 and 91.6 degrees, which is close to the manually inspected value of 91.24 degrees. The average value over these 22 tests is about 91.43 degrees, plotted with a red dashed line.
Fig. 10: Schematic diagram for evenness assessment of two walls (left), and an example of an extracted plane (right).
The testbed ground is assessed by obtaining 1000 data points at each of the eight locations shown in Fig. 12, in which the ground at the first four locations is even. For the last four locations, the inclinometer is put on the edges of the bulgy tile. As can be seen from Fig. 13, both the maximum and minimum inclination angles of the first four locations in both the X and Y axes are within the limit range of 0.3-0.8 degree. For the fifth and the eighth locations, the inclination angle in X is smaller than 0.3 degree and close to 4 degrees respectively, because the fifth location is in the middle of the tile and the eighth location is on the edge having the maximum bulgy height.
The inclination values in the Y axis for both locations are within the limit range because this bulgy tile is accurately positioned along the Y axis. For the sixth and seventh locations, the inclination angles in both the X and Y axes are out of the limit range.
Fig. 11: Angle of the two walls of the testbed for 22 tests.
Fig. 12: Experimental setup for the inclination assessment of the testbed ground at eight different locations.
The inclination angles
in the Y axis are opposite because the measurement directions of the inclinometer are opposite. The inclination angles of the fifth, sixth, and seventh locations in the X axis are approximately the same because these three locations have the same position along the Y axis.
Fig. 13: Maximum and minimum inclination angle of 1000 tests for each location in the X (top) and Y (bottom) directions.
V. CONCLUSIONS AND FUTURE WORK
This paper proposes a novel robot for the post-construction quality assessment of newly constructed buildings, together with corresponding assessment algorithms using four sensors to automatically inspect five types of defects: hollowness, cracks, evenness, alignment and inclination. The prototype is tested on a constructed testbed to validate the robustness and accuracy of the proposed methods. All these autonomous assessments achieve inspection accuracy close to that of manual assessments. When compared to manual assessment, the proposed assessment system has higher effectiveness and more consistent measurement accuracy. As future work, a user interface will be developed to display the assessment results, and building information modeling will be integrated with the assessment results to show the defect locations.
ACKNOWLEDGMENT
This work was supported by the National Research Foundation of Singapore (NRF2015-TDIR01-03) and also by the School of
Mechanical and Aerospace Engineering, Nanyang Techno-
logical University, Singapore.
REFERENCES
[1] R. J. Yan, C. L. Low, J. Duan, L. Liu, E. Kayacan, I. M. Chen,
and R. Tiong, “Development of a novel post-construction quality
assessment robot system,” in International Conference on Control,
Automation, Robotics and Vision (ICARCV) , Nov 2016, pp. 1–6.
[2] T. Özaslan, K. Mohta, J. Keller, Y. Mulgaonkar, C. J. Taylor, V. Kumar,
J. M. Wozencraft, and T. Hood, “Towards fully autonomous visual
inspection of dark featureless dam penstocks using mavs,” in 2016
IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS) , Oct 2016, pp. 4998–5005.
[3] F. Bonnin-Pascual, A. Ortiz, E. Garcia-Fidalgo, and J. P. Company, “A
micro-aerial platform for vessel visual inspection based on supervised
autonomy,” in 2015 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS) , Sept 2015, pp. 46–52.
[4] A. Foudazi, C. A. Edwards, M. T. Ghasr, and K. M. Donnell, “Active
microwave thermography for defect detection of cfrp-strengthened
cement-based materials,” IEEE Transactions on Instrumentation and
Measurement , vol. 65, no. 11, pp. 2612–2620, Nov 2016.
[5] P. Cotic, D. Kolaric, V. B. Bosiljkov, et al., “Determination of the
applicability and limits of void and delamination detection in concrete
structures using infrared thermography,” NDT & E International ,
vol. 74, pp. 87 – 93, 2015.
[6] W. Zhang, Z. Zhang, D. Qi, et al., “Automatic crack detection and
classification method for subway tunnel safety monitoring,” Sensors ,
vol. 14, no. 10, p. 19307, 2014.
[7] H. Oliveira and P. L. Correia, “Automatic road crack detection and
characterization,” IEEE Transactions on Intelligent Transportation
Systems , vol. 14, no. 1, pp. 155–168, March 2013.
[8] Y. Shi, L. Cui, Z. Qi, F. Meng, and Z. Chen, “Automatic road crack
detection using random structured forests,” IEEE Transactions on
Intelligent Transportation Systems , vol. 17, no. 12, pp. 3434–3445,
Dec 2016.
[9] P. Prasanna, K. J. Dana, N. Gucunski, et al., “Automated crack
detection on concrete bridges,” IEEE Transactions on Automation
Science and Engineering , vol. 13, no. 2, pp. 591–599, April 2016.
[10] R. J. Yan, J. Wu, M.-L. Shao, et al., “Mutually converted arc-line
segment-based slam with summing parameters,” Proc. IMechE, Part
C: Journal of Mechanical Engineering Science , vol. 229, no. 11, pp.
2094–2114, 2015.
[11] R. J. Yan, J. Wu, J. Y. Lee, et al., “Representation of 3d environment map using b-spline surface with two mutually-perpendicular lrfs,” Mathematical Problems in Engineering, vol. 2015, no. 690310,
pp. 1–14, 2015.
[12] X. Qian and C. Ye, “Ncc-ransac: A fast plane extraction method for 3-d range data segmentation,” IEEE Transactions on Cybernetics, vol. 44,
no. 12, pp. 2771–2783, Dec 2014.
[13] R. Hulik, M. Spanel, P. Smrz, et al., “Continuous plane detection
in point-cloud data based on 3d hough transform,” Journal of Visual
Communication and Image Representation , vol. 25, no. 1, pp. 86 –
97, 2014.
[14] C. Feng, Y. Taguchi, and V. Kamat, “Fast plane extraction in organized
point clouds using agglomerative hierarchical clustering,” in Robotics
and Automation (ICRA), IEEE International Conference on , May
2014, pp. 6218–6225.
[15] R. S. Lim, H. M. La, Z. Shan, et al., “Developing a crack inspection
robot for bridge maintenance,” in Robotics and Automation (ICRA),
IEEE International Conference on , May 2011, pp. 6288–6293.
[16] R. J. Yan, J. Wu, J. Y. Lee, A. M. Khan, C.-S. Han, E. Kayacan,
and I.-M. Chen, “A novel method for 3d reconstruction: Division and
merging of overlapping b-spline surfaces,” Computer-Aided Design ,
vol. 81, pp. 14 – 23, 2016.
[17] A. Ani, N. Tawil, S. Johar, et al., “Building condition assessment
for new houses: A case study in terrace houses,” Jurnal Teknologi ,
vol. 70, no. 1, pp. 43–50, 2014.
[18] W. Svensson and U. Holmberg, “Estimating ground inclination using
strain sensors with fourier series representation,” Journal of Robotics ,
vol. 2010, no. 465618, pp. 1–8, 2010.