Design and Development of Multirotor UAV with Automatic Collision Avoidance using Stereo Camera
Nicolas DEMIANNAY1,2, Nicolas BESNIER1,2, Cris THOMAS3, Vikas THAPA4, Vindhya DEVALLA2, Amit Kumar MONDAL*,5
*Corresponding author
1École nationale supérieure d'ingé[anonimizat],
[anonimizat], [anonimizat]
2[anonimizat], Uttarakhand, India,
[anonimizat]
3Università [anonimizat], Italy,
[anonimizat]
4[anonimizat], Hyderabad, India,
[anonimizat]
*,5[anonimizat], Dubai, UAE,
[anonimizat]
DOI: 10.13111/2066-8201.2020.12.3.X
Received: 14 May 2020/ Accepted: xx XXXXX 2020/ Published: September (December) 2020
Copyright © 2020. Published by INCAS. This is an “open access” article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: The operation of drones in cluttered environments like forests and hilly areas is extremely difficult, and such missions cannot be flown autonomously without some intelligence incorporated into the drone for obstacle detection and avoidance. [anonimizat]. The proposed method is implemented on a stereo-vision multicopter using a block matching algorithm. The stereo vision baseline is based on a horizontal configuration and the depth is computed using the sum of absolute differences algorithm. The image processing node (a LabVIEW VI) and the controller node are run on a remote laptop. This VI computes the distance between the multirotor and an obstacle and transmits the depth data to the on-board flight controller through the MAVLink protocol. The algorithm’s efficiency was tested with software in the loop on the Gazebo simulator to analyze the performance of the UAV. The hardware-in-the-loop results obtained after the successful flight test are also presented in this paper.
Key Words: ROS, [anonimizat], LabView, UAV, Gazebo, flight controller
1. INTRODUCTION
The aviation industry in recent times has realized much of the UAV’s [anonimizat]. [anonimizat] a [anonimizat] [1-7]. [anonimizat], land surveillance etc.
[anonimizat] (UAVs) and their autonomy has constantly increased [8]. The need for autonomy also arises sharply from the skillset and the laborious work hours demanded of UAV pilots at the ground station, which often compromises the prospects of the variety of applications that UAVs encourage at present and in the probable future. [anonimizat]. [anonimizat]’t [anonimizat] makes the mission vulnerable when the drone falls into enemy hands. All this brings the necessity of autonomous UAVs or drones to the forefront.
Collision avoidance takes an important place in assisted flights. Although many solutions for obstacle avoidance exist, these solutions remain expensive and most of the work is still at laboratory scale. Only a few companies, such as the Chinese DJI or the French Parrot, dominate the obstacle avoidance systems market.
This paper presents obstacle avoidance using stereo vision in a GPS-denied environment for UAVs, using ROS on a Raspberry Pi and LabView on a laptop. The image processing node and the controller node are run on the remote laptop, which receives real-time images wirelessly. ROS installed on the Raspberry Pi makes it possible to read the images from the stereo camera and transmit them to the laptop, and also allows the actuators to be controlled through the Pixhawk on the UAV.
Limited autonomy of drones has already been achieved in terms of navigation from one point to another using GPS, with a limited accuracy of about 20 meters. Moreover, the accuracy of GPS navigation is variable and regulated by the US government to prevent abuse by third parties, and hence it cannot be relied upon for high accuracy [9-11]. Even though navigation can be managed on a global scale with GPS, and locally the drone can use visual markers and other aids for precise localization and landing, the biggest problem for autonomous drones during navigation is collision or obstacle avoidance. It becomes increasingly difficult when the drone has to be flown in cluttered environments like forests and hilly areas even when manually piloted, and such flights are impossible to perform autonomously without some intelligence incorporated into the drone for obstacle detection and avoidance. This has been a favorite research topic for unmanned ground vehicles (UGVs) for years, and many techniques such as optical flow, neural networks and nature-inspired algorithms, based on textures and learning of environment information, using sensors such as ultrasonic sensors, monocular cameras [12-14], stereo cameras [15], LIDARs [16] and laser range finders [17-19], have been used [20]. However, the same methods and techniques cannot be used directly for UAVs, as they operate in three dimensions rather than in two dimensions like ground vehicles. These sensors are expensive, heavy and consume a lot of power, which affects the endurance of a UAV. Additional features and modifications are needed to apply UGV techniques to UAVs for extracting and interpreting 3D information for collision avoidance, where the stereo camera proves to be much more efficient.
Image coding plays a very important role in 3D information extraction in stereo vision. The motion of the object is determined using image coding; this translational movement generates frame-to-frame displacement of the moving objects [21]. Therefore, displacement estimation is necessary for the UAV to identify and avoid the obstacle. The most popular way of estimating translational motion is the block matching algorithm. Here the motion of a block of pixels is represented using a displacement vector. This displacement vector is determined by the matching technique, which generally follows three stages, namely a motion detector to detect the moving blocks, a displacement estimator to estimate the displacement vector and a data compressor to encode the differences after motion compensation [22]. The choice of an appropriate algorithm depends on the target application, and a lot of models have been developed. The current video compression standards use a translational model with partitioned rectangular regions that transmits one motion vector for every region. These models are extensively used in video compression standards such as H.264 [23, 24] and MPEG [25]. The displacement estimator finds the best match of a reference block in a suitable match area. To do this, fast search algorithms are used to find the optimum point in a search region; thus, the estimation problem turns into a search problem. Most of these search methods focus on the matching criterion, aiming at a reduced computational load without losing optimality [26]. To reduce the resources and computation power required for motion estimation, an efficient SAD algorithm has been adopted as the matching criterion in this paper.
In stereo vision, images are captured using two cameras that are slightly displaced relative to each other. This positional difference is known as ‘horizontal disparity’ and gives rise to depth perception, even if the monocular images are unstructured and noisy and have no distinctive features, as in random-dot stereograms [27]. The correspondence problem deals with the matching of the monocular images by correlating the dots to the corresponding images. This ‘correspondence problem’ is a central issue that the visual system must solve in order to derive three-dimensional information.
The stereo correspondence problem is generally solved using matching techniques of two kinds, namely global and local. Local matching techniques consider a small neighborhood of pixels. Block matching is one of the local matching techniques; it proves to be much more efficient compared to the other techniques discussed above and is most popularly used for stereo correspondence.
2. ALGORITHM
The Sum of Absolute Differences (SAD) algorithm can be defined by the following equation:

$SAD(x, y, d) = \sum_{(i,j) \in W} \left| I_L(x+i, y+j) - I_R(x+i-d, y+j) \right|$

where $I_L$ and $I_R$ are the rectified left and right images, $W$ is the matching window and $d$ is the candidate disparity.
The SAD measure defined by the above equation gives an area-based correspondence algorithm [28-30].
Here $g_t$ is the reference point of the left image and $g_{t-1}$ is a point on the corresponding epipolar line of the right image; the algorithm computes the intensity difference for each pixel as

$e(x, y) = \left| g_t(x, y) - g_{t-1}(x, y) \right|$

i.e. a reference point of the left image is subtracted from the point of the right image lying on the same epipolar line.
The SAD then sums up these intensity differences over all pixels in the neighborhood surrounding each pixel of the left image. To calculate the stereo correspondence of the stereo images, the block matching technique is used.
Each block from the left image is matched to a block in the right image by shifting the left block over the search area of pixels in the right image. To find corresponding pairs of stereo points, they have to be compared for different disparities, after which the best matching pair can be determined.
The maximum range at which stereo vision can be used for detecting an obstacle depends on the image and depth resolution. Absolute differences of pixel intensities are used in the algorithm to compute the stereo similarity between points. By computing the SAD for pixels in a window surrounding the points, the difference between similarity values for stereo points can be calculated. The disparity associated with the smallest SAD value is selected as the best match [31].
A block matching algorithm, in particular a feature-based approach, has been used to match the two images [32, 33].
It compares a reference block to the next or previous block of pixels. If two video streams from two different cameras are grabbed from slightly different angles, a disparity matrix can be generated with a simple block matching algorithm.
From this matrix, the depth can be deduced with a correct calibration and transmitted to the embedded system.
The block matching algorithm compares two 3-by-3 blocks of memory by accumulating either a sum of absolute differences (SAD) between corresponding pixels or a sum of squared differences (SSD). The SAD algorithm has been used in this paper because it is quicker to compute than a naive implementation of a correlation-based technique. The final summation is an accurate measure of how well two blocks of video match.
In the SAD method, corresponding pixels from the two pictures are subtracted pairwise and the total difference of their grey values is summed up.
As an example, a reference block is compared, pixel by pixel, with the current block at a starting point. On the target picture, three possibilities arise: left, right and center. The difference between the reference and the target is computed for each of these three cases.
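For concreteness, a minimal, unoptimised NumPy sketch of this SAD block matching criterion is given below. The 3×3 window and the disparity search range are illustrative values only, not the parameters used in the NI Vision module.

import numpy as np

def sad_disparity(left, right, block=3, max_disp=32):
    # Brute-force SAD block matching on rectified grayscale images:
    # for every block of the left image, shift over candidate disparities
    # in the right image and keep the shift with the smallest SAD.
    h, w = left.shape
    half = block // 2
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    disparity = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_sad = 0, None
            for d in range(max_disp):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                sad = np.abs(ref - cand).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disparity[y, x] = best_d
    return disparity

The candidate with the smallest SAD is kept, exactly as described above; in practice the NI Vision module performs this search far more efficiently than this nested-loop sketch.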
Algorithm on LabView
An algorithm to compute the depth image with the block matching algorithm was developed in LabView. The NI Vision module along with the ROS toolkit developed by Tufts University [34] were used to develop the algorithm. The video streams were received over HTTP in the form of MJPEG images compressed by ROS.
Virtual web cams were simulated on the laptop with the VCAM software. The VCAM software acts as a wireless interface between LabView and the HTTP stream. All the inputs were fed in with the corresponding HTTP address for each camera. The VCAM software simulates the cameras just like USB cameras, which can be accessed by NI MAX; therefore, the NI Vision module can be accessed wirelessly.
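For reference, the same mjpeg_server streams can also be opened directly with OpenCV instead of going through the VCAM virtual webcams. The host name, port and topic names in this sketch are assumptions following the usual mjpeg_server URL scheme, not values taken from the actual setup.

import cv2

# Open the two MJPEG HTTP streams served by mjpeg_server on the Raspberry Pi.
left_cap = cv2.VideoCapture('http://raspberrypi.local:8080/stream?topic=/left/image_raw')
right_cap = cv2.VideoCapture('http://raspberrypi.local:8080/stream?topic=/right/image_raw')

ok_left, frame_left = left_cap.read()
ok_right, frame_right = right_cap.read()
if ok_left and ok_right:
    # Convert to 8-bit grayscale, as done before the depth computation.
    gray_left = cv2.cvtColor(frame_left, cv2.COLOR_BGR2GRAY)
    gray_right = cv2.cvtColor(frame_right, cv2.COLOR_BGR2GRAY)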
The next step is to develop the block matching algorithm in the NI Vision module. The algorithm was divided into two parts:
to calibrate the camera, and
to compute the disparity and deduce the depth from the block matching algorithm.
The calibration is necessary to interface the cameras with LabView. Parameters like the distance between the two cameras, their orientation and their angle of view are added and adjusted in the calibration process. The calibration was handled by the virtual instrument (VI) developed by Mark Szabó [35]. The calibration data file is obtained after the completion of the calibration.
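The calibration itself is done in LabView with the VI from [35]. Purely as an illustration of how such calibration data is typically used, a rough OpenCV sketch of preparing the rectification maps for the stereo pair is given below; the intrinsics, distortion coefficients and baseline are placeholder values, not the ones obtained in the actual calibration.

import cv2
import numpy as np

# Placeholder calibration data (illustrative only; the real values come
# from the calibration data file produced by the LabView VI).
K1 = K2 = np.array([[700.0, 0.0, 320.0],
                    [0.0, 700.0, 240.0],
                    [0.0, 0.0, 1.0]])
D1 = D2 = np.zeros(5)
R = np.eye(3)                       # relative rotation between the cameras
T = np.array([-0.06, 0.0, 0.0])     # ~6 cm horizontal baseline (assumed)

# Compute the rectification transforms and remapping tables for 640x480 images.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, (640, 480), R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (640, 480), cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (640, 480), cv2.CV_32FC1)
# Each incoming frame would then be rectified with:
# rectified = cv2.remap(raw_image, map1x, map1y, cv2.INTER_LINEAR)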
The entire algorithm is divided into four steps:
In the Express VI, the cameras were run at 640×480 pixels and 20 frames per second. The 32-bit color images are converted to 8-bit gray scale images to get the depth image from the stereo module. A binocular vision session is run in parallel; this binocular session reads the calibration data. The output is the two video streams and the calibrated binocular session. As shown in fig. 1, a step structure inside a while loop has been used.
The block matching algorithm in the NI Vision Module uses the IMAQ Stereo Correspondence VI. This module needs four inputs and gives two outputs. The calibrated binocular stereo session, the left image in and the right image in are the inputs, and the disparity image out is obtained as the output. This module also takes in the pre-filter options, post-filter options and correspondence options. The depth image is then computed from the disparity using the IMAQ Get Depth Image From Stereo VI module.
Fig. 1 – LabView: Algorithm to Calculate the Depth
The IMAQ Get Depth Image From Stereo VI module takes three inputs and gives three outputs. The inputs are the calibrated binocular stereo session, the disparity image and the memory module used to store the computed depth image. The binocular stereo session can be stopped whenever required, and the depth image out can be recovered and converted into an array. Therefore, the computed depth can be accessed in tabular form.
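The conversion from disparity to depth is, in principle, the usual pinhole stereo relation Z = f·B/d. A short NumPy sketch of that relation is given below; the focal length (in pixels) and the baseline are placeholder values, not the calibrated ones.

import numpy as np

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.06):
    # Z = f * B / d; zero disparity means "no match", so it is masked out.
    depth_m = np.full(disparity_px.shape, np.inf)
    valid = disparity_px > 0
    depth_m[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth_m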
The last step, shown in figs. 2 and 3, builds a region of interest by computing the mean of four depth values, which can be configured according to the requirement. The final step is the communication between LabView and ROS. At the end of each iteration, LabView sends the depth data by publishing a float32 message with the LabView node into the chatter topic.
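In rospy terms, the LabView side of this exchange is equivalent to a small publisher node like the sketch below. The node name and the region-of-interest helper are illustrative placeholders, while the “chatter” topic and the float32 message type follow the text.

#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32

def compute_roi_mean_depth():
    # Placeholder: in the actual system this value is the mean of the four
    # depth values of the region of interest computed in LabView.
    return 1.0

rospy.init_node('labview_depth_publisher')   # illustrative node name
pub = rospy.Publisher('chatter', Float32, queue_size=1)
rate = rospy.Rate(20)                        # cameras run at 20 fps
while not rospy.is_shutdown():
    pub.publish(Float32(data=compute_roi_mean_depth()))
    rate.sleep()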
3. SYSTEM DESIGN
Two platforms were used to implement the algorithm: LabView on the laptop and the Robot Operating System (ROS) [36] on a Raspberry Pi 3 [37]. ROS links the cameras to the hardware platform as well as to the NI Vision module of LabView on the laptop. The LabView student version has been used for the implementation of the project, as it provides higher frame rates compared to OpenCV and the Matlab Simulink tool (10 fps compared to 2 fps) [38], allows the interfacing of sensors and has the NI Vision Module for image processing. Two Microsoft LifeCam Studio cameras with a resolution of 640×480 pixels at 20 fps were used [39]. They are connected to the Raspberry Pi 3 using USB cables.
An image of Ubuntu MATE and ROS Kinetic was installed on the Raspberry Pi 3. Ubuntu MATE is a light version of Ubuntu for ARM processors such as the one of the Raspberry Pi 3.
The ROS Kinetic version was chosen instead of the latest version because of its large community of developers on the Internet.
ROS is a collection of libraries that makes it possible to read data from the sensors and to write data to the computer/actuators/mechatronic system [40].
The cameras are connected to the Raspberry Pi 3 and the system is mounted on the UAV, as shown in fig. 4.
Communication between the Raspberry Pi 3 and the laptop is done via Wi-Fi. The chart in fig. 5 summarizes the system architecture.
Fig. 2 – Computing part
Fig. 3 – Communication between LabView and ROS
ROS deploys a network built around the notions of node, topic, master and message. Nodes write messages to topics they publish and read messages from topics they subscribe to. A node can be a sensor (the cameras in this case), a laptop, or a C/C++, Python or Java program. In the present case, wireless cameras together with LabView have been used.
This makes it possible to recover data from the cameras and transmit the computed data to a flight control board with just a Raspberry Pi 3. The ROS operating scheme for the entire system is shown in fig. 6. The following ROS packages were used for the cameras wired by USB to the Raspberry Pi 3:
uvc_camera: this package allows camera grabbing on the Raspberry Pi.
mjpeg_server: this package allows the camera streams to be served over HTTP.
Fig. 4 – Assembling on UAV
There are four primary nodes, all registered with the ROS master.
Camera Node: this node acquires the camera information and publishes it.
Web server Node: this node takes the camera information published by the Camera Node and publishes it on the network as an HTTP stream.
Image processing Node: this node takes the HTTP stream and computes the depth with the LabView software.
Flight controller Node: this node takes the depth from the laptop node and acts on it, as sketched below.
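A minimal rospy sketch of such a flight-controller node is given below, assuming the depth arrives as a Float32 on the “chatter” topic described earlier; the node name is illustrative, and the reaction itself (switching to off-board mode and backing away) is sketched in Section 4.

#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32

OBSTACLE_THRESHOLD_M = 0.40       # 40 cm threshold from the text

def depth_callback(msg):
    if msg.data <= OBSTACLE_THRESHOLD_M:
        # An obstacle is too close: trigger the avoidance reaction
        # (off-board backward motion, see Section 4).
        rospy.logwarn('Obstacle at %.2f m - turning back', msg.data)

rospy.init_node('flight_controller_node')    # illustrative node name
rospy.Subscriber('chatter', Float32, depth_callback)
rospy.spin()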
Fig. 5 – System architecture
Fig. 6 – ROS operation scheme for the system
4. SIMULATION AND HARDWARE IMPLEMENTATION
An X-configuration quadcopter was used to implement the project. For the ground control station, the QGroundControl (QGC) software integrated with Gazebo has been used [41]. The QGC software helps to make a backup of the parameters of the simulation and to import them into the real system, so as to approximate the conditions of the simulation. The simulation was done to test the functioning of the algorithm. Both hardware in the loop (HITL) and software in the loop (SITL) can be performed in the proposed simulation: SITL is used to check the algorithm’s efficiency in software and HITL is used to check its efficiency on the hardware. To perform the SITL, a similar quadcopter model was selected and the “Commander Takeoff” command was issued for the drone to take off. The algorithm was then launched. It was observed that the drone makes a backward movement (as shown in fig. 7) when an obstacle was simulated towards the drone using “Rostopic publish”. The algorithm included a command stating that if the depth is equal to or less than 40 cm the drone should move backward; this was initiated by publishing on “depth_topic”. The simulation was useful to identify the values to be filled into the topic “mavros/actuator_control” to carry out the backward movement of the drone. The algorithm proves useful in GPS-denied environments, i.e. mostly indoors. As discussed earlier, using the QGC software the backup parameters were taken from the simulation and reused in the real hardware system, so that the exact simulation conditions could be recreated in the hardware.
Flight Test
The success of the flight test depends on the implementation of the algorithm, which collects the depth data from the ROS node. According to the depth values, the UAV turns back if an obstacle is closer than approximately 40 cm; to do so, the UAV switches to off-board mode.
The off-board mode allows the computer to have control over the UAV. To test this, a makeshift test stand with elastic bands was arranged for safety, as shown in fig. 8.
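A hedged sketch of this off-board reaction using mavros is shown below: the mode is switched through the mavros set_mode service and a backward velocity setpoint is streamed. The paper drives the reaction through “mavros/actuator_control”; velocity setpoints are used here only as a simpler, commonly used illustration, and the speed and duration are placeholder values.

#!/usr/bin/env python
import rospy
from mavros_msgs.srv import SetMode
from geometry_msgs.msg import TwistStamped

rospy.init_node('offboard_backoff')          # illustrative node name
vel_pub = rospy.Publisher('/mavros/setpoint_velocity/cmd_vel',
                          TwistStamped, queue_size=1)

# PX4 expects setpoints to be streaming before OFFBOARD is accepted.
rospy.wait_for_service('/mavros/set_mode')
set_mode = rospy.ServiceProxy('/mavros/set_mode', SetMode)

rate = rospy.Rate(20)
cmd = TwistStamped()
cmd.twist.linear.x = -0.5                    # back away at 0.5 m/s (illustrative)
for i in range(40):                          # roughly two seconds of motion
    cmd.header.stamp = rospy.Time.now()
    vel_pub.publish(cmd)
    if i == 5:                               # switch once a few setpoints are out
        set_mode(base_mode=0, custom_mode='OFFBOARD')
    rate.sleep()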
The tests were not very positive in the first attempt because the time between computing the depth and the actual turn-back action was too long. Despite that, the UAV was able to detect an object and turn back; the recording has been uploaded to the YouTube platform [42] (Fig. 9).
5. DISCUSSIONS
The main problems are the latency due to the Wi-Fi signal and the delay due to the depth computation. To solve this, it would be good to use a more powerful Wi-Fi transmitter and, to speed up the depth computation, a more powerful processor. To improve this project, regions of interest can be used to avoid an obstacle on the left, the right, the bottom or the top. For example, if the UAV detects an obstacle on the right of the image, i.e. in the right region of interest, the UAV could go to the left to avoid this obstacle, and likewise for the other directions.
The video transmission speed from the Raspberry Pi to the computer is not satisfactory; the Wi-Fi chips of the Raspberry Pi 3 and of the computer card are not powerful enough. However, this problem can easily be corrected by using Wi-Fi USB adapters. The speed of image processing also turned out to be very low due to the hardware limitations of the computer. As a result, the number of frames per second is low, which considerably limits the speed of the drone.
Fig. 7 – Simulation of the algorithm using Drone Iris+ from 3DR
Fig. 8 – Test bed setup
Fig. 9 – Ongoing Test
REFERENCES
[1] F. G. Costa, J. Ueyama, T. Braun, G. Pessin, F. S. Osório, and P. A. Vargas, The use of unmanned aerial vehicles and wireless sensor network in agricultural applications, in Geoscience and Remote Sensing Symposium (IGARSS), 2012 IEEE International, pp. 5045-5048, 2012.
[2] V. Devalla, A. K. Mondal, A. A. J. Prakash, M. Prateek, and O. Prakash, Guidance, navigation and control of a powered parafoil aerial vehicle, Current Science, vol. 111, p. 1045, 2016.
[3] V. Devalla, A. K. Mondal, and O. Prakash, Performance analysis of a powered parafoil unmanned aerial vehicle using open loop flight test results and analytical results, in Research, Education and Development of Unmanned Aerial Systems (RED-UAS), 2015 Workshop on, pp. 369-376, 2015.
[4] V. Devalla and P. Om, Longitudinal and Directional Control Modeling for a Small Powered Parafoil Aerial Vehicle, in AIAA Atmospheric Flight Mechanics Conference, p. 3544, 2016.
[5] V. Devalla, O. Prakash, and A. K. Mondal, Angle of Attack, Pitch Angle and Glide Angle Modeling at Various Thrust Inputs for a Powered Parachute Aerial Vehicle, in Book of Abstracts, p. 35, 2014.
[6] A. Ollero and L. Merino, Unmanned aerial vehicles as tools for forest-fire fighting, Forest Ecology and Management, vol. 234, p. S263, 2006.
[7] Z. Sarris and S. Atlas, Survey of UAV applications in civil markets, in IEEE Mediterranean Conference on Control and Automation, p. 11, 2001.
[8] V. Devalla and O. Prakash, Developments in unmanned powered parachute aerial vehicle: A review, IEEE Aerospace and Electronic Systems Magazine, vol. 29, pp. 6-20, 2014.
[9] A. Nemra and N. Aouf, Robust INS/GPS sensor fusion for UAV localization using SDRE nonlinear filtering, IEEE Sensors Journal, vol. 10, pp. 789-798, 2010.
[10] H. Eisenbeiss, A mini unmanned aerial vehicle (UAV): system overview and image acquisition, International Archives of Photogrammetry. Remote Sensing and Spatial Information Sciences, vol. 36, pp. 1-7, 2004.
[11] N. Abdelkrim, N. Aouf, A. Tsourdos, and B. White, Robust nonlinear filtering for INS/GPS UAV localization, in Control and Automation, 2008 16th Mediterranean Conference on, pp. 695-702, 2008.
[12] A. Kundu, K. M. Krishna, and C. Jawahar, Realtime multibody visual SLAM with a smoothly moving monocular camera, in Computer Vision (ICCV), 2011 IEEE International Conference on, pp. 2080-2087, 2011.
[13] R. K. Namdev, A. Kundu, K. M. Krishna, and C. Jawahar, Motion segmentation of multiple objects from a freely moving monocular camera, in Robotics and Automation (ICRA), 2012 IEEE International Conference on, pp. 4092-4099, 2012.
[14] J. Chetan, K. M. Krishna, and C. Jawahar, An adaptive outdoor terrain classification methodology using monocular camera, in Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, pp. 766-771, 2010.
[15] N. D. Reddy, I. Abbasnejad, S. Reddy, A. K. Mondal, and V. Devalla, Incremental real-time multibody VSLAM with trajectory optimization using stereo camera, in Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on, pp. 4505-4510, 2016.
[16] J. Liu, P. Jayakumar, J. L. Overholt, J. L. Stein, and T. Ersal, The role of model fidelity in model predictive control based hazard avoidance in unmanned ground vehicles using lidar sensors, Ann Arbor, vol. 1001, p. 48109, 2013.
[17] C. Rasmussen, Combining laser range, color, and texture cues for autonomous road following, in Robotics and Automation, 2002. Proceedings. ICRA'02. IEEE International Conference on, pp. 4320-4325, 2002.
[18] B. M. Yamauchi, PackBot: A versatile platform for military robotics, in Unmanned Ground Vehicle Technology VI, pp. 228-238, 2004.
[19] M. Hebert, Active and passive range sensing for robotics, in Robotics and Automation, 2000. Proceedings. ICRA'00. IEEE International Conference on, pp. 102-110, 2000.
[20] M. H. Hebert, C. E. Thorpe, and A. Stentz, Intelligent unmanned ground vehicles: autonomous navigation research at Carnegie Mellon, vol. 388: Springer Science & Business Media, 2012.
[21] H. G. Musmann, P. Pirsch, and H.-J. Grallert, Advances in picture coding, Proceedings of the IEEE, vol. 73, pp. 523-548, 1985.
[22] A. Puri, H.-M. Hang, and D. Schilling, An efficient block-matching algorithm for motion-compensated coding, in Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP'87, pp. 1063-1066, 1987.
[23] H. Schwarz, D. Marpe, and T. Wiegand, Overview of the scalable video coding extension of the H.264/AVC standard, IEEE Transactions on circuits and systems for video technology, vol. 17, pp. 1103-1120, 2007.
[24] T. Wiegand, G. J. Sullivan, G. Bjontegaard, and A. Luthra, Overview of the H.264/AVC video coding standard, IEEE Transactions on circuits and systems for video technology, vol. 13, pp. 560-576, 2003.
[25] D. Le Gall, MPEG: A video compression standard for multimedia applications, Communications of the ACM, vol. 34, pp. 46-58, 1991.
[26] M. Brunig and W. Niehsen, Fast full-search block matching, IEEE Transactions on circuits and systems for video technology, vol. 11, pp. 241-247, 2001.
[27] A. Nieder, Stereoscopic vision: Solving the correspondence problem, Current Biology, vol. 13, pp. R394-R396, 2003.
[28] A. Fusiello, E. Trucco, and A. Verri, A compact algorithm for rectification of stereo pairs, Machine Vision and Applications, vol. 12, pp. 16-22, 2000.
[29] L. Di Stefano and S. Mattoccia, Fast Stereo Matching for the VIDET System using a General Purpose Processor with Multimedia Extensions, in Proceedings of the IEEE International Workshop on Computer Architectures for Machine Perception (CAMP), pp. 356-, 2000.
[30] A. Kuhl, Comparison of stereo matching algorithms for mobile robots, Centre for Intelligent Information Processing System, pp. 4-24, 2005.
[31] J. C. van den Heuvel, J. Kleijweg, W. van der Mark, M. Lievers, and L. Kester, Obstacle detection for people movers using vision and radar, 2003.
[32] A. Howard, Real-time stereo visual odometry for autonomous ground vehicles, in Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, pp. 3946-3952, 2008.
[33] Y. Jiang, Y. Xu, and Y. Liu, Performance evaluation of feature detection and matching in stereo visual odometry, Neurocomputing, vol. 120, pp. 380-390, 2013.
[34] * * * Tufts University, Mechanical Engineering Department and the Center for Engineering Education and Outreach, ROS for LabVIEW Software, GitHub, 2017.
[35] M. Szabó. (2016). Machine Vision – Image processing and depth image with two webcams, Available: https://1drv.ms/
[36] * * * O. S. R. Foundation. (2018, 1/1/2018). Robotic Operating System. Available: http://www.ros.org/
[37] * * * R. P. Foundation. (2017, 31/12/2017). Raspberry Pi 3 Model B, Available: https://www.raspberrypi.org/products/raspberry-pi-3-model-b/
[38] H. Kodam, Quad rotor Based Surveying and Tracking Tool to be applied in the Agricultural Industry, 2013.
[39] * * * Microsoft. (2017). LifeCam Studio, Available: https://www.microsoft.com/accessories/en-us/products/webcams/lifecam-studio/q2f-00013
[40] * * * O. S. R. Foundation. (2017, 1/1/2018). Why ROS, Available: http://www.ros.org/core-components/
[41] * * * Q. D. Control. (2017). QGroundControl, Available: http://qgroundcontrol.com/
[42] D. Nicolas. (2017). UAV obstacle avoidance ROS, Available: https://www.youtube.com/watch?v=IaUEVV9D40k