
MINISTRY OF NATIONAL EDUCATION
"OVIDIUS" UNIVERSITY OF CONSTANȚA
FACULTY OF MATHEMATICS AND COMPUTER SCIENCE
DEGREE PROGRAM: COMPUTER SCIENCE
BACHELOR’S THESIS
Implementing a virtual tour of a public building
using WebGL
Scientific Adviser
Professor Popovici Dorin Mircea
Nicoara Andrei Daniel
Constanța
2017

Abstract
The main goal of this paper is to offer a solution for creating and visualizing virtual tours of buildings or statues created by the user. The solution takes the form of a web application that allows the user to load, save, and view 3D models from a server and to create specific waypoints that the camera follows while looking at the model.
The user interacts with the application through the GUI elements it provides, such as buttons, keyboard shortcuts, dropdowns, and widgets. All changes applied to the model, such as changes to its position, rotation, or scale, are saved in a database alongside the waypoints added or modified by the user.

Contents

List of Figures
List of Tables

1 Introduction
  1.1 Objectives

2 State of the Art
  2.1 Virtual Tour
  2.2 Virtual space
  2.3 VRML
    2.3.1 Scene Graph Structure
    2.3.2 Event Architecture
    2.3.3 Sensors
    2.3.4 Scripts and Interpolators
  2.4 X3D
    2.4.1 Scene graph hierarchy
    2.4.2 Transformation hierarchy
    2.4.3 Behaviour graph
  2.5 WebGL
    2.5.1 Advantages
    2.5.2 Origins
    2.5.3 Structure of a WebGL Application
  2.6 Three.js
    2.6.1 Creating a 3D scene
    2.6.2 Rendering an object in our scene
    2.6.3 Animating in Three.js
    2.6.4 Loading a model in our scene

3 Proposed Solution
  3.1 Introduction
  3.2 General overview
  3.3 Client-Side application
  3.4 Back-end API
  3.5 Database

List of Figures

2.1 Panoramic tour vs 3D model
2.2 An example of a VRML world that uses Sensors and Interpolators
2.3 The architecture of an X3D application
2.4 X3D animation example with PositionInterpolator and TimeSensor
2.5 Relationship among OpenGL, OpenGL ES and WebGL
2.6 Structure of a WebGL application
2.7 A Three.js scene containing a cube mesh
2.8 Animation of a cube in Three.js
2.9 Three.js 3D model loading
3.1 Application structure and relations
3.2 Models after applying the rename method using multer
3.3 Renaming the textures
3.4 Structure of a record in MongoDB

List of Tables

2.1 Standard units
2.2 File types supported by Three.js for model loading

Chapter 1
Introduction
1.1 Objectives

Chapter 2
State of the Art
2.1 Virtual Tour
The purpose of a virtual tour is to simulate visiting a location from the comfort of a computer. A virtual tour can consist of photos or videos assembled into a panoramic view, or of a 3D model that reproduces the shape of a location. Virtual tours serve many fields, such as entertainment, advertising, and learning.
Before the era of computers, the standard ways of visualizing a building were 2D floor plans or 3D models sketched from cardboard, which made the whole operation inefficient in terms of portability. The arrival of computers brought the virtual tour its missing portability and interactivity.
At present, there are three types of virtual tours:
• Panoramas. These are made by stitching images together, that is, modifying the perspective of the images and blending them so that the photographs align seamlessly.
• Video-based tours. Video cameras are used to pan and walk through locations. This is more advanced than panoramas because the point of view changes constantly, but it gives the viewer no interaction.
• 3D content. A 3D model of the building is loaded and available to view and interact with, giving the user more interaction and immersion than the other two types of tours.
Figure 2.1: Panoramic tour vs 3D model
2.2 Virtual space
Virtual space, also called cyberspace, is in fact a representation of human experience in a particular space. The term cyberspace was first used by William Gibson in his book "Neuromancer", published in 1984, where he describes it as the virtual space in which the electronic data of the world's computers circulate.
There are multiple representations of virtual space. First of all, we have virtual reality, a 3D environment that people can 'enter' and 'move' through while interacting with both the computer and other human beings, as seen in movies like Tron or The Matrix. On the other hand, we have the representation as a world of computer networks linked via cables and routers that lets us communicate with each other and store and retrieve information. The best example is the Internet, at first used for email, file transfer, bulletin boards, and news, and now for much more thanks to the World Wide Web, which allows us to navigate through this network.
Whether we talk about real space or virtual space, information is the common value that allows us to move from one space to another, transformed from the moment of its creation so it can be used in either space. Space is, in fact, a collection of information used in an organized pattern. But can a virtual space exist in the absence of information or virtual objects? The answer is yes, because the possibility to store information is still present; the other way around, the answer is clearly no. Information is thus intimately tied to virtual space and can neither exist nor be transmitted in its absence.
If we compare the kinds of information used in virtual space and in physical space, we get four sub-concepts: place, distance, size, and route. The first drives us to ask "Where?" questions. Where is that cube located? Where should we move it? The scene where the cube is rendered represents a virtual place, and such ways of speaking reflect the importance of locating specific objects in virtual space. Distance has us asking "How far?" questions. How many method calls will it take to display the desired object? In this way we can estimate, for example, how long it might take a computer to render a scene on our screens. With size we ask "How big?" questions: we might wonder how extensive a scene is, meaning how much information it contains and how many objects it includes. Route involves "navigation" issues. If a player in a multiplayer game performs an action, the information describing that action follows a specified route, or set of connections, to reach the other players so they can see it on their screens.
In the following sections we present some technologies that help us create our own virtual worlds.
2.3 VRML
VRML, standing for Virtual Reality Modeling Language, with its first version launched in 1995, is neither virtual reality nor a modeling language. It is in fact a 3D interchange format which defines most of the commonly used semantics found in 3D applications, such as hierarchical transformations, light sources, viewpoints, geometry, animation, fog, material properties, and texture mapping. Another purpose of VRML is to be a 3D analog to HTML, serving as a simple multiplatform language for publishing 3D web pages. This is motivated by the fact that some information is best experienced three-dimensionally, such as games, engineering and scientific visualizations, etc.
Users can navigate through 3D space and click on objects representing URLs or other VRML scenes. VRML files may contain references to files in many other standard formats: JPEG, PNG, and GIF can be used as texture maps on objects, WAV and MIDI as sound and music emitted in the scene, and Java or JavaScript code to implement programmed behavior for the objects in the scene.
Below are some features that belong to VRML.
2.3.1 Scene Graph Structure
Hierarchical scene graphs are used in VRML to describe 3D objects and scenes, where every entity in the scene is called a node. VRML defines different types of nodes, such as geometry primitives, appearance properties, sounds and sound properties, and various types of grouping nodes. Nodes store their data in fields of different types, which can hold everything from a single number to complex data structures like an array of 3D transformations.
The scene graph in VRML is a directed acyclic graph, which allows nodes to contain other nodes in a parent-child relation. This is helpful for creating large worlds or complicated objects from subparts.
2.3.2 Event Architecture
Nodes use events as a mechanism to communicate with each other. Every node type defines the names and types of events that instances of that type may generate or receive, and ROUTE statements define event paths between the nodes that emit events and those that receive them.
2.3.3 Sensors
Events are triggered by sensors, which allow the application to detect what happens and when. There are multiple types of sensors useful for user interaction, generating events as the viewer moves through the scene or when the user interacts with some input device.
2.3.4 Scripts and Interpolators
Scripts sit between event generators (sensor nodes, for example) and event receivers. They allow the world creator to specify arbitrary behaviors for any object in the scene.
Interpolators are built-in scripts used for simple animation calculations. Combined with a sensor called TimeSensor, which generates events as time passes, they can make objects move or undergo any transformation over a period of time.
Below is an example of a VRML world where a ball moves on a plane when a click event occurs.
#VRML V2.0 utf8

Viewpoint { position 0 0 20 }

# Base
Transform {
  translation 0 -0.2 0
  children Shape {
    appearance Appearance {
      material Material {
        diffuseColor 1 0 0 } }
    geometry Box { size 22 0.4 2 }
  }
}

DEF ball_tr Transform {
  translation -10 1 0
  children [
    Shape {
      appearance Appearance {
        material Material { }
        texture ImageTexture { url "cone.jpg" } }
      geometry Sphere {}
    }

    DEF ball_sensor TouchSensor {} ] }

DEF timer TimeSensor {
  cycleInterval 8
  loop FALSE
}

DEF pi PositionInterpolator {
  key [ 0 1 ]
  keyValue [ -10 1 0, 10 1 0 ]
}

DEF oi OrientationInterpolator {
  key [ 0 0.157 0.314 0.471
        0.628 0.785 0.942 1 ]
  keyValue [ 0 0 1 0,
             0 0 1 -3.14,
             0 0 1 -6.28,
             0 0 1 -9.42,
             0 0 1 -12.56,
             0 0 1 -15.7,
             0 0 1 -18.84,
             0 0 1 -20.0 ]
}

DEF si ScalarInterpolator {
  key [ 0 0.5 1 ]
  keyValue [ 0 1 0 ]
}

ROUTE ball_sensor.touchTime TO timer.set_startTime
ROUTE timer.fraction_changed TO si.set_fraction
ROUTE si.value_changed TO pi.set_fraction
ROUTE pi.value_changed TO ball_tr.set_translation
ROUTE si.value_changed TO oi.set_fraction
ROUTE oi.value_changed TO ball_tr.set_rotation

Figure 2.2: An example of a VRML world that uses Sensors and Interpolators
2.4 X3D
X3D, built on VRML, is a scene-graph architecture and file-format encoding that improves on the VRML international standard. X3D expresses the geometry and behavior it inherited from VRML using XML (Extensible Markup Language), while also providing program scripting in JavaScript or Java.
To view X3D scenes we need a browser that is able to parse X3D code. Such browsers are often implemented as plugins for a regular web browser (such as Mozilla Firefox or Google Chrome), or delivered as standalone or embedded applications that present X3D scenes.
The architecture of an X3D application is defined independently of any physical devices or any other implementation-dependent concepts (touchscreen, mouse, etc.). Another aspect of every X3D application is that it contains graphics and/or aural objects that can be loaded from local storage or over the network. These objects can also be dynamically updated through a variety of mechanisms, based on the developer's preferences. Each X3D application has the following purposes:
• to establish a world coordinate virtual space for all the objects in the scene used by the application;
• to define and compose a set of 2D and 3D multimedia objects;
• to specify the use of hyperlinks to other files and applications;
• to create programmable behaviour for objects in the scene;
• to permit the use of external modules or applications via various scripts and programming languages.
2.4.1 Scene graph hierarchy
As in VRML, the X3D scene graph is a directed acyclic graph containing different types of nodes, which in turn contain specific fields with one or more child nodes participating in the hierarchy. Those fields may contain simple values or other nodes, making it easy to add different user viewing perspectives. Because of this structure, rendering and animating scenes with X3D is straightforward: starting at the root of the scene graph tree, the graph is traversed in a depth-first manner.
2.4.2 Transformation hierarchy
The transformation hierarchy contains all the root nodes and their children that are considered to have at least one particular location in the virtual world. Those root nodes have their coordinate systems defined relative to the world coordinate system. Some nodes are exceptions, unaffected by the transformation hierarchy, such as the PositionInterpolator, which linearly interpolates an array of values, the TimeSensor, which generates events as time passes, and the WorldInfo node, which contains information about the world.

Figure 2.3: The architecture of an X3D application
2.4.3 Behaviour graph
The behaviour graph is the collection of connections between fields, together with a model for the propagation of events declared in the scene. This graph can be changed dynamically by adding and breaking route connections. Events are inserted into the system and propagate through the behaviour graph in a well-defined order.
Below is a table containing the standard units used in X3D applications:
Table 2.1: Standard units

Category          Unit
Linear distance   Metres
Angles            Radians
Time              Seconds
Colour space      RGB ([0.,1.], [0.,1.], [0.,1.])
The following X3D application renders and animates a cube that moves using the PositionInterpolator and TimeSensor.
<Scene DEF='scene'>
  <Group>
    <Transform DEF='Cube'>
      <Shape>
        <Appearance>
          <Material/>
        </Appearance>
        <Box size='1 1 1'/>
      </Shape>
    </Transform>
    <TimeSensor DEF='Clock' cycleInterval='4' loop='true'/>
    <PositionInterpolator DEF='CubePath'
      key='0 0.11 0.17 0.22
           0.33 0.44 0.5 0.55 0.66
           0.77 0.83 0.88 0.99'
      keyValue='0 0 0, 1 1.96 1, 1.5 2.21 1.5,
                2 1.96 2, 3 0 3, 2 1.96 3,
                1.5 2.21 3, 1 1.96 3, 0 0 3,
                0 1.96 2, 0 2.21 1.5, 0 1.96 1, 0 0 0'/>
  </Group>
  <ROUTE fromNode='Clock' fromField='fraction_changed'
         toNode='CubePath' toField='set_fraction'/>
  <ROUTE fromNode='CubePath' fromField='value_changed'
         toNode='Cube' toField='set_translation'/>
</Scene>

Figure 2.4: X3D animation example with PositionInterpolator and TimeSensor
2.5 WebGL
WebGL is a JavaScript API that enables web content to render 3D graphics in an HTML canvas inside the web browser. It is based on OpenGL ES 2.0, the API for 3D rendering on smart phones such as those running the iPhone and Android platforms. Traditionally, to create convincing 3D graphics we had to build stand-alone applications using programming languages such as C or C++ along with dedicated computer graphics APIs such as OpenGL and Direct3D. Now, with WebGL, we can make complex 3D graphics part of a standard web page using only HTML and JavaScript.
Another aspect of WebGL is that it is supported as the browser's default built-in technology for rendering 3D graphics, meaning that we don't need to install any plugins or libraries to use it. Because of that, we can run WebGL applications on various platforms, from PCs to tablets or smart phones.
2.5.1 Advantages
As HTML has evolved, more and more sophisticated web applications began to appear. In the past, HTML could support only static content, but when scripting support such as JavaScript was added, more complex interactions and dynamic content became possible. HTML5 introduced further sophistication with support for 2D graphics via the canvas tag, allowing a variety of graphical elements on a web page. The next step was the arrival of WebGL, which enables displaying and manipulating 3D graphics on web pages using JavaScript. With WebGL, it is possible to create rich user interfaces and 3D games, or even to use 3D graphics to visualize and manipulate information from the Internet. Below are some advantages of WebGL:
• We only need a text editor for writing the code and a browser to view the 3D graphics applications.
• It's easy to share 3D graphics applications with others.
• We can leverage the full functionality of the browser.
• There is a lot of material and support for studying and developing WebGL applications.
2.5.2 Origins
The two most widely used technologies for displaying 3D graphics on personal computers are OpenGL and Direct3D. Direct3D is a proprietary application programming interface developed by Microsoft as part of the DirectX technologies. The other, OpenGL, is a free and open API that supports various platforms and operating systems, such as Linux, Windows, and Macintosh, and a variety of devices like smart phones, tablets, and game consoles.
OpenGL, developed by Silicon Graphics and published as an open standard in 1992, has evolved through several versions and has had a profound effect on 3D graphics, software products, and even film productions over the years. Although OpenGL is the root of all other Silicon Graphics 3D libraries, WebGL is actually derived from a version of OpenGL that targets embedded computers such as smart phones and video game consoles: OpenGL ES, whose first version appeared in 2004, followed by the second version, on which WebGL is based, in 2007, and the third in 2012. Figure 2.5 shows the relationship between OpenGL and the other two APIs.

Figure 2.5: Relationship among OpenGL, OpenGL ES and WebGL

As we see in this figure, OpenGL 2.0 introduced a new graphics capability: programmable shader functions. Shader functions, or shaders, are computer programs that allow the developer to create sophisticated visual effects using a special programming language similar to C called GLSL (OpenGL Shading Language).
2.5.3 Structure of a WebGL Application
By default, dynamic web pages are created using a combination of HTML and JavaScript, but with WebGL the shading language GLSL ES also needs to be added, meaning that web pages with WebGL are created using three languages: HTML5, JavaScript, and GLSL ES. Figure 2.6 shows the architecture of a regular web page on the left and of a web page with WebGL on the right.
Figure 2.6: Structure of a WebGL application
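
To make this three-language structure concrete, the sketch below shows a minimal, purely illustrative WebGL page fragment: JavaScript obtains a rendering context from an HTML canvas element (the 'glcanvas' id is an assumption) and compiles GLSL ES shader source embedded as strings; error handling is omitted for brevity.

// JavaScript: obtain a WebGL context from an HTML <canvas id="glcanvas">
var canvas = document.getElementById('glcanvas');
var gl = canvas.getContext('webgl');

// GLSL ES: vertex and fragment shader sources embedded as strings
var vertexSrc =
    'attribute vec4 a_Position;' +
    'void main() { gl_Position = a_Position; }';
var fragmentSrc =
    'precision mediump float;' +
    'void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }'; // opaque red

// compile one shader of the given type from its source string
function compile(type, source) {
    var shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    return shader;
}

// link both shaders into a program and make it active
var program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSrc));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSrc));
gl.linkProgram(program);
gl.useProgram(program);

// clear the canvas to opaque black
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);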
2.6 Three.js
Three.js is a JavaScript library that makes WebGL easy to use by providing an extensive API with a large set of functions. It is useful for creating 3D models, textures, and scenes directly on a web page without too much difficulty and with less code than using raw WebGL. Below are some capabilities that are easy to achieve with Three.js:
• creating simple and complex 3D geometries;
• animating and moving objects through a 3D scene;
• applying textures and materials to objects in the scene;
• using different types of light sources to illuminate the scene;
• loading models exported from 3D-modeling software;
• advanced post-processing effects such as particle effects, bloom, motion blur, etc.;
• working with custom shaders;
• creating point clouds.
2.6.1 Creating a 3D scene
To use Three.js in our web applications we need to include the JavaScript library in the
HTML file using the script tag:
<html lang="en">
  <head>
    <title>Model viewer</title>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
  </head>
  <body>
    <script src="lib/three.min.js"></script>
    …
  </body>
</html>
After including the library, we can create the scene that will contain all our objects and animations. We start by declaring a THREE.Scene() JavaScript variable and add some elements to it, such as a camera, a light, and input controls.
// we create the scene containing all the graphic objects
scene = new THREE.Scene();

// we initialize a perspective camera with the following parameters:
// field of view, aspect ratio, near, far
camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 1, 10000);
camera.position.z = 6;

// we add the camera to our scene
scene.add(camera);

// initializing orbit controls that will allow us to navigate
// through the scene with the mouse
controls = new THREE.OrbitControls(camera);
controls.rotateSpeed = 5.0;
controls.zoomSpeed = 5;

// declaring a white directional light and changing its position
var dirLight = new THREE.DirectionalLight(0xffffff);
dirLight.position.set(200, 200, 1000).normalize();

// we add the light to our scene
scene.add(dirLight);
2.6.2 Rendering an object in our scene
So far we have a basic empty scene with no 3D objects, containing only a camera and a light source. To render any object in our scene we need to declare a renderer object. The renderer is responsible for calculating what the scene will look like in the browser based on the camera's angle. We declare it with THREE.WebGLRenderer(); this object will also use the graphics card to display the scene in our browser.
var renderer = new THREE.WebGLRenderer();

// we set the background color of our scene to almost white
renderer.setClearColor(new THREE.Color(0xEEEEEE));

// changing the size of our scene to match the browser's window size
renderer.setSize(window.innerWidth, window.innerHeight);

// we add the renderer's canvas to the web page
var container = document.createElement('div');
document.body.appendChild(container);
container.appendChild(renderer.domElement);

// we tell the renderer to display our scene through the camera
renderer.render(scene, camera);
After declaring and using the renderer variable, we can add the first 3D object to our scene. In Three.js, an object, also named a mesh, is made up of two elements: a geometry and a material. The geometry defines the shape of a mesh, from primitives such as a box or plane to complex forms like torus knots and splines. The material represents the appearance of a mesh and how it is affected by lights and shadows. For example, a MeshBasicMaterial is not affected by any shading effect, whereas a MeshPhongMaterial calculates shading per pixel and adds reflectance capabilities.
// we declare a box geometry to be used later by the mesh
var cubeGeometry = new THREE.BoxGeometry(4, 4, 4);

// we create a blue phong material so our box can be affected by
// the directional light
var cubeMaterial = new THREE.MeshPhongMaterial({
    color: 0x0000ff,
    wireframe: false
});

// we give our mesh the geometry and material previously created
cube = new THREE.Mesh(cubeGeometry, cubeMaterial);

// rotates the cube by 30 degrees on the Y axis and 20 degrees on the X axis
cube.rotation.y = (30 * Math.PI) / 180;
cube.rotation.x = (20 * Math.PI) / 180;

// adding the cube to our scene
scene.add(cube);

Figure 2.7: A Three.js scene containing a cube mesh
2.6.3 Animating in Three.js
To animate our cube, we need a way to re-render the entire scene at a regular interval. The requestAnimationFrame() function is the perfect solution for animating things in JavaScript: it executes the animation function only when the browser or tab has focus, thus saving GPU resources.
function animate() {

    // we tell our browser to execute this function every frame
    requestAnimationFrame(animate);

    // rotate the cube by 1 degree on the Z axis every frame
    cube.rotation.z += (1 * Math.PI) / 180;

    // update the scene by rendering it every time this function is called
    renderer.render(scene, camera);
}

// start the animation loop
animate();

Figure 2.8: Animation of a cube in Three.js
2.6.4 Loading a model in our scene
Three.js supports a large variety of 3D model file types, with their own textures and animations. Each file type has a corresponding loader object in JavaScript; there is no universal loader object for all of these types. The following table shows the file types supported by Three.js:
Table 2.2: File types supported by Three.js for model loading

Type          Description
JSON          Scenes or geometry can be defined in this file type, which is the
              default Three.js format. It is very easy to use, and a lot of 3D
              modeling software supports exporting to it, either by default or
              through third-party plugins.
OBJ or MTL    OBJ is one of the most used 3D file formats and defines the
              geometry of a model. MTL, accompanying the OBJ file, is a file
              format that stores the materials of a model.
Collada       An XML-like format that is also widely used and supported by
              many 3D applications and rendering engines.
FBX           A proprietary format owned by Autodesk, widely used in game
              engines such as Unity, Unreal Engine, etc.
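
For instance, loading an OBJ model together with its MTL materials uses the dedicated loaders. The sketch below assumes the OBJLoader and MTLLoader scripts are included alongside three.min.js; the file paths are invented for illustration.

var mtlLoader = new THREE.MTLLoader();
mtlLoader.load('models/house.mtl', function (materials) {
    materials.preload();

    // hand the loaded materials to the OBJ loader
    var objLoader = new THREE.OBJLoader();
    objLoader.setMaterials(materials);
    objLoader.load('models/house.obj', function (object) {
        // add the model only after loading completes
        scene.add(object);
    });
});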
The following lines load a 3D model of a human and add it to the scene:
var loader = new THREE.JSONLoader( true );

// first parameter is the model's path and the second one is a
// callback function to be called after the model is loaded
loader.load('json/humanoid.json', function (geometry, materials) {

    mesh = new THREE.Mesh(geometry, materials);

    // we change the mesh position and scale it up 5 times
    mesh.position.y = -30;
    mesh.scale.multiplyScalar(5);

    // the mesh is added to our scene only after loading has finished
    scene.add(mesh);
});
The following figure shows the scene with a JSON model of a human:
Figure 2.9: Three.js 3D model loading

Chapter 3
Proposed Solution
3.1 Introduction
As Chapter 2 showed, virtual worlds and tours can be built with different technologies and libraries. In this project, the solution for creating virtual tours of buildings, or of any kind of 3D model, is a web application that supports uploading and viewing these models through a user-friendly interface.
For 3D rendering, the Three.js library was chosen; combined with Angular 2 as the front-end framework, it forms the client part of this application. The back-end is created with Expressjs, a web framework that helps manage routes and requests for the REST API, and Node.js as the run-time environment for the server. The database system is MongoDB, a light and easy-to-use NoSQL database.
The following sections present an overview of the application, the front-end and back-end components, the database, and, in the last section, the technological aspects.
3.2 General overview
Figure 3.1 shows how the application communicates with its components. From the client side we can upload a 3D model made in 3D modeling software, or select an already uploaded model from the list. If we choose to upload a model, a form containing details about the model must be completed in order to better classify it. When the form is completed, all its information is sent to the server through REST API calls. The model and its texture images are saved on the server, while all the other information is sent to the database. After this, the new model appears in the client-side list of models, where we can select it for viewing.
After the model is loaded, a Three.js scene is created where we can view it. We can also change its properties, such as position, rotation, and scale, using either GUI elements or keyboard shortcuts. Waypoints can be added for the camera to follow, thus creating virtual tours of the model. In the end, we can choose to save all the changes so the server can update its information about this model.

Figure 3.1: Application structure and relations
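
As a sketch of the client side of this exchange, the upload request might be built as below. The field names ('name', 'description', 'jsonmodel', 'textures') mirror the Express route shown in Section 3.4, while the '/models/add' URL, the example values, and the variable names are assumptions for illustration.

// gather the form data; jsonFile and textureFiles are assumed to come
// from <input type="file"> elements in the upload form
var form = new FormData();
form.append('name', 'Town Hall');
form.append('description', 'Main public building');
form.append('jsonmodel', jsonFile);
for (var i = 0; i < textureFiles.length; i++) {
    form.append('textures', textureFiles[i]);
}

// send everything to the back-end in a single POST request
var xhr = new XMLHttpRequest();
xhr.open('POST', '/models/add');
xhr.onload = function () {
    var response = JSON.parse(xhr.responseText);
    console.log(response.msg); // "model added" on success
};
xhr.send(form);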
3.3 Client-Side application
3.4 Back-end API
The back-end contains services for working with the 3D models and their properties. Those services are written using Expressjs and are available thanks to Node.js.
Node.js is a server-side platform, built on Google Chrome's JavaScript engine (the V8 engine), that allows JavaScript code to run on the server. It uses an event-driven, non-blocking I/O model, meaning that all connections and requests are handled through an event loop using only a single thread. Microsoft Windows, Linux, and Mac OS X are the operating systems supporting the Node.js runtime.
The list below presents some important features of Node.js:
• Asynchronous and Event Driven. Node.js and its library of APIs are asynchronous and non-blocking, so the Node.js server never waits for an API to return data. The server moves to the next API after calling it, and a notification mechanism of events makes sure Node.js receives the response from the previous API (a minimal illustration follows the list).
• Very Fast. Because it is based on Google Chrome's V8 JavaScript engine, Node.js code executes very quickly.
• Single Threaded. As mentioned before, Node.js uses a single thread for all requests and connections, because all of them are taken care of by the event-loop mechanism.
• No Buffering. Data sent to a Node.js server is never buffered; Node.js applications output data in chunks.
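
A minimal sketch of this non-blocking behaviour, using only the built-in fs module (the file path is illustrative):

var fs = require('fs');

// the read is handed off to the system; the single thread moves on
fs.readFile('./uploads/json/model.json', 'utf8', function (err, data) {
    if (err) throw err;
    console.log('file contents arrived: ' + data.length + ' characters');
});

// this line prints before the file contents arrive
console.log('reading in the background...');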
Expressjs is a minimal web framework used to build web apps. It simplifies the implementation of a REST API server for Node.js by supporting sessions, routes, HTTP requests, and error handling. When a user wants to upload the model of a building, the front-end application sends a POST request to the server, where the function inside router.post is executed. A library called body-parser is used to parse the request body so the data sent can be used inside that function. The following lines show how some routes are defined and how the request's body is used:
router.post('/add', upload.fields([{
    name: 'jsonmodel',
    maxCount: 1
}, {
    name: 'textures',
    maxCount: 12
}]), function (req, res) {

    var newJsonModel = new JsonModel({
        name: req.body.name,
        json: req.files['jsonmodel'][0].filename,
        textures: req.files['textures'],
        description: req.body.description
    });

    JsonModel.addJsonModel(newJsonModel, textures, newJsonModel.json, function (err, jsonmodel) {
        textures = [];
        if (err) {
            res.json({
                success: false,
                msg: "fail to add model: " + err
            });
        } else {
            res.json({
                success: true,
                msg: "model added"
            });
        }
    });
});

router.get('/', function (req, res, next) {

    JsonModel.getJsonModels(function (err, jsonmodels) {
        if (err) {
            res.send({
                success: false,
                msg: "fail to get models: " + err
            });
        } else {
            res.send({
                success: true,
                jsonmodels: jsonmodels
            });
        }
    });
});
To handle the files sent with the POST request, another Node.js library called multer is used. In this application, multer restricts the accepted file types so the user can't upload the wrong kind of file, sets the upload destination of those files, and renames them before they are saved to the server's disk. The files need renaming to avoid two files having the same name on the server, which would unintentionally overwrite old models.
var storage = multer.diskStorage({
    destination: function (req, file, cb) {
        if (file.originalname.match(/\.(json)$/)) {
            cb(null, './uploads/json');
        } else {
            cb(null, './uploads/textures');
        }
    },
    filename: function (req, file, cb) {
        if (!file.originalname.match(/\.(png|jpeg|jpg|JPG|json)$/i)) {
            var err = new Error();
            err.code = 'filetype';
            return cb(err);
        } else {
            if (file.originalname.match(/\.(png|jpeg|jpg)$/i)) {
                textures.push(file.originalname);
            } else {
                textures = [];
            }

            cb(null, Date.now() + '_' + file.originalname);
        }
    }
});
Figure 3.2: Models after applying the rename method using multer
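
The storage engine above is then handed to multer to produce the upload middleware used by router.post('/add', upload.fields(...)). A sketch of that wiring follows; the file-size limit is an illustrative assumption, not taken from the project.

var multer = require('multer');

// build the upload middleware from the storage engine defined above
var upload = multer({
    storage: storage,
    limits: { fileSize: 50 * 1024 * 1024 } // assumed limit: reject files over ~50 MB
});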
If we open any 3D model before uploading it, for example the JSON file of a house, we can see the names and file types of its textures. Because multer also changes the names of those textures when they are uploaded, to avoid name collisions, some adjustments were needed. The solution in this case was to open the JSON file, find the string containing the original name of each texture, and change it to the new name produced by multer. For that, another Node.js library was needed: 'fs', the built-in file system module that Node.js uses to work with files.
fs.readFile('./uploads/json/' + filename, 'utf8', function (err, data) {
    if (err) {
        return console.log(err);
    }

    var modifiedJson = data;

    // textures is an array containing every texture name
    for (var i = 0; i < textures.length; i++) {
        var reg = new RegExp(textures[i], "g");
        modifiedJson = modifiedJson.replace(reg, newJsonModel.textures[i].filename);
    }

    fs.writeFile('./uploads/json/' + filename, modifiedJson, 'utf8', function (err) {
        if (err) return console.log(err);
    });
});

Figure 3.3: Renaming the textures
3.5 Database
The database was implemented in MongoDB, and its schema was made using Mongoose.
MongoDB is a widely used, general-purpose, document-oriented NoSQL database whose server and tools, written in C++, are open-source and available on all major operating systems, including Windows, Linux, UNIX, and Mac OS X. It has strong support for dynamic querying and for aggregating data, with MapReduce and an Aggregation Framework. MongoDB uses BSON (Binary JSON) as its storage format and the MongoDB Wire Protocol for communication between client drivers and the MongoDB server.
Because MongoDB is a document-oriented database, a record is a document whose data is structured in pairs of fields and values, similar to JSON objects. Because of this, MongoDB fields can include other documents, arrays, and arrays of documents.
Some advantages of using documents as records are that documents correspond to native data types in many programming languages, that embedded documents and arrays reduce the need for expensive joins, and that support for dynamic schemas gives a fluent polymorphism capability.
Figure 3.4: Structure of a record in MongoDB
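
As an illustration of this structure, a record for an uploaded model might look like the hypothetical document below. The field names match the Mongoose schema defined next, while all values and the exact shape of a waypoint are invented for this example.

{
    "_id": ObjectId("…"),
    "name": "Town Hall",
    "json": "1496131200000_townhall.json",
    "description": "3D model of the town hall",
    "textures": [ { "filename": "1496131200000_wall.jpg" } ],
    "waypoints": [ { "position": [0, 2, 10], "lookAt": [0, 1, 0] } ]
}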
Mongoose is a Node.js package used for object modeling, creating database schemas for MongoDB. It simplifies the use of MongoDB by giving us access to CRUD commands for its documents. Below is an example of using Mongoose to create the database schema of a 3D model:
const mongoose = require('mongoose');
mongoose.connect('mongodb://localhost:27017/modelloader');

// 3D model schema
const jsonmodelSchema = mongoose.Schema({
    name: {
        type: String,
        required: true
    },
    json: {
        type: String,
        required: true
    },
    description: {
        type: String,
        required: false
    },
    textures: {
        type: Array,
        required: false
    },
    waypoints: {
        type: Array,
        required: false
    }
});
Mongoose, as said before, supports all the CRUD functions needed to work with the database documents.
// Create
module.exports.addJsonModel = function (newJsonModel, textures, filename, callback) {
    …

    newJsonModel.save(callback);
}

// Read
module.exports.getJsonModelById = function (id, callback) {
    JsonModel.findById(id, callback);
}

// Update
module.exports.updateJsonModel = function (id, newJsonModel, callback) {
    JsonModel.findByIdAndUpdate(id, newJsonModel, callback);
}

// Delete
module.exports.deleteJsonModel = function (id, callback) {
    JsonModel.findByIdAndRemove(id, callback);
}
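
These helpers are then consumed from the Express routes. For example, a read-by-id endpoint might look like the following sketch, where the '/:id' URL pattern is an assumption consistent with the routes shown earlier.

// hypothetical route using the Read helper above
router.get('/:id', function (req, res) {
    JsonModel.getJsonModelById(req.params.id, function (err, jsonmodel) {
        if (err) {
            res.json({ success: false, msg: 'model not found: ' + err });
        } else {
            res.json({ success: true, jsonmodel: jsonmodel });
        }
    });
});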