
MINISTRY OF NATIONAL EDUCATION
"OVIDIUS" UNIVERSITY OF CONSTANT  A
FACULTY OF MATHEMATICS AND COMPUTER SCIENCE
DEGREE PROGRAM: COMPUTER SCIENCE
BACHELOR'S THESIS
Implementing a virtual tour of a public
building using WebGL
Scientific Adviser
Professor Popovici Mircea Dorin
Nicoara Andrei Daniel
Constanța
2017

Abstract
The main goal of this paper is to offer a solution for creating and visualizing virtual
tours of different buildings or statues created by the user. The solution takes the
form of a web application that allows the user to load, save and view 3D models from
a server and to create specific waypoints that the camera will follow while looking
at the model.
The user interacts with the application through GUI elements such as buttons,
keyboard shortcuts, dropdowns and widgets. All changes applied to the model, such
as its position, rotation and scaling, are saved in a database alongside the waypoints
added or modified by the user.

Contents
Contents 1
List of Figures 3
List of Tables 4
1 Introduction 5
1.1 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 State of the Art 6
2.1 Virtual Tour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Virtual space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 VRML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.1 Scene Graph Structure . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.2 Event Architecture . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.3 Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3.4 Scripts and Interpolators . . . . . . . . . . . . . . . . . . . . . . 9
2.4 X3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4.1 Scene graph hierarchy . . . . . . . . . . . . . . . . . . . . . . . 12
2.4.2 Transformation hierarchy . . . . . . . . . . . . . . . . . . . . . . 12
2.4.3 Behaviour graph . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.5 WebGL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.5.1 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.5.2 Origins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.5.3 Structure of a WebGL Application . . . . . . . . . . . . . . . . 16
2.5.4 Three.js . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

List of Figures
2.1 Panoramic tour vs 3D model . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 An example of a VRML world that uses Sensors and Interpolators . . . . . 10
2.3 The architecture of a X3D application . . . . . . . . . . . . . . . . . . . . 11
2.4 X3D animation example with PositionInterpolator and TimeSensor . . . . 13
2.5 Relationship among OpenGL, OpenGL ES and WebGL . . . . . . . . . . . 15
2.6 Structure of a standard web page and a web page with WebGL . . . . . 16

List of Tables
2.1 Standard units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

Chapter 1
Introduction
1.1 Objectives

Chapter 2
State of the Art
2.1 Virtual Tour
The purpose of a virtual tour is to simulate visiting a location from the comfort of a
computer. A virtual tour can consist of photos or videos assembled into a panoramic
view, or of a 3D model that reproduces the shape of a location. Virtual tours target
many specific fields such as entertainment, advertising and education.
Before the era of computers, the only standard way of visualizing a building was
through 2D floor plans or 3D cardboard models, which made the whole operation
inefficient in terms of portability. Everything changed when computers arrived and
brought the missing portability and interactivity to the virtual tour.
At the present time, there are three types of virtual tours:
• Panoramas. These are made by stitching images together, that is, modifying
the perspective of the images and blending them so that the photographs align
seamlessly.
• Video-based tours. Video cameras are used to pan and walk through locations,
which is more advanced than panoramas because the point of view is constantly
changing, but it gives the viewer no interaction.
• 3D content. A 3D model of the building is loaded and available to view and
interact with, giving the user more interaction and immersion than the other two
types of virtual tour.
Figure 2.1 : Panoramic tour vs 3D model
2.2 Virtual space
Virtual space or also called cyberspace is in fact a representation of human experience
in a particular space. The term cyberspace was used at rst by William Gibson in
his book "Neuromancer" published in 1984 and he describes it as the virtual space in
which the electronic data of worldwide PCs circulate.
There are multiple representations of virtual space. First of all, we have virtual
reality, a 3D environment where people can 'enter' and 'move' while interacting
with both the computer and other human beings, as seen in movies like Tron or
The Matrix. On the other hand, we have the representation as a world of networks of
computers linked via cables and routers that let us communicate with each other and
store and retrieve information. The best example is the Internet, at first used for
email, file transfer, bulletin boards and news, and now for far more thanks to the
World Wide Web, which allows us to navigate through this network.
Whether we talk about real space or virtual space, information is the common
currency that allows us to move from one space to another, being transformed from
the moment of its creation so that it can be used in any space. If we think about it,
space is in fact a collection of information arranged in an organized pattern. But can
a virtual space exist in the absence of information or virtual objects? The answer is
yes, because the possibility to store information is still present; the other way
around, the answer is clearly no. So information is intimately tied to virtual space
and can neither exist nor be transmitted in its absence.
If we compare the type of information used in virtual space and in physical space,
we get four sub-concepts: place, distance, size and route. The first drives us to
ask "Where?" questions. Where is that cube located? Where should we move it?
The scene where the cube is rendered represents a virtual place. Such ways of
speaking reflect the importance of locating specific objects in virtual space. Distance
has us asking "How far?" questions. How many method calls will it take to display the
desired object? In this way we can estimate, for example, how long it might take for
a computer to render a scene on our screens. With size we ask "How big?" questions.
We might wonder how extensive a scene is, meaning how much information it contains
and how many objects it includes. Route, finally, involves navigation issues. If a
player in a multiplayer game performs an action, the information describing that
action will follow a specified route, or set of connections, to reach the other
players so they can see it on their screens.
In the following sections we shall present some technologies that help us to create
our own virtual worlds.
2.3 VRML
VRML, which stands for Virtual Reality Modeling Language and whose first version was
launched in 1995, is neither virtual reality nor a modeling language. It is in fact a
3D interchange format which defines most of the commonly used semantics found in 3D
applications, such as hierarchical transformations, light sources, viewpoints,
geometry, animation, fog, material properties and texture mapping. Another purpose of
VRML is to be a 3D analog to HTML, serving as a simple multiplatform language for
publishing 3D Web pages. This is motivated by the fact that some information is best
experienced three-dimensionally, such as games and engineering or scientific
visualizations.
Users can navigate through 3D space and click on objects representing URLs or
other VRML scenes. VRML files may contain references to files in many other standard
formats: JPEG, PNG and GIF may be used as texture maps on objects, WAV and MIDI as
sound and music emitted in the scene, and Java or JavaScript code to implement
programmed behavior for the objects in the scene.
Below are some of the main features of VRML.
2.3.1 Scene Graph Structure
Hierarchical scene graphs are used in VRML to describe 3D objects and scenes, where
every entity in the scene is called a node. VRML provides different types of nodes,
such as geometry primitives, appearance properties, sounds and sound properties, and
various types of grouping nodes. Nodes store their data in different types of fields
that can hold anything from a single number to complex data structures like an array
of 3D transformations.
The scene graph in VRML is a directed acyclic graph which allows nodes to contain
other nodes in a parent-child relation. This is helpful for creating large worlds or
complicated objects from subparts.
2.3.2 Event Architecture
Nodes use events as a mechanism to communicate with each other. Every node
type defines the names and types of events that instances of that type may generate
or receive, and ROUTE statements define event paths between the nodes that generate
events and those that receive them.
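This routing mechanism can be sketched in plain JavaScript (the `Node` class and the field names below are illustrative, not part of any VRML API):

```javascript
// Minimal sketch of VRML-style event routing: nodes expose named
// output events, and ROUTEs forward each emitted value to a named
// input event on another node.
class Node {
  constructor(name) {
    this.name = name;
    this.routes = [];        // outgoing connections: {from, target, to}
    this.fields = {};        // last value received per input event
  }
  // Equivalent of: ROUTE this.from TO target.to
  route(from, target, to) {
    this.routes.push({ from, target, to });
  }
  // Emit an output event; every matching route delivers the value.
  emit(from, value) {
    for (const r of this.routes) {
      if (r.from === from) r.target.receive(r.to, value);
    }
  }
  receive(to, value) {
    this.fields[to] = value; // a real node would react to the event here
  }
}

// A touch sensor routed to a timer, as in a VRML scene.
const sensor = new Node("ball_sensor");
const timer = new Node("timer");
sensor.route("touchTime", timer, "set_startTime");
sensor.emit("touchTime", 4.2);
console.log(timer.fields.set_startTime); // 4.2
```

The important point is that the sensor knows nothing about the timer; the ROUTE declarations alone decide where events flow.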
2.3.3 Sensors
Events are triggered by sensors, which let the world author detect what happens and
when. There are multiple types of sensors useful for user interaction, generating
events as the viewer moves through the scene or when the user interacts with some
input device.
2.3.4 Scripts and Interpolators
Scripts sit between event generators (sensor nodes, for example) and event receivers.
They allow the world creator to specify arbitrary behaviors for any object in the
scene.
Interpolators are built-in scripts used for simple animation calculations;
combined with a sensor called TimeSensor, which generates events as time passes,
they can make objects move or undergo any transformation over a period of time.
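The key/keyValue behaviour of an interpolator driven by a TimeSensor can be sketched in plain JavaScript (a simplified scalar version for illustration, not the actual VRML node):

```javascript
// Sketch of a VRML interpolator for scalar values: given keyframe
// times `key` (ascending, in [0,1]) and matching `keyValue`s,
// linearly interpolate the value at fraction t.
function interpolate(key, keyValue, t) {
  if (t <= key[0]) return keyValue[0];
  if (t >= key[key.length - 1]) return keyValue[keyValue.length - 1];
  for (let i = 1; i < key.length; i++) {
    if (t <= key[i]) {
      const f = (t - key[i - 1]) / (key[i] - key[i - 1]);
      return keyValue[i - 1] + f * (keyValue[i] - keyValue[i - 1]);
    }
  }
}

// A TimeSensor with cycleInterval 8 maps elapsed seconds to a fraction:
const cycleInterval = 8;
const fraction = (4 % cycleInterval) / cycleInterval; // 0.5 after 4 s
console.log(interpolate([0, 1], [-10, 10], fraction)); // 0 (halfway)
```

A PositionInterpolator does the same computation per coordinate, and an OrientationInterpolator does it on rotation angles.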
Below is an example of a VRML world where a ball moves on a plane when a click
event occurs.
#VRML V2.0 utf8
Viewpoint { position 0 0 20 }
# Base
Transform {
  translation 0 -0.2 0
  children Shape {
    appearance Appearance {
      material Material {
        diffuseColor 1 0 0 } }
    geometry Box { size 22 0.4 2 }
  }
}
DEF ball_tr Transform {
  translation -10 1 0
  children [
    Shape {
      appearance Appearance {
        material Material { }
        texture ImageTexture { url "cone.jpg" } }
      geometry Sphere { }
    }
    DEF ball_sensor TouchSensor { } ] }
DEF timer TimeSensor {
  cycleInterval 8
  loop FALSE
}
DEF pi PositionInterpolator {
  key [ 0 1 ]
  keyValue [ -10 1 0, 10 1 0 ]
}
DEF oi OrientationInterpolator {
  key [ 0 0.157 0.314 0.471
        0.628 0.785 0.942 1 ]
  keyValue [ 0 0 1 0,
             0 0 1 3.14,
             0 0 1 6.28,
             0 0 1 9.42,
             0 0 1 12.56,
             0 0 1 15.7,
             0 0 1 18.84,
             0 0 1 20.0 ]
}
DEF si ScalarInterpolator {
  key [ 0 0.5 1 ]
  keyValue [ 0 1 0 ]
}
ROUTE ball_sensor.touchTime TO timer.set_startTime
ROUTE timer.fraction_changed TO si.set_fraction
ROUTE si.value_changed TO pi.set_fraction
ROUTE pi.value_changed TO ball_tr.set_translation
ROUTE si.value_changed TO oi.set_fraction
ROUTE oi.value_changed TO ball_tr.set_rotation
Figure 2.2 : An example of a VRML world that uses Sensors and Interpolators
2.4 X3D
X3D, built on VRML, is a scene-graph architecture and file-format encoding that
improves on the VRML international standard. X3D expresses the geometry and behavior
inherited from VRML using XML (Extensible Markup Language), while also providing
program scripting in JavaScript or Java.
To view X3D scenes we need a browser that is able to parse X3D code. Such
browsers are often implemented as plugins for a regular web browser (such as Mozilla
Firefox or Google Chrome), but are also delivered as standalone or embedded
applications that present X3D scenes.
The architecture of an X3D application is defined independently of any physical
devices or other implementation-dependent concepts (touchscreen, mouse etc.).
Another aspect of every X3D application is that it contains graphical and/or aural
objects that can be loaded from local storage or over the network. These objects can
also be dynamically updated through a variety of mechanisms, depending on the
developer's preferences. Each X3D application has the following purposes:
• to establish a world coordinate virtual space for all the objects in the scene
used by the application;
• to define and compose a set of 2D and 3D multimedia objects;
• to specify the use of hyperlinks to other files and applications;
• to create programmable behaviour for objects in the scene;
• to permit the use of external modules or applications via various scripts and
programming languages.
Figure 2.3 : The architecture of a X3D application
2.4.1 Scene graph hierarchy
As in VRML, the X3D scene graph is a directed acyclic graph containing different
types of nodes, which in turn contain specific fields holding one or more child
nodes that participate in the hierarchy. Those fields may contain simple values or
other nodes, making it easy to add different user viewing perspectives. Because of
this structure, rendering and animating scenes with X3D is straightforward: starting
at the root of the scene graph tree, the graph is traversed in a depth-first manner.
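This depth-first traversal can be sketched in JavaScript (the node shape below is a simplification for illustration, not the actual X3D object model):

```javascript
// Sketch of depth-first scene-graph traversal: visit each node,
// then recurse into its children, starting at the root.
function traverse(node, visit) {
  visit(node);
  for (const child of node.children || []) {
    traverse(child, visit);
  }
}

// A tiny scene graph mirroring the structure of an X3D file.
const scene = {
  name: "Scene",
  children: [
    { name: "Transform", children: [{ name: "Shape", children: [] }] },
    { name: "TimeSensor", children: [] },
  ],
};

const order = [];
traverse(scene, n => order.push(n.name));
console.log(order); // ["Scene", "Transform", "Shape", "TimeSensor"]
```

A renderer performs exactly this walk, accumulating transformations as it descends so that each child is positioned relative to its parent.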
2.4.2 Transformation hierarchy
The transformation hierarchy contains all the root nodes and their children that are
considered to have at least one particular location in the virtual world. Those root
nodes have their coordinate system expressed relative to the world coordinate system.
Some nodes are exceptions and are not affected by the transformation hierarchy, such
as PositionInterpolator, which linearly interpolates an array of values, TimeSensor,
which generates events as time passes, and the WorldInfo node, which contains
information about the world.
2.4.3 Behaviour graph
The behaviour graph is the collection of connections between node fields (routes),
together with a model for the propagation of events declared in the scene. This graph
can be changed dynamically by rerouting, adding or breaking connections. Events are
inserted into the system and propagate through the behaviour graph in a well-defined
order.
Below is a table containing the standard units used in X3D applications:
Table 2.1: Standard units
Category          Unit
Linear distance   Metres
Angles            Radians
Time              Seconds
Colour space      RGB ([0,1], [0,1], [0,1])
The following X3D application renders and animates a cube that moves using the
PositionInterpolator and TimeSensor.
<Scene DEF='scene'>
  <Group>
    <Transform DEF='Cube'>
      <Shape>
        <Appearance>
          <Material/>
        </Appearance>
        <Box size='1 1 1'/>
      </Shape>
    </Transform>
    <TimeSensor DEF='Clock' cycleInterval='4' loop='true'/>
    <PositionInterpolator DEF='CubePath'
      key='0 0.11 0.17 0.22
           0.33 0.44 0.5 0.55 0.66
           0.77 0.83 0.88 0.99'
      keyValue='0 0 0, 1 1.96 1, 1.5 2.21 1.5,
                2 1.96 2, 3 0 3, 2 1.96 3,
                1.5 2.21 3, 1 1.96 3, 0 0 3,
                0 1.96 2, 0 2.21 1.5, 0 1.96 1, 0 0 0'/>
  </Group>
  <ROUTE fromNode='Clock' fromField='fraction_changed'
    toNode='CubePath' toField='set_fraction'/>
  <ROUTE fromNode='CubePath' fromField='value_changed'
    toNode='Cube' toField='set_translation'/>
</Scene>
Figure 2.4 : X3D animation example with PositionInterpolator and TimeSensor
2.5 WebGL
WebGL is a JavaScript API that enables web content to render 3D graphics in an HTML
canvas element inside the web browser. It is based on OpenGL ES 2.0, the API for 3D
rendering on smartphones such as iPhone and Android devices. Traditionally, to
create convincing 3D graphics we had to build stand-alone applications in
programming languages such as C or C++ along with dedicated computer graphics APIs
such as OpenGL and Direct3D. Now, with WebGL, we can include complex 3D graphics in
a standard web page using only HTML and JavaScript.
Another aspect of WebGL is that it is supported as the browser's default built-in
technology for rendering 3D graphics, meaning that we don't need to install any
plugins or libraries to use it. Because of that, we can run WebGL applications on
various platforms, from PCs to tablets or smartphones.
2.5.1 Advantages
As HTML has evolved, more and more sophisticated web applications have appeared. In
the past, HTML could support only static content, but once scripting support such as
JavaScript was added, more complex interactions and dynamic content became possible.
HTML5 introduced further sophistication with support for 2D graphics via the canvas
tag, allowing a variety of graphical elements on a web page. The next step was the
advent of WebGL, which enables displaying and manipulating 3D graphics on web pages
using JavaScript. With WebGL, it's possible to create rich user interfaces, 3D games,
or even to use 3D graphics to visualize and manipulate information from the Internet.
Below are some advantages of WebGL:
• We need only a text editor to write the code and a browser to view the 3D
graphics application.
• It's easy to share 3D graphics applications with others.
• We can leverage the full functionality of the browser.
• There is a lot of material and support for studying and developing WebGL
applications.
2.5.2 Origins
The two most widely used technologies for displaying 3D graphics on personal
computers are OpenGL and Direct3D. Direct3D is a proprietary application programming
interface developed by Microsoft and is part of the DirectX technologies. The other,
OpenGL, is a free and open API that supports various platforms and operating systems
such as Linux, Windows and Macintosh, and a variety of devices like smartphones,
tablets and game consoles.
OpenGL, developed by Silicon Graphics and published as an open standard in 1992,
has evolved through several versions and has had a profound effect on 3D graphics,
software products and even film productions over the years. Although OpenGL is the
root of all other Silicon Graphics 3D libraries, WebGL is actually derived from a
version of OpenGL that targets embedded devices such as smartphones and video game
consoles: OpenGL ES, whose first version appeared in 2004, followed by the second
version, on which WebGL is based, in 2007, and by the third version in 2012.
Figure 2.5 shows the relationship between OpenGL and the other two APIs.
Figure 2.5 : Relationship among OpenGL, OpenGL ES and WebGL
As we see in this figure, OpenGL 2.0 brought along a new graphics capability:
programmable shader functions. Shader functions, or shaders, are computer programs
that allow the developer to create sophisticated visual effects using a special
programming language similar to C called GLSL (OpenGL Shading Language).
2.5.3 Structure of a WebGL Application
By default, dynamic web pages are created using a combination of HTML and
JavaScript, but with WebGL the shading language GLSL ES also needs to be added,
meaning that web pages with WebGL are built using three languages: HTML5, JavaScript
and GLSL ES. Figure 2.6 shows the architecture of a standard web page on the left
and of a web page with WebGL on the right.
Figure 2.6 : Structure of a standard web page (left) and a web page with WebGL (right)
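This three-language combination can be sketched as follows (a minimal, illustrative example; the shader sources and the `initWebGL` helper are assumptions, and the canvas element would come from the HTML page):

```javascript
// GLSL ES shader sources, embedded in the JavaScript as strings.
const VSHADER_SOURCE = `
  attribute vec4 a_Position;
  void main() {
    gl_Position = a_Position;   // pass the vertex position through
  }`;

const FSHADER_SOURCE = `
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);  // solid red
  }`;

// Compile one shader of the given type from source.
function compileShader(gl, type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  return shader;
}

// Obtain a WebGL context from an HTML <canvas> and link a program
// from the two GLSL ES shaders above.
function initWebGL(canvas) {
  const gl = canvas.getContext("webgl");
  const program = gl.createProgram();
  gl.attachShader(program, compileShader(gl, gl.VERTEX_SHADER, VSHADER_SOURCE));
  gl.attachShader(program, compileShader(gl, gl.FRAGMENT_SHADER, FSHADER_SOURCE));
  gl.linkProgram(program);
  gl.useProgram(program);
  return gl;
}
```

In the HTML page, `initWebGL(document.getElementById('canvas'))` would be called once the canvas element exists; everything after that point is drawn by the shaders on the GPU.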
2.5.4 Three.js
Three.js is a JavaScript library that makes WebGL easy to use by providing an
extensive API with a large set of functions. It is useful for creating 3D models,
textures and scenes directly on a web page without too much difficulty and with less
code than using raw WebGL.
Below are some things Three.js makes easy:
• Creating simple and complex 3D geometries
• Animating and moving objects through a 3D scene
• Applying textures and materials to objects in the scene
• Using different types of light sources to illuminate the scene
• Loading models exported from 3D-modeling software
• Advanced effects such as particle systems and post-processing (bloom, motion
blur etc.)
• Working with custom shaders
• Creating point clouds
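For example, the classic "spinning cube" scene takes only a few lines (a sketch assuming the three.js library is already loaded; the `THREE` namespace is passed in explicitly so the function itself stays self-contained):

```javascript
// Build a minimal three.js scene: a camera looking at a single cube.
// THREE is the three.js namespace (e.g. loaded from a <script> tag).
function buildScene(THREE, aspect) {
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(75, aspect, 0.1, 1000);
  camera.position.z = 5;

  const cube = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshNormalMaterial()
  );
  scene.add(cube);
  return { scene, camera, cube };
}

// In the browser, a render loop would then spin the cube:
// const renderer = new THREE.WebGLRenderer();
// function animate() {
//   requestAnimationFrame(animate);
//   cube.rotation.x += 0.01;
//   cube.rotation.y += 0.01;
//   renderer.render(scene, camera);
// }
```

Compared with raw WebGL, there are no shaders, buffers or matrices to manage by hand; the library generates them from the scene description.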