
MINISTRY OF NATIONAL EDUCATION
"OVIDIUS" UNIVERSITY OF CONSTANTA
FACULTY OF MATHEMATICS AND COMPUTER SCIENCE
DEGREE PROGRAM: COMPUTER SCIENCE
BACHELOR'S THESIS
Virtual tour of a public building
Case study: Unity
Author
Bogdan Mangri
Adviser
Prof. Dr. Dorin Mircea Popovici
Constanta 2017

Virtual tour of a public building
Case study: Unity
Bogdan Mangri
Abstract
The focus of the thesis is to develop an interactive application through
which the user can perform the tour of a public building in a virtual
environment.
One of the main strengths of this approach is the three dimensional setting, which offers the user a wide range of angles and points of view and grants an experience much closer to a real tour.
The application facilitates interaction between users thanks to its online feature and the voice chat function. Even if the user lacks an internet connection, he can still find AI controlled characters that provide meaningful information and immersive activities.

Contents
1 Introduction
1.1 Introduction
1.2 Problem Statement
1.3 Target groups
1.4 Solution proposed
1.5 Project goals
2 Technology
2.1 Graphics Engine – Unity
2.2 Scripting Language
2.3 The development environment
2.4 Networking
3 Creating a character controller
3.1 The character
3.2 The animations
3.3 Tying it together
3.4 Taking input
3.5 The default controller
3.6 What we did
4 Implementing a live-feed monitor
4.1 Rendering Camera
4.2 Render textures
4.3 User Interface
4.4 What we achieved
5 Adding an automatic tour mode
5.1 Navigation System
5.2 The preset route
5.3 Adding a toggle
5.4 Closest objective
5.5 What we ended with
6 How to do a quest system
6.1 Introducing the Rigidbody
6.2 Explaining colliders
6.3 Using the components
6.4 Delivering the quests
6.5 What we accomplished
7 Networking and Voice chat
7.1 Networking
7.2 Voice chat
7.3 What we obtained
8 Optimisation practices
8.1 General optimisations
9 Application description
10 Conclusion

List of Figures
2.1 Unity5 Logo
2.2 Unity5 Platforms
2.3 Unity5 Networking Layers
3.1 Rendering elements example
3.2 The animator controller
3.3 How input works
3.4 Taking input
3.5 The final animator controller
4.1 First person camera
4.2 The process steps
4.3 The canvas in world space
4.4 The button with its set event
5.1 The key components of the NavMesh
5.2 The toggle
5.3 The redirect function
6.1 The professor animation states
7.1 The network structure
7.2 Player diagram
7.3 Opus Bitrate/Latency comparison
7.4 Opus Quality/Bitrate comparison

Chapter 1
Introduction
1.1 Introduction
Given how fast technology has advanced so far and how the future is shaping up, there is good reason to assume that every part of our life can and will be adapted to the virtual environment. This has led me to believe that the tourism sector has failed to take advantage of this technology and is therefore in an underdeveloped state.
My hope is that this thesis can light the way towards a more appropriate use of the technology already available in the tourism sector.
1.2 Problem Statement
Up until now, everything tour related has been treated as a sequence of photos. Even though photos have very good fidelity, they are only two dimensional and can be seen only from the point of view of the photographer; on the same note, they lack the interactive element that can bring a location to life.
By examining the virtual tour capabilities available for various important locations, I have assessed the experiences as not being up to par with the state of the art and thought of a way to improve the experience of visiting a new place with the help of a computer.
1.3 Target groups
Such an application has varied uses: new students can learn the layout of their school, employees can get accustomed to their new place of work faster by using the application, or the public administration can use it for its points of interest as a means of publicity to attract tourists.
As long as there is an interesting setting to be viewed, it can be translated to the virtual environment and could make use of the solution proposed.
1.4 Solution proposed
My solution involves a three dimensional open world environment in which the user can participate, as opposed to the current photography based tours, which take advantage of the 360 degree photography technique.
In my version the user can view locations from multiple angles, allowing for a proper assessment of the environment. It includes AI based characters that provide information about certain elements on demand, as well as the possibility to interact with the other connected users, resulting in a more vivid and authentic experience.
1.5 Project goals
The purpose of this project is to bring a breath of fresh air to the virtual tour segment of tourism and open a door for many young people. In the world we live in now, technology is a very big point of influence and so is the internet; being connected is becoming second nature for us.
My idea is going to innovate the way virtual tours are designed and bring them into the connectivity era.

Chapter 2
Technology
2.1 Graphics Engine – Unity
Figure 2.1: Unity5 Logo
Unity is a cross-platform game engine developed by Unity Technologies, primarily used to develop video games and simulations. Five major versions of Unity have been released, with the sixth version currently in beta.
Unity is an all purpose engine and supports both 2D and 3D graphics, drag and drop functionality and scripting through its three supported languages. The engine targets the following APIs: Direct3D and Vulkan for Windows and Xbox; OpenGL for Mac, Linux, and Windows; OpenGL ES on Android and iOS; as well as proprietary APIs on video game consoles.
The main advantage is the ability to target games to multiple platforms. The currently supported platforms include mobile devices, personal computers, video game consoles and TVs. The latest additions are virtual reality devices like the HTC Vive, Oculus Rift or Microsoft HoloLens, and WebGL support has recently been added to the list, see figure 2.2.
Figure 2.2: Unity5 Platforms
It is also useful for its range of integrated services to engage, retain and monetize audiences. I made use of such a service for the multiplayer implementation, which we will talk about later.
For this project I have used the latest official release, Unity 5.6. This version improves on the previous one by significantly improving the lightmapper and by adding support for Android, Linux and Windows devices to use the new Vulkan low overhead, cross platform 3D graphics and compute API.
2.2 Scripting Language
Even though Unity by itself is a strong engine, there is only so much you can do without writing code. The scripts I have made for this project are written in C#.
C# is a multi-paradigm programming language developed by Microsoft within its .NET initiative. It is simple, powerful, type-safe and object-oriented, and encompasses strong typing, imperative, declarative, functional, generic and component-oriented programming disciplines.
This facilitates the entity-component system (ECS) used by Unity. ECS is an architectural pattern that is mostly used in game development. An ECS follows the composition over inheritance principle, which allows greater flexibility in defining entities: every object in a game's scene is an entity (e.g. enemies, bullets, vehicles, etc.). Every entity consists of one or more components which add behaviour or functionality; the components are either the already implemented low level behaviours, such as collisions or rendering, or the scripts we write to give certain objects the functionality desired. Therefore, the behaviour of an entity can be changed at runtime by adding or removing components.
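As a minimal illustration of this component-based approach (a generic sketch, not code taken from the project; the class name and values are hypothetical), a behaviour is just a small script attached to an entity:

using UnityEngine;

// Hypothetical example: a reusable "rotator" behaviour.
// Attaching this component to any GameObject makes it spin,
// without that object having to inherit from a special class.
public class Rotator : MonoBehaviour
{
    public float degreesPerSecond = 45f;

    void Update()
    {
        // Rotate around the local Y axis every frame.
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }
}

The same object could also carry a collider, a renderer and a custom quest script; its behaviour is the sum of its components and can change at runtime through calls such as AddComponent or Destroy.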
C# is a general-purpose, object-oriented programming language.
Its development team is led by Anders Hejlsberg. The most recent
version is C# 7.0 which was released in 2017 along with Visual Studio
2017.
The latest Unity patch has also introduced support for .NET 4.6. The .NET Framework (pronounced "dot net") is a software framework developed by Microsoft that runs primarily on Microsoft Windows. It also includes a large class library named the Framework Class Library, which provides language interoperability across several programming languages.
2.3 The development environment
Alongside the game engine, the IDE I have used to write the code for the object components is Microsoft's Visual Studio 2017, which since the 5.6 release has been the default IDE for Unity.
The notable improvements in this version are that Visual Studio starts faster, is more responsive and uses less memory than before, and the introduction of a Structure Visualizer. And even though it is not a new feature, it is worth noting that IntelliSense is a great tool for improving productivity.
2.4 Networking
Since it is a separate service, I should mention that I used the integrated Unity Multiplayer service. I mostly used the HLAPI (high level API) to integrate multiplayer into the project, but low level code can be written as well.
The High Level API Unity provides is a system for building multiplayer capabilities into games. The lower level transport real-time communication layer is the base everything is built upon; it handles many of the common tasks required for multiplayer games. Even though the transport layer supports any kind of network topology, the HLAPI system is a server authoritative one; no dedicated server process is required, because it allows one of the participants to be a client and the server at the same time.
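As a rough sketch of what this looks like in code (assuming the scene contains a NetworkManager component; the wrapper class below is hypothetical, the HLAPI calls themselves are standard):

using UnityEngine;
using UnityEngine.Networking;

// Hypothetical helper: start as host, or join an existing host as a client.
public class ConnectionStarter : MonoBehaviour
{
    public void StartAsHost()
    {
        // The host runs a server and a local client in the same process.
        NetworkManager.singleton.StartHost();
    }

    public void StartAsClient(string address)
    {
        NetworkManager.singleton.networkAddress = address;
        NetworkManager.singleton.StartClient();
    }
}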
Figure 2.3: Unity5 Networking Layers
Because it works in conjunction with the internet services, this allows multiplayer games to be played over the internet with no major changes from the developers. It is built from a series of layers that add functionality, see figure 2.3.
Another networking component I used, to obtain the voice chat functionality, is Opus. Opus is a lossy audio coding format designed to efficiently code speech and general audio in a single format, while remaining low-latency enough for real-time interactive communication.
Opus is the best available solution, replacing both Vorbis and Speex for new applications; many blind listening tests have ranked it higher-quality at any given bitrate than any other standard audio format until transparency is reached, including MP3, AAC and HE-AAC.
In my project I have used the reference implementation called libopus, which is available under the New BSD License. This license allows almost unlimited freedom with the software as long as the BSD copyright and license notice are included.

Chapter 3
Creating a character controller
In order to be as clear as possible we will first tackle general terms and then the implementation itself.
Since the tour will be in third person view, first of all we will need a 3D model; that means textures, materials and shaders. Next we have to animate the model, take input from the user and send that data to the animator, which will animate the 3D model according to the correct state. In the end we will add complementary mechanics.
Since the point of this project is to implement mechanics that simulate a tour, and not to model the environment, the models are replaceable and can vary with the intents and purposes of the tour. The models and animations are provided through Adobe Fuse, a 3D computer graphics software developed by Mixamo that enables users to create custom 3D characters. Mixamo is available free, without license or royalty fees, for unlimited use in both commercial and non-commercial projects.
3.1 The character
In 3D graphics, materials and textures are nearly as important as shapes. Scenes would be boring if all the objects were gray. Rendering in Unity is done with Materials, Shaders and Textures. Materials are definitions of how a surface should be rendered, including references to the textures used, tiling information, colour tints and more. The available options for a material depend on which shader the material is using.
Shaders are small scripts that contain the mathematical calculations and algorithms for computing the colour of each rendered pixel, based on the lighting input and the Material configuration. For most normal rendering the standard shader is the best choice; it is a highly customisable shader capable of rendering many types of surface in a highly realistic way, but if the situation requires it you can write custom shaders to fit your requirements.
Textures are bitmap images. A Material may contain references to textures, so that the Material's shader can use the textures while calculating the surface colour of an object. In addition to the basic colour (albedo) of an object's surface, textures can represent many other aspects of a material's surface, such as its reflectivity or roughness.
In figure 3.1 we can see an example of a combination of different textures, materials and shaders.
Figure 3.1: Rendering elements example
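To give a sense of how these elements are addressed from code, the following sketch (a generic illustration; the texture reference and tint value are hypothetical) swaps the albedo texture and colour of an object's material at runtime:

using UnityEngine;

// Hypothetical example: change the rendered appearance of one object.
public class MaterialChanger : MonoBehaviour
{
    public Texture2D newAlbedo;   // assigned in the Inspector

    void Start()
    {
        // "material" returns a per-object instance, so other objects
        // sharing the same material asset are not affected.
        Material mat = GetComponent<Renderer>().material;
        mat.mainTexture = newAlbedo;            // base (albedo) texture
        mat.color = new Color(0.9f, 0.9f, 1f);  // slight blue tint
    }
}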
3.2 The animations
Unity's animation system is heavily reliant on Animation Clips, which contain information about how certain objects should change their properties over time. Each clip can be interpreted as a single linear recording. Animation clips from external sources can be created by artists or animators with 3rd party tools like Max or Maya, or come from motion capture studios or other sources.
The key component needed to create a controller is the Animator. The controller takes references to the animation clips we use, and has the task of managing the various states containing animations and the transitions between them using a State Machine, which can be thought of as a kind of flow-chart.
A very simple Animator Controller might only contain one or two clips, for example to control a powerup spinning and bouncing, or to animate a door opening and closing at the correct time. A more advanced Animator Controller might contain dozens of humanoid animations for all the main character's actions, and might blend between multiple clips at the same time to provide a fluid motion as the player moves around the scene; we will build the latter kind for our tour.
3.3 Tying it together
Now that we have a textured 3D model and some movement animations, the next step is to give each animation a specific value. We use the Turn variable to determine in which direction the player moves and the Forward one for the desired speed. Since we use root motion, the movement of the character is determined by the animation that is currently running; that is why I increased the animation speed for running, to obtain two different running speeds. We see in figure 3.2 the values set for each animation and the state machine; Pos X is the Turn value and Y is the Forward one.
(a) Few of the values attributed to the animation
(b) The state machine for animations
Figure 3.2: The animator controller
3.4 Taking input
The next step is taking input from the user; for this we use the GetAxis function from the Input interface, see figure 3.3. This function requires an axis as input and returns a value between -1 and 1. The value changes by a step when the buttons for positive and negative input are pressed. Such a step is useful if you want a faster or slower acceleration.
Figure 3.3: How input works
The input from the user could also be used to change the movement of the character, but because we use root motion the speed is exclusively controlled by the speed of the animation, as mentioned earlier, so we will use this value only to change the Turn variable. In figure 3.4 we can see how we store the input in two 3D vectors, then sum their product with the forward and right directions relative to the camera, and later send the magnitude of the sum to the "Forward" variable of the animator.
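A condensed sketch of that computation is shown below (the parameter name follows the description above; treat the class as an illustration rather than the project's exact script, which is shown in figure 3.4):

using UnityEngine;

// Hypothetical sketch of camera-relative input feeding the animator.
public class MovementInput : MonoBehaviour
{
    Animator animator;

    void Start() { animator = GetComponent<Animator>(); }

    void Update()
    {
        // Raw axes: values in [-1, 1], stepped towards the pressed direction.
        float h = Input.GetAxis("Horizontal");
        float v = Input.GetAxis("Vertical");

        // Movement direction relative to the camera, with its pitch flattened out.
        Vector3 camForward = Vector3.Scale(Camera.main.transform.forward,
                                           new Vector3(1f, 0f, 1f)).normalized;
        Vector3 move = v * camForward + h * Camera.main.transform.right;

        // Root motion controls the actual speed, so only the magnitude is sent on;
        // the extra arguments apply damping so the animations blend smoothly.
        animator.SetFloat("Forward", move.magnitude, 0.1f, Time.deltaTime);
    }
}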
With that we have roughly explained the idea behind the character controller. Now we can add additional behaviours like different moving speeds or a jump. For the jump, the basic idea is the following: when we press the space key a bool turns true and we send that bool to the animator to trigger the jumping animation. After that we check the distance from the character to the ground and change the bool back.
Figure 3.4: Taking input
Ray ray = new Ray(transform.position + Vector3.up * 0.1f, -Vector3.up * 0.1f);
RaycastHit[] hits = Physics.RaycastAll(ray, 0.75f);
foreach (var hit in hits) {
    if (!hit.collider.isTrigger) {
        if (velocity.y <= 0)
        {
            rigid.position = Vector3.MoveTowards(rigid.position,
                hit.point, Time.deltaTime * 10f);
        }
        onGround = true;
    }
}
To change the speed we check in a bool whether the shift key is pressed and whether the player has enough stamina; stamina is a resource the player spends to run and jump. If the player has enough stamina, we change the value we send to the animator.
bool isRunning = Input.GetKey(KeyCode.LeftShift);
if (isRunning && stamina > 0)
{
    staminaCount -= Time.deltaTime * speed / 15f;
    staminaSlider.value = staminaCount;
    if (staminaCount <= 0)
        isRunning = false;
}
if (isRunning)
    moveMultiplier = 1;
else
    moveMultiplier = 0.5f;
move *= (moveMultiplier * (speed / (stamina * 2.5f)));
3.5 The default controller
In Unity there is a default Character Controller component. It can be used in first- or third-person games, because the character will often need some collision-based physics to make sure that everything works as intended: the player does not walk through walls or fall through floors. The movement and acceleration using this component will usually be very unrealistic; you can expect your character to stop or change direction immediately, without any momentum effect going on.
The component gives the character a capsule collider facing upwards. There are special functions to set the object's speed and direction, and a rigidbody is not needed, which makes it different from typical colliders. The absence of the rigidbody is what makes the momentum effects unrealistic.
It comes with properties that can be modified to everyone's needs, such as the Height or Radius; you can also modify the Center property in case it is not centered properly.
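For comparison, using the built-in component typically looks like the sketch below (a generic example, not code from this project; the speed value is arbitrary):

using UnityEngine;

// Hypothetical example of the built-in controller: direct, momentum-free movement.
public class SimpleMover : MonoBehaviour
{
    public float speed = 3f;
    CharacterController controller;

    void Start() { controller = GetComponent<CharacterController>(); }

    void Update()
    {
        Vector3 move = new Vector3(Input.GetAxis("Horizontal"), 0f,
                                   Input.GetAxis("Vertical"));
        // SimpleMove applies gravity for us and ignores the Y component of "move".
        controller.SimpleMove(transform.TransformDirection(move) * speed);
    }
}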
A controller is restricted from walking through static colliders. Objects that contain a rigidbody can be pushed aside but will not retain momentum after a collision. That means you can use the standard controller, but it will limit the realistic physical interactions in the scene.
Even though it looks like such a controller is flawed by design, we must not rush our decisions; something like this is very useful in a fast paced, arcade type of game, like the popular Doom series programmed by John Carmack. For our type of problem we will opt not to use it and will implement a more realistic behaviour instead.
3.6 What we did
After everything is done we have an advanced character controller that has different moving speeds, turns on the spot, can jump over small obstacles, and whose transitions between animations blend smoothly into each other.
The reason why we implemented this type of controller instead of the default one is that in our tour there is no need for the movement to be fast paced; we want it to feel like a usual tour and keep it as realistic as possible. If the user wants to move quickly from point A to point B we have also implemented a running system, and by doing the quests the user can increase the running speed of the character.
The movement also depends on the speed and stamina of the player. These stats can be altered in the game by doing quests for non player controlled characters; we will cover that part later. Below is an example of how the animator looks during a test.
Figure 3.5: The final animator controller

Chapter 4
Implementing a live-feed monitor
To achieve this kind of effect we will make use of previously mentioned elements like materials and textures, but also introduce new ones like the camera. The camera is one of the most important elements in game development; it is the component that tells the computer what to render from the scene.
In this chapter we will get acquainted with how cameras work, why they are so important and how to use them to achieve different behaviours. Complementarily, we will learn about the User Interface and its capabilities inside the graphics engine.
4.1 Rendering Camera
In the tour there will be a total of three cameras working together to obtain the desired result. First there is the intro camera, a first person camera we use to create the impression that the user is the person we see walking on the screen. Next we see a monitor with a live feed image of the building we want to create a tour of, in order to give the user the feeling that he is watching a simple video tour on a screen. After that we zoom the intro camera in on the screen and at the same time connect the user to the network and spawn his own character with a third person camera, which then zooms out.
We do this to give the user the sensation that he is immersing himself into the virtual tour. He is not doing a tour; he has a virtual character that is doing a virtual tour on a computer, as you can see in figure 4.1.
Figure 4.1: First person camera
Cameras are the game objects that capture and display the world
to the player. You can make the presentation of your game truly
unique by customizing and manipulating the cameras the way you
need them to. You can have as many cameras as you want in a scene
but beware they can take a lot of rendering power. You can set them
to render in any order, at any place on the screen, or only certain
parts of the screen.
Cameras are essential to display the game to the player. They can be scripted to achieve just about any kind of effect desired. For a tabletop game, you might keep the Camera static for a full view of the board. For a first-person shooter, you would attach the Camera to the player and place it at the character's eye level. For a racing game, you would probably want a third person camera to follow your player's vehicle.
An interesting particularity of a camera is its Depth. Cameras are drawn from low to high Depth; that means that a Camera with a Depth of 2 will be drawn on top of a Camera with a lower depth. You can also adjust the values of the Normalized View Port Rectangle property to change the size and position of the Camera's view onscreen. You can use this property to create multiple smaller cameras, useful for map views, rear-view mirrors, etc.
Other very important properties are the near and far Clip Planes; these determine where the camera starts to render and how far it should go. The near and far clip planes, together with the planes defined by the Field Of View of the camera (commonly abbreviated FOV, the width of the camera's view angle, measured in degrees along the local Y axis), describe what is popularly known as the camera frustum. In order to be efficient, Unity renders only the part of the scene that is inside that frustum; everything else is still there but is not rendered, to optimize the performance of the game. This technique is called Frustum Culling.
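The properties discussed above are all exposed to scripts; the following sketch (a generic example with arbitrary values, not the project's configuration) sets up a small secondary camera drawn on top of the main view:

using UnityEngine;

// Hypothetical example: configure a small overlay camera (e.g. a rear-view mirror).
public class OverlayCamera : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.depth = 2;                                  // drawn on top of lower-depth cameras
        cam.rect = new Rect(0.7f, 0.7f, 0.25f, 0.25f);  // normalized viewport: top-right corner
        cam.fieldOfView = 60f;                          // vertical FOV in degrees
        cam.nearClipPlane = 0.3f;                       // frustum starts here...
        cam.farClipPlane = 100f;                        // ...and ends here
    }
}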
4.2 Render textures
Render textures are similar to normal textures, but they distinguish themselves by being updated constantly while the game runs. To use them, you create a new Render Texture and designate the camera you want to record the image from to render into it; this is done by giving the reference of the render texture you created to the Target Texture field of the camera. The render texture inspector can be an invaluable debugging tool for effects that use render textures, because it displays the current contents of the render texture in real time.
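Although the project wires this up in the editor, the same setup can be expressed in a few lines of code (a sketch; the texture size and the object references are illustrative):

using UnityEngine;

// Hypothetical sketch: make a camera render into a texture shown on a monitor mesh.
public class LiveFeedSetup : MonoBehaviour
{
    public Camera feedCamera;       // the camera filming the building
    public Renderer monitorScreen;  // the 3D model of the monitor

    void Start()
    {
        RenderTexture feed = new RenderTexture(512, 512, 16);
        feedCamera.targetTexture = feed;            // the camera now renders into the texture
        monitorScreen.material.mainTexture = feed;  // the material displays it every frame
    }
}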
So in order to create a live-feed monitor we need a few things: a camera to record, a texture in which the image will be stored and from which we will create a material, and a 3D model to apply the material on.
Figure 4.2: The process steps
In the top right corner of figure 4.2 we can see the selected camera and its frustum; since it is a live-feed camera we can also give it a certain path to follow using an animation. In the bottom right there is the render texture inspector, where we can check whether the image rendered by the camera is correct. In the bottom left there are the texture and material that make use of the data, and finally there is the model with the material applied to it.
4.3 User Interface
The user interface is the means by which the user and a computer system interact, in particular through the use of input devices and software. Unity has a UI system which allows us to make interfaces quickly and intuitively. The core component of the system is the Canvas, the area inside which all user interface elements should be placed. All the elements in the canvas are drawn in the same order they appear in the hierarchy: the first child is drawn first and the second is drawn after, therefore if they overlap the latter one will be on top of the previous.
A key property of the UI system is the Anchor Presets; it can be found in the upper left corner of the rect transform component. Clicking the button brings up a dropdown menu from which you can quickly select the most common anchoring options. The UI elements can be anchored to the sides or middle of the parent, or stretched together with the parent size. This is very useful for preserving the desired positioning and size of the user interface independently of the screen resolution.
The canvas has a setting called render mode which changes the way the canvas is displayed. The first one is called Screen Space – Overlay; it places its elements on the screen, rendered on top of the scene, like the name suggests. The canvas will also automatically change its size to match the resolution of the display. The next one is Screen Space – Camera; it works similarly to the previous mode, but here the canvas is placed at a given distance in front of the specified camera. That means the UI elements are rendered by that camera and follow its settings, which can lead to changes in the appearance of the UI. The third mode is World Space; we use this mode for our user interface, as you can see in figure 4.3. In this mode the size of the canvas and its elements is set manually and the object behaves like any other one in the scene; this is useful for UIs that are meant to be a part of the world. This is also known as a "diegetic interface".
The user interface elements are structured in two categories: visual and interaction components. The main visual components are the Text and Image components, and the interaction ones are Button, Toggle, etc. In figure 4.4 we see the canvas with three buttons; these also have a text and image component in their composition, and each one has an event to define what happens when it is clicked. The behaviour is selected from a function of a previously set script.
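Button events can also be wired up from code instead of the Inspector; a minimal sketch (the button reference and method name are illustrative):

using UnityEngine;
using UnityEngine.UI;

// Hypothetical example: react to a UI button click from a script.
public class MenuButtons : MonoBehaviour
{
    public Button onlineButton;

    void Start()
    {
        onlineButton.onClick.AddListener(OnOnlineClicked);
    }

    void OnOnlineClicked()
    {
        Debug.Log("Online button pressed - start connecting here.");
    }
}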
Figure 4.3: The canvas in world space
Figure 4.4: The button with its set event
4.4 What we achieved
We have now gained some knowledge about cameras and how rendering works in Unity, and we have created a live feed monitor which displays images rendered in real time. We also intertwined it with a diegetic interface that enables the user to immerse himself in the virtual world we are creating.

Chapter 5
Adding an automatic tour mode
At this point we have our scene containing the means to interact with the user through the UI and to spawn a three dimensional model which can be controlled with input from the keyboard. The user can now roam freely in our tour scene and check out the environment, but since the user does not know much about the location he is in, we would like to make an automatic tour mode that showcases the key points in that location and gives him a brief tour around first, to make him more comfortable with the location.
To implement such a behaviour we have to make use of the Navigation System. In this section we will describe Unity's navigation system and show how to use it to implement an automatic tour mode for our character.
5.1 Navigation System
Figure 5.1: The key components of the NavMesh
The Navigation System is the system that allows you to create characters which can navigate intelligently through the game world. It is essential for your characters to have the ability to understand that they need to take the stairs to reach the second floor, or to see a wall as an impassable obstacle. The four main navigation pieces, which can also be seen in figure 5.1, are the following:
Navigation Mesh is a data structure which describes the surfaces of the game world that can be walked on and allows the system to find a path from one location to another in the game world. The data structure is built, or baked, automatically from the level geometry. This is the most important component of the navigation system; every other component requires a navigation mesh to work properly.
NavMesh Agent is a component needed to create characters which avoid each other while moving towards their goal (a minimal usage sketch follows this list). Agents can only move around the world on a NavMesh and they know how to avoid each other as well as moving obstacles. The agent is shaped like an upright cylinder; we can change its size by altering the radius and height properties, which are specified in two different places. The NavMesh bake settings describe how all the agents interact with static world geometry, while the NavMesh agent properties describe how the agent interacts with other agents and obstacles. The cylinder moves with the object but always remains upright, even if the object itself rotates. The cylinder shape is necessary to detect and respond to collisions with other agents and obstacles; this only works with other NavMesh components and has no interaction with other elements.
Off-Mesh Link is a component that allows you to incorporate navigation shortcuts, which are often useful when you want to move the character over some terrain part that is not walkable. For example, jumping over a ditch or a fence, or taking an elevator, can all be described as off-mesh links.
NavMesh Obstacle is a component that allows you to have obstacles which change their position at runtime and which the agents should avoid while navigating the world. A couple of good examples would be a barrel or a crate controlled by the physics system. Usually an obstacle is identified as a static element, but with the help of this component, while the object is moving the agents do their best to avoid it, and once the obstacle becomes stationary it carves a hole in the NavMesh so that the agents can change their paths to go around it; if the stationary obstacle is blocking the path the agents can find a different route.
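The sketch below shows the minimal use of the agent component (assuming a baked NavMesh and a target transform; the class itself is a generic illustration, not the project's script):

using UnityEngine;
using UnityEngine.AI;

// Hypothetical example: send a NavMesh agent towards a target position.
public class GoToTarget : MonoBehaviour
{
    public Transform target;
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        agent.SetDestination(target.position);   // path is computed on the baked NavMesh
    }

    void Update()
    {
        if (!agent.pathPending && agent.remainingDistance < 0.5f)
            Debug.Log("Destination reached");
    }
}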
5.2 The preset route
Right now we have an understanding of how the navigation system works and know that we can use it to create an agent that navigates from point A to point B. Next we will see how exactly we give the agent its target and how we have the set of points of interest visited while in auto mode.
We implemented this behaviour by storing the positions of the key points in a vector and sorting them in the order that we find fitting. After the user's character spawns, the automatic mode is enabled by default and the character starts moving towards the first position in the vector.
When the agent reaches the first location, a collision with a trigger component happens; that tells the agent that we reached our target and we update the target with the next one in the list. After we reach the last location the automatic mode disables itself and the player regains control of the character.
// get the array of objects tagged Objective and sort it
targetList = GameObject.FindGameObjectsWithTag("Objective");
IComparer myComparer = new listSorter();
Array.Sort(targetList, myComparer);
// initial destination
navAgent.SetDestination(targetList[i].transform.position);

// when we collide with a point of interest
private void OnTriggerEnter(Collider other)
{
    if (other.CompareTag("Objective") && navAgent.enabled)
    {
        i++;
        if (i < targetList.Length)
        {
            navAgent.SetDestination(targetList[i].transform.position);
        }
    }
}
5.3 Adding a toggle
We now have a character that, after spawning, goes through all the points we have selected until it reaches the end and then stops. But what if the user wants to stay a few minutes at the first point, examine it, and then move on to the next point where he would also want to stop and look around, pausing the tour for an arbitrary amount of time?
To achieve that we simply design a button as a toggle that disables or enables the script and components that control the navigation system, as you can see in figure 5.2. On toggle we also make sure that we do not forget the point where the user disabled the mode, so that on enable we can continue from that point. And since we now have a toggle, we also make it so that after the tour is finished the user can press the toggle key to start it again if he so desires.
Figure 5.2: The toggle
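Under the hood, the toggle only needs to flip the components that drive the automatic movement; a sketch of such a handler (the field names are illustrative, not the project's exact script):

using UnityEngine;
using UnityEngine.AI;

// Hypothetical sketch: switch between automatic tour mode and manual control.
public class TourToggle : MonoBehaviour
{
    public NavMeshAgent navAgent;          // drives the automatic tour
    public MonoBehaviour manualController; // the player's own movement script

    public void Toggle()
    {
        bool auto = !navAgent.enabled;     // flip the current state
        navAgent.enabled = auto;           // stop or resume the agent-driven movement
        manualController.enabled = !auto;  // give control back to the user when auto is off
    }
}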
5.4 Closest objective
At this point the character can make a tour, it can repeat the tour as many times as the user likes, and it can stop at any point in the tour and then resume. Because we do not only stop the character in one point but instead give him the possibility to roam freely around the location, why would we resume the tour from a point that might be very far from the user's current location?
That does not seem like the optimal solution, so we will write a function that calculates the distances from the character's current location to all the points of interest on the map; when the user resumes the tour, the character goes back into automatic mode and heads to the objective with the shortest distance from him. We can see how we make use of this function in figure 5.3.
Figure 5.3: The redirect function
We obtain the desired behaviour by going through all the positions in the vector, calculating the distance between each of them and the position of the agent, and then setting the destination to the closest point. In order to make sure that we have run through all the positions in the vector, we yield the execution of the function until the calculation of the new path finishes, with the help of a coroutine.
IEnumerator CalculateDestinations() {
    float minimDistance = 1000;
    for (j = 0; j < targetList.Length; j++) {
        navAgent.SetDestination(targetList[j].transform.position);
        while (navAgent.pathPending)
            yield return null;
        if (navAgent.remainingDistance < minimDistance)
        {
            minimDistance = navAgent.remainingDistance;
            i = j;
        }
    }
    if (targetList.Length == i)
    {
        i = 0;
    }
    navAgent.SetDestination(targetList[i].transform.position);
}
5.5 What we ended with
After going through this chapter we have gained a better grasp of the navigation system implemented in Unity and of the key concepts and elements that drive it.
At this point we can implement an automatic mode that gives the user a tour of the location, which can be interrupted at any point for the user to take manual control of the character and spend as much time as he sees fitting, or to continue the tour by himself.
We have also made it so that after the interruption of the automated tour it resumes from the point closest to the agent, instead of starting over from the beginning or going back to the last visited spot. As a last touch we have added the possibility for the user to replay the tour as many times as he desires.
We will continue to use the navigation system to create intelligent movement later in the project, for the non playable characters controlled by the computer. These will follow preset routes after a certain requirement is met, as a way to create an interactive and reactive environment without the need for another human being to be present.

Chapter 6
How to do a quest system
Our reasoning behind implementing a quest system is that through such a system the user can interact with the virtual world, thereby obtaining a sense of immersion. Its progression-based nature makes sure the user always has a goal, something to work towards. It can keep the user engaged in the virtual environment for much longer than just doing the tour, and the exploratory nature of the quests can reveal to the users some parts that they might have missed without engaging with the system.
To make this system more interactive and natural we will introduce it using some computer controlled non playable characters; these NPCs will deliver the quests. This approach has the upper hand over a typical menu based system because it keeps the illusion of the virtual reality uninterrupted.
In this chapter we will use some terms previously mentioned and go into more detail, since they tie in with this segment.
6.1 Introducing the Rigidbody
Because we have opted to use 3D models to give the quests and interact with the user, we will continue a discussion started in the movement chapter on how to introduce them into the world and make them more than stone statues.
For this a key component is the Rigidbody, whose main purpose is to enable physical behaviour for the object it is attached to. With it attached, the object responds to gravity, and if the object also has at least one Collider component added it will be moved by incoming collisions.
By adding a rigidbody, script controlled movement such as directly altering the position or rotation should no longer be used. If we desire to change the object's position in space then we have to apply forces to it and let the physics take over.
If the need arises to move the object through scripts and still have it use a rigidbody, there is a property called Is Kinematic; this property removes the control from the physics engine, which means the collisions are affected as well, not only the movement.
A very important feature of the rigidbody component is the "sleeping" mode. It occurs when a rigidbody is moving slower than a minimum speed; no processor time is spent updating the rigidbody until it is "awoken".
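The two ways of moving a rigidbody mentioned above look roughly like this in code (a generic sketch; the force value and key bindings are illustrative):

using UnityEngine;

// Hypothetical example: physics-driven versus kinematic movement of a rigidbody.
public class RigidbodyModes : MonoBehaviour
{
    Rigidbody rb;

    void Start() { rb = GetComponent<Rigidbody>(); }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
            // Physics-driven: apply a force and let the engine integrate the motion.
            rb.AddForce(Vector3.up * 5f, ForceMode.Impulse);

        if (Input.GetKeyDown(KeyCode.K))
            // Kinematic: the script takes over; gravity and collisions no longer push it.
            rb.isKinematic = !rb.isKinematic;
    }
}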
6.2 Explaining colliders
The next big component of an NPC is the collider. These invisible components should be rough approximations of an object's shape, because this increases the efficiency of physics collisions while having almost no noticeable effect on gameplay.
The simplest colliders have the lowest processor usage and are called primitive collider types. The most common 3D primitive colliders are the box, sphere and capsule.
In case the primitive colliders are not as accurate as we need, we can use the mesh collider to match the exact shape of the object. These colliders use much more processing power and should be used with performance in mind. They do not require a rigidbody component and are commonly used to create walls, floors or other objects that will not change their position.
In order to simulate different properties for different surfaces, physics materials are used; these materials give the object different properties, for example an ice material will be slippery while a rubber one will have a lot of friction. Obtaining the expected behaviour might take a lot of trial and error.
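For instance, a slippery "ice" surface could be set up from code as follows (a sketch; the friction values are illustrative starting points):

using UnityEngine;

// Hypothetical example: give a collider an ice-like, low-friction physic material.
public class IceSurface : MonoBehaviour
{
    void Start()
    {
        PhysicMaterial ice = new PhysicMaterial("Ice");
        ice.dynamicFriction = 0.02f;
        ice.staticFriction = 0.02f;
        ice.frictionCombine = PhysicMaterialCombine.Minimum;

        GetComponent<Collider>().material = ice;
    }
}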
A very important feature in implementing the quest system is the Trigger property. The scripting system detects when collisions occur and can initiate actions, but we will use the engine to detect when one collider enters the space of another without the objects literally colliding. If the collider is configured as a trigger it will not behave as a solid object and will allow others to pass through it. We can add behaviours on collision by writing the necessary code in the delegated functions.
6.3 Using the components
Because we need the NPCs to be mobile, we will use an animation technique similar to the one we already used for the player; the few differences are a much smaller range of animations and the fact that the movement will be controlled by the navigation system.
The animator contains only three states, as you can see in figure 6.1. The talking animation triggers when the user collides with the NPC and begins the interaction process; after that process is over the NPC goes into the walking state and changes its position in the world. An interesting feature we have used for the professor is mixing the walking and talking animations by using an avatar mask; such a mask lets us use only the lower body portion from the walking animation and the upper body from the other, without creating a new animation file.
Figure 6.1: The professor animation states
The animation states are changed from the collision script. We have two bools, walk and talk, which determine what animation should be played; if the walking animation is running after the "GoTo()" function is used, we give control to the navigation system until the NPC reaches its target location.
public void GoTo() {
    animator.SetBool("Walk", true);
    animator.SetBool("Talk", false);
}

void Update() {
    if (animator.GetCurrentAnimatorStateInfo(0).IsName("HumanoidWalk"))
        navAgent.isStopped = false;
    if (navAgent.remainingDistance < 0.5f) {
        navAgent.isStopped = true;
        animator.SetBool("Walk", false);
    }
}
6.4 Delivering the quests
We have explained how to create the actors that deliver the quests and how they behave in those situations, and now it is time to tackle the actual quest system. The way we designed this system, it is structured around three key scripts:
The first script we need is the CollisionCheck; this script manages the behaviours that happen after the player gets into a collision with a certain object. It can enable or disable user interface elements, start a countdown, open a door or finish a quest. It is the first one to start an event in the chain of events triggered by the user.
void OnTriggerStay(Collider other) {
    if (other.gameObject.GetComponent<UserInputCustom>().isLocalPlayer) {
        if (Input.GetKeyDown(KeyCode.E)) {
            pressKeyText.text = "";
            if (gameObject.CompareTag("Blackboard")) {
                triggerController.WriteOnBlackboard(other.gameObject);
            }
            if (gameObject.CompareTag("LightSwitch"))
            {
                other.GetComponent<Animator>().Play("UseItem");
                triggerController.LightSwitch();
            }
            ...
            ...
Next there is the TriggerController script; it contains all the actions ready to be used. These actions take the shape of a public function for each behaviour there is, and they can be called by the collision script when fitting.
public void WriteOnBlackboard(GameObject other) {
    inputField = GetComponentInChildren<InputField>();
    inputField.ActivateInputField();
    inputField.Select();
    collisionCheck = gameObject.GetComponentInChildren<CollisionCheck>();
    characterStats = other.GetComponent<CharacterStats>();
}
The last script is the QuestManager; the role of this manager is to keep track of everything related to a quest: which one is active, which one is available or not, whether the professors need to go to a location, or whether the player has finished a quest and should be rewarded.
public void GetQuest(GameObject gameObject) {
    if (gameObject.name.Equals("Professor1") && !lastSpeed) {
        if (firstSpeed) {
            if (!read) {
                dialogImage.enabled = true;
                talkText.enabled = true;
                talkText.text = "Hey could you..";
                read = true;
            }
            ...
            ...
6.5 What we accomplished
Now we have a group of non playable characters that can move from point to point, show some sort of intelligent behaviour, and interact with the user by distributing quests and providing interesting facts about the selected domain.
With the help of this system we are able to provide the user with an interactive experience and keep him immersed and interested in the tour for a longer period of time. On top of that, the quests do not only have showcasing value, but also change the way the character behaves in the world.

Chapter 7
Networking and Voice chat
We have already had an introduction to these technologies, but now we will go into more detail on how they work and how we made use of them.
7.1 Networking
Unity's multiplayer service is based on a high-level scripting API, more commonly referred to as the HLAPI. By using it we get access to commands which cover most of the requirements for multi-user applications without having to go into low level implementation details. It allows us to control the state of the network through a Network Manager; the games are hosted by a client which can also be a player; we can send or receive network messages, send commands from clients to servers or make remote procedure calls from the server to clients. The networking is integrated into the engine and editor, allowing us to work with components such as NetworkIdentity, NetworkBehaviour or NetworkAnimator.
The networking system is structured in such a way that if there is no dedicated server, one of the clients plays the role of the server; this is called a host. The host is a server and a client at the same time. The host is a special type of client called a LocalClient, while the other clients are RemoteClients, as you can see in figure 7.1. The LocalClient communicates with the server through direct function calls, since it is in the same process. The RemoteClients communicate with the server through a regular network connection.
Figure 7.1: The network structure
In the network system there is a player object associated with each connected user. Only our own character has the isLocalPlayer flag; this can be used to tell the camera which player to follow, or to make any kind of client side changes that should only be done for our player.
The movement is handled by the NetworkTransform component, which sends it to the server to synchronize every client, see figure 7.2.
Figure 7.2: Player diagram
In the game engine, GameObject.Instantiate spawns new game objects. In the server authoritative model of the network, spawning an object on the server means that the object will be created on all the clients connected to the server. Once it has spawned, state updates are sent to the clients whenever the object changes on the server; the same goes for destroying the object. Spawned objects are stored by the server, so if someone else joins later the objects are spawned on that client as well. These objects are identified by a unique network ID that is the same on both the server and the connected clients. This is necessary to route messages to objects and identify them.
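Spawning a registered prefab across all clients comes down to a single call on the server; a sketch (the prefab reference and class are illustrative, not the project's own spawner):

using UnityEngine;
using UnityEngine.Networking;

// Hypothetical sketch: create an object on the server and replicate it to every client.
public class ItemSpawner : NetworkBehaviour
{
    public GameObject itemPrefab;   // must be registered as a spawnable prefab

    [Server]                        // only runs on the server / host
    public void SpawnItem(Vector3 position)
    {
        GameObject item = Instantiate(itemPrefab, position, Quaternion.identity);
        NetworkServer.Spawn(item);  // assigns a network ID and spawns it on all clients
    }
}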
We added modifications to the default HLAPI implementation so that the first player who wants to start a room becomes its host and takes in a maximum of 20 other users. When a user wants to connect and there is already a room, he will automatically connect to the available one; this bypasses a lobby system to avoid fragmentation. We also added an offline button in case the user does not have a stable connection but would still like to try out the tour. Below is the code which handles creating or joining a room.
if (manager.matchMaker == null)
    manager.StartMatchMaker();
if (manager.matchInfo == null)
{
    if (manager.matches == null)
        manager.matchMaker.CreateMatch(manager.matchName, manager.matchSize,
            true, "", "", "", 0, 0, manager.OnMatchCreate);
    else
        manager.matchMaker.JoinMatch(manager.matches[0].networkId,
            "", "", "", 0, 0, manager.OnMatchJoined);
}
online = true;
7.2 Voice chat
For this feature we have to acquaint ourselves with the Audio system. Unity's system can import most audio file formats and can play sounds in 3D space; it can also record audio from any available microphone, which we will need in our tour.
It has two key components: an Audio Source attached to an object generates sound, and the sound is then picked up by the Audio Listener attached to another object, usually the camera. There is also a restriction on audio listeners: there can be only one in a scene, which means it will be placed on the player we spawn and only be enabled on local characters.
We can access the microphone from a script and create Audio Clips by direct recording. The Microphone class has an API to find available microphones and start a recording session. The clips contain the audio data used by the sources. The audio file formats supported are .aif, .wav, .mp3 and .ogg.
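Recording from the default microphone into a clip takes only a couple of calls; a sketch (the sample rate, clip length and key binding are illustrative, not the project's exact values):

using UnityEngine;

// Hypothetical sketch: capture microphone input into an AudioClip while a key is held.
public class PushToTalk : MonoBehaviour
{
    AudioClip recording;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.R))
            // null selects the default microphone; loop over a 1 s buffer at 16 kHz.
            recording = Microphone.Start(null, true, 1, 16000);

        if (Input.GetKeyUp(KeyCode.R) && Microphone.IsRecording(null))
        {
            Microphone.End(null);
            // "recording" now holds raw samples ready to be encoded (e.g. with Opus)
            // and sent over the network.
        }
    }
}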
The reason why we used Opus is that, due to its combination of the speech-oriented SILK algorithm and the lower-latency, MDCT-based CELT algorithm, it is the best audio format at any bitrate. It has replaced both Vorbis and Speex, and after it was standardized by the IETF it efficiently codes speech and general audio in a single format while still having a low enough latency to allow interactive communication, which is exactly what we need to implement a voice chat.
Figure 7.3: Opus Bitrate/Latency comparison
We have used the reference implementation called libopus in our application. Opus supports constant and variable bitrate encoding and has very short latency, making it ideal for voice over IP solutions like ours; you can see a graph with possible bitrate and latency combinations in figure 7.3. The algorithms are openly documented and a reference implementation is published.
Opus has been shown to be competitive in quality with formats with much higher delay, such as HE-AAC and Vorbis. In figure 7.4 we can see it performing better than other audio codecs at almost any bitrate.
Figure 7.4: Opus Quality/Bitrate comparison
7.3 What we obtained
By mixing together the networking system, the audio recording option in Unity and Opus to encode the audio data, we have obtained a working voice chat. It can be very useful in our tour because it enables the users to interact with each other over the network, overall creating a more enjoyable experience.
This could also allow a representative from the institution that provides the tour to come in and have a discussion with the people who are interested in knowing more about the location.

Chapter 8
Optimisation practices
8.1 General optimisations
The first thing we need to look at before starting to optimize our tour is the Profiler; this tool gives us important information about how exactly the processing power is distributed and is essential for finding flaws in the code. Several simple CPU optimizations can be universally applied, such as:
Unity does not use string names to address Animator, Material and Shader properties internally, therefore whenever we use a Set or Get method on those we should use the integer-valued overload instead of the string one (see the sketch after this list).
Integer math is faster than floating-point math, and floating-point math is faster than vector, matrix or quaternion math; that means that whenever we can, we should attempt to minimize the cost of individual mathematical operations.
A best practice is to eliminate all usage of Object.Find or Object.FindObjectOfType in production code. These APIs require Unity to iterate over all GameObjects and Components in memory.
When applying post-processing effects we should look to bundle them into one big post processing script that handles everything at the same time, rather than multiple individual scripts.
Compressing textures can be crucial for the project, depending on the target platform.
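The first point from the list above, for example, means caching the hashed ID once and reusing it (a generic sketch; the parameter and property names are illustrative):

using UnityEngine;

// Hypothetical example: avoid per-call string hashing of animator/shader properties.
public class CachedProperties : MonoBehaviour
{
    static readonly int ForwardId = Animator.StringToHash("Forward");
    static readonly int TintId = Shader.PropertyToID("_Color");

    Animator animator;
    Material material;

    void Start()
    {
        animator = GetComponent<Animator>();
        material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        animator.SetFloat(ForwardId, 1f);        // integer overload, no string lookup
        material.SetColor(TintId, Color.white);  // same idea for shader properties
    }
}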

Chapter 9
Application description
Now we will give a brief introduction to our virtual tour and how the user should interact with it.
We note that we have already introduced a short text-based tutorial shown to the user whenever he connects to the game.
After starting, the user finds himself in a room with his character looking towards a monitor; there he can see a camera doing a tour of the building and three interactive buttons. There is an "Online" button that allows the user to connect to an existing room or create one himself, an "Offline" button that starts the tour even without an internet connection, and an "Exit" button that closes the application.
After connecting to the room the user is able to move through the world through his character. The movement keys have been selected to be intuitive: "WASD" are the keys for moving around, "Space" triggers a jump and "LeftShift" changes the running speed. The camera follows the third person concept and can be controlled by moving the mouse. The user is notified by a line of text whenever he can interact with an element of the world; the interaction key is "E".
Because we have also implemented an automatic mode, there is a key to enable or disable that mode: "F1". If the user wants to interact with a whiteboard he first has to get close to it and then left-click on it to enable writing; disabling follows the same procedure. To record for the voice chat the user holds down the "R" key. To exit the application the user presses the "Esc" key.

Chapter 10
Conclusion
First of all, after analysing the current state of virtual tours, we have come to the conclusion that right now the most used type of tour is the one based on photography, most commonly 360 degree photos. Such tours are present on the websites of the Smithsonian museum, Harvard University or the official London virtual tour for visitors.
After taking notice of that, we examined what the advantages of those tours are and what they lack; the results have shown that while the photography technique has very high fidelity, it lacks depth, it has no sound and the angles from which you can look at objectives are limited.
In our implementation we are giving the user an experience much closer to reality. Not only do we provide the ability to immerse oneself in the tour and to observe an objective from many different angles, but what sets our technique apart is its interactive nature and the possibility to connect with people from all over the world who are interested in the same location as you; as everyone knows, we live in the era of connectivity and tours can make much more use of it.
Thanks to this project we have learned how to use a professional game engine and came into contact with many essential terms and techniques which will prove very useful should we pursue a career in this field.
I believe that after our research the state of virtual tours in the world will change for the better, and our implementation can prove useful for others who look to create a virtual tour.