
Learning Analytics Solution for Building
Personalized Quiz Sessions
C.M. Mihăescu
Dept. of Computers and Information Technology
University of Craiova, Romania
[anonimizat]
O.M. Teodorescu
Dept. of Computers and Information Technology
University of Craiova, Romania
[anonimizat]
P.Ș. Popescu
Dept. of Computers and Information Technology
University of Craiova, Romania
[anonimizat]
M.L. Mocanu
Dept. of Computers and Information Technology
University of Craiova, Romania
[anonimizat]

Abstract — Almost any e-Learning system provides various functionalities, ranging from general ones such as accessing a course or communicating with peers, to more specialized functions such as providing personalized feedback or learning materials. Among the features that may increase the quality of on-line education environments is feeding learners with a personalized set of quizzes depending on their current knowledge level. In this paper, we propose a learning analytics system that builds personalized quiz sessions used to provide a personal ranking of the questions for each student: [anonimizat]/her experience and the experience of others in the learning process. The system uses the data provided by previous evaluations of students with five types of quizzes and a collaborative filtering algorithm based on the singular value decomposition (SVD) method for extracting the features of students and quizzes from a student-quiz matrix. Preliminary experimental results show that for students who interact with the system for the first time, the provided set of quizzes has an average difficulty level compared to the already existing answers, and as the student: [anonimizat]

Keywords—learning analytics, singular value decomposition, information visualization, user customization
I. INTRODUCTION
Tesys is the web e-learning platform most used by the students of the University of Craiova enrolled in distance education. It provides the resources and feedback they need: course materials that can be viewed or downloaded, quizzes that can be taken for each subject and for one or more chapters of a subject in order to test the skills gained after individual study, and the ability to receive feedback from professors or administrative staff through messages or videoconference.
A recent addition to Tesys was intended to extend the existing functionality regarding student (self-)assessment using quizzes, enabling the evaluation of students with multiple question types. Basically, this is similar to the implementation of a recommender system for students taking
a test on a specific subject and chapter, so that the web
platform provides a more personalized experience for its users. A collaborative filtering algorithm based on the singular value
decomposition (SVD) method was implemented for extracting
the features of the students and questions from a student-question matrix, which were then used to provide a personal ranking of the questions for each student: [anonimizat]/her experience and the experience of others in the learning
process. The SVD-based recommender was implemented
using the Apache Mahout math library.
E-assessments are an important part of any e-learning platform, as they test whether the learner has met the learning objectives. Learning Analytics provides ways of improving teaching, learning, organizational efficiency and decision making through the collection, analysis, use, and appropriate dissemination of student-generated, actionable data, with the purpose of creating appropriate cognitive, administrative, and effective support for learners. We describe several purposes for using Learning Analytics in terms of prospective beneficiaries [1].
• Individual learners. Reflection on personal achievements at a specific time as compared with other colleagues, with emphasis on shortcomings.
• Course managers. Getting the ability to correctly identify students at risk and students that need support.
• Course managers. Offering analytics feedback that improves course structure and learning assets (e.g., quiz formulation, examples, etc.).
• Administration. Helping teachers to plan support activities with learners or groups of learners in a timely manner.
• Administration. Assessing the effectiveness of the educational process in terms of learning curves and engagement.
We believe that the student's ability and the skills gained through study can be further tested and assessed by introducing more question types in addition to multiple-choice questions. Also, the results of a student during the year can predict the student's failure or success, and can be used to help the student and others in their self-improvement, which results in a greater understanding and fulfilment of the student's individual needs. In fact, e-assessments are useful in two directions, for both the learner and the professor: one is strengthening one's knowledge (primarily the learner's objective when studying a course) and the other is evaluating the learner's comprehension of the course (involving both the learner and the professor, as a means of self-evaluation and progress tracking versus progress evaluation and assessment of the effectiveness of the learning materials).
II. RELATED WORK
A. Learning Analytics
There are many definitions of learning analytics, but none of them is universally accepted. Perhaps the best definition of this term was stated at the 1st International Conference on Learning Analytics and Knowledge: "the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs" [2]. Cooper defined learning analytics as "the process of developing actionable insights through problem definition and the application of statistical models and analysis against existing and/or simulated future data" [3]. By actionable insights, the author indicates that "analytics is concerned with the potential for practical action rather than either theoretical description or mere reporting", whose conclusion "may lead a rational person to follow different courses of action per their values and factors not accounted for in the analysis". Another definition, proposed by E. Duval, states that "learning analytics is about collecting traces that learners leave behind and using those traces to improve learning" [4]. In [5], the learning analytics process is described as an iterative cycle defined by three major steps:
• Data collection and pre-processing. The first step refers to the procedure of collecting data from various sources, such as educational environments and systems. This step is very important for the successful discovery of useful patterns from the data. In many cases, the collected data is too large and/or involves several irrelevant attributes, which calls for data pre-processing (this step is also called data preparation) [7]. Data pre-processing also includes the step of transforming the data from the raw format into a more suitable format that can be used as input for the Learning Analytics algorithm. Several data pre-processing methods, borrowed from the data mining and/or machine learning fields, can be employed in this step. Some of these methods are: data modelling, user and session identification, data cleaning, data transformation, data integration, data reduction, and path completion [6], [7], [8].
• Analytics and action . Using the pre-processed data from
the previous step we set an objective for the analytics exercise. Several learning analytics techniques can be used to analyse the data and to reveal hidden patterns that
can provide a better learning experience. The analytics step includes the analysis and visualization of
information and several actions regarding that
information. The possibility of taking actions is the primary goal of the whole analytics process. These actions include monitoring, adaptation, prediction, intervention, assessment, analysis, personalization, recommendation, and reflection.
• Post-processing. The last step of the learning analytics process is data post-processing, which is fundamental for continuous improvement. This step can include compiling new data from additional data sources, identifying new indicators/metrics, refining the data set, determining new attributes required for the new iteration, modifying the variables of analysis, or choosing a new analytics method.
Further refinements of the learning analytics process are possible. For instance, according to David T. Jones, in his WordPress post [9] about the LAK11 MOOC paper "Learning analytics: Definitions, processes and potential", there are not three but seven processes of learning analytics: select, capture, aggregate and report, predict, use, refine, share.
Whatever definition or description we may choose, learning
analytics focuses on the collection of data and its usage to improve learning activities. Therefore, “learning analytics
seeks to capitalize on the modelling capacity of analytics: to predict behaviour, act on predictions, and then feed those results back into the process to improve the predictions over time [16] as it relates to teaching and learning practices. The study and advancement of learning analytics involves: (1) the development of new processes and tools aimed at improving
learning and teaching for individual students and instructors
and (2) the integration of these tools and processes into the practice of teaching and learning.”
B. Recommender Systems
A formal definition of recommender systems can be read in [14]: "a piece of software that helps users to identify the most interesting and relevant learning items from a large number of items". Recommender systems can be considered a subclass of information filtering systems that try to predict the 'rating' that a user would give to an item [15]. Three basic approaches are used in recommender systems:
• content-based filtering
• collaborative filtering
• hybrid filtering
The first type of recommender system (content-based) aims to analyse item descriptions to find those items that are of particular interest for the user [16]. The keywords used in a content-based recommender system describe what kind of item the user likes. More exactly, this kind of algorithm computes the similarity between items that the user liked in the past and the ones that can be recommended, and if the candidate items have a good similarity score a recommendation can be made. The various items that can be recommended are compared with the items that were previously rated by the user, and the ones that have a good matching score are recommended. This approach comes from the research area of Information Retrieval (IR) and filtering systems.
Most recommendation engines build profiles of the user's interests. Two types of information may be relevant for
a user profile [16]:
1 A model of the user’s preferences, i.e., a description of the
types of items that interest the user. There are many possible alternative representations of this description, but one common representation is a function that for any item
predicts the likelihood that the user is interested in that
item. For efficiency purposes, this function may be used to retrieve the n items most likely to be of interest to the user.
2 A history of the user’s interactions with the
recommendation system. This may include storing the
items that a user has viewed together with other
information about the user's interaction (e.g., whether the user has purchased the item or a rating that the user has given the item). Other types of history include saving queries typed by the user (e.g., that a user searched for an Italian restaurant in the 90210 zip code).
Collaborative Filtering (CF) denotes a class of algorithms that use other users and items along with their ratings (selection and purchase information can also be used), together with the target user's past actions, to recommend an item that the target user has not rated yet. The fundamental assumption behind this approach is that other users' preferences over the items can be used to recommend an item to a user who has not seen or purchased it before. CF differs a lot from the content-based methods because the user or the item itself does not play the most important role in the recommendation, but rather which users rated the item and how. The basic intuition behind collaborative filtering is summarized in the next three statements [14] (a minimal illustration follows the list):
• Personal tastes correlate within a given domain and information space.
• Users who agree now are likely to agree again in the future.
• To approximate the rating of a user, use users who have a similar taste.
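The sketch below illustrates this intuition with plain user-based collaborative filtering over a small, hypothetical rating matrix (the Tesys recommender itself is SVD-based, as described in Section III): a missing rating is predicted as a similarity-weighted average of the ratings that similar users gave to the item.

/** Minimal user-based CF sketch over a small rating matrix (0 = no rating yet). */
public class UserBasedCfSketch {

    /** Cosine similarity computed over co-rated items only. */
    static double similarity(double[] u, double[] v) {
        double dot = 0, nu = 0, nv = 0;
        for (int i = 0; i < u.length; i++) {
            if (u[i] > 0 && v[i] > 0) {
                dot += u[i] * v[i];
                nu += u[i] * u[i];
                nv += v[i] * v[i];
            }
        }
        return (nu == 0 || nv == 0) ? 0 : dot / (Math.sqrt(nu) * Math.sqrt(nv));
    }

    /** Predicts ratings[targetUser][item] from the other users' ratings. */
    static double predict(double[][] ratings, int targetUser, int item) {
        double num = 0, den = 0;
        for (int u = 0; u < ratings.length; u++) {
            if (u == targetUser || ratings[u][item] == 0) continue;
            double s = similarity(ratings[targetUser], ratings[u]);
            num += s * ratings[u][item];
            den += Math.abs(s);
        }
        return den == 0 ? 0 : num / den;
    }

    public static void main(String[] args) {
        // Rows = users, columns = items (e.g., questions); hypothetical ratings on 1..5.
        double[][] ratings = {
            {5, 3, 0, 1},
            {4, 0, 0, 1},
            {1, 1, 0, 5},
            {1, 0, 5, 4}
        };
        System.out.printf("predicted rating = %.2f%n", predict(ratings, 0, 2));
    }
}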
The last kind of filtering (hybrid filtering) uses both previously mentioned methods. All the basic approaches for recommender systems (collaborative, content-based, knowledge-based, and demographic techniques) have well-known shortcomings, like the cold-start problem for collaborative and content-based systems (what can we do with new users that have very few ratings?) and the knowledge engineering limitations [15] of the knowledge-based approaches. Hybrid recommender systems try to combine multiple techniques to achieve better results.
• Collaborative recommender systems: The system generates recommendations using only information about ratings from different user profiles. Collaborative systems find peer users with a rating background similar to the current user's and generate recommendations using this close relationship.
• Content-based recommender systems : This kind of system
can generate recommendations using two sources: the
attributes that were associated with the products and the
ratings that users previously gave them. Content-based recommenders reduce the recommendation to a user-specific classification problem and use a supervised learning algorithm to learn the user's likes and dislikes based on product attributes.
• Demographic recommender systems : The third kind of
recommender system provides recommendations based on the demographic profile of the user. Recommended products can be produced for different demographic
niches, by combining the ratings of users in those niches.
Recommender systems are a very good alternative to search algorithms because such systems help users discover items they might not have found by themselves. Recommender systems are often implemented using search engines indexing non-standard data. They remain an open research field in the data mining and machine learning areas.
III. PERSONALIZED QUIZ SESSIONS
The Tesys platform is a web application written in Java that makes use of web servlets and Apache technologies; the whole system is hosted on an Apache Tomcat 7 web server.
The current project's scope can be divided into two parts. The former part (development) refers to the introduction of more types of questions in the TesysWeb platform (which currently supports only multiple-choice questions), in order to offer a broader range of alternatives in the (self-)evaluation of students using the TesysWeb e-learning platform. The latter part (research) aims to use Learning Analytics to provide students with tailored learning pathways and to personalize the e-learning platform to fit the student's specific needs; this can be accomplished through the recommendation-based selection (collaborative filtering) of questions to be included in a test for a new student, or through information visualization in the form of a learning dashboard available for the student.
The database is MySQL and its original structure has been
adapted to allow new types of questions to be added. Also, the views (webmacro templates) were adapted for the student to be able to answer the different types of questions.
Within the scope of this project, the Mahout Math Java library was used for the implementation of a recommender system using SVD. The SVD algorithm in Mahout is "designed to reduce noise in large matrices, thereby making them smaller and easier to work on" [16]. In the matrix decomposition, it also performs feature selection automatically.
A. Singular Value Decomposition
Singular Value Decomposition (SVD) is a factorization of a real or complex matrix. It is the generalization of the eigendecomposition of a positive semidefinite normal matrix (for example, a symmetric matrix with positive eigenvalues) to any m x n matrix via an extension of the polar decomposition. Formally, the singular value decomposition of an m x n real or complex matrix M is a factorization of the form M = UΣV*, where U is an m x m real or complex unitary matrix, Σ is an m x n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n x n real or complex unitary matrix (V* denotes the conjugate transpose of V). The diagonal entries σi of Σ are known as the singular values of M. The columns of U and the columns of V are called the left-singular vectors and right-singular vectors of M, respectively. The singular value decomposition can be computed using the following observations:
• The left-singular vectors of M are a set of orthonormal eigenvectors of MM*.
• The right-singular vectors of M are a set of orthonormal eigenvectors of M*M.
The non-zero singular values of M (found on the diagonal entries of Σ) are the square roots of the non-zero eigenvalues of both M*M and MM*.
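As a sketch of how the decomposition is applied to a student-question matrix, the fragment below factorizes a small grade matrix, keeps only the k largest singular values and uses the low-rank reconstruction to estimate grades for unanswered questions. It assumes the mahout-math SingularValueDecomposition class and its JAMA-style accessors (getU, getS, getV); the matrix values and class name are illustrative, not data or code from Tesys.

import org.apache.mahout.math.DenseMatrix;
import org.apache.mahout.math.Matrix;
import org.apache.mahout.math.SingularValueDecomposition;

/** Sketch: low-rank SVD reconstruction of a student-question grade matrix. */
public class SvdRecommenderSketch {

    public static void main(String[] args) {
        // Rows = students, columns = questions; 0.0 marks a question not yet answered.
        double[][] grades = {
            {4, 5, 0, 2},
            {3, 4, 1, 0},
            {0, 1, 5, 4},
            {1, 0, 4, 5}
        };
        Matrix m = new DenseMatrix(grades);

        // Factorize M = U * S * V^T.
        SingularValueDecomposition svd = new SingularValueDecomposition(m);
        Matrix u = svd.getU();
        Matrix s = svd.getS();
        Matrix v = svd.getV();

        // Keep only the k largest singular values (noise reduction / feature extraction).
        int k = 2;
        Matrix sTrunc = new DenseMatrix(s.rowSize(), s.columnSize());
        for (int i = 0; i < k; i++) {
            sTrunc.set(i, i, s.get(i, i));
        }

        // Entries of the reconstructed matrix estimate grades for unanswered questions.
        Matrix approx = u.times(sTrunc).times(v.transpose());
        System.out.printf("estimated grade for student 0, question 2: %.2f%n",
                approx.get(0, 2));
    }
}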
B. Database Design
In the previous implementation, Tesys could store only
multiple-choice questions. Below is a portion of the database relational model for the question-related tables, before the introduction of multiple question types.

[Figure: Database structure for question-related tables before the introduction of multiple question types]
In this structure, the field definitions of the questions ("intrebari") table are presented in the following table:

Table 1: Database field definitions for the questions table before the introduction of multiple question types

Field name | Field type | Field definition
id         | integer    | the primary key of the table; identifies each question uniquely
capid      | integer    | a foreign key to the chapters table; identifies the chapter the question belongs to
text       | text       | the text of the question; can contain links to images uploaded to the servlet's images folder
corectans  | varchar    | the correct answers of the question (values from A to F)
visibleans | varchar    | the visible answers for the student to choose from (values from A to F)
stareAD    | char       | the state of the question (A - activated, D - deactivated)
stareTE    | char       | the context of the question (T - test, E - exam)
lastUpdate | datetime   | the last datetime when the question was updated
C. Application Design
The package structure of the application, which aims to build personalized quiz sessions, is presented in the figure below.

Figure 1: Package structure for the implemented functionalities

Quiz sessions are built by first implementing the multiple-question and data-analysis functionalities. The packages that were modified or created are:
ucv.idd.portal.beans – contains the classes involved in the data access layer, as an abstraction for either the database objects or the business logic. The next two packages were newly created for the current implementation, to separate the business logic classes from the database class models.
ucv.idd.portal.beans.dataanalytics – contains the data
analytics helper objects. It provides the classes used to get data from the database for the student dashboard and serves as a viewmodel for the dashboard webmacro template.
ucv.idd.portal.beans.quiz – contains classes for the question evaluation business logic. It provides the interfaces needed to map class data to a uniform structure that allows easier access to the data required for result computation. The strategies for result computation are also defined in this package, one for each question type (a hypothetical sketch is given after this package list).
ucv.idd.portal.managers – contains the managers for each database entity model. Managers provide methods with database access for querying/altering data or other server resources (e.g., files used to store chapters, question text images, etc.). They encapsulate unitary functions that the entity model would be expected to provide for enabling the user to perform specific actions or steps in an action. In our implementation, the existing QuestionManager was refactored to remove code duplicates and to provide a clear separation between database-access-specific functions for queries and the functionalities of the underlying manager. Two new managers were introduced, one for question grading, the other one for data analytics.
ucv.idd.portal.modules.actions – contains the business logic
of the application, that is, classes directly involved with the web servlet actions that the actors perform. Actions are divided into classes by role/module. An existing Student action class was modified to retrieve and display the correct
template and parameters attached to it using webmacro.
ucv.idd.utils – contains utility classes for the application.
Some examples of the added classes are: NumericalUtils – a utility for numerical values; contains methods to round real numbers to 2 decimal places and is used in the context of data computation and visualization; Pair – a generic class that pairs two values; it was used in the context of matching questions; Range – defines an interval [low, high] with an inclusion method for checking if a value fits a range; it was used for numerical questions for checking correct answers within a range of possible values; StringUtils – defines HTML-to-database value converters; ListUtility – provides list manipulation extension methods.
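To illustrate the strategy-based result computation mentioned for the ucv.idd.portal.beans.quiz package, the sketch below shows what a per-question-type grading strategy could look like; all class, field and method names here are hypothetical, since the actual Tesys classes are not listed in this paper.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** Hypothetical sketch of per-question-type grading strategies. */
public class GradingSketch {

    /** Minimal stand-in for the question bean; the real Tesys bean differs. */
    static class Question {
        Set<String> correctAnswers;
        double defaultGrade;
        Question(Set<String> correctAnswers, double defaultGrade) {
            this.correctAnswers = correctAnswers;
            this.defaultGrade = defaultGrade;
        }
    }

    /** One grading strategy per question type. */
    interface GradingStrategy {
        double computeGrade(Question question, Set<String> selectedAnswers);
    }

    /** Illustrative multiple-choice strategy: full grade only for an exact match. */
    static class MultipleChoiceGrading implements GradingStrategy {
        @Override
        public double computeGrade(Question question, Set<String> selectedAnswers) {
            return selectedAnswers.equals(question.correctAnswers)
                    ? question.defaultGrade : 0.0;
        }
    }

    public static void main(String[] args) {
        Question q = new Question(new HashSet<>(Arrays.asList("A", "C")), 6.0);
        GradingStrategy strategy = new MultipleChoiceGrading();
        System.out.println("grade: "
                + strategy.computeGrade(q, new HashSet<>(Arrays.asList("A", "C"))));
    }
}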
The old question template handled multiple-choice
questions by displaying a table with the text, image and
available choices of the question. The template received, as a
parameter, the Question object from which it extracted the
data to display. Each template has a localization handler which translates labels based on a key. The syntax for invoking the
translation of a key is:
$mh.getMessage("Key").
Translations are stored in a resource bundle property file per language (two languages are supported, English and Romanian) in the form of key-value pairs (e.g., Key=Value). For the multiple-choice question, only answers numbered from A to F were accepted. The template checked, for each answer number, if it was present in the visible answers of the question, in which case a checkbox was displayed. A sample of webmacro code for displaying the answer numbered A looks like:

#if($question.getVisibleans().indexOf("A")>=0) {
<input class="inputcb" type="checkbox" name="A"/>A }
The existing template was modified so that different visual
elements are shown based on the question type. A sample of webmacro code for determining the question type can be seen below; it assumes as default a multiple choice question when no type is defined and uses the question type uid to determine question type.

#set $questionType = "MULTCH"
#if ($question.getQuestionType() != null &&
!$question.getQuestionType().getUid().isEmpty()) {
#set $questionType =
$question.getQuestionType().getUid()
} D. Testing
For testing, a separate Java application has been
implemented. The application uses JUnit for small test cases to assert recommendation validity. The automatic test
generator implemented for testing can generate: students based
on four archetypes (bad, medium, good, very good); questions
of all types implemented in the Tesys platform; and student
grade evaluations for a list of questions. Each archetype has a range of success probabilities and each student is randomly generated with a probability of answering a question correctly within that range. The archetypes' probability domains are non-overlapping, i.e., 0-49%, 50-69%, 70-89% and 90-99%.
All five types of questions implemented on the platform are supported by the question generator. For each question, a question type is randomly picked. Based on the question type, the generator knows whether it has to generate only a single value representing the default grade, or whether it also needs to generate a number of alternatives for matching and multiple-choice questions (representing the number of choices the student has to match or choose from, respectively). The default grade and the number of alternatives are randomly chosen from a range of values (by default 1 to 6, the latter also being the maximum number of multiple-choice variants in the Tesys platform, configurable in the test application). The number of alternatives is later used to generate grades from a possible range of solutions. Also, based on a probability of a question being unanswered, some questions are marked as unanswered by any student and are ignored by the grade generator. By default, this probability is set to 20%.
One of the most important parts of the test application is the generation of grades for questions answered by students. Based on the student archetype, the generator starts by randomly choosing a success percentage within the expected bounds. This percentage is then used to decide, via a Bernoulli trial, whether the student answers the question correctly (possibly partially) or incorrectly. In case of a miss, a grade of 0 is set. In case of a hit, questions which allow partial scoring further generate a random number for the partial grade (the other ones simply get the default grade for a hit and 0 for a miss); this number is then quantized to a multiple of the alternative grade (computed by dividing the default grade by the number of alternatives), i.e., the partial grade equals the alternative grade multiplied by the rounded ratio between the generated number and the alternative grade.
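The described logic can be summarized by the following sketch (our reconstruction with hypothetical names, not the actual generator code): a Bernoulli trial with a success rate drawn from the archetype's range decides hit or miss, and partial grades are quantized to multiples of the alternative grade.

import java.util.Random;

/** Reconstruction (with hypothetical names) of the described grade generation step. */
public class GradeGeneratorSketch {

    private static final Random RND = new Random();

    /** Draws a success probability within the archetype's range, e.g. [0.70, 0.89]. */
    static double successRate(double low, double high) {
        return low + RND.nextDouble() * (high - low);
    }

    /** Generates one grade for one student-question pair. */
    static double generateGrade(double defaultGrade, int alternatives,
                                boolean partialScoring, double successRate) {
        boolean hit = RND.nextDouble() < successRate;   // Bernoulli trial
        if (!hit) return 0.0;                           // miss: grade 0
        if (!partialScoring) return defaultGrade;       // hit, no partial scoring

        // Hit with partial scoring: random grade quantized to multiples of the
        // alternative grade (defaultGrade / alternatives).
        double alternativeGrade = defaultGrade / alternatives;
        double raw = RND.nextDouble() * defaultGrade;
        return alternativeGrade * Math.round(raw / alternativeGrade);
    }

    public static void main(String[] args) {
        // A "good" archetype student (70-89%) answering a 6-alternative matching question.
        double grade = generateGrade(6.0, 6, true, successRate(0.70, 0.89));
        System.out.println("generated grade: " + grade);
    }
}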
Apart from manual testing, a test generator for student, question and grade data has been implemented. It can generate input data for the SVD recommender and save/load it to/from XML format using JAXB.
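Saving and loading the generated data with JAXB reduces to the standard marshal/unmarshal calls; the TestData container class below is hypothetical and stands in for the actual generator output classes.

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

public class XmlPersistenceSketch {

    /** Hypothetical container for the generated students, questions and grades. */
    @XmlRootElement
    public static class TestData {
        public int studentCount;   // illustrative field only
    }

    public static void main(String[] args) throws JAXBException {
        JAXBContext ctx = JAXBContext.newInstance(TestData.class);
        File file = new File("testdata.xml");

        // Save the generated data set to XML.
        TestData data = new TestData();
        data.studentCount = 100;
        Marshaller marshaller = ctx.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(data, file);

        // Load it back for a later test run.
        TestData loaded = (TestData) ctx.createUnmarshaller().unmarshal(file);
        System.out.println("students: " + loaded.studentCount);
    }
}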
IV. CONCLUSIONS AND FUTURE WORK
We started this project with the intention of giving students a wider range of possibilities to test their comprehension of a course with new types of questions, offering them the opportunity to show their knowledge, skills and abilities in a variety of ways. Also, we wanted to take the first steps towards a more personalized experience for platform users, providing them with tailored learning pathways. Most importantly, we thought it useful to gain a better understanding of the learning analytics process and of the variety of ways to model and implement a recommender system.
The current implementation could be extended to also
offer content-based recommendation on learning materials or
suggest actions such as chapter review to a student based on
his/her test results. By improving its suggestions over time, the recommender system could predict user performance and point out situations when an intervention might be needed.
The new question types were deployed on the platform starting last October, and we estimate that, after a year of usage, the recommender system can be put into place for a better experience.
REFERENCES
[1] S. M. Powell, "Institutional Readiness for Analytics," CETIS Analytics
Series, vol. 1, no. 8, p. 11, 2012.
[2] G. Siemens, "Learning and Academic Analytics," 2 August 2011.
[Online]. Available: http://www.learninganalytics.net/?p=131 .
[3] A. Cooper, "What is Analytics? Definition and Essential
Characteristics," JISC CETIS Analytics Series, vol. 1, no. 5, p. 3,
November 2012.
[4] E. Duval, "Learning Analytics and Educational Data Mining," 30 Jan.
2012. Available: https://erikduval.wordpress.com/2012/01/30/learning-analytics-and-educational-data-mining/ .
[5] M. A. Chatti, A. L. Dyckhoff, U. Schroeder, and H. Thüs, "A Reference Model for Learning Analytics,"
International Journal of Technology Enhanced Learning (IJTEL), vol. 4,
no. Special Issue on State-of-the-Art in Technology Enhanced Learning,
pp. 318 – 331, 2012.
[6] Han, J., Pei, J., & Kamber, M. (2011). Data mining: concepts and
techniques . Elsevier.
[7] Jindal, N., & Liu, B. (2006, July). Mining comparative sentences and
relations. In AAAI (Vol. 22, pp. 1331-1336).
[8] Romero, C., & Ventura, S. (2007). Educational data mining: A survey
from 1995 to 2005. Expert systems with applications , 33(1), 135-146.
[9] D. T. Jones, "Learning analytics: Definitions, processes and potential,"
10 Jan. 2011. Available: https://davidtjones.wordpress.com/2011/01/10/
learning-analytics-definitions-processes-and-potential/ .
[10] W.W. Eckerson, Performance dashboards: Measuring, monitoring, and
managing your business. Hoboken, NJ: John Wiley & Sons, 2006.
[11] N.A.A. Khairil Imran Ghauth, "Measuring learner’s performance in e-
learning recommender systems," Australasian Journal of Educational
Technology, vol. 26, no. 6, pp. 764-774, 2010.
[12] F. Ricci, L. Rokach, and B. Shapira, Introduction to Recommender Systems
Handbook, Springer Science, 2011.
[13] D. B. Michael J. Pazzani, "Content-based Recommendation Systems,"
Adaptive Web, Springer Science+Business Media, pp. 325-341, 2007.
[14] Bugra Machine Learning Newsletter, "Alternating Least Squares
Method for Collaborative Filtering,". Available: http://bugra.github.io/work/notes/2014-04-19/alternating-least-squares-
method-for-collaborative-filtering/. [Accessed 7 July 2016].
[15] R. Hoekstra, "The Knowledge Reengineering Bottleneck," Semantic
Web – Interoperability, Usability, Applicability 1, IOS Press, 2010.
[16] IBM developerWorks, "Apache Mahout: Scalable machine learning for
everyone," Nov. 2011. Available: http://www.ibm.com/developerworks/
library/j-mahout-scaling/index.html. [Accessed 12 July 2016].
