"Aurel Vlaicu" University of Arad
Faculty of Exact Sciences
Field: Mathematics
Study programme: Mathematics-Informatics

Master's dissertation

The Wold Decomposition Theorem

Scientific advisor:
Gașpar Păstorel

Graduate: [anonimizat] 2018

Stefaniu Gheorghe, Master's dissertation

"Aurel Vlaicu" University of Arad
Faculty of Exact Sciences
Field: Mathematics
Study programme: Mathematics-Informatics

Approved
Dean
Prof. dr. ing. Mariana Nagy
No. ____ of ____

Endorsed
Scientific advisor
Gașpar Păstorel

Candidate's personal data: [anonimizat]

1. Identity of the person:
   Surname: Stefaniu
   Father's initial: M.
   Previous surname: -
   First name: Gheorghe
2. Sex (M/F): M
3. Date and place of birth:
   Day/month/year: 18/04/1969
   Place of birth (town, county): Iași, Iași county
4. Parents' first names:
   Father: -
   Mother: Maria
5. Permanent address (street, no., building, entrance, apt., town, postal code, county, phone, e-mail):
   str. Mețianu, nr. 14, bl. -, sc. -, ap. 07,
   loc. Arad, postal code 310099, Arad county,
   phone: [anonimizat], e-mail: -
6. I am a graduate of the class of: July 2018
7. The form of education I completed is (full-time, part-time, distance learning), (with tuition fee / without tuition fee): full-time, with tuition fee;
8. Workplace (if applicable): Enel Electrica S.A.;
9. I request enrolment for the dissertation examination, session: July 2018;
10. The dissertation I am defending has the following title:
    The Wold Decomposition Theorem;
11. Scientific advisor: Gașpar Păstorel;
12. I mention that I am taking the dissertation examination (for the first time, as applicable) for the first time, and I declare on my own responsibility that I have read the provisions of art. 143 of Law 1/2011. I declare that this work was not produced by fraudulent means, being aware that, should the contrary be proven, the diploma obtained by fraud may be annulled, in accordance with art. 146 of Law 1/2011.

ARAD, 20 June 2018
Signature of the author of the dissertation
Report
on the dissertation of the graduate: [anonimizat]: Mathematics
study programme: Mathematics-Informatics
class of: July 2018

1. Title of the thesis: The Wold Decomposition Theorem;
2. Structure of the thesis:
   (a) Introduction;
   (b) Chapter 1;
   (c) Chapter 2;
   (d) Chapter 3;
   (e) Conclusions;
   (f) Glossary;
   (g) Index of notions;
   (h) Bibliography;
3. Assessment of the content of the dissertation (logical organisation, approach, complexity, timeliness, deficiencies):
   - Good logical organisation;
   - Correct approach;
   - High complexity: 108 cross-references, 7 figures, 38 mathematical formulas (35 of type "equation", 3 of type "multline"), 15 lemmas, 17 theorems, 6 propositions;
   - The thesis is up to date;
   - The practical applications were not sufficiently exemplified;
4. Assessment of the thesis (to be mentioned: number of bibliographic titles consulted, frequency of footnotes, quality and timeliness of the sources consulted, the way the graduate used the information from the bibliographic sources, contributions):
   - 39 bibliographic titles, of which:
     - 15 monographs from the publishers:
       All, Editura Academiei Române, MatrixRom, Tehnică (all in Bucharest), Albastră (Cluj-Napoca), from Romania;
       CRC Press (Boca Raton), Dunod (Paris), Academic Press, ACM Press (both in New York), SIAM (Philadelphia), from abroad;
     - 21 articles in ISI-ranked journals and other important journals, among which: IMA J. Appl. Math. (IF¹ = 0.653), IMA J. Numer. Anal. (IF¹ = 1.513), J. Comp. Appl. Math. (IF¹ = 1.029), BIT Numer. Math. (IF¹ = 0.821), ...;
     - 3 web pages, the official sites of the mathematical software packages Mathcad, Matlab and Mathematica;
     - 54 bibliographic citations;
     - 31 external links to the bibliographic sources;
   - 0 footnotes;
   - The bibliographic sources consulted are from the period 1961-2010;
   - The bibliographic sources were used analytically, synthetically and critically;
   - 3 kinds of original contributions:
     - 5 Mathcad programs;
     - 7 own proofs of known theorems;
     - 6 own proofs of propositions;
5. Conclusions (value of the work produced by the graduate, relevance of the study undertaken, the graduate's competences, the consistency and seriousness shown by the graduate during the documentation):
   - The thesis has theoretical value, with possibilities of practical exploitation of its results;
   - The graduate is competent in using and programming mathematical software environments, especially Mathcad;
   - The relevance of the thesis is good;
   - The work carried out during the preparation of the thesis was persevering;
6. The writing of the thesis observes the editing rules:
   - The thesis was typeset in LaTeX, producing a PDF (portable document format) file;

¹ Impact Factor 2010
   - The thesis was produced using the template approved by the management of the Faculty of Exact Sciences and posted on the University's website;
7. There are no suspicions that this thesis was produced by fraud;
8. I consider that the thesis meets the conditions for defence in the July 2018 dissertation examination session.

I propose to the committee that the graduate Stefaniu M. Gheorghe, the author of this thesis, be awarded the grade ......... at the dissertation examination.

ARAD, 20 June 2018
Scientific advisor
Gașpar Păstorel
Contents

1 Introduction
  1.1 What is a time series?
    1.1.1 Analysis of time series components
    1.1.2 Smoothing time series using moving averages
    1.1.3 Exponential smoothing of time series
    1.1.4 Component measurement for the multiplicative model. Determining the trend
    1.1.5 Measuring cyclic variations

2 The Wold Representation and its Approximation
  2.1 Representation and approximation of the environment
  2.2 White Noise
    2.2.1 Zero-mean white noise
    2.2.2 Independent (strong) white noise
    2.2.3 Gaussian white noise
    2.2.4 Unconditional moment structure of strong White Noise
    2.2.5 Conditional moment structure of strong White Noise
    2.2.6 Autocorrelation structure of strong White Noise
  2.3 The Wold decomposition and the general linear process
    2.3.1 Under regularity conditions, every covariance-stationary process
    2.3.2 The general linear process
    2.3.3 Unconditional moment structure of the general linear process
    2.3.4 Conditional moment structure
    2.3.5 Autocovariance structure
  2.4 Wiener-Kolmogorov-Wold extraction and prediction
    2.4.1 Extraction and prediction
    2.4.2 Prediction error
    2.4.3 Wold's chain rule for autoregressions: consider an AR(1)
  2.5 Multivariate
    2.5.1 The environment
    2.5.2 Cross covariances and the generating function
    2.5.3 Cross correlations
    2.5.4 The multivariate general linear process
    2.5.5 Autocovariance structure
    2.5.6 Wiener-Kolmogorov prediction
    2.5.7 Wiener-Kolmogorov prediction error

3 Extended Wold Decomposition
  3.1 Abstract Wold Theorem and Classical Wold Decomposition
  3.2 Classical Wold Decomposition

4 The Extended Wold Decomposition of x_t


Chapter 1

Introduction

1.1 What is a time series?

A time series (TS) consists of a sequence (Y_t) of values relating to a particular entity, indicating its observed state at equally spaced time points t. In economics, a TS is regarded as the realization of a sequence of random variables (Y_t), which are in general stochastically dependent. Since the random variables form a random sequence (or chain), the name "temporal sequence" would have been more accurate.
1.1.1 Analysis of time series components

The analysis of a time series aims at uncovering the structure and laws of its evolution, so as to arrive at predictions of future values. The analysis of a time series has three stages:

1. identifying its components (steady increase or decrease, periodic oscillations, random variation);
2. detecting their presence and measuring them (seeking to determine functional laws);
3. predicting the values of the time series (computing them according to the laws determined).

The following four main components of a time series are believed to exist:

(a) the trend (denoted T_t): a relatively smooth, long-term (more than one year) direction of evolution (linear or nonlinear);
(b) the cyclical variation (denoted C_t): an oscillatory movement with a long period, of more than one year (e.g. the economic or business cycle);

(c) the seasonal variation (denoted S_t): an oscillatory movement with a short period, of less than one year (for example a semester, quarter, month, day, or even hour);

(d) the random variation (denoted A_t): a random, non-periodic and unpredictable change, not included in T_t, C_t or S_t.

The first three components are deterministic, while the fourth is random: A_t has normal distributions N(0, σ_t²) and thus forms a Gaussian random process. Each of these components is studied by one or more specific methods, after which the time series is described by combining T_t, C_t, S_t and A_t in a mathematical model corresponding to the nature of the series. Models of the time series are of two types:

- multiplicative (for increasing or decreasing evolutions), of the form

  y_t = T_t · C_t · S_t · A_t;   (1.1)

- additive (for constant, less frequent evolutions), of the form

  y_t = T_t + C_t + S_t + A_t.   (1.2)

We will first deal with the multiplicative model and then with the additive one. Since the random variation A_t hides the other components of the TS, one first seeks to isolate and remove it by various methods of smoothing the time series, the simplest of which are moving averages and exponential smoothing.
1.1.2 Smoothing time series using moving averages

The moving average (abbreviated MM) of order m is an arithmetic mean of the values y_t (t = 1, 2, 3, ..., n) over a window of m consecutive time periods, attached to the moment τ at the center of the window. For the n observed values it results:

  Y^{(m)}_τ = (1/m) · Σ_{t=i}^{i+m−1} y_t,  for all 1 ≤ i ≤ n−m+1,  τ = i + (m−1)/2.   (1.3)
It is noted that when m = 2k is an even number, τ falls between moments t, which is not convenient. To remedy this, a centering is performed by computing moving averages of order 2 of the previous results y^{(m)}_τ, obtaining:

  Y^{(m)}_{τ,c} = (y^{(m)}_τ + y^{(m)}_{τ+1}) / 2.   (1.4)

The power of smoothing increases with m. Moreover, if m is too small, the random variations A_t cannot be eliminated, and if m is too large, even parts of the components C_t, S_t are eliminated. The defects of the MM method are the missing values y^{(m)}_τ at the first (m−1) moments and at the last moments, as well as the failure to take the past into account (the values y^{(m)}_τ for the preceding moments). When the cyclic component C_t is small, a slight smoothing (with m = 3) is made, so as not to remove part of it. If the cyclical and seasonal variations are negligible (C_t, S_t ≈ 0 for t = 1, 2, 3, ..., n), the prediction of the time series can be made directly by means of moving averages. If p ≥ 1 is the prediction period (interval), then the future value F_{τ+p} at time (τ+p) will be:

  F_{τ+p} = Y^{(m)}_τ,  1 ≤ τ ≤ n−m+1.   (1.5)

The accuracy of the prediction decreases rapidly as p increases, so one usually takes p = 1. If F_{τ+p} is always calculated instead of Y^{(m)}_τ, then, to obtain Y^{(m)}_τ at the appropriate moments τ, it is necessary to translate the values F_{τ+p} backwards.
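As a quick illustration (an addition to this text; the thesis itself works in Mathcad), the moving-average computations above can be sketched in Python:

```python
def moving_average(y, m):
    """Moving average of order m (eq. (1.3)): one mean per full window of
    m consecutive values, so the result has len(y) - m + 1 entries."""
    return [sum(y[i:i + m]) / m for i in range(len(y) - m + 1)]

def centered_moving_average(y, m):
    """For even m, centre the averages by averaging consecutive pairs
    of moving averages (eq. (1.4))."""
    mm = moving_average(y, m)
    return [(a + b) / 2 for a, b in zip(mm, mm[1:])]

y = [3, 5, 4, 6, 5, 7, 6, 8]
print(moving_average(y, 3))           # [4.0, 5.0, 5.0, 6.0, 6.0, 7.0]
print(centered_moving_average(y, 4))  # [4.75, 5.25, 5.75, 6.25]
```

The one-step prediction (1.5) with p = 1 simply reuses the last smoothed value.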
1.1.3 Exponential smoothing of time series

The exponential smoothing (ES) of the values y_t of a time series consists in the successive weighting of these values by means of the smoothing factor (damping factor) α ∈ [0, 1], obtaining at each step t an exponentially smoothed value S_t of the series:

  S_1 = y_1;  S_t = α·S_{t−1} + (1−α)·y_t,  where t = 2, ..., n.   (1.6)

The power of smoothing increases with α. Iterating over t, it results

  S_t = α^{t−1}·y_1 + (1−α) · Σ_{τ=0}^{t−2} α^τ·y_{t−τ},   (1.7)

from which it can be seen that S_t depends on all the values observed previously. Thus, both defects of the moving-average method MM are remedied by the ES method. Sometimes, instead of α, we use the smoothing percentage (weighting factor) w, with

  w = 1 − α.   (1.8)

As with the MM method, when the cyclic component C_t is small, a slight smoothing (with α = 0.2) is made, and in the case C_t, S_t ≈ 0, for t = 1, 2, 3, ..., n, we can predict the time series directly with ES. If p ≥ 1 is the prediction period (range), then the future value F_{t+p} will be:

  F_{t+p} = S_t,  1 ≤ t ≤ n.   (1.9)

The accuracy of the prediction decreases rapidly with the increase of p, so one usually takes p = 1.
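A minimal Python sketch of recursion (1.6), added here as an illustration:

```python
def exponential_smoothing(y, alpha):
    """Exponential smoothing in the damping-factor form of eq. (1.6):
    S_1 = y_1;  S_t = alpha*S_{t-1} + (1 - alpha)*y_t."""
    s = [y[0]]
    for value in y[1:]:
        s.append(alpha * s[-1] + (1 - alpha) * value)
    return s

s = exponential_smoothing([2.0, 4.0, 6.0], alpha=0.5)
print(s)        # [2.0, 3.0, 4.5]
print(s[-1])    # one-step prediction F_{t+p} = S_t (eq. (1.9)), p = 1
```

Note that many references write the recursion as S_t = α·y_t + (1−α)·S_{t−1}; here α multiplies the previous smoothed value, following (1.6).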
1.1.4 Component measurement for the multiplicative model. Determining the trend

The law of evolution T_t of the time series is determined with the help of a polynomial regression, linear or nonlinear (most often quadratic), in the observed annual values y_t, t = 1, ..., n, that is, for regression models of the form:

  y'_t = a_0 + a_1·t + ε,   (1.10)

  y'_t = a_0 + a_1·t + a_2·t² + ε.   (1.11)

In applying regression to time series, many of its validity conditions are not met, so the analysis of the quality of the regression results is not meaningful and can be discarded. For example, because of the deterministic periodic variations of the component C_t, the coefficient of determination R² (indicating the overall fit of the regression) is not very high (< 0.9). To avoid calculations with too many digits (t having values around 2000), a translation

  t'_i = t_i − t_1 + 1   (1.12)

of the variable t is performed, whereby the moments t_1, ..., t_i, ..., t_n are replaced by the order numbers 1, 2, 3, ..., n, so that, in practice, one works with the pairs (i, y_t). In the case C_t, S_t ≈ 0 for t = 1, ..., n, a prediction of the time series can be made directly using the regression function y'_t. If τ is the moment for which the prediction is made, then the future value F_τ will be

  F_τ = T_τ = y'_τ.   (1.13)

The determined trend will be used to measure both the cyclical and the seasonal variation.
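A linear-trend fit, model (1.10), can be sketched with the closed-form least-squares solution (an added illustration, not code from the thesis):

```python
def linear_trend(y):
    """Least-squares fit of y'_t = a0 + a1*t over t = 1..n (model (1.10)).
    Returns (a0, a1); the trend prediction (1.13) at time tau is a0 + a1*tau."""
    n = len(y)
    t_mean = (n + 1) / 2
    y_mean = sum(y) / n
    a1 = (sum((t + 1 - t_mean) * (yt - y_mean) for t, yt in enumerate(y))
          / sum((t + 1 - t_mean) ** 2 for t in range(n)))
    a0 = y_mean - a1 * t_mean
    return a0, a1

a0, a1 = linear_trend([3.0, 5.0, 7.0, 9.0])  # series generated by 1 + 2t
print(a0, a1)
```

On the perfectly linear example the fit recovers a_0 = 1 and a_1 = 2 exactly; on real data the residual carries the C_t, S_t, A_t components.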
1.1.5 Measuring cyclic variations

The cyclic variation C_t of the time series y_t, with a period of at least one year, is measured by the cycle indexes I_k, indicating the degree (proportion) of the values y_t from one year to another of the cycle. These indexes are obtained through the following five steps.

1. The seasonal variations S_t are removed from y_t, so as to obtain a series of the form

   y_t = T_t · C_t   (1.14)

   by one of the following methods:

   (a) summing the values y_i over the seasons, in order to obtain the annual ones, in which the seasonal and random variations are compressed (so that the components S_t, A_t become null);

   (b) calculating moving averages y^{(m)}_t with m = n_s, where n_s is the number of types of seasons in the seasonal period (for example, for quarters within years, n_s = 4).

2. The trend is calculated by regression, T_t = y'_t, so that the relationship

   C_t = y_t / y'_t   (1.15)

   is obtained. By calculating and graphically representing the trend percentage T%_t,

   T%_t = 100 · y_t / y'_t,   (1.16)

   we can deduce the existence or not (and possibly the period) of the cyclic variation C_t: if the values T%_t group alternately above and below 100%, then C_t exists. Even in this case, the cyclical variation is usually too irregular to deduce n_c, the period of the cycle (number of years).

3. If n_c is determined, the averages I'_k of the trend percentages T%_t are calculated,

   I'_k = (Σ_{t_k=1}^{n_k} T%_{t_k}) / (100·n_k),  k = 1, 2, ..., n_c,   (1.17)

   where t_k are the moments corresponding to year k of the cycle, and

   n_k = n / n_c   (1.18)

   is the number of years of type k observed (k = 1, 2, ..., n_c).
4. The cycle indexes I_k (which characterize C_t) are calculated,

   I_k = I'_k / ((1/n_c) · Σ_{k=1}^{n_c} I'_k),  k = 1, 2, ..., n_c,   (1.19)

   by adjusting the averages I'_k so that

   (1/n_c) · Σ_{k=1}^{n_c} I_k = 1.   (1.20)

5. If desired, the decycled time series (ỹ_t)_{t∈N} can be calculated,

   ỹ_{t_k} = y_t / I_k.   (1.21)

   By decycling, the cyclical variations are removed, because

   ỹ_t = T_t · S_t · A_t,   (1.22)

   and it becomes possible to compare data series with different cyclic components. The indexes I_k take values in the neighborhood of 1 (their average); if I_k > 1, year k of the cycle is favorable, and if I_k < 1, it is unfavorable.
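Steps 3 and 4 can be illustrated in Python (an added sketch with invented trend percentages, not data from the thesis):

```python
def cycle_indexes(trend_pct, nc):
    """Cycle indexes I_k from trend percentages T%_t (eqs. (1.17), (1.19)).
    trend_pct lists T%_t year by year over whole cycles of length nc,
    so year-within-cycle k corresponds to positions k, k + nc, k + 2*nc, ..."""
    nk = len(trend_pct) // nc                       # eq. (1.18): n_k = n / n_c
    i_prime = [sum(trend_pct[k + c * nc] for c in range(nk)) / (100 * nk)
               for k in range(nc)]                  # eq. (1.17)
    mean = sum(i_prime) / nc
    return [v / mean for v in i_prime]              # eq. (1.19)

idx = cycle_indexes([110, 90, 112, 88], nc=2)
print(idx)          # the indexes average to 1, as required by (1.20)
```

Here the first year of each two-year cycle comes out favorable (I_1 > 1) and the second unfavorable (I_2 < 1).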
Chapter 2

The Wold Representation and its Approximation

2.1 Representation and approximation of the environment

The time series (Y_t) is doubly infinite; a realization (y_t) is again doubly infinite; the sample path is y_t, t = 1, 2, 3, ..., T.

1. Strict stationarity: the joint distribution of any set of observations depends only on displacement, not on time.

2. Weak stationarity (second-order stationarity, wide-sense stationarity, covariance stationarity, ...):

   E y_t = μ, ∀t,   (2.1)

   γ(t, τ) = E(y_t − E y_t)(y_{t+τ} − E y_{t+τ}) = γ(τ), ∀t,   (2.2)

   0 < γ(0) < ∞.

3. Autocovariance function:

   (a) symmetric:  γ(τ) = γ(−τ), ∀τ;

   (b) nonnegative definite:  a′Σa ≥ 0, ∀a, where the Toeplitz matrix Σ has (i, j)-th element γ(i − j);
   (c) bounded by the variance:  γ(0) ≥ |γ(τ)|, ∀τ.

4. Autocovariance generating function:

   g(z) = Σ_{τ=−∞}^{+∞} γ(τ)·z^τ.   (2.3)

5. Autocorrelation function:

   ρ(τ) = γ(τ) / γ(0).   (2.4)
2.2 White Noise

White Noise:

  ε_t ~ WN(μ, σ²)  (serially uncorrelated).   (2.5)

2.2.1 Zero-mean white noise

  ε_t ~ WN(0, σ²).   (2.6)

2.2.2 Independent (strong) white noise

  ε_t ~ iid(0, σ²).   (2.7)

2.2.3 Gaussian white noise

  ε_t ~ iid N(0, σ²).   (2.8)

2.2.4 Unconditional moment structure of strong White Noise

  E(ε_t) = 0,   (2.9)

  var(ε_t) = σ².   (2.10)
2.2.5 Conditional moment structure of strong White Noise

  E(ε_t | Ω_{t−1}) = 0,   (2.11)

  var(ε_t | Ω_{t−1}) = E[(ε_t − E(ε_t | Ω_{t−1}))² | Ω_{t−1}] = σ²,   (2.12)

where

  Ω_{t−1} = {ε_{t−1}, ε_{t−2}, ε_{t−3}, ...}.

2.2.6 Autocorrelation structure of strong White Noise

  γ(τ) = σ² for τ = 0, and 0 for τ ≥ 1.   (2.13)

  ρ(τ) = 1 for τ = 0, and 0 for τ ≥ 1.   (2.14)
An aside on the treatment of the mean: in theoretical work we assume a zero mean, μ = 0. This reduces notational clutter and is without loss of generality: think of y_t as having been centered around its mean μ, and note that y_t − μ has zero mean by construction. In empirical work we allow explicitly for a non-zero mean, either by centering the data around the sample mean or by including an intercept.
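The defining property (2.13)-(2.14), vanishing autocorrelation at every nonzero displacement, is easy to check by simulation (an added sketch, not part of the thesis):

```python
import random

def sample_autocorrelation(x, tau):
    """Sample analogue of rho(tau) = gamma(tau)/gamma(0) for a zero-mean series."""
    n = len(x)
    var = sum(v * v for v in x) / n
    cov = sum(x[t] * x[t + tau] for t in range(n - tau)) / n
    return cov / var

random.seed(0)
eps = [random.gauss(0.0, 1.0) for _ in range(5000)]  # Gaussian white noise (2.8)
print(sample_autocorrelation(eps, 0))  # exactly 1 by construction
print(sample_autocorrelation(eps, 1))  # close to 0 for white noise
```

With 5000 draws the sampling error of the lag-1 correlation is of order 1/sqrt(5000) ≈ 0.014, so the printed value is small but not exactly zero.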
2.3 The Wold decomposition and the general linear process

2.3.1 Under regularity conditions, every covariance-stationary process

{y_t} can be written as:

  y_t = Σ_{i=0}^{∞} b_i·ε_{t−i},   (2.15)

where

  b_0 = 1 and Σ_{i=0}^{∞} b_i² < ∞,   (2.16)
and therefore

  ε_t = [y_t − P(y_t | y_{t−1}, y_{t−2}, y_{t−3}, y_{t−4}, ...)] ~ WN(0, σ²).   (2.17)

2.3.2 The general linear process

  y_t = B(L)·ε_t = Σ_{i=0}^{∞} b_i·ε_{t−i},   (2.18)

  ε_t ~ WN(0, σ²),   (2.19)

  b_0 = 1,  Σ_{i=0}^{∞} b_i² < ∞.   (2.20)

2.3.3 Unconditional moment structure of the general linear process

  E(y_t) = E(Σ_{i=0}^{∞} b_i·ε_{t−i}) = Σ_{i=0}^{∞} b_i·E(ε_{t−i}) = Σ_{i=0}^{∞} b_i·0 = 0,   (2.21)

  var(y_t) = var(Σ_{i=0}^{∞} b_i·ε_{t−i}) = Σ_{i=0}^{∞} b_i²·var(ε_{t−i}) = σ²·Σ_{i=0}^{∞} b_i².   (2.22)
2.3.4 Conditional moment structure

  E(y_t | Ω_{t−1}) = E(ε_t | Ω_{t−1}) + b_1·E(ε_{t−1} | Ω_{t−1}) + b_2·E(ε_{t−2} | Ω_{t−1}) + ...   (2.23)

(with Ω_{t−1} = {ε_{t−1}, ε_{t−2}, ε_{t−3}, ...})

  = 0 + b_1·ε_{t−1} + b_2·ε_{t−2} + b_3·ε_{t−3} + ... = Σ_{i=1}^{∞} b_i·ε_{t−i},   (2.24)

  var(y_t | Ω_{t−1}) = E[(y_t − E(y_t | Ω_{t−1}))² | Ω_{t−1}] = E(ε_t² | Ω_{t−1}) = E(ε_t²) = σ².   (2.25)

These calculations assume strong WN innovations.
2.3.5 Autocovariance structure

  γ(τ) = E[(Σ_{i=−∞}^{∞} b_i·ε_{t−i}) · (Σ_{h=−∞}^{∞} b_h·ε_{t−τ−h})]   (2.26)

  = σ² · Σ_{i=−∞}^{∞} b_i·b_{i−τ},   (2.27)

where b_i ≡ 0 if i < 0; then

  g(z) = σ²·B(z)·B(z^{−1}).   (2.28)
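For a finite-order case, formula (2.27) can be evaluated directly (an added sketch; the list b holds b_0, b_1, ..., with zeros afterwards):

```python
def linear_process_autocov(b, sigma2, tau):
    """gamma(tau) = sigma^2 * sum_i b_i * b_{i-tau} (eq. (2.27)),
    with b_i = 0 outside 0..len(b)-1; gamma is symmetric, so |tau| suffices."""
    tau = abs(tau)
    return sigma2 * sum(bi * bj for bi, bj in zip(b[tau:], b))

b = [1.0, 0.5]                            # MA(1): y_t = eps_t + 0.5*eps_{t-1}
print(linear_process_autocov(b, 1.0, 0))  # 1 + 0.25 = 1.25
print(linear_process_autocov(b, 1.0, 1))  # 0.5
print(linear_process_autocov(b, 1.0, 2))  # 0.0  (MA(1) cuts off after lag 1)
```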
2.4 Wiener-Kolmogorov-Wold extraction and prediction

2.4.1 Extraction and prediction

  y_t = ε_t + b_1·ε_{t−1} + b_2·ε_{t−2} + ...,   (2.29)

  y_{T+h} = ε_{T+h} + b_1·ε_{T+h−1} + b_2·ε_{T+h−2} + ... + b_h·ε_T + b_{h+1}·ε_{T−1} + ...   (2.30)

Projecting on

  Ω_T = {ε_T, ε_{T−1}, ε_{T−2}, ...},   (2.31)

we get

  y_{T+h,T} = b_h·ε_T + b_{h+1}·ε_{T−1} + ...   (2.32)

We note that the projection is on the infinite past.

2.4.2 Prediction error

  e_{T+h,T} = y_{T+h} − y_{T+h,T} = Σ_{i=0}^{h−1} b_i·ε_{T+h−i},   (2.33)

  E(e_{T+h,T}) = 0,   (2.34)

  var(e_{T+h,T}) = σ² · Σ_{i=0}^{h−1} b_i².   (2.35)
2.4.3 Wold's chain rule for autoregressions: consider an AR(1)

  y_t = φ·y_{t−1} + ε_t,   (2.36)

with observed sample {y_t}_{t=1}^{T}.   (2.37)

  y_{T+1,T} = φ·y_T,   (2.38)

  y_{T+2,T} = φ·y_{T+1,T} = φ²·y_T,   (2.39)

  ...

  y_{T+h,T} = φ·y_{T+h−1,T} = φ^h·y_T.   (2.40)
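The chain rule (2.38)-(2.40) in Python (a small added illustration):

```python
def ar1_forecasts(phi, y_T, h):
    """Wold's chain rule for an AR(1): iterating y_{T+k,T} = phi * y_{T+k-1,T}
    gives y_{T+h,T} = phi**h * y_T (eqs. (2.38)-(2.40))."""
    forecasts = []
    prev = y_T
    for _ in range(h):
        prev = phi * prev        # each step projects one more period ahead
        forecasts.append(prev)
    return forecasts

print(ar1_forecasts(0.5, 8.0, 3))   # [4.0, 2.0, 1.0]
```

For |φ| < 1 the forecasts decay geometrically toward the unconditional mean zero, as the closed form φ^h·y_T makes explicit.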
2.5 Multivariate

2.5.1 The environment

(y_{1t}, y_{2t})′ is covariance stationary if:

  E(y_{1t}) = μ_1, ∀t,   (2.41)

  E(y_{2t}) = μ_2, ∀t,   (2.42)

  Γ_{y1y2}(t, τ) = E[(y_{1t} − μ_1, y_{2t} − μ_2)′ · (y_{1,t−τ} − μ_1, y_{2,t−τ} − μ_2)]   (2.43)

  = [ γ_{11}(τ)  γ_{12}(τ) ;  γ_{21}(τ)  γ_{22}(τ) ],  τ = 0, 1, 2, 3, ...   (2.44)

2.5.2 Cross covariances and the generating function

  γ_{12}(τ) ≠ γ_{12}(−τ),

  γ_{12}(τ) = γ_{21}(−τ),

  Γ_{y1y2}(τ) = Γ′_{y1y2}(−τ),  where τ = 1, 2, 3, ...   (2.45)

  G_{y1y2}(z) = Σ_{τ=−∞}^{∞} Γ_{y1y2}(τ)·z^τ.   (2.46)
2.5.3 Cross correlations

  R_{y1y2}(τ) = D^{−1}·Γ_{y1y2}(τ)·D^{−1},  where τ = 0, 1, 2, ...,   (2.47)

  D = [ σ_1  0 ;  0  σ_2 ].   (2.48)

2.5.4 The multivariate general linear process

  (y_{1t}, y_{2t})′ = [ B_{11}(L)  B_{12}(L) ;  B_{21}(L)  B_{22}(L) ] · (ε_{1t}, ε_{2t})′,   (2.49)

  y_t = B(L)·ε_t = (I + B_1·L + B_2·L² + B_3·L³ + ...)·ε_t,   (2.50)

  E(ε_t·ε_s′) = Σ if t = s, and 0 otherwise,   (2.51)

  Σ_{i=0}^{∞} ||B_i||² < ∞.   (2.52)

2.5.5 Autocovariance structure

  Γ_y(τ) = Σ_{i=−∞}^{∞} B_i·Σ·B′_{i−τ}  (where B_i ≡ 0 if i < 0),   (2.53)

  G_y(z) = B(z)·Σ·B′(z^{−1}).   (2.54)

2.5.6 Wiener-Kolmogorov prediction

  y_t = ε_t + B_1·ε_{t−1} + B_2·ε_{t−2} + B_3·ε_{t−3} + ...,   (2.55)

  y_{T+h} = ε_{T+h} + B_1·ε_{T+h−1} + B_2·ε_{T+h−2} + ...   (2.56)

Projecting on

  Ω_T = {ε_T, ε_{T−1}, ε_{T−2}, ...},   (2.57)

  y_{T+h,T} = B_h·ε_T + B_{h+1}·ε_{T−1} + ...   (2.58)
2.5.7 Wiener-Kolmogorov prediction error

  e_{T+h,T} = y_{T+h} − y_{T+h,T} = Σ_{i=0}^{h−1} B_i·ε_{T+h−i},   (2.59)

  E[e_{T+h,T}] = 0,   (2.60)

  E[e_{T+h,T}·e′_{T+h,T}] = Σ_{i=0}^{h−1} B_i·Σ·B_i′.   (2.61)
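The h-step error covariance (2.61) for a bivariate system can be computed with plain lists (an added sketch with invented matrices B_1 and Sigma):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def forecast_error_cov(Bs, Sigma, h):
    """E[e e'] = sum_{i=0}^{h-1} B_i Sigma B_i'  (eq. (2.61));
    Bs lists the moving-average matrices B_0 = I, B_1, B_2, ..."""
    n = len(Sigma)
    total = [[0.0] * n for _ in range(n)]
    for Bi in Bs[:h]:
        term = matmul(matmul(Bi, Sigma), transpose(Bi))
        total = [[total[r][c] + term[r][c] for c in range(n)] for r in range(n)]
    return total

I = [[1.0, 0.0], [0.0, 1.0]]
B1 = [[0.5, 0.0], [0.2, 0.3]]
Sigma = [[1.0, 0.0], [0.0, 1.0]]
print(forecast_error_cov([I, B1], Sigma, 1))  # h = 1: just Sigma
print(forecast_error_cov([I, B1], Sigma, 2))  # Sigma + B1 B1'
```

As h grows, the error covariance accumulates one extra term B_h·Σ·B_h′ per step, exactly as in the univariate case (2.35).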
Chapter 3

Extended Wold Decomposition

3.1 Abstract Wold Theorem and Classical Wold Decomposition

Let H be a Hilbert space and V : H → H a linear operator.

Definition 3.1.1. Let V be bounded. The adjoint, or transpose, of V is the bounded linear operator V* : H → H that satisfies the relation

  ⟨Vx, y⟩ = ⟨x, V*y⟩, ∀x, y ∈ H.

In particular, ||V*|| = ||V||.

Definition 3.1.2. V is an isometry when

  ⟨Vx, Vy⟩ = ⟨x, y⟩, ∀x, y ∈ H,

or, equivalently, V*V = I, where I denotes the identity map on H.

If V is an isometric operator, its norm is equal to 1 and its powers V^j, with j ∈ N, are isometries too. Isometries allow us to decompose Hilbert spaces orthogonally. The building brick of the decomposition is the so-called wandering subspace, which is also termed detail subspace or innovation subspace.

Definition 3.1.3. Let V be an isometry. We call a wandering subspace for V a subspace L_V of H such that V^h L_V and V^k L_V are orthogonal for every h, k ∈ N_0 with h ≠ k. Since V is an isometry, we can just require that V^n L_V is orthogonal to L_V for every n ∈ N.

Given a wandering subspace L_V, it is possible to define the orthogonal sum ⊕_{j=0}^{+∞} V^j L_V, in which the convergence of the infinite direct sum is in the norm induced on the Hilbert space H by the inner product. It may happen that such an orthogonal sum covers the whole Hilbert space H.
Definition 3.1.4. Let V be an isometry. We say that V is a unilateral shift when there exists a subspace L_V of H which is wandering for V and such that

  ⊕_{j=0}^{+∞} V^j L_V = H.

Given a unilateral shift, the wandering subspace is uniquely determined by the relation L_V = H ⊖ VH. We name Abstract Wold Theorem the following decomposition of Hilbert spaces.

Theorem 3.1.1 (Abstract Wold Theorem). Let H be a Hilbert space and V : H → H an isometry. Then H decomposes uniquely into an orthogonal sum

  H = Ĥ ⊕ H̃,

such that

  VĤ = Ĥ,  VH̃ ⊆ H̃,

and the restriction of V to H̃ is a unilateral shift. In particular,

  Ĥ = ∩_{j=0}^{+∞} V^j H,  H̃ = ⊕_{j=0}^{+∞} V^j L_V,

with L_V = H ⊖ VH.

An equivalent formulation is the following.

Theorem 3.1.2 (Abstract Wold Theorem). Let V be an arbitrary isometry on the Hilbert space H. Then H decomposes into an orthogonal sum H = Ĥ ⊕ H̃ such that Ĥ and H̃ reduce V, the part of V on Ĥ is unitary, and the part of V on H̃ is a unilateral shift. This decomposition is uniquely determined; indeed, we have

  Ĥ = ∩_{j=0}^{+∞} V^j H and H̃ = M_+(L_V), where L_V = H ⊖ VH.   (3.1)

The space Ĥ or H̃ may be absent, that is, equal to {0}.
Proof. The space L_V = H ⊖ VH is wandering for V. Indeed, for n ≥ 1 we have

  V^n L_V ⊆ V^n H ⊆ VH and VH ⊥ L_V.

Consider H̃ = M_+(L_V) and Ĥ = H ⊖ H̃. Observe that x belongs to Ĥ if and only if it is orthogonal to every finite sum ⊕_{j=0}^{m−1} V^j L_V, where m = 1, 2, 3, ... Now we have

  L_V ⊕ VL_V ⊕ ... ⊕ V^{m−1}L_V = (H ⊖ VH) ⊕ (VH ⊖ V²H) ⊕ ... ⊕ (V^{m−1}H ⊖ V^m H) = H ⊖ V^m H,

thus x ∈ Ĥ if and only if x ∈ V^m H for all m ≥ 0. Hence Ĥ satisfies the first relation in (3.1). Because the subspaces V^m H, m = 0, 1, 2, ..., form a nonincreasing sequence, we also have Ĥ = ∩_{j=1}^{+∞} V^j H. It follows that

  VĤ = V · ∩_{j=0}^{∞} V^j H = ∩_{j=0}^{∞} V^{j+1} H = ∩_{m=1}^{∞} V^m H = Ĥ,

thus Ĥ reduces V and the restriction of V to Ĥ is a unitary operator on Ĥ. Hence H̃ also reduces V, and the part of V on H̃ is evidently a unilateral shift. Thus the subspaces given by (3.1) satisfy our conditions. It remains to prove that if

  H = Ĥ′ ⊕ H̃′

is an arbitrary decomposition satisfying these conditions (i.e. if

  H̃′ = M_+(L′_V),

where L′_V is wandering with respect to V, and if

  VĤ′ = Ĥ′),

then Ĥ′ = Ĥ and H̃′ = H̃. This follows readily from the equations

  L_V = H ⊖ VH = (Ĥ′ ⊕ H̃′) ⊖ (VĤ′ ⊕ VH̃′) = (Ĥ′ ⊕ H̃′) ⊖ (Ĥ′ ⊕ VH̃′) = H̃′ ⊖ VH̃′ = L′_V.
3.2 Classical Wold Decomposition

We consider, throughout, a zero-mean, weakly stationary process x = {x_t}_{t∈Z}, where each x_t belongs to the space L²(Ω, F, P). Throughout the paper, equalities between random variables are in the L² norm. Moreover, we assume that x is purely non-deterministic, i.e. there exists a unit variance White Noise

  ε = {ε_t}_{t∈Z}

such that, for any t in Z,

  x_t = Σ_{h=0}^{+∞} ψ_h·ε_{t−h}.   (3.2)

In case x is not purely non-deterministic, the decomposition we describe is meant for the non-deterministic component of x. The process ε is commonly called the sequence of fundamental innovations of x. The coefficients ψ_h are square-summable, they do not depend on t, and each ψ_h is the projection coefficient of x_t on the linear space generated by ε_{t−h}, namely

  ψ_h = E[x_t·ε_{t−h}].

Consider now the linear subspace of L²(Ω, F, P) spanned by the sequence of fundamental innovations {ε_{t−k}}_{k∈N_0}, that is,

  H_t(ε) = { Σ_{k=0}^{+∞} a_k·ε_{t−k} : Σ_{k=0}^{+∞} a_k² < +∞ }.   (3.3)

In words, H_t(ε) is the space of infinite moving averages, commonly denoted by MA(∞), whose underlying White Noise process is ε. Next, we define the scaling operator R : H_t(ε) → H_t(ε) as follows:

  R : Σ_{k=0}^{+∞} a_k·ε_{t−k} ↦ Σ_{k=0}^{+∞} (a_k/√2)·(ε_{t−2k} + ε_{t−2k−1}) = Σ_{k=0}^{+∞} (a_{⌊k/2⌋}/√2)·ε_{t−k},   (3.4)

where ⌊·⌋ associates any real number c with the integer ⌊c⌋ = max{n ∈ Z : n ≤ c}.
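The action of R on the coefficient sequence (a_k)_k can be sketched as follows (an added illustration; note that R preserves the sum of squared coefficients, i.e. it acts as an isometry):

```python
import math

def scaling_operator(a):
    """Coefficient action of R in eq. (3.4): the image of sum_k a_k eps_{t-k}
    has k-th coefficient a_{floor(k/2)} / sqrt(2), so each a_k is duplicated
    over two adjacent lags and rescaled by 1/sqrt(2)."""
    return [a[k // 2] / math.sqrt(2) for k in range(2 * len(a))]

a = [1.0, 0.5]
b = scaling_operator(a)
print(b)                        # four coefficients, in duplicated pairs
print(sum(v * v for v in b))    # 1.25, the same squared sum as a: an isometry
```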
We now state the Classical Wold Decomposition Theorem for zero-mean,
regular, weakly stationary time series.
Theorem 3.2.1 (Classical Wold Decomposition). Let x = {x_t}_{t∈Z} be a zero-mean, regular, weakly stationary stochastic process. Then, for any t ∈ Z, x_t decomposes as

  x_t = Σ_{k=0}^{+∞} ψ_k·ε_{t−k} + μ_t,   (3.5)

where the equality is in the L² norm and

(i) ε = {ε_t}_{t∈Z} is a unit variance White Noise process;

(ii) for any k ∈ N_0, the coefficients ψ_k do not depend on t,

  ψ_k = E[x_t·ε_{t−k}] and Σ_{k=0}^{∞} ψ_k² < +∞;

(iii) μ = {μ_t}_{t∈Z} is a zero-mean weakly stationary process,

  μ_t ∈ ∩_{j=0}^{+∞} H_{t−j}(x) and E[μ_t·ε_{t−k}] = 0, ∀k ∈ N_0;

(iv)

  μ_t ∈ cl{ Σ_{h=1}^{+∞} a_h·μ_{t−h} ∈ ∩_{j=1}^{+∞} H_{t−j}(x) : a_h ∈ R }.

Proposition 3.2.2. For any fixed j ∈ N, the process {ε^{(j)}_{t−k2^j}}_{k∈Z} is a unit variance White Noise.
Proof. First of all, we show that {ε^{(j)}_{t−k2^j}}_{k∈Z} is weakly stationary.

(i) The variables ε_t are the classical Wold innovations of the process x, hence E[ε_p·ε_q] = 0 for all p ≠ q and E[ε_t²] = 1 for any time index t. Consequently, for any k ∈ Z,

  E[(ε^{(j)}_{t−k2^j})²] = (1/2^j) · E[( Σ_{i=0}^{2^{j−1}−1} ε_{t−k2^j−i} − Σ_{i=0}^{2^{j−1}−1} ε_{t−k2^j−2^{j−1}−i} )²]

  = (1/2^j) · Σ_{i=0}^{2^j−1} E[ε_t²] = (2^j/2^j) · E[ε_t²] = 1.   (3.6)
Thus E[(ε^{(j)}_{t−k2^j})²] is finite and does not depend on k.

(ii) Since E[ε_t] = 0 for any t, we find that E[ε^{(j)}_{t−k2^j}] = 0 for any k ∈ Z, and so the expectation does not depend on k.

(iii) Let us analyse the cross-moments on the support S^{(j)}_t. Taking h ≠ k,

  E[ε^{(j)}_{t−h2^j}·ε^{(j)}_{t−k2^j}] = (1/2^j) · E[( Σ_{i=0}^{2^{j−1}−1} ε_{t−h2^j−i} − Σ_{i=0}^{2^{j−1}−1} ε_{t−h2^j−2^{j−1}−i} ) · ( Σ_{l=0}^{2^{j−1}−1} ε_{t−k2^j−l} − Σ_{l=0}^{2^{j−1}−1} ε_{t−k2^j−2^{j−1}−l} )]

  = (1/2^j) · { Σ_{i=0}^{2^{j−1}−1} Σ_{l=0}^{2^{j−1}−1} E[ε_{t−h2^j−i}·ε_{t−k2^j−l}] − Σ_{i=0}^{2^{j−1}−1} Σ_{l=0}^{2^{j−1}−1} E[ε_{t−h2^j−i}·ε_{t−k2^j−2^{j−1}−l}]

  − Σ_{i=0}^{2^{j−1}−1} Σ_{l=0}^{2^{j−1}−1} E[ε_{t−h2^j−2^{j−1}−i}·ε_{t−k2^j−l}] + Σ_{i=0}^{2^{j−1}−1} Σ_{l=0}^{2^{j−1}−1} E[ε_{t−h2^j−2^{j−1}−i}·ε_{t−k2^j−2^{j−1}−l}] }.

Since h ≠ k, the sets of indices {h2^j, ..., h2^j + 2^j − 1} and {k2^j, ..., k2^j + 2^j − 1} are disjoint, and so all the sums above are null. As a result,

  E[ε^{(j)}_{t−h2^j}·ε^{(j)}_{t−k2^j}] = 0, ∀h ≠ k.

To recap, {ε^{(j)}_{t−k2^j}}_{k∈Z} turns out to be weakly stationary on its support S^{(j)}_t. In particular, it is a White Noise process with variance equal to 1.
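The proof manipulates the scale-j details ε^{(j)}; although their definition is only implicit in this extract, the computations above correspond to the Haar-type form ε^{(j)}_t = 2^{−j/2}·(Σ_{i=0}^{2^{j−1}−1} ε_{t−i} − Σ_{i=0}^{2^{j−1}−1} ε_{t−2^{j−1}−i}), which can be sketched as follows (an added illustration under that assumption):

```python
import math

def scale_detail(eps, j):
    """Scale-j detail built from the last 2^j innovations: the normalised
    difference of two adjacent block sums, as manipulated in the proof above.
    eps is ordered eps_t, eps_{t-1}, eps_{t-2}, ...  (most recent first)."""
    half = 2 ** (j - 1)
    first = sum(eps[:half])           # eps_t, ..., eps_{t - 2^{j-1} + 1}
    second = sum(eps[half:2 * half])  # the next 2^{j-1} innovations
    return (first - second) / math.sqrt(2 ** j)

eps = [1.0, -1.0, 0.5, 0.5]
print(scale_detail(eps, 1))   # (1 - (-1)) / sqrt(2)
print(scale_detail(eps, 2))   # (1 + (-1) - 0.5 - 0.5) / 2 = -0.5
```

Because each detail averages 2^j unit-variance, uncorrelated innovations with weights ±2^{−j/2}, its variance is 2^j · (2^{−j/2})² = 1, matching (3.6).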
Chapter 4

The Extended Wold Decomposition of x_t

Since x is a purely non-deterministic process, the Classical Wold Theorem guarantees that x_t belongs to H_t(ε). Therefore, the decomposition of the space H_t(ε) implies that there exists a sequence {g^{(j)}_t}_{j∈N} of random variables such that

  x_t = Σ_{j=1}^{+∞} g^{(j)}_t,   (4.1)

where each g^{(j)}_t is the orthogonal projection of x_t on R^{j−1}L^R_t.

Definition 4.0.1. We call persistent component at scale j the orthogonal projection of x_t on the subspace R^{j−1}L^R_t of H_t(ε), and we denote it by g^{(j)}_t.

By construction, given t, the persistent components g^{(j)}_t are orthogonal to each other. Since each g^{(j)}_t belongs to R^{j−1}L^R_t,

  g^{(j)}_t = Σ_{k=0}^{+∞} β^{(j)}_k·ε^{(j)}_{t−k2^j}   (4.2)

for some square-summable sequence of real coefficients {β^{(j)}_k}_k. Each β^{(j)}_k is the Fourier coefficient obtained by projecting x_t on the linear subspace generated by the detail ε^{(j)}_{t−k2^j}, i.e.

  β^{(j)}_k = E[x_t·ε^{(j)}_{t−k2^j}].   (4.3)
Substituting the expression of g^{(j)}_t into eq. (4.1), we obtain the Extended Wold Decomposition of x_t.

Definition 4.0.2 (Extended Wold Decomposition). We call Extended Wold Decomposition of x_t the decomposition

  x_t = Σ_{j=1}^{+∞} Σ_{k=0}^{+∞} β^{(j)}_k·ε^{(j)}_{t−k2^j}.

Moreover, we call β^{(j)}_k the multiscale impulse response function associated to the innovation at scale j and time translation k2^j.

It is important at this point to establish the relation between the coefficients β^{(j)}_k of the Extended Wold Decomposition and the coefficients ψ_h of the Classical Wold Decomposition of x_t. Recall that the White Noise ε is the process of classical Wold innovations of x. Moreover, the Extended Wold Decomposition is based on the variables ε^{(j)}_t, which are finite linear combinations of the variables ε_t. Therefore, both decompositions actually exploit the same fundamental innovations. Since the Wold coefficients ψ_h are unique, given the process ε, it follows that the coefficients β^{(j)}_k are functions of the coefficients ψ_h. In addition, this connection ensures that the multiscale impulse response functions β^{(j)}_k are independent of the time index t relative to the variable x_t that we decompose.

Proposition 4.0.1. For any j ∈ N, k ∈ N_0,

  β^{(j)}_k = (1/√(2^j)) · ( Σ_{i=0}^{2^{j−1}−1} ψ_{k2^j+i} − Σ_{i=0}^{2^{j−1}−1} ψ_{k2^j+2^{j−1}+i} );   (4.4)

hence β^{(j)}_k does not depend on t. In addition, lim_{k→+∞} β^{(j)}_k = 0 for any j ∈ N.
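Formula (4.4) can be checked numerically. The sketch below (an added illustration, not code from the thesis) treats ψ_h as zero beyond a supplied finite list; for j = 1 it reproduces β^{(1)}_k = (ψ_{2k} − ψ_{2k+1})/√2, consistent with eq. (4.11).

```python
import math

def multiscale_coefficient(psi, j, k):
    """beta_k^(j) from the classical Wold coefficients psi_h via eq. (4.4):
    a normalised difference of two adjacent blocks of 2^(j-1) coefficients,
    starting at h = k*2^j.  psi_h is taken as 0 beyond the supplied list."""
    half = 2 ** (j - 1)
    psi_at = lambda h: psi[h] if h < len(psi) else 0.0
    first = sum(psi_at(k * 2 ** j + i) for i in range(half))
    second = sum(psi_at(k * 2 ** j + half + i) for i in range(half))
    return (first - second) / math.sqrt(2 ** j)

psi = [1.0, 0.5, 0.25, 0.125]              # an illustrative MA(3)
print(multiscale_coefficient(psi, 1, 0))   # (psi_0 - psi_1)/sqrt(2)
print(multiscale_coefficient(psi, 2, 0))   # (psi_0 + psi_1 - psi_2 - psi_3)/2
```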
Proof. To begin with, we show that, for any fixed scale j ∈ N,

  g^{(j)}_t = Σ_{k=0}^{+∞} β^{(j)}_k·ε^{(j)}_{t−k2^j}

belongs to R^{j−1}L^R_t. Actually, we just have to prove that the series is convergent. By making the variables ε^{(j)}_{t−k2^j} explicit with respect to the Classical Wold innovations of x_t, we derive that

  g^{(j)}_t = Σ_{k=0}^{+∞} ( Σ_{l=0}^{2^{j−1}−1} (β^{(j)}_k/√(2^j))·ε_{t−k2^j−l} − Σ_{l=0}^{2^{j−1}−1} (β^{(j)}_k/√(2^j))·ε_{t−k2^j−2^{j−1}−l} ).
For any h ∈ N_0, we uniquely find k ∈ N_0 and l ∈ {0, 1, 2, ..., 2^j−1} such that h = k·2^j + l. Consequently, we can express the component g^{(j)}_t as

  g^{(j)}_t = Σ_{h=0}^{+∞} α^{(j)}_h·ε_{t−h},

where the coefficients α^{(j)}_h are defined by

  α^{(j)}_h = β^{(j)}_k/√(2^j), for k ∈ N_0, l ∈ {0, 1, 2, ..., 2^{j−1}−1};
  α^{(j)}_h = −β^{(j)}_k/√(2^j), for k ∈ N_0, l ∈ {2^{j−1}, ..., 2^j−1}.

Therefore, we check the convergence of the series

  Σ_{h=0}^{+∞} (α^{(j)}_h)².

By using the inequality

  (Σ_{i=0}^{n} a_i)² ≤ (n+1) · Σ_{i=0}^{n} a_i², ∀n ∈ N,

it is easy to show that

  Σ_{h=0}^{+∞} (α^{(j)}_h)² = Σ_{k=0}^{+∞} (β^{(j)}_k)² ≤ Σ_{h=0}^{+∞} ψ_h²,

which is finite because x_t belongs to H(ε). As a result, g^{(j)}_t belongs to R^{j−1}L^R_t. Furthermore, we actually showed that the coefficients β^{(j)}_k are square-summable. Inter alia, this ensures that, for any fixed scale j ∈ N, lim_{k→∞} β^{(j)}_k = 0. In order to find the exact expression of the coefficients β^{(j)}_k, we exploit the orthogonal decompositions of the space H(ε) at different scales J ∈ N:

  H(ε) = R^J H(ε) ⊕ ⊕_{j=1}^{J} R^{j−1}L^R_t.

We call γ^{(J)}_t the orthogonal projection of x_t on the subspace R^J H(ε), and we proceed inductively. Let us start with the first decomposition, x_t = γ^{(1)}_t + g^{(1)}_t, coming from the scale J = 1:

  H(ε) = R H(ε) ⊕ L^R_t.
By using the definitions of the elements belonging to the subspaces $RH(\varepsilon)$ and $L^R_t$, we set
\[
\pi_t^{(1)} = \sum_{k=0}^{+\infty} \gamma_k^{(1)} \frac{\varepsilon_{t-2k}+\varepsilon_{t-(2k+1)}}{\sqrt{2}} = \sum_{k=0}^{+\infty} c_k^{(1)} \bigl(\varepsilon_{t-2k}+\varepsilon_{t-(2k+1)}\bigr), \tag{4.5}
\]
\[
g_t^{(1)} = \sum_{k=0}^{+\infty} \psi_k^{(1)} \varepsilon^{(1)}_{t-2k} = \sum_{k=0}^{+\infty} d_k^{(1)} \bigl(\varepsilon_{t-2k}-\varepsilon_{t-2k-1}\bigr) \tag{4.6}
\]
for some sequences of coefficients $\{c_k^{(1)}\}_k$ and $\{d_k^{(1)}\}_k$, or equivalently $\{\gamma_k^{(1)}\}_k$ and $\{\psi_k^{(1)}\}_k$, to determine in order to have $x_t = \pi_t^{(1)} + g_t^{(1)}$, where we set $\sqrt{2}\, c_k^{(1)} = \gamma_k^{(1)}$ and $\sqrt{2}\, d_k^{(1)} = \psi_k^{(1)}$. The expressions above may be rewritten as
\[
x_t = \sum_{k=0}^{+\infty} \Bigl\{ \bigl(c_k^{(1)}+d_k^{(1)}\bigr)\, \varepsilon_{t-2k} + \bigl(c_k^{(1)}-d_k^{(1)}\bigr)\, \varepsilon_{t-2k-1} \Bigr\}.
\]
However, from the Classical Wold Decomposition of $x$, we know that
\[
x_t = \sum_{k=0}^{+\infty} \bigl\{ \alpha_{2k}\, \varepsilon_{t-2k} + \alpha_{2k+1}\, \varepsilon_{t-2k-1} \bigr\},
\]
where we use the same fundamental innovations as before. By exploiting the uniqueness of the representation deriving from the Classical Wold Decomposition, the two expressions for $x_t$ must coincide. As a result, $c_k^{(1)}$ and $d_k^{(1)}$ are the solutions of the linear system
\[
\begin{cases}
c_k^{(1)} + d_k^{(1)} = \alpha_{2k} \\
c_k^{(1)} - d_k^{(1)} = \alpha_{2k+1},
\end{cases} \tag{4.7}
\]
that is,
\[
c_k^{(1)} = \frac{\alpha_{2k}+\alpha_{2k+1}}{2}, \tag{4.8}
\]
\[
d_k^{(1)} = \frac{\alpha_{2k}-\alpha_{2k+1}}{2}. \tag{4.9}
\]
In particular, we find that
\[
\gamma_k^{(1)} = \frac{\alpha_{2k}+\alpha_{2k+1}}{\sqrt{2}}, \tag{4.10}
\]
\[
\psi_k^{(1)} = \frac{\alpha_{2k}-\alpha_{2k+1}}{\sqrt{2}}. \tag{4.11}
\]
Hence
\[
\pi_t^{(1)} = \sum_{k=0}^{+\infty} \frac{\alpha_{2k}+\alpha_{2k+1}}{\sqrt{2}} \cdot \frac{\varepsilon_{t-2k}+\varepsilon_{t-2k-1}}{\sqrt{2}}, \tag{4.12}
\]
\[
g_t^{(1)} = \sum_{k=0}^{+\infty} \frac{\alpha_{2k}-\alpha_{2k+1}}{\sqrt{2}}\, \varepsilon^{(1)}_{t-2k}. \tag{4.13}
\]
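As a quick numerical sanity check on the scale-one formulas, pairing and recombining the Wold coefficients must return them unchanged. The sketch below (the truncated coefficient sequence is purely illustrative, not from the text) computes $c_k^{(1)}$ and $d_k^{(1)}$ as in (4.8)–(4.9) and confirms that $c_k^{(1)}+d_k^{(1)}$ and $c_k^{(1)}-d_k^{(1)}$ recover $\alpha_{2k}$ and $\alpha_{2k+1}$ exactly:

```python
# Illustrative check of the scale-1 system (4.7)-(4.9): splitting alpha into
# even/odd pairs and recombining must reproduce the classical Wold coefficients.
alpha = [1.0, 0.6, 0.36, 0.216, 0.1296, 0.07776]  # truncated MA coefficients

c = [(alpha[2*k] + alpha[2*k+1]) / 2 for k in range(len(alpha) // 2)]  # (4.8)
d = [(alpha[2*k] - alpha[2*k+1]) / 2 for k in range(len(alpha) // 2)]  # (4.9)

# coefficient of eps_{t-2k} is c_k + d_k, of eps_{t-2k-1} is c_k - d_k
rebuilt = []
for k in range(len(c)):
    rebuilt += [c[k] + d[k], c[k] - d[k]]

assert all(abs(a - b) < 1e-12 for a, b in zip(alpha, rebuilt))
```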
Now, focus on the scale $J = 2$. We exploit the decomposition of the space
\[
R H_t(\varepsilon) = R^2 H_t(\varepsilon) \oplus R L^R_t, \tag{4.14}
\]
which implies the relation
\[
\pi_t^{(1)} = \pi_t^{(2)} + g_t^{(2)}.
\]
We follow the same track as in the previous case, by using the features of the coefficients of the elements in $R^2 H_t(\varepsilon)$ and in $R L^R_t$ and, finally, by comparing the expression of $\pi_t^{(2)} + g_t^{(2)}$ with the (unique) representation of $\pi_t^{(1)}$ that we found before. By solving a simple linear system we discover that
\[
c_k^{(2)} = \frac{\alpha_{4k}+\alpha_{4k+1}+\alpha_{4k+2}+\alpha_{4k+3}}{4}, \tag{4.15}
\]
\[
d_k^{(2)} = \frac{\alpha_{4k}+\alpha_{4k+1}-\alpha_{4k+2}-\alpha_{4k+3}}{4} \tag{4.16}
\]
and, in particular,
\[
\gamma_k^{(2)} = \frac{\alpha_{4k}+\alpha_{4k+1}+\alpha_{4k+2}+\alpha_{4k+3}}{2}, \tag{4.17}
\]
\[
\psi_k^{(2)} = \frac{\alpha_{4k}+\alpha_{4k+1}-\alpha_{4k+2}-\alpha_{4k+3}}{2}. \tag{4.18}
\]
Consequently,
\[
\pi_t^{(2)} = \sum_{k=0}^{+\infty} \frac{\alpha_{4k}+\alpha_{4k+1}+\alpha_{4k+2}+\alpha_{4k+3}}{2} \cdot \frac{\varepsilon_{t-4k}+\varepsilon_{t-(4k+1)}+\varepsilon_{t-(4k+2)}+\varepsilon_{t-(4k+3)}}{2}, \tag{4.19}
\]
\[
g_t^{(2)} = \sum_{k=0}^{+\infty} \frac{\alpha_{4k}+\alpha_{4k+1}-\alpha_{4k+2}-\alpha_{4k+3}}{2}\, \varepsilon^{(2)}_{t-4k}. \tag{4.20}
\]
As for the generic scale $J = j$, we claim that
\[
\gamma_k^{(j)} = \frac{1}{\sqrt{2^j}} \sum_{i=0}^{2^j-1} \alpha_{k2^j+i}, \tag{4.21}
\]
\[
\psi_k^{(j)} = \frac{1}{\sqrt{2^j}} \left( \sum_{i=0}^{2^{j-1}-1} \alpha_{k2^j+i} - \sum_{i=0}^{2^{j-1}-1} \alpha_{k2^j+2^{j-1}+i} \right). \tag{4.22}
\]
In addition, it may be helpful to highlight the expressions of $\pi_t^{(j)}$ and $g_t^{(j)}$:
\[
\pi_t^{(j)} = \sum_{k=0}^{+\infty} \left( \frac{1}{\sqrt{2^j}} \sum_{i=0}^{2^j-1} \alpha_{k2^j+i} \right) \frac{1}{\sqrt{2^j}} \sum_{i=0}^{2^j-1} \varepsilon_{t-k2^j-i}, \tag{4.23}
\]
\[
g_t^{(j)} = \sum_{k=0}^{+\infty} \frac{1}{\sqrt{2^j}} \left( \sum_{i=0}^{2^{j-1}-1} \alpha_{k2^j+i} - \sum_{i=0}^{2^{j-1}-1} \alpha_{k2^j+2^{j-1}+i} \right) \varepsilon^{(j)}_{t-k2^j}. \tag{4.24}
\]
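The coefficient map (4.21)–(4.22) acts on the sequence $(\alpha_h)$ like a Haar-type transform. As an illustrative sketch (function names and the AR(1)-style sample coefficients are our own, not from the text), one can compute the details $\psi_k^{(j)}$ up to a finite scale $J$ together with the approximation coefficients $\gamma_k^{(J)}$, and verify that they reconstruct every $\alpha_h$ exactly, mirroring $x_t = \pi_t^{(J)} + \sum_{j\le J} g_t^{(j)}$:

```python
import math

def extended_wold_coeffs(alpha, J):
    """Details psi[j][k] for j = 1..J (formula (4.22)) and approximation
    gamma[k] (formula (4.21) at scale J); len(alpha) must be a multiple of 2**J."""
    psi = {}
    for j in range(1, J + 1):
        half, block = 2 ** (j - 1), 2 ** j
        psi[j] = [(sum(alpha[k*block + i] for i in range(half))
                   - sum(alpha[k*block + half + i] for i in range(half)))
                  / math.sqrt(block)
                  for k in range(len(alpha) // block)]
    block = 2 ** J
    gamma = [sum(alpha[k*block + i] for i in range(block)) / math.sqrt(block)
             for k in range(len(alpha) // block)]
    return psi, gamma

def reconstruct(psi, gamma, J, n):
    # invert: alpha_h = gamma_k / sqrt(2^J) + sum_j sign * psi_k^(j) / sqrt(2^j),
    # where h = k*2^j + l and sign is + for l < 2^(j-1), - otherwise
    alpha = []
    for h in range(n):
        a = gamma[h // 2**J] / math.sqrt(2**J)
        for j in range(1, J + 1):
            k, l = divmod(h, 2**j)
            sign = 1.0 if l < 2**(j-1) else -1.0
            a += sign * psi[j][k] / math.sqrt(2**j)
        alpha.append(a)
    return alpha

alpha = [0.9 ** h for h in range(16)]          # AR(1)-style Wold coefficients
psi, gamma = extended_wold_coeffs(alpha, J=4)
assert max(abs(a - b) for a, b in zip(alpha, reconstruct(psi, gamma, 4, 16))) < 1e-12
```

Note that at scale one the code reproduces (4.11): `psi[1][0]` equals $(\alpha_0-\alpha_1)/\sqrt{2}$.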
Teorema 4.0.2 (Extended Wold Decomposition). Let $x$ be a zero-mean, weakly stationary, purely non-deterministic stochastic process. Then $x_t$ decomposes as
\[
x_t = \sum_{j=1}^{+\infty} \sum_{k=0}^{+\infty} \psi_k^{(j)} \varepsilon^{(j)}_{t-k2^j},
\]
where the equality is in the $L^2$-norm and

(i) for any fixed $j \in \mathbb{N}$, the process $\varepsilon^{(j)} = \{\varepsilon^{(j)}_t\}_{t\in\mathbb{Z}}$ is an MA$(2^j-1)$ with respect to the classical Wold innovations of $x$:
\[
\varepsilon^{(j)}_t = \frac{1}{\sqrt{2^j}} \left( \sum_{i=0}^{2^{j-1}-1} \varepsilon_{t-i} - \sum_{i=0}^{2^{j-1}-1} \varepsilon_{t-2^{j-1}-i} \right)
\]
and $\{\varepsilon^{(j)}_{t-k2^j}\}_{k\in\mathbb{Z}}$ is a unit-variance White Noise;

(ii) for any $j \in \mathbb{N}$ and $k \in \mathbb{N}_0$ the coefficients $\psi_k^{(j)}$ are unique and they satisfy
\[
\psi_k^{(j)} = \frac{1}{\sqrt{2^j}} \left( \sum_{i=0}^{2^{j-1}-1} \alpha_{k2^j+i} - \sum_{i=0}^{2^{j-1}-1} \alpha_{k2^j+2^{j-1}+i} \right);
\]
hence they do not depend on $t$ and $\sum_{k=0}^{+\infty} \bigl(\psi_k^{(j)}\bigr)^2 < +\infty$ for any $j \in \mathbb{N}$;

(iii) letting
\[
g_t^{(j)} = \sum_{k=0}^{+\infty} \psi_k^{(j)} \varepsilon^{(j)}_{t-k2^j},
\]
then, for any $j, l \in \mathbb{N}$, $p, q, t \in \mathbb{Z}$,
\[
E\bigl[ g^{(j)}_{t-p}\, g^{(l)}_{t-q} \bigr]
\]
depends at most on $j$, $l$, $p-q$. Moreover,
\[
E\bigl[ g^{(j)}_{t-m2^j}\, g^{(l)}_{t-n2^l} \bigr] = 0 \qquad \forall j \neq l,\ m, n \in \mathbb{N}_0,\ \forall t \in \mathbb{Z}.
\]
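Property (i) can be illustrated by simulation. The following is a minimal sketch (sample size, seed and tolerances are arbitrary choices of ours): build $\varepsilon^{(j)}_t$ from a simulated Gaussian White Noise according to the formula above, subsample every $2^j$ steps, and check that the subsampled process has approximately unit variance and negligible autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(200_000)   # simulated fundamental White Noise
j = 3
half = 2 ** (j - 1)                  # 2^(j-1) = 4

# eps^{(j)}_t = (sum_{i<half} eps_{t-i} - sum_{i<half} eps_{t-half-i}) / sqrt(2^j)
t = np.arange(2 * half - 1, len(eps))
eps_j = (sum(eps[t - i] for i in range(half))
         - sum(eps[t - half - i] for i in range(half))) / np.sqrt(2 ** j)

sub = eps_j[:: 2 ** j]               # the subprocess {eps^{(j)}_{t-k 2^j}}_k

assert abs(sub.var() - 1.0) < 0.05                        # unit variance
assert abs(np.corrcoef(sub[:-1], sub[1:])[0, 1]) < 0.05   # uncorrelated
```

The subsampled windows of length $2^j$ are disjoint, which is why the lag-$2^j$ correlation vanishes even though $\varepsilon^{(j)}_t$ itself is autocorrelated at shorter lags.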
Demonstrație. The representation of $x_t$ comes from the Wold decomposition of the space $H_t(\varepsilon)$. Indeed, by applying the Classical Wold Decomposition to the zero-mean, weakly stationary, purely non-deterministic stochastic process $x$, we find that $x_t$ belongs to the Hilbert space $H_t(\varepsilon)$, where $\varepsilon = \{\varepsilon_t\}_t$ is the White Noise process of classical Wold innovations of $x$. Afterwards, by exploiting the orthogonal decomposition of $H_t(\varepsilon)$, which is justified by the fact that the scaling operator $R$ is isometric on $H_t(\varepsilon)$, we know that
\[
H_t(\varepsilon) = \bigoplus_{j=1}^{+\infty} R^{j-1} L^R_t,
\]
where
\[
R^{j-1} L^R_t = \left\{ \sum_{k=0}^{+\infty} \beta_k^{(j)} \varepsilon^{(j)}_{t-k2^j} \in H_t(\varepsilon) : \beta_k^{(j)} \in \mathbb{R} \right\}.
\]
Recall that the random variables $\varepsilon^{(j)}_t$ are defined by
\[
\varepsilon^{(j)}_t = \frac{1}{\sqrt{2^j}} \left( \sum_{i=0}^{2^{j-1}-1} \varepsilon_{t-i} - \sum_{i=0}^{2^{j-1}-1} \varepsilon_{t-2^{j-1}-i} \right).
\]
Hence, by denoting $g_t^{(j)}$ the orthogonal projection of the variable $x_t$ on the subspace $R^{j-1} L^R_t$, we find that
\[
x_t = \sum_{j=1}^{+\infty} g_t^{(j)},
\]
where the equality is in the $L^2$-norm. Then, by using the characterization of the subspace $R^{j-1} L^R_t$, for any scale $j \in \mathbb{N}$ we can find a square-summable sequence of real coefficients $\{\psi_k^{(j)}\}_k$ such that
\[
g_t^{(j)} = \sum_{k=0}^{+\infty} \psi_k^{(j)} \varepsilon^{(j)}_{t-k2^j}.
\]
As a result, we are allowed to decompose the variable $x_t$ as
\[
x_t = \sum_{j=1}^{+\infty} \sum_{k=0}^{+\infty} \psi_k^{(j)} \varepsilon^{(j)}_{t-k2^j},
\]
where the equality is in the $L^2$-norm.
(i) As we can see from the definition of the variables $\varepsilon^{(j)}_t$, the process $\varepsilon^{(j)}$ is an MA$(2^j-1)$ with respect to the fundamental innovations $\varepsilon$ of the process $x$. In addition, the subprocess $\{\varepsilon^{(j)}_{t-k2^j}\}_{k\in\mathbb{Z}}$ is a White Noise with variance equal to 1.

(ii) For any fixed scale $j \in \mathbb{N}$, once the detail process $\varepsilon^{(j)}$ is defined, since the variables $\varepsilon^{(j)}_{t-k2^j}$ are orthonormal when $k$ varies, the component $g_t^{(j)}$ has a unique representation of the kind
\[
g_t^{(j)} = \sum_{k=0}^{+\infty} \psi_k^{(j)} \varepsilon^{(j)}_{t-k2^j}.
\]
Thus, the coefficients $\psi_k^{(j)}$ are uniquely defined. By Propoziția 4.0.1, the coefficients $\psi_k^{(j)}$ do not depend on $t$ and they can be expressed in terms of the classical Wold coefficients $\alpha_h$ of $x_t$ as
\[
\psi_k^{(j)} = \frac{1}{\sqrt{2^j}} \left( \sum_{i=0}^{2^{j-1}-1} \alpha_{k2^j+i} - \sum_{i=0}^{2^{j-1}-1} \alpha_{k2^j+2^{j-1}+i} \right).
\]
Moreover, $\sum_k \bigl(\psi_k^{(j)}\bigr)^2 < +\infty$ for any $j \in \mathbb{N}$, as explicitly shown in the proof of Propoziția 4.0.1. Indeed, it holds that
\[
\sum_{k=0}^{+\infty} \bigl(\psi_k^{(j)}\bigr)^2 \le \sum_{h=0}^{+\infty} \alpha_h^2 < +\infty.
\]
(iii) First of all, when $t$ is fixed,
\[
E\bigl[ g_t^{(j)} g_t^{(l)} \bigr] = 0
\]
for all $j \neq l$, because $g_t^{(j)}$ and $g_t^{(l)}$ are, respectively, the projections of $x_t$ on the subspaces $R^{j-1}L^R_t$ and $R^{l-1}L^R_t$, which are orthogonal by
construction. Now, consider any $g^{(j)}_{t-m2^j}$ with $m \in \mathbb{N}_0$. Clearly, $g^{(j)}_{t-m2^j}$ belongs to $R^{j-1}L^R_{t-m2^j}$ but, by the definition of $g_t^{(j)}$, we can write
\[
g^{(j)}_{t-m2^j} = \sum_{k=0}^{+\infty} \psi_k^{(j)} \varepsilon^{(j)}_{t-(m+k)2^j} = \sum_{K=0}^{+\infty} \tilde{\psi}_K^{(j)} \varepsilon^{(j)}_{t-K2^j},
\]
where
\[
\tilde{\psi}_K^{(j)} =
\begin{cases}
0, & \text{if } K \in \{0,1,2,\dots,m-1\} \\
\psi_k^{(j)}, & \text{if } K = m+k \text{ for some } k \in \mathbb{N}_0.
\end{cases}
\]
As a result, $g^{(j)}_{t-m2^j}$ belongs to $R^{j-1}L^R_t$, too. Similarly, at scale $l$, taken any $n \in \mathbb{N}_0$, it is easy to see that $g^{(l)}_{t-n2^l}$ belongs to $R^{l-1}L^R_t$. Hence, the orthogonality of these subspaces guarantees that
\[
E\bigl[ g^{(j)}_{t-m2^j}\, g^{(l)}_{t-n2^l} \bigr] = 0 \qquad \forall j \neq l,\ m, n \in \mathbb{N}_0.
\]
As for the more general requirement concerning $E\bigl[ g^{(j)}_{t-p} g^{(l)}_{t-q} \bigr]$ for any $j, l \in \mathbb{N}$ and $p, q, t \in \mathbb{Z}$, we have that
\[
E\bigl[ g^{(j)}_{t-p}\, g^{(l)}_{t-q} \bigr] = \sum_{k=0}^{+\infty} \sum_{h=0}^{+\infty} \psi_k^{(j)} \psi_h^{(l)}\, E\bigl[ \varepsilon^{(j)}_{t-p-k2^j}\, \varepsilon^{(l)}_{t-q-h2^l} \bigr]
= \frac{1}{\sqrt{2^{j+l}}} \sum_{k=0}^{+\infty} \sum_{h=0}^{+\infty} \psi_k^{(j)} \psi_h^{(l)} \sum_{u=0}^{2^{j-1}-1} \sum_{v=0}^{2^{l-1}-1}
\]
\[
\bigl\{ E[\varepsilon_{t-p-k2^j-u}\, \varepsilon_{t-q-h2^l-v}]
- E[\varepsilon_{t-p-k2^j-u}\, \varepsilon_{t-q-h2^l-2^{l-1}-v}]
- E[\varepsilon_{t-p-k2^j-2^{j-1}-u}\, \varepsilon_{t-q-h2^l-v}]
+ E[\varepsilon_{t-p-k2^j-2^{j-1}-u}\, \varepsilon_{t-q-h2^l-2^{l-1}-v}] \bigr\}
\]
and so
\[
E\bigl[ g^{(j)}_{t-p}\, g^{(l)}_{t-q} \bigr] = \frac{1}{\sqrt{2^{j+l}}} \sum_{k=0}^{+\infty} \sum_{h=0}^{+\infty} \psi_k^{(j)} \psi_h^{(l)} \sum_{u=0}^{2^{j-1}-1} \sum_{v=0}^{2^{l-1}-1}
\]
\[
\bigl\{ \delta(p-q+k2^j+u-h2^l-v)
- \delta(p-q+k2^j+u-h2^l-2^{l-1}-v)
- \delta(p-q+k2^j+2^{j-1}+u-h2^l-v)
+ \delta(p-q+k2^j+2^{j-1}+u-h2^l-2^{l-1}-v) \bigr\},
\]
where the coefficients $\psi_k^{(j)}$, $\psi_h^{(l)}$ do not depend on $t$. After the summations over $u, v$ and $k, h$, the only remaining variables are $j$, $l$, $p-q$. In other words, $E\bigl[ g^{(j)}_{t-p}\, g^{(l)}_{t-q} \bigr]$ depends at most on $j$, $l$, $p-q$.
Concluzii
In mathematics, especially in operator theory, the Wold decomposition is a classification theorem for isometric linear operators on a Hilbert space: every isometry is the direct sum of a unilateral shift and a unitary operator.
In time series analysis, the theorem implies that any discrete-time, covariance-stationary stochastic process can be decomposed into a pair of uncorrelated processes, one deterministic and the other a moving average process.
Chapter 1 dealt with time series. A time series (TS) consists of a string $(Y_t)$ of values relative to a particular entity, indicating its observed state at equally spaced time points $t$. In economics, a TS is considered to be the realization of a string of random variables $(Y_t)$, generally stochastically dependent. Since the random variables form a random string (or chain), the name "temporal string" would have been more accurate.
The analysis of temporal series aims at deducing the structure and laws of their evolution so as to reach predictions of future values. The first three components are deterministic, while the fourth is random: $A_t$ has normal distribution $N(0, \sigma_t^2)$ and thus forms a Gaussian random process. Each of these components is studied by one or more specific methods, after which the temporal series is described by the synthesis of $T_t$, $C_t$, $S_t$ and $A_t$, in a mathematical model corresponding to the nature of the temporal series.
Chapter 2 dealt with the Wold representation and its approximation: the time series $Y_t$ (doubly infinite), a realization $y_t$ (again doubly infinite), and the sample path $y_t$, $t = 1, 2, 3, \dots, T$.
1. Strict stationarity: the joint distribution of any set of observations depends only on displacement, not on time.
2. Weak stationarity (second-order stationarity, wide-sense stationarity, covariance stationarity, ...).
In theoretical work we assume a zero mean, $\mu = 0$. This reduces notational clutter and is without loss of generality: think of $y_t$ as having been centered around its mean, $\mu$, and note that $y_t$ then has zero mean by construction. In empirical work we allow explicitly for a non-zero mean, either by centering the data around the sample mean or by including an intercept.
Chapter 3 approached the extended Wold decomposition: the abstract Wold theorem and the Classical Wold Decomposition. Let $H$ be a Hilbert space and $V : H \to H$ a linear operator.

Definiția 4.0.3. Let $V$ be bounded. The adjoint, or transpose, of $V$ is the bounded linear operator $V^* : H \to H$ that satisfies the relation
\[
\langle Vx, y \rangle = \langle x, V^* y \rangle \qquad \forall x, y \in H.
\]
In particular, $\|V^*\| = \|V\|$.

Definiția 4.0.4. $V$ is an isometry when
\[
\langle Vx, Vy \rangle = \langle x, y \rangle \qquad \forall x, y \in H
\]
or, equivalently, $V^* V = I$, where $I$ denotes the identity map on $H$.

If $V$ is an isometric operator, its norm is equal to 1 and its powers $V^j$, with $j \in \mathbb{N}$, are isometries, too. Isometries allow us to decompose Hilbert spaces orthogonally. The building block of the decomposition is the so-called wandering subspace, which is also termed detail subspace or innovation subspace.
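The unilateral shift is the canonical example of an isometry that is not unitary. A minimal finite-dimensional sketch of ours (truncating the shift on $\ell^2$ to an $(n+1)\times n$ matrix for illustration): $V^\top V = I$ holds, $VV^\top \neq I$, and $I - VV^\top$ is the rank-one projection onto the wandering (innovation) subspace:

```python
import numpy as np

n = 5
# Unilateral shift V: (a_0, ..., a_{n-1}) -> (0, a_0, ..., a_{n-1}),
# represented as an (n+1) x n matrix.
V = np.zeros((n + 1, n))
for i in range(n):
    V[i + 1, i] = 1.0

# V* V = I  ->  V is an isometry ...
assert np.allclose(V.T @ V, np.eye(n))
# ... but V V* != I: V is a proper (non-unitary) isometry, and
# I - V V* is the rank-one projection onto the wandering subspace.
P = np.eye(n + 1) - V @ V.T
assert P[0, 0] == 1.0 and np.allclose(P[1:, 1:], 0.0)
```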
Chapter 3 also approached the extended Wold decomposition of $x_t$. Since $x$ is a purely non-deterministic process, the Classical Wold Theorem guarantees that $x_t$ belongs to $H_t(\varepsilon)$, where $\varepsilon = \{\varepsilon_t\}_t$ is the White Noise process of classical Wold innovations of $x$. Therefore, the decomposition of the space $H_t(\varepsilon)$ implies that there exists a sequence $\{g_t^{(j)}\}_{j\in\mathbb{N}}$ of random variables such that
\[
x_t = \sum_{j=1}^{+\infty} g_t^{(j)}, \tag{4.25}
\]
where each $g_t^{(j)}$ is the orthogonal projection of $x_t$ on $R^{j-1}L^R_t$. Since the Extended Wold Decomposition is built from variables $\varepsilon^{(j)}_t$ that are finite linear combinations of the classical innovations $\varepsilon_t$, both decompositions exploit the same fundamental innovations, and the multiscale coefficients $\psi_k^{(j)}$ are functions of the classical Wold coefficients $\alpha_h$, independent of the time index $t$ relative to the variable $x_t$ that we decompose.