Contents

1 Basic notions                                                           2
  1.1 Martingales, Stopping Times and Filtrations . . . . . . . . . .    2
      1.1.1 Stochastic Processes, Filtrations and Stopping Times . .     2
      1.1.2 Continuous-Time Martingales . . . . . . . . . . . . . . .    6
      1.1.3 The Doob-Meyer Decomposition and Square-Integrable
            Martingales . . . . . . . . . . . . . . . . . . . . . . .    8

2 Stochastic Integration                                                 12
  2.1 Construction of the Stochastic Integral . . . . . . . . . . . .   12
  2.2 The Itô Formula . . . . . . . . . . . . . . . . . . . . . . . .   15
  2.3 Representation of Continuous Martingales in Terms of Brownian
      Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . .    16

3 Stochastic Differential Equations                                     20
  3.1 Strong Solutions . . . . . . . . . . . . . . . . . . . . . . .    21
  3.2 Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . .    25

4 Applications of Stochastic Differential Equations                     27
  4.1 The Martingale Problem of Stroock and Varadhan . . . . . . . .    27
  4.2 Gauss-Markov Processes . . . . . . . . . . . . . . . . . . . .    28
  4.3 Applications to Economics . . . . . . . . . . . . . . . . . . .   29

References                                                              31
Chapter 1
Basic notions
This chapter presents the basic notions of stochastic analysis that may be
considered prerequisites for the development of the present paper.
1.1 Martingales, Stopping Times and Filtrations

The general framework is a probability space, i.e. a measurable space
$(\Omega, \mathcal{F})$ endowed with a probability measure $P$. The
measurable space $(\Omega, \mathcal{F})$ is referred to as the sample space.

A random variable on $(\Omega, \mathcal{F})$ is a measurable function
$X : \Omega \to S$, where $(S, \mathcal{S})$ is another measurable space
called the state space. In most cases, and throughout this paper, the state
space is taken to be $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$, where
$\mathcal{B}(\mathbb{R}^d)$ denotes the Borel $\sigma$-algebra of subsets of
$\mathbb{R}^d$. The value $X(\omega)$ of a random variable at a sample point
$\omega \in \Omega$ is called a realization.
1.1.1 Stochastic Processes, Filtrations and Stopping Times

Definition 1.1.1. A stochastic process is a mathematical model for the
occurrence, at each moment after the initial time, of a random phenomenon.
Formally, a stochastic process is a family of random variables
$X = \{X_t;\ 0 \le t < \infty\}$ on $(\Omega, \mathcal{F})$. For a fixed
sample point $\omega \in \Omega$, the function $t \mapsto X_t(\omega)$,
$t \ge 0$, is called a sample path.

Let $X$ and $Y$ be two stochastic processes on $(\Omega, \mathcal{F}, P)$.
We present some weakened senses in which these processes may be considered
"equal".
Gavrilă Tania, Dissertation
Definition 1.1.2. We say that $Y$ is a modification of $X$ if, for every
$t \ge 0$, we have $P[X_t = Y_t] = 1$.

Definition 1.1.3. The processes $X$ and $Y$ are said to have the same
finite-dimensional distributions if, for any integer $n \ge 1$, any
$0 \le t_1 < t_2 < \dots < t_n < \infty$ and any set
$A \in \mathcal{B}(\mathbb{R}^{nd})$, we have
$$P[(X_{t_1}, \dots, X_{t_n}) \in A] = P[(Y_{t_1}, \dots, Y_{t_n}) \in A].$$

Definition 1.1.4. $X$ and $Y$ are said to be indistinguishable if almost all
their sample paths agree:
$$P[X_t = Y_t,\ \forall\, 0 \le t < \infty] = 1.$$
The following example illustrates that the inclusions between the classes
determined by the previous definitions are strict.

Example 1.1.1. Consider a positive random variable $T$ with a continuous
distribution, put $X_t \equiv 0$, and let
$$Y_t = \begin{cases} 0, & t \ne T, \\ 1, & t = T. \end{cases}$$
Then $Y$ is a modification of $X$, since for every $t \ge 0$ we have
$P[Y_t = X_t] = P[T \ne t] = 1$; but on the other hand
$$P[Y_t = X_t,\ \forall\, t \ge 0] = 0.$$

It is worth remarking that, although two processes can satisfy Definitions
1.1.2 and 1.1.4 only if they are defined on the same probability space and
have the same state space, Definition 1.1.3 can be extended to processes not
necessarily defined on the same probability space, provided they have the
same state space.
Definition 1.1.5. Let $X$ and $Y$ be stochastic processes defined on the
probability spaces $(\Omega, \mathcal{F}, P)$ and
$(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{P})$,
respectively, having the same state space
$(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$. $X$ and $Y$ have the same
finite-dimensional distributions if, for any integer $n \ge 1$, any
$0 \le t_1 < t_2 < \dots < t_n < \infty$ and any set
$A \in \mathcal{B}(\mathbb{R}^{nd})$, we have
$$P[(X_{t_1}, \dots, X_{t_n}) \in A]
= \widetilde{P}[(Y_{t_1}, \dots, Y_{t_n}) \in A].$$
Defining a stochastic process as a collection of random variables implicitly
contains the fact that $X_t$ is
$\mathcal{F}/\mathcal{B}(\mathbb{R}^d)$-measurable for every $t \ge 0$.
However, the process itself is actually a function of the two variables
$\omega$ and $t$, hence a joint measurability condition has to be defined.
Definition 1.1.6. The stochastic process $X$ is called measurable if, for
every $A \in \mathcal{B}(\mathbb{R}^d)$, the set
$\{(t, \omega);\ X_t(\omega) \in A\}$ belongs to the product
$\sigma$-algebra $\mathcal{B}([0,\infty)) \otimes \mathcal{F}$; that is, if
$$(t, \omega) \mapsto X_t(\omega) :
([0,\infty) \times \Omega,\ \mathcal{B}([0,\infty)) \otimes \mathcal{F})
\to (\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$$
is a measurable mapping.
Considering the index $t$ of a stochastic process as a time variable, in
order to speak of notions such as past, present and future at some given
time $t \ge 0$, we introduce filtrations. A filtration on a measurable space
$(\Omega, \mathcal{F})$ is a nondecreasing family
$\{\mathcal{F}_t;\ t \ge 0\}$ of sub-$\sigma$-algebras of $\mathcal{F}$,
i.e. $\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}$ for
$0 \le s < t < \infty$. We set
$\mathcal{F}_\infty := \sigma\big(\bigcup_{t \ge 0} \mathcal{F}_t\big)$.
Example 1.1.2. Given a stochastic process, a natural choice of filtration is
the one generated by the process itself,
$$\mathcal{F}^X_t := \sigma(\{X_s;\ 0 \le s \le t\}),$$
the smallest $\sigma$-algebra with respect to which $X_s$ is measurable for
every $s \in [0, t]$.
Example 1.1.3. Let $X$ be a process whose every sample path is RCLL (i.e.,
right-continuous on $[0,\infty)$ with finite left limits on $(0,\infty)$),
and let $A$ be the event that $X$ is continuous on $[0, t_0]$. Then
$A \in \mathcal{F}^X_{t_0}$. If the sample paths are only almost surely
RCLL, then $A$ may fail to belong to $\mathcal{F}^X_{t_0}$; but if
$\{\mathcal{F}_t;\ t \ge 0\}$ is a filtration satisfying
$\mathcal{F}^X_t \subseteq \mathcal{F}_t$, $t \ge 0$, and
$\mathcal{F}_{t_0}$ is complete under $P$, then $A \in \mathcal{F}_{t_0}$.
Let $\{\mathcal{F}_t;\ t \ge 0\}$ be a filtration. We define
$$\mathcal{F}_{t^-} := \sigma\Big(\bigcup_{s < t} \mathcal{F}_s\Big)$$
to be the $\sigma$-field of events strictly prior to $t > 0$, and
$$\mathcal{F}_{t^+} := \bigcap_{\varepsilon > 0} \mathcal{F}_{t+\varepsilon}$$
to be the $\sigma$-field of events immediately after $t \ge 0$. We set
$\mathcal{F}_{0^-} := \mathcal{F}_0$ and say that the filtration
$\{\mathcal{F}_t\}$ is right- (respectively, left-) continuous if
$$\mathcal{F}_t = \mathcal{F}_{t^+} \quad \text{(respectively, }
\mathcal{F}_t = \mathcal{F}_{t^-}\text{)}$$
holds for every $t \ge 0$.
Definition 1.1.7. The stochastic process $X$ is adapted to the filtration
$\{\mathcal{F}_t\}$ if, for each $t \ge 0$, $X_t$ is an
$\mathcal{F}_t$-measurable random variable.

Example 1.1.4. Every process $X$ is adapted to $\{\mathcal{F}^X_t\}$.
Moreover, if $X$ is adapted to $\{\mathcal{F}_t\}$ and $Y$ is a modification
of $X$, then $Y$ is also adapted to $\{\mathcal{F}_t\}$, provided that
$\mathcal{F}_0$ contains all the $P$-negligible sets in $\mathcal{F}$. This
requirement is not the same as saying that $\mathcal{F}_0$ is complete,
since some of the $P$-negligible sets in $\mathcal{F}$ may not be in the
completion of $\mathcal{F}_0$.
Example 1.1.5. Let $X$ be a process with every sample path LCRL (i.e.,
left-continuous on $(0,\infty)$ with finite right-hand limits on
$[0,\infty)$), and let $A$ be the event that $X$ is continuous on
$[0, t_0]$. If $X$ is adapted to a right-continuous filtration
$\{\mathcal{F}_t\}$, then $A \in \mathcal{F}_{t_0}$.
A random time $T$ is an $\mathcal{F}$-measurable random variable with values
in $[0, \infty]$.

Definition 1.1.8. If $X$ is a stochastic process and $T$ is a random time,
we define the function $X_T$ on the event $\{T < \infty\}$ by
$$X_T(\omega) := X_{T(\omega)}(\omega).$$
If $X_\infty(\omega)$ is defined for all $\omega \in \Omega$, then $X_T$ can
also be defined on all of $\Omega$, by setting
$X_T(\omega) := X_\infty(\omega)$ on $\{T = \infty\}$.
Definition 1.1.9. Consider a measurable space $(\Omega, \mathcal{F})$
equipped with a filtration $\{\mathcal{F}_t\}$. A random time $T$ is a
stopping time of the filtration if the event $\{T \le t\}$ belongs to the
$\sigma$-field $\mathcal{F}_t$ for every $t \ge 0$. A random time $T$ is an
optional time of the filtration if $\{T < t\} \in \mathcal{F}_t$ for every
$t \ge 0$.
Proposition 1.1.1. Every random time equal to a nonnegative constant is a
stopping time. Every stopping time is optional, and the two concepts
coincide if the filtration is right-continuous.

Corollary 1.1.2. $T$ is an optional time of the filtration
$\{\mathcal{F}_t\}$ if and only if it is a stopping time of the
(right-continuous!) filtration $\{\mathcal{F}_{t^+}\}$.
Example 1.1.6. [1] Consider a stochastic process $X$ with right-continuous
paths, adapted to a filtration $\{\mathcal{F}_t\}$. Consider a subset
$\Gamma \in \mathcal{B}(\mathbb{R}^d)$ of the state space of the process,
and define the hitting time
$$H_\Gamma(\omega) := \inf\{t \ge 0;\ X_t(\omega) \in \Gamma\}.$$
We employ the standard convention that the infimum of the empty set is
infinity.

If the set $\Gamma$ is open, then $H_\Gamma$ is an optional time. If
$\Gamma$ is closed and the sample paths of the process $X$ are continuous,
then $H_\Gamma$ is a stopping time.
1.1.2 Continuous-Time Martingales

The purpose of this section is to extend the theory of discrete-time
martingales to continuous time. The standard example of a continuous-time
martingale is one-dimensional Brownian motion.

Definition 1.1.10. The process $\{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$
is said to be a submartingale (respectively, a supermartingale) if, for
every $0 \le s < t < \infty$, we have
$E(X_t \mid \mathcal{F}_s) \ge X_s$ (respectively,
$E(X_t \mid \mathcal{F}_s) \le X_s$), $P$-a.s. We say that
$\{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ is a martingale if it is both a
submartingale and a supermartingale.
Definition 1.1.11. A Poisson process with intensity $\lambda > 0$ is an
adapted, integer-valued RCLL process
$N = \{N_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ such that $N_0 = 0$ and,
for $0 \le s < t$, $N_t - N_s$ is independent of $\mathcal{F}_s$ and is
Poisson distributed with mean $\lambda(t - s)$.

Remark 1.1.1. The decomposition
$$N_t = M_t + \lambda t$$
exhibits the (submartingale) Poisson process as the sum of the martingale
$M_t = N_t - \lambda t$ and the increasing function $A_t = \lambda t$,
$t \ge 0$.
Theorem 1.1.3 (Submartingale Convergence [1]). Let
$\{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ be a right-continuous
submartingale and assume $C := \sup_{t \ge 0} E(X_t^+) < \infty$. Then the
limit $X_\infty(\omega) = \lim_{t \to \infty} X_t(\omega)$ exists for
$P$-a.e. $\omega \in \Omega$, and $E|X_\infty| < \infty$.
Example 1.1.7. Let $\{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ be a
right-continuous, nonnegative supermartingale; then
$X_\infty(\omega) = \lim_{t \to \infty} X_t(\omega)$ exists for $P$-a.e.
$\omega \in \Omega$, and $\{X_t, \mathcal{F}_t;\ 0 \le t \le \infty\}$ is a
supermartingale.
Definition 1.1.12. A right-continuous, nonnegative supermartingale
$\{Z_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ with
$\lim_{t \to \infty} E(Z_t) = 0$ is called a potential.
Remark 1.1.2. The following three conditions are equivalent for a
nonnegative, right-continuous submartingale
$\{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$:

1. it is a uniformly integrable family of random variables;

2. it converges in $L^1$ as $t \to \infty$;

3. it converges $P$-a.s. (as $t \to \infty$) to an integrable random
variable $X_\infty$, such that
$\{X_t, \mathcal{F}_t;\ 0 \le t \le \infty\}$ is a submartingale.

Observe that the implications (1) $\Rightarrow$ (2) $\Rightarrow$ (3) hold
without the assumption of nonnegativity.
The Optional Sampling Theorem. In this section we ask what happens when a
martingale is sampled at random times. Doob's optional sampling theorem
tells us under what conditions the (sub)martingale property is preserved.

Theorem 1.1.4 (Optional Sampling [1]). Let
$\{X_t, \mathcal{F}_t;\ 0 \le t \le \infty\}$ be a right-continuous
submartingale with a last element $X_\infty$, and let $S \le T$ be two
optional times of the filtration $\{\mathcal{F}_t\}$. We have
$$E(X_T \mid \mathcal{F}_{S^+}) \ge X_S \quad \text{a.s. } P.$$
If $S$ is a stopping time, then $\mathcal{F}_S$ can replace
$\mathcal{F}_{S^+}$ above. In particular, $E X_T \ge E X_0$; if $X$ is a
martingale, we have $E X_T = E X_0$.
Proof. Consider the sequence of random times
$$S_n(\omega) = \begin{cases} S(\omega), & S(\omega) = +\infty, \\[2pt]
\dfrac{k}{2^n}, & \dfrac{k-1}{2^n} \le S(\omega) < \dfrac{k}{2^n},
\end{cases}$$
and the similarly defined sequence $T_n$. For every fixed integer
$n \ge 1$, both $S_n$ and $T_n$ take on countably many values, and we also
have $S_n \le T_n$. From the discrete-time optional sampling theorem we have
$$\int_A X_{S_n}\, dP \le \int_A X_{T_n}\, dP$$
for every $A \in \mathcal{F}_{S_n}$, and a fortiori for every
$A \in \mathcal{F}_{S^+} = \bigcap_{n=1}^{\infty} \mathcal{F}_{S_n}$. If $S$
is a stopping time, then $S \le S_n$ implies
$\mathcal{F}_S \subseteq \mathcal{F}_{S_n}$, and the preceding inequality
also holds for every $A \in \mathcal{F}_S$.
Example 1.1.8. Establish the optional sampling theorem for a
right-continuous submartingale
$\{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ and optional times $S \le T$
under either of the following two conditions:

1. $T$ is a bounded optional time (there exists a number $a > 0$ such that
$T \le a$);

2. there exists an integrable random variable $Y$ such that
$X_t \le E(Y \mid \mathcal{F}_t)$ a.s. $P$, for every $t \ge 0$.
Example 1.1.9. A right-continuous process
$X = \{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ with $E|X_t| < \infty$,
$0 \le t < \infty$, is a submartingale if and only if, for every pair
$S \le T$ of bounded stopping times of the filtration $\{\mathcal{F}_t\}$,
we have
$$E(X_T) \ge E(X_S).$$
1.1.3 The Doob-Meyer Decomposition and Square-Integrable Martingales

This section is devoted to the decomposition of certain submartingales as
the sum of a martingale and an increasing process.

Definition 1.1.13. Consider a probability space $(\Omega, \mathcal{F}, P)$
and a random sequence $\{A_n\}_{n=0}^{\infty}$ adapted to the discrete
filtration $\{\mathcal{F}_n\}_{n=0}^{\infty}$. The sequence is called
increasing if, almost surely, $0 = A_0 \le A_1 \le \dots$ and
$E A_n < \infty$ for every $n \ge 1$.

Definition 1.1.14. An increasing sequence
$\{A_n, \mathcal{F}_n;\ n = 0, 1, \dots\}$ is called natural if, for every
bounded martingale $\{M_n, \mathcal{F}_n;\ n = 0, 1, \dots\}$, we have
$$E(M_n A_n) = E \sum_{k=1}^{n} M_{k-1}(A_k - A_{k-1}),
\quad \forall\, n \ge 1.$$
Definition 1.1.15. A process $A$ is called increasing if, for $P$-a.e.
$\omega \in \Omega$:

1. $A_0(\omega) = 0$;

2. $t \mapsto A_t(\omega)$ is a nondecreasing, right-continuous function,

and $E(A_t) < \infty$ holds for every $t \in [0, \infty)$. An increasing
process is called integrable if $E(A_\infty) < \infty$, where
$A_\infty = \lim_{t \to \infty} A_t$.
Definition 1.1.16. An increasing process $A$ is called natural if, for every
bounded, right-continuous martingale
$\{M_t, \mathcal{F}_t;\ 0 \le t < \infty\}$, we have
$$E \int_{(0,t]} M_s\, dA_s = E \int_{(0,t]} M_{s^-}\, dA_s
\quad \text{for every } 0 < t < \infty.$$

Lemma 1.1.5. With $A$ as defined above, the condition of Definition 1.1.16
is equivalent to
$$E(M_t A_t) = E \int_{(0,t]} M_{s^-}\, dA_s.$$
Proof. Consider a partition $\Pi = \{t_0, t_1, \dots, t_n\}$ of $[0, t]$
with $0 = t_0 \le t_1 \le \dots \le t_n = t$, and define
$$M^{\Pi}_s := \sum_{k=1}^{n} M_{t_k}\, 1_{(t_{k-1}, t_k]}(s).$$
Using the martingale property, we compute
$$E \sum_{k=1}^{n} M_{t_k}(A_{t_k} - A_{t_{k-1}})
= E(M_t A_t) - E \sum_{k=0}^{n-1} A_{t_k}(M_{t_{k+1}} - M_{t_k})
= E(M_t A_t).$$
Now let the mesh
$\|\Pi\| = \max_{1 \le k \le n}(t_k - t_{k-1}) \to 0$, so that
$M^{\Pi}_s \to M_s$, and use the bounded convergence theorem for
Lebesgue-Stieltjes integration to obtain
$$E(M_t A_t) = E \int_{(0,t]} M_s\, dA_s.$$
If the right-continuous submartingale
$\{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ is of class DL, then it admits
a decomposition $X_t = M_t + A_t$ as the sum of a right-continuous
martingale $M = \{M_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ and an
increasing process $A = \{A_t, \mathcal{F}_t;\ 0 \le t < \infty\}$.
Further, if $X$ is of class D, then $M$ is a uniformly integrable
martingale and $A$ is integrable.
To analyze processes such as Brownian motion we must understand the role
played by various classes of processes; one such class is that of
square-integrable martingales. Throughout, we fix a probability space
$(\Omega, \mathcal{F}, P)$ and a filtration $\{\mathcal{F}_t\}$ satisfying
the usual conditions.
Definition 1.1.17. Let $X = \{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ be a
right-continuous martingale. We say that $X$ is square-integrable if
$E X_t^2 < \infty$ for every $t \ge 0$. If, in addition, $X_0 = 0$, we
write $X \in \mathcal{M}_2$ (or $X \in \mathcal{M}^c_2$ if $X$ is also
continuous).
Lemma 1.1.6. Let $X \in \mathcal{M}_2$ satisfy $|X_s| \le K < \infty$ for
all $s \in [0, t]$, $P$-a.s. Let $\Pi = \{t_0, t_1, \dots, t_m\}$, with
$0 = t_0 \le t_1 \le \dots \le t_m = t$, be a partition of $[0, t]$, and
let
$$V^{(2)}_t(\Pi) := \sum_{k=1}^{m} (X_{t_k} - X_{t_{k-1}})^2$$
denote the sampled quadratic variation. Then
$$E\big[V^{(2)}_t(\Pi)\big]^2 \le 6 K^4.$$
Proof. [1] By the martingale property, we have for $0 \le k \le m - 1$:
$$E\Big[\sum_{j=k+1}^{m} (X_{t_j} - X_{t_{j-1}})^2 \,\Big|\,
\mathcal{F}_{t_k}\Big]
= E\Big[\sum_{j=k+1}^{m} (X^2_{t_j} - X^2_{t_{j-1}}) \,\Big|\,
\mathcal{F}_{t_k}\Big]
= E[X^2_{t_m} - X^2_{t_k} \mid \mathcal{F}_{t_k}] \le K^2,$$
so
$$E\Big[\sum_{k=1}^{m-1} \sum_{j=k+1}^{m}
(X_{t_j} - X_{t_{j-1}})^2 (X_{t_k} - X_{t_{k-1}})^2\Big]
= E\Big[\sum_{k=1}^{m-1} (X_{t_k} - X_{t_{k-1}})^2
\sum_{j=k+1}^{m} E[(X_{t_j} - X_{t_{j-1}})^2 \mid \mathcal{F}_{t_k}]\Big]$$
$$\le K^2\, E\Big[\sum_{k=1}^{m-1} (X_{t_k} - X_{t_{k-1}})^2\Big]
\le K^4.$$
Moreover, since $|X_{t_k} - X_{t_{k-1}}| \le 2K$, we have
$$E\Big[\sum_{k=1}^{m} (X_{t_k} - X_{t_{k-1}})^4\Big]
\le 4K^2\, E\Big[\sum_{k=1}^{m} (X_{t_k} - X_{t_{k-1}})^2\Big]
\le 4K^4.$$
These inequalities imply
$$E\big[V^{(2)}_t(\Pi)\big]^2
= E\Big[\sum_{k=1}^{m} (X_{t_k} - X_{t_{k-1}})^4\Big]
+ 2\, E\Big[\sum_{k=1}^{m-1} \sum_{j=k+1}^{m}
(X_{t_j} - X_{t_{j-1}})^2 (X_{t_k} - X_{t_{k-1}})^2\Big]
\le 6 K^4.$$
Chapter 2
Stochastic Integration
2.1 Construction of the Stochastic Integral
Let $M = \{M_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ be a martingale on a
probability space $(\Omega, \mathcal{F}, P)$; throughout this chapter it
will be assumed to satisfy $M \in \mathcal{M}^c_2$ and $M_0 = 0$ a.s. Such
a process is of unbounded variation on any finite interval $[0, T]$, and
consequently integrals of the form
$$I_T(X) = \int_0^T X_t(\omega)\, dM_t(\omega)$$
cannot be defined "pathwise" (i.e., for each $\omega \in \Omega$
separately) as ordinary Lebesgue-Stieltjes integrals. Nevertheless, the
martingale $M$ has a finite quadratic variation, given by the continuous,
increasing process $\langle M \rangle$. The construction of the stochastic
integral is due to Itô for the special case where $M$ is a Brownian motion,
and to Kunita and Watanabe for general $M \in \mathcal{M}_2$. The
construction will then be extended to general continuous local martingales
$M$.
Definition 2.1.1. Let $\mathcal{L}$ be the set of equivalence classes of
all measurable, $\{\mathcal{F}_t\}$-adapted processes $X$ for which
$[X]_T < \infty$ for all $T > 0$. We define a metric on $\mathcal{L}$ by
$d(X, Y) := [X - Y]$, where
$$[X] := \sum_{n=1}^{\infty} 2^{-n} \big(1 \wedge [X]_n\big).$$
Definition 2.1.2. A process $X$ is called simple if there exist a strictly
increasing sequence of real numbers $\{t_n\}_{n=0}^{\infty}$ with $t_0 = 0$
and $\lim_{n \to \infty} t_n = \infty$, as well as a sequence of random
variables $\{\xi_n\}_{n=0}^{\infty}$ with
$\sup_{n \ge 0} |\xi_n(\omega)| \le C < \infty$ for every
$\omega \in \Omega$, such that $\xi_n$ is $\mathcal{F}_{t_n}$-measurable
for every $n \ge 0$ and
$$X_t(\omega) = \xi_0(\omega)\, 1_{\{0\}}(t)
+ \sum_{i=0}^{\infty} \xi_i(\omega)\, 1_{(t_i, t_{i+1}]}(t),
\quad 0 \le t < \infty,\ \omega \in \Omega.$$
The class of all simple processes is denoted by $\mathcal{L}_0$.
Lemma 2.1.1. Let $\{A_t;\ 0 \le t < \infty\}$ be a continuous process
adapted to the filtration of the martingale
$M = \{M_t, \mathcal{F}_t;\ 0 \le t < \infty\}$. If
$X = \{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ is a progressively
measurable process satisfying
$$E \int_0^T X_t^2\, dA_t < \infty$$
for each $T > 0$, then there exists a sequence
$\{X^{(n)}\}_{n=1}^{\infty}$ of simple processes such that
$$\sup_{T > 0}\ \lim_{n \to \infty}
E \int_0^T |X^{(n)}_t - X_t|^2\, dA_t = 0.$$
We now define the stochastic integral of a simple process
$X \in \mathcal{L}_0$ and list some of its properties. For
$X, Y \in \mathcal{L}_0$ and $0 \le s < t < \infty$ we have:

1. $I_0(X) = 0$;

2. $E[I_t(X) \mid \mathcal{F}_s] = I_s(X)$;

3. $E(I_t(X))^2 = E \int_0^t X_u^2\, d\langle M \rangle_u$;

4. $\|I(X)\| = [X]$;

5. $E[(I_t(X) - I_s(X))^2 \mid \mathcal{F}_s]
= E\big[\int_s^t X_u^2\, d\langle M \rangle_u \,\big|\,
\mathcal{F}_s\big]$;

6. $I(\alpha X + \beta Y) = \alpha I(X) + \beta I(Y)$,
$\alpha, \beta \in \mathbb{R}$.

The first and last properties are obvious. The second follows from the
fact that, for any $0 \le s < t < \infty$ and any integer $i \ge 1$, we
have
$$E[\xi_i (M_{t \wedge t_{i+1}} - M_{t \wedge t_i}) \mid \mathcal{F}_s]
= \xi_i (M_{s \wedge t_{i+1}} - M_{s \wedge t_i}) \quad \text{a.s.},$$
which is verified separately for each of the three cases $s \le t_i$,
$t_i < s \le t_{i+1}$ and $t_{i+1} < s$, using the
$\mathcal{F}_{t_i}$-measurability of $\xi_i$.
Definition 2.1.3. For $X \in \mathcal{L}$, the stochastic integral of $X$
with respect to the martingale $M \in \mathcal{M}^c_2$ is the unique
square-integrable martingale
$I(X) = \{I_t(X), \mathcal{F}_t;\ 0 \le t < \infty\}$ which satisfies
$$\lim_{n \to \infty} \|I(X^{(n)}) - I(X)\| = 0$$
for every sequence $\{X^{(n)}\}_{n=1}^{\infty} \subseteq \mathcal{L}_0$
with $\lim_{n \to \infty} [X^{(n)} - X] = 0$. We write
$$I_t(X) = \int_0^t X_s\, dM_s, \quad 0 \le t < \infty.$$
Let $M = \{M_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ and
$N = \{N_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ be in $\mathcal{M}^c_2$,
and take $X \in \mathcal{L}(M)$, $Y \in \mathcal{L}(N)$. Then
$$I^M_t(X) = \int_0^t X_s\, dM_s, \qquad
I^N_t(Y) = \int_0^t Y_s\, dN_s$$
are also in $\mathcal{M}^c_2$, and we have
$$\langle I^M(X) \rangle_t = \int_0^t X_u^2\, d\langle M \rangle_u,
\qquad
\langle I^N(Y) \rangle_t = \int_0^t Y_u^2\, d\langle N \rangle_u,$$
from which the formula
$$\langle I^M(X), I^N(Y) \rangle_t
= \int_0^t X_u Y_u\, d\langle M, N \rangle_u, \quad t \ge 0,$$
follows. If $X$ and $Y$ are simple, then it is straightforward to show, by
a computation similar to the one above, that for $0 \le s < t < \infty$,
$$E[(I^M_t(X) - I^M_s(X))(I^N_t(Y) - I^N_s(Y)) \mid \mathcal{F}_s]
= E\Big[\int_s^t X_u Y_u\, d\langle M, N \rangle_u \,\Big|\,
\mathcal{F}_s\Big].$$
Now let $M = \{M_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ be a continuous
local martingale on a probability space $(\Omega, \mathcal{F}, P)$ with
$M_0 = 0$, i.e. $M \in \mathcal{M}^{c,loc}$.

Definition 2.1.4. Let $\mathcal{P}$ denote the collection of equivalence
classes of all measurable, adapted processes
$X = \{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ satisfying
$$P\Big[\int_0^T X_t^2\, d\langle M \rangle_t < \infty\Big] = 1,
\quad T \in [0, \infty).$$
2.2 The Itô Formula

The Itô formula shows that a smooth function of a continuous
semimartingale is again a semimartingale, and provides its decomposition.

Theorem 2.2.1. Let $f : \mathbb{R} \to \mathbb{R}$ be a function of class
$C^2$ and let $X = \{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ be a
continuous semimartingale with decomposition $X_t = X_0 + M_t + B_t$.
Then, a.s.,
$$f(X_t) = f(X_0) + \int_0^t f'(X_s)\, dM_s + \int_0^t f'(X_s)\, dB_s
+ \frac{1}{2} \int_0^t f''(X_s)\, d\langle M \rangle_s,
\quad 0 \le t < \infty.$$

Remark 2.2.1. The equation above is often written in differential
notation:
$$df(X_t) = f'(X_t)\, dM_t + f'(X_t)\, dB_t
+ \tfrac{1}{2} f''(X_t)\, d\langle M \rangle_t
= f'(X_t)\, dX_t + \tfrac{1}{2} f''(X_t)\, d\langle M \rangle_t,
\quad 0 \le t < \infty;$$
this is the "chain rule" of stochastic calculus.
Theorem 2.2.2. Let $M_t = (M^1_t, \dots, M^d_t)$, $\mathcal{F}_t$,
$0 \le t < \infty$, be a vector of local martingales in
$\mathcal{M}^{c,loc}$, let $B_t = (B^1_t, \dots, B^d_t)$, $\mathcal{F}_t$,
$0 \le t < \infty$, be a vector of adapted processes of bounded variation
with $B_0 = 0$, and set $X_t = X_0 + M_t + B_t$, $0 \le t < \infty$, where
$X_0$ is an $\mathcal{F}_0$-measurable random vector in $\mathbb{R}^d$.
Let $f(t, x) : [0, \infty) \times \mathbb{R}^d \to \mathbb{R}$ be of class
$C^{1,2}$. Then, a.s.,
$$f(t, X_t) = f(0, X_0)
+ \int_0^t \frac{\partial}{\partial s} f(s, X_s)\, ds
+ \sum_{i=1}^{d} \int_0^t \frac{\partial}{\partial x_i} f(s, X_s)\, dB^i_s
+ \sum_{i=1}^{d} \int_0^t \frac{\partial}{\partial x_i} f(s, X_s)\, dM^i_s$$
$$+ \frac{1}{2} \sum_{i=1}^{d} \sum_{j=1}^{d} \int_0^t
\frac{\partial^2}{\partial x_i \partial x_j} f(s, X_s)\,
d\langle M^i, M^j \rangle_s, \quad 0 \le t < \infty.$$
2.3 Representation of Continuous Martingales in Terms of Brownian Motion

In this section we expound on the idea that Brownian motion is the
fundamental continuous martingale, by showing how to represent other
continuous martingales in terms of it.

Remark 2.3.1. The representation theorem involves the notion of an
extension of a probability space. Let
$X = \{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ be an adapted process on
some $(\Omega, \mathcal{F}, P)$. Let
$(\widehat{\Omega}, \widehat{\mathcal{F}}, \widehat{P})$ be another
probability space, on which we consider a $d$-dimensional Brownian motion
$\widehat{B} = \{\widehat{B}_t, \widehat{\mathcal{F}}_t;\
0 \le t < \infty\}$. Set
$\widetilde{\Omega} = \Omega \times \widehat{\Omega}$,
$\widetilde{\mathcal{G}} = \mathcal{F} \otimes \widehat{\mathcal{F}}$,
$\widetilde{P} = P \times \widehat{P}$, and define a filtration by
$\widetilde{\mathcal{G}}_t = \mathcal{F}_t \otimes
\widehat{\mathcal{F}}_t$.
If $\{W_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ is a standard Brownian
motion and $X$ is a measurable, adapted process with
$$P\Big[\int_0^t X_s^2\, ds < \infty\Big] = 1$$
for every $0 \le t < \infty$, then the stochastic integral
$$I_t(X) = \int_0^t X_s\, dW_s$$
is a continuous local martingale whose quadratic variation
$\langle I(X) \rangle_t = \int_0^t X_s^2\, ds$ is an absolutely continuous
function of $t$.
Theorem 2.3.1 (Time-Change for Martingales; Dambis (1965), Dubins and
Schwarz (1965)) [1]. Let
$M = \{M_t, \mathcal{F}_t;\ 0 \le t < \infty\} \in \mathcal{M}^{c,loc}$
satisfy $\lim_{t \to \infty} \langle M \rangle_t = \infty$ a.s. Define,
for each $0 \le s < \infty$, the stopping time
$$T(s) = \inf\{t \ge 0;\ \langle M \rangle_t > s\}.$$
Then the time-changed process
$$B_s := M_{T(s)}, \quad \mathcal{G}_s := \mathcal{F}_{T(s)}, \quad
0 \le s < \infty,$$
is a standard one-dimensional Brownian motion. The filtration
$\{\mathcal{G}_s\}$ satisfies the usual conditions and we have, a.s. $P$,
$$M_t = B_{\langle M \rangle_t}, \quad 0 \le t < \infty.$$
Proposition 2.3.2. From the preceding theorem one obtains a
change-of-variable formula for stochastic integrals: if
$X = \{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ is progressively
measurable and satisfies
$$\int_0^\infty X_t^2\, d\langle M \rangle_t < \infty \quad
\text{almost surely},$$
then the process $Y_s := X_{T(s)}$, $\mathcal{G}_s$, $0 \le s < \infty$,
is adapted and satisfies, almost surely,
$$\int_0^\infty Y_s^2\, ds < \infty,$$
$$\int_0^t X_u\, dM_u = \int_0^{\langle M \rangle_t} Y_u\, dB_u, \quad
0 \le t < \infty,$$
$$\int_0^{T(s)} X_u\, dM_u = \int_0^s Y_u\, dB_u, \quad
0 \le s < \infty.$$
Now let the integrator martingale be a standard one-dimensional Brownian
motion $W = \{W_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ on a probability
space $(\Omega, \mathcal{F}, P)$, where $\{\mathcal{F}_t\}$ satisfies the
usual conditions. The mapping $X \mapsto I_T(X)$ from $\mathcal{L}^*_T$ to
$L^2(\Omega, \mathcal{F}_T, P)$ preserves inner products:
$$E \int_0^T X_t Y_t\, dt = E[I_T(X)\, I_T(Y)].$$
Theorem 2.3.3. [1] Let
$M = \{M_t = (M^1_t, \dots, M^d_t), \mathcal{F}_t;\ 0 \le t < \infty\}$
be a continuous, adapted process with $M^i \in \mathcal{M}^{c,loc}$,
$\lim_{t \to \infty} \langle M^i \rangle_t = \infty$ a.s., and
$$\langle M^i, M^j \rangle_t = 0, \quad 1 \le i \ne j \le d,\
0 \le t < \infty.$$
Define
$$T_i(s) = \inf\{t \ge 0;\ \langle M^i \rangle_t > s\}, \quad
0 \le s < \infty,\ 1 \le i \le d,$$
so that for each $i$ and $s$ the random time $T_i(s)$ is a stopping time
of the (right-continuous) filtration $\{\mathcal{F}_t\}$. Then the
processes
$$B^i_s := M^i_{T_i(s)}, \quad 0 \le s < \infty,\ 1 \le i \le d,$$
are independent, standard, one-dimensional Brownian motions.
Since any convergent sequence in
$$\mathcal{R}_T = \{I_T(X);\ X \in \mathcal{L}^*_T\}$$
is also Cauchy, it must have a limit in $L^2(\Omega, \mathcal{F}_T, P)$, a
fact we shall need shortly. The stochastic integrals
$$I_t(X) = \int_0^t X_s\, dW_s, \quad 0 \le t < \infty,$$
of processes $X \in \mathcal{L}^*$ form the space
$$\mathcal{M}^*_2 = \{I(X);\ X \in \mathcal{L}^*\}
\subseteq \mathcal{M}^c_2 \subseteq \mathcal{M}_2.$$
Chapter 3

Stochastic Differential Equations
In this chapter we discuss stochastic differential equations.

As motivation, consider a $d$-dimensional Markov family
$X = \{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$, $(\Omega, \mathcal{F})$,
$\{P^x\}_{x \in \mathbb{R}^d}$, and assume that $X$ has continuous paths.
We suppose, further, that the relation
$$\lim_{t \downarrow 0} \frac{1}{t}\,[E^x f(X_t) - f(x)]
= (\mathcal{A} f)(x), \quad \forall\, x \in \mathbb{R}^d,$$
holds for every $f$ in a suitable subclass of the space
$C^2(\mathbb{R}^d)$ of real-valued, twice continuously differentiable
functions on $\mathbb{R}^d$, where the operator $\mathcal{A}$ in the limit
is given by
$$(\mathcal{A} f)(x) = \frac{1}{2} \sum_{i=1}^{d} \sum_{k=1}^{d}
a_{ik}(x) \frac{\partial^2 f}{\partial x_i \partial x_k}(x)
+ \sum_{i=1}^{d} b_i(x) \frac{\partial f}{\partial x_i}(x)$$
for suitable Borel-measurable functions
$b_i, a_{ik} : \mathbb{R}^d \to \mathbb{R}$, $1 \le i, k \le d$.
Definition 3.0.1. Let
$X = \{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$, $(\Omega, \mathcal{F})$,
$\{P^x\}_{x \in \mathbb{R}^d}$, be a $d$-dimensional Markov family such
that:

1. $X$ has continuous sample paths;

2. the relation above holds for every $f \in C^2(\mathbb{R}^d)$.

Then $X$ is called a diffusion process with generator $\mathcal{A}$.

Let us suppose that the Markov family of this definition has a transition
probability density function:
$$P^x[X_t \in dy] = \Gamma(t; x, y)\, dy, \quad
\forall\, x \in \mathbb{R}^d,\ t > 0.$$
From the definition one expects that $\Gamma(t; x, y)$ should satisfy the
forward Kolmogorov equation, for every fixed $x \in \mathbb{R}^d$:
$$\frac{\partial}{\partial t} \Gamma(t; x, y)
= \mathcal{A}^* \Gamma(t; x, y), \quad
(t, y) \in (0, \infty) \times \mathbb{R}^d,$$
and the backward Kolmogorov equation, for every fixed
$y \in \mathbb{R}^d$:
$$\frac{\partial}{\partial t} \Gamma(t; x, y)
= \mathcal{A}\, \Gamma(t; x, y), \quad
(t, x) \in (0, \infty) \times \mathbb{R}^d.$$
The adjoint operator $\mathcal{A}^*$ is given by
$$(\mathcal{A}^* f)(y) = \frac{1}{2} \sum_{i=1}^{d} \sum_{k=1}^{d}
\frac{\partial^2}{\partial y_i \partial y_k}\,[a_{ik}(y) f(y)]
- \sum_{i=1}^{d} \frac{\partial}{\partial y_i}\,[b_i(y) f(y)].$$
The methodology of stochastic differential equations was suggested by P.
Lévy as an "alternative", probabilistic approach to diffusions, and was
carried out in a masterly way by K. Itô. Suppose that we have a
continuous, adapted $d$-dimensional process
$X = \{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ which satisfies, for
every $x \in \mathbb{R}^d$, the stochastic integral equation
$$X^i_t = x^i + \int_0^t b_i(X_s)\, ds
+ \sum_{j=1}^{r} \int_0^t \sigma_{ij}(X_s)\, dW^j_s,
\quad 0 \le t < \infty,\ 1 \le i \le d,$$
on a probability space $(\Omega, \mathcal{F}, P^x)$, where
$W = \{W_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ is a Brownian motion in
$\mathbb{R}^r$ and the coefficients
$b_i, \sigma_{ij} : \mathbb{R}^d \to \mathbb{R}$, $1 \le i \le d$,
$1 \le j \le r$, are Borel-measurable.
3.1 Strong Solutions

In this section we introduce the stochastic differential equation with
respect to Brownian motion and its solutions in the so-called strong
sense. Let us start with Borel-measurable functions $b_i(t, x)$,
$\sigma_{ij}(t, x)$, $1 \le i \le d$, $1 \le j \le r$, from
$[0, \infty) \times \mathbb{R}^d$ into $\mathbb{R}$, and define the
$(d \times 1)$ drift vector $b(t, x) = \{b_i(t, x)\}_{1 \le i \le d}$ and
the $(d \times r)$ dispersion matrix
$\sigma(t, x) = \{\sigma_{ij}(t, x)\}_{1 \le i \le d,\ 1 \le j \le r}$.

The intent is to assign a meaning to the stochastic differential equation
$$dX_t = b(t, X_t)\, dt + \sigma(t, X_t)\, dW_t,$$
written componentwise as
$$dX^i_t = b_i(t, X_t)\, dt
+ \sum_{j=1}^{r} \sigma_{ij}(t, X_t)\, dW^j_t, \quad 1 \le i \le d,$$
where $W = \{W_t;\ 0 \le t < \infty\}$ is an $r$-dimensional Brownian
motion and $X = \{X_t;\ 0 \le t < \infty\}$ is a suitable stochastic
process with continuous sample paths and values in $\mathbb{R}^d$, the
"solution" of the equation. The drift vector $b(t, x)$ and dispersion
matrix $\sigma(t, x)$ are the coefficients of this equation; the
$(d \times d)$ matrix
$$a_{ik}(t, x) := \sum_{j=1}^{r}
\sigma_{ij}(t, x)\, \sigma_{kj}(t, x), \quad 1 \le i, k \le d,$$
will be called the diffusion matrix.
Consider a probability space $(\Omega, \mathcal{F}, P)$, as well as an
$r$-dimensional Brownian motion
$W = \{W_t, \mathcal{F}^W_t;\ 0 \le t < \infty\}$ on it. We assume also
that this space is rich enough to accommodate a random vector $\xi$
taking values in $\mathbb{R}^d$, independent of $\mathcal{F}^W_\infty$
and with given distribution
$$\mu(\Gamma) = P[\xi \in \Gamma], \quad
\Gamma \in \mathcal{B}(\mathbb{R}^d).$$
Consider the left-continuous filtration
$$\mathcal{G}_t := \sigma(\xi) \vee \mathcal{F}^W_t
= \sigma(\xi,\ W_s;\ 0 \le s \le t), \quad 0 \le t < \infty,$$
as well as the collection of null sets
$$\mathcal{N} := \{N \subseteq \Omega;\ \exists\, G \in
\mathcal{G}_\infty \text{ with } N \subseteq G \text{ and } P(G) = 0\},$$
and create the augmented filtration
$$\mathcal{F}_t := \sigma(\mathcal{G}_t \cup \mathcal{N}), \quad
0 \le t < \infty, \qquad
\mathcal{F}_\infty := \sigma\Big(\bigcup_{t \ge 0} \mathcal{F}_t\Big).$$
Definition 3.1.1. A strong solution of the stochastic differential
equation, on the given probability space $(\Omega, \mathcal{F}, P)$ and
with respect to the fixed Brownian motion $W$ and initial condition
$\xi$, is a process $X = \{X_t;\ 0 \le t < \infty\}$ with continuous
sample paths and with the following properties:

1. $X$ is adapted to the filtration $\{\mathcal{F}_t\}$;

2. $P[X_0 = \xi] = 1$;

3. $P\big[\int_0^t \{|b_i(s, X_s)| + \sigma^2_{ij}(s, X_s)\}\, ds
< \infty\big] = 1$ for every $1 \le i \le d$, $1 \le j \le r$ and
$0 \le t < \infty$;

4. the integral version of the equation,
$$X_t = X_0 + \int_0^t b(s, X_s)\, ds
+ \int_0^t \sigma(s, X_s)\, dW_s, \quad 0 \le t < \infty,$$
holds almost surely.
Remark 3.1.1. [1] The crucial requirement of this definition is captured
in condition (1); it corresponds to our intuitive understanding of $X$ as
the "output" of a dynamical system described by the pair of coefficients
$(b, \sigma)$, whose "input" is $W$ and which is also fed by the initial
datum $\xi$. The principle of causality for dynamical systems requires
that the output $X_t$ at time $t$ depend only on $\xi$ and the values of
the input $\{W_s;\ 0 \le s \le t\}$ up to that time. This principle finds
its mathematical expression in (1). Furthermore, when both $\xi$ and
$\{W_t;\ 0 \le t < \infty\}$ are given, their specification should
determine the output $\{X_t;\ 0 \le t < \infty\}$ in an unambiguous way.
We are thus led to expect the following form of uniqueness.
Theorem 3.1.1. Suppose that the coefficients $b(t, x)$, $\sigma(t, x)$
are locally Lipschitz continuous in the space variable; i.e., for every
integer $n \ge 1$ there exists a constant $K_n > 0$ such that for every
$t \ge 0$, $\|x\| \le n$ and $\|y\| \le n$:
$$\|b(t, x) - b(t, y)\| + \|\sigma(t, x) - \sigma(t, y)\|
\le K_n \|x - y\|.$$
Then strong uniqueness holds for the equation.
Proof. Suppose that $X$ and $\widetilde{X}$ are both strong solutions,
defined for all $t \ge 0$, relative to the same Brownian motion $W$ and
the same initial condition $\xi$, on some $(\Omega, \mathcal{F}, P)$. We
define the stopping times
$\tau_n := \inf\{t \ge 0;\ \|X_t\| \ge n\}$ for $n \ge 1$, as well as
their tilded counterparts, and we set
$S_n := \tau_n \wedge \widetilde{\tau}_n$. Clearly
$\lim_{n \to \infty} S_n = \infty$ a.s. $P$, and
$$X_{t \wedge S_n} - \widetilde{X}_{t \wedge S_n}
= \int_0^{t \wedge S_n} \{b(u, X_u) - b(u, \widetilde{X}_u)\}\, du
+ \int_0^{t \wedge S_n}
\{\sigma(u, X_u) - \sigma(u, \widetilde{X}_u)\}\, dW_u.$$
Using the vector inequality
$\|v_1 + \dots + v_k\|^2 \le k^2 (\|v_1\|^2 + \dots + \|v_k\|^2)$, the
Hölder inequality for Lebesgue integrals and the basic properties of
stochastic integrals, we may write, for $0 \le t \le T$:
$$E\|X_{t \wedge S_n} - \widetilde{X}_{t \wedge S_n}\|^2
\le 4\, E\Big[\int_0^{t \wedge S_n}
\|b(u, X_u) - b(u, \widetilde{X}_u)\|\, du\Big]^2
+ 4\, E \sum_{i=1}^{d} \Big[\sum_{j=1}^{r} \int_0^{t \wedge S_n}
(\sigma_{ij}(u, X_u) - \sigma_{ij}(u, \widetilde{X}_u))\, dW^j_u\Big]^2$$
$$\le 4 t\, E \int_0^{t \wedge S_n}
\|b(u, X_u) - b(u, \widetilde{X}_u)\|^2\, du
+ 4\, E \int_0^{t \wedge S_n}
\|\sigma(u, X_u) - \sigma(u, \widetilde{X}_u)\|^2\, du$$
$$\le 4 (T + 1) K_n^2 \int_0^t
E\|X_{u \wedge S_n} - \widetilde{X}_{u \wedge S_n}\|^2\, du.$$
We now apply the Gronwall inequality (Example 3.1.1) to
$g(t) := E\|X_{t \wedge S_n} - \widetilde{X}_{t \wedge S_n}\|^2$ to
conclude that $\{X_{t \wedge S_n};\ 0 \le t < \infty\}$ and
$\{\widetilde{X}_{t \wedge S_n};\ 0 \le t < \infty\}$ are modifications
of one another, and thus are indistinguishable. Letting $n \to \infty$,
we see that the same is true for $\{X_t;\ 0 \le t < \infty\}$ and
$\{\widetilde{X}_t;\ 0 \le t < \infty\}$.
Example 3.1.1 (Gronwall inequality). Suppose that the continuous
function $g(t)$ satisfies
$$0 \le g(t) \le \alpha(t) + \beta \int_0^t g(s)\, ds, \quad
0 \le t \le T,$$
with $\beta \ge 0$ and $\alpha : [0, T] \to \mathbb{R}$ integrable. Then
$$g(t) \le \alpha(t) + \beta \int_0^t \alpha(s)\, e^{\beta(t-s)}\, ds,
\quad 0 \le t \le T.$$
Example 3.1.2. [1] From what we have just proved, it follows that strong
uniqueness holds for the one-dimensional stochastic equation
$$X_t = \int_0^t |X_s|^{\alpha}\, dW_s, \quad 0 \le t < \infty,$$
as long as $\alpha \ge 1/2$, and it is obvious that the unique solution
is the trivial one, $X_t \equiv 0$. This is also a solution when
$0 < \alpha < 1/2$, but then it is no longer the only one. We shall in
fact see that not only does strong uniqueness fail when
$0 < \alpha < 1/2$, but we do not even have uniqueness in the weaker
sense developed in the next section.
3.2 Weak Solutions

Definition 3.2.1. A weak solution of the equation is a triple
$((X, W), (\Omega, \mathcal{F}, P), \{\mathcal{F}_t\})$, where:

1. $(\Omega, \mathcal{F}, P)$ is a probability space, and
$\{\mathcal{F}_t\}$ is a filtration of sub-$\sigma$-fields of
$\mathcal{F}$ satisfying the usual conditions;

2. $X = \{X_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ is a continuous,
adapted $\mathbb{R}^d$-valued process and
$W = \{W_t, \mathcal{F}_t;\ 0 \le t < \infty\}$ is an $r$-dimensional
Brownian motion; and

3. the integrability condition and the integral form of the equation, as
in the definition of a strong solution, are satisfied.
Definition 3.2.2. We say that uniqueness in the sense of probability law
holds for the equation if, for any two weak solutions
$((X, W), (\Omega, \mathcal{F}, P), \{\mathcal{F}_t\})$ and
$((\widetilde{X}, \widetilde{W}), (\widetilde{\Omega},
\widetilde{\mathcal{F}}, \widetilde{P}),
\{\widetilde{\mathcal{F}}_t\})$ with the same initial distribution, i.e.
$$P[X_0 \in \Gamma] = \widetilde{P}[\widetilde{X}_0 \in \Gamma], \quad
\forall\, \Gamma \in \mathcal{B}(\mathbb{R}^d),$$
the two processes $X$ and $\widetilde{X}$ have the same law.
The principal method for creating weak solutions to stochastic
differential equations is the transformation of the drift via the
Girsanov theorem.

Proposition 3.2.1. Consider the stochastic differential equation
$$dX_t = b(t, X_t)\, dt + dW_t, \quad 0 \le t \le T,$$
where $T$ is a fixed positive number, $W$ is a $d$-dimensional Brownian
motion, and $b(t, x)$ is a Borel-measurable, $\mathbb{R}^d$-valued
function on $[0, T] \times \mathbb{R}^d$ which satisfies
$$\|b(t, x)\| \le K (1 + \|x\|), \quad 0 \le t \le T,\
x \in \mathbb{R}^d,$$
for some positive constant $K$. For any probability measure $\mu$ on
$(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$, the equation has a weak
solution with initial distribution $\mu$.
Remark 3.2.1. It is apparent from the definition that the proposition
can be extended to include the case
$$X_t = X_0 + \int_0^t b(s, X)\, ds + W_t, \quad 0 \le t \le T,$$
where $b(t, x)$ is a vector of progressively measurable functionals on
$C[0, \infty)^d$.
Definition 3.2.3. Let $(\Omega, \mathcal{F}, P)$ be a probability space
and $\mathcal{G}$ a sub-$\sigma$-field of $\mathcal{F}$. A function
$Q(\omega, A) : \Omega \times \mathcal{F} \to [0, 1]$ is called a
regular conditional probability for $\mathcal{F}$ given $\mathcal{G}$
if:

1. for each $\omega \in \Omega$, $Q(\omega, \cdot)$ is a probability
measure on $(\Omega, \mathcal{F})$;

2. for each $A \in \mathcal{F}$, the mapping
$\omega \mapsto Q(\omega, A)$ is $\mathcal{G}$-measurable; and

3. for each $A \in \mathcal{F}$,
$Q(\omega, A) = P[A \mid \mathcal{G}](\omega)$ for $P$-a.e.
$\omega \in \Omega$.
Example 3.2.1. [2] Fix $U = \mathbb{R}^m = U_0 = U_1$ and let
$B_1, \dots, B_m$ be linear operators on $H$ with domains
$D(B_1), \dots, D(B_m)$, respectively. Define
$$D(B) = \bigcap_{j=1}^{m} D(B_j), \qquad
B(x) u = \sum_{j=1}^{m} u_j B_j x, \quad
u = (u_1, \dots, u_m) \in \mathbb{R}^m,\ \forall\, x \in D(B).$$
Then $B$ has the required form.

Example 3.2.2. [2] Similarly as for additive noise, we define a strong
solution of the problem above as an $H$-valued predictable process
$X(t)$, $t \in [0, T]$, which takes values in $D(A) \cap D(B)$,
$P$-a.s., such that
$$P\Big(\int_0^T [\,|X(s)| + |A X(s)|\,]\, ds < +\infty\Big) = 1,
\qquad
P\Big(\int_0^T \|B(X(s))\|^2_{L^0_2}\, ds < +\infty\Big) = 1,$$
and, for arbitrary $t \in [0, T]$ and $P$-a.s.,
$$X(t) = \xi + \int_0^t (A X(s) + f(s))\, ds
+ \int_0^t B(X(s))\, dW(s).$$
Chapter 4

Applications of Stochastic Differential Equations
4.1 The Martingale Problem of Stroock and Varadhan

In order to provide motivation for the martingale problem, let us
suppose that $((X, W), (\Omega, \mathcal{F}, P), \{\mathcal{F}_t\})$ is
a weak solution of the stochastic differential equation. For every
$t \ge 0$, we introduce the second-order differential operator
$$(\mathcal{A}_t f)(x) = \frac{1}{2} \sum_{i=1}^{d} \sum_{k=1}^{d}
a_{ik}(t, x) \frac{\partial^2 f(x)}{\partial x_i \partial x_k}
+ \sum_{i=1}^{d} b_i(t, x) \frac{\partial f(x)}{\partial x_i},
\quad f \in C^2(\mathbb{R}^d),$$
where $a_{ik}(t, x)$ are the components of the diffusion matrix. If, as
in the next proposition, $f$ is a function of $t \in [0, \infty)$ and
$x \in \mathbb{R}^d$, then $(\mathcal{A}_t f)(t, x)$ is obtained by
applying $\mathcal{A}_t$ to $f(t, \cdot)$.
Definition 4.1.1 (Martingale Problem of Stroock & Varadhan (1969)). A
probability measure $P$ on
$(C[0, \infty)^d, \mathcal{B}(C[0, \infty)^d))$ under which
$$M^f_t := f(y(t)) - f(y(0))
- \int_0^t (\mathcal{A}_s f)(y(s))\, ds$$
is a continuous martingale for every $f \in C^2_0(\mathbb{R}^d)$ is
called a solution to the martingale problem associated with
$\{\mathcal{A}_t\}$.
Theorem 4.1.1 (Skorohod (1965), Stroock & Varadhan (1969)) [1]. Consider
the stochastic differential equation
$$dX_t = b(X_t)\, dt + \sigma(X_t)\, dW_t,$$
where the coefficients
$b_i, \sigma_{ij} : \mathbb{R}^d \to \mathbb{R}$ are bounded and
continuous functions. Corresponding to every initial distribution $\mu$
on $\mathcal{B}(\mathbb{R}^d)$ with
$$\int_{\mathbb{R}^d} \|x\|^{2m}\, \mu(dx) < \infty \quad
\text{for some } m > 1,$$
there exists a weak solution of the equation.
4.2 Gauss-Markov Processes

If $X_0$ is normally distributed, then the finite-dimensional
distributions of the Gaussian process $X$ are completely determined by
the mean and covariance functions. In this case, the nondegeneracy of
the distribution of $X_t$ amounts to the positive definiteness of the
matrix
$$V(t) = \Phi(t)\Big[V(0) + \int_0^t \Phi^{-1}(u)\sigma(u)
\big(\Phi^{-1}(u)\sigma(u)\big)^T du\Big]\Phi^T(t)$$
for every $t > 0$. In order to settle this question, we shall introduce
the concept of controllability from linear system theory.
Proposition 4.2.1. The pair of functions $(A, \sigma)$ is controllable
on $[0, T]$ if and only if the matrix
$$M(T) = \int_0^T \Phi^{-1}(t)\sigma(t)
\big(\Phi^{-1}(t)\sigma(t)\big)^T dt$$
is nonsingular.
Let us consider the one-dimensional ($d = 1$, $r \ge 1$) stochastic
differential equation
$$dX_t = [A(t) X_t + a(t)]\, dt
+ \sum_{j=1}^{r} [S_j(t) X_t + \sigma_j(t)]\, dW^j_t,$$
where $W = \{W_t = (W^1_t, \dots, W^r_t), \mathcal{F}_t;\
0 \le t < \infty\}$ is an $r$-dimensional Brownian motion and the
coefficients $A$, $a$, $S_j$, $\sigma_j$ are measurable,
$\{\mathcal{F}_t\}$-adapted, almost surely locally bounded processes. We
set
$$\xi_t := \sum_{j=1}^{r} \int_0^t S_j(u)\, dW^j_u
- \frac{1}{2} \sum_{j=1}^{r} \int_0^t S_j^2(u)\, du, \qquad
Z_t := \exp\Big[\int_0^t A(u)\, du + \xi_t\Big].$$
Definition 4.2.1. The operator $\mathcal{A}$ is called elliptic at the point $x\in\mathbb{R}^d$ if
$$\sum_{i=1}^{d}\sum_{k=1}^{d} a_{ik}(x)\,\xi_i\xi_k > 0,\qquad \forall\,\xi\in\mathbb{R}^d\setminus\{0\}.$$
Example 4.2.1. Show that if $V(0)$ is given by the stationary value, then $V(t)\equiv V(0)$. In particular, $V$ satisfies the algebraic matrix (Lyapunov) equation
$$AV + VA^{T} = -\sigma\sigma^{T}.$$
We have established the following result.
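The algebraic matrix equation above can be verified numerically. A hedged sketch (the matrices $A$ and $\sigma$ below are illustrative choices, not from the source): for a stable drift matrix $A$, the stationary covariance is $V = \int_0^\infty e^{Au}\sigma\sigma^T e^{A^T u}\,du$, and the code checks that this $V$ satisfies $AV + VA^T = -\sigma\sigma^T$.

```python
import numpy as np

# Illustrative check with a diagonal stable A, so that e^{At} is available explicitly.
A = np.diag([-1.0, -2.0])
sigma = np.eye(2)

ts = np.linspace(0.0, 20.0, 4001)   # truncation of [0, inf); the integrand decays fast
vals = np.array([np.diag(np.exp(np.diag(A) * t)) @ sigma @ sigma.T @
                 np.diag(np.exp(np.diag(A) * t)) for t in ts])
dt = ts[1] - ts[0]
V = dt * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))   # close to diag(1/2, 1/4)

residual = A @ V + V @ A.T + sigma @ sigma.T
print(np.abs(residual).max())        # close to 0
```

For this diagonal example the equation can also be solved by hand: $V_{ii} = 1/(2|A_{ii}|)$, matching the numerical integral.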
4.3 Applications To Economics

In this section we apply the theory of stochastic calculus and differential equations to two related problems in financial economics. The process $W = \{W_t = (W_t^1,\dots,W_t^d)^{T},\ \mathcal{F}_t;\ 0\le t\le T\}$ is a $d$-dimensional Brownian motion on a probability space $(\Omega,\mathcal{F},P)$, and the filtration $\{\mathcal{F}_t\}$ is the augmentation under $P$ of the filtration $\{\mathcal{F}_t^W\}$ generated by $W$. The interest rate process $\{r(t),\ \mathcal{F}_t;\ 0\le t\le T\}$, as well as the vector of mean rates of return $\{b(t) = (b_1(t),\dots,b_d(t))^{T},\ \mathcal{F}_t;\ 0\le t\le T\}$ and the dispersion matrix $\{\sigma(t) = (\sigma_{ij}(t))_{1\le i,j\le d},\ \mathcal{F}_t;\ 0\le t\le T\}$, are assumed to be measurable, adapted, and bounded uniformly in $(t,\omega)\in[0,T]\times\Omega$. We set $a(t) = \sigma(t)\sigma^{T}(t)$ and assume that for some number $\varepsilon>0$
$$\xi^{T} a(t)\,\xi \ge \varepsilon\|\xi\|^2,\qquad \forall\,\xi\in\mathbb{R}^d,\ 0\le t\le T.$$
Definition 4.3.1. A portfolio process $\pi = \{\pi(t) = (\pi_1(t),\dots,\pi_d(t))^{T},\ \mathcal{F}_t;\ 0\le t\le T\}$ is a measurable, adapted process for which
$$\sum_{i=1}^{d}\int_0^T \pi_i^2(t)\,dt < \infty.$$
A consumption process $C = \{C_t,\ \mathcal{F}_t;\ 0\le t\le T\}$ is a measurable, adapted process with values in $[0,\infty)$ and
$$\int_0^T C_t\,dt < \infty.$$
Remark 4.3.1. The amount invested in the bond,
$$\pi_0(t) = X_t - \sum_{i=1}^{d}\pi_i(t),$$
can be negative, which amounts to borrowing at the interest rate $r(t)$.
Definition 4.3.2. A contingent claim is a financial instrument consisting of:
1. a payoff rate $g = \{g_t,\ \mathcal{F}_t;\ 0\le t\le T\}$, and
2. a terminal payoff $f_T$ at maturity.
Here $g$ is a non-negative, measurable, adapted process and $f_T$ is a non-negative, $\mathcal{F}_T$-measurable random variable, for which, for some $\mu>1$, we have
$$E\Big[f_T + \int_0^T g_t\,dt\Big]^{\mu} < \infty.$$
Definition 4.3.3. Let $x\ge 0$ and let $(\pi,C)$ be a portfolio/consumption pair that is admissible for the initial endowment $x$. The pair $(\pi,C)$ is a hedging strategy for the contingent claim $(g,f_T)$ provided that:
(i) $C_t = g_t,\ 0\le t\le T$;
(ii) $X_T = f_T$,
where $X$ is the wealth process associated with the pair $(\pi,C)$ and the initial condition $X_0 = x$.
Example 4.3.1. Consider the case $r\equiv 0,\ d=1,\ b\equiv 0$, and $\sigma\equiv 1$. Let the contingent claim $g\equiv 0$ and $f_T\equiv 0$ be given; then obviously there exists a hedging strategy with $x=0,\ C\equiv 0$, and $\pi\equiv 0$. Show that for each $x>0$ there is also a hedging strategy with $X_0 = x$. The fair price for a contingent claim is the smallest number $x\ge 0$ which allows the construction of a hedging strategy with initial wealth $x$.
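In this setting the fair price has a direct Monte Carlo interpretation: with $r\equiv 0$, the initial wealth needed to hedge a terminal payoff $f_T$ (with $g\equiv 0$) is its expectation. A sketch under hypothetical assumptions not in the source: the asset price is $S_t = x + W_t$ (the Bachelier model implied by $b\equiv 0$, $\sigma\equiv 1$) and the claim is a call payoff $f_T = (S_T - K)^+$ with an illustrative strike $K$.

```python
import math
import numpy as np

def fair_price_mc(x, K, T, n_paths=200_000, seed=0):
    """Monte Carlo estimate of E[(S_T - K)^+] with S_T = x + W_T ~ N(x, T)."""
    rng = np.random.default_rng(seed)
    S_T = x + math.sqrt(T) * rng.standard_normal(n_paths)
    return float(np.maximum(S_T - K, 0.0).mean())

def fair_price_exact(x, K, T):
    """Closed-form Bachelier call price: (x-K) Phi(z) + sqrt(T) phi(z), z = (x-K)/sqrt(T)."""
    z = (x - K) / math.sqrt(T)
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (x - K) * Phi + math.sqrt(T) * phi

print(fair_price_mc(1.0, 1.0, 1.0), fair_price_exact(1.0, 1.0, 1.0))
# the two values agree to Monte Carlo accuracy (about 0.399 at the money)
```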
We are given a measurable, adapted, uniformly bounded discount process $\beta = \{\beta(s),\ \mathcal{F}_s;\ 0\le s\le T\}$, and a strictly increasing, strictly concave, continuously differentiable utility function $U\colon [0,\infty)\to[0,\infty)$, for which $U(0) = 0$ and $U'(\infty) \triangleq \lim_{c\to\infty} U'(c) = 0$. Given an initial endowment $x>0$, an investor wishes to choose an admissible pair $(\pi,C)$ of portfolio and consumption processes, so as to maximize
$$V_{\pi,C}(x) \triangleq E\int_0^T e^{-\int_0^s \beta(u)\,du}\,U(C_s)\,ds.$$
We define the value function for this problem to be
$$V(x) = \sup_{(\pi,C)} V_{\pi,C}(x),$$
where the supremum is over all pairs $(\pi,C)$ admissible for $x$.
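For a deterministic consumption plan the expectation in $V_{\pi,C}(x)$ drops out and the functional becomes an ordinary integral, which can be evaluated directly. A sketch under illustrative assumptions not in the source: a constant discount rate $\beta\equiv 0.1$, the utility $U(c)=\sqrt{c}$ (which satisfies $U(0)=0$ and $U'(\infty)=0$), and a constant plan $C_s\equiv c$.

```python
import numpy as np

def discounted_utility(c, beta=0.1, T=1.0, n=2001):
    """Trapezoid-rule value of the functional int_0^T e^{-beta s} U(c) ds with U = sqrt."""
    s = np.linspace(0.0, T, n)
    integrand = np.exp(-beta * s) * np.sqrt(c)
    ds = s[1] - s[0]
    return ds * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

approx = discounted_utility(4.0)
exact = np.sqrt(4.0) * (1.0 - np.exp(-0.1)) / 0.1   # closed form for these choices
print(approx, exact)   # both close to 1.9033
```

With random consumption the outer expectation would be restored by averaging such integrals over simulated paths of $C$.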
Proposition 4.3.1. For every $x\ge 0$ we have
$$V(x) = \sup_{C\in\mathcal{D}(x)} E\int_0^T e^{-\int_0^t \beta(s)\,ds}\,U(C_t)\,dt.$$
Proof. Suppose $(\pi,C)$ is admissible for $x>0$, and set
$$y = \tilde{E}\int_0^T e^{-\int_0^t r(s)\,ds}\,C_t\,dt \le x.$$
If $y>0$ we may define $\tilde{C}_t = (x/y)\,C_t$, so that $\tilde{C}\in\mathcal{D}(x)$. There exists then a portfolio process $\tilde{\pi}$ such that $(\tilde{\pi},\tilde{C})$ is admissible for $x$ and
$$V_{\pi,C}(x) \le V_{\tilde{\pi},\tilde{C}}(x).$$
If $y=0$ then $C_t = 0,\ t\in[0,T]$, almost surely, and we can find a constant $c>0$ such that $\tilde{C}_t \equiv c$; the same inequality holds for some $\tilde{\pi}$ chosen so that $(\tilde{\pi},\tilde{C})$ is admissible for $x$.
We consider here a case somewhat more general than the original one; in particular, we shall assume that $U$ is three times continuously differentiable and that the model data are constant:
$$\beta(t)\equiv\beta,\qquad r(t)\equiv r,\qquad b(t)\equiv b,\qquad \sigma(t)\equiv\sigma,$$
where $b\in\mathbb{R}^d$ and $\sigma$ is a nonsingular $(d\times d)$ matrix.
Bibliography

[1] I. Karatzas, S. E. Shreve, Brownian Motion and Stochastic Calculus, Springer, 1991.

[2] G. Da Prato, J. Zabczyk, Stochastic Equations in Infinite Dimensions, Cambridge University Press, 1992.