TABLE OF CONTENTS

1 INTRODUCTION AND PRELIMINARIES
1.1 PRELIMINARIES
1.1.1 σ-ALGEBRA
1.1.2 BOREL σ-ALGEBRA
1.1.3 PROBABILITY SPACE
1.1.4 MEASURABLE MAP
1.1.5 RANDOM VARIABLES/VECTORS
1.1.6 PROBABILITY DISTRIBUTION
1.1.7 MATHEMATICAL EXPECTATION
1.1.8 VARIANCE AND COVARIANCE OF RANDOM VARIABLES
1.1.9 STOCHASTIC PROCESS
1.1.10 BROWNIAN MOTION
1.1.11 FILTRATIONS AND FILTERED PROBABILITY SPACE
1.1.12 ADAPTEDNESS
1.1.13 CONDITIONAL EXPECTATION
1.1.14 MARTINGALES
1.1.15 ITO CALCULUS
1.1.16 QUADRATIC VARIATION
1.1.17 STOCHASTIC DIFFERENTIAL EQUATIONS
1.1.18 ITO FORMULA AND LEMMA
1.1.19 RISK-NEUTRAL PROBABILITIES
1.1.20 LOG-NORMAL DISTRIBUTION
1.1.21 BIVARIATE NORMAL DENSITY FUNCTION
1.1.22 CUMULATIVE BIVARIATE NORMAL DISTRIBUTION FUNCTION
1.1.23 MARKOV PROCESS
1.1.24 BACKWARD KOLMOGOROV EQUATION
1.1.25 FOKKER-PLANCK EQUATION
1.1.26 DIFFUSION PROCESS
1.2 INTRODUCTION
2 LITERATURE REVIEW
3 FINANCIAL DERIVATIVES AND COMPOUND OPTIONS
3.1 FINANCIAL DERIVATIVES
3.2 CATEGORIES OF DERIVATIVES
3.2.1 FORWARDS
3.2.2 FUTURES
3.2.3 SWAPS
3.2.4 OPTIONS
3.2.5 FINANCIAL MARKETS
3.2.6 TYPES OF TRADERS
3.2.7 EXOTIC OPTIONS
3.2.8 SIMULTANEOUS AND SEQUENTIAL COMPOUND OPTIONS
4 PRICING COMPOUND OPTIONS
4.1 FACTORS AFFECTING OPTION PRICES
4.1.1 EXERCISE PRICE OF THE OPTION
4.1.2 CURRENT VALUE OF THE UNDERLYING ASSET
4.1.3 TIME TO EXPIRATION ON THE OPTION
4.1.4 VARIANCE IN VALUE OF UNDERLYING ASSET
4.1.5 RISK FREE INTEREST RATE
4.2 BLACK-SCHOLES-MERTON MODEL
4.2.1 BLACK-SCHOLES OPTION PRICING
4.2.2 THE GENERALISED BLACK-SCHOLES-MERTON OPTION PRICING FORMULA
4.2.3 COMPOUND OPTIONS
4.2.4 PUT-CALL PARITY COMPOUND OPTIONS
4.3 BINOMIAL LATTICE MODEL
4.3.1 COMPOUND OPTION MODEL IN A TWO PERIOD BINOMIAL TREE
4.3.2 FOUR-PERIOD BINOMIAL LATTICE MODEL
4.4 THE FORWARD VALUATION OF COMPOUND OPTIONS
5 APPLICATIONS
5.1 BLACK-SCHOLES-MERTON MODEL
5.2 BINOMIAL LATTICE MODEL
CHAPTER ONE
INTRODUCTION AND PRELIMINARIES
1.1 Preliminaries
1.1.1 σ-algebra:
Let $\Omega$ be a non-empty set, and $\Sigma$ a non-empty collection of subsets of $\Omega$. Then $\Sigma$ is called a $\sigma$-algebra if the following properties hold:
(i) $\Omega \in \Sigma$
(ii) If $A \in \Sigma$, then $A' \in \Sigma$ (where $A'$ is the complement of $A$ in $\Omega$)
(iii) If $\{A_j : j \in J\} \subseteq \Sigma$, then
$$\bigcup_{j \in J} A_j \in \Sigma$$
for any finite or countably infinite index set $J \subseteq \mathbb{N}$.
1.1.2 Borel σ-algebra:
Let $X$ be a non-empty set and $\tau$ a topology on $X$, i.e. $\tau$ is the collection of open subsets of $X$. Then $\sigma(\tau)$ is called the Borel $\sigma$-algebra of the topological space $(X, \tau)$.
1.1.3 Probability Space:
Let $\Omega$ be a non-empty set and $\Sigma$ be a $\sigma$-algebra of subsets of $\Omega$. Then the pair $(\Omega, \Sigma)$ is called a measurable space, and a member of $\Sigma$ is called a measurable set. Let $(\Omega, \Sigma)$ be a measurable space and $\mu$ be a real-valued map on $\Sigma$. Then $\mu$ is called a probability measure on $(\Omega, \Sigma)$ if the following properties hold:
(I) $\mu(A) \geq 0$ for all $A \in \Sigma$
(II) $\mu(\Omega) = 1$
(III) For $\{A_n\}_{n \in \mathbb{N}} \subseteq \Sigma$ with $A_j \cap A_k = \emptyset$ whenever $j \neq k$,
$$\mu\Big(\bigcup_{n \in \mathbb{N}} A_n\Big) = \sum_{n \in \mathbb{N}} \mu(A_n),$$
i.e. $\mu$ is $\sigma$-additive (or countably additive).
Now if $(\Omega, \Sigma)$ is a measurable space and $\mu$ is a probability measure on $(\Omega, \Sigma)$, then the triple $(\Omega, \Sigma, \mu)$ is called a probability space.
1.1.4 Measurable Map:
Let $(\Omega, \Sigma)$ and $(\Omega', \Sigma')$ be two measurable spaces. Then a map $X : \Omega \to \Omega'$ is called measurable if the set $X^{-1}(A) = \{\omega \in \Omega : X(\omega) \in A\}$ is in $\Sigma$ whenever $A \in \Sigma'$. In particular, we take $(\Omega', \Sigma')$ to be $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ or $(\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n))$, where $n \in \mathbb{N}$ and $\mathcal{B}(\mathbb{R})$ is the Borel $\sigma$-algebra of $\mathbb{R}$.
1.1.5 Random Variables/Vectors:
Let $(\Omega, \Sigma, \mu)$ be an arbitrary probability space and $(\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n))$ be the $n$-dimensional Borel measurable space. Then a measurable map $X : \Omega \to \mathbb{R}^n$ is called a random vector. If $n = 1$, then $X$ is called a random variable. We denote by $L(\Omega, \mathbb{R}^n)$ the set of all $\mathbb{R}^n$-valued random vectors on $\Omega$, and by $L^1(\Omega, \Sigma, \mu)$ the space of integrable random variables.
1.1.6 Probability Distribution:
Let $(\Omega, \Sigma, \mu)$ be a probability space, $(\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n))$ the $n$-dimensional Borel measurable space, and $X : \Omega \to \mathbb{R}^n$ a random vector. Then the map $\mu_X : \mathcal{B}(\mathbb{R}^n) \to [0, 1]$ defined by $\mu_X(A) = \mu(X^{-1}(A))$, $A \in \mathcal{B}(\mathbb{R}^n)$, is called the probability distribution of $X$.
1.1.7 Mathematical Expectation:
Let $(\Omega, \Sigma, \mu)$ be a probability space. If $X \in L^1(\Omega, \Sigma, \mu)$, then
$$E(X) = \int_{\Omega} X(\omega)\, d\mu(\omega)$$
is called the mathematical expectation, expected value, or mean of $X$.
1.1.8 Variance and Covariance of Random Variables:
Let $(\Omega, \Sigma, \mu)$ be a probability space and $X$ an $\mathbb{R}$-valued random variable on $\Omega$ such that $X \in L^2(\Omega, \Sigma, \mu)$. Then $X$ is automatically in $L^1(\Omega, \Sigma, \mu)$ (because, in general, if $p \leq q$ then $L^q(\Omega, \Sigma, \mu) \subseteq L^p(\Omega, \Sigma, \mu)$ for all $p, q \in [1, \infty) \cup \{\infty\}$, since $\mu$ is a finite measure). The variance of $X$ is defined as
$$\mathrm{Var}(X) = E\big((X - E(X))^2\big).$$
The number $\sigma_X = \sqrt{\mathrm{Var}(X)}$ is called the standard deviation. Now let $X, Y \in L^2(\Omega, \Sigma, \mu)$. Then the covariance of $X$ and $Y$ is given by
$$\mathrm{Cov}(X, Y) = E\big((X - E(X))(Y - E(Y))\big).$$
1.1.9 Stochastic Process:
Let $(\Omega, \Sigma, \mu)$ be a probability space. A stochastic process $X$ indexed by a totally ordered set $T$ (time) is a collection $X = \{X(t) : t \in T\}$, where each $X(t)$ is a random variable on $\Omega$. We often write $X_t$ for $X(t)$ and denote the value of $X(t)$ at $\omega \in \Omega$ by $X(t, \omega)$ or $X_t(\omega)$. Thus a stochastic process, or random process, is a collection of random variables, often used to represent the evolution of some random value or system over time.
1.1.10 Brownian Motion:
The term Brownian motion refers to the ceaseless, irregular random motion of small particles immersed in a liquid or gas, as observed by R. Brown in 1827. The phenomenon can be explained by the perpetual collisions of the particles with the molecules of the surrounding medium. Mathematically, let $(\Omega, \Sigma, \mu)$ be a probability space, and let $W = \{W(t) \in L(\Omega, \mathbb{R}^n) : t \in T\}$, where $T \subseteq \mathbb{R}_+ = [0, \infty)$, be an $\mathbb{R}^n$-valued stochastic process on $\Omega$ with the following properties:
(i) $W(0) = 0$ almost surely.
(ii) $W$ has continuous sample paths. (If $X$ is a stochastic process and $\omega \in \Omega$, the map $t \mapsto X(t, \omega) \in \mathbb{R}^n$ is called a sample path or trajectory of $X$; if this map is continuous, we say $X$ has continuous sample paths.)
(iii) $W(t) - W(s)$ is an $N(0, (t - s)I)$ random vector for all $t > s \geq 0$, where $I$ is the $n \times n$ identity matrix.
(iv) $W$ has stochastically independent increments, i.e. for every $0 < t_1 < t_2 < \cdots < t_k$, the random vectors $W(t_1), W(t_2) - W(t_1), \ldots, W(t_k) - W(t_{k-1})$ are stochastically independent.
Then $W$ is called the standard $n$-dimensional Brownian motion or $n$-dimensional Wiener process.
For the $n$-dimensional Brownian motion $W(t) = (W_1(t), \ldots, W_n(t))$ we have the following useful properties:
(I) $E(W_j(t)) = 0$, $j = 1, 2, \ldots, n$
(II) $E(W_j(t)^2) = t$, $j = 1, 2, \ldots, n$
(III) $E(W_j(t)W_j(s)) = \min(t, s)$ for $t, s \in T$.
To show property (III), assume $t > s$ (without loss of generality) and consider
$$E[W_j(t)W_j(s)] = E[(W_j(t) - W_j(s))W_j(s) + W_j(s)^2] = E[(W_j(t) - W_j(s))W_j(s)] + E[W_j(s)^2]$$
(because $E$ is linear). Since $W_j(t) - W_j(s)$ and $W_j(s)$ are independent, and both have zero mean, it follows that
$$E[W_j(t)W_j(s)] = E[W_j(s)^2] = s = \min(t, s).$$
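Properties (I)-(III) can also be checked numerically by simulating Brownian paths as cumulative sums of independent Gaussian increments. The following minimal Monte Carlo sketch (added here for illustration; the step size, horizon and sample size are arbitrary choices) estimates the three moments:

```python
import numpy as np

# Simulate many one-dimensional Brownian paths on [0, 1] as cumulative
# sums of independent N(0, dt) increments, then estimate the moments
# E[W(t)] = 0, E[W(t)^2] = t and E[W(t) W(s)] = min(t, s).
rng = np.random.default_rng(0)
n_paths, n_steps, horizon = 100_000, 500, 1.0
dt = horizon / n_steps
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(increments, axis=1)              # W[:, k] approximates W((k + 1) * dt)

t, s = 0.8, 0.3                                # illustrative times with t > s
i_t, i_s = round(t / dt) - 1, round(s / dt) - 1
print("E[W(t)]      ~", W[:, i_t].mean())                  # close to 0
print("E[W(t)^2]    ~", (W[:, i_t] ** 2).mean())           # close to t = 0.8
print("E[W(t)W(s)]  ~", (W[:, i_t] * W[:, i_s]).mean())    # close to min(t, s) = 0.3
```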
1.1.11 Filtrations and Filtered Probability Space:
Let $(\Omega, \Sigma, \mu)$ be a probability space and consider $F(\cdot) = \{\Sigma_t : t \in T\}$, a family of $\sigma$-subalgebras of $\Sigma$ with the following properties:
(i) for each $t \in T$, $\Sigma_t$ contains all the $\mu$-null members of $\Sigma$;
(ii) $\Sigma_s \subseteq \Sigma_t$ whenever $s \leq t$, $s, t \in T$.
Then $F(\cdot)$ is called a filtration of $\Sigma$, and $(\Omega, \Sigma, F(\cdot), \mu)$ is called a filtered probability space or stochastic basis.
1.1.12 Adaptedness:
A stochastic process $X = \{X(t) \in L(\Omega, \mathbb{R}^n) : t \in T\}$ is said to be adapted to the filtration $F(\cdot) = \{\Sigma_t : t \in T\}$ if $X(t)$ is measurable with respect to $\Sigma_t$ for each $t \in T$. It is plain that every stochastic process is adapted to its natural filtration.
1.1.13 Conditional Expectation:
Let $(\Omega, \Sigma, \mu)$ be a probability space, $X$ a real random variable in $L^1(\Omega, \Sigma, \mu)$, and $\mathcal{G}$ a $\sigma$-subalgebra of $\Sigma$. Then the conditional expectation of $X$ given $\mathcal{G}$, written $E(X \mid \mathcal{G})$, is defined as any random variable $Y$ such that:
(i) $Y$ is measurable with respect to $\mathcal{G}$, i.e. for any $A \in \mathcal{B}(\mathbb{R})$, the set $Y^{-1}(A) \in \mathcal{G}$;
(ii) $\int_B X(\omega)\, d\mu(\omega) = \int_B Y(\omega)\, d\mu(\omega)$ for arbitrary $B \in \mathcal{G}$.
A random variable $Y$ which satisfies (i) and (ii) is called a version of $E(X \mid \mathcal{G})$.
1.1.14 Martingales:
The term martingale has its origin in gambling; it refers to the gambling tactic of doubling the stake after a loss in order to recoup one's losses. In the study of stochastic processes, martingales are defined in relation to an adapted stochastic process. Let $X = \{X(t) \in L^1(\Omega, \Sigma, \mu) : t \in T\}$ be a real-valued stochastic process on a filtered probability space $(\Omega, \Sigma, F(\cdot), \mu)$. Then $X$ is called a
(i) supermartingale if $E(X(t) \mid \Sigma_s) \leq X(s)$ almost surely whenever $t \geq s$;
(ii) submartingale if $E(X(t) \mid \Sigma_s) \geq X(s)$ almost surely whenever $t \geq s$;
(iii) martingale if $X$ is both a submartingale and a supermartingale, i.e. if $E(X(t) \mid \Sigma_s) = X(s)$ almost surely whenever $t \geq s$.
1.1.15 Ito Calculus:
Let $(\Omega, \Sigma, F(\cdot), \mu)$ be a filtered probability space and $W$ a Brownian motion relative to this space. We define an integral of the form
$$I_W(f, t) = \int_0^t f(s)\, dW(s), \quad t \in \mathbb{R}_+,$$
where $f$ belongs to some class of stochastic processes adapted to $(\Omega, \Sigma, F(\cdot), \mu)$.
1.1.16 Quadratic Variation:
Let $X$ be a stochastic process on a filtered probability space $(\Omega, \Sigma, F(\cdot), \mu)$. Then the quadratic variation of $X$ on $[0, t]$, $t > 0$, is the stochastic process $\langle X \rangle$ defined by
$$\langle X \rangle(t) = \lim_{|P| \to 0} \sum_{j=0}^{n-1} |X(t_{j+1}) - X(t_j)|^2,$$
where $P = \{t_0, t_1, \ldots, t_n\}$ is any partition of $[0, t]$, i.e. $0 = t_0 < t_1 < \cdots < t_n = t$, and $|P| = \max_{0 \leq j \leq n-1} |t_{j+1} - t_j|$.
If $X$ is a differentiable stochastic process, then $\langle X \rangle = 0$.
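As a numerical illustration of this definition (a sketch added here, not part of the formal development), the sum of squared increments of a simulated Brownian path over finer and finer partitions of $[0, t]$ approaches $t$, while for a differentiable path such as $\sin(t)$ it approaches $0$:

```python
import numpy as np

def quadratic_variation(path):
    """Sum of squared increments of a sampled path (one partition)."""
    increments = np.diff(path)
    return np.sum(increments ** 2)

rng = np.random.default_rng(1)
t = 1.0
for n in (100, 10_000, 1_000_000):              # finer and finer partitions
    grid = np.linspace(0.0, t, n + 1)
    dt = t / n
    brownian = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
    smooth = np.sin(grid)                        # differentiable path
    print(n, quadratic_variation(brownian), quadratic_variation(smooth))
# The Brownian column tends to t = 1.0; the smooth column tends to 0.
```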
1.1.17 Stochastic Differential Equations:
These are equations of the form
$$dX(t) = g(t, X(t))\,dt + f(t, X(t))\,dW(t)$$
with initial condition $X(t_0) = x$.
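A standard way to approximate solutions of such equations numerically is the Euler-Maruyama scheme, which discretises time and replaces $dW(t)$ by an $N(0, \Delta t)$ increment at each step. The sketch below (illustrative parameter values, not taken from the text) applies it to geometric Brownian motion, $dX = \mu X\,dt + \sigma X\,dW$, the asset dynamics used later in the Black-Scholes setting:

```python
import numpy as np

def euler_maruyama(g, f, x0, t0, t1, n_steps, rng):
    """Approximate dX = g(t, X) dt + f(t, X) dW(t) with X(t0) = x0."""
    dt = (t1 - t0) / n_steps
    t, x = t0, x0
    path = [x0]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))        # Brownian increment ~ N(0, dt)
        x = x + g(t, x) * dt + f(t, x) * dw
        t += dt
        path.append(x)
    return np.array(path)

# Geometric Brownian motion: g(t, x) = mu * x, f(t, x) = sigma * x.
mu, sigma = 0.05, 0.2
rng = np.random.default_rng(2)
path = euler_maruyama(lambda t, x: mu * x,
                      lambda t, x: sigma * x,
                      x0=100.0, t0=0.0, t1=1.0, n_steps=252, rng=rng)
print(path[-1])   # one simulated terminal value X(1)
```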
1.1.18 Ito Formula and Lemma:
Let $(\Omega, \Sigma, F(\cdot), \mu)$ be a filtered probability space, $X$ an adapted stochastic process on $(\Omega, \Sigma, F(\cdot), \mu)$ whose quadratic variation is $\langle X \rangle$, and $U \in C^{1,2}([0, \infty) \times \mathbb{R})$. Then
$$U(t, X(t)) = U(s, X(s)) + \int_s^t \frac{\partial U}{\partial t}(\tau, X(\tau))\,d\tau + \int_s^t \frac{\partial U}{\partial x}(\tau, X(\tau))\,dX(\tau) + \frac{1}{2}\int_s^t \frac{\partial^2 U}{\partial x^2}(\tau, X(\tau))\,d\langle X \rangle(\tau),$$
which may be written in differential form as
$$dU(t, X(t)) = \frac{\partial U}{\partial t}(t, X(t))\,dt + \frac{\partial U}{\partial x}(t, X(t))\,dX(t) + \frac{1}{2}\frac{\partial^2 U}{\partial x^2}(t, X(t))\,d\langle X \rangle(t).$$
The equation above is normally referred to as the Ito formula. If $X$ satisfies the stochastic differential equation (SDE)
$$dX(t) = g(t, X(t))\,dt + f(t, X(t))\,dW(t), \qquad X(t_0) = x,$$
then
$$dU(t, X(t)) = g_U(t, X(t))\,dt + f_U(t, X(t))\,dW(t), \qquad U(t_0, X(t_0)) = U(t_0, x),$$
where
$$g_U(t, x) = \frac{\partial U}{\partial t}(t, x) + g(t, x)\frac{\partial U}{\partial x}(t, x) + \frac{1}{2}\big(f(t, x)\big)^2 \frac{\partial^2 U}{\partial x^2}(t, x),$$
$$f_U(t, x) = f(t, x)\frac{\partial U}{\partial x}(t, x).$$
We obtain a particular case of the Ito formula, called the Ito lemma, if we take $X = W$, i.e. $g \equiv 0$ and $f \equiv 1$ on $T \times \mathbb{R}$. Then
$$dU(t, W(t)) = \Big[\frac{\partial U}{\partial t}(t, W(t)) + \frac{1}{2}\frac{\partial^2 U}{\partial x^2}(t, W(t))\Big]dt + \frac{\partial U}{\partial x}(t, W(t))\,dW(t).$$
The equation above is referred to as the Ito lemma.
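As a brief worked example of the formula (added here for illustration, not taken from the text): for geometric Brownian motion $dX = \mu X\,dt + \sigma X\,dW$ with $U(t, x) = \ln x$, one gets $g_U = \mu - \tfrac{1}{2}\sigma^2$ and $f_U = \sigma$, so $\ln X(T) = \ln X(0) + (\mu - \tfrac{1}{2}\sigma^2)T + \sigma W(T)$, i.e. $\ln X(T)$ is normal with mean $\ln X(0) + (\mu - \tfrac{1}{2}\sigma^2)T$ and variance $\sigma^2 T$. A quick Monte Carlo check of this consequence:

```python
import numpy as np

# Exact simulation of geometric Brownian motion via the Ito formula with
# U(t, x) = ln x; compare the sample mean and variance of ln X(T) with
# the theoretical values (mu - sigma^2 / 2) * T and sigma^2 * T.
rng = np.random.default_rng(4)
x0, mu, sigma, T, n_paths = 100.0, 0.05, 0.2, 1.0, 200_000
W_T = rng.normal(0.0, np.sqrt(T), size=n_paths)
log_X_T = np.log(x0) + (mu - 0.5 * sigma ** 2) * T + sigma * W_T

print("sample mean of ln X(T):", log_X_T.mean(),
      " theory:", np.log(x0) + (mu - 0.5 * sigma ** 2) * T)
print("sample var  of ln X(T):", log_X_T.var(),
      " theory:", sigma ** 2 * T)
```

This observation is also why the lognormal distribution (Section 1.1.20) plays a central role in the Black-Scholes setting.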
1.1.19 Risk-neutral Probabilities:
These are probabilities for future outcomes adjusted for risk, which are then used to compute expected asset values. The benefit of this risk-neutral pricing approach is that once the risk-neutral probabilities are calculated, they can be used to price every asset based on its expected payoff. These theoretical risk-neutral probabilities differ from actual real-world probabilities; if the latter were used, the expected value of each security would need to be adjusted for its individual risk profile. A key assumption in computing risk-neutral probabilities is the absence of arbitrage. The concept of risk-neutral probabilities is widely used in pricing derivatives.
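A minimal illustration of the idea (a sketch with made-up numbers, not from the text): in a one-period binomial model where the asset moves up by factor $u$ or down by factor $d$ and the risk-free rate is $r$, absence of arbitrage gives the risk-neutral up-probability $q = (e^{r\Delta t} - d)/(u - d)$, and any claim is priced as its discounted expected payoff under $q$:

```python
import numpy as np

def one_period_price(payoff_up, payoff_down, u, d, r, dt):
    """Discounted risk-neutral expectation of a one-period claim."""
    growth = np.exp(r * dt)
    assert d < growth < u, "no-arbitrage condition violated"
    q = (growth - d) / (u - d)               # risk-neutral up-probability
    return np.exp(-r * dt) * (q * payoff_up + (1 - q) * payoff_down)

# Illustrative numbers: S0 = 100, strike K = 100, u = 1.1, d = 0.9, r = 5%.
S0, K, u, d, r, dt = 100.0, 100.0, 1.1, 0.9, 0.05, 1.0
call = one_period_price(max(S0 * u - K, 0.0), max(S0 * d - K, 0.0), u, d, r, dt)
print("one-period call value:", call)
```

The same construction, iterated over several periods, underlies the binomial lattice models discussed in chapter four.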
1.1.20 Log-normal Distribution:
A random variable $X$ is said to have a lognormal distribution if its logarithm has a normal distribution, i.e. $\ln(X) \sim N(\mu, \sigma^2)$, meaning the logarithm of $X$ is normally distributed with mean $\mu$ and variance $\sigma^2$.
1.1.21 Bivariate Normal Density Function:
The standardised bivariate normal density function, with correlation coefficient $\rho$, is given by:
$$f(x, y) = \frac{1}{2\pi\sqrt{1 - \rho^2}} \exp\Big[-\frac{x^2 - 2\rho x y + y^2}{2(1 - \rho^2)}\Big].$$
1.1.22 Cumulative Bivariate Normal Distribution Function:
The standardised cumulative bivariate normal distribution function returns the probability that one random variable is less than $a$ and that a second random variable is less than $b$ when the correlation between the two variables is $\rho$. It is given by:
$$M(a, b; \rho) = \frac{1}{2\pi\sqrt{1 - \rho^2}} \int_{-\infty}^{a} \int_{-\infty}^{b} \exp\Big[-\frac{x^2 - 2\rho x y + y^2}{2(1 - \rho^2)}\Big]\,dy\,dx.$$
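$M(a, b; \rho)$ appears later in closed-form compound option formulas, so it is useful to be able to evaluate it numerically. One possible sketch (assuming a reasonably recent SciPy, where the multivariate normal distribution exposes a CDF) encodes $\rho$ through the correlation matrix $\begin{pmatrix}1 & \rho\\ \rho & 1\end{pmatrix}$:

```python
import numpy as np
from scipy.stats import multivariate_normal

def bivariate_normal_cdf(a, b, rho):
    """M(a, b; rho) = P(X <= a, Y <= b) for standard normals with correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([a, b])

# Sanity check: with rho = 0 the variables are independent, so
# M(0, 0; 0) = Phi(0) * Phi(0) = 0.25.
print(bivariate_normal_cdf(0.0, 0.0, 0.0))     # ~ 0.25
print(bivariate_normal_cdf(1.0, -0.5, 0.3))    # illustrative value
```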
1.1.23 Markov Process:
A Markov process is a stochastic process satisfying a certain property, called the Markov property. Let $(\Omega, \Sigma, \mu)$ be a probability space with a filtration $F(\cdot) = \{\Sigma_t : t \in T\}$ for some totally ordered set $T$, and let $(S, \mathcal{S})$ be a measurable space. An $S$-valued stochastic process $X = \{X_t : t \in T\}$ adapted to the filtration is said to possess the Markov property with respect to the filtration $F(\cdot)$ if, for each $A \in \mathcal{S}$ and $s, t \in T$ with $s < t$,
$$P(X_t \in A \mid \Sigma_s) = P(X_t \in A \mid X_s).$$
A Markov process is a stochastic process which satisfies the Markov property with respect to its natural filtration.
1.1.24 Backward Kolmogorov Equation:
The Kolmogorov backward equation (diffusion) is a partial differential equation (PDE) that arises in the theory of continuous-time Markov processes. Assume that the system state $X(t)$ evolves according to the stochastic differential equation
$$dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dW(t).$$
Then the Kolmogorov backward equation is
$$-\frac{\partial}{\partial t} p(x, t) = \mu(x, t)\frac{\partial}{\partial x} p(x, t) + \frac{1}{2}\sigma^2(x, t)\frac{\partial^2}{\partial x^2} p(x, t)$$
for $t \leq s$, subject to the final condition $p(x, s) = u_s(x)$. This can be derived by applying Ito's lemma to $p(x, t)$ and setting the $dt$ term equal to zero.
1.1.25 Fokker-Planck Equation:
The Fokker-Planck equation describes the time evolution of the probability density function of the velocity of a particle, and can be generalised to other observables as well. It is also known as the Kolmogorov forward equation (diffusion). In one spatial dimension $x$, for an Ito process given by the stochastic differential equation
$$dX_t = \mu(X_t, t)\,dt + \sqrt{2D(X_t, t)}\,dW_t,$$
with drift $\mu(X_t, t)$ and diffusion coefficient $D(X_t, t)$, the Fokker-Planck equation for the probability density $f(x, t)$ of the random variable $X_t$ is
$$\frac{\partial}{\partial t} f(x, t) = -\frac{\partial}{\partial x}\big[\mu(x, t) f(x, t)\big] + \frac{\partial^2}{\partial x^2}\big[D(x, t) f(x, t)\big].$$
The Fokker-Planck equation also exists in many dimensions, but we restrict ourselves to one dimension only.
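For intuition, the equation can be solved numerically. The sketch below (added for illustration, with arbitrary grid sizes) treats the simplest case of zero drift and constant diffusion coefficient $D$, where the Fokker-Planck equation reduces to the heat equation $f_t = D f_{xx}$ and the exact density is a centred Gaussian of variance $2Dt$; an explicit finite-difference scheme reproduces it:

```python
import numpy as np

# Explicit finite differences for f_t = D f_xx (zero drift, constant D),
# compared with the exact Gaussian of variance 2 * D * t.  The time step
# satisfies the usual stability condition D * dt / dx**2 <= 1/2.
D, t0, t_end = 0.5, 0.1, 1.0
x = np.linspace(-6.0, 6.0, 601)
dx = x[1] - x[0]
dt = 0.4 * dx ** 2 / D
f = np.exp(-x ** 2 / (4 * D * t0)) / np.sqrt(4 * np.pi * D * t0)   # density at t0

t = t0
while t < t_end:
    f[1:-1] += D * dt / dx ** 2 * (f[2:] - 2 * f[1:-1] + f[:-2])
    t += dt

exact = np.exp(-x ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
print("max abs error:", np.max(np.abs(f - exact)))
```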
1.1.26 Diffusion Process:
A diffusion process is a solution to a stochastic differential equation. It is a continuous-time Markov process with almost surely continuous sample paths. Mathematically, it is a Markov process with continuous sample paths for which the Kolmogorov forward equation is the Fokker-Planck equation. Brownian motion, reflected Brownian motion and Ornstein-Uhlenbeck processes are examples of diffusion processes.
1.2 Introduction
An option is a financial instrument that specifies a contract between two parties for a future transaction at a reference price. This transaction can be to buy or sell an underlying asset such as a stock, a bond, an interest rate, etc. The option holder has the right but not the obligation to carry out the specified transaction (i.e. to buy if it is a "call option" or to sell if it is a "put option") at or by a specified date (the reference time).
A European option gives the holder the right but not the obligation to buy (if it is a call) or to sell (if it is a put) an underlying asset at the specified time or maturity date for the specified price, while an American option gives the holder the right but not the obligation to buy or sell an underlying asset on or prior to the specified time or maturity date at the specified price.
A compound option is an option on an option. Hence the compound option, or mother option, gives the holder the right but not the obligation to buy or sell another underlying option, the daughter option, for a certain strike price $K_1$ at a specified time $T_1$. The daughter option then gives the holder a further right to buy or sell a financial asset for another strike price $K_2$ at a later point in time $T_2$. So a compound option has two strike prices and two expiration dates. Compound options are very frequently encountered in capital budgeting problems when projects require sequential decisions. For example, when dealing with development projects, the initial development expense allows one later to make a decision to wait or to engage in further development expenses, eventually leading to a final capital investment project. All R&D expenditures involve a sequence of decisions. In the mining and extraction industries, one conducts geological surveys that will lead to the opening of a mine, or to the decision to drill. Then the owner of the mine, or of the drilling platform, can stop operations on any day and begin them again later. An investment in the production of a movie might lead to sequels. The value of a sequel is the value of a compound option.
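To make the two-strike, two-date structure concrete, here is a small Monte Carlo sketch (illustrative parameters only, not one of the pricing methods developed in chapter four): at $T_1$ a call-on-call is worth $\max(C_{BS}(S_{T_1}, K_2, T_2 - T_1) - K_1, 0)$, where $C_{BS}$ denotes the Black-Scholes value of the daughter call, and the mother option value is the discounted risk-neutral expectation of that payoff.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes value of a European call with time tau to maturity."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

# Monte Carlo value of a call-on-call: illustrative parameters only.
S0, K1, K2, T1, T2, r, sigma = 100.0, 5.0, 100.0, 0.5, 1.0, 0.05, 0.2
rng = np.random.default_rng(5)
Z = rng.normal(size=200_000)
S_T1 = S0 * np.exp((r - 0.5 * sigma ** 2) * T1 + sigma * np.sqrt(T1) * Z)
daughter_at_T1 = bs_call(S_T1, K2, r, sigma, T2 - T1)   # value of the daughter call at T1
mother_payoff = np.maximum(daughter_at_T1 - K1, 0.0)    # exercise the mother option if worthwhile
print("call-on-call value ~", np.exp(-r * T1) * mother_payoff.mean())
```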
This project is divided into five chapters. Chapter one contains the preliminaries and the introduction, and chapter two is the literature review. Chapter three covers financial derivatives and compound options, where we give a full explanation of what a compound option is all about. As in the case of pricing and valuing other financial instruments (bonds or stocks) or derivatives (futures or swaps), options too must be priced carefully to avoid underestimating or overestimating their prices. As such, option pricing theory is one of the cornerstones, and most successful theories, of finance and economics, as described by Ross. Therefore, chapter four deals with the pricing of compound options, where we present some methods used to price compound options, which is the main work of the project. The Black-Scholes formula for pricing compound options and the forward valuation of compound options will also be discussed, where we use the Fokker-Planck equation and the backward Kolmogorov equation to obtain the formula for pricing compound options. We will also discuss the binomial lattice model, or binomial tree model, for pricing sequential compound options. Finally, chapter five deals with applications.