TABLE OF CONTENTS
1 General Introduction
  1.1 Preliminaries
    1.1.1 σ-algebra
    1.1.2 Probability Space
    1.1.3 Borel σ-algebra
    1.1.4 A random variable
    1.1.5 Probability distribution
    1.1.6 Normal distribution
    1.1.7 A d-dimensional Normal distribution
    1.1.8 Log-normal Distribution
    1.1.9 Mathematical Expectation
    1.1.10 Variance and covariance of random variables
    1.1.11 Characteristic function
    1.1.12 Stochastic process
    1.1.13 Sample Paths
    1.1.14 Brownian Motion
    1.1.15 Filtration
    1.1.16 Adaptedness
    1.1.17 Conditional expectation
    1.1.18 Martingale
    1.1.19 Quadratic variation
    1.1.20 Stochastic differential equations
    1.1.21 Ito formula and lemma
    1.1.22 Gamma distribution
    1.1.23 Risk-neutral Probabilities
  1.2 Overview
2 Literature Review
3 Financial Derivatives
    3.0.1 Forward Contract
    3.0.2 Futures Contracts
    3.0.3 Options
    3.0.4 Hedgers
    3.0.5 Speculators
    3.0.6 Arbitrageurs
4 Pricing of Basket Options
  4.1 Definition
  4.2 Geometric Brownian Motion
  4.3 Methods used in pricing Basket options
    4.3.1 Numerical Methods
    4.3.2 Approximation Methods
5 Application
  5.1 Foreign Exchange Market
    5.1.1 Quotation Style
  5.2 Foreign Exchange Basket Option
    5.2.1 Correlation in foreign exchange
  5.3 Numerical Example
  5.4 Conclusion
CHAPTER ONE
General Introduction
In this chapter we give some definitions in probability theory needed for our thesis and provide an introduction to the work.
1.1 Preliminaries
We begin by introducing a number of probabilistic concepts.
1.1.1 σ-algebra
Let $\Omega$ be a non-empty set and $B$ a non-empty collection of subsets of $\Omega$. Then $B$ is called a σ-algebra if the following properties hold:
i. $\Omega \in B$;
ii. $A \in B \Rightarrow A^{c} \in B$;
iii. $\{A_j : j \in J\} \subseteq B \Rightarrow \bigcup_{j \in J} A_j \in B$ for any finite or countably infinite subset $J$ of $\mathbb{N}$.
1.1.2 Probability Space
1. Let $\Omega$ be a non-empty set and $B$ a σ-algebra of subsets of $\Omega$. Then the pair $(\Omega, B)$ is called a measurable space and a member of $B$ is called a measurable set.
2. Let $(\Omega, B)$ be a measurable space and $P : B \to \mathbb{R}$ a real-valued map on $B$. Then $P$ is called a probability measure if the following properties hold:
i. $P(A) \geq 0$ for all $A \in B$;
ii. $P(\Omega) = 1$;
iii. for any sequence $\{A_n\}_{n \in \mathbb{N}} \subseteq B$ with $A_j \cap A_k = \emptyset$ for $j \neq k$,
$$P\Big(\bigcup_{n \in \mathbb{N}} A_n\Big) = \sum_{n \in \mathbb{N}} P(A_n),$$
i.e. $P$ is σ-additive (countably additive).
3. If $(\Omega, \Sigma)$ is a measurable space (we now write $\Sigma$ for the σ-algebra) and $P$ is a probability measure on $(\Omega, \Sigma)$, then the triple $(\Omega, \Sigma, P)$ is called a probability space.
1.1.3 Borel σ-algebra
If $C$ is a collection of subsets of $\Omega$, then the smallest σ-algebra of subsets of $\Omega$ which contains $C$, denoted by $\sigma(C)$, is called the σ-algebra generated by $C$.
Let $X$ be a non-empty set and $\tau$ a topology on $X$, i.e. $\tau$ is the collection of all open subsets of $X$. Then $\sigma(\tau)$ is called the Borel σ-algebra of the topological space $(X, \tau)$.
1.1.4 A random variable
Let $(\Omega, \Sigma, P)$ be an arbitrary probability space, $B(\mathbb{R}^d)$ the Borel σ-algebra of $\mathbb{R}^d$, and $(\mathbb{R}^d, B(\mathbb{R}^d))$ the d-dimensional Borel measurable space. Then a measurable map $X : \Omega \to \mathbb{R}^d$ is called a random vector. In the case $d = 1$, $X$ is called a random variable.
1.1.5 Probability distribution
Let $(\Omega, \Sigma, P)$ be a probability space, $(\mathbb{R}^d, B(\mathbb{R}^d))$ the d-dimensional Borel measurable space, and $X : \Omega \to \mathbb{R}^d$ a random vector. Then the map $P_X : B(\mathbb{R}^d) \to [0, 1]$ defined by $P_X(A) = P(X^{-1}(A))$, $A \in B(\mathbb{R}^d)$, is called the probability distribution of $X$.
1.1.6 Normal distribution
The standard univariate normal distribution (i.e. with mean zero and variance 1) has density
$$\phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}, \qquad -\infty < x < \infty,$$
and cumulative distribution function
$$\Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du.$$
In general, a normal distribution with mean $\mu$ and variance $\sigma^2$, $\sigma > 0$, has density
$$\phi_{\mu,\sigma}(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)}$$
and cumulative distribution function $\Phi_{\mu,\sigma}(x) = \Phi\big(\frac{x-\mu}{\sigma}\big)$.
The notation $X \sim N(\mu, \sigma^2)$ means the random variable $X$ is normally distributed with mean $\mu$ and variance $\sigma^2$.
If $Y \sim N(0, 1)$ (i.e. $Y$ has the standard normal distribution), then $\mu + \sigma Y \sim N(\mu, \sigma^2)$. Thus, given a method for generating samples $Y_1, Y_2, \ldots$ from the standard normal distribution, we can generate samples $X_1, X_2, \ldots$ from $N(\mu, \sigma^2)$. It therefore suffices to consider methods for sampling from $N(0, 1)$. [2]
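As a small illustration of this location-scale relation (not part of the original text), the following Python/NumPy sketch draws standard normal samples and rescales them; the mean and standard deviation are purely illustrative values:
```python
import numpy as np

rng = np.random.default_rng(seed=0)       # reproducible generator

mu, sigma = 1.5, 0.8                      # illustrative mean and standard deviation
y = rng.standard_normal(100_000)          # Y_1, Y_2, ... ~ N(0, 1)
x = mu + sigma * y                        # X_i = mu + sigma * Y_i ~ N(mu, sigma^2)

print(round(x.mean(), 3), round(x.std(), 3))   # sample mean/std should be close to mu and sigma
```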
1.1.7 A d-dimensional Normal distribution
This is characterised by a d-vector $\mu$ and a $d \times d$ covariance matrix $\Sigma$, and is abbreviated as $N(\mu, \Sigma)$. If $\Sigma$ is positive definite (i.e. $x^T \Sigma x > 0$ for all $x \neq 0$ in $\mathbb{R}^d$), then the normal distribution $N(\mu, \Sigma)$ has density
$$\phi_{\mu,\Sigma}(x) = \frac{1}{(2\pi)^{d/2}\, |\Sigma|^{1/2}} \exp\Big(-\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\Big), \qquad x \in \mathbb{R}^d,$$
with $|\Sigma|$ the determinant of $\Sigma$.
The standard d-dimensional normal distribution $N(0, I_d)$, with $I_d$ the $d \times d$ identity matrix, is the special case with density
$$\frac{1}{(2\pi)^{d/2}} \exp\Big(-\frac{1}{2}\, x^T x\Big).$$
If $X \sim N(\mu, \Sigma)$ (i.e. the random vector $X$ has a multivariate normal distribution), then its $i$th component $X_i$ has distribution $N(\mu_i, \sigma_i^2)$ with $\sigma_i^2 = \Sigma_{ii}$. The $i$th and $j$th components have covariance $\mathrm{cov}(X_i, X_j) = E[(X_i - \mu_i)(X_j - \mu_j)] = \Sigma_{ij}$, which justifies calling $\Sigma$ the covariance matrix. The correlation between $X_i$ and $X_j$ is given by $\rho_{ij} = \Sigma_{ij} / (\sigma_i \sigma_j)$.
If a $d \times d$ symmetric matrix $\Sigma$ is positive semi-definite but not positive definite, then the rank of $\Sigma$ is less than $d$, $\Sigma$ fails to be invertible, and there is no normal density with covariance matrix $\Sigma$. In this case we can define the normal distribution $N(\mu, \Sigma)$ as the distribution of $X = \mu + AZ$ with $Z \sim N(0, I_d)$, for any $d \times d$ matrix $A$ satisfying $AA^T = \Sigma$. The resulting distribution is independent of which such $A$ is chosen. The random vector $X$ does not have a density in $\mathbb{R}^d$, but if $\Sigma$ has rank $k$ then one can find $k$ components of $X$ with a multivariate normal density in $\mathbb{R}^k$.
Any linear transformation of a normal vector is again normal: $X \sim N(\mu, \Sigma) \Rightarrow AX \sim N(A\mu, A\Sigma A^T)$ for any d-vector $\mu$, any $d \times d$ covariance matrix $\Sigma$, and any $k \times d$ matrix $A$, for any $k$. [2]
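To make the construction $X = \mu + AZ$ with $AA^T = \Sigma$ concrete, here is a hedged Python/NumPy sketch (the mean vector and covariance matrix are illustrative; the Cholesky factor is one possible choice of $A$):
```python
import numpy as np

rng = np.random.default_rng(seed=1)

mu = np.array([0.02, 0.01, 0.03])                 # illustrative mean vector
Sigma = np.array([[0.04, 0.02, 0.01],             # illustrative positive-definite covariance matrix
                  [0.02, 0.09, 0.03],
                  [0.01, 0.03, 0.16]])

A = np.linalg.cholesky(Sigma)                     # lower-triangular A with A @ A.T == Sigma
Z = rng.standard_normal((100_000, 3))             # rows are independent N(0, I_3) vectors
X = mu + Z @ A.T                                  # each row is distributed as N(mu, Sigma)

print(np.cov(X, rowvar=False).round(2))           # empirical covariance, close to Sigma
```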
1.1.8 Log-normal Distribution
In simple terms, a random variable $X$ is said to have a lognormal distribution if its logarithm has a normal distribution, i.e. $\ln X \sim N(\mu, \sigma^2)$. An important property of this distribution is that it does not take values less than 0.
A lognormal distribution is very much what the name suggests, "log-normal". Imagine a variable $Y$ that is the exponential of some input variable $X$, where $X$ itself is normally distributed, e.g. $Y = k\, e^{X}$. Taking the natural logarithm of $Y$ then gives back a normally distributed quantity.
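A quick numerical sketch of this definition (Python/NumPy, with illustrative parameters for the underlying normal):
```python
import numpy as np

rng = np.random.default_rng(seed=2)

mu, sigma = 0.0, 0.25                                # illustrative parameters of the underlying normal
x = np.exp(rng.normal(mu, sigma, 100_000))           # X = exp(N(mu, sigma^2)) is lognormally distributed

print(bool(x.min() > 0))                             # lognormal samples are always positive
print(round(np.log(x).mean(), 3))                    # ln(X) has sample mean close to mu
```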
1.1.9 Mathematical Expectation
Let $(\Omega, \Sigma, P)$ be a probability space. If $X \in L^1(\Omega, \Sigma, P)$, then
$$E(X) = \int_{\Omega} X(\omega)\, dP(\omega)$$
is called the mathematical expectation or expected value or mean of $X$.
The map $X \mapsto E(X)$, $X \in L^1(\Omega, \Sigma, P)$, has the following properties:
i. $E$ is linear: $E(\alpha X + \beta Y) = \alpha E(X) + \beta E(Y)$ for all $X, Y \in L^1(\Omega, \Sigma, P)$ and $\alpha, \beta \in \mathbb{R}$.
ii. Markov's inequality holds: let $X \in L^1(\Omega, \Sigma, P)$ be $\mathbb{R}$-valued. Then
$$P(\{\omega \in \Omega : |X(\omega)| \geq \lambda\}) \leq \frac{E(|X|)}{\lambda} = \frac{\|X\|_1}{\lambda}, \qquad \lambda > 0.$$
iii. $E$ is positivity preserving, i.e. if $X$ is real-valued, lies in $L^1(\Omega, \Sigma, P)$ and $X \geq 0$, then $E(X) \geq 0$.
iv. Chebyshev's inequality holds: let $X \in L^2(\Omega, \Sigma, P)$ be an $\mathbb{R}$-valued random variable with mean $E(X) = \mu$ and variance $\sigma_X^2$. Then for $\lambda > 0$,
$$P(\{\omega \in \Omega : |X(\omega) - \mu| \geq \lambda\}) \leq \frac{\sigma_X^2}{\lambda^2}.$$
v. Jensen's inequality holds, i.e. if $X$ is real-valued and lies in $L^1(\Omega, \Sigma, P)$, $\varphi : \mathbb{R} \to \mathbb{R}$ is convex and $\varphi(X) \in L^1(\Omega, \Sigma, P)$, then $E(\varphi(X)) \geq \varphi(E(X))$.
1.1.10 Variance and covariance of random variables
Let $(\Omega, \Sigma, P)$ be a probability space and $X$ an $\mathbb{R}$-valued random variable on $\Omega$ such that $X \in L^2(\Omega, \Sigma, P)$. Then $X$ is automatically in $L^1(\Omega, \Sigma, P)$ (because, in general, if $p \leq q$ then $L^q(\Omega, \Sigma, P) \subseteq L^p(\Omega, \Sigma, P)$ for all $p, q \in [1, \infty) \cup \{\infty\}$, since $P$ is a finite measure).
The variance of $X$ is defined as
$$Var(X) = E\big((X - E(X))^2\big).$$
The number $\sigma_X = \sqrt{Var(X)}$ is called the standard deviation of $X$. Now let $X, Y \in L^2(\Omega, \Sigma, P)$. Then the covariance of $X$ and $Y$ is given by
$$Cov(X, Y) = E\big((X - E(X))(Y - E(Y))\big)$$
and the correlation is given by
$$corr(X, Y) = \rho(X, Y) = \frac{Cov(X, Y)}{\sqrt{Var(X)\, Var(Y)}}.$$
Two random variables $X, Y$ are called uncorrelated if $Cov(X, Y) = 0$.
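The following short Python sketch (an illustrative construction, not from the thesis) estimates the covariance and correlation just defined from simulated data:
```python
import numpy as np

rng = np.random.default_rng(seed=3)

z1, z2 = rng.standard_normal((2, 100_000))
x = z1                                   # X ~ N(0, 1)
y = 0.6 * z1 + 0.8 * z2                  # Y ~ N(0, 1), built so that corr(X, Y) = 0.6

cov_xy = np.mean((x - x.mean()) * (y - y.mean()))    # sample version of E[(X - EX)(Y - EY)]
corr_xy = cov_xy / (x.std() * y.std())               # Cov(X, Y) / (sigma_X * sigma_Y)

print(round(cov_xy, 3), round(corr_xy, 3))           # both close to 0.6
```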
1.1.11 Characteristic function
Let $(\Omega, \Sigma, P)$ be a probability space and $X \in L^0(\Omega; \mathbb{R})$. Define the $\mathbb{C}$-valued function $\varphi$ on $\mathbb{R}$ by
$$\varphi(t) = E(e^{itX}) = E(\cos tX) + i\, E(\sin tX).$$
Then $\varphi$ is called the characteristic function of $X$.
Note: $\frac{\varphi'(0)}{i} = E(X)$, provided $E(X)$ exists.
1.1.12 Stochastic process
A stochastic process $X$ indexed by a set $J$ is a family $X = \{X(t) : t \in J\}$ of members of $L(\Omega; \mathbb{R}^d)$. The value of $X(t)$ at $\omega \in \Omega$ is written as $X(t, \omega)$.
1.1.13 Sample Paths
If $X$ is a stochastic process and $\omega \in \Omega$, then the map $t \mapsto X(t, \omega) \in \mathbb{R}^d$ is called a sample path or trajectory of $X$.
1.1.14 Brownian Motion
Let $Z = \{Z(t) \in L(\Omega; \mathbb{R}^d) : t \in \Delta\}$, where $\Delta \subseteq \mathbb{R}_+ = [0, \infty)$, be an $\mathbb{R}^d$-valued stochastic process on $\Omega$ with the following properties:
i. $Z(0) = 0$ almost surely.
ii. $Z(t) - Z(s)$ is an $N(0, (t - s)I)$ random vector for all $t \geq s \geq 0$, where $I$ is the $d \times d$ identity matrix.
iii. $Z$ has stochastically independent increments, i.e. for $0 < t_1 < t_2 < \cdots < t_n$ the random vectors $Z(t_1), Z(t_2) - Z(t_1), \ldots, Z(t_n) - Z(t_{n-1})$ are stochastically independent.
iv. $Z$ has continuous sample paths $t \mapsto Z(t, \omega)$ for fixed $\omega \in \Omega$.
Then $Z$ is called the standard d-dimensional Brownian motion or d-dimensional Wiener process.
For a d-dimensional Brownian motion $Z(t) = (Z_1(t), \ldots, Z_d(t))$ we have the following:
i. $E(Z_j(t)) = 0$, $j = 1, 2, \ldots, d$
ii. $E(Z_j(t)^2) = t$, $j = 1, 2, \ldots, d$
iii. $E(Z_j(t) Z_k(s)) = \delta_{jk}\, (t \wedge s) = \delta_{jk} \min\{t, s\}$, for $t, s \in \Delta$, $j, k = 1, 2, \ldots, d$
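A minimal simulation sketch of these properties (Python/NumPy; the horizon, step count and path count are illustrative):
```python
import numpy as np

rng = np.random.default_rng(seed=4)

T, n_steps, n_paths = 1.0, 500, 10_000
dt = T / n_steps

# Increments Z(t + dt) - Z(t) are independent N(0, dt); cumulative sums give sample paths.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
Z = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)

print(round(Z[:, -1].mean(), 3))   # E[Z(T)] = 0
print(round(Z[:, -1].var(), 3))    # E[Z(T)^2] = T = 1
```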
1.1.15 Filtration
Let $(\Omega, \Sigma, P)$ be a probability space and consider $F(T) = \{\Sigma_t : t \in T\}$, a family of σ-subalgebras of $\Sigma$ with the following properties:
i. For each $t \in T$, $\Sigma_t$ contains all the $P$-null members of $\Sigma$.
ii. $\Sigma_s \subseteq \Sigma_t$ whenever $t \geq s$, $s, t \in T$.
Then $F(T)$ is called a filtration of $\Sigma$, and $(\Omega, \Sigma, F(T), P)$ is called a filtered probability space or stochastic basis.
We interpret $\Sigma_t$ as the information available at time $t$, so that $F(T)$ describes the flow of information.
1.1.16 Adaptedness
A stochastic process $X = \{X(t) \in L(\Omega; \mathbb{R}^n) : t \in T\}$ is said to be adapted to the filtration $F(T) = \{\Sigma_t : t \in T\}$ if $X(t)$ is measurable with respect to $\Sigma_t$ for each $t \in T$. It is plain that every stochastic process is adapted to its natural filtration.
1.1.17 Conditional expectation
Let $(\Omega, \Sigma, P)$ be a probability space, $X$ a real random variable in $L^1(\Omega, \Sigma, P)$ and $\mathcal{G}$ a σ-subalgebra of $\Sigma$. Then the conditional expectation of $X$ given $\mathcal{G}$, written $E(X \mid \mathcal{G})$, is defined as any random variable $Y$ such that:
(i) $Y$ is measurable with respect to $\mathcal{G}$, i.e. for any $A \in B(\mathbb{R})$, the set $Y^{-1}(A) \in \mathcal{G}$;
(ii) $\int_B X(\omega)\, dP(\omega) = \int_B Y(\omega)\, dP(\omega)$ for arbitrary $B \in \mathcal{G}$.
A random variable $Y$ which satisfies (i) and (ii) is called a version of $E(X \mid \mathcal{G})$.
1.1.18 Martingale
Let $X = \{X(t) \in L^1(\Omega, \Sigma, P) : t \in T\}$ be a real-valued stochastic process on a filtered probability space $(\Omega, \Sigma, F(T), P)$. Then $X$ is a
1. submartingale, if $E(X(t) \mid \Sigma_s) \geq X(s)$ a.s. whenever $t \geq s$;
2. supermartingale, if $E(X(t) \mid \Sigma_s) \leq X(s)$ a.s. whenever $t \geq s$;
3. martingale, if $X$ is both a submartingale and a supermartingale, i.e. $E(X(t) \mid \Sigma_s) = X(s)$ a.s. whenever $t \geq s$.
1.1.19 Quadratic variation
Let $X$ be a stochastic process on a filtered probability space $(\Omega, \Sigma, F(T), P)$. Then the quadratic variation of $X$ on $[0, t]$, $t > 0$, is the stochastic process $\langle X \rangle$ defined by
$$\langle X \rangle(t) = \lim_{|P| \to 0} \sum_{j=0}^{n-1} |X(t_{j+1}) - X(t_j)|^2,$$
where $P = \{t_0, t_1, \ldots, t_n\}$ is any partition of $[0, t]$, i.e. $0 = t_0 < t_1 < \cdots < t_n = t$, and $|P| = \max_{0 \leq j \leq n-1} |t_{j+1} - t_j|$.
Note: If $X$ is a differentiable stochastic process, then $\langle X \rangle = 0$.
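A small numerical check of these two facts (Python/NumPy sketch; the partition size is illustrative):
```python
import numpy as np

rng = np.random.default_rng(seed=5)

t, n = 1.0, 200_000                        # fine partition of [0, t]
dz = rng.normal(0.0, np.sqrt(t / n), n)    # Brownian increments over the partition

qv = np.sum(dz ** 2)                       # sum of squared increments |Z(t_{j+1}) - Z(t_j)|^2
print(round(qv, 3))                        # close to t = 1, since <Z>(t) = t for Brownian motion

x = np.sin(np.linspace(0.0, t, n))         # a differentiable (deterministic) path
print(round(np.sum(np.diff(x) ** 2), 6))   # tends to 0 as the mesh shrinks
```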
1.1.20 Stochastic differential equations
These are differential equations in which one or more terms is a stochastic process, resulting in a solution which is itself a stochastic process. SDEs are used to model diverse phenomena such as fluctuating stock prices or physical systems subject to thermal fluctuations. They are equations of the form
$$dX(t) = g(t, X(t))\, dt + f(t, X(t))\, dW(t)$$
with initial condition $X(t) = x$, where $W$ denotes a Wiener process (standard Brownian motion).
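A common way to simulate such an SDE numerically is the Euler-Maruyama scheme; the Python/NumPy sketch below uses illustrative coefficients $g(t, x) = 0.05x$ and $f(t, x) = 0.2x$, which are placeholders rather than coefficients taken from the thesis:
```python
import numpy as np

rng = np.random.default_rng(seed=6)

def g(t, x):                  # drift coefficient (illustrative choice)
    return 0.05 * x

def f(t, x):                  # diffusion coefficient (illustrative choice)
    return 0.2 * x

T, n_steps, x0 = 1.0, 1_000, 100.0
dt = T / n_steps

t, x = 0.0, x0
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt))           # Wiener increment dW ~ N(0, dt)
    x = x + g(t, x) * dt + f(t, x) * dw         # X_{k+1} = X_k + g*dt + f*dW
    t += dt

print(round(x, 2))            # one simulated value of X(T)
```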
1.1.21 Ito formula and lemma
Let $(\Omega, \Sigma, F(T), P)$ be a filtered probability space, $X$ an adapted stochastic process on $(\Omega, \Sigma, F(T), P)$ with quadratic variation $\langle X \rangle$, and $U \in C^{1,2}([0, \infty) \times \mathbb{R})$. Then
$$U(t, X(t)) = U(s, X(s)) + \int_s^t \frac{\partial U}{\partial t}(\tau, X(\tau))\, d\tau + \int_s^t \frac{\partial U}{\partial x}(\tau, X(\tau))\, dX(\tau) + \frac{1}{2} \int_s^t \frac{\partial^2 U}{\partial x^2}(\tau, X(\tau))\, d\langle X \rangle(\tau),$$
which may be written as
$$dU(t, X(t)) = \frac{\partial U}{\partial t}(t, X(t))\, dt + \frac{\partial U}{\partial x}(t, X(t))\, dX(t) + \frac{1}{2} \frac{\partial^2 U}{\partial x^2}(t, X(t))\, d\langle X \rangle(t).$$
The equation above is referred to as the Ito formula. If $X$ satisfies the stochastic differential equation (SDE)
$$dX(t) = g(t, X(t))\, dt + f(t, X(t))\, dW(t), \qquad X(t) = x,$$
then
$$dU(t, X(t)) = g_U(t, X(t))\, dt + f_U(t, X(t))\, dW(t), \qquad U(t, X(t)) = U(t, x),$$
where
$$g_U(t, x) = \frac{\partial U}{\partial t}(t, x) + g(t, x)\frac{\partial U}{\partial x}(t, x) + \frac{1}{2}(f(t, x))^2 \frac{\partial^2 U}{\partial x^2}(t, x), \qquad f_U(t, x) = f(t, x)\frac{\partial U}{\partial x}(t, x).$$
We obtain a particular case of the Ito formula, called the Ito lemma, if we take $X = Z$, by setting $g \equiv 0$ and $f \equiv 1$ on $T \times \mathbb{R}$. Then
$$dU(t, Z(t)) = \Big[\frac{\partial U}{\partial t}(t, Z(t)) + \frac{1}{2}\frac{\partial^2 U}{\partial x^2}(t, Z(t))\Big] dt + \frac{\partial U}{\partial x}(t, Z(t))\, dZ(t).$$
The equation above is referred to as the Ito lemma.
Table 1.1: Ito Multiplication Table
  ×        dt    dZ(t)
  dt       0     0
  dZ(t)    0     dt
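As a sanity check of the transformation $(g, f) \mapsto (g_U, f_U)$ above, the following SymPy sketch applies it symbolically to the illustrative choice $U(t, x) = \ln x$, $g(t, x) = \mu x$, $f(t, x) = \sigma x$ (the geometric Brownian motion case used later in the thesis); it recovers the familiar drift $\mu - \sigma^2/2$ and diffusion $\sigma$:
```python
import sympy as sp

t, x, mu, sigma = sp.symbols('t x mu sigma', positive=True)

U = sp.log(x)            # U(t, x) = ln(x)
g = mu * x               # drift coefficient (illustrative GBM-style choice)
f = sigma * x            # diffusion coefficient (illustrative GBM-style choice)

# Transformed coefficients from the Ito formula:
# g_U = dU/dt + g*dU/dx + (1/2)*f^2*d^2U/dx^2,   f_U = f*dU/dx
g_U = sp.simplify(sp.diff(U, t) + g * sp.diff(U, x) + sp.Rational(1, 2) * f**2 * sp.diff(U, x, 2))
f_U = sp.simplify(f * sp.diff(U, x))

print(g_U)   # mu - sigma**2/2
print(f_U)   # sigma
```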
1.1.22 Gamma distribution
The probability density function $g_\Gamma$ of a gamma-distributed variable with shape parameter $\alpha$ and scale parameter $\beta$ is given by
$$g_\Gamma(x; \alpha, \beta) = \frac{e^{-x/\beta}\, (x/\beta)^{\alpha - 1}}{\beta\, \Gamma(\alpha)}, \qquad x \geq 0, \quad \alpha, \beta > 0.$$
The corresponding cumulative distribution function $G_\Gamma$ is defined as
$$G_\Gamma(x; \alpha, \beta) = \int_0^x g_\Gamma(u; \alpha, \beta)\, du = \frac{1}{\Gamma(\alpha)} \int_0^{x/\beta} u^{\alpha - 1} e^{-u}\, du = \frac{\gamma(\alpha, x/\beta)}{\Gamma(\alpha)},$$
where $\gamma$ denotes the lower incomplete gamma function and
$$\Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\, dt.$$
The $i$th moment of the gamma distribution is given by
$$E[Y^i] = \frac{\beta^i\, \Gamma(i + \alpha)}{\Gamma(\alpha)}.$$
The $i$th moment of the inverse gamma distribution can be obtained for $i < \alpha$; for $i \geq \alpha$ the moments are infinite. If $Y$ is reciprocally gamma distributed, then
$$E[Y^i] = \frac{1}{\beta^i (\alpha - 1)(\alpha - 2) \cdots (\alpha - i)}.$$
Let $g_R$ be the inverse gamma probability density function. Then
$$g_R(x; \alpha, \beta) = \frac{g_\Gamma(1/x; \alpha, \beta)}{x^2}, \qquad x > 0, \quad \alpha, \beta > 0.$$
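A Monte Carlo check of the moment formulas above (Python/NumPy sketch; the shape and scale parameters are illustrative):
```python
import math
import numpy as np

rng = np.random.default_rng(seed=7)

alpha, beta = 5.0, 2.0                       # illustrative shape and scale parameters
y = rng.gamma(alpha, beta, 1_000_000)        # Gamma(alpha, beta) samples

i = 2
formula = beta**i * math.gamma(i + alpha) / math.gamma(alpha)   # E[Y^i] = beta^i * Gamma(i+alpha) / Gamma(alpha)
print(round(np.mean(y**i), 1), round(formula, 1))               # both near 120.0

# Reciprocal (inverse) gamma moment, valid for i < alpha:
inv_formula = 1.0 / (beta**i * (alpha - 1) * (alpha - 2))       # 1 / (beta^i (alpha-1)...(alpha-i))
print(round(np.mean((1.0 / y)**i), 4), round(inv_formula, 4))   # both near 1/48
```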
1.1.23 Risk-neutral Probabilities
These are probabilities for future outcomes adjusted for risk, which are then used to compute expected asset values. The benefit of this risk-neutral pricing approach is that once the risk-neutral probabilities are calculated, they can be used to price every asset based on its expected payoff. These theoretical risk-neutral probabilities differ from actual real-world probabilities; if the latter were used, the expected value of each security would need to be adjusted for its individual risk profile. A key assumption in computing risk-neutral probabilities is the absence of arbitrage. The concept of risk-neutral probabilities is widely used in pricing derivatives.
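As a minimal worked example of risk-neutral pricing (a one-period binomial model with illustrative numbers, not a model discussed in this thesis), the sketch below computes the risk-neutral up-probability and the resulting call price as a discounted expectation:
```python
import math

# One-period binomial model (illustrative numbers): the stock moves from S0 to u*S0 or d*S0.
S0, u, d, r, T = 100.0, 1.2, 0.8, 0.05, 1.0
K = 100.0                                    # strike of a European call

q = (math.exp(r * T) - d) / (u - d)          # risk-neutral up-probability (no-arbitrage requires d < e^{rT} < u)
payoff_up = max(u * S0 - K, 0.0)
payoff_down = max(d * S0 - K, 0.0)

price = math.exp(-r * T) * (q * payoff_up + (1 - q) * payoff_down)   # discounted risk-neutral expectation
print(round(q, 4), round(price, 2))
```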
1.2 Overview
A financial derivative is a contract whose price is dependent upon or derived from one or more underlying assets. The underlying assets could be stocks, commodities, currencies, etc. An option is a financial derivative that gives the holder the right, but not the obligation, to buy or sell an underlying asset at a certain date and price. Options were first traded on the Chicago Board Options Exchange on April 26th, 1973. A basket option is a type of derivative security where the underlying asset is a group of commodities, securities or currencies. Since the early 1990s, basket options have been used as a tool for reducing risks (hedging).
The pricing and hedging of basket options is difficult, due to the number of state variables. The usual methods employed in pricing options, such as the Black and Scholes (1973) model, cannot be applied directly to basket options. A single underlying asset is assumed to follow a geometric Brownian motion and is therefore log-normally distributed; the problem arises from the fact that a sum of correlated log-normally distributed random variables is not log-normal, which makes it difficult to price basket options and to obtain a closed-form pricing formula and hedging ratios. Some practitioners nevertheless treat the basket itself as log-normally distributed. However, this leads to an inconsistency in the basic assumption: the distribution of a weighted average of correlated log-normals is anything but log-normal. Another difficulty that prevents the price of basket options from being known exactly is the correlation structure involved in the basket. Correlation is observed to be volatile over time, as is the volatility. A great deal of research has been done to overcome these difficulties, and several methods have been proposed, comprising numerical methods and analytical approximations.
Instead of buying an option on each underlying asset, one may buy a single option on all the underlying assets (a basket option), as this will be cheaper, since there is only one option to monitor and exercise.
In the second chapter we give a literature review on the pricing of basket options, highlighting some of the important contributions.
In the third chapter we discuss financial derivatives and basket options, so as to have a clear idea of the financial market. We provide the definition of an option, the types of options, some examples of financial derivatives, the main categories of traders, and some examples of basket options.
In the fourth chapter we discuss the pricing of basket options and the methods used in the pricing, which is the main work of this thesis. The seller of a financial derivative, in particular an option, requires compensation for the risk he is bearing by selling the option to the buyer. The buyer must pay a certain amount, called a premium, in order to get the right to buy or sell the underlying asset, and that is what is referred to as the price of the option. Several factors affect the pricing of a basket option, including the initial prices, the volatilities of the underlying assets, the correlations, etc.
Various methods have been used in pricing basket options. These include Monte Carlo simulation (assuming that the assets follow correlated geometric Brownian motion processes), first suggested by Boyle (1977). Monte Carlo methods are suitable numerical methods for pricing options that do not have an analytical closed-form solution, especially basket options. Cox and Ross (1976) noted that if a riskless hedge can be formed, the option value is the discounted risk-neutral expectation of its payoff, that is, the price can be represented by an integral; this makes it possible to estimate the price of the option by Monte Carlo methods, which is done by simulating many independent paths of the underlying assets and taking the discounted mean of the generated payoffs. We also have tree-based methods (in the case of few state variables) and analytical approximations such as the Taylor approximation, the reciprocal gamma approximation, the log-normal approximation, etc.
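To make the Monte Carlo approach described above concrete, here is a hedged Python/NumPy sketch of a European basket call priced under correlated geometric Brownian motion; all market inputs (prices, volatilities, correlations, weights, strike) are illustrative placeholders rather than data from the thesis:
```python
import numpy as np

rng = np.random.default_rng(seed=8)

# Illustrative inputs: 3 assets, equal weights, correlated lognormal (GBM) terminal prices.
S0 = np.array([100.0, 95.0, 105.0])          # initial prices
sigma = np.array([0.20, 0.25, 0.15])         # volatilities
w = np.array([1/3, 1/3, 1/3])                # basket weights
corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
r, T, K = 0.05, 1.0, 100.0                   # risk-free rate, maturity, strike
n_paths = 200_000

L = np.linalg.cholesky(corr)                 # correlate the Brownian drivers
Z = rng.standard_normal((n_paths, 3)) @ L.T  # correlated standard normals

# Risk-neutral terminal prices: S_i(T) = S_i(0) * exp((r - 0.5*sigma_i^2)*T + sigma_i*sqrt(T)*Z_i)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

basket = ST @ w                              # weighted basket value at maturity
payoff = np.maximum(basket - K, 0.0)         # European basket call payoff
price = np.exp(-r * T) * payoff.mean()       # discounted mean of simulated payoffs
half_width = 1.96 * np.exp(-r * T) * payoff.std() / np.sqrt(n_paths)

print(round(price, 2), '+/-', round(half_width, 2))
```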
In the last chapter, we give some applications of the log-normal approximation by considering foreign exchange basket options, and give details on how they are priced.