Often in financial trading, instruments are not linear functions of their underlyings. A forward, for instance, is a linear function of the underlying spot, so a linear approximation of its PnL is straightforward. Options, however, are non-linear and clearly need delta and/or delta-gamma based non-linear parametric approximations.
The formula for the delta approximation is given by,
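As a sketch (the symbols here are my own, since the original notation is not reproduced): for a single underlying with spot move $\Delta S$ and option delta $\delta$, the first-order PnL approximation is

$$\Delta P \approx \delta \, \Delta S$$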
For the delta-gamma approximation, first a note on gamma: it is the first derivative of delta with respect to a change in the underlying, i.e. the second derivative of the option price. It essentially captures the non-linearity through the second-order term of the Taylor series expansion.
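A sketch of the delta-gamma approximation, adding the second-order Taylor term (notation assumed as above):

$$\Delta P \approx \delta \, \Delta S + \tfrac{1}{2}\gamma \,(\Delta S)^2$$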
The Value-at-Risk for the delta-approximated portfolio can be calculated from,
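A minimal sketch, assuming normally distributed returns over the horizon with volatility $\sigma$ and standard normal quantile $z_\alpha$ at confidence level $\alpha$ (the exact convention used here is an assumption):

$$\mathrm{VaR}_\alpha \approx |\delta|\, S \,\sigma\, z_\alpha$$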
Given the increase in volatility-sensitive contracts traded, it is imperative to include a vega term in the Taylor series, so the formula becomes,
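Sketching the full expansion with the vega term added (where $\nu$, vega, is the sensitivity to a change $\Delta\sigma$ in implied volatility; symbols assumed):

$$\Delta P \approx \delta \, \Delta S + \tfrac{1}{2}\gamma \,(\Delta S)^2 + \nu \, \Delta\sigma$$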
So far we have seen PnL approximations and VaR calculations based on the delta, gamma and vega sensitivities. Note that we have not included rho and theta, primarily because the portfolio is not very sensitive to them over short horizons. We will now see the formulas for delta, gamma and vega to plug into the equations above,
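As a concrete sketch, assuming the Black-Scholes model for a European call (the function and variable names are my own), the three Greeks can be computed with only the standard library:

```python
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    # Standard normal density
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bs_greeks(S, K, r, sigma, T):
    """Black-Scholes delta, gamma and vega for a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    delta = norm_cdf(d1)                          # dV/dS
    gamma = norm_pdf(d1) / (S * sigma * sqrt(T))  # d^2V/dS^2
    vega = S * norm_pdf(d1) * sqrt(T)             # dV/dsigma (per unit of vol)
    return delta, gamma, vega
```

For an at-the-money call (S = K = 100, r = 0, sigma = 0.2, T = 1) this gives a delta just above 0.5, which matches the usual intuition.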
Note that in the gamma equation, make sure to substitute and NOT
In the universe there are a great number of black holes: regions of such high density and gravitational pull that even light cannot escape them. When we trace the formation of our universe back to the refined Hubble-based estimate of 13.8 billion years ago, the universe was in fact like a black hole: a tiny, extremely condensed state in which the four fundamental forces (gravity, electromagnetism, and the strong and weak nuclear forces) were packed together. When that state rapidly inflated, a whole new universe expanded, and the base hydrogen and helium atoms formed, from which the other gases were later derived. By tracing back the light emitted since the universe's formation we can estimate its age, but what did it take to have that tiny, dense state of enormous gravitational and electromagnetic force in the first place? How did it happen? Does it mean there is a multiverse, multiple universes of which ours is one? Or was there a previous universe that was absorbed by one giant black hole? We will see more about the theory and what we have theoretically observed so far.
Alright, let's see what happens when a star like the Sun starts to run out of fuel. We all know the core of the Sun needs hydrogen to burn; hydrogen, along with helium, is one of the primordial gases from which the heavier elements in the universe are formed. When a star begins to die and its hydrogen runs out, it fuses helium to produce heavier elements such as lithium and beryllium. During this time it inflates to a huge size, and the carbon it generates eventually fuses toward iron. At one stage, when all the helium is finished, the star will either shed its layers and leave a white dwarf or explode as a supernova and leave behind a neutron star. A neutron star is like a black hole in that its mass is so dense that even a teaspoon of its material has a mass of billions of tonnes.
So, the big bang happened, by current estimates, about 13.77 billion years ago, and our solar system formed roughly 4.6 billion years ago, meaning it took roughly 9.17 billion years for our solar system to form. By then, the universe was already expanding, filled with galaxies, solar systems within them, exoplanets, asteroids, comets, dwarf planets, neutron stars and so on. A picture explains it more beautifully, as below,
The supercluster above, named 'Laniakea', was recently mapped by a group of scientists after observing the movements of nearby galaxies; our Milky Way, as you can see, sits in one corner of this supermassive cluster. There are many, many such superclusters in the universe, each galaxy hosting millions or billions of stars, and each star potentially having its own system of planets orbiting it.
Next we will look at extraterrestrial life forms, most importantly the methane lakes of Titan, which orbits Saturn, and read some interesting news on this.
Below we will see some of the most common distributions whose parameters are estimated by MLE.
Binomial distribution
Poisson distribution
Exponential distribution
First we need to create the likelihood function, then take its logarithm. Taking the first derivative of the log-likelihood yields the score vector, which is set equal to zero to solve for the estimates of the unknown parameters. Second, to find the variance-covariance matrix, take the second derivative and form the information matrix, i.e. the (negative) expected value of the second-order derivatives; the inverse of the information matrix gives the Cramér-Rao lower bound on the variance. From this we can deduce the asymptotic properties of the estimators and hence the standard MLE properties.
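As a concrete sketch of these steps (my own example, using a hypothetical Poisson sample): the score equation has a closed-form solution, the sample mean, and the inverse of the Fisher information gives the Cramér-Rao variance lam/n:

```python
# Hypothetical sample of Poisson counts
data = [2, 3, 1, 4, 2, 0, 3, 2, 1, 2]
n = len(data)

# Score of the Poisson log-likelihood:
# d/dlam of sum(x*ln(lam) - lam - ln(x!)) = sum(x/lam - 1)
def score(lam):
    return sum(x / lam - 1.0 for x in data)

# Setting the score to zero gives the MLE: lam_hat = sample mean
lam_hat = sum(data) / n

# Fisher information I(lam) = n / lam; its inverse is the Cramer-Rao bound
info = n / lam_hat
crlb = 1.0 / info  # = lam_hat / n, the asymptotic variance of lam_hat
```

The score evaluated at the sample mean is exactly zero, confirming that the mean is the maximiser of the log-likelihood.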
Lemma proofs for MLE
Let's suppose we have the function
where,
Now, let's see how to derive the likelihood function for this function, given the unknown parameters
Sometimes we are not sure about the exact distribution of the underlying random variable (r.v.) X, but if we know its first four moments we can approximate the quantile as below,
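A minimal sketch of the Cornish-Fisher expansion, assuming the inputs are the skewness and the excess kurtosis of the standardised r.v. (the function and argument names are my own):

```python
from statistics import NormalDist

def cornish_fisher_quantile(alpha, skew, ex_kurt):
    """Approximate the alpha-quantile of a standardised r.v.
    from its skewness and excess kurtosis (Cornish-Fisher)."""
    z = NormalDist().inv_cdf(alpha)  # standard normal quantile
    return (z
            + (z ** 2 - 1) * skew / 6
            + (z ** 3 - 3 * z) * ex_kurt / 24
            - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)
```

With zero skew and zero excess kurtosis this reduces to the normal quantile; with negative skew the left-tail quantile moves further out, fattening the estimated losses.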
The closer the distribution is to the standard normal, the better the approximation. Note that when substituting the standard normal quantile, make sure to use the negative sign for the left quantile. So, for instance .
Let's look at a few volatility models based on past innovations (meaning past observations). There are models such as the simple moving average, which gives equal weight to past observations, but we will focus on the models most commonly used in industry; one such model is EWMA, the Exponentially Weighted Moving Average.
The general approach is first to calculate the weights,
Then apply them to predict the volatility,
Supposing M is large, we can formulate a more generic recursive solution, termed RiskMetrics, originally modelled by JP Morgan, using lambda values close to 1 to balance volatility persistence against market reactivity.
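A sketch under the usual conventions (lambda = 0.94 as in RiskMetrics for daily data; the variable names are mine): truncated weights w_i = (1 - lam) * lam**(i-1), normalised to sum to 1, and for large M the equivalent recursion sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2:

```python
LAM = 0.94  # RiskMetrics decay factor commonly used for daily data

def ewma_weights(M, lam=LAM):
    """Truncated, normalised EWMA weights, most recent observation first."""
    raw = [(1 - lam) * lam ** i for i in range(M)]
    norm = 1 - lam ** M  # normalise so the truncated weights sum to 1
    return [w / norm for w in raw]

def ewma_variance(returns, lam=LAM):
    """Weighted variance forecast; returns[0] is the most recent return."""
    w = ewma_weights(len(returns), lam)
    return sum(wi * r ** 2 for wi, r in zip(w, returns))

def riskmetrics_update(sigma2, r, lam=LAM):
    """Large-M recursive form: sigma2_t = lam*sigma2 + (1-lam)*r^2."""
    return lam * sigma2 + (1 - lam) * r ** 2
```

The recursion makes the trade-off visible: a lambda close to 1 means high persistence (old variance dominates), while a smaller lambda makes the estimate react faster to the latest return.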
Let's now look at another volatility model, the most widely used and industry-adopted solution: GARCH, Generalised AutoRegressive (the current variance depends on past lagged observations) Conditional Heteroskedasticity (non-constant, i.e. non-homoskedastic, variance across observations).
The formula to model GARCH volatility is given below,
where,
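A sketch of the standard GARCH(1,1) specification (the notation used in this blog's original formula is assumed):

$$\sigma_t^2 = \omega + \alpha\,\epsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2, \qquad \alpha + \beta < 1$$

where $\epsilon_{t-1}$ is the previous innovation (return), and $\alpha + \beta < 1$ ensures the variance is stationary.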
Estimating GARCH
Estimating the GARCH parameters is done in a two-step process,
Estimate the long-run volatility,
Then, estimate the GARCH recursion,
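The two steps can be sketched as follows (standard GARCH(1,1) notation, assumed here): first the long-run variance, then the recursion expressed around it:

$$V_L = \frac{\omega}{1 - \alpha - \beta}, \qquad \sigma_t^2 = \gamma V_L + \alpha\,\epsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2, \quad \gamma = 1 - \alpha - \beta$$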
Now the parameters can be estimated using maximum likelihood estimation, given that the past innovations are i.i.d.
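A minimal sketch of the Gaussian log-likelihood used to fit GARCH(1,1); the parameter names and the simulated data are my own assumptions. An optimiser would minimise this negative log-likelihood over (omega, alpha, beta):

```python
import random
from math import log, pi, sqrt

def garch_nll(omega, alpha, beta, returns):
    """Negative Gaussian log-likelihood of GARCH(1,1), filtering the
    conditional variance through the return series."""
    # Initialise at the long-run variance when it exists
    sigma2 = omega / (1 - alpha - beta) if alpha + beta < 1 else returns[0] ** 2
    nll = 0.0
    for r in returns:
        nll += 0.5 * (log(2 * pi) + log(sigma2) + r ** 2 / sigma2)
        sigma2 = omega + alpha * r ** 2 + beta * sigma2  # variance recursion
    return nll

# Simulate from a known GARCH(1,1) to sanity-check the likelihood
random.seed(42)
true_w, true_a, true_b = 0.1, 0.1, 0.8
s2, sim = true_w / (1 - true_a - true_b), []
for _ in range(2000):
    r = sqrt(s2) * random.gauss(0, 1)
    sim.append(r)
    s2 = true_w + true_a * r ** 2 + true_b * s2
```

On the simulated series, the likelihood at the true parameters should beat a grossly misspecified constant-variance model, which is exactly what an optimiser exploits when searching the parameter space.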
GARCH Forecasting
With GARCH, you can forecast the volatility using the generalised formula,
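A sketch of the k-step-ahead variance forecast, which decays geometrically from the current conditional variance toward the long-run level (notation assumed as before):

```python
def garch_forecast(omega, alpha, beta, sigma2_now, k):
    """E[sigma^2_{t+k}] = V_L + (alpha+beta)^k * (sigma^2_t - V_L)."""
    v_long = omega / (1 - alpha - beta)  # long-run variance
    return v_long + (alpha + beta) ** k * (sigma2_now - v_long)
```

Each extra step shrinks the gap between the current variance and the long-run variance by a factor of alpha + beta, so the forecast mean-reverts to V_L at a rate governed by the persistence.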
Most of the time we see that returns are mean-reverting, meaning they fluctuate around a long-run average, whereas volatility is highly skewed. The fact is that when returns are high, we do not rush to invest in the underlyings, but when the reverse happens and returns are low, everyone rushes to disinvest and move assets into safe treasury bonds or cash. This relationship has a ripple effect: assets are correlated through systematic factors, so any event that influences the general market will, via CAPM-style reasoning, ripple through other assets, even though the idiosyncratic effects tend to diversify away. Hence volatility modelling is a huge subject that involves studying the pattern of past returns and predicting future volatility. There are a number of existing models; to name a few, EWMA, ARCH, ARMA and GARCH, which forecast future volatility from past historical returns, and parametric models such as Heston and Hagan's Stochastic Alpha, Beta, Rho (SABR), which calibrate to market-traded derivatives and use the market's perception of future volatility. In future blogs we will look at each of these models and discuss the pros and cons of each approach in depth.