Describe the principal methods used by investment banks to compute their Value at Risk to movements in market prices. What are the advantages and limitations of using such measures? 1. Introduction Philippe Jorion defines Value at Risk (VaR) as a model used to “summarise the maximum loss on a portfolio in a given time horizon, within a given confidence level”. VaR is the main method used by financial institutions to measure their exposure to risk. In the world of banking today, risk management is becoming an important subject as banks strive to prevent events such as the LTCM collapse from occurring again. There are several types of risk that banks face: operational, market, credit, liquidity and business risks.
There are four steps to calculating VaR. First, the risk manager must collect all the data regarding previous losses and information about the risk factors involved. Second, the risk factors must be identified; there are four main risk factors employed in a VaR model: movements in interest rates, equity prices, commodity prices and currency prices. Third, the risk manager must choose the appropriate method of calculating VaR: Delta-Normal, Historical Simulation, or Monte Carlo Simulation. Finally, when all the information and data have been input, the VaR can be calculated.
This essay will focus on how the principal methods of VaR are calculated and will explain the advantages and disadvantages of each. 2. Delta Normal The Delta-Normal method, also known as the variance-covariance approach, relies on the assumption that the driving risk factors are normally distributed. If all positions are linear in the underlying risks, the portfolio standard deviation (and hence the portfolio VaR) is a simple linear transformation of the individual risk factors. Non-linear instruments, such as bonds and options, are handled by replacing the true positions with linear approximations.
To illustrate this method, first let us take an instrument whose value depends on a single underlying risk factor, S. The value of the portfolio at the initial point is V0 = V(S0). Then we define Δ0 = ∂V/∂S as the first partial derivative, or the portfolio sensitivity to changes in prices, evaluated at the current position V0. The potential loss in value is computed as dV = Δ0 × dS. Given the assumption that the distribution is normal, the portfolio VaR can be derived from the product of the exposure and the VaR of the underlying variable: VaR = |Δ0| × (α × σ × S0), where α is the standard normal deviate corresponding to the specified confidence level, e.g., 1.645 for a 95 percent confidence level, and σ is the standard deviation of rates of changes in the price. (Jorion 2001)
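The single-factor formula VaR = |Δ0| × α × σ × S0 can be sketched in a few lines of Python. All the input numbers here (sensitivity, spot level, volatility) are hypothetical and chosen only for illustration; they do not come from the essay:

```python
# Delta-normal VaR for a single linear position -- illustrative sketch.
# The delta, spot and volatility below are hypothetical placeholders.

ALPHA_95 = 1.645   # standard normal deviate for a 95% confidence level
delta = 0.6        # dV/dS: portfolio sensitivity to the risk factor
spot = 100.0       # current level of the risk factor, S0
sigma = 0.02       # daily standard deviation of rates of change in S

# VaR = |delta| * alpha * sigma * S0
var = abs(delta) * ALPHA_95 * sigma * spot
print(f"1-day 95% VaR: {var:.3f}")
```

The product α × σ × S0 is the VaR of the underlying variable itself; multiplying by |Δ0| scales it to the position's exposure.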
Here is an example calculating the risk of a single cash position: There is a USD-based firm with one asset: JPY 14 billion in cash, worth USD 100 million at the prevailing exchange rate. What is the 95% worst-case loss over a 1-day period? We are given the information that, according to the RiskMetrics® data, the daily price volatility of the JPY/USD exchange rate is 1.78%, using a 95% confidence level. (Note: This implies that 1 standard deviation equals 1.78%/1.65 = 1.08%.) So the risk calculated is $100 million × 1.78% = $1.78 million, which means that the 95% worst-case loss due to adverse movements in the JPY/USD rate over 1 day would be $1.78 million (or, we have a 5% chance of losing $1.78 million or more overnight).
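The single cash position example above can be reproduced directly; the figures (USD 100 million position, 1.78% daily volatility at the 95% level) are those given in the text:

```python
# Worst-case 1-day loss for the JPY cash position example.
position_usd = 100e6       # JPY 14 billion, worth USD 100 million
vol_95 = 0.0178            # RiskMetrics daily volatility at 95% confidence

worst_case_loss = position_usd * vol_95
print(f"95% 1-day worst-case loss: ${worst_case_loss:,.0f}")  # $1,780,000

# The implied 1-standard-deviation move, as noted in the text:
one_sd = vol_95 / 1.65     # about 1.08%
```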
The next step is to estimate the VaR of a portfolio. The measurement of portfolio VaR is relatively simple if the portfolio consists only of securities with jointly normal distributions. The portfolio return is R(p,t+1) = Σi w(i,t) × R(i,t+1), where the weights w(i,t) are indexed by time to recognize the dynamic nature of trading portfolios. Using matrix notation, the portfolio variance is given by σ²(R(p,t+1)) = w(t)′ Σ(t+1) w(t), where Σ(t+1) is the forecast of the covariance matrix over the horizon. The portfolio VaR is then: VaR(p) = α × √(w(t)′ Σ(t+1) w(t)) × W, where W is the initial portfolio value. The covariance matrix can be constructed from data on the volatilities and correlations of the underlying risk factors.
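The matrix computation w′ Σ w can be sketched for a two-factor portfolio in plain Python. The weights, volatilities, correlation and portfolio value below are hypothetical, chosen only to show the mechanics:

```python
# Portfolio VaR via w' * Sigma * w -- two-factor sketch with
# hypothetical weights, volatilities and correlation.

alpha = 1.645                 # 95% standard normal deviate
W = 1_000_000                 # initial portfolio value in USD (hypothetical)
w = [0.5, 0.5]                # position weights
sigma = [0.01, 0.015]         # daily return volatilities of the two factors
rho = 0.3                     # correlation between the factors

# Covariance matrix Sigma built from volatilities and correlation
cov = [[sigma[0] ** 2,              rho * sigma[0] * sigma[1]],
       [rho * sigma[0] * sigma[1],  sigma[1] ** 2]]

# Portfolio variance = w' * Sigma * w (explicit double sum)
var_p = sum(w[i] * cov[i][j] * w[j] for i in range(2) for j in range(2))

portfolio_var = alpha * var_p ** 0.5 * W
print(f"95% portfolio VaR: ${portfolio_var:,.0f}")
```

For a realistic number of risk factors this double sum becomes a matrix multiplication, which is the "simple matrix multiplication" the essay refers to below.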
To illustrate this, we use a simplified example with only two risk positions. The JPY/USD position has a VaR of $1.78 million, while the THB/USD position's VaR is $1.9 million. The RiskMetrics data set shows a correlation of 55% between JPY/USD and THB/USD. So using the formula: VaR(p) = √(VaR1² + VaR2² + 2 × ρ × VaR1 × VaR2). Hence in our example: VaR(p) = √(1.78² + 1.9² + 2 × 0.55 × 1.78 × 1.9) ≈ $3.24 million. 2.1 Advantages: First, the Delta-Normal method is easy to implement and to calculate because it is based on the assumption of normality. It involves only a simple matrix multiplication and requires only the market values and exposures of current positions, combined with risk data. As a result it saves time and computing cost: compared with the Historical and Monte Carlo methods, a calculation that takes 0.08s under Delta-Normal takes 66.27s using a full Monte Carlo calculation. (Jorion 2001:228) Second, it is easily amenable to analysis, since measures of marginal and incremental risk are a by-product of the computation. (Jorion 2001:220)
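The two-position aggregation can be checked in a few lines; the inputs ($1.78m and $1.9m VaRs, 55% correlation) are the figures given in the text:

```python
import math

# Aggregating two position VaRs with a correlation of 55%.
var_jpy = 1.78e6   # VaR of the JPY/USD position
var_thb = 1.90e6   # VaR of the THB/USD position
rho = 0.55         # RiskMetrics correlation between JPY/USD and THB/USD

var_p = math.sqrt(var_jpy**2 + var_thb**2 + 2 * rho * var_jpy * var_thb)
print(f"Diversified portfolio VaR: ${var_p:,.0f}")  # about $3.24 million
```

Note that var_p is less than the undiversified sum $1.78m + $1.9m = $3.68m; the gap is the diversification benefit from the imperfect correlation.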
2.2 Limitations: One limitation is the existence of fat tails in the distributions of returns on most financial assets. These are a problem precisely because VaR attempts to capture the behaviour of the portfolio return in the left tail; a model based on a normal distribution would therefore underestimate the proportion of outliers and hence the true value at risk. Adjustments can be made for fat tails: there are mainly two approaches, the normal mixture approach and the generalized error distribution. (Dowd 1998:74) Another problem is that the method inadequately measures the risk of non-linear instruments, such as options or mortgages. (Jorion 2001:221)
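The fat-tail problem, and the normal mixture adjustment mentioned above, can be illustrated numerically. The sketch below compares the tail probability beyond the normal 99% quantile under a standard normal with that under a hypothetical normal mixture of the same overall variance (the mixture weights and standard deviations are illustrative assumptions, not taken from the essay):

```python
from statistics import NormalDist

# 99% quantile of the standard normal (about 2.326); a normal-based VaR
# assumes only 1% of outcomes lie beyond this point.
z99 = NormalDist().inv_cdf(0.99)

# Hypothetical normal mixture: 90% of the time sigma = sqrt(2/3),
# 10% of the time sigma = 2, so the overall variance is still
# 0.9 * (2/3) + 0.1 * 4 = 1, matching the standard normal.
p, s1, s2 = 0.9, (2 / 3) ** 0.5, 2.0

tail_mixture = (p * (1 - NormalDist(0, s1).cdf(z99))
                + (1 - p) * (1 - NormalDist(0, s2).cdf(z99)))

print("Normal tail beyond z99:  0.0100")
print(f"Mixture tail beyond z99: {tail_mixture:.4f}")
```

Even though both distributions have the same variance, the mixture places noticeably more than 1% of its probability beyond the normal 99% cut-off, which is exactly the sense in which a normal-based VaR understates the true tail risk.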