Financial risk: prospect of financial gain or loss due to unforeseen changes in risk factors
Market risks: arising from changes in market prices or rates
Interest-rate risks, equity risks, exchange rate risks, commodity price risks, etc.
Market vs. credit vs. operational risks
Stock market volatility
Exchange rate volatility
Interest rate volatility
Commodity market volatility
Number of shares traded on NYSE grown from 3.5m in 1970 to 100m in 2000
Turnover in FX markets grown from $1bn a day in 1965 to $1,210bn in April 2001
Explosion in new financial instruments
Securitization
Growth of offshore trading
Growth of hedge funds
Derivatives
Huge increases in computational power and speed
Improvements in software
Improvements in user-friendliness
Risk measurers no longer constrained to simple back-of-the-envelope methods
Gap analysis
PV01 analysis
Duration and duration-convexity analysis
Scenario analysis
Portfolio theory
Derivatives risk measures
Statistical methods, ALM, etc.
Developed to get crude idea of interest-rate risk exposure
Choose horizon, e.g., 1 year
Determine how much of asset or liability portfolio re-prices in that period
GAP = RS assets – RS liabilities
Exposure is the change in net interest income when interest rates change: ΔNII ≈ GAP × Δr (a small numerical sketch follows below)
Pros
Cons
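A minimal numerical sketch of the gap calculation above; the balance-sheet figures and the rate shock are illustrative assumptions, not from the notes.

```python
# Gap analysis sketch over a 1-year horizon (all figures hypothetical, in $m)
rate_sensitive_assets = 80.0        # assets repricing within the horizon
rate_sensitive_liabilities = 120.0  # liabilities repricing within the horizon

gap = rate_sensitive_assets - rate_sensitive_liabilities  # GAP = RS assets - RS liabilities

delta_r = 0.01             # assumed rise in interest rates (100 bp)
delta_nii = gap * delta_r  # approximate change in net interest income

print(f"GAP = {gap:.1f}, change in NII for +100bp = {delta_nii:.2f}")
```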
Also applies to fixed income positions
Addresses what will happen to the bond price if interest rates rise by 1 basis point
Price bond at current interest rates
Price bond assuming rate rise by 1 bp
Calculate loss as current minus prospective bond prices
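A small sketch of the three PV01 steps just listed, using a simple annual-coupon bond pricer; the bond's terms (5 years, 5% coupon, 4% yield) are illustrative.

```python
def bond_price(face, coupon_rate, y, n_years):
    """Price a bond with annual coupons by discounting each cashflow at yield y."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + y) ** t for t in range(1, n_years + 1))
    return pv_coupons + face / (1 + y) ** n_years

p_now = bond_price(100, 0.05, 0.04, 5)          # step 1: price at current rates
p_up = bond_price(100, 0.05, 0.04 + 0.0001, 5)  # step 2: price after a 1 bp rise
pv01 = p_now - p_up                             # step 3: loss = current minus prospective price
print(f"PV01 ~ {pv01:.4f} per 100 of face value")
```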
Another traditional approach to IR risk assessment
Duration is the weighted average of the maturities of a bond's cashflows, with weights proportional to the present values of those cashflows: D = Σ_t [t × PV(CF_t)] / P
Duration indicates sensitivity of bond price to change in yield: ΔP/P ≈ −D_mod Δy, where D_mod = D/(1 + y) is the modified duration
Can approximate duration using a simple algorithm as D ≈ (P_down − P_up) / (2 × P_0 × Δy), where P_0 is the current price, P_down is the price when the yield falls by Δy, and P_up is the price when the yield rises by Δy
Duration is linear: the duration of a portfolio is the value-weighted average of the durations of the bonds in the portfolio
Duration assumes relationship is linear when it is typically convex
Duration ignores embedded options
Duration analysis supposes that yield curve shifts in parallel
If duration treats the price/yield relationship as linear, convexity treats it as (approximately) quadratic
Convexity is the second order term in a Taylor series approximation
Convexity is defined as C = (1/P) d²P/dy²
The convexity term gives a refinement to the basic duration approximation: ΔP/P ≈ −D_mod Δy + ½ C (Δy)²
In practice, convexity adjustment often small
Convexity a valuable property
Makes losses smaller, gains bigger
Can approximate convexity as C ≈ (P_up + P_down − 2 P_0) / (P_0 (Δy)²)
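A bump-and-revalue sketch of the duration and convexity approximations above, reusing an illustrative annual-coupon bond pricer; the bond's terms and the bump size are assumptions.

```python
def bond_price(face, coupon_rate, y, n_years):
    """Price a bond with annual coupons by discounting each cashflow at yield y."""
    cashflows = [face * coupon_rate] * n_years
    cashflows[-1] += face
    return sum(cf / (1 + y) ** t for t, cf in enumerate(cashflows, start=1))

y0, dy = 0.04, 0.001
p0 = bond_price(100, 0.05, y0, 5)           # current price
p_down = bond_price(100, 0.05, y0 - dy, 5)  # price when the yield falls by dy
p_up = bond_price(100, 0.05, y0 + dy, 5)    # price when the yield rises by dy

duration = (p_down - p_up) / (2 * p0 * dy)             # D ~ (P_down - P_up) / (2 P_0 dy)
convexity = (p_down + p_up - 2 * p0) / (p0 * dy ** 2)  # C ~ (P_up + P_down - 2 P_0) / (P_0 dy^2)

# Duration-convexity estimate of the relative price change for a 50 bp rise in yield
shock = 0.005
approx = -duration * shock + 0.5 * convexity * shock ** 2
exact = bond_price(100, 0.05, y0 + shock, 5) / p0 - 1
print(f"duration ~ {duration:.3f}, convexity ~ {convexity:.3f}")
print(f"approx dP/P = {approx:.5f}, exact dP/P = {exact:.5f}")
```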
Pros
Cons
'What if' analysis – set out scenarios and work out what we gain/lose
Select a set of scenarios, postulate cashflows under each scenario, use results to come to a view about exposure
SA not easy to carry out
SA tells us nothing about probabilities
SA very subjective
Much depends on skill and intuition of analyst
Starts from premise that investors choose between expected return and risk
Wants high expected return and low risk
Investor chooses portfolio based on strength of preferences for expected return and risk
Investor who is highly (slightly) risk averse will choose safe (risky) portfolio
Key insight is that risk of any position is not its std, but the extent to which it contributes to portfolio std
Risk is measured by beta (why?) – which depends on correlation of asset return with portfolio return
High beta implies high correlation and high risk
Low beta implies low correlation and low risk
Zero beta implies no contribution to portfolio risk
Ideally, looking for positions with negative beta
PT widely used by portfolio managers
But runs into implementation problems
Practitioners often try to avoid some of these problems by working with 'the' beta (as in CAPM)
CAPM discredited (Fama-French, etc.)
Can measure risks of derivatives positions by their Greeks
Use of Greeks requires considerable skill
Need to handle different signals at the same time
Risk measures are only incremental
Risk measures are dynamic
In late 1970s and 1980s, major financial institutions started work on internal models to measure and aggregate risks across institution
As firms became more complex, it was becoming more difficult but also more important to get a view of firmwide risks
Firms lacked the methodology to do so
Best-known system is RiskMetrics, developed by JP Morgan
Supposedly developed by JPM staff to provide a '4:15' report to the CEO, Dennis Weatherstone.
What is maximum likely trading loss over next day, over whole firm?
To develop this system, JPM used portfolio theory, but implementation issues were very difficult
Staff had to
Main elements of system working by around 1990
Then decided to use the '4:15' report
Found that it worked well
Sensitised senior management to risk-expected return tradeoffs, etc.
New system publicly launched in 1993 and attracted a lot of interest
Other firms working on their systems
JPM decided to make a lower-grade version of its system publicly available
This was the RiskMetrics system, launched in October 1994
Subsequent development of other VaR systems, applications to credit, liquidity, op risks, etc.
PT interprets risk as std of portfolio return, VaR interprets it as maximum likely loss
PT assumes returns are normal or near normal, whilst VaR systems can accommodate wider range of distributions
VaR approaches can be applied to a wider range of problems
VaR systems not all based on portfolio theory
VaR provides a single summary measure of possible portfolio losses
VaR provides a common consistent measure of risk across different positions and risk factors
VaR takes account of correlations between risk factors
Can be used to set overall firm risk target
Can use it to determine capital allocation
Can provide a more consistent, integrated treatment of different risks
Can be useful for reporting and disclosing
Can be used to guide investment, hedging, trading and risk management decisions
Can be used for remuneration purposes
Can be applied to credit, liquidity and op risks
VaR was warmly embraced by most practitioners, but not by all
Concern with the statistical and other assumptions underlying VaR models
Concern with imprecision of VaR estimates
Concern with implementation risk
Concern with risk endogeneity
Uses of VaR as a regulatory constraint might destabilise financial system or obstruct good practice
Concern that VaR might not be best risk measure
Formula: for normally distributed P/L, VaR = −μ + z_cl σ, where μ and σ are the mean and standard deviation of P/L over the holding period and z_cl is the standard normal quantile at confidence level cl
or, in terms of the return on a portfolio of current value W: VaR = −(μ_R − z_cl σ_R) W
If mean is zero, VaR rises with square root of holding period
'Square root rule': VaR(hp) = √hp × VaR(1)
Empirical plausibility of SRR very doubtful (why?)
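A brief sketch of the normal-VaR formula and the square-root rule above; the portfolio value, mean, and daily volatility are illustrative assumptions.

```python
from math import sqrt
from scipy.stats import norm

W = 1_000_000              # portfolio value (illustrative)
mu, sigma = 0.0005, 0.01   # daily mean and volatility of returns (illustrative)
cl = 0.99
z = norm.ppf(cl)

var_1d = -(mu - z * sigma) * W          # 1-day normal VaR
var_10d_srr = sqrt(10) * z * sigma * W  # 10-day VaR via the square-root rule (zero-mean case)

print(f"1-day 99% VaR ~ {var_1d:,.0f}")
print(f"10-day 99% VaR (square-root rule) ~ {var_10d_srr:,.0f}")
```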
VaR surface is much more revealing
A single VaR number is just a point on the surface – doesn’t tell much
With zero mean, get spike at high cl and high hp
High cl if we want to use VaR to set capital requirements
Lower cl if we want to use VaR for backtesting and model validation
Depends on investment/reporting horizons
Daily common for cap market institutions
10 days (or 2 weeks) for banks under Basel
Can depend on liquidity of market – hp should be equal to liquidation period
Short hp makes it easier to justify assumption of unchanging portfolio
Short hp preferable for model validation/backtesting requirements
VaR estimates subject to error
VaR models subject to (considerable!) model risk
VaR systems subject to implementation risk
But these problems common to all risk measurement systems
VaR tells us most we can lose at a certain probability, i.e., if tail event does not occur
VaR does not tell us anything about what might happen if tail event does occur
Trader can 'spike' the firm by selling out-of-the-money options, which leave VaR largely unchanged while adding large tail risk
Two positions with equal VaRs not necessarily equally risky, because tail events might be very different
Solution to use more VaR information – estimate VaR at higher cl
VaR-based decision calculus can be misleading, because it ignores low-prob, high-impact events
Additional problems if VaR is used in a decentralized system
VaR-constrained traders/managers have incentives to 'game' the VaR constraint
VaR of diversified portfolio can be larger than VaR of undiversified one
Example
A risk measure is subadditive if risk of sum is not greater than sum of risks
Aggregating individual risks does not increase overall risk
Important because adding subadditive risks together gives a conservative (over-)estimate of combined portfolio risk – we want any bias to be conservative
If risks are not subadditive and VaR is used to measure risk, the sum of individual VaRs can understate the true combined risk
Subadditivity is highly desirable
But VaR is only guaranteed to be subadditive if risks are normal or, more generally, elliptically distributed
VaR not subadditive for arbitrary distributions
Example: Suppose each of two independent projects has a probability of 0.02 of a loss of $10 million and a probability of 0.98 of a loss of $1 million during a one-year period. The one-year, 97.5% VaR for each project is $1 million. When the projects are put in the same portfolio, there is a 0.02 × 0.02 = 0.0004 probability of a loss of $20 million, a 2 × 0.02 × 0.98 = 0.0392 probability of a loss of $11 million, and a 0.98 × 0.98 = 0.9604 probability of a loss of $2 million. The one-year 97.5% VaR for the portfolio is $11 million. The total of the VaRs of the projects considered separately is $2 million. The VaR of the portfolio is therefore greater than the sum of the VaRs of the projects by $9 million. This violates the subadditivity condition.
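A quick enumeration (not from the notes) that reproduces the numbers in the example above; the 97.5% VaR is taken here as the smallest loss level whose exceedance probability is at most 2.5%.

```python
from itertools import product

# Each project: loss of $10m with prob 0.02, loss of $1m with prob 0.98
outcomes = [(10.0, 0.02), (1.0, 0.98)]

def var(loss_probs, cl=0.975):
    """Smallest loss level whose exceedance probability is <= 1 - cl."""
    for level in sorted({loss for loss, _ in loss_probs}):
        if sum(p for loss, p in loss_probs if loss > level) <= 1 - cl:
            return level

portfolio = [(l1 + l2, p1 * p2) for (l1, p1), (l2, p2) in product(outcomes, outcomes)]
print(var(outcomes), 2 * var(outcomes), var(portfolio))  # 1.0, 2.0, 11.0 -> subadditivity violated
```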
Example: A bank has two $10 million one-year loans. The probabilities of default are as indicated in the following table.
Outcome / Probability:
Neither loan defaults – 97.50%
Loan 1 defaults; Loan 2 does not default – 1.25%
Loan 2 defaults; Loan 1 does not default – 1.25%
Both loans default – 0.00%
If a default occurs, all losses between 0% and 100% of the principal are equally likely. If the loan does not default, a profit of $0.2 million is made.
Consider first Loan 1. This has a 1.25% chance of defaulting. When a default occurs the loss experienced is evenly distributed between zero and $10 million. This means that there is a 1.25% chance that a loss greater than zero will be incurred; there is a 0.625% chance that a loss greater than $5 million is incurred; there is no chance of a loss greater than $10 million. The loss level that has a probability of 1% of being exceeded is $2 million. (Conditional on a loss being made, there is an 80% or 0.8 chance that the loss will be greater than $2 million. Because the probability of a loss is 1.25% or 0.0125, the unconditional probability of a loss greater than $2 million is 0.8 × 0.0125 = 0.01 or 1%.) The one-year 99% VaR is therefore $2 million. The same applies to Loan 2.
Consider next a portfolio of the two loans. There is a 2.5% probability that a default will occur. As before, the loss experienced on a defaulting loan is evenly distributed between zero and $10 million. The VaR in this case turns out to be $5.8 million. This is because there is a 2.5% (0.025) chance of one of the loans defaulting, and conditional on this event there is a 40% (0.4) chance that the loss on the loan that defaults is greater than $6 million. The unconditional probability of a loss from a default being greater than $6 million is therefore 0.4 × 0.025 = 0.01 or 1%. In the event that one loan defaults, a profit of $0.2 million is made on the other loan, showing that the one-year 99% VaR is $5.8 million.
The total VaR of the loans considered separately is 2 + 2 = $4 million. The total VaR after they have been combined in the portfolio is $1.8 million greater at $5.8 million. This shows that the subadditivity condition is violated. (This is in spite of the fact that there are clearly very attractive diversification benefits from combining the loans into a single portfolio, particularly because they cannot default together.)
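A Monte Carlo sanity check of the two-loan VaR numbers (an assumption-laden sketch, not part of the original example); it exploits the fact that the loans never default together.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# One uniform draw decides which loan (if any) defaults: the loans never default together.
u = rng.random(n)
sev = rng.uniform(0.0, 10.0, n)          # loss severity ($m) if a default occurs
pnl1 = np.where(u < 0.0125, -sev, 0.2)   # Loan 1: default with prob 1.25%, else +$0.2m profit
pnl2 = np.where((u >= 0.0125) & (u < 0.0250), -sev, 0.2)  # Loan 2 likewise

def var99(pnl):
    """99% VaR = the 1% quantile of P&L, sign-flipped."""
    return -np.quantile(pnl, 0.01)

print(var99(pnl1), var99(pnl2), var99(pnl1 + pnl2))  # roughly 2, 2, 5.8 -> not subadditive
```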
Let X and Y be the future values of two risky positions. A coherent measure of risk ρ(·) should satisfy the following axioms:
Subadditivity: ρ(X + Y) ≤ ρ(X) + ρ(Y)
Monotonicity: if X ≤ Y, then ρ(Y) ≤ ρ(X)
Positive homogeneity: ρ(hX) = h ρ(X) for any h > 0
Translation invariance: ρ(X + n) = ρ(X) − n for any sure amount n
Subadditivity and homogeneity together imply convexity, which is important
Translation invariance means that adding a sure amount to our end-period portfolio will reduce loss by amount added
Any coherent risk measure is the maximum loss on a set of generalized scenarios
Maximum loss from a subset of scenarios is coherent
Outcomes of stress tests are coherent
Coherent risk measures can be mapped to the user's risk preferences
Each coherent measure has a weighting function φ(p) that weights the loss quantiles
φ(p) can be linked to a utility function (risk preferences)
Can choose a coherent measure to suit risk preferences
The Conditional VaR (CVaR) is the expected loss given that the loss exceeds VaR: CVaR_cl = E[ L | L > VaR_cl ]
it is also called expected shortfall, tail conditional expectation, conditional loss, or expected tail loss
VaR tells us the most we can lose if a tail event does not occur, CVaR tells us the amount we expect to lose if a tail event does occur
CVaR is coherent
Tells us what to expect in bad states
CVaR-based decision rule valid under more general conditions than a VaR-based one
CVaR coherent, and therefore always subadditive
CVaR does not discourage risk diversification, VaR sometimes does
CVaR-based risk surface always convex
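A minimal sketch of estimating VaR and CVaR (expected shortfall) from a sample of losses; the simulated heavy-tailed loss distribution is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
losses = rng.standard_t(df=4, size=100_000)  # illustrative heavy-tailed loss sample

cl = 0.99
var = np.quantile(losses, cl)        # loss exceeded with probability 1 - cl
cvar = losses[losses >= var].mean()  # expected loss given the loss exceeds VaR

print(f"VaR({cl:.0%}) = {var:.3f}, CVaR({cl:.0%}) = {cvar:.3f}")
```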
This is the outcome of a worst-case scenario analysis (Boudoukh et al)
Can consider as high percentile of distribution of losses exceeding VaR
WCSA is also coherent, produces risk measures bigger than CVaR
Standard-Portfolio Analysis Risk (SPAN, CME)
Considers 14 scenarios (moderate/large changes in vol, changes in price) + 2 extreme scenarios
Positions revalued under each scenario, and the risk measure is the maximum loss under the first 14 scenarios plus 35% of the loss under the two extreme scenarios
SPAN risk measure can be interpreted as maximum of expected loss under each of 16 probability measures, and is therefore coherent
The Semistandard Deviation
The Drawdown
Try to verify whether the above popular risk measures are coherent measures of risk or not
STs are procedures that gauge vulnerability to 'what if' events
Early stress tests 'back of envelope' exercises
Major improvements in recent years, helped by developments in computer power
Modern ST much more sophisticated than predecessors
ST received a major boost after 1998
Realisation that it would have helped firms to protect themselves better in the '98 crisis
ST very good for quantifying possible losses in non-normal situations
STs versus probabilistic approaches
Different types of event (normal, extreme, etc.)
Different types of risk (market, credit, etc.) and risk factors (equity risks, yield curve, etc.)
Different country/region factors
Different methodologies (scenario analysis, factor push, etc.)
Different assumptions (relating to yields, stock markets, etc.)
Different books (trading, banking)
Differences in level of test (firmwide, business unit, etc.)
Different instruments (equities, futures, options, etc.)
Differences in complexity of portfolio
Scenario ('what if') analysis
Mechanical stress tests
Can provide good risk information
Can guide decision making
ST can help firms to design systems to deal with bad events
Good for showing hidden vulnerability
Can improve on VaR type information in various ways
Can more easily identify a firm’s breaking point
Can give a clearer view about dangerous scenarios
Stress tests particularly good for identifying and handling liquidity exposures
Stress tests also very good for handling large market moves
Stress tests good for handling possible changes in market volatility
Stress tests good for handling exposure to correlation changes
Stress tests good for highlighting hidden weaknesses in risk management system
Much less straightforward than it looks
Based on large numbers of decisions about
ST results completely dependent on chosen scenarios and how they are modelled
Difficulty of working through scenarios in a consistent way
Need to follow through scenarios, and consequences can be very complex and can easily become unmanageable
Need to take account of interrelationships in a reasonable way
Need to take account of zero arbitrage relationships
Interpretation of ST results
STs do not address prob issues as such
Hence, always an issue of how to interpret results
This implies some informal notion of likelihood
Integrating ST and prob analysis
Can integrate ST and prob analysis using Berkowitz’s coherent framework
This approach is judgemental, but does give an integrated analysis of both probs and STs
moving key variables one at a time
using historical scenarios
creating prospective scenarios
reverse stress tests
Simulated movement in major IRs, stock prices, etc.
Long been used in ALM analysis
Problem is to keep scenarios down in number, without missing plausible important ones
Based on actual historical events
Advantages
Can choose scenarios from a catalogue, which might include
Moderate market changes
More extreme events such as reruns of major crashes
These might be natural, political, legal, major defaults, counterparty defaults, economic crises, etc.
Can obtain them by alternate history exercises
Can also look to historical record to guide us in working out what such scenarios might look like
Having specified our set of scenarios, we then need to evaluate their effects
Key is to get an understanding of the sensitivities of our positions to changes in the risk factors being changed
Easy for some positions
Harder for options
Must pay particular attention to impact of stylised events on markets
Very unwise to assume that liquidity will remain strong in a crisis
If futures contracts are used as hedges, must also take account of funding implications
Otherwise well-hedged positions can unravel because of interim funding requirements
These approaches try to reduce subjectivity of SA and put ST on firmer foundation
Mechanical ST more systematic and thorough than SA, but also more intensive
Push each price or factor by a certain amount
Specify confidence level
Push factors up/down by α standard deviations, where α corresponds to the chosen confidence level
Revalue positions each time
Work out most disadvantageous combination
This gives us worst-case maximum loss (ML)
FP is easy to program, at least for simple positions
Does not require very restrictive assumptions
Can be modified to allow for correlations between risk factors
Results of FP are coherent
If we are prepared to make further assumptions, FP can also give us an indication of likelihoods
But FP rests on assumption that highest losses occur when factors move most, and this is not true for some positions
Solution to this problem is to search over interim values of factor ranges
This is MLO
MLO is more intensive, but will pick up high losses that occur within factor ranges
MLO is better for more complex positions, where it will uncover losses that FP might miss
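A hedged sketch of the factor-push calculation for a purely linear portfolio: each factor is pushed up or down by α standard deviations, the position is revalued under every combination, and the worst loss is taken. The exposures and volatilities are made up; the MLO refinement would replace the ±α endpoints with a search over interim values.

```python
from itertools import product
import numpy as np

alpha = 2.33                             # push size in standard deviations
sigmas = np.array([0.01, 0.015, 0.02])   # illustrative daily factor volatilities
deltas = np.array([5e6, -2e6, 3e6])      # illustrative linear sensitivities to each factor

# Revalue under every up/down combination of pushed factors and keep the worst P&L
pnls = [float(deltas @ (alpha * sigmas * np.array(signs)))
        for signs in product([-1.0, 1.0], repeat=len(sigmas))]
max_loss = -min(pnls)
print(f"Factor-push maximum loss ~ {max_loss:,.0f}")
```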
Backtesting is the process of systematically comparing VaR forecasts with actual returns.
Backtesting compares the daily VaR forecast with the realized profit and loss (P&L) the next day.
Trading outcome
Need to obtain suitable P/L data
Accounting (e.g., GAAP) data often inappropriate because of smoothing, prudence etc
Want P/L data that reflect market risks taken
Need to clean data or use hypothetical P/L data (obtained by revaluing the positions from day to day)
Draw up summary statistics
Draw up QQ charts
Draw up charts of predicted vs. empirical probs
Shape of these curves indicates whether supposed pdf fits the data – very useful diagnostic
P/L data typically random
Portfolios and dfs often change from day to day
How to compare P/L data if underlying pdfs change?
Good practice to map P/L data to predicted percentile
This standardizes data to make observations comparable given changes in pdf or portfolio
Binomial Distribution
Example: For instance, we want to know what is the probability of observing x = 0 exceptions out of a sample of T = 250 observations when the true probability is p = 1%. We should expect to observe pT = 2.5 exceptions on average across many such samples. There will be, however, some samples with no exceptions at all simply due to luck. This probability is P(x = 0) = (1 − 0.01)^250 ≈ 8.1%
So, we would expect to observe 8.1% of samples with zero exceptions under the null hypothesis. We can repeat this calculation with different values for x. For example, the probability of observing exactly eight exceptions is P(x = 8) = C(250, 8) (0.01)^8 (0.99)^242 ≈ 0.3%. Because this probability is so low, this outcome should raise questions as to whether the true probability is 1%.
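A short check of the binomial arithmetic above with scipy; T = 250 and p = 0.01 reproduce the 8.1% figure for zero exceptions.

```python
from scipy.stats import binom

T, p = 250, 0.01
print(binom.pmf(0, T, p))      # ~0.081: probability of observing zero exceptions
print(binom.pmf(8, T, p))      # ~0.003: probability of observing exactly eight exceptions
print(1 - binom.cdf(7, T, p))  # probability of observing eight or more exceptions
```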
Normal Approximation
Decision Rule for Backtests
Consider a VaR measure over a daily horizon defined at the 99% level of confidence (p = 0.01). The window for backtesting is T = 250 days.
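A sketch of the normal-approximation decision rule, using the usual z-statistic z = (x − pT)/√(p(1 − p)T); the 1.96 cutoff for a two-sided test at roughly the 95% level is an assumption about the chosen rule.

```python
from math import sqrt

def exception_zscore(x, T=250, p=0.01):
    """Normal approximation to the binomial number of VaR exceptions."""
    return (x - p * T) / sqrt(p * (1 - p) * T)

x = 8                   # observed exceptions (illustrative)
z = exception_zscore(x)
reject = abs(z) > 1.96  # two-sided test at roughly the 95% level
print(f"z = {z:.2f}, reject the VaR model: {reject}")
```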
Exception tests focus only on the frequency of occurrences
It ignores the time pattern of losses
Risk system
Describe joint movements in the risk factors
Aggregation: VaR methods
Have assumed so far that each position has its own risk factor, which we model directly
However, it is not always possible or desirable to model each position as having its own risk factor
Might wish to map our positions onto some smaller set of risk factors
Might not have enough data on our positions
Might wish to cut down on the dimensionality of our covariance matrices
Need to keep dimensionality down to avoid computational problems too – rank problems, etc.
Construct a set of benchmark instruments or factors
Collect data on their volatilities and correlations
Derive synthetic substitutes for our positions, in terms of these benchmarks
Construct VaR/CVaR of mapped portfolio
Take this as a measure of the VaR/CVaR of actual portfolio
Usual approach to select key core instruments
Want to have a rich enough set of these proxies, but don’t want so many that we run into covariance matrix problems
RiskMetrics core instruments
Can use PCA to identify key factors
Small number of PCs will explain most movement in our data set
PCA can cut down dramatically on dimensionality of our problem, and cut down on number of covariance terms
Most positions can be decomposed into primitive building blocks
Instead of trying to map each type of position, we can map in terms of portfolios of building blocks
Building blocks are
Decompose stock return: R_i = β_i R_m + ε_i, where R_m is the market factor return and ε_i is the specific component
The portfolio return: R_p = Σ_i w_i R_i ≈ (Σ_i w_i β_i) R_m, once specific risk is diversified away
This approach is useful especially when there is no return history
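A minimal sketch of the beta-mapping idea: the portfolio is mapped onto a single market factor through its net beta, and the (assumed well-diversified) specific risk is ignored. All numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

weights = np.array([0.4, 0.35, 0.25])  # portfolio weights (illustrative)
betas = np.array([1.2, 0.8, 1.0])      # stock betas to the market factor (illustrative)
sigma_m = 0.012                        # daily volatility of the market factor
W = 10_000_000                         # portfolio value

beta_p = weights @ betas               # mapped (net) beta of the portfolio
sigma_p = abs(beta_p) * sigma_m        # approximate portfolio volatility, specific risk ignored
var_99 = norm.ppf(0.99) * sigma_p * W  # 1-day 99% VaR of the mapped portfolio

print(f"portfolio beta = {beta_p:.2f}, 1-day 99% VaR ~ {var_99:,.0f}")
```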
Risk-free bond portfolio
maturity mapping: replace the current value of each bond by a position on a risk factor with the same maturity
duration mapping: maps the bond on a zero-coupon risk factor with a maturity equal to the duration of the bond
cash flow mapping: maps the current value of each bond payment on a zero-coupon risk factor with maturity equal to the time to wait for each cash flow
Corporate bond portfolio
Decomposition: split each bond's return into a general market component and a bond-specific (credit) component
The movement in the value of bond i is driven by the general yield/spread factor plus a factor specific to that bond
The portfolio movement aggregates these components across all bonds
Aggregation: the common-factor exposures add up across positions, while the specific exposures remain bond-by-bond
Variance: portfolio variance = general market risk + specific risk, and the specific part diversifies away as the number of bonds grows
It should be driven by the nature of the portfolio:
portfolio of stocks that have many small positions well dispersed across sectors
portfolios with a small number of stocks concentrated in one sector
an equity market-neutral portfolio
All these positions can be mapped with linear (delta-based) mapping systems because they are (close to) linear
These approaches not so good with optionality
With non-linearity, need to resort to more sophisticated methods, e.g., delta-gamma and duration-convexity
A copula is a function of the values of the marginal distributions plus some parameters, θ, that are specific to this function: C(u_1, u_2; θ), where u_1 = F_1(x_1) and u_2 = F_2(x_2)
Sklar's theorem: for any joint density there exists a copula that links the marginal densities: f(x_1, x_2) = f_1(x_1) f_2(x_2) c(F_1(x_1), F_2(x_2); θ)
This result enables us to construct joint density functions from the marginal density functions and the copula function
Takes account of dependence structure
To model joint density function, specify marginals, choose copula, and then apply copula function
Independence (product) copula
Minimum copula
Maximum copula
Gaussian copulas
t-copulas
Gumbel copulas, Archimedean copulas
Extreme value copulas
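A brief sketch of joining two arbitrary marginals with a Gaussian copula: draw correlated normals, turn them into uniforms with the normal CDF, then push the uniforms through the marginals' inverse CDFs. The marginals (Student-t and exponential) and the correlation are illustrative choices.

```python
import numpy as np
from scipy.stats import norm, t, expon

rng = np.random.default_rng(1)
rho, n = 0.6, 100_000

# Step 1: correlated standard normals (the Gaussian copula's dependence structure)
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)

# Step 2: map to uniforms via the standard normal CDF
u = norm.cdf(z)

# Step 3: apply the chosen marginals' inverse CDFs (purely illustrative marginals)
x = t.ppf(u[:, 0], df=4)
y = expon.ppf(u[:, 1], scale=2.0)

print(np.corrcoef(x, y)[0, 1])  # dependence induced by the copula, not by the marginals
```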
Tail dependence
Gives an idea of how one variable behaves in limit, given high value of another
The limit distribution for (excess) values beyond a cutoff point (threshold u) belongs to the following family: G_{ξ,β}(x) = 1 − (1 + ξx/β)^{−1/ξ} for ξ ≠ 0, and G_{0,β}(x) = 1 − exp(−x/β) for ξ = 0
It is called the generalized Pareto (GP) distribution
It subsumes other distributions as special cases
Closed-form solutions for VaR and CVaR rely heavily on the estimation of ξ and β.
Maximum likelihood
Method of moments
Hill's estimator
Estimates are sensitive to changes in the sample
Results depend on assumptions and estimation method
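A hedged sketch of the peaks-over-threshold route to VaR and CVaR with the GP distribution: fit ξ and β to the excesses over a threshold and plug them into the standard closed-form expressions (as in McNeil et al.). Here the parameters are fitted with scipy's genpareto rather than the estimators listed above, and the loss sample is simulated.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
losses = rng.standard_t(df=3, size=50_000)  # illustrative heavy-tailed loss sample
u = np.quantile(losses, 0.95)               # threshold: 95th percentile of losses
excesses = losses[losses > u] - u
n, n_u = len(losses), len(excesses)

xi, _, beta = genpareto.fit(excesses, floc=0)  # fit the GP distribution to the excesses

alpha = 0.99
var = u + (beta / xi) * (((1 - alpha) * n / n_u) ** (-xi) - 1)  # POT VaR formula
cvar = var / (1 - xi) + (beta - xi * u) / (1 - xi)              # corresponding expected shortfall
print(f"VaR(99%) ~ {var:.3f}, CVaR(99%) ~ {cvar:.3f}")
```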
It relies on historical data
Assumption
The VaR
Advantages & Drawbacks
The Idea: replay a 'tape' of history against the current positions
The VaR
Advantages & Drawbacks
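A minimal sketch of historical-simulation VaR: apply the historical risk-factor returns to today's exposures and read VaR and CVaR off the empirical P&L distribution. The 'historical' returns here are simulated stand-ins and the exposures are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
hist_returns = rng.normal(0, 0.01, size=(500, 2))  # stand-in for 500 days of two risk-factor returns
positions = np.array([600_000, 400_000])           # current exposures to the two factors

pnl = hist_returns @ positions         # hypothetical P&L of today's portfolio on each historical day
var_99 = -np.quantile(pnl, 0.01)       # historical-simulation 99% VaR
cvar_99 = -pnl[pnl <= -var_99].mean()  # expected shortfall beyond the VaR

print(f"HS VaR(99%) ~ {var_99:,.0f}, CVaR(99%) ~ {cvar_99:,.0f}")
```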
The Idea
Advantages & Drawbacks
It should converge to the delta-normal VaR if all risk factors are normal and exposures are linear
Illiquid Assets
Losses Beyond VaR
Issues with Mapping
Reliance on Recent Historical Data
Procyclicality
Crowded Trades
The weekly returns of stocks A and B over the most recent 30 weeks are given below (in %):
A: -3,2,4,5,0,1,17,-13,18,5,10,-9,-2,1,5,-9,6,-6,3,7,5,10,10,-2,4,-4,-7,9,3,2;
B: 4,3,3,5,4,2,-1,0,5,-3,1,-4,5,4,2,1,-6,3,-5,-5,2,-1,3,4,4,-1,3,2,4,3.
A financial institution forms a portfolio of A and B in equal (1:1) proportions.
(a) Compute the VaR and CVaR of the portfolio at the 90% confidence level.
(b) Verify by calculation whether VaR and CVaR are coherent measures of risk.
A trading portfolio consists of a $300,000 investment in gold and a $500,000 investment in silver. Suppose the daily volatilities of the two assets are 1.8% and 1.2% respectively, and the correlation between their returns is 0.6.
(a) What is the 97.5% VaR of the portfolio over a 10-day horizon?
(b) By how much does diversification reduce the VaR?
Suppose we backtest a VaR model using 100 days of data; the VaR confidence level is 99%, and 5 exceptions are observed over the 100 days. If we reject the VaR model on this basis, what is the probability of committing a Type I error?