Modelling the CRM for the Correlation Trading Portfolio
Dherminder Kainth, Jan Kwiatowski & Douglas Muirden
Royal Bank of Scotland
May 19, 2010
Agenda
§ Regulatory Requirements
§ Challenges in Meeting Regulatory Requirements
§ RBS Approach to CRM Calculation
§ Modelling Approaches and Assumptions
§ Price Risk
  • Simulation of Market
§ Default Risk
§ Appendix
  • Computational Implementation of CRM
RBS 00000
Regulatory Requirements
§ The All Price Risk Measure is a special form of the Incremental Risk Charge, described in 7.10.55S R (1), for positions in the correlation trading book
§ The "All Price Risk Measure" must
  • Adequately capture all price risks at the 99.9% confidence interval over a capital horizon of one year
  • Under the assumption of a constant level of risk
  • And be run at least weekly
§ Price risk measures include:
  • Defaults, including the ordering of defaults
  • Credit spread risk
  • Volatility of implied correlations, including the cross effect between spreads and correlations
  • Index to single-name basis, and the basis between index implied correlation and bespoke-portfolio implied correlation
  • Recovery rate volatility
  • Risk of dynamic hedging and the cost of rebalancing
  • Though interest rate and foreign exchange risk are not explicitly mentioned, we consider them to be included in "All Price Risk"
Agenda
§ Regulatory Requirements
§ Challenges in Meeting Regulatory Requirements
§ RBS Approach to CRM Calculation
§ Modelling Approaches and Assumptions
§ Price Risk
  • Simulation of Market
§ Default Risk
§ Timing and Next Steps
§ Appendix
  • Computational Implementation of CRM
Naive Implementation of CRM
§ Naively implementing the CRM, i.e., computing the 99.9% worst loss on a 1-year horizon on RBS's entire correlation trading portfolio, is very difficult
§ For example, using Monte Carlo, we would need to evolve the market forwards in time, pricing and hedging the portfolio as per the desk, tracking P&L over a 1-year horizon
§ A back-of-the-envelope calculation immediately reveals the high likelihood of failure:

  Trade Type         Trade Count
  Bespoke            ~500
  Nth To Default     ~500
  Index CDS          ~3,000
  Index tranches     ~3,000
  Single name CDS    ~75,000

  Computation                     Timing (seconds)
  PV                              ~10
  Parallel CR01                   ~200
  Default Delta                   ~60
  Recovery Delta                  ~200
  Base Correlation Sensitivity    ~60

Figure 1: Numbers and types of trades in our portfolio, along with representative times to compute PV and risks for one trade on one computer
Naive Implementation of CRM (cont'd)
§ Assuming a rehedging frequency of once a month, a grid of 300 computers, and the minimum number of paths needed to estimate the 99.9% confidence limit (i.e., 1,000), we would need ~2,600 hours to compute results for just the bespoke CDOs
§ Recalibration of the market (which needs to happen at every valuation and hedging time point) adds substantially to this timing
§ Over the next few slides we highlight:
  • How one might address the core issue of computational intractability
  • Issues in simulating the market
  • The subjectivity of hedging
§ We will in effect pose a series of questions; the decisions that we have made form the basis of the RBS approach to computing the CRM. This is discussed in more detail in the following section.
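The back-of-the-envelope arithmetic can be reproduced directly. The per-trade timings are the representative figures from Figure 1; the grid size, path count and monthly rehedging frequency are the assumptions stated above.

```python
# Order-of-magnitude cost of the naive CRM Monte Carlo, bespoke book only.
# Per-trade timings (seconds) are the representative figures from Figure 1.
pv, cr01, default_delta, recovery_delta, base_corr = 10, 200, 60, 200, 60
seconds_per_trade = pv + cr01 + default_delta + recovery_delta + base_corr

bespoke_trades = 500
rehedge_points = 12       # monthly rehedging over a 1-year horizon
paths = 1_000             # bare minimum to see a 99.9% quantile
grid = 300                # computers

total_seconds = bespoke_trades * seconds_per_trade * rehedge_points * paths / grid
print(f"~{total_seconds / 3600:,.0f} grid-hours")   # ~2,944 grid-hours
```

The result is the same order of magnitude as the ~2,600 hours quoted (the exact figure depends on which risks are recomputed at each rehedge point).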
Possible Areas of Optimisation: Pricing Algorithms
Choice of Algorithm
§ Can we use convolution? Importance sampling for the Monte Carlo? Replace recursion?
§ The 1-factor Gaussian copula (with random recovery) is very popular because rapid computational schemes exist. The ASB algorithm (or variants thereof) is commonly used in the industry because it returns quasi-exact PVs and risks rapidly.
  • Faster pricing approaches are well known in the literature; however, these are to some extent (uncontrolled) approximations to the true price:
    - LHP (Large Homogeneous Portfolio)
    - Conditional Gaussian approach (Shelton)
    - Saddlepoint methods
    - Stein
§ The choice of scheme depends on the trade-off between accuracy and speed
Optimisation of the Implementation
§ Parallelisation of the code - currently valuation and risks are computed on a grid. Buy more computers?
  • Grid performance does not necessarily scale linearly - data passing is a limiting factor
§ Front office pricing code focuses on accuracy: potential speed-ups by, for example, loosening tolerances whilst maintaining high levels of accuracy
§ Rewriting time-critical parts of the code in assembler?
Possible Approaches: Changing Mapping Approaches
§ When pricing bespoke tranches within a copula-based model, we apply mapping technologies to determine the base correlation for the bespoke - this reflects the different riskiness of the bespoke tranche relative to the index
§ Loss Fraction ("LF") mapping (the approach used by RBS and much of the industry) is slow - it requires the inversion of prices to determine correlations
§ Consider the use of a faster mapping technique such as At-The-Money ("ATM") mapping:
  • RBS front office uses LF mapping to risk manage their correlation book
  • LF deltas differ from ATM deltas
  • Valuing the current portfolio and hedges using ATM mapping rather than LF will make the book appear unhedged
  • If we used ATM mapping, we would need to modify RBS's current portfolio to achieve the same "level of risk" as under LF mapping, and then apply the different mapping technique
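A minimal sketch of why ATM mapping is cheap: it rescales the bespoke strike by the ratio of expected losses and reads the index base-correlation curve directly, with no price inversion. The curve shape and expected-loss numbers below are invented for illustration, not market data.

```python
import numpy as np

def atm_map(bespoke_strike, el_bespoke, el_index, index_strikes, index_corrs):
    """ATM ('moneyness') mapping: read the index base-correlation curve at the
    index strike with the same ratio of strike to portfolio expected loss."""
    equivalent_strike = bespoke_strike * el_index / el_bespoke
    return float(np.interp(equivalent_strike, index_strikes, index_corrs))

# Invented illustrative numbers: an upward-sloping index base-correlation curve
index_strikes = np.array([0.03, 0.06, 0.09, 0.12, 0.22])
index_corrs   = np.array([0.40, 0.55, 0.63, 0.70, 0.80])

# A 5% bespoke strike on a riskier portfolio maps to a lower index strike
rho = atm_map(bespoke_strike=0.05, el_bespoke=0.08, el_index=0.06,
              index_strikes=index_strikes, index_corrs=index_corrs)
```

LF mapping, by contrast, must solve for the index strike whose expected tranche loss fraction matches the bespoke's, which requires repeated pricing - hence the speed difference noted above.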
Possible Approaches: Changing Mapping Approaches
Figure 2: Mapping iTraxx S9 to CDX S9 using ATM mapping and LF mapping
Figure 3: Mapping iTraxx S9 5Y to 7Y using ATM mapping and LF mapping
§ We demonstrate the effect of the different mapping approaches in two scenarios:
  • Figure 2 shows the effect of ATM and LF mapping when mapping iTraxx S9 to CDX S9. Due to the important differences between the two indices, neither mapping method produces satisfactory results. However, we note that LF mapping shifts the market correlation curve in the right direction (as opposed to ATM mapping)
  • Figure 3 shows the effect of ATM and LF mapping when mapping iTraxx S9 5-year to 7-year. The two mapping methods produce similar results, with slightly higher correlation values for LF mapping
Subjectivity in Hedging
§ Typically traders hedge a position in a CDO tranche [a, b] primarily using the constituent CDSs and the index, and sometimes with an additional tranche [l, u]:
  • Delta hedging movements in the single name CDS
  • Delta hedging movements in the index
  • Delta and gamma hedging movements in the index
  • Hedging parallel shifts in correlation
  • Hedging default risk
  • Regression-based hedging
§ Traders are free to use any combination of the strategies outlined above; the choice will change depending on market conditions and trader outlook
§ Algorithmically predicting the hedging strategy is therefore very difficult
§ Hedging is computationally expensive; furthermore it is very subjective, and implementing only a simplistic approach will give rise to greater slippage
Simulation of the Market
§ Simulating the universe of observed prices relevant to the CDO book forwards by periods of up to one year is challenging
§ We need to model possible movements in yield curves and FX rates
§ We need to capture the dynamics of market-implied CDS spreads to model the price risk. Desiderata for the evolution of the CDS spreads include:
  • Impact of rating migrations (jumps?)
  • Empirical co-dependence between CDS spreads shows regional and sectoral variation
  • Co-dependence between CDS spreads is time dependent - showing regime-like behaviour
  • Level-dependent volatility
§ Modelling the index tranche market is, if anything, even more challenging
§ The observed index tranche market comprises standard tranches on the liquid indices with fixed detachment points. Given the occurrence of defaults, some of these detachments have changed - e.g., for high yield, the original (0, 10%) tranche has been completely wiped out
Simulation of the Market
§ Typically these index tranche prices are mapped into base correlations using the (random recovery) Gaussian copula. In simulating the market forwards in time, we need to evolve the price / correlation surface
§ Can we evolve correlations, e.g., additively?
  • It is clear that correlations are bounded in (0, 1)
§ However, the problem is far more subtle than this: it rapidly becomes clear that an arbitrary set of correlations does not describe a valid set of prices
§ Applying historical moves in base correlation to the current base correlation curve can lead to arbitrage, for example negative tranche spreads:
  • In the following graphs, the 3-month move in base correlations from September 2008 to December 2008 is applied to the current base correlation curve to obtain a shifted correlation curve
  • As can be seen from the graph on the bottom left, the resulting shifted base correlation curve produces tranche spreads which eventually become negative
Evolving correlations can lead to arbitrage opportunities
Figure 4: Historic base correlation moves (iTraxx 5y)
Figure 5: Historic change applied to spot
Figure 6: Base correlation tranche prices
Figure 7: Base correlation tranche prices, zoomed in
Simulation of the Market (cont'd)
§ For the prices of index tranches to be admissible (i.e., for the absence of arbitrage), a set of strong conditions must hold - conditions which have effectively never been violated by the market quoted points
§ Typically these conditions are expressed in terms of the ETL (Expected Tranche Loss), denoted here by ETL(K, T) = E[min(L_T, K)]: the expected loss on an equity tranche of width K at time T, as seen from time 0
§ Intuitively, this is just the price of a European (capped call) option on the loss
§ A number of boundary conditions are immediately apparent:
  • An equity tranche cannot lose more than its width, i.e., ETL(K, T) <= K
  • To ensure no arbitrage, the density of the loss distribution must be non-negative for all strikes and times. The ETL is just a normalised price of a call option on the loss; hence the non-negativity of the loss density implies that ETL(K, T) is a concave function of K
  • Losses cannot be reversed - hence the ETL of an equity tranche must be a constant or increasing function of T
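The three boundary conditions can be checked mechanically on a grid of quoted strikes and maturities. The ETL surfaces below are invented purely to exercise the checks.

```python
import numpy as np

def etl_arbitrage_free(strikes, etl, tol=1e-12):
    """Check an ETL surface etl[i, j] = ETL(K_i, T_j) for the three conditions:
    (1) 0 <= ETL(K, T) <= K;
    (2) concavity in K (non-negative loss density);
    (3) ETL non-decreasing in T (losses cannot be reversed)."""
    strikes = np.asarray(strikes, dtype=float)
    etl = np.asarray(etl, dtype=float)
    if np.any(etl < -tol) or np.any(etl > strikes[:, None] + tol):
        return False
    # concavity in K: first differences of the slopes must be non-positive
    slopes = np.diff(etl, axis=0) / np.diff(strikes)[:, None]
    if np.any(np.diff(slopes, axis=0) > tol):
        return False
    # monotone in T
    return not np.any(np.diff(etl, axis=1) < -tol)

# Invented surfaces: strikes x maturities {5y, 7y}
strikes = [0.03, 0.06, 0.09, 0.12]
good = [[0.025, 0.028], [0.040, 0.045], [0.050, 0.056], [0.056, 0.063]]
bad  = [[0.025, 0.020], [0.040, 0.045], [0.050, 0.056], [0.056, 0.063]]  # ETL falls with T
```

A shifted-correlation curve of the kind shown in Figures 4-7 would fail a check like this before being accepted as a simulated market state.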
Agenda
§ Regulatory Requirements
§ Challenges in Meeting Regulatory Requirements
§ RBS Approach to CRM Calculation
§ Modelling Approaches and Assumptions
§ Price Risk
  • Simulation of Market
§ Default Risk
§ Appendix
  • Computational Implementation of CRM
RBS Approach - Disaggregation of the CRM Calculation into Default and Price Risk
Issues with Simulation
1. Unfeasibly large number of computations required to estimate the 99.9th percentile
2. Calculating hedges is computationally very expensive
3. Hedging strategy is very subjective - dependent upon the market and the trader's view of the future
Definition of Price & Default Risk
• We term Price Risk the impact on the portfolio of all moves in the market except for default events
• Default Risk is defined as the impact on portfolio value of default events
• Default events are irreversible; price moves are reversible. Names cannot come back out of default
• Different time horizons apply to Price Risk and Default Risk:
  • We can hedge price risk - hence the time horizon for price risk depends on hedge frequency (days to 1 month)
  • Defaults have a longer natural timescale - the number of defaults in 1 month is minimal
RBS chosen approach: evaluate Price Risk and Default Risk separately, then aggregate to obtain the CRM
§ Constant level of risk allows convolution of Price Risk (up to 1 month for re-hedging), cf. IRC
§ Reduces the number of computations required for Price Risk
§ Removes the need for extensive computation of sensitivities and reduces subjectivity in the choice of hedging algorithm
§ Default risk must be evaluated separately - defaults are irreversible. Use Monte Carlo for default risk; this enables the development of an importance sampling algorithm for Default Risk
§ The split is more conservative: it double counts defaults combined with large spread-move scenarios
Price Risk - Constant Level of Risk
§ Mathematically, the constant level of risk assumption translates to assuming an identical loss distribution after each time interval Δ, corresponding to 1 / hedging frequency
§ I.e., after every hedge interval we are able to re-hedge such that the overall riskiness of RBS's portfolio is identical to today's level
§ Assume Δ = 1 month. Then the constant level of risk P/L distribution over 1 year is the convolution of 12 copies of the 1-month P/L distribution. This is very powerful:
  • We do not need to compute actual hedges, just monthly P/L
  • Convolution allows us to get easily into the tail, i.e., to estimate the 99.9% quantile
  • This leads to significant savings in time - the computation becomes feasible without the need to move away from our books and records valuation approaches (i.e., CRM and desk approaches are consistent)
  • It removes the subjectivity in the choice of hedging approach
  • Obviously convolution cannot be used for defaults (names that default over a month would need to come back out of default!)
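The convolution step is a few lines of numpy. The monthly P/L distribution below is a discretised normal, invented purely for illustration; in practice it would be the empirical 1-month distribution from the market simulation.

```python
import numpy as np

# Monthly P/L distribution on a uniform grid (illustrative: ~N(0, 10), in mm)
grid = np.arange(-50, 51)                 # monthly P/L buckets, spacing 1
p1 = np.exp(-0.5 * (grid / 10.0) ** 2)
p1 /= p1.sum()

# Constant level of risk: annual P/L = sum of 12 i.i.d. monthly P/Ls
p = p1.copy()
for _ in range(11):
    p = np.convolve(p, p1)                # distribution of the running sum
annual_grid = np.arange(12 * grid[0], 12 * grid[-1] + 1)

# 99.9% worst loss = 0.1% quantile of the annual distribution
cdf = np.cumsum(p)
var_999 = annual_grid[np.searchsorted(cdf, 0.001)]
```

With 1,201 annual buckets the full tail is exact given the monthly distribution; no paths are wasted reaching the 99.9% point, which is the practical content of "convolution lets us get into the tails".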
Constant Level of Risk - Convolution (figure)
Convolution lets us get into the tails! (figure)
Price Risk - RBS Algorithm
1. Choose the time horizon over which the portfolio could be re-hedged (2-4 weeks)
2. Simulate the market (index tranches, single name CDS, yield curves, basis, etc.) over the hedging interval using our historical simulation algorithm (see below)
3. Compute P/L over this period; repeat ~200-500 times to compute a distribution
   • Use pricing technologies consistent with (essentially identical analytics to) those used for books and records valuations
4. Use stressed market scenarios and probability weights (see below) to compute the full 1-month P/L distribution
5. Convolve N times (N = 12 if hedge frequency = 1 month) to obtain the full P/L distribution over 1 year
   • The use of convolution implies the absence of autocorrelation, i.e., this month's 1-month P/L distribution is uncorrelated with next month's
   • We will quantify this by examining the impact on price risk of changing the hedging horizon
   • From a final-number perspective, the impact of autocorrelation will be captured via the use of stressed starting scenarios
Stressed Starting Scenarios
§ More significantly, however, the constant level of risk assumption implies that (at the end of each hedging interval, despite significant market moves) we are able to re-hedge our CDO portfolio to the same level of riskiness as today
§ This is a strong assumption. We therefore aim to apply an approach similar to that used for the IRC, where we use stressed starting scenarios
§ Algorithmically:
  • Choose 5 starting scenarios, i.e., the market is in one of 5 starting scenarios, each with a weight (the Gauss-Hermite weight)
    - Our CDO positions will only be partially hedged to each scenario; the cost of this partial hedging will be part of the final P/L distribution
    - The starting scenarios correspond to dates on which the iTraxx, CDX and HY indices assumed the values implied by the Gauss-Hermite percentiles
  • The market is then evolved as per the algorithm above; the total loss distribution for 1 month is computed, accounting for the impact of the stressed scenarios
§ Hedging
  • Allow partial (risk-based) re-hedging of the book when switching to stressed scenarios
  • Model the relevant cost of re-hedging - based on applicable market bid/offers but also including a liquidity premium
Stressed Starting Scenarios
§ Choose stress scenarios to be the market on particular days in our history
§ Proxy stress events by the absolute level of iTraxx spreads
§ Choose days in history corresponding to stress events by finding days on which the quantile of the index matches the probability levels implied by Gauss-Hermite
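A sketch of turning the five Gauss-Hermite nodes into percentile levels of the index history. The spread history below is invented; in practice it would be the observed iTraxx time series.

```python
import numpy as np
from statistics import NormalDist

# Five Gauss-Hermite nodes/weights for integrating against a standard normal:
# E[f(Z)] ~ sum_i (w_i / sqrt(pi)) * f(sqrt(2) * x_i)
nodes, weights = np.polynomial.hermite.hermgauss(5)
z = np.sqrt(2.0) * nodes                    # standard-normal abscissae
probs = weights / np.sqrt(np.pi)            # scenario weights, sum to 1
levels = np.array([NormalDist().cdf(zi) for zi in z])   # percentile of each scenario

# Invented iTraxx spread history (bps); pick the level sitting at each percentile
history = np.array([45, 60, 75, 90, 110, 140, 180, 95, 70, 55, 160, 200])
scenario_spreads = np.quantile(history, levels)
```

The middle node sits at the median; the outer two pick out roughly the 0.2nd and 99.8th percentiles of the spread history, i.e., the benign and stressed market states.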
Agenda
§ Regulatory Requirements
§ Challenges in Meeting Regulatory Requirements
§ RBS Approach to CRM Calculation
§ Modelling Approaches and Assumptions
§ Price Risk
  • Simulation of Market
§ Default Risk
§ Appendix
  • Computational Implementation of CRM
Price Risk: Simulation of Market Variables
Market variables to simulate: yield curves, single name spreads, index loss fractions
Typical approach in the industry:
§ Choose a stochastic differential equation (SDE) to describe the market data parameter (e.g., FX) that we wish to simulate
  • Immediately introduces model dependence
§ Estimate the parameters of the SDE (Kalman filtering)
§ Simulate the SDE forwards to generate a possible future time series
Issues - why don't we do this?
• Strong model dependence - if we estimate a market using a diffusion, we will never predict any jumps!
• Estimation is dependent upon the quality of the history
• Very difficult when we want to simulate a group of interrelated variables (e.g., spreads, yield curves, FX, rates) consistently
• Estimation is very difficult in the multidimensional case!
• Such approaches typically attempt to capture co-dependence using static correlation; real co-dependences are far more complex - time dependent and exhibiting regimes
• Such an approach will struggle to preserve the shapes of curves (e.g., yield curves)
Price Risk - Simulation of Market Variables
Δ(t, t+1) market = {Δ(t, t+1) spreads, Δ(t, t+1) FX, Δ(t, t+1) YC, ...}
(Schematic: a historical series of intra-period market changes Δ01, Δ12, Δ23, ... at dates t0, t1, t2, t3, t4, ..., sampled with jumps in history, transformed via H(Δ(t, t+1) market) and applied to today's market to produce the simulation)
§ Derive a time series of intra-period changes in market variables (FX, interest rates, etc.)
§ Historic changes cannot be applied to current data directly - define a transformation function H()
§ Apply changes with random sign reversal: the drift is random, while directional correlations are preserved. The sign of the entire market change is flipped together.
Price Risk: Simulation of Market Variables
RBS uses the Mahal, Rebonato et al. approach:
§ Apply a sequence of historical market changes to the current market
  • The starting date is randomly chosen
  • The dates of the selected changes must agree across all risk drivers
  • Randomly jump from the sequence to a new date
  • Randomised trend reversal
  • Preserves directional inter-dependence (so no need to model correlations etc.)
§ Historical changes must be applicable to the current market
  • E.g., if the current spread is 10 bps, it is not realistic to apply a ±100 bps historical change
  • Transform risk drivers: y = H[x]; y_sim = y_today + Δy_hist; x_sim = H⁻¹[y_sim]
  • E.g., for proportional changes: H[x] = ln(x) (we use this transformation for FX rates)
  • Historical changes should look like 'white noise' (not dependent on the current market)
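The transform-and-invert recipe in code, using H = ln for FX as on the slide; the FX levels and the historical change are invented.

```python
import numpy as np

def apply_hist_change(x_today, dy_hist, H, H_inv, sign=+1):
    """y = H[x]; y_sim = y_today + sign * dy_hist; x_sim = H_inv[y_sim]."""
    return H_inv(H(x_today) + sign * dy_hist)

# FX: proportional changes via H = ln (as on the slide); numbers invented
H, H_inv = np.log, np.exp
fx_today = 1.45
dy_hist = np.log(1.52) - np.log(1.40)        # one historical 1-month log change

fx_up   = apply_hist_change(fx_today, dy_hist, H, H_inv, sign=+1)
fx_down = apply_hist_change(fx_today, dy_hist, H, H_inv, sign=-1)  # trend reversal
```

Because H = ln, the ±100 bps problem disappears: the same historical log change scales a 10 bps spread and a 500 bps spread proportionally, and the sign reversal applies a symmetric proportional move.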
Price Risk: Market Simulation - Yield Curves
Individual rates
§ Simple CEV-type transformations, with a transformation function s(x) to be calibrated via parameters a, x_L, x_R, C
§ We also have 'band reversion' parameters for curves - 'barbell' effects, shape reversion, etc. (see Mahal et al.)
§ It is also necessary to check the shape of simulated curves, but this is not necessary for 1-month changes; again, not a problem for short time horizons
(Figure: CEV transformation s(x) versus rate x, with band boundaries x_L and x_R. Source: RBS)
Price Risk: Market Simulation - Spreads
General Approach
§ The history of an individual name is not necessarily relevant to modelling the spread dynamics of the same name today (e.g., Ford)
  • For obligors that have experienced downgrades or corporate actions, a direct map to their own spread history and historical spread changes would be unrepresentative of the behaviour they are likely to exhibit today
§ For any date, bucket names by industrial sector and spread percentile band
§ For each path (start date), randomly map each name to a name in the same historical bucket
§ Apply the corresponding changes from the mapped name
  • Introduces more randomness and therefore a wider range of plausible outcomes
  • Preserves correlations across an industry
  • Captures cross-gamma risk concentrated by name
Transformation
§ The simplest model would be H[x] = ln(x) (as used in the Regulatory Stress Test)
§ However, we would expect some dependency on current levels of spreads, perhaps similar to interest rates; this is work in progress
(Figure: spread mapping exercise, bucketing by industrial sector and spread percentile band. Source: RBS)
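The bucketing and random remapping can be sketched as follows. The names, sectors, spreads, bucket boundaries and historical log changes are all invented; real buckets would use true spread percentiles rather than the coarse bands shown here.

```python
import math
import random
from collections import defaultdict

# Invented universe: name -> (sector, spread in bps), and each name's
# historical 1-month log spread change
universe = {"A": ("Autos", 350), "B": ("Autos", 330), "C": ("Banks", 90),
            "D": ("Banks", 110), "E": ("Autos", 40), "F": ("Banks", 45)}
hist_log_change = {"A": 0.30, "B": -0.10, "C": 0.05,
                   "D": 0.12, "E": -0.20, "F": 0.02}

def bucket(sector, spread):
    # crude proxy for a spread percentile band
    band = 0 if spread < 75 else 1 if spread < 200 else 2
    return (sector, band)

buckets = defaultdict(list)
for name, (sector, spread) in universe.items():
    buckets[bucket(sector, spread)].append(name)

# One path: map each name to a random name in its own bucket and apply the
# mapped name's historical change to this name's spread
rng = random.Random(42)
simulated = {}
for name, (sector, spread) in universe.items():
    mapped = rng.choice(buckets[bucket(sector, spread)])
    simulated[name] = spread * math.exp(hist_log_change[mapped])
```

Because the mapping is redrawn on every path, a name can inherit any bucket-mate's move, widening the range of outcomes while keeping sector-level co-movement intact.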
Price Risk: Market Simulation - Index Loss Fractions
General Considerations
§ We need a parameterisation of index tranche prices beyond correlation
§ Simulated prices must be arbitrage-free across detachment points and across maturities
§ RBS is in the process of testing two alternative models, both involving Index Loss Fractions ("ILFs")
§ ILFs are effectively the ratio of the Expected Tranche Loss of an equity tranche with strike K to the total expected loss (EL) of the index
§ Index Loss Fractions underlie loss fraction mapping
§ RBS first simulates single name CDS spreads and the basis - we can therefore compute EL
§ We then propose to simulate the ILFs (i.e., the above ratios), and then convert to tranche prices
Price Risk: Market Simulation - Loss Fraction Bounds
§ ILFs for any maturity must be concave functions of the detachment point. We model changes so that simulated ILFs automatically have this property
§ Simulate the equity tranche, and for successively more senior tranches find lower and upper bounds for the ILFs, say LB and UB
§ Define the tranche 'theta' as the ratio θ = (ILF - LB) / (UB - LB), which must lie between 0 and 1
§ Clearly we cannot just shift θ additively (given the bounds); instead, map θ onto the range (-∞, ∞) using the inverse normal cumulative distribution: S = Φ⁻¹[θ]
§ Additive changes in S will therefore always be valid. Are we done?
Market Simulation - Loss Fraction Bounds
(Figures: base correlation (y-axis) versus detachment point (x-axis) for 0-x tranches, including 0-3 and 0-7; the right-hand graphs show magnified views of the corresponding left-hand graphs)
Price Risk: Market Simulation - Loss Fraction Bounds
§ Let us look at a plot of historical S's
§ Figure 8 shows changes in S for 5-year CDX plotted against index expected loss
  • There is clearly a pattern: as we go to higher expected loss, the range over which S can vary decreases
  • This effect appears more significant in the data than it is - there are fewer data points at larger EL
§ Hence S is not a good quantity to simulate
§ Figure 9 shows the impact of scaling S by expected loss, i.e., Z = S · G(EL), where G(·) is calibrated to different indices and maturities
  • No pattern remains, i.e., we can apply historical changes in Z to today's market
Figure 8: Change in S versus expected loss - unscaled (Source: RBS)
Figure 9: Change in S versus expected loss - scaled (Source: RBS)
Agenda
§ Regulatory Requirements
§ Challenges in Meeting Regulatory Requirements
§ RBS Approach to CRM Calculation
§ Modelling Approaches and Assumptions
§ Price Risk
  • Simulation of Market
§ Default Risk
§ Appendix
  • Computational Implementation of CRM
Default Risk Summary Schematic - Explanation
Simulating Defaults
§ Simulate defaults over a 1-year liquidity horizon
§ The approach employs the same PD / default correlation structure as the IRC:
  • Through-the-cycle (i.e., long term) PDs based on, for example, historically experienced default rates
  • However, we don't know what stage of the credit cycle we will be in 1 year in the future; hence we need to stress these PDs
  • Use a Merton firm-value (Gaussian copula) type approach - familiar from the IRC as described by the IRB
  • Stress the common factor to give default correlation / contagion effects
  • Non-default spreads are driven by the same systematic effects
  • We need to integrate over the systematic effects (Gauss-Hermite for a 1-factor model, Monte Carlo for multi-factor)
§ Recovery rates are randomised (driven by the systematic effects)
§ We therefore have a set of defaulted names (defaulted as per "real world" dynamics) and the times of default up to 1 year
Valuation
§ Given a set of defaults over 1 year, we would expect the spreads of the non-defaulted firms to have changed:
  • If we did not allow contagion, FTD baskets would always make money on a default
§ We need to know the form of the entire market - index tranche prices, CDS spreads, FX, yield curves, basis, etc. We propose to do this by using the value of the common factor to pick out dates where the empirical cumulative probability of the iTraxx / CDX index level corresponds to the cumulative probability of the common factor
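A minimal sketch of the one-factor Merton-style default simulation, including the effect of stressing the common factor. The PD, asset correlation and the size of the stress are invented for illustration.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n_names, n_paths = 100, 20_000
pd_1y = 0.02                               # through-the-cycle 1-year PD (invented)
rho = 0.30                                 # asset correlation (invented)
c = NormalDist().inv_cdf(pd_1y)            # Merton default threshold

# One-factor Gaussian copula: X_i = sqrt(rho) Z + sqrt(1 - rho) eps_i
Z = rng.standard_normal(n_paths)           # common (systematic) factor
eps = rng.standard_normal((n_paths, n_names))
X = np.sqrt(rho) * Z[:, None] + np.sqrt(1 - rho) * eps
defaults = (X < c).sum(axis=1)             # defaults per path

# Stressing the common factor downwards produces correlated ("contagion") defaults
X_s = np.sqrt(rho) * (Z - 1.0)[:, None] + np.sqrt(1 - rho) * eps
defaults_s = (X_s < c).sum(axis=1)
```

In the full model, the realised value of Z on each path is also what selects the historical market state (via the index-level quantile match described above), so heavy-default paths automatically come with a stressed spread environment.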
Default Risk Detailed Explanation
§ Then we:
  • Revalue the portfolio under the given market scenario incorporating randomised recoveries, defaults and blown-out spreads (V1)
  • Revalue the portfolio under the given market scenario incorporating blown-out spreads but no defaults (V2)
  • Default P/L = V1 - V2
§ The impact of price risk is already captured
  • A series of default events will cause the spread environment to change (possibly markedly). The aim is to capture this cross effect - these products are nonlinear!
§ Tail risk is identified by a two-stage estimation process (importance sampling):
  1. Run a large number of simulations (10,000) using approximate revaluation; select the subset (1,000) giving the largest approximate losses
  2. Compute the corresponding losses using exact revaluations; find the appropriate tail average of these
Default Risk - Modelling Contagion Effects (figure)
Optimisation - Improving on Stein?
§ The 1-factor Gaussian copula (with random recovery) is very popular because rapid computational schemes exist
  • All such schemes are predicated on the fact that, after conditioning on the common factor, credits become conditionally independent
  • The standard approach - the so-called ASB algorithm - computes this conditional loss distribution using recursion and is essentially exact
§ Various approximations exist, all of which seek to approximate this conditional loss distribution
  • Probably the most accurate approach in the literature is an application of the Stein approximation (El Karoui, 2008)
§ We have implemented Stein and extensively investigated its use for this problem. We have also developed an alternative (novel, i.e., not seen in the literature) Poisson approximation
  • Both approaches are significantly quicker than standard recursion (factor ~3)
  • Both methods have been compared with random recovery recursion on actual index tranche and bespoke portfolios, for a range of:
    - Spread scenarios
    - Correlations
    - Maturities
    - Attachments / detachments
  • Our testing has encompassed stress events such as those produced by our market simulation
Normal Approximation
• The conditional loss distribution is bounded between 0 and the (factor-dependent) maximum loss. When the portfolio expected loss is not too low or too high, the loss distribution can be close to normal
• Otherwise, however, the distribution can accumulate at either extreme and the normal approximation deteriorates
• The figures below compare a 100-name homogeneous loss distribution with its approximating normal for different levels of expected portfolio loss. Extreme low or high expected losses will always arise, since we are integrating across the market factor
• (Note that these figures are qualitative comparisons only, where discrete distributions are normalised by grid size. The tranche prices themselves give the true quantitative comparison.)
Standard Poisson Approximation
• A Poisson distribution is a natural approximation to the true conditional loss distribution when expected losses are low
• The figures below compare the same 100-name homogeneous distribution with the usual Poisson approximation. As the portfolio expected loss increases, the accuracy deteriorates
• The ranges of accuracy of the Poisson and the normal are complementary, so a threshold for expected loss can be specified at which the approximation switches from Poisson to normal. For the example here this would typically be set around 0.10 to 0.15
• If recoveries are inhomogeneous, however, the distribution will be sparse with a small loss unit or grid size, and the standard Poisson approach becomes problematic
Adjusted Poisson Approximation
• The standard Poisson approximation uses the same loss grid as the true distribution. If instead we allow the approximating Poisson to have its own loss unit, we have an extra parameter and a more flexible approach
• At low expected losses, the adjusted Poisson is very similar to the standard Poisson and the grid size is very close to that of the true distribution (homogeneous in this example)
• As the expected loss increases, the grid size decreases and the adjusted Poisson smoothly changes over to be very close to normal
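One natural way to read the "own loss unit" idea is moment matching: choose the unit u and intensity λ so that the Poisson reproduces the mean and variance of the conditional loss, giving u = Var/Mean and λ = Mean²/Var. This reading is our interpretation for illustration, not necessarily the exact scheme on the slides.

```python
def adjusted_poisson(mean_loss, var_loss):
    """Match the first two moments of the conditional loss with a Poisson on
    its own grid: loss = u * N, N ~ Poisson(lam), so u * lam = mean and
    u^2 * lam = variance."""
    u = var_loss / mean_loss
    lam = mean_loss ** 2 / var_loss
    return u, lam

# Homogeneous portfolio: n names, conditional default prob p, loss unit l
n, p, l = 100, 0.05, 0.006
mean = n * p * l
var = n * p * (1 - p) * l ** 2
u, lam = adjusted_poisson(mean, var)
# at low p: u = (1 - p) * l is close to the true unit l, and
# lam = n * p / (1 - p) is close to n * p, i.e., the standard Poisson
```

As p grows, u shrinks and λ grows, so the scaled Poisson tightens towards its normal limit, which matches the smooth Poisson-to-normal crossover described above.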
Comparison (Poisson vs Stein): Price Differences versus Random Recovery Recursion - 0-3% Tranche (figures: Poisson, Stein)
Comparison (Poisson vs Stein): Price Differences versus Random Recovery Recursion - 9-12% Tranche (figures: Poisson, Stein)
Comparison (Poisson vs Stein): Price Difference Comparison for all Bespoke Tranches (figure)
Default Risk Approximation
§ Default risk modelling is time consuming
  • Under full revaluation, each time a default occurs the model must, for each trade:
    - Remove the defaulted name from the portfolio
    - Calculate the expected recovery of the defaulted name
    - Calculate the adjusted portfolio expected loss
    - Iteratively re-calibrate the loss fraction curve(s) based on the new portfolio expected loss
    - Re-price the adjusted tranche using new attachment and detachment points
§ In scenarios where we simulate a number of defaults occurring (i.e., tail risk), computation times increase dramatically
Default Risk Approximation - PV Interpolation
§ The time consuming step in calculating the PV impact of defaults is the recalibration of the tranche loss fraction curve required for the new portfolio after defaulted names have been removed
§ For the PV of a tranche on a portfolio that has experienced defaults, this recalibration step can be circumvented if we keep the portfolio the same, but move the tranche attachment point down by the loss amount
(Diagram: a 5-6% tranche; simulated defaults with 0% recovery produce a loss of subordination; full revaluation re-strikes the tranche on the reduced portfolio, whereas PV interpolation prices a 4.5-5.5% tranche on the original portfolio)
§ Operationally, for each trade portfolio, the mid spreads and durations of 15 tranches beneath the attachment point of the original tranche are pre-calculated
  • These tranches have the same thickness as the original transaction
  • The specific pre-calculated tranches depend on the original tranche's attachment and detachment
§ Based on the simulated number of defaults, a loss amount is calculated, along with the corresponding loss in subordination of the original tranche
§ The PV of the defaulted tranche is calculated by interpolating the subordination-adjusted tranche against the pre-calculated tranches
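The subordination-adjustment step reduces to an interpolation against the pre-calculated tranche grid. The attachment grid and PV numbers below are invented for illustration.

```python
import numpy as np

def defaulted_tranche_pv(attach_orig, pv_by_attach, loss):
    """PV of a tranche after defaults: keep the portfolio fixed but slide the
    attachment point down by the realised loss of subordination, then
    interpolate against pre-calculated same-thickness tranches."""
    new_attach = max(attach_orig - loss, 0.0)
    attaches, pvs = zip(*sorted(pv_by_attach.items()))
    return float(np.interp(new_attach, attaches, pvs))

# Invented grid: pre-computed PVs of same-thickness tranches below a 5% attach
pv_by_attach = {0.050: 1.20, 0.045: 0.90, 0.040: 0.55, 0.035: 0.10, 0.030: -0.40}

# A realised 0.5% loss of subordination (invented) on the original 5% attach
pv = defaulted_tranche_pv(attach_orig=0.050, pv_by_attach=pv_by_attach, loss=0.005)
```

Only the 15 pre-calculated tranches require full loss-fraction recalibration, once, up front; every simulated default scenario thereafter is a cheap interpolation rather than a re-calibration.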