Operational Risk Training: Managing Operational Risk & AMA


  • Number of slides: 142

Operational Risk Training: Managing Operational Risk & AMA
Toronto, Nov 3, 2011
Bogie Ozdemir & Evren Cubukgil

Agenda - Morning
8:30 – 10:30  Introduction
  o Principles and Objectives of an effective Operational Risk Framework
  o Earlier Practices and Lessons Learned
  Basel II & Solvency II - what lies ahead
  Designing a Comprehensive AMA Framework
  o Necessary components and their inter-connectivity
  o Internal Data
10:30 – 11:00  Coffee Break
11:00 – 12:00
  o External Data
    Scaling External Data
    Quantile Regression
12:00 – 13:00  Lunch Break

Agenda - Afternoon
13:00 – 14:30
  o Qualitative Elements
    RCSA
    Scenario Analysis
14:30 – 15:00  Coffee Break
15:00 – 16:15
  o Quantitative Elements
    Distribution Choices – Frequency and Severity
    Fitting to truncated data
    Change of Measure
    Goodness of Fit, Stability
    EVT – Comparison with LDA
    False Precision
16:15 – 16:30  Coffee Break
16:30 – 17:30  Governance and Validation
  Operationalizing AMA
    Management Buy-In - Roles and Responsibilities
    IT Infrastructure
    Allocation of Operational Risk Capital: appropriate level of granularity, allocation of diversification, allocation of capital generated by shared service centers
    Driving business benefits: Utilizing AMA in effective capital and risk management

8:30 – 10:30  Introduction
  o Principles and Objectives of an effective Operational Risk Framework
  o Earlier Practices and Lessons Learned
  Basel II & Solvency II - what lies ahead
  Designing a Comprehensive AMA Framework
  o Necessary components and their inter-connectivity
  o Internal Data

Principles of an Effective Operational Risk Framework
1. Actionable – facilitate effective operational risk and capital management – models don't manage Op Risk, people do
2. Facilitate institutional learning – we need to connect the dots
3. Reliable, consistent and stable across time and the organization – relative accuracy is key
4. Defendable
5. Quantifiable – we cannot manage it if we cannot quantify it
6. Complementary: marry expert judgment and analytics – use all available information

Earlier Practices and Lessons Learned
1. SCORE-CARD LIKE APPROACHES (High Risk / Medium Risk / Low Risk)
• A score is assigned to each risk type based on subjective criteria (perceived risk, effectiveness of controls, audit findings, etc.)
• Scores are aggregated to arrive at an overall score using weights/multipliers
• Top-down capital numbers are assigned
• An attempt to make use of expert judgment and provide some incentive to manage risk, but subjective, simplistic and not defendable

Earlier Practices and Lessons Learned
2. SINGLE EQUIVALENT SCENARIO QUANTIFICATION
In a workshop a tail scenario is identified, discussed and quantified, and the result is used as capital.
• Can have different starting points: a single overall event-based scenario can be quantified directly as capital, or event-based scenarios per risk type can be aggregated into capital
• The discussion can be very useful to surface potential risks, but
• There is no complete OpVaR distribution, thus
• No way of knowing if the scenario corresponds to a capital event at the required confidence level

Earlier Practices and Lessons Learned
3. EARLIER OpVaR APPROACHES
Quantitatively similar to today's AMA: it uses frequency (typically Poisson) and severity (typically lognormal) distributions to obtain an OpVaR distribution. While internal data are used for the high frequency – low severity events covering the body of the distribution, workshops are used for the low frequency – high severity events constructing the tail.
• A complete OpVaR distribution, but
• In the absence of a reliable supporting governance and validation framework, consistency and defendability are not achieved
• Workshops turned into a negotiation process
• The approach quickly lost credibility and faded! Overreliance on the quantitative approach was clearly not the solution

Back to the Future I - Capital Estimation Process - Example
RISK ASSESSMENT FRAMEWORK (Parameter Estimation: Qualitative + Quantitative)
• Inputs for Frequency: expert opinion, IT, etc.; audit reports; management interviews; management reports; loss history; external reports
• Inputs for Severity: management interviews; loss history (internal and external)
• Flow: Risk Categories and Nature of Loss → Methodology Inputs → Modeling ("Monte Carlo Simulations") → Review and Validation → Operating Risk Review Committee (with feedback)
• Output – Management Action: allows the LOBs to manage their operational Capital at Risk and make cost/benefit decisions on controls and insurance

Back to the Future II - Modeling Approach
Methodology Inputs:
• The identification of the risk types found in the line of business through the assessment process facilitated by the Operational Risk unit of RMG
• The average frequency of the operational loss events by risk type: expected # of occurrences per year, e.g. 100 (100 events per year) or 0.2 (1 event every 5 years)
• The average severity of the operational loss events by risk component: expected $ loss per event, e.g. $100,000 per event
The assessment process uses loss events from historical data, industry data, management interviews, audit reports, etc.

Back to the Future - Methodology Assumptions
Frequency Distribution = Poisson Distribution. Why Poisson:
• One-parameter distribution, easy to calibrate
• Widely used in the insurance industry for event risk modeling
Severity Distribution = Lognormal Distribution. Why Lognormal:
• Practical: similar to normal, but cannot take negative values (losses)
It is not possible to validate these assumptions using historical data, but they are reasonable and provide the simplest model-based solution.

Back to the Future - Monte Carlo Simulation (Risk Classes → Scenarios → EC)
• For each scenario: the # of events per risk class over the next year is simulated; the $ loss per event per risk class is simulated; the $ losses are aggregated per risk class; cumulative losses for all risk classes are aggregated
• Repeat the same procedure 10,000 times
• Then the worst possible operational loss at the required confidence level is used for capital estimation
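A minimal Python sketch of this simulation. The risk classes, Poisson means and lognormal parameters are illustrative placeholders echoing the earlier examples (100 or 0.2 events per year, ~$100,000 per event), not the presenter's calibration, and whether capital is reported as VaR or VaR minus expected loss differs across frameworks.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative risk classes: (annual Poisson mean, lognormal mu, lognormal sigma)
risk_classes = {
    "processing_errors": (100.0, np.log(10_000), 1.0),    # high frequency, low severity
    "legal_risk":        (0.2,   np.log(1_000_000), 2.0),  # low frequency, high severity
}

n_scenarios = 10_000
annual_losses = np.zeros(n_scenarios)

for i in range(n_scenarios):
    total = 0.0
    for lam, mu, sigma in risk_classes.values():
        n_events = rng.poisson(lam)                            # number of events next year
        if n_events > 0:
            total += rng.lognormal(mu, sigma, n_events).sum()  # $ loss per event, aggregated
    annual_losses[i] = total

expected_loss = annual_losses.mean()
var_9995 = np.quantile(annual_losses, 0.9995)   # loss at the 99.95% confidence level
capital = var_9995 - expected_loss              # one convention: capital = VaR minus EL

print(f"EL = {expected_loss:,.0f}, VaR(99.95%) = {var_9995:,.0f}, Capital = {capital:,.0f}")
```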

Back to the Future - Loss Distribution
[Chart: aggregate loss distribution; the body is driven by high frequency, low severity events and the tail by low frequency, high severity events; capital is set at the 99.95% confidence level]

Back to the Future - Loss Distribution
Expected Losses:
• Mean events
• High Frequency, Low Severity (processing errors, fraud, etc.)
• Think of internal controls as a filter: residual risk is the risk after the filter, contributing towards ELs
Capital:
• Tail events
• Low Frequency, High Severity (legal risk, image and reputational, etc.)
• Think of internal controls as a filter: catastrophic losses arise if the filter itself breaks down – inherent risk

The Basel Committee definition: "The risk of loss resulting from inadequate or failed internal processes, people and systems or from external events."
• Lack of loss data makes it impossible to rely solely on quantitative models for Operational Risk Economic Capital
• The financial industry has developed various qualitative models for assessing Operational Risk
• Leading market practice incorporates available loss data with qualitative methods within a robust conceptual framework

Qualitative Practices for Capturing Op Risk
Although forward looking, capital numbers are directly calibrated based on qualitative assessments. Results are difficult to challenge – lack of discipline in managing risk.
• Risk Control and Self Assessment (RCSA) process: businesses qualitatively assess their internal controls and inherent risk factors; Key Risk Indicators are monitored for businesses
• Scenario Analysis: working together, business and Op Risk experts formulate a set of worst-case scenarios meant to capture extreme tail-risk loss events

Quantitative Practices for Capturing Op Risk
Historical loss data is not forward looking, and is not sensitive to changes in current risk factors – little incentive to manage risk.
• Using internal loss data to directly calibrate a loss distribution and infer a capital number: lack of internal loss data – we will never observe tail loss events
• Using external data to calibrate a loss distribution and infer a capital number: much more data is available, but risk profiles can be mismatched between internal business units and industry loss experience

AMA Provides a Formalized Framework to Consistently Manage Operational Risk
• Formal and consistent risk management: encouraged by regulators, recognized by rating agencies
• Disciplined and sensitive to risk; uses all available information (internal data, external data, loss scenarios, RCSA)
• Leading market practice; strong message to shareholders
• Focus on horizon: RCSA, KRI
• Grounded with industry experience: external data
• Manage risk at the business unit level

Basel II & Solvency II - What Lies Ahead (Evren)
• AMA has emerged as industry best practice from the requirements set out in Basel II for the measurement and management of operational risk. Regulatory requirements are broadly defined, so as to allow institutions to tailor approaches to organizational needs. A well-documented approach is required for weighting the use of BEICF, Scenario Analysis, Internal and External Data in measuring Economic Capital.
• Banking regulators are familiar with AMA given the long history of implementation and approval (BIS June 2011 supervisory guidelines)
• Regulatory requirements for operational risk management for insurance companies have lagged those applying to banks
• Under Solvency II, insurance companies are required to demonstrate that their methodology for quantifying operational risk is aligned with their risk profile, and that the output from that methodology is used in strategic decision making and business planning (Use Test). Standardized or formula-based approaches will generally fail the Use Test.
• Following Solvency II and the higher standards for operational risk in the banking sector, regulators in North America will require a move towards best practices by insurance companies.

Basel Operational Risk Event Types
1. Internal fraud: intentional misreporting of positions, employee theft and insider trading on an employee's own account.
2. External fraud: robbery, forgery, check kiting and damage from computer hacking.
3. Employment practices and workplace safety: compensation claims, violation of employee health and safety rules, organized labor activities, discrimination claims and general liability (for example, a customer slipping and falling at a branch office).
4. Clients, products and business practices: fiduciary breaches, misuse of confidential customer information, improper trading activities on the bank's account, money laundering and sale of unauthorized products.
5. Damage to physical assets: terrorism, vandalism, earthquakes, fires and floods.
6. Business disruption and system failures: hardware and software failures, telecommunication problems and utility outages.
7. Execution, delivery and process management: data entry errors, collateral management failures, incomplete legal documentation, unapproved access given to client accounts, non-client counterparty mis-performance and vendor disputes.

Examples in the Insurance Industry (Basel Event Type – Example in Insurance)
1. Internal Fraud – Employee theft, claim falsification
2. External Fraud – Claims fraud, falsifying application information
3. Employment Practices and Workplace Safety – Repetitive stress, discrimination
4. Clients, Products and Business Practices – Client privacy, bad faith, red-lining
5. Damage to Physical Assets – Physical damage to own offices or vehicle fleets
6. Business Disruption and Systems Failures – Processing center downtime, system interruptions
7. Execution, Delivery and Process Management – Policy processing, claim payment errors

AMA and the Use Test
Four broad principles that institutions have to consider at a minimum to satisfy the Use Test provided by the EBA:
1. The purpose and use of the AMA should not be limited to regulatory purposes
• The framework is used to manage operational risk exposures across different business lines
• How inputs, estimations, predictions or outputs from the risk measurement system are used in the decision making process (strategic or tactical)
2. The AMA should evolve as the institution gains experience with risk management techniques and solutions
• How the institution ensures that the nature and balance of inputs into the framework are relevant and fully reflect the nature of the business
• How the framework becomes more responsive and robust over time

AMA and the Use Test (continued)
3. The AMA should support and enhance the management of operational risk within the organization
• How decisions for improving processes and controls are made
• How operational management objectives and activities are communicated within the organization
4. The use of an AMA should provide benefits to the organization in the management and control of operational risk
• Senior management has considered action on its receipt of information from the framework
• AMA increases transparency, risk awareness and operational risk management expertise, and creates incentives to improve the management of operational risk throughout the organization

Designing a Comprehensive AMA Framework
• The Advanced Measurement Approach (AMA) has emerged as the best practice, already used by Basel II banks and being implemented by insurers adopting Solvency II
• AMA was originally based on an actuarial approach to modeling claim severity and frequency. Its earlier, pre-Basel II implementation was not successful due to sole reliance on quantitative models in a data-starved field without the supporting governance and control infrastructure.
• Under Basel II, AMA has matured, with the supporting governance and control infrastructure as mandatory components. With these components in place, AMA provides financial institutions with a useful framework to quantify and manage Op Risk with actionable cost-versus-benefit decision capability.
• AMA is not a quantitative model, but a process and framework which encompasses both quantitative and qualitative elements within a controlled environment.

AMA Framework Brings 4 Approaches Together
• None of the aforementioned approaches (RCSA, Scenario Analysis, Internal and External Data) can be satisfactorily used on its own to manage Operational Risk
• AMA provides a framework in which the approaches can be brought together; the results of each approach can be validated against the others
• Qualitative assessments (Scenarios and RCSA) bring a forward-looking perspective of the risk profile to empirical loss experience
• Capital models and empirical loss data ground qualitative assessments in industry experience
• The common language within the AMA framework is frequency and severity

Using All Available Information
• Internal Loss Data: establish an internal loss database to consistently record loss events in Business Unit / Event-Type categories – mapping and aggregation matter
• External Loss Data: must be purchased from an external consortium (if necessary, mapped to internal loss categories)
• Risk Control and Self Assessment (RCSA): operational risk management indicators that provide forward-looking assessments of business risk factors as well as the internal control environment. Assessment of the expected (mean) impact of risk types on business units. Risk types are scored according to severity and frequency within business units.
• Loss Scenarios: hypothetical loss scenarios defined over a severity range and according to frequency / likelihood of occurring (e.g. 1 event every 5 years). Assessment of extreme tail events of risk types on business units. Developed through formalized and replicable workshops with full representation from business units.

Designing a Comprehensive AMA Framework - Necessary Components and Their Inter-connectivity
[Diagram: Internal Data, External Data (consortium), Scenario Analysis and RCSA feed curve fitting of frequency and severity per risk type, with scaling and an enterprise consistency check; results are aggregated into OpVaR; validation draws on public external data; outputs support risk identification & quantification and risk management]

Internal Data
• Internal data issues:
  EL-type losses, certainly not tail
  Availability (3 years are required under Basel, but there is no magic number) – how many years of history do we need?
  Retroactive data collection
  Truncated loss collection (Basel limit: collection over $10K) – collection cost vs. accuracy
  Changing the culture (admitting to the op risk event)
  Near misses are harder to capture – how to quantify them?
  Capturing losses which unfold over a long period of time: time of occurrence, continuous updates
  Discounting the losses to estimate the severity – which discount rate?

Designing an Internal Loss Database
• The database has to be designed to roll up into Business Unit – Event Type categories (Units of Measure) for which capital will be calculated in the framework
• The highest level of event types must correspond to Basel categories for compatibility with external data sources and participation in consortiums
• Sub-categories can vary across business groups depending on the nature of the risk profile
• Sub-categories should be defined in conjunction with business groups based on common underlying causes of loss events
• How granular to go? Implications for modeling: executive fraud vs. teller fraud – should they be treated as different loss categories, or just different degrees of the same loss type? If they are different, how should intermediate event types (middle-management fraud) be allocated?
• Additional levels of granularity benefit RCSA and Scenario Analysis workshops by allowing more detailed analysis of internal data
• The more granular the loss categories, the more diluted the data available for analysis within each category

Units of Measure
• Within the AMA framework, capital will be calculated and held at the level of Units of Measure (Business Unit – Event Type categories)

Shared Service Centers
• How to address risks of shared internal service centers within the database and in the broader framework
• Businesses will resist recognizing losses incurred by processes which they do not manage directly, or which are shared across the enterprise
• Categorization has to link to capital allocation: if the service center cannot hold capital, risks have to be allocated to the business units using the service center, in the same way as risks incurred by outsourced services
• Loss exposure of business units to shared service centers has to be managed carefully to avoid double counting across the enterprise
• Joint exposure to shared services has to be recognized in modeling correlations between units of measure

Recording Operational Losses
• ORX data standards: an operational risk event is an event leading to the actual outcome(s) of a business process differing from the expected outcome(s), due to inadequate or failed processes, people and systems, or due to external facts or circumstances.
• Includes legal risk: legal risk is the risk of being sued or being the subject of a claim or proceedings due to non-compliance with legal or statutory responsibilities and/or losses due to inaccurately drafted contracts. This includes the exposure to new laws as well as changes in interpretations of existing law by appropriate authorities, and exceeding authority as contained in the contract. This applies to the full scope of Group activities and may also include others acting on behalf of the Group. Legal risk is a component of operational risk.
• Basel II requirements: legal risk includes, but is not limited to, fines, penalties, or punitive damages from supervisory actions, judgments or private settlements (Basel II Accord section V.A, § 644 – Definition of Operational Risk), or the reduction in asset values or cashflows.

Anatomy of an Operational Loss
• Dates define a loss event:
  1. Date of Occurrence: the date when the event happened or first began
  2. Date of Discovery: the date on which the firm became aware of the event
  3. Date of Recognition / Accounting Date: the date when a loss or reserve/provision was first recognized in the P&L
  Date of Resolution (duration of the loss event)
• This is a standard requirement for recording losses in industry consortiums
• Standard practice is to leave loss dates unchanged, even if further losses are realized over time. If multiple losses are posted at different times in the General Ledger, losses linked to the original dates should be updated (grouped losses).

Truncation of Recorded Loss Events
• Loss event databases will typically not record all loss events
• As losses become smaller and smaller, there is a point at which the benefit of having the loss available in an AMA framework is outweighed by the effort / cost of recording it. Small losses are not very informative of exposure to significant operational loss events (high severity, low frequency).
• Impact on modeling the likelihood of large losses: with 10 small losses and 10 medium–large losses, a loss at the 95th percentile of the full sample sits at only the 90th percentile of the recorded (truncated) sample, so reading quantiles off the recorded data alone overstates severity.

Conditional Probability
• The probability that a loss L is less than a, given a data set where only losses greater than b are reported
• Based on the definition of conditional probability, we only need to adjust the likelihood function when estimating the probability density function of the severity distribution. If we don't, we'd be overstating the severity.
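The formula shown on the original slide is not reproduced in this transcript; for a severity distribution with density f and CDF F and a reporting threshold b, the standard conditional adjustment is

$$
P(L \le a \mid L > b) = \frac{F(a) - F(b)}{1 - F(b)}, \qquad
f_{L \mid L > b}(x) = \frac{f(x)}{1 - F(b)}, \quad x > b,
$$

so the log-likelihood of n recorded losses $x_1,\dots,x_n$ (all above b) becomes
$\ell(\theta) = \sum_i \log f(x_i;\theta) - n \log\bigl(1 - F(b;\theta)\bigr)$.

A minimal Python sketch of the corresponding truncated MLE, assuming a lognormal severity (the function name and starting values are illustrative):

```python
import numpy as np
from scipy import stats, optimize

def fit_truncated_lognormal(losses, threshold):
    """MLE of lognormal (mu, sigma) from losses recorded only above `threshold`."""
    x = np.asarray(losses, dtype=float)

    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)  # keep sigma positive
        # truncated log-likelihood: sum of log f(x) minus n * log(1 - F(threshold))
        ll = stats.lognorm.logpdf(x, s=sigma, scale=np.exp(mu)).sum()
        ll -= len(x) * stats.lognorm.logsf(threshold, s=sigma, scale=np.exp(mu))
        return -ll

    start = [np.log(x).mean(), np.log(np.log(x).std() + 1e-6)]
    res = optimize.minimize(neg_loglik, start, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])   # (mu_hat, sigma_hat)
```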

How to Record & Use Near Miss Losses
• Stands between an actual loss and a hypothetical loss (scenario?)
• Were losses averted by chance, or were losses averted due to controls? Is there a difference?
• Separate categorization in the loss database
• How to incorporate them into modeling? Realized loss data cannot be augmented with near-miss events. The probability of the near-miss loss occurring needs to be assessed, because it didn't actually happen: 1 near-miss event in 10 years is not a 1-in-10-year event; how many 10-year periods would we have to experience before the near-miss event was actually realized? Incorporate near misses into modeling through hypothetical loss scenarios.

Grouped Losses
• Some operational losses can be characterized by multiple losses over several years: several losses occur before their cause is identified and remediated; regulatory fines or legal settlements are linked to operational failures in the past
• All losses should be tied to their underlying causal event. This could require previous loss event data points to be updated over time, and calls for careful management of the internal loss event database.
• ORX requirements: an event may have multiple associated losses. In such cases, an investigation may be necessary to identify the "root event" – that is, the initial event without which none of the related losses would have occurred. For ORX purposes, the root event is included in a single record, containing all related losses, and is classified according to its specific event characteristics.

Discounting Grouped Loss Events to the Date of the Causal Event
• Grouped loss events reported to ORX are not discounted, due to complexity and availability of information
• Which discount rate to use? The risk-free rate – at the time of the original event, or at the time of the newly reported event? Should one use risk-adjusted discount factors? The required rate of return / cost of capital?
• What is the impact on modeling of not discounting grouped losses realized over extended time periods?

Discounting Grouped Loss Events to the Date of the Causal Event (continued)
• Risk-adjusted discount rate: whose perspective is the operational loss from? Debtholders, policyholders, deposit holders, shareholders?
• The magnitude of EC is assessed from the debtholder or policyholder perspective; the return on EC is assessed from the shareholder's perspective
• Operational losses experienced are distinct from the Economic Capital held against operational risk, on which firms must earn a required rate of return for shareholders – this is not the relevant discount rate to apply to grouped losses
• The shareholder's required rate of return would take into account all undiversifiable risk in returns generated by the financial institution: market, credit, operational, etc. (weighted average cost of capital)
• Conceptually, operational loss events likely have low levels of undiversifiable / systematic risk

Discounting Grouped Loss Events to the Date of the Causal Event (continued)
• There are difficulties in obtaining data for estimating the time-series correlation of loss severity between observed operational loss events and other risk types and returns. When losses take place over several years, the time horizon is uncertain, and in some cases independent of loss severity (regulatory fines, lawsuits).
• Applying risk-adjusted discount factors to operational losses which take place over multiple years would result in less conservative estimates of losses. Given the uncertainty over methodology, it is better to be conservative.
• If operational losses have little or no systematic or undiversifiable risk, they should be discounted over multiple years based on the risk-free interest rate
• The relevant discount rate is not that prevailing at the time of the initial causal event, nor the rate prevailing at the time the loss impacts the firm's P&L
• The relevant rate is that prevailing at the time of modeling the capital, as this is the perspective from which the potential loss is being assessed

Internal Data and Expected Loss
• Given a sufficiently fat-tailed distribution, internal loss data is only informative of the body of the loss distribution – mean loss events
• Conceivably many thousands of loss data points may be required in order to sample from the tail of the loss distribution of an individual business unit event type
• Fitting a severity distribution to internal loss data can be misleading if we only have points around the body of the distribution and have not observed any tail loss events:
  Lack of tail observations may lead to the selection of an inappropriate probability density function to approximate the severity distribution
  Capital figures can vary substantially based on the choice of severity distribution
  For a given density function used to approximate the severity distribution of losses, fitted parameter values may be inappropriate given the lack of tail data

Collecting Internal Data – How Many Years are Needed?
• Generally, regulators expect a minimum of 3 years of internal loss data. The European Banking Authority requires 3 years of internal loss data when institutions initially apply for AMA, and mandates the use of a minimum of 5 years of internal loss data as it is acquired.
• There is no set minimum amount of data that will ensure more accurate model results or superior management of risk in the framework
• The lower the event frequency, the more years of data are needed
• Depending on how quickly an institution grows or undergoes internal changes, data can quickly become stale. You can't drive a car by looking in the rear-view mirror.

Retroactively Collecting Data
• Retroactively collecting losses can skew the perspective of the risk profile:
  When collecting historical losses it is uncertain what percentage of losses can be recovered in given years
  The threshold above which losses are recovered is not fixed and is unknown: we may only find 30% of losses between $100k and $500k, and 50% of losses between $50k and $100k
• When retroactively collecting losses, a sufficiently high threshold must be set so as to ensure that all relevant losses are collected: e.g. retroactively record all losses above $20m or $100m
• Similar to hypothetical loss events or near-miss events, retroactively collected internal losses cannot be added directly to the internal loss event database
• Retroactively collected loss events can be useful in validating loss distributions (back-testing the largest loss in 10 years)

Setting the Tone at the Top
• Populating the internal loss event database requires strong support from senior executives and the board
• Business units may not want to appear to have higher or more frequent operational losses, and may resist reporting initiatives
• Negative equilibrium: there is no incentive to be the first business unit to buy into recording operational losses
• Neglecting to record minor loss events places an upward bias on the likelihood of the larger loss events that are recorded – businesses can benefit by being more diligent about recording smaller operational losses
• Incentivize businesses within the framework: RCSA must be supported by internal data. Incentivize progress: at first RCSA is supported or validated by collection efforts; as the process matures and data is collected, RCSA becomes supported by the results of data collection.

10:30 – 11:00  Coffee Break

11:00 – 12:00
  o External Data
    Scaling External Data
    Quantile Regression

External Data
• External data sources: consortium (ORX, ORIC), public (FIRST, SAS), private / proprietary (Aon OpBase)
• How do we use it?
  Scaling
  In direct calibration
  In stimulating RCSA and Scenario Analysis
  In validation
• Complementary: external data covers different regions of the distribution

External Data Sources
• SAS OpRisk Global Data: the largest database of losses over US$100k, with 25,000 loss events
  Losses are categorized according to Basel II event types and by individual business lines
  Facilitates data scaling according to firm size by revenue, assets, net income, number of employees and shareholder equity
  The database is populated from commercial online information providers and thousands of publications
• Aon OpBase: quantitative and qualitative information on more than 16,500 losses and incidents experienced by more than 1,600 financial institutions worldwide
  Based on both proprietary (through the reinsurance brokerage business) and publicly available data
  Loss amounts range from US$4 to US$798m

External Data Sources
• Algo FIRST: leading database for scenario analysis
  Contains approximately 10,000 events
  Collected from publicly available sources
  Includes detailed qualitative analysis of each event based on control breakdowns and event triggers
  Loss categories span corporate governance, strategic issues, market practices, and business risk
• ORX: leading consortium database for banks
  212,282 operational loss events recorded from 57 member firms
  An insurance database is in development, with an initial planned number of 8 participating firms
  Losses collected since 2002

External Data Sources
• ORIC: maintained by the Association of British Insurers (ABI)
  Leading operational loss data consortium for insurance companies
  Includes over 3,000 loss events collected over the last 5 years
  3 levels of loss categorization, with level 1 losses consistent with Basel II
  26 members, European insurance firms

Comparing External Loss Data Sources
• External loss data from different providers correspond to different regions of loss distributions
• Different collection thresholds have implications for the use of external data in modeling or validation
[Chart: relative coverage of the loss distribution by ORX/ORIC, Aon/SAS and FIRST data]

Scaling Operational Losses
• External data provided by operational loss consortiums potentially contain a sufficient number of operational losses to calibrate the tail of loss severity distributions
• However, loss data is collected across many institutions and business lines, which differ significantly in risk profile
• The quality of existing controls is not equal across institutions
• The absolute size of a business unit in terms of gross revenue or net income, as well as geographical differences, significantly affect the magnitude of operational losses
• To date there is no satisfactory loss scaling methodology that would allow financial institutions to directly incorporate external loss data into the calibration of severity loss distributions and the calculation of economic capital
• The use of external loss data is limited largely to validating the outcomes of RCSA and Scenario Analysis, as well as fitted severity distributions

Scaling Loss Data
• Each loss database provider has developed its own scaling methodology
• Generally, scaling methodologies are derived and tested on the provider's proprietary database (each probably works best with its own)
• Scaling methodologies must be mindful of preserving underlying distributional assumptions: linear transformations of lognormally distributed random variables are not lognormally distributed
• Predominant methodologies for scaling operational loss data:
  Log-linear models of losses
  Modeling scale and shape parameters for given distributions
  Quantile regression

Log-Linear Models of Losses
• Early studies of loss scaling fit linear models of log-losses onto bank characteristics that are correlated with firm size (log gross income, log total assets), as well as dummy variables indicating the region, business line or type of loss incurred (a representative form is sketched below)
• The first term is a deterministic function of a vector of business line characteristics; the second term is a random variable corresponding to the loss distribution of a "standardized" business line
• Many studies use a power-law function to describe variation in loss sizes between institutions
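The equation on the original slide is not reproduced in this transcript; a standard way of writing such a log-linear scaling model (our notation) is

$$
\log L_i = \beta_0 + \beta' x_i + \varepsilon_i ,
$$

where $x_i$ collects the size proxies and dummy variables for loss $i$ (e.g. log gross income, region, business line), the deterministic term $\beta_0 + \beta' x_i$ is the scaling component, and $\varepsilon_i$ represents the loss of a "standardized" business line. The power-law variant restricts the size effect to a single exponent, $L_i = a \cdot \mathrm{Size}_i^{\,b} \cdot u_i$, i.e. a single coefficient $b$ on $\log(\mathrm{Size}_i)$.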

Log-Linear Models of Losses
• Once the scaling relationship has been determined, internal and external data can conceivably be pooled together
• Given a sufficient number of observations, tail loss events can be used to calibrate loss severity distributions
• Scaling functions can be estimated by Ordinary Least Squares (OLS)
• This methodology has the advantage of not making any distributional assumptions for base losses prior to scaling
• Empirical studies have found that fitted regressions have very poor explanatory power, with R² values reported in the range of 10%
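A minimal sketch of estimating such a scaling regression by OLS and mapping pooled losses onto one's own exposure profile. The function names, inputs and the rescaling step are illustrative assumptions, not any consortium's actual methodology.

```python
import numpy as np

def fit_log_linear_scaling(losses, log_assets, business_line_dummies):
    """OLS fit of log-losses on a size proxy and dummies; returns coefficients, residuals, R^2."""
    y = np.log(losses)                                    # log-losses
    X = np.column_stack([np.ones(len(y)), log_assets, business_line_dummies])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # OLS estimate
    residuals = y - X @ beta                              # "standardized" loss component
    r2 = 1 - residuals.var() / y.var()
    return beta, residuals, r2

def rescale_to_internal(beta, residuals, own_log_assets, own_dummies):
    """Map the pooled, standardized losses onto our own exposure characteristics."""
    x0 = np.concatenate([[1.0], [own_log_assets], own_dummies])
    return np.exp(x0 @ beta + residuals)                  # scaled loss sample
```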

Log-Linear Models of Losses
• Fitting log-linear regression models of operational losses only fits the mean log-loss level given the level of certain exposure indicators
• The mean log-loss level that is fit does not correspond to an intuitive statistic in the original loss data, because the exponential of the mean of log losses does not equal the mean of the losses
• In OLS regression, R² is of limited significance in assessing the validity of the scaling relationship, as the objective is not to explain variation in losses at different levels of exposure indicators
• The scaling relationship should be assessed based on the degree of similarity among the scaled loss distributions
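As a concrete illustration of the second bullet (under the lognormal assumption used elsewhere in the deck): if $\log L \sim N(\mu, \sigma^2)$ then

$$
E[L] = e^{\mu + \sigma^2/2} \;>\; e^{\mu} = e^{E[\log L]},
$$

so exponentiating the fitted mean of the log-losses understates the mean loss by the factor $e^{\sigma^2/2}$ (Jensen's inequality).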

References for Studies on Log-Linear Models of Loss Severity
• Shih, J., A. Samad-Khan, and P. Medapa. "Is the size of an operational loss related to firm size?" Operational Risk, 2000.
• Chapelle, Ariane, Yves Crama, Georges Hubner, and Jean-Philippe Peters. "Measuring and managing operational risk in the financial sector: an integrated framework." 2005.
• Na, Heru Stataputera, Jan van den Berg, Lourenco Couto Miranda, and Marc Leipoldt. "An econometric model to scale operational losses." Journal of Operational Risk 1 (2006): pp. 11-31.
• Dahen, Hela, and Georges Dionne. "Scaling models of severity and frequency of external operational loss data." Canada Research Chair in Risk Management Working Paper 07-01, 2007.

Modeling Scale and Shape Parameters
• An alternative approach to loss scaling is to assume that loss distributions arise from a given parametric family, and that the shape and scale of those distributions vary with an institution's exposure indicators
• Assuming a given family of distributions for loss severities (e.g. lognormal), estimate location and scale parameters separately for each institution in the consortium data set
• Regress the scale and shape parameters on the exposure indicators (a representative specification is sketched below)
• Likelihood ratio tests can be used to assess whether models including exposure indicators perform better than those which impose constant scale and shape parameters
• Wei (2007) applies this methodology with the following severity distributions: Generalized Beta of the second kind (GB2), Burr Type XII, Generalized Pareto (GPD) and Lognormal
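The slide's regression equations are not reproduced in the transcript; a representative specification (our notation) is

$$
\hat\mu_j = a_0 + a_1' z_j + u_j, \qquad \log \hat\sigma_j = b_0 + b_1' z_j + v_j,
$$

where $\hat\mu_j$ and $\hat\sigma_j$ are the location and scale parameters fitted to institution $j$'s losses, $z_j$ are its exposure indicators (assets, revenue, region), and the log link keeps the fitted scale positive. A likelihood ratio test then compares this model against the restriction $a_1 = b_1 = 0$ (constant parameters across institutions).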

Aon OpBase Scaling
• Aon proposes an alternative scaling approach, assuming losses are lognormally distributed with mean and standard deviation specified as functions of explanatory variables
• The approach is based on a function which scales external loss data L to have the same distribution as internal loss data: L* = h(L) (see the sketch below)
• The subscript 0 indicates the exposures of the institution for which data is considered "internal"
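The slide's formula for h(·) is not reproduced in the transcript; under the stated lognormal assumption, a scaling function of this type can be written (our notation) as

$$
h(L) = \exp\!\Bigl(\mu(x_0) + \sigma(x_0)\,\frac{\log L - \mu(x)}{\sigma(x)}\Bigr),
$$

where $\mu(\cdot)$ and $\sigma(\cdot)$ are the fitted mean and standard deviation of log losses as functions of the exposure variables, $x$ is the exposure of the institution that reported the external loss $L$, and $x_0$ is the exposure of the "internal" institution. The transformation maps each quantile of the external lognormal onto the same quantile of the internal one, so the scaled losses remain lognormally distributed.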

Aon OpBase Scaling
• The coefficients in the specifications of the mean and standard deviation are estimated by maximum likelihood, pooling all internal and external data together
• Once estimated, both the severity distribution and the scaling function are specified

References for Modeling Scale and Shape Parameters
• Wei, Ran. "Quantification of operational losses using firm-specific information and external database." Journal of Operational Risk 1 (2007): pp. 3-34.
• Frachot, Antoine, Edlira Kospiri and Fabien Ramaharobandro. "Operational risk: scaling the severity of external operational risk data." Available for download at www.aon.com

Quantile Regression
• Quantile regression specifies a relationship between quantiles of distributions of random variables, as opposed to their means
• Given percentiles of a loss distribution from one institution may differ from another's by a constant amount (location shift, figures a and b) or by a constant and a multiple (location-scale shift, figures c and d)
• The location or location-scale shift relationships are parameterized in terms of exposure indicators

Quantile Regression
• Formally, the location and location-scale shift models can be expressed respectively as shown below
• The residual term has a fixed distribution F0
• The distribution of any log loss is a shifted, scaled version of the base distribution F0
• For a given exposure variable, the quantile of the log-loss severity distribution can be expressed in terms of the location and scale shift parameters
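The slide's equations are not reproduced in the transcript; standard forms of the two models (our notation, in the spirit of Koenker and Cope–Labbi) are

$$
\text{location shift:}\quad \log L_i = \alpha + \beta' x_i + \varepsilon_i, \qquad
\text{location-scale shift:}\quad \log L_i = \alpha + \beta' x_i + (\gamma + \delta' x_i)\,\varepsilon_i,
$$

with $\varepsilon_i \sim F_0$ fixed, so that the $\tau$-quantile of $\log L$ at exposure $x$ is

$$
Q_{\log L}(\tau \mid x) = \alpha + \beta' x + (\gamma + \delta' x)\, F_0^{-1}(\tau),
$$

the pure location-shift case corresponding to $\gamma = 1,\ \delta = 0$.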

Quantile Regression
• The first step is to apply the Khmaladze test, which evaluates the null hypothesis that a given quantile regression model follows a location- or a location-scale shift model
• If the null hypothesis of a location shift model is not rejected, its parameters can be estimated using OLS
• If the null hypothesis of a location-scale shift model is not rejected:
  Fit quantile regression lines at several probability levels
  For a fixed exposure level X0, obtain the estimated quantiles of the loss distribution from the fitted regression lines
  Regress the quantile coefficient estimates on the corresponding quantile estimates at the base exposure X0, using OLS, to obtain estimates of the location and scale parameters

Quantile Regression
• The calculation of quantiles of a given data set can be expressed in the form of a minimization problem, as shown below
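The slide's formula is not reproduced in the transcript; the standard form of this minimization (Koenker, 2005) is

$$
\hat q_\tau = \arg\min_{q} \sum_{i=1}^{n} \rho_\tau (y_i - q), \qquad
\rho_\tau(u) = u\,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
$$

i.e. the $\tau$-quantile minimizes the asymmetrically weighted absolute deviations (the "check function" $\rho_\tau$); $\tau = 0.5$ recovers the median.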

Quantile Regression
• Expressing a quantile of a data set as a linear function of exposure variables, quantile regression obtains the parameters of the function by a similar minimization problem (see below)
• This is very similar to the OLS estimate, the only difference being that the quantile regression estimates a specific quantile as opposed to the mean
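Again the formula is not shown in the transcript; the standard quantile regression estimator is

$$
\hat\beta(\tau) = \arg\min_{\beta} \sum_{i=1}^{n} \rho_\tau\bigl(y_i - x_i'\beta\bigr),
$$

which replaces the squared-error loss of OLS with the check function $\rho_\tau$, so the fitted line $x'\hat\beta(\tau)$ tracks the conditional $\tau$-quantile rather than the conditional mean.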

Quantile Regression
• Unlike OLS estimates of log-linear models of operational losses, which do not preserve the mean of the loss data, quantiles of the underlying loss data are preserved under all monotonic transformations
• Quantile regressions do not require any distributional assumptions about the underlying loss data
• A significant drawback is loss data thresholds – only losses above the threshold are reported to consortium databases, and it is very difficult to determine which percentile of the underlying loss data the threshold corresponds to at each institution
• ORX proposes an iterative quantile regression approach to obtaining estimates of location and scale parameters from truncated data

References for Quantile Regression
• Koenker, Roger. Quantile Regression. Cambridge University Press, 2005.
• Cope, E. and Abderrahim Labbi. "Operational Loss Scaling by Exposure Indicators: Evidence from the ORX Database." 2005. Available for download at http://www.orx.org

External Data and Validation
• External data is still vital to an AMA framework even without applying a scaling methodology and combining it with internal data
• Qualitative assessments of operational loss exposures should always reference relevant external data
• Scenario Analysis: workshops should include challenge by review of external loss experience from similar lines of business and similar institutions
• RCSA: evaluation of control effectiveness and inherent risks should be validated against external loss data (and, if available, a comparison of external and internal data)
• High percentiles of loss data from external consortiums, and loss scenarios from the FIRST database, can be used to validate capital numbers based on fitted severity distributions

12:00 – 13:00  Lunch Break

13:00 – 14:30
  o Qualitative Elements
    RCSA
    Scenario Analysis

Qualitative Elements: RCSA
• RCSA is the point at which the AMA framework is integrated with the businesses and the management of operational risk
• Without a sound RCSA process, exposures that are identified and measured within the AMA framework cannot be effectively managed
• Two major components of an RCSA:
  Qualitative self-assessment of risk exposures
  Identification and review of existing controls and current gaps (responsibility for implementing controls)
• Given an inventory of gaps in a business' controls, capital figures produced in the AMA framework provide an indication of which to prioritize
• Qualitative risk self-assessments are often criticized within risk management, but as a process RCSA promotes introspection about risk exposure and embeds a good risk management culture within the organization
• To the critics: is it a bad idea for key management and business leaders to get together to discuss and assess weaknesses and risks within their operations?

Who Participates in an RCSA?
• Representation from all areas of the business group; discussion of relevant risks and review of controls should be facilitated at all levels of the business in a successful RCSA
• At least some participants in the workshop must be able to understand the strategic objectives of the business group as well as the processes in place to meet those objectives
• The workshop must also have members who can either determine or allocate ownership of controls and gaps
• Participation of more senior employees must be managed carefully:
  They may turn defensive during discussion of potential failures in the business plans
  More junior employees may feel uncomfortable giving their opinions
  Political or sensitive issues may not be discussed
• These are cultural issues and must be resolved from the top of the organization by an executive champion

Role of the RCSA Facilitator
• RCSA workshops benefit substantially from a designated, objective facilitator from outside the business who can:
  Act as devil's advocate within group discussions
  Manage the use of external and internal loss data throughout the workshop
  Record decisions and actions in a consistent format
  Maintain discipline within the group and help to manage the time taken by the workshop
  Ensure involvement of the entire group and stimulate discussion by introducing reference material such as internal and external data at appropriate points within the workshop
• High-level validation of the output from workshops is required to ensure that they are conducted consistently across different business units

Qualitative Elements - RCSA Process
[Diagram: Business Objectives → Processes → Inherent Risks → Control Assessment → Residual Risk, feeding the control portfolio and the AMA framework]
• Results are owned and produced by the business; integrated with the planning process
• Risks are assessed based on expected (mean) impact on the business
• Manage risks through the AMA framework: severity and frequency

RCSA Process - Impact on Business Objectives
• Identify risks in the context of business objectives, processes and inherent risks
• Outline the key objectives of the business unit at two levels:
  High-level objective in the organization (long term)
  Strategic objectives within the high-level mandate (short term)
• Define the processes in place to meet those objectives
• How do risk types impact processes and business objectives?
  A more granular taxonomy of risk types within Basel categories significantly aids this assessment process
  New risk sub-categories can be added to the database as they are identified

RCSA Process - Frequency of Event Types
• Frequency: how often each event type impacts business processes and objectives in the absence of all controls (include near-miss events)
• Severity: given occurrence, what is the expected loss in the absence of all controls

RCSA Process - Evaluating Controls and Residual Risk
• Develop an inventory of existing controls in the business unit
• Map controls to risk types in the business unit's taxonomy (aided by more granular categorization of risk types)
• Develop an inventory of gaps in controls on processes
• Within business units, identify owners of controls and owners of gaps
• As part of the control and gap inventories: estimate costs for maintaining existing controls, as well as costs for closing gaps in controls
• The goal is not only to identify missing controls, but also to identify potential areas that may be over-controlled and create inefficiency

RCSA Process - Evaluating Controls and Residual Risk
• Evaluate controls on two dimensions: Development and Effectiveness
• Weight each dimension according to the maturity of controls; in the absence of sound controls, Development is more important than Effectiveness
1. Development:
  How are controls developed? Who is involved?
  How frequently are they reviewed?
  Calculation of KRIs; recording of internal data
2. Effectiveness, defined in terms of severity and frequency:
  How much is the loss amount decreased given an event?
  How much is the likelihood of the event decreased?
  KRI metrics, internal vs. external loss data

RCSA Process - Determining Residual Risk
[Diagram: Inherent Risk (informed by external data) combined with the Control Assessment gives Residual Risk; differences between residual risk and external data are substantiated by the development/review process for controls]

RCSA Process - Residual Risk Heat Map
• The residual risk of the event types can be placed on a heat map to summarize the results of the RCSA
• Operational risk is characterized by extreme loss events which, by their nature, are highly unlikely

RCSA and Validation
• External and internal data are vital for challenging the results of risk self-assessments
• Introducing internal and external data too early can impede the RCSA process
• Business units need to develop their OWN assessment of their risks and control quality, which can then be challenged through comparison with internal and external data. Exposure to data too early can prevent introspection about risks, as workshop participants focus on past events.
• Differences between inherent risk (external loss data) and residual risk should be substantiated by the development/review process for controls – businesses can't have effective controls by accident
• Differences between loss scenarios and external (as well as internal) data should be supported by the evaluation of residual risk and the development process for controls

Scenario Analysis - Workshop
• Objective: to create severity and frequency data to populate the tails of the loss distribution per risk type via working sessions with management and front-line operational risk people
• External data, discussions with other groups and expert opinions are used to facilitate the discussion
• Scenario data can be combined with internal loss data or used to calibrate the CDF of a severity distribution directly
[Diagram: Scenario Analysis, RCSA and public external data feed the frequency and severity curve fitting per risk type]

Framing the Questions
• We effectively create synthetic data points for the tail
• Questions should be framed appropriately to accommodate the human mind

Example - Curve Fitting (expected # of events per year)
• MLE minimizes the cumulative error in curve fitting
• We can give more weight to the tail errors for a better tail fit – but extreme weight marginalizes the other data points, creating extreme sensitivity to the tail data
• (If we had only used internal data, we'd be extrapolating based on distribution assumptions)
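A minimal sketch of the tail-weighting idea. The slide refers to MLE; the sketch below instead uses a weighted least-squares fit of a Poisson–lognormal exceedance-frequency curve (expected # of events per year at or above a given severity), which is one simple way to up-weight tail errors. The data points, weights and starting values are illustrative, not the presenter's example.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical points: (loss severity, expected # of events per year at or above that severity),
# mixing internal data (body) and workshop scenarios (tail).
severity = np.array([1e5, 5e5, 2e6, 1e7, 5e7])
annual_freq = np.array([20.0, 4.0, 0.5, 0.1, 0.02])
weights = np.array([1.0, 1.0, 1.0, 3.0, 5.0])     # heavier weight on tail errors

def weighted_error(params):
    log_lam, mu, log_sigma = params
    lam, sigma = np.exp(log_lam), np.exp(log_sigma)
    # exceedance frequency implied by Poisson(lam) events with lognormal severities
    model = lam * stats.lognorm.sf(severity, s=sigma, scale=np.exp(mu))
    # compare on a log scale so body and tail points are on comparable footing
    return np.sum(weights * (np.log(model) - np.log(annual_freq)) ** 2)

res = optimize.minimize(weighted_error, x0=[np.log(25.0), np.log(5e5), 0.0],
                        method="Nelder-Mead")
lam_hat, mu_hat, sigma_hat = np.exp(res.x[0]), res.x[1], np.exp(res.x[2])
```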

Role of Corporate
• Corporate experts facilitate the sessions to ensure maximum objectivity and consistency across the businesses and across time
• Corporate plays a very critical pull-and-push role in making sure business leaders are sufficiently, and consistently (with other groups), open about potential Op Risk events
• The process has a high risk of turning into a negotiation rather than an open, fact-finding and brainstorming exercise
• Reassuring business leaders that the process is consistent across the enterprise is key
• Centralized expertise, with the same corporate people attending the sessions over time and for all business groups, is required

Potential Bias’s and Control • Scenario Biases: Partition dependence: where respondent’s knowledge is distorted Potential Bias’s and Control • Scenario Biases: Partition dependence: where respondent’s knowledge is distorted by discrete choice of buckets within which their responses have to be presented Availability: where participants only recall recent events Anchoring: where different starting points yield different estimates Motivational: where the misrepresentation of information due to respondents’ interests are in conflict with the goals and consequences of the assessment. Overconfidence: where respondents rely on limited loss experience • Identification and evaluation of controls within the RCSA process can validate outcomes of scenario analysis against internal and external loss experience. 87

14:30 – 15:00 Coffee Break
88

15:00 – 16:15 o Quantitative Elements: Distribution Choices – Frequency and Severity; Fitting to Truncated Data; Change of Measure; Goodness of Fit, Stability; EVT – Comparison with LDA; False Precision
89

Loss Distribution Approach
• The standard approach to determining capital for a business unit / event type (unit of measure) is to parameterize frequency and severity distributions
• The frequency distribution determines the likelihood of experiencing different numbers of loss events in a year – use annual observations of the number of losses to parameterize it
• The severity distribution determines the likelihood of losing different amounts given that a loss occurs – use the magnitudes of all losses experienced to parameterize it
• Capital numbers are based on the aggregate annual loss distribution: a draw from the frequency distribution determines the number of losses in a year (n); taking n draws from the severity distribution determines the individual magnitudes of those losses (a simulation sketch follows below)
90
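A minimal sketch of the simulation loop described above (not the authors' implementation), assuming a Poisson frequency and lognormal severity with purely hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for one unit of measure
lam = 12.0               # mean annual loss frequency (Poisson)
mu, sigma = 10.0, 2.0    # lognormal severity parameters (log-scale)

n_years = 100_000        # number of simulated years

# Frequency: number of losses in each simulated year
counts = rng.poisson(lam, size=n_years)

# Severity: draw each year's losses and sum them to the aggregate annual loss
annual_loss = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

# Capital read off a high percentile of the aggregate annual loss distribution
capital = np.quantile(annual_loss, 0.999)
print(f"Simulated 99.9% aggregate annual loss: {capital:,.0f}")
```

The same loop generalizes to any frequency/severity pair; only the sampling calls change.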

Level of Granularity in Modeling
• Modeling loss distributions for event type sub-categories significantly reduces the amount of internal data available
• Comparing loss sub-categories within Basel event types, it may not be clear whether losses should be described by two separate distributions
• Goodness-of-fit tests can be adapted to evaluate the hypothesis that subsamples of data belong to the same distribution
• Why not model executive fraud and teller fraud as losses from different quantiles of the same internal fraud distribution? There is certainly a continuum of employees within an organization or business unit capable of perpetrating fraud of various magnitudes
• Evaluate whether there is sufficient data available to calibrate separate distributions, and whether the data rejects the hypothesis that losses from different categories follow the same distribution; if there are limited grounds for rejecting it, a simpler framework is generally better
91

OpVaR Frequency Distributions
• Poisson: single-parameter distribution, parameterized with the mean annual frequency of losses; variance is equal to the mean
• Binomial: two-parameter distribution, parameterized by Maximum Likelihood Estimation (MLE) using loss frequency data; variance is smaller than the mean
• Negative Binomial: two-parameter distribution, parameterized with MLE using loss frequency data; variance is larger than the mean
• (A simple over-/under-dispersion check is sketched below)
92
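A hypothetical dispersion check along these lines; the sample counts and the method-of-moments shortcut for the negative binomial are illustrative assumptions, not taken from the source:

```python
import numpy as np

# Hypothetical annual loss counts for one unit of measure
counts = np.array([8, 14, 11, 9, 17, 12, 10, 15])

mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean={mean:.1f}, variance={var:.1f}")  # variance > mean points to negative binomial

# Poisson: the MLE of the rate is simply the sample mean
lam_hat = mean

# Negative binomial via method of moments (a common shortcut to a full MLE):
#   var = mean + mean^2 / r   =>   r = mean^2 / (var - mean),  p = r / (r + mean)
if var > mean:
    r_hat = mean**2 / (var - mean)
    p_hat = r_hat / (r_hat + mean)
    print(f"negative binomial: r={r_hat:.2f}, p={p_hat:.2f}")
else:
    print("variance <= mean: Poisson (or binomial) is the more natural choice")
```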

OpVaR Severity Distributions
• There are many options to choose from for modeling the likelihood of loss severity
• Thin-tailed distributions: Beta, Chi-Square, Exponential, Gamma, Lognormal, Weibull
• Fat-tailed distributions: Burr, Cauchy, Generalized Pareto (GPD), Log-Gamma, Log-Logistic, Pareto
• Parameters of the severity distribution are fit by maximizing the joint likelihood of the loss amounts with respect to the parameters of the likelihood function (distribution) – see the sketch below
• Typically the data will not clearly favour one severity distribution over the alternatives – a sufficient number of tail loss events is not available to differentiate between them. The likelihood of observing losses around the body or mean is similar across the alternative severity distributions
• Capital numbers are driven by the shape of the tail of the severity distribution – which can vary substantially across fitted distributions
93
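A hedged sketch of fitting several candidate severity distributions by MLE and comparing their tail quantiles; the loss sample and the shortlist of candidates are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical internal loss severities (currency units) for one unit of measure
losses = rng.lognormal(mean=10, sigma=2, size=300)

candidates = {
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "gen. pareto": stats.genpareto,
    "loglogistic": stats.fisk,
}

for name, dist in candidates.items():
    params = dist.fit(losses, floc=0)           # MLE fit, location pinned at 0
    ll = np.sum(dist.logpdf(losses, *params))   # maximized log-likelihood
    q999 = dist.ppf(0.999, *params)             # 99.9th percentile of severity
    print(f"{name:12s} log-lik={ll:10.1f}  99.9% severity={q999:,.0f}")
```

Typically the log-likelihoods (driven by the body of the data) are close, while the 99.9th percentiles diverge – which is exactly the point the slide makes.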

Fitting Severity Distributions to Truncated Data
• Likelihood functions for the severity of losses are defined from a loss value of 0, but loss data is only recorded for losses over a threshold b
• Parameters of the likelihood function f() therefore have to be calibrated on the likelihood of observing a loss conditional on it being greater than the threshold b, i.e., by maximizing the conditional likelihood f(x; θ) / (1 − F(b; θ)) for observed losses x ≥ b (see the sketch below)
[Diagram: severity density with collection threshold b; data is only available from the portion of the distribution above b]
94
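A minimal sketch of the truncation-adjusted MLE, assuming a lognormal severity and a hypothetical collection threshold b:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
b = 10_000.0                                   # hypothetical collection threshold
full = rng.lognormal(mean=9, sigma=2, size=2000)
losses = full[full >= b]                       # only losses above b are recorded

# Negative conditional log-likelihood for a lognormal truncated at b:
#   f(x; mu, sigma) / (1 - F(b; mu, sigma)),  for x >= b
def neg_loglik(theta):
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    logpdf = stats.lognorm.logpdf(losses, s=sigma, scale=np.exp(mu))
    logsf = stats.lognorm.logsf(b, s=sigma, scale=np.exp(mu))  # log(1 - F(b))
    return -(logpdf.sum() - len(losses) * logsf)

res = optimize.minimize(neg_loglik, x0=[np.log(np.median(losses)), 1.0],
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x
print(f"truncation-adjusted fit: mu={mu_hat:.2f}, sigma={sigma_hat:.2f}")
```

Ignoring the truncation (fitting only the observed losses as if they were the full sample) systematically biases the body of the fitted distribution upwards.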

Quantitative Elements – Curve Fitting (Evren)
• Calibration to a mixture of data: Internal + External loss data; Internal loss data + Scenarios; Change of Measure; Scenarios
95

External Data in Modeling
• There is no consensus in the industry on a scaling methodology that would allow external data to be incorporated directly into capital modeling
• Current techniques focus on indicators of business size and location; however, risk profiles vary substantially between institutions depending on the specific controls in place and differences in culture
• Using external data (even if it were accurately scaled) would allocate capital to businesses based on loss experience completely outside their control, making it difficult to gain acceptance and support within the institution
• Percentiles of the empirical distribution of external losses can be compared to percentiles of candidate severity distributions fit with internal data (see the sketch below)
• Fit severity distributions using scaled external data combined with internal data, and compare to the fit using only internal data
96
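A sketch of the quantile comparison described above; both loss samples and the lognormal choice are hypothetical placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
internal = rng.lognormal(10, 2, size=250)        # hypothetical internal losses
external = rng.lognormal(10.5, 2.2, size=5000)   # hypothetical (scaled) external losses

# Fit the candidate severity distribution on internal data only
s, loc, scale = stats.lognorm.fit(internal, floc=0)

# Compare upper percentiles of the fitted distribution to the external empirical distribution
for q in (0.90, 0.95, 0.99, 0.999):
    fitted = stats.lognorm.ppf(q, s, loc, scale)
    empirical = np.quantile(external, q)
    print(f"q={q:.3f}  internal-fit={fitted:>14,.0f}  external-empirical={empirical:>14,.0f}")
```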

Incorporating Scenario Analysis into Modeling
• The outcome of scenario analysis can be used to validate the fit of a severity distribution by comparing relevant quantiles of the distribution to the severity and likelihood of the hypothetical scenario
• Scenario data cannot simply be appended to internal loss data, as doing so inflates the likelihood of the scenario losses within the sample: adding a 1-in-50-year loss event to 3 years of internal loss data effectively treats it as a 1-in-3-year event
• One approach to incorporating scenario data points into capital modeling is to fit two distributions, one with scenario data and one with internal loss data, splicing the two distributions together or drawing independently from each in simulation
– Significant discontinuities between scenarios and internal data pose problems for this approach
– It is difficult to assess the appropriateness of using two separate distributions when there is little internal data
97

Combining Scenario and Internal Loss Data
• Scenarios cannot be added directly to the internal loss data for any unit of measure; the frequency and severity of the internal loss data have to be modified before they are combined with scenario events in calculating capital figures
• The frequencies of the scenario and internal loss data may not be aligned, and naively combining scenarios with internal data will place undue weight on the scenario
• Dutta and Babbel (2010) propose a method for appropriately merging internal data with a set of loss scenarios, each defined by a frequency and a loss interval (e.g., an event with losses between 200 and 250 million occurring once every 20 years)
• This method systematically extends the frequency of historical loss data to match the frequency of each scenario event on a normalized scale; severity distributions estimated with the underlying loss data are then transformed to accommodate the scenarios
98

Combining Scenario and Loss Data (Dutta 2010)
1. Use internal loss data to estimate severity and frequency distributions for a unit of measure (ET/LoB)
2. Formulate a scenario with loss interval [a, b] and frequency m/t; m events are observed every t years
3. If the frequency of losses n is distributed as freq(n), then in t years we would expect to see … [equation not recoverable]
4. Take y draws from the severity distribution estimated with the internal data. We observe k … [remainder of steps 4–7 not recoverable]
99

Evaluating Distributions Using Change of Measure (COM)
• COM metrics can be used to evaluate the relevance of scenarios in updating a given severity distribution, and also the ability of a given severity density function to predict a scenario
• Each scenario that is combined with the internal data will change the implied probability of that event with respect to the historical probability:
– Historical probability: probability of the scenario occurring based on the severity distribution estimated with internal data only
– Implied probability: probability of the scenario occurring based on the implied severity distribution obtained by combining internal loss data with scenario data
• The COM is informative of the relevance of a given scenario in updating the severity distribution
• The appropriateness of the distribution choice can be evaluated based on COM values for a set of scenarios
• Comparing COM values across different severity density functions, a lower COM implies that a distribution is a better predictor of a scenario
100

Evaluating Alternative Distributions Based on Change of Measure
• Instead of fitting distributions independently with internal and scenario data, the COM approach can be used to jointly assess the fit of distributions to both internal data and scenarios:
1. For a set of candidate distributions, fit each using the available internal data
2. For each candidate distribution, calculate the COM based on the implied probability from refitting the given distribution with the scenario data
• The lowest COM implies that a given distributional choice more easily accommodates both the internal data and the hypothetical scenarios
• The parameterization of the distribution estimated with the internal and scenario data, and with the lowest COM, would be selected for capital modeling (a rough numerical illustration follows below)
101
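A rough, heavily hedged illustration of comparing COM values across candidate distributions. It reads the COM as the ratio of the scenario interval's probability under a scenario-adjusted refit to its probability under the internal-data-only fit; this is our simplified interpretation, not Dutta and Babbel's exact algorithm, and the "refit" below (appending the scenario midpoint) is a crude placeholder for their frequency-matched merge. The loss sample and the scenario interval are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
internal = rng.lognormal(10, 2, size=300)   # hypothetical internal losses

# Hypothetical scenario: a loss between 200m and 250m
a, b = 200e6, 250e6

def interval_prob(dist, data, lo, hi):
    params = dist.fit(data, floc=0)
    return dist.cdf(hi, *params) - dist.cdf(lo, *params)

for name, dist in {"lognormal": stats.lognorm, "loglogistic": stats.fisk}.items():
    hist_p = interval_prob(dist, internal, a, b)
    # Crude "implied" refit: append the scenario midpoint to the internal sample
    # (placeholder only; the full method matches frequencies before merging)
    impl_p = interval_prob(dist, np.append(internal, (a + b) / 2), a, b)
    com = impl_p / hist_p
    print(f"{name:12s} historical P={hist_p:.2e}  implied P={impl_p:.2e}  COM={com:.2f}")
```

A distribution whose COM stays close to 1 already "expected" the scenario; a large COM signals that the scenario materially changes the fitted tail.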

References
• Dutta, K. and Babbel, D. (2010). Scenario Analysis in the Measurement of Operational Risk Capital: A Change of Measure Approach. Available at http://fic.wharton.upenn.edu/fic/papers/10/10-10.pdf
• Dutta, K. and Perry, J. (2006). A Tale of Tails: An Empirical Analysis of Loss Distribution Models for Estimating Operational Risk Capital. Federal Reserve Bank of Boston, Working Paper No. 06-13
• Embrechts, P., Degen, M. and Lambrigger, D. The Quantitative Modeling of Operational Risk: Between g-and-h and EVT. Available at http://www.math.ethz.ch/%7Edegen/Lausanne.pdf
• Moscadelli, M. (2004). The Modelling of Operational Risk: Experiences with the Analysis of the Data Collected by the Basel Committee. Bank of Italy, Working Paper No. 517
• Makarov, M. (2006). Extreme Value Theory and High Quantile Convergence. Journal of Operational Risk, Volume 1, Number 2, Summer 2006, pp. 51–57
102

Using Only Scenario Data
• In some instances, institutions or individual units of measure will have little to no internal loss data available for modeling (e.g., loss data collection has only just commenced)
• It is still meaningful to summarize scenario data into a distribution to calculate capital:
– Each scenario's severity and likelihood corresponds to a point on the severity CDF
– Calibrate the parameters of the severity distribution to fit its CDF to the scenario severity/likelihood data points, choosing the weights of the fit so that the tail of the distribution fits the data better (see the sketch below)
[Chart: severity CDF (0 to 1) versus severity, with scenario points on the curve]
103
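A minimal sketch of a weighted CDF fit to scenario points, assuming a lognormal severity; the scenario severities, non-exceedance probabilities and weights are all hypothetical:

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical scenario points: (severity, non-exceedance probability)
sev = np.array([5e6, 20e6, 50e6, 150e6])
q = np.array([0.90, 0.98, 0.995, 0.999])
w = np.array([1.0, 1.0, 2.0, 4.0])        # heavier weight on the tail points

def sse(theta):
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    model_q = stats.lognorm.cdf(sev, s=sigma, scale=np.exp(mu))
    return np.sum(w * (model_q - q) ** 2)  # weighted squared error on the CDF

res = optimize.minimize(sse, x0=[np.log(1e6), 2.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
q999 = stats.lognorm.ppf(0.999, sigma_hat, scale=np.exp(mu_hat))
print(f"mu={mu_hat:.2f}, sigma={sigma_hat:.2f}, fitted 99.9% severity={q999:,.0f}")
```

Translating a scenario's frequency into a quantile of the severity CDF requires an assumed overall loss frequency; the probabilities above take that step as given.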

Goodness of Fit Statistics
• A good way to evaluate the fit of a severity distribution for a given set of loss data is to plot the empirical CDF against the CDF of the fitted distributions (using the losses). How do the fitted distributions compare to the data? To the scenarios?
• Lognormal probability plots are also useful for observing the fit of the tail: the y-axis is the log loss level; the x-axis is the standard normal quantile NormSInv(p), where p is the proportion of the sample below exp(y)
• Formal goodness-of-fit statistics include the Kolmogorov-Smirnov test and the Anderson-Darling test
104

Kolmogorov-Smirnov Test
• The test evaluates the null hypothesis that a sample data set is drawn from a reference distribution, or that two sample data sets are drawn from the same distribution
• The test compares the differences between the CDF of the sample and the reference distribution, or between the two sample CDFs:
– Empirical CDF: Fₙ(x) = (1/n) Σᵢ I(xᵢ ≤ x)
– Test statistic: Dₙ = supₓ |Fₙ(x) − F(x)|
– Two-sample case: Dₙ,ₘ = supₓ |F₁,ₙ(x) − F₂,ₘ(x)|
• Requires a large sample of data to reject the null
• The statistic is compared to critical values of the Kolmogorov distribution (see the sketch below)
105
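A short sketch of both KS variants using SciPy; the loss samples and the lognormal reference are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
losses = rng.lognormal(10, 2, size=200)          # hypothetical internal losses

# One-sample KS test against a fitted lognormal
s, loc, scale = stats.lognorm.fit(losses, floc=0)
d_stat, p_val = stats.kstest(losses, lambda x: stats.lognorm.cdf(x, s, loc, scale))
print(f"one-sample KS: D={d_stat:.3f}, p={p_val:.3f}")

# Two-sample KS test: do two loss sub-categories follow the same distribution?
sub_a = rng.lognormal(10, 2, size=80)
sub_b = rng.lognormal(10.2, 2.1, size=60)
d2, p2 = stats.ks_2samp(sub_a, sub_b)
print(f"two-sample KS: D={d2:.3f}, p={p2:.3f}")
```

Note that when the reference distribution's parameters are estimated from the same sample, the standard KS p-value is optimistic; a bootstrap or Lilliefors-type correction is commonly applied.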

Anderson-Darling Test
• Evaluates the null hypothesis that a sample of data came from a population with a specific distribution
• A modified version of the KS test which gives more weight to the fit of the data in the tail
• A disadvantage is that the distribution of the test statistic under the null depends on the hypothesized distribution
• Fit a distribution with the data, then use the fitted distribution in the test statistic to evaluate the null hypothesis that the data was drawn from that fitted distribution
• One-sided test – smaller values of the test statistic are preferred
106
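A hedged example for the special case of testing lognormality: applying the Anderson-Darling test to the log of the losses turns it into a normality test, for which SciPy tabulates critical values with estimated parameters. The loss sample is hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
losses = rng.lognormal(10, 2, size=200)   # hypothetical internal losses

# Lognormality of severities <=> normality of log(losses)
result = stats.anderson(np.log(losses), dist="norm")
print(f"A^2 = {result.statistic:.3f}")
for sl, cv in zip(result.significance_level, result.critical_values):
    print(f"  reject at {sl:>4.1f}% level? {result.statistic > cv}")
```

For other candidate severity distributions the null distribution of A² is not tabulated, and a parametric bootstrap of the statistic is the usual workaround.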

Capital Stability
• When modeling severity with fat-tailed distributions, the resulting capital numbers can become sensitive to the addition of new data points, producing unstable capital figures over time as new internal data is collected
• It is important to evaluate the stability of high percentiles of fitted loss distributions with respect to changes in the data: for a given set of loss data points, use random subsamples to obtain multiple fits of the same distribution and compare the 99th (or higher) percentiles across the fitted distributions (see the sketch below)
• A distribution that yields a small goodness-of-fit test statistic will not necessarily provide the most stable capital figures over time
107
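A minimal subsampling sketch of the stability check described above, with a hypothetical loss sample and a lognormal severity:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
losses = rng.lognormal(10, 2, size=300)   # hypothetical internal losses

q999 = []
for _ in range(200):
    # Refit the same distribution on a random 80% subsample
    sub = rng.choice(losses, size=int(0.8 * len(losses)), replace=False)
    s, loc, scale = stats.lognorm.fit(sub, floc=0)
    q999.append(stats.lognorm.ppf(0.999, s, loc, scale))

q999 = np.array(q999)
print(f"99.9% severity across subsample fits: median={np.median(q999):,.0f}, "
      f"5th-95th pct range={np.quantile(q999, 0.05):,.0f} - {np.quantile(q999, 0.95):,.0f}")
```

A wide 5th–95th percentile range relative to the median is a warning sign that reported capital will swing as new losses arrive.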

Generating Capital Numbers
• Monte Carlo estimates of high quantiles of loss distributions suffer from high variance
• Deterministic methods such as Fourier-transform approaches can provide more accuracy than Monte Carlo simulation, but they are not as flexible (e.g., insurance benefits can be modeled directly in Monte Carlo simulations)
• A high number of simulations (10^5 or greater per unit of measure) is required for the capital number to converge; the number required depends on the severity distribution (kurtosis)
• Increase the number of simulations until the desired variance of the capital number is achieved
108

Extreme Value Theory (EVT)
• An alternative to fitting separate severity and frequency distributions (LDA) is to model the probability of losses exceeding a given threshold using EVT
• A common Peaks-over-Threshold model of extreme values employs a Generalized Pareto Distribution (GPD) to model the likelihood of losses exceeding a high threshold
• For a large class of distributions commonly used in statistics and actuarial science (normal, lognormal, χ², t, F, gamma, exponential, uniform, beta, etc.), the distribution of exceedances above a threshold converges to a GPD as the threshold approaches the right endpoint of the distribution
• Frequency can be incorporated by representing exceedances with a Peaks-over-Threshold point process (POT-PP)
• Capital numbers can be obtained by combining the severity of exceedances with the frequency of exceedances when they are modeled by a POT-PP (a basic POT fit is sketched below)
109
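A basic Peaks-over-Threshold sketch: fit a GPD to exceedances above a high threshold and read off a tail quantile of the severity distribution. The loss sample and the 90th-percentile threshold are hypothetical choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
losses = rng.lognormal(10, 2, size=1000)   # hypothetical internal losses

# Peaks-over-threshold: keep exceedances above a high threshold u
u = np.quantile(losses, 0.90)
exceed = losses[losses > u] - u

# Fit a Generalized Pareto Distribution to the exceedances (location fixed at 0)
xi, loc, beta = stats.genpareto.fit(exceed, floc=0)

# Tail quantile estimate: P(X > u) is estimated empirically,
# the GPD models the conditional tail above u
p_u = (losses > u).mean()
q = 0.999
x_q = u + stats.genpareto.ppf(1 - (1 - q) / p_u, xi, loc, beta)
print(f"threshold u={u:,.0f}, shape xi={xi:.2f}, scale beta={beta:,.0f}")
print(f"POT estimate of the {q:.1%} severity quantile: {x_q:,.0f}")
```

The sensitivity of xi and of the resulting quantile to the threshold choice is exactly the practical difficulty discussed on the next slide.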

LDA vs EVT
• From a practical perspective, it can be difficult to determine the appropriate threshold from which to model loss exceedances using EVT POT-PP: EVT relies on convergence in distribution to the GPD as the threshold approaches the right endpoint of the underlying distribution, and the higher the threshold, the less data is available to estimate the GPD parameters
• "Many different techniques being tested by researchers are centered around EVT. In many of those cases we observe that attempts are made to fit a distribution or apply a method without understanding the characteristics of the loss data or the limitation of the models" (Dutta and Perry, 2006)
• Based on the findings of Dutta and Perry (2006), in which operational risk data seem to be modeled better by the g-and-h distribution than by Peaks-over-Threshold (POT) EVT approaches, Embrechts et al. (2006) outline several theoretical grounds on which POT EVT approaches may be unsatisfactory for fitting data from g-and-h density functions
110

EVT and Convergence in Quantiles and Shortfall (Makarov, 2006)
• EVT POT-PP approximations of the tail of a distribution are based on convergence in distribution to the GPD
• Convergence in distribution is a weak form of convergence and does not guarantee convergence in mean or convergence of quantiles
• EVT approximations can exhibit convergence in quantiles (uniform relative quantile convergence), but not for all families of distributions (not, for example, the log-gamma)
• When applying the EVT method to a distribution with finite mean, it can produce an approximation that has infinite mean or significantly different high quantiles and shortfalls
• Although EVT can be useful for approximating the tails of unknown distributions without making assumptions about the underlying distribution, LDA provides a more flexible and less sophisticated approach
111

False Precision
• Although there are sophisticated methods available for fitting loss distributions and generating capital figures from historical loss data, these methodologies are not robust without large samples of data (generally more than will be available for operational risk modeling)
• Without a substantial number of tail loss observations, capital numbers represent a significant extrapolation based on the assumed severity distribution
• Even if scenario data is used to fit the tail of the severity distribution, it is still an approximation of very remote and unlikely events
• Expert judgment has an important role to play in the AMA framework – it has to be recognized and carefully managed
• Estimated loss distributions are ultimately extrapolations of available loss experience to estimate highly unlikely events and will always require the overlay of management judgment
• A successful AMA framework cannot be over-reliant on sophisticated fitting techniques, capital calculations and test statistics – at the end of the day, the precision of these numbers is not defendable
• Institutions have fallen into the trap of using complexity to disguise uncertainty in the AMA framework
112

Qualitative Assessment of AMA Framework
• Dutta and Perry (2006) provide a qualitative yardstick for evaluating AMA models:
1. Good fit – Statistically, how well does the method fit the data?
2. Realistic – If a method fits well in a statistical sense, does it generate a loss distribution with a realistic capital estimate?
3. Well specified – Are the characteristics of the fitted data similar to the loss data and logically consistent?
4. Flexible – How well is the method able to reasonably accommodate a wide variety of empirical loss data?
5. Simple – Is the method easy to apply in practice, and is it easy to generate random numbers for the purposes of loss simulation?
113

References for Capital Modeling
• Dutta, K. and Babbel, D. (2010). Scenario Analysis in the Measurement of Operational Risk Capital: A Change of Measure Approach. Available at http://fic.wharton.upenn.edu/fic/papers/10/10-10.pdf
• Dutta, K. and Perry, J. (2006). A Tale of Tails: An Empirical Analysis of Loss Distribution Models for Estimating Operational Risk Capital. Federal Reserve Bank of Boston, Working Paper No. 06-13
• Embrechts, P., Degen, M. and Lambrigger, D. The Quantitative Modeling of Operational Risk: Between g-and-h and EVT. Available at http://www.math.ethz.ch/%7Edegen/Lausanne.pdf
• Moscadelli, M. (2004). The Modelling of Operational Risk: Experiences with the Analysis of the Data Collected by the Basel Committee. Bank of Italy, Working Paper No. 517
• Makarov, M. (2006). Extreme Value Theory and High Quantile Convergence. Journal of Operational Risk, Volume 1, Number 2, Summer 2006, pp. 51–57
114

Quantitative Elements – Correlations
• Correlations by [Risk Types x Business Units x Regions]
• Across the risk types: nice to have but almost no data. Some risk types must be correlated (e.g., fraud and legal). Ad-hoc parameterization of correlations is possible (0%, 25%, 50%) – typically frequency draws are correlated (not severity)
• Utilizing the aggregate op risk losses for the business units:
– Time series of aggregate losses may be available for different business units and regions, from which we can infer the empirical correlations between the aggregate losses
– We first aggregate the losses from different risk types per business unit and region using zero or ad-hoc correlations by means of normal copulas
– We then aggregate the cumulative loss distributions (per region/business unit) using the empirical distributions (again by means of normal copulas)
– The correlations among the risk types would still be the weakest link and need to be empirically estimated as more data is collected over time
115

[Diagram: aggregation of standalone business unit capital into enterprise capital, with correlations (ρ) among the risk types]
116

Aggregation Across the Risk Types
• Typically, frequency is correlated (as opposed to severity)
• Normal (Gaussian) copulas are used to impose the correlation on the frequency draws (a simulation sketch follows below)
117
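A sketch of correlating the frequency draws of two risk types with a Gaussian copula while keeping severities independent; all parameters (Poisson means, lognormal severities, ρ = 0.25) are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n_years = 100_000

# Hypothetical frequency/severity parameters for two risk types
lam = np.array([12.0, 4.0])     # Poisson means
mu = np.array([9.0, 11.0])      # lognormal severity means (log-scale)
sigma = np.array([1.8, 2.2])

# Gaussian copula on the FREQUENCY draws
rho = 0.25
corr = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), corr, size=n_years)
u = stats.norm.cdf(z)                               # correlated uniforms
counts = stats.poisson.ppf(u, lam).astype(int)      # correlated annual counts

# Aggregate annual loss per risk type, then sum across risk types
annual = np.zeros((n_years, 2))
for j in range(2):
    annual[:, j] = [rng.lognormal(mu[j], sigma[j], n).sum() for n in counts[:, j]]

total = annual.sum(axis=1)
print(f"diversified 99.9% capital: {np.quantile(total, 0.999):,.0f}")
print(f"sum of standalone 99.9%:   "
      f"{sum(np.quantile(annual[:, j], 0.999) for j in range(2)):,.0f}")
```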

Aggregation Across the Business Units
• Need to aggregate the standalone business unit loss distributions, L1 & L2
• Normal copulas are used again: correlated draws from L1 and L2 are taken and then aggregated (see the sketch below)
118
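A sketch of aggregating two simulated standalone aggregate loss distributions through a Gaussian copula on their empirical quantiles; the samples and ρ = 0.30 are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Hypothetical standalone aggregate annual loss samples for two business units
# (in practice these come from each unit's own frequency/severity simulation)
L1 = rng.lognormal(12.0, 1.5, size=100_000)
L2 = rng.lognormal(11.5, 1.8, size=100_000)

# Gaussian copula: correlated uniforms mapped to the empirical quantiles of L1, L2
rho = 0.30
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=100_000)
u = stats.norm.cdf(z)
draw1 = np.quantile(L1, u[:, 0])
draw2 = np.quantile(L2, u[:, 1])

enterprise = draw1 + draw2
standalone = np.quantile(L1, 0.999) + np.quantile(L2, 0.999)
diversified = np.quantile(enterprise, 0.999)
print(f"sum of standalone capital:      {standalone:,.0f}")
print(f"diversified enterprise capital: {diversified:,.0f}")
print(f"diversification benefit:        {standalone - diversified:,.0f}")
```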

16:15 – 16:30 Coffee Break
119

16:30 – 17:30 Governance and Validation; Operationalizing AMA; Management Buy-In; Roles and Responsibilities: Corporate vs Front Line; Allocation of Operational Risk Capital; IT Infrastructure; The appropriate level of granularity for allocation among the geographies, business units and business lines; Allocation of capital generated by shared service centres; Allocation of diversification; Driving business benefits: utilizing AMA in effective capital and risk management
120

Validation
• What is different about OpVaR validation?
[Diagram: Market VaR → Market Risk EC → Credit Risk EC → OpVaR; validation moves from backtesting the inputs towards backtesting the quantiles of the distribution (and thus the distribution assumptions)]
121

Governance and Validation
• In the absence of data, "full" validation is not possible; we therefore need to rely on quality assurance and external benchmarking. A comprehensive validation spectrum includes:
1. Quality Assurance embedded in the production cycle:
– Enterprise relative-consistency check per risk type
– Examination of the large quantiles of the fitted loss distributions against the available data
– Use of the available data to examine the RCSA and Scenario Analysis results
2. Model vetting
3. External benchmarking
4. Data integrity and maintenance
5. Use Test
• It is important to use the right skill set and expertise for each component of validation
122

Quality Assurance 1 – Ensuring Consistency
• Subject matter experts examine the relative accuracy across the enterprise per risk type
• For example: does it make sense that the legal risk for Business Unit A is twice as big as for Business Unit B? This is examined by an enterprise legal risk expert
[Diagram: Scenario Analysis and RCSA feed Frequency and Severity / OpVaR by Risk Type, subject to an Enterprise Consistency Check]
123

Quality Assurance 2 – Examine the Fitted Distributions
• It is possible to detect underestimation
• Out-of-sample testing with public data: compare the quantiles of the fitted distribution with external data per risk type. The frequency needs to be estimated, and scaling is an issue
• Out-of-sample (in-sample if used in the fitting) testing with consortium data: compare the quantiles of the fitted distribution with external data per risk type
• Data from individual institutions can be aggregated to create a longer time series for estimating frequency (assuming independence?)
[Diagram: Scenario Analysis, RCSA and External Data (Public, Consortium) feed Frequency and Severity Curve Fitting per Risk Type, which is then validated]
124

Quality Assurance 3 – Examine the RCSA and Scenario Analysis Data
• Internal and external data are used in validating the RCSA: inherent risk should align with external loss experience, and the gap between external (inherent) data and residual risk should be explained by the development/review of controls
• Residual risk and control quality validate loss scenarios against external and internal loss data
• This is valuable in any case, as it stimulates business introspection of potential risks
[Diagram: External Data, Internal Data, RCSA and Scenario Analysis feed Curve Fitting per Risk Type]
125

Model Vetting
• Vetting the mechanical aspects: verification of the code, the Monte Carlo simulation, the curve fitting algorithm, the choice of distributions, goodness of fit
• Separation of vettable and non-vettable elements: managing the scope to match the vetters' skill set is very important
126

External Benchmarking
• In the absence of sufficient data for full validation, peer benchmarking and industry studies are required for reasonability and comparability testing
• 12%–20% (op risk capital / minimum regulatory capital) appears to be a current industry benchmark, with banks at the lower end and insurance companies at the higher end
• Operational risk capital for non-AMA banks is higher than for AMA banks regardless of the exposure indicator used for scaling: the ratio of op risk capital to gross income is 10.8% for typical AMA banks, the Basic Indicator Approach requires 15% of gross income, and banks using the Standardized Approach (TSA) have a similar ratio of 12%–18%
• Regulators are moving to impose AMA capital floors (AMA ≥ roughly 85% of TSA). Going forward, EC < AMA – implications?
• Is this a top-down calibration?
127

Operationalizing AMA
• Management buy-in
• Roles and responsibilities: Corporate vs Front Line
• Allocation of operational risk capital
• IT infrastructure
• The appropriate level of granularity for allocation among the geographies, business units and business lines
• Allocation of capital generated by shared service centres
• Allocation of diversification
• Driving business benefits: utilizing AMA in effective capital and risk management
128

Management Buy-In – Some Tips
• Don't oversell OpVaR as the correct absolute number – emphasize that it provides an operational risk management framework in which the relative magnitudes of different operational risk types are quantified
• Get the business and management involved during the entire process
• Make clear that:
– This is not just a quantitative model but a process and framework, i.e., the business unit's experience and judgment is used; the model provides a quantitative and consistent framework to formalize this experience/judgment
– It is directionally right and clearly better than most benchmark-based approaches (expense-based, etc.)
– It is linked to the risk drivers, and thus gives business units control over their capital charge and allows them to make cost/benefit decisions on controls and insurance
• Explain the Use Test
• The fact that it is a regulatory requirement helps, but actively seek linkages to the business benefits
129

Roles and Responsibilities
• Corporate (EC Group, Operational Risk Group): set up the framework, coordinate, facilitate, govern, allocate, report
• Front-line risk managers: manage the risk
• Business: own the risk and the corresponding OpVaR
130

Dividing the Work within Corporate – A Key Partnership
[Diagram: division of work between the EC Group and the Op Risk Group, covering Framework Design, Calibration and Estimation; Scenario Analysis Design; Enterprise Consistency/Validation; Internal + External Data; RCSA and Scenario Analysis Facilitation; Coordination of Data Capture and Maintenance; Governance; Risk Taxonomy; Op Risk Data Marts Design; Operational Risk Drivers; OpVaR]
131

Allocation of Operational Risk Capital
• The appropriate level of granularity for allocation among the geographies, business units and business lines
• Allocation of diversification
• Allocation of capital generated by shared service centres (by volume, usage, etc.) – do they have control over the controls?
132

Allocation
• By Risk Type x Business Unit
• Capital numbers are calculated at the business-unit level, where performance is managed
133

Diversification
• Total diversification benefit = ∑ Standalone Capital – Enterprise Capital
• How should the diversification benefits be allocated?
[Diagram: standalone capital per business unit aggregated, with correlations among the risk types, into enterprise capital]
134

Allocation of Diversification Benefits
• Pro-rata allocation: diversified EC is allocated to sub-portfolios in proportion to standalone EC
• Marginal capital allocation: the difference between the EC required with the sub-portfolio as a component of the total portfolio and the EC required with the sub-portfolio removed
• Shapley allocation: the average of the marginal EC when a given sub-portfolio is added to all possible combinations of sub-portfolios in which it can be included (Denault, 2001)
• (A comparison of pro-rata and marginal allocation is sketched below)
135
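A hedged sketch comparing pro-rata and marginal allocation on simulated data; the three business unit loss samples, their correlation matrix, and the rescaling of the marginal contributions so they sum to the diversified total are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200_000

# Hypothetical correlated aggregate annual losses for three business units
z = rng.multivariate_normal(np.zeros(3),
                            [[1.0, 0.30, 0.20],
                             [0.30, 1.0, 0.25],
                             [0.20, 0.25, 1.0]], size=n)
losses = np.exp(np.array([11.0, 11.5, 10.5]) + np.array([1.5, 1.3, 1.8]) * z)

def ec(sample, q=0.999):
    return np.quantile(sample, q)

total = losses.sum(axis=1)
standalone = np.array([ec(losses[:, i]) for i in range(3)])
diversified = ec(total)

# Pro-rata allocation: scale standalone EC down to the diversified total
pro_rata = standalone * diversified / standalone.sum()

# Marginal allocation: EC(total) - EC(total without unit i),
# rescaled here so the allocations add up to the diversified total
marginal = np.array([diversified - ec(total - losses[:, i]) for i in range(3)])
marginal = marginal * diversified / marginal.sum()

print("standalone EC (m):", np.round(standalone / 1e6, 1))
print("pro-rata    (m):  ", np.round(pro_rata / 1e6, 1))
print("marginal    (m):  ", np.round(marginal / 1e6, 1))
```

Marginal (and Shapley) allocations reward sub-portfolios whose losses are less correlated with the rest of the enterprise, whereas pro-rata allocation spreads the benefit uniformly.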

Allocation of Diversification Benefits (cont'd)
• Risk contribution methodology: based on each sub-portfolio's contribution to the total variance of the loss distribution
• Tail risk contribution methodology: based on a sub-portfolio's marginal contribution to the portfolio losses within a defined region of the portfolio loss distribution
136

Diversification – No Fungibility between the Regions
• Diversification is allowed within each region but not between regions where capital is not fungible
[Diagram: Region 1 capital + Region 2 capital = Enterprise capital]
137

IT Infrastructure
• IT infrastructure is essential for supporting the AMA framework – tracking and coordinating the moving parts
• The IT system must integrate:
– Loss collection
– Relevant internal and external data must be referenced through the RCSA and Scenario Analysis
– Results of the RCSA, scenario workshops, and internal and external loss data must be integrated with capital modeling and fitting
– Results of the capital modeling must then be available for use in RCSA and control reviews to complete the cycle
– Audit tracking and workflow management throughout the entire cycle, to demonstrate the "Use Test" and the use of all 4 elements
• It can be very costly to develop the IT internally: the various components require input and design from subject matter experts who are usually siloed across the enterprise (capital modeling especially), and it is hard to get systems to talk to each other if they are developed independently
• Given the FTEs required for development and the potential pitfalls in achieving coordination, it is often more cost-effective to purchase a pre-packaged, off-the-shelf solution
138

Operational Risk and Regulation Software Rankings
139

Operational Risk and Regulation Software Rankings
140

Closing Remarks
• The profile of operational risk is growing in the insurance industry:
– The Solvency II standard formula will be insufficient, requiring the development of internal models
– Regulators will not hold banks and insurance companies to a double standard
– Rating agencies are increasingly interested in operational risk controls and processes
• The value of an AMA framework lies in the governance and validation processes linking the 4 inputs: internal and external data, Scenario Analysis and RCSA
• Distribution fitting and capital modeling are not defendable without transparent governance and validation against Scenario Analysis and RCSA
141

Closing Remarks (cont'd)
• A well-governed AMA has direct business benefits. It provides an effective framework for operational risk quantification and management, facilitating:
– Increased awareness of operational risk
– Early identification, and thus mitigation, of potential operational risk via Scenario Analysis and RCSA discussions – keeping an eye on the horizon rather than the rear-view mirror, being proactive rather than reactive
– OpVaR quantification provides a heat map indicating where to focus
• For these benefits to materialize we need to close the cycle of learning – when analyzing the OpVaR results we need to connect the dots
• It separates heavy capital users (business units, products, etc.) from others; this more accurate relative quantification of capital leads to effective capitalization, performance management and pricing
• It facilitates cost vs benefit decisions, such as investing in better controls, or buying insurance vs holding capital
142