Working Paper
Series
_______________________________________________________________________________________________________________________
National Centre of Competence in Research
Financial Valuation and Risk Management
Working Paper No. 52
Operational Risk: A Practitioner's View
Silvan Ebnöther Paolo Vanini
Alexander McNeil Pierre Antolinez
First version: November 2001
Current version: April 2003
This research has been carried out within the NCCR FINRISK project on
“Interest Rate and Volatility Risk”.
___________________________________________________________________________________________________________
Operational Risk: A Practitioner's View
By Silvan Ebnöther (a), Paolo Vanini (b), Alexander McNeil (c), and Pierre Antolinez (d)

(a) Corporate Risk Control, Zürcher Kantonalbank, Neue Hard 9, CH-8005 Zurich, e-mail: [anonimizat]
(b) Corresponding author. Corporate Risk Control, Zürcher Kantonalbank, Neue Hard 9, CH-8005 Zurich, and Institute of Finance, University of Southern Switzerland, CH-6900 Lugano, e-mail: [anonimizat]
(c) Department of Mathematics, ETH Zurich, CH-8092 Zurich, e-mail: [anonimizat]
(d) Corporate Risk Control, Zürcher Kantonalbank, Neue Hard 9, CH-8005 Zurich, e-mail: [anonimizat]
First version: November 2001, this version: April 28, 2003
Abstract
The Basel Committee on Banking Supervision ("the Committee") released a consultative document that included a regulatory capital charge for operational risk. Since the release of the document, the complexity of the concept of "operational risk" has led to vigorous and recurring discussions. We show that for a production unit of a bank with well-defined workflows, operational risk can be unambiguously defined and modelled. The results of this modelling exercise are relevant for the implementation of a risk management framework, and the pertinent risk factors can be identified. We emphasize that only a small share of all workflows makes a significant contribution to the resulting VaR, a result that is quite robust under stress testing. Since the definition and maintenance of processes is very costly, this last result is of major practical importance. Moreover, the approach allows us to distinguish between the respective domains of quality management and risk management. Finally, the methodology is designed to relate risk measurement to the concerns and risk tolerance of risk management.
Keywords: Operational Risk, Risk Management, Extreme Value Theory, VaR
JEL Classification: C19, C69, G18, G21
Acknowledgement: We are especially grateful to Professor Embrechts (ETH) for his profound insights and numerous valuable suggestions. We are also grateful to A. Allemann, U. Amberg, R. Hottinger and P. Meier from Zürcher Kantonalbank for providing us with the data and relevant practical insight. We finally thank the participants of the seminars at the Universities of Pavia, Lugano and Zurich and of the IBM 2002 Finance Forum, Zurich. Paolo Vanini would like to thank the Swiss National Science Foundation (NCCR FINRISK) for its financial support.
1 Introduction
In June 1999, the Basel Committee on Banking Supervision ("the Committee") released its consultative document "The New Basel Capital Accord" ("the Accord"), which included a proposed regulatory capital charge to cover "other risks". Operational risk (OR) is one such "other risk". Since the release of this document and its sequels (BIS (2001)), the industry and the regulatory authorities have been engaged in vigorous and recurring discussions. It is fair to say that, as far as operational risk is concerned, the "Philosopher's Stone" is yet to be found.

Some of the discussions are conducted on a rather general and abstract level. For example, there is still ongoing debate concerning a general definition of OR. The one adopted by the BIS Risk Management Group (2001) is "the risk of direct loss resulting from inadequate or failed internal processes, people and systems or from external events." How to translate this definition into a capital charge for OR has not yet been fully resolved; see for instance Danielsson et al. (2001). For the moment, legal risk is included in the definition, whereas systemic, strategic and reputational risks are not.
The present paper contributes to these debates from a practitioner's point of view. To this end, we consider a number of operational risk issues from a case study perspective. The case study is defined for a bank's production unit and factors in self-assessment as well as historical data. We try to answer the following questions quantitatively:

1. Can we define and model OR for the workflow processes of a bank's production unit (production processes)? A production process is roughly a sequence of business activities; a definition is given at the beginning of Section 2.
2. Is a portfolio view feasible, and with what assets?
3. Which possible assessment errors matter?
4. Can we model OR such that both the risk exposure and its causes are identified? In other words, not only risk measurement but risk management is the ultimate goal.
5. Which are the crucial risk factors?
6. How important is comprehensiveness? Do all workflows in our data sample contribute significantly to the operational risk of the business unit?
The results show that we can give reasonable answers to all the questions raised above. More specifically, if operational risk is modelled on well-defined objects, all vagueness is dispelled, although a different methodology and different statistical techniques are used than for market or credit risk. An important insight from a practitioner's point of view is that not all processes in an organization need to be considered equally for the purpose of accurately defining the operational risk exposure. The management of operational risks can focus on key issues; a selection of the relevant processes significantly reduces the costs of defining and designing the workflow items. To achieve this goal, we construct the Risk Selection Curve (RiSC), which singles out the relevant workflows needed to estimate the risk figures. In a next step, the importance of the four risk factors considered is analyzed. As a first result, the importance of the risk factors depends non-linearly on the confidence level used in measuring risk. While all factors matter for quality management, fraud and system failure have an unreliable impact on the risk figures. Finally, the proposed methodology enables us to link risk measurement to the needs of risk management: according to the risk tolerance of the management, the relevant workflows and risk factors are selected using RiSC and the risk factor contribution analysis.

The paper is organized as follows. In Section 2 we describe the case study. In Section 3 the results obtained with the available data are discussed and compared for the two models, and some important issues raised by the case study are discussed. Section 4 concludes.
2 Case Study
The case study was carried out for Zürcher Kantonalbank's Production Unit. The study comprises 103
production processes.
2.1 Modelling Operational Risk: Framework
The most important and difficult task in the quantification of operational risk is to find a reasonable model for the business activities¹. We found it useful, for both practical and theoretical reasons, to think of quantifiable operational risk in terms of directed graphs. Although this approach is not strictly essential in the present paper, full-fledged graph theory is crucial for operational risk management (see Ebnöther et al. (2002) for a theoretical approach). In this paper, the overall risk exposure is considered solely on an aggregated graph level for each process. Considering an aggregated level first is essential from a practical feasibility point of view: given the costly nature of analyzing the operational risk of processes quantitatively on a "microscopic level", the important processes have to be selected first.
In summary, each workflow is modelled as a graph consisting of a set of nodes k_j and a set of directed edges e_k. Given this skeleton, we next attach risk information. To this end, we use the following facts: at each node (representing, say, a machine or a person) errors in the processing can occur (see Figure 1 for an example). It follows from Figure 1 that a process consists of (many) sub-processes.
Insert Figure 1 around here.
The errors have both a cause and an effect on the performance of the process. More precisely, at each node there is a (random) input of information defining the performance. The errors then affect this input to produce a random output performance. The causes at a node are the risk factors, examples being fraud, theft or computer system failure. The primary objective is to model the link between effects and causes. There are, of course, numerous ways in which such a link can be defined. As operational risk management is basically loss management, our prime concern is finding out how causes, through the underlying risk factors, impact losses at individual edges.
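To make this skeleton concrete, the following minimal Python sketch shows one possible way to encode a workflow with risk information attached to its edges. The data structures, field names and numbers are illustrative assumptions made for this sketch, not the authors' implementation.

```python
from dataclasses import dataclass, field

# Illustrative encoding (not the authors' implementation): a workflow is a
# directed graph whose nodes are processing steps and whose edges carry the
# self-assessed risk information for the relevant risk factors.

RISK_FACTORS = ("system failure", "theft", "fraud", "error")

@dataclass
class Node:
    name: str                      # e.g. a machine or a person performing a step

@dataclass
class Edge:
    source: str
    target: str
    # assessed loss-event frequency per year, per risk factor (hypothetical)
    frequency: dict = field(default_factory=dict)
    # assessed maximum loss per risk factor, in CHF (hypothetical)
    max_loss: dict = field(default_factory=dict)

@dataclass
class Workflow:
    name: str
    nodes: list
    edges: list

# A toy process with two nodes and a single aggregated edge (the Model 1 view).
wf = Workflow(
    name="edit returned mail",
    nodes=[Node("mail received"), Node("mail corrected and re-sent")],
    edges=[Edge("mail received", "mail corrected and re-sent",
                frequency={"error": 2.0, "fraud": 0.05},
                max_loss={"error": 10_000.0, "fraud": 250_000.0})],
)
```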
We refer to the entire probability distribution associated with a graph as the operations risk distribution. In our modelling approach, we distinguish between this distribution and the operational risk distribution: while the operations risk distribution is defined for all losses, the operational risk distribution considers only losses larger than a given threshold.

Operational risk modelling, as defined by the Accord, corresponds to the operations risk distribution in our setup. In practice, this identification is of little value, as every bank distinguishes between small and large losses. While small losses are frequent, large losses are very seldom encountered. This implies that banks know a lot about the small losses and their causes but have no experience with large losses. Hence, an efficient organization typically exists for small losses. The value added of quantitative operational risk management for banks thus lies in the domain of large losses (low intensity, high severity). This is the reason why we differentiate between operations risk and operational risk where quantitative modelling is concerned. We summarize our definition of operational risk as follows:

Definition 1 The quantitative operational risk of a set of production processes consists of those operations risks which exceed a given threshold value.
Whether or not we can use graph theory to calculate operational risk critically depends on the existence of standardized and stable workflows within the banking firm. The cost of defining processes within a bank can be prohibitively large (i) if all processes need to be defined, (ii) if they are defined at a very granular level, or (iii) if they are not stable over time.
¹ Strictly speaking there are three different objects: business activities; workflows, which are a first model of these activities; and graphs, which are a second model of business activities based on the workflows. Loosely speaking, graphs are mathematical models of workflows with attributed performance and risk information relevant to the business activities. In the sequel we use business activities and workflows as synonyms.
2.2 Data
An important issue in operational risk is data availability. In our study we use both self-assessment and historical data. The former are based on expert knowledge. More precisely, the respective process owner valued the risk of each production process. To achieve this goal, standardized forms were used in which all entries in the questionnaire were properly defined. The experts had to assess two random events:

1. The frequency of the random time of loss. For example, the occurrence probability of an event for a risk factor could be rated "high/medium/low" by the expert. By definition, the "medium" class might, for example, comprise events with a return time of less than one (four) years. (A sketch of how such qualitative classes can be mapped to model parameters is given after this list.)
2. The experts had to estimate the maximum and minimum possible losses in their respective processes. The assumed severity distribution derived from the self-assessment is calibrated using the loss history². This procedure is explained in Section 2.4.
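As an illustration of how such qualitative frequency classes could be translated into model parameters, the sketch below maps "high/medium/low" to Poisson intensities via assumed return times. The class boundaries and function names are hypothetical, not the bank's actual calibration.

```python
import math

# Hypothetical mapping from qualitative self-assessment classes to annual
# Poisson intensities: a return time of T years corresponds to lambda = 1/T.
RETURN_TIME_YEARS = {"high": 0.5, "medium": 2.0, "low": 10.0}   # illustrative

def intensity_from_class(freq_class: str) -> float:
    """Annual Poisson intensity lambda implied by a qualitative frequency class."""
    return 1.0 / RETURN_TIME_YEARS[freq_class]

def prob_at_least_one_event(freq_class: str, horizon_years: float = 1.0) -> float:
    """P(N >= 1) over the horizon for a homogeneous Poisson process."""
    lam = intensity_from_class(freq_class)
    return 1.0 - math.exp(-lam * horizon_years)

print(prob_at_least_one_event("medium"))   # ~0.39 for an assumed two-year return time
```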
When we use expert data, we usually possess sufficient data to fully specify the risk information. The disadvantage of such data concerns their quality. As Rabin (1998) lucidly demonstrates in his review article, people typically fail to apply the mathematical laws of probability correctly and instead create their own "laws", such as the "law of small numbers". An expert-based database thus needs to be designed such that the most important and prominent biases are circumvented, and a sensitivity analysis has to be carried out. We therefore represented probabilistic judgments in the case study unambiguously as a choice among real-life situations.
We found three principles especially helpful in our data collection exercise:

1. Principle I: Avoid direct probabilistic judgments.
2. Principle II: Choose an optimal interplay between the experts' know-how and the modelling. Hence the scope of the self-assessment has to be well defined. Consider for example the severity assessment: a possible malfunction in a process leads to losses in the process under consideration, but the same malfunction can also affect other workflows within the bank. Experts have to be aware of whether they adopt a local or a more global point of view in their assessment. In view of the pitfalls inherent in probabilistic judgments, experts should be given as narrow a scope as possible. They should focus on the simplest estimates, and model builders should derive the more complicated relationships from these estimates.
3. Principle III: Implement the right incentives. In order to produce the best result it is important not only to advise the experts on what information they have to deliver, but also to make clear why it is beneficial for them and for the whole institution to do so. A second incentive problem concerns accurate representation. Specifically, pooling behavior should be avoided. By and large, the process experts can be classified into three categories at the beginning of a self-assessment: those who are satisfied with the functioning of their processes; those who are not satisfied with the status quo but have so far been unable to improve their performance; and, finally, experts who know well that their processes should be redesigned but have no intention of doing so. For the first type, making an accurate representation would not appear to be a problem. The second group might well exaggerate the present status to be worse than it in fact is. The third group has an incentive to mimic the first type. Several measures are possible to avoid such pooling behavior, e.g. having other employees cross-check the assessment values and comparing with loss data where available. Ultimately, common sense on the part of the experts' superiors can reduce the extent of misspecified data due to pooling behavior.
The historical data are used for the calibration of the severity distribution (see Section 2.4). At this stage, we restrict ourselves to noting that information regarding the severity of losses is confined to the minimum/maximum loss values derived from the self-assessment.
² The loss history was not used in Ebnöther et al. (2002) because the required details were not available. The soundness of the results has been enhanced by the availability of this extended data.
2.3 The Model
Within the above framework, the following steps summarize our quantitative approach to operational risk:

1. First, data are generated through simulation starting from expert knowledge.
2. To attain more stable results, the distribution of large losses is modelled using extreme value theory.
3. Key risk figures are calculated for the chosen risk measures. We calculate the VaR and the conditional VaR (CVaR)³.
4. A sensitivity analysis is performed.
Consider a business unit of a bank with a number of production processes. We assume that for workflow i there are 4 relevant risk factors R_{i,j}, j = 1,…,4, leading to a possible process malfunction, such as system failure, theft, fraud, or error. Because we do not have any experience with the two additional risk factors external catastrophes and temporary loss of staff, we have not considered them in our model. In the present model we assume that all risk factors are independent.

To generate the data, we have to simulate two risk processes: the stochastic time of a loss event occurrence and the stochastic loss amount (the severity) of an event expressed in a given currency. The number N_{i,j} of workflow i malfunctions caused by risk factor j and the associated severities W_{i,j}(n), n = 1,…,N_{i,j}, are derived from expert knowledge. N_{i,j} is assumed to be a homogeneous Poisson process. Formally, the inter-arrival times between successive losses are i.i.d., exponentially distributed with finite mean 1/λ_{i,j}. The parameters λ_{i,j} are calibrated to the expert knowledge database.

The severity distributions W_{i,j}(n) ∼ F_{i,j}, for n = 1,…,N_{i,j}, are estimated in a second step. The distribution of the severity W_{i,j}(n) is modelled in two different ways. First, we assume that the severity follows a combined Beta and generalized Pareto distribution (GPD, see Embrechts et al. (1997)). In the second model, a lognormal distribution is used to replicate the severity.
If the (i,j)-th loss arrival process N_{i,j}(t), t ≥ 0, is independent of the loss severity process {W_{i,j}(n)}_{n∈ℕ}, and the W_{i,j}(n) are i.i.d., then the total loss experienced by process i due to risk type j up to time t,

S_{i,j}(t) = \sum_{n=1}^{N_{i,j}(t)} W_{i,j}(n),

is called a compound Poisson process, with S_{i,j}(t) = 0 if N_{i,j}(t) = 0. We always simulate one year; for example, 10,000 simulations of S(1) means that we simulate the total first-year loss 10,000 times.
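The simulation of the annual compound Poisson loss S_{i,j}(1) can be sketched as follows. This is a minimal NumPy illustration under assumed, fictitious parameters and a placeholder severity distribution, not the production code used in the study.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_annual_loss(lam: float, severity_sampler, n_sims: int = 10_000) -> np.ndarray:
    """Simulate S_{i,j}(1) = sum_{n=1}^{N_{i,j}(1)} W_{i,j}(n) for one workflow /
    risk-factor pair: N ~ Poisson(lam), severities i.i.d. from severity_sampler."""
    counts = rng.poisson(lam, size=n_sims)
    totals = np.zeros(n_sims)
    for k, n in enumerate(counts):
        if n > 0:
            totals[k] = severity_sampler(n).sum()
    return totals

# Placeholder severity sampler (stand-in for the calibrated Beta-GPD mixture of
# Section 2.4): lognormal severities with hypothetical parameters.
lognormal_severity = lambda n: rng.lognormal(mean=8.0, sigma=1.5, size=n)

# Total one-year loss of the unit: sum over workflow / risk-factor pairs,
# assuming independence as in Model 1 (here just two illustrative pairs).
S1 = (simulate_annual_loss(2.0, lognormal_severity)
      + simulate_annual_loss(0.1, lognormal_severity))
```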
The next step is to specify the tail of the loss distribution, as we are typically interested in heavy losses in operational risk management. We use extreme value theory to smooth the total loss distribution. This theory allows a categorization of the total loss distribution into different qualitative tail regions⁴.
In summary, Model 1 is specified by:

• Production processes which are represented as aggregated, directed graphs consisting of two nodes and a single edge,
• Four independent risk factors,
• A stochastic arrival time of loss events modelled by a homogeneous Poisson process and the severity of losses modelled by a Beta-GPD-mixture distribution. Assuming independence, this yields a compound Poisson model for the aggregated losses.
• It turns out that the generalized Pareto distribution, which is fitted by the POT⁵ method, yields an excellent fit to the tail of the aggregate loss distribution.
• The distribution parameters are determined using maximum likelihood estimation techniques.

The generalized Pareto distribution is typically used in extreme value theory. It provides an excellent fit to the simulated data for large losses. Since the focus is not on choosing the most efficient statistical method, we content ourselves with the above choice, while being well aware that other statistical procedures might work equally well. (A sketch of such a tail fit is given below.)

³ VaR denotes the Value-at-Risk measure and CVaR the Conditional Value-at-Risk (CVaR is also called Expected Shortfall or Tail Value-at-Risk; see Tasche (2002)).

⁴ We consider the mean excess function e_1(u) = E[S(1) − u | S(1) ≥ u] for one year, which by our definition of operational risk is a useful measure of risk. The asymptotic behavior of the mean excess function can be captured by the generalized Pareto distribution (GPD) G. The GPD is a two-parameter distribution with distribution function

G_{ξ,σ}(x) = 1 − (1 + ξx/σ)^{−1/ξ} if ξ ≠ 0,  and  G_{ξ,σ}(x) = 1 − exp(−x/σ) if ξ = 0,

where σ > 0 and the support is [0, ∞) when ξ ≥ 0 and [0, −σ/ξ] for ξ < 0. A good data fit is achieved, which leads to stable results in the calculation of the Conditional Value-at-Risk (see Section 3).
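To illustrate the tail fit described above, the following sketch fits a GPD to the excesses of simulated annual losses over a threshold u (peaks over threshold) and derives VaR and CVaR from the standard GPD tail formulas. It uses scipy.stats.genpareto and is a simplified stand-in for, not a reproduction of, the authors' estimation procedure.

```python
import numpy as np
from scipy.stats import genpareto

def pot_var_cvar(losses: np.ndarray, u_quantile: float, alpha: float):
    """POT estimate of VaR_alpha and CVaR_alpha from simulated annual losses:
    fit a GPD to the excesses over the threshold u and apply the standard
    GPD-based tail formulas (a sketch, not the authors' exact code)."""
    u = np.quantile(losses, u_quantile)
    excesses = losses[losses > u] - u
    xi, _, beta = genpareto.fit(excesses, floc=0.0)   # MLE with location fixed at 0
    p_exceed = (losses > u).mean()                    # empirical P(S(1) > u)
    # VaR_alpha solves P(S(1) > x) = 1 - alpha under the fitted GPD tail.
    if abs(xi) > 1e-8:
        var = u + beta / xi * (((1 - alpha) / p_exceed) ** (-xi) - 1)
    else:
        var = u + beta * np.log(p_exceed / (1 - alpha))
    # CVaR = VaR + mean excess over VaR under the GPD tail (valid for xi < 1).
    cvar = var + (beta + xi * (var - u)) / (1 - xi)
    return var, cvar

# Example with the simulated annual losses S1 from the previous sketch:
# print(pot_var_cvar(S1, u_quantile=0.95, alpha=0.99))
```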
2.4 Calibration
Our historical database⁶ contains losses that can be allocated to the workflows in the production unit. We use these data to calibrate the severity distribution, noting that the historical data show an expected bias: due to the increased attention paid to operational risk in recent years, more small losses are recorded in 2000 and 2001 than in previous years.

For the calibration of the severity distribution we use our loss history and the assessment of the maximum possible loss per risk factor and workflow. The data are processed in two respects. First, since the assessed minimum is used for accounting purposes only and is not needed here, we drop this number. Second, errors may well lead to gains instead of losses, and a small number of such gains occur in our database. Since we are interested solely in losses, we do not consider events leading to gains.

Next we observe that the maximum severity assessed by the experts is exceeded in some processes. In our loss history, this effect occurs with an empirical conditional probability of 0.87% per event. In our two models, we factor this effect into the severity value by accepting losses higher than the maximum assessed losses.
Calibration is then performed as follows:

• We first allocate each loss to a risk factor and to a workflow.
• Then we normalize the allocated loss by the maximum assessed loss for its risk factor and workflow.
• Finally we fit our distributions to the generated set of normalized losses. It turns out that the lognormal distribution and a mixture of the Beta and generalized Pareto distributions provide the best fits to the empirical data.

In the simulation step, we then have to multiply the simulated normalized severity by the maximum assessed loss to generate the loss amount (reversal of the second calibration step).
2.4.1 Lognormal Model
In our first model of the severity distribution, we fit a lognormal distribution to the standardized losses. The lognormal distribution seems to fit the systematic losses well. However, we observe that it assigns a higher probability of occurrence to large losses than the empirical data show.
2.4.2 Beta-GPD-Mixture Model
We eliminate the drawbacks of the lognormal distribution by searching for a mixture of distributions which satisfies the following properties.

First, the distribution has to reliably approximate the normalized empirical distribution in the domain where the mass of the distribution is concentrated. The flexibility of the Beta distribution is used for fitting in the interval between 0 and the estimated maximum X_max.
⁵ The Peaks-Over-Threshold (POT) method based on a GPD model allows the construction of a tail fit above a certain threshold u; for details of the method, see the papers in Embrechts (2000).

⁶ The data range from 1997 to 2002 and contain 285 appropriate entries.
Second, large losses, which may exceed the maximum of the self-assessment, are captured by the GPD, whose support is the positive real numbers. The GPD is estimated using all historical normalized losses higher than the 90% quantile. In our example, the relevant shape parameter ξ of the GPD fit is nearly zero, i.e. the distribution is medium-tailed⁷. To generate the losses, we therefore choose the exponential distribution, which corresponds to a GPD with ξ = 0.

Our Beta-GPD-mixture distribution is thus defined as a combination of the Beta and the exponential distribution. A Beta-GPD-distributed random variable X satisfies the following rule: with probability π, X is a Beta random variable, and with probability (1 − π), X is a GPD-distributed random variable. Since 0.87% of all historical data exceed the assessed maximum, the weight π is chosen such that P(X > X_max) = 0.87% holds.
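One plausible reading of this mixture can be sketched as follows: with probability π the normalized severity is drawn from a Beta distribution on [0, 1], otherwise from the exponential tail model, and π is solved from P(X > X_max) = 0.87%. The parameter values and the exact placement of the tail component are assumptions made for illustration only, not the calibrated model.

```python
import numpy as np
from scipy.stats import beta as beta_dist, expon

rng = np.random.default_rng(seed=2)

def sample_beta_gpd_mixture(n, x_max, a, b, expon_scale, p_exceed_max=0.0087):
    """Hypothetical sampler for the Beta-GPD mixture severity (normalized losses
    rescaled by the assessed maximum x_max).  With probability pi the normalized
    loss is Beta(a, b) on [0, 1]; otherwise it comes from the exponential tail
    model (GPD with xi = 0).  pi is chosen so that P(X > x_max) matches the
    empirical exceedance rate of 0.87%.  a, b, expon_scale are placeholders."""
    # P(X > x_max) = (1 - pi) * P(Exp > 1)  =>  solve for pi.
    tail_exceed = expon.sf(1.0, scale=expon_scale)
    pi = 1.0 - p_exceed_max / tail_exceed
    from_body = rng.random(n) < pi
    normalized = np.where(from_body,
                          beta_dist.rvs(a, b, size=n, random_state=rng),
                          expon.rvs(scale=expon_scale, size=n, random_state=rng))
    return normalized * x_max   # de-normalize (reversal of the calibration step)

# losses = sample_beta_gpd_mixture(10_000, x_max=250_000.0, a=2.0, b=5.0, expon_scale=0.4)
```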
The calibration procedure reveals an important issue when self-assessment and historical data are both considered: self-assessment data typically need to be processed before they can be compared with historical data. This shows that the reliability of the self-assessment data is limited and that processing these data restores consistency between the two different data sets.
3 Results
The data set for the application of the above approaches is based on 103 production processes at Zürcher Kantonalbank and on the self-assessment of the probability and severity of losses for four risk factors (see Section 2.1). The model is calibrated against an internal loss database. Since confidentiality prevents us from presenting real values, the absolute values of all results are fictitious, but the relative magnitudes are real. The calculations are based on 10,000 simulations. Table 1 shows the results for the Beta-GPD-mixture model.
Key risk figures for the Beta-GPD-mixture model

                        α = 95%            α = 99%            α = 99.9%
                        VaR_α   CVaR_α     VaR_α   CVaR_α     VaR_α   CVaR_α
fitted analytical        17      41         60      92         134     161
u = 95%-quantile         17      55         52     129         167     253
u = 97.5%-quantile        –       –         59      10         133     165
u = 99%-quantile          –       –         60      91         132     163
u = 99.5%-quantile        –       –          –       –         134     161

Table 1  Simulated tail behavior of the loss distribution. "Fitted analytical" denotes the results derived from 10,000 simulations for the Beta-GPD-mixture model. The other key figures are generated using the POT⁸ model for the respective thresholds u.
Using the lognormal model to generate the severities, the VaR_α for α = 95% and α = 99% are approximately the same as in the Beta-GPD-mixture model. The lognormal distribution is more heavily tailed than the Beta-GPD-mixture distribution, which leads to higher key figures at the 99.9% quantile.
Key risk figures for the lognormal model

                        α = 95%            α = 99%            α = 99.9%
                        VaR_α   CVaR_α     VaR_α   CVaR_α     VaR_α   CVaR_α
fitted analytical        14      48         55     137         253     512
u = 95%-quantile         14      68         53     165         234     633
u = 97.5%-quantile        –       –         53     195         236     706
u = 99%-quantile          –       –         55     277         232     911
u = 99.5%-quantile        –       –          –       –         252     534

Table 2  Simulated tail behavior of the loss distribution. Instead of the Beta-mixture model of Table 1, the lognormal model is used for the severity.

⁷ The lognormal model also belongs to the medium-tailed distributions. However, we observe that the tail behavior of the lognormal distribution converges very slowly to ξ = 0. For this reason, we anticipate that the resulting distribution of the yearly total loss will appear to be heavily tailed; only a large-scale simulation could confirm this.

⁸ From Tables 1 and 2 it follows that the POT model yields a reasonable tail fit. For further information on the underlying loss tail behavior and the statistical uncertainty of the estimated parameters we refer to Ebnöther (2001).
We can observe that a robust approximation of the coherent risk measure CVaR is more sensitive to the underlying loss distribution. The tables also confirm that the lognormal model is more heavily tailed than the Beta-mixture model.
3.1 Risk Selection Curve
A relevant question for practitioners is how much each of the processes contributes to the risk exposure. If it turns out that only a fraction of all processes contribute significantly to the risk exposure, then risk management only needs to be defined for these processes.

We therefore analyze how much each single process contributes to the total risk. In the sequel we consider only VaR as a risk measure. To split up the risk into its process components, we compare the incremental risk (IR) of the processes.

Let IR_α(i) be the risk contribution of process i to VaR at the confidence level α,

IR_α(i) = VaR_α(P) − VaR_α(P\{i}),

where P\{i} is the whole set of workflows without process i.
Because the sum over all IR_α's is generally not equal to the VaR, the relative incremental risk RIC_α(i) of process i is defined as IR_α(i) normalized by the sum over all IR_α, i.e.

RIC_α(i) = IR_α(i) / \sum_j IR_α(j) = (VaR_α(P) − VaR_α(P\{i})) / \sum_j (VaR_α(P) − VaR_α(P\{j})).

As a further step, for each α we count the number of processes that exceed a relative incremental risk of 1%. We call the resulting curve with parameter α the Risk Selection Curve (RiSC).
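A minimal sketch of the RiSC computation, assuming an array of simulated annual losses per process, could look as follows; the variable names and the illustrative usage are hypothetical.

```python
import numpy as np

def var_alpha(losses, alpha):
    """Empirical VaR at confidence level alpha from simulated annual losses."""
    return np.quantile(losses, alpha)

def risc(process_losses, alpha, cutoff=0.01):
    """Risk Selection Curve point for one alpha: count the processes whose
    relative incremental risk RIC_alpha(i) exceeds the cutoff (1% here).
    process_losses is an (n_processes, n_sims) array of simulated annual
    losses per process; the total loss is the sum over processes."""
    total = process_losses.sum(axis=0)
    var_all = var_alpha(total, alpha)
    # incremental risk: VaR of all processes minus VaR without process i
    ir = np.array([var_all - var_alpha(total - process_losses[i], alpha)
                   for i in range(process_losses.shape[0])])
    ric = ir / ir.sum()
    return int((ric > cutoff).sum())

# Illustrative usage with fictitious loss simulations:
# rng = np.random.default_rng(0)
# sims = rng.lognormal(6.0, 2.0, size=(103, 10_000)) * (rng.random((103, 10_000)) < 0.02)
# curve = {a: risc(sims, a) for a in (0.90, 0.95, 0.99, 0.999)}
```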
Insert Figure 2 around here.
Figure 2 shows that at a reasonable confidence level only about 10 percent of all processes contribute to the risk exposure. Therefore it is only worth developing a full graph-theoretical model and analyzing the process in more detail for this small number of processes. At lower or even low confidence levels, more processes contribute to the VaR. This indicates that there are a large number of processes of the high frequency/low impact type. These latter processes can be singled out for quality management, whereas processes of the low frequency/high impact type are under the responsibility of risk management. In summary, using RiSC graphs allows a bank to discriminate between quality and risk management in respect of the processes which matter. This reduces costs for both types of management significantly and indeed renders OR management feasible.

We finally note that the shape of the RiSC, i.e. the fact that it is not monotonically decreasing, is not an artefact of the modelling.

From a risk management point of view, RiSC links the measurement of operational risk to its management as follows: each parameter value α represents a risk measure, and therefore a family of risk measures is displayed on the horizontal axis of Figure 2. The risk managers possess a risk tolerance that can be expressed by a specific value of α. Hence, RiSC allows us to provide the risk information the managers are concerned with.
3.2 Risk Factor Contribution
The information concerning the most risky processes is important for splitting the Value-at-Risk into its risk factors. Therefore we determine the relative risk that a risk factor contributes to the VaR_α in a similar manner to the former analysis. We define the relative risk factor contribution as

RRFC_α(i) = (VaR_α(P) − VaR_α(P\{i})) / \sum_{j=1}^{4} (VaR_α(P) − VaR_α(P\{j})),

with P now denoting the whole set of risk factors.

The resulting graph shows the importance of the risk factors.
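The relative risk factor contribution can be computed with the same leave-one-out idea as the RiSC sketch above, applied to losses aggregated per risk factor; again a sketch under the same assumptions, with illustrative names only.

```python
import numpy as np

def rrfc(factor_losses, alpha):
    """Relative risk factor contributions RRFC_alpha(i) for the four risk
    factors: leave-one-factor-out incremental VaR, normalized by the sum over
    all factors.  factor_losses is a (4, n_sims) array of simulated annual
    losses aggregated per risk factor."""
    total = factor_losses.sum(axis=0)
    var_all = np.quantile(total, alpha)
    ir = np.array([var_all - np.quantile(total - factor_losses[j], alpha)
                   for j in range(factor_losses.shape[0])])
    return ir / ir.sum()
```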
Insert Figure 3 around here.
Figure 3 shows that the importance of the risk factors is neither uniform nor linearly proportional to the confidence level. For low levels, error is the most dominant factor, which again indicates that this domain is best covered by quality management. The higher the confidence level, the more fraud becomes the dominant factor. The factor theft displays an interesting behavior too: it is the sole factor showing a virtually constant contribution in percentage terms at all confidence levels.

Finally, we note that both results, the RiSC and the risk factor contribution, were not known to the experts in the business unit. These clear and neat results contrast with the diffuse and dispersed knowledge within the unit about the risks inherent in their business.
3.3 Modelling Dependence
In the previous model we assumed the risk factors to be independent. Dependence could be introduced through a so-called common shock model (see Bedford and Cooke (2001), Chapter 8, and Lindskog and McNeil (2001)). A natural approach to modelling dependence is to assume that all losses can be related to a series of underlying, independent shock processes. When a shock occurs, it may cause losses due to several risk factors triggered by that shock.
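For illustration, a minimal sketch of such a common Poisson shock model could look as follows; the shock intensities and trigger probabilities below are purely illustrative and were not calibrated in the study.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def common_shock_counts(shock_rates, trigger_probs, n_sims=10_000):
    """Minimal sketch in the spirit of Lindskog and McNeil (2001): independent
    shock processes with annual intensities shock_rates; shock k triggers a
    loss in risk factor j with probability trigger_probs[k][j].  Returns the
    annual loss counts per risk factor, which are dependent across factors."""
    shock_rates = np.asarray(shock_rates)           # shape (n_shocks,)
    trigger_probs = np.asarray(trigger_probs)       # shape (n_shocks, n_factors)
    n_shocks, n_factors = trigger_probs.shape
    counts = np.zeros((n_sims, n_factors), dtype=int)
    for k in range(n_shocks):
        shocks = rng.poisson(shock_rates[k], size=n_sims)
        for j in range(n_factors):
            # each of the `shocks` events independently triggers factor j
            counts[:, j] += rng.binomial(shocks, trigger_probs[k, j])
    return counts

# e.g. one hypothetical "IT outage" shock that can hit both system failure and error:
# counts = common_shock_counts([0.5], [[0.9, 0.0, 0.0, 0.6]])
```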
We did not implement dependence in our case study for the following reasons:

• The occurrences of losses caused by fraud, error and theft are independent.
• While we are aware of dependencies involving system failures, these are not the dominating risk factor (see Figure 3). Hence, the costs of an assessment and calibration procedure would be too large compared to the benefit of such an exercise.
3.4 Sensitivity Analysis
We assume that for each workflow and each risk factor the estimated maximum loss is twice the self-assessed value, and then twice that value again. In doing so, we also take into account that the calibration to the newly generated data has to be redone.
Sensitivity

                        α = 95%            α = 99%            α = 99.9%
                        VaR_α   CVaR_α     VaR_α   CVaR_α     VaR_α   CVaR_α
fitted analytical        17      41         60      92         134     161
stress scenario 1        22      45         57      92         129     178
stress scenario 2        21      48         65     103         149     186

Table 3  Stress scenario 1 is a simulation using a maximum twice the self-assessed value. Stress scenario 2 is a simulation using a maximum four times the self-assessed value. A Beta-GPD-mixture distribution is chosen as the severity model.
It follows that an overall underestimation of the estimated maximum loss does not have a significant effect on the risk figures, since the simulation input is calibrated to the loss history. The intuition for
this result is as follows: The mass of the realized losses within the self-assessed bounds remains
largely invariant under a scaling of all boundary values simultaneously.
Furthermore, the relative risk contributions of the risk factors and processes do not change significantly under these scenarios, i.e. the number of processes which contribute significantly to the VaR remains almost invariant and small compared to the total number of processes⁹.
4 Conclusion
The scope of this paper was to show that the quantification of operational risk, adapted to the needs of business units, is feasible if data exist and if the modelling problem is taken seriously. This means that the problem is solved with the appropriate tools and not by an ad hoc deployment of methods successfully developed for other risks.

It follows from the results presented that a quantification of OR and OR management must be based on well-defined objects (processes in our case). We do not see any possibility of quantifying OR if such a structure is not in place within a bank. It also follows that not all objects (processes, for example) need to be defined; if the most important ones are selected, the costs of monitoring the objects can be kept at a reasonable level and the results will be sufficiently precise. The self-assessment data used in the present paper proved to be useful: under a sensitivity analysis, the results appear to be robust.

The models considered in this paper can be extended in various directions. First, if the Poisson models used are not appropriate, they can be replaced by a negative binomial process (see Ebnöther (2001) for details). Second, production processes are only part of the total workflow processes defining business activities. Hence, other processes need to be modelled, and all elements then finally concatenated using graph theory. This ultimately yields a comprehensive risk exposure for a large class of banking activities.
⁹ At the 90% quantile, the number of "relevant" workflows (8) remains constant for both stress scenarios, whereas a small reduction from 15 to 14 (13) relevant workflows is observed at the median.
5 References
• BIS, Basel Committee on Banking Supervision (2001), Consultative Document, The New Basel Capital Accord, http://www.bis.org.
• BIS, Risk Management Group of the Basel Committee on Banking Supervision (2001), Working Paper on the Regulatory Treatment of Operational Risk, http://www.bis.org.
• Bedford, T. and R. Cooke (2001), Probabilistic Risk Analysis, Cambridge University Press, Cambridge.
• Danielsson, J., P. Embrechts, C. Goodhart, C. Keating, F. Muenich, O. Renault and H. S. Shin (2001), An Academic Response to Basel II, Special Paper Series, No. 130, London School of Economics Financial Markets Group and ESRC Research Center, May 2001.
• Ebnöther, S. (2001), Quantitative Aspects of Operational Risk, Diploma Thesis, ETH Zurich.
• Ebnöther, S., M. Leippold and P. Vanini (2002), Modelling Operational Risk and Its Application to Bank's Business Activities, Preprint.
• Embrechts, P. (Ed.) (2000), Extremes and Integrated Risk Management, Risk Books, Risk Waters Group, London.
• Embrechts, P., C. Klüppelberg and T. Mikosch (1997), Modelling Extremal Events for Insurance and Finance, Springer, Berlin.
• Lindskog, F. and A. J. McNeil (2001), Common Poisson Shock Models: Applications to Insurance and Credit Risk Modelling, Preprint, ETH Zürich.
• Medova, E. (2000), Measuring Risk by Extreme Values, Operational Risk Special Report, Risk, November 2000.
• Rabin, M. (1998), Psychology and Economics, Journal of Economic Literature, Vol. XXXVI, 11-46, March 1998.
• Tasche, D. (2002), Expected Shortfall and Beyond, Journal of Banking and Finance 26(7), 1523-1537.
<Figure 1>
Figure 1  Example of a simple production process: editing returned mail. More complicated processes can contain several dozens of decision and control nodes. The graphs can also contain loops and vertices with several legs; i.e., topologically, the returned-mail editing process is of a particularly simple form. In the present paper, only condensed graphs (Model 1) are considered, while for risk management purposes the full graphs are needed.
<Figure 2>
Figure 2 The risk selection curve (RiSC) of the Beta-GPD-mixture model.
<Figure 3>
Figure 3 The segmentation of the VaR into its risk factors.