Common Sense Risk Estimate

Alexander Liss

10/28/99

Example Quality of Investment

Separation of Risk Analysis

Tools at Hand

Application of Quantitative Risk Analysis

Prevalent Movement of Prices and "Fat Tails"

Risk and Preferences

Risk Analysis and Forecast

Language of "Chance" and Mistake of Measurement

Forecast Built in Prices

Probability of Default Built in Interest

Risk Characteristics in Decision-Making Process

System of Risk Characteristics

Weight Functions for Basic Characteristics

Classification of Risk Characteristics

Scenarios: Military Approach to Risk Mitigation

Simulation for Risk Analysis

Quantitative risk analysis in finance is the domain of a small group of professionals who use sophisticated models. However, we engage in risk analysis very often in our lives, and the concepts and many of the methods of risk analysis in finance can and should be presented in a simple and easily understandable form.

We can easily relate to analysis of the quality of investment (many of us invest, or read and hear about investment). In investment, there are a few major issues:

- How our investment behaves on average over a period of time
- How big the possibility is that we enter a state of really big losses (the company went bankrupt, we cannot repay borrowed money in time)
- What happens if we time our investment decisions poorly (it is known that the substantial changes in prices happen during only a few days in the year).

Note that we can hit big losses for different reasons.

One type of reason is something that is hard to anticipate, such as major political events, shifts in the market, changes in the place of the company in the market, etc. We mitigate this risk by relying on specialists: we entrust our investment to investment managers, or we follow closely the market analysis performed by specialists. In other words, we rely on experience: our own experience and the experience of specialists.

The other type of reason can be easily anticipated from observation of market behavior in the past (experience is based on such observation also, but it cannot be presented formally as easily as one might want). Mitigation of this form of risk is based on the concept of volatility. The concept is easy to understand in general: we observe the movement of the price around some "average" value, and the "amplitude" of this movement is the "volatility". It is not so simple, however, to define it precisely enough that we can arrive at reliable quantitative results.

There are many obstacles on the way to a precise definition of volatility: the price we read on the screen is not the one which we can actually get, the "average" depends on the period we use to compute it and it keeps changing, etc.

However, even with some poor definitions, we can get reasonable estimates of the possibility of hitting big losses, based on easily available market data.
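Such a rough estimate can be sketched in a few lines. The following is a minimal illustration, not a production model: it assumes a series of daily closing prices, treats daily log-returns as roughly normal and independent, and estimates the chance of a loss bigger than a given fraction over a given horizon. All names and modeling choices here are assumptions of the sketch.

```python
import math
import statistics

def big_loss_probability(prices, loss_fraction, horizon_days):
    """Estimate the chance that the price drops by more than
    `loss_fraction` over `horizon_days`, assuming daily log-returns
    are roughly normal and independent."""
    # Daily log-returns from a series of closing prices.
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mu = statistics.mean(returns)
    sigma = statistics.stdev(returns)          # crude daily volatility
    # Scale to the horizon (independent daily moves).
    mu_h = mu * horizon_days
    sigma_h = sigma * math.sqrt(horizon_days)
    # P(log-return < log(1 - loss_fraction)) under the normal model.
    z = (math.log(1.0 - loss_fraction) - mu_h) / sigma_h
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

As the text warns, this describes only the prevalent movement of prices; it says nothing about rare "catastrophic" events.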

It is interesting that, in the analysis of Quality of Investment, we can separate out Risk Analysis as a relatively independent activity.

Risk Analysis deals with relatively rare events and requires special methods and special expertise. In many cases, Risk Analysis does not require a precise description of the *prevalent movement of prices*; only rare events have to be described well. This means that we can use simplified models to describe the prevalent movement of prices and the rare combinations of prices which create rare events.

It is interesting that if we separate out Risk Analysis, then we do not need to be particularly precise in the rest of the analysis of the Quality of Investment. If we have already taken care of rare events, then we can use simplified models, which do not describe rare events well but which are good at describing the prevalent movement of prices.

Now we have a nice separation: simple models, which describe the prevalent movement of prices, and special models for Risk Analysis, which rely on these simple models.

Models which describe the prevalent movement of prices are well developed already. We can use them to estimate the risk associated with relatively rare combinations of prices in the course of normal price movement. In some cases, this component of risk is important by itself. For example, it is important to evaluate whether a potential decision can withstand normal price volatility (without the impact of unusual rare events).

These models can be used in a more creative way. They have parameters, which can be set to reflect some potential events, such as a Fed's decision which affects the interest rate. This way, we can create a set of potential variants of the market situation and evaluate their impact on overall risk. This method is used by currency speculators not only to evaluate risk, but also to find profit opportunities.

Still, there are events which we cannot evaluate with models describing the prevalent movement of prices. These events are related to some "catastrophic" changes: bankruptcies, rapid changes in the liquidity of some group of financial instruments, etc. The impact of potential events of this kind can be evaluated with the help of models developed in reliability theory.

In many cases, Quantitative Risk Analysis cannot deliver high precision of results, because it is hard to foresee future events: the market is changing and new factors come into play all the time.

However, in the real world Quantitative Risk Analysis is always a complement to the non-quantitative risk analysis which any decision-maker performs all the time, consciously or unconsciously. For example, if a portfolio manager anticipates trouble with the debt of some government, then he reorganizes the portfolio without any special Quantitative Risk Analysis. Actually, we need Quantitative Risk Analysis to catch unfavorable combinations of many factors, which are hard to take into account without it.

If we use Quantitative Risk Analysis in this limited role, then the requirements for its precision are achievable.

In some cases, risk analysts use models designed for the description of the Prevalent Movement of Prices and find that these models do not describe rare events well enough. This manifests itself as a deviation of the distribution function produced by the model from experimental results in the area of small probabilities: the tails of the distribution. This phenomenon gave rise to a special term: "fat tails". Unfortunately, there are no known methods of adjusting an existing model to describe these "fat tails" precisely enough. Hence, the entire concept of "fat tails" is not productive. The presence of "fat tails" is only a sign that the model is not applicable.

Usually the term "fat tails" means that the distribution of the random value is not normal, and cannot even be sufficiently approximated with a normal distribution, especially in the area of low probabilities. The language "not a normal distribution" is clear and not misleading; it is preferable to "fat tails".

For example, if stock prices have a lognormal distribution, then the value of a stock portfolio has a distribution which is neither lognormal nor normal. This is an obvious fact, and there is no need to discuss "fat tails".

Even superficial observation shows that it is impossible to describe risk in an absolutely objective way, ignoring the individual System of Preferences.

In some cases, a particular form of risk can be ignored because there are protective measures in place, which we do not want to include in the scope of our particular Risk Analysis. In other cases, some events can be damaging to the decision-maker individually, while in the "grand scale of facts" they do not look very important. In yet other cases, a decision-maker has a particular phobia of something that does not look important from the outside.

Hence, methods of Risk Analysis have to be attuned to what a decision-maker perceives as important. The problem with this requirement is that people are not accustomed to formal description of their preferences. Hence, methods of Risk Analysis have to provide a limited set of ways of expressing these preferences, so that users can acquire some experience with them through the use of these methods.

From experience with similar work in other areas, this can be done effectively with weight coefficients and with a clear and simple procedure of weight assignment by specialists.

Risk Analysis makes an assessment of future events; hence, one way or the other, it is based on forecast. The word "forecast" is associated with something unreliable; hence the fact that Risk Analysis is basically a sophisticated forecast is often hidden. Actually, we should not hide it. We should explicitly point out where we rely on forecast. This fact alone allows some informal assessment of the precision of the results of Risk Analysis.

Usually, a forecast is no more than an extrapolation in time of some function of time. In finance it is somewhat more sophisticated: the change of prices is assumed to be a stochastic process of a very simple type. The forecast in this case consists in extrapolation in time of the characteristics of this process (usually it is assumed that they do not change).

Such a simple procedure obviously cannot be very precise. The art of forecast is in the proper choice of the parameters which we extrapolate, and of the computation of the value which we want to forecast. If this is done properly, then

- big mistakes in the extrapolation of parameters do not lead to big mistakes in the forecast of our value
- we can interpret the parameters well and, therefore, can improve the precision of their extrapolation.

For example, we can extrapolate the logarithm of a stock price better than the price itself.

In finance, we have an example of such an approach. The value which we forecast is the price of a portfolio of financial instruments. However, we do not forecast it directly; instead, we forecast the price of each instrument separately and compute the price of the portfolio. If each instrument adds only a small amount to the overall price of the portfolio, then a mistake in its forecast does not affect much the mistake of the portfolio price forecast. In addition, we can interpret well the movement of the price of each particular instrument; hence, we can forecast it better. However, this logic works only when the mistakes of the forecasts of individual stocks are independent. When all stocks go down simultaneously, the portfolio goes down also.
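The instrument-by-instrument approach can be sketched as follows. This is a toy illustration only: the function names and the naive linear extrapolation of log-prices are assumptions of the sketch, not a standard method.

```python
import math

def extrapolate_log_price(prices, steps):
    """Extrapolate the logarithm of the price linearly: take the average
    log-return over the history and project it `steps` periods ahead."""
    logs = [math.log(p) for p in prices]
    avg_step = (logs[-1] - logs[0]) / (len(logs) - 1)   # mean log-return
    return math.exp(logs[-1] + avg_step * steps)

def forecast_portfolio(histories, holdings, steps):
    """Forecast the portfolio price instrument by instrument, then sum.
    `histories`: name -> price series; `holdings`: name -> units held."""
    return sum(holdings[name] * extrapolate_log_price(series, steps)
               for name, series in histories.items())
```

Note that summing per-instrument forecasts helps only when their mistakes are independent, exactly as the paragraph above warns.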

The forecast of the prevalent movement of prices can be sufficiently precise: a 10% daily movement of a price on the stock market is an extraordinary event. However, the forecast of unusual rare events, which cause big changes in prices, is difficult and requires the expertise of forecasters.

Because forecast deals with events in the future, we have to use the language of "chance", as is done in a popular weather forecast: "There is an 80% chance of rain tomorrow".

The Language of "Chance", which we use to describe forecast, is close to the familiar description of Mistake of Measurement. We use it in a similar manner: we acquire experience of the use of these characteristics, and we specify them for particular tasks from this experience. For example, we take an umbrella if there is a 90% chance of rain, and we do not take action if there is a 60% chance of a light rain on a summer day; however, if there is a 60% chance of a hurricane, then we do take action.

While the acceptable mistake of measurement is big, we do not pay attention to the various factors which affect the measurement; we treat them as "random noise". However, the higher the level of required precision, the more attention we have to pay to these circumstances. For example, when we measure the length of an object precisely, we check its temperature, specify what we define as its length, etc.

We have to specify external factors and the method of measurement to arrive at repeatable results of measurement.

We have a similar situation with the Language of "Chance". While we take action even when the chance of a wrong forecast is big, we do not need to pay attention to many circumstances of the forecast. However, when we demand a high "precision" of forecast in order to take action, we need to specify the circumstances to make use of the number describing the "chance".

At some level, this increased requirement of precision of forecast loses sense, because we reach the point where we have to admit that there are factors acting in the future which we cannot anticipate.

This is where our analogy with mistake of measurement breaks down.

We can provide increasingly precise control over circumstances of measurement, but we cannot foresee all factors affecting forecast.

If a forecast says that it is right 60% of the time, then we can expect that out of 10,000 forecasts about 6,000 are right; but if it says that it is right 99.9% of the time, then we should not expect that out of 10,000 forecasts 9,990 are right!

We apply this logic to a traditional measure of risk - Value at Risk (VaR), which is a quantile of the evaluated value.

It means that specification of VaR with a small probability does not increase the precision of the forecast, starting at some level of probability. Such VaRs can be used as special parameters, which reflect a particular component of risk, but they cannot be used as precise measures of overall risk.

It is possible that this logic is applicable to the popular 1% VaR already.
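Since VaR is a quantile, it can be read off directly from a sample of profit-and-loss outcomes. A minimal empirical sketch (the simple index-based quantile here is an assumption of the illustration; real implementations interpolate and correct for sample size):

```python
def value_at_risk(pnl_samples, probability):
    """Empirical VaR: the loss level exceeded with the given probability,
    read off as a quantile of sampled profit-and-loss outcomes."""
    ordered = sorted(pnl_samples)               # worst outcomes first
    index = int(probability * len(ordered))     # crude quantile index
    return -ordered[index]                      # report the loss as a positive number
```

With only a handful of samples in the tail, this number is very noisy, which is exactly why small-probability VaRs should be treated as rough parameters rather than precise measures.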

Forecast of market characteristics can be especially confusing, because we understand that the market establishes prices based on some informal, or even formal, forecast of their future values. At first glance it looks like a logical loop, but it is only a recognition of the fact that the market works as a balancing mechanism, which imposes restrictions on the way prices change.

This balance is established through trades; hence, frequently traded items have prices where the forecast is embedded better. Obviously, illiquid assets do not reflect the forecast as well as liquid assets do.

It is tempting to extract this forecast from the market data. This task is not simple and has hidden traps.

Prices are influenced by many factors in addition to the forecast of their future values - the level of liquidity, for example.

It is clear now that this forecast embedded in prices is quite sophisticated. It includes an extrapolation of values into the future and a correction for the degree of uncertainty of the future situation.

This has a few consequences. First, extraction of this forecast requires sophisticated models. Second, if one wants to improve this extracted forecast and arrive at a more precise one, one has to be sure to have some information unavailable to market participants.

The embedding of the forecast is done through trades, where the price fluctuates. Hence, we have to "average" the price over some period to arrive at the "current" price. The length of this period depends on the frequency of trades. If trades are infrequent, then the period could be so long that the entire idea of extracting the forecast from prices should be abandoned.

The interest, or more precisely the interest premium, which a company has to pay on its debt, reflects the level of the company's stability - its ability to pay back borrowed funds.

It is tempting to extract this information and use it in risk analysis. However, it is not easy, just as it is not easy to extract the forecast built into prices.

There are many factors which affect interest, including the level of the debt's liquidity and the state of the credit market. Debt is traded far less than equity, and this means that we cannot use the market as an effective forecaster in this case. It is better to use the methods which credit rating agencies use, to arrive at a similarly rough estimate of the chance of default, as they do.

Specialists who analyze risk know how imprecise particular risk characteristics can be and how much in this analysis depends on an in-depth understanding of the situation; and one can hardly describe this understanding formally. However, when the results of risk analysis land on the desk of a decision-maker, the numbers are perceived differently. In some respect, this is justifiable: it is the duty of the risk analysis specialist to present results in a form adjusted to quick decision-making.

A decision-maker makes a decision based on the values of a set of characteristics. He compares these values for different variants of decisions, finds a way to compromise between different contradicting requirements, and chooses the one variant which looks best according to the available information.

Hence, Risk Characteristics used in Decision-Making have to allow reasonable comparison between different variants of decisions and lend themselves to the creation of compromises in combination with other, non-risk characteristics. It means that they have to be easy to interpret, and they have to provide factual discrimination between variants of decisions.

Defining such characteristics and making decision-makers comfortable with them is a difficult task.

Let us look at VaR from this point of view.

It is easy to interpret: it is defined with amounts and chances.

However, it is hard to use it to compare portfolios of different sizes, because it is an absolute characteristic, not a characteristic relative to the size of the portfolio.

Worse yet, it is very imprecise. A small variation in the assumptions used to compute it leads to substantial changes in its value. A small number of samples, when the Monte-Carlo method is used for its computation, leads to large mistakes in its estimate. Big mistakes mean that we cannot discriminate between two variants, while decision-makers often have the impression that they can do this based on the numbers at hand.

We can get nowhere in Risk Analysis if we do not have an established set of risk characteristics. On the other hand, we know already that risk characteristics should depend on the user.

Hence, we have to devise a system of constructing risk characteristics based on a simple set of rules. We need a small set of basic characteristics, and we construct any general risk characteristic with the help of these basic characteristics and weights assigned by specialists; mostly we construct them as a sum of basic characteristics with weights.

The procedures of setting these weights "manually" are procedures which allow a formal description of the valuable experience of specialists.
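The weighted-sum construction above is trivial to express in code. A minimal sketch (the dictionary-based interface and the characteristic names in the example are assumptions of the illustration):

```python
def general_characteristic(basic_values, weights):
    """Construct a general risk characteristic as a weighted sum of basic
    characteristics; the weights are assigned "manually" by specialists
    and encode their preferences."""
    return sum(weights[name] * value for name, value in basic_values.items())
```

Different users plug in different weight sets over the same basic characteristics, which is exactly how user-dependent characteristics stay comparable.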

Weight Functions can be a useful tool for the definition of risk characteristics. For example, if P is a threshold for a price and the actual price is p, then we can define the value r and the weight function:

r = 0 when p > P and

r = P - p when p < P.

If p is a random value, then r is a random value also, and we have to look at its distribution function; we can define its density as the risk characteristic. The value r is non-negative, and its distribution function usually has a substantial jump at the point r = 0.

We can modify this weight function to reflect the fact that values p > P, when p - P is small, are undesirable also, and that values p < P with a very large P - p have a much bigger undesirable impact than small ones. In other words, we make the dependency of r on p non-linear, similar to the dependency of the tax amount on the amount of taxable income.

We add two bounds P1 and P0, with P1 < P < P0, and we define

r = 0 when p > P0

r = a0*(P0 - p) when P < p < P0

r = (P - p) + a0*(P0 - P) when P1 < p < P

r = a1*(P1 - p) + (P - P1) + a0*(P0 - P) when p < P1,

where a0 and a1 are coefficients:

a1 > 1 > a0 > 0.
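The piecewise weight function above translates directly into code. This sketch reproduces the four branches exactly as defined; the particular threshold values used in the comments and example are arbitrary.

```python
def weight_r(p, P, P0, P1, a0, a1):
    """Piecewise-linear weight function with thresholds P1 < P < P0 and
    slopes a1 > 1 > a0 > 0, as defined in the text."""
    if p > P0:
        return 0.0
    if p > P:                                   # P < p <= P0: mild penalty
        return a0 * (P0 - p)
    if p > P1:                                  # P1 < p <= P: full penalty
        return (P - p) + a0 * (P0 - P)
    # p <= P1: amplified penalty for very large shortfalls
    return a1 * (P1 - p) + (P - P1) + a0 * (P0 - P)
```

The branches match at the boundaries p = P0, p = P and p = P1, so r depends on p continuously, just with different slopes - the same shape as a progressive tax schedule.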

We need to differentiate between characteristics used for in-depth analysis and those used for routine decision-making. For in-depth analysis we use detailed characteristics, such as distribution functions, which can be sensitive to the assumptions of the analysis and to the way the data is collected. However, for routine decision-making we need simple and stable characteristics.

Some characteristics can be estimated based on available information about market behavior over a short period of the past. Others require in-depth analysis of the tendencies of market development, political development, etc.

We can fix the probability of some event and compute the corresponding value (as is done in the definition of VaR), or we can fix the value and compute the probability of an event defined by this value. These are two equivalent ways of creating risk characteristics; however, some of them are easier to interpret than others. Actually, we can create families of characteristics: in the first case, the value as a function of probability, and in the second case, the probability as a function of value (the distribution function).

All characteristics describe risk in relation to a particular period of time; some periods are short (a day), some are medium (a week, a month) and others are long (years). The usual definition of VaR implies a day-long period.

An important classification of risk characteristics is related to the classification of undesirable events. An undesirable event has parameters, such as a price below which one is in trouble (the portfolio value falls below the required threshold, for example) and a period during which the probable price recovery is not sufficient to reorganize the portfolio and meet the requirement. The probability of such an event happening is a valid risk characteristic. It has two parameters: a threshold price and a period.

Above we showed how risk characteristics can be defined with weight functions. The combination of weight functions and events is a further development of the idea.

The experience of the military is a valuable source of knowledge and methods for anyone who wants to improve his ability to manage risk. The military operates in conditions of high uncertainty, when there is no time to research and prepare a decision and there is no way to organize an effective centralized decision-making system during the battle.

In this environment, the military uses Scenarios. A Scenario describes a class of potential situations. For a set of scenarios which reasonably covers potential situations, the military designs a system of rules to respond to any of these scenarios. To implement the responses according to these rules, the military needs resources, which are limited. Hence, the military has to compromise and provide more support for some scenarios and less for others. The more flexible the response system, the less compromise is needed.

The compromise is based on the level of risk associated with each scenario: the probability that the scenario will happen and the potential danger associated with it.

Actually, the military does not have the luxury of assigning a small probability to a scenario, because the enemy chooses the scenario, and it could be one which is assumed to be of low probability.

If one applies this idea of scenarios to other areas of risk analysis, then scenarios might reflect classes of circumstances whose probability can be estimated fairly well. Some of these probabilities can be computed, while others have to be estimated based on the experience of the specialists who analyze risk.

The procedures of setting these probabilities "manually" are procedures which allow a formal description of the experience of specialists, and they are of value by themselves.

For example, a scenario can reflect a probable change in the interest rate set by the Federal Reserve Board. Different scenarios reflect different changes, and the probabilities assigned to them are the probabilities of such events as estimated by specialists.

Even more appropriate is the use of scenarios to model possible rare events, such as market crashes. Different variants of a possible market crash (time, sector, magnitude, etc.) are different scenarios. Setting their probabilities requires special analysis and estimates based on the experience of specialists.
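A scenario-based risk estimate can be as simple as a probability-weighted sum of losses over the covered scenarios, together with the worst case (which, as the military example reminds us, should not be dismissed just because its assigned probability is small). A minimal sketch; the pair-of-numbers representation of a scenario is an assumption of the illustration:

```python
def scenario_risk(scenarios):
    """Each scenario is a (probability, loss) pair, with probabilities
    assigned by specialists. Returns the probability-weighted expected
    loss and the worst-case loss over the covered scenarios."""
    expected = sum(prob * loss for prob, loss in scenarios)
    worst = max(loss for _prob, loss in scenarios)
    return expected, worst
```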

Risk Analysis needs to combine diverse sources of information, and it has to accommodate new sources of information and new ways of information processing. In this environment, Simulation looks like a natural tool.

However, Simulation for Risk Analysis has its specifics.

First, it should produce a much bigger number of samples than is usually required. At the same time, the vast majority of these samples are irrelevant for risk estimates.

Second, the simulation has to work with different sources of information in the creation of samples-variants and allow computation of results based on all of them. Some sources are models, and some sources are channels for the immediate experience of specialists. For example, we need to be able to set some variants manually, assign probabilities to variants, etc.

Samples-variants can be grouped according to the type of risk which they represent. Above we identified a few such types:

- Related to the prevalent movement of prices, where well-known models are used in the usual way,
- Related to unusual events, where well-known models are used in an unusual way,
- Related to catastrophic events, where models similar to the ones used in reliability estimates are applied,
- Variants created "manually" based on the experience of risk analysis specialists.
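The grouping above can be sketched as a toy simulation that mixes three sources of samples-variants: a model of the prevalent movement (here, plain Gaussian noise), a rare catastrophic scenario, and variants set manually by specialists. Every distribution and parameter in this sketch is an assumption of the illustration, not a recommendation.

```python
import random

def simulate_losses(n, crash_probability, crash_loss, manual_variants=()):
    """Generate `n` samples-variants of loss, mixing a simple model of the
    prevalent price movement with a rare 'crash' event; then append
    variants set manually by specialists, each as a (probability, loss)
    pair entering in proportion to its assigned probability."""
    samples = []
    for _ in range(n):
        if random.random() < crash_probability:
            samples.append(crash_loss)               # rare catastrophic event
        else:
            samples.append(random.gauss(0.0, 1.0))   # prevalent movement (toy model)
    for prob, loss in manual_variants:
        samples.extend([loss] * int(prob * n))       # manually set variants
    return samples
```

The resulting pool of samples can then feed quantile-style characteristics; note how few of the samples are actually relevant for the risk estimate, which is the first specific of Simulation for Risk Analysis named above.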