There is a widely held belief that financial risk is easily measured – that we can stick some sort of riskometer deep into the bowels of the financial system and get an accurate measurement of the risk of complex financial instruments. Misguided belief in this riskometer played a key role in getting the financial system into its current mess.
Unfortunately, the lessons have not been learned. Risk sensitivity is expected to play a key role both in the future regulatory system and in new areas such as executive compensation.
Origins of the myth
Where does this belief come from? Perhaps the riskometer is incredibly clever – after all, it is designed by some of the smartest people around, using incredibly sophisticated mathematics.
Perhaps this belief also comes from what we know about physics. By understanding the laws of nature, engineers are able to create the most amazing things. If we can leverage the laws of nature into an Airbus A380, we surely must be able to leverage the laws of finance into a CDO.
This is false. The laws of finance are not the same as the laws of nature. The engineer, by understanding physics, can create structures that are safe regardless of what nature throws at them, because the engineer reacts to nature but nature does not generally react to the engineer.
The problem is endogenous risk
In physics, complexity is a virtue. It enables us to create supercomputers and iPods. In finance, complexity used to be a virtue as well: the more complex the instruments are, the more opaque they are, and the more money you make. So long as the underlying risk assumptions are correct, the complex product is sound. Those assumptions were not correct, and in finance complexity has become a vice.
We can create the most sophisticated financial models, but as soon as they are put to use, the financial system changes. Outcomes in the financial system aggregate intelligent human behaviour; market participants react to the models, invalidating the historical patterns on which the models were estimated. Attempting to forecast prices or risk from past observations is therefore generally impossible. This is what Hyun Song Shin and I called endogenous risk (Danielsson and Shin 2003).
Because of endogenous risk, financial risk forecasting is one of the hardest things we do. In Danielsson (2008), I tried what is perhaps the easiest risk modelling exercise there is – forecasting value-at-risk for IBM stock. The resulting forecasts varied by about +/- 30%, depending on the model and assumptions. And that is the best-case scenario: modelling the risk of more complicated assets is far less accurate. +/- 30% is the best we can do.
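To make the scale of this imprecision concrete, here is a minimal sketch (not the code from the original study): the same return series, fed through three standard value-at-risk estimators, produces substantially different risk numbers. The data are simulated fat-tailed returns standing in for IBM; all parameters are illustrative.

```python
# Run one return series through three common 99% VaR estimators.
# Simulated Student-t returns stand in for real stock data.
import numpy as np

rng = np.random.default_rng(42)
returns = rng.standard_t(df=4, size=1000) * 0.01   # daily returns, fat tails

p = 0.01      # 1% tail, i.e. 99% VaR
z_99 = 2.326  # 99% standard normal quantile

# 1. Historical simulation: the empirical 1% quantile.
var_hs = -np.quantile(returns, p)

# 2. Normal VaR: assume Gaussian returns, scale the sample volatility.
var_normal = returns.std(ddof=1) * z_99

# 3. EWMA (RiskMetrics-style) volatility with the same Gaussian quantile.
lam = 0.94
sigma2 = returns.var(ddof=1)
for r in returns:
    sigma2 = lam * sigma2 + (1 - lam) * r**2
var_ewma = np.sqrt(sigma2) * z_99

for name, v in [("historical", var_hs), ("normal", var_normal), ("EWMA", var_ewma)]:
    print(f"{name:>10} 99% VaR: {v:.4f}")
# The three riskometers disagree substantially on the same data: model
# choice alone moves the number, before any estimation error.
```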
Applying the riskometer
The inaccuracy of risk modelling does not prevent us from trying to measure risk, and once we have such a measurement, we can create the most amazing structures – CDOs, SIVs, CDSs, and the entire alphabet soup of instruments limited only by our mathematical ability and imagination. Unfortunately, if the foundation is built on sand, the whole structure is unstable. What the quants missed was that the underlying assumptions were false.
We don’t seem to be learning this lesson. As Taleb and Triana (2008) argue, “risk methods that failed dramatically in the real world continue to be taught to students”, adding that “a method heavily grounded on those same quantitative and theoretical principles, called Value at Risk, continued to be widely used. It was this that was to blame for the crisis.”
When complicated models are used to create financial products, the designer looks at historical prices for guidance. If prices have generally been increasing and risk apparently low, that becomes the prediction for the future, and a bubble is created: increasing prices feed into the models, inflating valuations, which inflate prices further. This is how most models work, and this is why models are so often wrong. We cannot stick a riskometer into a CDO and get an accurate reading.
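A hedged illustration of this feedback: feed a rolling risk model a simulated bubble price series (steady gains, shrinking volatility) and it reports ever-lower risk just as the danger builds. All numbers below are invented.

```python
# A risk model estimated on bubble-era data reports falling risk.
import numpy as np

rng = np.random.default_rng(0)
# Bubble phase: positive drift, volatility shrinking as the boom matures.
vols = np.linspace(0.02, 0.005, 500)
returns = 0.001 + vols * rng.standard_normal(500)

window = 100  # rolling estimation window
for t in (200, 300, 400, 500):
    est_vol = returns[t - window:t].std(ddof=1)
    print(f"day {t}: estimated daily vol = {est_vol:.4f}")
# Estimated risk falls throughout the boom, so risk-sensitive rules
# permit ever-larger positions exactly when endogenous risk is building.
```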
Risk sensitivity and financial regulations
One of the biggest problems leading up to the crisis was the twin belief that risk could be modelled and that complexity was good. Certainly the regulators who made risk sensitivity the centrepiece of the Basel 2 Accord believed this.
Under Basel 2, bank capital is risk-sensitive: a financial institution is required to measure the riskiness of its assets, and the riskier the assets, the more capital it has to hold. At first glance this is a sensible idea – after all, why should we not want capital to reflect riskiness? But there are at least three main problems: the measurement of risk, procyclicality (see Danielsson et al. 2001), and the determination of capital.
To have risk-sensitive capital we need to measure risk, i.e. apply the riskometer. In the absence of accurate risk measurements, risk-sensitive bank capital is at best meaningless and at worst dangerous.
Risk-sensitive capital can be dangerous because it gives a false sense of security. Just as risk is hard to measure, risk measurements are easy to manipulate. It is a straightforward exercise to make risk measurements give vastly different outcomes in an entirely plausible and justifiable manner, without affecting the real underlying risk. A financial institution can easily report low risk levels whilst, deliberately or otherwise, assuming much higher risk. This of course means that risk calculations used for determining capital are inevitably suspect.
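A sketch of one such manipulation, on invented data: re-run the same VaR calculation over a menu of equally defensible estimation windows and report the friendliest number.

```python
# Same portfolio, same estimator; only the estimation window changes.
import numpy as np

rng = np.random.default_rng(7)
# A calm recent past following a turbulent earlier period.
returns = np.concatenate([rng.normal(0, 0.03, 750),   # turbulent years
                          rng.normal(0, 0.01, 250)])  # recent calm

for window in (250, 500, 1000):        # all "reasonable" choices
    var99 = -np.quantile(returns[-window:], 0.01)
    print(f"window {window:>4}: 99% VaR = {var99:.4f}")
# The shortest window sees only the calm recent period and reports
# roughly a third of the risk of the longest: same positions, very
# different reported risk.
```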
The financial engineering premium
Related to this is the problem of determining what exactly counts as capital. The standards for determining capital are not set in stone; they vary between countries and even between institutions. Indeed, a vast industry of capital structure experts exists explicitly to manipulate capital, making it appear as high as possible while keeping it in reality as low as possible.
The unreliability of capital calculations becomes especially visible when we compare standard capital calculations under international standards with the American leverage ratio. The leverage ratio caps a bank's total assets relative to its capital and is therefore a much more conservative measure than the risk-based capital of Basel 2. Because it is more conservative, it is much harder to manipulate.
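A stylised example of how the divergence arises. The balance sheets, risk weights, and capital levels below are invented for illustration, not actual Basel 2 parameters.

```python
# Two capital measures can rank the same banks in opposite order.

def basel_ratio(capital, assets_by_weight):
    """Capital over risk-weighted assets: sum(exposure * risk weight)."""
    rwa = sum(exposure * weight for exposure, weight in assets_by_weight)
    return capital / rwa

def leverage_ratio(capital, assets_by_weight):
    """Capital over total (un-weighted) assets."""
    total = sum(exposure for exposure, _ in assets_by_weight)
    return capital / total

# Bank A: small, plain loan book.   Bank B: a far larger book of assets
# (exposure, risk weight)           engineered into low risk weights.
bank_a = [(100, 1.0)]
bank_b = [(1000, 0.1)]

print("risk-based ratio  A:", basel_ratio(8, bank_a), " B:", basel_ratio(8, bank_b))
print("leverage ratio    A:", leverage_ratio(8, bank_a), " B:", leverage_ratio(8, bank_b))
# Bank B looks just as well capitalised on the risk-weighted measure
# (8 / 100 = 8% in both cases) but holds ten times the assets per unit
# of capital: the financial engineering premium in miniature.
```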
One thing we have learned in the crisis is that banks that were thought to have adequate capital have been found lacking. A number of recent studies have looked at the various calculations of bank capital and found that some of the most highly capitalised banks under Basel 2 are the lowest capitalised under the leverage ratio, an effect we could call the financial engineering premium.
As Philipp Hildebrand (2008) of the Swiss National Bank recently observed, “Looking at risk-based capital measures, the two large Swiss banks were among the best-capitalised large international banks in the world. Looking at simple leverage, however, these institutions were among the worst-capitalised banks.”
The riskometer and bonuses
We are now seeing risk sensitivity applied to new areas such as executive compensation. A recent example is a report from UBS (2008) on its future compensation model, which states that “variable compensation will be based on clear performance criteria which are linked to risk-adjusted value creation.” The idea seems laudable – of course we want the compensation of UBS executives to be sensitive to risk.
The problem is that whilst such risk sensitivity may be intuitively and theoretically attractive, it is difficult or impossible to achieve in practice. One thing we have learned in the crisis is that executives were able to assume much more risk than their banks desired. A key reason they could do so is that they understood the models and the risk in their own positions much better than other parts of the bank did. It is hard to see why more risk-sensitive compensation would solve that problem. After all, the individual with the deepest understanding of the positions and the models is best placed to manipulate the risk models. Increasing the risk sensitivity of executive compensation seems to be the lazy way out.
This problem might not be too bad, because UBS will not pay out all the bonuses in one go. Instead, “Even if an executive leaves the company, the balance (i.e. remaining bonuses) will be kept at-risk for a period of three years in order to capture any tail risk events.” Unfortunately, the fact that a tail event is realised does not by itself imply that tail risk was high, and conversely, the absence of such an event does not imply that risk was low. If UBS denies bonus payments when losses occur and pays them out when they do not, all it has accomplished is rewarding the lucky and inviting lawsuits from the unlucky. The underlying problem is not solved.
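A back-of-the-envelope simulation of this point, with an assumed tail probability: if every executive runs identically risky positions, a three-year clawback triggered by realised losses penalises only the unlucky.

```python
# Identical true risk for everyone; the clawback sorts on luck alone.
import numpy as np

rng = np.random.default_rng(1)
n_execs = 10_000
p_tail_per_year = 0.02   # assumed chance of a tail loss in any year
years = 3

# Each executive faces the same independent tail-loss probability.
tail_hit = rng.random((n_execs, years)) < p_tail_per_year
clawed_back = tail_hit.any(axis=1)

print(f"share losing deferred bonus: {clawed_back.mean():.1%}")
print(f"implied by identical risk:   {1 - (1 - p_tail_per_year)**years:.1%}")
# Roughly 6% lose their deferred bonus despite taking exactly the same
# risk as the 94% who keep it: the scheme rewards luck, not prudence.
```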
Conclusion
The myth of the riskometer is alive and kicking. In spite of a large body of empirical evidence identifying the difficulties in measuring financial risk, policymakers and financial institutions alike continue to promote risk sensitivity.
The reasons may have to do with the fact that risk sensitivity is intuitively attractive, while the counterarguments are complex. The crisis, however, shows us the folly of the riskometer. Let us hope that decision makers will rely on other methods.
References
Danielsson, Jon and Hyun Song Shin (2003), “Endogenous Risk”, in Modern Risk Management: A History.
Danielsson, Jon, Paul Embrechts, Charles Goodhart, Con Keating, Felix Muennich, Olivier Renault and Hyun Song Shin (2001), “An Academic Response to Basel II”.
Danielsson, Jon (2008), “Blame the models”, VoxEU.org, 8 May.
Hildebrand, Philipp M. (2008), “Is Basel II Enough? The Benefits of a Leverage Ratio”, Financial Markets Group Lecture, London School of Economics.
Taleb, Nassim Nicholas and Pablo Triana (2008), “Bystanders to this financial crime were many”, Financial Times, 7 December.
UBS (2008), “Compensation report: UBS’s new compensation model”.