The following is taken from my book Illusion of Control.
There are many ways we can measure financial risk. We could study the deep structure of the financial system, identify all the interlinkages and the hidden corners where risk is taken. But that is difficult and costly.
Much better to use a riskometer, a purely statistical approach. The term comes from a blog piece I wrote for VoxEU.org in 2013 titled "The Myth of the Riskometer".
It is a fantastic thing, the riskometer. Plunge it into the bowels of the City of London and out pops a single accurate measurement of risk. Magical. But does it work in practice?
Riskometers are used everywhere: by someone investing their own money, by risk managers controlling proprietary traders, by a bank determining the amount of capital it holds, all the way up to the financial regulators concerned with the stability of the entire financial system.
They promise to distill the risk of entire financial institutions into one number.
That idea captures both the best and the worst aspects of riskometers. When it comes to making decisions, it is really useful to have a single unambiguous measurement of risk. A number with all sorts of caveats is not nearly as helpful. The decision-makers, the people who run banks, and the regulatory agencies are just like President Harry S. Truman, who demanded, "Give me a one-handed economist. All my economists say 'on the one hand...', then 'but on the other...'".
The riskometers are cheap, quick, and objective — scientific really. The alternatives are subjective, slow, and expensive. In the scientific world of risk, with almost limitless data, sophisticated statistical methods, and all the processing power one could want, how can the riskometer not be the best way to measure risk?
The problem is that riskometers can only capture a caricatured view of risk.
A caricature exaggerates particular facial features, perhaps making the nose bigger and chin smaller, ending up as something clearly related to the face in some way but still far from an accurate representation. It is the same with riskometers. Any particular implementation will focus on and often exaggerate some aspects of risk and ignore others.
That means that riskometers are not nearly as accurate as most of us think they are, and that goes especially for senior decision-makers. Those who actually are on the ground, designing riskometers and reporting risk to their superiors, know better. The reason that understanding does not get transmitted to the bosses is the very complexity of the riskometers.
There are two more issues I will return to a bit later. Riskometers are really hard to validate by backtesting, and the best riskometers are specific to the objectives of the end user.
There are a lot of riskometers out there. Because it is not very hard for someone who knows programming and statistics to create yet another one, it is not surprising that academia and consultancies are full of people churning out riskometers, all producing different measurements of risk for the same assets. It can be an easy way of getting a PhD in statistics, physics, computer science, or economics. The same PhDs tend to get jobs in the financial industry, producing riskometers for government agencies and banks.
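To see how two perfectly standard riskometers can disagree about the same assets, consider a minimal sketch (my own illustration, not from the book): 99% Value-at-Risk computed two ways on the same simulated return series, once from the empirical tail (historical simulation) and once under a normality assumption (parametric VaR). The simulated returns and the two implementations are hypothetical, but the divergence between the two numbers is the point.

```python
import random
import statistics

# Simulate 1,000 daily returns: mostly mild noise, with occasional
# larger losses so the distribution has a fat left tail.
random.seed(1)
returns = [
    random.gauss(0, 0.01) - (0.03 * random.random() if random.random() < 0.05 else 0)
    for _ in range(1000)
]

# Riskometer 1: 99% historical-simulation VaR.
# Read the loss straight off the 1st percentile of the empirical distribution.
hist_var = -sorted(returns)[int(0.01 * len(returns))]

# Riskometer 2: 99% parametric VaR.
# Assume returns are normal; 2.326 is the 99% quantile of the standard normal.
mu = statistics.mean(returns)
sigma = statistics.stdev(returns)
param_var = -(mu - 2.326 * sigma)

print(f"historical-simulation 99% VaR: {hist_var:.4f}")
print(f"parametric (normal)   99% VaR: {param_var:.4f}")
```

Both numbers claim to be "the" risk of the same portfolio, yet they differ, because each riskometer caricatures the return distribution in its own way: the parametric one ignores the fat tail the simulation deliberately built in, while the historical one depends entirely on the particular sample it happens to see.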