After using Julia successfully in a few new projects, my latest came to a sticky end because of how it saves large data files.

---

There are many measurements of systemic risk, and a huge number of papers depend on them. But is that work of any use?

---

The almost infinite complexity of the financial system is the main reason it is so hard to keep under control. And that complexity arises because everyone who works in the financial system, including the regulators, has an incentive to increase it.

---

Risk measurements are quite inconsistent. Does that matter, and how should we interpret the results?

---

There are a million ways to measure financial risk, and calculating global risk requires a truly heroic set of assumptions. But it is much easier to calculate *global market risk*, and that is what I set out to do below.

---

As Amazon continues its exponential growth, where will it all end? Growth has to stop, but when and why?

---

The new Apple M1 processor has gotten excellent reviews for speed. Getting curious, I measured its speed on pure C code, LaTeX and R, and compared it to its most recent Intel competitors.

---

Amazon AWS has been recommending ARM Graviton2 as a cost-effective alternative to Intel/AMD instances. So I tried it out.

---

I just tried the same code in R’s data.table and Julia’s DataFrames, and the results are a bit surprising.

---

It is easy to manipulate risk forecasts. If your regulator or compliance officer sets a risk target you don’t like, just tell them what they want to hear and continue taking the risk you like.

---

It is easy to criticise risk forecasting, but it’s rather pointless unless one can come up with proposals. Here are my five principles for the correct use of riskometers.

---

Backtesting is the magical technique that tells us how well a forecast model works. Test the model on history, and we have an objective way to evaluate how good the model is. But does it really work in practice?

---

The way financial risk is measured is by a device I have called the **riskometer**. It is a fantastic thing, plunge it into the bowels of the City of London and out pops a single accurate measurement of risk. Magical. But does it work in practice?

---

The way we manage financial risk has a lot in common with the old concept of scientific socialism. The modern-day riskometer is pseudoscientific, and the increased reliance on it leads to disastrous systemic risk.

---

The financial markets did not have a good 2018, as the media kept reminding us.

---

The riskiest year in human history was 1962: the year of the Cuban missile crisis, the closest we ever came to a nuclear war, the mother of all tail events, where all prices go to zero. Yet volatility that year was average, at 16.5%.

How can market risk be average when tail risk is at its highest?
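For context on that number: annualized volatility is conventionally the standard deviation of daily returns scaled up to a year. A minimal sketch, assuming 250 trading days per year (the post does not state its convention):

```python
import math

def annualized_vol(daily_returns, trading_days=250):
    """Annualized volatility in percent: the sample standard deviation
    of daily returns, scaled by the square root of the trading days."""
    n = len(daily_returns)
    mean = sum(daily_returns) / n
    var = sum((r - mean) ** 2 for r in daily_returns) / (n - 1)
    return 100 * math.sqrt(var * trading_days)
```

With alternating daily returns of plus and minus 1%, this gives an annualized volatility of roughly 15.8%, in the same ballpark as the 16.5% above.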

---

*Perceived risk* is risk predicted by models and *actual risk* is the fundamental underlying risk. We measure perceived risk and care about actual risk. Unfortunately, those two are negatively correlated.

---

One can endlessly criticise risk models, but that is just too nihilistic. So, what are they good for? There are three camps: the model believers, the rejectionists and the healthy skeptics. I make the case for the last below.

---

Medieval mapmakers noted risk of an unknown kind with “here be dragons”. Attempts at measuring extreme risk should come with a similar warning. Just like the sailors of yesteryear, financial institutions will venture into unknown territories, and, just like the mapmakers of that earlier era, modern risk modellers have little to say about them.

---

The stock market had a mini crash yesterday. So how big was that in a historical context?
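One simple way to put a one-day move in historical context (a sketch of the general idea, not the method the post uses) is to rank it against the empirical distribution of past daily returns:

```python
def fraction_worse(todays_return, past_returns):
    """Fraction of historical daily returns that were worse (more negative)
    than today's move; 0.0 means today was the worst day on record."""
    worse = sum(1 for r in past_returns if r < todays_return)
    return worse / len(past_returns)
```

The smaller the fraction, the more unusual the crash.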

---

The European Central Bank has an indicator of systemic risk called the Composite Indicator of Systemic Stress, the CISS. So what sort of signal does it send, and what is it to be used for?

---

Suppose one cares about tail risk; what is the best way to estimate it? There are two, not mutually exclusive, approaches: *statistical* and *structural*. Which is right?

---

Why do the regulatory authorities seemingly fall into the category of model believers, if not quite subscribing to the view that there must be one true model? Well, it is more or less inevitable given the way the regulatory process works.

---

There is a lot of evidence that models are less than perfectly reliable. Why then do we rely so much on models in decision-making, and especially in financial regulation? Because there are three types of people: believers in the true model, skeptics who accept model risk, and nihilistic rejectionists.

---

When designing models, the underlying assumption is often that the model captures the true data generating process. Does a true model exist? To me, the question is completely irrelevant.

---

Last January I looked at how the Swiss FX shock affected the most popular risk measures. Events of the past week give us another interesting test. My daily risk forecast shows the various risk measures for a number of assets; here I focus on the S&P 500 and the following picture, taken from the site today:

---

**May 14, 2015**

Bloomberg today had an interesting piece called “Market Moves That Are Supposed to Happen Every Half-Decade Keep Happening”. Here is their self-described “terribly simplistic list”:

---

So, does ES capture tail risk while VaR does not? If so, the Basel Committee is correct, and we should all use ES. Is that true?
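To make the distinction concrete, here is a minimal historical-simulation sketch (my own illustration, with an assumed quantile convention, not the Basel definition): VaR is the loss at a chosen quantile, while ES averages all losses at or beyond that quantile, which is why ES is said to see into the tail.

```python
def hist_var_es(returns, p=0.05):
    """Historical-simulation VaR and ES at tail probability p.
    VaR: the (1 - p)-quantile of the loss distribution.
    ES:  the average loss at or beyond VaR."""
    losses = sorted(-r for r in returns)      # losses, smallest first
    k = int(len(losses) * (1 - p))            # index of the quantile loss
    var = losses[k]
    tail = losses[k:]                         # the worst p-fraction of days
    return var, sum(tail) / len(tail)
```

Two samples can share the same VaR yet have very different ES, depending on how bad the losses beyond the quantile are; that is the whole argument for ES.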

---

I just looked again at what I did on the Swiss FX shock, examining how the various risk measures performed in the days after the event, and also the risk of the inverse FX rate.

The original analysis only considered the risk of the Franc appreciating, but why not look at the risk of the euro appreciating?

---

© All rights reserved, Jon Danielsson, 2022