Many risk specialists are now studying ways to bolster the statistical models that guide the risk management processes of fund managers and banks, and some of them are asking if models from other industries can help.
Professor Ingo Walter, a vice dean at the Stern School of Business, New York University, is among a number of academics interested in whether ‘near-miss’ risk models, which are used in aviation to help avoid plane crashes, can be adapted to the operational risks that financial institutions face.
Professor Walter, who also heads SimCorp StrategyLab, says: “If the control tower clears a plane to take off and at the same time clears another plane to cross the runway, then this is a potentially fatal mistake. But if the guy who is set to take off sees the other plane and postpones it, a disaster is averted.”
The pilot will report this in the aviation log book, says the professor, and reports of near misses build up a database that becomes statistically useful, allowing near misses to be correlated with factors such as weather conditions, volumes of traffic and times of day. This means probabilities that accidents will occur can be calculated.
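The bookkeeping behind such a database is simple enough to sketch. A minimal illustration in Python, using entirely hypothetical log entries, of how near-miss reports can be tallied against conditions such as weather:

```python
from collections import Counter

# Hypothetical near-miss log: each entry records the conditions
# under which an incident was narrowly avoided.
near_misses = [
    {"weather": "fog",   "traffic": "heavy", "hour": 7},
    {"weather": "clear", "traffic": "heavy", "hour": 8},
    {"weather": "fog",   "traffic": "light", "hour": 22},
    {"weather": "fog",   "traffic": "heavy", "hour": 7},
]

# Count near misses by weather condition to see which factors
# co-occur most often with reported events.
by_weather = Counter(entry["weather"] for entry in near_misses)

# Relative frequency: a crude estimate of how strongly each
# condition is associated with reported near misses.
total = len(near_misses)
rates = {cond: count / total for cond, count in by_weather.items()}
print(rates)
```

With enough reports, frequencies like these become the conditional probabilities from which accident likelihoods can be estimated.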
But any statistically driven risk model faces criticism for being a backward-looking tool – just like value at risk (VaR), the main statistical risk management tool used in finance to measure market risk.
VaR has been blamed in part for the collapse of banks in the financial crisis. Referring to VaR, Lord Turner, chairman of the UK Financial Services Authority, said there had been a “misplaced reliance on sophisticated maths” by firms and regulators.
VaR captures the probability of a financial loss of a given size on a particular day due to market risks such as volatility levels and correlation between asset classes. But critics say VaR’s ability to ‘predict’ the future is only as good as the historical data that goes into it.
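The ‘historical simulation’ approach to VaR makes that criticism concrete: the estimate is literally read off past data. A minimal sketch, using a made-up return series rather than any real portfolio:

```python
# Hypothetical daily portfolio returns (as fractions). In practice
# the historical window would be far longer than this.
returns = [0.004, -0.012, 0.007, -0.003, 0.011, -0.021, 0.002,
           -0.008, 0.015, -0.005, 0.009, -0.017, 0.001, -0.002,
           0.006, -0.010, 0.003, -0.026, 0.008, -0.004]

def historical_var(returns, confidence=0.95):
    """One-day VaR read straight off the empirical distribution:
    the loss level exceeded on roughly (1 - confidence) of
    historical days."""
    losses = sorted(-r for r in returns)      # losses, ascending
    index = int(confidence * len(losses))     # empirical quantile
    index = min(index, len(losses) - 1)
    return losses[index]

var_95 = historical_var(returns)
print(f"95% one-day VaR: {var_95:.1%} of portfolio value")
```

The model can only ‘see’ losses of a kind already present in the input window – which is precisely the critics’ point.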
Near-miss models rely on past experience too, and because they centre on rare events there is little data to feed them in the first place.
Another particular problem with adapting near-miss models to finance is that to work well there may need to be an honest reporting of mistakes – and, of course, mistakes never happen in finance.
The advantage of the take-off scenario above is that there were more observers than just the control tower that made the mistake. The faultless pilot would have no qualms about logging the error.
A so-called fat finger trade where a trader pushes an extra nought into a trade order might send losses or returns sky high, but if the number is aggregated along with thousands of other trades into a single number explaining risk, the error is lost in the detail – unless the trader reports it.
“In aviation it is mandatory to report near misses; in finance it is not,” says Professor Walter. “Before near-miss modelling could be employed in finance we need to see if there are any reasons why it cannot be done. Self-reporting is not going to happen – you won’t report a mistake if you can sort it out. We need to develop an incentive system that rewards people for reporting problems rather than punishes them.”
A single number
Aggregating many risk numbers into one figure has been a problem for VaR. VaR aggregates individual market risk figures arising from correlation risk, volatility and credit risk, and may distil numerous portfolios into a single number.
“The industry has come to realise that no single number describes risk,” says Dr Lisa Goldberg, an authority on statistics at University of California, Berkeley, and a director at MSCI Barra.
In another sign of how mathematicians are trying to adapt statistical methods used in other industries, Dr Goldberg is studying how ‘extreme value theory’ could be used to model shocks in the financial system. This theory of statistics is used in structural engineering where engineers have to face extreme events such as once-a-century floods or space shuttle crashes. Whereas VaR assumes normal market conditions, extreme value theory looks at the so-called ‘fat tails’, or rare events like those that triggered the latest crisis.
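The gap between the two assumptions is easy to demonstrate. A sketch – using simulated, deliberately heavy-tailed losses, not MSCI Barra’s actual methodology – comparing the extreme quantile implied by a fitted normal model with the empirical one:

```python
import random
import statistics

random.seed(42)

# Hypothetical loss sample with a heavy (Pareto) tail, standing in
# for real market losses; paretovariate(2.5) draws from a Pareto
# distribution with tail index 2.5.
losses = [random.paretovariate(2.5) for _ in range(10_000)]

# A normal model fitted to the same data (the assumption behind
# plain VaR)...
normal = statistics.NormalDist(statistics.mean(losses),
                               statistics.stdev(losses))

# ...versus the empirical distribution. Compare 99.9% quantiles.
q = 0.999
normal_q = normal.inv_cdf(q)
empirical_q = sorted(losses)[int(q * len(losses))]

print(f"normal model 99.9% quantile: {normal_q:.2f}")
print(f"empirical 99.9% quantile:    {empirical_q:.2f}")
# The empirical tail quantile sits well above the normal estimate:
# the 'fat tail' the normality assumption misses.
```

Extreme value theory models that tail region directly instead of letting the bulk of the data average it away.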
“This is part of the next generation,” says Dr Goldberg.
But an element of backtesting data is also used in extreme value theory, so is this not just a case of replacing one statistically driven, backward-looking measure with another?
Possibly, but Dr Goldberg says extreme value theory would be employed alongside VaR and would not replace it. “VaR is a useful tool, not a villain, but investors need more than just VaR to know what’s going on.”
MSCI Barra is looking to include leverage and liquidity factors in its extreme value theory-based models. Standard VaR missed these, and it was the failure to capture liquidity in particular that caused VaR to fail.
Rob Gardner, a co-founder and partner at Redington Partners, a pensions risk advisor, says: “The triple-A rated ABS market was very liquid three years ago, and back then VaR analysis told you that the probability of a loss was small. When ABS values declined significantly and liquidity dried up, VaR analysis showed that although losses would be larger, it did not pick up on the extreme tail loss and impact of liquidity risk...”
There are signs, however, that some investors are wary now of seemingly any statistical model. Stephen Docherty, head of global equities at Aberdeen Asset Management, says: “The problem with VaR is the same with any mathematical model. If you assume it is correct, you will have problems.
“What you put in affects what will come out. Garbage in, garbage out, as they say. Statistics can be used to get the results that you want.”
To put it another way, any statistician can now give the probability that the collapse of Lehman Brothers happened on September 15, 2008.
But as statistics experts, Professor Walter and Dr Goldberg are not saying that statistics alone should be used for financial risk management. Rather, these models should complement a wider risk-management strategy. Forward-looking risk assessments will likely become far more important, and for some risk specialists it is the opinions of fund managers about future asset prices that lie at the heart of these.
Denis Gallet, head of risk management at Fortis Investments, says: “Our portfolio risk management takes place first at the level of the investment team and then at the level of the firm globally. For asset managers, the first risk manager in the company is always the portfolio manager.
“We look at the risk within portfolios and add a layer of stress testing to make sure we have a better view of the tail of distribution.”
Simon Hookway, CEO of MSS Capital, says: “The right thing to do in the first place is hire the right people. Then your risk management tools should focus on the behaviour of the people managing the money, checking that they are keeping their promises and not assuming positions that are fundamentally at odds with what was written in the prospectus.”
Similarly, Professor Walter says: “Make sure people stay away from the office for a couple of weeks and let someone else run the portfolio.” Mandatory time away is a sure way to see whether promises are being kept – or whether fraud is present.
This is not the first time that the financial industry has reflected on statistically guided risk management models. The trouble is that there’s always a danger that the models created by maths boffins will be inexplicable to CEOs who lack a statistics education. As Lord Turner notes: “The very complexity of the mathematics used to measure and manage risk … made it increasingly difficult for top management and boards to assess and exercise judgement over the risks being taken.”
So not only have financial institutions over-relied on stats, they’ve over-relied on stats that they don’t understand. The next time you fly, hope that your pilot’s grasp of statistics is better than the bankers’.
©2009 funds europe