MIL-OSI Banking: Remarks by A. Kriegler to the 2014 Property and Casualty Insurance Industry Forum

Source: Canada Office of the Superintendent of Financial Institutions

Quote

“… what was a good risk management theory in years past probably still is. It’s just that since there are few absolute truths in how to turn those theories into practices, many of the debates around them reoccur.”

What goes around, comes around

Good evening and thank you for the opportunity to join you. It has been a little over 12 years since I first attended a Langdon Hall event – it was the Financial Services Invitational Forum – and in remembering that experience as I prepared my remarks for this evening, I was struck by two thoughts.

First, by just how much has happened to the Canadian financial services industry – indeed to financial services globally – since May of 2002. At that point, the dot-com bubble was still in its death throes and, while there was a subprime credit card crisis brewing south of the border, much of the run-up in U.S. mortgage markets that culminated in the global financial crisis had yet to really occur.

My second thought was, perhaps inevitably, that despite all that has happened around us, the issues we all worry about and debate as stewards and managers of risk really haven’t changed very much at all.

I think that is true because if the tumultuous events of the last decade or so teach us anything, it is that finance – and the institutions and the people who work in them – tends to fall into the same patterns again and again. In turn, of course, that means what was a good risk management theory in years past probably still is. It’s just that since there are few absolute truths in how to turn those theories into practices, many of the debates around them reoccur.

So what I would like to do this evening is to talk a bit about a debate amongst the world’s banking regulators and compare it to the debate that was raging in 2002. At its heart, the debate is much the same now as it was then. It’s about the role of models and data and the value they bring to risk management and to setting capital requirements for regulated financial institutions.

Lest you think that I have forgotten that I am at the P&C forum, let me explain why I am talking about an issue that seems primarily related to banking.

First of all, P&C insurance is an industry that is all about pricing risk accurately based on experience, so I thought the theme of models and data – and the pitfalls of using them without sufficient skepticism – would resonate with you.

Second, some of the challenges that banking faces in using models – short data histories, low-frequency, high-severity events, and unstable statistical relationships, to name but a few – are not unique to banking; they also exist in P&C. While catastrophe (CAT) risks are the obvious example, others can also appear when unexpectedly correlated liabilities arise due to changing societal norms or judicial interpretations. So there may be things that the two industries can learn from each other.

Finally, I think it goes without saying that the global regulatory community needs to get it right when it comes to setting capital requirements for financial institutions – because the cost to society is demonstrably too high if we don’t.

So let me take you back to 2002, when the world’s banking regulators – including, of course, OSFI – were in the middle of creating Basel II. A significant update to the original 1988 Basel I Accord, the new regime’s capital charges were intended to be much more sensitive to risk. Basel I had been a huge step forward when it was introduced, but its shortcoming was that, with only a few discrete risk categories, there would always be an incentive for financial institutions to max out the risk (and therefore earn the highest return) on the required capital amount for any given asset type.

So Basel II would introduce, in the advanced approaches, a far more continuous capital charge – based on the actual observed risk of the assets in question. The risks and therefore the capital would be more accurately calculated and more efficiently allocated. Even in the standardized approaches, the granularity of asset types was increased dramatically and tied more explicitly to objective measures of risk, namely, ratings from credit rating agencies. Perhaps most important, the effort that financial institutions would have to go through to comply with the new accord would force them to understand their risks better and simultaneously give supervisors a better window on institutions’ controls and risk governance.
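
To give a sense of what “more continuous” means in practice, here is a simplified sketch of the advanced internal-ratings-based risk-weight function for corporate exposures – omitting the maturity adjustment, and with the asset correlation ρ itself prescribed by the accord as a function of PD:

\[
K \;=\; \mathrm{LGD}\times\left[\,N\!\left(\frac{N^{-1}(\mathrm{PD})+\sqrt{\rho}\,N^{-1}(0.999)}{\sqrt{1-\rho}}\right)-\mathrm{PD}\right]
\]

Here N is the standard normal distribution function, PD is the bank’s own estimate of a borrower’s one-year probability of default and LGD its estimate of loss given default. The point is simply that the capital charge K moves continuously with the measured risk rather than jumping between a handful of fixed buckets.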

To say that Basel II was controversial is an understatement. It required an enormous investment in people, technology, and data on the part of the world’s banks – institutions not known for wanting to spend money if they don’t have to. It was criticized as being too complex, too opaque, too easy to arbitrage and, most damning of all, pro-cyclical. Indeed, regulators of countries that were part of the process raised some of these legitimate concerns.

Perhaps both the praise and the criticism had some basis. On balance though, both the more continuous nature of risk measurement and the improvements in risk management that would come with banks using those measurements meant the whole exercise was clearly a net positive for the system.

My concern – and what I was asked to come to Langdon to talk about back then – was simply that because of the competitive nature of financial markets, and the fact that human beings prefer being part of what’s seen to be a winning consensus rather than being an outlier, there really weren’t many substantively different models out there. If that was true – and without wanting to overstate the point – there was a risk that everyone would be getting at least directionally the same signals from models based on the same theories at the same time, and everyone would head towards the same side of the boat as a result.

To use the example of credit risk models, many of the myriad ideas and approaches in the market today for measuring credit risk of publicly traded enterprises are rooted in just one place: the work Robert Merton first published in 1974.
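
For those who want the reference point: Merton’s insight was to treat a firm’s equity as a call option on its assets, with the face value of its debt as the strike. In the standard formulation, with asset value V, asset volatility σ_V, risk-free rate r and debt of face value D maturing at time T, the equity value is

\[
E = V\,N(d_1) - D\,e^{-rT}N(d_2), \qquad
d_1 = \frac{\ln(V/D) + \left(r + \tfrac{1}{2}\sigma_V^2\right)T}{\sigma_V\sqrt{T}}, \qquad
d_2 = d_1 - \sigma_V\sqrt{T},
\]

and the risk-neutral probability of default at T is N(-d_2). Most “distance-to-default” style credit measures in use today descend from this one relationship.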

This was a concern not just for the internationally active banks using quantitative models. Because the accord baked credit ratings into the standardized approach, a similar concentration of signals would be driving smaller banks’ behaviours. Indeed, as the rating agencies – for structured products anyway – used models based on the same underlying theories as the banks, those markets were particularly vulnerable to herd behaviour.

At the same time though, all these models were to be built and run by individual banks managed by different people and supervised by different supervisors in different countries. As a result, there appeared to be considerable offsets to the potential risks of group-think (or group analysis). Most importantly of all, though, if the banks actually used the models to run their businesses, to charge those businesses for the capital and other resources they used, then the front lines of those institutions would make better informed risk-return trade-offs.

Between then and now, of course, came the financial crisis. As we get to today’s debate on the value of models, consider just two of the many observations from those difficult times:

  • There was not enough high-quality capital, not enough liquidity and too much leverage in the system to support the risks that were being taken, and the cost of the capital and liquidity that did exist was not being charged to the right products and businesses; and
  • When the crisis hit, just as in every previous panic, everybody ran for the exits at the same time. They just did it faster, more comprehensively across more asset classes and jurisdictions, and with more apparent coordination than before.

So today, as part of the reflection on what can be done to prevent a repetition of the crisis, the debate about the role of models and data has understandably surged anew. And, just as was the case a decade ago, some of the most pointed concerns about using models to set capital standards are coming from regulators representing countries active at the Basel Committee.

In my mind, these concerns fall into two main camps: a concern about models themselves and a concern about whether individual institutions are the right places for capital adequacy to be measured – no matter how accurate the models that measure it might become.

One could say that Andrew Haldane, then Executive Director of Financial Stability at the Bank of England, best defined the first concern in 2012 when he delivered his speech titled “The dog and the frisbee.”

A gross oversimplification of his thesis is that the regulatory system – and the models and data that support it – has simply become too complex in absolute terms. Further, in the absence of far more historical data than the experience of the financial markets can provide, complex models will necessarily underperform simple indicators in the measurement of risk.

Federal Reserve Governor Tarullo recently echoed that first concern around the complexity and opacity of firms’ capital models and added that:

“…the relatively short, backward-looking basis for generating risk weights makes the resulting capital standards likely to be excessively pro-cyclical and insufficiently sensitive to tail risk.”

He went on to articulate a solution for the second concern – banks using their own models to set capital levels – by suggesting that the US system of supervisory stress tests was a far better way to set minimum capital requirements and that the current system of bank-administered models should be discarded. Not only do the stress tests not use the firms’ own estimates, he said, but because they are done centrally they can take into account system effects.

To summarize then, the worry is that the models used to set capital requirements are too complex for the limited data that they are built on, don’t use enough stressed data even when it is available, and when run by banks themselves, are both subject to arbitrage and incapable of taking into account important correlations between institutions.

So what is our reaction to this debate? And more importantly for this evening, what should you in the P&C industry take from it?

First of all, it should be evident that there is no simple right answer – and because there is no simple right answer, these issues will come up again and again. Indeed, I suspect that the debates around the use of models will feature prominently in the discussion of Basel Accords 4 through 10.

Second, I would say that these issues – in addition to coming around again and again – will be increasingly important. I say that because our ability to collect data, to process it and to look for patterns within it, is getting better and faster by the day. And if the data is there, it is going to be used.

Third, since people have evidenced a nasty tendency to take at face value whatever answer a model spits out, the consequences of models getting it wrong will also become more severe.

With all that in mind, here are a couple of thoughts on the current round of debates…

The concern around complexity and the availability of sufficient data to support complex models is a fair one. At the same time, perhaps the most important data set for any model seeking to estimate the consequences of difficult times is data from those times, which we unquestionably have. And not only does the world have the experiences of the last several years to learn from, regulators have enshrined the requirement that banks use it.

For example, the so-called Basel 2.5 revisions introduced the idea of a stressed value-at-risk – the need for a bank to know and be capitalized for what happens to its trading portfolio if things go badly wrong, say as in 2007–2009. The passage of time – and even the return of stability for long periods – does not remove the need for calculating and using a stressed VaR based on a “continuous 12-month period of significant financial stress relevant to the bank’s portfolio.”
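
By way of illustration only, here is a minimal historical-simulation sketch of that idea – an empirical percentile of daily profit-and-loss with a simple square-root-of-time scaling to a 10-day horizon. The P&L series and parameters below are random placeholders, not any actual portfolio or the prescribed regulatory calculation:

import numpy as np

def historical_var(pnl, confidence=0.99):
    """Empirical value-at-risk: the daily loss exceeded with probability 1 - confidence."""
    return -np.percentile(pnl, (1.0 - confidence) * 100.0)

rng = np.random.default_rng(0)
# Placeholder daily P&L for the same portfolio, revalued over
# (a) the most recent year and (b) a chosen 12-month stressed window.
recent_pnl = rng.normal(0.0, 1.0, 250)
stressed_pnl = rng.normal(-0.5, 3.0, 250)

var_recent = historical_var(recent_pnl)
svar_1day = historical_var(stressed_pnl)

print(f"1-day 99% VaR, recent window:   {var_recent:.2f}")
print(f"1-day 99% VaR, stressed window: {svar_1day:.2f}")
print(f"10-day stressed VaR (sqrt-of-time scaling): {svar_1day * np.sqrt(10):.2f}")

The numbers themselves are meaningless; the point is that the stressed window stays in the calculation no matter how benign the recent data look.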

Stress testing is a very important tool, particularly when it is coordinated centrally to consider system-wide effects. The International Monetary Fund just concluded its review of Canada’s financial system, and a significant component of that assessment was based on stress testing. Importantly, the results reflected not only the IMF’s own stress test but also separate tests from the Bank of Canada and from OSFI, as well as bottom-up stress tests run by the major institutions.

Stress testing, though, may not be an end in itself. Not only is a nationally administered stress test, at its heart, a really big model, with all of the model vulnerabilities we’ve talked about, it also concentrates the risk and the responsibility of getting it right into one single place – outside of the risk management and governance frameworks of the institutions themselves.

If you recall my observations from the crisis, they included not only the view that there was evidently not enough capital and liquidity in the system, but also that what was there was in the wrong place. That is because I feel that some of the businesses that unwound during the crisis would never have existed had they been charged appropriately for the risks that they took and the resources – current and contingent – that they consumed. Ensuring that they do get charged appropriately requires work at a very granular level within the institutions themselves.

Further, I noted that when things got ugly, people rushed for the exits as they always do, but that the rush was broader and more coincident than perhaps ever before.

So if we concentrate on models, in the form of stress tests, in just one place, it will be more important than ever that they are right… because they’ll be sending the same signals to everyone at the same time.

To be sure, nobody has suggested that banks get rid of their risk models entirely – only that they stop using them for setting capital. Our only question is this: if the models don’t make a difference in the capital calculations, how important will they be in day-to-day business decisions? While supervisory efforts can help focus institutions’ attention and supervisory-driven capital charges will change behaviours, they are fairly blunt instruments.

So perhaps there is a middle ground. Stress testing, coordinated and overseen centrally, is an important tool and should be maintained and even expanded in its use for risk management and for supervisory purposes. Simplified approaches to bank models that contribute to the capital calculation, perhaps constrained with floors at a suitably granular level, can add different points of view while keeping the measurement process meaningful for institutions. The relative weighting of these inputs remains to be seen. The only thing that is sure is that the debate will continue.

Let me conclude by linking the issues I’ve discussed back to P&C. P&C, too, uses vast amounts of claim experience and, increasingly, vast amounts of customer behaviour data to support the pricing of its products. The use of this data, together with increasingly sophisticated models, is driving the industry’s evolution towards increased efficiency and competitiveness. As I hinted at the outset, though, one consequence of increased efficiency is increased volatility.

As in many other competitive arenas, P&C insurers can be subject to the winner’s curse – that the winning bid is often not enough to cover the costs of all the risks. When that is combined with the human, competitive instinct to ignore the risk of the winner’s curse, individual firms can respond as if they were a herd – with the expected resulting cycles in underwriting rigour and profitability.

The increasing use of big data to support product design, underwriting and pricing decisions – especially if they are based on models that derive from a small number of theoretical foundations – has the prospect of taking the longstanding competitive underwriting cycles and making them both more frequent and more severe.

That risk exists even with traditional products when the distribution channels are made quicker and decisions more automated. It is exacerbated when the dataset supporting these models is lean and the reality that underlies the data is subject to potential change, as in the case of new or expanding products. Riskiest of all, of course, are products subject to infrequent, high loss events like CATs. Earthquake cover – where risk assessment is supported by a handful of modeling approaches – is one example. Overland flood is perhaps even less supported by comprehensive and stable data sets.
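
To make the data-scarcity point concrete, here is a purely illustrative sketch – Poisson event counts, heavy-tailed severities, and frequency and severity parameters invented for the example – showing how unstable a tail estimate becomes when it is drawn from only a few decades of experience:

import numpy as np

rng = np.random.default_rng(1)

def simulate_annual_cat_losses(n_years, freq=0.2, shape=1.5, scale=100.0):
    """Total catastrophe loss per year: Poisson event counts with
    heavy-tailed (Pareto-type) severities. All parameters are invented."""
    counts = rng.poisson(freq, n_years)
    return np.array([scale * rng.pareto(shape, n).sum() if n else 0.0
                     for n in counts])

# Three equally plausible 30-year "histories" give very different
# estimates of the 1-in-200-year annual loss.
for trial in range(3):
    losses = simulate_annual_cat_losses(30)
    print(f"history {trial + 1}: estimated 99.5th percentile annual loss = "
          f"{np.percentile(losses, 99.5):,.0f}")

Real CAT models are of course vastly richer than this, but they face the same fundamental constraint: the events that matter most are the ones for which we have the least data.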

So I will close with a not unexpected caution. As analytical technology becomes ever more important, pause from time to time to ask a few questions: Is the data supporting the decision comprehensive and stable enough? Are the models you are using well designed and properly maintained? And is everyone else getting the same answer at the same time?

Thank you.
