The unifying theory of 21st-century catastrophes by mattsteinglass
June 5, 2010, 1:20 pm
Filed under: Economics, Liberalism, Politics, Uncategorized

DAVID LEONHARDT is among my favourite writers, and when my RSS reader showed me that this weekend he got a slot in the New York Times Magazine to talk about “What the oil spill and the financial crisis have in common,” I got all excited. Then I read it, and it’s…pretty good. Leonhardt’s premise is that what the Deepwater Horizon blowout has in common with the global financial crisis is heedlessness of tail-end risk. Black swans, an unwillingness to take seriously the consequences of very low-probability, very high-damage eventualities, and all that. And this is certainly true. As Leonhardt writes, BP executives had never seen an oil rig blow up, so they didn’t really believe it could happen, just as Ben Bernanke didn’t really believe a nationwide real-estate crash could happen.

But this isn’t the main theme the two events have in common. The main theme they have in common is much simpler than that, and has more moral valence. And it’s the main theme not just for the oil blowout and the financial crisis but for the Katrina disaster and the Enron collapse and the Chinese melamine milk scandal and an extraordinary array of scandals, disasters and tragedies so far this century. The main theme they have in common is regulatory failure. The regulations weren’t strong enough, and the regulators didn’t do their jobs. Oil companies were allowed to self-certify, and MMS inspectors let them hand in their own inspection reports in pencil, then traced over them in pen*, and approved their design changes within five minutes with no real review. Non-bank financial institutions escaped regulations that had been written to cover banks, and when SEC inspectors were sent in to banks to monitor suspicious debt-hiding activities they spent their time downloading porn. Dyke safety standards established by the Corps of Engineers were inadequate, and officials at FEMA were incompetent. And, obviously, the people’s elected representatives chiefly clamoured for weaker regulations and tried to stop regulators when they did attempt to enforce the rules.

We may not be heading towards an End of History, but Hegel was right that sometimes there’s such a thing as a Weltgeist that moves directionally from decade to decade, and what we’re seeing here is comeuppance (or, as Hegel would put it, the antithesis) for the deregulatory exuberance of the 1980s and 1990s. Leonhardt concentrates on the unfortunate human tendency to discount the highly unlikely. This is certainly a factor, but as advice, it’s only partially useful. If the lesson of the catastrophes of the noughties is to pay attention to tail-end risk, then we should all be running around building nuclear fallout shelters and working out deflection strategies for massive asteroid strikes. And that’s not going to happen. (Though in the case of climate change, one of Leonhardt’s examples, it is useful: we should be paying more attention to the risk that global temperature rise by 2100 will be near the catastrophic 6-degree-Celsius high-end estimate, not the merely awful 2-degree median estimate.) But I don’t think that is the main lesson. The main lesson is simpler and more concrete: government regulations need to be more restrictive, regulators need to be more aggressive, better-paid, and more powerful, and they need to stop people and corporations more often from doing things that may be profitable but pose unacceptable risks to the public. We had this theory for a while that economic self-interest would prove sufficient disincentive to foolish risk-taking. But now the Gulf of Mexico is on fire, so I’m afraid we need to go back to the old-fashioned system with the rules and the monitors carrying sticks. Sorry.

* It turns out this probably isn’t true. The Interior Department Inspector General’s report says there were inspection reports filled out in pencil and then traced over in pen, but it’s likely that the inspectors themselves filled them out in pencil for convenience in case of corrections; investigators couldn’t find any evidence that any had been filled out by the oil company. They had apparently heard a rumor that this had happened, but couldn’t substantiate it.


7 Comments so far

Smart post, but I think Hegel was wrong in that, if the thesis is regulation and that had its heyday in the 70s and the antithesis was laissez-faire which got the next three decades to show its merits, we should have gotten to a synthesis, maybe one in which regulation is careful and firm rather than scattered salt-shot.

I don’t think regulators deal any better with unlikely events than other agencies. In truth, I don’t see Horizon or Massey as the victims of wayward-comet caliber unlikelihoods. Complex manmade systems breaking down with disastrous consequences isn’t exactly a one-off. You just named several from the last decade, but you’ll find plenty in the 70s and early 80s when they regulated what side a man should dress on.

I’d say the policy prescription is to not try to regulate what you can’t and regulate firmly but discretely where it is likely to matter. In other words, regulators need to mean what they say, not say more than they mean and be allowed to shoot people as well as fine them.

Comment by citifieddoug


While allowing businesses to self-certify is wrong, I am highly skeptical that more regulation from the people responsible for running Amtrak and the Post Office is the answer.

Comment by comradejoe

Comradejoe, that sounds to me like “I don’t want to think about it.” Neither the post office nor Amtrak is run by regulators, and my dad and I had lunch yesterday at a chain restaurant that would do much better if it were run like the post office. We’re all slow-thinking idiots. Some of us just happen to work in government while others indolently make a hash out of private businesses.

Comment by citifieddoug

comradejoe, to be flip, I don’t think the people who run Amtrak should regulate offshore oil drilling. To be more serious, it looks like there’s a problem with the people responsible for inspecting oil drilling in the Gulf all coming from southern Louisiana and having been friends since high school with the guys who drill the wells. This is perfectly understandable, but it also makes for poor inspection. Since the Romans, administrators have known that you have to rotate your prefects or they get too cozy with the locals and stop doing a good job for the Empire.

But my feelings on this are animated by comparisons with Vietnam. That example of the Chinese melamine milk scandal is meant very seriously: this is what happens in a country where there is no strong history of serious health inspections. Here in Vietnam, it is essentially impossible to know what’s in your food. That is really scary. People deal with it by not thinking about it too much, and more recently by setting up all-private organic food systems, but going organic is so expensive that it’s not a realistic solution for the population at large, only for little swathes of the bourgeoisie. Without health inspectors, we in the US would be in the same boat as the Vietnamese.

And it’s democratic pressures on government that keep inspectors honest. Over the past 5 years, construction cranes started collapsing in NYC and it turned out the inspectors had been getting bribed not to show up. What fixes this situation? Scandal, a functioning legal liability system, the press, and democratic pressures on city government. The mayor either fixes the inspectors or he loses the next election. Good for democracy, right?

But this only works as long as *people understand that what they need to demand is more and better inspectors and regulations*. If people become convinced that more inspectors from teh gummint will only make things worse, then there is no mechanism pushing for stronger safety regulations and inspection. The cranes come crashing down, everybody moans about the human tendency to underestimate tail-end risk, and nobody does anything about it. And in that case I submit to you that we are well and truly etc.

Comment by Matt Steinglass

I agree that tail risk is the right way to think about this. (Normally just called tail risk, not tail-end risk.)

Unfortunately I don’t think regulation driven by scandal and democracy is anywhere near sufficient. The reasons for that also help us think about the problem more clearly.

Consider any of these problems — cranes falling off buildings, oil wells blowing out underwater, drug side effects killing people, etc. A tail risk event will maybe kill the company unfortunate enough to get hit by it, but all the other companies taking the same risk will be fine. Meanwhile they will have made much more profit by ignoring the tail risk. So it is actually rational for each company to ignore the tail risk even if they fully understand it is there.
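The arithmetic behind this point can be sketched with made-up numbers (all the figures below are invented for illustration): because limited liability caps a company’s downside at its own value, skipping precautions can pay off for the firm even when society loses on net.

```python
# A minimal sketch, with invented numbers, of why ignoring a tail risk can be
# individually rational: limited liability truncates the firm's downside.

p_blowout = 0.001        # assumed annual probability of the tail event
extra_profit = 50e6      # assumed extra annual profit from skipping precautions
disaster_cost = 100e9    # assumed full social cost if the event occurs
firm_value = 5e9         # the most the firm can lose before bankruptcy

# Expected cost as seen by the firm: losses are capped at the firm's value.
firm_expected_cost = p_blowout * min(disaster_cost, firm_value)

# Expected cost borne by society, which absorbs the uncapped loss.
social_expected_cost = p_blowout * disaster_cost

print(extra_profit > firm_expected_cost)    # skipping precautions pays the firm
print(extra_profit > social_expected_cost)  # but society loses on net
```

With these numbers the firm’s expected cost is $5 million a year against $50 million of extra profit, while society’s expected cost is $100 million, which is the externalization described above in miniature.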

This works because the full cost of the tail risk isn’t borne by the companies that incur it, individually or collectively. Once the company that gets hit by the tail event goes bankrupt, the rest of the cost is borne by society at large, and/or the victims of the disaster.

So this is a special case of these companies externalizing cost (in this case risk). The cost of the tail events is mostly external, so it is rational for each company to ignore that cost.

Much of the point of regulation and/or law (such as product liability law) is to internalize that risk — basically to force companies to act as if they were bearing the full cost of their decisions. I’m by no means a libertarian, but I do believe that to a large extent companies would make the right decisions if they effectively had to bear the full cost of those decisions. For example, if energy companies had to bear the full costs of all the environmental and social costs of energy (including tail risks) we’d have a pretty good energy policy with very little further regulation.

But here is where things get sticky. There are two problems with accounting for tail risk that interact in a very bad way.

First, of course the affected parties will resist internalizing those costs. We’re seeing that right now with the tears shed over the damage that will be done to small oil producers if they have to accept full liability for their actions. But we also see it in the anger of SUV owners about having to pay higher gas prices, etc.

Resistance will show up not just in responses to the allocation of costs, but also very much in the arguments about how large those costs will be. We’re seeing that in the determined efforts to obfuscate our best guesses about the sources and costs of global warming and obesity, and prior to that the sources and costs of smoking-related illness.

And this is where we get to the second problem that is specific to tail risks. By their nature, tail risks can’t be given an expected cost in the normal way: probability of occurrence multiplied by cost if they occur. We often can’t estimate the actual costs very well since nothing like this has occurred, at least not in equivalent circumstances. And in general we can’t estimate the probability at all, because our models don’t assign valid probabilities to tail events. This lack of valid probabilities is fundamental to the nature of tail risk in complex systems — we can’t in general fix it simply by refining our models, though we may be able to fix specific cases. And unfortunately the expected cost is tremendously sensitive to the guesstimated probability.
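That sensitivity is easy to see numerically (the figures below are invented for illustration): holding the damage estimate fixed, equally defensible guesses for the probability move the expected cost across three orders of magnitude.

```python
# A minimal sketch, with an invented damage figure, of how sensitive the
# expected cost (probability x cost) is to a guesstimated tail probability.

disaster_cost = 100e9  # assumed cost if the tail event occurs

for p in (1e-6, 1e-5, 1e-4, 1e-3):  # equally hard-to-refute probability guesses
    expected_cost = p * disaster_cost
    print(f"p = {p:g}: expected cost = ${expected_cost:,.0f}")
```

The guesses span $100,000 to $100,000,000 a year, so whichever party prefers a low number can always find a probability estimate to justify it.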

As a result of these two problems, we can’t come up with a strongly defensible cost for tail risks, and the most interested parties (including big constituencies like consumers) have a strong material interest in believing that the lower estimates of those costs are correct. So we will tend to consistently underestimate the cost of tail risks, and then have to absorb the costs when they come home to roost.

I can’t think of any full solution to this, but I do think that building a consensus around the goal of internalizing all predictable costs — either economically or through regulation aimed to produce similar decisions — would help a lot. Then at least in energy policy and so on we could have a coherent discussion about whether we want to adjust our projected costs, or just accept the social burden of unaccounted tail risk. We could focus our political energies on getting the cost estimates right, and identifying costs that haven’t been internalized.

We probably would have just as many battles, and the bad actors would be just as slimy, but we’d be putting effort into more useful issues.

Comment by jedharris
