Harvard economist disses most climate cost-benefit analyses

Harvard economist Martin Weitzman has a new paper in which he points out that the vast majority of conventional economic analyses of climate change should carry the following label:

“WARNING: to be used ONLY for cost-benefit analysis of non-extreme climate change possibilities. NOT INTENDED to cover welfare evaluation of extreme tail possibilities, for which a complete accounting might produce ARBITRARILY DIFFERENT welfare outcomes.”

In short, if you don’t factor in plausible worst-case scenarios — and the vast majority of economic analyses don’t (this means you, William Nordhaus, and you, too, Bjørn Lomborg) — your analysis is useless. Pretty strong stuff for a Harvard economist!

The extreme or fat tail of the damage function (click on figure at right) represents what Weitzman calls “rare climate disasters,” although as we’ll see, they probably aren’t that rare. For Weitzman, disaster is a temperature change of > 6°C (11°F) in a century, as he explains in an earlier paper on the Stern Review on the economics of climate change:

With roughly 3% IPCC-4 probability, we will “consume” a terra incognita biosphere within a hundred years whose mass species extinctions, radical alterations of natural environments, and other extreme outdoor consequences of a different planet will have been triggered by a geologically-instantaneous temperature change that is significantly larger than what separates us now from past ice ages.

Weitzman says the IPCC Fourth Assessment gives the probability of such an “extreme” temperature change as 3%, and that “to ignore or suppress the significance of rare tail disasters is to ignore or suppress what economic theory is telling us loudly and clearly is potentially the most important part of the analysis” — more important than the discount rate.
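Weitzman’s formal point is that if the damage distribution’s tail is fat enough, the expected-damage integral itself can blow up, so the tail dominates everything else in the analysis. Here is a toy numerical sketch of that idea (my own illustration, not Weitzman’s actual model), using a Pareto-distributed damage variable as a stand-in for a fat tail:

```python
# Toy illustration of the fat-tail point (assumed Pareto damages, not
# Weitzman's model): expected damage is the integral of x * f(x) dx for a
# Pareto(alpha) density f(x) = alpha * x**(-alpha - 1) on x >= 1.
# That integral converges only when alpha > 1. Truncating it at T shows
# the contrast between a thin tail and a fat one.

def partial_expected_damage(alpha, T):
    """Closed form of the integral of x * alpha * x**(-alpha - 1) dx
    from 1 to T, valid for alpha != 1."""
    return alpha / (alpha - 1.0) * (1.0 - T ** (1.0 - alpha))

for T in (1e2, 1e4, 1e6, 1e8):
    thin = partial_expected_damage(2.0, T)  # converges to alpha/(alpha-1) = 2
    fat = partial_expected_damage(0.8, T)   # grows without bound as T rises
    print(f"T = {T:>8.0e}   thin-tail E[D] = {thin:6.4f}"
          f"   fat-tail E[D] = {fat:10.1f}")
```

As the truncation point T rises, the thin-tail (alpha = 2) expectation settles at 2, while the fat-tail (alpha = 0.8) expectation keeps climbing — no finite answer exists, which is why Weitzman says the tail can swamp the rest of the cost-benefit calculation.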

For me, what is especially alarming about Weitzman’s analysis is that I have argued there is far greater chance than 3% that we will have a total warming of 6°C or more in a century or so if we don’t reverse emissions trends soon. That’s because failure to act quickly means carbon cycle feedbacks will kick in by mid-century, escalating greenhouse gas concentrations and temperatures well beyond standard IPCC projections. Put another way, if we don’t stabilize below 500 ppm of atmospheric carbon dioxide (we are at roughly 380 ppm today, up from 280 preindustrial), we will probably soar to at least 800 ppm in a century, if not 1000 ppm or more. Losing either the permafrost or the Amazon is sufficient to take us to 1000.

Weitzman’s paper, “Structural Uncertainty and the Value of Statistical Life in the Economics of Catastrophic Climate Change,” is not for the general reader. His discussion of the Stern Review, however, covers many of the same points and is, I think, accessible to anyone who took an economics class or two in college, especially if you first read John Quiggin (here and here).

It is worth noting that while Weitzman is critical of how Stern chose the key discount rate parameters, he still thinks that Stern is mostly right for the “wrong reasons” — because “the implications of large consequences with small probabilities” — like the many scenarios of catastrophic climate change (ice sheet instability, tundra melting) — matter more than the choice of discount rate.

That said, the mainstream economic policy think tank Resources for the Future (RFF) wrote a major report, “An Even Sterner Review,” that concluded, “we find no strong objections to the discounting assumptions adopted in the Stern Review” (a point I have also made, based on Quiggin). It also concluded Stern could have used “rising relative prices” from future scarcity to get the same result. The RFF report pointed out:

If we were to combine the low discount rates in the Stern Review with rising relative prices, the conclusions would favor even higher levels of abatement. This would in fact lead us to consider some of the levels of carbon content that Stern deems unrealistic, that is, aiming for a target of less than 450 ppm CO2 equivalents.

Now what I would like to see is a cost-benefit analysis combining a moderate discount rate with RFF’s rising relative prices AND Weitzman’s “extreme climate change possibilities.”

I’m sure that such a comprehensive economic analysis would vindicate Stern again and drive us toward a target of 450 ppm or lower — which means we must peak in global emissions by 2020. The time to act is now. Economics demands it.

UPDATE: Let me be clear that a 3°C to 4°C total warming from preindustrial levels — which takes us to the same temperature the planet had the last time sea levels were 80 feet higher — would be an unmitigated catastrophe for the planet — that is Hansen’s point. My point in this post is just that if we get that warm, the feedbacks will probably take us to 6°C warming a few decades later.

UPDATE2: Weitzman has toned down the piece in his next draft, so you won’t see his strong warning. Oh well. I guess I’ll have to dis Lomborg and Nordhaus myself! Stay tuned.

10 Responses to Harvard economist disses most climate cost-benefit analyses

  1. Earl Killian says:

    Joe, has any economist estimated the impact of the Greenland ice melting and the resulting sea level rise? Even if the IPCC is spot-on on the temperature target, the fact that they ignored “dynamical processes” (the very ones you’ve pictured so often on your blog) means that the IPCC could be exactly right about the temperature and miss the sea level rise entirely. Such things are possible when you simply ignore effects because you don’t understand the physics well enough. That suggests that it is not only the 3% temperature extreme that might lead to a different economic conclusion.

  2. NU says:


    I’m sure someone has studied Greenland, but I don’t know any references off the top of my head. For a similar study involving the West Antarctic ice sheet, try here.

  3. John says:


    We always seem to talk exclusively in terms of CO2, yet that only makes up about 80% of GHG. I read that if we factor in atmospheric non carbon GHG concentrations we exceeded 420 ppm last year. Is that true, and when we talk about bright line numbers we don’t want to exceed, like 500 ppm CO2, are we factoring in the non-carbon GHG concentrations? I guess what I’m asking is: is the time it will take to get to 500 ppm including other GHG as carbon equivalents, or are we simply ignoring them?

Hope it’s the former because the other GHGs tend to have a stronger warming effect and several have extraordinary half-lives.

  4. jcwinnie says:

    What concerns me is that you are succumbing to the Al & Leo show, where we all get to sing, “Pack up your Green House Gases in your old CCS, and smile, boys, that’s it, smile.”

    When you write, “failure to act quickly means carbon cycle feedbacks will kick in by mid-century”, the human tendency is to continue with the denial and ignore the carbon cycle feedback that already has occurred and is occurring.

    And, playing the accountant’s game sure feeds the denial, “Ah, we lost the Earth, it was bad quarter.”

Moral outrage at the human suffering caused by our inaction gets relegated to Appendix 2.4 (d)(7) and is critiqued by other mainstream economists for its unconventional way of including such externalities.

    Hand me that cup and let me pay you for that chicken, Joe.

  5. Joe says:

    JC — I can’t tell if you are criticizing me for going too far or not far enough. It doesn’t sound to me like the former, and if it’s the latter, you need to read my book.

  6. mk says:

    Joe, you write:

    Now what I would like to see is a cost-benefit analysis combining a moderate discount rate with RFF’s rising relative prices AND Weitzman’s “extreme climate change possibilities.”

    Now I haven’t read Marty’s latest paper yet (it looks like a bit of a slog and I’m not an expert) but I skimmed it– I thought he was saying something like: integrating under the thickened tail to determine expected utility may not even converge, if the tail is thick enough. Or maybe it will converge to something arbitrarily huge, like the expected outcome being complete wipeout (99% plunge in consumption, or something). That’s what I took him to mean by “results may be arbitrarily different”.

    I interpreted him to be saying that this basically paralyzes us. I didn’t take him to be suggesting that we should carry along with the same cost-benefit analysis paradigm, using thickened tails, and scare the crap out of ourselves.

    Thanks for the posting; instructive as always.

  7. mk says:

    Sorry, in case it wasn’t obvious, my (implicit) question above was: is this how you interpret Marty’s paper as well? If so, did you disagree with his view? I could easily be talking out of my rear because I haven’t read the paper yet.

  8. mk says:

OK, I reread the conclusions section. I guess Marty is really saying, not that CBA is useless, but that the outcome of CBA in these kinds of situations is dominated by the most unclear modeling choices: namely, what to assume about the thickness of the bad tail. (We have to pretty much assume or guess the tail’s thickness; we never have enough data to decide it very objectively.)

    To me, it kind of does sound like CBA might be useless. But maybe it’s not; I certainly don’t like the notion of dropping quantitative modeling.

    It would at least indicate that climate-change-related CBA is fairly subjective.

    Unfortunately, I fear that means that all the conservatives are going to pick skinny tails and all the liberals are going to pick thick tails. More optimistically, if Marty is right that this is the central modeling difficulty, I’d hope everyone will agree with his prescription that we spend a lot of money researching the shape of the tail.

  9. Jay Alt says:

    Economists will provide excellent after-the-fact models.
    They always do.

  10. Joerg Haas says:

You might be interested in Richard Tol’s paper on the same subject.
Tol is often quoted by delayers, but here he seems to agree with Weitzman. Interesting.