
Climate Extremes Reexamined: Can We Quantify The Straw That Breaks The Camel’s Back?

By Climate Guest Contributor  

"Climate Extremes Reexamined: Can We Quantify The Straw That Breaks The Camel’s Back?"


JR: James Hansen’s recent work on attributing climate extremes to global warming is very important. That’s because off-the-charts extreme weather — along with its impact on food production — is how most Americans and indeed most homo sapiens are likely to experience the negative impacts of climate change for the foreseeable future. So it’s no surprise that it has come under attack.

NASA’s Gavin Schmidt has an excellent explanation of why Hansen’s analysis is so relevant and why some of his critics are so wrong. The bottom line: The critics apparently think climate impacts are linear — a small change always has a small incremental impact — whereas reality makes clear that the impacts are non-linear and have potentially dangerous thresholds. There is a straw that breaks the climate’s back, and we would appear to be fast approaching it.

by Gavin Schmidt via RealClimate

There has been a lot of discussion related to the Hansen et al (2012, PNAS) paper and the accompanying op-ed in the Washington Post last week. But in this post, I’ll try and make the case that most of the discussion has not related to the actual analysis described in the paper, but rather to proxy arguments for what people think is ‘important’.

The basic analysis

What Hansen et al have done is actually very simple. If you define a climatology (say 1951-1980, or 1931-1980), calculate the seasonal mean and standard deviation at each grid point for this period, and then normalise the departures from the mean, you will get something that looks very much like a Gaussian ‘bell-shaped’ distribution. If you then plot a histogram of the values from successive decades, you will get a sense for how much the climate of each decade departed from that of the initial baseline period.
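For concreteness, here is a minimal sketch of that normalization step in Python, assuming a hypothetical gridded array `temps` of shape (years, lat, lon) holding seasonal-mean temperatures; the variable names are illustrative, not from the paper:

```python
import numpy as np

def normalized_anomalies(temps, years, base=(1951, 1980)):
    """Departures from the baseline climatology, in units of local sigma."""
    in_base = (years >= base[0]) & (years <= base[1])
    mean = temps[in_base].mean(axis=0)          # baseline mean at each grid point
    std = temps[in_base].std(axis=0, ddof=1)    # baseline standard deviation
    return (temps - mean) / std                 # normalized departures

# Histogramming the normalized values decade by decade (e.g. with np.histogram)
# reproduces the kind of shifting, spreading curves shown in Fig. 4a.
```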

Fig. 4a, Hansen et al (2012)

The shift in the mean of the histogram is an indication of the global mean shift in temperature, and the change in spread gives an indication of how regional events would rank with respect to the baseline period. (Note that the change in spread shouldn’t be automatically equated with a change in climate variability, since a similar pattern would be seen as a result of regionally specific warming trends with constant local variability). [Now combine] this figure … with the change in areal extent of warm temperature extremes:

Fig. 5, Hansen et al (2012)

[These] are the main results that lead to Hansen et al’s conclusion that:

“hot extreme[s], which covered much less than 1% of Earth’s surface during the base period, now typically [cover] about 10% of the land area. It follows that we can state, with a high degree of confidence, that extreme anomalies such as those in Texas and Oklahoma in 2011 and Moscow in 2010 were a consequence of global warming because their likelihood in the absence of global warming was exceedingly small.”

What this shows first of all is that extreme heat waves, like the ones mentioned, are not just “black swans” – i.e. extremely rare events that happened by “bad luck”. They might look like rare unexpected events when you just focus on one location, but looking at the whole globe, as Hansen et al. did, reveals an altogether different truth: Such events show a large systematic increase over recent decades and are by no means rare any more.

At any given time, they now cover about 10% of the planet. What follows is that the likelihood of 3 sigma+ temperature events (defined using the 1951-1980 baseline mean and sigma) has increased by such a striking amount that attribution to the general warming trend is practically assured. We have neither long enough nor good enough observational data to have a perfect knowledge of the extremes of heat waves given a steady climate, and so no claim along these lines can ever be for 100% causation, but the change is large enough to be classically ‘highly significant’.

The point I want to stress here is that the causation is for the metric “a seasonal/monthly anomaly greater than 3 sigma above the mean”.

This metric follows on from work that Hansen did a decade ago exploring the question of what it would take for people to notice the climate changing, since they only directly experience the weather (Hansen et al, 1998) (pdf), and is similar to metrics used by Pall et al and other recent papers on the attribution of extremes. It is closely connected to metrics related to return times (i.e. if the areal extent of extremely hot anomalies in any one summer increases by a factor of 10, then the return time at an average location goes from 1 in 330 years to 1 in 33 years).
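The return-time arithmetic in that last sentence is just proportional scaling; a toy calculation using the article's own numbers, nothing more:

```python
# If the areal extent (hence frequency) of extremely hot anomalies rises tenfold,
# the return time at an average location shrinks by the same factor.
base_return_time = 330      # years: roughly 1-in-330 under the baseline climate
frequency_factor = 10       # tenfold increase in areal extent
print(base_return_time / frequency_factor)   # -> 33.0 years
```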

A similar conclusion to Hansen’s was reached by Rahmstorf and Coumou (2011) (pdf), but for a related yet different metric: the probability of record-breaking events rather than 3-sigma events. For the Moscow heat record of July 2010, they found that the probability of a record had increased five-fold due to the local climatic warming trend, as compared to a stationary climate (see our previous articles The Moscow warming hole and On record-breaking extremes for further discussion). An extension of this analysis to the whole globe, reaching a similar conclusion, is currently in review.

There have been some critiques of Hansen et al. worth addressing – Marty Hoerling’s statements in the NY Times story referring to his work on attribution of the Moscow and Texas heat waves (Dole et al, 2010; Hoerling et al, submitted), and a blog post by Cliff Mass of the U. of Washington.*

*We can just skip right past the irrelevant critique from Pat Michaels – someone well-versed in misrepresenting Hansen’s work – since it consists of proving wrong a claim (that US drought is correlated to global mean temperature) that appears nowhere in the paper – even implicitly. This is like criticising a diagnosis of measles by showing that your fever is not correlated to the number of broken limbs.

The metrics that Hoerling and Mass use for their attribution calculations are based on the absolute anomaly above climatology. So if a heat wave is 7ºC above the average summer, and global warming could have contributed 1ºC or 2ºC (depending on location, season, etc.), the claim is that only 1/7th or 2/7ths of the anomaly is associated with climate change, and that the bulk of the heat wave is driven by whatever natural variability has always been important (say, La Niña or a blocking high).
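A sketch of that ratio with the numbers quoted above (purely illustrative values):

```python
anomaly = 7.0    # degC above the average summer
warming = 2.0    # degC plausibly contributed by the long-term trend (1-2 depending on region)
print(warming / anomaly)   # -> ~0.29, i.e. roughly 2/7ths "attributable" on this metric
```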

But this Hoerling-Mass ratio is a very different metric from the one used by Hansen, Pall, Rahmstorf & Coumou, Allen and others, so it isn’t fair for Hoerling and Mass to claim that the previous attributions are wrong – they are simply attributing a different thing. This only rarely seems to be acknowledged. We discussed the difference between these two types of metrics previously in Extremely Hot. There we showed that the more extreme an event is, the more its relative likelihood increases as a result of a warming trend.

So which metric ‘matters’ more? And are there other metrics that would be better or more useful?

A question of values

What people think is important varies enormously, and as the French say ‘Les goûts et les couleurs ne se discutent pas’ (Neither tastes nor colours are worth arguing about). But is the choice of metric really just a matter of opinion? I think not.

Why do people care about extreme weather events? Why, for instance, is a week of 1ºC above climatology uneventful, yet a day with a 7ºC anomaly something to be discussed on the evening news? It is because the impacts of a heat wave are very non-linear. The marginal effect of an additional 1ºC on top of 6ºC on many aspects of a heat wave (on health, crops, power consumption etc.) is much more than the effect of the first 1ºC anomaly. There are also thresholds – temperatures above which systems will cease to work at all. One would think this would be uncontroversial. Of course, for some systems not near any thresholds and over a small enough range, effects can be approximated as linear, but at some point that will obviously break down – and the points at which it does are clearly associated with the extremes that have the most important impacts.

Only if we assume that all responses are linear can there be a clear separation between the temperature increases caused by global warming and the internal variability over any season or period, and only then does the attribution of effects scale like the Hoerling-Mass ratio. But even then the “fraction of the anomaly due to global warming” is somewhat arbitrary, because it depends on the chosen baseline for defining the anomaly – is it the average July temperature, or typical previous summer heat waves (however defined), or the average summer temperature, or the average annual temperature? In the latter (admittedly somewhat unusual) choice of baseline, the fraction of last July’s temperature anomaly that is attributable to global warming is tiny, since most of the anomaly is perfectly natural and due to the seasonal cycle! So the fraction of an event that is due to global warming depends on what you compare it to. One could just as well choose a baseline of climatology conditioned on, e.g., the phase of ENSO, the PDO and the NAO, in which case the global warming signal would be much larger.

If, however, the effects are significantly non-linear, then this separation can’t be done so simply. If the effects are quadratic in the anomaly, an extra 1ºC on top of 6ºC is responsible for 26% of the effect, not 14%. For cubic effects, it would be 37%, etc. And if there were a threshold at 6.5ºC, it would be 100%.
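A short sketch of that scaling, assuming a simple power-law impact function (the exponent is the only free choice here):

```python
def attributable_fraction(total, extra, power):
    """Share of a power-law impact due to the last `extra` degrees of `total`."""
    return 1 - ((total - extra) / total) ** power

for power in (1, 2, 3):
    print(power, round(100 * attributable_fraction(7.0, 1.0, power)))
# -> 1 14   (linear: 1/7th)
# -> 2 27   (quadratic: the ~26% above, to rounding)
# -> 3 37   (cubic)
# With a hard threshold at 6.5 degC, the extra 1 degC is responsible for 100%.
```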

Since we don’t, however, know exactly what the effect/temperature curve looks like in any specific situation, let alone globally (and in any case this would be very subjective), any kind of assumed effect function needs to be justified. However, we do know that in general effects will be non-linear, and that there are thresholds. Given that, looking at changes in the frequency of events (or return times, as is sometimes done) is more general and allows different sectors/people to assess the effects based on their prior experience. And choosing highly exceptional events to calculate return times – like 3-sigma+ events, or record-breaking events – is sensible for focusing on the events that cause the most damage, because society and ecosystems are least adapted to them.

Using the metric that Hoerling and Mass are proposing is equivalent to assuming that all effects of extremes are linear, which is very unlikely to be true. The ‘loaded dice’/’return time’/’frequency of extremes’ metrics being used by Hansen, Pall, Rahmstorf & Coumou, Allen etc. are going to be much more useful for anyone who cares about what effects these extremes are having.

by Gavin Schmidt. This RealClimate piece was reposted with permission.


16 Responses to Climate Extremes Reexamined: Can We Quantify The Straw That Breaks The Camel’s Back?

  1. Can anyone give examples of 4 sigma events that occurred between 2001 and 2011?

    According to the graph, the probability of their occurring would be close to zero without global warming, but a significant number actually have occurred.

    It would be useful to the political debate if we could identify those events. E.g., is the current hot spell and drought in the Midwest a 4 sigma event? If so, it would be very effective to say that it would have had a close to zero possibility of occurring without global warming.

    • Robert In New Orleans says:

      1. 2005 Hurricane season.
      2. 2003 European heatwave.
      3. 2010 Russian heatwave.
      4. Australian Multi Year Drought.

  2. This summer, parts of Kansas briefly experienced temps that kill corn outright. So we’re flirting with weather beyond simple drought and diminished yields. We’re heading for temperatures which will just destroy the crop outright. That’s terra incognita for farmers. (Or should I say “terror” incognita?) They may be inured to the scenes of cracked earth, but the complete loss of a season should get their attention.

    Miles of the stalks of dead wheat and corn should be the straw that breaks the camel’s back.

    • Brooks Bridges says:

      I heard an interview with a farmer. He said that in a single, very hot day, his corn changed from having green leaves to grey leaves. He seemed almost in shock.

  3. MarkfromLexington says:

    I don’t understand Gavin’s comment – “Note that the change in spread shouldn’t be automatically equated with a change in climate variability, since a similar pattern would be seen as a result of regionally specific warming trends with constant local variability.”

    Can someone explain this in simple terms? I am confused by both the first half and the second half of his parenthetical statement.

    • This is discussed in detail here:

      http://tamino.wordpress.com/2012/08/13/hansen-et-al-2012/

      In a nutshell, if the mean temperatures in different parts of the globe are increasing at different rates, the statistics of the overall global temperature will seem to display increased variability, even if each region individually has experienced no change in its local variability.
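      A toy simulation of that point, with made-up numbers: regions warm at different rates, each keeps a constant local variability, yet the pooled histogram still widens over time:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n_regions, n_years = 200, 60
      trends = rng.uniform(0.0, 0.06, n_regions)           # degC/yr, differing by region
      years = np.arange(n_years)
      noise = rng.normal(0.0, 1.0, (n_regions, n_years))   # constant local sigma = 1
      temps = trends[:, None] * years + noise

      print(round(temps[:, :5].std(), 2))    # pooled spread, early years (~1)
      print(round(temps[:, -5:].std(), 2))   # pooled spread, late years: clearly larger,
                                             # even though each region's own sigma is unchanged
      ```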

    • Joan Savage says:

      He is pointing out that the wider spread could theoretically have occurred from an assemblage of regional climates smoothly gliding towards warmer conditions, some faster than others. He doesn’t want us using the graph to prove greater variability, when that particular graph does not.
      Imagine a flock of ducks taking off from a lake. At first they are clustered close together on the water, but as they take off they spread out from one another into a flight formation that is larger. Each duck is theoretically like a regional climate, with some out in front and others lagging behind.
      The reality has been greater climate variability, which would be like the ducks experiencing turbulence, struggling to get aloft and stay aloft, and not so able to keep to a smooth formation.
      But the graph doesn’t prove greater ‘turbulence.’
      That may have confused matters, but I had fun with the metaphor!

    • john c. wilson says:

      In simplest terms the man needs an editor. That is simply bad writing. In search of strictest accuracy he creates perfect confusion. Which is a shame because he has an important message.

    • MarkfromLexington says:

      Thank you for the explanations. I now understand the point he was making.

  4. prokaryotes says:

    It would be good for understanding “Sigma Events” to have a list which outlines rarities.

    Below is what I could find on Wikipedia:

    Higher deviations

    Because of the exponential tails of the normal distribution, odds of higher deviations decrease very quickly. http://en.wikipedia.org/wiki/Sigma_event#Higher_deviations

    http://en.wikipedia.org/wiki/Standard_deviation#Rules_for_normally_distributed_data
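    For reference, the one-tailed odds of exceeding k sigma for a normal distribution can be computed directly; a quick sketch using scipy's survival function for the normal distribution:

    ```python
    from scipy.stats import norm

    for k in (2, 3, 4, 5):
        p = norm.sf(k)                         # one-tailed P(X > k sigma)
        print(f"{k}-sigma: about 1 in {1/p:,.0f}")
    # 2-sigma: about 1 in 44
    # 3-sigma: about 1 in 741
    # 4-sigma: about 1 in 31,574
    # 5-sigma: about 1 in 3,488,556
    ```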

  5. Leif says:

    People the world over readily accept the scientific fact that changing a patch of the South Pacific from warm to cool by only a couple of degrees C, and the resulting comparatively narrow El Niño/La Niña current across the equatorial Pacific to South America, can have a profound effect on the weather – not only here in the United States but, to a lesser degree, in Europe and Africa.
    On the other hand, transforming a much closer (in many cases bordering) highly reflective patch of earth from significantly below freezing to dark open water above freezing, a difference of tens of degrees C, and it is all cool? Couple that with an area larger than the states of Alaska and Texas combined and it is all just going to be “Ho Hum”! Get real. I am telling you, science is telling you, and the on-the-ground reality is telling you: red flags are being raised here. Of course vested interests are spending big bucks trotting out “red herrings” as fast as they can. Perhaps that must be factored into the attitudes of the masses, you think?

    Time to toast the deniers, not the Kidders…

  6. Dan Miller says:

    Some simple numbers: Extremely Hot Summers (+3-sigma) have gone from ~0.1-0.2% of area in the base period (1951-1980) to ~10% in the last decade (2001-2011). That is an increase of 50X to 100X, or 5,000% to 10,000%, in 50 years! Assuming a 50X increase, when a new 3-sigma event happens, there is a 1/50 (or 2%) chance it was due to natural variation and a 49/50 (or 98%) chance that it was due to global warming. That is why we can attribute the 2003 European heat wave that killed 70,000 people and the Texas heat wave that caused $7B in damage (and, soon, this year’s Midwest heat wave) to global warming.

    With 1C more warming, the 3-sigma events will happen 50% of the time and the 5-sigma (once in a million years) events will happen about 10% of the time.
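    The odds in the comment above follow from a simple ratio, under the stated assumption of a 50-fold frequency increase:

    ```python
    baseline_rate = 1.0    # relative frequency of 3-sigma+ summers, 1951-1980 climate
    current_rate = 50.0    # assumed ~50x more frequent today (the comment's low-end figure)
    p_natural = baseline_rate / current_rate
    print(p_natural, 1.0 - p_natural)   # -> 0.02 0.98: ~2% "would have happened anyway", ~98% attributable
    ```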

  7. Dan Miller says:

    Even though I’ve said it before, Dr. Hansen’s study is amazing. It does not use climate models nor does it make predictions. It simply analyses actual, measured temperature data of the past 60 years. The 5000% increase in Extremely Hot Summers already occurred!

    And since the data is global, the shift of the curves to the right does not merely imply global warming, it IS global warming! (Just like if the doctor measures your temperature to be 103F, you DO have a fever!)

    • Mulga Mumblebrain says:

      In other words, our goose is already cooked, and the cremation follows pretty immediately.

  8. Paul Klinkman says:

    The camel has passed on! This camel is no more! He has ceased to be! ‘E’s expired and gone to meet ‘is maker! ‘E’s a stiff! Bereft of life, ‘e rests in peace! If you hadn’t nailed ‘im up ‘e’d be pushing up the daisies! ‘Is metabolic processes are now ‘istory! ‘E’s off ‘is legs! ‘E’s kicked the bucket, ‘e’s shuffled off ‘is mortal coil, run down the curtain and joined the bleedin’ choir invisibile!! THIS IS AN EX-CAMEL!!

  9. Gestur says:

    Squeezing just a bit more out of the argument presented by Rahmstorf and Coumou in their fine blog post, Extremely Hot, of March 26, 2012: given the highly stochastic nature of weather, how could one possibly justify the use of a metric of attribution that is not grounded in probabilities? And given the need for a metric grounded in probability, let me advance this little reductio ad absurdum argument. As they note, with the Hoerling-Mass ratio for attribution, the greater the anomaly, say in units of standard deviations from the mean, the lower is the attribution to general warming. And, of course, for the metrics being used by Hansen, Pall, Rahmstorf & Coumou, Allen etc., the greater this anomaly, the higher would be the attribution given to general warming. Consider, then, simply letting this anomaly → ∞. The Hoerling-Mass ratio for attribution goes to zero and, of course, the metrics being used by Hansen, Pall, Rahmstorf & Coumou, Allen etc. increase to 100% for general warming attribution. Among other things, the Hoerling-Mass ratio for attribution stands the whole idea of statistical power on its, well, head.