Like a baseball player on steroids, our climate system is breaking records at an unnatural pace. And like a baseball player on steroids, it’s the wrong question to ask whether a given home run is “caused” by steroids. Meteorologist Dr. Jeff Masters explained the analogy this way last December:
… we look at heat waves, droughts, and flooding events. They all tend to get increased when you have this extra energy in the atmosphere. I call it being on steroids kind of for the atmosphere….
Well, normally, you have the everyday ups and downs of the weather, but if you pack a little bit of extra punch in there, it’s like a baseball hitter who’s on steroids.
You expect to see a big home run total maybe from this slugger, but if you add a little bit of extra oomph to his swing by putting him on steroids, now we can have an unprecedented season, a 70 home run season. And that’s the way I look at this year.
We had an unprecedented weather year that I don’t think would have happened unless we had had an extra bit of energy in the atmosphere due to climate change and global warming.
I’m reposting all this because of a recent editorial in the journal Nature that seems to have missed the key point. Below is a response to that editorial by NASA’s Gavin Schmidt at RealClimate, followed by a comment on the editorial from climate scientist Kevin Trenberth.
First, though, it’s worth noting that in March, Nature Climate Change published a major new analysis of the scientific evidence, “A decade of weather extremes” (subs. req’d) — see my post Nature: Strong Evidence Manmade ‘Unprecedented Heat And Rainfall Extremes Are Here … Causing Intense Human Suffering’. That Nature analysis concluded:
It is very likely that several of the unprecedented extremes of the past decade would not have occurred without anthropogenic global warming.
Here is the RealClimate piece:
Nature has an interesting editorial this week on the state of the science for attributing extreme events. This was prompted by a workshop in Oxford last week where, presumably, strategies, observations and results were discussed by a collection of scientists interested in the topic (including Myles Allen, Peter Stott and other familiar names). Rather less usual was a discussion, referred to in the Nature piece, on whether the whole endeavour was scientifically worthwhile, and even if it was, whether it was of any use to anyone. The proponents of the ‘unscientific and pointless’ school of thought were not named and so one can’t immediately engage with them directly, but nonetheless the question is worthy of a discussion.
This workshop was a follow-up to one held in 2009, which took place in a very different environment. The meeting report was typical of a project that was just getting off the ground — lots of potential, some hints of success. Today, there is a much richer literature on the topic, and multiple approaches have been tried to generate the statistical sample required for statements of fractional attribution.
But rather than focus on the mechanics for doing this attribution, the Nature editorial raises more fundamental questions:
One critic argued that, given the insufficient observational data and the coarse and mathematically far-from-perfect climate models used to generate attribution claims, they are unjustifiably speculative, basically unverifiable and better not made at all. And even if event attribution were reliable, another speaker added, the notion that it is useful for any section of society is unproven.
Both critics have a point, but their pessimistic conclusion — that climate attribution is a non-starter — is too harsh.
Nature goes on to say:
It is more difficult to make the case for ‘usefulness’. None of the industry and government experts at the workshop could think of any concrete example in which an attribution might inform business or political decision-making. Especially in poor countries, the losses arising from extreme weather have often as much to do with poverty, poor health and government corruption as with a change in climate.
Do the critics (and Nature sort-of) have a point? Let’s take the utility argument first (since if there is no utility in doing something, the potentially speculative nature of the analysis is moot). It is obviously the case that people are curious about this issue: I never get as many media calls as in the wake of an extreme weather event of some sort. And the argument for science merely as a response to human curiosity about the world is a strong one. But I think one can easily do better. We discussed a few weeks ago how extreme event attribution via threshold analysis or absolute metrics reflected a view of what was most impactful.
Given that impacts generally increase very non-linearly with the size/magnitude of an event, changes in the frequency or intensity of extremes have an outsized influence on costs. And if these changes can be laid at the feet of specific climate drivers, then they can certainly add to the costs of business-as-usual scenarios, which are then often compared to the cost of mitigation. Therefore improved attribution of shifts in extremes (in whatever direction) has the potential to change cost-benefit calculations and thus policy directions.
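The non-linearity point can be illustrated with a toy calculation, a minimal sketch assuming an idealized Gaussian climate variable rather than any real data: a modest shift in the mean multiplies the probability of exceeding a fixed high threshold, and the multiplier grows the farther into the tail the threshold sits.

```python
import math

def exceed_prob(threshold, mean=0.0, sd=1.0):
    """P(X > threshold) for a Gaussian variable, via the complementary error function."""
    z = (threshold - mean) / (sd * math.sqrt(2))
    return 0.5 * math.erfc(z)

# Shift the mean by half a standard deviation (a little "extra oomph")
# and compare exceedance probabilities at increasingly rare thresholds.
for t in (1.0, 2.0, 3.0):
    p0 = exceed_prob(t, mean=0.0)  # unshifted climate
    p1 = exceed_prob(t, mean=0.5)  # shifted climate
    # Ratios come out near 1.9x, 2.9x and 4.6x: the rarer the event,
    # the larger the multiplier from the same modest mean shift.
    print(f"threshold {t:.0f} sd: p0={p0:.5f}, p1={p1:.5f}, ratio={p1/p0:.1f}x")
```

The 0.5-sd shift and the thresholds here are illustrative assumptions chosen only to show the shape of the effect, not estimates of any real climate change.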
Additionally, since we are committed to a certain amount of additional warming regardless of future trends in emissions, knowing what is likely in store in terms of changing extremes and their impacts feeds directly into what investments in adaptation are sensible. Of course, if cost-effective investments in resilience are not being made even for the climate that we have (as in many parts of the developing world), changes to calculations for a climate-changed world are of lesser impact. But there are many places where investments are being made to hedge against climate changes, and the utility is clearer there.
Just based on these three points, the question of utility would therefore seem to be settled. If reliable attributions can be made, this will be of direct practical use for both mitigation strategies and adaptation, as well as providing answers to persistent questions from the public at large.
Thus the question of whether reliable attributions can be made is salient. All of the methodologies to do this rely on some kind of surrogate for the statistical sampling that one can’t do in the real world for unique or infrequent events (or classes of events). The surrogate is often specific climate simulations for the event with and without some driver, or an extension of the sampling in time or space for similar events. Because of the rarity of the events, the statistical samples need to be large, which can be difficult to achieve.
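The with/without-driver ensemble approach can be sketched with toy surrogate “ensembles” (Gaussian noise standing in for model output; the 0.5-sd shift, threshold, and sample size are illustrative assumptions): count threshold exceedances in each ensemble, then form the fraction of attributable risk, FAR = 1 − p0/p1.

```python
import random

def exceedance_fraction(samples, threshold):
    """Fraction of ensemble members exceeding the event threshold."""
    return sum(x > threshold for x in samples) / len(samples)

random.seed(42)
n, threshold = 100_000, 2.0

# Toy surrogate ensembles: same variability, but the "forced" run
# carries a 0.5-sd warm shift representing the climate driver.
natural = [random.gauss(0.0, 1.0) for _ in range(n)]
forced  = [random.gauss(0.5, 1.0) for _ in range(n)]

p0 = exceedance_fraction(natural, threshold)  # event probability without the driver
p1 = exceedance_fraction(forced, threshold)   # event probability with the driver
far = 1.0 - p0 / p1                           # fraction of attributable risk
print(f"p0={p0:.4f}, p1={p1:.4f}, FAR={far:.2f}")
```

With these toy numbers the FAR comes out around two-thirds, i.e. roughly two-thirds of the risk of exceeding the threshold is attributable to the imposed shift. This is also why the sample-size point matters: for rarer thresholds, both exceedance counts shrink and far larger ensembles are needed before p0 and p1 are estimated well enough to quote a FAR.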
For the largest-scale extremes, such as heat waves (or days above 90°F, etc.), multiple methodologies — via observations, coupled simulations, targeted simulations — indicate that heat waves have become more likely (and cold snaps less likely). In such cases, the attributions are increasingly reliable and robust. For extremes that lend themselves to good statistics — such as the increasing intensity of precipitation — there is also good coherence between observations and models. So claims that there is some intrinsic reason why extremes cannot be reliably attributed don’t hold water.
It is clearly the case that for some extremes — tornadoes or ice storms come to mind — the modelling has not progressed to the point where direct connections between the conditions that give rise to the events and climate change have been made (let alone the direct calculation of such statistics within models). But in-between these extreme extremes, there are plenty of interesting intermediate kinds of extremes (whose spatial and temporal scales are within the scope of current models) where it is simply the case that the work has not yet been done to evaluate whether models suggest a potential for change.
For instance, it is only this year that sufficient high frequency output has been generically archived for the main climate models to permit a multi-model ensemble of extreme events and their change in time — and with sufficient models and sufficient ensemble members, these statistics should be robust in many instances. As of now, this resource has barely been tapped and it is premature to declare that the mainstream models are not fit for this purpose until someone has actually looked.
Overall, I am surprised that neither Nature nor (some?) attendees at the workshop could find good arguments supporting the utility of attribution of extremes — as the science improves, these attributions will surely become part of the standard assessments of impacts to be avoided by mitigation, or moderated by adaptation. We certainly could be doing a better job of analysing the data we already have in hand to explore whether and to what extent models can be used for what kinds of extremes, but it is wrong to say that such attempts are per se ‘unverifiable’. As to whether we are better off having started down this path, I think the answer is yes, but this is a nascent field and many different approaches and framings are still vying for attention. Whether this brings on any significant changes in policy remains to be seen, but the science itself is at an exciting point.
Finally, here are the comments sent me by Dr. Trenberth, who was not able to attend the Oxford workshop:
Attribution of climate change requires not only good data but also a good model to take apart any simulated signal. This means that the model must be capable of simulating the relevant phenomena with high integrity. Unfortunately, this is often not the case for individual events owing to model errors. For instance, climate models do not perform well for blocking anticyclones, monsoons, tropical storms, or most intense rainfall events. In part this is resolution related, but in part it is because of model biases, so that basic features of the simulated climate are not quite in the right place. Fundamentally, the model climate is not identical to the observed climate.
Uncritical use of a model, without testing its ability to simulate the phenomena thought to be relevant, has led some studies to conclude that “it must be natural variability” owing to the way the testing is done. This is because the main test is to see whether it is due to human influences, rather than the reverse: to test whether it isn’t. Of course the real conclusion should be that the tools are inadequate. Or that the wrong questions are being posed! These days, climate change is pervasive, as the basic environment in which all weather forms is simply different than it was more than 30 years ago. That reflects the memory of all the human influences on climate. In addition there is the changed atmospheric composition and atmospheric heating occurring on a daily basis. How can there not be a human influence on all climate events?
As an aside, having an imperfect model does not preclude it from being used to explore changes in a general sense. These models have been successfully used to clearly demonstrate that human-induced climate change is real and substantial. However, event attribution requires considerable care and better models are essential. Testing models on such events, though, is one way to help improve them. It is important research.
- “Study Finds 80% Chance Russia’s 2010 July Heat Record Would Not Have Occurred Without Climate Warming”
- “NOAA Study Finds Human-Caused Climate Change Already a Major Factor in More Frequent Mediterranean Droughts”
- Study: Global warming is driving increased frequency of extreme wet or dry summer weather in southeast, so droughts and deluges are likely to get worse
- Leading experts explain how human-caused warming exacerbates Texas drought