
Nate Silver’s Climate Chapter And What We Can Learn From It


by Dana Nuccitelli, via Skeptical Science

In the interest of full disclosure, many Skeptical Science team members are big fans of Nate Silver’s FiveThirtyEight blog at The New York Times.  Silver runs a model which uses polling results and various other input factors (such as economic indicators) to predict election outcomes in the USA, with an impressive track record of accuracy.

Thus we were intrigued to hear that Silver had included a chapter on climate change in his newly-published book The Signal and the Noise: Why So Many Predictions Fail-but Some Don’t, particularly since we at Skeptical Science are often forced to explain the difference between signal and noise.  Having great respect for the work and climate-related opinions of Michael Mann (who Silver consulted in writing the book), we were also concerned to see his criticisms of Nate Silver’s climate chapter.

Nevertheless, Mann recommended that people read the book for themselves, praising much of the content.  So I did just that, and overall I believe that if we take Silver’s analysis a step further, we can learn a lot about the accuracy of climate models.  It’s also important to remember that, as Silver himself notes in the chapter, our basic understanding of how the climate works and how much it will warm in response to our greenhouse gas emissions is not just dependent on models.

Correlation is not Causation without Physical Connection

Silver’s climate chapter starts out very well, noting that correlation does not necessarily imply causation, and that determining climate change causation requires a physical understanding of the climate system.

“…predictions are potentially much stronger when backed up by a sound understanding of the root causes behind a phenomenon.  We do have a good understanding of the cause of global warming: it is the greenhouse effect.”

Failing to consider physics in trying to determine the cause of global warming has been the pitfall for many a climate contrarian, for example Roy Spencer, Craig Loehle, Nicola Scafetta, Syun-Ichi Akasofu, and many others, so Silver’s point is an important and relevant one.  It is easy to fall into the curve fitting trap.

Silver goes on to explain some of that fundamental physics as discussed in the IPCC report – that atmospheric CO2 has increased steadily and rapidly, that this CO2 increase will in turn increase the greenhouse effect and cause global surface warming (which we’ve known for well over a century), and that water vapor will amplify that global warming as a feedback effect, ultimately noting “The greenhouse effect isn’t rocket science.”

Healthy Skepticism or Noise?

After this good start, the chapter then proceeds to discuss what Silver considers the healthy form of scientific skepticism, noting that

“In climate science, this healthy skepticism is generally directed at the reliability of computer models used to forecast the climate’s course.”

Silver then presents J. Scott Armstrong as an example of this type of healthy skeptic, one concerned about the accuracy of climate model predictions.  Armstrong is essentially used to establish the ‘skeptic’ criticisms of climate models, though his arguments are weak, boiling down to ‘climate models are too complex to be accurate.’  Armstrong also tends to focus on short-term noise rather than long-term trends, which Silver does eventually point out toward the end of the chapter.  After establishing Armstrong’s criticisms, Silver moves on to the more interesting part of the chapter: evaluating the accuracy of past climate models.

Testing Hansen’s 1988 Model Accuracy

Silver attempts to evaluate the accuracy of climate models by examining the model projections made by James Hansen in 1988 and by the IPCC in 1990 and 1995.  We should note here that Skeptical Science has evaluated many other temperature projections going back as far as Wallace Broecker’s 1975 paper in the Lessons from Past Predictions series, with the results summarized in Figure 1 (though not all of these are based on climate models).  Note that most of the accurate predictions have come from mainstream climate scientists and models, while the least accurate predictions have come from various ‘skeptics’.

Figure 1: Various best estimate global temperature predictions evaluated in the ‘Lessons from Past Climate Predictions’ series vs. GISTEMP (red).  The warmer colors are generally mainstream climate science predictions, while the cooler colors are generally “skeptic” predictions.  The Hansen projection in pink is from Hansen et al. 1988.

Silver first examines James Hansen’s 1988 projections, but not in great detail, simply noting that they are difficult to evaluate because they rely on various emissions and radiative forcing (global energy imbalance) assumptions, concluding

“Even the most conservative scenario somewhat overestimated the warming experienced through 2011.”

Silver is right that Hansen’s 1988 model projected more warming than has been observed.  But what can we learn from this?

The overall climate sensitivity (the total amount of climate warming in response to a given greenhouse gas increase, including feedbacks) in Hansen’s model was 4.2°C for a doubling of atmospheric CO2 levels.  This is significantly higher than most of today’s climate models, which put the value around 3°C for doubled CO2.  In order to accurately project the ensuing warming, Hansen’s model sensitivity would have had to be close to that in today’s climate models (Figure 2).

Figure 2: A rough adjustment of the Hansen 1988 Scenario B temperature projection to reflect a 3°C rather than 4.2°C climate sensitivity (red) vs. GISTEMP observations (black)
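
To make the nature of that adjustment concrete, here is a minimal Python sketch of the kind of rescaling shown in Figure 2.  The Scenario B warming values are illustrative placeholders rather than Hansen’s actual model output, and treating projected warming as scaling linearly with equilibrium sensitivity is itself only a rough approximation; the 4.2°C and 3°C sensitivities are the ones discussed above.

```python
# Rough sketch: rescale a warming projection from a 4.2 C to a 3 C equilibrium
# climate sensitivity. The Scenario B values below are illustrative
# placeholders, not Hansen's actual model output, and a simple linear
# rescaling is only an approximation of the adjustment shown in Figure 2.

hansen_sensitivity = 4.2   # deg C per doubled CO2 (Hansen et al. 1988 model)
modern_sensitivity = 3.0   # deg C per doubled CO2 (typical of today's models)
scale = modern_sensitivity / hansen_sensitivity  # ~0.71

# Hypothetical Scenario B warming relative to a late-20th-century baseline (deg C)
scenario_b = {1990: 0.3, 2000: 0.5, 2010: 0.8, 2020: 1.1}

adjusted = {year: round(warming * scale, 2) for year, warming in scenario_b.items()}
print(adjusted)  # e.g. {1990: 0.21, 2000: 0.36, 2010: 0.57, 2020: 0.79}
```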

Thus we can be confident that today’s climate models are, not surprisingly, more accurate than James Hansen’s 1988 model.

Testing IPCC Model Accuracy

The chapter proceeds to evaluate the 1990 IPCC report’s temperature projections.  Silver notes that under the various scenarios, the models projected between approximately 2°C and 5°C global surface warming from 2000 to 2100 (Figure 3).

Figure 3: 1990 IPCC projected global warming in the BAU emissions scenario using climate models with equilibrium climate sensitivities of 1.5°C (low), 2.5°C (best), and 4.5°C (high) for doubled atmospheric CO2

Silver then compares this rate of warming to the rate observed from 1990 through 2011 and concludes that the 1990 IPCC report somewhat over-predicted the ensuing warming.  We can take this analysis further and address the question of why their warming projections were a bit high.

While Silver’s discussion compares observations to all three IPCC climate sensitivity scenarios (low = 1.5°C for doubled CO2, best = 2.5°C, high = 4.5°C), he has not yet considered the emissions scenario (BAU = business as usual).  The 1990 BAU scenario assumed a 3.5 Watts per square meter (W/m2) greenhouse gas forcing in 2011, whereas the actual greenhouse gas radiative forcing was approximately 2.8 W/m2 in 2011.  The IPCC BAU forcing was too high for two reasons:

1) In 1990, the radiative forcing caused by a doubling of CO2 was believed to be about 4.4 W/m2.  We now know it’s closer to 3.7 W/m2 (see the short calculation after this list).

2) Greenhouse gas emissions have not risen quite as fast as the IPCC BAU scenario.
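
The ~3.7 W/m2 figure in point #1 is consistent with the widely used simplified expression for CO2 forcing from Myhre et al. (1998), dF = 5.35 × ln(C/C0) W/m2.  This short calculation simply evaluates that expression for a doubling of CO2:

```python
# Simplified CO2 radiative forcing expression (Myhre et al. 1998):
#   dF = 5.35 * ln(C / C0)  in W/m^2
import math

def co2_forcing(c_ppm, c0_ppm):
    """Approximate forcing (W/m^2) for a change in CO2 from c0_ppm to c_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(560, 280), 2))  # doubling -> ~3.71 W/m^2,
                                        # versus the ~4.4 W/m^2 assumed in 1990
```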

This resulting lower real-world radiative forcing (an input, not output of the model) accounts for most of the model-data discrepancy Silver observes (Figure 4).

Figure 4: 1990 IPCC FAR BAU “best” global warming projection reflecting the observed GHG forcing changes (blue) vs. observed average global surface temperature change from GISTEMP (red) since 1990.

When the BAU scenario projections are adjusted to reflect the actual greenhouse gas changes since 1990, the model projects roughly 0.2°C of warming per decade, which is slightly more than has been observed, but within the 95% uncertainty range of all temperature data sets.  That the observed rate of warming has most likely been a bit lower than the IPCC projection is also not surprising considering all the short-term cooling influences over the past decade.  In fact, later in the chapter Silver discusses one of these recent cooling effects – increased aerosol emissions from Chinese coal plants have likely dampened the observed warming over the past decade.
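
For readers who want to see where the "roughly 0.2°C per decade" figure comes from, here is a back-of-envelope version of the Figure 4 adjustment.  The 3.5 and 2.8 W/m2 forcings are from the text; the ~0.3°C-per-decade best-estimate BAU rate is the commonly quoted figure from the 1990 report, and treating warming as simply proportional to forcing over a couple of decades is a deliberate simplification, not the actual model calculation.

```python
# Back-of-envelope version of the Figure 4 adjustment: scale the 1990 IPCC
# "best estimate" BAU warming rate by the ratio of the forcing that actually
# occurred to the forcing the BAU scenario assumed. Assuming warming scales
# proportionally with forcing over ~two decades is a rough simplification.

assumed_forcing_2011 = 3.5   # W/m^2, 1990 BAU scenario (from the text)
actual_forcing_2011 = 2.8    # W/m^2, observed GHG forcing in 2011 (from the text)

projected_rate = 0.3         # deg C per decade, approximate 1990 "best estimate" BAU rate
adjusted_rate = projected_rate * (actual_forcing_2011 / assumed_forcing_2011)

print(round(adjusted_rate, 2))  # ~0.24 deg C per decade -- roughly the 0.2 C per
                                # decade discussed above, slightly above observations
```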

Silver does note that the 1990 IPCC BAU scenario “was somewhat too pessimistic” (point #2 above), but goes on to claim that

“Nevertheless, the IPCC later acknowledged their predictions had been too aggressive.  When they issued their next forecast, in 1995, the range attached to their business-as-usual case had been revised considerably lower: warming at a rate of about 1.8°C per century.  This version of the forecasts has done quite well relative to the actual temperature trend.  Still, that represents a fairly dramatic shift.”

We should point out that the 1995 IPCC report considered a number of different emissions scenarios, with a corresponding average global surface warming ranging from about 1.6 to 2.5°C between 1990 and 2100 (Figure 5).  Describing it as simply projecting about 1.8°C per century warming does not capture the full spread of warming projections.

Figure 5: 1995 IPCC report projected global mean surface temperature changes from 1990 to 2100 for the full set of IS92 emission scenarios. A climate sensitivity of 2.5°C is assumed.

Again we can take this analysis a step further and get into the model nuts and bolts to understand the reason behind the lower rate of projected warming in the 1995 IPCC report as compared to the 1990 report.  As it turns out, the difference was mainly due to the revised emissions scenarios — model inputs, not outputs.  The 1995 IPCC report still used the too-high CO2 radiative forcing value (point #1 above), but used what has turned out to be a more accurate range of greenhouse gas emissions scenarios than the 1990 BAU (addressing point #2 above).

When we adjust for the actual greenhouse gas radiative forcing, the 1995 IPCC report projects a very similar amount of warming as the similarly corrected 1990 report ‘best estimate’, which we would expect, since both used climate sensitivities of 2.5°C.

Ultimately the difference between the projected warming in the two reports mostly boils down to the use of different emissions and radiative forcing scenarios – model inputs, not outputs.  As illustrated in Figure 1 above, the 1990 and 1995 IPCC temperature projections performed very similarly when adjusted for actual greenhouse gas and radiative forcing changes.
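
A toy calculation makes the "inputs, not outputs" point explicit: summarize each report’s model by its climate sensitivity (both used 2.5°C) and feed it different forcings, and the spread comes entirely from the inputs.  This uses the simple equilibrium relation ΔT = S × ΔF / F(2xCO2) and is purely an illustration, not a reconstruction of either report’s projections.

```python
# Toy illustration of "inputs, not outputs": the same simple model (defined
# here only by its climate sensitivity) gives different answers solely
# because it is fed different forcing scenarios. Both the 1990 and 1995
# reports used a sensitivity of 2.5 C per doubled CO2.

def toy_equilibrium_warming(forcing_wm2, sensitivity=2.5, forcing_2xco2=3.7):
    # Equilibrium warming implied by a sustained forcing; the realized
    # (transient) warming is smaller because the oceans lag for decades.
    return sensitivity * forcing_wm2 / forcing_2xco2

for label, forcing_2011 in [("1990 BAU assumed forcing", 3.5),
                            ("actual GHG forcing", 2.8)]:
    print(label, round(toy_equilibrium_warming(forcing_2011), 2), "C at equilibrium")
# Same 'model', different inputs: ~2.36 C vs ~1.89 C implied at equilibrium.
```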

The ‘dramatic shift’ Silver refers to simply reflects a change in emissions scenarios – a model input.  Both the 1990 and 1995 IPCC model projections have been quite accurate when we adjust for those inputs.  If we are just evaluating model accuracy here, the IPCC did very well in both 1990 and 1995.  If we are evaluating the IPCC’s ability to predict CO2 emissions, well, that’s the rub.  We don’t know how human CO2 emissions will change in the future, but if they continue on their current path, the models project a whole lot of warming.

The IPCC, Al Gore, and Polar Bears

There are a few relatively minor errors in the chapter worth noting.  For example, Silver states:

“And however many models there are, the IPCC settles on just one forecast that is endorsed by the entire group.”

This is not quite correct.  While the IPCC report does publish a graphic illustrating the multi-model mean temperature projection for each emissions scenario, it also shows the envelope of individual model projections on the same figure (Figure 6).

Figure 6: Temperature Projections from the 2007 IPCC Report.  Solid lines are multi-model global averages of surface warming (relative to 1980–1999) for the emissions scenarios A2, A1B, and B1, shown as continuations of the 20th century simulations. Shading denotes the ±1 standard deviation range of individual model annual averages. The orange line is for the experiment where concentrations were held constant at year 2000 values. The grey bars at right indicate the best estimate (solid line within each bar) and the likely range assessed for the six emissions marker scenarios.

Silver also criticizes Al Gore’s film An Inconvenient Truth as

“…sometimes [being] less cautious, portraying a polar bear clinging to life in the Arctic, or South Florida and Lower Manhattan flooding over.  Films like these are not necessarily a good representation of the scientific consensus.”

However, Arctic sea ice is actually declining significantly faster than the climate models used in the IPCC report predicted, and it’s certainly true that South Florida and Manhattan could eventually flood as a result of sea level rise.  Additionally, while Gore’s film did get a few details wrong, as Michael Mann noted, it got the basic science right.

Good Points in the Chapter

Silver’s climate chapter also makes many good points for which he deserves credit.

  • Silver points out that climate models simply cannot replicate the current climate without accounting for greenhouse gas increases.
  • The chapter contains a graphic similar to The Escalator, showing that there are often short-term changes in the opposite direction of the long-term trend, but that this just represents noise in the system (a synthetic illustration of this point follows this list).
  • Silver references William Nordhaus in noting that uncertainty is actually a reason to reduce greenhouse gas emissions, because the worst climate scenarios cannot be ruled out.
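
To see the "Escalator" point for yourself, the following sketch generates a synthetic temperature series with a steady underlying warming trend plus year-to-year noise, then counts how many short windows nevertheless show a negative trend.  The trend and noise values are illustrative choices, not fitted to any real temperature data set.

```python
# Synthetic "Escalator" illustration: a steady long-term warming trend plus
# year-to-year noise can still produce short windows that appear to cool.
import random

random.seed(0)
trend = 0.017     # deg C per year of underlying warming (illustrative)
noise_sd = 0.1    # deg C of year-to-year variability (illustrative)
years = list(range(1970, 2013))
temps = [trend * (y - 1970) + random.gauss(0, noise_sd) for y in years]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

window = 8  # years -- short enough for noise to dominate the trend
cooling = sum(1 for i in range(len(temps) - window + 1)
              if ols_slope(years[i:i + window], temps[i:i + window]) < 0)
print(f"{cooling} of {len(temps) - window + 1} overlapping {window}-year "
      f"windows show an apparent cooling trend")
```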

So aside from the issues noted above, the chapter does contain a lot of good information.

Discouraging Conclusion

The chapter ends on a bit of a discouraging note, with Silver suggesting that climate scientists should not try to directly influence climate policy.

“It is precisely because the debate may continue for decades that climate scientists might do better to withdraw from the street fight and avoid crossing the Rubicon from science into politics.”

Note that by “politics” Silver appears to mean “policy” in this context.  However, the problem with Silver’s suggestion is that we simply don’t have decades to waste. The evidence that humans are causing dangerous climate change has been building for many decades, with climate scientists stressing the importance of emissions cuts, and yet policymakers have continually failed to take the action necessary to steer us off this path.  Climate scientists have essentially been forced into pressing harder and harder for serious climate policy – to put it simply, we are quickly running out of time to avoid potentially catastrophic climate change.

Silver’s discouraging conclusion may result from not taking that extra step to evaluate why the past warming projections discussed here were too high, as we have done.  Silver may have drawn the conclusion that the climate is not warming as fast as we expect, which would suggest that we have more time than climate scientists believe to solve the problem.

However, when correcting the model projections to account for the actual greenhouse gas emissions and forcing changes, we see that their temperature projections have been very accurate.  Thus climate scientists are correct to worry that unless we quickly enact climate policies to reduce human greenhouse gas emissions, we will suffer some very painful consequences.  In fact, as Silver himself notes in the chapter, we don’t even need climate models to realize that we’re in for a lot of global warming this century.

Silver is right that we are making too little progress in terms of climate policy, and he is of course correct to note that climate scientists must be careful to ensure that their predictions and warnings are scientifically accurate and defensible.  However, remaining silent in the face of a potential catastrophe is simply not an option.  Effective policy depends on active engagement between policymakers and the scientific community.  It’s important for scientists to know the limits of science in shaping policy, but it’s also vital to ensure that policy grapples with scientific realities.  If climate scientists say nothing, this creates a vacuum to be filled by people like Armstrong and the other climate contrarians Silver discusses, who frankly are way out of their depth when it comes to climate science and climate models, but don’t know enough to realize it.

Note that I have not yet read the rest of Silver’s book, which looks like it may be very interesting.  Overall the climate chapter does give us a good start in evaluating the accuracy of climate models, as long as we take the analysis a few steps further.

Dana Nuccitelli is an environmental scientist at a private environmental consulting firm in the Sacramento, California area. This piece was originally published at Skeptical Science and was reprinted with permission.


13 Responses to Nate Silver’s Climate Chapter And What We Can Learn From It

  1. Silver really wants climate scientists to not voice their thoughts on climate policies? Wow. Not only is that undemocratic, but it sets a litmus test where only the more poorly informed members of society get to influence and create climate policy.

    You don’t need to be a statistics genius to see that is not working out so well so far.

    • Mulga Mumblebrain says:

      Very well said. I thought that you, and we and all the other lucky darlings of the glorious West lived in ‘democracies’, where everybody had an equal say. Yes, I know that’s complete tosh, but that’s the line that gets peddled. But, according to Mr Silver, scientists, those who actually understand the ghastly truth, must shut up and leave politics to….whom, precisely? Why, to those ‘born to rule’ types who know best, of course, the businessmen, the Kochtopus, the fossil fuel interests, and to those human paladins in the MSM who brainwash the proles for them, the Limbaughs, Alan Joneses, Delingpoles and all the rest of that scuttling tribe. I thought that this chap had been dismissed weeks ago.

      • Jack Burton says:

        Well said indeed! I was blown away by the ridiculous assertion that climate scientists should not try and influence politics. This man, Silver, seems to consider himself a member of the ruling elites. Does he also subscribe to Ms Rand and her mad take on humanity?
        Look, I have just about had it with all these supposedly educated people making excuses for the climate skeptics and their divorce from science. As if the skeptics are anything but paid shills of the fossil fuel industry.
        A day will come when all those who have enabled the skeptics’ hoax to continue, long past our last chance to turn the CO2 monster around, are brought to book before history. Imagine you are an historian in the year 2075 and have all our present records to study; lots of names will live in infamy in the history books of 2100.

  2. Mike Roddy says:

    Good summary, Dana, but you are giving Silver too much slack. I suggest that readers click the link you posted to Michael Mann’s criticism of Silver, which was polite but forceful.

    Economists like Silver and Levitt or, for that matter, statisticians like McIntyre, should not contribute to serious discussions of the science of global warming. As Mann pointed out, radiative forcing is proved by physics. Venturing into short term fluctuations, alleged anomalies in models, and other matters just leads to confusion.

    Besides the Chicago Economics libertarian influence, Silver self censors as a result of working for the New York Times. Revkin, Gillis, Broder, and the rest of them somehow feel obligated to throw a bone to skeptics, and call short term data fluctuations “uncertainty”.

    Finally, Silver proposes that scientists stay in their laboratories, and avoid engaging the public. This is a mistaken and irresponsible position. Public education is currently being handled by organizations like CBS, Fox, Tribune, Clearchannel, TimeWarner, and, yes, the New York Times, which is bad but not horrible, like the rest of them. Maybe Silver believes that he and other economists are better suited to interpret the data. No no, Nate.

    • dana1981 says:

      My initial draft was tougher on Silver, but then we decided to tone it down and go with the ‘what can we learn from this?’ theme. Silver does get a lot of stuff right in the chapter, after all. But the conclusion in particular was very disappointing, illustrating that Silver really doesn’t grasp the magnitude of the climate threat.

  3. Paul Klinkman says:

    South Florida and Manhattan won’t wait until the far, far-off future to flood. Some August, a really big hurricane (not 170 mph Andrew) will hit Florida or Manhattan at a bad tide or in a bad storm surge spot. Then a few people will start to get it.

  4. Silver has a great grasp of the issues that are currently statistically significant in elections. However, despite the impact of elections, that’s actually a very narrow range of knowledge with a narrow range of applicability.

    Scientists aren’t removed from society. If a scientist’s house were burning, would Silver tell him he shouldn’t call the fire department? Of course not. The difference between a house fire and climate change isn’t of kind but of time. Climate change is a slow moving crisis, but that doesn’t make it not a crisis. It’s like an avalanche: by the time you know it’s happening, you can’t do anything about it.

    Or put it another way: who would Silver have in the debate? The hacks and quacks?

  5. AlanInAZ says:

    I have read the climate chapter as well as several others. I do think Michael Mann’s review (seconded by Brad Johnson) is a better reflection of the book’s content than this review. After reading the chapter I felt Silver left the impression that climate change, although real, is not really a big deal just yet and we can wait until the predictive ability of the science is a bit more settled. Every example he cites shows the climate science overstating impacts. He even spends space trying to debunk Gavin Schmidt’s overconfidence in placing odds on a bet for global warming over the next decade. The other chapters in the book are good, but the climate chapter was a disappointment.

  6. mikkel says:

    Well I really liked this piece. It did a really good job of highlighting the purpose of making climate models in the first place, which is not merely to predict what “will” happen, but to create scenarios that can be updated over time based on better scientific understanding and actions taken.

    In the beginning days of Fukushima I told everyone that would listen that things could get really bad because of inherent flaws in the design. I had originally learned about this in an Adam Curtis documentary in which the inventor of the LWBR design said it should never be used in utility plants because it was impossible to guarantee stability due to positive feedbacks. “Don’t believe anyone that says the models show things will be safe,” I said.

    Someone asked why I didn’t believe those models but believed the global warming ones and I responded that it was simple: the global warming models aren’t trying to predict how to control the climate, they are merely pointing out how likely it is to lose control based on what we know.

    I’ve found the difference between a control system and an impersonal prediction is largely lost on most statisticians, and not consistently elucidated by climate modelers. This post does a good job of indirectly acknowledging the role of models in diagnosing complex dynamic systems.

    Maybe there is some conceit by Silver, but never ascribe to malice what can be explained by incompetence.

    • dana1981 says:

      Thanks. Really the point is that models tell you “if x, then y will happen”. The problem was that Silver didn’t look in very much detail at the “x”, he just looked at the “y”. So he seemed to suggest the models were too sensitive, when in reality the emissions just haven’t generally risen as fast as “x” (or more accurately, the forcing hasn’t been as high as the input scenarios).

      You have to understand some climate science to grasp this though, or at least it helps. I think Silver should have consulted Gavin Schmidt on that portion of the chapter.