The deniers were half right: The Met Office Hadley Centre had flawed data — but it led them to UNDERestimate the rate of recent global warming
Claims that global warming has slowed down over the past decade were partly based on faulty data. Instead, the rate of global warming was underestimated because of a new way of measuring sea-surface temperatures, suggests a new study….
[Lead author John] Kennedy says the underestimation of the change in sea-surface temperature could account for up to 0.03 °C of the apparent slowdown in global temperatures. The correction could mean that 2010 will be the warmest year on record, surpassing 1998 and 2005.
That’s the New Scientist reporting on a new re-analysis of the global sea-surface temperature (SST) dataset by the Met Office.
Of course, everybody but the anti-science disinformers has known for a long time that the Hadley/CRU (Climatic Research Unit) temperature data UNDERestimates, not OVERestimates, the recent global temperature rise. Their data excludes “the place on Earth that has been warming fastest” (see “Why are Hadley and CRU withholding vital climate data from the public?” and “What exactly is polar amplification and why does it matter?”). NASA’s James Hansen has made this point for years.
Last December, the Met Office had concluded that “The global temperature rise calculated by the Met Office’s HadCRUT record is at the lower end of likely warming.” In a 2009 analysis, “New evidence confirms land warming record,” the Met Office explained that they had been lowballing land-based temperatures, “because HadCRUT is sampling regions that have exhibited less change, on average, than the entire globe over this particular period.”
There is little doubt that these two problems are responsible for a large fraction of the apparent slowdown in the rate of warming in the Hadley/CRU dataset, and that the rest of the slowdown is due to well-understood factors such as the deepest solar minimum in a century, which we are just coming out of. The Met Office said as much last week.
I emailed Kennedy to get the study, which has been accepted for publication in Remote Sensing of Environment but isn’t online yet. I also wanted to ask him if the second paragraph cited above in the New Scientist article was accurate — it is — and whether the Met Office would be incorporating these corrections for its final 2010 temperature calculation. Kennedy elaborated:
Regarding the accuracy of the New Scientist paragraph. I think it is correct in so far as 0.03 is an upper limit on the size of the effect and that such a change in the global average temperature of 2010 in the HadCRUT3 data set (0.52 from January to October) would mean that the average was higher than the average for 1998 as a whole (0.52) and 2005 as a whole (0.47). Note that the uncertainties quoted on annual average temperatures are typically around 0.1K, so the effect on an individual year is likely smaller than other effects.
I would add that there are still considerable uncertainties in the sea-surface temperature measurements (as described in the paper) and the difference between ship and buoy data is only one factor that needs to be taken into account. This is why we are not yet applying any corrections to the SST data based on the RSE paper. We are still working on a more comprehensive analysis that takes these other factors into account. This will not be ready in the next couple of months.
So the Met Office may not make its correction in time for its announcement, sometime in January, ranking 2010 among the hottest years. So the deniers may have something to hold onto for a while. Ultimately, though, this correction will force them to reorder the ranking of all the recent years.
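The arithmetic of that re-ranking is simple enough to sketch. A minimal illustration, using the HadCRUT3 anomalies Kennedy quotes (2010’s value is the January–October average, and the +0.03°C is an upper limit, not a correction the Met Office has actually applied):

```python
# Annual global temperature anomalies (deg C) from the HadCRUT3 figures
# quoted above: 2010 is the Jan-Oct average; others are full-year values.
anomalies = {1998: 0.52, 2005: 0.47, 2010: 0.52}

# Apply the upper-limit buoy correction of +0.03 deg C to 2010 only
# (illustrative -- no such correction has yet been applied officially).
corrected = dict(anomalies)
corrected[2010] += 0.03

# Rank years from warmest to coolest before and after the correction.
before = sorted(anomalies, key=anomalies.get, reverse=True)
after = sorted(corrected, key=corrected.get, reverse=True)
print(before)  # [1998, 2010, 2005] -- 1998 and 2010 tied at 0.52
print(after)   # [2010, 1998, 2005] -- 2010 now clearly warmest at 0.55
```

As Kennedy cautions, the ~0.1 K uncertainty on any annual average dwarfs a 0.03°C shift, so the re-ranking is about record bookkeeping, not a change in the physical picture.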
The simple explanation of this correction is here:
Over the past decade, sea-surface temperature has mostly been measured by thermometers on buoys, whereas previously it was measured aboard ships. Ship measurements tend to be too high because the water warms up as it is taken on board.
So although the newer buoy measurements are more accurate, the switch in method has erroneously shown sea-surface temperatures appearing to level off.
“Compared with ships, buoys show cooler temperatures,” says Vicky Pope at the Met Office. “You have to be careful of false signals.”
Here is a slightly more detailed explanation, from ReportingClimateScience’s piece, “Met Office to revise global warming data upwards”:
Satellite data has reported a bigger increase in sea surface temperatures than in situ data from buoys and ships, according to Met Office scientist John Kennedy. “We suspect that there has been this difference for quite a while. And when we make a correction for the data from buoys we find that the trend from in situ data is much closer to the trend observed by the satellites,” explained Kennedy. “This is what makes us think, all other things being equal, that the increase in the number of buoys leads to a cooling bias in the global average sea surface temperature.”
In particular, Met Office scientists found that sea surface temperature observations from the Along-Track Scanning Radiometer (ATSR) satellite instrument and observations from the same area made by in situ platforms were different. Kennedy and colleagues from the Met Office have submitted a paper on this issue that has been accepted for publication in the journal Remote Sensing of Environment (RSE) which is called “Using AATSR data to assess the quality of in situ sea-surface temperature observations for climate studies”.
The RSE paper states that “The trend in global-average SST between 1991 and 2007 calculated from in situ data was compared to its counterpart calculated from the ATSR instruments. The in situ record warms more slowly than the ATSR record, probably due to a decrease in the fraction of relatively warm-biased ship observations contributing to the global-average SST over the period and a corresponding increase in the fraction of relatively unbiased drifting buoy observations.”
The problem seems to be related to the way that some ships measure the temperature of the water, which leads to the average temperature measured by ships being higher than the average sea surface temperature measured by a thermometer on a buoy. “On average the difference ranges between 0.13°C and 0.18°C,” Kennedy told ReportingClimateScience.com. The scale of this difference across the globe and over the years is sufficient to add a warming of 0.03°C per decade to the HadCRUT surface temperature record.
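The mechanism behind that spurious cooling can be sketched numerically. This is an illustration under assumed numbers, not Met Office figures: take a ship warm bias of 0.15°C (mid-range of the 0.13–0.18°C Kennedy quotes) and suppose the ship share of observations falls from 70% to 50% over a decade. Even with the true sea-surface temperature held perfectly flat, the blend of unadjusted ship and buoy data appears to cool by about 0.03°C:

```python
# Illustrative sketch of the ship-to-buoy transition bias.
# All numbers except the 0.15 deg C ship bias (mid-range of the
# 0.13-0.18 deg C quoted by Kennedy) are assumptions for illustration.
true_sst = 15.00   # true sea-surface temperature, held constant
ship_bias = 0.15   # ships read warm by ~0.15 deg C on average

def blended_sst(ship_fraction):
    """Average of warm-biased ship readings and unbiased buoy readings."""
    return ship_fraction * (true_sst + ship_bias) + (1 - ship_fraction) * true_sst

# Ship share of observations falls from 70% to 50% over the decade.
start = blended_sst(0.70)
end = blended_sst(0.50)
print(f"apparent change: {end - start:+.3f} deg C")  # -0.030: spurious cooling
```

The point of the sketch is that no individual thermometer has to drift: simply changing the mix of warm-biased and unbiased instruments manufactures an apparent trend, which is why correcting for it warms the recent record.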
If you are a real SST junkie, here is a May 2010 paper on the subject, “Effects of instrumentation changes on sea surface temperature measured in situ.”
So the global warming deniers, disinformers, and confusionists had it half right — the Hadley/CRU data was flawed, but just not in the direction they have been saying.
Finally, it always bears repeating that, as we learned in two key 2009 papers, the planet is warming from GHGs just where climate science said it would: the oceans, which is where more than 90% of the warming was projected to end up (see “Skeptical Science explains how we know global warming is happening.”). The key findings in the second study are summed up in these two figures:
Total Earth Heat Content from 1950 to 2003 (Murphy 2009).
Time series of global mean heat storage (0–2000 m), measured in 10⁸ J m⁻².
The second study makes clear that upper ocean heat content, perhaps not surprisingly, is simply far more variable than deeper ocean heat content, and thus an imperfect indicator of the long-term warming trend.
The deniers, of course, will claim “the warmists’ data didn’t show enough warming, so they revised it.” Sigh.
Excellent. A good step in the right direction.
The challenge now is to find a way to measure temps in the northernmost parts of the Arctic as well. That’s especially important now, with what seems like a permanent NAO pattern (due in part to the extreme heating of the area above the Barents and Kara seas) that blasts cold Arctic air into Europe, where temps are measured, while warm air flows back into the Arctic, where its effects aren’t being measured, causing another cooling bias in the data.
Can’t we just measure the temperature of things anymore? (Could we ever?)
It just seems like the absolute value of all the adjustments is approaching the magnitude of the anomalies themselves.
It’d have been nice if someone had started a ‘perfect’ temperature-sensing network 30 years ago. I guess they tried that with satellites… didn’t work out so well.
The deniers are a very persistent group. I argue with them constantly over at the Huffington Post. They try to pass off the same denier junk from the likes of Watts Up With That, Roy Spencer, Richard Lindzen, Steve Goddard, etc. I had one earlier today try to feed me the shopworn lies about the Vostok ice records and the “800 years” we have till CO2 begins to affect climate.
Or they say mankind has “always adapted to climate change in the past… and will do so again…”
BB: Feel free to design a ‘perfect’ global temperature sensing network, work out the cost, explain who will pay, and then get them to cough up. I suspect you’ll get stuck on the first step, but if you make it all the way, you’ll be a hero forever.
BB: Science aims to improve our measurements, detecting problems and fixing them as we go. You surely don’t think it’s easy to measure temperatures all around the world, in a way that always gets it right? It took a lot of work for astronomers to come up with good-quality data on the planets’ motions: Brahe’s work was essential to Kepler arriving at his laws of planetary motion, leading up to Newton’s laws, and we’re still improving and extending that data. Climate science is doing the same for its data: it will never be perfect, but the datasets are more and more in agreement on the warming that’s underway (do you remember when the satellite data studied at the University of Alabama, Huntsville were a “big problem,” but the problem was much reduced when processing errors at UAH were corrected?). And our analyses of how GHGs are causing that warming are also improving. What more would you expect?
Isn’t this just the reverse of the switch from “bucket method” to “engine intake method” for shipboard measurements? Kind of funny it took several years before someone thought of looking for this with buoys.
Just wanted to take a moment to note that the journalism practiced in this blog seems far superior to most mainstream reporting on climate change and other scientific issues. This post is so much more than re-reporting of a published article. Although that’s where it starts, regular readers know that re-reporting scientific articles constantly confounds reporters. Even though (and perhaps because) Dr. Romm is an expert, to confirm the story’s findings and his understanding of them, he contacted the study’s author to get direct answers to his questions about the article. Then he quotes the author’s response verbatim. The information from the original report is clarified and advanced and additional context is provided. The colored graph ordering the years from warmest to coolest is the best visual representation of the trend I’ve seen. Anyone out there in the MSM paying attention?
Saw this first on Skeptical Science awhile back.
http://www.skepticalscience.com/A-new-twist-on-mid-century-cooling.html
although it was discussed in earlier papers.
http://www.ncdc.noaa.gov/oa/climate/research/sst/papers/SEA.temps08.pdf
Been trying to give deniers a heads up on this one for awhile, but it’s unlikely to prevent them from going berserk. Remember: all upward adjustments to temperature data are a sign they are falsifying data. All downward adjustments are proof that they were falsifying data to begin with.
http://forums.accuweather.com/index.php?s=54ed1eab2053ef9d4b04d80fd4591e6b&showtopic=20040&view=findpost&p=960436
One question though: Is the Hadley data really the only dataset to be affected by this correction, or is this an assumption because the article only discusses Hadley?
@5-6 … I guess the rhetorical nature of my question is quite apparent. It’s just unfortunate that our science is a field of uncertain, perpetually adjusted measurements, even when we can just walk out there with a thermometer in hand, or report on the high temperature for the day each night (which makes it differ from astrophysics and other scientifically existential pursuits). Is our field unique in this regard?
Perhaps the public is having difficulty resolving the dire conclusions when the science is admittedly still revising their data collection/adjustment methods, even if the end result doesn’t change (or get revised worse). Or, is it deniable that there may be a ‘smell-test’ involved that transcends higher order reasoning?
Are there other avenues of science with which people interact so directly on a daily basis (warm/cold/rain/snow), and where we attempt to communicate the simultaneous labels of ‘simple enough for a child to understand’ and ‘utterly complicated and evolving’?
Of course the deniers would be wrong. Statistics are easy to manipulate and I guess they tried to skew their numbers.
Joe – climate science is just like gravity: just a theory. If there were an industry whose profits depended on objects falling up, gravity would be a political issue.
http://climatehawk.wordpress.com/2010/12/02/climate-science-is-like-gravity/
I always tell people that meteorology and climatology are based on quite simple physical principles. What makes them confoundingly difficult to understand and explain is the enormous complexity of the interaction of these basic physical principles. And I think that there lies the crux of the problem, both in explaining what is going on to the general public and in fighting the deniers. The basic physics is simple enough that most people think they have an intuitive feel for what is going on, and so figure that they can tell when the “elite scientists” get it wrong. They fail to understand that weather and climate processes are all very complex and prone to subtleties which can sometimes be counter-intuitive. Also, they fail to appreciate that both meteorology and climatology are fairly recent disciplines, especially climatology, which have been cobbled together haphazardly over the last two centuries from other studies. Meteorology came into being as a real scientific discipline right after World War I, and climatology not really until the 1970s and 80s. So even though conventional wisdom says that we’ve had formal temperature measurements since the mid 1800s, in reality the whole temperature measurement enterprise has only been refined over the last 30 or 40 years, and we’re still at it. Sorry to say, I think this has a lot to do with the skepticism the general public has about anything that has to do with weather and climate. I see it every day when trying to explain what’s going on to my weather clients, or to others in casual conversations about weather and climate, or when I give talks on the subject. I still hear that the “weatherman” is always wrong, despite obvious evidence to the contrary. The cultural deck seems to be stacked against us! And climate hawk #12, I agree completely with you!!!
Does the SST data correction address Kevin Trenberth’s concerns about unidentified heat storage?
Note that this is a correction to a sea-surface temperature dataset: I think that GISS and NCDC may use the same sea-surface data, so we might expect to see all three datasets adjusted in the next couple of years, thereby showing more warming (with GISS still warming the fastest because of its Arctic coverage).
-M