Another week, another ado over nothing.
Last Saturday, Steve McIntyre wrote an email to NASA GISS pointing out that for some North American stations in the GISTEMP analysis, there was an odd jump in going from 1999 to 2000. On Monday, the people who work on the temperature analysis (not me) looked into it and found that this coincided with the switch between two sources of US temperature data. There had been a faulty assumption that these two sources matched, but that turned out not to be the case. There were in fact a number of small offsets (of both signs) between the same stations in the two different data sets. The obvious fix was to make an adjustment based on a period of overlap so that these offsets disappear.
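The overlap-based fix can be sketched in a few lines (a minimal illustration of the general idea only, not the actual GISTEMP code; the function name and the toy numbers are made up):

```python
def apply_overlap_offset(old, new, overlap_years):
    """Shift the 'old' record so it lines up with 'new' over the overlap.

    old, new: dicts mapping year -> annual anomaly (deg C).
    The offset is the mean difference over the overlap years
    present in both records.
    """
    diffs = [new[y] - old[y] for y in overlap_years if y in old and y in new]
    offset = sum(diffs) / len(diffs)
    return {y: t + offset for y, t in old.items()}

# Toy example: the two sources disagree by a constant 0.15 deg C.
old = {1998: 1.10, 1999: 0.80, 2000: 0.90}
new = {1999: 0.95, 2000: 1.05, 2001: 0.70}
adjusted = apply_overlap_offset(old, new, [1999, 2000])
# adjusted[1998] is now about 1.25: the pre-switch years are
# raised by the 0.15 deg C offset so the two sources join smoothly.
```

Once the records are put on a common baseline like this, the artificial jump at the source switch disappears, which is exactly what the Tuesday fix did.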
This was duly done by Tuesday: an email thanking McIntyre was sent, and the data analysis (which had been due in any case for the processing of the July numbers) was updated accordingly, along with an acknowledgment to McIntyre and an update to the description of the methodology.
The net effect of the change was to reduce mean US anomalies by about 0.15 ºC for the years 2000-2006. There were some very minor knock-on effects in earlier years due to the GISTEMP adjustments for rural vs. urban trends. In the global or hemispheric mean, the differences were imperceptible (since the US is only a small fraction of the global area).
There were however some very minor re-arrangements in the various rankings (see data [As it existed in Sep 2007]). Specifically, where 1998 (1.24 ºC anomaly compared to 1951-1980) had previously just beaten out 1934 (1.23 ºC) for the top US year, it now just misses: 1934 1.25ºC vs. 1998 1.23ºC. None of these differences are statistically significant. Indeed in the 2001 paper describing the GISTEMP methodology (which was prior to this particular error being introduced), it says:
The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the GISS analysis (Plate 6). This contrasts with the USHCN data, which has 1998 as the warmest year in the century. In both cases the difference between 1934 and 1998 mean temperatures is a few hundredths of a degree. The main reason that 1998 is relatively cooler in the GISS analysis is its larger adjustment for urban warming. In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.
More importantly for climate purposes, the longer term US averages have not changed rank. 2002-2006 (at 0.66 ºC) is still warmer than 1930-1934 (0.63 ºC – the largest value in the early part of the century) (though both are below 1998-2002 at 0.79 ºC). (The previous version – up to 2005 – can be seen here).
In the global mean, 2005 remains the warmest (as in the NCDC analysis). CRU has 1998 as the warmest year but there are differences in methodology, particularly concerning the Arctic (extrapolated in GISTEMP, not included in CRU) which is a big part of recent global warmth. No recent IPCC statements or conclusions are affected in the slightest.
Sum total of this change? A couple of hundredths of degrees in the US rankings and no change in anything that could be considered climatically important (specifically long term trends).
However, there is clearly a latent and deeply felt wish in some sectors for the whole problem of global warming to be reduced to a statistical quirk or a mistake. This led to some truly death-defying leaps to conclusions when this issue hit the blogosphere. One of the worst examples (but there are others) was the ‘Opinionator’ at the New York Times (oh dear). He managed to confuse the global means with the continental US numbers, made up a story about McIntyre having ‘always puzzled about some gaps’ (what?), declared that the error had ‘played havoc’ with the numbers, and quoted another blogger saying that the ‘astounding’ numbers had been ‘silently released’. None of these statements are true. Among other incorrect stories going around are that the mistake was due to a Y2K bug or that this had something to do with photographing weather stations. Again, simply false.
But hey, maybe the Arctic will get the memo.
Hank Roberts says
Anyone who reads comp.risks knows the problems.
http://catless.ncl.ac.uk/Risks/24.80.html
Timothy Chase says
Hank Roberts (#601) wrote:
True, but you could ask someone to forward you an email as well. If you habitually underestimate the intelligence of those you are dealing with, you might even forget to reformat it. In this respect, comparable to a certain profile…
John Mashey says
re: #600
All application areas that I’ve ever dealt with have their own tradeoffs between development and QA, i.e., how much money and schedule time will one spend on the latter. Research software is different from production software in its tradeoffs as well, in any area.
Medical software has no bugs?
Be serious.
[I used to work with builders of medical systems like CAT & MRI scanners. Those people were very careful, but…]
Timothy Chase says
Hank Roberts (#36) wrote:
You are right. I had misread the following interview. Fortunately I was able to find it again.
Peter Ward
The scientist on climate change, mass extinctions, and other crazy global-warming consequences
07-12-07
http://www.lacitybeat.com/article.php?id=5816&IssueNum=214
Second paragraph states:
1000 ppm – we certainly won’t reach that by 2050.
I have an interest in him in part because of my interest in the H2S scenario. When the paragraph puts that together with “big mass mortalities” around 2050, it makes it sound like this is the beginning of that scenario. But it can’t happen that early. You need roughly 1000 ppm, and even then it could take a while for that to become a factor. From what I understand, with the strong feedbacks from the carbon cycle under BAU, we should be at 720 to 1030 ppm by 2100.
Of course this was before we discovered that the rate of emissions had actually doubled since the 1990s, so where are we headed if this is the new BAU? The more forcing, the greater the feedback. But there he is probably thinking famine. However, it is argued that climate change is already a significant factor in some wars and famines.
But when I think of 2040, I am thinking that we really can’t do much to affect how bad things will be at that point. It’s already been decided, more or less. What we do now will determine where we go from there: does it get a little worse or a lot worse?
Anyway, now I will probably buy the book – as soon as I am able to afford it. In any case, I should be going to bed.
Kooiti Masuda says
Re. Ron Taylor (#511), Timothy Chase (#529) and others:
The retreat of many mountain glaciers in Asia is a serious issue with respect to water resources for tens of millions of people (many people, indeed). But I think it an exaggeration (perhaps inadvertent) that it is a serious issue for hundreds of millions of people.
IPCC AR4 WG1 SPM says
(under “Fresh water resources and their management” of
“C. Current knowledge about future impacts”)
| In the course of the century, water supplies stored in glaciers
| and snow cover are projected to decline, reducing water
| availability in regions supplied by meltwater from major mountain
| ranges, where more than one-sixth of the world population
| currently lives.
Note that they say “glaciers and snow cover”, not just glaciers.
Similarly, Stern Review says in boldface
(Section 3.2 “Water”, p. 76 of Cambridge U.P. edition)
| Melting glaciers and loss of mountain snow will increase flood
| risk during wet season and threaten dry-season water supplies
| to one-sixth of the world’s population (…).
Here also “mountain snow” is mentioned, though the text that follows
sometimes causes confusion (e.g. it says that 250 million people in western China depend on glacier meltwater).
A scientific review article cited by both IPCC and Stern is
T.P. Barnett, J.C. Adam and D.P. Lettenmaier, 2005:
Potential impacts of a warming climate on water availability in snow-dominated regions.
Nature, 438, 303 – 309.
As the title suggests, it mainly deals with the projected decline of snow cover rather than of glaciers, and it does say that “approximately one-sixth of the world’s population lives within this snowmelt-dominated, low-reservoir-storage domain”. I think that the phrase “one-sixth of world population” in IPCC and Stern reports should be interpreted in this context.
Another estimate that seems to have been popularized by the Stern Review is 500 million in South Asia (the Himalaya & Hindu-Kush region) and 250 million in China. I subjectively think that the Chinese part is reasonable if it is the number of people who depend on either glacier melt or snow melt (mostly the latter), but the South Asian part is still an exaggeration even with this interpretation. Maybe the text just says that 500 million people live in the Ganges river basin, whether or not they depend on meltwater. But if so, inserting such a piece of information in this context is very misleading. More likely, the fact that the headwaters of many large rivers such as the Ganges are in the Himalayas makes people (including the writers of review reports) feel as if all the water of these rivers comes from the Himalayas. Actually some does, but some does not.
Dan Hughes says
re: # 603
I did not say, “Medical software has no bugs.”
Kooiti Masuda says
Re: my comment (#605)
Excuse me, I made a trivial mistake.
> IPCC AR4 WG1 SPM says
It is IPCC AR4 WG2 SPM, not WG1.
Hank Roberts says
Peter Ward is, for this century, talking about the more immediate issues. Like:
“Global wheat stockpiles will slip to their lowest levels in 26 years ….
… Canadian officials said the country expected its harvest to be slashed by a fifth as a result of drought.
… Australia – the world’s third-largest wheat exporter and a key supplier to Asian regions and South America – has also warned harvests may be reduced by warmer-than-expected temperatures experienced in the spring.
Crops in the Black Sea area of Europe, however, have been ruined by bad weather
… Chinese production is expected to fall by 10% as a result of both flooding and droughts.”
http://news.bbc.co.uk/2/hi/business/6962211.stm
Hank Roberts says
Two thoughts:
1) A new way septics self-identify: continuing to claim GISS made a programming “Y2K” error.
2) News about how adding a new and more accurate set of instruments to an existing data set changes the information (slightly) — the ARGO ocean temperature/salinity data coming in.
http://www.agu.org/pubs/crossref/2007/2007GL030452.shtml
Ron Taylor says
Re 605 by Kooiti Masuda – Thank you for these words of caution. However, this recent article seems to indicate serious concern in China.
http://www.commondreams.org/headlines06/0507-05.htm
And this would indicate that the concern is shared in India.
http://www.commondreams.org/headlines06/0507-05.htm
Ron Taylor says
Sorry – the second link should have been
http://www.msnbc.msn.com/id/16313866/
Philippe Chantreau says
Interesting article today also on China’s dilemma. That country’s situation by itself is a quasi-experiment on the problem of the cost of prosperity.
Philippe Chantreau says
Sorry, forgot to mention the article is in the NYT.
http://www.nytimes.com/2007/08/26/world/asia/26china.html
Raplh Smythe says
In the time it’s taken to spend arguing about this, the code could have been consolidated and re-written 28.53852239181 times. Now almost 3 weeks later, we’re back where we started!
Steve McIntyre says
Can anyone here explain to me how Hansen’s algorithm for combining different data versions works – and thereby show me that his verbal descriptions are (a) accurate and (b) sufficient? The problem is described at http://www.climateaudit.org/?p=2018 and some preceding posts.
In the cases of Praha-Libus, Gassim and Joenssu – to pick three examples – there are two station versions for each station. In each case, during the period of overlap, the values in each version are identical. However, in each case one of the versions has one value missing. As a result, Hansen re-states the values for the other series in the above cases by -0.1, 0.1 and 0.2 deg C. If anyone can explain how these values are calculated based on published literature (or otherwise), I’d much appreciate it.
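[One guess, offered purely as a sketch of a mechanism consistent with the symptoms described, not as a claim about what the GISTEMP code actually does: if the offset between two versions is computed from each version’s mean over its *own* available months rather than over only the months common to both, then a single missing value shifts one of the means and produces a small nonzero offset even when every overlapping value is identical. The function name and numbers below are invented for illustration:

```python
def version_offset(full, gappy):
    """Offset between two station versions when each version's mean is
    taken over its own available months (not just the shared months).
    full, gappy: dicts mapping month -> temperature (deg C)."""
    mean_full = sum(full.values()) / len(full)
    mean_gappy = sum(gappy.values()) / len(gappy)
    # Rounded to 0.1 deg C, the precision of the reported adjustments.
    return round(mean_full - mean_gappy, 1)

# Two "versions" identical everywhere both report, but with one
# month missing from the second:
full  = {1: -5.0, 2: -3.0, 3: 2.0, 4: 8.0}   # mean = 0.5
gappy = {1: -5.0, 2: -3.0, 4: 8.0}           # mean = 0.0
# version_offset(full, gappy) -> 0.5, a nonzero shift despite
# the overlapping values being identical.
```

If instead the means were computed over the intersection of available months, the offset in such cases would be exactly zero.]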
Steve Bloom says
Re #615: Sorry, Lex, you’ve got the data and now it’s time for you to come up with your own numbers. If you get a substantially different result from the professionals, I’m sure there will be plenty of volunteers to go over your work product and figure out where you went wrong. Only after that would there be any value in doing the type of comparison you propose.
Michael Cassin says
I’m hosting a wiki with statistical analysis capability that should be useful. Here is an editable application that illustrates the concept:
GHCN_Duplicate_time_series_analysis. That page serves up the GHCN data for a station, highlighting records with any variance.
If anyone would point me in the right direction, I’ll try to implement formulae for combining multiple series and share it. Please feel free to try yourself by pressing “edit”, and to contact me if you’d like help.
Regards, Mike
jacob l says
so after about a month of debate I tried to create my own temperature estimate.
I used 20 stations and one month (January), and avoided missing data as much as possible; I filled in one year (1991) at one station. All are rural, zero light.
These are my results compared to the NCDC for the U.S. lower 48:
[edited…we’re not going to go down the slippery slope of allowing people to post long strings of data in the comments. feel free to create an external URL where the data are available, then just provide that external link in your comment]
David B. Benson says
I am simply reporting an advert on page A16 of today’s NYT: The advert states that “global warming is not a crisis” and asks one to call Al Gore to ask him to debate a Mr. Chris Horner. The supporting ‘evidence’ given includes “NASA says 1934 was a warmest year on record in the U.S., not 1998.”
The sponsor appears to be ‘The Heartland Institute’, which has a web site at
http://www.heartland.org
I thought you might care to know about this…
Rob Jacob says
For those still interested, Jim Hansen has made the source code for the temperature analysis available.
http://data.giss.nasa.gov/gistemp/sources/