Another week, another ado over nothing.
Last Saturday, Steve McIntyre wrote an email to NASA GISS pointing out that for some North American stations in the GISTEMP analysis, there was an odd jump in going from 1999 to 2000. On Monday, the people who work on the temperature analysis (not me) looked into it and found that this coincided with the switch between two sources of US temperature data. There had been a faulty assumption that these two sources matched, but that turned out not to be the case. There were in fact a number of small offsets (of both signs) between the same stations in the two different data sets. The obvious fix was to make an adjustment based on a period of overlap so that these offsets disappear.
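Schematically, the adjustment amounts to estimating the mean offset between the two sources over their period of overlap and shifting one series onto the other's baseline before splicing. A minimal sketch in Python, with made-up numbers standing in for the real station records (an illustration of the idea, not the GISTEMP code itself):

```python
import numpy as np

def merge_with_offset(old, new, overlap):
    """Splice two records of the same station, removing the constant
    offset between them as estimated over a period of overlap.

    old, new : dicts mapping year -> annual mean anomaly (°C)
    overlap  : years present in both records
    """
    # The mean difference over the overlap period estimates the offset.
    offset = np.mean([new[y] - old[y] for y in overlap])
    # Shift the old source onto the new source's baseline, then splice.
    merged = {y: t + offset for y, t in old.items()}
    merged.update(new)
    return merged

# Hypothetical example: the two sources disagree by a constant 0.15 °C.
old = {1997: 0.20, 1998: 0.90, 1999: 0.50}
new = {1998: 1.05, 1999: 0.65, 2000: 0.80}
print(merge_with_offset(old, new, overlap=[1998, 1999]))
```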
This was duly done by Tuesday: an email thanking McIntyre was sent, and the data analysis (which had been due in any case for the processing of the July numbers) was updated accordingly, along with an acknowledgment to McIntyre and an update to the methodology description.
The net effect of the change was to reduce mean US anomalies by about 0.15 °C for the years 2000-2006. There were some very minor knock-on effects in earlier years due to the GISTEMP adjustments for rural vs. urban trends. In the global or hemispheric mean, the differences were imperceptible (since the US is only a small fraction of the global area).
There were however some very minor re-arrangements in the various rankings (see data [as it existed in Sep 2007]). Specifically, where 1998 (1.24 °C anomaly compared to 1951-1980) had previously just beaten out 1934 (1.23 °C) for the top US year, it now just misses: 1934 at 1.25 °C vs. 1998 at 1.23 °C. None of these differences are statistically significant. Indeed, the 2001 paper describing the GISTEMP methodology (which predates the introduction of this particular error) says:
The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the GISS analysis (Plate 6). This contrasts with the USHCN data, which has 1998 as the warmest year in the century. In both cases the difference between 1934 and 1998 mean temperatures is a few hundredths of a degree. The main reason that 1998 is relatively cooler in the GISS analysis is its larger adjustment for urban warming. In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.
More importantly for climate purposes, the longer term US averages have not changed rank. 2002-2006 (at 0.66 °C) is still warmer than 1930-1934 (0.63 °C – the largest value in the early part of the century), though both are below 1998-2002 at 0.79 °C. (The previous version – up to 2005 – can be seen here.)
In the global mean, 2005 remains the warmest (as in the NCDC analysis). CRU has 1998 as the warmest year but there are differences in methodology, particularly concerning the Arctic (extrapolated in GISTEMP, not included in CRU) which is a big part of recent global warmth. No recent IPCC statements or conclusions are affected in the slightest.
Sum total of this change? A couple of hundredths of degrees in the US rankings and no change in anything that could be considered climatically important (specifically long term trends).
However, there is clearly a latent and deeply felt wish in some sectors for the whole problem of global warming to be reduced to a statistical quirk or a mistake. This led to some truly death-defying leaping to conclusions when this issue hit the blogosphere. One of the worst examples (but there are others) was the ‘Opinionator’ at the New York Times (oh dear). He managed to confuse the global means with the continental US numbers, he made up a story about McIntyre having ‘always puzzled about some gaps’ (what?), declared that the error had ‘played havoc’ with the numbers, and quoted another blogger saying that the ‘astounding’ numbers had been ‘silently released’. None of these statements are true. Among other incorrect stories going around are that the mistake was due to a Y2K bug or that this had something to do with photographing weather stations. Again, simply false.
But hey, maybe the Arctic will get the memo.
tamino says
Re: #143 (Lawrence Brown)
Sorry, but the area of the globe is 196 million (with an “m”) square miles, about 510 million (with an “m”) square km.
To err is indeed human.
John Mashey says
re: #144 F.C.H.
Well, I said “there are real issues about which reasonable, informed people can disagree.” [FCH at least has relevant expertise, which is not clear for some people expressing strong opinions.]
After all, “open source” is not new in computing, having started no later than the early 1950s with John von Neumann’s distribution of the IAS plans, and continuing exchange of code via user groups (like SHARE & DECUS) in the 1950s/1960s onward, the UNIX dispersion of the 1970s, the related Software Tools User Group of the 1980s, and lately Linux, Apache, etc, etc.
Technical users (especially) have long shared code, but thankfully it has indeed gotten a lot easier to share, given the Internet and WWW. Making tapes gets old [at Penn State, in the early 1970s, we made hundreds of tapes of code I’d written, and that actually cost money] … but it cost more to design professional-grade distributions, and if there hadn’t been some grant money to help, we wouldn’t have done it. This was for code we knew people actually wanted to *use* daily, not just maybe look at to see if they could find bugs.
The original UNIX distributions from Bell Labs said ~ “No warranty, no support, and don’t call us with bugs.” … but people knew perfectly well that we would in fact spend some time informally supporting it. Of course, we had monopoly money to play with…
Anyway, the issue is that in organizations in which researchers are supposed to generate research, there is a legitimate argument to be had about
– what level of source/data/test/documentation availability is appropriate,
– how much time researchers should spend responding to questions,
– what level of availability is required to publish in various journals,
– and the tradeoff between getting results out in a timely fashion, with normal peer review (and consequently, some bugs going unfound), versus doing excruciatingly-long testing, outside code reviews, etc, etc.
I don’t think there is any one right answer. I do think it is important for science to figure out answers, and if we raise the bar, we’d better figure out how to raise the funding, because it usually costs money. Sometimes it’s worth it, sometimes it’s not.
When I was at SGI, we went through a major exercise to figure out what we had to do to *usefully, professionally* make certain pieces of internal code open source [not just stick them up on an FTP site], i.e., including the support found to be necessary.
Many of us wanted to do this, but we found that it simply wasn’t free, so we did it (spending a fair amount of money) for some things (like XFS -> Linux), and we didn’t for others, like Showcase. We also spent money on things like supporting Samba [i.e., hiring Jeremy Allison for a while to do that].
I would be delighted if every scientific paper that used computers offered a professional-grade distribution … but I can’t make myself believe that would be cost-effective. I have many times been in the position of doing budgets for projects where there was a choice between just building/using software and doing the work to make it usefully accessible elsewhere, and I’ve opted for the latter as often as possible, but it *was not* free.
Others may legitimately have other opinions, although I’d hope people will say why they have relevant experience (as FCH did).
VirgilM says
Roger Pielke Sr. and 14 others published an article in the latest BAMS noting problems with USHCN adjustment procedures beyond TOBS. Is it really possible to correct for station moves…especially in areas of complex terrain? Is it really possible to correct for UHI when land use changes around the station have not been documented very well, if at all? One flaw in the GISS adjustment code has been discovered. How do we know if there are more flaws? Gavin says that their methodology has been well documented, but how do we know if that methodology was implemented correctly in the code? I understand that the U.S. has one of the best climate networks in the world (and better history information). What about Africa and Asia? It is clear to me that we don’t know what is UHI and what is a CO2 signal in our temperature records. That said, I know that the climate over the Northern Rockies (where I live) is changing. Our winters are milder and our snowpack hasn’t been doing well. It could be that all the junk China is putting in the air is affecting the weather patterns over the Pacific. Eventually that affects patterns over the Northern Rockies. Others say that this pattern change is CO2 induced. Neither side has yet proved its case. I know the GISS people are high on CO2 causation, but they lost credibility when they kept their code secret (mistakes and all) from those who want to check their work.
dhogaza says
This is how Stephen Mosher says “it should be done…”
So Stephen apparently thinks that it’s OK that the researchers didn’t open-source the Perl script, and that the researcher uncovering the error had to do a lot of work.
On the other hand, NASA is Evil! Evil! Evil! for not having open sourced their code, and Evil! Evil! Evil! for writing an impersonal and slightly ungrammatical e-mail that correctly described what happened.
DavidU says
#147
Total area, Canada: 9,984,670 square kilometers
Total area, US: 9,629,091 square kilometers
#144
One thing you need to keep in mind here is that the “programmers” in many scientific projects are not professional programmers, but rather just one of the climatologists/biologists/chemists… in the project who knows how to program, and who sometimes learnt it 20 years ago and is still using the same programming language as back then. People still write code in line-numbered Fortran 77!
These programs usually work just fine; they know how to write working code. But to someone like the two of us, who knows programming as a subject of its own, it often looks hair-raising.
Hiring a professional programmer to do the work instead is normally not even close to realistic within a typical project budget, so there really isn’t anyone around to fire for their hard-to-read code.
#142
I agree with your point too. I actually often write two separate codes for doing the same job, in different languages, and don’t trust them until they both give the same results.
steven mosher says
Timothy. RE 98.
Apparently the blog ate my reply (the same thing happened over at Watts’ place, so I blame my WiFi).
I’ll give a condensed version:
You wrote:
“As such, they are responding to sources of contamination which may not have been regarded as that important – prior to 1929. Not only does this have little to do with current trends, but it demonstrates a concern for accuracy. And as a matter of fact, the elimination of such contamination tends to reduce the apparent rise in temperatures over the twentieth century rather than magnify it.”
Actually, they don’t show contamination; they hypothesize it to explain the record. They took cooling out.
My main issue with that paragraph in Hansen 2001 is the lack of supporting data and analysis in the text or figures. If you like, I will detail all the missing parts. But look for yourself. Find the ANALYSIS in the text, not a description of the analysis. Simple example:
Which sites in the “region” were the 5 sites in question compared to?
Next:
“In an earlier thread you claimed to be interested in accuracy and were attacking urban sites for their presumed contamination by the urban heat island effect. We of course pointed that you get virtually the same warming trend whether you use all stations or just rural stations – which you didn’t even seem to acknowledge.”
Let me schematize the UHI argument for you using Peterson 2003, which Hansen cites (as submitted) and which Parker quotes.
1. UHI exists (I’ll link studies if you like), but see Peterson’s FIRST SENTENCE.
2. We expect to see differences between Rural and Urban stations.
3. These differences are NOT observed (Peterson, Parker, Hansen) in the climate network.
4. THEREFORE, Urban stations must be well sited in COOL PARKS.
From Parker:
Furthermore, Peterson (2003) found no statistically significant impact of urbanization in an analysis of 289 stations in 40 clusters in the contiguous United States, after the influences of elevation, latitude, time of observation, and instrumentation had been accounted for. One possible reason for this finding was that many “urban” observations are likely to be made in cool parks, to conform to standards for siting of stations.
From Peterson’s conclusion:
“Therefore, if a station is located within a park, it would be expected to report cooler temperatures than the industrial sections experience. But do the urban meteorological observing stations tend to be located in parks or gardens? The official National Weather Service guidelines for nonairport stations state that an observing shelter should be ‘no closer than four times the height of any obstruction (tree, fence, building, etc.)’ and ‘it should be at least 100 feet from any paved or concrete surface’ (Observing Systems Branch 1989). If a station meets these guidelines, or even if any attempt to come close to these guidelines was made, it is clear that a station would be far more likely to be located in a park cool island than an industrial hot spot.”
SO, you get the argument: we expect a difference, we find no difference, THEREFORE urban sites are in cool parks.
Simple Question: How do you test this last part?
You look. Go to Surfacestations. Look at Tucson (parking lot), Eureka (on a roof), Santa Rosa (on a roof), Paso Robles (on concrete next to a freeway), and Newport Beach (on a roof).
So now the argument looks like this.
1. UHI exists:
2. We expect to see differences between Rural and Urban stations.
3. These differences are NOT observed
4. Perhaps, Urban stations are well sited in COOL PARKS.
5. We haven’t found many, if any, Urban sites in a cool park.
6. Perhaps the Rural sites are corrupted at the MICROsite level: things not visible on nightlights, things like nearby asphalt, buildings, wind shelter. Non-compliant things.
So, how to test #6? Look. Take a look at Tahoe City and Happy Camp.
Essentially it’s the same logic as Hansen. He saw weird cooling, assumed the sites must have something wrong with them, and hypothesized contamination. We see similar weirdness (no difference between Urban and Rural) and hypothesize that rural sites have microsite issues.
Then we take the extra step of LOOKING. Peterson never checked his supposition of COOL PARKS. We did. After checking 289 sites we haven’t found an urban site in a cool park. In parking lots? Yes. On rooftops? Yes. On utility poles? Yes.
Now, I would not call the matter closed. It’s never closed. If you like, pick up a camera and go find a cool park site.
steven mosher says
RE 56: Long Post timothy.
I’ll hit a few key points:
“Sounds like Quine to me. Actually he’s not all bad – at least the bits he gets from Pierre Duhem.”
Really? Quine’s “Two Dogmas” was published in 1951, and Duhem’s work was not allowed to be published until 1954.
ZING!
Next:
“Now what about the claim that the world is at least five seconds old. I certainly think it is, but you may regard this as nothing more than a “theory” which is “underdetermined” by the “data,” where the data consists of the experience of “memories” and anything else which I might claim as evidence for an older world. But at this point we aren’t even talking about observation per se – we are talking about memory.”
Quine would say it’s a theory. A well supported theory. One that would be very complicated to give up.
NEXT:
“So does this mean that when I look at an iceberg floating off, it might actually not be floating away?”
No. There is a choice: a theory of human perception (one that explains perceptual illusions) or a theory of human delusion. The first is more useful than the second. Both are underdetermined.
“Does this mean that if I am looking at the temperature displayed by a thermometer I am holding is rising, that the temperature might not be rising?”
No. There is a choice, this time among three options, the last being “instruments accurately record physical quantities.” Again, Quine would say they have the same epistemic status. Some are more useful at things like prediction.
Next:
“If so, I would begin to wonder whether you are engaging in the philosophy of science rather than in some freshman philosophy bullsession. I would also have to wonder just how desperate the “global warming skeptics” have gotten that they find it necessary to appeal to this kind of reasoning.”
I think someone who didn’t even know that Quine published before Duhem needs to actually read Quine. And you will find that confirmational holism makes one more amenable to AGW acceptance.
NEXT:
“Falsification can always be avoided by appealing to other data (sea ice, SST, species migration, etc, etc, etc). This isn’t the way that I normally hear it. From what I understand, falsification can always be avoided by appealing to another hypothesis.”
Both ways, actually.
NEXT:
“But let’s focus on the phrase “appealing to other data.” ….. A hypothesis or theory which is justified by multiple lines of investigation is generally justified to a far greater degree than it would be if it were simply justified by any one line of investigation considered in isolation.”
Yes, just as Quine says.
NEXT:
“Now the vast majority of the scientific community has accepted the view that:
1. The earth is getting warmer;
2. greenhouse gases are largely responsible for this; and,
3. That what has been raising the level of greenhouse gases are human activities.
You on the other hand are still stuck on (1). Not dogmatically denying it, I understand, but simply doubting it with your healthy, “scientific” skepticism.”
Stuck on #1? Presently I am looking at #1. Have to start somewhere. Now, force me to decide, and I will say yes, the earth is probably getting warmer. That “probably” needs to be quantified and independently confirmed, and its magnitude estimated.
NEXT:
“So in the interest of science, let’s look at the evidence:
1. We have surface measurements in the United States which show an accelerating trend towards higher temperatures.”
Hmm. Which ten-year trend shows a higher rate: 1997-2006 (the last ten) or, say, 1927-1936?
Let’s start with that one. I’ll address the other 19 in due course, but first things first: the simple task of measuring air temps.
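The comparison itself is easy to run once the annual series is in hand. A minimal sketch; the `ustemp` values below are random stand-ins, not the real GISTEMP lower-48 series:

```python
import numpy as np

def decade_trend(years, series, start, end):
    """OLS slope over the years [start, end], in °C per decade."""
    mask = (years >= start) & (years <= end)
    return 10 * np.polyfit(years[mask], series[mask], 1)[0]

# Stand-in data: a weak linear trend plus noise.
years = np.arange(1895, 2007)
rng = np.random.default_rng(0)
ustemp = 0.005 * (years - 1895) + rng.normal(0, 0.3, years.size)

print("1927-1936:", round(decade_trend(years, ustemp, 1927, 1936), 3))
print("1997-2006:", round(decade_trend(years, ustemp, 1997, 2006), 3))
```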
Barton Paul Levenson says
[[If I low pass filter the US surface temperature time series and examine the derivative of the signal, I see no evidence for this accelerating trend.]]
Why didn’t you just do a linear regression like everybody else in the world? You’re essentially admitting that you had to distort the data to get the result you wanted.
rick says
Why is there an assumption by the deniers that the US temp. data is the best?
Is this true? I find that my regional NWS daily temp figures are often several degrees under what my home temp gauge is recording.
steven mosher says
RE 154.
You missed two points and invented a third.
1. The point was not the grammar. The point was attribution. Ruedy wrote a fine mail thanking SteveMc for his work. The example I cited showed proper attribution for finding an error.
Like so, for the slow: “I would like to thank Person X for finding…”
2. Nothing was said about NASA. I did not name the person who wrote the mail. And as far as the accuracy goes, you need to do some more reading.
3. Who said evil? You. If I had to use a word it would be obfuscating, or opaque, or lacking grace.
Some people like Ruedy, Dr. Curry, and Gavin are gracious. Hell, Gavin puts up with me.
Other folks are less gracious, myself included. Let’s leave it at that.
Timothy Chase says
Steven (#148) wrote:
I am not a member of the scientific community – simply a philosophy major turned programmer who is on the outside – and with little practice in statistics. As such, there are certainly people that I would defer to, particularly in this area. Tamino is one obvious case – as I am impressed with his objectivity and skill.
Going off memory at the moment, I remember that the later trend of perhaps the last fifteen years was higher than that of the last thirty, but I believe it was not statistically significant, and I had originally mentioned as much. There may be a valid approach which can demonstrate statistical significance. For example, one might perform an analysis in terms of the monthly averages and filter out the annual cyclical behavior, but I do not know the results of such an exercise. At present I would have to agree with you on this point and I will omit this statement in the future until I know otherwise.
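The usual way to filter out the annual cycle is to convert each month to an anomaly against that calendar month's long-term mean. A minimal sketch, with synthetic data in place of any real record:

```python
import numpy as np

def monthly_anomalies(temps):
    """temps: array of shape (n_years, 12) of monthly mean temperatures.
    Returns anomalies with each calendar month's climatology removed."""
    climatology = temps.mean(axis=0)   # twelve long-term monthly means
    return temps - climatology         # broadcasts across the years

# Synthetic record: a seasonal cycle on top of a slow warming trend.
t = np.arange(30 * 12)
temps = (15 + 10 * np.sin(2 * np.pi * t / 12) + 0.002 * t).reshape(30, 12)
anoms = monthly_anomalies(temps)
print(anoms[0, :3], anoms[-1, :3])  # cycle removed; the slow trend remains
```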
(One point: even if it is demonstrated at some point that there is such an “acceleration,” I would expect it to be temporary, with a new higher slope being established until such time as we are able to reduce the geometric increase in the rate at which we are emitting carbon dioxide into the atmosphere. We have actually done worse in the past seven years than the previous ten, from what I understand.)
Steven continues:
There are other obvious examples. What is happening in the Arctic is progressing much more rapidly than any of the models would predict – and I suspect that this will lead to more rapid climate change at a global level than models would predict. Alternatively, they are having some difficulty modeling the Indian Monsoon and a small western region of the Tibetan Plateau where glaciers are currently increasing in size as the result of increased snowfall.
The ability to observe the effect does not automatically imply the ability to model it – insofar as models have to be grounded in the actual physics, not the mere ability to identify trends through statistical analysis. I am sure there are others.
But the B scenario of Hansen’s 1988 projections performed admirably well in terms of global temperature – and it was primitive by today’s standards, for example, in the fact that it was based upon a single run. But overall, the models are doing quite well, for example, in terms of the modeling of ocean circulation as of roughly 2001. The new GISS model is performing far better at modeling clouds, although there continue to be surprises in this area, for example, the recently discovered “twilight zone,” where an invisible region of higher water vapor extends for several kilometers beyond the visible edges of clouds. Likewise, the modeling of both aerosols and the carbon cycle is at a fairly early stage of development.
But I strongly suspect that Hadley’s new approach, involving initialization with real world data including natural variability, will become the norm – which should improve the modeling of climate at a variety of levels. Improved resolution will likewise improve model performance. We were able to model hurricane formation only with the creation of the NEC Earth Simulator – but then in 2003 it projected increased cyclone formation in the South Atlantic, and Catarina formed in late March of 2004.
*
In any case, I genuinely appreciate the correction regarding temperature trends in the United States and will keep this in mind from now on.
Timothy Chase says
Steven Mosher,
I will have to get back to your most recent posts, 156 and 157, a little later. Currently I am about to go off to work. I hope you will understand. But if someone else would like to respond in the interim, I most certainly wouldn’t have a problem with that, although I would likely still respond personally later.
Walt Bennett says
Re: #13, #18, #21, #22, #27, #30, #33, #41, #74, #79, #135, #145, #156, #157, #160
Steven,
Having read each of your posts, in which you use some fancy, uncommon words and refer to theories which I have not seen expressed before (which of course means nothing more than this: you have some education that I don’t have), I am left with one question: How, applying the theories you espouse, can you ever come to believe anything?
That is a serious question, and I guess we can approach it from the other side as well: What would it take for you to accept that this long list of unusual climate observations are related, and that they describe a coherent theory which is a) highly plausible and b) much more plausible than any other theory which has been applied to the same set of observations?
As Gavin said, in so many words: how do you explain these observations without a human component?
Further: academic exercises are all well and good (and I mean exactly that: they are essential and useful); but at what point do we put down the textbooks and notebooks and start trying to enact policies based on our observations?
Dodo says
Gavin started this topic with the sentence: “Another week, another ado over nothing.” How fitting! Now, 162 comments and 20 blog entries later, one has to wonder at most commentators’ ability to get worked up about something that is supposed to be nothing. But I suppose their zeal will not be diminished until the last denialist, skeptic, heretic or dissident has been silenced. This “ado” seems to be part of a recurring pattern in human behaviour.
Alex Tolley says
Freeman Dyson has written a nice article in “Edge” expressing a heresy that GW climate models may be wrong. I think he makes too much of climate models and ignores other evidence, but there is no denying he is still a very smart guy.
http://www.edge.org/3rd_culture/dysonf07/dysonf07_index.html
Any comments on his thoughts?
[Response: Michael Tobis has a good commentary. – gavin]
dhogaza says
My understanding is that McIntyre pointed the finger at something that was wrong, but that the NASA group actually figured out what it was and did the corrections.
So the scenario’s a bit different than in the letter you cited, where the woman in question did a huge amount of work and figured out in detail what was wrong.
And attribution has nothing to do with correctness, anyway.
And my point about the source code and work involved stands:
Note that she computed the phylogenetic trees by hand, not using the script, and deduced that there must be an error in the script.
Publishing algorithms, not code, is SOP in many fields, climate science is not exceptional in this regard.
tamino says
Re: #158 (BPL)
I disagree. A low-pass filter is a standard, and rather reliable, method of removing fast fluctuations from data, revealing the slower “trend”-like signal. I certainly wouldn’t characterize it as an attempt to “distort the data.”
Unfortunately I don’t have monthly data for the lower-48 U.S. states, just annual averages. But looking at that data, I too don’t see any statistically significant acceleration in recent years (since 1975) in lower-48 U.S. temperatures.
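For concreteness, here is a minimal sketch contrasting the two approaches on a synthetic series. The 11-year moving average is just one simple low-pass choice, not necessarily the filter the original commenter used:

```python
import numpy as np

years = np.arange(1895, 2007)
rng = np.random.default_rng(1)
temps = 0.006 * (years - 1895) + rng.normal(0, 0.25, years.size)  # stand-in data

# Linear regression: a single number, the overall trend.
ols_trend = 10 * np.polyfit(years, temps, 1)[0]  # °C per decade

# Low-pass filter (11-year centered moving average): a smooth curve
# whose local slope can be inspected for speed-ups or slow-downs.
smooth = np.convolve(temps, np.ones(11) / 11, mode="valid")
local_slopes = 10 * np.gradient(smooth)  # °C per decade at each point

print(f"OLS trend: {ols_trend:.3f} °C/decade")
print(f"filtered local slopes: {local_slopes.min():.3f} to {local_slopes.max():.3f}")
```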
captdallas2 says
I don’t want to start a tempest in a teapot, but what is the deal with Hansen’s email?
Lynn Vincentnathan says
If the lower 48 US states have not been significantly warming (is that what people here are saying), then there must be other places that ARE significantly warming, if the global warming idea is accurate. Isn’t that the whole idea behind “the global average temperature” increasing? Average means that some places might stay the same & some might even get colder, but on average the whole data set for the entire world shows a warming trend. (And it might be that the places currently staying the same or getting colder might eventually show a warming trend, if the problem continues.)
And if there’s lots of quibbling about how correct the data are, can we also use some other measures of temperature change, such as ice melting? I think the net melting in the Arctic, and the melting in the Old and New Worlds that has allowed some archaeological finds from thousands of years ago, make a good case that the world is warming. Not to mention the warming of the oceans.
Then there is a good theory to go along with this – the greenhouse effect – which not only explains the natural warmer-than-expected temps on earth, but also the colder temps on Mars, and the much warmer temps on Venus.
It seems to me it doesn’t take a rocket scientist or fancy statistician to come to grips with the fact that the world “ON AVERAGE” is warming. I assume that smart people like Steve McIntyre aren’t really suggesting there is no global warming – an idea that would go against this other evidence and a well-established theory.
I think it’s wonderful that scientists are able to tell us what is happening, so that hopefully we can solve this problem before it gets really bad.
Magnus Andersson says
gavin: In the response to the second comment you refer to “only the result”. But this result is a result after adjustments due to a factor which in “climate science” cools the earth today and therefore must be compensated for, but exactly what is done isn’t published. It’s also a result after other unpublished steps.
E.g., Hansen showed in 1999 that the warmest year on the US record was 1934, but two years later he had a very different temperature record, with higher temperatures in the last decades and lower ones earlier in the 20th century. The plotted data from both charts are here:
http://www.coyoteblog.com/coyote_blog/2007/08/a-temperature-a.html
The problem is we only get data after the climate scientists have secretly adjusted them. Also, we don’t know which non-rural stations are not used due to rules described on the NASA GISS data page: rejection of stations not within the long term trend. We don’t have raw station data or algorithms for selection or adjustment.
[edit]
[Response: Presumably that is a new definition for the word secret? As in ‘secretly’ publishing all the adjustments and consequences in the open peer reviewed literature? Please read the references: Hansen et al 2001. – gavin]
Dan says
re: 9. “When “deniers” make a claim (like yours), it’s based on the lack of serious research or considerable effort, and if an error is pointed out, it’s excused away or triggers a scrambling attempt to change the subject.”
Indeed. Or running away entirely, unable to admit being in error, intentionally or not. In fact, we see it here by various anti-science layman skeptics/deniers who throw out any possible, usually unpublished comment (e.g. the surfacestations.org drivel) about GW, desperately trying to make it stick. And who keep repeating it as if repetition somehow magically makes it true, and to heck with the literally thousands of climate researchers who publish in peer-reviewed journals. Or who make absurd assertions without any data or science to support their statements, yet when confronted still do not make an apparent effort to learn why they were wrong.
tamino says
But they have been significantly warming; no one in his right mind disputes that. The discussion has been whether the warming has been accelerating, and to my mind, over the last 30 years or so it hasn’t – the warming has been steady rather than getting faster.
The rate of warming seen in surface temperature data (since the correction to USHCN data and update of NASA GISS) since 1979 (the beginning of satellite temperature data) matches almost exactly the rate seen in satellite data for the lower-48 states of the U.S. This is illustrated here.
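One standard way to test for acceleration is to add a quadratic term to the regression and ask whether its coefficient is distinguishable from zero. A minimal sketch, again with synthetic stand-in data rather than the actual lower-48 series:

```python
import numpy as np

def acceleration_test(years, temps):
    """Fit T = a + b*t + c*t^2; return c and its standard error.
    A |c| much larger than ~2 standard errors suggests acceleration."""
    t = years - years.mean()              # center to reduce collinearity
    X = np.column_stack([np.ones_like(t), t, t * t])
    beta, res, *_ = np.linalg.lstsq(X, temps, rcond=None)
    sigma2 = res[0] / (len(temps) - 3)    # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[2], np.sqrt(cov[2, 2])

# Stand-in data: a steady linear trend plus noise, so c should be ~0.
years = np.arange(1975, 2007).astype(float)
rng = np.random.default_rng(2)
temps = 0.02 * (years - 1975) + rng.normal(0, 0.2, years.size)
c, se = acceleration_test(years, temps)
print(f"quadratic term: {c:.5f} +/- {se:.5f}")
```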
Aaron Lewis says
Re 121
I am so glad to hear that the DUSTBOWL will not form until 2080.
The drought monitor (http://www.drought.unl.edu/dm/monitor.html) and the state of the Sierra snowpack had me worried.
On the other hand, just as the Arctic sea ice is melting faster than the models predicted, perhaps North America may be drying out faster than the models predicted?
I do not think that one dry year makes a drought or a trend, but any noticeable change in the heat distribution of the Northern Hemisphere should put us on alert for follow-on effects.
Prudent men should consider and prepare.
DavidU says
#169
As I mentioned in an earlier post, I am in Sweden right now, and here they are noticing the warming in ways that don’t require any instruments.
E.g., the news here has recently reported that wild oak is now spreading in an area about 200 miles north of where oak could be found 15 years ago. There have been oaks in that area before – a few thousand years ago.
This is near a town called Örnsköldsvik (you can find it in Google Earth), about 63 degrees north, which is quite far north even considering that they are warmed by the Gulf Stream here.
steven mosher says
RE 166.
You missed the points again. I’ll make it simple for you.
Some have claimed the algorithms are documented in the text.
1. They are not. They are generically described.
2. IF THEY WERE documented in the text, the issue remains:
3. Does the code implement the algorithm as designed?
[edit]
[Response: ‘Algorithms’ are just descriptions in various flavours of formality. If there is a question of whether the code correctly implements the algorithm, it will be apparent if someone undertakes to encode the same algorithm and yet comes up with a different answer. Absent that, there is no question mark over the code. So if I generically describe a step as ‘we then add A and B to get C’, there is only a point in checking the code if you independently add A and B and get D. That kind of replication is necessary and welcome. With that, all parties will benefit. Simple demands to see all and every piece of code involved in an analysis, presumably in the hope that you’ll spot the line where it says ‘Fix data in line with our political preference’, are too expansive and unfocused to be useful science. – gavin]
James says
Re #169: [I think it’s wonderful that scientists are able to tell us what is happening…]
What’s even more wonderful is not that climate science can tell us what is happening, but that it told us, years ago, that it would happen. Isn’t that what science is supposed to be about: not just explanation, but prediction?
I’m going to harp some more, but there’s a quote in #157 that’s illustrative of the backwards viewpoint some people have taken:
“Now the vast majority of the scientific community has accepted the view that:
1. The earth is getting warmer;
2. greenhouse gases are largely responsible for this; and,
3. That what has been raising the level of greenhouse gases are human activities.”
A more accurate statement would be something like
The vast majority of the scientific community has accepted the view that:
1) The amount of CO2 in the atmosphere has risen considerably during the industrial period, and many lines of evidence show that the increase is human-caused.
2) Theory, well supported by experiment, says that increased atmospheric CO2 will trap more IR, leading to warmer temperatures.
3) There are many different lines of evidence that show that the Earth has in fact warmed by an amount consistent with theory.
This makes all the fuss over a minor adjustment to a few points in one data set (of many) supporting one line of evidence (again, of many) seem rather out of proportion, doesn’t it?
Mario says
I have this doubt:
If it is man-made CO2 that pushes global warming, naively I would expect the US to be among the world regions in which the warming is MORE severely felt.
But now, after the NASA revision, it seems that the US is among the places LESS affected by long term warming…
Is there something to ponder upon, or is everything perfectly clear?
Michael says
Gavin, could you address your fallibility as a scientist? Could you be wrong? Have you ever been wrong? How much do you really know? I know you trust in your models and conclusions, but I have learned never to trust anyone claiming to have the final word (on anything). I personally think your position on global warming is more rational than the deniers’ claims, but I would be a fool to put any money on your predictions (if I were a gambling man). Also, could you please address your fallibility without using “if we wait any longer it will be too late” or the word ‘consensus’? Thanks much.
[Response: Moi, infallible? Hardly. I’ve made my fair share of coding errors and the like. But the whole point is that you shouldn’t pay any particular attention to me or any individual scientist – we all might have prejudices and biases that might colour what we say. However, assessment bodies like the IPCC and National Academies are much more impartial, and since they go through very rigorous peer review, the chances that one person’s biases end up in the final product are very small. So, don’t listen to me. Read the IPCC report instead. – gavin]
tamino says
Re: #177 (Mario)
I think your primary misconception is that CO2 emissions tend to have a stronger impact on the region in which they’re emitted. The truth is that CO2 is a “well-mixed” gas which stays in the atmosphere a long time, so it quickly (on a timescale of about 7 or 8 months) distributes itself around the globe. Hence CO2 emissions tend to affect the entire globe, rather than preferentially affecting the region in which they’re emitted.
Sulfate aerosols, on the other hand, *do* tend to preferentially impact the region in which they’re emitted, because they have such a short atmospheric lifetime. These have a cooling effect, and historically have been very strong in the U.S. This may partially explain why the U.S. has shown less warming over the century than the globe as a whole.
Finally, during the “modern global warming era” — 1975 to the present — as far as I can tell the U.S. has warmed just as fast as the rest of the world.
Timothy Chase says
tamino (#167) wrote:
I wouldn’t be at all surprised if statistics can show this in the case of global temperatures. Probably it already has. But I actually doubt that an analysis in terms of the monthly data would turn up anything. My main point was simply that using the appropriate methods one may be able to uncover signals that superficially look like they would be drowned out by the noise.
Anyway, my apologies.
I really should have found a better way of making the same point. I will be more careful in the future.
Timothy Chase says
Aaron Lewis (#173) wrote:
Actually spelling it as two words seems to be more common. But one word spelled that way would seem to be a definite improvement over how I have spelled it in the past.
The more information and the more resources the better.
My central point in bringing it up in the first place is that most Americans still seem to be under the impression that climate change will have relatively little impact upon the United States – but this isn’t what the models say come the 2080s. Personally, I think that what may happen in Asia is a great deal more important, at least in terms of the numbers and global effect.
However, the United States has a disproportionate effect upon global policy, and this is largely a function of conservative attitudes. Despite the recent urbanization of conservatism, I suspect that pointing out the consequences of climate change in relation to US agriculture may be one of the more effective arguments, at least in the states.
I agree. And things may move more quickly as a result of the meltdown in the Arctic – and this is something that I may mention in passing. However, despite my earlier mistake regarding temperature trends, I think it is important to claim no more than what the weight of the evidence (either in terms of modeling or obvious trends) can bear. Otherwise it is all too easy for those on the other side to label us alarmists.
There are things which I worry about more than the drought in either the United States or Asia. I suspect that is obvious. But I won’t make it the centerpiece in any attempt to convince others that we need to do something about climate change.
But that is my own personal approach. I can certainly see others reasonably taking a different view.
Nick Gotts says
Re #144 I have to say I agree with FCH that source code should be made available in science generally. Although I take the point that doing any more than just putting it online somewhere is going to take resources, I don’t think that is a sound argument against doing at least that much. Certainly in my own field (agent-based modelling), I’m among those trying to establish a norm that code, along with parameter settings and metadata concerning the hardware and software environment in which model runs were undertaken, should be put in the public domain.
J.S. McIntyre says
re 169. Lynn V. wrote: “If the lower 48 US states have not been significantly warming (is that what people here are saying), then there must be other places that ARE significantly warming, if the global warming idea is accurate. Isn’t that the whole idea behind ‘the global average temperature’ increasing?”
Interesting developments in Crete.
http://www.npr.org/templates/story/story.php?storyId=12707473&ft=1&f=1004
Not that GW is necessarily the only culprit … but poor sustainability choices combined with apparent warming seem to have their drawbacks…
Magnus Andersson says
#170: Yes, gavin. These documents we have, but I’m a bit disappointed if that’s all you have. I’ve just started to go through them as well as possible, but of course there is no raw data or program code here, and, most important, I don’t think there are detailed algorithms. A lot of it seems to be a defence of methodology. Before I ask more specific questions I will, however, search for all answers on the site and elsewhere.
Now, the question of why the US Temperature Anomaly 1880-1998 data from 1999 are so different from those of 2001 wasn’t answered. I should not have to find out for myself which adjustments differ and the impact of each. I strongly doubt I can solve this from these (more heuristically rhetorical than mathematical?) documents that you gave me a link to.
Again: Why is the US Temperature Anomaly 1880-1998 data in figure 6, page 37 of “GISS analysis of surface temperature change” (1999) so very different from the US Temperature Anomaly 1880-1999 data on page 22 of “A closer look at United States and global surface temperature change” (2001)?
These documents:
http://pubs.giss.nasa.gov/abstracts/1999/Hansen_etal.html
http://pubs.giss.nasa.gov/abstracts/2001/Hansen_etal.html
Just a general, not a detailed, answer why. Thanks!
[Response: The answers are in the papers! Basically, it is because of the corrections due to Time of Observation biases and calibrations for known station moves and the like. – gavin]
John Mashey says
# 178 Michael
“but I would be a fool to put any money on your predictions (if I were a gambling man).”
So, would you put money against GISS predictions? If you think you’d be a fool to put $$ on their predictions, surely it would be very unfoolish to put money against them? Do you think a global 5-year average around 2020 will be the same as, or colder than, one around 2005? I’m still hunting for someone to propose that side of a bet over on http://www.longbets.org…
Dave Blair says
Gavin, you make peer review sound flawless (re: response in #178).
http://www.eurekalert.org/pub_releases/2004-08/cu-bse081204.php
[Response: Interesting study. But still, I have never described peer review as flawless. When done properly and thoroughly (as it was for the IPCC reports) it is very useful at improving the quality of a document. When done badly… well, let’s just say that it isn’t always done well. It’s just one step in improving the credibility of a statement. Necessary, but not sufficient! – gavin]
Mario says
Re: #179 Tamino
Thanks for the explanation, but your answer induces another “naive” doubt: if CO2 is global in its warming effects, while “short lifetime” sulfate aerosols are much more local, then for the 1940-1975 cooling trend I would also expect a neat divergence between, say, the northern and southern hemispheres of the globe.
Now, for example, in
http://iabp.apl.washington.edu/Papers/JonesEtal99-SAT150.pdf
(see figure 4, p. 10)
I can’t see that much of a difference…
DaveS says
Then just host a tarball–or give public read access to the SVN/CVS repo–and go about your business, doing the “useful science”. Is there any reasonable argument for withholding any of the relevant source code? How is that “expansive”?
[Response: Because working codes are generally a mixture of scripts, programs, utilities, dead ends, previous methodologies and unused options, and not ‘nice’ web applets that anyone can run. Pulling together everything needed for the analysis so that it can be run elsewhere ‘out of the box’ is a significant undertaking. We’ve done that for climate models because that flexibility is required due to the number of users. For programs that just one person runs, it’s much more difficult and time consuming in general. Before going to all that bother, demonstrate that there indeed is some ambiguity in the published descriptions. Without that, this conversation is just posturing. – gavin]
DaveS says
No, but if the paper says C=f(g(h(A + g(h(B))))), it would be “nice” to have f(), g(), and h() fully specified and reproducible, with full access to the datasets containing A and B.
[Response: Read the paper and you will see that the GISS methodology for the urban adjustments are not that complex and all the raw data is already available. – gavin]
David Price says
I saw the site on temperature anomalies. I wonder, is there one on rainfall anomalies?
Aaron Lewis says
Re 181
Drought in California will have major impact on 30 million people, from lettuce farmers that flood their fields with Sierra snow melt to computer programmers that want a morning cup of coffee. A drought like the one that started in 1280 would be worse for California than any foreseeable earthquake, volcano, or plague.
I would rather risk being labeled an alarmist by speaking up than have people perish as a result of my not raising the issue soon enough, or not making my warning sufficiently urgent. So at what point do you raise the alarm? 2% probability? 50%? Do you wait until you are certain? By then, the danger may be so close that it is impossible to avoid. When does the philosopher speak? I think that a lot of people are going to come to grief because respectable scientists do not raise the alarm in time.
I think that the changes in the Arctic are evidence of changes in Northern Hemisphere sea surface temperatures and atmospheric circulation patterns. I think the sea surface temperatures in the Eastern Pacific are similar to those that I saw last year at this time, and I feel that those sea surface conditions contributed to California’s lack of precipitation last winter. Last year, I thought it was going to give us a wet year, and instead the storm track moved north and we were dry. I think the same thing is going to happen this year. What is my confidence level? Maybe 33% – not high, but given the stakes, it is a heck of a bet. I would not give it a second thought if the Arctic ice were not melting, and my fruit trees were not blooming a day earlier every year.
What happens if I shout DROUGHT too loudly? It is not like shouting fire in a crowded theater.
The worst that could happen is a few farmers put in drip irrigation systems and a few home owners convert their lawns to sedums. It is not like everybody is going to run for Oregon. Heck, half of the households in our little cul-de-sac do not believe in global warming anyway.
DaveS says
Here is the Hansen et al. paper on the urban adjustment: “The urban adjustment, based on the long-term trends at neighboring stations, introduces a regional smoothing of the analyzed temperature field. To limit the degree of this smoothing, the present GISS analysis first attempts to define the adjustment based on rural stations located within 500 km of the station. Only if these stations are insufficient to define a long-term trend are stations at greater distances employed.”
I have some questions:
— How does the analysis “attempt to define the adjustment based on rural stations”? What is the methodology?
— How are the local rural stations chosen?
— What is the test for insufficiency?
— In the event that local stations are deemed insufficient, how do you choose the longer range ones?
— Why 500km? Was that arbitrarily chosen?
— What is the methodology for making the adjustment for long range ones? How does it differ?
Also from the paper, regarding station history adjustments: “One of the best opportunities to make useful station history adjustments is in the United States. The USHCN stations were selected as locations with nearly complete temperature records for the 20th century, but also because they have reasonably good station history records that permit adjustments for discontinuities.”
— Um… this doesn’t describe the adjustment… at all.
Now, maybe I’m missing something, but I don’t see any papers referenced here which could answer any of these questions. Perhaps that is why some people are characterizing it as “secret”.
I’ll be the first to admit that I have NO IDEA whether there are or are not published papers out there that answer these, but they aren’t cited in Hansen et al 2001. With no mention of methodology for such an important alteration of the raw data, how can this possibly be called “peer-reviewed”? I really don’t mean to sound snarky…. I’m sure I’m just missing something.
[Response: Umm… try reading past the abstract – all the full papers are freely available online. – gavin]
DaveS says
I quoted from sections 4.1.2 (Station History Adjustment) and 4.2.2 (Urban Adjustment), the relevant sections on the adjustments. I honestly quoted the most descriptive excerpts from the two sections… a full reading does not convey any more information toward answering my questions than do those excerpts. (You HAD to know that, so why such a dismissive response?)
All of my questions still stand.
[Response: Apologies if I jumped to conclusions, but I’m reading the same paper as you and the answer to your first question is clear: the rural stations are determined by the night light index (Section 3 and Plate 1). For the others, ‘insufficient’ refers to whether there are any rural stations within the radius (with enough values to define a clear trend). Of course, 500km was somewhat arbitrarily chosen – you want to minimise the size of the circle of influence while still maintaining enough stations for the method to work. The station adjustments are simply taken from USHCN itself – there is nothing extra added for that (and so no description required). If you think these choices matter, do it yourself with different numbers and see what happens. I guarantee that trying to do it for yourself will give you a much greater insight into the problems and potential solutions than simply looking at badly written fortran…. – gavin]
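For anyone taking up the “do it yourself with different numbers” suggestion, the neighbor-selection step is easy to prototype. A minimal sketch using great-circle distance; the 500 km radius is the paper’s stated value, while the station list and the boolean rural flag (standing in for the night-light classification) are hypothetical:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def rural_neighbors(urban, stations, radius_km=500.0):
    """Return the rural stations within radius_km of an urban station."""
    return [s for s in stations
            if s["rural"] and haversine_km(urban["lat"], urban["lon"],
                                           s["lat"], s["lon"]) <= radius_km]

# Hypothetical stations around a hypothetical urban site.
urban = {"lat": 40.0, "lon": -105.0}
stations = [{"name": "A", "lat": 41.0, "lon": -104.0, "rural": True},
            {"name": "B", "lat": 48.0, "lon": -95.0,  "rural": True},
            {"name": "C", "lat": 40.1, "lon": -105.1, "rural": False}]
print([s["name"] for s in rural_neighbors(urban, stations)])  # ['A']
```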
Steven says
#161
No worries.
Now, moving on to global temperature anomalies, does anyone think there is an accelerating trend in the global data? Does anyone have some analysis of the time series to support this?
regards, Steve
John Norris says
Re Gavin’s response in 175:
“If there is a question of whether the code correctly implements the algorithm it will be apparent if someone undertakes to encode the same algorithm and yet comes up with a different answer.”
Gavin,
Geez. Just support the release of the code and scripts. The only thing worse than there being more errors would be there being more errors that no one caught. By supporting keeping it difficult to find errors, you are diminishing your value as a scientist.
[Response: You completely misread my point. I am certainly all for as much openness as possible – witness the open access for the climate models I run. But the calls for the ‘secret data’ to be released in this case are simple attempts to reframe a scientific non-issue as a freedom of information one (which, as is obvious, has much more resonance). My points here have been to demonstrate that a) there are no ‘secret’ data or adjustments, and b) that there is no reason to think there are problems in the GISTEMP analysis. The fact that no-one has attempted to replicate the analysis from the published descriptions is strong testimony that the calls for greater access are simply rhetorical tricks that are pulled out whenever the initial point has been shown to be spurious. – gavin]
DaveS says
First of all, Gavin, I wanted to say that it’s cool of you to come in here and actually interact with us and answer these types of questions. Regardless of whether we agree with you on a particular point (or are even qualified to agree or disagree), you deserve credit for that. It’s rare.
Onto my response…
Well, the paper says that the “analysis” is “based on” the unlit stations. The use of such otherwise superfluous language suggests that there is something more going on than a simply-described, routine adjustment.
And regarding the insufficiency, should it not be made clear what constitutes a “clear trend”? The results could NEVER be reproduced without that information, which leads me back to my last question: how was this “peer-reviewed”? Are there publicly-available peer review documents?
Also from the paper: “The hinge date [for the urban adjustment] is now also chosen to minimize the difference between the adjusted urban record and the mean of its neighbors.”
What makes anyone think that this is correct? Is there some research that lends credibility to this approach? There is nothing cited for that either. It sounds like this method would arbitrarily minimize UHI when adjusting the data… rather, it sounds like the hinge point is chosen in a way that guarantees the highest possible trend. Maybe I’m reading that wrong though.
[Response: I think you have read it wrong. They don’t want the urban trends to have an influence at all, and so they look at the nearby rural trends. The urban long term change is then adjusted (using a two-piece linear fit to the mean rural station trend) so that it matches the rural trend. They use a two-piece linear fit (with an arbitrary hinge) because you don’t want to arbitrarily fix a particular date at which the urban trends become important (they were doing that prior to the 2001 paper). You could do better, I think, for instance by fitting a low order spline to the rural numbers, but I’m not sure that the improvement in fit would justify the work. The main point is not that the Hansen et al method is ‘right’, because there is no ‘right’ answer. There are just different approaches. Eli Rabett had a good example of how this works – and his ‘reverse engineering’ revealed that a) the GISS adjustment is a simple two-piece linear trend and b) the urban trends after adjustment are the same as the rural trends. That’s prima facie evidence that the code does what it claims to do. – gavin]
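To make the two-piece fit concrete, here is a minimal sketch of a broken-stick fit in which the hinge year is chosen to minimize the residual against the rural mean series. It illustrates the idea described in the response above; it is not the actual GISTEMP code:

```python
import numpy as np

def two_piece_fit(years, rural_mean):
    """Fit a continuous two-segment linear trend, scanning candidate
    hinge years and keeping the one with the smallest residual."""
    best = None
    for hinge in years[5:-5]:               # keep a few points per segment
        # Basis: intercept, slope, and extra slope after the hinge
        # (the max(0, t - hinge) term keeps the fit continuous).
        after = np.clip(years - hinge, 0, None)
        X = np.column_stack([np.ones_like(years), years, after])
        beta, res, *_ = np.linalg.lstsq(X, rural_mean, rcond=None)
        rss = res[0] if res.size else np.sum((rural_mean - X @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, hinge, beta)
    return best[1], best[2]  # hinge year and [intercept, slope, extra slope]

# Stand-in rural mean series with a slope change around 1965.
years = np.arange(1900, 2007).astype(float)
rng = np.random.default_rng(3)
rural = (0.002 * (years - 1900) + 0.02 * np.clip(years - 1965, 0, None)
         + rng.normal(0, 0.1, years.size))
hinge, coeffs = two_piece_fit(years, rural)
print(hinge, coeffs)
```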
tamino says
Re: #187 (Mario)
I agree that from the graphs in the paper you reference “I can’t see much of a difference.” So, I took the NASA GISS data for the northern and southern hemispheres, and computed the difference (northern – southern) from 1880 to the present. You can view the graph here.
As you can see, the northern hemisphere did indeed cool relative to the southern hemisphere from about 1945 to about 1975.
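The computation is short once the hemispheric series are in hand. A minimal sketch, with random stand-ins where the parsed GISS annual means would go:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in hemispheric annual anomalies; substitute the parsed GISS data.
years = np.arange(1880, 2007)
rng = np.random.default_rng(4)
nh = 0.0006 * (years - 1880) + rng.normal(0, 0.1, years.size)
sh = 0.0004 * (years - 1880) + rng.normal(0, 0.1, years.size)

diff = nh - sh  # positive when the northern hemisphere is relatively warm

plt.plot(years, diff)
plt.axhline(0, color="gray", lw=0.5)
plt.ylabel("NH minus SH anomaly (°C)")
plt.title("Hemispheric difference, annual means")
plt.show()
```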
tamino says
Re: #196 (tamino)
One more note: for most of the century it’s evident from the graph that the northern hemisphere warms relative to the southern. That’s because land warms faster than ocean (due to the immense thermal inertia of the oceans) and most of the land is in the northern hemisphere. The period 1945-1975 diverges from this pattern.
bjc says
What happened to my comment on #196? Was there something inappropriate? Please let me know.
bjc says
I urge everyone to review Eli’s rework, but make sure you read Steve McIntyre’s commentary.