Another week, another ado over nothing.
Last Saturday, Steve McIntyre wrote an email to NASA GISS pointing out that for some North American stations in the GISTEMP analysis, there was an odd jump in going from 1999 to 2000. On Monday, the people who work on the temperature analysis (not me) looked into it and found that this coincided with the switch between two sources of US temperature data. There had been a faulty assumption that these two sources matched, but that turned out not to be the case. There were in fact a number of small offsets (of both signs) between the same stations in the two different data sets. The obvious fix was to make an adjustment based on a period of overlap so that these offsets disappear.
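For readers wondering what “an adjustment based on a period of overlap” looks like in practice, here is a minimal sketch of the idea (the station values, years and function are hypothetical illustrations, not the actual GISTEMP code):

```python
# Minimal sketch of an overlap-based offset adjustment between two
# sources reporting the same station. All numbers are made up for
# illustration; the real GISTEMP processing is more involved.

def offset_adjust(old_series, new_series, overlap_years):
    """Shift new_series so that, on average, it matches old_series
    over the years where both sources report data."""
    diffs = [old_series[y] - new_series[y] for y in overlap_years
             if y in old_series and y in new_series]
    offset = sum(diffs) / len(diffs)
    return {y: t + offset for y, t in new_series.items()}

# Hypothetical annual means (deg C) for one station from two sources
source_a = {1997: 11.2, 1998: 11.9, 1999: 11.4}                        # ends in 1999
source_b = {1997: 11.0, 1998: 11.7, 1999: 11.2, 2000: 11.6, 2001: 11.8}

adjusted_b = offset_adjust(source_a, source_b, overlap_years=[1997, 1998, 1999])
print(adjusted_b)  # source_b shifted up by the mean overlap offset (+0.2 here)
```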
This was duly done by Tuesday: an email thanking McIntyre was sent, and the data analysis (which had been due in any case for the processing of the July numbers) was updated accordingly, along with an acknowledgment to McIntyre and an update to the description of the methodology.
The net effect of the change was to reduce mean US anomalies by about 0.15 ºC for the years 2000-2006. There were some very minor knock-on effects in earlier years due to the GISTEMP adjustments for rural vs. urban trends. In the global or hemispheric mean, the differences were imperceptible (since the US is only a small fraction of the global area).
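As a rough back-of-the-envelope check of why the global mean barely moves (the ~2% area fraction for the contiguous US is an approximation supplied here, not a figure from the analysis):

```python
# A 0.15 deg C shift confined to the contiguous US (roughly 2% of the
# Earth's surface; approximate figure) dilutes to a few thousandths of
# a degree in an area-weighted global mean.
us_shift = 0.15           # deg C, change in the US mean for 2000-2006
us_area_fraction = 0.02   # approximate fraction of global surface area
print(us_shift * us_area_fraction)  # ~0.003 deg C
```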
There were however some very minor re-arrangements in the various rankings (see data [As it existed in Sep 2007]). Specifically, where 1998 (1.24 ºC anomaly compared to 1951-1980) had previously just beaten out 1934 (1.23 ºC) for the top US year, it now just misses: 1934 at 1.25 ºC vs. 1998 at 1.23 ºC. None of these differences are statistically significant. Indeed in the 2001 paper describing the GISTEMP methodology (which was prior to this particular error being introduced), it says:
The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the GISS analysis (Plate 6). This contrasts with the USHCN data, which has 1998 as the warmest year in the century. In both cases the difference between 1934 and 1998 mean temperatures is a few hundredths of a degree. The main reason that 1998 is relatively cooler in the GISS analysis is its larger adjustment for urban warming. In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.
More importantly for climate purposes, the longer term US averages have not changed rank. 2002-2006 (at 0.66 ºC) is still warmer than 1930-1934 (0.63 ºC – the largest value in the early part of the century) (though both are below 1998-2002 at 0.79 ºC). (The previous version – up to 2005 – can be seen here).
In the global mean, 2005 remains the warmest (as in the NCDC analysis). CRU has 1998 as the warmest year but there are differences in methodology, particularly concerning the Arctic (extrapolated in GISTEMP, not included in CRU) which is a big part of recent global warmth. No recent IPCC statements or conclusions are affected in the slightest.
Sum total of this change? A couple of hundredths of degrees in the US rankings and no change in anything that could be considered climatically important (specifically long term trends).
However, there is clearly a latent and deeply felt wish in some sectors for the whole problem of global warming to be reduced to a statistical quirk or a mistake. This led to some truly death-defying leaping to conclusions when this issue hit the blogosphere. One of the worst examples (but there are others) was the ‘Opinionator’ at the New York Times (oh dear). He managed to confuse the global means with the continental US numbers, he made up a story about McIntyre having ‘always puzzled about some gaps’ (what?), declared that the error had ‘played havoc’ with the numbers, and quoted another blogger saying that the ‘astounding’ numbers had been ‘silently released’. None of these statements are true. Among other incorrect stories going around are that the mistake was due to a Y2K bug or that this had something to do with photographing weather stations. Again, simply false.
But hey, maybe the Arctic will get the memo.
Hal P. Jones says
Tamino, your claim in #9 that it didn’t “cool” between 1940 and 1975 is incorrect. It’s not much, but if you chart global mean at NOAA’s GCAG, you’ll see the anomaly went from slightly under 0 to about -.1 C
It’s not much “cooling”, sure. The trend is only -.01 ºC a decade and the significance is only 82%. But it did go down.
(GISS analysis using either 51-80, 61-90, or 71-00 as the base shows -.14, which slightly confuses me: why doesn’t changing the base affect the anomaly?)
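(A note on that parenthetical: changing the base period shifts every anomaly by the same constant, so the change between any two years, and any fitted trend, comes out the same whichever base is used, even though the individual anomaly values move. A toy example with made-up numbers, not GISS data:)

```python
# Toy illustration (hypothetical temperatures, not GISS data): rebasing
# shifts all anomalies by one constant, so year-to-year differences and
# trends are identical whichever base period is used.
temps = {1940: 13.9, 1950: 13.7, 1960: 13.8, 1970: 13.9, 1975: 13.8}

def anomalies(series, base_years):
    base = sum(series[y] for y in base_years) / len(base_years)
    return {y: t - base for y, t in series.items()}

a1 = anomalies(temps, [1950, 1960, 1970])   # one base period
a2 = anomalies(temps, [1960, 1970, 1975])   # a different base period

print(round(a1[1940], 3), round(a2[1940], 3))   # individual values differ
print(round(a1[1940] - a1[1975], 3),
      round(a2[1940] - a2[1975], 3))            # the 1940-to-1975 change does not
```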
But I did notice that you saw the cooling trend for 1945-75 later, in #197. And you make a good point about the software in #228, which makes all this discussion before and after about “the code” rather perplexing.
———————-
Gavin, I understand your point in #188 perfectly; that’s a great way to explain it. I wish you’d put it that way in #43 :) So let’s just give all that junk to those asking for it and not spend any more time on it. If that’s how it’s all used, just give it out the same way and be done with it. I do disagree a bit with #189, at least if Dr. McIntyre has a reason to say he doesn’t fully understand it, which I have no reason to doubt. If somebody like him, who goes into such insane statistical detail on his site, can’t figure it all out, who can? For example, your response to #211: if it’s just “a couple of pages of MatLab”, why is all this debate going on? Or is he just complaining for no reason? I don’t see that. Plus, as he later points out, it’s not the original stuff anyway if he “reproduces it”. Why reinvent the wheel? Although you do make good points about checking against the other adjustment schemes, which suggests the result is probably stable.
I commend you for the link to the references, but many of those papers, and the papers they reference, are not linked or available. It is also rather difficult to tell which ones contain the algorithms. And as John N. pointed out in #75, it’s not all that detailed; I don’t see it. I think the point of all this is that rather than those who want to independently validate everything having to demonstrate anything, it’s up to those being audited to assist. I suppose that’s what the argument is about. That, and “read all the papers” isn’t a very satisfying answer to a lot of this.
And your response to #195… Perhaps others have tried to “replicate the analysis from the published descriptions”, and it’s more difficult or less complete than you’re making it out to be? (I’m just asking if you’ve considered that as well as you could have….) And in #196, the link to Eli’s blog and the discussion there show there is an issue on which “both sides” are sure they have valid points. I don’t think the views are mutually exclusive.
But again, I believe the disagreement is over whether “start Reading The Fine RFCs, all the information you need is there” is a valid answer or not. I’m sure some specifics might clear this up, so now we’re back to “just give all of it and let’s stop pointlessly debating matters of opinion”. Your discussion with John N. and your reply in #205 are prime evidence of the conflicting viewpoints, and your response to Steve M. in #206… Um, I believe he both has and hasn’t gotten an answer. And #208’s response seems rather not an answer to the point that was being made. And so on further down the line with the other back-and-forth comments.
Thanks.
———————-
Dylan in #50, if you actually read what Dr. McIntyre says on his blog and the subjects he’s interested in talking about (mainstream accepted scientific literature), that’s been the point all along; it’s not what the data shows, it’s making sure it’s correct by independently validating it. That’s what confuses many of us; the audits could show more warming than previously thought, so why do so many attribute other motives to what’s just a bunch of data collection and validation?
———————-
Patrick, #51, I certainly agree with you.
Good point in #144 FCH.
———————-
But John M., in #53, those are the GCMs. (Later discussion from #240 on starts talking about the GCMs again…). But the adjustment software is not there. Good point, though!
Instead of arguing about .1C or .003C or whatever, or complaining about doing station surveys, why don’t we all work on doubling or tripling their budget? (Perhaps more support would materialize if the results could easily be replicated and independently verified?) And in #138, the point is that this is a publicly funded organization; the citizens own that code. Whether somebody has experience coding or not is immaterial. If it’s copyrighted or purchased software, why not just say that? If it’s a mess and not one package, why not just give it up? What’s the big deal about giving a researcher code that exists only to adjust the data?
Certainly it has a limited use to anyone else. Or how about a published paper with instructions and/or algorithms detailed “all in one place” instead, then? All the pushback or ignoring of the subject confuses some of us. On the other hand, your comments in #210 strike me as somewhat like Gavin’s in #208. As I said above, I think the issue of “serious software engineering and what it costs” is beside the point — if all the materials are little tidbits and expensive to put together, give them out as is! “Prove there’s a need” is not really a good answer.
shrug
———————-
That is a good point in #155, David. I knew a person with a doctorate in physics who did a lot of complicated analytical coding in C, but wasn’t a programmer. I understood the basic flow, but the code was a mess. Aside from the fact that I didn’t understand the math itself, which was the bulk of everything. :)
———————-
All other discussion of other issues, or quibbling about details, detracts from what should be the conversation and goals. One such distraction is trying to compare periods shorter than 15-30 years (like 2002-2006). We should be talking the way Gavin did in the note in #87 about US temp trends (“For the more recent period 1975-2006: 0.3 +/- 0.16 deg C/dec”). My take on it all is “So 1934 and 1998 are the same anomaly for the US. So what.”
My question: is the base period still 1961-1990 instead of 1971-2000? And does it matter, and why or why not?
tamino says
Re: #251 (Hal P. Jones)
We’ve been through this before, on another thread. But it’s an important distinction, so here goes …
I don’t dispute that 1975 was slightly cooler than 1940. I disputed the claim that “1940-1975 the temperature was falling.” I often hear it said that mid-century we experienced 30 or more years of cooling, and frankly, it just ain’t so.
Fit a trend line 1940 to 1975; the slope is slightly negative (cooling). Now fit a trend line 1950 to 1975. The slope is slightly positive (warming). And that is *not* 30+ years of cooling. It’s more correct to say that it cooled from about 1944 to 1951 (7 years), then levelled off for 24 years.
Then there’s the fact that neither the cooling 1940-1975 nor the warming 1950-1975 is statistically significant. It’s not cooling or warming; it’s fluctuating.
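A minimal sketch of the two fits described above (it assumes scipy is available; the anomaly series here is synthetic noise standing in for the real annual global means, so substitute the actual GISTEMP or NOAA values to see the slightly negative and slightly positive slopes):

```python
# Ordinary least-squares trends over 1940-1975 and 1950-1975, with
# p-values to check significance. The series is synthetic noise used
# only as a placeholder for the real annual global-mean anomalies.
import random
from scipy.stats import linregress

random.seed(0)
anomaly = {year: random.gauss(0.0, 0.1) for year in range(1940, 1976)}

def trend(series, start, end):
    years = [y for y in range(start, end + 1) if y in series]
    fit = linregress(years, [series[y] for y in years])
    return fit.slope * 10, fit.pvalue  # deg C per decade, and p-value

print(trend(anomaly, 1940, 1975))
print(trend(anomaly, 1950, 1975))
```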
The oft-repeated claim of 30+ years of global cooling mid-century is excellent fodder for denialist propaganda. But it’s simply not true.
John Mashey says
# 251 Hal
Somehow the message isn’t getting through.
How much computer-based scientific research & software engineering do you do? I’m happy to debate with people with relevant experience [as always, reasonable people can differ], but when people essentially keep insisting things are free, it goes rather contrary to a *lot* of experience.
My concern is to *cost-effectively* help good science happen, and I want my tax dollars to be used well. I’d be delighted to see GISS budget doubled or tripled … and I’d bet we’d see more things on websites, but I’d trust GISS to figure that out, given that they do better than many, from what I can see.
Yes, 2X-3X more budget … but no strings. I wouldn’t insist on procedures designed to slow down research, like those I mentioned in #210. PLEASE go read some of the things I pointed to there.
Hank Roberts says
John Mashey, thanks for #210. This is really cautionary:
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1446868
Hal P. Jones says
Thanks Tamino. I think the issue here is one of phrasing. The proper way to say that is “The trend from 1940-1975 went in a negative direction.” And if we talk about “The trend from 1880-2006 went in a positive direction” that’s fine too.
Sure, I don’t disagree that 7 years of falling coupled with 24 years of being steady can be described a great many ways. My point is that choosing your measurement period can allow anything to be shown.
My outlook is that if we talk about what it’s doing at some arbitrary point (pick one: 1893-1900, 1900-1909, 1929-1944, 1944-1976, 1992-1998, 1992-2006, or make up your own) as being the greatest rise/fall of x years, that’s a disservice. There’s no doubt that from 1950-2006 it’s gone up a lot (.1C/decade, globally) — that is the key, and the subject, I think. The meaning of that is a different subject, and I would say so is the accuracy, and a whole lot of other subjects rolled into one. It’s difficult if not impossible to try and talk about all of them at once!
All I’m saying is that it’s globally trended up .1C/decade in how we measure it over the last 56 years (100% significance). We shouldn’t be discussing short term trends, that’s my only real point.
Eric says
There is an unfortunate interpretation that those wishing to see source code wish only to find fault. But verification (or what some call “auditing”) cuts both ways – while it may find faults, the process also results in verification that a method is implemented properly and would thereby strengthen the interpretation of the output of the computer model. I suspect there is a strong organizational culture difference between those of us in engineering and those in academic settings. We do make mistakes in engineering – and go through significant quality assurance steps to prevent errors from occurring, or to find errors that have occurred. Perhaps a description of the quality assurance steps used in the design, implementation and verification of the various GISS data analyses and models would be helpful?
Steve L says
Re: Gavin’s response to McIntyre’s #225. I’m an academic lightweight, but I feel I have a good reason to disagree. Having the code and making specific changes to it is like doing controlled experiments. I think that is a very good way to do sensitivity analyses. Changing multiple factors at once seems a less effective method of learning, even if it could be a faster way to find a more robust model. Perhaps I’ve misunderstood the point?
Hal P. Jones says
I understand your message John, I just don’t think the same way about it that you do. I hope you understand that key point. I spent quite a bit of time explaining what this debate is about. One group doesn’t see the need and the other group does. Fine.
GISS does a wonderful job, and I don’t question anyone’s motives. It just seems to me that this non-issue is taking up a lot of effort better spent elsewhere. I have no control over their funding, but if I did I’d increase it. Moot.
Nope, not a programmer. Immaterial. Zip up everything in the code directory and sub directories and make it available for download. Case closed, questions finished. Issue done. Costs nothing. The code is produced by the government and is not secret, it should be available. Regulations don’t prohibit the release of it. All this other stuff is just cobwebs in the way. Who cares why anyone wants it or if it needs to be checked? Just put it out there.
But I’ll tell you what my qualifications are. I worked for the government for many years, and know exactly how everything functions (probably more than I want to, in fact!!!) Owned businesses, been in charge of large teams of people, ran the IT (with the help of multiple team leaders) in an organization of hundreds. I have been involved in this industry since the ’70s and my degree is in computer science. I have almost 10 years of experience teaching computers and networking, down to the digital theory level and up to the business and policy aspects. I also have training and various levels of experience in Ada, tcl/tk, dBase, BASIC, shell scripts, batch files, HTML, XML, Pascal, various IDE stuff, CVS, and so on. I certainly consider myself qualified to discuss these issues, and on multiple levels: political, economic, social, scientific and otherwise.
barry says
Here’s the correct link to “the 2001 paper describing the GISTEMP methodology”.
http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf
Hank Roberts says
Have any of the people who want to audit any of the existing models done their own model? I’d think that would be a convincing exhibit of bona fides, to show that they know how this stuff works and make their own code public.
I realize that some infighting about what data to include would be an issue — some people may want to rule in or out particular data sets or methods.
But if a group set up a public-source model, they’d get a lot of attention and be able to prove their competence in the field.
Steve Bloom says
Re #251 (Hal Jones): Just to note that McIntyre is two degree levels short of being “Dr.”.
Walt Bennett says
Re: #259
Hank,
I really think the best way is for NASA-GISS (and other similar organizations) to have robust DP audit departments – I’d be shocked if they didn’t have something along those lines established already.
The issue would be: what is their expertise in this sort of validation? In other words, as I mentioned earlier, there are issues which go beyond climate science and even beyond the programming language used. These areas of expertise are not critical for the software engineers and scientists who do the modeling, but they are essential for the auditors.
I’d love to hear from Gavin with regard to what NASA-GISS already has in place.
Of course, Steve would say: not enough, or else why did he and not they find the data-switch error?
Mario says
Re: #197-198 (Tamino)
Many thanks again for the effort in producing a well documented answer to my naive doubts
your graph at
http://tamino.files.wordpress.com/2007/08/nh-sh.jpg
on the northern-southern hemisphere temperature anomaly is really interesting and thought provoking
For example…
1. If the main driver of global warming is man-provoked CO2 increase, then it would seem that already in the ’80s of 19th century (!)
Northern Europe-US “industrial revolution” was somehow able to make itself felt thru a (quite fast) Northern hemisphere differential warming
True: railways and carbon burning were rising very fast then,
but can this be enough?
one could give a look – say – at figure 2 of
http://www.epa.gov/climatechange/emissions/globalghg.html
2. Thermal inertia of the southern oceans must really be “immense”, as you say, because even in recent years, when northern sulfate aerosols are far in the past and CO2 has only a “7 or 8 month” diffusion time over the globe, the North-South differential is still rising!
but this – I suppose – could for example be the effect of a possible strong acceleration in recent northern CO2 emissions…
3. But then another notable kind of “thermal inertia” must be operating in the US, otherwise, after the demise of sulfate aerosols, one could expect a quick alignment to – say – a kind of northern hemisphere mode,
that is a QUICKER warming “to recover lost time”, so to say.
On the contrary, the US now seems to be the warming laggard among the northern regions,
as if the sulfate aerosol era had left a lasting heritage…
Now if we assume some kind of big “thermal inertia” all can be neatly explained, and this is perhaps the correct thing to do,
but then
unless we also find a robust way of independently verifying and measuring “thermal inertia”,
admission in the discourse of this additional “free entity” reduces greatly the forcefulness of our theoretical construction,
because other competing explanations of global warming would become workable too:
because it is enough that these “competitor theories” adjust the non-directly-measured-but-conveniently-assumed “free entity” at the level that best fits their needs.
One would then be forced to admit a higher level of ignorance about climate mechanisms than it’s pleasant to do.
FP says
“so why do so many attribute other motives to what’s just a bunch of data collection and validation?”
I guess I am more cynical, but after seeing what Bush did to the Gulf Coast after the storms, and after seeing him put the scientist guy that companies use as a paid witness to defend lead-poisoning cases into a high-ranking position at the EPA, it is pretty obvious to me that there are other motives and agendas at play here. And one side has proven itself to lie and exaggerate exponentially more than the other.
Gerald Machnee says
Re #260 – So what is your point?
Tim McDermott says
Steve L: “Having the code and making specific changes to it is like doing controlled experiments. I think that is a very good way to do sensitivity analyses. Changing multiple factors at once seems a less effective method of learning, even if it could be a faster way to find a more robust model. Perhaps I’ve misunderstood the point?”
Not a software type, are you? The nasty fact is that software is formally chaotic. There are several ways to get to that conclusion, but the one I like best is that a running program in a von Neumann computer is an iterated map, and iterated maps are known to be chaotic. Which adds up to the fact that making random changes to code usually blows things up.
That is beside the point, however. The game that Gavin, or any modeler of natural processes, plays with his models is to try to learn how things work. Coding a new feature, or trading one way of calculating for another, is, essentially, an experiment. The code is not important. The equations expressed in the code are important. And running the model is how you evaluate a particular set of equations. The rules of economics or engineering don’t really apply here. In a profound way, a sensitivity analysis is meaningless in this domain.
Hank Roberts says
Gerald, 260 corrects an error in 251; Dr. McIntyre is someone else.
Rich Briggs says
The side discussion of tectonics in the context of sea level rise is getting a bit tangled (“BlogReader said: Odd you didn’t mention that maybe some plates might be sliding lower into the ocean . . .and john mann replied, “Well, plates do slide against each other, but sea level changes due to tectonics tends to be in the order of 1cm per thousand years . . .”). And this epic set of comments is WAY too good to get sidetracked, so I won’t wade in with much more.
Barton summed it up nicely: Sea level is rising on average, as determined by several independent lines of evidence (satellite ranging; satellite gravity measurements; tide gauge analyses, among others) – but the effects on individual coastlines are decidedly local. A previous Realclimate post (https://www.realclimate.org/index.php?p=314) touched on these issues without explicitly dealing with coastal changes (unless I missed a more relevant post; sorry if so). In most places eustatic sea level rise due to AGW is purely bad news. All this is probably best taken up in another thread sometime.
Ralph Becket says
Re. #258: amazing. That paper contains not one formula that I can see. If the algorithm is as trivial as claimed, surely it might fit into the space of one of the many graphs supplied in the index?
Has anyone here gone to the trouble of turning those ten pages of dense text into something more directly comprehensible (i.e., a formula)? If so, would they mind posting it here?
Patrick the PhD says
Gavin, thanks for the link, I’ve downloaded source and am looking through it now. While you may argue such code-sharing isn’t important like primary work, it certainly helps in diffusion of knowledge.
JohnM: My platform is Linux. In 25 years of SW experience, I have studiously avoided Fortran as best I could, but unfortunately have to run into it occasionally; I can certainly compile, read and understand it as well as most other languages I work with. As I mentioned before, I have a day job and other responsibilities besides, so my intention in doing this myself is curiosity/education, not to find any bug. Whether it parlays beyond that I can’t commit to now; expect no weeklies, they are painful enough even when paid.
The one thought I did have that might be a contribution was the thought that, like the SETI project, there is perhaps a way to harness many people’s machines to run these GCM simulations. Have others had this idea as well? I think so. Anyway, an available open-source GCM code base would be an enabler of such an effort, hence my ‘what the heck, I’ll run it’ bid to take it.
[Response: Watch this space… – gavin]
Justin says
I feel persuaded by Gavin on this one RE: response to McIntyre.
Neither seems to disagree about the value of the check-and-balance system of replicability. Where they disagree is in the required level of precision in the record. However, in this case, the accuracy of the record remains the same even after the corrections, for all intents and purposes (and that accuracy was coupled with a good level of precision nonetheless). It seems as if McIntyre is not so much concerned with accuracy (and I think that Gavin was right to refer to his exercise as “micro auditing”) as he is with precision; I don’t think he has made the case for why greater precision must be had (at least in this instance).
What Gavin is arguing is that when a question of accuracy arises, it is much more fruitful to form a competing replication. Since there are different codes used for the record constructions anyway (we can, I think, rightly assume this), it’s clear that the code changes aren’t going to affect whether or not the record is accurate. Analysing a replication through creating a competing (and perhaps, null) replication will tell you a lot more.
Justin
Alan K says
When a “skeptic” scientist offers up a view of why they may disagree with your theory, you are very quick to dismiss them (I have never seen so much as a “well, you have a point…”). So people do build things from scratch and it ends up in a “he said-she said” argument.
This issue gives everyone a huge opportunity to move closer together on climate change analysis as it cuts to the heart of many skeptics’ problems. Computer models. Silly dim old skeptics can’t quite believe how computer models are able to model something as complex as the weather. Out to 100 years in the future. But they do, and many people say that such forecasts should dictate public policy. So when you say it’s about the science, build your own model, you are being disingenuous: it has become both about the science and the models. Climate science without the models would be meaningless.
I am a drive-by skeptic with no scientific training. What do they call such people? Let me see..oh yes, voters. There are billions of us. Threads like this, seeing the many ways you have tried to deny there is a need to open any aspect of your scientific investigation to anyone who wants it, makes it less likely we will vote for any “anti-global warming” measures which today we see as futile and economically destructive.
hillrj says
re 259 H Roberts: …if a group set up a public source model…
It is a surprise that this hasn’t been proposed already. The Linux project shows that massive public codebases can be developed. With modern cluster and sharing management, large amounts of computer power are freely available. Maybe the Free Software Foundation could be the vehicle. Or Anthony Watts seems to be able to inspire public participation. All we need is one Torvalds!
Steve Bloom says
Re #259: I don’t know the details, but IIRC NCAR’s CCSM is set up more or less for wide access, which raises the question of why the “auditors” are interested in the GISS model in particular.
Re #262: It was an appeal to lack of authority, Gerald.
Barton Paul Levenson says
Well, apologies to Jim Hansen, but that has to be the worst coding style I’ve ever seen. No indentations or blank lines, barely any comments, gotos all over the place… if he’d wanted to make it unreadable he couldn’t have done a better job. And I assume that’s Fortran-77? It sure doesn’t look like ’95. God forbid that I knock ’77, because it kept me employed as a programmer for eight years, but please tell me they’re not still coding in it.
John Tofflemire says
Tamino states in #252 that:
“I often hear it said that mid-century we experience 30 or more years of cooling, and frankly, it just ain’t so. . . . Then there’s the fact that neither the cooling 1940-1975 nor the warming 1950-1975 is statistically significant. It’s not cooling or warming; it’s fluctuating. . . . The oft-repeated claim of 30+ years of global cooling mid-century, is excellent fodder for denialist propoganda. But it’s simply not true.”
According to the NOAA global temperature anomaly data, prior to 1981, the highest average global temperature anomaly in a single year was .2143 degrees Celsius, in 1944 (The average global temperature anomaly in 1940 was .1187 degrees Celsius, nearly .1 degrees Celsius below the peak level attained in 1944). From January 1945 to December 1976 (a total of 384 months) the global temperature anomaly was lower in 369 months, or 96% of the total. The standard deviation of the first difference of the global temperature anomaly is about .108 degrees Celsius. In the same 1945 to 1976 period, in 317 months, or 83% of the total, the global temperature anomaly was greater than one standard deviation below that average in 1944. Furthermore, in 206 months in that same period, the global temperature anomaly was greater than two standard deviations below the 1944 average.
Were the global temperature “not cooling or warming; [but] fluctuating” between 1945 and 1976, one would have expected roughly 50% of the data points in that period to be above and 50% below that 1944 average, 33% to be one standard deviation below, and 5% to be two standard deviations below.
For those who claim that this writer is cherry picking, if we take the average global temperature anomaly for the 1940 to 1944 period of .1434 degrees Celsius, the corresponding percentages are 90%, 60% and 31%. These percentages are significantly different from expected percentages of 50%, 33% and 5%.
Furthermore, Tamino claims that the relatively flat temperatures between 1950 and 1975 dispel the notion that one could consider that there was a general cooling trend. The average global temperature anomaly during this period was -.0036 degrees Celsius (the corresponding figure for the 1952 to 1975 period is .0063 degrees Celsius). In 277 months out of 312 (89%) during this period the global temperature anomaly was below the 1940 to 1944 average; 179 months (57%) were 1 standard deviation below the 1940 to 1944 average; 93 months (30%) were two standard deviations below the 1940 to 1944 average.
Simply put, the global temperature anomaly was cooler in the 1945 to 1970 period, and these cooler temperatures were statistically significant compared with the five-year period between 1940 and 1944, when global temperatures reached their highest level attained before 1980. Regardless of your position on AGW, the above analysis is not “denialist propaganda”, but a careful examination of reality based on the data.
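For anyone wanting to repeat that kind of count, here is a sketch (the monthly series is synthetic noise standing in for the NOAA monthly anomalies; substitute the real data to check the percentages quoted above):

```python
# Sketch of the counting exercise described above: how many months in
# 1945-1976 fall below the 1940-1944 mean, and more than 1 or 2 standard
# deviations (of the first difference) below it. The 'monthly' series is
# synthetic noise used only as a stand-in for the NOAA monthly anomalies.
import random
from statistics import mean, stdev

random.seed(0)
monthly = {(y, m): random.gauss(0.0, 0.1)
           for y in range(1940, 1977) for m in range(1, 13)}

ref = mean(monthly[(y, m)] for y in range(1940, 1945) for m in range(1, 13))
series = [monthly[(y, m)] for y in range(1940, 1977) for m in range(1, 13)]
sd = stdev(b - a for a, b in zip(series, series[1:]))  # SD of the first difference

window = [monthly[(y, m)] for y in range(1945, 1977) for m in range(1, 13)]
below = sum(v < ref for v in window)
below_1sd = sum(v < ref - sd for v in window)
below_2sd = sum(v < ref - 2 * sd for v in window)
print(below, below_1sd, below_2sd, "of", len(window), "months")
```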
Dodo says
Re #226. Thanks for the graphs. Looks much better now that global warming doesn’t fly off the chart anymore. Let’s hope the GISS graphics person takes note.
Tom Adams says
#272 Alan,
This incident proves that climate scientists take skeptics seriously when they bring something new and worthwhile to the table.
But all I see from skeptics is that they recycle the same already-refuted arguments over and over again, millions of times, I would estimate. Nobody has time to even take note of this stuff.
For instance, you just trucked out the old “models don’t work” chestnut again. Been there, done that, nobody has time to keep refuting it over and over again for every skeptic with a keyboard [edit]. There are just too many of you.
Dan says
re: 272. Do you apply your model skepticism broadly? Or selectively to science? For example, the next time you step on an airplane for a flight, do you realize that models were used to develop and test the plane? Or do you demand to see the open models that were used first before allowing the plane to take off? BTW, those models were also tested and peer-reviewed.
“…which today we see as futile and economically destructive.” Ah, now the unobjective statement behind your question becomes obvious. “WE”, the voters do not see that at all. Check the various polls re: the need for action on GHGs.
tamino says
Re: #276 (John Tofflemire)
You have misunderstood me. I’ll say it again…
I have never disputed that the period 1950 to 1975 was cooler than the period 1940 to 1945. What I dispute (correctly) is that it was cooling *from* 1945 *to* 1970. If you want to say that mid-century we saw 25-30 years of “cooler,” that’s one thing. But if you say 25-30 years of “cooling,” then you are quite simply mistaken.
The “denialist propaganda” to which I refer is the claim that the globe was cooling for 30+ years mid-century. The impression which is intended is that the planet cooled, and kept cooling, for three decades. It just ain’t so.
We beat this to death already on an earlier thread.
Alan K says
# 278 Tom, why should people not continue to express doubts? Just because you say so? [edit]
# 279 I do apply my model scepticism broadly. I have done enough (albeit financial) modelling to know that you can make the output whatever you want it to be. A plane will have been proved to fly – i.e. in real life. Would you believe a model that told you what type of flight will exist in 100 years?
Lynn Vincentnathan says
RE #277, that’s science for you. We lay people look at something and see not much happening, but a scientist looks at it and sees all sorts of things. It’s a matter of many years of education and training. And I’ll trust what the scientists say over what the untrained laypeople say any day.
BTW, I just read on ClimateArk.org that British scientists have predicted the temps will level off for 2 years, then after 2009 rise sharply, and all hell is about to break loose. See:
http://www.climateark.org/shared/reader/welcome.aspx?linkid=81736 and
http://www.climateark.org/shared/reader/welcome.aspx?linkid=81891
Of course, if you’re 99 and expect not to be here then, you’ll probably miss the worst.
Patrick the PhD says
279. “re: 272. Do you apply your model skepticism broadly? Or selectively to science? For example, the next time you step on an airplane for a flight, do you realize that models were used to develop and test the plane?”
Dan, an inapt analogy. We have had tens of thousands of actual airplane flights of experience to validate that the models are matching observation. Airplanes were not built on models alone, but on multiple test flights. We have yet to live through the 21st century to validate climate models long-term. It’s an ongoing validation and calibration, year by year.
For climate modelers, the challenge is that it’s such a complex system with such complex cause-and-effect that one can match past experience yet have a model with not much skill in predicting the future (viz. 281).
For example, I could give you a computer model that perfectly matches how the stock market performed up to today. Would you put your entire life savings into using it to make a bet on the stock market in the future? I suspect you would give it multiple ‘trial runs’ before making such a bet, right?
[Response: You are erecting and knocking down strawman arguments. Climate models have nothing to do predictions of the stockmarket. In climate modelling, there are indeed many ‘trial runs’ that give people confidence in their projections. See previous threads on this exact same point: here and here. Further discussions of climate models is OT in this thread. – gavin]
Walt Bennett says
We haven’t touched on this before that I have seen, but something occurred to me last night as I read the latest climate story from Elizabeth Kolbert in The New Yorker. Her story is about declining bee populations. I also saw in last week’s Newsweek that certain Central American frogs were wiped out in 2 years in the late 1980s. Also, certain butterflies no longer exist in their previous habitat, having moved north and up to seek the cooler temperatures they prefer.
In other words, nature is changing out from under us, long before we ‘feel’ the impacts of climate change in our daily lives. And long before ice sheets break up, or storm patterns change for the worse, or previously wet regions turn to desert.
Long before that, animals and plants which are much more sensitive to environmental changes will feel the effects and be forced to adapt. This will have unknown consequences for man, but one thing is for sure: there will be consequences.
FurryCatHerder says
Another issue with having the code is that there is a very strong tradition in the open source community of improving code. Some of the fastest compilers, as I think John Mashey will testify to, are products of open source efforts.
On the subject of “support”, I was an early UNIX user — I’ve seen 7th Edition and System III source code, as well as some variants that never saw the light of day. We managed to do just fine without support from Bell Labs. The same was true in the early days of Linux, back when we installed it using a stack of floppy disks. There was no support for Linux then, and look at it today. I wouldn’t be surprised to learn that a lot of the platforms running these models are Linux systems :)
VirgilM says
Steve McIntyre observed that the GISS corrections do attempt to eliminate UHI for urbanizing sites. However, Steve McIntyre also observed that the GISS corrections added a warming trend to sites that are rural in nature and haven’t moved for quite some time. I’d like Gavin to explain the physical reason why a warming trend was added to rural sites and how it was added. This is no small matter. Subtracting UHI from urbanizing sites, but adding UHI to non-urbanizing sites, still keeps UHI effects in the GISS analysis. The validation of climate models can’t be done with surface data that is corrupted with effects unrelated to climate change. This makes any error found a BIG deal in my mind, because it opens the possibility of more errors.
[Response: The virtue of reading the references:
Why might that be?
Therefore any one station, even if rural, has its trend set by the average of raw rural station data in the area. – gavin]
Walt Bennett says
Re: #286
Agreed.
My point was, climate change will affect these populations in a direct way long before they affect humans in a direct way.
And yet, these indirect effects will have significant consequences for humans. For example, without bees we cannot grow as much of certain fruits, vegetables and even nuts as we demand.
That’s just one of what will certainly be many examples.
Lynn Vincentnathan says
#283 & We have yet to live through the 21st century to validate climate models longterm.
Well, see, the scientists want to do as much as they can now with whatever evidence they have, before what actually happens during the 21st century has a chance to validate or invalidate their models. That’s because by 2100 there may not be any scientists left. They may have to do what the rest of the remnants of humanity will be doing — scrounge around for food, fight off marauders, escape mega floods, hurricanes, storms, and forest fires, and such. :(
VirgilM says
Gavin, so what if the http://www.surfacestations.org effort determines that some of the rural stations used in the regional average are corrupted with non-climate-change effects? This may not subtract enough trend from urbanized sites and may add too much trend to the rural sites.
Of course, the nearest USHCN site to me is Huntley Exp Station, MT. They irrigate all around the station, so while it can be considered rural, it has land-use effects corrupting the climate change signal (a possible cooling effect during May-Aug?). Is it possible to know how the level of irrigation has changed in the area during the last 100 years? And even if we knew the irrigation levels, do we know how to correct the data?
I think we need to fund a climate station network that sites stations free of land use effects and changes.
[Response: The regional trend is the average trend of the regional rural stations. In the raw data, some will be more than the mean, some less. There is a funded climate station network – CRN – which is exactly what you want. – gavin]
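A schematic of the averaging Gavin describes, just to illustrate that individual rural stations scatter around the regional mean (station names and trend values are hypothetical; this is not the GISTEMP adjustment code):

```python
# Schematic only: the regional trend is the mean of the rural station
# trends, so any single rural station's raw trend can sit above or
# below it. Comparing one station to the regional result is not the
# same as that station's data having been adjusted.
rural_trends = {"rural_A": 0.12, "rural_B": 0.30, "rural_C": 0.21}  # deg C/decade, hypothetical

regional_trend = sum(rural_trends.values()) / len(rural_trends)
print("regional mean trend:", round(regional_trend, 3))

for name, t in rural_trends.items():
    print(name, "raw:", t, "difference from regional mean:", round(t - regional_trend, 3))
```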
Hal P. Jones says
Ahem, looks like the numbers got moved. Just to clarify: #260 is Hank Roberts discussing people making their own ways of performing adjustments. #261 is Steve Bloom correcting me on Steve McIntyre not having a doctorate (the title I used in #251). Hank (#267) responds to Gerald (#265), who asked what the point was of Steve Bloom’s comment about the title of doctor. (So Hank, same person, wrong title.)
Which is correct; I was wrong, and I went and looked it up. It’s a bachelor of science from U Toronto in 1969, and he graduated from Oxford in 1971 after studying philosophy, politics, and economics. I’m unsure if Oxford included a degree, or what kind if so; it just says graduated. I just always thought he had a PhD, sorry.
Also, Ralph (#269) comments on “#258” (barry’s link to Hansen 2001 in #259); hillrj (#273) is commenting on Hank in #260, not #259; Steve B. (#274) has #259 and #262, but that’s #? and #265; etc.
FP, in #264 you are talking about the US govt and matters of politics vis-à-vis ulterior motives. I was talking about the people surveying or auditing.
Justin, in #271 you talk about precision. I don’t think that’s it. It’s more an issue of validating a method by analyzing the method itself versus validating it by creating a different method that does the same thing. I suppose the discussion is all about which way is the “better way” to see if things are doing what they’re supposed to. Gavin thinks the other methods already around do that; Steve wants to check the original method directly. So what I see is that one doesn’t see the need and the other doesn’t understand why the need has to be seen. I don’t think there’s a solution to that.
Steve B, in your #274 I believe you’re saying that my mistakenly calling him Dr. as a title made what I wrote an appeal to (lack of) authority. Reading the bio, it seems to me that regardless, he’s qualified to statistically analyze these sorts of things. YMMV.
#276 John T., #280 tamino: I suppose it depends on whether you’re talking about constant cooling every year versus the general trend.
John Wegner says
Regarding the adjustments: at one time (August 1999) the order of US temperature records was 1934, 1921, 1931, 1953 and 1998.
See:
http://www.giss.nasa.gov/research/briefs/hansen_07/fig1x.gif
From:
http://www.giss.nasa.gov/research/briefs/hansen_07/
[Response: Interestingly, that was prior to the adoption of the USHCN corrections for time of observation biases etc. – which is what the 2001 paper was all about. – gavin]
Brian says
This is definitely an interesting discussion to read as a non-climatologist scientist. All of the attention and scrutiny is, in the long run, very good for the science.
I would agree w/ many of the comments above…the skeptics, denialists, auditors, whatever…they are making the case for AGW stronger with their probing. I’m interested to hear about the results of the due diligence.
And I would love to contribute computing power in the SETI-type way mentioned in #270.
DavidU says
#292
Are you looking for something like this?
Climateprediction@home
Justin says
Hal,
when you say “validating the method”, do you mean validating the code or validating the method without its coded form?
Justin
Brian says
Re#293: DavidU…thanks!…I will give that site a look
Chuck Booth says
I trust those who are clamoring for access to the computer code used by climatologists in order to scrutinize that code for possible errors are similarly motivated to do quality-control checks on the models underlying gloom-and-doom economic forecasts of dealing with AGW:
Emissions Dilemma High Cost Of Reductions Estimated; Others Say Doing Nothing Would Cost More
By ALAN ZIBEL
Associated Press
August 14, 2007
WASHINGTON
Making big cuts in emissions linked to global warming could trim U.S. economic growth by $400 billion to $1.8 trillion over the next four decades, a new study says.
The study published Monday by a nonprofit research group partially funded by the power industry concludes that halving emissions of carbon dioxide – the main greenhouse gas linked to global warming – will require “fundamental” changes in energy production and consumption.
The Electric Power Research Institute said the most cost-effective way to reduce the level of carbon dioxide in the atmosphere is to make many changes at once, including expanding nuclear power, developing renewable technologies and building systems to capture and store carbon dioxide emitted from coal plants. Reducing demand for fossil-fuel power is also key, the institute said.
The EPRI cost estimate is based on a 50 percent cut in total U.S. carbon emissions from 2010 levels by 2050. Without such a cut and the shifts in technology it would bring, the Energy Department projects that U.S. carbon emissions will rise from about 6 billion metric tons a year in 2005 to 8 billion metric tons by 2030.
The report calls for more modest cuts in emissions than some proposals being considered in Congress. Bigger cuts could well be more expensive…
Disclaimer: Many years ago, I was involved in research funded by EPRI.
Timothy Chase says
Hal P. Jones (#290) wrote:
Quick note: even when on rare occasion a post gets removed (and this has happened to me before when I implied a rather ill-chosen comparison), if you have hyperlinks to the posts that you are refering to, people will still be able to follow. Anyway, thank you for the effort that you are putting into keeping the context.
It helps.
Personally, I have a variety of problems with Steve McIntyre. First, he is not a climatologist, and he has deliberately made use of statistical methods which he should know are invalid to try and discredit the hockey stick. Second, he tries to create the impression that taking photos of the stations will be sufficient to determine whether they are capable of providing accurate and useful information, that is, that somehow the photos will show whether or not the stations are in park cool islands, whether other statisticians are using the appropriate statistical methods for separating the signal from the noise, etc. Third, he cherry-picks the stations and misrepresents the data which is being received from them. Fourth, he pretends as if a problem with a particular station would result in a continuing upward trend, when all it would produce is a jump, not a trend. Fifth, he pretends as if this is the only source of information we have for reliably determining trends.
So I do not question his qualifications as a statistician. [edit]
That said, he is attracting more talent at present. Who knows? Maybe something of real value will come out of his group despite his involvement. Stranger things have happened.
But I won’t be holding my breath.
pete best says
Re #296, it makes me wonder why anyone thinks that fossil fuels, starting with oil, are going to be around in sufficient quantities to fuel economic growth for decades to come. From all of the available evidence I would suggest not. And to think that something else is available to replace oil, then gas and coal, immediately and with no economic pain, is at present seemingly presumptuous and slightly foolish.
Still, Peak Oil people are seen as doom-mongers, much like environmentalists, and as yet not quite mainstream enough.
Timothy Chase says
Chuck Booth (#296) wrote:
Electric Power Research Institute
http://www.sourcewatch.org/index.php?title=Electric_Power_Research_Institute
Hmmm… Exxon funding, Chauncey Starr in previous years from the George C. Marshall Institute…
Starr, Chauncey (George C. Marshall Institute)
http://www.mediatransparency.org/recipientprofileprinterfriendly.php?recipientID=137
Involved in…
HARVARD CENTER FOR RISK ANALYSIS
According to its website, the HCRA “was launched in 1989 with the mission to promote public health by taking a broader view. By applying decision science to a wide range of risk issues, and by comparing various risk management strategies, HCRA hopes to empower informed public responses to health, safety and environmental challenges by identifying policies that will achieve the greatest benefits with the most efficient use of limited resources.” (http://www.hcra.harvard.edu/about.html; accessed 03/29/06)
according to Center for Science in the Public Interest / Integrity in Science Database
http://www.cspinet.org/integrity/nonprofits/harvard_university.html
… tactics out of tobacco industry playbook.
Not good.
PS
Chuck – Great to see you made it to this front now that the other has cooled down.
Same war, though.
Timothy Chase says
Resources for researching the disinformation industry:
Integrity in Science Database
Center for Science in the Public Interest
http://www.cspinet.org/integrity
Source Watch
http://www.sourcewatch.org
Center for Media and Democracy: PR Watch.org
http://www.prwatch.org
Media Transparency
http://www.mediatransparency.org
Climate Science Watch
http://www.climatesciencewatch.org
DeSmogBlog
http://www.desmogblog.com
Society of Environmental Journalists
http://www.sej.org
See links for more: http://www.sej.org/resource/index18.htm