Another week, another ado over nothing.
Last Saturday, Steve McIntyre wrote an email to NASA GISS pointing out that for some North American stations in the GISTEMP analysis, there was an odd jump going from 1999 to 2000. On Monday, the people who work on the temperature analysis (not me) looked into it and found that this coincided with the switch between two sources of US temperature data. There had been a faulty assumption that these two sources matched, but that turned out not to be the case. There were in fact a number of small offsets (of both signs) between the same stations in the two different data sets. The obvious fix was to make an adjustment based on a period of overlap so that these offsets disappear.
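For readers wondering what an overlap-based correction of this sort looks like in practice, here is a minimal sketch (in Python, with made-up values for a single station; this is not the actual GISTEMP code): estimate the station's offset as the mean difference between the two sources over the years both report, then remove it so the series join without an artificial jump.

import numpy as np

# Toy example: the same station as reported by two sources over an overlap period.
# All values are invented for illustration only.
years    = np.arange(1995, 2000)
source_a = np.array([11.2, 11.5, 11.1, 11.8, 11.6])   # e.g. the pre-2000 source
source_b = np.array([11.0, 11.3, 10.9, 11.6, 11.4])   # e.g. the post-2000 source

# Estimate the offset from the overlap period and remove it from one source,
# so the two series can be spliced without a spurious step at the changeover year.
offset = np.mean(source_a - source_b)
source_b_adjusted = source_b + offset

print(f"estimated offset: {offset:+.2f} C")
print("adjusted overlap difference:", np.round(source_a - source_b_adjusted, 3))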
This was duly done by Tuesday: an email thanking McIntyre was sent, and the data analysis (which had been due in any case for the processing of the July numbers) was updated accordingly, along with an acknowledgment to McIntyre and an update of the methodology description.
The net effect of the change was to reduce mean US anomalies by about 0.15 ºC for the years 2000-2006. There were some very minor knock-on effects in earlier years due to the GISTEMP adjustments for rural vs. urban trends. In the global or hemispheric mean, the differences were imperceptible (since the US is only a small fraction of the global area).
There were however some very minor re-arrangements in the various rankings (see data [As it existed in Sep 2007]). Specifically, where 1998 (1.24 ºC anomaly compared to 1951-1980) had previously just beaten out 1934 (1.23 ºC) for the top US year, it now just misses: 1934 1.25 ºC vs. 1998 1.23 ºC. None of these differences are statistically significant. Indeed in the 2001 paper describing the GISTEMP methodology (which was prior to this particular error being introduced), it says:
The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the GISS analysis (Plate 6). This contrasts with the USHCN data, which has 1998 as the warmest year in the century. In both cases the difference between 1934 and 1998 mean temperatures is a few hundredths of a degree. The main reason that 1998 is relatively cooler in the GISS analysis is its larger adjustment for urban warming. In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.
More importantly for climate purposes, the longer term US averages have not changed rank. 2002-2006 (at 0.66 ºC) is still warmer than 1930-1934 (0.63 ºC – the largest value in the early part of the century) (though both are below 1998-2002 at 0.79 ºC). (The previous version – up to 2005 – can be seen here).
In the global mean, 2005 remains the warmest (as in the NCDC analysis). CRU has 1998 as the warmest year but there are differences in methodology, particularly concerning the Arctic (extrapolated in GISTEMP, not included in CRU) which is a big part of recent global warmth. No recent IPCC statements or conclusions are affected in the slightest.
Sum total of this change? A couple of hundredths of degrees in the US rankings and no change in anything that could be considered climatically important (specifically long term trends).
However, there is clearly a latent and deeply felt wish in some sectors for the whole problem of global warming to be reduced to a statistical quirk or a mistake. This led to some truly death-defying leaps to conclusions when this issue hit the blogosphere. One of the worst examples (but there are others) was the ‘Opinionator’ at the New York Times (oh dear). He managed to confuse the global means with the continental US numbers, made up a story about McIntyre having ‘always puzzled about some gaps’ (what?), declared that the error had ‘played havoc’ with the numbers, and quoted another blogger saying that the ‘astounding’ numbers had been ‘silently released’. None of these statements is true. Among the other incorrect stories going around are that the mistake was due to a Y2K bug or that this had something to do with photographing weather stations. Again, simply false.
But hey, maybe the Arctic will get the memo.
Deech56 says
So Gavin, besides posting replies here, how was your weekend? You have done yeoman’s work answering the queries. Thanks. I think a sticking point is that most of us would have a hard time replicating the GISS temps based on Hansen, et al. 2001 simply because we are not trained in the field. But I am sure many outside my field would have trouble replicating some of the vaccine work I published based on their reading of the Materials and Methods sections I painstakingly wrote. That’s the nature of the beast. Training in a scientific field does count for something.
I would just like to point out that with ClimateAudit.org down, Steve McIntyre has posted a couple of items on Anthony Watts’ blog (“Does Hansen’s Error “Matter”? – guest post by Steve McIntyre” and “Lights Out – Guest post by Steve McIntyre”). And I would recommend tamino’s data analysis to get an idea of the effects of the correction. It looks to me like the correction of the data for the lower 48 leads to a better slope match with the satellite data.
Lynn Vincentnathan says
RE #191, & Drought in California will have major impact on 30 million people, from lettuce farmers that flood their fields with Sierra snow melt to computer programmers that want a morning cup of coffee. A drought like the one that started in 1280 would be worse for California than any foreseeable earthquake, volcano, or plague.
I just started SIX DEGREES by Mark Lynas (had to order it thru http://www.amazon.co.uk) & am starting on the first chapter. He writes about what happened in 1280 as happening with a 1 to 2C increase. And I understand (from other sources) that’s already in the pipes from previous GHG emissions. Yes, such an increase & concomitant harms will be very very bad for much of the U.S. (& the world).
Patrick says
“We of course pointed that you get virtually the same warming trend whether you use all stations or just rural stations – which you didn’t even seem to acknowledge.”
This is being claimed wrt US temps.
Is this true wrt global temperature studies? That is, only rural stations have the same trend? If so, what is the published study on it?
FurryCatHerder says
Oh, PUH-leeze, Gavin. You’re not talking to the computer illiterate. I’d tinker with the models if I had the right FORTRAN compiler, but all my best efforts — and I was working in FORTRAN 28 years ago — have been for naught. One thing that publishing the code would do is get it ported to something that doesn’t require a proprietary compiler. The next thing is I suspect someone would find a way to parallelize the code that would let it be run on distributed networks, rather than being limited to tightly coupled, shared memory systems — or at least, I’ve been told it currently only runs on tightly coupled systems.
[Response: Are we talking about climate models now? The available ModelE code was only for OpenMP systems, but does compile on a number of different platforms and compilers. Our current in-house version has MPI as well, and will compile with g95 (runs pretty slow though). At some point in the near future, I’ll update the available code to the version with the same physics but more flexible coding. – gavin]
John Norris says
Gavin thanks for your reply to 195. Obviously we are keeping you busy. I appreciate your dedication to the science and the discussion. Onto your comments:
“You completely misread my point. …”
Not sure how, I completely read your words. Your words said that the code should not have to be publicly inspected; the public should create a new program based on the original algorithm. If the public gets different results, then the public can raise an issue with the Author.
Reviewing code cuts to the chase. Making a new program makes the process more complex. Why on earth would you want to make it more complex?
” … I am certainly all for as much openness as possible – witness the open access for the climate models I run. …”
I am for as much openness as possible too, please support release of Hansen’s source code and scripts.
“…But the calls for the ’secret data’ to be released in this case are simple attempts to reframe a scientific non-issue as a freedom of information one (which, as is obvious, has much more resonance). …”
I don’t think so. I think people are curious how it works and whether it really works correctly. Sounds like science to me. If you are correct then get it released, and the issue as you described goes away. If I am correct people will take the information, run it, look at it, size it up, and comment. How can that not be good for Climate Science?
” … My points here have been to demonstrate that a) there are no ’secret’ data or adjustments, and b) that there is no reason to think there are problems in the GISTEMP analysis. …”
a) The code and scripts are apparently kept secret from me – thus the implementation of the data and adjustments is indeed secret. I don’t think your point a is valid.
b) Before Steve McIntyre found the subject GISTEMP problem you describe above, I don’t think you had reason to believe that that GISTEMP problem was there. I don’t think your point b is valid either.
” … The fact that no-one has attempted to replicate the analysis from the published descriptions is strong testimony that the calls for greater access are simply rhetorical tricks that are pulled out whenever the initial point has been shown to be spurious. – gavin”
Funny, I think it is strong testimony when someone releases important results publicly but withholds important information on generating those results. Hansen’s code and script details are obviously very important towards generating those GISTEMP results.
It is perfectly reasonable to subject every piece of code to completely transparent review if the code establishes one of very few standard data sets for GW measurement, as Hansen’s does. It may be poorly configured code and scripts that are embarrassing for Hansen to release, but the world will get past that if it otherwise passes scrutiny.
[Response: The methodology is important to the results, and the methodology is explained in detail (with the effect of each individual step documented) in the papers. The fact that the results are highly correlated to the results from two independent analyses of mostly the same data (CRU and NCDC) is strong testimony to the robustness of those results. However, you are, I think, wrong on one point: the demand for more access and more data is actually insatiable. As one set of code is put out, the call goes up for the last set of code, and for the code and results from the previous paper, the residuals of the fits, the sensitivity tests and so on. Given that all this takes time to do properly and coherently (and it does), there will never be enough ‘openness’ to squash all calls for more openness. Whatever the result from releasing the current code (which very few of the people calling for it will ever even look at), the ‘free the code’ meme is too tempting for the political advocates to abandon. People who are actually genuinely interested in all of these questions will, I assure you, be much happier in the end if they code it themselves. Think of it as tough love. ;) – gavin]
Steve McIntyre says
Hansen et al 2001 says: “Only 214 of the USHCN and 256 of the GHCN stations within the United States are in “unlit” areas.”
The data set http://data.giss.nasa.gov/gistemp/station_data/station_list.txt
lists 294 USHCN and 362 GHCN stations with lights=0, and 308 USHCN and 371 US GHCN stations as dark (A). Can you please obtain a list of the 214 USHCN and 256 GHCN sites that were actually used in Hansen et al. 2001? What accounts for the difference between the numbers in the data set and the article?
[Response: I have nothing to do with that analysis and so you need to ask the authors. Bear in mind that GISTEMP is being updated in real time, as are the source datasets. What was available in 2001 is not necessarily the same as what is available now. But like I said, ask them. – gavin]
Steve McIntyre says
#205. For applied economics articles in major economics journals e.g. American Economic Review, it is mandatory to archive code and data as used, at the time of submission of the article. There’s no reason why this sort of “best practices” should not also be adopted in climate science, where there are important policy considerations.
And by the way, if you’re worried that no one’s going to look at the code, I promise that I will look carefully at the code.
[Response: As implied above, you will be much better off doing it yourself. It’s not a complicated procedure, and you could try all sorts of different methodologies on a consistent platform. If you come up with something substantially different, I’ll be surprised, but that would be constructive science. Graduate students sometimes put a lot of effort into deciding which choice to make in an analysis. My advice is invariably to simply do it one way and then go back and see if doing it the other way matters. If it doesn’t matter, it isn’t worth worrying about, and if it does, then you have an interesting result. The point is that idle theorising about potential issues is a waste of time when it is only a matter of days to actually find out. Work it out and see. If you are worried about microsite issues, do the analysis with only the ‘good’ sites. If you worry about urban issues, throw out the urban stations, etc. Each person’s interests are independent, and the direction their tests will take them is unique. Waiting on someone else to do your analysis for you is foolish. – gavin]
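As a toy illustration of the "do it one way and then the other" advice above, here is a minimal sketch (Python, entirely synthetic data; the station values and the rural flags are invented, and no real network is used): compute the trend from all stations and from only the "rural" subset, and compare.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2007)

# Synthetic station anomalies: a common underlying trend plus station-level noise.
# The 'rural' flag is assigned arbitrarily, purely for illustration.
n_stations = 40
common = 0.01 * (years - years[0])                        # 0.01 C/yr underlying trend
stations = common + rng.normal(0.0, 0.3, (n_stations, years.size))
is_rural = rng.random(n_stations) < 0.5

def mean_trend(series_block):
    """Least-squares trend (C per decade) of the station-mean series."""
    mean_series = series_block.mean(axis=0)
    slope = np.polyfit(years, mean_series, 1)[0]
    return 10.0 * slope

print(f"all stations : {mean_trend(stations):.3f} C/decade")
print(f"rural only   : {mean_trend(stations[is_rural]):.3f} C/decade")

If the two numbers differ noticeably, the choice matters and is worth pursuing; if they don't, it isn't.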
DaveS says
I think that’s nonsense. No one should have to “attempt” to replicate the results. They should be able to pull the actual, complete and unaltered dataset that was used and replicate it. They should see each and every station that was used to do the urbanization adjustment for any particular station. Every bit of that should be available, and there is absolutely no defensible reason why it isn’t.
I’m sorry, Gavin, but you’re wrong on this. People shouldn’t have to jump through hoops and do tons of trial-and-error hoping to “replicate” the results of a paper that somewhat ambiguously (despite your differing opinion on that) describes methodology, nor should their failure to do so be considered evidence that they are asking for the data in bad faith.
And, in the end, it shouldn’t matter if they are asking in bad faith or not. Every last scrap of data should be available to everyone, even people who disagree with your conclusions, and ESPECIALLY to people who are trying to tear apart your conclusions.
So, what’s wrong with that? I was shocked over the last few days to learn the extent to which those things AREN’T available, to be honest.
[Response: Underlying all of this is, I think, a big misconception about what replication means in an observational field like climatology. Everyone is working with ambiguous and imperfect data – whether that is from weather stations, paleo-records, models or satellites. Every single source needs to be processed in ways that are sometimes ad hoc (how to interpret isotopes, how to model convection, how to tie two satellite records together and, yes, how to adjust for urban heating effects). Robustness of a result is determined by how sensitive the results are to those different ad hoc assumptions (do different climate models have similar sensitivities, do results from south Greenland match those of north Greenland, does the UAH MSU record match the RSS MSU record). The important replication step is not the examination of somebody’s mass spec, or the analysis of lines of code, but in the broader results – do they conform with other independently derived estimates? Climate science is not pure mathematics where there is (sometimes) a ‘right’ answer – instead there are only approximations. These are outlined in innumerable papers which for decades have been the method of recording procedures, assumptions and sensitivities. To be sure, there are sometimes errors in coding (in the MSU record, or in climate models), but these come to light because the end results seem anomalous, not because there are armies of people combing through code. The independent replication of results is by far the more important stage in evaluating something.
Take the Greenland ice cores. The US and EU wanted to drill an ice core in central Greenland. They could have drilled one core, split the samples and done independent analysis of each sample. That would have given a good test of how good analytical techniques are. Fine. But what they elected to do was to drill two separate cores, 30 miles apart, and do everything independently. This was much, much more useful. Firstly they found that for most of the cores the results were practically identical (which demonstrated the robustness of the analytics as well as sharing the samples would have), but more importantly, they found that the cores diverged strongly near the bottom – a sure sign that there was something wrong at one or (as it ended up) both sites. Without the independent replication that non-robustness would not have been uncovered. Thus while there are already two independent replications of the GISS temperature analysis (CRU and NCDC) – both of which show very similar features – there is always room for more. That would demonstrate something. Looking through code that does a bi-linear fit of data will not. – gavin]
Patrick the PhD says
“Patrick: can you explain to us your expertise in physics, algorithms, statistics, analysis of imperfect data, software engineering, simulations? You have done most of this stuff professionally, right? [Of course, as an anonymous poster, it will be hard for us to check.]”
I already replied to this beside-the-point challenge but it was not posted.
Not sure why. Let me briefly say: I have a PhD in Computer Science and know many of the matters/areas related to this subject well enough to do some work here, but my skill/background is beside the point. I joined the call others made for the scientists in the area to do good, professional science by being as reproducible as possible with data and data analysis algorithm transparency. Open the details and code up for review. If I am being asked to do what those working in the field won’t do, I’d have to decline, as I have a job. But Steve McIntyre seems eager to review things, so work with him. Willingly. (See Judith Curry, #69, she ‘gets it’.)
John Mashey says
A problem with this thread is that we have two extremes, as seen before in other disciplines:
– Respected domain researchers (like Prof. Curry in this case), occasionally joined by others, like statisticians or software engineers who might have something constructive to add, who clearly want to improve the science and are happy to make normal-science critiques. Reasonable people can and do disagree about the mechanisms, cost trade-offs, etc.
On the other hand:
– People who have little interest in improving the science, or actually getting better answers, and mainly want to slow down research whose answers they don’t like, raise the publication bar very high, and in general, cost-effectively waste researchers’ time.
If you don’t believe this goes on:
Consider “Good Epidemiology Practices (GEP)”, which describe the rules for doing good studies. This is good, and versions are created/debated by various researchers. Who could argue with that?
In Allan Brandt’s “The Cigarette Century”, p306-307: it turns out that in late 1992:
“Following years of fighting epidemiologists, Philip Morris now initiated a campaign for “Good Epidemiological Practices,” organized to “fix” epidemiology to serve the industry’s interests by changing standards of proof. One of the objectives of the program, an internal memo explained, was to “impede adverse legislation.”
A really detailed analysis can be found (including the role of Steve Milloy) in:
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1446868
The tobacco case is unusual, in that large amounts of *internal* documentation are publicly available.
Google: philip morris gep tobaccodocuments.org
gets plenty of pointers, including strategy:
‘generate letters to editors (scientific/general media) promoting study and highlighting EPA/other weak ETS methodology inc… encourage libertarian policy groups to promote study and criticise the weakness of epidemiology, especially on ETS * independent scientists to push feature articles promoting confounders arguments.’
Effective strategies proliferate.
For a view of the more widespread use of these techniques (for instance, Milloy has moved on from secondhand smoke to fighting AGW), see:
Chris Mooney, “The Republican War on Science”, especially:
Ch6 Junking “Sound Science”
p 71: ‘”sound science” means requiring a higher burden of proof before action can be taken to protect public health and the environment.’
The Data Quality Act is supposed to assure the Federal Agencies “use and disseminate accurate information”. Who could argue with that?
See:
http://library.findlaw.com/2003/Jan/14/132464.html
http://www.washingtonpost.com/wp-dyn/articles/A3733-2004Aug15.html
http://en.wikipedia.org/wiki/Data_Quality_Act
It has 12 separate references in Mooney’s book. On the surface, it’s supposed to assure good data. In practice, it has been used to inhibit publication of awkward data.
======
Anyway, I continue to be amazed at Gavin’s patient willingness to reply to posters who seem more like anonymous drive-by sockpuppets to me and often seem clueless about serious software engineering and what it costs. Having spent some time looking at climate websites, I think GISS does a terrific job cost-effectively making data and (lots of) code available.
I appreciate the different level of effort involved in getting the code done for a research study, and producing a systems program product that other people are expected to use. Again, if somebody is really interested in improving worldwide data, spending most of their effort chasing 2% USA Lower-48 is very weird. Reasonable, well-informed people can and do disagree about good procedures, but much of this thread seems right out of the Philip Morris GEP or Data Quality Act playbooks, and it does not help good science…
Matt says
Gavin wrote in #205: However, you are I think wrong on one point, the rhetoric for more access and more data is actually insatiable. As one set of code is put out, then the call goes up for the last set of code, and the code and results from the previous paper, the residuals of the fits, and for the sensitivity tests and so on.
When you work on something unimportant, you are right, folks don’t care much and nobody asks many questions. But if you are working on something important, then I’m sorry but people will ask for more and more information. That’s the way it goes.
But don’t these types of questions come up anyway when someone is peer reviewing the work? If I am reviewing another engineer’s work, it’s common for me to ask that certain inputs change so that I can be satisfied of the impact on outputs. It’s a quick sanity check that catches more issues than you might imagine.
If you are only providing descriptions, then my guess is that most peer reviewers aren’t at all scrutinizing to the level that most think things are being scrutinized.
If I tell you I have a million lines of code that prove something you believe to be a stretch, and my code is secret, then the investment needed for you to demonstrate an error in my thinking is astronomical if you have to recreate everything from scratch.
This is what absolutely bugs me about all this climate research. As I’ve noted before, 1MLOC typically has 5,000 to 10,000 bugs lurking about unless you have an army of folks on QA.
[Response: As I said above, complex codes can’t be derived directly from the papers and so should be available (for instance ModelE at around 100,000 lines of code – and yes, a few bugs that we’ve found subsequently). The GISTEMP analysis is orders of magnitude less complex and could be emulated satisfactorily in a couple of pages of MatLab. – gavin]
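To give a flavour of what such an emulation involves, here is a bare-bones sketch of the core steps (in Python rather than MatLab, with invented station values and latitudes; this is not the GISTEMP code, just an illustration under those assumptions): convert each station to anomalies relative to a base period, then form an area-weighted regional mean.

import numpy as np

def station_anomalies(temps, years, base=(1951, 1980)):
    """Convert a station's annual means to anomalies relative to a base period."""
    in_base = (years >= base[0]) & (years <= base[1])
    return temps - np.nanmean(temps[in_base])

def regional_mean(station_temps, years, lats):
    """Cosine-of-latitude weighted mean of the station anomaly series."""
    anoms = np.array([station_anomalies(t, years) for t in station_temps])
    w = np.cos(np.radians(lats))
    return np.nansum(anoms * w[:, None], axis=0) / np.sum(w)

# Invented example: three stations, 1950-2006, with a small imposed trend plus noise.
years = np.arange(1950, 2007)
rng = np.random.default_rng(1)
temps = [15 + 0.01 * (years - 1950) + rng.normal(0, 0.3, years.size) for _ in range(3)]
lats = np.array([34.0, 41.5, 47.2])
print(np.round(regional_mean(temps, years, lats)[-5:], 2))   # last five years

A real emulation would add gridding, the rural-neighbour adjustment and missing-data handling, but the skeleton really is this small.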
Tim McDermott says
Gavin,
Here is some speculation that might be worth looking into by someone with sharper physics than mine.
I grew up in the desert of Eastern Washington State. I go back from time to time, and the weather has cooled. It used to be that every couple of years, late June or early July would have temperatures in the 110s. That doesn’t seem to happen now; I’ve been there in June when the temp didn’t get over 95 for a week. The common attribution of this change is irrigation. There are new irrigated vineyards, orchards, and even corn where there used to be only dry-land wheat.
My understanding of such things tells me that increasing humidity should depress high temperatures (and put a floor under lows). If this is true, then we may have the opposite of urban heat islands. At least in places where irrigation is increasing.
In poking around the web, I could find no records of relative humidity. I did find that about 2% of the total area of the US is under irrigation (to the tune of about 150 million acre-feet a year), and that the area under irrigation fell by about 10% from 1972 to 1984.
There are some interesting possibilities in looking at humidity as a moderator of temperature. Should high temperatures in areas that humans have desiccated (LA has captured entire rivers to feed its water needs) count in temperature trends? Should areas that irrigation has cooled? Was 1934 so hot because so much of the country was very dry? Did the growth of irrigation suppress temperatures in the mid 20th century?
Mike Donald says
#146
Thanks for that Tamino. That’s a link I’ve already used on two other forums. Grand stuff.
Eric says
A common definition of whether a software application is implemented correctly is that it “conforms to the specification” (a similar definition is sometimes used for the quality of products produced by a manufacturing process).
If I follow Gavin’s argument (and I do respect and appreciate Gavin’s efforts even if I disagree on this point), we should test Microsoft Word by developing a duplicate of Microsoft Word based on a hypothetical (since it’s from MS) detailed specification, and if the duplicate performs differently than Microsoft Word, then we may have found an error in Microsoft Word (or the duplicate)? I suppose this approach might work, but it does not seem to be a particularly realistic way to test software for conformance with the specification, nor is it used widely to test the accuracy of software implementations. For all but relatively simple algorithms, it is a potentially labor-intensive and time-consuming approach to checking the accuracy of an implementation. This is not an issue of “political advocates” but straightforward traditional software engineering, and also a major tenet of the open source movement. Thanks for listening.
Tim McDermott says
John Norris wrote: Reviewing code cuts to the chase. Making a new program makes the process more complex. Why on earth would you want to make it more complex?
Reading the code won’t cut to the chase. In the first place, the code is the least informative artifact in the software development process. It has had too much of the what and why removed. What you really need to understand a program is a statement of the requirements for the program, a layout of the architecture of the program, and then a description of the high- and low-level design of the components of the program. Once you have all that, then you can start to make sense of the code. I doubt that much documentation exists.
I once worked for the Navy modeling *mumble*mumble* The point of the exercise was to gain understanding of certain phenomena and how certain weapons responded. Our product was understanding, not software. Our work cycle was, approximately, to find an area that didn’t seem to be right to us. We might do some research, we might run our simulation against new conditions. Then we would decide to try something, code it, and run it. Then if it was a better match for our reference data, we would keep it and start over. We didn’t have to be perfect; we couldn’t be perfect. We just had to keep getting better.
If we had had to operate in the kind of fishbowl you think would be good for climate science, our productivity would have gone in the toilet. Everything would have taken two or three times longer, because we would have been writing documents we didn’t need for our work but that would be required by our auditors. And we would have been interrupted constantly by people wanting to know why we didn’t use longer variable names (F77, of course), don’t we know that common blocks are the root of all evil, and good gawd there’s a GOTO.
Exxon Mobil made, after taxes, nine thousand dollars every second of 2006. To think that they would not have folks poring over the code looking for things that just look funny (so that they could feed it to a blog to raise a stink) is, well … Not everybody just wants to understand.
Alastair McDonald says
Re #206
Tim,
It is already known that humidity sets a ceiling to high temperature. For instance in a jungle the temperature seldom goes above 80 F but in deserts over 100 F is not uncommon.
Water has several effects on climate, the first being that it prevents the surface from warming, because it absorbs the solar radiation as latent heat. This leads to more water vapour, which is a greenhouse gas, and this heats the air near the surface. But water vapour is a lighter gas than air, with a molecular weight of 18 compared with air at around 29. Thus the ‘wet’ air convects because it is warmer and lighter. The convection leads to cooling and the water vapour condenses and forms clouds. This cuts off the supply of solar radiation to the surface during the day and causes cooling. At night the blackbody radiation from the clouds keeps the surface warmer than on clear nights.
So it is not really the humidity that is causing the cooling. It is just a rather unpleasant side effect from the surface water.
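A quick back-of-the-envelope check of the buoyancy point (an illustrative ideal-gas calculation only, not a model result): replacing some dry air (mean molar mass about 29 g/mol) with water vapour (18 g/mol) at the same temperature and pressure lowers the density of the parcel, which is why moist air tends to rise.

# Density of moist vs dry air at the same T and p (ideal gas mixture, illustrative only).
R = 8.314                      # J/(mol K)
M_dry, M_h2o = 0.029, 0.018    # kg/mol (approximate)
T, p = 300.0, 101325.0         # K, Pa

def density(vapour_mole_fraction):
    # Mean molar mass of the mixture, then rho = p*M/(R*T).
    M_mix = (1 - vapour_mole_fraction) * M_dry + vapour_mole_fraction * M_h2o
    return p * M_mix / (R * T)

for x in (0.0, 0.02, 0.04):    # 0%, 2%, 4% water vapour by moles
    print(f"x_H2O = {x:.2f}:  rho = {density(x):.4f} kg/m^3")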
Barton Paul Levenson says
[[Others say that this pattern change is CO2 induced. Neither side has yet to prove their case. I know the GISS people are high on CO2 causation, but they lost credibility when they keep their code secret (mistakes and all) from those who want to check their work.]]
The code from the GISS GCM is available on the web:
http://www.giss.nasa.gov/research/modeling/
As for CO2 accumulation raising surface temperatures, that has been clear since John Tyndall demonstrated that CO2 is a greenhouse gas back in 1859. It doesn’t depend on modern GCM results.
Barton Paul Levenson says
Re #172 — Tamino, you were right and I was wrong. For some reason I had gotten it into my head that you (or the original poster) was saying the lower 48 were not warming, whereas apparently you were saying the warming trend was not accelerating. My bad.
Barton Paul Levenson says
[[I have this doubt:
If it is man-made CO2 that pushes global warming, naively I would exspect that the US should be among the world regions in which the warming is MORE severely felt.
But now, after NASA revision, it seems that US are among the places LESS affected by long term warming…
There is something to ponder upon or everything is perfectly clear?]]
Mario —
CO2 is well mixed in the troposphere due to convection and turbulence. On a regional scale the proportion of carbon dioxide in the air is pretty much the same everywhere.
dhogaza says
So if the GISS code implements a relatively simple algorithm, you don’t mind if only the algorithm(s), not the code’s been published?
That would seem to be the implication of your post.
Gavin:
Hmmm, hey, it *is* relatively simple.
Barton Paul Levenson says
[[Now, moving on to global temperature anomalies, does anyone think there is an accelerating trend in the global data? Does anyone have some analysis of the time series to support this?]]
I took the GISS global annual temperature anomalies from 1881 to 2000 and divided them into 12 decades, each with a time variable T = 1 to 10, and regressed the figures on T for each decade. I then took the coefficient of the T term for each decade and regressed that on the decade number 1-12. The trend was up but not significant, which is perhaps attributable to the small sample sizes involved.
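For anyone who wants to repeat that exercise, the procedure is straightforward to sketch (Python, shown here on a synthetic series standing in for the GISS anomalies; the numbers below are not the real data): fit a slope within each decade, then regress the twelve slopes on the decade number and check the significance.

import numpy as np
from scipy import stats

# Synthetic stand-in for 120 years of annual anomalies (1881-2000); not the GISS data.
rng = np.random.default_rng(2)
years = np.arange(1881, 2001)
anom = 0.005 * (years - years[0]) + rng.normal(0, 0.15, years.size)

# Slope within each of the 12 decades (T = 1..10 within each decade).
decade_slopes = []
for d in range(12):
    block = anom[d * 10:(d + 1) * 10]
    t = np.arange(1, 11)
    decade_slopes.append(np.polyfit(t, block, 1)[0])

# Regress the decadal slopes on decade number 1..12 and test the trend of trends.
res = stats.linregress(np.arange(1, 13), decade_slopes)
print(f"change in decadal slope: {res.slope:.4f} (C/yr) per decade, p = {res.pvalue:.2f}")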
john mann says
BlogReader #72 said: Odd you didn’t mention that maybe some plates might be sliding lower into the ocean. Exactly how much of a rise is there? And how does it vary across the globe?
Well, plates do slide against each other, but sea level changes due to tectonics tend to be on the order of 1 cm per thousand years, and these would generally result from doming events. Sliding plates can cause more dramatic changes when there is elastic springback after tension has been released, but there haven’t been any such events in recorded human history as far as I have found.
Any change in sea levels due to tectonic events would have both local and global results: local changes will be due to the changes in plate tensions and buoyancy, while global changes will result if the size of the oceanic basin changes.
scp says
Realclimate said:
“Another week, another ado over nothing. … In the global or hemispheric mean, the differences were imperceptible (since the US is only a small fraction of the global area)….”
From the link in 184: Hansen et al., 2001 says-
“… Although the contiguous U.S. represents only about 2% of the world area, it is important that the analyzed temperature change there be quantitatively accurate for several reasons.
Analyses of climate change with global climate models are beginning to try to simulate the patterns of climate change, including the cooling in the southeastern U.S. [Hansen et al., 2000]. Also, perceptions of the reality and significance of greenhouse warming by the public and public officials are influenced by reports of climate change within the United States…”
If Hansen et al. said accurate analysis of temperature changes are “important”, can we conclude now that the call for transparency is not actually “another ado about nothing”?
If I were inclined to read between the lines, that paragraph from Hansen et al, might raise my eyebrows for different reasons too, but that’s a topic for another day.
Bob Beal says
On the Toronto Star’s website this morning, Stephen McIntyre is quoted as saying that he caused “a micro-change. But it was kind of fun.”
Steve McIntyre says
#207. Gavin, I think that the problem that you’re failing to come to grips with is that when results are used for policy purposes, people expect different forms of due diligence than for little academic papers, which have received only the minimal due diligence of a typical journal review. People are entitled to engineering-level and accounting-level due diligence.
The reason that current replication practices in econometrics require archiving of code is that this makes post-publication due diligence much more efficient. There are a number of interesting links and discussion of replication policy at Gary King’s website http://gking.harvard.edu/replication.shtml . The McCullough and Vinod articles are relevant and were relied on in the American Economic Review adopting its replication policy. (The editor at the time is presently the Chairman of the Federal Reserve.)
Your statement that no one had apparently tried to emulate the Hansen methods is itself evidence of the burden in trying to run the gauntlet of assembling the data and decoding the methods and precisely illustrates the obstacles to replication discussed in McCullough and Vinod (and its references) and why the American Economic Review changed its policy to require such archiving.
In addition, the GISS temperature series is essentially an accounting exercise, rather than a theoretical exercise. In an accounting audit, you don’t just hand a bunch of invoices to company auditors and say – well, do your own financial statements. Yes, maybe they’d learn something from the exercise, but that’s not the way the world works. Their job is not to re-invent the company’s accounting system, but to verify the company’s accounting system. Sure there’s a role for re-doing Hansen’s accounts from scratch, but there’s also a role for examining what he did. If he’d archived his code, then it would have been easy to see where he switched from using USHCN raw data to adjusted data. You’d ask – why did you change data sets? You might also consider the possibility that if they’d gone to the trouble of properly documenting and archiving their code, maybe they’d have noticed the error themselves.
The other problem that arises is that, if I do what you said – emulated GISS methods and arrived at different numbers, it’s not hard to imagine a situation where GISS said that my implementation of their method was wrong [edit] and right away everyone has a headache trying to sort out what’s going on.
The purpose of inspecting source code is precisely to avoid these sorts of games. I asked for code to avoid pointless controversies [edit]. Contrary to an impression that you’ve given, I wouldn’t try to run their code as is. My own practice is to re-write the code in R (as you would do in Matlab), recognizing that it is fairly trivial, and then try to test variations.
However even in “trivial” code, little things creep in. If you can read their Fortran code, this can elucidate steps and decisions that may not be described in the written text. [edit]
If one wants to test the impact of (say) using only rural stations on Hansen’s numbers or of using “good” stations on US temperature data, something that is on my mind, then you need to benchmark the implementation of Hansen’s methods against actual data as used and actual results, step by step, to ensure that you can replicate their results exactly and then see what the effects of changing assumptions or methods are. Only by such proper benchmarking can one ensure that you are analyzing the effect of rural stations and not unknown differences in methodology. This seems so self-evident that I don’t understand why you are contesting it.
[Response: Because, frankly, I find the ‘audit’ meme a distraction at best. I am much more interested in constructive science. Scientifically, independent replication – with a different set of ‘trivial’ assumptions – is far more powerful (viz. the Greenland ice core example) than any amount of micro-auditing. If there is a difference, then go to the code differences to see why (i.e. UAH and RSS), but if you can show that the main result is robust to all sorts of minor procedural changes, then you’ve really said something. You have all the data sets from USHCN, GHCN, and GISS, and you have demonstrated in a number of plots that all the GISS adjustment does is make a bi-linear adjustment to the stations based on close neighbour rural stations. How difficult is that to code? If the net result is significantly different than the GISS analysis then look into it further. If it isn’t, then why bother? In this field, methodology is not written in stone – it’s frequently ad hoc and contains arbitrary choices. Pointing out that there are arbitrary choices is simply telling us what we already know – showing that they matter is the issue. That kind of constructive analysis is how the rest of the field works – if you think you can do better and make better choices that are more robust to problems in the data, then that makes a great paper. Simply saying something is wrong without offering a better solution is just posturing. It’s worth pointing out that the GISTEMP analysis started out exactly because they were unhappy with how the station data were being processed elsewhere. – gavin (PS. edits to keep discussion focussed)]
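As a rough illustration of the kind of two-legged ("bi-linear") neighbour-based adjustment being described, here is a sketch (Python, entirely synthetic series; it illustrates the general idea only and is not the GISS code): fit a broken line to the urban-minus-rural-neighbour difference and subtract it from the urban record.

import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1900, 2007).astype(float)

# Synthetic example: a rural-neighbour mean and an urban station that picks up
# extra warming after 1950 (the 'urban' part we want to remove).
rural = 0.005 * (years - 1900) + rng.normal(0, 0.1, years.size)
urban = rural + np.where(years > 1950, 0.02 * (years - 1950), 0.0) \
        + rng.normal(0, 0.1, years.size)

def broken_line_fit(t, y, breakpoints):
    """Least-squares two-segment (bi-linear) fit; returns the best fitted curve."""
    best = None
    for tb in breakpoints:
        X = np.column_stack([np.ones_like(t), t - t[0], np.maximum(t - tb, 0.0)])
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        fit = X @ coef
        sse = np.sum((y - fit) ** 2)
        if best is None or sse < best[0]:
            best = (sse, fit)
    return best[1]

diff = urban - rural
adjustment = broken_line_fit(years, diff, breakpoints=years[10:-10])
urban_adjusted = urban - adjustment

print("trend before adjustment:", round(10 * np.polyfit(years, urban, 1)[0], 3), "C/decade")
print("trend after  adjustment:", round(10 * np.polyfit(years, urban_adjusted, 1)[0], 3), "C/decade")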
Khebab says
For those interested, I’ve made a few supplemental charts from Gavin’s datasets, in particular the US and Global temperatures on the same chart:
http://graphoilogy.blogspot.com/2007/08/us-temperature-revision.html
bjc says
Gavin:
You don’t seem to have addressed the point that the AEA has and presumably enforces a policy of archiving all data and code. Are you suggesting that such a practice has no value?
Khebab: Thank you, these are useful charts, especially the one overlaying US and Global temperatures. (The one I twice asked Gavin to produce!) It sure focuses attention on the 30s and the question of whether the US pattern was really regional or global. I quickly checked Canadian data for the same time period and it seems to conform to the US pattern, which of course means that its weighting is significantly higher than 2% – probably approaching 13% of land-based temperature measures.
tamino says
I’ve never bothered to reproduce the entire NASA GISS analysis, mainly because I trust their results and reproducing it would be too much work. Besides, HadCRU and NCDC have done essentially the same work (albeit with different algorithm choices) and got essentially the same results — that’s pretty damn robust.
Most of the work would be in acquiring and formatting all the data and metadata. The fact is that making public all the code for all the programs and all the scripts wouldn’t much lighten the workload. Besides, I’m not at all interested in finding out whether running the same program on the same data will produce the same results — that’s obvious! Nor am I interested in going through all the programs and all the scripts line-by-line looking for potential errors. Program/script code can sometimes reveal simple errors, but it doesn’t really help very much in determining whether the answer is right according to the stated algorithm. Debugging my own code is a royal pain; debugging someone else’s is an exercise in self-flagellation.
I’d much rather acquire the data and metadata, then write my own programs to process it according to the stated algorithm. If my results agree, then I’d know that they got it right — and I did too. If not, I’d start debugging my own code until I was satisfied it was correct. If the results still disagree, I’d say so.
I’d never bother to debug someone else’s code. There are too many ways to make the code look wrong when in fact it’s right! And too many ways to make the code look right when it’s wrong. I’m not interested in disentangling someone else’s twisted logic, which very well might look right/wrong when in fact it’s wrong/right.
If NASA wants to release the code, and people want to pore over it line-by-line, more power to ’em. But I’d rather my tax dollars were spent funding real NASA research than preparing programs and scripts for public release, to feed a call for “openness” which is really more motivated by a desire to discredit than a desire to discover.
As for actually reproducing the analysis from what is published, it’s a lot of grunt-work but really is not that complicated. Why haven’t the denialists done so? Here’s my theory: it involves a lot of actual work. You guys want to know whether NASA got it right? Get busy — put up or shut up.
Goedel says
Economics would have much less need to verify code correctness if its theory was ever allowed to meet observation.
Rob Negrini says
#226. Khebab, Could you show the global data before and after? That’s what’s important, after all.
Climate Tony says
Didn’t the “1998 is the warmest year” claim originate with the NOAA/WMO figures? That is, wasn’t it NOAA the media was citing all this time, not NASA/GISS? Have NOAA’s rankings of largest anomalies been affected here, or do they remain the same?
[Response: Globally, all the indices showed that 1998 was a record breaker. In the GISS analysis, 2005 just pipped it to the post (as it does in NCDC product). For CRU, 1998 is still the warmest. The differences between the years are small, and the different products have slightly different rankings. Nothing from NOAA or CRU is affected by this correction to the GISTEMP analysis (and even there the global mean changes are too small to see). – gavin]
Hank Roberts says
Dr. Curry writes “Climateaudit has attracted a dedicated community of climateauditors, a few of whom are knowledgeable about statistics (the site also attracts ‘denialists’)” and “in the long run, the credibility of climate research will suffer if climate researchers don’t ‘take the high ground’ in engaging skeptics….”
Mr. McIntyre and the “few … knowledgeable about statistics” are facing a choice Saul Alinsky has written about in “Rules for Radicals” — when you have a little success as a critic, you will need to decide whether you’re trying to improve the institution, or tear it down. You then choose either to stay with the people who got you to the gates, outside, or leave them to go inside.
Dr. Curry’s good advice is the same advice she gave over at CA to the people about the kid whose high school paper chart error got pointed out last month, deflating her attack on Hansen — when someone points out an error, check it, fix it, and move on.
A huge one-sided error like that high school kid’s graph blows the whole paper.
Little errors, when they are caught, improve the product.
The successful ‘auditors’ who understand statistics are, in fact, improving the product. This can’t please the ‘denial’ crowd.
tamino says
Re: #230 (Rob Negrini)
You can see that comparison here.
Khebab says
Re: #227
I’ve just added the chart.
Lawrence Brown says
If anyone wants to go through the source program of the GISS climate model, God bless them! Below is a small sample of FORTRAN statements from ModelE taken from “Field Notes From A Catastrophe” by Elizabeth Kolbert (pp. 101-102)
C***COMPUTE THE AUTOCONVERSION RATE OF CLOUD WATER TO PRECIPITATION
      RHO=1.E5*PL(L)/(RGAS*TL(L))
      TEM=RHO*WMX(L)/(WCONST*FCLD+1.E-20)
      IF(LHX.EQ.LHS) TEM=RHO*WMX(L)/(WMUI*FCLD+1.E-20)
      TEM=TEM*TEMC
      IF(TEM.GT.10.) TEM=10.
      CM1=CM0
      IF(BANDF) CM1=CM0*CBF
      IF(LHX.EQ.LHS) CM1=CM0
      CM=CM1*(1.-1./EXP(TEM*TEM))+1.*100.*(PREBAR(L+1)+
     *     PRECNVL(L+1)*BYDTsrc)
      IF(CM.GT.BYDTsrc) CM=BYDTsrc
      PREP(L)=WMX(L)*CM
      END IF
C**** FORM CLOUDS ONLY IF RH GT RH00
  219 IF(RH1(L).LT.RH00(L)) GO TO 220
C**** COMPUTE THE CONVERGENCE OF AVAILABLE LATENT HEAT
      SQ(L)=LHX*QSATL(L)*DQSATDT(TL(L),LHX)*BYSHA
      TEM=-LHX*DPDT(L)/PL(L)
      QCONV=LHX*AQ(L)-RH(L)*SQ(L)*SHA*PLK(L)*ATH(L)
     *     -TEM*QSATL(L)*RH(L)
      IF(QCONV.LE.0..AND.WMX(L).LE.0.) GO TO 220
C**** COMPUTE EVAPORATION OF RAIN WATER, ER
      RHN=RHF(L)
      IF(RHF(L).GT.RH(L)) RHN=RH(L)
There must be untold thousands of lines of code. It takes the largest computers today a month for a single run to simulate 100 years of climate. It would be an enormous job for any individual, or even a group of individuals.
Another difficulty is that even though the components of the model that use the basic laws of physics are the same for different models, the parameterized components differ from model to model.
barry says
Link to “the 2001 paper describing the GISTEMP methodology” is broken. Climate Audit has been sabotaged. An act of Gaia?
[Response: link fixed… thanks. – gavin]
dhogaza says
Your criticism of climate science would be more persuasive if you weren’t wrong yourself.
5 of the top 10 hottest years GLOBALLY are still in the last decade. Which has been the claim all along. As has been pointed out, the “G” in “AGW” doesn’t stand for “The Lower 48”.
Sheesh.
Neil B. says
Skeptic/deniers often claim (like at Watts Up With That?) that climate scientists (like Hansen) won’t release their algorithms/SW (should see a joke in there…). I doubt that, and if not so, where can I find evidence of the algorithms/SW to show around?
Timothy Chase says
In response to Steve Mosher’s #156…
I wrote:
steven mosher (#156) responded:
Looking at the raw data for Lake Spaulding, you can most definitely see what would appear suspect: it goes from roughly 11.75 to 7.25 within the span of approximately ten years – very early on in the records. That is suspicious. It is particularly suspicious when you compare it with the neighboring rural stations.
Which stations?
This will give you a list:
http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?lat=39.32&lon=-120.63&datatype=gistemp&data_set=0
There are four rural neighboring stations within 71 miles.
Here is a chart of the raw data:
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425745010040&data_set=0&num_neighbors=1
You can compare it with the neighboring rural stations:
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425745010040&data_set=0&num_neighbors=3
One station doesn’t go back that far. Another shows almost comparable variability – though not enough to result in such a distortion – but the nearest neighbor shows nothing comparable.
Additionally, he does not simply posit contamination, but points out that we know in fact that such contamination took place in those years – and identifies the known potential sources. Can we say that these stations were affected in this way? Of course not. If we knew the exact way in which they had been “contaminated,” we could correct for this. But since we do not have this information, the uncertainty of the effect combined with the evidence that such an effect may have played a part in the temperature records is enough to throw out those years of data.
More importantly, the reason for dismissing the earliest data (at least in the case of the station that I investigated) is that the extreme variability at the earliest point was inconsistent with that of other stations and unrepresentative of any long term trend. I believe you will find that the same is true of the other stations if you are inclined to investigate as I have done. And as he said, the distortion would have been transmitted to the urban stations as the result of their methodology.
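The sort of consistency check described here can be sketched very simply (Python, with invented annual means; no real station IDs or data are used): compare each year of a station's record against the mean of its rural neighbours and flag the years that stand out.

import numpy as np

# Invented annual means for one target station and three rural neighbours (deg C).
years = np.arange(1915, 1935)
rng = np.random.default_rng(4)
neighbours = 10 + 0.01 * (years - years[0]) + rng.normal(0, 0.3, (3, years.size))
nbr_mean = neighbours.mean(axis=0)
target = nbr_mean + rng.normal(0, 0.1, years.size)
target[:4] += np.array([2.0, 1.6, 1.1, 0.7])   # a suspicious early-record excursion

# Flag years where the target departs from the neighbour mean by more than
# twice the typical scatter of that difference.
diff = target - nbr_mean
threshold = 2 * np.std(diff)
for yr, d in zip(years, diff):
    if abs(d) > threshold:
        print(f"{yr}: departs from neighbour mean by {d:+.2f} C")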
*
In any case, this is very early on in the temperature record. What we are concerned with when it comes to global warming is principally from 1978 forward. An adjustment of this sort would presumably require more justification during this period – particularly since we have better kept records and more rigorous procedures in place.
I wrote:
Steven Mosher wrote:
No need to link to the studies – I found them myself. Finding them takes only a few minutes, but the reading a bit longer.
Thomas C. Peterson, “Assessment of Urban Versus Rural In Situ Surface Temperatures in the Contiguous United States: No Difference Found”, Journal of Climate, Vol. 16, No. 18, 15 September 2003.
http://www.ncdc.noaa.gov/oa/wmo/ccl/rural-urban.pdf
Steven Mosher wrote:
That is a very brief synopsis – enough to give the reader some idea of what is in a well-known paper – which they should probably read anyway, since in my estimation it is an excellent example of what research should be like.
Peterson goes through quite some history in terms of the studies which were performed prior to his study and lists the various factors the authors failed to take into account. There are uncertainties, and he acknowledges them. As far as the various factors which would influence station temperatures are concerned, he listed what the major ones were and the empirically determined adjustments that these would require – as identified by past literature – and then filtered out those effects.
I quote:
A detailed analysis of each of these factors as well as explanation of the manner in which such factors may be expected to affect the readings is given. Additional studies are cited for each of these effects. Progress, building upon earlier, more well-established work. The residual would be what is left over after all other factors are accounted for. This is what is attributed to the urban heat island effect: 0.04 C. It is not statistically significant. Once results were obtained, additional tests were performed for robustness.
You wrote:
Peterson suggests that there are a variety of well-known reasons why the urban heat island effect is relatively insignificant. Park cool island effects are certainly one of these. Another potential mitigating factor will be bodies of water – since urban sites are more likely to be near bodies of water. Clouds are more likely to form over urban areas as the result of urban heat, but in the process they are more likely to shade those areas where sites are located, reducing the effects of solar radiation.
How do you test for a strawman or cherry-picking? Park cool islands are just one of the well-known effects which may account for the unimportance of urban heat islands. Besides, a photo won’t pick up the thermal, but this is precisely what we are interested in as heat cells are what determine the boundaries which isolate a park cool island from its surrounding environment.
Peterson states:
Likewise, as has been pointed out to you on a rather large number of occasions:
https://www.realclimate.org/index.php/archives/2007/07/no-man-is-an-urban-heat-island/
… a site which is in a poor location will not produce a trend. A site where an air conditioner is installed beside it at some point will not produce a trend – but a jump. These are things which would be picked up by statistical analysis – but they would not produce a trend. As has also been pointed out to you on numerous occasions, we get virtually the same trend whether we are using all stations or simply rural stations. We get quite similar trends whether we are using surface stations or the lower troposphere. The lower troposphere has higher variability, but shows essentially the same trend.
Steven Mosher wrote:
So I have noticed.
Timothy Chase says
This is a response to Steven Mosher’s #157. My apologies at the outset, but this will include some philosophy, although principally in defense of the contextual, fallibilistic, self-correcting approach of empirical science – as Steven Mosher’s position is fundamentally opposed not to the approach of climatology but to that of science. However, I doubt that I will ever have to deal with anything of this nature again, at least in this forum.
Please feel free to pass over it if this is not something which interests you. However, if you are inclined to read it, I have tried to make this easier by dividing it into titled sections.
*
Duhem and Quine
I had written:
Steven Mosher responded:
You are speaking of “The Aim and Structure of Physical Theory,” but this is an English translation of the book “La Theorie physique: son objet et sa structure,” which was published in 1906, although the original article which expressed the central insight was published as an essay in 1892 as “Quelques reflexions au sujet des theories physiques.”
In all honesty, I couldn’t remember after all these years whether it was 1892 or 1893, so I had to look it up. But as part of an essay which critiqued early twentieth century empiricism, including logical positivism in all its major forms (first individually and then as a whole by means of self-referential argumentation), operationism, operationalism, etc., and finally offered a critique of the analytic/synthetic dichotomy by means of self-referential argumentation, I referred to it in a critique of Karl Popper’s principle of falsifiability and expounded roughly the same argument.
As I have said, I was a philosophy major.
*
The Fatal Flaw in Relativism and Radical Skepticism
I had written:
You responded:
Since he is a strong coherentist, he would argue that even such distinctions as those between identification and evaluation, perception and emotion, subject and object are not ultimately based in observation, but are simply part of a web of belief in which even the law of identity is a subjective preference of sorts. His argument is essentially that in dealing with quantum mechanics, one may choose to employ an alternate logic. This much is true – as such an alternate could very well offer one greater theoretical economy.
But from this he concludes that it may be appropriate to abandon even at root the law of identity itself. As such, his “pluralistic realism” is fundamentally a form of extreme relativism. This however runs into a problem in that for an alternate logic to be regarded as an alternative to conventional logic, it must be internally consistent, but one must presuppose conventional logic and the law of identity in some form simply in order to test for internal consistency.
Any physical theory which is internally inconsistent would be logically untenable. Even in the case of quantum mechanics, one would have to abandon the theory if it proved internally inconsistent. However, assuming quantum mechanics is tenable, which is something which I believe we could both agree upon for our present purposes, it must be internally consistent.
Quantum mechanics may be counterintuitive – and it is.
However, for it to make specific predictions, even if they are expressed only in probabilistic form, it must be internally consistent – otherwise, as a matter of logic, it would be possible to derive any contradictory set of predictions, in which case it would be untestable. Given the need for internal consistency, the law of identity is at root unavoidable, although one may in logic choose a formalism which expresses it in some alternate form.
I will also point out that extreme relativism presupposes a form of radical skepticism in which the world that exists independently of our awareness of it is fundamentally unknowable. It should be clear given your statements regarding Quine’s theory that this applies to Quine as well. However, extreme skepticism is self-referentially incoherent.
This means that it has a fatal flaw which is closely related to the internal inconsistency which results from stating “I know that there is no knowledge.” This problem lies in the fact that when one asserts that radical skepticism is true, implicit in one’s assertion is the assertion that it is something which one knows. As such, there is a fundamental, fatal internal inconsistency implicit in its affirmation which renders it entirely untenable.
*
The Purpose of this Forum
Now I should quickly point out that the above is merely a brief summary. To properly deal with the issues which I have mentioned above at the level of technical philosophy would take a great deal longer. But this is not a graduate level philosophy course. This is a forum devoted to climatology. For this reason, I will not go into any more depth on these issues than I have treated them here. Duhem’s thesis could very well be a different issue as it deals with the interdependence of scientific knowledge, but since it is already something that we both agree upon I believe that is unnecessary.
*
The Nature of Science
Now with regard to climatology, no doubt you will point out that in my previous post it became quite clear that in arriving at any of the conclusions of the authors, one had to presuppose a fairly extensive background of assumptions – in very large part the conclusions of earlier papers. However, it should also be clear that this is how science works. It is cumulative.
Even when an earlier theory is superseded by a more advanced theory, however much the form in which the more advanced theory is expressed differs from that of the earlier theory, it must be consistent with the evidence which formed the basis for the acceptance of the earlier theory. The form changes, but much of the substance is preserved.
This forms the basis for the correspondence principle, which you are no doubt aware of. As such, there is nothing invalid in the cumulative nature of climatology. Additionally, it is clearly fallibilistic. We will make mistakes. But given the systemic nature of human knowledge and empirical science, we can expect to uncover our mistakes in time.
There are degrees of justification, and where a given conclusion is justified by multiple independent lines of investigation, the justification that it receives is often far greater than the justification it would receive from any one line of investigation considered in isolation. This applies to all empirical science. As such, the fact that its methods are fallible and its conclusions fail to achieve Cartesian certainty cannot be held against it any more than it could be held against any other empirical proposition.
*
Given the preceding sections, I believe we can skip much of the rest which follows, at least those sections dealing with issues in philosophy and the philosophy of science.
*
You state:
We will return to some of this in a moment.
But how much is to be determined by means of scientific investigation, and the results of such an investigation are to be regarded as our best estimate until further scientific investigation determines otherwise. It is not something to be decided by means of philosophy, word games or ideology.
Statistically, since 1978 they have been increasing worldwide according to virtually every study which has investigated it. Likewise, the global temperature was considerably higher in 1998. This has been sufficient for every major scientific body which has taken a position on climate change to acknowledge that it exists, and for mainstream science to acknowledge that by any rational human standard it is quite dangerous. Word games at this point will endanger a great many people, the world economy and quite possibly even more.
We have dealt with that at length – in the thread:
https://www.realclimate.org/index.php/archives/2007/07/no-man-is-an-urban-heat-island/
Now returning to your earlier statement, you begin by quoting me and then respond:
But this is not what has been suggested ever since you began participating in the debate here.
From your previous post:
This has clearly been your attitude since you first arrived – which is in line with the radical skepticism that I have dealt with above.
*
Your Motivation
Previously I had stated that the active skeptics with regard to anthropogenic global warming were generally motivated by a misguided concern for the economy, by financial interests or by ideology. The last of these was exemplified by you and your interaction with Ray Ladbury. At the time I did not know what your ideology was, but it was obvious that you were fairly intelligent and that your motivation had nothing to do with a concern for the genuine facts of the matter.
As I like to know who I am dealing with, I decided to do some digging.
Within five minutes, I found that you are the president of the Population Research Institute, a spinoff of Human Life International, a pro-life organization. You advocate population growth, as you view any attempt at zero population growth as contrary to your pro-life stand.
Here is the evidence for your position as president of PRI:
An Interview with Steven W. Mosher, President of the Population Research Institute
By John Mallon
http://www.pop.org/main.cfm?id=151&r1=10.00&r2=1.00&r3=0&r4=0&level=2&eid=678
Here is the logic of your ideological position against acknowledging what your organization views as the environmentalist nature of climatology – as expressed by your vice president:
300 Million and the Environment
Friday, October 20, 2006
By Joseph A. D’Agostino
http://theologyofthebody.blogspot.com/2006/10/300-million-and-environment.html
Now I do not care to debate ideology with you. However, your ideology is irrelevant to climatology and your approach is fundamentally anti-science. You will not be swayed by any evidence or argumentation.
We have no further reason to debate you.
Patrick says
“If anyone wants to go through the source program of the GISS climate model, God bless them!”
Sure, I’ll go through it, and I suspect many others in the ‘open source’ software community would jump at the chance to help others make use of it.
Where is it available?
[Response: http://www.giss.nasa.gov/tools/modelE – this is the code that was run for the AR4 simulations, and so is now a little out of date (Feb 2004) – thus any issues are likely to have been fixed in our current version. But it gives you a feel for what the models are like and how they run. I plan on putting up the latest version at some point in the near future. – gavin]
Richard Ordway says
> #20 Niller writes:
> 2) The North Pole is melting so that there will soon be a North-West Passage to which Canada is laying claims.
Your information is wrong. The Northwest Passage is already open for commercial purposes.
The open, peer-reviewed journal Science reported that the Northwest Passage was already open in 1999:
“In 1999, Russian companies sent two huge dry docks to the Bahamas through the usually unnavigable Northwest Passage, which winds through the labyrinthine…”
This is old news in the open, world-wide, peer-reviewed scientific community.
http://www.sciencemag.org/cgi/content/summary/291/5503/424
http://www.google.com/search?hl=en&q=1999+russians+northwest+passage+bahamas&btnG=Google+Search
garhane says
This general reader found Gavin’s comments dead on. The only point of vulnerability seems to be the danger that non-scientists, like McIntyre, can animate the boob circus masters to work through the political system and fasten a choke hold on a field of science. You can see what happens then if you review the grim story of the Bush administration’s interference and outright sabotage of science for political ends in the blog by Rick Piltz, Climate Science Watch.
I suspect it will not be enough to gather round the water hole from time to time and agree on how unfair all this is. Perhaps the scientists in this field, climate science, who will always be working with close approximations and arbitrary bits here and there, will need to set up a self-defence organization. What will they need to defend? Their right to work the science as they please, not as meddling strangers want to control them to do, which would strangle the science. Perhaps it is turning out that Mann had exactly the right way to handle the non-scientific opposition: stiff them, knock them down, give them a few more kicks. Like Terry O’Reilly played hockey – always finish a check.
The Union of Concerned Scientists seems to have impact, and you can see they use some of the weapons of the system itself.
Walt Bennett says
Re: #225, and Gavin’s response,
As a software engineer myself, I can see where Steve is coming from. Software is the analytical tool by which these scientific conclusions are reached. As I recall from reading about the history of computers in climate science, it was not possible to crunch the numbers faster than the time span being simulated until the advent of fast computers. This means that these computers, and the software which runs on them, are the tools.
In fact, to go Steve one better: it matters on what hardware the simulation was run, and it matters what the OS was, and it matters what the release level was, and it matters what else was on that computer. Any of these things could cause corruption.
As Steve points out, this is about policy-level decision making, and I can assure you that this member of the public seeks the highest possible standards from science which has such potential to define economic paradigms far into the future. Probably the future of the nation state depends on how well nations organize to confront AGW.
So, Gavin, there is just a little import to these issues. I’m sure you’d agree. And while your point is valid that good science means doing it yourself and seeing what happens, Steve is also right that controversy (“my results are different than yours”) only leads us back to where we are. In other words, audit-as-you-go is a very sensible approach.
I disagree with Steve on one point: the auditing can, and probably should, be done in-house. It will take considerable understanding of the processes involved to audit them fairly. However, the auditor needs broader knowledge than the climate scientists and engineers in a few areas: 1) he/she must be well versed in the issues I cited above regarding hardware and OS; 2) he/she must have a strong background in statistics and other forms of analysis. It would be mighty helpful if this person or persons had previous software-auditing experience.
The sad truth is that there is no alternative. Gavin, your approach leads us here: there could be an error in both your process and somebody else’s; verifying them against each other will not reveal the error. Or: your results are different than somebody else’s, but he is poorly funded and has little support, and his concerns are dismissed. Or: Nobody has the time or money to replicate your gigantic simulations, so there is nothing with which to compare your results.
Unless you have some other, heretofore unknown empirical method for validating the results of your simulations, then I side with Steve: your processes must be thoroughly and competently audited, on an ongoing basis.
The work you do is just too important.
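One modest, concrete form of the “audit-as-you-go” idea – offered purely as a hypothetical sketch in Python, not a description of anything GISS actually does – is an automated regression check: rerun the model after a hardware, OS or compiler change and compare the new output against a reference run, flagging any difference beyond round-off. The file format assumed below (plain whitespace-delimited numeric tables) is invented for illustration.

import sys

def load_table(path):
    """Read a whitespace-delimited numeric table into a list of rows.
    (The format is an assumption made for this sketch.)"""
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comment lines
            rows.append([float(x) for x in line.split()])
    return rows

def compare(ref_path, new_path, rel_tol=1e-12, abs_tol=1e-12):
    """Return (row, column, reference value, new value) for every entry
    where the two runs disagree by more than the stated tolerance."""
    ref, new = load_table(ref_path), load_table(new_path)
    mismatches = []
    for i, (r_row, n_row) in enumerate(zip(ref, new)):
        for j, (r, n) in enumerate(zip(r_row, n_row)):
            if abs(r - n) > max(abs_tol, rel_tol * max(abs(r), abs(n))):
                mismatches.append((i, j, r, n))
    return mismatches

if __name__ == "__main__":
    diffs = compare(sys.argv[1], sys.argv[2])
    if diffs:
        print(f"{len(diffs)} value(s) differ beyond tolerance; first few: {diffs[:5]}")
        sys.exit(1)
    print("Runs match to within round-off.")

A check like this does not prove the physics is right, of course – it only establishes that a change in the computing environment has not silently changed the answers, which is the kind of corruption Walt is worried about.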
Lynn Vincentnathan says
RE #225 & when results are used for policy purposes, people expect different forms of due diligence than little academic papers, which have received only the minimal due diligence of a typical journal review. People are entitled to engineering-level and accounting-level due diligence.
I don’t know much about this discussion, the codes & data sets, but I think there is a big difference between balancing the account books and addressing climate change so that we avert harms.
I remember how as a bookkeeper many years ago (before they used computers for it) I sweat bullets trying to find that 1 cent mistake I had made. I couldn’t just give them a penny, I had to find the mistake and correct it.
However, there may be some parallels with engineering a bridge so that it is strong enough to withstand maximum traffic load under worst-case conditions, except that with global warming we’re not talking about a bridge load of cars crashing into a river, but the possibility of huge problems around the world that may harm millions or even billions of people over the long time the CO2 is in the atmosphere (maybe up to 100,000 years even – see https://www.realclimate.org/index.php/archives/2005/03/how-long-will-global-warming-last/).
In that situation, we need to err on the side of avoiding global warming and its harms rather than on the side of helping fossil fuel companies that refuse to diversify.
As I’ve mentioned on this site many times, mitigating GW can to a large extent be done cost-effectively, and the potential harms are so great that we really don’t need much evidence that global warming is happening and is or will be harmful. Scientific standards of .05 significance are way too stringent. Even if there were less than a 50/50 chance that the climate scientists are right that global warming is happening and is or will be harmful, we would still need to mitigate the problem. That should be the policy of a moral, even self-interested society. Anything else is tantamount to dereliction of the policy-maker’s duty. And it means we shouldn’t even be having debates about policy at all. It should have been our policy to mitigate GW with all our effort since at least 1990 – five years before the first scientific studies reached .05 significance on GW.
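To put that less-than-50/50 argument in simple expected-value terms (a toy illustration only – every number below is invented, not an estimate from any study): suppose $p$ is the probability that unmitigated warming causes major harm, $H$ the cost of that harm, and $C$ the cost of mitigation. Then mitigation pays whenever

\[ p \cdot H > C. \]

For instance, with $p = 0.4$ (less than 50/50), $H = 50$ (in trillions of dollars) and $C = 5$,

\[ p \cdot H = 0.4 \times 50 = 20 > 5 = C, \]

so mitigation would still be justified even though the chance of harm is below one half.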
But here’s an idea. Rather than spend a lot of time and expense trying to find the mistakes of climatologists (I think they find each other’s mistakes — and get feathers in their caps when they find some gross ones that really matter), those who question the climate scientists’ accuracy and conclusions could really mess them up by getting the whole world to substantially mitigate GW. That would mess up their data sets. They wouldn’t have any increasing CO2, and if they are right that the temps rise with carbon, then they wouldn’t have any rising temps either (allowing for some lag time to pass). And then what could their codes do about it, without such great data?
John Mashey says
re: #241 Patrick
Great! [Although it’s a little strange that anyone really interested in this didn’t know where the code was: SOFTWARE is one of the top-level menu items on the GISS website.] But when people make a big effort to provide well-documented, relatively portable code, it’s nice if people read it.
This is actually an interesting test case:
a) How about describing your background, i.e., familiarity with F90, OpenMP, and the relevant physics? What target machine(s) and software will you be using? (likewise for other people you get interested in this).
b) How about a status report, say once a week? Tell us how far you’ve gotten and how much effort it has taken.
[These are not new questions, I’ve often asked these of people using software I’d written, to get feedback about match with intended audience.]
Ray says
What effect did the drought and the lack of good irrigation have on the 1934 average temperature, as compared to more recent years when, even during a drought, millions of acres are irrigated, potentially reducing the extreme high temperatures through evaporation?
Lawrence Brown says
In 241 Patrick is willing to go through the program, and even sounds eager to do so. As Gavin points out in one of his responses above, Model E contains about 100,000 lines of code. Back in the stone age of computers, I carried boxes of punch cards for both programs and data to a CDC 6600 located at a local university. If there was a bug in the program, it was a painstaking process to locate and correct it. And those were programs with far fewer statements (by a factor of 100).
If folks are willing to go through it, to search for discrepancies, my hat’s off to them.
John Mashey says
Re: #247
Interesting point.
One may wish to consider the effect of irrigation on the Ogallala Aquifer:
http://www.waterencyclopedia.com/Oc-Po/Ogallala-Aquifer.html
http://en.wikipedia.org/wiki/Ogallala_Aquifer
Hank Roberts says
Rod asks about the effect of irrigation. Google Scholar has plenty of info, try a search or two. For example:
http://www.agu.org/cgi-bin/wais?hh=A41E-0074