Another week, another ado over nothing.
Last Saturday, Steve McIntyre wrote an email to NASA GISS pointing out that for some North American stations in the GISTEMP analysis, there was an odd jump in going from 1999 to 2000. On Monday, the people who work on the temperature analysis (not me) looked into it and found that this coincided with the switch between two sources of US temperature data. There had been a faulty assumption that these two sources matched, but that turned out not to be the case. There were in fact a number of small offsets (of both signs) between the same stations in the two different data sets. The obvious fix was to make an adjustment based on a period of overlap so that these offsets disappear.
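In code terms, the fix amounts to something like the following sketch (Python, with made-up station values and an invented overlap window; this is not the actual GISTEMP code):

import numpy as np

def align_on_overlap(old_series, new_series, overlap_years):
    # Estimate the constant offset between the two sources over the overlap period
    old_overlap = np.array([old_series[y] for y in overlap_years])
    new_overlap = np.array([new_series[y] for y in overlap_years])
    offset = np.mean(new_overlap - old_overlap)
    # Shift the newer source so the two records agree over the overlap
    return {year: temp - offset for year, temp in new_series.items()}

# Hypothetical annual means (deg C) for one station from two data sources
old_source = {1997: 11.20, 1998: 11.90, 1999: 11.40}
new_source = {1997: 11.35, 1998: 12.05, 1999: 11.55, 2000: 11.60, 2001: 11.80}

adjusted = align_on_overlap(old_source, new_source, overlap_years=[1997, 1998, 1999])
# The constant ~0.15 deg C offset between the sources is removed before the records are spliced,
# which eliminates the spurious jump at the year-2000 changeover.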
This was duly done by Tuesday: an email thanking McIntyre was sent, and the data analysis (which had been due in any case for the processing of the July numbers) was updated accordingly, along with an acknowledgment to McIntyre and an update of the methodology description.
The net effect of the change was to reduce mean US anomalies by about 0.15 ºC for the years 2000-2006. There were some very minor knock on effects in earlier years due to the GISTEMP adjustments for rural vs. urban trends. In the global or hemispheric mean, the differences were imperceptible (since the US is only a small fraction of the global area).
There were however some very minor re-arrangements in the various rankings (see data [As it existed in Sep 2007]). Specifically, where 1998 (1.24 ºC anomaly compared to 1951-1980) had previously just beaten out 1934 (1.23 ºC) for the top US year, it now just misses: 1934 1.25ºC vs. 1998 1.23ºC. None of these differences are statistically significant. Indeed in the 2001 paper describing the GISTEMP methodology (which was prior to this particular error being introduced), it says:
The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the GISS analysis (Plate 6). This contrasts with the USHCN data, which has 1998 as the warmest year in the century. In both cases the difference between 1934 and 1998 mean temperatures is a few hundredths of a degree. The main reason that 1998 is relatively cooler in the GISS analysis is its larger adjustment for urban warming. In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.
More importantly for climate purposes, the longer term US averages have not changed rank. 2002-2006 (at 0.66 ºC) is still warmer than 1930-1934 (0.63 ºC – the largest value in the early part of the century) (though both are below 1998-2002 at 0.79 ºC). (The previous version – up to 2005 – can be seen here).
In the global mean, 2005 remains the warmest (as in the NCDC analysis). CRU has 1998 as the warmest year but there are differences in methodology, particularly concerning the Arctic (extrapolated in GISTEMP, not included in CRU) which is a big part of recent global warmth. No recent IPCC statements or conclusions are affected in the slightest.
Sum total of this change? A couple of hundredths of degrees in the US rankings and no change in anything that could be considered climatically important (specifically long term trends).
However, there is clearly a latent and deeply felt wish in some sectors for the whole problem of global warming to be reduced to a statistical quirk or a mistake. This led to some truly death-defying leaping to conclusions when this issue hit the blogosphere. One of the worst examples (but there are others) was the ‘Opinionator’ at the New York Times (oh dear). He managed to confuse the global means with the continental US numbers, he made up a story about McIntyre having ‘always puzzled about some gaps’ (what?), declared that the error had ‘played havoc’ with the numbers, and quoted another blogger saying that the ‘astounding’ numbers had been ‘silently released’. None of these statements are true. Among other incorrect stories going around are that the mistake was due to a Y2K bug or that this had something to do with photographing weather stations. Again, simply false.
But hey, maybe the Arctic will get the memo.
Ray Ladbury says
Matt, Your insinuation that climate models are riddled with adjustable parameters provides an excellent example of why it is pointless to have the code “audited”. The problem is not with the coding, but rather with the fact that most of those calling for auditing don’t understand nearly enough about climate science to determine whether any anomalies or deficiencies they see are significant. It is exactly parallel to the whole debate over weather station siting. Finding a few badly sited weather stations will not make the problem of changing climate go away. Finding a few bugs or deficiencies in the models will not make the issue go away. There are simply too many independent lines of evidence and investigation, all of which point to 1)the fact that climate is changing, 2)that rising CO2 is the predominant culprit and 3)that the sensitivity is about 3 degrees C per doubling.
Those calling for stringent auditing of code or station siting could learn about the science of climate change with a fraction of the vain effort they are devoting to poking holes in it. They would then understand that the issue needs to be addressed, and that the sooner we start addressing it, the more likely we will be able to do so without draconian restrictions on our liberties and economic well being.
caerbannog says
FYI, another example of sloppy “auditing” is discussed here: http://atmoz.org/blog/2007/08/20/audit-the-auditor/
Matt says
#550 Timothy: Well, it is projected that by 2020 we will no longer be able to grow wheat in the United States. That would seem significant.
Where do you come up with this? You are stating that we cannot grow wheat in the year 2020 in spite of us being able to grow wheat today all the way down to the US-Mexico border? Source?
Lynn Vincentnathan says
RE #475 & 427 & why governments aren’t doing much.
What I wrote above has merit, but I also thought of other reasons. The main one is that AGW is caused mainly by nonpoint-source pollution. That means it’s caused by us, more than by governments and single businesses (though they also contribute mucho).
So, while there’s really quite a bit governments can do (that they aren’t doing now, esp USA) to reduce their own GHGs, and pass regs and laws and incentives to get the public to do so, ultimately we the people have to solve the problem. And if the public were in on this, businesses would be providing us with lower GHG emission products — which is slowly beginning to happen, esp since businesses save money doing so.
I was sort of surprised that when, in the early 90s, the Jewel food chain in the Chicago area went on the gov’s Green Lights program and got a low-interest loan to change all their conventional tube lights to ones with reflectors and electronic ballasts (reducing lighting electricity by 3/4 and saving the chain $1 million per year, paying off the loan within the first year), they didn’t use that as a marketing strategy: “Jewel cares about the Earth!” But at least I made the effort to shop only at Jewel, and not at the other supermarkets. And I think many others would switch to companies that sold the same products but involved lower GHG emissions.
Matt says
#551 Ray Ladbury: Matt, Your insinuation that climate models are riddled with adjustable parameters provides an excellent example of why it is pointless to have the code “audited”. The problem is not with the coding, but rather with the fact that most of those calling for auditing don’t understand nearly enough about climate science to determine whether any anomalies or deficiencies they see are significant.
Sorry, Ray, but once science begins to drive public policy and control the distribution of billions of dollars then an extra level of scrutiny must be applied.
Imagine your statement above and apply it to bridges or airplanes. It is absolutely absurd. FAA engineers understand a fraction of what Boeing engineers understand about flight and building airplanes. But that isn’t their job. Their job is to understand enough to make sure that Boeing is doing their job correctly. If you cannot explain your source code well enough to someone in another field with a solid technical background, then you have really failed in making your case.
Agree with your statement that there are many lines of evidence. But those are mostly historical in nature: they show what HAS warmed. The models are important because they are forward looking and show what WILL warm. Additionally, these all exist independently. Whether or not there is evidence things are warming doesn’t matter to the integrity of the model. If we ARE warming (and we are), then the integrity of the model becomes more important (back to the public policy bit above).
Hank Roberts says
> once science begins to drive public policy and control
> the distribution of billions of dollars then an extra
> level of scrutiny must be applied.
By definition, once science is added to politics, an extra level of scrutiny has been made available.
Your arguments are all in the direction of holding off applying the extra information science offers, and staying with market and political control of how money’s spent.
Right?
Philippe Chantreau says
“FAA engineers understand a fraction of what Boeing engineers understand about flight and building airplanes.” Total nonsense. Aeronautical engineers understand flight and building airplanes, regardless of who they work for. Engineers working for Boeing on automation systems used to assemble subsections or manage parts inventories may not have a clue about flight. FAA engineers who work on aircraft certification understand everything there is to know about flight and building airplanes. For communication to happen, you must have common background between the parties communicating. It is not enough to be a software engineer to analyze code used for GCMs. You also have to know what there is to know about climate, hence the truth of Gavin’s point earlier on this thread about the codes and the validity of Ray’s remarks. All this is not much more than a distraction.
James says
Re #546: [Hybrid cars are economically feasible, but guess what, the believers think it’s industry that is the problem and not their own consumption.]
In the real world, most of us can only consume what manufacturers choose to offer for sale. As it happens, I’ve been driving a Honda Insight hybrid for the last four years. (Averaging 70.5 mpg.) I’d like to replace it with something with even better fuel economy, but guess what? Nothing built today even comes close. How can I choose a more fuel-efficient vehicle, if the automakers choose not to make one?
steven mosher says
RE 483.
GAVIN inlined.
“[Response: Think about this. If a tree grows, or a station is moved from the south side to the north side of a building, if you go from a city centre to an airport, if you ‘do the right thing’ and get rid of the asphalt etc… all of these will add a cooling artifact to the record. Assumptions that all siting issues are of one sign are simply incorrect. – gavin]”
1. Microsite issues can be (+) or (−) in sign. YES.
2. Changes in instrumentation (cables) required siting closer to buildings. Think.
3. You can speculate about the distribution or INVESTIGATE. Consider this: consider that you screw up and read in the wrong file from USHCN! Some sites will be hotter, some will be cooler. What was the actual outcome, gavin? Was the outcome a net positive or a net negative?
4. Growing trees? Shading of a site will hit TMAX. Looking at TMAX (NOT TMEAN) will give you a cleaner signal of this potential contamination. Also you are likely to see TMAX change in a NONLINEAR FASHION, reaching an asymptote when the tree fully shades the station at all times. A signature of sorts.
5. Asphalt hits TMIN. It stores heat (like the ocean) and gives it up slowly. Narrowing of diurnals, narrowing of variance in TMIN, is a first-order sign of UHI contamination.
6. Speculating that the distribution of microsite issues will have mean = 0 is a nice hypothesis. The way we intend to test this is to survey as many sites as feasible and then crunch numbers.
[Response: What numbers are you crunching? No-one is looking at the actual effect any of the issues really have. Where are the controls? You can idly speculate all you want about how dramatic it will all be in the end, but absent someone demonstrating that it makes a real difference, there is nothing going on. Plus there are dozens of real reasons to expect TMIN to go up faster than TMAX – it does not imply UHI. Same with your other pop ‘fingerprints’ – gavin]
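For what such number-crunching with controls might look like, here is a minimal, hypothetical sketch (not anyone's actual analysis): compare the trend distribution at stations flagged for microsite problems in a survey against an unflagged control group, and test whether the means differ. The station counts and trend values below are invented.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-station trends (deg C per decade), split by a hypothetical site survey
flagged_trends = rng.normal(loc=0.20, scale=0.10, size=40)    # stations flagged for microsite issues
unflagged_trends = rng.normal(loc=0.20, scale=0.10, size=60)  # control group of unflagged stations

# If microsite problems imposed a systematic warm (or cool) bias, the two means should differ
t_stat, p_value = stats.ttest_ind(flagged_trends, unflagged_trends, equal_var=False)
print(f"flagged mean trend  : {flagged_trends.mean():+.3f} C/decade")
print(f"unflagged mean trend: {unflagged_trends.mean():+.3f} C/decade")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.2f}")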
steven mosher says
Haven’t read this yet.
Page 3 summarizes microsite issues…
http://ams.allenpress.com/archive/1520-0469/10/4/pdf/i1520-0469-10-4-244.pdf
Robin Levett says
If you cannot explain your source code well enough to someone in another field with a solid technical background, then you have really failed in making your case.
So the source code should contain a tutorial on climate science? Be realistic…
steven mosher says
Interesting reading.
http://gking.harvard.edu/files/replication.pdf
Money shot: “If I give you my data, isn’t there a chance that you will find out that I’m wrong and tell everyone? Yes. The way science moves forward is by making ourselves vulnerable to being wrong.”
and this:
It is the policy of the American Economic Review to publish papers only if the data used in the analysis are clearly and precisely documented and are readily available to any researcher for purposes of replication. Authors of accepted papers that contain empirical work, simulations, or experimental work must provide to the Review, prior to publication, the data, programs, and other details of the computations sufficient to permit replication. These will be posted on the AER Web site.
DavidU says
#542
Well I can’t really comment on the specific code you have read, since I have neither written it nor read it. But most of your comments seem rather pointless without their full context.
I’ll just make two comments
1. That different constants have different numbers of significant digits is not strange at all. You don’t usually specify the price of a candy bar with 8 significant digits, but if you are giving tolerances for motor parts for a jet engine you’ll want as many digits as you can get. Different things have different relevant scales.
2. Even though the dynamics of a model is given by basic laws of physics you will still have quite a few parameters that you need to specify, both for the initial conditions at the time where you start running the model and also to specify that it is actually this planet you are simulating.
The same set of dynamical laws applies to the behaviour of the atmosphere of Mars and the earth. So in order to simulate the climate of the earth you will have to say where there are continents, mountain ranges, the amount of gas in the atmosphere, and something as simple as the size of the earth.
Parameters are not the same thing as intuition-based fiddling; they are needed to specify that it is the climate of our planet that is modeled.
Neither do parameters rule fiddling out; to do that you have to actually know enough of the science to understand the model on your own.
DavidU says
#555
Here you demonstrate a serious misunderstanding about how verification works. The FAA engineers have to understand at least as much as the people at Boeing. They might not need to design new planes on a day to day basis, but they could do the job if that was what they had to do.
You also underestimate the gap in knowledge between scientific fields when you think that one should be able to quickly explain everything to someone with “a solid technical background”. I work in materials physics and it would take me weeks to fully explain much of my work to someone with a PhD in e.g. astrophysics. We both would have a lot of background knowledge of physics, but not the knowledge relevant for understanding the other field without actually learning a lot of it. Not to mention how clueless we would be if we talked to some mathematicians about their research.
The span of modern science is vast.
Timothy Chase says
steven mosher (#559) wrote:
Steven,
TMIN goes up on account of the opacity of greenhouse gases. It is at night that the earth will tend to cool off by thermal radiation, but if you have the feedback between the atmosphere and ground, it will take longer for the thermal radiation to leave the system.
As for whether or not the record is accurate, try comparing it to the strictly rural or to the lower troposphere. Virtually identical trends – although there is greater variability in the lower troposphere.
These guys aren’t advocating zero population growth – they are just doing science.
Steve Reynolds says
Ray Ladbury> First, there has been plenty of good research done on potential risks due to climate change.
Yes, but as stated in your link: “Health outcomes in response to climate change are the subject of intense debate.” Many here want to claim that these are proven to be severe effects.
Ray> …you assume that it is either climate change mitigation OR development.
I do not mean to assume that. I’m just saying that aggressive (costing more than $20/tC) mitigation is likely to do more harm than good. I think I’m in agreement with the majority of economists on this, so I do not see why some (not Ray) are calling me a denialist on this basis.
Steve Reynolds says
508 SecularAnimist Says: ‘Where is the evidence that mitigating global warming will have either of these outcomes? I have never seen any such evidence, only this talking point repeated over and over.’
There was a lot of discussion of this here:
https://www.realclimate.org/index.php?p=453
comments 185 and after.
Steve Reynolds says
Hank Roberts Says: “Your arguments are all in the direction of holding off applying the extra information science offers, and staying with market and political control of how money’s spent. Right?”
For myself, seeing a successful audit of the AGW evidence and quantitative effects would make me support more aggressive action.
Matt says
#561 Robin: So the source code should contain a tutorial on climate science? Be realistic…
Of course not. But the source should have some measure of traceability back to a real-world equation or constant someplace. Some places in the source are great: the reference to IPCC2, for example, for ice accumulation. Perfectly clear.
Other places you just see a constant that got changed by almost an order of magnitude. No explanation. No reference to a paper. Tweaking?
[Response: Units. -gavin]
Matt says
#558 James: In the real world, most of us can only consume what manufacturers choose to offer for sale. As it happens, I’ve been driving a Honda Insight hybrid for the last four years. (Averaging 70.5 mpg.) I’d like to replace it with something with even better fuel economy, but guess what? Nothing built today even comes close. How can I choose a more fuel-efficient vehicle, if the automakers choose not to make one?
Well you can either assume “the man” is holding out on you and not offering you that 200 MPG engine or the tires that never wear out, or you can assume you are bumping against the laws of physics.
My calcs show a 1500 pound car (two-seater Smart car) would require about 11 HP to cruise at 60 MPH (including various drags), and about 60 HP to go from 0–60 in 12 seconds. A rule of thumb is 1/10 GPH per horsepower, so an 11 HP cruise would be 1.1 GPH, or about 54 MPG. Your heavier car is already beating the rule of thumb by quite a significant margin, which is really testament to the great engineering.
If you want greater MPG, either live with a smaller car, increased pollution output, or reduced acceleration.
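As a quick check, the rule-of-thumb arithmetic above can be written out explicitly (a sketch only; the 11 HP cruise figure and the 0.1 gal/hr per HP rule are the comment's own assumptions, not measured data):

def cruise_mpg(cruise_hp, speed_mph, gal_per_hr_per_hp=0.1):
    # Fuel burned per hour at cruise, from the 0.1 gal/hr-per-HP rule of thumb
    fuel_rate_gph = cruise_hp * gal_per_hr_per_hp
    return speed_mph / fuel_rate_gph

# Hypothetical 1500 lb car needing ~11 HP to hold 60 mph
print(round(cruise_mpg(cruise_hp=11, speed_mph=60), 1))   # ~54.5 mpg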
Interesting that when the Smart car was brought to the US, it had to have additional pollution controls added to comply with US laws. That brought the MPG of that car down from 60 to 37 MPG (although the article indicates 50 MPG should have been possible). http://www.wired.com/cars/futuretransport/news/2005/05/67405
Article also gives some insight on how poorly these types of cars have sold worldwide, which really indicates your desires are shared by only a very slim minority of drivers worldwide.
KH says
Gavin, I think you have done a great job of responding to these comments. One observation on all this – many of the critics don’t seem to want to do any coding, and don’t seem to thoroughly read the climate papers. I liked your statement about ‘tough love’. The intent in asking for the code seems to be not to do science, but to discredit it.
Michael Tobis says
I’m afraid that I agree with the skeptics that even when published our codes are unnecessarily impenetrable, inadequately validated, inadequately linked to the literature (which itself is unnecessarily impenetrable, although IPCC reports are a big help in the latter regard).
We climatologists do not seem willing to acknowledge that our unanticipated responsibilities really do require more formality and more accountability than was the case when we were pursuing what amounted to a peculiar and idiosyncratic academic curiosity.
Our most adamant critics do not seem willing to acknowledge how difficult, expensive and risky such a change would be even in the best, most civilized and most supportive of circumstances. Such benign circumstances are not the ones those same critics are, for the most part, willing to grant us.
David B. Benson says
Gavin — I echo what KH says…
Rafael Gomez-Sjoberg says
Re #568:
Steve Reynolds, who would you nominate as auditors of the AGW evidence? Karl Rove? the Pope? your mother? me? maybe somebody with a Nobel Prize in literature or economics? the Dalai Lama?
Maybe some other scientists that know a lot about climate & earth sciences?
You seem to not have the foggiest idea of how science is done in this day and age.
All scientific disciplines, and especially all the natural sciences, have pretty good auditing systems already in place: replication of experiments/analysis, and peer review. The whole scientific community is engaged in a constant auditing of each other’s results. We all use somebody else’s results/work as a basis for our own work, and we constantly replicate what other scientists do. If somebody writes a paper explaining a particular methodology for doing an experiment or analyzing data, or provides some new data, somebody else is going to try using that methodology or those data in their own work. If the method or data are not good, chances are very high that this second person is going to spot the problem pretty quickly. I certainly don’t want to base my own experiments or analyses on flawed methods or data. Plenty of cases of scientific fraud have been spotted that way. And the more newsworthy a particular scientific result/report is, the faster it will be audited by replication, expansion and/or derivation. We are all after fame and glory and proving that some newsworthy result/data is flawed or can be improved substantially is a very good way to gain notoriety. So there’s a strong motivation in science to outdo one another, and that has an implicit “auditing” component to it. I’m not a climate scientist so my experience is in other disciplines, but I would be very surprised if climate science works very differently. I think the system works as well as human fallibility permits.
So please, if you think that the whole community (thousands) of scientists that work on climate research are not doing a good job of auditing one another, you better come with a very good idea of who should audit their work. It must be somebody that understands all the intricacies (every nook and cranny) of the subject.
You and all the other pro-audits are very welcome to go back to school and get a PhD in climate/earth sciences so that you can begin auditing things yourselves. All this monday-morning quarterbacking is pretty silly.
Hank Roberts says
There’s one good measure of how well “auditing” works — check the financial system.
Joseph O'Sullivan says
In the US the agency rule-making and enforcement processes are well known for the opportunities for public involvement and for how democratic the processes are, but this is not unlimited. There are limits on the openness because without limits involved parties could grind things to a halt to prevent a decision they did not like.
The calls for transparency in science might just be an end run around the safeguards that allow agencies to work. The tactics are very similar. The calls for codes to be released are like the requests for documents in legal proceedings that are attempts to drag the process out, with the goal of getting the other side to quit by making the process expensive and time-consuming.
Robin Levett says
Of course not. But the source should have some measure of tracability back to a real-world equation or constant some place. Some places in the source are great: the reference to IPCC2, for example, for ice accumulation. Perfectly clear.
Other places you just see a constant that got changed by almost an order of magnitude. No explanation. No refernce to a paper. Tweaking?
At what level of detail do you want this referencing? To a climatologist who knows what he’s looking at, is the code as opaque as you claim it is?
I’m a lawyer; I’m familiar with trying to explain complicated legal concepts simply, I’m also familiar with people who think that all they need to do is read a couple of books to know how to be a lawyer. No amount of commentary in the source code will satisfy those who want it to stand on its own without further explanation; that is, no amount of commentary short of a full course in climatology. If you want to audit the code, learn the climatology first.
Philip_B says
I’ve been grappling with Time of Observation Bias and in particular the adjustments made to the annual temperature data to adjust for TOB.
TOB occurs when temperatures for the prior day are included in the current day’s record. If the prior day was warmer or cooler than the day of record, then this could result in too high or too low a value for the maximum or minimum respectively.
Time of Observation Bias can affect monthly averages and the monthly figures are adjusted for TOB. All of which is fine. The problem is the adjusted monthly figures appear to be compiled into annual figures (correct me if I am wrong), which have a significant TOB adjustment.
There are two reasons why annual and multi-annual data temperature data should not have any significant Time of Observation Bias.
The first reason is that over a year TOB can only result from the day prior to the period in question. If the period is a year then any TOB from a single day will be averaged over a large number of days and hence will be very small, i.e. TOB must be trivial over a year or longer. So, while individual months can have significant TOB, each subsequent month in the series has a TOB in the opposite direction for the simple reason its bias results from data that should have been included in the previous month’s data. And so over the whole year any bias in individual months is eliminated (except of course the bias from the day prior to the start of the year).
Even if this were not true (and I am quite sure it is) there is a second reason there cannot be a significant TOB over a year.
The two halves of the year have more or less equal and opposite monthly TOB. If the first half of the year has a warming bias then the second half of the year will have an equal cooling bias assuming the Time of Observation remains constant, which the adjustment method assumes.
If the annual temperature data has a significant TOB and hence TOB adjustment, it must be noise from the TOB estimating method, because it cannot be from Time of Observation biases accumulated over the whole year, because it’s impossible to accumulate such biases.
[Response: I think you misunderstand the nature of the TOB. The situation is more that historical temperatures were taken in the afternoon. Now they are taken in the morning. That imparts a cooling bias to all temperatures and does not cancel out in any averaging procedure. – gavin]
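A toy simulation makes the point concrete (the diurnal cycle and day-to-day weather below are synthetic and purely illustrative, not real station data):

import numpy as np

# Toy illustration of time-of-observation bias (TOB). A max/min thermometer is read
# and reset once a day; an afternoon reset tends to carry a hot afternoon into the
# next day's maximum, while a morning reset carries a cold morning into the next
# day's minimum.
rng = np.random.default_rng(1)
days = 3650
daily_mean = 15 + rng.normal(0, 3, size=days)                 # day-to-day weather noise
hour = np.arange(24)
diurnal = -5 * np.cos(2 * np.pi * (hour - 5) / 24)            # min ~05:00, max ~17:00
temps = (daily_mean[:, None] + diurnal[None, :]).ravel()      # hourly temperatures

def long_term_mean(temps, reset_hour):
    # Average of (Tmax + Tmin)/2 over 24-hour observing windows ending at reset_hour
    n = len(temps) // 24 - 1
    windows = [temps[d * 24 + reset_hour:(d + 1) * 24 + reset_hour] for d in range(n)]
    return np.mean([0.5 * (w.max() + w.min()) for w in windows])

for label, h in [("midnight", 0), ("07:00 (morning)", 7), ("17:00 (afternoon)", 17)]:
    print(f"{label:>18}: {long_term_mean(temps, h):.2f} C")

The same synthetic weather yields a warmer long-term record with afternoon observations and a cooler one with morning observations, so a historical switch from afternoon to morning readings imparts a spurious cooling step that does not cancel in an annual mean and has to be adjusted for explicitly.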
richard says
#571 “The intent in asking for the code seems to be not to do science, but to discredit it.”
As those who do not accept that significant AGW is occurring appear to be unable to mount a case through the peer-reviewed science literature, the intent is, yes, to discredit but also to intimidate. It is the tobacco industry tactic all over again.
If AGW skeptics have valid hypotheses to explain the various datasets (not just temperature but other observations as well), why are they not showing up in quantity in the peer-reviewed literature?
J.S. McIntyre says
re 574
“…the more newsworthy a particular scientific result/report is, the faster it will be audited by replication, expansion and/or derivation. We are all after fame and glory and proving that some newsworthy result/data is flawed or can be improved substantially is a very good way to gain notoriety.”
This can’t be emphasized enough. This faux “debate” has been going on for years and the so-called AGW “Skeptics” have, at best, seen literally all of their pet criticisms routinely shot down but, more important, have been unable to substantiate any of their attempts to “refute” the science in the only arena that actually matters – the scientific arena.
Ray Ladbury says
Matt,
The fact of the matter is that most people calling for “auditing” do not have the background to assess the code in the first place. Just because one can string together a few lines of code doesn’t mean you have the background to understand scientific code in any field.
The case for climate change depends in no way upon the models. It was laid out in 6 easy steps recently on this site. None of those steps was particularly dependent on modeling. Where the code is important is in LIMITING our assessment of risk–is it credible that all the ice at both poles will melt? The models say no, not in the short term. Is it credible that we could have a runaway greenhouse effect on Earth? Again the models say no. The fact that there are uncertainties in the models does not discredit their predictions–it just means we have to weight their predictions with those uncertainties.
Want to understand the models? Learn the science. Then it will be obvious what the code is doing.
steven mosher says
Gavin,
You inlined
“[Response: Gore’s statement was that nine of the ten warmest years globally occurred since 1995. This is true in both GISS indices and I think is also true in the NOAA data and CRU data. So that’s pretty accurate. – gavin]”
The trailer for AIT. The first pronouncement is that “the 10 hottest years measured occurred in the last 14 years and the hottest was 2005.”
Can you toss me a pointer to the data with the associated errors?
[Response: I’m pretty sure it refers to the NCDC land+ocean analysis: http://www.ncdc.noaa.gov/oa/climate/research/anomalies/anomalies.html – the hottest ten years are all 1995 onwards – they’ve recently upgraded their analysis, so it might have been slightly different a year ago. In the GISS analysis, 1990 sneaks into the top ten (displacing 1997), and so the phrase would have been in the last 16 years (assuming it’s written in 2006). Errors are estimated to be around 0.1 deg C on any individual year, and so there is a little uncertainty in any ranking, but nothing would change the basic thrust. – gavin]
David Price says
In the letters column of the London Daily Telegraph today somebody claims that recent research has shown that the sensitivity of the climate to CO2 is a third lower than previously thought. Has anybody heard anything about this? If so where does it come from?
[Response: It refers to the Schwartz paper alluded to above. The conclusion is unlikely to stand – but watch this space. – gavin]
Chuck Booth says
# 555 Matt: “.. once science begins to drive public policy and control the distribution of billions of dollars then an extra level of scrutiny must be applied.”
I don’t see any evidence that climate science is driving public policy in the U.S. (at least not on the federal level). But why should scientific concerns about AGW be any different from other scientific concerns that have influenced public policy, such as smoking, AIDS, pollution, declining fisheries, etc? Isn’t scientifically-informed public policy the reason we (in the U.S.) have the National Academies of Science (“Advisors to the Nation on Science, Engineering, and Medicine”; http://www.nationalacademies.org), not to mention the National Institutes of Health, National Science Foundation, NOAA, NWS, USGS, Centers for Disease Control and Prevention, et al?
steven mosher says
re 581.
That’s a good one.
The first I looked at as a “programmer” with no knowledge of the science had a glaring error in the very first routine. They read data in from an external source without performing any checks on whether they had picked up the right file. Sounds kinda familiar.
The second model was DOD verified. At the end of the day I found that half the code didn’t execute because of a mistaken goto. Had no clue about what was going on in that code.
The third model was a phased array radar. I had no clue what the heck that thing did. Two days of work and I found an error in the 3D transformation matrix — the guy left out a sin term or cos term. The code had been in use for 10 years. Fully validated. That pissed off a bunch of people who saw 10 years of data go down the drain.
The next piece of code was also validated and in use for ten years. One week, and I found the piece of code that was dead (i.e. not executed, but expected to be executed). Invalidated 10 years of study. Oops.
These were simple rudimentary checks. Did you read the right file? Did you record the file you read? Does every bit of your code execute? Do you have test cases? NONE of this requires climate science knowledge.
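For illustration, the kind of rudimentary input check described above might look like this (a sketch with hypothetical file names, headers, and columns; not any model's actual code):

import csv
import hashlib
import logging

logging.basicConfig(level=logging.INFO)

def load_station_file(path, expected_header=("station_id", "year", "anom_c")):
    # Record exactly which file was read so a rerun can confirm the same data were used
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    logging.info("reading %s (sha256=%s)", path, digest[:12])

    # Verify the file looks like the one intended before using it
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    if tuple(rows[0]) != expected_header:
        raise ValueError(f"{path}: unexpected header {rows[0]}, wanted {expected_header}")
    bad = [r for r in rows[1:] if len(r) != len(expected_header)]
    if bad:
        raise ValueError(f"{path}: {len(bad)} malformed rows")
    return rows[1:]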
The fact that a frigging mining guy found errors is prima facie evidence that a code review is in order.
The irony of the last point is rich and creamy. Attacked as a know-nothing “mining guy”, McIntyre finds an error. He finds an error without source. Ray says they could not find errors with SOURCE.
The bottom line is this: Hansen and crew are scientists. They are not engineers. They are not software engineers. Were they, the last little burp would have been less likely.
EthanS says
One interesting question the brouhaha highlights is, “why did we see global cooling from ~1940 to 1970?”
One possible answer:
On the Effect of the World War II Bombing and the Nuclear Bomb Test to the Regime Shift of the Mean Global Surface Temperature (SAT SST) Abstract; 2001 paper in Japanese.
Perhaps we just need to start slagging Nevada again? And lay off the Iranians and North Koreans, so they can get started saving the planet with atomic testing?
James says
Re #570: [Well you can either assume “the man” is holding out on you and not offering you that 200 MPG engine or the tires that never wear out, or you can assume you are bumping against the laws of physics.]
No, you’re missing an important point: we aren’t talking about 200 mpg carburetors here. We know that it’s possible to build a car that gets better than 70 mpg, because there’s one sitting in my driveway. (And I can see many ways to tweak the design for better economy.) We also know that it isn’t being sold any more, and that the nearest competitors get markedly poorer fuel economy. That’s fact.
[Article also gives some insight on how poorly these types of cars have sold world wide, which really indicates your desires are a very slim majority of drivers world wide.]
Which is my other point. How much money do the automakers spend on advertising fuel efficient cars, versus how much on the oversized gas guzzlers? It’s the feedback loop again: the guzzlers sell because they’re advertised (and a lot of the advertising sells qualities that are independent of brand). Change the advertising, and you’ll change what sells.
Timothy Chase says
RE Steven (#585):
I hope you don’t mind if I don’t quote your entire piece, but you make some good points, and I think the post is well worth reading. But I would like to address some concerns with regard to climate models.
As I understand it, NASA makes available the following: documentation, source code, the various external data sets – in a variety of configurations. With regard to the IPCC simulations, the output from these are made available as they are completed. Validation, technical discussions and papers dealing with various modules and their coupling are also available. Then of course there is all the technical literature on climatology itself.
Everything one would need to evaluate the models – assuming one has the appropriate level of expertise. Well beyond me, obviously. However, it always pays to keep in mind that, given the nature of human cognition and communication, there will always be aspects which are tacit rather than articulated. Much of cognition and communication consists of shifting the boundary between the two.
It might surprise many to know this, but given the need for standardization, McDonald’s has a virtual encyclopedia occupying an entire shelf in which they have attempted to articulate every detail in order to achieve that standardization. I know – I worked there at one point. Science probably wouldn’t progress so quickly, given the complexity of its subject matter, if scientists attempted to always articulate things with that level of detail. But they come rather close.
*
Incidentally, I know it has been repeated on a number of occasions, but I think it might help to remind people that climate models are not based off of surface temperature records. They aren’t based off of trends. They are based off of first principles, physics, basically, so any “error” regarding the small fraction of a degree by which 1934 and 1998 were different is irrelevant to their validity – and of small relevance in terms of the trends in global average temperature.
Timothy Chase says
PS to #588
My apologies for not hyperlinking to #585 (22 August 2007 at 12:28 PM) in #588 (22 August 2007 at 1:25 PM). I was paying more attention to the content and had honestly forgotten.
tamino says
A man goes to the hospital with severe chest pain, a shooting pain in his left arm, shortness of breath, and when the intern on call listens with a stethoscope she hears a highly irregular heartbeat, typical of heart attack victims. The intern orders an EKG, which shows the classic pattern of heart attack, so she pronounces that he’s suffered a major heart attack and orders the appropriate treatment.
Suddenly another doctor comes in. Hold the phone! The software used by that EKG machine has never been validated! It’s not “open source!” It can’t be trusted! Tell that patient to go home, we’ll call him back as soon as everyone agrees that the EKG software doesn’t have a “bug.”
Validating the EKG software is a good idea. But let’s not make the already overworked interns do it, and let’s not make the already underfunded hospital pay for it. And since we have a plethora of lines of evidence of life-threatening illness — so many that even if the EKG is totally SNAFU there’s still no doubt — quit stalling, for GOD’S SAKE get that patient into the critical care unit. Stat.
Vernon says
Gavin, why don’t you want to post what this really proves, namely that the claim that all ‘station temperature error would be detected and corrected as part of the process’ is false? It is blatantly not true, or the rather large errors that have been carried for the last seven years would have been detected. Also, even though you do not want to admit it, in Hansen (2001) he specifically says he is relying on the data from the stations to be accurate, but we now know that per WMO/NOAA/NWS guidelines, the stations are not sited correctly. This means that there is no way to know the accuracy of the data. This means that the Hansen UHI adjustment could be wrong, which would significantly change the whole instrumented picture! That is why this error is so important: it shows that errors are not detected or corrected!
[Response: You are simply mistaken. Jumps in stations temperatures are indeed found in the NOAA data processing and are incorporated into the GISS analysis. However, GISS does not do that analysis, NOAA does, and the error in the processing was at GISS. Therefore, NOAA had no chance to find that error and your claims that this shows that the NOAA analysis is lacking, have no merit whatsoever. The bottom line remains, do the calculation to show that your issues have a practical effect. – gavin]
Ike Solem says
#591,
Vernon – there are constant and continuing efforts to check and correct the surface temperature datasets. For example: Comparison of trends and low-frequency variability in CRU, ERA-40, and NCEP/NCAR analyses of surface air temperature, Simmons et al, JGR 2004
As they point out,
“In reality, however, observational coverage varies over time, observations are themselves prone to bias, either instrumental or through not being representative of their wider surroundings, and these observational biases can change over time. This introduces trends and low-frequency variations in analyses that are mixed with the true climatic signals. Progress in the longer term depends on identifying and correcting model biases, accumulating as complete a set of historic observations as possible, and developing improved methods of detection and correction of observational biases.”
One of the motivations for this paper (18 pages of close-spaced comparison and discussion of the CRUTem2v, ERA-40 and NCEP/NCAR analysis) was the claim by Kalnay and Cai (2003) that much of the reported warming over North America was local and due to urbanization and land use changes (based on NCAR/NCEP). As a result of all this effort, the authors can say, with some confidence, that
“Results for North America cast doubt on Kalnay and Cai’s (2003) estimate of the effect of urbanization and land use change on surface warming.”
It’s pretty clear that climate scientists are well aware of the effect of observational biases, and also that they spend a great deal of time and effort on objectively taking these biases into account.
P.S. For those who are interested, they also explain why anomalies are used rather than absolute measurements:
“The CRU data are anomalies computed with respect to station normals for 1961–1990, a period chosen because station coverage declined during the 1990s. The reanalyses have accordingly been expressed as anomalies with respect to their own monthly climatic means for 1961–1990…. Working with anomalies rather than absolute values avoids the need to adjust for differences between station heights and the terrain heights of the assimilating reanalysis models.”
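A minimal sketch of the anomaly calculation the quote describes: subtract each station's own 1961–1990 monthly normals, so that records from stations at different heights can be compared and combined. The station values below are fabricated, purely for illustration.

import numpy as np

def monthly_anomalies(monthly_temps, years, base=(1961, 1990)):
    # monthly_temps: array of shape (n_years, 12) of absolute temps for one station
    in_base = (years >= base[0]) & (years <= base[1])
    normals = monthly_temps[in_base].mean(axis=0)     # one normal per calendar month
    return monthly_temps - normals                    # anomalies relative to the normals

years = np.arange(1951, 2007)
rng = np.random.default_rng(2)
# A fabricated station: seasonal cycle plus a small warming trend plus noise
seasonal = 10 - 12 * np.cos(2 * np.pi * np.arange(12) / 12)
temps = seasonal + 0.02 * (years[:, None] - 1951) + rng.normal(0, 0.5, (len(years), 12))
anoms = monthly_anomalies(temps, years)
print("mean 2000-2006 anomaly:", anoms[years >= 2000].mean().round(2), "C")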
wildlifer says
I’m sure the “auditors” would be willing to provide photographs. That’ll fix up everything!
Hank Roberts says
A note on terms:
“auditing” — as defined by people who do auditing — doesn’t mean what the CA people want to do. They want to do what’s called inspection; auditing is sampling for quality control.
This is from one of the real experts in studying how people make mistakes. Did you know about half of all big spreadsheets have significant errors in them? Most bankers don’t ….
January 2007
A Rant on the Lousy Use of Science in Best Practice
Recommendations for Spreadsheet Development, Testing, and Inspection
Ray Panko
University of Hawaii
“Convergent Validation — Science works best when there is convergent validation, meaning that you try to measure the same thing in several ways. If the results all come out the same, then you can have some confidence in the science.
“… we have three sources of information about spreadsheet error detection…. (I will use the term inspections instead of auditing, because auditing means sampling for quality control, not the attempt to detect as many errors as possible).”
http://panko.cba.hawaii.edu/
Hank Roberts says
Another quote from Dr. Panko, and I think this one goes to the bone on the approach required to be helpful:
“Although we still have far too little knowledge of spreadsheet errors to come up with a definitive list of ways to reduce errors, the similarity of spreadsheet errors to programming errors suggests that, in general, we will have to begin adopting (yet adapting) many traditional programming disciplines to spreadsheeting….
“In programming, we have seen from literally thousands of studies that programs will have errors in about 5% of their lines when the developer believes that he or she is finished (Panko, 2005a). A very rigorous testing stage after the development stage is needed to reduce error rates by about 80% (Panko, 2005a). Whether this is done by data testing, line-by-line code inspection, or both, testing is an onerous task and is difficult to do properly. In code inspection, for instance, we know that the inspection must be done by teams rather than individuals and that there must be sharp limits for module size and for how many lines can be inspected per hour (Fagan, 1976). We have seen above that most developers ignore this crucial phase. Yet unless organizations and individuals are willing to impose the requirement for code inspection or comprehensive data testing, there seems little prospect for having correct spreadsheets in organizations. In comparison, the error reduction techniques normally discussed in prescriptive spreadsheet articles probably have far lower impact.
“Whatever specific techniques are used, one broad policy must be the shielding of spreadsheeters who err from punishment. In programming, it has long been known that it is critical to avoid blaming in code inspection and other public processes. For instance, Fagan (Fagan, 1976) emphasized that code inspection should never be used for performance evaluation. As Beizer (Beizer, 1990) has emphasized, a climate of blaming will prevent developers from acknowledging errors. Edmondson (Edmondson, 1996) found that in nursing, too, a punitive environment will discourage the reporting of errors.
Quite simply, although the error rates seen in research studies are appalling, they are also in line with the normal accuracy limits of human information processing. We cannot punish people for normal human failings.”
http://panko.shidler.hawaii.edu/SSR/Mypapers/whatknow.htm
Ray Ladbury says
Steven Mosher, No I didn’t say that you or anyone else couldn’t find errors–just that you would have no idea whether they were significant or not.
Yes, McIntyre found an error – an error that changes nothing of significance about the scientific consensus. Code always has errors. Analyses always have errors and uncertainties. This does not invalidate the code or the analysis.
Science is a human activity that takes into account the fact that people will make mistakes. That is why the strands of evidence for anthropogenic causation of climate change are many and varied. An error here and there will do nothing to dent the strength of evidence. Correction of such errors is just science as usual. Those who jump all over this process merely demonstrate that they don’t understand science.
David Ahlport says
====CRU has 1998 as the warmest year but there are differences in methodology, particularly concerning the Arctic (extrapolated in GISTEMP, not included in CRU) which is a big part of recent global warmth.====
Where can I get an original paper which mentions that CRU does not include Arctic temperatures?
Chuck Booth says
Re # 585 I wrote, “I don’t see any evidence that climate science is driving public policy in the U.S. (at least not on the federal level). ”
I just saw in yesterday’s (Aug. 21) newspaper that the U.S. Congress is proposing to spend $6.7 billion in the next fiscal year to combat global warming, an increase of nearly one-third from this year. Bills moving through Congress would increase funding to reduce GHG emissions and oil dependency, and promote the use of geothermal and other renewable energy sources. So, I guess one could argue that AGW concerns are driving public policy to some degree. On the other hand, in a total budget of about $3 trillion (for 2007), $6.7 billion seems pretty trivial, esp. compared to the defense budget of nearly $700 billion (including the so-called War on Terror[ism]).
Philippe Chantreau says
Steven Mosher and the other climate audit folks are really excited about all this. They should consider what it would be like looking at it from another, layman-type, perspective (mine): a guy who prides himself on nitpicking every little bit of data he can get managed to find an error that has no significance (which he himself called a micro-error). He had done it before, finding the same type of discrepancy (with Mann’s paper) that did not affect the overall results (confirmed by other studies/lines of evidence). That is the fruit of intense (full time?) effort spent at trying to invalidate scientific stuff he dislikes, or believes is sloppy, driven by ulterior motives, whatever. Countless hours of effort to find 2 errors that have no real significance on the results. Impressive.
As far as I’m concerned, it could very well mean that the guys actually doing the research (those who have their names before the date in the parentheses) are careful enough about what is significant that all that the “auditors” can find is not. Just my opinion, of course.
BTW, Tim’s other 19 points (permafrost, species shifts, satellite data, boreholes, etc.) have not been properly challenged yet, I think…
Dan Hughes says
re: #590
That’s a false analogy.
The EKG software will have been subjected to independent Verification, Validation, Qualification, and Certification prior to release for production use in its intended areas of application. Additionally, all users of the software will be trained in its correct applications and in understanding the results from the applications. Finally, I will say that the software will very likely have been developed under approved Software Quality Assurance procedures and processes and is maintained under additional SQA procedures and processes.
[edit]