Another week, another ado over nothing.
Last Saturday, Steve McIntyre wrote an email to NASA GISS pointing out that for some North American stations in the GISTEMP analysis, there was an odd jump in going from 1999 to 2000. On Monday, the people who work on the temperature analysis (not me) looked into it and found that this coincided with the switch between two sources of US temperature data. There had been a faulty assumption that these two sources matched, but that turned out not to be the case. There were in fact a number of small offsets (of both signs) between the same stations in the two different data sets. The obvious fix was to make an adjustment based on a period of overlap so that these offsets disappear.
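To make the nature of the fix concrete, here is a minimal sketch of the generic overlap-adjustment technique, with made-up numbers; it illustrates the idea, not the actual GISTEMP code:

```python
import numpy as np

def splice(old, new):
    """Splice a newer record onto an older one after removing the mean
    offset over their period of overlap. A generic sketch, not the
    actual GISTEMP procedure."""
    overlap = ~np.isnan(old) & ~np.isnan(new)
    offset = np.mean(new[overlap] - old[overlap])
    return np.where(np.isnan(new), old, new - offset), offset

# Toy example: the new source runs 0.15 C warm relative to the old one.
years = np.arange(1995, 2007)
truth = 10.0 + 0.02 * (years - 1995)
old = np.where(years <= 2000, truth, np.nan)         # old source ends in 2000
new = np.where(years >= 1998, truth + 0.15, np.nan)  # new source starts in 1998
merged, offset = splice(old, new)
print(f"estimated offset: {offset:.2f} C")           # recovers the 0.15 C offset
```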
This was duly done by Tuesday: an email thanking McIntyre was sent, and the data analysis (which had been due in any case for the processing of the July numbers) was updated accordingly, along with an acknowledgment to McIntyre and an update to the methodology description.
The net effect of the change was to reduce mean US anomalies by about 0.15 ºC for the years 2000-2006. There were some very minor knock-on effects in earlier years due to the GISTEMP adjustments for rural vs. urban trends. In the global or hemispheric mean, the differences were imperceptible (since the US is only a small fraction of the global area).
There were however some very minor re-arrangements in the various rankings (see data [As it existed in Sep 2007]). Specifically, where 1998 (1.24 ºC anomaly compared to 1951-1980) had previously just beaten out 1934 (1.23 ºC) for the top US year, it now just misses: 1934 1.25ºC vs. 1998 1.23ºC. None of these differences are statistically significant. Indeed in the 2001 paper describing the GISTEMP methodology (which was prior to this particular error being introduced), it says:
The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the GISS analysis (Plate 6). This contrasts with the USHCN data, which has 1998 as the warmest year in the century. In both cases the difference between 1934 and 1998 mean temperatures is a few hundredths of a degree. The main reason that 1998 is relatively cooler in the GISS analysis is its larger adjustment for urban warming. In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.
More importantly for climate purposes, the longer term US averages have not changed rank. 2002-2006 (at 0.66 ºC) is still warmer than 1930-1934 (0.63 ºC – the largest value in the early part of the century) (though both are below 1998-2002 at 0.79 ºC). (The previous version – up to 2005 – can be seen here).
In the global mean, 2005 remains the warmest (as in the NCDC analysis). CRU has 1998 as the warmest year but there are differences in methodology, particularly concerning the Arctic (extrapolated in GISTEMP, not included in CRU) which is a big part of recent global warmth. No recent IPCC statements or conclusions are affected in the slightest.
Sum total of this change? A couple of hundredths of degrees in the US rankings and no change in anything that could be considered climatically important (specifically long term trends).
However, there is clearly a latent and deeply felt wish in some sectors for the whole problem of global warming to be reduced to a statistical quirk or a mistake. This led to some truly death-defying leaps to conclusions when this issue hit the blogosphere. One of the worst examples (but there are others) was the ‘Opinionator’ at the New York Times (oh dear). He managed to confuse the global means with the continental US numbers, he made up a story about McIntyre having ‘always puzzled about some gaps’ (what?), declared that the error had ‘played havoc’ with the numbers, and quoted another blogger saying that the ‘astounding’ numbers had been ‘silently released’. None of these statements are true. Among other incorrect stories going around are that the mistake was due to a Y2K bug or that this had something to do with photographing weather stations. Again, simply false.
But hey, maybe the Arctic will get the memo.
James says
Re #335: [Anyway If you compare Lake Spaulding with Tahoe City, Colfax, Nevada city you can see that Lake spaulding has a cooling trend from 1914 -1931 ( not 1927) that differs from these other stations.]
I happen to live in the area, and thought I remembered something about this lake. Sure enough, the first hit from Google gives me this:
“Lake Spaulding rests at an elevation of 5,014 feet in a glacier carved bowl of granite. The lake has a surface area of 698 acres surrounded by giant rocks and a thick pine forest. The lake was originally built for hydraulic mining in 1912…”
OK, so people built a dam, and changed a square mile or so of rocky river valley into a lake. Water has a lot more thermal inertia than rock, and at those middle elevations it would probably get a good thickness of ice accumulating every winter. Wouldn’t you expect a cooling trend in nearby temperatures, quite independently of any larger trends?
Lawrence Brown says
Re 321: Amen.
Nobody ever erected a statue to a critic.
steven mosher says
RE 349
Gavin. [edit]
It’s two pages of code… At least tell us which method you used for inhomogeneity testing. Easterling? Kohler? Salinger? Zhang? SNHT? Berry? Vincent? There are bunches. Which was used?
[Response: RTFR. GISS only adds the urbanisation adjustment and does not do any homogeneity testing; the station history adjustments are taken from USHCN, as is clearly described. The only further culls were for obvious weirdness, such as in the early Cal. data, as was, again, clearly described. What is so hard about reading the papers? – gavin]
Timothy Chase says
Re: Steven Mosher
Steve,
One thing.
I don’t ever really harbor any hard feelings towards anyone. And assuming you feel the same way and can make it out to Seattle some time, I would be willing to spring for a couple of drinks.
Strictly non-alcoholic. Hopefully tea or coffee would be alright.
Here’s my email address:
timothychase at g mail.com (No spaces, well you can figure that out.)
Honestly – feel free to take me up on this.
Gary says
RE 322 and others: I have repeatedly seen the assertion that if you have large enough numbers, errors in data will be statistically insignificant. However, let’s say I am conducting an experiment in the biology lab measuring how the production of an enzyme by bacteria in petri dishes is affected by different nutrients, and some of those dishes were contaminated by a fungus that competed for those nutrients. If only a few were affected, that might not change the results. But if 50% or 30% or even 20% were contaminated, then no matter how many dishes I had, I would always get incorrect results. One of the ways to protect against this is to visually examine the dishes (including with a microscope) to look for contaminants. At some point it is important to come out from behind the computer screen and check where the data is coming from, how it is collected, and look for possible errors in tabulation/computation.
A very recent example of finding that the real world does not always fit the computer model is: http://www.nasa-news.org/documents/pdf/Wentz_How_Much_More.pdf
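Gary’s distinction between random noise and a shared systematic bias is easy to demonstrate numerically. A minimal sketch with hypothetical numbers: the random part averages away as the sample grows, but the shared bias never does:

```python
import numpy as np

rng = np.random.default_rng(0)

def observed_mean(n, frac_bad, bias=-1.0, noise=0.5):
    """Mean of n measurements when a fraction frac_bad share a
    systematic bias. Hypothetical numbers, for illustration only."""
    clean = rng.normal(0.0, noise, n)   # random error: averages out
    bad = rng.random(n) < frac_bad      # contaminated subset
    return np.mean(clean + bias * bad)  # shared bias: does not

for n in (100, 10_000, 1_000_000):
    print(n, round(observed_mean(n, frac_bad=0.3), 3))
# The error settles at bias * frac_bad = -0.3 no matter how large n gets.
```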
steven mosher says
Fix the bad data.
From NOAA:
In summary, climatic normals are intended to serve as a baseline for both spatial and temporal comparison. They are values derived from the frequencies of identically distributed observations over a 30-year period of record. At most locations, however, non-climatic influences such as station relocations, siting changes, instrument changes and recalibrations, etc. preclude obtaining a climatically homogeneous record of daily observations for 30 years. The statistical problem of detecting the full range of these inhomogeneities from the observational record is currently intractable.
Hank Roberts says
You don’t “fix” data — nobody’s going to go back through history and change what’s recorded.
You replicate — with better instruments. That’s what’s being done, rolling out new stations with better gear and more consistent criteria.
You run in parallel for a while. That gives you a parallel record of the old instrument sites and the new ones.
Then you can evaluate the old data because you’ve been able to check each of the old instrument setups running in parallel with the new instruments.
This is, perhaps, exactly what the know-nothings want to avoid by insisting the old data be fiddled or discarded.
Because a consistently biased observer is still a reliable observer, and the long record made by a consistently biased observer becomes more valuable once you’ve run the old observer and the new observer in parallel.
Get a grip, folks, the new instruments going out set up with the new criteria are going to make all the old data _more_ useful.
Just as it is.
Without throwing it out. Without fiddling with it.
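Hank’s point that a consistent bias cannot spoil a trend takes only a couple of lines to verify. A toy illustration with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2008)
true_series = 0.02 * (years - 1950) + rng.normal(0, 0.1, years.size)
biased_series = true_series + 0.8   # a consistently biased observer

# The fitted trends are identical: a constant offset moves the
# intercept, never the slope.
print(np.polyfit(years, true_series, 1)[0])
print(np.polyfit(years, biased_series, 1)[0])
```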
papertiger says
Timothy Chase Says:
11 August 2007 at 4:03 AM
I had written (#19):
Earth will feel the heat from 2009: climate boffins
By Lucy Sherriff
10th August 2007 15:31 GMT
http://www.theregister.co.uk/2007/08/10/climate_model/
It appears that climatologists are in the process of improving their short-range forecasting.
papertiger (#29) responded:
Interesting. It seems to be improved just enough to cover the time period right after the next election.
You sure this is a non political website?
Different country – Hadley out of England – notice the UK in the website address. But I suppose they could be part of a global conspiracy. You have to watch those conspiratorial types pretty darn closely – particularly the scientists when they start getting uppity….
So you think that climate models can predict weather? You endorsed their site as if it were authoritative. That same office predicted an unremarkable 2007 summer for the UK. As we have seen, the UK had an exceptionally wet summer, well below average temperatures, and widespread floods.
Why do you endorse this model, if not due to politics?
Hank Roberts says
Come on, that’s a link to The Register.
Good grief, if you don’t read the actual science paper, at least read the comments of someone who has:
“… So I read the actual paper containing the new predictions. It turns out that the press reports are considerably overblown (surprise!); they give the unmistakeable impression that the HadCRU team has made definitive predictions of the future progress of global warming over the next decade or so, as though we now know with confidence how global average temperature will evolve up to 2014. If you read the actual paper you’ll find that is simply not so…. ”
http://tamino.wordpress.com/2007/08/14/impure-speculation/
nanny_govt_sucks says
Gavin said:
Why not just say the regional average is 0.2 deg/dec and leave the individual station data alone?
Anyway, I thought the purpose of the GISS adjustments was to correct for any urban heating, not just to homogenize all stations.
[Response: The GISS-adjusted record is simply the last step in the process before you grid the data, it is not a claim that this is what was the most accurate history for that station. The purpose is to provide a regional and global analysis that is based on rural trends. – gavin]
Mark A. York says
Paper indeed sir! Humans are pattern-seeking creatures. Scientists deal in substantiated trends. See the difference between this and your fallacy? If not, review and report back.
Dodo says
The graph of global mean temperatures has disappeared from Gavin’s text. Let’s hope it will soon re-appear with the y-axis similar to that in the US graph. This is basic stuff in statistical graphics: if you compare two time series, you visualize them in similar coordinates. And no values off the chart, please.
It’s not that GISS does not know how to visualize their global data. There is for example one graph, showing that global warming more or less stopped about six years ago:
http://data.giss.nasa.gov/gistemp/graphs/Fig.C_lrg.gif
Nick Gotts says
Re in-line comment to #349 [My comments above stand – independent replication from published descriptions – the algorithms in English, rather than code – are more valuable to everyone concerned than dumps of impenetrable and undocumented code. – gavin]
But Gavin (and you know I’m no denialist, nor do I think this matter undermines the immense weight of evidence for AGW from numerous independent lines of research), code should not be impenetrable and undocumented. If it is, that’s a flaw in the work. Whether it’s a big flaw or a little flaw depends on context, but it’s a flaw.
Nick says
#359 – Errr… it looks more like the anomaly had a large peak in 1998 and has then returned to a rising trend. Also helpful if you show data before 1997.
Jeffrey Davis says
The demands for transparency and the cackling over a teensy adjustment in temps don’t reflect well on a group that allowed an order of magnitude change in a y-axis scale to back up one of their talking points. And that’s simply as a question of form and rhetoric. Counter science with science, folks, not with Death by Quibble.
Adam says
“So you think that climate models can predict weather? You endorsed their site as if it were authoritative. That same office predicted an unremarkable 2007 summer for the UK. As we have seen, the UK had an exceptionally wet summer, well below average temperatures, and widespread floods.”
Actually, despite the fact that the summer isn’t yet over, they predicted the temperature to be about average – and it has been so far. They also predicted the northern part of the UK to tend towards wetter than average. The southern part was predicted to tend towards drier and that’s where the main error on the part of the forecast has been. The UKMO state that the seasonal forecast is experimental and has a success rate of roughly 60%.
However, that is using a different forecasting model (it’s based on NAO predictions and SSTs partly using statistical techniques), and by a completely different group (the seasonal forecasts are carried out by the UKMO whereas the climate research is done by the Hadley Centre which is a semi-autonomous division – that might seem a trivial distinction but it’s not). The DePreSys model uses the HADCM3 model but with improved data initialisation. It is more akin to running the current UKMO GM for ten years than their seasonal forecast.
That said, the project has been running for over four years and is based in the UK, so any link with US elections is rather fanciful and a bit silly.
Finally, their paper is really a marker for a work in progress. It’s possibly the first time that anyone has done a climate prediction of this type. They acknowledge the limitations of the DePreSys forecast and I’d expect the forecast to change with newer runs.
BlogReader says
[ gavin : My comments above stand – independent replication from published descriptions – the algorithms in English, rather than code – are more valuable to everyone concerned than dumps of impenetrable and undocumented code. ]
Huh? I’m a computer programmer, and the first thing I skip when tackling a problem is the docs; I go straight to the code, as that’s where the skeletons lie. What you’re advocating is like Ford telling a jury that they shouldn’t look at the blueprints of their Pinto or at executive memos, but rather at press releases.
This is akin to security by obscurity. It is the last thing that I would expect from scientists.
[Response: Again, you miss the point. It is better for someone to come up with the same answer by doing it independently – that validates both approaches. – gavin]
Paul Miller says
Skeptics leaping on the need for NASA to revise their figures have reached Sweden, where I work. Letters to the editor in regional newspapers (hd.se), no less. Under the heading “En Obekväm Nyhet” which means “An Uncomfortable Piece of News”, the letter writer neglected to mention that the new figures were for the US, and simply said that temperatures in 1934 were as warm as 1998, that this news (studiously avoided by the press because it’s so uncomfortable) would “disappoint” people with so much invested in the climate threat, and even that “the climate catastrophe has been cancelled”!
Needless to say, there’s a critical reply/correction on the way to the editor!
Keep up the good work RealClimate team.
P.
steven mosher says
RE351.
The cooling TREND at Lake Spaulding (compared to nearby sites) is nearly linear. That would indicate something like a sensor going bad. Post 1931 it matches the other sites in the area nicely. Station history records might show replacement of the sensor… Haven’t checked that yet.
steven mosher says
RE 175. Gavin inlined:
“Response: ‘Algorithms’ are just descriptions in various flavours of formality. If there is a question of whether the code correctly implements the algorithm it will be apparent if someone undertakes to encode the same algorithm and yet comes up with a different answer.”
Well, if we both made the same error, they would match. If we used different math libraries and one had a flaw, they would mismatch for a different reason.
If they didn’t match, how would we resolve the mismatch? By sharing each other’s code. Plus, NASA software policy encourages you to share code.
Next:
“Absent that, there is no question mark over the code. So if I generically describe a step as ‘we then add A and B to get C’, there is only a point in checking the code if you independently add A and B and get D. That kind of replication is necessary and welcome. With that, all parties will benefit.”
I would have to do more than merely claim that the results didn’t match, right? I suspect that you would want to see my code. Can you imagine if 10 people tried to match what you did and then all sent you their code to check? That would be rather a waste of your time.
Next:
“Simple demands to see all and every piece of code involved in an analysis presumably in the hope that you’ll spot the line where it says ‘Fix data in line with our political preference’ are too expansive and unfocused to be useful science. – gavin”
I don’t expect to find comments like that. Fundamentally I believe in transparency. The default should be release the code, unless there is an IP issue.
[Response: Why talk about hypotheticals? Do your emulation and see. – gavin]
J.S. McIntyre says
J.S. McIntyre (#319). I see it all the time from everyone. Nobody has a lock on the rhetoric about this subject.
==============
Actually, for the sake of reference, it was post 321.
No one said otherwise, and I find it interesting you would infer that was my sole point.
As I outlined in my remarks, there is a very large difference between what we see emerging from the people promoting the science of Global Warming and the so-called “AGW Skeptics”, far beyond just “rhetoric”.
Dave Blair says
re #367,
Do climatologists do the programming themselves, or do they give the algorithms to programmers or comp. sci. students (in the case of a university) to program?
[Response: Depends on the size of the group. The Hadley Centre and NCAR have specific programmers, GISS is a smaller institution and the scientists do most of the work themselves though they do get some help from GSFC programmers. – gavin]
kevin rutherford says
Re 358 papertiger: “the UK had an exceptionally wet summer, well below average temperatures” – in fact, temps for June were 1.1 degrees above the 1961–1990 average; temps for July were 0.3 degrees below the 1961–1990 average.
Is this the same papertiger in ref 24:
Just to expand on my last point at 12, (somehow) many see this as indicating that we can’t trust anyone (especially Hansen) to handle the data properly.
I think the point is that we shouldn’t have to trust someone, as in a single person or entity such as NASA GISS, to develop what is in effect policy for our country.
It’s undemocratic and unscientific.
A person could make a mistake. Ahem.
So we can’t trust anyone handling data? I know who I would trust with attempting to be accurate and honest. Sadly typical that those who cast aspersions on the validity of this scientific work don’t seem to be able (or willing?) to treat facts with such respect as those they criticise.
Hank Roberts says
> I would have to more than merely claim that the results didn’t match, right?
You’d have passed a peer review and gotten a publication in a science journal.
You’d have coauthors, whose track record and reputation would support your conclusion.
You’d be taken seriously and people would assume you were doing honest science, and look for flaws in your approach.
Why not try it, as Gavin and many others suggest? So far every group that _has_ done a climate model finds much the same result.
Once you’ve done it — you’ll understand how it’s done.
As Willy Ley supposedly said, analysis is all very well, but you can’t understand a locomotive by melting it down and analyzing the mess. You have to build one to understand how it works.
Allan Ames says
Gavin: While the magnitude of what happened is small, the dimensionality — errors by a respected agency — is bad for the agency. My town once had a school superintendent whose approach to criticism was to 1) ask the selectmen to form a study committee, 2) appoint the critic as the chair, and 3) make the “full resources of the department” available for the study. The dynamics of this response are interesting. First, the critic cannot decline without losing credibility. Second, the school department still maintained most of the control of the study process. Third, it got to know what was being found and correct additional problems before they got out of hand. Further, the committees sometimes found solutions that would have required paid consultants, so they more than paid for themselves. Everyone was happy.
It is not possible for anyone to review all or even most of a GCM, so there will always be questions. You might find it useful to have a mechanism in place for semi-formal review of data, design, or algorithms by outsiders via the internet. If it were all done on the web, it might even pay for itself.
Timothy Chase says
Papertiger (#358) wrote:
I don’t know what exactly they said about the summer of 2007, other than a forecast made back in January of a 60% chance of breaking temperature records – but that would have been worldwide. Precipitation for Great Britain? Not seeing anything as of yet. But perhaps you have a link to their prediction.
In any case, precipitation is supposed to increase for England over time. A large part of it is geography. Located between the polar and Ferrel cells, it is at a latitude of low pressure where warm moist air will tend to result in precipitation. It has the whole Atlantic to the west, and increased evaporation over the Atlantic will make floods more likely in England. Moreover, this is just the trend which we have been seeing over the past several decades. Roughly linear. But winter is when they get more precipitation – and it is during that time of the year that precipitation has increased by 35%.
By contrast, we are expecting the Hadley cell south of the United States to expand, moving the high pressure between the Hadley and Ferrel cells north, which will diminish precipitation. A higher rate of evaporation will mean that soil dries out more quickly. And in continental interiors this will become especially pronounced, as the land is warming and will warm up more quickly, leading to a lower relative humidity. Then we should also see precipitation events either diminish in frequency or overall amount, except for extreme events, which cause flooding.
*
All of this is independent of what the weather does during any particular year. As climatologists will generally tell you, their predictions are about average behavior, not the weather on any particular day, month or year. But the Hadley forecast is different. Climatology doesn’t attempt to predict the weather for a given period but the attractor: a probability distribution, in what is called phase space, in which the weather for any given day will be embedded. It does this by performing many different runs with slightly different initial conditions.
The butterfly effect will cause the “forecast” for any given day to differ from run to run, but since climatology is only concerned with the average behavior, the butterfly effect is for the most part irrelevant. Physics is what drives the average behavior, the probability distribution as a whole. Some models will be better at capturing that behavior than others, depending upon how individual runs are calculated, for example in terms of the resolution or the approach to calculation, which at a certain level becomes a matter of resolution as well.
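A minimal sketch of that idea, using the Lorenz-63 toy model rather than a real GCM: microscopic perturbations to the initial conditions make individual runs diverge completely, while the ensemble statistics remain well behaved:

```python
import numpy as np

def lorenz_step(state, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system, the classic toy
    model of chaotic 'weather'. state has shape (members, 3)."""
    x, y, z = state.T
    return state + dt * np.stack(
        [s * (y - x), x * (r - z) - y, x * y - b * z], axis=-1)

rng = np.random.default_rng(2)
# 50 ensemble members, identical except for microscopic perturbations.
ensemble = np.array([1.0, 1.0, 20.0]) + 1e-6 * rng.normal(size=(50, 3))
for _ in range(10_000):
    ensemble = lorenz_step(ensemble)

print("spread of members:", ensemble[:, 0].std())   # runs have fully diverged
print("ensemble mean x:  ", ensemble[:, 0].mean())  # statistics stay bounded
```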
*
In terms of climatology as a whole, the one thing that we have the most accurate understanding of is how higher levels of greenhouse gases will affect the climate system as a whole. This is a matter of radiation physics: something we are able to understand in terms of quantum mechanics, measure in the lab and observe with infrared imaging of the atmosphere. This is as solid as it gets.
But unfortunately, while we can control the amount of greenhouse gases which we put into the atmosphere, the effects of this, at least in terms of carbon dioxide, won’t be fully felt until roughly forty years hence. Then the paths determined by our behavior in the present will begin to diverge.
Carbon dioxide stays in the atmosphere. Therefore the effects of our behavior, in terms of carbon dioxide, will necessarily be cumulative. The more years we have high emissions, the more carbon dioxide there will be in the atmosphere, and thus the higher the temperature that will be required for the amount of thermal radiation leaving the system to equal the amount of thermal radiation entering the system. The basis for this amounts to little more than the radiative properties of carbon dioxide and the conservation of energy.
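The energy-balance logic here can be put in a few lines, using the standard simplified CO2 forcing formula (Myhre et al. 1998) together with an assumed, and itself uncertain, climate sensitivity parameter:

```python
import numpy as np

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing (Myhre et al. 1998):
    5.35 * ln(C/C0), in W/m^2."""
    return 5.35 * np.log(c_ppm / c0_ppm)

def equilibrium_warming(c_ppm, sensitivity=0.8):
    """Equilibrium response for an assumed sensitivity of ~0.8 K per
    W/m^2 (roughly 3 K per CO2 doubling); the true value is uncertain."""
    return sensitivity * co2_forcing(c_ppm)

for c in (380, 450, 560):
    print(c, "ppm ->", round(equilibrium_warming(c), 2), "K above preindustrial")
```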
However, there is one big uncertainty regarding carbon dioxide: the feedback associated with the carbon cycle. At what point will various positive feedbacks kick in? This is partly a matter of how plants respond, precipitation patterns, ocean circulation and so on.
*
Anyway, I won’t deny that I am political. For example, despite their personal flaws, Winston Churchill is my favorite statesman of the twentieth century and I have a great deal of affection for George Orwell. I really doubt they liked one another, but that’s a different matter.
Likewise, I place a great deal of emphasis upon individual rights, property rights and the free market, as a result of my understanding of economics, and a great deal of emphasis upon climatology and the importance of limiting the threat of climate change. At a certain level, I am probably less political than most, in that I try to always give precedence to identification rather than evaluation.
But I am human and I have plenty of flaws. I make plenty of mistakes. I will get annoyed with people, sometimes strongly so. I will lose my temper. No doubt I have my prejudices. There are people I distrust, and people I consider friends. I even have my favorite television series: Babylon 5, although it hasn’t been on air for years.
However, science knows no politics. Particularly when one is dealing with physics. It is essentially the study of cause and effect. Despite the complexity of the phenomena it studies, climatology falls into that category.
Now of course individual scientists do have their own personal politics and prejudices, and this will color their views, but assuming one is living in a free society, the evolution of our scientific understanding over the long term will be essentially independent of that, largely for the same reasons that the free market works so well, at least as I understand it.
*
Anyway, I am not sure how much stock I put in the Hadley forecast, particularly for next year. I put more stock in the general trend which it predicts over the next decade, but even then I have some doubts. For example, it is quite possible that NASA has a better model. From what I understand, they are both very good models, world class.
I would hate to have to pick between the two. But I believe that the general approach that the people at MET are employing will be more powerful than what we have done in the past, initializing the model with our best measurements of real-world data regarding ocean conditions and the like from consecutive days.
In any case, this approach is new. It isn’t what they would have used to arrive at their predictions for this year. So they may be more successful at predicting the coming years than they were with this year’s predictions.
I am hopeful that it will work. Assuming it does, this approach will give us a better understanding of the conditions in which we tend to plan various projects. While it won’t help us control the trends, if it works it will at least help us with mitigating the effects of climate change.
steven mosher says
RE 353. Gavin inlines.
“RTFR. GISS only adds the urbanisation adjustment and does not do any homogeneity testing, the station history adjustments are taken from USHCN as is clearly described. The only further culls were for obvious weirdness such in the early Cal. data as was, again, clearly described. What is so hard about reading the papers? – gavin]”
Well, obvious weirdness is not an algorithm. The text states that there are 5 stations in NorCal (actually one is in Oregon) that exhibited a cooling trend not shared by other sites in “the region”, but the region is not specified. Electra is 400 miles from Crater Lake. So, my first question is: what do you mean by region? That’s simply a practical question – which sites were looked at to do the comparison? And it’s a fair question. Sites within 1200 km, 1000 km, 100 km?
Now when you compare, let’s say, Lake Spaulding to its three closest neighbors, you will see that it is definitely “weird”. A gross measure of weirdness is (TahoeCity_Tmean – LakeSpaulding_Tmean). A simple linear regression on this shows that (T-S) changes on a nearly linear basis from 1914-1931, by 0.27 C or something thereabouts, with an R² of 0.92. Thereafter the slope of (T-S) is roughly 0.06. The same goes for the pairing of Nevada City and Lake Spaulding. The comparison with other nearby sites was similar but not as dramatic.
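The difference-series check described above is easy to put into code. Here is a minimal sketch with invented numbers, illustrating the generic idea rather than any specific published homogeneity test:

```python
import numpy as np

def difference_trend(years, t_ref, t_test):
    """Fit a linear trend to the difference series between a test
    station and a nearby reference. A persistent slope suggests an
    inhomogeneity at one of them. Generic idea only, not any specific
    published test."""
    d = t_ref - t_test
    slope, intercept = np.polyfit(years, d, 1)
    resid = d - (slope * years + intercept)
    return slope, 1.0 - resid.var() / d.var()   # slope and R^2

# Hypothetical: the test station drifts 0.02 C/yr cold over 1914-1931.
rng = np.random.default_rng(3)
years = np.arange(1914, 1932)
t_ref = 10.0 + rng.normal(0, 0.05, years.size)
t_test = 10.0 - 0.02 * (years - 1914) + rng.normal(0, 0.05, years.size)
print(difference_trend(years, t_ref, t_test))   # slope near 0.02, high R^2
```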
The question is: what is the TEST for weirdness? Was a weirdness test performed? Now, the paper cites Easterling and Peterson on finding inhomogeneities within series. So I assume you used a calculation to quantify weirdness. The text doesn’t say.
For interested folks you will find a nice description of some of the issues here:
http://www.climahom.eu/software/docs/Prezentation_T_homogenization.pdf.
Pages 32-34 show an adequate description of the methods.
Anyway, Lake Spaulding looks weird from 1914-1931.
Crater Lake just looks cold. Again, there is no list in the text or footnotes of “nearby stations”. So I picked the closest one, Prospect, to see if (P-C) – temps at Prospect minus temps at Crater Lake – was “obviously weird”.
Well, the only thing I saw, at first blush, was that Crater Lake (station altitude 6475 ft) was consistently cooler than Prospect (station altitude 2482 ft). The trend of (P-C) looked to be fairly linear.
If you corrected for the difference in altitude, the sites came out pretty close on absolute temperature. I didn’t see any weirdness in trend differences.
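For reference, the size of the altitude correction implied here can be estimated from the standard mean lapse rate of about 6.5 C per km; actual local lapse rates vary with season and terrain, so this is only a rough check:

```python
# Expected mean temperature difference from altitude alone, assuming a
# typical environmental lapse rate of ~6.5 C/km (a rough rule of thumb).
dz_m = (6475 - 2482) * 0.3048             # altitude difference, feet to meters
print(round(6.5 * dz_m / 1000.0, 1))      # ~7.9 C, Crater Lake the cooler site
```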
Hence my question.
a. Was Crater Lake compared to other sites?
b. Which sites?
c. What test for weirdness was performed?
I assumed it was a test for inhomogeneity, and since Easterling & Peterson are cited, I assume that this was the test performed.
The issue isn’t the 0.01 change.
[Response: You appear to still be confusing me with the authors of the papers. If you think there is something of interest in all of this, do a specific study on that area. I have no knowledge, nor interest, in the temperatures of Crater Lake in 1920. Doing the analysis yourself will make clear all of the answers to your questions. If you think that the procedure was invalid, do the study that demonstrates that. – gavin]
Timothy Chase says
Steven Mosher (#370) wrote:
Steven,
I would have to say that like me, you are something of an open book. I for one reason, and you for another. You are a fairly public figure. That happens when one testifies before Congress and writes several books, some of which are currently selling on Amazon at a reduced price.
At a certain level, I would have to say that I even admire you. In some ways we are a bit alike. I think both of us are roughly as passionate about our values. And this means that people can see rather deeply into both our characters – if they know how to look.
Chuck Booth says
Re # 336 David Blair
“you say they are accurate for the next decade, maybe so and time will tell, but when we hear the predictions for 50 or 100 years from now that when you wonder why all the movie stars and politicians are getting involved”
The GC models are based on known laws of physics, well-understood biogeochemical cycles, assumptions about GHG emissions levels (up, down, or no change), etc. They can’t account for unpredictable events, such as volcanoes or asteroids hitting the earth. Are you expecting the laws of physics to change any time soon? Or maybe a major alteration to the carbon cycle?
Lynn Vincentnathan says
Yes, this tiny insignificant NASA mistake is “Another week, another ado over nothing.” It’s weird how we can go on and on about nothing.
But here’s something: tonight (Fri, Aug 17) on Bill Moyers Journal on PBS (9 pm in my area), Bill will interview Mike Tidwell, author of The Ravaging Tide: Strange Weather, Future Katrinas and the Coming Death of America’s Coastal Cities.
That’s something significant.
Aaron Lewis says
The argument about conditions at rural weather stations is dumb. Talk to anyone in the fruit industry or to agricultural extension agents: bloom dates have been coming earlier and earlier. Moreover, you can see the date of greening from satellite data. No heat pumps or asphalt in all those sections and sections of orchard.
Every apple tree, every grape vine is a little integrating thermometer. Any one of them may not be as accurate as a laboratory thermometer, but there are so many of them, that over all, their ability to detect small changes in temperature is high. As long as we are getting earlier greening dates and earlier bloom dates, things are getting warmer.
Bottom line: the actual temperature measured at a weather station does not matter. What matters is how warming affects the plants and animals that we need to survive. What matters is how the heat stresses people. What matters is how the heat stresses our electrical grid. What matters is how heat affects our water supplies. What matters is the expanded range of insect pests and other pathogens. The temperatures measured at any set of weather stations do not indicate the actual scope of any of these issues. Really, the numerical value of temperature is a proxy for actual damages.
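The ‘integrating thermometer’ idea has a standard quantitative form: growing degree days, the accumulated heat above a base temperature. A toy sketch with invented daily temperatures, showing why even a modest warming shifts bloom dates earlier:

```python
import numpy as np

def growing_degree_days(tmean, base=10.0):
    """Accumulated growing degree days, the standard horticultural
    measure of the heat a plant has 'integrated' since the start of
    the season."""
    return np.cumsum(np.maximum(tmean - base, 0.0))

# Hypothetical spring: daily means warming into summer. A uniform +1 C
# shift reaches any fixed GDD threshold several days earlier.
days = np.arange(120)
baseline = 5.0 + 12.0 * np.sin(np.pi * days / 365.0)
threshold = 150.0
for label, series in (("baseline", baseline), ("+1 C", baseline + 1.0)):
    day = int(np.argmax(growing_degree_days(series) >= threshold))
    print(label, "reaches", threshold, "GDD on day", day)
```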
Hank Roberts says
Aaron, Timothy — excellent posts.
As a reader here, beating my familiar drum — I do urge you to provide cites with your statements, to help out new readers coming to the topic, especially later on.
It’s a lot of extra work to do that (this is why programmers don’t comment code they don’t expect to publish, too!).
For later readers, it’s the footnotes and cites that separate sound writers from the merely opinionated posters. Without those only the people who already know the literature can tell the two groups apart. I know you’re good. Please show off (grin) for later readers.
ken rushton says
Going fast now.
Take a look at the jet stream. It hints that a second monster surge of warm air from Siberia is heading over the North Pole onto Greenland. This is something I’ve never seen. Has the polar air circulation pattern changed?
In any event, a second arctic ice melt is now underway.
Hank Roberts says
Ken, can you provide a link to what you’re talking about please?
Have you read about the two big patterns described, in this study? http://pubs.giss.nasa.gov/abstracts/2007/Liu_etal_2.html
Ike Solem says
Speaking of climate models and surface temperatures, the folks at the Hadley Met Office have a recent paper in Science that applies to this issue and seems fairly interesting:
“Improved Surface Temperature Prediction for the Coming Decade from a Global Climate Model”, Smith et al., Science, 10 Aug 2007.
It seems to be an effort to use global coupled circulation models to estimate the internal variability of the climate system over the next decade. Internal variability refers to unforced changes in the climate system (see here for the definition of a forcing). These would include things like El Nino and fluctuations in ocean circulation and heat transport. The model they use is something called the “Decadal Climate Prediction System” (DePreSys) based on the Hadley Center Model. Animations from the Hadley Center are available here.
To test this strategy they used a number of ‘hindcasts’ covering timeframes from 1950 – 2005 or so. They seem to do a pretty good job of it. Here’s an important quote:
“Because the internal variability of the atmosphere is essentially unpredictable beyond a couple of weeks, and the external forcing in DePreSys and NoAssim is identical, differences in predictive skill are very likely to be caused by differences in the initialization and evolution of the ocean.”
(the difference is that DePreSys contains information on initial conditions, and NoAssim does not. If you think about a weather (atmosphere) forecast model for the one-week future, it’s very sensitive to initial conditions. Thus, DePreSys seems to be an ‘ocean weather’ forecast model)
Once again, the point is made that the more comprehensive the data on current ocean conditions, the better the decadal predictions will be (just as in weather models, which rely entirely on the satellite and radiosonde networks for initialization). Unfortunately, the head honchos at NASA no longer believe that monitoring the Earth system is part of their job…???
The paper concludes by presenting a forecast for global surface temperatures for the coming decade:
“Both NoAssim and DePreSys, however, predict further warming during the coming decade, with the year 2014 predicted to be 0.30° ± 0.21°C [5 to 95% confidence interval (CI)] warmer than the observed value for 2004. Furthermore, at least half of the years after 2009 are predicted to be warmer than 1998, the warmest year currently on record.”
(Yes, 1998 is still the warmest year on record)
This seems like an interesting paper – do any of the professional modelers out there have any comments?
sparrow (in a coal mine) says
My comments above stand – independent replication from published descriptions – the algorithms in English, rather than code – are more valuable to everyone concerned than dumps of impenetrable and undocumented code. – gavin
Gavin, you are talking scientific utility when politics is about perception. They are independent of each other. Whether the code is impenetrable or not is irrelevant to public perception. Maybe there is a happy middle ground (such as an official page or RC blog post dedicated to listing papers’ algorithms and where to find the data), but the current policy isn’t cutting it. That being said, I do not envy what you have to deal with. I’m sure simply reading/answering 400 comments on one blog post is a major drain on one’s time.
steven mosher says
Gavin,
As long as we are culling weird cooling data (Lake Spaulding is messed up, probably a sensor going bad over time), how about we clean up the “weird” warming issues:
To wit:
“The Recent Maximum Temperature Anomalies in Tucson: Are They Real or an Instrumental Problem?”
http://gallery.surfacestations.org/main.php?g2_view=core.DownloadItem&g2_itemId=21224
Excerpt:
“…during 1986 and 1987 maximum temperature records were being set on 21 days and 23 days respectively. In 1988 this increased to 38 days, and to 59 in 1989.” and “With one exception, in 1988 and 1989, there were no other stations within 1000 miles that set any kind of record on the dates…””
Turns out NOAA changed sensor suppliers…
Rafael Gomez-Sjoberg says
This is off-topic, but I think it’s pretty important, and I don’t know where else it would be appropriate to post.
Nature News has a report on a new paper published in Ecology Letters that deals with how rising temperatures stunt the growth of tropical forests, which would severely decrease their ability to remove CO2 from the atmosphere or even make them net emitters of the gas.
Here is the link to the news report:
http://www.nature.com/news/2007/070806/full/070806-13.html
And the reference to the paper:
Feeley, K. J. et al. Ecol. Lett. 10, 461-469 (2007)
And here are some important excerpts from the news report:
“Global warming could cut the rate at which trees in tropical rainforests grow by as much as half, according to more than two decades’ worth of data from forests in Panama and Malaysia. The effect — so far largely overlooked by climate modellers — could severely erode or even remove the ability of tropical rainforests to remove carbon dioxide from the air as they grow.
The study shows that rising average temperatures have reduced growth rates by up to 50% in the two rainforests, which have both experienced climate warming above the world average over the past few decades. The trend is shown by data stretching back to 1981 collected from hundreds of thousands of individual trees.
…. The trends measured by Feeley suggest that entire tropical regions might become net emitters of carbon dioxide, rather than storage vessels for it. “The Amazon basin as a whole could become a carbon source,” Feeley says.
Feeley and his colleagues analysed data on climate and tree growth for 50-hectare plots in each of the two rainforests, at Barro Colorado Island in Panama, and Pasoh in Malaysia. Both have witnessed temperature rises of more than 1ºC over the past 30 years, and both showed dramatic decreases in rates of tree growth. At Pasoh, as many as 95% of tree species were affected, Feeley and his colleagues report. The research has also been published in the journal Ecology Letters”
I’m afraid there are too many positive feedbacks in the climate system that will soon kick in and give us a really hard time.
But here we are speeding down a road that all rational analysis indicates leads to an abyss, while the “skeptics” (Steve Moshers) of the world keep quibbling about the color of the numbers on the speedometer.
I’m sure that as the Roman Empire was falling, lots of “skeptics” were debating on the barbarians’ sense of fashion and whether they were really such unfriendly folk.
Timothy Chase says
Hank (#382) wrote:
I will try to do more of that in the weeks ahead, but as I intend to put my links together on the web over time, it will probably become easier to keep track of them and refer to them. Currently I tend to work from memory a little too much, and the good majority of what I write is extemporaneous, so too many references may slow me down a bit.
I promise I will try harder, though, even if it slows me down a bit.
ziff house says
Re 383: just got back from 81N on Ellesmere; had two weeks of 15-20 C.
Dave Blair says
379, Chuck: asteroids hitting the Earth are also governed by physics. Weather prediction is also based on physics, complex models, and supercomputing, as is climate science. However, weather predictions are easily testable. The predictions from a science are influenced by the number of starting variables, the formulas and the complexity. There is no mathematical formula for different cloud types; you have to learn them from pictures. We can also look at earthquake prediction for a comparable science – again, the results are unreliable.
Weather prediction is an excellent comparison for climate science; tobacco science is a poor comparison.
steven mosher says
RE 378.
Timothy. Wrong again. First you try to sneak a G.E. Moore argument by me (here is a hand) without giving him attribution. Moore is one of my favorites. You should have recognized my Moorian trick in the UHI discussion.
Anyway, now you confuse me with Steven W.
This has happened on several previous occasions.
1. I had a friend being interviewed for a TS/SAR position at Monterey. DIS was giving him the full monty inspection, talking to all his friends. The lead investigator made the same mistake you did, thinking my middle initial was W.
2. When I applied for my TS/SAR the same question came up. Visiting the PRC, back in the day, was a no-no. To make matters worse, I had a Chinese girlfriend. Proving I wasn’t him was fun but easy. I’m cute, he’s not. He’s Catholic, I’m agnostic. Funny story: I went to a local charity event (a liberal thing; I’m libertarian). I get introduced to a Catholic priest and he mistakes me for Steven W.
3. I get lots of nice emails back from think tanks and talk show hosts who think I’m him. I’m always straight with them, so I offer you the same consideration.
When you get time you should read my dead friend
http://en.wikipedia.org/wiki/William_A._Earle
And the most serene soul I ever had the pleasure to study with.
http://en.wikipedia.org/wiki/Erich_Heller
A cool guy from the Earle gang. He was a grad student with Earle while I was doing my Honors with Earle. Very funny dude.
http://en.wikipedia.org/wiki/Peter_Suber
Next I will twist your brain and get you to read Alvin Plantinga. You can of course google him… oh wait, wiki has him. Nice guy. Uber brilliant, with a very weird take on certain issues. You go read. It’ll set your hair on fire.
Adam says
Ike Solem #385:
Try tamino (Open Mind), James Annan (James’ Empty Blog) & William Connolley’s blog (Stoat) (linked on the right). There’s some more stuff there.
Steve Bloom says
Re #390: Wow. That’s hot enough for some serious melting. Did you happen to observe or hear about any notable effects?
Dr. J says
This issue makes me wonder where the seams are in the global or U.S. temperature data between the various methods of temperature measurement – like the seam at the switch from alcohol to mercury thermometers, and likewise from mercury thermometers to satellite and digital measures. Does anyone know what years these seams fall in? Thanks.
Hank Roberts says
> Now you confuse me with Steven W. [Population Research Institute –hr]
> This has happened on several previous occassions.
Gavin could probably edit those errors claiming you’re the other guy, if you point them out.
Misattribution happens, but leaving confusion around isn’t kind to later readers.
Speaking of leaving confusion around,
> a GE Moore argument …. my Moorian trick in the UHI discussion.
I wish you would distinguish between a science discussion and a debate. This goes along with my pleas for cites.
Hard argument in science is different, or should be, by intent. I don’t know if this is taught nowadays.
I recall my dad teaching biology grad students, long ago, in the 1950s, to argue in seminars – never meaning to “win” but always to clarify and, perhaps, improve everyone’s necessarily incomplete and muddy view of the subject – to help one another, as Beckett says, “fail better” next time.
Lawrence Brown says
In response to comment 377, Gavin says: “…Doing the analysis yourself will make clear all of the answers to your questions. If you think that the procedure was invalid, do the study that demonstrates that. – gavin”
The more I get into the discipline of global warming, the more I come to believe that most, if not all, skeptics do not do original work. They believe their job is to criticize the work of others. Yet they appear to object to having others do to them what they, the critics, do – that is, open themselves up to criticism – in much the same way that many power companies begrudge having to pay their own rates to buy back power from individuals who generate a surplus of power from solar panels. Why do something original, when you can throw darts at the work of someone else?
Dodo says
Re #368. Maybe the Swedish paper wouldn’t have jumped to the wrong conclusion if GISS had published an orderly press release about what was wrong and so on. By trying to sweep the mistake under the carpet, our dear RealClimate scientists helped the other side’s extremists misunderstand the news. Good lesson.
And to our deep disappointment, Gavin’s world temperature graph is back unchanged, with the misleading y-axis spoiling the US-ROW comparison, and the ridiculous off-chart point making a “dramatic” effect.
snrjon says
Re #390
Actual highest temperature (at Eureka, Ellesmere) in the first 15 days of August was 17 deg C. Only 6 days above 10 deg C (daily high). Average of mean temp for period was 7 deg C, so yes a little warm for the Canadian Arctic, but hardly extreme!
Which hot spot were you at actually (where you had the two weeks of 15-20)?
wayne davidson says
#397: There are other ways to prove accuracy in climate data, especially recent data. I do this myself with a new, unique method of measuring heat in the atmosphere, using the sun as a fixed sphere of reference. The traditional techniques used to come up with global average temperatures match sun oblateness variances. Critics must come up with their own way of confirming or denying the validity of GT measurements. My opinion has never been negative of most official GT results published. My own independent work reinforces that opinion, to the point that sceptics seem to have overused their armchairs and are largely not contributing anything but nonsense in the greater quest of narrowing down a more exact GW trend.