Observant readers will have noticed a renewed assault upon the meteorological station data that underpin some conclusions about recent warming trends. Curiously enough, it comes just as the IPCC AR4 report declared that the recent warming trends are “unequivocal”, and when even Richard Lindzen has accepted that the globe has in fact warmed over the last century.
The new focus of attention is the placement of the temperature sensors and other potential ‘micro-site’ effects that might influence the readings. There is a possibility that these effects may change over time, introducing artifacts or jumps into the record. This is slightly different from the more often discussed ‘Urban Heat Island’ effect, which is a function of the wider area (and so could be present even in a perfectly set up urban station). UHI effects will generally lead to long term trends in an affected station (relative to a rural counterpart), whereas micro-site changes could lead to jumps in the record (of any sign) – some of which can be very difficult to detect in the data after the fact.
There is nothing wrong with increasing the meta-data for observing stations (unless it leads to harassment of volunteers). However, in the newfound enthusiasm for digital photography, many of the participants in this effort seem to have leaped to some very dubious conclusions that appear to be rooted in fundamental misunderstandings of the state of the science. Let’s examine some of those apparent assumptions:
Mistaken Assumption No. 1: Mainstream science doesn’t believe there are urban heat islands….
This is simply false. UHI effects have been documented in city environments worldwide and show that as cities become increasingly urbanised, increasing energy use, reductions in surface water (and evaporation) and increased concrete etc. tend to lead to warmer conditions than in nearby more rural areas. This is uncontroversial. However, the actual claim of the IPCC is that the effects of urban heat islands are likely small in the gridded temperature products (such as those produced by GISS and the Climatic Research Unit (CRU)) because of efforts to correct for those biases. For instance, GISTEMP uses satellite-derived night light observations to classify stations as rural or urban and corrects the urban stations so that they match the trends from the rural stations before gridding the data. Other techniques (such as correcting for population growth) have also been used.
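For readers who want to see roughly what such an adjustment involves, here is a minimal sketch in Python. It is not the actual GISTEMP code (which uses a more elaborate fit and real nightlight-based station classifications); the function name and the numbers are made up purely for illustration.

```python
import numpy as np

def adjust_urban_station(urban_anom, rural_anom, years):
    """Remove the excess linear trend of an urban anomaly series relative to a
    composite of nearby rural series (a toy stand-in for the nightlight-based
    urban adjustment described above)."""
    urban_slope = np.polyfit(years, urban_anom, 1)[0]
    rural_slope = np.polyfit(years, rural_anom, 1)[0]
    # subtract the excess trend so the adjusted series carries the rural
    # long-term trend but keeps the urban station's year-to-year variability
    excess = (urban_slope - rural_slope) * (years - years.mean())
    return urban_anom - excess

# made-up example: rural trend ~0.6 C/century, urban station with an extra ~0.5 C/century
years = np.arange(1900, 2007, dtype=float)
rng = np.random.default_rng(0)
rural = 0.006 * (years - 1950) + rng.normal(0, 0.2, years.size)
urban = 0.011 * (years - 1950) + rng.normal(0, 0.2, years.size)
adjusted = adjust_urban_station(urban, rural, years)
print(np.polyfit(years, adjusted, 1)[0] * 100)  # back down to ~0.6 C/century
```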
How much UHI contamination remains in the global mean temperatures has been tested in papers such as Parker (2005, 2006), which found there was no effective difference in global trends if one segregates the data between windy and calm days. This makes sense because UHI effects are stronger on calm days (when there is less mixing with the wider environment), and so if an increasing UHI effect were changing the trend, one would expect stronger trends on calm days – and that is not seen. Another convincing argument is that the regional trends seen simply do not resemble patterns of urbanisation, with the largest trends in the sparsely populated higher latitudes.
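The logic of the calm/windy comparison can be illustrated with a toy calculation (synthetic data only, not Parker’s analysis): a genuine large-scale trend shows up equally in both subsets, whereas a growing urban bias would preferentially inflate the calm-day trend.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
days = np.arange(365 * 50)                    # 50 years of daily anomalies
warming = (0.02 / 365.0) * days               # real background trend, 0.02 C/yr
noise = rng.normal(0.0, 1.0, days.size)
calm = rng.random(days.size) < 0.5            # which days are calm

def trends(series):
    """Return (calm-day trend, windy-day trend) in C/yr."""
    return (linregress(days[calm], series[calm]).slope * 365,
            linregress(days[~calm], series[~calm]).slope * 365)

clean = warming + noise                               # no UHI contamination
uhi = clean + calm * (0.01 / 365.0) * days            # growing bias on calm days only
print("clean:", trends(clean))   # calm and windy trends agree (~0.02 C/yr)
print("uhi  :", trends(uhi))     # calm-day trend noticeably larger
```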
Mistaken Assumption No. 2: … and thinks that all station data are perfect.
This too is wrong. Since scientists started thinking about climate trends, concerns have been raised about the continuity of records – whether they are met. stations, satellites or ocean probes. The danger of mistakenly interpreting jumps due to measurement discontinuities as climate trends is well known. Some of the discontinuities (which can be of either sign) in weather records can be detected using jump point analyses (for instance in the new version of the NOAA product); others can be adjusted using known information (such as biases introduced by changes in the time of observation or by moving a station). However, there are undoubtedly undetected jumps remaining in the records, but without the meta-data or an overlap with a nearby unaffected station to compare to, these changes are unlikely to be fixable. To assess how much of a difference they make though, NCDC has set up a reference network which is much more closely monitored than the volunteer network, to see whether the large scale changes from this network and from the other stations match. Any mismatch will indicate the likely magnitude of differences due to undetected changes.
It’s worth noting that these kinds of comparisons work because of the large distances over which the monthly temperature anomalies correlate. That is to say, if a station in Tennessee has a particularly warm or cool month, it is likely that temperatures in New Jersey, say, also had a similar anomaly. You can see this clearly in the monthly anomaly plots or by looking at how well individual stations correlate. It is also worth reading “The Elusive Absolute Surface Temperature” to understand why we care about the anomalies rather than the absolute values.
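To see why this works, here is a small illustrative sketch (entirely synthetic numbers): two stations with very different absolute temperatures can still have highly correlated monthly anomalies if they share the same regional signal.

```python
import numpy as np

def monthly_anomalies(temps):
    """Convert a (years x 12) array of monthly means into anomalies by
    subtracting each calendar month's long-term average."""
    return temps - temps.mean(axis=0)

def anomaly_correlation(temps_a, temps_b):
    a = monthly_anomalies(temps_a).ravel()
    b = monthly_anomalies(temps_b).ravel()
    return np.corrcoef(a, b)[0, 1]

# synthetic example: two stations sharing a regional signal plus local noise
rng = np.random.default_rng(2)
regional = rng.normal(0, 1.0, (50, 12))            # shared monthly anomaly signal
station_a = 12.0 + regional + rng.normal(0, 0.5, (50, 12))
station_b = 8.0 + regional + rng.normal(0, 0.5, (50, 12))
print(anomaly_correlation(station_a, station_b))   # high (~0.8) despite different climatologies
```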
Mistaken Assumption No. 3: CRU and GISS have something to do with the collection of data by the National Weather Services (NWSs)
Two of the global mean surface temperature products are produced outside of any National Weather Service. These are the products from CRU in the UK and NASA GISS in New York. Both CRU and GISS produce gridded products, using different methodologies, starting from raw data from NWSs around the world. CRU has direct links with many of them, while GISS gets the data from NOAA (who also produce their own gridded product). There are about three people involved in doing the GISTEMP analysis and they spend a couple of days a month on it. The idea that they are in any position to personally monitor the health of the observing network is laughable. That is, quite rightly, the responsibility of the National Weather Services who generally treat this duty very seriously. The purpose of the CRU and GISS efforts is to produce large scale data as best they can from the imperfect source material.
Mistaken Assumption No. 4: Global mean trends are simple averages of all weather stations
As discussed above, each of the groups making gridded products goes to a lot of trouble to eliminate problems (such as UHI) or jumps in the records, so the global means you see are not simple means of all data (this NCDC page explains some of the issues in their analysis). The methodology of the GISS effort is described in a number of papers – particularly Hansen et al 1999 and 2001.
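As a concrete illustration of why the global mean is not a simple station average, here is a toy area-weighted gridded mean (my own sketch, not GISS or CRU code): grid cells are weighted by their area, and regions without data simply receive no weight, rather than the average being swamped by whichever regions happen to have the densest station networks.

```python
import numpy as np

def gridded_global_mean(anomaly_grid, lats):
    """Area-weighted mean of a (lat x lon) grid of temperature anomalies.
    Grid cells shrink toward the poles, so each latitude band is weighted
    by cos(latitude) rather than counted equally; NaN cells get no weight."""
    weights = np.cos(np.deg2rad(lats))[:, np.newaxis]
    weights = np.broadcast_to(weights, anomaly_grid.shape)
    valid = ~np.isnan(anomaly_grid)
    return np.nansum(anomaly_grid * weights) / weights[valid].sum()

# toy usage: a mostly-empty 5-degree grid with a band of +1 C anomalies
lats = np.arange(-87.5, 90, 5.0)                 # 5-degree latitude band centres
lons = np.arange(2.5, 360, 5.0)
grid = np.full((lats.size, lons.size), np.nan)   # cells with no data stay NaN
grid[20:30, :] = 1.0                             # a warm band in the mid-latitudes
print(gridded_global_mean(grid, lats))           # 1.0: empty cells simply carry no weight
```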
Mistaken Assumption No. 5: Finding problems with individual station data somehow affects climate model projections.
The idea apparently persists that climate models are somehow built on the surface temperature records, and that any adjustment to those records will change the model projections for the future. This probably stems from a misunderstanding of the notion of a physical model as opposed to a statistical model. A statistical model of temperature might, for instance, calculate a fit between known forcings and the station data and then attempt to make a forecast based on the change in projected forcings. In such a case, the projection would be affected by any adjustment to the training data. However, the climate models used in the IPCC forecasts are not statistical, but are physical in nature. They are self-consistent descriptions of the whole system whose inputs are only the boundary conditions and the changes in external forces (such as the solar constant, the orbit, or greenhouse gases). They do not assimilate the surface data, nor are they initialised from it. Instead, the model results for, say, the mean climate, or the change in recent decades, or the seasonal cycle, or the response to El Niño events, are compared to the equivalent analyses in the gridded observations. Mismatches can help identify problems in the models, and are used to track improvements to the model physics. However, it is generally not possible to ‘tune’ the models to fit very specific bits of the surface data, and the evidence for that is the remaining (significant) offsets in average surface temperatures between the observations and the models. There is also no attempt to tweak the models in order to get better matches to regional trends in temperature.
Mistaken Assumption No. 6: If only enough problems can be found, global warming will go away
This is really two mistaken assumptions in one: that there is so little redundancy that throwing out a few dodgy met. stations will seriously affect the mean, and that evidence for global warming is exclusively tied to the land station data. Neither of those things is true. It has been estimated that the mean anomaly in the Northern Hemisphere at the monthly scale only has around 60 degrees of freedom – that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large scale month to month changes. Currently, although they are not necessarily ideally placed, there are thousands of stations – many times more than would be theoretically necessary. The second error is obvious from the fact that the recent warming is seen in the oceans, the atmosphere, in Arctic sea ice retreat, in glacier recession, earlier springs, reduced snow cover etc., so even if all met stations were contaminated (which they aren’t), global warming would still be “unequivocal”. Since many of the participants in the latest effort appear to really want this assumption to be true, pointing out that it doesn’t really follow might be a disincentive, but hopefully they won’t let that detail damp their enthusiasm…
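The ‘60 degrees of freedom’ point can be illustrated with a crude simulation (purely synthetic numbers, chosen only for illustration): because anomalies are spatially coherent, one well-spread station per ‘region’ tracks the full multi-thousand-station mean remarkably well.

```python
import numpy as np

# Toy illustration of the redundancy argument: anomalies are spatially coherent,
# so a modest number of well-spread stations recovers the hemispheric monthly
# anomaly almost as well as thousands of stations do. (Synthetic data only.)
rng = np.random.default_rng(3)
n_regions, n_months = 60, 600                         # ~60 effective degrees of freedom
regional = rng.normal(0, 1.0, (n_regions, n_months))  # independent regional signals

# thousands of stations, each tied to one region plus local noise
stations_per_region = 50
station_region = np.repeat(np.arange(n_regions), stations_per_region)
stations = regional[station_region] + rng.normal(0, 0.5, (station_region.size, n_months))

full_mean = stations.mean(axis=0)                 # all 3000 stations
subset = stations[::stations_per_region]          # one station per region (60 total)
subset_mean = subset.mean(axis=0)
print(np.corrcoef(full_mean, subset_mean)[0, 1])  # correlation ~0.9
```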
What then is the benefit of this effort? As stated above, more information is always useful, but knowing what to do about potentially problematic sitings is tricky. One would really like to know when a problem first arose, for instance – something that isn’t clear from a photograph taken today. If the station is moved now, there will be another potential artifact in the record. An argument could certainly be made that continuity of a series is more important for long term monitoring. A more convincing comparison, though, will be between the existing network and the (since 2001) Climate Reference Network from NCDC. However, that probably isn’t as much fun as driving around the country taking snapshots.
Hank Roberts says
> these sites must be properly audited, or the data sets must be discarded.
And you’re the decider?
Andrew Dodds says
Just as an aside..
Urban areas (apparently from a google search, anyway) take up around 1% of the area of the planet as a rounded value. So a UHI of 3K would average out globally as 0.03K, or around 5% of the total AGW effect.
I suspect that the number above is an overestimate, but the conclusion would be that by removing UHI from the records we will be making a slight underestimate of global anthropogenic temperature increases.
Nick Gotts says
Re #98 [“See! Even the AGW believers admit the surface temperature site data is worthless. It would be absurd to take any action while this huge uncertainty remains unresolved.”
We’re not doing anything serious about AGW at present anyways, so we might as well improve the data until we do, if we do, decide to act.]
You (deliberately?) miss the point. The rising temperature trend is abundantly clear from multiple lines of evidence. The main cause is known with a high degree of confidence. There are those who, for their own selfish reasons or from ideological conviction, continue to deny these facts. They will use any means they can to delay and obstruct the necessary action.
Eli Rabett says
Dan Hughes in #95 asks why there are not error bars on the points in a graph of yearly average temperatures from a single station in the GISTEMP data set. Perhaps if he looked at the way the data are gathered into the GISTEMP gridded temperature record (in detail, of course) he would understand why.
ray ladbury says
I seem to remember a hearing conducted in the House shortly after the Republican takeover of Congress in 1994 where the scientific basis of climate change was attacked. The hearing was co-chaired by Congressmen Tom DeLay and John Doolittle. Doolittle and DeLay – it would appear that the agenda has not changed much.
Most of science is not really about radical new discoveries, but rather about how to use imperfect data to make those new discoveries. Anyone who is alarmed by imperfections in a dataset probably hasn’t done much science. Why should we think that they are competent to carry out an analysis of the systematic errors contributed by station siting? Even a relatively simple jackknifing analysis to look for potential problem stations (probably much more profitable than a photo campaign) would represent a considerable level of effort given that it would have to be conducted over time and globally.
Then there is the question of what it would accomplish. There are well developed procedures for removing artifacts, glitches, etc. The trends shown by the land stations are consistent with every other line of evidence.
I am not saying that the effort to document station placement should not be done, but I wouldn’t assign it a high priority. Moreover, I certainly would not hold out any hope that it will change the conclusions of the IPCC analysis. Since there is zero indication that there is any problem with the conclusions drawn to date, policy should be made on the basis of those conclusions and not delayed for a detour down a rabbit hole.
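The jackknifing analysis mentioned above is easy to sketch: recompute the network-mean trend with each station left out in turn, and see which exclusions move the answer. A minimal, hypothetical example (made-up data and function names, not anyone’s operational code):

```python
import numpy as np

def network_trend(station_series, years):
    """Linear trend (deg C per year) of the mean of all station anomaly series."""
    return np.polyfit(years, station_series.mean(axis=0), 1)[0]

def jackknife_trends(station_series, years):
    """Trend recomputed with each station excluded in turn."""
    n = station_series.shape[0]
    return np.array([network_trend(np.delete(station_series, i, axis=0), years)
                     for i in range(n)])

# hypothetical network: 25 stations, one with a spurious +1 C jump in 1980
rng = np.random.default_rng(4)
years = np.arange(1950, 2007)
series = 0.015 * (years - years[0]) + rng.normal(0, 0.3, (25, years.size))
series[0] += (years >= 1980) * 1.0

base = network_trend(series, years)
jack = jackknife_trends(series, years)
print(np.argmax(np.abs(jack - base)))   # station 0 stands out as the influential one
```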
Barton Paul Levenson says
[[Why not use the methods we use to measure these distant planets to measure the Earth?]]
Because we’re standing on it.
Seriously, there is a lot of satellite data on the Earth’s climate. What makes you think there isn’t?
Barton Paul Levenson says
[[[Response: I’ll have to admit you are persistent, but you are simply wrong. But since you don’t want to do the calculation, let’s throw it out there into the blogosphere and see what anyone else says. Is there a significant trend here or not? (PS. trend according to Numerical Recipes is 0.91 deg C/century, SD of trend 0.24). – gavin]]]
Gavin — I took the annual figures, eliminated the years with no figures (1895, 1907, 1908, and 1970), and regressed the annual figures on the year for the rest of them. With N = 92, I got 14% of variance accounted for by trend alone, and it was statistically significant at better than the 99.9% level, with t = 3.8 for the year variable. The slope was 0.009105 deg C per year, with a 95% confidence interval of 0.004354 to 0.013856. In short, there has been warming at this station and it has been significant. The guy you’re replying to doesn’t understand elementary statistics.
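For anyone who wants to repeat this kind of calculation on another station, the recipe is just ordinary least squares of annual values on year, with a t-based confidence interval on the slope. A sketch (the data here are a random placeholder, not the station record under discussion):

```python
import numpy as np
from scipy import stats

def annual_trend(years, temps):
    """OLS trend of annual values on year, with a 95% confidence interval,
    skipping any years with missing (NaN) data."""
    ok = ~np.isnan(temps)
    res = stats.linregress(years[ok], temps[ok])
    n = ok.sum()
    halfwidth = stats.t.ppf(0.975, n - 2) * res.stderr
    return res.slope, (res.slope - halfwidth, res.slope + halfwidth), res.pvalue

# placeholder data: replace with the actual annual series (deg C) for a station
years = np.arange(1900, 2000, dtype=float)
temps = 8.0 + 0.009 * (years - 1950) + np.random.default_rng(5).normal(0, 0.5, years.size)
temps[[7, 19]] = np.nan                       # a couple of missing years
slope, ci, p = annual_trend(years, temps)
print(f"trend {slope*100:.2f} C/century, 95% CI {ci[0]*100:.2f} to {ci[1]*100:.2f}, p={p:.2g}")
```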
Ender says
Paul G – “Dan, you are avoiding the issue. If surface temperature site data is being used by climate scientists, and it is, these sites must be properly audited, or the data sets must be discarded”
So go ahead and do it!!! I am sure that everyone would welcome better data if that is the result of your ‘audit’. If you are not prepared to do it then you, like the rest of the community, will have to make do with what they have.
“Not long to photograph the sites, that’s for sure. That this has not been carried out already on a regular basis by climate professionals using the data is astounding.”
Is that your idea of an audit? What would a photograph give you? I thought that you were asking for better quality data. I guess the climate professionals were doing something far more constructive instead.
Bishop Hill says
#102
I thought this was about microsite issues (which can affect any station) rather than UHI (affecting only urban ones).
John F. Pittman says
#94 Your contention is that all of these groups have endorsed the statement that “the actual claim of IPCC is that the effects of urban heat islands are likely small”? While the listed groups may agree with the conclusions of certain papers or even of the IPCC, could you help me find where they claim this? Of interest is WG1 chapter 2 of the IPCC report. But even the IPCC, despite its confidence in studies from 1990 and earlier showing a small effect, admits “However, greater urbanisation influences in future cannot be discounted”, which brings us to the comments of 2007. My quotes and comments are about the present: do the effects extend, and are they even real? Please note that I am asking whether they specifically weighed in with the conclusion that UHI has had a negligible effect, not whether they have signed on to the view that the IPCC or any other group has done a credible job.
Timothy Chase says
Auditing Stations
Looking through the above it is pretty obvious that “contrarians” wish to make auditing the stations the central issue so that “bad data can be thrown out.” The problem is that these sites are audited – repeatedly, and the conditions and readings they give determine how their readings are adjusted, weighted or filtered. However, data isn’t thrown out simply on the basis of location – or because one individual or another doesn’t like the reading it gives. There is a methodology which is designed to make use of as many data points as possible to achieve a higher level of accuracy than if the only measurements used were those considered pristine enough by the “standards of contrarians”, assuming such a beast existed. (Inline to #35, #57, #56, #97, #93)
But why exactly are they focusing on the auditing of that which is already subject to a fairly rigorous methodology for maintaining accuracy? Among those who are aware of this methodology, the only reason that comes to mind, for me at least, is that they wish to make a non-issue the central issue so that the real issue becomes peripheral: regional and global temperatures are rising – and the rate at which they are rising is accelerating.
On at least a couple of occasions it was pointed out that there are many other independent lines of evidence justifying the conclusion that the global average temperature is rising, and that it is rising dramatically. (#68, #71) However, this has been deemed irrelevant by those who demand that the stations be audited. And what auditing have they engaged in so far? Cherry-picking those stations which someone entirely unfamiliar with how rigorous scientific methodology has become – someone who might assume that scientists simply average all readings without even the most basic common sense – would think are the worst possible stations to include in the process. From what I can see, they have no more desire to improve a process which is in fact working quite well than they have to take into account the many other independent lines of evidence which corroborate the averages being obtained by means of a scientific methodology.
As I see it, their purpose is to shift the focus from the rising temperatures to cherry-picked stations as if this were the only evidence for rising temperatures, then to discredit the process by which regional and global trends in temperatures are identified so as to discredit the claim that temperatures are rising. Once this is done, they believe that they will no longer have to deny the trends in temperatures – because the question will rarely arise – at least for the time being.
It has been claimed before that applying price controls to an economy where the government is inflating the money supply to pay for programs is roughly equivalent to nailing the needle of a pressure gauge in place. If so, this would be roughly equivalent to throwing away the thermometer just as the temperatures start becoming dangerously high. Such actions do not postpone the negative consequences – those consequences simply become masked, temporarily, so that the trends leading to those consequences are not questioned or even recognized until it is too late.
ray ladbury says
All of the organizations listed support the consensus position that humans are largely responsible for the undeniable changes in climate we are seeing and that these changes represent a significant concern. If the UHI cast any doubt on that conclusion, it would not have such wide support.
This is not to say that the effect is unimportant. I suspect it will be very important when it comes to improving regional climate models and improving the extrapolation of global effects to the local level.
In order to understand the potential importance of the effect, let’s look at what it could do to our understanding of climate:
1) It will have zero effect on the global climate models, because
a) the constraints on these models are derived from other sources
b) the effect is known and there are methods for dealing with the errors it introduces
c) the effect it introduces is local, not global, so it cannot be responsible for the signal/trend we see, but would at most introduce noise into that signal
2) It will not alter the conclusion that the climate is changing or even the degree to which it is changing, because of c) above and because that conclusion is supported by multiple additional lines of evidence, all of which are consistent with the trends shown in the land stations.
The attempts to chip away at the evidence for climate change are akin to the efforts of creationists to chip away a mountain to see if they can find human and dinosaur footprints side by side. It is the aggregate of the evidence that supports climate change. Indeed it is the only hypothesis that can explain that evidence in a self-consistent fashion.
Science is a methodology for drawing reliable conclusions from imperfect data. It works. If you want to ponder perfection, may I recommend the study of theology. If you want to draw reliable conclusions that can make a difference in the human condition now and in the future, science is your best bet.
Harold Pierce Jr says
RE #100 So what? What counts is the natural variation of temperature. The trend is just a possible reflection of the climate recovering from the Little Ice Age, complete recovery from which occurred at or about 1980.
I’m doing most of these calculations manually. I claim that you don’t have to crunch enormous gobs of data when a minimal set will do. Go to the USHCN and crunch data from Telluride, CO. It is not that easy to find temp records from remote sites that go back before 1900.
[Response: Well, we’re making progress. “there is no trend” goes to “there is a trend, but it’s natural variability” – only two more stages to go! – gavin]
[Response: PS. Telluride data also shows a significant trend: 1.0 deg C/century (+/- 0.6 95% conf). Your point? – gavin]
Dick Veldkamp says
Re #67, #77, #83, #86, #100, #107 Is there a trend?
Just for the hell of it I redid the calculation on the Quatsino data. Earlier results confirmed: trend = 0.0091 deg/year, significance p = 0.00026, correlation r = 0.37.
Seems like we are moving to a consensus on this one.
Harold Pierce Jr says
RE #100 The records go up to the present. However, there is something quirky about access. If you are logged on and try to access a temperature record, the computer seems to choke. Log off and try again. Like magic, the records suddenly appear.
steven mosher says
thanks gavin! I’ll see if anyone over at CA with better math skills than mine cares to have a go at it. Hard as it is for some to believe, there is a class of folks who just like to double check, to understand things for themselves. Not deniers. Not believers. In the middle. One more thing, the 1200km figure: is there a document that shows which stations are associated with which?
steven mosher says
Gavin, one more request. I made an error in specifying the grid of interest: 35N-40N, 120W-125W. In my prev. post I said 115W, sorry. Can I get the linear trend for the right grid, 35N-40N, 120W-125W? My mistake.
[Response: same thing. Large scale anomalies etc…. – gavin]
Jim Cripwell says
In response to #103, and if Gavin will allow this message, let me try again, and I promise I will try not to make a fool of myself this time. But the question of the trend in average global temperatures (AGTs) is a subject which fascinates me. I hope no-one denies that over the last X years (where X is greater than 30), AGTs have been rising. There are two rival ideas as to why this is happening: a dramatic increase in the amount of CO2 in the atmosphere, as proposed by the proponents of AGW; and extraterrestrial factors, notably the sun, as proposed by the deniers.
A few words on the future of these two ideas. If the UNFCCC meeting in Bali this December does not agree on some form of hard cap on global CO2 emissions, then the concentration of CO2 in the atmosphere is going to go on rising at unprecedented rates, and hence AGTs will go on rising at an equally unprecedented rate. If solar cycle 24 does not start until September 2008, and if cycle 25 is as low as predicted (a level unseen since just after the Maunder minimum), then average global temperatures are going to plummet.
The question is, what is going to happen in the 21st century? There seem to be two answers; either temperatures are going to rise at an average annual rate as predicted by the IPCC and the GCMs, or temperatures are going to reach a maximum and then decline. If the latter scenario comes, then looking back with 20/20 hindsight, the start of the cooling period will be seen to be the maximum of the warming period.
So to me the question that needs to be answered is not whether temperatures have been rising; they certainly have. Rather, here in 2007, are temperatures rising as fast as the GCMs predict? This is a much more difficult question to answer. The data is extremely noisy. We have at least 4 ways of analyzing the same temperature data, which come up with different numbers for AGTs, and whose methodology is not necessarily completely transparent. For other indicators – glacial retreat, sea level, arctic ice extent, etc. – the data is equally noisy, and it is difficult having a sensible discussion without the inevitable cherry-picking on both sides of the argument. All I can say is that my funny internal feelings tell me that there is no hard data to show that average global temperatures, in 2007, are rising as fast as the GCMs predict. But if I am asked to defend this position scientifically, I cannot. Can anyone provide hard data which demonstrates that, here in 2007, average global temperatures are rising as fast as the GCMs predict?
pat n says
I worked with climate data in hydrologic model development and calibration at a NOAA National Weather Service (NWS) River Forecast Center (RFC) from 1976-2005.
The NWS uses software for analysis of inconsistencies in data due to changes in station locations, vegetation and other characteristics that influence temperature and precipitation readings. The software is used in the selection of climate stations for use in river flow calibration. Quality control and editing of temperature data are performed by RFC staff and contracted workers in deriving representative mean areal precipitation (MAP) time-series for sub-basins, which are used in calibrating the river basin model parameters that are then input to the operational hydrologic models used by RFC staff in flood forecasting and extended hydrologic guidance.
NWS management did not allow work in evaluating Urban Heat Island (UHI), mainly because of the stigma of being related to what NWS viewed as the political and controversial nature of the climate change / global warming subject.
I was removed by NOAA NWS for doing research on climate and hydrologic change on July 15, 2005. I still continued to evaluate climate station data and historical and operational river flow data in my tracking of climate warming in the US, including Alaska, for personal interest.
Although I no longer have access to the double-mass and consistency plotting software used at NWS RFCs, I have an approach, which I believe to be adequate, for finding what I believe are the best temperature station records to use, and I do quality control on the data I use in plots, viewable by the public. My approach is partially documented in my paper at the link below.
http://www.mnforsustain.org/climate_snowmelt_dewpoints_minnesota_neuman.htm
Links to temperature data plots, showing temperature and snowmelt runoff data plots at stations in the US, including Alaska, are in #99, above.
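For readers unfamiliar with the double-mass consistency check mentioned above, the idea is to plot the cumulative sum of a candidate station against the cumulative sum of a reference built from surrounding stations; a change in slope flags the point at which the candidate becomes inconsistent. A bare-bones sketch with synthetic data (my own illustration, not the NWS software pat n describes):

```python
import numpy as np

def double_mass(candidate, reference):
    """Cumulative candidate vs cumulative reference; a change in slope between
    the two curves indicates a possible inconsistency in the candidate record."""
    return np.cumsum(reference), np.cumsum(candidate)

# synthetic annual precipitation: the candidate gauge is moved in year 30
# and afterwards catches only 80% of what it should
rng = np.random.default_rng(6)
reference = rng.normal(1000, 100, 60)            # reference (mean of neighbours), mm/yr
candidate = reference.copy()
candidate[30:] *= 0.8

cx, cy = double_mass(candidate, reference)
slope_before = (cy[29] - cy[0]) / (cx[29] - cx[0])
slope_after = (cy[-1] - cy[30]) / (cx[-1] - cx[30])
print(slope_before, slope_after)                 # ~1.0 before the move, ~0.8 after
```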
tamino says
Re: #95 (Dan Hughes)
Yes.
The data themselves constrain the size of the errors present. For example, we know that the errors are less than, say, 100 degrees C, because if they were that large, there would be dramatically more scatter in the data. The total variance in the data gives an upper limit to the errors, and using that upper limit we can compute a statistically reliable estimate of the significance of the trend.
Re: #105 (Ray Ladbury)
Exactly that is part of the procedures documented in Hansen et al. (1999 and 2001).
Re: #107 (BPL)
Quite right.
I did the same with the monthly figures (N = 1,111). I included the effect of autocorrelation, and got a slope of 0.00902 with 95% confidence limits 0.00516 to 0.01288, significant at better than the 99.9% level. There has indeed been a significant warming at this location.
But more detailed examination shows that the trend is not actually linear. The location warmed to a peak in 1942, declined to a low around 1972, and since that time has warmed consistently and rapidly. The trend from 1972 to the present is at a rate of 0.0779 deg.C/yr (that’s about 4 times faster than the global rate) with 95% confidence limits 0.0414 to 0.1143, again significant at greater than 99.9% confidence.
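The autocorrelation adjustment mentioned here is worth spelling out, since monthly anomalies are serially correlated and naive confidence intervals come out too narrow. One common approach reduces the effective sample size using the lag-1 autocorrelation of the residuals; a generic sketch (my own, not tamino’s actual code):

```python
import numpy as np
from scipy import stats

def trend_with_autocorrelation(t, y):
    """OLS slope with its uncertainty inflated for lag-1 autocorrelation of the
    residuals, using the usual effective-sample-size factor (1 - r1) / (1 + r1)."""
    res = stats.linregress(t, y)
    resid = y - (res.intercept + res.slope * t)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = len(y) * (1 - r1) / (1 + r1)
    se_adj = res.stderr * np.sqrt((len(y) - 2) / max(n_eff - 2, 1))
    halfwidth = stats.t.ppf(0.975, max(n_eff - 2, 1)) * se_adj
    return res.slope, halfwidth

# example: red-noise monthly anomalies around a 0.009 C/yr trend
rng = np.random.default_rng(7)
months = np.arange(1200)
noise = np.zeros(1200)
for i in range(1, 1200):
    noise[i] = 0.5 * noise[i - 1] + rng.normal(0, 0.5)   # AR(1) "weather" noise
y = (0.009 / 12.0) * months + noise
slope, ci = trend_with_autocorrelation(months, y)
print(f"{slope*12:.4f} +/- {ci*12:.4f} C/yr")
```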
Boris says
#106:
Don’t feel bad, a lot of people missed my sarcasm.
The point I was making is that the contrarian/sceptic crowd seem to accept that Mars, Neptune and Pluto are warming without much question. Yet the warming of the Earth is somehow questionable. Anthony Watts posted about Neptune and Mars warming as some sort of solar proxy (even though he knows that solar trends have been flat since the 1950s – he published a graph on his blog). In fact, Watts says:
“So we have three planets now with a warming trend; Earth, Mars, and Neptune. That’s not an insignificant coincidence.”
It seems odd to accept such a paucity of data on Neptune and Mars while questioning the vast amount of data on global temperature, and to use his site to suggest that poor siting issues derail global warming completely (see my #41). The “audit” of surfacestations is motivated by political bent more than scientific inquiry.
Dick Veldkamp says
Re: Predictions by GCMs
Apologies, this is somewhat OT. In a recent discussion about GCMs I was challenged to provide some GCM output, in particular a comparison of model and actual rainfall in the Sudan over the 20th century.
Of course GCMs are not capable of making local predictions, but Sudan (2.5 million sq km) comprises 16 cells or so (in my ClimatePrediction model), so I suppose numbers found for the country as a whole must have some meaning.
If so, does anybody know a place where I could find detailed comparisons of local time series? What I tend to find on the net are global comparisons.
John F. Pittman says
#112 “If the UHI cast any doubt on that conclusion, it would not have such wide support.” I do not have an opinion on this. I asked whether they specifically responded to the question of UHI influence. Your lead, “All of the organizations listed support the consensus position that humans are largely responsible for the undeniable changes in climate we are seeing and that these changes represent a significant concern”, does not necessarily have a relationship to my specific question. It appears you have offered me an assumption. Of note, the use of “any” is not recommended. I understand what you mean, but truthfully, if UHI cast only 10% doubt, would you expect it to lose such wide support? I believe it would still have about 99% of the present support, because most would realize 90% is still a great fraction explained.
I have to admit that your 1, a, b, c, 2 is typical of the arguments I see. But I have some issues with them; whether they are real may be a matter of wording, or even a matter of findings. I have not seen the information. Of the information I have seen, recent archeological finds indicate that without doubt recent temperatures are approaching or equal to periods of the higher, if not highest, temperatures for up to about 12000 years. Suppose you do not want this to occur and decide to do something about it. Whether you have a range of .6C and need to do .1C may change if the range is .5C, and so may whether you still need a .1C change or not. It also applies whether you think that man is causing the problem or the sun plus man.
Of interest is your claim of “a) the constraints on these models are derived from other sources”. With that being the case, one can see that correctly measuring this temperature difference will either make the goal easier to reach or indicate that we have a much harder goal than thought. But if the models have been “derived” from other sources and would not be affected, that would indicate they are not useful – on which I do not have an opinion.
#2 is, as far as I can tell, incorrect. Assume it is somehow shown that the UHI is .2C of the .6C, and that it all occurred in the decade of 1996 to 2006, indicating that only the most modest of the models was close to being correct, that all those models so rigorously derived from other sources had errors of 33% for a decade and a cumulative error of 2C over a century, and that humans needed to be concerned with a .4C TOTAL change. To say that this would not alter the way we look at either temperature changes or model predictions would be incorrect. It would also be convincing evidence that we need to do better at fundamental measurements. After all, I would hope that these other derived sources also have quality data and relationships; otherwise it is GIGO from a computer.
James says
Re #120: [The trend from 1972 to the present…]
Perhaps a graph of trends would be useful. That is, starting from the first year the data was collected, compute the trend to the present, then do the same starting at the second year, third year, and so on. Or perhaps some other measure that would show any rate of change of the trend.
I’ll leave it to you to figure out error limits and such. I don’t understand statistics all that well, but unlike some people, I at least know that I don’t :-)
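James’s suggestion is straightforward to compute: fit a trend from every candidate start year to the end of the record. A quick sketch, with synthetic data standing in for a real station series:

```python
import numpy as np

def trends_from_each_start(years, temps, min_length=20):
    """Trend (deg C/yr) from each possible start year to the end of the record.
    Very short segments are skipped because their trends are essentially noise."""
    out = {}
    for i in range(len(years) - min_length + 1):
        out[years[i]] = np.polyfit(years[i:], temps[i:], 1)[0]
    return out

# stand-in data: roughly flat to 1970, then warming at 0.02 C/yr
years = np.arange(1900, 2007)
rng = np.random.default_rng(8)
temps = np.where(years < 1970, 0.0, 0.02 * (years - 1970)) + rng.normal(0, 0.2, years.size)
for start, slope in list(trends_from_each_start(years, temps).items())[::20]:
    print(start, round(slope * 100, 1), "C/century")
```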
ray ladbury says
Re 123. John said “Assume it is somehow shown that the UHI is .2C of the .6C, and that it all occurred in the decade of 1996 to 2006, indicating that only the most modest of the models was close to being correct, that all those models so rigorously derived from other sources had errors of 33% for a decade and a cumulative error of 2C over a century, and that humans needed to be concerned with a .4C TOTAL change.”
Well, first there would have to be a model that differed from the others by 33%. I don’t think there is–but I’m willing to be wrong. Second, how do a bunch of KNOWN local effects, which are known and effectively dealt with by techniques currently employed, produce a GLOBAL signal? People have looked at the signal even without urban stations–guess what, still there. Moreover, the trend agrees with every other indicator!
John, this is not a fragile signal. It won’t go away or even diminish significantly as a result of subtracting out a couple of stations. I know it sounds reasonable to derive the data from only the most pristine of locations, but that is not necessarily the best solution. Actually, I suspect that many calling most loudly for a “cleanup” know this, and that their real motivation is to aggravate doubts among the uninformed with a few nonrepresentative pictures. Indeed, this is what is already being done with the photos gathered so far.
Matt says
#103 Nick: You (deliberately?) miss the point. The rising temperature trend is abundantly clear from multiple lines of evidence. The main cause is known with a high degree of confidence. There are those who, for their own selfish reasons or from ideological conviction, continue to deny these facts. They will use any means they can to delay and obstruct the necessary action.
I think the US has the most complete monitoring network in the world. Take a look at the raw data from the network showing all 1200 US USHCN stations: much of the US has cooled over the last 100 years.
Then compare that with the same data after all the corrections and adjustments are applied.
To my eye, the raw data from all the networks show considerable cooling in the US over the last 100 years. The adjusted data show considerable warming. Decisions about how to adjust were largely made by humans sitting in an office, not out in the field. If Anthony Watts’ study is the first validation of the adjustment procedure, isn’t that a good thing? How many have validated the adjustment procedure? Did the peer review effort include field trips to visit sites and confirm, in a spot check of 10 sites, that at least 90% were adjusted correctly?
[Response: Most of the adjustments you mention are for Time of Observation and station move biases and presumably you are not suggesting that known problems not be corrected? However, you are missing the fundamental point, the gridded data (which attempt to correct for UHI etc.) show a) much smaller trends than the individual station hot spots that jump out of your first figure, and b) clearly reflect the fact that the south east US has in fact cooled. Thus to what extent do you claim that the gridded products do not reflect reality? – gavin]
bigcitylib says
Re. #106,
the whole “warming on other planets” thing is such a bunch of baloney. But the best way to kill that argument quickly is to point out that if the phenomenon were truly solar related, it should apply to ALL of the planets. However, you can tell people that, as a matter of scientific fact
(http://www.boulder.swri.edu/~layoung/eprint/ur149/Young2001Uranus.pdf)
…it’s getting colder on Ur Anus.
tamino says
Re: #124 (James)
I’ve graphed the 5-year averages, as well as a wavelet smooth on a 5-year timescale (both of which give a pretty good picture of the overall trend), at this post in my blog. The post is on another topic entirely, but if you scroll down to look for the “UPDATE UPDATE UPDATE” then you’ll find the graphs.
On an earlier discussion topic: for those who want to see an example of the mistaken assumptions at work in the blogosphere, here is an example of complete and utter denial of the validity of the thermometer record.
steven mosher says
re 89. As always, Ray is on target with his comments and analysis.
This is what ray wrote:
“It really only makes sense to eliminate stations if they give consistently bad data.”
Question: how does one tell what is “bad data” without a standard, especially if many of the sites are impaired? I will keep this simple, because I am slow and there is a lot you can teach me, Ray. WMO has standards for siting. CRN has standards for siting.
FOR EXAMPLE, don’t put the sensor on a roof top. WHY? Because they studied this and the data from this kind of site is bad. Here is another example: don’t put the site on a slope. WHY? Because we studied this and the data is bad (sun exposure, Ray). Here is another example: don’t put the site over PAVEMENT. Why? Hmmm…
What do you think, Ray? Do you think that Karl and NOAA and the WMO know what matters in siting or not? I don’t know a thing about microsite issues like multipath, waste heat, evapotranspiration, and wind shelter. But I trust Karl, NOAA, the WMO. Bad siting leads to bad data.
If bad siting (sensors in the shade, under a tree, by a transformer, next to air conditioner exhaust) didn’t matter, if bad data could be magically rooted out or adjusted for by statistical techniques, then why expend all the time and effort on specifying siting criteria?
The consensus in siting science says: Don’t place a land surface temp sensor NEXT TO AN INCINERATOR.
Some people seem to adopt the following logic: we will accept a temp sensor next to an incinerator until somebody else proves it is a problem. I recall a funny cartoon with three monkeys… one has his hands over his eyes…
Now, the ray continued to shine:
“If the data are oversampled, any anomalies will be identifiable by rather simple analysis.”
Actually, some have started this analysis. I suggested that deviations (at a station level) from global or grid level Tmin trends could be an indicator of site impairment.
Very simply, one could hypothesize that site impairment would hit the Tmin record more severely than the Tmax record, narrowing the diurnal range and consequently raising the mean. A quick look at the data suggested this might be an interesting signal to look at. But, for now, we will just stick to something every visitor to RC can see for themselves (google GISTEMP).
So, we think anomalies are easily spotted? OK. Go to GISTEMP. Select the site at ORLAND, CA. It follows WMO and CRN guidelines (photo-verified). Plot its temp.
Now, search for MARYSVILLE, CA. Plot its temp. Hmm.
One site follows the guidelines; one site does not.
One site shows warming; one site does not. I’m a curious fellow. Which data is “bad”? Now, There are other sites in the grid that also break the CRN rules and WMO rules. These sites look like Marysville in regards to temp records. Funny how the sites located by pavement and buildings and wind breaks have “similar” trends. On the other hand there are PRECIOUS FEW sites in the grid that follow the rules. Orland is one. Willows is another. Lake Spaulding used to be a good site up to a couple of years ago.
So, consensus would say… toss the sites that don’t follow the rules. Now, say that 24 of the 25 sites in the grid break the rules. Now say those 24 show a positive linear trend of .8C since 1900 and the 1 site that follows the siting guidelines doesn’t show this trend. Which site is anomalous? Put another way, if 1 site out of 25 follows the siting guidelines and 24 don’t, which site do you think will be identified as an anomaly by merely looking at the data file?
Bottom line. Document the sites. Delete those that break siting guidelines. Let the data fall where it may. It’s an oversampled grid after all.
Further illumination:
“No system is ever perfect. The question you have to ask yourself is whether any improvement to the system will make a significant difference. ”
Agreed. No system is perfect. Delete the class 5 sites.
Make it better. Second, I don’t have to ask myself if an improvement will make it “significantly” different.
First, the cost of “improving” the data is ZERO: don’t include bad sites. Second, the burden of proof is backwards in your analysis. The stations don’t meet standards. The instrument has not been calibrated. Show that INCLUDING them has no impact on trend or error.
Imagine a drug maker who said, “Prove my drug is ineffective!” Finally, NOAA has already said that an improved network is required.
Further illumination:
“I suspect that it would not for several reasons. First, as I said, in an oversampled system, anomalies are easy to identify.”
Anomalies are easy to identify. If you look at the site photographs and temp records you will find that the sites that comply with siting guidelines are the anomalies. (Psst, you think bad sites are anomalies; it might be the other way round, Ray… DOH)
Continuing:
“Second, we are looking at global trends, so unless there is a systematic error in siting/readings etc. bad stations will at worst produce noise on the overall trend. ”
Really? Well, that would depend on the number of bad stations (we have no clue), the magnitude of the error (we have no clue), and any directionality in the error (we have no clue).
So, best case, bad stations create a noise farm. This is bad for climate science. Fix it. Worst case, the land record might have a small positive bias, a minor annoyance but utterly correctable if proper QA is employed. Put QUALITY DATA IN, rather than testing for JUNK DATA after you put it in. Nobody thinks that attending to quality is a bad thing. We have a QA consensus. And only a few folks in this project think that the warming will go away. Too many independent sources confirm the global increase. The issue is quality, reliability, and accuracy. Don’t farm the noise if you don’t have to.
And…
“Even if a particular bad station had a paucity of good stations around it, it is unlikely that it would affect the global trend.”
You have a supposition about global trends. You think this siting issue won’t matter. That’s because you think bad stations are the exception and not the rule. This is a testable hypothesis. This is what we are investigating. How about you take some pictures for the project? We have 130 volunteers; 131 would be GREAT!
You conclude:
“Should we look hard at station site quality for future stations. You bet! Should we have any doubt about the trends seen to date. No. ”
Well, we agree. The “trend” upwards is supported by many independent threads (SST trends, troposphere trends). The EXISTENCE of a trend is not our issue (OK, my issue). The issue is quality, the magnitude of the trend, the error of the trend, and the procedure for incorporating the CRN into Goddard products in the future. So, you misconstrue the target of the doubt.
We should have doubt. Doubt is good. Denial is another thing altogether.
Dan Hughes says
re #104 and #120
Thanks Eli, I have been working my way through the papers that Gavin linked in his post. Do you have a pointer to reports and papers that might contain the actual equations and area data used in those calculations? Pointers to any software used in the calculations are of special interest.
Due to an oversight on my part I did not state my question precisely enough. I wanted to ask about the precision of the instruments, the accuracy with which the device can be read, recording the data, and calculations associated with reducing the data to its reported form. So far the papers seem kind of light on these, but maybe I will run across those discussions later in the papers.
Thanks too, tamino. I am not familiar with that concept. Can you point me to a textbook that contains the details? An online discussion would be more helpful, actually. I am especially interested in the mathematical details outlined in this sentence: “The total variance in the data gives an upper limit to the errors, and using that upper limit we can compute a statistically reliable estimate of the significance of the trend.” BTW, does that concept include discussion of the number of significant digits available from recorded data?
Thanks again
pat n says
Re: 89, 119
Regional changes in temperatures are more informative to me than globally averaged temperatures.
I show regional trends in temperature data based on US climate station data (1890s-2007). Temperature changes indicate greenhouse gas driven global warming with strong warming trends in the mid-high latitudes and elevations and the greatest diurnal increases in daily minimums in winter months.
Streamflow data support the warming by showing trends toward earlier-in-the-year snowmelt runoff on rivers in the Upper Midwest and northern Great Plains, beginning in the 1970s and continuing to the present.
John F. Pittman says
#125 You now seem to be evolving somewhat towards my position. Note that I asked you whether the mainstream science organizations had specifically agreed that UHI was known to be negligible, versus having only agreed that the IPCC is basically correct. You seem to be avoiding the question. I will assume, unless shown otherwise, that my assumption about these organizations is true.
You ask “Second, how do a bunch of KNOWN local effects, which are known and effectively dealt with by techniques currently employed, produce a GLOBAL signal?” There are two fundamental problems with this statement. You claim that KNOWN local effects are effectively dealt with by techniques currently employed; I find in peer reviewed/cited literature that this statement is not considered correct. You also speak of GLOBAL, while as far as I can tell from the IPCC, the global record is made up of many local measurements. I have not made assumptions about their problems, if any, because I have not reviewed them. However, why assume they are correct, especially if one sees in peer reviewed literature, and from obvious data for a particular locale, the USA, that they have not been effectively explained and other data indicate problems? Why not look instead of assuming?
That the trend agrees with every other indicator (I don’t know what an indicator necessarily is… it was not defined by IPCC… lol) does not address my comment about accuracy at all. As far as I know, those indicators are either based on temperature or based on measurements that do not directly relate to the temperature that is GLOBAL in your comment. Take an indicator like glacier retreat, which some say is an indicator: while it might indicate warming, or a lack of precipitation, it does not measure incorrect temperature measurements in the USA.
You said “Actually, I suspect that many calling most loudly for a “cleanup” know this, and that their real motivation is to aggravate doubts among the uninformed with a few nonrepresentative pictures. Indeed, this is what is already being done with the photos gathered so far.” Though this comment may be true, I have no opinion on this since I can’t read minds. I pointed out, using the assumptions I made, that an UHI effect could be important. You have done little to indicate that my assumptions or conclusions about a UHI effect were necessarily wrong which is what my #123 was about.
Jim Eager says
Re 113 Harold Pierce Jr: “The trend is just a possible reflection of the climate recovering from the Little Ice Age. Complete recovery from which occurred at or about 1980.”
You do know that the “Little Ice Age” was not actually an ice age, don’t you?
And shouldn’t that be 1880?
ray ladbury says
Steven Mosher, a network that collects error-free data is not necessarily better than a network that collects data with errors that are well understood. There are several fundamental problems with your approach:
1) You are looking at stations individually, rather than as part of a network. Information theory suggests that if our oversampling is at least 3:1, we can have up to 1/3 of our stations be totally wrong with no real loss of information – and those are random errors (see the sketch at the end of this comment). The siting criteria are excellent guidelines for single stations, and I would not site any single new station that did not comply (unless there were an overriding reason). Most of the stations that violate the siting criteria, however, are old, with a long history. This is important, because:
2) On the other hand, systematic errors can be characterized and bounded (thus determining what weight to apply) or the result corrected. Such studies provide important information in and of themselves (how do you think the siting criteria were developed?).
3) You give no consideration to what kind of error a particular violation would produce – either prior to or after corrections are applied.
4) In essence, jackknifing studies already do what you are asking for – look at the effect of excluding single stations from the analysis.
5) Your methods have a very high risk of being misappropriated by denialists to cast unwarranted doubt on a result that is incontrovertible – indeed, that is how they have been used to date.
6) There is no evidence of a systematic problem with the data or procedures, and plenty of evidence to the contrary.
So, Steven, if it were not for 5), I would consider your efforts to be at best a welcome volunteer effort and at worst an innocuous waste of time. However, there are plenty of actors out there with very deep pockets and far less than simon-pure motives. They have already demonstrated that they will misuse any fact (warming on Mars, increasing snowfall inland in the Antarctic…) to sow doubt in the minds of the nonexpert. It would be naive to expect them to give your effort a pass.
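As a toy illustration of the redundancy point in (1) above (synthetic data, arbitrary numbers): corrupt a third of the stations with random-sign one-degree jumps at random dates, and the network-mean trend barely moves, because the errors are diluted and largely cancel.

```python
import numpy as np

# Corrupt a third of the stations with random-sign 1 C jumps at random dates and
# compare the network-mean trend with and without the corruption.
rng = np.random.default_rng(9)
years = np.arange(1950, 2007)
n_stations = 300
clean = 0.015 * (years - years[0]) + rng.normal(0, 0.3, (n_stations, years.size))

corrupted = clean.copy()
bad = rng.choice(n_stations, n_stations // 3, replace=False)
for i in bad:
    jump_year = rng.integers(1960, 2000)
    corrupted[i] += rng.choice([-1.0, 1.0]) * (years >= jump_year)

trend = lambda s: np.polyfit(years, s.mean(axis=0), 1)[0] * 100
print(f"clean: {trend(clean):.2f} C/century, corrupted: {trend(corrupted):.2f} C/century")
```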
Steve Bloom says
Re #78: “So, to answer your question, the CRN, I would suspect, would REJECT a class 5 site. At worst, they would [keep] a RECORD of its classification and photos. With the historical network we have neither. The point is the document shows people how to rate a site for INCLUSION in the CRN. Bottom line: Marysville would be excluded. Lodi would be excluded. Lake Spaulding would be excluded. Tahoe City would be excluded. In fact, none of these sites or locations nearby are included in the CRN.”
Well, here and in the rest of your comment you seem to have a lot of faith in Tom Karl and the CRN standards. If what you say is correct, then presumably they have a plan for abandoning the sites you have defined as “bad.” I’m probably just poor at searching, but I can’t seem to locate that plan. Pointer? Alternatively, as others have suggested, perhaps even these “bad” sites provide some useful data and one result of the CRN will be to improve that data. BTW, Lodi isn’t all that far from the Merced CRN site.
Re #129: “Now, There are other sites in the grid that also break the CRN rules and WMO rules.” Very likely *all* of the sites break the CRN rules in some degree.
Re #130: Dan, your stated expertise in quality assurance so greatly exceeds that of everyone here that I don’t see how you could rely on pointers from anyone else. Independent research is the answer. Let us know how that turns out.
Matt says
#130 I wanted to ask about the precision of the instruments, the accuracy with which the device can be read, recording the data, and calculations associated with reducing the data to its reported form.
http://www.srh.noaa.gov/ohx/dad/coop/EQUIPMENT.pdf notes that if an MMTS agrees with a thermometer within a degree, then the MMTS unit is good. Also, observers record temps only to the nearest degree.
So, there is actually a 3-degree window in which a measurement is valid. For example, the actual temp could be 71.5, and the recorded temp could be 70.0 on one unit and 73 degrees on the unit right next to it. This would be within specification.
Errors on a specific device will remain very close to constant over time, but it’s possible for a replacement device to measure almost 1.5 degrees higher or lower than the previous device and still be acceptable, according to my reading.
Given the size of the network, almost all of these errors will cancel each other out.
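Matt’s cancellation intuition is easy to check with a quick simulation (made-up numbers): give every station its own fixed calibration offset of up to ±1 °C plus rounding to the nearest degree, and the error in the network-average anomaly comes out to only a few hundredths of a degree – and because anomalies are taken relative to each station’s own mean, a constant offset drops out entirely.

```python
import numpy as np

rng = np.random.default_rng(10)
n_stations, n_years = 1000, 50
true = rng.normal(15.0, 5.0, (n_stations, 1)) + rng.normal(0, 1.0, (n_stations, n_years))

offset = rng.uniform(-1.0, 1.0, (n_stations, 1))       # fixed per-instrument bias
measured = np.round(true + offset)                      # read to the nearest degree

# anomalies relative to each station's own mean remove the constant offsets
true_anom = true - true.mean(axis=1, keepdims=True)
meas_anom = measured - measured.mean(axis=1, keepdims=True)

err = np.abs(meas_anom.mean(axis=0) - true_anom.mean(axis=0))
print(err.max())    # network-mean anomaly error is a few hundredths of a degree
```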
Dan Hughes says
#136 Thanks Matt. I’ll check in over there.
#130 And Steve B, as you always do, you have made up words that I did not say, addressed an issue that I did not mention, and failed yet again to discuss any technical aspects.
Matt says
Gavin: Most of the adjustments you mention are for Time of Observation and station move biases and presumably you are not suggesting that known problems not be corrected? However, you are missing the fundamental point, the gridded data (which attempt to correct for UHI etc.) show a) much smaller trends than the individual station hot spots that jump out of your first figure, and b) clearly reflect the fact that the south east US has in fact cooled. Thus to what extent do you claim that the gridded products do not reflect reality? – gavin
Gavin, of course biases should be corrected. All significant biases should be corrected. And potential biases should be investigated. However, I’ll admit to being a bit suspicious that the raw record shows little warming and the adjusted record shows considerable warming. Bias correction can sometimes equal agenda injection. I think there are folks with an agenda on both sides of this argument, and history shows repeatedly that those in positions of trust (presidents, governments, doctors, transmission repair shops) frequently withhold information to make their case more convincing. It’s not lying, but it’s not being 100% transparent either. Scientists working on pharmaceuticals do it, scientists working for cigarette companies do it, and posters on this board do it. Why wouldn’t a climate scientist with an agenda do it? I worked at the USGS for a few years as an intern, and yes, there were folks there with an agenda. No budget, but they still had an agenda :) My attitude might sound cynical, but the population as a whole is just as skeptical when it comes to what scientists tell us. Frankly, the louder folks hear “the science is settled”, the more people go “yeah, right”.
I suspect the gridded maps largely reflect reality, though I also think that the extremes might be somewhat muted if 10% of stations are faulty because they are sitting next to an AC, parked car or BBQ grill.
I don’t quite understand why folks are upset that a private citizen, on his own dime and own time, is looking at this. If he finds something the first set of eyes missed, then great. If not, then Anthony Watts ends up that much smarter on the subject.
ray ladbury says
John, and Steve Mosher, OK, so you say you are going to carry out a scientific analysis of siting. So what is your hypothesis going in? At how many sites do you expect to find problems? What kind of problems do you expect to find? What sorts of errors do you anticipate that these problems will introduce to the database? What sorts of analyses and noise/error rejection procedures might be effective against these errors? Are there any types of errors you might expect to find against which no commonly used mitigation algorithm would be effective?
If you can answer all of these questions going into your investigation, you are doing science. Otherwise, you’re goin’ fishin’. In particular, I think you need to think about the implications of these stations being in a heavily oversampled network with a long temporal database.
Barton Paul Levenson says
[[“So we have three planets now with a warming trend; Earth, Mars, and Neptune. That’s not an insignificant coincidence.”]]
So how does he explain that Uranus is cooling?
http://www.boulder.swri.edu/~layoung/eprint/ur149/Young2001Uranus.pdf
Paul G says
= Comment #134 by ray ladbury =
= “6) There is no evidence of a systematic problem with the data or procedures, and plenty of evidence to the contrary.” =
And how do you know this ray? The small amount of photographic evidence available so far does indicate there is a problem of some degree, which requires further analysis to ascertain how serious the issue is. Sweeping the issue under the rug is not an option.
Timothy Chase says
Contrarian Doubt: causes and consequences
Matt (#136) responded to an earlier comment:
… for essentially the same reason that the larger the number of coin tosses one performs with an unbiased coin, the more likely the number of heads divided by the number of tosses will be one half.
Of course contrarians will point out that instruments at poorer sites will have a bias. But as tamino (#91) points out, this bias is corrected for; given the methodology employed, it is quite possible that removing the urban sites would actually result in a higher average temperature; and as Hansen points out (see tamino’s first reference in #93), the bias introduced by urban sites is quite negligible.
But why include them?
For the sake of consistency – as ray ladbury (#97) points out. Removing them would mean that we are no longer measuring temperature the same way, and as such would introduce new artifacts into the statistics so that the measurement from one year wouldn’t be directly comparable to the next. Similarly, replacing one station with another station would be replacing known errors which were already being taken into account previously with unknown errors and would suffer from the same sort of incommensurability.
Adding new sites with the appropriate precautions taken with respect to their location increases the number of data points, in essence paying for the additional noise which they introduce into the trends – in the same way that increasing the number of coin tosses leads to a heads to tosses ratio closer to one half. Simple replacement of older stations does not.
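To see the coin-toss point numerically (nothing more than the law of large numbers, with arbitrary toss counts):

```python
import random

random.seed(1)
# More tosses -> heads fraction closer to 1/2, just as more unbiased stations
# pull the network average closer to the true value.
for n in (10, 100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:7d} tosses: heads fraction = {heads / n:.4f}")
```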
Additionally, what actually matters most in terms of the trends is not so much the temperature in any given year, but the change in temperature from one year to the next. But by focusing on the location of one particular station or another and how its location may result in slightly lower or higher measurements, contrarian rhetoric obscures this essential issue in popular perceptions.
Likewise, Dan (#52) points out that the good majority of sites are in rural settings. But by focusing on urban settings, contrarian rhetoric further distorts popular perceptions.
Boris (#121) points out that contrarians are more than happy to accept the trends calculated for a few distant planets if it obscures the cause of the trends seen on Earth – even though the data we have on those trends carry a great deal more uncertainty (see Nicholar L’s #88), an explanation in terms of solar variability is not credible (ibid.), and solar irradiance has been flat since the 1950s (see Boris’ #121).
Dan (#52) also points out that the very same trends which we are seeing on land are showing up in temperature records at sea and in the atmosphere, and, as Spencer (#1) points out, in boreholes, and, as I have pointed out, in the ocean depths down to 1500 meters. Moreover, Ray Ladbury (#125) points out that we are seeing the same trends even when the urban areas are thrown out and we simply use the rural ones.
The trends we are seeing are not the result of urban heat islands. If they were, then the trends would be higher in the tropics than in the higher latitudes, as gavin points out in the essay itself.
Scientists include older urban sites not because they are ignorant of urban heat island effects, but because continuing to include them improves the accuracy of our identification of temperature trends. The contrarian’s purpose in focusing on urban heat islands is not to improve accuracy but to cast unreasonable doubt upon a process which is working quite well. Likewise, they prefer to debate urban heat island effects rather than to discuss the rising temperature trends, the other clear signs of rising temperatures, the positive feedbacks which are beginning to kick in, so that climate change will take on a life of its own independently of what we do in the future if changes are not made now (#111; “Storm World” post, comment #141), and what such climate change will imply for humanity as a whole (Curve manipulation, comment #74; A Saturated Gassy Argument, comment #116). They prefer a debate in which they can more easily manipulate public perception to their own ends rather than recognizing what is actually happening to our world, as the latter would demand actions which, given their nearsightedness, they would prefer to avoid.
unconvinced says
re #44;
John, nowhere in my post did I suggest that anyone is wrong or deceptive or ignorant or anything else. All I suggested was that having someone else go over your work to look for mistakes and/or clarify your implicit assumptions (ie make them explicit) was a good thing and that anyone who cares about the truth and the scientific ethos should not be upset that someone “dares to question” their work. Nor did I suggest that money should be diverted from important work to audit other work. If someone wants to spend time and money doing this audit work, then they may have an agenda or they may not – just as those who did the original research may have had an agenda or they may not (and once again, there is no implication that this is the case for any particular field, researcher or paper)
Now, don’t get me wrong here – I certainly understand that, like most people, you will be confident in your own work; you are, after all, the one closest to it, and therefore have the best understanding of exactly what was done, how it was done and why it was done. But that doesn’t mean it’s right, it also doesn’t mean that you didn’t make a mistake, and it certainly doesn’t mean that you took everything relevant into account – you’re only human, after all.
In my experience, taking the time to explain your work to someone else who is *not* closely involved in it is a highly valuable exercise, and one that, it seems to me, is not particularly popular in scientific circles. For better or worse (IMO worse), many scientists seem to become frustrated when asked to explain their work. That’s unfortunate IMO, and I would encourage you to try this out for yourself – in the effort to organise your thoughts in order to explain your work you will, in many cases – although not all – have an “Ah-ha!” moment, where it all “clicks together”. Of course, the person you are attempting to explain it to will probably end up rather frustrated with you as you run off to investigate your new insight, but that’s a small price to pay.

How is this relevant? Auditors ask pesky questions! They demand documentation! Yes, it’s annoying, but it’s also valuable on many fronts: it ensures you properly document all steps in your work, making it more “bulletproof”; and, as above, it makes you organise your thoughts in a different way, leading to new insights.
So, as per my original post, please don’t see just the negatives in an “audit”. Instead, look for positives, and use the whole process to your advantage. After all, if you believe your work is sound (why wouldn’t you?), you have nothing to lose and everything to gain.
David says
Re #65 [not one person advocates what is the only sensible thing to do: perform a thorough review of the surface temperature sites. Instead, abstract, Machiavellian motives are attached to anyone who dares question the suitability of the sites.]
There are hundreds of papers that do this. It’s a pretty standard scientific process. I can also point you to two very large PhD theses in Australia which are nice cookbook examples:
Torok, Simon James (1996). The development of a high quality historical temperature data base for Australia. PhD thesis, University of Melbourne.
Trewin, Blair C. (2001). Extreme temperature events in Australia. PhD thesis, University of Melbourne.
With a team of 100 students and a few million dollars for airfares you could do this work on a global scale. You couldn’t do it remotely, though, because most station metadata is tucked away on paper records in national archives (sure, this should all be digitised, but who has the billions of dollars that would be required to do it?).
John F. Pittman says
ray ladbury, you said in 134: “Steven Mosher, A network that corrects error-free data is not necessarily better than a network that collects data with errors that are well understood.” Are temperature, means and anomalies really so misunderstood? Why would anyone want to correct error-free temperature data? This is what several on CA are implying is occurring with AGW proponents and procedures: the data do not support AGW, so they must be “not necessarily better” and must be corrected (i.e., error-free data, or data that do not agree with an AGW hypothesis, are wrong).
I have assumed you were talking about data from the same network, such as US temperatures. So let’s examine data that are “error free” but “not necessarily better”. I can’t think of an example if it concerns the same phenomenon. I can think of several that mean little: the number of prosecutions for drugs versus the impact of drugs on human health. Yes, you can count and get an extremely accurate number, highly accurate for prosecutions, but the impact is more important. So what is the impact? Therefore, what is so misunderstood about temperature?
You ask #1 “John, and Steve Mosher, OK, so you say you are going to carry out a scientific analysis of siting. So what is your hypothesis going in?” I think this should read “a scientific analysis of siting implementation according to accepted standards”.
Hypothesis: Cement, flaming grills, etc. next to temperature sensors do not meet accepted standards, nor do they make for accurate measures of temperature that should be used in global computations or grids, assuming accuracy is important. Ray, please note, I think and have commented that accuracy is important.
#2 “At how many sites do you expect to find problems?” The better question is: how many sites would it take to have a demonstrable effect on grid analysis? The answer is one or two in certain grids. The point of this exercise is to show that it can affect a grid; the actual extent needs further investigation. The hypothesis could be “Micro-site temperature influences impact specific site temperature data”. One is enough. Or choose a random selection (there are several methods available), or do what paleos are claimed to do and cherry-pick, and then show that what was chosen is most important. Or choose what Anthony Watts wants to do and do all of them. Each has its limits. But for me and SteveM, we only have to show #1 with one site, and then proceed. If you want a hypothesis after step 1, perhaps you should wait, as I would, and do one hypothesis at a time.
#3 “What kind of problems do you expect to find?” SEE ABOVE. Several have been shown, and I have commented on hand-waving by Eli Rabbet (sp?) on CA in particular. It is not that I know the answers already (you know the “deal”; you are quoting the first part), but that it should be investigated, because the hand-waving fell far short of the obvious data in the photographs. This goes back to number 1: I expect to find that cement and walls in close proximity show increased temperatures, as has been repeatedly shown in the literature. This may not be true if the station is not taking data correctly, but I guess we all have to live with that if we use the data. There could be unknown and unresolved problems with the data, but I assume this is outside either your ability or mine? If you have information otherwise, I would appreciate you providing it.
“What sorts of errors do you anticipate that these problems will introduce to the database? What sorts of analyses and noise/error rejection procedures might be effective against these errors? Are there any types of errors you might expect to find against which no commonly used mitigation algorithm would be effective?” I anticipate that microsite errors could introduce, as documented in the literature, up to a 3C false positive per site, and for GLOBAL (your emphasis) grids that rest on one or two accepted sites, a 1.5 to 3C false positive on a claimed 0.6C phenomenon. At present, noise/error rejection procedures carry an underlying assumption that they are correct; verification has not been provided, and analyses that claim the ability to reject such errors have been shown to be false in the peer-reviewed literature. However, these also should be investigated after step 1. I find, given the lack of consensus, that commonly used mitigation algorithms do not address hypothesis 1 in any verifiable manner, which is why #1 was chosen to take primacy.
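As a purely illustrative toy calculation (every number below is invented, and a plain least-squares slope stands in for whatever trend analysis would actually be used), here is how one station with a creeping warm bias can inflate a two-station grid-cell trend:

```python
# Hypothetical grid cell averaged from two stations; one station picks up a
# gradually growing microsite warm bias.  All numbers are invented.
years = list(range(1977, 2007))                    # 30 years
true_trend = 0.02                                  # assumed real signal, C per year
bias_growth = 0.05                                 # assumed creeping microsite bias, C per year

clean = [true_trend * (y - years[0]) for y in years]
biased = [t + bias_growth * (y - years[0]) for t, y in zip(clean, years)]
cell = [(a + b) / 2 for a, b in zip(clean, biased)]

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

print(f"clean-station trend : {ols_slope(years, clean):.3f} C/yr")
print(f"biased-station trend: {ols_slope(years, biased):.3f} C/yr")
print(f"two-station cell    : {ols_slope(years, cell):.3f} C/yr")
```

With these made-up numbers the cell trend comes out at 0.045 C/yr against a true 0.02 C/yr, which is the kind of distortion I mean when I say one or two stations can matter in certain grids.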
“If you can answer all of these questions going into your investigation, you are doing science. Otherwise, you’re goin’ fishin’. In particular, I think you need to think about the implications of these stations being in a heavily oversampled network with a long temporal database.” Science does not have to answer all questions at once; it answers questions. Your “ALL” (emphasis mine) is the totality of anti-scientific thought. Darwin did not explain “all”, but is still considered one of the major modern scientists (please note I did not specify one Darwin over the other, in case you are familiar with modern evolutionary theory, or say which one deserves the accolades). I am sure all the engineers and physicists who have studied Newton’s laws and Einstein are grateful that Newton did not have to explain or answer “all”. I am sure Einstein would like to compare notes with Hawking.
Actually, more than one scientist has gone fishing (“Otherwise, you’re goin’ fishin’.”). Pasteur went fishing and founded modern biology, i.e., some of his hypotheses were shown to be utterly untrue. But he is still credited for what he accomplished, not “all” that he tried. However, as the traffic cop asked the motorist he caught speeding, who complained that others were speeding too: “When you go fishing, do you catch every fish?” The motorist admitted he did not. The traffic cop said, “Well, neither do I, but you are a keeper!” The analogy is that it does not matter whether I or Anthony Watts are fishing or not; if we show a real problem, it is real regardless of how we arrived at it, whether you complain or not.
There is no requirement that any scientist address “all” of a phenomenon’s hypotheses at once. In fact, ray ladbury, it is expected that you do one at a time and use your time effectively (your word). MY hypothesis was and is “Cement, flaming grills, etc. next to temperature sensors do not make for accurate measures of temperature that should be used in a global grid, and do not meet accepted siting standards.”
I have little firm opinion yet on much of this. However, I think #1 should be done first, then conclusions or other hypotheses will be more appropriate for consideration.
matt says
#137 Ray, what are you talking about? People do blind audits of their own work and the work of other engineers all the time, hoping NOT to find a problem. And if you sample 5% of the population and don’t find a problem, you can start to feel pretty good that things are OK.
But if you sample 5% of the population and find problems in a third of the samples, then you need to worry. And I think that is where Anthony Watts is: he found a site that he knew was wrong, did a quick check of another few sites and found problems there too, and formed a hypothesis that a huge portion of the network was poorly sited.
“Huh, that’s weird. I wonder if…” drives most science and engineering.
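As a rough back-of-the-envelope version of that sampling argument (my numbers, not Anthony Watts’s actual counts, and a plain normal-approximation interval):

```python
import math

population = 1221
sampled = round(0.05 * population)     # survey about 5% of the network (~61 stations)
flagged = sampled // 3                 # suppose a third of the sample looks problematic

p_hat = flagged / sampled
# 95% normal-approximation interval for the network-wide problem rate.
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sampled)
print(f"sampled {sampled}, flagged {flagged}: "
      f"estimated problem rate {p_hat:.2f} +/- {half_width:.2f}")
```

Even a sample that small is enough to tell “a few percent of sites” apart from “a third of sites”, which is why finding problems in a third of an early sample is worth worrying about.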
warren says
It seems pretty clear that some HCN sites do not meet the standards set by the NWS. Some effort (and funding) should be made to bring such sites up to standard and to reanalyze the existing data (instead of just whining about the data). I know of a half dozen folks capable of making competent inspections, but only one listens to Limbaugh.
Instrument Requirements and Standards for the NWS Surface Observing Programs (Land), NWS Instruction 10-1302, September 20, 2005, Appendix E: Siting and Exposure Standards for the Climate Observing Program.
Other guidelines:
Guide to Meteorological Instruments and Methods of Observation, WMO-No. 8, World Meteorological Organization, draft 7th ed. (2006).
On-Site Meteorological Program Guidance for Regulatory Modeling Applications, EPA-450/4-87-013 or EPA-454/R-99-005, 1987 et seq., Office of Air Quality Planning and Standards, Research Triangle Park, North Carolina 27711.
Heights and Exposure Standards for Sensors on Automated Weather Stations, The State Climatologist, v. 9, no. 4, October 1985, American Association of State Climatologists.
Question: any other refs?
It should be noted that several insurance firms have become very interested in (i.e., financially invested in) meteorological data. It is quite likely that some data will face legal scrutiny and may well be used in denying claims. Certainly aviation meteorological data has already reached legal status. I would strongly suggest that meteorological and climatological professional associations work with ANSI and NOAA (and possibly with Congress) on getting some legal standards set, and on getting support to meet those standards.
Question: Which HCN sites were used for calibration of or reconciliation with satellite data?
Craig Allen says
Groan,
I’m repeating myself, but on this page (the link to which Gavin included in his article) you will find details of how the data are cleansed/rectified. Much of what is being posted ignores what is actually being done with the data (possibly because a lot of people don’t understand the statistics terminology, or because they would rather stick to their straw-man reasoning).
Also, the Australian climate monitoring reference network consists of about 100 stations in remote places with long recording histories. You can read about them here. See photos of them all by clicking the orange dots on the map here. Data from the Australian climate monitoring network are plotted here. You can get the data here. Contact the Australian Bureau of Meteorology for the raw data – they are helpful folks.
If the US network were somehow miraculously shown to be giving false trends, then you would have to explain how there could be no warming in the US when the Australian network shows warming clearly across a variety of parameters. And note that, in addition to the warming, there are strong trends toward decreasing rainfall across the Antipodean continent, which are backed up by tragically decreased river and stream flows, severe water restrictions in most states (starting to ease in some places due to recent floods), and a significantly increased farmer suicide rate.
Also, we know that the climate models are able to match the meteorological records remarkably well (including the observed mid-century cooling episode due to aerosols, and the post-Pinatubo eruption cooling). It would be truly remarkable if the output of the US climate monitoring network were bogus but somehow inexplicably matched the output from models that are based on atmospheric physics.
I look forward to Mr Pielke and his cadres visiting and photographing some of our more noted metropolises, such as Oodnadatta, Tibooburra and Meekatharra, in order to document microsite effects, not to mention Macquarie Island and other Antarctic spots such as Casey and Davis Stations.
Steve Bloom says
OT: I just happened across an excellent newish ocean acidification blog. It’s run by a French scientist who appears to be an expert in the field. It’s not really a comment blog (although that may just be due to lack of traffic), but the author seems to be doing a thorough job of keeping abreast of the field via regular posts on significant new papers, media coverage, conferences, etc. It’s well worth a look IMHO.
steven mosher says
Re 137: Fishing.
Ray wrote:
“John, and Steve Mosher, OK, so you say you are going to carry out a scientific analysis of siting. So what is your hypothesis going in?”
Well, actually, if I got paid for this instead of charged for it, I would suggest the following:
A. Complete a photo survey of the network: 1221 stations.
B. Complete a CRN siting ranking of the sites.
C. Re-analyze the land record with class 4 and class 5 sites removed.
D. Hypothesis: the difference between these trends (with and without class 4-5) will be non-zero, i.e. bad stations warm the record.
E. Issue: will the test have the power to see the difference at a significant level? This would be my biggest concern. One might find a 0.05C difference at, say, 50% confidence. So, power of the test, which is your point in a roundabout way. Always my biggest concern. (A toy power calculation is sketched below.)
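Something like the following toy Monte Carlo is what I mean by checking the power (every number is invented: the station counts, the inter-station trend scatter, the size of the class 4-5 effect, and a plain two-sample z-test standing in for whatever analysis would really be used):

```python
import math, random

random.seed(3)

def power(n_good=900, n_bad=300, delta=0.05, sigma=0.3, trials=2000):
    """Fraction of simulated surveys in which the trend difference between
    'good' and 'class 4-5' stations is detected at roughly the 5% level."""
    hits = 0
    for _ in range(trials):
        good = [random.gauss(0.20, sigma) for _ in range(n_good)]         # C/decade
        bad = [random.gauss(0.20 + delta, sigma) for _ in range(n_bad)]   # C/decade
        diff = sum(bad) / n_bad - sum(good) / n_good
        se = sigma * math.sqrt(1 / n_good + 1 / n_bad)
        if abs(diff / se) > 1.96:
            hits += 1
    return hits / trials

for delta in (0.02, 0.05, 0.10):
    print(f"true difference {delta:.2f} C/decade -> power ~ {power(delta=delta):.2f}")
```

With those particular made-up numbers a 0.05 C/decade difference only gets detected about 70% of the time, and a 0.02 C/decade difference less than a fifth of the time, which is exactly the marginal-power worry.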
Next Question:
At how many sites do you expect to find problems?
Well, there are, supposedly, 1221 sites.
I’ve bounced around on this. Some days I thought the distribution would be roughly normal: with sites ranked 1-5, perhaps 5% class 5 and 15% class 4. On other days I thought it would be more uniform, and we would see 40% in the class 4 to class 5 category.
So, put a gun to my head… I guess 25% in class 4 to class 5. How’s that?
Since we have a fixed sample, and since Anthony and team are intent on collecting EVERYTHING, then I don’t know if this estimate is necessary.
Since I showed Anthony the rating criteria, his plan is to start classifying sites when we have sampled 10% of the population. Anyway, I have also been struck by the comment, made by someone (I thought it was gavin), that you only need 60 good sites. If true, that would be heartening, no?
Next Question:
“What kind of problems do you expect to find?
What sorts of errors do you anticipate that these problems will introduce to the database? ”
Well, we never expected to find stations on rooftops, and we never expected to find burn barrels by stations.
and we never expected to find a Mig jet parked by one.
and we never expected to find batteries and light bulbs in stations
and we never expected to …
Seriously, it’s very simple. I would expect to find a distribution of sites ranging from class 1 (ORLAND) to class 5 (Marysville). I would expect to find that the class 1 sites will exhibit different warming trends (perhaps only in Tmin) than the class 5 sites.
As for errors in the database, I will show you a small example tomorrow. Let’s call it a pilot study. If I had all the data, the time, the money, and the photos, I’d do exactly what gavin suggested: pick the good stations (let’s say the 1-3s) and calculate the trend. HINT: the warming won’t GO AWAY. Like I said, there are too many other sources that indicate warming. If a study of station data made the warming GO AWAY, the study would be wrong.
Next Question:
“What sorts of analyses and noise/error rejection procedures might be effective against these errors? ”
Well, one method is to select different sites. The current approach seems to favor sites with the longest records (that’s good), but if a site becomes impaired over time, you have an issue. There are other sites that are well situated (agricultural monitoring systems, for example), BUT one would have to “patch” together a record from several sites. So, not a simple answer.
It’s EASY to throw stones, but not unscientific.
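A sketch of what I mean by “patching” (made-up records, and a plain constant-offset splice over the overlap years, which is only one of several ways one might do it):

```python
# Two hypothetical stations: the old one closes in 1995, the new one opens in
# 1990 and runs a constant 0.8 C warmer (different instrument/exposure).
old_station = {y: 14.0 + 0.02 * (y - 1950) for y in range(1950, 1996)}
new_station = {y: 14.8 + 0.02 * (y - 1950) for y in range(1990, 2011)}

# Estimate the offset over the overlap years, then splice the shifted new
# record onto the old one's baseline.
overlap = sorted(set(old_station) & set(new_station))
offset = sum(new_station[y] - old_station[y] for y in overlap) / len(overlap)

patched = dict(old_station)
for y in new_station:
    if y not in patched:
        patched[y] = new_station[y] - offset

print(f"estimated offset over {len(overlap)}-year overlap: {offset:.2f} C")
print(f"patched record spans {min(patched)}-{max(patched)} with the trend preserved across the splice")
```

Without an overlap (or a well-correlated neighbour) to estimate that offset, the splice is guesswork, which is part of why this is not a simple answer.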
last Question:
“Are there any types of errors you might expect to find against which no commonly used mitigation algorithm would be effective?”
I don’t think this issue is like an SST bucket adjustment, or a TOBS adjustment, or an adjustment for lapse rate due to an altitude change.
If the site were ALWAYS located in a parking lot, then the ANOMALY approach will “correct” for that, since trend is what matters.
The issue is gradual change over time that goes undocumented. Trees get cut down, pavement is added, buildings go up, a parking lot is added, an air conditioner is put in. It gives one pause. BUT, one can and should still believe in a global warming trend. That “observation” is supported by too many other tentacles to be taken down by some small errors in the weather stations.
Oh wait, one thing. I’ve wondered if this type of site impairment only impacts Tmin. That is, the microsite issues may work to bias Tmin up, but Tmax may be more robust (variance analysis shows this, I’ve been told).
So, one could still construct a trend of sorts from TMAX
Understand that Tmean is simply (Tmax+Tmin)/2. So I was pondering whether Tmax might already contain all the information (trend-wise) that one needs, and whether Tmin might not add that much “information”, Tmin being more variable and more prone to contamination from things like UHI and microsite issues.
So that was a line of thought I had… plus the idea of using a narrowing diurnal range (Tmin rising faster than Tmax) as a proxy of sorts for impairment.
So, just some thoughts, ponderings.
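For what it’s worth, here is a toy version of that diurnal-range idea, with fabricated trends (Tmin rising three times faster than Tmax by construction):

```python
# Fabricated station series: a modest Tmax trend and a faster Tmin trend,
# as might happen if microsite contamination inflates night-time minima.
years = list(range(1977, 2007))
tmax = [30.0 + 0.015 * (y - years[0]) for y in years]
tmin = [15.0 + 0.045 * (y - years[0]) for y in years]
tmean = [(a + b) / 2 for a, b in zip(tmax, tmin)]
dtr = [a - b for a, b in zip(tmax, tmin)]          # diurnal temperature range

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

for name, series in (("Tmax", tmax), ("Tmin", tmin), ("Tmean", tmean), ("DTR", dtr)):
    print(f"{name:5s} trend: {ols_slope(years, series):+.3f} C/yr")
```

A persistently negative DTR trend at a single site, out of line with its neighbours, is the sort of flag I have in mind; whether Tmax alone carries enough of the trend information is the part I am still just pondering.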
You closed:
“If you can answer all of these questions going into your investigation, you are doing science. Otherwise, you’re goin’ fishin’. In particular, I think you need to think about the implications of these stations being in a heavily oversampled network with a long temporal database.”
I hear you, Ray. So, you’ve been kind and patient, as always. I’ll toss a little task at you tomorrow. Just tell me what you think, and whether it would make you curious… not doubtful, just curious.