Observant readers will have noticed a renewed assault upon the meteorological station data that underpin some conclusions about recent warming trends. Curiously enough, it comes just as the IPCC AR4 report declared that the recent warming trends are “unequivocal”, and when even Richard Lindzen has accepted that the globe has in fact warmed over the last century.
The new focus of attention is the placement of the temperature sensors and other potential ‘micro-site’ effects that might influence the readings. There is a possibility that these effects may change over time, introducing artifacts or jumps into the record. This is slightly different from the more often discussed ‘Urban Heat Island’ effect, which is a function of the wider area (and so could be present even in a perfectly set up urban station). UHI effects will generally lead to long-term trends in an affected station (relative to a rural counterpart), whereas micro-site changes could lead to jumps in the record (of either sign) – some of which can be very difficult to detect in the data after the fact.
There is nothing wrong with increasing the meta-data for observing stations (unless it leads to harassment of volunteers). However, in the new-found enthusiasm for digital photography, many of the participants in this effort seem to have leaped to some very dubious conclusions that appear to be rooted in fundamental misunderstandings of the state of the science. Let’s examine some of those apparent assumptions:
Mistaken Assumption No. 1: Mainstream science doesn’t believe there are urban heat islands….
This is simply false. UHI effects have been documented in city environments worldwide and show that as cities become increasingly urbanised, increasing energy use, reductions in surface water (and evaporation) and increased concrete etc. tend to lead to warmer conditions than in nearby more rural areas. This is uncontroversial. However, the actual claim of the IPCC is that the effects of urban heat islands are likely small in the gridded temperature products (such as those produced by GISS and the Climate Research Unit (CRU)) because of efforts to correct for those biases. For instance, GISTEMP uses satellite-derived night light observations to classify stations as rural or urban and corrects the urban stations so that they match the trends from the rural stations before gridding the data. Other techniques (such as correcting for population growth) have also been used.
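To make that concrete, here is a minimal sketch of such an adjustment (illustrative only, with invented numbers – this is not the actual GISTEMP code, which uses a more careful two-legged fit): remove the excess linear trend of an urban station relative to the mean of its rural neighbours.

```python
import numpy as np

def adjust_urban(years, urban, rural_mean):
    """Subtract the excess linear trend of an urban series relative
    to the mean of its rural neighbours (a crude stand-in for the
    real GISTEMP adjustment)."""
    urban_slope = np.polyfit(years, urban, 1)[0]       # deg C per year
    rural_slope = np.polyfit(years, rural_mean, 1)[0]
    excess = urban_slope - rural_slope
    return urban - excess * (years - years[0])

years = np.arange(1900, 2001)
rng = np.random.default_rng(0)
rural = 0.006 * (years - 1900) + rng.normal(0.0, 0.2, years.size)
urban = rural + 0.010 * (years - 1900)   # add 1 deg C/century of spurious UHI
adjusted = adjust_urban(years, urban, rural)
print(np.polyfit(years, adjusted, 1)[0] * 100)   # back to ~0.6 deg C/century
```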
How much UHI contamination remains in the global mean temperatures has been tested in papers such as Parker (2005, 2006), which found there was no effective difference in global trends if one segregates the data between windy and calm days. This makes sense because UHI effects are stronger on calm days (when there is less mixing with the wider environment), so if an increasing UHI effect were changing the trend, one would expect stronger trends on calm days, and that is not seen. Another convincing argument is that the regional trends seen simply do not resemble patterns of urbanisation, with the largest trends in the sparsely populated higher latitudes.
Mistaken Assumption No. 2: … and thinks that all station data are perfect.
This too is wrong. Since scientists started thinking about climate trends, concerns have been raised about the continuity of records – whether they are met. stations, satellites or ocean probes. The danger of mistakenly interpreting jumps due to measurement discontinuities as climate trends is well known. Some of the discontinuities (which can be of either sign) in weather records can be detected using jump point analyses (for instance in the new version of the NOAA product); others can be adjusted using known information (such as biases introduced by changes in the time of observation or by moving a station). However, there are undoubtedly undetected jumps remaining in the records, and without the meta-data or an overlap with a nearby unaffected station to compare to, these changes are unlikely to be fixable. To assess how much of a difference they make, though, NCDC has set up a reference network which is much more closely monitored than the volunteer network, to see whether the large-scale changes from this network and from the other stations match. Any mismatch will indicate the likely magnitude of differences due to undetected changes.
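As a toy illustration of the jump point idea (a sketch with synthetic numbers, not the actual NOAA algorithm): scan the difference between a station and a reference series for the split that best explains it as two constant levels.

```python
import numpy as np

def largest_step(diff):
    """Toy change-point scan: find the split of a (station minus
    reference) difference series into two constant levels that
    minimises the residual variance; return index and step size."""
    best_i, best_cost = None, np.inf
    for i in range(2, len(diff) - 2):
        left, right = diff[:i], diff[i:]
        cost = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i, diff[best_i:].mean() - diff[:best_i].mean()

rng = np.random.default_rng(1)
diff = rng.normal(0.0, 0.3, 120)
diff[70:] += 0.8                 # an undocumented station move at month 70
print(largest_step(diff))        # roughly (70, 0.8)
```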
It’s worth noting that these kinds of comparisons work because of the large distances over which monthly temperature anomalies correlate. That is to say, if a station in Tennessee has a particularly warm or cool month, it is likely that temperatures in New Jersey, say, show a similar anomaly. You can see this clearly in the monthly anomaly plots or by looking at how well individual stations correlate. It is also worth reading “The Elusive Absolute Surface Temperature” to understand why we care about the anomalies rather than the absolute values.
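A small synthetic example of both points – the long-range coherence, and why anomalies rather than absolute values are the quantity of interest: two stations with very different base climates can still have highly correlated anomalies.

```python
import numpy as np

def monthly_anomalies(temps):
    """temps has shape (years, 12); subtract each calendar month's
    long-term mean to leave the anomalies."""
    return temps - temps.mean(axis=0)

rng = np.random.default_rng(2)
regional = rng.normal(0.0, 1.0, (50, 12))              # shared weather signal
station_a = 10.0 + regional + rng.normal(0.0, 0.5, (50, 12))
station_b = 4.0 + regional + rng.normal(0.0, 0.5, (50, 12))
a = monthly_anomalies(station_a).ravel()
b = monthly_anomalies(station_b).ravel()
print(np.corrcoef(a, b)[0, 1])   # ~0.8, despite a 6 deg C difference in climate
```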
Mistaken Assumption No. 3: CRU and GISS have something to do with the collection of data by the National Weather Services (NWSs)
Two of the global mean surface temperature products are produced outside of any National Weather Service. These are the products from CRU in the UK and NASA GISS in New York. Both CRU and GISS produce gridded products, using different methodologies, starting from raw data from NWSs around the world. CRU has direct links with many of them, while GISS gets the data from NOAA (who also produce their own gridded product). There are about three people involved in doing the GISTEMP analysis and they spend a couple of days a month on it. The idea that they are in any position to personally monitor the health of the observing network is laughable. That is, quite rightly, the responsibility of the National Weather Services who generally treat this duty very seriously. The purpose of the CRU and GISS efforts is to produce large scale data as best they can from the imperfect source material.
Mistaken Assumption No. 4: Global mean trends are simple averages of all weather stations
As discussed above, each of the groups making gridded products goes to a lot of trouble to eliminate problems (such as UHI) or jumps in the records, so the global means you see are not simple means of all data (this NCDC page explains some of the issues in their analysis). The methodology of the GISS effort is described in a number of papers – particularly Hansen et al 1999 and 2001.
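As a cartoon of one of those steps (latitude bands only, invented numbers – the real products grid in two dimensions and adjust the stations first), here is why a crowd of stations in one region does not dominate an area-weighted mean the way it would a simple average.

```python
import numpy as np

def banded_mean(station_lats, anomalies, lat_edges):
    """Average the stations inside each latitude band, then combine
    the bands weighted by area (proportional to the difference of the
    sines of the bounding latitudes)."""
    means, weights = [], []
    for lo, hi in zip(lat_edges[:-1], lat_edges[1:]):
        inside = (station_lats >= lo) & (station_lats < hi)
        if inside.any():
            means.append(anomalies[inside].mean())
            weights.append(np.sin(np.radians(hi)) - np.sin(np.radians(lo)))
    return np.average(means, weights=weights)

# 90 stations crowded into one mid-latitude band, 10 in the tropics
lats = np.concatenate([np.full(90, 45.0), np.full(10, 5.0)])
anoms = np.concatenate([np.full(90, 1.0), np.full(10, 0.2)])
print(anoms.mean())                                    # naive mean: 0.92
print(banded_mean(lats, anoms, np.arange(0, 91, 10)))  # area-weighted: ~0.53
```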
Mistaken Assumption No. 5: Finding problems with individual station data somehow affects climate model projections.
The idea apparently persists that climate models are somehow built on the surface temperature records, and that any adjustment to those records will change the model projections for the future. This probably stems from a misunderstanding of the notion of a physical model as opposed to a statistical model. A statistical model of temperature might, for instance, calculate a fit between known forcings and the station data and then attempt to make a forecast based on the change in projected forcings. In such a case, the projection would be affected by any adjustment to the training data. However, the climate models used in the IPCC forecasts are not statistical, but are physical in nature. They are self-consistent descriptions of the whole system whose inputs are only the boundary conditions and the changes in external forcings (such as the solar constant, the orbit, or greenhouse gases). They do not assimilate the surface data, nor are they initialised from it. Instead, the model results for, say, the mean climate, or the change in recent decades, or the seasonal cycle, or the response to El Niño events, are compared to the equivalent analyses in the gridded observations. Mismatches can help identify problems in the models, and are used to track improvements to the model physics. However, it is generally not possible to ‘tune’ the models to fit very specific bits of the surface data, and the evidence for that is the remaining (significant) offsets in average surface temperatures between the observations and the models. There is also no attempt to tweak the models in order to get better matches to regional trends in temperature.
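For contrast, here is what the statistical kind of model looks like in miniature (entirely made-up numbers): its fitted ‘sensitivity’ depends on the observed record, so revising that record changes its forecast – a fit that a physical model never performs.

```python
import numpy as np

# A purely *statistical* model in miniature: regress observed
# temperature anomalies on a total-forcing series, then extrapolate.
# Its projection inherits any error in 'observed'; a physical GCM
# never performs this fit, so its projections do not.
rng = np.random.default_rng(3)
forcing = np.linspace(0.0, 2.5, 100)                  # W/m2 over a century (made up)
observed = 0.5 * forcing + rng.normal(0.0, 0.1, 100)  # station-derived anomalies
fitted_sensitivity = np.polyfit(forcing, observed, 1)[0]   # deg C per W/m2
print(fitted_sensitivity * 4.0)   # "projection" for a hypothetical 4 W/m2 forcing
```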
Mistaken Assumption No. 6: If only enough problems can be found, global warming will go away
This is really two mistaken assumptions in one: that there is so little redundancy that throwing out a few dodgy met. stations will seriously affect the mean, and that evidence for global warming is exclusively tied to the land station data. Neither of those things is true. It has been estimated that the mean anomaly in the Northern Hemisphere at the monthly scale only has around 60 degrees of freedom – that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large-scale month to month changes. Currently, although they are not necessarily ideally placed, there are thousands of stations – many times more than would be theoretically necessary. The second error is obvious from the fact that the recent warming is seen in the oceans, the atmosphere, in Arctic sea ice retreat, in glacier recession, earlier springs, reduced snow cover etc., so even if all met. stations were contaminated (which they aren’t), global warming would still be “unequivocal”. Since many of the participants in the latest effort appear to really want this assumption to be true, pointing out that it doesn’t really follow might be a disincentive, but hopefully they won’t let that detail damp their enthusiasm…
What then is the benefit of this effort? As stated above, more information is always useful, but knowing what to do about potentially problematic sitings is tricky. One would really like to know when a problem first arose, for instance – something that isn’t clear from a photograph taken today. If the station is moved now, there will be another potential artifact in the record. An argument could certainly be made that continuity of a series is more important for long-term monitoring. A more convincing comparison, though, will be of the existing network with the (since 2001) Climate Reference Network from NCDC. However, that probably isn’t as much fun as driving around the country taking snapshots.
Steve Milesworthy says
The following comments can be found on Climate Audit and imply belief in four of the assumptions. I was particularly irritated by the first comment because it is clear from the rest of the article (about Parker 2006) that Steve McIntyre understands the difference, but the diversion completely confused the subsequent discussion. Google them to find the context.
1. Steve McIntyre: If you are not a climate scientist (or a realclimate reader), you would almost certainly believe, from your own experience, that cities are warmer than the surrounding countryside
2. Anthony Watts: If you were conducting an experiment where the results were likely to shape national and world policy, wouldn’t it be prudent to check the origin of the data set?
5. Bob Meyer: Is Schmidt actually suggesting that large changes in the individual station data would have no effect on the grid data because by some occult process they have already “fixed” the deviant station data?
6. David Stockwell: if removing the contaminated stations reduced the 20th century increase to the point there was no increase in temperature, how could that possibly improve model fit, when the models show an increase of 0.5deg?
Dan says
re: 50. “I don’t quit (sic) see how you can say that individual stations do not matter when a network is a collection of individual stations.”
I have spent over two decades visiting and approving various meteorological data instrumentation sites for various uses. One critical issue that seems to be conveniently avoided by denialists is that most long-term climate data stations are not the urban or airport sites where one gets the daily or current temperature on the radio or TV, which essentially renders the UHI issue moot. There are well over one thousand long-term climate data stations across the United States. There are a few hundred in my state alone. The overwhelming majority are well-sited and in rural areas; denialists try to make it sound like urban sites are more common. Supposedly (conveniently?) the surfacestations.org study by a non-climate scientist (a “former TV meteorologist”) checked just 40 of them and draws an unpublished conclusion from that small set of select stations. That is about 3 percent of the purported 1200+ stations. It is not clear how the subset of 40 was chosen. There also seems to be a denialist’s myopic tunnel vision focusing on the US observations without realizing that they comprise a small fraction of the *global* database of surface observations. In short, the denialists’ sudden attempts to discredit the database outside of the scientific arena are nothing short of classic data cherry-picking. Of course it is then quickly (desperately?) picked up and regurgitated by typical non-science suspects such as the supposed journalist at the pittsburghlive.com link.
Then there is the fact that surface observations are just one indicator of global warming trends. As noted above, the recent warming “is seen in the oceans, the atmosphere, in Arctic sea ice retreat, in glacier recession, earlier springs, reduced snow cover etc.” Another “inconvenient truth” for denialists to avoid mentioning or acknowledging while overblowing the UHI issue.
steven mosher says
This little bit might help some get an idea of the importance of microsite issues. As referenced in Gavin’s text, NOAA are building a Climate Reference Network. Much care has gone into siting. I’ll quote from a Tom Karl document available on NOAA’s web site.
Essentially, it gives you an idea of how to rate sites and the kind of error you get when you put a site around pavement, buildings etc. So, in evaluating the historical network for microsite issues, one should keep this in mind. Microsite issues can be more critical than an urban/rural distinction. So, here is Tom Karl:
NOAA/NESDIS NOAA-CRN/OSD-2002-0002ROUD0
CRN Series December 10, 2002
X030 DCN 06
The USCRN will use the classification scheme below to document the “meteorological measurements representativity” at each site. This scheme, described by Michel Leroy (1998), is being used by Meteo-France to classify their network of approximately 550 stations. The classification ranges from 1 to 5 for each measured parameter. The errors for the different classes are estimated values.
Class 1 – Flat and horizontal ground surrounded by a clear surface with a slope below 1/3 (<19 deg). Grass/low vegetation ground cover <10 centimeters high. Sensors located at least 100 meters from artificial heating or reflecting surfaces, such as buildings, concrete surfaces, and parking lots. Far from large bodies of water, except if it is representative of the area, and then located at least 100 meters away. No shading when the sun elevation >3 degrees.
Class 2 – Same as Class 1 with the following differences. Surrounding vegetation <25 centimeters. Artificial heating sources within 30 meters. No shading for a sun elevation >5 deg.
Class 3 (error 1 C) – Same as Class 2, except no artificial heating sources within 10 meters.
Class 4 (error >= 2 C) – Artificial heating sources <10 meters.
Class 5 (error >= 5 C) – Temperature sensor located next to/above an artificial heating source, such as a building, roof top, parking lot, or concrete surface.”
That’s not a climate denialist. That’s the criteria NOAA are using to establish the new network. It would seem reasonable to apply the same criteria to the old network. It might seem prudent to not use sites that are class 5.
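As a trivial sketch of what applying that screen could look like (the station names and ratings below are invented, standing in for an actual photographic survey):

```python
def usable(stations, worst_allowed=3):
    """Keep stations whose (invented) survey rating is at or better
    than worst_allowed, per the Leroy/CRN classes quoted above."""
    return [name for name, rating in stations if rating <= worst_allowed]

# Hypothetical ratings, standing in for a photographic site survey
surveyed = [("Station A", 5), ("Station B", 3), ("Station C", 1)]
print(usable(surveyed, worst_allowed=4))   # drops only the class 5 site
print(usable(surveyed, worst_allowed=2))   # ['Station C']
```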
gator68 says
What about places where the urban heat island might be important? California is one continuous city from north of Los Angeles down to Mexico. I think a thermometer in the middle of all that cement is valid data. If a city is hotter over time, isn’t that a real finding? It seems just as wrong to throw out a reading from downtown LA, and substitute a rural reading for that land area, as it would be to make a reading from LA stand in for a rural area.
Is this an issue simply because the trend analysis assumes an equal land area for each temperature measurement used?
EW says
Implications of belief in 4 or not, again, how can a review of stations do any harm? And if there is any “uncertainty or controversy” in the siting or data, it would be better removed. Let’s see what the differences are.
Ray Ladbury says
A couple of points. People need to think not just about whether a particular siting issue, etc., will introduce an error, but what kind of error it will introduce. This post points out that temperature is oversampled by nearly 2 orders of magnitude over what is needed to produce a reasonable picture of temperatures on Earth. If a particular site regularly produces temperature readings higher than those of surrounding sites, this is easily identifiable and probably correctable. If a site produces a short-term spike, again, this will be evident with respect to not only the surrounding stations, but also the station’s own earlier readings.
Now let us say that we change instrumentation. If our new instrumentation produces a shift relative to the old instrumentation, that will again be easily identifiable, and the origin of the anomaly can be resolved.
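For what it’s worth, here is a toy version of that kind of check (synthetic data, not any agency’s actual QC code): compare a station against the median of its neighbours and flag the months that stick out.

```python
import numpy as np

def flag_months(station, neighbours, z=4.0):
    """Flag months where a station departs from the median of its
    neighbours by more than z robust standard deviations; with an
    oversampled network, bad readings stand out this easily."""
    diff = station - np.median(neighbours, axis=0)
    centred = diff - np.median(diff)
    robust_sd = 1.4826 * np.median(np.abs(centred))   # MAD scaled to sd
    return np.nonzero(np.abs(centred) > z * robust_sd)[0]

rng = np.random.default_rng(4)
neighbours = rng.normal(0.0, 0.3, (8, 240))   # 8 nearby stations, 20 years
station = rng.normal(0.0, 0.3, 240)
station[100] += 4.0                           # one spurious reading
print(flag_months(station, neighbours))       # -> [100]
```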
Folks, we are talking about a GLOBAL trend persisting >30 years. It is consistent with many other trends we are also seeing via completely independent measurements. It is consistent with what would be predicted given well-established physical models of the atmosphere. I just don’t see how going around taking photos of a few ill-sited stations is going to dent the overwhelming evidence that we are warming.
On the other hand, to represent these few anomalies as the norm rather than the exception can only have the purpose of increasing doubt among the lay population who may not be familiar with the overwhelming evidence. That is not science, but anti-science.
Another point that has been stated again and again but still doesn’t seem to be getting through: the parameters in the models are not unconstrained. There are physical processes and phenomena independent of the temperature that constrain most of these parameters to a pretty narrow range. The models are physics-based – not best fits to the data. Changing data for a few stations may increase the anomaly between model and observed data for those few stations. It will not substantively change the models.
Hank Roberts says
> if there is any “uncertainty or controversy” in the siting or data, it would be better removed.
All that would leave is religion, you know.
bigcitylib says
Steven Mosher,
I wasn’t able to find the link to this document by Karl. What is the procedure recommended there for dealing with class 5s? Are they discarded, or is some correction applied to them?
Eli Rabett says
In #44 Steve Reynolds throws some spaghetti against the wall
“One simple one is the introduction of the electronic MMTS to replace manual thermometer reading. I think MMTS uses an RS232 cable with limited length, so sensors may have been relocated closer to buildings, which could systematically increase reported temperature.”
Except that the NWS system used fiber optic modems for the RS-232 communication. Even using wires, while 19.2 kbaud RS-232 is short range (50 feet), at lower baud rates the range is much longer: 500 feet at 9.6 kbaud.
I rather suspect that the NWS was aware of the issue. Operators of the automatic weather stations could easily tell us about this.
Craig Allen says
I’m constantly amazed by the general assumption by many laypeople (particularly in the US) that the people devoting their careers to science are generally incompetent. I’m not a climatologist or meteorologist. So I make the assumption that the people who are involved in these fields are for the most part competent and diligent. So I follow their work with interest while I get on with my own. Sure they will make mistakes, and science is often a two steps forward, one step back process. But on the whole, it seems reasonable both to accept that they are doing their job to the best of their abilities, and to accept that on the whole the science is advancing and that our understanding of the climate system is improving all the time. The climate monitoring networks around the world are very important. Why do people assume that they are run by clowns and that the people in charge are for some reason ignoring the inherent messiness of real world data? Their job is all about working with the data. Why do you assume that they don’t know what they are doing?
In his article, Gavin gave a link to a web page at the National Environmental Satellite, Data, and Information Service website that explains that the United States Historical Climatology Network is a high quality subset of the U.S. Cooperative Observer Network operated by NOAA’s National Weather Service. It then goes on to explain how quality control is applied to this data. You can read it here. The page explains how the dataset from each station is compared – using various statistical techniques – with (up to 40) other stations that are in places with similar climate. This allows blocks of dodgy data, or trends that are caused by things such as changes to micro-climate, to be spotted and corrected.
The article goes into a fair bit of detail, and provides a list of papers that will take you into much more detail still. Clearly the people who are running the network and working with the data acknowledge the issues with collecting and analyzing real world data. And they have done a huge amount of work to identify which series of data from which stations are problematic, and to correct for them or, if necessary, exclude them. Furthermore, they continue to monitor the quality of the data coming from each and every station in the network and to improve their techniques.
What is with all you people who are so intent on pretending that meteorologists and climatologists are somehow deluded, incompetent or malevolently pigheaded? Don’t you have anything better to do?
Timothy Chase says
mankoff (#26) wrote:
Looks like you are putting in the whole world!
Beautiful. I saw one station which the “contrarians” might want to use a little while longer – there are probably others. But I noticed another which had been trending down for quite a while. They might want to skip that one, as it has reversed course.
This is just the sort of thing that Google Earth is really good for. And of course they already have glacier data in one of the KMLs attached to photos and text on the web.
Once I was looking through and saw the circular thermokarst lakes – methane-emitting thaw lakes pocketing the permafrost. Someone commented on how funny they look, but didn’t know what they were. I knew what they were from the descriptions. However, there is a more recent discussion on KeyHole about them here. I know that they have been doing studies on their evolution. Anyway, I had mentioned them a little while back here. For those who are interested, there is an older post that mentions them here:
12 Dec 2005
Methane hydrates and global warming
https://www.realclimate.org/index.php/archives/2005/12/methane-hydrates-and-global-warming/
In any case, they visually demonstrate the same point that Spencer made in #1. I am kind of hoping someone will begin a KML specifically on them. It probably already exists. But it would also be nice if all the KMLs relevant to climate change were gathered in the same place – or at least links to the sites where they are available. Somebody may already be doing that. I will have to check.
As for the temperature records, that is something really special. Great work…
Thank you.
Timothy Chase says
John Mashey (#45) wrote:
Thanks, John.
I couldn’t post the response that I had been writing – my temper was just a little too high to write anything particularly rational at the time and I knew it.
Peter Griffin says
#52:
“Supposedly (conveniently?) the surfacestations.org study by a non-climate scientist (a “former TV meteorologist”) checked just 40 of them and draws an unpublished conclusion from that small set of select stations. That is about 3 percent of the purported 1200+ stations. It is not clear how the subset of 40 were chosen.”
I believe that right now the goal is to document all of the 1200 or so USHCN and GHCN-GISS sites, not all surface stations in operation across the US. The effort depends on volunteers from across the country to document sites close to them, rather than have two or three individuals visit all 1200+ sites themselves. Because the effort is new, the volunteer pool is small and the few documented sites tend to be physically located near the volunteers. I don’t think there is anything more to read into the selection of the current subset of sites other than that is where the current volunteers happen to live.
It looks like, as of yesterday, 84 sites have been documented to one degree or another, so they are up to about 7% now.
Question: how many sites need to be surveyed before the data are sufficient for RC scientists to take an interest in analyzing them and drawing non-dubious conclusions? Nothing to be read into that question. In the past I worked as an engineer in semiconductor manufacturing and we spent an awful lot of time measuring our tools, gathering data and statistics, and recalibrating the tools, and I often try to draw mental parallels between the science we practiced and the science of climate research.
Boris says
As a possible solution, perhaps we could use different methods to measure the Earth’s temperature anomaly. I’ve noticed that claims about the warming of Mars, Neptune and Pluto are never challenged by sceptics or contrarians. Why not use the methods we use to measure these distant planets to measure the Earth? Since there is little criticism of planetary results, this seems to be a good middle ground solution.
Paul G says
Interesting, not one person advocates what is the only sensible thing to do: perform a thorough review of the surface temperature sites.
Instead, abstract, Machiavellian motives are attached to anyone who dares question the suitability of the sites. Circle the wagons indeed.
Nick Gotts says
Re #55 [how can a review of stations do any harm?] It can take up the limited time of highly skilled people, that’s how. This in itself doesn’t mean it’s not worth doing, but it does mean the possible benefits need to be weighed against the costs. Given all the independent lines of evidence pointing to average surface warming over the last few decades (satellite measurements, ocean temperatures, sea-level rise, retreating glaciers, phenological changes, shifts in the ranges of temperature-sensitive species), it is highly implausible that it would lead to more than very minor refinements to the current overall picture.
Harold Pierce Jr says
How to avoid problems with most land-based weather stations: use lighthouses as thermometers for accurate and unbiased measurement of surface air temperature.
Here is some data I have obtained. Only a small portion is given due to message box input restrictions.
Weather Station Name: Quatsino, B.C.
Sample Interval : Month
Sample Temperature : Daily Minimum
Sample Range, Years : 1899-1999
El Nino Year 1900
Mean Monthly Min +/- SD Deg K
Jun 284.2 +/- 2.7
Dec 278.2 +/- 2.7
El Nino 1998
Mean Monthly Min +/- SD Deg K
Jun 281.1 +/- 1.0
Dec 274.6 +/- 2.9
La Nina Year 1899
Mean Monthly Min +/- SD Deg K
Jun 280.9 +/- 1.7
Dec 277.3 +/- 3.3
La Nina 1999
Mean Monthly Min +/- SD Deg K
Jun 282.0 +/- 2.0
Dec 275.2 +/- 2.1
These data show that there has been no change in the mean monthly temperature for solstice months at this site for a century. Although there is no statistically significant difference among the means, the data suggest a slight cooling over the century.
Note the magnitude of the SD’s. These are so large that there would have to be an enormous increase in climate warming for it to be detected by this thermometer.
[Response: Or you could just look at the annual mean data for that station, and calculate an extremely significant trend of 0.91 +/- 0.47 deg C/ century (95% conf). – gavin]
Timothy Chase says
Boris (#64) wrote:
We have satellite measurements (essentially what you are asking for), ocean temperature measurements, the accelerating decline of the glaciers (Himalayan glaciers should all be gone by 2100), the accelerating decline of the Arctic sea ice (it should be gone during the summers around 2020), the rising sea levels, the borehole measurements, the thermokarst lakes, the migration of animals, bacteria and viruses (hemorrhagic dengue in Mexico and Taiwan), the fungi at higher latitudes, the tree lines at higher latitudes and altitudes, etc. This isn’t a matter of an honest difference of rational opinion. The science is on one side – and I am still trying to figure out what is on the other.
Vernon says
Re: 64
Nice misdirection, Boris. Just because someone is skeptical of CO2-based warming does not mean they do not believe in warming. You seem to be saying that if you don’t believe the CO2 theory is correct then you do not believe in climate change. That is a false argument. I believe in climate change; I have just read enough to know that I am not sure about CO2, and being told that it’s right because it is an expert’s opinion does not cut it. I want to see a full and open debate about the facts, not opinion, not personal attacks, etc…
Eli Rabett says
Boris, Triana got shot down before launch. A satellite in one of the Lagrangian positions is far enough away to image the whole earth for such studies. Too bad that it was associated with Al Gore; maybe when the adults return we can take it out of mothballs and launch it.
Dan says
re: 65. Because, as already stated several times here, it is not a critical issue with respect to AGW despite anti-science denialists’ attempts to make it so. The warming trends are shown by ocean temperatures, sea-level rise, glacier retreats, satellite measurements, etc. And US measurements are certainly not the critical issue with respect to *global* measurements. Nothing abstract or “Machiavellian” about that in the least. To continue to focus on a small dataset without critical importance to the overall global dataset, other consistent trends and the issue as a whole is to data cherry-pick. A classic denialist move to create doubt, obfuscate, and stall. And frankly waste money. Shades of the acid rain debate of the 1980s.
Dan says
re: 69. Precisely. Except it has already been done. And the scientific debate is long over. Read the IPCC peer-reviewed reports which are linked to from this site. No opinions, no personal attacks, just the climate science research results conducted by actual climate scientists among others.
Nick Gotts says
Re #65 [not one person advocates what is the only sensible thing to do: perform a thorough review of the surface temperature sites. Instead, abstract, Machiavellian motives are attached to anyone who dares question the suitability of the sites.] If such a course of action were agreed (and how many person-years would be needed?), the denialist response (despite all the independent lines of evidence) would be: “See! Even the AGW believers admit the surface temperature site data is worthless. It would be absurd to take any action while this huge uncertainty remains unresolved.” And of course no review, however thorough, would be deemed satisfactory. The motives I ascribe to the professional denialists (not the members of the public being fooled by them) may be considered Machiavellian, but are not abstract: they are money – on the part of Exxon and its paid propagandists, for example – and commitment to “free-market” ideology – the Wall Street Journal editorial staff, for instance.
Ray Ladbury says
Re 60. Craig, My experience has been that the people who are most vociferous in this debate tend to be those who understand the science the least. And since all the evidence is really only on one side, the only recourse of the denialists is to attack the competence and credibility of those who came up with the evidence. We see exactly the same sort of thing with the debates over evolution and over various conspiracy theories. If one has actual evidence, one is usually too busy publishing to get really nasty.
Of course some of what we are seeing is the usual sausage making of science writ large on an international stage. Some researchers do not feel that their pet theories and ideas have been given enough emphasis in the IPCC reports and in other expressions of scientific consensus. Ultimately, however, it is up to them to convince the scientific community that their ideas are important. Taking it directly to the public and press is anti-science. After all, how are they supposed to know the science if they haven’t studied it for 20 years like the experts?
Timothy Chase says
Dan (#71) wrote:
Yep.
I saw a post by McIntyre of the “unbiased” let’s-audit-your-science gig. An early version of an IPCC report had said something to the effect that global warming was evident in all major oceans, so he ignored all but a fairly small section of the Southern Ocean where he could point out that the temperature was actually decreasing. But temperatures are rising as far down as 1500 meters below the sea surface, and I believe the top eighty meters are what is of greatest concern to hurricane formation.
Then there are a few glaciers in the world which are not as of yet falling into the steep decline. Contrarians like to point those out, too. But the global trend is obvious, and even more so if you do the math. Accelerating decline of global mass balance – it looks something like if you threw your keys straight out. Then there is the growing thunder of glacial melt in Greenland and over a hundred glaciers picking up speed on the western peninsula of Antarctica.
Harold Pierce Jr says
RE #69 Gavin: How can you conclude that there is a “trend” in the data? These data say there is no trend.
[edit for brevity]
[Response: Possibly your definition of trend is different from mine (and everyone else’s). Take the annual mean data, fit a linear regression, examine whether slope of said regression is significantly greater than zero. Done. There is a trend and it’s pretty much in line with the trends everywhere else. You can make it more complicated, and you can subselect years so that nothing is significant, but you cannot claim there is no trend. Was it colder then and warmer now? Yes. – gavin]
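For anyone who wants to try the recipe, here is a minimal version with synthetic stand-in data – the real exercise would read the station’s annual means from the archived record.

```python
import numpy as np
from scipy import stats

def trend_and_significance(years, annual_means):
    """Fit a linear regression to the annual means; report the slope
    in deg C/century, its 95% interval, and the p-value."""
    fit = stats.linregress(years, annual_means)
    return 100 * fit.slope, 100 * 1.96 * fit.stderr, fit.pvalue

# Synthetic stand-in for a station's annual means; the real exercise
# would use the downloaded Quatsino record
rng = np.random.default_rng(5)
years = np.arange(1900, 2000)
annual = 8.0 + 0.009 * (years - 1900) + rng.normal(0.0, 0.6, years.size)
slope, ci95, p = trend_and_significance(years, annual)
print(f"{slope:.2f} +/- {ci95:.2f} deg C/century (p = {p:.4f})")
```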
steven mosher says
Bigcitylib,
I always enjoy your comments. Thanks for the civilized question.
The link is here. You should read everything on the site.
http://www1.ncdc.noaa.gov/pub/data/uscrn/documentation/program/X030FullDocumentD0.pdf
It is a wonderful program, and Karl and others need to be commended on creating a quality network.
The document covers site selection. That is, NOAA are trying to prevent the kind of siting issues that you see in the historical network, the network that Hadley and Goddard currently use. Have a look at the specifications for Proper siting.
In the historical network one can find examples of sites that appear to have changed gradually over time. The most humorous examples would be Lake Spaulding, Tahoe City, Marysville, Lodi, Livermore.
Metadata does not capture this, as Gavin notes. Metadata covers things like gross location, elevation, TOBS, and instrument changes.
Think about the MICROSITE issue as an “instrument drift.” Stations are not calibrated, small changes over time go undocumented.
For the CRN there are strict siting guidelines. Photos are required. And there is an expectation that the site won’t change for 50-100 years (ask yourself why this is a requirement).
So, to answer your question, the CRN, I would suspect, would REJECT a class 5 site. At worst, they would keep a RECORD of its classification and photos. With the historical network we have neither. The point is the document shows people how to rate a site for INCLUSION in the CRN. Bottom line: Marysville would be excluded. Lodi would be excluded. Lake Spaulding would be excluded. Tahoe City would be excluded. In fact, none of these sites or locations nearby are included in the CRN.
Also, have a look at the diagrams of how a site should be constructed (pg 17). This isn’t a denialist document.
NOAA is a data source for CRU and GISS. It’s not Pielke saying this. It’s Karl.
Now, here is a funny anecdotal thing. If you look at the MARYSVILLE photos, for example, you will see a site in a parking lot (class 5). If you look at its anomaly graph (OK, you have to download the data and calculate this for yourself) you will see it getting hotter by the year. Now, move 20 miles to the northwest: Colusa, CA (it’s a class 2 or 3). Construct its anomaly graph. Move 30 miles to the west of Marysville: Willows, California. The site is in a field (class 1 or 2). Construct its anomaly graph. Now move 50 miles to the northwest: ORLAND. Visual inspection and photographic evidence shows a class 1 or class 2 site. Construct its anomaly graph.
Question: if the class 5 site shows larger warming trends than the class 1-3 sites within 50 miles of it, what does that tell you about the wisdom of including a class 5 site in your grid estimate?
teledisconnection?
Propose a hypothesis. We have a site in Marysville (used by GISS, apparently, but not by CRU) that is located in a parking lot. It’s a class 5 site, by NOAA CRN standards. It shows a warming trend, a substantial warming trend. Other sites in the area show no significant trends. These sites follow CRN guidelines.
Explain?
I look at that and I say, well, Tom Karl is right. We should pay attention to siting. We probably should not use data from class 5 sites. Ya think? We probably should not try to “adjust” the record; rather, FIX THE SITE or don’t use it. CALIBRATE your instrument. Jeeze oh pete.
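For reference, ‘construct its anomaly graph’ amounts to something like the following (invented series below; the real comparison uses the downloaded records):

```python
import numpy as np

def annual_anomalies(years, annual_means, base=(1951, 1980)):
    """Annual means minus the station's own base-period average; this
    is all 'constructing an anomaly graph' amounts to."""
    in_base = (years >= base[0]) & (years <= base[1])
    return annual_means - annual_means[in_base].mean()

# Invented series; the real comparison uses the downloaded records
years = np.arange(1900, 2000)
sites = {"class 5 (hypothetical)": 16.0 + 0.020 * (years - 1900),
         "class 1 (hypothetical)": 16.0 + 0.005 * (years - 1900)}
for name, series in sites.items():
    anom = annual_anomalies(years, series)
    slope = np.polyfit(years, anom, 1)[0] * 100
    print(name, round(slope, 2), "deg C/century")
```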
Luke says
An interesting recent Mori poll result of public opinion within the UK about climate change can be found at http://news.bbc.co.uk/1/hi/sci/tech/6263690.stm
Apparently, the survey reveals that dog mess is of more concern to the public than climate change.
I don’t know whether Mori asked any questions about calibration of meteorological instrumentation and similar sorts of things.
Matt says
If you are studying the mating habits of New Zealand whistling frogs, then I think a case could be made that there aren’t the resources available to re-visit previously covered ground in the research.
However, the impact of CO2 warming could reach trillions. This article (http://www.iht.com/articles/2006/10/30/business/web.1030energy.php) notes spending on climate research has fallen from $7.7B in 1979 to $3B today. Assuming a linear decrease over all that time, that is $153B the US has spent on climate research in the last 30 years. Surely there is budget to check and re-check a fundamental assertion that we are warming. And to cross-check it several different ways.
It’s better to have a rock solid record indicating warming trends of 0.1 degree/decade than to have a flimsy record indicating 0.15 degree/decade. And you can bet after Anthony Watts has tossed out all the dicey stations that he himself will be a strong believer in the quality of the remaining stations. And they will still show warming.
There are a few on this board that screech “The science is settled” over and over. I’m not sure they are aware just how many times science has become unsettled in a matter of a few years.
It wasn’t too long ago that science had settled that ulcers were due to stress and that satellites indicated we were cooling. And then a ‘crackpot’ doctor found a bacterium called H. pylori, and a math error was found, and quickly that which had been settled became unsettled and then settled again. That is how science works.
Let folks throw darts. Let folks poke holes. It makes everything stronger. If it takes an army of 500 volunteers 3 hours each to identify high quality stations, then the cost to verify one of the foundations of the thesis was quite low relative to the total spend on climate research.
DaveS says
I don’t see how any reasonable, intelligent person could disagree with that sentiment.
steven mosher says
RE 56.
As usual Ray makes a world of sense. If the land surface temperature record is OVERSAMPLED by an order of 2x, then it would make sense to remove stations that are clearly in violation of WMO siting standards and CRN siting guides. Thanks Ray!
I propose a test, Ray, and Gavin can help.
Anthony Watts has concentrated his study starting from his home in Chico, California, radiating outward. This is located in grid 35N-40N, 115W-120W.
Gavin will publish the list of stations used to provide the grid estimate for this area – both for CRU and for GISS.
Gavin will publish the raw data and programs for adjusting these data.
Gavin will publish the anomaly history for this grid.
Then, you look at the stations. You will eliminate station data that come from stations that violate WMO rules. You will eliminate station data from stations that are class 5 (by buildings, pavement etc.).
Question: does this microsite stuff matter?
Then you recalculate the grid with this reduced number of stations.
After all, the sampling is 2X, so you can cut half the stations, right? Right, Ray? Just get rid of the stations that are impaired or potentially impaired by microsite issues.
So, Ray. Let’s take the grid 35N to 40N, 115W to 120W.
Let’s eliminate stations that are class 5 according to NOAA standards. Then recalculate the grid.
Your math is better than mine, so you and Gavin work it out. Should take a couple of days or so.
[Response: sounds fine, except I have a day job. Someone else might want to volunteer. Note that GISS uses information from up to 1200km away, so that’s a lot of stations. Turn it around. Calculate the trends just with the ones you think are good, and see if it’s different to the GISS or CRU analysis. For GISS, the linear trend from 1900 to 2006, for that grid box is 0.8 deg. – gavin]
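Gavin’s turn-it-around suggestion in miniature (wholly synthetic numbers): compute the grid-box mean trend with and without the suspect stations and compare.

```python
import numpy as np

def mean_trend(series, years):
    """Trend (deg C/century) of the average of the station series (rows)."""
    return 100 * np.polyfit(years, series.mean(axis=0), 1)[0]

# A hypothetical grid box: 17 good stations sharing a regional trend,
# plus 3 "class 5" stations with an extra spurious half degree/century
rng = np.random.default_rng(6)
years = np.arange(1900, 2007)
regional = 0.008 * (years - 1900) + rng.normal(0.0, 0.3, years.size)
good = np.stack([regional + rng.normal(0.0, 0.2, years.size) for _ in range(17)])
bad = np.stack([regional + 0.005 * (years - 1900)
                + rng.normal(0.0, 0.2, years.size) for _ in range(3)])
print(mean_trend(np.vstack([good, bad]), years))   # all stations
print(mean_trend(good, years))                     # suspect sites removed
```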
Harold Pierce Jr says
RE: #77 The data say no trend. Look at the SD’s. End of argument. I have more data from lighthouses that support this conclusion.
[Response: I’ll have to admit you are persistent, but you are simply wrong. But since you don’t want to do the calculation, let’s throw it out there into the blogosphere and see what anyone else says. Is there a significant trend here or not? (PS. trend according to Numerical Recipes is 0.91 deg C/century, SD of trend 0.24). – gavin]
Chuck Booth says
Re 80: “Let folks throw darts. Let folks poke holes. It makes everything stronger. If it takes an army of 500 volunteers 3 hours each to identify high quality stations, then the cost to verify one of the foundations of the thesis was quite low relative to the total spend on climate research.”
You can throw all the darts and poke all the holes you want at this site, but I doubt many of them will influence the active researchers in the field of climatology – a couple of the RC moderators who frequently comment here, yes; but the many hundreds (thousands?) of scientists who are busy out in the field collecting data or in their laboratories analyzing data and writing grant proposals and research papers, probably not. I think you need to find a method to promote your skepticism in a way that those researchers will sit up and take notice (once again, publishing in reputable peer-reviewed journals is one way; presenting a paper at a climatology research conference is another; perhaps there are other ways); arguing with the mostly non-climatologists who actively participate in the RC threads will have virtually no impact, I’m afraid.
Steve Bloom says
Re #35: “Pielke believes ocean heat content changes are the most reliable metric for assessing global heating and cooling.”
This sounds good since the trend in ocean heat content would be very, very close to the trend for the whole system, but just try finding any sort of calculation of this metric on his site. The problem is that the data isn’t available to be able to produce it in a useful form. It doesn’t seem possible that even Roger is so obtuse as to be unable to grasp this point. (Bite your tongue, Gavin.)
Re #68: Just to note that we seem to have a record low Arctic sea ice anomaly going in the last few days.
tamino says
Re: #83 (Harold Pierce Jr.)
Gavin is right, the data linked to do indeed show a significant trend. Your protest “Look at the SD’s. End of argument.” indicates that you really don’t understand the statistics of trend analysis.
Steve Reynolds says
59 Eli Rabett> Except that the NWS system used fiber optic modems for the RS-232 communication.
From your (informative) link: “In the mid- and late-1980s, the widely used air temperature radiation shield called the Cotton Region Shelter (CRS) was gradually replaced by the Maximum and Minimum Temperature System (MMTS) in the cooperative weather station network. In the 1990s the Automated Surface Observing System (ASOS) replaced conventional observations at National Weather Service (NWS) and Federal Aviation Administration stations that report hourly observations.”
The question is then how many MMTS in the cooperative weather station network were converted to ASOS?
I rather suspect that the number is small. Does anyone have that info?
Nicolas L. says
re 64
“I’ve noticed that claims about the warming of Mars, Neptune and Pluto are never challenged by sceptics or contrarians. Why not use the methods we use to measure these distant planets to measure the Earth? Since there is little criticism of planetary results, this seems to be a good middle ground solution”
Let’s see what planetologists have to say
The trend for a global warming on Mars over the last 20 years seems confirmed, but it actually has apparently little to do with the Earth’s warming. According to the NASA scientist who highlighted this warming, it is due to deposits of bright dust on the Martian surface.
http://humbabe.arc.nasa.gov/~fenton/
Note that this global warming has been studied by only one research team and presented in one article (to be compared to the thousands of articles studying climate trends on Earth), based on partial satellite data, and there is a serious debate now among the planetology community to determine if this is a persistent trend or if it will stop in a few years.
As for Pluto, I actually know of a few articles (a little too few to make a consensus) that assess that the Plutonian atmosphere is thickening, presumably showing a warming of its surface. Knowing that Pluto is the most distant planet from the sun, that the amount of energy it receives from the sun is hundreds of times less than what reaches the Earth, and that we know almost nothing about this little rock and its atmosphere (no probe has ever approached it), I wouldn’t bet much on a relevant and well established trend for now.
http://www.newscientist.com/article/mg17623653.100
http://www.space.com/scienceastronomy/pluto_warming_021009.html
Finally for Neptune, again I find only one relevant scientific study, from Hammel and Lockwood (Hammel, H. B., and G. W. Lockwood, 2007. Suggestive correlations between the brightness of Neptune, solar variability, and Earth’s temperature, Geophysical Research Letters).
They seem to establish an upward trend in infrared radiation of the planet since 1980, but no particular trend between 1950 and 1980.
This study, as far as I know, didn’t make much noise in the planetology world. I’m not the most qualified to judge their scientific work, but the two authors seem eager to attribute those measurements to an increase of solar irradiance since 1980, though no serious discussion of the other possible mechanisms (like atmospheric changes) is made in the paper. More importantly, it should be noticed that this increase of solar irradiance has not been measured by anyone yet (despite 24/7 observations of solar activity since long before 1980). They also are eager to link this “warming” to the Earth’s global warming, recognizing themselves that they have found no serious correlation between the two phenomena, yet stating:
“Nevertheless, the striking similarity of the temporal patterns of variation should not be ignored simply because of low formal statistical significance. If changing brightnesses and temperatures of two different planets are correlated, then some planetary climate changes may be due to variations in the solar system environment.”
Which is a quite unusual scientific statement (we have nothing to link those two phenomena, but we still think they are linked anyway). By the way, could someone explain to me what “suggestive correlation” exactly means :) ?
So basically, those uncontested planet warmings are each based on a very few studies (as far as I know; if anyone finds new data, I’m interested). Each one is poorly measured. In the case of Neptune and Pluto, the mechanism for this supposed warming still has to be found.
Most important, going further, none can apparently be linked with the Earth’s global warming.
Finally, and for fun, one should try to google a little bit with “Pluto warming” and “Neptune warming”. I did it and found an impressive quantity of links to contrarian blogs, but not many to the peer-reviewed scientific literature :)
ray ladbury says
Steve Mosher,
It really only makes sense to eliminate stations if they give consistently bad data. If the data are oversampled, any anomalies will be identifiable by rather simple analysis.
No system is ever perfect. The question you have to ask yourself is whether any improvement to the system will make a significant difference. I suspect that it would not for several reasons. First, as I said, in an oversampled system, anomalies are easy to identify. Second, we are looking at global trends, so unless there is a systematic error in siting/readings etc. bad stations will at worst produce noise on the overall trend. Even if a particular bad station had a paucity of good stations around it, it is unlikely that it would affect the global trend.
Should we look hard at station site quality for future stations? You bet! Should we have any doubt about the trends seen to date? No.
Ian Rae says
>Assumption #6 “…If the station is moved now, there will be another potential artifact in the record.”
This appears to contradict #4 which suggests that individual stations have little effect on gridded data.
>An argument could certainly be made that continuity of a series is more important for long term monitoring.
It would be a poor argument indeed that preferred the continuity of problematic sitings over good data. Surely if there is a significant problem with sitings, the solution is to discard problematic data. Not that surfacestations.org is even close to showing that a significant problem exists yet!
Stepping aside from the whole GW debate, aren’t people here surprised/shocked that there are multiple sites near A/C vents, asphalt, and buildings?
John F. Pittman says
Steve Milesworthy #51
For Assumption 1. Steve McIntyre: “If you are not a climate scientist (or a realclimate reader), you would almost certainly believe, from your own experience, that cities are warmer than the surrounding countryside. From that, it’s easy to conclude that as cities become bigger and as towns become cities and villages become towns, that there is a widespread impact on urban records from changes in landscape, which have to be considered before you can back out what portion is due to increased GHG. One of the main IPCC creeds is that the urban heat island effect has a negligible impact on large-scale averages such as CRU or GISS.”
It is quite plain that the difference is that Steve McIntyre is claiming that the IPCC and climate scientists are ignoring (saying it has negligible impact) what is an accepted fact. That assumption stated “mainstream science”, which is not what Steve McIntyre stated. However one may feel, think, or know about AGW, taking a stated group or groups to task is not the same as taking to task everyone who would be included by any reasonable definition of “mainstream science”, unless this is code for the IPCC and climate scientists, who make up only a small part of what is considered “mainstream science”.
#2 thinks that all station data are perfect.
“By not checking the point of data collection and ‘assuming’ that the weather station meets the published NOAA and WMO standards appears to have been standard practice for many researchers. If you were conducting an experiment where the results were likely to shape national and world policy, wouldn’t it be prudent to check the origin of the data set? Government (NCDC, Karl, et al) was charged with providing a relatively homogenous data set.” Isn’t it prudent to check data? Perfection is not claimed; care is. The claim is that lack of care and incorrect assumptions (where I would expect verification) appear to have been standard practice. There is no claim of perfection here.
No. 5: “Finding problems with individual station data somehow affects climate model projections. This probably stems from a misunderstanding of the notion of a physical model as opposed to a statistical model. However, the climate models used in the IPCC forecasts are not statistical, but are physical in nature.” I think the discussion of this item is most appropriate. It does highlight one of the major contentions or faults in current discussions. It is a computer model based on physics. It is not the physics itself. IPCC forecasts are not physical in nature; they are computer models of known physics. A problem with this is that it is unlikely all physical relations are known, or even could be modelled with current technology. But that does not make the models useless by any means. The question becomes: how can the models be verified? It appears the assumption was made that the models used actual data for verification. Many have expressed concern that actual data were not used for verification. I am in that crowd. I expect that if the IPCC presses for carbon reductions based on such, it will be strongly opposed until verification has been obtained and the verification itself verified. As it stands, if my state or the US proposed carbon reductions and could not provide the information in suitable format and every ‘i’ dotted and every ‘t’ crossed, I would vigorously oppose it. Not from any sense of denial, but from not having the information in hand that I could use to show management that the monies were a justifiable expense. I do not wish to be fired for incompetence.
#6. If only enough problems can be found, global warming will go away
“David Stockwell: if removing the contaminated stations reduced the 20th century increase to the point there was no increase in temperature, how could that possibly improve model fit, when the models show an increase of 0.5deg?” Steve, these are two different concepts. Asking that question is valid for determining if the data fit the requirements of showing global warming. It has been stated by most if not all that warming has occurred; the extent and how accurately we have measured it have been discussed. Nor is it trivial if you happen to have concluded that GW is equal to AGW. In order to determine the best course and plan, the extent and sensitivity of the relationship are needed. Otherwise you will be asking engineers such as myself to waste time and money. According to most AGW people, time should not be wasted. As an engineer in regulations and energy, money should not be wasted either. Lest you think there is some ulterior motive, remember an axiom of engineers: time and money are often interchangeable; wasting money on a failed solution is also a waste of time.
Ender says
Re 65: – “Interesting, not one person advocates what is the only sensible thing to do: perform a thorough review of the surface temperature sites.
Instead, abstract, Machiavellian motives are attached to anyone who dares question the suitability of the sites. Circle the wagons indeed.”
As far as I am concerned, go for it!!! The problem with people asking for such a review is that normally they ask for somebody else, anybody else, to do it rather than doing it themselves.
If you are capable of conducting such a review and are prepared to write the research proposal and get funding and do the man years of work necessary to do such a review then go for it. I am sure you would have the support of everyone in the field as long as you share the data and publish peer reviewed work.
The people who work in the field on climate science freely acknowledge the problems with the sensors; however, as they have problems getting funding and time for what they are struggling to do now, they would rather work with the system they have and compensate for the problems. They are satisfied that the system, even with its flaws, as long as you understand them, gives accurate enough answers.
The people who scream the loudest that the surface temperature record is flawed are strangely silent when it is suggested that they actually do something about it.
tamino says
A lot of posts on this thread have appealed to the logic of checking the data for correctness. Isn’t it prudent to check the data? Of course it is.
I suspect most of those posting such comments don’t realize just how much effort has been expended doing exactly that. Those who want to know more (and haven’t already made up their minds that AGW is a crock) should carefully read Hansen et al. 1999 and Hansen et al. 2001. The data are not perfect, neither are the procedures, but these papers belie the assertion that care has not been taken.
I would also add another mistaken assumption to the list:
Mistaken Assumption No. 7: Bad data will artificially inflate the estimated global warming. In fact bad data are as likely to artificially deflate the estimated warming as to inflate it. Those who wish to discredit AGW by insisting on more thorough data checking should consider that they may be unhappy to get what they ask for; when the data are checked even more carefully, we may find that the global surface temperature increase is even higher than presently believed.
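A quick simulation illustrates the point (synthetic data: one undocumented jump per station, of random sign at a random time): the recovered trends scatter more, but they do not scatter preferentially upward.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(100)
true_slope = 0.008                       # deg C/yr
errors = []
for _ in range(2000):
    series = true_slope * years + rng.normal(0.0, 0.2, 100)
    series[rng.integers(20, 80):] += rng.choice([-0.5, 0.5])  # jump, either sign
    errors.append(np.polyfit(years, series, 1)[0] - true_slope)
print(np.mean(errors), np.std(errors))   # mean error ~0; the spread is the cost
```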
ray ladbury says
#91 John Pittman,
Hmm, making statements in support of the scientific consensus position:
Intergovernmental Panel on Climate Change (IPCC) 2007
Joint science academies’ statement 2007
Joint science academies’ statement 2005
Joint science academies’ statement 2001
U.S. National Research Council, 2001
American Meteorological Society
American Geophysical Union
American Institute of Physics
American Astronomical Society
Federal Climate Change Science Program, 2006
American Association for the Advancement of Science
Stratigraphy Commission of the Geological Society of London
Geological Society of America
American Chemical Society
Engineers Australia (The Institution of Engineers Australia)
Making a completely mealy-mouthed, noncommittal statement:
American Association of State Climatologists–they accept that humans are changing climate–just don’t know what it will mean.
Dissenting from the scientific consensus:
American Association of Petroleum Geologists (hmm, wonder why) – oh but wait, they’re considering changing their statement and moving into the mealy-mouthed camp.
Yeah, I’d say that mainstream science is pretty much in the consensus camp, wouldn’t you? Or did I miss a significant field relevant to climate change?
Dan Hughes says
Allow me to make a simple declarative statement and ask a simple direct question, neither having any motive other than clarification.
I see no error bounds on the data in this graph.
Can a true and correct trend be determined under this condition?
Thanks
Paul Squires says
Ref #26
“Blue is cooling, red is warming, white is insufficient data (baseline years or recent years). Note all the white pins in Canada! For some reason they seem to have turned off their network in the late ’80s.”
As one who was involved in Environment Canada at the time that’s pretty well what happened. I won’t go into the details, but it still burns!
Great post and I appreciate the level of discussion that RC maintains!
ray ladbury says
Re 89. Ian, it may surprise you, but the goal of science is not to take data with the smallest possible error, but rather to take data where the errors are understood. Errors that are understood can be corrected for or used to make bounding estimates, etc. Errors that are not understood cannot be guaranteed to stay insignificant in all applications.
So, continuity of a data set is a perfectly legitimate reason for not moving a station.
Also, note that Gavin said that the artifact would be in the record – that is, in the data, not in the model. Don’t confuse the two. And no, throwing out the data is not the answer. Data with errors/noise are not necessarily bad data. If a record is intermittently bad, and you can identify the bad points, you use what’s still good. If it is skewed, you may be able to correct it. Even the information you get about the errors in the data is useful in correcting data.
It has been my experience that a graduate student who is desperate to get out of school is an excellent source for ideas on how to use marginal data. Of course, he or she would prefer to have pristine data, but if the choice is doing the experiment over again or correcting the data he or she has, most are more than willing to write an additional chapter on data correction in their thesis.
Paul G says
==Post # 65 by Dan: ==
==”The warming trends are shown by ocean temperatures, sea-level rise, glacier retreats, satellite measurements, etc. And US measurements are certainly not the critical issue with respect to *global* measurements.”==
Dan, you are avoiding the issue. If surface temperature site data is being used by climate scientists, and it is, these sites must be properly audited, or the data sets must be discarded. The rest of your post is peripheral to the issue.
==Comment #65 by Nick Gotts:==
==”If such a course of action were agreed (and how many person-years would be needed?)”==
Not long to photograph the sites, that’s for sure. That this has not been carried out already on a regular basis by climate professionals using the data is astounding.
==”. . . . the denialist response (despite all the independent lines of evidence) would be: “See! Even the AGW believers admit the surface temperature site data is worthless. It would be absurd to take any action while this huge uncertainty remains unresolved.”==
We’re not doing anything serious about AGW at present anyway, so we might as well improve the data until we do, if we do, decide to act.
pat n says
People should be concerned about UHI for health reasons. Data trends at climate stations with 100 years of record show increasing UHI in areas having experienced large economic growth, like Fort Collins, Billings, and Minneapolis.
http://picasaweb.google.com/npatphotos
http://new.photos.yahoo.com/patneuman2000/albums
rda says
Harold Pierce Jr,
I just did a quick regression analysis (using Excel) of the Quatsino, B.C. weather data. I looked at the annual average temperatures and obtained the exact same trend as Gavin — annual Tave has increased at the rate of 0.91deg per century. This trend is very highly significant (P=0.00026).
There’s 90+ years worth of data available. Seems kind of silly to toss out all of that and only look at a couple of data points in the manner that you did.
(BTW, why did they stop data collection in 1990??)