Observant readers will have noticed a renewed assault upon the meteorological station data that underpin some conclusions about recent warming trends. Curiously enough, it comes just as the IPCC AR4 report has declared the recent warming trends “unequivocal”, and when even Richard Lindzen has accepted that the globe has in fact warmed over the last century.
The new focus of attention is the placement of the temperature sensors and other potential ‘micro-site’ effects that might influence the readings. There is a possibility that these effects may change over time, introducing artifacts or jumps into the record. This is slightly different from the more often discussed ‘Urban Heat Island’ effect, which is a function of the wider area (and so could be present even in a perfectly set up urban station). UHI effects will generally lead to long-term trends in an affected station (relative to a rural counterpart), whereas micro-site changes could lead to jumps in the record (of either sign), some of which can be very difficult to detect in the data after the fact.
There is nothing wrong with increasing the meta-data for observing stations (unless it leads to harassment of volunteers). However, in the newfound enthusiasm for digital photography, many of the participants in this effort seem to have leaped to some very dubious conclusions that appear to be rooted in fundamental misunderstandings of the state of the science. Let’s examine some of those apparent assumptions:
Mistaken Assumption No. 1: Mainstream science doesn’t believe there are urban heat islands….
This is simply false. UHI effects have been documented in city environments worldwide and show that as cities become increasingly urbanised, increased energy use, reductions in surface water (and evaporation) and more concrete etc. tend to lead to warmer conditions than in nearby, more rural areas. This is uncontroversial. However, the actual claim of the IPCC is that the effects of urban heat islands are likely small in the gridded temperature products (such as those produced by GISS and the Climatic Research Unit (CRU)) because of efforts to correct for those biases. For instance, GISTEMP uses satellite-derived night light observations to classify stations as rural or urban and corrects the urban stations so that they match the trends from the rural stations before gridding the data. Other techniques (such as correcting for population growth) have also been used.
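To make the flavour of such a correction concrete, here is a minimal sketch in Python. It is not the GISTEMP code: the nightlight threshold, the simple linear de-trending and the toy data are all invented for illustration, whereas the real procedure uses a piecewise (‘two-legged’) fit against carefully selected rural neighbours.

```python
import numpy as np

def linear_trend(years, temps):
    """Least-squares slope (deg C per year) of a temperature series."""
    return np.polyfit(years, temps, 1)[0]

def adjust_urban_station(years, urban_temps, rural_series, nightlight, threshold=32):
    """Crude urban adjustment: if the (hypothetical) nightlight brightness exceeds
    the threshold, remove the station's excess linear trend relative to the mean
    trend of its rural neighbours. A simplified stand-in for the GISTEMP scheme."""
    if nightlight <= threshold:          # classified as rural: leave unchanged
        return urban_temps
    rural_trend = np.mean([linear_trend(years, r) for r in rural_series])
    excess = linear_trend(years, urban_temps) - rural_trend
    # subtract the excess trend, pivoting around the middle of the record
    return urban_temps - excess * (years - years.mean())

# toy example: an urban station warming faster than its three rural neighbours
rng = np.random.default_rng(0)
years = np.arange(1950, 2007)
rural = [0.01 * (years - 1950) + rng.normal(0, 0.2, years.size) for _ in range(3)]
urban = 0.04 * (years - 1950) + rng.normal(0, 0.2, years.size)
adjusted = adjust_urban_station(years, urban, rural, nightlight=50)
print(round(linear_trend(years, urban), 3), round(linear_trend(years, adjusted), 3))
```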
How much UHI contamination remains in the global mean temperatures has been tested in papers such as Parker (2005, 2006), which found there was no effective difference in global trends if one segregates the data between windy and calm days. This makes sense because UHI effects are stronger on calm days (when there is less mixing with the wider environment), and so if an increasing UHI effect were changing the trend, one would expect stronger trends on calm days, and that is not seen. Another convincing argument is that the regional trends seen simply do not resemble patterns of urbanisation, with the largest trends in the sparsely populated higher latitudes.
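The logic of the windy/calm test is simple enough to sketch. The function below splits a daily record by wind speed and compares the long-term trends of the two subsets; the 3 m/s threshold and the synthetic data are purely illustrative and not taken from Parker’s papers.

```python
import numpy as np

def trend_per_decade(time_in_years, temps):
    """Least-squares trend in deg C per decade."""
    return 10 * np.polyfit(time_in_years, temps, 1)[0]

def calm_vs_windy_trends(time_in_years, temps, wind_speed, calm_threshold=3.0):
    """Split a daily record by wind speed (m/s) and compare the two trends.
    If an urban heat island were inflating the record, the calm-day trend
    would be expected to exceed the windy-day trend."""
    calm = wind_speed < calm_threshold
    return (trend_per_decade(time_in_years[calm], temps[calm]),
            trend_per_decade(time_in_years[~calm], temps[~calm]))

# synthetic 50-year daily record with the same underlying trend on all days
rng = np.random.default_rng(1)
t = np.linspace(1956, 2006, 50 * 365)
temps = 0.02 * (t - 1956) + rng.normal(0, 1.0, t.size)
wind = rng.gamma(2.0, 2.0, t.size)                 # invented wind-speed climatology
print(calm_vs_windy_trends(t, temps, wind))        # the two trends should be similar
```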
Mistaken Assumption No. 2: … and thinks that all station data are perfect.
This too is wrong. Since scientists started thinking about climate trends, concerns have been raised about the continuity of records, whether they are met. stations, satellites or ocean probes. The danger of mistakenly interpreting jumps due to measurement discontinuities as climate trends is well known. Some of the discontinuities (which can be of either sign) in weather records can be detected using jump point analyses (for instance in the new version of the NOAA product), others can be adjusted using known information (such as biases introduced by changes in the time of observation or by moving a station). However, there are undoubtedly undetected jumps remaining in the records, but without the meta-data or an overlap with a nearby unaffected station to compare to, these changes are unlikely to be fixable. To assess how much of a difference they make, NCDC has set up a reference network, which is much more closely monitored than the volunteer network, to see whether the large-scale changes from this network and from the other stations match. Any mismatch will indicate the likely magnitude of differences due to undetected changes.
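As a cartoon of what a jump-point analysis does, the sketch below scans a single station series for the break that maximises the shift in mean between the two segments and applies a crude significance check. Operational homogenisation (for instance the pairwise comparisons in the NOAA product) works against neighbouring stations and is far more careful; the function names, threshold and data here are invented.

```python
import numpy as np

def find_largest_jump(series, min_segment=10):
    """Return (index, shift) of the candidate breakpoint where the difference
    between the mean before and the mean after the break is largest."""
    best_idx, best_shift = None, 0.0
    for i in range(min_segment, len(series) - min_segment):
        shift = series[i:].mean() - series[:i].mean()
        if abs(shift) > abs(best_shift):
            best_idx, best_shift = i, shift
    return best_idx, best_shift

def looks_like_a_jump(series, idx, shift, n_sigma=3.0):
    """Crude check: is the shift large compared with the scatter of the series
    about its two segment means?"""
    resid = np.concatenate([series[:idx] - series[:idx].mean(),
                            series[idx:] - series[idx:].mean()])
    return abs(shift) > n_sigma * resid.std() / np.sqrt(min(idx, len(series) - idx))

# synthetic annual anomalies with a 0.5 C station move after year 40
rng = np.random.default_rng(2)
series = rng.normal(0, 0.2, 80)
series[40:] += 0.5
idx, shift = find_largest_jump(series)
print(idx, round(shift, 2), looks_like_a_jump(series, idx, shift))
```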
It’s worth noting that these kinds of comparisons work because of the large distances over which the monthly temperature anomalies correlate. That is to say, if a station in Tennessee has a particularly warm or cool month, it is likely that temperatures in New Jersey, say, also had a similar anomaly. You can see this clearly in the monthly anomaly plots or by looking at how well individual stations correlate. It is also worth reading “The Elusive Absolute Surface Temperature” to understand why we care about the anomalies rather than the absolute values.
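Checking the correlation claim is a short calculation: convert each station’s monthly means to anomalies by subtracting that station’s long-term mean for the same calendar month, then correlate the two anomaly series. The sketch below uses synthetic data with a shared regional signal; note that the two stations have quite different absolute temperatures, which is part of why anomalies, not absolute values, are the useful quantity.

```python
import numpy as np

def monthly_anomalies(temps):
    """temps: array of shape (n_years, 12) of monthly mean temperatures.
    Returns anomalies relative to each calendar month's long-term mean."""
    return (temps - temps.mean(axis=0)).ravel()

def anomaly_correlation(temps_a, temps_b):
    """Pearson correlation of the monthly anomaly series of two stations."""
    return np.corrcoef(monthly_anomalies(temps_a), monthly_anomalies(temps_b))[0, 1]

# synthetic example: two stations sharing a regional signal plus local noise,
# with very different absolute temperatures (12.0 C vs 3.5 C annual mean)
rng = np.random.default_rng(3)
regional = rng.normal(0, 1.0, (50, 12))
station_a = 12.0 + regional + rng.normal(0, 0.5, (50, 12))
station_b = 3.5 + regional + rng.normal(0, 0.5, (50, 12))
print(round(anomaly_correlation(station_a, station_b), 2))   # strongly positive
```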
Mistaken Assumption No. 3: CRU and GISS have something to do with the collection of data by the National Weather Services (NWSs)
Two of the global mean surface temperature products are produced outside of any National Weather Service. These are the products from CRU in the UK and NASA GISS in New York. Both CRU and GISS produce gridded products, using different methodologies, starting from raw data from NWSs around the world. CRU has direct links with many of them, while GISS gets the data from NOAA (who also produce their own gridded product). There are about three people involved in doing the GISTEMP analysis and they spend a couple of days a month on it. The idea that they are in any position to personally monitor the health of the observing network is laughable. That is, quite rightly, the responsibility of the National Weather Services who generally treat this duty very seriously. The purpose of the CRU and GISS efforts is to produce large scale data as best they can from the imperfect source material.
Mistaken Assumption No. 4: Global mean trends are simple averages of all weather stations
As discussed above, each of the groups making gridded products goes to a lot of trouble to eliminate problems (such as UHI) or jumps in the records, so the global means you see are not simple means of all data (this NCDC page explains some of the issues in their analysis). The methodology of the GISS effort is described in a number of papers – particularly Hansen et al. 1999 and 2001.
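A minimal sketch of the final averaging step may help, assuming the station anomalies have already been adjusted and binned into latitude-longitude boxes: the global mean is an area-weighted average over boxes (weights proportional to the cosine of latitude), not a straight mean over stations, so a dense cluster of stations cannot dominate the result. The grid resolution and numbers below are invented, and the real analyses differ in many details.

```python
import numpy as np

def global_mean_anomaly(grid_anomaly, lat_centers):
    """Area-weighted mean of a gridded anomaly field.
    grid_anomaly: (n_lat, n_lon) array, NaN where a box has no data.
    lat_centers: latitude of each row, in degrees."""
    weights = np.cos(np.radians(lat_centers))[:, None] * np.ones_like(grid_anomaly)
    valid = ~np.isnan(grid_anomaly)
    return np.nansum(grid_anomaly * weights * valid) / np.sum(weights * valid)

# toy 5-degree grid: many warm boxes at high northern latitudes, a few near the equator
lat_centers = np.arange(-87.5, 90.0, 5.0)
grid = np.full((lat_centers.size, 72), np.nan)
grid[-6:, :] = 1.5          # well-sampled high-latitude boxes, +1.5 C anomaly
grid[15:21, ::4] = 0.2      # sparsely sampled tropical boxes, +0.2 C anomaly
print(round(global_mean_anomaly(grid, lat_centers), 2))
```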
Mistaken Assumption No. 5: Finding problems with individual station data somehow affects climate model projections.
The idea apparently persists that climate models are somehow built on the surface temperature records, and that any adjustment to those records will change the model projections for the future. This probably stems from a misunderstanding of the notion of a physical model as opposed to a statistical model. A statistical model of temperature might, for instance, calculate a match between known forcings and the station data and then attempt to make a forecast based on the change in projected forcings. In such a case, the projection would be affected by any adjustment to the training data. However, the climate models used in the IPCC forecasts are not statistical, but are physical in nature. They are self-consistent descriptions of the whole system whose inputs are only the boundary conditions and the changes in external forces (such as the solar constant, the orbit, or greenhouse gases). They do not assimilate the surface data, nor are they initialised from it. Instead, the model results for, say, the mean climate, or the change in recent decades, or the seasonal cycle, or the response to El Niño events, are compared to the equivalent analyses in the gridded observations. Mismatches can help identify problems in the models, and are used to track improvements to the model physics. However, it is generally not possible to ‘tune’ the models to fit very specific bits of the surface data, and the evidence for that is the remaining (significant) offsets in average surface temperatures between the observations and the models. There is also no attempt to tweak the models in order to get better matches to regional trends in temperature.
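The contrast is easiest to see with a toy statistical model, which is precisely the kind of model the GCMs are not: regress observed temperatures on a forcing index and extrapolate. In such a scheme any revision of the temperature record feeds straight through into the ‘projection’; in a physical model the observations only enter afterwards, for comparison. Everything below (the linear form, the numbers) is invented for illustration.

```python
import numpy as np

def statistical_projection(forcing_hist, temps_hist, forcing_future):
    """Toy 'statistical model': regress historical temperature anomalies on a
    forcing index and extrapolate. Revising temps_hist changes the forecast,
    which is exactly why this is NOT how the IPCC model projections are made."""
    sensitivity, offset = np.polyfit(forcing_hist, temps_hist, 1)
    return sensitivity * forcing_future + offset

rng = np.random.default_rng(4)
forcing_hist = np.linspace(0.0, 1.6, 100)                      # invented forcing index
temps_hist = 0.5 * forcing_hist + rng.normal(0, 0.1, 100)      # invented temperatures
forcing_future = np.array([2.0, 3.0, 4.0])

original = statistical_projection(forcing_hist, temps_hist, forcing_future)
revised = statistical_projection(forcing_hist, temps_hist - 0.05, forcing_future)
print(original - revised)   # a 0.05 C revision of the record shifts this 'projection'
```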
Mistaken Assumption No. 6: If only enough problems can be found, global warming will go away
This is really two mistaken assumptions in one: that there is so little redundancy that throwing out a few dodgy met. stations will seriously affect the mean, and that evidence for global warming is exclusively tied to the land station data. Neither is true. It has been estimated that the mean anomaly in the Northern Hemisphere at the monthly scale only has around 60 degrees of freedom – that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large-scale month-to-month changes. Currently, although they are not necessarily ideally placed, there are thousands of stations – many times more than would be theoretically necessary. The second error is obvious from the fact that the recent warming is seen in the oceans, the atmosphere, in Arctic sea ice retreat, in glacier recession, earlier springs, reduced snow cover etc., so even if all met stations were contaminated (which they aren’t), global warming would still be “unequivocal”. Since many of the participants in the latest effort appear to really want this assumption to be true, pointing out that it doesn’t really follow might be a disincentive, but hopefully they won’t let that detail damp their enthusiasm…
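The redundancy point can be illustrated with a quick subsampling experiment: estimate the hemispheric monthly mean from the full (synthetic) network and from a random 60-station subset, and compare. With stations that all see the common signal, the small subset tracks the full network very closely; the data below are synthetic and no area weighting is attempted.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic network: 2000 stations all seeing a common hemispheric signal
# plus independent station-level noise, for 600 months
n_stations, n_months = 2000, 600
hemispheric_signal = np.cumsum(rng.normal(0, 0.1, n_months))    # slowly varying signal
station_anoms = hemispheric_signal + rng.normal(0, 1.0, (n_stations, n_months))

full_mean = station_anoms.mean(axis=0)                  # estimate from all stations
subset = rng.choice(n_stations, size=60, replace=False)
subset_mean = station_anoms[subset].mean(axis=0)        # estimate from just 60 of them

# the 60-station estimate tracks the full-network estimate very closely
print(round(np.corrcoef(full_mean, subset_mean)[0, 1], 3))
```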
What then is the benefit of this effort? As stated above, more information is always useful, but knowing what to do about potentially problematic sitings is tricky. One would really like to know when a problem first arose, for instance – something that isn’t clear from a photograph taken today. If the station is moved now, there will be another potential artifact in the record. An argument could certainly be made that continuity of a series is more important for long-term monitoring. A more convincing comparison, though, will be of the existing network with the Climate Reference Network (in place since 2001) from NCDC. However, that probably isn’t as much fun as driving around the country taking snapshots.
Paul says
Re 489
You can read much of the report online; perhaps you should try that. Here are a couple of bits.
“Vastly improved documentation of all changes in equipment, operations, and site factors in operational observing systems are required to build confidence in the time series of decadal-to-centennial climate change”
or how about
“Failure to pursue this recommendation will result in the CONTINUED struggle by USGCRP and other decision makers to distinguish between real observed climate change and artefacts produced by inadequate observing systems and data management practices.”
and yes they are talking about the US observational system.
dhogaza says
Struggle. Not failure.
Do you understand the difference?
Timothy Chase says
Paul (#501) wrote:
The current system is inadequate – or soon will be – if we are concerned with what will be happening to various parts of the country.
If I may quote from the Executive Summary:
In terms of simply identifying the trends in temperature at a global or national level, the current systems are more than adequate. Statistics can and does extract the signal from the noise.
The text of the Executive Summary says as much:
I don’t know if you have noticed, but at this point we are developing models for specific parts of the country and attempting to project what the average summer precipitation and temperatures and variability will be for areas around specific cities.
Ray Ladbury (#112) said as much earlier:
This will be required in order to plan for the changes which are coming down the pipeline. We will need to start making investments soon and over the next several decades if only for the purpose of preparing for the changes which lie ahead.
However, for models to accurately achieve this level of resolution, we need to be able to test them against real-world data. That is how science generally works. Additionally, we need to keep in mind that current data is already being used for purposes that were unforeseen at the time the instrumentation was put in place. It is likely that the uses to which data will be put twenty years from now are also unforeseen.
One can freely download the Executive Summary at:
Adequacy of Climate Observing Systems (1999)
Executive Summary
http://books.nap.edu/execsumm_pdf/6424.pdf
… and the entire report is available on-line at:
Adequacy of Climate Observing Systems (1999)
Board on Atmospheric Sciences and Climate
http://books.nap.edu/openbook.php?record_id=6424
One last point: it is clear from the report that increased funding for the maintenance of existing systems and the development of new systems would be of great value. It is also clear that they do not expect such funding.
If someone is seriously interested in improving the system, rather than snapping some pictures of various existing sites, they should in all likelihood be pushing to increase the resources which are made available to the climate monitoring network. Local, regional and national economies will be affected by the types of data and the resolution of this data as it forms the basis for various investment decisions.
Finally, I would also like to remind anyone who is coming into this discussion rather late that I am not involved in climate modeling or the mitigation of climate change, but am merely a concerned citizen. Judging from what I have read, a great many people are going to be affected just within the next forty years, and things are going to get a great deal more serious in the latter half of this century.
steven mosher says
I would like to thank the members of this blog and the scientists involved for the recent correction to GISTEMP. The adjustment to global temps was minor, 0.01C or so. The adjustment to the US was on the order of 0.15C for the years 2000-2006.
1. Gavin. You have always been a gentleman and a scholar, even when some of us (ok, me) have been obnoxious. Thanks for your pointers and help and patience.
2. Tamino, Hank Roberts, Eli, Steve Bloom. Thanks for challenging us on the value of pictures, and for pushing us to be more scientific: to audit the whole network, and then go global.
3. Dr. Pielke: thanks for your kind encouragement and for showing us how important it is to actually observe the sites.
4. Anthony W. You reduced global warming by 0.01C by taking pictures! You should sell carbon credits.
5. SteveMc. Next?
6. Dr. Hansen and Ruedy. You have been most gracious.
steven mosher says
re 496.
Timothy, you are welcome to come over to CA and discuss the Oke paper. I am lobbying to get a thread going so that people can tear it apart (we did one on Parker, pro and con, submitted questions to him and he was kind enough to respond). Some folks (the stats types) have reservations about Oke’s cooling ratio metric because of the variance problems with ratios. That should be a hot topic.
We don’t have a thread yet, but if we get one you are more than welcome to join. I think your perspective would be a healthy addition. In fact, when we discussed Parker’s paper on UHI, Neal King, who defended Parker, pretty much led the discussion.
Anyway, if we get a thread going (SteveM’s decision), I will come back and invite you to join the discussion.
steven mosher says
Timothy you wrote
“If someone is seriously interested in improving the system, rather than snapping some pictures of various existing sites, they should in all likelihood be pushing to increase the resources which are made available to the climate monitoring network. Local, regional and national economies will be affected by the types of data and the resolution of this data as it forms the basis for various investment decisions.”
Well, I should tell you that in addition to snapping pictures, Anthony fully supports improvements to the USHCN and the development of the CRN. The association of state climatologists has also warned about the continued deterioration of the historical network. Anthony is raising this issue in his meetings with his representative. Have you scheduled a meeting with your congressman to discuss the importance of funding improvements to our weather and climate monitoring system? One goal of Surfacestations is to educate people about the deterioration of the network.
Pictures help, but a letter or visit to your congressman would also help. Also, we need people to help out at Surface Stations. This documentation will help us lobby for more money for climate research.
Paul says
RE 502
You can struggle and fail or struggle and succeed. The word struggle does not indicate which. The fact that the observing systems and data management practices are described as “inadequate” suggests that the struggle may be leading to failure, but of course you can put your own interpretation on such ambiguous language. The point is that the authors agree with those over at surfacestations.com, that the current system is not very good.
Vernon says
Why surfacestations.org is bad for this site:
Hansen says in Hansen et al. (2001):
http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf
Well, now we have proof that Hansen’s lights=0 methodology does not work without actually checking the stations for asphalt, concrete, air conditioners, etc.
[Response: “could” != “does”. The latter requires a demonstration that the microsite issues actually add up to something. That has not been demonstrated in the slightest. – gavin]
Vernon says
Gavin, you show me proof that Hansen’s methodology, which depends (per Hansen) on ‘the accuracy of the temperature records of the unlit stations’, is valid when that accuracy is cast in doubt by the failure to follow NOAA and WMO siting standards. The burden is on Hansen to show that his methodology is valid, not on anyone else. As you say, Hansen could be right != Hansen was right. The latter requires a demonstration that the microsite issues do not add up to anything!
[Response: You have it backwards. An analysis is done using the imperfect data that is available. A question is raised about an effect that was not specifically addressed but no quantitative assessment of its importance is made. Then you demand that this effect be proven to be zero. How do you think that could happen? (Remember you can’t prove a negative). If however, you think there is a problem, quantify it! Do an analysis only using stations you think are good and see if it is the same as if you use all of them. That would be interesting. Conventional wisdom (which is not necessarily true of course) is that microsite issues mostly cancel out in the mean. The high correlation of nearby stations with each other, and the concurrence of plenty of other signs of warming, including the satellite data, all suggest that this is a reasonable assumption. The burden of proof that it isn’t is on you. – gavin]
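For readers who want to try the analysis Gavin suggests, a minimal sketch is below; the station data and the ‘good station’ flag are invented, and a real test would use actual USHCN records together with an independent siting survey. The question is simply whether the mean anomaly series (or its trend) from the approved subset differs measurably from that of the full network.

```python
import numpy as np

def compare_subsets(anomalies, is_good):
    """anomalies: (n_stations, n_months) monthly anomalies.
    is_good: boolean flag per station (e.g. from a hypothetical site survey).
    Returns the trend (deg C/decade) from the good subset and from all stations."""
    years = np.arange(anomalies.shape[1]) / 12.0
    trend = lambda series: 10 * np.polyfit(years, series, 1)[0]
    return trend(anomalies[is_good].mean(axis=0)), trend(anomalies.mean(axis=0))

# invented example: 1000 stations over 30 years, 10% flagged as poorly sited
rng = np.random.default_rng(6)
months = np.arange(360)
anoms = 0.02 / 12 * months + rng.normal(0, 0.5, (1000, 360))   # 0.2 C/decade signal
is_good = rng.random(1000) > 0.1
print(compare_subsets(anoms, is_good))   # the two trends should agree closely
```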
Vernon says
Gavin, you are making statements which you have no proof of. If you do, please cite the study that proves your position. The burden of proof is on the person that did the study. What is being shown is that Hansen’s methodology for UHI is questionable, and no, it is not on me to prove he is wrong; it is on him to prove he is right. Surfacestations.org is showing that, for Hansen’s study, there is no proof the data are accurate. I am not the one making the claim; that would be Hansen. I am not the one basing a model on data that is under contention and refusing to admit it.
[Response: Hansen’s 2001 study was to try and remove the effect of UHI, not microsite effects. In that study, they found the average US trend of urban stations to be 0.3 deg C/century greater than the trend of the rural stations (and then adjusted for it so that it didn’t affect the final graphs). What claim do you think I or Hansen are making without proof? – gavin]