Observant readers will have noticed a renewed assault upon the meteorological station data that underpin some conclusions about recent warming trends. Curiously enough, it comes just as the IPCC AR4 report declared that the recent warming trends are “unequivocal”, and when even Richard Lindzen has accepted that the globe has in fact warmed over the last century.
The new focus of attention is the placement of the temperature sensors and other potential ‘micro-site’ effects that might influence the readings. There is a possibility that these effects may change over time, introducing artifacts or jumps into the record. This is slightly different from the more often discussed ‘Urban Heat Island’ effect, which is a function of the wider area (and so could be present even in a perfectly set-up urban station). UHI effects will generally lead to long-term trends in an affected station (relative to a rural counterpart), whereas micro-site changes could lead to jumps in the record (of either sign) – some of which can be very difficult to detect in the data after the fact.
There is nothing wrong with expanding the meta-data for observing stations (unless it leads to harassment of volunteers). However, in the newfound enthusiasm for digital photography, many of the participants in this effort seem to have leaped to some very dubious conclusions that appear to be rooted in fundamental misunderstandings of the state of the science. Let’s examine some of those apparent assumptions:
Mistaken Assumption No. 1: Mainstream science doesn’t believe there are urban heat islands….
This is simply false. UHI effects have been documented in city environments worldwide and show that as cities become increasingly urbanised, the combination of greater energy use, reduced surface water (and evaporation) and more concrete etc. tends to lead to warmer conditions than in nearby, more rural areas. This is uncontroversial. However, the actual claim of the IPCC is that the effects of urban heat islands are likely small in the gridded temperature products (such as those produced by GISS and the Climate Research Unit (CRU)) because of efforts to correct for those biases. For instance, GISTEMP uses satellite-derived night light observations to classify stations as rural or urban and corrects the urban stations so that they match the trends from the rural stations before gridding the data. Other techniques (such as correcting for population growth) have also been used.
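To make the flavour of such an adjustment concrete, here is a toy sketch in Python. It is emphatically not the actual GISTEMP algorithm (which uses a two-legged piecewise fit against rural neighbours identified from night lights); all the station data below are synthetic, and the adjustment simply forces the urban trend to match the rural one:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2001)

# Synthetic data: a shared regional signal, a clean "rural" station,
# and an "urban" station carrying an extra spurious warming trend.
regional = 0.010 * (years - years[0]) + rng.normal(0, 0.10, years.size)
rural = regional + rng.normal(0, 0.05, years.size)
urban = regional + 0.020 * (years - years[0]) + rng.normal(0, 0.05, years.size)

# Toy adjustment: remove the difference between the urban station's
# linear trend and the rural trend before any further use of the data.
urban_slope = np.polyfit(years, urban, 1)[0]
rural_slope = np.polyfit(years, rural, 1)[0]
adjusted = urban - (urban_slope - rural_slope) * (years - years.mean())

print(f"rural trend:            {rural_slope * 100:.2f} C/century")
print(f"urban trend (raw):      {urban_slope * 100:.2f} C/century")
print(f"urban trend (adjusted): {np.polyfit(years, adjusted, 1)[0] * 100:.2f} C/century")
```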
How much UHI contamination remains in the global mean temperatures has been tested in papers such as Parker (2005, 2006), which found there was no effective difference in global trends if one segregates the data between windy and calm days. This makes sense because UHI effects are stronger on calm days (when there is less mixing with the wider environment), so if a growing UHI effect were changing the trend, one would expect stronger trends on calm days, and that is not seen. Another convincing argument is that the regional trends seen simply do not resemble patterns of urbanisation, with the largest trends in the sparsely populated higher latitudes.
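The logic of the windy/calm test is easy to demonstrate on synthetic data. This is a toy sketch, not Parker's analysis: with no bias the two trends agree (as Parker found in the real data), while injecting a creeping calm-day bias pulls the calm-day trend away from the windy-day trend:

```python
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(365 * 30)                  # 30 years of daily anomalies
warming = 0.02 / 365.25 * days              # true signal, ~0.2 C/decade
calm = rng.random(days.size) < 0.5          # randomly label days calm/windy
anomaly = warming + rng.normal(0, 0.5, days.size)

# Set to e.g. 0.01 / 365.25 to simulate a creeping calm-day UHI bias.
uhi_bias = 0.0
anomaly[calm] += uhi_bias * days[calm]

per_decade = 3652.5
calm_trend = np.polyfit(days[calm], anomaly[calm], 1)[0] * per_decade
windy_trend = np.polyfit(days[~calm], anomaly[~calm], 1)[0] * per_decade
print(f"calm-day trend:  {calm_trend:.3f} C/decade")
print(f"windy-day trend: {windy_trend:.3f} C/decade")
```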
Mistaken Assumption No. 2: … and thinks that all station data are perfect.
This too is wrong. Since scientists started thinking about climate trends, concerns have been raised about the continuity of records – whether they are met. stations, satellites or ocean probes. The danger of mistakenly interpreting jumps due to measurement discontinuities as climate trends is well known. Some of the discontinuities (which can be of either sign) in weather records can be detected using jump point analyses (for instance in the new version of the NOAA product); others can be adjusted using known information (such as biases introduced by changes in the time of observation, or by moving a station). However, there are undoubtedly undetected jumps remaining in the records, and without the meta-data or an overlap with a nearby unaffected station to compare against, these changes are unlikely to be fixable. To assess how much of a difference they make, though, NCDC has set up a reference network which is much more closely monitored than the volunteer network, to see whether the large-scale changes from this network and from the other stations match. Any mismatch will indicate the likely magnitude of differences due to undetected changes.
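For readers curious what a ‘jump point analysis’ looks like in miniature, here is a minimal sketch (not NOAA's actual pairwise homogenization procedure): scan every candidate break month in a synthetic series and keep the one where the means on either side differ most significantly:

```python
import numpy as np

rng = np.random.default_rng(2)
series = rng.normal(0, 0.4, 240)    # 20 years of synthetic monthly anomalies
series[100:] += 0.6                 # an undocumented station move

def best_breakpoint(x, min_seg=12):
    """Scan candidate break months; return the one maximizing a t-like statistic."""
    best_k, best_stat = None, 0.0
    for k in range(min_seg, x.size - min_seg):
        a, b = x[:k], x[k:]
        se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
        stat = abs(a.mean() - b.mean()) / se
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

k, stat = best_breakpoint(series)
print(f"break detected at month {k} (statistic {stat:.1f}); true break at 100")
```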
It’s worth noting that these kinds of comparisons work because of the large distances over which monthly temperature anomalies correlate. That is to say, if a station in Tennessee has a particularly warm or cool month, it is likely that temperatures in New Jersey, say, show a similar anomaly. You can see this clearly in the monthly anomaly plots or by looking at how well individual stations correlate. It is also worth reading “The Elusive Absolute Surface Temperature” to understand why we care about anomalies rather than absolute values.
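A toy illustration of that spatial coherence, with synthetic monthly anomalies: two distant stations sharing a common large-scale signal, but with independent local noise, still track each other closely:

```python
import numpy as np

rng = np.random.default_rng(3)
months = 600                                      # 50 years of monthly anomalies

# A shared large-scale signal plus independent local noise at each station.
shared = rng.normal(0, 1.0, months)
station_tn = shared + rng.normal(0, 0.4, months)  # "Tennessee"
station_nj = shared + rng.normal(0, 0.4, months)  # "New Jersey"

r = np.corrcoef(station_tn, station_nj)[0, 1]
print(f"monthly anomaly correlation: r = {r:.2f}")  # ~0.85 with these settings
```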
Mistaken Assumption No. 3: CRU and GISS have something to do with the collection of data by the National Weather Services (NWSs)
Two of the global mean surface temperature products are produced outside of any National Weather Service. These are the products from CRU in the UK and NASA GISS in New York. Both CRU and GISS produce gridded products, using different methodologies, starting from raw data from NWSs around the world. CRU has direct links with many of them, while GISS gets the data from NOAA (who also produce their own gridded product). There are about three people involved in doing the GISTEMP analysis and they spend a couple of days a month on it. The idea that they are in any position to personally monitor the health of the observing network is laughable. That is, quite rightly, the responsibility of the National Weather Services who generally treat this duty very seriously. The purpose of the CRU and GISS efforts is to produce large scale data as best they can from the imperfect source material.
Mistaken Assumption No. 4: Global mean trends are simple averages of all weather stations
As discussed above, each of the groups making gridded products goes to a lot of trouble to eliminate problems (such as UHI) or jumps in the records, so the global means you see are not simple means of all data (this NCDC page explains some of the issues in their analysis). The methodology of the GISS effort is described in a number of papers – particularly Hansen et al 1999 and 2001.
Mistaken Assumption No. 5: Finding problems with individual station data somehow affects climate model projections.
The idea apparently persists that climate models are somehow built on the surface temperature records, and that any adjustment to those records will change the model projections for the future. This probably stems from a misunderstanding of the notion of a physical model as opposed to a statistical model. A statistical model of temperature might, for instance, calculate a match between known forcings and the station data and then attempt to make a forecast based on the change in projected forcings. In such a case, the projection would be affected by any adjustment to the training data. However, the climate models used in the IPCC forecasts are not statistical, but are physical in nature. They are self-consistent descriptions of the whole system whose inputs are only the boundary conditions and the changes in external forces (such as the solar constant, the orbit, or greenhouse gases). They do not assimilate the surface data, nor are they initialised from it. Instead, the model results for, say, the mean climate, or the change in recent decades, or the seasonal cycle, or the response to El Niño events, are compared to the equivalent analyses in the gridded observations. Mismatches can help identify problems in the models, and are used to track improvements to the model physics. However, it is generally not possible to ‘tune’ the models to fit very specific bits of the surface data, and the evidence for that is the remaining (significant) offsets between average surface temperatures in the observations and in the models. There is also no attempt to tweak the models in order to get better matches to regional trends in temperature.
Mistaken Assumption No. 6: If only enough problems can be found, global warming will go away
This is really two mistaken assumptions in one: that there is so little redundancy that throwing out a few dodgy met. stations will seriously affect the mean, and that the evidence for global warming is exclusively tied to the land station data. Neither is true. It has been estimated that the mean anomaly in the Northern Hemisphere at the monthly scale has only around 60 degrees of freedom – that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large-scale month-to-month changes. Currently, although they are not necessarily ideally placed, there are thousands of stations – many times more than would be theoretically necessary. The second error is obvious from the fact that the recent warming is seen in the oceans, the atmosphere, in Arctic sea ice retreat, in glacier recession, earlier springs, reduced snow cover etc., so even if all met. stations were contaminated (which they aren’t), global warming would still be “unequivocal”. Since many of the participants in the latest effort appear to really want this assumption to be true, pointing out that it doesn’t follow might be a disincentive, but hopefully they won’t let that detail dampen their enthusiasm…
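The redundancy claim is easy to check numerically. In this synthetic sketch, a random 60-station subset of a 2000-station network reproduces the month-to-month behaviour of the full network almost exactly:

```python
import numpy as np

rng = np.random.default_rng(4)
n_stations, n_months = 2000, 120

# Every synthetic station sees the same large-scale monthly anomaly
# plus its own local weather noise.
large_scale = rng.normal(0, 1.0, n_months)
network = large_scale + rng.normal(0, 1.5, (n_stations, n_months))

full_mean = network.mean(axis=0)
subset = rng.choice(n_stations, 60, replace=False)
subset_mean = network[subset].mean(axis=0)

r = np.corrcoef(full_mean, subset_mean)[0, 1]
rms = np.sqrt(np.mean((full_mean - subset_mean) ** 2))
print(f"correlation with full network: {r:.3f}, rms difference: {rms:.3f} C")
```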
What then is the benefit of this effort? As stated above, more information is always useful, but knowing what to do about potentially problematic sitings is tricky. One would really like to know when a problem first arose, for instance – something that isn’t clear from a photograph taken today. If the station is moved now, there will be another potential artifact in the record. An argument could certainly be made that continuity of a series is more important for long-term monitoring. A more convincing comparison, though, will be of the existing network with the (since 2001) Climate Reference Network from NCDC. However, that probably isn’t as much fun as driving around the country taking snapshots.
Verne Bauman says
Is there something wrong with using the satellite data set (below) to establish global temperature changes over the last 25 years or so? It seems to me that these measurements have been scrutinized and corrected to satisfy the most skeptical observer. Why all this concern with the meteorological station data now that there is a better way to track global temperature changes and a reasonably long record? Just asking.
http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.2
Hank Roberts says
DocM, here’s an online text on the subject:
“Data Analysis: S&L1 Introduction
http://www.dartmouth.edu/~mss/Volumes%20I%20and%20II%20.pdf
on p.229 (of 636 pages):
“… The moral of the story is that if the variation of values is due to unbiased measurement error, then the distribution of values should be symmetrical and bell shaped.
“As is frequent in data analysis, the application … requires that we use it backward: When your data is not symmetrical and bell shaped, then you can not explain the variation … When the data is not symmetrical and bell shaped, you’ve got some work to do to explain why not….”
You’re asking how much noise you need to add to raise a flag? If it’s real noise, it just widens the range. If it’s bias you want to introduce, you can do that in Stoat’s (Hadley) database and see for yourself how much you have to introduce to obscure the five, ten and longer term trend lines.
ray ladbury says
re: 351. Verne, this dataset is discussed on this site here:
https://www.realclimate.org/index.php/archives/2005/08/et-tu-lt/
tamino says
Re: Noise
Noise comes in many forms.
Truly random noise tends to be unbiased and uncorrelated. This means that it has equal chance to be positive or negative so the “expected value” of the noise is zero, and the noise on any given day/month/year/whatever is unrelated to the noise at any other day/month/year/whatever. This kind of noise has very little effect on the estimated trend, except to make it more “fuzzy” — it generally increases the uncertainty in our estimate of the physical signal, but the error range of our estimate will still include the true value. If you take the “true” signal data, and add noise computed using a random-number generator with mean value zero, you have added unbiased, uncorrelated noise. Unless the noise is huge compared to the signal (so the signal-to-noise level is tiny), or the number of available data is small, it’s generally very easy to remove its impact, and we will be able to recover the signal.
If the noise is biased, so its expected value is not zero, we can still recover the trend, so long as the bias is constant throughout time. In fact, translating from raw data to anomaly will remove the effect of the noise bias.
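(A quick numerical check of the last two paragraphs, using synthetic data: unbiased noise barely moves the fitted slope, and since a constant bias only shifts the intercept, converting to anomalies removes the offset entirely.)

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(100)
signal = 0.02 * t                                    # true slope: 0.020

noisy = signal + rng.normal(0, 1.0, t.size)          # unbiased noise
biased = signal + 3.0 + rng.normal(0, 1.0, t.size)   # plus a constant +3 bias

print(f"slope from noisy data: {np.polyfit(t, noisy, 1)[0]:.3f}")
# A constant bias shifts only the intercept; taking anomalies
# (subtracting a baseline mean) removes the offset itself.
anomaly = biased - biased.mean()
print(f"slope from biased data, as anomalies: {np.polyfit(t, anomaly, 1)[0]:.3f}")
```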
Real difficulties in recovering the signal arise when the noise is biased, and the bias is not constant through time. This can happen when the instrumentation, or data-collection procedures or environment, change over time. In such cases we can consider the time-evolving bias to be part of the signal, so the problem becomes one of separating the physical signal from the instrumental signal.
If we have only one reporting station, we can only separate the two kinds of signal if they have different mathematical behavior. For example, if we change from one kind of thermometer to another which has a different bias, the instrumental signal is a step change, a sudden shift from one value to another. If the physical signal is a linear increase/decrease, then these two signals are of different mathematical character, and can be separated by mathematical analysis.
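(A sketch of that single-station separation on synthetic data: fit an intercept, a linear trend and a step jointly by least squares; because the basis functions differ mathematically, the two signals come apart cleanly. The break date is assumed known here; in practice it would be scanned for or taken from meta-data.)

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(200)
break_at = 120

physical = 0.01 * t                                  # linear physical trend
instrumental = np.where(t >= break_at, 0.8, 0.0)     # thermometer change
data = physical + instrumental + rng.normal(0, 0.3, t.size)

# Design matrix: intercept, linear trend, step function.
X = np.column_stack([np.ones_like(t), t, (t >= break_at).astype(float)])
coef, *_ = np.linalg.lstsq(X, data, rcond=None)
print(f"fitted trend: {coef[1]:.4f}  (true 0.0100)")
print(f"fitted step:  {coef[2]:.3f}   (true 0.800)")
```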
If we have a network of nearby stations, then we have many ways to separate instrumental from physical signals. Due to the very strong spatial coherence of temperature, the same (or very nearly the same) physical signal will exist in nearby stations, but the likelihood that the same instrumental signal will apply to all stations simultaneously is extraordinarily small. In this case, signal which exists in all (or almost all) stations can be safely considered physical, while signal which exists in a single station (or very small number of stations) can be considered instrumental.
Therefore the ability to recover the trend from artificially altered data depends on what kind of alteration is applied. If you artificially add a trend of the same type as the signal (say, adding linear-trending noise to a linear physical signal), and apply it equally to all stations in the “nearby” network, it will not be possible to recover the physical signal. If, on the other hand, the artificially imposed noise is of a different mathematical character (step-change noise added to linear signal), or the noise is applied to a subset of station reports, or to a large set of stations but with significant time staggering, then it will be possible to recover the physical signal.
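(And a sketch of the network version, again synthetic: the cross-station median recovers the shared physical signal even when one station has an instrumental step, and that station's residual against the median exposes the step.)

```python
import numpy as np

rng = np.random.default_rng(7)
n_stations, n_months = 20, 240
t = np.arange(n_months)

physical = 0.005 * t + rng.normal(0, 0.3, n_months)       # shared signal
network = physical + rng.normal(0, 0.2, (n_stations, n_months))
network[0, 150:] += 1.0        # instrumental step at station 0 only

# The cross-station median is robust to a fault at one station...
recovered = np.median(network, axis=0)
# ...and the suspect station's residual against it exposes the step.
residual = network[0] - recovered
print(f"station 0 residual, months < 150:  {residual[:150].mean():+.2f}")
print(f"station 0 residual, months >= 150: {residual[150:].mean():+.2f}")
```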
Verne Bauman says
Re: 353. Ray, thanks, but I had read this and articles re the corrections to early data. It is about 2 years old. The article seems to deal with the correlation of the satellite data with models. The conclusion was that the satellites now agreed with the models, so the data must be good – is this still the feeling after two intensive years of examination?
My question is about the current article – the reliability of meteorological station data. It seems a fundamentally superior method to use satellites to directly measure global temperature. Is there something still wrong with relying on this data over the station data?
Steve Reynolds says
343 Timothy Chase> In any case, we have several largely independent lines of inquiry suggesting something between 2.8-3.0 C.
Now you are disagreeing with the IPCC; they say 1.5-4.5C (which agrees with Annan’s paper).
My point is that there is a large uncertainty in all the methods. Combining the methods helps reduce uncertainty, but the recorded temperatures method is showing the largest sensitivity (in Annan’s figure 1). Warming bias errors in recorded temperatures may help explain this.
Hank Roberts says
Steve:
http://julesandjames.blogspot.com/2006/03/climate-sensitivity-is-3c.html
Dan says
re: 356. Goodness. It has been clearly stated and shown that there are many studies of global temperature trends that use proxies, as a very simple search here on RC would show. Try the “search” box at the top of the page, for starters. Better still, try reading the IPCC reports re: global temperature trends and proxies. Tree ring studies are just one of many proxies. In fact, in and of themselves, tree ring studies may not necessarily mean much as one dataset (just like the surface US temperature set may not in and of itself). But taken as part of the large, collective, analyzed data set that spans various disciplines re: temperature trends, the data are consistent. And that is one of the things that the scientific analysis has shown about the data from various disciplines: the significant trends show up across the board.
It is really not all that difficult to comprehend that (US) surface temperature data are one small set of a global set of data that show the trend. Yet skeptics and denialists continue to harp on an issue that is a non-starter and a complete red herring with regards to the *global* data set temperature trends. The data set from various sources and disciplines is large. And consistent. There is little excuse not to read and learn.
FurryCatHerder says
Re 317:
John —
Yes, I know all about you. You and I conversed in an entirely different universe many, many years ago.
I acknowledged that CO2 dominates outside of cities, but CO2 dominating outside of cities doesn’t do a heck of a lot of good for people who live IN cities. Since UHI has a strong positive feedback, and easily exceeds the most pessimistic projections on warming, I think more attention needs to be paid to UHI.
Here’s an example — I consume about 25 kWh/month more per 1°F rise in average high temperature. If you look at the 3 to 5°F rise from my house to downtown, that doesn’t contribute much in terms of global warming. But if you look at the 25 kWh/month per degree rise, times those 3 to 5°F, that contributes a lot more. See where I’m going with that?
Dan says
re: 356. I will make it even easier to read and learn re: tree rings and trends: see the IPCC chapter at http://ipcc-wg1.ucar.edu/wg1/Report/AR4WG1_Pub_Ch06.pdf
FurryCatHerder says
Re 317 (again — sorry, didn’t realize it was the same post)
My stupid cat walked all over the keyboard. That’s what I get for trying to herd them.
What I mean is that, strictly in terms of a wager, the probability of a given outcome becomes less certain the further out we get, rather than more certain.
Near term — and I think 13 years is pretty “near term” — we can’t react fast enough. The US Congress would have to grow a spine, or oil prices would have to rise significantly faster (and they are now locked in an upward spiral, I think — we’ll never see $30/bbl oil), to get CO2 emissions to come down soon enough to not put more warming in the pipeline. But also, near term, natural variability matters too much, up and down. We’ve still not surpassed 1998, correct? And if we regress to the mean, solar-cycle-wise, we may naturally cool enough to offset CO2-induced warming. In other words, a 2020-targeted wager is too much risk. Click the link by my name if you want to see some temperature records that show what I’m talking about with variation in temperature.
But as we move out — and especially out past 10 years, into the era of $100 and $150/bbl (in ’07 dollars) oil — the cost of CO2 emissions will rise, and tree hugger or not, people will react. Throw in some tree huggers, and CO2 emissions will fall. How fast is a matter for meaningless speculation, but I think we have reached a critical mass for explosive growth in tree hugging.
So, what I’m betting on is either we decide not to go broke trying to burn all that oil and coal, or more people care about the environment. I really don’t care, for the sake of a wager, which comes true. It seems to me that as time moves to the right, the likelihood of either of those scenarios panning out increases — and that, to me, is a basis for a nice long term wager.
tamino says
Re: #355 (Verne Bauman)
The problem is that the MSU satellites don’t measure surface temperature. They measure temperature in the atmosphere, and not in every level of the atmosphere. The “TLT” data (for temperature-lower-troposphere) is not a satellite measurement, but a derived data series, combining information from MSU channel 2 with channel 4 (I think) to remove the stratospheric influence, producing what is believed to be a representation of the lower troposphere.
Because it is a derived rather than a directly observed series, there has been a distinct learning curve in how to derive the lower troposphere temperature from the MSU channels. There has also been continuing disagreement between the two teams constructing the derivation, UAH (University of Alabama in Huntsville) and RSS (Remote Sensing Systems). Spencer & Christy, heading the UAH group, are outspoken critics of AGW. Their TLT reconstruction originally showed no real trend in the lower troposphere, despite the fact that computer models indicated the lower troposphere should be warming at least as fast as the surface. But over the last decade, numerous errors in their processing have been revealed, so that now their analysis does indicate warming in the lower troposphere. I think that their latest analysis indicates TLT is warming by 0.14 deg.C/decade, while the RSS group gets 0.19 (or is it 0.23?) deg.C/decade. The errors uncovered in the UAH analysis have now brought it much more in line with the predictions of the computer models, which have therefore been vindicated.
None of which tells us about the surface temperature; that is still determined by ground-based thermometers.
If I am mistaken about any of this, I will be glad to be corrected.
Steve Reynolds says
tamino> The errors uncovered in the UAH analysis have now brought it much more in line with the predictions of the computer models…
It is interesting that errors in the MSU satellite data have been diligently pursued (as they should be), but potential errors in surface temperature measurements are a very low priority according to some here.
tamino says
Re: #363 (Steve Reynolds)
I don’t think any of us want to hide, or hide from, potential errors in the surface temperature measurements. Quite the contrary, we want to find any errors and correct them. This is exactly what has been very diligently done by GISS and HadCRU.
The ire of some of the commenters here is due to the fact that the “evidence” for further potential errors, which is the root of this post, comes from those who have an agenda to discredit the data, and whose efforts are far from objective and nowhere near comprehensive. That’s not science, it’s a smear campaign.
We would all welcome a thorough and scientific evaluation of the impact of micrositing issues. We rankle at unscientific, agenda-driven doubt.
Hank Roberts says
Steve, you’re comparing the MSU scientists who worked hard to improve their own data — and did, when nudged to do so in other published science papers — with the self-elected audit team here who have published nothing and apparently chosen to ignore what has been published, cited above.
That’s comparing oranges and, well, horse apples.
The must-be-a-pony-here-somewhere approach gets tiresome. How about reading the actual work done and published? See the 2003 paper above. Talk to us about what it says, eh?
John Mashey says
#359, #361 FCH [and unfortunately, don’t recall]
Well, like I said, I didn’t really want to bet against *you*, but I was hoping Jim Cripwell, or others of similar mind to Abdusamatov & Co., would take me up on this.
I want to be around to see the end of the bet, and 2026 is pretty unlikely, but 2020 might be OK. For the really long term, you may well be right. Churchill said: “You can always count on Americans to do the right thing – after they’ve tried everything else.” I hope that’s not true here, but I just started reading Jeff Goodell’s “Big Coal”, which doesn’t help my mood.
We absolutely agree on the need to do what we can about UHI, which is why I mentioned Phoenix, but I thought Austin wasn’t so bad.
Timothy Chase says
Steve Reynolds (#363) wrote:
They have studied the urban heat island effect. (I am assuming that’s the horse you don’t think is quite dead yet.)
Assessment of Urban Versus Rural In Situ Surface Temperatures in the Contiguous United States: No Difference Found
Thomas C. Peterson
Journal of Climate, VOL. 16, NO. 18, 15 September 2003
http://www.ncdc.noaa.gov/oa/wmo/ccl/rural-urban.pdf
Global rural temperature trends
T Peterson, K Gallo, J Lawrimore, T Owen, A Huang, D McKittrick
Geophysical Res. Ltrs, Vol. 26 , No. 3 , p. 329 (1999)
… and I know there are more.
I could look them up for you, or… do you have access to Google?
nicolas L. says
Re 334
Thanks Chuck, I really had a good time with the “expanding hearth” theory :). My attention was caught by a little phrase repeated here and there (about plate tectonics…) on the site: “if it’s consensus, it isn’t science”. It’s a thing we tend to hear a lot lately…
Re 345
Timothy raises a good point, not commented on much apparently (I wonder why…). The rural-based stations show the same warming trends as urban-sited stations… I found the same thing with the Meteo France data, at a national level, here (where actually the rural regions are the most affected by a warming trend during the 20th century):
http://secours-meteo-fr.axime.com/FR/climat/img/tempminimaxi.gif
Can’t account for a UHI there, can you? Moreover, if studies made exclusively on rural station data show the same results as for global data, it would tend to show that data analysts do a pretty good job of taking account of any UHI bias.
Finally, I still don’t see what a picture tells you about possible bias in the data, or about the way it is corrected when analysed. If I think my car has a problem, taking a picture of it won’t help me much in finding where the problem comes from…
ray ladbury says
Steve Reynolds wrote: “It is interesting that errors in the MSU satellite data have been diligently pursued (as they should be), but potential errors in surface temperature measurements are a very low priority according to some here.”
Now hold on just a dad-blamed minute. What possible basis do you have for making that statement? Errors have been pursued to the nth degree by looking at the data – just as was done by C&S. Would you have us suggest that the MSU data not be used until each satellite is visited by the Shuttle? I guarantee you that satellites suffer a whole helluva lot more wear and tear than meteorological stations. Sure you don’t want to rethink that statement there, Steve?
Verne Bauman says
Re: 362. Wow, great summary. I never understood before why it was thought necessary for the satellite data to conform to the models – seemed backwards.
OK, the satellites measure a composite signal containing information about several atmospheric layers. This signal (data) is manipulated to extract information about the several layers – one being the lower troposphere. The lower troposphere, I think, runs from the surface to about 30,000 feet. So there are only two problems with the satellite data?
1. Lower troposphere temperature is not measured directly, but is derived from a more complex signal.
2. The lower troposphere is not the actual surface but an entire layer of atmosphere.
Is there skepticism about the analysis of the satellite data to extract the temperature of the lower troposphere? The data is clearly labeled TLT. Is this wishful thinking?
In determining the global temperature anomaly, there is an important distinction between the surface of the Earth and the lower troposphere. And that distinction would be..?
Sorry for describing satellites as directly measuring global temperature. It would have been better if I had said thermometers are inherently representational – in the same way we elect a congressman to represent a district. Apparently station placement is not made by selecting an average place, but by convenience. That place then represents its entire grid area. The active area of a thermometer is perhaps a few square inches. There are thousands of such stations, but the globe is a very big place. Cutting it this way, there are a few thousand convenient samples of the Earth.
To the extent that global warming means that most places on Earth will get warmer, the thermometers will detect it. Of course you could get by with only a few dozen. It seems that their placement is relatively unimportant – so UHI and such concerns are irrelevant. In this view, the rub comes when you want to say how much warmer and what the Earth’s average surface temperature is. In that case, placement and number of points (resolution) is everything.
By contrast, the satellites measure all the Earth. They do it often. Seems to me that is like having billions of thermometers being read thousands of times (for statistical averaging). That’s why I said “directly”. If the satellites are doing a good job of determining TLT, and TLT is a good indicator of surface temperature, then why isn’t this a superior system to the thermometer network?
[Response: Because i) it’s not one satellite, and they all have drift and calibration issues, ii) three different analyses give three different trends depending on how the satellites are tied together. Since there is no perfect data series understanding comes from looking at as many independent data sets as possible and looking for consistent patterns. – gavin]
Dano says
RE the argumentation in 363 (Reynolds and others):
If those who go out and take pretty pictures brought survey equipment and temp measuring equipment with them (maybe a barometer and anemometer too), I might listen to the argument. Taking a picture of a site without measuring temp/press/wnd is akin to a doctor making a distance diagnosis via video (not that it ever happens…).
Nonetheless, I look forward to someone writing up the photo experiment and submitting it to a scholarly journal, in order to overturn the current warmer paradigm. I trust a pre-print will be available for us to audit before submission.
Best,
D
Dan says
re: solar trends. A recent paper by M. Lockwood and C. Frohlich, to be published in the Proceedings of the Royal Society, was featured as a news item in the July 5 issue of the journal Nature (see http://www.petedecarlo.com/files/448008a.pdf). From capitolweather.com: it is described as “the final nail in the coffin for people who would like to make the Sun responsible for current global warming.” Based on solar data for the last 100 years, the authors were able to show that recent trends in solar activity are actually opposite to those required to explain global warming.
Now if only grossly irresponsible journalists such as http://www.pittsburghlive.com/x/pittsburghtrib/opinion/columnists/steigerwald/s_513013.html would take a moment to read and learn for themselves. I suppose that is hard to do when you have an agenda and do not care about data or science.
FurryCatHerder says
Re 366:
You wouldn’t remember me. I wasn’t of your stature at the time and I wound up going into an entirely different field from you.
Ah, okay — well, good luck making a bet with someone who wants to bet you. There is a certain satisfaction in separating other people from their money.
I’ve had discussions on what I think about coal, and I don’t think it has the promise that others ascribe to it. Solar power costs are falling, fossil fuel costs are rising — that’s not a bet I want to be on the “coal wins” side of, and PV is the most expensive of the renewables. I trust Adam Smith to get it right.
All values of UHI are “bad” — we’re not Phoenix “bad” yet, but we’re getting there. And as I wrote, 25 kWh/month per degree F times 5 degrees F is greater than what I can get from increased AC efficiency, so I lose when Round Rock and Austin turn into a Dallas-Fort Worth blob of a city. So even small values of UHI effect result in increased energy demand, which moves us further from where we need to be. All land-use changes that result in an increased UHI effect have this property — there’s a lot of energy consumed for environmental control, and all of that energy is related to degree-days, all of which are worse with UHI. Fight UHI, and you fight rising energy use and, indirectly, CO2 emissions and global warming.
My stance is that global warming is an overall problem, not just a CO2 problem. If we try to burn all the fossil fuels we can get, poverty is the result — upward-spiralling fossil fuel prices will create conflict and misery at the lower ends of the economic scale. As increasingly larger amounts of money are siphoned off for “energy”, Adam Smith steps back in, and we’ll see the companies that are harmed by upward-spiralling energy costs and reduced consumer demand for their products pushing harder and harder to get energy costs under control.
The “Tree Hugging Quotient” is already high enough, I think, that we’re on a path towards reducing CO2 emissions. Not just “growth”, though the Chinese and Indians will do their best to increase growth, but reduce per capita CO2 output in the developed world. For example, look at the growth in hybrid cars, interest in pluggable hybrids, interest in battery powered lawn mowers even, wind and solar electric, etc. Look at companies like Google, Sun and IBM trying to position themselves as “green”. Look at companies like NativeEnergy, Green Mountain Electric. TXU Electric charges me about $0.15/kWH, I can buy wind for about $0.12/kWH now.
That’s the sort of activity that makes betting “CO2 wins” a bit too risky out in the long term. Now, that doesn’t mean we don’t have to do anything, it just means, I think, that we make sure the change in attitudes continue to expand, those secondary effects (impoverishment of the lower and middle classes as fossil energy costs soar, UHI effect increasing growth in energy consumption) are talked up, fiscal policy is strongly tilted in the direction of renewables, people plant urban forests ;), and so on.
Steve Reynolds says
Re 363, 364, 365, 367, 369, 371: I seem to have touched a nerve there…
tamino> The ire of some of the commenters here is due to the fact that the “evidence” for further potential errors which is the root of this post, comes from those who have an agenda to discredit the data, and whose efforts are far from objective…
While the purely ad hominem argument above probably deserves no answer, I have seen no evidence that Anthony Watts and the others collecting data at surfacestations.org are any less objective than professional climate scientists.
Hank Roberts> you’re comparing the MSU scientists who worked hard to improve their own data — and did, when nudged to do so in other published science papers —- with the self-elected audit team here who have published nothing and apparently chosen to ignore what has been published, cited above.
I don’t think Anthony Watts has ignored what has been published any more than outside climate scientists did when critiquing the MSU data or Peterson (in Timothy’s reference) did in disagreeing with previously published studies with different conclusions than his (UHI science seems far from settled).
Timothy Chase> They have studied the urban heat island effect.
In any of the papers that you have found, did they do any site visits?
ray ladbury> Would you have us suggest the the MSU data not be used until each satellite is visited by the Shuttle? I guarantee you that satellites suffer a whole helluva lot more wear and tear than meteorological stations. Sure you don’t want to rethink that statement there, Steve?
I do not see your point. I think the close scrutiny the MSU data received is what should be done for all critical climate data.
Lynn Vincentnathan says
I think this UHI effect is very important to keep in mind. Thank you, denialists. Most people live in cities, so with GW coming on top of the UHI effect, it will probably be getting VERY VERY HOT in the cities, resulting in a lot more health problems and death…..not to mention a positive feedback loop of people using their ACs more, and sending up more GHGs in the process.
The idea that there can be these micro-site effects, is also troublesome. So we have GW on top of the UHI, then there’s a micro-site jump in temp. That could be REALLLLLY BAD for people caught unawares walking through the micro-sites in a GW-UHI city at peak summer temps, and then they walk through the end side of an AC blowing out hot air….
We must thank the denialists for making us aware that it is even more urgent than we thought to mitigate GW pronto. We just don’t need it added on to the UHI and micro-site hot spots. It could be the straw that breaks the camel’s back, or that last increment of hot air that finally kills people.
James says
Re “…interest in battery powered lawn mowers even…”
Off topic, but I bought one last year when the old gas model died. Quieter, no fuss with starting, and I’ll probably save most of the purchase price by not having gas around that the neighborhood teenagers can “borrow” when they run out :-)
Hank Roberts says
Steve — read the study again. The data are no different between the urban and rural sites — what will you learn by visiting individual sites?
Do you suspect an urban cooling error counterbalancing an urban heat effect you believe they ought to show?
Perhaps you need to photograph more rural boxes?
Maybe the urban boxes get repainted every year and the rural ones are heavily coated in decades’ worth of soot and dirt, and the thermometers covered with spiderwebs, so the rural ones are reading too hot – kind of a rural heat island problem?
Lynn Vincentnathan says
It seems to me there are other indicators of GW, aside from monitoring stations. How about melting glaciers and ice caps? What about the Larsen B shelf?
I remember some 10 or 15 years ago reading about some 5,000+ year old fossil remains found in the Alps after some melting…..while I was reading about people denying GW. Then I read about some very old fossil finds in the Andes due to melting.
No one mentioned how strange it was that 5,000 year old ice would just up and melt like that in several places around the world. I think I’m the only one (I know of) who thought this could be due to GW. There wasn’t even a mention of warmer temps causing the melting. It’s as if ice melting and freezing has nothing at all to do with temperature; it’s just one of those unexplained happenings of nature.
Timothy Chase says
Steve Reynolds (##374) wrote:
Without studies like those I cited, the assumption has been that they could apply certain statistical methods to get rid of any significant distortion due to the Urban Heat Island effect.
The essay by Gavin above references two articles which detail such methods:
However, given studies like the ones I have pointed to:
Assessment of Urban Versus Rural In Situ Surface Temperatures in the Contiguous United States: No Difference Found
Thomas C. Peterson
Journal of Climate, VOL. 16, NO. 18, 15 September 2003
http://www.ncdc.noaa.gov/oa/wmo/ccl/rural-urban.pdf
… it would appear that these methods work quite well, given that there is no discernible difference between the trends where all stations are used and the trends where only the rural stations are used – unless, of course, you believe that rural stations are experiencing Urban Heat Island effects as well.
Is this your worry?
That rural stations are experiencing the Urban Heat Island effect which is distorting the temperature trends we are reading off of them? And that this error is getting worse decade after decade creating only the appearance of rising temperatures?
Verne Bauman says
Re: 370 Gavin, Since you took the time to respond, I assume you are trying to be helpful.
You say i) it’s not one satellite, and they all have drift and calibration issues.
Seems this also applies to each and every thermometer.
You say ii) three different analyses give three different trends..
Seems there is also a lot of raw data manipulation in the thermometer data to compensate for population density and area coverage. Each decision is a different analysis and will produce different trends – else why do it?
Finally, you say “Since there is no perfect data series, understanding comes from looking at as many independent data sets as possible and looking for consistent patterns.”
Consistent patterns in an independent set – that’s what brought me to your article. I was playing with the MLO CO2 data and plotted the yearly rate of change in CO2 against time, and out popped a temperature curve complete with El Niños, volcanoes, and all. The good people at Mauna Loa told me this is just another example of the biological feedback, similar to the yearly cycle.
Since the CO2 rate curve mimics the satellite data better than the CRU temperature data, I began to take a look at the differences between the two sets. From what you say, it’s not the data set but the consistency of the pattern that leads to understanding. I’ll think about it and thanks for the response.
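(The calculation Verne describes takes only a few lines. This sketch uses made-up numbers standing in for the Mauna Loa annual means and a temperature anomaly series; with real data one would substitute the MLO record and, e.g., the CRU series.)

```python
import numpy as np

rng = np.random.default_rng(8)
years = np.arange(1960, 2001)

# Hypothetical stand-ins for the real series.
temp = 0.01 * (years - 1960) + rng.normal(0, 0.1, years.size)
# Toy CO2: steady anthropogenic growth plus a temperature-sensitive
# term standing in for the biological feedback mentioned above.
co2 = 315 + 1.4 * (years - 1960) + np.cumsum(0.5 * temp)

growth_rate = np.diff(co2)                  # year-over-year ppm change
r = np.corrcoef(growth_rate, temp[1:])[0, 1]
print(f"corr(CO2 growth rate, temperature): r = {r:.2f}")
```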
Dano says
RE 374 (Reynolds):
I have seen no evidence that Anthony Watts and the others collecting data at surfacestations.org are any less objective than professional climate scientists.
Collecting data.
Objectively pointing a camera and playing with a likely non-calibrated GPS (to do what with?), while following and documenting what protocols, to crunch what data, to determine what?
I also point out many are assiduously not taking comparative ambient temp measurements, determining wind effects, laying out and measuring transects of temps, taking pictures of their thermograph/barograph charts they set up, nothing.
IOW: what’s the point until you collect useful data? And what’s so difficult about writing a grant proposal and attaching your manuscript to it, along with your study plan and sample transect/thermograph data you collected to support your hypothesis? How can pictures be more informative to the community than data and analysis?
Is there some aspect of the NewScience that doesn’t need these things?
Best,
D
Steve Reynolds says
Dano> IOW: what’s the point until you collect useful data? And what’s so difficult about writing a grant proposal and attaching your manuscript to it, along with your study plan and sample transect/thermograph data you collected to support your hypothesis? How can pictures be more informative to the community than data and analysis?
So everyone is supposed to wait 6 months to see if a grant proposal is approved before they can do anything?
How about just determining how temperature measurement stations do vs. USCRN Site Survey Classification Scheme:
http://www1.ncdc.noaa.gov/pub/data/uscrn/documentation/program/X032FullDocumentD0.pdf
The classification ranges from 1 to 5 for each measured parameter. The errors for the different classes are estimated values.
Classification for Temperature and Humidity
Class 1: Flat and horizontal ground surrounded by a clear surface with a slope below 1/3 (<19 degrees). Grass/low vegetation ground cover <10 cm high. Sensors located at least 100 meters (m) from artificial heating or reflecting surfaces, such as buildings, concrete surfaces, and parking lots. Far from large bodies of water, except if it is representative of the area, and then located at least 100 meters away. No shading when the sun elevation >3 degrees.
Class 2: Same as Class 1 with the following differences. Surrounding Vegetation <25 cm. Artificial heating sources within 30m. No shading for a sun elevation >5 degrees.
Class 3 (error 1 C): Same as Class 2, except no artificial heating sources within 10m.
Class 4 (error >/= 2 C): Artificial heating sources <10m.
Class 5 (error >/= 5 C): Temperature sensor located next to/above an artificial heating source, such as a building, roof top, parking lot, or concrete surface.
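(For illustration only, here is a toy encoding of the temperature part of that scheme as a function. It keeps just the distance-to-heat-source, vegetation and shading criteria quoted above and omits the slope and water-body rules; the real classification is more involved.)

```python
def uscrn_temp_class(heat_dist_m, veg_cm=5.0, shade_elev_deg=0.0,
                     next_to_artificial_surface=False):
    """Rough USCRN-style temperature siting class (1 best, 5 worst).

    Simplified from the criteria quoted above; slope and water-body
    rules are omitted.
    """
    if next_to_artificial_surface:
        return 5                    # error >= 5 C
    if heat_dist_m < 10:
        return 4                    # error >= 2 C
    if heat_dist_m < 30:
        return 3                    # error ~ 1 C
    if heat_dist_m >= 100 and veg_cm < 10 and shade_elev_deg <= 3:
        return 1
    if veg_cm < 25 and shade_elev_deg <= 5:
        return 2
    return 3

# Example: sensor 20 m from a parking lot, short grass, unshaded.
print(uscrn_temp_class(20))   # -> 3
```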
dhogaza says
Reynolds:
ray ladbury
Actually I think he’s saying we shouldn’t use the MSU data until the shuttle has photographed each satellite, which is even more limiting than visiting it :)
Reynolds
Which, as you’ve been told many times, has been and is done for the surface temp data, as well.
The self-contradiction in your statements is obvious to all.
Ray Ladbury says
Steve,
I understand that you may have some reservations about some of the data. I can understand that you might want to glean some idea of data quality. However, you have to take into consideration how the data are being used and what sort of signal they are looking for. Watts et al. have not taken the time to do that. The way I know this is that they are looking at exactly the wrong thing if they are concerned about bias. You aren’t going to find evidence for bias in a GLOBAL signal by looking at individual stations. To suggest that you can is either ignorant or willfully misleading.
You also don’t seem to understand the difference between satellite and terrestrial data. The MSU dataset was never intended for use as a measure of surface warming. As such, you don’t have all the telemetry, calibrations, etc. needed, and you have to infer these relations from the data itself. Also, keep in mind that while there are many many measurements, you only have one or a few satellites making them. Thus any bias that affects a satellite affects all the measurements, while in a terrestrial network, you have independent stations making independent measurements. If one station starts giving crappy data, chuck it or downweight the data. If the satellite starts giving crappy data, you might not even know it for awhile, and you sure can’t send maintenance out to see what has gone wrong.
In an oversampled network, you have many techniques for dealing with imperfect data. I know a lot of these techniques. They work. The guys doing the actual data analysis, know a whole lot more of these techniques than I do. They’re not dumb, and the signal they are trying to pull out has very distinctive properties that distinguish it from noise. If people don’t understand this, they’ve no business mucking about with the network.
Steve Reynolds says
Ray,
Thanks for the most thoughtful reply.
Ray> You aren’t going to find evidence for bias in a GLOBAL signal by looking at individual stations.
“When a respected scientist says something is impossible…”
While you may be right that global bias is less likely to be found looking at individual stations, it is certainly not impossible. Microsite effects can be very important.
A possible example already given is the introduction of limited-length RS232 cables for MMTS, which may have caused sensors to be moved closer to buildings.
Another example is the likely increased paved parking near the sensors.
Ray> The MSU dataset was never intended for use as a measure of surface warming.
Neither were most of the surface stations.
Ray> In an oversampled network, you have many techniques for dealing with imperfect data.
True, but at a sufficiently low S/N, no technique works very well. I think it is worth establishing what the actual signal-to-noise ratio is. That will likely require additional attention from the professionals, but if data collected by surfacestations.org helps that happen, I cannot see why anyone dedicated to the scientific method would object.
Hank Roberts says
Steve, nobody’s _objected_. Many have pointed out that the research has already been done that would reveal a difference between urban and rural sites, if there were a difference, and that, counter to everyone’s intuition about cities being warmer, that doesn’t show up in the data. Others have pointed out that errors go both ways (resp. 20 for example) and that the size of the signal being detected compared to the annual variation requires a very large number of observations to show up.
Whatever problems the self-chosen auditors observe in their pictures aren’t affecting the data enough to detect.
Take any comparable big data set, like the one Stoat points to in the “five year trend” article I cited earlier. Fiddle with the numbers, run the trend analysis, tell us how much you have to bias the data in what percentage of the stations to see a change in the detected trend.
Math is hard. No excuse for not doing it however. The people publishing have done it and shown no detectable effect urban vs rural. Show us how you could fake one, to find out how big the problem would have to be to be detectable given the number of samples taken and the statistics done.
Else it’s just “I say there’s a problem, you have to prove me wrong.”
That kind of approach is only heard from those who turn only in one direction.
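(Hank's suggested exercise in miniature, on a synthetic network: inject a creeping one-sided bias into a chosen fraction of stations and watch how far the fitted network trend moves. Only a bias that is both one-sided and widespread shifts it appreciably.)

```python
import numpy as np

rng = np.random.default_rng(9)
n_stations, n_years = 1000, 30
t = np.arange(n_years)

true_trend = 0.02                                   # C/yr, the "signal"
network = true_trend * t + rng.normal(0, 0.5, (n_stations, n_years))

for frac in (0.0, 0.1, 0.5):
    data = network.copy()
    bad = rng.choice(n_stations, int(frac * n_stations), replace=False)
    data[bad] += 0.03 * t          # creeping one-sided microsite bias
    slope = np.polyfit(t, data.mean(axis=0), 1)[0]
    print(f"{frac:4.0%} of stations biased -> fitted trend {slope:.4f} C/yr")
```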
Dan says
re: 385. “I think it is worth establishing what the actual signal to noise ratio is. That will likely require additional attention from the professionals, but if data collected by surfacestations.org helps that happen, I can not see why anyone dedicated to the scientific method would object.”
Because the surfacestations.org study by a non-climate scientist (a “former TV meteorologist” with no apparent background or expertise in site surveys) has checked a very small number of sites apparently chosen simply based on where the volunteers live. Gee, that is a well-planned, objective survey. Not! It then draws an unpublished, un-peer reviewed (gee, I wonder why not?), skewed conclusion from that small set of select stations. That does nothing at all to support the scientific method! Furthermore, as has been said numerous times here already, the entire issue is a rotten red herring, trumped up by denialists and skeptics. The surface stations in the US are a very small subset of the data sets used to determine global temperature trends.
Aside: Why in the world do non-expert skeptics and denialists continue to grasp at weak straws and repeat them as if by repeating them they will become true? Yet the peer-reviewed science by experts must not be and must be severely questioned, after the fact? I suppose it is a reflection of the “dumbing-down”, anti-science approach and a failure to learn logic and critical thinking. Simply regurgitating what non-scientists say is apparently that much easier. Sigh.
Ray Ladbury says
Steve, think for a second about what would happen if either of your two putative biases were true. You would see an abrupt change, not a gradual one. The change would persist but not increase. None of this is what we are seeing. Remember, you have not just oversampling, but also time series here. Since we are looking at a global signal, any bias has to be happening to a greater or lesser extent at the majority of stations, or it will just introduce noise. Then there are the multiple other lines of evidence that support the same trends. I think you can take this one to the bank.
It is very unfair to imply that the researchers who produce these data have not made every effort to ensure their quality. They may not have physically visited and photographed every station, but they look carefully at the data. They look for biases, systematic and random errors, and anything else that might come up, six ways to Sunday. In this way, they are actually more likely to find any issues than they would via a site visit. The proof of the pudding is in the eating – and who better to proof the pudding than those who consume it every day.
Neil B. says
It’s easy to forget that even if urban heat islands distorted the measure of average temperature increase, and/or the measure of how much warming is caused by CO2, they actually do make the earth hotter! So we still have to take them into account, albeit maybe adjusting some interpretations a bit.
Joe says
Can somebody help me out here? All the GW talk is about greenhouse gas creation by human activity, but what about the direct heating effect of burning all the fossil fuels?
It would seem that much of the urban heat island effect is caused by high concentrations of machines creating heat as a by-product, so how about over the whole planet? Is the conversion of oil, gas, etc. into heat having a measurable effect on the atmosphere?
Hank Roberts says
Hmmm, Joe, did you by chance just read this somewhere? You’re the second person in the last few minutes to come in with the same talking point.
It’s bogus, you can look it up.
Hank Roberts says
Joe, try this:
http://www.gea.or.jp/41activ7/confe05/crutzen-paper.pdf
And the answers following this similar question here:
https://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument-part-ii/#comment-37007
Timothy Chase says
Urban vs. Rural, etc.
With the following it shouldn’t even be necessary for people to open their Adobe Acrobat (I hate pdfs myself), but they will want to click on the links if they want to see the charts, etc.
According to the 1997 analysis by Peterson and Vose cited by IPCC 2001, the long-term (1880 to 1998) trends for rural stations (0.70 C/century) and for the full set of stations (0.65 C/century) showed rural stations trending slightly higher. A more recent analysis (1998) of the 1951-1989 trends, rural (0.80 C/century) versus the full set of stations (0.92 C/century), showed urban stations trending slightly higher.
The difference between urban and rural trends were not regarded as significant in either case.
Please see:
2.2.2.1 Land-surface air temperature
http://www.grida.no/climate/ipcc_tar/wg1/052.htm#2221
You might also check the following from the MET in the UK…
Isn’t the apparent warming due to urbanisation?
http://www.metoffice.gov.uk/faqs/2.html#q2.3
The chart shows temperature trends from the Hadley Centre for the past 50 years – but divided according to windy and calm conditions. If the Urban Heat Island effect were significant, you would expect the calm data to show higher temperatures – but it is the windy data that show higher temperatures. At the same time, the temperature trends for windy and calm look almost like doubles of one another, only with the windy shifted somewhat above. Almost, but not quite.
Joe says
No, Hank, I didn’t read it somewhere and it isn’t a “talking point” (not to me anyway). I thought it up all by my lonesome. An honest question and I ain’t no troll.
Thanks for the links. I’ll check them out and get back if I still have questions.
Steve Reynolds says
386 Hank Roberts> Steve, nobody’s _objected_.
This sounds like an objection to me:
“Because the surfacestations.org study by a non-climate scientist (a “former TV meteorologist” with no apparent background or expertise in site surveys) has checked a very small number of sites apparently chosen simply based on where the volunteers live. Gee, that is a well-planned, objective survey. Not! It then draws an unpublished, un-peer reviewed (gee, I wonder why not?), skewed conclusion from that small set of select stations. That does nothing at all to support the scientific method! Furthermore, as has been said numerous times here already, the entire issue is a rotten red herring, trumped up by denialists and skeptics.”
Also, it is clear that he did not get that info from surfacestations.org.
I believe there was another objection from Timothy that has since been deleted after I tried to respond.
Steve Reynolds says
Ray> Think for a second about what would happen if either of your two putative biases were true. You would see an abrupt change, not a gradual one. The change would persist but not increase.
Why an abrupt change? Did everyone pave their parking lot and install a/c the same year or even same decade?
Were all Stephenson Screen stations replaced with MMTS the same year?
ray ladbury says
Steve, paving will have the most effect when it takes place adjacent to the station site. Likewise, a site where the instruments are replaced would respond instantly – and this would be noticed in the analysis. In fact, it is probably one of the things the analysis is specifically set up to reject.
UHI is local, the signal is global. Instrument changes are both local and abrupt, the signal is global and gradual. Scientists perform much tougher noise rejection analyses daily.
Then there is the fact that the results look the same when you remove the urban stations, and that the trends agree with trends from completely independent networks and independent analyses.
Steve, if there is a problem, you are much more likely to see it in the data than in a photo of a station. That’s why the scientists who do these analyses look there.
Timothy Chase says
Steve Reynolds (#385) wrote:
Ray Ladbury (#388) responded:
Steve Reynolds (#396) responded:
This works – assuming Ray Ladbury was speaking of the aggregate. Individual stations would show an abrupt change once, under the scenarios you gave. That would be picked up by means of statistical analysis.
However, if you do not believe this, you could get Google Earth then a kml that Ken Mankoff made available:
http://edgcm.columbia.edu/~mankoff/GISTEMP
Click the station and you can look at the temperature trend for that station yourself. In fact anyone who believes what you suggested above can do the same – if they have Google Earth and a connection to the web.
I hope this helps!
:-)
Steve Reynolds says
Ray> the site where the instruments are replaced would respond instantly – and this would be noticed in the analysis.
You are assuming a decent S/N. The T vs. time graphs I have seen commonly have 2C jumps from year to year (of apparently natural variation). How can you then pick out 0.5C errors from microsite changes?
Typical example:
http://gallery.surfacestations.org/main.php?g2_itemId=11386
Dan says
re: 395. No. That information about the site survey was indeed from the surfacestations.org site. For the record, from the surfacestations.org site: “You can visit our download section to get the instructions and forms, as well as to look at the lists of USHCN and GHCN climate reporting stations near you to determine which ones might be appropriate for you to survey. Then after following the instructions to complete the site survey and the gathering of photographic data, completion of the forms for upload to this website.”
In other words, that is indeed a volunteer-based survey. Not much about appropriate site survey training. Also for the record, I have conducted various meteorological monitoring site surveys for 24 years and counting. You do not simply download forms with survey instructions and take photographs. That is not a broad or necessarily accurate survey. Land use, obstructions, distances and angles to any buildings, hills or trees, etc. all come into play. The surfacestations.org survey depends on who has volunteered to look at sites near their homes “to determine which ones might be appropriate for (them) to survey”. Objectivity? That is part of the scientific method. Anyone who has never conducted a site survey before could have essentially submitted/posted one of the 250+ “surveys” that are there now.
And from the surfacestations.org FAQs, you can read that indeed the survey is being conducted by an (apparently former) “TV meteorologist”. What expertise does that bring to the table with respect to the credibility of a siting survey?
Most important though: I and others have also pointed out numerous times that this issue re: these US surface stations and surfacestations.org’s “survey” is a complete red herring with respect to the larger global data set indicating temperature trends either directly or through proxies. That should not require repeating again. It is getting to the point (if I may mix my metaphors) that we are beating a red herring.