Observant readers will have noticed a renewed assault upon the meteorological station data that underpin some conclusions about recent warming trends. Curiously enough, it comes just as the IPCC AR4 report declared that the recent warming trends are “unequivocal”, and when even Richard Lindzen has accepted that the globe has in fact warmed over the last century.
The new focus of attention is the placement of the temperature sensors and other potential ‘micro-site’ effects that might influence the readings. There is a possibility that these effects may change over time, introducing artifacts or jumps into the record. This is slightly different from the more often discussed ‘Urban Heat Island’ effect, which is a function of the wider area (and so could be present even in a perfectly set up urban station). UHI effects will generally lead to long term trends in an affected station (relative to a rural counterpart), whereas micro-site changes could lead to jumps in the record (of any sign) – some of which can be very difficult to detect in the data after the fact.
There is nothing wrong with increasing the meta-data for observing stations (unless it leads to harassment of volunteers). However, in the new found enthusiasm for digital photography, many of the participants in this effort seem to have leaped to some very dubious conclusions that appear to be rooted in fundamental misunderstandings of the state of the science. Let’s examine some of those apparent assumptions:
Mistaken Assumption No. 1: Mainstream science doesn’t believe there are urban heat islands….
This is simply false. UHI effects have been documented in city environments worldwide and show that as cities become increasingly urbanised, increasing energy use, reductions in surface water (and evaporation) and increased concrete etc. tend to lead to warmer conditions than in nearby more rural areas. This is uncontroversial. However, the actual claim of the IPCC is that the effects of urban heat islands are likely small in the gridded temperature products (such as those produced by GISS and the Climate Research Unit (CRU)) because of efforts to correct for those biases. For instance, GISTEMP uses satellite-derived night light observations to classify stations as rural or urban and corrects the urban stations so that they match the trends from the rural stations before gridding the data. Other techniques (such as correcting for population growth) have also been used.
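To make the logic concrete, here is a deliberately simplified sketch of such an adjustment (a single linear segment with invented numbers; the real GISTEMP procedure, described in Hansen et al. 2001, uses nightlight-classified stations and a more elaborate fit):

```python
import numpy as np

def adjust_urban(years, urban, rural_mean):
    """Remove the linear trend of the urban-minus-rural difference, so the
    urban station's long-term trend matches its rural neighbours."""
    slope, intercept = np.polyfit(years, urban - rural_mean, 1)
    return urban - (slope * years + intercept)

# Toy data: rural neighbours warm at 0.10 C/decade; the urban station shows
# an extra 0.20 C/decade of spurious UHI warming on top of that.
years = np.arange(1950, 2001)
rng = np.random.default_rng(0)
rural_mean = 0.010 * (years - 1950) + rng.normal(0, 0.1, years.size)
urban = rural_mean + 0.020 * (years - 1950) + rng.normal(0, 0.1, years.size)

adjusted = adjust_urban(years, urban, rural_mean)
print(f"raw urban trend:      {np.polyfit(years, urban, 1)[0]:+.4f} C/yr")
print(f"adjusted urban trend: {np.polyfit(years, adjusted, 1)[0]:+.4f} C/yr")
print(f"rural trend:          {np.polyfit(years, rural_mean, 1)[0]:+.4f} C/yr")
```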
How much UHI contamination remains in the global mean temperatures has been tested in papers such as Parker (2005, 2006), which found no effective difference in global trends if one segregates the data between windy and calm days. This makes sense because UHI effects are stronger on calm days (when there is less mixing with the wider environment), so if an increasing UHI effect were changing the trend, one would expect stronger trends on calm days, and that is not seen. Another convincing argument is that the regional trends seen simply do not resemble patterns of urbanisation, with the largest trends in the sparsely populated higher latitudes.
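The windy/calm test is easy to state in code. A schematic version with synthetic data (the trend and noise values are invented; Parker of course used real observations classified by measured wind speed):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2006)
true_trend = 0.015  # C/yr of genuine background warming

# If a growing UHI effect were contaminating the record, the calm-day
# subset would warm faster than the windy-day subset. Here we simulate
# the null case that Parker actually found: the same trend in both.
calm  = true_trend * (years - years[0]) + rng.normal(0, 0.15, years.size)
windy = true_trend * (years - years[0]) + rng.normal(0, 0.15, years.size)

trend_calm  = np.polyfit(years, calm, 1)[0]
trend_windy = np.polyfit(years, windy, 1)[0]
print(f"calm-day trend : {trend_calm:+.4f} C/yr")
print(f"windy-day trend: {trend_windy:+.4f} C/yr")
print(f"difference     : {trend_calm - trend_windy:+.4f} C/yr (near zero: no growing UHI signal)")
```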
Mistaken Assumption No. 2: … and thinks that all station data are perfect.
This too is wrong. Since scientists started thinking about climate trends, concerns have been raised about the continuity of records – whether they are met. stations, satellites or ocean probes. The danger of mistakenly interpreting jumps due to measurement discontinuities as climate trends is well known. Some of the discontinuities (which can be of either sign) in weather records can be detected using jump point analyses (for instance in the new version of the NOAA product); others can be adjusted using known information (such as biases introduced by changes in the time of observation or by station moves). However, there are undoubtedly undetected jumps remaining in the records, and without the meta-data or an overlap with a nearby unaffected station to compare to, these changes are unlikely to be fixable. To assess how much of a difference they make though, NCDC has set up a reference network which is much more closely monitored than the volunteer network, to see whether the large scale changes from this network and from the other stations match. Any mismatch will indicate the likely magnitude of differences due to undetected changes.
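As an illustration of the jump-point idea, here is a toy, two-segment-means version (the homogenisation used in the real NOAA product is considerably more sophisticated, but the core step – find the split that best explains the series as two segments with different means – is the same):

```python
import numpy as np

def find_jump(series):
    """Find the split point that best models the series as two segments
    with different means; return the index and the step size."""
    n = len(series)
    best_i, best_rss, best_step = None, np.inf, 0.0
    for i in range(5, n - 5):  # require at least 5 points on each side
        a, b = series[:i], series[i:]
        rss = ((a - a.mean())**2).sum() + ((b - b.mean())**2).sum()
        if rss < best_rss:
            best_i, best_rss, best_step = i, rss, b.mean() - a.mean()
    return best_i, best_step

rng = np.random.default_rng(2)
anoms = rng.normal(0, 0.2, 60)   # 60 years of anomalies at one station
anoms[35:] += 0.8                # an undocumented station move in "year 35"

i, step = find_jump(anoms)
print(f"jump detected at index {i} (true: 35), step {step:+.2f} C (true: +0.80)")
```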
It’s worth noting that these kinds of comparisons work because of the large distances over which monthly temperature anomalies correlate. That is to say, if a station in Tennessee has a particularly warm or cool month, it is likely that temperatures in, say, New Jersey show a similar anomaly. You can see this clearly in the monthly anomaly plots or by looking at how well individual stations correlate. It is also worth reading “The Elusive Absolute Surface Temperature” to understand why we care about the anomalies rather than the absolute values.
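For readers who want to see what that correlation looks like, here is a minimal sketch using two synthetic ‘stations’ that share a common regional signal (the signal-to-noise ratio is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
months = 240  # 20 years of monthly anomalies

# Two stations see the same regional signal plus independent local weather.
regional   = rng.normal(0, 1.0, months)
tennessee  = regional + rng.normal(0, 0.5, months)
new_jersey = regional + rng.normal(0, 0.5, months)

r = np.corrcoef(tennessee, new_jersey)[0, 1]
print(f"inter-station correlation of monthly anomalies: r = {r:.2f}")
```

It is this shared regional signal that lets a suspect station be checked against its neighbours even when the absolute temperatures differ.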
Mistaken Assumption No. 3: CRU and GISS have something to do with the collection of data by the National Weather Services (NWSs)
Two of the global mean surface temperature products are produced outside of any National Weather Service. These are the products from CRU in the UK and NASA GISS in New York. Both CRU and GISS produce gridded products, using different methodologies, starting from raw data from NWSs around the world. CRU has direct links with many of them, while GISS gets the data from NOAA (who also produce their own gridded product). There are about three people involved in doing the GISTEMP analysis and they spend a couple of days a month on it. The idea that they are in any position to personally monitor the health of the observing network is laughable. That is, quite rightly, the responsibility of the National Weather Services who generally treat this duty very seriously. The purpose of the CRU and GISS efforts is to produce large scale data as best they can from the imperfect source material.
Mistaken Assumption No. 4: Global mean trends are simple averages of all weather stations
As discussed above, each of the groups making gridded products goes to a lot of trouble to eliminate problems (such as UHI) or jumps in the records, so the global means you see are not simple means of all data (this NCDC page explains some of the issues in their analysis). The methodology of the GISS effort is described in a number of papers – particularly Hansen et al 1999 and 2001.
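A stripped-down illustration of why gridding matters (station counts, anomalies and cell sizes are all invented; the real products use much finer cells and distance weighting):

```python
import numpy as np

# Toy network: 30 stations crowded into one mid-latitude grid cell and a
# single station in a large high-latitude cell. A plain station average
# effectively counts the crowded cell 30 times over.
stations = [(1.0, 45.0)] * 30 + [(0.2, 75.0)]   # (anomaly C, cell latitude)

anoms = np.array([a for a, _ in stations])
print(f"simple station mean:        {anoms.mean():+.2f} C")

# Gridded mean: average within each cell first, then weight each cell by
# its area (proportional to cos(latitude) for equal-angle cells).
cells = {}
for a, lat in stations:
    cells.setdefault(lat, []).append(a)
cell_mean = {lat: np.mean(v) for lat, v in cells.items()}
weight = {lat: np.cos(np.radians(lat)) for lat in cell_mean}
gridded = sum(cell_mean[k] * weight[k] for k in cell_mean) / sum(weight.values())
print(f"area-weighted gridded mean: {gridded:+.2f} C")
```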
Mistaken Assumption No. 5: Finding problems with individual station data somehow affects climate model projections.
The idea apparently persists that climate models are somehow built on the surface temperature records, and that any adjustment to those records will change the model projections for the future. This probably stems from a misunderstanding of the notion of a physical model as opposed to a statistical model. A statistical model of temperature might for instance calculate a match between known forcings and the station data and then attempt to make a forecast based on the change in projected forcings. In such a case, the projection would be affected by any adjustment to the training data. However, the climate models used in the IPCC forecasts are not statistical, but are physical in nature. They are self-consistent descriptions of the whole system whose inputs are only the boundary conditions and the changes in external forces (such as the solar constant, the orbit, or greenhouse gases). They do not assimilate the surface data, nor are they initialised from it. Instead, the model results for, say, the mean climate, or the change in recent decades or the seasonal cycle or response to El Niño events, are compared to the equivalent analyses in the gridded observations. Mismatches can help identify problems in the models, and are used to track improvements to the model physics. However, it is generally not possible to ‘tune’ the models to fit very specific bits of the surface data, and the evidence for that is the remaining (significant) offsets in average surface temperatures between the observations and the models. There is also no attempt to tweak the models in order to get better matches to regional trends in temperature.
Mistaken Assumption No. 6: If only enough problems can be found, global warming will go away
This is really two mistaken assumptions in one: that there is so little redundancy that throwing out a few dodgy met. stations will seriously affect the mean, and that evidence for global warming is exclusively tied to the land station data. Neither of those things is true. It has been estimated that the mean anomaly in the Northern hemisphere at the monthly scale only has around 60 degrees of freedom – that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large scale month to month changes. Currently, although they are not necessarily ideally placed, there are thousands of stations – many times more than would be theoretically necessary. The second error is obvious from the fact that the recent warming is seen in the oceans, the atmosphere, in Arctic sea ice retreat, in glacier recession, earlier springs, reduced snow cover etc., so even if all met stations were contaminated (which they aren’t), global warming would still be “unequivocal”. Since many of the participants in the latest effort appear to really want this assumption to be true, pointing out that it doesn’t really follow might be a disincentive, but hopefully they won’t let that detail damp their enthusiasm…
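The redundancy point is easy to demonstrate numerically. A sketch, assuming (for simplicity) that every station sees the same large-scale monthly anomaly plus independent local noise:

```python
import numpy as np

rng = np.random.default_rng(4)
months, n_stations = 120, 3000

# Shared hemispheric anomaly plus station-level weather noise.
signal   = rng.normal(0, 0.5, months)
stations = signal + rng.normal(0, 1.0, (n_stations, months))

full_mean   = stations.mean(axis=0)  # all 3000 stations
subset_mean = stations[rng.choice(n_stations, 60, replace=False)].mean(axis=0)

r   = np.corrcoef(full_mean, subset_mean)[0, 1]
rms = np.sqrt(((full_mean - subset_mean)**2).mean())
print(f"60 stations vs 3000: r = {r:.3f}, rms difference = {rms:.3f} C")
```

Sixty randomly chosen stations track the full-network mean almost perfectly, which is why discarding a handful of dubious sites barely moves the large-scale result.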
What then is the benefit of this effort? As stated above, more information is always useful, but knowing what to do about potentially problematic sitings is tricky. One would really like to know when a problem first arose for instance – something that isn’t clear from a photograph from today. If the station is moved now, there will be another potential artifact in the record. An argument could certainly be made that continuity of a series is more important for long term monitoring. A more convincing comparison though will be of the existing network with the (since 2001) Climate Reference Network from NCDC. However, that probably isn’t as much fun as driving around the country taking snapshots.
Vernon says
re: 248
You’re missing the point. Someone firing up the barbie would be random noise (I think), but having AC exhaust or more concrete around would not be random. That is what you’re ignoring. Changing the environment, and not knowing the environment changed, would introduce a bias that could be misread as something else. I am not worried about the random event; that can be filtered, in signal processing terms. But changing the environment without documenting the change will cause a bias. Undocumented bias is not good – do you see the difference?
Pianoguy says
Considering the number of lines of evidence that point to AGW, it’s astonishing that anyone would even bother to attempt discrediting the surface measurements. Sure, they’re probably off by a couple hundredths of a degree, though there’s no compelling reason to think they’re too high instead of too low.
But so what? The tropospheric and stratospheric measurements – unpolluted by UHI – are consistent with AGW. The ocean is acidifying, consistent with rising CO2 levels. Changing snowfall patterns in Greenland are consistent with AGW. And so on, and on, and on, from disappearing arctic sea ice to the new species of insects that have taken up residence in my city.
So what possible gain is there from such a breathtakingly myopic inquiry?
Here’s a thought experiment. It’s early December, 1941; you’re an American intelligence officer in the Pacific; and you receive evidence of a large Japanese fleet steaming towards Pearl Harbor. Do you take action? Or do you conclude that we should do nothing until we know exactly how many airplanes are on each aircraft carrier?
(Just to improve the analogy: Taking action against the fleet would slow economic growth. ;-))
ray ladbury says
Vernon, OK, let’s take your air con idea. Now presumably, the air con was installed at some point in time, so if there is a sudden change in the data, we know something’s up, and we exclude or downweight that station. Moreover, that change is not reflected in nearby stations, so we identify it that way as well and can average or smooth it out.
Let’s look at increasing pavement. Well, again, the paving took place within a very short window of time (unless you used a paving company in the Chicago area), so over some very short time we see a significant change AND it isn’t reflected in the surrounding data. Hmm, our analyst (or rather our algorithm) says, Something’s up. OK, let’s say we have a paving company that gets paid by the hour and they lay down a square foot of asphalt a day–gradual warming, not good. But it still isn’t reflected in surrounding stations, and so gets downweighted, averaged out or otherwise corrected. See, that’s the thing–to bias the data, you have to have noise that looks at least very much like your signal, and since your signal is both gradual and global, that’s really hard to do. So, can YOU come up with a noise source that looks like the signal? Do you even know what the signal looks like? Do you know what techniques are used to separate signal from noise? If not, then how do you expect to contribute constructively to the data analysis?
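(A sketch of the neighbour-comparison logic just described, with all magnitudes invented: the shared climate signal cancels in the suspect-minus-neighbours difference series, leaving the artificial step sticking out of the noise.)

```python
import numpy as np

rng = np.random.default_rng(5)
years = 50
regional = np.cumsum(rng.normal(0.02, 0.1, years))  # shared climate signal

neighbours = np.array([regional + rng.normal(0, 0.15, years) for _ in range(5)])
suspect = regional + rng.normal(0, 0.15, years)
suspect[30:] += 0.7  # pavement laid / AC installed in "year 30"

diff = suspect - neighbours.mean(axis=0)         # climate signal cancels here
z = (diff - diff[:30].mean()) / diff[:30].std()  # score against pre-jump noise
print("flagged years (|z| > 3):", np.where(np.abs(z) > 3)[0])
```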
caerbannog says
You’re missing the point. Someone firing up the barbie would be random noise (I think), but having AC exhaust or more concrete around would not be random.
When I look at global temperature anomaly maps, I see that the most dramatic warming is occurring in places like Alaska, northern Canada, and Siberia. The warming in the continental USA is lagging way behind observed warming of the high northern latitudes. I rather doubt that the dramatic warming observed up north is the result of Eskimos setting up bbq grills or AC units or whatever next to weather monitoring stations.
To put things into context, check out http://gristmill.grist.org/images/user/6932/global_anomalies.gif
Do you seriously think that a few AC units in the USA are going to have any impact at all on observed global temperature anomalies?
This is just the M&M “noise-only” hockey stick PC with a 0.03 eigenvalue magnitude all over again.
pat n says
After posting to realclimate from late 2004 to the present with little or no feedback on my posts about hydrologic and temperature changes at climate stations in the Upper Midwest and elsewhere in the US, including Alaska, and on my not-so-great treatment by NOAA’s NWS river forecast center for the Upper Midwest, I figured that, given the title of this realclimate blog, there might be some interesting feedback concerning that work even though it didn’t meet peer review criteria. I figured wrong; this man is still an island in urban and unprofessional heat.
Hank Roberts says
Answer: You audit the instruments, demand more precise information, and throw out anything dubious.
“… Pvts. Eliot and Lockard were manning the radar at Opana Point. They noticed a large blip on the scope and called in to the as-yet not fully functional Fighter Information Center. Pvt. McDonald took the call and located the sole officer at the Center and asked him to call the operators back. Lt. Kermit Tyler, having ended his first tour of training at the newly established Fighter Control Center, received the report and, thinking it was a flight of B-17s due in from the mainland, told the operators to “forget it.” The report went no higher than that. Interestingly enough, the new radars tracked the planes coming and going, but the Army did not tell the Navy about this pointer to the Japanese carriers until the 8th…” http://www.ibiblio.org/pha/myths/index.html
Philippe Chantreau says
Re 239: looking at data from Argentina’s large variety of latitudes, that winter may be more atypical than brutal. When Buenos Aires and Mar Del Plata were having snow and slightly below zero temps, Ushuaia (southernmost town in the world) had essentially the same temperatures, not colder. Looks like a weather event, not a climate trend. Meanwhile, Australia is not so chilly and its multiyear drought far from relieved by the recent torrential and short-lived rains.
Eli Rabett says
#207: one picture ain’t worth a bucket of warm spit. You need a history of changes over the observing period, or something like the Climate Reference Network run in parallel for a period of time. Remember, it is the anomalies that count, not the absolute temperatures.
Fred says
Check this US Carbon Footprint Map out; it has an interactive United States carbon footprint map illustrating the greenest states. The site has all sorts of stats on individual state energy consumption, demographics and state energy offices.
http://www.eredux.com/states/
Harold Pierce Jr says
Gavin:
Is there a method for changing this long linear list of posts into a tree structure? With so many posts it gets really difficult to find anything. Also I notice that multiple subthreads start to develop. Is there some secret hotkey or macro I can use to produce a tree structure?
To Anybody: Is there a software app that can sort all of the responses in this webpage into groups based upon an initial post by a poster? Can Google do this?
Jim Cripwell says
Re 257. You are missing the point. Would people who live in Argentina, and places at similar latitudes in the southern hemisphere, agree with the statement that, here in July 2007, “Winters are still starting later and ending earlier”? It is my contention that they would not. The quotation was given as a reason for believing that, currently, temperatures are still rising as fast as they have been in recent years.
Sam says
RE: 242: here is your Indiana data
http://climatesci.colorado.edu/
FurryCatHerder says
Alright, I freely admit I’ve not read all 261 comments, so I’m sure I’ve missed something. But it bothers me that UHI effects are massaged out of the data without taking into consideration where that heat is going. Because it has to go somewhere, it doesn’t just disappear. Right?
And because I’ve got a teenager I must wake up and feed breakfast, I’m just going to throw out an analogy, and then watch the ensuing shredding of said analogy —
To me, ignoring UHI is like ignoring a candle in the middle of the room. Sure, that candle is very hot, but it’s localized (so the argument goes …), so we’ll ignore it. Except that the candle is also warming the room, and having ignored the candle it seems that the cause of the warming is being ignored.
Now, I don’t think that is an argument against AGW. I think it’s part of AGW, and I think that by arguing against the UHI effect, people are playing into climate change deniers’ hands. The response, to me, should be “Why yes, the UHI effect is real, and it is also contributing to global warming because that heat has to go somewhere.” If you look at power consumption in urban regions, power consumption exceeds all of the solar radiation falling on that area. At ~1,360 watts per square meter, a nice multi-story IT-rich building can produce more heat than the sun – and that heat, along with the heat produced generating that power from carbon-based sources, has to go somewhere.
jd says
re #258.
Of course 1 picture is not going to tell anyone anything about the changes that have taken place, but it is the start of a record that can be referenced in the future, so that when more pics are taken they have something for comparison.
I wonder if this one will get posted or dumped like the last one?
Rod B says
Ray (253), is that how it actually works, or is this just a well thought out idea?? After the anomaly next to the air conditioner exists for a long period at the same slightly higher temp, is it still adjusted downward (because it’s still slightly off neighboring readings), or does the real algorithm assume it now must be correct and accept it?
caerbannog says
RE: 242: here is your Indiana data
http://climatesci.colorado.edu/
Interesting…. but when I look at http://gristmill.grist.org/images/user/6932/global_anomalies.gif, I see that Greenland has warmed far more than Indiana has. I can only imagine the magnitude of the land-use changes on Greenland that must be responsible for the warming there. Ditto for Alaska, the Northwest Territories, and northern Siberia.
mankoff says
Re #238
I realize this comment is irrelevant to SurfaceStations.org and/or their use of my Google Earth KML file. As with GISTEMP on the NASA page, anyone can take it and do what they like, and we have to trust they use it appropriately.
I’m actively trying to improve that KML layer, and I think photographs in each bubble would help. I’ve heard back from SurfaceStations.org and they assuaged my fears and are interested in this too. Should we do this we’ll work together to make sure that the photographs are statistically representative (i.e. not only the “How Not To Measure” photos but the “How To” also).
ray ladbury says
Rod, I’m not an expert on the algorithms, but I’m also no smarter than the guys who are doing the analysis. If I can figure this out, so can they. Tamino has said they do jackknifing, and several have pointed out that both time series and spatial comparisons figure into the analysis. My data analysis experience is in very different fields, but you never have pristine data, and it’s not worth it usually to try to make the data perfect when you can get the same answer by doing proper cleaning, filtering and error analysis on the data you have.
Again, the most innovative creature on the planet is a graduate student desperate to graduate who has a crappy dataset. The real reason to care about the quality of the stations is because it makes the analysis easier. It’s the difference between a 100 page thesis and a 200 page thesis–and probably an extra year of grad school as well.
Philippe Chantreau says
Re 261: It seems to me that you picked a select location that serves your purpose. What really matters is global heat content. Argentina’s climate is mostly influenced by the South Atlantic. Regardless, whether you’re making your assertion based on this particular 2007 winter event or on a trend for the whole hemisphere, I am still very much unconvinced that it compensates for what is observed elsewhere. How much would Argentina, Chile, South Africa and Australia have to cool down to counterbalance the later/warmer/shorter Siberian and Canadian winters?
Global climate includes local variations that may be atypical. So far, for Argentina, the big differences are increased precipitation and decreased amplitude (lower highs and higher lows), which may be related. That does not change the whole picture.
FurryCatHerder says
On the subject of UHI and temperature records, I neglected to mention an observation based on data I gather myself – I have 5 minute interval data for two outdoor thermometers and the disagreements between the two can be as great as 10F. The difference appears to be caused by thermal radiation from surrounding structures. The more “accurate” of the two doesn’t show the sharp rises contained in the less accurate of the two, but it also doesn’t show the sharp drops of the other.
(date, hour, sensor 1 °F, sensor 2 °F)
06.26.07 00:00 76.6 74.4
06.26.07 01:00 76.1 74.4
06.26.07 02:00 76.1 73.9
06.26.07 03:00 75.9 73.7
06.26.07 04:00 75.9 73.7
06.26.07 05:00 75.9 73.7
06.26.07 06:00 75.9 73.7
06.26.07 07:00 75.9 73.7
06.26.07 08:00 76.1 73.9
06.26.07 09:00 77.9 77.3
06.26.07 10:00 79.3 78.0
06.26.07 11:00 77.3 73.4
06.26.07 12:00 77.9 77.1
06.26.07 13:00 81.6 81.6
06.26.07 14:00 86.1 83.4
06.26.07 15:00 85.6 82.4
06.26.07 16:00 86.9 86.3
06.26.07 17:00 88.1 88.1
06.26.07 18:00 88.7 90.8
06.26.07 19:00 85.6 84.0
06.26.07 20:00 84.0 81.6
06.26.07 21:00 81.6 79.5
06.26.07 22:00 80.2 78.2
06.26.07 23:00 79.7 77.5
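A quick sketch of one way to quantify the disagreement in the log above (assuming the two columns are the simultaneous readings, in °F, from the two sensors):

```python
# Hourly readings transcribed from the table above (degrees F).
a = [76.6, 76.1, 76.1, 75.9, 75.9, 75.9, 75.9, 75.9, 76.1, 77.9, 79.3, 77.3,
     77.9, 81.6, 86.1, 85.6, 86.9, 88.1, 88.7, 85.6, 84.0, 81.6, 80.2, 79.7]
b = [74.4, 74.4, 73.9, 73.7, 73.7, 73.7, 73.7, 73.7, 73.9, 77.3, 78.0, 73.4,
     77.1, 81.6, 83.4, 82.4, 86.3, 88.1, 90.8, 84.0, 81.6, 79.5, 78.2, 77.5]

diffs = [x - y for x, y in zip(a, b)]
worst = max(diffs, key=abs)
print(f"mean difference:    {sum(diffs)/len(diffs):+.2f} F")
print(f"largest difference: {worst:+.1f} F at hour {diffs.index(worst):02d}:00")
# On this particular day the sensors differ by up to ~4 F; the reported
# 10 F disagreements presumably occur on other days.
```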
Rather than relying on a small number of weather stations in an area, whether it’s one or two weather stations at local airports and military bases (what we have here), or stations that get moved over time, I’d argue that a significantly large number of data points are needed to accurately know the surface temperature in developed areas. In my part of the planet the “Weather Data Network” (or whatever they call it) routinely shows temperature variations of several degrees.
Steve Jewson says
This discussion reminds me of one of the unfortunate ways that I feel the global atmospheric science community lets society down: the surface data being discussed is almost completely proprietary, and not publicly available (except at considerable cost) even though it was collected at taxpayers’ expense. Bottom of the class are UKMO and Meteo France. The only country (that I’m aware of) that does a good job of making its data available is the US.
I’m not a skeptic, but this is a bit of an Achilles heel in the climate change argument: supposing someone from outside the state-funded research community (such as myself) wanted to use observed data to make their own study of the patterns of global climate change (as I would indeed like to). Well, they can’t, because they can’t get the data (legally). At that point, they either have to believe you or not: it’s a matter of faith. I *do* believe you, mostly, but I could understand why some people would, at the very least, be a little bit skeptical of your claims, because of this lack of transparency. In the UK, the government’s line on climate change is effectively: “UK climate change is real, but if you want to check for yourself you have to pay the UKMO lots of money”…sounds unfortunately like a snake oil salesman.
If there are any skeptics out there reading this, maybe you could pick up on this issue, until the UK govt, and various other governments around the world, are shamed into making the climate research they fund a bit more transparent. Don’t stop until historic climate data, and meta-data, is freely downloadable for anyone who wants to look at it.
I suspect that in other areas of science it would simply not be acceptable to publish results based on data that isn’t freely available (does anyone know any comparative examples?), but for historical reasons climate science is an exception. This never really mattered in the past, since no-one else bothered to look at the data anyway, but given the societal importance of climate change, it’s now very unfortunate.
(note that I appreciate that the authors of realclimate.org are just cogs in the machine, and not in a position to make decisions or not about the availability of climate data, so don’t take this as a personal attack).
[Response: All of the data being discussed here is available at no cost from NOAA or GISS, regardless of its country of origin. There is additional data that comes from higher density observing networks in many countries that is also available (Norway, NZ for instance). However, many National Weather Services have been given mandates to develop commercial products and that has led them to restrict some analyses to fee paying customers. For the most part that is irrelevant for large scale considerations, but it is obviously a little frustrating at times. Don’t complain here though – complain to the national governments that impose such commercial pressures in the first place. – gavin]
ray ladbury says
Furry Cat Herder, your little exercise in #270 is an excellent illustration of the power of an oversampled network–although as you point out, to get a better idea of which thermometer was more accurate, you’d need at least one other thermometer in another nearby location.
Indeed, with such a network we can eliminate or downweight points that are affected by local conditions such as UHI and look for the gradual, global signal due to increased greenhouse gases, or we can concentrate on investigating the UHI effect by emphasizing local and short-term variations. Same dataset, different signals, different analyses to bring down the noise. Or put another way–one man’s signal is another man’s noise.
I believe Pielke protests that local conditions have been given short shrift in the analyses to date. He may be right. However, the chances of local conditions being responsible for even a small portion of the warming attributed to anthropogenic CO2 are virtually nil–and he should realize this.
John Mashey says
re: #261 Jim Cripwell
I’m not exactly sure how to define “winters starting later and ending earlier”, but the state of ski resorts may be a useful surrogate, as it is a topic of intense interest to some people, and so it gets reported.
I live about 5 hours’ drive from Lake Tahoe, which has many ski resorts, and have skied there (some) almost every year since 1983. A few years ago, we bought ski property in Canada, which requires flying San Francisco -> Seattle -> Kelowna, and then taking an hour bus ride to get to Big White, i.e., about 8-9 hours door-to-door.
Q: Why would somebody regularly do that when they could just drive to Tahoe?
A: Because the ski season around here seems less predictable than it used to be. In fact, the Sierra Ski Resorts are especially worried about global warming:
Google: sierra ski resorts global warming
But, what does that have to do with the Southern Hemisphere?
Many Australians ski at Big White, and I often talk to them on ski lifts. Why are they so far from home?
A: Because they’ve been getting less and less happy with their skiing Down Under.
Of course, some of this is increased crowds, but the most common complaint is that it’s getting harder to pick a good week six months in advance. Of course, good ski conditions not only require reasonable snowfall, but that temperatures stay cool, as even a few days above freezing, with rain, can wreck the conditions.
Here are a few references.
http://www.news.com.au/heraldsun/story/0,21985,21712611-5012935,00.html
http://www.unep.org/geo/geo_ice/PDF/GEO_C4_LowRes.pdf
The second is a nice report called “Snow”, with good maps, which says:
“mean monthly snow-cover extent in the Northern hemisphere has decreased at a rate of 1.3 per cent per decade during the last 40 years.”
Of course, in the Southern Hemisphere, outside of Antarctica, serious snow is pretty rare [Chile/Argentina, New Zealand, a few spots in Australia, and a few mountains elsewhere].
It’s pretty easy to summarize the overall effects:
1) Lower resorts are hurting.
2) Even at higher resorts, the snow depth (which always varies considerably) is trending slowly lower; as in the USA, resorts are doing more snow-making, and planning still more, to stay in business.
3) Individual ski resorts rarely say anything official but “come on out, the snow is great!” which makes it hard to find long-term data from them. However, a lot of Australians are worried about the long-term future of skiing there.
So far New Zealand, further south, with higher mountains and more precipitation, seems OK on the ski front, and quite happy to take Australian skiers’ money, but quite a few prefer to make the long flight to Canada.
Despite the cool weather in Buenos Aires this year, Argentina and Chile ski resorts mostly got off to a late start due to lack of snow.
Of course, one might argue that skiing is an especially energy-intensive sport (particularly when you have to make snow), and it ought to disappear anyway, but meanwhile, ski resorts provide a useful (albeit somewhat anecdotal) surrogate for the nature of winter around the world.
tamino says
Re: #271 (Steve Jewson)
You can find quite a lot of temperature time series on line. There’s the GHCN (google will probably find it for you), which gives easy access to the entire database, and covers the earth. GISS makes their data available. And the European Climate Assessment & Dataset Network has daily data for temperature, pressure, humidity, snow cover, sunshine, cloud cover for many locations throughout Europe. So if you’re serious about your project, you can get started.
FurryCatHerder says
Ray @ 272:
Ray,
I know which one is more accurate because I have another thermometer which doesn’t record temperatures :)
On the other paw (oh, my name is Julie if you want to stop spelling my ‘nym out longhand, tho I also respond to FCH), my point really isn’t just that small datasets in developing regions have “problems”, it’s also that there is significant heat being produced by these regions and removing urban data points seems like a bad idea to me.
It’s a shame I’m not willing to have a wireless thermometer run over by a car, because I’d really like to find out what the air temperature above asphalt is over the span of a day. Even more interesting would be the air temperature above a parking lot full of closed up cars that are each 150°F on the inside. And while most of the planet is still covered with water (and more so every day …), I’d tend to think that land use changes have the potential to drive surface temperature where people live significantly more than CO2.
Maybe out in the rural areas, where food production will be harmed, and up in glacial regions where runoff will increase sea level, CO2-related warming is going to dominate, but I’m putting my money on UHI effect as the #1 driver of inhabited area temperature increases. I greatly enjoy the 3 to 5°F drop when I drive from downtown out to the ‘burbs. And I dread what’s happening as the 3 adjacent cities all continue to develop around my neighborhood and will likely raise local temps in the ‘hood by 3 to 5°F due to UHI effect over the next 10 to 20 years.
ray ladbury says
Steve Jewson,
Just curious. Do you download the data from the human genome before you go to take a test to see if you have a predisposition to a form of cancer? Do you download the wind tunnel data for a jet design before you fly in it? Would you know how to make sense of this data? Please don’t take this as a criticism. I am really curious why this particular subject elicits such incredulity among lay people (especially technically inclined lay people), and I’m curious whether the curiosity sparked by this field would be sufficiently strong that a lay person would persist through the difficulty of analyzing the data. Data rarely yield their secrets to a casual glance. There is always noise, and there are always random and systematic errors. None of this poses insurmountable problems, but it does require a lot of work. What I’ve seen to date among skeptics is a willingness to look through the data until they find a couple of stations whose raw data seem to buck the overall warming trend, and then proclaim climate change a fraud. I wonder whether there might not be a few lay people out there, though, for whom this could be a teachable moment – where we might demonstrate how science is actually done: the perspiration behind the inspiration.
Lawrence Brown says
Far be it from me to lock horns with John Donne, but a man can be a (heat) island. If a human body (assuming he radiates as a blackbody) has a surface area of, say, 1.5 m^2 and a surface temperature of 32 deg C, and is in surroundings of 20 C, his net heat loss rate to the surroundings is given by sA(T1^4 − T2^4) = 5.67×10^-8 W/(m^2·K^4) × 1.5 m^2 × (305^4 − 293^4) K^4 ≈ 109 watts. As the temperature of the environment decreases, heat energy is lost at a greater rate.
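(The arithmetic checks out. A short verification, using the same symbols and values as above:)

```python
sigma = 5.67e-8                # Stefan-Boltzmann constant, W/(m^2 K^4)
area = 1.5                     # body surface area, m^2
t_skin, t_room = 305.0, 293.0  # 32 C and 20 C, in kelvin

q = sigma * area * (t_skin**4 - t_room**4)
print(f"net radiative loss: {q:.0f} W")  # ~109 W, as stated
```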
There were concerts here in NYC, and around the world, today to raise awareness about global warming. Why is it that when climatologists warn us about the potential hazards, the general public mostly doesn’t listen, but when someone like, say, Madonna does it, everybody perks up their ears?
Hank Roberts says
Steve, I’m always curious where people get their feelings and beliefs. When you wrote above:
“… I feel the global atmospheric science community lets society down: the surface data being discussed is almost completely proprietary, and not publicly available ….”
Everyone’s entitled to their own feelings, but I wonder how you came by this feeling. Is it from first hand experience, contacting agencies asking for data? Or did you read that statement somewhere, from someone you trusted to tell you the truth?
I often see strongly held beliefs and feelings being stated – often represented as facts – in postings made here by new readers coming in to RC.
As a longtime reader (and no expert myself) I find it’s really helpful to ask and learn — how did you come by what you believe to be true? What are your sources?
As the cartoon puts it: citation needed http://www.xkcd.com/c285.html
Eric Baker says
oops!
pat n says
Reading about changes in ski seasons near Lake Tahoe brings to mind the study of changes in the timing of snowmelt runoff on rivers in the Upper Midwest, links below. The study did not rely on air temperatures thus urban heat island was not a concern.
http://www.mnforsustain.org/climate_snowmelt_dewpoints_minnesota_neuman_table_figure1.htm
http://www.mnforsustain.org/climate_snowmelt_dewpoints_minnesota_neuman.htm
Updated figures through 2007 are at: http://picasaweb.google.com/npatphotos
Vernon says
Ray and Eli,
While you can come up with all the rationalization that you want, if the environment of the stations becomes more urbanized, then bias is being introduced which cannot be distinguished from a warming trend statistically without actually inspecting the site. Arguing against this is sophistry; being so against validation of the underlying data makes no sense to me.
Signal processing I understand, and as far as I can tell, tracking the global temperature is nothing more than signal processing. If you don’t identify the biases, then you cannot correct for them, and those biases could invalidate your results. Please note I am saying could, not will, so have a site inspection and determine your biases, then determine what the signal is without the bias.
Philippe Chantreau says
Re 275: What exactly is that oops for? What this article means is that in the last big interglacial, there was ice where that sample was taken. It does not take into consideration today’s melting due to coating with dark particles, which some say plays a more important role than temperatures (recent SciAm article and corresponding paper). The observed melting is definitely more than what would be expected with current temps. There is also the fact that sea levels have risen more than expected and nobody seems to really know why. And it is possible that the temperatures will rise much more than the GCMs predict, due to still poorly understood feedbacks. What we are experiencing now is new; I have no doubt that there are surprises in store.
Hank Roberts says
Eric, it’s been discussed already. Off topic here. Need a pointer?
Steve Reynolds says
gavin> All of the data being discussed here is available at no cost from NOAA or GISS, regardless of its country of origin.
Does that apply to raw data and intermediate correction step data? Those are critical for studies of correction accuracy.
William Jockusch says
Sorry for the thread hijack here . . .
There is a professor of forecasting who appears to be willing to bet $10,000 that he can forecast global temperatures better than Al Gore. Gore turned him down. This story has been gaining traction on conservative sites. Seems like the kind of thing that cries out for a response.
Google “scott armstrong global warming” to learn more.
ray ladbury says
Re: 275. No matter what the discovery, there will always be some idiot trumpeting that it proves his pet theory, and usually he will have no understanding of the discovery or even his own theory.
Timothy Chase says
Eric Baker (#275) wrote “oops!”
Judging from the link you provided, I assume you meant to refer us to the following:
Fossil DNA Proves Greenland Once Had Lush Forests; Ice Sheet Is Surprisingly Stable
July 5, 2007
http://www.sciencedaily.com/releases/2007/07/070705153019.htm
Well, that is interesting.
The dating done on the flora and fauna of Greenland was by means of DNA, radiocarbon and some method involving mineral luminescence. 450,000 years… Well, that isn’t exactly saying how thick the ice was at the time of the last interglacial. What it might suggest is that when the rest of the world had warmed up, Greenland found some way to keep cool… after a while.
But assuming the glaciers of Greenland were as stable as these investigators seem to think, as both Chris and Hank have pointed out, that still leaves open the question of where the water came from that raised the sea levels of the time. They seem to be suggesting it might have been some other large body of ice.
See:
Chris O’Niell’s #231 and Hank Roberts’ #246
Rod B says
Ray, I can accept that. I suspect, but don’t know at all, that the modelers do properly account for the apparent discrepancies, many of which would be of no consequence. Though there is a point where the data can be crappy enough that, while the grad student might get by, the science suffers. I just wonder where that point is and if the modelers are forever watchful…..
Pekka Kostamo says
Yesterday I visited an old friend, a strawberry farmer. His business is suffering. The grandmas had not come in their usual numbers to pick his main crop, the obvious reason being that the crop was ripe two weeks early.
The grandmas are used to starting up their strawberry jam pots mid-July, and probably come at that time, hopelessly too late for the berries. Extra advertising did not help too much. Nor had the grandmas yet understood the practical impact of global warming on this important aspect of their lives.
Definitely no urban heating on his site.
Similar things happen in the wild nature everywhere. Some organisms wake up when the spring temperatures come, others react to length of day (calendar). Where these organisms are strongly interdependent, disasters follow.
Timothy Chase says
William Jockusch (#282) wrote:
Armstrong is very good at what he does: self-promotion. From what I understand, he keeps statistics on how often he gets referred to and where. He is promoting a so-called “scientific methodology” of forecasting – something which he invented himself.
If I remember correctly, ten cities are picked at random, for which both he and Gore would forecast, and Gore wouldn’t be able to make his own predictions but would have to go with the results of a climate model of his choice. By way of contrast, Armstrong would be using a so-called “naive” model of moving averages, with the bet running over a ten-year period, and with his prediction for any given year being the behavior from the previous year.
Armstrong likes to claim that what we have are, roughly, “scientists making forecasts, not scientific forecasts”, but when he makes this claim, it pays to keep in mind that by “scientific forecasts” he means nothing more nor less than something which conforms with his personal and oftentimes ambiguously-stated methodology.
… If I remember correctly.
Anyway, if Armstrong gets his name bandied about by the same people who celebrated a German school teacher for screwing up a couple of charts, I guess that is an accomplishment… of sorts.
Timothy Chase says
PS (to my post above)
The problem with Armstrong’s wager is that his forecast tracks the behavior from the previous year, and thus with a warming trend, it will track that warming trend, building it right into his “naive” forecast. Additionally it is local, specific to each city.
Climatology doesn’t work like that.
Climatology is regional – and whatever models Gore might pick from wouldn’t be making predictions specific to individual cities. Likewise, the purpose of the model would be to predict the behavior as it will be ten, twenty or a hundred years from now based upon the information which we have today, not by building into the forecast what had happened the previous year.
A large part of the power of climatology depends upon its dealing with large averages where the fluctuations matter a great deal less. What will be the behavior for a given decade is answered much more easily than what will happen on a particular day a month from now. No doubt Armstrong knows all of this already. But he hopes to ride the wave of controversy over climate change right into the limelight.
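A sketch of why the “naive” no-change forecast is less naive than it sounds in a warming climate (synthetic data, invented trend): by always echoing last year’s value, it inherits the very trend it claims to ignore.

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1990, 2018)
temp = 14.0 + 0.02 * (years - 1990) + rng.normal(0, 0.1, years.size)

# Persistence forecast: next year's temperature equals this year's.
persistence = temp[:-1]          # forecasts for years[1:]
errors = temp[1:] - persistence
print(f"mean persistence error: {errors.mean():+.3f} C per year "
      f"(about the underlying +0.020 C/yr trend)")
```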
pete best says
Re #282. A load of rubbish basically. First off, Gore’s predictions are over decades whilst this man’s assertions and bet are over the next decade, which hardly adds up, does it?
Just sounds like more obfuscation to me and a deliberate attempt to throw some more confusion on the subject. It’s on the TV and hence it must be true.
Personally I doubt that anyone will get the AGW message across in the USA due to right wing interests who are highly organised and funded.
Hank Roberts says
The ‘Armstrong Challenge’ thing is actually related to weather station data very, um, precisely. If Armstrong is as focused on specific single weather stations as his web page makes it appear, he’s illustrating all the same issues dealt with in this topic.
He’s confusing weather and climate. He’s asking for a climate model that can make ten year weather forecasts for ten specific weather station instrument locations. And he writes “Al Gore is invited to select any currently available fully disclosed climate model to produce the forecasts (without human adjustments to the model’s forecasts). Scott Armstrong’s forecasts will be based on the naive (no-change) model; that is, for each of the ten years of the challenge, he will use the most recent year’s average temperature at each station as the forecast for each of the years in the future ….”
Armstrong sums up Gore’s position (I think naively at best) as being that there are “… scientific forecasts that the earth will become warmer and that this will occur rapidly …. the challenge will involve making forecasts for ten weather stations that are reliable and geographically dispersed. An independent panel composed of experts agreeable to both parties will designate the weather stations. Data from these sites will be listed on a public web site along with daily temperature readings and, when available, error scores for each contestant….”
Wiggle words include “currently available” “fully disclosed” “human adjustment” “rapidly” “reliable” …. Sigh.
Seems quite in line with the general theme of casting doubts on the weather station instruments, one at a time, eh? Oh no, bad paint on that one. Nope, parking lot nearby. Nope, bird sitting on roof. Nope, SUV parked on top. Nope, that’s a WalMart, not a weather box …. Not to mention, oops, we need to wait a decade to see if anything’s happening.
We’re talking about a fraction of a degree C in a decade. No way that’s expected to be unambiguously predictable at ten individual points over ten years. The man’s asking for weather, not climate, prediction.
Heck, he might as well ask for predictions of the interest rate at ten individual banks to be predicted the same way.
Spencer Weart points out in the AIP History that the warming signal, relatively tiny compared to natural variability, is only beginning to emerge from that variability. No one bet on that, in the 1990s. Note 33: http://www.aip.org/history/climate/20ctrend.htm#N_33_
Stoat recently pointed out the problems with inference from five year trends (using large data sources; ten years from ten individual stations would be less reliable than five years from a large network, eh?). http://scienceblogs.com/stoat/2007/05/the_significance_of_5_year_tre.php
Tamino points out how, counterintuitively, it takes large amounts of noisy data to obtain useful information – which isn’t generally understood. This guy’s “challenge” falls flat for all the reasons well discussed, seems to me.
http://tamino.wordpress.com/2007/07/05/the-power-of-large-numbers/
Ok, that’s what a nonscientist can make of the ‘challenge/bet’ thing. At least it’s right on topic, seems to be part and parcel of the attack on the station data.
ray ladbury says
Rod B. and Vernon,
First off, Vernon, who’s rationalizing? All I am saying is that
1)all data are less than perfect;
2)there are very good techniques for dealing with imperfect data
3)these techniques are being applied
4)based on the agreement with the result from this data and many other independent lines of data, the techniques are being applied properly
5)the signal they are looking for here is so robust and global, it is hard to imagine how errors at a few individual stations could affect it.
6)you should know how the data are being used and have some idea of what is really a concern before you go off and start futzing with the network.
Now, Vernon, you may understand signal processing, but you sure aren’t thinking about how those techniques apply here or you wouldn’t be getting wrapped so tightly around the axle over a couple of photos of stations that violate siting guidelines. Now, understand that I am in no way saying, “Don’t do this.” I’m just saying you need to understand the network before you embark on your goose chase.
Rod B., yes, there does come a point where a dataset becomes unusable. We’re nowhere near that. Information theory as a guideline suggests that if >33% of the stations were complete crap we might have to start worrying. And yes, the modelers are forever vigilant–nobody wants to put out a crappy product. On the other hand, most of the data filtering algorithms can be implemented completely automatically while still reporting back on any changes. Personally, I think this is one of the most important revolutions in science from the last century–the ability to deal with data that has errors–and most people have never heard of it.
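A toy demonstration of one such technique, robust aggregation, where a median shrugs off a contaminated fraction of stations that would drag down a plain mean (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(7)
n, true_anomaly = 1000, 0.5

anoms = true_anomaly + rng.normal(0, 0.3, n)
bad = rng.choice(n, n // 4, replace=False)     # corrupt 25% of "stations"
anoms[bad] += rng.uniform(1.0, 3.0, bad.size)  # with large warm biases

print(f"plain mean: {anoms.mean():.2f} C (dragged up by the bad quarter)")
print(f"median:     {np.median(anoms):.2f} C (much closer to the true {true_anomaly} C)")
```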
Now, personally, I have to admit that there are a few denialists where I kind of like the idea of them traipsing through poison ivy and blackberry thickets to get to some of the more remote sites. It might be a character-building experience. But again, I strongly recommend understanding the system before you start, or you will likely be wasting your efforts. And if you are seriously concerned, you haven’t understood the system.
Leonard Evens says
Re 282 and wagers about global warming:
The “professor in forecasting” can find several experts who are ready to bet him on specific issues regarding global warming. See for example Realclimate, “Betting on climate change”, 14 June 2005, by James Annan. The last time I checked, no prominent skeptic had been willing to accept a well-posed wager.
Al Gore continues to do a great job in publicizing the issue of global warming and the necessity to start now to try to do something about it. It is easy for conservatives to pick on him, but he is not a scientist, nor does he claim to be one.
Vernon says
Ray,
How do you know it is only a few stations? How widespread is the bias? You can make assumptions, but that seems like a denialist stance where you would rather not know the truth. As for your argument:
1. I agree
2. I agree – only if the imperfections are random.
3. I agree – only if the imperfections are random.
4. So, it does not address my argument.
5. Wrong – the whole purpose of this is to properly identify the signal from the noise and accurately measure the delta in the signal in order to see the trend. Saying that there is a signal but you don’t know the bias and don’t care means that you don’t know the actual temperature and you don’t know the actual trend.
6. This is the worst statement I have seen you make. The truth is you should know the data collection points to identify bias. Then you should take the bias-adjusted data and use that.
If there was no bias then what you’re saying would be correct from a signal processing point of view. Those few pictures indicate that there is possible bias at many stations, and the bias appears to be due to urbanization which has all the biases being warming biases. This means that if you do not know and adjust for the bias, then your process could result in a temperature signal that is higher and changing upwards faster than it actually is.
If this is not reason enough to actually validate and profile the individual stations, then what is?
Lawrence Brown says
Attacking Al Gore is akin to killing the messenger. Gore is the trumpet player calling the Cavalry to battle, but by portraying him as a self-interested analyst and (former) politician to boot, the deniers stir their own ideological Cavalry to arms and are able to divert attention toward disrespecting the individual and away from the mostly incontrovertible evidence of the science. Smearing the troubadour, and even making the cavalry retreat, won’t stop the reality of global climate change.
Mark A. York says
Hey, I’m a denialist and I read these missives…
Comment by Ken Coffman — 2 Jul 2007 @ 11:53 am
Good. This is another one of my critics Gavin. Nice to firm up data I’ve collected from other stations. Things are much clearer now.
Matei Georgescu says
RE#290: “he means nothing more nor less than something which conforms with his personal and oftentimes ambiguously-stated methodology” … what exactly is this supposed to mean? The guy is an expert in this field, is he not? He has published in peer-reviewed, widely-respected journals, has he not? But, because he goes against the grain he’s all of a sudden a hack of some sort?
Hank Roberts says
> urbanization which has all the biases being warming biases.
Wrong on that belief stated as though it were a fact, see #20 above.
Who is telling you that story?