Observant readers will have noticed a renewed assault upon the meteorological station data that underpin some conclusions about recent warming trends. Curiously enough, it comes just as the IPCC AR4 report declared that the recent warming trends are “unequivocal”, and when even Richard Lindzen has accepted that the globe has in fact warmed over the last century.
The new focus of attention is the placement of the temperature sensors and other potential ‘micro-site’ effects that might influence the readings. There is a possibility that these effects may change over time, introducing artifacts or jumps into the record. This is slightly different from the more often discussed ‘Urban Heat Island’ effect, which is a function of the wider area (and so could be present even in a perfectly set up urban station). UHI effects will generally lead to long term trends in an affected station (relative to a rural counterpart), whereas micro-site changes could lead to jumps in the record (of any sign) – some of which can be very difficult to detect in the data after the fact.
There is nothing wrong with increasing the meta-data for observing stations (unless it leads to harassment of volunteers). However, in the new found enthusiasm for digital photography, many of the participants in this effort seem to have leaped to some very dubious conclusions that appear to be rooted in fundamental misunderstandings of the state of the science. Let’s examine some of those apparent assumptions:
Mistaken Assumption No. 1: Mainstream science doesn’t believe there are urban heat islands….
This is simply false. UHI effects have been documented in city environments worldwide and show that as cities become increasingly urbanised, increasing energy use, reductions in surface water (and evaporation) and increased concrete etc. tend to lead to warmer conditions than in nearby more rural areas. This is uncontroversial. However, the actual claim of the IPCC is that the effects of urban heat islands are likely small in the gridded temperature products (such as those produced by GISS and the Climate Research Unit (CRU)) because of efforts to correct for those biases. For instance, GISTEMP uses satellite-derived night light observations to classify stations as rural or urban and corrects the urban stations so that they match the trends from the rural stations before gridding the data. Other techniques (such as correcting for population growth) have also been used.
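For readers who like to see the logic spelled out, here is a minimal sketch of the kind of trend-matching adjustment described above (an illustration with invented station data, not the actual GISTEMP procedure, which is documented in the Hansen et al. papers):

```python
import numpy as np

# Invented annual-mean series (deg C) for an urban station and the mean of
# its rural neighbours -- purely illustrative numbers.
years = np.arange(1950, 2010)
rural = 10.0 + 0.015 * (years - 1950) + np.random.normal(0, 0.2, years.size)
urban = 12.0 + 0.030 * (years - 1950) + np.random.normal(0, 0.2, years.size)

# Fit the long-term trend of each series.
rural_slope = np.polyfit(years, rural, 1)[0]
urban_slope = np.polyfit(years, urban, 1)[0]

# Remove the excess urban trend so the adjusted series matches the rural
# trend, pivoting about the most recent year (the general idea behind an
# urban correction, not GISTEMP's exact implementation).
urban_adj = urban - (urban_slope - rural_slope) * (years - years[-1])

print(f"urban trend: {urban_slope:.3f} C/yr -> "
      f"{np.polyfit(years, urban_adj, 1)[0]:.3f} C/yr after adjustment")
```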
How much UHI contamination remains in the global mean temperatures has been tested in papers such as Parker (2005, 2006) which found there was no effective difference in global trends if one segregates the data between windy and calm days. This makes sense because UHI effects are stronger on calm days (where there is less mixing with the wider environment), and so if an increasing UHI effect was changing the trend, one would expect stronger trends on calm days and that is not seen. Another convincing argument is that the regional trends seen simply do not resemble patterns of urbanisation, with the largest trends in the sparsely populated higher latitudes.
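The logic of the calm/windy comparison is easy to illustrate schematically (a toy sketch with synthetic data, not Parker’s actual analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 365 * 30
t_years = np.arange(n_days) / 365.25

# Invented daily anomalies: a common warming trend plus weather noise.
anoms = 0.02 * t_years + rng.normal(0, 1.0, n_days)
windy = rng.random(n_days) < 0.5          # classification into windy/calm days

# If a growing urban heat island were contaminating the record, the trend
# computed from calm days would exceed the trend computed from windy days.
calm_trend = np.polyfit(t_years[~windy], anoms[~windy], 1)[0]
windy_trend = np.polyfit(t_years[windy], anoms[windy], 1)[0]
print(f"calm-day trend:  {calm_trend:.4f} C/yr")
print(f"windy-day trend: {windy_trend:.4f} C/yr")
```

Here both subsets share the same underlying trend, which is the kind of result Parker found in the real data.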
Mistaken Assumption No. 2: … and thinks that all station data are perfect.
This too is wrong. Since scientists started thinking about climate trends, concerns have been raised about the continuity of records – whether they are met. stations, satellites or ocean probes. The danger of mistakenly interpreting jumps due to measurement discontinuities as climate trends is well known. Some of the discontinuities (which can be of either sign) in weather records can be detected using jump point analyses (for instance in the new version of the NOAA product), others can be adjusted using known information (such as biases introduced by changes in the time of observations or by moving a station). However, there are undoubtedly undetected jumps remaining in the records, but without the meta-data or an overlap with a nearby unaffected station to compare to, these changes are unlikely to be fixable. To assess how much of a difference they make though, NCDC has set up a reference network which is much more closely monitored than the volunteer network, to see whether the large scale changes from this network and from the other stations match. Any mismatch will indicate the likely magnitude of differences due to undetected changes.
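As a rough illustration of what a jump-point test is looking for, consider this toy example (just the basic idea of locating a step change, not the actual NOAA homogenisation algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600                                   # months of anomalies
series = rng.normal(0, 0.5, n)
series[350:] += 0.8                       # artificial 0.8 C step, e.g. a station move

def step_statistic(x, i):
    """t-like statistic comparing the means before and after month i."""
    a, b = x[:i], x[i:]
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return abs(a.mean() - b.mean()) / se

stats = [step_statistic(series, i) for i in range(24, n - 24)]
best = int(np.argmax(stats)) + 24
print(f"most likely break at month {best} (statistic {max(stats):.1f})")
```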
It’s worth noting that these kinds of comparisons work because of the large distances over which monthly temperature anomalies correlate. That is to say, if a station in Tennessee has a particularly warm or cool month, it is likely that temperatures in, say, New Jersey also had a similar anomaly. You can see this clearly in the monthly anomaly plots or by looking at how well individual stations correlate. It is also worth reading “The Elusive Absolute Surface Temperature” to understand why we care about the anomalies rather than the absolute values.
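The spatial coherence of anomalies can be demonstrated with a few lines of synthetic data (a sketch; the two ‘stations’ below are invented and share a common regional signal):

```python
import numpy as np

rng = np.random.default_rng(2)
months = 480                                         # 40 years of monthly data
regional = rng.normal(0, 1.0, months)                # shared regional anomaly
seasonal = 10 * np.sin(2 * np.pi * np.arange(months) / 12)

# Two hypothetical stations with different climatologies but the same
# regional variability plus independent local noise.
station_a = 11.0 + seasonal + regional + rng.normal(0, 0.5, months)
station_b = 8.5 + 0.8 * seasonal + regional + rng.normal(0, 0.5, months)

def monthly_anomalies(temps):
    """Subtract each calendar month's long-term mean."""
    anoms = np.asarray(temps, dtype=float).copy()
    for m in range(12):
        anoms[m::12] -= anoms[m::12].mean()
    return anoms

# The anomalies correlate strongly even though the absolute temperatures
# and seasonal cycles of the two stations differ.
r = np.corrcoef(monthly_anomalies(station_a), monthly_anomalies(station_b))[0, 1]
print(f"anomaly correlation: {r:.2f}")
```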
Mistaken Assumption No. 3: CRU and GISS have something to do with the collection of data by the National Weather Services (NWSs)
Two of the global mean surface temperature products are produced outside of any National Weather Service. These are the products from CRU in the UK and NASA GISS in New York. Both CRU and GISS produce gridded products, using different methodologies, starting from raw data from NWSs around the world. CRU has direct links with many of them, while GISS gets the data from NOAA (who also produce their own gridded product). There are about three people involved in doing the GISTEMP analysis and they spend a couple of days a month on it. The idea that they are in any position to personally monitor the health of the observing network is laughable. That is, quite rightly, the responsibility of the National Weather Services who generally treat this duty very seriously. The purpose of the CRU and GISS efforts is to produce large scale data as best they can from the imperfect source material.
Mistaken Assumption No. 4: Global mean trends are simple averages of all weather stations
As discussed above, each of the groups making gridded products goes to a lot of trouble to eliminate problems (such as UHI) or jumps in the records, so the global means you see are not simple means of all data (this NCDC page explains some of the issues in their analysis). The methodology of the GISS effort is described in a number of papers – particularly Hansen et al 1999 and 2001.
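Schematically, the final step from a gridded anomaly field to a global mean is an area-weighted average over the grid boxes that have data, something like the following (a sketch of the principle, not the GISS or CRU code):

```python
import numpy as np

def global_mean_anomaly(grid_anom, lats):
    """Area-weighted mean of a (n_lat, n_lon) anomaly grid; NaN marks no data."""
    weights = np.cos(np.radians(lats))[:, None] * np.ones_like(grid_anom)
    valid = ~np.isnan(grid_anom)
    return np.nansum(grid_anom * weights * valid) / np.sum(weights * valid)

# Example on an invented 5-degree grid with a data void near the South Pole.
lats = np.arange(-87.5, 90, 5)
grid = np.random.normal(0.5, 1.0, (36, 72))
grid[:3, :] = np.nan
print(f"area-weighted global mean anomaly: {global_mean_anomaly(grid, lats):.2f} C")
```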
Mistaken Assumption No. 5: Finding problems with individual station data somehow affects climate model projections.
The idea apparently persists that climate models are somehow built on the surface temperature records, and that any adjustment to those records will change the model projections for the future. This probably stems from a misunderstanding of the notion of a physical model as opposed to a statistical model. A statistical model of temperature might for instance calculate a match between known forcings and the station data and then attempt to make a forecast based on the change in projected forcings. In such a case, the projection would be affected by any adjustment to the training data. However, the climate models used in the IPCC forecasts are not statistical, but are physical in nature. They are self-consistent descriptions of the whole system whose inputs are only the boundary conditions and the changes in external forcings (such as the solar constant, the orbit, or greenhouse gases). They do not assimilate the surface data, nor are they initialised from it. Instead, the model results for, say, the mean climate, or the change in recent decades or the seasonal cycle or response to El Niño events, are compared to the equivalent analyses in the gridded observations. Mismatches can help identify problems in the models, and are used to track improvements to the model physics. However, it is generally not possible to ‘tune’ the models to fit very specific bits of the surface data and the evidence for that is the remaining (significant) offsets in average surface temperatures in the observations and the models. There is also no attempt to tweak the models in order to get better matches to regional trends in temperature.
Mistaken Assumption No. 6: If only enough problems can be found, global warming will go away
This is really two mistaken assumptions in one: that there is so little redundancy that throwing out a few dodgy met. stations will seriously affect the mean, and that evidence for global warming is exclusively tied to the land station data. Neither of those things is true. It has been estimated that the mean anomaly in the Northern Hemisphere at the monthly scale only has around 60 degrees of freedom – that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large scale month to month changes. Currently, although they are not necessarily ideally placed, there are thousands of stations – many times more than would be theoretically necessary. The second error is obvious from the fact that the recent warming is seen in the oceans, the atmosphere, in Arctic sea ice retreat, in glacier recession, earlier springs, reduced snow cover etc., so even if all met stations were contaminated (which they aren’t), global warming would still be “unequivocal”. Since many of the participants in the latest effort appear to really want this assumption to be true, pointing out that it doesn’t really follow might be a disincentive, but hopefully they won’t let that detail damp their enthusiasm…
What then is the benefit of this effort? As stated above, more information is always useful, but knowing what to do about potentially problematic sitings is tricky. One would really like to know when a problem first arose, for instance – something that isn’t clear from a photograph taken today. If the station is moved now, there will be another potential artifact in the record. An argument could certainly be made that continuity of a series is more important for long term monitoring. A more convincing comparison, though, will be between the existing network and the (since 2001) Climate Reference Network from NCDC. However, that probably isn’t as much fun as driving around the country taking snapshots.
spencer says
For historical perspective, the very first person to compile weather data that showed global warming, G.S. Callendar back in 1938, already thought of the urban heat island effect and made an effort to compensate for it. All subsequent workers have taken it into account. Debates over just how to compensate for it began seriously as early as 1967. After much debate the issue was pretty much settled, in terms of figuring out how to compensate for the urban effect and detecting a warming trend anyway, by 1990. Refs. here.
Mistaken assumption no. 6-A: We need land station measurements to tell us global warming is underway.
Personally I got convinced that warming was underway in the late 1990s after borehole measurements in rocks around the world, far away from civilization, showed unmistakable evidence of warming over the past century… if you log temperature down the hole, you find that extra heat has been seeping down from the surface. I think any scientists not convinced by that would have been satisfied by the measurements of the oceans in the early 2000s that showed definitively that heat is seeping down there too. After all, most of the excess energy from any radiation imbalance will wind up in the oceans, and the top layers are undoubtedly getting warmer. Temperature measurements in the thin layer of air around cities don’t mean much in comparison. (Except of course for those of us who live in cities!)
Adam says
Tamino at Open Mind has some good posts on temperature records and covers the anomaly tracking of disparate stations (as mentioned in assumption 2). Re comment#1 he also covers boreholes pretty well, too.
Steve Horstmeyer says
Thanks Gavin:
A great summary and one easy to review and use in discussions with those who do not fully understand the breadth of data that indicate global warming and the great pains scientists take to ensure the quality of the data they use. Your distinction between a physical model and a statistical model is particularly clear.
Your comments are reminiscent of the argument Richard Dawkins makes for evolution in “The Ancestor’s Tale”. He states that Darwinian Evolution (DE) is proven beyond any reasonable doubt by the fossil record alone. Like the instrumental meteorological record, the fossil record has problems with temporal and spatial continuity, how representative a particular observation (i.e. fossil occurrence) is and a dearth of observations in many cases.
Using DNA and other biochemical evidence, while ignoring the fossil record altogether, DE is ALSO a slam dunk.
Here are two independent lines of evidence that both prove DE, so it is only a “theory” because all the small details have not been discovered or explained.
Climate science too has more than one conclusive independent line of evidence for GW. So do a thought experiment. Ignore the instrumental record and consider only the other lines of evidence you cited and what conclusion do you come to?
Finally, most members of the general public do not appreciate what scientists mean by the term “theory”. There is little in common between a theory like Oliver Stone’s theory of the Kennedy assassination in his film “JFK” and what a scientist calls a theory. Stone’s theory is supported by innuendo and unsubstantiated assertions which in many arguments are erroneously called “facts”.
To a scientist a “fact” is not a “fact” until it is demonstrated to be true. A fact is not merely an element of an argument for or against a particular issue. Only after an assertion is demonstrated to be true can the “fact” be used to build a logical chain of reasoning that elevates a hypothesis to a theory. Of course a theory in science relies on many “facts”.
Ray Ladbury says
Gavin,
Thanks for this post. It is very timely, as it appears that every denialist has gone into the business of producing their own global temperature trends from “selected” stations. Couple this with the use of 1998 (an anomalous El Niño year) to skew the data and make it look as if global temperatures are now falling, and you have a new onslaught against the contention that the globe is warming. One would think that the decline of the ice sheets might give them some clue. Unfortunately, I’m afraid the denialists will not be reading this missive. They seem bound and determined to trash this site as “biased”. Well, if insisting on good science is a bias, then, thank God for that bias.
Todd says
Another great RealClimate post. Thanks Gavin. I especially enjoyed Mistaken Assumption No. 6. Too bad the following also isn’t true:
If enough holes in their arguments can be found, global warming skeptics will go away!
Thanks for working towards the above goal.
Bishop Hill says
I can see that the UHI is different to the microsite effects you describe. You say that the temperature record is corrected for UHI. Is it corrected for the microsite effects too?
tamino says
Thanks Adam (#2) for the endorsement, and thanks to the moderators for the link.
I’ve actually posted often about the thermometer record, so to make them easier to find I’ve just posted a list of such posts (with links) on my blog. Enjoy!
Munin says
A request for clarification:
Presumably, models are changed, weighted or discarded on the basis of their agreement with observed climate. What’s the practical difference between doing this and “tuning” or “tweaking” the models to agree with the surface data?
[Response: Fair enough question. There is some discussion of this in our last model development paper. In there, you will see (fig 17) a comparison of the average surface temperatures of the model (in different seasons) with the CRU data. Although the pattern correlation is high, there are clear offsets in summer-time mid-continental temperatures (the model temps being up to 5 deg C too warm in places). The pattern of the mis-match is clearly much larger than any individual weather station could have produced and its ubiquity (N. Am., S. Am., Asia, Africa) indicates that it is a systematic problem. This cannot be fixed by fiddling with a parameter or two, but instead is a symptom of something more fundamental. Thus, model developers are spending a lot of time looking at the processes that are important in summer, continental climates (particularly surface moisture) and trying to see how the simulation of those processes can be made more realistic. What happens then is that the new physics will be tested and we will see whether it has improved the match. Usually it does, but not always, and yet we will generally keep a more realistic treatment in the model rather than go back to something we knew was inadequate. Surprisingly, this does overall reduce the biases in the model with time (see Table 5).
So what are the key points? 1) we only use large scale patterns to match to the models – not individual grid points, and not individual stations, 2) problems are tackled by looking at the physics, not tweaking knobs. Some knob tweaking goes on within the constraints of any physical representation, but there are very strong limits on what can be achieved by that – witness the large biases we are still stuck with. 3) we develop the model to improve the match to climatology, not to the trends. – gavin]
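(For reference, the ‘pattern correlation’ mentioned in the response above is just an area-weighted spatial correlation between the model and observed fields; a minimal sketch with hypothetical fields on a common 5-degree grid:)

```python
import numpy as np

def pattern_correlation(model, obs, lats):
    """Area-weighted correlation between two fields on the same lat/lon grid."""
    w = np.cos(np.radians(lats))[:, None] * np.ones_like(model)
    w = w / w.sum()
    m = model - np.sum(w * model)                    # remove area-weighted means
    o = obs - np.sum(w * obs)
    return np.sum(w * m * o) / np.sqrt(np.sum(w * m**2) * np.sum(w * o**2))

# Hypothetical climatological fields (deg C), purely for illustration.
lats = np.arange(-87.5, 90, 5)
obs_field = np.random.normal(14, 10, (36, 72))
model_field = obs_field + np.random.normal(0, 3, (36, 72))   # "model" with biases
print(f"pattern correlation: {pattern_correlation(model_field, obs_field, lats):.2f}")
```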
Adam says
Re #6 Bishop Hill. There are tests done to see if stations “stand out” from their neighbours (amongst others). Such stations can be excluded from the dataset. There’s more detail in some of the links above, or at the GHCN site amongst others. There are probably more technical descriptions of the techniques, but that’s the gist of it.
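Schematically, a neighbour check amounts to something like the following sketch (an illustration of the idea only, not the actual GHCN quality-control code):

```python
import numpy as np

def flag_outlier_months(candidate, neighbours, threshold=3.0):
    """Flag months where a station's anomaly departs strongly from its neighbours.

    candidate  : 1-D array of monthly anomalies for the station being checked
    neighbours : 2-D array (n_stations, n_months) of neighbouring anomalies
    """
    mean = np.nanmean(neighbours, axis=0)
    std = np.nanstd(neighbours, axis=0)
    z = (candidate - mean) / np.where(std > 0, std, np.nan)
    return np.abs(z) > threshold            # boolean mask of suspect months

# Invented example: five neighbours plus a candidate with one spurious spike.
rng = np.random.default_rng(3)
shared = rng.normal(0, 1, 120)
neighbours = shared + rng.normal(0, 0.3, (5, 120))
candidate = shared + rng.normal(0, 0.3, 120)
candidate[60] += 5.0                        # e.g. a transcription error
print("flagged months:", np.where(flag_outlier_months(candidate, neighbours))[0])
```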
Ken Coffman says
Hey, I’m a denialist and I read these missives…
bigcitylib says
I am having a little trouble with 5. As a matter of historical fact, would not your first models have “assimilated” the observational data? The way you describe the process, model building seems awfully Rationalistic (as opposed to Empirical): build the model and then compare. But how do you know how to even begin building your model?
[Response: To answer your first question. No. Physical climate models have never assimilated data in this sense. People started off with basic radiation physics, added in the dynamic equations and then clouds, and then better land surface schemes and oceans and sea ice etc. At each point, the match to observations and the variability improved. This point might become clearer once it’s realised that climate models are not developed just for the climate change problem, but as much more general tools to quantify the net effects of all the different processes we know about. -gavin]
Dan Hughes says
Can anyone point me to where each of the “Mistaken Assumption” has been stated by someone other than the author of this post?
And, aren’t the “large scale patterns” ultimately set by the individual stations plus unknown procedures/processes and maths? Surely these patterns cannot be independent of the numbers from the individual stations. If this were true, why can’t numbers for the individual stations simply be made up? A pointer to these procedures/processes and equations is also of interest. The absence of a pointer to a complete set of records and results, to a level of detail that allows independent verification and replication, will be taken to mean that such information is not available.
Thanks in advance.
[Response: Assuming you are not joking, I suggest you take a look at the comment threads on CA or Pielke Sr’s site. All those and more….. Of course, large scale patterns are made up of individual stations, but they average over a lot of the noise. Micro-site effects and their timing are not coherent over thousands of kilometers – large scale temperature anomalies are. The curious thing is that the GISS effort (exhaustively described in the papers linked to above) was specifically designed to do a different job from what was available from NOAA and CRU – a replication if you will – and the fact that it gives pretty much the same answers is a testament to the robustness of the result. GISS processes the raw data from NOAA and has no access to data other than what you can download for yourself. So if you want to do your own analysis with whatever methodology you choose, please go ahead (in fact I’d encourage it). Try something constructive! – gavin]
Don Thieme says
I am sure that there is a need for the NWS to strike a balance between measuring weather in remote places that escape all anthropogenic effects and measuring weather phenomena that affect people the most and can have direct impact on public health and our economy. Many of these stations caricatured as “poorly sited” may be giving excellent data if one’s purpose is not climatological but meteorological. I know that NWS has ongoing scientific studies on these very sorts of problems and has also completed numerous studies in the past of the urban heat island effect.
Dan Hughes says
It is very clear that changes to certain aspects of the mathematical models, numerical solution methods, and application procedures are in fact based on “the match to observations”. Thus the observations are an implicit part of the modeling effort. If this is not true, then the observations are not needed. Additionally, if the observations are not correct, how can the changes to the models, methods and application procedures be correct?
A straightforward answer to this question might improve the clarity: “Can the models/methods/application procedures be developed in the absence of the observed data?” If the answer is “yes” then why are the data needed?
Thanks
[Response: Maybe the fact that ‘data’ is a plural might help you out there…. – gavin]
Timothy Chase says
Re Gavin’s inline to #11
As near as I can tell, the models always proceed from principles of physics, not modeling on the basis of the observed behavior of the system.
When confronted with a contrarian who argues that somehow global warming isn’t taking place, I would point to the Arctic sea ice and glaciers – which have even lasted through the warm periods of the past two thousand years – and probably well before. I would then of course point out that the melting is occurring much more quickly than we anticipated, at least in the case of the Arctic sea ice, Greenland’s glaciers and the Western Antarctic Peninsula.
Of course at this point they are likely to raise the issues of:
“Why should we trust the models if they aren’t doing such a hot job on ice?,” and,
“Why don’t they just incorporate the observed behavior?”
The response to the latter is that the observed behavior has to be modeled on the basis of physical principles – not simply empirical observation. It takes a while to develop such models – but wherever we notice that models appear to be doing a poor job, that is where we focus on developing the appropriate models. Clouds were one of the weakest links in the past, the carbon cycle another and so is the behavior of ice.
But we are working on all three fronts, and have made a great deal of progress on the first, somewhat less on the second and clearly need to do more work on the third. At the same time, this also leaves the first question unanswered, and it would appear that we may be underestimating the rate at which the system as a whole evolves given that all subsystems are coupled, either directly or indirectly, with stronger or weaker coupling between the subsystems. If we underestimate the rate at which one subsystem evolves, it would seem that we are underestimating how all evolve, to one extent or another.
However, once the sea ice is gone, the Arctic should warm up much more rapidly. At present, the thermohaline downwelling is moving poleward.
What happens to ocean currents once the sea ice is gone? And how will this affect the system as a whole?
Lawrence Brown says
Hasn’t the ocean been swelling over the last several decades, leading to a rise in sea level? (That’s rhetorical). In other words, the oceans are acting like a giant thermometer, rising as their temperature rises. The effects are already being felt in low lying atolls and islands in the Pacific and in Bangladesh.
As Claude Rains’ character said in “Casablanca” – Round up the “usual suspects”. Temperature isn’t the only suspect. As you point out….. “the recent warming is seen in the oceans, the atmosphere, in Arctic sea ice retreat, in glacier recession, earlier springs, reduced snow cover etc…”. Also daily temperature ranges are getting smaller, and plants and animals are migrating northward in this hemisphere. Could they know something the skeptics don’t?
Bishop Hill says
Adam
Thank-you. Can you be a bit more precise with the reference to the adjustments methodology? I’m going to struggle otherwise.
Does the methodology you describe mean that if a site and its neighbour both suffer from microsite effects then potentially they might both be included in the dataset?
Presumably everyone agrees that where the site has not been properly maintained, the relevant data should be excluded from the dataset, regardless of any similarity to adjacent sites?
Dan Hughes says
re: #12 and #14
Gavin, thanks for the extremely high information content in your responses. I would ask again for independent sources for the “Mistaken Assumption(s)”, but I am now sure that there are none. The Mistaken Assumptions are yours and yours alone.
By your response in #14, I take the answer to the question, “Can the models/methods/application procedures be developed in the absence of the observed data?” to be “no”. To me that means that the data are in fact a part of the models/methods/application procedures.
I have over the past almost three years tracked down a large number of the papers that are said to contain information to a level of detail that allows independent verification and replication. I have yet to find one for which this has been true. At the same time some in the GCM community have urged me to go out and find funding so that the community can do a better job of documentation; a disingenuous response if there ever was one. Here is another example in your response to #12. The fact that there are Web sites devoted to trying to discover the basic foundations for some aspects of the science conducted in the GCM community is a strong indication to me that I am not alone in my lack of success.
[Response: There are websites devoted to showing the moon landings were faked as well, but that is hardly proof of anything. You have downloaded the GISS model and you are in a position to run it and make any changes you like. You can download the much simpler earlier version of the model through the EdGCM project. Write your own model if you want, but don’t expect already over-committed people to take time out to hold you hand. If you want to contribute constructively go ahead, if not, I’ll assume you are only interested in drive-by criticism. Fun, but hardly useful. – gavin]
tamino says
Re: #18 (Dan Hughes)
I have a blog about global warming (which was referenced in the post and recommended in comment #2) through wordpress. WordPress provides a “tag surfer” feature, which enables bloggers like me to locate other blog posts related to global warming, so I regularly hear what the wordpress blogosphere is saying.
Every one of the mistaken assumptions identified in this post exists in myriad posts in the blogosphere. I have also seen them in published documents and news articles. If you really can’t find any of them, you’re not trying very hard. In fact, you’re probably not trying at all.
I strongly suspect that you’re yet another denialist who is all too eager to deny the truth of something, but unwilling to do any of the work required to learn about the subject.
And to answer your other question, “Can the models/methods/application procedures be developed in the absence of the observed data?” — the answer is yes.
Steve Horstmeyer says
Just a note on a specific case of Urban Heat Island and microclimate that may illustrate some of the complications.
I am a meteorologist in Cincinnati, OH. The instrumental record goes back to 1858 on a daily basis and is mostly complete; it extends, with a variety of locations that are not directly comparable and were not quality controlled, back to Jan. 1, 1814.
If we concentrate on the 1870s to 1895, the observations were made in downtown Cincinnati, roughly at an elevation of 400 ft. (~122 m) above mean sea level (msl) in the often humid Ohio River Valley. During summer, low temperatures around 80F (~27C) were not that uncommon.
In 1895 the official observation was moved to Abbe Observatory, at an elevation of roughly 800 ft. (~244 m) msl, just north of the city center. There a morning minimum temperature of 80F (~27C) never happened. Away from the dense network of heat absorbing (daytime) then heat radiating (nighttime) structures which is the Urban Heat Island, and above the air with high water vapor content trapped by the valley along the river, not to mention the pall of coal dust over the city, morning low temps were much more like what the natural countryside would experience.
In 1949 the official observation was again moved, this time across the river to what is now the international airport (KCVG) in northern Kentucky, a location about 900 ft. (~274 m) msl on a large plateau above the river valley.
There are two very important factors that one must note in using the data from this location.
First, microclimate: The airport is a very broad shallow depression. The thermometer is located near the lowest elevation and subject to cold air drainage and what meteorologists call “boundary layer separation”. This means that as the dense cold air flows towards the low spot and pools there, the influence of the large scale wind decreases to zero in a shallow layer near the surface. Above the shallow layer the influence of the wind can still be measured. Near the surface the wind goes calm, mixing is near zero and conditions are perfect for re-radiation, and minimum temperatures are often much lower than representative temperatures for rural areas.
Second, the Urban Heat Island effect: Under meteorological situations that dictate winds out of the northeast through east, warm air from the city is blown towards the airport. The effect is greatest (from my experience, I have not quantified it) with winds from the ENE and a wind velocity of 15 mph (~7 meters per second) or less. Winds that are too rapid increase the mixing and the effect is essentially diluted.
Looking at the meteorological record, one would note a rather abrupt cooling trend in the late 1890s, followed by another, smaller in magnitude, in the late 1940s.
Those who would want to use this as an example negating global warming could do so by ignoring both site and situation changes. By the way, the record at the international airport does show warming over the last 30 years.
Matei Georgescu says
Mistaken Assumption No. 1 uses Parker (2006) as evidence of nothing more than a local (as opposed to large-scale) impact of the UHI.
A well-accepted feature of the UHI is the ‘modulation’ by wind speed of the magnitude of the UHI, namely of the difference between urban and rural stations. In finding no trend difference between “windy” and “calm” days (with wind data obtained from NCEP/NCAR Reanalysis), Parker (2006), in effect, states that there is no modulation to speak of – in and of itself, that is a remarkable statement, or else there is no UHI to speak of. Since he explicitly states that the UHI is a real phenomenon, it must be the former mechanism that he believes is non-existent.
Why weren’t ‘urban minus rural’ temperature differences used instead? That is the definition of UHI (or else, skin temps, rather than 2-meter temps) and will get right at the UHI signal, not simply the magnitude of the urban value.
Lastly, there is no mention (at least, I could not find it) of how the NCEP/NCAR grid point data were interpolated to station locations and station observation times (the gridded data are available only 4 times daily, and how the author makes these times match is rather critical).
For all these reasons, this paper does not ‘Demonstrate that Large-Scale Warming Is Not Urban’, at least in my view.
Hank Roberts says
> Presumably everyone agrees … properly maintained …
I’d suggest it’d be safer to ask the people that maintain the stations.
I’d guess ‘properly maintained’ = ‘reported data not found to change abruptly after maintenance’; but ask.
This may help: http://en.wikipedia.org/wiki/Wikipedia_talk:Neutral_point_of_view/Fact_disputedfact_value
Will Gosnold says
I would like to add to the comment by Spencer and show how warming of the ground surface is manifested in borehole temperature logs. http://www.heatflow.und.edu/Landa2007.htm
The borehole was drilled in 1983 for geothermal research in flat terrain in north central North Dakota. It was cased, plugged at the bottom and the casing was filled with water to facilitate temperature measurements. When we visited the site this summer, we found that the water level had dropped, probably due to leakage at a coupling, and we did not log in air. In any case, one can see how the ground has warmed between each successive measurement. Integrating the temperature change in time over the area (volume) of mass that has been changed gives the excess heat that is stored in the ground.
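In code form, that integration is just Q = ∫ ρ c ΔT(z) dz over the warmed depth interval; a minimal numerical sketch with invented profile values (the density and heat capacity below are generic assumed rock properties, not measurements from this borehole):

```python
import numpy as np

# Hypothetical temperature change between two borehole logs (deg C) at
# regularly spaced depths -- invented numbers for illustration only.
depth = np.arange(0.0, 200.0, 5.0)          # m
delta_T = 0.8 * np.exp(-depth / 60.0)       # warming decaying with depth

rho = 2600.0    # assumed rock density, kg/m^3
c = 800.0       # assumed specific heat capacity, J/(kg K)

# Excess heat stored per unit surface area: sum of rho*c*dT over each depth step.
dz = depth[1] - depth[0]
Q = np.sum(rho * c * delta_T) * dz          # J/m^2
print(f"excess heat stored: {Q:.2e} J per square metre of ground surface")
```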
Dan Hughes says
re: #19
Gavin and RC. Unfounded and incorrect statements about me have been given in comment #19. Kindly allow me to respond to the ad hominem.
I simply asked for citations to the sources for the Mistaken Assumptions. I think that is a reasonable request and that most people here would agree. I do not see any such citations in the original post.
And now we are to the point where neither Gavin nor you have answered my simple request. Instead you have now labeled me as a suspected “yet another denialist” based on no information whatsoever. That is, yet another ad hominem that dodges the original question. You did not and you cannot provide any supporting evidence for this statement. Nor have I ever labeled anybody to be anything. Additionally you accuse me of not working to try to understand the subject. Yet another false accusation about which you have no information whatsoever. You don’t know me and I don’t know you, so how can you know what I do and don’t do?
You devoted about half your post to explaining how the Mistaken Assumptions are just about all over the Web and easily located, yet you failed to point me to a single one.
Straightforward answers given to simple straightforward questions are a much better way to conduct a conversation.
I will leave it to others to point out the error in your final sentence.
Adam says
Re #17 Bishop Hill
Well I’m just an interested reader who’s short on time, so tend to “toe-dip” until my curiosity gets satisfied. I am also very bad at bookmarking references. However a quick retrace of steps has brought up this: http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/index.php
There’s a couple of papers on here about how they do it. They pay no/little attention to the individual sites but just the data. There’s a number of statistical methods carried out (see Open Mind for some basic examples) that compare sites to a number of neighbouring stations. The importance is in the multiple comparisons, thus reducing the chance that they all have the same bias (eg they are all sited next to a barbecue). I think (from memory, I haven’t re-read the papers) that they use “high quality” reference sites as a comparator as well.
I’m sure someone who visits here is more up to speed (I’d appreciate any errors on my part being corrected, as it’ll highlight any misunderstandings I may have).
The discussion at Open Mind on Shelby County shows how a mere tinkering (as I hope tamino doesn’t mind me calling the post relative to the GHCN QC effort) can raise possible errors and show which stations would be flagged for further investigation.
mankoff says
I’ve imported the location of all the GISTEMP stations into Google Earth. You can access the KML file here: http://edgcm.columbia.edu/~mankoff/GISTEMP/
This is an initial attempt to recreate the work of Hansen but is a work in progress. The color of the pins is encoded to the temperature trend, the size to the years of data, and the opacity is inverse to proximity to populated centers.
Blue is cooling, red is warming, white is insufficient data (baseline years or recent years). Note all the white pins in Canada! For some reason they seem to have turned off their network in the late ’80s.
Mitch Golden says
A question: In the article you say that
It seems to me that micro-site changes would overwhelmingly have the effect of raising the station’s reported temperature. What are the scenarios in which the reported temperature would be reduced?
[Response: A tree growing nearby, increased lawn sprinkling, shade from tall buildings, moving away from a road, changes to air flow (any sign), movement to a roof etc. etc. The point is not that any of these things might have a large effect, but that the effects in different stations are going to be uncorrelated. My feeling is that these effects are all much smaller than the site move or time of observation biases that are being corrected for. – gavin]
tamino says
Re: #24 (Dan Hughes)
In the response to #12, Gavin pointed you to ClimateAudit (http://www.climateaudit.org/) and Roger Pielke Sr.’s blog (http://climatesci.colorado.edu/), as places you would find the aforementioned misconceptions. Apparently you weren’t willing to do a google search for either, or to do a little rummaging around this site to find what they were.
If you had been the least bit willing to do the slightest amount of work to find information on the web (and it requires very little indeed), you’d have found them even before you posted.
But instead, you chose to make a thinly veiled, and in my opinion rather snide, ad hominem against Gavin himself, when you said, “The Mistaken Assumptions are your’s and your’s alone.”
Now you want to get on a high horse and complain about how you’ve been attacked personally. The greatest damage to your reputation, on this blog, has come from your own comments.
And since your last sentence indicates you’re absolutely convinced that it is not possible to develop the models/methods/application procedures in the absence of the observed data, why did you ask in the first place? Here’s my guess: you had already decided it’s not possible, and you were making another (thinly veiled) derogatory implication against climate models.
Hank Roberts says
Thanks, Adam; I’m an “interested reader who’s short on time” myself, the cite helps. The page says:
“… quality assurance reviews…. include preprocessing checks on source data, time series checks that identify spurious changes in the mean and variance, spatial comparisons that verify the accuracy of the climatological mean and the seasonal cycle, and neighbor checks that identify outliers from both a serial and a spatial perspective.”
So you write “They pay no/little attention to the individual sites but just the data.”
“They” on the website page for GHCN-Monthly, I’d guess, are the data analysts at headquarters, and the reviews above seem appropriate for them to be doing.
It’d be interesting to know whether the analysts get a “ding” on their review (and inquire of whoever does the maintenance) any time they detect a change meeting those criteria they use to review.
Do they get a “ding” to look at when someone goes out and scrapes and repaints the box or moves the instruments to a fresh box, is that what you’re wondering?
I’d guess that would be one good measure of whether the stations are getting proper maintenance — if the data analysts don’t notice an oddity occurring when maintenance is done (over time, over the range of the instruments deployed) then it’d suggest that maintenance is being done often enough by definition.
But I’m speculating.
> GHCN-Monthly is used operationally by NCDC to monitor long-term trends in temperature and precipitation….
Dick Veldkamp says
#24 Misconceptions about the UHIE (Dan Hughes)
It did not take me long to find this site: http://www.warwickhughes.com/climate/ where there is a lot of nonsense about the UHIE. I realise that this does not constitute definitive proof that “misconceptions are widespread”, but why go looking actively for more rubbish?
Hank Roberts says
>25, Mankoff
Thanks for your http://edgcm.columbia.edu/~mankoff/GISTEMP/
Very nice! it’s great to be able to see that info so easily for local curiosity purposes.
And thanks for giving “linear fits to the last 10, 25, and 50 years from 2007 (… when sufficient data exist)” as well.
Question – are there error bars for the linear fits? Do the shorter linear fits always have larger errors? I’d guess that’d be true on average, but a station that was, say, improved or moved might have more accurate info recently than overall. Dunno if that info is even available, let alone possible to show.
Dan Hughes says
re: #28
More completely I said, “I would ask again for independent sources for the “Mistaken Assumption(s)”, but I am now sure that there are none. The Mistaken Assumptions are yours and yours alone.” I think that, given no pointers to sources for the statements, I made a good assumption; not a “… and in my opinion rather snide, ad hominem against Gavin himself.”
It is very ironic, given that you said, “… and Roger Pielke Sr.’s blog (http://climatesci.colorado.edu/), as places you would find the aforementioned misconceptions. Apparently you weren’t willing to do a google search for either, or to do a little rummaging around this site to find what they were.”, that I now point you to this.
Cat Black says
So the new spectator sport is Attack The Model. Foo. What I’ve noticed is a pattern where someone has just enough intellectual stamina to notice that there is a pattern to the overall data and science (which many of us now accept) but not enough to understand where that pattern came from (which most of us at least struggle to understand). The “denialist” personality seems bent on joining the discussion as a peer but without actually accepting the intellectual challenges. Which is NOT to say that there is no room for well-reasoned questioning of data and processes; RC has provided a forum for exactly this, and those who avail themselves of the resources here and elsewhere to further the evaluation of the science are always warmly received in my observation.
The intellectual challenges cannot be downplayed. This really is rocket science. One doesn’t need to be a genius to join the discussion (look mom! I’m on RC!) but one DOES need to exhibit some basic respect for the combined efforts of countless women and men working very hard on extremely hard problems, an effort that is at best underappreciated and (it seems) usually misunderstood.
Hans Erren says
Gavin,
I would appreciate if you add the link to Pielkes own response to your posting:
http://climatesci.colorado.edu/2007/07/02/climate-science-responds-to-real-climates-web-posting-of-july-2-2007/
Michael Strong says
Pielke, Sr., specifically cites evidence of Lower Troposphere warming, Middle Troposphere warming, and Lower Stratosphere cooling in time series from the 1980s onwards:
Pielke Sr., R.A. 2007: The Human Impact on Weather and Climate. Bonn, Germany, June 5, 2007
He is clearly not denying that warming is taking place insofar as he specifically endorses temperature records showing that warming is taking place, and he is also concerned about human contributions to these changes. Pielke believes ocean heat content changes are the most reliable metric for assessing global heating and cooling.
But he is also rightly concerned regarding the unreliability of the land surface temperature data, as we all should be. Precisely because there is independent evidence of warming, those climatologists most concerned about AGW should not be afraid of efforts to understand the nature and extent of the flaws in our surface station temperature records.
A commitment to empirical reality is so fundamental to science that the impulse to ridicule the documentation of microsite problems at surface stations is in danger of back-firing. Anthony Watts has maintained a civil, constructive tone and manner throughout his efforts to document surface station micro-climates. It may well be that his work turns out to be completely irrelevant in the long run. But it is difficult to understand how anyone with a commitment to the most basic scientific ethos could possibly complain about his efforts. Watts’ documentation project is something that every engineer, every high school science teacher, every 9th grade science student can understand – and believe in. His project is as close to Mom and Apple Pie as science gets. Attacking his documentation effort is a very bad p.r. move.
At present some people seem to think that the number of stations with unreliable data is small and could not possibly impact the large data sets on which climate science is based. Maybe so. But the fact is at present no one really knows just how pervasive the problems are. I would not want to bet on the accuracy of our existing system of surface stations. Suppose 60 station sites are selected at random for a trial experiment (ideally, but unrealistically, double-blind) in which an extremely high quality measurement instrument is installed away from all buildings, paved surfaces, etc. and a similarly rigorous measurement protocol is followed, and this high quality set of measurements is then compared for a specified period to the data being collected from existing stations. Would Gavin bet that the average deviation between temperatures being recorded at existing stations and those of the hypothetical rigorous network is within .1 degree C? 1 degree C? Or might the problem be worse than 1 degree off on such a random sample of sites?
It seems as if all we really know about surface station data at present is that it is unreliable. We really have no idea exactly how unreliable it is. Why not simply agree to eliminate all dependence on surface station data and focus exclusively on other measures of increased temperature over the last couple decades as Pielke recommends?
[Response: I don’t know who you are addressing here. I have neither complained about nor ridiculed Watts’ efforts. I have merely pointed out that they are unlikely to have as much impact as some would like them to. All data is imperfect, all models are flawed. But, the data do have useful information contained within them, and the models do a reasonable job at simulating what happens. To arbitrarily exclude any source of information simply because it is not perfect is foolish – understanding is only going to come from using as many different independent lines of evidence as possible. There are plenty of additional lines of evidence that suggest the large scale gridded products are consistent with what we can see in other measures, and so there is no need to throw out the baby with the bath-water. -gavin]
John F. Pittman says
#28 I read ClimateAudit often and would like to comment. For assumption #1, the majority on CA were concerned that despite the increase in energy use and population, Hansen did NOT show some UHI. It was the opinion of commenters that one would expect some. The only comments I remember being close to UHI did not exist were intended IMO to be funny or sarcastic.
#2 I don’t think I have seen anyone claim station data was perfect, anywhere. Instead I see lots of discussion on UHI, stations, Hansen, and other items questioning the extent and reliability of temperature data and other data in general. I tend to give some leeway to comments due to the abbreviated nature of posts. Take as an example: “That is to say, if a station in Tennessee has a particularly warm or cool month, it is likely that temperatures in, say, New Jersey also had a similar anomaly.” I do not think anyone has claimed that they can tell the world’s temperature anomalies from just 2 data points by one particularly warm or cool month. But I realize what was meant, a good correlation is still good for something even if someone has not explained it to everyone’s satisfaction. The conversations have been on real or assumed problems with data and sites.
#3 I guess this is a problem with abbreviated comments. NOAA, which was acknowledged in this argument, does run the US NWS. It is hard to see, if we are discussing data that was used for grids for the USA, that such a discussion did not occur. It is the use of “produced” versus the discussions on ClimateAudit of the underlying data, and how the data became the “product”. The comments I have seen do not dwell on GISS or CRU “collecting” the data. Perhaps you or others have spotted this problem because of your expertise in this area. Some of us are looking at the data and relationships of how one gets from point A to point B, not whether the attributions are entirely correct.
#4 That global mean trends are not simply averages of all weather stations has been discussed in many different ways, none of which, as far as I remember, match such a simplistic sentence, except comments to the effect of how a person could discern whether only one trend could be used, or how much noise using all the trends entails. There is no question that many on ClimateAudit question much. But this questioning argues directly against assumption #4 applying to the CA blog.
I think #5 should state computer climate model projections. After all CA seems to have questions about all models and projections. Even better statement would be “Finding problems with individual station data somehow affects computer climate model projections or it really should because you can’t rely on a model that does not use real data for confirmation”. LOL. I admit I also like to use models that have been verified some way. Otherwise, the model may be as good as a good chess problem…elegant and intelligent, but not particularly useful.
#6 is actually something I have seen the opposite of on CA. The comments include not only that global warming is occurring today, but that it has occurred several times in the last 10,000 years. They also note cooling at different times as well, which would imply warming periods at other times.
I do not frequent Roger Pielke Sr.’s blog. You have taken Dan Hughes to task, but included CA through editor’s response to #12. Perhaps you should do some of the work you deride him for not doing. I say this because, though I have seen comments on CA that perhaps could somehow fit these descriptions of assumptions, I find that typically by the most senseless reading of the comment. I note that the editor did not have links to good examples on CA where these assumptions were stated or implicit. I would like to read these and see if it is by one person or many. I would also like to read their reasoning. Such reasoning appears poor to me, but I would rather read and make up my mind, rather than just assume their reasoning is poor. So I would like you to do something constructive for those of us who don’t see what you do but want to look and make up our own minds…post some links.
bigcitylib says
Pielke has suggested that you have “ignored” the following two papers in composing this post:
“Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res. in press.”
“Pielke Sr., R.A. J. Nielsen-Gammon, C. Davey, J. Angel, O. Bliss, M. Cai, N. Doesken, S. Fall, D. Niyogi, K. Gallo, R. Hale, K.G. Hubbard, X. Lin, H. Li, and S. Raman, 2007: Documentation of uncertainties and biases associated with surface temperature measurement sites for climate change assessment. Bull. Amer. Meteor. Soc., in press”
When I pointed out that he lists these as “in press”, he claimed that you are “aware” of them. Can you shed some light on this.
[Response: I saw a preprint of the first paper a while back, but I have not seen the final version, nor have I seen the second paper. Roger has my email address, if he wanted me to read them, he could send them along. However, I stress what I said above, I have nowhere said this effort was not worthwhile, indeed I stated that “more information is always useful” – my point was simply that it isn’t going to have the impact some think it will. Roger’s rather aggressive response misses the point entirely. – gavin]
Jim Dukelow says
It seems clear that the UHI effect is a real physical effect and the complaint from AGW skeptics and denialists is that the strong (and real) warming in urban areas is contaminating regional and global temperature averages.
I have thought for a while that the “problem” would go away if the regional and global averages were area-weighted averages of the data from the various weather stations.
Specifically, use the locations of the weather stations to construct a Voronoi tessellation (see en.wikipedia.org/Voronoi_diagram for a description of the construction) of the land surface of the earth. Assign an area-weight to the temperature data from each station equal to the area of that station’s Voronoi “cell”. Use those area-weights to construct the regional/global averages. This would have the effects of decreasing the weights assigned to urban weather stations — since there are lots of them, they are relatively close together, and the areas of their Voronoi cells will be relatively small — and correspondingly increasing the weights assigned to rural weather locations. This process also captures and appropriately weights the real and strong warming occurring in urban areas.
This Voronoi decomposition could also be used to construct (again by area weighting) gridded temperature time series.
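A minimal sketch of the idea (using scipy’s SphericalVoronoi; the station positions and anomalies below are invented purely for illustration):

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

def area_weighted_mean(lats, lons, anomalies):
    """Weight each station's anomaly by the area of its spherical Voronoi cell."""
    phi, lam = np.radians(lats), np.radians(lons)
    xyz = np.column_stack([np.cos(phi) * np.cos(lam),
                           np.cos(phi) * np.sin(lam),
                           np.sin(phi)])
    sv = SphericalVoronoi(xyz, radius=1.0)
    areas = sv.calculate_areas()             # needs scipy >= 1.5
    return np.sum(areas * anomalies) / np.sum(areas)

# A cluster of "urban" stations with large anomalies plus scattered "rural" ones.
lats = np.array([40.0, 40.1, 40.2, 40.3, -30.0, 10.0, 65.0, -75.0])
lons = np.array([-74.0, -74.1, -73.9, -74.2, 150.0, 20.0, -150.0, 60.0])
anom = np.array([1.5, 1.4, 1.6, 1.5, 0.3, 0.4, 0.6, 0.2])
print(f"simple mean: {anom.mean():.2f}, "
      f"area-weighted mean: {area_weighted_mean(lats, lons, anom):.2f}")
```

The clustered stations end up with small Voronoi cells and correspondingly small weights, which is exactly the effect described above.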
Best regards.
Michael Peterson says
The things I wonder when I’m measuring something: what detail can I get into, significant-digit-wise, and when was I last calibrated to that level of detail? Then, how frequent and consistent is my sampling interval? Next is where everything is located, so am I measuring what I think I am?
I would like to see some calculations and figures of what dumping CO2 into the atmosphere over the San Francisco area from 1961-1975 has had on the temperature here from 1991-2005. Or something somewhat similar for someplace or another, esp if compared to a similarly sized rural area etc. There doesn’t usually seem to be that level of detail reported.
Timothy Chase says
Took a look at Steve McIntyre’s site. Normally something I wouldn’t care to do given that the major scientific bodies and peer reviewed reports came out strongly in favor of Mike Mann – and it is pretty obvious that Steve McIntyre is a man with a vendetta and no love for the hockey stick, but…
Yikes!
I can’t tell whether he is accusing the entire climatology profession of grand conspiracy or simply gross incompetence. But I am not seeing anything resembling systematic analysis in any meaningful sense – at least not yet. Mostly just innuendo and cherry-picking.
Now please pardon me while I go take a shower…
Boris says
35:
“Anthony Watts has maintained a civil, constructive tone and manner throughout his efforts to document surface station micro-climates.”
Anthony Watts’ site, while it may be worthwhile in some way, is meant as a rhetorical “gotcha.” Take the two examples presented on the front page of his site. One shows a station in a clearly urban environment, the other in a more rural setting. But the trick is in showing the temp plots inset with the pictures. The bad, mean and undisciplined station shows warming, and the lovely, calm and good station shows cooling. The visual argument is clear: any warming must be due to the mean station; therefore, no worries.
If the pictures weren’t meant for a rhetorical “gotcha”, then why cherrypick a cooling, rural station? Why show inset graphs of the temp at all, as this is a red herring vis a vis ideal station setup?
Craig Allen says
Thanks for the article Gavin. You prompted me to look at what information the Australian Bureau of Meteorology has on their website about their monitoring program.
Apparently Australia too has thousands of stations. And a sub-set of these have been designated ‘reference’ stations. These were selected using the following criteria:
* High quality and long climate records.
* A location in an area away from large urban centres.
* A reasonable likelihood of continued, long-term operation.
I presume that in order to identify anomalies, data from all the other stations is compared to the data from the reference stations. This would enable the data to be corrected or discarded. Because many of the stations are now automatic with live uplinks to the Bureau, I imagine that it is possible, or will soon be possible, for problems with stations to be identified as soon as they occur, and for technicians to be dispatched to fix them.
There is a map of the reference stations here. If you click on the orange dots you will see a photograph of each station. Australia is a big empty place, so you’ll see that they are almost all in quiet lonely places.
If people want to see plots and summary statistics of data derived from the monitoring network, then look here. Plots with particular relevance to tracking climate change are here. A warning to the skeptics – there are very obvious trends for most of the parameters, which accord with climate model predictions for a hotter drier future. A warning for Australians in general, if the trends continue, we’re stuffed. A warning for everyone, if you want to see the Great Barrier Reef or the Northern Tropics rain forests, you had better be quick.
As stated here, the bureau “will soon finish reprocessing much of the data which is used to calculate the climate statistics”.
However, for now there is no detail on the Bureau’s website about how this is being done.
unconvinced says
Another CA (and lately, Pielke) reader here, and I agree with the comments in #36.
In my view no-one who follows the tenets of science should be afraid of criticism and/or an audit of their work – it should be welcomed, because if you’re right, an audit will show it, and if you are wrong, you will have learned something. In either case, knowledge will be gained.
Don’t be afraid of those who examine those pesky details, be grateful that your work has such an impact! Don’t say “it doesn’t matter”, investigate and publish the results of that investigation! Don’t complain about “denialists”, gather data, do experiments, write papers and prove them wrong! And most of all, remember that the truth will win out in the end. It might not be what you think it is now (it probably won’t be, if history is any guide), but providing you contribute to the data and the debate, your input is most welcome – right *or* wrong, pro- *or* anti-, consensus *or* outlier, all contribute, because, if nothing else, they make you *think* and *act*.
Steve Reynolds says
27 gavin> The point is not that any of these things might have a large effect, but that the effects in different stations are going to be uncorrelated.
How can we be sure of that? There are many possible reasons for correlation. One simple one is the introduction of the electronic MMTS to replace manual thermometer reading. I think MMTS uses an RS232 cable with limited length, so sensors may have been relocated closer to buildings, which could systematically increase reported temperature.
John Mashey says
#43: unconvinced
People can poke at data for at least two reasons:
a) Because they do science, and the idea is to get things as right as possible, and that’s good science, and real scientists do it a lot.
OR
b) They want to create uncertainty and controversy, and waste as much time as possible for real researchers who may produce results they don’t like.
Without claiming anyone in particular is doing this here, what you posted is indistinguishable from the classic playbook, famously expressed by Brown & Williamson in 1969 about fighting the cigarette/cancer link:
‘Doubt is our product, since it is the best means of competing with the “body of fact” that exists in the mind of the general public. It is also the best means of establishing a controversy.’
The whole idea is “that more study is needed” on anything where the outcome isn’t what you like.
The technique was well-learned in the tobacco wars, and used repeatedly (often via the same lobbyists, PR organizations, thinktanks) for:
– smoking
– acid rain
– CFCs
– AGW
Sometimes this strategy is called insisting on “sound science” (that’s the code-phrase), which means in practice: we will accept any random crackpot idea if it supports us, and if there is a strong scientific consensus that we don’t like, “sound science” requires that we study it more until it becomes 100% certain, or if necessary, even better :-) before any decisions would be made, i.e., preferably never.
If this idea is new to you, you’ll want to start reading some relevant history, such as Chris Mooney’s first book.
Kevin says
Your statement on mistaken assumption #5 about climate model projections being theoretically based rather than empirically based is well made. On the other hand, would I be wrong in assuming that a siting issue, like a bank of A/C exhaust vents near a thermometer, would influence the USHCN temperature record at that site?
I understand the attempts to adjust for inhomogeneities. It just seems that, of the small number of sites recently photographically surveyed, and with the USHCN [self-described] as a high quality data set, there are a lot of siting issues. That might imply a general lack of quality control re: NOAA’s published siting standards, and might speak even more poorly for the QC of surface temperature measurement in countries without high budgeting for projects like this. Unlike the modelling projections, the instrumental record is empirical. I just don’t understand how fairly plain heat biases, oil drum trash burners, A/C exhausts, etc., near thermometers are irrelevant to a given site’s recorded Tmax. Since the USHCN states its goal is to assist in detecting regional climate change, US siting issues such as systemic heat biases seem fairly relevant to me.
Zeke Hausfather says
It seems that attacks on the validity of the surface temperature record as an attempt to cast doubt on the recent warming trend would have been a bit more convincing back in the day when there were competing satellite temperature records that suggested a cooling trend. These days, with multiple independent lines of evidence supporting the current anomaly, people seem to be grasping at straws by focusing on poorly sited temperature stations. Yes, there are certainly temperature stations that could be better designed, and yes, the observed surface temperature record might change slightly if all temperature stations were making precisely accurate measurements. Would this change anything substantive about our current understanding of the past warming trend worldwide? Unlikely.
ChrisC says
Here in Australia, we have a large network of weather stations across our (rather big) country. Some are automated (AWS), others are operated in conjunction with professional weather observers, while others are operated with the help of volunteers (SYNOPs).
Each weather station is, as far as practicable, constructed to WMO standards in order to reduce interference to a minimum. Also, each station is checked by an engineer every 6 months (which is again a WMO standard).
Despite this, the station data is imperfect. For instance, a rain event in the south-eastern state of Victoria recorded 300mm of rain in 2 hours at one weather station, while the neighbouring stations recorded closer to 60mm. As such, automated and human checks of this data are made before it is put into the climate database.
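A minimal sketch of the kind of automated spatial-consistency check described above might look like this; the factor-of-four threshold and the function name are purely illustrative assumptions on my part, not BoM's actual QC rule.

import numpy as np

def neighbour_consistency_ok(value_mm, neighbour_values_mm, max_ratio=4.0):
    # Accept an event total only if it is within max_ratio of the
    # neighbourhood median; otherwise hold it for manual review.
    med = np.median(neighbour_values_mm)
    return value_mm <= max_ratio * max(med, 1.0)

# The Victorian example: 300mm at one gauge versus roughly 60mm nearby
# would fail the automated check and be referred to a human.
print(neighbour_consistency_ok(300.0, [55.0, 62.0, 58.0, 64.0]))   # False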
The data we use is reasonably reliable. There are some problems, and we always welcome input from the public into increasing the fidelity of our network (for example, if a tree has grown to shade one of our Stevenson screens in the afternoon). As such, projects like surfacestation.org are valuable. But they are unlikely to have a huge effect on the surface temperature record. There are tomes of literature on the subject of the placement of weather stations, and organisations such as BoM take extraordinary care in the placement of stations.
Good on them for trying to help, but in the long run, the averaged temperature record is unlikely to change much.
Alex Nichols says
Some, but not all, UK weather stations have records of soil temperature dating back over 100 years. An extensive study by A. M. García-Suárez and C. J. Butler at Armagh Observatory, N. Ireland found the following:-
‘We have analysed the trends in four long meteorological time series from Armagh Observatory and compared with series from other Irish sites where available. We find that maximum and minimum temperatures have risen in line with global averages but minima have risen faster than maxima thereby reducing the daily temperature range. The total number of hours of bright sunshine has fallen since 1885 at the four sites studied which is consistent with both a rise in cloudiness and the fall in the daily temperature range. Over the past century, soil temperatures at both 30cm and 100cm depths, have risen twice as fast as air temperature.’
see:-
climate.arm.ac.uk/calibrated/soil/soil11.ps
http://climate.arm.ac.uk/calibrated/soil/soilT_Garcia-Suarez_2005.pdf
http://climate.arm.ac.uk/calibrated/soil/soil11.pdf
Vernon says
I don’t quite see how you can say that individual stations do not matter when a network is a collection of individual stations. I also fail to understand how verification and validation is wrong within the context of making economic and societal changes based on a theory. On the skeptic side I see many wanting to validate and verify all information, and on the AGW (CO2) proponent side I see reasoning why validation and verification is not needed.
I say that transparency is the only way to do anything, and something as important as this needs all the assistance that it can get. Release the data, processes, and procedures and take what help you can get. Coming up with arguments for why inputs are to be ignored, such as how many of the stations collecting temperature readings are not properly set up or operated, or hiding which stations are used to determine the UHI off-set, does not help.
I really don’t expect that this will be posted; most of my posts are not accepted because I am skeptical of taking anyone’s word on just about anything. I just fail to see how putting up a strawman argument, instead of actually saying “Ok, check all the stations and help us identify the actual conditions the readings were collected under so we can have the best data,” is doing anything good.