Observant readers will have noticed a renewed assault upon the meteorological station data that underpin some conclusions about recent warming trends. Curiously enough, it comes just as the IPCC AR4 report declared that the recent warming trends are “unequivocal”, and when even Richard Lindzen has accepted that the globe has in fact warmed over the last century.
The new focus of attention is the placement of the temperature sensors and other potential ‘micro-site’ effects that might influence the readings. There is a possibility that these effects may change over time, introducing artifacts or jumps into the record. This is slightly different from the more often discussed ‘Urban Heat Island’ effect, which is a function of the wider area (and so could be present even in a perfectly set up urban station). UHI effects will generally lead to long term trends in an affected station (relative to a rural counterpart), whereas micro-site changes could lead to jumps in the record (of any sign) – some of which can be very difficult to detect in the data after the fact.
There is nothing wrong with increasing the meta-data for observing stations (unless it leads to harassment of volunteers). However, in the new-found enthusiasm for digital photography, many of the participants in this effort seem to have leaped to some very dubious conclusions that appear to be rooted in fundamental misunderstandings of the state of the science. Let’s examine some of those apparent assumptions:
Mistaken Assumption No. 1: Mainstream science doesn’t believe there are urban heat islands….
This is simply false. UHI effects have been documented in city environments worldwide and show that as cities become increasingly urbanised, increasing energy use, reductions in surface water (and evaporation) and increased concrete etc. tend to lead to warmer conditions than in nearby more rural areas. This is uncontroversial. However, the actual claim of the IPCC is that the effects of urban heat islands are likely small in the gridded temperature products (such as those produced by GISS and the Climate Research Unit (CRU)) because of efforts to correct for those biases. For instance, GISTEMP uses satellite-derived night light observations to classify stations as rural or urban and corrects the urban stations so that they match the trends from the rural stations before gridding the data. Other techniques (such as correcting for population growth) have also been used.
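As a toy illustration of the adjustment idea (this is not the actual GISTEMP code; the stations, trends and numbers below are invented), an urban station’s excess trend can be removed so that its long-term trend matches the mean trend of its rural neighbours while its year-to-year wiggles are kept:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2007)

# Hypothetical anomaly series (deg C): rural neighbours share a 0.7 C/century
# trend; the "urban" station has an extra ~0.5 C/century of local warming.
rural = [0.007 * (years - 1900) + rng.normal(0, 0.2, years.size) for _ in range(3)]
urban = 0.012 * (years - 1900) + rng.normal(0, 0.2, years.size)

rural_trend = np.mean([np.polyfit(years, r, 1)[0] for r in rural])
urban_trend = np.polyfit(years, urban, 1)[0]

# Remove the excess urban trend so the adjusted series matches its rural neighbours.
urban_adjusted = urban - (urban_trend - rural_trend) * (years - years.mean())

print(f"rural trend:            {rural_trend * 100:.2f} C/century")
print(f"urban trend (raw):      {urban_trend * 100:.2f} C/century")
print(f"urban trend (adjusted): {np.polyfit(years, urban_adjusted, 1)[0] * 100:.2f} C/century")
```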
How much UHI contamination remains in the global mean temperatures has been tested in papers such as Parker (2005, 2006), which found that there was no effective difference in global trends if one segregates the data between windy and calm days. This makes sense because UHI effects are stronger on calm days (when there is less mixing with the wider environment), and so if an increasing UHI effect were changing the trend, one would expect stronger trends on calm days, and that is not seen. Another convincing argument is that the regional trends seen simply do not resemble patterns of urbanisation, with the largest trends in the sparsely populated higher latitudes.
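A toy version of that windy/calm test (purely synthetic data; the real analyses use observed wind speeds at each station) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 365 * 50
t = np.arange(n_days) / 365.25                    # time in years
anomaly = 0.02 * t + rng.normal(0, 1.0, n_days)   # 0.02 C/yr background trend
calm = rng.random(n_days) < 0.3                   # flag ~30% of days as "calm"

trend_calm = np.polyfit(t[calm], anomaly[calm], 1)[0]
trend_windy = np.polyfit(t[~calm], anomaly[~calm], 1)[0]
print(f"calm-day trend:  {trend_calm:.3f} C/yr")
print(f"windy-day trend: {trend_windy:.3f} C/yr")
# If a growing UHI were inflating the record, the calm-day trend would be
# systematically larger; in the real data (as here) the two agree.
```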
Mistaken Assumption No. 2: … and thinks that all station data are perfect.
This too is wrong. Since scientists started thinking about climate trends, concerns have been raised about the continuity of records – whether they are met. stations, satellites or ocean probes. The danger of mistakenly interpreting jumps due to measurement discontinuities as climate trends is well known. Some of the discontinuities (which can be of either sign) in weather records can be detected using jump point analyses (for instance in the new version of the NOAA product); others can be adjusted using known information (such as biases introduced by changes in the time of observation or by moving a station). However, there are undoubtedly undetected jumps remaining in the records, but without the meta-data or an overlap with a nearby unaffected station to compare to, these changes are unlikely to be fixable. To assess how much of a difference they make, though, NCDC has set up a reference network which is much more closely monitored than the volunteer network, to see whether the large scale changes from this network and from the other stations match. Any mismatch will indicate the likely magnitude of differences due to undetected changes.
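For illustration only, a very crude single change-point search looks something like the following (the operational homogenisation schemes, such as NOAA’s pairwise approach, are far more sophisticated and lean heavily on comparisons with neighbouring stations):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1950, 2007)
series = rng.normal(0, 0.3, years.size)
series[years >= 1980] += 0.6          # an artificial +0.6 C jump in 1980

def best_break(y, x):
    """Return (year, score) for the most likely single step change."""
    best = (None, 0.0)
    for i in range(5, len(x) - 5):    # leave a few points on either side
        a, b = y[:i], y[i:]
        noise = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
        score = abs(a.mean() - b.mean()) / noise
        if score > best[1]:
            best = (x[i], score)
    return best

print(best_break(series, years))      # should point at 1980
```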
It’s worth noting that these kinds of comparisons work because of the large distances over which monthly temperature anomalies correlate. That is to say, if a station in Tennessee has a particularly warm or cool month, it is likely that temperatures in, say, New Jersey also show a similar anomaly. You can see this clearly in the monthly anomaly plots or by looking at how well individual stations correlate. It is also worth reading “The Elusive Absolute Surface Temperature” to understand why we care about the anomalies rather than the absolute values.
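A toy demonstration of why the anomalies correlate over such distances: two stations that share the same regional signal but have independent local noise remain well correlated month to month (station names and noise levels invented):

```python
import numpy as np

rng = np.random.default_rng(3)
months = 12 * 30
regional = rng.normal(0, 1.0, months)              # shared regional anomaly
tennessee = regional + rng.normal(0, 0.5, months)  # local noise at station 1
new_jersey = regional + rng.normal(0, 0.5, months) # local noise at station 2

r = np.corrcoef(tennessee, new_jersey)[0, 1]
print(f"monthly anomaly correlation: {r:.2f}")     # ~0.8 for this setup
```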
Mistaken Assumption No. 3: CRU and GISS have something to do with the collection of data by the National Weather Services (NWSs)
Two of the global mean surface temperature products are produced outside of any National Weather Service. These are the products from CRU in the UK and NASA GISS in New York. Both CRU and GISS produce gridded products, using different methodologies, starting from raw data from NWSs around the world. CRU has direct links with many of them, while GISS gets the data from NOAA (who also produce their own gridded product). There are about three people involved in doing the GISTEMP analysis and they spend a couple of days a month on it. The idea that they are in any position to personally monitor the health of the observing network is laughable. That is, quite rightly, the responsibility of the National Weather Services who generally treat this duty very seriously. The purpose of the CRU and GISS efforts is to produce large scale data as best they can from the imperfect source material.
Mistaken Assumption No. 4: Global mean trends are simple averages of all weather stations
As discussed above, each of the groups making gridded products goes to a lot of trouble to eliminate problems (such as UHI) or jumps in the records, so the global means you see are not simple means of all data (this NCDC page explains some of the issues in their analysis). The methodology of the GISS effort is described in a number of papers – particularly Hansen et al 1999 and 2001.
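A toy contrast between a naive station average and a gridded average illustrates the point (the stations and anomalies are invented, and real products additionally weight by grid-cell area and adjust for known biases):

```python
import numpy as np

# (cell_id, anomaly in C): ten stations clustered in cell 0, one each in cells 1-3
stations = [(0, 1.0)] * 10 + [(1, 0.2), (2, 0.3), (3, 0.1)]

naive_mean = np.mean([anom for _, anom in stations])

cells = {}
for cell, anom in stations:
    cells.setdefault(cell, []).append(anom)
gridded_mean = np.mean([np.mean(v) for v in cells.values()])  # equal-area cells assumed

print(f"naive station mean: {naive_mean:.2f} C")   # dominated by the cluster
print(f"gridded mean:       {gridded_mean:.2f} C")
```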
Mistaken Assumption No. 5: Finding problems with individual station data somehow affects climate model projections.
The idea apparently persists that climate models are somehow built on the surface temperature records, and that any adjustment to those records will change the model projections for the future. This probably stems from a misunderstanding of the notion of a physical model as opposed to a statistical model. A statistical model of temperature might for instance calculate a match between known forcings and the station data and then attempt to make a forecast based on the change in projected forcings. In such a case, the projection would be affected by any adjustment to the training data. However, the climate models used in the IPCC forecasts are not statistical, but are physical in nature. They are self-consistent descriptions of the whole system whose inputs are only the boundary conditions and the changes in external forces (such as the solar constant, the orbit, or greenhouse gases). They do not assimilate the surface data, nor are they initialised from it. Instead, the model results for, say, the mean climate, or the change in recent decades or the seasonal cycle or response to El Niño events, are compared to the equivalent analyses in the gridded observations. Mismatches can help identify problems in the models, and are used to track improvements to the model physics. However, it is generally not possible to ‘tune’ the models to fit very specific bits of the surface data, and the evidence for that is the remaining (significant) offsets in average surface temperatures in the observations and the models. There is also no attempt to tweak the models in order to get better matches to regional trends in temperature.
Mistaken Assumption No. 6: If only enough problems can be found, global warming will go away
This is really two mistaken assumptions in one: that there is so little redundancy that throwing out a few dodgy met. stations will seriously affect the mean, and that evidence for global warming is exclusively tied to the land station data. Neither of those things is true. It has been estimated that the mean anomaly in the Northern hemisphere at the monthly scale only has around 60 degrees of freedom – that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large scale month to month changes. Currently, although they are not necessarily ideally placed, there are thousands of stations – many times more than would be theoretically necessary. The second error is obvious from the fact that the recent warming is seen in the oceans, the atmosphere, in Arctic sea ice retreat, in glacier recession, earlier springs, reduced snow cover etc., so even if all met stations were contaminated (which they aren’t), global warming would still be “unequivocal”. Since many of the participants in the latest effort appear to really want this assumption to be true, pointing out that it doesn’t really follow might be a disincentive, but hopefully they won’t let that detail dampen their enthusiasm…
What then is the benefit of this effort? As stated above, more information is always useful, but knowing what to do about potentially problematic sitings is tricky. One would really like to know when a problem first arose, for instance – something that isn’t clear from a photograph taken today. If the station is moved now, there will be another potential artifact in the record. An argument could certainly be made that continuity of a series is more important for long term monitoring. A more convincing comparison, though, will be of the existing network with the (since 2001) Climate Reference Network from NCDC. However, that probably isn’t as much fun as driving around the country taking snapshots.
steven mosher says
re 134.
Hi Ray, sorry I’m working backward through the comments, crablike.
You wrote:
“Steven Mosher, A network that corrects error-free data is not necessarily better than a network that collects data with errors that are well understood. There are several fundamental problems with your approach:”
Well, first you have to establish what kind of errors you have before you can characterize them as “well understood”. You do that by looking.
Continuing:
“1)You are looking at stations individually, rather than as part of a network. Information theory suggests that if our oversampling is at least 3:1, we can have up to 1/3 of our stations be totally wrong with no real loss of information–and those are random errors. ”
Well, to establish the oversampling rate I suppose one must have an understanding of the signal structure and probabilities. So yes, Mr Shannon and Mr Nyquist play a role here. I have not seen any evidence that the climate signal at the grid level is oversampled. Ground truth is kinda missing. Plus, one can look at the STATION and the network. After all, Hansen et al. look at stations to account for things like urbanization, record length, etc… So FALSE DILEMMA.
“The siting criteria are excellent guidelines for single stations, and I would not site any single new station that did not comply (unless there were an overriding reason). Most of the station that violate the siting criteria, however, are old, with a long history. This is important, because:”
Well Ray, you are funny. Orland CA, for example, is a fine site. In the same place since 1883. Do not assume. LOOK. OBSERVE. INVESTIGATE. You assumed the older sites violate. You didn’t even look. Every site survey has a siting history. Read the file. You say most of the stations that violate are old? How many of the 80 sites surveyed violate siting? We haven’t even started evaluating all of them. If you have reviewed all of them and classified them according to CRN standards… COOL. Pass the data, son!
You go on:
“2)On the other hand, systematic errors can be characterized and bounded (thus determining what weight to apply) or the result corrected. Such studies provide important information in and of themselves (how do you think the siting criteria were developed?).”
True. Think about noise reduction and error correction.
“3)You give no consideration to what kind of error a particular violation would produce–either prior to or after corrections are applied.”
Well, actually I have. See later comments. Still GIGO.
“4)In essence jackknifing studies already do what you are asking for–look at the effect of excluding single stations from the analysis.”
It is absolutely clear to me that 1 station will not make a difference. Worst case, I’d guess that 20-25% of the sites are impaired. I’ll go through all the 35-40, 120-125 sites and do a count, but I’d rather someone else rate the sites, double blind like. I already “know” what sites have squirrely records… from looking at the data, so I’m not comfortable making any rating determination.
“5)Your methods have a very high risk of being misappropriated by denialists to cast unwarranted doubt on a result that is incontrovertible–indeed, that is how they have been used to date.”
Yes, but sunshine is a good thing.
“6)There is no evidence of a systematic problem with the data or procedures, and plenty of evidence to the contrary. ”
Well, SYSTEMATIC evidence would come from a system-wide study. You find one roach, look for more. You find one bad batch of dog food, screen for more… When the owner of the dog food factory says “go away”, you get curious.
So, do you have a camera and GPS, Ray? It’s a fun way to spend the day.
Matt says
Timothy Chase writes in 142: Of course contrarians will point out that instruments at poorer sites will have a bias, but as tamino (#91) points out, this bias is corrected for, and it is quite possible that given the methodology employed, removing the urban sites would actually result in a higher average temperature, and as Hansen points out (see tamino’s first reference in #93), the bias introduced by urban sites is quite negligible.
Some biases are corrected for (time of observation, station re-siting), but it is fair to say not all biases are accounted for. For example, if over 20 years a big tree grows up and puts the station in the shade much of the day, that isn’t accounted for. If a parking lot gets added adjacent to the sensor, that isn’t accounted for.
The logical thing to do here is take a reasonable sample, say 10% of the sites, throw out the ones that aren’t sited correctly, and look again at the trends. If the trend stays about the same, then great. False alarm. If it shows a fraction of the current warming, well, that is interesting. If the trend shows even more warming, then that is interesting too.
It is a bit interesting to me that many are willing to let site issues slide, but not the time of observation bias. To me, with enough stations, time of observation is a non-issue. Some measure early AM, some measure late PM. As long as everyone measures their own station at the same time every day it should work itself out. But I read Karl’s paper and it seems that isn’t the case. Thus, there could very well be a significant bias from station location.
There’s a convincing argument on Climate Science that all non-ideal site issues (pavement, trees and plants, paint deterioration) will result in a positive bias, and that if you initially had a correctly installed site you won’t see a negative bias.
Marion Delgado says
Dan Hughes re #19. If we’re just doing a head count, I agree entirely with tamino, and not at all with you. So that’s 2 who agree with tamino, and 1 (you) who agrees with you. Just empirically testing your claim that “most people” believe your request for cites is reasonable.
I second the “unwilling to look” probability, and the likelihood of being “just another denialist,” and I add that this is obviously a ploy to get people to waste time. The topic of debate is, roughly, “Is global warming just an artifact of bad meteorological station data?” Gavin points out 6 mistaken assumptions he sees leading to the topic question being answered in the affirmative. You claim they’re, in essence, strawmen created by him out of intellectual dishonesty. Fine.
You, not us, need to find a source that maintains the affirmative that does not use one of those assumptions. The burden of proof, again, is not on the person who bothered to make the Real Climate post, it’s on the person who challenged him, to actually cite data, not simply rail against him with spurious claims of logical fallacies and rhetorical tricks you cannot, seemingly, justify.
Again: the burden is on you to find a source that affirms that global warming is being registered, or the accepted magnitude of global warming is being registered, due to faulty meteorological station data, that does not use one of those assumptions. Or perhaps to explain why there is a campaign to harass stations and describe their activity as a “cover up.”
Paul G says
== Post # 134 by David: ==
“Re #65 [not one person advocates what is the only sensible thing to do: perform a thorough review of the surface temperature sites. Instead, abstract, Machiavellian motives are attached to anyone who dares question the suitability of the sites.”
== David says: == “There are hundreds of papers that do this. It’s a pretty standard scientific process. I can also point you to two very large PhD theses in Australia which are nice cook book examples.”
How does the scientific process correct for an undocumented parking lot? Or a rooftop sensor? It still seems to me that applying corrections to a site whose characteristics are unknown to the scientists doing the correction is not the best science. Kind of like a doctor who only diagnoses over the telephone without ever seeing the patient.
ray ladbury says
Re 145. Now wait a minute–how do you KNOW that such artifacts will not be corrected? Keep in mind that you have a time series as well as a spatial grid of stations. A station with readings that drift will be noticed. A systematic trend in one station over time that is not evident at neighboring stations will be seen. A momentary glitch at one station not seen at its neighbors will stand out. There are statistical techniques for dealing with these different types of errors.
Time of day is just another error they need to correct for. The problem is that you are assuming that other errors are not similarly corrected–and that simply is not the case. And that is precisely the problem with a bunch of people going out traipsing around stations–they may find siting errors, but they will have absolutely no idea what they mean for the conclusions drawn with the dataset. Data by themselves–especially vast amounts of data–really mean nothing. You have to analyze the data to draw conclusions that are meaningful.
Removing a “bad” station from the database does not necessarily improve the quality of the dataset–and it may even degrade it. On the other hand, if the station gives consistently unreliable readings, it would be removed via the statistical techniques used on the data.
My experience is that people love data–numbers–they glaze over when you start talking about what you do to the data to make them meaningful. How many times have we seen someone select two stations “randomly” and see that they don’t show significant warming over a limited range of time and conclude there’s no climate change?
If you do not understand the station in the context of the network and the methodology for analyzing the data, you are at best wasting your time (a hobby akin to train spotting), and at worst creating a tempest in a teapot of those who are similarly ignorant (which would include pretty much everyone who isn’t familiar with the entire network and its dataset going back to inception).
James says
#149 – read the page linked in #147, they account for those trends. Anyway, having read all 150 comments I agree that microsite issues are probably not a concern, and while a survey of HCN sites is worthwhile, it’s also very easily used as a distraction by the deniers. However, the AGW side is not much better, with articles like this that basically say we’re all doomed unless “emissions of greenhouse gases are reduced by 60% over the next 10 years” (for a 2 deg C rise, and the chance of avoiding each further 1 deg C rise is given as “poor” due to cascading effects), which isn’t going to happen, because, well, China. At which point I stop caring, since we’re either screwed from AGW, or we’re not because GW isn’t AGW, most governments are trying to reduce CO2 emissions (see AP6, which includes China and India, unlike Kyoto), and I have better things to do with my time.
Barton Paul Levenson says
[[The logical thing to do here is take a reasonable sample, say 10% of the sites, throw out the ones that aren’t sited correctly, and look again at the trends.]]
No. You don’t throw out the ones that aren’t sited correctly. You estimate what their biases are and correct for them. That’s what’s actually done in practice, and for good reason — you don’t throw out data, even distorted data, if you can correct for the distortion.
[[ If the trend stays about the same, then great.]]
What part of “the rural stations show approximately the same trend as the urban stations” did you not understand? For the 17th time, the land surface temperature record is not the only thing that shows global warming. Sea temperature series reflect it too — are there urban heat islands on the sea? Boreholes reflect it too — are the boreholes poorly sited? Glaciers and tree lines and migration of plants and animals and sea ice and sediments and seashells show it too — are they all poorly sited?
You can’t get rid of global warming by throwing doubt on the land temperature records.
pat n says
Re 131.
Temperature records at US climate stations in Minnesota, Wisconsin, North Dakota and Montana show 3-5 deg F upward trends in annual mean data, 1890s through 2006.
In determining which stations to use for estimating the trends it was easy to pick out the stations with low quality data records by comparing the trends and data at the particular station being evaluated with records at its nearby stations.
For example, the record at St. Cloud didn’t fit with its nearby stations, so St. Cloud records were not used in determining average regional trends. The station at St. Cloud was moved a few decades ago, due to expanding economic growth at the old site, to a new site which has frequent fog. Because of the more frequent days with fog, a cooler annual mean temperature record than at nearby stations can be seen for the decades since the move. Although St. Cloud is an exception among otherwise numerous high quality climate stations and data records, its records have been used frequently by global warming skeptics, e.g. John Daley, deceased, and others.
Temperature plots for US climate stations (from regional climate center data bases).
http://picasaweb.google.com/npatphotos
http://new.photos.yahoo.com/patneuman2000/albums
Julian Flood says
I’m not surprised Pluto is warming. Its orbit is eccentric and it is a lot nearer the sun than it used to be — moving away again, but probably still showing the results of its time as the eighth planet.
JF
Peter Griffin says
I, for one, believe the preponderance of data indicate the climate is changing and trending warmer. I remain skeptical that CO2 is the primary culprit or even that a warmer earth is a bad thing (by the way, can anyone tell me what the temperature or climate should be?)
But let’s put all that aside. If the belief that rising CO2 emissions are going to cause catastrophic changes to the climate forces policy changes that result in real, measurable reductions in emissions and pollution, is that a bad thing? Is it wrong to “go with the flow” if I feel the right thing will ultimately happen?
I tend to think it is not wrong – it is OK to go with the flow. So long as our efforts are directed at lessening our impact, then I’m all for it.
My concerns only pop up when there is talk of attempting artificial changes to the climate (force cooling) or simply moving the problem from point A to point B (carbon trading).
Alan K says
#134: “A network that corrects error-free data is not necessarily better than a network that collects data with errors that are well understood.”
I have read and re-read this statement a few times and wondered if you could explain why this should be so. Is the first “correct” correct? I still struggle with it reading “collect”…
thx
Ray Ladbury says
Re 161. Alan–yup, it’s a typo. It should be “collect”. Sorry for the confusion.
Matt says
#157 BPL: No. You don’t throw out the ones that aren’t sited correctly. You estimate what their biases are and correct for them. That’s what’s actually done in practice, and for good reason — you don’t throw out data, even distorted data, if you can correct for the distortion.
After you understand the impact of local influences and can dial those out, of course you can leave the sites in. But please explain to me how you account for the fire chief pulling his SUV next to the temp sensor? Does he arrive at the fire station every day in a Gaussian or Rayleigh distribution?
Because you can’t know that, you can’t eliminate the bias at this point. So the reasonable thing to do is to delete the 10% of sites that are sited poorly and recalculate. The system is oversampled, so you can easily see if the trend changes significantly. If it does, then it means the biases at the tossed-out sites, while unknown, are significant. If you decide you want to recover the info in the tossed-out but biased sites, then you set about determining the bias.
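Roughly, in code (all stations and numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1950, 2007)
n_stations = 50
true_trend = 0.015                               # C/yr shared by every station

data = np.array([true_trend * (years - 1950) + rng.normal(0, 0.3, years.size)
                 for _ in range(n_stations)])
suspect = np.zeros(n_stations, dtype=bool)
suspect[:5] = True                               # pretend the first 10% are poorly sited

def network_trend(series):
    return np.polyfit(years, series.mean(axis=0), 1)[0]

print(f"all stations:      {network_trend(data):.4f} C/yr")
print(f"excluding suspect: {network_trend(data[~suspect]):.4f} C/yr")
# If the two agree within their uncertainty, the flagged sites are not driving
# the result; if they differ, the flagged sites deserve a closer look.
```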
What part of “the rural stations show approximately the same trend as the urban stations” did you not understand? For the 17th time, the land surface temperature record is not the only thing that shows global warming. Sea temperature series reflect it too — are there urban heat islands on the sea? Boreholes reflect it too — are the boreholes poorly sited? Glaciers and tree lines and migration of plants and animals and sea ice and sediments and seashells show it too — are they all poorly sited?
The land temp has the most data points of any historical record, so it’s interesting. Note that I believe rural stations were classified based upon nighttime sat photos. Lots of lights around a station means it’s not rural, and no lights means it’s rural. But a site surrounded by a parking lot and next to a large brick building in the middle of nowhere could certainly exhibit temp distortions in spite of being in the sticks and being classified as rural, right? Have you read any recent “peer reviewed” research on UHI?
You can’t get rid of global warming by throwing doubt on the land temperature records.
You mean the way some wanted to get rid of MWP? :) Seriously, though, we both know that. Again, getting this validated isn’t costing me or you a dime. Why worry?
Phillip Shaw says
I have to smile at the commenters who seem to believe that an audit of all 1221 data collection stations wouldn’t be a large task. As an engineer who’s worked on a large number of cost proposals I’d like to offer a rough order of magnitude (ROM) estimate of the time and money it would take to do a complete audit.
To adequately audit a site would require more than just a drive-by photograph. You’d need to take the photographic survey of the site, but you’d also need to inspect the sensor mounting, power supplies and data acquisition system. You’d need to review maintenance and operation logs for completeness and anomalies. And, of course, you’d need to check the accuracy of each sensor against a calibrated standard. All of this could probably be accomplished in one 8-hour workday. Add a second day for the auditor to write up the findings and travel to the next site.
So each auditor could inspect roughly 2.5 sites per work week, or about 125 sites per 50 week man-year. Thus it would take about 10 man-years to audit all 1221 sites. Give or take a few man-years.
The defense firm I work for prices out technical manpower at between $150K and $200K per man-year (for salary, benefits, and overhead). For this estimate I’ll use the lower end of the cost range, but given the high amount of travel involved a more definitive figure could well be higher.
So 10 man-years at $150K per man-year would be 1.5 million dollars. That’s a lot of money if you’re a private citizen, or even a university. But it’s not a lot of money for a major fossil fuel company. It’s less than the cost of a commercial during the Superbowl, and much less than companies such as Exxon are spending on their FUD campaigns. Does anyone seriously believe that Exxon, or another of its ilk, would hesitate to spend that money if they believed it would support their position?
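Spelled out (the rates and productivity figures are my assumptions above, not measured values):

```python
sites = 1221
sites_per_week = 2.5          # one day on site, one day for write-up and travel
weeks_per_year = 50
cost_per_man_year = 150_000   # low end of the quoted $150K-$200K range

man_years = sites / (sites_per_week * weeks_per_year)
total_cost = man_years * cost_per_man_year
print(f"{man_years:.1f} man-years, about ${total_cost / 1e6:.1f} million")
# -> roughly 9.8 man-years and ~$1.5 million
```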
Ray Ladbury says
Steven Mosher,
Hopefully you understand that my concern is not that any systematic audit of stations will overturn the conclusion that climate is changing. Were that possible, it is undoubtedly something we would all wish for. My concern is that there are plenty of unscrupulous (and often highly paid) elements in the denialist camp who will stop at nothing to delay action on climate. (I also hope you understand that I do not impute ulterior motives to you.)
There are many ways of dealing with imperfect data–trying to perfect it is only one solution, and often not the best or most efficient. In many cases, excluding less than perfect data can actually diminish the overall quality of the dataset. And if a particular station were really problematic, a good statistical analysis procedure would effectively eliminate it from the analysis anyway.
I would very strongly urge you before taking part in this effort to familiarize yourself with the network and analyses as a whole. Understand the statistical quality controls used and how they work, and if you discover a problem site, look at the types of errors it might produce, how they would be identified/treated and what the ultimate post-analysis effect on the conclusions of the analysis might be.
Just so you know, I’ve been on both sides of this issue in the past. My thesis experiment (experimental particle physics) featured very noisy data that I had to clean up without manufacturing a signal. On numerous occasions, I had folks running up to me very alarmed saying, “Did you know…” Based on the fact that I did get my doctorate, you can assume that I did know and did come up with procedures for dealing with the faults in the data. In my current incarnation as a radiation engineer, I often find issues with electronics and rush to the satellite design engineer and say, “Did you know…” More often than not, they say, “Yeah, and this is what we did to mitigate that…” Orbiting observatories like Hubble are a devil’s playground for radiation effects, which can corrupt data and cause systematic drifts over time as radiation dose mounts. The data are imperfect, but there is always some desperate grad student who finds a way to use it.
Also, note that I said that the stations that have problems are more likely to be those that are old–not that invariably old stations will have problems. And keep in mind that imperfect data is not unusable data.
Mitch Golden says
Re #12, and others, Dan Hughes’s questions:
I was checking out the always-amusing conservapedia (“The Trustworthy Encyclopedia”) regarding global warming, and here’s the very first sentence of their explanation of what they call “The Modern Warm Period”.
(The two citations are to Pielke Sr’s blog.) Given this wording, and the prominence conservapedia gave this issue, it’s clear they hold Mistaken Assumptions 1 and 6 in Gavin’s post. Given the nature of Conservapedia, I think that it’s safe to say that this is a fairly common belief, held by more than one or two people.
Hank Roberts says
And as people begin to understand the issue, and realize this isn’t going to be a killer complaint, we start to hear all the other beliefs relied on as rationalizations for doing nothing come out again — it can’t be real, it’d cost too much to avoid the cliff, maybe the car will grow wings before we crash, my life is too short to care what happens after I die, I can’t believe it’s a problem, anything people do is natural change… Each time a new supposed magic bullet is pulled out to kill the monster and fails, the same litany of reasons gets expressed. People, when you find yourself chanting the litany of beliefs that reassure you that you don’t have a problem, you have a problem.
Alan K says
#162 – thx Ray, I’m still interested in why it’s better to have well-understood errors rather than no errors?
[Response: False dichotomy. There are always errors and one should strive to understand them as best as possible. The idea that there are any error-free sources of information about the real world is an illusion. – gavin]
Jim Dukelow says
Re #38
I suppose my problem in #38 was using a phrase — Voronoi tessellation — that no one was familiar with. I’ll try again.
To construct the Voronoi tessellation of the land surface of the earth using met station locations, you do the following: 1) take a station, 2) connect it with line segments to all the nearest adjoining stations, 3) construct the perpendicular bisectors of each of the line segments, and finally 4) construct the convex cell (with polygonal boundary) formed by joining of all of these perpendicular bisectors.
The “cell” so constructed will consist of all of the points on the earth that are closer to the station you started with than to any other station. In an urban area with several met stations, that cell will be relatively small. In a rural area with the nearest stations tens or hundreds of miles away, the cell will be relatively large. Assign to that station’s data time series a weight equal to the area of its cell.
Now the global/regional/local “average” temperatures/precipitation/etc. will be calculated using area-weighted averages of the individual station time series.
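A crude numerical version of the same weighting (rather than constructing the polygons explicitly, assign every point of a fine grid to its nearest station; the station coordinates and anomalies below are invented, and a flat plane stands in for the sphere):

```python
import numpy as np

# (x, y) station locations: three clustered "urban" stations and one remote rural one
stations = np.array([[0.10, 0.10], [0.15, 0.12], [0.12, 0.08], [0.90, 0.90]])
anomalies = np.array([1.0, 1.1, 0.9, 0.2])   # hypothetical warm cluster, cooler rural site

# Fine grid over the unit square standing in for the region of interest
gx, gy = np.meshgrid(np.linspace(0, 1, 400), np.linspace(0, 1, 400))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# Nearest station for each grid point -> approximate Voronoi cell areas
dist = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
owner = dist.argmin(axis=1)
weights = np.bincount(owner, minlength=len(stations)) / len(grid)

print("area weights:", np.round(weights, 3))
print("simple mean:        ", anomalies.mean())
print("area-weighted mean: ", np.dot(weights, anomalies))
```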
This process has the following virtue:
Anyone who has lived in an urban area for a few decades and listened to or watched weather reports for that time is aware that the urban heat island is a real phenomenon — telling us that a real part of the real earth has experienced accelerated warming. That warming may be a product of land use changes, thousands of exhaust air streams of air conditioners, construction of thousands of heat storage structures (Trombe walls?), or less-than-ideal locations of met station instruments (or, for that matter, less-than-ideal location of urban residents). The weighted average described here will assign to that real warming of a small part of the earth’s surface the appropriate (small) weight.
What RC’s visiting denialists are calling “bad” local met station data is, in most cases, perfectly good data recording what is really happening in one small part of the earth’s surface.
If you wanted the global/regional/local averages to somehow provide a measure of average human misery due to increasing temperatures, then population-weighted or un-weighted averages will probably capture that, since the density of met stations is a reasonable proxy for population density.
Best regards.
Julian Flood says
re reply to 67
The Quatsino data is surprisingly like the SST temperatures with the so-called ‘bucket correction’ removed. That correction distorts the Hadcrut3 temperature plot – I believe the correction was originally required to make one of the major GCMs produce accurate land temperatures – and should now be re-examined.
I came to suspect something wrong with Hadcrut3 because of my own toy theory of global warming (briefly, oil and surfactant spills reduce marine strato-cumulus cloud, lower the earth’s albedo and warm the ocean), which predicts a temperature blip during WWII caused by the Kriegsmarine effect. Removing the bucket correction from SST records shows this blip loud and clear, as does the Quatsino record. This means that we can bypass the UHI troubles by accepting uncorrected SSTs as valid — a huge data pool of uncorrupted temperatures. Eyeballing, the Quatsino data shows a warming of .14 deg/decade for the last 100 (ish) years. So does the SST data.
Even toy theories have value it seems.
BTW, if this duplicates, my apologies: the first attempts didn’t show up.
Julian Flood.
Ray Ladbury says
Re 167. ” People, when you find yourself chanting the litany of beliefs that reassure you that you don’t have a problem, you have a problem.”
Maybe we could come up with a 12-step program for denialists. Except these usually put everything in “God’s hands”, and then people could go right back to ignoring the issue saying, “Oh, God will sort it out.” The inability to accurately assess risk may be the fatal flaw in human intelligence. The irony is that the condition, while not curable, is treatable by large doses of pragmatism and a strict avoidance of ideology (left or right) and complacency.
Matt says
#165 Ray Ladbury My concern is that there are plenty of unscrupulous (and often highly paid) elements in the denialist camp who will stop at nothing to delay action on climate. (I also hope you understand that I do not impute ulterior motives to you.)
Delaying action is already going to happen, because “doing a little” (Kyoto) doesn’t change the outcome appreciably, and “doing the amount needed” will cause rioting in the streets once the western world understands what it will need to give up.
And you don’t think there are those that stand to make millions if global warming is indeed a serious problem? Do you honestly believe there are no agendas on the “believer” side of things? For real?
Make no mistake, while there are indeed a few humble scientists toiling away on the subject, there are forces lining up on both sides that will make (or continue to make) lots of money (cash, fame, adulation, free dinners) on this. Be very suspicious of both arguments when this much money is at stake. First in line at the cash register is Al Gore with his clean energy fund. If folks don’t believe in warming, his fund tanks. If they are scared to death of warming, his fund soars.
pat n says
Re: 158.
Taking out stations with low quality data records does not mean the data at those stations are never used. A station where a bias or shift is known to exist in the data record may still be used in estimating missing reports at nearby high quality stations for non-critical points (i.e. not a recent, warm or cold value). The data from that site can be used with as little as three known good quality values by deriving a year-to-year change at that station and applying the difference to determine an estimate for a point at a high quality station where a single data value is missing or questionable.
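For example (numbers invented):

```python
# A missing annual value at a high-quality station is filled in by applying the
# year-to-year change observed at a nearby (possibly biased) station; a constant
# bias at the neighbour cancels out in the difference.
neighbour = {2003: 8.4, 2004: 9.1}    # biased neighbour, but its changes are usable
good = {2003: 7.0, 2004: None}        # 2004 missing at the high-quality site

change = neighbour[2004] - neighbour[2003]   # year-to-year change at the neighbour
good[2004] = good[2003] + change             # apply it to the good station
print(good)                                  # {2003: 7.0, 2004: 7.7}
```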
Ray Ladbury says
Alan K., Gavin is of course right–error-free data does not occur in nature–or at least it does not occur in a context where you can do anything meaningful with it. In order for a dataset to be error free, you would have to constrain the problem to such a degree that it would no longer apply to the real world. Moreover, a dataset that was advertised as error free, would lead to overconfidence both in the data and to what it could be used for.
The old aphorism applies here: “A man with one watch always knows what time it is; a man with two is never sure.” Maybe so, but two watches give you much more of an idea of how unsure you should be (though to really know, you need at least 3, since that’s the minimum number you can use to calculate a meaningful variance).
Rod B says
My problem/concern with global temperature measurements, briefly summarized as (and still…) “margin of error”, is amplified by the borehole and ocean depth “validation”. If, for the sake of discussion, measuring the year-by-year temperatures and coming up with anomalies that add up to 0.7 degrees over 100 years or so is dicey, measuring reliably the even finer temperature gradient one meter, five meters, 100 meters, whatever, has to be damn near physically impossible, is it not? Scientifically at least.
I do recognize a practical dilemma if my contention has merit. And that is if you wait for global temp measurements that satisfy my accuracy requirements, the global average would have to be 5-10 degrees or so hotter, so we finally validate the warming a couple of days before we die. You have no choice but to use what you have, but I see no need (other than politically) to ballyhoo it.
For the record (and speaking for myself, not the skeptic community), with a couple of nuances and one generality, I agree the six “skeptic arguments” of this thread have little scientific credibility. The nuances: #1) I remember when the issue of urban heat islands came up, there was a hue and cry from AGW proponents that UHIs did not exist. Though I don’t recall if those proponents were politicos or scientists. At any rate, when it was pretty much determined that UHIs do exist but are easily accounted for in the mathematics, it became a non-issue except for the small contingent of loud bottom-feeding skeptics (I feed only at the shoreline [;-} ). #5) Your refutation of this (individual station errors) is valid but only up to a point. Clearly there could be cases where individual station errors would lead to erroneous results. Though I don’t believe we’re currently there. Admittedly my nuances, while I think accurate, are not very important in the scheme of things. Overall, while I don’t necessarily fully agree with Pielke (see, I can disagree with far more intelligent people than me on both sides of the table with no shame what-so-ever!), I do think he has a general point that AGW scientists are too quick and too willing to treat as golden data and information that, while significant, is less than perfect.
I also think that proof coming from Arctic sea ice, glacier retreats, etc. might be indications, but these are just one step away from the cherry blossoms blooming early, the 26+ storms we had in 2005, Ted Turner’s statement, “it’s hotter than hell outside”, etc. – “proofs” that are thrown out by some. There is no “proof” that these are more than natural occurrences (nor, admittedly, that they are not), and they require, for now, really contorted, tortuous explanations (snow/ice getting covered with soot, the Arctic really getting lots warmer than the few tenths of a degree of the global average over just the past 2-3 decades, etc.) for why AGW is causing them — though they are professed with religious conviction.
Ray Ladbury says
Matt,
I do not care about Al Gore. The impetus to deal with this issue is not coming from him, but from what the science is telling us. Listen to the scientists–they are very nearly all on the same side on this one. I have nothing against someone making money if they do so honestly. I deplore those who lie to make a buck or to preserve their privileged status–and that is what the denialist disinformation machine is doing in this case.
Hank Roberts says
Rod, please read some science.
You write:
> There is no “proof” that these are more than
> natural occurrences
You’re right, of course.
You’re a longtime reader and commenter here.
You’ve either missed one of the basic things about science that people have been trying to help you learn, or you know better and you’re posting the talking point from the PR people consciously.
Read the cartoon in the link below at least, please.
For any new readers who don’t understand why the insistence on “proof” or even proof in science is a bogus claim, this may help:
http://zenoferox.blogspot.com/2007/05/truth-with-capital-t.html
joe says
Ray,
Do you include Pielke Sr. in your “denialist disinformation machine”? What are his motives for raising contrarian questions? You suggest we “listen to the scientists” but some scientists do disagree, as you indicate. [edit]
Who else do you include in the denialist cabal? Please name names so we know who to avoid.
Gary says
I am sorry, I do not understand this thread. It appears to me that Mr. Watts’ project involves checking (by photo) the existing weather sites in the U.S. Why would a scientist NOT want to periodically check his instruments, from which he receives data, to assure their accuracy?
Hank Roberts says
Gary, when you take a picture of a thermometer, what does that tell you about its accuracy?
When you take a record every day of hundreds of thermometers all around your home and compare them, will you know any more about the temperature where you live? That’s what the instruments are used for — the accuracy is an outcome of the large number of measurements taken, and of quality control checks on the data.
The picture tells you only that there really is a box at the location described. Looking at the temperature record in the database tells you if there’s anything odd about the record over time. Looking at the other records tells you if there’s anything odd about that location over time.
Nobody’s argued it’s not good to consider whether the picture shows a problem. If there’s a tree on top of the box or a car parked on it or a big hole in it or a barbeque restaurant’s open pit fire next to it, that’s good to know.
Ray Ladbury says
Re 178. Some scientists will always disagree. All will have their own motives. That is why scientific consensus is critical to the progress of science. Pielke is problematic, because he has never said whether he really believes climate change will just go away if all his concerns are addressed.
Re 179. The stations are checked. The data are checked. And checked again. And measured against other indicators. The real question is why this extraordinary level of checking is insufficient for some.
Timothy Chase says
Gary (#179) wrote:
Scientists check their instruments by visual, instrumental and statistical means. But contrarians either wish to have stations eliminated (even though we can get useful information from them by correcting the data using well established statistical methods, and closing stations would reduce the accuracy of our temperature estimates) or, what is more likely, simply wish to change the focus from the well-established rise in temperatures (shown by many independent lines of investigation, including the shrinking of the Arctic ice cap) to the fact that some stations are not ideal, in order to discredit the science which has established that climate change is taking place and that it threatens countless lives.
In some cases, the motivation for opposing the science is financial (e.g., due to someone being in the pay of Exxon); in other cases it is a concern for the economy, but it may also be more ideological in nature. I personally suspect that the latter two categories taken together are more common than the first. I have some sympathy for the second, although I believe it is misplaced given the likely consequences of climate change for the global economy simply in terms of this century considered by itself.
Dan Hughes says
The use of ‘denialist’ and ‘alarmist’ and their variations has reached entirely new depths of disgust in this thread. Unfortunately, the RealClimate Web site has already started to lose its credibility because of the presumptive and unilateral applications of these labels by quite a few people who post here. What started as a source of correct and reliable science information has degenerated to its present state of mess.
If a poster decides to presumptively and unilaterally apply any label to other posters, they should be required to state exactly which subject matter the label applies to. After all, there are an almost uncountable number of things that can be the subject of denial and alarm. More importantly, they should be required to cite references in which the target of the label has explicitly stated ‘denial’ or ‘alarm’ about the subject.
The subjects of this thread are basically related to the quality of empirical data. So, ‘denial’ and ‘denialist’ must mean that some people are denying that the data shall be of highest quality. Others, apparently, are ‘alarmed’ that the data are of highest quality.
“Science should not tolerate any lapse of precision, or neglect any anomaly, but give Nature’s answers to the world humbly and with courage.” Sir Henry Dale, past President of the Royal Society of London.
Dave Blair says
#177
re: your statement to “please read some science”
I like that article. Science is traditionally a difficult thing to define. Math is a formal science. Climatology is a natural science. Climatology is a soft science (much like economics, archaeology or geology), so it’s difficult to prove theories and it is open to interpretation. Not only that, but AGW has a social science aspect to it since the claim is that it is caused by humans.
Steve Bloom says
Re #178: For those interested in RP Sr., see this Stoat thread and in particular this comment by me. The ultimate arrangement of these particular tea leaves seems to me to point to an explanation that’s more psychological-social-political than scientific.
James says
Re #179: [Why would a scientist NOT want to periodically check his instruments, from which he receives data, to assure their accuracy?]
The real issue here is not checking the instruments (which, as people have been pointing out, has been and is being done), it’s the motivations of the people who are now calling for the checking. Their argument is essentially “We don’t like what your instruments tell us, therefore they must be wrong, or at least we can make enough noise shouting about it to drown out the real issues.”
Let’s do a little thought experiment. Since these people claim instrument error, let’s pretend that no one ever invented an accurate thermometer until today. Throw out all temperature records, and make a judgement based on all the other lines of evidence: arctic & glacial melting, earlier spring thaws, runoff patterns, plant & animal cycles & migrations, and all the rest. Doesn’t all that tell exactly the same story as those allegedly inaccurate temperature records? And doesn’t that mean that either the temperatures must be pretty much right, or that the whole darned world is wrong?
Which, come to think of it, is the problem in a nutshell: the world as it is doesn’t suit these people, so they pretend it’s otherwise :-)
Timothy Chase says
Dan Hughes (#183) wrote:
Dan,
By suggesting that climatologists need to have their stations audited, contrarians are implying that climatologists are incapable of monitoring themselves, either because they are extremely incompetent, grossly negligent, dishonest, ideologically motivated or involved in some sort of conspiracy, or perhaps a little of all of the above. That is of course their privilege – for the most part.
However, given the fact that in the vast majority of cases they refuse to acknowledge the overwhelming evidence from many different lines of investigation which corroborates the trends that are being discovered by means of temperature measurements, this leads me to the conclusion that they are not sincere or particularly concerned with the truth. At that point I believe it is appropriate to discuss their motivations – and it would be dishonest to treat them as genuine seekers of the truth.
Nice blog, by the way:
http://danhughes.auditblogs.com/
steven mosher says
ok, I promised myself I would not respond to some things, but let’s have a look at the local epistemologist.
Timothy Chase:
“Scientists check their instruments by visual, instrumental and statistical means. ”
There is no evidence that Hansen, Jones or Parker ever did a visual inspection of weather sites.
Parker even claimed that Urban sites were located in PARKS. More on this later. I have yet to see a SINGLE calibration record for any HISTORICAL site. Link one, please.
Now, I want you to check JONES’ treatment of the instrument error at stations. You go hunt down his paper. Then see if you can spot the error he made in estimating instrument error over a month-long period.
Timothy Chase:
“But contrarians either wish to have stations eliminated (even though we can get useful information from them by correcting the data using well established statistical methods and closing stations would reduce the accuracy of our temperature estimates) ”
Well, on one hand I have Ray telling me that the “grid” is oversampled (like he knew the frequency) and on the other hand I have you telling me that we can get good information from these junk stations, begging the question. You all should get your story straight.
Either, believing Ray, the grid is oversampled by a factor of 3 and we can live with noise, or delete stations; or, believing you, excising stations that don’t meet QA standards will corrupt the “SIGNAL”.
You believe in the signal.
Timothy Chase:
“or what is more likely, simply wish to change the focus from the well-established rise in temperatures (by means of many independent lines of investigation including the shrinking of the Arctic Ice Cap) to the fact that some stations are not ideal in order to discredit the science which has established that climate change is taking place and that it threatens countless lives.”
Motive hunting. Hansen and Karl initiated this criticism of the historical network. NOT US.
Let me quote Hansen/Karl. Then you decide.
“Are we making the measurements, collecting the data, and making it available in a way that both today’s scientist, as well as tomorrow’s, will be able to effectively increase our understanding of natural and human-induced climate change? We would answer the latter question with an emphatic NO. There is an urgent need for improving the record of performance.”
More from Hansen/Karl:
“It is necessary to fully document each weather station and its operating procedures. Relevant information includes: instruments, instrument sampling time, station location, exposure, local environmental conditions, and other platform specifics that could influence the data history. The recording should be a mandatory part of the observing routine and should be archived with the original data. ”
[edited to remove pejoratives – please stay polite]
Eli Rabett says
Eli thinks that a lot of people don’t have a clue about how stations are run and calibrated and checked. There is literature out there folks, go read it before running off telling everyone that you are going to save the world by taking pictures.
Hint: One picture says nothing about the HISTORY of the station
Timothy Chase says
Dave Blair (#184) wrote:
From what I can see, you would regard chemistry and physics to be soft sciences. This isn’t how the term “soft science” is generally used. The term “soft science” is generally contrasted against “hard science”, which would include physics and chemistry.
“Soft science” typically refers to those sciences which study humans, particularly where human psychology becomes involved. Climatology does not study humans – although it may be used to identify where human causation has resulted in certain effects within the climate. But this would be no different from an analysis in terms of physics of how a driver stepping on a gas pedal resulted in the car plowing into a bus.
In truth climatology is best regarded as an advanced branch of physics – although there are certainly elements of chemistry to it.
BlogReader says
James: The real issue here is not checking the instruments (which, as people have been pointing out, has been and is being done), it’s the motivations of the people who are now calling for the checking. Their argument is essentially “We don’t like what your instruments tell us, therefore they must be wrong, or at least we can make enough noise shouting about it to drown out the real issues.”
Maybe it is just me, but the way to combat this is to not take the Timothy Chase route and divine what’s in their souls to decide if a response is necessary, but rather to make sure that everything is documented so that these questions won’t come up in the future. If their questions are foolish then it should be easy to refute them.
To a casual observer like myself it looks like things might have been a bit sloppy (a weather station on a new parking lot, cities marked as rural when they are now urban) and that instead of trying to fix the issue, people on here are trying to say that they are being persecuted.
steven mosher says
Well
Today’s question for the curious.
Gavin gave me a nice little project. THANKS! Basically, GISS estimates that the land record for the period since 1900 has increased at about .8C +-.2C (95% CI), or .08C per decade +-.02C.
Since Anthony Watts started his search in Chico, California, I thought it would make sense to try to understand what the science said about that grid.
So, Gavin provided me with the linear trend for the increase in temps in Anthony’s grid: 35N-40N, 120W-125W.
The linear trend, per Gavin, is 0.8C/century. I didn’t get a CI from him, so I’ll assume the global CI.
OK, back to the investigation, which is just beginning.
Let’s talk about 35N-40N, 120W-125W. It is in California and includes San Francisco, San Jose, Sacramento, the Sacramento Valley, and inches towards Tahoe. It’s geographically diverse: coastal, urban, rural, agricultural.
GISS, as best I can tell, uses 20 stations in this grid; data from those stations is “used”.
Now, about 3% of the world’s land surface is urban, and California is about 5% urban. I have no reason to believe that 35N-40N, 120W-125W departs from this percentage in a substantial way.
But let’s imagine that 10% of the land mass in this grid were urban: twice the mean for the state, three times the mean for the world. So imagine that 10% of 35N-40N, 120W-125W is urban.
Now, I have a list of weather stations in this grid, the weather stations that are “used” (according to the GISTEMP files). If 10% of your land mass were urban and 90% rural, and you randomly picked 20 locations to sample the climate, how many would come from urban areas?
Questions:
1. If 10% of the land is urban, how many stations out of 20 are CATEGORIZED as urban? (A sketch of what random siting would imply follows after these questions.)
a. 2 (10%)
b. 5 (25%)
c. 10 (50%)
d. 15 (75%)
2. What percentage of weather stations are located at airports and/or military bases?
a. 10%
b. 20%
c. 40%
d. 60%
3. If the urban landscape is oversampled and the rural landscape is undersampled, can you perform powerful discriminating tests comparing the two? More specifically, if 5% of your population (urban land) is represented by 50% of your sample, and 95% of your population (rural land) is represented by the other 50% of your sample, what kind of claims can you make about differences between the two?
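For question 1, a quick back-of-the-envelope check of what purely random siting would imply (the binomial model below is my own assumption, not anything from the comment above or from GISTEMP): 20 stations drawn at random from a cell that is 3-10% urban would average roughly one or two urban stations, and getting 10 or more urban stations by chance would be vanishingly unlikely.

```python
# Illustrative sketch only: expected urban-station counts under random siting.
from math import comb

def urban_station_probs(n_stations=20, urban_frac=0.10):
    """P(k of n_stations fall on urban land) if sites were picked at random."""
    return [comb(n_stations, k) * urban_frac**k * (1 - urban_frac)**(n_stations - k)
            for k in range(n_stations + 1)]

for frac in (0.03, 0.05, 0.10):
    probs = urban_station_probs(urban_frac=frac)
    print(f"urban fraction {frac:.0%}: expected urban stations = {20 * frac:.1f}, "
          f"P(>=10 urban) = {sum(probs[10:]):.1e}")
```

The sketch only says that if station placement mirrored land area, an urban share anywhere near half the sample would indicate heavy oversampling of urban land; it says nothing by itself about whether those urban stations bias the gridded trend after adjustment.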
Timothy Chase says
steven mosher (#188) wrote:
Not personally.
However, if you have ever taken time out for economics, you might have learned about the division of labor. Population growth tends to result in that sort of thing, and in the efficiencies of scale which follow from it.
Oversampling is part of what makes it possible to get good information out of the grid. It permits cross-verification, and when one station or another is on the fritz you have other stations to fall back on.
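To make the cross-verification point concrete, here is a toy sketch (my own construction, not any GISS or NOAA routine): with several overlapping stations, a siting discontinuity at one of them shows up as a step in that station’s offset from its neighbours, whereas a genuine regional change moves the neighbours too and leaves the offset roughly flat.

```python
# Toy example: five synthetic neighbouring stations share a regional trend;
# one of them picks up a spurious +0.6C jump (e.g. a siting change) in 1980.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2010)
regional = 0.008 * (years - years[0])                # shared ~0.08C/decade signal
stations = regional + rng.normal(0.0, 0.15, (5, years.size))
stations[2, years >= 1980] += 0.6                    # artificial discontinuity

neighbour_median = np.median(np.delete(stations, 2, axis=0), axis=0)
offset = stations[2] - neighbour_median
print("mean offset before 1980: %+.2f C" % offset[years < 1980].mean())
print("mean offset after 1980:  %+.2f C" % offset[years >= 1980].mean())
# A step in the offset flags a station problem; the shared regional signal
# cancels out in the comparison.
```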
Context please. References…
If you look at what he is actually saying, he seems to be concerned with improving the quality of the science. I presume this means that you will be throwing your full support behind his efforts? Advocating the kind of funding which it will require?
*
In any case, there are always motives.
In science the primary motive is curiosity and the reward a sense of wonder. One might also believe that one is under a moral obligation to understand to the best of one’s ability. Patterns in human action will suggest different motives. But in any case, one begins with identification which precedes evaluation, and in communication, one begins with the assumption that others are engaged in a similar process – until one has sufficient evidence for thinking otherwise.
Ender says
Dan Hughes – “The subjects of this thread are basically related to the quality of empirical data. So, ‘denial’ and ‘denialist’ must mean that some people are denying that the data shall be of highest quality. Others, apparently, are ‘alarmed’ that the data are of highest quality.
“Science should not tolerate any lapse of precision, or neglect any anomaly, but give Nature’s answers to the world humbly and with courage.” Sir Henry Dale, past President of the Royal Society of London. ”
I am not sure that this has been said in the 192 posts on the subject; however, if it has, please delete this comment.
The most obvious comment, the one that rather takes the wind out of McIntyre’s sails, is that the network of weather stations is not the property of climate scientists. It is not climate scientists’ responsibility to calibrate, check, or collect data from these weather stations. The stations provide sufficient accuracy for the purpose for which they were designed, that of helping to predict the weather, and for that primary task they are perfectly adequate. If climate scientists had the money and opportunity to set up a system, I am sure they would demand sensors of the highest precision sited in ideal locations. However, they have to work with what they have. Lacking the funds to build a parallel network with higher precision, they make use of this imperfect data because the network is extensive, already in place, has a long history, and is paid for by someone else rather than draining scarce research funds.
Rabbeting on about how climate scientists should do this and should not put up with data of dubious quality completely misses the point that they are not in charge of the data. Start harassing the relevant meteorology departments to improve the network; however, they will probably tell you that the network serves admirably for its primary purpose, so why should they spend money upgrading it?
If you are so concerned with the data, then pay to set up a higher-precision network with carefully chosen sites. It should only cost a few million dollars, which would be fossil fuel companies’ money better spent. I honestly think that this is not really an ideal wedge issue and that a better one needs to be found.
mark s says
RE 183,
I find it interesting that you quote a past president of the Royal Society, Dan; maybe you should check out what they say about AGW.
http://www.royalsoc.ac.uk/landing.asp?id=1278
James says
Re #191: […but the way to combat this is to not take the Timothy Chase route and divine what’s in their souls…]
I have to disagree, simply because that’s where the problem lies. The questions related to weather station siting & accuracy have been addressed, here and in the links people have provided. I’ve seen nothing that persuades me that the people claiming problems have even looked at any of this, let alone understood it, or would allow their opinions to be affected if they had.
[To a casual observer like myself it looks like things might have been a bit sloppy (weather station that is on a new parking lot, cities marked as rural when they are now urban) and that instead of trying to fix the issue…]
Because you don’t, or won’t, give the matter enough study to understand that it has been fixed. The problem is that even after being fixed, what the data is showing isn’t what the people asking the question want to hear, so they ignore the answer and go on repeating the question in order to convince their audience that it’s a valid question. This is a basic underhanded debating tactic, used by everyone from major religions & political movements down to UFO cultists & 9/11 conspiracy theorists.
pat n says
Re: 188
——–
In the US the answer is yes, yes and no.
Hank Roberts says
> quote Hansen and Karl …
He’s quoting second-hand from Climate Audit:
climateaudit.org/?m=20070605
pat n says
Yes, yes, in theory anyway, but still a no for public availability unless they have lots of money to spend on 100-year historical records and recent observations.
… The National Weather Service (NWS) makes observations and measurements of atmospheric phenomena as required for climatological, hydrologic, meteorological, and oceanographic services. …
http://www.nws.noaa.gov/hdqreorg.php#od
Al Zumbuhl says
Some of this criticism seems to be confusing different scales of “error” that have different functional meanings. My training is in soil science, so this may be outside my area of expertise, and maybe I’m way off base. However, I have assisted with the set-up and analysis of meteorological stations in my own research (on small watersheds in the New York City Watershed).
Much of this debate seems to be confusing “measurement error” with a spatial covariance of land use and temperature trends (someone with a PhD: is “heteroscedasticity” the correct term for this?). Measurement error can be due either to instrument error or to artifacts of poor siting (“the asphalt effect”).
But it strikes me that this type of error is something different from the UHI effect (though on the face of it they would appear to be related). This “instrument” and “siting” error, geostatistically speaking, is expressed as microscale variability, which is what I believe is “corrected for”. And as far as the “asphalt effect” goes, we should also consider that there is an intrinsic natural microscale variability in the micro-climate system. I’m thinking, for example, of how we modeled a basic estimate of evapotranspiration using the temperature gradient derived from a surface reading and a 1.5-meter temperature reading.
But isn’t the UHI effect something other than this “microscale variability”, something that would be evident at a regional spatial scale as a “hot spot”, for instance (thus the urban heat island)? Yes? No? Shut up?
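For what it is worth, a small simulation (entirely my own construction, with made-up numbers) illustrates the distinction being asked about: uncorrelated microsite noise largely averages out across the stations in a grid cell, whereas a UHI-type bias is shared by the urban stations, survives the averaging, and therefore has to be handled as a separate, regional-scale correction.

```python
# Made-up illustration: 15 rural and 5 urban stations share a regional trend;
# each station has its own microsite noise, and the urban ones also share a
# slow warm drift standing in for UHI growth.
import numpy as np

rng = np.random.default_rng(1)
n_rural, n_urban, n_years = 15, 5, 60
t = np.arange(n_years)
trend = 0.008 * t                                  # regional warming, C per year

microsite = rng.normal(0.0, 0.2, (n_rural + n_urban, n_years))
uhi_drift = 0.005 * t                              # hypothetical urban bias

rural = trend + microsite[:n_rural]
urban = trend + microsite[n_rural:] + uhi_drift

print(f"rural mean trend: {np.polyfit(t, rural.mean(axis=0), 1)[0]:.4f} C/yr")
print(f"urban mean trend: {np.polyfit(t, urban.mean(axis=0), 1)[0]:.4f} C/yr")
# The microsite noise mostly cancels in the grid-cell means; the correlated
# urban drift shows up as the difference between the two trends.
```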