A few items of interest this week:
Katrina Report Card:
The National Wildlife Federation (NWF, not to be confused with the ‘National Wrestling Federation’, which has no stated position on the matter) has issued a report card evaluating the U.S. government response in the wake of the Katrina disaster. We’re neither agreeing nor disagreeing with their position, but it should be grist for an interesting discussion.
An Insensitive Climate?:
A paper by Stephen Schwartz of Brookhaven National Laboratory accepted for publication in the AGU Journal of Geophysical Research is already getting quite a bit of attention in the blogosphere. It argues for a CO2-doubling climate sensitivity of about 1 degree C, markedly lower than just about any other published estimate, well below the low end of the range cited by recent scientific assessments (e.g. the IPCC AR4 report) and inconsistent with any number of other estimates. Why are Schwartz’s calculations wrong? The early scientific reviews suggest a couple of reasons: firstly, that modelling the climate as an AR(1) process with a single timescale is an over-simplification; secondly, that a similar analysis applied to a GCM with a known sensitivity would likely give incorrect results; and finally, that his estimate of the error bars on his calculation is very optimistic. We’ll likely have a more thorough analysis of this soon…
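For readers curious what a single-timescale estimate of this sort looks like in practice, here is a minimal sketch (our own illustration, not Schwartz’s actual code; the heat-capacity value and the 3.7 W/m^2 doubling forcing are assumed round numbers): fit a relaxation time from the lag-1 autocorrelation of detrended annual temperatures, then convert it to a 2xCO2 sensitivity.

```python
import numpy as np

# Minimal sketch of a single-timescale AR(1) sensitivity estimate (illustration
# only, not Schwartz's code): detrend annual global-mean temperatures, read a
# relaxation time off the lag-1 autocorrelation, divide by an effective heat
# capacity, and scale to a CO2 doubling. The heat capacity and doubling forcing
# below are assumed placeholder values, not numbers taken from the paper.
def ar1_sensitivity(temps, heat_capacity=17.0, f2x=3.7):
    """temps: 1-D array of annual anomalies (K); heat_capacity in W yr m^-2 K^-1."""
    t = np.arange(len(temps))
    detrended = temps - np.polyval(np.polyfit(t, temps, 1), t)
    r1 = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]
    tau = -1.0 / np.log(r1)              # e-folding time in years, assuming AR(1)
    return (tau / heat_capacity) * f2x   # implied warming (K) for doubled CO2
```

The critiques quoted above bite precisely here: if the real system has more than one relevant timescale, the lag-1 autocorrelation mostly reflects the fast mixed-layer response, so the fitted relaxation time, and hence the inferred sensitivity, comes out too low; running the same recipe on a GCM whose true sensitivity is known is the obvious check.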
It’s the Sun (not) (again!):
The solar cyclists are back on the track. And, to nobody’s surprise, Fox News is doing the announcing. The Schwartz paper gets an honorable mention even though a low climate sensitivity makes it even harder to understand how solar cycle forcing can be significant. Combining the two critiques is therefore a little incoherent. No matter!
Dylan says
Does anyone know of a plot over time of total reflected radiation from the Earth back into space (as measured by satellites)?
Presumably this would show a gradual decrease over the last 100 years.
dhogaza says
File this under “the obvious”, but we haven’t had satellites for 100 years…
Dylan says
Sure we have…we’ve had at least one for 4 billion years! But uh, yes, obviously 100 years is asking a tad much. Was meant to be 10 (although 20 or 30 would be better).
bigcitylib says
A bit OT, but here is an honest to gawd email exchange between British Coal’s Richard Courtney and various other deniers on the topic of carbon sequestration:
http://bigcitylib.blogspot.com/2007/08/great-balls-of-dry-ice-deniers-at-play.html#links
Bob B says
Surely the GCM includes parameters that are calibrated against the surface data?
It seems hard to believe that improved surface data wouldn’t, for example, improve our estimates of CO2 sensitivity?
[Response: They are compared against things like the absolute annual mean, the seasonal change and the diurnal range. But these are much better characterised than the trends that are being discussed. And it’s worth bearing in mind that the errors in the GCM range up to a few degrees or so, and so are much larger than differences any of this will make. Therefore the skill scores are not going to be greatly affected and the code will not change. – gavin]
bjc says
#44 Eli:
The problem is that trees are likely to be far more representative of the environment that is being measured, i.e., a given climate in a given region, than are air conditioners unless you are trying to measure the temperature of NYC.
Vernon says
Gavin, thanks again for this discussion. I removed both comments when there was no response to my comment.
[edit for conciseness]
Vern’s 2nd Response:
So you do agree that not meeting the site guidelines will inject 0.8 – 5.4 degrees C of error?
You have not presented any facts to show that the injected error will not cause a bias. Remember, I am not arguing that there is one, only that it is not possible to tell.
Finally, per CRN, Hansen’s 250 rural stations are not hugely oversampled, since CRN is putting out 100 stations to give 5-degree national coverage. Further, CRN states that 300 stations will be needed to reduce climate uncertainty to about 95%.
Well, as to whether I understand what Hansen says he is doing, as to trend vs temperature delta: I could be wrong on this, and if I am, enlighten me, but I understood that Hansen is taking the urban stations, processing them, then the rural stations, processing them for each grid, then on a yearly basis taking the delta between rural and urban for each grid cell, and then using those to derive the UHI off-set. That off-set is then applied to individual stations as part of GISTEMP processing. (I know I am simplifying this, since there is actually urban, semi-urban, and unlighted.) It is not trend vs trend. The only part I really have questions on is which off-set he was doing in what order, or was he doing all variations and then taking the mean?
[edit]
Vern’s 2nd response: Well, you did not disagree with this one the first time around but do now. Fine, please show me your evidence that all 250 stations meet site guides. If not, then I believe that the 0.8 – 5.4 degrees C from failure to meet site standards far exceeds the few hundredths of a degree C over the past 100 years that Hansen assumes. You presented no facts or logic, just made a statement. Please back that up with facts or studies.
[edit]
Vern’s 2nd response: I addressed the UHI off-set above. I apologize if I did not say it clearly enough, but poorly sited rural stations do not give accurate rural data. I did not say light=0 was an urban environment, you just did. I said that light=0 does not give you a good rural environment. CRN says that a good rural site will ‘not be subject to local microclimatic interferences such as might be induced by topography, katabatic flows or wind shadowing, poor solar exposure, the presence of large water bodies not representative of the region, agricultural practices such as irrigation, suspected long-term fire environments, human interferences, or nearby buildings or thermal sinks.’
Then you drag out a red herring. There is proof that the stations are not meeting site guidelines. Hansen said that his work needs accurate data to be correct. It can be shown that that assumption is not supported by the stations. You still have not addressed this.
[edit]
Vern’s 2nd response: Gavin, you do use the surface station data, just not directly. In your Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data (2006) you use the station data to verify that the model is correct: ‘As in the other diagnostics, the differences among the different models are small compared to the offset with observations.’ So you are using the surface station data to make your model better. If the surface station data is wrong, then your model also suffers.
Ok, worthless was a bit much, but if you do not know the errors in the stations, you do not know how much error is being injected into your model, as you have optimized against a ‘real’ base-line that may or may not reflect actual climate change.
[Response: First, I am not disputing that microsite effects can offset temperature readings. However they can offset them in both directions, it will be the net effect that matters. Second, the impact of the microsite effect only enters the trend calculation if it changes, not if it is a constant offset (since only anomalies are used). Thirdly, the GISTEMP analysis has a smoothing radius of 1200 km, this means that for the continental US there is nothing close to 200 degrees of freedom in the regional temperature trends the Hansen papers try to estimate. Eyeballing it you would guess something more like 10 or even less. That is why the regional trends are hugely oversampled. Finally, if you look at the GCM comparisons to the CRU data in the paper you cite, you will notice that the comparison is to the regional absolute temperatures and the seasonal cycle and that local errors can be large. The microsite issues are not going to make any difference to that comparison. Trust me on that. – gavin]
Hank Roberts says
bjc, are you saying the criteria for new stations should _not_ mention shade if there are trees in the area?
Remember, these are not “climate” measuring devices, they’re thermometers for air temperature.
FurryCatHerder says
On the subject of raising the level of New Orleans —
When I was there a year ago we couldn’t figure out why there was so much dirt inside of everything. Then we realized the 1″ thick layer of dirt was silt from the flooding. All that would be needed is another 96 Katrinas and the Lower Ninth Ward will be at sea level.
But seriously, there are parts of the Upper Ninth Ward — Musician’s Village, in particular — that are being built at the proper grade. It was really interesting for me to be in one family’s new house looking down on the houses across the street.
New Orleans needs to be where it is, and it’s not going to move because of storms. Further down river is further out into the Gulf of Mexico. Further up river is swamp. Many parts of the city are several feet below sea level, and raising any neighborhood with fill would require completely rebuilding the infrastructure.
The fault for Katrina lies with multiple people and entities. The city cannot be evacuated simply because there are too few roads out — I-10 to the east and west, the Causeway to the north (the Gulf is to the south — wrong way). There are 6 lanes east, 6 lanes west, 4 lanes north, for a total of 16 lanes. Normal traffic is 2 seconds per vehicle per lane, or 8 vehicles per second. Assume a million people (less than it was) and 4 people per vehicle (there aren’t enough buses, including the 100 or so school buses that weren’t used, to evacuate the city): that’s 250,000 vehicles. That starts to look do-able until you figure out the rest of it, and that’s where it breaks down.
The evacuation for Rita had people stuck on the highway, in traffic jams, when the storm made landfall. Many people who evacuated never made it where they were going — the traffic was so bad that they turned around before getting to where they were driving. I live 200 miles from Houston, and the evacuation of Houston wrecked traffic where I live. The only way to avoid another Katrina is to make cities like New Orleans (and Houston) storm-proof.
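For what it’s worth, the back-of-envelope traffic arithmetic in the comment above can be written out explicitly; a small sketch using only the numbers quoted there (the idealized throughput is the “looks do-able” part, before fuel, weather, and merging are considered):

```python
# Idealized evacuation throughput, using only the figures from the comment above.
lanes_out = 6 + 6 + 4                 # I-10 east, I-10 west, Causeway north
headway_s = 2.0                       # seconds per vehicle per lane
vehicles_per_s = lanes_out / headway_s        # 8 vehicles per second
population = 1_000_000
people_per_vehicle = 4
vehicles = population / people_per_vehicle    # 250,000 vehicles
hours = vehicles / vehicles_per_s / 3600
print(f"{vehicles:.0f} vehicles, ~{hours:.1f} hours at ideal flow")
# -> 250000 vehicles, ~8.7 hours -- the "do-able" figure before real-world breakdown
```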
bjc says
Hank:
I agree that the thermometers are there to measure air temperature and for local weather recording purposes, but the data is also being used to build a measure of climate trends, and these records are being adjusted for a whole range of microsite effects. But not all microsite effects are conceptually equivalent. UHI, anthropogenic heat sources, and asphalt are non-representative of the 5*5 regional climate grids, but trees in a forested region are most definitely part of the climate, as is packed earth/rocks in a desert region, viz. the weather station at UA in Tucson. Difficulties obviously arise where a region has a complex, heterogeneous landscape or a multiplicity of land uses; then we need to ensure an adequate sampling of the differing land uses/landscapes. It strikes me that non-rural sites are almost by definition non-representative and as such should be excluded from the database, not adjusted or corrected. If this is done then UHI becomes a non-issue!
Vernon says
Gavin, thank you for taking the time to answer me; I am enjoying this discussion. I do have a few questions based on your last.
[Response: First, I am not disputing that microsite effects can offset temperature readings. However they can offset them in both directions, it will be the net effect that matters. Second, the impact of the microsite effect only enters the trend calculation if it changes, not if it is a constant offset (since only anomalies are used). Thirdly, the GISTEMP analysis has a smoothing radius of 1200 km, this means that for the continental US there is nothing close to 200 degrees of freedom in the regional temperature trends the Hansen papers try to estimate. Eyeballing it you would guess something more like 10 or even less. That is why the regional trends are hugely oversampled. Finally, if you look at the GCM comparisons to the CRU data in the paper you cite, you will notice that the comparison is to the regional absolute temperatures and the seasonal cycle and that local errors can be large. The microsite issues are not going to make any difference to that comparison. Trust me on that. – gavin]
Vern’s Response:
I am glad that we have reached agreement that microsite effects can offset readings at stations that do not meet the site guidelines for local microclimatic interferences such as might be induced by topography, katabatic flows or wind shadowing, poor solar exposure, the presence of large water bodies not representative of the region, agricultural practices such as irrigation, suspected long-term fire environments, human interferences, or nearby buildings or thermal sinks, which is what surfacestations.org is bringing to light.
However, your second point is not valid for Hansen (2001). No study I have read indicates that the error at a surface station that does not meet site guides will change over time. If there were anything indicating that the error would be constantly changing in a random manner, then I would agree with you, but there is no evidence of that. The effect, I believe, would be consistent until something changed in the environment. It would be wrong, but it would be consistently wrong. This hurts Hansen (2001), since he is doing the temp delta for the grid cell and there are a limited number of lights = 0 stations (~250). The mere fact that it is wrong in a small data pool will have an even larger impact.
I also disagree that there is enough information about the stations to make a case for a binomial distribution. Basically, you’re saying it has an equal chance of being warm or cold, but I have not seen any studies to back up that position. That is why I do not believe the Hansen (2001) UHI off-set is valid until the evidence is collected and further due diligence is accomplished in light of the failure of several of his assumptions.
Your third point about GISTEMP is not quite valid. Why? Because you have already applied Hansen’s UHI off-set to the stations, so no matter how big you make the pool at that point, the data is already tainted. Since you do not know how much, or which ones, there is no statistically valid way to correct for it.
As for your last point, I have to disagree. Why? Because Hansen’s UHI off-set is used. An off-set is applied to individual stations and, at this point, there is no way to know if it is valid. Why is Hansen’s UHI off-set wrong? Because of the microsite issues. I see no way of fixing Hansen’s work without studying the lights = 0 stations to determine what the microsite issues are. Once they are known, he can redo his work and you should get an accurate UHI off-set.
Additionally, even with over-sampling at the global level, there is nothing that indicates that microsite problems are a local (USA only) issue. Without a study that actually does an assessment of individual sites, as time consuming or as hard as it would be, there is no indicator that the microsite issues do not cause bias.
Do you know of such a study?
J.C.H says
They learned a lot from the evacuation of Galveston and Houston, which was a total fiasco that easily could have become a human catastrophe had a Cat 5 plowed directly into the two cities.
In the weeks between the two storms Texans had a field day mocking the incompetence of LA and NOL. They got a well-deserved comeuppance right on the old kisser. They fared no better, and they had a lot of extra time to get ready.
With the storm about 4 days out, a Houston city official pronounced on the news that he knew of no structures in the city of Houston that would survive a Cat 5, and that sent millions of people onto the freeways, which immediately locked up like the worst case of constipation in history. In a few hours there was no gas and no food available along the freeway.
There is no way to storm proof a city. Mother nature is just that darn powerful.
Eli Rabett says
bjc, are you arguing that all the sites should have shade? Or air conditioners (pretty common in the US)? The point is that there is a mix, and the shaded sites will be cooler than the unshaded ones. It really has not been shown how close to an A/C unit the thermal sensor has to be for that to have an effect, or whether the effect would be a step when it was installed, etc. Otherwise, what Gavin said about trends.
Mike Alexander says
I liked Schwartz’s paper. I could actually follow most of it, and his approach is very similar to my own amateur efforts. My results are different; I get the standard result. One source of error is his 20th century temperature increase, for which he uses 0.57 C. If I subtract the 1900 CRU value from the 2000 value I get 0.53, which is close to the value Schwartz uses. But that’s not the right value to use, because the temperature series shows short-term one-quarter-degree fluctuations, so you have to use the *trend* value. For example, in the database I use the temp value in 2000 was +0.277, but the average value over 1995-2006 was +0.377. Similarly, the temp value for 1900 was -0.253, while the average value over 1895-1906 was -0.369. The temperature increase using the single-year points is +0.53, but using the averaged values it’s 0.75. Figure 5 of the linked webpage shows the trend line I constructed using a running centered 20-year linear regression. The change in trend temperature obtained using this measure is +0.78. With this larger delta T the implied forcing is 2.6 watts/m^2 instead of 1.9, which is *larger* than the 2.2 watts/m^2 for the greenhouse effect, implying the sensitivity is greater than 0.3.
Now this is just a plain bonehead error. Another source of error is the deep ocean response. The definition of climate sensitivity is an *equilibrium* response. This means the deep ocean response has to be included. The problem is that the deep ocean response is so slow that it mostly hasn’t taken place over a few decades. In fact you can roughly represent the climate response as a two-phase first-order response: one phase is short and the other is much longer. In this case the *apparent* sensitivity obtained by considering short-term dynamics is depressed by maybe 20% or so.
So using the same approach as Schwartz we have 2.6 (not 1.9) watts of apparent climate forcing, plus the 0.3 watts of aerosol forcing impact he grants, magnified by 1.2 to account for deep ocean effects, to give 3.5 watts of apparent forcing compared to 2.2 watts of actual greenhouse forcing. In other words the climate sensitivity appears to be 3.5/2.2 = 1.6 times larger than the 1.1 C CO2x2 value he favors. The actual value consistent with his own approach is thus about 1.8 C for a CO2 doubling.
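The commenter’s arithmetic can be checked directly; a quick sketch using only the numbers quoted in the comment above (these are the commenter’s figures, not values taken from the Schwartz paper itself, and applying the 1.2 factor to the sum is one reading of the calculation):

```python
# Reproducing the arithmetic in the comment above, using its own numbers.
apparent_forcing = 2.6       # W/m^2, from the trend-based delta-T of ~0.78 C
aerosol_offset = 0.3         # W/m^2, the aerosol impact the commenter grants
deep_ocean_factor = 1.2      # ~20% boost for the slow deep-ocean response
greenhouse_forcing = 2.2     # W/m^2, greenhouse forcing quoted in the comment
schwartz_2xCO2 = 1.1         # deg C, the doubling value the paper favours

total_apparent = (apparent_forcing + aerosol_offset) * deep_ocean_factor
ratio = total_apparent / greenhouse_forcing
print(round(total_apparent, 1), round(ratio, 2), round(schwartz_2xCO2 * ratio, 2))
# -> 3.5, 1.58, ~1.74; the "about 1.8 C" above comes from rounding the factor to 1.6
```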
Hank Roberts says
> If there was anything that indicated that the changes would
> be constantly changing in a random manner, then I would agree
> with you but there is no evidence of that. The effect, I believe,
> would be consistent until something changed in the environment.
Parking lots? Weekdays vs. weekends.
Trash burning? Day of the week
Air conditioner? On/off cycles, building hours, thermostat
Water sprinklers? On/off cycles, drought indexes, time of day
Freeways? time of day, day of week
Peeling paint? day/night
Nesting birds? Springtime ….
Nesting bats? Time of day, season of year ….
Spiderwebs? “… along came the rain, and washed the spider out …”
ray ladbury says
Vernon, do not forget that you are dealing with an oversampled system for the purposes of comparison to a GCM. Moreover, the way to estimate the systematic errors is from the data, not by examining every station down to the last tree or building. This will be true unless every station has a comparable bias, and it is safe to conclude that the amount of oversampling is at least 3:1, so the probability is that no information is lost.
bjc says
Eli:
You really have to give people more credit. All I am saying is that the environment that surrounds the measurement device should be representative of the region the data is meant to represent. How hard is that? The problem with urban settings is that they actually are not representative of very much on an area basis. Clearly we are looking at trends, not absolute measures, but with an adequate number of representative stations there would be no issue about UHI trends. Don’t you agree? The issue would be moot.
Petro says
Vernon reasoned:
“Without a study that actually does and assessment of individual sites, as time consuming or as hard as it would be, there is no indicator that the microsite issues do not cause bias.”
This is plainly false. The temperature records by different actors are not only in agreement with each other regarding the trend in global temperatures, they are also in agreement with other observations indicating global warming. Were your issue to cause significant bias, there would be a discrepancy, and that is not the case.
However, if you have a hunch you could demonstrate otherwise, please carry out the time-consuming and hard assessment yourself and publish the results in a scientific journal. Why harass professionals to do it for you? They are competent enough to find more relevant topics for their research.
Vernon says
Hank, if you have proof of any of those things, please present the evidence. I am going strictly by the errors associated with poor sites. Surfacestations.org is showing bad stations that do not meet the guidelines. What else could be happening, I do not know.
Ray, you’re on the wrong page. This is addressing Hansen (2001), which does not have that amount of oversampling. CRN says that to get 95 percent confidence within CONUS takes 300 stations; Hansen only has ~250 lights = 0.
Also, surfacestations.org is showing that a lot of stations do not meet site guidance. We know that failure to meet site guidance injects 1-5 degrees C of error.
Finally, this is not about getting just the trend. It is about getting the light = 0 temperature for all relevant stations and getting the temp delta with the remaining urban stations.
So there is no way to know what the actual temp delta is, since we do not know the quality of the stations. This is fully addressed by NOAA/CRN in how they are building a quality climate network.
I just do not think we have 30 years to wait on them, so Hansen, if he wants his UHI off-set from (2001), needs to get funding to validate his stations.
Anyway, Ray, you’re going after the wrong thing: for lights = 0 there is no oversampling, and since the goal is to get the actual temp to do a temp delta, it would still be wrong.
Hank Roberts says
> representative
What data set do you rely on, if you want to find a representative half acre, in your neighborhood? Or if you don’t have data, how would you decide?
Rod B says
Furry (59), well put.
Philippe Chantreau says
RE 71: I agree that Furry’s comment is astute. No city in the world is designed to be quickly and efficiently evacuated. However, the reasons why that is may be similar to the reasons why cities are not storm proof either. Cities are accumulations of structures that corresponded to more immediate needs at a given time, without the specific, consistent risk analysis that would place one given priority (i.e. storm resistance or “evacuability”) high on the list. Priorities, risks and benefits translate into actions according to how we perceive them at the moment when we do the analysis. If the focus in the design of a city was that, no matter what, it has to withstand a Cat 5 hurricane, then cities would be mostly compliant with that. If the focus was that, no matter what, you’d have to be able to get 90% of the population out of there in 36 hrs, then they would probably be able to achieve close to that. Of course, there would be much groaning and moaning from anti-regulation groups arguing that, historically, the likelihood of such an occurrence doesn’t deserve the effort and regulatory burden on the city’s overall structure.
The main problem is the objective reality of risk compared to our perception of it at the time when the risk is integrated into a risk/benefit analysis. Imagine that you have to design a 747 and the emphasis imposed by management on your department is marketability and production costs. To achieve that, you route hot compressed air through the center fuel tanks. Then an accident happens involving a fuel/air mixture in the center tanks exceeding a flammability threshold because of, among other things, the extra heat afforded by the hot air ducts. That was deemed unlikely enough at the time of design. Now consider that this led to a catastrophic in-flight explosion and your son or daughter was on that airplane. You would see that risk in a different light (not necessarily better). Risk assessment (perception) and its politics are the main drivers of current climate change policies (or the lack thereof). Yet they are highly subjective areas. There are things even more difficult than building accurate climate models. Strangely enough, humans are both best equipped and worst equipped to accurately perceive objective realities.
Hank Roberts says
Vernon, this is silly. The instrumentation across the world developed over the past century or more, starting with boxes with mercury thermometers and pocket watches and calendars and paper and ink.
Your old car or old house don’t meet contemporary guidelines. Your old education doesn’t. Your old dental work doesn’t. You don’t throw them out, you improve on what’s done now.
The guidelines are for installing new instruments. Once the new network is in place, running in parallel to the old equipment, it allows getting more information out of the old data collection by verifying the existing instruments.
If you want to ruin the ability to know what’s going on — go out there and move instruments around, change their location, change their paint, change their environment, and then declare them now “reliable” — do you understand why that’s foolish?
A consistently biased instrument is as valuable as a perfectly accurate instrument — once you know the bias. You don’t mess with the old gear. You install better gear to the new guidelines, nearby, and run them in parallel.
John Mashey says
re: #59 FCH
I haven’t been to New Orleans for several decades, so hopefully you can offer some more insight. I have a concern that is likely to be shared by many, but which of course, is very difficult to get mentioned by politicians and subject to any reasonable debate. As you note, NOL can’t move either upstream or downstream, but the real question is:
How much will it cost, and who will pay for it, to keep NOL viable in:
2020
2050
2100
2200
Americans live there, and of course NOL is a sentimental favorite, but sooner or later economics matter as well:
a) NOL/LA can afford to spend some money on its own behalf, although LA is a net recipient of federal money: as of 2001, it got ~$8B more than it sent, of which 24% came from CA, 16% from NY, 10% from NJ, and 14% from (CT, WA, CO, NV). LA also got 10% from IL, 5% from TX, 5% from MI, 4% from MN, and 2% from WI, and the latter states would seem to benefit more directly from having LA where it is, although presumably all of us benefit somewhat. The Mississippi River is rather valuable.
b) There are economic benefits to having LA where it is that do not accrue to LA/NOL; I have no idea how LA captures revenue from being where it is, and how close that is to the economic value.
c) The Corps of Engineers spends money to build.
(I.e., this is planned work).
d) Finally, there are potential subsidies from the Federal treasury for:
– Disaster relief & rebuild
– Flood insurance [given the pullback of private insurers]
(i.e., these happen less predictably).
There are clear historical facts (*), and some predictions as in:
http://www.sciencedaily.com/releases/2000/01/000121071306.htm
A)* New Orleans is slowly sinking (3 ft/century).
B)* The Mississippi has been known to flood, although NOL has usually escaped that.
C)* The Mississippi really *wants* not to go through NOL, but down the Atchafalaya channel, as well described in John McPhee’s “The Control of Nature,” bypassing not only NOL but Baton Rouge. It has generally shifted channels every ~1000 years, and last did so around 1000 AD. It would have already shifted except for large and continuing efforts by the Corps of Engineers, starting in 1954 when Congress budgeted money for the Old River control effort.
http://en.wikipedia.org/wiki/Atchafalaya_River
http://www.newyorker.com/archive/1987/02/23/1987_02_23_027_TNY_CARDS_000348555
D) Sea level is expected to rise. Although the following is simplistic, it’s still worth studying: zoom in to LA, set sea-level-rise =0, then +1m, which we probably *won’t* get by 2100, unless these nonlinear melting effects happen.
http://flood.firetree.net/?ll=43.3251,-101.6015&z=13&m=7
Needless to say, don’t go much higher unless you want to get depressed.
E)* NOL certainly gets hit by hurricanes; recall that Katrina actually missed.
F) Temperatures will rise, and (maybe) that will increase the frequency of more intense hurricanes.
Hence, the real policy questions (for which I certainly don’t know the answers):
– how much will it cost to keep NOL viable, and in what form, and for how long? [some of this depends strongly on the actual rate of sea-level rise, a subject of some contention, and one of the reasons it is *very* important to keep refining models and improving their skill as inputs to rational planning.]
– who will pay for it?
– and is the opportunity cost worth it? given $X, is it better to spend the money building big levees around NOL, or to look ahead 50 years, diverting development upstream, and figuring out how to handle the possible jump to the Atchafalaya. Alternatively, if there is $YT available for building levees and dealing with other seacoast issues along the Gulf/Atlantic Coasts during this century, what fraction of it does NOL get? And of course, the world doesn’t end in 2100 either (I hope).
None of this is arguing for “abandon NOL now” as I suspect that is a bad idea, although it seems to approximate what the current administration is doing (without saying so) … but sooner or later, the level, structure, and priority of investment has got to start being debated more publicly.
[Maybe someone knows some good reports.]
Anyway, FCH: you live not too far away, and you’ve been there recently. Any opinions on any of this?
Vernon says
Hank, that is so wrong it is not even funny. The site guidance from NWS/WMO far preceded what CRN is doing, but it was done for the same reason. Your argument does not address the issues and is just misdirection.
[edit – no personal comments].
This is not about a trend, it is about the temp delta. Hansen (2001) is doing off-sets. I will agree that after he has the temp delta he finds the trend, but the trend does not matter till then. He is doing urban – rural = UHI off-set. A consistent bias is going to give bad numbers.
Finally, this is not about how accurate the instrument is, but how accurate the station is. If the station is not sited IAW the guidance, then injecting 1-5 degrees C of error is not going to get you the actual temp delta.
Dan says
re: 73. Indeed it is. The entire surfacestations.org canard has been shown here by numerous comments to be quite unscientific. A very limited sample of station photographs by “volunteers” is not in the least objective or scientific. Yet the Vernons of the world harp on it with little if any basis. More important, despite umpteen mentions, the data are almost trivial with respect to the numerous proxies and other *global* data that show the clear global warming trend. The Vernons of the world keep repeating the “noise” ad nauseam as if it were essential fact, go away for a while, and then come back as if the issue is somehow still essential. There is no excuse for the failure to objectively assess what surfacestations.org has done. When one assesses it that way, it is simply a red herring and nothing more.
bjc says
Hank:
I think you have the logic reversed. The fact that we use surface temperature data to assess climate implicitly assumes that the stations are representative. My point is that besides any flaws in individual sites, the dependence on stations that are close to major population centers (a) is an inadequate sampling and (b) necessarily introduces a potentially confounding UHI trend that cannot be adequately controlled for without sufficiently many more “representative” stations; and if you have a sufficient number of more representative stations, you wouldn’t need the urban stations that have limited representativeness.
Dan (#75):
The effort to build a better climate network is proof positive that the network of existing weather stations is flawed in terms of instrumentation, micro-climate effects and geographic coverage. The Pielke and Watts effort is simply underscoring what Karl et al already know; otherwise why the expensive push for an updated network?
Ray Ladbury says
Vernon, quite frankly, single studies do not interest me. But you need to define your terms: 95% confidence of WHAT? 300 stations measuring WHAT?
In the end, the proof of the pudding is in the eating, and the trends observed by Hansen et al. support those seen in completely independent measurements. And even if there were errors in his analysis, they will not be found or quantified by a bunch of amateurs who don’t understand the science traipsing around through poison ivy and photographing thermometers near barbecue grills. The data will tell us what the biases are.
Vernon says
RE: 75 Dan,
The surfacestations.org census does not have to be scientific; that, sir, is a red herring. All it has to show is that a station does not follow site guides. Pictures from ‘volunteers’ are just as good for this purpose as a professional film crew.
You have not addressed my argument. Gavin agrees that failure to meet station siting guides will inject error and that surfacestations.org is enough to tell if the station is meeting the guide or not.
[Response: I said no such thing. That some microsite issues impart biases in certain conditions does not imply that every so-called issue highlighted in a photograph actually does so. Without more information, you just have insinuation, not science. – gavin]
Once again I will state my argument. I do not doubt global warming, I do doubt the direct instrumented ‘accelerated’ warming.
My hypothesis is that Hansen’s assumptions in his 2001 work for the UHI off-set are not supported by the evidence. I have presented my facts and logic, which have survived discussions with Gavin.
I would be pleased to hear your analysis of the flaws of my argument. But failing to address any of the facts or logic I presented and calling it ‘noise’ is not. Please produce some facts or logic to support your attack.
Finally, Dan, the fact that Hansen’s UHI off-set could be wrong is not trivial. It is applied to every station in GISTEMP as part of the station adjustment process prior to processing. This means a bias is being injected that cannot be corrected, ever, since by definition it is applied globally to the data.
[edit]
[Response: You have a very odd idea about what Hansen et al are doing. They detect a clear UHI-related trend and remove it. In what sense ‘can it not be corrected’? Just take the raw data and do what you want to it yourself. The GISTEMP analysis doesn’t preclude anyone else’s analysis, and if you want to do it differently, go ahead. – gavin ]
Barton Paul Levenson says
Vernon writes:
[[A consistant bias is going to give bad numbers.]]
And to get good numbers out of those bad numbers, you compensate for the bias.
There is no such thing as unbiased data. The fossil record is biased toward creatures with hard parts. The local motions of galaxies and quasars are biased by their red shifts. And temperature stations can be biased in their readings. You don’t throw the data out, you compensate for the biases.
Vernon says
Barton, you’re wrong on two points. I make no assumption that well-sited stations would not have some bias, nor do I make any assumptions about the direction of the bias.
What I do say is that surfacestations.org is showing that a significant number of stations are poorly sited; the impact is to inject 1-5 degrees C of error per station. Since Hansen (2001) is doing a temp delta, getting the rural (light = 0) stations’ temps wrong is a critical failure.
I make no claim on whether Hansen’s UHI off-set is high or low, only that, based on our current understanding, there is no proof that it is right. All Hansen needs to do is eliminate the light = 0 stations that fail to meet site guides and redo the math.
Your ‘to get good numbers out of those bad numbers, you compensate for the bias’ is flatly untrue within this context. The bias is Hansen’s UHI off-set, which is applied to all stations. Please explain how you compensate for this? If I am misunderstanding you, and you’re talking about a bias in the rural stations, please show me how to remove this without knowing which stations and how much. Please remember this is not looking for a trend signal; it is looking for the actual temp from the local rural stations.
Finally, this is a huge red herring. Once again, I am not addressing random bias that may exist in stations that meet site guides. I do not know what it is, and for this argument, I do not care.
So other than dragging some fish around, please point out where my facts and logic are wrong in the context of my argument.
Vernon says
Gavin, once again thank you for taking the time to address my arguments.
Gavin, I believe your statement is just false. Science is based on observation. Are you saying that a picture cannot show whether a station is meeting site guides? That is all I am taking out of this: either the station meets site guidance or not. If it does not, studies show that the site will be off by 1-5 degrees C. Which part of this is wrong?
I have a very clear idea what Hansen (2001) is doing. You accepted that I did back in #57; you did not disagree with me then. I do not disagree that he detects UHI and comes up with an off-set. The problem is with the accuracy. It comes down to: the yearly urban temp – rural temp = UHI off-set. Hansen makes the assumption:
The error in the stations’ siting makes that assumption unsupportable.
Your falling back on ‘if you do not like it, do your own’ is a sad way to do discourse. Maybe you do not mean it like that, but that is the appearance.
What is wrong with my facts or logic? We started having a discussion. I presented facts and logic, you challenged me, I defended. We were coming down to less and less that we disagreed on. Now, I think you do not like my argument, but you are finding less and less you can challenge, and so you do this.
[Response: Your logic is the most faulty. Take the statement above, ‘science is based on observation’ – fine, no-one will disagree. But then you imply that all observations are science. That doesn’t follow at all. Science proceeds by organised observation of the things that are important. You cannot quantify a microsite problem and its impact over time from a photograph. If a site’s photograph is perfect, how long has it been so? If it is not, when did it start? These are almost unanswerable questions, and so this whole photographic approach is unlikely to ever yield a quantitative assessment. Instead, looking at the data, trying to identify jumps, and correcting for them, and in the meantime setting up a reference network that will be free of any biases to compare with, is probably the best that can be done. Oh yes, that’s what they’re doing. – gavin]
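To make the “identify jumps and correct for them” step concrete, here is a toy sketch of the general idea (a hypothetical illustration only, not the actual GISTEMP or USHCN homogenization code): compare a station against the mean of its neighbours, look for a step change in the difference series, and adjust the segment after the break.

```python
import numpy as np

# Toy illustration of homogenization by neighbour comparison (made-up scheme,
# not NOAA's or GISS's actual algorithm): the difference against neighbours
# removes the shared climate signal, so a siting change shows up as a step.
def detect_and_correct_step(station, neighbours_mean):
    diff = station - neighbours_mean
    n = len(diff)
    best_k, best_step = None, 0.0
    for k in range(5, n - 5):                       # candidate break points
        step = abs(diff[k:].mean() - diff[:k].mean())
        if step > best_step:
            best_k, best_step = k, step
    corrected = station.copy()
    # crude threshold; real methods use significance tests over many station pairs
    if best_k is not None and best_step > 2 * diff.std():
        corrected[best_k:] -= diff[best_k:].mean() - diff[:best_k].mean()
    return corrected, best_k, best_step
```

The point of the sketch is simply that a discontinuity is something you can find and quantify in the data themselves, which a photograph taken today cannot do.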
L Miller says
Vernon, are you trying to claim an improperly sited station will be out by 1 deg C one year, 5 deg C the next?
The notion that the station’s temperature readings would jump around like that is absurd. Even if it were to happen, there is still usable data in the signal that can be extracted and used. Random noise, while not desirable, does not remove the underlying trends.
It should be even more obvious that a constant error will not remove the underlying trends either. If a station reads high by 3 deg you can still spot an underlying trend with ease, i.e. if a station is reading 17 deg when it should read 14, and 20 years later it reads 18 deg when it should read 15, you still get a 1 deg temperature increase.
To make a difference in the final calculation of the trend you need a trend in site placement issues. Assuming all site placement issues result in higher temperature readings, progressively worse site placement over time will introduce a false positive trend in the final results. Progressively better practices will introduce a false negative trend into the final results, even though the error in the site is positive.
As far as I can tell all the issues that could introduce a false trend into the final results are being accounted for, but if you have some that are not I’m sure the people here would love to hear them. As I noted above though, none of the issues you have discussed so far seem capable of introducing a false trend. What you have talked about so far is simply noise in the data, which isn’t a good thing but doesn’t render the data useless either.
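A minimal sketch of the point being made here, with made-up numbers rather than real station data: a constant siting offset drops out of the fitted trend, while an offset that drifts over time shows up as a spurious trend.

```python
import numpy as np

# Hypothetical illustration (invented numbers, not station data): a constant
# siting offset leaves the fitted trend unchanged; a drifting offset does not.
years = np.arange(1980, 2008)
true_temp = 14.0 + 0.05 * (years - years[0])        # 0.05 C/yr underlying warming

constant_bias = true_temp + 3.0                      # always reads 3 C too high
drifting_bias = true_temp + np.linspace(0.0, 2.0, len(years))  # growing offset

for label, series in [("true", true_temp),
                      ("constant offset", constant_bias),
                      ("drifting offset", drifting_bias)]:
    slope = np.polyfit(years, series, 1)[0]
    print(f"{label:16s} trend = {slope:.3f} C/yr")
# the constant offset recovers 0.050 C/yr; only the drifting offset inflates it
```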
Vernon says
L Miller, can you be any more wrong? I am not claiming that the errors change at all. In fact, I have stated that I would expect the error from not meeting the site guide to be consistent year by year.
Second, I am talking about Hansen (2001), where he takes the urban temp, subtracts the rural (light = 0) temp on a grid cell basis, and gets a UHI off-set. He does this for every year of the time series. From this he develops his UHI off-set trend.
What I am pointing out is that his assumptions are shown not to meet the facts. That injects error at the temp delta point, which then gets propagated throughout his work.
bjc says
I think the more trenchant issues are how you compensate for “inconsistent” biases and at what point you discard problematic data. What is very problematic is the domination of the current temperature record by urban stations, to the extent that identifying the UHI trend becomes extremely difficult. For example, of the 47 stations included in the GISS data set for Brazil, only one meets the stated criterion for rural stations of having a population of less than 10,000. That one station is on an island in the South Atlantic!
Hank Roberts says
> To make a difference in the final calculation of the trend you need a trend in site placement issues. … Progressively
> better practices will introduce a false negative trend into the final results, even though the error in the site is positive.
That’s a simple clear explanation for why it’s important _not_ to go moving the old stations around to “improve” them.
(Cynically, it might be why people are agitating to do exactly that — to screw up the data by “improving” the stations.)
Instead new stations are put in. That improves the old data by adding more, and more accurate, cross-checks.
Vernon says
Gavin, thank you for your input.
[Response: Your logic is the most faulty. Take the statement above, ’science is based on observation’ – fine, no-one will disagree. But then you imply that all observations are science. That doesn’t follow at all. Science proceeds by organised observation of the things that are important. You cannot quantify a microsite problem and its impact over time from a photograph. If a site’s photograph is perfect, how long has it been so? If it is not, when did it start? These are almost unanswerable questions, and so this whole photographic approach is unlikely to ever yield a quantitative assessment. Instead, looking at the data, trying to identify jumps, and correcting for them, and in the meantime setting up a reference network that will be free of any biases to compare with, is probably the best that can be done. Oh yes, that’s what they’re doing. – gavin]
I really like the way you moved from the specific (my argument) to the general (nothing to do with my argument) and then proceeded to take me to task for something I did not say. I said ‘science is based on observation’ and asked ‘are you saying that a picture cannot show whether a station is meeting site guides?’ You do not seem to disagree with either of these two statements.
I will admit that I have a hard time following your logic, but you’re basically saying that whether the pictures show the station meets site guidance does not matter, because you cannot use them to determine the amount or history of the error. I do not see what that has to do with my argument. My argument is quite simple: either a site is compliant or non-compliant with site guidance. I believe that a picture will show whether the site is compliant or not, which you appear to agree on. If it is not, then I expect, based on the studies, that the error the site will be reporting will be between 1-5 degrees C, but that does not matter. What matters is that the site should not be used by Hansen et al (2001) to determine the UHI off-set.
So I have to ask, what is faulty about my facts or logic?
Here is my argument:
Hansen (2001) states quite plainly that he depends on the accuracy of the station data for the accuracy of his UHI off-set. (You agree with this.)
WMO/NOAA/NWS have siting standards (You agree with this)
Surfacestations.org’s census is showing (based on where they are at now in the census) that a significant number of stations fail to meet WMO/NOAA/NWS standards (You agree with this)
There is no way to determine the accuracy of the station data for stations that do not meet standards. (You agree with this, well actually you seem upset that this is not being provided.)
Hansen uses lights=0 in his 2001 study (You agree with this.)
Due to failure of stations to meet siting standards, lights=0 does not always put the station in an accurate rural environment (You agree with this.)
At this time there is no way to determine the accuracy of Hansen’s UHI off-set (You will not commit to this so where did I get it wrong?)
Any GCM that uses this off-set has no way to determine the accuracy of the product being produced. (You do not agree with this, but since you use the surface station temp as a diagnostic, then it does have an impact.)
Vernon says
Actually Hank, this is being addressed by the NOAA/CRN project: 300 stations at sites that have been extensively studied to ensure there are none of the current problems. It is over at http://ams.confex.com/ams/pdfpapers/71817.pdf
and specifically:
a. will most likely remain in a stable tenure or ownership for at least the next century.
b. are not envisioned as being areas of major development during at least the next century.
c. that will not be subject to local microclimatic interferences ‘such as might be induced by topography, katabatic flows or wind shadowing, poor solar exposure, the presence of large water bodies not representative of the region, agricultural practices such as irrigation, suspected long-term fire environments, human interferences, or nearby buildings or thermal sinks.’
It is being done in two phases. Phase I is 100 stations (about half done) and Phase II is another 200 stations. With the full 300 stations they will reduce climate uncertainty to about 95%.
Once we have this, no more adjusting data.
richard says
82. “Your falling back on ‘if you do not like it, do your own’ is a sad way to do discourse”
Seems to me a number of attempts have been made to address your concerns. You remain unsatisfied with the answers given. The best route for you to take would be to do as has been suggested: take the raw data and carry out your own analysis. Then submit the results to the peer-review process. At that point the discussion could proceed.
Majorajam says
Vernon,
I think what people are trying to tell you here is that you cannot undermine Hansen by innuendo regarding his data set. If it is your assertion that flawed surface stations systemically bias the data such that Hansen’s UHI adjustment is too low, surely this should be easy enough to demonstrate by closer examination (and here I’m thinking scientific experiment using hard data and even a hypothesis involving a physical explanation. Photos, not so much). Another way of saying this is you have to establish the relevance of the objection to the data, (in the context of systemic bias), before the prevalence. You have not pointed to a study that does this, nor produced any analysis of your own. Why not?
Jim Eager says
Re 84 Vernon: “I am not claiming that the errors change at all. In fact, I have stated that I would expect the error from not meeting the site guide to be consistent year by year.”
Which means that the errors can easily be compensated for, something that you have repeatedly been told is done, yet you continue to refuse to believe it.
Petro says
Vernon,
You have been told in plain English several dozen times in this thread alone that the microsite issues you worry about have been addressed and dealt with in a scientific manner.
In formal form, the dialogue goes:
Science: A
Vernon: Not A, since B
Science: A does not follow from B
Vernon: A follows from B
Do you realize which side in this controversy has more evidence for his argument?
Do you realize which side has a burden of proof to collect more evidence to support his argument?
Do you realize that your insistence that leading climate researchers carry out a time-consuming study for your argument is a very arrogant position?
James says
Re #87: […since the pictures will show whether the station meets site guidance does not matter because you cannot use them to determine the amount or history of the error.]
One of many things you’re ignoring here (which no one else seems to have pointed out) is that a picture of a site only shows site conditions at the moment it was taken. There are hundreds if not thousands of sites, many with records going back 50 to 100 years. Say you find a site where your picture shows some factor that might cause a bias: when did that factor come into play? Was it there a century ago, or was it built last week? (And of course the reverse: maybe a site meets your standards today, but how about back in 1943, when it was a busy AAF training facility?)
It seems that what you’re really calling for is a record of what the site was like when first constructed, and through all the years between. Now unless you have a time machine you’re not telling us about, it’s going to be impossible to get such a record, so by your logic we should throw away that century of records, and start fresh, no?
Barton Paul Levenson says
[[So other than dragging some fish around, please point out where my facts and logic is wrong in the context of my argument.]]
Climatologists DO analyze the record of individual stations, and they do it by comparing it to other stations within a wide radius. It is easy to pick out outliers.
The theory of a significant bias in the land surface temperature stations fails empirically because the same trends they show are also shown in other ways:
Sea temperature readings show warming:
Gille, S.T. 2002. “Warming of the Southern Ocean Since the 1950s.” Sci. 295, 1275-1277.
Levitus, S., Antonov, J., Boyer, T.P., and Stephens, C. 2000. “Warming of the World Ocean.” Sci. 287, 2225-2229.
Levitus, S., Antonov J., and Boyer T. 2005. “Warming of the World Ocean, 1955-2003.” Geophys. Res. Lett., 32, L02604.
Are there urban heat islands on the ocean?
The balloon radiosonde record shows warming:
http://members.cox.net/rcoppock/Angell-Balloon.jpg
The satellite temperature records show warming:
http://members.cox.net/rcoppock/UAH-MSU.jpg
Melting sea ice shows warming:
http://nsidc.org/news/press/20050928_trendscontinue.html
http://nsidc.org/data/seaice_index/
Glacier retreat shows warming:
http://nsidc.org/sotc/glacier_balance.html
Boreholes show warming:
http://www.ncdc.noaa.gov/paleo/globalwarming/pollack.html
Rising sea levels show warming:
http://sealevel.colorado.edu/
http://en.wikipedia.org/wiki/Image:Recent_Sea_Level_Rise.png
The theory that land surface temperature readings are biased upward has no validity unless those readings are higher than the readings everyone else is getting. They aren’t. All your theories are worthless if they don’t match the evidence. You can argue till the cows come home about how badly sited the temperature stations are, but if no bias shows up in the data, your arguments are meaningless. You’re developing a theoretical basis to explain a result that doesn’t exist.
Pekka Kostamo says
Vernon repeats in every message in a very propaganda-like manner the same claim “the error the site will be reporting will be between 1-5 degrees C”. I wonder where this comes from?
I could believe in such large impacts of imperfect siting on the daily max temperature on a calm and sunny day. Mixing by wind and/or shadowing by clouds reduces the UHI impacts substantially, both day and night. Most days, in most locations, the UHI is not detectable.
JamesG says
Since we know that the US is over-represented and that leaving the affected stations out won’t affect the global graph, or the US graph overly much, why on earth are you guys still arguing? Do you have to argue everything on principle? Anthony Watts is correct – just accept it. The reason right-wingers have such a strong argument is that the idea that you can correct data for unknown errors (up or down) is so obviously dumb that you look like you don’t care whether the data is accurate or not. Imagine if you said “Well done Anthony, we’ll take out these poor stations when you’ve finished the survey”, and then did the new analysis with no change in the results. You’d pull the teeth of the opposition, save a bit of extra analysis effort, and we could all move on.
Incidentally, are you all still unaware that the administration knew the levees were going to break but didn’t tell anyone? Not only that, but the maintenance engineers had been sent to Iraq. The myth that Katrina resulted from AGW has conveniently obscured this info and let them off the hook. Happy about that, are we? See http://www.gregpalast.com for the truth. Lessons? Sticking to the facts is always the better idea.
Lawrence Brown says
Well, the President of the United States is visiting New Orleans today, the second anniversary of Katrina. Thank God for that! The first time he went he regaled the residents about how he used to sow his wild oats in N.O. during his youth, and how poor Trent Lott lost one of his homes. Hope his script is an improvement this time.
Found this site that gives the data sources for the global maps from GHCN data, and breaks down these sources. It clearly shows the land data sources for Vernon, and anyone else who’s interested. There’s also an input calculator for generating trend maps. http://data.giss.nasa.gov/gistemp/maps/
Using the default data, except changing the type of projection to polar, shows that the poles are far ahead of the rest of the planet and in the most danger from global warming.
Hank Roberts says
JamesG, you haven’t been reading the thread carefully, check your assumptions please.
There’s no list of “affected stations” — there are claims station data should be thrown out based on pictures.
Looking at the data from the station tells you the record, a picture shows what it looks like today.
The report you wish for has been done excluding all urban stations (made no difference).
Error correction is done routinely and tested regularly.
Much of contemporary science seems “obviously dumb” till you study it.
dhogaza says
“Science is dumb, scientists dumber”
That about sums it up, I guess.
Dan says
re: 99. I would add to that slightly to say “We do not understand or want to learn about science and we can not admit when we are wrong. Therefore science is dumb, scientists dumber.” ;-)