A few items of interest this week:
Katrina Report Card:
The National Wildlife Federation (NWF, not to be confused with the ‘National Wrestling Federation’, which has no stated position on the matter) has issued a report card evaluating the U.S. government response in the wake of the Katrina disaster. We’re neither agreeing nor disagreeing with their position, but it should be grist for an interesting discussion.
An Insensitive Climate?:
A paper by Stephen Schwartz of Brookhaven National Laboratory accepted for publication in the AGU Journal of Geophysical Research is already getting quite a bit of attention in the blogosphere. It argues for a CO2-doubling climate sensitivity of about 1 degree C, markedly lower than just about any other published estimate, well below the low end of the range cited by recent scientific assessments (e.g. the IPCC AR4 report) and inconsistent with any number of other estimates. Why are Schwartz’s calculations wrong? The early scientific reviews suggest a couple of reasons: firstly, that modelling the climate as an AR(1) process with a single timescale is an over-simplification; secondly, that a similar analysis in a GCM with a known sensitivity would likely give incorrect results; and finally, that his estimates of the error bars on his calculation are very optimistic. We’ll likely have a more thorough analysis of this soon…
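As background for the discussion below: as we understand it, Schwartz’s approach boils down to estimating a single relaxation timescale from the autocorrelation of the global temperature record, an effective heat capacity from ocean heat content, and taking their ratio as the climate sensitivity parameter. A minimal sketch of that arithmetic, with illustrative placeholder numbers rather than the paper’s exact values:

```python
# Minimal sketch of the single-box energy-balance arithmetic behind a
# Schwartz-style sensitivity estimate. The tau and C values below are
# illustrative placeholders, not the values from the paper.

F_2x = 3.7     # forcing from doubled CO2, W m^-2 (standard approximation)
tau = 5.0      # assumed relaxation time constant from autocorrelation, years
C = 17.0       # assumed effective heat capacity, W yr m^-2 K^-1

lam = C / tau              # effective feedback parameter, W m^-2 K^-1
delta_T_2x = F_2x / lam    # implied equilibrium warming for doubled CO2, K
print(f"Implied sensitivity: {delta_T_2x:.2f} K per CO2 doubling")
```

With a timescale of ~5 years and a heat capacity of ~17 W yr m⁻² K⁻¹ this gives roughly 1 K. The point of contention is whether a single timescale and a single box are adequate, not the arithmetic itself.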
It’s the Sun (not) (again!):
The solar cyclists are back on the track. And, to nobody’s surprise, Fox News is doing the announcing. The Schwartz paper gets an honorable mention even though a low climate sensitivity makes it even harder to understand how solar cycle forcing can be significant. Combining the two critiques is therefore a little incoherent. No matter!
Ray Ladbury says
James G., So… how are we supposed to correct for biases in the data if we don’t analyze the data to look for biases in it? I mean, it is not as if there aren’t checks we could use–time series, comparisons to surrounding stations, filtering, averaging. That is how you find biases, errors, etc.–not by traipsing through the poison ivy and looking for the odd barbie near a thermometer. What is more, there are completely independent trends we can look at to see if they are consistent with what we are seeing in our data. As a scientist who often has to draw conclusions with very limited data, I see a dataset like this and begin to salivate. What is more, I believe the very analysis you ask for has been done (urban stations removed), and no significant difference was seen.
Falafulu Fisi says
firstly, that modelling the climate as an AR(1) process with a single timescale is an over-simplification;
Now, if it is not AR(1), then what order of transfer function would be the right order?
The fact of the matter is that you either predetermine the order in order to be able to build a model, or else use System Identification algorithms (linear or non-linear) to identify the best model order for you if you have no clue as to what order would best model the system.
Here are some points:
If you think that AR(1) is an over-simplification, then you must accept that you have full prior knowledge of the dynamics of the system. However, in reality this is not the case. On the other hand, if you accept that you don’t know a lot about the dynamics of the system, then you must accept that AR(1) is NOT an over-simplification.
BTW, does any member of the RealClimate group have any knowledge of linear and non-linear System Identification and state-space algorithms? The reason I am asking is that it seems you have no clue about the question you have just raised, namely: is AR(1) an over-simplification or not? You can fit pretty much any AR model to any data; the AR model with minimum error can be identified via the AIC (Akaike information criterion). So, pronouncing an AR model of order 1 an over-simplification sounds like a comment from someone who has no clue about the subject.
[Response: AR(1) processes have certain properties. You can examine the data, in particular the auto-correlation structure as in Schwartz’s paper, and examine whether those properties are found. In this case, they are not. Pretty good evidence that you are not dealing with a simple AR(1) process with a single timescale, and therefore that AR(1) is an over-simplification. We’ll put up more details on this soon. – gavin]
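To make the two positions concrete, the sketch below (plain numpy, synthetic data standing in for a detrended temperature series) shows both the single-timescale check Gavin describes and the AIC-based order comparison Falafulu mentions. The autoregressive coefficients 0.5 and 0.98 are arbitrary choices for illustration, not estimates from any climate record.

```python
# Sketch of the two checks at issue here.
# (1) For a genuine AR(1) process the autocorrelation decays as a single
#     exponential, so the e-folding time implied by each lag should be roughly
#     constant; a drift with lag is evidence against a single timescale.
# (2) Model order can be compared with AIC from least-squares AR(p) fits.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
fast, slow = np.zeros(n), np.zeros(n)
for t in range(1, n):
    fast[t] = 0.5 * fast[t - 1] + rng.normal()
    slow[t] = 0.98 * slow[t - 1] + 0.2 * rng.normal()
y = fast + slow   # synthetic series with two timescales

def acf(x, nlags):
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var for k in range(nlags + 1)])

rho = acf(y, 10)
implied_tau = -np.arange(1, 11) / np.log(rho[1:])
# roughly constant for a pure AR(1); drifts upward for this two-timescale series
print("timescale implied at each lag:", np.round(implied_tau, 1))

def ar_aic(x, p):
    """AIC of an AR(p) fit by ordinary least squares."""
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    target = x[p:]
    beta = np.linalg.lstsq(X, target, rcond=None)[0]
    rss = float(np.sum((target - X @ beta) ** 2))
    m = len(target)
    return m * np.log(rss / m) + 2 * p

for p in (1, 2, 3, 5):
    print(f"AR({p}) AIC: {ar_aic(y, p):.1f}")
```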
steven mosher says
RE 82. Vernon. Here is one way to respond to Gavin’s inline.
Gavin inlines:
“Response: Your logic is the most faulty. Take the statement above, ’science is based on observation’ – fine, no-one will disagree. But then you imply that all observations are science. That doesn’t follow at all. Science proceeds by organised observation of the things that are important. You cannot quantify a microsite problem and its impact over time from a photograph. If a site’s photograph is perfect, how long has it been so? If it is not, when did it start? These are almost unanswerable questions, and so this whole photographic approach is unlikely to ever yield a quantitative assessment. Instead, looking at the data, trying to identify jumps, and correcting for them, and in the meantime setting up a reference network that will be free of any biases to compare with, is probably the best that can be done. Oh yes, that’s what they’re doing. – gavin]”
1. Let’s accept the notion that pictures don’t matter.
Hansen uses pictures taken from satellites (night lights) to judge urbanization.
The pictures were taken, best I can figure, in the mid 1990s. So yes, pictures are useless. Nightlights is junk. The pixel extent, as best as I can figure, is 2.7 km. NOW, consider what Gavin would say if the resolution of Anthony’s pictures was 2.7 km per pixel, and IF Anthony used such a crude measure of urbanization.
So what Gavin is telling you is that a picture taken from a satellite in 1995, measuring the intensity of light from a 2.7 km square, TELLS YOU about the urban/peri-urban/rural nature of that site from 1880 to 2006. That’s a smart pixel.
Further, the population figures used by Hansen are decades out of date, circa 1980. AND he does not correct for the known and documented errors in certain sensors, like the H-83.
2. Bias can be identified and corrected by numerical methods. True. However, certain methods are fragile, while others are robust.
Did Peterson or Hansen identify the issues with the Asheville ASOS? NOPE. Why? Peterson’s method is weak. This has been documented. Why does the Asheville ASOS matter?
Well, Gavin mentions the CRN (the REFERENCE network). In the VERY FIRST test where NOAA compared the CRN to the historical network, they found unexpected microsite issues–issues missed by ham-fisted numerical methods.
I’ll let the lead scientist of the CRN explain for Gavin, who seems to be a microclimate sceptic.
“At the Asheville site, the effect of siting difference between the ASOS and CRN led to a ΔTlocal effect of about 0.25 °C, much larger than the ΔTshield effect (about -0.1 °C). This local warming effect, caused by the heat from the airport runway and parking lots next to the ASOS site, was found to be strongly modulated by wind direction, solar radiation, and cloud type and height. Siting effect can vary with different locations and regions as well. This term, undoubtedly, needs to be taken into account in the bias analysis if two instruments of interest are separated by a significant distance.”
[Response: You seem to be continually confused by UHI, which is known to have a significant and largely single-signed trend over time, and microsite issues, which do not. Hansen’s study was to remove UHI effects. If lights=0 in 1995, I think it’s pretty likely that lights=0 in 1880 as well. And once again, I am not a micro-site-effect skeptic, I am a photos-of-stations-today-give-quantitative-and-useful-ways-to-characterise-changes-in-microsite-effects-through-time skeptic. Think about that difference and stop claiming the former. – gavin]
tamino says
Re: #103 (Steven Mosher)
If you’re going to say things like, “Peterson’s method is weak. This has been documented,” then you’d better give a reference or link to exactly where it’s documented. I’m not going to just take your word for it.
Falafulu Fisi says
Barton said…
You don’t throw the data out, you compensate for the biases.
And what numerical methods do you use to compensate for the biases? Just curious to see if you really understand what you’ve just said.
ray ladbury says
#103 Steven Mosher
You are indeed the master of the straw man argument. Actually, you can tell a lot via satellite photos–that’s why DOD uses them. That’s why citrus farmers pay for photos of Brazilian orange groves to help them anticipate orange juice futures. And, yes, where there are no lights at night, you have less urbanization (viz. the difference between N. Korea and S. Korea, or Nigeria and Zaire).
Now, pray, what does a picture of a station tell us about the data? It might make us wonder about a possible bias, but if we’ve looked at the data we’ll know that already. In fact, if we see something suspicious in a photo, what will we do? Go to the data and see if we see anything funny confirmed. You learn a lot more about the data from the data. I’d have thought that would be obvious.
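For readers wondering what “go to the data” looks like in practice, here is one minimal version of the neighbour-comparison check Ray describes. Everything below is synthetic and illustrative; the 1985 step and its 0.4 °C size are made-up stand-ins for a hypothetical microsite change.

```python
# Sketch of a "compare to surrounding stations" check: a microsite problem at
# one station shows up as a shift in the difference between that station's
# anomalies and the mean of its neighbours, even when the regional signal
# itself is noisy. Synthetic data, illustrative only.

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2007)
regional = 0.01 * (years - years[0]) + rng.normal(0, 0.15, len(years))

neighbours = np.array([regional + rng.normal(0, 0.1, len(years)) for _ in range(6)])
station = regional + rng.normal(0, 0.1, len(years))
station[years >= 1985] += 0.4   # hypothetical microsite-induced step in 1985

diff = station - neighbours.mean(axis=0)
before, after = diff[years < 1985], diff[years >= 1985]
print(f"mean difference before 1985: {before.mean():+.2f} C")
print(f"mean difference after  1985: {after.mean():+.2f} C")
# A jump of ~0.4 C in this difference series flags the station for adjustment.
```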
Falafulu Fisi says
ray ladbury said…
You learn a lot more about the data from the data.
In what way, Ray? Look, if you can’t use sophisticated analytical algorithms for analyzing images, then your statement above is meaningless. BTW, please don’t prompt me to start going into the area of computer vision and data mining for image analysis, since it is one of the areas in which I am interested in algorithm development.
Hank Roberts says
> methods …. used to compensate for the biases?
You probably know this is a question people take quite seriously; the methods are being checked and will be improved as more data come in and as newer stations are put in place and accumulate parallel records.
For example http://climatesci.colorado.edu/publications/pdf/R-318.pdf
dhogaza says
Perhaps not the best example, given Colin Powell’s performance at the UN a few years ago.
(don’t get me wrong, Mosher and the rest are full of s…)
JamesG says
Hank and the others: Oddly, you are reading something I didn’t write. No, there is no list yet of affected stations – the survey is still ongoing. Yes, an analysis has been done to exclude urban stations, but the concern is about micro-site effects at rural stations: there may be few of these – I don’t know, but neither does anyone else yet. Correcting for biases is hard work – why bother? Just exclude the suspect stations. It’ll make no difference except it says that you actually care about quality control. Even if you do want to further correct for bias, photos of the stations are rather handy for telling us what biases may exist (heating or cooling), so Watts’s effort is valuable in any event. All I’m saying is stop this petty naysaying of a sensible exercise in QC – it just makes you look dogmatic, biased and silly. For those here who say they are scientists: you should know that the best quality data is the data that doesn’t need any corrections at all. If you salivate at the prospect of correcting data then you should see a shrink. And I didn’t say science or scientists are dumb: I said this particular argument is dumb, and if you’d take your blinkers off you’d see it too.
Ray Ladbury says
Falafulu Fisi, not sure I understand your point, or indeed whether you have one. Images are data. Of course you learn more if you analyze them in detail. My question is what you learn from a snapshot of a site that you couldn’t learn from a detailed examination of the data–not bloody much that I can see. How else do you estimate biases and systematic errors other than by analysis of the data and the models it pertains to? And fascinating as your insights on computer vision and data mining might be, I don’t see how they are pertinent here.
Ray Ladbury says
James G: What you are ignoring is the fact that imperfect data are not bad data. There is still information content, and by correcting that data you can emerge with a better product than you would have if you eliminated it. One way to see this is to look at the likelihood–it always becomes more definite as you add information. Quality control does not consist of eliminating “bad data”, but rather of understanding all your data–its random and systematic errors. By eliminating “bad” data you may be throwing out important information, and it opens up a minefield as to how different researchers define “bad”. Understanding systematic errors, on the other hand, is usually a transparent process. And if you don’t like data analysis, might I suggest a career other than science.
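A toy numerical illustration of the “likelihood becomes more definite” point, under the assumption of independent noise with no uncorrected bias (the 0.5-degree noise level and the sample sizes are arbitrary):

```python
# Even noisy observations narrow the uncertainty on an estimated mean,
# whereas discarding them leaves you with wider error bars. Illustrative only.

import numpy as np

rng = np.random.default_rng(2)
true_value = 0.6
for n in (5, 20, 80, 320):
    obs = true_value + rng.normal(0, 0.5, n)      # noisy but unbiased readings
    est, stderr = obs.mean(), obs.std(ddof=1) / np.sqrt(n)
    print(f"n={n:4d}  estimate={est:+.3f}  std. error={stderr:.3f}")
# The standard error shrinks roughly as 1/sqrt(n): more data, sharper likelihood.
```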
dhogaza says
Because you want as many data points as possible.
This is really naive.
And this is an incredibly offensive thing to say to the scientists involved at NASA (you are saying that since they’ve not all bought Brownie cameras, they don’t care about quality control).
J.C.H says
Inhofe:
http://tinyurl.com/ynoggu
Ray Ladbury says
Re 114.
All I can say is that I wonder what color the sky is on James Inhofe’s planet. Of course, this is a guy who equates winning a prize for research with being paid to endorse a position. Wow!
Hank Roberts says
JamesG wrote, on 30 August 2007 at 5:12 AM:
“Correcting for biases is hard work – why bother?”
“Math is hard.” — Barbie
People pushing the dump-it-all talking point know the upgraded stations are going in, and that their data will allow doing the “hard work” so we get better info from the deep historical archive, after doing comparisons of the old stations running in parallel with the new.
If much of the historical data were thrown out completely, based on these snapshots, rather than having the “hard work” done, it would be far more difficult to reach any conclusion about longterm trends and many more years would have to elapse to have any confidence.
Who profits from that? Where are people getting this PR talking point?
Timothy Chase says
Ray Ladbury (#106) wrote:
I have seen false-color satellite images used to identify minerals by means of their spectra, and, simply by exploiting the opacity of carbon dioxide to infrared (yes, the same kind of observation that lets us see carbon dioxide actually performing its role in the greenhouse effect), we are able to get detailed information on topography and surface altitude for geologic surveys.
Falafulu Fisi says
Ray Ladbury said…
How else do you estimate biases and systematic errors other than by analysis of the data and the models it pertains to?
By training the computer software system that is used for real-time satellite image analysis to automatically distinguish what is called normal behavior from what is called anomalous behavior, and to alert the operators. Any anomalous images that show up or are detected would raise an alert.
So the biases could have been detected much earlier, because biases are anomalous behavior.
I believe that NASA is already doing satellite image and vision data-mining:
NASA DATA MINING REVEALS A NEW HISTORY OF NATURAL DISASTERS
http://www-robotics.jpl.nasa.gov/groups/ComputerVision/
NASA also developed their own freely available vision toolkit.
NASA Vision Workbench toolkit
DH says
Re: 74 John Mashey,
See the NSF independent investigation at:
http://www.ce.berkeley.edu/~new_orleans/
I will try to answer some of your questions, but the cognitive dissonance that one experiences after living in NOLA for a decade, evacuating from Katrina, dealing with FEMA, Louisiana Road Home, insurance companies, changing jobs, etc, has addled my brain.
NOLA is one of the craziest, most mixed-up cities that I have ever lived in. Partly this comes from its history as a port city that has been in the hands of the Spanish, French, English, and US. It received the same immigrants as the industrial northeast, mostly the Irish and Italians. Add to this cultural gumbo the descendants of slaves, free slaves, and Creoles. Diversity makes NOLA unique and gives it strength, but it is also its greatest weakness. The difficulty of coming to a consensus on any decision results in a hodgepodge of action and inaction. Sometimes I wondered how the city survived pre-Katrina. It was a city in decline, with the banking businesses moving to Atlanta, the oil companies moving to Houston, and the port becoming mechanized. This left higher education, tourism and health care as the core of its economy.
When Katrina hit, the weaknesses of the city, state and federal governments were laid bare. The most vulnerable were the poor, since they were the most dependent upon the government for help. The immediate response was one of the most bungled bureaucratic operations in US history. The only thing that did work well was the evacuation: ~1.2 million people left the city and surrounding parishes in around 36 hours. There had been some practice. The evacuation for George in 1998 exposed many of the crucial deficiencies, which were largely corrected for the next evacuation, for Ivan in 2004. Subsequent fine tuning resulted in the orderly evacuation of the people who could evacuate and chose to. The largest, most glaring failure was the evacuation of the people who did not have access to a car. Plans were in the works to use school buses, but these had not been fully implemented and certainly not practiced. Such evacuations take a lot of planning, because when you are under the gun, if you have not planned and practiced ahead of time, it most likely will not get done.
NOLA is now caught between a levee and a wet place.
The anniversary will be marked by politicians making promises and showing their concern. When they leave and the news cycle has reset, NOLA will return to being “the city that care forgot.” The practical issues of recovery will remain. I have not heard of any of the candidates talking about realistic governmental reforms that can deal with this and future disasters. Politicians resist reforming entrenched bureaucracies run by political cronies who are hamstrung by layers of bureaucracies put in place to make sure that the political cronies don’t run off with all of the money. I am old enough to be cynical of both parties because both practice political patronage, the only difference being who receives the patronage. I am young enough to be hopeful that true reform can take place, where competent government can flourish.
[[How much will it cost, and who will pay for it, to keep NOL viable in 2020, 2050, 2100, 2200?]]
Doing the repairs and upgrades needed to protect the city will cost ~$10 billion, including wetland restoration. Will it be done right? Who knows, but at least the citizenry is much better informed. Who will pay? As with most federal projects, ~20% is absorbed by the state. You as a taxpayer will probably pay about as much for it as for Boston’s Big Dig. In general, I would like to see a greater contribution from individual states toward the building of infrastructure.
[[Americans live there, and of course NOL is a sentimental favorite, but sooner or later economics matter as well:
a) NOL/LA can afford to spend some money on its own behalf, although LA is a net recipient of federal money; as of 2001, it got ~$8B more than it sent, of which 24% came from CA, 16% from NY, 10% from NJ, 14% from (CT, WA, CO, NV). LA also got 10% from IL, 5% from TX, 5% from MI, 4% from MN, 2% from WI, and the latter states would seem to benefit more directly from having LA where it is, although presumably all of us benefit somewhat. The Mississippi River is rather valuable.
b) There are economic benefits to having LA where it is that do not accrue to LA/NOL; I have no idea how LA captures revenue from being where it is, and how close that is to the economic value. ]]
A geographer once noted that NOLA is built in a place where no city should have been built, but a city needs to be there, primarily because of the need for a port. Around half of US agricultural exports, as well as other materials, pass through the port. With the growing demand for ethanol, the port is needed to ship it to northern refineries (it cannot be pumped through pipelines). However, the port business is very competitive and not as labor intensive as it was before the introduction of containers (the Warehouse District is mostly condos now). Thus, the flow of wages into the local economy is relatively small.
The other industry that the country relies upon, for better or worse, is the petrochemical industry. Luckily the refineries have the wherewithal to fund their own protection, so the $4/gal gas was short-lived and the pumping of natural gas was not severely disrupted. Again, it takes relatively few people to run these facilities, so there is not as much of a gain as you might think. Because of NOLA’s proximity to the oil platforms and the understandable NIMBY attitude in other states, the refineries aren’t moving any time soon.
One might think that the oil from offshore drilling sites could bring some economic benefit. But the royalties used to go to the federal government, since most platforms are outside the three-mile limit for Louisiana and are under federal jurisdiction. (For onshore public-domain leases, states generally receive 50% of rents, bonuses, and royalties collected.) Louisiana had to lobby to receive funds back to address coastal and wetland degradation. This changed in 2006, when ~27% of the money began bypassing the Fed and going directly to the gulf states, specifically for coastal restoration.
A significant amount of coastal damage was done by the early oil industry, which cut canals in the wetlands and allowed saltwater intrusion. Additionally, the building of levees for flood control prevented the springtime flooding necessary for the constant renewal of “land.” Another pernicious but not well-known villain is an invasive species called nutria. These voracious rodents (think of a water-dwelling guinea pig the size of a small dog with the coloring of a rat) consume large quantities of marshland grasses and wreak havoc (like rabbits in Australia).
[[c) The Corps of engineers spends money to build.
(I.e., this is planned work).]]
Ah, the Army Corps of Engineers (ACOE), one of New Orleans’ favorite villains; along with FEMA, it has supplanted corrupt politicians (we liked them so much, we kept voting them back into office) and crime as coffee-house conversation. I could go on forever about the Corps, but will limit myself to just a few points. For a thorough analysis of the failure of the flood protection system (planning, funding, design, and politics), the NSF report (at http://www.ce.berkeley.edu/~new_orleans/) and articles by Michael Grunwald in the Washington Post provide a clear description of the series of errors that led to the flooding of NOLA and St. Bernard Parish.
When I moved to NOLA, I found residents held an unshakable, bordering on religious, faith in the Corps. They were supposed to always overbuild projects (e.g., dams) and they would keep NOLA safe. When I asked about the Old River Project, they said have no fear, not going to happen (I agree with you that this is a big threat to NOLA and SE LA in general). The floodwall projects surrounding the drainage canals were nearing completion and their thick sturdy appearance belied their vulnerabilities hidden in the sandy and peaty soils beneath.
With regard to the Corps and money, there are a myriad of decision-making and organizational defects that have caused problems over the years. A prominent one is that the Corps’ budget comes from earmarks. I am not sure of the specific mechanism by which the yearly budget is set, but with the NOLA levee system there was always uncertainty from year to year. I didn’t follow it much when I lived there – the politics hid below the surface like most pork-barrel spending. This also leads to fragmentation in planning for the whole Mississippi watershed, from the federal to the local level. There is a movement afoot in Congress to address this, but little headway is being made, mainly due to _____ (fill in your favorite political gripe). It is interesting to note that the presidential budget requests for the levees were the same as the congressional requests until 1993, when the presidential requests became substantially lower than the congressional ones (NSF report, Fig. 13.1). Since this disparity has spanned two administrations with several turnovers in Congress, it appears that NOLA has been a political football for a while.
From a scientist’s perspective, the most egregious errors occurred during the design process. It has been widely reported that the design was defective. “Defective” is not the right word. Prior to beginning construction of the floodwalls, whose failures resulted in the lion’s share of flooding in NOLA, the Corps built a full scale test wall on a levee and soil nearly identical to the conditions found at the sites of the NOLA canals. The design was to take 8 feet of water above the top of the levee with a 1.25 safety factor (i.e., the design should fail at ~10 ft of water). However, the structure failed at 8 ft, primarily due to a mechanism that had not been appreciated before. So far, so good. However, these results were not incorporated in the final design. How that happened is still a mystery. Several red flags were raised along the way by the companies involved in both design and construction, but for some reason this study was not communicated effectively to the engineers.
Such a design flaw could have been tolerated if the safety factor were high enough. But, the Corps “tradition” was to set the safety factor to 1.3 for levees (good enough for protecting mainly agricultural land), and this value was adopted for the floodwalls (not good enough for protecting a major urban center). A safety factor of 2 – 3 would have prevented most of the flooding and subsequent costs for the central portion of NOLA (i.e., west of the Industrial Canal) with all of the hospitals and Superdome.
When I explained the floodwall story to a Russian engineer, he said “We know this well! It’s a Potemkin village!”
[[d) Finally, there are potential subsidies from the Federal treasury for:
– Disaster relief & rebuild
– Flood insurance [given the pullback of private insurers]
(i.e., these happen less predictably).]]
Who is responsible? Upon reading the history, one finds numerous decisions at all levels that led to this disaster. By law, the Corps takes complete responsibility: the subcontracted design and construction firms bear no liability. But, in the ultimate Catch-22, Congress passed a law in 1927 absolving the Corps of liability. Ultimately, no one had responsibility, which I think goes a long way toward explaining things.
Who will pay? All the US citizens will contribute some, but the people affected will bear most of the burden. Perhaps rightly so, but it is tough to blame the victims of a system that was mostly beyond their control.
What of the future? The future climate and its effect on sea level is one parameter that needs to be considered. Is it the doomsday scenario? Or the Pollyannaish “what global warming?” I suspect that it is somewhere close to the middle of the IPCC projections of ~3 mm/yr. Either way, NOLA, like all major coastal cities, will remain vulnerable to hurricanes. This is true even setting aside the hotly debated uptick in severe storms. It doesn’t matter whether it is an active or a quiet season; it only takes one. Remember that NOLA dodged a bullet with Andrew in an otherwise quiet season.
I yearn to return to NOLA and participate in the rebuilding. Who knows what the future has in store.
Eli Rabett says
Re #103: If you actually RTFR you would realize that the picture (it is really a composite of a great number of pictures) is only useful because of the ground-truth measurements backing it up. Since the ground truth had only been done for the continental US, the GISTEMP folk only used the satellite method for the continental US.
This is EXACTLY what people, including me, have been trying to tell Anthony Watts and Co., including Steve Mosher.
Richard Ordway says
re. 114: Yeah, a newspaper is also reporting the many “global-warming-is-false” “peer-review” studies as fact:
…I mean, “if they are ‘peer-reviewed’ then they must be correct, right?
– that global warming is not happening?” (The poor, poor, poor public that has to try to sift through all this ‘seemingly equal’ material as real science gets trashed.)
http://www.hawaiireporter.com/story.aspx?d87f58c3-be16-4959-88e2-906b7c291fd6
Falafulu Fisi says
Ray Ladbury said…
How else do you estimate biases and systematic errors other than by analysis of the data and the models it pertains to?
By training the computer software system that is used for real-time satellite image analysis to automatically distinguish what is called normal behavior from what is called anomalous behavior, and to alert the operators. Any anomalous images that show up or are detected would raise an alert.
So the biases could have been detected much earlier, because biases are anomalous behavior.
I believe that NASA is already doing satellite image and vision data-mining:
“NASA DATA MINING REVEALS A NEW HISTORY OF NATURAL DISASTERS”
http://www.nasa.gov/centers/ames/news/releases/2003/03_51AR.html
NASA also developed their own freely available vision toolkit.
“NASA Vision Workbench toolkit”
http://ti.arc.nasa.gov/visionworkbench/
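Stripped of the image-processing machinery, the anomaly-flagging idea amounts to learning what “normal” looks like and flagging readings that fall far outside it. A very reduced sketch on scalar readings (all numbers invented; a real image-based system would be far more involved):

```python
# Characterise "normal" behaviour from a reference sample, then flag new
# observations whose z-score exceeds a threshold. Illustrative only.

import numpy as np

rng = np.random.default_rng(7)
normal_sample = rng.normal(15.0, 1.0, 500)       # training data: "normal" readings
mu, sigma = normal_sample.mean(), normal_sample.std()

new_readings = np.array([14.8, 15.3, 19.6, 15.1, 11.2])
z = (new_readings - mu) / sigma
for value, score in zip(new_readings, z):
    flag = "ANOMALY" if abs(score) > 3 else "ok"
    print(f"{value:5.1f}  z={score:+.1f}  {flag}")
```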
Falafulu Fisi says
To the moderator: it is appalling that messages are delayed by more than 24 hours. I have posted a few and they don’t appear at all. If you want live debate, then allow messages to appear immediately.
[Response: Sorry, but moderation is essential to prevent spam and keep threads from being filled more than they are already with garbage. This is a managed space, not a free for all, and reflective comments are more welcome than knee-jerk responses. If you don’t like it, there are plenty of free-fire forums elsewhere. – gavin]
Dodo says
Re 121. Dear Falafulu. You should understand that the tone of your messages may influence their eventual fate. I have sometimes posed impolite questions to the RC group, and noticed that such are not tolerated.
Like what? Well, I wanted to know how a collective opinion is formed at RC when a post is signed as “group”. This was not answered, probably because I implied that “groupthink” is involved.
Hank Roberts says
FF, make sure you click “reload” after opening a page.
(Gavin, it may be worth checking the “Recent Comments” code — that for sure is lagging way behind, and using it to navigate seems to bring up an old (cached?) version of the thread.)
Just now, I clicked on the last comment shown in this thread from the right hand “Recent Comments” list, and it took me to what’s now #115. It looked like that was really the last comment.
I clicked “Reload” and it now is showing me the thread up to #121.
Probably the “Recent” list isn’t updating, but I don’t understand why it would take me to a copy of the page that doesn’t go on from the comment that “Recent” led me to — but to a copy that ends there.
That may well be the cache in my browser not checking or not getting an update properly.
To paraphrase Mr. Reagan: “Trust, but reload.” — Me.
[Response: The cache should reset whenever a new comment is posted which means that the recent comments should also update. I’ll try and find time to investigate…. – gavin]
Lawrence Brown says
In comment 114, JCH refers us to a site about Senator Inhofe! If he’s a credible reference, we’re all doomed. I believe he once said that global warming was the ‘biggest hoax ever perpetrated on the American people’! Where did Inhofe get his climatology degree, Walmart?!
steven mosher says
Gavin inlines.
“[Response: You seem to be continually confused by UHI which is known to have a significant and largely single-signed trend over time and microsite issues which do not.”
Actually, the consensus of peer-reviewed studies on microsite issues shows the exact opposite of what you claim. You should RTFM, but you won’t. Pielke’s study, Oke’s study, Gallo’s study… Very simply, UHI causes can also be seen at the microsite level. The causes are the same; they are scale invariant.
I will detail them for you.
1. Canyon effects: these happen at the “urban level” and the microsite level. Think multipath radiation.
2. Sky view impairment: happens at the urban level and the microsite level.
3. Wind shelter: happens at both scales.
1-3: it’s just geometry, Gavin.
4. Evapotranspiration: asphalt doesn’t breathe like soil, whether it’s 10 miles of asphalt or 10 feet.
5. Artificial heating: essentially a density question. Same cause, different scales.
The sign of the bias for UHI and for microsite effects is the same because the causes are the same. Essentially, UHI is recapitulated on a smaller scale. My prior is that the sign of the effects will likewise be the same. Fortunately the published science backs me up and not you.
Bottom line: the distribution of microsite bias has been demonstrated to be a warming bias, just like UHI. Why? Because the causes are the same. Geometry effects (canyon, sky view, shelter) and material effects (evapotranspiration and artificial heating) are the same whether the scale is 10 meters or 10 km.
I do not see why you deny the published science on this and pretend that there is any study showing the opposite.
“Hansen’s study was to remove UHI effects. If lights=0 in 1995, I think it’s pretty likely that lights=0 in 1880 as well. ”
Really? That’s not the issue. The issues are these:
1. It’s light intensity, as Hansen notes in 2001.
2. If lights = 0 in 1995, is lights = 0 in 2007? (Hansen questions this himself.)
3. How was lights verified? (taps foot)
4. Lights accuracy is 2.7 km. Given the error in site locations, how accurate is lights?
5. Here’s a fun test: population of Marysville in the late 1800s versus the early 1900s?
That’s a stupid question. Gold-rush stupid. I would not assume that rural stays rural or urban stays urban.
I would not rely on 1980 population figures or 1995 satellite photos. Hansen does.
You are smarter than that, Gavin.
But if we want to speculate, we might say this: a site that hasn’t moved in 100 years, that is photo-documented today as being rural, and that is photo-documented as being in compliance with CRN guidelines, should be trusted over a site located in a parking lot.
Please say no.
“And once again, I am not a micro-site-effect skeptic, I am a photos-of-stations-today-give-quantitative-and-useful-ways-to-characterise-changes-in-microsite-effects-through-time skeptic. Think about that difference and stop claiming the former. – gavin]”
Well, since you ignore the science of the matter, I have little choice. Go ahead and cite the study that shows that microsite contamination has a mean of zero. Taps foot… Further, there is more to the surface station records than photos. For example, they verify the lat/lon; in some cases the lat/lon is off by miles (think nightlights pixels, Gavin). They also verify the elevation; elevation changes have gone unrecorded (think lapse rate, Gavin). They verify the instruments. Did you know that almost 5% of the stations have used a sensor that measures 0.6 C high? And that this flaw has been documented? And not corrected?
Another way to look at this is to say: if microsite contamination is zero, then the CRN is useless. BUT you have promoted the CRN here. Why? Isn’t the USHCN good enough? If the microsite bias is normally distributed with mean = 0, then why fuss? Why fuss? Why?
Here is my thought. I give Gavin the benefit of the doubt. DESPITE the priors, despite all the studies showing that microsite contamination is warming-biased (just like UHI, because the CAUSES are the same), I think the right thing to do is to use CRN guidelines. A microsite issue might cool a site or warm a site, so LET’S AGREE TO USE SITES that don’t have obvious issues. PSST… Gavin, you already conceded this a long while back, so watch the thin ice.
SO, we agree. Use good sites. Stop the bickering. CRN is good. “Parts” of the USHCN are good. Pick good sites. Good buoys, good satellites, good buckets, good tree rings, good samples. Get this whole instrument, data, and sampling issue behind us. Agreed? Good.
Now: how many sites for the US?
1. As many as South America?
2. As many as Africa?
You pick the land mass, Gavin. I’ll count the stations that Hansen uses for that land mass, AND THEN we will pick the same number for the US. And we will pick stations that meet the goodness guidelines of the CRN.
OK?
[Response: Despite your claims, there is a fundamental difference between UHI and microsite effects. In two nearby stations within a city, UHI effects will be consistent. Microsite effects will not be. Even in general, there is no reason to expect microsite effects at any two sites to be correlated in time. Thus microsite-related jumps are detectable in the records if they are significant. I said weeks ago that you should do an average of the stations that passed your criteria and see whether that gives a different answer; go ahead. That is science – test your assumption that this matters. Stop dicking about playing word games here and do some work. – gavin]
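For anyone tempted to take Gavin up on this, the test is straightforward to set up. The sketch below uses synthetic anomalies and a made-up 50% “compliant” flag purely to show the mechanics; with real data the station series would come from the USHCN archive and the flags from the photographic survey’s siting ratings.

```python
# Average the anomalies of the stations that pass the siting criteria, average
# all stations, and compare the two trends. Synthetic data, illustrative only.

import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1950, 2007)
n_stations = 200
signal = 0.015 * (years - years[0])                      # common trend, C
anoms = signal + rng.normal(0, 0.3, (n_stations, len(years)))
compliant = rng.random(n_stations) < 0.5                 # hypothetical 50% "good"

def trend(series):
    return np.polyfit(years, series, 1)[0] * 10           # C per decade

print(f"all stations:     {trend(anoms.mean(axis=0)):.3f} C/decade")
print(f"compliant subset: {trend(anoms[compliant].mean(axis=0)):.3f} C/decade")
# If microsite problems matter for the trend, the two numbers should differ by
# more than the sampling noise; if not, they will agree closely.
```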
Hank Roberts says
Nice tidbit from Sigma Xi, full article here:
http://www.americanscientist.org/template/AssetDetail/assetid/55905
“Revolutionary Minds
“Thomas Jefferson and James Madison …. recorded temperatures twice daily, at dawn and 4:00 p.m. Initially, they followed the guidelines of the Royal Society and placed their instruments in an unheated room on the north side of their homes. Madison, however, having noted ice outside when the temperature inside remained above freezing, moved his thermometer to the porch on February 10, 1787. Jefferson didn’t follow suit until 1803, but the record the two generated thereafter closely matches more-recent measurements at the Charlottesville, Virginia, area weather stations located between their plantations. …”
Comment: I hope someone will go out and photograph the Charlottesville, Va. area weather stations, and dig the old temperature records out of the archive. Since we know the date when the first thermometer was moved from the British standard location to the revolutionary new outdoor location, and the year when the second one was moved, it will be possible to look for the jump in the measurements when the move happened, to check for any bias in the measurements, and probably to correct the older measurements using the contemporary data set.
Nice work, Founders.
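Since the relocation dates are known, the adjustment Hank describes is about the simplest homogenization problem there is: difference each segment against a reference record covering the same years (e.g. the other plantation’s series during the overlap) and shift the older segment by the estimated step. A sketch with synthetic series; the 1.8-degree indoor bias and 40-year overlap are invented for illustration.

```python
# Estimate the indoor-vs-porch offset around a known relocation date and
# homogenise the earlier segment. All series are synthetic stand-ins.

import numpy as np

rng = np.random.default_rng(4)
n = 40                                   # years of overlapping record
reference = rng.normal(12.0, 0.6, n)     # a nearby record covering the same years
move = 15                                # index of the known relocation date

record = reference + rng.normal(0, 0.3, n)
record[:move] += 1.8                     # hypothetical warm bias of the old indoor siting

offset = (record[:move] - reference[:move]).mean() - \
         (record[move:] - reference[move:]).mean()
adjusted = record.copy()
adjusted[:move] -= offset                # shift the pre-move segment
print(f"estimated step at the move: {offset:+.2f} C")
```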
Hank Roberts says
http://www.americanscientist.org/content/AMSCI/AMSCI/Image/FullImage_200783141738_846.jpg
Timothy Chase says
Regarding microsite issues…
Does it really make sense to say that the bias will necessarily always be positive? Or even that on the whole it will necessarily be positive? I would think that it is the average temperature which we wish to measure, and if it is the average, then for any positive deviation from that average there must exist a negative deviation of the same magnitude somewhere – at least among potential sites with microsite issues or potential microsite issues. As such, it would seem that on the whole potential microsite issues must be neutral – simply as a result of what we mean by “average.”
Or am I missing something here?
John Mashey says
re: #118 DH
Thanks, that was very useful: I was hoping someone who really knew NOLA well could comment. The reference looks interesting.
dhogaza says
Of course not. Wee-willie-wick waving isn’t sufficient, these people need to get out and gather some real data if they want to make a case.
JamesG says
Ray Ladbury, Hank and others: You assume that suspect data must be included in the analysis as if there were a dearth of data, but in the US there isn’t, and it makes little sense for the US to be oversampled relative to the rest of the world. Strictly speaking we probably just need enough good sites to calibrate the satellite. Urban sites are (I trust) already excluded, so rejection of suspect sites is hardly a radical new idea. And no, rejecting sites is not the same as dumping data – it is still available. Watts’s current finding is that around 50% of sites are affected, which still leaves a lot of compliant sites and is still far more representation than in other parts of the world. If a site is compliant now, then it is also quite a good assumption that it was compliant in the past. Another odd assumption here is that software which failed to pick up a 2-degree step change in 2000 will manage to identify less obvious micro-site effects. Who’s being naive? Wouldn’t it be better if we just used sites that needed no adjustments whatsoever? Just maybe it’s possible – we’ll see. Can anyone argue with that being the best data? Don’t you think that would pull the teeth from any counter-arguments? Regarding QC and the possible lack of it, don’t you think that cat is already out of the bag? And stop telling me what science is or isn’t! It used to be about experimentation and finding the truth from disparate sources of information – photos too (yes, just like in CSI); data adjustment is an unpleasant side issue that is often necessary but should be avoided if possible. I strongly suspect that with less easy access to satellites a photo survey like this would have been done in the first place, or at least someone might have picked up the phone and asked the observers whether a site was rural and compliant. So much cheaper!
JamesG says
Timothy Chase: Yes, you are missing the fact that errors don’t necessarily cancel; they often compound – it is really situation-dependent. However, I think that if the results were plotted, good results would be indicated by a Gaussian distribution, while a bias would likely show up as an obviously skewed distribution. This was my usual practice in data gathering. I once identified a faulty contact from a double-headed skew. Additional data may not actually be necessary.
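The histogram check JamesG describes is easy to make concrete: collect each station’s offset relative to its neighbours and look at the shape of that distribution. A sketch with invented offsets; the 25% contaminated fraction and the 0.5-degree shift are arbitrary choices for illustration.

```python
# Random siting errors give something roughly symmetric around zero; a shared
# warm (or cool) contamination shows up as a shifted or skewed histogram.

import numpy as np

rng = np.random.default_rng(5)
clean = rng.normal(0.0, 0.2, 400)                          # symmetric siting noise
contaminated = np.concatenate([rng.normal(0.0, 0.2, 300),
                               rng.normal(0.5, 0.2, 100)]) # 25% warm-biased sites

def skew(x):
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

for name, x in (("clean", clean), ("contaminated", contaminated)):
    print(f"{name:13s} mean={x.mean():+.2f}  skewness={skew(x):+.2f}")
```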
Ray Ladbury says
OK, let’s look at the question of bias. How might it creep in? Well, it would have to be some source of noise that we weren’t taking into account. To be of serious concern, it would have to be monotonically increasing or decreasing and always positive or negative. Steve Mosher’s comments notwithstanding (and they don’t withstand anything resembling serious scrutiny) nobody I know of has described such a source of noise.
If such a bias were present, how would we find it? 3 possible ways:
1) With no knowledge of its cause, characteristics or severity, we could start inspecting every weather station and HOPE we find something.
2) We could attempt to look for some sign of such a bias in the data–after all, it is bound to have characteristics of its own that deviate from our expected signal.
3) We could look at independent sources of data that pertain to our signal and see if the trends are consistent.
2 and 3 have already been done. 1 is probably a hopeless quest unless supplemented by 2 and 3. So without an indication from 2 and 3 that there is any sort of issue, 1 is a chimera.
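As an illustration of option 3, here is the bare-bones version of a trend-consistency check: fit a trend with its standard error to each of two independent records of the same quantity and ask whether they overlap. The two series below are synthetic stand-ins (the noise levels and the 0.017 C/yr trend are invented), not actual surface or satellite data.

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1979, 2007)
truth = 0.017 * (years - years[0])
rec_a = truth + rng.normal(0, 0.12, len(years))   # e.g. a surface-style record
rec_b = truth + rng.normal(0, 0.18, len(years))   # e.g. a satellite-style record

def trend_with_error(t, y):
    """Least-squares trend and its standard error, in C per decade."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(resid.var(ddof=2) / np.sum((t - t.mean()) ** 2))
    return slope * 10, se * 10

for name, y in (("record A", rec_a), ("record B", rec_b)):
    tr, se = trend_with_error(years, y)
    print(f"{name}: {tr:+.3f} +/- {se:.3f} C/decade")
# A systematic bias confined to one record would pull its trend outside the
# other's error bars; consistent trends argue against such a bias.
```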
Ray Ladbury says
JamesG, no, it is not that there is a dearth of data. Rather, it is that excluding data without a really, really good reason (and “it’s got errors” is not a good reason) can introduce biases into your analysis. It is much better to understand your data, with all the errors that contribute to it.
Moreover, changing the way the data are analyzed now would make comparisons to past behavior much more difficult. It raises questions of not just whether but exactly how you must go back and reanalyze past data, and any reanalysis can itself introduce new biases and errors.
An example: In the 1980s, the US changed the way it calculates its unemployment rate. The changes seemed reasonable–to be included in the numerator, a person had to be “actively seeking work”; and the armed forces were included in the denominator. However, the changes introduced a downward bias in the unemployment rate that has made it very difficult to judge how we compare to past epochs or to other countries. Similarly, the changes in the way the inflation rate has been calculated since the ’90s have produced an inflation rate that is artificially low and very hard to compare to past inflation rates, despite the fact that the changes made were economically sensible. The system ain’t broke. Keep your freakin’ hands off of it.
richard says
133: “Watt’s current findings are that around 50% of sites are affected”
By that, I suppose you are suggesting that the data from those sites are ‘bad’. Wouldn’t you have to demonstrate that by first examining the data from that site and comparing it to nearby sites, then showing that trends were different? “Guilt by photograph” would not seem to be sufficient at this stage.
dean says
The Katrina report shouldn’t be here. It’s a purely political issue and political discussions have been repeatedly removed in an effort to keep things on topic.
Hank Roberts says
> 133: “Watt’s current findings are that around 50% of sites are affected”
You’d have to mean he’s designed an experiment, chosen a statistical test, then had raters sort the stations into good and bad by looking at the photographs (and shown the agreement between raters is consistent and repeatable, of course) first, then had someone compare the data from the stations on the resulting two lists (without knowing which list is supposed be the “good” and “bad” list of course) and shown a meaningful difference between them.
Where would this be published? Please don’t say “Energy and Environment” …
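One piece of the experimental design Hank sketches — showing that independent raters classify the stations consistently before anyone looks at the temperature data — can be quantified with a chance-corrected agreement statistic such as Cohen’s kappa. The ratings below are made up solely to show the calculation.

```python
# Two raters classify the same stations as compliant (1) or not (0) from the
# photographs; Cohen's kappa measures agreement beyond chance. Invented data.

import numpy as np

rater_1 = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1])
rater_2 = np.array([1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1])

p_observed = np.mean(rater_1 == rater_2)
p1, p2 = rater_1.mean(), rater_2.mean()
p_chance = p1 * p2 + (1 - p1) * (1 - p2)
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"observed agreement: {p_observed:.2f}  Cohen's kappa: {kappa:.2f}")
```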
Timothy Chase says
JamesG (#134) wrote:
No, errors do not necessarily cancel.
But unless you are speaking of the actual installation itself – that is, if what you are speaking of is location – microsite issues will tend to cancel, as the potential microsites with a positive bias must be balanced by those with a negative bias, and the potential microsite issues which introduce a positive bias must be balanced by those which introduce a negative one. This follows simply from what we mean by an average and from the fact that what we mean by “bias” is deviation from the average.
It would only be possible to claim that all biases are positive if one redefined bias with respect to the minimum rather than the average. But this is obviously not what we mean by “bias.” As such, the claim that microsite issues “recapitulate” urban heat island effects…
… is entirely nonsensical.
And that was my point. With regard to this, the actual shape of the distribution is irrelevant.
It is possible that there are a great many more potential microsite issues which introduce a positive bias, but only if they tend to be much smaller than those that introduce a negative bias. It is also possible that the potential positive biases tend to be larger, but only if they are fewer in number. It is possible, of course, that the biased sites tend to have larger biases; I believe this is the skewed distribution which you are suggesting. But this would be possible only if the positively biased sites tend to be fewer in number.
Now it is possible that the actual locations of the sites are systematically biased. But this would have to be the subject of rigorous empirical studies. One might argue that the actual construction of the sites tends to result in a positive bias. But this would have to be similarly demonstrated. And it is worth noting that the most rigorous study performed to date of urban heat island effects has shown them to be rather negligible – as a result of cool park island effects. Likewise, it is worth noting that the trends produced by the full set of stations essentially parallel those produced by the rural subset, even in terms of the detailed shape of the curves.
And it would seem that the only way that one could “reasonably” argue that a bias is introduced into the system that grows worse over time in a way that will produce a measured trend that increases linearly over time where no actual trend exists is by assuming that it is the result of a deliberate distortion involving a great many people – including those in a great many other countries.
But even then it would have to find its way into satellite measurements, sea surface temperature measurements, ocean measurements at different depths, and borehole measurements. It would leave unexplained the rising temperatures of the lower troposphere – which nearly match the surface station trend, although with greater variability. Then there is the expansion of the Hadley cells and the subtropical regions to the north of them, the rise in the tropopause, the rise in sea level from both melting and the thermal expansion of the ocean, the predicted cooling of the stratosphere, the radiation imbalance at the top of the atmosphere, etc.
It would also leave unexplained our observations of what is happening to the cryosphere, including the rather dramatic decline in Arctic sea ice since 1958 – which this year alone has seen an area minimum 25% lower than the previous minimum. It would leave unexplained the declining mass balance of the glaciers, Greenland and Antarctica, the acceleration of the glaciers along the West Antarctic Peninsula, and the increase in the number and severity of icequakes in Greenland.
Any claims with respect to the actual shape of the distribution other than those which necessarily follow from what we mean by average would have to be made on the basis of empirical studies. No doubt you would not claim that the trends we see are merely the results of distortions in how we measure temperatures, but there are many climate skeptics who still do – and there is a vast body of evidence weighing against them.
No doubt there is some distortion in temperature measurements. But any claims with respect to the nature of this distortion would have to be the subject of rigorous empirical studies, and to claim that it is especially significant would be difficult, given the vast body of literature which suggests otherwise. Similarly, at this time, there is a far larger body of evidence beyond the surface station literature suggesting that the rise in global temperature, and climate change itself, is quite significant.
JamesG says
Ray:
Since the photos tell us where to look, you don’t need to look at every site. I have said already that I don’t think it makes much of a difference, but it is good QC and good PR. We disagree on what constitutes good data, but what biases do you think would be introduced by excluding suspect data – a cooling bias? By using standards-compliant sites? That implies that you think the data indeed have an abnormal warming bias. IMO a small amount of trustworthy data is far better than a lot of poor, corrected data: an opinion forged from bitter experience.
Richard: It’s no secret that micro-site effects affect temperatures – that’s why there are standards in the first place. Personally I’d exclude the sites because I just prefer “clean” data; others would analyse them. OK, fine, but the photos tell you where to start looking. Others just want to ignore the issue, which is not good politics.
Hank: He will publish it somewhere – where doesn’t matter as it’ll be the same text. Good and bad are already clearly defined according to official standards.
Sean O says
I agree with James G above: if a site is audited and found to be bad, throw it out. I would extend that to say that all of its data should be thrown out as well and the funding for that station should be dropped (why would we fund a station that doesn’t live up to standards?). Of course, that opens up the conversation about appropriate uses of government treasure, which is really what the NO conversation in this meandering thread is all about.
I question Ray L’s analysis (135) as I understand it. You seem to imply that regular physical auditing doesn’t happen and wouldn’t matter. Is this true? I haven’t dug into the standards, so I am simply asking the question: aren’t all of the sites regularly inspected and audited? If not, then I am shocked that anyone would believe any number the stations produce. If they are audited, then why is Watts finding that 50% of them are bad (is that number confirmed? a quick internet search couldn’t find the source even though it is oft repeated)? Once again, this probably goes to the question of appropriate uses of government treasure.
If the data are oversampled, then throwing out bad data is much better than trying to understand the individual errors and mathematically correct for them. Any mathematical “correction” is simply an approximation anyway, and the resulting data have larger error bars than necessary. The entire data sampling program already has enough error variation in it – no need to introduce more just because one thinks one knows how to mathematically correct outliers.
Gavin/Vernon – you guys are talking past each other and not to each other. It is a shame because you are both intelligent and well educated. Sitting on the outside and watching it is almost comical. I think that if you would sit down for a beer, you would likely agree on more than you disagree with.
I think that everyone gets that GCMs don’t use surface data (esp. after Gavin has stated it at least half a dozen times above). I would argue that this is a problem. I know that the computer technology of 20 years ago might have been unable to handle that level of modeling, but that is likely NOT the case today. I have stated here, as well as on my own global warming site http://www.globalwarming-factorfiction.com, that we need to invest much more effort in developing these types of models to really understand what is going on and how to make significant impacts. Oops – now I am back to the appropriate use of government treasure again. Hate how that happens.
L Miller says
Sean, the problem is what constitutes a bad site.
If a site reads higher temperatures than it should, but is consistent, so that it accurately reflects changes in temperature, is it a bad site?
If a site reads temperatures accurately today but didn’t do so in the past and therefore does not report the temperature trends properly is it a bad site?
The technique of photographing sites to determine if they meet placement standards assumes the former is a bad site and the latter is a good site. For studying temperature trends, however, the first site is useful and the second is not. It’s all about what you are going to use the data for.
Barton Paul Levenson says
[[ Wouldn’t it be better if we just used sites that needed no adjustments whatsoever?]]
No, it wouldn’t, because we’d have a much lower sample size and less small-scale resolution. You don’t throw out biased data, you correct for the biases.
Look, there is no empirical data that is free from all biases. The fossil record is biased against preserving creatures without hard parts. The local motions of galaxies and quasars are biased by their cosmological red shifts. You don’t throw out biased data, you correct for the biases. That’s not just true in climatology, it’s true everywhere in science.
Hank Roberts says
JamesG writes: “He will publish it somewhere – where doesn’t matter as it’ll be the same text.”
None of this silly science stuff needed, then, eh? No peer review needed?
ray ladbury says
James G., Excluding data for no good reason (and if you can correct for the biases, you have no good reason) is bad science. Should we ignore the sample standard deviation because it provides a biased estimate of the population standard deviation? No, because we understand the reasons why, and we know how to correct for it. Excluding data can exclude science. Moreover, if a site provides data of such low quality that it contains no information, then it can and will be detected and effectively excluded in the analysis. When it comes to a choice between good science and PR, PR will have to take a back seat.
Lawrence Brown says
Dean in #138 says: “The Katrina report shouldn’t be here. It’s a purely political issue……..”
Well, I’m not so sure. Two very powerful storms (Katrina and Rita) making landfall within a month of each other in 2005 could be a portent of things to come if surface ocean waters continue to warm, and all indications are that they will. These hurricanes can have immediate, tragic consequences for life and property, and we need leaders who are aware of this. In our system our leaders are political. If a dictatorship were as vulnerable to hurricanes as we are in the U.S., I wouldn’t think of the problem as a dictatorial issue. These extreme events cross a number of disciplines, climate change included.
Steve Bloom says
Steve McIntyre, Lex Luthor, and the other ringleaders of this little audit charade are well aware that the huge over-sampling of the continental U.S., plus the fact that it shows significantly less trend than the globe as a whole, will mean little if any adjustment in the end. What they are really about is getting acceptance of the “stations that don’t meet standards should have all their data thrown out” meme so that it can be used to attack the global land data. Consistent with this goal, they will focus on identifying and analyzing “bad” U.S. stations for as long as possible. They will say that they need to entirely finish that task before doing their own analysis of the remaining U.S. stations, which they will rig in any case. IOW, the script is obvious.
FurryCatHerder says
Re #147:
Well,I’m not so sure. Two very powerful storms(Katrina and Rita) landing within a month in 2005 could be a portent of things to come if surface ocean waters continue to warm and all indications are that they will.
I think there is a lot of politics surrounding hurricanes and climate change. Considering that it’s after the anniversary of Katrina, and we’re just now getting to “F”, and the projection for 2006 was way off, I’m going with “the observed data isn’t matching the rhetoric”.
Be careful with rhetoric — when rhetoric doesn’t match reality, people will use that to attack the message, even if it is otherwise valid.
Timothy Chase says
Regarding the Stephen Schwartz paper…
I ran into a paper earlier today, from 2005, that made one of the same mistakes regarding response times and therefore climate sensitivity – then quickly looked up a critique. The authors of the original paper (a couple of astronomers) were out of their depth when they treated the ocean as a single box rather than running their calculations with a number of layers. Current GCMs use forty, I believe.
The results were the same: a climate sensitivity considerably lower than what we know it to be. Other mistakes were made as well, but this was apparently the main one.
This kind of gambit seems to be getting rather popular with the degree’d climate skeptics.
Anyway, the critique was:
Comment on “Climate forcing by the volcanic eruption of Mount Pinatubo” by David H. Douglass and Robert S. Knox
T. M. L. Wigley, C. M. Ammann, B. D. Santer, K. E. Taylor
Geophysical Research Letters, Vol. 32, L20709, doi:10.1029/2005GL023312, 2005
Pretty short and quite readable.