A few items of interest this week:
Katrina Report Card:
The National Wildlife Federation (NWF, not to be confused with the ‘National Wrestling Federation’, which has no stated position on the matter) has issued a report card evaluating the U.S. government response in the wake of the Katrina disaster. We’re neither agreeing nor disagreeing with their position, but it should be grist for an interesting discussion.
An Insensitive Climate?:
A paper by Stephen Schwartz of Brookhaven National Laboratory, accepted for publication in the AGU Journal of Geophysical Research, is already getting quite a bit of attention in the blogosphere. It argues for a CO2-doubling climate sensitivity of about 1 degree C, markedly lower than just about any other published estimate, well below the low end of the range cited by recent scientific assessments (e.g. the IPCC AR4 report), and inconsistent with any number of other estimates. Why are Schwartz’s calculations wrong? The early scientific reviews suggest a couple of reasons: firstly, that modelling the climate as an AR(1) process with a single timescale is an over-simplification; secondly, that a similar analysis in a GCM with a known sensitivity would likely give incorrect results; and finally, that the error bars he places on his calculation are very optimistic. We’ll likely have a more thorough analysis of this soon…
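In the meantime, here is a minimal synthetic sketch of that first objection (our own construction, not Schwartz’s actual calculation): fit a single AR(1) timescale to a series that mixes a fast and a slow response, and the fitted timescale lands well below the slow one.

```python
# Minimal sketch (synthetic data, illustrative only): a single AR(1) fit
# understates the slow timescale of a two-timescale system.
import numpy as np

rng = np.random.default_rng(0)
n, dt = 1500, 1.0                    # time steps, arbitrary units
tau_fast, tau_slow = 1.0, 15.0       # "mixed layer" vs "deep ocean" (illustrative)

def ar1(tau, n):
    """Discretized relaxation: x[t] = a*x[t-1] + noise, with a = exp(-dt/tau)."""
    a = np.exp(-dt / tau)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + rng.normal()
    return x

series = ar1(tau_fast, n) + 0.5 * ar1(tau_slow, n)

# Single-timescale AR(1) fit from the lag-1 autocorrelation: tau = -dt/ln(r1)
r1 = np.corrcoef(series[:-1], series[1:])[0, 1]
print(f"fitted single timescale: {-dt / np.log(r1):.1f}  (slow timescale: {tau_slow})")
```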
It’s the Sun (not) (again!):
The solar cyclists are back on the track. And, to nobody’s surprise, Fox News is doing the announcing. The Schwartz paper gets an honorable mention even though a low climate sensitivity makes it even harder to understand how solar cycle forcing can be significant. Combining the two critiques is therefore a little incoherent. No matter!
VirgilM says
1) What is Stephen Schwartz’s response to the un-peer reviewed “early scientific reviews”? 2) If a journal allowed a bad anti-AGW paper to be published, then how can I be assured that these same journals didn’t allow bad pro-AGW papers to be published? Do we need a stronger peer-review process?
[Response: There is, it’s called an assessment process (e.g. IPCC, NRC, etc). See our previous article “Peer Review: A Necessary But Not Sufficient Condition”. -mike]
3) So in the last 100 years, the Sun had zero impact on the change of globally averaged temperatures? How much of the warming in the last 100 years can be attributed to the Sun? The IPCC doesn’t even try to answer the last question, but I get asked that question by the public that I interact with.
[Response: Hmmm? Not sure whom you are referring to as claiming that the sun has had zero impact on global mean temperature. I’m also not sure which IPCC you are referring to. The IPCC Third and Fourth Assessment reports both contained detailed summaries of Detection/Attribution studies that have indeed attempted to estimate the relative roles of natural (solar and volcanic) and anthropogenic forcing in observed 20th century temperature changes. I’d suggest reading Chapter 9 (“Understanding and Attributing Climate Change”) of the Fourth Assessment Working Group 1 report, the full report is available online here. -mike]
bjc says
The NWF report is informative in what regard? It is pot banging pure and simple.
John Cook says
The sun certainly doesn’t have zero impact on temperature. Tung’s study (the one mentioned in that Fox News article) estimates the impact on global temperature from the solar cycle (e.g. from solar minimum to maximum over ~5.5 years) at about 0.18 degrees. But note that this is superimposed on top of global warming – Tung had to detrend the temperature data to filter out the CO2 warming in order to find the solar signal.
Tung’s follow-up paper (http://www.amath.washington.edu/research/articles/Tung/journals/solar-jgr.pdf) has some interesting conclusions. Independently of models, he calculates a climate sensitivity of 2.3 to 4.1K, confirming the IPCC estimate. And as we’re currently at solar minimum, he estimates the solar warming over the next 5–6 years will add roughly double the amount of warming as we head towards a solar maximum around 2013.
It’s ironic that the studies that skeptics quote as proving the sun is causing global warming (e.g. Solanki 2003 gets mentioned a lot) actually conclude the opposite.
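As a rough illustration of that detrend-then-correlate step (a synthetic sketch with invented numbers, not Tung’s actual data or method):

```python
# Synthetic sketch: remove the secular greenhouse trend, then regress the
# residuals on an idealized 11-year solar index. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1959, 2005, 1 / 12)               # monthly, ~46 yr
solar = np.sin(2 * np.pi * years / 11.0)            # unit-amplitude solar cycle
temp = 0.015 * (years - years[0])                   # ~1.5 C/century CO2 trend
temp += 0.09 * solar + 0.1 * rng.normal(size=years.size)  # 0.18 C min-to-max + noise

# Detrend so the small solar signal is not swamped by the CO2 trend
resid = temp - np.polyval(np.polyfit(years, temp, 1), years)

# Slope of residuals vs solar index; min-to-max response is twice the slope
slope = np.polyfit(solar, resid, 1)[0]
print(f"recovered solar-cycle amplitude: {2 * slope:.2f} C (input: 0.18 C)")
```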
Alastair McDonald says
The group write, “Why are Schwartz’s calculations wrong?”
But Schwartz is being published in a peer reviewed journal. How can he possibly be wrong :-? Are you saying that peer review is not enough :-? If that is true then all the science published since the late 20th century must be suspect.
[Response: At the risk of being repetitious, please see our previous article “Peer Review: A Necessary But Not Sufficient Condition”. -mike]
Perhaps the climate models, which only date to the 1960s, are wrong too! Surely, if Schwartz’s calculations are based on the models and he is wrong, then the models must be wrong too.
OTOH, Schwartz is only considering the oceans. 30% of the surface of the Earth is covered with land. ‘Not a lot!’ as Paul Daniels used to say. But it is where we live, so for us it is important. The models say that the land areas will warm twice as much as the oceans, so his 1 C rise will mean a 2 C rise for us.
But if we look at his calculations, which include recent ocean cooling, one wonders if there is not a flaw somewhere. The Arctic ice has been melting much faster than the models predicted, and perhaps that cooling was just an aberration produced by icy water flowing from the Arctic Ocean. When Schwartz calculated the average ocean warming, he only included the increase in the sensible heat of the oceans, but he should also have included the increase in latent heat from the loss of sea ice.
That would have needed a factor of 80 for every cubic metre of perennial Arctic ice melted, along with the same factor for every cubic metre of ice calved and melted from the base of the Antarctic ice shelves.
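For reference, a quick back-of-envelope check of where that factor of 80 comes from (rough textbook constants, a sketch rather than anything from the paper): melting a kilogram of ice absorbs about as much heat as warming a kilogram of liquid water by 80 C.

```python
# Back-of-envelope: latent heat of fusion vs specific heat of water.
L_FUSION = 334e3    # J/kg, latent heat of fusion of ice
C_WATER = 4186.0    # J/(kg K), specific heat of liquid water
print(f"equivalent warming per kg melted: {L_FUSION / C_WATER:.0f} C")  # ~80

# Heat absorbed by melting, say, 1000 km^3 of ice (purely illustrative volume)
RHO_ICE = 917.0     # kg/m^3
volume_m3 = 1000 * 1e9
print(f"heat absorbed: {volume_m3 * RHO_ICE * L_FUSION:.2e} J")
```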
The point to note is that this error would explain the slow rise in ocean temperature despite the current CO2 forcing. And when the sea ice disappears, that factor will have to be added to the ocean warming instead of being subtracted from it, as happens now. If, using round figures, we assume we have had a warming of 1.0C over the last century, and that another 0.5C has been diverted to melting ice, then the true warming would have been 1.5C. When the ice disappears, the warming will be enhanced by the 0.5C that is no longer going into melting the ice, and temperatures will rise by 2.0C per century.
The bottom line is that Schwartz’s calculations are correct.
[Response: Hardly. We’ll have more on this soon, we promise. -mike]
It is the underlying science which underestimates the melting of the sea ice which is wrong.
Aaron Lewis says
Why is the Schwartz paper getting all this attention?
In Equation 1, he assumes that Q ≈ E, then he does a bunch of hocus-pocus and comes out with an equilibrium climate sensitivity and time constant whose values make Q ≈ E. Along the way, he forgets that by only using instrumental records, he has assumed a very short time constant. We do not have instrumental records from the days when Earth was a snowball or all tropical. As Earth went into its snowball stages, as it came out of its snowball stages, as Earth went into its tropical stages, and as Earth came out of its tropical stages, Q did NOT come close to equalling E!
If Q does not approximate E, then the value for dH/dt in Equation 2 is much larger, and dT/dt in Equation 3 is larger. Et cetera!
Unless he offers compelling evidence for the stability of ocean heat content and the stability of the heat content of polar ice on geologic time frames, Schwartz cannot expect the reader to accept Equation 1.
I would say that a careful reading of this paper in the context of careful remote sensing strongly argues for a higher equilibrium climate sensitivity and longer time constant than Schwartz proposes. Try redoing the math, only instead of assuming Q ≈ E, assume that Q is related to E through various functions of atmospheric physics. Oh, but wait! Then, you would have the global climate models that offer a higher equilibrium climate sensitivity and longer time constants.
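For anyone following the equation numbers, here is my minimal reconstruction of the single-compartment energy balance being discussed (round illustrative numbers; see the paper itself for the exact equations): C dT/dt = Q − E ≈ F(t) − λT, so the relaxation time is τ = C/λ and the equilibrium sensitivity to a forcing F is F/λ.

```python
# Single-compartment energy balance sketch (my reconstruction, round numbers):
# C dT/dt = F(t) - lambda*T, with tau = C/lambda and sensitivity = F_2x/lambda.
C = 17.0       # W yr m^-2 K^-1, effective heat capacity (illustrative)
tau = 5.0      # yr, the short time constant at issue
lam = C / tau  # W m^-2 K^-1
F_2x = 3.7     # W m^-2, canonical forcing for doubled CO2

print(f"implied 2xCO2 sensitivity: {F_2x / lam:.1f} K")   # ~1.1 K

# The identical arithmetic with tau = 15 yr roughly triples the answer,
# which is exactly why the assumed time constant matters so much:
print(f"with tau = 15 yr: {F_2x / (C / 15.0):.1f} K")     # ~3.3 K
```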
Jerry says
Schwartz’s model seems awfully simplistic; in fact, I don’t see much difference between his calculations and those presented in Sec. 12.6 of the textbook “Global Physical Climatology” by Dennis Hartmann.
Chris S says
I love reading this kind of material on this blog. Fox is reporting it much differently than the article on the right states it (from that link), and then, come to find out, those of you who have read his work conclude that in the end his results are in line with what the IPCC predicts. They don’t tell you that. Guaranteed my father will come home talking about how “they discovered what causes global warming again”. And now I can kill that too with my newly acquired knowledge from you all. Thanks.
Richard LaRosa says
Stephen Schwartz of BNL gave a talk to the Science Club of Long Island at SUNY Stonybrook in 2004. I knew BNL was interested in sequestering CO2 in the ocean so I asked him about ocean acidification due to CO2. He said not to believe everything you see in the Sunday Times. I replied that I was referring to two recent papers in Science. He said don’t believe it until a great consensus of scientists duplicates the findings and agrees. Something fishy about this.
Lawrence Brown says
This is regarding the Katrina “Report Card”.
Who speaks for the Corps?
The Army Corps of Engineers has become an all too convenient whipping post for the failures of the New Orleans levees.
But the levees that were originally built were as much a public policy decision as they were the Corps’ design criteria. The levees were designed to withstand a category 3 hurricane, which may have been a rarer occurrence then than it is today.
The process is that the Corps presents an initial proposal before Congress containing planning, construction, maintenance and other costs for, say, a 150 or 200 year flood (a flood that has a recurrence interval, on average, of the specified time period). Congress makes its decision based on its budget and against competing expenses in that budget. Federal funds were requested to bolster the levees several years before Katrina, but no money was authorized.
There was a great deal of complacency at all levels of government. After all, New Orleans had escaped disaster so often in the past, and so there was a mindset among officials and the public that it would continue to do so. Live and learn. When Rita hit the Gulf afterward, everyone was better prepared with emergency plans and evacuations. The Corps has many conscientious and dedicated employees, who are willing and able to protect against future foreseeable flooding but will be constrained by what the decision makers are willing to budget toward that cause.
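As an aside on what a “150 or 200 year flood” implies (my arithmetic, purely illustrative): the annual exceedance probability compounds over a structure’s design life, so the odds of seeing such a flood are higher than the name suggests.

```python
# Chance of at least one 200-year flood (annual probability 1/200) occurring
# during a 50-year design life. Round illustrative numbers.
p_annual = 1 / 200
design_life = 50
p_any = 1 - (1 - p_annual) ** design_life
print(f"P(at least one exceedance in {design_life} yr) = {p_any:.0%}")  # ~22%
```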
John Mashey says
re: #9
The Economist has a succinct current story about New Orleans,
http://www.economist.com/world/na/displaystory.cfm?story_id=9687404
Among other things, it says:
“But the government has done practically nothing to discourage rebuilding in the most flood-prone areas….In the end, most homeowners were able to rebuild precisely where they were before and still qualify for federal flood insurance.”
While I normally don’t read the Wall Street Journal’s editorial page, they had an article by Lomborg today, one point of which was right: encouraging building on some coasts is crazy.
John Mashey says
http://www.forbes.com/forbes/, current issue (Sept 3), has a nice section on solar power, but also an unfortunate editorial by Steve Forbes, praising the (negative) analysis of An Inconvenient Truth by Mary Ellen Gilder (George’s daughter), a *medical school student*, pointing at:
http://www.oism.org/pproject (a site that RC fans may recognize).
I’ve suggested that “debunking bad papers” might be given a new section, with one thread per paper, to keep such discussions out of other threads. This one wouldn’t be worth it … except for being featured in Forbes.
Rod B says
Just an observation RE 9, et al: Rita was better prepared for because the response was handled by Texas and Texas cities (though they too had their bad moments). New Orleans and LA, as first responders, get an F- for their response and their bumblings before, during and after Katrina. I find it odd that the entire focus is on the Feds, who in fact deserve at least a D if not a C (or C-) for their efforts. (Though nobody could handle a levee breach.) I think I heard the Mayor of NO is parlaying his total incompetence into a run for Governor… (or maybe Senator, I can’t recall).
Hank Roberts says
Richard, thanks for the report on Schwartz’s disbelief in the ocean pH work.
Now that’s scary. It’s not fancy physics, it’s physical chemistry.
B Buckner says
Re: 9 While you make many valid points, the levees were in fact poorly designed and constructed, and this falls right in the lap of the Corps.
Lawrence Brown says
Respecting B. Buckner’s comment in 14: from the partial plans that I saw in the media shortly after the breach of the levees, the underground supports didn’t go deeply enough into the loose surrounding terrain and should have penetrated bedrock to prevent overturning. I know nothing about the inspection at the construction sites, but some inspectors are notoriously lax in accepting shoddy construction methods, too high a proportion of water in the concrete, for example. But there’s plenty of culpability to go around. Engineers are hardly ever given carte blanche on what they can build. Pressure will come from those responsible for the budget to build at a quicker pace and at less cost.
In addition, the government didn’t have its priorities straight on spending. The Boston Big Dig was built at a cost of $15 billion. The estimated cost to upgrade the New Orleans levees to protect against a cat. 5 hurricane was $2.5 billion. It would have been well worth it. It had been anticipated in the years before Katrina, but no money was allotted.
Hank Roberts says
> levees, the underground supports should have penetrated bedrock …
Impractical: “Southern Louisiana has been built up out of sediments transported from the interior of the continent down the Mississippi River. Tens of thousands of feet of these soft sediments overlie crystalline bedrock….”
http://www.nae.edu/nae/bridgecom.nsf/weblinks/CGOZ-6ZQPVT?OpenDocument
Lessons from Hurricane Katrina
John T. Christian
Volume 37, Number 1 – Spring 2007
Geotechnical conditions and design flaws both contributed to the failure of the levees in New Orleans.
This is a good summary referencing some 6,000 pages of technical reports on the Louisiana failure.
Short answer: the builders used wishful thinking, optimism, and hope, and called it engineering.
Joe Romm says
I also have some posts debunking Schwartz. The basic question is whether scientists (i.e. the IPCC consensus) are overestimating or underestimating climate change. Schwartz’s results would imply the former — but all observations and much recent research strongly suggest the latter. See
http://climateprogress.org/2007/08/21/are-scientists-overestimating-or-underestimating-climate-change-part-i/
http://climateprogress.org/2007/08/22/are-scientists-overestimating-or-underestimating-climate-change-part-ii/
http://climateprogress.org/2007/08/23/are-scientists-overestimating-or-underestimating-climate-change-part-iii/
Rod B says
“Re:9 while you make many valid points, the levees were in fact poorly designed and constructed, and this falls right in the lap of the Corps.”
And neither NOL nor LA had anything to say about it?
B Buckner says
RE: 15 Bedrock is about 2,000 feet below the ground surface, so that was not the issue. The Corps made many rookie, and frankly unprofessional, mistakes. They ignored ample subsurface information that indicated the existence of permeable layers beneath the dikes, which should have been cut off to prevent a boiling or quick condition that undermined the foundation when water rose on one side of the wall. They also ignored the presence of soft compressible soils beneath the dikes. The dikes subsequently settled several feet, so that they were no longer set at the design flood level, and were overtopped, something they were not designed to withstand. Overall, their performance was an embarrassment.
dhogaza says
Not directly, only indirectly through the political process, and then only in regard to the big picture, i.e. “how big a hurricane to guard against”, not, say, “what kind of concrete to use”.
SecularAnimist says
Regardless of whether you attribute hurricane Katrina to anthropogenic global warming, it is likely representative of the sort of catastrophic “extreme weather events” that will become more and more frequent as a result of global warming. (Recall that Houston narrowly escaped an even worse disaster from hurricane Rita.) The moral of the response to the Katrina disaster is that even the wealthiest, most powerful and most technologically advanced nation on Earth is entirely unprepared and unable to respond effectively to such events.
David B. Benson says
Do better to hire the Dutch engineers…
Lawrence Brown says
Re #16 “Short answer: the builders used wishful thinking, optimism, and hope, and called it engineering.”
C’mon Hank, you don’t know that. It’s this kind of inflammatory rhetoric that causes confusion and contention among the public. Since I found RealClimate about 4 months ago, you’ve been among the more cogent and analytical posters here. It seems out of character to state this kind of subjective verbiage. See B. Buckner’s more reasoned response in comment #19. Most career civil employees and officers are professional in their work. They don’t make as much as they might outside, but there are gratifications in serving the public.
John Mashey is onto something in comment #10. Why keep doing the same thing and risk getting the same results? Eventually a storm greater than Katrina will come ashore, especially since greater magnitude storms have been occurring more frequently lately. Maybe New Orleans ought to do what Galveston did after a storm surge in 1900 caused 6000 deaths.
They jacked up the buildings, deposited dredged sand beneath the raised structures, and built a large seawall for further protection. They could attempt the same in New Orleans to raise some of the lower sections of the city above sea level.
bigcitylib says
# 8 Schwartz has a website here
http://www.ecd.bnl.gov/steve/pubs.html#preprints
And here is the conclusion of his 2007 paper:
Quantifying climate change — Too rosy a picture?
“The century-long lifetime of atmospheric CO2 and the anticipated future decline in atmospheric aerosols mean that greenhouse gases will inevitably emerge as the dominant forcing of climate change, and in the absence of a draconian reduction in emissions, this forcing will be large. Such dominance can be seen, for example, in estimates from the third IPCC report of projected total forcing in 2100 for various emissions scenarios2 as shown at the bottom of Fig. 1. Depending on which future emissions scenario prevails, the projected forcing is 4 to 9 W m-2. This is comparable to forcings estimated for major climatic shifts, such as that for the end of the last ice age3.”
Phil. Felton says
“Do better to hire the Dutch engineers…
Comment by David B. Benson — 25 August 2007 @ 14:39”
Or the guys who designed and built this:
http://en.wikipedia.org/wiki/Thames_Barrier
Nick Stokes says
I thought the answer to the Schwartz puzzle is simply mishandled signal analysis. Starting at the bottom of p 13, he describes the filtering he applied. Without detrending, he gets a relaxation time constant of 15-17 years, which would give a sensitivity much in line with other calculations. But then he applies detrending, which he concedes is a high pass filter. By attenuating long-term effects in this way, he gets a much shorter time constant of 5-7 years, which then leads to the lower sensitivity.
Detrending is quite inappropriate here. His argument for using it seems to be that without it, he gets inconsistent data between the first and second half-periods of data. All that means is that his method cannot resolve the longer term effects properly, and so he removes them. The root problem is that he is trying to find a transfer function for a system by dividing the output spectrum by the input, but the input is poorly known and spans only a small range of frequencies.
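Here is a quick synthetic demonstration of the effect (my sketch, not Schwartz’s data): fit an AR(1) time constant to a short series before and after linear detrending, and the detrended estimate typically comes out markedly shorter.

```python
# Synthetic check: linear detrending (a crude high-pass filter) shortens the
# AR(1) time constant fitted to a short series. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n, dt, tau_true = 125, 1.0, 15.0        # ~125 yr of annual data
a = np.exp(-dt / tau_true)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal()

def fitted_tau(y):
    """AR(1) time constant from the lag-1 autocorrelation."""
    r1 = np.corrcoef(y[:-1], y[1:])[0, 1]
    return -dt / np.log(r1)

t_axis = np.arange(n)
detrended = x - np.polyval(np.polyfit(t_axis, x, 1), t_axis)
print(f"raw fit: {fitted_tau(x):.1f} yr   detrended fit: {fitted_tau(detrended):.1f} yr")
```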
Hank Roberts says
Lawrence, when I say ‘builders” I mean the people who hire the engineers, and the politicians who pay for the work and tell the public that it’s good and their money well spent. From that study — based on the first 6,000 pages of reports — it seems like the engineering information was sufficient to say the builders were wrong, and some engineers did warn about the situation. The pressure’s always going to be strong on engineers, as on climatologists, to provide data but not warnings, I imagine.
These huge retrospective studies of failures, I guess, are how they figure out what their professional responsibility is to warn, where the builders and politicians don’t want the public to know a failure is likely.
Compare the way earthquake building standards get changed after each major earthquake — there’s a cycle of engineering reports, proposals to upgrade the way buildings are built, a long period of political negotiation, and maybe change or maybe not. There’s a whole academic field studying how this kind of change happens.
Remember this one? http://www.fema.gov/news/newsrelease.fema?id=6202
“… the Northridge earthquake …. discoveries alarmed the structural engineering community, building codes officials and emergency managers, especially in earthquake prone areas. The findings called into question all building codes developed over the previous 20 years that addressed this type of construction…..”
That’s what engineers can do — “call into question” — but can they do any more?
Climatologists have a tougher situation — they don’t get a “next time” toward which to make recommendations.
Vernon says
Well I have been ad hom’ed on a bunch of other pro-CO2-AGW sites, but no one has been able to point out where my facts or conclusions are wrong. Anyone here want to take a shot at it?
[Response: Vernon, the reaction you get is probably directly proportional to your reluctance to listen. But here are the answers again. – gavin]
Here are the facts and conclusions:
Hansen (2001) states quite plainly that he depends on the accuracy of the station data for the accuracy of his UHI off-set
[Response: Of course. – gavin]
WMO/NOAA/NWS have siting standards
Surfacestations.org’s census is showing (based on where they are at now in the census) that a significant number of stations fail to meet WMO/NOAA/NWS standards
[Response: They have not shown that those violations are i) giving measurable differences to temperatures, or ii) they are imparting a bias (and not just random errors) into the overall dataset which is already hugely oversampling the regional anomalies. – gavin]
There is no way to determine the accuracy of the station data for stations that do not meet standards.
[Response: There is also no way to determine the accuracy of the stations that do either. Except for comparing them to other nearby stations and looking for coherence for the past, and actually doing some measurements of temperature now. – gavin]
Hansen uses lights=0 in his 2001 study
Due to failure of stations to meet siting standards, lights=0 does not always put the station in a rural environment
[Response: False. You are confusing a correction for urbanisation with micro-site effect. UHI is a real problem, and without that correction the global trends would be biased high. The Hansen urban-only US trend is about 0.3 deg C/century warmer than the rural trend (which is what is used). Therefore the lights=0 technique certainly does reduce urban biases. – gavin]
At this time there is no way to determine the accuracy of Hansen’s UHI off-set
[Response: The effect diminishes with the size of town, it is actually larger than corrections based on population rises, and it gives results that are regionally coherent and you have yet to show that any objective subsampling of the rural stations makes any difference. – gavin]
Any GCM that uses this off-set has no way to determine the accuracy of the product being produced.
[Response: GCMs don’t use the surface station data. How many times does that need to be pointed out? – gavin]
Tell which facts I got wrong!
Oh and if you did not catch this, it means that GISS GCM is pretty worthless till they figure this out.
[Response: GCM physics is independent of the trends in the surface data – no changes to that data will change a single line of GCM code or calculation. If you want to have a continued discussion then address the responses. Simply repeats of the same statements over and again is tiresome and pointless. – gavin]
ray ladbury says
Re #12. Rod B. Oh, now come on. The federal response to Katrina would have been comic had the consequences not been so tragic. Chertoff didn’t even know there were refugees at the Superdome until NPR pointed it out to him.
Rod, I travel a lot overseas and work with lots of foreign nationals on international satellite collaborations. This one massive failure did more to destroy faith in the US federal government than anything else–including Iraq. This was the federal government saying: “You’re on your own.”
Lawrence Brown says
“Climatologists have a tougher situation — they don’t get a “next time” toward which to make recommendations.”
After the Tacoma Narrows Bridge (“Galloping Gertie”) collapsed on November 7, 1940, engineers began to pay much more attention to wind stress. Often we learn from our mistakes and miscalculations, though at great expense.
Future climatologists will learn from what today’s climatologists are doing. Hopefully, they’ll be able to fine tune or more fully understand such things as details of parameterizing partial cloud cover inside a model’s cell.
First somebody does something, then somebody does it better. The Wright Brothers’ flight was measured in minutes, and two decades later Lindbergh made a non-stop flight across the Atlantic.
Whether you’re a scientist, climatologist or engineer, it smooths the process if you have the support of those who control the purse strings. Here’s what Jim VandeHei and Peter Baker of the Washington Post had to say on Sept. 2, 2005:
“In recent years, Bush repeatedly sought to slice the Army Corps of Engineers’ funding requests to improve the levees holding back Lake Pontchartrain, which Katrina smashed through, flooding New Orleans. In 2005, Bush asked for $3.9 million, a small fraction of the request the corps made in internal administration deliberations. Under pressure from Congress, Bush ultimately agreed to spend $5.7 million. Since coming to office, Bush has essentially frozen spending on the Corps of Engineers, which is responsible for protecting the coastlines, waterways and other areas susceptible to natural disaster, at around $4.7 billion.
“As recently as July, the White House lobbied unsuccessfully against a plan to spend $1 billion over four years to rebuild coastlines and wetlands, which serve as buffers against hurricanes. More than half of that money goes to Louisiana.”
Bottlenecks are always found at the top of the bottle. When this Nation has short sighted leaders,we’ll all inevitably pay in the long run.
Vernon says
Gavin, thank you for your input and I have addressed your responses in-line. I bolded the original post.
Here are the facts and conclusions:
Hansen (2001) states quite plainly that he depends on the accuracy of the station data for the accuracy of his UHI off-set
[Response: Of course. – gavin]
Vern’s Response:
Yeah, Gavin agreed with me.
WMO/NOAA/NWS have siting standards
Surfacestations.org’s census is showing (based on where they are at now in the census) that a significant number of stations fail to meet WMO/NOAA/NWS standards
[Response: They have not shown that those violations are i) giving measurable differences to temperatures, or ii) they are imparting a bias (and not just random errors) into the overall dataset which is already hugely oversampling the regional anomalies. – gavin]
Vern’s Response:
i) They do show that the stations are not in accordance with NOAA/NWS guidelines. No one knows what this is doing to the station accuracy.
ii) This is a red herring; it does not matter what they are doing, what matters is that no one knows what this is doing to accuracy.
iii) Oversampling does not matter; Hansen (2001) is not about trends, it is about adjustments to individual stations for UHI, TOD, and station movement.
[Response: False. The Hansen study is precisely about calculating regional and global trends. It specifically states that for local studies, looking at the raw data would be better. If you don’t know why a study is being done, you are unlikely to be able to work out why some details matter and some do not. – gavin]
There is no way to determine the accuracy of the station data for stations that do not meet standards
[Response: There is also no way to determine the accuracy of the stations that do either. Except for comparing them to other nearby stations and looking for coherence for the past, and actually doing some measurements of temperature now. – gavin]
Vern’s Response:
Gavin, this is another red herring. Hansen assumed the stations did meet the accuracy requirements; it can be shown this assumption is not supported.
So Hansen says that the data from the surface stations needs to be accurate for his methodology to work.
[Response: Hansen appropriately acknowledges that if the data are seriously flawed, then the GISS analysis will have included those flaws. But again you are mistaken in what you think has been shown. No-one has demonstrated that there are significant problems with enough stations to change the regional picture. For you “individual non-compliance”=”unusable data”, but this has not been shown at all for a significant number of single stations, let alone their regional average. – gavin]
Further, Hansen made additional assumptions (his definition of rural):
Hansen uses lights=0 in his 2001 study
Due to failure of stations to meet siting standards, lights=0 does not always put the station in an rural environment
[Response: False. You are confusing a correction for urbanisation with micro-site effect. UHI is a real problem, and without that correction the global trends would be biased high. The Hansen urban-only US trend is about 0.3 deg C/century warmer than the rural trend (which is what is used). Therefore the lights=0 technique certainly does reduce urban biases. – gavin]
Vern’s Response:
Gavin, I am not confusing anything. You have a nice red herring, but I did not say that the currently used UHI off-set does not reduce urban biases. I said there is no way to know the accuracy of the UHI off-set. You have not disputed this, and saying you’re doing something that you cannot prove is right is not much better than doing nothing.
[Response: First off, the UHI correction is a trend, not an offset. Secondly, you are confused. Where is there a ‘lights=0’ station that is in an urban environment? An urban environment is a city, not a building or a road. Thirdly, you are asking for something that is impossible. How can anyone prove that the correction is correct? Science doesn’t work like that. Instead, you make assumptions (that rural trends are more representative of the large scale than urban ones), and you calculate the regional trends accordingly. You might dispute that assumption, but at no stage can anyone ‘prove’ that it is perfectly correct. – gavin]
At this time there is no way to determine the accuracy of Hansen’s UHI off-set
[Response: The effect diminishes with the size of town, it is actually larger than corrections based on population rises, and it gives results that are regionally coherent and you have yet to show that any objective subsampling of the rural stations makes any difference. – gavin]
Vern’s Response:
Your response has nothing to do with my statement. You then follow up by talking about the census-based UHI off-set, which Hansen specifically says his methodology is better than.
Any GCM that uses this off-set has no way to determine the accuracy of the product being produced.
[Response: GCMs don’t use the surface station data. How many times does that need to be pointed out? – gavin]
Vern’s Response:
yet another red herring. I never claimed that you used surface station data. You use the trends, which are in part, formed by using Hansen (2001) off-sets based on the surface station data. [edit]
Tell which facts I got wrong!
Oh and if you did not catch this, it means that GISS GCM is pretty worthless till they figure this out.
[Response: GCM physics is independent of the trends in the surface data – no changes to that data will change a single line of GCM code or calculation. If you want to have a continued discussion then address the responses. Simply repeats of the same statements over and again is tiresome and pointless. – gavin]
Vern’s Response:
yet another red herring. Gavin, you can continue to mischaracterize what I said but it will not change the facts. The facts are that the surface station trends are used by the GISS GCM as an input. There is no way, with the work that Hansen has done in (2001), to know if the trends which use his off-sets are any good. Remember garbage in, garbage out?
So your basic response consists of conceding that the surfacestations.org census is showing that a significant number of stations, to date, are not in compliance. You offer nothing to show what the impact of being out of compliance is. You offer nothing to show that if Hansen’s assumptions are wrong, his results are still right. You offer red herrings as to why this would affect the GISS GCM.
[edit]
[Response: Pay attention here. I said the surface data are not used in the GCM. You insist that they are. Since I think I know what’s in the GCM quite well, I am certain that I am not mistaken on this. If you think I am wrong, please point out the lines of code where this data are used. This is a straightforward point of fact, if you cannot concede this, there is absolutely no point continuing this discussion. – gavin]
Zeke Hausfather says
Re: 11
That Gilder essay makes the amusing claim that the Oreskes article should be discounted because it was published in Science’s “Essays” section rather than in the peer reviewed literature. Perhaps she is right; from now on, I’ll only cite surveys that have been peer reviewed.
[/sarcasm]
Aaron Lewis says
I know that sea level was discussed in this group at some length in July. However, with the new evidence that changes in atmospheric and thus oceanic circulation may have obscured changes in sea level (http://environment.newscientist.com/article/dn12547-flatter-oceans-may-have-caused-1920s-sea-rise.html ) , is there any evidence that the previously apparently static sea levels caused groups to self-censor data on ice sheet melting? That is: Is there ice sheet melting data out there that was not published because it seemed inconsistent with sea level data?
Jim Eager says
Re 28 Vernon : “Any GCM that uses this off-set has no way to determine the accuracy of the product being produced.”
Vernon, yet again you prove that you have no idea what a GCM is or how they work.
This is not an ad hom, merely a statement of fact based on your own expressed misconceptions.
Jerry says
Re #31
This exchange reminds me of a student I once had who was convinced from the very beginning that anything I told her was a lie. Needless to say, I wasn’t able to persuade her of much!
Rod B says
Ray, and what exactly were all those refugees doing at the Superdome besides wondering where the supplies NOL promised were, and where the buses NOL promised to take them to safety were? Not knowing that most of the buses were sitting at the bus lot, untouched and by then mostly submerged. I don’t say the Feds’ response was anything great. But the attitude, as you express, that the Feds (not the State, the locals, or ourselves) are responsible to kiss us and make us better is a never-winning situation, and not the Feds’ fault.
beefeater says
Well, I feel better already. A non-peer-reviewed blog refuting this study! Oh, who to put faith in? A website, or a peer-reviewed study published in a respected scientific journal? I just can’t decide.
Eli Rabett says
Vernon’s mission is to anger Gavin, at which point the climate blogger ethics panel in permanent session over at Climate Audit and Climate Science will convict Real Climate of being a very nasty place indeed. If Gavin tires, or allows Vernon to keep propagating ignorance, well, that’s fine with Vernon. The whole purpose of the Vernon exercise is to put Real Climate into Zugzwang.
So one needs another strategy: The Time Out Box TM
When you get someone whose mission in life is provocation, send them to the Time Out box. Put a notice on the front page: The following children are sitting in the Real Climate Time Out Box: Vernon. The box would have links to their comments showing why they were sent there. If Vernon decides that there is no future in claiming that surface data is used in GCMs after being told that it was not so, after being asked to provide some evidence, etc. well, he has the keys to the box in his own hand.
steven mosher says
RE 1.
Peer review is a method of ensuring that Bayesians don’t get surprised, since they share priors with their peers.
Hostile scrutiny is the evolutionary test of an idea’s fitness.
[Response: Peer review does not go around simply squashing ideas that aren’t agreed on by the reviewer. Instead, it is a review of the logical basis of an idea, not whether the idea is right or wrong, or likely or unlikely. Frankly, it often doesn’t even test that properly. It is a minimum test of an idea’s fitness, not the end result. – gavin]
Ron Taylor says
Rod (36) I think the supplies were to come from FEMA. No city can possibly store emergency supplies for a disaster of this scale. Yes, the city was supposed to have buses. But the bus drivers all fled. Should have been anticipated.
No one expects the feds to kiss us. But planning for disasters of this scale has to be a federal problem or we will be needlessly duplicating expertise and planning all over the country.
dallas tisdale says
ref: 36 One good thing happened because of Katrina: local and state officials learned that FEMA is there to provide assistance, not usurp authority. Calling for the equivalent of martial law in the wake did not speak well for LA’s and NOL’s planning.
Disaster planning has improved dramatically throughout the country because of NOL. Of course it is easier to blame a federal administration.
steven mosher says
RE 28. Vernon writes, and gavin inlines
“Surfacestations.org’s census is showing (based on where they are at now in the census) that a significant number of stations fail to meet WMO/NOAA/NWS standards
[Response: They have not shown that those violations are i) giving measurable differences to temperatures, or ii) they are imparting a bias (and not just random errors) into the overall dataset which is already hugely oversampling the regional anomalies. – gavin]”
I’ve pointed this out before so I do not understand why some cannot RTFM
1. Leroy’s study, which forms the basis of the new standards for the CRN (which gavin endorses), indicates that poor siting, like that shown by Surfacestations.org, can lead to microsite errors on the order of 1-5C. RTFM.
Watts and company are relying on the published peer reviewed science. Their prior is that bad siting leads to a 1-5C error.
2. Pielke’s study of Colorado sites showed microsite bias. However, Karl objected that this “microsite issue”, while important, was limited. Surfacestations has shown the issue to be more widespread than Karl imagined. Karl’s prior wrongly assumed a rare-event type of distribution. So much for priors.
3. The first study of the new network (the CRN) found that microsite issues (see the Ashland study) were critical. In this particular case it was siting a weather station by a runway that caused an issue.
Simply, Surfacestations does not have to re-prove what:
1. Karl admits is true
2. Hansen admits is true
3. CRN studies prove.
Namely THIS. The WMO scientists were correct when they established siting guidelines. They expressed a consensus of scientific thought.
In short. WMO have standards for a reason. It is the consensus of scientists that improper siting leads to improper readings. This finding has not been seriously challenged anywhere in peer reviewed literature. Karl called for better siting, HANSEN called for better siting. The WMO calls for better siting. Gavin has called for better siting by pointing to the CRN.
Only a siting sceptic could argue that bad siting is OK.
dhogaza says
Which is all meaningless blather. It does not speak to the pertinent question:
Can we draw meaningful conclusions from the historical data we have?
Those same scientists calling for a better network in the future also happen to believe that yes, we CAN draw meaningful conclusions from the historical data.
A bunch of photographs aren’t going to change that conclusion.
Eli Rabett says
That microsite bias exists is not an issue. That it goes in only one direction is. There are more USHCN sites near trees than air conditioners. Moreover, Peterson has shown that trends at urban sites are essentially the same as trends at rural sites.
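A toy version of that distinction (synthetic numbers, nothing like the real homogenization): random site-to-site offsets leave the mean trend alone, while a drift that goes in one direction does not.

```python
# Toy check: random microsite errors vs a one-directional drift across a
# network of stations sharing the same regional trend. Illustrative only.
import numpy as np

rng = np.random.default_rng(4)
yrs = np.arange(50)
true = 0.02 * yrs                                   # 2 C/century regional trend
stns = true + rng.normal(0, 0.2, (100, 50))         # 100 stations + local noise

random_offsets = stns + rng.normal(0, 0.5, (100, 1))              # static offsets
one_way_drift = stns + 0.01 * yrs * (rng.random((100, 1)) < 0.3)  # 30% drift warm

for name, data in [("random offsets", random_offsets), ("one-way drift", one_way_drift)]:
    trend = np.polyfit(yrs, data.mean(axis=0), 1)[0] * 100
    print(f"{name}: {trend:.2f} C/century (true: 2.00)")
```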
ray ladbury says
Rod B.,
While I agree that there was failure on all levels, what was new in NO was that the failure reached all the way to the top. The Bush administration’s political appointees utterly failed to appreciate the gravity of the situation–that is clear from the emails. I do not let LA or NO off the hook, but the perception of our allies (and our enemies) is that the government of the most powerful nation on Earth allowed civil society to break down for an extended period of time–and it makes them wonder just how strong we are.
ray ladbury says
Steven Mosher,
Let us presume that there is a dataset with a systematic error of unknown origin. Two graduate students start looking for the source of the error. Graduate student A begins looking at every nut and bolt in the apparatus that is the source of the data. Grad student B looks at the dataset and characterizes the error – maybe looks at the data at different stages, trying to find where the error is creeping in and what might be the nature of the error.
I will bet you dollars to donuts (granted, not that great odds given what the dollar is doing these days) that not only will grad student B find the source of the error first, she will also have a better idea of how to correct for it and salvage the dataset rather than starting over. Data are never perfect.
By all means, let us apply the standards to any new stations. However, there is zero chance that the audits of stations will produce evidence of any problem the analyses cannot handle.
Craig Allen says
There is an excellent article on the Schwartz paper at the blog.
The lack of sophistication in the model he uses in the study is revealed in this quote from the paper:
Finally, as the present analysis rests on a simple single-compartment energy balance model, the question must inevitably arise whether the rather obdurate climate system might be amenable to determination of its key properties through empirical analysis based on such a simple model. In response to that question it might have to be said that it remains to be seen. In this context it is hoped that the present study might stimulate further work along these lines with more complex models…. Ultimately of course the climate models are essential to provide much more refined projections of climate change than would be available from the global mean quantities that result from an analysis of the present sort.
Craig Allen says
Hey, where did the preview button go? I didn’t mean to submit that my last comment yet, that last paragraph is meant to be a block quote and I was only just getting started otherwise …
Neil B. says
This is important, but didn’t (?) get much media attention:
Mine fires in China, India etc., burning coal, put out about as much greenhouse CO2 as US gas consumption. Suppressing them substantially would have a significant impact on reducing greenhouse gas increases.
Link
Rod B says
Ray, one reason why the Feds didn’t fully appreciate Katrina’s situation might be that both LA and NOL told FEMA the levees were intact hours after they had been breached. Are the critical foreigners the ones who had just been crushed by a tsunami and helped extraordinarily by our military? Or maybe the self-centered French et al., because we’re 30 years late with our periodic saving of their butts from the Germans? Actually, some of what you and they say is valid. I do not give the Feds high marks by a long shot, and ultimately they have to carry the ball for this scale of disaster. I just think we should spread the blame appropriately. Plus I think Mississippi has put LA to shame with their recovery.