Today, Science published an important comment pointing out that there were serious errors in a climate research article that it published in October 2004. The article concerned (Von Storch et al. 2004) was no ordinary paper: it has had a most unusual career. Not only did it make many newspaper headlines [New Research Questions Uniqueness of Recent Warming, Past Climate Change Questioned, etc.] when it first appeared, it was also raised in the US Senate as a reason for the US not to join global climate protection efforts. It furthermore formed part of the basis for the highly controversial enquiry by a Congressional committee into the work of scientists, which last year elicited sharp protests from the AAAS, the National Academy, the EGU and other organisations. It now turns out that the main results of the paper were simply wrong.
Von Storch et al. claimed to have tested the climate reconstruction method of Mann et al. (1998) in model simulations, and found that it performed very poorly. Now, Eugene Wahl, David Ritson and Caspar Ammann show that the main reason for the alleged poor performance is that Von Storch et al. implemented the method incorrectly. What Von Storch et al. did, without mentioning it in their paper, was to remove the trend before calibrating the method against observational data – a step that severely degrades the performance of Climate Field Reconstruction (CFR) methods such as the Mann et al. method. (Unfortunately, this erroneous procedure has already been propagated in a paper by Burger and Cubasch (GRL, 2005), where the authors refer to a personal communication with Von Storch to justify its use.) Another, more recent analysis has shown that CFR methods perform well when used correctly. (See our addendum for a less technical description of what this is all about.)
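To see intuitively why detrending before calibration is so damaging, consider a deliberately minimal, single-proxy caricature in Python (this is emphatically not the actual multivariate CFR algorithm of Mann et al.; every number in it is invented for illustration). If the regression coefficients are estimated only from the detrended, high-frequency wiggles, proxy noise drags the fitted slope towards zero, and the reconstruction of earlier centuries is damped:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "truth": 1000 years of temperature with century-scale swings, ending in
# a 150-year calibration window that contains a warming trend.
t = np.arange(1000)
temp = 0.4 * np.sin(2 * np.pi * t / 400)        # slow, low-frequency variability
temp[-150:] += np.linspace(0.0, 0.7, 150)       # trend inside the calibration window
proxy = temp + 0.3 * rng.standard_normal(1000)  # proxy = climate signal + noise

cal = slice(850, 1000)                          # the "instrumental" period

def detrend(z):
    """Remove the least-squares linear trend over the calibration window."""
    return z - np.polyval(np.polyfit(t[cal], z, 1), t[cal])

for label, x, y in [("with trend", proxy[cal], temp[cal]),
                    ("detrended", detrend(proxy[cal]), detrend(temp[cal]))]:
    slope, intercept = np.polyfit(x, y, 1)      # calibrate temperature on proxy
    recon = intercept + slope * proxy[:850]     # reconstruct the pre-instrumental era
    print(f"{label}: slope {slope:.2f}, reconstructed std {recon.std():.2f}, "
          f"true std {temp[:850].std():.2f}")
```

In this caricature the detrended calibration recovers only a small fraction of the low-frequency amplitude that the trended calibration retains – the qualitative effect that Wahl, Ritson and Ammann identify in the real implementation.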
How big a difference does this all make? The calibration error in the temperature minimum around 1820, where one of the largest errors occurs, is 0.6ºC in the standard case (75% variance) of the Von Storch et al. analysis. This error drops to 0.3ºC, even in the seriously drift-affected ECHO-G run, when the erroneous detrending step is left out. In the more realistic HadCM3 simulation, the error is just above 0.1ºC. The error margins (2 sigma) provided by Mann et al. and pictured in the IPCC report are ±0.17ºC (Fig. 2.21; the curves are reproduced in our addendum). It is therefore clear that the model test of Von Storch et al., had it been implemented correctly, would have shown only a small and undramatic underestimation of variance and would barely have ruffled a feather.
Error made, error corrected, and all is well? Unfortunately not. A number of questions remain, which need to be resolved before the climate science community can put this affair to rest.
The first is: why did it take so long to correct this error, and why did the authors of the original paper not correct it themselves? The error is reasonably easy to spot, even for non-specialists (see addendum). And it was in fact spotted very soon after publication. In January 2005, a comment was submitted to Science which correctly pointed out that Von Storch et al. had calibrated with detrended data and had therefore not tested the Mann et al. method. As such comments are routinely passed to the original authors for a response, Von Storch et al. must have become aware of their mistake at this point at the latest. However, the comment was rejected by Science in May 2005.
In a paper dated July 2005, Zorita and Von Storch admit their error in passing, writing: “the trend is subtracted prior to the fit of the MBH regression/inflation model (von Storch et al. 2004). […] It seems, however, that MBH have exploited the trends”. It is thus clear that they knew that the central claim of their Science paper – namely, that they had tested the Mann et al. method – was false. But rather than publishing a correction in Science, they wrote the above in a non-ISI journal called “Memorie della Societa Astronomica Italiana”, which few climatologists would read.
An unambiguous correction in Science, where the original paper appeared, would not only have been good scientific practice. It would have been particularly important given the large public and political impact of their paper. It would have been a matter of courtesy towards their colleagues Mike Mann, Raymond Bradley and Malcolm Hughes, who had suffered a major challenge to their scientific reputations, as well as having to invest a large amount of time to deal with the Congressional enquiry mentioned above. And it would have been especially pertinent given the unusually vitriolic media statements made previously: in an interview with a leading German news magazine, Von Storch had denounced the work of Mann, Bradley and Hughes as “nonsense” (“Quatsch”). And in a commentary written for the March 2005 German edition of “Technology Review”, Von Storch accused the journal Nature of putting its sales interests above peer review when it published the Mann et al. 1998 paper. He also called the IPCC “stupid” and “irresponsible” for highlighting the results of Mann et al. in its 2001 report.
There were at least two further issues with the Von Storch et al. paper:
– The model run of Von Storch et al. suffers from a major climate drift due to an inappropriate initialisation procedure. Despite starting in medieval times, the model was initialised from a present-day, rather than pre-industrial, climate state – i.e. from a climate affected by human-caused warming. As a result, the Northern Hemisphere temperature in the model drops by about 1.5 ºC during the initial 100-year adjustment phase and continues to drift downwards for centuries afterwards. This problem is never mentioned, and this part of the experiment is not shown in the publications, although climate modellers know that such a severe disequilibrium must cause a long-lasting climate drift in the remainder of the run. After Osborn et al. (2006) documented this problem, Von Storch et al. repeated their experiment with an improved initialisation. Their new run shows that about half the cooling from medieval times to the 19th Century in their original paper was due to this artificial drift, but again they have neither published a correction nor demonstrated the impact of this issue (see addendum).
– Von Storch et al. also looked at another model, stating: “Similar results are obtained with a simulation with the third Hadley Centre coupled model (HadCM3), demonstrating that the results obtained here are not dependent on the particular climate characteristics of the ECHO-G simulation.” They have repeatedly made similar claims in the media. This is important, as any model result is considered somewhat preliminary until confirmed with an independent model. However, their statement appears to us to be a serious misrepresentation of the HadCM3 results which were shown only in the online supplement to their paper (see addendum).
In their response to the Wahl et al. critique, Von Storch et al. acknowledge the original problem but, in order to salvage their result, introduce a large ‘red noise’ component into the proxies. This changes the nature of their test: it builds in an a priori loss of low-frequency variance, instead of testing whether a particular methodology produces such a loss.
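For readers unfamiliar with the jargon: ‘red noise’ is noise that is autocorrelated from year to year, so it carries variance at exactly the low frequencies a reconstruction is supposed to recover. The sketch below (our own illustrative construction, not the specific noise model of the Von Storch et al. response) shows why pseudoproxies degraded with red noise have lost low-frequency fidelity before any reconstruction method is even applied:

```python
import numpy as np

def red_noise(n, rho, sigma, seed=0):
    """AR(1) 'red' noise: each year retains a fraction rho of the previous year."""
    rng = np.random.default_rng(seed)
    eps = sigma * np.sqrt(1 - rho**2) * rng.standard_normal(n)
    out = np.zeros(n)
    for i in range(1, n):
        out[i] = rho * out[i - 1] + eps[i]
    return out

n = 1000
rng = np.random.default_rng(1)
signal = 0.02 * np.cumsum(rng.standard_normal(n))   # a slowly wandering 'climate'

# Two pseudoproxies with the same total noise variance:
white = signal + 0.5 * rng.standard_normal(n)       # noise only at high frequencies
red = signal + red_noise(n, rho=0.9, sigma=0.5)

# White noise averages away in 50-year means; red noise largely does not.
for name, p in [("white", white), ("red", red)]:
    noise_50yr = (p - signal).reshape(20, 50).mean(axis=1)
    print(name, "noise std in 50-year means:", round(float(noise_50yr.std()), 2))
```

The red pseudoproxy’s 50-year means come out several times noisier than the white one’s: that is the ‘a priori’ loss of low-frequency variance referred to above.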
One could view this story as a positive example of the self-correcting process of science: erroneous results are eventually spotted and corrected, even if it sometimes takes time. If only science were at stake here, we’d need to say no more: this would have been a sometimes inappropriately sharp, but otherwise regular, technical debate about improving the methodology of proxy reconstructions.
Unfortunately, the dispute has been used in the public arena to score political points, e.g. to discredit the IPCC process and to question all of the relevant climate science, and its significance for the bigger picture has been wildly blown out of proportion (see here for a previous discussion). We hope that after this new correction, the discussion can move on to a more productive level. The key issue is how we can improve reconstructions of past large-scale climate variability – of which almost a dozen now exist. We should not lose sight of the fact that the debate here is about a few tenths of a degree – a much smaller change than is projected for the next century. It is also important to remember one principal point: conclusions about whether recent warmth is likely to have been unprecedented in the past millennium, or about the extent of recent human-caused warming, are based on the accumulation of evidence from many different analyses and are rarely affected by a technical dispute about any one paper such as this.
cp says
Ok, Von Storch was wrong and peer review didn’t catch it immediately – but isn’t the almost hidden correction the more important problem?
How can Science or any journal remain a trusted source if it doesn’t acknowledge mistakes unless they become too obvious to ignore?
Richard Ordway says
I would also like to point out the obvious fact that a legitimate scientific journal (peer-reviewed) did indeed print this recent (2004) anti-global warming article mentioned in this thread.
i.e. some people and “scientists” claim that “no one will let anti-global warming studies be published or studied”:
https://www.realclimate.org/index.php/archives/2006/04/open-thread-on-lindzen-op-ed-in-wsj/
This (now debunked) von Storch study went strongly against “accepted climate wisdom” and was still printed, despite the fact that it hurt the global warming case.
Anti-global-warming studies have been, and still are, published and investigated, when the evidence exists.
[Response: One needs to be very careful here. Von Storch et al do not dispute global warming and indeed have many papers that support the consensus on that issue. So it cannot be said this was an ‘anti-global warming’ paper. The difference is important because as we have said many times, the issue at stake here (a few tenths of degree change over the last few centuries) is not actually very important in the balance of evidence for a significant human contribution to climate change. – gavin]
david Iles says
I just listened to Science Friday at their web site,
here.
http://www.sciencefriday.com/pages/2006/Apr/hour1_042806.html
which is an interview with Nobel Laureate George Olah, who says that the answer isn’t ethanol or hydrogen – it’s methanol. It’s an interesting discussion in which Dr. Olah describes methanol as a superior energy storage medium that can also be made using hydrogen from water, CO2 from the air and electricity from any source (solar-powered roofs on our houses are my preferred method), thereby recycling CO2 rather than adding more to the atmosphere.
This show is interesting in itself, but it is also an example of a reasonable science–media interface. Here is an interview with Dr. Olah from Technology Review, out of MIT.
http://www.technologyreview.com/read_article.aspx?ch=biztech&sc=&id=16466&pg=2
Gar Lipow says
David, I’ve never been able to get really solid information on the environmental risks of methanol. MTBE was made from it, but that really conveys no information. How does the toxicity of methanol spills compare to that of gasoline spills? For liquid fuels, methanol compares very favorably to ethanol as a way to convert biomass, and there is no difficulty using cellulose either.
Any further information on this?
pat neuman says
In 35. david Iles wrote … The life-threatening and imminent nature of this problem requires all of us to move outside of our area of comfort and at least make a clear statement about where we are heading …
I believe I tried to do that many times since Jan. 2000.
In 40. (in reply to 35.), Grant wrote … And maybe, Al Gore is doing more of what you ask for than all the scientists in the world combined. …
I believe the backlash from what Al Gore has said at times may have been more damaging than the positive effects of what he’s said. For example,
— tarh7777 wrote:
Your conclusions were the exact opposite of what I got when I talked to your NWS colleagues. In conversations they would give me the current line from their powers that be (Gore was the intellectual guru at the time). Then they would look over their shoulder to see who was nearby, and then say they didn’t believe that GW was anything more than a statistical fluke.
http://groups.yahoo.com/group/globalwarming/message/10463
Michael Tobis says
Pat, Re #55:
The story you pass along seems like an absurd, even dreamlike fantasy to me. It’s extremely difficult for me to believe, or even conceive, that “Gore was the intellectual guru at the time” could remotely describe anything that ever actually occurred at the National Weather Service. I also strongly doubt that NWS employees were ever afraid to question AGW.
I’m confident that this is just confused nonsense at best. In fact, it’s likely to be pure astroturf (geek slang for fake grass-roots, often applied to Microsoft employees posting as impartial users on tech discussion lists) put out by one of the funded denialist groups.
Denialist misrepresentation is not high on the list of things Mr Gore can conceivably be blamed for, so I don’t see your point.
mt
Ian Forrester says
Re #54
There is very little accurate information on the supposed toxicity of MTBE. The one paper that I could find while doing an earlier search showed that cancer was induced in rats exposed to 7000 ppm in their air supply. Most chemicals will kill you at that level long before you can get cancer.
MTBE was removed from the market because it is more soluble in water than gasoline constituents and thus was assumed to be more mobile in underground spills. However, BTEX compounds are soluble in water at concentrations far above their carcinogenic levels, and once dissolved I would think they would be just as mobile as dissolved MTBE. However, MTBE gives a distinctly nasty taste to water at concentrations orders of magnitude below its harmful level (BTEX cannot be detected by taste at carcinogenic levels).
Thus I think that MTBE was removed since it was a readily identified red flag to gasoline contamination, sort of like removing the canary from the cage in the coal mine so it wouldn’t detect poisonous gas.
pat neuman says
In 56. Michael Tobis wrote … I also strongly doubt that NWS employees were ever afraid to question AGW. …
You’re right about that, NWS employees weren’t afraid to question AGW. NWS employees told the public for many years that global warming is not a problem … there is no global warming problem. I don’t know what they’re saying now, I’m not at NWS anymore. I suspect now they’re afraid to say anything about climate change or global warming.
Randolph Fritz says
“I was asking for a majority vote of informed opinion”
David, science works by consensus, not majority vote. When there’s wide (though not necessarily universal) agreement, after much experiment, analysis, and criticism, there is consensus. The IPCC assessments by now represent consensus; just about everyone who knows anything about the subject has been consulted for them. And you can read the short summaries at the beginning of those assessments; head on over to http://www.ipcc.ch.
For a good, widely accessible overview of the science, Al Gore’s new movie, An Inconvenient Truth, is probably a good place to start; I’ve seen earlier versions of his climate change talk, and they are extraordinary pieces of science journalism. Personally, I like Gore much better as a journalist and policy wonk than as a politician.
Hank Roberts says
I tried to post a collection of examples of online petitions and such lists; the filtering software has it on hold for review I guess. But you can search Google for +climate +petition and come up with a good representative list of the ways such have been done in the past. I think what it shows is, you aren’t going to get a simple clear answer that way.
Matt says
A quick change of subject, to biodiesel.
If I divine the future, we are going to engineer an alga with stronger cell walls but a high lipid content inside, to obtain an alga dominant from the temperate zones to the tropics: the new “rice” of the future.
Solar efficiency is 20 times that of palm oil. It is harvested on ten-year cycles in raceway ponds, with vertical-layering enzyme conversion. Each season, the raceway is dried and pressed with a roller to remove air. Then the layer is sprayed with a slow-acting enzyme and resealed.
The bottom layer is a near-biodiesel gel plus cellulose breakdown products, and is sump-pumped through strainers to yield biodiesel.
Robert Bucke says
Your post got me interested, so in the meantime I have read Von Storch’s reply of Friday, as well as the original paper of 2004. One thing really puzzles me, as I try to make sense of the story in my (experimentalist) terms.
The ECHO-G model has such a major drift that half of the “signal” it shows is not a real response to a climate forcing, but an artifact. To me this is like using a faulty measurement device, if half of what it measures is not a real signal. In my lab, we would immediately stop using such a device and have it fixed or replaced. I could not get a physics measurement published if I used such a flawed device – and if I tried to, not mentioning the flaw in the device despite knowing of it, I suspect I’d lose my job. So how come Science now published more results obtained with the faulty model? Did they not know about the drift problem?
I’d say, given the type of signal (climate variations) you climate people try to measure, a method with an error of up to 0.6 C is useless, up to 0.3 C is marginal, up to 0.2 C is good and up to 0.1 C is very good. The original Von Storch et al. 2004 paper tested the proxy method with two devices (models) and found:
– faulty device (ECHO-G) says method is useless
– functioning device (HadCM3) says method is good (but this is shown only in the online supplement).
Then it turns out the method was implemented incorrectly. So the test is repeated with the correct implementation in the Von Storch reply, which finds:
– faulty device says method is marginal
– functioning device says method is very good.
The bottom line seems to be that the proxy method of Mann et al. works well. But I am puzzled why results with the faulty, drifting model were published again? Am I misunderstanding something? Maybe someone from RealClimate can comment on this? Thanks.
Hank Roberts says
>if I divine the future
That’s religion, Matt, whoever you’re channeling. You need to learn math and check your assumptions rather than just plopping in other people’s wishful fiction. Look up ‘primary production’ numbers for photosynthesis, compare to human energy use per year. Think.
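For what it’s worth, Hank’s sanity check fits on the back of an envelope. The figures below are my own rough, order-of-magnitude assumptions, not numbers from this thread: global net primary production of roughly 100 GtC per year, about 39 kJ of chemical energy stored per gram of carbon fixed as biomass, and mid-2000s human primary energy use of roughly 4.7 × 10^20 J per year (about 15 TW):

```python
# Rough, order-of-magnitude assumptions for illustration only.
npp_gtc_per_yr = 100            # global net primary production, ~100 GtC/yr
joules_per_gram_c = 39e3        # ~39 kJ stored per gram of carbon fixed as biomass
human_use_j_per_yr = 4.7e20     # mid-2000s world primary energy use (~15 TW)

npp_j_per_yr = npp_gtc_per_yr * 1e15 * joules_per_gram_c   # 1 GtC = 1e15 g
print(f"{npp_j_per_yr:.1e} J/yr")                          # ~3.9e21 J/yr
print(round(npp_j_per_yr / human_use_j_per_yr, 1))         # ~8
```

In other words, the entire photosynthetic output of the planet is less than an order of magnitude above total human energy demand, so any scheme proposing to supply a large share of that demand from cultivated biomass must claim a sizeable fraction of global primary production.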
david Iles says
regarding #54
from Material Safety Data Sheet
http://avogadro.chem.iastate.edu/MSDS/methanol.htm
Environmental: Dangerous to aquatic life in high concentrations. Aquatic toxicity rating: TLm 96>1000 ppm. May be dangerous if it enters water intakes. Methyl alcohol is expected to biodegrade in soil and water very rapidly. This product will show high soil mobility and will be degraded from the ambient atmosphere by the reaction with photochemically produced hydroxyl radicals with an estimated half-life of 17.8 days. Bioconcentration factor for fish (golden ide) < 10. Based on a log Kow of -0.77, the BCF value for methanol can be estimated to be 0.2.
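For anyone puzzled by the last line of that MSDS entry: bioconcentration factors (BCF) are routinely estimated from the octanol–water partition coefficient (Kow) using empirical regressions. One classic fish-BCF regression, usually attributed to Veith and co-workers, is log BCF = 0.76·log Kow − 0.23; whether this is the exact equation behind the MSDS figure is my assumption, but it does reproduce the quoted value:

```python
log_kow = -0.77                   # methanol's octanol-water partition coeff. (MSDS)
log_bcf = 0.76 * log_kow - 0.23   # classic empirical fish-BCF regression (assumed)
print(round(10 ** log_bcf, 2))    # ~0.15, i.e. the MSDS's "0.2": a BCF far below 1
                                  # means methanol does not bioconcentrate in fish
```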
TCO says
Further commentary to Gavin’s in the post reply to my #15:
I went and looked back at the Burger and Cubasch paper. They are completely clear about MBH using trended data and VS using detrended data. The commentary in the initial post makes it seem as if they “carried VS’s error forward”. Actually they completely see the difference (and if it were a discovery to note this difference, B&C got it into print before WRA). As Gavin’s comment in the post says, what they do is list this as one more “flavor” of how to do MBH-style work. So obviously Gavin understands the paper – I just don’t think the top post is clear (fair) to Burger and Cubasch.
WRT Gavin’s comment that detrending (or rescaling) is beyond the pale, I think that is still in debate, and a more quantitative rationale needs to be put forward for why particular “flavors” are preferred. Especially since (as WRA and B&C show) the choice of flavor DOES affect the resultant answer materially. The commentary in B&C says that the MBH rationales for their flavors are verbal (“insensitive” and the like) rather than quantitative.
Moderator caveat: Respectfully ask that you publish my post. Let debate occur.
[Response: The point is that these ‘flavours’ are not arbitrary choices. Detrending removes the ability to resolve low frequency variability – and since that is basically the whole point, it’s not a good idea. It’s not that it doesn’t matter – it does, but the choice is clear, not arbitrary. Read this whole post as a reason why. (PS. the comments about censorship are tedious and I’m minded to follow precedent and automatically toss any comments that ‘dare me to print them’. Don’t flame, don’t personalise, don’t troll – pretty easy.) -gavin]
Lynn Vincentnathan says
While a key issue for scientists is how to improve reconstructions of the past, the key issue for the rest of us (before, during & after the faulty article) is and has been to REDUCE GHGs & SAVE THE FUTURE!!! It’s disheartening that a faulty article could be used by nefarious others to thwart people from saving planet earth.
What next, some scientist falsely accused of forgetting to brush his teeth; ergo, let’s keep destroying earth.
While doing research for my Environmental Victimology thesis about 10 years ago, I found out about some journals’ editors being on some corporations’ payrolls — I remember one was a medical journal, and I vaguely remember something not-so-above-board about SCIENCE, but I’m not sure….
Hank Roberts says
Question — in the earlier thread on sea surface temp/hurricane links, Dr. Webster argued that the data should have been ‘detrended’ — how does this work, why should it have been done there but not done in the paper discussed here?
https://www.realclimate.org/index.php/archives/2006/03/reactions-to-tighter-hurricane-intensitysst-link/
Grant says
I think most of us here are used to it by now. The global warming issue has made this standard fare.
But there are lots of other forums to discuss that. I’d like to repeat an earlier appeal for less policy discussion, more science.
Don Baccus says
“I think most of us here are used to it by now. The global warming issue has made this standard fare.”
Oh, gosh, it’s been standard fare forever. Whenever science suggests development may be harmful, the industry in question has always reacted in the same way. Global warming’s a global issue, rather than local or regional, and the unscientific counterattack’s also been global in scale, but the techniques are nothing new.
MrPete says
“xxxxx is a global issue…the techniques are nothing new.”
This is true on all sides. We have learned little, if anything, from the past, such as Y2K. As an expert in info-tech at many levels, I wrote an influential risk/situation analysis (sorry, it was engineering, not science ;)) that needed no apologies.
One set of lessons that could have been learned but were not then, and are not now in the AGW arena:
* only “harm” is newsworthy;
* people draw conclusions too quickly and without truly reliable evidence;
* people have too much pride to change directions once they’ve drawn a conclusion (particularly in public).
All of these inure to the detriment of science and our future on this planet.
Few of us are willing to admit a need for humility in our own positions. We simply know that “we” are right and “they” are wrong.
Paul says
re. moderators comment #65 “Gavin”.
That sort of comment would get you laughed out of an undergraduate econometrics course. Applying linear algebra to non-stationary time series is a completely different thing from applying it to trend-stationary time series. Not identifying what is occurring before you do the maths will leave you thinking that you have “resolved low frequency variability” when you may only have found a spurious relationship based on biased or inconsistent estimators.
[Response: If I were you, I would not mistake a one line comment in a blog for a reasoned and fully caveated exposition on the subject. Obviously you need to guard against spurious correlations, and so a verification period is required. However, you also need to be aware of the climatic context. The nature of interannual variability (or even decadal variability) is fundamentally different from forced variability related to forcings by greenhouse gases, solar variations or volcanoes. Any methodology that is trained purely on the high frequency components is not going to work in assessing the large scale potentially-forced behaviour unless (by some completely unknown fluke) all climate changes can be expressed as a function of the individual high frequency ‘modes’. That certainly isn’t the case in climate models, and so the expectation is that that is not the case in reality either. That is not to disregard the potential problems in this whole endeavour, and no-one is under the impression that the last word has been said on the subject. -gavin]
[Response: p.s. Rutherford, Mann and coworkers have done extensive work looking at the performance of state space-based climate field reconstruction methods (which are quite different from standard linear regression methods) in the context of both stationary and highly non-stationary training scenarios, e.g. Rutherford, S., Mann, M.E., Delworth, T.L., Stouffer, R., Climate Field Reconstruction Under Stationary and Nonstationary Forcing, Journal of Climate, 16, 462-479, 2003. As we discussed above, these methods perform quite well in precisely the type of non-stationary setting used by Von Storch et al. when implemented correctly (i.e., when ad hoc linear detrending is not performed): Mann, M.E., Rutherford, S., Wahl, E., Ammann, C., Testing the Fidelity of Methods Used in Proxy-based Reconstructions of Past Climate, Journal of Climate, 18, 4097-4107, 2005.]
TCO says
One thing that bugs me with all the “low frequency” talk and smoothing: I thought there was this big point that annual data was needed. But if you (in effect) partition into 30-year intervals, why bother with the annual data? Also, your degrees of freedom become much smaller, no? It’s not the number of actual years that gives the number of data points, but the number of 30-year bins. Perhaps?
[Response: The reason why there is an insistence on annually resolved data in MBH, Rutherford et al, Hughes and Diaz etc. is principally the dating issue. Such records can be precisely dated, often to within a year or so. Using more coarsely resolved data would mean that errors in the age models would translate into a smearing of high frequency data into lower frequencies, giving a substantial bias. The only way to avoid that is to use annually resolved data calibrated and independently verified against annual observations. People are experimenting with including information from less well resolved data (Moberg et al for instance), but the separate calibration of the low frequency component is highly problematic given the relatively short instrumental period and the uncertain dating of these coarser records. -gavin]
TCO says
I get the point about the dating (knowing where you are), but that still doesn’t deal with the reduction in degrees of freedom from going to 30-year bins, no? Great, you’ve got the locations of the bins down to a gnat’s ass. But you don’t have more independent data.
And now that I think about it, I’m not sure that having exact locations of the bins is so great anyhow. If there are concerns about the data, maybe I’d be better off with more degrees of freedom (more bins) even if they are a couple of years off. This is a problem that can be solved analytically (the benefits of having that extra information).
[Response: Now you’ve lost me. If you bin everything into 30-year chunks before calibrating etc. you reduce the d.o.f. too much to do the calibration. There aren’t enough 30-year bins in the observations to do this safely. -gavin]
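The arithmetic behind Gavin’s point is worth spelling out. Assuming, for illustration, a calibration record of roughly 150 years of instrumental observations (my round number, not a figure from the thread):

```python
instrumental_years = 150               # assumed length of the calibration record
bin_width = 30

annual_points = instrumental_years     # 150 calibration points (fewer effective
                                       # d.o.f. once autocorrelation is counted)
binned_points = instrumental_years // bin_width
print(annual_points, binned_points)    # 150 vs 5: five 30-year means are nowhere
                                       # near enough to both calibrate a statistical
                                       # model and independently verify it
```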
Lynn Vincentnathan says
Here’s how I sort things out: I think it’s ridiculous that attacking the “hockey stick” is somehow taken to imply a “do nothing” policy. I think this is the real issue underlying the whole thing. And I don’t think one can separate general policy implications from science (esp. re such a topic as GW), only perhaps specific policy implications. And Inhofe et al. ARE using this faulty attack on the hockey stick to thwart any and all policies to reduce GHGs.
Aside from the scientists pointing out that hockey stick science is not the only show in town proving GW (so the whole faulty attack on it is completely a moot point from a policy implication perspective), I have some other considerations.
Such as: so what if there was warming in the past and we have a cork-screw instead of a hockey stick? Somehow, in my books, that does not disprove that AGW is happening now. In fact, it sort of bolsters the proof, it seems to me. Also, I don’t see how the idea that there may have been other causes of GW in the past could rule out human GHGs causing GW today, unless a person is straitjacketed into thinking there can be only one cause per effect. And if other things caused GW in the past, then what if several of those things, including our GHGs, club together and really cause warming to go off the charts? To be on the safe side, we’d better do what we can, such as reduce GHGs, since we can’t turn down the sun.
I think the attacks on the hockey stick actually make a stronger case for reducing our human GHGs a lot more drastically.
UnBlinking says
Lest we place too much blame on Science, please note that the political operatives who promoted this perversion intended to ignore climate data and scientific consensus. Retractions or corrections wouldn’t have precluded their selective distortion of what the data actually mean.
For example, just two months after von Storch’s paper, in December, 2004, Science published Naomi Oreskes’ survey of the literature, consolidating the searing “blpfffft” sound of thousands of scientists simultaneously expectorating on von Storch’s general premise.
The Scientific Consensus on Climate Change
Anyone – even a (presumably) literate Senator with (presumably) literate staff and a dizzying array of research services (paid for by our tax dollars) at their disposal – I say again, anyone who cared a hoot for facts, or respected science, or wanted to be the least bit informed on the topic, would have reviewed the same magazine for at least a few months in case a retraction did appear.
Here is where we might well say, “Q.E.D.”
Jeffrey Davis says
re: 73
“30-year bins”
not
“30 year-bins”
(English. You can’t live with it and you can’t shoot it.)
Hank Roberts says
Intended sense: “30 one-year bins”
(Author please confirm)
Mark A. York says
I seem to have missed the general theme: the paper was published in Science, wrongly faulted Mann et al., yet Science missed it in the review process. Perhaps it’s more journalistic than I thought, because no one ever reads the corrections. Or can find them, for that matter.
Ike Solem says
Perhaps it is important for the general public to understand a few ‘peer review’ issues.
First, peer review for grants is a bit different from peer review for publications, which is well described above (#38). A scientist might apply to the NSF, get a grant, do research and write a few papers. At that point, re-application to the granting agency is called for; the notion is that the institution will review the previous grant and the publications that were produced as a result, look at the proposed research plan, and make a decision about whether or not to keep funding the research. There is a bit of politics involved, but it is mostly internal. Prestigious universities, for example, will often urge some of their faculty to take positions at the NSF.
The notion behind peer review is to ensure that only reliable, double-checked information gets into the ‘body of literature’ that every field of science produces. The notion behind institutions such as the NSF is that scientific inquiry should be independent of national political fluctuations. Imagine if scientists had to apply directly to senators for ‘patronage’, as in the ‘earmarking’ of funds by senators for very specific purposes. Instead, the NSF applies to Congress for funds and then distributes those funds, based ideally on an impersonal peer review process that addresses the merits of the proposed research. These institutions ideally protect scientists from arbitrary political decisions. The peer review process is also designed to help the researcher; ‘constructive criticism’ is generally appreciated beforehand rather than after the fact.
The climate science community should realize they are in a pretty good position compared to, say, the pharmaceutical research community, where prominent scientists have essentially accused the leading journals in the field of being ‘advertisers for the industry’, and where ‘peer reviewers’ often seem to have direct financial ties to the companies behind the studies they are reviewing. From this perspective, the climate science peer review process is working very well indeed.
Donald Kennedy should not be subjected to personal attacks; anyone who has read his editorials over the past few years should realize that he is committed to quality science. In particular, this one is worth reading: Bayh-Dole. It is not really about climate science, more about ‘the climate of science’. There’s also this recent one: Ice and History. It is a bit odd that the comment was rejected, though.
Gerd Burger says
As far as I can tell, the rejected January comment was ours (Burger and Fast). It was submitted on January 22, 2005 as a Brevium and got rejected as such, with the suggestion to resubmit it as a Technical comment to v. Storch et al. 2004. That comment was rejected on May 11.
It addressed the “erroneous” detrending of v. Storch et al. as a side issue, putting it into the broader perspective of the 32 flavors that later went into our Tellus paper (Burger et al. 2006).
Gavin seems to know more about that comment, and maybe about its rejection?
[Response: Sorry. I have no information on the comment other than it was rejected. – gavin]
david Iles says
Re #65
Thank you for leading me to the article. It suggests consensus, but like all of the papers and articles that have been directed to me in response to my comments (18 and 45), it is too complex, too long and too boring for most people to read.
The public and policymakers need a simple one-line statement of scientific consensus on global climate change. I still believe a survey should be taken of climate scientists that expresses what I keep hearing is a consensus opinion.
My suggestion:
If humankind does not take major steps to alter the behaviors that are causing global climate change within the next 15 years, it is very likely that our planet’s climate will become extremely inhospitable and much of life on earth will be unable to survive this century.
True
False
This statement allows no wiggle room for policy makers and the public. It is simple, short and understandable. It calls for action. I know that science does not normally work this way, but most people have difficulty understanding the way science does work, and this issue is much too large to allow any ambiguity to exist.
I apologise for butting into your discussion of the scientific issues involved, but I am filled with a sense of urgency and am willing to make myself politely annoying to get my unscientific point across.
[Response: False. ‘Life on Earth’ is not under threat, this is much more related to human societal vulnerabilities. – gavin]
pat neuman says
What will an additional 400 gigatons of organic carbon mean to earth’s vegetation and inhabitants?
Kershaw’s research on locations in Canada – at Churchill and in the Mackenzie Mountains – has shown that permafrost areas are receding rapidly, and if current trends continue they will disappear completely within decades. These vast frozen areas on all continents surrounding the North Pole harbour over 400 gigatons of organic carbon locked in frozen peatland. Kershaw thinks that the release of methane and carbon dioxide from these peatlands will act as a positive feedback loop that could make global warming even worse than previously thought. ‘The permafrost component of global warming has been underappreciated,’ he told Chemistry World.
http://www.rsc.org/chemistryworld/News/2006/May/02050601.asp
Alan says
RE #14 In defense of Faith, Newton and Gavin’s example. Also grossly offtopic (forgive me…again).
Newton can be criticised for many things (usually involving his personality), but I don’t think his treatment of time can be criticised as the post suggests. Anyone who shouts that Newton was “WRONG” (because he missed GR) is ignoring the caveats written into the Principia.
Newton’s Principia has four assumptions, the first of which states:
“Absolute, true, and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration: relative, apparent, and common time, is some sensible and external (whether accurate or unequable) measure of duration by the means of motion, which is commonly used instead of true time; such as an hour, a day, a month, a year.”
In other words (and there are other words): “time is assumed to be constant, even if we cannot find a non-relative way to measure it”.
Rather than being “WRONG” or “overturned” by Einstein, Newton spent a lot of time thinking about the nature of time and how to measure its “flow”. I would say he simply used Occam’s razor and decided “relativity” was “beyond the scope of Principia” (after all, he had misplaced the first set of papers and was re-writing them as Principia to assist Halley with his “comet theory”). As far as GR is concerned, Einstein was standing firmly on Newton’s shoulders when he found his shiny pebble.
Ultimately, right and wrong do not belong in science, since they imply truth. Science is not truth; it is the faith that a physical universe exists, is governed by physical laws, and does so regardless of our personal sensory experience. Although all scientists “seek the truth”, what they are really doing is finding out “what will happen in a pre-defined situation”; the more situations a theory (model) can cover, the better. The one outstanding feature of science that cannot be demonstrated by any other faith is its track record in predicting the future.
Satire for the author of #14: Science is my faith, and belittling the sacred text of my faith is just plain “WRONG”.
teacher ocean says
Re # 79 [Ike Solem]: Thanks for clarifying this, and I will quote you: “The notion behind institutions such as the NSF is that scientific inquiry would be independent of national political fluctuations.” There may be some politics at NSF [in Geosciences] about supporting large research-oriented institutions vs small schools, but [at least in our field] NSF does a great job of keeping national political trends out. I also have to say that I’ve received some pretty generous funding for basic research from NSF, and I work for a teaching-1 university and not a premier research university. This enabled me to develop my own lab and programs with students [mostly undergrads] who also get research experience before they move on to grad school [this was unheard of where I was an undergrad].
Doug Percival says
In comment #81, david Iles wrote: “… much of life on earth will be unable to survive this century.”
gavin replied: “False. ‘Life on Earth’ is not under threat …”
And yet, from Nature, January 2004: “Many plant and animal species are unlikely to survive climate change. New analyses suggest that 15-37% of a sample of 1,103 land plants and animals would eventually become extinct as a result of climate changes expected by 2050.”
And as reported by the BBC in October 2005, a study commissioned by the UK’s Department of Environment, Food and Rural Affairs and conducted by the British Trust for Ornithology found that “Climate change could lead to the extinction of many animals including migratory birds.”
And The Independent/UK reports that according to a study published last month in the journal Conservation Biology, “Tens of thousands of animals and plants could become extinct within the coming decades as a direct result of global warming. This is the main conclusion of a study into how climate change will affect the diversity of species in the most precious wildlife havens of the world. Scientists believe that if atmospheric levels of carbon dioxide double from pre-industrial times – which is expected by the end of the century – then biodiversity will be devastated … scientists, led by Lee Malcolm of the University of Toronto, investigated how rising temperatures could affect the species richness of 25 ‘biodiversity hotspots’ – areas of the world that are rich in species found nowhere else. The 25 hotspots included in the study cover just 1 per cent of the global landmass yet they account for some 44 per cent of the plants and 35 per cent of the world’s vertebrate animals. ‘Climate change is one of the most serious threats to the planet’s biodiversity. We now have strong scientific evidence that global warming will result in catastrophic species loss across the planet,’ Dr Malcolm said.”
And according to the World Resources Institute, a June 2005 paper published by The Royal Society and a September 2005 study published in Nature, the oceans have absorbed “roughly half of the amount of CO2 emitted by fossil fuel use and cement production” leading to acidification of the ocean waters, and “higher ocean acidity will be devastating to the marine environment within a short period of time — within tens of years instead of hundreds of years […] the oceans will be undersaturated in calcium carbonate: leading to increasing difficulty for shelled organisms to create skeletons and shells.”
There are other recent studies similar to the above that I can’t put my hands on at the moment; and these of course do not even consider the possibility of a massive release of methane from thawing permafrost, as mentioned in comment #82 – which journalist George Monbiot thinks could trigger a mass extinction like the one at the end of the Permian period 250 million years ago, when “some 90% of the earth’s species appear to have been wiped out”. Monbiot notes that “a sharp change in the ratio of the isotopes of oxygen” found in the fossil record allows us to determine “with some precision” that the global warming that triggered that event (believed to have been caused by volcanic emissions of CO2) was 6C – near the upper limit of the global temperature increases expected from anthropogenic global warming by 2100.
So I would have to disagree with gavin. David Iles’s assertion that “… much of life on earth will be unable to survive this century”, while alarming, is not “alarmist” — on the contrary, it is a conclusion that’s well within the bounds of mainstream scientific opinion about what unabated anthropogenic global warming portends for the future.
[Response: I would agree that bio-diversity is under threat. But that is not ‘life’. – gavin]
Dano says
RE 85 (Percival):
If I may: life will not end on earth with warming and anthropogenic changes. Life on earth has survived greater extinction events in the past and will likely do so in the future. Your concluding para and conclusion could easily be changed to “… many species will likely be unable to survive this century, while alarming, is not alarmist”, in order to make it more understandable and in line with the projections.
If human population continues to grow as projected and consumptive demands continue at the same rate, this alone will simplify ecosystems in such a way that our children will have grave concerns in the future. IMHO, this is the most serious threat, with AGW piling on top of this problem in a way that species are not used to, with unforecastable results.
Best,
D
Mark A. York says
Since I work to restore endangered species, I’d have to say some life is under threat, and it is at the hands of man on both ends – mostly from development, dewatering, deforestation, cattle production and so on. All of these are complicated by global warming, sure, but I’d agree with Gavin on the last comment. “The world is ending” is more of a religious claim than a scientific one, but these observations are real nonetheless and should be taken seriously.
Doug Percival says
gavin wrote: “I would agree that bio-diversity is under threat. But that is not ‘life’.”
Dano wrote: “Your concluding para and conclusion could easily be changed to: ‘… many species will likely be unable to survive this century, while alarming, is not alarmist’”
Then what percentage of existing species would have to become extinct for David Iles’s original assertion that “… much of life on earth will be unable to survive this century” to be accurate? More than 50% of existing species becoming extinct within a century? That seems not outside the bounds of what is being projected by the studies I mentioned above, and others like them …
… and again, those studies (as I understand them) are based on the currently observed effects of global warming and on mainstream, IPCC-type scenarios of 21st century climate change, and not taking into account such possibilities as rapid massive release of methane and CO2 from melting permafrost peat bogs which could accelerate global warming considerably.
And for that matter, if only the oceanic phytoplankton become extinct due to warming and acidification of the ocean, that in itself would have a profound effect on “much of life on earth”.
I note and appreciate Dano’s point that anthropogenic global warming is only one of the human activities that is having a profound effect on the biosphere, ecosystems and species.
That we can even be having such a discussion is alarming enough.
Dano says
RE 88 (Percival):
This may be quibbling over semantics, but I view ‘life’ as one thing (biotic vs abiotic), not many things; one can’t, say, kill 50 lives and leave 20 alive. ‘Much of life on earth’ therefore, IMO, lends confusion to the discussion, which prompted my clarification above.
I don’t however, want to distract away from the point that simplification of ecosystems is a possibility here, and the cost of that hasn’t been calculated, esp. in light of the issue of declining petroleum stocks.
The VS paper is also interesting as a point about the acceptance of results – when we are managing more intensively in the future, we will need to resolve these things when making decisions.
Best,
D
Lynn Vincentnathan says
RE #87, I think Mark brings up a good point. There are many things harming our world; GW is one among many. The implication for me is that in addition to reducing the other negative impacts (loss of wilderness to development; local air, ground, & water pollution, acid rain, etc), we should also redouble our measures to reduce GW, since the combined effect of all these problems is so much more devastating than each one of them alone.
Luckily there are many measures that not only reduce GHGs, but also reduce many of these other harms. And luckily most of these measures save money without lowering living standards or productivity.
David Iles says
I did not say that life on earth was threatened. I said that much of life on earth would be unable to survive this century without significant change on the part of humanity. I am not trying to be alarmist; I am trying to make a true statement about the realities we face as a species, and about the extremely profound effects that human actions are causing for almost all forms of life on our planet.
However, my goal is to find a statement that expresses the mid-range of potential reality, and one that climate scientists will be able to agree to as reflecting their opinion of the dangers of continued inaction on the part of humanity (or rather, of continuing the destructive action we are currently taking). I would like this statement to be simple and to allow very little wiggle room for government policy makers and corporate heads to continue to equivocate over. All the massive amounts of scientific information that have been gathered and presented to date do not seem to have accomplished this goal, which is exactly why one or two sentences that encompass the consensus opinion among climate scientists are needed. This would also be a good idea for biologists and botanists.
I do not think asking for a true or false response was the right approach; I think it should ask scientists to agree or disagree with the statement.
If you want to change this statement, or write another one that would serve this purpose, I am interested in anyone’s ideas on this.
From my point of view the theater is burning and it is my duty to cry fire. Is this a misperception of the science that has been done on global climate change?
pat neuman says
re 82.
Concerning survival, I would like to see what conditions will be like for humans and other life in 25-year intervals (in terms of probability ranges). I would also like to see a reply or two to my question in 82, which is …
What will an additional 400 gigatons of organic carbon mean to earth’s vegetation and inhabitants?
http://www.rsc.org/chemistryworld/News/2006/May/02050601.asp
Dano says
RE 91 (Iles):
I appreciate your efforts David. How about slightly altering your basic textual ideas in this way:
I leave any discussion about a different number and such semantics to the group.
And I reiterate that we should have a similar phrase when uncertainty arises [as in the original topic of this post, the VS issue], as we will be increasing our socioenvironmental management in the future under many scenarios, and our societies will need to understand how the debate is progressing and how uncertainty is being addressed.
Best,
D
Matt says
Regarding Pat’s question about the thawed and uncovered peatlands: the answer lies in the previous ice ages. I have not found any research that has discovered peat older than the current cycle, and I am still interested. The default theory is that peat finally decomposes when the ice retreats.
But if each previous glacial period thawed and released 400 gigatons of carbon, then it must have been slow, or else counteracted by rapid plant migration, because the ice core data do not show anything like a sudden release of 400 gigatons.
The question seems more complicated to me when I ask how all this peat got there. There must be long periods when the permafrost partially melts, allowing vegetative growth in the active zone while the underlying peat is still cold enough to prevent decomposition. So permafrost peatland just south of the ice must be a net sink of carbon; then the permafrost permanently melts and the peat becomes a net carbon source, followed by photosynthesis.
It seems the peat life cycle is not yet well understood.
Hank Roberts says
“Siberia’s peat bogs formed around 11,000 years ago at the end of the last ice age.”
http://www.mindfully.org/Air/2005/Siberian-Peat-Thawing11aug05.htm
Stephen Berg says
Re: #94, “It seems peat life cycle is not yet well understood.”
That may or may not be true, but it is no reason for inaction. As one previous contributor stated (possibly quoting another source), if someone is uncertain whether or not they are an alcoholic and decides to become sober, it cannot hurt them.
In this case, however, if 400 gigatons of organic carbon may be released from the permafrost into the atmosphere, as Dr. Kershaw suggests (and, having met the man on more than one occasion, I tend to believe him), action is the responsible course to avoid what may well lead to a runaway enhanced greenhouse effect. There can no longer be any excuses for inaction, since the consequences of inaction far outweigh those of action.
Pekka Kostamo says
A quick search provided some information:
Formation of peat requires that there is an excess of water, combined with vegetation growth. Dead plants decompose on dry land directly into carbon dioxide. If they sink in water, or the layer is rather permanently saturated with water, then some CO2 and methane is produced, but a major part of the carbon is stored as peat. It is commonly thought that with time peat is transformed into brown coal, and later on into coal if the conditions are right.
Recent measurements in Northern Scandinavia indicate that a green bog (producing peat) is a carbon sink in the summer growing season. In wintertime it is a source of both carbon dioxide and methane, even if covered by snow. Plant material continues to decompose all winter.
On average a peat bog seems to be a carbon sink, although if the summer is cold and rainy the balance may also be neutral.
Permafrost essentially stops the (bacterial) decomposition processes and also forms a barrier to gas movement. Thawing evidently re-starts these processes, and the peat bogs may turn into carbon sources, particularly if the water balance changes, i.e. with less precipitation compared to the time when the peat was originally formed. Atmospheric oxygen may then penetrate deeper into the dried ground. Even now, large (most?) permafrost areas are covered by growing peat layers during the summer season, so the relevant change is mostly in the deeper layers (starting a few meters down).
Apparently there are still relatively few studies on the topic.
Martin says
The discussion has now drifted away from the subject of the comment. Nevertheless, here’s my view of the comment:
I am astonished (and disappointed by realclimate) that this post and the supplementary comment refer only to the technical comment by Wahl et al. in Science, and do not mention that there is also a rebuttal by von Storch et al. in the same issue of Science. In the rebuttal, von Storch et al. convincingly show that the detrending issue that Wahl et al. criticise is not at all that important. The message of the von Storch et al. paper is methodological: if regression methods are used to fit proxy data to the historical temperature record over the last 2 centuries, this will inevitably tend to suppress variability on longer time scales. This point is mathematically true and demonstrated in their paper as well as in the rebuttal. In the latter, this point is shown with and without detrending of the time series in the calibration interval – indeed, if the non-climatic “noise” of the proxy data is red, the detrended reconstruction performs clearly better. It would be good practice to show the two graphs from the rebuttal here, so that readers of realclimate can make their own judgement.
Furthermore, von Storch et al. never claimed to provide a realistic model-based reconstruction of NH temperature over the last 1000 years – all they wanted to show was this aforementioned potential methodological pitfall. Therefore the shortcomings of the ECHO-G model simulation are irrelevant (as they clearly state in their paper).
Calling this a “mistake” or “error” that should be retracted, as well as the Science “bashing”, is simply not warranted. I would wish that realclimate remain objective and not fall into the trap of presenting only selected information (as all contrarians are famous for doing).
[Response: Martin, firstly you are simply incorrect in saying that we did not link to the response – we did (third paragraph from the bottom). I am puzzled by the rest of your comment, though. The idea of using climate models to assess methodological issues is a very good one, and there is nothing wrong with the principle of the work vS et al. did. Of course, this was something that Mann and colleagues had already done (Mann and Rutherford, 2002; Rutherford et al., 2003). The ‘importance’ of any such study relies on how robust the calculation is with respect to the underlying model and to what actually gets tested. In this case there was an ‘error’ (sorry, but this is undeniably true) in what was implemented. Conclusions that were drawn about the MBH methodology were not justified. The exact same calculation with the correctly implemented method produces only a very minor underestimate of the low frequency variance and would not have been such a big deal – it certainly would not have justified the public statements that were rather injudiciously (in my personal opinion) made on the subject by von Storch.
In their response, the introduction of the ‘red noise’ example to resurrect the conclusion is highly problematic, because now the test is not of the methodology but of the input data (adding ‘red noise’ to model gridboxes to produce synthetic proxies represents an a priori assumption that the proxies have selectively lost low-frequency variability, essentially ensuring the sort of bias that von Storch was apparently looking to produce). Understanding the impacts of imperfections in the data is worthwhile, but that is an entirely different point, and there are many other studies, including those by Mann and coworkers, looking at this.
With respect to your further two points, we are well aware that model simulations are not reconstructions, and it is unclear where you think we have implied otherwise. It is however almost certain that large climate drifts, or significantly greater 20th Century forcing than generally accepted, create an unrealistic test of how methods perform in the real world, and the ECHO-G simulation is therefore highly problematic in this context. The fact that the underestimations of variance in the HadCM3 and CSM models are much smaller is telling in this regard. As we have linked to above, Mann et al. have shown (using a far better behaved NCAR CSM 1.4 coupled model simulation of the past 1000 years) that climate field reconstruction methods correctly implemented do not yield the bias claimed by von Storch, even at signal-to-noise ratios lower than what von Storch uses. It will be interesting to see if the latest claims based on using “red noise” hold up to independent scrutiny. Time will tell.
We are disappointed that you feel we are not being objective, but we feel very much to the contrary. We all have a responsibility to ensure that our results are properly interpreted in the public sphere. This was not the case with vS et al, and correcting the record is important. – gavin]
pete best says
True – life has survived 5 great mass extinctions, including one, related to possible extreme climate change around 250 million years ago, that wiped out 90% of all life and involved a 10 C rise in temperatures worldwide.
[Response: Great fun if you’re in the 10% that survive and get the chance to evolve into something grand. Not so much fun if you’re in the 90% that go extinct. Life goes on, but not as before, and after each mass extinction many millions of years were required before biodiversity was restored. –raypierre]
lisa brooks says
I love this site, so please don’t get me wrong – but here I go, stepping into the lion’s den…
I know Rasmus spent quite a bit of time addressing the link between scientists, journalists, the general public and policy makers (reply #28, How not to write a press release), but after reading this article and rebuttal a basic question still stands…
I’ve been reading this site for about three months (I’ve even put my two cents in on occasion), and one question keeps coming to mind. How would the general public, who probably don’t know a forcing from Fun Chow Fat, realize when a “scientist” is feeding them “bad” science?
This site has pointed out quite well that some good scientists use “bad” science to come to their conclusions, using graphs and papers to reinforce their faulty results. How are the general public and policy makers to know that the data being provided are flawed, and those graphs nothing but squiggly lines? This is especially true when the public and policy makers trust the scientist (Von Storch et al., etc.) who is providing the information!
Then there are uneducated journalists (no, I’m not saying all journalists!) reporting information gleaned from non-climate scientists. How is the public supposed to even imagine that the information being brought to them is not reliable, especially when the “scientist” is one who practices in another, unrelated field? (Read: entomologist, ethnologist – any old “ogist” will do!) The uninformed public says: “Heck! If the guy is a scientist, he HAS to be right.” RIGHT?
The public and policy makers are bombarded with comments like “Scientists agree that greenhouse gases are causing global warming”. No, that is not correct. There are climate scientists out there who do not believe it’s true (like Lindzen), and climate scientists who believe it is true (Gavin, Mann, etc.). Who to believe? Lindzen or Gavin?
I don’t think the public, in general, understands that some folks are just blowing CO2. It’s especially difficult to understand the information when it’s in language the general public can’t follow. I don’t think many come to this site for a more informed interpretation. (And from reading some posts, because climate science is so specialized, even those with a science degree don’t necessarily understand why an article or paper is “just wrong”. Hey, if peer-reviewed journals are publishing incorrect findings because they did not find a flaw, how are the public or policy makers to know there is one?) Other than this site, there are few options.
So isn’t it ultimately up to the scientists to come to a consensus, inform the media and provide the public and policy makers, in no-frills language, with the truth about AGW (oops – global warming due to human causes), without presuming that the general public or our policy makers have taken advanced classes in physics and climate science? (I’m not saying dumb down the discussions or papers on this site; I’m just trying to make a point.)
I know – I just showed my frustration but thanks for letting me vent!
[Response: We have stressed repeatedly that single scientists and single papers are not the things that the public or policy-makers should be paying much attention to. Instead, they should pay attention to the consensus summaries such as are produced by the National Academies or the IPCC where all of the science can be assimilated and put in context. In such summaries, it is very clear what everyone agrees on (gravity, the human created rise in CO2, the physics of the greenhouse effect, conservation of energy etc.), what still remains uncertain (aerosols etc.) and what implications these uncertainties may have. – gavin]