Nature Geoscience has two commentaries this month on science blogging – one from me and another from Myles Allen (see also these blog posts on the subject). My piece tries to make the point that most of what scientists know is “tacit” (i.e. not explicitly or often written down in the technical literature) and it is that knowledge that allows them to quickly distinguish (with reasonable accuracy) what new papers are worth looking at in detail and which are not. This context is what provides RC (and other science sites) with the confidence to comment both on new scientific papers and on the media coverage they receive.
Myles’ piece stresses that criticism of papers in the peer-reviewed literature needs to be in the peer-reviewed literature and suggests that informal criticism (such as on a blog) might undermine that.
We actually agree that there is a real tension between a quick-and-dirty pointing out of obvious problems in a published paper (such as the Douglass et al paper last December) and doing the much more substantial work and extra analysis that would merit a peer-reviewed response. The approaches are not, however, necessarily opposed (for instance, our response to the Schwartz paper last year, which has also led to a submitted comment). But given everyone’s limited time (and the journals’ limited space), there are fewer official rebuttals submitted and published than there are actual complaints. Furthermore, it is exceedingly rare to write a formal comment on a particularly exceptional paper, with the result that complaints are more common in the peer-reviewed literature than applause. In fact, there is much to applaud in modern science, and we like to think that RC plays a positive role in highlighting some of the more important and exciting results that appear.
Myles’ piece, while ending up on a worthwhile point of discussion, illustrates it (in my opinion) with a rather misplaced example that involves RC – a post and follow-up on the Stainforth et al (2005) paper and the media coverage it got. The original post dealt in part with how the new climateprediction.net model runs affected our existing expectation for what climate sensitivity is and whether they justified a revision of any projections into the future. The second post came in the aftermath of a rather poor piece of journalism on BBC Radio 4 that implied (completely unjustifiably) that the CPDN team were deliberately misleading the public about the importance of their work. We discussed then (as we have in many other cases) whether some of the responsibility for overheated or inaccurate press actually belongs to the press release itself and whether we (as a community) could do better at providing more context in such cases. The reason why this isn’t really germane to Myles’ point is that we didn’t criticise the paper itself at all. We thought then (and think now) that the CPDN effort is extremely worthwhile and that lessons from it will be informing model simulations some time into the future. Our criticisms (such as they were) were mainly associated instead with the perception of the paper in parts of the media and wider community – something that is not at all appropriate for a peer-reviewed comment.
This isn’t the place to rehash the climate sensitivity issue (I promise a new post on that shortly), so that will be deemed off-topic. However, we’d be very interested in any comments on the fundamental issue raised – how science blogs and traditional peer review do (or should) intersect, and whether Myles’ perception that they are in conflict is widely shared.
Martin Vermeer says
##147 Geoff Sherrington:
If you mean the values in Table II, the answer is in Section 2.2: the individual runs were archived by teams participating in the GCM intercomparison project; Douglass did the ensemble averaging, and presumably all calculations after that.
Hank Roberts says
The Sherrington who once worked for CSIRO and wrote “There are greenie scientists in CSIRO and there are honest ones”?
Hank Roberts says
Or this Sherrington? http://tamino.wordpress.com/2007/10/19/not-alike/
Don Worley says
Is there a broadly accepted figure for the range of atmospheric CO2 concentration that is ideal for living things?
Assuming that we aspire to control this level, presumably there is a specific target. My question is, what is that target?
David B. Benson says
Geoff Sherrington (145) — The SI unit for temperature is the ‘kelvin’, not ‘degrees Kelvin’. But to confuse matters, the derived SI unit for temperature (often used) is ‘degrees Celsius’, equal to the temperature in kelvins minus 273.15, not ‘Celsius’.
Martin Vermeer says
Re #152 Hank: you mean the self-same CSIRO that contributed model run number 15 in the Douglass et al. paper? What’s this about, a claim of scientific fraud?
Hank Roberts says
I’m not sure what it’s about, Martin; Google the quote and you’ll know as much as I do. No way for me to even tell it’s the same person, just wondering.
Hank Roberts says
Maybe this one: http://www.jennifermarohasy.com/blog/archives/001281.html
David B. Benson says
Don Worley (154) — The atmospheric carbon dioxide is now about 385 ppm. Dr. James Hansen states ‘under 350 ppm’. But at 315 ppm in 1958 CE, the glaciers in the Swiss Alps were already melting back at about 4 m/y. However, at 288 ppm in 1850 CE, those glaciers were advancing. So somewhere in between seems best to me, but in any case vastly lower than now.
Ike Solem says
If blogs should be peer-reviewed, should lectures also be peer-reviewed?
For example, http://cires.colorado.edu/events/lectures/allen/
That statement – should it have been subjected to peer review?
Peer review should be reserved for things like journal publications, funding decisions, and faculty hiring decisions.
Geoff Sherrington says
Re 156 Martin Vermeer
You quote –
“Re #152 Hank: you mean the self-same CSIRO that contributed model run number 15 in the Douglass et al. paper? What’s this about, a claim of scientific fraud?”
When you have gained the experience that I have, if you ever do, you will know when to reexamine numbers that appear to have unusual characteristics.
Please guys, don’t talk down to me. Assume I understand unless or until I show otherwise, by proper scientific criteria, not made-up stories.
[Response: Well, O wise one, tell us what conclusions we should draw. Don’t just leave us with implicit accusations – remember that we do not have your vast experience in these matters. Perhaps you would like to demonstrate that the Douglass et al analysis is somehow amiss by actually doing the calculation yourself? All the data is available at the PCMDI website…. – gavin]
Martin Vermeer says
Re #161 Geoff Sherrington
So the implied story is that the good folks at CSIRO doing the simulations somehow looked at what got checked in to the GCM Intercomparison Project archive, computed the average of the stuff already there, and fraudulently submitted a result close (but not equal; too obvious) to that average…
Incredible, the lengths to which AGW middle-of-the-roaders will go in order to have their beliefs accepted :-)
Geoff Sherrington says
At 110,
“Ray Ladbury Says:
9 April 2008 at 10:03
James G. I think you misunderstand my point–the agreement of the models says that there is agreement on the most important forcers–it suggests consensus.
James G and Martin,
The fact that the models agree as well with overall trends in the very noisy climate data suggest that on the whole they are correct. The uncertainty comes in for the less well constrained aspects, and the takeaway message here is that that uncertainty is overwhelmingly on the positive side. Since that is also where the highest costs are, those fat tails could wind up dominating the risk.”
This seeded the discussion. I gave a rather good example of agreement that looks odd to a numbers man. It’s up to you guys to agree or disagree that the numbers look odd. That way we can tell a bit about how objective you are.
Ray Ladbury says
Geoff, I’m afraid I agree with Gavin: I don’t see anything out of the ordinary. I also fail to understand the point you are trying to make. Why is it surprising that at least one element in a group should be near the mean? It would be much more surprising to me if none of the models were near the mean. Also, keep in mind that much of the physics is common between the models. It is usually the data used to constrain the forcings that varies from model to model.
You say: “Please guys, don’t talk down to me. Assume I understand unless or until I show otherwise, by proper scientific criteria, not made-up stories.”
Well, Geoff, I’d say we’re there.
Hank Roberts says
Any relation to the Sherrington linked above?
Geoff Sherrington says
Re 125
I know 4 people with the same first and last name as me, with tertiary qualifications, 3 in Australia, one a Prof, one a maths/surveyor, one a generalist, all of whom post on the Internet. There might be more. At least I’m not ‘Jones’. The name is not important, the science is.
I do not live on the coat tails of Sir Charles Scott Sherrington, who, upon his death in 1952, had the longest entry ever in Who’s Who, a Nobel Prize, a Knighthood and twice presidency of the Royal Society of London. The name is not important, only the science.
Indeed, under a pseudonym, I have had several hundred Letters to the Editor published in the largest National newspaper here. It proves nothing except that perhaps successive Editors liked my style and content. I also have cases of others using the same pseudonyms as I use (used because of death threats). The name does not matter.
I have not made an implied story or any allegations of scientific fraud. I am testing how critically people on this blog accept data without studying the anomalous characteristics they contain. As I have written above, the fact that the data relate to climate is somewhat incidental. A set of numbers can fail because they agree too well, just as they can fail because they agree too poorly. It’s the science that matters.
Hank Roberts says
Ah, this is a test. Okay. Bye.
Geoff Sherrington says
Bye Hank. Sorry you are afraid of failing tests.
Next?
Does just one of you admit that the numbers I noted are extraordinary? Does not one of you see the surprise of the explanation that “It would be much more surprising to me if none of the models were near the mean”, when most are not? Does any one of you really comprehend the real maths of the physical world?
[Response: Now we are back to numerology. Let’s assume I picked out my telephone number from a series of unrelated figures. Would that be extraordinary? Yes and no. Yes, because picking exactly those numbers would be unusual, but no, because I would have been equally astonished if I had seen any number of different telephone numbers (or birthdays, or PIN numbers or license plates etc). Thus since you didn’t define what would be extraordinary ahead of time, your definition of extraordinary having already looked at the data is simply an exercise in finding numerical coincidences. Since the alternatives in front of us are a) that a modelling group fixed their output to produce something that was nearly the same as an average of an arbitrary set of other models for a metric that spanned an arbitrary number of years two years before the analysis was done, b) that the authors of the paper themselves made up the data despite it being publicly available and checkable by anyone or c) it’s just a coincidence, it is hardly surprising that everyone has gone for (c). If you think otherwise, it is incumbent upon you to demonstrate that by checking the calculation (which you could easily do). Despite my opinion that the Douglass paper is flawed, your continued insinuations are out of line. Either put up or shut up. – gavin]
spilgard says
Re #168: I doubt that we’re thinking of the same thing, but I’m inclined to agree with you that my objectivity can be measured by my willingness to buy your coy hints that scientific fraud and conspiracy are afoot.
Ray Ladbury says
Geoff Sherrington, I comprehend the Central Limit Theorem. Do you? I also comprehend that it is kind of difficult to draw conclusions from a sample size of one–particularly when that sample is cherry-picked. I also comprehend that scientific fraud is rare precisely because it is so easily detected, that the results of a single model do not reveal much about the current understanding of climate science as a whole and that on the basis of 3 numbers (while ignoring 10 others), you are alleging scientific fraud.
If you are really surprised by such a fortuitous coincidence of 3 numbers, all I can say is that this must be one of the first times you’ve analyzed a dataset.
Philip Machanick says
On the central point of the main article — accuracy of what gets published: The Australian today published an op-ed alleging that not only was global warming over but we are headed for a new ice age, because temperatures in 2007 dipped by 0.7°C relative to 2006. Unfortunately this crucial fact, on which the whole article hung, was wrong, as was pointed out in several comments including mine. So, guess what? The paper took the reference away from their main page, and deleted all the comments! That’s honesty for you.
Philip Machanick says
Geoff Sherrington (126 et seq.). Are you unfamiliar with the concept of “coincidence”? Yesterday I went shopping. At the deli counter, I asked for “about 200g” of something in irregular sizes. The deli guy was obviously new at the job, and his first attempt was 103g. His second attempt was exactly 200g to the accuracy of the scale. Was he replaced between the two attempts by a space alien with a gramme-accurate sense of weight? Must be something like that — what are the odds otherwise?
Here’s an exercise for you. Look at 100 papers with similar comparisons of numbers. You won’t see the effect you saw often. So no conspiracy. You were just lucky this one time.
I also publish often in letters to the press (#166), using my own name (try google). My observation: the Australian press shuns factual corrections. I have a good hit rate unless I point out an error in an article, which is almost never published — particularly not in matters of climate change. So I wouldn’t consider a high count of letters in the Australian press as a badge of honour.
Update on my #171: I found the original article with comments (hidden rather than deleted, not in the usual places for articles with lots of comments), and they didn’t publish any corrections in letters, including mine, the following day.
Geoff Sherrington says
Gavin, you are plain evasive. If you saw a set of numbers like I have shown, would you reexamine them or not? Yes or No? Either put up or shut up?
If NO, perhaps that’s why you are where you are and I am where I am – able to see such evasion from 30 paces.
That stuff about numerology is so 1970s. We have moved on since then. I first saw the coincidence of birthdays in a party group in a Martin Gardiner column about 35 years ago, from memory. (You might not even know who Martin Gardiner was. If you ask nicely I’ll give you a clue).
Barton Paul Levenson says
Geoff Sherrington says:
Gee, Geoff, I have no idea who Martin Gardiner was, either. Do you by any chance have him confused with Martin Gardner, the late science writer?
And maybe Gavin isn’t so much being evasive as just refusing to entertain a crackpot in the style the latter would prefer. Playing with numbers and hunting patterns in them, however much fun it may be, is not the same thing as statistical analysis.
Ray Ladbury says
Geoff, the only thing that is amazing in this is that you are amazed by it. If you look at the actual data, you find that the large standard deviations are dominated by a couple of models for each altitude. Other than that, the agreement between the models is pretty darned good–to the point where your coincidence is not significant at the 95% confidence level, and that is purely statistical modeling, not taking dynamics into account. I think I’d want better than 95% significance to issue an allegation of fraud, but you seem to have much lower standards.
SteamGeek says
Gavin, thanks for the assistance.
(this is cleaned up and clarified a little from earlier)
As long as scientists are working on the taxpayer’s tab, the work must be open to public scrutiny. The worst thing that will happen is those producing the work will be forced to do a better job of packaging the methods, facts, findings and conclusions. If the work CAN NOT be defended in a public forum, it doesn’t meet the basic requirements. The best thing(s) that will happen is the public will become more educated about the work, and the scientists will gain better public support.
If a scientist wishes to start a career as a political lobbyist or a text book censor, so be it. They are however no longer useful as an objective scientist. It’s a simple matter of personal choice. The chosen course of action is one or the other but not both. And certainly not on the taxpayer’s tab if the attempt is for both. The private sector would be a separate issue, although an employer would have the final say, not the individual.
I can’t think of ANY other adequate way to keep scientists on task other than open peer review. Not saying all are dishonest, but there are a few who need closer public review.
It seems ALL TOO OFTEN some forget who the customers / owners really are. Public servants are employed with the blessing of the public, a blessing which can and from time to time should be revoked.
Operating behind closed doors on public issues is a BAD idea.
So, while peer review of advanced science papers is obviously best conducted by people who know the language and understand the underlying science, still, doing it in the open is an opportunity, not a hindrance. Further, peer review of the issues is absolutely required given the way media and politicians use the science for personal and political gains. The citizens deserve far better than they’re getting in this regard.
TCO says
I’m a skeptic and I think Geoff is making much ado about nothing. The models’ agreement with the average occurs at a few spots, is not exact, and the standard deviations are high regardless. Also, why/how would people be building a model to match an average of other models? Who goes first? The faker and the others match up to him? Or the others go first and he makes his match theirs? (And we are told nothing about the timing of the different runs, i.e. whether the “faker” had access to the results of the others.)
Geoff: Please. This kind of fever swamp thing makes us look paranoid. Also makes us look stupid. And when expressed with too much bluster, makes us look pompous, pretentious and windbagged. Settle down and be a good skeptic.
Philip Machanick says
TCO (#177): well put. The role of a genuine sceptic is to keep scientists on their toes and make them recheck their results. There’s precious little of that going on under the climate sceptic brand. Why bluster and the reshooting of spent cartridges have to be part of the armoury, if there is a case to be made, escapes me.
May I quote some Shakespeare here? “… full of sound and fury, signifying nothing.” Look up what comes before this fragment to see how apt it is.
Dave Andrews says
re 168,
Gavin, you say that your opinion is that Douglass’ paper is flawed.
Fine, but in the way that science is meant to progress via ‘peer review’ surely you need to publish a paper rebutting it?
A blog statement, after all, can’t count for much because that’s the sort of thing sceptics do!?!
[Response: The flaw is obvious and doesn’t need a peer-reviewed paper to demonstrate it, but who knows what tomorrow will bring… – gavin]
Hank Roberts says
Dave Andrews, how many refutations do you require be published?
Read at least the last paragraph, first post, in the previous thread:
https://www.realclimate.org/index.php/archives/2007/12/tropical-troposphere-trends/
which references
http://www.agu.org/pubs/crossref/2007/2007GL029875.shtml
Maxine Clarke says
Mark Stewart #28 says, incorrectly so far as Nature is concerned: “Journal articles, especially in ‘leading’ journals such as Nature and Science, are getting ridiculously short, and many details of analysis are necessarily omitted, and much can be buried in a simple figure.”
Has he read Nature in the past 5 years? Nature Articles and Letters are several (between 4 and 8, usually) pages long in the print/online edition of the journal, and have associated with them online-only, peer-reviewed supplementary information and methods, sometimes extremely extensive. We also publish other journals of high quality and impact, to allow further publications in a topic – for example Nature Geoscience, in which the two commentaries on blogging have formed the basis for this fascinating discussion. (Which by the way I am highlighting on Nature’s Peer to Peer blog at http://blogs.nature.com/peer-to-peer . Our peer-review debate, which is mentioned by a kind person in the comment thread above, is also archived here.) Best wishes, Maxine, an editor at Nature.
Martin Vermeer says
Re #177 TCO:
What makes you think there is any “us” left after you take away the paranoia, the stupidity and the mendacity? “Skepticism with an honest face”, Gorbachev style?
“Good skepticism” already exists, TCO. It’s called science. That’s the “us” you should be joining.
Joseph Hunkins says
I think blog posting and commenting are a wonderful aspect of “new science” and I’d hate to see the legitimate but overblown concerns about how this might impact the peer review process lead to less blogging and commentary. I’d wildly guess that something like 1000 times as many people read blogs about a paper as read the paper itself. That presents some very important interpretation issues, but we should recognize that there is finally a mechanism for much broader exposure of scientific research, and work to improve that mechanism rather than reject it in favor of what many believe to be a seriously compromised peer review process.
Jay Nickson says
It would be curious to have a machine, in the Turing sense, that compared people’s accuracy on clear questions that will be answered within a time period tau, and publicly ranked them.
Is there ‘judgment’? How does Bush rank versus, e.g. Clinton? How about Lee Smolin vs Newt Gingrich? Peer review committee members?
Lots of clusters needed, but not much software.
‘twould be curious.
Nikolaus Kriegeskorte says
a late comment on this post:
we need the whole continuum from informal oral comments in private lab meetings, to public science blogging, to open post-publication peer review, to peer-reviewed response papers to peer-reviewed target papers.
the missing link in this continuum is open post-publication peer review (OPR). OPR is different from science blogging in that it is crystallized (each review is an official digitally signed publication with a doi). OPR is different from publishing a real paper in that a review will typically have limited original content, will refer mainly to one paper, and will include numerical ratings of the reviewed paper.
i’m exploring this idea here: futureofscipub.wordpress.com
–nikolaus kriegeskorte