Nature Geoscience has two commentaries this month on science blogging – one from me and another from Myles Allen (see also these blog posts on the subject). My piece tries to make the point that most of what scientists know is “tacit” (i.e. not explicitly or often written down in the technical literature) and it is that knowledge that allows them to quickly distinguish (with reasonable accuracy) which new papers are worth looking at in detail and which are not. This context is what provides RC (and other science sites) with the confidence to comment both on new scientific papers and on the media coverage they receive.
Myles’ piece stresses that criticism of papers in the peer-reviewed literature needs to be in the peer-reviewed literature and suggests that informal criticism (such as on a blog) might undermine that.
We actually agree that there is a real tension between a quick and dirty pointing out of obvious problems in a published paper (such as the Douglass et al paper last December) and doing the much more substantial work and extra analysis that would merit a peer-reviewed response. The approaches are not however necessarily opposed (for instance, our response to the Schwartz paper last year, which has also led to a submitted comment). But given everyone’s limited time (and the journals’ limited space), there are fewer official rebuttals submitted and published than there are actual complaints. Furthermore, it is exceedingly rare to write a formal comment on a particularly exceptional paper, with the result that complaints are more common in the peer-reviewed literature than applause. In fact, there is much to applaud in modern science, and we like to think that RC plays a positive role in highlighting some of the more important and exciting results that appear.
Myles’ piece, while ending up on a worthwhile point of discussion, illustrates it (in my opinion) with a rather misplaced example that involves RC – a post and follow-up on the Stainforth et al (2005) paper and the media coverage it got. The original post dealt in part with how the new climateprediction.net model runs affected our existing expectation for what climate sensitivity is and whether they justified a revision of any projections into the future. The second post came in the aftermath of a rather poor piece of journalism on BBC Radio 4 that implied (completely unjustifiably) that the CPDN team were deliberately misleading the public about the importance of their work. We discussed then (as we have in many other cases) whether some of the responsibility for overheated or inaccurate press actually belongs to the press release itself and whether we (as a community) could do better at providing more context in such cases. The reason why this isn’t really germane to Myles’ point is that we didn’t criticise the paper itself at all. We thought then (and think now) that the CPDN effort is extremely worthwhile and that lessons from it will be informing model simulations some time into the future. Our criticisms (such as they were) were mainly associated instead with the perception of the paper in parts of the media and wider community – something that is not at all appropriate for a peer-reviewed comment.
This isn’t the place to rehash the climate sensitivity issue (I promise a new post on that shortly), so that will be deemed off-topic. However, we’d be very interested in any comments on the fundamental issue raised – how science blogs and traditional peer review do (or should) intersect, and whether Myles’ perception that they are in conflict is widely shared.
Hank Roberts says
Have the people who actually wrote the original press release spoken up in this thread or elsewhere? I mean by that the people who put the words together in the form sent out, probably by a marketing or PR department staffer. Their job is getting the organization’s name into the news, not writing abstracts with real info.
Ray Ladbury says
Richard #48: Although you claim to have “science degrees,” it’s a pretty safe assumption that you haven’t done any science in, say, the past 30 years, as otherwise you would realize that modeling (computer or otherwise) is central to science. How else are we to study Earth’s core, the explosion of supernovae, ecological systems, many aspects of materials science, and really the majority of cutting-edge science?
And I would also urge you to look into model validation in fields outside your own narrow discipline (whatever that may be). Different techniques are appropriate to different models, techniques and fields of inquiry. The fact that you do not understand this means you really know nothing of how science is actually done.
Myles Allen says
In response to Hank (51):
I think the offending paragraph was written by a long-suffering Natural Environment Research Council press officer who has since moved on to other things. But I don’t think it’s fair to tee off on the press officers, who have a pretty thankless task. If I recall correctly the 11 degree number went in and out of successive drafts like a yoyo, and ended up being left in on the grounds that it had to highlight something “new and concrete” — not, I might add, “alarming”: my impression was that the Press Officer would just as happily have drawn attention to zero-sensitivity models, if we’d found any.
Anyway, I eventually signed it off on the understanding that no serious mass-circulation journalist would rely on the press release in reporting the story, and that its sole purpose was to encourage journalists to find out more. It seems, judging from the responses Fiona got and despite Richard Vadon’s claims, that this understanding was correct.
The press release could undoubtedly have been clearer, but it seems no-one who reported the story directly actually misunderstood what had been done, so it didn’t in fact do any damage. But of course, if Richard had stuck to “scientists issue a press release that might have been misunderstood but wasn’t” his editors probably wouldn’t have been very impressed.
Of course, if Richard can come up with journalists who did report the story solely on the basis of the press release and did not understand that 11 degrees was the top end of a large range, then that is a different matter. So far, no one has come forward to my knowledge.
DBrown says
I would like to add a few points based on my experience with this blog:
1) A lot of educated people put in some interesting posts but rarely have detailed training/education in the area they are discussing.
2) A lot of strange claims are made with no supporting proof.
3) These posts are always reviewed by many readers and comments are allowed.
Relative to these points, overall I feel that these are good things – waiting only for experts to write on a given topic is not a good way to get a large amount and diversity of science knowledge out into the layman world.
However, a lot of junk science, incorrect statements and some great insight will result.
I feel that the overall result is that the basic job is being done. The issue of peer-review is fully addressed because of feedback that is allowed.
However, I believe that your site needs one major change:
You should post a notice that anyone wishing to post should be careful, when they state facts or make broad claims, to qualify these statements as opinions. Otherwise, they need to offer links or proof based on known science in the field.
(PS: anyone who thinks computer modeling is ‘central’ is going way overboard – I can do massive amounts of research in many areas of advanced physics and never need to use a computer-based modeling program, and in fact, many great advances in science didn’t need any such thing; in some fields it is critical, but ‘central’? No way.
PPS: as for media hype, that is life in a media-driven, ‘8-second’ sound-bite world – that’s just the way it is, and it is something blogs are great at handling compared to the regular print/cable noise machines.)
Rich Thompson says
I teach large gen ed classes for non-science majors and I love RC. In fact, I love any venue where I can get a “quick and dirty” education from those who really know. I prowl the posters at fall AGU stalking authors of posters on subjects of interest to me (pretty much everything) because scientists know so much and have to communicate so narrowly in their peer-reviewed papers. I can get caught up on the latest thinking in just a few minutes when talking to an expert or reading a blog, whereas I simply don’t have the time, background, or fortitude to wade through the literature on all of the subjects of interest to my students to look for exciting, new developments.
Chuck Booth says
Re # 54 DBrown “I can do massive amounts of research in many areas of advanced physics and never need to use a computer based modeling program and in fact, many great advances in science didn’t need any such thing”
I would argue that any explanation of empirical data that attempts to describe physical reality constitutes a model, whether that model is depicted in words only, in a graph or other diagram, in mathematical equations, or in computer code. Bohr’s 1913 depiction of atomic structure as a central nucleus surrounded by circling electrons was a model (http://en.wikipedia.org/wiki/Bohr_model), and a roadmap depicting the spatial relationships of cities, roads, and other geographical features is a model. Without the model, you have nothing but disparate “facts,” with no way to link them.
Lynn Vincentnathan says
#54, & “anyone who thinks computer modeling is ‘central’ is going way over board”
That’s probably true as a general statement. It would be much better from a scientific POV to have 2 earths, one the control & one the experimental earth. Or we could do an O X O type of experiment, where we make observations, then apply the treatment to our one earth (emit GHGs), then make further observations (of course, this may take a long time due to the lag of T to GHGs, such as ocean thermal inertia, etc). Or we could do an O X O -X O type of experiment, where we make observations, then apply the treatment to our one earth (emit GHGs), make further observations, then remove treatment (stop emitting GHGs) and make further observations (which again would require a long time frame due to the lag).
Or, we can use models.
Hank Roberts says
Myles, I understand your point about press officers, but I think it should be cautionary — you did what most big organizations do. It blew up because what these people do — whether you call it “press officer” or “marketer” or “PR expert” — is advertising.
Advertising is such a big industry because
— the law says no one is going to believe it
— the practice says people believe it
— the law says ‘puffery’ is ok, nobody believes it
— the practice *including the science* says it works.
Your press officer, you say, “had to highlight something ‘new and concrete’”
and you “eventually signed it off on the understanding that no serious mass-circulation journalist would rely on the press release in reporting the story, and that its sole purpose was to encourage journalists to find out more.”
How can you believe this? Even though it’s the assumption the marketing business works on, even though it’s the legal presumption made about puffery and ads, even though everyone in the business world _pretends_ they believe this — nobody really believes it.
If anyone believed it, they wouldn’t spend money on doing something knowing it wouldn’t work.
Advertising works. It fools people all the time.
Media people are people.
In a hurry.
Looking for filler.
Looking for something to catch people’s interest.
Looking to attract the readers their publication exists to sell to its advertisers.
None of this is a secret. It’s only awkward when the contradiction between theory and practice surfaces.
Ray Ladbury says
DBrown, Whether a model is done on a computer or on the back of a napkin is immaterial. It is still a model, and you absolutely cannot do physics in any meaningful way without models. Whether you are doing measurements or theory, at some point you will need to model your errors at the very least. Looking at discrete events during a time interval? You model your errors as Poisson. Estimating probabilities by looking at proportions of “success” and “failure”? Your error model is binomial. I can do this on a parallel supercomputer or in Microsoft Excel or on a chalkboard–but if I do not resort to a model, I have not done physics.
I would be curious what sort of physics you think is possible without modeling.
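To make the two error models Ray mentions concrete, here is a minimal sketch (the counts and proportions are made-up illustrative numbers, not data from any study):

```python
import math

# Counting discrete events in a time interval: the Poisson model gives an
# approximate 1-sigma uncertainty of sqrt(N) on an observed count N.
n_events = 47
poisson_sigma = math.sqrt(n_events)
print(f"count = {n_events} +/- {poisson_sigma:.1f}  (Poisson error model)")

# Estimating a probability from k successes in n trials: the binomial model
# gives a 1-sigma uncertainty of sqrt(p*(1-p)/n) on the estimated proportion.
k_successes, n_trials = 32, 100
p_hat = k_successes / n_trials
binomial_sigma = math.sqrt(p_hat * (1.0 - p_hat) / n_trials)
print(f"p = {p_hat:.2f} +/- {binomial_sigma:.2f}  (binomial error model)")
```

Either calculation fits on a napkin; a computer only becomes indispensable when the model or the error analysis is too large to do by hand.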
DBrown says
#59 – who said anything about not needing models? Computers, yes. Please read my post, then comment – thanks.
Myles Allen says
Dear Hank,
I know it’s almost as fashionable to knock journalists as it is to knock press officers, but to be honest, I was very reassured by the results of Fiona Fox’s inquiry (which, after Richard’s initial allegations, I was rather dreading). The journalists not only clearly understood the story perfectly well (in spite of, you might say, the unclear wording of the press release), they could remember all about it in remarkable detail more than a year on (Fiona Harvey of the FT could even quote me verbatim).
In many ways the worst libel in Richard’s piece was the claim (which has since been repeated by other BBC journalists) that his colleagues were just copying out the press release, when he knew perfectly well they were doing nothing of the kind.
Myles
Jim Bouldin says
Picking up on Mark Stewart’s comment (# 24), the condensing of journal article lengths seems to be occurring in many journals. Whatever the reason for this, it can very much work against the goal of a clear and elaborate explanation of the information needed for proper understanding of a paper’s major and minor points. The introduction/background and materials/methods sections are often especially hard hit in this regard, although even important results can be shunted to a digital appendix if they take up too much space (which thereby makes them inaccessible to those with access only to a paper copy of the journal). Thus I find, as a scientist, that even papers in one’s own field can sometimes be difficult to understand, or at the very least more laborious and inconvenient than need be. So imagine what non-scientists are increasingly faced with. Scientist-based blogs like RC help counter this trend by fleshing out the ultra-terse language that we’re increasingly bound to. I see this as part of the “tacit knowledge” that Gavin mentions.
Also, until journals (or some new form of information dissemination) can provide a real-time, interactive, and somewhat informal discussion among working scientists in those fields having very high societal/political relevance, scientist-based blogs like RC are filling a very important void – a function which I very much appreciate.
Russell Seitz says
Re 2
Joseph Romm complains http://climateprogress.org/2008/04/02/nature-pielke-pointless-misleading-embarrassing-ipcc-technology/ of Pielke’s Nature commentary that :
“Since this paper doesn’t define the word “innovation,” it is very hard to tell what precisely the authors’ point is (other than to lead us into the technology trap)… this is characteristic of Pielke’s work — he doesn’t define terms specifically enough to make policy-relevant conclusions [emphasis in the original]… He says “all the regular readers of this blog know why the technology trap is dangerous (it leads to delay, which is fatal to the planet’s livability)… failing to stabilize well below, say, 700 parts per million of CO2 is really, really, really suicidal…. So what is the point of the piece? To convince people the situation is hopeless? [Nature actually runs a side piece on the commentary titled, “Are the IPCC scenarios ‘unachievable’?” — and people call me an alarmist!].”
While Romm neglects to define ‘fatal’ or ‘suicidal’, his own commentary has elicited a reader response that indeed qualifies as ‘Alarming’:
http://adamant.typepad.com/seitz/2008/04/the-last-carbon.html
Richard Vadon says
Hi Myles
I can’t believe we are doing this again. Let’s stop soon ;-)
As you know my position is that you judge the journalists on what they write. The broadsheet articles on your story did not give the readers a proper understanding of your research. We quote them in the programme. They make it sound like the world was about to end. Let’s be clear that this is first and foremost the responsibility of the journalists and headline writers involved. I know that you at CPDN were appalled at some of the coverage. The question we asked in the programme was, did the press release play a part in this?
I think you are correct when you say “The press release could undoubtedly have been clearer”. In our programme you didn’t say that. I think if the press release had been clearer the coverage wouldn’t have been so apocalyptic. I know you disagree. I suggest people listen to our interview with you. You put your view strongly and clearly.
The programme is available here:
http://news.bbc.co.uk/1/hi/magazine/4923504.stm
Pat Neuman says
Although traditional peer review may be useful in technical communications between scientists, it has failed the public on climate change science. A new process is needed for use in climate change science blogs.
Myles Allen says
Dear Richard,
The journalists who covered the story clearly understood it, so while the press release might have been mis-understandable (what press release isn’t?), we made sure no-one actually misunderstood. What the headline writers (who wouldn’t have even read the press release) chose to say was beyond our (and, I understand, even the journalists’) control.
Did you interview anyone who actually covered the story who you were accusing of acting highly unprofessionally in just copying out the press release? If so, why did you not include that in the final version of the programme? I think you (and Bryan Lawrence, Tim Palmer and all) would have got a very different impression of what happened if you had done.
I appreciate by the time you got the results of Fiona Fox’s inquiry the BBC had already invested too much in the programme to change it, but I don’t see why you are defending it now.
Myles
JamesG says
[The error in the Douglass et al paper is clear and obvious and does not require much thought to discern. It is simply that the statistical test they apply would reject ~80% of samples drawn from an exactly similar distribution. To make it clearer, take a fair die – the mean number of points is 3.5 and with around 100 throws, the mean will be known to within 0.1 or so. Then take the same die and imagine you get a 2; the Douglass et al test would claim that this throw doesn’t match since it is below 3.3 (3.5 – 2 standard deviations). This is absurd… – gavin]
Gavin: Are you saying you think all model results are equally likely, or just that Douglass et al. should have included a bell curve to show us which were more likely? If you believe the former then your critique of Myles Allen’s work seems contradictory. If it’s the latter, then if Douglass et al. had added that bell curve and shown that only the very unlikely model results clipped the observation results, would you then agree that the model predictions don’t match observations and that consequently the theory behind the model needs to be revised? And would you really include ALL models, even the obviously barmy projections and the obviously poor models, or should you use only the ones that the IPCC use, which would seem to me to make more sense?
[Response: If they had shown the distribution of the individual simulations (varying due to different models, different ‘weather’ etc.), the observations (with their own uncertainties) would have been shown to fall well within the distribution of the models. Thus their main conclusion would have been refuted. If they wanted to do something else they could have, but they didn’t. All of the model runs used are in the IPCC archive. – gavin]
Myles Allen is confusing. He has now proven by citation that he argued at the press conference that the 11 degree outlier result should be treated as far less significant than the 2-3 degree spread, yet he argues on this blog that the 11 degree result IS the significant result of the work. Which is it? Anyway, I remember seeing that there were results that showed cooling in the sensitivity analysis and which were specifically rejected as “obviously wrong”. Now a sensitivity analysis with such huge variance on the aerosol parameter will always produce some cooling curves, so his later statement that non-warming results weren’t obtained was just confirmation bias via data culling. The scientific way would be to say that if the cooling results are obviously wrong then the 11 degree result is just as wrong because they both represent extremes of the error bars. [edit]
[Response: You are misunderstanding the CPDN simulations. Try reading some of the papers arising from the project. Sanderson et al (2007) or Knutti et al (2006) are quite good – or go to the cpdn website and read what is available there. There was no variation of the aerosols, and the ‘cooling curves’ you mention were obvious errors in the control runs due to an inability of the simple ocean treatment they used to deal with extreme conditions in the Equatorial East Pacific. Absolutely nothing to do with climate sensitivity. – gavin]
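Regarding the die analogy in the quoted response above, here is a minimal Monte Carlo sketch (illustrative code only, not the actual Douglass et al procedure or any RC analysis) of why testing a single realization against two standard errors of an ensemble mean is the wrong yardstick:

```python
import random
import statistics

random.seed(0)
trials = 10_000
rejected_vs_se = rejected_vs_sd = 0

for _ in range(trials):
    throws = [random.randint(1, 6) for _ in range(100)]  # 100 throws pin down the mean
    mean = statistics.mean(throws)
    sd = statistics.stdev(throws)        # spread of individual throws (~1.7)
    se = sd / len(throws) ** 0.5         # uncertainty of the estimated mean (~0.17)
    obs = random.randint(1, 6)           # one new throw, drawn from the very same die

    if abs(obs - mean) > 2 * se:         # Douglass-style test: compare to 2 standard errors
        rejected_vs_se += 1
    if abs(obs - mean) > 2 * sd:         # compare to the spread of individual throws
        rejected_vs_sd += 1

print(f"rejected against 2 x standard error of the mean: {rejected_vs_se / trials:.0%}")
print(f"rejected against 2 x spread of individual throws: {rejected_vs_sd / trials:.0%}")
```

For the die, the first test rejects essentially every throw even though every throw comes from the distribution being tested (the ~80% figure in the quoted response refers to the actual comparison in Douglass et al); the second test, against the spread, rejects essentially none.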
Myles Allen says
Response to 67: It was the huge range, 2-11 degrees, and the asymmetry relative to the traditional 2-4 degree range, that we felt was the important result, not the fact that the “vast majority of their results showed that doubling CO2 would lead to a temperature rise of about 3C”, as Richard Vadon put it in the web version of his programme (this cluster was simply an artifact of the way we had imposed the perturbations). It was also important that it wasn’t just a “tiny percentage” showing high values, as Richard claims, but a systematically fat-tailed distribution (20% higher than 7 degrees, if I recall correctly).
I think our interpretation of the Stainforth et al results was correct. Certainly, the paper has been cited quite a few times (including by the IPCC, with appropriate caveats, of course) as evidence that climate sensitivity could be a lot higher than the traditional range. No one, to my knowledge, has ever cited that study as providing support to the 3 degree traditional value. To this extent, the journalists Richard Vadon was criticizing appear to have understood the significance of the study rather better than he did. It’s a shame he doesn’t appear to have talked to any of them.
Myles
Ray Ladbury says
DBrown, Computers have been central to science since they were invented in the ’50s. And coincidentally, most of the great physics done without a computer was done before there were computers. And as I said, it makes no difference whether model predictions are done by computer or on the back of the envelope. All a computer does is make it possible to look at more complicated models and apply more powerful techniques–that’s why they are in fact central to many disciplines on the frontiers of physics…or biology, chemistry and even mathematics, for that matter.
Hank Roberts says
Well, cautionary.
I’d suggest every scientific organization adopt the wording above, and print what was said about the press release ON every press release.
I realize the British notion of “libel” differs from that under US law and I can’t comment on that.
But I do agree the European adoption of the Precautionary Principle is a good idea and I wish it would be adopted by US journalists.
In that spirit, a cautionary text box along these lines, paraphrasing the above language, would be well advised for any press officer. This stuff’s too serious and too easy to spin to put press releases out without some warning that they’re not news releases.
“No serious mass-circulation journalist will rely on this press release in reporting a story. Its sole purpose is to encourage journalists to find out more.”
Hank Roberts says
Oh, I realize I’m saying the same thing Gavin wrote in the prior thread:
In How not to write a press release, Gavin wrote:
“… the scientists also need to appreciate that most journalists will only read the press release, … This implies that the press release itself is the biggest determinant of quality of the press coverage …”
Curious says
“whose papers to trust, and which to be suspicious of [Hey Prof. here’s a great new paper!… Son, don’t trust that clown.] In short the kind of local knowledge that allows one to cut through the published literature thicket.
But this lack makes amateurs prone to get caught in the traps that entangled the professionals’ grandfathers, and it can be difficult to disabuse them of their discoveries. Especially problematical are those who want science to validate preconceived political notions, and those willing to believe they are Einstein and the professionals are fools. Put these two types together and you get a witches brew of ignorance and attitude.”
If peer review is so great, then why is this the case?
Simeon says
As an interested amateur, albeit one with two Physics degrees, may I raise my hand in support of blogs and forums. On the topic of Climate Change, we of the general public obtain our information and understanding from:
the environmentalist lobby which tends to be sensationalist
the skeptic lobby which tends to be brutal and unprincipled
the media who cherry pick for the sake of populism
the realclimate website and similar
The first three I mention really have muddied the waters. We depend on the fourth for up-to-date information and appraisal. Communicating to a wider audience is necessary in this case where we have competing lobbies diverting our attention.
DBrown says
Ray Ladbury: This thread is getting old – as I have been saying – COMPUTER modeling is not critical to all science – MODELING IS essential but (try reading this part carefully, please) NOT computer modeling – computers are not central to the subject of science or the empirical reasoning it is based on. I do not see why you keep defending the issue of modeling – I have NOT said modeling is not critical, only that using computers is not central – valuable, sure. Please, try reading this post and my others more carefully because you are defending an issue that I have not disagreed with you about.
Hank Roberts says
Curious,
> if peer review is so great
Wrong tense, you mean “if peer review were so great” — it’s the worst form of scientific publication except for all the others tried so far.
Look here:
https://www.realclimate.org/index.php/archives/2005/01/peer-review-a-necessary-but-not-sufficient-condition/
Hank Roberts says
DBrown, “not central” may mean something special to you, but what does it mean for work that can’t be done without the computer as a tool?
Heinlein once told a student that he and his wife had spent many days calculating in pencil on rolls of butcher paper to get the math right for “Destination Moon” — and the student had asked why he didn’t just use a calculator. But at the time, ‘calculators’ didn’t exist.
From reading Dr. Weart’s history, computers made climate modeling possible — there wasn’t ever enough time to do the math otherwise.
TonyN says
Re: 50 Myles Allen
It was very thoughtful of Fiona Fox to provide the journalists who attended the press conference with her own recollections of what had happened more than a year previously when asking for theirs.
tamino says
The only difference between computer models and back-of-the-envelope (or 60 pages of equations) models is that in the former case a computer does the arithmetic. Doing the arithmetic without a computer takes so long, and is so prone to error, that computer modelling has become *central* (and essential) to many fields of science.
If you really want to reject the importance of computer models, you might as well claim that arithmetic (or mathematics in general) isn’t central to modern science.
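As an illustration of that point (a toy sketch with illustrative parameter values, not any GCM or published calculation): the same simple relaxation model, C dT/dt = F - λT, can be solved with pencil and paper or by letting a computer grind through the arithmetic, and the answers agree.

```python
import math

C = 8.4e8     # effective heat capacity, J m^-2 K^-1 (illustrative value)
lam = 1.25    # feedback parameter, W m^-2 K^-1 (illustrative value)
F = 3.7       # constant forcing, W m^-2 (roughly the doubled-CO2 scale)

def analytic(t):
    # Back-of-the-envelope solution: T(t) = (F/lam) * (1 - exp(-lam*t/C))
    return (F / lam) * (1.0 - math.exp(-lam * t / C))

def numeric(t, dt=86400.0):
    # Same model; the computer just does the arithmetic, one day at a time.
    temp, elapsed = 0.0, 0.0
    while elapsed < t:
        temp += dt * (F - lam * temp) / C
        elapsed += dt
    return temp

fifty_years = 50 * 365.25 * 86400.0
print(f"pencil and paper: {analytic(fifty_years):.3f} K")
print(f"computer loop:    {numeric(fifty_years):.3f} K")
```

The model is identical in both cases; the computer only matters once the equations are too numerous or too coupled to grind through by hand.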
Ray Ladbury says
DBrown, We are talking at cross purposes because you are making a distinction (between modeling and computer modeling) without a difference. Many of the most active and exciting fields in physics simply cannot be done without computer modeling–and this includes astrophysics, planetary physics, geophysics, particle physics… and yes, climate physics. What I fail to understand is how you cannot say that these fields are central to physics. Even in fields where experiments can still be done on desktops, the error analysis must be done by computer (e.g. Monte Carlo methods)–and yes, this is modeling. Hell, computers are even becoming central to mathematics–as in the proof of the 4-color map problem several years ago.
If in fact you are a scientist, I can only assume that you have been so deeply immersed in your own research that you haven’t noticed the passage of the past 40 years. Time to come up for air. If you are not aware of the importance computers have assumed, it’s time to reacquaint yourself with physics. You’re missing more than half the fun.
Ray Ladbury says
In the case of Stainforth et al., both the agreement of model predictions AND the thick tail are important results. The fact that predictions agree by and large supports the contention that the most important contributors to climate are well understood. This is crucial in exposing the lies of the denialists.
On the other hand, the long tail on the positive side is crucial because the costs of climate change rise nonlinearly with temperature. This means that from a risk management perspective, there is still much to be gained in better understanding climate even as we work to mitigate the effects that are a virtual certainty. It may be a difference in perspective between science and engineering, but they are both crucial viewpoints.
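To illustrate the risk-management point (with made-up numbers, not the CPDN or IPCC distributions), here is a minimal sketch: two sensitivity distributions with the same median near 3 degrees, one with a long upper tail, fed through a hypothetical convex damage function.

```python
import math
import random

random.seed(1)
N = 100_000

def damage(delta_t):
    # Hypothetical convex damage function: cost rises as the cube of warming.
    return max(delta_t, 0.0) ** 3

thin_tail = [random.gauss(3.0, 0.7) for _ in range(N)]                    # roughly symmetric about 3 C
fat_tail = [random.lognormvariate(math.log(3.0), 0.6) for _ in range(N)]  # median 3 C, long upper tail

def expected_cost(samples):
    return sum(damage(x) for x in samples) / len(samples)

print(f"expected cost, thin-tailed sensitivity: {expected_cost(thin_tail):6.1f}")
print(f"expected cost, fat-tailed sensitivity:  {expected_cost(fat_tail):6.1f}")
print(f"fraction of fat-tailed draws above 7 C: {sum(x > 7 for x in fat_tail) / N:.1%}")
```

Both distributions peak in roughly the same place, but because costs are convex the upper tail dominates the expected loss – which is why narrowing down that tail has real value even if the central estimate never moves.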
Myles Allen says
Tony,
Yes, if we’d known this was turning into some kind of forensic examination, it would have been better for her not to have written the e-mail like that (which is why I included it along with the responses). But all she knew was that a concern had been raised: we had no idea Richard Vadon was going to go to such lengths to pin the blame for the headlines on us.
Anyway, the only relevant point here is that Richard Vadon clearly thought (and presumably still thinks, since he hasn’t revised the web post on their programme) that the percentage of models we found with different sensitivities somehow told us anything about the real world. It didn’t (because of the way parameters were sampled etc. etc.), and no scientist has ever suggested otherwise in the peer-reviewed literature. The only relevant finding was the fact that the high-sensitivity models were not significantly less realistic than the normal-sensitivity models, together with the fact that there were enough of them to rule out a pure fluke (20% greater than 7 degrees is hardly a tiny percentage). He was starting out from a blogosphere myth, and working out a way to stop such myths propagating in the first place is what this debate should actually be about.
Myles
[Response: Myles, It is not a ‘blogosphere myth’ that the most likely value for the climate sensitivity is around 3 deg C – you, me and the IPCC all agree with that. The CPDN results did not change that (and haven’t in subsequent papers either). You are conflating a minor technical misinterpretation of the histogram of CPDN results with a much bigger issue of where the CPDN results fall in the wider context. Most of the erroneous headlines were based on an interpretation of the Stainforth paper as implying that sensitivities greater than 7 (and up to 11 deg C) were now much more likely and that the predictions of climate change in the next 50 to one hundred years needed to be dramatically revised upwards. That is not the case since, as you state above, the histogram of CPDN results tells us nothing about the real world. What Vadon and Cox saw was based on this much more obvious disconnect, not one line in our original post (which after all came after all the headlines). I fundamentally disagree that the problem was our post or some ill-defined ‘myth’ that you imply we are propagating. The problem was one of insufficient context. The solution is what you found at the press conference – if the time is taken to explain things properly and make sure that people understand, then you get a better outcome (most of the time). Our efforts are therefore better directed at continually trying to improve the background context that journalists have, and not trying to find convenient scapegoats in the ‘blogosphere’.
As I stated in the post above, your NG piece did end on an interesting point, but continually dragging this conversation back to an inappropriate single example is diluting it – because we end up arguing about the specific and not the general. We will probably need to agree to differ on how influential our post was compared to the headlines in multiple mass-media outlets, but I don’t think I am alone in thinking your fire is aimed at the wrong target. – gavin]
Bill S says
arXiv.org (now hosted by Cornell Univ. Library) seems to occupy an interesting place in the science communications continuum. Does anyone know how many of the articles posted there find their way into peer-reviewed journals or does posting there make that impossible?
Julian Flood says
quote When blogs strike out on their own and try to do “original” work there is going to be a real problem because blogs are fundamentally appealing to and often written by armatures who don’t have the experience to apply what knowledge they have. unquote
Don’t take them seriously. They’re just winding you up.
JF
BlogReader says
#37 Think back to your first introduction to geometry, algebra or calculus. Your teacher/professor undoubtedly gave you all the theoretical knowledge to solve every single problem given to you.
Would you say that 2nd grader had “tacit knowledge” of math?
David B. Benson says
Bill S (82) — At least one peer-reviewed journal requires submission by, in part, posting the ms on arXiv.
Myles Allen says
Hi Gavin,
Sorry, I didn’t mean to imply (didn’t think I implied) it was a blogosphere myth that the most likely value of climate sensitivity was 3 degrees. You’re right that’s the consensus for the most likely value. The myth I was referring to was the idea that the cluster around 3 degrees in the Stainforth et al results somehow gave support to this value, which Richard seems to be convinced was our “real” result (and in which case his programme would have made complete sense).
But you’re right, it’s just an example of what can go wrong, and there is no point in getting lost in the details of who thought what when and why (in self-defence, I only got dragged back into this one because Richard popped up).
I’m not sure I agree with you that Stainforth et al told us nothing about sensitivities greater than 7, given that no-one had reported GCMs behaving like that before so these fat tails could, until then, have been dismissed as a simple-modelling artifact. But that would bring us dangerously close to discussing climate sensitivity, which is off-topic.
I’m afraid I’ll have to drop out of blogging for a few days: it’s been an interesting weekend, which has left me with a much better impression of blogs than I had before, not least because of your very measured moderation (particularly impressive in view of my article — someone told me you had to be provocative to be noticed in the blogosphere). Thanks, and enjoy the rest of the discussion. Let me know what you all decide to do (if anything).
Regards,
Myles
L Miller says
“#37 Think back to your first introduction to geometry, algebra or calculus. Your teacher/professor undoubtedly gave you all the theoretical knowledge to solve every single problem given to you.
Would you say that 2nd grader had “tacit knowledge” of math?”
If you had read the very next line you would know. Here is the rest of that paragraph.
“Think back to your first introduction to geometry, algebra or calculus. Your teacher/professor undoubtedly gave you all the theoretical knowledge to solve every single problem given to you. But how easy was it to apply that theory the first time out? How much worse would it have been if they didn’t tell you explicitly which piece of theory you needed to use to solve the first few questions?”
Eli Rabett says
arXiv is an interesting mix. People in my fields (me too) use it as a placeholder for important (we think) results that later appear in the peer reviewed literature. In that way it functions as a preprint server (preprints were the samizdat that circulated by post or at conferences in the pre-web world). On the other hand, in extremely rapidly moving areas of theoretical physics, it has become the primary means of exchange and last but not least some very curious stuff has appeared there. If the curious stuff starts to make noise, often a comment is inserted as an arXiv manuscript, for example, what Arthur Smith did about the Gerlich and Tscheuschner paper. So in that respect arXiv is self correcting, but, as with much of the peer reviewed literature, there is a lot of stuff that no one ever looks at, deservedly so. We might also point out that arXiv is not designed for the lay reader.
Costanza says
Re 79:
Ray, I AGREE w/ you. However, pls control your rhetoric; in astrophysics and particle physics (where I “live”), computer modeling is NOT critical (except in the experimental aspects), nor even all that common.
Re 82:
Bill, it actually depends on the area under consideration. Nearly all astrophysics entries are published. Elementary particle physics papers are published, mostly as an afterthought in order to satisfy tenure committees and so on. It is generally accepted that submission to the arxiv IS the primary means of communication. And so it goes… I’ll see if I can come up w/ some numbers (the arxiv has utilities for just such a thing.)
Surya says
You raised an important point. The editor’s decision is usually influenced by the external reviewers (2 or more). It would be fair if there were a blog where reviewers’ comments are posted for public comment before the final decision is made, based on how the authors responded to the comments.
pete best says
This thread is utterly fascinating and Gavin appears to have given a good account of himself here (especially when talking about the nature of science, and to Myles Allen, who seems to be a well-balanced scientist at heart).
I personally thought that science was the study of cause and effect through the use of models. As Einstein once said, you should account for all of the facts as simply as possible. Doesn’t greenhouse gas theory do just that?
If it were not for RealClimate I would have been lost overall, but now I consider myself to have a reasonable earth science knowledge in regard to climate change (although some of the terms here still get me – doh). Indeed, in the UK newspaper the Sunday Telegraph (yesterday) there is an article with Lord Lawson on the age of unreason in which he argues that global warming has stopped since 1988. I immediately knew this was an erroneous assumption, remembering the recent article here on this very subject (8-year bars), which dealt with weather and not climate because the timelines are not long enough. However, I believe that some scientists have responded to these arguments incorrectly, as the article also states that global warming will kick in again come 2009. As far as I know it has not gone away!!?
Apparently, according to Lord Lawson, we are all being overly religious and zealous in our claims for future AGW. Not likely, say I.
Barton Paul Levenson says
Jenne posts:
[[Scientists should embrace the open scientific debate, and anyone who challenges that should be made very, very clear that without open debate, there simply is no science, no matter how much one is in favor of or opposes to particular people, statements and actions.]]
What do you mean by “open debate?” Allowing unqualified people to stick their two cents in? They can do that already. What they can’t do is have their ignorant ramblings accepted by scientists. Like it or not, all opinions on a subject are not equally valid, and the opinion of someone familiar with the subject always outweighs the opinion of someone who has never studied it.
TonyN says
It’s a pity that Myles Allen has had to drop out.
I would have liked to ask him why he thinks that testimonies obtained by Fiona Fox (comment 50) from journalists who attended his 2005 press briefing are reliable evidence of what happened when she had so obviously jogged their memories. It would also be interesting to see the whole letter, rather than just a single paragraph.
Richard Black’s coverage for the BBC can be found here:
http://news.bbc.co.uk/1/hi/sci/tech/4210629.stm
An interview with Myles Allen on the BBC’s Today programme on 27th Jan 2005 can be found here:
http://www.bbc.co.uk/radio4/today/listenagain/ram/today1_climate_20050127.ram
A transcript from Simon Cox and Richard Vadon’s BBC Radio4 programme ‘Overselling Climate Change’, including the interview with Myles Allen, can be found here:
http://ccgi.newbery1.plus.com/blog/?p=70
Ray Ladbury says
Costanza, I’m curious how one would develop a theoretical model of a supernova w/o computer modeling. Or do orbital dynamics calculations? Or extract signal from noise for extra-solar planets?
In particle physics, how would one establish error bars w/o Monte Carlo simulations? How do you know your acceptances? In the dark and distant past, when I was a grad student in particle physics, lattice gauge theory was central to understanding quantum chromodynamics.
Look, I agree that computer modeling doesn’t give you “the answers”. Rather, as with all models it should be used to obtain insight into the central mechanisms of the phenomena under study. And what I object to is the distinction being drawn between computer modeling and modeling of any other type. The tools used to model do not matter. What matters is the insight gained.
Lynn Vincentnathan says
Here’s my memory. I remember reading about the Climate Prediction project both before and after the results. I distinctly remember a range being mentioned, from some low number up to 11C, and I distinctly remember that the 11C was less likely. And, of course, I was more focused on the high end.
In any case people living up in the cold north are not impressed with 3C warming, and would not be very impressed with 11C warming. To them it’s very tiny (as the temp fluctuates even 20C within 24 hours sometimes). Most people just don’t know what 3 or 11C means.
Which brings me to accusations of alarmism. There is just no way anyone can say anything that would be alarmism re GW. We’ve increased our GHG emissions here in the U.S., I believe by 20% since 1990. I guess the logic goes, “if I don’t perceive I’m suffering from it right now, then it’s not a problem.”
When and if real alarmist talk starts happening, then we’d see it in the results — people lowering their GHG emissions.
Costanza says
Re 94:
Ray,
Point taken on the astrophysics. I will, however, split a supersymmetric hair. The Monte Carlo analysis and acceptance calculations are in the realm of experimental particle physics, which I excluded, and lattice work is a fairly small corner (theory and phenomenology are where the action is… look at the publication statistics). But again, you and I are in agreement on this issue. I simply think there’s too much rhetoric and hyperbole around this place.
Ray Ladbury says
Costanza, You have to understand that one of the constant refrains coming from the denialosphere is that “climate change needn’t be taken seriously because the only support for it comes from computer models and blah, blah blah…” That this is false is easily demonstrable given the daily increasing evidence that climate change is happening before our eyes. That it also denigrates and ignores the critical importance of modeling (computer or otherwise) in science today is inexcusably ignorant. As such, one way to ensure a reception involving both barrels is to imply that computer modeling is unreliable or unimportant. Unfortunately, on-line we tend to be unaware of the impression our rhetoric may be making–and given the prevalence of anti-science types this issue attracts, it is not uncommon to shoot first and clarify later.
Geoff Sherrington says
Your quote
“we like to think that RC plays a positive role in highlighting some of the more important and exciting results that appear.”
What are your most outstanding 5 examples, as you see them?
Would you include your opening paragraph to this thread?
Geoff Sherrington says
Re # 10 Eli Rabett
Amateurs dabbling in science.
You are rather mixed up. The amateurs who write on this blog are good for an occasional laugh, as are some of the pros.
Why don’t you include Medicine and Dentistry as sciences where Joe Citizen can futz around and make significant contributions through “tacit understanding” of the poor quality of the qualified work?
John Mashey says
re: models and such, for people who don’t do them, or for people who do one kind and want to understand others [because they can be very different].
Albeit a little old now, I recommend “Supercomputing and the Transformation of Science” by William J. Kaufmann III & Larry Smarr, 1993, W. H. Freeman.
It’s a beautiful book, and you can basically get one from Amazon for about the price of shipping.