The first published response to Lindzen and Choi (2009) (LC09) has just appeared “in press” (subscription) at GRL. LC09 purported to determine climate sensitivity by examining the response of radiative fluxes at the Top-of-the-Atmosphere (TOA) to ocean temperature changes in the tropics. Their conclusion was that sensitivity was very small, in obvious contradiction to the models.
In their GRL comment, Trenberth, Fasullo, O’Dell and Wong examine some of the assumptions used in LC09’s analysis. In their guest commentary here, they go over some of the technical details and conclude, somewhat forcefully, that the LC09 results were not robust and do not provide any insight into the magnitudes of climate feedbacks.
Coincidentally, a related paper (Chung, Yeomans and Soden), also in press (sub. req.) at GRL, likewise compares the feedbacks in the models to the satellite radiative flux measurements and comes to the conclusion that the models aren’t doing that badly. They conclude that:
In spite of well-known biases of tropospheric temperature and humidity in climate models, comparisons indicate that the intermodel range in the rate of clear-sky radiative damping is small despite large intermodel variability in the mean clear-sky OLR. Moreover, the model-simulated rates of radiative damping are consistent with those obtained from satellite observations and are indicative of a strong positive correlation between temperature and water vapor variations over a broad range of spatiotemporal scales.
It will take a little time to assess the issues that have been raised (and these papers are unlikely to be the last word), but it is worth making a couple of points about the process. First off, LC09 was not a nonsense paper – that is, it didn’t have completely obvious flaws that should have been caught by peer review (unlike, say, McLean et al, 2009 or Douglass et al, 2008). Even if it now turns out that the analysis was not robust, it was still worth attempting, and the work being done to re-examine these questions is a useful contribution to the literature – even if the conclusion is that this approach to the analysis is flawed.
More generally, this episode underlines the danger in reading too much into single papers. For papers that appear to go against the mainstream (in either direction), the likelihood is that the conclusions will not stand up for long, but sometimes it takes a while for this to be clear. Research at the cutting edge – where you are pushing the limits of the data or the theory – is like that. If the answers were obvious, we wouldn’t need to do research.
Update: More commentary at DotEarth including a response from Lindzen.
HH says
For years I have been wondering why there has been no comment from a process/automation engineer about temperature controls with only positive or negative feedbacks.
Any process engineer will tell you what the end result is in a system that has only positive feedback components: the process ends at an extreme temperature. So it would go with Earth’s climate. Nowhere is there any mention of the negative feedback components of Earth’s climate, or the study of them. CO2 and methane are studied, but over its history the Earth has had very different values for both gases, and we don’t live in a boiler.
Hasn’t any climate scientist been to process automation 101? For Earth’s climate to be as stable as it is, there must be both positive and negative temperature-dependent coefficients limiting the change.
[Response: The Planck long-wave emission (σT^4) is the dominant negative feedback. Everything else is just modifying that. – gavin]
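A minimal numerical sketch of the Planck term gavin refers to (Python; the 255 K effective emission temperature is the standard textbook figure, used here only for illustration):

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_flux(T):
    # Black-body emission (W m^-2) at temperature T (K)
    return SIGMA * T**4

def planck_feedback(T):
    # d(flux)/dT: extra W m^-2 radiated away per K of warming,
    # i.e. the stabilizing (negative) Planck feedback
    return 4.0 * SIGMA * T**3

T_eff = 255.0                    # Earth's effective emission temperature (K)
print(planck_flux(T_eff))        # ~240 W m^-2, balancing absorbed sunlight
print(planck_feedback(T_eff))    # ~3.8 W m^-2 per K of warming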
David Weisman says
I’m still curious as to exactly what they were measuring. It’s already known we had an ice age recently. I don’t think anyone has postulated the gross change in solar radiation which would explain this without amplification, so it seems the Earth’s temperature has to be sensitive to SOME stimulus.
Many of the skeptics have suggested most of the twentieth century temperature change is due to the sun. If so, wouldn’t these measurements measure sensitivity to a mixture of CO2 change and solar change – even if the whole thing were correct?
Jim Bouldin says
Ray, I’ve never been fond of the “letters” approach to science publishing, which connotes late-breaking, novel, newsworthy – and generally short – publications. That’s a media-circus-like approach to science that is out of place IMO (where did that whole idea come from anyway?). What exactly can you really say, and explain well, and defend – forget about introducing properly – in four pages? It serves nobody to be so over-extended that you can’t catch fundamental mistakes. And even less when you get put through a Rick Trebino-like experience in trying to point out such mistakes in a formal Comment, if such are even allowed.
barry says
I’m curious – I assume it’s normal for studies to rebut single papers, but is this amped up particularly in climate science? I’m wondering if the climate wars, which are so hard-fought in the political arena and semi-popular blog literature, have accelerated the practise.
Walter Manny says
“Peer review in other fields is much stricter than this … Maybe that level isn’t necessary for geophysics – but it sure looks like there’s a need for a bit more effort here.”
Is that statement meant to cover climate science publications in general, or just those of contrarians?
Larry says
Re: (Jim Prall, 15): “… question: does anybody ever actually pay those steep single-article prices?”
If you are an AGU member ($20/year) you can buy a “multi-choice” subscription: 10 papers for $40, 20 for $30, or 40 for $50 (barely $1 each). Much less than a GRL subscription.
If you are quite interested in the subject, subscriptions to Nature and Science will likely prove more economical over the course of a year than single-paper purchases there. Often, PNAS papers are open access.
For web searches, use scholar.google.com (in addition to regular Google), and search by title or author and year. Often there will be a link off to the right to a PDF copy of the paper. You can also try the authors’ sites, where you can often obtain papers.
To organize your library of papers, if you get deep into this, I recommend Biblioscape. From scholar.google.com you can download citations directly into the program’s database, find things easily later, link to a paper’s website and a PDF on your hard drive, relate one paper to another, take notes, etc., etc. $300 (ouch), but I can’t function without it anymore.
Andrew says
@gavin: “I’m only aware of papers being withdrawn in the case of proven fraud”
A bit of an exception which proves the rule, but there is one quite surprising instance – a paper of C.L. Siegel and Richard Bellman was accepted in the Annals of Mathematics before Siegel had seen the proofs – Bellman had done the proofs. Siegel withdrew the paper and demanded that the issue – which had been printed and shipped to subscribers – be recalled and printed with his further minor revisions to the proofs. Which was done.
I wouldn’t expect that to happen today though.
Andrew says
@pete best: “That’s the one thing about peer review that seems a little odd to the public at large, perhaps. If its findings turn out to be wrong, then how come it got published in the first place?”
A large amount of referee work in many fields is handed to graduate students. This is, oddly enough, one reason that it is easy to get a bad paper published in a well known problem, and sometimes hard to get even a good paper published in a not-so-well-known problem.
One other point is that “Letters” journals are often for rapid publication of work that, for one reason or another, seems to need rapid publication. Sometimes, although it is not supposed to, this results in a lower standard. The layman should probably think of “Letters” publications as ink still drying.
Andrew says
@Ray Ladbury: “I think the referees may have felt some pressure to allow publication, since Lindzen is a prominent skeptic (or at least pseudo-skeptic), and the implications were significant if he was right. Personally, I think it’s probably better to have this one published and demolished rather than floating around the blogosphere (in various incarnations) as another zombie.”
I think there ought never to be consideration of who wrote a paper in deciding whether to publish, although maybe I’m old-fashioned? Most of my papers were reviewed anonymously by anonymous referees – a system I was trained in, although more recently not. I don’t see what’s wrong with anonymous review, although current styles of referring to previous work can make things transparent.
On the other hand I do agree that it’s good to have this line of argument out in the open, since Lindzen and Choi apparently thought it worth working out. I am quite unimpressed with the turning-point detection they used; surely MIT has some statistician who can straighten them out on that. Change-point detection in such cases should not be done “in vacuo” but with respect to the overall estimation – you don’t really care as much about what those dates were as you care about whether your climate sensitivity can be more honestly or closely bounded. Likely best would be using a probability distribution for the regime change as opposed to a change point, which can reduce variance and also increase robustness. Or if one loves point estimation too much, one could use a soft model like the log odds on the distribution of the change points (e.g. a logistic change).
Whenever a transition at an uncertain time is modeled as a certain transition (at the best estimated time) most models that estimate the best time work to reduce some residual as much as possible – typically by fitting as much sample error as possible, thereby biasing the point estimate. Almost any reasonable transition smoothing reduces the influence of the data in the immediate neighborhood of the transition, leading to much less risk of “overfitting” (i.e. cherry picking). As soon as you consider correlates conditioned by the transitions you are really asking for trouble unless you have confidence that those transitions are really nailed down.
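To make the soft-transition idea above concrete, here is a toy sketch in Python (synthetic data; the logistic form, noise level, initial guesses and least-squares fit are illustrative choices, not anything taken from LC09):

import numpy as np
from scipy.optimize import curve_fit

# Soft changepoint: instead of a hard switch at t0, weight the two regime
# means by a logistic function, so points near the transition have less
# influence on the fit (reducing the overfitting/cherry-picking risk
# described above).
def soft_step(t, mu1, mu2, t0, width):
    w = 1.0 / (1.0 + np.exp(-(t - t0) / width))   # 0 = regime 1, 1 = regime 2
    return (1.0 - w) * mu1 + w * mu2

rng = np.random.default_rng(0)
t = np.arange(100.0)
y = np.where(t < 60, 0.0, 1.0) + rng.normal(0.0, 0.5, t.size)  # synthetic series

params, _ = curve_fit(soft_step, t, y, p0=[0.0, 1.0, 50.0, 5.0])
print(params)   # fitted regime means, transition center, transition width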
Shirley says
So much going on. Busy, busy, busy. The heat island effect in urban areas is cumulative, and power plants of any kind are peripheral to cars, asphalt (an absorber of short-wave radiation while emitting long-wave; the darker the better at this), industrial-scale bakeries/food processing, etc. Here in my very small area, the immediate harbor of Lake Erie doesn’t freeze over due to the coal-fired (formerly Niagara Mohawk, now NRG) 600 MW Dunkirk, NY power plant. If I had to estimate the area of the lake that doesn’t freeze because of the hot cooling water expelled from the facility, I’d say it’s roughly 10-15x the footprint of the entire property (not just the plant). I’m not sure how many football fields that would be, but it would be measured in football fields, not in large units like states or even counties.
A very interesting but little-known textbook called “Energy, Physics and the Environment” by McFarland et al turned out to be a very good primer on pretty much all known forms of energy and the general footprint of each. The book’s description says it’s for a student who has already had college physics 101, but I would recommend it to anyone. It offers a pretty balanced, dispassionate presentation of what is available, feasible and known. The authors also delve into the fringes (e.g. fusion) and the whole lifespan of technologies (starting with mining) in a pretty clear format. It’s not a perfect book, but I would recommend it to anyone who has an interest in getting a broad overview of energy extraction methods and their real costs, from cradle to grave. It’s not a book with an agenda other than to present the facts, so readers are left to make up their own minds. Many of the chapter-end questions are easy mathematically but thought-provoking, and answers, without the steps to obtain them, are provided for all the questions.
So I bring up that book to go back to things like tidal energy and extracting it to do work (covered briefly, but with equations, in the book, while wind is treated a bit more extensively, although friction tends to be ignored along with overcoming the initial inertia, while apparently well-established constants, which I have found elsewhere, dominate the equations – but I digress)… Tidal energy, from what I’ve read about it, just converts existing energy that already does work (on shores, ocean floor, etc., wherever) but isn’t put to human use. What I learned from that book, which I hadn’t before considered (like the cooling which occurs due to evaporation), is that when water “falls” from one location and lands on another, heat is released, which can be used, in conjunction with the force of gravity, to do work (extractable energy). This can be observed, if you’re somewhat sensitive, by sticking your hand into a bowl of water and comparing how that feels to pouring that same bowl of water in a direct stream onto the same part of your hand. One will feel colder than the other, but neither makes any significant effect on the room temperature (and this analogy could probably be used in some clever way to distinguish between weather and climate as well).
The temperature of the water after it falls onto a surface (as in performing work on a turbine) is higher than its starting temperature (assuming all other regional temperatures are the same, including the surface it hits; some substances dissipate heat more easily, others more slowly, as in wood, an insulator, vs. copper, a conductor), but we’re not talking about large amounts of heat. There is no fire involved in these processes, and this, in my mind, is crucial: there is no STEAM involved. STEAM = water, converted into steam by constant high-temperature burning of some substance. In coal and nuke plants, steam is used to move turbines to create work; the same with active geothermal plants (the ones which use molten magma, not the kind that rely on localized air temperature in near-surface bore holes), so water vapor, along with whatever other emissions the heat source emits, is sent into the atmosphere. With solar, wave (tidal) and wind, no steam is required and no massive human-generated heat source is required. No aquifers depleted, no heat island effect.
That isn’t to say that there couldn’t be ecological problems (fish which depend on shores where tidal power might be useful are already in danger, bats and birds are being imperiled by poorly placed wind turbines, and a heat island effect from a concentrated solar installation could create its own problems), but, combined with conservation, it’s all better than anything we have. My personal conclusion is that if the power technology uses water (steam, as in all coal, active-magma geothermal and nuke plants), then it’s not sustainable. We’re talking about some pretty massive amounts of water for each, and there are lots of places running out of water. Pumping more and more of the sequestered CO2 into the atmosphere is very, very harmful for our (humanity’s) long-term survival, but if we wipe out our fresh water by pollution and overuse, then doomsday comes much sooner. We’ve lived with water like it’s endless and, worse, let corporations treat it like it’s in endless supply, and those days are coming to a close, even here in America. In the meantime, we can argue about things like whether or not LC09 has any real meaning for what we’re doing, while each and every one of us contributes to what could be our demise if we don’t get serious.
Doug Bostrom says
Larry says: 9 January 2010 at 11:44 PM
Alternatively, marrying a faculty member of a university works a treat. Just pick one at a school w/bulk access to journals.
Andrew says
@HCG: “Nonstationarity of error terms is a serious problem in time-series analysis, but I don’t have a good sense of how well this issue has been treated in climate analysis.”
It’s a very well understood issue, but you have to understand that nonstationary residuals may very well occur in nonstationary time series. It is not always a ‘flaw’; it can be a fact.
There is no reason to expect in general that a nonstationary stochastic process will have stationary residuals. In fact, it is sort of exceptional for that to occur – exceptional in any of several senses which can be made mathematically precise.
H Hak says
Here are some similar concerns Roy Spencer has regarding LC09: http://www.drroyspencer.com/2009/11/ (scroll down).
AxelD says
Off topic, but the UK press is reporting in this story that oceanic cycles are producing the weather changes many of us are seeing in the NH, and that they’ll continue. The story quotes Mojib Latif, who gives the same story as he did last year – cooling for at least 20 years – plus a whole lot more about weather for the next few decades.
A Professor Tsonis is also quoted as saying that multi-decadal oscillations explain all the major changes in world temperatures in the 20th century. “We can expect colder winters for quite a while”.
I believe that the GCMs don’t account for these MDOs that Latif and Tsonis say explain the climate shifts. What is RC’s position on this? Press stories like this here in the UK are likely to gain a lot of traction.
[Response: Wow. Quite frankly I find these comments (assuming the recent quotes are accurate and in context) very strange. Neither Tsonis nor Latif can have done any kind of attribution study with data from this winter, and so their connection of two weeks of negative AO to some multi-decadal cycle is just speculation. Latif’s paper in 2008 made predictions for the period 2000-2010 which are guaranteed not to come true (this year would need to be as cold as a year in the 1970s) – and for quite well understood reasons (see this recent paper by Dunstone and Smith (2010) (sub. reqd.)). Tsonis’s paper was discussed here by his co-author. – gavin]
Ray Ladbury says
Andrew @59 says “Change point detection in such cases should not be done “in vacuo” but with respect to the overall estimation – you don’t really care as much about what those dates were as you care about whether your climate sensitivity can be more honestly or closely bounded.”
Important point – and one that is under-appreciated. You have to treat the change date (or other such parameter) as a parameter. This was very clear in an analysis I did where I developed a fitting routine based on the Akaike Information Criterion: if the additional parameters don’t add information, they should be omitted and a simpler model used. Failure to do so can really distort results.
Regarding the whole LC’09 analysis, the words “too clever by half” spring to mind.
And Re: the peer review process, I suspect that an anonymous review would have been impossible. Everyone knew Lindzen had been working on this. He’d essentially published the analysis on WUWT. In any case, I think that reviewers often give some leeway to dissenting opinions, regardless of authorship.
Ray Ladbury says
Walter Manny,
We’re talking Geophysics Research Letters here, ferchrissake. The whole point of the publication is to get results out in front of the community with rapidity. The very fact that LC’09 was published, despite obvious flaws that had been identified even in its blog version, simply puts paid to the assertion in the denialosphere that you guys can’t get published. You can get published–it’s just that what you publish is usually wrong or uninteresting.
Ray Ladbury says
Barry asks, “I’m curious – I assume it’s normal for studies to rebut single papers, but is this amped up particularly in climate science? I’m wondering if the climate wars, which are so hard-fought in the political arena and semi-popular blog literature, have accelerated the practise.”
I see no difference between climate science and any other field at the level of journals or conferences. It’s only in the blogosphere, the editorials and the political arena where things get nasty. In other fields I’ve seen some really nasty, lifelong feuds, too. Scientists are human – well, most of us. They said that von Neumann was a Martian who had, by later life, learned to do a good imitation of human behavior.
Ray Ladbury says
Jim Bouldin,
Well, as someone who had the job of following geophysics for a physics magazine, I didn’t devote a whole lot of attention to GRL. The real purpose as you say was “breaking news”–discoveries of interest to the community. In reality, it has come to be a place where you publish results when you don’t want to deal with the rigor and time of an intense peer review. It’s not a dumping ground, and I did faithfully scan the tables of contents every month, but GRL does have a low SNR.
David says
@dhogaza (40) & GFW (42): Thanks guys. The reason for apologising and expecting flames was that of going off-topic. The reason for posting here first rather than doing a Google search first (dhogaza – you are correct) was the usual obvious one: gambling on a better result by asking within a forum of genuinely interested parties than by doing a Google search, however well you frame the query. Indeed, subsequent Google searches turned up little in (so far) 20 minutes of work. However, there was one item, so, regarding “if you come across anything interesting post it here” – well, all right then:
http://www.ucalgary.ca/~keith/papers/94.Kirk-Davidoff.SurfaceRoughnessJAS.p.pdf
But only from one of the same authors, unfortunately.
Ray Ladbury says
HH says, “Any process engineer will tell you what the end result is in a system that has only positive feedback components, process ending in the extreme temperature.”
Actually, only the ones who haven’t studied infinite series would tell you that. As long as each term in the series decreases geometrically, the series converges:
http://mathworld.wolfram.com/Series.html
And as Gavin points out, there are negative feedbacks.
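A quick numerical illustration of the convergence Ray describes (Python; the feedback factor f = 0.6 is arbitrary, chosen only so that |f| < 1):

# Each round of feedback adds f times the previous increment; for |f| < 1
# the series 1 + f + f^2 + ... converges to 1/(1 - f) instead of running
# away to an extreme.
f = 0.6
total, term = 0.0, 1.0
for n in range(50):
    total += term
    term *= f
print(total, 1.0 / (1.0 - f))   # both ~2.5: amplified, but finite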
cervantes says
I think you are making a serious mistake by ignoring what is happening right now in the real world. The populous regions of the northern temperate zone are experiencing historically unprecedented cold — and I don’t just mean since systematic temperature records were kept, I mean since people started to make written records. Frost in New Orleans, snow in Orlando and Chihuahua, Ireland and the British Isles completely covered in snow. I know this has to do with the Arctic Oscillation and that temperatures elsewhere are warm, but politically, the consequences of this event are obvious to me — most people just aren’t going to believe the world is getting warmer when the world around them is manifestly colder than it has ever been. And any hope of political action in the U.S. to reduce carbon emissions is gone, completely, for at least the next few years, until people’s immediate experience changes. That’s a fact, and you need to acknowledge it, and come down out of the ivory tower and address it.
Just some friendly advice
[Response: “Colder than it’s ever been”? Really? – gavin]
Yvan Dutil says
Well, at the same time Eastern Canada had very unseasonably warm weather, roughly 15 degrees above average. Since there are no big cities there, nobody speaks about it.
Dallas says
LC’09 and the subsequent discussions have been fascinating. As an unintended consequence, they seem to have taken some of the wind out of the deniers who often pollute the RC discussions. Good riddance; they have been cluttering things up. Maybe I am wrong, but I don’t believe that an MIT professor and an MIT post doc could have accidentally made the mistakes they have made. [edit – this is not ok]
Obviously I am not nearly as charitable as the non-denier RC experts whose opinions I respect.
Joe says
“More generally, this episode underlines the danger in reading too much into single papers……Research at the cutting edge – where you are pushing the limits of the data or the theory – is like that. If the answers were obvious, we wouldn’t need to do research.”
I fully agree with this. Also, it is good to realise that, together with the good research, countless flawed, biased or erroneous papers get published annually in peer-reviewed scientific journals. This may come as a surprise to those not involved in the process themselves.
I do not mean that we should abandon the scientific process. I do believe that in the long run it will deliver the goods. Emphasis on the long run. Especially with something as complicated as global warming.
Bill says
re #71: Colder than it has been since 1981-2, then 1962-3, then 1947, then… oh, this weather stuff is boring.
Spaceman Spiff says
@51 says:
“Nowhere are there any mentions of the negative feedback components of earth climate, or the study of them.”
Because climatologists are actually interested in understanding how Earth’s climate works, they do investigate and assess negative as well as positive feedbacks. See, if you weren’t interested in understanding how nature behaves, you’d go around cherry-picking only those data which confirm what we already know must be true. Of course, such behavior would lead to nothing learned, contrary to the actual development of knowledge and understanding since the early 19th century.
For starters try here and here for discussions of the effects and amplitudes of both types of feedback. An enormous amount of work has been done to understand all of the important feedback mechanisms.
Joe Blanchard says
Eastern Canada is NOT warmer than usual. In Montreal, we are having a “normal” winter. However, November was warm with some 15 C days.
It was -20 C last night.
Joe Blanchard says
Just out of curiosity.
Is there any paper or study that has tried to come up with a figure for the maximum theoretical temperature that the earth could achieve?
Ray Ladbury says
Dallas@73: Let’s not play this game. LC’09 had some significant flaws that were spotted fairly quickly once it was published. If his intent had been to deceive, do you think he would have published it and put it out in front of all the smart people, or would he have made the rounds of denialist blogs and received their adulation?
Lindzen and Choi played by the rules. They published. This puts them lightyears ahead of the typical denialists. Their ideas were put out in front of the community, flaws were found, and now L&C have acknowledged the flaws and promised to submit an improved version. Kudos to them. That is how you play the game of science.
Anti-science is what happens when one publishes on blogs frequented by laymen who read things refracted through their own ideological prism and shout down the voices of scientific reason that point out flaws. I have been harshly critical of Lindzen in the past for overstepping the bounds of scientific respectability in op eds, public debates, etc.
I’ll even go so far as to take back my assertion that LC’09 is “not even wrong” and apologize to the authors. It had clear flaws, but remember the authors aren’t experts in interpreting satellite data. Maybe they have an ideological bias, but I think that they were honestly trying to call attention to something they thought was important.
The important thing to take away from this is the way science works–the individual scientists have biases, but the product of the collective effort of the scientific community winds up with those biases greatly diminished.
Andrew says
@Ray Ladbury: “Important point–and one that is under-appreciated. You have to treat the change date (or other such parameter) as a parameter.”
Absolutely. This falls into the category of “do we actually have to say this?” Apparently it is necessary to say it.
On the other hand, simply counting parameters is really not recommended, nor are simple complexity measures like AIC, etc. Too much distributional dependence and sample independence is needed for those approximations to work. You really want to be looking at eigenvalues of the Fisher information if you want to live with efficient estimation. I personally have given up on efficient estimation in practice, but superefficient estimation is probably nonviable in climate, where the political pressure on credibility probably makes it impossible to use methods which, however objectively superior, necessarily include arbitrary choices.
And another “need we say it?” point is that change points that were parameterized in preliminary or unsuccessful analyses reduce the significance of the final analysis even if those parameters do not appear – or else the “multiple testing” form of the prosecutor’s fallacy can occur. To make this concrete, suppose you require the probability of a false positive to be no more than 5% in order to accept an analysis. Well, as long as you have more than 100 analyses up your sleeve, then none of those analyses need be anything other than chance for you to have a 99% chance of finding one test with 95% significance. (The formula is 1 – (1 – p)^n, where p is the probability of a false positive and n is the number of independent trials with that probability.)
When you have a division of noisy data into epochs by change points, it doesn’t take too many change point choices to result in this sort of effect – for example 7 change points, each with 2 choices, yield 128 different possibilities. And if you present the one division of epochs that gave the “best results” – possibly for good reason – then the significance of that result is still much less than it would be had the other possible divisions not been considered.
Note that this is an additional complication to possible sensitive dependence of the inference on the location of the change points – it is only necessary that some parameterizations of the model result in an uncertain distribution of the value of interest for the significance to be much reduced, it is not required that the values actually fitted exhibit this sensitivity for the inference to suffer the loss of significance. This criticism will apply not just to the Lindzen and Choi analysis, but to other “counter-analyses”, unless care is taken to avoid this problem.
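To check the arithmetic in the two paragraphs above (p and n as in the formula quoted), a short Python version:

# Probability of at least one "significant" result among n independent
# tests, each with false-positive probability p: 1 - (1 - p)^n.
def family_false_positive(p, n):
    return 1.0 - (1.0 - p) ** n

print(family_false_positive(0.05, 100))   # ~0.994, the ~99% quoted above
print(family_false_positive(0.05, 128))   # ~0.999 for the 2^7 = 128 divisions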
Chris ODell says
Given the large number of comments on the peer-review process in general and in the LC09 case in particular, it is probably worthwhile to give a bit more backstory to our Trenberth et al. paper. On my first reading of LC09, I was quite amazed and thought that if the results were true, it would be incredible (and, in fact, a good thing!) and hence warranted independent checking. Very simple attempts to reproduce the LC09 numbers simply didn’t work out and revealed some flaws in their process. To find out more, I contacted Dr. Takmeng Wong at NASA Langley, a member of the CERES and ERBE science teams (and a major player in the ERBE data set), and found out to my surprise that no one on these teams was a reviewer of LC09. Dr. Wong was doing his own verification of LC09, and so we decided to team up.
After some further checking, I came across a paper very similar to LC09 but written 3 years earlier – Forster & Gregory (2006) , hereafter FG06. FG06, however, came to essentially opposite conclusions from LC09, namely that the data implied an overall positive feedback to the earth’s climate system, though the results were somewhat uncertain for various reasons as described in the paper (they attempted a proper error analysis). The big question of course was, how is it that LC09 did not even bother to reference FG06, let alone explain the major differences in their results? Maybe Lindzen & Choi didn’t know about the existence of FG06, but certainly at least one reviewer should have. And if they also didn’t, well then, a very poor choice of reviewers was made.
This became clear when Dr. Wong presented a joint analysis he & I made at the CERES science team meeting held in Fort Collins, Colorado in November. (http://science.larc.nasa.gov/ceres/STM/2009-11/index.html). At this meeting, Drs. Trenberth and Fasullo approached us and said they had done much the same thing as we had, and had already submitted a paper to GRL, specifically a comment paper on LC09. This comment was rejected out of hand by GRL, with essentially no reason given. With some more inquiry, it was discovered that:
1) The reviews of LC09 were “extremely favorable”
2) GRL doesn’t like comments and is thinking of doing away with them altogether.
3) GRL wouldn’t accept comments on LC09 (and certainly not multiple comments), and instead it was recommended that the four of us submit a stand-alone paper rather than a comment on LC09.
We all felt strongly that we simply wanted to publish a comment directly on LC09, but gave in to GRL and submitted a stand-alone paper. This is why, for instance, LC09 is not directly referenced in our paper abstract.
The implication of statement (1) above is that LC09 basically skated through the peer-review process unchanged, and the selected reviewers had no problems with the paper. This, together with GRL’s summary rejection of all comments on LC09, appears extremely sketchy.
In my opinion, there is a case to be made that the peer-review process is flawed, at least for certain papers. Many commenters say the system isn’t perfect but in general works. I would counter that it certainly could be better. For AGU journals, authors are invited to give a list of proposed reviewers for their paper. When the editor is lazy or tight on time or whatever, they may just use the suggested reviewers, whether or not those reviewers are appropriate for the paper in question. Also, when a comment on a paper is submitted, the comment goes to the editor who accepted the original paper – a clear conflict of interest.
So yes, the system may work most of the time, but LC09 is a clear example that it doesn’t work all of the time. I’m not saying LC09 should have been rejected or wasn’t ultimately worthy of publication, but reviewers should have required major modifications before it was accepted for publication.
Hank Roberts says
> 77 Joe Blanchard says: 10 January 2010 at 10:47 AM
> Eastern Canada is NOT warmer than usual.
Joe, it is LESS COLD than usual; that’s what the map shows.
This does not mean you feel warmer, nor that local temps aren’t cold.
It’s the _anomaly_ from the long-term mean that is shown on the map.
I wish Andy Revkin explained this every time; his DotEarth thread posters make the same mistake repeatedly.
Ray Ladbury says
Andrew@80 This may be taking the discussion too far afield, but if Gavin will indulge our side discussion…
I agree that the Fisher Information is a fundamental quantity, but I am also not quite ready to give up on quantities like AIC, BIC, DIC… for the simple reason that I think overfitting is a significant problem in many analyses. In some cases in my field, under-fitting is also an issue. For example, in the example you gave, the changepoints are additional parameters, and the likelihood would have to improve exponentially to justify the added complexity. I don’t see how you get the same Occam’s-razor effect just with Fisher Information (I’ll admit this could be due to the fact that I’m just a dumb physicist). One thing I have noticed is that for a “good” model the decrease in log-likelihood is less than linear as you add data, while for a “bad” model, it doesn’t improve or may even worsen.
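As a toy illustration of that penalty (Python; synthetic noise-only data, with the candidate changepoint fixed in advance – scanning over all possible changepoints would reintroduce exactly the multiple-testing problem discussed earlier in this thread):

import numpy as np

# Gaussian-likelihood AIC up to an additive constant: n*log(RSS/n) + 2k.
def aic(rss, n, k):
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, 200)   # pure noise: no real changepoint
n, c = y.size, 100              # c: candidate changepoint, fixed in advance

rss1 = np.sum((y - y.mean()) ** 2)                 # model 1: one mean, k = 1
rss2 = (np.sum((y[:c] - y[:c].mean()) ** 2)        # model 2: two means plus
        + np.sum((y[c:] - y[c:].mean()) ** 2))     #   the changepoint, k = 3
print(aic(rss1, n, 1), aic(rss2, n, 3))   # lower is better; model 1 should win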
BTW, speaking of overfitting, have you seen this wonderful example:
http://blogs.discovermagazine.com/cosmicvariance/2007/07/13/the-best-curve-fitting-ever/
Andrew says
@HH, Ray Ladbury: HH said, “Any process engineer will tell you what the end result is in a system that has only positive feedback components, process ending in the extreme temperature.” Ray replied, “Actually, only the ones who haven’t studied infinite series would tell you that. As long as each term in the series decreases geometrically, the series converges.”
HH is thinking of feedback in the context of systems, which is a bit different than a series of positive terms.
What HH has in mind is that the linear system of differential equations y’ = Ay is unstable if A is not identically zero and none of the elements of the matrix A are negative. This is an elementary consequence of Perron-Frobenius theory, which provides that such A have a positive eigenvalue. It follows that nonlinear equilibria or periodic solutions with such linearizations are unstable.
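A minimal numerical check of that criterion (Python; the 2x2 matrices are arbitrary illustrations, not a climate model):

import numpy as np

# For y' = A y, the equilibrium y = 0 is stable only if every eigenvalue
# of A has negative real part. A nonzero matrix with all entries >= 0
# (pure positive feedback) has a real eigenvalue >= 0 (Perron-Frobenius),
# so it cannot be stable.
def is_stable(A):
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_pos = np.array([[0.1, 0.2],
                  [0.3, 0.1]])        # all couplings positive: runaway
A_damped = np.array([[-1.0, 0.2],
                     [0.3, -1.0]])    # strong negative (Planck-like) diagonal
print(is_stable(A_pos))      # False
print(is_stable(A_damped))   # True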
Since about the time of Budyko and Sellers, it has been useful to view climate as a nonlinear system of equations (I have in mind the works of Ghil and others – see North et al.: http://ams.allenpress.com/archive/1520-0469/36/2/pdf/i1520-0469-36-2-255.pdf).
In this sort of picture, you expect equilibria to be characterized by their linearizations, hence requiring some sort of negative feedback to be stable; the most important such feedback (radiation into space) already having been mentioned.
It is necessary in such a model, in the presence of both positive and negative feedbacks, to attempt to assess the stability of a purported equilibrium. However, one should not be too concerned with this sort of “phase portrait”, since even in the case of a known stable equilibrium the issue is whether the climate will remain in a happily habitable region – not whether it will eventually return to a habitable region after leaving it.
Joe Blanchard says
Hank Roberts,
I don’t care much for the map. I know that here in Montreal it is NOT less cold – I checked the history myself. That map is not reliable (the Met Office admitted it, but they have not corrected it).
Completely Fed Up says
“In this sort of picture, you expect equilibria to be characterized by their linearizations, hence requiring some sort of negative feedback to be stable”
And this is not the picture that represents climate feedbacks, Andrew.
Sorry, but there it is.
Steve Bloom says
Comments that are flagged as spam get deleted automatically? Now there’s some blog functionality. :(
Andrew says
@Ray Ladbury: “I agree that the Fisher Information is a fundamental quantity, but I am also not quite ready to give up on quantities like AIC, BIC, DIC… for the simple reason that I think overfitting is a significant problem in many analyses. […] I don’t see how you get the same Occam’s-razor effect just with Fisher Information.”
We certainly agree on the importance of knowing how good one’s fit is, and whether it is due to chance. However under- or over- fitting are really only approximate ideas of model quality.
One problem with the various xIC ideas is that they come from a parametric model of the likelihood which doesn’t accommodate a lot of things. For example, how many parameters does an SVM have? Or a regression tree?
The “occam’s razor effect” of all those xIC type information criteria comes from the Fisher information in the first place – in some sense the appropriate “dimensionality” for a model that will be estimated by conventional means is the number of eigenvalues of the Fisher information which are positive. However when you do not know the true parameter of a system, you are using an estimate of the Fisher information, and so you are up against the question of the number of positive eigenvalues of the Fisher information at some other, hopefully nearby point. The various xIC ideas all correspond to this sort of inference. It’s one way the Fisher information contributes to model assessment.
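As a concrete sketch of counting positive Fisher-information eigenvalues (Python; Gaussian linear regression, where the information matrix is X'X / sigma^2, and the collinear predictor is a contrived illustration):

import numpy as np

# A redundant (collinear) predictor contributes no information, and shows
# up as a (near-)zero eigenvalue of the Fisher information X'X / sigma^2.
rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
x2 = 2.0 * x1                                   # perfectly collinear with x1
X = np.column_stack([np.ones(100), x1, x2])

fisher = X.T @ X                                # take sigma^2 = 1
eigvals = np.linalg.eigvalsh(fisher)
print(eigvals)                                  # one eigenvalue ~0
print(np.sum(eigvals > 1e-8 * eigvals.max()))   # effective dimension: 2, not 3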
However it can be shown that if you want to estimate the true parameter, then the parameterization of your model itself matters. The Fisher information provides a Riemannian metric on the manifold of parameters. On this Riemannian manifold, it turns out that the kernel of the heat equation (using the Laplace-Beltrami operator from the Riemannian metric given by the Fisher information) provides a family of “reference” priors which are known in advance to outperform lots of other forms of estimation, especially in the case of high-dimensional models with lots of noise. These “superefficient” estimators will beat all the unbiased forms of estimation (which are limited to being merely efficient). As a result, xIC-type model selection, appropriate to efficient estimators, will choose a much lower-dimensional model than optimal. The superharmonic estimators “know” lots of stuff about the geometry of the whole manifold; your xIC stuff only knows about the geometry of the manifold in the neighborhood of your estimate of the parameter. You can think of the superharmonic estimator as being able to “see over the horizon” of a bumpy likelihood landscape – extremely powerful stuff.
This sort of stuff is starting to hit the open literature (see list below) – the Japanese school of information geometry is hot on this sort of trail.
Here’s the problem. There isn’t just one “best” superefficient estimator. Consistent with “Stein’s Paradox” – the granddaddy of all such “shrinkage” estimators – if you have one such estimator for a model, you have infinitely many and no objective way to prefer one over the others. So you pick your point or set of superefficiency, um, because it’s your favorite color, I don’t know. You can be quite confident of beating the more sensible-appearing lower-complexity efficient estimators, and it is in any objective sense the sort of estimation you should be doing if you really want the best possible answer. But the hang-up is: just try and explain why policy makers should prefer your estimate over worse-performing estimators which can at least be pretended to be objective. Ask the policy makers their favorite color when picking the estimator? There’s a reason why statisticians spent the last 50 years sweeping superefficient estimation under the rug. They got away with it for a long time because superefficient estimation does best when you have a very high-complexity model and not as much data to fit it as you would like.
I have seen stuff like that Laffer curve fit. As a long time practitioner of finance, I am a confirmed economics skeptic, if not an outright economic denialist.
Useful references:
Amari, S. (1987). Differential geometry of a parametric family of invertible linear systems: Riemannian metric, dual affine connections, and divergence. Mathematical Systems Theory 20, 53–82.
Amari, S. and Nagaoka, H. (2000). Methods of Information Geometry. Oxford University Press / American Mathematical Society.
Kass, R. and Vos, P. (1997). Geometric Foundations of Asymptotic Inference. Wiley-Interscience.
Komaki, F. (2006). Shrinkage priors for Bayesian prediction. To appear in Annals of Statistics. (http://arxiv.org/pdf/math/0607021)
Tanaka, F. and Komaki, F. Asymptotic expansion of the risk difference of the Bayesian spectral density in the ARMA model. (http://arxiv.org/abs/math/0510558)
Ghosh, M., Mergel, V. and Datta, G. S. Estimation, prediction and the Stein phenomenon under divergence loss. (http://dx.doi.org/10.1016/j.jmva.2008.02.002)
Andrew says
@CFU: “And this is not the picture that represents climate feedbacks, Andrew.”
I guess the link to North et al. was broken?
http://authors.library.caltech.edu/11329/1/NORjas79.pdf?
Hank Roberts says
Joe Blanchard writes:
> “(the Met Office admitted it but they have not corrected it).”
Where did you read that, Joe? I can’t find a source for your claim.
I hope you’re not misstating this guy’s blog column comments:
http://www.bbc.co.uk/blogs/paulhudson/2010/01/a-frozen-britain-turns-the-hea.shtml
Leo G says
Gavin, just wanted to give you guys a 2 thumbs up for this site. This subject alone is worth the cost of admission.
Being a lay person, it is very educational to see how science actually works. I think most of us lay people have pictures in our minds of overstuffed chairs, cigars and brandy in the profs’ lounge! Absolutely intriguing!
Thanx for allowing us into your world.
Simon Rika says
Several people have commented with the opinion that the peer review process may have been flawed in this case. This may be so, but as a layman I would like to offer my opinion on what peer review should be, at least from my perspective.
It seems to me that people are saying that papers that are wrong should be weeded out at the review level, but personally I think this is wrong unless the reviewer can show clear evidence of intentional ‘errors’. The contradiction of a paper should happen at the published level rather than the pre-published level if the paper in question is not obviously intentionally wrong.
I say this because doing it at the review level keeps it hidden from us laymen. We don’t see the contradiction to the paper, all we hear about is “they refused to publish it”.
Basically a concept from law is appropriate here: “Justice should not only be done, but be SEEN to be done.” Papers prevented from even being published are like secret trials. We (meaning us laymen) have to take both sides at their word rather than see the claim counter-claim process out in the open.
Wrong is not bad… fraudulent is. Being wrong can still educate the layman on the process by which science moves forward, and can help people like me to see that the science IS being done fairly and completely.
My argument would go both ways – if this paper was allowed through, but any contradictory papers weren’t then yes, ‘peer review’ would have failed because it would have been used to push one side or the other.
Peer review should be there to ensure that the published paper is as good as it can be… not to decide whether it is publishable at all (except, as I said, in extreme cases). If Lindzen and Choi want to stake their reputations on a paper, they should be allowed to, even if it is wrong. Let the follow-up papers show why and how they are wrong, and no one can complain that they were treated unfairly.
Basically, peer review should be there to allow the authors of a paper to see the kinds of arguments that will be made against the paper, and thus to modify the paper pre-publication to address those kinds of criticism. If the authors wish to go ahead with a paper that the reviewers feel is flawed, then that is their risk to take.
I guess what I am saying is that any other way of working would put the journals in the position of being the gatekeeper, deciding what is and isn’t “science”, and giving them the ability to bias the field one way or another, even if unintentional. The journals should be the formal discussion forum, not the final word on what is or is not correct – they are there to keep the discussion formal, not to decide what should be discussed.
Hank Roberts says
Oh, I see. Sorry for the digression, folks.
Joe’s looking at today’s temperatures in Montreal, comparing them to
http://www.metoffice.gov.uk/corporate/pressoffice/2010/images/20100106b-chart.jpg (26 December through January 1st).
This may help:
http://www.weather.com/outlook/events/weddings/monthly/CAXX0301?from=36hr_topnav_wedding
— 23F Jan. 1st in Montreal;
— 3F yesterday in Montreal.
That’s the weather. Now back to your climate discussion, I hope.
Barton Paul Levenson says
buenos dias, cervantes: it’s winter.
Record high temperatures beat out record lows by two to one over the past decade.
Doug Bostrom says
Comment by cervantes — 10 January 2010 @ 9:52 AM
It’s important not to dismiss Cervantes’ objection too lightly.
— Average people are members of the electorate. To a greater or lesser extent, policymakers are responsive to the electorate.
— If the average person is unable to reason out something so basic as the difference between weather and climate, it is quite unlikely they’ll be able to follow the science behind climate change.
— For the specific case cited by Cervantes, if the average person is not helped to reason out why today’s weather is an unreliable indicator of the future, the average person is not going to be able to send a signal of concern to policymakers.
— As Cervantes indicates, with such a poorly prepared electorate, the policy response to climate change will be severely slowed.
No surprise to denizens of RC, but there is an excellent site with friendly and comprehensible explanations of virtually all of the misunderstandings encountered by the average person with regard to climate science.
Here’s how that site explains how to sort out confusion over weather versus climate:
http://www.skepticalscience.com/global-warming-cold-weather.htm
Doug Bostrom says
I should add with regard to Cervantes’ remarks, what little actual scientific research (as opposed to opinion polls) has been performed on public understanding of climate science indicates that the public (in the U.S. at least) has been dithering around a fairly poor level of understanding for the past 15 years.
Public thinking about climate is actually surprisingly good, considering the firehose of deception directed into the ears of John Q. Citizen, but is not up to delivering a useful message to policy makers.
Beyond reactive battling against malicious PR campaigns there’s a huge job of remedial education waiting to be done here. Overcoming susceptibility to misleading PR requires shovel work at a basic level, and more than shovels it needs patience.
Andrew says
@Simon Rika: “It seems to me that people are saying that papers that are wrong should be weeded out at the review level, but personally I think this is wrong unless the reviewer can show clear evidence of intentional ‘errors’.”
Oh my no. You want the referees to catch as many of your unintentional errors as possible to save you the difficulty of having them immortalized in print.
Jiminmpls says
#93 Hank and Joe
This should be under the Unforced Variations thread, but it came up here, so…
Hank, the anomaly map you cite is for Dec 26-Jan 1. The current cold snap started after that.
This cold snap is indeed unusual – at least in eastern North America. It’s *significantly* colder than normal in most of the US east of the Rockies. The cold reaching down into the SE United States is particularly unusual – not just in terms of the temps reached, but more so in the duration of the cold. It is very strange – especially in an El Niño year, when we were expecting a mild winter.
There is a clear cause for the strange weather pattern: a very strong negative Arctic Oscillation. This negative AO is NOT NATURAL: it is an effect of severe warming in the Arctic.
From http://www.wunderground.com/blog/JeffMasters/comment.html?entrynum=1398
“A new atmospheric pattern emerges: the Arctic Dipole
In a 2008 article titled, Recent radical shifts of atmospheric circulations and rapid changes in Arctic climate system Zhang et al. show that the extreme loss of Arctic sea ice since 2001 has been accompanied by a radical shift of the Arctic atmospheric circulation patterns, into a new mode they call the Arctic Rapid change Pattern. The new atmospheric circulation pattern has also been recognized by other researchers, who refer to it as the Arctic Dipole (Richter-Menge et al., 2009). The old atmospheric patterns that controlled Arctic weather–the North Atlantic Oscillation (NAO) and Arctic Oscillation (AO), which featured air flow that tended to circle the pole, now alternate with the new Arctic Dipole pattern. The Arctic Dipole pattern features anomalous high pressure on the North American side of the Arctic, and low pressure on the Eurasian side. This results in winds blowing more from south to north, increasing transport of heat into the central Arctic Ocean.”
While these reports concerned earlier episodes of the Arctic Dipole pattern, the same appears to be occurring now. The high pressure over Greenland is forcing cold Arctic air southward and causing the unusually cold weather in eastern North America.
So, ironically, the unusually cold weather we are experiencing is most likely an effect of global warming! (I’m just a lay person trying to connect the dots. I’m sure that I’ve misunderstood more than one thing along the way.)
[Response: Don’t get carried away with pop attributions. The AO has a very strong random component, even while there is some evidence that its pdf can be shifted by increasing GHGs, volcanoes, solar etc. The expected tendency as CO2 increases is towards slightly more positive phases (Miller et al, 2006), with a similar tendency associated with volcanic effects (‘winter warming’) and long-term solar. You could make a vague (and I think weak) argument that the current phase of the solar cycle could give a slight tendency towards a negative AO, but the magnitude of any forced tendency is much, much smaller than what we’ve seen so far this winter. AFAIK, there is no evidence or model study suggesting that we should expect greater variance in the AO as a result of any of these forcings. – gavin]
Brian Dodge says
Hypothetically, let’s suppose GRL picked three reviewers; Spence Royer, a well known AGW skeptic and flat earth creationist, Lad Raybury, a middle of the road physick who accepts the mainstream view of AGW but is aware of its shortcomings, and Lord Blowhawk, a warmist who spends half his time running arcane incomprehensible models and the other half blogging about how the sky is falling. Whatever their sociopolitical bent, they have the math/physics/science chops to understand the fundamentals of LC09 and make a reasonable assessment of its merit.
Spence’s initial reaction is “Aha, this will put another nail in the AGW coffin”, but he takes his job as a reviewer seriously, and finds a few flaws or weaknesses, suggests some improvements, and recommends publication.
Lad’s initial reaction is “this isn’t even wrong – they are obviously unaware of FG06”, but he takes his job as a reviewer seriously, and finds a few flaws or weaknesses, suggests some improvements, and although he doubts this paper will significantly advance the science, maybe some young Turk thinking about how FG06 and LC09 differ will make a breakthrough, so he recommends publication.
Lord Blowhawk’s reaction is “more denialist propaganda disguised as Real Science”, but he takes his job as a reviewer seriously, and finds a few flaws or weaknesses, suggests some improvements, makes notes of not-so-obvious flaws where it can be attacked after publication, hopes that maybe the embarrassment of a crap paper will quiet some of the denialist camp and help influence policy makers, and recommends publication.
Which is my take on why unimportant (and I’m specifically avoiding “bad” or “error-ridden”, as these are usually post facto judgments) papers, “useful fools” of publication, make it into print.
(I suppose I should include the usual “None of the people or events depicted in this scenario are actual events or depictions of real people. It is a fictional account intended only for illustrative purposes, and any names similar to real world people are coincidental and used here for the edification of readers, without sarcasm or intent for ridicule” disclaimer &;>)
Jason Miller says
Re#71 Gavin’s Response
I read Revkin’s Dot Earth blog and was surprised by the statement that rain was falling in Greenland in winter. I then did a Google search and found this article –
Rain speeds Antarctic Peninsula glacier melt – http://www.reuters.com/article/idUSTRE50F35D20090116
This got me curious about the effects of rain on the glaciers and if more rain led to more and quicker melting. And if this was considered in the glacier melt models. I found the following articles:
Reducing the uncertainty in the contribution of Greenland to sea-level rise in 20th and 21st centuries by Bugnion (2000) – http://www.uas.alaska.edu/envs/publications/pubs/Motyka_et_al.pdf
This provides MIT rain-versus-snow modeling to calculate runoff (e.g., ice melts faster than snow; bare ice will not freeze rain until the next snowfall). The paper concludes “The changes in sea level estimated by all three models for the 20th and 21st centuries cannot be distinguished from zero at any confidence level.” This does not seem to support the Reuters article, but it is limited in scope to Greenland.
Submarine melting at the terminus of a temperate tidewater glacier, LeConte Glacier, Alaska, U.S.A. by Motyka et al (2003) – http://www.uas.alaska.edu/envs/publications/pubs/Motyka_et_al.pdf
This paper does state that melt rates are highest in late summer and after heavy rain. And “However, it is likely that submarine melting does contribute directly and indirectly to both short- and long-term changes in terminus position. If so, we suggest that prolonged periods of exceptionally heavy rain, coupled with warm fjord water temperatures, could trigger terminus destabilization of a tidewater glacier. We note that LeConte Glacier began its retreat in fall of 1994, after a long period of exceptionally heavy rain.”
Greenland Ice Sheet Surface Mass Balance Variability (1988–2004) from Calibrated Polar MM5 Output by Jason E. Box et al (2006) – http://polarmet.osu.edu/jbox/pubs/Box_et_al_J_Climate_2006.pdf
The paper only mentions “liquid water lubrication of ice sheet flow, as suggested by Zwally et al. (2002).” It does not say anything about the actual contribution of rain to the melting of the ice.
Elimination of the Greenland Ice Sheet in a High CO2 Climate by Ridley et al (2005) – http://epic.awi.de/Publications/Rid2005a.pdf
Describes saturation of liquid water in the snowpack.
The Dynamic Response of the Greenland and Antarctic Ice Sheets to Multiple-Century Climatic Warming by Huybrechts, and de Wolde (1999) – http://ams.allenpress.com/perlserv/?request=get-document&doi=10.1175%2F1520-0442%281999%29012%3C2169%3ATDROTG%3E2.0.CO%3B2
It gives a good description of rain versus snow in the models, but does not seem to include anything about rain increasing the ice melt. It may, but I didn’t understand the equations.
Greenland and Antarctic Mass Balances for Present and Doubled Atmospheric CO2 from the GENESIS Version-2 Global Climate Model by Starley L. Thompson and David Pollard (1997) – http://ams.allenpress.com/perlserv/?issn=1520-0442&issue=05&page=0871&request=get-document&volume=010&ct=1
Contains equations for refreezing and meltwater corrections for models.
Greenland’s climate: A rising tide by Quirin Schiermeier (2004) – http://www.nature.com/nature/journal/v428/n6979/full/428114a.html
It’s behind Nature’s pay wall. Google listed this quote: “Less snowfall and more rain would cause the ice to disappear at a faster rate than …” which supports the statement in the Reuters article.
Modelling Changes in Glacier Mass Balance That May Occur as a Result of Climate Changes, by Roger J. Braithwaite and Yu Zhang © 1999 Swedish Society for Anthropology and Geography. – http://www.jstor.org/pss/521488.
Another pay wall. Google quote: “Simulated snow melt and rain being insignificant in amount over the Antarctic ice sheet”
After I finished search I then went back to Revkin’s blog then to the blog where he first mentioned rain in Greenland. I linked to his source of the data and found the following message in a box at the top –
From Weather-Forecast.com – http://www.weather-forecast.com/locations/uummannaq20/forecasts/latest –
Alert: We recently moved our weather-forecasts to a new machine. An incompatibility with our existing code has resulted in equivalent rain being forecast rather than snow, regardless of whether the temperature is close to or below zero. Update 22:30 GMT 7-Jan-2009: This bug has been fixed. We apologize for the error.
So I guess Revkin got some bad info about it raining in Greenland.
Re #40 – Google time plus searching and reading papers then writing this = 2 hours+, not 30 seconds. I know it would have been quicker to just post a question, but going through the papers was rewarding and something to do on a very cold Sunday afternoon.
If anyone else knows of some good papers not behind pay walls please let me know. I watched 22 inches of snow melt to a few inches over a cold (just above freezing), wet Christmas day last month and I am curious about any acceleration of the glaciers and ice packs that rain can cause.
Thanks