Richard Lindzen is a very special character in the climate debate – very smart, high profile, and with a solid background in atmospheric dynamics. He has, in times past, raised interesting critiques of the mainstream science. None of them, however, have stood the test of time – but exploring the issues was useful. More recently though, and especially in his more public outings, he spends most of his time misrepresenting the science and is a master at leading people to believe things that are not true without him ever saying them explicitly.
However, in his latest excursion, a briefing at the House of Commons in the UK, among the standard Lindzen arguments was the following slide (which appears to be a new addition):
What Lindzen is purporting to do is to compare the NASA GISS temperature product from 2012 to the version in 2008 (i.e. the y-axis is supposedly the difference between what GISS estimated the anomaly to be in 2012 relative to 2008). A rising trend would imply that temperatures in more recent years had been preferentially enhanced in the 2012 product. The claim being made is that NASA GISS has ‘manipulated’ (in a bad way) the data in order to produce an increasing trend of global mean temperature anomalies (to the tune of 0.14ºC/Century compared to the overall trend of 0.8ºC/Century) between the 2008 and 2012 versions of the data, which are apparently shown subtracted from each other in Lindzen’s figure. Apparently, this got ‘a big laugh’ at his presentation.
However, this is not in the least bit true: the data are not what he claims, the interpretation is wrong, and the insinuations are spurious.
The annotation indicates that Lindzen is using the GISTEMP Land-Ocean Temperature index (LOTI, i.e. the index that includes weather station data and sea surface temperature data to give a global anomaly index with wide spatial coverage) (“GLB.Ts+dSST.txt”). There is another GISTEMP index (the Met station index) which only uses weather station data (“GLB.Ts.txt”) which doesn’t have as much coverage and has a substantially larger trend reflecting the relative predominance of faster-warming continental data in the average.
Old versions of the data can be retrieved from the wayback machine quite readily, for instance, from February 2006, December 2007 or October 2008. The current version is here. I plot these four versions and their differences below:
As should be clear, the differences are tiny, mostly reflecting slightly more data for the earlier years in the latest version and the different homogenisation in GHCN v3 compared to GHCN v2 (which was used up to Dec 2011). This is, however, in clear contradiction to Lindzen: the biggest difference in trend (between 2006 and today) is a mere 0.05ºC/Century, and from 2008 to 2012 it is only 0.003ºC/Century – a factor of more than 40 smaller than Lindzen’s claim. What is going on?
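The check described here – subtracting one version of an index from another and fitting a trend to the residual – is simple to sketch. Below is a minimal, self-contained illustration using synthetic anomaly series in place of the actual GLB.Ts+dSST.txt files (the magnitudes are loosely chosen to mimic the situation described, not real GISTEMP data):

```python
import numpy as np

def trend_per_century(years, anomalies):
    """Least-squares linear trend, converted from degC/yr to degC/century."""
    slope = np.polyfit(years, anomalies, 1)[0]
    return slope * 100.0

# Synthetic stand-ins for two versions of the same index: the "2012"
# version differs from the "2008" one only by tiny homogenisation-like
# adjustments (std 0.005 degC), as the wayback-machine comparison shows.
years = np.arange(1880, 2008)
rng = np.random.default_rng(0)
v2008 = 0.006 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)
v2012 = v2008 + rng.normal(0.0, 0.005, years.size)

diff_trend = trend_per_century(years, v2012 - v2008)
# For near-identical versions this comes out at thousandths of a degree
# per century -- nowhere near the claimed 0.14 degC/century.
print(f"trend of version difference: {diff_trend:+.4f} degC/century")
```

The real check is the same subtraction applied to the archived and current GISTEMP files.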
The clue is that the transient behaviour of Lindzen’s points actually resembles the time evolution of temperature itself – not homogenisation issues, or instrumental or coverage changes. Indeed, if one plots the two GISTEMP indices and their difference (using current data), you get this:
Thus it looks very much like Lindzen has plotted the difference between the current Met Station index and an earlier version of the LOTI index. I plotted the Feb 2012 Met index data minus the Feb 2009 LOTI index, and I get something very close to Lindzen’s figure (though it isn’t exact):
This is sufficient to conclude that Lindzen did indeed make the mistake of confusing his temperature indices, though a more accurate replication would need some playing around since the exact data that Lindzen used is obscure.
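A toy calculation makes clear why confusing the two indices produces a temperature-shaped difference: if the Met-station index is roughly an amplified copy of LOTI (continental stations warm faster; the 1.3 factor below is purely illustrative), then their difference is itself proportional to temperature. Again, synthetic series, not real GISTEMP data:

```python
import numpy as np

years = np.arange(1880, 2013)
rng = np.random.default_rng(2)

# Illustrative LOTI-like series: trend + decadal wiggle + noise.
loti = (0.007 * (years - years[0])
        + 0.15 * np.sin(years / 7.0)
        + rng.normal(0.0, 0.05, years.size))
# Met-station-like series: an amplified version of LOTI plus noise.
met = 1.3 * loti + rng.normal(0.0, 0.05, years.size)

# The difference is ~0.3 * LOTI, so it tracks temperature itself --
# exactly the "transient behaviour" visible in Lindzen's points.
diff = met - loti
r = np.corrcoef(diff, loti)[0, 1]
print(f"correlation of (Met - LOTI) with temperature: {r:.2f}")
```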
Thus, instead of correctly attributing the difference to the different methods and source data, he has jumped to the conclusion that GISS is manipulating the data inappropriately. At the very minimum, this is extremely careless, and given the gravity of the insinuation, seriously irresponsible. There are indeed issues with producing climate data records going back in time, but nothing here is remotely relevant to the actual issues.
Such a cavalier attitude to analysing and presenting data probably has some lessons for how seriously one should take Lindzen’s comments. I anticipate with interest Lindzen’s corrections of this in future presentations and his apology for misleading his audience last month.
Update: Lindzen did indeed apologise (sort of) (archived) though see comments for more discussion.
Ray Ladbury says
Jim Larsen,
Unfortunately, that is the problem with anti-science: When you have no evidence favoring your side of the argument and tons of evidence favoring the other, the only recourse is to cast doubt on the entire process by portraying it–and all attached to it–as corrupt.
Scientific debates can get nasty enough, but because they must be based on facts, there is at least a hope they can be brought to a cordial conclusion. When you forsake evidence, all hope for cordiality is lost.
Susan Anderson says
Jim Larsen @~44, well said. However, calling these guys skeptics gives real skepticism a bad name. You are right that the whole ballyhoo of them are questionable once one starts to look. (I’m one of the 99+% who cannot check exactly, but even for a layperson it’s easy to find the false joins and fossil fuel nexus support. There are also personal attacks and twisted attributions – if you do a good enough job they’ll come after you and you’ll see how they grab one phrase, change it and ignore the rest.) I think it important to explain what a real skeptic is and does each time you use their self-designated moniker.
Sometimes fake or phony skeptic isn’t enough, sometimes unskeptical skeptics, or “skeptic” (real skeptics question all sides), or something longer about accepting the minority insistence wholesale and question the vast majority of science over time …
It looks to me like you are well able to find what’s wrong with the likes of Happer, for example. Monckton has already been clarified by some excellent people, among whom Dr. Abraham is probably the best.
Kevin McKinney says
#453–Well-said, Susan. There is quite a lot of diagnosing that requires no more than some diligence and attentiveness to logic, context and consistency.
For me, a big question is, does the story I’m being told create confusion, or clarity?
For example, I recall an exposition ‘proving’ that re-emission of radiation by water would prevent its warming by sunlight. I was not able to critique the ‘sciency’ bits of the explanation; but I observed that the claim was at odds with everyday casual observation made by me–and many, many others, of course. I then asked myself: in what way does this proposed ‘absorption and re-emission’ differ from reflection? Answer: as described, none whatever. At this point I had two serious incoherencies–confusions by another name.
Clearly, this was a bogus screed, one which I need not waste time deconstructing in detail. The ‘scientific’ details were mere obfuscation–window dressing.
richard says
@421 Paul Vincelli: “I am saying that rebuttals to a refereed paper that challenges a key aspect of the scientific consensus on global warming, will carry much more weight with other scientists if they are in refereed journals.”
Fair enough, but the ‘need’ for rebuttal is in part based on the quality of the journal, is it not? Does every ‘outlier’ article in a fairly obscure publication require a rebuttal? The top tier climate scientists would be hard-pressed to find time to rebut those low-quality articles via a peer-reviewed article – that is the great value of science blogs, IMHO. As a scientist, but as someone who is not a climate expert, I would be doubtful of a few reports in obscure journals that claim to upset the consensus. If their case is really solid, why did they not appear in mainstream top-tier journals?
BTW, I am also a plant pathologist and can attest that lots of crap in low quality plant path journals is never rebutted – the top players know it is crap and just ignore it. As a scientist, I feel it is best to accept what the great majority of experts in other fields tell me, especially when large data sets are available and the issue has been a ‘hot topic’ for several years. They MIGHT be wrong, but their interpretation of current data is most often correct. In this particular case, I find that blogs such as Skeptical Science and Tamino produce excellent analyses that can help the non-experts interpret the data pretty well.
Radge Havers says
Re: Teh Debate.
Just as a rough cut, what has raised a flag for me is a pattern that almost invariably occurs during the back-and-forth between working scientists and denialists in comment sections. It’s a realtime reflection of what goes on in the world of paid pundits and more or less goes like this:
Denialist makes assertion. Working scientist takes it apart with some care, logic, and cites – offering a bigger, more detailed picture. Denialist tries a little harder. Working scientist again takes it apart with care, logic, and cites – offering a bigger, more detailed picture. And so it goes for a very short while.
Then what little debate there was starts to fall apart. The denialist Gish gallops and/or makes stock accusations and spouts irrelevant cliches about the history of science, and when he or she thinks everyone has lost the gist of the thread, circles back to the original assertion, changes the subject, or just disappears from the conversation.
You don’t have to be too scientifically literate or spend years on the subject to sense the gist of the problem. Teh Debate is not a legitimate scientific controversy. It’s just a tawdry social attack on science.
John P. Reisman (OSS Foundation) says
#454 richard
This is a good point. If the paper is published in a known pulp-fiction or non-reputable journal, there is a good chance it will just be ignored if the substance is not there.
Hank Roberts says
> lots of crap in low quality plant path journals is never rebutted
> – the top players know it is crap and just ignore it.
But this stuff may not be ignored by the regulatory agencies, in the absence of correction.
Isn’t a lot of the crap actually “advocacy science” — intended to facilitate sale of some particular kind of herbicide or proprietary seed variety?
The regulatory agencies have some constraints about exercising judgment and if something’s published they don’t have staff to rebut advocacy science, even if they know it’s crap.
This gives new hope to industry — they can buy the kind of science they want to read with fair assurance it won’t be rebutted.
Science goes on thinking the crap is ignored.
Instead it’s served up to the public as the basis for political decisions.
I’ve mentioned this before:
The Rise of the Dedicated Natural Science Think Tank
Meow says
OK, I’ve begun reading L&C 2011, and almost immediately I’ve got a question. In equations 1-4, ΔT would seem to describe earth’s Planck response, as viewed from space, to a given ΔQ _at earth’s effective radiating altitude_. In constructing eq. 5 from eq. 4, L&C replace ΔQ with ΔFlux (“the change in outgoing net radiative flux”), which is fine because the quantities mean the same thing. But then they replace ΔT on the left-hand side with ΔSST, the change in some observed tropical sea-surface temperature. But ΔSST is hardly the same thing as ΔT at earth’s effective radiating altitude.
L&C recognize this when they say, “Note that the natural forcing, ΔSST, that can be observed, is actually not the same as the equilibrium response temperature ΔT in Eq. (4)”, but they don’t, as far as I can tell, explain why they think it’s permissible to substitute ΔSST for ΔT in eq. 5.
Can anyone explain what they’re doing?
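For reference (this is the standard textbook form, not a quote from L&C), the zero-feedback Planck response at the effective radiating temperature T_e ≈ 255 K is:

```latex
\Delta T_0 = \frac{\Delta Q}{\lambda_0},
\qquad
\lambda_0 = 4\sigma T_e^{3}
\approx 4 \times (5.67\times 10^{-8}) \times (255)^{3}
\approx 3.8\ \mathrm{W\,m^{-2}\,K^{-1}}
```

so a 1 W/m² forcing corresponds to ΔT₀ ≈ 0.27 K at the effective radiating level – which, as Meow notes, is not the same quantity as an observed ΔSST.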
Geoff Wexler says
Re: #450
@Ray Ladbury who argues:
The second point is interesting but is it non-rigorous or just complicated? It would certainly involve a bit of theory. We should have to imagine that all of the CO2 were to be removed gradually (e.g. in a simple computer model) and calculate the reduction in water vapour (feedback) and increase in ice (ditto). No doubt these would have large positive values, so far untested. But we could also calculate the fall in temperature and compare it with the one estimated directly from Stefan Boltzmann. If the two estimates agreed then we should have corroborated the values of the feedbacks obtained from the simple model. One trouble could be that the feedbacks might not have a single value over such a large range of CO2. In that case the exercise would be hard to interpret.
Is there a better way?
t marvell says
I asked how long it takes CO2 changes to affect temp, and Gavin kindly gave me a good source (#391). I made an amateurish attempt to check this out using graphs in http://www.climate4you.com/index.htm. The only lag I could see is in the opposite direction – changes in temperature are followed a few months later by similar changes in CO2. The relationship seems very strong. Warmer water holds less gas. This must be a sizeable positive feedback, but I have not seen it publicized.
BeeMaya says
Important for global warming is to always take graphs which start in the 19th century or earlier… better even in the 18th or 17th century…
The slowest in mind may clearly see global warming, and our Prof Rahmstorf made the good suggestion to do away with all short 30-year WMO periods; better would be a compulsory minimum of 150 years when talking about warming – the shorter ones are just the weather.
We know better than realclimate
Alex Harvey says
Rob Dekker,
You may be interested in the publication of Choi and Song 2012, which appears to further analyse the feedback estimation procedure of Lindzen and Choi 2011.
Choi, Y.-S. and H.-J. Song, 2012: On the numerical integration of a randomly forced system: variation and feedback estimation. Theoretical and Applied Climatology, DOI: 10.1007/s00704-012-0612-3.
MARodger says
t marvell @460
I’m not sure where you’re looking at climate4you (& this is getting off-topic). Frankly, as somebody who plays with graphs myself, I had to get off that site coz its style of graphical presentation is sickening. However, I think I know where you’re coming from on this one.
Superimposed onto the annual wobble of the Keeling curve is an ENSO-induced wobble. Whether it is the result of temperature changes (longer growing seasons) or rainfall changes or some other effect I know not. What I can tell you is that this is a correlation between wobbles, not some unnoticed correlation between temperature and CO2. I don’t see that graphing it is worth the candle.
Over the long-term, the rise in CO2 concentrations remains at c45% of total human emissions. That is the figure to watch if you are worried about CO2 feedbacks.
CO2 wobble graph two clicks down here:-
https://1449103768648545175-a-1802744773732722657-s-sites.googlegroups.com/site/marclimategraphs/collection/G02.jpg?attachauth=ANoY7cqY-xD71OBc_ZG4xr2SCADOtPXUa4LKMooMbif1gJJQh-iWUL6L72VSEdFPjTj6yBiGQFJu-YMdAJ18h8q44DkUS76-jIjQB_vh4ekCUFQHtBbBUc-OvG3kRedlUYQQZZesHymXUyuakFDO4SxsMNZgM8plUjeTEDT6ZbATp8ie4HNKDOWyTwnqznZeDWphXRt4tnE11KKGJUWpCtD2yPbF4Sf0jw%3D%3D&attredirects=0
ENSO/temperature graph:-
https://1449103768648545175-a-1802744773732722657-s-sites.googlegroups.com/site/marclimategraphs/collection/G09.jpg?attachauth=ANoY7cpLrpSc84VLczmQaUB3HNMbWkmxr8udsvBcYG15Lu1PGaDSyyBfP_PW8K_trGgOXWoXoATePHUC8Pkwb2i4TkOwY_DgdBb4sojVBha6etRYGtq8b2lrchQT9RTJ6fHLh4uEJ1U094KCHxGadFss08cmSzTz3PaIuorF39rYzROzFjJD4lZ5aJKswVi9W-stucMK0MTSOGIebz-zXkYZIkCDXiZhhQ%3D%3D&attredirects=0
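A back-of-envelope check of that c.45% airborne fraction (round numbers of my choosing, not MARodger’s): with total human emissions of roughly 10 GtC/yr and about 2.12 GtC per ppm of atmospheric CO2, the implied rise matches the observed ~2 ppm/yr:

```python
# Approximate values: ~10 GtC/yr emitted (fossil + land use),
# airborne fraction ~0.45, and ~2.12 GtC per ppm of atmospheric CO2.
emissions_gtc_per_yr = 10.0
airborne_fraction = 0.45
gtc_per_ppm = 2.12

ppm_rise_per_yr = emissions_gtc_per_yr * airborne_fraction / gtc_per_ppm
print(f"implied CO2 rise: {ppm_rise_per_yr:.1f} ppm/yr")  # ~2 ppm/yr
```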
Rob Dekker says
Meow, thanks for looking into L&C’11.
I think that Lindzen is still OK there (in eq. 5), since T refers to actual surface temp (with feedbacks), and T0 refers to zero-feedback surface temp. But since he derives eq. (5) from the equilibrium equation (eq. (2)) he also needs to say something about the timeframes. And he does: “The latter cannot be observed since, for the short intervals considered, the system cannot be in equilibrium, and over the longer periods needed for equilibration of the whole climate system, ∆Flux at the top of the atmosphere (TOA) is restored to zero.”
FYI, please check out Forster and Gregory, 2006, which uses the same data as L&C’11 (and L&C’09), and obtained positive feedback.
Please also check L&C’09, which contains a blatant mathematical mistake (negative feedback concluded while their data shows almost exactly the (no-feedback) black-body response).
And read Trenberth 2010, who points out how Lindzen is cherry-picking.
This will give you a feeling of how Lindzen works and hides his manipulations.
But as opposed to L&C’09, where the incorrect negative feedback stood out like a sore thumb, I think the trouble in L&C’11 is much deeper than that.
Alex, thanks for that abstract. I am not sure what to think of it. It seems that Choi recognizes that the AGW forcing (very slowly increasing forcing) poses an issue for “simple model” simulations such as the Spencer and Braswell model that Lindzen used. I believe that is true, since that forcing will create a T offset, which will incorrectly lead to a positive feedback bias (as Spencer and Braswell suggested, and which may have led to the T offset in figure 6) in the regressions.
If that is true, then that would suggest that the AGW forcing rate (which L&C’11 sets at 0.4 W/m^2/decade) indeed interferes with a fair assessment of feedback bias for the “simple regression” technique (one of Lindzen’s accusations against Forster and Gregory, 2006), and may make their own lead-lag method look much more ‘realistic’ than it actually is.
I think there is no way of knowing, unless we have the Spencer & Braswell simple model set up and can run some tests.
In that regard, let me repeat my questions from the last post: Did you run that L&C software from climateaudit? Did you reproduce figure 6(a)? And if so, do you know why there is an offset on T? And what happens if you eliminate that offset? Is there still a positive feedback bias? Or did it disappear?
Jim Larsen says
452 Susan Anderson said, “Jim Larsen @~44, well said. However, calling these guys skeptics gives real skepticism a bad name. You are right that the whole ballyhoo of them are questionable once one starts to look. ”
“Denialist” is a fighting word and “contrarian” felt more negative than I wanted to convey at the moment, so I settled on letting them pick their own word, but changed it into a name by capitalizing it. Like Democrats have no monopoly on democratic principles, Skeptics aren’t the only skeptical folks. The problem, of course, is that they seem to believe absolutely anything unless it supports AGW theory. They’re monoskeptical! Yes, I’m going to call them Monoskeptics from now on.
The whole lot are questionable, eh? This just screams to become a collaborative book.
Martin Vermeer says
t marvell #460:
Sure you’re not looking at the annual cycle in CO2? That would produce such a correlation with a few months in between. But the mechanisms are quite unrelated: for CO2 it’s the formation and destruction of biomass in the boreal forests over the annual cycle.
See here.
richard says
@457 “But this stuff may not be ignored by the regulatory agencies, in the absence of correction.”
Good point, but surely a regulatory agency would look at the balance of the data. The use of outlier publications would be mainly by politicians, and they would use them whether they were rebutted in the peer-reviewed literature or not.
Dr Tom Corby says
http://www.independent.co.uk/news/science/scientists-to-be-protected-from-libel-7565746.html
Scientists in the UK publishing peer reviewed research are to be protected by law against malicious libel attacks. Suck it up false sceptics.
Paul Vincelli says
My point in this thread is simply this: If LC11 is flawed, that needs to be demonstrated in the peer-reviewed scientific literature. If there are flaws, submit a manuscript to a journal.
Blogs are great. However, in my experience on tenure and promotion committees, blog postings count for nothing when a committee is evaluating research productivity. Not even a little bit.
As far as I am aware, in all fields of science, peer-reviewed papers in refereed journals are the only primary source of scientific information.
[Response: This is mainly true. For instance, IPCC will only cite rebuttals (if it needs to) from the literature. But the decision to write up an official comment, or new work based on showing the flaws in previous work, is very much a cost/benefit decision for any individual scientist (or group). Far more poor science is published than is generally acknowledged, and most of the time, this stuff just fades into obscurity without the need for a rebuttal. In politicised fields like climate, certain kinds of poor papers (ones that give answers that are pleasing to a certain political segment) gain far more notoriety than they would otherwise receive (a few obvious ones would be L&C09, Soon and Baliunas (2003), Douglass et al (2007), Michaels and McKitrick etc). These are worth rebutting mainly because of the noise they generated rather than the importance of their conclusions. LC09 was more high profile than LC11 and generated more noise, and multiple rebuttals. It would generally fall to those previous rebutters (rebunkers?) to have a go again for LC11, but as far as I can tell, there was very little appetite to do so – mostly because even if the method was fine, their own results indicate that a) the method is unreliable, and b) biased low. Any rebuttal would simply make those points again, and so it doesn’t appear a particularly fruitful endeavour. You might think that the benefits outweigh the costs in specifically rebutting their analysis, but someone capable of doing this really needs to agree. Perhaps this is a good grad student project for someone? – gavin]
simon abingdon says
#452 Susan Anderson. “Monckton has already been clarified by some excellent people, among whom Dr. Abraham is probably [only] the best”. Susan, well said. See http://joannenova.com.au/2010/07/abraham-surrenders-to-monckton-uni-of-st-thomas-endorses-untruths/
richard says
@457 “Isn’t a lot of the crap actually “advocacy science” — intended to facilitate sale of some particular kind of herbicide or proprietary seed variety? ”
No, most of the ‘crap’ is just poor-quality work, nothing to do with advocacy – except the advocacy of one’s career. You don’t see that many pesticide or varietal trials published in peer-reviewed journals any more in any case.
The regulatory agencies will generally rule based on the weight of the data – an outlier or two should not affect their rulings that much. The politicians on the other hand will certainly try to magnify the outlier if they find it has value to them (e.g. climate science denial); but they will do that whether the crap is rebutted or not.
Pat Cassen says
Whoa, Martin (#466), that is totally cool…
Kevin McKinney says
#471–Well, if we are going to jump into our own little Wayback machines, let’s not forget to revisit this:
http://www.guardian.co.uk/environment/georgemonbiot/2010/jul/14/monckton-john-abraham
And this:
http://bbickmore.wordpress.com/2011/08/08/the-monckton-files-bombshell-john-abraham-to-be-sued/
And this:
http://www.stthomas.edu/magazine/2012/Winter/abraham.html
And of course, this:
http://tinyurl.com/Python-Only-a-flesh-wound
While I was at it, I searched for more on this devastating lawsuit Monckton is going to launch any day/year/time now. Nothing yet… I’m sure he’s just distracted with his plans to clone Fox News in Australia. Yeah, that’s it.
Paul Vincelli says
#467, Points noted, Gavin. Thanks.
Dr Tom Corby says
The following is a letter from the Clerk of the Parliaments to Christopher Monckton telling him off for claiming to be a member of the House of Lords. When he testified to Congress he tried this cheap trick by stating he brought fraternal greetings from the Mother of Parliaments to the Congress. He’s a shyster of the highest order.
Dear Lord Monckton
My predecessor, Sir Michael Pownall, wrote to you on 21 July 2010, and again on 30 July 2010, asking that you cease claiming to be a Member of the House of Lords, either directly or by implication. It has been drawn to my attention that you continue to make such claims.
In particular, I have listened to your recent interview with Mr Adam Spencer on Australian radio. In response to the direct question, whether or not you were a Member of the House of Lords, you said “Yes, but without the right to sit or vote”. You later repeated, “I am a Member of the House”.
I must repeat my predecessor’s statement that you are not and have never been a Member of the House of Lords. Your assertion that you are a Member, but without the right to sit or vote, is a contradiction in terms. No-one denies that you are, by virtue of your letters Patent, a Peer. That is an entirely separate issue to membership of the House. This is borne out by the recent judgment in Baron Mereworth v Ministry of Justice (Crown Office) where Mr Justice Lewison stated:
“In my judgment, the reference [in the House of Lords Act 1999] to ‘a member of the House of Lords’ is simply a reference to the right to sit and vote in that House … In a nutshell, membership of the House of Lords means the right to sit and vote in that House. It does not mean entitlement to the dignity of a peerage.”
I must therefore again ask that you desist from claiming to be a Member of the House of Lords, either directly or by implication, and also that you desist from claiming to be a Member “without the right to sit or vote”.
I am publishing this letter on the parliamentary website so that anybody who wishes to check whether you are a Member of the House of Lords can view this official confirmation that you are not.
David Beamish
Clerk of the Parliaments
15 July 2011
Source: http://www.parliament.uk/business/news/2011/july/letter-to-viscount-monckton/
Dikran Marsupial says
Gavin, with regard to your comment to Paul Vincelli: one way of altering the cost-benefit ratio would be to crowd-source the writing of the rebuttal. There are plenty of people who read blogs such as RealClimate who have useful scientific and/or statistical skills and who could draft a comment paper, if it could be checked over (or co-authored) by an expert before it was submitted. This would be a more constructive use of our time than merely commenting on blogs.
I have written one such paper, pointing out the flaw in the residence time argument put forward by Essenhigh, but I couldn’t have done it without help (see http://pubs.acs.org/doi/abs/10.1021/ef200914u)
John P. Reisman (OSS Foundation) says
#469 Paul Vincelli
I hope you’re reading enough of this blog to realize that rebutting advocacy science such as Lindzen’s work is simply a waste of time for the real science and scientists, because it’s more important to focus on relevance.
Lindzen is on the fringe and clearly doing advocacy science. He will likely be less and less respected from a relevant-science perspective.
If you really want to understand the relevant science then just stay close to the relevant papers and discussions. That actually is sufficient to get a good handle on climate science.
Not every piece of BS needs to be formally responded to and relevant scientists have little time for such games.
Paul Vincelli says
#477 John Reisman
Maybe we have reached a point of respectful disagreement. Being one myself, I realize that publishing scientists are busy and can’t allocate time to all worthy projects. That’s why I responded to Gavin’s post @474 as I did.
If publishing scientists choose to allocate their precious time to their research projects instead of playing “whack-a-mole” with papers like LC11, I respect that completely. All I am saying is that a refereed paper from one of the world’s preeminent scientific skeptics (at least in the minds of other skeptics) should be challenged in the peer-reviewed literature, the primary source of scientific information, to the extent that a qualified person or group has time.
I thought Dikran Marsupial @476 had a productive suggestion.
John P. Reisman (OSS Foundation) says
#478 Paul Vincelli
Actually I think we are in general agreement. I also like the suggestion in #476. I also agree it would be great to have these argument-from-authority and advocacy papers addressed formally. But re-bunking the same old stuff, or new flavors of the same old stuff, is hard for busy scientists to commit to, as you know.
So yes, it would help. And those that can, as they can make time, likely will – or will wait until such tripe takes its proper place in the debate as it slips further into oblivion while relevant science plods forward on its methodical course.
Alex Harvey says
Rob Dekker, #448; #464:
I am a bit busy now so probably will be slow to respond.
I read the Choi and Song paper a couple of times. (I emailed Dr. Choi for a copy.) The authors are searching for a method that improves on both the simple regression and LC11 in accounting for non-feedback cloud variations. Unfortunately it appears that Choi and Song document a failure to solve the problem using an “attenuator” function. The paper may shed some further light on their approach to the simple regression and the LC11 method, though.
Regarding your questions:
Did you run that L&C software from climateaudit? Did you reproduce figure 6(a)? And if so, do you know why there is an offset on T? And what happens if you eliminate that offset? Is there still a positive feedback bias? Or did it disappear?
No, I haven’t attempted to reproduce anything. I can’t find any discussion of the offset. I do note in the code a comment that just says
As far as the appropriateness of this choice I am not the right person to ask. I look forward to your comments on Frankignoul. I see a lot of compelling physically based arguments that the simple regression is quite inappropriate.
Alex Harvey says
Gavin, #469:
The question is this – Trenberth had plenty of time for publishing a rebuttal to the Spencer and Braswell 2011 paper that was published in a low-impact journal. So when Lindzen and Monckton get up to promote the LC11 result, how can you criticise them for this? There is no known problem with the LC11 paper other than large uncertainty, as acknowledged by the authors. The paper is ostensibly supported by other papers in the literature (Schwartz 2012; Masters 2012). How do you reasonably expect the public not to see this as meaning Trenberth simply can’t find a flaw in it?
[Response: The public doesn’t have any idea who any of these people are, so their opinions are unlikely to be informed by what any single scientist they’ve never heard of has or hasn’t done. The idea that anything must be accepted as true because no-one has bothered to rebut it is logically absurd. – gavin]
T. Marvell says
About temperature changes preceding CO2 changes (continuing my #460, and answering #463 and #466). I said there seems to be a strong positive association between temperature changes and CO2 changes a few months later. This suggests a strong positive feedback – more CO2, then higher temp, then more CO2. I disagree with #463 that this is off topic. Lindzen’s argument is that there is little or no net positive feedback. Off hand, that seems unlikely given the huge size of the temp-to-CO2 connection.
In #460 I said I got this from http://www.climate4you.com/. More specifically, it is under the “greenhouse gases” link, comparing 2 charts: ’12 month change of atmospheric CO2′ and the next chart ‘global temperature estimates. . .’. This is comparing yearly changes in CO2 with yearly changes in temperatures. Very roughly, a 0.5 increase in CO2 follows a 0.1 degree temp. change. A lagged association between change variables strongly suggests causation.
I agree with #463 that comparing graphs is tricky, but this association cries out.
To test it out more I did a Granger regression analysis with monthly data since 1958, regressing changes in CO2 on changes in SST with the latter lagged 1 to 60 months, controlling for such things as El Nino, Pinatubo, and seasons. Sure enough there is a huge relationship – SST increases “cause” CO2 increases with lags of some 6-48 months (p < .0001). A 0.1 degree increase in SST leads to roughly 1.5 more CO2 (a higher estimate than derived from the graphs, partly because the graphs include land temperature. I found no such effect with land temperatures.)
This result seems obvious given the fact that warmer oceans, at least on the surface, cannot hold as much gas. Why don’t the models show that this conclusively indicates a positive feedback, and why doesn’t Lindzen mention it?
Alex Harvey says
Ray Ladbury, #450:
This is a bit off topic but since you keep asking I’ll have a go.
To infer climate sensitivity from climates of the past you need to know the forcing and cooling at the LGM.
My own doubts would be different and independent from Lindzen’s arguments but it is perhaps relevant to this thread to quote LC11 on the matter:
So Lindzen alludes to what he apparently sees as circular reasoning in the consensus.
In Köhler et al. 2010 QSR, one of the most recent and comprehensive treatments of LGM forcing, they write this of aerosols:
The indirect effect, you will recall, is the one that accounts for the greatest uncertainty and the most cooling in the present climate.
[Response: Yep – but you can think of all of that as part of the feedback in more expanded definition of the climate system. Either way the idea that because this is difficult to estimate the forcing is ‘completely unknown’ is nonsense. I agree that Kohler et al (2010) is the best paper on this so far. – gavin]
What about clouds?
Next, Lindzen only points to uncertainty in the forcing – but what about uncertainty in the change in temperature? Estimation of change in temperature is outside of the scope of the Köhler et al. study; they simply follow Schneider von Deimling et al. 2006 and assume change in temperature is 5.8 K +/- uncertainty. Combining their forcing estimate with this temperature estimate they arrive at a best guess for equilibrium climate sensitivity of 2.4 K. They say their method rules out anything >6 K, but it fails to rule out low sensitivities.
However, the recent Schmittner et al. 2011 study carefully examined the change in temperature and found the LGM cooling was likely to be considerably less. Oddly enough, they also found almost the same best guess of climate sensitivity (2.3 K instead of 2.4 K). But, if you combined Köhler’s forcing and Schmittner’s cooling you’d actually get a very low climate sensitivity – remembering that both Köhler and Schmittner already find significantly lower sensitivity than the IPCC.
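The arithmetic behind combining a forcing estimate with a cooling estimate is simple to sketch. The numbers below are illustrative assumptions (the canonical 3.7 W/m² per CO2 doubling and a Köhler-style LGM forcing of about −9.5 W/m²), not the papers’ exact values:

```python
F_2x = 3.7       # W/m^2 per CO2 doubling (standard value)
dF_lgm = -9.5    # assumed Kohler-style LGM forcing, W/m^2

def ecs(dT_lgm):
    """Equilibrium sensitivity implied by ECS = F_2x * dT / dF."""
    return F_2x * dT_lgm / dF_lgm

print(round(ecs(-5.8), 1))  # Schneider von Deimling-style cooling
print(round(ecs(-3.0), 1))  # a smaller, Schmittner-style cooling
```

With the larger cooling this gives roughly 2.3 K, consistent with the 2.4 K best guess quoted above; halve the cooling and the implied sensitivity drops toward 1.2 K, which is the low-sensitivity combination being alluded to.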
To me, this is the real “arbitrariness” of the consensus position. We have (1) a large uncertainty in temperature; (2) a large uncertainty in forcing; (3) known missing processes; (4) scientists who know what the “right answer” is and what the “wrong answer” is at the beginning of the study (i.e. anything outside of 2-4.5 K is going to be scrutinised carefully for error); (5) no effort to mitigate the confirmation bias of the scientists; and (6) an answer at the end that doesn’t actually rule out the low sensitivities anyway.
The IPCC doesn’t use the paleoclimate arguments prominently, so I tend to regard it as something like an internet myth that the paleoclimate arguments rule out low sensitivities. Does anyone aside from James Hansen actually say this?
[Response: Yes. I have said it frequently. We discussed the Schmittner paper last year, and that result is likely to be an underestimate due to the small magnitude of the inferred cooling. That inference was related to the model’s lack of land/ocean contrasts and under-weighting of southern hemisphere data. Your list of points is a significantly over-egged pudding though. The signal is very large at the LGM and it turns out that the real uncertainties in the temperatures and the forcings still allow for the signal to come through the noise. A sensitivity below 1ºC means both a large overestimate of the temperature change (which is very unlikely and even if true would imply a sensitivity of sea level/ice sheets to temperature changes that would be alarming), *and* a large underestimate of the forcings – which is hard to do given we know the GHG changes, and so the error (by large factors) would have to be in the ice sheet or dust term. That is just really hard to square away. I do find it amusing though that you are accusing mainstream scientists of confirmation bias, while quoting from Lindzen, the erstwhile subject of this thread. – gavin]
Hank Roberts says
> comparing yearly changes in CO2 with yearly changes in temperatures….
Photosynthesis, net primary productivity, has this cycle.
http://neo.sci.gsfc.nasa.gov/Search.html?datasetId=MOD17A2_M_PSN
John P. Reisman (OSS Foundation) says
#481 Alex Harvey
Just because something is published doesn’t make it sound.
Just because something has not been rebutted does not make it sound.
Just because the public thinks something is sound does not make it sound.
Just because you think something is or is not sound does not make it sound.
Just because I might think something is sound does not make it sound.
Science determines confidence levels in what can be inferred or determined based on evidentiary lines. The more inputs, and the stronger the basis in physics and observation through the many critical eyes that pick it apart, the stronger the inference, and hence the stronger the confidence interval.
I’ve noticed in my analysis that science is pretty good at figuring out what is, and what is not, a good suspect in a particular case through the process of the scientific method. For example, science is pretty sure at this point where it won’t find the Higgs boson, so that gives a good clue (parameters) on the most likely places left to look. It’s the same with climate science too.
Don’t you think that is pretty cool?
Rob Dekker says
Gavin, Dikran Marsupial, Alex, John Reisman, Paul Vincelli, Meow,
Not sure where it may lead, but let me kick-off some “crowd sourcing” right here on a rebuttal to LC11.
As I mentioned before, LC11 not only claims low climate sensitivity, but it also claims that the “simple regression” methods used by other papers, such as Forster and Gregory 2006 and Dessler 2010, have a positive feedback bias.
(PS, Gavin, that accusation should at least affect the “cost-benefit” tradeoff, no?).
L&C11 uses the Spencer and Braswell 2010 “simple model” to “prove” this assertion, which is a bit suspect, since even Spencer and Braswell’s quantification of that positive feedback bias should zero out for a large number of samples.
To find out what was true of their claim, I implemented the Spencer and Braswell model today (it’s really a very simple model) in C++ and ran some experiments.
My first runs did not show any positive feedback bias for “simple regression” methods at all. The ‘observed’ F is very close to the real F, regardless of which F I choose, and I also could not reproduce the temperature “offset” seen in the scatter plot (figure 6).
But then I inserted the +0.4 W/m^2/decade rate of ‘f’ (AGW forcing). That’s when the first real issue (apart from the already known negative feedback bias of the lead-lag method) with L&C11 showed up.
I could explain this better in a separate post with graphs and such, but without that, for now, just hear me out:
Imagine a climate system in equilibrium. SST and TOA flux are constant, and in the Spencer and Braswell model that means both are 0. Now, imagine that at t=0 (starting time) we introduce a +0.4 W/m^2/decade increased forcing. Something like what a constant increase in CO2 can do. Of course, TOA flux goes down with that rate initially, but SST (being the integral of flux) will start to push back on that (amount and rate of push-back is determined by the real feedback factor F), and eventually TOA flux levels off at some negative amount, while SST is increasing at a constant rate.
If you plot this in an SST (x-axis) vs TOA-flux (y-axis) graph (such as figure 6), you will be in the lower-right quadrant, which gives the appearance of a ‘positive feedback’ bias (which, incidentally, also will show a positive ‘offset’ in T, since SST is increasing over time).
So, I think you feel it coming: because LC11 modeled a ‘sudden’ change in RATE of AGW climate forcing of +0.4 W/m^2/decade, the simple model shows an artificial “positive feedback” bias. How large is this bias? Well, it depends on the ‘feedback’ parameter F, but my “simple model” simulations show the following results, if we run the simulation for the full 1985-2009 (288 month) period. Here are the ‘end’ results (at t=288 months) for each of the three LC11 ‘feedback’ parameter (F) settings:
          SST      TOA-flux   slope (bias)
F=1  :    0.497    -0.458     -0.922
F=3.3:    0.2384   -0.169     -0.708
F=6  :    0.143    -0.093     -0.648
What does this mean? Well, the ‘slope (bias)’ means that any response to SST changes is suppressed by -0.922 W/m^2/K (for the F=1 case), which means that there is a ‘positive feedback’ bias in the model itself of some 1 W/m^2/K.
If you now look at figure 7 (A), you will see that LC11 indeed reports that the “simple regression” yields a slope 1 W/m^2/K below the “true” value. LC11 concludes that this is a “positive feedback bias” in the “simple regression” method, but in fact we now know that this is completely caused by the starting point of the +0.4 W/m^2/decade increase in GHG forcing, and NOT by the regression method.
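For anyone who wants to poke at this, the ramp experiment is easy to reproduce. The following is a sketch, not the original C++ code or LC11’s setup: the 100 m mixed layer, monthly Euler steps, and absence of any noise terms are all assumptions, F is the feedback parameter in W/m^2/K, and the “TOA flux” is taken as the outgoing anomaly F·T − f(t):

```python
import numpy as np

def ramp_run(F, months=288, mixed_layer_m=100.0):
    """Integrate C dT/dt = f(t) - F*T under a +0.4 W/m^2/decade forcing ramp."""
    secs_per_month = 2.63e6
    C = 1000.0 * 4186.0 * mixed_layer_m / secs_per_month  # W*month/m^2/K
    ramp = 0.4 / 120.0                # +0.4 W/m^2 per decade, per month
    T = np.zeros(months)              # SST anomaly (K)
    flux = np.zeros(months)           # outgoing TOA flux anomaly (W/m^2)
    for t in range(1, months):
        f = ramp * t                  # forcing at month t
        T[t] = T[t-1] + (f - F * T[t-1]) / C
        flux[t] = F * T[t] - f
    return T, flux

for F in (1.0, 3.3, 6.0):
    T, flux = ramp_run(F)
    slope = np.polyfit(T, flux, 1)[0]   # OLS slope of flux against SST
    print(F, round(T[-1], 3), round(flux[-1], 3), round(slope, 3))
```

Even with no feedback noise in the model at all, the forced drift alone produces a strongly negative regression slope, which is exactly the artifact being described.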
And let me note that it is completely unrealistic to model GHG forcing to change from 0 trend to +0.4 W/m^2/decade trend in 1985. In reality, there was very little (if any) change in GHG forcing trend in 1985, so this “bias” is an artifact of the way that LC11 modeled GHG forcing changes.
Note that this also explains the temperature offset in scatter plot figure 6: the temperature is drifting to the right over the full 288 months, so it looks like there is a small slope (high positive feedback) overall. However, in reality the dots on the left are from the ‘early’ samples, and the dots on the right are the ‘late’ samples, and if you know that, then the ‘slope’ of dots that are not far apart in time is actually much closer to the 6 W/m^2/K “true” slope in the model.
That’s the bias they blame on “simple regression” methods, but which in fact is caused by their own modeling of GHG forcing.
What is the effect of this on the LC11 conclusions?
Well, LC11 uses an artifact of their own unrealistic modeling to discredit other scientists’ findings, and this conclusion is now highly suspect.
But also, this positive feedback bias in the modeling makes the negative feedback in their own “lead-lag” method look “not so bad”, while in reality it is much more negatively biased than presented.
There must be more though : My first runs show that for large F, the ‘positive’ bias should be smaller than for the F=1 case. However, figure 7 suggests that higher F actually increases the bias.
I have not figured out where he got that from, but I’ll find out now that I have the model reproduced.
This is what I love about Lindzen’s papers : like a puzzle maker, he creates a challenge to find out all the mistakes he made.
Also, since Spencer and Braswell also used this +0.4 W/m^2/decade GHG forcing rate (and presumably also starting at t=0), I wonder how much that paper’s results are affected by this artifact…
Gavin, any comments?
MARodger says
T Marvell @482
I’m not sure what to say to you other than to restate the message @463 in the hope that two explanations will be better than one.
You ask “…and why doesn’t Lindzen mention it?” This is because any “relationship – SST increases ’cause’ CO2 increases” is in no way “huge” and thus this discussion is off topic.
Your 2 graphs from climate4you.com present two of the wobbles shown in the two graphs here: Graph A – rate of CO2 increase wobbles (in red, two clicks down) – and Graph B – HadCRUT3 temperature wobbles. Graph B also shows a third wobble – ENSO (MEI).
All three wobbles can be plotted on a single graph, and if this is done it demonstrates that ENSO precedes temperature wobbles by a few months & temperature precedes rate-of-increase-in-CO2 wobbles also by a few months. So “…a lag of some 6(…) months” would not sound out of place. Your “(…)48 months” sounds bizarre. But then your 4th paragraph @482 looks like a trawl for corroborating answers, not analysis.
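The “which wobble leads” check can be quantified with a simple lag-correlation on synthetic series (the 4-month lag below is invented for illustration, not taken from the real ENSO/temperature/CO2 data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_lag = 480, 4
base = rng.normal(size=n + true_lag)
x = base[true_lag:]                              # the leading series
y = base[:n] + rng.normal(scale=0.3, size=n)     # repeats x, 4 months later

def best_lag(x, y, max_lag=12):
    """Lag (in months) at which y correlates best with earlier x."""
    corrs = [np.corrcoef(x[:len(x) - k], y[k:])[0, 1]
             for k in range(max_lag + 1)]
    return int(np.argmax(corrs))

print(best_lag(x, y))  # recovers the 4-month lag
```

This is the same logic as eyeballing aligned wobbles on one graph, just made quantitative; on real series you would also have to worry about common drivers and autocorrelation.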
I am of the opinion that whatever you’ve found (even though it is not clear to me what it is) is likely not what you think you’ve found.
Dave says
Hi,
this is a little off-topic but I hope it will anyway elicit a useful response either here or on a separate post.
I’m a professor of physics at a leading European University. I mention this not to establish any type of authority. Rather it should be taken to mean that I can grasp a fairly technical explanation if it is appropriately phrased for a scientist working in a different field.
I consider myself to be a sceptic of all research (including my own). However, there are certain areas of research in which working theories of nature have been established and can be relied upon to an extremely high degree to predict future experimental results. Examples include general and special relativity, and quantum mechanics. These theories gained acceptance because they passed “classic” scientific tests. They made predictions which would have allowed them to be falsified and discarded. And they passed these tests. We still test these theories today because we know they are not the “final word”, and through any discrepancies with data we hope to learn more.
When it comes to climate science, it’s difficult for me to find the equivalent of, e.g., the general relativity tests of Mercury’s perihelion or binary pulsar motion. Have there been any “classic” tests of climate models that would falsify the hypothesis of X degrees of warming within a specified long-term time frame? I ask this question since I’m more than fed up with hearing about a consensus. I don’t doubt that one exists, but it’s not an argument that I would use to argue for the correctness of general relativity or quantum mechanics. Instead I would describe the numerous experimental tests which could have falsified these theories. Is there a series of experimental tests which could have falsified the climate models which presently imply drastic long-term warming, i.e. X degrees over Y years or something similar, whereby X and Y would have been defined at the time of the test and would correspond to dangerous long-term warming? By “falsified” I mean that a paradigm shift would be needed rather than further model optimisation.
Thanks,
Dave
John P. Reisman (OSS Foundation) says
#485 Rob Dekker
Lest we not forget, I’m the idiot in the room. I haven’t looked at C++ since somewhere in the early 90’s… and even then it was a brief fling. We hardly got to know each other.
Aren’t there any grad students trolling this thread, looking for a bit of fun?
Hank Roberts says
> I’m a professor of physics at a leading European University.
Then you have access to a good reference librarian who can help you.
A simple test, repeated: alter the quantity of carbon dioxide in the planetary atmosphere; measure the temperature change until it equilibrates.
http://www.ncdc.noaa.gov/paleo/icecore.html
John P. Reisman (OSS Foundation) says
Dave,
You’re right, it’s not about consensus.
I wrote a book that I think will be tremendously helpful for you in getting some perspective.
Exposing The Climate Hoax: It’s all about the economy.
Susan Anderson says
Simon Abingdon@~470
Your dedication to mischaracterization of the facts is laid bare in this comment for all to see. All they need do is look at the original materials. Your pals see chimeras everywhere when they choose to mislead and fear to look the truth straight in the face. I had heard of Jo Nova, but since I’ve seen the originals and have now actually visited (ick) her acrobatic verbal distortion, it is no longer in question that reality is too blinding when it is important to make statements contrary to the facts.
Susan Anderson says
Simon Abingdon, you got a very thorough answer from Kevin McKinney@~473 with links which I would suggest as background to my previous comment (if it comes through). Unfortunately, it was addressed to #471 and you are now moved to #470
Jim Larsen@~465, on the “book” it’s been done, it doesn’t go anywhere (Mooney, Oreskes, and a host of others, SkepticalScience, DeSmogBlog, etc.). I still think it important to make it clear each time the word skeptic is used what it actually means and why these guys are not. Perhaps just quotes: “skeptics”. I don’t think an initial cap functions the way you hoped it would. Ray Ladbury@~451 has the truth of it (like usual).
More neat stuff on Monckton will go to open thread.
dbostrom says
Dave would be better off taking up his questions on the “Unforced variations” thread. Meanwhile, Rob Dekker’s 20 Mar 2012 at 2:50 AM post is most interesting, nicely explained, tractable even to the real idiots in the room. Sorry, John Reisman, you’re trumped in that department…
John P. Reisman (OSS Foundation) says
#494 dbostrom
Geez, and I’ve been trying so hard to maintain my ranking :)
Dave says
@Hank-490
Perhaps the university library doesn’t contain details of any classical falsification tests for long term climate models. I’ve never heard of any, that’s why I came here.
@dbostrom-494
Thanks for the tip.
Hank Roberts says
> classical falsification tests for long term climate models
That’s different.
You asked about experimental tests of the physics.
Mount a scratch planet, make an experimental change, and let it run to completion – which, as the paleo work documents, has happened several times.
How can one classically falsify a long term computer run?
Perhaps you’re trying for humor?
Troy_CA says
Re Rob Dekker (486):
Rob, I think you might not have the S&B model implemented correctly, as the positive feedback bias using regressions against their model does NOT result from the GHG forcing applied (indeed, a stronger GHG forcing would actually eliminate the radiative noise contamination), and I suspect you may be getting this apparent bias because you are not subtracting the forcing (Q in FG06 terms) from the TOA flux (N in FG06) before regressing it against T. What you want is the radiative response with respect to T, which has the forcing removed from the TOA flux.
If you assume the simple energy balance and noise models of S&B, the “simple regression” methods of FG06 and Dessler indeed show a positive feedback bias, absent any GHG forcings, resulting from the correlation between X (the radiative noise) and T. Nobody (Forster, Murphy, Dessler, etc.) disputes this. However, what IS in dispute is whether the decorrelation time of X as used by S&B is realistic (because if the decorrelation time is shorter than the temperature response time the X and T parts will not be correlated), as well as the ratio of the radiative noise “forcing” versus the ocean “forcing”. Obviously, the effective ocean mixed layer has implications here, because it affects both the temperature response time and the inferred ocean “forcing” from SST.
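A toy demonstration of that decorrelation-time point (a sketch with assumed values – true feedback lam = 3 W/m^2/K, heat capacity C = 50 W·month/m^2/K, unit-variance AR(1) radiative noise – not S&B’s actual configuration):

```python
import numpy as np

def noise_bias(tau_noise, lam=3.0, C=50.0, months=120_000, seed=2):
    """Regression slope of 'measured' flux (lam*T - X) on T, under AR(1) noise X."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-1.0 / tau_noise)       # monthly AR(1) coefficient
    X = np.zeros(months)
    T = np.zeros(months)
    for t in range(1, months):
        X[t] = rho * X[t-1] + rng.normal()
        T[t] = T[t-1] + (X[t-1] - lam * T[t-1]) / C   # C dT/dt = X - lam*T
    R = lam * T - X                       # radiative response seen at TOA
    return np.polyfit(T, R, 1)[0]         # compare against the true lam

print(noise_bias(tau_noise=12.0))   # slowly decorrelating noise: slope << 3
print(noise_bias(tau_noise=0.25))   # fast noise: slope close to 3
```

When the noise decorrelates much faster than the mixed layer responds, X and T are nearly uncorrelated and the regression recovers the true feedback; slow noise produces the large apparent positive-feedback bias – which is why the assumed decorrelation time (and effective mixed layer depth) is where the dispute lies.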
ScienceOfDoom had a good series on this called “Measuring Climate Sensitivity”, and implemented this S&B model in MATLAB I believe. I’ve also implemented the model in R, and there’s some background here: http://troyca.wordpress.com/radiation-budget-and-climate-sensitivity/
I have not yet looked in depth at L&C11, but I suspect that the bias results from the same thing as in S&B. If so, any rebuttal of the alleged positive feedback bias would need to focus on determining a more accurate noise model, not your current track.
Hank Roberts says
Dave, being “a professor of physics at a leading European University” you doubtless know of John Baez, and you can ask him.
Hank Roberts says
If that link didn’t work, here’s the link for John Baez’s climate physics questions: http://azimuth.mathforge.org/?CategoryID=7