Two more teams in the seemingly endless jousting over the ‘hockey-stick’ have just made their entry onto the field. In the first two (of four) comments on the original McIntyre and McKitrick (2005) (MM05) paper in GRL, von Storch and Zorita, and Huybers have presented two distinct critiques of the work of M&M.
The two comments focus on the ‘PC normalisation’ issue raised in MM05, which we discussed previously. Specifically, von Storch and Zorita show that in a GCM emulation of the Mann, Bradley and Hughes (MBH) method, changing the PC normalisation technique makes no difference to the eventual reconstruction (i.e. it is not the normalisation that creates the ‘hockey stick’), consistent with earlier conclusions. Huybers comments that neither of the two suggested normalisations is actually optimal, and proposes a third method which appears to give results halfway between MBH and MM05. However, given the von Storch result, this too is unlikely to matter in the final reconstruction.
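For readers who want to see concretely what the centering argument amounts to, here is a minimal sketch in Python (our illustration, not the code of either group; the AR(1) coefficient, network size and 100-year calibration window are arbitrary choices for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "proxy network": 50 red-noise (AR(1)) series, 600 years long.
n_series, n_years, phi = 50, 600, 0.5
proxies = np.zeros((n_series, n_years))
shocks = rng.standard_normal((n_series, n_years))
for t in range(1, n_years):
    proxies[:, t] = phi * proxies[:, t - 1] + shocks[:, t]

calib = slice(n_years - 100, n_years)  # stand-in for the instrumental period

def leading_pc(data, center_over):
    """PC1 via SVD after removing the mean taken over `center_over`."""
    centered = data - data[:, center_over].mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

pc1_conventional = leading_pc(proxies, slice(None))  # full-period centering
pc1_short = leading_pc(proxies, calib)               # MBH-style short centering

# Short centering preferentially loads series whose calibration-period mean
# departs from their long-term mean, so PC1 tends toward a hockey-stick shape
# even in pure noise -- the MM05 claim. The point of the comments is that this
# choice changes the PC bookkeeping, not the final reconstruction.
```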
Huybers additionally makes an interesting point regarding the calculation of significance levels in MM05 and shows that a crucial step (the rescaling of variance of the proxies to match the variance in the instrumental calibration period) was missed out. Including it produces results identical to MBH.
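For reference, the RE (reduction of error) statistic at issue measures skill relative to a ‘no-knowledge’ prediction of the calibration-period mean. A sketch of the conventional definition (our illustration; the significance benchmarks come from Monte Carlo simulations, which is where the variance-rescaling step Huybers flags enters):

```python
import numpy as np

def reduction_of_error(obs, recon, calib_mean):
    """RE = 1 - SSE(reconstruction) / SSE(calibration-period mean).

    RE > 0 means the reconstruction outperforms simply predicting the
    calibration-period mean over the verification period. What counts as
    a *significant* RE is judged against reconstructions of random noise;
    if those null reconstructions are not rescaled to the calibration
    variance, the significance threshold comes out wrong."""
    obs, recon = np.asarray(obs, float), np.asarray(recon, float)
    return 1.0 - np.sum((obs - recon) ** 2) / np.sum((obs - calib_mean) ** 2)
```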
Each comment comes with a reply, and in their responses M&M introduce a number of further complications and focus on the quality of some of the proxies that were input data to the MBH methodology. We note as an aside that this is quite a different criticism from claiming that MBH’s methodology contains ‘coding errors’ (to quote one of the Ms). Indeed, the quality of paleo-climatic data and its relationship to climate variables has been discussed all along (see for instance MBH99).
Their further calculations will take time to assess, but of the original claims in MM05, the first (the PC normalisation issue) demonstrably makes no difference to the reconstruction, and the second (the calculation of the significance of the RE statistic) was just wrong. So for this round at least, it looks like ‘Hockey Team: 2, MM: 0’.
Look out for the next bout coming to a journal near you…
Armand MacMurray says
Re: #48
William, you say that “As for the CA post… there seems a determined attempt to personalise this by some people which I think is regrettable.”
As I read it, this suggests that the Steve McIntyre post I referred to was disallowed because of the author’s identity, even though it was a scientific, and not political, post. If so, it would seem useful to add that criterion to your posted comments policy. If not, could you please clarify why it was disallowed?
[Response: All comments pass through a filter. Most pass through immediately; some are caught for later assessment. This is stated plainly when comments are made. Approval depends on people seeing the comments and deciding to let them through, commenting on them, or disallowing them for various reasons (as stated in the comment policy). Sometimes this takes time if people are busy or a response is deemed necessary, so comments sometimes appear ‘out of order’, particularly at weekends or when we are busy. This is unfortunate but can’t be helped. -gavin]
Lynn Vincentnathan says
RE #44 on politics. Seems there’s a fear that human freedoms will be curtailed if we address GW. We don’t like people dictating how much we can drive, or what car we can own. I’m with you on that one, Sanderson (if that’s one of your peeves); I’d like to buy a plug-in hybrid so I can drive almost entirely on wind power (& save $$$ & the earth), but someone with power has decided only to make them available in Europe and Japan, and not in the U.S.
I’m thinking that if people don’t address GW, and we don’t have some regs now (as an inoculation against future harm), then there may come a time when things get really bad that will either (1) lead to chaos — every man for himself in a disintegrating society & skip the women & children — or (2) extreme regulations & curtailment of freedoms, so as to distribute life-sustaining resources more equitably in a starving world. Or maybe (3) just war and more and faster destruction of the world, because people don’t really understand why they’re suffering & just lash out. The longer we postpone acting to reduce GHGs either on our own accord or by a few light regulations now, the worse the environmental/economic/political situation may become as GW cuts deeper and deeper into our subsistence base.
There is even a better way to look at it. Necessity is the mother of invention. In the past natural obstacles/blockages have led to marvelous breakthroughs. Now artificial blockages in small doses (regulations) are leading to breakthroughs. Businesses having to meet environmental regs actually come up with solutions that save them more money & make them more efficient. For insights, read NATURAL CAPITALISM by Amory Lovins (NatCap.org), or look into 3M’s 3P program (Pollution Prevention Pays).
My idea is to let the Hockey Stick push us & $$-saving carrots pull us to a more efficient & conservative, lower GHG emitting world now, than risk paying the GW piper later. We only have money (& perhaps our subsistence base) to lose by not lowering GHGs in smart ways.
About the political (power) dimension, which is only analytically, not concretely separable from our total human-in-the environment condition. We’ve all heard the dictum that “knowledge is power.” Michel Foucault (social theorist) came up with “power is knowledge.” If you read Chris Mooney’s THE REPUBLICAN WAR ON SCIENCE (he also shows how Democrats twist things, too), or even the earlier entries on this site about Barton and Inhofe, you’ll understand that the powerful people (in government, media, industry) are the ones who hold the “knowledge strings” for Americans.
Hurrah for the internet. We can wade through it and find knowledge that makes sense (like on this site), without government or indy-tied media creating a false reality for us.
TCO says
William, the “so what” is that Mann did not adequately explain how he performed his statistics in the methods description of his paper. Also, the “so what” is that, as he is using an unapproved method, it is incumbent on him to check that his method does not produce artifacts.
[Response: We showed ages ago that the normalisation doesn’t make any difference even in the real case, and that VZ shows a similar result is not surprising – gavin]
TCO says
Gavin (thanks for your reply),
A. Your comment does not address my point that it is incumbent on experimenters to properly describe their procedures in the literature so that readers can evaluate the implications. This goes double for non-standard procedures.
B. I read your cited RC rationale for how you “disproved” the effect of abnormal normalization:
1. The shown mean is a mean of tree series. It has no weighting for area. That’s like sampling 100 Democrats and 5 Republicans and basing your guess about the outcome of a presidential election on that survey. Your sampling method is skewed. You need to do a fair sample, or you need to weight by area (party, in the analogy).
2. Steve does not agree with your arguments for several reasons (and you have not engaged on those).
3. Even if for some reason the “offcenter PCA method” worked with this particular case (and I’m not acknowledging that it does), if it is a flawed method for general cases, you need to show how your case does not have those flaws. Or better yet, just use a normal area-weighted mean.
4. Steve has proven that the method can data-mine for hockey sticks out of red noise, and that it magnifies the effect of a hockey-stick signal. Given that, why use such a funky method? Better yet, completely open question: why was that offcenter method picked rather than conventional Preisendorfer methods?
C. Finally, this matter is still much in debate. Steve had replies to the comments accepted through peer review. You should read them and evaluate the suitability of his logic and points when engaging on this topic. You gotta read both sides…
[Response: One could go on debating this point ad nauseam, and despite the fact that it has been shown not to be important for the final reconstruction, there are apparently always new reasons why we have to keep revisiting it. Throw it out completely; it still makes no difference! So in terms of possible benefit compared to the costs, it does not seem worthwhile to continue. Instead, the scientists involved (which doesn’t really include me) move on to testing new methods, incorporating more data and trying to reduce the error bars. This whole debate on technicalities that don’t matter is just a waste of time. There are much more interesting things to do. This field is not quantum mechanics or pure mathematics, where there is a ‘right’ way to do it and everything else is wrong; there are only useful or not-so-useful approaches, and you just want the answer not to depend on the (relatively arbitrary) details. In this case it doesn’t, so why continue? -gavin]
TCO says
Please, let’s continue. You say that debate is welcomed in the policy for the blog. Don’t slam the door shut on the primary issue of controversy around here. Let’s dig into the issue and the sub-issues.
You say that you’ve proven something. Then you say it doesn’t matter. Surely, if you’ve proven it, whether it matters is a separate question. Also, if a technicality is wrong, you should acknowledge that (regardless of whether you think its effect is minor).
[Response: Try reading what I said before: it’s demonstrably irrelevant, therefore it doesn’t matter. ‘Debate’ over. Of course, there are historical precedents for long-winded irrelevant debates (‘counting angels on the head of a pin’, for instance), but excuse me if I have better things to do. -gavin]
Steve McIntyre says
Gavin, your statement….[redacted]
[Response: Absent a public apology regarding your remarks about my ethics, I will not be drawn into a personal discussion with you. Discussion regarding upcoming papers is best left to after they have appeared. -gavin]
Brooks Hurd says
Gavin,
Steve’s remarks were in the form of a question, not a statement. The question was based on a perceived difference between the implementation of RC’s postings policy and the stated RC posting policy.
You stated a reason for this perceived discrepancy which certainly makes sense. The filters catch certain posts, and these must be reviewed. This takes time and causes posting delays. The appearance of a number of posts on this thread suggests that the delay has a finite duration, which certainly lends credence to your explanation.
What you redacted in #56 was scientific and on topic.
Paul says
I came here looking for a “commentary site on climate science by working climate scientists”.
Just a brief reading of this sorry, petty, political and ego-driven drivel has left me with no confidence in this whole field.
Lynn Vincentnathan says
Steve (#42), I think what you’re saying in your works is that we have not broken through the “noise barrier” re GW. If that’s a somewhat correct assessment of your idea, then do you think we will break through in the future — say, in 50 or 100 years?
Hans Erren says
re 59:
This depends on:
What el nino will do
What volcanoes will do
What the sun will do
Which emission scenario will materialise
How the sinks respond to the emissions
How the temperature responds to CO2
How the temperature responds to aerosols
Unknown factors
Summary: we don’t know with sufficient accuracy yet.
Eli Rabett says
It strikes me that the two papers, von Storch and Zorita (ZvS) and Huybers (H), treat two different micro-issues of McIntyre and McKitrick’s (MM) criticisms and also treat the macro criticism of Mann, Bradley and Hughes’ (MBH) implementation of principal component analysis. These arguments are being randomly packed and unpacked here, sowing confusion.
It has been shown that the MBH implementation of PCA works, although it is not optimal, and that the effect of its problems on the particular data sets is insignificant (see H and ZvS for comments on the PCA of the specific data sets which have been challenged). The implementation MBH used appears to be widely accepted in the paleoclimate community. If one wished to make a contribution other than beating a political drum, one might look at why this non-Preisendorfer implementation was adopted. In particular, it would be useful to ask on what sorts of data sets, if any, it provides a superior analysis, on what sorts it fails, and how badly it fails. Further, one could explore whether there are superior implementations. Whether or not one uses Preisendorfer 88 is not by itself an interesting factum to me. ZvS and H try to answer this question.
H carries out part of this program, but his paper is lacking in that it treats only a single data set. More convincing would have been construction of multiple arbitrary data sets, showing that his implementation was superior on all of them, or outlining where it worked best and where it was not an improvement. Along the way he uncovers some issues in the MM05 analysis.
Allow me to pause here for some cultural background for lurkers. As you have seen, the scientific literature is built to publish reanalyses and criticisms of published results. Typically this is done in further publications which start: We, X, have looked into Y, which Z first published on, and found results which show exactly the opposite. Z typically does not get to reply to that, except in a few journals such as Science/Nature which have short letter columns. What Z does is further research, published in the next article, which goes: We did Y again and all criticisms made in this other strange paper are wrong because…
After a while, a consensus forms on what the correct answer is, and the field moves on; you may still see X and Z going at it, but the consensus is accepted. Once this forms, X or Z will start getting referees’ reports saying hey, you got it wrong, move on, although they may find a friendly editor somewhere.
It is more serious when a journal publishes a comment on a particular paper. In that case Z gets to respond. I have seen very few cases where Z says: X was right, we were wrong. I have seen all sorts of strange justifications, and generally speaking everyone goes, yeah, yeah, and figures that Z got it wrong. (We scientists are very polite.) So the moral of the story is: when you read a criticism that is clear and a reply to that which is murky, go yeah, yeah, and move on.
However, H did provide a useful parsing of the MM05 and MBH98 results with respect to the North American tree-ring series. He explicitly stayed away from the question of whether the series can be used at all, given anomalous growth in the 20th century. This is really one for the dendrology community, which has been curiously quiet on the issue, and I would greatly appreciate some postings on this. My suspicion is that this is not the issue that MM make of it, for various reasons, starting with the fact that Malcolm Hughes is an expert in the area, and the analysis in MBH99. To summarize H’s result: his implementation is superior on the NoAm series; MM05 is next best, but not so different from MBH98 that it makes a difference.
ZvS generate a data set and use it to come to about the same conclusion wrt the various implementations of PCA.
I think that Gavin has it about right when he talks about this degenerating into a debate about very, very useful, vs. very useful. I take his point about math, but believe me there is similar noise in discussions about quantum mechanics. Then, of course there are the folks who simply don’t believe in quantum mechanics. http://www.crank.net/physics.html
Brooks Hurd says
Eli,
Of course it works if your goal is to individually weight a specific PC (or PCs) so that one or several PCs dominate. The reason one would use PCA is to give more weighting to certain proxies which are better than the rest. The result of such weighting is inconsequential to the overall reconstruction if the behavior of the highly weighted PCs is basically similar to most of the other PCs. However, if one or several highly weighted PCs are distinctly different from most of the others, then the resulting reconstruction will be most similar to the highly weighted PCs. The impact on the reconstruction under these conditions is certainly not inconsequential.
[Response: Ummm… this is getting just a bit boring. Fairly soon now we’ll decide that going round the same old circles again and again is too dull. Anyway: as the von S stuff shows, the actual answer isn’t terribly sensitive to how you do the PCA – William]
Dano says
Hello Hans (current #60). I see you and your buddies are all hanging out here.
I’m interested in your statement:
Summary: we don’t know with sufficient accuracy yet.
Has any of your buddies defined “sufficient” yet? And does The Posse discuss the relevance of the moderator’s replies on current 54, 55?
Best,
D
Stephen Berg says
Re: #60, “What el nino will do
What volcanoes will do
What the sun will do
Which emission scenario will materialise
How the sinks respond to the emissions
How the temperature responds to CO2
How the temperature responds to aerosols”
These are all taken into account in the models! The results are given in the IPCC reports!
Brooks Hurd says
Eli,
Unfortunately, this is exactly what happens, right up to the paradigm shift, when it becomes clear to everyone that the consensus was dead wrong.
Timothy says
Re: #63 Yes, but not perfectly, and they contribute to the noise, which is why a formal detection and attribution analysis is a complicated thing to do. The detection and attribution ‘community’ appear to have concluded that the signal has now emerged from this noise, however.
I’d be interested to hear on what basis people felt there was too much noise and how much more warming would have to be observed for them to conclude that the signal was greater than the noise.
In terms of forcing uncertainties I understand that uncertainties in aerosol forcing are the largest factor when analysing the climate of the 20th century [and trying to use observations of changes in global mean temperature up to now to constrain what the future response to future increases in CO2 would be – over longer timescales the effect of aerosols gets much smaller].
However, I don’t see how this uncertainty can lead to uncertainty over the effect of GHGs.
Steve McIntyre says
Re #59: Lynn, I have not expressed any views about whether or not we have broken through a “noise barrier”. In the papers in controversy, I argued that no conclusions can be drawn as to 20th century uniqueness based on MBH98 for a variety of reasons, including their flawed PC method, the interaction between their PC method and flawed proxies (bristlecones) and the failure of the reconstruction to pass statistical cross-validation tests. I am dubious about other multiproxy papers as well, but have not published on them to date.
Ferdinand Engelbeen says
Re #66,
The answer about the uncertainty of aerosols is contained in the first graph of the RealClimate discussion about their influence.
GHG influence and aerosol sensitivity are in lockstep. If the current influence of aerosols is zero, then the sensitivity to GHGs is ~1 C to match the past temperature trend (and solar needs to be increased too). If the current influence is -1.5 W/m2, then the sensitivity to GHGs increases to 6 C…
Further, have a look at comment #14 at the same page.
Btw, the largest uncertainty of current models is in the cloud feedback.
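As a back-of-the-envelope illustration of the trade-off Ferdinand describes: the numbers below are assumptions for illustration only (observed warming ~0.6 C, GHG forcing ~2.4 W/m2, 2xCO2 forcing ~3.7 W/m2), equilibrium is assumed and ocean heat uptake is ignored, so these are not real sensitivity estimates; they merely show why a more negative aerosol forcing implies a higher inferred sensitivity.

```python
# Inferred sensitivity ~ F_2x * dT_obs / (F_ghg + F_aerosol): the smaller the
# net forcing that produced the observed warming, the larger the implied
# response to a CO2 doubling.
F_2X, DT_OBS, F_GHG = 3.7, 0.6, 2.4   # W/m2, C, W/m2 (illustrative values)

for f_aerosol in (0.0, -1.0, -1.5):
    s = F_2X * DT_OBS / (F_GHG + f_aerosol)
    print(f"aerosol forcing {f_aerosol:+.1f} W/m2 -> sensitivity ~{s:.1f} C/doubling")
# -> ~0.9, ~1.6 and ~2.5 C: the lockstep behaviour described above, though the
# 6 C figure quoted would require transient/ocean-uptake corrections that this
# toy calculation omits.
```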
Lynn Vincentnathan says
re #67. Thanks for your reply, Steve. I guess I’m not so interested in the past as in the future, even though we need to understand the past to help us understand the future.
What do you think about the future? If the 20th c. did not show any special signal of AGW vis-a-vis the more distant past (at least in regard the papers, data, and methods you critique), do you think in the next 100 years there will more likely than not be an AGW signal (i.e., will global warming become evident, at least to some extent)? What’s your best guess?
Hans Erren says
re: 68
IMHO the strong alleged cooling influence of aerosols is a haunting legacy of Schneider…
Gavin, please unedit #56, you did have your apology, and it helps the discussion of the b-word.
plea-ea-ea-ea-ease?
Ferdinand Engelbeen says
Re #67,
Lynn, there is an essential point in looking at the past. Climate models need to correctly “predict” the past to have any predictive power for what may happen in the future. Therefore, it is crucial to know past climate as accurately as possible. If the past was as found by MBH98/99, then there was little natural variation, models give GHGs/aerosols a high sensitivity, and they predict a warm future. If the past was more Moberg-like (with more natural variation, thus higher solar/volcanic sensitivity), then models will need to adjust to a lower GHG/aerosol sensitivity and the future will be at the lower side of the projections…
Anyway, within some 10 years from now, the real trend in climate will make it clear whether the current warm period is mainly part of natural variation, mainly GHG-based, or a mix of both…
[Response: This is an incorrect premise. Earlier periods were not much affected by aerosols, but by solar forcing, which is similarly uncertain. Any reasonable temperature variability falls within the results based on the current sensitivity combined with different values of solar forcing – thus these periods are not terribly useful in constraining climate sensitivity. Your second point was made (with slightly more validity) 10 years ago. The subsequent data has all fallen on the side of dominant GHGs… -gavin]
Eli Rabett says
wrt #65, I simply note that the density of paradigm shifts is about 4 orders of magnitude higher on http://www.crank.net than it is in reality.
Hans Erren says
re 71 (response)
High climate sensitivity is always linked with speculative anthropogenic aerosol cooling.
ref
Andronova, N., and M. E. Schlesinger (2001): Objective Estimation of the Probability Distribution for Climate Sensitivity, J. Geophys. Res., 106(D19), 22,605-22,612.
The Dutch CKO climate run was performed with a sensitivity of 1K/2xCO2, using the SRES A1B scenario.
ref:
http://www.knmi.nl/onderzk/CKO/Challenge_live/Engels/index.html
[Response: Only if you think that climate sensitivity is only determinable from the 20th century changes. It turns out that this does not provide much of a constraint because of the uncertainties in aerosols, solar, ozone etc. Better constraints come from the paleo-data. See the previous post on climate sensitivity and aerosols to see why the association you are promoting doesn’t work. And by the way, the number you quote for the Dutch model is the transient climate response, not the equilibrium sensitivity as you well know (since I pointed it out before). -gavin]
Lynn Vincentnathan says
RE # 73, Gavin’s response: What is the estimated hottest it got during the 5 runaway GW extinction events in the past (e.g., end-Permian), and what level of super-anoxia was there – oxygen depletion due to O2 reacting with that massive amount of CH4 from melting clathrates, producing CO2?
[Response: This is a new one on me… were there really 5 runaway GW extinction events? It sounds rather doubtful – William]
I’d like to know what could happen (highest warming, anoxia) within the range of upper-end possibility in what might turn out to be the 6th mass extinction event (if we don’t start greatly reducing GHGs). I understand Earth can’t become like Venus, and that GW would eventually plateau & the climate come back to viability, but what would the upper limit (constraint) be, if everything that could go wrong went wrong (re all the warming forces)? I understand it’s pretty speculative, a matter of guesstimation.
I’ve been talking about this to my students, but (another question) if the worst were to happen, when could the anoxia start kicking in (I’ve been telling them hundreds of years from now, but I haven’t the faintest).
See: “Global Warming Preserved ‘Mass Kill’ Fossils, Study Says,” by James Owen, for National Geographic News, at
http://news.nationalgeographic.com/news/2005/10/1018_051018_fossils.html
[Response: Ah… looking at that, I say that (1) it doesn’t mean runaway (in the Venus sense) and (2) it looks like very early work – not something to rely on yet – William]
Lynn Vincentnathan says
Re the response in #74, by “runaway” I’m referring to positive feedbacks outweighing negative feedbacks, at least until some limit is reached. Initial warming (from whatever cause) triggers increased GHG emissions (or some more warming, e.g. from low albedo), which lead to further warming, which leads to further GHG emission, and so on until it gets a lot hotter than the initial warming from the initial forcing. “Runaway,” I suppose, is an anthropocentric term. I think of human GHG emissions (which are mostly under our control) causing initial increased warming; this warming then causes further GHG emissions from nature (which we people have no control over, hence “runaway”), which causes further warming, and so on, until it gets to some constraint or limit, beyond which it would be impossible for Earth’s climate to go, given the various variables involved (which are unlike the variables for Venus).
It was that limit I was asking about. What would that limit of warming be? How hot could earth’s climate get if all possible factors conspired together to cause warming….
Sashka says
Re: comment to 73
Only if you think that climate sensitivity is only determinable from the 20th century changes. It turns out that this does not provide much of a constraint because of the uncertainties in aerosols, solar, ozone etc. Better constraints come from the paleo-data. See the previous post on climate sensitivity and aerosols to see why the association you are promoting doesn’t work.
This is quite astounding. If I weren’t sure of the opposite, I’d suspect that you’re a GW skeptic. So the well-documented evidence of the fastest-ever GW, coincident with CO2 concentration growth, is not much of a constraint because of other uncertainties. OK, suppose so. But how could paleo-data provide a better constraint? How about uncertainties in paleo-data? How about sparsity of paleo-data? What do we really know about insolation in the distant past? My guess would be that, given the error bars, paleo data shouldn’t provide better constraints.
[Response:Read the previous posts (here, here, and here) on this subject. The paleo-data I refer to is for the last glacial period. -gavin]
nanny_govt_sucks says
The silence on this thread is deafening.
Have you no response to Steve’s post –
“von Storch and Zorita did not replicate the MBH98 methodology in key respects. In particular, their paper indicates that they did principal components on the correlation matrix of short-centered data, whereas MBH98 did singular value decomposition (SVD) on the short-centered data matrix itself. There’s a big difference. The VZ procedure simply does not generate hockey stick shaped PC1s in the way that the MBH98 procedure does and cannot be used to test the impact of the MBH98 methodology.”
?
Does your silence imply that you accept that VZ cannot be used to test the impact of the MBH98 methodology?
[Response: Since the VZ procedure manifestly *does* produce a hockey stick shaped result, the comment is incomprehensible – William]
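For readers trying to follow this exchange: the distinction being argued over is between extracting PCs by SVD of the (short-)centered data matrix itself, which lets each series’ amplitude influence its weight, and eigendecomposition of the correlation matrix, which rescales every series to unit variance first. A rough sketch of the two routes (illustrative only, not either paper’s actual code):

```python
import numpy as np

def pc1_svd(centered):
    """PC1 from SVD of the centered data matrix itself (rows = series):
    series with larger amplitude carry proportionally more weight."""
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def pc1_correlation(centered):
    """PC1 via the correlation matrix: each series is scaled to unit
    variance first, so between-series amplitude differences are discarded."""
    scaled = centered / centered.std(axis=1, keepdims=True)
    corr = scaled @ scaled.T / scaled.shape[1]
    eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
    return eigvecs[:, -1] @ scaled            # project leading mode onto time

# Under short centering, a series whose calibration-window mean is offset from
# its long-term mean acquires a large apparent variance; the SVD route keeps
# that extra weight, while the correlation route normalises it away. Whether
# that difference matters for the VZ test is the question being posed above.
```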
Andrew Dodds says
Re: 74, 75
What you are effectively looking at here is something along the lines of the mid-late Cretaceous period, with thermohaline-driven ocean circulation at a much higher temperature (and more sluggish). The evidence suggests that an equatorial band became effectively uninhabitable by anything bigger than bacteria. Temperatures were around 6-7 K hotter than today, on average, with a much flatter latitudinal temperature gradient.
A change to such conditions is, needless to say, extremely unlikely. Today’s ocean circulation patterns – in particular the isolation of Antarctica – effectively refrigerate the ocean, and although this process could conceivably weaken with large-scale, long-term AGW, it cannot completely fail; it’s driven by geography. Now, if someone decides to build a dam from the tip of South America to the Antarctic peninsula, and from Scotland to Iceland at the same time, then things would change. As for the idea of methane releases changing atmospheric O2 concentrations, the amounts required simply don’t exist; methane amounting to several percent of the volume of the atmosphere would have to be released. The idea of anoxic events causing kills is more to do with local breakdowns in shallow-water circulation, possibly due to higher temperatures. The idea is quite speculative; and always bear in mind that shallow-water life is massively overrepresented in the fossil record – a single, local fish-kill event can easily generate more fossils than a million years of life on land.
The Persian Gulf, eastern Mediterranean, and Red Sea (to name a few) are all ‘anoxic event’ candidates; a freakishly hot year could make them all do this at once, which would look like a big catastrophe in the fossil record. The Black Sea, of course, is anoxic already.
So, as a summary: a global anoxic ocean will not happen. Local anoxic events may well happen. A 6-7 K rise is probably the limit even if we burn the planet, although polar rises would be double this. Positive feedbacks are ultimately limited – once all the Arctic ice melts, for instance, the feedback stops, and ditto for Antarctica in extremis. Methane has a short residence time, meaning your emissions have to increase with temperature to give a positive feedback; hence a one-off event will not lead to a runaway.
Lynn Vincentnathan says
RE #78, what you wrote makes sense. There are at least 3 factors that would make this era different from the others: 1. the continents have drifted into a configuration that makes such extreme GW & anoxia, and hence such great extinction, much less likely (I think that’s what you’re saying??); 2. I read that life is more resilient now to survive climate change than it was eons ago; and 3. humans, who have invented the things that are contributing to AGW, also have the smarts to reduce the negative side effects of their projects. Unlike having to adapt biologically through natural selection over many, many generations, we people can change our culture within a lifetime, even within a short time, if we wish. Social (revitalization) movements can happen very quickly to change culture. What is needed for that to happen is the (correct) perception of serious problems that affect or will affect many people, a new vision for a better society (more humanly helpful, less environmentally harmful), leadership, and resource mobilization (funds, communications, etc.). I think the internet is a great tool for spreading information.
On the other hand, AGW is not the only thing harming our world & contributing to mass extinction. So a holistic approach would include all factors (see: http://www.well.com/user/davidu/extinction.html ).
And, even though you have alleviated my concern re an extreme extinction event, we are still facing plenty of problems & human deaths/harm with regular AGW. So whether it’s billions or millions, or even hundreds of thousands, who will eventually die from AGW, it behooves us to do the best we can to reverse course on this, without (I bow to contrarians’ concerns) shooting ourselves in the foot (losing our political freedoms & economic good things) in doing so.
Joel Shore says
Re #74 and 75: Lynn, you seem to be implying that unless negative feedbacks counter the positive feedbacks, we will necessarily get a runaway effect (until such time as the negative feedbacks overpower the positive ones). This is not correct. It is possible to have positive feedbacks that are sufficiently strong to amplify warming but not in a runaway manner.
As a simple example, suppose that each 1 deg increase in temperature causes water vapor to increase by an amount that would then be expected to produce an additional 0.5 deg of temperature increase. One might naively expect that this could lead to a runaway effect since this 0.5 deg temperature increase will then lead to further increase in water vapor and further temperature increase and so on. However, a simple thought experiment shows that this is not so: if GHG’s raise the temperature by 1 deg alone, the water vapor feedback will then kick it up another half a degree, then the feedback of the water vapor on this additional half degree of warming will kick it up another quarter of a degree, and so on. In this example, we have a convergent geometric series, so the water vapor effect amplifies the warming but the temperature does not grow without bound; in fact, in this simple example, the water vapor feedback ends up amplifying the warming due to GHGs alone by a factor of 2.
The above example may seem very contrived, but in fact I was able to show that such a geometric series giving amplification is exactly the sort of mathematical result one gets within a very simplistic, back-of-the-envelope model of the CO2 greenhouse effect with H2O feedback. In that model, I assumed only that the change in temperature depends logarithmically on concentration for both CO2 and H2O, and that the water vapor responds to increases in temperature in such a way that the relative humidity remains constant.
In that model, you indeed get a geometric series. Whether that series is convergent (leading to amplification) or divergent (leading to a runaway effect) depends on the strength of the response of temperature to changes in H2O concentration. And, of course, if it is convergent, the amount of amplification you get depends again on this strength.
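Joel’s geometric series is easy to make concrete; a minimal numeric sketch of his example (the 0.5-degree-per-degree gain is his illustrative number, not a measured feedback strength):

```python
# 1 C of direct GHG warming, with each degree of warming inducing `gain`
# further degrees via water vapor. Total = 1 + g + g^2 + ... = 1/(1 - g)
# for g < 1 (amplification); g >= 1 would be the runaway case.
def total_warming(initial=1.0, gain=0.5, passes=60):
    total, step = 0.0, initial
    for _ in range(passes):
        total += step
        step *= gain
    return total

print(total_warming(gain=0.5))  # ~2.0: warming amplified by a factor of 2
print(1.0 / (1.0 - 0.5))        # closed form of the convergent series
```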
Armand MacMurray says
Re:#77
William, perhaps I can help clarify. The key words in nanny’s quote are “…in the way that MBH98 procedure does…”.
The question is NOT “does VZ produce hockey sticks?” (which your answer addresses), but rather “does VZ produce hockey sticks *in the same way that MBH98 does*?” A deeper analysis of VZ’s work is required to answer this question; I would look forward to reading any such analysis done by the contributors here.
Dan Allan says
re 78:
Andrew, I’m curious that you see a limit around 6-7 K. Even given that positive feedbacks are ultimately self-limiting, couldn’t continued doubling of CO2 in the atmosphere (assuming, for example, that we switch to a coal-based economy) lead, in the long run, to increases that are greater than that?
Hans Erren says
re 73
I invite you to continue the discussion of climate sensitivity on a neutral forum, ukweatherworld.
I can’t speak freely here.
Gavin, don’t censor this one, it’s shadowposted at ukweatherworld.
Ferdinand Engelbeen says
Re #71 (response),
Gavin, indeed solar is/was the main driver of climate in the past millennium, as GHGs and aerosols probably didn’t vary much. Thus the decrease in temperature during the LIA is mainly solar-driven (assuming that volcanoes were not far more active in the LIA as a whole).
[Response: Bad assumption. If you’ve read any of the studies that have actually been done in modeling the climate of the past 1000 years (e.g. Crowley, 2000, but many others since), you should be aware that volcanic forcing is the primary radiative forcing responsible for “LIA” cooling at the hemispheric or global-mean scale. You can find references to these studies here and in this review paper on “Climate Over Past Millennia” by Jones and Mann (2004). Solar forcing appears to play a more significant role in explaining the spatial patterns of temperature change. See e.g. this review paper (Schmidt et al, 2004), where the response of a climate model to estimated past changes in natural forcing due to solar irradiance variations and explosive volcanic eruptions, is shown to match the spatial pattern of reconstructed temperature changes during the “Little Ice Age” (which includes enhanced cooling in certain regions such as Europe) as well as the smaller hemispheric-mean changes. – Mike]
Looking at the difference in reconstructed temperature between the MBH98 LIA (-0.3 C) and the Moberg LIA (-0.8 C), or the borehole LIA (-1.0 C), the influence of solar is two to three times higher, depending on the chosen reconstruction.
A higher solar influence means a lower influence of CO2/aerosols to obtain a good fit to the surface data of the past 1.5 centuries. In that case, the projection for a CO2 doubling ends at the low(er) side of the IPCC range…
[Response: Again, your reasoning is incorrect because your assumption that solar forcing is the dominant forcing associated with LIA hemispheric-mean cooling is false. Careful, quantitative studies by Hegerl and others using paleoreconstructions and model simulations of the past millennium estimate sensitivites in the mid-range of the IPCC TAR. You can find references to these studies in the review paper by Jones and Mann (2004) linked to above. – Mike]
That aerosols have less influence than calculated can be seen in the temperature trends in Europe: there is little to no measurable difference in trend between non- or less-polluted areas and the places where the models predict the largest change due to the 56% reduction of SO2 emissions since 1975.
That the GHG trend is now emerging from the background noise seems a little premature, especially as current models don’t reflect the natural variations in heat content of the oceans: compare the curve of Fig. 1 in Levitus with the CO2 and aerosol trends (SO2 emissions have been globally steady since 1975) for the same period. Have a look also at the frequency analyses by Barnett: Fig. S1 shows that the models don’t capture any periodic event between 10 and over 50 years. That includes the 11/22-year and longer (60+ year) solar and other natural cycles visible in many climate events. As a consequence, any secular trend in e.g. solar (constantly higher now than in the warm 1930-1940 period) may be underestimated too…
Ferdinand Engelbeen says
In addition to #78,
The University of Florida has made a summary of the Cretaceous period, in which they estimate a CO2 level 4-10 times higher than today and a global average temperature 6(-12) C warmer than today, though mostly toward the poles.
Even with CO2 levels some 25 times higher than today, the global average temperature seems to have been limited to around 22 C vs. 15 C today. Thus, while there were times with far more CO2, it seems that there are negative feedback mechanisms at work which limit the upper temperature of the earth.
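A rough plausibility check of those figures, using the logarithmic CO2 response discussed further down this thread (the per-doubling sensitivities are assumed values spanning the usual range, and this is arithmetic, not a model result):

```python
import math

doublings = math.log2(25)            # 25x today's CO2 is ~4.6 doublings
for sens in (1.5, 3.0, 4.5):         # C per doubling, spanning the IPCC range
    print(f"{sens} C/doubling -> ~{sens * doublings:.0f} C warmer")
# The low end (~1.5 C/doubling) gives ~7 C, close to the 22 C vs. 15 C figures
# above without invoking any extra negative feedback; higher sensitivities
# would overshoot, which is presumably where the feedback question bites.
```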
Sashka says
Re: 76
Gavin, I’ve read previous posts and I’m still confused.
Absent paleo data, the uncertainty in aerosol forcing must have increased the uncertainty in the climate sensitivity to CO2. I believe you acknowledged this in the “dimming” thread:
While it is true that, holding everything else equal, an increase in how much cooling was associated with aerosols would lead to an increase in the estimate of climate sensitivity, the error bars are too large for this to be much of a constraint.
I interpret the underlined part as saying that you cannot say how much worse the sensitivity estimate becomes. Right?
Then you say that using paleo data you can constrain it back to the same 1.5 to 4.5 deg C range. Again, I don’t understand how paleo data can help if you don’t even have a good grip on everything that I mentioned in the previous post. In addition, how do we know that climate sensitivity now and then is the same? Isn’t that a somewhat shaky assumption?
Sashka says
Re: 86
I was trying to underline
the error bars are too large for this to be much of a constraint.
Lynn Vincentnathan says
RE #80, I was even thinking of mentioning water vapor positive feedback as part of the “regular” climate package, and not the type of positive feedback I had in mind. I was thinking more of warming causing the release of further CO2 & CH4 from nature, which are themselves “forcings,” unlike water vapor, which is only a “feedback” (see https://www.realclimate.org/index.php?p=142) and not also a forcing.
Water vapor feedback doesn’t concern me as much as warming causing the release of further CO2 & CH4 from nature.
Question: Have these “positive feedbacks” which are also “forcings” (CO2 & CH4 emitted from nature due to the warming) been included in the models, or is there not enough accurate info on them to include them at this point?
Steve Latham says
Re 81 by Armand:
Ah, now I think this is getting somewhere! For McIntyre fans it is important to determine how the hockey stick is generated in the reconstructions. I suspect that for most, including RC, the fact that various methods produce the hockey stick is a validation of the original work (especially since no demonstrably better method contradicts the hockey stick, yet).
Gregor Mendel appears to have “cheated” (knowingly or unknowingly) in his work on pea genetics; yet his conclusions hold up. What is more important for population geneticists or for people interested in evolution? For people who want to say that evolution is crap, perhaps pointing to this early work (or to Java man or whatever) is the best strategy. For those interested in understanding evolution, however, reanalysing Mendel’s work can only yield so much.
I suggest that the same is true for people interested in the phenomenon of global warming, especially given the quality of the paleo data (bristlecones and such). Whereas for some people, the method used in a study 7 years ago is a potential “soft target” on which to focus, many others would rather focus on the best reconstruction and whether the original conclusion has held up. Thus, for the latter group, the relevant question is not as Armand stated.
I am curious about why only Eli has written regarding reasons the MBH98 method may have been used (why hasn’t Mann justified it?), but I’m more curious about whether or not recent temperatures are anomalous and why. Because the result is robust, it seems understanding subtle nuances of PCA methods is not necessary to satisfy that curiosity.
Steve McIntyre says
Re #89: The result is not robust. The various supposedly “independent” reconstructions are not in fact independent, either in authorship or in proxy selection. Each such study individually has important defects in proxy quality and in robustness with respect to outlier results.
[Response: At the moment, this looks like wild assertion / mud slinging. Given that the various reconstructions are the same on the important points, it seems that the major conclusions are robust. Asserting that everyone else is wrong and only you are right is implausible – William]
Andrew Dodds says
Re: 82, 85
Yes, I suggested that one obvious stopper (I hesitate to call it a negative feedback) is that you lose positive feedbacks – for example, once all of the ice has melted, the ice-albedo positive feedback stops. Indeed, this alone is probably largely responsible for the limit on how hot things can get.
Of course, a 6-7K rise on any kind of human timescale would be catastrophic.
Hans Erren says
re 91
You can overmoderate; that’s also why yahoo climatesceptics doesn’t work.
I think this blog is not a proper format for discussions, as users cannot spawn threads.
I do consider light moderation essential; that’s why I prefer ukweatherforum over sci.environment.
Lynn Vincentnathan says
Re #91, yes, I read that the end-Permian extinction (when up to 95% of life died) happened in a 6 degree C warmer climate than now.
The more limited positive feedbacks, such as ice-albedo, would perhaps help put the climate at a somewhat higher temp, at which, say, ocean clathrates would start melting at a much higher rate & that would perhaps make the warming spiral up until all those were melted. I’m also aware that we have about 200 years’ worth of coal – so if all that were rapidly burnt up by voracious, energy-hungry Homo sapiens (don’t know if “sapiens” would then be an accurate descriptive term), then that would also help push up the average global temp to its maximum.
There are human positive feedbacks. It gets hotter, we buy ACs and run them longer, which is somewhat moderated by reducing our heating in winter. Except that, I think (not sure) the climate is expected to increase in variability. Is that right? The mean fairly steadily increases in a jagged fashion, but the range or variance or standard deviation from cold to hot also increases???
I have to thank you folks for educating me and those out there reading this blog who don’t participate.
Dan Allan says
re 91:
Andrew, not disputing your assertion, just still trying to understand the explanation. I see that the change in albedo is finite, and that there is no reason to expect a “runaway” GHG effect, where the positive feedbacks themselves continue to send temperatures above the 6-7K increase.
But, even absent positive feedbacks, wouldn’t a continued redoubling of CO2 in the atmosphere potentially continue to drive the temperature higher? I’m not saying this is likely – that obviously depends human behavior. I’m just wondering what the outcome would be if CO2 were produced at a much faster rate than currently forecast, and if this faster rate were to continue for a couple of hundred years.
Stephen Berg says
“Modeling Of Long-term Fossil Fuel Consumption Shows 14.5 Degree Hike In Temperature”:
http://www.sciencedaily.com/releases/2005/11/051101222522.htm
To all skeptics out there, this was done on a CLIMATE MODEL and not a WEATHER MODEL. There is a great difference!
[Response: Minor irrelevant personal attack removed. Note also that the 14.5 is oF – “only” 8 oC, and to be fair is to 2300, not the more usual 2100 or CO2 doubling – William]
Thomas Lee Elifritz says
Ellen Thomas’ slideshow is now online :
http://ethomas.web.wesleyan.edu/DSL3.htm
Lynn Vincentnathan says
For those upset about RC’s moderation: I think it’s great that real scientists, from among the top climate scientists in the world, are sharing their knowledge and most current understandings with laypersons. I see RC as a climate change course for non-majors. Sometimes we students whisper to each other in class, or bring in some non-physical-science considerations; sometimes guest professors participate & help teach. Sometimes the host professors bring up important topics off the syllabus (like I bring up GW in my sociology & anthropology classes). Wow, it’s quite like other educational fora.
I want the RC hosts to correct us “students” when we’re wrong or unruly. This is a serious blog about an extremely serious topic. I know it stings a bit when they correct us (I’ve had a few wrist slaps myself), just as when teachers correct students in class or mark them off on tests. This blog isn’t so much for argumentation and debate, as it is for presenting scientific truth (scientifically accepted range of truths) as it currently stands, stochastic and subject to change as it may be. And this is not a place for scientists to meet & share knowledge so as to produce and advance the field. I think they do that by reading each others’ articles, attending conferences, and personal communications. This blog was created for us students.
It’s better to go to other blogs if what you want is all voices to be heard equally or explore the Zen of GW abatement — marklynas.org is a good one. If you want to learn about climate change, one of the most pressing issues of our time (perhaps of all time), then stay tuned here.
nanny_govt_sucks says
#94 – “But, even absent positive feedbacks, wouldn’t a continued redoubling of CO2 in the atmosphere potentially continue to drive the temperature higher? ”
Another aspect of “greenhouse” warming that no one wants to talk about is the saturation of the IR absorption effect of CO2. The relationship between CO2 concentration in the atmosphere and IR absorption is not linear. Actually it is logarithmic and we’re already on the flat part of the log curve, so additional CO2 concentrations will add little to the absorption of longwave IR. That means a smaller and smaller contribution to any warming from CO2, but don’t let the public hear this – they might not want to ban SUVs.
[Response: Stating something very clearly in the IPCC report would seem to be a funny way of keeping this ‘secret’. By the way, logarithmic is never flat. -gavin]
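To make gavin’s “logarithmic is never flat” concrete: the commonly used simplified expression for CO2 forcing (Myhre et al. 1998, as used in the IPCC reports) gives the same increment for every doubling. A quick sketch:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing, dF = 5.35 * ln(C/C0) W/m2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 560, 1120, 2240):     # successive doublings from preindustrial
    print(f"{c:>4} ppm -> {co2_forcing(c):5.2f} W/m2")
# Each doubling adds the same ~3.7 W/m2. Per added ppm the curve flattens,
# but per *doubling* it never does -- which also bears on the 'redoubling'
# question below: a second doubling adds about as much forcing as the first.
```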
Dan Allan says
re 98:
I know that the temperature response to CO2 is logarithmic, not linear. This is why I used the term “redoubling”. So, again, even though positive feedbacks will eventually shut off, isn’t there potential for an increase in temp that is greater than 6-7K, if fossil-fuel burning were to significantly exceed what is currently predicted for a protracted period? (Gavin, please feel free to field this one. I’ve been curious about what happens, theoretically, with the 2nd doubling of CO2, perhaps out somewhere in the next century, and find very little information about it. Thanks.)
Lynn Vincentnathan says
Thanks, Thomas (#96), for the PP slides. It answers a lot of my questions, & I shared a few slides with my mythology class, along with a “What Would Noah Do?” email I received yesterday. We just started the flood myths today, and those sort of fit, since we included “big bang” with the creation myths. (Note: myths are considered true to those who hold them, the equivalent of science or history, and they speak and are relevant to the present, though they happened in the past.)