One of the central tasks of climate science is to predict the sensitivity of climate to changes in carbon dioxide concentration. The answer determines in large measure how serious the consequences of global warming will be. One common measure of climate sensitivity is the amount by which global mean surface temperature would change once the system has settled into a new equilibrium following a doubling of the pre-industrial CO2 concentration. A vast array of thought has been brought to bear on this problem, beginning with Arrhenius’ simple energy balance calculation, continuing through Manabe’s one-dimensional radiative-convective models in the 1960s, and culminating in today’s comprehensive atmosphere-ocean general circulation models. The current crop of models studied by the IPCC ranges in equilibrium sensitivity from about 1.5°C at the low end to about 5°C at the high end. Differences in cloud feedbacks remain the principal source of uncertainty. There is no guarantee that the high end represents the worst case, or that the low end represents the most optimistic case. While there is at present no compelling reason to doubt the models’ handling of water vapor feedback, it is not out of the question that some unanticipated behavior of the hydrological cycle could make the warming somewhat milder, or, on the other hand, much, much worse. Thus, the question naturally arises as to whether one can use information from past climates to check which models have the most accurate climate sensitivity.
In this commentary, I will discuss the question "If somebody were to discover that climate variations in the past were stronger than previously thought, what would be the implications for estimates of climate sensitivity?" Pick your favorite time period (Little Ice Age, Medieval Warm Period, Last Glacial Maximum or Cretaceous); the issues are the same. In considering this question, it is important to keep in mind that the predictions summarized in the IPCC reports are not the result of some kind of statistical fit to past data. Thus, a revision in our picture of past climate variability does not translate in any direct way into a change in the IPCC forecasts. These forecasts are based on comprehensive simulations incorporating the best available representations of basic physical processes. Of course, data on past climates can be very useful in improving these representations. In addition, past data can be used to provide independent estimates of climate sensitivity, which provide a reality check on the models. Nonetheless, the path from data to change in forecast is a subtle one.
Climate doesn’t change all by itself. There’s always a reason, though it may be hard to ferret out. Often, the proximate cause of the climate change is some parameter of the climate system that can be set off from the general collective behavior of the system and considered as a "given," even if it is not external to the system strictly speaking. Such is the case for CO2 concentration. This is an example of a climate forcing. Other climate forcings, such as solar variability and volcanic activity, are more clearly external to the Earth’s climate system. In order to estimate sensitivity from past climate variations, one must identify and quantify the climate forcings. A large class of climate forcings can be translated into a common currency, known as radiative forcing. This is the amount by which the forcing mechanism would change the top-of-atmosphere energy budget, if the temperature were not allowed to change so as to restore equilibrium. Doubling CO2 produces a radiative forcing of about 4 Watts per square meter. The effects of other well-mixed greenhouse gases can be accurately translated into radiative forcings. Forcing caused by changes in the Sun’s brightness, by dust in the atmosphere, or by volcanic aerosols can also be translated into radiative forcing. The equivalence is not so precise in this case, since the geographic and temporal pattern of the forcing is not the same as that for greenhouse gases, but numerous simulations indicate that there is enough equivalence for the translation to be useful.
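As a rough illustration of how a CO2 change maps onto radiative forcing, the simplified logarithmic fit of Myhre et al. (1998) is commonly used; the short sketch below applies it to a doubling. The 5.35 coefficient and the example concentrations are standard values supplied here for illustration, not figures from this article.

```python
import math

def co2_radiative_forcing(c_new_ppm, c_ref_ppm, alpha=5.35):
    """Simplified logarithmic CO2 forcing in W/m^2 (Myhre et al. 1998 fit)."""
    return alpha * math.log(c_new_ppm / c_ref_ppm)

# Doubling CO2 from a pre-industrial 280 ppm: the fit gives ~3.7 W/m^2,
# consistent with the "about 4 Watts per square meter" quoted above.
print(co2_radiative_forcing(560.0, 280.0))
```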
Thus, an estimate of climate sensitivity from past data requires an estimate of the magnitude of the past climate changes and of the radiative forcings causing the changes. Both are subject to uncertainties, and to revisions as scientific techniques improve. The mechanical analogy in the following little parable may prove helpful. Down in the dark musty store-rooms of the British Museum, you discover a mysterious box with a hole in the top through which a rod sticks out. The rod supports a platform which has a 1 kilogram brick on it, but the curator won’t let you fuss with the brick, lest something break. For various reasons, though, people in the Museum are thinking of adding a second 1 kg brick to the platform, and you’ve been hired by the Queen to figure out what will happen. Though you can’t mess with the device yourself, you notice that every once in a while a mouse jumps down onto the brick; when this happens, the platform goes down a little bit, after which it returns to its original level without oscillating. From this you infer that there’s some kind of spring in the box, which is sitting in molasses or something like that, which has enough friction to damp out oscillations. Your job amounts to estimating how stiff the spring in the box is, without being allowed to take apart the box or perform any experiments on it. If the spring is very stiff, then putting another brick on the platform won’t cause the platform to sink much further. If the spring is very soft, however, the second brick will cause the platform to go down a great deal, perhaps causing something to break. The displacement of the platform is analogous to global mean temperature, and the stiffness of the spring is analogous to climate sensitivity.
Now, the unfortunate thing is that the mice are too light and come along too infrequently for you to get a good estimate of the stiffness of the spring by just watching the response of the platform to mice jumping on it. However, from looking through other dusty records elsewhere in the basement of the British Museum, you discover some notes from an earlier curator, who had also observed the box. He notes that there used to be big, heavy rats in the Museum basement, and has written down some things about what happens when the rats jump on the platform. From indirect evidence, like footprints in the dust, size of rat droppings, shed fur, plus some incomplete notes left behind by the rat catcher, you infer that the typical rat weighed a quarter kilogram. Now, the curator has left behind some notes about how much the platform drops when a rat jumps onto it from the shelf just above the platform. Unfortunately, the curator was a scholar of Old Uighur, and left behind his notations in the Old Uighur numeration system so his rivals couldn’t read it. Also unfortunately, the curator died before publishing his explanation of the Old Uighur numeration system, and that has been lost to time. Using the same Uighur wheat production records available to the curator, you estimate that his notes mean that the typical displacement is 10 centimeters per rat. From this you estimate that the stiffness of the spring is such that a 1 kilogram brick would cause a 40 centimeter displacement of the platform. Things are looking good. You get paid a handsome sum. Then, one day, to your horror, you open a journal of Uighur studies and find a lead article proving that everybody has been interpreting Uighur wheat production records wrong, and that all previous estimates of what the Uighur numbers mean were off by a factor of two. That means that while you thought the typical displacement of the platform was 10 centimeters per rat, the "natural variability" caused by rats jumping on the platform is much greater than you thought. It was actually 20 centimeters, using the new interpretation of the Uighur numbering system. Does that mean you ring up the Museum and say, "I was all wrong — the natural variability was twice what we thought, so it is unlikely that adding a new brick to the platform will cause as much effect as I told you last year!" No, of course you don’t. Since you have no new information about the weight of the rats, the correct inference is that the spring in the box is softer than you thought, so that the predicted effect of adding a brick will be precisely twice what you used to think, and more likely to break something. However, being a cautious chap, you also entertain the notion that maybe the displacement of the platform was more than you thought because the rats were actually fatter than you thought; that would imply less revision in your estimate of the stiffness of the spring, but until you get more data on rat fatness, you can’t really say. If you think all this is obvious, please hold the thought in mind, and bring it back when, towards the end of this commentary, I tell you what Esper et al. wrote in an opinion piece regarding the implications of natural variability observed over the past millennium.
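To make the parable’s arithmetic explicit, here is a minimal sketch using the numbers given above; the function and variable names are mine, purely illustrative. For a linear spring the inferred response to the brick simply scales with the observed sag, which is why the revised Uighur reading doubles the prediction.

```python
def predicted_sag(test_mass_kg, test_sag_m, new_mass_kg):
    """For a linear (Hookean) spring, sag is proportional to the applied load."""
    return test_sag_m * (new_mass_kg / test_mass_kg)

RAT_MASS = 0.25    # kg, inferred from droppings, fur and the rat-catcher's notes
BRICK_MASS = 1.0   # kg

# Original reading of the curator's notes: 10 cm of sag per rat.
print(predicted_sag(RAT_MASS, 0.10, BRICK_MASS))   # 0.40 m, the 40 cm estimate

# Revised reading: the same rats actually produced 20 cm of sag, so the
# spring is softer and the predicted effect of the brick exactly doubles.
print(predicted_sag(RAT_MASS, 0.20, BRICK_MASS))   # 0.80 m
```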
The Last Glacial Maximum (i.e. the most recent "ice age", abbreviated LGM) probably provides the best opportunity for using the past to constrain climate sensitivity. The climate changes are large and reasonably well constrained by observations. Moreover, the forcing mechanisms are quite well known, and one of them is precisely the same as will cause future climate changes. During the LGM, CO2 dropped to 180 parts per million, as compared to pre-industrial interglacial values of about 280 parts per million. Depending on just what you assume about cloud and water vapor distributions, this yields a radiative forcing of about -2.5 Watts per square meter. Global mean temperatures dropped by about 7°C at the LGM. Does this mean that the true climate sensitivity is (7/2.5) = 2.8°C per (Watt per square meter)? That would indicate a terrifying 11.2 °C warming in response to a doubling of CO2. Fortunately, this alarming estimate is based on faulty reasoning, because there is a lot more going on at LGM time than just the change in CO2. Some of these things are feedbacks like water vapor, clouds and sea-ice, which could be reasonably presumed to be relevant to the future as well as the past. Other forcings, including the growth and decay of massive Northern Hemisphere continental ice sheets, changes in atmospheric dust, and changes in the ocean circulation, are not likely to have the same kind of effect in a future warming scenario as they did at glacial times. In estimating climate sensitivity such effects must be controlled for, and subtracted out to yield the portion of climate change attributable to CO2. Broadly speaking, we know that it is unlikely that current climate models are systematically overestimating sensitivity to CO2 by very much, since most of the major models can get into the ballpark of the correct tropical and Southern Hemisphere cooling when CO2 is dropped to 180 parts per million. No model gets very much cooling south of the Equator without the effect of CO2. Hence, any change in model physics that reduced climate sensitivity would make it much harder to account for the observed LGM cooling. Can we go beyond this rather vague statement and use the LGM to say which of the many models is most likely to have the right climate sensitivity? Many groups are working on this very question right now. Progress has become possible only recently, with the availability of a few long-term coupled atmosphere-ocean simulations of the LGM climate. Time will tell how successful the program will turn out, but you can be sure that RealClimate is monitoring the pulse of these efforts very closely.
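The back-of-envelope arithmetic in the paragraph above can be written out directly. The sketch below reproduces the faulty single-forcing estimate, then shows how dividing by the total glacial forcing instead brings the implied warming per doubling back toward the model range; the -7 W/m^2 total used here is only an illustrative placeholder for the combined CO2, ice sheet and dust forcings, not a figure from this article.

```python
def warming_per_doubling(delta_T, total_forcing_wm2, forcing_2xco2=4.0):
    """Equilibrium warming per CO2 doubling implied by a past climate change."""
    sensitivity = delta_T / abs(total_forcing_wm2)   # degrees C per (W/m^2)
    return sensitivity * forcing_2xco2

# Faulty estimate: attribute the full 7 C LGM cooling to the CO2 forcing alone.
print(warming_per_doubling(7.0, 2.5))   # 11.2 C per doubling

# Controlling for the other glacial forcings means dividing by the total;
# the -7 W/m^2 used here is an illustrative placeholder, not a value from the text.
print(warming_per_doubling(7.0, 7.0))   # 4.0 C per doubling
```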
However that shakes out, if somebody were to wake me up in the middle of the night tomorrow and tell me that the LGM tropical temperatures were actually 6°C colder than the present, rather than 3°C as I currently think, my immediate reaction would be "Gosh, the climate sensitivity must be much greater than anybody imagined!" That would be the correct reaction, too, because the rude awakener didn’t suggest anything about revisions in the strength of the forcing mechanisms. Indeed, this is the very reasoning used, in reverse, by Dick Lindzen in the late 1980s, for that decade’s flavor of his argument for why CO2 increase is nothing to worry about. At the time, the prevailing climate reconstruction (CLIMAP) indicated that there was little reduction in tropical surface temperature during the LGM. Making use of mountain snow line data indicating larger temperature changes at altitude, Lindzen proposed a new kind of model of the tropical response, which fit the CLIMAP data and indicated very low sensitivity to CO2 increases in the future. When the CLIMAP data proved to be wrong, and was replaced by more reliable estimates showing a substantial tropical surface temperature drop, Lindzen had to abandon his then-current model and move on to other forms of mischief (first the "cumulus drying" negative water vapor feedback mechanism, since abandoned, and now the "Iris" effect cloud feedback mechanism).
Now, how about the Holocene, including the Little Ice Age and Medieval Warm Period that seem to figure so prominently in many skeptics’ tracts? This is a far harder row to hoe, because the changes in both forcing and response are small and subject to large uncertainties (as we have discussed in connection with the "Hockey Stick" here). What we do know is that the proposed forcing mechanisms (solar variability and mean volcanic activity) are small. Indeed, the main quandary faced by climate scientists is how to estimate climate sensitivity from the Little Ice Age or Medieval Warm Period at all, given the relatively small forcings over the past 1000 years, and the substantial uncertainties in both the forcings and the temperature changes. The current picture of Holocene climate variations is based not just on tree ring data, but on glacial mass balance and a wide variety of other proxy data. If this state of knowledge were to be revised in such a way as to indicate that the amplitude of the climate variations were larger than previously thought, that could very well call for an upward revision of climate sensitivity.
Indeed, quantitative studies of the Holocene climate variations invariably support this notion (e.g. Hegerl et al., Geophys. Res. Lett. 2003, or Andronova et al., Geophys. Res. Lett. 2004). Such studies can reasonably account for the observed variations as a response to solar and volcanic forcing (and a few secondary things) with energy balance climate models tuned to have a climate sensitivity equivalent to 2.5°C per doubling of CO2. If the estimates of observed variations were made larger, a greater sensitivity would then be required to fit the data. Ironically, even arch-skeptics Soon and Baliunas, who would like to lay most of the blame for recent warming at the doorstep of solar effects, came to a compatible conclusion in their own energy balance model study. Namely, any model that was sensitive enough to yield a large response to recent solar variability would yield an even larger response to radiative forcing from recent (and therefore also future) CO2 changes. As a result, their "best fit" of climate sensitivity for the twentieth century is comfortably within the IPCC range. This aspect of their work is rarely if ever mentioned by the authors themselves, and still less in citations of the work in skeptics’ tracts such as that distributed with the "Global Warming Petition Project."
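A toy version of the kind of energy balance exercise described above makes the logic concrete. The sketch below is purely illustrative: the forcing history, heat capacity and sensitivity values are invented for the example and are not taken from Hegerl et al. or Andronova et al.

```python
import numpy as np

def ebm_response(forcing_wm2, sensitivity, heat_capacity=8.0, dt=1.0):
    """Zero-dimensional energy balance model: C dT/dt = F(t) - T / sensitivity.
    sensitivity is in K per (W/m^2); heat_capacity in W yr m^-2 K^-1."""
    T = np.zeros_like(forcing_wm2)
    for i in range(1, len(forcing_wm2)):
        dT = (forcing_wm2[i - 1] - T[i - 1] / sensitivity) / heat_capacity
        T[i] = T[i - 1] + dt * dT
    return T

# A made-up millennium of solar-plus-volcanic forcing (illustrative only).
years = np.arange(1000, 2000)
forcing = 0.2 * np.sin(2 * np.pi * (years - 1000) / 200.0)
forcing[(years > 1450) & (years < 1470)] -= 1.0    # a volcanic episode

for s in (0.7, 1.4):   # roughly 2.6 and 5.2 C per doubling (times ~3.7 W/m^2)
    amp = np.ptp(ebm_response(forcing, sensitivity=s))
    print(f"sensitivity {s:.1f} K/(W/m^2): simulated amplitude {amp:.2f} K")
# With the forcing held fixed, the simulated amplitude scales roughly with the
# sensitivity, so a larger reconstructed amplitude requires a larger sensitivity.
```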
This brings us to the claims made recently by Esper et al. In an opinion piece in Quaternary Science Reviews (J. Esper, RJS Wilson, DC Frank, A Moberg, H Wanner and J Luterbacher, "Climate: past ranges and future changes," Quat. Sci. Rev. 24, 2005), they outlined the uncertainties in knowledge of the amplitude of Holocene climate variations, and also a strategy for reducing the uncertainties. We at RealClimate could hardly object to that. Better estimates of the Holocene variability would be of unquestioned value. However, Esper et al. concluded their piece with the statement:
- "So, what would it mean, if the reconstructions indicate a larger (Esper et al., 2002; Pollack and Smerdon, 2004; Moberget al., 2005) or smaller (Jones et al., 1998; Mann et al., 1999) temperature amplitude? We suggest that the former situation, i.e. enhanced variability during pre-industrial times, would result in a redistribution of weight towards the role of natural factors in forcing temperature changes, thereby relatively devaluing the impact of anthropogenic emissions and affecting future predicted scenarios. If that turns out to be the case, agreements such as the Kyoto protocol that intend to reduce emissions of anthropogenic greenhouse gases, would be less effective than thought."
They go on to qualify their conditional criticism of Kyoto by stating "This scenario, however, does not question the general mechanism established within the protocol, which we believe is a breakthrough," but the political opinions of the authors are not our concern. Neither are we weighing in here on the relative merits of the various Holocene climate reconstructions. What does concern us is that the inference regarding climate sensitivity is precisely opposite to what elementary mathematical and physical analysis dictates it should be. Our correspondents at the Montreal climate negotiations which concluded last week report that Esper et al. was given a lot of play by the inaction lobby. The only major news outlet to pick up on the story, though, was Fox News, whose report by "Junk Science" columnist Steve Milloy here arguably represents a new low in propaganda masquerading as science journalism. Milloy does not mention that Esper et al. is an opinion piece, not a research article. He also fails to mention that Esper et al. do not actually conclude that a downward revision in the importance of CO2 is necessary; they only attempt to say (albeit based on faulty logic) what would happen if higher estimates of climate variation proved right. Milloy also fails to note the final quote supporting Kyoto, for what that’s worth. Of course, it is too much to expect that Milloy would look into other papers on the subject to see if there might be something wrong with the reasoning in Esper et al. The lack of "balance" in this instance is jarring, for a network that claims to have a copyright on the description "Fair and Balanced". What Milloy was engaging in goes beyond mere lack of balance. It is an example of "quote mining," which has become a favored tactic of those seeking to counter sound science with unsound confusion (see the interesting discussion on quote mining on the Corante site). The fact that no other media outlets have picked up on the unfortunate Esper quote leaves us with some feeling of encouragement that journalists are beginning to be able to filter out bad science, no matter how interesting an article it might make.
Maguns says
Link to Esper: http://www.wsl.ch/staff/jan.esper/publications/QSR_Esper_2005.pdf
[Response:Thanks for the link. I didn’t know it was available anywhere without subscription. The AGU publications I quoted unfortunately require subscriptions to read. –raypierre]
Don Flood says
On a related note, is a “runaway greenhouse” effect impossible, given the current data and understanding about global warming? In other words, is this something that you can “rule out” as being extremely unlikely, or impossible?
[Response: This is one of the few perils I think we can rule out with essentially 100% confidence. If you were to build glass walls and keep all the heat in the tropics, a saturated tropics would actually be over the limit for the runaway greenhouse. However, heat leakage would be inevitable, and if you allow that, you find that you don’t get a runaway even if you force the tropics to be saturated with water vapor. I did this calculation in the AGU Chapman volume water vapor article, which you can find on my publications site. I didn’t actually show the 8xCO2 calculation there, but that amount of CO2 isn’t enough to throw the system into a runaway. While you don’t get a runaway, the tropics does get very warm in a saturated scenario — around 50°C. See also the saturated GCM calculation in my more recent water vapor article, from the Caltech general circulation book. I had to stop that calculation before it reached equilibrium, but not because of a runaway. The warm ocean next to Antarctic ice that hadn’t yet melted caused gale-force winds that crashed the model from numerical instabilities. –raypierre]
Jon Bland says
So what is actually being stated here? Unless climate scientists can nail down with reasonable accuracy the implications of climate change for good or bad, then who is really listening? I have recently found out that climate scientists state that 60% of existing carbon emissions must be cut, but Kyoto only says 5%, I believe. Montreal last week probably stated another 5% after 2012; what use is that to humankind in reality?
It’s about time that climate scientists started going on the offensive with regard to the impact of climate change, otherwise we are going to keep on burning the vested interest.
I mean, look at the recent results and findings for the Gulf Stream around Europe; this is bad news, but no climate scientist stuck their neck out and stated that this was even potentially catastrophic, just that more research needs to be done.
It is about time someone got serious about Climate Change and not just reported findings that do not make good reading.
Roger Pielke Jr. says
Ray-
You continue a pattern here at RC of confusing an opinion column that appears in the media with “science journalism.” This is not only a mischaracterization but a great insult to people who actually make their living reporting on science and issues involving science — a group which would not include Steve Milloy. RC has every right to call out cherry picking, but you will also better serve your readers by knowing what it is you are criticizing. Milloy’s work is not “propaganda masquerading as science journalism”; it is just “propaganda,” which is defined by askoxford.com as follows:
“information, especially of a biased or misleading nature, used to promote a political cause or point of view. – ORIGIN originally denoting a committee of Roman Catholic cardinals responsible for foreign missions: from Latin congregatio de propaganda fide – congregation for propagation of the faith.”
[Response: Oh, come on, Roger. You’re the one who knows how to get a read on public perceptions. Ask 1000 Fox viewers if they can tell the difference between a piece like Milloy’s and what you consider journalism, and knock me over with a feather if as many as half of them can. For that matter, even if Milloy’s piece should be considered “opinion,” like an op-ed, that doesn’t absolve him from using correct arguments. Propaganda, on the other hand, is a wilful distortion of truth made in order to advance an agenda. Milloy tries to set himself up as a source of information about what is good science, and in trying to put global warming in the same class as copper bracelets, crystals and pyramids he is doing a great disservice to public discourse. As for rights, of course Milloy has every right, in the constitutional sense, to publish what he does. That doesn’t mean that what he publishes is any less morally reprehensible than various kinds of hate speech, which are also constitutionally protected.–raypierre]
Mark A. York says
Well it’s good to know Roger doesn’t side with Milloy, but as a science-background journalist myself, albeit unemployed, the difficulty is finding a journalist who knows the nuances of the field. Moreover, you could get an editor who vetoes certain quotes. That, and in some circles Milloy represents the truth they want to hear. It’s very insidious, and RC should be commended, not scolded, for what they are doing to resolve the so-called controversy. Real controversies please. The mainstream media often fail to know one from the other.
Mauri Pelto says
Raypierre: Excellent analogy. You raise a point about the tropical temperature difference from the present to the LGM that has long been a source of interest and uncertainty on my part. I had a chance in the mid-1980’s to work with Manabe, Broecker and Denton, regarding the CLIMAP temperatures and how to independently check them in the tropics. I proposed that we examine alpine glacier snowline data from today and the LGM from Tierra del Fuego to the Brooks Range. In doing this I was able to complete a transect with more than 50 data points. In fact the changes with latitude were nearly identical along the entire cordillera. Precipitation could not be the culprit, since the change was the same on maritime and continental-setting alpine glaciers. The change in snowline was almost always (in more than 90% of cases) between 600 and 900 m. The LIA snowline lowering was 100-150 m. Thus, the LGM represents a five- to six-fold greater temperature change. And it occurred even in the tropical mountains of the Andes. This work then appeared in the Scientific American conveyor belt article in the early 1990’s by Broecker and Denton, and I published all the data in Paleo cubed. Since publication the ocean data has come around in support of this higher tropical latitude temperature change. My question is, how do you expect to be able to maintain a much higher temperature gradient during the LGM than we have today between tropics and high latitudes, since this would tend to increase heat flux? Further, the glacier data suggest we did not, at least for a portion of the LGM, since there is no way to note how long the glaciers were at the depressed snowline location.
[Response:You’re absolutely right to point out that the mountain snowline data was a critical part of unraveling the tropical LGM puzzle. I think Dave Rind, Dorothy Peteet and Peter Webster had a big role in bringing its importance to the fore also. Essentially, Lindzen assumed that both CLIMAP and snowlines were right about temperature, and that demanded a mechanism which increased lapse rate in cold climates (decreasing it in warm climates). That wasn’t a bad assumption at the time, and Lindzen’s work was good science. What happened is that CLIMAP was wrong, and extrapolating snowline data to the surface was right. A case of a beautiful theory (Lindzen’s) shot down by an ugly fact. The fact that Lindzen’s ideas have always been considered seriously by the climate research community, despite going against the grain, makes me proud of the functioning of my field. Now, as for the implications for the meridional gradient during LGM times, this is a really interesting subject. The story from Manabe and Broccoli is that sensible (“dry”) heat transport goes up with the temperature gradient, but that is compensated by a reduction in latent (“moist”) heat transport. That’s certainly part of the story, but it’s still an active research field, and while models agree pretty well on net flux out of the tropics, the lack of a good theory of baroclinic eddy transport inhibits asking “why” questions. Inhibition of eddies by strong horizontal shears plays a role, as well as shifts in storm tracks. It’s something I’m working on myself, as well as a handful of other people. The more the merrier.–raypierre]
Carl Zimmer says
Ray–Thanks for linking to my post on Milloy’s quote mining. I just wanted to say that I think Roger Pielke’s comment is unfair.
Take a look at the Milloy piece. Is there any evidence on the page that it is an opinion column? All you see is a header, “Junk Science,” which would suggest that the piece is going to blow the lid off of some pseudoscientific myth. The piece claims to contrast claims in the media and by “Kyoto believers” with scientific research, for which Milloy gives specific citations. I find it hard to see how one would get the impression that this article is presented as “just propaganda.” It is true that the Junk Science column is listed under the opinion section of the top banner (under the subheading of “views”), but that’s like burying a disclaimer in fine print. Most people who get their information online would probably just follow a link straight to the piece, and would assume that it is accurately describing how real science goes against Kyoto, etc. So I think it’s over the top to say that RealClimate is doing a disservice to readers in this post.
Mark A. York says
That’s right Carl, but that’s what they all say. I’ve used RC in skeptic arguments and they were just called political by the naysayers. In other words, they buy the naysayers’ claims, like Dr. Gray’s and Milloy’s political “science,” but not the real thing. It’s sad, but these propaganda wars will continue. That’s just the sort of game-show world we live in now. Continuing to expose them is the key, as discouraging as that is.
Tim Osborn says
It may be of interest to some that in our “perspective piece” in Science last year (Osborn and Briffa, The real color of climate change?, Science 306, 621-622), where we discussed the von Storch et al. paper, we concluded with:
It is already clear, however, that greater past climate variations imply greater future climate change.
which is a similar conclusion to Ray’s (we had no space to explain why we reached this conclusion, but the reasons are those explained above by Ray). Esper et al. had more space to explain how they reached their opposite conclusion, but didn’t do so – which is a shame!
Tim
Dragons flight says
Could someone provide references discussing the radiative forcing equivalence of things like CO2? It is touched on in this article, but I’ve never seen a discussion of how effective this approximation is. Obviously CO2 is not just a lid on the atmosphere so there must be some effect of the vertical gradient (even if small), and since it depends on the absorption and reemission of radiation, there must be some effect by latitude. I would also expect some non-linearity in response as one moves toward saturating the infrared absorption window with higher CO2.
I’ll admit to not having looked at the question in any detail, but for at least some GCMs one gets the impression that they are just turning the same linearly varying knob at the top of the atmosphere everywhere. Maybe that’s an okay approximation to be making, but I’d like to see some discussion of it. Thanks for your help.
[Response: You’ll be pleased to know it isn’t a linearly varying knob at the top of the atmosphere! For an in depth discussion, I suggest Hansen et al, 2005 which goes into some detail about how well the forcings concept works for the very different physics of all the main effects. The basic conclusion is that the concept does work well enough for these kinds of comparisons to be done. – gavin]
[Response: To amplify on Gavin’s comment, the main inaccuracy in using a CO2-equivalent is not from the vertical distribution, but rather from the fact that bands of one gas overlap with bands of others, so there is a little non-additivity. You can play with this yourself using the online MODTRAN or NCAR radiation models (look here). The nonlinearity isn’t very strong. A much bigger issue in the often-quoted “global warming potentials” is the question of how the effects of relative lifetimes are figured in (see Archer’s piece on this). This goes beyond the translation into radiative forcing equivalents, and attempts to factor in that a long-lived greenhouse gas will be around to do things to the climate longer than a short-lived one. It is also worth noting how and why such equivalents are used. It used to be (pre-1995) that most radiation models in GCM’s only had CO2 and water vapor, so one had to translate, say, methane, into a CO2 radiative equivalent just to do the calculation. This is no longer true. Almost all radiation models now have the specific band structure of the individual greenhouse gases explicitly represented. However, for use with integrated assessment models, energy balance models, and — most importantly — for writing of treaties and legislation, it is still generally necessary to translate all GHG’s into some equivalent in terms of CO2. –raypierre]
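To make the idea of translating a mix of gases into a CO2-equivalent concrete, here is a minimal sketch that simply inverts the simplified logarithmic CO2 forcing fit mentioned earlier; the 1 W/m^2 of non-CO2 forcing is an illustrative assumption, and the sketch deliberately ignores the band-overlap non-additivity described in the response above.

```python
import math

ALPHA = 5.35    # W/m^2 per e-folding of CO2 (simplified Myhre et al. 1998 fit)
C_REF = 280.0   # ppm, pre-industrial reference concentration

def co2_equivalent_ppm(co2_ppm, other_ghg_forcing_wm2):
    """CO2 concentration whose forcing alone equals CO2 plus the other gases,
    found by inverting the logarithmic forcing expression."""
    total_forcing = ALPHA * math.log(co2_ppm / C_REF) + other_ghg_forcing_wm2
    return C_REF * math.exp(total_forcing / ALPHA)

# Example: 380 ppm CO2 plus an assumed 1 W/m^2 from other well-mixed GHGs.
print(round(co2_equivalent_ppm(380.0, 1.0)))   # roughly 460 ppm CO2-equivalent
```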
[Response: A similar conclusion to the one cited by Gavin above was reached independently by a panel of scientists (of which I was a member) convened to report on these issues by the National Academy of Sciences last year, resulting in the NAS report “Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties (2005)”. The report also recognized the important role, for scientific understanding, in evaluating regional relationships involved in radiative forcing and response. – mike]
Jim Wolford says
Any possibility of providing a way to check what the many abbreviations stand for for us laymen?
Thanks
Jim in Alaska
[Response: Our admittedly incomplete glossary should provide some assistance here. -mike]
andre bijkerk says
Reflecting about:
“Now, how about the Holocene – including the Little Ice Age and Medieval Warm Period that seem to figure so prominently in many skeptics’ tracts ? This is a far harder row to hoe,”
Here are a few woolly arguments that the Holocene climate variation is considerably bigger than generally assumed:
http://www.yukonmuseums.ca/mammoth/abstre-g.htm
“TUSK GROWTH INCREMENT AND STABLE ISOTOPE PROFILES OF LATE PLEISTOCENE AND HOLOCENE Mammuthus primigenius FROM SIBERIA AND WRANGEL ISLAND (L)
David L. FOX, Daniel C. FISHER, and Sergey VARTANYAN
[…]
To date, we have a full complement of analyses for about three years of growth for one tusk from the Taimir Peninsula of Siberia (the Jarkov mammoth; ca. 20,300 rybp) and growth increment and tusk apatite carbon and oxygen isotope ratios for two to three years of growth from two tusks from Wrangel Island (4,400 and 4,120 year bp). Our sampling focused on the last several years of life in these tusks, which are preserved in the dentin adjacent to the pulp cavity. The d18O values of structural carbonate in apatite from the Jarkov mammoth (15.3±1.3 permil VSMOW) are similar to published values for other Siberian mammoths and to mean values for high-latitude North American mammoths. The values for the two Wrangel specimens are higher (21.1±1.0 permil and 22.4±0.9 permil VSMOW) and more like values for mammoths from eastern Russia and Hot Springs, South Dakota. The higher d18O values in the Wrangel tusks relative to the Jarkov mammoth and others from Siberia suggest considerably warmer temperatures and/or major differences in moisture transport during the middle Holocene relative to the late Pleistocene.”
Wrangel Island now:
http://sea.unep-wcmc.org/index.html?http://sea.unep-wcmc.org/sites/wh/wrange_island.htm~main
“The average annual temperature is -11.3°C. Average July temperatures range from 2.4°C to 3.6°C on the south coast but notable differences in temperature occur with differences in terrain, and in the intermontane depressions, temperatures can reach 10°C. Fohn winds also occur. The frost-free period is about 2 to 3 weeks.”
Paul says
Sure. Milloy is an obvious hack, except apparently when he’s judging journalism contests. “The Junkman Climbs to the Top”
I guess it’s better in a blog to reference other industry writers such as those at Tech Central.
Roger Pielke Jr. says
Ray (response to #4)-
You are right, Andy Revkin, Steve Milloy, what’s the difference? Similarly, in science there’s probably no difference between peer-reviewed and not peer-reviewed? After all, the public can’t tell the difference.
You claim a harm to public perceptions from opinions that you find “morally reprehensible” (hate speech and cherry picking quotes, all the same, huh? Amazing.), any evidence for this claim?
And among folks who study propaganda, in case you are interested in what people who actually study it think, it need not necessarily involve a willful distortion of the truth; see for a brief intro:
http://en.wikipedia.org/wiki/Propoganda
Gavin says
Whether Milloy is or isn’t a journalist is not a very interesting question to me, and I’m a little puzzled as to why Roger feels so strongly about it. The only issue we have any useful expertise to contribute here is whether his ‘scientific’ pronouncements have any validity or not (in this case, clearly not, as we all agree). The discussion might be better served if we refrain from over-extrapolating each others’ comments and avoid unnecessary distractions.
nanny_govt_sucks says
#12 – “The higher d18O values in the Wrangel tusks relative to the Jarkov mammoth and others from Siberia suggest considerably warmer temperatures…”
You’re kidding, right? Look at where one Siberian mammoth was found:
http://www.geocities.com/stegob/mammoth.html
Note the ice in the picture. Everywhere. Obviously the Siberian climate was much warmer when the mammoth lived there. Quoting from the link above:
“[Dutch paleontologist Dick] Mol also notes that the mammoth lay atop clay soil filled with frozen prehistoric plants that “still had their original green color.” Mol says that these “smaller” clues, “this is very important, because it indicates a lake and pond 20,000 years ago, and might tell us about the climate and temperature at the time.””
Mark A. York says
“You are right, Andy Revkin, Steve Milloy, what’s the difference? Similarly, in science there’s probably no difference between peer-reviewed and not peer-reviewed? After all, the public can’t tell the difference.”
Many can’t and reporting who is a political hack and who isn’t should be part of the story. It isn’t now, but I’m hopeful. To Milloy’s tribe Revkin is a liberal Times reporter and thus, anything he reports is false. This false impression must be countered somehow.
Ender says
Just a couple of questions:
1. What could have been the mechanism that drove down the CO2 level at the LGM? We can see that anthropogenic causes are increasing CO2, but is there any research about what may cause the opposite? My guess would be a vast bloom of plants, possibly algae or photosynthetic plankton or something like this. Mind you I am not advocating ocean seeding or anything, just curious about what could do this.
[Response: This is the most critical outstanding question in the theory of ice ages. Many theories have come and gone, with none actually standing up. Almost everybody agrees that it has to do with fluctuations in the carbon uptake by the oceans, with a number of theories relying on enhancement of the biological pump, much along the lines you suggest. Perhaps David Archer could be persuaded to do a post on the current state of the problem. –raypierre]
2. Is thermal runaway impossible only with CO2 alone? There is evidence that thermal events have happened in the past, such as in the Eocene; however, then methane was in on the act.
[Response: What happened in the Eocene wouldn’t count as a runaway in the sense of the runaway greenhouse that brought Venus to its present toasty state. To get a runaway of that sort, you need a reservoir of greenhouse gas that goes into the atmosphere to ever greater extents as temperature increases. Oceans on Earth provide such a reservoir for water vapor, but there’s no corresponding reservoir for CO2. Clathrates potentially provide a reservoir for methane which could destabilize and dump into the atmosphere. That would be a kind of “mini-runaway,” I guess. Thinking more broadly, it is possible to get a pure CO2 runaway on a planet with a liquid CO2 ocean or with massive CO2 glaciers. Not an issue for Earth, though.–raypierre]
Ender says
Thanks for that. Perhaps I should have been more specific on what I think thermal runaway is. I was previously under the impression that thermal runaway was like the Eocene thermal event, whereas it would seem that the term more accurately describes a Venus-like event – fair enough.
In the light of this perhaps I should rephrase my question to – Do you think that a dangerous thermal event like the Eocene is probable with the degree of warming from anthropogenic causes?
[Response: When you say the “Eocene,” I presume you’re referring to the Paleocene-Eocene Thermal Maximum. For a while that event was attributed to methane release from clathrates, and that hypothesis still has its supporters. As to whether such a runaway could happen in today’s climate, I’ll refer you to Dave Archer’s recent post on the clathrate story. Regarding the Paleocene-Eocene event itself, the field seems to be coming around to the idea that it had something to do with a greatly accelerated oxidation of organic carbon stored on land, perhaps associated with the drying up of interior shallow seaways. Obviously, we don’t have interior shallow seaways to dry up today, but one could envision a feedback process where warming accelerates oxidation of soil carbon, which leads to more warming, and so forth. Something like this could rear up and exacerbate global warming but I’m not sure I’d classify it as a “runaway.” The question of whether accelerated carbon sinks on land can turn to accelerated carbon sources is something a lot of terrestrial carbon cycle modellers are interested in, but I couldn’t give you an accurate read on the state of the art there, except that some models do show the land sink turning into a land source given sufficient warming. –raypierre]
Pat Neuman says
In #18, response: “What happened in the Eocene”
…
James Zachos, in his article “Rapid Acidification of the Ocean During the Paleocene-Eocene Thermal Maximum”, concludes with a question. “What, if any, implications might this have for the future? If combustion of the entire fossil fuel reservoir (~4500 GtC) is assumed, the impacts on deep-sea pH and biota will likely be similar to those in the PETM. However, because the anthropogenic carbon input will occur within just 300 years, which is less than the mixing time of the ocean(38), the impacts on surface ocean pH and biota will probably be more severe”. (James C. Zachos et al., 10 June 2005, Science)
Timing of input for operational events may be different from the timing of input used in model calibration. Additionally, forecasting physical events often involves choosing whether models are off track in timing or volume, then making model adjustments. If the forecasters choose the wrong kind of adjustment, a poor prediction can result. Unfortunately, no one knows for sure if models are off in timing or volume (or both) until after the peak has occurred, which is too late (can’t use hindsight).
James Annan says
Regarding “runaway greenhouse”, we’ve found parameter values in our AGCM+slab ocean model that seem to generate a runaway warming under 2xCO2. That is, we got a near-linear increase in temperature to +16°C after 60 years, which showed no signs of tailing off, and we didn’t pursue it further.
Needless to say, I don’t actually think such a result is realistic. But I’m not sure that it could easily be ruled out as impossible based on elementary physical considerations. The model in question didn’t give a particularly good simulation of the present-day climate, but one could say the same about every model if one was picky enough…
[Response: That’s interesting, James. Do you know if that behavior is associated with cloud feedbacks? As I mentioned, I can get similar-looking behavior in a GCM if I over-ride the convection scheme and force water vapor to be saturated throughout the troposphere. However, I don’t know of any reasonable way the normal GCM physics could do that, given the role of subsidence regions in creating unsaturated air. –raypierre]
Hans Erren says
Another debate on climate sensitivity is running here:
http://www.ukweatherworld.co.uk/forum/forums/thread-view.asp?tid=25003&start=1
some preliminary conclusions:
A distinction must be made between transient climate sensitivity, equilibrium sensitivity and the equilibrium time.
Equilibrium time is typically several centuries, so the equilibrium sensitivity does not matter for scenarios that only span a century.
Tom Rees in UKweatherworld:
Almuth Ernsting says
Is it possible to conclude from the increasing rate of warming since 1990 (including this year, with neutral ENSO, being as hot as 1998 with an intense El Nino) that climate sensitivity must be higher than, say, the lower end of figures suggested by models?
Ferdinand Engelbeen says
Raypierre,
Back to the essence of the discussion. If there was a larger natural variability in the past, you and other scientists presume that this points to a larger general sensitivity for all greenhouse gases. Scientists like Esper, Moberg, Luterbacher and others disagree, and expect that a larger sensitivity to natural forcings (mainly solar and volcanic) comes at the cost of the sensitivity to natural and man-made greenhouse gases.
Or to make an analogy with your example: instead of one spring and one platform, there are many springs and platforms interconnected with different levers (“interactions”) in the system. The overall effect of a small mouse jumping on one of the “sensitive” platforms may be just as large as putting a heavy brick on a “less sensitive” platform. In that case, the discovery of a doubling of the mouse’s effect says something about the sensitivity for the mouse platform, but nothing about the other platforms…
And there are differences in platforms. Solar has its largest direct effect in the stratosphere and further top down, as more and more is absorbed/reflected in different layers. Volcanic dust also has most direct effect in the stratosphere. Both have proven (opposite) effects on stratospheric temperatures, the Jet Stream position, wind and cloud positions and cloud amount and precipitation.
Greenhouse gases like CO2 and water vapour and dust have their largest effect at lower altitudes and their effect is reducing bottom up. For methane and ozone, the maximum effect is somewhere in the higher troposphere/lower stratosphere. Thus all different effects at different levels.
The main problem with current GCM’s is cloud feedback. These are responsible for most of the 1.5-5 K range in projection for a CO2 doubling of the different models. Several models see a positive feedback of clouds when the temperatures increase, but this seems to be wrong, at least in the tropics and the Arctic, where clouds form a strong negative feedback. See also the comment of Wielicki and Chen at the NASA page and the next page about natural variability and the performance of the models in the tropics.
For volcanic, there may be some overestimating of historical influences, as the influence of temperature and reduced solar input (less insolation) on tree rings is hard to separate. For solar, there is a clear correlation between the sun cycle and cloud cover (+/- 2% over a cycle, no matter what the underlying physics may be), such that the original ~1.2 W/m2 (TOA) variability in the sun’s radiation is enhanced, which is underestimated in most models. What is difficult to know is how much change there was between e.g. the LIA and current solar strength, as we only have accurate measurements since the satellite age. Here we depend on the reconstructions, which give a wide range of 0.1-0.9 K for solar influences (with an average influence of 0.1 K for volcanic) for the period LIA-current, and thus a wide range for the real climate sensitivity for solar.
For sulphate aerosols, current models probably overestimate their influence, as there is no measurable effect of the large (over 60%) reduction in SO2 emissions in Europe at the places where the largest influence should be visible, according to the models. If the influence of aerosols is less than expected, then the influence of CO2 must be lower than expected, to fit the temperature trend of the 1945-1975 period. Unknown is what the overall effect of greenhouse gases/temperature was/is/will be on cloud cover. The measurements of cloud cover are much too short (and/or too coarse) to make any long-term correlation valid.
Thus in summary, a change in sensitivity to one of the primary actors in climate variation affects the general sensitivity of climate only if all the feedbacks are essentially similar for all the primary actors involved, which is very probably not the case…
[Response: In order to conclude that higher past variability meant lower sensitivity, one would have to demonstrate two things. First, one would have to show that sensitivity to known non-GHG forcing mechanisms (solar variability and volcanic aerosols) was greater than sensitivity to the same radiative forcing applied via GHG changes. Second, one would have to show that those non-GHG forcing mechanisms are operating today in such a way as to allow the recent warming to be matched despite a reduction in climate sensitivity to GHG changes. These things are not outside the realm of physical possibility, but nobody has demonstrated a physical mechanism that makes this scenario work. In contrast, the more conventional view, which I put forth in my article, has been turned into equations and analyzed quantitatively. –raypierre]
Tom Rees says
Ferdinand, regarding the localisation of aerosols and the climate response: you shouldn’t expect much covariance. See Climate Sensitivity and Response (Boer & Yu, 2003).
Tom Fiddaman says
I think the essence of Raypierre’s argument is that if
T = c*FORCING
then absent any new information about forcing, new information suggesting larger variability in T implies larger c.
In the Esper et al. 2002 reconstruction paper, the authors conclude:
“Therefore, the large multicentennial differences between RCS and MBH are real and would seem to require a NH extratropical forcing to explain them, one that attenuates toward the equator.” That sounds like a conclusion that more variable T implies more forcing, which is reasonable if you have adequate other information to constrain c.
Moberg et al. conclude: “This large natural variability in the past suggests an important role of natural multicentennial variability that is likely to continue.”
The argument that “larger sensitivity for natural (mainly solar and volcanic) goes at the cost of the sensitivity for natural and man-made greenhouse gases,” or that “enhanced variability during pre-industrial times would result in a redistribution of weight towards the role of natural factors in forcing temperature changes,” seems to rely on a model like the following:
T = a*ANTHRO + b*NAT
[Response: This indeed would seem to be the kind of thing Esper et al have in mind, but the problem is coming up with a physical explanation that would allow the system to behave this way. The difficulty with that is that both ANTHRO and NAT can be translated into equivalent radiative forcings, and you’d have to say why the system should respond more strongly to 1 W/m**2 from solar variability than 1 W/m**2 from greenhouse gas changes. Also, you’d have to show that your model was still able to fit the recent changes, where we know what the NAT forcing mechanisms are. I’m not saying that this is impossible, just that until somebody does it there’s no basis for concluding that higher past variability means lower climate senstivity. –raypierre]
It’s not clear to me why one should conclude that more variable T implies bigger b and smaller a, when it could just as well imply bigger b and a. One possible mental model is that a is constrained by the instrumental record, while b is constrained by reconstructions. Then, if reconstructions turn out to be more variable, b is bigger absent new information on natural forcing, and some of the variability in the instrumental record that was thought to be due to a is really due to b. However, that presumes either an unknown natural forcing, or that some combination of known natural forcings fits the instrumental record to permit substitution of b for a. The former goes out on a limb; the latter should be easy to demonstrate as a statistical exercise with a simple energy balance model.
In any case, the statement that “agreements such as the Kyoto protocol that intend to reduce emissions of anthropogenic greenhouse gases, would be less effective than thought” isn’t the whole story. If sensitivity is lower, then it’s obvious that the deltaT created by GHGs at any level will be smaller, and thus that Kyoto will cause a smaller reduction in temperature from a lower baseline. However, if your definition of “effective” is staying below a given deltaT, low sensitivity could increase Kyoto’s chance of success – though it would be the sensitivity doing the heavy lifting, and the need for Kyoto would be less evident.
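As a sketch of the “statistical exercise with a simple energy balance model” suggested above (everything here is synthetic: the forcing series, the noise and the coefficient values are invented for illustration, not fitted to any real data), one can regress a temperature record on anthropogenic and natural forcings and see whether inflating the “natural” variability in the record shrinks a or simply enlarges b.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150                                       # a synthetic "instrumental" period
anthro = np.linspace(0.0, 2.5, n)             # W/m^2, smooth ramp (illustrative)
nat = 0.3 * np.sin(np.arange(n) / 8.0) + rng.normal(0.0, 0.1, n)   # W/m^2

def fit_two_coefficients(T, anthro, nat):
    """Least-squares fit of T = a*ANTHRO + b*NAT, the model written above."""
    X = np.column_stack([anthro, nat])
    (a, b), *_ = np.linalg.lstsq(X, T, rcond=None)
    return a, b

true_c = 0.8                                  # K per (W/m^2), same for both terms
T = true_c * (anthro + nat) + rng.normal(0.0, 0.05, n)
print(fit_two_coefficients(T, anthro, nat))   # a and b both come out near 0.8

# Doubling the "natural" part of the temperature record (a more variable
# reconstruction) while leaving the forcing series unchanged enlarges b;
# it does not, by itself, shrink a.
print(fit_two_coefficients(true_c * (anthro + 2.0 * nat), anthro, nat))
```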
Ferdinand Engelbeen says
Re #25,
Tom, the Canadian model results are for proposed future CO2/SO2 emission/concentration levels. Compared to the past decades, the pattern (more emissions in South Asia) and the relative forcings are completely different, with much less relative influence of aerosols than today (due to faster increasing CO2 levels).
The huge change in SO2 emissions in Europe should be measurable, according to runs of the Hadcm3 model for the period 1990-1999, but it is not…
Alastair McDonald says
Re 25.
It is not Ferdi, but Ray who is expecting to find ONE sensitivity! That is the Holy Grail which the IPCC is searching for.
Ender says
Ray – thank you for your considered reply. Real Climate is such a valuable source of information – please never give it up.
Andrew Dessler says
Ray-
You said that climate can’t change w/o forcing. But what about El Niños … the global temperature increases, but it’s an entirely internal phenomenon. Or is there some forcing going on? I think this is an important point, because a skeptic might argue that the MWP was warm because of internal variability, and it is this multi-century scale internal variability that’s driving the present-day warming. What’s your take on this argument?
Regards.
[Response: Hi Andy! I’m looking forward to coming down to see you at A&M sometime. I’m glad you brought up the point about what might be called “internal” variability. The possibility of things like El Nino is why I left myself an out with the rather cryptic phrase to the effect that SOMETIMES the reason for the climate change can be set off from the collective behavior of the system and considered as an external forcing. In the case of the global temperature change caused by El Nino, there’s still a “reason” for climate change, to be found in the coupled air-sea interaction. It’s a reason that can be identified and studied, but the different links in the behavior are too intimately coupled to allow one to extract any part and call it a forcing. I didn’t bring this up in the context of the centennial Holocene or longer term LGM climate changes because nobody has yet put forth a viable mechanism accounting for such climate changes in terms of internally generated variability. All the quantified mechanisms involve forcings like volcanic and solar variability for the Holocene case, and CO2 and Milankovic (modified by the slow land-glacier response) for the LGM. It’s theoretically possible that some internal cycle in the ocean circulation could give Holocene temperature fluctuations as big as the LIA, but until one identifies such a mechanism, it’s essentially impossible to say what the consequences would be for climate sensitivity.
Just for the sake of illustration, though, here’s one scenario where higher Holocene variability could go along with lower climate sensitivity: Suppose that some unknown stabilizing mechanism makes the real world less sensitive to radiative forcing than our current models. Suppose also that, DESPITE THIS STABILIZING MECHANISM, some as-yet unknown ocean circulation cycle operates that is the sole cause of the Holocene centennial scale fluctuations, and that this cycle has reversed and is operating today, yielding a temperature change that happens to mimic what models give in response to radiative forcing changes. In that case, you could have a consistent picture with lower climate sensitivity. Aside from the fact that there’s no physical support for such a picture, this state of affairs is highly unlikely because you’d still have to account for things like the way the system responds to CO2 at the LGM, the observed radiative imbalance of the planet at present, the observed penetration of heat into the upper ocean, and so forth. I suspect a scenario like I’ve given is what people have in mind when they think that higher “natural” variability would indicate reduced sensitivity, but until somebody puts specific mechanisms on the table, it’s just science fiction. –raypierre]
Ferdinand Engelbeen says
Re #26,
Tom, the real world temperature looks more like:
T = Fsolar x FBsolar + Fvolcanic x FBvolcanic + FCO2 x FBCO2 + Faero x FBaero + ….
To this must be added the other greenhouse gas forcings with their feedbacks, plus internal oscillations which may or may not be enhanced by the primary forcings. Further, the strength of the feedbacks depends on the initial conditions (like ice age vs. interglacial). And last but not least, F_CO2 and F_aero include both natural and man-made CO2 and aerosol levels.
The basic problem is thus that we have (at least) four input variables and only one equation, where the output for the past 1.5 centuries is more or less exactly known. The temperature trend only constrains the sum of all the components plus their feedbacks, which means that it is impossible to solve the equation without further information. That further information comes from proxies (ice cores, tree rings, …), which give (less exact) information about temperature and some of the primary actors of the past.
What Raypierre and others expect is that the feedbacks of the different forcings are essentially the same for the same change in forcing and the same starting conditions. Esper, Moberg and others expect different feedbacks for each individual forcing.
As there is only one temperature record, any change in the forcing or feedbacks of one of the actors comes at the cost (or enhancement) of one or more of the others. If the influence of aerosols is less than expected, then the influence of CO2 must be decreased too, or it is impossible to explain the cooling period 1945-1975 with increasing CO2 levels. The graph of temperature vs. aerosol forcing on RealClimate makes that clear: if the aerosol forcing is near zero, then a CO2 doubling gives a 1.2 K increase in temperature. If the aerosol forcing is -1.5 W/m2, then the increase in temperature can reach 6 K! In both cases, you need to adjust the solar factor to fit the temperature trend of the past century.
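To make the arithmetic behind this trade-off concrete, here is a minimal Python sketch (the numbers are rough, rounded placeholders rather than the values behind the RealClimate graph or Andreae’s analysis, and ocean heat uptake is ignored): hold the observed warming fixed, vary the assumed aerosol forcing, and back out the implied warming for a CO2 doubling.

```python
# Illustrative sketch only: how the assumed aerosol forcing changes the climate
# sensitivity inferred from a fixed observed warming. All numbers are round
# placeholders, not published estimates.

F_2XCO2 = 3.7   # radiative forcing of doubled CO2, W/m^2 (approximate)
F_GHG   = 2.4   # assumed 20th-century greenhouse-gas forcing, W/m^2
DT_OBS  = 0.7   # assumed observed 20th-century warming, K

def inferred_2xco2_warming(f_aerosol):
    """Warming per CO2 doubling implied by a given (negative) aerosol forcing.

    Treats the observed warming as an equilibrium response to the net forcing,
    so the numbers are only qualitative.
    """
    net_forcing = F_GHG + f_aerosol        # aerosol forcing partly offsets the GHG forcing
    sensitivity = DT_OBS / net_forcing     # K per (W/m^2)
    return sensitivity * F_2XCO2           # K per CO2 doubling

for f_aero in (0.0, -0.5, -1.0, -1.5):
    print(f"aerosol forcing {f_aero:+.1f} W/m^2 -> "
          f"{inferred_2xco2_warming(f_aero):.1f} K per CO2 doubling")
```

Because the sketch treats the observed warming as an equilibrium response, it understates the high end; the point is only that the inferred sensitivity climbs steeply as the assumed aerosol offset grows.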
The same applies to variations in solar output (radiation) and/or insolation (Milankovitch cycles). Here too, one can expect that an increase of the solar influence, based on historical variations of the last millennium, will lead to a decrease of the influence of CO2, again necessary to fit the temperature trend of the last century in the temperature equation. Stott ea. made variations with the Hadcm3 model to get a “best fit”, albeit within the constraints of the model (like a fixed minimum influence of aerosols). It turned out that a doubling of solar at the cost of 20% of the greenhouse gas influence gave an optimum…
In summary: there is a large uncertainty about the relative influences of the four main forcings (including their feedbacks), and it is quite certain that the feedbacks are different for the different forcings. Any change in the strength of natural (volcanic, solar) influences based on historical variations will have an opposite effect on the estimated influence of greenhouse gases, and thus of man-made emissions.
About Kyoto: based on the effect (a few years’ delay before a CO2 doubling is reached) and the costs, I would prefer an enormous effort to search for and promote fossil fuel alternatives. That will have more effect on CO2 emissions in the medium to long term than several Kyotos…
[Response: There’s some good thinking here, but I think you may have confused Gavin’s discussion of the attempts by Andreae et al to infer climate sensitivity from recent warming with the question of whether there’s a different sensitivity coefficient for aerosol vs GHG radiative forcing. There’s nothing in the material cited in Gavin’s post to support the latter. What Andreae et al do is very much in the spirit of the discussion given in my article. They, too, assume an equivalence in radiative forcing between GHG and aerosol. What they do is add different estimates of the aerosol radiative forcing to the GHG forcing, while keeping the temperature response fixed at the observed recent warming. That gives them various estimates of the climate sensitivity. In this case, there’s no uncertainty about the magnitude of climate variation, but uncertainty about the forcing. In the spirit of my analogy, they are talking about changing the estimated weight of the rats rather than the estimated displacement of the platform. Now, with regard to possibilities for different sensitivity coefficients, what we should really be thinking hard about is the implication of Shindell et al’s finding that the changes in solar UV can give an amplified stratospheric response, which can work its way into an amplified regional NH tropospheric response. One earlier comment tangentially alluded to this, but there are a lot of gaps that need to be filled in to say what such a result might mean for attempts at estimating climate sensitivity. Certainly, it would say that energy balance models are too crude a tool. One could build similar stories around the possibility that the solar effect is via a cosmic-ray and cloud connection, but I don’t think this is considered to be a viable hypothesis anymore, given the sloppiness uncovered in the way Svensmark et al analyzed their data. –raypierre]
Lynn Vincentnathan says
There’s been discussion of RUNAWAY GW, and I think I can see the semantic problem. From a geologist’s view, it may mean “runaway from any earthly controls” (or negative feedback processes) – like what has happened on Venus. From a layperson’s (my) view, it may mean “runaway from any human controls.” That is what I mean when I use it (but the geologist’s view would be an extreme subset of that).
So, what I would propose, since “runaway” is a useful term, indicating a positive feedback scenario more succinctly, is to distinguish between “permanent runaway” and “temporary or limited runaway” GW (temporary on the geological time scheme of thousands or millions of years).
I had a horse when I was a kid. It would “runaway” with me. That is, when I got it into a fast canter or gallop, it would “take the bit” and run as fast as it could (we actually beat several Del Mar race horses on the beach that way). Whatever I would do, however hard I would pull on the reins, I could not stop or control that horse. However, eventually it would tire out on its own and stop. So, after the time it ran away with me into a forest and I ended up with broken ribs, I would never get it into a gallop unless I was at the (empty) beach or on the race track.
[Response: Interestingly enough, though the Académie française doesn’t have an approved term for “runaway greenhouse,” the term that has gained some currency in France is “effet de serre galopant.” –raypierre]
I think GW is sort of like that. If we can reduce GHGs enough (slow down GW), we may be able to avoid triggering positive feedbacks that we may not have any control over. If not, those feedbacks may kick in, taking us up to a higher level of GW & other nasty effects — and we will have no ability to control it, even by reducing our GHGs to near zero. Then after much damage from a human (and animal & plant) perspective is done, the warming will level out (stabilize) and eventually come back down again over eons.
Sort of like the end-Permian GW & extinction period, though perhaps not quite so severe (but who really knows).
Lynn Vincentnathan says
RE #24, Ferdinand you state, “Several models see a positive feedback of clouds when the temperatures increase, but this seems to be wrong, at least in the tropics and the Arctic, where clouds form a strong negative feedback.”
I’m not even an amateur climate scientist, but my logic tells me that if clouds have a stronger negative feedback in the Arctic, and I know (from the news) that the Arctic is warming faster than other areas, then it seems the “forcing GHGs” (CO2, etc.) may have a stronger sensitivity than suggested, but this is suppressed by the cloud effect. Then what if we got to another “quantum-type” level where the cloud effect disappeared or reversed (I don’t know what I’m talking about here – skating on really thin ice), and all we had left was the unsuppressed forcing-GHG effect? Then it would really, really get hot.
RE the main points made in the post, I think I have also used the same logic to suggest that if natural variability is greater than thought, then our A-GHGs should also have a higher sensitivity.
I know I have made the argument that more info about natural forcings being really strong all the more makes it a matter of prudence to reduce as much of our A-GHGs as possible, and pronto, since we wouldn’t want a situation in which natural forcings (a bunch of volcanoes or greater solar output in the near future) piggy-back on our anthropogenic greenhouse forcings. That would really be bad. Since we can’t control the sun or volcanoes, it behooves us all the more to do what we can do and reduce our own GHGs.
Joel Shore says
Re #31: You say, “About Kyoto: based on the effect (a few years delay before a CO2 doubling is reached) and the costs, I would prefer an enormous effort to search for and promote fossil fuel alternatives.”
However, I don’t understand how you expect this search and promotion to happen. Some would recommend crash government programs but others, who believe more in markets, argue that government is not good at choosing the winning technologies. And, the best way to get the market involved is to internalize the cost associated with greenhouse gas emissions rather than making the earth’s atmosphere a free sewer for these gases. This is exactly what Kyoto does.
[Response: Yes, indeed. The stumbling block right now is that coal is cheap and is likely to remain so. It is cheap because the environmental damage caused by coal burning isn’t factored into the price. A profit-making private company not only has no reason to avoid burning coal, it in some sense has an obligation to burn coal if that produces the greatest profit without breaking any laws. There’s no reason to expect a company to behave any other way. The Kyoto protocol helps to address this by imposing a kind of extra cost on burning coal, but there is the problem that this cost is applied non-uniformly. It doesn’t affect the US or the developing world. Naturally, I would vastly prefer a global tax on coal burning, with some kind of mechanism to plow back revenues into developing world aid. The argument for Kyoto isn’t that it’s the best that can be done, but that it’s all we have right now, and it sets at least a few countries moving in the right direction. –raypierre]
To my mind, the main benefit of Kyoto is not the emission cuts per se but the technologies that will be developed in order to make these cuts. The supposed dichotomy between Kyoto and technology is a completely false one because in market economies, technologies are not developed to solve problems whose costs are externalized. If I can offload the cost on to everyone else, why should I bear it myself?
Hans Erren says
re 31:
So the FBs are transfer functions (H) of signal periodicity (w), optical wavenumber (k) and probably also temperature (T).
A response to a given forcing F(w) is therefore
T(w)=F(w)* H(w,k,T)
That’s a very ugly differential equation…
Ferdinand Engelbeen says
Raypierre,
Thanks for the several responses (on my and others’ comments). Here is a reaction to the main points about the natural (solar, volcanic) vs. man-made (GHGs, aerosols) sensitivity:
– If there was a larger temperature variation in the past millennium, the mathematical evidence is that an increase of one of the terms of the temperature trend equation must come at the cost of one or more other terms of the equation. There is only one temperature trend, which is the result of all the individual terms (forcings and feedbacks) and against which all proxies are calibrated, and a larger influence of solar in the past implies a larger influence at present. The same reasoning is used by Andreae and Gavin for aerosols vs. CO2 alone (Andreae) and for aerosols vs. all other sensitivities (Gavin). The same reasoning is used by Stott ea. in a search for the relative strength of the different sensitivities in the Hadcm3 model. Both sulphate aerosols and CO2 have their influence in the (lower) troposphere, while solar and volcanic have their largest influence in the stratosphere; this is essential in the discussion.
– It is practically proven that tropospheric aerosols have (far) less influence on temperature than expected by current models, see my comment on aerosols here and the lack of increase in insolation, despite a huge reduction of aerosols in Europe, according to Philipona ea. This necessitates a reduction of the sensitivity for the CO2 forcing.
– Climate probably has a higher sensitivity for solar than for CO2, for the same change in forcing. This is based on the fact that, while the change in total energy is only 0.1% during a sun cycle, the change in UV is over 10%, which has its largest effect in the stratosphere. From Stott ea.:
“We find that climatic processes could act to amplify the near-surface temperature response to (non enhanced) solar forcing by between 1.34 and 4.21 for LBB [Lean ea.] and 0.70 to 3.32 for HS [Hoyt & Schatten], although degeneracy between the greenhouse and solar signals (especially HS – see earlier in this paper) could spuriously increase this upper limit.”
Note that the last remark can go either way, as the solar signal can even be more enhanced at the cost of the sensitivity for the greenhouse signal…
And from Hansen ea.:
“Solar irradiance change has a strong spectral dependence [Lean, 2000], and resulting climate changes may include indirect effects of induced ozone change [RFCR; Haigh, 1999; Shindell et al., 1999a] and conceivably even cosmic ray effects on clouds [Dickinson, 1975].
Furthermore, it has been suggested that an important mechanism for solar influence on climate is via dynamical effects on the Arctic Oscillation [Shindell et al., 2001, 2003b]. Our understanding of these phenomena and our ability to model them are primitive,…”
While there are doubts about the link between cosmic rays and cloud cover, there is an observed significant link between (low) cloud cover and solar radiation within the last two sun cycles. I don’t see any reason why this shouldn’t be included in current models (including a long-term factor for changes in solar radiation since the Maunder Minimum). After all, the (secondary) influence of aerosols (on clouds) is included in models too, and its sensitivity is far from certain…
About the models reproducing past temperature trends:
It is known that multivariable processes can fit trends with different sets of parameters. Climate is no different, as can be seen in the fact that a broad range of cloud feedbacks (compensated by other parameters…) or a range of combined aerosol/CO2 sensitivities is able to fit the temperature of the past century. Even an unrealistic tenfold increase of (H&S) solar (see Fig. 1 in Stott ea.) fits the temperature trend to an acceptable level, if one reduces the sensitivity for CO2/aerosols far enough…
Current models can also reproduce other transitions (LGM-Holocene) with reasonable accuracy, but this is mainly in periods where there is a huge overlap between temperature (as initiator) and CO2/CH4 levels (as feedback). I am very curious whether the same models with the same parameters also reproduce the Eemian to 110,000 years before present period, where there is an almost total separation of the temperature and CO2 trends…
Ferdinand Engelbeen says
Re #33:
Lynn, the increase of temperatures in the Arctic is mainly the result of an inflow of warmer air from lower latitudes (with the current AO) and the change in albedo (mainly in summer). The influence of greenhouse gases is one order of magnitude lower in this case. The interesting part is that more clouds in summer as well as fewer clouds in winter both act as negative feedbacks: less warming in summer with more clouds reflecting the sunlight, and more cooling in winter from fewer clouds allowing more heat to escape to space. So much so that there is a cooling temperature trend in winter, large enough to refreeze almost all the ice that was melted in the other seasons.
What will happen if the AO changes is an open question: on the one hand there may be less inflow of warmer air; on the other hand, this may result in opposite changes in cloud cover…
About natural variability and sensitivity for man-made GHGs, here I disagree with Raypierre in another (large) comment…
[Response: It is not at all established that the Arctic warming is due to the AO. For that matter, even if the AO is part of the Arctic climate change, one has to face the possibility that changes in GHG are affecting the AO, a point made by Palmer and Molteni. As for the points Ferdinand makes in his (large) comment, I still contend that Ferdinand is misinterpreting the work on climate sensitivity to various forcings, and the need to make the sensitivity inference consistent with what we know about the physics of the system. Even if it could be shown that climate is more sensitive to solar variability than the strict radiative forcing would suggest (along the lines of Shindell et al) one would still have to contend with the fact that we know the solar variability for the past fifty years quite well, and it does not do the kind of things necessary to give the present warming pattern. This is why Stott et al conclude that “Nevertheless the results confirm previous analyses showing that greenhouse gas increases explain most of the global warming observed in the second half of the twentieth century,” DESPITE their indications that HadCM3 underestimates the observed response to solar forcing. (Note also that Stott et al isn’t the final word on solar sensitivity, since their method doesn’t guarantee that what they are calling “solar response” is actually solar response and not simply something else that happens to be correlated with the solar cycle.) I also dispute the claim that there is a significant association between low clouds and cosmic rays. The analysis purporting to show this correlation is so highly suspect as to border on worthless (see Damon and Laut, Eos, Vol. 85, No. 39, 28 September 2004). –raypierre]
Ferdinand Engelbeen says
Re #34,
Joel, if you are convinced that there is a huge influence of GHGs on temperature, then Kyoto indeed is peanuts and one needs to reduce CO2/CH4 emissions to near zero within a few decades to prevent disaster. That will not be obtained by buying a Prius or by other low-to-medium cost measures in factories (many energy-intensive factories learned to be economical in the seventies, as a matter of survival). Promoting (if you wish, crash) research into all alternatives for generation (solar, geothermal, …) and cost-effective storage of energy, plus subsidies for (private) installations, will have far more effect.
With the current Kyoto, any energy-intensive factory will simply move out to developing countries if the cost of energy, due to taxes or carbon credits, is too high. Because of the difference in efficiency and emissions, the net effect will be more CO2 and more pollution…
Btw, how much of the current and emerging (European) energy taxes is/will be used for alternatives research or subsidies for installations?
Hans Erren says
re 35:
correction:
T(w) = F(w) x H(w,k,T)
and
T(t) = F(t) * H(t,k,T)
where x denotes multiplication and * denotes convolution.
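As a quick numerical check of the equivalence between these two forms, here is a self-contained sketch with made-up series (not climate data), assuming a fixed linear H that does not depend on T:

```python
# Sketch of the identity behind the two forms: convolving a forcing series F(t)
# with a response function H(t) in the time domain equals multiplying their
# transforms in the frequency domain. F and H are arbitrary toy series.
import numpy as np

n = 256
t = np.arange(n)
rng = np.random.default_rng(0)
F = np.sin(2 * np.pi * t / 32) + 0.3 * rng.standard_normal(n)   # toy "forcing"
H = np.exp(-t / 10.0)                                           # toy response (fading memory)
H /= H.sum()                                                    # normalize to unit area

# Time domain: T(t) = F(t) * H(t)   (convolution)
T_time = np.convolve(F, H)                                      # full linear convolution, length 2n-1

# Frequency domain: T(w) = F(w) x H(w)   (multiplication), transformed back
nfft = 2 * n - 1
T_freq = np.real(np.fft.ifft(np.fft.fft(F, nfft) * np.fft.fft(H, nfft)))

print(np.allclose(T_time, T_freq))                              # True
```

The zero-padding to length 2n-1 is what makes the frequency-domain product match the linear, rather than circular, convolution.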
However, as H is unknown and derived from T (attribution), we are faced with the well-known problem of identifiability in closed-loop systems.
ref:
Identification of Closed Loop Systems – Identifiability, Recursive Algorithms and Application to a Power Plant, Henk Aling, 1990, Dissertation, Delft University.
This highly mathematical study tries to find constraints when an event in a power plant, say a pressure wave, can be traced to a source fluctuation (fuel or oxygen).
One of his conclusions:
“In practise the estimated covariance function of the joint output/input signal obtained by a closed loop experiment will NEVER have the structural properties associated with the feedback system. This is due to the finiteness of the dataset, model structure mismatch and other circumstances by which the ideal assumptions, used in the derivation of the identifiability results, are violated.”
In other words, closed loop systems contain signals that cannot be attributed to a given forcing.
For more on identifiability, see the work of Kitoguro Akaike.
http://www.ism.ac.jp/~kitagawa/akaike-epaper.html
Tom Rees says
Ferdinand: The Boer & Yu, 2003 paper shows that the correlation between the pattern of aerosol forcing and the pattern of temperature response has only 20% covariance, and that the covariance of the response to GHG and aerosol forcing is >60%. On your page, you show the results for HADCM3 aerosol and ozone (actually the difference between total forcing and GHG forcing, but it should be approximately the same). The similarities with GHG forcing are clear – mostly over the N hemisphere, mostly over land, and with polar amplification. The biggest effect is in the Barents Sea. This shows that the temperature response, even to a geographically defined forcing such as aerosols, shows little overlap with the spatial pattern of the forcing itself – although of course there is some overlap.
You say that “The huge change in SO2 emissions in Europe should be measurable, according to runs of the Hadcm3 model for the period 1990-1999, but it is not…”. However, the real world is not aerosol and ozone only. To check whether the model is accurate, you need to either strip out the other forcing effects (GHG etc) from the real world data (which we can’t), or add the other forcings into the model. When you do this, there is no major anomaly – see Stott et al, 2000. The model does not, in fact, predict the major negative anomaly that you say it does.
Regarding the feedbacks to different forcings: the models can and do show different responses to equivalent forcings from different sources. For example, the response to solar forcing in HADCM3 is 50% of the response to an equivalent GHG forcing (see Lambert et al, 2004 – last page). This is at the low end of the range. So, when Stott et al use HADCM3 to show that solar forcing may be underestimated, is this a revelation about models in general or just about HADCM3? Another point: although it’s true to a first approximation that forcings are linearly additive, it does not always hold true – e.g. see Meehl et al 2003. Finally, although there is uncertainty about solar forcing, this is also true for GHG and other forcings. Furthermore, the pattern of solar and volcanic forcing is uncertain (e.g. Hoyt vs Lean for solar, Robertson vs Crowley for volcanic). The temperature responses in reconstructions from the past millennium can be reproduced (approximately) without inferring novel solar mechanisms: therefore, they cannot be used as evidence for novel solar mechanisms.
Tom Rees says
Ferdinand, regarding climate models of the eemian, see: http://www.uni-mainz.de/FB/Geo/Geologie/sedi/Deklim/kaspar.pdf
Ferdinand Engelbeen says
Re comment on #37:
Raypierre, according to Wang and Key in Science (unfortunately under subscription):
“Are these changes due to large-scale advective processes rather than to local radiative effects? The correlation between surface temperature and the Arctic Oscillation (AO) index (18), which can be used to represent large-scale circulation patterns, is shown in Fig. 5. The correlations are as expected: positive in northern Europe and northern Russia but negative over Greenland and northern Canada. Given the increasing cooling effect of clouds found here, the rise in surface temperature is clearly related to large-scale circulation.”
Of course, there may be a change in the AO index due to GHGs, maybe even as large as the influence of solar on the AO… It remains to be seen what will happen with temperatures and cloud cover if the AO index changes.
For cloud cover and the solar cycle, this is not about cosmic rays and cloud cover, but about solar radiation (in general) and cloud cover. From Kristjanson ea. (caption of Fig. 2):
“Significance level of correlations: 67% for cosmic rays and low clouds, 98% for solar irradiance and low clouds… …30% for cosmic rays and [daytime] low clouds, 90% for solar irradiance and [daytime] low clouds.”
Further discussion about sensitivity in a response to several interesting points made by Tom Rees…
Hans Erren says
The amazing thing about the Eemian is that winter temperatures in Europe were comparable to 20th-century values but summer temperatures were 4 degrees higher, yet rivers kept flowing during summer.
ref:
G. Russell Coope, 2000, The climatic significance of coleopteran assemblages from the Eemian deposits in southern England, Geologie en Mijnbouw / Netherlands Journal of Geosciences 79 (2/3): 257-267
http://www.nitg.tno.nl/eng/products/pub/njg/download_0001/257-268abs.pdf
Pat Neuman says
re 43.
Not amazing to me. Thermohaline Circulation?
Ferdinand Engelbeen says
Re #40,
Sorry, this is a long response, but a rather fundamental discussion of the validity of sensitivities and forcings used in current climate models…
Tom, if you compare different models for the regional distribution of the anthropogenic aerosol forcing and/or temperature response, no two models agree with each other. See the Hadcm3 model response to aerosols here, and compare that to the Canadian model Fig. 2 fa and Ta and Fig. 3, the Japanese model Fig. 3 (http://cfors.riam.kyushu-u.ac.jp/~toshi/research.html), Hansen ea. Fig. 3 for sulphate aerosols and, last but not least, what the IPCC expects in Fig. 6.7(d) and (h). Not directly convincing for the reliability of the regional resolution of the models…
[Response: Or a clear demonstration that regional climate is not controlled by purely regional forcing. This is indeed a statement about the predicitability of regional climate (still a cutting edge topic), but your feeling that there must be a strong link is not based on any actual studies. – gavin]
The expected global average direct + indirect forcings for aerosols vary between -1.0 (Japan) and -1.4 W/m2 (Hansen, IPCC) for the past centuries, and -0.9 to -1.3 W/m2 for future (2050, 2100) emissions (Canada). The Canadian model suppresses the influence of aerosols in the regional distribution far more, as the direct forcing of GHGs increases to 3.3 and 5.8 W/m2 for 2050 and 2100 respectively, against 2.3 W/m2 in the other models, which use past and current emissions.
The Hadcm3 model calculates the largest increase in temperature that may be attributed to the reduction of aerosol load (40%) over the period 1990-1999 somewhere in NE Europe; other models put it more in Southern Europe. Anyway, the sum of aerosol decrease, GHG increase and positive NAO (all with a warming effect for W, NW and NE Europe) in the same period is only visible in the West and North European temperature trends as a stepwise change in 1990 with no trend thereafter. This clearly points to the stepwise change in the NAO. As far as I remember, wasn’t the acid rain (acids formed from SO2 emissions in rainwater) in Scandinavia caused by industry in England, thanks to the prevailing SW winds? And as tropospheric aerosols have an average lifetime of only 4 days before raining out, the influence must be at and near the sources… Thus what is the real influence of aerosols?
This all points to a very low sensitivity for or a low forcing of aerosols. Consequently a lower sensitivity for CO2 forcing…
[Response: Illogical. The model you cite has similar sensitivity to both aerosols and CO2; how you can conclude its results are right for one and wrong for the other makes no sense.]
Stott ea. 2003 is similar to Stott ea. 2000, except that they used a large forcing for solar (10x) and volcanic (5x) in separate runs to see if the relative influence of both might need to be adjusted, as the Hadcm3 model possibly underestimates the relatively weaker forcings, which was what they discovered for solar. The problem with this test is exactly the constraint of a fixed aerosol forcing trend and a fixed sensitivity… Without that, the adjustment for solar on one side and GHGs/aerosols on the other side might have been much larger, while maintaining the same (or a better) result.
The climate sensitivity for solar (for the same forcing) in the Hadcm3 model is indeed only 50% of that for other forcings in the same model. This is in contrast to the model that Hansen ea. 1997 used, where the general variation in sensitivity is within 20% (but what about other models?), and contrary to what Raypierre expected (all forcings having the same sensitivity). Moreover, from the Hansen ea. 1997 abstract:
and
Sounds like differentiated solar influences (in how far were these included in the GCM that Hansen used?)…
[Response: Different forcings can have different impacts (which can be measured by the efficacy – Hansen et al, 2005), and to some extent that is model dependent. But the differences are not by orders of magnitude, more like a few tens of percent at most.]
Further from Lambert ea. 2004:
Again solar influences, linked to clouds and precipitation…
[Response: What? He is talking about short wave surface forcing not ‘solar’ forcing at the TOA. ]
Further, I totally agree with you that there are a lot of unknowns in forcings as well as in sensitivities. With the current uncertainty, one can fit the past with different sets of forcings and sensitivities, making any prediction of the future rather questionable. Therefore we urgently need a more accurate reconstruction of climate in the pre-industrial millennium, to get rid of the large historical variance of solar forcing/sensitivity of about 1:9, depending on the chosen reconstruction. That has nothing to do with the invention of some novel solar mechanism (although the exact mechanism is not known), but with the implementation in the models of the observed changes in clouds as a result of solar changes. The discovery of the exact mechanism (probably along the lines mentioned by Hansen) may be just a question of time.
[Response: The chances that models are underestimating solar forcing by an order of magnitude are very, very slim. What actual evidence is there for this? The ‘argument from personal incredulity’ is not a sound basis. – gavin]
Re #41:
Wow! Nice to see that the post-Eemian cooling indeed was possible without the help of CO2. It is a pity that they stopped the simulation before the CO2 decrease (111,000-106,000 BP), to see what the model produces for further cooling, compared to reality…
Ferdinand Engelbeen says
Re #44,
I thought that some models predict a reduced THC, even a shutdown, as a result of higher temperatures? But have a look at the Kaspar and Cubasch simulation and the reconstructed European Eemian temperature distribution (link thanks to Tom Rees); it looks more like a very strong NAO (AO/AMO?), albeit shifted more to the East.
Kooiti Masuda says
Re: #39 (Hans Erren):
> For more on identifiability, see the work of Kitoguro Akaike.
> http://www.ism.ac.jp/~kitagawa/akaike-epaper.html
It seems that you mixed up names of three mathematicians of the same group, Hirotugu Akaike, Genshiro Kitagawa and Makio Ishiguro. Prof. Kitagawa keeps records of the retired Prof. Akaike. The subject of Akaike’s works is essentially linear autoregressive models of time series. He found applications in controlling chemical plants. Autoregressive models are also found useful in studies of oscillations of the solid earth, but (in my opinion) not so much in atmospheric science. (I once hoped to apply them but gave up.) It is probably because oscillations of solids have discrete power spectra in the frequency domain while atmospheric phenomena have continuous spectra.
I agree that the issue of estimating climate sensitivity is conceptually something like “identifying” H from F and T in your formula. And, since the system function (H) is likely to be dependent on time scales, thinking in the frequency domain seems a good idea (especially in modeling studies where F can be specified in known forms), though we must also anticipate complications due to the essential nonlinearity of the system. I feel, as you do, that it might not yield useful information from observations where F is not known precisely.
Now let’s go back to the original thread, everyone, equipped with Hans Erren’s formula (in a generalized sense). If the situation is as simple as T = F H, and F does not change and T turns out to be larger, then H also turns out to be larger. This is a paraphrase of Raypierre’s story. In a similarly simplified way, Esper’s story may be like this: T_1 = F_1 H and T_2 = (F_1 + F_2) H, and T_1 turns out to be larger but T_2 and F_2 do not change. Then it is likely (though also dependent on actual numerical values) that F_1 turns out to be larger and H smaller.
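To put toy numbers on these two simplified stories (arbitrary values, chosen only to show which way the inferred quantities move):

```python
# Toy arithmetic for the two simplified stories above; all values are invented.

# Raypierre's story: T = F * H with F unchanged.
F = 1.0
for T in (0.4, 0.6):                 # past variability revised upward
    print(f"T={T}: H={T / F:.2f}")   # H rises with T (higher inferred sensitivity)

# Esper-style story: T1 = F1 * H and T2 = (F1 + F2) * H,
# with T2 (modern change) and F2 (added modern forcing) held fixed.
T2, F2 = 0.8, 0.6
for T1 in (0.4, 0.6):                # past variability revised upward
    F1 = F2 * T1 / (T2 - T1)         # from eliminating H between the two equations
    H = T1 / F1
    print(f"T1={T1}: F1={F1:.2f}, H={H:.2f}")   # F1 rises, H falls
```

Raising the past variability raises H in the first story but lowers it in the second, which is exactly the fork in the road the thread has been arguing about.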
Jurgen Hubert says
One claim that I have come across frequently is that volcanic eruptions influence the climate much more than human activity. So what is the impact of volcanic activity on the climate in relation to human activity? I searched this site, but couldn’t find any satisfactory answers…
Ferdinand Engelbeen says
Re comment on #45:
Gavin, here follow some reactions and clarifications on your comment…
– If there is a substantial forcing by sulphate aerosols, it is concentrated in three main areas (at least for the direct forcing). The global average change of >1 W/m2 thus corresponds to a much larger change in those smaller areas. Forcing changes of similar magnitude, due to water vapour variations, are measurable as regional temperature changes in Europe (see Philipona), but aerosol changes are not…
– I agree that both CO2 and sulphate aerosols have similar sensitivity, but with opposite sign. Thus if one of them has a lower sensitivity (or in the case of aerosols, a lower forcing), the other one must follow, even if it is only to match the 1945-1975 temperature trend. See your own work and that of Andreae…
– That different forcings can have different impacts is exactly the origin of the discussion. Some don’t believe that there may be different climate responses for equal forcings. The GISS model finds an efficacy for solar of 0.92, but doesn’t consider the secondary responses to solar changes. In the Hadcm3 model, the doubling of the (too low) sensitivity for solar leads to a 20% decrease of the sensitivity for GHGs. Thus a manifold increase of the smaller forcings is not unthinkable…
– From Lambert ea. (page 4):
“The solar forced run exhibits a larger precipitation response per degree of warming than the CO2 forced run, as expected from the theory outlined earlier in this section, even though the precipitation response [note: this must be the temperature response] per unit forcing is smaller than for CO2.”
The run was done with TOA solar changes. And again with the (uncorrected!) Hadcm3 model, which has half the sensitivity for solar forcing compared to CO2 forcing…
– I don’t think that the influence of solar is an order of magnitude larger than incorporated in current models. The 1:9 variation in the effect of solar (0.1 K in MBH, 0.9 K in borehole reconstructions, 0.1 K for volcanic in both) is simply the result of different millennium reconstructions. Thus it is very important to know what the real impact of historical solar changes is, as 0.1 K in the past results in a climate sensitivity for anthropogenic forcing at the high end, while 0.9 K results in a very low effect of anthropogenic forcing, if the instrumental temperature trend of the last 1.5 centuries is used as reference. I expect that reality is somewhere in between those two extremes, towards the lower end…
Urs Neu says
There is an aspect that hasn’t been addressed until now: Esper et al. are talking about “a redistribution of weight towards the role of natural factors in forcing temperature changes”. This conclusion does not seem meaningful as a general statement. The weight of the different factors on temperature changes always depends on the specific time period and timescale you are observing. A general time-independent statement would only be reasonable if you were comparing the largest possible influence on temperature, i.e. the largest possible temperature change a factor can produce, and these are neither known nor discussed to my knowledge (even then, it would be timescale dependent).
An active forcing through a certain forcing factor at a given period and on a given time scale can only be expected if this factor changes its influence at that time and on that timescale. The temporal behaviour of most factors is very unsteady.
When drawing conclusions from a possibly higher temperature variability during the last millennium about the weight of different forcing factors, we therefore have to consider the time period (and possibly the timescale) we are talking about and the corresponding forcings of the different factors.
An enhanced variability of temperature during the last millennium, as suggested by the work of Esper, Moberg, etc., is mainly related to the time frame 1000-1900 and the centennial timescale. The anthropogenic influence, the Kyoto protocol and future projections mainly concern the period 1950-2100 and the decadal to multi-decadal timescale. The difference in timescale is minor, but it’s a different time period.
I think we agree that the forcing from anthropogenic GHG concentrations starts at about 1800 and then increases, reaching its strongest influence after about 1950 (more or less steady GHG increase since 1980). Thus the GHG forcing before 1800, for most of the Esper/Moberg time frame, is near zero on the multi-decadal timescale, because the concentrations had hardly changed. The GHG forcing only started to become important at the very end of the Esper/Moberg period. The weight of man-made GHG forcing is therefore very low (a few percent at most) and the weight of natural forcings is near 100 percent for this period.
However, for the period 1950-2000, there seems to be a consensus that natural forcing (solar, volcanic) as a whole is near zero or slightly negative (IPCC 2001) on the multi-decadal timescale. The weight of natural factors is therefore also near zero, whereas the weight of anthropogenic forcing (GHG minus aerosols) is very high. If the forcing due to a certain change in solar and/or volcanic activity were higher than previously assumed, this wouldn’t change the weight of these factors much, either for the Esper/Moberg period or for the period since 1950. This is independent of the question whether the sensitivity to natural and anthropogenic forcings might be different or not. If the factor doesn’t change its properties, there is no forcing.
Moreover, if the natural forcing since 1950 has been slightly negative, an enhanced natural forcing would mean that the anthropogenic forcing must be greater than expected to explain the observed warming. And this would mean that future warming has been underestimated. The opposite, an overestimation of future warming, could only be suggested if the natural forcing since 1950 had been positive. However, the work of Esper, Moberg, von Storch, etc. has nothing to do with changes in the natural forcing factors during the last decades and therefore does not alter the corresponding IPCC findings.
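A few lines of arithmetic illustrate this sign argument (the numbers are invented for illustration, not IPCC values): with the recent warming held fixed, making the natural forcing since 1950 more negative pushes the implied response to the anthropogenic part upward, not downward.

```python
# Toy illustration of the sign argument above; all numbers are invented.
DT_OBS = 0.5      # assumed observed warming since 1950, K
F_ANTH = 1.5      # assumed net anthropogenic forcing since 1950 (GHG minus aerosols), W/m^2

for f_nat in (0.0, -0.1, -0.3):              # natural forcing revised to be more strongly negative
    response = DT_OBS / (F_ANTH + f_nat)     # implied warming per unit anthropogenic forcing
    print(f"natural forcing {f_nat:+.1f} W/m^2 -> {response:.2f} K per W/m^2")
```

An overestimate of future warming would require the recent natural forcing to have been positive, which is the point made in the paragraph above.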
As long as the latter findings are upheld, I can’t see any logical reasoning by which the detection of enhanced variability before the instrumental period would lower the projected future warming.
[Response: Good points…. – gavin]