One of the central tasks of climate science is to predict the sensitivity of climate to changes in carbon dioxide concentration. The answer determines in large measure how serious the consequences of global warming will be. One common measure of climate sensitivity is the amount by which global mean surface temperature would change once the system has settled into a new equilibrium following a doubling of the pre-industrial CO2 concentration. A vast array of thought has been brought to bear on this problem, beginning with Arrhenius’ simple energy balance calculation, continuing through Manabe’s one-dimensional radiative-convective models in the 1960’s, and culminating in today’s comprehensive atmosphere-ocean general circulation models. The current crop of models studied by the IPCC ranges from an equilibrium sensitivity of about 1.5°C at the low end to about 5°C at the high end. Differences in cloud feedbacks remain the principal source of uncertainty. There is no guarantee that the high end represents the worst case, or that the low end represents the most optimistic case. While there is at present no compelling reason to doubt the models’ handling of water vapor feedback, it is not out of the question that some unanticipated behavior of the hydrological cycle could make the warming somewhat milder or, on the other hand, much, much worse. Thus, the question naturally arises as to whether one can use information from past climates to check which models have the most correct climate sensitivity.
In this commentary, I will discuss the question "If somebody were to discover that climate variations in the past were stronger than previously thought, what would be the implications for estimates of climate sensitivity?" Pick your favorite time period (Little Ice Age, Medieval Warm Period, Last Glacial Maximum or Cretaceous): the issues are the same. In considering this question, it is important to keep in mind that the predictions summarized in the IPCC reports are not the result of some kind of statistical fit to past data. Thus, a revision in our picture of past climate variability does not translate in any direct way into a change in the IPCC forecasts. These forecasts are based on comprehensive simulations incorporating the best available representations of basic physical processes. Of course, data on past climates can be very useful in improving these representations. In addition, past data can be used to provide independent estimates of climate sensitivity, which provide a reality check on the models. Nonetheless, the path from data to change in forecast is a subtle one.
Climate doesn’t change all by itself. There’s always a reason, though it may be hard to ferret out. Often, the proximate cause of the climate change is some parameter of the climate system that can be set off from the general collective behavior of the system and considered as a "given," even if it is not external to the system strictly speaking. Such is the case for CO2 concentration. This is an example of a climate forcing. Other climate forcings, such as solar variability and volcanic activity, are more clearly external to the Earth’s climate system. In order to estimate sensitivity from past climate variations, one must identify and quantify the climate forcings. A large class of climate forcings can be translated into a common currency, known as radiative forcing. This is the amount by which the forcing mechanism would change the top-of-atmosphere energy budget, if the temperature were not allowed to change so as to restore equilibrium. Doubling CO2 produces a radiative forcing of about 4 Watts per square meter. The effects of other well-mixed greenhouse gases can be accurately translated into radiative forcings. Forcing caused by changes in the Sun’s brightness, by dust in the atmosphere, or by volcanic aerosols can also be translated into radiative forcing. The equivalence is not so precise in this case, since the geographic and temporal pattern of the forcing is not the same as that for greenhouse gases, but numerous simulations indicate that there is enough equivalence for the translation to be useful.
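To make these numbers concrete, here is a minimal sketch (added for illustration, not part of the original argument) using the standard simplified fit of Myhre et al. (1998), ΔF ≈ 5.35 ln(C/C0) W/m²:

```python
import math

def co2_radiative_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing fit (Myhre et al. 1998), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_radiative_forcing(560.0))  # doubling: ~3.7 W/m^2, i.e. "about 4"
print(co2_radiative_forcing(180.0))  # LGM drawdown (discussed below): ~-2.4 W/m^2
```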
Thus, an estimate of climate sensitivity from past data requires an estimate of the magnitude of the past climate changes and of the radiative forcings causing the changes. Both are subject to uncertainties, and to revisions as scientific techniques improve. The mechanical analogy in the following little parable may prove helpful. Down in the dark musty store-rooms of the British Museum, you discover a mysterious box with a hole in the top through which a rod sticks out. The rod supports a platform which has a 1 kilogram brick on it, but the curator won’t let you fuss with the brick, lest something break. For various reasons, though, people in the Museum are thinking of adding a second 1 kg brick to the platform, and you’ve been hired by the Queen to figure out what will happen. Though you can’t mess with the device yourself, you notice that every once in a while a mouse jumps down onto the brick; when this happens, the platform goes down a little bit, after which it returns to its original level without oscillating. From this you infer that there’s some kind of spring in the box, sitting in molasses or something like that, with enough friction to damp out oscillations. Your job amounts to estimating how stiff the spring in the box is, without being allowed to take apart the box or perform any experiments on it. If the spring is very stiff, then putting another brick on the platform won’t cause the platform to sink much further. If the spring is very soft, however, the second brick will cause the platform to go down a great deal, perhaps causing something to break. The displacement of the platform is analogous to global mean temperature, and the stiffness of the spring is analogous to (the inverse of) climate sensitivity: the softer the spring, the more sensitive the system.
Now, the unfortunate thing is that the mice are too light and come along too infrequently for you to get a good estimate of the stiffness of the spring by just watching the response of the platform to mice jumping on it. However, from looking through other dusty records elsewhere in the basement of the British Museum, you discover some notes from an earlier curator, who had also observed the box. He notes that there used to be big, heavy rats in the Museum basement, and has written down some things about what happens when the rats jump on the platform. From indirect evidence, like footprints in the dust, size of rat droppings, shed fur, plus some incomplete notes left behind by the rat catcher, you infer that the typical rat weighed a quarter kilogram. Now, the curator has left behind some notes about how much the platform drops when a rat jumps onto it from the shelf just above the platform. Unfortunately, the curator was a scholar of Old Uighur, and left behind his notations in the Old Uighur numeration system so his rivals couldn’t read it. Also unfortunately, the curator died before publishing his explanation of the Old Uighur numeration system, and that has been lost to time. Using the same Uighur wheat production records available to the curator, you estimate that his notes mean that the typical displacement is 10 centimeters per rat. From this you estimate that the stiffness of the spring is such that a 1 kilogram brick would cause a 40 centimeter displacement of the platform. Things are looking good. You get paid a handsome sum. Then, one day, to your horror, you open a journal of Uighur studies and find a lead article proving that everybody has been interpreting Uighur wheat production records wrong, and that all previous estimates of what the Uighur numbers mean were off by a factor of two. That means that while you thought the typical displacement of the platform was 10 centimeters per rat, the "natural variability" caused by rats jumping on the platform is much greater than you thought. It was actually 20 centimeters, using the new interpretation of the Uighur numbering system. Does that mean you ring up the Museum and say, "I was all wrong — the natural variability was twice what we thought, so it is unlikely that adding a new brick to the platform will cause as much effect as I told you last year!" No, of course you don’t. Since you have no new information about the weight of the rats, the correct inference is that the spring in the box is softer than you thought, so that the predicted effect of adding a brick will be precisely twice what you used to think, and more likely to break something. However, being a cautious chap, you also entertain the notion that maybe the displacement of the platform was more than you thought because the rats were actually fatter than you thought; that would imply less revision in your estimate of the stiffness of the spring, but until you get more data on rat fatness, you can’t really say. If you think all this is obvious, please hold the thought in mind, and bring it back when, towards the end of this commentary, I tell you what Esper et al. wrote in an opinion piece regarding the implications of natural variability observed over the past millennium.
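The parable's arithmetic can be spelled out in a few lines (a sketch added here, assuming a perfectly linear spring):

```python
# Hooke's-law version of the parable: displacement = compliance * mass.
rat_mass_kg = 0.25      # inferred from droppings, fur and the rat catcher's notes
brick_mass_kg = 1.0

# Original reading of the curator's Old Uighur notes: 10 cm per rat.
compliance = 0.10 / rat_mass_kg            # 0.4 m per kg
print(compliance * brick_mass_kg)          # 0.4 m: the original 40 cm estimate

# Revised reading: all the Uighur numbers were a factor of two larger.
compliance_revised = 0.20 / rat_mass_kg    # 0.8 m per kg
print(compliance_revised * brick_mass_kg)  # 0.8 m: the predicted effect doubles
```

Only new information about the rats' weight (the forcing), not about the displacement (the response), would leave the inferred compliance unchanged.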
The Last Glacial Maximum (i.e. the most recent "ice age", abbreviated LGM) probably provides the best opportunity for using the past to constrain climate sensitivity. The climate changes are large and reasonably well constrained by observations. Moreover, the forcing mechanisms are quite well known, and one of them is precisely the same as will cause future climate changes. During the LGM, CO2 dropped to 180 parts per million, as compared to pre-industrial interglacial values of about 280 parts per million. Depending on just what you assume about cloud and water vapor distributions, this yields a radiative forcing of about -2.5 Watts per square meter. Global mean temperatures dropped by about 7°C at the LGM. Does this mean that the true climate sensitivity is (7/2.5) = 2.8°C per (Watt per square meter)? That would indicate a terrifying 11.2°C warming in response to a doubling of CO2. Fortunately, this alarming estimate is based on faulty reasoning, because there is a lot more going on at LGM time than just the change in CO2. Some of these things are feedbacks like water vapor, clouds and sea ice, which could reasonably be presumed to be relevant to the future as well as the past. Other forcings, including the growth and decay of massive Northern Hemisphere continental ice sheets, changes in atmospheric dust, and changes in the ocean circulation, are not likely to have the same kind of effect in a future warming scenario as they did at glacial times. In estimating climate sensitivity, such effects must be controlled for and subtracted out, to yield the portion of climate change attributable to CO2. Broadly speaking, we know that it is unlikely that current climate models are systematically overestimating sensitivity to CO2 by very much, since most of the major models can get into the ballpark of the correct tropical and Southern Hemisphere cooling when CO2 is dropped to 180 parts per million. No model gets very much cooling south of the Equator without the effect of CO2. Hence, any change in model physics that reduced climate sensitivity would make it much harder to account for the observed LGM cooling. Can we go beyond this rather vague statement and use the LGM to say which of the many models is most likely to have the right climate sensitivity? Many groups are working on this very question right now. Progress has become possible only recently, with the availability of a few long-term coupled atmosphere-ocean simulations of the LGM climate. Time will tell how successful the program turns out to be, but you can be sure that RealClimate is monitoring the pulse of these efforts very closely.
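The faulty estimate above amounts to the following short calculation (illustrative only; the error is in attributing all the LGM cooling to CO2):

```python
lgm_cooling_K = 7.0     # approximate global mean cooling at the LGM
co2_forcing = -2.5      # W/m^2, CO2 drawdown from 280 to 180 ppm
forcing_2xco2 = 4.0     # W/m^2, "about 4" per doubling of CO2

naive_sensitivity = -lgm_cooling_K / co2_forcing   # 2.8 K per (W/m^2)
print(naive_sensitivity * forcing_2xco2)           # 11.2 K per doubling: far too high

# The correct procedure first subtracts the forcing from ice sheets, dust and
# ocean circulation changes, so that only the residual cooling is divided by
# the CO2 forcing.
```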
However that shakes out, if somebody were to wake me up in the middle of the night tomorrow and tell me that LGM tropical temperatures were actually 6°C colder than the present, rather than the 3°C I currently think, my immediate reaction would be "Gosh, the climate sensitivity must be much greater than anybody imagined!" That would be the correct reaction, too, because the rude awakener didn’t suggest anything about revisions in the strength of the forcing mechanisms. Indeed, this is the very reasoning used, in reverse, by Dick Lindzen in the late 1980’s, for that decade’s flavor of his argument for why CO2 increase is nothing to worry about. At the time, the prevailing climate reconstruction (CLIMAP) indicated that there was little reduction in tropical surface temperature during the LGM. Making use of mountain snow line data indicating larger temperature changes at altitude, Lindzen proposed a new kind of model of the tropical response, which fit the CLIMAP data and indicated very low sensitivity to CO2 increases in the future. When the CLIMAP data proved to be wrong, and were replaced by more reliable estimates showing a substantial tropical surface temperature drop, Lindzen had to abandon his then-current model and move on to other forms of mischief (first the "cumulus drying" negative water vapor feedback mechanism, since abandoned, and now the "Iris" effect cloud feedback mechanism).
Now, how about the Holocene, including the Little Ice Age and Medieval Warm Period that seem to figure so prominently in many skeptics’ tracts? This is a far harder row to hoe, because the changes in both forcing and response are small and subject to large uncertainties (as we have discussed in connection with the "Hockey Stick" here). What we do know is that the proposed forcing mechanisms (solar variability and mean volcanic activity) are small. Indeed, the main quandary faced by climate scientists is how to estimate climate sensitivity from the Little Ice Age or Medieval Warm Period at all, given the relatively small forcings over the past 1000 years, and the substantial uncertainties in both the forcings and the temperature changes. The current picture of Holocene climate variations is based not just on tree ring data, but on glacial mass balance and a wide variety of other proxy data. If this state of knowledge were to be revised in such a way as to indicate that the amplitude of the climate variations was larger than previously thought, that could very well call for an upward revision of climate sensitivity.
Indeed, quantitative studies of the Holocene climate variations invariably support this notion (e.g. Hegerl et al., Geophys. Res. Lett., 2003, or Andronova et al., Geophys. Res. Lett., 2004). Such studies can reasonably account for the observed variations as a response to solar and volcanic forcing (and a few secondary things) with energy balance climate models tuned to have a climate sensitivity equivalent to 2.5°C per doubling of CO2. If the estimates of observed variations were made larger, a greater sensitivity would then be required to fit the data. Ironically, even arch-skeptics Soon and Baliunas, who would like to lay most of the blame for recent warming at the doorstep of solar effects, came to a compatible conclusion in their own energy balance model study. Namely, any model that was sensitive enough to yield a large response to recent solar variability would yield an even larger response to radiative forcing from recent (and therefore also future) CO2 changes. As a result, their "best fit" climate sensitivity for the twentieth century is comfortably within the IPCC range. This aspect of their work is rarely if ever mentioned by the authors themselves, and still less in citations of the work in skeptics’ tracts such as that distributed with the "Global Warming Petition Project."
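The energy balance models used in such studies are, at heart, only a few lines long. Here is a hypothetical minimal version (with made-up forcing inputs, not the actual Hegerl or Andronova setup): a heat capacity C and a feedback parameter λ fixed by the assumed sensitivity, driven by a prescribed forcing history:

```python
import numpy as np

def ebm_response(forcing, dt=1.0, sensitivity_per_2xco2=2.5, heat_capacity=10.0):
    """Zero-dimensional energy balance model: C dT/dt = F(t) - lam * T.

    heat_capacity is in W yr m^-2 K^-1 (an ocean mixed layer is of this order);
    lam follows from the assumed equilibrium sensitivity per CO2 doubling.
    """
    lam = 3.7 / sensitivity_per_2xco2          # W m^-2 K^-1
    T = np.zeros(len(forcing))
    for i in range(1, len(T)):
        T[i] = T[i-1] + dt * (forcing[i-1] - lam * T[i-1]) / heat_capacity
    return T

# Made-up forcing: a weak slow "solar" variation plus one volcanic pulse.
years = np.arange(1000)
F = 0.2 * np.sin(2 * np.pi * years / 200.0)
F[500] = -3.0
T = ebm_response(F)
# If the reconstructed temperature swings were revised upward, fitting them
# with the same forcing would require a larger sensitivity_per_2xco2.
```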
This brings us to the claims made recently by Esper et al. In an opinion piece in Quaternary Science Reviews (J. Esper, R. J. S. Wilson, D. C. Frank, A. Moberg, H. Wanner and J. Luterbacher, "Climate: past ranges and future changes," Quat. Sci. Rev. 24, 2005) they outlined the uncertainties in knowledge of the amplitude of Holocene climate variations, and also a strategy for reducing the uncertainties. We at RealClimate could hardly object to that. Better estimates of the Holocene variability would be of unquestioned value. However, Esper et al. concluded their piece with the statement:
- "So, what would it mean, if the reconstructions indicate a larger (Esper et al., 2002; Pollack and Smerdon, 2004; Moberget al., 2005) or smaller (Jones et al., 1998; Mann et al., 1999) temperature amplitude? We suggest that the former situation, i.e. enhanced variability during pre-industrial times, would result in a redistribution of weight towards the role of natural factors in forcing temperature changes, thereby relatively devaluing the impact of anthropogenic emissions and affecting future predicted scenarios. If that turns out to be the case, agreements such as the Kyoto protocol that intend to reduce emissions of anthropogenic greenhouse gases, would be less effective than thought."
They go on to qualify their conditional criticism of Kyoto by stating "This scenario, however, does not question the general mechanism established within the protocol, which we believe is a breakthrough," but the political opinions of the authors are not our concern. Neither are we weighing in here on the relative merits of the various Holocene climate reconstructions. What does concern us is that the inference regarding climate sensitivity is precisely opposite to what elementary mathematical and physical analysis dictates it should be.

Our correspondents at the Montreal climate negotiations which concluded last week report that Esper et al. was given a lot of play by the inaction lobby. The only major news outlet to pick up on the story, though, was Fox News, whose report by "Junk Science" columnist Steve Milloy here arguably represents a new low in propaganda masquerading as science journalism. Milloy does not mention that Esper et al. is an opinion piece, not a research article. He also fails to mention that Esper et al. do not actually conclude that a downward revision in the importance of CO2 is necessary; they only attempt to say (albeit based on faulty logic) what would happen if higher estimates of climate variation proved right. Milloy also fails to note the final quote supporting Kyoto, for what that’s worth. Of course, it is too much to expect that Milloy would look into other papers on the subject to see if there might be something wrong with the reasoning in Esper et al.

The lack of "balance" in this instance is jarring for a network that claims a trademark on the description "Fair and Balanced". What Milloy was engaging in goes beyond mere lack of balance. It is an example of "quote mining," which has become a favored tactic of those seeking to counter sound science with unsound confusion (see the interesting discussion of quote mining on the Corante site). The fact that no other media outlets have picked up on the unfortunate Esper quote leaves us with some feeling of encouragement that journalists are beginning to be able to filter out bad science, no matter how interesting an article it might make.
Ferdinand Engelbeen says
Re #50,
I agree that time scale is important in the whole discussion. And I agree that there is very little influence from anthropogenic GHGs in the full Holocene period until approximately 1900. But I disagree that the influence of solar + volcanic in the period 1950-2000 is near zero, for the very reason that you are forgetting the time frame.
Indeed there is little trend measured in solar radiation since the start of the satellite measurements (which is unfortunately only a few decades of data). For the period before the satellite era, we depend on indirect indications of solar variations like sunspots, magnetic field data and cosmogenic isotopes. These are interconnected, but do not match 100%. What is clear is that the sun’s activity since 1930-1940 is higher than in any period of the preceding 1,000 years (see Usoskin, Solanki et al.), or even of the preceding 8,000 years as approximated by 14C data and sunspot number; there has also been a more than doubling of the sun’s magnetic field in the past 100 years.
It takes time for the earth to get to a new equilibrium (especially for the oceans), even if the extra forcing stays steady after a shift to one or the other side. That is true for GHGs as well as for solar disturbances…
Urs Neu says
Re #51
First point: Esper et al. talk about natural forcing as a whole, not solar forcing. Thus we have to include volcanoes. They compare anthropogenic forcing (GHG + aerosols + other forcings) to natural forcing (solar and volcanic, in principle). If you only look at solar forcing, the picture might be somewhat different. During the second half of the 20th century, there is only a very small solar forcing, if any; whereas during the same time there is a negative forcing by volcanoes, which probably gives a net negative forcing during that time (IPCC 2001, http://www.grida.no/climate/ipcc_tar/wg1/448.htm)
Second point: You seem to suggest a time lag of global temperature to solar forcing or a lagged reaction due to ocean inertia. This time lag or lagged reaction must be at least about 50 years to explain the recent warming since the rise of solar activity ended in about 1940.
Shindell et al. (2001, http://pubs.giss.nasa.gov/docs/2001/2001_ShindellSchmidtM1.pdf) have found a time lag of about 20 years for the solar influence on AO/NAO, with impacts on the regional scale, but hardly on the global scale. A time lag of about 20 years could not explain the temperature rise after 1970, which started 30 years after the rise of solar activity had stopped.
If you compare the Usoskin et al. data with the temperature evolution of the last millennium, there is hardly any evidence of a 50-or-more-year time lag, even on the centennial time scale. The strongest change, i.e. the rise of solar activity and temperature at about 1900, is more or less synchronous. I can’t see room for a time lag of more than some years there.
Recent GCM calculations (Meehl et al. 2005, http://www.sciencemag.org/cgi/content/full/307/5716/1769, Fig. 1B) show that there is only a very weak remaining inertial response of temperature after 30-40 years for a forcing comparable to the solar forcing of 1900-1940, i.e. in the region of about 0.02ºC per decade. Far too little to explain recent warming.
However, since the warming after 1970 is comparable to that of 1900-1940, an inertial response seems unlikely; it would have to be a time lag. But how would you then explain the temperature rise of 1900-1940?
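This inertia argument can be made quantitative with the same kind of toy energy balance model sketched earlier in this post (illustrative numbers only, not a GCM result): after a step increase in forcing, the residual warming rate decays like exp(-t/τ) with τ = C/λ, so for a mixed-layer ocean very little committed warming is still emerging decades later:

```python
import numpy as np

# Step forcing F at t = 0; T(t) = (F/lam) * (1 - exp(-t/tau)), tau = C/lam.
F = 0.75     # W/m^2, illustrative, of the order of 1900-1940 solar estimates
lam = 1.48   # W m^-2 K^-1 (equivalent to ~2.5 K per CO2 doubling)
C = 10.0     # W yr m^-2 K^-1 mixed-layer heat capacity -> tau ~ 7 yr
tau = C / lam

for t in (10.0, 30.0, 50.0):
    rate = (F / lam) * np.exp(-t / tau) / tau      # K/yr still emerging
    print(t, 10 * rate)                            # K per decade
# After 30-50 years the residual rate is of the order of 0.01 C/decade or
# less for this simple model; deep-ocean heat uptake lengthens the tail
# somewhat, but the rate stays small, in line with the ~0.02 C/decade
# figure quoted from Meehl et al. (2005).
```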
R. T. Pierrehumbert (raypierre) says
Regrettably, I’ve been tied up with other things, and haven’t been able to actively monitor this interesting conversation in the past week or so. I will make a few last remarks concerning the issue of possible differences in sensitivity to solar vs. GHG radiative forcing, mostly prompted by Ferdinand’s musings.
First, note that in my article I stated that the equivalence of solar radiative forcing to GHG radiative forcing is not as precise as, say, the equivalence of methane to CO2 forcing. Solar forcing has a different pattern than GHG forcing. In particular, sunlight is absorbed mostly at the surface, and energy changes have to work their way into the atmosphere via changes in surface conditions. This can have a big effect on the short-time response over the oceans, where the surface doesn’t have time to catch up. For radiative-convective (column) models, the energy changes at the surface are rapidly mixed into the troposphere by convection, and so the equivalence of solar to GHG forcing seems pretty secure. This has been known since Manabe and Wetherald and Manabe and Strickler. Now, when you go to a full GCM, things, a priori, start to look fuzzy. The changes in solar luminosity cause a forcing pattern that is very non-uniform both in time and space, and not like the pattern of GHG forcing. You might think this would completely invalidate the concept of radiative forcing equivalents, but GCM simulations with large changes (equivalent to doubled CO2) show a remarkable degree of equivalence (see Govindasamy and Caldeira, GRL Vol. 27, 2001). I don’t know what is going on in the Hansen study cited by Ferdinand, but the last time I read it I didn’t have this point in mind, and I haven’t had time to check that the results are being cited correctly.
Generally speaking, it’s not impossible for models to have a different sensitivity to solar vs. GHG, but there’s a lot of physical reasoning and numerical simulation supporting a near equivalence, so one has to argue carefully for why a given case should have different sensitivity.
Now, regarding the Stott et al. paper, one has to be careful to read what they actually said in its entirety and not just look at the high-end numbers for sensitivity. In their Table 1, they actually find that equal sensitivity to solar between the model and observations is within the confidence limits of their estimates.
The “factor of 3” figure is the high end of the confidence interval. The mid-range estimate does indicate 1.65, but the calculation isn’t incompatible with the null hypothesis that there’s nothing missing in the model sensitivity to solar. Now, when they break out the “Natural” forcing into separate volcanic and solar components, they do find support for enhanced amplification of solar forcing BUT, as the authors themselves note, part or all of this result is likely to be spurious. The reason for thinking that is that when they apply their same regression analysis to the TOTAL model response against individual components (instead of doing data vs. model components), they find that a lot of the GHG response is mis-attributed to solar, so that one gets a completely spurious implied reduction of GHG sensitivity. This is known to be a misattribution because one knows why the model did what it did, though not necessarily the atmosphere. It is difficult to separate the observed solar response from the GHG response, because over part of the record the patterns look similar.
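This degeneracy can be reproduced with a toy regression (a synthetic example, not Stott et al.'s actual analysis): when two fingerprint time series are nearly collinear, least squares can trade amplitude between them almost freely:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 150)

# Two made-up "fingerprints" that both rise over the record, as the GHG and
# solar responses do over parts of the 20th century.
ghg = t**2
solar = t**2 + 0.05 * np.sin(6 * np.pi * t)

# Synthetic "observations": a pure GHG response plus noise; no solar at all.
obs = ghg + 0.1 * rng.standard_normal(len(t))

# Least-squares attribution of the observations onto both fingerprints.
X = np.column_stack([ghg, solar])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
print(beta)   # the split between the two scalings is poorly constrained and
              # can land far from the true (1, 0): "solar" steals part of the
              # GHG signal because the two patterns nearly coincide
```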
Note that the main issue treated by Stott et al is model sensitivity vs. observed sensitivity, which is a different thing from model sensitivity to solar vs GHG.
It is true that models with different climate sensitivities can equally well match the 20th century record, owing to compensating uncertainties in aerosol forcing. This is precisely why there is a range of IPCC forecasts and why we can’t at present say which of the IPCC models is most likely. This doesn’t render the prediction “questionable.” It renders it uncertain, which was always openly admitted. Note that nobody has yet produced a model which fits the recent data and which also has much lower sensitivity to CO2 than the bottom of the IPCC range. Ferdinand isn’t saying anything more than the IPCC reports say. From the standpoint of risk assessment, though, one must keep in mind that as far as present knowledge goes the top of the IPCC range is essentially as likely as the bottom of the range.
The whole reason for the interest in natural variability and climate sensitivity is that one would indeed like to say which of the forecasts is most correct. This is proving to be very difficult, but nothing has emerged to seriously question the range given by IPCC.
Regarding the Holocene variability, the issue isn’t a trade-off between CO2 and solar sensitivity. The CO2 fluctuations in that time were small. If the sensitivity of climate to solar changes is different from the sensitivity to CO2, that makes it even harder to infer CO2 sensitivity from the Holocene fluctuations, and darn near impossible if the supposed mechanism for giving extra amplification to the solar forcing is completely unknown. My article gives the most straightforward interpretation of the data, and links an increased observed amplitude of response to increased sensitivity to the particular form of the forcing responsible (which sensitivity may or may not be transferable to CO2). The overall point is that inferring sensitivity from observations is HARD. The subsequent discussion here embellishes that point. However, I repeat that there’s nothing in this discussion that points to an appreciable downward revision of the current estimates of sensitivity of climate to CO2.
Now, there is certainly one general point on which I am wholeheartedly in agreement with Ferdinand. That is that we badly need better reconstructions of the past millennium — but be assured nobody is neglecting that problem. I would expand Ferdinand’s sentiment to say we badly need a better understanding of the response of climate in the deeper past, including glacial times and the Eocene. While there are many researchers interested in this question, it is regrettable that funding agencies don’t seem to see it as a big part of global change research. The funding for this area is pitiful compared to satellite data collection and modelling of the next century. The first draft of the US Strategic Plan for Climate research left out paleoclimate altogether. I see this as a major problem in funding priorities.
There’s no end to how long one could go on regarding these fascinating topics, but I’ll end by expressing my appreciation for all the interesting points people have raised. With this, I sign off for the holidays, wishing you all a Happy Holiday season and a fruitful and prosperous New Year.
Steve Bloom says
Re #48 (JH): One of the standard contrarian myths, I think still being actively promoted by at least the Idsos, has been that volcanos emit some much larger quantity of CO2 than do anthropogenic sources. This claim was made up from whole cloth. Actually it has been discussed once or twice on this site, but I think in comments rather than as a post topic. Anyway, the upshot is that volcanos emit far less CO2 than do anthropogenic sources. Conveniently, there was a very large eruption quite recently (Pinatubo in ’91) that was studied very closely. Large amounts of CO2 would have been hard to miss. In fact, the main effect of Pinatubo was a temporary cooling (from the aerosols) that maxed out at about half a degree. Had there been a lot of CO2, this cooling effect would have been overwhelmed by an obvious warming pulse that would still be with us (since CO2 remains in the atmosphere long-term rather than falling out quickly like the aerosols).
Steve Bloom says
Re #53 (RP): Thanks, Raypierre, for taking the time to make this one of the most informative RC posts ever. Happy holidays!
Re #50 (UN): I had an earlier comment, apparently eaten by cyberspace, covering somewhat the same ground as your final paragraph, but I think there’s a further implication:
I think Esper was a little bit the victim of his own bad writing or of bad editing when he wrote the closing paragraph Raypierre quoted (reproduced here so folks don’t have to scroll back up):
“So, what would it mean, if the reconstructions indicate a larger (Esper et al., 2002; Pollack and Smerdon, 2004; Moberg et al., 2005) or smaller (Jones et al., 1998; Mann et al., 1999) temperature amplitude? We suggest that the former situation, i.e. enhanced variability during pre-industrial times, would result in a redistribution of weight towards the role of natural factors in forcing temperature changes, thereby relatively devaluing the impact of anthropogenic emissions and affecting future predicted scenarios. If that turns out to be the case, agreements such as the Kyoto protocol that intend to reduce emissions of anthropogenic greenhouse gases, would be less effective than thought.”
Milloy spun this as something of an attack on Kyoto, but I don’t think it was intended as any such thing. Esper pointedly did not propose any reduction in the absolute amount of anthropogenic forcings (and if anything implies some degree of increase), but rather suggested that if natural forcings are larger, then the *relative* value of the anthropogenic forcings declines and that of the natural forcings increases. If this is the case, it is a truism that Kyoto and similar efforts to control anthropogenic forcings would be “less effective than thought” since the natural forcings are by definition uncontrollable by climate treaties.
At the same time (and this is the thought Esper really needed to add), climate treaties such as Kyoto (or more to the point its successors) become that much more essential since without them we have the potential of enhanced warming from a combination of natural and anthropogenic forcings considerably in excess of what would be possible from anthropogenic forcings alone. It would be more than a little ironic if this turns out to be the conclusion to which all the attacks on the flatter versions of the “hockey stick” lead.
Ferdinand Engelbeen says
Re #52 (and in part 55):
Volcanic forcing in the last 50 years, as well as in the previous 600 years (Fig. 6), results in an average of less than 0.1 K cooling, with quieter periods and more active periods. The influence of GHG variations in pre-industrial times is very low. That means that the residual historical climate change (0.1 K for MBH98, 0.9 K for borehole reconstructions) is near entirely from solar changes, although (multi)decadal internal natural variations/oscillations may play a role at all times.
If you have a look at the shape of the different solar reconstructions on the IPCC pages, you can see that solar fluctuations can explain nearly all of the variations of the last centuries, including the 1900-1940 warming, the 1945-1975 cooling and the 1975-2000 warming, if the sensitivity were large enough, and including the inertia of the climate (which is larger for larger changes).
As the sum of all influences (solar, volcanic, GHGs, aerosols) results in the temperature record, a larger pre-industrial natural variation (in this case via a higher sensitivity to solar) will come at the cost of the sensitivity to man-made emissions in the current period. Thus Esper is right that the result of Kyoto would then be less than expected.
Besides the main forcings, in recent decades there is something natural happening which is hardly explainable by increasing GHGs. See Wielicki and Chen, last two paragraphs (and the rest of the pages before and after).
Ferdinand Engelbeen says
Re #53,
Thanks Raypierre for the response and I wish you and other readers too Happy Holidays and all the best for the New Year.
About the attribution of sensitivities to CO2 and solar in all GCMs: as there is an overlap of CO2 and solar since the start of the industrial revolution, it is difficult to know the right attribution for each, unless there is near-equivalence of sensitivities. As most GCMs have near-identical sensitivities for the tandem GHG/aerosols, it is normal that the simulations have similar results (this is a necessary but not sufficient property for validation). But if there is a difference in sensitivities, the relative attribution can go either way.
About the response to CO2 in the pre-industrial Holocene: the variations are very small, which has the advantage that it is possible to deduce the sensitivity to mainly solar forcing, without the overlap of solar and CO2 changes.
In summary, the discussion is about the possibility that there may be differences in response to solar and other forcings, because of the differences in spectrum, the influence of these differences on several layers of the atmosphere (and the surface for land/oceans) and cloud responses.
The latter is clearly a very weak point in current GCMs and the main cause of the large range of future climate projections.
A second problem in current GCMs is the influence of (human-made) aerosols. Here there is a trade-off between aerosols and GHGs. If aerosols have a low sensitivity (or a low forcing), then GHGs have a low(er) sensitivity, and vice versa.
Further discussion somewhere in the New Year, I hope!
Eli Rabett says
#48 and 54. To give credit where credit is due, the Idsos explicitly state that CO2 emitted by volcanoes is small (co2science is their site): http://www.co2science.org/scripts/CO2ScienceB2C/subject/questions/1999/volcano.jsp
Steve Bloom says
Re #58 (ER): Thanks for the fact check. Since my memory was clearly in error, I wasted way too much time tracking down what I suspect may be the source for this urban legend. See http://www.agelesslove.com/boards/archive/index.php/t-16764.html for the gory details. The claim made by the book isn’t quite that excess CO2 comes directly from volcanos, but rather that undersea volcanos warm the oceans which in turn emit the CO2. I’d say you can’t make this stuff up, but obviously you can. :)
Urs Neu says
Re #56: You can only explain the 1975-2000 warming by solar forcing if the sensitivity changed in the middle of the 20th century. The solar reconstructions you mention show an increase in TSI from 1900 to 1950 of about 1.5-3 W/m2 (depending on the reconstruction) and an increase of global temperature of about 0.4ºC. The following decrease (until ~1970) and increase (since 1970) of TSI are of about the same amount (0.5-1.5 W/m2) and thus compensate each other, while the temperature increase after 1970 (about 0.5ºC) is much greater (at least 5 times) than the cooling before. I can’t see how you can explain the warming after 1970 by TSI changes without changing the climate sensitivity to solar forcing at that time.
Concerning Chen and Wielicki:
1. It is quite difficult to detect decadal variations in a 15-year time series.
2. It is physically plausible that global warming changes large-scale circulation patterns (in the atmosphere as well as in the ocean).
3. C&W “feel” and “believe” that this variation is independent of global warming; any supporting arguments are missing.
Ferdinand Engelbeen says
Re #60,
Urs,
First, the solar reconstructions are for the TOA (top of atmosphere) forcing. Any increase in sensitivity due to the change of cloud amount caused by the long-term (longer than the solar cycle) change in solar intensity is not included.
Second, the measured 1900-1945 temperature increase is only a transient one; if solar output had levelled off in 1945, the temperature would have increased further, until a new equilibrium had been reached (all other forcings being frozen too). But as there was a decrease in solar strength in 1945-1975, the net effect is a small cooling instead. After 1975, solar strength increased again to the current level, which is still higher than in the 1930’s…
You need to read the original works of Wielicki et al. and Chen et al. to see the basic point of what happened in the past decades in the tropics. CO2 increased in the period 1985-2000 from 345 ppmv to 370 ppmv. This should give a direct radiative gain of some 0.35 W/m2 in the period and area of interest. In contrast, due to a change in cloud cover (as a result of increased Hadley cell circulation), there is some 2 W/m2 more solar insolation at the surface and some 5 W/m2 more outgoing IR to space in the same area and period. The net effect is an extra loss of some 3 W/m2 to space…
That is an order of magnitude larger loss than the gain from the extra CO2 and with an opposite sign. Which points to a natural cause.
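(As a check: the 0.35 W/m2 figure is consistent with the simplified forcing expression sketched earlier in this post:

```python
import math
print(5.35 * math.log(370.0 / 345.0))   # ~0.37 W/m^2 for 345 -> 370 ppmv
```
)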
Hans Erren says
re 59:
Neat! I hadn’t considered the warm water effect.
Now the claim can easily be checked because undersea volcanic eruptions are coincident with earthquakes. As seismicity has been monitored in detail ever since underground nuclear testing started in the 50’s, we have a reliable database available.
Just need some time to query it, or has somebody done this already?
Hank Roberts says
This may help (I’ll defer to the real scientists here to evaluate it).
Link is to a PDF file:
Detecting and Attributing External Influences on the Climate System: A Review of Recent Advances, February 2, 2005, Journal of Climate
http://www.llnl.gov/tid/lof/documents/pdf/315840.pdf
Quote
…. Changes in solar forcing can potentially explain only about 2% of the observed increase in ocean heat content (Crowley et al. 2004). Geothermal heat escaping to the oceans from the great rifts may explain perhaps 15% of the observed change (W. Munk and J. Orcutt 2003, pers. com.) and thus sea floor heating is probably not a major factor. In contrast, estimates of changes in ocean heat content caused by anthropogenic warming provide a much closer fit to the observations ….
End quote
Steve Bloom says
And of course volcanic warming is not a zero effect, but it’s interesting that someone figured out how to quantify it. Hans, I believe Louis Hissink has been going on about this for years, claiming it to be the dominant factor in ocean warming. You may take that for what it’s worth.
Ferdinand Engelbeen says
Re #63,
There are some scientists who disagree with the 2% attributed to solar changes. According to Scafetta and West (2005), the small increase (0.45 W/m2 according to ACRIM) in TSI (total solar irradiance at the top of the atmosphere) measured by satellites accounts for at least 10-30% of the observed increase in surface temperatures in the period 1980-2002. This is based only on recently measured TSI values and the observed effect of the ~11/~22 year solar cycles since 1850. It doesn’t include longer-term effects on surface temperature or ocean heat content due to the increase in solar radiation over the past century.
Further, the observed ocean heat content (since 1955; the first half century lacks sufficient sub-surface data) only resembles the GHG forcing in the linear increase in heat content. The cyclic behaviour of ocean heat content points to natural cycles of forcing (probably caused by changes in cloud cover) one order of magnitude larger than the changes attributable to GHGs. Thus we need to know what drives the natural cycles before we can make any real attribution to the different forcings…
[Response: As you are no doubt aware, there are two separate efforts to string the solar observing satellites together, and the other method (PMOD) doesn’t show any trend at all…. -gavin]
Urs Neu says
Re 65
I had a look at the Scafetta and West paper (SW). The arguments and conclusions of this paper are, to put it mildly, very questionable. The main flaws are the following:
SW apply a band-pass filter to global temperature (GT) and total solar irradiance (TSI) for the last 150 years for two bands (7.3-14.7 years, centered at 11 years, and 14.7-29.3 years, centered at 22 years). Then they compare the solar and temperature signals in these frequency bands.
To derive the climate sensitivity to the 11 and 22 year cycles, respectively, they compare the amplitudes of both signals in the above-mentioned frequency bands. They do so by assuming that 100% of the temperature signal in each frequency band is due to the corresponding solar cycle.
This latter assumption is certainly not true: first, from figure 4 of their paper it is obvious that the (filtered) temperature signal and the solar signal have different frequencies (temperature with 16 cycles over the 150-year period compared to 14 for the solar signal; or a period of about 9 years for temperature over the last 5 cycles compared to about 11 years for the solar signal). The same holds for the 22-year cycle (7 temperature cycles compared to 6 solar cycles). Not to mention the amplitudes of the cycles they pretend to match, which show no apparent correlation at all. Just the fact that both factors show a signal in the same wide frequency band (7-14 years!) is not very convincing (to say it politely…). Second, the filtered signal certainly contains components of other forcing factors (like volcanoes or El Nino), since these signals surely have components in all frequency bands.
SW assume that “our methodology filtered off volcano-aerosol and ENSO-SST signals from the temperature data because these estimates are partially consistent with already published independent empirical findings.” This is a very peculiar logic. They compare their result for the lower frequency band (11 years), e.g., to the climate sensitivity found by Douglass and Clader (2002) (DC) through a regression analysis including volcanoes and El Nino. However, their climate sensitivity for the upper band (22 years) is much higher than the result of DC (0.17 K/(W m-2) compared to 0.11). And for their calculation of the solar influence, they use this higher value.
However, they make another logical mistake: comparing the TSI increase between two 11-year cycles (mean of 1980-1991 to mean of 1991-2002, which compares periods separated by 11 years) to the global temperature trend 1980-2002, which compares periods separated by 23 years!
This compensates somewhat for their use of the higher sensitivity…
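For readers who want to see the method being criticized, here is a minimal synthetic sketch of the band-pass-and-compare approach (illustrative code with made-up series, not SW’s actual analysis). Note how the amplitude ratio comes out nonzero even when the “temperature” contains no solar signal at all, because the filter passes whatever variance falls in the band:

```python
import numpy as np
from scipy.signal import butter, filtfilt

years = np.arange(1850, 2000)           # one sample per year (fs = 1/yr)
rng = np.random.default_rng(1)

# Synthetic stand-ins: an 11-yr "solar cycle", and a temperature series whose
# quasi-periodic part has a ~9-yr period plus red noise (ENSO, volcanoes, ...).
tsi = 0.5 * np.sin(2 * np.pi * (years - 1850) / 11.0)
temp = (0.1 * np.sin(2 * np.pi * (years - 1850) / 9.0)
        + np.cumsum(0.02 * rng.standard_normal(len(years))))

# Band-pass the 7.3-14.7 yr period range (the band SW center on 11 yr).
b, a = butter(3, [1.0 / 14.7, 1.0 / 7.3], btype="band", fs=1.0)
tsi_f = filtfilt(b, a, tsi)
temp_f = filtfilt(b, a, temp)

# "Sensitivity" from the amplitude ratio, a la SW: nonzero despite the
# temperature series containing zero solar influence by construction.
print(np.std(temp_f) / np.std(tsi_f))
```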
I wonder how this paper could pass peer review?
Or: another paper claiming solar influence by easily passing over the fact that the frequencies of the signals which they claim to be linked unfortunately do not match at all (after Shaviv and Veizer 2003 in GSA Today; see Rahmstorf et al. 2004 in EOS)…
Did someone else have a closer look at that paper?