Guest comment by Malte Meinshausen, Reto Knutti and Dave Frame
Yesterday’s BBC article on the “Avoiding Dangerous Climate Change” report of last year’s Exeter meeting carried two messages that have left some readers a little confused. On the one hand, it said that stabilization of greenhouse gases at 400-450 ppm CO2-equivalent concentrations is required to keep global mean warming below 2°C, which in turn is assumed to be necessary to avoid ‘dangerous’ climate change. On the other hand, people are quoted as saying that “We’re going to be at 400 ppm in 10 years’ time”.
So given that we will exceed 400 ppm CO2 in the near future, is a target of 2°C feasible? To make a long story short: the answer is yes.
The following paragraphs attempt to shed a little light on why 2°C and 400 ppm are mentioned together. First of all, ‘CO2-equivalent concentration’ expresses the radiative forcing effect of all human-induced greenhouse gases and aerosols as if we had only changed CO2 concentrations. We use it as shorthand for the net human perturbation – it is not the same as real CO2 being at 400 ppm, because of the substantial cooling effect from aerosols. However, the other greenhouse gases, such as methane and N2O, increase the forcing and compensate somewhat for the aerosol effects. Thus the CO2-equivalent concentration is roughly equal to the current level of real CO2.
The ecosystems on our planet are a little like a cat in an oven. We control the heating (greenhouse gas concentrations) and the cat responds to that temperature. So far, we have turned the controls to a medium level, and the oven is still warming up. If we keep the oven control at today’s medium level, the cat will warm beyond today’s slight fever of 0.8°C. And if we crank the control up a bit further over the next ten years and leave it there, the fever is going to get a little worse – and there is even a 1 in 5 chance that it could exceed 2°C.
Where does that probability come from? If we knew the real climate sensitivity, then we would know the equilibrium warming of our oven/planet if we left the CO2-equivalent concentrations at, say, 400 ppm for a long time. For instance, if the climate sensitivity were 3.8°C, such an oven with its control set to 400 ppm would warm 2°C in the long term1. But what are the odds that the climate sensitivity is actually 3.8°C or higher? The chances are roughly 20%, if one assumes the conventional IPCC 1.5-4.5°C uncertainty range is the 80% confidence interval of a lognormal distribution2. Thus, if we want to avoid a 2°C warming with a 4:1 chance, we have to limit greenhouse gas concentrations to something equivalent to 400 ppm CO2 or below. (Note, though, that one might want to question whether a 4:1 chance is comforting enough when the alternative is a fever of 2°C or more…).
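For readers who like to check such numbers, here is a minimal sketch (in Python, assuming NumPy and SciPy are available) of how the “roughly 20%” follows from the stated lognormal assumption. It is an illustration only, not the calculation behind the report’s figures:

```python
# Treat the IPCC 1.5-4.5 degC sensitivity range as the 80% confidence interval
# of a lognormal distribution and ask how much probability lies above 3.8 degC.
import numpy as np
from scipy.stats import norm

lo, hi = 1.5, 4.5                  # assumed 10th and 90th percentiles (degC)
z = norm.ppf(0.9)                  # ~1.28 standard deviations
mu = (np.log(lo) + np.log(hi)) / 2
sigma = (np.log(hi) - np.log(lo)) / (2 * z)

p_exceed = 1 - norm.cdf((np.log(3.8) - mu) / sigma)
print(f"P(climate sensitivity > 3.8 degC) ~ {p_exceed:.0%}")   # roughly 20%
```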
At the heart of the second statement (“We’re going to be at 400 ppm in 10 years’ time”) is the fair judgment that we seem committed to crank up concentrations not only to 400 ppm CO2 but beyond: anything less implies we would have to switch off our power plants tomorrow. Current CO2 concentrations are already about 380 ppm, and they rose by about 20 ppm over the last ten years.
Figure: A schematic representation of (a) fossil CO2 emissions, (b) CO2-equivalent concentrations, and (c) global mean temperature for two scenarios: Firstly, an “immediate stabilization” which implies rising CO2-equivalent concentrations up to around 415 ppm in 2015 and stable levels after that (red dashed line). This scenario is clearly hypothetical as the implied emission reductions in 2015 and beyond would hardly be feasible. Secondly, a peaking scenario (green solid line), which temporarily exceeds and then returns to a 415 ppm stabilization level. Both scenarios manage to stay below a 2°C target – for a climate sensitivity of 3.8°C or lower. This is roughly equivalent to a 4:1 chance of staying below 2°C3.
Indeed, avoiding concentrations of – say – 475 ppm CO2-equivalent (see Figure a) will require quite significant reductions in emissions. However, as long as we reduce emissions decisively enough, concentrations in the atmosphere could decline again towards the latter half of the 21st century and beyond. The reasons are the relatively short atmospheric lifetimes of methane, nitrous oxide and other greenhouse gases, and the continued uptake of some CO2 by the oceans (as discussed here).
Now, what is going to happen to our cat, if we turn up the heat control of our oven to about 475 ppm and then reduce it again? If we react quickly enough, we might be able to save the cat from some irreversible consequences. In other words, if concentrations are lowered fast enough after peaking at 475 ppm, the temperature might not exceed 2°C. Basically, the thermal inertia of the climate system will shave off the temperature peak that the cat would otherwise have felt if the oven temperature reacted immediately to our control button.
Thus, to sum up: Even under the very likely scenario that we exceed 400 ppm CO2 concentrations in the very near future, it seems likely that temperatures could be limited to below 2°C with a 4:1 chance, if emissions are reduced fast enough to peak at 475 ppm CO2 equivalent, before sliding back to 400 ppm CO2-equivalent.
Peaking at 475 ppm CO2-equivalent concentrations and returning to 400 ppm certainly comes at a cost, though: our oven will approach 2°C more quickly than under a (hypothetical) scenario in which we halt the build-up of CO2 concentrations at 400 ppm (see Figure c). Thus decisions would need to be different if we care more about the rate of warming than about the equilibrium. In fact, some emission pathways and some models also suggest that peaking at 475 ppm CO2-equivalent and returning to 400 ppm might even slightly decrease our chances of staying below 2°C (see Chapter 28 in the DEFRA report). Depending on the actual thermal inertia of the climate system, the peak temperature corresponding to 475 ppm might be very close to the 400 ppm equilibrium temperature. This points to a more fundamental issue: rather than discussing ultimate stabilization levels, it might be more worthwhile for policy and science to focus on the peak level of greenhouse gas concentrations. Once concentrations have peaked, we could still decide whether we want to stabilize at 400 ppm or closer to pre-industrial levels. We will most likely be able to make wiser decisions in the future, given that we will certainly have learnt something about the cat’s behavior during its current fever.
1. The equilibrium warming dT can easily be estimated from the CO2-equivalent stabilization level C, if one knows the climate sensitivity S, with the following little formula: dT = S * ln(C/278 ppm) / ln(2)
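As a quick check of that footnote, the formula can be typed in directly (a small Python sketch; S = 3.8°C and C = 400 ppm are the values used in the text above):

```python
# Footnote 1 as code: equilibrium warming dT (degC) for a CO2-equivalent
# stabilization level C (ppm), a climate sensitivity S (degC per CO2 doubling)
# and a pre-industrial concentration of 278 ppm.
import math

def equilibrium_warming(C_ppm, S, C_preindustrial=278.0):
    return S * math.log(C_ppm / C_preindustrial) / math.log(2)

print(equilibrium_warming(400, S=3.8))   # ~2.0 degC, as stated in the text
```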
2. This is of course a probability that merely reflects the uncertainty in our knowledge. The climate sensitivity is not random; it is just unknown. The probability expresses our degree of belief in that outcome, and is of course subject to future modification in the light of new evidence.
3. The depicted peaking scenario is the EQW-S475-P400 scenario as presented in Chapter 28 of the DEFRA report. The “combined constraint” (see also Chapter 28) has been used to find aerosol forcing and ocean diffusivity values for a 3.8°C climate sensitivity that allow an approximate match to the historic temperature and ocean heat uptake records. The historic fossil CO2 emission data are taken from Marland et al., the CO2 observations from Etheridge et al. and others are as given here and here, and the temperature observations and their uncertainties are from Jones, Folland et al. The simple climate model used is MAGICC 4.1, as in Wigley and Raper (2001 – see here).
David H says
Re # 47 Sea Level
Thanks Hank, I thought I was alone.
Hank Roberts says
We’re all alone (grin); I don’t know if that’s a reliable graph, it’s a page inside a teaching web site addressing a variety of different stories from history and myth, some of it’s fiction. I saw another reference elsewhere saying sea level hasn’t been more than a half meter higher since the last ice age, also making reference to a “climate optimum” somewhat warmer than present. But that had references to flood theology on it too. There are pages saying that the location of Eden is now under the Red Sea.
It gets hard to tell the players from the fans.
I ask here for the experts’ references hoping theirs are more informative than the ones I’ve found.
Pat Neuman says
re: #49. … “CONCLUSIONS The frequently mentioned rapid increase of the temperature in the Arctic is based on a record beginning at a minimum in the temperature around the 1970s …
Please take a look at annual temperature plots for McGrath AK (1942-2005) and Big Delta AK (1943-2005), at:
http://pg.photos.yahoo.com/ph/patneuman2000/my_photos
The Alaska album contains annual temperature plots for 20 climate stations in Alaska.
Hank Roberts says
Looking at those curves that represent different scenarios reminds me of the scenarios in the back pages of Catton’s book titled Overshoot.
I found them online here:
http://greatchange.org/footnotes-overshoot-graphs.html
Catton’s lengthy explanatory footnote explains, toward its end:
…
“… in Panel D, “carrying capacity” has been represented by two different curves. A major fraction of the recent, apparently high carrying capacity for human high-energy living must be attributed to temporary resources – i.e., non-renewable fossil acreage, the earth’s savings deposits. In Panel D, it is optimistically assumed that the component of carrying capacity based on renewable resources has remained stable so far. But it is recognized that serious overshoot, induced by temporarily high composite carrying capacity, will at least temporarily undermine even the sustainable component. “Energy plantations”, for example (one of the Cargoist proposals), will tend to aggravate the competitive relation between our fuel-burning prosthetic machinery and ourselves; land taken over to feed technology will not feed humans. So “temporary carrying capacity” is shown actually dipping below the horizontal line for a while, before it recovers and becomes again simply “carrying capacity”. The lesson from Panel D is that crash caused by the exhaustion of phantom carrying capacity by Homo Colossus could preclude a later cycle of regrowth.”
“The boundary between past and future is drawn in Panel D, as in the other three panels, at a time when population appears not yet to have overshot carrying capacity. Whether or not that optimistic feature of the model is justified by current facts makes little difference if current practices have committed us to a trajectory that continues upward so that it is destined soon to cross the descending curve that represents global carrying capacity, a capacity not yet acknowledged to be finite. My own view, of course, is that the curves have already crossed.”
“Either way, the past shown in Panel D more nearly accords with ecological history than do the pasts shown in Panels A, B, or C. The future hypothesized by Herman Kahn’s think-tank group is dangerously optimistic because it is based on the least realistic past. But the pasts shown in Panels B and C are also less realistic than the past shown in Panel D. The futures shown in Panels B and C are therefore also probably somewhat “optimistic” – although it seems necessary to enclose the word in quotation marks, because even the Panel B future seems dismal, and the Panel C future seems disastrous…”
That was written a long time ago; looking into it and deciding which of the assumptions made for each scenario have mattered in reality, is a good exercise every now and then. I’m still puzzled myself on some of them.
Joel Shore says
Re #49: Thanks, Tim, for posting that paper. I am no expert on the issue of arctic climate change… but my reading of that paper is that it is a bit oversimplistic in discussing the study of arctic change and the attribution of its causes. The issue of attribution is much more complicated than simply looking at temperature records. This site: http://www.arctic.noaa.gov/detect/climate-temps.shtml and the references at the bottom give some discussion of this issue. I assume the ACIA report, which I haven’t read yet (see Ref. 1 of the Karlen paper), gives much more.
Joel Shore says
Re #49 and #55: I think it is well worth reading what the ACIA report has to say in regards to temperature trends in the arctic and attribution of causes. In particular, see pages 34-39 of Chapter 2 of the scientific report available here: http://www.acia.uaf.edu/pages/scientific.html
To quote from part of this:
“The question is whether there is definitive evidence of an anthropogenic signal in the Arctic. This would require a direct attribution study of the Arctic, which has not yet been done. There are studies showing that an anthropogenic warming signal has been detected at the regional scale. For example, Karoly et al. (2003) concluded that temperature variations in North America during the second half of the 20th century were probably not due to natural variability alone. Zwiers and Zhang (2003) were able to detect the combined effect of changes in GHGs and sulfate aerosols over both Eurasia and North America for this period, as did Stott et al. (2003) for northern Asia (50º–70º N) and northern North America (50º–85º N). In any regional attribution study, the importance of variability must be recognized. In climate model simulations, the arctic signal resulting from GHG-induced warming is large but the variability (noise) is also large. Hence, the signal-to-noise ratio may be lower in the Arctic than at lower latitudes. In the Arctic, data scarcity is another important issue.”
“In conclusion, for the past 20 to 40 years there have been marked temperature increases in the Arctic. The rates of increase have been large, and greater than the global average. Two modeling studies have shown the importance of anthropogenic forcing over the past half century for modeling the arctic climate. Johannessen et al. (2004) used a coupled atmosphere–ocean general circulation model (AOGCM) to study the past 100 years and noted, “It is suggested strongly that whereas the earlier warming was natural internal climate-system variability, the recent SAT (surface air temperature) changes are a response to anthropogenic forcing”. Goosse and Renssen (2003) simulated the past 1000 years of arctic climate with a coarser resolution AOGCM and were able to replicate the cooling and warming until the mid-20th century. Without anthropogenic forcing, the model simulates cooling after a temperature maximum in the 1950s. There is still need for further study before it can be firmly concluded that the increase in arctic temperatures over the past century and/or past few decades is due to anthropogenic forcing.”
Ben A says
Re #47,#48,#52
You asked for experts’ references; I am sadly not an expert, but I have some background knowledge if that is of any use to you. If it is too basic, feel free to ignore it!
The problem with reconstructing sea level is that what is usually observed in proxy records is the local (isostatic) change as opposed to the global (eustatic) change. Sea level is a relative term and depends upon the relative levels of the ocean and the land, so sea level can appear to rise if the land is depressed, etc. Several areas of the world will have seen sea levels higher than today’s during the Holocene, but this is due to isostatic rather than eustatic changes. In the UK, because of the great amount of ice over Scotland, the land was tilted, with around 12-14 m of depression over Scotland and a subsequent forebulge over southern England. So when looking at UK records you have to realize that Scotland is currently bouncing up, so sea levels there will appear to fall even if they are stationary, while the south of England is submerging. This is why the most reliable evidence of sea level change comes from far-field sites in areas of no glacial activity and with as little tectonic and uplift activity as possible. Only then can you gather global sea level changes.
There are good introductions to the topic of sea level change in the text books –
Pages 53-68 in Reconstructing Quaternary Environments by Lowe and Walker, 1997 (Prentice Hall)
Pages 107-125 in Quaternary Environments by Williams, Dunkerley, De Drekker, Kershaw and Chappell, 1998 (Arnold)
For the definitive account of what was happening in the Wash 4000 years ago, Ian Shennan and Ben Horton used over 12,000 radiocarbon-dated sea level indicator points across the UK to reconstruct sea levels back to 16,000 years ago. You will see much higher sea levels in Scotland, because of the depressed land mass, but the fenlands were all lower than today throughout the Holocene. I am not too sure how copyright works so I don’t want to post the figure, but it is available in: Shennan, I. and Horton, B., 2002, Journal of Quaternary Science, 17(5-6), 511-526.
If you do want to read around the topic it would be worth looking at the modeling side of things as well as the proxies; Lambeck and Peltier actively publish in the field, although their papers tend to differ in their exact interpretations. Just using Google Scholar will give you plenty of references.
Hope that helps
Timothy says
I was wondering whether the folks on RealClimate had anything to say about the 2005 increase in CO2. I remember that the increase for 2004 was hotly anticipated because the 2002 and 2003 increases had both been >2ppm [being the first time this had happened for two consecutive years].
I’ve heard that the following can be said about the annual increase in CO2 concentrations.
1. There is a long-term increasing trend which correlates very strongly with increasing anthropogenic emissions of CO2 from fossil fuel burning.
2. After removing this trend, the remaining variability matches the ENSO variability very strongly, the exceptions being years affected by volcanoes, and 2003 (see the rough sketch after this list).
3. There is evidence that links the anomalous 2003 increase to the exceptional European drought/heatwave of that year and to forest fires in Siberia.
4. #3 is often cited as evidence that we are starting to see the effects of a [land] carbon cycle positive feedback, whereby climate change reduces the ability of the land system to absorb anthropogenic emissions of CO2.
5. Preliminary data for 2005 suggests that drought in the Amazonia region has contributed to anomalously high CO2 increase in 2005, which can’t be explained by ENSO variability.
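[A rough sketch of the detrend-and-correlate check described in points 1 and 2, added as an illustration only; the arrays below are hypothetical placeholders, not real Mauna Loa or ENSO records:]

```python
# Remove the long-term trend from the annual CO2 growth rate, then compare the
# residual with an ENSO index. Placeholder data only.
import numpy as np

years = np.arange(1980, 2006)
co2_growth = np.random.normal(1.7, 0.5, years.size)   # hypothetical ppm/yr values
nino34 = np.random.normal(0.0, 0.8, years.size)       # hypothetical ENSO index values

trend = np.polyval(np.polyfit(years, co2_growth, 1), years)
residual = co2_growth - trend

r = np.corrcoef(residual, nino34)[0, 1]
print(f"correlation of detrended CO2 growth with ENSO index: r = {r:.2f}")
```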
Question: Could we be very close to a tipping point? Are we actually *at* the tipping point with respect to land uptake of CO2?
If we are this would make it much harder to constrain CO2 levels to a low stabilisation level.
Also, with regard to overshoot, there is a lot of uncertainty over how quickly CO2 concentrations might reduce if we overshot a target. I recognise that you are also considering short-lived species such as methane [and using a CO2-equivalent rate], but I think we would need to keep ‘actual’ CO2 levels below a stabilisation target, in order to be confident about meeting that target. 415ppm of CO2 looks like a very big ask.
Note that I only think it is a big ask in a political/organisational way. Technically and economically I think it is feasible, but it would require the sort of political focus that existed in WWII, for example.
To digress slightly, I remember learning in history class that during the early stages of the [second world] war some private companies in the UK resisted attempts to have their manufacturing turned over to armaments production, arguing that they could contribute to the war effort more effectively by “business as usual” and through economic growth and taxation on the profits of selling vacuum cleaners. This reminds me of the dichotomy between economic growth and tackling global warming that is often drawn today. Personally, I feel that the experience of WWII shows that economies can actually benefit from the stimulus created by a government-inspired boost in demand. In WWII this was to manufacture tanks, warplanes, etc., but today it would be to manufacture wind turbines, solar panels, efficient mass transit systems and the other technologies necessary to change our societies to emit less carbon.
Chuck says
Stabilizing atmospheric CO2 concentrations at 550 ppm will require a 70% reduction of emissions from the current annual emission rates (which are between 6 and 7 billion tons per year) to rates of about 1 or 1.5 billion tons per year. This is the rate of global emissions that existed in the late 1920s. Put another way, the 13 largest emitting countries in the world (USA, China, India, Russia, Ukraine, Japan, Germany, Canada, Australia, …) emit about 70% of the global total. So one “extreme” to reduce global emissions by 70% would be to reduce the emissions of those 13 largest countries to ZERO, and hold the emissions of all other countries in the world constant at their current rate. The “opposite extreme” would be to reduce the emissions of the 13 largest countries by at least 60%, and reduce the emissions of all of the OTHER countries in the world to ZERO. Stabilizing atmospheric CO2 concentrations at 475 ppm will require a larger emission rate reduction, probably between 80 and 90% of current annual emission rates, to an emission rate of less than 1 billion tons per year. And the time to stabilization will be about 400 years. Now, does anyone REALLY believe that stabilization of atmospheric greenhouse gas concentrations is feasible? Certainly, none of the people alive in the world today (Shirley MacLaine excepted) will live to see it.
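[A back-of-envelope restatement of the two-extremes arithmetic above, added as an illustration; the shares are the round numbers quoted in the comment, not precise inventory figures:]

```python
# If the 13 largest emitters account for ~70% of global emissions, how do the
# two "extremes" for a 70% global cut work out?
top13_share = 0.70        # assumed share of global emissions from the 13 largest emitters
target_cut = 0.70         # desired cut in global emissions

# Extreme 1: cut the top 13 to zero, hold everyone else constant.
remaining = 1.0 - top13_share
print(f"emissions left: {remaining:.0%} of today's, i.e. a {1 - remaining:.0%} cut")

# Extreme 2: cut everyone else to zero; how much must the top 13 cut?
needed_cut_top13 = 1 - (1 - target_cut) / top13_share
print(f"required cut by the top 13 alone: {needed_cut_top13:.0%}")   # ~57%, i.e. 'at least 60%'
```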
Mike Atkinson says
Perhaps you might comment on Estimated PDFs of Climate System Properties Including Natural and Anthropogenic Forcings (Chris E. Forest, Peter H. Stone and Andrei P. Sokolov, MIT). If they are correct and the climate sensitivity can be constrained to be within 2.4–9.2 K with 90% confidence, then the probability of breaching a 2°C rise is about 80% at 400 ppm.
Hank Roberts says
That’s the latest of many reports on climate from MIT. The graphics illustrating their conclusion are at the back of the PDF file.
Their summary:
“The implications of these results are that the climate system response will be stronger (specifically, a higher lower bound) for a given forcing scenario than previously estimated via the uncertainty propagation techniques in Webster et al. (2003).”
John L. McCormick says
Re: #59 Chuck summed up the topic of “avoiding dangerous climate change” by challenging us to think in real terms about the infeasibility of stabilizing atmospheric greenhouse gases at any level remotely approaching one that could prevent a 2 degree warming. As a lifelong environmental activist, I attest to being an accomplice to the failed effort and the self-appeasing approach environmentalists have taken.
No. Renewable energy is not a solution to reducing fossil fuel-generated CO2, nor is it an alternative to base-load electric generation or a substitute for imported or any other petroleum. Any of the renewable options are too little, too late and “fool’s gold” offered to the public and policy makers to keep them in the mitigation game and focused on an international treaty of some sort.
Time is the enemy of our children and we ran out their clock. Pat Michaels has become irrelevant and all the skeptics are immaterial to the facts coming to light regarding temperature amplification in the arctic and sub arctic.
Who really benefits from pursuing the international dialogue aimed at creating a post-2012, Post Kyoto, CO2 intensity agreement, clean technology and all the hand wringing efforts to lure developing countries into the mitigation tent? Only the advocates can claim victory. The climate will not.
Winters in the wheat-growing areas of the US Northern Plains will never be the same as more of the US-Canada boundary Arctic sea ice disappears in September and yields warm sea surface temperatures to the air currents that once delivered the essential winter snows to the farmlands on the US northern border.
Winter and summer wheat crops contribute to the world’s grain surplus and that is rapidly changing to a negative account. When the world loses access to surplus wheat, food prices rise rapidly in the poorest countries that have per capita incomes able to buy food until costs exceed their income and they drop into the class of malnourished or starving. The rest of us compete against each other and struggle to boost wheat output any way we can. Ethanol and bio-diesel are going to be in the way and quickly eliminated in this new world order of a bidding war for wheat. What happens then?
Corn and other grain farmers find less groundwater available because diminished snow pack is not replenishing the Ogallala and other vital aquifers in the grain states. And food is only one of the victims as unseasonable winters become the norm. Look at the satellite images and tell yourself we have time to dither around with climate stabilization. It’s pay-up time and we cannot escape the reality that courageous, outspoken scientists have been trying to impart to all of us; even the saintly environmentalists who fear talking about adaptation as if that will drive them out of business.
I say loudly and clearly as my limited knowledge will permit: THE WORLD COMMUNITY IS WITNESSING AN ABRUPT CLIMATE CHANGE IN THE MELTING OF THE ARCTIC ICE CAP AND WE HAVE VERY PRECIOUS LITTLE TIME TO DETERMINE THE CONSEQUENCES TO OUR AGRICULTURAL BASE (IF ONLY THAT WAS THE TOTALITY OF OUR WORRY) AND MAKE SERIOUS WATER SUPPLY ADJUSTMENTS.
Measure the US federal deficit for the next 25 years and you will begin to appreciate the complexity of responding to a melting Arctic ice cap. Who and what will provide the monies needed to assure that farmers will have the soil moisture and irrigation water they will need to feed us?
Please forgive my rant. Your page is important to the scientific discussion and others provide political pages for views such as mine.
Arctic sea ice melt does, however, beg the question of time available to prepare for the inevitable – at least as far as access to affordable food is concerned.
John L. McCormick
Paul Duignan says
Do you have a comment regarding the recent statement by Professor Keith Shine, head of meteorology at the University of Reading, that the 2004 CO2-equivalent level is currently 425 parts per million, and what that means for talk of getting CO2-equivalent levels below 400 ppm to hold temperatures to 2 degrees C?
http://news.independent.co.uk/environment/article344690.ece
[Response: Dr. Shine’s analysis is only for the well-mixed greenhouse gases, which of course gives a CO2-equivalence greater than for CO2 alone. However, it does not include the main counterbalancing terms (reflective aerosols, land use change), nor a few of the minor warming effects (black carbon, tropospheric ozone). By some fluke of nature all of the extra terms cancel out (with some uncertainty), and so our best guess for the total forcing is just about equal to that of CO2 on its own. – gavin]
Chuck says
Dr. Shine is pointing out that besides CO2, there are several other gases in the atmosphere (such as methane, nitrous oxide, and certain chlorofluorocarbons) that are also mostly transparent to the majority of the short wavelengths of energy incoming from the sun, but are strong absorbers of the longwave radiation that is re-emitted back towards space from the Earth. Thus, like CO2 and water vapor, their presence in the atmosphere contributes to the warming of the atmosphere, and their growing presence contributes to the growing warming. Taken together, by doing the calculations of the wavelength-specific absorption by all of these gases (and avoiding any “double-counting” where two gases may be absorbing at the same wavelength) — one can see that this is not a trivial exercise — current estimates are that these other gases are contributing an additional 10% or so to the warming produced by CO2 alone. Hence Dr. Shine’s comment that the 380 ppm or so of CO2 currently in the atmosphere (annual mean) PLUS the effect of the other greenhouse gases is equivalent to about 425 ppm of CO2 alone. My comment in #59 was based solely on the CO2 component. I’m guessing that it may be even more difficult to reduce the growth in the atmospheric concentrations of some of the other greenhouse gases, especially if their sources are much more dispersed around the world and if they have longer effective lifetimes in the atmosphere than CO2.
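[A minimal sketch of how such a CO2-equivalence is usually expressed, added for illustration; it uses the common simplified forcing expression dF = 5.35·ln(C/C0) W/m² (Myhre et al., 1998) and simply back-calculates what extra non-CO2 forcing the quoted 425 ppm figure implies:]

```python
# Convert between concentrations and radiative forcing with dF = 5.35*ln(C/C0),
# then see what forcing gap separates 380 ppm of real CO2 from 425 ppm CO2-eq.
import math

C0 = 278.0          # pre-industrial CO2 (ppm)
co2_now = 380.0     # approximate current CO2 concentration (ppm)
co2_eq = 425.0      # CO2-equivalent of the well-mixed greenhouse gases quoted above

f_co2 = 5.35 * math.log(co2_now / C0)
f_eq = 5.35 * math.log(co2_eq / C0)

print(f"forcing from CO2 alone:            {f_co2:.2f} W/m^2")
print(f"forcing implied by 425 ppm CO2-eq: {f_eq:.2f} W/m^2")
print(f"implied non-CO2 contribution:      {f_eq - f_co2:.2f} W/m^2")
```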
As an aside somewhat off-topic, let me note that the term “greenhouse effect” is a misnomer, since real greenhouses do not warm their interiors primarily by differential absorption of longwave versus shortwave radiation, but rather by suppressing convective motions of the air in the greenhouse that would carry heat away. Some classic experiments about 40 years ago proving this point were published in the Journal of Atmospheric Sciences or Journal of Meteorology (I can’t remember which — I’ve got a copy of the original article buried away in some box, but haven’t looked at it for a long, long time). The author (I believe his surname was Lee) compared ordinary greenhouses to ones where the glass was replaced with rock salt on a grated surface (so that the walls of this greenhouse were porous to air motions but “radiatively correct”) and showed that it was the air restriction of the glass in the ordinary greenhouse that was more important than its weakly-absorbing radiative properties. Lee wanted to call the real phenomenon in nature the “Atmospheric Effect”, but “Greenhouse Effect” sounds better and has stuck through the decades.
Paul Duignan says
Thanks for reponse on comment 63. Further question, given the quote from the paper in comment 61 about a stronger climate system response than previously estimated and statements like, “not yet factored into the models” which seems to appear in some of the reports of recent findings; has anyone done an analysis of the trend in predictions regarding climate change effects over, say the last 15 years or the series of IPCC reports and does any informed person want to give us a guess regarding the future trend line of the predictions.
Chuck says
Regarding #65 and the quote in comment #61 about a paper by MIT scientists Forest, Stone, and Sokolov, they are pointing out the difficulties in defining the “climate sensitivity to CO2 concentration changes in the atmosphere”. The basic issue here is that (as others have undoubtedly noted in other threads on these web pages) since CO2 is mostly transparent to incoming shortwave solar radiation, but strongly absorbing of the longwave radiation re-radiated back to outer space from the earth AT SPECIFIC WAVELENGTHS NOT ALREADY BEING ABSORBED BY WATER VAPOR (which is the most prominent “greenhouse gas”), it follows that increasing CO2 concentrations in the atmosphere will increase the global average temperature in the atmosphere at the altitudes where the CO2 resides (at least until the CO2 level rises to a level which “saturates” its wavelength absorption bands). But how much of a temperature change per unit change in CO2? BTW, this relationship should apply for both increases and decreases in CO2.
Some scientists (Prof. Richard Lindzen, among others) argue that the sensitivity is very low, so that large changes in CO2 will produce only small changes in global atmospheric temperature. I’ve seen estimates attributed to them of 0.1 degrees C change in temperature for a factor-of-2 change in CO2. Other scientists argue that the sensitivity is quite high — perhaps 10 degrees C change in temperature for a factor-of-2 change in CO2. The IPCC reports indicate a range of 1.5 to 4.5 degree C change in temperature for a factor-of-2 change in CO2, depending on the climate model used and lots of other assumptions.
One way to attempt to determine this sensitivity directly is to compare the observed changes in global atmospheric temperature over a long period of time with the observed changes in atmospheric CO2 concentration over the same time period. Easy to say, not easy to do, since the exact measurement records are not that long — reliable temperature records go back 150 years or so, and reliable DIRECT atmospheric CO2 measurements go back to 1957. The MIT scientists note in their paper that there are many other factors that can contribute to the behavior of global atmospheric temperatures in the records, such as a rather large but time-dependent cooling effect from large emissions of sulfur-related aerosols from volcanic eruptions. Since the observed temperature record is presumed to reflect all of the real warming and cooling influences, the sensitivity to CO2 alone must be larger than previously calculated, because its effect on the real atmospheric temperature is being muted somewhat by the cooling effects of the aerosols.
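[A toy energy-balance version of that argument, added as an illustration; the forcing and ocean heat uptake numbers below are rough, illustrative values, not those used by Forest et al.:]

```python
# Simple energy-balance estimate: S = F2X * dT / (net forcing - ocean heat uptake).
# If aerosols are cancelling part of the greenhouse forcing, the sensitivity
# inferred from the observed warming goes up.
F2X = 3.7                      # forcing from a CO2 doubling (W/m^2)

def inferred_sensitivity(dT, net_forcing, ocean_uptake):
    return F2X * dT / (net_forcing - ocean_uptake)

dT = 0.8                       # observed warming so far (degC), illustrative
ghg_forcing = 2.6              # illustrative greenhouse-gas forcing (W/m^2)
aerosol_forcing = -1.0         # illustrative (and very uncertain) aerosol forcing (W/m^2)
ocean_uptake = 0.7             # illustrative ocean heat uptake (W/m^2)

print(inferred_sensitivity(dT, ghg_forcing, ocean_uptake))                    # ignoring aerosols: ~1.6 degC
print(inferred_sensitivity(dT, ghg_forcing + aerosol_forcing, ocean_uptake))  # with aerosols:     ~3.3 degC
```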
It’s important to note that the OPPOSITE effect on the estimation of CO2 sensitivity could be produced if one considers the warming effect of other greenhouse gases (methane, nitrous oxide, and CFCs, for example). In other words, CO2 might erroneously be given “credit” for some of the observed warming in today’s world that may actually be due to the observed increases in methane, N2O, and CFCs. I would also note that the atmosphere’s response is not instantaneous, since the very large heat sink called the ocean considerably moderates and “drags out” over time the atmospheric temperature response. Put another way, it’s not entirely clear whether the atmospheric temperature increases observed in, say, the 1990’s are due to the observed increases in greenhouse gases in the 1990’s. Those temperatures may be responding instead to changes in greenhouse gases that occurred many years (decades? centuries?) before.
My guess on what will be the trend in CO2 sensitivity estimates in future IPCC reports? I think it is most likely that the “spread” in the CO2 sensitivity estimates will increase as more is learned about other factors that influence the relationship between climate and man-made and natural factors, but that the mean of the CO2-specific sensitivity estimates will decline, as the concentrations of other contributing greenhouse gases increase. (Inherent in my guess is that global aerosol concentrations from volcanic emissions will not increase substantially in the future, but I don’t pretend to have any clue whether volcanic eruptions will be more or less prevalent in the future.)
Lewis Cleverdon says
The notion of a human capacity to “stabilize” atmospheric CO2 at a still higher concentration has long seemed to me somewhat hubristic –
As was written in New Scientist last year, any such goal of stabilization above the 320 ppmv at which peat bogs globally began releasing more dissolved organic carbon to watercourses (an effect that has since grown by around 6% per annum) is plainly more rhetorical than scientific.
The very framing of the debate – as being about Global Warming, Climate Change, Stabilization – is more than unhelpful: its soporific tone is downright obstructive to the problem being addressed.
While the goal of the “stabilization” of concentrations needs to retreat to the “capping” of man’s global GHG emissions, perhaps a larger issue is the nonsense of using degrees Centigrade to describe global temperature to the layman.
This scale is a gift to propagandists and the deluded alike, and if science is to end its dismal failure to communicate effectively with the public then it urgently needs revision.
Degrees Arrhenius, being degrees Centigrade/100, may be the simplest replacement, but there might perhaps be a better case for using that name with a scale reflecting the known range of planetary temperature –
I would be glad of scientists’ thoughts on these issues.
Regards.
Neil Fisher says
Re #2: Why do we assume positive feedbacks? I don’t think anyone would deny that climate is variable in the short term yet stable in the long term. This indicates negative feedbacks, not positive ones. If one accepts the MWP and LIA as real events (and obviously not anthropogenic) then it seems to me that negative feedbacks are more likely. Surely any positive feedbacks would have been apparent with these events and we would already have seen climate spiral into a Venus-like state or an ice age?
[Response: Overall feedbacks are negative – the principal one being the long-wave radiation going like T⁴ – however, that does not preclude positive feedbacks for small perturbations. This isn’t ‘assumed’, it is observed – as it gets warmer, ice melts and water vapour increases. -gavin]
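[A small numerical illustration of the T⁴ point in the response above, added for illustration with round numbers:]

```python
# Outgoing longwave radiation goes as sigma*T^4, so a warmer planet radiates
# more: the basic stabilising ("Planck") response.
sigma = 5.67e-8        # Stefan-Boltzmann constant (W m^-2 K^-4)
T_emit = 255.0         # approximate effective emission temperature of Earth (K)
F2X = 3.7              # forcing from a CO2 doubling (W/m^2)

restoring = 4 * sigma * T_emit ** 3      # dL/dT, about 3.8 W/m^2 per K
print(f"Planck restoring term: {restoring:.1f} W/m^2 per K")
print(f"no-feedback warming for doubled CO2: {F2X / restoring:.1f} K")
# Positive feedbacks (water vapour, ice-albedo) amplify this warming without
# overwhelming the T^4 restoring term for small perturbations.
```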