Almost 30 years ago, Jule Charney made the first modern estimate of the range of climate sensitivity to a doubling of CO2. He took the average from two climate models (2ºC from Suki Manabe at GFDL, 4ºC from Jim Hansen at GISS) to get a mean of 3ºC, added half a degree on either side for the error and produced the canonical 1.5-4.5ºC range which survived unscathed even up to the IPCC TAR (2001) report. Admittedly, this was not the most sophisticated calculation ever, but individual analyses based on various approaches have not generally been able to improve substantially on this rough estimate, and indeed, have often suggested that quite high numbers (>6ºC) were difficult to completely rule out. However, a new paper in GRL this week by Annan and Hargreaves combines a number of these independent estimates to come up with the strong statement that the most likely value is about 2.9ºC with a 95% probability that the value is less than 4.5ºC.
Before I get into what the new paper actually shows, a brief digression…
We have discussed climate sensitivity frequently in previous posts and we have often referred to the constraints on its range that can be derived from paleo-climates, particularly the last glacial maximum (LGM). I was recently asked to explain why we can use the paleo-climate record this way when it is clear that the greenhouse gas changes (and ice sheets and vegetation) in the past were feedbacks to the orbital forcing rather than imposed forcings. This could seem a bit confusing.
First, it probably needs to be made clearer that, generally speaking, radiative forcing and climate sensitivity are useful constructs that apply to a subsystem of the climate and are valid only for restricted timescales – the atmosphere and upper ocean on multi-decadal periods. This corresponds in scope (not coincidentally) to the atmospheric component of General Circulation Models (GCMs) coupled to (at least) a mixed-layer ocean. For this subsystem, many of the longer term feedbacks in the full climate system (such as ice sheets, vegetation response, the carbon cycle) and some of the shorter term bio-geophysical feedbacks (methane, dust and other aerosols) are explicitly excluded. Changes in these excluded features are therefore regarded as external forcings.
Why this subsystem? Well, historically it was the first configuration in which projections of future climate change could be usefully made. More importantly, this system has the very nice property that the global mean of instantaneous forcing calculations (the difference in the radiation fluxes at the tropopause when you change greenhouse gases or aerosols or whatever) is a very good predictor for the eventual global mean response. It is this empirical property that makes radiative forcing and climate sensitivity such useful concepts. For instance, it allows us to compare the global effects of very different forcings in a consistent manner, without having to run the model to equilibrium every time.
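In symbols, that empirical property is usually written as a simple linear relation (the numbers below are just the standard textbook values for CO2, included here for orientation rather than taken from the new paper):

```latex
\Delta T_{\mathrm{eq}} \;\approx\; \lambda\,\Delta F ,
\qquad
\Delta F_{2\times\mathrm{CO_2}} \;\approx\; 5.35\,\ln 2 \;\approx\; 3.7\ \mathrm{W\,m^{-2}} .
```

A sensitivity of 3ºC per doubling then corresponds to a lambda of roughly 0.8ºC per W/m², whatever the forcing agent happens to be, which is why a single number is so convenient.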
To see why a more expansive system may not be as useful, we can think about the forcings for the ice ages themselves. These are thought to be driven by the large regional changes in insolation that result from orbital variations. However, in the global mean, these changes sum to zero (or very close to it), and so the global mean sensitivity to global mean forcings is huge (or even undefined) and not very useful for understanding the eventual ice sheet growth or carbon cycle feedbacks. The concept could be extended to include some of the shorter time scale bio-geophysical feedbacks, but that is only starting to be done in practice. Most discussions of climate sensitivity in the literature implicitly assume that these are fixed.
So in order to constrain the climate sensitivity from the paleo-data, we need to find a period during which our restricted subsystem is stable – i.e. all the boundary conditions are relatively constant, and the climate itself is stable over a long enough period that we can assume that the radiation is pretty much balanced. The last glacial maximum (LGM) fits this restriction very well, and so is frequently used as a constraint: from at least Lorius et al (1991), when we first had reasonable estimates of the greenhouse gases from the ice cores, to an upcoming paper by Schneider von Deimling et al, in which they test a multi-model ensemble (1000 members) against LGM data and conclude that models with sensitivities greater than about 4.3ºC can’t match the data. In posts here, I too have used the LGM constraint to demonstrate why extremely low (< 1ºC) or extremely high (> 6ºC) sensitivities can probably be ruled out.
In essence, I was using my informed prior beliefs to assess the likelihood of a new claim that climate sensitivity could be really high or low. My understanding of the paleo-climate record implied (to me) that the wide spread of results from, for instance, the first reports of the climateprediction.net experiment was a function of their methodology rather than a possible feature of the real world. Specifically, if one test provides a stronger constraint than another, it’s natural to prefer the stronger constraint; in other words, an experiment that produces looser constraints doesn’t make previous experiments that produced stronger constraints invalid. This is an example of ‘Bayesian inference‘. A nice description of how Bayesian thinking is generally applied is available at James Annan’s blog (here and here).
Of course, my application of Bayesian thinking was rather informal, and anything that can be done in such an arm-waving way is probably better done formally, since you get much better control over the uncertainties. This is exactly what Annan and Hargreaves have done. Bayes’ theorem provides a simple formula for calculating how much each new bit of information improves (or not) your prior estimates, and this can be applied to the uncertain distribution of climate sensitivity.
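For the sensitivity S and several independent lines of evidence D1, D2 and D3, the theorem reads (schematically, and without the specific likelihoods used in the paper):

```latex
p(S \mid D_1, D_2, D_3) \;\propto\; p(D_1 \mid S)\, p(D_2 \mid S)\, p(D_3 \mid S)\, p(S) .
```

Each independent constraint enters multiplicatively, which is why several individually weak constraints can combine into a much tighter overall distribution.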
A+H combine three independently determined constraints using Bayes’ theorem and come up with a new distribution that is the most likely given the different pieces of information. Specifically, they take constraints from the 20th Century (1 to 10ºC), the constraints from responses to volcanic eruptions (1.5 to 6ºC) and the LGM data (-0.6 to 6.1ºC – a widened range to account for extra paleo-climatic uncertainties) to come to a formal Bayesian conclusion that is much tighter than each of the individual estimates. They find that the mean value is close to 3ºC, with 95% limits at 1.7ºC and 4.9ºC, and a high probability that the sensitivity is less than 4.5ºC. Unsurprisingly, it is the LGM data that makes very large sensitivities extremely unlikely. The paper is very clearly written and well worth reading for more of the details.
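For readers who like to see the mechanics, here is a purely illustrative sketch of how independent constraints combine. The Gaussian likelihood shapes, the reading of each quoted range as a rough 95% interval, and the uniform prior are all simplifying assumptions made here; they are not the choices made by Annan and Hargreaves, so the printed numbers will differ from theirs.

```python
# Illustrative combination of three independent constraints on climate
# sensitivity S (deg C per CO2 doubling) via Bayes' theorem.  The Gaussian
# shapes and uniform prior are assumptions for this sketch only -- they are
# NOT the likelihoods used by Annan and Hargreaves, so the exact numbers
# printed here will not match the paper.
import numpy as np

S = np.linspace(0.0, 10.0, 2001)            # grid of sensitivity values

def likelihood(lo, hi):
    """Gaussian with ~95% of its mass between lo and hi (assumed shape)."""
    mu, sigma = 0.5 * (lo + hi), (hi - lo) / (2 * 1.96)
    return np.exp(-0.5 * ((S - mu) / sigma) ** 2)

L_20c = likelihood(1.0, 10.0)               # 20th century constraint
L_vol = likelihood(1.5, 6.0)                # volcanic response constraint
L_lgm = likelihood(-0.6, 6.1)               # widened LGM constraint

post = np.ones_like(S) * L_20c * L_vol * L_lgm   # uniform prior times likelihoods
post /= np.trapz(post, S)                        # normalise to a pdf

cdf = np.cumsum(post) * (S[1] - S[0])
mean = np.trapz(S * post, S)
lo95, hi95 = S[np.searchsorted(cdf, 0.025)], S[np.searchsorted(cdf, 0.975)]
print(f"posterior mean ~ {mean:.1f} C, 95% range ~ {lo95:.1f}-{hi95:.1f} C")
```

The point is only that the product of the three likelihoods is much narrower than any one of them on its own; the actual values in the paper depend on the real (skewed) likelihood shapes that A+H construct from the observations.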
The mathematics therefore demonstrates what the scientists basically thought all along. Plus ça change indeed…
Dano says
RE 50 and raypierre’s reply:
It’s late, and let’s see if I can be coherent here:
Scenarios are used as management tools or outcomes in adaptive management schemes.
Combined with chosen indicators, one chooses among the scenario trajectories to gauge likely outcomes given the values from a range of indicators. Or, alternatively, one can use a trajectory to manage indicators.
The scenarios, in and of themselves, sitting there, are valueless and lack probability assignments until one takes the values of certain indicators and then analyzes them as to which scenario trajectory they fit into. One then follows the management strategy to manipulate the conditions that create indicator values in order to align with an agreed-upon scenario (such as the track to 3C warming).
The scenario trajectories could also be what you get to (or follow, or want to achieve) when you have, say, q human population growth with r GDP growth and s carbon sequestration and t arable land in production and u urban growth and v climate variability and…you get the idea.
Perhaps the IPCC or another body can include an adaptive management executive summary to help us understand how scenarios work (and maybe lessen some criticism of Ian’s attempt to tighten up emissions profiles).
Maybe that summary will help me write more cogent comments about scenario management too.
Best,
D
raypierre says
If I may rephrase what I think Hugh was hinting at in comment #49, it is this: The SRES scenarios are imperfect, viewed as predictions, but the state of economic models is so dismal (and so unlikely to improve) that we are fooling ourselves if we think that there is some pool of economic “rigor” that can be brought to bear on the emissions forecasts that will significantly improve the situation. Given that state of affairs, one is more or less stuck with driving models with a range of emissions curves lightly constrained by the net pool of fossil fuels and some generous assumption about the amount of demand there will be for them. After all, what do you get out of the scenarios at the end? Answer (basically) a curve of CO2 emissions. How many different ways are there to draw a smooth curve between now and 2100 that goes up monotonically (or maybe goes down a bit toward the end, if we’re lucky and policy makers pay any attention to the science)? Once you’ve decided how much it goes up, and integrated the net emissions to make sure you haven’t burned more coal than exists, the details of where you put the wiggles don’t make much difference to the climate forecast. I’d argue that there’s a big enough library of curves in the SRES scenarios already that one gets a perfectly adequate feel for the range of possible future climates. To think that economics is really going to tell us which curve is the right one is a fata morgana. There may be some room for improvement in the population projections, though — not because the state of the art has improved, but because there is new data on how rapidly the demographic transition can take place.
wayne davidson says
#33-34 I rather think that modellers must be judged by a very high standard beyond human peers. The peer of GCMs and long range climate models is the future; it is the only way to judge them efficient. I only know of NASA GISS as being quite accurate; the rest are not impressive. I find it curious that adaptation by trial in order to eliminate errors has not improved most models, and it is a mystery to me that successful projections are not commonly achieved. This is not a trivial matter, either sensitivity is greater than expected, or there are fundamental mis-applications generating unfortunate failures.
Hank Roberts says
Ian, re my #46 — no offense meant by my dumb question; you’ve got a mix of professional scientists and avid amateur readers here (and it’s the Internet so some write-only contributors are inevitable). This subject in its branches is seriously fascinating stuff; the stretch between writing for scientists and writing for the average reading level is enormous. Many thanks for doing it.
Ian Castles says
I agree generally with Raypierre’s comment on my 50 and also with Dano’s 51. But I don’t agree that it is only the population component of the projections that can be improved. To my mind, the scenarios ‘industry’ has not done a good job of explaining what the scenarios mean in concrete physical terms – and if this were done better, it would quickly become apparent that some scenarios are scarcely conceivable.
For example, the A1FI scenario assumes that by 2100 the average consumption of electricity per head for the world as a whole will be five times greater than the average consumption in the RICH (OECD90) countries in 1990; that the efficiency of energy use will have greatly increased; and that 70% of the total primary energy supply will still be being met from fossil fuels. Does anyone really believe that all of these things could happen? I believe that the model-builders could be asked to give detailed illustrative specifications of what different scenarios assume: for example, under A1FI, what will be the size of the average house in India, what proportion of dwellings will have centralised air conditioning in Nigeria, and what will be the capacity of coal-fired power stations in Japan, Sri Lanka and Poland?
That is one end of the exercise: the long-term projections. But it’s also possible to start off from the present and look ahead through the period for which plans are already under way. I don’t believe that it is true that medium-term economic models have failed so dismally. The projections of energy use and CO2 emissions for 2020 which were made by the Commission of the World Energy Council in 1993 (published in ‘Energy for Tomorrow’s World’) still seem remarkably good 13 years later, with only 14 more years to go. Dr. Pachauri, the present Chair of the IPCC, was one of the fifty experts that produced that report.
The International Energy Agency has been modelling energy production and use up until 2030 in considerable detail, for several years. One can monitor the changes, and overall the projections haven’t changed much. In 2003 the IEA produced a 500-page report, linked to its quantitative forecasts of demand and supply of energy in different regions, on the financing of global energy infrastructure up to 2030. It is not difficult to realise from such studies that the prospective growth in energy use in some of the IPCC scenarios is already completely out of the question.
However sceptical one is about the possibilities of forecasting energy demand and emissions in the longer-term (and most economists are properly sceptical), it is important to try and narrow the range of uncertainty. Figure 9.15 in the TAR main scientific report (at http://www.grida.no/climate/ipcc_tar/wg1/fig9-15.htm) seems to show that the mean temperature increase from 1990 to 2100 varies between the six illustrative scenarios shown in that Figure in a range which is at least comparable in magnitude with the “likely” range identified by A&H for climate sensitivity. Measured against cumulative CO2 emissions from 1990 to 2100 the full set of 35 scenarios stretches from, respectively, 16% above to 21% below the highest and lowest of the six illustrative scenarios shown in Fig. 9.15. And this full set still does not include any scenarios with an end-century population lower than 7 billion, not to mention a number of other reasons why emissions could plausibly be lower than the lower end of the SRES range.
Raypierre asks ‘How many different ways are there to draw a smooth curve between now and 2100 that goes up monotonically (or maybe goes down a bit toward the end …)’ Answer: there are many such ways and one needs to start with some of the SRES scenarios. The lowest SRES scenario (in terms of the forcing pattern at the end of the century) is B1T MESSAGE. This scenario assumes that global CO2 emissions will increase by 42% between 2000 and 2030 – about the same increase as in the IEA’s Alternative Scenario, which makes moderate, specified assumptions about what countries will do under the heading of ‘climate policy’ (much of which they may well have decided to do anyway, for reasons of energy security, curbing air pollution etc.)
Yet, on my understanding, B1T MESSAGE stabilises CO2-equivalent concentrations at well below a doubling of the pre-industrial level of about 270 ppm. So why is it being constantly asserted that emissions must start falling within a decade or so if such a stabilisation is to be achieved? I do not believe that the IPCC ‘no climate change policy’ scenarios have been properly reconciled with the stabilisation scenarios. This is unfinished business.
[Response: Ian, what do you think about the issue of the inertia arising from the long capital life of energy systems? Without policies rewarding carbon-free energy production going into place quite soon, the number of coal-fired power plants likely to be built in the next decade has the potential to lock us into increasing emissions for the next 50-60 years, given the long capital life of such power plants. That would seem to argue that even if we could tolerate some delay in the date of the drop in emissions, we can’t tolerate much delay in implementing economic disincentives to coal burning and tar sand oil production. I’m interested in what the energy forecasting models have to say about this issue, but don’t know where to start looking in order to find out. –raypierre]
[Response: On the matter of how many ways there are to draw the curve, I was suggesting (for the sake of discussion) that perhaps one could just replace the whole SRES process with a two or three parameter family of curves, and dispense entirely with any pretense that these are supposed to be actual forecasts. From the standpoint of the way most of the modelling community makes use of the scenarios, this would do just about as well as SRES, or would if it weren’t for the need for sulfate aerosol emissions (which may not be all that critical in the out years). One would still want to lightly constrain the curves by things like total fossil fuel availability, plausible ranges of population growth, and plausible ranges of per capita energy usage or per capita carbon emissions. By “plausible ranges” I mean reasoning to the effect that we could estimate the year 2100 Chinese per capita (or maybe per-GDP) carbon emission to be something between the present US level and the present French level. From my standpoint, I’d say that SRES has too many scenarios rather than too few. The right way for policy makers to use IPCC results to make decisions is to take sensitivity coefficients from IPCC WGI and use those in parameterized climate models run by economists themselves. For that matter, there’s nothing to keep economists from doing this right now. It’s what Nordhaus does. –raypierre]
Alastair McDonald says
Re #53 I entirely agree with your view “This is not a trivial matter, either sensitivity is greater than expected, or there are fundamental mis-applications generating unfortunate failures.” The radiation scheme that I am proposing does have a sensitivity greater than expected and highlights a fundamental mis-application. However, Gavin does not want me to discuss it here :-( See his response to #45.
However, perhaps he will allow me to correct his assertion that LTE “has nothing to do with whether a volume of air is radiatively cooling or not.” Air in LTE obeys Kirchhoff’s Law – see http://amsglossary.allenpress.com/glossary/search?id=local-thermodynamic-equilibrium1 . If that is the case, then the input radiation equals the output radiation and the Law of Conservation of Energy dictates that the temperature of the air cannot change.
The corollary to that is that if the air temperature near the surface is changing, then it cannot be in LTE. In other words, the boundary layer, where the air temperature follows the diurnal cycle, is not in LTE.
But my views have been singled out to be banned from appearing here. So, Adieu. I will no longer outstay my welcome :-)
Cheers, Alastair.
[Response: Alastair, your views on most subjects are most welcome here. However, insisting that everyone has this particular point wrong (except you) and not listening to anyone who points this out is a bit of a waste of time. One last try: you are confusing LTE with the concept of the large scale steady state. This is just not the case. LTE is a statement about how evenly energies (and thus temperature) are spread over an air volume; it is not a statement about whether that volume is in large scale equilibrium with the larger system. LTE means that the radiating temperature of the volume is the same as the bulk temperature and is valid over >99.99% of the atmosphere. Thus endeth the lesson. – gavin]
[Response:And if the theoretical argument isn’t good enough for you, consider that radiative transfer schemes based on LTE have been verified against field observations of measured fluxes literally hundreds of times — and millions of times if you consider that such schemes are the very basis of all remote sensing in infrared and longer wavelength channels. Alastair, please don’t go, but please do stop bringing up the exact same spurious claims about radiative transfer over and over again. –raypierre]
Hugh says
Re: current 49-55
Yes, taking a leaf from Hank’s book (thank you), after a night’s sleep and a day at work I return to find that my comment caused a little more angst than perhaps I intended. I’d like to take this opportunity to apologise if I appeared as a hit-and-run ‘writer’.
Although…I feel I do have a point, even if it takes raypierre to make it more eloquently than I ever could. Roger Pielke Jnr may not approve of this, but as I work in the field of the public perceptions of flood risk I can only say that I get frustrated when ‘I perceive’ that people are tending to make themselves comfortable on the left tail of any distribution curve they come across, before refusing to consider the slightest attempt at an ascent up the gradual slope above them.
Perhaps I have just laid myself open to cries of *alarmist*; however, my point is that *extremes happen anyway* and, as I see it (and perhaps this wasn’t the best thread to make this point), it is inappropriate to skew conversation in any forum which is considered a teaching and learning resource (which RC without doubt is) completely toward the moderate and benign.
I am aware of uncertainty, I am also aware of extremes, and my personal position is that one should not be used to camouflage the other.
I apologise again but also thank subsequent contributors for clarifying the SRES issue for me. I shall now respectfully doff my cap, touch my forelock and retire to the rear of the church in order to re-take my pew…on the right.
David donovan says
Re 56.
Alastair please see the following article.
http://en.wikipedia.org/wiki/Thermodynamic_equilibrium
Cheers,
Ian Castles says
Raypierre, Thanks for your responses to my 65. On the first, I think we are at cross purposes. The purpose of the IPCC emissions scenarios was (is) to project what future emissions levels under various plausible assumptions might be, ASSUMING that there are no policies explicitly to combat climate change. My point is that, IN FACT, every one of the 35 IPCC scenarios assumes that CO2 emissions WILL go on increasing at least up until 2030 – and that IN FACT the models used by the IPCC to produce the temperature projections in the TAR find that under several of those scenarios (in the B1 family) CO2-equivalent concentrations are stabilised at or below twice the pre-industrial level.
I agree with your inertia point, but I’m not addressing the policy question: as a first step in framing policies explicitly to combat climate change, there is a need to assess what might happen in the absence of such policies. Many jurisdictions which have announced targets to reduce GHGs by 60% or more by 2050 (e.g., the UK, California, South Australia) assert that there is a need for such reductions in order to stabilise at 2 X CO2 or below. On my reading of the TAR, this is simply untrue. If I am wrong on this TECHNICAL question, I’m ready to be told why. The POLICY question of whether and what rewards should be provided for clean energy is to my mind an entirely separate matter.
On your question of where to start looking on the energy forecasting/technology issues, I’d recommend the IEA World Energy Outlook 2004, especially the discussion of the IEA Alternative scenario. The World Energy Outlook 2005 would also be useful, but it is focused especially on the Middle East.
I’ve a lot of sympathy for your suggestion of just having hypothetical emissions curves rather than fully-fledged scenarios. In fact, David Henderson and I made a proposal along these lines to the convenor of the discussions on the scenarios at the IPCC meeting in Amsterdam in 2003, Dr. Richard Moss. The problem is that if the scenarios are to be used to guide (say) energy policies, you have to know what energy use, and from what sources, underlay the CO2 emissions at different points on your hypothetical curve. The need for the underlying socio-economic assumptions is even more obvious if the purpose is to project future numbers at risk of hunger, malaria, etc. The whole reason for commissioning the SRES in the first place was to have projections that would be useful for purposes other than as input to climate models.
With respect, I think that your suggestion that (e.g.) China’s per capita emissions in 2100 be modelled as an average of the PRESENT US and French levels embodies a fundamental fallacy. The UK’s per capita emissions now are lower than they were a century ago, but its real income is five times as great. Conversely, China’s per capita income is now higher than the UK’s was a century ago, but its per capita emissions are only a quarter as great. In order to model prospective emissions a century hence, it’s essential to take account of technological change: I think that it’s surprising that even the lowest of the IPCC scenarios project significant fossil fuel CO2 emissions at the end of the century.
I’ll reply separately to your point about Nordhaus and the economic modellers.
[Response: Ian, the chances of a decreasing CO2 emission trend by 2030 are extremely slim if not absolutely zero (barring some absolute economic catastrophe), so I think the SRES scenarios are on pretty safe ground there. And the climate changes through 2030 are mostly going to be a catch-up exercise – so very little change can be expected climatically based on conceivable emission pathways to that date. On the technical point regarding what is necessary for eventual stabilisation, back of the envelope calculations do indeed suggest that a 60-70% reduction in emissions would be necessary – the exact number will depend on carbon cycle feedbacks to the new climate regime. This point is simply related to the carbon cycle and has nothing to do with any specific scenario of course. The basic argument goes like this: current emissions are around 7 Gt/yr, of which 60% remains in the atmosphere. The sinks for this anthropogenic CO2 depend, for the most part, on the atmospheric concentration, which implies that the CO2 levels today are sufficient to ‘push’ about 3 Gt/yr into the ocean and biosphere. Therefore a reduction to 3 Gt/yr global emissions would be much closer to a stable state than the current emission level. This is not a controversial number. – gavin]
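Gavin’s back-of-the-envelope argument above amounts to two lines of arithmetic; here it is written out explicitly (purely illustrative, and ignoring the fact that sink strength itself evolves with concentration and time):

```python
# The back-of-the-envelope carbon balance from the response above, in numbers.
# Illustrative only: real sink strength depends on concentration and time.
emissions = 7.0            # GtC/yr, approximate current fossil-fuel emissions
airborne_fraction = 0.6    # fraction of emissions that stays in the atmosphere
sinks = emissions * (1 - airborne_fraction)   # GtC/yr taken up by ocean and biosphere today
cut_needed = 1 - sinks / emissions            # reduction needed to match today's sinks
print(f"sinks ~ {sinks:.1f} GtC/yr, implying an emissions cut of roughly {cut_needed:.0%}")
```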
Dano says
RE 55:
Ian, your argumentation omits the implementation from feedback loops.
There is no a priori necessity, say, 20 years after a projection to stick with that projection, unless you are managing to that projection. So, for instance, your “[f]or example, the A1FI scenario assumes that by 2100 the average consumption of electricity per head for the world…” would be modified in an adaptive management program as more information became available; presumably, management strategies would attempt to reduce the amplitude of this scenario and successful implementation would change the trajectory.
But this gets back to Ian’s observation that “[t]o my mind, the scenarios ‘industry’ has not done a good job of explaining what the scenarios mean in concrete physical terms”.
Regardless, I agree with raypierre that there are too many scenarios. There is no way decision-makers can decide in that sort of environment.
Best,
D
Ian Castles says
Thanks Gavin for your comments, but with genuine respect I don’t think that what you’ve said meets the point I’m trying to make. I’ll try putting it a different way.
The projected level of CO2 emissions in 2100 under the IPCC B1T MESSAGE scenario, which, like all of the IPCC scenarios, excludes the impact of policies that explicitly address climate change (e.g. the Kyoto Protocol), is 2.68 GtC. As you’ve said that 3 GtC/yr is not a controversial number, I assume that you agree that the IPCC’s emissions modelling experts believe that annual CO2 emissions may be at a sustainable level in 2100, without climate change policies.
One can of course debate whether the assumptions underlying that projection are reasonable, and I’ve already said that I don’t believe that the scenarios ‘industry’ has done a good job in this respect. But I can’t see the point of the IPCC going through an elaborate four-year study imagining possible futures if the Panel then turns round and says that this particular future isn’t imaginable after all.
The next step is to look at the time profile of global fossil fuel CO2 emissions under this scenario. As I’ve said, B1T MESSAGE projects a growth of 42% between 2000 and 2030. It happens that this looks to be in the ballpark as a PREDICTION at this stage but THIS IS NOT NECESSARY TO MY ARGUMENT.
EITHER the IPCC was wrong in finding that if (I emphasise ‘if’) emissions rose by 40%+ in the first decades of the century it would still be possible (by following the B1T MESSAGE profile during the rest of the century) to achieve stabilisation at a CO2-equivalent concentration lower than 2 X CO2-equivalent; OR I am wrong in my interpretation of the IPCC’s findings as outlined in the SRES and Chapter 9 and Appendix II of the WGI Contribution to the TAR (in which case please tell me where I’m wrong); OR many governments (and others) are wrong in claiming that emissions must start to fall within a decade or so, and be below the 2000 level by large percentages by mid-century. I don’t see any other alternative.
If the governments are right, the world has already failed. As you say, growth in CO2 emissions for several decades is inevitable. But if the IPCC is right, this growth does not at all exclude achieving stabilisation via (for example) the B1T MESSAGE emissions trajectory. The achievability or otherwise of this trajectory is a separate issue which will require the probing of assumptions – it can’t be settled by back-of-the-envelope figuring.
The probability range for climate sensitivity in the A&H paper refers, as I understand it, to the increase in temperature at equilibrium resulting from a doubling in CO2-equivalent atmospheric concentrations. Am I right in my understanding that if these concentrations are stabilised at or below twice the pre-industrial level, the frequency distribution of the probability of different climate sensitivities in that paper can be used to assess the prospective increase in global mean temperature at equilibrium COMPARED WITH THE PRE-INDUSTRIAL LEVEL? In other words, the increase which has already occurred and the ‘catch-up exercise’ between now and 2030 are included within the increase at equilibrium that is estimated to result from the near-doubling in CO2. If that is the case, I don’t understand the relevance to this discussion of your ‘catch-up exercise’ point. The catch-up increases are already taken into account in the IPCC temperature projections.
[Response: I think you may be confusing reaching a level of emissions that will (eventually) lead to a stabilisation, and the actual level at which CO2 may be stabilised. The more CO2 that is emitted in the meantime, the higher the eventual stable level will be. The B1 scenario looks like it would stabilise at around 2xCO2 in the 22nd Century which is a relatively optimistic outcome (given no specific climate related reductions in emissions). But don’t kid yourself that this would be a minor climate change. The EU target of trying to avoid a 2 deg C change over the pre-industrial level is extremely unlikely to be met under such a scenario, and most efforts appear to be directed towards stabilisation at significantly less than 2xCO2, hence the emphasis on slowing growth rates now. -gavin]
[Response: I can even go Gavin one better. You can compute the CO2 response to your favored emission scenario yourself, using Dave Archer’s online version of the ISAM model here. I’ll leave the land carbon emissions the same as in IS92b, and assume that we reach 8Gt per year in 2010 (an approximate linear extrapolation from the 7Gt per year global emissions in 2002, from CDIAC). Then, if I assume we at least manage to stabilize the emissions thereafter at 8Gt per year, atmospheric CO2 hits about 550ppm in 2100, but it’s not in equilibrium at that point — it’s still rising and will continue to do so into the indefinite future, until the coal runs out. So, in order to stabilize CO2 at less than doubling, it is certain we have to reduce emissions after 2010. Suppose we reduce emissions by 50% from 2015 onward. In that case, we only hit 440 ppm in 2100, which is short of doubling, but the important thing is that the CO2 is not STABILIZED at 440ppm; in 2100 it is still rising at a good clip, and without further reductions in the out years it will double and continue increasing beyond that. If we reduce CO2 emissions beyond 2010 by 75% (down to 2Gt per year), then CO2 does indeed stabilize at a value short of doubling (about 400ppm). ISAM isn’t a state of the art carbon accumulation model, but it’s not bad. By these calculations, the statement that it would require a 60% reduction of emissions in the near term seems spot on. If you have some other scenario in mind that you think would stabilize CO2 without such an emissions reduction, you can try it out in ISAM and see what happens. I’m curious about what you actually had in mind. Were you perhaps confusing a target of keeping CO2 below doubling at 2100 with a target of achieving stabilization by 2100? Those are two very different things. –raypierre]
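The qualitative point in the response above, that holding emissions flat does not stabilise concentrations while a deep cut lets uptake catch up, can be seen even in a toy one-box model. The sketch below is not ISAM; the uptake timescale and the GtC-to-ppm conversion are rough assumptions chosen only for illustration.

```python
# Toy one-box carbon model -- NOT ISAM, far cruder -- illustrating why flat
# emissions leave CO2 still climbing while a deep cut roughly stabilises it.
# The uptake timescale and conversion factor are illustrative assumptions.
GTC_PER_PPM = 2.12   # GtC of atmospheric carbon per ppm of CO2
TAU = 70.0           # yr; assumed linear-uptake relaxation toward 280 ppm

def integrate(emissions_gtc, years=100, c0=380.0, c_pre=280.0, dt=1.0):
    """Euler-step atmospheric CO2 (ppm) under constant emissions and linear uptake."""
    c = c0
    for _ in range(int(years / dt)):
        dcdt = emissions_gtc / GTC_PER_PPM - (c - c_pre) / TAU
        c += dcdt * dt
    return c, dcdt

for e in (8.0, 2.0):
    c, tendency = integrate(e)
    print(f"{e:.0f} GtC/yr held for a century -> {c:.0f} ppm, "
          f"still changing at {tendency:+.2f} ppm/yr")
```

Even this crude version shows the same qualitative behaviour as the ISAM runs described in the response: constant emissions of 8 GtC/yr leave CO2 around 500 ppm and still rising after a century, while a cut to 2 GtC/yr brings the budget close to balance well short of doubling.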
raypierre says
Forgive me for leading us farther astray from the topic, but I’m finding the dialog with Ian very instructive, and if not here, where else would we pursue it? If I understand Ian correctly, he is suggesting that naturally occurring technological progress will reduce the per capita carbon emission in the future to a point where the problem of global warming will largely solve itself. Evidently, many nations of the world must not believe this, otherwise they’d have nothing to lose by signing on to mandatory carbon controls beyond Kyoto. I don’t see any clamor to do this, not by the US, not by China or India, and not by Australia either. Perhaps these nations are misguided.
However, I question the assumption that technological innovation will substantially decrease per capita carbon emissions in the future, in the absence of aggressive internalization of the environmental costs of coal burning and tar sand mining, as implemented by either cap and trade schemes or carbon taxes. Without such measures, where would the incentive be to invest in new technology? Coal is abundant and cheap, and is likely to remain so. There are no market signals in place (beyond the limited Kyoto action) that would cause any free enterprise to consider anything other than coal burning in pulverized coal plants, perhaps with some scrubbers to cut down on the more obvious pollution. In fact world per capita CO2 emissions leveled off at about 1.1 tonnes per person per year in about 1970 and have stuck there ever since. So where is the incentive to do things any differently in the future? The improvement from the 19th to the early 20th centuries was driven by cost, and the desire to get rid of the more obvious forms of pollution (like the London smogs). Most of those incentives are gone now, except to a limited extent in China which is scheduled to reduce its coal related pollution somewhat.
Ian Castles says
Thanks Raypierre. I’m finding this interesting too. There’s no mystery about the profile of emissions I’m talking about: it’s the B1T MESSAGE scenario in the SRES, NOT the B1 IMAGE scenario which has a much higher growth of emissions. Thanks for the reference to ISAM and I’ll do my best to feed B1T MESSAGE into it and report what I come up with.
The reason that I remain mystified is that your response to 59 seemed to me to mean that if CO2 emissions in 2100 amounted to 3GtC or less, they would be absorbed by sinks and the atmospheric concentration of CO2 would not be increasing at that point. Is that correct, or is it the case, as you seem to imply, that the atmospheric concentrations would still be rising at ‘a good clip’? Even if ISAM tells me the latter, I think that I will still have trouble understanding the mechanism: where is the increased CO2 concentration coming from, if not from emissions? (I understand the point that there will still be a TEMPERATURE increase in the pipeline).
Now to your 67. You are not understanding me quite correctly because it’s not me that’s canvassing the possibility that ‘naturally occurring technological progress’ will reduce the carbon emission in the future to the level where the problem will solve itself: it’s the IPCC itself, which approved the Special Report on Emissions Scenarios at the WGIII plenary session at Katmandu, Nepal in March 2000. The arguments in your second paragraph are effectively questioning the findings of that Report. In Box 4-9 (pp. 216-220) of the SRES there is an analysis of estimated cost levels and the output of energy from 22 energy technologies in 2050 and 2100, for each of the six IPCC marker scenarios. Of course the technological assumptions underlying some of the scenarios may be over-optimistic, but that would have to be demonstrated on the basis of evidence rather than the casual empiricism exhibited in your examples of London a century ago and China today. (It’s important to note that the IPCC experts excluded all technologies that had not already been demonstrated on a prototype scale, which seems to me to be a very conservative assumption for projections extending a century into the future).
[Response: Yes, the technologies exist, but I ask again, what is the economic incentive to use them when coal is so cheap? –raypierre]
Ian Castles says
Raypierre, I downloaded the ISAM link but wasn’t able to work out how to enter an emissions profile such as that of B1T MESSAGE. But I can hardly believe that the projected concentrations (including at equilibrium), forcings and temperature increases under this scenario haven’t been modelled already. Dr Tom Wigley said in his presentation to the IPCC Expert Meeting on Emissions Scenarios in Amsterdam on 10 January 2003 that the projected level of CO2 concentrations under this scenario was 480 ppm in 2100. Regrettably, the IPCC has not published a report on this meeting. However, the projected emissions of the main GHGs under the B1T MESSAGE scenario are shown, in the same detail as was used to model the six illustrative scenarios in Appendix II of the WGI Contribution to the TAR, in HTML format at http://www.grida.no/climate/ipcc/emission/164.htm , and there is a link to the same data in Excel at http://www.grida.no/climate/ipcc/emission/data/allscen.htm . Would it be possible for someone more familiar with the technology than I am to calculate some rough-and-ready projections of key climatic data from these emissions numbers?
Lawrence McLean says
I would like some feedback from the climate scientists regarding my thoughts regarding climate change. From what I have seen of the current climate models (or simulations) they are remarkably accurate.
I certainly disagree with a line of argument put forward by the contrarians that you cannot make predictions. They love to quote some physicist who said something to the effect of: “making predictions is very hard, especially about the future”. If this were the case, then we might as well throw away engineering. It is a stupid argument, in line with the rest of their repulsive opinions.
My thoughts follow:
I have been thinking about the weather in terms of basic thermodynamics, in order to imagine what a worst case ultimate climate equilibrium would look like.
The way I figure it: weather involves changes in phase of water (solid – liquid – vapor) and the movement of air. These changes require work to drive them. The work that is available to drive the weather is derived from the temperature differences within the weather system. For example, in your car the amount of work that is available, before any other efficiency losses, is set by the temperature difference between the temperature of combustion and the temperature of that same gas at the end of its expansion in the cylinder.
If global warming leads to reduced temperature differentials then what will emerge will be a hotter climate with much less rain and much less wind, so there will be less humid air being forced over mountains. That would mean widespread desert.
My questions to the climate scientists are:
Do the climate models predict ultimately (when equilibrium is reached) reduced temperature differentials, including latitude and altitude?
What was the climate like after the Permian extinction, when CO2 levels were very high?
Hank Roberts says
Lawrence – Permian models, maps, discussion that answer your question, several in the top ten hits here. Note that there’s no ultimate equilibrium, the continents drift, orbit/axis change ….
http://www.google.com/search?q=Permian+climate&start=0
Also ‘Permian’ in the search box at top of this page will lead you to other topics here, one in particular, on that question.
Ian, Raypierre, your conversation is fascinating, more! more!
raypierre says
Continuing the discussion with Ian…
To put in new data in the online ISAM model, you just click on one of the IPCC scenarios and then you get an editable table, where you can put in the numbers you want. Since this is made for use in class, the time resolution for the data entry is a little limited, to make things easier for the students; however, you can do pretty well by putting in SRES data interpolated to the time points allowed in the web interface.
I did this for the B1T MESSAGE scenario, which I hadn’t looked at closely before (as I said, there are a whole lot of scenarios there). This scenario basically assumes we manage to hold fossil fuel emissions steady at about 9Gt per year between 2015 and 2025, whereafter they drop precipitously to 5Gt in 2050 and 2.68 Gt in 2100. This indeed does result in stabilization at 460ppm in 2100, largely because the sharp drop to 2.68 brings emissions down to where they are balanced by oceanic uptake of the accumulated atmospheric burden. It thus does appear that if we can at least keep emissions flat up to 2025, we can put off the need for really strong reductions until after 2025. Keep in mind also that the MESSAGE scenario includes an assumption of 1.5Gt additional reduction by 2050 from land carbon sequestration. Reducing the resulting net emissions to 3.5Gt in 2050 represents a 56.25% reduction from an 8Gt baseline. It seems excessive to raise such a hue and cry (as if it were a real scandal) if some politicians in some speeches set their target at 60% reduction rather than 56.25%. For politicians, I think that’s doing pretty well! For that matter, how much reduction you need in 2050 depends somewhat on how much you think emissions will increase before then. We’re really quibbling about differences around the margin here. If everybody managed to adhere to something like the MESSAGE scenario, you’d get no complaint from me. Note also that stabilizing at 460ppm still gives you a pretty hefty climate change once equilibrium is reached, so if it’s possible to do better by more aggressive emissions reductions, that would be highly desirable.
Alastair McDonald says
Re #58 Thanks David for that link, but you don’t say whether you think it agrees or disagrees with what I have to say. It is mainly concerned with LTE in a glass of water containing an ice cube. That is rather dissimilar to a planetary atmosphere containing a mix of gases which includes greenhouse gases in trace amounts and a condensible greenhouse gas.
It does say:
“It is important to note that this local equilibrium applies only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas need not be in thermodynamic equilibrium with each other or with the massive particles of the gas in order for LTE to exist.”
That does not fit with my understanding. As I understand it LTE exists when the Planckian temperature equals the Maxwellian temperature i.e. when the kinetic energy of the massive particle matches the emission of photons. See; http://scienceworld.wolfram.com/physics/LocalThermodynamicEquilibrium.html
However in the glass, the water and ice will radiate more or less as blackbodies unlike the gases in the atmosphere.
Cheers, Alastair.
Coby says
I don’t think it is said nearly enough in conversations about IPCC scenarios: the world does not come to a stop in 2100. Be careful not to look at the climate in 2100 under your favorite scenario and think “that’s manageable” without understanding that there is usually more change in the pipe.
I know raypierre touched on that by making the distinction between CO2 levels in 2100 vs a CO2 equilibrium in 2100, I just wanted to frame the point more prominently.
Is a 2ºC change over the 21st century ok? Well what if there is another 1ºC in the pipe? This is harder, especially when we DO NOT KNOW what the threshold is for massive releases of methane from ocean sediment.
I know Michael Tobis makes this point from time to time but I agree and it is worth repeating, and that is that the idea of focusing on 2100 is really very arbitrary, even if I understand the need for some line in the sand and the difficulty of caring about 1000 years from now.
James Annan says
With reference to comments 31 and 44, I think that this paper was indeed published in time to be cited in the latest draft of the AR4, but apparently it wasn’t in time to have much influence, if this article is to be believed.
[Response: I’m pretty sure that can’t have been the current draft because they were not finalised at that point. Let’s hope not anyway! -gavin]
Hank Roberts says
That was a month ago; have they been continuing to edit the draft since then? For the public, I think dealing with the “low probability” (James, you call it less than a 5% chance) of higher warming needs to take into account the number of people at risk.
If you compare 2 degrees C to a small asteroid, what level of warming compares to a large one?
The answer is counterintuitive, for risk perception, and I think IPCC needs to deal with the low probability high consequences case. A 4% risk of, what, 6 degrees C warming, to pick a number wildly.
Compare your odds of dying from an asteroid impact, from The Planetary Society’s page:
http://www.planetary.org/explore/topics/near_earth_objects/threat.html
“… in any given year, there is only a 1 in 500 million chance that you will die from a Tunguska-like impact. Over a human lifetime, which we round up to an even 100 years for simplicity, it would seem there is only a 1 in 5 million chance that a Tunguska-like impact will result in your untimely death. A 1 in 5 million chance may be small enough that most people would give it little practical concern.
“What about the comparative hazard from much less frequent global-scale impacts? If we assume that such events occur only once every million years but are so devastating to the climate that the ultimate result is the death of one-quarter of the world’s population, this translates to an annual chance of 1 in 4 million that you will die from a large cosmic impact even if you happen to be far removed from the impact site. Integrated over a century, our simple metric for a human lifetime, the chance becomes 1 in 40,000 that a large cosmic impact will be the cause of your death. Such a probability is in the realm that most people consider a practical concern.”
—–
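As a quick check, the arithmetic in the quoted passage does work out as stated (nothing new here, just the quoted assumptions turned into numbers):

```python
# Check of the probabilities quoted above: an assumed once-per-million-year
# impact killing a quarter of the world's population, integrated over a
# nominal 100-year lifetime.
annual_event_prob = 1.0 / 1_000_000    # one global-scale impact per million years (as quoted)
fraction_killed = 0.25                 # quarter of the world's population (as quoted)
annual_personal_risk = annual_event_prob * fraction_killed
lifetime_risk = 1 - (1 - annual_personal_risk) ** 100
print(f"annual risk   ~ 1 in {1 / annual_personal_risk:,.0f}")    # ~1 in 4,000,000
print(f"lifetime risk ~ 1 in {1 / lifetime_risk:,.0f}")           # ~1 in 40,000
```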
Now, the odds we’d get a serious excursion into warming from unexpected (methane burp after a major subduction earthquake series, a Yellowstone-caldera explosion) are low. One in four million per year?
Of course, if we got the warming _and_ the asteroid, they’d cancel out (wry grin).
I realize this is all outside what the modelers model. But it’s what the worriers worry about, eh?
Alastair McDonald says
Re 70 et al. Did you include the forcing from the reduction in Arctic sea ice in your sensitivity calculations? Or were they based on the effect of CO2 on the level of the atmosphere which emits at the effective temperature?
Cheers, Alastair.
Alastair McDonald says
Re 71 Hank, if CO2 levels continue to rise, then the effects of global warming will get worse. How bad does it have to get before we take action? When we do decide, how much worse will it get then?
Cheers, Alastair.
Lawrence McLean says
65 Continued…
Hank,
I realise that there is no absolute equilibrium; however, what I mean is: what will conditions be like when the system has had time to adjust? Periods of stability of climate do exist. In any case, thank you for the suggestion; I thought there might have been a ready answer to my two questions.
Cheers
James Annan says
Gavin (70),
Maybe you don’t realise quite what an outlier we currently are in the probabilistic climate prediction community :-)
It will be interesting to see what gets written in the months and years to come, but anyone who’s hoping for a handbrake turn in the next few days is likely to be disappointed. Indeed, it would be risky of the IPCC authors to overturn several years of accumulated work based on one short GRL paper that they’ve barely had time to read. Who knows, maybe someone will demonstrate why we are wrong…OTOH, plenty of people have seen the paper now and I’ve yet to hear anyone in the field still maintaining their belief (in a significant chance of a sensitivity substantially greater than 4.5C) in the light of our argument.
[Response: But as I said, your paper essentially formalises what most people felt already – some of those were likely reviewers of the first draft and so one might expect more of a consensus in the final version. We shall see. – gavin]
Hank Roberts says
Aren’t people saying that besides sensitivity to CO2, other causes for sudden climate excursions need to be mentioned, to cover worst case risk as explained to the public? I don’t see them as ignoring your models, but as saying the models and CO2 sensitivity aren’t guaranteed to cover all risks.
Ian Castles says
Raypierre, In your response to my 63, you ask ‘what is the economic incentive to use [alternative technologies to coal] when coal is so cheap.’ I find it ironic that I am being asked this question, when the IPCC issued a press statement in December 2003 (its only press statement in more than two years) which strongly criticised me and my co-author David Henderson for ‘questioning the scenarios developed by the IPCC’. The statement noted that these had been published in a report ‘based on an assessment of peer reviewed literature … and subject to the review and acceptance procedures followed by the IPCC.’
The IPCC defines ‘scenario’ as a ‘PLAUSIBLE description of how the future may develop’. I think your question implies that some of the IPCC scenarios are implausible – for example, there are three (A1T AIM, A1T MESSAGE and B1T MESSAGE) which assume that coal’s contribution to total energy supply will be less than 3% at the end of the century. The reasons why the SRES Writing Team found these scenarios to be plausible are outlined in the Report and in the peer-reviewed literature cited in the lists of references. Another valuable contribution to the literature on this subject, not cited in the SRES, is Chakravorty et al, 1997, ‘Endogenous substitution of energy sources and global warming’, Journal of Political Economy, 105 (6): 1201-34.
Your 67 is totally off-beam because the emissions numbers you cite bear little relationship to those in the B1T MESSAGE scenario (or any other IPCC scenario). You say that fossil fuel emissions under this scenario ‘drop precipitously to 5 GtC in 2050’: they don’t – these emissions are projected at 8.48 GtC in 2050. According to the table on p. 526 of the SRES (to which I provided links to HTML and Excel versions), global fossil fuel CO2 emissions in this scenario at mid-century are projected to be 42% above the level in the Kyoto base year.
I don’t know where your numbers come from – perhaps some other scenario done by the MESSAGE modellers at IIASA? You ask me to keep in mind that the MESSAGE scenario ‘includes an assumption of 1.5 GtC additional reduction by 2050 from land carbon sequestration’. No, this is not true of B1T MESSAGE: the reduction from land use changes in 2050 is 0.67 GtC, not 1.5 GtC.
You go on to say that ‘Reducing the resulting net emissions to 3.5 GtC in 2050 represents a 56.25% reduction from an 8 GtC baseline’, and that ‘It seems excessive to raise such a hue and cry (as if it were a real scandal) if some politicians in some speeches set their target at 60% reduction rather than 56.25%.’ Well, for a start the projected net CO2 emissions in 2050 in B1T MESSAGE are 7.81 GtC, not 3.5 GtC, and the reduction from an 8 GtC baseline is therefore 2.4%, not 56.25%. And the 60% reductions to which I referred are formal targets set and announced by national and state governments, not ‘some politicians in some speeches.’
I reject your statement that ‘we’re really quibbling about differences at the margin’: the differences are in fact very large and the targets are, at least in most cases, expressed in relation to 2000 or current levels (NOT from the peak levels that may be achieved as your comments imply).
I’m glad of your confirmation that the CO2 emissions of 2.68 GtC under B1T MESSAGE in 2100 would be balanced by oceanic uptake of the accumulated atmospheric burden, so that emissions would be stabilised at that level. Gavin said in response to my 61 that I might be ‘confusing reaching a level of emissions that will (eventually) lead to a stabilisation, and the actual level at which CO2 may be stabilised’, but I take your comment to mean that I was not confused on this point.
You say that ‘stabilizing at 460ppm still gives you a pretty hefty climate change once equilibrium is reached, so if it’s possible to do better by more aggressive emissions reductions, that would be highly desirable.’ In my view, the question of how aggressive emissions reductions should be is properly a matter for decision by governments, not for scientists. But scientists have an important role in helping to ensure that government decisions are made on a fully-informed basis.
In my 51 I said that “I do not believe that the IPCC ‘no climate change policy’ scenarios have been properly reconciled with the stabilisation scenarios. This is unfinished business.” I reiterate that statement. If anything, the comments that have been made confirm my view that there is no substance to claims that CO2 emissions must start falling within a decade or so if 2 X CO2 is to be avoided. I know some argue that the scale of the emissions reduction task must be exaggerated in order to put pressure on governments to act. I disagree strongly with that view. It’s the responsibility of scientists to tell it like it is.
[Response: Ian, if your aim was to completely obfuscate any original point you were trying to make, you have succeeded. If you wanted to hoist a strawman argument (‘global emissions must fall within a decade’) to knock down, then you may have succeeded as well. The fact remains that decisions being made in the next decade will determine not this decade’s emissions but those of the next 20 or 30 years. Unrestrained growth of emissions over that time scale makes stabilisation at levels consistent with a climate change of (say) < 2 deg C over pre-industrial levels extremely difficult. No-one I know thinks that the scale of the emission reduction task is being exaggerated – if anything I think it’s being understated to make it look more achievable. A cut of 60% in emissions? Easy? I don’t think so. – gavin]
David Donovan says
Re 68.
I was basicly reinforcing the points made by Gavin and Raypierre in thier response to #56 (see also my commet #30). You appeared to argue that LTE does not apply in the lower atmosphere while it indeed does. This does not mean that the emmission is treated as a smooth featureless `blackbody curve’. The existence of LTE only means that the distribution of gass molecules having particular quantum vibrational and rotational energy states can be accurately calculated using Boltzmann statistics and LTE also means that Kirchoff’s law applies (emmission coefficeint=absorption coefficent at all wavelengths).
In general the IR absorption coefficient of a gass (e.g. its spectrum) is a function of wavelength, its molecular structure, and the population of its various quantum states (which in LTE is determined by Temperature). The line shapes themselves are functions of temperature and pressure (the higher T and P the broader they become). If one looks at a IR emmission spectra (looking down form space) one sees something like http://www.atmos.umd.edu/~owen/CHPI/IMAGES/emisss.html (note the specta shown here are at low spectral resolution at higer res individual lines may be resolved). If the absorption coefficent in the atmosphere were constant with wavelenght one would measure a smooth backbody curve like those depicted in the figure. However, since gasses do indeed possess specifc absorption spectra we see this reflected in the observations.
In practice, if one is interested in using measured spectra to deduce things like gas concentrations in the atmosphere, one must in general perform so-called 'line-by-line' radiative transfer calculations. These calculations are performed directly, using detailed data on line positions and strengths, and are carried out on a fine spectral grid. Depending on the spectral region of interest, thousands of lines may have to be treated. Line-by-line calculations are generally too time-consuming for use in weather and climate models, so so-called band models are used instead. Here, in effect, low-spectral-resolution calculations using effective absorption coefficients are carried out. The accuracy of these band models (several methods exist for constructing them) is tested against observations and line-by-line calculations, and they have generally been found to do a pretty good job for their intended purposes.
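If it helps to see what Kirchhoff's law implies for the observed spectra, here is a minimal toy sketch in Python (a single made-up absorbing layer over a warmer surface, not a real line-by-line calculation; the band shape and the temperatures are invented purely for illustration):

import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(wl, T):
    # Planck spectral radiance B(wl, T); wl in metres, T in kelvin
    return 2 * h * c**2 / wl**5 / np.expm1(h * c / (wl * k * T))

wl = np.linspace(5e-6, 25e-6, 500)        # 5-25 micron region
T_surf, T_layer = 288.0, 250.0            # warm surface, colder atmospheric layer

# invented absorption band centred near 15 microns (roughly where CO2 absorbs)
tau = 4.0 * np.exp(-((wl - 15e-6) / 1.5e-6) ** 2)
transmission = np.exp(-tau)

# Kirchhoff (LTE): the layer's emissivity equals its absorptivity (1 - transmission),
# so the spectrum seen from above dips towards the colder layer's blackbody curve
# inside the band and follows the warm surface blackbody curve outside it
radiance = planck(wl, T_surf) * transmission + planck(wl, T_layer) * (1 - transmission)

# Boltzmann factor for one quantum of the 667 cm^-1 CO2 bending mode at 288 K
dE = h * c * 667e2
print("Boltzmann factor:", np.exp(-dE / (k * 288.0)))   # roughly 0.036

Plotting 'radiance' against wavelength gives a crude version of the bite-out-of-the-blackbody shape in the figure linked above.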
Hope this helps
Ian Castles says
Thanks for your comment on my 77 Gavin. The Stern discussion paper issued by the British Government states that ‘any of these stabilisation levels [450, 500 or 550 ppm equivalent CO2] would require global emissions to peak in the next decade or two and then fall sharply.’ This claim was based on calculations by the Hadley Centre.
Under the IPCC's B1T scenario, global fossil fuel CO2 emissions are projected to be higher in 50 years' time than they are now, but the rapid decline after that means that the CO2 concentration in 2100 under this scenario is estimated at 480 ppm (Dr T. M. L. Wigley at the IPCC Expert Meeting in Amsterdam, January 2003). You thought that I might be 'confusing reaching a level of emissions that will (eventually) lead to a stabilisation, and the actual level at which CO2 may be stabilised', but it is clear from Raypierre's comments in 67 that in this case the eventual stabilisation would not be higher than 480 ppm. I conclude that the UK Government view that emissions must peak in a decade or so and then fall sharply in order to stabilise CO2 concentrations at 450, 500 or 550 ppm CO2 equivalent is incorrect. I'm grateful for all of the comments I've had.
raypierre says
Ian —
I pointed you towards Archer’s online model in the hopes that you would dial in the kind of scenario you wanted and get a feel for the behavior of the system yourself. Since your computer skills don’t seem to be up to filling in some numbers on a web form and clicking on a button, I did a hasty job of this myself; my time isn’t unlimited, and if you want other people to do your work for you, you’ve got to be patient and not rant if people don’t do things exactly the way you want the first time. Nonetheless, I’m trying to provide some numbers that will help me (and you) understand your argument.
I banged in some numbers that had an emissions reduction (land sequestration included) of about 60% in 2050, and showed that this yielded stabilization at a value somewhat short of doubling CO2. I rather like this scenario, but it's true that it isn't B1T Message — when doing the interpolation to transfer the numbers from the SRES web site to Archer's model I accidentally shifted a few columns and wound up creating a scenario with earlier reductions than B1T Message. The CO2 stabilization value I quoted was correct for that scenario; also, the 2.67 Gt/year target value I quoted (and used) for 2100 is the correct one for B1T Message, including land sequestration. That's a pretty aggressive reduction, I think you'll agree.
In response to your comment, I had a closer look at B1T Message, and ran the correct numbers for this scenario through a proper interpolation before putting them in Archer's model. My description of the scenario given in my previous post is basically right, except that the aggressive reductions (relative to 2010 base values) don't begin until after 2050. B1T Message has a net emission reduction (including sequestration) of 13% in 2050, increasing sharply to 70% in 2100. This scenario, put into the carbon cycle model, just barely stabilizes CO2 by 2100, at a value of 480 ppm. However, this is only CO2, and this concentration yields a radiative forcing just 0.8 W/m**2 shy of the radiative forcing corresponding to a doubling of CO2; you have all the other anthropogenic greenhouse gases to add in as well. But yes, it does seem that if one's only ambition is to live with a climate stabilized at that yielded by a doubling of CO2, there is a scenario where one can postpone the date of the 60% emission drop past 2050, and make up for that with sharp reductions going to 2100. This still leaves you with a climate that is very likely warmer than anything we've seen for the past 10 million years or so.
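As a quick arithmetic check of that 0.8 W/m**2 figure: the standard simplified expression for CO2 forcing, dF = 5.35 ln(C/C0) (Myhre et al., 1998), reproduces it if one assumes a 280 ppm pre-industrial baseline:

import math

def co2_forcing(c, c0=280.0):
    # simplified CO2 radiative forcing in W/m**2 (Myhre et al. 1998)
    return 5.35 * math.log(c / c0)

print("480 ppm:", round(co2_forcing(480.0), 2))                       # about 2.9
print("2xCO2  :", round(co2_forcing(560.0), 2))                       # about 3.7
print("gap    :", round(co2_forcing(560.0) - co2_forcing(480.0), 2))  # about 0.8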
Even this would be a worthy goal — it would at least mean we would never have to face a 4xCO2 or 8XCO2 climate. If the nations of the world could at least agree to do this, it would be a good start. One could justifiably argue that to allow the world emissions to conform to B1T Message, the developed countries should be subjected to more stringent emissions reductions, to make up for their greater past emissions and the benefits reaped thereof by their economies. Note that even though the B1T Message scenario doesn’t call for sharp reductions in emissions until after 2050, it does require emissions to be kept below 9Gt through 2025 and to begin to decrease after that. Are you really saying that that will happen without some kind of carbon tax or equivalent action? I have yet to see any evidence of that.
Finally is your main complaint that some politicians may have said that a 60% reduction by 2050 would be necessary to stabilize at 2XCO2? There are many different variations on what might have been meant by a claim of that sort. I would be very grateful if you could provide some specific citations to where this specific argument has been made, so we can all base our judgement on what was actually said. I’ll reiterate what Gavin said, and what I said earlier: even achieving the modest goals of B1T Message is no mean feat, and (given the long capital life of energy systems) is very likely to require the right market signals to be put in place very soon.
Now, Ian, I do assume you are really interested in all this, and are not just using this discussion as a ruse to harvest sound-bites that can be quoted out of context on those other blogs you like to post to. If that happens, I will be both disillusioned and disappointed.
Ian Castles says
Dano, In your 60, you say that my argumentation ‘omits the implementation from feedback loops’, and that in an adaptive management world the successful implementation of management strategies would reduce the amplitude of a scenario in which (to take my illustration) electricity output per head for the world as a whole was five times as great as that of the rich OECD90 countries in the Kyoto base year.
I think that you may have misunderstood my argument. I wasn’t arguing that the level of electricity use projected in A1FI could ever happen, with or without adaptive management strategies: on the contrary, my illustrations were designed to show that it couldn’t. It follows, all other things being equal, that the projected emissions in the A1FI scenario couldn’t happen either, and neither could the IPCC’s projections of high increases in temperature. Of course all other things wouldn’t be equal, but this doesn’t affect the in-principle point that if the assumed level of electricity consumption isn’t realised, for whatever reason, the upper end of the IPCC’s temperature range is overstated.
Ian Castles says
Raypierre, Thank you for the detailed information in your post 80. I can assure you that I AM genuinely interested in all this. I did go to the model you suggested, but when I realised how few data points there were (e.g., no decadal values in the second half of the century) I couldn’t see the point of trying to replicate the number I already had from Tom Wigley’s presentation at Amsterdam. You have now arrived at the same figure (480 ppm). I note this is for CO2 alone.
Some of your comments imply that I am advocating this emissions scenario. That’s not so. I’ve tried to explain that I believe that the severity of emissions reductions, and the balance between adaptation and mitigation, is a matter for governments to decide, based on the best available information. We’ve now established, I think, that an important statement in the Stern Review papers is incorrect: it is stated in these papers, on the basis of advice from the Hadley Centre, that achievement of stabilisation at 550 ppm CO2 equivalent ‘would require global emissions to peak in the next decade or two and then fall sharply.’ That appears not to be so.
I don't conclude from this that stabilisation at 550 ppm CO2 equivalent is an appropriate target. Perhaps, in the light of this information and the advice of scientists that the climate would very likely still be warmer than at any time in the past 10 million years or so, governments may want to work to a more stringent target. As I see it, that's for them to decide, on an 'eyes-open' basis.
You ask whether I am really saying that emissions will begin to decrease after 2025 without a carbon tax. No, it’s the IPCC’s model-builders that have found that this MAY happen. Judgments on whether or not it WILL happen could only be made by examining the realism of the assumptions. I’ve said that I believe that the growth assumptions in the B1 (and A1) scenarios are too high, but it doesn’t necessarily follow that the assumptions are unreasonable overall. I’d make the same comment on your statement that a global 2.68 GtC represents an ‘aggressive’ reduction, and the issue about the need for the ‘right market signals’ to be in place if the profile of emissions in B1T MESSAGE is to be achieved. If this means climate-related policies such as carbon taxes, I can only repeat that the SRES modellers were specifically precluded from assuming these. This is true of ALL of the scenarios.
Finally, I’m not ‘complaining’ about what some politicians may have said: I don’t blame politicians or governments for saying that a 60% reduction in global GHGs is required by 2050 in order to achieve stabilisation at 2 X CO2 if, for example, Stern has said the same thing based on Hadley Centre advice. The UK House of Lords Committee said that they ‘were concerned that UK energy and climate policy appears to rest on a very debatable model of the energy-economic system and on dubious assumptions about the cost of meeting the long run 60% target’ (para. 94). I know that the target was set in 2003 and relates to the period 2000 to 2050. I’ll try and dig up some more information on the UK and on other jurisdictions that have set targets to meet your request. I agree that industrialised countries would have to work to above-average reductions in order to achieve any given global percentage reduction. Please bear in mind that my time isn’t unlimited either.
Dan Hughes says
Gavin, will you provide citations for the three papers/reports by Jule Charney, Suki Manabe and Jim Hansen mentioned in the first sentence of the post? Thanks.
[Response: It’s a reference to the Charney report:
-gavin]
Dan Hughes says
Thank you for the info. The NAS report seems to be no longer available from NAS, so I cannot get citations to the Manabe and Hansen peer-reviewed papers. I have been tracking down and working with some of the original climate models as reported in the peer-reviewed literature; Manabe, Moller, Sellers, Budyko, Paltridge, North, Saltzman, Lorenz, Ramanathan, along with several others … . These papers add up to a lot of material so it is possible that I’ve missed what I’m looking for, but I have not seen climate model results for doubling CO2 published prior to 1976. Maybe my focus on papers relating to models has caused me to miss this specific information.
Will you provide the exact citations for the Manabe and Hansen papers mentioned in the post?
Thanks again.
[Response:
However, note that the simulations reported in 1984 were actually done prior to the 1979 report and were discussed there. You are probably best off trying to track down a copy of the NAS report from your local library. -gavin]
Hank Roberts says
Has the very recent info from the Aura satellite study surprised any of the modelers? Quoting a snippet from EurekaAlert with the cite:
” … Su et al. collected simultaneous observations of upper tropospheric water vapor and cloud ice from the Microwave Limb Sounder on the Aura satellite. Their observations show that upper tropospheric water vapor increases as cloud ice water content increases. Additionally, when sea surface temperature exceeds around 300 Kelvin [30 degrees Celsius; 80 degrees Fahrenheit], upper tropospheric cloud ice associated with tropical deep convection increases sharply with increasing sea surface temperature. This moistening leads to an enhanced positive water vapor feedback, about three times larger than that in the absence of convection.
Title: Enhanced positive water vapor feedback associated with tropical deep convection: New evidence from Aura MLS
Authors: Hui Su: Skillstorm Government Services, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA; William G. Read, Jonathan H. Jiang, Joe W. Waters, Dong L. Wu, and Eric J. Fetzer: Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA.
Source: Geophysical Research Letters (GRL) paper 10.1029/2005GL025505, 2006
I gather this tropical convection study fills a gap in observations; curious what the climatologists are learning from it, if it’s not too soon to comment. If there’s another thread for it, I’ve not found it.
raypierre says
Thanks to Ian for the additional information in #82. I think we are converging on some kind of understanding here.
I haven’t read the Stern Review myself, but based on your quote I’d agree that it is technically incompatible with an analysis of B1T MESSAGE based on simplified carbon models like ISAM. I do think that policy documents should strive to be as accurate as possible in their scientific arguments, but I think it is equally important to ask whether the nature of the mis-statement (if indeed it is a mis-statement) does any real harm, or sends the policy in a completely wrong direction. In this case, I think the answer is clearly in the negative. Based on results from ISAM, and taking into account the assumed land sequestration, a more correct statement would have been “The global net carbon emissions must essentially stop increasing in the next decade or two, and begin decreasing sharply in the following two or three decades.” This statement conveys precisely the same sense of urgency, and the difference in policy implied is very little. It calls for a little less near-term action, but the difference between the policies you’d put in place to keep emissions from growing and policies that would go a little farther and begin the decrease earlier, is not so very great. Still, assuming that the Stern review was working off an ISAM-type analysis, I would have preferred my version of the statement. For developed countries, I myself would advocate a nearer-term start of reduction, but not on the basis of the precise statement you quote.
Now, I don’t want to be too hasty in blaming the Stern commission, since my discussion above is based on the ISAM model, which is a pretty crude carbon cycle model. I suspect Wigley’s talk was based on the same model. The Hadley Center group has been doing some very sophisticated things with modeling ocean carbon uptake and land carbon feedbacks, and they may have provided some not-yet-published information to the Stern group which does indeed suggest that an earlier start of the reduction would be necessary. It would be very interesting to know precisely what information they provided to the Stern commission, and whether the commission interpreted it correctly.
I'll comment later on the matter of whether the SRES scenarios should be considered forecasts. Briefly, they are storylines, not forecasts, and though SRES may have been told not to consider carbon taxes and suchlike policies, some of these scenarios are extremely unlikely to happen without policy intervention. B1T Message basically assumes everybody sees the light and goes green — spending their money on opera subscriptions rather than Hummers — without any nudging. It could happen, but I wouldn't bet on it. Policy measures would have the intent of making B1T Message a more likely future and A2 a less likely future.
James Annan says
Ray,
You appear to be at least tending towards, if not quite repeating, the same regrettable error of regarding the high-growth scenarios (e.g. A2) as "business as usual" and asserting that climate-change-related policy action is required to push us onto a lower emissions path. As Ian points out, the SRES itself explicitly disavows such an interpretation. Choosing the highest emissions pathway one can get away with is fine for investigating model behaviour, but it's not much of a basis for a plausible forecast. I noted with some disappointment that some of the recent work on ice sheets even used a 1% per annum compound CO2 increase, backdated to 1990! That's already 20 ppm too high for the present day, a gap that will grow rapidly over the next few decades. Yet this was presented as a forecast for 2100, unless we "do something". This sort of behaviour really gives the sceptics a stick to beat us over the head with.
Of course one can also note that there is no such thing as “no policy” anyway. Different policies will lead to different outcomes, and the SRES considers a range of futures in which AGW mitigation is not considered. The concept of (environmental) policies that do not explicitly consider GHG emissions in any way seems a bizarre one to me, but it seems to be how they approached things, and no doubt they have some justification for this. It certainly doesn’t describe how the real world is working, though. I predict (hey, a real forecast) that the next set of scenarios will have generally lower emissions than the current set – they will have to start lower, just to match reality in 2000-2010.
Chip Knappenberger says
Hmmm, James (re: #87), would you be referring to Overpeck et al. (2006)? After running their model with a 1% per year increase from 1990, and predicting doom and gloom from melting ice by 2100, they had the nerve to write in their Supplementary Material section: "For example, it is highly likely that the ice sheet changes described in this paper could be avoided if humans were to significantly reduce emissions early in the current century." As you point out, CO2 concentrations are already much lower than their modeled values (BTW, by my calculations, CO2 levels are currently about 30 ppm below a 1% per year increase starting at 355 ppm in 1990). So instead of giving us stern warnings, Overpeck should be praising our remarkable achievements!
And James, you are perfectly right… as long as folks continue to use a 1% per year CO2 increase in their models and then attach dates to events in the future, we "skeptics" will continue to bash the results (e.g. here ). At the very least, remove the DATES from the events. I would have a lot less problem with Overpeck et al. reporting that global temperature will rise to a level high enough to significantly melt ice sheets when CO2 concentrations reach 1060 ppm (the 1%/yr value for 2100, starting in 1990 at 355 ppm), or whatever other value they want, than I do when they say the year 2100. Sure, attaching dates to things makes for really scary headlines and attempts to prompt legislative action, but it is not based in reality.
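For anyone who wants to check those numbers, here is the back-of-the-envelope calculation (assuming, as I did, a 355 ppm starting value in 1990):

def co2_1pct(year, start=355.0, base_year=1990):
    # CO2 concentration under a 1% per year compound increase from base_year
    return start * 1.01 ** (year - base_year)

for yr in (2005, 2006, 2050, 2100):
    print(yr, round(co2_1pct(yr), 1), "ppm")

# 2100 comes out near 1060 ppm, and the mid-2000s values sit a few tens of ppm
# above the roughly 380 ppm actually observed, which is the gap I'm pointing to.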
Stephen Berg says
Re: #88, “So instead of giving us stern warning, Overpeck should be praising our remarkable achievements!”
What "achievements"? We North Americans have done little, if anything, to cut our greenhouse gas emissions. American per capita GHG emissions have increased 13% since 1990, while Canadians have increased their GHG emissions 24% since 1990. The Europeans at least have something to be proud of, since they have cut their emissions. However, we all have to do our part by cutting GHG emissions drastically to avoid catastrophe.
Chip Knappenberger says
Stephen,
Well, we (the world) have already achieved an atmospheric CO2 concentration that is 7% below where Overpeck et al.'s scenario said we should be in 2005. If you want to cast our emissions trends in an unfavorable light (as you have in #89), then press Overpeck and other modelers to use a scenario that is along the lines of the pathway you think we should be on (or the one that we are actually on).
My point is that, simply based upon Overpeck's scenario (a 1%/yr increase in CO2 concentrations since 1990), it *appears* as if we have achieved a great deal by doing nothing at all. I am agreeing with James Annan's sentiments expressed in #87 – why start out with something that is already wrong? At the very least, as I mentioned in #88, remove the dates. In other words, tie events to CO2 concentrations and not to specific years. If Overpeck et al. had done that, then there would be no grounds for this exchange between you and me.
Hank Roberts says
Does “1%” refer to anthropogenic CO2 increase, or to an overall measured atmospheric level total?
March 2005 NOAA (averaged from 60 sites worldwide) said:
The increase in 2002 was 2.43 ppm; the increase in 2003 was 2.30 ppm. [The increase returned to the long-term average of 1.5 ppm per year in 2004, "indicating that the temporary fluctuation was probably due to changes in the natural processes that remove CO2 from the atmosphere."]
http://www.publicaffairs.noaa.gov/releases2005/mar05/noaa05-035.html
Reuters Tue Mar 14, 2006 11:25 AM reported:
The average annual increase in absolute amounts of CO2 in the atmosphere over the past decade has been 1.9 ppm, slightly higher than the 1.8 ppm of 2004 … [according to the] National Oceanic and Atmospheric Administration, cited by the British Broadcasting Corporation; … carbon dioxide had grown last year [2005] by 2.6 ppm.
NOTE — I’m not saying these numbers are comparable.
1.5 ppm from NOAA in March for 2004
1.8 ppm from Reuters quoting BBC quoting NOAA, also for 2004.
I haven't found the data at NOAA; can someone point to the actual numbers and which stations they're comparing, to make sense of this mismatch? A simple table current through 2005 for Mauna Loa alone would be nice.
—> Does the "1%" rate of increase the IPCC assumed include feedback added to the historical annual rate of 1.5 ppm? When would such feedback increases be expected to show up above natural variation?
Can a statistician tell us — please — how many more annual observations will be needed to tell whether the rate is increasing (assume a 5% significance level, one-tailed test)?
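I'm no statistician, but here is a rough simulation sketch of the question, with the acceleration (0.02 ppm/yr per year) and the year-to-year noise (0.6 ppm standard deviation) picked out of the air just to see the scale of the answer:

import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

def detection_power(n_years, accel=0.02, noise=0.6, alpha=0.05, trials=2000):
    # fraction of synthetic records in which a one-tailed trend test on the
    # annual increments detects the assumed acceleration at the given alpha
    hits = 0
    for _ in range(trials):
        t = np.arange(n_years)
        increments = 1.5 + accel * t + rng.normal(0.0, noise, n_years)
        fit = linregress(t, increments)
        p_one_tailed = fit.pvalue / 2 if fit.slope > 0 else 1 - fit.pvalue / 2
        hits += p_one_tailed < alpha
    return hits / trials

for n in (10, 20, 30, 40):
    print(n, "years of data, detection rate:", round(detection_power(n), 2))

With those made-up numbers it takes a few decades of annual values before the test detects the acceleration most of the time, which is why a real answer from a statistician (with the real noise levels) would be welcome.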
Alastair McDonald says
The Mauna Loa data is here:
http://cdiac.ornl.gov/ftp/trends/co2/maunaloa.co2
Cheers, Alastair.
Stephen Berg says
Re: #90, “Well, we (the world) have already achieved an atmospheric CO2 concentration that is 7% below where Overpeck et al.’s scenario said we should be in 2005.
…
My point is, that simply based upon Overpeck’s scenario (a 1%/yr increase in CO2 concentrations since 1990), it *appears* as if we have achieved a great deal by doing nothing at all.”
A 1% per year increase in CO2 concentration is compounding, so the annual increment in ppm grows over time rather than staying fixed (from roughly 4 ppm/yr at today's concentrations to more than 10 ppm/yr at the scenario's 2100 level). As for the "7% below" comment, that may be a result of European reductions rather than North American action, since our GHG emissions have risen 15% per capita.
Perhaps Overpeck et al. should have used a ppm/yr rate increase instead of a percentage increase. However, it's nothing we should be squabbling over; such squabbling is a tactic famously used by skeptics to prevent, or at least delay, the necessary action.
Almuth Ernsting says
Re: Future increases in CO2 levels
I understand that future CO2 emission trends are based largely on projected emissions from fossil fuel burning and ongoing deforestation. Something I have read a lot about and which really worries me is whether this ignores potentially vast additional emissions – not just possible natural feedbacks, but emissions from the destruction of carbon sinks.
Two things come to mind:
a) the massive release of carbon from drained peat swamps on Borneo (and perhaps elsewhere). I understand that the region was a carbon sink with no big fires until the late 1990s, then Suharto came along and decided to drain the swamps and within just two years the peat turned into a massive source of CO2 emissions (see here, for example: http://www.nature.com/news/2004/041108/pf/432144a_pf.html ). Incidentally, the years of the largest recent peat fires were also the years with annual CO2 increases above 2ppm.
b) the possibility of a collapse of the Amazon as a viable ecosystem, with increasing fires, loss of vegetation and emission of vast stores of carbon. Although the Hadley Centre suggest that this could result from climate change, it is also quite likely with ‘business as usual’ deforestation.
Am I correct to think that, unless people safeguard those carbon sinks (i.e. re-flood the peat swamps and protect rainforests), there will be little hope of stabilising CO2 levels, even if the (essential) efforts are made to cut fossil fuel emissions? How significant are those emissions in the greater scale of things?
Hank Roberts says
Alastair, that Mauna Loa data ends with 2004. Any word about 2005, yet?
Hank Roberts says
Here is more recent CO2 data (annual mean growth rates, in ppm per year), including graphics:
http://www.cmdl.noaa.gov/ccgg/trends/
1995 2.01
1996 1.19
1997 1.98
1998 2.95
1999 0.90
2000 1.78
2001 1.60
2002 2.55
2003 2.31
2004 1.54
2005 2.53
Paul Baer says
I would like to return to some of the points made in the original posting, and a few of the early responses.
I have read the Annan and Hargreaves (hereafter A&H) paper several times and have had the opportunity to discuss some of the issues with James. There are, I believe, two central conclusions that should be drawn from their work.
First, not all PDFs for the climate sensitivity are equally credible, and PDFs such as those generated (for example) by Bayesian computation using a single broad data set (e.g., Andronova and Schlesinger 2001 and others primarily based on the last two centuries of forcing and warming) should not be considered as credible or “likely” as those, like A&H, that use additional lines of evidence.
Second, however, and this is what I will attempt to argue here, there are additional considerations regarding the Bayesian methods used by A&H which suggest that their own calculation is not robust, and not inherently more credible than PDFs with, in the jargon, "longer tails." The devil is, as ever, in the details. Some of the reasons it is not robust relate to the issues raised by Hank Roberts in #33 and other comments, having to do with the interpretation of past climate dynamics as evidence for constraints on future climate dynamics.
To start with, let me lay out what I think are the crucial claims made in the paper. My interpretation of their argument is that it goes like this:
1. Bayesian methods are the proper method for estimating the PDF for the climate sensitivity.
2. Using Bayesian methods, if there exist three fully independent and equally weighted lines of evidence, each of which produces a PDF for a parameter like the climate sensitivity, the appropriate "posterior" PDF is the one obtained by multiplying the three PDFs together and renormalising (see the sketch after this list).
3. In the case of climate sensitivity, there exist three such lines of evidence, which give the posterior PDF reported in the paper.
4. There are, if anything, additional independent lines of evidence; thus the PDF produced from just three is conservative, and should be sufficiently convincing to us to justify our actions.
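For concreteness, here is a minimal sketch of what step 2 amounts to, with three invented Gaussian constraints standing in for the real ones (illustrative shapes only, not the actual A&H inputs):

import numpy as np

S = np.linspace(0.0, 10.0, 2001)    # climate sensitivity grid, deg C

def gaussian(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2)

# three hypothetical independent constraints (say, 20th century, volcanic, LGM)
constraints = [gaussian(S, 3.0, 1.5), gaussian(S, 3.0, 2.0), gaussian(S, 2.7, 1.2)]

posterior = np.ones_like(S)
for c in constraints:
    posterior *= c                          # independence is what licenses the product
posterior /= np.trapz(posterior, S)         # renormalise to a proper PDF

cdf = np.cumsum(posterior) * (S[1] - S[0])
print("median:", round(S[np.searchsorted(cdf, 0.5)], 2))
print("95th percentile:", round(S[np.searchsorted(cdf, 0.95)], 2))

The combined 95th percentile falls below that of even the narrowest single constraint, which is the narrowing effect at issue; everything that follows is about whether the product step is really licensed.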
There are three obvious criticisms of this line of argument, which primarily address (1) and (3) above. The importance of (4) I leave for another time.
First, whether the three lines of evidence are actually independent is not obvious. The authors discuss this at some length, but based on my moderate familiarity with the methods used in each of the three cases, I suspect that there are any number of interdependencies. I believe it is reasonable to claim, as A&H do, that the independence is 'close enough' to justify the 'as if' methodology. But, crucially, in the absence of an independence assumption, the method of integrating the three PDFs becomes indeterminate. Thus there is no straightforward sensitivity analysis that can be done to show that the interdependencies are irrelevant, because there is no method for such a calculation. Thus the authors have no way of strongly refuting the claim that it is also reasonable to assume that the three lines of evidence are not independent, and hence that the conclusion is not robust.
Secondly, and this raises the point that Hank Roberts raises: the method they use depends on the assumption that the future will be like the past; thus it can at best be used to robustly justify the claim that "if the mechanisms that govern the climate sensitivity in all three of the 'cases' are effectively the same as each other and as those governing it in the (e.g., doubled-CO2) future, then it would be reasonable to act as if this PDF will continue to be our best representation of the climate sensitivity going forward." I think there are ample reasons to question this assumption as well, and again, once one entertains this notion, the appropriate "modification" of the calculated PDF becomes indeterminate.
Third, a closely related notion is that the apparently ‘objective’ method of multiplying PDFs to get a posterior distribution cannot be substituted for a more subjective analysis that weights the evidential strength of the component PDFs on the basis of the reasoning that went into them. That is to say, in this case, that it’s worth asking what is the causal story that leads us to treat a certain line of evidence as a constraint? Prima facie, for example, one might think that the response to a perturbation like Mt. Pinatubo – in which a small, globally distributed negative forcing is created for a very short time – isn’t a very strong constraint on the response of the system to a very heterogeneous and high amplitude positive forcing signal.
I admit I am out of my depth here, and there may be many reasons for considering the volcanic signals as a strong constraint on future climate sensitivity. But the key point is methodological – once one admits that some lines of evidence are stronger than others, the calculation of a posterior distribution becomes indeterminate. One can come up with arbitrary methods for “weighting” alternative PDFs in such a calculation, but there is no single correct method, and no obvious method for demonstrating sensitivity to the assumption that the lines are equal and “full” constraints.
The key point, in my opinion, is that what Bayesian methods do is suggest what it is reasonable to believe, and thus to act as if you believe, and the A&H method has nothing to say to many of those who have other reasons for assigning a higher probability to high climate sensitivities. They do effectively make the argument that those who would choose to base policy recommendations on the output of any one of the included constraints, without considering other lines of evidence that would tend to narrow it, are making a mistake. But the claim that one must therefore treat the three PDFs as fully independent, and that the multiplicative algorithm is robust, is I believe not justified.
The significance of methods such as climateprediction.net is another discussion, but suffice it to say for now that one reason we build mechanistic models rather than only statistical models is to get evidence about how a system will behave when pushed outside the domain of experience. The credibility of such models is necessarily a matter of domain-specific arguments.
James Annan says
Hi Paul,
Thanks for your comments. I’ll offer a brief reply to some of them.
1) Although I do agree that Bayesian methods are at least a good method to approach the problem, I think it’s more accurate to say that our paper is not so much aimed at promoting this idea as merely pointing out that if you are going to do it (as many people have been in recent years), there are better and worse ways to go about it :-)
2) I have to say I'm not overly impressed with some of what I've read in the climate science literature regarding such ideas as "probability of probability", or in other words "how confident are you that your pdf (or assumptions) are the 'correct' ones". This seems to be at least in part a category error regarding the nature of the probability that we are estimating – there isn't really such a thing as the 'correct' probability, only the strength of a belief. Kandlikar et al. provide strong arguments that precision and confidence have to go pretty much hand in hand – it doesn't make much sense to give a precise probabilistic estimate that you have little faith in, and (perhaps a little more debatably) vice versa. I don't have great confidence that our 3 underlying constraints are exactly 'right' (inasmuch as that assessment even has a meaning), but I think they are tending towards the generous side, and the overall result is robust to substantial changes in them, so I'm therefore more confident about the more precise posterior. Uncertainty in a particular estimate can be accounted for by increasing its error, and in fact we explicitly did this for the LGM. So I reject the notion of some constraints being more equal than others, aside from their actual shape. I'm not insisting that the concept is wholly without merit, merely saying that it is probably an unnecessary complication.
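To put a rough number on the robustness point, here is a toy version in the spirit of the sketch in Paul's comment (invented Gaussian constraints, nothing to do with our actual inputs): double the width of one of the three constraints and see where the combined 95% point ends up.

import numpy as np

S = np.linspace(0.0, 10.0, 2001)

def gaussian(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2)

def upper_95(constraints):
    # multiply the constraints, renormalise, and read off the 95th percentile
    post = np.ones_like(S)
    for c in constraints:
        post *= c
    post /= np.trapz(post, S)
    cdf = np.cumsum(post) * (S[1] - S[0])
    return S[np.searchsorted(cdf, 0.95)]

original = [gaussian(S, 3.0, 1.5), gaussian(S, 3.0, 2.0), gaussian(S, 2.7, 1.2)]
widened  = [gaussian(S, 3.0, 1.5), gaussian(S, 3.0, 2.0), gaussian(S, 2.7, 2.4)]
print("95% point, original widths:       ", round(upper_95(original), 2))
print("95% point, one constraint doubled:", round(upper_95(widened), 2))

In this toy case the shift is around half a degree rather than a wholesale reopening of the long tail, which is the sense in which I mean the posterior is robust to substantial changes in the inputs.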
Independence is a bit more of a tricky one, but note that accounting for dependence could result in a stronger result as well as a weaker one. While our paper was in review, yet another estimate was generated by Forster and Gregory using ERBE data which also points clearly towards low sensitivity. Had this result been available to us earlier, it would have been very tempting to use it in some way (eg here) as their analysis doesn’t involve a climate model at all (merely a linear regression) and thus its independence from other analyses would be hard to challenge. So I am quite confident that the results are robust. Given that “beyond reasonable doubt” in legal terms is often stated to mean about 95% confidence or higher, I think we could reasonably expect to convict S of being less than 4.5C.
Of course, what matters is not really what I think, but how other researchers receive the work – having written a peer-reviewed paper merely means that a couple of referees didn't see any obvious errors. I look forward to seeing what others have to say over the next few months and years – we've actually had very little feedback from anyone, least of all IPCC authors :-) So far, I'm not aware of anyone producing an alternative analysis (in the light of our arguments) to justify the belief that S > 4.5C is credible at more than the 5% level, and certainly not S >> 4.5C (I'm trying to avoid such constructions as "the probability of S > 4.5C", cos it's not "the probability", it is their probability).
I’m not sure what “other reasons” one can have for assigning high probability to high sensitivity, other than it being our (rationally defensible) belief. Are you really saying we should pretend to believe that climate sensitivity is high (with high probability), even if we don’t actually believe it? Why?
James
Ref:
Kandlikar, M., J. Risbey, and S. Dessai (2005), Representing and communicating deep uncertainty in climate-change assessments, Comptes Rendus Geoscience, 337(4), 443-455. (Not online, AFAIK.)
Paul Baer says
Hi James –
Thanks for the thoughtful reply. Here’s some quick thoughts:
I think that the point raised by Kandlikar et al. about the connection between precision and confidence is quite apropos. The argument I'm making, actually, is that our confidence in our understanding of how the feedbacks in the climate system will evolve in the coming decades – summarized by the climate sensitivity – is in fact low, and that the appropriate conclusion is that the PDF should be considered to be broad.
Quite specifically, I am indeed arguing, as you suggest no one has yet (and I'm happy to be first :^), that even in light of your analysis, S>4.5 is still credible at greater than 5%. Even if, as you argue, it is reasonable to believe that the probability is in fact 2.5%, our lack of understanding of the process can justify widening the credible tail so that S=5ºC sits at 2.5% and S=4.5ºC at 5% or 6% or 7%, or what have you. There is no reason you cannot operate on your calculated PDF just as you operate on the components, widening it subjectively, and arguably good reason to do so.
I don't disagree that the fact that we have three types of constraints on historical climate sensitivity is some evidence for predicting its future behavior (and let's be clear, probabilistic prediction is what we're trying to do here). I just don't think it's all that strong evidence. We know enough about the complexity of the system to be able to say that even if we knew exactly what the climate sensitivity "was" in each of the three constraining periods – that is to say, even if we had a reasonably precise model of the feedbacks and forcings operative in each period – it wouldn't give us particularly high confidence that the sensitivity would be very similar in process and consequence in the future. Certainly I think it's reasonable to say "there's at least a 5% chance that the feedbacks will work very differently as we warm the planet towards temperatures unprecedented in recent millennia."
In short, then, yes, I believe that we ought to act as if the climate sensitivity has a higher probability of being high than we believe may be the best estimate of its 'historical' value.
In fact, given significant evidence about the state dependence of the climate sensitivity, I’m not even sure that what is estimated by the three constraints is meaningfully a single, unique parameter :^)