Guest commentary by Ben Sanderson
Millar et al’s recent paper in Nature Geoscience has provoked a lot of lively discussion, with the authors releasing a statement to clarify that their paper did not suggest that “action to reduce greenhouse gas emissions is no longer urgent”, but rather that 1.5ºC (above pre-industrial) is not “geophysically impossible”.
Millar et al’s range of post-2014 allowable emissions for a 66% chance of not passing 1.5ºC, 200-240 GtC, implies that the planet would exceed the threshold after 2030 at current emissions levels, compared with the AR5 analysis, which implies most likely exceedance before 2020. If the Millar numbers are correct, 1.5ºC changes from a fantasy to merely very difficult.
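The arithmetic behind that exceedance date can be checked directly. A minimal sketch follows, assuming a round figure of ~10 GtC/yr for current total emissions (an assumption for illustration, not a number from the paper):

```python
# Back-of-the-envelope exceedance check. The 200-240 GtC budget range is
# from Millar et al; the 10 GtC/yr emissions rate is an assumed round
# figure for current fossil-fuel plus land-use emissions.
post_2014_budget_gtc = (200, 240)
annual_emissions_gtc = 10.0  # assumption, held constant for simplicity

for budget in post_2014_budget_gtc:
    exceedance_year = 2014 + budget / annual_emissions_gtc
    print(f"{budget} GtC budget exhausted around {exceedance_year:.0f}")
# 200 GtC -> ~2034, 240 GtC -> ~2038: i.e. exceedance after 2030
```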
But is this statement overconfident? Last week’s post on RealClimate raised a couple of issues which imply that both the choice of observational dataset and the chosen pre-industrial baseline period can influence the conclusion of how much warming the Earth has experienced to date. Here, I consider three aspects of the analysis and assess how they influence the conclusions of the study.
Figure 1: (a) shows temperature change in the CMIP5 simulations relative to observed temperature products. Grey regions show model range under historical and RCP8.5 forcing relative to a 1900-1940 baseline. Right-hand axis shows temperatures relative to 1861-1880 (offset using HadCRUT4 temperature difference). (b) shows temperature change as a function of cumulative emissions. Black solid line shows the CMIP5 historical mean, and black dashed is the RCP8.5 projection. Colored lines represent regression reconstructions as in Otto (2015) using observational temperatures from HadCRUT4 and GISTEMP, with cumulative emissions from the Global Carbon Project. Colored points show individual years from observations.
The choice of temperature data
We can illustrate how these effects might influence the Millar analysis by repeating the calculation with alternative temperature data. Their approach requires an estimate of the forced global mean temperature in a given year (excluding any natural variability), which is derived following Otto et al (2015), who employ a regression approach to reconstruct global mean temperature as a function of anthropogenic and natural forcing agents. In Fig. 1(a), we apply the Otto approach to data from GISTEMP as well as the HadCRUT4 product used in the original paper, again using data up to 2014. Although the HadCRUT4 forced Otto-style reconstruction suggests 2014 temperatures were below the 25th percentile of the CMIP5 distribution, following the same procedure with GISTEMP yields a 2014 temperature of 1.08K, corresponding to the 58th percentile of the CMIP5 distribution.
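For the mechanics of that regression step, here is a minimal sketch. The temperature and forcing series below are synthetic placeholders standing in for the real inputs (HadCRUT4/GISTEMP temperatures and the forcing timeseries described in the Methods):

```python
import numpy as np

# Otto (2015)-style reconstruction: regress observed global mean temperature
# on anthropogenic and natural forcing series, and treat the fitted value
# (which excludes the residual, i.e. internal variability) as the forced
# temperature. All series here are synthetic, for illustration only.
years = np.arange(1880, 2015)                   # e.g. GISTEMP coverage
rng = np.random.default_rng(1)
F_anth = 2.3 * (years - 1880) / (2014 - 1880)   # toy anthropogenic forcing (W m-2)
F_nat = 0.2 * np.sin(2 * np.pi * (years - 1880) / 11.0)  # toy natural forcing
T_obs = 0.45 * F_anth + 0.3 * F_nat + 0.1 * rng.standard_normal(years.size)

# Least-squares fit: T ~ b0 + b1*F_anth + b2*F_nat
X = np.column_stack([np.ones(years.size), F_anth, F_nat])
beta, *_ = np.linalg.lstsq(X, T_obs, rcond=None)

T_forced = X @ beta                             # regression reconstruction
print(f"forced-temperature estimate for 2014: {T_forced[-1]:.2f} K")
```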
This calls into question the justification for changing the baseline for the cumulative emissions analysis, since it quickly becomes apparent that the use of a different dataset can undermine the conclusion that present-day temperatures lie outside of the model distribution. Fig. 1(b) shows that the anomaly between observations and the CMIP5 mean temperature response to cumulative emissions is halved by repeating the Millar analysis with the GISTEMP product instead of HadCRUT4.
The role of internal variability
There is also an important question of the degree to which internal variability can influence the attributable temperature change, given that the Millar result is contingent on knowing the forced temperature response of the system. We apply the approach to the CESM Large Ensemble, a 40-member initial-condition ensemble of historical and future climate simulations in which ensemble members differ only in their realization of natural variability. Although all members have identical forcing and model configuration, Fig 2(a) shows that the estimated forced warming in 2014 from the Otto approach varies from 0.68-0.94K across the ensemble (a range almost as large as the spread in the actual 2005-2014 decadal average temperature itself in the CESM ensemble). Furthermore, there is a strong correlation between the inferred forced warming in 2014 and global mean temperatures in the preceding decade, suggesting that the forced-response estimate in the Otto approach can be strongly influenced by temperatures in the preceding decade.
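The flavor of this ensemble test can be reproduced with a toy calculation. Everything below is synthetic (a shared forced trend plus member-specific noise), standing in for the 40 CESM members and the per-member Otto-style reconstructions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: one shared forced trend, 40 realizations of variability
n_members = 40
years = np.arange(1850, 2015)
forced_truth = 0.8 * (years - 1850) / (2014 - 1850)   # toy forced warming (K)
t_members = forced_truth + 0.15 * rng.standard_normal((n_members, years.size))

def forced_estimate_2014(t, f=forced_truth):
    """Otto-style stand-in: regress a member's temperature on the forcing
    proxy and return the fitted (forced) value for 2014."""
    X = np.column_stack([np.ones_like(f), f])
    beta, *_ = np.linalg.lstsq(X, t, rcond=None)
    return (X @ beta)[-1]

forced_2014 = np.array([forced_estimate_2014(t) for t in t_members])
decade = (years >= 2005) & (years <= 2014)
decadal_mean = t_members[:, decade].mean(axis=1)

# Spread across members, and its correlation with the preceding decade:
# internal variability leaks into the "forced" estimate.
r = np.corrcoef(forced_2014, decadal_mean)[0, 1]
print(f"forced-2014 range: {forced_2014.min():.2f} to {forced_2014.max():.2f} K")
print(f"correlation with 2005-2014 mean: r = {r:.2f}")
```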
Figure 2: (a) Reconstructed global mean temperature in 2014 for the CESM Large Ensemble following the Otto (2015) regression methodology, plotted as a function of average global mean temperature in the years 2005-2014. (b) Correlation between mean grid-point temperatures in 2005-2014 and the reconstructed global mean temperature in 2014; ellipses show the regions used for the Pacific Climate Index. (c) CESM Large Ensemble reconstructed global mean temperature in 2014 as a function of the Pacific Climate Index. Vertical lines show index values for observations in the period 2005-2014 (solid) and the historical period (dashed).
In short, irrespective of which observational dataset is used, it’s likely that an estimate of forced response made in 2014 would be biased cold, which on its own would translate to an overestimate of the available budget of about 40 GtC.
The low CMIP5-compatible emissions
Millar’s paper also points out that the discrepancy between the CMIP5 ensemble and the observations arises not only from temperature, but also because cumulative emissions were greater in the real world than in the mean CMIP5 model by 2014. But this only translates to a justifiable increase in the emissions budget if the real world is demonstrably off the CMIP5 cumulative emissions/temperature line. By some estimates, cumulative emissions in 2014 might be higher than in the models simply because emissions were consistently above the RCP range between 2005-2014. In other words, by 2014 we’d used more of the carbon budget than any of the RCPs had anticipated. If we are not confident that the real world is cooler than the models at this level of cumulative emissions, then the available emissions for 1.5 degrees should decrease proportionately.
A key point to note is that, by resetting the cumulative emissions baseline, the Millar et al available emissions budget is insensitive to the actual cumulative emissions to date. The forced temperature estimate is used as a proxy for what cumulative emissions should be, given the current level of warming. This is only justified if we are confident that we know the current forced temperature more accurately than we know the current cumulative emissions. However, the combined evidence of the influence of natural variability on the forced temperature estimate, the disagreement between observational datasets on the level of warming, and the uncertainty introduced by an uncertain pre-industrial baseline means that we cannot be as confident as the Millar paper suggests about the current level of warming, and the balance of evidence suggests that the Otto warming estimate may be biased cold. If this is right, the Millar available cumulative emissions budget would be biased high.
So, is it appropriate to say that 1.5ºC is geophysically possible? Perhaps plausible would be a better word. Depending on which temperature dataset we choose, the threshold exceedance budget (TEB) for 1.5 degrees may already be exceeded. Although it would certainly be useful to know what the underlying climate attractor of the Earth system is, any estimate we produce is subject to error.
We ultimately face a question of what we trust more: our estimate of our cumulative emissions to date, combined with our full knowledge of how much warming that might imply, or an estimate of how warm the system was in 2014, which is subject to error from observational uncertainty and natural variability. Changing the baseline for warming and cumulative emissions is effectively a bias correction: a statement that the models have simulated the past sufficiently poorly that they warrant correction, which allows emissions to date to be swept under the carpet. Alternatively, we trust the cumulative emissions number and treat the models as full proxies for reality, as was done in AR5, which would tell us that emissions to date have already brought us to the brink of exceeding the 1.5 degree threshold.
Methods
For Figure 1, global mean temperatures are plotted from the HadCRUT4 and GISTEMP products relative to a 1900-1940 baseline, together with global mean temperatures from 81 available simulations in the CMIP5 archive, also relative to the 1900-1940 baseline, where all available ensemble members are taken for each model. In each year from 1900-2016, the 5th, 25th, 75th and 95th percentiles of the CMIP5 distribution are plotted. In Figure 1(b), the CMIP5 cumulative emissions and temperature data used are identical to those in AR5 for the historical and RCP8.5 trajectories. Observational data are from the GISTEMP and HadCRUT4 global mean products, and annual cumulative emissions data are replicated from the Global Carbon Project. Regression analyses are performed as in Otto (2015), using natural and anthropogenic forcing timeseries (historical and the RCP8.5 scenario), with regressions constructed using data from 1850-2016 (for HadCRUT4) and from 1880-2016 (for GISTEMP).
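A minimal sketch of the Figure 1(a) envelope construction, with a synthetic stand-in for the 81-run CMIP5 temperature array:

```python
import numpy as np

# Re-baseline each run to 1900-1940, then take per-year ensemble percentiles.
# `cmip5` is synthetic (toy trend plus noise) in place of the real archive.
years = np.arange(1900, 2017)
rng = np.random.default_rng(2)
cmip5 = (0.9 * (years - 1900) / 116.0
         + 0.12 * rng.standard_normal((81, years.size)))

base = (years >= 1900) & (years <= 1940)
anoms = cmip5 - cmip5[:, base].mean(axis=1, keepdims=True)  # 1900-1940 baseline

# Per-year 5th/25th/75th/95th percentiles, as plotted for the grey bands
pcts = np.percentile(anoms, [5, 25, 75, 95], axis=0)
print(pcts.shape)  # (4, 117): one row per percentile, one column per year
```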
Figure 2 uses data from the CESM large ensemble, where the Otto (2015) analysis is applied to each ensemble member. Regressions are performed using data from years 1850-2014 for each member of the archive, to replicate the years used in the Otto (2015) analysis. Figure 2(a) shows the regression reconstructed temperature for 2014 plotted as a function of model temperatures in the preceding decade (2005-2014). Figure 2(b) shows the correlation pattern of 2005-2014 temperatures with the 2014 regression reconstruction in the CESM Large Ensemble. Ellipses are constructed to approximately highlight regions of high positive and negative correlation in the pattern. A central Pacific ellipse is centered on 2N,212E, while a South Pacific ellipse is centered on 34S,220E. Figure 2(c) employs a Pacific Climate Index specific to this analysis, constructed using the difference of mean annual temperatures in the years 2005-2014 in the two ellipses in Figure 2(b) (central Pacific minus south Pacific region), showing reconstructed 2014 regression temperatures as a function of the index for each member of the CESM Large Ensemble. A linear regression was computed to predict 2014 Otto (2015) forced temperatures as a function of the Pacific Climate Index. Dashed lines show the 5th and 95th percentile uncertainty in the regression coefficients. The same index is then calculated for the 2005-2014 period and the historical 1880-2000 period in the HadCRUT4 and GISTEMP datasets.
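The index-and-regression step behind Figure 2(c) can be sketched in the same spirit; the region means, index values and the observed-index number below are illustrative placeholders rather than CESM or observational output:

```python
import numpy as np

rng = np.random.default_rng(3)
n_members = 40

# Toy 2005-2014 regional means (K) for the two ellipse regions per member
central_pac = rng.normal(0.5, 0.1, n_members)   # around 2N, 212E
south_pac = rng.normal(0.3, 0.1, n_members)     # around 34S, 220E
pci = central_pac - south_pac                   # Pacific Climate Index

# Toy forced-2014 estimates, correlated with the index by construction,
# standing in for the per-member Otto reconstructions
forced_2014 = 0.8 + 0.3 * pci + 0.03 * rng.standard_normal(n_members)

# Linear fit predicting forced 2014 warming from the index, as in Fig. 2(c)
slope, intercept = np.polyfit(pci, forced_2014, 1)
obs_index = 0.15  # hypothetical observed 2005-2014 index value
print(f"predicted forced 2014 warming at obs index: "
      f"{intercept + slope * obs_index:.2f} K")
```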
Dan Miller says
I assume the question is can we stay below +1.5C warming without any intentional geo-engineering. Given that the Earth is not in energy balance, there is more warming in the pipeline. In addition, according to the recent PNAS paper by Yangyang Xu and Veerabhadran Ramanathan, “Well below 2 °C: Mitigation strategies for avoiding dangerous to catastrophic climate changes” (http://www.pnas.org/content/114/39/10315.full), manmade aerosols are currently “hiding” 0.9C of warming. Given those two factors, and ignoring future emissions that will drive the temperature even higher, we are already over +2C warming once we stop emitting short-lived coal smoke and other pollutants into the air and give the Earth time to reach temperature equilibrium.
So, without solar radiation management and/or direct air capture and sequestration of CO2, it does not appear geophysically possible to stay below +1.5C warming. The PNAS paper does suggest that focusing on reducing short-lived climate pollutants such as soot and methane is a good near-term strategy for buying time.
Clive Best says
Fig 1b shows that the relationship between temperature and emissions is not linear, independent of which temperature series you use. The only way this relationship could be linear would be if an increase in airborne fraction cancels out the logarithmic relationship between CO2 concentrations and radiative forcing.
The clear evidence is that this is not happening and the airborne fraction remains approximately constant. Carbon cycle models have overestimated CO2 concentrations and this explains why the carbon budget left to reach 1.5C has increased.
JCH says
~2005-~2014 is also the period visited by intensified trade winds.
“the heat uptake is by no means permanent: when the trade wind strength returns to normal – as it inevitably will – our research suggests heat will quickly accumulate in the atmosphere. So global temperatures look set to rise rapidly out of the hiatus, returning to the levels projected within as little as a decade.”
jai mitchell says
I would contend that recent CMIP5 model analysis has shown that the IPO shift to a negative phase beginning in 1998 was the result of regional NEGATIVE aerosol forcing shifting from the Western Hemisphere to the East, thereby masking the GHG forcing impact (more positive). These assumptions are critical to deriving the true variability, and they bias any ECS and carbon budget estimates that rely on observed warming.
Mal Adapted says
jai mitchell:
It just occurred to me that the topic of this post might be why Russell keeps nattering about the failure of ECS or TCR estimates to converge.
Jeremy Grimm says
I’m just a dumb layman who comes to this site from time to time to see what’s the skinny with Global Warming — so take my comment for what it’s worth ….
The Millar paper discussed in this post seems like an exercise in skinning fleas. I guess I’m wondering why Millar et al’s paper “has provoked a lot of lively discussion.” All the discussions, goals, deadlines, and agreements made, skimped on, and broken just use “Millar-et-al-style” flea skinning to justify “kicking the can further down the road.” Whether we exceed the 1.5 degree threshold, already exceeded it, or still have “carbon budget” remaining [sure sounds like Market-speak to me], the water in this pot is already too hot for this frog.
Matthew R Marler says
Good essay, imo.
Thank you.
Gorgon Zola says
#6
Correct. We can’t expect marginal differences to have a significant impact on things like the carbon budget, which in itself is already a complete misnomer.
-G.
Michael Roddy says
Thank you, Gorgon. The notion of a “carbon budget” assumes that we can keep roughly on our present path for a few more decades, which would be suicidal.
And thanks to Dan Miller. I didn’t realize that the aerosol cooling effect was that large.
Chris G says
Typo: ” that there paper ”
I won’t mind if you delete this.
Tony Weddle says
Yes, “plausible” would be a better word. It might well prove to have been impossible to keep temps from rising more than 1.5C (because they might have risen above that if we stopped all emissions now – e.g. the lack of aerosols alone might be enough to push temps beyond that). So Millar et al could not, scientifically, say that “1.5C is not geophysically impossible.” The best that they could say is that, given their choice of data sets, ignoring data beyond 2014 and assuming their calculations on that data are correct, it’s plausible that temperature rises could be restricted to 1.5C.
Dan Miller says
Regarding aerosols hiding a significant share of global warming, see Jim Hansen’s 2013 paper “Doubling Down on Our Faustian Bargain”:
http://www.columbia.edu/~jeh1/mailings/2013/20130329_FaustianBargain.pdf
Basically, coal puts up a warming gas (CO2) that lasts in the atmosphere for hundreds to thousands of years, and we are hiding about half of its effects with a gas (coal smoke) that lasts in the atmosphere for a few weeks. Hmmm, what could go wrong?
nigelj says
Regarding coal smoke aerosols, coal burning increased globally to some extent from 2002-2013, mainly due to Asia (I remember reading this somewhere), and has reduced slightly from approx. 2013-2017.
This correlates quite well with the so called pause and the big jump in global temperatures from 2015. Would coal have been a factor in both of these?
Might be a naive question. I’m just a layperson, not a climate scientist.
Dan Miller says
#13 Nigelj: Yes, coal smoke helps explain the “pause” and recent jump. It also helps explain the initial cooling after the Industrial Revolution began (the smoke effect overwhelmed the relatively weak warming effect back then), the increase of global temperature during WW2 (shutdown of industries) and decrease after WW2 (re-industrialization), and the acceleration in the 1970s after the passage of the Clean Air Act.
nigelj says
Dan Miller @14
That makes plenty of sense, except I don’t get shut down of industries in WW2. Didn’t industry just shift from consumer goods to military weapons?
Or perhaps you mean destruction of industry by bombing, but from what I read of war history this was replaced quite quickly (amazing what they did under pressure), and it was confined to just a few countries.
But ok, maybe industry did play some role. I think warming up to 1945 was mainly driven by CO2, solar and low volcanic emissions.
Digby Scorgie says
Dan Miller @14
I would’ve thought World War Two would’ve seen continued industry — all geared to producing arms. Or does the arms industry produce less smoke?
DP says
#14 The timing is slightly wrong. Surely it was in the depressed 1930s that industry would have produced less emissions, and that was a warm decade. Though global temperatures were high in the war years, in Europe there was a string of very severe winters. One of them stopped the German Panzers in front of Moscow, and they were unusually severe in England too. Some people (unscientifically of course) blamed cordite from bomb and shell explosions.
Kevin McKinney says
I agree with #15 & 16 that it’s very unlikely–pretty much impossible–that industrial production decreased during WW II. Production mostly shifted from ‘wants’ to immediate military needs.
I suspect data quality and availability are a bit of an issue, especially on a global basis, but for the US–then, I think, *much* the world’s largest economy–the picture is pretty clear. Insofar as we can take GDP as a proxy for emissions–and I think we can, up until the last decade or two–there was a marked *increase* in emissions:
https://fred.stlouisfed.org/graph/?id=GDPCA,
Dan Miller says
#15 & #16: Global GDP dropped during WW2 and spiked thereafter:
http://www2.york.psu.edu/~dxl31/econ14/lecture12.html
nigelj says
Dan Miller @19
“#15 & #16: Global GDP dropped during WW2 and spiked thereafter:”
I’m not so sure, and maybe you just didn’t look closely enough at the graph. Your graph shows global GDP per capita rising right to at least 1944, then dropping after this date for about 5 years, then rising from 1950 onwards. NASA GISS shows temperatures rising to about 1945, then dropping after that. So it’s hard to see a drop in industrial emissions being more than a small factor, at most, in warming to 1945.
I’m nit picking a bit. I agree with your overall assessment of aerosols over the last 200 years, and it was illuminating to me as I hadn’t considered it that way.