This month’s open thread on climate topics (no joke).
Some new CMIP6 MSU comparisons
We add some of the CMIP6 models to the updateable MSU [and SST] comparisons.
After my annual update, I was pointed to some MSU-related diagnostics for many of the CMIP6 models (24 of them at least) from Po-Chedley et al. (2022), courtesy of Ben Santer. These are slightly different from what we have shown for CMIP5, in that the diagnostic is the tropical corrected-TMT (following Fu et al., 2004), which is a better representation of the mid-troposphere than the classic TMT diagnostic because it uses the lower-stratosphere record to remove the stratospheric contamination.
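For anyone who wants to reproduce the corrected-TMT diagnostic, it is just a weighted combination of the TMT and TLS channels. Here is a minimal sketch; the coefficients are illustrative round numbers, not the exact regression weights from Fu et al. (2004):

```python
# Minimal sketch of a Fu et al. (2004)-style correction: combine the
# mid-troposphere (TMT) and lower-stratosphere (TLS) anomaly series so that
# the stratospheric cooling contamination in TMT is removed.
# The coefficients are round illustrative values, not the published weights.
import numpy as np

def corrected_tmt(tmt_anom, tls_anom, a_mt=1.1, a_ls=-0.1):
    """Return corrected-TMT anomalies from TMT and TLS anomaly arrays."""
    return a_mt * np.asarray(tmt_anom) + a_ls * np.asarray(tls_anom)
```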
References
- S. Po-Chedley, J.T. Fasullo, N. Siler, Z.M. Labe, E.A. Barnes, C.J.W. Bonfils, and B.D. Santer, "Internal variability and forcing influence model–satellite differences in the rate of tropical tropospheric warming", Proceedings of the National Academy of Sciences, vol. 119, 2022. http://dx.doi.org/10.1073/pnas.2209431119
- Q. Fu, C.M. Johanson, S.G. Warren, and D.J. Seidel, "Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends", Nature, vol. 429, pp. 55-58, 2004. http://dx.doi.org/10.1038/nature02524
How not to science
A trip down memory lane and a lesson on scientific integrity.
I had reason to be reviewing the history of MSU satellite retrievals for atmospheric temperatures recently. It’s a fascinating story of technology, creativity, hubris, error, imagination, rivalry, politics, and (for some) a search for scientific consilience – worthy of a movie script perhaps? – but I want to highlight a minor little thing. Something so small that I’d never noticed it before, and I don’t recall anyone else pointing it out, but it is something I find very telling.
The story starts in the early ’90s, but what caught my eye was a single line in an op-ed (sub. req.) written two decades later:
… in 1994 we published an article in the journal Nature showing that the actual global temperature trend was “one-quarter of the magnitude of climate model results.”
– McNider and Christy, Feb 19th 2014, Wall Street Journal
Most of the op-ed is a rather tired rehash of faux outrage based on a comment made by John Kerry (the then Secretary of State) and we can skip right past that. Its only other claim of note is an early outing of John Christy’s misleading graphs comparing the CMIP5 models to the satellite data, but we’ll get back to that later.
First though, let’s dig into that line. The 1994 article is a short correspondence piece in Nature, in which Christy and McNider analyzed the MSU2R lower-troposphere dataset, using ENSO and stratospheric volcanic effects to derive an ‘underlying’ global warming trend of 0.09 K/decade. This was to be compared with “warming rates of 0.3 to 0.4 K/decade” from models, referenced to Manabe et al. (1991) and Boer et al. (1992). Hence the “one quarter” claim.
But let’s dig deeper into each of those elements in turn. First, 1994 was pretty early on in terms of MSU science. The raw trend in the (then Version C) MSU2R record from 1979-1993 was -0.04 K/decade. [Remember ‘satellite cooling’?]. This was before Wentz and Schabel (1998) pointed out that orbital decay in the NOAA satellites was imparting a strong cooling bias (about 0.12 K/decade) on the MSU2R (TLT) record. Secondly, the two cited modeling papers don’t actually give estimated warming trends for the 1980s and early 90s. The first is a transient model run using a canonical 1% increasing CO2 – a standard experiment, but not one intended to match the real-world growth of CO2 concentrations. The second is a simple equilibrium 2xCO2 run with the Canadian climate model, and does not report relevant transient warming rates at all. This odd referencing was pointed out in correspondence with Spencer and Christy by Hansen et al. (1995), who also noted that underlying model SAT trends for the relevant period were expected to be more like 0.1-0.15 K/decade. So the claim that the MSU temperatures were warming at “one quarter” the rate of the models wasn’t even valid in 1994. They might have more credibly claimed “two thirds” the rate, but the uncertainties are such that no such claim would have been robust (for instance, the uncertainties on the linear regression alone are ~±0.14 K/dec).
But it gets worse. By 2014, McNider and Christy were well aware of the orbital decay correction (1998), and they were also aware of the diurnal-drift correction that was needed because of a sign error introduced while trying to fix the orbital decay issue (discovered in 2005). The version of the MSU2R product at the beginning of 2014 was version 5.5, which had a raw trend of -0.01 K/decade for 1979-1993 (±0.18 K/dec 95% CI, natch). Using a methodology analogous to that used in 1994 (see figure to the right), the underlying linear trend after accounting for ENSO and volcanic aerosols was… 0.15 K/dec! Almost identical to the expected trend from models!
So not only was their original claim incorrect at the time, but had they repeated the analysis in 2014, their own updated data and method would have shown that there was no discrepancy at all.
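For concreteness, here is a minimal sketch of that kind of ENSO-and-volcano adjustment: regress the temperature anomalies on a trend term, an ENSO index and a volcanic-aerosol proxy, and read off the underlying trend. The variable names, index choice and absence of lags are my assumptions for illustration, not the exact setup used in 1994 or in the updated analysis.

```python
# Sketch: estimate an 'underlying' trend after accounting for ENSO and
# volcanic aerosols via ordinary least squares. Inputs are annual (or
# monthly) series of equal length; the predictors are illustrative.
import numpy as np

def underlying_trend(years, temp, enso, aerosol):
    """OLS fit of temp on [const, trend, ENSO, aerosol]; returns K/decade."""
    X = np.column_stack([
        np.ones_like(years, dtype=float),
        years - np.mean(years),   # linear trend term (per year)
        enso,                     # ENSO index (e.g. lagged a few months)
        aerosol,                  # stratospheric aerosol optical depth
    ])
    coeffs, *_ = np.linalg.lstsq(X, temp, rcond=None)
    return coeffs[1] * 10.0       # convert K/yr to K/decade
```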
Now in 2014, there was a longer record and more suitable models to compare to. Models had been run with appropriate volcanic forcings and in large enough ensembles that there was a quantified spread of expected trends. Comparisons could now be done in a more sophisticated way, comparing like with like and taking account of many different elements of uncertainty (forcings, weather, structural effects in models and observations, etc.). But McNider and Christy chose not to do that.
Instead, they chose to hide the structural uncertainty in the MSU retrievals (the TMT trends for 1979-2013 in UAH v5.5 and RSS v3.3 were 0.04 and 0.08 ±0.05 K/dec respectively – a factor of two apart!), to ignore the spread in the CMIP5 models’ TMT trends [0.08, 0.36 K/dec], and to graph the comparison in a way that maximised the visual disparity – frankly, a misleading choice. Additionally, they decided to highlight the slower-warming TMT records instead of the TLT record they had discussed in 1994. For contrast, the UAH v5.5 TLT trend for 1979-2013 was 0.14 ± 0.05 K/dec.
But all these choices were made in the service of rhetoric, not science, to suggest that models are, and had always been, wrong, and that the UAH MSU data had always been right. A claim moreover that is totally backwards.
Richard Feynman often spoke about a certain kind of self-critical integrity as being necessary to do credible science. That kind of integrity was in very short supply in this op-ed.
References
- J.R. Christy, and R.T. McNider, "Satellite greenhouse signal", Nature, vol. 367, pp. 325-325, 1994. http://dx.doi.org/10.1038/367325a0
- F.J. Wentz, and M. Schabel, "Effects of orbital decay on satellite-derived lower-tropospheric temperature trends", Nature, vol. 394, pp. 661-664, 1998. http://dx.doi.org/10.1038/29267
- J. Hansen, H. Wilson, M. Sato, R. Ruedy, K. Shah, and E. Hansen, "Satellite and surface temperature data at odds?", Climatic Change, vol. 30, pp. 103-117, 1995. http://dx.doi.org/10.1007/BF01093228
Unforced variations: March 2023
This month’s open thread. Antarctic sea ice anyone?
The established ground and new ideas
Science is naturally conservative, and scepticism toward new ideas helps ensure high scientific quality. We have more confidence when different scholars arrive at the same conclusion independently of each other. But scientific research also brings about discoveries and innovations, and it typically takes time for new understanding to gain acknowledgement and acceptance. In the meantime, it is uncertain whether new ideas really represent progress or are misconceived. Sometimes we can shed more light on them through scientific discussions.
2022 updates to model-observation comparisons
Our annual post on the comparisons between long-standing observational records and climate models.
As frequent readers will know, we maintain a page of comparisons between climate model projections and the relevant observational records, and since they are mostly for the global mean numbers, these get updated once the temperature products get updated for the prior full year. This has now been completed for 2022.
Unforced variations: Feb 2023
2022 updates to the temperature records
Another January, another annual data point.
As in years past, the annual rollout of the GISTEMP, NOAA, HadCRUT and Berkeley Earth analyses of the surface temperature record has brought forth many stories about the long-term trends and specific events of 2022 – mostly focused on the impacts of the (ongoing) La Niña event and the litany of weather extremes (record years in the UK and elsewhere, intense rainfall and flooding, Hurricane Ian, etc.).
But there are a few things that don’t get covered much in the mainstream stories, and so we can dig into them a bit here.
What influence does ENSO really have?
It’s well known (among readers here, I assume), that ENSO influences the interannual variability of the climate system and the annual mean temperatures. El Niño events enhance global warming (as in 1998, 2010, 2016 etc.) and La Niña events (2011, 2018, 2021, 2022 etc.) impart a slight cooling.
Consequently, a line drawn from an El Niño year to a subsequent La Niña year will almost always show a cooling – a fact well known to the climate disinformers (though they are not so quick to show the uncertainties in such cherry picks!). For instance, the trends from 2016 to 2022 are -0.12±0.37ºC/dec but with such large uncertainties, the calculation is meaningless. Far more predictive are the long term trends which are consistently (now) above 0.2ºC/dec (and with much smaller uncertainties ±0.02ºC/dec for the last 40 years).
It’s worth exploring quantitatively what the impact is, and this is something I’ve been looking at for a while. It’s easy enough to correlate the detrended annual anomalies with the ENSO index (the maximum correlation is for the early-spring values), and then use that regression to estimate the specific impact for any year, and to estimate an ENSO-corrected time series.
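As a hedged sketch of that procedure, assuming an annual-mean anomaly series and an early-spring ENSO index (the names and the simple linear detrending are my illustrative choices, not those of any particular product):

```python
# Sketch: estimate the ENSO-congruent component of detrended annual
# anomalies and subtract it to form an ENSO-corrected series.
import numpy as np

def enso_corrected(years, temp, nino34):
    """Subtract the ENSO-congruent component from an annual anomaly series."""
    years = np.asarray(years, dtype=float)
    temp = np.asarray(temp, dtype=float)
    nino34 = np.asarray(nino34, dtype=float)
    # Detrend with a simple linear fit before estimating the ENSO regression.
    trend = np.polyval(np.polyfit(years, temp, 1), years)
    resid = temp - trend
    # Regression coefficient of detrended anomalies on the ENSO index.
    beta = np.polyfit(nino34, resid, 1)[0]
    # Remove the ENSO-congruent part from the raw series.
    return temp - beta * (nino34 - nino34.mean())
```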
The surface temperature records are becoming more coherent
Back in 2013/2014, the differences between the surface indices (HadCRUT3, NOAA v3 and GISTEMP v3) contributed to the initial confusion related to the ‘pause’, which was seemingly evident in HadCRUT3, but not so much in the other records (see this discussion from 2015). Since then all of the series have adopted improved SST homogenization, and HadCRUT5 adopted a similar interpolation across the pole as was used in the GISTEMP products. From next month onwards, NOAA will move to v5.1 which will now incorporate Arctic buoy data (a great innovation) and also provide a spatially complete record. The consequence is that the surface instrument records will be far more coherent than they have ever been. Some differences remain pre-WW2 (lots of SST inhomogeneities to deal with) and in the 19th C (where data sparsity is a real challenge).
The structural uncertainty in satellite records is large
While the surface-based records are becoming more consistent, the various satellite records are as far apart as ever. The differences between the RSS and UAH TLT records are much larger than the spread in the surface records (indeed, they span those trends), making any claims of greater precision somewhat dubious. Similarly, the difference in the versions of the AIRS records (v6 vs. v7) of ground temperature anomalies produce quite distinct trends (in the case of AIRS v6, Nov 2022 was exceptionally cold, which was not seen in other records).
When will we reach 1.5ºC above the pre-industrial?
This was a very common question in the press interviews this week. It has a few distinct components – what is the ‘pre-industrial’ period that’s being referenced, what is the uncertainty in that baseline, and what are the differences in the long term records since then?
The latest IPCC report discusses this issue in some depth, but the basic notion is that, since the impacts expected at 1.5ºC are derived in large part from the CMIP model simulations that have a nominal baseline of ~1850, ‘pre-industrial’ temperatures are usually assumed to be some kind of mid-19th-century average. This isn’t a universally accepted notion – Hawkins et al. (2017), for instance, suggest we should use a baseline from the 18th century – but it is one that is easier to operationalise.
The baseline of 1880-1900 can be calculated for all the long temperature series, and with respect to that 2022 (or the last five years) is between 1.1 and 1.3ºC warmer (with Berkeley Earth showing the most warming). For the series that go back to 1850, the difference between 1850-1900 and 1880-1900 is 0.01 to 0.03ºC, so probably negligible for this purpose.
Linear trends since 1996 are robustly just over 0.2ºC/decade in all series, so that suggests between one and two decades are required for the mean climate to exceed 1.5ºC, that is around 2032 to 2042. The first specific year that breaches this threshold will come earlier and will likely be associated with a big El Niño. Assuming something like 2016 (a +0.11ºC effect), that implies you might see the exceedance some 5 years earlier – say 2027 to 2037 (depending a little on the time series you are following).
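Spelling out that arithmetic as a back-of-envelope sketch (using only the numbers quoted above; the ~5-year El Niño offset is the same figure as in the text):

```python
# Back-of-envelope crossing-year estimate from the numbers quoted above:
# 1.1-1.3 C of warming by 2022 (relative to 1880-1900), a trend just over
# 0.2 C/decade, and a ~+0.11 C boost from a big El Nino year.
trend = 0.2      # C per decade
ref_year = 2022  # last full year in the series

for warming_now in (1.1, 1.3):
    years_to_1p5 = (1.5 - warming_now) / trend * 10       # 10 to 20 years
    mean_climate_cross = ref_year + years_to_1p5          # ~2032 to ~2042
    # +0.11 C corresponds to ~5-6 years of trend, so a single year could
    # breach 1.5 C that much earlier (~2027 to ~2037).
    first_single_year = mean_climate_cross - 0.11 / trend * 10
    print(f"start at {warming_now} C: mean climate ~{mean_climate_cross:.1f}, "
          f"first single year ~{first_single_year:.1f}")
```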
2023 is starting the year with a mild La Niña, which is being forecast to switch to neutral conditions by mid-year. Should we see signs of an El Niño developing towards the end of the year, that will heavily favor 2024 to be a new record, though not one that is likely to exceed 1.5ºC however you calculate it.
[Aside: In contrast to my reasoning here, the last decadal outlook from the UK MetOffice/WMO suggested that 2024 has a 50-50 chance of exceeding 1.5ºC, some 5 or so years earlier than I’d suggest, and that an individual year might reach 1.7ºC above the PI in the next five years! I don’t know why this is different – it could be a larger variance associated with ENSO in their models, it could be a higher present-day baseline (but I don’t think so), or a faster warming rate than the linear trend (which could relate to stronger forcings, or a higher effective sensitivity). Any insight on this would be welcome!]
References
- E. Hawkins, P. Ortega, E. Suckling, A. Schurer, G. Hegerl, P. Jones, M. Joshi, T.J. Osborn, V. Masson-Delmotte, J. Mignot, P. Thorne, and G.J. van Oldenborgh, "Estimating Changes in Global Temperature since the Preindustrial Period", Bulletin of the American Meteorological Society, vol. 98, pp. 1841-1856, 2017. http://dx.doi.org/10.1175/BAMS-D-16-0007.1
Unforced variations: Jan 2023
The water south of Greenland has been cooling, so what causes that?
Let’s compare two possibilities by a back-of-envelope calculation.
(1) Is it due to a reduced heat transport of the Atlantic Meridional Overturning Circulation (AMOC)?
(2) Or is it simply due to the influx of cold meltwater as the Greenland Ice Sheet is losing ice?
The latter is often suggested. The meltwater also contributes indirectly to slowing the AMOC, but not because it is cold but because it is freshwater (not saline), which contributes to the first option (i.e. AMOC decline).
AMOC heat transport
For that we take the AMOC flow rate times the temperature difference of 15 °C between the northward upper branch and southward deep return flow to obtain the heat transport.
17,000,000 m³/s x 15 K x 1025 kg/m³ x 4 kJ/kgK = 1 PW (1)
(Here, 1 PW = 10¹⁵ Watt and 4 kJ/kgK is the heat capacity of water.)
An AMOC weakening by 15 % thus cools the region at a rate of 0.15 PW = 1.5 x 10¹⁴ W and according to model simulations can fully explain the observed cooling trend (2). Of course, this slowdown is not only due to Greenland meltwater – other factors like increasing precipitation probably play a larger role, but the impact of Greenland melting is not negligible, as we argue in (3).
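The same back-of-envelope numbers as a tiny script (a sketch using only the values quoted above):

```python
# AMOC heat transport and the cooling from a 15% weakening, as quoted above.
flow = 17e6      # AMOC volume transport, m^3/s (17 Sv)
dT = 15.0        # K, upper branch minus deep return flow
rho = 1025.0     # kg/m^3, seawater density
cp = 4000.0      # J/(kg K), heat capacity of water

heat_transport = flow * dT * rho * cp     # ~1.0e15 W, i.e. about 1 PW
cooling_15pct = 0.15 * heat_transport     # ~1.5e14 W
print(f"{heat_transport:.2e} W total, {cooling_15pct:.2e} W for a 15% weakening")
```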
Greenland ice melt
Here we start by taking the Greenland mass loss rate into the ocean times the temperature difference between the meltwater and the water it replaces. Note that we are interested in the longer-term temperature trend over decades for the region, with the meltwater properly mixed in, not in some temporary patches of meltwater floating locally at the surface.
Total Greenland mass loss has been on average 270 Gt/year for the last two decades (4).
Most of that evaporates, though; what ends up in the ocean, according to a recent study by Jason Box (5), is around 100 Gt/year, about 30% of which is in the form of ice and 70% in the form of meltwater.
100 Gt/year = 3000 tons/second – that sounds like a lot, but the AMOC flow is more than 5000 times larger.
Assuming the ice and meltwater runoff occurs at 0 °C and replaces water that is 10 °C (a very high assumption corresponding to summer conditions and not the long-term average), the cooling rate is:
3,000,000 kg/s x 10 K x 4 kJ/kgK = 1.2 x 10¹¹ W
So in comparison, the cooling effect of a 15 % AMOC slowdown is over 1,000 times larger than the direct cooling effect of the Greenland meltwater.
For the part entering the ocean as ice, we must also consider that to melt ice requires energy. The heat of fusion of water is 334 kJ/kg, so that adds 900 tons/s x 334 kJ/kg = 3 x 10¹¹ W.
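And the meltwater side of the comparison in the same style (values as quoted above; the seconds-per-year conversion and the ratio at the end are the only additions):

```python
# Direct cooling from Greenland mass loss entering the ocean, as quoted above.
mass_flux = 100e12 / (365.25 * 24 * 3600)   # 100 Gt/yr in kg/s (~3000 t/s)
cp = 4000.0        # J/(kg K), heat capacity of water
dT = 10.0          # K, generous summer-like temperature difference
L_fusion = 334e3   # J/kg, heat of fusion of water

sensible = mass_flux * dT * cp        # ~1.2e11 W (meltwater and ice at 0 C)
latent = 0.3 * mass_flux * L_fusion   # ~3e11 W (melting the ~30% arriving as ice)
amoc_cooling = 1.5e14                 # W, from the 15% AMOC-weakening estimate above
print(f"sensible: {sensible:.1e} W, latent: {latent:.1e} W, "
      f"AMOC/sensible ratio: ~{amoc_cooling / sensible:.0f}x")
```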
So it turns out that those suggesting that ‘cold’ meltwater might cause the cold blob in the northern Atlantic are doubly wrong: if we talk about the direct impact of stuff coming off Greenland, then ice, and the energy required to melt it, is the dominant factor. But both the direct effect of meltwater and of icebergs entering the ocean are completely dwarfed by the weakening of the AMOC (regardless of whether we take the numbers of Box et al. or other estimates). And Greenland’s contribution to that is not because the meltwater is ‘cold’, but because it is fresh – it contains no salt and dilutes the saltiness of the ocean water, thereby reducing its density.
As an additional observation: the cooling patch shown above often vanishes in summer, covered up by a warm surface layer – just when the Greenland melt season is on – only to resurface when deeper mixing starts in autumn. Which again supports the idea that it is not due to a direct effect of cold meltwater influx. Also compare the temperature change directly at the Greenland coast, where the meltwater enters, in the image above.
Finally, some have suggested that the cold blob south of Greenland has been caused by increased heat loss to the atmosphere. That of course is relevant for short-term weather variability – if a cold wind blows over the ocean it will of course cool the surface – but I do not think it can explain the long-term trend, as we discussed earlier here at Realclimate.
References
1. Trenberth, K. E. & Fasullo, J. T. (2017) Atlantic meridional heat transports computed from balancing Earth’s energy locally, Geophys. Res. Let. 44: 1919-1927.
2. Caesar, L., Rahmstorf, S., Robinson, A., Feulner, G., & Saba, V. (2018) Observed fingerprint of a weakening Atlantic Ocean overturning circulation, Nature 556: 191-196.
3. Rahmstorf, S., J.E. Box, G. Feulner, M.E. Mann, A. Robinson, S. Rutherford, and E.J. Schaffernicht, 2015: Exceptional twentieth-century slowdown in Atlantic Ocean overturning circulation. Nature Climate Change, 5, 475–480, doi:10.1038/nclimate2554.
4. NASA Vital Signs, https://climate.nasa.gov/vital-signs/ice-sheets/
5. Box, J. E., et al. (2022), Greenland ice sheet climate disequilibrium and committed sea-level rise, Nature Clim. Change, 12(9), 808-813, doi: 10.1038/s41558-022-01