Overselling k-scale? Hmm
Some of the authors of a recent commentary on k-scale modeling respond to RealClimate.
[Read more…] about Overselling k-scale? Hmm
The modern demarcation problem
Defining (and enforcing) a clear line between information and misinformation is impossible, but that doesn’t mean misinformation doesn’t exist or that there is nothing to be done to combat it.
I found myself caught in an ‘interesting’ set of exchanges on twitter a few weeks ago (I won’t link to it to spare you the tedium, but you could probably find it if you look hard enough). The nominal issue was whether deplatforming known bull******s was useful in stemming the spread of misinformation (specifically with respect to discussions around COVID). There is evidence that this does in fact work to some extent, but the twitter thread quickly turned to the question of who decides what is misinformation in the first place, and then descended into a free-for-all where the very mention that misinformation existed, or that the scientific method provided a ratchet to detect it, was met with knee-jerk references to the Nazis and the Inquisition. So far, so usual, right?
While the specific thread was not particularly edifying, and I’ll grant that my tweets were not perfectly pitched for the specific audience, this is a very modern example of the classic Demarcation Problem (Stanford Encyclopedia of Philosophy) in the philosophy of science.
[Read more…] about The modern demarcation problem
Issues and Errors in a new Scafetta paper
Earlier this week, a new paper appeared in GRL by Nicola Scafetta (Scafetta, 2022) which purported to conclude that the CMIP6 models with medium or high climate sensitivity (higher than 3ºC) were not consistent with recent historical temperature changes. Since there have been a number of papers already on this topic, notably Tokarska et al (2020), which did not come to such a conclusion, it is worthwhile to investigate where Scafetta’s result comes from. Unfortunately, it appears to emerge from a mis-appreciation of what is in the CMIP6 archive, an inappropriate statistical test, and a total neglect of observational uncertainty and internal variability.
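As a rough illustration (not from the paper, and with made-up numbers), the sketch below shows why those last two terms matter: rather than comparing an observed trend to the ensemble mean alone, the spread of individual ensemble members (a stand-in for internal variability plus model differences) and the observational trend uncertainty should both enter the comparison. Every number here (trend values, spreads, ensemble size) is hypothetical.

```python
# Illustrative sketch only: hypothetical numbers, not data from Scafetta (2022)
# or the CMIP6 archive. The point is that a model-vs-observation trend test
# should account for the ensemble spread (internal variability plus model
# differences) and the observational uncertainty, not just the ensemble mean.
import numpy as np

rng = np.random.default_rng(0)

obs_trend = 0.24                                  # hypothetical observed trend (degC/decade)
obs_sigma = 0.03                                  # hypothetical 1-sigma uncertainty on that trend
member_trends = rng.normal(0.28, 0.06, size=40)   # hypothetical individual ensemble members

ens_mean = member_trends.mean()
print(f"ensemble mean minus obs: {ens_mean - obs_trend:.2f} degC/decade")

# Combine the member spread with the observational uncertainty before judging
# whether the difference is meaningful.
total_sigma = np.sqrt(member_trends.std(ddof=1)**2 + obs_sigma**2)
print(f"difference in sigma units: {(ens_mean - obs_trend) / total_sigma:.2f}")
```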
[Read more…] about Issues and Errors in a new Scafetta paper
References
- N. Scafetta, "Advanced Testing of Low, Medium, and High ECS CMIP6 GCM Simulations Versus ERA5‐T2m", Geophysical Research Letters, vol. 49, 2022. http://dx.doi.org/10.1029/2022GL097716
- K.B. Tokarska, M.B. Stolpe, S. Sippel, E.M. Fischer, C.J. Smith, F. Lehner, and R. Knutti, "Past warming trend constrains future warming in CMIP6 models", Science Advances, vol. 6, 2020. http://dx.doi.org/10.1126/sciadv.aaz9549
The Future of Climate Modeling?
There was an interesting workshop last week focused on the Future of Climate Modelling. It was run by the World Climate Research Program (WCRP) Core Project on Earth System Modelling and Observations (ESMO) which is part of a bewildering alphabet soup of various advisory committees that exist for mostly unclear historical reasons. This one actually does something useful – namely it helps organize the CMIP activities that many modeling groups contribute to (which inform the assessment reports like IPCC and various national Climate Assessments). They had a wide variety of people and perspectives to discuss the changing landscape of climate modeling and what people want from these models. You won’t agree with everything, but it was informative.
[Read more…] about The Future of Climate Modeling?
“Don’t Look Up”
The highlight of the movie season for climate science has clearly been the release on Dec 24th 2021 of “Don’t Look Up”. While nominally about a different kind of disaster – the discovery of a comet heading to Earth on a collision course – the skewering of our current science-policy dysfunction transcends the specifics and makes a powerful metaphor for climate change, and even the ongoing COVID-19 pandemic.
[Read more…] about “Don’t Look Up”
BAU wow wow
How should we discuss scenarios of future emissions? What is the range of scenarios we should explore? These are constant issues in climate modeling and policy discussions, and need to be reassessed every few years as knowledge improves.
I discussed some of this in a post on worst case scenarios a few months ago, but the issue has gained more prominence with a commentary by Zeke Hausfather and Glen Peters in Nature this week (which itself partially derives from ongoing twitter arguments which I won’t link to because there are only so many rabbit holes that you want to fall into).
My brief response to this is here though:
Mike Mann has a short discussion on this as well. But there are many different perspectives around – ranging from the merely posturing to the credible and constructive. The bigger questions are certainly worth discussing, but if the upshot of the current focus is that we just stop using the term ‘business-as-usual’ (as was suggested in the last IPCC report), then that is fine with me, but just not very substantive.
References
- Z. Hausfather, and G.P. Peters, "Emissions – the ‘business as usual’ story is misleading", Nature, vol. 577, pp. 618-620, 2020. http://dx.doi.org/10.1038/d41586-020-00177-3
Update day 2020!
Following more than a decade of tradition (at least), I’ve now updated the model-observation comparison page to include observed data through to the end of 2019.
As we discussed a couple of weeks ago, 2019 was the second warmest year in the surface datasets (with the exception of HadCRUT4), and 1st, 2nd or 3rd in the satellite datasets (depending on which one). Since 2019 came in slightly above the linear trends estimated through 2018, including it nudges the trends through 2019 slightly upward. The trends in the surface datasets are increasingly diverging because of their differing treatment of the polar regions. The slightly longer trend period also reduces the uncertainty in the linear trend in the climate models.
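For anyone who wants to reproduce this kind of number themselves, here is a minimal sketch (with synthetic data standing in for any of the actual temperature series, and assuming NumPy and SciPy are available) of fitting a linear trend to annual global mean anomalies and reporting its standard error; adding an extra year that sits above the previous trend line nudges the slope up a little and shrinks the uncertainty.

```python
# Minimal sketch with synthetic data (not GISTEMP, HadCRUT or any real series):
# fit a linear trend to annual global mean temperature anomalies and report the
# slope with its standard error. A longer record reduces the standard error.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1979, 2020)          # e.g. a satellite-era period ending in 2019
true_trend = 0.018                     # assumed underlying trend, degC/yr
anoms = true_trend * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

res = stats.linregress(years, anoms)
print(f"trend: {10 * res.slope:.3f} +/- {10 * res.stderr:.3f} degC/decade (1-sigma)")

# Dropping the last year shows how a shorter period widens the uncertainty.
res_short = stats.linregress(years[:-1], anoms[:-1])
print(f"through {years[-2]}: {10 * res_short.slope:.3f} +/- {10 * res_short.stderr:.3f} degC/decade")
```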
To summarize, the 1981 prediction from Hansen et al (1981) continues to underpredict the temperature trends due to an underestimate of the transient climate response. The projections in Hansen et al. (1988) bracket the actual changes, with the slight overestimate in scenario B due to the excessive anticipated growth rate of CFCs and CH4 which did not materialize. The CMIP3 simulations continue to be spot on (remarkably), with the trend in the multi-model ensemble mean effectively indistinguishable from the trends in the observations. Note that this doesn’t mean that CMIP3 ensemble means are perfect – far from it. For Arctic trends (incl. sea ice) they grossly underestimated the changes, and overestimated them in the tropics.
The CMIP5 ensemble mean global surface temperature trends slightly overestimate the observed trend, mainly because of a short-term overestimate of solar and volcanic forcings that was built into the design of the simulations around 2009/2010 (see Schmidt et al (2014)). This is also apparent in the MSU TMT trends, where the observed trends (which themselves have a large spread) are at the edge of the modeled histogram.
A number of people have remarked over time on the reduction of the spread in the model projections in CMIP5 compared to CMIP3 (by about 20%). This is due to a wider spread in forcings used in CMIP3 – models varied enormously on whether they included aerosol indirect effects, ozone depletion and what kind of land surface forcing they had. In CMIP5, most of these elements had been standardized. This reduced the spread, but at the cost of underestimating the uncertainty in the forcings. In CMIP6, there will be a more controlled exploration of the forcing uncertainty (but given the greater spread of the climate sensitivities, it might be a minor issue).
Over the years, the model-observation comparison page has regularly been among the top ten most-viewed pages on RealClimate, so it obviously fills a need. We’ll continue to keep it updated, and perhaps expand it over time. Please leave suggestions for changes in the comments below.
References
- J. Hansen, D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind, and G. Russell, "Climate Impact of Increasing Atmospheric Carbon Dioxide", Science, vol. 213, pp. 957-966, 1981. http://dx.doi.org/10.1126/science.213.4511.957
- J. Hansen, I. Fung, A. Lacis, D. Rind, S. Lebedeff, R. Ruedy, G. Russell, and P. Stone, "Global climate changes as forecast by Goddard Institute for Space Studies three‐dimensional model", Journal of Geophysical Research: Atmospheres, vol. 93, pp. 9341-9364, 1988. http://dx.doi.org/10.1029/JD093iD08p09341
- G.A. Schmidt, D.T. Shindell, and K. Tsigaridis, "Reconciling warming trends", Nature Geoscience, vol. 7, pp. 158-160, 2014. http://dx.doi.org/10.1038/ngeo2105
10 years on
I woke up on Tuesday, 17 Nov 2009 completely unaware of what was about to unfold. I tried to log in to RealClimate, but for some reason my login did not work. Neither did the admin login. I logged in to the back-end via ssh, only to be inexplicably logged out again. I did it again. No dice. I then called the hosting company and told them to take us offline until I could see what was going on. When I did get control back from the hacker (and hacker it was), there was a large uploaded file on our server, and a draft post ready to go announcing the theft of the CRU emails. And so it began.
From “One year later”, 2010.
Many people are weighing in on the 10 year anniversary of ‘Climategate’ – the Observer, a documentary on BBC4 (where I was interviewed), Mike at Newsweek – but I’ve struggled to think of something actually interesting to say.
It’s hard because even in ten years almost everything and yet nothing has changed. The social media landscape has changed beyond recognition, and yet the fever swamps of dueling blogs and comment threads have simply been replaced by troll farms and noise-generating disinformation machines on Facebook and Twitter. The nominally serious ‘issues’ touched on by the email theft – how robust are the estimates of global temperature over the instrumental period, what does the proxy record show, etc. – have all been settled in favor of the mainstream by scientists plodding along in normal science mode, incrementally improving the analyses, and yet they remain the most repeated denier talking points.
[Read more…] about 10 years on
Just the facts?
In the wake of the appalling mass shootings last weekend, Neil deGrasse Tyson (the pre-eminent scientist/communicator in the US) tweeted some facts that were, let’s just say, not well received (and for which he kind of apologised). At least one of the facts he tweeted about was incorrect (deaths by medical errors are far smaller). However, even if it had been correct, the overall response would have been the same, because the reaction was not driven by the specifics of what was said, but rather by the implied message of the context in which it was said. This is a key feature (or bug) of communications in a politicized environment, and one that continues to trip up people who are experienced enough to know better.
[Read more…] about Just the facts?
Koonin’s case for yet another review of climate science
We watch long YouTube videos so you don’t have to.
In the seemingly endless deliberations on whether there should be a ‘red team’ exercise to review various climate science reports, Scott Waldman reported last week that the original architect of the idea, Steve Koonin, had given a talk touching on the topic at Purdue University in Indiana last month. Since the talk is online, I thought it might be worth a viewing.
[Spoiler alert. It wasn’t].
[Read more…] about Koonin’s case for yet another review of climate science