There was an interesting workshop last week focused on the Future of Climate Modelling. It was run by the World Climate Research Program (WCRP) Core Project on Earth System Modelling and Observations (ESMO), which is part of a bewildering alphabet soup of advisory committees that exist for mostly unclear historical reasons. This one actually does something useful – namely, it helps organize the CMIP activities that many modeling groups contribute to (and which inform assessment reports like those of the IPCC and various national climate assessments). The workshop brought together a wide variety of people and perspectives to discuss the changing landscape of climate modeling and what people want from these models. You won’t agree with everything, but it was informative.
The workshop was in four quite digestible chunks, which are all on YouTube:
Day 1 (21st March): (3h 10min)
Day 2 (22nd March): (1h 54min)
Day 3 (23rd March): (1h 33min)
Day 4 (24th March): (1h 45min)
Many of the presentations are also available. The organizers are working on a paper of some sort that digests what was said and will, I think, be reaching out to a wider audience for feedback.
TL;DR summary:
The main themes were familiar – how should we prioritize new activities (at the community level) given limited resources? Higher resolution? More complexity? More initial condition ensembles? More forcing ensembles? More perturbed parameter ensembles? More machine learning? Better post-processing? All of the above in little bits? In reality, these decisions are taken at the model-group, national, or agency funding-manager level, not by international committees, but the facilitating role the committees play can increase the utility of the individual group contributions and guide some choices. The tensions between these different directions have existed for decades, but some of the new elements (the role of AI/ML, the increased spread of ECS in CMIP6, the demonstrated utility of large ensembles, etc.) add some wrinkles to the discussion.
One new theme that hasn’t come up much before at this level is the carbon footprint of these activities – at the supercomputer centers, but also from the in-person workshops and international meetings that until recently were commonplace. This was a virtual international meeting with active participation from Asia, Australasia, Europe, and the Americas, which a few years ago would certainly have been in-person, with a much smaller attendance and a much higher cost. But whether this scales to the bigger meetings, and how we can provide the important mentoring/socializing/community-building/career aspects of the previous practice, is as yet unclear.
Susan Anderson says
Thank you for this. Unfortunately, I fall in the tl;dr part of the population, but may make the effort to view some of the longer items. Getting people more informed about how results are obtained is always useful.
Gin says
I can only imagine the alphabet soup of groups for ‘oversight’. Micro-management tends to sneak in via such groups. Each group should have its own writers to digest and break down the info from the main office of RC writers, re-written by their own writers to the specifics of their groups. It could be somewhat less work for you. Your office would only need to ensure information for each group is included. I love point form when delivering info but know it’s not popular either. Short and to the point is best, I think. I’m glad you and all the contributing scientists do have your notes and thoughts in one spot though, thank you.
Barton Paul Levenson says
Better radiative transfer!
Kevin McKinney says
Thanks for the summary!
Paul Pukite (@whut) says
Regarding
It’s been said that increasing the numerical horsepower will often only get you to the incorrect answer faster. But what happens when the algorithm can be revolutionized, and all the speed of a supercomputer becomes less of a requirement? A NASA JPL researcher named JH Shirley has a long preprint (somewhat rambling) on arXiv (https://arxiv.org/abs/2112.02186) arguing that some aspects of climate science should be reconsidered:
If indeed something was left out and climate research has been spinning its wheels, isn’t it a worthwhile endeavor to spend effort on that instead of just pushing for more supercomputer cycles?
Killian says
Could this be why GHG levels and temps are well-modeled but effects are strongly minimized compared to reality?
Paul Pukite (@whut) says
Definitely variability is not accounted for as well as trends.
John A Boothroyd says
I posted this further down in the reply list; however, it fits much better here. It answers what you state as the problem.
Based on the discussions above, the actual modeling – which doesn’t work – is of little concern. I find a lot that needs to be fixed; however, many take that as climate change denial rather than a correction of their logic and assumptions.
Short story that makes the point: I asked my Dad, 80-something at the time, why he still had a software job at a time when most were out of work in California. I’m not very good at the programming end; however, I know what it’s supposed to do.
Average temperature means little and the mean even less. If applied at the extremes – the point at which we care – it is almost guaranteed to be wrong. You are not on a simple one-to-one slope. Averaging masks or amplifies the real effect you are concerned with.
Evaporative/vapor cycles outdo any effects of greenhouse gases at the high extremes.
I can go on. Are we doing the analog math at all in order to understand it all? Secret algorithms?
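A minimal numerical sketch of that point about averages and extremes, assuming a simple Gaussian temperature distribution and made-up numbers (not actual climate data): a modest shift in the mean looks unremarkable on average, but it can multiply the frequency of exceeding a fixed extreme threshold.

# Toy sketch with made-up numbers: how a small shift in the mean of a
# temperature distribution changes the frequency of extremes.
from scipy.stats import norm

mu, sigma = 25.0, 5.0   # hypothetical summer daily-max mean and spread (deg C)
threshold = 40.0        # a fixed "extreme heat" threshold (deg C)
shift = 1.5             # assumed warming of the mean (deg C)

p_before = norm.sf(threshold, loc=mu, scale=sigma)
p_after = norm.sf(threshold, loc=mu + shift, scale=sigma)

print(f"P(exceed {threshold} C) before: {p_before:.5f}")
print(f"P(exceed {threshold} C) after:  {p_after:.5f}")
print(f"Extremes become ~{p_after / p_before:.1f}x more frequent")

With these made-up numbers, a 1.5 °C shift in the mean makes exceedances of the 40 °C threshold roughly 2–3 times more frequent – one way an average can mask or amplify the effect you actually care about.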
Paul Pukite (@whut) says
The lack of comments on such an interesting topic indicates that perhaps ending comments is not too far off. I counted 10 questions posed in the post but few responses after nearly 2 weeks.
My subjective take:
How should we prioritize new activities (at the community level) given limited resources?
Higher resolution? The energy cascade favors the larger scales, so perhaps not required.
More complexity? No. Parsimony is important.
More initial condition ensembles? No, if initial conditions do not rule.
More forcing ensembles? The right kind of forcing.
More perturbed parameter ensembles? Yes. Structural uncertainty is important in nonlinear models.
More machine learning? Yes. The best way to explore non-linear space.
Better post-processing? Yes. More state-of-the-art signal processing – how else to find underlying patterns? (A toy sketch is below.)
The Future of Climate Modeling? The sky is still wide open.
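To make the post-processing point concrete (a toy sketch only – the synthetic index, the planted 5-year period, and the NumPy/SciPy choices are all assumptions for the example, not anything from the workshop or the post): a basic periodogram can pull an underlying periodic pattern out of a noisy series.

# Toy sketch: spectral analysis of a synthetic monthly climate "index".
# Everything here is made up for illustration; real diagnostics would need
# windowing, significance testing, etc.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
n_months = 12 * 100                  # 100 years of monthly data
t = np.arange(n_months)

# A slow oscillation (~5-year period) buried in noise.
index = 0.5 * np.sin(2 * np.pi * t / 60) + rng.normal(0.0, 1.0, n_months)

# Power as a function of frequency (cycles per month), linearly detrended.
freqs, power = periodogram(index, fs=1.0, detrend="linear")

# Strongest non-zero frequency, reported as a period in years.
peak = np.argmax(power[1:]) + 1
print(f"Dominant period: {1.0 / freqs[peak] / 12.0:.1f} years")

Run as-is, this recovers the roughly 5-year cycle planted in the synthetic series; the same kind of analysis applied to actual model output is one simple example of what signal processing for “underlying patterns” could look like.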
Can also take it to Twitter: https://twitter.com/WHUT/status/1512391196668223493
Richard the Weaver says
I think a short set of pertinent and informative comments is ideal. I enjoyed reading this thread. Sure beats slogging through hundreds of lesser value.
Different models are better at different things. Are they modular enough in similar enough ways to fashion a Frankenstein Model?
Paul Pukite (@whut) says
“fashion a Frankenstein Model?”
In other disciplines, a “Frankenstein model” doesn’t always carry good connotations, since it may cause problems, just as the fictional monster wreaked havoc. An example of a Frankenstein model is a GCM that tries to perform both mean-value estimates (such as climate sensitivity) and detailed behavioral simulations (such as El Niño). Isaac Held was interviewed recently and hinted at this when he described the difficulties of working in a team environment:
https://deep-convection.org/2022/04/12/episode-1-isaac-held/
Another bad connotation is the “model designed by committee”.
Killian says
We need to get the effects of restoration of ecosystems into the models, too.
https://www.outsideonline.com/video/letter-from-mother-earth-to-humanity/?auto-play=true
Carbomontanus says
Maybe the very kindergarten education in models and modeling was inferior and perverse. It was the LEGO, Plastilina, and virtual-reality public basic school and inauguration. They learnt less about dirt and twigs and rubbish, blood, sweat, and tears in the climate and elsewhere in reality.
Phil Carver says
One area that could be better explored is getting models to replicate recent interglacial periods and maybe even the PETM excursion. We seem to be headed into uncharted GHG levels.
John A Boothroyd says
Based on the discussions above, the actual modeling – which doesn’t work – is of little concern. I find a lot that needs to be fixed; however, many take that as climate change denial rather than a correction of their logic and assumptions.
Short story that makes the point: I asked my Dad, 80-something at the time, why he still had a software job at a time when most were out of work in California. I’m not very good at the programming end; however, I know what it’s supposed to do.
Average temperature means little and the mean even less. If applied at the extremes – the point at which we care – it is almost guaranteed to be wrong. You are not on a simple one-to-one slope. Averaging masks or amplifies the real effect you are concerned with.
Evaporative/vapor cycles outdo any effects of greenhouse gases at the high extremes.
I can go on. Are we doing the analog math at all in order to understand it all?
jgnfld says
I’m curious: How are you differentiating the average of a series of values from the mean of the same series, here?