In recent years, the idea of climate change adaptation has received more and more attention and has become even more urgent with the unfolding of a number of extreme weather-related calamities. I wrote a piece on climate change adaptation last year here on RealClimate, and many of the issues that I pointed to then are still relevant.
The dire consequences of flooding, droughts and heatwaves that we have witnessed over the last couple of years suggest that our society is not yet adapted even to the current climate. One interesting question is whether the climate science community is ready to provide robust and reliable information to support climate change adaptation when the world finally realises the urgency of doing so. In other words, we need to know how to use the best available information in the right way.
Much of the know-how concerning the consequences of global warming and climate science has evolved within the World Climate Research Programme (WCRP), which is undergoing a transition and will put greater emphasis on regional climate information for society (so-called “RiFS”, one of many unnecessary acronyms). However, I feel the pace of this transition is slow compared to the changes we have seen in weather patterns and climate recently.
WCRP oversees the COordinated Regional climate Downscaling EXperiment (CORDEX), which has recently published a whitepaper on dynamical downscaling and one on empirical-statistical downscaling. Those whitepapers are useful, but since they are two separate papers focussing on more academic aspects, I don’t think they consolidate the downscaling approaches into something that can become more robust and reliable for climate change adaptation. There is also a whitepaper from the European Climate Research Alliance (ECRA), which is more geared towards consolidation.
I’m impatient when it comes to seeing progress on both climate change adaptation and mitigation, and I therefore decided to organise a hybrid workshop in Oslo to accelerate the pace of progress on climate change adaptation. The workshop was hosted by the Norwegian Meteorological Institute and involved a collaboration with CORDEX, the Norwegian Public Health Institute and ESA. Its web page provides more information: https://sites.google.com/met.no/concord-oslo-workshop.
One objective of the workshop was to facilitate discussions within the regional climate modelling community on how to proceed together with impact researchers. Since it also included impact researchers, the workshop itself involved a degree of coproduction of knowledge. We wanted to take stock of the present situation regarding our experience with, and preparedness for, providing guidance for climate change adaptation. It’s sometimes useful to take a step back and ask whether we are heading in the right direction.
Some of the issues discussed during the workshop
There was an expressed concern that climate change adaptation is hampered by the time aspect (“something that will happen in the far future”) and that it’s not regarded as an urgent need. Furthermore, long-term planning is required, but more immediate concerns tend to be given higher priority. On top of this, there also seems to be a lack of useful information, according to some participants.
Sometimes there is no data or information, and sometimes there is too much. Users don’t know whether to work with the full set of available climate model results or with a selection of a small number of them. Climate scientists are still thinking about how to extract useful information and turn it into messages that decision-makers can act upon.
Ensuring the transfer of reliable knowledge and actionable information requires effort, time and money. Merely providing climate simulations is not enough for impact assessments, and we need a framework for this. One point was made that the climate indices/services provided are often too generic and do not fit the users’ needs.
There is sometimes a misunderstanding about why climate models don’t necessarily align with historical observations. Further cross-sectoral cooperation is essential, as there was an impression that many impact modellers don’t know how to use climate simulations as input for their models. Hence there is a large variety in how available climate information and knowledge are used.
Although there are some guidelines provided by Euro-CORDEX, they may still be difficult to use for people with limited climate literacy. We need to bring together information from climate models, observations, process understanding, meteorological understanding and expert judgement to come up with storylines of plausible worst cases for planning purposes, allowing for stress-testing of scenarios etc.
The workshop participants came from countries all over the world, and their concerns spanned a wide range of examples of climate impacts, such as dust storms in the Middle East, typhoons off Japan, projections of flooding in the Andes, icing on power lines in Norway, impacts on groundwater resources in the UK (FUTURE-DRAINAGE), and links between admission data for cardiovascular disorders and temperature in Europe (EXHAUSTION).
In South Asia, there are plans for robust projections of climate extremes and adaptation (APN). Regional climate information is also relevant for renewable energy potential, and extreme precipitation is of course always a concern.
There is an impression that more work is dedicated to dynamical downscaling (with regional climate models, RCMs) than to empirical-statistical downscaling (ESD). Perhaps less trust is placed in the latter because of a lack of knowledge about how physically consistent it is.
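To make it clearer what ESD involves in its simplest form, here is a minimal sketch in Python with entirely synthetic data: a regression model is calibrated on the historical co-variation between a large-scale predictor (e.g. an area-mean temperature anomaly from a reanalysis or GCM) and a local station series, and is then applied to a scenario-period predictor. The variable names and numbers here are hypothetical placeholders, and a real ESD application would involve careful predictor selection and thorough evaluation.

```python
import numpy as np

# --- Synthetic stand-ins (hypothetical data for illustration only) ---
rng = np.random.default_rng(0)
years_hist = np.arange(1961, 2021)
# Large-scale predictor: e.g. an area-mean seasonal temperature anomaly from a GCM/reanalysis
large_scale_hist = 0.02 * (years_hist - 1961) + rng.normal(0, 0.3, years_hist.size)
# Local predictand: station temperature anomaly, tied to the large-scale state plus local noise
local_obs = 1.2 * large_scale_hist + rng.normal(0, 0.2, years_hist.size)

# --- Calibration: fit a simple linear downscaling model over the historical period ---
slope, intercept = np.polyfit(large_scale_hist, local_obs, deg=1)

# --- Application: downscale a (hypothetical) scenario-period predictor ---
years_scen = np.arange(2021, 2101)
large_scale_scen = 0.04 * (years_scen - 1961) + rng.normal(0, 0.3, years_scen.size)
local_downscaled = intercept + slope * large_scale_scen

print(f"calibrated slope: {slope:.2f}, projected local anomaly in 2100: {local_downscaled[-1]:.2f} K")
```

The key design choice is that the statistical model is calibrated on how the local variable co-varies with the large-scale state, rather than on reproducing the detailed local climatology.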
High spatial resolution is desired, but large ensembles are also needed (“the law of small numbers”, see [1]). Machine learning was also discussed, and nonlinearity is a challenge for such methods. A concern about inconsistencies among nested simulations was acknowledged, and there is a need to measure skill and quantify uncertainty.
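The “law of small numbers” point from [1] can be illustrated with a toy ensemble: every member shares the same forced trend, but each has its own realisation of internal variability, so a trend estimated from any single member can differ substantially from the forced signal, whereas the ensemble mean converges towards it. The sketch below is purely schematic (synthetic AR(1) noise, made-up numbers), not output from a real model ensemble.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2021, 2071)                 # a 50-year planning horizon
forced_trend = 0.03                           # K/yr: the assumed "true" forced signal
n_members = 100

# Each toy member = same forced trend + its own realisation of internal variability (AR(1) noise)
def toy_member():
    noise = np.zeros(years.size)
    for t in range(1, years.size):
        noise[t] = 0.6 * noise[t - 1] + rng.normal(0, 0.4)
    return forced_trend * (years - years[0]) + noise

members = np.array([toy_member() for _ in range(n_members)])

# Trend estimated from each individual member (what a single simulation would tell you)
single_run_trends = np.array([np.polyfit(years, m, 1)[0] for m in members])
# Trend of the ensemble mean (internal variability largely averaged out)
ensemble_mean_trend = np.polyfit(years, members.mean(axis=0), 1)[0]

print(f"true forced trend:    {forced_trend:.3f} K/yr")
print(f"single-member trends: {single_run_trends.min():.3f} to {single_run_trends.max():.3f} K/yr")
print(f"ensemble-mean trend:  {ensemble_mean_trend:.3f} K/yr")
```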
One challenge, of course, is having reliable observations with good coverage. Nevertheless, I think we can say that we face both traditional challenges connected to the modelling of regional climate and cross-sectoral challenges concerning the coproduction of knowledge and how best to achieve a multi-directional flow of knowledge.
References
- C. Deser, R. Knutti, S. Solomon, and A.S. Phillips, "Communication of the role of natural variability in future North American climate", Nature Climate Change, vol. 2, pp. 775-779, 2012. http://dx.doi.org/10.1038/nclimate1562
John Bolduc says
Interesting. Some communities are using statistical downscaling. Regional models are still coarse and don’t account for variability within a small area. I’m in Massachusetts, USA, and there are differences in projections for the various parts of our small state. For practitioners who aren’t modelers, could you explain why using statistical downscaling calibrated to recent climate is not sufficient? Or maybe you think that’s fine. It would be useful to have a central source develop projections for communities. This would facilitate more planning with future projections. This first step probably is a barrier. But the city I worked for found it wasn’t unreasonably expensive relative to other modeling, e.g. urban flood modeling.
John Pollack says
I can give you one perspective, from a forecast meteorologist.
Statistical downscaling might well be an inexpensive approach that can give you some good detail. But just how reliable are those statistics for your situation?
Anything “calibrated to recent climate” will not necessarily reflect likely climate change, nor relatively rare extreme events that may not even have occurred recently.
For example, some of the most extreme events affecting New England are landfalling hurricanes. These would of course produce high winds, coastal storm surge, and inland flooding. Furthermore, there is a distinct difference between fast-moving storms, such as the 1938 hurricane, and slow ones that are absorbed into a large-scale upper low system (Hurricane Bob). No location will have accumulated enough statistics to give meaningful probabilities for hurricanes, let alone distinguish between the types. Now, throw in climate change. The Gulf Stream has been moving closer to the New England coast for decades. Clearly, this affects the probability and intensity of possible hurricanes, but it would take a long time for statistics to reflect this.
Dynamical downscaling can at least partially address the lag between climate change and statistics by looking at how hurricane probabilities and patterns shift with anticipated sea surface changes, although it would still miss local details.
A hybrid approach is probably best.
rasmus says
[Thanks John Pollack! The good thing about empirical-statistical downscaling is that it can be evaluated quite comprehensively, so that you can get some idea for the case that you are interested in (e.g. DOI:10.5194/gmd-2021-176). When we calibrate the statistical models, it is in terms of capturing change and variability rather than detailed climatology. Statistical modelling of events such as hurricanes may be best done by attempting to model how the number of hurricanes depends on e.g. the area of sea surface temperature above 26.5°C, assuming a Poisson distribution of the number of events. This is fundamentally different to a dynamical approach where we try to simulate the 3D picture of winds, humidity and temperature. Both are expected to reproduce real dependencies, albeit relying on completely different assumptions – and if they give similar results, then we have higher confidence that the results are realistic. Yes, the hybrid approach is very useful but probably under-appreciated. -rasmus]
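For readers curious about what the Poisson approach mentioned in this reply could look like in practice, here is a minimal, purely illustrative sketch in Python: annual storm counts are modelled as Poisson-distributed with a log-rate that depends linearly on the area of sea surface temperature above 26.5°C. The data are made up, and the coefficients and warm-pool areas are hypothetical placeholders rather than results from any actual hurricane analysis.

```python
import numpy as np
import statsmodels.api as sm

# Made-up illustrative data: annual area of SST > 26.5 C (million km^2) and annual storm counts
rng = np.random.default_rng(2)
sst_area = rng.uniform(8.0, 14.0, 40)                 # hypothetical predictor
true_rate = np.exp(-1.0 + 0.25 * sst_area)            # assumed log-linear dependence
storm_counts = rng.poisson(true_rate)                 # hypothetical event counts

# Poisson regression: counts ~ Poisson(lambda), log(lambda) = a + b * sst_area
X = sm.add_constant(sst_area)
fit = sm.GLM(storm_counts, X, family=sm.families.Poisson()).fit()
print(fit.params)                                     # estimated a and b

# Expected annual count if the warm-pool area expands to, say, 15 million km^2
expected = fit.predict(np.array([[1.0, 15.0]]))
print(f"expected annual count at 15 million km^2: {expected[0]:.1f}")
```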
Kevin McKinney says
Thanks. It sounds like the beginning of a long road–and one regarding which I think many share your impatience, Rasmus!
macias shurly says
@Rasmus says: –
” …consequences of flooding, droughts and heatwaves that we have witnessed the last couple of years…”
” …to accelerate the pace of progress when it comes to climate change adaptation…”
ms: — If we add the problem of rising sea levels to droughts, floods and heat waves, it becomes even clearer that humanity’s problems with climate change are mainly associated with water. An element that not only cools your car, your body – but also the surface of our planet. Your skin & body is also cooled by water, air and radiation in the IR range to keep your core temperature in balance. You are part of the earth’s surface.
In my home region (latitude 50 degrees NH) there are forests that consist of up to 95% dead, withered trees. The cause is often excessive groundwater extraction by larger cities in the vicinity. They have served their purpose as carbon and water reservoirs and are reversing their function.
They are no longer suitable for evaporation and cooling of the earth’s surface and the neighboring cities. The binding of CO2 and also the formation of clouds decrease towards zero.
As long as climate scientists conflate these causes of the loss of evaporative landscapes with increasing concentrations of greenhouse gases, the effort put into hybrid climate workshops can be spared.
Everything else has already been commented on here:
https://www.realclimate.org/index.php/archives/2022/10/new-misguided-interpretations-of-the-greenhouse-effect-from-william-kininmonth/#:~:text=5%20OCT%202022%20AT%206%3A11%20AM
Barry E Finch says
At https://www.youtube.com/watch?v=sqREsiR8H5c is a talk from 7 years ago by Dr. Michael Pritchard, who discusses his work on “super parameterization” for clouds. This involves selecting presumably key areas and making the atmosphere grid boxes there only about 1/32nd of the regular model size, such as 4×4 km instead of 128×128 km, and then somehow using the superior cloud representation to adjust what the model finds over a much larger area-volume around it (a sampling with greatly enhanced accuracy), to get greatly improved cloud accuracy for only a small computing cost premium. Does anybody know how that’s progressing?