There is a growing need for local climate information in order to update our understanding of risks connected to the changing weather and prepare for new challenges. This need has been an important motivation behind the World Meteorological Organisation’s (WMO) Global Framework for Climate Services (GFCS).
There has also been a lot of work carried out to meet these needs over time, but I’m not convinced that people always get the whole story.
A background on downscaling
The starting point is that global climate models (GCMs) are not designed to provide detailed information on local climate characteristics, but are nevertheless able to reproduce the large-scale phenomena reasonably well. The models tend to be associated with a minimum skillful scale.
The local climate is also connected with the large-scale conditions in the region that the models are able to reproduce well, as well as being influenced by local geographical factors.
Downscaling
The dependency of the local climate on the large-scale situation implies that it’s possible to downscale information about temperature and precipitation at a local scale, based on a description of the large-scale conditions and of geographical effects.
There has also been a fair amount of activity connected to downscaling climate information, notably through the international COordinated Downscaling EXperiment (CORDEX) under the World Climate Research Programme (WCRP).
The main emphasis in CORDEX has been on running regional climate models (RCMs) over a limited region, using results from GCMs at their boundaries, but with a finer grid mesh than that of the GCMs. The grid size of the GCMs is typically of the order of 100 km, whereas for the RCMs it tends to be 10-50 km (some go down to a few km).
Some activities follow a different approach from running RCMs, namely empirical-statistical downscaling (ESD), where statistical downscaling models are calibrated on observational data. This approach has much in common with Artificial Intelligence (AI).
It is important to use both RCMs and ESD in downscaling since a combination of the two can say something about the confidence we should expect in the results. The reason is that the two types of downscaling make use of independent information sources, where the former derives an answer based on coded equations representing dynamics and thermodynamics, whereas the latter utilises information hidden in empirical data.
ESD is also important because it offers a computationally cheap tool for downscaling, which makes it suitable to downscale large multi-model ensembles such as the Coupled Model Intercomparison Project (CMIP) experiments presented in the IPCC reports.
Three different ESD approaches
There are three approaches in ESD. One is known as ‘Perfect Prognosis’ (PP), which uses observations alone for calibrating the models; a second is referred to as ‘Model Output Statistics’ (MOS), which uses model output to represent the large-scale predictors and observations to represent the local conditions during the calibration stage. I’ll return to the third approach later on.
Most of the work on ESD so far has tried to replicate results on the same basis as the RCMs, downscaling the local temperature or precipitation on a day-by-day basis, similar to the output provided by the RCMs. I refer to this approach as ‘downscaling weather’, as in the Oxford Research Encyclopedia. This practice has also framed networks and projects such as the European COST-VALUE project and the experiment protocol of CORDEX-ESD.
Differences about the best downscaling approach
But, is downscaling weather the optimal way?
I am not convinced.
For starters, this approach often requires that the predictors comprise a set of different variables describing the large-scale conditions, such as a mix of mean sea-level pressure, the temperature near the surface and at various levels in the atmosphere (e.g. 500 hPa, 700 hPa and 850 hPa), the specific humidity at various heights, and the geopotential height at some levels.
It is important to keep in mind that once the statistical models have been calibrated with reanalyses as predictors, the reanalysis predictors are subsequently replaced with corresponding data simulated by the GCMs to make projections.
I doubt that the GCMs are able to reproduce the covariance structure between all the variables in a typical mix of predictors with sufficient accuracy to give reliable results.
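To make the contrast concrete, here is a minimal sketch of the ‘downscaling weather’ (PP-style) workflow described above. The arrays are synthetic placeholders for reanalysis, GCM and station data, and for simplicity only a single predictor field is used:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: daily large-scale predictor fields (time x gridpoints)
# from a reanalysis and from a GCM, plus a co-located daily station series.
n_days, n_grid = 3650, 500
reanalysis = rng.normal(size=(n_days, n_grid))
station_temp = reanalysis[:, :5].mean(axis=1) + rng.normal(scale=0.5, size=n_days)
gcm = rng.normal(size=(n_days, n_grid))          # GCM-simulated predictors

# Step 1: compress the large-scale field into a few leading modes (EOFs/PCs).
pca = PCA(n_components=4).fit(reanalysis)
pcs_reanalysis = pca.transform(reanalysis)

# Step 2: calibrate the statistical model on observations (perfect prognosis).
model = LinearRegression().fit(pcs_reanalysis, station_temp)

# Step 3: replace the reanalysis predictors with GCM output to make projections.
# In practice the predictors are a mix of several variables, and mismatches in
# the GCM's covariance structure between them can undermine the result.
pcs_gcm = pca.transform(gcm)
downscaled_daily_temp = model.predict(pcs_gcm)
```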
Information useful for climate adaptation
On the other hand, the question is what type of information do people really need?
Very few decision-makers I have met need a time series, and those who ask for time series tend to be impact researchers who use them as input to an impact model. Daily time series are, in other words, intermediate results.
In the end, the impact researchers too usually produce some information about the risk or probability of certain events taking place.
In many cases, the probability density functions (pdfs) will do, especially for mapping risks, and what we really need is to predict how the pdfs will change in the future as shown in Figure 1.
I have spent enough time with statisticians and mathematicians to realise that it may be possible to predict the pdfs directly, either for temperature or precipitation (e.g. Figure 1), or for the output of the impact models.
Figure 1. The most common variables for the local climate are daily temperature and 24-hour rainfall. The right panel shows a normal distribution that can represent temperature anomalies, and the left panel is an exponential distribution that can represent wet-day 24-hr rainfall (e.g. on days with more than 1 mm). If the objective is to predict the change in their pdfs (or cumulative probability functions), then it does not have to involve a long chain of calculations. Here μ is the mean and σ is the standard deviation.
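As a small illustration of why the pdfs can be sufficient for risk mapping, the sketch below uses made-up parameter values for the two distributions in Figure 1 and shows how the probability of exceeding a threshold changes when the pdf parameters shift:

```python
from scipy.stats import norm, expon

# Made-up parameters for illustration: temperature anomalies ~ N(mu, sigma),
# wet-day rainfall ~ exponential with mean mu_r (as in Figure 1).
mu, sigma = 0.0, 3.0          # deg C
mu_r = 6.0                    # mm/day

# Risk of a temperature anomaly above 5 C and of a wet day above 30 mm now...
p_hot = norm.sf(5.0, loc=mu, scale=sigma)
p_wet = expon.sf(30.0, scale=mu_r)

# ...and under a hypothetical future shift in the pdf parameters.
p_hot_future = norm.sf(5.0, loc=mu + 1.5, scale=sigma)
p_wet_future = expon.sf(30.0, scale=mu_r * 1.1)

print(f"P(T' > 5C):  now {p_hot:.3f}, future {p_hot_future:.3f}")
print(f"P(R > 30mm): now {p_wet:.4f}, future {p_wet_future:.4f}")
```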
So rather than downscaling weather, like the majority of scholars engaged in ESD, it makes sense to downscale pdfs, an approach I refer to as ‘downscaling climate’.
On local scales, climate can for all intents and purposes be defined as the pdfs describing variables such as daily temperature and precipitation as shown in Figure 1.
The downscaling climate approach has many advantages (a code sketch of the workflow follows the list):
- It allows using mean seasonal values, which are more readily available from GCM simulations, as predictors representing the large-scale conditions.
- It requires fewer computational resources and is faster.
- Statistical properties are often more predictable than single outcomes.
- The seasonal mean values are closer to being normally distributed, in line with the central limit theorem.
- Experience indicates that only one variable is typically needed as a predictor, as opposed to a set of many.
- When the local predictands describe the parameters of the seasonal pdfs, they also tend to be approximately normally distributed because they are aggregated over samples, which makes principal component analysis (PCA) an efficient and suitable representation.
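The sketch below outlines the contrasting ‘downscaling climate’ set-up, again with synthetic placeholders for the data: the predictands are pdf parameters aggregated over each season (here simply the seasonal means at a group of stations), and PCA is used both for the large-scale predictor field and for the predictands:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic seasonal means: 60 seasons of a large-scale field (time x gridpoints)
# and a pdf parameter (e.g. the seasonal mean temperature) at 20 stations.
n_seasons, n_grid, n_stations = 60, 500, 20
predictor_field = rng.normal(size=(n_seasons, n_grid))
station_params = predictor_field[:, :n_stations] + 0.3 * rng.normal(size=(n_seasons, n_stations))

# A single aggregated predictor field, compressed to a few leading PCs.
pca_x = PCA(n_components=4).fit(predictor_field)
X = pca_x.transform(predictor_field)

# PCA on the predictands keeps the stations spatially consistent and reduces noise.
pca_y = PCA(n_components=3).fit(station_params)
Y = pca_y.transform(station_params)

# Calibrate a linear model for the predictand PCs, then map back to the stations.
model = LinearRegression().fit(X, Y)
predicted_params = pca_y.inverse_transform(model.predict(X))
```

The predicted parameters can then be plugged back into the assumed distributions (as in Figure 1) to describe local risks, rather than being carried forward as daily time series.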
The third approach
Also, the downscaling climate approach is ideal for using common EOFs (see previous post) to represent the large-scale predictors. Of course, using common EOFs means that you no longer use the PP approach, but a PP-MOS hybrid approach. This is the third approach in addition to PP and MOS described above.
There is some contention in the ESD community about how to classify these approaches, but the calibration of statistical models using common EOFs as a framework involves a mix of observations (reanalysis) and GCM results to represent the large-scale conditions.
Hence it is consistent with neither the definition of PP nor that of MOS.
To me, it seems to be a ‘no brainer’ to downscale parameters for the pdfs and use common EOFs. It is a bit curious that so few others in the ESD community use these methods.
The use of common EOFs implies that the problem of matching predictors from reanalyses and GCMs is greatly reduced, and they enable an evaluation of the GCM results, which is often lacking.
Furthermore, using PCA to represent a set of predictands within a given region also appears to be superior to downscaling the sites one by one, and it ensures spatial consistency, which is often a problem.
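The idea behind common EOFs can be sketched in a few lines: the reanalysis and GCM anomaly fields are stacked along the time axis before the EOFs are computed, so both data sets are projected onto exactly the same spatial patterns, and comparing the corresponding principal components gives a simple evaluation of the GCM. The arrays below are synthetic placeholders on a shared grid:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Synthetic seasonal-mean anomaly fields on a shared grid (time x gridpoints).
n_grid = 500
reanalysis = rng.normal(size=(60, n_grid))
gcm = rng.normal(size=(120, n_grid))

# Common EOFs: stack the two data sets in time and compute the EOFs jointly,
# so reanalysis and GCM share the same spatial patterns by construction.
combined = np.concatenate([reanalysis, gcm], axis=0)
pca = PCA(n_components=4).fit(combined)
pcs_reanalysis = pca.transform(reanalysis)
pcs_gcm = pca.transform(gcm)

# Comparing the PC statistics offers a simple evaluation of the GCM:
# similar variances suggest the GCM reproduces the corresponding mode.
print("PC variance ratio (GCM / reanalysis):",
      pcs_gcm.var(axis=0) / pcs_reanalysis.var(axis=0))
```

In practice the fields would be area-weighted and the evaluation would also look at autocorrelation and trends, but the stacking step is the essential difference from computing EOFs for each data set separately.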
A complete picture is important for climate services
It seems that the ESD community is split, and the strategy for downscaling climate has been ignored or neglected. For instance, it was not appreciated in the European COST-VALUE project. The COST-VALUE project is sometimes presented as an all-encompassing project for ESD, but I don’t agree with that view.
I participated in COST-VALUE, but felt that many decisions were enforced by the leaders with strong opinions and an uncompromising attitude. A number of suggestions were brushed aside, and the project never accommodated the downscaling climate strategy or included evaluation aspects based on common EOFs.
Despite this limitation, the COST-VALUE project can in many ways be regarded as a successful effort that produced a great deal of good results. However, it doesn’t provide the whole story when it comes to ESD.
I have repeatedly come across incomplete accounts of the ESD development, even in recent papers where the strategy of downscaling climate has been ignored. This common omission may lead to new generations of scholars in the downscaling community missing a part of the story.
If the work on downscaling is to carry on, then it’s also important to account for and acknowledge all related work done to make the best out of our knowledge for climate services and climate change adaptation. This is particularly relevant these days, as a new IPCC report is being drafted on climate change on global and regional scales.
B Eggen says
I think in Fig. 1 the descriptions of left and right panel have been mixed up – left one is T and right one precip.
Russell says
Rasmus, what are the prospects for improving regional parametric realism as models are downscaled?
Some variables of interest are both regional and seasonal – beyond the obvious variations in snow and vegetation cover, evaporation and evapotranspiration react on both long and short time scales to meteorologically and biologically driven changes in ground, water, and canopy albedo.
How will advances in the resolution, spectral bandwidth, and data density from the growing constellation of environmental satellites and platforms be integrated into evolving regional models?
And how will global models reflect new feedbacks that may appear within regional models, as resolution and spectral discrimination improve?
rasmus says
Good questions, Russell. My take would be to design a set of experiments to explore such questions. There is also an issue with what the available observations really represent. It may be a good idea to combine different and relevant data sets, models and methods. Then I think we have to judge from case to case if we have sufficient information/data and skilful models.
John Lanzante says
Thanks Rasmus for elaborating on the need for downscaling, which is often neglected with regard to assessment of climate change at the local level. I’d like to add to or expand upon a few topics that you raised.
ESD methods are often categorized as either PP or MOS. While this is an important distinction, it doesn’t cover the very wide range of approaches that are employed. An alternate categorization might include the following classes of techniques: (1) Distributional, (2) Transfer function, (3) Spatial, and (4) Stochastic weather generators.
(1) are methods that employ probability distributions of observed and model data from the past, and model projected for the future, to derive pseudo observations for the future. Loosely speaking, the goal is to “correct” biases in model output for the future in such a way that the “corrections” vary by position within the statistical distribution. Sometimes these are referred to as quantile mapping techniques.
(2) are methods that use transfer relationships to map between observations and model data. They might range from the relatively simple approach of linear regression to much more complex AI techniques such as neural nets.
(3) are methods that attempt to use spatial information. These often involve multivariate techniques such as EOFs, SVD, etc. They also include analog approaches in which one seeks “weather maps” from the historical record of observations that resemble those projected in the future by climate models.
(4) are methods that statistically model the day-to-day weather variability. One salient feature is that they are used to generate ensembles of realizations of weather sequences.
It’s important to note that there are no rigid boundaries between these classes — some techniques may fall into more than one category, and an ESD technique may consist of more than one method. I would refrain from trying to categorize any particular approach as the “best” — methods that perform well with regard to some metrics may do poorly with regard to others. For example, methods from (1) might be better suited towards assessing the range of possible future values at individual locations, for example occurrences above some threshold. Those from (3) should yield a better representation of multivariate fields — so they would be better suited towards assessing risk of wildfire — which involves variables such as temperature, humidity, wind, solar radiation, etc., and their inter-relationships on a particular day. If you are interested in spells of weather then (4) might be the preferred choice.
Regarding your point about downscaling weather vs. downscaling climate — I don’t think these have to be mutually exclusive. One can apply techniques that downscale weather — and aggregate the results to yield a PDF. However, I am not enthusiastic about your approach of using seasonal mean values as predictors and common EOFs for several reasons.
Most fundamentally, when you use seasonal or other long time-mean predictors you are likely losing the ability to capture the range of short-term variability and in particular the extremes. Suppose you had a four-day heat wave with temperature exceeding 100F each day followed by cool weather the rest of the month. In such a case your predictors would be near-normal. From an impacts standpoint it is the heat-wave that matters most.
Secondly, there is a danger in using common EOFs in that implicitly you are assuming that the covariance fields of the things you are combining (observations and GCMs) come from the same population. I would argue that this is not true in general. Otherwise you end up mixing “apples and oranges”.
You also mention other advantages to “downscaling climate” in terms of the availability of daily data from GCMs, using less computing resources, and having variables that are Gaussian distributed. While in the past the first two may have been issues, given current technology they are not a big concern. And given the complexity of the climate system I’d be hesitant to try to shoe-horn it into a Gaussian model.
Finally, I’d like to give praise to the COST-VALUE project with a bit of envy in that the Europeans are ahead of us on the North American side of the pond! I am a little disappointed that it was so heavily geared towards “show and tell”, but that may have been inevitable in a first stab at such a monumental undertaking. I am hopeful that it will spur more work on the diagnostics of ESD techniques, which I think is a neglected area.
rasmus says
Thanks for your comments. One idea about using seasonally aggregated statistics as the predictands is that they may include the standard deviation of daily data, the number of hot days, or the mean duration of hot spells. Many of these involve other types of distributions, such as Poisson or geometric distributions, but using GLMs may alleviate that. Another thing is that common EOFs work best with a combination of reanalysis and GCM results, partly because the reanalysis involves the use of an atmospheric model. And of course I appreciate your point about the “best” method, as it depends on the case at hand.
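For instance, a Poisson regression (a GLM with a log link) can link the seasonal count of hot days to a seasonal-mean predictor; the sketch below uses made-up numbers purely for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Made-up seasonal data: a large-scale seasonal mean temperature anomaly
# and the number of hot days observed at a station in each season.
seasonal_anomaly = rng.normal(size=40)
hot_days = rng.poisson(lam=np.exp(1.0 + 0.5 * seasonal_anomaly))

# Poisson GLM with a log link: suitable for count predictands such as
# the number of hot days or wet days per season.
X = sm.add_constant(seasonal_anomaly)
glm = sm.GLM(hot_days, X, family=sm.families.Poisson()).fit()
print(glm.summary())
```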
Nick O. says
Apologies for being a bit off topic, but Sir John Houghton died a few days ago. There is a short review of his life in the link below, including some comments on his work in meteorology and climate physics, and latterly his role as founding editor of the IPCC:
https://www.wunderground.com/cat6/climate-scientist-and-founding-ipcc-editor-sir-john-houghton-dies-at-88
Jim Harrison says
The Tennessee Valley Authority (TVA) commissioned a regional climate study some years ago. At that time, downscaling was pretty crude and the projections had huge uncertainties. What could be done, however, was to investigate the particular vulnerabilities of a given area granted a range of plausible outcomes. For the TVA region, for example, the study suggested that plausible changes in average temperatures would tend to favor more southern species of trees and changes in precipitation would have a marginal effect on what would remain a well-watered zone.
Barton Paul Levenson says
I’m sorry about Sir John. I have one of his books, The Physics of Atmospheres, and it was a big influence on me. When I wrote him to ask for electronic copies of his tables, he didn’t hesitate to send them, either. A good scientist and a good man.
Paul Pukite (@whut) says
Why shouldn’t EOFs just be considered as standing wave modes of the climate within a region?
barry says
Vale, John Houghton. A great teacher, communicator, and steward of climate science and the IPCC.