This month’s open thread…
An exercise about meaningful numbers: examples from celestial “attribution studies”
Is the number 2.14159 (here rounded to five decimal places) a fundamentally meaningful one? Add one, and you get
π = 3.14159 = 2.14159 + 1.
Of course, π is a fundamentally meaningful number, but you can split it up in infinitely many ways, as in the example above, and most of the resulting terms have no fundamental meaning. They are just numbers.
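To make this concrete, here is a toy Python snippet (purely illustrative) that generates a few of those infinitely many decompositions; each run produces different, equally meaningless terms:

```python
import math
import random

# Toy illustration: pi can be written as the sum of two numbers in
# infinitely many ways, and the individual terms carry no meaning.
for _ in range(3):
    a = random.uniform(0, 3)   # an arbitrary, meaningless term
    b = math.pi - a            # its complementary term
    print(f"pi = {a:.5f} + {b:.5f} = {a + b:.5f}")
```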
But what does this have to do with climate? My interpretation of Daniel Bedford’s paper in the Journal of Geography is that such demonstrations may provide a useful teaching tool for climate science. He uses the phrase ‘agnotology’, which is “the study of how and why we do not know things”.
The CERN/CLOUD results are surprisingly interesting…
The long-awaited first paper from the CERN/CLOUD project has just been published in Nature. The paper, by Kirkby et al, describes changes in aerosol nucleation as a function of increasing sulphates, ammonia and ionisation in the CERN-based ‘CLOUD’ chamber. Perhaps surprisingly, the key innovation in this experimental set-up is not the presence of the controllable ionisation source (from the Proton Synchrotron accelerator), but rather the state-of-the-art instrumentation of the chamber, which has allowed them to see in unprecedented detail what is going on in the aerosol nucleation process (this according to a couple of aerosol scientists I’ve spoken with about it).
This paper is actually remarkably free of the over-the-top spin that has accompanied previous papers, and that bodes very well for making actual scientific progress on this topic.
How large were the past changes in the sun?
We only have direct observations of total solar irradiance (TSI) since the beginning of the satellite era, but we do have substantial evidence for variations in the level of solar activity in the past (from cosmogenic isotopes and sunspot records). Tying those factors together in order to estimate solar irradiance variations in the past is crucial for attributing past climate changes, particularly in the pre-industrial era.
In the May issue of Astronomy & Astrophysics, Shapiro et al. present a new long-term reconstruction of the solar irradiance that implies much greater variation over the last 7000 years than any previous reconstruction. What is the basis for this difference?
CMIP5 simulations
Climate modeling groups all across the world are racing to add their contributions to the CMIP5 archive of coupled model simulations. This coordinated project, proposed, conceived and specified by the climate modeling community itself, will be an important resource for analysts and for the IPCC AR5 report (due in 2013), and beyond.
There have been previous incarnations of the CMIP projects going back to the 1990s, but I think it’s safe to say that it was only with CMIP3 (in 2004/2005) that the project gained a real maturity. The CMIP3 archive was heavily used in the IPCC AR4 report – so much so that people often describe those models and simulations as the ‘IPCC models’. That is a reasonable shorthand, but is not really an accurate description (the models were not chosen by IPCC, designed by IPCC, or run by IPCC) even though I’ve used it on occasion. Part of the success of CMIP3 was the relatively open data access policy which allowed many scientists and hobbyists alike to access the data – many of whom were dealing with GCM output for the first time. Some 600 papers have been written using data from this archive. We discussed some of this success (and some of the problems) back in 2008.
Now that CMIP5 is gearing up for a similar exercise, it is worth looking into what has changed – in terms of the model specifications, the requested simulations, and the data serving to the wider community. Many of these issues are discussed in the current CLIVAR newsletter (Exchanges no. 56). (The references below are all to articles in this pdf).
There are three novelties this time around that I think are noteworthy: the use of more interactive Earth System models, a focus on initialised decadal predictions, and the inclusion of key paleo-climate simulations as part of the suite of runs.
The term Earth System Model is a little ambiguous, with some people reserving it for models that include a carbon cycle, and others (including me) using it more generally to denote models with more interactive components than standard (AR4-style) GCMs (i.e. atmospheric chemistry, aerosols, ice sheets, dynamic vegetation etc.). Regardless of terminology, the 20th Century historical simulations in CMIP5 will use a much more diverse set of model types than did the similar simulations in CMIP3 (where all models were standard coupled GCMs). This both expands the range of possible model evaluations and increases the complexity of that evaluation.
The ‘decadal prediction’ simulations are mostly being run with standard GCMs (see the article by Doblas-Reyes et al, p8). The different groups are trying multiple methods to initialise their ocean circulations and heat content at specific points in the past and are then seeing if they are able to better predict the actual course of events. This is very different from standard climate modelling where no attempt is made to synchronise modes of internal variability with the real world. The hope is that one can reduce the initial condition uncertainty for predictions in some useful way, though this has yet to be demonstrated. Early attempts to do this have had mixed results, and from what I’ve seen of the preliminary results in the CMIP5 runs, significant problems remain. This is one area to watch carefully though.
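As an aside for those wondering what ‘better predict the actual course of events’ means operationally, here is a minimal sketch of a hindcast skill comparison. Everything in it is synthetic and invented for illustration; whether initialised runs actually beat free-running ones is precisely what the CMIP5 experiments are testing, not something to be assumed:

```python
import numpy as np

# All data here are synthetic placeholders, purely for illustration.
rng = np.random.default_rng(0)
years = np.arange(10)
obs = 0.02 * years + 0.10 * rng.standard_normal(10)          # stand-in 'observations'
init = obs + 0.08 * rng.standard_normal((5, 10))             # 5-member 'initialised' ensemble
uninit = 0.02 * years + 0.15 * rng.standard_normal((5, 10))  # free-running ensemble

def rmse(ens, ref):
    """Root-mean-square error of the ensemble mean against a reference."""
    return np.sqrt(np.mean((ens.mean(axis=0) - ref) ** 2))

print("initialised RMSE:  ", rmse(init, obs))    # smaller by construction here
print("uninitialised RMSE:", rmse(uninit, obs))
```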
Personally, I am far more interested in the inclusion of the paleo component in CMIP5 (see Braconnot et al, p15). Paleo-climate simulations with the same models that are being used for the future projections allow for the possibility that we can have true ‘out-of-sample’ testing of the models over periods with significant climate changes. Much of the previous work in evaluating the IPCC models has been based on modern period skill metrics (the climatology, seasonality, interannual variability, the response to Pinatubo etc.), but while useful, this doesn’t encompass changes of the same magnitude as the changes predicted for the 21st Century. Including tests with simulations of the last glacial maximum, the Mid-Holocene or the Last Millennium greatly expands the range of model evaluation (see Schmidt (2010) for more discussion).
The CLIVAR newsletter has a number of other interesting articles, on CFMIP (p20), the scenarios being used (RCPs) (p12), the ESG data delivery system (p40), satellite comparisons (p46 and p47) and the carbon-cycle simulations (p27). Indeed, I think the range of issues covered presages the depth of interest that the CMIP5 archive will eventually generate.
There will be a WCRP meeting in October in Denver that will be very focused on the CMIP5 results, and it is likely that much of the context for the AR5 report will be reflected there.
Volcanic vs. Anthropogenic CO2
Guest Commentary by Terry Gerlach*
TV screen images of erupting and exploding volcanoes spewing forth emissions are typically spectacular, awesome, and vividly suggestive of huge additions of gas to the atmosphere. The smokestack and exhaust-pipe venting of anthropogenic emissions is, by comparison, unexciting, unimpressive, and commonplace. Consequently, it is easy to get traction with the general public for claims that volcanic CO2 emissions are far greater than those of human activities, or that the CO2 released in some recent or ongoing eruption exceeds anthropogenic releases in all of human history, or that the threat of a future super-eruption makes concerns about our carbon footprint laughable. The evidence from volcanology, however, does not support these claims.
Unforced Variations: Aug 2011
This month’s open thread. Your starter for 2010: the 2010 State of the Climate report…
“Misdiagnosis of Surface Temperature Feedback”
Guest commentary by Kevin Trenberth and John Fasullo
The hype surrounding a new paper by Roy Spencer and Danny Braswell is impressive (see for instance Fox News); unfortunately the paper itself is not. News releases and blogs on climate denier web sites have publicized the claim from the paper’s news release that “Climate models get energy balance wrong, make too hot forecasts of global warming”. The paper was published in a journal called Remote Sensing, which is a fine journal for geographers but does not deal with atmospheric and climate science, and it is evident that this paper did not get an adequate peer review. It should not have been published.
The paper’s title “On the Misdiagnosis of Surface Temperature Feedbacks from Variations in Earth’s Radiant Energy Balance” is provocative and should have raised red flags with the editors. The material in the paper has very basic shortcomings: no statistical significance of the results, error bars, or uncertainties are given in the figures or discussed in the text. Moreover, the description of the methods is not sufficient to replicate the results. As a first step, we have made some quick checks to see whether the results can be replicated, and we find some points of contention.
The basic observational result seems to be similar to what we can produce, but using slightly different datasets, such as the EBAF CERES dataset, makes the results somewhat smaller in magnitude. And some parts of the results do appear to be significant. So are they replicated in climate models? Spencer and Braswell say no, but this is where attempts to replicate their results require clarification. In contrast, some model results do appear to fall well within the range of uncertainties of the observations. How can that be? For one, the observations cover a 10-year period, while the models cover a 100-year period for the 20th century. The latter were detrended by Spencer, though for the 20th century that should not be necessary. One could, and perhaps should, treat the 100 years as 10 sets of 10 years and see whether the observations match any of the 10-year periods, but instead what appears to have been done is to use only the 100-year set by itself.
[ed. note: italics below replace the deleted sentence above, to make it clearer what is meant here.]
SB11 appears to have used the full 100-year record to evaluate the models, but this provides no indication of the robustness of their derived relationships. Here instead, we have considered each decade of the 20th century individually and quantified the inter-decadal variability to derive the Figure below. The figure shows the results for the observations, as in Spencer and Braswell, using the EBAF dataset (in black). We then show results from two different models, one which does not replicate ENSO well (top) and one which does (second panel). Here we give the average result (red curve) for all 10 decades, plus the range of results that reflects the variations from one decade to the next. The MPI-Echam5 model replicates the observations very well. When all model results from CMIP3 are included (bottom panel), the red curve is not too dissimilar from that of Spencer and Braswell, but with a huge range, due both to the spread among models and to the spread from decadal variability.
Figure: Lagged regression analysis of top-of-atmosphere net radiation against surface temperature. The CERES data are in black (as in SB11), and the individual models in each panel are in red. The dashed lines span the regressions for specific 10-year periods in the model (so that the variance is comparable to the 10 years of the CERES data). The three panels show results for a) a model with poor ENSO variability, b) a model with reasonable ENSO variability, and c) all models.
Consequently, our results suggest that there are some good models and some that are not so good, but rather than stratifying them by climate sensitivity, one should, in this case, stratify them by their ability to simulate ENSO. In the Figure, the model that replicates the observations better has high sensitivity while the other has low sensitivity. The net result is that the models agree with the observations within reasonable bounds.
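For readers who want a feel for the decade-by-decade calculation behind the Figure, here is a minimal Python sketch. The data are synthetic stand-ins (not the actual CERES or CMIP3 output) and the function is ours, but the structure follows the approach described above: compute lagged regression slopes separately for each 10-year segment of a 100-year record, then look at the spread across segments:

```python
import numpy as np

def lagged_regression(T, R, max_lag=12):
    """Slope of net TOA radiation R regressed on temperature T, for lags
    of -max_lag..+max_lag months (positive lag: R lags T)."""
    slopes = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            t, r = T[:len(T) - lag], R[lag:]
        else:
            t, r = T[-lag:], R[:lag]
        slopes[lag] = np.polyfit(t, r, 1)[0]  # W m^-2 per K
    return slopes

# Synthetic stand-ins for 100 years of monthly anomalies (not real data).
rng = np.random.default_rng(1)
months = 1200
T = rng.standard_normal(months)             # 'surface temperature'
R = -1.5 * T + rng.standard_normal(months)  # 'net TOA radiation'

# Ten 10-year segments, so each has variance comparable to ~10 years of CERES.
per_decade = [lagged_regression(T[i:i + 120], R[i:i + 120])
              for i in range(0, months, 120)]
zero_lag = [d[0] for d in per_decade]
print("zero-lag slope range across decades:", min(zero_lag), max(zero_lag))
```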
To help interpret the results, Spencer uses a simple model. But the simple model used by Spencer is too simple (Einstein said that things should be made as simple as possible, but not simpler): this one has gone well beyond being too simple (see for instance this post by Barry Bickmore). The model has no realistic ocean, no El Niño, and no hydrological cycle, and it was tuned to give the result it gave. Most of what goes on in the real world of significance that causes the relationship in the paper is ENSO. We have already rebutted Lindzen’s work on exactly this point. The clouds respond to ENSO, not the other way round [see: Trenberth, K. E., J. T. Fasullo, C. O’Dell, and T. Wong, 2010: Relationships between tropical sea surface temperatures and top-of-atmosphere radiation. Geophys. Res. Lett., 37, L03702, doi:10.1029/2009GL042314]. During the La Niña phase of ENSO there is a major uptake of heat by the ocean, and the heat is moved around and stored in the tropical western Pacific, setting the stage for the next El Niño, at which point it is redistributed across the tropical Pacific. The ocean cools as the atmosphere responds with characteristic El Niño weather patterns, forced from that region, that influence weather patterns worldwide. Ocean dynamics play a major role in moving heat around, and atmosphere-ocean interaction is a key to the ENSO cycle. None of these processes are included in the Spencer model.
Even so, the Spencer interpretation has no merit. The interannual global temperature variations were not radiatively forced, as claimed for the 2000s, and therefore cannot be used to say anything about climate sensitivity. Clouds are not a forcing of the climate system (except for the small portion related to human-related aerosol effects, which have a small effect on clouds). Clouds mainly occur because of weather systems (e.g., warm air rises and produces convection, and so on); they do not cause the weather systems, though they may provide feedbacks on them. Spencer has made this error of confounding forcing and feedback before, and it leads to a misinterpretation of his results.
The bottom line is that there is NO merit whatsoever in this paper. It turns out that Spencer and Braswell have an almost perfect title for their paper: “the misdiagnosis of surface temperature feedbacks from variations in the Earth’s Radiant Energy Balance” (leaving out the “On”).
CRUTEM3 data release (except Poland)
The entire CRUTEM3 database of station temperature measurements has just been released. This comes after a multi-year process to get permissions from individual National Weather Services to allow the data to be passed on to third parties, and after a ruling from the UK ICO. All the NWSs have now either agreed or not responded (except for Poland, which specifically refused). Since the Polish data is such a small fraction of the globe (and there are a few Polish stations in any case via RBSC or GCOS), this doesn’t make much difference to hemispheric means or regional climate. These permissions were obtained with help from the UK Met Office (who have also placed the station data on their website in a slightly different format) and whose FAQ is quite informative.
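As a rough illustration of why losing a handful of stations barely moves a hemispheric mean, here is a toy sketch of CRUTEM-style gridding (the station values are invented and this is not the actual CRUTEM code): station anomalies are averaged within 5°×5° boxes, and the hemispheric mean is an area-weighted average over the occupied boxes, so one station more or less in a well-sampled box changes very little:

```python
import numpy as np

# Toy CRUTEM-style gridding; the station list is hypothetical.
stations = [(52.2, 21.0, 0.4), (50.1, 19.9, 0.3), (48.8, 2.3, 0.6)]  # (lat, lon, anomaly)

boxes = {}
for lat, lon, anom in stations:
    key = (5 * int(np.floor(lat / 5)), 5 * int(np.floor(lon / 5)))  # box SW corner
    boxes.setdefault(key, []).append(anom)

# Area-weight each occupied box by cos(latitude) at the box centre.
weights = {k: np.cos(np.radians(k[0] + 2.5)) for k in boxes}
nh_mean = (sum(np.mean(v) * weights[k] for k, v in boxes.items())
           / sum(weights.values()))
print("NH mean anomaly (toy):", round(float(nh_mean), 3))
```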
This dataset has occasionally come up in blogospheric discussions.
Reanalyses ‘R’ Us
There is an interesting new wiki site, Reanalyses.org, that has been developed by a number of groups dedicated to documenting the various reanalysis products for atmosphere and ocean that are increasingly being made available.
For those who don’t know, a ‘reanalysis’ is a climate or weather model simulation of the past that includes data assimilation of historical observations. The observations can be very comprehensive (satellite, in situ, multiple variables) or relatively sparse (say, sea-level pressure only), and the models themselves are quite varied. Generally these models are drawn from the weather forecasting community (at least for the atmospheric components), which explains the odd terminology. An ‘analysis’ from a weather forecasting model is the (say) 6-hour forecast from the time of the observations. Weather forecasting groups realised a decade or so ago that the time series of their weather forecasts (the analyses) could not be used to track long-term changes because their models had been updated many times over the decades. Thus the idea arose to ‘re-analyse’ the historical observations with a single consistent model. These sets of 6-hour forecasts, using the data available at each point, are then more consistent in time (and presumably more accurate) than the original analyses were.
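Schematically, the forecast-analysis cycle looks like the following toy sketch for a single scalar variable (the model, gain and noise levels are all invented; real systems assimilate millions of observations into full three-dimensional states):

```python
import numpy as np

# Toy forecast-analysis cycle; a reanalysis repeats a loop like this over
# decades with one fixed model. All numbers are invented for illustration.
rng = np.random.default_rng(2)

def forecast(x):
    """Stand-in for the 6-hour model forecast: damped persistence."""
    return 0.95 * x

truth, analysis = 1.0, 0.0
gain = 0.4  # weight given to the observation relative to the forecast
for step in range(8):
    truth = 0.95 * truth + 0.1 * rng.standard_normal()  # the evolving real world
    obs = truth + 0.1 * rng.standard_normal()           # a noisy observation of it
    background = forecast(analysis)                     # prior state from the model
    analysis = background + gain * (obs - background)   # the 'analysis' update
    print(f"step {step}: background={background:+.3f} "
          f"analysis={analysis:+.3f} truth={truth:+.3f}")
```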
The first two reanalysis projects (NCEP1 and ERA-40) were groundbreaking and allowed a lot of analysis of the historical climate (around 1958 or 1948 onwards) that had not been possible before. Essentially, the models are being used to interpolate between observations in a (hopefully) physically consistent manner providing a gridded and complete data set. However, there are noted problems with this approach that need to be borne in mind.
The most important issue is that the amount and quality of the assimilated data has changed enormously over time. In the pre-satellite era (before around 1979) in particular, data is relatively sparse and reliant on networks of in-situ measurements; after 1979 the amount of data being brought in increases by orders of magnitude. It is also important to consider how even continuous measurement series have changed. For instance, the response time of the sensors in radiosondes (which are used to track atmospheric profiles of temperature and humidity) has steadily improved, which, if not corrected for in the reanalyses, would lead to an erroneous drying in the upper troposphere that has nothing to do with any actual climate trend. In practice it is hard to correct for such problems in data coverage and accuracy, and so trend analyses in the reanalyses have to be treated very carefully (and sometimes avoided altogether).
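A toy example makes the radiosonde point concrete. Suppose the true upper-tropospheric humidity never changes, but older, slower sensors carried a moist bias aloft that shrinks as instruments improve (all numbers below are invented):

```python
# A slow humidity sensor carried aloft keeps reporting the moist low-level
# air it just passed through, biasing upper-level readings high. As response
# times improve over the years, the bias shrinks and the record appears to
# 'dry' even though the true value never changes.
TRUE_UPPER = 0.20  # constant true upper-troposphere relative humidity
LOW_LEVEL = 0.80   # moist air the sensor passed through on ascent

for year in range(1960, 2001, 10):
    lag_bias = 0.3 * (2000 - year) / 40  # older sensor, bigger bias
    measured = TRUE_UPPER + lag_bias * (LOW_LEVEL - TRUE_UPPER)
    print(year, round(measured, 3))      # falls over time despite a constant truth
```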
A further problem is that different outputs from the reanalyses are differently constrained by observations. Where observations are plentiful and span the variability, the reanalysis field is close to what actually happened (for instance, horizontal components of the wind), but where the output field is only indirectly related to the assimilated observations (rainfall, cloudiness etc.), the changes and variability are much more of a product of the model.
The more modern products (NCEP-2, ERA-Interim, MERRA and others) are substantially improved over the first set, and new approaches are also being tried. The ‘20th Century Reanalysis‘ is a new product that uses only (plentiful) surface pressure measurements to constrain the dynamics; although it uses less data than other products, it can go back much earlier (to the 19th Century) and still produce meaningful results. Other new products are the ocean reanalyses (ECCO for instance) that try to take the same approach with ocean temperature and salinity measurements.
These products should definitely not be assumed to have the status of ‘real observations’, but they are very useful as long as people take the caveats seriously and are clear about the structural uncertainties. Results that differ enormously across different reanalyses should be viewed with caution.
The new site includes some helpful descriptions of how to download and plot the data, and will hopefully soon fill up the rest of its pages. Some suggestions might be a list of key papers discussing the results of these reanalyses and a list of issues found (so that others don’t waste their time). It’s a very promising start though.
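For a flavour of what ‘download and plot’ amounts to once a file is in hand, here is a minimal sketch using the xarray library; the file name and variable name are placeholders, and the real ones differ by product (check the documentation collected on the site):

```python
import xarray as xr
import matplotlib.pyplot as plt

# Minimal sketch; "reanalysis_subset.nc" and "air_temperature" are
# placeholders, not the names used by any particular product.
ds = xr.open_dataset("reanalysis_subset.nc")
t2m = ds["air_temperature"]

t2m.isel(time=0).plot()  # map of the first time step
plt.title("Near-surface air temperature (example reanalysis)")
plt.savefig("reanalysis_map.png", dpi=150)
```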