Open thread for October…
Greenland meltdown
After a record-breaking 2010 in terms of surface melt area in Greenland (Tedesco et al., 2011), numbers from 2011 have been eagerly awaited. Marco Tedesco and his group have now just reported their results. This is unrelated to the other Greenland meltdown this week, which occurred at the launch of the new Times Atlas.
[Read more…] about Greenland meltdown
References
- M. Tedesco, X. Fettweis, M.R. van den Broeke, R.S.W. van de Wal, C.J.P.P. Smeets, W.J. van de Berg, M.C. Serreze, and J.E. Box, "The role of albedo and accumulation in the 2010 melting record in Greenland", Environmental Research Letters, vol. 6, pp. 014005, 2011. http://dx.doi.org/10.1088/1748-9326/6/1/014005
Resignations, retractions and the process of science
Much is being written about the very public resignation of Wolfgang Wagner from the editorship of Remote Sensing over the publication of Spencer and Braswell (2011) – and rightly so. It is a very rare situation that an editor resigns over the failure of peer review, and to my knowledge it has only happened once before in anything related to climate science – the mass resignation of 6 editors at Climate Research in 2003 in the wake of the Soon and Baliunas debacle. Some of the commentary this weekend has been reasonable, but many people are obviously puzzled by this turn of events and unsupported rumours are flying around.
[Read more…] about Resignations, retractions and the process of science
Arctic sea ice minimum discussions
Here is a continuation of the last Arctic sea ice discussion as we get closer to the 2011 minimum. All figures will update continuously.
JAXA Sea ice extent and area: [figure]
Cryosphere Today sea ice concentration: [figure]
Estimated sea ice volume from UW PIOMAS (updated every month): [figure]
The CERN/CLOUD results are surprisingly interesting…
The long-awaited first paper from the CERN/CLOUD project has just been published in Nature. The paper, by Kirkby et al., describes changes in aerosol nucleation as a function of increasing sulphates, ammonia and ionisation in the CERN-based ‘CLOUD’ chamber. Perhaps surprisingly, the key innovation in this experimental setup is not the presence of the controllable ionisation source (from the Proton Synchrotron accelerator), but rather the state-of-the-art instrumentation of the chamber, which has allowed them to see in unprecedented detail what is going on in the aerosol nucleation process (this is according to a couple of aerosol people I’ve spoken with about this).
This paper is actually remarkably free of the over-the-top spin that has accompanied previous papers, and that bodes very well for making actual scientific progress on this topic.
[Read more…] about The CERN/CLOUD results are surprisingly interesting…
CMIP5 simulations
Climate modeling groups all across the world are racing to add their contributions to the CMIP5 archive of coupled model simulations. This coordinated project, proposed, conceived and specified by the climate modeling community itself, will be an important resource for analysts and for the IPCC AR5 report (due in 2013), and beyond.
There have been previous incarnations of the CMIP projects going back to the 1990s, but I think it’s safe to say that it was only with CMIP3 (in 2004/2005) that the project gained a real maturity. The CMIP3 archive was heavily used in the IPCC AR4 report – so much so that people often describe those models and simulations as the ‘IPCC models’. That is a reasonable shorthand, but is not really an accurate description (the models were not chosen by IPCC, designed by IPCC, or run by IPCC) even though I’ve used it on occasion. Part of the success of CMIP3 was the relatively open data access policy which allowed many scientists and hobbyists alike to access the data – many of whom were dealing with GCM output for the first time. Some 600 papers have been written using data from this archive. We discussed some of this success (and some of the problems) back in 2008.
Now that CMIP5 is gearing up for a similar exercise, it is worth looking into what has changed – in terms of the model specifications, the requested simulations and the data serving to the wider community. Many of these issues are being discussed in the current CLIVAR newsletter (Exchanges no. 56). (The references below are all to articles in this pdf).
There are three main novelties this time around that I think are noteworthy: the use of more interactive Earth System models, a focus on initialised decadal predictions, and the inclusion of key paleo-climate simulations as part of the suite of runs.
The term Earth System Model is a little ambiguous, with some people reserving it for models that include a carbon cycle, and others (including me) using it more generally to denote models with more interactive components than used in more standard (AR4-style) GCMs (i.e. atmospheric chemistry, aerosols, ice sheets, dynamic vegetation etc.). Regardless of terminology, the 20th Century historical simulations in CMIP5 will use a much more diverse set of model types than did the similar simulations in CMIP3 (where all models were standard coupled GCMs). This both expands the range of possible evaluations of the models and increases the complexity of that evaluation.
The ‘decadal prediction’ simulations are mostly being run with standard GCMs (see the article by Doblas-Reyes et al, p8). The different groups are trying multiple methods to initialise their ocean circulations and heat content at specific points in the past and are then seeing if they are able to better predict the actual course of events. This is very different from standard climate modelling where no attempt is made to synchronise modes of internal variability with the real world. The hope is that one can reduce the initial condition uncertainty for predictions in some useful way, though this has yet to be demonstrated. Early attempts to do this have had mixed results, and from what I’ve seen of the preliminary results in the CMIP5 runs, significant problems remain. This is one area to watch carefully though.
Personally, I am far more interested in the inclusion of the paleo component in CMIP5 (see Braconnot et al, p15). Paleo-climate simulations with the same models that are being used for the future projections allow for the possibility that we can have true ‘out-of-sample’ testing of the models over periods with significant climate changes. Much of the previous work in evaluating the IPCC models has been based on modern period skill metrics (the climatology, seasonality, interannual variability, the response to Pinatubo etc.), but while useful, this doesn’t encompass changes of the same magnitude as the changes predicted for the 21st Century. Including tests with simulations of the last glacial maximum, the Mid-Holocene or the Last Millennium greatly expands the range of model evaluation (see Schmidt (2010) for more discussion).
The CLIVAR newsletter has a number of other interesting articles, on CFMIP (p20), the scenarios being used (RCPs) (p12), the ESG data delivery system (p40), satellite comparisons (p46, and p47) and the carbon-cycle simulations (p27). Indeed, the range of issues covered I think presages the depth and interest that the CMIP5 archive will eventually generate.
There will be a WCRP meeting in October in Denver that will be very focused on the CMIP5 results, and it is likely that much of the context for the AR5 report will be reflected there.
CRUTEM3 data release (except Poland)
The entire CRUTEM3 database of station temperature measurements has just been released. This comes after a multi-year process to get permissions from individual National Weather Services to allow the passing on of data to third parties, and after a ruling from the UK ICO. All the NWSs have now either agreed or not responded (except for Poland, which specifically refused). Since the Polish data is such a small fraction of the globe (and there are a few Polish stations in any case via RBSC or GCOS), this doesn’t make much difference to hemispheric means or regional climate. These permissions were obtained with help from the UK Met Office (who have also placed the station data on their website in a slightly different format) and whose FAQ is quite informative.
This dataset has occasionally come up in blogospheric discussions.
Reanalyses ‘R’ Us
There is an interesting new wiki site, Reanalyses.org, that has been developed by a number of groups dedicated to documenting the various reanalysis products for atmosphere and ocean that are increasingly being made available.
For those that don’t know, a ‘reanalysis’ is a climate or weather model simulation of the past that includes data assimilation of historical observations. The observations can be very comprehensive (satellite, in situ, multiple variables) or relatively sparse (say, sea level pressure only), and the models themselves are quite varied. Generally these models are drawn from the weather forecasting community (at least for the atmospheric components), which explains the odd terminology. An ‘analysis’ from a weather forecasting model is the 6 hour (say) forecast from the time of observations. Weather forecasting groups realised a decade or so ago that the time series of their weather forecasts (the analyses) could not be used to track long term changes because their models had been updated many times over the decades. Thus the idea arose to ‘re-analyse’ the historical observations with a single consistent model. These sets of 6 hour forecasts using the data available at each point are then more consistent in time (and presumably more accurate) than the original analyses were.
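The forecast-then-correct cycle described above can be caricatured in a few lines of code. This is a toy sketch only – the model, the observation record and the blending weight are all invented for illustration, and bear no relation to any operational assimilation scheme:

```python
# Toy illustration of a reanalysis cycle: a short model forecast is
# repeatedly nudged toward whatever observations happen to exist.
# Everything here (the model, numbers, weights) is made up.

def forecast(state, dt=0.25):
    """Trivial stand-in for a 6-hour model forecast: damped relaxation."""
    return state + dt * (20.0 - state) * 0.1

def assimilate(forecast_state, obs, obs_weight=0.5):
    """Blend the forecast with an observation where one exists."""
    if obs is None:  # data-sparse step: the 'analysis' is pure model
        return forecast_state
    return (1 - obs_weight) * forecast_state + obs_weight * obs

# A sparse observation record: None marks times with no data.
observations = [15.2, None, None, 16.1, None, 16.8]

state = 14.0
analyses = []
for obs in observations:
    state = forecast(state)          # advance the model 6 hours
    state = assimilate(state, obs)   # correct it toward the observation
    analyses.append(state)
```

The point of the sketch is the structure: at data-rich steps the ‘analysis’ is pulled toward the observations, while at data-sparse steps it is the model alone – which is exactly why the density of assimilated data matters so much (see below).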
The first two reanalysis projects (NCEP1 and ERA-40) were groundbreaking and allowed a lot of analysis of the historical climate (from 1948 and 1958 onwards, respectively) that had not been possible before. Essentially, the models are being used to interpolate between observations in a (hopefully) physically consistent manner, providing a gridded and complete data set. However, there are noted problems with this approach that need to be borne in mind.
The most important issue is that the amount and quality of the assimilated data has changed enormously over time. Particularly in the pre-satellite era (before around 1979), data is relatively sparse and reliant on networks of in-situ measurements. After 1979 the amount of data being brought in increases by orders of magnitude. It is also important to consider how even continuous measurement series have changed. For instance, the response time for sensors in radiosondes (which are used to track atmospheric profiles of temperature and humidity) has steadily improved, which, if not corrected for in the reanalyses, would lead to an erroneous drying in the upper troposphere that has nothing to do with any actual climate trend. In fact it is hard to correct for such problems in data coverage and accuracy, and so trend analyses in the reanalyses have to be treated very carefully (and sometimes avoided altogether).
A further problem is that different outputs from the reanalyses are differently constrained by observations. Where observations are plentiful and span the variability, the reanalysis field is close to what actually happened (for instance, horizontal components of the wind), but where the output field is only indirectly related to the assimilated observations (rainfall, cloudiness etc.), the changes and variability are much more of a product of the model.
The more modern products (NCEP-2, ERA-Interim, MERRA and others) are substantially improved over the first set, and new approaches are also being tried. The ‘20th Century Reanalysis’ is a new product that only uses (plentiful) surface pressure measurements to constrain the dynamics, and although it uses less data than other products, it can go back much earlier (to the 19th Century) and still produce meaningful results. Other new products are the ocean reanalyses (ECCO for instance) that try to take the same approach with ocean temperature and salinity measurements.
These products should definitely not be assumed to have the status of ‘real observations’, but they are very useful as long as people are careful to take the caveats seriously, and be clear about the structural uncertainties. Results that differ enormously across different reanalyses should be viewed with caution.
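One simple way to act on that caution is to treat the spread across products as a rough gauge of structural uncertainty. The sketch below uses invented stand-in series, not real reanalysis output, and an arbitrary threshold:

```python
# Hedged sketch: per-time-step spread across (hypothetical) reanalysis
# products as a crude structural-uncertainty flag. The series and the
# 0.3 threshold are invented for illustration.
from statistics import pstdev

# Same diagnostic (say, a regional precipitation anomaly) from three
# hypothetical products:
products = {
    "product_A": [0.1, 0.3, 0.2, 0.4],
    "product_B": [0.0, 0.4, 0.1, 0.5],
    "product_C": [0.9, -0.2, 1.1, -0.5],  # disagrees strongly with the others
}

def structural_spread(series_by_product):
    """Standard deviation across products at each time step."""
    steps = zip(*series_by_product.values())
    return [pstdev(step) for step in steps]

spread = structural_spread(products)
# Large spread marks time steps where the products disagree and the
# result should be viewed with caution.
flagged = [i for i, s in enumerate(spread) if s > 0.3]
```

Where the flagged spread is large relative to the signal of interest, the field in question is more a product of the models than of the observations.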
The new site includes some very promising descriptions of how to download and plot the data, and will hopefully soon be able to fill up the rest of the pages. Some suggestions might be a list of key papers discussing the results of these reanalyses and lists of issues found (so that others don’t waste their time). It’s a very promising start though.
Revisiting historical ocean surface temperatures
Readers may recall discussions of a paper by Thompson et al. (2008) back in May 2008. This paper demonstrated that there was very likely an artifact in the sea surface temperature (SST) collation by the Hadley Centre (HadSST2) around the end of the second world war and for a few years subsequently, related to the different ways ocean temperatures were taken by different fleets. At the time, we reported that this would certainly be taken into account in revisions of the data set as more data was processed and better classifications of the various observational biases occurred. Well, that process has finally resulted in the publication of a new compilation, HadSST3.
[Read more…] about Revisiting historical ocean surface temperatures
How Soon is now?
Willie Soon is a name that pops up every so often in climate ‘debate’. He was the lead author on the Soon and Baliunas (2003) paper (the only paper that has ever led to the resignation of 6 editors in protest at the failure of peer-review that led to its publication). He was a recent speaker (from 37.20) at the 2011 Heartland Institute conference, and can be counted on to produce a contrarian take on any particular issue that anyone might care about – ranging from climate, to mercury in fish and polar bear population dynamics.
[Read more…] about How Soon is now?