There is an interesting new wiki site, Reanalyses.org, that has been developed by a number of groups dedicated to documenting the various reanalysis products for atmosphere and ocean that are increasingly being made available.
For those who don’t know, a ‘reanalysis’ is a climate or weather model simulation of the past that includes data assimilation of historical observations. The observations can be very comprehensive (satellite, in situ, multiple variables) or relatively sparse (say, sea level pressure only), and the models themselves are quite varied. Generally these models are drawn from the weather forecasting community (at least for the atmospheric components), which explains the odd terminology. An ‘analysis’ from a weather forecasting model is the (say) 6 hour forecast from the time of the observations. Weather forecasting groups realised a decade or so ago that the time series of their weather forecasts (the analyses) could not be used to track long term changes, because their models had been updated many times over the decades. Thus the idea arose to ‘re-analyse’ the historical observations with a single, consistent model. The resulting sets of 6 hour forecasts, each using the data available at that point, are then more consistent in time (and presumably more accurate) than the original analyses were.
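Schematically, the forecast/analysis cycle amounts to the loop below. This is a minimal Python sketch in which `model_step` and `assimilate` are hypothetical stand-ins for a full forecast model and a data assimilation scheme; no real reanalysis system is anywhere near this simple.

```python
def reanalyse(initial_state, observation_windows, model_step, assimilate):
    """Run a (schematic) reanalysis: a chain of short forecasts, each
    corrected by whatever observations exist in its window."""
    analyses = []
    state = initial_state
    for obs in observation_windows:            # e.g. successive 6-hour windows
        forecast = model_step(state, hours=6)  # the short 'first guess' forecast
        state = assimilate(forecast, obs)      # correct it toward the observations
        analyses.append(state)                 # the corrected state is the 'analysis'
    return analyses                            # one frozen model, the whole record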
The first two reanalysis projects (NCEP1 and ERA-40) were groundbreaking and allowed a lot of analysis of the historical climate (from around 1948 for NCEP1 and 1958 for ERA-40 onwards) that had not been possible before. Essentially, the models are used to interpolate between observations in a (hopefully) physically consistent manner, providing a gridded and complete data set. However, there are well-known problems with this approach that need to be borne in mind.
The most important issue is that the amount and quality of the assimilated data have changed enormously over time. In the pre-satellite era (before around 1979) in particular, data is relatively sparse and reliant on networks of in-situ measurements. After 1979 the amount of data being brought in increases by orders of magnitude. It is also important to consider how even continuous measurement series have changed. For instance, the response time of the sensors in radiosondes (which are used to track atmospheric profiles of temperature and humidity) has steadily improved; if not corrected for in the reanalyses, this would lead to an erroneous drying in the upper troposphere that has nothing to do with any actual climate trend. In practice it is hard to correct for such problems in data coverage and accuracy, and so trend analyses in the reanalyses have to be treated very carefully (and sometimes avoided altogether).
A further problem is that different outputs from the reanalyses are differently constrained by observations. Where observations are plentiful and span the variability, the reanalysis field is close to what actually happened (for instance, horizontal components of the wind), but where the output field is only indirectly related to the assimilated observations (rainfall, cloudiness etc.), the changes and variability are much more of a product of the model.
The more modern products (NCEP-2, ERA-Interim, MERRA and others) are substantially improved over the first set, and new approaches are also being tried. The ‘20th Century Reanalysis’ is a new product that only uses (plentiful) surface pressure measurements to constrain the dynamics; although it uses less data than other products, it can go back much earlier (to the 19th Century) and still produce meaningful results. Other new products are the ocean reanalyses (ECCO, for instance) that try to take the same approach with ocean temperature and salinity measurements.
These products should definitely not be assumed to have the status of ‘real observations’, but they are very useful as long as people are careful to take the caveats seriously and to be clear about the structural uncertainties. Results that differ enormously across different reanalyses should be viewed with caution.
The new site includes some very useful descriptions of how to download and plot the data, and will hopefully soon have the rest of its pages filled in. Some suggestions: a list of key papers discussing results from these reanalyses, and lists of known issues (so that others don’t waste their time). It’s a very promising start though.
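To give a flavor of what those how-to pages cover, here is a minimal sketch of opening and mapping one field from a reanalysis file with Python’s xarray and matplotlib. The filename and variable name (“slp.mon.mean.nc”, “slp”) follow common NCEP-style file conventions and are assumptions for illustration, not taken from the site itself.

```python
import xarray as xr
import matplotlib.pyplot as plt

# Open a local copy of a (hypothetical) monthly-mean sea level pressure file;
# the filename and variable name follow common NCEP-style conventions.
ds = xr.open_dataset("slp.mon.mean.nc")

# Pick the first month in the record and draw a quick latitude/longitude map.
slp = ds["slp"].isel(time=0)
slp.plot()
plt.title("Sea level pressure, " + str(slp.time.values)[:10])
plt.show()
```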
Didactylos says
Holy synchronicity, Batman. I just started looking seriously at 20th Century Reanalysis earlier today.
[Response: Great minds…–eric]
Bob Pasken says
I have already used the ERA-Interim data for a study on the influence of the Saharan Air Layer (SAL) on hurricane development. I originally started the project with GFS analyses/forecasts, but found that ERA-Interim provided a much better set of initial conditions. A comparison of WRF initialized with ERA-Interim and dropsonde data versus WRF initialized with GFS and dropsonde data showed that using ERA-Interim for initialization produced a more realistic hurricane than the GFS initialization did. There were a few hiccups when we first started using ERA-Interim, but the extra work was worth the effort in producing a realistic storm.
Steve L says
This interests me greatly. My agency models salmon run sizes and migration behaviour for fisheries management purposes. Our tools have been changing over time (changes in fishing gear, hydroacoustic technology, stock identification methods, etc.). We are struggling to learn about the fish (comparisons across years) precisely because our assessment tools keep changing underneath us. Despite this confounding influence we dare not throw up our hands in frustration because, well, we need to try to inform fisheries decisions somehow. We have a further confounding influence, in that the fish themselves could have evolved (unlike physical principles, they don’t stay fixed). Our solution to date has been to keep restricting our analyses to recent time periods, so we suffer chronically from low sample sizes in these efforts. Perhaps investigation of these reanalysis approaches can be informative for us. Thanks for posting about it.
AntonyIndia says
Kevin Trenberth thinks that there are large disparities between different analyses (sea levels, total sea ice area, heat content anomalies)? He said so in his PowerPoint presentation (pages 9–10, 2010): http://www.joss.ucar.edu/events/2010/acre/thursday_talks/trenberth_WOAP.pdf
One objective of this (his?) new WCRP is to “help improve and promote sound data stewardship, including data archiving, management, and access. This includes making sure that climate-related data variables are reaching data archives, and that standards are set for archiving new types of data. Help make data accessible and available e.g., through the internet. Promote shared efforts for data quality control.”
The Climate establishment knows all is not well and this looks like their new project. A confession?
[Response: Huh? FYI WCRP and WOAP. – gavin]
Dick Dee says
Thanks for a generally accurate description of reanalysis, its advantages and its pitfalls. One important benefit of reanalysis that you didn’t mention is that it can provide very useful feedback about the quality of the input observations themselves. For example, the 20th Century Reanalysis includes many surface pressure observations that have never been used before. The time series of residuals at station locations are now available for statistical analysis. Shifts and sudden changes in the series may indicate possible problems with the data. These can be checked against station information to account for changes in measurement method, errors in station height, etc. In a similar way, errors in the radiosonde temperature record have been identified using residuals from the ERA-40 reanalysis (Haimberger 2007, http://journals.ametsoc.org/doi/abs/10.1175/JCLI4050.1).
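In code, the kind of residual check described above might look like the crude sketch below: flag any point where the mean of the residual series jumps between the windows before and after it. The window length and threshold are made-up illustration parameters, not any center’s actual quality-control settings.

```python
import numpy as np

def flag_shifts(residuals, window=60, threshold=3.0):
    """Flag candidate break points in an observation-minus-analysis
    residual series: indices where the means of the `window` points
    before and after differ by more than `threshold` standard errors.
    A crude illustrative sketch, not an operational QC scheme."""
    r = np.asarray(residuals, dtype=float)
    flags = []
    for i in range(window, len(r) - window):
        before, after = r[i - window:i], r[i:i + window]
        # standard error of the difference between the two window means
        se = np.sqrt(before.var(ddof=1) / window + after.var(ddof=1) / window)
        if se > 0 and abs(after.mean() - before.mean()) > threshold * se:
            flags.append(i)
    return flags
```

Flagged indices would then be checked against station metadata (instrument changes, relocations, height errors) before deciding whether the data or the metadata are at fault.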
Reanalyses were originally developed in NWP centers in the early 80s, to help them improve their forecasting systems. Since then we have seen a very positive feedback cycle: improved observations, improved reanalyses, improved models, etc. Science at its best.
Russell says
Next on the analytical agenda is differential sheep-goat thermometry: Watts and McIntyre have begun confusing the two in the light of James D. M. Speed, Gunnar Austrheim, Alison J. Hester and Atle Mysterud (2011), ‘Browsing interacts with climate to determine tree ring increment’, published today in Functional Ecology, doi:10.1111/j.1365-2435.2011.01877.x.
Robert says
Actually the KNMI Climate Explorer site:
http://climexp.knmi.nl/start.cgi?someone@somewhere
is also very useful for working with the reanalysis datasets. Not a big fan of the 20th Century Reanalysis on a regional basis, though, based on my comparisons. ERA-Interim seems to perform very well.