It’s worth going back every so often to see how projections made back in the day are shaping up. As we get to the end of another year, we can update all of the graphs of annual means with another single datapoint. Statistically this isn’t hugely important, but people seem interested, so why not?
AGU Fall 2009
16,000 attendees, thousands of cups of coffee and thousands of interesting conversations (and debates) about science.
That would be San Francisco, not Copenhagen of course.
There are a few of the RC crew there, so hopefully we’ll get some updates, but keep track of some other attending bloggers as well:
- Michael Tobis
- Steve Easterbrook
- Update: Harvey Liefert on the AIRS CO2 data
- Update2: Summary of Richard Alley’s talk
- Update3: The official AGU blog
and the whole AGU blogroll. There are some live webcasts through the week that might be interesting too.
If there are any other attendees reading, feel free to post about any interesting sessions/talks you see. I’ll update the main post with anything particularly noteworthy.
Unsettled Science
Unusually, I’m in complete agreement with a recent headline on the Wall Street Journal op-ed page:
“The Climate Science Isn’t Settled”
The article beneath it is the same mix of innuendo and misrepresentation that its author normally writes, but the headline is correct. The WSJ seems to think that the headline is some terribly important pronouncement that in some way undercuts the scientific consensus on climate change, but they are simply using an old rhetorical ‘trick’.
[Read more…] about Unsettled Science
CRU Hack: More context
Continuation of the older threads. Please scan those (even briefly) to see whether your point has already been dealt with. Let me know if there is something worth pulling from the comments to the main post.
In the meantime, read about why peer review is a necessary but not sufficient condition for science to be worth looking at. Also, before you conclude that the emails have any impact on the science, read about the six easy steps that mean that CO2 (and the other greenhouse gases) are indeed likely to be a problem, and think specifically about how anything in the emails affects them.
Update: The piece by Peter Kelemen at Columbia in Popular Mechanics is quite sensible, even if I don’t agree in all particulars.
Further update: Nature’s editorial.
Further, further update: Ben Santer’s mail (click on quoted text), the Mike Hulme op-ed, and Kevin Trenberth.
The CRU hack: Context
This is a continuation of the last thread which is getting a little unwieldy. The emails cover a 13 year period in which many things happened, and very few people are up to speed on some of the long-buried issues. So to save some time, I’ve pulled a few bits out of the comment thread that shed some light on some of the context which is missing in some of the discussion of various emails.
- Trenberth: You need to read his recent paper on quantifying the current changes in the Earth’s energy budget to realise why he is concerned about our current inability to track small year-to-year variations in the radiative fluxes.
- Wigley: The concern with sea surface temperatures in the 1940s stems from the paper by Thompson et al (2007) which identified a spurious discontinuity in ocean temperatures. The impact of this has not yet been fully corrected for in the HadSST data set, but people still want to assess what impact it might have on any work that used the original data.
- Climate Research and peer-review: You should read about the issues from the editors (Claire Goodess, Hans von Storch) who resigned because of a breakdown of the peer review process at that journal, that came to light with the particularly egregious (and well-publicised) paper by Soon and Baliunas (2003). The publisher’s assessment is here.
Update: Pulling out some of the common points being raised in the comments.
- HARRY_read_me.txt. This is a four-year-long work log of Ian (Harry) Harris, who was working to upgrade the documentation, metadata and databases associated with the legacy CRU TS 2.1 product, which is not the same as the HadCRUT data (see Mitchell and Jones, 2003 for details). The CRU TS 3.0 is available now (via ClimateExplorer, for instance), so presumably the database problems got fixed. Anyone who has ever worked on constructing a database from dozens of individual, sometimes contradictory and inconsistently formatted datasets will share his evident frustration with how tedious that can be.
- “Redefine the peer-reviewed literature!”. Nobody actually gets to do that, and the two papers discussed in that comment – McKitrick and Michaels (2004) and Kalnay and Cai (2003) – were both cited and discussed in Chapter 3 of the IPCC AR4 report. As an aside, neither has stood the test of time.
- “Declines” in the MXD record. This decline was ~~hidden~~ written up in Nature in 1998, where the authors suggested not using the post-1960 data. Their actual programs (in IDL script), unsurprisingly, warn against using post-1960 data. Added: Note that the ‘hide the decline’ comment was made in 1999 – 10 years ago – and has no connection whatsoever to more recent instrumental records.
- CRU data accessibility. From the date of the first FOI request to CRU (in 2007), it has been made abundantly clear that the main impediment to releasing the whole CRU archive is the small percentage of it that was given to CRU on the understanding that it wouldn’t be passed on to third parties. Those restrictions are in place because of the originating organisations (the various National Met. Services around the world) and are not CRU’s to break. As of Nov 13, the umpteenth FOI request for the same data met with exactly the same response. This is an unfortunate situation, and pressure should be brought to bear on the National Met. Services to release CRU from that obligation. It is not, however, the fault of CRU. The vast majority of the data in the HadCRU records is publicly available from GHCN (v2.mean.Z).
- Suggestions that FOI-related material be deleted … are ill-advised even if not carried out. What is and is not responsive and deliverable to an FOI request is, however, a subject that is very appropriate to discuss.
- Fudge factors (update): IDL code in some of the attached files calculates and applies an artificial ‘fudge factor’ to the MXD proxies to eliminate the ‘divergence pattern’. This was done for a set of experiments reported in this submitted 2004 draft by Osborn and colleagues, which was never published. Section 4.3 explains the rationale very clearly: it was to test the sensitivity of the calibration of the MXD proxies should the divergence end up being anthropogenic. It has nothing to do with any temperature record, has not been used in any published reconstruction, and is not the source of any hockey stick blade anywhere. (A schematic sketch of what such a sensitivity test looks like follows this list.)
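To make that last point concrete, here is a minimal sketch of this kind of calibration sensitivity test, in Python rather than the original IDL. Everything here – the synthetic data, the variable names, and the linear form of the adjustment – is an illustrative assumption of mine, not the actual Osborn et al. code:

```python
import numpy as np

# Illustrative calibration sensitivity test (NOT the actual CRU/Osborn et al.
# IDL code; all data below are synthetic and the adjustment is made up).

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)

# A hypothetical instrumental temperature series and an MXD-like proxy that
# tracks temperature until ~1960 and then diverges from it.
temps = 0.01 * (years - 1900) + rng.normal(0.0, 0.1, years.size)
proxy = temps + rng.normal(0.0, 0.1, years.size)
proxy[years > 1960] -= 0.015 * (years[years > 1960] - 1960)  # the 'divergence'

def calibrate(p, t):
    """Least-squares (slope, intercept) mapping proxy units to temperature."""
    return np.polyfit(p, t, 1)

# Calibration using the raw (diverging) proxy...
raw_fit = calibrate(proxy, temps)

# ...versus calibration after an artificial post-1960 adjustment that removes
# the divergence, to see how sensitive the calibration is to its presence.
adjusted = proxy.copy()
adjusted[years > 1960] += 0.015 * (years[years > 1960] - 1960)
adj_fit = calibrate(adjusted, temps)

print("raw calibration (slope, intercept):     ", raw_fit)
print("adjusted calibration (slope, intercept):", adj_fit)
```

The point of such an exercise is to quantify how much the calibration would change if the divergence were removed, not to generate any temperature reconstruction.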
Further update: This comment from Halldór Björnsson of the Icelandic Met. Service goes right to the heart of the accessibility issue:
Re: CRU data accessibility.
National Meteorological Services (NMSs) have different rules on data exchange. The World Meteorological Organization (WMO) organizes the exchange of “basic data”, i.e. data that are needed for weather forecasts. For details on these see WMO resolution number 40 (see http://bit.ly/8jOjX1).
This document acknowledges that WMO member states can place restrictions on the dissemination of data to third parties “for reasons such as national laws or costs of production”. These restrictions are only supposed to apply to commercial use; the research and education community is supposed to have free access to all the data.
Now, for researchers this sounds open and fine. In practice it hasn’t proved to be so.
Most NMSs can also distribute all sorts of data that are classified as “additional data and products”, and restrictions can be placed on these. Such data and products can range from regular weather data from a specific station to maps of rain intensity based on satellite and radar data. Many nations do place restrictions on them (see the link for additional data on the above WMO-40 webpage for details).
The reasons for restricting access are often commercial (NMSs are often required by law to have substantial income from commercial sources); in other cases it can be for national security reasons; but in many cases (in my experience) the reason simply seems to be “because we can”.
What has this got to do with CRU? The data that CRU needs for their database comes from entities that restrict access to much of their data. And even better, since the UK has submitted an exception for additional data, some nations that otherwise would provide data without question will not provide data to the UK. I know this from experience, since my nation (Iceland) did send in such conditions and for years I had problems getting certain data from the US.
The ideal that all data should be free and open is unfortunately not adhered to by a large portion of the meteorological community. Probably only a small portion of the CRU data is “locked”, but the end effect is that all their data becomes closed. It is not their fault, and I am sure that they dislike these restrictions as much as any other researcher who has tried to get access to all data from stations in region X in country Y.
These restrictions end up wasting resources and hurting everyone. The research community (CRU included) and the public are the victims. If you don’t like it, write to your NMSs and urge them to open all their data.
I can update this post further if there is demand. Please let me know in the comments, which, as always, should be substantive, non-insulting and on topic.
Comments continue here.
It’s all about me (thane)!
Well, it’s not really all about me. But methane has figured strongly in a couple of stories recently and gets an apparently-larger-than-before shout-out in Al Gore’s new book as well. Since a part of the recent discussion is based on a paper I co-authored in Science, it is probably incumbent on me to provide a little context.
First off, these latest results are being strongly misrepresented in certain quarters. It should be obvious, but still bears emphasizing, that redistributing the historic forcings between various short-lived species and CH4 is mainly an accounting exercise and doesn’t impact the absolute effect attributed to CO2 (except for a tiny impact of fossil-derived CH4 on the fossil-derived CO2). The headlines that stated that our work shows a bigger role for CH4 should have made it clear that this is at the expense of other short-lived species, not CO2. Indeed, the attribution of historical forcings to CO2 that we made back in 2006 is basically the same as it is now.
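To see why this is just an accounting exercise, here is a toy sketch (the numbers are made up for illustration and are not the values from our paper): shifting forcing from the ‘other short-lived species’ column into the CH4 column changes neither the CO2 term nor the total.

```python
import math

# Toy reattribution of historical forcings (made-up W/m^2 values, NOT the
# numbers from the Science paper): moving forcing between CH4 and other
# short-lived species leaves the CO2 term and the total untouched.

before = {"CO2": 1.6, "CH4": 0.5, "other_short_lived": 0.7}

# Reattribute 0.3 W/m^2 from other short-lived species to CH4 (e.g. by
# counting indirect effects of CH4 emissions against CH4 itself).
after = dict(before)
after["CH4"] += 0.3
after["other_short_lived"] -= 0.3

assert after["CO2"] == before["CO2"]                            # CO2 unchanged
assert math.isclose(sum(after.values()), sum(before.values()))  # total unchanged
print(before, "->", after)
```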
[Read more…] about It’s all about me (thane)!
Muddying the peer-reviewed literature
We’ve often discussed the hows and whys of correcting incorrect information that occasionally appears in the peer-reviewed literature. There are multiple recent instances of heavily-promoted papers that contained fundamental flaws that were addressed both on blogs and in submitted comments or follow-up papers (e.g. McLean et al, Douglass et al., Schwartz). Each of those wasted a huge amount of everyone’s time, though there is usually some (small) payoff in terms of a clearer statement of the problems and lessons for subsequent work. However, in each of those cases, the papers were already “in press” by the time other people were aware of the problems.
What is the situation though when problems (of whatever seriousness) are pointed out at an earlier stage? For instance, when a paper has been accepted in principle but a final version has not been sent in, and well before the proofs have been sent out? At that point it would seem to be incumbent on the authors to ensure that any errors are fixed before they have a chance to confuse or mislead a wider readership. In earlier times, corrections and adjustments would often have been made via a ‘Note added in proof’, but this is less used these days since it is so easy to fix electronic versions.
[Read more…] about Muddying the peer-reviewed literature
350
I was quoted by Andrew Revkin in the New York Times on Sunday in a piece about the 350.org International Day of Climate Action (involving events in 181 countries). The relevant bit is:
Gavin A. Schmidt, a climate scientist who works with Dr. Hansen and manages a popular blog on climate science, realclimate.org, said those promoting 350 or debating the number might be missing the point.
“The situation is analogous to people trying to embark on a cross-country road trip to California but they’ve started off heading to Maine instead,” Dr. Schmidt said. “But instead of working out ways to turn around, they have decided to argue about where they are going to park when they get to L.A.”
“If you ask a scientist how much more CO2 do you think we should add to the atmosphere, the answer is going to be none.”
I’ve been told that some readers may have misinterpreted the quote as a criticism of the 350.org campaign itself. This was not the intent and in fact my metaphor wouldn’t have made sense in that context at all. Instead, it was a criticism of people who are expending effort arguing about whether 350 is precisely the right number for a long term target, or whether it should be somewhat higher or lower. Since we aren’t currently headed anywhere near 350 ppmv (in fact we are at 388 ppmv CO2 and increasing by about 2 ppmv/yr), we need to urgently think of ways the situation can turn around. Tapping into the creativity and enthusiasm shown by the 350.org campaigners will certainly be part of that process.
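For concreteness, a back-of-the-envelope extrapolation using the numbers just quoted (the constant ~2 ppmv/yr growth rate is purely an illustrative assumption; real-world growth varies from year to year):

```python
# How the gap to a 350 ppmv target evolves if concentrations keep rising at
# the current rate (linear extrapolation assumed purely for illustration).

current = 388.0  # ppmv CO2, approximate value quoted above
target = 350.0   # ppmv, the 350.org target
growth = 2.0     # ppmv/yr, approximate current trend

for years_ahead in (0, 5, 10, 20):
    conc = current + growth * years_ahead
    print(f"in {years_ahead:2d} yr: ~{conc:.0f} ppmv "
          f"({conc - target:+.0f} ppmv relative to the 350 target)")
```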
We discussed some of the thinking behind this ‘Target CO2‘ when Jim Hansen and colleagues’ paper first came out, where I think we made it clear that picking a specific CO2 target to avoid ‘dangerous’ climate change is an inexact science at best. The comments by Robert Brulle and Ray Pierrehumbert at DotEarth and James Hrynyshyn also highlight some of that complexity. And I think the suggestions by ‘Paulina‘ for how a tweaked article might have been clearer are very apropos.
However, as the final line in my NYT quote should make clear, I personally think the scientific case for not increasing CO2 any further is very strong. Since the planet has not caught up with current concentrations (and thus will continue to change), picking an ultimate target that is less than today’s level is therefore wise. Of course, how we get there is much trickier than knowing where it is we should be going, but having a map of the destination is useful. As we discussed in the ‘trillionth ton’ posting a couple of months back, how we get there also makes a difference.
In my original email to Andy Revkin, I had actually appended a line:
If you ask a scientist how much more CO2 do you think we should add to the atmosphere, the answer is going to be none.
All the rest is economics.
(and technology, and sociology, and psychology and politics etc.) but the point is that working out how we get there from here is the real challenge and the more people who are aware and involved in developing those solutions the better.
Why Levitt and Dubner like geo-engineering and why they are wrong
Many commentators have already pointed out dozens of misquotes, misrepresentations and mistakes in the ‘Global Cooling’ chapter of the new book SuperFreakonomics by Ste[ph|v]ens Levitt and Dubner (see Joe Romm (parts I, II, III, IV), Stoat, Deltoid, UCS and Paul Krugman for details; Michael Tobis has a good piece on the difference between adaptation and geo-engineering). Unfortunately, Amazon has now turned off the ‘search inside’ function for this book, but you can read the relevant chapter for yourself here (via Brad DeLong). However, instead of simply listing errors already found by others, I’ll focus on why this chapter was possibly written in the first place. (For some background on geo-engineering, read our previous pieces: Climate Change methadone? and Geo-engineering in vogue. Also, the Atlantic Monthly “Re-Engineering the Earth” article had a lot of quotes from our own Raypierre.)
[Read more…] about Why Levitt and Dubner like geo-engineering and why they are wrong
Decadal predictions
There has been a lot of discussion about decadal climate predictions in recent months. It came up as part of the ‘climate services’ discussion and was alluded to in the rather confused New Scientist piece a couple of weeks ago. This is a relatively “hot” topic to be working on, exemplified by two initial high-profile papers (Smith et al, 2007 and Keenlyside et al, 2008). Indeed, the specifications for the new simulations being set up for the next IPCC report include a whole section for decadal simulations that many of the modelling groups will be responding to.
[Read more…] about Decadal predictions