It’s long been known that El Niño variability affects global mean temperature anomalies. 1998 was so warm in part because of the big El Niño event over the winter of 1997-1998, which directly warmed a large part of the Pacific and indirectly warmed (via the large increase in water vapour) an even larger region. The opposite effect was seen with the La Niña event this last winter. Since the variability associated with these events is large compared to expected global warming trends over a short number of years, the underlying trends might be more clearly seen if the El Niño events (more generally, the El Niño – Southern Oscillation (ENSO)) were taken out of the way. There is no perfect way to do this – but there are a couple of reasonable approaches.
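As a rough illustration of one such approach (a sketch only, not the specific method of any particular analysis; the choice of index, the 3-month lag and the simple linear fit are all assumptions made here for clarity), one can regress a lagged ENSO index such as Niño 3.4 against the global mean temperature anomalies and subtract the ENSO-congruent part:

```python
import numpy as np

def remove_enso(gmst_anom, nino34, lag_months=3):
    """Regress a lagged ENSO index out of a global-mean temperature series.

    gmst_anom  : 1-D array of monthly global-mean temperature anomalies (degC)
    nino34     : 1-D array of the Nino 3.4 index for the same months (degC)
    lag_months : assumed lag of the global temperature response to ENSO
    """
    # Pair month t of temperature with the ENSO index at t - lag_months,
    # since the global response lags the tropical Pacific by a few months.
    x = nino34[:-lag_months] if lag_months > 0 else nino34
    y = gmst_anom[lag_months:] if lag_months > 0 else gmst_anom

    # Ordinary least-squares fit: y = a*x + b
    a, b = np.polyfit(x, y, 1)

    # Subtract the ENSO-congruent component; the residual is the
    # 'ENSO-removed' series in which the underlying trend is easier to see.
    residual = y - (a * x + b)
    return residual, a
```

The residual series of course still contains volcanic eruptions and other sources of variability; removing ENSO in this way simply reduces one well-known source of short-term noise.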
North Pole notes
I always find it interesting why some stories get traction in the mainstream media and why some don’t. In online science discussions, the fate of this year’s summer sea ice has been the focus of a significant betting pool, a test of expert prediction skills, and a week-by-week (almost) running commentary. However, none of these efforts made it on to the Today program. Instead, a rather casual article in the Independent, which showed the latest thickness data and quoted Mark Serreze as saying that the area around the North Pole had 50/50 odds of being completely ice-free this summer, has taken off across the media.
More PR related confusion
It’s a familiar story: An interesting paper gets published, there is a careless throwaway line in the press release, and a whole series of misleading headlines ensues.
This week, it’s a paper on bromine- and iodine-mediated ozone loss in marine boundary layer environments (see a good commentary here). This is important for the light it shines on tropospheric ozone chemistry (“bad ozone”), which is a contributing factor to global warming (albeit one which is only about 20% as important as CO2). So far so good. The paper contains some calculations indicating that chemical transport models without these halogen effects overestimate ozone near the Cape Verde region by about 15% – a difference that certainly could be of some importance if it can be extrapolated across the oceans.
However, the press release contains the line
Large amounts of ozone – around 50% more than predicted by the world’s state-of-the-art climate models – are being destroyed in the lower atmosphere over the tropical Atlantic Ocean.
(my highlights). This led directly to headlines like Study highlights need to adjust climate models.
Why is this confusing? Because the term ‘climate models’ is interpreted very differently in the public sphere than it is in the field. For most of the public, it is ‘climate models’ that are used to project global warming into the future, or to estimate the planet’s sensitivity to CO2. Thus a statement like the one above, and the headline that came from it, are interpreted to mean that the estimates of sensitivity or of future warming are now in question. Yet this is completely misleading, since neither climate sensitivity nor CO2-driven future warming will be at all affected by any revisions in ozone chemistry – mainly because most climate models don’t consider ozone chemistry at all. Precisely zero of the IPCC AR4 model simulations (discussed here for instance) used an interactive ozone module in doing the projections into the future.
What the paper is discussing, and what was glossed over in the release, is that it is the next generation of models, often called “Earth System Models” (ESMs), that are starting to include atmospheric chemistry, aerosols, ozone and the like. These models may well be significantly affected by increases in marine boundary layer ozone loss, but since they have only just started to be used to simulate 20th and early 21st Century changes, it is very unclear what difference it will make at the large scale. These models are significantly more complicated than standard climate models (having dozens of extra tracers to move around, and a lot of extra coding to work through), are slower to run, and have been used much less extensively.
Climate models today are extremely flexible and configurable tools that can include all these Earth System modules (including those mentioned above, but also full carbon cycles and dynamic vegetation), but depending on the application, often don’t need to. Thus while, in theory, a revision in ozone chemistry, soil respiration or aerosol properties might impact the full ESM, it won’t affect the more basic stuff (like the sensitivity to CO2). But it seems that the “climate models will have to be adjusted” meme is just too good not to use – regardless of the context.
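To make the modularity point concrete, here is a purely hypothetical sketch (this is not any real model’s configuration interface; the component names are invented for illustration) of how such optional modules are typically switched on or off. The physical core that determines the sensitivity to CO2 is present in every configuration, whether or not interactive chemistry is enabled:

```python
# Hypothetical configuration sketch -- illustrative only, not a real model's API.
standard_run = {
    "atmosphere": True,              # radiation, dynamics, clouds: sets the CO2 sensitivity
    "ocean": True,
    "sea_ice": True,
    "interactive_chemistry": False,  # ozone/halogen chemistry off: ozone fields are prescribed
    "interactive_aerosols": False,
    "carbon_cycle": False,
}

# An "Earth System" configuration switches the extra modules on.
earth_system_run = dict(standard_run,
                        interactive_chemistry=True,
                        interactive_aerosols=True,
                        carbon_cycle=True)

# A revision to marine boundary-layer ozone chemistry only touches runs where
# interactive_chemistry is True; the standard configuration, and hence the
# model's sensitivity to CO2, is unchanged.
```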
Ocean heat content revisions
Hot on the heels of last month’s reporting of a discrepancy in the ocean surface temperatures, a new paper in Nature (by Domingues et al, 2008) reports on revisions of the ocean heat content (OHC) data – a correction required because of other discrepancies in measuring systems found last year.
Of buckets and blogs
This last week has been an interesting one for observers of how climate change is covered in the media and online. On Wednesday an interesting paper (Thompson et al) was published in Nature, pointing to a clear artifact in the sea surface temperatures in 1945 and associating it with the changing mix of fleets and measurement techniques at the end of World War II. The mainstream media by and large got the story right – puzzling anomaly tracked down, corrections in progress after a little scientific detective work, consequences minor – even though a few headline writers got a little carried away in equating a specific dip in 1945 ocean temperatures with the more gentle 1940s-1970s cooling that is seen in the land measurements. However, some blog commentaries have gone completely overboard on the implications of this study in ways that are very revealing of their underlying biases.
The best commentary came from John Nielsen-Gammon’s new blog where he described very clearly how the uncertainties in data – both the known unknowns and unknown unknowns – get handled in practice (read that and then come back). Stoat, quite sensibly, suggested that it’s a bit early to be expressing an opinion on what it all means. But patience is not one of the blogosphere’s virtues and so there was no shortage of people extrapolating wildly to support their pet hobbyhorses. This in itself is not so unusual; despite much advice to the contrary, people (the media and bloggers) tend to weight new individual papers that make the news far more highly than the balance of evidence that really underlies assessments like the IPCC’s. But in this case, the addition of a little knowledge made the usual extravagances a little more scientific-looking and has given them some extra steam.
Tropical tropospheric trends again
Back in December 2007, we quite heavily criticised the paper of Douglass et al (in press at IJoC) which purported to show that models and data were inconsistent when it came to the trends in the tropical troposphere. There were two strands to our critique: i) that the statistical test they used was not appropriate and ii) that they did not acknowledge the true structural uncertainty in the observations. Most subsequent discussion has been related to the statistical issue, but the second point is perhaps more important.
Even when Douglass et al was written, those authors were aware that there were serious biases in the radiosonde data (they had been reported in Sherwood et al, 2005 and elsewhere), and that there were multiple attempts to objectively address the problems and to come up with more homogeneous analyses. We mentioned the RAOBCORE project at the time and noted the big difference that using their version 1.4 vs 1.2 made to the comparison (a difference nowhere mentioned in Douglass et al’s original accepted paper, which only reported on v1.2 despite the authors being aware of the issue). However, there are at least three new papers in press that independently tackle the issue, and their results go a long way towards addressing the problems.
What the IPCC models really say
Over the last couple of months there has been much blog-viating about what the models used in the IPCC 4th Assessment Report (AR4) do and do not predict about natural variability in the presence of a long-term greenhouse gas related trend. Unfortunately, much of the discussion has been based on graphics, energy-balance models and descriptions of what the forced component is, rather than the full ensemble from the coupled models. That has led to some rather excitable but ill-informed buzz about very short time scale tendencies. We have already discussed how short term analysis of the data can be misleading, and we have previously commented on the use of the uncertainty in the ensemble mean being confused with the envelope of possible trajectories (here). The actual model outputs have been available for a long time, and it is somewhat surprising that no-one has looked specifically at them given the attention the subject has garnered. So in this post we will examine directly what the individual model simulations actually show.
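As a sketch of the kind of calculation this involves (the array layout, the 8-year window and the variable names below are illustrative assumptions, not the actual analysis in the post), one can compute the distribution of short-term trends across the individual simulations and compare it with the trend of the ensemble mean:

```python
import numpy as np

def short_term_trends(ensemble, window_years=8):
    """Distribution of short-term trends across individual model runs.

    ensemble     : 2-D array of shape (n_runs, n_years) holding annual-mean
                   global temperature anomalies, one row per simulation
    window_years : length of the short trend window (e.g. 8 years)
    """
    n_runs, n_years = ensemble.shape
    years = np.arange(window_years)
    trends = []
    for run in ensemble:
        # Slide the window along each simulation and fit a linear trend to it.
        for start in range(n_years - window_years + 1):
            slope = np.polyfit(years, run[start:start + window_years], 1)[0]
            trends.append(slope)
    return np.array(trends)  # degC per year across all runs and windows

# The ensemble mean (ensemble.mean(axis=0)) gives a smooth estimate of the
# forced trend, while short_term_trends(ensemble) shows how often individual
# realisations produce flat or even negative 8-year trends despite the
# underlying warming.
```

The spread of these short-window trends is what defines the envelope of possible trajectories; the ensemble mean, by construction, averages that internal variability away.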
Back to the future
A few weeks ago I was at a meeting in Cambridge that discussed how (or whether) paleo-climate information can reduce the known uncertainties in future climate simulations.
The uncertainties in the impacts of rising greenhouse gases on multiple systems are significant: the potential impact on ENSO or the overturning circulation in the North Atlantic, probable feedbacks on atmospheric composition (CO2, CH4, N2O, aerosols), the predictability of decadal climate change, global climate sensitivity itself, and perhaps most importantly, what will happen to ice sheets and regional rainfall in a warming climate.
The reason why paleo-climate information may be key in these cases is that all of these climate components have changed in the past. If we can understand why and how those changes occurred, then that might inform our projections of changes in the future. Unfortunately, the simplest use of the record – just going back to a point that had similar conditions to what we expect for the future – doesn’t work very well because there are no good analogs for the perturbations we are making. The world has never before seen such a rapid rise in greenhouse gases with the present-day configuration of the continents and with large amounts of polar ice. So more sophisticated approaches must be developed, and this meeting was devoted to examining them.
Target CO2
What is the long term sensitivity to increasing CO2? What, indeed, does long term sensitivity even mean? Jim Hansen and some colleagues (not including me) have a preprint available that claims that it is around 6ºC based on paleo-climate evidence. Since that is significantly larger than the ‘standard’ climate sensitivity we’ve often talked about, it’s worth looking at in more detail.
Blogs and peer-review
Nature Geoscience has two commentaries this month on science blogging – one from me and another from Myles Allen (see also these blog posts on the subject). My piece tries to make the point that most of what scientists know is “tacit” (i.e. not explicitly or often written down in the technical literature) and it is that knowledge that allows them to quickly distinguish (with reasonable accuracy) what new papers are worth looking at in detail and which are not. This context is what provides RC (and other science sites) with the confidence to comment both on new scientific papers and on the media coverage they receive.
Myles’ piece stresses that criticism of papers in the peer-reviewed literature needs to be in the peer-reviewed literature and suggests that informal criticism (such as on a blog) might undermine that.
We actually agree that there is a real tension between a quick and dirty pointing out of obvious problems in a published paper (such as the Douglass et al paper last December) and doing the much more substantial work and extra analysis that would merit a peer-reviewed response. The approaches are not, however, necessarily opposed (for instance, our response to the Schwartz paper last year, which has also led to a submitted comment). But given everyone’s limited time (and the journals’ limited space), there are fewer official rebuttals submitted and published than there are actual complaints. Furthermore, it is exceedingly rare to write a formal comment on a particularly exceptional paper, with the result that complaints are more common in the peer-reviewed literature than applause. In fact, there is much to applaud in modern science, and we like to think that RC plays a positive role in highlighting some of the more important and exciting results that appear.
Myles’ piece, while ending up on a worthwhile point of discussion, illustrates it (in my opinion) with a rather misplaced example that involves RC – a post and follow-up on the Stainforth et al (2005) paper and the media coverage it got. The original post dealt in part with how the new climateprediction.net model runs affected our existing expectation for what climate sensitivity is and whether they justified a revision of any projections into the future. The second post came in the aftermath of a rather poor piece of journalism on BBC Radio 4 that implied (completely unjustifiably) that the CPDN team were deliberately misleading the public about the importance of their work. We discussed then (as we have in many other cases) whether some of the responsibility for overheated or inaccurate press actually belongs to the press release itself and whether we (as a community) could do better at providing more context in such cases. The reason why this isn’t really germane to Myles’ point is that we didn’t criticise the paper itself at all. We thought then (and think now) that the CPDN effort is extremely worthwhile and that lessons from it will be informing model simulations some time into the future. Our criticisms (such as they were) were mainly associated instead with the perception of the paper in parts of the media and wider community – something that is not at all appropriate for a peer-reviewed comment.
This isn’t the place to rehash the climate sensitivity issue (I promise a new post on that shortly), so that will be deemed off-topic. However, we’d be very interested in any comments on the fundamental issues raised – how science blogs and traditional peer review do (or should) intersect, and whether Myles’ perception that they are in conflict is widely shared.