There are two new papers in Nature this week that go right to the heart of the conversation about extreme events and their potential relationship to climate change. This is a complex issue, and one not well suited to soundbite quotes and headlines, so we’ll try to give a flavour of what the issues are and what new directions these papers are pointing towards.
Let’s start with some very basic, but oft-confused points:
- Not all extremes are the same. Discussions of ‘changes in extremes’ in general without specifying exactly what is being discussed are meaningless. A tornado is an extreme event, but one whose causes, sensitivity to change and impacts have nothing to do with those related to an ice storm, or a heat wave or cold air outbreak or a drought.
- There is no theory or result that indicates that climate change increases extremes in general. This is a corollary of the previous statement – each kind of extreme needs to be looked at specifically – and often regionally as well.
- Some extremes will become more common in future (and some less so). We will discuss the specifics below.
- Attribution of extremes is hard. There are limited observational data to start with, insufficient testing of climate model simulations of extremes, and (so far) limited assessment of model projections.
The two new papers deal with the attribution of a single flood event (Pall et al), and the attribution of increased intensity of rainfall across the Northern Hemisphere (Min et al). While these issues are linked, they are quite distinct, and the two approaches are very different too.
The aim of the Pall et al paper was to examine a specific event – floods in the UK in Oct/Nov 2000. Normally, with a single event there isn’t enough information to do any attribution, but Pall et al set up a very large ensemble of runs starting from roughly the same initial conditions to see how often the flooding event occurred. Note that flooding was defined as more than just intense rainfall – the authors tracked runoff and streamflow as part of their modelled setup. Then they repeated the same experiments with pre-industrial conditions (less CO2 and cooler temperatures). From how much more often the flooding event occurred in the present-day setup, you can estimate how much more likely the event was made by climate change. The results gave varying numbers, but in nine out of ten cases the chance increased by more than 20%, and in two out of three cases by more than 90%. This kind of fractional attribution (if an event is 50% more likely with anthropogenic effects, that implies it is 33% attributable) has also been applied to the 2003 European heatwave, and will undoubtedly be applied more often in future. One neat and interesting feature of these experiments was that they used the climateprediction.net setup to harness the power of the public’s idle screensaver time.
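To make the fractional-attribution arithmetic concrete, here is a minimal sketch of how such numbers fall out of a pair of large ensembles. Everything in it – the distributions, the threshold, the 15% shift – is invented for illustration; it is not the Pall et al code or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical ensembles of peak autumn runoff (arbitrary units):
# one under year-2000 conditions (with anthropogenic forcing), one under
# a counterfactual pre-industrial climate. All numbers are invented.
runoff_actual = 1.15 * rng.gamma(shape=2.0, scale=50.0, size=2000)
runoff_natural = rng.gamma(shape=2.0, scale=50.0, size=2000)

threshold = 300.0  # runoff level defining a "flood" (illustrative)

p1 = np.mean(runoff_actual > threshold)   # P(flood | with forcing)
p0 = np.mean(runoff_natural > threshold)  # P(flood | without forcing)

# Fraction of attributable risk: FAR = 1 - p0/p1. If an event becomes
# 50% more likely (p1 = 1.5 * p0), FAR = 1/3, i.e. 33% attributable,
# matching the arithmetic in the text above.
far = 1.0 - p0 / p1
print(f"P_natural = {p0:.3f}, P_actual = {p1:.3f}, FAR = {far:.2f}")
```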
The second paper is a more standard detection and attribution study. By looking at the signatures of climate change in precipitation intensity and comparing them to the internal variability and the observations, the researchers conclude that the probability of intense precipitation on any given day has increased by 7 percent over the last 50 years – well outside the bounds of natural variability. This result has been suggested before (e.g. in the IPCC report, following Groisman et al, 2005), but this was the first proper attribution study (as far as I know). The signal seen in the data, though, while coherent and similar to that seen in the models, was consistently larger, perhaps indicating the models are not sensitive enough, though the El Niño of 1997/8 may have had an outsize effect.
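For the curious, the core of a detection and attribution calculation of this kind is a scaling factor relating the observed record to the model-simulated forced response. A minimal sketch with synthetic numbers – plain least squares standing in for the optimal fingerprinting such studies actually use – looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 hypothetical years of an extreme-precipitation index (synthetic).
signal = 0.02 * np.arange(50)          # model-simulated forced response
sigma = 0.15                           # stand-in for internal variability
obs = 1.3 * signal + rng.normal(0, sigma, 50)   # fake "observations"

# Scaling factor beta in: obs = beta * signal + noise
beta = (signal @ obs) / (signal @ signal)

# Detection: compare beta with a null distribution built from
# noise-only (unforced) realisations of the same length.
beta_null = np.array([(signal @ rng.normal(0, sigma, 50)) /
                      (signal @ signal) for _ in range(5000)])
p_value = np.mean(np.abs(beta_null) >= abs(beta))
print(f"beta = {beta:.2f}, p-value vs internal variability = {p_value:.4f}")
# beta significantly above zero means detection; beta above one echoes
# the point above that the observed signal is larger than the modelled one.
```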
Both papers were submitted in March last year, prior to the 2010 floods in Pakistan, Australia, Brazil or the Philippines, and so did not deal with any of the data or issues associated with those floods. However, while questions of attribution come up whenever something weird happens to the weather, these papers demonstrate clearly that the instant pop-attributions we are always being asked for are just not very sensible. It takes an enormous amount of work to do these kinds of tests, and they just can’t be done instantly. As they are done more often though, we will develop a better sense for the kinds of events that we can say something about, and those we can’t.
Kevin McKinney says
Thanks for this. Attribution is obviously a big issue, and one that tends to be rather a ‘hot button.’
I should think these will help lend a little clarity.
Tom Scharf says
Why waste all the time on other super-computers, just ask Watson! It knows all.
Where is there some raw data on precipitation in the last 100 years?
I haven’t seen this information. There is always concern that the start and end points for any trend study are not appropriate (both sides are guilty of this, IMO). I have read that precipitation studies are more difficult due to sparse data, and it seems we would have seen precipitation trend graphs a lot more often by now if it were straightforward. 7% seems too large a change not to have been noted (vocally) earlier; it seems like there is more to this story.
Are there individual trends like they have for temperature stations?
Is there any publicly available database?
[Response: Monthly precip data is part of GHCN, but Pall et al used the data from Alexander et al (2006) which is available via HadEX. – gavin]
Edward Greisch says
Thank you very much. This is exactly what we needed. As you say, it is a lot of work, not an easy thing. I believe you on that. I hope the same things will be done with the 2010 floods in Pakistan, Australia, Brazil, the Philippines, the Russian drought, etcetera. Those results are what we need to tell people.
The results are still probabilities not absolutes and so will be disappointing to many people. Probabilities will have to do. They are still things we can point to when questioned. When the probabilities get added together, they make an argument that will persuade more people.
A whole bunch of big storms, floods, droughts and fires are things that can invoke the fear necessary to get action on GW. Probable attribution is so much better than where we were before.
Pete Dunkelberg says
Risk assessment: drought and flooding in the 2050s vs agriculture. As we surely all agree, you don’t remain silent about, say, an approaching asteroid (still far enough away so that a slight push could make it miss Earth) until the chance of collision is > 95 percent. How much more likely does impending climate disaster have to be than impending asteroid disaster to result, in practice, in the same level of warning?
Nagraj Adve says
Gavin, I am interested in your earlier brief comment, reported in the context of the Pakistan floods (perhaps here on RealClimate), that a different way of looking at extreme events is to ask the question thus: what is the likelihood of such events occurring had atmospheric CO2 levels remained what they were at the time of the Industrial Revolution (276 ppm) rather than what they are now (390 ppm)? Very unlikely, I think, is what you said. Has that any bearing on the above posting?
Nagraj Adve
(an activist based in Delhi)
[Response: That’s not really what I said (I presume you are referring to this New York Times interview?). We know that precipitation intensity has been increasing (the amount of rain that falls in the most intense events) across the northern hemisphere – this was clear in the literature even before the Pall et al paper. And while that doesn’t translate directly to flooding (which is a function of a lot of different factors – previous rainfall, soil moisture, water management policy, etc), it certainly plays a role. The study that Min et al did could be repeated for the Pakistan events and that would certainly be a very interesting result. – gavin]
Sean says
THX Gavin,
An aware person, not a scientist myself, just an interested long-term AGW/CC observer & hobby researcher.
RE “… though the El Niño of 1997/8 may have had an outsize effect.”
Comments like this, eg during recent Aussie weather events where I live, have confused me for some time.
My uncommon sense tells me that whatever the existing “climate drivers”, natural or additional man-made, they would sooner or later feed into these El Niño/La Niña cycles.
And yet what I often hear and read treats these El Niño/La Niña events as separate, independent “weather influences”… could you explain this to me please?
It appears I have missed something along the way that I do not quite understand as yet.
THX Sean
[Response: Our ability to attribute changes in ENSO is very poor – both as a function of insufficient historical data and poor simulations of the phenomenon in the global models. So you can’t assume that (specifically) the 1997/1998 El Niño was related to long term climate change. So if you have a data set (as here) that goes from 1950-1999, you need to be aware of the fact that the trends could be affected just by this (potentially coincidental) El Niño. Looking at their figure 2, I don’t think this would change their results significantly, but it would be interesting to do the same study using a dataset that removed the ENSO influence – it might well be clearer. – gavin]
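One standard way to construct such an ENSO-removed series – sketched here with synthetic numbers; this illustrates the general regression technique, not the method of any paper discussed – is to regress the series on an ENSO index such as Niño3.4 and analyse the residuals:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical annual series for 1950-1999: an extreme-precipitation
# index plus a made-up ENSO index standing in for observed Nino3.4.
n = 50
enso = rng.normal(0, 1, n)
enso[47] = 3.0                         # a large 1997/98-style El Nino
precip = 0.01 * np.arange(n) + 0.3 * enso + rng.normal(0, 0.2, n)

# Regress the index on ENSO and keep the residuals: this removes the
# ENSO-congruent variability while leaving the long-term trend intact.
slope = np.polyfit(enso, precip, 1)[0]
precip_no_enso = precip - slope * (enso - enso.mean())

raw = np.polyfit(np.arange(n), precip, 1)[0]
adj = np.polyfit(np.arange(n), precip_no_enso, 1)[0]
print(f"trend raw = {raw:.4f}/yr, ENSO-removed = {adj:.4f}/yr")
# With the big late-record El Nino, the raw trend is inflated relative
# to the ENSO-removed one - exactly the concern raised above.
```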
Bob (Sphaerica) says
Gavin,
Thanks for this. There’s been a lot of noise about extreme weather the past few months, and it frustrates me, because I think that too much jumping up and down about it does a disservice to the science, and to future expectations of immediate, in-your-face evidence of climate change.
I’ve tried to ameliorate this, when I could, exactly because actual attribution is so difficult to do.
Having hard, cold studies to point to, and a calm “well, let’s look at this realistically and understand our limitations” approach really helps, I think.
Bob (Sphaerica) says
2, Tom Scharf,
I get so tired of this. There are no sides. If there is any classification to be made concerning sources of information, then there are scientists who are seeking the truth, and non-scientists who are trying to prove or disprove a predetermined point. Regardless of what that point is, those people are not scientists, they’re ideologues, and they’re doing humanity a disservice.
But any scientific study you look at is not on any side. It’s just a result of doing research, trying to solve a problem, and the only objective is the correct and truthful answer, whichever “side” it happens to fall on.
Remember that real scientists will ultimately be judged in their chosen careers and lifelong endeavors by posterity. Sooner or later the truth will be known, no matter what anyone says today. Scientists have no incentive whatsoever to choose sides, and in fact have a strong incentive to avoid bias. They have every reason to be as impartial as they can.
Ron Broberg says
Gavin: Monthly precip data is part of GHCN, but Pall et al used the data from Alexander et al (2006) which is available via HadEX.
Likewise, GSOD includes precip data.
Kevin Stanley says
RE: “Some extremes will become more common in future (and some less so). We will discuss the specifics below.”
This statement caught my interest, but I didn’t get what I was hoping for…I would love to see a list, like
[Event type X] MORE likely in [regions A,B,C]
[Event type Y] MORE likely in [regions D,E,F]
[Event type Z] LESS likely in [regions G,H,I]
etc.
PS re-captcha has become really irritatingly difficult IMO
(because I’m lazy and hoping someone else will do the work of compiling such a list lol)
[Response: Table 3.8 in IPCC AR4 is a good list, and Joe Romm has a list of recent papers on the subject. – gavin]
Tim Joslin says
Gavin, I’m somewhat baffled how to interpret the findings of Pall et al. In fact I’m sceptical of the methodology, or rather the underlying concept you outline of “fractional attribution”. If a rare event X occurs only when A, B, C, D and E are true, X is 100% attributable to the coincidence, not 20% to each factor. An eclipse occurs because of the alignment of the Sun and the Moon with the observer; it’s not 50% attributable to the position (relative to the observer) of the Sun and 50% to that of the Moon!
[Response: Not relevant. The question is how much more likely a flood is compared to a previous set of circumstances. If the odds go from 1 in 100, to 4 in 100, that is 3-fold increase in likelihood, and you can attribute 75% of the events to the new circumstances (statistically, not individually). In your example there is no change in anything against which to calculate the increased odds. – gavin]
Pall et al appear to make a stronger claim than that, because of AGW, flood events in the future will be more frequent, or more severe when they occur. Rather, Pall et al suggest anthropogenic forcings (henceforth “AGW”) in some sense caused the 2000 flood event in the first place.
[Response: No. This is not the claim. Read the paper more carefully. – gavin]
There appears to be an implicit assumption, at least in the Abstract (I don’t have full access to the journal), that the floods were just as likely to occur in 2000 as in any other year. Pall et al have therefore determined only how much more likely floods were to occur in 2000 with AGW compared to without AGW. But we know the floods – a rare event, remember – occurred in 2000, and we know forcings affect the state of the climate system. It would therefore be extremely surprising if Pall et al hadn’t found floods in 2000 were more likely with the actual pattern of forcings than with a different pattern. Maybe without AGW floods would have been more likely in 1999 or 2001 or 2013 (remember we’re talking about a rare event) than with AGW.
Let me put this another way. Weather patterns over the UK, as elsewhere, depend on the ocean heat state (as well as insolation etc). A factor other than AGW affecting the ocean heat state in 2000 was the Pinatubo eruption in 1991. Take this out of the model forcings and I’ll wager floods in Oxford in 2000 become less likely than if Pinatubo were left in.
Further, I hypothesise that the pattern of change of the ocean heat state following Pinatubo (which led to the loss of a considerable amount of heat) is an underlying cause of the 2000 floods. This pattern of change clearly depends on the forcings in effect. The fact that Pinatubo + AGW (+ any other relevant factors one might come up with) increased the probability of a once in several centuries event in 2000 in particular doesn’t in itself permit statements of the form “AGW makes flood events x% more likely”. We need to look at the entire problem – all the possible flood events under different natural climate forcing scenarios – under AGW and without in order to attribute “blame”. Some flood events will be less likely with AGW than without. But probably (as other research seems to show) more possible flood events are more likely with AGW than without.
I’m not sure that Pall et al tells us more than that AGW is one of the factors determining the specific weather events that occur, i.e. what we know already, that is, that AGW affects the climate. It doesn’t in itself tell us whether flooding will be more or less frequent and/or severe with or without AGW.
[Response: I think it is clear that you would want to do more work leading on from this – and doing different test cases is certainly useful. But you are expecting a little much from this study. – gavin]
Kooiti Masuda says
> Ron Broberg (#9) says: Likewise, GSOD includes precip data.
Yes, and GSOD is valuable data for understanding the spatio-temporal distribution of precipitation in extreme cases.
But GSOD is an uncertain source for evaluating trends and anomalies of precipitation. When I compared monthly sums in the 1990s at several Southeast Asian locations with nationally archived data, I found many discrepancies.
This is mainly because of transmission failures. Though failures have become rarer recently, past observations have not been recovered.
In addition, WMO’s rules for transmission of synoptic observations were designed long before climate change became an important issue. Precipitation is an optional element. If a synoptic report was successfully transmitted and it did not include the amount of precipitation, we cannot distinguish “no precipitation” from “observation missing” for sure.
So we often need data obtained from individual countries in non-real-time ways. And often (though not always) the national governments have restrictions on redistribution of those data.
Chip Knappenberger says
Gavin,
A couple of things regarding the Min et al. paper (if you have time).
I am struggling with the interpretation of the increased trend in “percent probability.” I understand this to mean that over time, there is a tendency to move upwards (to the right) along the cumulative probability curve, let’s say, for annual extreme 1-day precipitation. In other words, there is a tendency for rarer 1-day annual extreme precipitation amounts to occur later in the temporal record. I am interpreting that to mean that there is a trend towards increasing annual 1-day extreme precipitation—but I am not sure how to quantify that change. You describe this result as “the researchers conclude that the probability of intense precipitation on any given day has increased by 7 percent over the last 50 years.” I am not 100% sure this is what the results mean. I don’t think they are assessing the probability of daily occurrence, but instead, the probability of an annual daily extreme of some magnitude. Maybe they are the same thing?
Also, I don’t understand the authors’ preference for the ANT results over the ALL results. Don’t the observations reflect ALL (i.e. they are responding to all forcings, both natural and anthropogenic)? So, I would think, that to show a positive detection of an anthropogenic influence, you would show that the ALL models fit the observations whereas NAT (i.e., natural forcing only) runs do not. It doesn’t seem to me that the ANT runs would be of much use, other than perhaps to show what the signal may look like without NAT variability. But in this sense, the ANT results wouldn’t be used for detection.
Thanks for any help in trying to straighten me out about this!
-Chip
Shan Wells says
Gavin, what do you make of the recent WSJ article claiming that Gilbert Compo’s research proves that there is no relationship between more extreme weather and AGW? It’s making the rounds on the denier sites. Might be good to address it on RealClimate.
http://online.wsj.com/article/SB10001424052748704422204576130300992126630.html
[Response: The WSJ is not a reliable source. There is no discussion of extreme events in the Compo et al paper at all. The 20C Reanalysis project is however very useful (albeit with caveats) and we’ll address that in a future post. – gavin]
Septic Matthew says
The 2010 Joint Statistical Meetings included some invited papers on the analysis of extremes relevant to analysis of climate change. Here is one of them:
http://www.amstat.org/meetings/jsm/2010/onlineprogram/index.cfm?fuseaction=abstract_details&abstractid=308388
This one attempts to define “extreme events” a priori and then model them probabilistically. I expect much more work like this in the future, but I have not seen one published in the peer-reviewed literature since R. L. Smith, “Extreme value analysis of environmental time series: An application to trend detection in ground-level ozone (with discussion)”, Statistical Science, 1989, vol 4, pp 367-393.
To define an event after it has occurred, such as the simultaneous Russian heat and cold waves in summer 2010, and then to try post hoc to develop a probability model for it that you might have had (had you thought of it in advance), is nearly always a hopeless enterprise.
Septic Matthew says
nuts, I forgot to add: this was a good essay by gavin. Good work.
Tim Joslin says
Gavin, Thanks for your responses.
The Abstract of the paper seems to be clear that we’re looking at the 2000 Oxford floods in particular, not the increased risk of floods occurring over a specific time period, say 1996-2005. I’ll look at the text when I can access it.
Pall et al is being interpreted, e.g. in the Guardian, as raising the possibility of legal action based on the supposed increased probability of the 2000 floods.
But statements of “fractional attribution” are not meaningful – or rather they reflect only the limitations of our knowledge, in this case as embodied in climate models, not “how likely an event was to occur” (this is painful to write – the event has occurred, the wave function has collapsed, it’s no longer a probability, it’s a fact!).
Let me elaborate:
(1) Statements of “fractional attribution” of meteorological events depend entirely on the characteristics of the climate models employed. Being deterministic for a second, given perfect models (and input data), the 2000 floods are 100% likely to have occurred under AGW (since we know they did, in fact, occur). Under any other scenario, such as “without AGW” they are either 0% or 100% likely. (Note we also need to define what constitutes “floods” since in every scenario a specific different set of meteorological events will have occurred). So statements of “fractional attribution” only reflect our lack of omniscience, not actual real-world probabilities.
(2) Furthermore, the floods are also either 0% or 100% likely to have occurred if we change one or more of an infinite (or at least very large) number of other factors. I mentioned Pinatubo earlier, but we could “blame” earlier volcanoes such as El Chichon, or the solar cycle, or anthropogenic factors such as historic deforestation or agricultural activity in Europe or North America or Africa, which surely affect weather patterns in England. It’s very likely – I’d say certain – that, with perfect modelling, any number of even minor counterfactuals could be shown to result in no floods in 2000, that is, by the logic being employed, the floods would be 100% attributable to such factors, compared to the counterfactuals. Even with “fractional attribution” (i.e. imperfect modelling) we’re pretty soon going to find we can attribute more than 100% of the event to various causes. With perfect modelling we’re simply going to identify large numbers of necessary causal factors.
On top of this, there’s the need to take account of the possibility that what might have happened otherwise (i.e. absent AGW) might have been just as bad, as I discussed before. I’m reminded of Stephen Fry’s Making History.
I really don’t think, unfortunately, that this nascent discipline of attributing specific climate events to AGW is going to stand up in court.
I’d advise the climate science community to stick to what we already have, i.e. statements of the form that “floods will be x% more likely”.
[Response: I cannot speak to the legal consequences of any particular method of attribution, but Myles Allen has discussed this speculatively in regards to the 2003 European heat-wave. I can’t see any reason why some use of this information would not be useful though. Since we will never be able to say with absolute 100% confidence that single effect A happened because of cause B in the real world – even if something goes from almost never happening to happening all the time – a fractional attribution makes a lot of sense. I wouldn’t be at all surprised to find that something similar was used in medical malpractice cases for instance. – gavin]
Susan Anderson says
Yup, first comment on new DotEarth is from Judith Curry!
Hank Roberts says
Gad, and JC points to the Huffington Post for coverage.
Bad source, generally. Proof, if needed, that “left” need not mean “scientific”.
http://www.google.com/search?q=huffington+post+woo+quack
Mathieu says
[Response: Not relevant. The question is how much more likely a flood is compared to a previous set of circumstances. If the odds go from 1 in 100, to 4 in 100, that is 3-fold increase in likelihood, and you can attribute 75% of the events to the new circumstances (statistically, not individually). In your example there is no change in anything against which to calculate the increased odds. – gavin]
Gavin,
Your reply to Tim is centered on the issue I have with attribution studies. I believe that I understand the method used to derive the increase in likelihood of a type of extreme event due to global warming, but the trouble begins when the results are used to say something meaningful about real extreme events, or at least when the media make something of them. I think most journalists and laypeople understand the results in some kind of twisted way similar to what Tim formulated, that is, as applicable to a particular extreme event that they’re considering, and not to the statistics of these extreme events. Taking your 75% number as an example, I think they understand something like this: “75% of the strength of this extreme event is attributable to global warming”, or “There’s a 75% chance that this event would not have occurred without global warming”.
But they’re bound to understand it this way if the event is extreme enough, because the right way to understand it often leads to no useful conclusion. Indeed, as you point out, the right way to look at it is in statistical terms, that is, to compare the statistics of the model with pre- and post-industrial-revolution conditions to the statistics of the real world, and see whether the real trend in the statistics is well reflected in the model runs, statistically speaking. Then one could conclude that the current statistics of extreme events can only be explained by human influence. Unfortunately, what extreme events tend to do for a living is to be rare. If we’re talking about a once-in-a-century event (the striking figure often used in the media), then we’ll need quite a few centuries before we can say anything meaningful about their statistics. As for the paleo data on extreme events at the scale of centuries, I guess it depends on the exact type of event, but I imagine it cannot be so comprehensive either?
To put it another way: one can compare the records of global surface temperatures to model runs with and without anthropogenic greenhouse gases, and meaningfully (with some quantifiable degree of confidence) conclude that the trend cannot be explained without the human influence. I don’t see a similar “point of contact” between models and reality as far as attribution studies of extreme events are concerned, given that what we need to compare are modeled statistics (which we can always have by making many model runs) and meaningful real statistics (which are hard to get)?
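The rarity point above can be made concrete with a toy calculation (invented numbers, not from either paper): if a 1-in-100-year event becomes a 1-in-50-year event, how often would a single 50-year record let us detect the change?

```python
import numpy as np
from scipy import stats

p_old, p_new, years = 0.01, 0.02, 50   # 1-in-100yr event doubles in frequency

rng = np.random.default_rng(3)
trials = 100_000
# Simulate 50-year event counts under the *changed* climate...
counts = rng.binomial(years, p_new, trials)
# ...and test each record against the old probability (one-sided):
# P(X >= count | p_old) under a binomial null.
pvals = stats.binom.sf(counts - 1, years, p_old)
power = np.mean(pvals < 0.05)
print(f"chance a 50-year record detects the doubling: {power:.1%}")
# Typically under 10% - which is why attribution studies pool many
# locations and event types rather than waiting centuries for
# single-site statistics.
```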
R. Gates says
Not having access to the full Min et al. study, I’d be curious to see how natural fluctuations in multi-decadal cycles such as the PDO and AMO during the time frame in question were filtered out to find the 7% attribution to AGW specifically – or is this kind of filtering even relevant in this kind of study?
Lynn Vincentnathan says
As a layperson in climatology (though not in my own field), who would be much more interested in avoiding the false negative of failing to mitigate a truly serious problem, I look at ACC this way: we live in a warmed and warming world… that is my null hypothesis. When they get to .05 on my null (95% confidence that ACC is not happening, and extreme events cannot in any way at all be attributed to ACC), then I will still continue to mitigate, because it saves me lots of money.
But I’ll be a happier camper.
Here’s how I tried to explain science to folks on my environmental studies program committee:
I like the attribution studies mentioned in this post, but the denialists seem forever stuck out on the long tail of “anything’s possible in a non-ACC world, it’s all within what’s natural.”
Hank Roberts says
Good radio science on this, from NPR’s Richard Harris:
http://www.npr.org/2011/02/16/133806402/researchers-link-extreme-rains-to-global-warming
It goes into various questions beyond these specific studies; he emphasizes that these studies are about precipitation events.
Wizbang09 says
If any of you folks are interested, I just made a site which will allow everyone to collaboratively research this very topic. A lot of the stuff I read about global warming becomes an immensely time-consuming task to verify. With claimtree.org, we can divide the work and don’t have to repeat other people’s efforts. Check it out sometime. Right now the main claims are about global warming.
Lynn Vincentnathan says
RE the possibility that negative Arctic Oscillations may also be expected to become more frequent in a warming world (see https://www.realclimate.org/index.php/archives/2010/12/cold-winter-in-a-world-of-warming ), here is an effect on Mexican agriculture: http://www.grass-roots-press.com/2011/02/12/commerce-news-mexicos-new-agricultural-crisis/
M says
“I wouldn’t be at all surprised to find that something similar was used in medical malpractice cases for instance.”
I think that there is some case history in the pollution field, actually: when multiple sources produce a particular pollutant that has been fingered as “at cause” in a sickness or mortality incident, the blame gets apportioned between the sources as some function of their contribution to the exposure. I think. It’s been a few years since I took environmental law… (if only I could remember the case name, but I can only remember a small subset, like Daubert, American Trucking, and cases by the judge “Learned Hand”, because how cool of a name is that?)
-M
Dave Andrewsa says
Interesting that Nature accompanied Richard Allan’s comment piece with a photo of floods in York in 2000.
Those floods set a record, one inch higher than the previous record, which occurred in 1625 when, IIRC, AGW was in its preconception stage.
One Anonymous Bloke says
Wizbang09 #24 Thanks, but I had a look at your website, and it is pointless, irrelevant and utterly lacking credibility. You may gather a group of like-minded individuals there, and waste one another’s time, but the real work will be done elsewhere, by others.
pete best says
Hi,
I thought it had been demonstrated that the atmosphere holds 4% more water vapour now than it did at the start of the 20th century, and that 2010 had the heaviest rainfall globally. In theory it’s going to rain more in certain places?
Tim Joslin says
Mathieu (#20) wrote something that seemingly misunderstands my position.
Mathieu, I agree with what you say in your post, but I think you need to look carefully at what Pall et al claim. They do in fact seem to be saying something analogous to your strawman example of “There’s a 75% chance that this event would not have occurred without global warming” – i.e. that they can somehow determine the increased likelihood due to AGW of an actual past extreme (once-in-some-centuries) meteorological event. I’m arguing that this is not a meaningful exercise – i.e. I’m arguing against the “twisted” position. The increased likelihood due to AGW of the particular event was in fact infinite (or 0% if it would have happened anyway); i.e. in the AGW case the event was 100% likely, since it did, in fact, happen. If they’d used perfect models (and data), all the runs “under realistic conditions” would have correctly hindcast the 2000 Oxford flood event, and all those run under a given scenario of “conditions as they might have been” would have consistently either predicted flooding that year or, more likely, predicted no flooding. Any other estimate, such as the PDF Pall et al came up with, merely reflects modelling inaccuracy, interesting though that may be to know.
OG says
One paper says:
In the future, there will be more precipitation in the high latitudes of the NH and less precipitation in the mid-latitudes.
The other paper suggests:
In the future, there will be more precipitation in the mid-latitudes and less in the high latitudes of the NH.
Which one is correct?
[Response: Since neither of these statements are to be found in the papers mentioned, your question is moot. I strongly advise reading the papers before asking questions – even the titles would have been informative. – gavin]
Edward Greisch says
Andy Revkin shows his ignorance of probability & statistics once again at:
http://dotearth.blogs.nytimes.com/2011/02/17/on-storms-warming-caveats-and-the-front-page/
It is, of course, not possible to give a laboratory course in comments to dotearth.
Didactylos says
Wizbang09: doing science by popular vote?
No.
Just no.
Hank Roberts says
For OG: the claims you copypasted are amateur opinion; just above where you found them the writer says “My dunce head reading … is- One paper says-“
Philip Machanick says
Gavin, on your response to #14: my first reaction was the same as yours, that the WSJ isn’t a reliable source. That the 2nd paragraph starts with “Some climate alarmists” is a clear giveaway. That this sort of strident propaganda can appear in a top internationally read financial paper is disappointing but not surprising.
But I went and skimmed through the paper anyway: interesting, but nothing to support what the WSJ wrote. I suspect Compo may have been taken a tad out of context, as in intending to say something like “this is new stuff we’re working on; I looked for something and didn’t find it”, rather than the definitive spin the WSJ put on it, that this is a conclusive study that shows nothing is happening. I look forward to your more detailed article on the 20C Reanalysis.
Philip Machanick says
I’ve been living in Australia since 2002, and I have found the number of once-in-a-century events surprising (hot dry weather leading to massive bush fires as well as deluges). In some areas, a once-in-a-century flood has happened twice in the same season. Brisbane’s recent floods were caused by double the rainfall of the 1974 floods, and were only less severe because a flood-mitigating dam was completed in the interim. At one point 70% of Queensland (land area 1.7 million km2) was under water. If this is what happens when rainfall is only a few per cent outside norms (assuming the southern hemisphere is not very far from the northern trend), what will happen when climate change starts getting serious?
Lloyd Smith says
Pressure to quantify and statistically determine the probability of local weather anomalies will continue to grow as population pressure mounts. Economic models notwithstanding, private companies will fund their own climate models when the cost of not doing so bumps up against profit margins. The real or imagined cost of a statistically stretched climate forecast based on regional models will soon be felt by all of us. The insurers’ motto “When in doubt, price it in” will overshadow policy and good science.
DougO says
More ponderings by an informed non-scientist:
* Assuming storms are driven by temp/pressure/energy gradients.
* If warming warms the poles relatively more than the tropics, then the horizontal temperature gradients may actually decrease.
[Response: depends on whether you look at the surface or the tropopause, and depends on whether there is any ice around. – gavin]
* If warming warms the lower atmosphere more than the upper atmosphere, then the vertical gradients will likely increase.
[Response: Depends again – the upper troposphere is predicted to warm more than the surface – at least in the tropics. It is the stratosphere that is cooling. – gavin]
* Are there compensating feedbacks that amplify or neutralize these effect?
* What are the controlling variables that determine whether dissipative structures (storms) are self-organized (cyclonic) vs random-chaotic?
* How does this play out for the different kinds of “extreme events” and their differing responses to forcings and feedbacks?
[Response: It’s far more complicated than anyone can hand-wave about. The increase in water vapour as the surface warms is key, but so might be changes in boundary layer stability, Rossby wave generation via longitudinally varying responses at the surface, and impacts of the stratosphere on the steering of the jet; and the situation is completely different again for tropical storms. – gavin]
The Elf says
Changes in ocean salinity can be a useful indicator of changes in the hydrological cycle, since the ocean integrates those changes. Those studying the subject might be interested in the following three articles.
http://www.springerlink.com/content/y39365v07026153q/
http://journals.ametsoc.org/doi/abs/10.1175/2010JCLI3377.1
http://europa.agu.org/?view=article&uri=/journals/gl/gl1018/2010GL044222/2010GL044222.xml&t=gl,2010,helm
Stephen Leahy says
The authors of the studies did stress how difficult these were to do. My article for the global news service IPS is careful to point out that these impacts result from only a fraction of the heating to come.
http://www.ipsnews.net/news.asp?idnews=54505
Titus says
About 6 years ago when I first took an interest in AGW it was all about warming and droughts. Snow and cold were a thing of the past. Now it appears that it can be attributed to pretty much everything.
All recent ‘extreme events’ on investigation appear not to be extreme when compared to known events in the past few hundred years before AGW. Snow and cold in Europe in the early 1800s, floods in Australia in the mid 1800s, etc. You can easily check these on the Web, and I’ve not found anything recent that has not occurred more often or to a greater extent in the past. Interested to hear if there are any.
Of course climate changes (by its very nature), but linking current trends to AGW appears very tenuous (even by your research), and bearing in mind the uncertainties and the media and political need for the correlation to stick, it’s very dangerous ground to attribute anything. We just do not know. This will just muddy the waters even further and cause me, and I suspect the public at large, to lose confidence in climate science.
Thomas says
Tim @11,17, and more.
I think you need to separate the butterfly effect from changes in the long-term statistics of climate. Beyond a few months, the only thing that can be said is statistical in nature. I personally should be held responsible for the 2000 flood, because in 1995 I stepped on a butterfly. If I had not, and it got to flap its wings, the chaotic sampling of the weather in fall 2000 would have been totally different, and that flood wouldn’t have happened. It is similar to being blamed by a roulette player for his losing a crucial bet because I happened to breathe out and perturbed the details of the ball’s trajectory. Weather is inherently chaotic, and trying to attribute the definite occurrence of an event to any one cause just can’t be done. Any one of a gazillion other tiny changes would have averted it (possibly causing a worse disaster somewhere else). What we are trying to establish here is proof of loaded dice, not an effect on a given roll of them.
JCH says
Titus – the addition of the Wivenhoe Dam was supposed to hold the flood level a specified number of meters below the 1974 level. It failed to do so. The Wivenhoe Dam was designed to make it unlikely it would have to make a large release of water to defend the dam wall’s structural integrity. The dam’s managers had to do just that. Try to think it through. To compare flood events, somebody who knows what they are doing has to account for all of the relevant factors that contributed to the flooding and then analyze the mitigation effect of all the existing mitigation infrastructure. Once that work is completed, it’s very possible the 2011 Brisbane flood will fully compete with recorded history.
GlenFergus says
Thanks Gavin. Having read both papers last night and struggled a bit, I must say your succinct summary is seriously impressive.
Sou says
@Titus #41 – I have read other people say that Australia has had worse floods and precipitation than have occurred this year. When investigated, it appears they are comparing one or two floods in a particular area with only one of the floods this season. I have yet to see evidence that the total sum of floods since September 2010 across the whole of Australia has ever been experienced over a similar time period in the recent past. Can you please provide a link to support your claim? Does your reference (if any) include the repeated flooding of Queensland and record rain and floods in Victoria (repeated), Tasmania, Western Australia, South Australia, the Northern Territory and NSW (i.e. in every state and territory except the tiny ACT)?
I have seen reports of extensive floods in parts of Queensland in the 1800s on the BoM site, but have not seen the reports to which you might be referring for the whole country.
(I know this isn’t relevant to the discussion, because weather from more than a century ago was responding to different climate and weather forcings than is the weather of today.)
One Anonymous Bloke says
Titus, #41: http://www.ipcc.ch/publications_and_data/ar4/wg2/en/ch1s1-3-7-5.html#1-3-8 and http://www.ipcc.ch/publications_and_data/ar4/wg2/en/ch1s1-3-1-2.html#1-3-2 show beyond any doubt that, no matter what you may have been “reading” “about 6 years ago” (bearing in mind that IPCC AR4 was based on existing resources), everything we’ve seen is within the bounds of predictions. Or, to be more accurate, stretching the bounds of predictions.
Please start your entire argument again from the beginning, this time with respect to the facts.
Titus says
Sou @45 & JCH @43 Thanks for comments.
Here’s a link to some flood history:
http://www.bom.gov.au/hydro/flood/qld/fld_history/brisbane_history.shtml
The current level got to about 4.5m. That’s less than 1974 at 5.5m, and a lot less than several in the 1800s (in fact two were almost twice as large).
I admit this is just one area, but take a look around the site and others for more examples.
Not sure of the effect of dams, but it will no doubt be an interesting and needed study for the future. My point being that extremes are pretty much the norm in celestial time frames.
Tim Joslin says
@Thomas #44:
I understand that Pall et al are trying to account for the butterfly effect; it’s just that in hindcasting a specific flood that actually happened, rather than floods in general (in contrast, seemingly, to the methodology adopted to assess the probability of European heatwaves comparable to that in 2003), we’re left with no way of separating forecast inaccuracy from real-life randomness. If we thought weather forecasting models were inaccurate solely because the weather doesn’t “know” what it’s going to do, there’d be no point in investing in bigger and bigger supercomputers, would there? I hope we’re not starting to confuse model behaviour with real-world behaviour! The fact is the 2000 Oxford floods really happened whether they were predicted by 10% of the model runs or by 100%, so they weren’t made 20% or 90% more likely because of AGW; they were infinitely more likely.
adelady says
One approach that might be fruitful is one I saw during a discussion of the Brisbane floods and the operation of the Wivenhoe dam. With modern recording equipment, it’s relatively easy to calculate the quantity of rain falling in any given period – in this case 2 or 3 days.
So this person simply asked: what if this quantity of water was larger than previous rainfall events by the 4% estimated for the increase in water vapour held in the whole of the atmosphere? When he subtracted that amount from the total rainfall, hey presto!, the Brisbane flood would either not have happened at all or would have been relatively minor. I haven’t seen anything similar for the flash flooding in the Toowoomba area.
Someone who knows what they’re doing could do some work on the basis (one that occurs to me as a naive non-scientist) that localised effects would involve more than the 4% average for the whole globe / whole atmosphere.
But it’s a good logical, rather than scientific, exercise to quantify the point where we move from 0% to 100% likelihood of never-before-seen effects from extreme weather.
Ray Ladbury says
Titus says, “All recent ‘extreme events’ on investigation appear not to be extreme when compared to known events in the past few hundred years before AGW.”
I have to say that it takes a special sort of studied obtuseness to look at all the events we are hearing about – unprecedented heat waves in Russia, flooding in Oz, S. America, etc. – and say “Meh!” I have to wonder how far you’ll take the act. When you are up to your navel in floodwaters, will you still be proclaiming it’s no big deal? But then, it is your progeny who will bear the brunt of your complacency, so why should you care?