How do we know what caused climate to change – or even if anything did?
This is a central question with respect to recent temperature trends, but of course it is much more general and applies to a whole range of climate changes over all time scales. Judging from comments we receive here and discussions elsewhere on the web, there is a fair amount of confusion about how this process works and what can (and cannot) be said with confidence. For instance, many people appear to (incorrectly) think that attribution is just based on a naive correlation of the global mean temperature, or that it is impossible to do unless a change is ‘unprecedented’ or that the answers are based on our lack of imagination about other causes.
In fact the process is more sophisticated than these misconceptions imply and I’ll go over the main issues below. But the executive summary is this:
- You can’t do attribution based only on statistics
- Attribution has nothing to do with something being “unprecedented”
- You always need a model of some sort
- The more distinct the fingerprint of a particular cause is, the easier it is to detect
Note that it helps enormously to think about attribution in contexts that don’t have anything to do with anthropogenic causes. For some reason that allows people to think a little bit more clearly about the problem.
First off, think about the difference between attribution in an observational science like climatology (or cosmology etc.) compared to a lab-based science (microbiology or materials science). In a laboratory, it’s relatively easy to demonstrate cause and effect: you set up the experiments – and if what you expect is a real phenomenon, you should be able to replicate it over and over again and get enough examples to demonstrate convincingly that a particular cause has a particular effect. Note that you can’t demonstrate that a particular effect can have only that cause, but should you see that effect in the real world and suspect that your cause is also present, then you can make a pretty good (though not 100%) case that a specific cause is to blame.
Why do you need a laboratory to do this? It is because the real world is always noisy – there is always something else going on that makes our (reductionist) theories less applicable than we’d like. Outside, we don’t get to perfectly stabilise the temperature and pressure, we don’t control the turbulence in the initial state, and we can’t shield the apparatus from cosmic rays etc. In the lab, we can do all of those things and ensure that (hopefully) we can boil the experiment down to its essentials. There is of course still ‘noise’ – imprecision in measuring instruments etc. and so you need to do it many times under slightly different conditions to be sure that your cause really does give the effect you are looking for.
The key to this kind of attribution is repetition, and this is where it should become obvious that for observational sciences, you are generally going to have to find a different way forward, since we don’t generally get to rerun the Holocene, or the Big Bang or the 20th Century (thankfully).
Repetition can be useful when you have repeating events in Nature – the ice age cycles, tides, volcanic eruptions, the seasons etc. These give you a chance to integrate over any unrelated confounding effects to get at the signal. For the impacts of volcanic eruptions in general, this has definitely been a useful technique (from Robock and Mao (1992) to Shindell et al (2004)). But many of the events that have occurred in geologic history are singular, or perhaps they’ve occurred more frequently but we only have good observations from one manifestation – the Paleocene-Eocene Thermal Maximum, the KT impact event, the 8.2 kyr event, the Little Ice Age etc. – and so another approach is required.
In the real world we attribute singular events all the time – in court cases for instance – and so we do have practical experience of this. If the evidence linking specific bank-robbers to a robbery is strong, prosecutors can get a conviction without the crimes needing to have been ‘unprecedented’, and without having to specifically prove that everyone else was innocent. What happens instead is that prosecutors (ideally) create a narrative for what they think happened (let’s call that a ‘model’ for want of a better word), work out the consequences of that narrative (the suspect should have been seen by that camera at that moment, the DNA at the scene will match a suspect’s sample, the money will be found in the freezer etc.), and then try to find those consequences in the evidence. It’s obviously important to make sure that the narrative isn’t simply a ‘just-so’ story, in which circumstances are strung together to suggest guilt but for which no further evidence can be found to back up that particular story. Indeed these narratives are much more convincing when there is ‘out of sample’ confirmation.
We can generalise this: what is required is a model of some sort that makes predictions for what should and should not have happened depending on some specific cause, combined with ‘out of sample’ validation of the model against events or phenomena that were not known about or used in the construction of the model.
Models come in many shapes and sizes. They can be statistical, empirical, physical, numerical or conceptual. Their utility is predicated on how specific they are, how clearly they distinguish their predictions from those of other models, and the avoidance of unnecessary complications (“Occam’s Razor”). If all else is equal, a more parsimonious explanation is generally preferred as a working hypothesis.
The overriding requirement however is that the model must be predictive. It can’t just be a fit to the observations. For instance, one can fit a Fourier series to a data set that is purely random, but however accurate the fit is, it won’t give good predictions. Similarly, a linear or quadratic fit to a time series can be a useful form of descriptive statistics, but without any reason to think that there is an underlying basis for such a trend, it has very little predictive value. In fact, any statistical fit to the data is necessarily trying to match observations using a mathematical constraint (i.e. trying to minimise the mean square residual, or the gradient, using sinusoids, or wavelets, etc.), and since there is no physical reason to assume that any of these constraints apply to the real world, no purely statistical approach is going to be that useful in attribution (despite it being attempted all the time).
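To make that concrete, here is a minimal sketch (purely illustrative, not from the post itself) of why a good in-sample fit says nothing about predictive skill: fit sinusoids to data that is pure noise, and the in-sample fit improves as you add terms while the out-of-sample error gets worse.

```python
# Illustrative sketch: a Fourier-style fit to pure noise can match the "data"
# arbitrarily well in-sample, yet has no predictive skill out of sample.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100)
y = rng.normal(size=t.size)              # "observations" that are pure noise
train, test = t < 70, t >= 70            # fit on the first 70 points, predict the rest

def design(t, n_harmonics):
    # design matrix: constant plus sin/cos pairs of increasing frequency
    cols = [np.ones(t.size)]
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * k * t / 70), np.cos(2 * np.pi * k * t / 70)]
    return np.column_stack(cols)

for n in (2, 10, 30):
    X = design(t, n)
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    fit = X @ beta
    rms = lambda r: np.sqrt(np.mean(r ** 2))
    print(f"{n:2d} harmonics: in-sample RMS {rms(fit[train] - y[train]):.2f}, "
          f"out-of-sample RMS {rms(fit[test] - y[test]):.2f}")
```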
To be clear, defining any externally forced climate signal as simply the linear, quadratic, polynomial or spline fit to the data is not sufficient. The corollary which defines ‘internal climate variability’ as the residual from that fit doesn’t work either.
So what can you do? The first thing is to get away from the idea that you can only use single-valued metrics like the global mean temperature. We have much more information than that – patterns of changes across the surface, through the vertical extent of the atmosphere, and in the oceans. Complex spatial fingerprints of change can do a much better job at discriminating between competing hypotheses than simple multiple linear regression with a single time series. For instance, a big difference between solar-forced changes and those driven by CO2 is that the stratosphere changes in tandem with the lower atmosphere for solar changes, but the two are opposed for CO2-driven change. Aerosol changes often have specific regional patterns of change that can be distinguished from the changes due to well-mixed greenhouse gases.
The expected patterns for any particular driver (the ‘fingerprints’) can be estimated from a climate model, or even a suite of climate models with the differences between them serving as an estimate of the structural uncertainty. If these patterns are robust, then one can have confidence that they are a good reflection of the underlying assumptions that went into building the models. Given these fingerprints for multiple hypothesised drivers (solar, aerosols, land-use/land cover change, greenhouse gases etc.), we can then examine the real world to see if the changes we see can be explained by a combination of them. One important point to note is that it is easy to account for some model imperfections – for instance, if the solar pattern is underestimated in strength we can test for whether a multiplicative factor would improve the match. We can also apply some independent tests on the models to try and make sure that only the ‘good’ ones are used, or at least demonstrate that the conclusions are not sensitive to those choices.
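The core of that procedure can be caricatured as a regression of the observed pattern onto the model-derived fingerprints. Real detection-and-attribution studies use ‘optimal fingerprinting’ with noise covariances estimated from control runs, but the toy sketch below (all inputs invented) shows the basic idea of estimating a scaling factor for each hypothesised driver:

```python
# Toy fingerprint regression (illustrative only): regress an "observed" pattern
# onto model response patterns to estimate how much of each driver is needed.
import numpy as np

rng = np.random.default_rng(1)
npts = 500                               # e.g. grid cells x time slices, flattened

# hypothetical model fingerprints for three drivers
ghg, solar, aero = (rng.normal(size=npts) for _ in range(3))

# synthetic "observations": mostly the GHG pattern, a little solar, plus noise
obs = 1.0 * ghg + 0.2 * solar + 0.0 * aero + 0.5 * rng.normal(size=npts)

F = np.column_stack([ghg, solar, aero])
beta, *_ = np.linalg.lstsq(F, obs, rcond=None)
print("estimated scaling factors (GHG, solar, aerosol):", np.round(beta, 2))
```

A scaling factor consistent with 1 (and inconsistent with 0, given the spread expected from internal variability) is what ‘detection’ of that fingerprint amounts to, and a factor different from 1 is exactly the kind of amplitude error noted above that can be tolerated.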
These techniques, of course, make some assumptions. Firstly, that the spatio-temporal pattern associated with a particular forcing is reasonably accurate (though the magnitude of the pattern can be too large or too small without causing a problem). To a large extent this is the case – the stratospheric cooling/tropospheric warming pattern associated with CO2 increases is well understood, as are the qualitative land vs. ocean, Northern vs. Southern Hemisphere, and Arctic amplification features. The exact degree of polar amplification is admittedly quite uncertain, but since this affects all the response patterns it is not a crucial factor. More problematic are results indicating that specific forcings might affect existing regional patterns of variability, like the Arctic Oscillation or El Niño. In those cases, clearly distinguishing internal natural variability from the forced change is more difficult.
In all of the above, estimates are required of the magnitude and patterns of internal variability. These can be derived from model simulations (for instance from their pre-industrial control runs with no forcings), or estimated from the observational record. The latter is problematic because there is no ‘clean’ period when only internal variability was operating – volcanoes, solar variability etc. have been affecting the record even prior to the 20th Century. Thus the most straightforward estimates come from the GCMs. Each model has a different expression of the internal variability – some have too much ENSO activity while others have too little, and the characteristic timescale of multi-decadal variability in the North Atlantic might vary from 20 to 60 years, for instance. Conclusions about the magnitude of the forced changes need to be robust to these different estimates.
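A minimal sketch of how such a null distribution might be built in practice: slice a long unforced control run into segments the same length as the observed period, compute the statistic of interest (here a linear trend) in each, and ask how often unforced variability alone produces something as large as what was observed. The ‘control run’ below is synthetic red noise and all numbers are assumed, purely for illustration.

```python
# Sketch: distribution of unforced trends from a (synthetic) control run.
import numpy as np

rng = np.random.default_rng(2)
ctrl = np.zeros(1000)                        # stand-in for a 1000-yr control run
for i in range(1, ctrl.size):
    ctrl[i] = 0.7 * ctrl[i - 1] + rng.normal(scale=0.1)   # red noise

window = 50                                  # length of the "observed" period (yr)
years = np.arange(window)
trends = np.array([np.polyfit(years, ctrl[i:i + window], 1)[0]
                   for i in range(ctrl.size - window)])

obs_trend = 0.017                            # hypothetical observed trend, degC/yr
frac = np.mean(np.abs(trends) >= obs_trend)
print(f"fraction of unforced {window}-yr segments with |trend| >= {obs_trend}: {frac:.3f}")
```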
So how might this work in practice? Take the impact of the Pinatubo eruption in 1991. Examination of the temperature record over this period shows a slight cooling, peaking in 1992-1993, but these temperatures were certainly not ‘unprecedented’, nor did they exceed the bounds of observed variability, yet it is well accepted that the cooling was attributable to the eruption. Why? First off, there was a well-observed change in the atmospheric composition (a layer of sulphate aerosols in the lower stratosphere). Models ranging from 1-dimensional radiative transfer models to full GCMs all suggest that these aerosols were sufficient to alter the planetary energy balance and cause global cooling in the annual mean surface temperatures. They also suggest that there would be complex spatial patterns of response – local warming in the lower stratosphere, increases in reflected solar radiation, decreases in outgoing longwave radiation, dynamical changes in the northern hemisphere winter circulation, decreases in tropical precipitation etc. These changes were observed in the real world too, and with very similar magnitudes to those predicted. Indeed many of these changes were predicted by GCMs before they were observed.
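As a rough illustration of the first link in that chain (aerosol forcing to global mean cooling), a zero-dimensional energy-balance model driven by a Pinatubo-like forcing pulse produces cooling of a few tenths of a degree peaking a year or two after the eruption. The forcing shape and parameter values below are invented round numbers, not the published estimates:

```python
# Toy energy-balance response to a Pinatubo-like forcing pulse:
#   C dT/dt = F(t) - lambda * T      (all parameter values assumed)
import numpy as np

SEC_PER_YR = 3.15e7
years = np.arange(0, 10, 1 / 12)             # monthly steps over a decade
F = -3.0 * np.exp(-years / 1.0)              # W/m2: ~-3 peak, ~1-yr e-folding decay
lam = 1.2                                    # W/m2/K climate feedback parameter
C = 8.0 * SEC_PER_YR                         # J/m2/K, roughly a 60 m ocean mixed layer

dt = (1 / 12) * SEC_PER_YR                   # one month in seconds
T = np.zeros_like(years)
for i in range(1, years.size):
    T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / C

print(f"peak cooling: {T.min():.2f} degC, about {years[np.argmin(T)]:.1f} yr after eruption")
```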
I’ll leave it as an exercise for the reader to apply the same reasoning to the changes related to increasing greenhouse gases, but for those interested the relevant chapter in the IPCC report is well worth reading, as are a couple of recent papers by Santer and colleagues.
John Mashey says
Good post. Hopefully, people might be able to actually stick to this topic, this time.
Rod B says
I haven’t read further, but have an early question. While statistics can’t prove/disprove attribution, does it nevertheless provide supporting clues one way or the other? If so, to what degree of credibility? Barton Paul Levenson among others has shown considerable work here; doesn’t it have some value more than zero?
thomas hine says
This is a very good blog article. Indeed, since you are outside of the lab and with all of the variables mentioned, can anything really be said with 90% confidence, or stated as “very very likely”?
[Response: It depends on the size of the signal compared to the noise, and to the distinctiveness of the fingerprint. These things can be characterised, and so, yes, you can make Bayesian statements about the likelihood. – gavin]
How is inertia of the system accounted for in these models? Only as negative feedbacks? I.e., I concede that we have perhaps reached the second blade on the “stick” with the last few months’ temps, but looking at the IPCC AR4 model scenario A1B(21) model runs, which are based on current emissions and approaches, I don’t see how these can not already be discounted. I don’t want to rehash the ten-year no-temp-change stuff, but it is assumed that there is a baseline/human-caused temp change of 0.07 deg C per decade inherent in our last century temp record. The IPCC AR4 A1B models show that this trend will increase by 100% up to 700% in order to reach low- and high-end model predictions by the 2090-2100 decade. O.K. – but with each decade of “inertia”, as we may(?) characterize the last one 2000-2010(?), it will take an enormous leapfrog to get back on track with the models. This is where perhaps “suspension of belief” takes place, as what we would need to witness in the climate would be much more drastic/frightening than anything imagined over such a short period of time, but alas more and more improbable from a physical stance. You may believe this can happen. But I’m not so sure – and I don’t think you could place a confidence of anywhere over 50% of even reaching the low-end model predictions (of 100% increase in the inherent decadal temp change).
[Response: Your characterisation of the model trends over the last ten years is not accurate. – gavin]
Scott A. Mandia says
Thanks, Gavin. Unfortunately, those that think “models, shmodels” will not change their minds but perhaps those with an open mind can learn from the discussion. I will now add this as a link on my Climate Models & Accuracy page. :)
Scott A. Mandia, Professor of Physical Sciences
Selden, NY
Global Warming: Man or Myth?
My Global Warming Blog
Twitter: AGW_Prof
“Global Warming Fact of the Day” Facebook Group
Rod B says
Gavin, well explained from my perspective.
Rod B says
John Mashey, dream on! ;-)
Completely Fed Up says
Well Gavin understood.
Thanks.
richard pauli says
Thanks Gavin for presenting superb analysis of science. It is a brave and necessary lecture to give in the middle of a crisis. Although we brandish science and engineering tools, we face the political reality of feeble and failed political will. Our political and economic models of living no longer sync up with the science models.
As you say, models must be predictive. So once we introduce anthropogenesis, then we have to examine the social traits, scientific capacity and history of human behavior – and all the emotional and political interactions with our environment. Quite a messy calculation. When we feel the pain of catastrophic events that were directly predicted – such as continued sea level rise, heat waves, etc. – then our predictions prove correct for the human model as well. So combining the human unwillingness to change with our infinite capacity for stupidity (Einstein), it is easy to hypothesize that AGW will be so excessive as to doom our species. The operation was a success, but the patient dies. I don’t want to reach that conclusion. Unfortunately, all our denials and contorted science cannot seem to break such a consideration.
I must call attention to the great new book “Merchants of Doubt” by Oreskes and Conway. Just received it: http://www.amazon.com/Merchants-Doubt-Handful-Scientists-Obscured/dp/1596916109/ref=sr_1_1?ie=UTF8&s=books&qid=1274887404&sr=1-1
John P. Reisman (OSS Foundation) says
Gavin
I’ve been on a few roller coasters and this one was one of the best. You can actually feel the ratcheting of the chain as you go up the hill, the levelling off with the panoramic views, and the swooping feel as you drop into the next reality of the drop and subsequent turns. . . then up again, another high hill and plateau, more drops and fast turns. Then at the end, the brakes get hold of the car and you feel it bringing you to a stop just as you hit the GCMs, whew.
Great ride!!!
—
A Climate Minute The Greenhouse Effect – History of Climate Science – Arctic Ice Melt
‘Fee & Dividend’ Our best chance for a better future – climatelobby.com
Learn the Issue & Sign the Petition
Completely Fed Up says
A note about the lab work: if you’ve read Feynman’s (second?) autobiography “Surely You’re Joking, Mr. Feynman”, there’s the story about how he did an experiment to work out what cues rats were using to remember routes around a maze.
Without that, your experiment could be checking how well rats remembered the last trip, not what you think you’re testing.
Even labs have their sources of “noise”.
Completely Fed Up says
Cross posted because it was in response to another thread comment, but places well here:
“It isn’t a question of “natural vs anthropogenic”, it’s a question of “how much” of each, i.e., it’s an attribution problem ”
I wonder since attribution often gets used as an absolute (X is attributed to Y, therefore X was caused solely by Y), could we use “apportioned” instead?
It’s pretty hard to turn that into an absolute.
Or at least assert clearly that this attribution is an apportioning of effects. Something like that.
Hank Roberts says
Gavin linked above to Ch. 9 of the fourth IPCC report and to the two Santer et al papers as basic reading (agreed!). The second paper says: “… an anthropogenic water vapor fingerprint …. is both robust to current model uncertainties and dissimilar to the dominant noise patterns.”
That’s the kind of foundation needed to begin formal attribution, I think?
In the last Report Summary for Policymakers, table SPM2 summarized then recent trends; footnote “f” there flags a few of those as “…. Attribution for these phenomena based on expert judgement rather than formal attribution studies.” (This is a summary table in a summary document–each item is extensively discussed in the actual Report.)
In which areas is attribution improved since the last Report? What issues are new for which attribution will be discussed? (There are some answers on this published already.)
http://www.google.com/search?q=site%3Aipcc.ch+attribution+AR5
Ray Ladbury says
Rod B., The statistics come in when you consider errors, and your error model tells you how likely you are to be wrong.
Thomas Hine,
You most certainly can attribute cause with 90-95% or even 99.9% confidence. It all depends on how strong the signature of the cause is in the evidence and how the errors make things fuzzy. CO2, as a well mixed, long-lived greenhouse gas, sticks out like a sore thumb.
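To put a toy number on that point, compare the likelihood of an observed warming trend under an ‘unforced’ hypothesis (expected trend zero, scatter set by internal variability) against a ‘forced’ hypothesis with a given expected trend. Every number below is invented for illustration; the point is just that the likelihood ratio, and hence the confidence, grows rapidly as the expected signal gets large relative to the noise:

```python
# Toy likelihood-ratio sketch: confidence grows with signal-to-noise.
from math import exp, pi, sqrt

def gauss(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

obs_trend = 0.17        # degC/decade, hypothetical observation
sigma_int = 0.06        # spread of unforced trends (e.g. from control runs), assumed

for expected in (0.05, 0.10, 0.17):
    ratio = gauss(obs_trend, expected, sigma_int) / gauss(obs_trend, 0.0, sigma_int)
    print(f"expected forced trend {expected:.2f} degC/decade: "
          f"likelihood ratio vs. no forcing = {ratio:.1f}")
```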
Hank Roberts says
PS, Gavin, the AR5 outline here (a brief 2 pages) separates the three Working Groups, which might be helpful. I’m assuming your topic is primarily about WG1 issues? These three get very muddled by people who don’t understand the difference between WG1, 2, and 3:
http://www.ipcc.ch/pdf/ar5/ar5-leaflet.pdf
David Davidovics says
What defines a robust fingerprint is not spelled out very specifically here. It would have been nice to have some more specifics on that instead of just general ideas and theory.
John E. Pearson says
great post Gavin.
Jim Eager says
Gavin, I have a question about factors thought to be playing a role in the current warming resurgence of the past 4 to 12 months.
The two factors that are most cited are the moderate El Nino, now waning, and the tardy but now building solar cycle 24.
I’m wondering about a possible third factor: a significant decrease in industrial aerosols due to the prolonged global recession, which would mean a drop in aerosol damping, unmasking part of the greenhouse forcing that was there all along, similar to what happened after clean air legislation came into force in the 1970s, or to what happened as stratospheric aerosols from Pinatubo declined naturally.
Is anyone looking at this as a serious factor?
Doug Bostrom says
Marvelous “exploded view” of attribution. Huzzah!
Kevin McKinney says
Once again, thanks for an illuminating post!
I’ve bookmarked the Hansen ’92 abstract–BPL might want that one for his “model predictions” collection, if it’s not already in there.
jyyh says
What an excellent and concise summary of many aspects also applied in other than climate sciences. Thank you.
John P. Reisman (OSS Foundation) says
Okay, you inspired me to finish my attribution page:
http://www.ossfoundation.us/projects/environment/global-warming/attribution
As always, if anyone finds a relevant mistake, please let me know and I will clean it up if applicable.
Lichanos says
In the real world we attribute singular events all the time – in court cases for instance – and so we do have practical experience of this.
I am glad that you brought up this argument because I have been thinking about it for a long time. I heard a lawyer discussing the O.J. Simpson trial – before the verdict – say that “circumstantial” cases were often more powerful than those based on direct testimony by witnesses. That’s because the prosecutor constructs a logical narrative that ties together the bits of evidence into a coherent whole that is convincing and probable.
This is certainly a good way to understand the world, at least some of the time, but it seems that in your post it simply gets you off the hook. This is because of the nature of your argument about models and prosecutorial narratives.
The difference between what goes on in a court and what goes on in science, I believe, is that in court, we all start from a belief that we basically understand the ‘system.’ We understand how the minds of people work [not guilty by reason of insanity is a way out here…], what motivates them, how they act, and the basic limits imposed on them by the plain facts of reality, e.g., you can’t be in two places at once. You can try to argue that the same exists in science, e.g. we all accept Newton’s laws, the Conservation of Matter, etc. etc., but I don’t think that is at all analogous.
AGW posits small and precisely calibrated changes as the result of very complex interactions. The ‘narrative’ tries to tie up the mounds of circumstantial evidence that are consistent with the hypothesis into an explanation that presents logical necessity. That is, it tries to show that not only are the supposed bits of evidence [some are controversial in themselves] consistent with the hypothesis, but that they demonstrate the superior plausibility of the hypothesis.
Unfortunately, given the scale of the system, and the sensitivity of it, we cannot claim to have the same understanding of Nature as we claim for human motivation. Murder trials are not very abstract. White collar prosecutions are, and herein lies their weakness… Your analogy is pretty weak.
This is why the endless statements about consilience, convergence, etc. leave me cold. The degree to which models have predicted events that came to pass is always open to discussion – the tests are never yes/no, on/off. They don’t pass muster as predictive tools, only as aids to understanding system dynamics.
Jacob Mack says
Gavin great post. Thanks for clearing up some confusion.
John P. Reisman (OSS Foundation) says
#17 Jim Eager
I am also curious about that. I asked a few friends about it at the very beginning of the event back in late 2008.
http://www.ossfoundation.us/projects/environment/global-warming/myths/images/greenhouse-gases/globalco2emission.png/view
A quantifiable reduction in aerosols could add to attribution confirmation.
Doug Bostrom says
Lichanos:
Start here: Spencer Weart’s Discovery of Global Warming.
Once you’ve digested Weart’s enjoyable, information-packed and duly critical narrative you’ll better be able to perceive the difference between televised helicopter chases and climate research.
Len Ornstein says
Gavin:
As great as your post is, I’m troubled by a few of your comments – and what they probably ‘hide’:
One of the biggest problems with the ‘inappropriate’ influence of good science on public policy is the general lack of appreciation that, at best, science can only provide increasing confidence in ‘models’ – but never ‘absolute’ confidence.
This is compounded by the large MINORITY of scientists who are “Platonists” – who believe that science can achieve absolute truths about reality, like that of much of mathematics (despite Gödel); and by the very poor distinction that’s made, in the education of the public, between the ‘truths’ of deductive reason vs inductive reason.
So when careful scientists couch conclusions with weasel words, many simply dismiss them as spineless dweebs, who deserve little attention!
When you seem to ‘disparage’ statistics – especially as if Bayesian statistics is all there is – you SEEM to imply that things like confidence intervals are unnecessary baggage for scientific prediction/projection – even though that certainly misrepresents your position ;-)
Science/Statistics still lack robust procedures for combining joint levels of confidence in multiple, only slightly related data sets – to provide measures of increased confidence. So the joint judgement of those who best understand the data sets (experts) presently must serve as our ‘best’ measure for guiding public policy.
This lesson is poorly understood by the public and most of their leaders.
Unfortunately, by default, you haven’t helped clarify this issue with this post.
[Response: I don’t see how I am disparaging statistics and dismissing confidence intervals simply by pointing out that putting a linear trend through some data does not – in itself – prove that the trend is caused by something. The pattern matching that attribution is based on obviously involves statistics *in combination* with physically plausible models. – gavin]
Lichanos says
@Doug Bostrom:
Thanks for the pointer, Doug, and I won’t take it amiss that you obviously assume I am brain dead. I have already read a good deal of Weart’s book, and I think he does a marvelous job of presenting the history of the scientific investigation of AGW. He is also very dismissive of critiques of the theory, tending to deal with them by saying, “this was resolved,” or “experts now agree…” Similarly, he has very strange views on the IPCC, which, for reasons he doesn’t make clear, he seems to regard as almost messianic in its ability to resolve nagging issues of attribution.
Kevin McKinney says
#22–Maybe it’s just my lack of street smarts, but I actually don’t agree that our understanding of human motivation is better than our understanding of Nature. I’d argue that human motivations are often not obviously subject to “forcing” and seem to display very large “internal variability.”
Frank Giger says
OMG, CFU (#11) and I are in complete agreement!
There is hope for peace in our time.
:)
Jerry Steffens says
#15
“Would have been nice to have some more specifics on that instead of just general ideas and theory.”
That’s what the scientific literature is all about.
Dig into it and you’ll find all the specifics you could possibly want!
(You might start with the three references Gavin gave.)
Lichanos says
@28 Kevin McKinney:
Well, when you are put on a jury, the assumption is that you can think reasonably, and that reasonable people know why people do things, what is a likely motivation for a crime, etc. If a prosecutor tries to convict you of murder, saying you were enraged because your lottery ticket didn’t win, that would be a tough sell, right?
I don’t think law functions the way science does, the standards of proof are way different, the assumptions about the need for control, to which Gavin alludes, are not at all alike. That was my point.
Doug Bostrom says
Lichanos says: 26 May 2010 at 2:18 PM
I don’t think you’re brain dead, not at all since you’re not bad with writing, but your perception of slant or bias in Weart’s writing leads me to believe you’re bringing a bias of your own that’s not helping you. But we can’t resolve that here so I’ll drop it, since otherwise I’ll help commit yet another thread to pointless destruction. Last word goes to you.
Edward Greisch says
There are 3 kinds of models: physical mechanisms, theoretical [mathematical] models and computer simulations. We have all 3 for the climate. They all agree. The mechanisms have been tested an enormous number of times by many people. There is no problem with the science. The problem is with the several kinds of people we are dealing with: The untrained, the ignorant, the not quite bright, the paranoid, those who have a financial reason for denial, those who actually believe in something unscientific or delusional and I may have missed some. That covers 99% of all people. Overcoming all of these problems should not be in the jurisdiction of scientists because it is a way-beyond-Herculean task. An absolute dictator could just ignore 99% of the people. We don’t have that authority.
So don’t blame yourself. It is not RC that failed, it is the species Homo “Sapiens” that is not ready to handle the situation. But we can’t give up. We have to find another strategy. The other obvious strategies also require authority or money or power that we don’t have. So we have to find a way to get money or power or authority that will change the situation. A change in mode of thought is called for.
Hank Roberts says
Lichanos, you’re misreading/misinterpreting one phrase Weart uses twice. The other one you quote isn’t found.
Look at the two places in his site where he wrote “experts now agree” — these words are not what you think:
http://www.aip.org/servlet/SearchClimate?collection=CLIMATE&queryText=experts+now+agree
That’s found twice; neither one meaning what you say it does.
Look for “this was resolved” —
http://www.aip.org/servlet/SearchClimate?collection=CLIMATE&queryText=this+was+resolved
How much have you read first hand, and how much are you relying on someone else’s opinion about what Weart wrote? Remember the value of citing sources and searching for what someone actually says, read it in context, and read the footnotes cited.
CTG says
Interestingly, the graph that Easterbrook faked was an attempt to show that modern warming is not unprecedented, and therefore can’t be caused by CO2.
So not only did he have to fake the graph (by moving modern temperatures lower to make past temperatures look higher), but it was a pointless exercise in any case.
It will be interesting to see how Easterbrook’s fellow Heartland presenters react to his fraud. Do they stand by him, and become complicit in his fraud, or do they dump him and try to pretend they still have some integrity?
Anyway, excellent post, Gavin. The difference between real science and the garbage that the skeptics produce has never been clearer.
Lichanos says
@32 Doug Bostrom:
Last word goes to you.
Very gracious of you, thanks.
I don’t deny having my own point of view, bias, and so does Weart. The title of his book makes that clear; it’s a bit triumphalist.
Your comment that my bias “is not helping me,” is sort of amusing. Calls to mind a lot of sci-fi and Twilight Zone plots. My favorite is H.G.Wells’ story about the sighted man in the valley of the blind. Eventually, locals decided to subject him to an operation to remove his eyeballs since they were obviously not helping him, but were causing him to believe in all sorts of crazy things.
This is just a starting point, probably best not to try and finish here. I will close by saying that one must be ever on guard against one’s own biases and passions, not only those of others.
John P. Reisman (OSS Foundation) says
#22 Lichanos
au contraire; models can be wonderfully predictive tools. The predictive quality varies of course, and perfection in modeling is not possible, but that does not mean models are not useful.
And not just climate models!!! Aircraft models, models of bridges, models of buildings, economic models including resource economics that examine demand in various sectors versus availability and distribution capacity for things like iron, copper, uranium, oil, wheat, barley, corn et cetera, et cetera, et cetera
Don’t get caught in the trap that because not everything is knowable with perfect accuracy, the human race has no capacity to predict or understand.
Let not the lack of perspicacity in the world in general preclude your own capability of understanding what is truly and easily understandable such as the predictive ability of a well constructed model.
#27 Lichanos
Context is key. Many things are highly resolved and scientifically that translates to resolved in common speak; as in relatively certain, or virtually certain. Or pretty darn certain.
When you have achieved understanding of the relevant contexts you will understand what that means.
As an example: It is safe to say that the change in the climate path is certainly human caused. You can throw virtually certain on that if you wish, but it’s a good bet at 99.99% odds, though scientifically, I don’t think we are beyond around 95% at this time.
The path change is pretty clear:
http://www.ossfoundation.us/projects/environment/global-warming/attribution/image/image_view_fullscreen
[I just added the image to the page. If you get a 502 error, the site will reboot itself in 5 minutes]
Lichanos says
@34 Hank Roberts:
I quoted from memory. Obviously, I was imprecise. I was giving my opinion, not submitting a review for publication. I have read his book carefully. Your response is really quite ill suited to the nature of my comment, which was in itself simply a response to another comment. Perhaps I will post at my blog and list detailed citations there, but that takes time, and I have other things to do.
Why not just review what Weart says in response to critical arguments yourself and try and see it from the point of view of someone who needs to be convinced? That would be more constructive.
Ray Ladbury says
Lichanos, It would appear that you have not been looking very hard at the evidence. A reasonable place to start is here:
http://www.bartonpaullevenson.com/ModelsReliable.html
At least 6-7 of the trends cited by BPL provide very strong evidence that the models are on the right track.
Another place you should look is here:
http://agwobserver.wordpress.com/2009/11/05/papers-on-climate-sensitivity-estimates/
There are about a dozen independent lines of evidence that all favor 3 degrees per doubling of CO2, and preclude less than 2 degrees per doubling. Now what do you think are the chances of all that agreement being spurious?
Of course, all this presumes you are actually interested in understanding the science.
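A back-of-envelope version of Ray's "spurious agreement" point, with both inputs purely assumed: if each genuinely independent line of evidence had, say, a 30% chance of landing in the favored range just by luck, the chance that a dozen of them all do so is vanishingly small.

```python
# Back-of-envelope: probability that N independent, uninformative estimates
# all land in the same range by luck (both numbers assumed for illustration).
p_by_luck = 0.3      # chance a single worthless estimate hits the range anyway
n_lines = 12         # "about a dozen" independent lines of evidence

print(f"chance all {n_lines} agree by luck: {p_by_luck ** n_lines:.1e}")   # ~5e-7
```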
Jacob Mack says
Frank Giger # 29 as I am in complete agreement with CFU in # 11:)
wili says
I would like to ask a question about attribution from the opposite direction from the Pinatubo example given in the article. Early in the article, one of the bullet points is:
“Attribution has nothing to do with something being “unprecedented””
Yet, it seems to me I have heard or read (sorry about the vagueness–I’ll try to track something specific down) climatologists say that the extreme and deadly European heat wave in ’03 could be at least partly attributable to the effects of GW, partly because it was so far out on the probability curve.
Were they wrong? Am I missing something? Is this a different kind of accountability?
OK, here’s a quote from an abstract:
“It is an ill-posed question whether the 2003 heatwave was caused, in a simple deterministic sense, by a modification of the external influences on climate, for example increasing concentrations of greenhouse gases in the atmosphere, because almost any such weather event might have occurred by chance in an unmodified climate. However, it is possible to estimate by how much human activities may have increased the risk of the occurrence of such a heatwave.”
http://www-atm.physics.ox.ac.uk/main/Science/posters2005/2005ds.pdf
Is this the proper distinction to be made? If so, I’m afraid most laymen will see this as just too hair-splitting of a distinction to bother thinking about.
Hank Roberts says
Weart summarizes. That’s a history, and ends somewhat before the present date. The material he describes isn’t contentious. You can look the sources up to see if anyone’s publishing anything new on each subject. He footnotes his sources, with clickable links. He invites further questions from readers. He participates in these online forums. It’s enough.
wili says
Sorry to not add this in my earlier post–the concluding sentence of the abstract says:
“Using a threshold for mean summer temperature that was exceeded in 2003, but in no other year since the start of the instrumental record in 1851, we estimate it is very likely (confidence level >90%) that human influence has at least doubled the risk of a heatwave exceeding this threshold magnitude.”
[Response: My point was not to claim that nothing is ever unprecedented. Some things clearly are – the polar ozone hole for instance was caused by the breakdown of chemicals that, until about a century before, had never existed on Earth. The KT impact had (perhaps) unprecedented impacts – and obviously, if something is a unique event and there is a unique cause that could have produced those impacts, attribution is easier. However, it isn’t necessary for this to be the case for attribution to be made. You don’t need to kill more people than Genghis Khan to be found guilty of a single murder. In the 2003 heatwave case, the authors are trying for something a little more subtle – a probabilistic partial attribution for singular events. If an event can be shown to be twice as common under a new circumstance than it was previously, then it might make sense to attribute half of the blame for a single occurrence to the new circumstance, even while the 100% increase in frequency is attributable entirely to the new circumstance. – gavin]
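Gavin's last point is essentially the ‘fraction of attributable risk’ framing used in this kind of heatwave study: if a forcing makes an event k times more likely, a fraction 1 − 1/k of the risk of any single occurrence can be attributed to it. A one-line sketch with made-up probabilities:

```python
# Fraction of attributable risk: FAR = 1 - p_natural / p_forced
def fraction_attributable_risk(p_natural, p_forced):
    return 1.0 - p_natural / p_forced

# e.g. a heatwave with a 1-in-500 chance per year without the forcing and
# a 1-in-250 chance with it ("at least doubled the risk") -> FAR = 0.5
print(fraction_attributable_risk(1 / 500, 1 / 250))
```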
John P. Reisman (OSS Foundation) says
#36 #38 Lichanos
I’m always skeptical. But denialism and skepticism are two different animals.
Science is not science fiction. So it really does not matter how many ‘Twilight Zone’ episodes you have watched. In a little place I like to refer to as ‘reality’, Weart is pretty spot on.
Confirmation bias is an issue typically with a hypothesis. Take a look at Svensmark
http://www.ossfoundation.us/projects/environment/global-warming/myths/henrik-svensmark
and his assertions regarding GCR’s.
or Richard Lindzen
http://www.ossfoundation.us/projects/environment/global-warming/myths/richard-lindzen
and his assertions regarding the ‘Iris Effect’.
Yes, confirmation bias can be a problem. But there is a difference between a hypothesis and well established science:
http://www.ossfoundation.us/projects/environment/global-warming/summary-docs/leading-edge/2010/2010-may-the-leading-edge
Lichanos says
@ 37John P. Reisman:
au contraire; models can be wonderfully predictive tools. The predictive quality can vary of course but perfection in modeling is not possible but that does not mean they are not useful. And not just climate models!!! … et cetera Don’t get caught in the trap that because not everything is knowable with perfect accuracy, the human race has no capacity to predict or understand.
A wonderfully enthusiastic and vague rejoinder. I am an engineer, and I worked for many years with a firm that made its name doing computer simulations of very large natural waterbodies, e.g., the NY Bight. I am well aware of the uses of modeling.
The predictive quality can vary of course…
There’s the rub. This statement can hide a multitude of sins!
…perfection in modeling is not possible but that does not mean they are not useful.
Being useful does not entail that they are good predictors, or that they are always good predictors.
…the trap that because not everything is knowable with perfect accuracy, the human race has no capacity to predict or understand.
I have argued against this form of philosophical skepticism for years. The fact that we can’t ‘know’ with perfect ‘certainty’ (whatever those words mean!) doesn’t mean we know nothing. It also doesn’t mean we know what you seem to think we know about AGW.
Jacob Mack says
Hank Roberts # 41: absolutely. Weart is highly esteemed by all of his colleagues due to his detailed historical accounts (well referenced) and knowledge of physics. Nothing he states in his work is controversial. Reading Weart dramatically helps to put many aspects of climate science in perspective.
Lichanos says
@39 Ray Ladbury:
Now what do you think are the chances of all that agreement being spurious? Of course, all this presumes you are actually interested in understanding the science.
Spurious? Or just not convincing? I’m not implying that there’s a hoax going on. Is that what you think I think?
I imagine that followers of Ptolemy made similar remarks to Copernicus. After all, if one is not convinced, obviously one is not interested in addressing the facts…
Bart Verheggen says
Great post.
It is clear that statistics by itself is not enough to make a statement on attribution. OTOH, statistics can help to verify whether a proposed relation is indeed present in the data, so it definitely has its place in the verification of proposed causal mechanisms / attribution.
There was a long discussion on my blog about the use of statistics recently (e.g. http://ourchangingclimate.wordpress.com/2010/03/08/is-the-increase-in-global-average-temperature-just-a-random-walk/ and the preceding long thread). It is clear that when people leave all physics aside and go solely by statistics, they can reach erroneous conclusions quite easily. The reverse may also be the case. Both are needed to get to robust conclusions.
[Response: Yes, and indeed that thread was a partial inspiration for this post. – gavin]
Hank Roberts says
Dr. Ornstein suggests on his blog and at Piekle Sr.’s blog that “If the trunk of that tree were to be harvested, before decay, and were stored anoxically, or burned in place of coal, a net of about 2/3 of that amount of CO2 would be prevented from entering the atmosphere. If the ash-equivalent of each tree trunk (about 1% of dry mass) were recycled to the site of harvest, the process would be indefinitely sustainable and eco-neutral.”
The same argument could be made that turning whales into fuel was more sustainable than leaving them in the ocean. The problem in both cases is reducing a biological organism in a complicated ecology to its value as fuel. A “living” tree is mostly dead wood; when the tree dies and its perimeter defenses fail, the entire dead core, the bulk of the mass of the tree, is rapidly turned into living material. http://assets.panda.org/downloads/deadwoodwithnotes.pdf
Taking the tree away and returning the mineral ash to the forest removes all the life that would have grown using the fallen tree. Taking the whales for fuel removed most of the ecosystems that grew up around their carcasses in the ocean. http://www.google.com/search?q=whale+carcass+ocean+floor
It’s called ‘trophic collapse’ in both cases.
I think this fits with attribution; biology has to be considered.
Hank Roberts says
cite: http://pielkeclimatesci.wordpress.com/2009/10/26/guest-weblog-by-len-ornstein-how-to-quickly-lower-climate-risks-at-tolerable-costs/