How do we know what caused climate to change – or even if anything did?
This is a central question with respect to recent temperature trends, but of course it is much more general and applies to a whole range of climate changes over all time scales. Judging from comments we receive here and discussions elsewhere on the web, there is a fair amount of confusion about how this process works and what can (and cannot) be said with confidence. For instance, many people appear to (incorrectly) think that attribution is just based on a naive correlation of the global mean temperature, or that it is impossible to do unless a change is ‘unprecedented’ or that the answers are based on our lack of imagination about other causes.
In fact the process is more sophisticated than these misconceptions imply and I’ll go over the main issues below. But the executive summary is this:
- You can’t do attribution based only on statistics
- Attribution has nothing to do with something being “unprecedented”
- You always need a model of some sort
- The more distinct the fingerprint of a particular cause is, the easier it is to detect
Note that it helps enormously to think about attribution in contexts that don’t have anything to do with anthropogenic causes. For some reason that allows people to think a little bit more clearly about the problem.
First off, think about the difference between attribution in an observational science like climatology (or cosmology etc.) and in a lab-based science (microbiology or materials science). In a laboratory, it’s relatively easy to demonstrate cause and effect: you set up the experiments – and if what you expect is a real phenomenon, you should be able to replicate it over and over again and get enough examples to demonstrate convincingly that a particular cause has a particular effect. Note that you can’t demonstrate that a particular effect can have only that cause, but should you see that effect in the real world and suspect that your cause is also present, then you can make a pretty good (though not 100%) case that a specific cause is to blame.
Why do you need a laboratory to do this? It is because the real world is always noisy – there is always something else going on that makes our (reductionist) theories less applicable than we’d like. Outside, we don’t get to perfectly stabilise the temperature and pressure, we don’t control the turbulence in the initial state, and we can’t shield the apparatus from cosmic rays etc. In the lab, we can do all of those things and ensure that (hopefully) we can boil the experiment down to its essentials. There is of course still ‘noise’ – imprecision in measuring instruments etc. and so you need to do it many times under slightly different conditions to be sure that your cause really does give the effect you are looking for.
The key to this kind of attribution is repetition, and this is where it should become obvious that for observational sciences, you are generally going to have to find a different way forward, since we don’t generally get to rerun the Holocene, or the Big Bang or the 20th Century (thankfully).
Repetition can be useful when you have repeating events in Nature – the ice age cycles, tides, volcanic eruptions, the seasons etc. These give you a chance to integrate over any unrelated confounding effects to get at the signal. For the impacts of volcanic eruptions in general, this has definitely been a useful technique (from Robock and Mao (1992) to Shindell et al (2004)). But many of the events that have occurred in geologic history are singular, or perhaps they’ve occurred more frequently but we only have good observations from one manifestation – the Paleocene-Eocene Thermal Maximum, the KT impact event, the 8.2 kyr event, the Little Ice Age etc. – and so another approach is required.
In the real world we attribute singular events all the time – in court cases for instance – and so we do have practical experience of this. If the evidence linking specific bank-robbers to a robbery is strong, prosecutors can get a conviction without the crimes needing to have been ‘unprecedented’, and without having to specifically prove that everyone else was innocent. What happens instead is that prosecutors (ideally) create a narrative for what they think happened (let’s call that a ‘model’ for want of a better word), work out the consequences of that narrative (the suspect should have been seen by that camera at that moment, the DNA at the scene will match a suspect’s sample, the money will be found in the freezer etc.), and they then try to find those consequences in the evidence. It’s obviously important to make sure that the narrative isn’t simply a ‘just-so’ story, in which circumstances are strung together to suggest guilt, but for which no further evidence can be found to back up that particular story. Indeed these narratives are much more convincing when there is ‘out of sample’ confirmation.
We can generalise this: what is required is a model of some sort that makes predictions for what should and should not have happened depending on some specific cause, combined with ‘out of sample’ validation of the model against events or phenomena that were not known about or used in the construction of the model.
Models come in many shapes and sizes. They can be statistical, empirical, physical, numerical or conceptual. Their utility is predicated on how specific they are, how clearly they distinguish their predictions from those of other models, and the avoidance of unnecessary complications (“Occam’s Razor”). If all else is equal, a more parsimonious explanation is generally preferred as a working hypothesis.
The overriding requirement however is that the model must be predictive. It can’t just be a fit to the observations. For instance, one can fit a Fourier series to a data set that is purely random, but however accurate the fit is, it won’t give good predictions. Similarly a linear or quadratic fit to a time series can be a useful form of descriptive statistics, but without any reason to think that there is an underlying basis for such a trend, it has very little predictive value. In fact, any statistical fit to the data is necessarily trying to match observations using a mathematical constraint (i.e. trying to minimise the mean square residual, or the gradient, using sinusoids, or wavelets, etc.) and since there is no physical reason to assume that any of these constraints apply to the real world, no purely statistical approach is going to be that useful in attribution (despite it being attempted all the time).
To be clear, defining any externally forced climate signal as simply the linear, quadratic, polynomial or spline fit to the data is not sufficient. The corollary which defines ‘internal climate variability’ as the residual from that fit doesn’t work either.
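To make the Fourier example above concrete, here is a minimal sketch (MATLAB/Octave, purely synthetic data – every number in it is illustrative):

% Fit a generous set of sinusoids to pure noise by least squares.
n = 60;  t = (1:n)';
y = randn(n,1);                          % data containing no signal at all
K = 12;                                  % 12 sine/cosine pairs -> 25 parameters
F = @(tt) [ones(numel(tt),1), cos(2*pi*tt*(1:K)/n), sin(2*pi*tt*(1:K)/n)];
c = F(t) \ y;                            % least-squares Fourier coefficients
ynew = randn(n,1);                       % what the 'future' actually brings
fprintf('in-sample RMS: %.2f\n', norm(y    - F(t)*c)/sqrt(n));
fprintf('forecast  RMS: %.2f\n', norm(ynew - F((n+1:2*n)')*c)/sqrt(n));
% The fit looks impressive in-sample (RMS noticeably below 1), but the
% 'forecast' typically does worse than simply predicting zero: the
% sinusoids merely memorised the noise. Increasing K only widens the gap.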
So what can you do? The first thing to do is to get away from the idea that you can only use single-valued metrics like the global temperature. We have much more information than that – patterns of changes across the surface, through the vertical extent of the atmosphere, and in the oceans. Complex spatial fingerprints of change can do a much better job at discriminating between competing hypotheses than simple multiple linear regression with a single time-series. For instance, a big difference between solar forced changes compared to those driven by CO2 is that the stratosphere changes in tandem with the lower atmosphere for solar changes, but they are opposed for CO2-driven change. Aerosol changes often have specific regional patterns of change that can be distinguished from changes driven by well-mixed greenhouse gases.
The expected patterns for any particular driver (the ‘fingerprints’) can be estimated from a climate model, or even a suite of climate models with the differences between them serving as an estimate of the structural uncertainty. If these patterns are robust, then one can have confidence that they are a good reflection of the underlying assumptions that went into building the models. Given these fingerprints for multiple hypothesised drivers (solar, aerosols, land-use/land cover change, greenhouse gases etc.), we can then examine the real world to see if the changes we see can be explained by a combination of them. One important point to note is that it is easy to account for some model imperfections – for instance, if the solar pattern is underestimated in strength we can test for whether a multiplicative factor would improve the match. We can also apply some independent tests on the models to try and make sure that only the ‘good’ ones are used, or at least demonstrate that the conclusions are not sensitive to those choices.
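Schematically, this boils down to a multiple regression of the observed pattern onto the model-derived fingerprints. A toy sketch (synthetic ‘fingerprints’ and made-up amplitudes, not output from any actual GCM):

% Observations = linear combination of per-forcing fingerprints + noise;
% the regression recovers the scaling factors.
np    = 1000;                            % flattened space-time points
X     = randn(np,3);                     % stand-ins for GHG/solar/aerosol patterns
btrue = [1.1; 0.3; -0.8];                % 'real-world' amplitudes (unknown in practice)
obs   = X*btrue + 0.5*randn(np,1);       % observed pattern + internal variability
bhat  = X \ obs;                         % estimated scaling factors
disp(bhat')                              % close to [1.1 0.3 -0.8]; a factor near 1
                                         % means the model amplitude matches reality

In real detection-and-attribution studies the internal variability is not white noise, so the regression is performed in a basis that whitens the noise estimated from control runs; the toy above skips that step.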
These techniques, of course, make some assumptions. Firstly, that the spatio-temporal pattern associated with a particular forcing is reasonably accurate (though the magnitude of the pattern can be too large or small without causing a problem). To a large extent this is the case – the stratospheric cooling/tropospheric warming pattern associated with CO2 increases is well understood, as are the qualitative land vs. ocean, Northern vs. Southern, and Arctic amplification features. The exact value of polar amplification is quite uncertain, but since it affects all the response patterns it is not a crucial factor. More problematic are results that indicate that specific forcings might impact existing regional patterns of variability, like the Arctic Oscillation or El Niño. In those cases, clearly distinguishing internal natural variability from the forced change is more difficult.
In all of the above, estimates are required of the magnitude and patterns of internal variability. These can be derived from model simulations (for instance in their pre-industrial control runs with no forcings), or estimated from the observational record. The latter is problematic because there is no ‘clean’ period where only internal variability was operating – volcanoes, solar variability etc. have been affecting the record even prior to the 20th Century. Thus the most straightforward estimates come from the GCMs. Each model has a different expression of the internal variability – some have too much ENSO activity while some have too little, or the timescale for multi-decadal variability in the North Atlantic might vary from 20 to 60 years. Conclusions about the magnitude of the forced changes need to be robust to these different estimates.
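A simple way to see how control runs enter: chop a long unforced simulation into segments and use the spread of segment trends as the null distribution against which an observed trend is judged. A sketch, with synthetic red noise standing in for an actual pre-industrial control run:

% AR(1) noise standing in for a GCM control run's internal variability.
n    = 3000;
ctrl = filter(1, [1 -0.6], randn(n,1));  % red noise, lag-1 correlation 0.6
L    = 100;                              % segment length (a 'century')
nseg = floor(n/L);
tr   = zeros(nseg,1);
for k = 1:nseg
    seg   = ctrl((k-1)*L+1 : k*L);
    c     = polyfit((1:L)', seg, 1);     % least-squares trend of each segment
    tr(k) = c(1);                        % slope, per step
end
fprintf('null spread of trends (s.d.): %.4f per step\n', std(tr));
% An observed trend several standard deviations outside this spread cannot
% plausibly be internal variability -- by this model's estimate of it.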
So how might this work in practice? Take the impact of the Pinatubo eruption in 1991. Examination of the temperature record over this period shows a slight cooling, peaking in 1992-1993, but these temperatures were certainly not ‘unprecedented’, nor did they exceed the bounds of observed variability, yet it is well accepted that the cooling was attributable to the eruption. Why? First off, there was a well-observed change in the atmospheric composition (a layer of sulphate aerosols in the lower stratosphere). Models ranging from 1-dimensional radiative transfer models to full GCMs all suggest that these aerosols were sufficient to alter the planetary energy balance and cause global cooling in the annual mean surface temperatures. They also suggest that there would be complex spatial patterns of response – local warming in the lower stratosphere, increases in reflected solar radiation, decreases in outgoing longwave radiation, dynamical changes in the northern hemisphere winter circulation, decreases in tropical precipitation etc. These changes were observed in the real world too, and with very similar magnitudes to those predicted. Indeed many of these changes were predicted by GCMs before they were observed.
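For a feel of the simplest end of that model hierarchy, here is a toy one-box energy balance calculation for a Pinatubo-like forcing pulse (illustrative parameter values, not the published model results):

% C dT/dt = F(t) - lambda*T: mixed-layer heat capacity C, feedback lambda.
dt  = 1/12;                              % monthly steps, in years
t   = (0:dt:5)';
F   = -3*exp(-max(t-0.5,0)).*(t>=0.5);   % ~ -3 W/m^2 pulse with ~1-yr decay
C   = 8;                                 % W yr m^-2 K^-1 (ocean mixed layer)
lam = 1.2;                               % W m^-2 K^-1 (climate feedback)
T   = zeros(size(t));
for k = 1:numel(t)-1
    T(k+1) = T(k) + dt*(F(k) - lam*T(k))/C;   % forward Euler step
end
fprintf('peak cooling: %.2f K\n', min(T));
% A few tenths of a degree, of the same order as the observed post-Pinatubo
% dip -- though only the full GCMs predict the spatial patterns listed above.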
I’ll leave it as an exercise for the reader to apply the same reasoning to the changes related to increasing greenhouse gases, but for those interested the relevant chapter in the IPCC report is well worth reading, as are a couple of recent papers by Santer and colleagues.
Ray Ladbury says
Len Ornstein, I think you misunderstand Gavin’s terminology. When he says that no purely statistical model is going to be very useful in attribution, he is in no way disparaging statistics. Stating it in the positive rather than the negative, I think Gavin is saying that we absolutely have to have models motivated by the science rather than just “fits” to the data.
The models tell us what a warming world will look like if the warming is caused by different forcings. A well mixed, long-lived greenhouse mechanism has a very distinctive fingerprint, and those fingerprints are all over our current climate. This is precisely why it is important to look at all the evidence, not just global temperatures, or temperatures for the continental US, or for 3 stations near Athens.
I think Gavin has done an excellent job of summarizing the case for anyone who is not scientifically illiterate.
Jacob Mack says
Thank you, Hank, for mentioning the relevant biology.
Doug Bostrom says
Ray Ladbury says: 26 May 2010 at 3:05 PM
Of course, all this presumes you are actually interested in understanding the science.
So often, the science is crowded out.
Further to the matter of independently derived but mutually supporting lines of evidence, there’s also the opposite, as on exhibit at Skeptical Science’s Museum of Mutual Exclusion*.
*Not John Cook’s title. The official name is “Global Warming Skeptic Contradictions” but my mind was a bit warped during the Bush era of Alluring Alliteration.
SecularAnimist says
Lichanos wrote: “It also doesn’t mean we know what you seem to think we know about AGW.”
Exactly and specifically what is it about AGW that you think we think we know, that you think we don’t know?
We know that CO2 is a greenhouse gas.
We know that human activities over the last century and a half, principally the burning of fossil fuels, have released huge amounts of previously sequestered carbon into the atmosphere as CO2.
We know that this anthropogenic excess of atmospheric CO2 is causing the Earth system to retain more of the Sun’s energy, causing rapid and extreme warming (and in addition is rapidly acidifying the oceans as they absorb the excess CO2, which may be an even worse problem than the warming itself).
We know that this anthropogenic warming is already causing rapid and extreme changes in the Earth’s climate, hydrosphere, cryosphere and biosphere.
We know that there is even more warming in store from the CO2 that we have already emitted.
We know that we are continuing to release more and more CO2 and that as a result CO2 concentrations are rising, at an accelerating rate, which guarantees even more rapid and extreme warming than we are already seeing.
What exactly do you think we don’t know?
Hank Roberts says
> I worked for many years with a firm that made its name doing computer
> simulations of very large natural waterbodies, e.g., the NY Bight.
> I am well aware of the uses of modeling.
What don’t you know about modeling, with that background?
Are these typical of the work you’ve had experience with?
http://scholar.google.com/scholar?hl=en&q=simulation+model+%2B%22New+York+bight%22&as_sdt=2001&as_ylo=2008&as_vis=1
Because there are a lot of different types of models, and expertise with one tradition does not assure awareness of all.
Raymon Heath says
As someone who spends a fair amount of time defending the scientific realities on a couple of newspaper debate boards, it does not help when you gift the deniers with a quotable sound bite such as “There is of course still ‘noise’ – imprecision in measuring instruments etc. and so you need to do it many times under slightly different conditions to be sure that your cause really does give the effect you are looking for.”
To the average climate change denier, you said that you set out to find the result that you want – confirmation bias! I will bet you that this is the only quote that gets used (at least in part) beyond the rarefied atmosphere of your inter-scientific chats. We work so hard to defend the balanced and sceptical acceptance of the scientific probability that we do indeed affect the environment that we all depend on; when you write without thinking of the way your words will be cut and pasted, it just makes it that little bit harder to slap the grinning monkeys.
[Response: Oh please. I appreciate the concern, and I also appreciate that the people are looking to misquote and misrepresent, but this really is not worth bothering with. I have no problem whatsoever with people doing more experiments to make sure – and neither should you. – gavin]
Phil Scadden says
“The degree to which models have predicted events that came to pass is always open to discussion – the tests are never yes/no, on/off. They don’t pass muster as predictive tools, only as aids to understanding system dynamics.”
There is something wrong with this statement. We use models to fly a rocket to Mars – in fact we use models in every facet of engineering. Climate models surely have on/off, yes/no tests. If the observations of the real world differ from model predictions by more than can be accounted for in the modelled error estimates, then the model is incorrect, pure and simple. Now where are the real-world observations not matching the climate model predictions consistent with AGW, or where are the observations more consistent with a different forcing?
Frank Giger says
“If the observations of the real world differ from model predictions by more than can be accounted for in the modelled error estimates, then the model is incorrect, pure and simple.”
That would be true if the model worked off something that had no “noise” in the system – such as gravitational forces, thrust, and vector in planning a Mars shot. Even there they have a band of acceptable results.
For example, they knew they would hit Mars with the last set of probes, and got that down to regions. However, they couldn’t predict precisely where they would land – no imaginary bullseye was drawn with two hundred yard score lines around it.
Climate models, like all predictive models, work on probabilities within a range.
David B. Benson says
Science is the application of inductive logic to determine generalities (laws) from evidence. Of course nothing is ever completely certain, so there is always some probability of incorrect laws, yet many laws are taken as causal givens.
For more uncertain situations, one begins by looking at some correlations which suggest the possibility of causality. Then an early test is
http://www.scholarpedia.org/article/Granger_causality
although there are other matters to test to help decide whether or not the correlation is merely accidental. If these tests are passed, then when X G-causes Y one suspects that some mechanism of actual causation exists. An example, although not originally formulated in these modern terms, is the Arrhenius formula for the effect of CO2 change on temperature change. The matter is formulated in this statistical way in, e.g., Tol, R.S.J. and A.F. de Vos (1998), ‘A Bayesian Statistical Analysis of the Enhanced Greenhouse Effect’, Climatic Change, 38, 87-112.
It is certainly more informative when it is possible to provide good evidence that both Y and Z depend (lawfully) upon X, as this much more strongly restricts the choice of possible models.
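For the curious, the core of a bivariate, two-lag Granger test fits in a few lines. A bare-bones sketch with synthetic data in which X really does drive Y:

% Does adding lagged X improve the prediction of Y beyond Y's own lags?
n = 500;
x = randn(n,1);
y = zeros(n,1);
for k = 2:n
    y(k) = 0.5*y(k-1) + 0.4*x(k-1) + randn;   % X drives Y at lag one
end
Yt = y(3:n);  Y1 = y(2:n-1);  Y2 = y(1:n-2);  X1 = x(2:n-1);
A0 = [ones(size(Yt)) Y1 Y2];             % restricted model: Y's own lags
A1 = [A0 X1];                            % unrestricted model: add lagged X
r0 = Yt - A0*(A0\Yt);                    % residuals of each model
r1 = Yt - A1*(A1\Yt);
Fstat = (sum(r0.^2) - sum(r1.^2)) / (sum(r1.^2)/(numel(Yt)-4));
fprintf('F = %.1f on (1,%d) df; large => X G-causes Y\n', Fstat, numel(Yt)-4);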
RalphieGM says
As to fossil fuel providing CO2 to the atmosphere – fossil fuel was at one time – CO2. These fuels were plants that fossilized into fossil hydrocarbons. So why the big concern about CO2 now? I don’t see the alarm.
Jacob Mack says
David, there are some interesting papers/textbooks on causal inference. One reference source I found helpful is the Berkeley Electronic Press. The articles I read there were from the International Journal of Biostatistics, but they were helpful and led to other relevant articles on the Berkeley website and elsewhere.
Regarding inductive logic, I would add that there is a healthy helping of deductive reasoning as well.
Leonard Evens says
“As to fossil fuel providing CO2 to the atmosphere – fossil fuel was at one time – CO2. These fuels were plants that fossilized into fossil hydrocarbons. So why the big concern about CO2 now? I don’t see the alarm.”
It all depends on the time scale. On geological time scales of hundreds of millions of years, it is probably not such a big deal. But on human time scales of a generation or two, it can make a very big difference. If we return the CO2 to the atmosphere that it took millions of years to deposit in the form of fossil fuels, all in a hundred years or so, it will make a big difference to our children and grandchildren.
Doug Bostrom says
RalphieGM says: 26 May 2010 at 10:07 PM
Background of the fundamental problem is here.
But in a nutshell…
These fuels were plants that fossilized into fossil hydrocarbons. So why the big concern about CO2 now?
Plants grew, sucked CO2 out of the air, died, and were buried along with that CO2 over millions of years, during which more CO2 was made available by weathering of rock. We’re digging or pumping a significant fraction of millions of years’ storage of CO2 out of the ground and releasing it in the course of a couple of dozen decades. Peeling away layers of quibbles and parsing of decimal points, CO2 does in fact help to control the amount of heat retained from sunlight striking the Earth. Too much CO2 added to the atmosphere too fast and things go out of whack, to an extent that looks like being a disruptive matter. Not terribly complicated in its fundamentals, but of course this is not only a fascinating matter for scientists to nail down to the last iota of detail but also a serious problem for our industrialized society, which is pretty much entirely leveraged on rapid extraction and combustion of fossil hydrocarbons.
Doug Bostrom says
I should clarify my remark: it was sugars and cellulose and the like that were buried with those plants, not CO2 per se. It comes back as gooey or runny or gassy or crumbly hydrocarbons which we burn, thereby combining carbon with oxygen to form CO2. Sorry!
jyyh says
One thing about the Triassic-Jurassic I’ve never seen is the distribution of mammal fossils. Are there any of those from the (then) tropics? Having a constant temperature might be an asset in cooler climes, but when I tried to keep track of the ground lizards in Cyprus (holiday) my eyes were too slow at a temperature of 37 degrees in the shade.
Andrew says
“In fact, any statistical fit to the data is necessarily trying to match observations using a mathematical constraint (i.e. trying to minimise the mean square residual, or the gradient, using sinusoids, or wavelets, etc.) and since there is no physical reason to assume that any of these constraints apply to the real world, no purely statistical approach is going to be that useful in attribution”
This is pretty general, and for example, nonparametric superefficient methods of estimation exist for which all of the claims are false. I expect I have to explain that a bit.
If you really really want the best answer, and you are limited by not having enough data for conventional confidence bounds to settle your question, then you are pretty much right in the middle of the situation where superefficient estimators add the most value, and people who have to make decisions with insufficient data should use such methods when they can.
One way to get at an understanding of how these methods work is to realize that methods that minimize sample error criteria also maximize the amount of actual system noise which is mis-attributed to the parameters. This includes the dreaded “overfit” but is not limited to it. You can set up a small multilinear regression and see – here is a MATLAB session that makes this clear (to the large number of people that understand MATLAB).
>> A = randn(5,1); % true parameters
>> X = randn(10,5); % true explanators
>> e = randn(10,1); % true system noise
>> y = X*A + e; % observations
>> Ahat = X \ y; % least squares model
>> ehat = y - X * Ahat; % residuals
>> sum(e .^ 2) % actual noise sum of squares
ans =
17.7329
>> sum(ehat .^ 2) % residual sum of squares
ans =
14.9373
Now every time you run it, you get different numbers. But you can actually prove that the residual sum of squares will always be less than the actual system noise sum of squares: the fitted parameters minimise the residual sum of squares over all parameter choices, including the true ones (which is the point of “LEAST SQUARES”). When the estimation process removed that noise from the residuals, it had to put it somewhere, and the only place it can put it is the parameter estimates. This is why one of my colleagues thinks of least squares as “maximum parameter noise” estimation.
Well, maximum parameter noise estimation doesn’t sound good, and, in a lot of cases it isn’t good. Similarly, any method that minimizes some residual norm is maximizing some measure of the noise mis-attributed to the parameter estimates.
Well, were you hoping to estimate the residuals, or the parameters? A lot of the time people estimate parameters by minimizing residuals, or by appealing to a similar method as described in the original comment. People should usually not do this, but not everyone has the memo yet.
One way to address this is to change from least squares to Wiener filtering, which in some sense is “the” right answer because it seeks to minimize the parameter estimation error. But Wiener filtering requires knowledge of true statistics which are not normally known. One can prove that if you use the same data to estimate the preliminary least squares model and the process statistics with conventional (e.g. maximum likelihood) estimates, then the Wiener filter is always WORSE than the original least squares. By re-using the same data, you allow the possibility of correlations between the different uses of the sample data to persist, where if you used independent data you would have none of these spurious correlations. If you split your data into independently used sets, you avoid the spurious correlations but you lose from the bigger error bars.
It turns out that one way out of the dilemma is the aforementioned superefficient estimation. It is a way to get away with re-using the data by using the data “badly” enough. If your re-use of the data is with a dull enough estimator, then it damps the spurious correlations between the original data use and the re-use. How it does this and how “dull” it needs to be depends critically on the exact form of the estimator. But the practical reality is that you end up using an estimator which trades off your control over its accuracy against the risk of these internal correlations.
It turns out that the values of the parameters you are estimating affect what is the best trade off between accuracy and risk. In other words, when you pick the estimator, it will work better for some parameter values than others. If you do it right, it works better than unbiased estimation for all possible parameter values.
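For concreteness, the original superefficient estimator – James-Stein shrinkage – can be demonstrated in a few lines (synthetic data; the dominance claim holds for every true parameter vector once the dimension is at least 3):

% Shrinking the raw estimate toward zero beats it in total squared error.
p      = 10;                             % dimension of the parameter vector
theta  = randn(p,1);                     % arbitrary true parameters
trials = 5000;
se_raw = 0;  se_js = 0;
for k = 1:trials
    z  = theta + randn(p,1);             % one noisy observation per component
    js = (1 - (p-2)/sum(z.^2))*z;        % James-Stein shrinkage toward zero
    se_raw = se_raw + sum((z  - theta).^2);
    se_js  = se_js  + sum((js - theta).^2);
end
fprintf('mean squared error: raw %.2f, James-Stein %.2f\n', ...
        se_raw/trials, se_js/trials);
% The raw estimate averages about p; James-Stein is strictly smaller, most
% dramatically when the true parameters happen to lie near the target.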
So this is a choice – a choice between infinitely many possibilities for which there can be no advance guidance (and, as opposed to Bayesian estimation, usually no posterior guidance either).
It is a bit of a philosophical issue whether this use of information geometry has a connection to laws of physics or not (e.g. Leon Brillouin’s physical information theory); however it is clear that you don’t need any physical “basis” for this.
Do scientists often use superefficient estimation? I don’t know. It was used in some cosmic background radiation work.
I assume a lot of climate scientists are aware of this technology and would want to use these techniques, except that in a highly policy-laden environment you would open the door to someone who believes climate science is a hoax intentionally choosing estimators which are superefficient near the parameter values they believe. These estimators make sense when you really want to know the answer, not when you want to fight over it.
However, the point I want to make about the original comment is that no, it is not generally true that purely statistical estimators are necessarily as described in that comment, and what’s more, I would expect that purely statistical techniques are available which would be useful in estimation applied to climate attribution. I think it is only the problem of how these techniques would interact with interested parties, not how they would inform us about climate.
[Response: Thanks – this isn’t a methodology I was familiar with. But it does not rebut the point I was making in the slightest. As in any multiple regression, the choice of predictors will be important and without a physical understanding of which to use and why, you can end up with ‘highly significant’ nonsense – a recent attempt came to the conclusion that methane was an anti-greenhouse gas for instance. Statistical techniques on their own are not useful, not because of their ability to reduce residuals or the confidence interval on the parameters or projections, but because they don’t have any constraints based on the physics, and thus are prone to giving unphysical (and spurious) conclusions – for instance in McKitrick’s recent papers. It might well be that the techniques you discuss would be valuable in some contexts and I’d be happy to discuss specifics, but you are not going to be able to distinguish natural and anthropogenic variability, or the contribution of solar and GHGs without a physical basis. – gavin]
richard pauli says
#54 SecularAnimist asks “What exactly do you think we don’t know?”
We do not know if humans can unify in rational acts to reach species survival.
The biggest unknowns and confusion are not chemical, they are social and psychological.
MapleLeaf says
David @59,
Good points. IMHO, Granger causality has been under-utilized in climate research.
David Miller says
RalphieGM asks what the big deal with returning carbon to the atmosphere is.
The big deal is that we’re taking carbon that was stored over millions of years – very slowly removed from the atmosphere – and dumping it back into the atmosphere in a couple hundred. That drives CO2 levels, and temperatures, up.
If we released it over a million years it would be no problem at all.
GFW says
Re RalphieGM (60): Poe’s law applies?
jeannick says
Excellent post, with a rich element of positive inquiry.
One should mention the insights obtained from the observation of errors, particularly model errors. They give a quantitative indication for further inquiries: if the errors are constant, only a few further factors or ratios have to be found; if the errors are wildly variable, then either there is a lot of work ahead, or a similar fluctuation can point out the missing parameter.
Usually the worst case can be assumed. Sometimes, as with the effects of volcanoes, things turn out quite straightforward; other times it is useful to eliminate possible culprits.
The effect of the pollutants is fascinating: there was a lot of SO2 and NO2 released by the old economies until pollution controls and the collapse of the communist economies, then India, China and others ramped up their emissions of heavy pollutants.
There is a paper to write on a possible (or not!) correlation ;-)
Completely Fed Up says
“So why the big concern about CO2 now? I don’t see the alarm.”
The alarm is that most people are living where the ocean used to be when that CO2 was in the atmosphere.
Maybe you’re an alien from a waterworld, but most humans don’t breathe too well underwater.
Completely Fed Up says
“If the observations of the real world differ from model predictions by more than can be accounted for in the modelled error estimates, then the model is incorrect, pure and simple.”
Really?
So when your car gets 45mpg rather than the 50mpg the spec sheet says, your car isn’t working.
Hmmm.
Completely Fed Up says
“29
Frank Giger says:
26 May 2010 at 2:27 PM
OMG, CFU (#11) and I are in complete agreement!
There is hope for peace in our time.”
Frank, just agree.
I use your past statements to work out whether I will read your posts or whether I will pay especial attention to the wording, but I either agree with something you say or don’t agree.
I don’t find surprise in agreeing with someone when I think they’re right, no matter who they are.
I do find disappointment in how infrequently I can agree with someone because they have things wrong in trivial ways.
Completely Fed Up says
On statistical modelling, try this:
Take the first half of the dataset.
Statistically model the data you have.
Project that statistical or curve fit to the other datapoints.
See how well they match.
Something that the proponents of Fourier analysis as the be-all/end-all of temperature reconstructions haven’t ever looked into.
Mostly because to prove them wrong if they fit ALL the data, you have to wait for ~15 years.
Delay, delay, delay. Just let them get to retirement and sell their annuity on the uptick.
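For anyone who wants to try the recipe, a few lines will do (synthetic trend-plus-noise series; the ‘cycle’ fit is my own strawman):

% Fit a pure cycle to the first half of a trending series, then test it
% against the withheld second half.
n = 120;  t = (1:n)';  h = n/2;
y = 0.02*t + 0.3*randn(n,1);             % 'temperature': trend + noise
B = [ones(n,1) sin(pi*t/n)];             % slow sinusoid, no trend term
c = B(1:h,:) \ y(1:h);                   % fit to the first half only
e = y - B*c;
fprintf('RMS, first half:  %.2f\n', sqrt(mean(e(1:h).^2)));
fprintf('RMS, second half: %.2f\n', sqrt(mean(e(h+1:n).^2)));
% The rising quarter-wave tracks the first half nicely, then turns down
% while the data keep rising: the failure only shows up out of sample.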
Alexander Ač says
Would all these discussions ever happen if (instead of CO2) solar activity had been proven to be “the highest during the last 15 million years”?
Would all that attribution talk be needed?
My guess is – no. I think this has something to do with “anthropo-blindness”, or so…
John P. Reisman (OSS Foundation) says
#45 Lichanos
Why don’t you use your real name in your posts? As you can see, many/most people here use their real and full names. Why not you? I ask, knowing that some have due cause that is not merely hiding for fear of being actually known for one’s words. Do you have the integrity to post your full name? Or some less interesting excuse?
Boy, if I had a nickel for every time a guy that doesn’t understand AGW said “I am an engineer” and then said or implied that models can be wrong, or that the science is not as good as the science indicates???
http://www.ossfoundation.us/projects/environment/global-warming/myths/models-can-be-wrong
I did a couple years engineering too, so what? That actually has nothing to do with brain surgery, economics, biology, or climate science.
If you are too focused on who is hiding sins, you can easily miss a good portion of reality. And while on the subject, in climate, what sins are being hidden??? Are you going to bring up UEA/CRU? hmmmm. . .
http://www.ossfoundation.us/projects/environment/global-warming/myths/climategate
You are correct in your assertion that “Being useful does not entail they are good predictors” but what does that have to do with climate science and climate models?
As to what ‘know with perfect certainty’ i.e. what those words mean:
http://www.merriam-webster.com/dictionary/know
http://www.merriam-webster.com/dictionary/with
http://www.merriam-webster.com/dictionary/perfect
http://www.merriam-webster.com/dictionary/certainty
But of course you could have looked that up, you being an engineer and all.
The attribution and signal to noise shows that this global warming event is largely human caused. Or do you have an alternate hypothesis or theory that can overturn the well established science?
Re. your post #47
As to your statement “Spurious? Or just not convincing? I’m not implying that there’s a hoax going on. Is that what you think I think?”
If you’re not implying a hoax, just what are you implying? Your posts imply doubt in the relevance and certainty levels of the science. That in itself implies, via the open door, that you think there is a degree of hoax.
hoax: to trick into believing or accepting as genuine something false and often preposterous
“How do we know Robock doesn’t lie to us?”
This implies a degree of hoax via the possibility of the lie.
You can of course say you meant only that the science is unsure, but that does not remove the implication.
http://iamyouasheisme.wordpress.com/tag/agw/
Generally speaking, you’re looking at the argument, not the science.
Okay, rather than beating around the bush, what do you think? From what you have written, you don’t seem to have confidence in the science or degree of certainty. Is that true, and if so to what degree?
Specifically, what percentage of this global warming event do you think is human caused?
I’m assuming that you will come back with “we really don’t know”. But that is only because I, and others here, have heard this argument before.
—
A Climate Minute The Greenhouse Effect – History of Climate Science – Arctic Ice Melt
‘Fee & Dividend’ Our best chance for a better future – climatelobby.com
Learn the Issue & Sign the Petition
Didactylos says
Lichanos:
You insist on talking about climate models in abstract terms, and take exception to the view that they can be good predictors.
Instead of spending time debating abstractions, and perhaps allowing your perception of climate models to be coloured by the reputations of other, less successful models, I think you should turn your attention to the actual output and testing of real climate models.
Also, learn about the concept of skill. If a climate model has skill, then (in certain limited circumstances) it is a good predictor. Climate models are complex, and only a tiny fraction of your experience with other models will transfer.
Climate models have now had 30 years to prove themselves against completely fresh data. So far, they have proved exceptionally accurate. And that’s with models from 30 years ago! Hopefully, today’s models are even better. They take more processes into account, and they are calculated at finer resolution.
So, forget the abstractions and broad dismissals of millions of man-hours of scientific work. If you want to debate the usefulness of models, it is not enough to crow about the well-known limitations of models, you have to find fault with the specific interpretations of the results of every single model out there. Better qualified people have failed to do this, but please don’t let that stop you.
Barton Paul Levenson says
Ralphie 60: As to fossil fuel providing CO2 to the atmosphere – fossil fuel was at one time – CO2. These fuels were plants that fossilized into fossil hydrocarbons. So why the big concern about CO2 now? I don’t see the alarm.
BPL: CO2 is a greenhouse gas. Putting more of it in the air heats the ground. Human agriculture and civilization all developed in a time (the last 10,000 years) when the temperature was unusually stable.
Barton Paul Levenson says
I still can’t track down a prediction and a confirmation for the statement, “GCMs predicted an expanded range for hurricanes and cyclones.” If I can’t find something soon I’m going to have to take it off my list. I’ve found cites for everything else.
Anonymous Coward says
Ralphie (#60),
CO2 is constantly absorbed in sediments and released by volcanoes. Since the absorption of CO2 is partly a function of its concentration and of temperatures, it is thought to act a bit like a thermostat. But the process is very slow. Quickly releasing large amounts of carbon which had been sequestered is a bit like tampering with the thermostat. It should heal itself, but that might take millions of years.
Kevin McKinney says
#62–Elaborating via analogy: you can transit from a building’s 4th floor to the ground floor via the stairs in a couple of minutes (a most mundane occurrence), or you can transit from 4th floor to ground floor via the window in a couple of seconds. (This method will tend to draw a crowd.)
Timescale can be absolutely crucial.
Kate says
A fantastic post. I’ve been really interested in attribution ever since I saw Peter Sinclair’s Solar Schmolar video a year or so ago and first learned about thermodynamic fingerprints. A detailed, reliable post like this is wonderful. I will be passing it around!
Jim Eager says
Ralphie @60, the concern is due to the fact that the carbon in fossil fuels has been locked out of the atmosphere and out of the active carbon cycle for millions of years. By rapidly injecting large quantities of that fossil carbon back into the atmosphere as CO2 we are recreating an atmosphere that has not existed for millions of years, while retaining the terrestrial and ocean carbon sinks adapted to a world with a much lower CO2 atmosphere. As a result those sinks are not capable of absorbing that CO2 as fast as we are emitting it, which is why CO2 is accumulating in the atmosphere.
Why is that a problem? Since CO2 is a greenhouse gas, we know that adding more of it to the atmosphere will make surface and air temperatures higher than they are now, and potentially much higher than they have been during all of human evolution. We also know that warming will melt a lot of the ice currently locked in the ice sheets of Greenland and Antarctica, thus raising sea level by several meters. And finally, we know that changing Earth’s climate will change local weather patterns. Places where we now dependably grow large quantities of cereal crops may become too dry or too wet to do so reliably.
In other words, all of human infrastructure and technology, including agriculture, has been developed and built to deal with the climate that we have now. Change the atmosphere and climate to the one that existed when CO2 was much higher and much of that infrastructure may be useless or worse.
Hank Roberts says
> expanded range for hurricanes and cyclones
Perhaps you’re thinking of an expanded range (in time and space) of ocean surface temperature conditions, those in which they begin?
Pete Dunkelberg says
CFU @ 74:
“I do find disappointment in how infrequently I can agree with someone because they have things wrong in trivial ways.”
That’s hard to parse, but I’m not sure you should be disappointed.
J. Bob says
Lichanos, hang in there. Nothing like going against the grain to bring out bias elements.
Considering the comments, one has to wonder how many of your critics have experience in thermodynamic modeling.
Pete Dunkelberg says
Gavin on the risk of quotation abuse:
“I appreciate the concern, and I also appreciate that the people are looking to misquote and misrepresent, but this really is not worth bothering with.”
Unfortunately this is a serious problem, and in climatology it extends to graph abuse. One view is: “Just speak correctly. The quotation abusers are going to do it anyway. And if all else fails they will make stuff up.” But there have been times during the email-hack affair that I have been instantly aggravated by what seemed unnecessarily heedless language. In general, if not in the specific case Gavin referred to, I hope the need for care is agreed. Think of it as part of proofreading.
Completely Fed Up says
“Lichanos, hang in there. Nothing like going against the grain to bring out bias elements.”
Says RC’s own bias element…
Yup, any complaints MUST be a bias. CANNOT be because he’s wrong. Nosiree.
ccpo says
Lichanos says:
26 May 2010 at 2:55 PM
The title of his book makes that clear; it’s a bit triumphalist.
If you are unwilling to take a man’s work for what it is, what reason do we have to converse with you at all? None, I say. As a teacher of language, I am quite sensitive to the use of it; I have to be to decipher what L2 students are trying to say. Where you see triumphalism in the use of “discovery” is beyond me. Warming was discovered. God didn’t announce it from on high, it was teased out of data – and at a time when the tools were far cruder than today, which is quite impressive.
What your language tells us in calling a descriptive title “triumphalist,” is that you have little control of your own bias. Your refusal of the conclusions the data point to causes you to interpret Weart as you do. You simply don’t accept the premise of AGW, thus, any declarative statement to the contrary is “triumphalist.” It’s your own bias showing, not Weart’s.
I will close by saying that one must be ever on guard against one’s own biases and passions, not only those of others.
Indeed.
Lichanos says:
26 May 2010 at 3:02 PM
Why not just review what Weart says in response to critical arguments yourself and try and see it from the point of view of someone who needs to be convinced? That would be more constructive.
Lichanos, if the huge mass of data, all of it reinforcing the obvious conclusion, doesn’t convince you, how can we? Time after time we have been shown there is no data, there are no studies that support anything other than the obvious: anthropogenic forcings are changing the planet. Period. This is not something you can argue, because there is nothing to support a counter argument.
It is not legitimate to say that because knowledge is imperfect it is unreliable or wrong. You actually have to show that some other cause is present. Occam’s razor applies: if over 100 years of science all points to anthropogenic warming, then we can be darned sure that’s the case.
Time after time we have seen the denialists’ arguments fully attributed to an intentional dis-/misinformation campaign and ideological constraints on comprehension and interpretation. See Oreskes, et al.
You don’t “need” to be convinced, you only need to remove the blinders you have placed over your own eyes. There is no other plausible attribution for climate changes, thus it is unreasonable to deny their attribution to human actions.
Cheers
Doug Bostrom says
J. Bob says: 27 May 2010 at 10:02 AM
Nothing like going against the grain to bring out bias elements.
On the other hand, “I doubt it” is not really an argument. Cutting across the grain with a toothless saw is not productive.
Lichanos says
@77 John Reisman
–Why don’t you use your real name in your posts?
I comment under my own name at sites like the New York Times or National Public Radio, but on blogs, I use my blog name. I wouldn’t hesitate to say face to face anything I say on my blog, but that doesn’t mean I want other people saying it for me anywhere across the Web. I don’t see it as an integrity issue – just deal with my arguments, and leave my person out of it. But just in case it matters, although it shouldn’t, I voted for Al Gore and Obama. I’m not a lunatic from the right, as Alan Robock would have it. Nor am I paid by fossil fuel multi-nationals.
–If you are too focused on who is hiding sins, your can easily miss a good portion of reality
Quite true, but I don’t know why it applies to me. I’m just arguing about hypotheses.
–I did a couple years engineering too, so what?
I don’t claim that my profession gives me extraordinary insight into these issues. I raised it because a commenter suggested I feel computer models are useless. I certainly do not feel that way. Others have commented that my experience with hydrodynamic and water quality models is not ‘transferable’ to GCMs. That would be relevant if I were applying for a job as a GCM modeler, but I’m not. The general question of how to evaluate models and when to rely on them, and for what, remains.
–As to what ‘know with perfect certainty’ … mean[s]:… you could have looked that up, you being an engineer and all.
You are extremely vehement, seem to lack a sense of humor or irony, and certainly have no knowledge of philosophy. If you think you can settle the meaning of ‘to know’ and ‘certainty’ by consulting Webster, you are either the greatest philosopher the West has seen since Aristotle or completely ignorant of the intellectual issues they raise. I was simply alluding to them in a jocular manner: I didn’t intend to divert the discussion into academic epistemology.
–If you’re not implying a hoax, just what are you implying? Your posts imply doubt in the relevance and certainty levels of the science.
I don’t believe that the AGW crowd is guilty of a hoax. I have said this on my blog in many places. Is this the choice we have: AGW is true; AGW is a hoax? That’s how conspiracy theorists think, and I am not one. How about: AGW is plausible, but I think it’s not sufficiently demonstrated, so I think those scientists are wrong. I give them the benefit of the doubt regarding honesty. I do think they are dangerously biased at times, and may have done some shoddy work at others. I don’t think there’s a world-wide conspiracy of one-state liberals trying to impose eco-orthodoxy on the masses.
What is scientifically insupportable with doubting the relevance and claimed certainty of individual scientific claims? If that is not acceptable, we’re back in the age of truth-by-decree.
–“How do we know Robock doesn’t lie to us?” This implies a degree of hoax via the possibility of the lie.
You quote my blog post on Robock’s talk and completely misunderstand it, probably because you interpret everything literally. Robock said “Lindzen lies to you.” End quote. I find it disturbing that a scientist would accuse another professional scientist of outright lying, without giving any evidence, without any qualification at all. Simply character assassination. So, it’s logical for me to wonder aloud, how do I know that the accuser isn’t lying to me? Because he’s a nice guy? I’m not saying either of them are liars. I’m sure that Robock believes Lindzen is a liar, though he shouldn’t say it in that forum in that way.
–Generally speaking, you’re looking at the argument, not the science.
What the heck does this mean??
–Okay, rather than beating around the bush, what do you think? From what you have written, you don’t seem to have confidence in the science or degree of certainty. Is that true, and if so to what degree? Specifically, what percentage of this global warming event do you think is human caused?
Here’s my point of view, boiled down for you:
I find the AGW argument unconvincing. It is based on two foundations: the temperature record and the GCMs. I think the temperature record is extremely problematic. I think the use of proxies is very problematic. I think the urban heat island effect has not been properly considered. I think many arguments presented to the general public to support AGW are utter garbage, e.g., glaciers are melting and migration patterns are changing, ergo, AGW is true.
I think there is a mass of observation and evidence that is consistent with the AGW view, but that does not prove it in any way because it is consistent with other views as well. Reliance on GCMs is a degradation of the scientific method. Falsification is honored mostly in the breach.
Many AGW proponents are shrill and intolerant – I’m talking about scientists here, forget about the politicos and environmentalists – and resort to ad hominem attacks whenever possible. (I won’t deny that skeptics are often the same, but that’s politics for you.)
Human beings certainly change local and regional climate. This idea has been around at least since George Perkins Marsh published Man and Nature in the 19th century. Land use patterns are very significant.
The IPCC states that it is highly likely that most of the temperature rise of the earth over the last century is due to AGW. If “highly likely” means 90% certainty, and “most” means more than 50%, what are we left with? Around half of the climate warming in the last century is due to AGW? (You can correct my figures a bit if you like, but the point is unchanged.) So then, what is that temperature increase? The historical record becomes critical! If my concerns about the data record and proxies are only partly correct, then the part of the rise that is AGW is not very big at all.
So what about the future? The entire AGW argument is based on positive feedbacks that will take this small increase and run with it, making the globe much warmer. After all, it’s “basic physics” as you folks like to say, that without the positive feedbacks, the warming effect would be naturally limited. Here is where the GCMs come into play. They are the crystal ball. Why do we rely on them? Should we really have confidence that they can predict the future to such a degree of precision over such a time scale when such positive feedbacks have never been observed before on this scale? When we have so little knowledge about many of the physical systems involved? When the models are calibrated against the historical record which is itself in doubt?
I remain unconvinced. I don’t think it’s a hoax. I think it’s a fad. We’ll know for sure in fifteen or twenty years.
Andrew says
“this isn’t a methodology I was familiar with. But it does not rebut the point I was making in the slightest. As in any multiple regression, the choice of predictors will be important and without a physical understanding of which to use and why, you can end up with ‘highly significant’ nonsense”
People doing things with statistics can end up with highly “significant” nonsense, but this is not one of the things in statistics that can have that result.
There are several reasons for this; but one of the more important reasons is that because of the essential role of the choice involved, these types of estimates do not have conventional significance associated with them; you can arrange the choice to be non-measurable, etc. So it’s hard to imagine someone believing they have anything “highly significant” in this sort of exercise.
If, as you say, you are not familiar with superefficient estimation, it can take a bit of head scratching before you get a feel for what it can and cannot do. When it was first discovered, it was a counterexample to the robust belief of middle twentieth century statisticians (like LeCam, etc.) that such a thing was impossible. It was considered a bizarre curiosity that was not expected to be of practical utility (other than on statistics Ph.D. oral exams) for decades (they actually called the first example “Stein’s Paradox”). Even now, most trained statisticians do not work in areas where one relies heavily on these effects – most trained statisticians are trained to say you don’t have good enough data in these situations, or that you should design a different experiment. In observational sciences, we don’t get to just ‘get more data’; usually we have to wait. And we don’t get to design a better experiment, because we have to live with the one we have. Most people who depend critically on this sort of estimation are in industries where publication is an afterthought, or even discouraged, so there is a lot more of this going on than you can see in the open literature.
There is a bit of an interesting interaction with “physics” too. In my field, beliefs about “laws” are generally false (in approximately the same sense that Aristotelean physics isn’t particularly good at predicting enzyme catalysis reaction rates), so we usually ignore the “laws” that are on the books; however sometimes you do have good physics – the question here is what does the superefficient estimation do with that information? It tends to respect such information scrupulously if that information has a highly determinative effect on the observations, but it pays little attention to that information as part of the mechanism to reduce the uncertainty. This can be interpreted as: the physics takes you so far, but the estimation heavy lifting starts where we have already used all the “known knowns” – they tend to have “high codimension”, and the superefficient estimator effect tends to be strongest in high dimensional estimates. One can casually think of the superefficiency effects as being concentrated where you are the most ignorant.
So you actually can throw this sort of estimation fairly blithely at problems that have ‘physics’ (whether you think you know the physics or not) and to first order, you can’t really screw things up – as long as you understand the estimation theory fairly closely.
So (in dire contrast to traditional Bayesian estimation) you have a situation where your estimation performance doesn’t depend strongly on the state of your understanding of the “physics”.
I’m not aiming to turn the thread into an estimation theory seminar, it’s just that statistics is a really big place, and there are a lot of interesting things going on in there. Superefficient estimation is sort of like how to do estimation with much too little data. There are even weirder things that are actually still practical – like what you can do with no data at all, or only data about the wrong things, etc. Of course, each time you descend to a more dire predicament of data, your results degrade, but you can descend pretty far before you are left with no information theoretic tools at all.
Doug Bostrom says
Lichanos: I find the AGW argument unconvincing.
Probably should have stopped there, certainly before I think the urban heat island effect has not been properly considered.
The matter of UHI has been teased, parsed, analyzed, turned upside down, scrutinized in a way that could fairly exhaust the English language. I have to say that if that’s one of the first things that comes to your mind as a rebuttal to the entire suite of research in play here, you’re way behind the curve. If you were not, you’d pick something more challenging, such as clouds. Instead, you move on to talk about “crystal balls” and the like because you’re seemingly not able to specifically identify flaws w/GCMs and the like where you may be able to make a positive contribution.
In short, your argument reduces to “I doubt it.”
Here’s an opportunity for clarification, or maybe a retrieval of a slice of the rapidly fading reputation of the pseudonym you’re using here. When you say researchers “…resort to ad hominem attacks whenever possible”, can you produce some evidence for that? Failing that, how about a retraction? It costs you nothing for after all you’re not a personality here, simply a mask.
Completely Fed Up says
“I find the AGW argument unconvincing. It is based on two foundations: the temperature record and the GCMs.”
It’s based solely on the reports of companies like Texaco, Exxon, et al.
We know how much CO2 humans are producing.
The rest of it is climate science, which isn’t AGW (AGW is a consequence of us burning fossil fuels):
https://www.realclimate.org/index.php/archives/2010/05/what-we-can-learn-from-studying-the-last-millennium-or-so/comment-page-11/#comment-175797
And that science works on planets other than Earth; it works on the Earth in the dim and unknowable past. It is damn solid.
And that solid science means the inevitable consequence of our burning of fossil fuels is AGW.
This was known BEFORE any computer model and BEFORE any measurement.
You didn’t seem to know this, so how do you know you know anything?
Completely Fed Up says
“I think there is a mass of observation and evidence that is consistent with the AGW view, but that does not prove it in any way because it is consistent with other views as well.”
Such as…?
Completely Fed Up says
“After all, it’s “basic physics” as you folks like to say, that without the positive feedbacks, the warming effect would be naturally limited.”
And another example of what you don’t know.
The warming effect is naturally limited WITH the feedbacks.
And if there are no feedbacks, I take it you refute the statement that H2O is a more powerful greenhouse gas? And you refute that clouds have an effect.
Completely Fed Up says
“When the models are calibrated against the historical record which is itself in doubt?”
Hmmm. Yet more anti-knowledge:
https://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/
Lichanos says
@90 CCPO
Time after time we have seen the denialists’ arguments fully attributed to an intentional dis-/misinformation campaign and ideological constraints on comprehension and interpretation. See Oreskes, et al.
Which publication of Oreskes are you citing here? I have read her discussion of the “consensus” and find it to be awful, simply awful. I have commented on it at length at my blog.
Your arguments, and those of people like you, seem to amount to: there is lots of evidence for our view; you don’t accept it; you’re wrong; why should we argue with you if you won’t be convinced by our evidence?
To which I would reply: your evidence is not convincing. Here we go again.
Regarding Weart, his title is, shall we say, celebratory. To someone who is not convinced, that seems like triumphalism. Each of us has his point of view. Shall we leave it at that and eschew the word ‘bias?’ It sounds pejorative, but it’s part of life.
…You actually have to show that some other cause is present. Occam’s razor applies: if over 100 years of science all points to anthropogenic warming, then we can be darned sure that’s the case.
This is not what Occam’s Razor leads to at all. His venerable argument was that if a simpler explanation exists as opposed to a convoluted one, the simpler one will and should prevail, assuming it is supported. One can see it as a medieval formulation of the value of the Do Nothing scenario or the Null Hypothesis. That is, the null hypothesis is that the earth has warmed a little, and that it is from natural causes that we do not fully understand. No unprecedented trend. YOU must prove the reverse. That’s what AGW is. It posits a human forcing mechanism. Plausible, but…
Your assertion about 100 years of evidence is merely circular logic.
Time after time we have seen the denialists’ arguments fully attributed to an intentional dis-/misinformation campaign …
Such campaigns did certainly exist when AGW was first proposed in the 1980s. If they are still going on, I certainly am not seeing their material. Most of my doubts about AGW developed by listening to scientists from GISS present their point of view and reading IPCC reports.
There is no other plausible attribution for climate changes, thus it is unreasonable to deny their attribution to human actions.
I hear this trope all the time. “You can’t get the warming without the increase in CO2.” “It’s the only plausible explanation…” Other possibilities exist:
1) The warming is not as severe as AGW folks say it is – that historical record again…
2) Even if it is, we don’t know it will continue…
3) Just because we can’t prove another explanation for the alleged warming doesn’t mean it doesn’t exist and is no reason to accept a weak explanation, except as a temporary aid to further investigation, perhaps.
4) Adopt the null hypothesis…
Lichanos says
@94 Doug B.
When you say researchers “…resort to ad hominem attacks whenever possible”, can you produce some evidence for that?
I should not have said “whenever possible.” I should have said “often.”
Is Robock calling Lindzen a liar in a public forum not a decent example? Many commenters here claim that skeptics are simply paid shills for the oil companies. Maybe they don’t count, but perhaps some of them are scientists making those charges. I won’t have to look very far for evidence, really.