How do we know what caused climate to change – or even if anything did?
This is a central question with respect to recent temperature trends, but of course it is much more general and applies to a whole range of climate changes over all time scales. Judging from comments we receive here and discussions elsewhere on the web, there is a fair amount of confusion about how this process works and what can (and cannot) be said with confidence. For instance, many people appear to (incorrectly) think that attribution is just based on a naive correlation of the global mean temperature, or that it is impossible to do unless a change is ‘unprecedented’ or that the answers are based on our lack of imagination about other causes.
In fact the process is more sophisticated than these misconceptions imply and I’ll go over the main issues below. But the executive summary is this:
- You can’t do attribution based only on statistics
- Attribution has nothing to do with something being “unprecedented”
- You always need a model of some sort
- The more distinct the fingerprint of a particular cause is, the easier it is to detect
Note that it helps enormously to think about attribution in contexts that don’t have anything to do with anthropogenic causes. For some reason that allows people to think a little bit more clearly about the problem.
First off, think about the difference between attribution in an observational science like climatology (or cosmology etc.) compared to a lab-based science (microbiology or materials science). In a laboratory, it’s relatively easy to demonstrate cause and effect: you set up the experiments – and if what you expect is a real phenomenon, you should be able to replicate it over and over again and get enough examples to demonstrate convincingly that a particular cause has a particular effect. Note that you can’t demonstrate that a particular effect can have only that cause, but should you see that effect in the real world and suspect that your cause is also present, then you can make a pretty good (though not 100%) case that a specific cause is to blame.
Why do you need a laboratory to do this? It is because the real world is always noisy – there is always something else going on that makes our (reductionist) theories less applicable than we’d like. Outside, we don’t get to perfectly stabilise the temperature and pressure, we don’t control the turbulence in the initial state, and we can’t shield the apparatus from cosmic rays etc. In the lab, we can do all of those things and ensure that (hopefully) we can boil the experiment down to its essentials. There is of course still ‘noise’ – imprecision in measuring instruments etc. and so you need to do it many times under slightly different conditions to be sure that your cause really does give the effect you are looking for.
The key to this kind of attribution is repetition, and this is where it should become obvious that for observational sciences, you are generally going to have to find a different way forward, since we don’t generally get to rerun the Holocene, or the Big Bang or the 20th Century (thankfully).
Repetition can be useful when you have repeating events in Nature – the ice age cycles, tides, volcanic eruptions, the seasons etc. These give you a chance to integrate over any unrelated confounding effects to get at the signal. For the impacts of volcanic eruptions in general, this has definitely been a useful technique (from Robock and Mao (1992) to Shindell et al (2004)). But many of the events that have occurred in geologic history are singular, or perhaps they’ve occurred more frequently but we only have good observations from one manifestation – the Paleocene-Eocene Thermal Maximum, the KT impact event, the 8.2 kyr event, the Little Ice Age etc. – and so another approach is required.
In the real world we attribute singular events all the time – in court cases for instance – and so we do have practical experience of this. If the evidence linking specific bank-robbers to a robbery is strong, prosecutors can get a conviction without the crimes needing to have been ‘unprecedented’, and without having to specifically prove that everyone else was innocent. What happens instead is that prosecutors (ideally) create a narrative for what they think happened (let’s call that a ‘model’ for want of a better word), work out the consequences of that narrative (the suspect should have been seen by that camera at that moment, the DNA at the scene will match a suspect’s sample, the money will be found in the freezer etc.), and they then try and find those consequences in the evidence. It’s obviously important to make sure that the narrative isn’t simply a ‘just-so’ story, in which circumstances are strung together to suggest guilt, but for which no further evidence can be found to back up that particular story. Indeed these narratives are much more convincing when there is ‘out of sample’ confirmation.
We can generalise this: what is required is a model of some sort that makes predictions for what should and should not have happened depending on some specific cause, combined with ‘out of sample’ validation of the model against events or phenomena that were not known about or used in the construction of the model.
Models come in many shapes and sizes. They can be statistical, empirical, physical, numerical or conceptual. Their utility is predicated on how specific they are, how clearly they distinguish their predictions from those of other models, and the avoidance of unnecessary complications (“Occam’s Razor”). If all else is equal, a more parsimonious explanation is generally preferred as a working hypothesis.
The overriding requirement however is that the model must be predictive. It can’t just be a fit to the observations. For instance, one can fit a Fourier series to a data set that is purely random, but however accurate the fit is, it won’t give good predictions. Similarly a linear or quadratic fit to a time series can be a useful form of descriptive statistics, but without any reason to think that there is an underlying basis for such a trend, it has very little predictive value. In fact, any statistical fit to the data is necessarily trying to match observations using a mathematical constraint (i.e. trying to minimise the mean square residual, or the gradient, using sinusoids, or wavelets, etc.), and since there is no physical reason to assume that any of these constraints apply to the real world, no purely statistical approach is going to be that useful in attribution (despite it being attempted all the time).
To be clear, defining any externally forced climate signal as simply the linear, quadratic, polynomial or spline fit to the data is not sufficient. The corollary which defines ‘internal climate variability’ as the residual from that fit doesn’t work either.
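As a rough illustration of that point (a toy sketch, not anything taken from an actual attribution study): fit a polynomial to a purely random series and see how little the quality of the in-sample fit tells you about predictive skill out of sample. Everything below, including the red-noise parameters and the choice of a quartic, is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": an AR(1) red-noise series with no trend and no forcing.
n = 150
series = np.zeros(n)
for t in range(1, n):
    series[t] = 0.7 * series[t - 1] + rng.normal(scale=0.1)

t_all = np.arange(n)
t_fit, t_test = t_all[:120], t_all[120:]

# Fit a quartic to the first 120 "years" (in sample), then extrapolate it.
coeffs = np.polyfit(t_fit, series[:120], deg=4)
rms_in  = np.sqrt(np.mean((np.polyval(coeffs, t_fit)  - series[:120]) ** 2))
rms_out = np.sqrt(np.mean((np.polyval(coeffs, t_test) - series[120:]) ** 2))

print(f"in-sample RMS error:     {rms_in:.3f}")
print(f"out-of-sample RMS error: {rms_out:.3f}")  # extrapolation error is usually larger,
                                                  # and grows the further out you go
```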
So what can you do? The first thing to do is to get away from the idea that you can only use single-valued metrics like the global mean temperature. We have much more information than that – patterns of changes across the surface, through the vertical extent of the atmosphere, and in the oceans. Complex spatial fingerprints of change can do a much better job at discriminating between competing hypotheses than simple multiple linear regression with a single time series. For instance, a big difference between solar-forced changes and those driven by CO2 is that the stratosphere changes in tandem with the lower atmosphere for solar changes, but the two are opposed for CO2-driven change. Aerosol changes often have specific regional patterns of change that can be distinguished from changes due to well-mixed greenhouse gases.
The expected patterns for any particular driver (the ‘fingerprints’) can be estimated from a climate model, or even a suite of climate models, with the differences between them serving as an estimate of the structural uncertainty. If these patterns are robust, then one can have confidence that they are a good reflection of the underlying assumptions that went into building the models. Given these fingerprints for multiple hypothesised drivers (solar, aerosols, land-use/land cover change, greenhouse gases etc.), we can then examine the real world to see if the changes we see can be explained by a combination of them. One important point to note is that it is easy to account for some model imperfections – for instance, if the solar pattern is underestimated in strength we can test whether a multiplicative factor would improve the match. We can also apply some independent tests on the models to try and make sure that only the ‘good’ ones are used, or at least demonstrate that the conclusions are not sensitive to those choices.
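Schematically, this amounts to a regression of the observed pattern onto the model-derived fingerprints, with the regression coefficients playing the role of the multiplicative scaling factors mentioned above. The sketch below uses synthetic stand-ins for both the fingerprints and the “observations” (real studies use model-simulated spatio-temporal patterns and a noise covariance estimated from control runs); it is only meant to show the shape of the calculation.

```python
import numpy as np

rng = np.random.default_rng(1)
npoints = 500                      # grid cells x times, flattened into one vector

# Hypothetical unit-amplitude fingerprints for three drivers (synthetic stand-ins).
ghg   = rng.normal(size=npoints)
solar = rng.normal(size=npoints)
aero  = rng.normal(size=npoints)
X = np.column_stack([ghg, solar, aero])

# Synthetic "observations": a known mix of the fingerprints plus internal variability.
true_beta = np.array([1.1, 0.2, 0.6])
obs = X @ true_beta + rng.normal(scale=0.5, size=npoints)

# Ordinary least squares for the scaling factors beta.
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
print(dict(zip(["GHG", "solar", "aerosol"], np.round(beta, 2))))
# A beta inconsistent with zero means that fingerprint is "detected" in the data;
# a beta consistent with one means the model's amplitude for that driver is about right.
```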
These techniques, of course, make some assumptions. Firstly, that the spatio-temporal pattern associated with a particular forcing is reasonably accurate (though the magnitude of the pattern can be too large or small without causing a problem). To a large extent this is the case – the stratospheric cooling/tropospheric warming pattern associated with CO2 increases is well understood, as are the qualitative land vs. ocean, Northern vs. Southern Hemisphere, and Arctic amplification features. The exact value of polar amplification is quite uncertain, but it affects all the response patterns and so is not a crucial factor. More problematic are results that indicate that specific forcings might impact existing regional patterns of variability, like the Arctic Oscillation or El Niño. In those cases, clearly distinguishing internal natural variability from the forced change is more difficult.
In all of the above, estimates are required of the magnitude and patterns of internal variability. These can be derived from model simulations (for instance from their pre-industrial control runs with no forcings), or estimated from the observational record. The latter is problematic because there is no ‘clean’ period where only internal variability was occurring – volcanoes, solar variability etc. have been affecting the record even prior to the 20th Century. Thus the most straightforward estimates come from the GCMs. Each model has a different expression of the internal variability – some have too much ENSO activity while others have too little, and the timescale for multi-decadal variability in the North Atlantic might vary from 20 to 60 years from model to model. Conclusions about the magnitude of the forced changes need to be robust to these different estimates.
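A toy version of how a control run is used: slice a long unforced simulation into segments and ask what range of multi-decadal trends internal variability alone can generate. The control series below is synthetic red noise standing in for a GCM pre-industrial run; the AR(1) parameters and the 30-year window are arbitrary choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1000-"year" unforced control run (AR(1) red noise as a stand-in for a GCM).
nyears = 1000
ctrl = np.zeros(nyears)
for t in range(1, nyears):
    ctrl[t] = 0.6 * ctrl[t - 1] + rng.normal(scale=0.1)

# Distribution of 30-year trends arising purely from internal variability.
seglen = 30
trends = []
for start in range(nyears - seglen):
    seg = ctrl[start:start + seglen]
    trends.append(np.polyfit(np.arange(seglen), seg, 1)[0])
trends = np.array(trends)

lo, hi = np.percentile(trends, [2.5, 97.5])
print(f"95% range of unforced 30-yr trends: {lo:+.4f} to {hi:+.4f} per year")
# An observed trend well outside this range is hard to ascribe to internal variability
# alone -- provided the model's variability is realistic, which is why robustness across
# several models' control runs is checked.
```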
So how might this work in practice? Take the impact of the Pinatubo eruption in 1991. Examination of the temperature record over this period shows a slight cooling, peaking in 1992-1993, but these temperatures were certainly not ‘unprecedented’, nor did they exceed the bounds of observed variability, yet it is well accepted that the cooling was attributable to the eruption. Why? First off, there was a well-observed change in the atmospheric composition (a layer of sulphate aerosols in the lower stratosphere). Models ranging from 1-dimensional radiative transfer models to full GCMs all suggest that these aerosols were sufficient to alter the planetary energy balance and cause global cooling in the annual mean surface temperatures. They also suggest that there would be complex spatial patterns of response – local warming in the lower stratosphere, increases in reflected solar radiation, decreases in outgoing longwave radiation, dynamical changes in the northern hemisphere winter circulation, decreases in tropical precipitation etc. These changes were observed in the real world too, and with very similar magnitudes to those predicted. Indeed many of these changes were predicted by GCMs before they were observed.
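Even a zero-dimensional energy-balance sketch captures why the aerosol layer was ‘sufficient’: a forcing of a few W/m² that decays over about a year, acting against a mixed-layer heat capacity, produces a global-mean dip of a few tenths of a degree peaking a year or two after the eruption. The numbers below (forcing amplitude, decay time, feedback parameter, heat capacity) are round illustrative values, not fitted ones, and this is in no way a substitute for the 1-dimensional and GCM calculations referred to above.

```python
import numpy as np

lam   = 1.2      # climate feedback parameter, W m-2 K-1 (~3 K per CO2 doubling)
C     = 2.1e8    # effective global-mean mixed-layer heat capacity, J m-2 K-1
F0    = -3.0     # peak volcanic aerosol forcing, W m-2 (illustrative)
tau_f = 1.0      # e-folding decay time of the aerosol layer, years (illustrative)

dt   = 86400.0 * 30           # one-month time step, seconds
nstp = 5 * 12                 # integrate five years
T    = 0.0
Tmin, month_of_min = 0.0, 0

for i in range(nstp):
    t_years = i * dt / (86400.0 * 365.25)
    F = F0 * np.exp(-t_years / tau_f)   # decaying aerosol forcing
    T += dt * (F - lam * T) / C         # C dT/dt = F - lam * T
    if T < Tmin:
        Tmin, month_of_min = T, i

print(f"peak cooling ~{Tmin:.2f} K, about {month_of_min} months after the eruption")
# Even this crude model gives a global-mean dip of a few tenths of a degree, peaking
# one to two years after the eruption -- qualitatively what was observed after Pinatubo.
```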
I’ll leave it as an exercise for the reader to apply the same reasoning to the changes related to increasing greenhouse gases, but for those interested the relevant chapter in the IPCC report is well worth reading, as are a couple of recent papers by Santer and colleagues.
Barton Paul Levenson says
If Lichanos’s response to my argument was posted here, I ought to post my reply. From my email to him:
“Point 5. The trend toward greater warming for the past 160 years is statistically significant at well beyond the 99.99% confidence level. That it may have warmed more in the past is completely irrelevant. Our economy and agriculture are exquisitely adapted to the unusually stable temperatures the globe has experienced for the last 10,000 years.
The bit about the submarines doesn’t hold up. The famous photo of U.S.S. Skate at the North Pole wasn’t actually taken at the North Pole. Here’s detailed information:
http://www.members.iinet.net.au/~johnroberthunter/www-swg/
Point 8. Ln CO2 accounts for 76% of the variance of dT 1880-2008 (NASA GISS), and you get the same figure 1850-2008 with Hadley CRU dT. Want the numbers?
Disputing the instrumental record is a failed tactic. That the Earth is warming is confirmed by land temperature station records, sea surface temperature readings (there are no urban heat islands on the ocean), satellite temperature readings (both RSS and UAH), borehole temperature reconstructions, melting glaciers and ice caps, tree lines moving toward the poles and up mountains, increasing drought in continental interiors (as predicted, BTW, by the GCMs), earlier blooming dates for flowers and flowering trees (e.g. the day the cherry blossoms bloom in Kyoto, which the monks have kept track of since 832 AD), earlier hatching dates for eggs of insects, frogs, fish, and birds, etc., etc., etc. It’s beyond intelligent dispute.
Point 9. It’s confirmed not just by GCM runs, but by observations over the last century and a half (see above). There’s no reason the Clausius-Clapeyron relation should suddenly stop working.”
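(For anyone wondering what a figure like the 76% in Point 8 corresponds to operationally, here is a minimal sketch: regress the annual temperature anomaly on ln(CO2) and report r². The two series in the example are short synthetic placeholders, NOT the GISS or Hadley/CRU records, which would have to be substituted in to reproduce the quoted number.)

```python
import numpy as np

def variance_explained(co2_ppm, dT):
    """Fraction of dT variance explained by a linear fit to ln(CO2)."""
    x = np.log(np.asarray(co2_ppm, dtype=float))
    y = np.asarray(dT, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# Illustrative placeholder series only (not the observational record):
co2_demo = np.linspace(290.0, 385.0, 50)   # ppm
dT_demo = (3.0 * np.log(co2_demo / 290.0)
           + np.random.default_rng(3).normal(scale=0.1, size=50))
print(f"r^2 = {variance_explained(co2_demo, dT_demo):.2f}")
```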
Nick Gotts says
And finally, be careful about putting Ph.D’s on to high a pedestal. I’ve seen them make some pretty dumb mistakes, for which they were shown the door. – J Bob
*sniff, sniff*
Do I detect a whiff of sublimated envy and insecurity?
Buzz Belleville says
Good post.
I think what confuses some folks (skeptics) is that, since we don’t know everything (and since some would read this as saying we can’t know anything about AGW theory with certainty), we shouldn’t be doing anything. This helps clarify ‘why’ we don’t / can’t know everything with certainty.
But, to be effective in the public policy sphere, I do think the layperson’s narrative also has to include the fact that we do know some things with certainty or relative certainty. This would include what the post refers to as ‘laboratory’ observation. For example, we KNOW with certainty that GHGs trap the longer waves of radiant energy. This physical property of GHGs can be and has been proven repeatedly in a laboratory. We KNOW to an acceptable degree of certainty that the avg land-ocean global temp is increasing. These are observations that do not depend on a causative analysis. Ditto as to our knowledge of certain paleoclimatic events. And I think the one that needs to be repeated is that we KNOW certain factors are NOT the cause of observed warming. Solar irradiance levels plus stratospheric temp observations allow us to rule out the sun as the ’cause’ of recent warming. Ditto Milankovitch cycles, El Niño events, NAO cycles and volcanic activity. It looks like soon we’ll be able to place PDO cycles in that category of ruled-out causes as well. “Proving” the positive (that humans ARE causing GW) in this instance is a lot less certain than disproving the negative (that some natural phenomenon is NOT the causative forcing). Clouds seem to be the only potential natural forcing we cannot rule out with certainty at this time.
I just think the narrative debate needs to be tinkered with. We can admit what we don’t know with certainty (and indeed this will ultimately enhance the scientific community’s credibility), but we need to accompany that by pounding on those things that we do know.
Completely Fed says
Hank, you’re talking to a self confessed troll. trolls love seeing their name and claims repeated, and having people ask them to go on saying their stuff. You’re hooked, sir. Spit out the hook.
Completely Fed says
“This is a little different then what I said about the math models reflecting REALITY.”
But a physical model in a wind tunnel doesn’t reflect REALITY either: the real product is life size rather than scale, and it operates out in the real world, not within the confines of a wind tunnel.
Completely Fed says
“Frankly, the test on the model system shouldn’t match computations very well any more because the physical models can’t get enough digits right.”
This is an a priori argument:
1) The computations are wrong,
2) therefore they shouldn’t match the model
3) This proves the computations are wrong
without actually going anywhere toward showing that the computations can’t get enough digits right.
Completely Fed says
“THIS. Please, moderators, can you move all the BP talk and other stuff not related to the Gavin’s original article, elsewhere?”
Including, also, THIS.
After all, not related to Gavin’s original article.
Of course, you then get into the infinite regression problem and the eternal catch-22.
neil pelkey says
Bayesian analysis or thinking would require the possibility that the application of data would lead to a posterior different from one’s prior – conditional on the model of course. There is little evidence of that here at RC. Dirac is more appropriate than Bayes. Gavin, do you actually believe the “confidence” that AGW is real is about 90%, or is it more like 99.999999999%?
Ray Ladbury says
Steve Bloom,
I think it bears repeating that the models are even more essential in constraining the upper end of estimates of CO2 sensitivity. Throw out the models, and we can still rule out sensitivities below 1.5 degrees per doubling, but we cannot rule out values of 5.5 or even 6 degrees. And then, we’re really in the soup.
That denialists always attack the models is the surest sign that they don’t understand the science or the risk calculus. The models are the best tools we have for bounding uncertainty.
Jim Eager says
The self-proclaimed garden variety troll Lichanos sinks to reciting some of the usual septic canards:
Much of northern ice cap ice free in 18th century (historical literature)
The Arctic basin is a pretty large place. Although active exploration of the coastal waters north of Eurasia and North America began in the 16th century, most of the Arctic basin was in fact not explored until the 20th century, as is amply documented in that same historic literature. The Wiki capsule histories of the Northern Sea Route and the Northwest Passage cover the highlights pretty well.
The first successful multi-season transit of the Northern Sea Route was not until 1878-79, first single season transit not until 1932.
First multi-season transit of the Northwest Passage was not until 1903-1906, first single season transit not until 1944.
Note that both routes hug the coastlines of Eurasia and North America, respectively, circumventing most of the Arctic basin proper. There is nothing in the literature to document the assertion that “much of the northern ice cap ice free in 18th century” because no one had been there to document it.
Troll Lichanos then continues with another chestnut: north pole ice free in the early 1960s (see Navy photos of nuclear submarines at the pole)
No one has yet pointed to a single documented instance of a submarine surfacing at an ice-free pole. Not one.
Lots of pictures have been pointed to. Some of these show subs in open leads or polynyas surrounded by pack ice. These hardly fit the claim of an “ice free pole.” No photo of a sub shown in truly open water has been documented as being taken at the pole. Not one.
Bill says
On the original thread: it’s easier to focus on ‘attribution’ if we can define ‘change’ somewhat better. When do you consider the ‘change’ happened or became apparent?
Ray Ladbury says
Hi Neil,
I’ve looked at several ways of estimating confidence–some qualitative and some quantitative. I consistently get between 90 and 95% confidence that anthropogenic CO2 is the main culprit in the current warming epoch.
I’m not quite sure how to quantify the confidence we gain by simultaneously warming troposphere and cooling stratosphere. That ought to be diagnostic for a greenhouse mechanism, but of course 100% confidence is never possible.
Jim Eager says
Spit out the hook.
You’re right, CFU, I’m done with the digit that roared.
Ike Solem says
Yes, the Arctic is melting – that’s been known ever since the nuclear submarine data was released in the 1990s. See publications like Rothrock et al. (1999), “Thinning of the Arctic sea ice cover.”
Since 1999 further submarine data has been released, and IceSat measurements of the ice cap have also been incorporated:
Kwok & Rothrock (2009)”Decline in Arctic sea ice thickness from submarine and ICESat records: 1958–2008″
http://rkwok.jpl.nasa.gov/publications/Kwok.2009.GRL.pdf
That’s the long-term trend of decreasing ice thickness, well in line with climate model predictions. The collapse in ice extent in 2006 was, however, not predicted by climate models, but since a major factor was a shift in the wind field (which piled up the thin ice) – most likely an unpredictable fluctuation – that doesn’t invalidate the climate models, any more than a volcanic eruption would.
It’s odd that the fossil fuel lobby is claiming that ‘sea ice has returned to normal’ – by ‘normal’ they must mean the gradual thinning and shrinking of the ice cap under global warming, rather than the unusual steep drop in ice area in 2006.
P.S. @Ed – since nuclear is safe, you must be a supporter of the removal of nuclear accident liability caps for utilities and investors – what is that called? The Price–Anderson Nuclear Industries Indemnity Act? It’s similar in structure to the offshore oil spill liability cap – and was renewed in 2005 for a 20-year period. Interestingly enough, no solar, wind or biofuel producer has ever asked for an accident liability cap for their industry.
By the way, there should be an effort to get solar technology reclassified as nuclear technology in order to qualify solar plants for the $50 billion in nuclear guarantees that Congress and the White House are trying to push through (quietly, as a rider on the climate bill). Nuclear interests, after all, are trying to classify nuclear fission of uranium as a “renewable technology,” so why not?
Technically, the notion is sound. Fusion of light elements in the Sun’s core is the ultimate source of the radiation bath that Earth orbits through, after all. Those photons are the product of nuclear reactions, so it’s just nuclear energy traveling through space – and if you use photovoltaic systems to capture that energy, then you’re running a nuclear-powered energy system, are you not?
Think of how many gigawatt-scale solar arrays you could build with $50 billion – and it doesn’t take ten years to build one, as it does with a nuclear power plant.
Imagine if all that government nuclear money was available for solar energy R&D as well? Write your politicians today and ask them to reclassify solar energy as nuclear energy!
CM says
> shadow threads
Technically possible and occasionally being done already (cf. Unforced Variations 3). Probably takes some time and effort on the moderators’ part.
IMHO, doing this all the time would not be a solution, but a recipe for even more off-topic discussion. There would be more threads for the moderators to monitor, on topics they didn’t start and may not care or know about, and that would blur the focus of the site.
One exception: It might perhaps be helpful to consign rebunkers a la Lichanos to a special “Whack-A-Mole” thread, where we can play that game without diluting threads on new research?
Otherwise, the best technical upgrade for the moderators would be a big, pulsating “Kill” button. For the rest of us, self-restraint, and respect for our virtual commons.
(Yep, I know I’m a fine one to talk. Do as I say, not as I do…)
Hank Roberts says
> Bill says: 31 May 2010 at 10:09 AM
> … its easier to focus on ‘attribution’ if we can define ‘change’
Use the standard definition from statistics. This is a good explanation at high school level: http://moregrumbinescience.blogspot.com/2009/01/results-on-deciding-trends.html
Rod B says
ccpo (386), I assume you meant “they” instead of “you” in your comment, “This is what you are saying, dreck style:” since I personally have never said any of what follows.
The difficulty with the argument that skeptics are welcome but not denialists is that 99% of the time any skeptic asking questions or making contrary claims is defined as a denialist. So your distinction is moot.
I should not have included 100% of RC in my question. With all of the banned skeptics and denialists gone, some (I would say a small minority of the comments, though a large majority of the moderators’ lead posts) of RC would be scientific education, as Ray asserts.
Bill says
re ~417, Hank’s comment. I’ve been there, read that. I’m not sure why the high school jibe.
Maybe it’s my English, but the question was about ‘change from when?’ Is it 1840, 1970 or something else?
Hank Roberts says
Attribution vs. misattribution explained:
http://www.stthomas.edu/engineering/jpabraham/
hat tip to:
http://scienceblogs.com/deltoid/2010/05/monckton_is_wrong.php#more
Bill says
Gavin wrote “How do we know what caused climate to change” in his article. When did it occur? Simple question. Elsewhere one of you on RC talks about the last 160 years, and another about the last 30 years….. ??
Edward Greisch says
Amplifying 403 Buzz Belleville: Lawyers think that they can always trash anything a scientist or engineer says because we use confidence levels rather than being certain. The opposite is the truth. Certainty is the red flag of a charlatan. But just try explaining that to the average person or lawyer. The answer is unfortunately long term: everybody should have a degree in a hard science.
415 Ike Solem: The Price–Anderson Nuclear Industries Indemnity Act doesn’t say what you think it says. And the rest of what you say is nonsense as well, but I am not going to take the bait today. Just go read “Power to Save the World: The Truth About Nuclear Energy” by Gwyneth Cravens, 2007. It is a truthful book about nuclear power. This book is very easy to read and understand. Even an innumerate humanitologist could understand it. Gwyneth Cravens is a former anti-nuclear activist.
415 Ike Solem: I am quite sure you are not asking sincere questions.
Hank Roberts says
Hat tip to Greg Laden for a pointer to this attribution gem
http://watchingtheworldwakeup.blogspot.com/2010/05/city-creek-part-3-rocks-global-warming.html
—excerpt—
Some “climate change skeptics” point to these ancient happenings as evidence that the Earth’s climate has always been variable, will continue to be so, and will do so regardless of the influence of mankind. But that’s obviously the wrong takeaway. The right takeaway is that relatively small changes in atmospheric chemistry appear to be capable of creating positive feedback loops which lead to dramatic and catastrophic climate changes in remarkably short periods of times ….
Nested Tangent: I wonder, way back when people were starting to live in permanent settlements, and had to figure out basic sanitation engineering, were there Sanitation Skeptics? Imagine…
CAVEMAN-SANITATION-SKEPTIC (CSS): There’s no evidence that pooping in the well is causing any harm.
PE: Yes, but there’s more poop in our drinking water than ever before. And some people are starting to get sick. Maybe we should try pooping somewhere else while we figure out if more poop in the well is going to cause problems.
CSS: Show me the evidence! People have always occasionally gotten sick from drinking water. There’s no evidence that drinking-water illnesses are poop- I mean human- caused.
PE: Yes, but we know that ingesting poop makes people sick, and we know there’s more poop in our water than ever before. How about we just be a little conservative and try pooping a little less in the well.
CSS: But think of the economic impact! Effort spent digging latrines and walking farther to poop will impede our critical economic development of cutting down forests and slaughtering Pleistocene megafauna!…
Hank Roberts says
> Bill …. change from when?
Ok, you’ve read Grumbine. No offense intended, couldn’t tell from how you asked the question how much statistics you’d done. As you’ve worked through his exercises, you know as much as I do. I’m not clear what you’re asking, it seems so obvious, let me try to get a better idea.
The main post says “We can generalise this: what is a required is a model of some sort that makes predictions for what should and should not have happened depending on some specific cause ….”
If it’s fossil fuel use, look at the time span involved. Aerosols? We know the various kinds and how they’ve changed over time. The “from when” is chosen “depending on some specific cause ….” And the time spans are determined by looking at the record as Grumbine explains.
Andrew says
J. Bob #388: “#363, Andrew says “No, the models do not have to accurately reflect physics.”. This is a little different then what I said about the math models reflecting REALITY.”
I forgot to knock out the reality claim too. Most climate models are used for CONDITIONAL prediction:
“Before I draw nearer to that stone to which you point,” said Scrooge, “answer me one question. Are these the shadows of the things that Will be, or are they shadows of things that May be, only?””
For climate, it’s “may be, only” for most of the interesting computations. You compute the response to many forcing scenarios, only one of which might actually occur in reality. The rest of the computations are essentially fictional.
Even the agreement between the one path that does occur, and reality, for the most useful plant model used to make decisions, need not be simple, and it need not support the obvious interpretations – it need not even be close. A huge part of this effect is that climate information tends to be confounded with other information on small time and space scales, but physics and physicality are not similarly concentrated with climate information time and space scales. Another big part is that the general “climate response” is what if you hit the climate with whatever forcing and watch it respond. You would have to hit the climate with things like “white noise” to require use of the full climate response as your plant model. In reality, the climate forcing is very far from arbitrary – the cost of control precludes this; so the control input lives in a much lower dimensional function space than would “fully excite” the climate. In a linear climate response, this makes things obvious – you would only care about the projection of the response onto that space of control inputs. Climate has a lot of nonlinear effects so it’s more complicated than the linear case, but qualitatively, the fact that the climate forcing isn’t arbitrary means that you don’t expect to fully identify the climate response.
From a decision theory point of view, the parts of the climate response which are shielded from identification by costs and constraints on forcing are fair game for improving controller performance by various information theoretic hooks and crooks. You can and should shred that part of physics (and reality) in the cause of getting a better control policy.
In some sense this is like when you change your driving on a blind curve in a night storm. Part of the change is because you know there are various risks associated with blind curves, night, and storms. Part of the change is because you do not know exactly which of those dangers are real in your situation. This second part is a reasonable reaction even if some of the risk combinations you might imagine cannot actually be real – yes you could have a huge tree trunk down on the other side of the curve fully obstructing traffic, or you might have an out of control car coming from the other side of the curve, but you can’t have both. Yet you can drive well even if you don’t bother to reduce the relative weight of those two dangers due to their exclusion of each other – policy based on unreal models do not have to be bad. In this case you can see why – similar optimal control responses to situations which are quite different means that the optimal policy does not need to distinguish the situations from each other, nor even from unreality. When you do the same things to avoid threats you cannot observe, you don’t have to care about which scenarios you avoid, even some which are unreal.
I sort of covered this in the previous post where I pointed out the well understood benefits of “editing the physics” to remove artifacts and improve statistical performance. I could have, and probably should have, elaborated that the desirable deviations from physics could very well entail deviations from reality.
If you want to make this feel less exotic, what is going on is that what you DO is highly influenced by how far apart things are in control space (the ‘cost of control’) and the limits of your partial knowledge as to where you are in control space (‘cost of information in control space’). Information about what you KNOW about physics and reality lives in a different metric (‘cost of observation’) and that’s a lot different. I suppose you can look at this as a microcosm of the fairly traditional separation between engineering (DO) and science (KNOW), but here I mean it in a precise technical sense. Only when the cost of observation is relatively proportional to the cost of control would you expect the information set your decision depends on to be relatively complete in your overall information set.
By the way, the idea that physical models are useful as standards of reality is wrong, too. That idea is only tenable when a high degree of dynamic similitude is achieved (usually never in nontrivial modeling problems). Otherwise, physical models, analog computations, etc., all appeal to the same excuses for utility as digital computations. The reason digital computations usually win is that they are faster, cheaper, and more flexible.
Phil Scadden says
“Much of northern ice cap ice free in 18th century (historical literature)”
I suspect we are going to see a lot of this argument if 2010 sets new record for low ice extent. However a much more telling question would surely be
“When was the last time that there was evidence of open-sea plankton present at the north pole?” I know there has been sediment coring near the pole but couldn’t find this answer. Anyone got some pointers?
J. Bob says
#395 dhogaza, your comments about physical models, ship tanks & wind tunnels indicate you might want to take a course in experimental engineering. Here are some dimensionless parameters you might want to read up on before you make too many comments:
Ship hull design – Weber & Froude #’s
Aero design – Mach & Reynolds #’s
Thermal uplift convection – Grashof #
Radiation energy transfer – shape or form factors, albedo
JRC says
This may be a bit off topic. But I’m wondering whether, to get a message across to average people, scientists might want to use language common to more laypersons. For example, a doctor in a court of law is asked if his/her opinion is within a reasonable degree of medical certainty. Instead of letting denialists distract the layperson with the use of uncertainty – which exists even in medicine – get the message across that what is being observed and the science is within a reasonable degree of scientific certainty. Just an observation. I do think that many more people should be educated in the sciences to understand the context of what the scientists are really saying when they speak of the uncertainties. Not just for this issue, but the sciences are in my opinion very important for the future and progress of the human race.
P. Lewis says
Re Lucien (and Ray). The paper is here. It was published in 2009.
Toppy says
“Given these fingerprints for multiple hypothesised drivers (solar, aerosols, land-use/land cover change, greenhouse gases etc.)”
*where is the internal variability? Is it ruled out as a driver?
Rule it out for me.
Mal Adapted says
Envison…
ccpo says
Rod B says:
31 May 2010 at 12:27 PM
The difficulty with the argument that skeptics are welcome but not denialists is that 99% of the time any skeptic asking questions or making contrary claims is defined as a denialist. So your distinction is moot.
Oh, please. 1. You’re engaging in hyperbole. You’ve never run the numbers. 2. People here, in particular, are quite patient with questions, even from trolls, as is exemplified in the treatment of Lichanos. 3. You are leaving out the obvious point that there is no legitimate contrarian claim. 3b. I know of no peer-reviewed, published paper that calls into question the essentials of what we call AGW, so these contrary claims are bull poo-poo, and should not be entertained.
Anyone with a serious counter-claim is welcome to publish it and post here. Rhetorical question: why don’t they?
All that is a long way of saying, there are no true skeptics in the denialist camp, by definition. After all, a skeptic has no reason to deny since any legitimate question is, again by definition, not denial, but legitimate questioning.
Put another way, a true skeptic would not arrive here, or anywhere else, spouting any of the denialist talking points because they would have done the research already, come to understand the science as it stands, and would raise only legitimate questions.
That’s how you tell a skepic from a denier, and it’s why denialists get jumped on so quickly; they’re very obvious.
Cheers
Jacob Mack says
http://www.statsoft.com/textbook/multiple-regression/
neil pelkey says
Dear Ray L., Other than the necessary offhand reference to the quantum spin at RC, Dirac is relevant because a large series of pdfs will converge to a Dirac “distribution” if thin-tailed enough. That is, it is pretty much irrelevant whether the confidence levels are 90% or 60% or what have you; if you have enough streams of evidence in the same direction you converge to a point distribution. Perhaps physicists do this in their heads without formalizing it – hence my comment on Gavin’s belief system. Normal people see a series of 60% likelihoods and think the outcome is less likely, not more likely. You could also take the R. A. Fisher approach of combining probabilities: 20 streams with 80% confidence make one 99.99999 percent sure of the direction. But again, normal people use additive heuristics, not multiplicative or power functions, to assess relative probabilities. The real issue for many of us skeptics out here is how to parametrize the Ladbury “soup” distribution, and, as you have pointed out, that is a wickedly difficult task.
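(For concreteness, here is a minimal sketch of two of the combination rules alluded to above: simple multiplication of the “all streams wrong” probabilities, and Fisher’s method for combining p-values, applied to 20 hypothetical independent lines of evidence at 80% confidence each. The numbers are illustrative only; the combined figure depends strongly on which rule is used and, above all, on the independence assumption, which is exactly the hard part in practice.)

```python
import numpy as np
from scipy import stats

p_each = 0.2   # chance that any single stream points the wrong way
n = 20         # number of hypothetical independent streams

# Naive multiplicative view: chance that every one of the 20 independent streams
# is wrong at the same time.
print(f"all-wrong probability: {p_each**n:.1e}")

# Fisher's method for combining the 20 individual p-values into one.
stat, p_combined = stats.combine_pvalues([p_each] * n, method="fisher")
print(f"Fisher combined p-value: {p_combined:.1e}")
```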
J. Bob says
#392, et al. Andrew – I take it you like digital simulation, as do I. I’ve been using analog, digital and hybrid computers since the late 50’s. My last major simulation was on a CRAY (late 90’s), simulating thermal control of a medical diagnostic system. To get steady-state results, on a state-of-the-art CF system, it took about a 3-4 hour run. This was compared to an instrumented physical model and differences noted. The computer model provided a guide in placement of the modules, baffles etc. for very tight temperature tolerances.
However I don’t want to make a long dissertation of this, as I think Gavin is getting bored with this discussion. I don’t know about your world, but in mine, lives could have been at stake. My point again is that one uses ANY tool available to solve a problem, as efficiently and thoroughly as time and money constraints allow. And if there are concerns about shortcomings or problem areas, those also must be brought out.
As with any model, not all conditions are evaluated, ONLY the ones envisioned by the user, including abnormal and failure situations. Since you mentioned robust adaptive control, consider the case of the #3 X-15 using the Honeywell adaptive controller. During normal flight, the system worked well. Neil Armstrong commented that it took so much of the workload off him, he could enjoy the view. This was an analog system blending aero & reaction controls to provide an almost consistent A/C response over the flight envelope. On the last flight, an electrical failure caused the plane to yaw, coming down at almost 90 deg. of sideslip. This caused a Mach 4 spin, “confusing” the autopilot and dropping the loop gain, i.e. reducing the effects of the control actuators. So when the pilot partially recovered control, the plane went into an oscillation and ultimately a crash.
That sequence of events was never thought of. So we might not have thought of all the possibilities as far as climate modeling goes. As to when we will know, Gen. Kutuzov’s reputed comment after the battle of Moscow, “Time & patience, patience & time”, might be in order.
Another point I would like to make is, have you ever wondered how the Redstone, X-15, SR-71, Atlas, Titan and Saturn were designed virtually WITHOUT digital computers? Most of the calculations were done on John Napier’s bones (slide rules) or, for high accuracy, Friden calculators. In fact our Celestial Mechanics prof., J. Danby, would not let us use a computer (it was down most of the time anyway). Later, when we worked together on orbital rendezvous, I asked him why no computers. He replied he wanted us to get the “feel” of the system, which we did, after many weekends of manual calculations. It did instill an understanding of how a system worked, that many times looking at a list of numbers does not.
Some minor observations:
Speaking of computational orals, do you wonder if Teddy von Karman would have passed them?
How much would have science advanced if more knowledge had been shared by the Observers in China, India and western Europe?
How many control systems have you designed that have actually been used, or gone into production? You can take your pick of sub-orbital, manned (orbit or re-entry), Earth IR Scanner, Atmospheric IR simulator, thermal, refinery process, or stand alone internet sensor systems, for starters.
Hank Roberts says
> a true skeptic would not arrive here, or anywhere else, spouting
> any of the denialist talking points because they would have done
> the research already
And the remainder — people who haven’t done the research already and arrive with a lot of wrong ideas — get sorted out over time.
Some are kids. Some are emeritus. Some are recognized by people who know them as worth the time to get past the hot button words they come in using — this is really important.
“You’d worry a lot less what people think of you if you knew how rarely they did.” That’s true for climate bloggers like anyone else. Lots of people never heard of any of this stuff, or heard a few sentences.
Education — doesn’t happen everywhere overnight. Go out on the street and try asking a random dozen people what they know about climate change.
Heh.
So if someone makes it here we should treasure them til they prove they’re just here to waste time, not let our most impatient members run them off. YMMV. But as long as Gavin lets’em in, I figure they’re guests here just like the regulars are and we all owe it to our host.
Some just heard something (or a lot of stuff) somewhere including some mention of RC as a place to go (and to learn? or to troll? or just being curious?).
So we try to talk to them, to save the real scientists the trouble and time of dealing yet once another time again with the same stuff.
We try to do it calmly and using cites so people can learn to look things up for themselves.
And not to let the big red button get pushed, nor push it:
http://tvtropes.org/pmwiki/pmwiki.php/Main/BerserkButton
neil pelkey says
Phil Scadden, Just in case the Vermeer masterpiece was a little outside your comfort zone, this piece by Matt Powell is excellent.
The Latitudinal Diversity Gradient of Brachiopods over the Past 530 Million Years Matthew G. Powell
The Journal of Geology 2009 117:6, 585-594
http://faculty.juniata.edu/powell/
As my six-year-old says, Dr. Powell knows more about the planet than anybody.
Edward Greisch says
Record heat wave URL is posted by a climateprogres commenter:
http://www.guardian.co.uk/world/2010/may/30/india-heatwave-deaths
Rod B says
ccpo (430), your spin seems to have made you a bit dizzy. To my assertion that skeptics get defined as denialists, thereby eliminating any distinction, you accuse me of hyperbole, explaining how accepting you are of contrarian views – followed immediately with, and I quote, “…the obvious point that there is no legitimate contrarian claim…” You cite the treatment of Lichanos as a good example of your tolerance. There was a fair amount of discourse with Lichanos, though it soon became nasty, and eventually there were calls to ostracize him, then ban him – even by you (167: “…don’t publish…”).
You tried to refute my assertion by essentially repeating and confirming it!
Phil Scadden says
neil pelkey – I cannot see the relevance of your references either to Vermeer or to a 530 Myr brachiopod record. The question I was asking was: when was the last time we had an ice-free pole in summer? When this happened, you would expect open-ocean plankton to be present in sediment cores from the pole. I have seen studies covering this for the whole of the Quaternary or earlier, but not much detail on the Holocene. There has also been excellent stuff done on biomarkers for ice extent on the Arctic margin. What about the pole? As ice extent shrinks, it is what people will ask, and we are already seeing nonsense about 18th century historical records.
Doug Bostrom says
#435: Self-directed ad hominem, positive in nature, genuinely entertaining, virtually devoid of relevance to the topic here.
ccpo says
Hank, don’t you find it pretty obvious which are confused/yet ignorant and which are trolling/denialists? I think most of us do. The former ask questions and respond to shared info with a light bulb moment. The latter just keep banging away at the talking points. Easy to tell the difference. Also, exceedingly few fit into the former group.
Try not to forget that denialism is largely an ideological, not logical, stance (Oreskes, e.g.), which means the person is largely unteachable.
Completely Fed Up says
“And the remainder — people who haven’t done the research already and arrive with a lot of wrong ideas — get sorted out over time. ”
And they can go to the Start Here button, if their first port of call is Realclimate.
“Physician, heal thyself”.
Completely Fed Up says
“As with any model, not all conditions are evaluated, ONLY the ones envisioned by the user, including abnormal and failure situations.”
Incorrect. But irrelevant anyway. Even scale models are only put through tests that are envisioned by the user. In fact, even actual full scale prototypes are only put through tests that are envisioned by the user.
But the fact that you can run a simulation means that you can enact a genetic model of your product and let it enact whatever it thinks.
Again, Langton’s Ant.
Funny how that really basic computer modelling example is missed by you, who profess such solid knowledge of the sphere.
Completely Fed Up says
“*where is the internal variability? Is it ruled out as a driver?”
Internal variability cannot drive anything unless you’re in a conditionally stable system. There’s no energy source to drive.
After all, molecules are variably moving, yet this doesn’t result in a cinder block moving across the floor.
Completely Fed Up says
JRC, 428, that’s been done by, among others, Al Gore in AIT.
It has been perpetually pounced on as “alarmist” because it doesn’t talk about the uncertainties.
Your proposition would work on the layman except we have liars, thieves and charlatans who are willing to manipulate the layman to ensure their comfort.
Completely Fed Up says
“#395 dhogaza, your comments about physical models, ship tanks & wind tunnels indicate you might want to take a course in Experimental engineering.”
This seems to be a common thread of J Billy Beau J Bob’s arguments: when he hasn’t got anything to say, he accuses those who disagree of not knowing even the basics, thereby trying to ad-hom the argument by saying “you don’t know anything on this” rather than explain why he’s right.
And, oddly enough, many of the concern trolls on RC (on both sides of the line) fail to bring him up on it.
It’s all “How DARE you say that someone has to have a PhD before they can come on here!!!” yet when J Billy Beau Jim J Bob Junior uses the same argument, silence.
Ray Ladbury says
Neil Pelkey,
Yes, I understand that enough distributions multiplied together will be very sharply peaked. The question is whether–and how–we can multiply them together. Not all the evidence is independent, and we do not know all the conditional probabilities. Even so, if you look at the climate models that have been realized so far–none of which works with a low CO2 sensitivity–or if you look at the different lines of evidence for CO2 sensitivity–all of which favor a sensitivity around 3 degrees per doubling–or if you just look at the characteristics of the warming and stratospheric cooling, it’s pretty hard to come up with anything less than 95% confidence.
I would note that this is an issue not just for climate science. Scientific evidence is easiest to interpret comparatively–i.e. when you have more than one theory. However, once the alternative theories fall by the wayside, it is difficult to estimate the strength of support for the theory derived from ALL the evidence. And as we know, if your prior is nonzero everywhere except at +/- infinity, your posterior will also be nonzero unless your evidence is a Dirac distribution.
Off topic: I’m reading a biography of Dirac right now, as it happens. It’s pretty good. He was an odd duck. What can you say about a man who won a Nobel Prize in physics before he discovered girls?
Lawrence Coleman says
Eric: Done a little research on effects of glacial rebound and volcanic activity and found this site: http://www.heatisonline.org/contentserver/objecthandlers/index.cfm?id=5966&method=full
In a nutshell: in a large glacier the ice could have been 1 km thick, and that’s slightly less than 1000 tonnes/sq m holding the underlying rock in place. As you can visualise, that sort of pressure would do a pretty damn good job at that. When the glacier starts rebounding in earnest, only a fraction of that weight remains on the underlying crust, allowing the once (eons back) tectonically active region to be active once again. Another point that I didn’t pursue then was the fact that rising sea levels cause added weight and compression on the ocean floor, once again destabilising the entire geological framework of the crust. I think I can visualise that this could probably prove more than “just the straw that broke the camel’s back” – that the ramifications of glacial rebound and rising sea levels could well cause greatly increased levels of tectonic activity and/or tsunamis in many parts of the globe.
Christoffer Bugge Harder says
Dr. Schmidt, thank you for another fine post. I tried to post the below comment on Lubos Motl’s blog pointing to the very simple fact that unprecedentedness is just no argument, but something in it must have stepped badly on his toes, since he deleted it, blocked me immediately and called me a “liar”, “ideologically blinded idiot”, a “stalker of Svensmark” and, worst of all, being “insulting”. I am quite flattered to have provoked a response from Motl that appears to be over the top even by his standard, and which made him complain about harsh language and forget his own complaints about Realclimate “censorship”. I hope it is okay that I post it here; hopefully, someone may get a chuckle out of it: