How do we know what caused climate to change – or even if anything did?
This is a central question with respect to recent temperature trends, but of course it is much more general and applies to a whole range of climate changes over all time scales. Judging from comments we receive here and discussions elsewhere on the web, there is a fair amount of confusion about how this process works and what can (and cannot) be said with confidence. For instance, many people appear to (incorrectly) think that attribution is just based on a naive correlation of the global mean temperature, or that it is impossible to do unless a change is ‘unprecedented’ or that the answers are based on our lack of imagination about other causes.
In fact the process is more sophisticated than these misconceptions imply and I’ll go over the main issues below. But the executive summary is this:
- You can’t do attribution based only on statistics
- Attribution has nothing to do with something being “unprecedented”
- You always need a model of some sort
- The more distinct the fingerprint of a particular cause is, the easier it is to detect
Note that it helps enormously to think about attribution in contexts that don’t have anything to do with anthropogenic causes. For some reason that allows people to think a little bit more clearly about the problem.
First off, think about the difference between attribution in an observational science like climatology (or cosmology etc.) compared to a lab-based science (microbiology or materials science). In a laboratory, it’s relatively easy to demonstrate cause and effect: you set up the experiments – and if what you expect is a real phenomenon, you should be able to replicate it over and over again and get enough examples to demonstrate convincingly that a particular cause has a particular effect. Note that you can’t demonstrate that a particular effect can have only that cause, but should you see that effect in the real world and suspect that your cause is also present, then you can make a pretty good (though not 100%) case that a specific cause is to blame.
Why do you need a laboratory to do this? It is because the real world is always noisy – there is always something else going on that makes our (reductionist) theories less applicable than we’d like. Outside, we don’t get to perfectly stabilise the temperature and pressure, we don’t control the turbulence in the initial state, and we can’t shield the apparatus from cosmic rays etc. In the lab, we can do all of those things and ensure that (hopefully) we can boil the experiment down to its essentials. There is of course still ‘noise’ – imprecision in measuring instruments etc. and so you need to do it many times under slightly different conditions to be sure that your cause really does give the effect you are looking for.
The key to this kind of attribution is repetition, and this is where it should become obvious that for observational sciences, you are generally going to have to find a different way forward, since we don’t generally get to rerun the Holocene, or the Big Bang or the 20th Century (thankfully).
Repetition can be useful when you have repeating events in Nature – the ice age cycles, tides, volcanic eruptions, the seasons etc. These give you a chance to integrate over any unrelated confounding effects to get at the signal. For the impacts of volcanic eruptions in general, this has definitely been a useful technique (from Robock and Mao (1992) to Shindell et al (2004)). But many of the events that have occurred in geologic history are singular, or perhaps they’ve occurred more frequently but we only have good observations from one manifestation – the Paleocene-Eocene Thermal Maximum, the KT impact event, the 8.2 kyr event, the Little Ice Age etc. – and so another approach is required.
In the real world we attribute singular events all the time – in court cases for instance – and so we do have practical experience of this. If the evidence linking specific bank-robbers to a robbery is strong, prosecutors can get a conviction without the crimes needing to have been ‘unprecedented’, and without having to specifically prove that everyone else was innocent. What happens instead is that prosecutors (ideally) create a narrative for what they think happened (let’s call that a ‘model’ for want of a better word), work out the consequences of that narrative (the suspect should have been seen by that camera at that moment, the DNA at the scene will match a suspect’s sample, the money will be found in the freezer etc.), and they then try to find those consequences in the evidence. It’s obviously important to make sure that the narrative isn’t simply a ‘just-so’ story, in which circumstances are strung together to suggest guilt, but for which no further evidence can be found to back up that particular story. Indeed these narratives are much more convincing when there is ‘out of sample’ confirmation.
We can generalise this: what is required is a model of some sort that makes predictions for what should and should not have happened depending on some specific cause, combined with ‘out of sample’ validation of the model against events or phenomena that were not known about or used in its construction.
Models come in many shapes and sizes. They can be statistical, empirical, physical, numerical or conceptual. Their utility is predicated on how specific they are, how clearly they distinguish their predictions from those of other models, and the avoidance of unnecessary complications (“Occam’s Razor”). If all else is equal, a more parsimonious explanation is generally preferred as a working hypothesis.
The overriding requirement however is that the model must be predictive. It can’t just be a fit to the observations. For instance, one can fit a Fourier series to a data set that is purely random, but however accurate the fit is, it won’t give good predictions. Similarly a linear or quadratic fit to a time series can be a useful form of descriptive statistics, but without any reason to think that there is an underlying basis for such a trend, it has very little predictive value. In fact, any statistical fit to the data is necessarily trying to match observations using a mathematical constraint (i.e. trying to minimise the mean square residual, or the gradient, using sinusoids, or wavelets, etc.) and since there is no physical reason to assume that any of these constraints apply to the real world, no purely statistical approach is going to be that useful in attribution (despite it being attempted all the time).
To be clear, defining any externally forced climate signal as simply the linear, quadratic, polynomial or spline fit to the data is not sufficient. The corollary which defines ‘internal climate variability’ as the residual from that fit doesn’t work either.
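The lack of predictive value in a statistical fit can be seen in a toy example (a minimal Python sketch on entirely synthetic data, not anything from the attribution literature): fit a straight line to the first half of a pure random walk – which by construction has no underlying trend – and then see how badly the fitted line ‘predicts’ the second half, averaged over many realisations.

```python
import random

def fit_line(ys):
    # Ordinary least squares for y = a + b*x, with x = 0, 1, ..., len(ys)-1.
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    sxy = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    b = sxy / sxx
    return my - b * mx, b

def rmse(ys, a, b, x0):
    # Root-mean-square misfit of the line a + b*x over points starting at x0.
    errs = [(y - (a + b * (x0 + i))) ** 2 for i, y in enumerate(ys)]
    return (sum(errs) / len(errs)) ** 0.5

random.seed(0)
trials, n = 200, 200
in_tot = out_tot = 0.0
for _ in range(trials):
    # A pure random walk: no underlying trend, by construction.
    walk = [0.0]
    for _ in range(n - 1):
        walk.append(walk[-1] + random.gauss(0, 1))
    a, b = fit_line(walk[: n // 2])                # fit the first half only
    in_tot += rmse(walk[: n // 2], a, b, 0)        # how well the fit "explains" the data
    out_tot += rmse(walk[n // 2:], a, b, n // 2)   # how well it predicts the rest
print("in-sample RMSE: %.2f  out-of-sample RMSE: %.2f"
      % (in_tot / trials, out_tot / trials))
```

The fitted trend ‘explains’ the first half tolerably well, but its out-of-sample error is several times larger: the fit had no predictive content because nothing physical tied the data to a linear form.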
So what can you do? The first thing to do is to get away from the idea that you can only use single-valued metrics like the global temperature. We have much more information than that – patterns of changes across the surface, through the vertical extent of the atmosphere, and in the oceans. Complex spatial fingerprints of change can do a much better job at discriminating between competing hypotheses than simple multiple linear regression with a single time-series. For instance, a big difference between solar forced changes compared to those driven by CO2 is that the stratosphere changes in tandem with the lower atmosphere for solar changes, but they are opposed for CO2-driven change. Aerosol changes often have specific regional patterns of change that can be distinguished from those due to well-mixed greenhouse gases.
The expected patterns for any particular driver (the ‘fingerprints’) can be estimated from a climate model, or even a suite of climate models with the differences between them serving as an estimate of the structural uncertainty. If these patterns are robust, then one can have confidence that they are a good reflection of the underlying assumptions that went into building the models. Given these fingerprints for multiple hypothesised drivers (solar, aerosols, land-use/land cover change, greenhouse gases etc.), we can then examine the real world to see if the changes we see can be explained by a combination of them. One important point to note is that it is easy to account for some model imperfections – for instance, if the solar pattern is underestimated in strength we can test for whether a multiplicative factor would improve the match. We can also apply some independent tests on the models to try and make sure that only the ‘good’ ones are used, or at least demonstrate that the conclusions are not sensitive to those choices.
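In its simplest form, this amounts to a least-squares regression of the observed pattern onto the model fingerprints, solving for one scaling factor per driver. Here is a minimal Python sketch with entirely made-up fingerprint numbers over six illustrative ‘regions’ (the real analyses use full spatio-temporal fields and weight by estimates of internal variability):

```python
import random

random.seed(1)

# Hypothetical fingerprints (made-up numbers): GHGs warm everywhere with
# polar amplification; aerosols cool mainly the industrialised regions.
ghg     = [1.5, 1.0, 0.8, 0.7, 0.9, 1.2]
aerosol = [-0.6, -0.8, -0.3, 0.0, -0.1, 0.0]

# Synthetic "observations": GHG pattern at full strength, aerosol pattern
# at half strength, plus a little noise standing in for internal variability.
obs = [1.0 * g + 0.5 * a + random.gauss(0, 0.05)
       for g, a in zip(ghg, aerosol)]

# Least-squares scaling factors via the 2x2 normal equations.
sgg = sum(g * g for g in ghg)
saa = sum(a * a for a in aerosol)
sga = sum(g * a for g, a in zip(ghg, aerosol))
sgo = sum(g * o for g, o in zip(ghg, obs))
sao = sum(a * o for a, o in zip(aerosol, obs))
det = sgg * saa - sga * sga
beta_ghg = (sgo * saa - sao * sga) / det  # recovered GHG scaling
beta_aer = (sgg * sao - sga * sgo) / det  # recovered aerosol scaling
print(round(beta_ghg, 2), round(beta_aer, 2))
```

The regression recovers scaling factors close to the true values (1.0 and 0.5) even though the two fingerprints partly overlap – which is exactly why distinct spatial patterns discriminate between drivers far better than a single global-mean time series can.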
These techniques, of course, make some assumptions. Firstly, that the spatio-temporal pattern associated with a particular forcing is reasonably accurate (though the magnitude of the pattern can be too large or small without causing a problem). To a large extent this is the case – the stratospheric cooling/tropospheric warming pattern associated with CO2 increases is well understood, as are the qualitative land vs. ocean, Northern vs. Southern Hemisphere, and Arctic amplification features. The exact value of polar amplification, however, is quite uncertain, although this affects all the response patterns and so is not a crucial factor. More problematic are results that indicate that specific forcings might impact existing regional patterns of variability, like the Arctic Oscillation or El Niño. In those cases, clearly distinguishing internal natural variability from the forced change is more difficult.
In all of the above, estimates are required of the magnitude and patterns of internal variability. These can be derived from model simulations (for instance in their pre-industrial control runs with no forcings), or estimated from the observational record. The latter is problematic because there is no ‘clean’ period where there was only internal variability occurring – volcanoes, solar variability etc. have been affecting the record even prior to the 20th Century. Thus the most straightforward estimates come from the GCMs. Each model has a different expression of the internal variability – some have too much ENSO activity, for instance, while some have too little, or the timescale for multi-decadal variability in the North Atlantic might vary from 20 to 60 years. Conclusions about the magnitude of the forced changes need to be robust to these different estimates.
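One way to make the role of a control run concrete: use a long unforced simulation to build up the distribution of, say, 30-year trends due to internal variability alone, and then ask whether a given observed trend falls outside that distribution. A toy Python sketch, with AR(1) noise standing in for a GCM control run and all numbers purely illustrative:

```python
import random

random.seed(7)

def ar1_series(n, phi=0.6, sigma=0.1):
    # Toy "control run": AR(1) noise standing in for unforced internal variability.
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + random.gauss(0, sigma))
    return x

def trend(ys):
    # OLS slope over x = 0, 1, ..., len(ys)-1.
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    num = sum((i - mx) * (y - my) for i, y in enumerate(ys))
    den = sum((i - mx) ** 2 for i in range(n))
    return num / den

# Distribution of 30-step trends in a long unforced "control run".
ctrl = ar1_series(3000)
trends = [trend(ctrl[i: i + 30]) for i in range(0, 3000 - 30, 30)]
mean = sum(trends) / len(trends)
spread = (sum((t - mean) ** 2 for t in trends) / len(trends)) ** 0.5

# A "forced" series: the same kind of noise plus an imposed trend of 0.05/step.
forced = [x + 0.05 * i for i, x in enumerate(ar1_series(30))]
print("forced trend exceeds 3-sigma of internal variability:",
      trend(forced) > 3 * spread)
```

The unforced trends scatter around zero; the forced trend stands well outside that scatter, which is the essence of ‘detection’. Notice that the answer depends on how much variability the control run produces – which is precisely why conclusions need to be robust across models with different ENSO or North Atlantic behaviour.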
So how might this work in practice? Take the impact of the Pinatubo eruption in 1991. Examination of the temperature record over this period shows a slight cooling, peaking in 1992-1993, but these temperatures were certainly not ‘unprecedented’, nor did they exceed the bounds of observed variability, yet it is well accepted that the cooling was attributable to the eruption. Why? First off, there was a well-observed change in the atmospheric composition (a layer of sulphate aerosols in the lower stratosphere). Models ranging from 1-dimensional radiative transfer models to full GCMs all suggest that these aerosols were sufficient to alter the planetary energy balance and cause global cooling in the annual mean surface temperatures. They also suggest that there would be complex spatial patterns of response – local warming in the lower stratosphere, increases in reflected solar radiation, decreases in outgoing longwave radiation, dynamical changes in the northern hemisphere winter circulation, decreases in tropical precipitation etc. These changes were observed in the real world too, and with very similar magnitudes to those predicted. Indeed many of these changes were predicted by GCMs before they were observed.
I’ll leave it as an exercise for the reader to apply the same reasoning to the changes related to increasing greenhouse gases, but for those interested the relevant chapter in the IPCC report is well worth reading, as are a couple of recent papers by Santer and colleagues.
Kevin McKinney says
“Deeper water around the coastlines”!!!???
Makes it sound like the latter wouldn’t change under “deeper water” conditions. Surely the commenter can’t have been that obtuse–?
The problem of sea level rise is a tough one, politically, because it is such a slow-mo issue by regular standards. If someone can muster worry about 2100, they are already in the minority.
According to Gwynne Dyer’s “Climate Wars,” though, there’s very good reason to worry about other effects first: a *much* drier Mediterranean basin & Middle East is a robust prediction across the model ensemble. Famine in Iran, anyone? (Remember, they might have nukes a decade or two hence.)
Speaking of drying, although the Himalayan glacier timeline got botched in the WG 2 report, the fact remains that we’re seeing stream flow problems in Nepal *now.* (Not sure how firm the attribution is from a scientific point of view.) India and Pakistan are both highly dependent on Himalayan glacial streamflows, and both populations are growing rapidly. And they both have nuclear weapons and a history of mutual bellicosity.
Drying in Mexico over the next few decades looks highly probable, too. What will the Arizona border look like then? (Dyer thinks there may be a transition from chain link to automated machine guns and landmines. Sounds melodramatic, perhaps, but given the climate (sorry!) of opinion today, it also doesn’t sound crazy if you extrapolate.)
Precipitation pattern graphics aren’t as easy to parse as mean temperature anomalies, but maybe we need to learn. Fast.
Ray Ladbury says
RalphieGM says, “The coal you refer to was originally plant material which once converted CO2 to C (carbon coal). So – burning coal returns the original CO2 to the atmosphere from whence it came. Simple conservation principle – we are not really creating CO2 – just re-animating it.”
I am sure the dinosaurs will be very happy about this development. The humans…not so much. Ralphie, work with me dude. How long have humans been on the planet? When did we develop all the infrastructure for human civilization? See the problem?
Lucien Locke says
Gavin,
Quick note …off topic sort of… an interesting article on results of research with results indicating a direct relationship between carbon dioxide emissions and global warming. Paper to be in June 11 edition of Nature. Short blurb at communique from Concordia University. http://news.concordia.ca/main_story/014941.shtml
Lucien
JCH says
“Off topic, but has anybody else noticed that BP’s video feed now shows a big crater with oil streaming out, seemingly where the riser used to be? I’m wondering if anyone here has heard about an intentional removal of the riser, a new seafloor leak or the like? …” – DB
OT response:
It’s the same far-end-of-the-riser shot they’ve been showing, from different angles, all along. It’s not in a crater. During Top Kill they pumped something like 31,000 barrels of heavy mud into the BOP. 90% of that squished out the top of the BOP and down the riser, and there it plumed out into the open ocean. Eventually some of it settled back to the seabed around the broken end of the riser and formed a large mound around it, which makes it look like it sank into a crater.
There are no seabed leaks. Gavin can have his buddies at NASA verify this fact. It would show up on the ocean surface like a teenager with two monster zits on his nose instead of just one.
Scott A. Mandia says
OT:
I just sent letters to the Western Washington University President, Provost, Dept. Chair and several staff at each office to notify them about Don Easterbrook’s “Hide the Incline” fraud. If you are unaware of this fraud please see:
http://hot-topic.co.nz/cooling-gate-easterbrook-fakes-his-figures-hides-the-incline/
http://hot-topic.co.nz/cooling-gate-the-100-years-of-warming-easterbrook-wants-you-to-ignore/
http://hot-topic.co.nz/cooling-gate-easterbrook-defends-the-indefensible/
http://hot-topic.co.nz/whose-lie-is-it-anyway-easterbrook-caught-red-handed/
http://scienceblogs.com/deltoid/2010/05/don_easterbrook_hides_the_incl.php
If any of you wish to do the same, the contact info appears below:
WWW: Office of the President Contacts: http://www.wwu.edu/president/Staff.shtml
WWU: Office of the Provost Contacts: http://www.wwu.edu/provost/affairs/staff.shtml
WWU Geology Dept. Chair: http://geology.wwu.edu/dept/faculty/babcock.html
Sincerely,
Scott A. Mandia, Professor of Physical Sciences
Selden, NY
Global Warming: Man or Myth?
My Global Warming Blog
Twitter: AGW_Prof
“Global Warming Fact of the Day” Facebook Group
Geoff Wexler says
#329 Daniel Goodwin
That remark is highly debatable. The main uncertainty was whether the greenhouse gas signal would be strong enough to break through the noise. Since experts such as Hansen had been forecasting it, why should there have been so much surprise?
In other areas of science the ability to predict a novel observation is regarded as a triumph, just on its own. Have climatologists been triumphalist about that success? It seems not. They set to work on the attribution problem to check whether the predicted outcome had really corroborated the science.
It seems like a stroke of luck for the solution of the attribution problem that the finger-prints have been discovered. Think counterfactually for a moment. What if they had not been discovered? Rising greenhouse gases would still have been a major cause of worry.
Attribution may be very important but it is only a part of this story.
CM says
ralphieGM #332 said:
> we are not really creating CO2 – just re-animating it.
In a Lovecraftian sense, that’s kind of apt.
:(
Ray Ladbury says
Lucien@353,
From the press release, the favored value for CO2 sensitivity in the work seems to be ~2.5 degrees per doubling. Well within the confidence interval favored by the evidence. At least it’s not another Schwartz.
J. Bob says
#348 CFU, do you seriously think that ALL the people involved on that program will be listed on Google? Back then I was a Jr. Engineer, however my immediate supervisor would go out to Edwards and had personally met with White, Walker & Armstrong.
If you have ever used math modeling, you would know that models reflect what you DO KNOW; physical models will bring out what you do not know, so you can upgrade the math model. A case in point was the “hot shot” wind tunnel at McDonnell A/C. The relief cap on the tunnel was made of heavy steel plate, designed to a safety factor of 2-3 times the expected max load. The first full test of the tunnel blew the cap off and dumped it 1/4 mile away. Seems heating a gas to a virtual plasma produces strange and novel effects.
If you have ever modeled fluids, you would understand the difficulty of trying to get it right.
John P. Reisman (OSS Foundation) says
#339 Lichanos the anonymous
Did your posts have substance? Or were they more of the insubstantial, irrelevant opinion you have been issuing?
I’ve had posts of mine not show up. Maybe I did not address the issue properly? Maybe I was saying something inappropriate to the thread. But I’m willing to admit that I can be too far off topic, or missing a point. Can you?
—
A Climate Minute The Greenhouse Effect – History of Climate Science – Arctic Ice Melt
‘Fee & Dividend’ Our best chance for a better future – climatelobby.com
Learn the Issue & Sign the Petition
Completely Fed Up says
re 349, well at least I get a reason. See: https://www.realclimate.org/index.php/archives/2010/05/what-we-can-learn-from-studying-the-last-millennium-or-so/comment-page-10/#comment-175638
Edward Greisch says
355 Scott A. Mandia: It looks like he made the Medieval times appear in the future as well. Quite a time travel trick! But Don Easterbrook, the geology professor, is retired, which means there is nothing the Provost, Dept. Chair and so on can do, doesn’t it?
Can you follow the money trail and find out who paid Don Easterbrook how much? After we know that, we need to attribute Don Easterbrook’s statement to money in the popular press. That could cost us money.
Andrew says
“#302 Andrew, no one said you rely exclusively on one or another, but it’s a general practice to cross check. I notice you said “almost” as far as Boeing using wind tunnels. Can you imagine the liability Boeing would have if they didn’t cross check with wind tunnel data.”
Yeah it would be about the same as what they would have with it. This is because they also have flight testing to check the computations. It’s not about liability or risk of being wrong. The computations are trusted. Back in 1994 they still had a physical model possibly because the wind tunnel was a sunk cost?
“However the point of the Machine Design article was that in this case the physical model trumped math model because the math model didn’t reflect reality. Did you know they still use the old wind tunnel at Langley. It’s cheaper to drive a car up there, put it on a tilt table. With the automated sensors (we had to manually read manometers when I was in grad school), in a few hours they have data at various speeds as well as different sideslip angles. Unless one has a system to transform the vehicle outline (CAD data) seamlessly to a flow program, one can spend a week just doing that.”
Uh, those programs have been around since before 1990. Google “CAD Aerodynamics Jamieson” for example.
“Doesn’t the Navy still use tanks to check their ship design?”
Nowhere near as much as previously. The investment in towing tanks is nowhere near what it is in computers. Engineers aren’t stupid; they are spending their design dollars on computation. Structural engineers design buildings against dynamic risks with computers, because models don’t do that job.
“But coming back to the main point. The math models have to reflect reality and test results. If they don’t, one has to find out why. As far as math models go I have personally been using them for over 50 years, since the X-15 days, and have no problem with them, as long as they reflect reality, which I’m not sure the current climate models reflect.”
No, the models do not have to accurately reflect physics. There are several reasons for this.
1. When you do climate, you average over a lot of “redistributive” effects which equilibrate over timescales shorter than those of interest and preserve the “accumulative” effects. You can actually get a worse result by preserving effects which will average out (this is called “stiffness” of the system of differential equations – an effect well known to and well understood by engineers since before I was born). This also leads to a related effect called “sub grid scale parameterization” on which physics can (and should) be bent to avoid introduction of meaningless computational artifacts. Since you have an aero background, you are probably familiar with ‘artificial viscosity’ as an example of this sort of thing.
2. We can get better answers by modifying the physics to suit the particular answer we are looking for. This overlaps a little with (1) but in fact goes further. Suppose you want to compute the lift and drag of an airfoil; you would think that you have to run a “flow code”. In fact, you do not. Wilcox’s work at MIT is an excellent example (Google: “wilcox unsteady airfoil balanced reduction”). In this approach, you actually project onto a submanifold which contains the information relevant to the answers to the questions you will ask, but throws out whatever isn’t – which includes physics which would be important on the time and length scales of interest were you to ask different questions. In other words, this methodology figures out a space which spans the answers to your questions and then only computes the physics which projects onto that space. There are also other approaches which have different motivations but have similar results (moment closure descriptions of statistical fluid mechanics, etc.) So no, you do NOT always have to use the relevant physics, unless you want to know “everything” about the trajectory through all time.
3. Purely statistical reasons. You are usually computing an ensemble of trajectories, and there are ways to make the ensemble a more efficient sampling process at the expense of physical fidelity – for example you can put in a small repulsive force between the individual trajectories in such a way that the ensemble more efficiently samples the space of realizations. Clearly, these forces are non-physical for each individual realization, but they let you get a better answer for the ensemble.
Now in fact, I don’t think climate scientists are using most of these opportunities; I think most of them are not that hardcore about scientific computation, since they spend a lot of time trying NOT to massacre the physics. However, that is a missed opportunity if you really want the best use of the computer.
P.S. Sunspots were observed long before the telescope. They used the “camera obscura”.
Until the telescope, nobody knew that those were on the sun. Even in 1607, Kepler himself mistook a sunspot for a transit of Mercury. Just a few years later (1610, 1611, 1612) with the introduction of the telescope, at least three observers had seen sunspots for what they are – spots ON the sun, and one guy who was at that time still a holdout for planetary shadows (Scheiner) who had the best drawing.
And the point was that when confronted with the telescope evidence that the Sun was not “perfect” and had “blemishes” there were people who refused to look through the telescope and see for themselves.
It’s a little harder to explain how climate computation works compared to how a telescope works, but it’s not out of the question; and more people should confront just how serious the effort has been before wondering about whether the computation has been done to their satisfaction. These guys are not really fooling around – climate computation has been for decades, a serious effort on the part of many substantially capable groups of scientists. I’d like to see more climate skeptics try and pass Ph.D. orals in computation.
Edward Greisch says
324 Frank Giger: I don’t need science to know that there is a problem with agriculture and GW. I live in the corn belt. We have had an extra 2 feet of rain in the past 2.5 years. Both planting and harvesting have become problematical because of the extra rain. Some fields are too muddy for a tractor or a combine to navigate. Seedlings washed away twice so fields have to be planted 3 times to get 1 crop in Mercer County. You should watch “Ag Day” to see the very strange checkerboard of flood and drought across the lower 48 states. It doesn’t look like what you theorists are predicting.
Somebody commented on RC recently that China is losing an area the size of Rhode Island to the Gobi desert annually. China was an exporter of corn. China now imports corn. How long do you think it will take the 1.3 billion people of China + the 1 billion people of India to eat up the US production? There is no longer a surplus anywhere. We are already down to days of available food, rather than years as it was in the 50s. It won’t take much to push us over the edge.
Doug Bostrom says
…nothing the Provost, Dept. Chair and so on can do, doesn’t it?
Take away his library card? Nah, why? He’s talking his way over the cliff of crumbling credibility quite on his own.
Doug Bostrom says
Barton Paul Levenson says: 30 May 2010 at 5:41 AM
…he emailed me claiming he tried to post in response twice but couldn’t get his post accepted.
Is he hiding his decline? With all the worthless dreck allowed on this site, he’s been singled out? I doubt it, and as far as comparing honesty goes what’s the data we have in hand?
Edward Greisch says
347 Completely Fed Up: Thank you. That is indeed what I meant.
350 CFU: Before Secretary of Defense Dick Cheney abolished the Mil Specs and Standards, we had something to hold the contractors to. When Dick Cheney became VP, he asked us to “trust” the contractors. As a level 3 Defense Acquisition official, I had learned long ago exactly how far contractors can be trusted. Then the W administration wanted us to ignore the differences between the drawings and the product and just sign the check. I accepted the early retirement because I couldn’t fight the president and the vice president.
I suspect that the standards for oil well equipment were likewise abolished by the W administration. Since it takes about 4 years for an administration to be “felt” completely at the mid level, the O administration hasn’t had time to re-instate any abolished specs and standards. In fact, that is an issue that will take Mr. Obama some time yet to understand. Obama is smart, but it is a complex subject in a field that is not his own, and Obama is not an engineer.
CTG says
Re 362 Ed Greisch
What is possibly more relevant is that Easterbrook gave his talk at the Heartland conference. This was attended by many current academics and serving politicians. So far, they have done nothing about Easterbrook’s fraud – the presentation is still there on the website. This makes the organisers and attendees of the conference complicit in the fraud. If any of your political representatives were at the conference, you may want to write to them and ask if they condone scientific fraud, and if not, ask them to publicly disassociate themselves from Easterbrook.
trrll says
#298 Lichanos says
“1. I agree. “All else being equal” is a very important phrase here. Lindzen puts it just this way. The earth is not this type of a system -it is a dynamic system. More CO2 will tend, however, to raise temperature to a point.”
At this point, this is little more than wishful thinking. Nobody has been able to come up with any kind of plausible temperature governor that would “kick in” to protect us from global warming, yet still be compatible with the climate record (it certainly has gotten very warm in the past). A pretty thin thread to hang your hopes upon.
“5. Very vague. Earth has warmed? Since when? Last 200 years? ”
Quibbling to dodge the question. The issue is the modern temperature increase that has accompanied the modern increase in CO2, not temperature rises in the distant past that likely had other causes–although it is certainly true that large temperature increases in the distant past argue against the “magic governor” hope.
“8. Non sequitur. I would say, “plausible that AGW is part of the warming observed, whatever magnitude we decide on in No. 5.” IPCC says highly likely that MOST of the warming in the last 150 years is AGW. They do not say ALL. And, again, this is only if No. 5 is demonstrated. Note, I only say plausible, because correlation does not prove causation at all. ”
However, correlation constitutes supporting evidence for a theory that predicts such a correlation. “Most” vs. “all” is standard cautious phrasing. Scientists never say “all,” because it is impossible to support–what if it’s 99.999%, that’s still not all.
“This is not a necessary conclusion following any of the above. Again, it’s a plausible hypothesis, but it’s support is weak, and I believe comes principally from GCM runs. Model runs, in turn, assume some of the points above are proven, and that the model properly treats the necessary system dynamics – a big assumption.”
I think that it has been pointed out to you before that the prediction that CO2 from human activities will warm the climate long predates GCMs. One might hope that these more detailed models would identify a “magic governor” that will kick in to save us from the predicted temperature rise, but decades of model refinement have failed to identify any mechanism that could work this way. But of course, models can always be better. So just as creationists keep believing in a “god of the gaps” who hides his miracles in those lacunae where physical and biological knowledge is incomplete, some people will doubtless continue to hang onto their faith in rescue by a “governor of the gaps” even as the coastal cities begin to flood.
Radge Havers says
Doug Bostrom @ 366
He may have gone over the top, but my guess is that if a key combination of letters buried in his text got blocked by the spam filter, he wouldn’t have the wherewithal or patience to set aside the b.s. and actually figure out how to correct the problem.
Ike Solem says
@Ed 50% of U.S. corn production currently goes into factory farm or feedlot animal operations. The rest is split between ethanol production (~20%), exports abroad (~20%), and all other uses (high fructose corn syrup, tortillas and chips, and fresh corn).
In 2005, the U.S. produced 4 billion gallons of corn ethanol and imported 15 billion gallons of refined fossil fuel products. In contrast, Brazil produces around 7 billion gallons of sugarcane ethanol per year, accounting for 50% of their domestic transportation needs (In the U.S. biofuels account for only a few percent of the total). The U.S. however uses 10 times as much energy per capita as does Brazil. This is primarily due to inefficient vehicles and so on – using better technology, it’d likely be very easy to cut U.S. energy demand in half with no reduction in quality of life or economic health.
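For what it’s worth, the figures quoted above can be sanity-checked with a few lines of arithmetic (a sketch using only the 2005-era numbers given in this comment; nothing here is an independent estimate):

```python
# Back-of-the-envelope check on the 2005-era figures quoted above
# (all numbers as given in the comment, in billions of gallons per year).
us_ethanol = 4.0          # U.S. corn ethanol production
us_fossil_imports = 15.0  # imported refined fossil fuel products
brazil_ethanol = 7.0      # Brazilian sugarcane ethanol
brazil_share = 0.50       # fraction of Brazil's transport fuel that is ethanol

# U.S. ethanol as a share of this (partial) liquid-fuel picture:
us_share = us_ethanol / (us_ethanol + us_fossil_imports)   # ~0.21

# Total Brazilian transport fuel demand implied by the 50% figure:
brazil_transport_total = brazil_ethanol / brazil_share     # 14.0
```

So on these numbers, U.S. ethanol is about a fifth of the quoted ethanol-plus-imports pool, consistent with the “only a few percent of the total” once the full (much larger) domestic fuel supply is included.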
This is interesting from the standpoint of energy, but for climate issues this is only half the story – the other involves how much fossil fuel is required to grow the corn or sugarcane and produce the ethanol. What is the barrels to bushels ratio? What about the fossil CO2 to biofuel ratio? In the optimal case, can fossil fuels be entirely eliminated from agriculture without losses in productivity?
To answer that, you have to look at fossil fuel energy use in agriculture – tractors, trucks, refrigeration, and synthetic fertilizer production. It’s not too hard to show that farms can implement renewable energy strategies and eliminate the need for fossil fuels – devote a few acres to solar panels and rely on biofuels to operate tractors, for example. It’s also possible to generate hydrogen from solar or wind electricity, and then use that hydrogen to convert N2 to NH3, hence making nitrogen fertilizer. Now, you have carbon-neutral agriculture – a key requirement for stabilizing the global climate.
Whether or not the rest of the biosphere and the oceans will remain carbon-neutral as the planet warms… it seems unlikely, doesn’t it? Warmer oceans hold less dissolved gas, and while the ocean is currently acting as a net carbon sink, absorbing several gigatons of CO2 from the atmosphere each year, that could change as warming progresses. The permafrost is a more certain source of new CO2 and CH4 emissions, and shallow Arctic seabed emissions are also possible as Arctic waters warm. The defrosting freezer tends to outgas.
As far as the deepwater spill – that’s an inevitable result of going after oil in more remote locations. The industry has known that this situation could arise for years:
To get approval for these projects, BP had to downplay the risks of failure as well as the consequences of failure, and that required a hefty dose of pseudo-scientific rationalization – something that the U.S. government agencies that partnered up with BP were happy to supply. How else did this well get a rubber-stamped environmental waiver? It’s not just BP – similar ’emergency plans’ for Shell in the Arctic were also approved in 2009. In reality, there’s no way to mitigate these risks, and hence the companies involved should bear the full costs – which will likely be well into the tens of billions.
Accident liability caps should be removed from the energy industry, period – if investors aren’t willing to bear the risks, why should the public?
Hank Roberts says
> couldn’t get his post accepted.
Make sure he knows to copy the text before trying, if his browser’s back-arrow doesn’t get him back to the text in the posting window.
If he can’t figure the filter problem out from the message he gets, tell him to read the last line of it and follow the instructions and they’ll help him out by looking at it.
They can’t list all the keywords the spam filter looks for or they’d just get worked around, but “speci alist” and “social ist” share a deadly string for example. So does most any mention of gam bling terms.
David B. Benson says
Understanding climate is an example of system identification:
http://www.eolss.net/EolssSampleChapters/C05/E6-43-12/E6-43-12-TXT-05.aspx#5.%20Selecting%20Model%20Structures
Be sure to read the last sentence.
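As a toy illustration of what system identification means in practice (a sketch I’m adding for concreteness, not something taken from the linked chapter): given only output records from an unknown first-order system, least squares recovers the system parameter from the data alone.

```python
# System identification in one line of algebra: given output data from an
# unknown first-order system x[t+1] = a*x[t] + w[t], estimate a by least
# squares -- a toy version of fitting a model structure to observed records.
import random

def identify(a_true=0.7, n=5000, seed=1):
    rng = random.Random(seed)          # fixed seed for reproducibility
    x = [0.0]
    for _ in range(n):
        x.append(a_true * x[-1] + rng.gauss(0.0, 1.0))
    # Least-squares estimate: a_hat = sum(x[t]*x[t+1]) / sum(x[t]^2)
    num = sum(x[t] * x[t + 1] for t in range(n))
    den = sum(x[t] ** 2 for t in range(n))
    return num / den
```

With 5000 samples the estimate lands within a few percent of the true parameter, without the identifier ever being told the model’s coefficients.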
Rod B says
ccpo (342) you missed my point. If all skeptics and deniers (using your word for clarity) were banned from RC whatever their credibility (from some to none) how would you describe RC other than how I did?
Rod B says
Edward Greisch (364), I too grew up in the corn belt. When were the last three or four times rainfall exceeded the norm by two feet over 2½ years? Due to what?
Jerry Steffens says
#333 RalphieGM
“The coal you refer to was originally plant material which once converted CO2 to C (carbon coal). So – burning coal returns the original CO2 to the atmosphere from whence it came. Simple conservation principle – we are not really creating CO2 – just re-animating it.”
It takes millions of years to convert a significant amount of CO2 into coal. In the long-term carbon cycle, CO2 is slowly put back into the atmosphere as coal deposits are exposed to the air by erosion and then oxidized, on the same time scale as coal formation. By digging up the coal, we greatly accelerate its exposure; by burning it, we greatly accelerate its oxidation. The result: a flux of CO2 into the atmosphere that is orders of magnitude greater than the natural flux.
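To put rough numbers on “orders of magnitude” (these are representative round figures of my own, not values from the comment above): fossil-fuel burning moves on the order of 9 GtC per year into the atmosphere, while natural erosion-plus-oxidation of buried organic carbon is commonly put at a few hundredths of a GtC per year.

```python
# Rough scale comparison (representative round numbers, assumed for
# illustration): anthropogenic fossil-carbon flux vs. the natural
# geologic flux from erosion and oxidation of exposed coal deposits.
anthropogenic_flux = 9.0      # GtC/yr, fossil fuel burning (assumed)
natural_geologic_flux = 0.05  # GtC/yr, erosion + oxidation (assumed)

ratio = anthropogenic_flux / natural_geologic_flux  # ~180x, i.e. >2 orders
```

Even with generous uncertainty on either figure, the ratio stays in the hundreds, which is what “orders of magnitude greater” means here.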
Philip Machanick says
Slightly OT: the WattsUpBots have invaded Grist and are talking rubbish about ice loss. Anyone interested in supporting me in countering this? On my blog my latest take on where we are at may be of interest to some.
Kevin McKinney says
DG 327,
In fairness to Lichanos, he emailed me claiming he tried to post in response twice but couldn’t get his post accepted. I don’t know what the content was.
I remember one “Bob FJ” trying a similar tactic, a year or so back. I’d call it “back channel trolling,” myself. I think it’s intended to disrupt the thread, or even the blog. Wonder if there’s any connection between “BFJ” and “L.”
Kevin McKinney says
Andrew–you are so right; not for nothing does one of my frequent blog antagonists consistently refer to climate models as “playstations.”
Scott A. Mandia says
I do not think Easterbrook is being paid for his fraud. He staked his claim on global cooling years ago and cannot let it go. I call this the Peter Duesberg Syndrome.
Yes, WWU can do something about Easterbrook. Although he is retired he is using their name and they still host his faculty Website, which is a snakepit of denialist pseudo-science and misrepresentations. Are you aware that there has been global cooling since 1999?
Scott A. Mandia, Professor of Physical Sciences
Selden, NY
Global Warming: Man or Myth?
My Global Warming Blog
Twitter: AGW_Prof
“Global Warming Fact of the Day” Facebook Group
Ray Ladbury says
Rod B asks: “If all skeptics and deniers (using your word for clarity) were banned from RC whatever their credibility (from some to none) how would you describe RC other than how I did?”
A climate science education website?
Completely Fed Up says
“359
J. Bob says:
30 May 2010 at 10:32 AM
#348 CFU, do you seriously think that ALL the people involved on that program will be listed on Google?”
Do you seriously believe we’ll take your word on your provenance on a blog forum?
Completely Fed Up says
“If you have ever used math modeling, you would know that they reflect what you DO KNOW,”
If you knew anything about computer models, you’d know what I mean when I say:
Langton’s Ant.
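For readers who haven’t met it: Langton’s Ant is a two-rule cellular automaton whose famous long-run behavior (roughly 10,000 steps of apparent chaos, then a repeating “highway”) was discovered by running it, not designed in. That is the point about models showing you more than you put in. A minimal sketch:

```python
# Langton's Ant: two trivial rules, emergent long-run structure.
# On a white cell: turn right, flip the cell to black, move forward.
# On a black cell: turn left, flip the cell to white, move forward.

def run_ant(steps):
    black = set()        # cells currently black (all others are white)
    x = y = 0            # ant position on an unbounded grid
    dx, dy = 0, -1       # facing "up"
    for _ in range(steps):
        if (x, y) in black:          # black cell: turn left, flip to white
            dx, dy = dy, -dx
            black.remove((x, y))
        else:                        # white cell: turn right, flip to black
            dx, dy = -dy, dx
            black.add((x, y))
        x, y = x + dx, y + dy        # step forward
    return black
```

Nothing in those rules mentions a highway; you only find out by running it, which is exactly the sense in which a model can tell you things you did not already know.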
Andrew says
David B. Benson #373: “Understanding climate is an example of system identification:
http://www.eolss.net/EolssSampleChapters/C05/E6-43-12/E6-43-12-TXT-05.aspx#5.%20Selecting%20Model%20Structures
Be sure to read the last sentence.”
Yes indeed, it SHOULD be considered as an application of system identification, but despite the dynamic meteorology crowd being relatively thick with Kalman filter wranglers, the climate community doesn’t seem to have taken this path.
The point about getting enough information for decision without necessarily eliminating all the system uncertainty is also critically important. Climate change is maybe the ultimate example of a big risk penalty for decision latency.
More to the point, one really wants to treat the CO2 problem as model adaptive robust control. This speaks to the original topic here of attribution. It might be heresy in climate science, but from the model adaptive control point of view, you really don’t care if you end up with accurate attribution (which corresponds to accurately assessing components in the cost of control in the model adaptive robust control picture). You care if you keep the system within bounds, and with the least cost – since you want robust control, you really want the solution to tolerate deviations from the plant model, as well as the control.

It’s why autopilots do not know how the plane is supposed to fly, they just know how to measure what it’s doing, and what control outputs they have; they constantly learn how to fly as they fly the plane. This is why autopilots can work despite damage to the aircraft, etc. They work despite not knowing exactly what they are controlling because they never cared that much in the first place.

Robust control is not about developing the finest possible understanding of the system, it’s about developing the least cost solution to controlling as wide a class of systems as possible (so you don’t really have to know which one you really are controlling).
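A minimal sketch of the autopilot point (my own toy example, not any real autopilot or climate code): a plain integral controller regulates a first-order plant to a setpoint without ever being told the plant’s parameters. It only watches the output and adjusts.

```python
# A controller that regulates an unknown plant without a model of it.
# Plant: x[t+1] = a*x[t] + b*u[t], with a and b unknown to the controller.
# Controller: integral feedback on the error -- it never learns a or b,
# it only accumulates corrections, yet it drives x to the setpoint.

def simulate(a, b, setpoint=1.0, steps=500, ki=0.1):
    x, u = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - x
        u += ki * error          # integral action: accumulate correction
        x = a * x + b * u        # unknown plant dynamics
    return x

# The same controller works across a family of plants it was never told about:
for a, b in [(0.5, 1.0), (0.8, 0.5), (0.3, 2.0)]:
    assert abs(simulate(a, b) - 1.0) < 1e-3
```

The loop at the end is the "robust" part: one fixed control law keeps a whole class of systems within bounds, which is the sense in which you don’t need to know exactly which system you are controlling.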
Since decision latency is thought to be critical in this problem (“alarm”) then people ought to really get serious about justifying decisions in the presence of system uncertainty as opposed to waiting until the last vote is in to call the election.
Hank Roberts says
So, why do you like this:
http://eosweb.larc.nasa.gov/EDDOCS/radiation_facts.html
as a replacement for Trenberth’s more recent and more detailed picture in
http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/TFK_bams09.pdf ?
Is it because the EDDOCS picture is in percent (not clear of what) rather than in energy units? Because it can’t resolve a difference of less than one percent between incoming and outgoing? Because it doesn’t show where or how a warming could be happening? Because it says “Heat energy is emitted into space, creating a balance” and so can be read to say the planet is in energy balance now?
Or is there something else about it you like better than Trenberth’s?
Just curious what’s so attractive about this. I think it’s a very simple picture that’s not meant to illustrate the same facts Trenberth published; it’s from an ‘educational’ section at NASA, from 2007.
ccpo says
ccpo (342) you missed my point. If all skeptics and deniers (using your word for clarity) were banned from RC whatever their credibility (from some to none) how would you describe RC other than how I did?
Comment by Rod B — 30 May 2010 @ 4:39 PM
Skeptics? Every good scientist is a skeptic. Why would we ban them? Don’t put words in my mouth.
Ditto what Ray said. Rod, it’s silly to claim eliminating dreck from useful dialogue is as you describe. You denialists just go around labeling things pejoratively because you know your target audience is susceptible to dreck. We know it, too. It’s time we acted on that knowledge by simply not allowing dreck into legitimate discussions.
This is what you are saying, dreck style:
AGWer1: New study shows ice volume in the Arctic hit an all-time low last year, and every year for the last four years.
AGWer2: I’m a little concerned about combining the various measurements, and the margin for error between them. I.e. using satellite, submarine and observational data to determine the decline.
Denialista: They all have uncertainties, so collectively they must be a real mess, each error building on another. AGW is junk science!
AGWer1: Actually, the uncertainties help us understand, and in a sense reinforce the validity, for even with the various uncertainties, the conclusions are all the same, and graphing them all together shows that the basic conclusion is definitely accurate.
That’s right. My concern isn’t that the data is wrong, thus meaning the ice isn’t in decline, but that it’s declining faster than imagined, so we really need to get a handle on this.
Denialista: No, error means you are wrong and really can’t prove anything at all. Science is junk.
AGWer2: Err… so AGWer1, as we were discussing, the new data is startling. Have you seen the trends on ice extent? Below 2007 as we speak. Something like THREE standard deviations. 2013, indeed… yikes.
Denialista: Oh, come on. It was normal just a month or two ago.
AGWer1: Not exactly.
Denialista: They’re probably counting water on the ice!
AGWer2: Uh, so, as I was saying, I think the physical observations from last Spring and the recent North Pole trek, showing the ice is of very poor quality and far more broken up than the satellite imagery can see, are a very serious issue. I think we need to push the gov’t to use some of its hi-res satellites to get accurate readings.
Denialista: See? Your data is always wrong, by your own admission. And now you want to bring in the government, and we’re supposed to trust that data more than the current data?
AGWer1+2: Aargh…. Out. Now. Go.
Denialista: See? It’s a conspiracy! You suppress the truth! Oh, the oppression!
But, yeah, Rod, you’re probably right that we should continue to prove the proven instead of taking substantive steps toward dealing with what is already known.
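The statistical point “AGWer1” makes in the dialogue above is real and easy to demonstrate: independent measurements, each with its own uncertainty, combine (by inverse-variance weighting) into an estimate tighter than any single one. A sketch with made-up numbers (the values below are purely illustrative, not actual ice-volume data):

```python
# Inverse-variance weighting: combining independent measurements that each
# carry their own uncertainty yields a LOWER combined uncertainty, not a mess.
import math

def combine(measurements):
    """measurements: list of (value, sigma); returns (best_estimate, sigma)."""
    weights = [1.0 / s ** 2 for _, s in measurements]
    total = sum(weights)
    mean = sum(v * w for (v, _), w in zip(measurements, weights)) / total
    return mean, math.sqrt(1.0 / total)

# Hypothetical ice-volume anomalies from satellite, submarine, field surveys:
est, sigma = combine([(-3.0, 1.0), (-2.5, 1.5), (-3.4, 2.0)])
```

The combined sigma comes out smaller than the best individual sigma, and the estimate stays firmly negative: three uncertain records that agree constrain the answer better than any one of them alone.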
Daniel Goodwin says
Lichanos, #298:
“More CO2 will tend, however, to raise temperature to a point.”
Wikipedia on the atmosphere of Venus:
“The CO2-rich atmosphere, along with thick clouds of sulfur dioxide, generates the strongest greenhouse effect in the Solar System, creating surface temperatures of over 460 °C.”
(BTW, I appreciate your detailed response, and apologize for my inflammatory rhetoric.)
J. Bob says
#363, Andrew says “No, the models do not have to accurately reflect physics.”. This is a little different than what I said about the math models reflecting REALITY. That is, the physical model, math model and test results on an actual system must match. If not, you better have a good explanation, or know a good liability lawyer.
In talking to a still working aero engineer, computational methods are still “weak” in the boundary layer – shock wave junctions, and in high turbulence areas.
I hate to tell you this, but there were astronomers (observers) in other parts of the world besides western Europe. They may not have understood everything, but they could record what they saw, which they did. Their observatories were every bit as good as, or better than, Brahe’s.
And finally, be careful about putting Ph.D’s on too high a pedestal. I’ve seen them make some pretty dumb mistakes, for which they were shown the door.
Steven Sullivan says
“At the risk of being a broken record, please get something like “shadow threads” I’ve mentioned here before, i.e., a simple way to move posts to alternate thread, so they can exist without degrading the SNR of the main thread. This was an important topic, but relatively few posts actually discussed it in any useful fashion.” –John Mashey.
THIS. Please, moderators, can you move all the BP talk and other stuff not related to the Gavin’s original article, elsewhere? And if someone starts a post with ‘off-topic, but…’, please, ask them to post it elsewhere?
And as for Lichanos, his response to BPL’s list does appear on the thread — it’s #298. There’s already one response to that (#369).
Hank Roberts says
Oh, wait.
> Much of northern ice cap ice free in 18th century (historical literature)
“Ice cap”? What source are you relying on
> and north pole ice free in and early 1960s (see Navy photos of nuclear submarines at th pole).
Where are you getting your beliefs, and why do you consider your source reliable? Are you taking the time to check what you read?
It’s easy to look this stuff up. If you’d bothered you might have posted more good information at https://www.realclimate.org/?comments_popup=2348#comment-176219 and fewer easily debunked errors.
Look at the white stuff around the submarines in the pictures:
http://www.navsource.org/archives/08/08578.htm
http://www.nautilus571.com/skate_surfaces_at_pole.htm
http://www.navsource.org/archives/08/0857805.jpg
http://www.navsource.org/archives/08/tn/0857805.gif
http://www.navsource.org/archives/08/tn/0858411.gif
Frank Giger says
@ Mr. Greisch:
I’m glad you’re in the corn belt. I suggest building your own silo – the biggest you can – as within your lifetime there will not be enough food for you and you will starve to death.
I know this because I read it on RC.
In a comment section on something written by a scientist that says we shouldn’t attribute too much in the weather patterns to climate change.
Again, somebody show me in the IPCC reports where it predicts massive starvation within the USA because of AGW in the next 30 years.
You can’t. Nobody can. Because it’s not there.
Andrew says
J. Bob: #388: “#363, Andrew says “No, the models do not have to accurately reflect physics.”. This is a little different then what I said about the math models reflecting REALITY. That is the physical model, math model and test results on a actual system, must match. If not, you better have a good explanation, or know a good liability lawyer.”
Frankly, the test on the model system shouldn’t match computations very well any more because the physical models can’t get enough digits right. That is probably as good an explanation as any for walking away from physical modeling.
“In talking to a still working aero engineer, computational methods are still “weak” in the boundary layer – shock wave junctions, and in high turbulence areas.”
Depends on exactly what you need to know. For example it’s very easy to point at supercritical airfoil design (e.g. Garabedian-Korn) which involves the effects you mention. Yes, you can make problems computationally inconvenient, but down the hall from me is a guy handing out time on this machine called ‘Blue Gene/L’ – (and I will use some of that, too). Kind of a big machine, that, and there’s more where that came from – even if Moore’s law is close to sputtering out. There are still hard problems in CFD, but a lot of things that were brutally difficult thirty years ago when I started out, are in the “been there, done that” bin.
“I hate to tell you this, but there were astronomers (observers) in other parts of the world, besides western Europe. They may not have understood everything but they could record them, which they did. Their observatories were every bit as good, or better then Brahe’s.”
Not news, this; Aristotle reported these observations, there are descriptions by Chinese astronomers, but the telescope was critical to the discovery of sunspots as solar features. (“Despite these early observations, it was only after the invention of the telescope, in 1609, that any real study of sunspots was possible.” – http://www.bbc.co.uk/dna/h2g2/A2875430)
“And finally, be careful about putting Ph.D’s on to high a pedestal. I’ve seen them make some pretty dumb mistakes, for which they were shown the door.”
In this case I know about a dozen of the Ph. D’s. in climate science whose work we are actually talking about from when I was in graduate school, (after I moved from the hard-aero CFD group to a climate dynamics group). And they have worked with most of the others in this picture. And I know quite a few engineers (having done my undergraduate at one of those small dedicated engineering schools). I would expect the work of my Ph.D. classmates to hold up as well as any of my engineering classmates. Frankly, most of the people who are reading this have flown on an airplane with a wing or turbine blade design based on work of my Ph. D. classmates, and driven over bridges designed, built, and maintained by my B.E. classmates. I actually have come across Ph.Ds. and engineers who couldn’t tie their shoes, but the ones we are talking about here, are not those ones.
Hank Roberts says
> please get something like “shadow threads” …
> i.e., a simple way to move posts to alternate thread
Hey John, know any programmers? Want to set up a fund to get it written?
I’d gladly throw some of my small money at such a project. SMOP, right?
Hank Roberts says
> “shadow threads”
Much wished for.
http://www.google.com/search?q=move+posts+to+another+thread+splitting+moderation
Never written?
dhogaza says
This, to me, is a great point. The kind of physical modeling beloved of “anti-model denialist” types is, after all, scaled modeling. Some small thing dumped into a wind tunnel etc.
Scaling up to the real thing … via …
model.
Full-scale testing is often on components, and how this fits into the completed product … models.
You can’t escape them.
I’m just a humble computer freak, but this reality, if I were in aerospace, would be exactly the motivation needed to move to a fully model-based result. Cut out the middle man – the wind tunnel on a 1:10 model (or whatever) which is extrapolated via model. Just model.
John P. Reisman (OSS Foundation) says
#390 Hank Roberts
In addition to your work, I called the Naval Historical Center and started collecting intel on that. The image that John Coleman and others have been using was most likely shot at the edge of the ice pack, not at the north pole.
http://www.navsource.org/archives/08/0857806.jpg
Coleman and that guy that was kissing up to O’Reilly last year were claiming that was at the north pole.
While there is a history of meetings at the north pole, the likelihood of large-scale ice-free conditions there is much lower. Next time I’m in DC I plan to dig up the records and get the lat/long for that shot and others.
—
A Climate Minute The Greenhouse Effect – History of Climate Science – Arctic Ice Melt
‘Fee & Dividend’ Our best chance for a better future – climatelobby.com
Learn the Issue & Sign the Petition
Edward Greisch says
371 Ike Solem: Yes, I know we have a lot of extra corn as far as Americans in isolation going hungry now. We could quit eating meat and eat corn, etc. But there are only 300 million Americans out of almost 7 billion people. China has a trillion dollars saved up with which it could buy corn if it has to. Care to guess at grocery prices if they do? And yes, the farmers could get a lot of renewable energy, but they tried that before and then abandoned windmills for electric water pumps. They have long memories and aren’t likely to try it again soon.
“Accident liability caps should be removed from the energy industry, period – if investors aren’t willing to bear the risks, why should the public?”
That won’t change until the lower 99% get smart enough to realize who is lying to them. That is why I am so one-track with my senators. I concentrate on climate change and ignore most other fiascoes.
Thanks to the other commenters who answered my questions.
John P. Reisman (OSS Foundation) says
#390 Hank Roberts
Just to give some more context: I looked for it about 3 or 4 years ago. When I cross-referenced multiple sources/reports, it looked like the photo was actually taken closer to Greenland, on the way to the north pole. But I still have not been able to get confirmation.
I could still be wrong, and I don’t know where my tracking info is for it. I just remember collecting a bunch of data and finding that it probably was not actually taken at the north pole.
Of course as others have pointed out, ice pack shifts and Arctic winds could clear the north pole of ice temporarily, but it would be nice to know just where it was shot.
—
A Climate Minute The Greenhouse Effect – History of Climate Science – Arctic Ice Melt
‘Fee & Dividend’ Our best chance for a better future – climatelobby.com
Learn the Issue & Sign the Petition
Hank Roberts says
John, this is old, old stuff being rebunked. Read the _Nautilus_ on how thick the ice was when they passed under the Pole; read the _Skate_ on how they located a thin enough spot to break through using their sonar, for example. Kelly over at Tamino’s found someone who had debunked the same story a long while ago, with quotes: “From the account of the USS Skate: … We surfaced near the North Pole in the winter through thin ice less than 2 feet thick…. The Ice at the polar ice cap is an average of 6-8 feet thick, [not anymore!!] but with the wind and tides the ice will crack and open into large polynyas (areas of open water), these areas will refreeze over with thin ice. We had sonar equipment that would find these open or thin areas to come up through … ”
http://tamino.wordpress.com/2009/09/10/arctic-stations/#comment-35391
The information on the rate of change is also easy to find.
—–
ICESat Survey Reveals Dramatic Arctic Sea Ice Thinning
Posted on: Tuesday, 7 July 2009, 14:20 CDT
Arctic sea ice thinned dramatically between the winters of 2004 and 2008, with thin seasonal ice replacing thick older ice as the dominant type for the first time on record. The new results, based on data from a NASA Earth-orbiting spacecraft, provide further evidence for the rapid, ongoing transformation of the Arctic’s ice cover.
http://www.redorbit.com/news/space/1717041/icesat_survey_reveals_dramatic_arctic_sea_ice_thinning/index.html
Steve Bloom says
Lichanos in #298: BPL’s summary of the case for AGW in #241 makes it “a plausible hypothesis, but it’s support is weak, and I believe comes principally from GCM runs. Model runs, in turn, assume some of the points above are proven, and that the model properly treats the necessary system dynamics – a big assumption.”
Wrong. Jim Hansen notes that the case for AGW is based first on paleoclimate, second on observations, and only thirdly on the models. The first two are more than enough to know we’re headed for big trouble soon. The models tell us about the timing and other important details.
Attacking the models as if they are the key is a fun hobby but doesn’t have much to do with the science.