It’s worth going back every so often to see how projections made back in the day are shaping up. As we get to the end of another year, we can update all of the graphs of annual means with another single datapoint. Statistically this isn’t hugely important, but people seem interested, so why not?
For example, here is an update of the graph showing the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs. The 2009 number is the Jan-Nov average.
As you can see, now that we have come out of the recent La Niña-induced slump, temperatures are back in the middle of the model estimates. If the current El Niño event continues into the spring, we can expect 2010 to be warmer still. But note, as always, that short-term (15 years or less) trends are not usefully predictable as a function of the forcings. It’s worth pointing out as well that the AR4 model simulations are an ‘ensemble of opportunity’ and vary substantially among themselves in the forcings imposed, the magnitude of the internal variability and, of course, the sensitivity. Thus, while they do span a large range of possible situations, the average of these simulations is not ‘truth’.
There is a claim doing the rounds that ‘no model’ can explain the recent variations in global mean temperature (George Will made the claim last month for instance). Of course, taken absolutely literally this must be true. No climate model simulation can match the exact timing of the internal variability in the climate years later. But something more is being implied, specifically, that no model produced any realisation of the internal variability that gave short term trends similar to what we’ve seen. And that is simply not true.
We can break it down a little more clearly. The trend in the annual mean HadCRUT3v data from 1998-2009 (assuming the year-to-date is a good estimate of the eventual value) is 0.06+/-0.14 ºC/dec (note this is positive!). If you want a negative (albeit non-significant) trend, then you could pick 2002-2009 in the GISTEMP record, which is -0.04+/-0.23 ºC/dec. The range of trends in the model simulations for these two time periods is [-0.08,0.51] and [-0.14,0.55] respectively, and in each case there are multiple model runs with a lower trend than observed (5 simulations in both cases). Thus ‘a model’ did show a trend consistent with the current ‘pause’. However, the fact that these particular models showed it is just coincidence, and one shouldn’t assume that they are better than the others. Had the real-world ‘pause’ happened at another time, different models would have had the closest match.
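For readers who want to see where numbers like 0.06+/-0.14 ºC/dec come from, the trend-plus-uncertainty calculation is just ordinary least squares with a 2-sigma bound on the slope. Here is a minimal Python sketch; the input series below is invented for illustration, not the actual HadCRUT3v or GISTEMP data, and (as noted in the text) no auto-correlation correction is applied.

```python
import numpy as np

def decadal_trend(years, temps):
    """OLS trend in degC/decade with an approximate 95% interval
    (2-sigma on the slope, no auto-correlation correction)."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    n = len(years)
    x = years - years.mean()                       # centered predictor
    slope = (x * (temps - temps.mean())).sum() / (x ** 2).sum()
    resid = temps - (temps.mean() + slope * x)     # residuals about the fit
    se = np.sqrt((resid ** 2).sum() / (n - 2) / (x ** 2).sum())
    return 10 * slope, 10 * 1.96 * se              # per decade

# Illustrative only: a noisy, weakly warming 1998-2009 series.
rng = np.random.default_rng(0)
yrs = np.arange(1998, 2010)
anoms = 0.006 * (yrs - 1998) + rng.normal(0, 0.08, len(yrs))
trend, ci = decadal_trend(yrs, anoms)  # degC/dec and +/- bound
```

With only a dozen points the uncertainty band is wide, which is exactly why short periods straddle zero so easily.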
Another figure worth updating is the comparison of the ocean heat content (OHC) changes in the models compared to the latest data from NODC. Unfortunately, I don’t have the post-2003 model output handy, but the comparison between the 3-monthly data (to the end of Sep) and annual data versus the model output is still useful.
Update (May 2012): The graph has been corrected for a scaling error in the model output. Unfortunately, I don’t have a copy of the observational data exactly as it was at the time the original figure was made, and so the corrected version uses only the annual data from a slightly earlier point. The original figure is still available here.
(Note that I’m not quite sure how this comparison should be baselined. The models are simply the difference from the control, while the observations are ‘as is’ from NOAA.) I have linearly extended the ensemble-mean model values for the post-2003 period (using a regression over 1993-2002) to get a rough sense of where those runs could have gone.
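That extension is nothing more than a linear fit over 1993-2002 extrapolated forward. A sketch on invented numbers (not the actual ensemble-mean OHC output):

```python
import numpy as np

# Hypothetical ensemble-mean OHC anomalies, 1993-2002, rising ~0.6/yr
# plus some wiggle. Units and values are placeholders for illustration.
years = np.arange(1993, 2003)
ohc = 0.6 * (years - 1993) + np.array([0.0, -0.3, 0.2, -0.1, 0.1,
                                       0.3, -0.2, 0.0, 0.1, -0.1])

# Fit a straight line over the 1993-2002 window...
slope, intercept = np.polyfit(years, ohc, 1)

# ...and extend it past the end of the available model output.
future = np.arange(2003, 2010)
extended = slope * future + intercept  # rough post-2003 continuation
```

The extrapolation inherits whatever trend the fitting window happens to have, so it is only a rough guide to where the runs "could have gone", as the text says.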
And finally, let’s revisit the oldest GCM projection of all, Hansen et al (1988). The Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%), and the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the current best estimate (~3ºC).
The trends are probably most useful to think about, and for the period 1984 to 2009 (the 1984 date chosen because that is when these projections started), scenario B has a trend of 0.26+/-0.05 ºC/dec (95% uncertainties, no correction for auto-correlation). For the GISTEMP and HadCRUT3 data (assuming that the 2009 estimate is ok), the trends are 0.19+/-0.05 ºC/dec (note that the GISTEMP met-station index has 0.21+/-0.06 ºC/dec). Corrections for auto-correlation would make the uncertainties larger, but as it stands, the difference between the trends is just about significant.
Thus, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world, but assuming (a little recklessly) that the 26 yr trend scales linearly with the sensitivity and the forcing, we could use this mismatch to estimate a sensitivity for the real world. That would give us 4.2/(0.26*0.9) * 0.19=~ 3.4 ºC. Of course, the error bars are quite large (I estimate about +/-1ºC due to uncertainty in the true underlying trends and the true forcings), but it’s interesting to note that the best estimate sensitivity deduced from this projection, is very close to what we think in any case. For reference, the trends in the AR4 models for the same period have a range 0.21+/-0.16 ºC/dec (95%). Note too, that the Hansen et al projection had very clear skill compared to a null hypothesis of no further warming.
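The back-of-the-envelope scaling in that sentence can be written out explicitly. All numbers come from the text; the linear-scaling assumption is, as noted, a little reckless:

```python
# Sketch of the sensitivity estimate from the Hansen et al (1988)
# Scenario B mismatch. Values as quoted in the post.
model_sensitivity = 4.2   # degC per CO2 doubling, 1980s GISS model
model_trend = 0.26        # degC/dec, Scenario B, 1984-2009
forcing_ratio = 0.9       # Scenario B forcings ran ~10% high
obs_trend = 0.19          # degC/dec, observed (GISTEMP / HadCRUT3)

# Assume the 26-yr trend scales linearly with sensitivity x forcing:
implied_sensitivity = (model_sensitivity
                       / (model_trend * forcing_ratio)
                       * obs_trend)
# ~3.4 degC per doubling, close to the canonical ~3 degC
```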
The sharp-eyed among you might notice a couple of differences between the variance in the AR4 models in the first graph and in the Hansen et al model in the last. This is a real feature. The model used in the mid-1980s had a very simple representation of the ocean – it simply allowed the temperatures in the mixed layer to change based on the changing fluxes at the surface. It did not contain any dynamic ocean variability – no El Niño events, no Atlantic multidecadal variability, etc. – and thus the variance from year to year was less than one would expect. Models today have dynamic ocean components and more ocean variability of various sorts, and I think that is clearly closer to reality than the 1980s-vintage models, but the large variation in simulated variability still implies that there is some way to go.
So to conclude: despite the fact that these are relatively crude metrics against which to judge the models, and despite the substantial degree of unforced variability, the matches to observations are still pretty good, and we are getting to the point where a better winnowing of models according to their skill may soon be possible. But more on that in the New Year.
Ray Ladbury says
Dave Salt, Barton Paul Levenson has provided several references that show evidence for positive feedback. I also commend to you the excellent review article by Knutti and Hegerl, along with the other papers by the same authors:
http://www.iac.ethz.ch/people/knuttir/papers/
Here you will find very strong evidence (at least 10 independent lines) that climate sensitivity is at least 2 degrees per doubling of CO2. It is encouraging to see that all of these lines of evidence favor a sensitivity of about 3 degrees per doubling, but if the analyses are wrong, it is much more likely that sensitivity is higher rather than lower.
And as to falsifiability, you really need to read up on philosophy of science. Strictly speaking, falsifiability applies more to relatively simple hypotheses rather than complex physical models. In such models, it is quite possible to falsify one aspect of the model and have the model remain mostly intact otherwise. Presumably, the tenet of the consensus model of Earth’s climate you would like to see falsified is the importance of CO2 as a greenhouse gas. That could certainly be done, but the easiest way to do it would be to come up with an alternative model that explains the data better than the current model. No such model has been proposed.
Andrew says
@jason: “My point was this:
Democrats could right now get enormous Republican support for a plan to dramatically reduce US CO2 emissions to a level that even Hansen would approve of.”
Putting aside for the moment what Republicans right now actually believe about climate science, let us consider how they are acting vis a vis the Democrats.
From http://www.politico.com/news/stories/1209/30083.html:
“Kirk is one of a growing group of Republican candidates flip-flopping away from cap and trade as they stare down more-conservative primary challengers. Republicans who once flouted their green bona fides are tacking right, to the point of questioning the science behind global warming, believing it’s politically toxic within the conservative base to favor anything Democrats want to do about the climate.”
This is not a political blog, though. I suppose the question of what sort of carbon tax might work is important and germane. However the politics of “right now” are probably less important for climate, since, painful as it is to consider, the politics of the current election cycle will not last as long as the climate problems. Maybe it is better to put aside Democrats and Republicans as they stand now (huge intradecadal oscillations there) and consider only the policy options which can persist long enough to be effective on climate time scales.
Jason says
#236, “just halting CO2 emissions will not be enough to prevent global warming induced climate change since a halt will not reduce the amount of extra carbon that has been added to the active carbon cycle on any meaningful human time scale.”
First, I don’t have any problem with the earth getting warmer, especially if it only gets a little warmer, and if the pace of that warming is slower than the IPCC forecasts. I only favor action to reduce atmospheric concentrations of CO2 in those situations where the uncertain benefits exceed the uncertain costs. I think that 350ppm in 2100 is an unrealistic pipe dream.
“Putting a price on carbon will certainly spur investment and research into developing methods to actually remove excess carbon from the atmosphere, but given that a good deal of energy is released by burning fossil carbon, not to mention the energy expended on digging up, pumping and refining that carbon, basic physics dictates that removing that carbon and sequestering it from the active carbon cycle will require an amount of energy of at least the same magnitude.”
That is not at all clear to me. In fact, I’ve seen outwardly credible estimates explicitly suggesting that it will be possible to build carbon sequestration technology that removes many times more carbon from the atmosphere than is emitted during the production of the required energy. Can you direct me to an argument explaining why this physically is impossible?
“Where will that energy come from?”
Personally, I think we should move aggressively back towards nuclear power, passing federal laws that streamline and consolidate the approval of new power plants and power lines. There is no reason why all safety and environmental concerns can’t be addressed by one or two panels.
“It took over two centuries from a near standing start to liberate the 300+Gt of carbon that we have added to the atmosphere and active carbon cycle. It is thus reasonable that it will take a similar time scale from a standing start to sequester enough C to reduce the carbon reservoir to a level that will avert the full consequences of the climate change that we have set in motion.”
What level would that be? I neither agree nor disagree with the reasonableness of your assertion, but there seem to be a fair number of credible researchers who believe otherwise.
“By diverting your comment into political accusations you avoided both answering my questions and dealing with the physical reality of our situation.”
You missed the point. You asked me “Do you have a plan for how we will draw down the increase in that reservoir once you are convinced?”
My response is:
We don’t have to wait until I am convinced.
There are options for limiting CO2 right now which, even if AGW is completely wrong [which I don’t think it is] will be a net positive for the economy.
We are not implementing these options because even the liberal wing of the Democratic party is not seriously committed to fighting climate change. Which is OK. But let’s not suggest that convincing the center and right of the scientific merits of AGW is somehow holding things up. As long as Democrats are sufficiently committed to fighting climate change that they are willing to negotiate away part of their agenda, we can do something serious about emissions right now.
You can convince me later.
John E. Pearson says
246 on falsifiability.
Dave, it is trivial to falsify the models that predict global warming. The models are based on physical theory going back to the early 1800s. All you have to do to falsify the models is to falsify any of the underlying physical theory on which they are based: the heat equation, the Navier-Stokes equations, the Planck distribution, quantum mechanics, the laws of thermodynamics, etc. Your claim that the models aren’t falsifiable is nonsense.
Andrew says
@ Ray Ladbury: One of the fallacies we are hearing from the denialosphere is the presentation of the choices available to us. We do not face a choice between “doing nothing” and “investing trillions”.
What would be wrong with INVESTING trillions? All that is needed to justify that is a decent rate of return. With what U.S. stocks did in the past decade, the bar is set pretty low at the moment. Let’s consider what the rate of return on power production will be in the future. What if it becomes clear that the climate is threatened – even by less than many current estimates? If you have the lead on a non-carbon power economy, that will be worth a huge amount. Purely in economic terms, there is huge upside as well as huge downside risk; exactly the kind of environment which rewards informed investors and punishes ostriches.
TRY says
BPL – I did see your post and mentioned the papers you linked to in my follow-up posts.
I didn’t respond directly to you because – when I asked you for papers supporting your model prediction successes post, you sent me a list that was primarily a paper from 1896 that was full of incorrect predictions. I questioned this, and you never responded, so it seems like a waste of time to respond to you directly, no?
Restating what I’ve said in other posts: the studies you link to show no change from 1997 to 2003, and there are no follow-up studies of any kind. 1970-1997 is interesting, but site-specific, clear-sky, etc.
Surely you understand, as I’ve said several times, that the really interesting thing is global radiation signature – that’s the core claim and something that presumably is worth monitoring and modeling.
Kevin McKinney says
Simon, you asked: “Would I be right in saying that even at the “coldest” point during the “current cooling phase” that global mean temperature was higher than at any point prior to 1998?”
Layman’s answer FWIW:
Yes, provided you were talking about annual mean anomaly for GISS or NCDC (aka Smith & Reynolds 05.) It’s likely that there was more variability at smaller intervals–for example, December 1997 may well have been warmer than some more recent December. (I’m not as sure about the Hadley data rankings.)
Jason says
#250: “As I tried to explain back in comment 162, GCMs are physical models (i.e. built on established physics).”
There is a long history of physical models accurately modeling the past, but being completely wrong. The Epicycles of early Copernican (and even Ptolemaic) astronomy come to mind.
If climate models were a deterministic application of well defined physics then:
1. They would agree with each other and
2. We would not have problems like the current “pause” in warming that Gavin alluded to.
Very few of the thousands of assumptions that go into climate models would somehow invalidate physics if they were proven false.
It is therefore an extreme stretch to suggest that physics implies the results of the climate models. The climate models depend on physics, and must be validated by successfully predicting events that occur after their publication.
[Response: Actually there is more scope for validation than that. You can also use a) information about the past that you weren’t aware of, or has only recently been published or b) relationships in existing data that had previously not been noticed or calculated. Either of these count as predictions in the methodological sense (i.e. it doesn’t just have to be about something that happens in the future). Cosmology would be in dire straits if that was the case! – gavin]
Ernst K says
I suppose there is one other possible explanation for not accepting hindcasts from physical models as an acceptable form of model verification: one might simply think the modelers are making the whole thing up, that they don’t really have physically based models, that the models have actually been calibrated to match the historical global surface temperature record (even though the modelers claim they haven’t), and/or that the presented results are not the actual output from the models.
But if that’s the case, just come out and say it and stop hiding behind the “hindcasts don’t count” smoke screen.
If I’m missing something, please tell us exactly why you think hindcasts are not an acceptable way to verify a GCM.
Edward Greisch says
102 Hank Roberts: You have the Earth at Mars times backwards.
Jason says
#254: Is it your contention the models are predicated only on: “the heat equation, the Navier-Stokes equations, the Planck distribution, quantum mechanics, the laws of thermodynamics, etc.”?
The models contain hundreds or thousands of additional unproven assumptions. These range from the response of upper tropospheric humidity to global temperature changes, to the impacts on cloud formation, to the historical forcing from aerosols (which to many skeptics appears to be the product of circular reasoning).
No skeptic is suggesting that physics is wrong.
To my knowledge, no credible climate scientist is suggesting that these basic principles of science are sufficient to imply the results of the AR4 model ensemble. Necessary, yes. Sufficient, no.
All of the principles you mention are likely to be right, yet the GCMs may be horribly wrong.
caerbannog says
Off-topic (Dr. Schmidt, feel free to s***can this post if you think that it’s too far off-base here)
Folks who consider Steve Mosher to be one of the more serious AGW skeptics might be interested in seeing him show his true colors here: http://biggovernment.com/2009/12/29/the-green-religion-and-climategate-interview-with-steven-mosher/
Nicolas Nierenberg says
re 250,
Thanks for the kind words Ernst. It is this kind of polite exchange that leads to greater understanding.
I was obviously only using an example to make a point. You may be surprised to learn that there are basic laws of economics that work fairly well. However in the real world there are so many confounding factors that it is difficult to build accurate models.
Similarly, the laws of physics are quite clear, but there are a huge number of confounding factors in climate. So there is not such a huge gap between climate models and economic models, in my opinion. If it were just physics then we pretty much would have been done when the JASONs built their model of the world.
So I will restate my opinion. The ability to model climate can only be measured by skill in measuring periods after the model run. Not in the ability to model periods known to the modeler.
TRY says
Geoff #213 – There’s a nice symmetry to your suggestion, but I don’t think it holds for the general case. In your scenario, constant temperature implies an equilibrium state. Going back to my thought experiment: steadily radiate an atmosphere with an IR wavelength that only CO2 absorbs, and wait for the system to reach equilibrium. This situation meets your constant-temperature requirement, but involves CO2 radiating less than it absorbs. I agree, the atmosphere won’t radiate at wavelengths that it *can’t* absorb, but it will radiate at wavelengths that it’s *not currently* absorbing. Because I describe an environment with an external forcing (specific-wavelength IR), I don’t think detailed balance applies, as that would only apply to a closed system at thermodynamic equilibrium? Or maybe I misunderstand the principle! Regardless, the thought experiment I describe seems clear and unambiguous.
Re the other issues – my point is that all this argument about second-, third-, fourth-degree effects is potentially endless. Yes, sheer volume counts for something, but the volume of potential second-degree effects of claimed CO2 impact is orders of magnitude higher than what’s actually been published. So why not spend some time looking at first-degree effects at a global scale? I’d rather leave the politics/business/etc. for another discussion, personally.
Nicolas Nierenberg says
Or as Gavin pointed out in a subsequent post, potentially by other information not known to the modeler. But that only works once, obviously.
John E. Pearson says
#257 wrote: “No skeptic is suggesting that physics is wrong.”
Nonsense. There are skeptics who argue the stupidest stuff imaginable. I’ve seen all sorts of stuff from skeptics. You don’t have to look long to find skeptics writing down mathematical “proofs” that purport to prove global warming from CO2 is impossible. If you want to falsify global warming and leave all the relevant physical theory intact you’ll have a much harder time. It’s worth noting that Popper wasn’t a scientist and didn’t think very hard about what constitutes a scientific statement. You can’t falsify the statement (in Popper’s sense) that a given coin has a probability p=0.5 of coming up heads. What you can do is perform a long sequence of experiments which will provide evidence either for or against the statement. Eventually you might convince yourself and any reasonable person whether the coin is fair or not, but you certainly can’t falsify the statement “The probability of this coin coming up heads is 1/2” with Popperian certainty. That doesn’t mean that a probabilistic statement is devoid of scientific meaning. I have yet to hear a skeptic who whines about falsifiability acknowledge that probabilistic statements are part of science. If skeptics would face this fact it would be easier to take them seriously.
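The coin-flip point can be made concrete with a small calculation. An exact binomial test never drives the p-value for p = 0.5 to zero; more trials only accumulate evidence. A sketch (nothing here is tied to any climate dataset):

```python
from math import comb

def two_sided_p(heads, flips):
    """Exact two-sided p-value for the hypothesis that a coin is fair.

    Counts all outcomes at least as far from flips/2 as the observed
    number of heads. The result can get arbitrarily small but never
    reaches zero, so 'p = 0.5' is weighed, not falsified outright.
    """
    mean = flips / 2
    dev = abs(heads - mean)
    tail = sum(comb(flips, k) for k in range(flips + 1)
               if abs(k - mean) >= dev)
    return tail / 2 ** flips

# 60 heads in 100 flips is merely suggestive; 600 heads in 1000 flips
# is overwhelming evidence against fairness, yet still not a logical
# refutation in Popper's strict sense.
```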
wil says
@249 and @228 (Gavin response):
I have used the NOAA-NCDC data and I am willing to believe that using GISS data the answer will be slightly different (as it would be different again using, for instance, the RSS satellite data). But that is not my point. The trend was strongly positive from 1970 till around 2000. Now suppose, for simplicity’s sake, that all temperatures from 2000 onward are exactly the same and remain at that same (high) level for several more years (for instance 10 years). Obviously, the trend value starting from 1970 would remain positive during all those years, and even maintain statistical significance for many years. But still, after 20 years of unchanged temperature, I suppose that most people would agree that a shift had occurred from rising temperatures to a flat line.
So basically my question is when does “insignificant, short term” change to something relevant? This is what I meant with the term “ignore”, it was in no way meant to be offensive.
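One way to see wil’s point concretely: on a hypothetical series that warms until 2000 and is perfectly flat afterwards, the trend computed from 1970 stays positive for decades, which is why the trend over the flat sub-period itself is the more informative diagnostic. The numbers below are invented for illustration:

```python
import numpy as np

def trend_per_decade(years, temps):
    """OLS trend in degC per decade (np.polyfit returns slope first)."""
    return 10 * np.polyfit(years, temps, 1)[0]

# Hypothetical: 0.17 degC/dec warming 1970-2000, dead flat thereafter.
years = np.arange(1970, 2021)
temps = np.where(years <= 2000, 0.017 * (years - 1970), 0.017 * 30)

full_trend = trend_per_decade(years, temps)          # from 1970: positive
flat_trend = trend_per_decade(years[years >= 2000],
                              temps[years >= 2000])  # from 2000: zero
```

The full-period trend dilutes the recent flatness, so both numbers are needed to describe what the series is doing.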
Edward Greisch says
103 Lynn Vincentnathan: Methane is CH4, no sulfur in it. Hydrogen sulfide is H2S. The sulfur comes from another source. Oxygen kills sulfur bacteria. Without oxygen in the ocean, sulfur bacteria take over and make H2S. Otherwise true, and H2S reacts with water to form H2SO4 [sulfuric acid] in your lungs, I think. H2S gas is poison and kills all humans equally.
http://www.sciam.com/article.cfm?articleID=00037A5D-A938-150E-A93883414B7F0000&sc=I100322
Yes, Homo Sapiens will probably go extinct if global warming continues.
Ron Taylor says
Jason says: “Democrats could right now get enormous Republican support for a plan to dramatically reduce US CO2 emissions to a level that even Hansen would approve of.”
What on earth are you smoking? That is the most improbable political prediction I have heard in years.
Thomas Lee Elifritz says
If climate models were a deterministic application of well defined physics then:
Computer models are deterministic numerical representations of physical models that execute on finite von Neumann machines. As such, the output they produce is used as data for testing and verifying the models against the empirical evidence – measurements and observations performed by instruments that are themselves produced using computer models of physical systems, namely semiconductor and condensed-matter physics and the associated electrical and mechanical engineering needed to produce the devices used in modern science.
Thus models are ‘tools’ and ‘methods’ used by scientists and engineers in the study of nature, and the application of the information and knowledge derived by that endeavor, and are a small part of the large repertoire of techniques available to YOU in your search for truth, profit or comfort.
Scientists and engineers are no different than you or anyone else, you have access to most if not all of the same resources they have – nature.
Jason says
#259: “I suppose there is one other possible explanation for not accepting hindcasts from physical models as an acceptable form of model verification: one might simply think the modelers are making the whole thing up, that they don’t really have physically based models, that the models have actually been calibrated to match the historical global surface temperature record (even though the modelers claim they haven’t), and/or that the presented results are not the actual output from the models.”
I don’t think that the models can reasonably be called physically based. They use physics, but the physics can be right and the models could still be wrong.
I do NOT believe that the modelers are just “making things up”. But it is undeniably the case that numerous assumptions and inputs go into these models.
Obviously the modelers’ decisions when making these assumptions are affected by their pre-conceived notions about climate sensitivity.
[Response: This is nonsense. If I set a parameter related to sea ice, I do it in order to improve the simulation of the sea-ice – usually on a seasonal cycle. The net effect on climate sensitivity is unknown. Same with cloud parameters, or land surface functions. These kinds of models just aren’t built the way you imagine – and we don’t test what the sensitivity is every time we change some minor thing. We certainly don’t optimise the model for some specific ‘pre-conceived’ notion of what sensitivity is. We just don’t. – gavin]
Moreover, I think that after these assumptions are made, the models are analyzed and debated in an environment where it is easier to get money, get published and get tenure if your models show higher sensitivity.
[Response: Complete BS. (Sorry, but really? you think tenure is granted or not because you have a higher climate sensitivity? Get real). The GISS model used to have a sensitivity of 4.2 deg C, the AR4 model had a sensitivity of 2.7 deg C. Can you discern a difference in our publication rate? or budget? This is beyond ridiculous. – gavin]
Even if no researcher ever allows these concerns to impact his model, models showing less warming are still less likely to get published, get discussed, or get used by the IPCC. These models are more likely to be abandoned, or to have their assumptions reconsidered.
[Response: No. No. No. Where is there any evidence for this in the slightest? This is simply a fantasy. If there were such models, don’t you think the ‘skpetics’ would be all over them? – gavin]
In summary:
Nobody credible has alleged that modelers are making things up BUT
Skeptics suspect that those modeling decisions which are NOT based on physics (and there are many) are biased towards showing greater warming.
Forecasting (but not hindcasting) will reveal (or not reveal) this bias.
[Response: You are arguing from a completely false premise that has no connection to reality. – gavin]
Timothy Chase says
TRY wrote in 201:
You mean that actually was you?
TRY wrote in 201:
Bull.
To take just one example, you asked in 91:
However, in 163 you state that your question is:
I pointed out that we can measure the delta, the difference between 1970 and 1997. But this is the difference in terms of spectra and only at specific points in that spectra. And it isn’t on a yearly, monthly or daily basis. Your initial question suggested a mere delta between years would be more than enough to satisfy you. It suggested that simply looking for the finger print of change in the spectra — based upon specific wavelengths in the same way that a fingerprint identification is often based upon a dozen or so points in the fingerprint — would be more than enough to satisfy you.
In 110 I gave you what you were asking for. In 91.
But in 163 the language has changed, such that what you are asking for is “global radiation signature” and “over time.”
The words “radiation signature” still sounds like a few points in the spectra would be more than enough to satisfy you. But looking back at the word “global” it already sounds like what you are looking for is broad spectrum — not just a few channels. And then when you state:
… so it would appear that what you are looking for is something approximating a total accounting of radiation entering the system and radiation leaving the system, such that we can show a surplus based upon that accounting. Perhaps you want to make sure that all that surplus energy isn’t somehow leaking out at a wavelength no one thought to look at? Are well-established principles of physics like Planck’s radiation law not well-established enough for you? Is our ability to explain how greenhouse gases interact with radiation – right down to the vibrational, rotational and rovibrational quantized states of molecular excitation and the principles of quantum mechanics – not quite enough?
By the time that we get to the phrase “over time” it begins to look like you aren’t looking for a mere delta between two different and widely spaced years, but for continuous measurement. Your concern with seasonal measurements would seem to suggest this as well. But then what day of the week did we take the measurements? Maybe all that excess energy we seem to have been accumulating slipped out on weekends.
No matter how much information we get you and no matter how advanced the technology gets, the information will be limited. It is the nature of the beast. There will always be one more level of detail that we could go. And you can point out that we haven’t made it to the next moving-the-goal-post level yet and say, not good enough.
*
No, you weren’t asking the same exact question, and your questions are not exact. Therefore you get different answers from different people who are either responding to different questions or who are responding to different interpretations of the same vaguely stated question. And at that point you can state that you are getting different answers to the same question and that therefore “there isn’t any consensus.”
I quote from 163:
Furthermore, you stated in 163:
… it would seem that one of your gambits is to dismiss anything that is the least bit tainted by being partly dependent upon models or theories. But anything that isn’t the direct reporting of sensory data may be regarded as theory-laden.
In fact, if one goes all Cartesian, you can doubt the existence of your own hand. And had Descartes been more rigorous in his application of doubt — relying only upon that which was truly indubitable, he would have realized that in a state of truly radical skepticism one has no theoretical foundation for distinguishing between thought and imagination, perception and hallucination or memory and fantasy. All knowledge is — at one level or another — theory-laden. And on this basis you will always have plenty of room in which to move those goal-posts so that you can continue to stare disbelievingly at your hand or study your belly-button.
TRY wrote in 201:
As I pointed out in comment 855 of the Unforced Variations thread:
… our “opponents” are into selling doubt, not dealing in facts, and as a matter of fact I have identified 32 different organizations (in the same comment) that were involved in both the denial campaign surrounding tobacco and the denial campaign surrounding anthropogenic global warming. In contrast, every major scientific organization and peer-reviewed journal which has seen fit to take a position on anthropogenic global warming has said that global warming is taking place, we’re causing it, and it’s serious.
Please see:
The Consensus on Global Warming:
From Science to Industry & Religion
http://www.logicalscience.com/consensus/consensusD1.htm
Dave Salt says
Thanks, Ray Ladbury (#251) and John E. Pearson (#254).
Yes, I’m well aware that there are positive feedbacks and that they have the potential to amplify the basic CO2 greenhouse mechanism; hence my reference to the ‘enhanced greenhouse effect’, as described in Section 1.3.1 of the IPCC TAR. What I was enquiring about was real-world evidence that proves the Earth’s climate system is dominated by them, which is what appears to be needed for current climate models to ‘simulate’ past recorded trends. Maybe the evidence I seek is buried within the long list of references you and others have indicated, but I was sort of expecting a more thoughtful explanation from people who, I assume, are extremely knowledgeable of the subject and can therefore explain the salient points to a layman like myself.
As my first degree was in physics, I’m somewhat familiar with the Scientific Method and the associated philosophy behind it, hence my reference to Richard Feynman’s description (http://en.wikipedia.org/wiki/Cargo_cult_science). Similarly, I’m also aware of the basic mechanism by which CO2 can act as a greenhouse gas (i.e. via infra-red absorption) and the underlying physics upon which it is based, along with the IPCC’s assessment that this alone would cause a rise of less than 2C in response to a doubling of CO2. However, the IPCC are clear that this mechanism alone is insufficient to raise temperatures to catastrophic levels, which is why the understanding of, and real-world evidence for, the dominance of positive feedbacks is so crucial to the current AGW narrative.
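The sub-2C no-feedback figure cited here can be checked on the back of an envelope. A minimal sketch, assuming the standard simplified CO2 forcing expression and an approximate Planck-only response (both values are approximate, and not taken from the IPCC text):

```python
# Back-of-envelope check of the no-feedback response to doubled CO2.
# dF = 5.35 * ln(C/C0) W/m^2 is the standard simplified forcing expression;
# ~0.3 K per (W/m^2) is an approximate Planck-only (no-feedback) response.
import math

forcing = 5.35 * math.log(2.0)               # ~3.7 W/m^2 for a CO2 doubling
planck_response = 0.3                        # K per (W/m^2), approximate
dT_no_feedback = planck_response * forcing   # ~1.1 K, well under 2 C
```

A doubling gives roughly 3.7 W/m² of forcing, hence about 1.1 K of warming before feedbacks.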
Note that I wrote my post because I’m rather reluctant to believe things simply on the basis of rhetoric and exaggerated claims from either side of the ‘debate’. I also don’t respect argument by authority (i.e. trust me, I’m a scientist) but am more than willing to be educated by clear and logical reasoning.
Mike Cloghessy says
caerbannog wrote…
@Mike Cloghessy#121: Is this raw data or has this data been adjusted, homogenized and re-adjusted.
A quick google-search on “Mike Cloghessy” turned up this:
By Mike Cloghessy on Aug 10, 2009 | Reply
Carbon offsets are like global warming…alarmists would have you pay for something that does not exist.
Anybody want to place odds on how likely it is that Cloghessy will actually look at any of the data that he’s been pointed to?
Since my response to Ron Broberg was 86ed… and likely this post will not see the light of day on this website, I thought I would try again to point out that I have looked at some of the data. The issue is what happens to the data after the raw data is analyzed. With the leaked docs from CRU, with the allegations by the Russians re CRU, the GHCN data of Antarctica and Australia, the redefining of “peer review”, and again (it appears) the problems I have posting on this website, it is becoming clear that the AGW camp has no desire for debate, no desire for opposing views, which by default weakens their “science”.
IMO the AGW scare of the 1990s (and early 2000s) will go the way of the coming ice age of the 1970s.
Until then… Hey caerbannog I have some carbon offsets for sale…you interested?
Ernst K says
Comment by Jason — 30 December 2009 @ 2:28 PM:
“The models contain hundreds or thousands of additional unproven assumptions. These range from the response of upper tropospheric humidity to global temperature changes, to the impacts on cloud formation, to the historical forcing from aerosols (which to many skeptics appears to be the product of circular reasoning).
No skeptic is suggesting that physics is wrong.”
First, I doubt that last sentence is correct, unless you put “credible” between “No” and “skeptic”.
But back to the first paragraph. Are these examples supposed to be just a snapshot of a much longer list or are you suggesting that these are the three biggest “assumptions” that could conceivably compensate (either individually or collectively) for the well established effect of radiative forcing due to greenhouse gasses such as CO2?
Now my understanding is that where physical processes are not based directly on extremely well established science (such as Navier-Stokes etc.) they are based on process models. For processes that need to be parameterized, for example because the relevant scales cannot be resolved (as with cumulus clouds or turbulence), the parameters are defined a priori (i.e. in advance) based on field or lab experiments. The parameters are not calibrated in order to match the historically observed surface temperature record.
Please set me straight if I’m misinformed about this. I have worked with Mesoscale atmospheric models (such as MM5 and GEM – which can be run with no calibration whatsoever) but never with a proper GCM.
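The a priori workflow Ernst describes can be sketched in a few lines. This is a hedged illustration only: the variable names, the numbers, and the `rate ≈ k / x` functional form are all invented for the example, not taken from any real parameterization.

```python
# Illustrative sketch: a process parameter is estimated from independent
# field data, then frozen before any climate simulation is run.
import numpy as np

# Step 1: fit k in rate ~ k / x to (synthetic) field measurements
field_x = np.array([0.5, 1.0, 1.5, 2.0])       # e.g. a plume-size proxy
field_y = np.array([0.21, 0.11, 0.07, 0.05])   # e.g. observed entrainment rate
k = float(np.mean(field_y * field_x))          # estimated once, from field data

def entrainment(x):
    # k is now fixed a priori; the surface-temperature record the model
    # will later be compared against never enters this fit
    return k / x
```

The structural point is that the target series (the temperature record) plays no role in fixing `k`.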
Alw says
Ken W (200)
I understand your references, but this post is about looking back on how predictions are faring, as the first line puts it: “It’s worth going back every so often to see how projections made back in the day are shaping up”.
It is disingenuous to include the hindcasts in a historical look back at how well the models have done, as the accuracy of the hindcasts was known at the time!
The graphs should only show the changes that have happened since the paper was produced, or at the least should include a line or point demarcating when the projection was made.
Terry says
Paul @ 218
Thanks very much for the refs. The early Manabe et al 1967 and 1964 papers are exactly what I was looking for. Some good holiday reading (perhaps I need to get out more). Cheers
Hank Roberts says
> 102, 260
> Edward
That’s not my paper–I pointed to the abstract and quoted a bit from it.
http://dx.doi.org/10.1016/S0032-0633(00)00084-2
That abstract refers to a hypothetical Earthlike planet at Mars’s current distance (looking at how far the habitability zone extends on either side of Earth’s orbit, and for what time spans). That made sense to me when I read the abstract; I imagine they’re addressing how long a hypothetical Earthlike planet could have held its heat and volatiles at that distance, at the far outside edge of the habitability zone, as they compare it to the other extreme at Venus. I’ve been looking for more on the subject.
I don’t see any sign anyone else thought it a mistake, looking at the brief links on citing papers. But I did scratch my head about it for a while before thinking it made sense as written.
Ernst K says
Re 263:
“Thanks for the kind words Ernst. It is this kind of polite exchange that leads to greater understanding.”
I was as kind as your comment made it possible for me to be.
“I was obviously only using an example to make a point.”
The problem was that it was a horrible example to use to make a virtually meaningless point.
“You may be surprised to learn that there are basic laws of economics that work fairly well.”
I understand this quite well, but you are overreaching to suggest that they can be reasonably compared with a physically based climate model. Perhaps you could point me to a macroeconomics model that doesn’t require calibration with historical macroeconomic data. This is not a slight on economics, it’s a reflection of the fact that the field is so far removed from the current limits of our understanding of physics.
“The ability to model climate can only be measured by skill in measuring periods after the model run. Not in the ability to model periods known to the modeler.”
This argument would be valid if the models were calibrated to match historical data, if the modelers were obfuscating their calibration and verification periods, or if you thought the modelers were fixing the results outright.
Your argument implies that the modelers (from all the modeling groups) are either unaware of, hiding, or lying about something pretty basic even though their code and input data is (at least for the most part) in the public domain.
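The calibration/verification discipline Ernst describes can be sketched in a few lines. The data and the one-parameter linear “model” here are entirely synthetic stand-ins, chosen only to show the structure of an out-of-sample test:

```python
# Split-sample validation: fit on a calibration period, score on a
# verification period the fit never saw. Entirely synthetic data.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
obs = 0.007 * (years - 1900) + rng.normal(0.0, 0.1, years.size)

calib = years < 1970            # calibration period
verif = ~calib                  # held-out verification period

coef = np.polyfit(years[calib] - 1900, obs[calib], 1)  # fit on calibration only
pred = np.polyval(coef, years - 1900)
rmse_verif = float(np.sqrt(np.mean((pred[verif] - obs[verif]) ** 2)))
# skill is judged by rmse_verif, computed on data never used in the fit
```

The point is structural: the coefficients see only the calibration period, and skill is scored only against data the fit never touched.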
Doug Bostrom says
Nicolas Nierenberg says: 30 December 2009 at 2:42 PM
“The ability to model climate can only be measured by skill in measuring periods after the model run. Not in the ability to model periods known to the modeler.”
Nicholas, what you say is only true if you assume that persons running the models are either dishonest or for some reason completely incapable of divorcing the construction of the model from observations.
If the output of a model is faithful to the observed behavior of the actual system being modeled, do you believe that this very success invalidates the model?
That’s what you’re effectively saying, unless your assumption is that dishonesty or rank incompetence is at play.
Do you believe the same of other models of complex, dynamic systems? No model is correct once it is validated against observations?
Thomas Lee Elifritz says
trust me, I’m a scientist
Actually, Dave, we know exactly who you are, exactly what your internet comment history is, and what that reveals about your knowledge of planetary science and radiative transfer. You are no stranger to climate denialism.
Jason says
#275: “‘No skeptic is suggesting that physics is wrong.’ First, I doubt that last sentence is correct, unless you put ‘credible’ between ‘No’ and ‘skeptic'”
Which skeptics are saying that physics is wrong?
“Are these examples supposed to be just a snapshot of a much longer list or are you suggesting that these are the three biggest ‘assumptions’ that could conceivably compensate (either individually or collectively) for the well established effect of radiative forcing due to greenhouse gasses such as CO2?”
Those are the first three that came into my head, but they are part of a much longer list (hence my use of the words: “The models contain hundreds or thousands of additional unproven assumptions.”)
That said, those are three good choices. Each of them individually could change the predicted warming by 25% or more. A majority of the assumptions are both less controversial and less impactful.
“Now my understanding is that where physical processes are not based directly on extremely well established science (such as Navier-Stokes etc.) they are based on process models. For processes that need to be parameterized, for example because the relevant scales cannot be resolved (as with cumulus clouds or turbulence), the parameters are defined a priori (i.e. in advance) based on field or lab experiments. The parameters are not calibrated in order to match the historically observed surface temperature record.”
Neither of these extremes is accurate. The GCMs were not designed solely to replicate historical temperatures. But neither are they restricted to using values determined a priori.
In many cases the GCMs have gradually evolved over long periods of time. Perceived weaknesses are addressed by changing assumptions and inputs. Not matching the historical record is, of course, a weakness.
In the eyes of many skeptics, myself included, some of the input data (like historical aerosols) have co-evolved with the models.
There is an interesting email written by Tom Wigley to Phil Jones on September 27th of this year: http://www.eastangliaemails.com/emails.php?eid=1016&filename=1254108338.txt
As you probably know, due to methodological changes in the collection of SST measurements, there were significant errors in the WW2 SST record.
Correcting for these errors naturally introduces some uncertainty.
In this email, Tom Wigley suggests adjusting historical temperatures by selecting values which fit the models.
Please note that I see nothing unethical about his behavior. Tom thinks his models are right. The temperature record is uncertain and appears to disagree with his models. He recommends fixing the value he has less confidence in, in this case the historical temperature record.
But if model output is allowed to influence historical data and/or model assumptions, then a sort of circular reasoning results, with assumptions and historical data being selectively chosen to make the models look good.
In this case, all the participants are aware of this potential (and there is no evidence that I am aware of that models have actually been used to modify the historical temperature record). But it would be easy to imagine a similar situation in which a more complicated network of collaborators prevents any individual participant from even recognizing the potential for circularity.
My aim is not to prove that climate models have been compromised in this manner. I honestly don’t know if they have.
But I suspect it, and I therefore require validation by forecast to allay my suspicions.
Climate modelers should demand the same.
Jason says
#269: “What on earth are you smoking? That is the most improbable political prediction I have heard in years.”
Is it improbable because Republicans do not want to repeal or reduce the income tax?
Or is it improbable because Democrats do not view climate change as a sufficiently serious issue to make such a trade?
I am saying the latter. Just as Republican Lindsey Graham has jumped all over Waxman-Markey thanks to the prospect of off-shore drilling, a great many Republicans would support climate change legislation if it resulted in the repeal or substantial reduction of the income tax.
Dave Salt says
Hello, Thomas (#281).
Yes, I remember our brief ‘discussions’ on several space policy boards and so am not surprised by the manner of your reply. Nevertheless, I’d still be interested to hear any reasoned reply you may be able to provide in response to my inquiry.
Ray Ladbury says
Dave Salt,
Chris Colose gives a pretty good treatment of feedback here, with some good references:
http://chriscolose.wordpress.com/2009/10/08/re-visiting-cff/
However, the bottom line is that it does not seem to be possible to get an Earthlike climate with a CO2 sensitivity less than ~2.1 degrees per doubling. Despite copious efforts at constructing a model with low sensitivity (very interesting for its inherent properties independent of its implications for climate change), no one has succeeded. That is a strongly constrained lower limit unless you can come up with something that overturns a whole boatload of evidence.
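As a rough illustration of the arithmetic behind such sensitivity bounds (a sketch only, using the standard feedback gain relation and an approximate no-feedback value of ~1.1 K; this is not Ray's calculation):

```python
# Feedback bookkeeping behind sensitivity numbers (illustrative values).
# Standard gain relation: dT = dT0 / (1 - f), with dT0 the no-feedback response.
dT0 = 1.1                       # K per CO2 doubling, approximate Planck-only value

# sensitivity for a few assumed net feedback fractions f
sensitivities = {f: dT0 / (1.0 - f) for f in (0.0, 0.3, 0.5)}

# net feedback fraction implied by a ~2.1 K lower bound
f_implied = 1.0 - dT0 / 2.1
```

On these numbers, a 2.1 K lower bound corresponds to a net positive feedback fraction of roughly 0.48.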
Now as to your cargo-cult reference, THAT, sir, is unwarranted and merely serves to call attention to your ignorance. Climate science has been subjected to an unprecedented level of external scrutiny, ranging from review of the science by National Academy panels, to reviews by societies of scientific professionals, and even hearings by hostile legislative panels. None of these bodies have dissented from the consensus position, and in fact the vast majority have endorsed it. You are impugning the integrity not just of climate scientists, but of the entire scientific establishment with your cavalier calumny. Did it ever occur to you that the reason it is so hard to “falsify” the science is because the evidence supports it?
Ken W says
ALW (276):
“It is disingenuous to include the hindcasts within a historical looking back at how well the models have done as the accuracy of the hindcasts was known at the time! ”
No it isn’t! Showing the fit of a GCM’s computed temperatures (whether in the future, in the past, or during the present) to actual measured values (which were not used for tuning) is a very useful gauge of model accuracy. If the model didn’t fit with the instrumental record, that would falsify the model or instrumental record. If the model didn’t fit with historic proxy records, that would either falsify the model or the proxy technique. In any case, those comparisons must be done to establish confidence in the models.
Donald Rumsfeld once said “we go to war with the army we have, not the army we wish we had”. Science is no different: we progress our knowledge based on what we have or can get. While it would be great to sit around doing nothing for another 20 or 30 years to demonstrate our current models’ accuracy to every skeptic’s satisfaction (if that were even possible), we don’t really have the luxury.
I wonder what skeptics would say if climate scientists didn’t compare model outputs with the instrumentation record (past, present, and future)?
Gavin shows 26 years of Hansen’s model future projection vs. the instrumental record, along with an additional (and useful) 20+ years of hindcast. And he shows 6-9 years of the AR4 future projections, along with an additional (and useful) 20+ years of hindcast. As subsequent years come to pass, I’m sure he’ll be glad to add additional points. But to accuse him of being disingenuous says more about you than him.
Completely Fed Up says
Jason: “If Democrats actually believed that failing to act decisively _now_ would result in catastrophe, then they would do so.”
Nope. No more than knowing that drinking alcohol is bad for you means nobody drinks alcohol.
It is SOMEONE ELSE who gets it in the shorts.
Just like short-term thinking in corporations on the stock market. How many companies outsourced to save money short-term to tick the options up? And how many found out they spent more for worse service some time later?
But did they get the money back from the ones who exercised the options?
No.
Yours is a commonly appearing current fallacy.
“Asking people (even the poor) to pay for the carbon they use isn’t such a terrible thing. It certainly won’t doom civilization as we know it.”
This is definitely NOT what the denialist alarmists’ caterwauling implies. Apparently even the thought of AGW mitigation will ruin the West and send us back to the caves.
Larry says
Re: In #239, [ “… Check out the acronym index … gavin ]:
The index is great! Hadn’t spotted it before. Thanks to Steve Fish, and the group for posting it.
It would also be wonderful to have a similar index (or listing) of the various models, both old and new, and their variants. It could perhaps provide some brief explanation of each, and a link to a relevant specific site.
I have also often wished that there were a spreadsheet (a wide one I suppose) that lists the various models, with columns for indicating what factors are or aren’t implemented in each (or perhaps even whether the implementations are simplistic or sophisticated).
Just a wish-list — I know you are all busy.
Ray Ladbury says
Jason, I am hoping that your implication that the surface station record was changed to reflect model output is unintentional on your part, because no one has in fact done this.
As to your implication that the models are tweaked to achieve agreement with the historical record, that is also incorrect. Any changes that are made must be motivated by the physics. It is valid to increase fidelity of the model, e.g. by adding a treatment of ocean currents around Antarctica. It is not valid to tweak that ’til you get best agreement with temperature.
Jason, have you ever done any dynamical modeling? Do you even understand how it differs from statistical modeling? Because your comments sure do not indicate any such understanding.
Jason says
“[Response: This is nonsense. If I set a parameter related to sea ice, I do it in order to improve the simulation of the sea-ice – usually on a seasonal cycle. The net effect on climate sensitivity is unknown. Same with cloud parameters, or land surface functions. These kinds of models just aren’t built the way you imagine – and we don’t test what the sensitivity is every time we change some minor thing. We certainly don’t optimise the model for some specific ‘pre-conceived’ notion of what sensitivity is. We just don’t. – gavin]”
Ernst asked if data that is not known a priori is used to tune climate models.
As I understand it, Gavin’s answer is: We do attempt to tune our models to observed data using information that is not available a priori BUT we never ever consider the impact on climate sensitivity or temperature.
I didn’t think that you calculated climate sensitivity at each step. If I wanted to accuse you of engineering modelE specifically to maximize climate sensitivity, I would have. I do not believe this is the case and did not mean to imply it.
BUT surely it is reasonable to suppose that in the process of adjusting the model, your mental expectations about the intermediate results have an influence!
I’m no stranger to (non-climate) complex modeling myself, and I can tell you that it would be very hard for me to approach the process and not be thinking about the consequences of each change. Fortunately, in my case, it is very easy to acquire additional data against which I can validate my changes. But without this additional validation, I would be very concerned indeed. I would probably feel obliged to sequester a portion of the data, and/or take other preventative measures.
“[Sorry, but really? you think tenure is granted or not because you have a higher climate sensitivity? Get real.]”
Are you seriously going to claim that a scientist whose work supported a lower climate sensitivity would have the same chance of getting tenure as a scientist whose work supported a higher sensitivity?
Tenure is the end result of one of the most ruthlessly political processes on planet earth. The same sort of hardball political science that was on display in the CRU emails occurs routinely in the tenure process of every department of every university of any significance. The notion that disagreeing with the consensus scientific position does not have a strongly negative effect on tenure proceedings is laughable.
I can think of several examples in other fields. Climate science is surely not immune.
[Response: I imagine that tenure is indeed tough (I’ve never gone through it). But tenure is generally granted by the university not your scientific colleagues – and most of them wouldn’t know the difference between climate sensitivity and a hole in the ground. They look at publications, letters, honours, teaching assessments and the like. It doesn’t matter how high or low your climate sensitivity number is if you haven’t got a decent track record. Name one single person who’s been denied tenure on the basis of their climate modelling results (of what ever sort !). Just one. And when you can’t, come back and apologise for letting your prejudice get in the way of the facts. – gavin]
TRY says
Timothy Chase – oh, please – that’s a lot of ridiculous text that is just a mish-mash. Let’s stick to the science! Let me make it nice and simple for you.
You claim, I assume: CO2 added to the existing atmosphere will absorb more outbound IR at certain wavelengths than would otherwise be absorbed. This will result in the global system moving to a new equilibrium, during which time the global system will retain more energy. As a result of this energy retention, overall water vapor in the atmosphere will tend to increase, which will have a similar impact in terms of system energy retention. More CO2 leads to more water vapor, which together lead to energy retention until the system reaches its new equilibrium, a moving target if CO2 continues to be added to the atmosphere. Actually pretty straightforward, right?
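That mechanism can be sketched as a toy zero-dimensional energy balance model. Everything here is illustrative: the heat capacity, restoring strength, and forcing values are assumptions, and water vapor amplification is simply folded into the net restoring term, so this is a cartoon rather than a climate model:

```python
# Toy zero-dimensional energy balance: impose a forcing and let the system
# relax to a new, warmer equilibrium.
C = 8.0      # effective heat capacity, W yr m^-2 K^-1 (mixed-layer scale)
lam = 1.0    # net restoring strength, W m^-2 K^-1 (feedbacks folded in)
F = 3.7      # imposed forcing, W m^-2 (roughly a CO2 doubling)

T, dt = 0.0, 0.1                 # temperature anomaly (K), time step (yr)
for _ in range(3000):            # integrate 300 years
    T += dt * (F - lam * T) / C  # energy is retained until balance is restored

# the new equilibrium anomaly is F / lam; T has essentially reached it
```

The system retains energy until the anomaly reaches F/lam = 3.7 K, on an e-folding timescale of C/lam = 8 years.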
Your opponents claim that, in general, the impact of CO2 is much less significant than you claim it is. Overlapping absorption bands and saturation are two common claims. Bottom line, CO2 does not have the impact on outbound radiation that you claim it does.
Now, you may claim that you have omniscient knowledge of the very, very complex global energy system, or that you don’t need it because a few simple equations absolutely define our entire global system. The problem with that is that we use simple equations to make predictions, then we test those predictions. Once you start looking at second-order or third-order impacts, you get into a morass of claims and counter-claims.
So, why not look at first-order impacts? Specifically, actual outbound radiation globally. Seasonality matters because CO2 varies seasonally. And of course time matters. Now do you understand why the papers you posted don’t address this?
They are a good start, but where is the followup? What would the overall IR signature look like if CO2 absorbs/emits at the levels you claim? What would the overall IR signature look like if CO2 absorbs/emits at the levels your opponent claims? Then, hey, look at the data.
As for the rest of it, argue with someone else about tobacco.
Completely Fed Up says
Jason says:
“First, I don’t have any problem with the earth getting warmer, especially if it only gets a little warmer, and if the pace of that warming is slower than the IPCC forecasts.”
So will you be fine when you have to take those displaced from marginally livable land that is now far too hot?
Will you be fine when the refugees come around looking for homes, food, jobs, education and all the things you have where you are?
Will you?
No.
Completely Fed Up says
wil whines:
“So basically my question is when does “insignificant, short term” change to something relevant?”
Never.
When it changes to something significant, it’s no longer insignificant.
If you wait longer to get more data, it’s no longer short term.
Ernst K says
Re 282:
“Which skeptics are saying that physics is wrong?”
For starters, how about anyone who says that CO2 doesn’t matter because water vapor is responsible for 98% of the greenhouse effect, or that “CO2 lags warming”? Perhaps you feel that such cranks don’t deserve a label as honorable as “skeptic”. If so, I wouldn’t disagree.
“In this email, Tom Wigley suggests adjusting historical temperatures by selecting values which fit the models.”
Is there any evidence to suggest that this adjustment was ever applied to the final CRU data? Or are we only talking about a possible explanation for a short term divergence between the observed temperature record and the model predictions? From my reading of that email, I don’t see any suggestion that the CRU numbers should be changed to better match the models. If no adjustment was made to the CRU data, then there is no circle.
“My aim is not to prove that climate models have been compromised in this manner. I honestly don’t know if they have.
But I suspect it, and I therefore require validation by forecast to allay my suspicions.”
At least I now know that you fall into the “climate science conspiracy” camp (even if you only “suspect” it might have happened). Personally, I find it hard to believe these people wouldn’t recognize that it would be wrong to change the observed record to match the models and then use the same record to validate the models. That’s why I’m left with conspiracy, because they really must know better.
The problem with such a point of view is that you won’t be able to settle such a doubt until there has been so much warming that it’s probably too late to do anything about it.
Now perhaps you’re just “infected” with a brand of extreme skeptical philosophy. I like to call this paralyzed skepticism, because one’s doubt is so extreme that it makes it impossible for one to make informed decisions with incomplete information.
That’s fine for an individual, but such people should not be put in positions to make policy decisions. Of course, the same could be said of the other extreme, “faddists”.
Ray Ladbury says
TRY@264 The situation you describe is not one of equilibrium. Indeed, if you irradiate the atmosphere with your source, it will immediately begin to heat up. You are no longer in equilibrium, so your temperature is time dependent.
I agree we need a detailed climate monitoring network. (Hell, for fun, I was even starting to think about what kinds of sensors you’d want!) I suggest contacting your representatives and telling them of your opinion. Know, however that it isn’t cheap, easy or uncontroversial. This is a slog, not a walk in the park. Science is the best guide, though, for keeping us on the path rather than in the swamp.
dhogaza says
Something like this happened when the first UAH satellite temperature reconstructions were published, supposedly showing the planet was actually cooling, not warming, in the 1990s. The WSJ even proclaimed these results to be the “wooden stake through the heart of AGW” or some such.
Surface temps and expectations from modeling both disagreed with the UAH results …
What had to be fixed? Hint: wasn’t the models.
S. Molnar says
I’m having trouble with Gavin’s reply to “honorable” @242. If what we’re seeing is the output of his “same old tired nonsense” filter, I really can’t picture the items that don’t get through. It must be very nonsensical indeed. Maybe he could set aside some of the best for a post; say, next April 1.
dhogaza says
Jason asks:
Fellow denialist TRY obliges:
Ray Ladbury says
TRY, do you understand that a mere snapshot of the outgoing IR spectrum with a huge bite taken out right at the absorption line of CO2 is not sufficient to establish increased greenhouse warming? You have to look at the system over time and integrate the effect. You have to look at energy in and energy out (in the IR and visible). We have lots of snapshots, but we don’t have enough eyes in the sky to integrate over time (~30 years) and separate climate from noise.
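The ~30-year integration point can be illustrated with a purely synthetic calculation. The trend and noise magnitudes below are assumptions chosen only to show how detection probability grows with record length, and the significance test is a deliberately crude proxy:

```python
# How often a modest trend stands out from interannual noise,
# as a function of record length (synthetic, illustrative numbers).
import numpy as np

rng = np.random.default_rng(1)
trend, sigma = 0.02, 0.15        # K/yr signal vs K of year-to-year noise

def detection_rate(n_years, n_trials=500):
    hits = 0
    for _ in range(n_trials):
        t = np.arange(n_years)
        y = trend * t + rng.normal(0.0, sigma, n_years)
        slope = np.polyfit(t, y, 1)[0]
        hits += slope > trend / 2.0   # crude proxy for a detected trend
    return hits / n_trials

rate_10yr = detection_rate(10)
rate_30yr = detection_rate(30)
```

With this much year-to-year noise, a 30-year record recovers the imposed trend almost every time, while a 10-year record frequently does not.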
I’m all for doing it. The opposition comes from other quarters than the scientific community. There are some folks who really don’t want to know (viz. my desmogblog reference to Triana/DISCOVR–a situation with which I am intimately familiar).
Nicolas Nierenberg says
RE: 280,
Doug, I’m discussing a basic rule of modeling, it has nothing to do with climate scientists or honesty. And I want to make it clear that I don’t believe in these weird divisions of skeptics, and well whatever the others are.
A model has to be tested with out-of-sample data. In a model as complex as these, knowledge of the existing result has to influence the person writing and testing the model. Therefore, even though Gavin believes that he is building only on first principles, it isn’t, in my opinion, possible. It is the same reason why drugs have to be tested using double-blind experiments. It isn’t that the physicians who are administering the tests and noting the results are dishonest; it is just the way the world works.
And looking at Gavin’s example of sea-ice modeling: I understand that he is doing it on first principles. But if a new module suddenly increased climate sensitivity so that 2009 was five degrees C warmer than present, I can assure you that he would first look for a bug, and then second rethink the model, because obviously that isn’t what happened. That isn’t dishonest; that just makes sense. The result of a hundred decisions like that is something that will be very close to the historical record. Particularly if you average all the climate models. (Which may be a sociological explanation of why the average of all climate models has proven more accurate than any one.)