It’s worth going back every so often to see how projections made back in the day are shaping up. As we get to the end of another year, we can update all of the graphs of annual means with another single datapoint. Statistically this isn’t hugely important, but people seem interested, so why not?
For example, here is an update of the graph showing the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs. The 2009 number is the Jan-Nov average.
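For anyone curious about the mechanics, here is a minimal sketch (in Python, using synthetic model output rather than the actual AR4 archive) of the baselining and 95% envelope construction described above:

import numpy as np

years = np.arange(1980, 2011)
rng = np.random.default_rng(0)
# 20 synthetic "runs": a gentle trend plus year-to-year noise (illustrative only)
model_runs = 14.0 + 0.02 * (years - 1980) + rng.normal(0.0, 0.1, (20, years.size))

# baseline every run to its own 1980-1999 mean, the reference period used in the 2007 report
base = (years >= 1980) & (years <= 1999)
anoms = model_runs - model_runs[:, base].mean(axis=1, keepdims=True)

# the grey envelope encloses 95% of the runs: 2.5th and 97.5th percentiles per year
lo, hi = np.percentile(anoms, [2.5, 97.5], axis=0)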
As you can see, now that we have come out of the recent La Niña-induced slump, temperatures are back in the middle of the model estimates. If the current El Niño event continues into the spring, we can expect 2010 to be warmer still. But note, as always, that short term (15 years or less) trends are not usefully predictable as a function of the forcings. It’s worth pointing out as well, that the AR4 model simulations are an ‘ensemble of opportunity’ and vary substantially among themselves with the forcings imposed, the magnitude of the internal variability and of course, the sensitivity. Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.
There is a claim doing the rounds that ‘no model’ can explain the recent variations in global mean temperature (George Will made the claim last month for instance). Of course, taken absolutely literally this must be true. No climate model simulation can match the exact timing of the internal variability in the climate years later. But something more is being implied, specifically, that no model produced any realisation of the internal variability that gave short term trends similar to what we’ve seen. And that is simply not true.
We can break it down a little more clearly. The trend in the annual mean HadCRUT3v data from 1998-2009 (assuming the year-to-date is a good estimate of the eventual value) is 0.06+/-0.14 ºC/dec (note this is positive!). If you want a negative (albeit non-significant) trend, then you could pick 2002-2009 in the GISTEMP record which is -0.04+/-0.23 ºC/dec. The ranges of trends in the model simulations for these two time periods are [-0.08,0.51] and [-0.14, 0.55], and in each case there are multiple model runs that have a lower trend than observed (5 simulations in both cases). Thus ‘a model’ did show a trend consistent with the current ‘pause’. However, the fact that these models showed it is just coincidence, and one shouldn’t assume that these models are better than the others. Had the real world ‘pause’ happened at another time, different models would have had the closest match.
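For reference, a short-period trend and its 95% range can be computed with a simple ordinary least squares fit, as in this sketch (no autocorrelation correction; the anomaly values below are illustrative, not the actual HadCRUT3v numbers):

import numpy as np
from scipy import stats

def trend_with_95ci(years, temps):
    # ordinary least squares slope and its 95% interval, no autocorrelation correction
    res = stats.linregress(years, temps)
    half_width = stats.t.ppf(0.975, len(years) - 2) * res.stderr
    return 10 * res.slope, 10 * half_width   # converted to ºC per decade

# illustrative annual anomalies for 1998-2009 (made up, not the actual HadCRUT3v values)
years = np.arange(1998, 2010)
temps = np.array([0.53, 0.30, 0.28, 0.40, 0.46, 0.46, 0.43, 0.48, 0.42, 0.40, 0.31, 0.44])
print(trend_with_95ci(years, temps))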
Another figure worth updating is the comparison of the ocean heat content (OHC) changes in the models compared to the latest data from NODC. Unfortunately, I don’t have the post-2003 model output handy, but the comparison between the 3-monthly data (to the end of Sep) and annual data versus the model output is still useful.
Update (May 2012): The graph has been corrected for a scaling error in the model output. Unfortunately, I don’t have a copy of the observational data exactly as it was at the time the original figure was made, and so the corrected version uses only the annual data from a slightly earlier point. The original figure is still available here.
(Note, that I’m not quite sure how this comparison should be baselined. The models are simply the difference from the control, while the observations are ‘as is’ from NOAA). I have linearly extended the ensemble mean model values for the post 2003 period (using a regression from 1993-2002) to get a rough sense of where those runs could have gone.
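The extension is nothing more elaborate than the following sketch (with synthetic numbers standing in for the ensemble-mean OHC):

import numpy as np

# fit the ensemble-mean OHC over 1993-2002 and extend that line through the post-2003 years
yrs_fit = np.arange(1993, 2003)
ohc_mean = 0.6 * (yrs_fit - 1993) + np.array([0.1, -0.2, 0.0, 0.3, 0.1, -0.1, 0.2, 0.0, 0.1, 0.2])

slope, intercept = np.polyfit(yrs_fit, ohc_mean, 1)
yrs_ext = np.arange(2003, 2010)
ohc_extended = slope * yrs_ext + intercept   # a rough sense of where the runs could have gone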
And finally, let’s revisit the oldest GCM projection of all, Hansen et al (1988). The Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%), and the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the current best estimate (~3ºC).
The trends are probably most useful to think about, and for the period 1984 to 2009 (the 1984 date chosen because that is when these projections started), scenario B has a trend of 0.26+/-0.05 ºC/dec (95% uncertainties, no correction for auto-correlation). For the GISTEMP and HadCRUT3 data (assuming that the 2009 estimate is ok), the trends are 0.19+/-0.05 ºC/dec (note that the GISTEMP met-station index has 0.21+/-0.06 ºC/dec). Corrections for auto-correlation would make the uncertainties larger, but as it stands, the difference between the trends is just about significant.
Thus, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world, but assuming (a little recklessly) that the 26 yr trend scales linearly with the sensitivity and the forcing, we could use this mismatch to estimate a sensitivity for the real world. That would give us 4.2/(0.26*0.9) * 0.19 ≈ 3.4 ºC. Of course, the error bars are quite large (I estimate about +/-1ºC due to uncertainty in the true underlying trends and the true forcings), but it’s interesting to note that the best estimate sensitivity deduced from this projection is very close to what we think in any case. For reference, the trends in the AR4 models for the same period have a range 0.21+/-0.16 ºC/dec (95%). Note too, that the Hansen et al projection had very clear skill compared to a null hypothesis of no further warming.
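Spelled out, the back-of-envelope scaling is simply:

# the back-of-envelope scaling, spelled out
model_sensitivity = 4.2    # ºC per CO2 doubling in the 1988 GISS model
model_trend = 0.26         # ºC/decade, scenario B, 1984-2009
forcing_ratio = 0.9        # scenario B forcings ran roughly 10% high
obs_trend = 0.19           # ºC/decade, GISTEMP / HadCRUT3, 1984-2009

implied = model_sensitivity / (model_trend * forcing_ratio) * obs_trend
print(round(implied, 1))   # ~3.4 ºC, with roughly +/- 1 ºC uncertainty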
The sharp-eyed among you might notice a couple of differences between the variance in the AR4 models in the first graph, and the Hansen et al model in the last. This is a real feature. The model used in the mid-1980s had a very simple representation of the ocean – it simply allowed the temperatures in the mixed layer to change based on the changing fluxes at the surface. It did not contain any dynamic ocean variability – no El Niño events, no Atlantic multidecadal variability etc. and thus the variance from year to year was less than one would expect. Models today have dynamic ocean components and more ocean variability of various sorts, and I think that is clearly closer to reality than the 1980s vintage models, but the large variation in simulated variability still implies that there is some way to go.
So to conclude, despite the fact that these are relatively crude metrics against which to judge the models, and despite the substantial degree of unforced variability, the matches to observations are still pretty good, and we are getting to the point where a better winnowing of models dependent on their skill may soon be possible. But more on that in the New Year.
Doug Bostrom says
“The same sort of hardball political science that was on display in the CRU emails occurs routinely in the tenure process of every department of every university of any significance.”
Oh, bullshit. Politics plays a role in tenure decisions, a little bit more or less in many though still not most cases, but it’s social politics that trips up candidates. Political tenure trouble results from poor social skills on the part of the candidate, not science. If somebody’s pleasant to get along with, they’ll get cut some slack and that’s about the extent of it.
This sort of bankrupt hysteria about the fundamental legitimacy of scientific inquiry is becoming all too common of late, even as the actual scientific case in question becomes less controversial. Legitimacy is a handy qualitative escape hatch to dive into when all else fails.
Undermining the legitimacy of scientists with broad attacks on the entire academy is a nauseating and repugnant scorched earth tactic smacking of desperation. The collateral damage this approach causes will feed into other areas of inquiry, and worse it’ll foster such other disasters as reluctance to vaccinate against measles, abstinence programs as a less squeamish approach to sex education, etc. Shameful, truly.
Barton Paul Levenson says
Jason: even the liberal wing of the Democratic party is not seriously committed to climate change.
BPL: Your telepathy is malfunctioning. I’m a liberal Democrat and I know AGW is real, and the greatest problem our civilization has ever faced outside of nuclear war. Your proposed tax plan would wreck the economy by suddenly imposing huge taxes on most working people. We do NOT have to wreck the economy to stop AGW. It’s a straw man argument. It’s not the Democrats who are blocking action on AGW, it’s the GOP, with its general anti-science attitude that rejects AGW theory, evolution, CFC damage to the ozone layer, health problems from smoking, and in general any scientific finding at all that might somehow result in more regulation of big business.
Dave Salt says
Thanks again, Ray Ladbury (#285)
Though very interesting and highly educational, the linked posting only discusses feedback concepts, as per its title “Re-visiting climate forcing/feedback concepts…”, but appears to provide no direct evidence (i.e. from real-world observations) that these mechanisms actually dominate the Earth’s climate system, which was the subject of my original enquiry. Your other comment would be acceptable if the climate models included every possible mechanism and associated couplings with a high degree of certainty but, as the IPCC admits, there appear to be some factors that are not yet sufficiently well understood (e.g. those relating to clouds) to justify this statement. Please correct me if I’m wrong here and, in fact, the models no longer include any assumptions or ‘plugs’ to match their output to past historical trends. Simply saying, in effect, that “we’ve tried everything else so it must be that” seems insufficient to justify a hypothesis that could have a massive influence upon the world’s future social and economic development.
As for my reference to Cargo Cult Science, you’ll note I was suggesting that a true lack of real-world evidence would likely lead to this conclusion and that my original post was, therefore, a sincere attempt to refute this through a simple enquiry. If your only means of reply is via ad hominem comments or argument by authority, I can only conclude that you are either unable or unwilling to discuss the issue in a civilised and rational manner; an unfortunate tendency that afflicts BOTH sides of this particular debate. Nevertheless, I refuse to believe both you and Mr Elifritz are truly representative of this blog and so am still more than willing to learn from people with a more civilised and rational approach to science.
Andrew says
@ Jason: “Is it improbable because Republicans do not want to repeal or reduce the income tax?
Or is it improbable because Democrats do not view climate change as a sufficiently serious issue to make such a trade?”
Strange as it seems, there is even another possibility.
Replacing income tax with carbon tax is totally unworkable. In 2007, the income taxes paid by the top 1% of earners amounted to 40% of all income taxes paid – more than the bottom 95% of earners COMBINED (just under 40%).
So your carbon denominated income tax replacement must increase the taxes on 95% of the population. By a lot, actually – something like double.
You believe that BOTH Democrats and Republicans would eagerly line up to roughly double the income tax bill on 95% of the population and drastically slash the income tax bill for the highest earners?
I think most people reading this can figure out why this proposal is not part of any solution to anything.
Jason says
#289: “Jason, I am hoping that your implication that the surface station record was changed to reflect model output is unintentional on your part, because no one has in fact done this.”
I am saying that Tom Wigley in an email to Phil Jones dated September 27th 2009 recommended that this be done.
I explicitly stated that I did not think that this had occurred, and that the request itself was not, in my eyes, an ethical breach.
But considering how casually Wigley made this request, it doesn’t seem completely out of the realm of possibility. If you and Gavin say that this has never ever happened at GISS, then I believe you.
[Response: You have completely misunderstood what Wigley was doing. Given the issues unearthed by Thompson et al (2007), Wigley wanted to know what impact that might have on his detection and attribution work and what the magnitude of the change in the ocean SST might be. Nobody is going to just shift the actual record in the ad hoc way you appear to think Wigley was suggesting. – gavin]
“As to your implication that the models are tweaked to achieve agreement with the historical record, that is also incorrect. Any changes that are made must be motivated by the physics. It is valid to increase fidelity of the model–e.g. by adding a treatment of ocean currents around Antarctica. It is not valid to tweak that ’til you get best agreement with temperature.”
Here is what Gavin said: “If I set a parameter related to sea ice, I do it in order to improve the simulation of the sea-ice – usually on a seasonal cycle.”
It sure sounds like Gavin is comparing the output of the model to observed data and modifying the model with the intent of matching the observations.
I would never suspect that Gavin deliberately biases his own model. What would be the point?
But there is a world of difference between not deliberately sabotaging your own model, and having a model that is free of bias [a term I am using in the colloquial sense].
I was asked why I think that validation of forecasts is necessary and validation of hindcasts insufficient. I think that validation of forecasts is necessary because I believe that the models have been indirectly biased by the historical data they purport to predict.
“Jason, have you ever done any dynamical modeling? Do you even understand how it differs from statistical modeling? Because your comments sure do not indicate any such understanding.”
Almost entirely dynamical. But clearly my experience is very different from what goes on in climate modeling. I find some of the statements that you and Gavin have made equally perplexing.
[Response: If you read our papers (and my comments) we are completely up front about what we tune to – the climatology (average annual values), the seasonal cycle, the diurnal cycle and the energy in patterns like the standing wave fields etc. We do not tune to the trends or the sensitivity. – gavin]
Doug Bostrom says
Nicolas Nierenberg says: 30 December 2009 at 6:33 PM
Nicholas I see your point but I believe you’re investing too much confidence in your assumptions. If true in the literal sense I took from your statement, it would have wide repercussions extending into numerous fields beyond the one under discussion here.
Based on some of my own experience (not climate modeling of course but attempting to adjust variables in isolation in complex systems) I also suspect that such efforts– particularly if unintentional– would be reasonably likely to blow up the model.
But of course Gavin would know better.
Jason says
“[Response: I imagine that tenure is indeed tough (I’ve never gone through it). But tenure is generally granted by the university not your scientific colleagues – and most of them wouldn’t know the difference between climate sensitivity and a hole in the ground. They look at publications, letters, honours, teaching assessments and the like. It doesn’t matter how high or low your climate sensitivity number is if you haven’t got a decent track record. Name one single person who’s been denied tenure on the basis of the climate modelling results (of what ever sort !). Just one. And when you can’t, come back and apologise for letting your prejudice get in the way of the facts. – gavin]”
I know of none. I also know of no professors with climate models that predicted substantially lower sensitivity than the consensus who were granted tenure, so I don’t consider this a particularly meaningful measure.
I do, however, have some useful data to add. I have taken (over the past 3 years) to asking climate science grad students the following question:
If your research produced results that tended to disagree with the consensus, what impact do you think it would have on your ability to: Get a job/Get a grant/Get tenured.
Unfortunately, I’ve changed the wording around and only have 6-7 responses. But thus far the universal response of climate science grad students at major universities is that research disputing the consensus would be severely detrimental. The word “impossible” was used twice.
I imagine that you could trivially repeat this experiment. I would wager that you get the same result.
[Response: But you are testing people’s opinions, not actual facts. And frankly, your question is ill-posed. What does ‘the consensus’ even mean in your context? That CO2 is a greenhouse gas? That it’s rising? That there has been warming over the last century? That these things are likely connected? Or something else completely? The quality of someone’s career is much more tied to the quality of their work – not the specific results. And that is something you should be stressing to your students. – gavin]
John P. Reisman (OSS Foundation) says
#242 honorable
‘Honorable’, not even close.
Honorable people are not afraid to post their real names (due cause excepted).
Honorable people don’t make uninformed claims without cautionary statements.
Your anonymous post is absolutely not “characterized by integrity”, thus using the moniker honorable is incorrect in your case.
Your claim of being “a professor of medicine in a first rate North American university” means nothing here. The subject is climate change and anthropogenic global warming. If you want to participate in the discussion in a productive fashion you will need to get up to speed.
https://www.realclimate.org/index.php/archives/2007/05/start-here/
and for some dumbed down explanations
http://www.ossfoundation.us/projects/environment/global-warming
As to your unfair accusation of ‘cavalier’ comment blocking. Poppycock.
Why are you disturbed by climategate? It does not overturn the science though you may feel personally offended by the fact that people in private conversations have opinions, which I might add is entirely un-American.
http://www.ossfoundation.us/projects/environment/global-warming/myths/climategate
Criticism is the essence of the scientific approach if the criticism is scientific. Random BS from the bleachers is not scientific criticism.
RB says
@300 Nierenberg.
I accept your basis for out-of-sample testing – behavioral biases and pattern seeking behavior by the human mind are well understood in finance. Systems traders understand this as well.
http://oldprof.typepad.com/a_dash_of_insight/2007/08/developing-and-.html
I recently saw Richard Alley’s talk where he mentioned a CO2 sensitivity of 2.8C per doubling in line with IPCC’s 3C number. Would that qualify as out-of-sample corroboration of CO2 sensitivity for you – hindcasting of course?
Nicolas Nierenberg says
Re: 305,
Doug, I don’t understand your point.
To use my previous example: if you were building a climate model, and you made a change resulting in 2009 (or 1975, or 1998) being 5C warmer than the 1961-1990 average, are you saying that you wouldn’t look for a bug, or assume that you had made some kind of other error? Conversely, if you made the change and now 2009 was closer to the actual than your previous model, wouldn’t you be more likely to accept the result without looking for a problem? I know I would.
But I would do it with the full knowledge that the true test would be to see if my model was predictive of future periods. Or alternatively of some period that had been held back from me.
At the risk of boring everyone, I have recently had a similar experience building algorithms for determining the location of RFID (radio frequency) tags inside buildings. First principles say you can use signal strength to do triangulation. But in fact there are a number of confounding factors having to do with reflection etc. So we had to try different kinds of models. We made numerous recordings of signal strength so that we could test our models. This helped. But frequently improvements that worked against dozens of our recorded experiments failed when we tested in an actual new environment, or simply in a new time period in an old environment. (In the end we developed quite a nice solution which is patented, but it isn’t perfect.)
John P. Reisman (OSS Foundation) says
#246 Dave Salt
You want real world evidence of a positive feedback?
Put a pot of water on a counter top at room temperature and measure how much it evaporates over time. Then take another pot of water and put it on a stove and put a flame under it.
One of these pots will evaporate faster. They are both evidence that water evaporates… Oh, by the way, water is a greenhouse gas, and the oceans are getting warmer. Add 2+2 and tell me it does not add up to 4.
Now, here’s another experiment for you. Go out in a parking lot on a hot summer day and place your hand on the surface of a black car and hold it there for one minute. Hurts, doesn’t it? Now go to a white car and repeat the experiment. Notice a difference?
Now, take a look at the ice extent reduction trend in the Arctic and try to imagine what will happen to the heat energy that hits the region as more and more summer ice extent disappears (don’t forget the hand on car experiment and take into account that white ice is like the white car and dark water exposed to the sun is like the black car).
It’s really not that hard to understand, Dave. Now you have your proof of positive feedback and reasonable expectations. What say you?
Jason says
#294: “The problem with such a point of view is that you won’t be able to settle such a doubt until there has been so much warming that it’s probably too late to do anything about it.”
In #139 I gave specific measurable criteria that will, within ten years, cause me to become convinced (assuming this “pause” ends and the warming resumes).
Since Waxman Markey makes no meaningful reductions in emissions, we won’t lose anything by waiting. (Arguably we gain since implementing Waxman Markey now will have the effect of creating entrenched interests that oppose change ten years from now).
As I said in #139:
“In data I trust. If mother earth starts following the models, I will be convinced. If not, the models are just an object lesson in scientific arrogance.”
Hank Roberts says
TRY, “A paper Trenberth published this year complained about our inability to monitor energy flows associated with short-term climate variability.” — Bob Parks
The scientists have been pointing out the need for better monitoring, for years. You should call your congressperson.
https://listserv.umd.edu/cgi-bin/wa?A2=ind0912&L=bobparks-whatsnew&D=1&T=0&H=1&O=D&F=P&P=424
Geoff Wexler says
Jason.
Forecasts preferable to hindcasts? How about forecasters like Callendar, Plass and others, who, resting on the shoulders of earlier scientists like Fourier, Tyndall and Arrhenius, forecast the emergence of observable anthropogenic global warming? It seems to me that they did rather better than the early GW skeptics like Angstrom’s son (about 1900), even though his position was quite a plausible one given the evidence available at that time.
Hindcasts are not worth much? How about Bardeen, Cooper and Schrieffer, who hindcast the occurrence of superconductivity about 50 years after it was observed. I notice that the Nobel committee did not share the view that they should be barred from receiving a prize for being so tardy.
Was the validation of Ptolemy’s theory really flawed because it was based on hindcasts? I thought that it was also good at forecasts , at least until Galileo observed the phases of Venus. After that it was still good at some kinds of forecast and similar kinds of hindcast.
So what was wrong with Ptolemy’s theory? It had no serious foundations but was just a brilliant form of curve fitting. In some ways it resembled the cyclical theory of climate which is now one of the alternatives to the consensus version. But it seems to me that Ptolemy’s theory was far more useful. As for climate model theory, unlike Ptolemy’s, it is almost certainly strongly constrained by its foundations in science, in spite of the fact that not all of it is based on calculations from first principles. Even the conservation of energy cuts down a lot on the range of possible answers, and that is just one constraint.
Your super-skeptical attitude, Jason, to climate models reminds me of Lindzen. You write of thousands of assumptions but provide just three examples, of which one refers to the constancy of relative humidity. It is true that Arrhenius used this as a working assumption in his 1895 model, but that does not mean that modern climate models follow his example. According to Realclimate (somewhere) this is not an assumption at all, but an emergent property of the models. So that would be one less assumption out of the thousands. Which models and which papers about them were you discussing when you made that allegation? Anyway this behaviour can be understood better now, starting from the physics. This is discussed in Raymond Pierrehumbert’s book (on line).
The next example you give concerns aerosols which you assert involves circular reasoning. That may depend on the problem being tackled. I have serious doubts about the circularity which would imply that there is no independent knowledge about aerosols. I suspect that some denialist arguments depend on a hidden assumption of disregarding the aerosols altogether, which could of course be justified by suggesting that it is the usual practice to choose their properties arbitrarily.
The next example involves the lack of a first principles theory of cloud formation. Over to the experts. But I gather that there is a range of plausible cloud properties leading to a range of plausible answers. In order for the “horribly wrong” option to come true I suspect that it would be necessary to input some obviously wrong cloud data.
Rejection of the validation of the models because of a supposed problem with the “current pause in warming”? Isn’t that the same old misuse of short term trends discussed so many times, or does it refer to the climate models’ inability to reproduce the details of the short term wiggles, which would be based on an unrealistic demand for lots more detailed initial conditions? Neither constitutes a reason for rejecting the validation, which depends on being able to get the right statistically meaningful trend.
dhogaza says
Dave Salt:
With all respect, you’re the one making contrarian assertions. It’s up to you to provide positive support for your assertions, in particular the claim that climate models boil down to “we’ve tried everything else so it must be that” or that they’re fit to the historical record. It’s not up to us to prove you wrong. GISS Model E and documentation is online, including references to papers describing the model’s physics and general implementation, and various modules reference the papers containing the physics upon which they’re based. I’m sure that if you put in the energy to dig out information to support your case that you’ll find some very interested readers here.
We’re all aware that your appendage waving (readers may insert the appendage of their choice) has convinced *you* of *your* superior understanding, but it ain’t likely to have much of an effect on a practicing physicist like Ray, or a mere BS math type like myself.
RB says
And yes, I also note Gavin’s comment above:
[We do not tune to the trends or the sensitivity. – gavin]
Ernst K says
Re: 290
“Ernst asked if data that is not known a priori is used to tune climate models.
As I understand it, Gavin’s answer is: We do attempt to tune our models to observed data using information that is not available a prior BUT we never ever consider the impact on climate sensitivity or temperature.”
Not quite, at least to my understanding, which is something like the following:
1) The models are built on well established basic physics (Navier-Stokes, the laws of thermodynamics, etc.) at the resolved scale of the model.
2) Physical processes that operate at scales smaller than can be resolved in the model are modeled with parametrization schemes. These schemes usually require at least one parameter that must be supplied by the modeler. So the question becomes, “where do you get an appropriate parameter value from”?
One answer would be to optimize the parameters to match a portion of the historical record, and then verify the values by comparing the model predictions with a different period. This is an acceptable approach but it can be problematic if you intend to build a model that can predict behavior under conditions that are significantly different from the period you used to calibrate. It also means that at least a portion of the observed record can no longer be used to verify your model.
Another approach would be to conduct field experiments of each individual physical process to estimate an appropriate value. For example, you could measure the latent and sensible heat fluxes over a test plot to estimate parameters that are appropriate for calculation of the rate of evapotranspiration from a boreal forest. This is the better approach because it makes for a more robust model and its parameter estimation procedure is independent of the global temperature record.
3) Because there are usually a number of competing process schemes, how do you select which scheme to use in your model?
The correct answer to this question is definitely not “the scheme that lets my model match the historical temperature record better”; it’s “the scheme that the literature suggests is better at simulating the process behavior.”
For example, studies might show that snow albedo and melt rates can be better simulated if you include snow models with multiple snow layers. You therefore should include such a methodology in your GCM even if it means the GCM gets a little worse at predicting the average surface temperature.
In other words, you don’t calibrate the roughness height of a boreal conifer forest to best fit the global mean surface temperature record, you use the value that provides the best simulation of the rate of evapotranspiration as measured in a real boreal forest.
That’s the way it should be done, and as far as I know that’s the way it is done.
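To make the split-sample idea in point 2 concrete, here is a toy sketch with a made-up one-parameter model and synthetic ‘observations’ (no real GCM parametrization is implied):

import numpy as np

rng = np.random.default_rng(1)
forcing = np.linspace(0.0, 2.0, 40)                  # some driver, 40 "years"
obs = 0.8 * forcing + rng.normal(0.0, 0.05, 40)      # synthetic "observations"

def toy_model(param, forcing):
    return param * forcing

calib, valid = slice(0, 20), slice(20, 40)           # first half calibrates, second half validates

# calibrate: pick the parameter that best fits the calibration period only
candidates = np.linspace(0.1, 1.5, 200)
errors = [np.mean((toy_model(p, forcing[calib]) - obs[calib]) ** 2) for p in candidates]
best = candidates[int(np.argmin(errors))]

# validate: judge skill on the held-out period that the tuning never saw
rmse = np.sqrt(np.mean((toy_model(best, forcing[valid]) - obs[valid]) ** 2))
print(best, rmse)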
Ernst K says
Re 300:
“And looking at Gavin’s example of sea-ice modeling. I understand that he is doing it on first principles. But if a new module suddenly increased climate sensitivity so that 2009 was five degrees C warmer than present, I can assure you that he would first look for a bug, and then second rethink the model because obviously that isn’t what happened. That isn’t dishonest, that just makes sense. The result of a hundred decisions like that is something that will be very close to the historical record. Particularly if you average all the climate models.”
This would be a legitimate concern if the uncertainty in our understanding of any one process could lead to such a large change in model behavior. If GCMs really were anywhere near that sensitive to such changes in process parametrization I wouldn’t have any confidence in their predictions either. However, unlike conceptual economics models, GCMs don’t behave like that.
My understanding is that the kind of “tweaks” we are talking about mostly improve the simulation of seasonal and annual variability and how warming and precipitation changes are distributed over Earth. This is demonstrated by the fact that the only way to make the GCMs completely fail to match the historical record is to remove the components that deal with the increase in greenhouse gas concentrations.
TRY says
298 dhogaza – You’re kidding, right? Did you not read the whole post, or are you in the business of selectively quoting to mislead? Gee, who else does that?
295 Ray – sure, but as I said, wait until the system reaches equilibrium. At some point, surely, it will be emitting as much energy as is coming in. Agree with this now? re 298 – regardless of mechanics, there certainly seems to be some benefit in looking at first-order impacts.
Ray Ladbury says
Dave Salt says, “As for my reference to Cargo Cult Science, you’ll note I was suggesting that a true lack of real-world evidence would likely lead to this conclusion and that my original post was, therefore, a sincere attempt to refute this through a simple enquiry.”
Dude, there are MOUNTAINS of evidence, both literal and figurative. The climate sensitivities are constrained by evidence–more than 10 separate lines of it! The successful predictions of the models provide evidence. The paleoclimate provides evidence. Volcanic eruptions provide evidence. Satellite measurements provide evidence. Ground-based measurements. Hell, we’ve even got recorded dates of the first bloom of cherry blossoms on Mt Fuji going back to the 17th century that provide evidence. There are TONS of evidence. YOU merely refuse to look at it. You didn’t even bother to look at the references in Chris’s blog post, even though I specifically commended them to you. Now why would that be, I wonder?
And with regard to my tone, I think I have been more than civil given that you are alleging either massive corruption or massive incompetence by the entire global science community without even the vaguest hint of evidence! I’ve made nothing that even by the most liberal stretching of definitions could be considered an ad hominem attack.
OK, Dave, since you won’t look at any of the evidence I’ve provided, what evidence would you actually accept?
Ray Ladbury says
Nicholas Nierenberg says, “A model has to be tested with out of sample data. In a model as complex as these the knowledge of the existing result has to influence the person writing and testing the model. ”
No, it does not! In my field (radiation effects in electronics) we often have to do simulations where we know many of the answers up front. In my former life, we had to do complicated analyses to extract tiny signals from huge backgrounds. It is not just possible to do so without biasing the analysis, it is routinely done! Physics is your friend. Use it!
[Response: Actually I agree with Nicholas. Sensitivity of the models needs to be tested on ‘out of sample’ data – which wasn’t used in building the model. I have been pushing for increased use of paleo tests for precisely this reason. But the 20th C trend is also ‘out of sample’, as was the post 1984 trend in 1984, as was the OHC change in 2005. Your kinds of tests are of course needed in model construction but aren’t sufficient. – gavin]
TRY says
And dhogaza – surely you would acknowledge that “the physics” allows generally for two different molecules in the atmosphere to absorb the same IR wavelength? And that given a constant influx of energy in a mixed gas environment, how much energy one type of gas absorbs can be dependent on how much energy other gas molecules have absorbed?
Let’s leave AGW out of it entirely and just address that pretty well-defined science question.
Aaron Lewis says
Re 91: TRY (With sincerest apologies to all the computer guys that like formal statements)
The system is a planet with liquid oceans and a gaseous atmosphere including traces of CO2. The planet receives electromagnetic radiation from a nearby sun. The CO2 absorbs some radiation, vibrates, and transmits that energy to other gases in the atmosphere via collision. Those gases transfer that energy to the surface of the planet/ocean, which circulates, exchanges energy with the atmosphere and radiates energy through the atmosphere to space.
The question then becomes: Can we use radiation signatures to detect a change in the heat of the system? And, are there extant observers?
Yes and yes.
Energy will be absorbed by the CO2, transferred to the other gases in the atmosphere, and thence to the ocean producing a radiation signature. See for example: http://www.osdpd.noaa.gov/data/sst/fields/FS_km5000.gif
This is a measure of outbound radiation detected above the Earth’s atmosphere. It is measured on a routine basis and the data is available. Another sample is: http://discover.itsc.uah.edu/amsutemps/AAT_Browse.php?chan=03&satnum=15&aord=a which is a graphic of the energy in water at various levels in the atmosphere.
Theory says that the temperature of the ocean surface will rise as CO2 rises. Given various heat transfer mechanisms most of the heat will end up in the ocean. If the radiation signature of the ocean changes, we know heat has accumulated and global warming has occurred. Here we can see an analysis of changes in the temperature of the surface of the oceans as detected by the ocean’s radiation signatures: http://www.osdpd.noaa.gov/data/sst/anomaly/2009/anomnight.12.28.2009.gif.
Note that much of the ocean’s surface is yellow or orange or red denoting a rise in the temperature of the surface of the ocean and hence the predicted change in the radiation signature. Yes, as predicted the oceans warm as CO2 increases, and the data is all available.
It is not the answer that you want, but it is fast, easy, and 71% correct.
Charlie S says
Re #50 Simon … it’s not a climate model in any sense like the ones in this post … but it can be interesting to play with the Interactive Java Climate Model — and it doesn’t take a supercomputer or knowledge of FORTRAN (or any programming).
http://www.astr.ucl.ac.be/users/matthews/jcm/index.html
Andy says
Gavin: Re: response to #5 – does the AMO exist? I haven’t read anything definitive. If so, then what are the expected mechanisms for such a long duration change in ocean circulation?
[Response: Well there is clearly evidence for variability in the overturning circulation – both in paleo data and in models. The models produce patterns that have (varied) multi-decadal frequencies and so it’s not silly to be looking for this in the modern observations. However, our ability to distinguish internal variability from forced changes in indices like the ‘AMO Index’ is poor. So while it probably exists, we probably don’t know what it’s done (or is doing). – gavin]
Matthew says
#208, Martin Vermeer. I did not miss the point of Jeffreys’ comment, I pointed out that not everyone who writes of statistical inference agrees with the point.
There is no way not to bet.
I agree with you there, but I think that you underestimate the cost of betting huge sums of money on current technology with current knowledge. Money spent now will not be available 20 years from now when better technology is available, and will not be available for desalination projects, which the world really could use. Too much money spent too soon can do more harm than good, and will do no good whatsoever if some other theory than AGW turns out to be more accurate. In terms of controlling risk (loss times probability of loss), there is more than one risk.
“All models are false, some are useful.” I do not know the original source, but the textbook by Bates and Watts on estimation of nonlinear models has it as a chapter epigraph. Another chapter has a nice quote from Bertrand Russell on the fact that all scientific theories are approximations. All the specific relationships that are ingredients in the GCMs are simplifications of complex relationships with parameters estimated from noisy data of some sort. Take all of these approximations glommed together, and you have models that are potentially less accurate than simple models estimated from time series trend analyses. The evidence to date, summarized at the head of this thread, is that the climate models agree rather coarsely with observed data, not very precisely.
Spencer says
Anyone looking at the first graph can see clear as day that the hindcast data is much closer to reality than the futurecast data. Even the error bars are smaller. It may be selection bias, fitting the model to the data, it really doesn’t matter.
[Response: Not quite. It’s because the data is baselined to the beginning part of the curve so that the mean of each model is the same as the data over 1980-1999. The width of the bar is then due only to the size of internal variability. As you go into the future, you still have this kind of uncertainty, but you also start to see a little spread related to differing sensitivities of the models. As you go further out, the spread related to sensitivity will increase, and we will be able to winnow the models. But for the moment the internal variability is the dominant uncertainty in the AR4 simulations. – gavin]
Jim Eager says
Jason @ 253: First, I don’t have any problem with the earth getting warmer, especially if it only gets a little warmer, and if the pace of that warming is slower than the IPCC forecasts.
Well, considering that the current level of atmospheric CO2 without including future growth is already higher than it has been at any time in at least the last 3 million years (i.e. before the start of the current glacial-interglacial cycle) and perhaps as long as the last 20 million years [Tripati et al, Science 4 December 2009], it seems clear that we can safely rule out your first caveat. And since much paleoclimate research shows evidence of rapid changes in ice sheet dynamics and sea level during emergence from a glacial stade, your second caveat is on shaky ground.
J: I’ve seen outwardly credible estimates explicitly suggesting that it will be possible to build carbon sequestration technology that removes many times more carbon from the atmosphere than is emitted during the production of the required energy. Can you direct me to an argument explaining why this physically is impossible?
Surely, it’s called the First Law of Thermodynamics, which apparently has something to do with conservation of energy.
Let’s leave aside the capture and sequestration of CO2 from fossil fuel power plant flue gasses, which will obviously require almost as much energy as is usably generated and will remove not a single molecule of existing carbon from the active carbon cycle. Instead, let’s look at potential methods to sequester much less concentrated CO2 directly from the atmosphere. Even though photosynthesis provides the energy required for the production of biochar feed stock, turning it into char and getting it into the soil on the necessary scale will require a lot of additional energy. Although energy-free once in place, quarrying, crushing, grinding and distribution of peridotite will require a good deal of energy.
J: What level would that be?
Oh, I don’t know, somewhere between the level that allowed the last glacial stade to start growing and the level that existed prior to the last time sea level was 25 to 40 meters higher [Tripati et al, Science 4 December 2009, again], i.e. definitely somewhere much closer to 350 ppmv than 387 ppmv.
Stephen Pruett says
What do you make of the following quote from a CRU e-mail? “Hi Tom How come you do not agree with a statement that says we are no where close to knowing where energy is going or whether clouds are changing to make the planet brighter. We are not close to balancing the energy budget. The fact that we can not account for what is happening in the climate system makes any consideration of geoengineering quite hopeless as we will never be able to tell if it is successful or not! It is a travesty!”
Kevin (Trenberth)
Just on the basis of his credentials and involvement in the IPCC process, one would think that if anyone knows the current state of the science, it would be Dr. Trenberth. I suspect you will not agree with his conclusion, and it may be that your models do not attempt to account for the entire energy budget, so this comment may not apply directly to your work. In any case, I would be interested in your take on this.
[Response: He’s talking about the uncertainties in our current observing system which are too large to enable us to work out exactly where all the energy goes on a year by year basis (and no one is in disagreement about that). He’s even written a paper about it. – gavin]
Joseph says
A model should be tested with data that was not used to “train” the model, but the analogy with drug trials doesn’t apply. It’s not human bias that is the problem. The issue is that there can be overfitting, and you could be trying to model noise, or perhaps your model is not a reflection of how the world works (e.g. a 3rd-order polynomial model might fit a series pretty well, but it doesn’t forecast.)
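To illustrate: a cubic fitted to a short noisy series can look fine in-sample and still fail badly out-of-sample (synthetic data, purely illustrative):

import numpy as np

rng = np.random.default_rng(2)
t = np.arange(30)
series = 0.02 * t + rng.normal(0.0, 0.1, t.size)     # weak trend plus noise

coeffs = np.polyfit(t[:20], series[:20], 3)          # cubic fitted to the first 20 points
in_sample = np.polyval(coeffs, t[:20])
forecast = np.polyval(coeffs, t[20:])                # "forecast" for the last 10

print(np.abs(in_sample - series[:20]).mean())        # small
print(np.abs(forecast - series[20:]).mean())         # typically much larger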
Leo G says
From Tamino’s site –
{quote} The trend lines from 2000 to 2010 (actually to the present since 2009 hasn’t ended yet) are all positive:
•For GISS data, the trend from 2000 to the present is +0.0115 +/- 0.018 deg.C/yr.
•For RSS data, the trend from 2000 to the present is +0.0017 +/- 0.030 deg.C/yr.
•For UAH data, the trend from 2000 to the present is +0.0052 +/- 0.043 deg.C/yr. {end quote}
Am I reading this right?
GISS, with the error factor could be neg/flat/pos
RSS, also could be neg/flat/pos
Only UAH is absolutely positive.
If true, how ironic :)~
Leo G says
Whoops, just realized that my eyes missed the decimal place on the UAH
“never mind!”
TimTheToolMan says
With regards to the Real Climate article “Why greenhouse gases heat the ocean”, has this been resolved? I’ve not found a paper that has quantified the effect so I’m wondering what mechanism the models use? Or have I simply not found the science that has quantified ocean heating from anthropogenic CO2 yet?
Mark A. says
I’ve found the discussion on how the GCM’s were built intriguing. One of the main reasons i’ve been skeptical about the predictability of the models is that I was under the impression that they had been largely “tuned” using historical temperature data when they were built. If the models were tainted by historical data, their value would largely be dependent only upon their ability to forecast – which so far doesn’t appear to be very good. (although I concede that one decade is hardly enough to come to any conclusions)
However, it sounds as though Gavin and others are claiming that the GCM’s were constructed almost entirely based upon principles of physics and without any bias from historical data. If this is true, then the fact that the models predict so well the actual temperature records of the past up to the recent present would give any skeptic cause to do a double take.
But then, this too makes me pause. Are we to believe that a hodgepodge of physical principles were coded into these models, and yet when these principles were pooled together, models were able to be created that so closely resemble the actual temperature record from the last century?
With so many complex unknowns in the equation that climatologists admit are either poorly understood or completely unaccounted for in the models, could such accurate results really have been obtained without the models being tainted by the historical temperature data in some way? For one, I’m led to believe that the positive and particularly the negative feedback mechanisms are left largely unaccounted for in the models, especially older ones, yet these models were still able to produce accurate results that followed actual temperature records.
I would expect that even leaving one minor forcing or feedback mechanism unaccounted for in the models would create a cumulative effect over time, exaggerating either a warming or cooling effect and causing the model’s results to greatly diverge from actual temperature records over a significant period. Yet this doesn’t seem to be the case, at least in the public models, seeing as they cross and follow historical temperatures repeatedly as time progresses, showing little deviation on a long time scale. Gavin et al.: can you provide an explanation for this?
Martin Vermeer says
Nicholas #300, two things. Firstly, the situation in medicine is very different because of the placebo effect, which is very real and occurs also in “honest” patients.
Secondly, a large error of the kind you describe would, as Gavin described in #271, be found well before getting around to trying to replicate the instrumental record, by looking at the phenomenon that module is actually trying to describe. Implying that Gavin’s description is not what really happens does come close to an accusation of less than full honesty. And actually some of the models in AR4 don’t do a very good job with the instrumental record (were those the more “honest” modellers?).
I understand that in adversarial situations (like trying to convince a skeptic public) you want to have “hard” blind testing, which means in practice that only old and currently obsolete models like Hansen 1988 can qualify. But, once the existence of minimally skillful models has been thus established, for production work, please let’s use the best and give the researchers a little credit for knowing what they are doing.
Edward Greisch says
The problem is: People listen to Rush Limbaugh on their car radios while going to and from work. There are no graphs to see on the radio. Rush says AGW is a hoax so many times that the average person believes it. They aren’t going to look at any graphs that they wouldn’t understand anyway.
RC needs to get a talk radio show that airs when the most people are driving. RC needs to include the same psychological hooks that Rush uses. Quit being Mr. Nice Guy. [edit]
wil says
@ 293, in answer to my question:
“So basically my question is when does “insignificant, short term” change to something relevant?”
‘Completely Fed Up’ answered:
Never. When it changes to something significant, it’s no longer insignificant. If you wait longer to get more data, it’s no longer short term.
If you carefully read my original contribution (#224) you can see that the NCDC data indicate there is no significant temperature increase even since 1996. So the ‘significance’ is there (increasing in strength every year after 1996), but it is at a statistical level and every time the answer from the CO2-adepts is, well it’s only short term. Therefore I asked when it would be considered RELEVANT.
I am afraid that the answers given so far are really not convincing, at least not to me and probably a lot of others that have difficulty with the divergence.
Dave Salt says
Concerning Thomas Lee Elifritz (#281) responding to my post (#273), it dawns on me that he may have misinterpreted my statement “I also don’t respect argument by authority (i.e. trust me, I’m a scientist)”, which was an attempt to illustrate a point and not a statement about my professional status.
Just for the record, I am not a scientist nor do I pretend to be one on the internet.
Simon Rika aka Karmakaze says
@Ray Ladbury #229
“Try looking at monthly data–lots of up and down, right? Hard to spot a trend. Now average over a year. Still lots of up and down, but the trend is easier to see. Now try looking at 5 year averages and the trend becomes quite clear. Essentially, if you remove known sources of noise (like annual variation in absolute temperature) the trend becomes easier to see.”
I know I’m dense in this respect, because of my lack of formal education, so please bear with me:
Say we have a series of “annual mean temperature” datapoints. We select a representative period and average the annual means to get an average annual mean (kind of a double average).
So say that gets us the average annual mean of 14.1C. Is that how that part works?
Next we get each annual mean temperature for each year and plot its difference from that average annual mean. Is that what an anomaly graph is showing? Like:
Average for 1970-1999: 14.1C
2001: +0.1C
2002: +0.2C
2003: +0.1C
2004: +0.3C
Is that the sort of data that is being plotted? Have I got that right so far?
So wouldn’t:
Average for 1970-1999: 14.1C
2001: 14.2C
2002: 14.3C
2003: 14.2C
2004: 14.4C
Show the trend just as clearly? If not, what am I missing?
Don’t take this as meaning I am questioning the scientists, I am just trying to understand what the difference is.
Simon Rika aka Karmakaze says
Sorry Gavin or whoever is modding. I would understand if you want to delete this post but if you do, could you add the following request (or fill it for me) to the previous post somehow – I thought of it after I hit submit, and I don’t want to derail this thread.
Could you perhaps point me to a good source where I can learn the basics of this sort of stuff. I looked at “start here”, but that seems to be talking about the climate science rather than what I know are the real basics – how to read and plot the graphs etc. used in climate science. Or maybe I’ve missed it.
Basically, I left high school at 15, and so my knowledge of even the basic stuff like the maths etc is only what I’ve picked up along the way, and I am wondering if I have any clue what I’m looking at.
[Response: With respect to temperature anomalies, read this piece on the Elusive Absolute Surface Temperature, but for a real intro, you probably want to read a book (Cough). ;) – gavin]
Timothy Chase says
TRY wrote in 291:
TRY,
You are the one who keeps on bringing in “my opponents” — and that’s not sticking to the science.
Additionally, they aren’t “my opponents.” For one thing, I am a philosophy major turned computer programmer in a field that is unrelated to climatology. So if anything, these opponents are opponents of the scientific consensus, a consensus that includes every major organization that has seen fit to take a position on whether climate change exists, what’s causing it, and the apparent seriousness of the consequences of climate change. Or perhaps they are opponents of climatology, of science, or simply opponents of doing anything about climate change until it is too late.
But they aren’t my opponents.
And while you are more than happy to bring in these opponents, you treat their opposition as some sort of irreducible primary. You don’t ask or want to consider any relevant questions regarding the nature of their opposition.
What are their qualifications?
Judging from some of the “x number of scientists who oppose ‘global warming theory’” lists, they would appear to be TV weathermen, economists, sociologists, and an occasional climatologist. Spencer, Christy, Lindzen. A few others — but almost none of any prominence in the field.
Have they published their opposition to the consensus in relevant peer-reviewed journals where people who are familiar with the issues in the field are able to judge their papers — and where certain standards of quality are presumably applied prior to the article being published?
Typically, no. If they are climatologists their opposition to the consensus appears in op-eds, or a newsletter, or in a speech before an audience who knows very little about climatology. But occasionally a really bad paper will make it through — and have all of its flaws dissected in later articles — articles which make one ask how the original paper ever made it through the peer review process.
What are the strengths of their arguments? Do they have any alternate, testable explanations for the same phenomena?
Nothing that has stood up to the evidence.
Do they have an alternate climate model — based upon known physics?
Heck, do they have a specific mechanism based upon known or at least testable principles of physics that will explain why warming has taken place since the mid seventies — and are they able to explain why the well-known and well-understood physical mechanism which is generally regarded as being responsible for global warming in the latter half of the twentieth century was somehow cancelled out?
Zip.
In no significant, scientifically relevant sense is there any opposition at all to the consensus regarding anthropogenic global warming. And in that sense it is meaningless to speak of opponents to the scientific consensus on anthropogenic global warming.
What do they have? The very same organizations often have a history of supporting scientifically indefensible positions — in tobacco and other areas. In fact, as I pointed out and as you try to brush off, 32 of the organizations involved in the denial campaign surrounding anthropogenic global warming were also involved in the denial campaign surrounding tobacco. It’s well-documented.
If you bring up the opponents of the scientific consensus on AGW, I will point out other areas in which they were opponents of the scientific consensus.
*
TRY wrote in 291:
“Overlapping absorption bands”? I presume you mean overlapping with water vapor. Yes, at the surface any relevant wavelength is already saturated. Adding more carbon dioxide there won’t make any difference. However, water vapor has a low “scale height” or “e-folding distance” — which means essentially that unlike other gases it tends to stay close to the ground. The mean e-folding distance (the altitude at which the partial pressure of water vapor has dropped by a factor of the base of the natural logarithm, ~2.7, relative to the surface) is roughly 2 km. It tends to fall out as precipitation rather than going much higher. The level of carbon dioxide in the atmosphere falls off much more slowly with altitude. And as such carbon dioxide is an effective greenhouse gas where water vapor is no longer an issue.
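To put rough numbers on that (schematic only, not a radiative transfer calculation):

import numpy as np

# partial pressure of water vapor at height z, relative to the surface, with a ~2 km
# e-folding distance; by comparison the CO2 mixing ratio is nearly constant with height
z_km = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
h2o = np.exp(-z_km / 2.0)          # ~1.00, 0.37, 0.14, 0.05, 0.02, 0.01
co2 = np.ones_like(z_km)           # well mixed
for z, w, c in zip(z_km, h2o, co2):
    print(f"{z:4.0f} km   H2O {w:5.2f}   CO2 {c:4.2f}")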
“Saturation”? Even if the center of the absorption band is saturated, the wings won’t be. And as the spectral range over which saturation exists expands, there will always be more distant parts of the wings that are still under-saturated.
This is, after all, what pressure broadening is all about.
Please see:
Pressure broadening
THURSDAY, JULY 05, 2007
http://rabett.blogspot.com/2007/07/pressure-broadening-eli-has-been-happy.html
… which I referred you to back in 110.
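As a rough illustration of why the wings keep mattering, here is a minimal sketch using an idealized Lorentz line shape whose half-width scales with pressure (the widths are illustrative assumptions, not a line-by-line calculation; 667 cm^-1 is the centre of the main CO2 bending band):

```python
import math

def lorentz(nu, nu0, gamma):
    """Idealized, normalized Lorentz line shape; gamma is the
    pressure-broadened half-width (cm^-1)."""
    return (gamma / math.pi) / ((nu - nu0) ** 2 + gamma ** 2)

nu0 = 667.0           # centre of the main CO2 bending band, cm^-1
gamma_low_p = 0.05    # half-width at some low pressure (illustrative)
gamma_high_p = 0.10   # doubled pressure -> roughly doubled half-width

# Far out in the wing the absorption scales with gamma, so higher pressure
# keeps pulling under-saturated wing regions into play even when the band
# centre is completely saturated.
for delta in (0.1, 1.0, 10.0):
    print(f"{delta:>5} cm^-1 from centre: "
          f"low-p {lorentz(nu0 + delta, nu0, gamma_low_p):.3e}, "
          f"high-p {lorentz(nu0 + delta, nu0, gamma_high_p):.3e}")
```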
Furthermore, even if lower altitudes are saturated, infrared radiation will still get through. Half of the radiation that is emitted goes down, half goes up, and no matter how many times the energy is passed along by emission, conduction or convection, it will eventually escape to thinner parts of the atmosphere that are no longer saturated. And increasing carbon dioxide levels will increase the degree of saturation at those layers and the layers above them.
See:
Part II: What Ångström didn’t know
26 June 2007
https://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument-part-ii/
TRY wrote in 291:
The physics says otherwise. The images showing a reduction in infrared brightness (in those parts of the spectrum to which carbon dioxide is opaque) where carbon dioxide concentrations are higher say that they are wrong.
I gave you one of those, too.
Here:
AIRS Carbon Dioxide Data
A 7-year global carbon dioxide data set based solely on observations
http://airs.jpl.nasa.gov/AIRS_CO2_Data/
… and please also see the increasing opacity of the atmosphere due to carbon dioxide levels here:
Aqua/AIRS Carbon Dioxide with Mauna Loa Carbon Dioxide Overlaid
http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/index.html
… for the period from September 2002 to July 2008.
As I explained in 110 in relation to the earlier image:
TRY wrote in 291:
Dealt with that already. In detail. 272.
As for feedbacks that go beyond the radiative properties of greenhouse gases — none of our current models are able to get a climate sensitivity as low as 1.5°C. Moreover, the feedbacks to solar forcing are the same as the feedbacks to greenhouse gases, and you can’t explain the behavior of the earth’s climate system over the past half million years with a low climate sensitivity. Solar forcing alone won’t get you the glacials and interglacials.
*
TRY wrote in 291:
What co2 absorbs/emits at the levels I/”my opponent” claim?
It isn’t a matter of opinion. We know the absorption spectra of carbon dioxide. We know the absorption spectra of other greenhouse gases.
See:
The HITRAN Database
http://www.cfa.harvard.edu/hitran/
They are established facts — as are the effects of pressure and temperature upon their absorption spectra. As I pointed out when I linked to:
Pressure broadening
THURSDAY, JULY 05, 2007
http://rabett.blogspot.com/2007/07/pressure-broadening-eli-has-been-happy.html
Temperature
WEDNESDAY, JULY 04, 2007
http://rabett.blogspot.com/2007/07/temperature-anonymice-gave-eli-new.html
… in 110.
People who are interested can check out and play with Modtran for themselves for total atmospheric column calculations here:
Modtran
http://geoflop.uchicago.edu/forecast/docs/Projects/modtran.doc.html
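For a quick back-of-the-envelope feel for what such column calculations yield, the widely used simplified fit of Myhre et al. (1998) gives the CO2 forcing as roughly 5.35·ln(C/C0) W/m². A minimal sketch of that fit (this is the simplified expression, not Modtran output; the concentrations are approximate):

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic fit for CO2 radiative forcing (Myhre et al. 1998),
    in W/m^2 relative to the reference concentration c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(385))   # roughly the 2009 concentration: ~1.7 W/m^2
print(co2_forcing(560))   # doubled pre-industrial: ~3.7 W/m^2
```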
TRY wrote in 291:
Don’t need to — your champions already lost that war.
But they and their tactics continue, and your being here and your tactics are entirely in line with that.
Simon Rika aka Karmakaze says
@Hank Roberts #234
I’m sorry Hank, but I think you and Ray misunderstand me. Let me explain for you and anyone else who might have the wrong idea.
I am not questioning the data. I have no doubt that it shows what it shows and why; I am simply wondering why that method of showing it – a graph of anomalies – is better than another method – a graph of absolute temperatures.
I left school at 15 with above-average marks in math and science, but that is the extent of my formal education. I am simply trying to figure out what I am looking at, and why it couldn’t be shown in another way – I am by no means questioning the data.
Completely Fed Up says
Another way to put Doug’s message in #306 is: there’s an infinite number of ways to get it wrong, and only a finite number of ways to get it right. The chance of getting it right randomly is infinitesimal.
Completely Fed Up says
Andrew #304: the richest people have a tax rate of less than 20%. The tax rate for those too poor to set up a Cayman Islands account and pay accountants to find tax loopholes is double that.
The rich already avoid tax.
This doesn’t make replacing income tax with a carbon tax a good idea, but the rich already avoid taxes, and they’ll avoid this one too.
[edit – general taxation and capitalism is off topic]
Josh Cryer says
I built some n-body simulation code for a video game. While trying to understand the physics, I found that I was matching what I believed to be proper n-body movements most of the time.
Now the code itself had collision detection, and the n-bodies would glance off each other in a way that didn’t seem right. They were too “sticky.” I tweaked the code many times, basic trial-and-error stuff, trying to figure out what was wrong. I initially thought the problem was in the collision detection code, because the n-body (Euler) formula is easy enough for a high schooler to understand. Also, visually, that’s what appeared to be happening. The collisions were too sticky. It had to be the collision code, because the physics were “correct.”
Wrong. I spent days trying to figure it out, tweaking parameters in the gravity calculation for the n-bodies, tweaking initial velocities, tweaking the virtual diameter of the bodies (the formula relies on the square of the distance to the body center). They still stuck to one another. The system still moved chaotically, but the bodies were glued to one another.
It turns out that, at the end of my n-body gravity routine, I had failed to reinitialize the accumulated gravity. Each iteration it built up and built up until the bodies all became one. The collision detection code, though it was imparting directional velocity (which was supposed to be completely inelastic), was inconsequential compared to the gravity the bodies had accumulated. It was a tiny, itty-bitty mistake on my part.
The point I’m trying to make is that chaotic systems cannot be easily “tweaked” to represent some “preconceived version” of reality. If the physics are not good enough, you will not get the results you expect; you must therefore improve (or, in my case, fix) the physics. People accuse modelers of “pattern matching” to make the models fit the data, but I would be amazed to meet someone smart enough to intuit model output from a chaotic system. All I see with the models is that they are improved via known (empirical) mechanisms and the generalized physics of those mechanisms. Frankly, that’s the only way I see it can be done.
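To make that kind of bug concrete, here is a minimal sketch of a single n-body step with the fix in place (a hypothetical illustration, not the actual game code; the structure and names are made up):

```python
# Minimal sketch of the accumulation bug described above: forgetting to clear
# the per-body accelerations each step makes gravity pile up between iterations.
G = 1.0  # toy gravitational constant

def step(bodies, dt):
    for b in bodies:
        b["ax"] = b["ay"] = 0.0              # the fix: reset the accumulator every step
    for b in bodies:
        for other in bodies:
            if other is b:
                continue
            dx, dy = other["x"] - b["x"], other["y"] - b["y"]
            r2 = dx * dx + dy * dy + 1e-9    # small softening to avoid division by zero
            r = r2 ** 0.5
            a = G * other["m"] / r2          # inverse-square acceleration
            b["ax"] += a * dx / r
            b["ay"] += a * dy / r
    for b in bodies:                         # simple Euler update
        b["vx"] += b["ax"] * dt
        b["vy"] += b["ay"] * dt
        b["x"] += b["vx"] * dt
        b["y"] += b["vy"] * dt

# Toy usage: a light body loosely orbiting a heavy one.
bodies = [
    {"x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0, "m": 100.0},
    {"x": 10.0, "y": 0.0, "vx": 0.0, "vy": 3.0, "m": 1.0},
]
for _ in range(1000):
    step(bodies, 0.01)
```

Without the reset line, every step adds the previous step's acceleration on top of the new one, and the bodies collapse together exactly as described.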
Josh Cryer says
Oops, “completely inelastic” should have been “completely or perfectly elastic” in my above post there. Sorry for the double post!
wallruss says
“However, the bottom line is that it does not seem to be possible to get an Earthlike climate with a CO2 sensitivity less than ~2.1 degrees per doubling. Despite copious efforts at constructing a model with low sensitivity (very interesting for its inherent properties independent of its implications for climate change), no one has succeeded. That is a strongly constrained lower limit unless you can come up with something that overturns a whole boatload of evidence.”
Does this mean, as it seems to imply, that the CO2 feedback parameter used in the models is itself based on how well the models do? If so, this doesn’t seem to fit with the statement that:
“As to your implication that the models are tweaked to achieve agreement with the historical record, that is also incorrect. Any changes that are made must be motivated by the physics. It is valid to increase fidelity of the model–e.g. by adding a treatment of ocean currents around Antartica. It is not valid to tweak that ’til you get best agreement with temperature”
mark says
I am neither a skeptic/denier nor an advocate of AGW, but someone who is trying to learn and understand – I have only a basic school-level understanding of general science.
What does amaze me about this issue, on both sides of the fence, is the zealous, religious-type view. Surely this is about the science, the facts and figures, and how they are interpreted. Why do those not aligning with a particular view become either ‘deniers’ or ‘alarmists’? Where is the option to debate in order to understand?
Prime example, from Simon Rika aka Karmakaze:
“would hate to see the denier types be cleaned out completely, but maybe a “wall of shame” type thread could be made where the denier posts could be sent, that can not be replied to, but the long list of repetitions of the same tired old talking points could be visible to everyone and show WHY they are not welcome in the actual discussion threads?”
Why a ‘wall of shame’ for holding a different view? I see the same sentiment on the other side of the fence.
About time we all grew up, listened, understood and debated.
Sorry for the rant, but as a self-confessed layman, I feel I will never have the confidence to believe either side of the argument when everyone involved seems to have tunnel vision. I think this is the problem reflected in the general public, and it will only lead to apathy about the whole debate… and therefore no action.
CM says
Dave Salt (#303), however unintentionally delicious your accusation of “ad hominid” arguments, shouldn’t that be ad hominidem? Anyway, Ray plainly did not commit an ad hom. He inferred the ignorance of the speaker from the quality of the argument, not the other way around. And an argument from the scientific process hardly reduces to an argument from authority. There was a reason why the medieval humanists insisted students begin with the trivium…
BTW, here’s the abstract of Dessler et al. 2008, to which BPL already pointed you.
Does this help answer your question? If I were you, I’d look up the abstracts of the other papers people here have helpfully pointed you to.
Stephen says
For figure 1 when you say “For example, here is an update of the graph showing the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs. The 2009 number is the Jan-Nov average.”
Can you refer me to the corresponding figure in the IPCC AR4? I’m curious if one can compare the figure posted here to Chapter 8’s FAQ Figure 8.1, and Chapter 9’s figure 9.5.
Thanks for your help…
[Response: This is a pretty good comparison to those figures – the baseline is different, and you are seeing the individual runs, and they calculated the ensemble mean slightly differently – but apart from that… I’ll try and do a more exact match if I get time. – gavin]