It’s worth going back every so often to see how projections made back in the day are shaping up. As we get to the end of another year, we can update all of the graphs of annual means with another single datapoint. Statistically this isn’t hugely important, but people seem interested, so why not?
For example, here is an update of the graph showing the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs. The 2009 number is the Jan-Nov average.
As you can see, now that we have come out of the recent La Niña-induced slump, temperatures are back in the middle of the model estimates. If the current El Niño event continues into the spring, we can expect 2010 to be warmer still. But note, as always, that short term (15 years or less) trends are not usefully predictable as a function of the forcings. It’s worth pointing out as well that the AR4 model simulations are an ‘ensemble of opportunity’ and vary substantially among themselves in the forcings imposed, the magnitude of the internal variability and, of course, the sensitivity. Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.
There is a claim doing the rounds that ‘no model’ can explain the recent variations in global mean temperature (George Will made the claim last month for instance). Of course, taken absolutely literally this must be true. No climate model simulation can match the exact timing of the internal variability in the climate years later. But something more is being implied, specifically, that no model produced any realisation of the internal variability that gave short term trends similar to what we’ve seen. And that is simply not true.
We can break it down a little more clearly. The trend in the annual mean HadCRUT3v data from 1998-2009 (assuming the year-to-date is a good estimate of the eventual value) is 0.06+/-0.14 ºC/dec (note this is positive!). If you want a negative (albeit non-significant) trend, then you could pick 2002-2009 in the GISTEMP record, which is -0.04+/-0.23 ºC/dec. The ranges of trends in the model simulations for these two time periods are [-0.08,0.51] and [-0.14,0.55] ºC/dec, and in each case there are multiple model runs (5 simulations in both cases) with a lower trend than observed. Thus ‘a model’ did show a trend consistent with the current ‘pause’. However, that these particular models showed it is just coincidence, and one shouldn’t assume that they are better than the others. Had the real-world ‘pause’ happened at another time, different models would have had the closest match.
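For readers who want to replicate this kind of number, here is a minimal sketch of the OLS trend calculation with 95% bounds on annual means. The anomaly values below are illustrative placeholders rather than the actual HadCRUT3v record, and, as in the text, no correction for autocorrelation is applied.

```python
import numpy as np

# Illustrative annual-mean anomalies (ºC) for 1998-2009 -- NOT the real record
years = np.arange(1998, 2010)
anoms = np.array([0.53, 0.30, 0.28, 0.41, 0.46, 0.47,
                  0.45, 0.48, 0.43, 0.41, 0.33, 0.44])

# Ordinary least squares fit: slope in ºC/yr
slope, intercept = np.polyfit(years, anoms, 1)

# Standard error of the slope from the residuals (no autocorrelation correction)
resid = anoms - (slope * years + intercept)
se = np.sqrt(resid @ resid / (len(years) - 2)
             / np.sum((years - years.mean()) ** 2))

trend_per_decade = 10 * slope        # ºC/dec
ci95_per_decade = 10 * 1.96 * se     # approximate 95% bounds
```

With the real annual means in place of the placeholders, this is the calculation behind numbers of the form 0.06+/-0.14 ºC/dec quoted above.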
Another figure worth updating is the comparison of the ocean heat content (OHC) changes in the models compared to the latest data from NODC. Unfortunately, I don’t have the post-2003 model output handy, but the comparison between the 3-monthly data (to the end of Sep) and annual data versus the model output is still useful.
Update (May 2012): The graph has been corrected for a scaling error in the model output. Unfortunately, I don’t have a copy of the observational data exactly as it was at the time the original figure was made, and so the corrected version uses only the annual data from a slightly earlier point. The original figure is still available here.
(Note that I’m not quite sure how this comparison should be baselined. The models are simply the difference from the control, while the observations are ‘as is’ from NOAA.) I have linearly extended the ensemble mean model values for the post-2003 period (using a regression from 1993-2002) to get a rough sense of where those runs could have gone.
And finally, let’s revisit the oldest GCM projection of all, Hansen et al (1988). The Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%), and the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the current best estimate (~3ºC).
The trends are probably most useful to think about, and for the period 1984 to 2009 (the 1984 date chosen because that is when these projections started), scenario B has a trend of 0.26+/-0.05 ºC/dec (95% uncertainties, no correction for auto-correlation). For the GISTEMP and HadCRUT3 data (assuming that the 2009 estimate is ok), the trends are 0.19+/-0.05 ºC/dec (note that the GISTEMP met-station index has 0.21+/-0.06 ºC/dec). Corrections for auto-correlation would make the uncertainties larger, but as it stands, the difference between the trends is just about significant.
Thus, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world, but assuming (a little recklessly) that the 26 yr trend scales linearly with the sensitivity and the forcing, we could use this mismatch to estimate a sensitivity for the real world. That would give us 4.2/(0.26*0.9) * 0.19 =~ 3.4 ºC. Of course, the error bars are quite large (I estimate about +/-1ºC due to uncertainty in the true underlying trends and the true forcings), but it’s interesting to note that the best estimate sensitivity deduced from this projection is very close to what we think in any case. For reference, the trends in the AR4 models for the same period have a range 0.21+/-0.16 ºC/dec (95%). Note too, that the Hansen et al projection had very clear skill compared to a null hypothesis of no further warming.
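The scaling argument above is just arithmetic, and can be written out explicitly. This is only a sketch of the calculation in the text, under its stated (and admittedly reckless) assumption that the 26 yr trend scales linearly with sensitivity and forcing:

```python
model_sensitivity = 4.2   # ºC per CO2 doubling, old GISS model
model_trend = 0.26        # ºC/dec, Scenario B, 1984-2009
forcing_ratio = 0.9       # actual forcings ran ~10% below Scenario B
observed_trend = 0.19     # ºC/dec, GISTEMP / HadCRUT3

# Rescale the model sensitivity by the ratio of the observed trend
# to the forcing-adjusted modelled trend
implied_sensitivity = (model_sensitivity
                       / (model_trend * forcing_ratio) * observed_trend)
print(round(implied_sensitivity, 1))  # -> 3.4
```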
The sharp-eyed among you might notice a couple of differences between the variance in the AR4 models in the first graph, and the Hansen et al model in the last. This is a real feature. The model used in the mid-1980s had a very simple representation of the ocean – it simply allowed the temperatures in the mixed layer to change based on the changing fluxes at the surface. It did not contain any dynamic ocean variability – no El Niño events, no Atlantic multidecadal variability etc. and thus the variance from year to year was less than one would expect. Models today have dynamic ocean components and more ocean variability of various sorts, and I think that is clearly closer to reality than the 1980s vintage models, but the large variation in simulated variability still implies that there is some way to go.
So to conclude, despite the fact that these are relatively crude metrics against which to judge the models, and there is a substantial degree of unforced variability, the matches to observations are still pretty good, and we are getting to the point where a better winnowing of models dependent on their skill may soon be possible. But more on that in the New Year.
Grabski says
What if it becomes clear that the climate is threatened
…
What if, what if, what if it becomes clear that the climate is threatened by continued cooling? Then trillions will have been wasted.
Jaydee says
179. sierra117
1968 – 1978 gives a decade with a downward trend
http://www.woodfortrees.org/plot/gistemp/from:1968/to:1978/plot/gistemp/from:1968/to:1978/trend
A nine year sequence from 1986 to 1995 also has a downward trend
http://www.woodfortrees.org/plot/gistemp/from:1986/to:1995/plot/gistemp/from:1986/to:1995/trend
1977 to 1986 is pretty flat but does have a slight upward trend
http://www.woodfortrees.org/plot/gistemp/from:1977/to:1986/plot/gistemp/from:1977/to:1986/trend
As these periods are contiguous, you might conclude that temperature had declined, but no…
http://www.woodfortrees.org/plot/gistemp/from:1968/to:1995/plot/gistemp/from:1968/to:1995/trend
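Jaydee’s point generalises, and is easy to demonstrate without any real data: in a noisy warming series, short windows can trend flat or downward even though the full period trends up. A synthetic sketch (linear warming plus a 9-year wiggle, standing in for GISTEMP):

```python
import numpy as np

years = np.arange(1968, 1996)
# Synthetic series: 0.15 ºC/dec warming plus a sinusoidal wiggle (not real data)
temps = 0.015 * (years - 1968) + 0.1 * np.sin(2 * np.pi * (years - 1968) / 9.0)

def trend_per_decade(y0, y1):
    """OLS trend (ºC/dec) over years y0 <= year < y1."""
    m = (years >= y0) & (years < y1)
    return 10 * np.polyfit(years[m], temps[m], 1)[0]

full = trend_per_decade(1968, 1996)                           # positive
shorts = [trend_per_decade(y, y + 6) for y in range(1968, 1990)]
# min(shorts) is negative: some 6-year windows trend down regardless
```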
Completely Fed Up says
Maybe the way to deal with Pruett’s request isn’t to post the text at all. Just replace it with “your answer is HERE $HTTP$”.
If nothing else, it’ll up the link count to the answers. Note how any climate term sent to google comes up with 80% hits to denialist blogs rather than, say, the IPCC or the national met services.
Because they seed other blogs with references to the denialist blog sites.
Completely Fed Up says
“Take all of these approximations glommed together, and you have models that are potentially less accurate than simple models estimated from time series trend analyses.”
Except you don’t have a model if you take a time series trend analysis.
After all, even if you fit 100% by including enough wavelets, you do not know if there are any other wavelets and so your “model” is far more flawed than you think.
After all, Max sees a 60 year cycle, but who’s to say there’s not a 100 year cycle still going up? And then a 1000 year cycle still going up, etc?
Trend analysis is NOT A MODEL.
It allows you to check to see if the trend in the data is in agreement with your REAL model (the physical one that you proclaim fatally flawed).
And please explain how so many climate models that leave out different things you INSIST are the death knell to accuracy still manage to get so many things the same? After all, there’s an infinite supply of wrong numbers, so why are we seeing only a small and congruent set?
Luck?
Ray Ladbury says
Jason says, “I do, however, have some useful data to add. I have taken (over the past 3 years) to asking climate science grad students the following question:
If your research produced results that tended to disagree with the consensus, what impact do you think it would have on your ability to: Get a job/Get a grant/Get tenured.
Unfortunately, I’ve changed the wording around and only have 6-7 responses. But thus far the universal response of climate science grad students at major universities is that research disputing the consensus would be severely detrimental. The word “impossible” was used twice.”
Jason, either YOU are bullshitting or you are polling idiots. Do you think scientists establish their reputations and win Nobel prizes by blending in with the crowd? Do you think scientists are afraid of disagreeing with each other on technical matters? Hell dude, we were the nerds in high school, remember? We stood out. The absolute quickest way for a young researcher to become a superstar is to overturn a model that is well established. Good Lord, man, you don’t understand anything about the scientific process or scientists at all.
Ray Ladbury says
Matthew @323 “All Models are wrong; some models are useful”–George Box
Matthew says, “Take all of these approximations glommed together, and you have models that are potentially less accurate than simple models estimated from time series trend analyses.”
This is true only if you get the physics badly wrong–which shows up in the validation phase. If you get a physical model that is well validated, and suddenly it ceases to perform well, it is usually an indication that there is a physical process that is important for the new data that was not important for the calibration or validation datasets. You really ought to learn how these models work, don’t you think?
Completely Fed Up says
Simon:”I am simply wondering why that method of showing it – a graph of anomalies – is better than another method – a graph of absolute temperatures.”
But what IS temperature?
My body temperature is 98.
My body temperature AT THE SAME TIME is 36.
It is ALSO 310.
Because temperature is not absolute. It is ONLY in relation to something else.
My body is warm and a lizard is cold not because of any absolute temperature since the differences are practically the same when measured against absolute zero (both boiling hot) or when measured to the temperature of the sun (both freezing cold).
So your request is nonsensical: there IS no absolute temperature. It’s all relative.
Ray Ladbury says
Simon@341, OK, let’s think about this. What are we interested in determining? Whether the climate is changing, right? Climate is a long-term process, so we’ll need to look at data over a period of several (~30) years. And then we will want to determine whether there is an upward slope, a downward slope, no slope, a nonlinear rise… We will also be interested in any seasonal effects–did winter warm more than summer, for instance?
Exercise for the reader: Go try it yourself. Get some data. Graph absolute temperatures. Then graph anomalies. Which one makes it easier to see whether things are changing?
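Ray’s exercise can be sketched in a few lines. The absolute series here is synthetic (a made-up global mean in Kelvin), purely to show that an anomaly is nothing more than the departure from a reference-period mean:

```python
import numpy as np

years = np.arange(1980, 2010)
# Synthetic absolute global-mean temperature (K) -- illustrative, not real data
absolute = 287.0 + 0.02 * (years - 1980)

# Anomaly = departure from the 1980-1999 baseline mean
baseline = absolute[(years >= 1980) & (years <= 1999)].mean()
anomaly = absolute - baseline

# Plotting `absolute` hides a ~0.6 K change under a ~287 K offset;
# plotting `anomaly` shows the change directly
```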
Jason says
#306: “Nicholas I see your point but I believe you’re investing too much confidence in your assumptions. If true in the literal sense I took from your statement, it would have wide repercussions extending into numerous fields beyond the one under discussion here.”
In most other fields it is much easier to acquire new validating data.
If, against this data, a model consistently makes more accurate predictions than its competitors, it is better.
I am thus free to use my own understanding (or often enough misunderstanding) of the system being modeled without fear of biasing the results.
GCMs do not have this luxury.
I am deeply skeptical that humans can accurately forecast ANY system that is even remotely as complex and noisy as the earth’s climate without access to vast amounts of validating data.
Cosmological models (which were previously mentioned, and which I would argue are _much_ simpler than GCMs) have the great advantage that they predict specific values. If a measured quantity differs substantially from what the models say it should be, you aren’t likely to hear about internal variability. Instead, the assumption is that the model is mistaken or incomplete or that the observations are wrong or misinterpreted.
If GCMs _are_ as accurate as some of their proponents seem to believe, it would be the high water mark in the history of human modeling of complex systems; an extraordinary accomplishment of which I am (quite legitimately I think) deeply skeptical.
John E. Pearson says
344: Celso Grebogi, Ed Ott, and Jim Yorke, and their collaborators wrote papers on controlling chaos about 2 decades ago, so it can be done. (I’m not saying that climate modelers do this. They don’t. I am saying only that it is possible to tweak the parameters of a chaotic system in order to produce a desired output.) I haven’t thought about this stuff for a long time but I believe that this is the key paper. http://prola.aps.org/abstract/PRL/v64/i11/p1196_1
Ray Ladbury says
wallruss asks “Does this mean, as it seems to imply, that the CO2 feedback parameter used in the models, is itself based on how well the models do?”
No it does not. It means that if you force your model to have a sensitivity less than this level, it doesn’t look like Earth. It fails. Is that clear?
Ray Ladbury says
Grabski asks, “What if, what if, what if it becomes clear that the climate is threatened by continued cooling? ”
What an odd thing to ask at the end of the warmest decade on record. You have quite an imagination.
Completely Fed Up says
will #337, yes it would HAVE to be a careful reading since your post didn’t say anything of the sort.
“If you carefully read my original contribution (#224) you can see that the NCDC data indicate there is no significant temperature increase even since 1996. So the ’significance’ is there ”
How do you get from “no significant increase” to “significant”? The number I’m thinking of is not positive. Therefore it must be negative? No. Could be imaginary or zero.
“and every time the answer from the CO2-adepts is, well it’s only short term.”
And it is. When your timescale is “30 years” less than 10 years is short term.
My answer was 100% correct and right. You just didn’t like it.
Read it again.
Completely Fed Up says
John opines uselessly: “If GCMs _are_ as accurate as some of their proponents seem to believe, it would be the high water mark in the history of human modeling of complex systems”
Nope, because climate is mostly an energetic proposal. After all, the solar system itself is chaotic (don’t believe me? let me know which side of the sun Pluto will be on in 100 million years) but we can model it just fine for our needs.
The climate is not a chaotic problem any more than the fairness or otherwise of a six-sided die or a hand of cards is a chaotic problem. Predicting what number will turn up in the next roll of the die IS.
But I guess you don’t know and don’t WANT to know. Because if you found out, you’d be discomfited.
Tough.
Reality doesn’t care if you like the consequences.
sierra117 says
Does anyone know where I might find time series data of atmospheric CO2 concentrations?
Lynn Vincentnathan says
Hansen himself puts reliance on models in 3rd place in STORMS OF MY GRANDCHILDREN, after paleoclimatology and observations.
However, since we don’t have a few Earths — some experimental, in which we add GHGs, and some controls, in which we don’t — models are the next best thing for trying to figure out what might be coming up.
I was myself looking at paleoclimatology for “what’s the worst that could happen” and looking at the end-Permian warming during which 90% of life on earth died. But that happened very slowly over many thousands of years, not at the lickety-split rate at which we’re emitting GHGs. And the sun was less bright then, as Hansen points out in his 2008 Bjerknes lecture — http://www.columbia.edu/~jeh1/2008/AGUBjerknes_20081217.pdf
So I guess we have to look to Venus for “what’s the worst that could happen.”
Completely Fed Up says
“I am saying only that it is possible to tweak the parameters of a chaotic system in order to produce a desired output.) ”
And such would give massively different results if you didn’t do EXACTLY the right tweaks.
Which they managed to do despite no detailed description of how to do the tweak and how to change the tweak to fit a different model.
(after all, a hoax IS required or you get the infinite number of wrong answers and if there had been any communication, the CRU hack would have shown it)
caerbannog says
#271 (Comment by Jason — 30 December 2009 @ 3:10 PM)
Moreover, I think that after these assumptions are made, the models are analyzed and debated in an environment where it is easier to get money, get published and get tenure if your models show higher sensitivity.
[Response: Complete BS. (Sorry, but really? you think tenure is granted or not because you have a higher climate sensitivity? Get real). The GISS model used to have a sensitivity of 4.2 deg C, the AR4 model had a sensitivity of 2.7 deg C. Can you discern a difference in our publication rate? or budget? This is beyond ridiculous. – gavin]
Uhh… Jason, here’s something for you to think about (presuming that you are inclined to do such things).
When Svante Arrhenius calculated the climate sensitivity to CO2, what values did he come up with? Do today’s climate models show higher or lower sensitivity than Arrhenius’ original calculations? And how do those numbers square with your claim that scientists are encouraged to tweak their models to produce higher sensitivity numbers?
If you don’t want to think about this, that’s fine. But I hope that some other folks here do.
Lynn Vincentnathan says
Here’s an interesting article about James Hoggan’s work on denialists, and I noticed it’s drawn out a lot of denialist bloggers: http://www.csmonitor.com/Environment/Bright-Green/2009/1228/James-Hoggan-talks-about-global-warming
I wish I had had a good “model” for understanding denialists, so I wouldn’t be so shocked to see them still at it 20 years after I first started mitigating AGW. The human sciences really need to catch up on human behavior modelling.
Now I’m thinking the last 2 people alive some hundreds or thousands of years from now when AGW really gets extremely bad will be 2 brothers, one a believer, one a denialist. Then one kills the other in a heated argument over AGW, and the last man standing not only has to suffer the horrors of AGW, but suffer them alone.
Ray Ladbury says
Jason@359, We simulate the collapse of neutron stars. We simulate the flow of air around supersonic jets. We simulate the paths and behavior of hurricanes with pretty good 5-day accuracy. We simulate the detonation of thermonuclear devices. And on and on. This is the poorest sort of “argument from ignorance”: You can’t understand how scientists can be so smart, so you refuse to even look at the evidence that climate models have substantial skill.
And even if you were right, what then? All of the evidence we have strongly favors a climate sensitivity of at least 2.1 degrees per doubling–and it’s much more likely to be above 4.5 than below 2.1. That is sufficient to establish that we have a serious threat. Climate models are crucial to identifying and evaluating risks posed by that threat. If you cannot bound the risk, it doesn’t mean you are off the hook. An unbounded risk is something you absolutely cannot ignore–any risk assessment professional will tell you that! You are under the misimpression that somehow ignorance is your friend. It is not.
Jason says
#314: “Hindcasts are not worth much?”
I actually think that hindcasts are very valuable, and have been exceptionally valuable to climate science.
I argue that they are insufficient, not that they are useless.
“So what was wrong with Ptolemy’s theory? It had no serious foundations but was just a brilliant form of curve fitting. In some ways it resembled the cyclical theory of climate which is now one of the alternatives to the consensus version.”
The problem with Epicycle based models of our solar system, aside from surprisingly minor inaccuracies, is that they can not tell us anything about how the system would respond to a change in circumstances.
If a massive object shot through the solar system, what impact would it have on the movement of the planets? This curve fitting exercise could tell us nothing.
GCMs are being used to model the effects of a change in the system. To the extent that they are an exercise in fitting (or over-fitting) curves to hindcasts, they will not be able to inform us about the consequences of that change.
[A discussion of the assumptions that I mentioned follows]
I lack the knowledge necessary to tell climate scientists how to improve their assumptions. Even if I did possess the requisite knowledge and understanding, I doubt that I would have sufficient confidence in my own assumptions to put any faith in their results (absent some forward looking validation).
“Rejection of the validation of the models because of a supposed problem with the “current pause in warming”? Isn’t that the same old misuse of short term trends discussed so many times or does it refer to the climate models inability to reproduce the details of the short term wiggles which would be based on an unrealistic demand for lots more detailed initial conditions? Neither constitutes a reason for rejecting the validation which depends on being able to get the right statistically meaningful trend.”
As I previously mentioned, I will be convinced if the observed data resumes its previous trends. The inability to replicate short term wiggles does not concern me, provided that those wiggles turn out to be brief interruptions in a long term trend.
Hank Roberts says
> sierra 117
> Does anyone know where I might find time series
> data of atmospheric CO2 concentrations?
The Start Here button at the top of the page is one good place to start.
So is the first link under Science in the right sidebar.
Or Google; here’s your question, with the magic letters in front of it:
http://www.google.com/search?q=time+series+data+of+atmospheric+CO2+concentrations%3F
Grabski says
Ray Ladbury wrote: What an odd thing to ask at the end of the warmest decade on record. You have quite an imagination.
…
Really? I can read the graphs in this article. For one, a change of zero is within the confidence bands of the chart labelled IPCC AR4 Realizations. So much for the confidence there.
[Response: Over the whole period? No. a zero trend from 1979 to 2009 is well outside the modelled range. – gavin]
Second, the Hansen graph is very rich. For one, it clearly shows that warming has stopped. In the words (paraphrased) of Dr Trenberth: Where’s the warming? Why can’t we measure it? Yes, climate/weather blah blah; but after some time weather becomes climate and that’s happening right now. Temps do have an autocorrelation, meaning that it takes time to reverse field. So temps have to stop climbing (happened) before they start falling (looking like that in the most recent, short term trend).
Third, the Hansen out of sample forecast shows that even though forcings are 25% above actuals, temps are below that forecast. That is, temps are below what was forecast even for a lower level of forcings (which didn’t, of course, happen). There’s a lot of info in that miss.
[Response: Forcings are not ‘25% above’ expectations. Where did you get that idea? Go back to the original post on the Hansen et al results to see what the forcings actually have done (they are lower than Scenario B). – gavin]
[edit of tedium]
Ray Ladbury says
Simon@339 Ahh! I see the problem! You are taking yearly AVERAGE readings! In so doing, you are already removing the seasonal effect. In that case, the advantage of the anomaly numbers is merely that they correspond better to the physical parameter of interest–namely how much heat we have added to the system, rather than how much heat, total, is in the system. Make sense?
Lynn Vincentnathan says
#367, Completely Fed Up, I like your last phrase, “the CRU hack would have shown it.” That’s how we need to turn the CRU fiasco on its head. Let’s keep on using that phrase.
Like, “If the climate scientists truly understood it was cosmic rays and not GHGs causing the warming, the CRU hack would have shown it.”
Or, “If the climate scientists truly understood there was no global warming in the 20th-21st centuries, the CRU hack would have shown it.”
We can just go down the laundry list on this one.
Completely Fed Up says
Jason proclaims: “As I previously mentioned, I will be convinced if the observed data resumes its previous trends.”
Well were you for AGW until ~2003 then?
No, I don’t think you were, Jason.
And how will you know the trend when you don’t seem to be able to work out whether we have one (note: we don’t. 8-10 years isn’t long enough). So how will you change your mind? How will we even know?
PeteB says
Dave Salt,
Re : Climate sensitivities and feedbacks
There are a couple of good sections in AR4, both in the increased confidence in water vapour feedback, and the robustness of the combined water vapour / lapse rate feedback Section 8.6
http://ipcc-wg1.ucar.edu/wg1/Report/AR4WG1_Print_Ch08.pdf
And observational estimates of overall climate sensitivity Section 9.6
http://ipcc-wg1.ucar.edu/wg1/Report/AR4WG1_Print_Ch09.pdf
Completely Fed Up says
Ray: “You are under the misimpression that somehow ignorance is your friend. It is not.”
Well seeing as knowledge isn’t going to make him right, that’s not his friend either.
And ignorance has the benefit of being easier to come across.
Andrew says
@John Pearson: “Celso Grebogi, Ed Ott, and Jim Yorke,”
The climate has to be regarded as a system where the parameters have uncertainty; so the deterministic chaos approach is not likely to be safe. On the other hand, in the 1990s, the theory of H-infinity control was revolutionized by the work of Glover, Doyle, and others (based on AAK theory, etc.).
What modern control theory would tell us about climate control is:
1. It is possible to some extent; but we do not know enough about the precision of control that can be achieved.
2. It is not at all clear what the minimum cost of the control for a given performance specification will be but it is clear that the minimum cost robust control will likely not be a policy which is all on one parameter and none on the others. This is the biggest complaint I have about people that think we can best solve climate problems with all CO2 control and ignore everything else, or all pumping aerosols and nothing else, or the hydrogen economy, or only use planning, etc. Unless you have a very strong L-1 like cost of control, you don’t see this sort of sparse control vector in a minimal control. And if there is one thing we really do know about the system is that there are strong uncertainties about the costs of control; this means at the very least the cost of control is bounded from below by a quadratic form (i.e. L-2 norm). I would bet the house that all-or-nothing is completely non-optimal at this point. This would also preclude “getting lucky” by using a deterministic chaotic control, too.
Jason says
#328: “J: I’ve seen outwardly credible estimates explicitly suggesting that it will be possible to build carbon sequestration technology that removes many times more carbon from the atmosphere than is emitted during the production of the required energy. Can you direct me to an argument explaining why this physically is impossible?
Surely, it’s called the First Law of Thermodynamics, which apparently has something to do with conservation of energy.”
Nobody is proposing that atmospheric CO2 be turned back into petroleum and pumped into the ground.
There are numerous plausible methods of removing carbon from the atmosphere that require substantially less energy than was generated during the process of emitting the CO2 in the first place.
Ray Ladbury says
mark@348 asks “Why a ‘wall of shame’ for holding a different view?”
Nope! Not for holding a different view, but rather for refusing to consider evidence, for digging up thrice-killed zombie arguments, for attempting character assassination of scientists for just trying to do their job or for asserting the existence of a massive conspiracy by the entire scientific community to take over the world.
Scientists really aren’t that hard to get along with: Just don’t ignore the evidence and if you are going to accuse us of fraud, you better have something more substantial than quotes mined from stolen emails and taken out of context or post-modernist, anti-science claptrap.
Bill DeMott says
If GCMs _are_ as accurate as some of their proponents seem to believe, it would be the high water mark in the history of human modeling of complex systems; an extraordinary accomplishment of which I am (quite legitimately I think) deeply skeptical.
Comment by Jason — 31 December 2009 @ 10:13 AM
Jason:
I don’t see that GCMs are claiming or even attempting a “high degree of accuracy.” Rather, the goal is to show long term trends. As far as I can see, these general models are not attempting to predict El Niños and they certainly don’t predict the occasional significant volcanic eruption. The point is that they predict the approximate slope of the long term trend. This is what we need to understand how humans are influencing climate and whether we need to do something about it.
Hank Roberts says
> I will be convinced if the observed data resumes its previous trends
Please see the MetOffice page I quoted earlier.
ADR says
Thoughts?
No Rise of Atmospheric Carbon Dioxide Fraction in Past 160 Years, New Research Finds: http://www.sciencedaily.com/releases/2009/12/091230184221.htm
Controversial New Climate Change Data: Is Earth’s Capacity to Absorb CO2 Much Greater Than Expected? http://www.sciencedaily.com/releases/2009/11/091110141842.htm
John E. Pearson says
367: Huh? You need to chill. I was simply commenting that it is in fact possible to control chaotic systems, that it has been done, and I provided a hook into the literature. You wrote: “And such would give massively different results if you didn’t do EXACTLY the right tweaks.” This is simply false. The tweaks would constitute a set of measure zero and would be unachievable in practice. Besides the theory, they constructed an experimental realization of it in collaboration with a bunch of guys from the Naval Surface Warfare Center. If you want a trajectory that falls within some prescribed tolerances, it is possible to achieve it. I have no idea what the CRU hack has to do with this. I suppose that you are writing this nonsense because you have decided I am an evil denialist because of a joke that I made the other day in which I used the phrase “alarmist”. Personally it made me laugh out loud, as did the initial post that I was responding to with said joke, in which it was claimed that a month to month change in the anomaly of .2 degrees was a really big deal and somehow invalidated all of climate science.
I hate it when people need it spelled out for them. Here goes. For starters I am no denialist. The top comment on this is me catching George Will in a blatant lie: http://www.washingtonpost.com/ac2/wp-dyn/comments/display?contentID=AR2006033101707 There, I’ve said it. Beyond that, people on this site really need to chill a bit. I’ve been insulted mildly and strongly on here on numerous occasions, always by people who had incorrectly concluded that I was trying to deny the science. Usually that was in response to questions. Hank has a tendency to type the question into google and tell me to read the papers. I use google all the time in my own research. It can take hours to find the important references. If I ask a question here it is simply a scientist asking a scientific question. I’ve asked questions on here and been called stupid for asking them. I may or may not have already googled for it. I may just want to hear what people think/know about a given subject. Sometimes I think out loud on here. That is simply the normal way that scientists communicate. You shouldn’t always jump on each and every remark that someone makes and one ought not to be so quick to insult people. It stifles intelligent discussion. If people say the same stupid shi* over and over it ceases being intelligent discussion. Don’t respond to them. Pearson’s first Law: Never argue with idiots.
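The chaos-control point in #367 is easy to demonstrate concretely. Below is a minimal sketch of OGY-style control applied to the logistic map; this is an illustrative toy of my own construction, not the system or experiment from the cited literature. Tiny, occasional parameter tweaks, applied only when the state happens to wander near the unstable fixed point, are enough to pin an otherwise chaotic trajectory there, and the tweaks need only cancel the linearized error, not be exactly right.

```python
# OGY-style control of the chaotic logistic map x -> r*x*(1-x), r = 3.9.
# Small perturbations dr to the parameter r, applied only inside a narrow
# window around the unstable fixed point, capture the chaotic orbit.

r = 3.9
xstar = 1.0 - 1.0 / r          # fixed point of the map (unstable at r = 3.9)
lam = 2.0 - r                  # df/dx at xstar; |lam| > 1 means unstable
g = xstar * (1.0 - xstar)      # df/dr at xstar
window = 0.01                  # apply control only inside this neighborhood
max_dr = 0.1                   # cap on the size of the parameter tweak

x = 0.3
for _ in range(5000):
    e = x - xstar
    dr = 0.0
    if abs(e) < window:
        # choose dr so the linearized error vanishes: lam*e + g*dr = 0
        dr = max(-max_dr, min(max_dr, -lam * e / g))
    x = (r + dr) * x * (1.0 - x)

print(abs(x - xstar))  # should be tiny once the orbit has been captured
```

Until the orbit first visits the control window the dynamics are fully chaotic; after capture, the residual error is second order and the trajectory sticks to the fixed point, which is the sense in which chaos is controllable within prescribed tolerances.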
Jason says
#355 “The absolute quickest way for a young researcher to become a superstar is to overturn a model that is well established.”
That is true. If you could produce a result that empirically overturned a branch of science you would have it made.
How often do you think this occurs? More to the point, what is the probability that a random MIT grad student (like the ones you called idiots) will make such a discovery?
What is the probability that they will get a tenure track position as a consequence of decidedly more incremental work?
If a PhD candidate hoping to snag a tenure track position came to you for career advice, would you tell him to focus his research on proving the IPCC wrong?
“Good Lord, man, you don’t understand anything about the scientific process or scientists at all.”
I like your romantic view of how science works. Lone individuals producing results that overturn branches of science. But if you actually believe that this is how the majority of science works, then I’m afraid it is you who is ignorant of the scientific process.
As the editors of Nature and other top journals make it their mission to suppress evidence that “that obstructionist politicians in the US Senate will probably use […] as an excuse to stiffen their opposition to the country’s much needed climate bill”, anyone who produces research that is not helpful to this agenda will find their work being published in lesser journals and being subject to considerable delays.
Grad students seem to understand this. I again suggest that if you repeat my experiment independently you will get similar results.
There is nothing even slightly subtle about the reception that awaits skeptical climate science research.
[Response: First off, there is no such thing as ‘skeptical climate science research’. There is plenty of skepticism throughout climate science. If you mean ‘research that is perceived to go against the conventional wisdom’, there is plenty of that, and most of it doesn’t get any press at all. Scientists are challenging conventional wisdom all the time. If you are talking about the occasional paper that is based on bad statistics (Douglass et al, 2008), flawed logic (Soon and Baliunas, 2003), inappropriate models (Schwartz, 2007), fundamental misconceptions about what attribution is all about (Scafetta and West, 2005, 2006, 2007, 2009), or ignorance about the basics of paleoclimate (Loehle, 2006), then sure, those papers will be criticised. The issue is not the conclusion, but the methodology. If the actual research does not support the claims made in the press releases and the senate floor speeches, then, yes, that will affect your scientific reputation (as it should). Note that there are some contrary papers that deserve publication, whether they end up being right or wrong; I count Lindzen and Choi (2009) in that group. I think it will be found wanting and will not turn out to be a robust or useful result, but the research done in demonstrating this will be useful. It is simply a fact of life that bad papers (defined as lacking in methodology/logic/etc., not on the basis of their conclusions) will get contrarian attention if they say that they have ‘proved the consensus wrong’. That means that they also get more attention from the mainstream. Bad papers that go the other way generally just fade into obscurity, neither commented on nor cited, and in most other fields that is what happens to all of them. I’ll make one additional comment: if a student follows a research path because they want to demonstrate that climate sensitivity is negligible, then they are in severe danger of having their prejudices guide their analysis. 
People should research things they are interested in – that might have applications for climate sensitivity – but they need to be guided by the results, not their wishful thinking. – gavin]
Jason says
#368: “When Svante Arrhenius calculated the climate sensitivity to CO2, what values did he come up with? Do today’s climate models show higher or lower sensitivity than Arrhenius’ original calculations? And how do those numbers square with your claim that scientists are encouraged to tweak their models to produce higher sensitivity numbers?”
Clearly there is a (recent) downward trend in climate sensitivity numbers. I think this is primarily a result of climate scientists looking at observations, observing that their models need improvement, and tweaking the models to achieve better results (ultimately resulting in a lower sensitivity).
I have never accused the modelers of ignoring data; just not having enough of it.
I suspect that the trend towards lower climate sensitivity numbers will continue during the new decade.
[Response: On the basis of what? This is just your wishful thinking.- gavin]
Jason says
#370: “You are under the misimpression that somehow ignorance is your friend.”
Ray Ladbury, you do your arguments no favor by behaving in such an ugly manner.
jl says
re 365
ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt
Matthew says
354, Completely fed up: Trend analysis is NOT A MODEL.
It’s a statistical model, or a mathematical model, analogous to the pre-Newtonian models of planetary movement. Another example is the pre-Einstein Lorentz-Fitzgerald contractions.
After all, Max sees a 60 year cycle, but who’s to say there’s not a 100 year cycle still going up? And then a 1000 year cycle still going up, etc?
That’s why stringent testing, not lenient testing, is required. The Digital Orrery showed that planetary movement is chaotic, not periodic, as far into the future as they were able to simulate. But the periodicity is a sufficiently accurate model for making calendars.
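The kind of short-term trend (and its large uncertainty) being argued over here can be made concrete with a few lines of ordinary least squares. The annual anomalies below are made-up illustrative numbers, not HadCRUT or GISTEMP values:

```python
import numpy as np

# OLS trend and an approximate 95% interval for a short run of annual
# anomalies. The data below are illustrative, not a real temperature record.
years = np.arange(1998, 2010)
anom = np.array([0.53, 0.30, 0.28, 0.41, 0.46, 0.47,
                 0.45, 0.48, 0.43, 0.41, 0.33, 0.44])

n = len(years)
t = years - years.mean()                      # centered time axis
slope = np.sum(t * anom) / np.sum(t**2)       # deg C per year
resid = anom - (anom.mean() + slope * t)      # residuals about the fit
se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum(t**2))

print(f"trend: {10*slope:+.2f} +/- {10*1.96*se:.2f} C/decade")
```

With only a dozen points the 2-sigma range is typically as large as or larger than the trend itself, which is the sense in which short-term trends are statistically uninformative. (A serious analysis would also correct the standard error for autocorrelation in the residuals, which widens the interval further.)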
And please explain how so many climate models that leave out different things you INSIST are the death knell to accuracy still manage to get so many things the same?
“are the death knell to accuracy” is your formulation. I only claim, in a skeptical spirit, that they might be the sources of gross inaccuracy. The models make some common predictions because of what they have in common, which could be imprecise parameter values.
356, Ray Ladbury: This is true only if you get the physics badly wrong–which shows up in the validation phase.
No. Many tiny inaccuracies can accumulate to produce model predictions that are inaccurate on time scales of 10-30 years.
359, Lynn Vincentnathan: I wish I had had a good “model” for understanding denialists, so I wouldn’t be so shocked to see them still at it 20 years after I first started mitigating AGW. The human sciences really need to catch up on human behavior modelling.
It’s characteristic of some AGW credents that they overlook the need to understand how so much intense belief in AGW has been created from such incomplete and imprecise knowledge. Imagine the possibility that, 20 years hence, Indian and Chinese psychological scientists and historians of science write books and hold symposia about the decline of the EU and US that was precipitated by the AGW mass hysteria that swept them. Was it just a coincidence that the hysteria developed in parallel with the “Left Behind” religious movement? Or in parallel with the gross overinvestment in “end of life” medical care? In science, skepticism is the norm, and intense, motivated, skeptical debate is an ideal. A psychological problem, in general, is how do so many otherwise intelligent people develop such strong beliefs in stupid stuff? Maybe the AGW hysteria (that part, like believing Al Gore, that might be “hysteria”) is just another of many examples of fin de siecle disease.
Didactylos says
Completely Fed Up said:
“So your request is nonsensical: there IS no absolute temperature. It’s all relative.”
You are being confusing. Any measurement system is ultimately just a convention, so your “explanation” doesn’t really explain anything. Most importantly, it doesn’t even touch on any of the relevant science, which is all explained rather well in the link Gavin provided: http://data.giss.nasa.gov/gistemp/abs_temp.html
John P. Reisman (OSS Foundation) says
#307 Jason
You do realize that your question has nothing to do with actual science… right?
It’s similar to saying 2+2=4, but if society believes that it is 5 and you want to get a job, you really should ignore the reality of the equation and go with what others think. That has nothing to do with the equation though. You are implying, via a non sequitur, that desire has relevance to scientific knowledge, which is bizarre at best. Straw-man arguments again… My niece eats ice cream and she’s not fat, so ice cream does not make people fat.
Such obtuse logic, spouted by those who attempt to compare opinion with science, is one of the major problems in getting people to understand the relevance of context in debate. Adding yet another shadow of complexity into the minds of students without explaining the context, coming from a teacher (if you are a teacher/professor?), is more easily considered nefarious, if not argument from ignorance/authority.
In other words it sounds like you are confusing your students more than educating them. No wonder America has so many critical thinking problems… or are you teaching critical thinking and not merely challenging the students to delve?
If my assumptions are correct about you, I’m surprised you are a teacher. This is a serious problem. If you can’t even parse out the logic, and you’re a teacher, and if that may be endemic among teachers across the nation, that explains a lot about the confusion.
I recommend you delve deeper into the science and leave the unrelated opinions aside.
Andrew says
@Jason: “If your research produced results that tended to disagree with the consensus, what impact do you think it would have on your ability to: Get a job/Get a grant/Get tenured.”
It would help you get a job, get grants, and get tenured. If there’s one thing a graduate student prays for, it’s a clear demonstration of an unexpected result.
Confirmation is tremendously important in science, but original discovery is what pulls down the prizes.
The place where you get the prize for finally showing what everyone already knew? That’s mathematics, not science.
Alan Millar says
361. Comment by Ray Ladbury — 31 December 2009 @ 10:16 AM
“It means that if you force your model to have a sensitivity less than this level, it doesn’t look like Earth. It fails. Is that clear.”
I think you mean that it doesn’t look like the modelled Earth.
That means one of two things :-
Either it helps to validate the models or it helps to show that the models are no good!
We hear that the models are not backfitted to match the data, that’s a laugh! Unless we are being led to believe that the models predicted the Mount Pinatubo eruption. That would be a good trick!
The models were obviously backfitted to match the effects of this eruption and the assumptions and parameters used were almost certainly those that would cause the models to match the actual temperature record.
If it was just the physics I would like to be pointed to the papers which have established the precise effects of aerosols on the global climate and how those agreed parameters were input into the models. As far as I am aware these are not well understood currently and are still being debated.
Alan
Lynn Vincentnathan says
#351, Grabski & “what if it becomes clear that the climate is threatened by continued cooling? Then trillions will have been wasted.”
Not at all — we would have become energy/resource efficient/conservative and on to really great alt energy — thus saving us money and strengthening our economy without loss of productivity. And we would have mitigated a host of other environmental and non-environmental problems to boot, saving us even more money and lives.
And if truly we find the earth is cooling, then we could get out some of those fossil fuels we fortuitously left in the ground and burn them a bit (making sure their other pollutants don’t get emitted), and keep our climate just right for life as we know and love it.
However, if you read Hansen’s STORMS OF MY GRANDCHILDREN, it seems very unlikely we will be going into an ice age. Ever!
dhogaza says
The first paragraph shows you don’t even know how the models work, despite various people including Gavin Schmidt telling you and others that they’re not “exercises in fitting curves to hindcasts”.
And yet here you are spouting off about assumptions that need improvement, models that you don’t believe work sufficiently well to be useful, etc.
Why don’t you just stay away until you’ve learned enough to stop embarrassing yourself in public?
Completely Fed Up says
Andrew procrastinates: “This is the biggest complaint I have about people that think we can best solve climate problems with all CO2 control and ignore everything else,”
What is this “everything else” that is being ignored, Andy? And how do you know it’s being ignored?
An example: I’m using a slow poison to kill my rich great aunt so I can inherit a wodge. I am, however, caught in the act.
My rich aunt has the flu at the same time.
Do they treat the flu symptoms but still let me give her the toxin?
That’s what your “don’t stop CO2 production” complaint is asking for: that we continue to add a slow poison whilst working on, oh, poor people or something.
Ray Ladbury says
ADR@384, Note that the study is talking about what percentage of CO2 goes into the atmosphere as opposed to “elsewhere”. If true, it would be encouraging. However, it does seem to conflict with some other recent studies and I would think the further you go back, the greater would be the uncertainty in both emissions and atmospheric vs. oceanic fraction. It also adds concern for oceanic acidification, though. One to watch, I’d say, until someone more knowledgeable than me has a chance to peruse it.
Blair Dowden says
ADR at 384: The titles of both articles are misleading, at least to me. The content of both is about the same thing, and it is valid: the fraction of CO2 absorbed by the land and oceans has not changed significantly over the period we are able to measure. This is good news; this most important negative feedback is not decreasing so far. As for this invalidating climate models, I don’t really know, but I think that the predicted reductions in the ability of the earth to absorb CO2 will happen too slowly to show up in the current data.
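The quantity at issue in those two articles, the airborne fraction, has a simple definition: the annual atmospheric CO2 increase divided by annual emissions, in common units. A back-of-envelope sketch follows; the conversion of roughly 2.12 GtC per ppm of atmospheric CO2 is standard, but the emission and growth figures used in the example call are round made-up numbers, not values from the papers:

```python
# Airborne fraction = (atmospheric CO2 increase) / (total emissions),
# with both expressed in GtC. About 2.12 GtC of carbon corresponds to
# a 1 ppm rise in the atmospheric CO2 concentration.
GTC_PER_PPM = 2.12

def airborne_fraction(emissions_gtc, co2_rise_ppm):
    """Fraction of emitted carbon that stayed in the atmosphere."""
    return co2_rise_ppm * GTC_PER_PPM / emissions_gtc

# Illustrative round numbers, not real inventory data:
print(airborne_fraction(10.0, 2.0))   # a bit under half stays airborne
```

The articles’ point is that this ratio has stayed roughly constant even as emissions grew, which means the land and ocean sinks have so far scaled up their uptake in proportion; a declining sink capacity would show up as a rising airborne fraction.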