It’s worth going back every so often to see how projections made back in the day are shaping up. As we get to the end of another year, we can update all of the graphs of annual means with another single datapoint. Statistically this isn’t hugely important, but people seem interested, so why not?
For example, here is an update of the graph showing the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs. The 2009 number is the Jan-Nov average.
As you can see, now that we have come out of the recent La Niña-induced slump, temperatures are back in the middle of the model estimates. If the current El Niño event continues into the spring, we can expect 2010 to be warmer still. But note, as always, that short-term (15 years or less) trends are not usefully predictable as a function of the forcings. It’s worth pointing out as well that the AR4 model simulations are an ‘ensemble of opportunity’ and vary substantially among themselves in the forcings imposed, the magnitude of the internal variability and, of course, the sensitivity. Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.
There is a claim doing the rounds that ‘no model’ can explain the recent variations in global mean temperature (George Will made the claim last month for instance). Of course, taken absolutely literally this must be true. No climate model simulation can match the exact timing of the internal variability in the climate years later. But something more is being implied, specifically, that no model produced any realisation of the internal variability that gave short term trends similar to what we’ve seen. And that is simply not true.
We can break it down a little more clearly. The trend in the annual mean HadCRUT3v data from 1998-2009 (assuming the year-to-date is a good estimate of the eventual value) is 0.06+/-0.14 ºC/dec (note this is positive!). If you want a negative (albeit non-significant) trend, then you could pick 2002-2009 in the GISTEMP record, which is -0.04+/-0.23 ºC/dec. The ranges of trends in the model simulations for these two time periods are [-0.08,0.51] and [-0.14,0.55], and in each case there are multiple model runs that have a lower trend than observed (5 simulations in both cases). Thus ‘a model’ did show a trend consistent with the current ‘pause’. However, that these models showed it is just coincidence, and one shouldn’t assume that these models are better than the others. Had the real-world ‘pause’ happened at another time, different models would have had the closest match.
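For anyone who wants to reproduce this sort of calculation, here is a minimal sketch of an ordinary least-squares trend with a normal-approximation 95% interval and no auto-correlation correction (matching the convention used above). The anomaly values are illustrative placeholders, not the actual HadCRUT3v record.

```python
# Minimal OLS trend calculator, as used for the short-term trend
# comparisons above. No auto-correlation correction is applied,
# matching the convention stated in the post.
import numpy as np

def trend_with_ci(years, anomalies):
    """Return (trend, 95% half-width), both in degC per decade."""
    x = np.asarray(years, dtype=float)
    y = np.asarray(anomalies, dtype=float)
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    # standard error of the slope from the residual variance
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum((x - x.mean()) ** 2))
    return slope * 10.0, 1.96 * se * 10.0

# Illustrative anomalies only, not the real record.
years = np.arange(1998, 2010)
anoms = np.array([0.53, 0.31, 0.28, 0.41, 0.46, 0.47, 0.45,
                  0.48, 0.42, 0.40, 0.33, 0.44])
trend, ci = trend_with_ci(years, anoms)
print(f"trend = {trend:+.2f} +/- {ci:.2f} degC/dec")
```

With a real 12-year record the half-width is typically comparable to, or larger than, the trend itself, which is the whole point of the paragraph above.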
Another figure worth updating is the comparison of the ocean heat content (OHC) changes in the models compared to the latest data from NODC. Unfortunately, I don’t have the post-2003 model output handy, but the comparison between the 3-monthly data (to the end of Sep) and annual data versus the model output is still useful.
Update (May 2012): The graph has been corrected for a scaling error in the model output. Unfortunately, I don’t have a copy of the observational data exactly as it was at the time the original figure was made, and so the corrected version uses only the annual data from a slightly earlier point. The original figure is still available here.
(Note, that I’m not quite sure how this comparison should be baselined. The models are simply the difference from the control, while the observations are ‘as is’ from NOAA). I have linearly extended the ensemble mean model values for the post 2003 period (using a regression from 1993-2002) to get a rough sense of where those runs could have gone.
And finally, let’s revisit the oldest GCM projection of all, Hansen et al (1988). The Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%), and the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the current best estimate (~3ºC).
The trends are probably most useful to think about, and for the period 1984 to 2009 (the 1984 date chosen because that is when these projections started), scenario B has a trend of 0.26+/-0.05 ºC/dec (95% uncertainties, no correction for auto-correlation). For the GISTEMP and HadCRUT3 data (assuming that the 2009 estimate is ok), the trends are 0.19+/-0.05 ºC/dec (note that the GISTEMP met-station index has 0.21+/-0.06 ºC/dec). Corrections for auto-correlation would make the uncertainties larger, but as it stands, the difference between the trends is just about significant.
Thus, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world, but assuming (a little recklessly) that the 26-yr trend scales linearly with the sensitivity and the forcing, we could use this mismatch to estimate a sensitivity for the real world. That would give us 4.2/(0.26×0.9) × 0.19 ≈ 3.4 ºC. Of course, the error bars are quite large (I estimate about +/-1ºC due to uncertainty in the true underlying trends and the true forcings), but it’s interesting to note that the best-estimate sensitivity deduced from this projection is very close to what we think in any case. For reference, the trends in the AR4 models for the same period have a range 0.21+/-0.16 ºC/dec (95%). Note too that the Hansen et al projection had very clear skill compared to a null hypothesis of no further warming.
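The back-of-envelope rescaling above can be written out explicitly; the numbers are taken straight from the text.

```python
# Implied real-world sensitivity from the Scenario B mismatch,
# assuming (as the post does, a little recklessly) that the 26-yr
# trend scales linearly with sensitivity and forcing.
model_sensitivity = 4.2   # degC per CO2 doubling, 1988 GISS model
model_trend = 0.26        # degC/dec, Scenario B, 1984-2009
forcing_ratio = 0.9       # Scenario B forcings ran ~10% high
observed_trend = 0.19     # degC/dec, GISTEMP / HadCRUT3

implied = model_sensitivity / (model_trend * forcing_ratio) * observed_trend
print(round(implied, 1))  # 3.4
```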
The sharp-eyed among you might notice a couple of differences between the variance in the AR4 models in the first graph, and the Hansen et al model in the last. This is a real feature. The model used in the mid-1980s had a very simple representation of the ocean – it simply allowed the temperatures in the mixed layer to change based on the changing fluxes at the surface. It did not contain any dynamic ocean variability – no El Niño events, no Atlantic multidecadal variability etc. – and thus the variance from year to year was less than one would expect. Models today have dynamic ocean components and more ocean variability of various sorts, and I think that is clearly closer to reality than the 1980s vintage models, but the large variation in simulated variability still implies that there is some way to go.
So to conclude, despite the fact that these are relatively crude metrics against which to judge the models, and there is a substantial degree of unforced variability, the matches to observations are still pretty good, and we are getting to the point where a better winnowing of models according to their skill may soon be possible. But more on that in the New Year.
Doug Bostrom says
Don Shor says: 15 January 2010 at 1:04 PM
“And overall, this is the graph that tends to impress me the most:
http://en.wikipedia.org/wiki/File:Instrumental_Temperature_Record.png”
It is striking, and I think compelling data like that is why – whether by accident or design – doubters are moving en masse away from attempting scientific arguments to explain accumulating observations without resorting to anthropogenic forcing, and are instead attacking the observations themselves.
The data proved robust against attack fairly early on, so doubters are now moving to something that’s ultimately impossible to defend rationally, namely the claim that the scientists in charge of the data are corrupt.
This is the Hockey Stick gambit, on steroids.
I’m curious to see what exact angle they’ll work on the GRACE data. Presumably the “corruption” will have to be both broad and deep? A lot of prime contractors, subcontractors, and investigators are involved there.
Tilo Reber says
Completely: #844
If you had read the blog you would understand why your arguments are nonsense. Easterling and Wehner seek to show that there is nothing unusual about the current period by demonstrating that they can find periods just like it in the last 35 years. They also say that they can provide 10 to 20 year periods. They then provide two specific 9-year periods. I then show why their two periods are not the same as the current 12-year period. The list that you saw was a way to contrast the current period with their periods. If you had read the blog you would know that.
“Where do you pull these from?”
El Chichon was the reason for the flattening of E&W’s first example. Go read the blog.
“So your response as to the flaws is that when they pick a 9 year trend they’re wrong, but when you pick a 12 year trend it’s right because..?”
They are wrong because: A) their trends are not flat and mine is; B) my trend is 33% longer than theirs; C) most importantly, what flattening does happen in their trends can be shown to be the result of natural variation, and mine cannot.
The entire significance of the 12-year flat trend is not to deny that it could be caused by natural variation, but rather to assert that it is important because there are no known natural elements of variation to account for it. At least none that we understand. Hence the Trenberth quote that I gave you.
[Response: This has gone on for too long since everyone is simply repeating themselves. This is now OT. – gavin]
Completely Fed Up says
“My statement is that we have a 12 year flat trend from 98 to the present. ”
We don’t.
We have two numbers 12 years apart that aren’t all that different.
But two points do not make a trend.
(sorry, typo)
Completely Fed Up says
A trend requires significance.
Anyone who did statistics at school would know that.
Tilo doesn’t, or pretends not to, because his argument falls flat if you consider it.
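The significance point being argued here can be illustrated with the numbers from the post itself: a short-term trend whose 95% interval straddles zero excludes neither ‘no warming’ nor the long-term warming rate, so it supports neither claim.

```python
# The post's 1998-2009 HadCRUT3v trend, 0.06 +/- 0.14 degC/dec, is
# statistically consistent both with zero and with the long-term
# warming rate, which is exactly why short trends carry little weight.
def consistent(estimate, half_width, value):
    """True if `value` lies within the 95% interval of `estimate`."""
    return estimate - half_width <= value <= estimate + half_width

trend, ci = 0.06, 0.14
print(consistent(trend, ci, 0.0))    # 'no warming' is not excluded
print(consistent(trend, ci, 0.19))   # neither is ~0.19 degC/dec warming
```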
Tilo Reber says
Gavin:
“This has gone on for too long since everyone is simply repeating themselves. This is now OT. – gavin”
I’m okay with that. Thanks for letting me get the important points of my argument in.
Ray Ladbury says
You know what’s really funny, Tilo. When I try 1997 or 1999 on the “ENSO adjusted HADCRUT” I still don’t get a negative trend. So again, I’m left to wonder why you pick the “magic year” 1998.
And then you say, ” It doesn’t give you the strength of the event, and the 98 El Nino was very strong, but the 99 and 00 La Nina was very long.”
Uh, sorry Tilo, it doesn’t work this way. The 98 El Nino had MEI indices twice the magnitude of the 99-2000 La Nina.
So again, Tilo, shouldn’t a 12 year cooling trend also show cooling for 11 years?
Damn, I’d just love to see how you balance your checkbook!
Tristan says
Hello, still hoping for an answer to my previous post on this, found at the URL below:
https://www.realclimate.org/index.php/archives/2009/12/updates-to-model-data-comparisons/comment-page-17/#comment-154353
Ray Ladbury kindly responded:
“Tristan, Hansen’s model assumes a significantly higher CO2 sensitivity than do current models. That alone probably is sufficient to explain why it is running hot.”
Thank you Ray, but that isn’t really the answer I was looking for. That’s an answer for why the model is running hot… but I think it appears to be running much hotter than the post shows, because scenario B was selected when scenario A was closer to reality.
[Response: This isn’t true. Where did you get your information? The forcings are available here. – gavin]
I was looking for an answer on why, in this post, the author compares the eventual results to Scenario B rather than Scenario A where, I think, the emissions were, in fact, above that of Scenario C.
Why scenario B and not C for a comparison?
Also, why not plug in actual emissions to see what the model would have predicted?
I’m not trying to argue a position here: I’ve genuinely thought about it and am open to more evidence.
Tristan says
Thanks Gavin: I’m understanding the forcing stuff better now… but might still need more. But the term forcings might be confusing me.
Anyway, the initial basis for what I was saying is from the Hansen paper:
http://pubs.giss.nasa.gov/docs/1988/1988_Hansen_etal.pdf
On the bottom right hand paragraph of page 9343 it says:
Scenario A assumes that a growth rate similar to that of the 1970s and 1980s will continue indefinitely (1.5% of current emissions), so that net greenhouse forcing increases exponentially… but it also goes on to say that this growth in emissions is less than the typical rate over the last century (about 4% p/a).
Scenario B had a decrease in gas growth rates… not sure how much this is, but I guess
Scenario C has a drastic decrease in gas growth rates but includes several hypothetical effects (water vapor etc… On a side note I’m not really sure why these were not included in the other scenarios, or their effects not tested individually.) So that by 2000 the annual growth rate in trace gases is 0%… which I assume is a drop in emissions rather than a halting of growth, yes?
I can’t find the emissions growth… but I would have thought this to be higher than 1.5%, and more than linear growth.
Essentially, I can’t follow the blog post you mentioned, and it seems to gloss over a little why scenario B was selected
https://www.realclimate.org/index.php/archives/2007/05/hansens-1988-projections/
Does this change anything? Maybe another go at why B was selected will help me.
[Response: The reason for B is based on the net forcings seen in the figure and datafile. It’s really not that complicated. – gavin]
sgposs says
Thank you for the discussion. As a scientist in a totally different field, although not unaffected by the implications of AGW, it is informative to learn how scientists in other disciplines attempt to explain their work to the general public. In the case of climate science it seems obvious that a lot more general science education is needed. To this end I have but one brief comment/suggestion:
It would perhaps be useful for the climate change community to develop a comprehensive website that describes the various steps that go into assessing and predicting climate change, from the collection of data, through methods of analysis, to results, so that the less educated could become more educated. In particular, efforts might usefully be made to “atomize” each step so that all significant influences affecting the final results could be accounted for (being careful not to leave out a detailed discussion of each equation/theory used). Hypertext could be used to permit increasingly refined views, which need not be followed unless one were interested in learning more about technical details (but no technical detail need be or should be hidden, or left not heavily annotated and referenced). Such a system could then be used to: 1) allow users to get a structured overview of the topic and issues being discussed; 2) save climate scientists from having to take the trouble to respond to every ideological knee jerk that might be out there; 3) focus attention and debate on the more important/truly controversial “atoms” of information; 4) possibly allow various parameters of a complex model to be manipulated by end users, so that they could see the consequences of the various relationships as well; 5) put the onus on those who deny findings and scientific consensus for ideological rather than scientific reasons to explain how their “alternative proposals” have any scientific merit, besides mobilizing electrons across the internet; 6) put deniers to work in strengthening education on various issues; 7) make it easier for journalists to educate themselves; 8) draw in others from other scientific disciplines that could benefit from more accurately modeling various potential scenarios/predictions with respect to particular regions/topics.
Admittedly, it would not be possible to permit users to simulate large amounts of data that may take large systems days, weeks, or months to simulate/solve differential equations for, but such a system could also be used to better educate the public, and hopefully educate children so that they can become better able to understand technical writing and appreciate its implications. It will only be when the general public is better educated that a consensus will emerge on how to address the real and significant impacts of AGW.
I might add that you don’t even have to be a climate scientist to see these effects already: shifts in animal distributions and patterns of localized extinctions/population declines/shifts in species composition are already dramatically demonstrating this in very disturbing ways almost everywhere you look.
Tristan says
I can see the data lines, but Hansen seems to indicate that this growth rate is lower than what was happening in the 70s, and everything I read seems to imply that the growth of emissions presses on, which I assumed also included an accelerating growth ‘rate’.
So, basically, forcing (and emissions?) grew at a rate of less than 1.5% a year? Is that what this says?
Or is it saying that the ‘total forcings’ grew by less than 1.5%, taking into account other factors such as land use and clouds?
http://en.wikipedia.org/wiki/File:Radiative-forcings.svg
It’s the distinction between forcings and emissions that’s confusing me.
I can see you have the line of what happened following closest to scenario A… but I just want to know what that means.
Additionally, I’m trying to figure out how man’s emissions could be plugged into the model.
Anyway, one last response would be appreciated, at which time, if I don’t get it, I might as well give up.
[Response: Concentrations of CO2 determine the forcing, and the forcing is what the model responds to. But the concentrations grow less quickly than do emissions (because you are adding to a very large reservoir). For instance emissions in the last couple of years have increased at around 3%/yr, but concentrations have only grown at about 2 ppmv/yr (~0.5%/yr). In the model, the radiation code takes note of how much CO2 there is when calculating transmission and absorption. If you tell it that CO2 has gone up, the calculation will change accordingly. – gavin]
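Gavin’s inline point about the difference between emissions growth and concentration growth reduces to a one-line calculation; the round figures below are the approximate ones from the response.

```python
# Round numbers: adding ~2 ppmv/yr to a ~385 ppmv reservoir is only
# ~0.5%/yr growth in concentration, even while emissions themselves
# grow at ~3%/yr. Figures are the approximate ones from the response.
concentration = 385.0   # ppmv, roughly the 2009 CO2 level
annual_increase = 2.0   # ppmv/yr added to the atmosphere

fractional_growth = annual_increase / concentration
print(f"{fractional_growth:.1%} per year")  # 0.5% per year
```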
Phil. Felton says
Tristan says:
18 January 2010 at 2:37 AM
Anyway, the initial basis for what I was saying is from the Hansen paper … Maybe another go at why B was selected will help me.
Since you have the 1988 paper, try reading page 9345, where it clearly points out why Scenario B was expected to be the most likely one, which has been borne out by the last 20 years.
ge0050 says
Climate change has the elements of a fractal distribution. A few large changes, increasingly many small changes, scale independence. This is not a normal distribution, and models that assume increasing accuracy with longer forecast periods will not perform better than chance. Otherwise we could just as reliably predict long-range currency exchange rates that would become increasingly accurate with the length of the forecast.
Some models will do well, some will not, but none of them will do better in the long run than chance. Those that do well will be assumed to have “predicted” the future, but that will simply be an illusion.
The real world rarely operates like a fair coin toss. You cannot rely on heads and tails converging over time to minimize errors. Chaotic systems have inherently meaningless (unpredictable) time-series.
Looking at climate history there are clearly preferred states with rapid transitions between these states. Exactly what we would expect in a chaotic system.
Time-series modeling tells you very little about a chaotic system. As such, how well a climate model performs against time is essentially meaningless. You are simply measuring chance. Predicting the attractors is the key to modelling chaotic systems.
Septic Matthew says
860, Response by Gavin: [Response: Concentrations of CO2 determine the forcing, and the forcing is what the model responds to. But the concentrations grow less quickly than do emissions (because you are adding to a very large reservoir). For instance emissions in the last couple of years have increased at around 3%/yr, but concentrations have only grown at about 2 ppmv/yr (~0.5%/yr). In the model, the radiation code takes note of how much CO2 there is when calculating transmission and absorption. If you tell it that CO2 has gone up, the calculation will change accordingly. – gavin]
That answers a question I was going to ask.
Ray Ladbury says
ge0050 says “This is not a normal distribution and models that assume increasing accuracy with longer forecast periods will not perform better than chance.”
You might be right if these were statistical models. They aren’t. They are dynamical. Put your best physics in and let ’er run, and in a few hours or days of supercomputer time, you have one run.
These are completely different beasts.
Doug Bostrom says
Ray Ladbury says: 20 January 2010 at 6:55 PM
Just amplifying on “put your best physics in”, ge0050’s assertion is empty of value because the models in question here are ultimately based on physical constants and things in the material world with known values and properties. To compare human affairs such as international currency trading with a physical model is really quite absurd.
What is it you say? “Not even wrong?”
Mike Flynn says
@ge0050
You have hit the nail on the head. Unfortunately, many scientists abhor the very idea of a chaotic system, because it is, by its nature, inherently unpredictable in any useful fashion.
@Ray Ladbury.
May I quote part of a paper co-authored by R Allan of the Hadley Centre, UK Met Office : –
“Although climate variability is the consequence of an intrinsically non-linear, deterministically chaotic system, we can understand and predict (to a limited extent) aspects of this system and its behaviour. However, there are limits to what can be predicted and better understanding of these limits will not only help us to focus on what might be achievable, it will also help us to determine how best to use the valuable, but uncertain knowledge that we can gain about our future climate.
Two factors that impose limits on predictability of future climate states need to be distinguished: (a) uncertainty in initial conditions, including uncertainties about boundary conditions (as in the case of climate change) and (b) errors associated with, and gaps in, observational measurements and limitations of climate models, including the parameterisation of physical processes (Smith, 2000; Hasselmann, 2002). Hence, we can say a priori that for real, physical systems such as the earth’s climate, no perfect model exists (and never will). Consequently, we will always be restricted to probabilistic forecasting, since no accountable forecast system will be able to provide a credible, single outcome prediction or a deterministic forecast.”
Unfortunately, many people who should know better, confuse “predictions” with “assumptions”, and then further obfuscate the issue by talking about “probabilistic forecasts”. These are of course, largely meaningless, even when shrouded by sci-speak relating to confidence indices, standard deviation, FFTs and all the rest. If I gave you $50 to point a gun at your head and pull the trigger, and I told you that only one of the twenty bullets on the table was live, would you do it? After all, there is a 95% chance that the gun will not fire. In climate “predictions”, a 95% guess is often officially accorded “near certainty” status. Not enough for me, (or any rational person, I warrant), to bet my future on.
As to putting “your best figures in . . .”, this is precisely what led Lorenz to question why a minute change in initial conditions could lead to a staggering divergence in forecasts.
I will guess that ge0050 will agree with me inasmuch as the attractors “position” may change rapidly and without warning if the system being examined is on the verge of a bifurcation. Additionally, predicting the location of an attractor may only be possible by examining the chaotic system for a long period of time. Maybe billions of years for certain climate parameters – and that assumes that nothing else in the Universe is changing at the same time.
On a practical level, rather large amounts of money are spent by organisations with a vested interest in predicting the future, to obtain the services of the “best and the brightest” statisticians, mathematicians, actuaries, Nobel Prize-winning economists and so on. So I guess the global financial crisis never happened, and that governments all over the world know with a high degree of certainty the long-term future.
Pardon me while I roll on the floor laughing! (I apologise for the sarcasm – the Devil made me do it!)
Live well and prosper.
Barton Paul Levenson says
Hi, TOF. Still on the global warming denial bandwagon, I see.
BTW, it has never been proved that climate is, in fact, chaotic. Weather has been shown to be chaotic, not climate.
Completely Fed Up says
Mike 866: there’s another factor you’re forgetting:
What can you predict?
You can, for example, predict the Lorenz attractor.
It doesn’t depend on start conditions and doesn’t change based on errors and gaps.
Likewise, while predicting the 30-year mean temperature in Bombay in 2305 may not be possible, predicting that the globe as a whole will have warmed by X degrees doesn’t face the same obstacle.
They’re different questions and one avoids the problems of chaos and indeterminacy.
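The distinction being drawn here, that individual trajectories are unpredictable while statistics of the attractor are not, can be demonstrated numerically. This is an illustrative sketch using plain Euler integration of the Lorenz system, not a claim about any climate model.

```python
# Two Lorenz trajectories started a tiny distance apart: the paths
# decorrelate completely (the 'weather'), but time-averaged statistics
# of the attractor (the 'climate') barely differ. Plain Euler
# integration with a small step, for illustration only.
import numpy as np

def lorenz_z_series(z0, steps=300000, dt=1e-3,
                    sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system, returning the z time series."""
    x, y, z = 1.0, 1.0, z0
    zs = np.empty(steps)
    for i in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        zs[i] = z
    return zs

a = lorenz_z_series(1.0)
b = lorenz_z_series(1.0 + 1e-8)
print(abs(a[-1] - b[-1]))        # typically O(1) or more: endpoints decorrelated
print(abs(a.mean() - b.mean()))  # small: attractor statistics agree
```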
Completely Fed Up says
“862
ge0050 says:
20 January 2010 at 1:00 PM
Climate change has the elements of a fractal distribution.”
The Mandelbrot set has the elements of a fractal distribution. However, it still fits only and entirely within a bounded region of the complex plane (|c| ≤ 2) when you calculate it within those bounds.
Tautological, yes, but you cannot just say “it’s a fractal” as if that explains why you can’t measure or conclude anything.
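The point that fractal objects still have perfectly measurable bulk properties can be shown directly: a Monte Carlo estimate of the area of the Mandelbrot set converges to a definite number despite the infinitely detailed boundary. This is an illustrative sketch only.

```python
# Monte Carlo estimate of the area of the Mandelbrot set: a fractal
# whose bulk properties are nonetheless perfectly measurable.
import random

def in_set(c, max_iter=200):
    """Escape-time membership test with the standard radius-2 bound."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return False
    return True

random.seed(1)
samples = 20000
box_area = 3.0 * 3.0  # sampling square [-2, 1] x [-1.5, 1.5]
hits = sum(in_set(complex(random.uniform(-2.0, 1.0),
                          random.uniform(-1.5, 1.5)))
           for _ in range(samples))
area = box_area * hits / samples
print(round(area, 2))  # roughly 1.5; the known value is about 1.506
```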
Ray Ladbury says
Mike Flynn,
First, when they are talking about climate variability, they are talking about variability on a short timescale. Even here, it is not clear whether the system is in fact “chaotic” or rather merely too complex to model effectively given present resolution and computing power.
Second, chaotic systems are not entirely intractable and can be quasi-stable over long time periods. So if the climate is in fact chaotic, it is all the more critical that we avoid destabilizing it. Careful what you argue: if you advocate limited action on climate rather than all-out, draconian measures, you had better hope the climate is not chaotic.
Finally, you can argue all you want that climate is beyond human understanding. However, there is a large body of successful predictions by climate models that says you are wrong. I’ll go with the published successes rather than the unsubstantiated opinion of an anonymous yokel on the Intertubes, thank you all the same.
Hank Roberts says
> The real world rarely operates like a fair coin toss. You cannot rely
> on heads and tails converging over time to minimize errors.
You really need to get out more. Take a walk. Watch a bird. Look at some trees. Compare a few dozen snowflakes.
The real world is amazingly, profoundly, and utterly reassuringly good at getting things to come out consistently.
Now, back to your computer, where errors — you’re running Windows and IE?
John E. Pearson says
Climate is chaotic? Hmmpf. I never heard that claim before. Even if the claim is correct, it does not mean that climate is not predictable for the next 1-200 years. Chaotic systems can be predicted for a Lyapunov time or two.
Mike Flynn says
@BartonPaulLevenson
Well, if a respected scientist like Allan (Hadley Centre, UK Met Office) states that climate is chaotic, who am I to disagree?
@Completely Fed Up.
I’m not sure if I understand you. The Lorenz attractor has been proven to be a strange attractor, which amongst other things means that you cannot effectively predict a trajectory’s position on the attractor, as trajectories appear to skip around randomly. The Lorenz attractor does depend on start conditions.
As to predicting the amount of “global warming”, you will have to predict the individual temperatures that you are going to average to get an “average” figure. Unless you want to guess two, five, twenty degrees or whatever without defining your “global temperature”. Say you assume a two degree C “global” increase. Is this to be applied to the atmosphere uniformly? Will the abyssal depths rise by the same quantum as the land surface of the tropics? I hope you see my point. You can’t “average” temperatures without some data to average.
It’s easy to avoid difficult problems by assuming they don’t exist. Good luck with that.
@Ray Ladbury.
I agree with you “Even here, it is not clear whether the system is in fact “chaotic” or rather merely too complex to model effectively given present resolution and computing power.” So how do you make your predictions if it is too complex . . .?
Chaotic systems can indeed be reasonably stable for indefinite periods of time. The problem is that you can’t predict when the system changes state. A bit like trying to predict whether or not an earthquake will occur, and where, when and how big, based on the known physics of the behavior of the Earth. Not the same, I know, but analogous. Substitute hurricane, flood, drought, or ice age for earthquake and it should become clearer.
As to destabilizing a chaotic system, I am not sure what you mean. A dynamical system with sensitive dependence on initial conditions, if chaotic, “destabilizes” all by itself. You cannot predict what your well-meaning attempts may bring. For example, you might trigger another rapid-onset ice age. Conversely (or perversely), unexpected runaway temperature rises might result, leading to total extinction of life on Earth.
As to your comment that ” . . there is a large body of successful predictions by climate models that says you are wrong.”, I point out that there is an even larger body of unsuccessful predictions. Unfortunately, the believers in accurate deterministic forecasts tend to crow about their lucky guesses, and not publish (or even discuss) the forecasts which turned out not to be right.
@John E Pearson
You must lead a sheltered life. The British Met Office (and, by extension, probably some of the workers at the Hadley CRU) accept, albeit grudgingly, and only after quoting some of their less widely publicised papers back at them, that climate is chaotic in the sense of intrinsically non linear, and sensitively dependent on initial conditions.
[Response: I may have missed this, but I am unaware of any demonstration that climate is sensitively dependent on initial conditions. On the contrary, all models of the climate do not exhibit this. You can start anywhere you like, and while the specific trajectory is indeed chaotic, the mean temperature, cloud cover, sensitivity to increased CO2 are all independent of the initial conditions. This is true for any level of complexity so far examined. This obviously is not a statement about the real world, but I have no idea how you would demonstrate that the real world climate is chaotic – it’s impossible to prove (though it seems very likely) that even the weather is chaotic. Something Lorenz often mentioned. – gavin]
Unfortunately, you first have to measure the Lyapunov time. Thus, you have to observe the start, and the subsequent behavior of the system. Even more unfortunately, on its way to chaos, the system may undergo, without warning, bifurcations – by definition, within the Lyapunov time. It may be possible that such events as glacial/interglacial periods represent larger scale states within the overall dynamical system. Sudden climate changes like long droughts, long periods of above average rainfall and so on, may be evidence of smaller scale chaotic bifurcations. Who knows?
[edit – stick to the point]
Live well and prosper.
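Gavin’s point above – that the individual trajectory is chaotic while the mean statistics are not – can be illustrated with a toy sketch of the Lorenz (1963) system. Everything here (step size, run length, initial conditions) is an illustrative choice, not a climate model:

```python
import math

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz (1963) system.
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def run(x0, n_steps=200_000, burn_in=20_000):
    # Integrate from (x0, 1.0, 1.05); return final x and the time-mean of z.
    x, y, z = x0, 1.0, 1.05
    z_sum = 0.0
    for i in range(n_steps):
        x, y, z = lorenz_step(x, y, z)
        if i >= burn_in:
            z_sum += z
    return x, z_sum / (n_steps - burn_in)

# Two runs whose initial conditions differ by one part in 10^8.
x_a, mean_z_a = run(1.0)
x_b, mean_z_b = run(1.0 + 1e-8)

# The trajectories decorrelate completely (the "weather" is unpredictable)...
print("final-state difference:", abs(x_a - x_b))
# ...but the long-run statistic barely notices the initial condition.
print("difference in time-mean z:", abs(mean_z_a - mean_z_b))
```

None of this proves anything about the real climate (as gavin notes), but it shows that “chaotic trajectory” and “predictable statistics” are not contradictory.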
John E. Pearson says
873: Mike Flynn wrote sci-co blather “Even more unfortunately, on its way to chaos, the system my undergo, without warning, bifurcations – by definition, within the Lyapunov time. … yada yada yada Unfortunately, you first have to measure the Lyapunov time. Thus, you have to observe the start, and the subsequent behavior of the system.”
Your post is pure nonsense. Little point in responding except to point out to lurkers that it is in fact nonsense.
Hank Roberts says
Hm, well I’ve spent half an hour trying to find a source on which one might base Mike Flynn’s claim that Allan of the Met Office says climate is chaotic. Not found. Not even close. Copy editor says: Citation needed.
Found this though:
http://people.uncw.edu/binghamf/phy420/spring08/Raisanen2007.pdf
— excerpt follows—
“… Another issue that has implications for the interpretation of both model results and observed climate changes is the chaotic nature of weather and climate caused by the non-linearity of the governing equations (Lorenz, 1963). In numerical weather prediction, the skill of the forecasts deteriorates rapidly with time. …
… When the same climate model is run several times with the same external conditions but with different initial states, the resulting time-series differ so that, for example, the timing of individual warm or cold years is uncorrelated between the simulations (e.g. Stott et al., 2000). However, the magnitude of the differences saturates rapidly, rather than continuing the exponential growth characterizing the first days of weather forecasts. If the increase in greenhouse gas concentrations or other external forcing applied in the simulations is strong enough, the climate changes associated with this forcing will become larger than the internal variability resulting from the chaotic nature of the system.
This is analogous to the seasonal cycle of weather: despite inter-annual variability, there is a distinct difference between winter and summer conditions forced by the seasonal cycle in solar elevation.
Internal variability may either add to or subtract from forced climate changes, both in the real world and in model simulations. However, the magnitude of the variability decreases with increasing temporal and spatial averaging. Thus, the associated uncertainty in multidecadal global means is much smaller than that in individual yearly values at a single location….”
—end excerpt —-
And this
http://www.met.sjsu.edu/~tesfai/RESULTS/Journals/how%20well%20do%20we%20understand%20and%20evaluate%20climate%20change%20feedback%20processes.pdf
VOLUME 19 JOURNAL OF CLIMATE 1 AUGUST 2006 REVIEW ARTICLE
How Well Do We Understand and Evaluate Climate Change Feedback Processes?
—excerpt—
A particular difficulty in the interpretation of feedback processes arises from the time scales of the different responses. Some processes participating in the feedback mechanisms may be very fast, some very slow.
While the nonlinear equations fundamental to atmospheric GCMs cause a sensitivity to initial conditions that leads to chaotic behavior and a lack of predictability for weather events, there is little evidence for an
equivalent degree of sensitivity to initial conditions and lack of predictability in predictions of temporally averaged climate variables from coupled GCMs. For this reason, climate feedbacks have traditionally been analyzed in terms of the new equilibria ….
–end excerpt—
Doug Bostrom says
Mike Flynn says: 21 January 2010 at 9:34 PM
You don’t really know much about climate modeling, or so you say. Why not get up to date instead of risking your reputation by mashing up economics with something unrelated?
Read a bit:
http://www.aip.org/history/climate/GCM.htm
david says
Mike Flynn wrote :
“If I gave you $50 to point a gun at your head and pull the trigger, and I told you that only one of the twenty bullets on the table was live, would you do it? After all, there is a 95% chance that the gun will not fire. In climate “predictions”, a 95% guess is often officially accorded “near certainty” status. Not enough for me, (or any rational person, I warrant), to bet my future on.”
But you/we are betting our future on it. Except that our choice is between spending $50 or pulling the trigger. And we are choosing to pull the trigger. Furthermore, just to make things interesting, the 95% confidence for the IPCC predictions means that we have 19 bullets in the chamber.
Completely Fed Up says
JEP: Check up a google:
http://en.wikipedia.org/wiki/Lyapunov_stability
It’s pretty easy to see too: look at the Lorenz attractor. See where the lines are widely spread? Well, your system isn’t sensitive to changes in those regions, since differences between tracks with very different outcomes need big changes to get there.
Yet where the lines are close, a small change can make a big difference to where the system goes next.
Those tight areas are where you can, for example, use the slingshot effect to change your orbit greatly.
Completely Fed Up says
“@Completely Fed Up.
The Lorenz attractor does depend on start conditions. ”
But you can predict where the strange attractor sits. Despite being strange and chaotic, you can predict its shape quite accurately.
Barton Paul Levenson says
TOF: As to predicting the amount of “global warming”, you will have to predict the individual temperatures that you are going to average to get an “average” figure. Unless you want to guess two, five, twenty degrees or whatever without defining your “global temperature”. Say you assume a two degree C “global” increase. Is this to be applied to the atmosphere uniformly? Will the abyssal depths rise by the same quantum as the land surface of the tropics? I hope you see my point. You can’t “average” temperatures without some data to average.
BPL: You take the temperature found at meteorological stations and SST observations around the world. You find the average in each equally-sized grid square. Then you average the averages. Voila–mean global surface temperature. Average it over the year to eliminate seasonal effects. Voila encore–mean global annual surface temperature (MGAST), which is 287 or 288 K.
The tired old denier argument that “mean temperature doesn’t mean anything!” is unphysical and violates common sense as well as logic. Which is hotter at the surface, Venus or Earth? If you say “Venus,” you’re acknowledging that there is such a thing as a mean global annual surface temperature. In the standard atmospheres for each planet, it’s 735.3 K for Venus and 288.15 K for Earth.
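BPL’s recipe can be sketched in a few lines. The one refinement worth making explicit is that equal-angle grid boxes shrink toward the poles, so each box is weighted by the cosine of its latitude. A minimal sketch with a made-up four-box “globe” (the numbers are invented, and real products average anomalies rather than absolute temperatures):

```python
import math

def global_mean(grid):
    # grid maps (lat_deg, lon_deg) box centres to mean temperature (K).
    # Equal-angle boxes cover less area near the poles, so weight by cos(lat).
    num = den = 0.0
    for (lat, _lon), temp in grid.items():
        w = math.cos(math.radians(lat))
        num += w * temp
        den += w
    return num / den

# Toy four-box "globe": warm tropics, cold high latitudes (invented values).
toy = {(10, 0): 300.0, (-10, 0): 299.0, (70, 0): 260.0, (-70, 0): 255.0}
print(global_mean(toy))  # pulled toward the tropical boxes by the weighting
```

The weighted mean sits closer to the tropical values than a naive average would, because the tropical boxes represent far more of the Earth’s surface.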
Kevin McKinney says
The question of whether or not there is “such a thing as global temperature” leads rapidly to intractable philosophical questions about “ontology.” (I.e., the study of being – I suppose you could go all Clinton and say it’s the question of what “the meaning of is, is.”) We should all know this intuitively by now: the jokes about average families having 2.5 kids (or whatever the current number is) have been around for many decades.
But there is unquestionably a statistical measure called a mean, and it is unquestionably useful in all kinds of different situations–since it is in fact widely used, generally without undue concern about its exact ontological status. Why should the fact that out there somewhere is a man whose height is exactly the global mean affect our perception of whether the mean is valid? I say it shouldn’t.
As to the idea that, to calculate future mean global temps, you need to calculate future individual temps–well, funnily enough, that’s essentially how it’s done. (According to the RC FAQ on models, generally in 100×100 km grid boxes.)
Mike Flynn says
Hi all.
Darn it! I was about to admit defeat, and that you could predict climate.
However – from IPCC AR4 WG1 Chapter 1
– excerpt –
Originally, it was thought that predictions of the second kind do not at all depend on initial conditions. Instead, they are intended to determine how the statistical properties of the climate system (e.g., the average annual global mean temperature, or the expected number of winter storms or hurricanes, or the average monsoon rainfall) change as some external forcing parameter, for example CO2 content, is altered. Estimates of future climate scenarios as a function of the concentration of atmospheric greenhouse gases are typical examples of predictions of the second kind. However, ensemble simulations show that the projections tend to form clusters around a number of attractors as a function of their initial state
– excerpt –
Also, from IPCC (TAR)
– excerpt –
14.2.2 Predictability in a Chaotic System
The climate system is particularly challenging since it is known that components in the system are inherently chaotic; there are feedbacks that could potentially switch sign, and there are central processes that affect the system in a complicated, non-linear manner. These complex, chaotic, non-linear dynamics are an inherent aspect of the climate system. As the IPCC WGI Second Assessment Report (IPCC, 1996) (hereafter SAR) has previously noted, future unexpected, large and rapid climate system changes (as have occurred in the past) are, by their nature, difficult to predict. This implies that future climate changes may also involve ‘surprises’. In particular, these arise from the non-linear, chaotic nature of the climate system
– excerpt –
A couple of points. Gavin’s response appears to me to be in conflict with the IPCC excerpt first noted, which states that thinking has changed about the independence of some factors such as CO2 concentration. I may be misunderstanding him, and if so, my apologies.
The second point is that ensemble simulations (which are incredibly computationally intensive, according to the IPCC scientists) are tending to form clusters around attractors. So – believe or don’t believe that weather AND climate may behave in a chaotic fashion. I take Gavin’s point that the chaotic nature of climate has yet to be established. The IPCC agrees – need more computing power. Don’t hold your breath.
@John E Pearson.
Sorry. I misspoke. The Lyapunov time would date from the first example of chaotic behaviour, if the system turned out to be chaotic. Correct me if I’m wrong. Thinking about chaos often induces a similar state in my mind, obviously. Once again, apologies.
@Hank Roberts.
Probably too late. The excerpt was a cut and paste. I assumed, obviously incorrectly, that it would be easy to find.
@Doug Bostrom.
I thought that the IPCC AR4 would be accepted as reasonably up to date. My bad. As an aside, the chairman of the IPCC has a PhD in Economics, as well as his engineering qualifications. Not good?
@david.
I’m no statistician, but 95% means 95/100ths, or 19/20. Chance that it won’t happen 1/20. If I’m wrong, probably won’t change your likelihood of taking the bet.
@Completely Fed Up
I assume you are aware that the Lorenz attractor exists in 3d space. Only three variables, output of a simple oscillator. The shape of the attractor is quite variable. If I supply you with 3 variables to plug into the Lorenz oscillator, what method will you use to determine the shape of the resultant output plot? It may be a 3d trefoil (stable) or more complicated toroid, may be chaotic (and look a bit like a butterfly from some angles).
Good luck.
@ Barton Paul Levenson
Unphysical? I like the term. As to the average surface temperature, which, as you state, is either 287, 288, or 288.15 K, you don’t state the allowance for statistical error. If I measure my mains voltage as 239.6 VAC with my tolerably expensive measuring device, will it help you if I also mention that the manufacturer’s margin of error for AC voltages in that range is plus or minus 3 percent? Averaging a series of readings may or may not help to establish the true voltage.
As a fully qualified and well trained meteorological observer (in a past life) I can assure you that instruments used to measure such quantities as temperature, barometric pressure, humidity, wind strength and direction, and so on, have built in inaccuracies, dependent on maker, maintenance, and siting, amongst other things. Add to that the mood and state of sobriety of the observer at 3 am at 20 below, out in the dark in a howling wind with a small torch, a pencil and a notebook, and tell me about accuracy.
To add insult to injury, most of the observations in Australia were provided by Police Officers, Post Office workers or medium level poorly trained civil servants in smaller towns, Aboriginal communities and so on.
Which might tend to explain why Climate Centers need to “massage” the data to get rid of the most blatant examples of “creative observing” to avoid the aforementioned howling gale, and to provide a continuous record whilst on holiday. You are right, very “unphysical”, but true nevertheless. Good luck with the averages. When I am trekking in the Nepal Himalaya, I find the average temperature for the trek of less use than assumptions about the extremes. For example, a 28-day trek circling the Annapurna Massif. High temp around 38 C. Low temp around minus 8 C. Not interested in the average at all.
You may care to agree with me (or not) that a temperature increase of 5 deg C to frozen Minnesotans may be a good thing, whereas a 3 deg C increase in the temperature of the Southern ocean down to 50 metres or so could be bad thing. And so on.
@Kevin McKinney.
Not sure whether your comment is directed to me. I’m sure that there is a mean. I’m not sure what use it is. It gives me no information about conditions at the Poles, or in Death Valley. If the mean temperature rises by say 3 deg C, what will the effects at the Poles or Death Valley be? Up, down, sideways? Am I allowed to ask?
To all –
Read the IPCC FAR Physical Science Basis for some illuminating insights. Global warming? Sure looks like the Ice Age has warmed up and gone away. Is climate change predictable? According to the IPCC – maybe, maybe not.
Live well and prosper.
Barton Paul Levenson says
TOF: As a fully qualified and well trained meteorological observer (in a past life) I can assure you that instruments used to measure such quantities as temperature, barometric pressure, humidity, wind strength and direction, and so on, have built in inaccuracies, dependent on maker, maintenance, and siting, amongst other things. Add to that the mood and state of sobriety of the observer at 3 am at 20 below, out in the dark in a howling wind with a small torch, a pencil and a notebook, and tell me about accuracy.
To add insult to injury, most of the observations in Australia were provided by Police Officers, Post Office workers or medium level poorly trained civil servants in smaller towns, Aboriginal communities and so on.
BPL: I seem to recall you had some knowledge of statistics. You know very well that the inaccuracy of a single measurement is NOT the same as the inaccuracy of a large number of measurements.
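The statistical point is just the standard error of the mean: for independent random errors, the spread of an N-reading average shrinks like 1/sqrt(N). A quick simulation (the “true” temperature and per-reading error are invented for illustration):

```python
import random
import statistics

random.seed(0)
TRUE_T = 15.0   # hypothetical "true" temperature (deg C)
SIGMA = 1.0     # hypothetical per-reading instrument error (std dev)

def mean_of_n(n):
    # Average n independent noisy readings of the same true value.
    return statistics.mean(random.gauss(TRUE_T, SIGMA) for _ in range(n))

def spread(n, trials=2000):
    # Repeat the n-reading experiment many times; how scattered is the mean?
    return statistics.pstdev(mean_of_n(n) for _ in range(trials))

s1, s100 = spread(1), spread(100)
print("spread of a single reading:  ", round(s1, 3))
print("spread of a 100-reading mean:", round(s100, 3))
# The second number is roughly s1 / sqrt(100), i.e. about ten times smaller.
```

This is about random error only, of course; systematic biases (siting, instrument drift, observer habits) are exactly what the homogenisation procedures discussed elsewhere in this thread are for.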
And what’s with the crack about aborigines? You think they’re too stupid to read thermometers? Are you an Australian, by any chance? A white one, that is.
Doug Bostrom says
I take it Mike Flynn did not follow my advice. Oh, well, can’t win ’em all.
Just in case you’re still there, Mike, here’s a reminder:
http://www.aip.org/history/climate/GCM.htm
As I’ve mentioned elsewhere, regardless of what prejudice you’re infected by, Weart is a great resource for information as opposed to rumor.
Mike Flynn says
@Doug Bostrom
I would be most grateful if you could let me know why you think I didn’t read the document you provided the link for. I can’t understand why you would think such a thing, apparently without any reason at all.
You might also care to define what prejudice you think I am infected with, and why. I am not sure to which rumours you refer. Your post is most puzzling. Have you confused me with someone else?
Live well and prosper.
Mike Flynn says
@ Barton Paul Levenson.
I have responded directly to realclimate.org. The comment is fairly lengthy – the moderator may allow posting, or not.
The short response to your first question – what crack about aborigines?
Your second question – For the purpose of this forum, yes, I hold Australian citizenship, and yes, I have pale skin.
If this makes me inferior in your eyes, so be it.
As was occasionally interjected during Monty Python’s Flying Circus, this is getting Silly. Far too much Silliness.
Live well and prosper.
Michael says
“The 2009 number is the Jan-Nov average.”
You forgot December? One of the coldest months of the year? Wouldn’t that have an impact on temperature?
[Response: Note the date of the blog post. And no, it doesn’t make much difference because what we are plotting is the anomaly. Including Dec would actually warm the observations. – gavin]
Hank Roberts says
Mike Flynn —
Sounds like you’re not the person who posted under that name earlier? Unless you link to a website (the third box at the response prompt) you can easily get confused with someone using the same name, or surname. Here’s how to check
http://www.google.com/search?q=site%3Arealclimate.org+flynn
Any of those that are or aren’t yours, to clarify?
Completely Fed Up says
“I assume you are aware that the Lorenz attractor exists in 3d space.”
Yup.
And the attractor is soluble and steady.
And from it you can say where your measurement error can have enormous effect (or, in the case of orbital shifting, where you can give a small push to get a big output: which is itself also predictable. How else did we manage the Voyager and Pioneer slingshots?) or where it has minimal impact.
Predictability.
Through chaos.
You just have to change what you want to predict.
t_p_hamilton says
“The 2009 number is the Jan-Nov average.”
You forgot December? One of the coldest months of the year? Wouldn’t that have an impact on temperature?
[Response: Note the date of the blog post. And no, it doesn’t make much difference because what we are plotting is the anomaly. Including Dec would actually warm the observations. – gavin]
Also, it is also summer for half of the planet.
Barton Paul Levenson says
TOF: The short response to your first question – what crack about aborigines?
BPL: The one where you said some of the measurements made were unreliable because they were made by aborigines.
TOF: Your second question – For the purpose of this forum, yes, I hold Australian citizenship, and yes, I have pale skin.
If this makes me inferior in your eyes, so be it.
BPL: Nope. Never said that. On the other hand, it would account for your attitude toward the aborigines. I mean, it was within our lifetime that the Australian government stopped stealing their children, wasn’t it?
Hank Roberts says
Guys, please.
Memory is so 20th Century.
http://www.google.com/search?q=site%3Arealclimate.org+aborigines
Sepilok says
Hi BPL,
As a white Australian – I would suggest that not all white Australians hold the Aboriginals in disregard, and the ones that do are d%#&heads. Unfortunately there are a fair number of them in Oz.
As to TOF’s suggestion that their lack of education (which I suspect he is trying to suggest, rather than race) makes them unreliable recorders, I would disagree based on my experience with some of the different ethnic groups in Borneo. My best and most dedicated field assistants have generally been the ones that haven’t had the opportunity to get a formal education.
But back onto a slightly more CC related topic:
The accuracy of downscaled predictions from GCMs. I’ve just read the Nature commentary which suggests this is one of the problem areas of CC modeling, especially changes in rainfall patterns. The problem is that we need downscaled predictions when it comes to developing management plans for forest reserves and protected areas in the tropics. While it is likely that changes in temp will affect some species, it is changes in rainfall that are likely to have the biggest management implications.
Are there ways to improve these predictions or check their performance? I’m in the process of trying to prepare a summary on the potential impact of CC on the Forest reserves and terrestrial NP in Sabah, based on the WorldClim datasets, and need to be able to give some indication as to how “likely” the predictions are.
i.e. I need to be able to convince the Director of Forestry that a prediction of a 10-20% drier dry season by 2050 (based on the CCCMA data from WorldClim) is a distinct possibility and that we need to start to look at ways to improve the FR resilience to fire (i.e. look at changing adjacent land-use practices, restoring vegetation cover, etc.).
Advice/comments greatly appreciated.
Mike Flynn says
@Barton Paul Levenson
Could you please cut and paste for me. The closest I can find is this : –
“To add insult to injury, most of the observations in Australia were provided by Police Officers, Post Office workers or medium level poorly trained civil servants in smaller towns, Aboriginal communities and so on.”
I am not sure what you are talking about. What observations taken by Aboriginals? Where? Where have I mentioned or indicated any “attitude” towards the Aboriginals? Yes, I have Australian citizenship, yes, my skin is pale. You have somehow assumed I am not Aboriginal as defined by the Australian Aboriginal and Torres Strait Islander Act 2005.
I like a guy who doesn’t let the truth affect his assumptions.
As to your comment about “child stealing”, children of all ethnic groups are still removed by (usually) State Government child protection authorities, where necessary. The relevant laws are quite possibly similar to those applying in your locality – unless you live in some uncivilized hellhole. I take that back. Some uncivilized hellholes do have strong child protection laws.
I’m not sure why you think I had influence over the various Federal Governments during my lifetime. Or maybe you just plain don’t agree with the findings of the IPCC AR4 WG1. Take it up with them.
Live well and prosper.
Mike Flynn says
@Completely Fed Up
Up to a certain value of the Rayleigh number, the attractor is one of two points. After a certain value, behaviour changes. The attractor looks totally different to me. Try rho=28.0, sigma=10.0, beta=0.683902. Predict the shape. Is it chaotic, periodic or what? Does it look like a point to you? Please demonstrate the mechanism by which you produced the depiction of the attractor without iterating the input differential equations.
Good luck with that.
@Hank Roberts.
I have better things to waste my time on. Honestly. I am guessing I quoted something you can’t find. Keep trying. Took me less than 5 minutes.
@BPL
Just for fun, use your deductive powers to figure out my ethnic background. Tip – I also hold USA citizenship. It was within my lifetime that many US presidents came and went. So – my attitude to Native Americans would be . . .? Enough clues!
Good luck with that. In the meantime, read the AR4 WG1 paper – pay close attention to the footnotes and definitions. If you disagree with the science used, and the conclusions inferred therefrom, complain to the chairman, not me. I didn’t contribute, as far as I can remember.
@all –
Too silly. Too much Silliness. I’m off. By the way, my modeling shows 3 deg C increase to 2090, 5 deg C cooling from temp max to 2160. I have used publicly available data, and my model seems to accord pretty well with inferred temps back to about 1000 BC. Not at all certain about my forecast. If it comes to pass, I will credit blind luck. I will bundle up my bits and pieces, leave it with my grandkids. They may well laugh themselves silly about how wrong their grandfather was!!
Live well and prosper.
Luke Silburn says
Barton@891:
“The one where you said some of the measurements made were unreliable because they were made by aborigines.”
Barton you’ve misread what Mike Flynn wrote. He said many of the measurements were taken by govt workers (such as police officers, postmen, civil servants) based in small towns and aboriginal settlements scattered across the outback.
Ethnicity of said govt workers wasn’t implied or inferred.
Regards
Luke
Sou says
BPL: We have enough strife on the issue of climate without bringing claims of racism. If I look at the post, the writer was talking about post office employees and clerks in small towns and remote communities. Those in other countries might not be aware that many if not most of those remote communities across large parts of Australia just happen to be Aboriginal communities. That does not mean any slur on capability of Aboriginals. The writer was talking about lack of training of post office employees, not lack of capability of Aboriginal people.
As I’ve seen recently elsewhere, sometimes cultural norms totally unique to one country are misapplied to other countries. This can unfortunately make some people from other countries hypersensitive to certain words and phrases, and therefore distort the meaning of what people say.
Can we just leave it at that please, cut out silly accusations of racism and get back to climate.
If you still want to have a go, shift the discussion to the level of training given to the early operators of weather stations in Australia. Dedicated enthusiasts working through the government not only set up weather stations across Australia and trained their operators, but were also responsible for the building of telegraph communications spanning the continent, which was needed to communicate the weather data. Given the vast distances across inhospitable unpopulated areas of Australia, and the lack of transport and other infrastructure at the time, that was a major achievement.
Luke Silburn says
Mike Flynn @82:
The imperfections in the surface networks are, of course, well known to the scientists who produce the historical instrumental temperature record and considerable effort has been devoted to figuring out how to detect and correct for the sorts of data quality issues you expound upon.
Fortunately an acceptably accurate estimate of the global surface temperature can be reached with a surprisingly small number of measurements (~100 stations IIRC, provided they were ideally distributed around the globe), which means that there is plenty of spare capacity in the station networks that are used. These duplications give the research teams lots of surplus material to work with when it comes to performing statistical validations, and there is an extensive literature of peer-reviewed papers which document these tests, the homogenisation/correction procedures which are adopted to deal with the data quality issues that have been detected, and the confidence levels that can be ascribed to the final data as a result.
If you (or any lurkers who are still reading) have any doubts regarding the quality of the data which feed in to the surface temperature records then I would suggest that you avail yourself of this literature in order to understand how the problems you describe have been addressed. The best place to start is the supplementary material for Chapter 3 of the IPCC AR4 at http://ipcc-wg1.ucar.edu/wg1/Report/suppl/docs/AR4WG1_Pub_Ch03-SM.pdf (“Techniques, Error Estimation and Measurement Systems”) and then follow the references back to the peer-reviewed literature.
Regards
Luke
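The ~100-station point can be illustrated with a toy experiment: sample a smooth, completely made-up large-scale anomaly field at 100 area-uniform random points and compare with a dense grid average. Everything here is synthetic; it only shows why a large-scale mean needs relatively few well-placed samples:

```python
import math
import random

random.seed(1)

def anomaly(lat, lon):
    # Invented smooth large-scale anomaly field (deg C), illustration only.
    return 0.5 + 0.3 * math.sin(math.radians(lat)) \
               + 0.2 * math.cos(math.radians(2 * lon))

def dense_mean(step=2):
    # "Truth": a dense cos(lat)-weighted average over the whole grid.
    num = den = 0.0
    for lat in range(-89, 90, step):
        for lon in range(0, 360, step):
            w = math.cos(math.radians(lat))
            num += w * anomaly(lat, lon)
            den += w
    return num / den

def station_mean(n_stations=100):
    # Estimate from n_stations points sampled uniformly over the sphere.
    total = 0.0
    for _ in range(n_stations):
        lat = math.degrees(math.asin(random.uniform(-1.0, 1.0)))  # area-uniform
        lon = random.uniform(0.0, 360.0)
        total += anomaly(lat, lon)
    return total / n_stations

print("dense grid mean: ", round(dense_mean(), 3))
print("100-station mean:", round(station_mean(), 3))
```

A real network has clustering, gaps and inhomogeneities, which is exactly what the validation literature Luke points to deals with; the sketch only illustrates why ~100 well-distributed stations is not an absurd number for pinning down a large-scale mean.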
Completely Fed Up says
RS, where are your papers?
I can’t find any.
I can find this:
“16 Oct 2009 … Richard Steckis. I am preparing a paper for publication in a peer reviewed journal. I will also publish it on my blog site in due course. .”
On Desmogblog.
Ever published it?
Mike:
“Try rho=28.0, sigma= 10.0 beta=0.683902. Predict the shape.”
Now try it again.
Is it the same shape?
Oh, looky.
Predictability.
Mark C says
Please can you update the average 2009 temperature to include December and re-plot the point, now that you have the data?