473 Responses to “What the IPCC models really say”
Donald E. Flood says
So, to give one possibility, if the global mean temperature from 2050 to 2070 would end up being lower than the 1950 to 1970 global mean temperature, would that be enough to falsify the IPCC projections, assuming no volcanic eruptions, cometary impacts, etc.?
[Response: …and that the trajectories of the GHGs and aerosols looked something like this scenario. Yes. – gavin]
Richard Pauli says
A caveat clearly seen on some IPCC charts:
“Model-based range excluding future rapid dynamical changes in ice flow” Was this authored in 2005 or 2006?
We cultivate confusion by failing to have constant IPCC studies and updated reports.
A. Fucaloro says
It would be impossible to deconvolve a trend signal caused by CO2 increase if the climate were mediated by a cycle that is long enough and strong enough. Do we know for sure that the medieval warming and subsequent Little Ice Age are not manifestations of such a cycle?
tharanga says
A relevant post; some sceptics love to say that the projections have been wrong, without actually knowing what the projections were.
Some basic questions:
If I understand it correctly, Keenlyside et al attempted to achieve more realistic realizations by using realistic initial values. Can you explain the standard ‘un-initialised’ ensemble approach? Surely, a model that runs over time requires some sort of initial conditions; are these randomly chosen for each realization within the ensemble? It couldn’t be that random, though – to some extent, they must be constrained by observational data, no?
Also, I’ve noticed that the various models tend to agree with each other within hindcasts, but there is rather more of a spread in the future projections. I’m told that the hindcasts are honest exercises, and not curve-fits, but in that case, shouldn’t there be more of a spread amongst the models in the hindcasts, as well?
Finally – any attempts I’ve seen to judge prior model projections involve picking the results for the scenario (A1B, or what have you) which came closest to the actual forcings over the period in question. Instead of that, why not dig up those prior versions of the models and re-run them with the actual forcings: CO2, sulphates, volcanos, etc? It’s the range of unforced natural variability we are interested in here, not the ability of modelers to predict external forcings.
One Salient Oversight says
I love the “Pinatubo dip” in the first graph (1991).
Maybe there is some legitimacy in the idea of “Dr Evil” to seed the upper atmosphere with particulates via 747s… but it only works in the short term. Once the aerosols dissipate, the curve keeps going up.
David says
When the models show cooling for a few years, is this due to heat actually leaving the (simulated) planet, or due to heat being stored in the ocean ?
[Response: You’d need to look directly at the TOA net radiation. I would imagine it’s a bit of both. – gavin]
Sascha Samadi says
Thanks for the interesting and easy to understand read, Gavin. It’s hard for me to understand why some people apparently have a hard time distinguishing between individual model runs and ensemble means. It doesn’t seem to be too complicated…
Gareth Evans says
Back to scientific business and a welcome post by Real Climate. The important message in layman’s terms is that we must not confuse “weather” with climate. The greenhouse gases we emit warm the earth – this has been known for a long time (back to Arrhenius). The temperature of the earth would be much colder, roughly that of the moon, were it not for greenhouse gas warming. Extra global energy, from increased greenhouse gas concentrations in the atmosphere, is redistributed around the earth by natural circulation processes. These are complex processes that may be interrelated. In addition, there are natural cyclic events that may affect weather (and climate), and the unexpected (e.g. a significant volcanic eruption) is always a possibility. There will always be “weather” fluctuations, and the various climate models produce a range of possible future outcomes. So what we must focus on in this debate are the mean trends (and climate). This is exactly what the IPCC and groups like Real Climate have been telling us. We need to develop ways, however, of introducing a regional focus into this debate and the important role of other warming influences such as land use, urbanisation etc. This would help to improve the general understanding and wider acceptance of the issues involved. The focus on global, annual means does not always make the necessary local impact (and may be concealing important subtleties – such as any seasonal variations in the impact of an increasing global temperature).
pete best says
So in the grand scheme of GCM analysis, what exactly are these recent model runs that made it into the media as cooling – inadequate? I am desperately trying to find out why a reputable preliminary scientific analysis went to the media spouting this, via a peer-reviewed journal, when in reality the analysis seems flawed.
Is it the statistics or the methods used, I wonder. I just feel that the public are left frustrated and confused as to the reality of AGW. No wonder the deniers are still in the game when this sort of science is splattered all over the media in large bold fonts.
Klaus Flemløse says
I will be pleased if you can answer the following question:
Is the variation in the number of sunspots, ENSO, changes in the thermohaline circulation and other periodic phenomena included in the IPCC simulations? How good are the simulations, then, at replicating the variations in global temperature?
To me, it seems unlikely that we will see a monotonically increasing global temperature.
[Response: Some of the models include solar cycle effects, all have their own ENSO-like behaviour (of varying quality) and THC variability. – gavin]
Nylo says
Certainly, weather influences climate trends.
Is there ANY chance that the observed temperature increase since the 70s (and till 1998) is due mainly to weather (PDO, ENSO, cosmic rays, solar irradiance, solar cycles, cloud cover), or is weather only going to be responsible for cooling or a lack of warming?
Is the current La Niña “weather”? If so, were the El Niños of 2002 and 2005 weather as well? Should we then say that the high temperatures we saw in those years were because of weather, and not climate? Are those years’ temperature records dismissible then? If not, will 2008’s decadal low temperature record be dismissible when it happens?
I saw nobody claim anything about how weather influences the apparent climate trend when it was all rising in the nineties. But now that we are not warming, weather comes to the rescue of the AGW theory.
You have confidence in the models because the average of the ensemble seems to explain the somewhat recent warming well. But what if the warming was caused by weather? It is possible, because reality is just one realisation of a complex system. So all of your models could be completely wrong and still their average could coincide with the observations.
In the GH theory, the surface temperatures increase because there is a previous increase in the temperature of the atmosphere, which then emits some extra infrared energy to the surface. In that scenario, the troposphere warms faster than the surface; otherwise its emissions would not be big enough and we would not have so much surface warming. This happens in almost every model run. There are only a handful of model runs that correctly reproduce the current mild tropospheric temperature increase in the tropics. I would like to know the surface temperature trend predicted by exactly those model runs which got the current tropical tropospheric temperatures right. It seems like they got the “weather” right, and they seem more trustworthy to me.
bi -- Intl. J. Inact. says
Nylo: cloud cover, solar activity, etc. have always been factored into climate models, from what I understand. And no climate model has been able to model the recent warming without taking CO2 into account.
Gavin: Will you be discussing Monaghan et al.’s recent paper “Twentieth century Antarctic air temperature and snowfall simulations by IPCC climate models” (Geophys. Res. Lett.) some time? The handling of model uncertainties in the paper seems a bit weird to me…
— bi, Intl. J. Inact.
Olee says
I would like to paraphrase the late Douglas Adams here – to remind us all of the “Whole Sort Of General Mish Mash” (WSOGMM) that one must consider in complex systems.
Two model runs over a century starting from almost the same initial conditions and with the same forcing may well end up in different states (yielding different trends) at some point in the run. Different models with the same or different initial conditions but the same forcing also spread in their states throughout the runs. Hence there is a lot of WSOGMM going on, as seen in Figure 1.
What is rarely discussed is that WSOGMM is not something exclusively associated with climate models. WSOGMM is an inherent property of the “real” climate system as well. It is most likely that if we had measurements of our instrumental period on one or several parallel-universe ‘Earths’, the inter-annual to decadal temperature evolution of these parallel worlds would deviate from each other to some extent. The current near-decadal relaxation of the global temperature trend may, for example, have started in 1994 or 2003 rather than 1998 on one of our ‘parallel planets’, since it is largely defined by the 1998 El Niño event – which might have occurred in any year when “conditions were favourable” on some particular parallel-universe Earth. Thus, to treat our instrumental record as the “perfect answer” is probably faulty below some decadal time-scale, because that notion assumes the climate system is 100% deterministic on this time-scale. This is unlikely, since many of the sub-decadal patterns (NAO, PDO, ENSO for example) seem to resonate more or less stochastically.
All this reminds us that a relaxation of the temperature trend for a decade or so is not a falsification of the multi-ensemble IPCC runs – also because the real-world data represent only one realisation of the WSOGMM on these short time-scales.
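To make the ‘parallel Earths’ point concrete, here is a minimal sketch – a toy chaotic system standing in for a climate model, with all parameters my own illustrative choices – showing how runs launched from near-identical initial conditions develop different trends over the same window, much like the spread in Figure 1.

```python
# Toy illustration (not a climate model): the Lorenz-63 system is chaotic,
# so "ensemble members" started from near-identical states develop
# different trends over a fixed window.
import numpy as np

def lorenz_x(x0, n=8000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate Lorenz-63 with a crude Euler step and return the x series."""
    x, y, z = x0, 1.0, 1.05
    out = np.empty(n)
    for i in range(n):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        out[i] = x
    return out

rng = np.random.default_rng(0)
window = np.arange(2000) * 0.005
# Ten members, each perturbed by one part in a million in the initial state.
slopes = [np.polyfit(window, lorenz_x(1.0 + 1e-6 * rng.standard_normal())[-2000:], 1)[0]
          for _ in range(10)]
print("per-member trends over the same window:", np.round(slopes, 2))
```

Identical physics, indistinguishable starting points, and yet a spread of window trends: that is the WSOGMM in miniature.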
Barton Paul Levenson says
Nylo posts:
Were those, in fact, El Niño years? I knew 1998 was, but I hadn’t heard about the other two. Does anybody know?
Ray Ladbury says
Nylo: Fascinating theory. Explain to me exactly how weather will cause warming over, say, 20 years. I will leave as an exercise to the reader a comparison of the amount of energy needed to warm Earth’s climate by 0.2 degrees and that of a hurricane. Here’s a hint. One’s gonna be a whole helluva lot bigger than the other.
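For anyone who wants to attempt Ray’s exercise, a back-of-envelope sketch follows. The heat capacities and the per-day hurricane energy are rough textbook/NOAA-style figures supplied here as assumptions, not numbers from this thread.

```python
# Rough, order-of-magnitude values (assumptions, not numbers from the thread):
ATM_MASS = 5.1e18         # kg, total mass of the atmosphere
CP_AIR = 1004.0           # J/(kg K), specific heat of dry air
OCEAN_AREA = 3.6e14       # m^2, ocean surface area
MIXED_LAYER = 100.0       # m, nominal ocean mixed-layer depth
CP_WATER = 4.2e3          # J/(kg K), specific heat of seawater (approx.)
E_HURRICANE_DAY = 5.2e19  # J, commonly cited latent-heat release of a hurricane per day

e_atm = ATM_MASS * CP_AIR * 0.2                              # warm the air by 0.2 K
e_mixed = OCEAN_AREA * MIXED_LAYER * 1.0e3 * CP_WATER * 0.2  # warm the mixed layer by 0.2 K

print(f"atmosphere alone: {e_atm:.1e} J  (~{e_atm / E_HURRICANE_DAY:.0f} hurricane-days)")
print(f"upper ocean too:  {e_mixed:.1e} J (~{e_mixed / E_HURRICANE_DAY:.0f} hurricane-days)")
```

Either way, the energy needed to warm the climate system dwarfs a single storm, which is Ray’s point.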
Nylo says
I agree with #13, a relaxation of the temperature trend for a decade or so is not a falsification of the multi-ensemble IPCC runs. In fact, a relaxation for 20 years would not be either. The problem with the models is that their error bars are so huge, compared to the trend they are intended to predict, that they basically cannot be falsified during the academic lifetime of their creators, no matter what happens. However, science MUST be falsifiable – and at the same time not falsified by events – in order to be science. As long as anyone claims that another 10 years of no warming or even cooling would not falsify the models, I cannot grant the models any real value or contribution to science. A nice hobby, at most.
@12: That no climate model has been able to predict the recent warming without an increasing CO2 doesn’t mean that it is not possible; it only means that they all share common beliefs that could be right or not. For example, no climate model has been able to get right, at the same time, the current surface temperature trend and the current tropical tropospheric temperature trend, but still that is what is happening: they are roughly the same. Only non-GH-influenced warming has such a fingerprint. How can they all be wrong?
And then there is the fact that the models include things such as cloud cover. Given how poorly the process of cloud formation is understood, and given that their average results fail to reproduce the real annual variation of cloud cover – they all give too much cloud cover in winter and too little in summer compared to reality, which means that the models’ clouds fail to cool as much as real clouds do – well, it doesn’t speak wonders of the models.
Anyway, it looks interesting to me that the models cannot predict the warming without CO2, but on the other hand they can predict cooling in spite of CO2 (so that falsification is impossible). How can that be? The models in Gavin’s article show a variability of up to 0.2ºC over a period of 20 years, but they cannot explain a 0.3ºC rise in global temperatures between 1980 and 2000 without CO2? Using a similar reasoning, I would admit as good a model which showed that only 0.1ºC of the increase between 1980 and 2000 was because of CO2, with the remaining 0.2ºC being weather. Such a model would predict immediate cooling now, and only a total of +0.4ºC between now and 2100. And you could not say that such a model was falsified by the data either.
[Response: You are too focussed on the global mean temperature. There are plenty of other trends and correlations that can be deduced from the models which can be independently validated (water vapour, sea ice, response to volcanoes/ENSO, ocean heat content, hindcasts etc.). Or you can go back twenty years and see what was said then. Either way, it is a balance of evidence argument. On one hand you have physically consistent models that match multiple lines of evidence, or … nothing. Given that the first indicates serious consequences for the coming decades, and the latter implies you have no clue, there is a big procrastination penalty for sticking your head in the sand. None of the issues you raise are ignored in the models and yet no model agrees with your conclusion. If there was, don’t you think we’d have heard about it? PS. you don’t need climate models to know we have a problem. – gavin]
Nylo says
@Ray Ladbury: it can. Gavin just showed it to you. The same model with just 5 runs can give differences of 0.2ºC in its trend for a period of 20 years. What is it, if not weather?
Nylo says
@ Barton Paul Levenson:
ttp://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml
steven mosher says
great post Gavin, that cleared up a lot of questions. thx.
Ray Ladbury says
Nylo, variability is not just weather – it includes initial conditions, and depending on the model may include variations in a variety of factors (many of which we could measure if they were occurring). Ultimately, what matters are long-term trends. Organisms are adapted to survive weather. Human civilization has done well to adapt to weather. However, sustained changes in climate are something that we haven’t had to deal with in about 10,000 years, and certainly not on this order.
To explain the trends of the past 20 years would take a veritable conspiracy of natural variations–or you could assume that a process that is known to operate is still operating. Me, I’ll stick with physics over conspiracy.
steven mosher says
re 17. well put Ray. It could be numeric drift, but I’m sure that is well accounted for. Gavin?
Nylo says
Gavin,
One of the dangers of using an ensemble of models is that it can give you the false feeling that you cover every possibility. I will explain. The PDO is included in the models, as well as solar forcing, ENSO, etc. But because they are considered unpredictable, they are set randomly and averaged out by the ensemble through pure statistics. They are effectively ignored. And that is OK if you want to predict climate without weather, but then you cannot look at the real data and validate your climate-only models with it, because real data is climate PLUS weather. So the two temperature trends being similar says little until you use a period of time long enough to claim that the weather component is irrelevant.
In normal conditions, one century could be enough. But we are not in normal conditions. Why? Because of all the warming during the century, roughly 50% of it happened in only 20 years and is therefore possibly weather-influenced. If it is, we should start to see cooling any time now, as I think we will. So only some of the remaining warming of the century can be trusted as climate change, and therefore it is not clear what you can compare your models to in order to verify whether their predictions can be trusted or not.
What you cannot do is say that the stable temperatures we have now are because of weather, and some hypothetical future cooling would be weather too, but the warming of the last decades of the 20th century was, on the other hand, “weather-clean”.
Richard Treadgold says
I’m surprised how many people take computer climate model forecasts over 50 years seriously when we still don’t get accurate predictions of the weather two weeks in advance. Uncertainty in projected temperature from models can approach ±55° after 50 years, which makes a forecast worth nothing.
This is according to Patrick Frank in Skeptic magazine, http://tinyurl.com/635bf8. I’m not without skill, but I’m no scientist, so I judge who sounds honest.
[Response: Try judging who sounds credible. Frank’s estimate is naive beyond belief – how can it possibly be that the uncertainty is that large when you look at the stability of the control runs or the spread in the different models as shown above? – gavin]
I’m more interested in the real world and the real climate. So, rather than asking what models tell us about variability, I’d like to ask about the science. What are the ramifications for the AGW hypothesis of the lack of atmospheric warming over the ten years since 1998? Arguably, since 1998 was driven by an exceptional El Nino, there’s been no real warming since about 1979, just going by eyeball. It’s up and down, but no trend you could hang your hat on. Temperature today is the same as 1979. See Junk Science.
[Response: You are joking right? Junk Science indeed. – gavin]
I can understand people shouting warnings about future warming, since models fire them up (hmmm, sorry about the pun, it was actually unintentional), but some people have been screaming about the world growing hotter right now. I honestly can’t see high temperatures anywhere.
Last point: If CO2 is to warm the atmosphere, and warmer still with more CO2, then if CO2 rises but temperature is constant or falls, the theory is disproved. Done. Where is the faulty reasoning? Or what is the change to the theory?
[Response: The ‘theory’ that there is no weather and no other forcings and no interannual and no interdecadal variability would indeed have been falsified. Congratulations. Maybe you’d care to point me to any publications that promote that theory though because I certainly don’t recognise that as a credible position. Perhaps you’d like to read the IPCC report to see what theories are in fact being proposed so that you can work on understanding them. – gavin]
Richard Treadgold
Roger A. Pielke Sr. says
Gavin – Your plot of the individual realizations is quite useful. To add to its value, I recommend that you also plot the global averaged upper ocean heat storage changes for each year in Joules that each model produces, along with the resultant diagnosed global average radiative forcing in Watts per meter squared, as performed by Jim Hansen [see http://climatesci.colorado.edu/publications/pdf/1116592Hansen.pdf].
[Response: Great idea – why don’t you do it? – gavin]
Ray Ladbury says
Has anybody else noticed how fixated the denialosphere is on Karl Popper? Everything is about “falsifiability”. It is as if the past 70 years of philosophy of science did not happen for them. Popper’s concept of falsifiability is important, but it isn’t particularly helpful when considering a complicated model with many interacting factors. The reason is that most of the factors included in the model probably contribute to some extent and especially for dynamical models, the selection of various ranges of parameters may be dictated (supported) by independent data. To “falsify” the model would mean giving up the explanatory and predictive power of a model where many aspects are right. Rather, it makes a lot more sense to keep the basic structure of a model with a proven track record and add additional factors as needed and supported by evidence. Alternatively, you could modify the strengths of various contributors–again as supported by evidence.
It makes a lot more sense to look at this in terms of model selection (or even model averaging) than it does “falsification”. So all you denialists have to do is come up with a model that does a better job explaining the preponderance of information (and more) explained by the current crop of GCMs. Go ahead. We’ll wait.
#21, Yeah Roger, why don’t you do it? I mean it’s not like Gavin won’t share his code with you, is it? Surely Gavin will give to you his complete model runs and the exact parameters that were included in all of them so that you can expand on his Science.
He says it is a “Great idea”, so I expect you to have this information before you ask for it.
[Response: Don’t be an ass. The data is all available at PCMDI and Roger has frequently expressed interest in it. I do think it is a good idea, but I have other things I am working on. The whole point of this post is to point people to the fact that the data is available and people should look at it for anything they particularly care about. If Roger cares about that metric, he should download it and look. I have not done so, and do not have the time to service every request that comes my way. FYI our complete model source code and runs are all available on the GISS web site. -gavin]
dhogaza says
I saw nobody claim anything about how weather influences the apparent climate trend when it was all rising in the nineties.
Climate science predicts nothing about your willingness to pay attention, and the fact that you didn’t notice all the hoo-raw about the exceptionally strong El Niño in 1998 doesn’t mean that millions of other people didn’t.
Alexander Harvey says
Gavin,
It is my understanding that once all the known forcings are taken into account using their measured values, the models reproduce the temperature history from 1950 to the present with a high degree of accuracy, both in trend and in accounting for variation due to volcanoes. What they cannot easily account for is the precise timing of effects like ENSO.
Viewing the difference between the mean of multiple runs (or a similar process) and the real temperature record as the “weather”, or erratic, component, I believe that its amplitude is less than +/- 0.15C around 90% of the time and peaks at around +/- 0.25C.
Now, visually, your first figure tells the same story, which is heartening. If it were predicting a tighter band it would be contrary to reality.
Is there a recognised “profile” of the erratic part of the real temperature record, i.e. how much of the time the record should be 0.1C, 0.2C, 0.3C etc. above and below trend? I mean after all known forcings, including volcanoes etc., are taken into account.
Your second figure seems to tell the same story. All the regression lines are confined inside a “pencil” of uncertainty with a width of about +/- 0.25C. The longer the pencil you choose, the tighter the degree C/decade band.
It is possible that this is the fundamental limit to the accuracy of prediction, but in the long run, 50 years plus (a very long pencil), it gives a very narrow band for the degree C/decade figure.
Now what interests me is why the uncertainties for a doubling of CO2 (or equivalent) are still so poorly constrained in comparison. (I think 3C +/- 1.5C is still the quoted band.)
We now have some reasonably good figures for what the oceans have done over the last 50 years, and the amount of heat taken up by the oceans does constrain the value of the climate sensitivity for the past 50 years. Is it the case that the models make different assumptions about how the sensitivity will evolve in the coming decades, or is it simply that the models are improved by constraining their runs during the known historic period and then diverge in the future due to the lack of constraint? That is, does the “pencil” turn into a “cone”? I can see no convincing tendency towards divergence in your first figure. Perhaps a figure extending a few more decades would help.
Finally, do the individual runs that make up your first figure simply reflect different initial conditions, or are certain parameters varied between runs?
Best Wishes
Alexander Harvey
[Response: The variations for single models are related to the initial conditions. The variations across different models are related to both initial conditions and structural uncertainties (different parameterisations, solvers, resolution etc.). The two sorts of variation overlap. – gavin]
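Alexander’s “profile” question can be explored numerically. Below is a minimal sketch with a synthetic series standing in for the data; a real analysis would load an observed record (e.g. the GISTEMP annual means) or the forced-run residuals instead.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2008)
# Synthetic stand-in: a 0.012 degC/yr trend plus 0.12 degC interannual noise.
anomalies = 0.012 * (years - 1950) + 0.12 * rng.standard_normal(years.size)

# Detrend and tabulate how often the residual exceeds each threshold.
resid = anomalies - np.polyval(np.polyfit(years, anomalies, 1), years)
for thresh in (0.1, 0.2, 0.3):
    frac = np.mean(np.abs(resid) > thresh)
    print(f"|residual| > {thresh:.1f} degC in {100 * frac:.0f}% of years")
```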
In the graph at the link above you can see GISS temperature data for the last 125 years. On top of it, I have drawn a light blue line which approximates the catastrophic predictions of the models (the trend being 1.5ºC/century in the end part, so not even as catastrophic as some of the models predict). Also on top of it, I have drawn a green line with an alternative forecast, which trusts that the warming between 1980 and 2000 was mostly, but not all, due to weather. This line shows only a 0.6ºC/century warming, and would also NOT be falsified by real temperature data.
As you can easily see, because of the last decade of stabilised temperatures, we are now at a crucial point. In dotted lines, again in blue and green, I have tried to represent what the logical evolution of temperature would be in order to more or less match each of the predictions. The 0.6ºC/century prediction desperately needs cooling ASAP, and I would call it falsified if it doesn’t cool within 2 years. But the 1.5ºC/century AGW prediction also needs some warming pretty quickly, or it would be about impossible to catch up with the prediction. I wouldn’t wait more than 5 years before deciding which of the two, if any, is more accurate. I don’t think that stable temperatures, with no warming or cooling, would support either of the two predictions. It would rather prove both of them wrong.
By the way, I chose a straight green line on purpose. The CO2 we emit is increasing, but on the other hand, the GH effect of any extra CO2 we emit is decreasing exponentially.
[Response: No it’s not. The forcing is increasing slightly faster than linearly. – gavin]
Patrick M. says
re 26 (gavin):
Could you post a link to where the source code can be downloaded?
I was just wondering how independent, and hence not redundant, all these different climate models really are; that is, whether one should not somehow account, when “averaging” them, for some particular – maybe historical – “closeness” between some of them (for example, I would think there aren’t 20-something different and independent schemes of sub-grid parametrization for convection, or cloudiness – are there? or maybe I’m raising a false problem here…)
[Response: No, it’s a real issue. IPCC does exactly that. I didn’t bother. – gavin]
Nylo says
Gavin, the forcing by CO2 is measured in ºC for a DOUBLING, which means that it follows an exponentially decreasing trend: when we add 280 ppm we will have doubled, but in order to experience the same warming again we would have to add a further 560 ppm, not a further 280. The more CO2 we already have, the more quickly we need to continue adding CO2 to maintain the same warming. It’s how the physics works. What really counts is how we change the existing concentration of CO2 – the percentage of the change – not how much “raw” CO2 we add. Adding 5 ppm was quite a lot more important when the concentration was 180 ppm than it is now.
[Response: We all know that the forcing is not linear in concentration. But it isn’t decreasing, it is increasing logarithmically. And it is certainly not decreasing exponentially. – gavin]
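The shape of the forcing is easy to check with the standard simplified expression of Myhre et al. (1998), F = 5.35 ln(C/C0) W/m2 – a worked sketch, not a calculation from the post:

```python
import math

def forcing(c_ppm, c0_ppm=280.0):
    """CO2 radiative forcing (W/m2) relative to a 280 ppm baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(forcing(560) - forcing(280), 2))   # first +280 ppm:  ~3.71 W/m2
print(round(forcing(840) - forcing(560), 2))   # next  +280 ppm:  ~2.17 W/m2
print(round(forcing(1120) - forcing(560), 2))  # next doubling:   ~3.71 W/m2
```

Each successive +280 ppm indeed adds less forcing, but each doubling adds the same ~3.7 W/m2; and with concentrations themselves rising faster than linearly, the realised forcing grows slightly faster than linearly, as the response says.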
jae says
Just squinting at those individual realizations, I sure don’t see any that show a ten-year-long level or decreasing temperature.
[Response: The histogram shows at least one that has a negative trend from 1995 to 2014, and there are nine that have negative trends from 2000 to 2007. For 1998 to 2007 there are 7 downward trending realisations (down to -0.15 degC/dec). Actual calculation trumps eyeballing almost every single time. – gavin]
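As an illustration of “actual calculation trumps eyeballing”, this is the kind of count behind the response, sketched with a synthetic stand-in for the ensemble (the real annual means would come from the PCMDI archive):

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1995, 2015)
# 55 synthetic "realisations": a 0.02 degC/yr trend plus interannual noise.
runs = 0.02 * (years - years[0]) + 0.15 * rng.standard_normal((55, years.size))

sel = (years >= 2000) & (years <= 2007)
slopes = np.array([np.polyfit(years[sel], run[sel], 1)[0] for run in runs])
print(f"{(slopes < 0).sum()} of {len(runs)} realisations trend downward over 2000-2007")
```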
Bryan S says
Re #20: Ray, it requires an anomalous accumulation of heat of about 0.2 W/m2 over a single annual period (maybe 18 months) to heat the atmosphere 0.3-0.4 degrees C. This compares to a modeled net upward radiative flux from the ocean surface of around 0.7 W/m2 during the 1998 El Niño alone. Now consider that the observed change in upper ocean heat storage (net TOA radiative imbalance) over this same interval, as reported in Willis 2004, is around +1.2 W/m2. This means that even though there was a theoretical loss of 0.5 W/m2 from the atmosphere to space as a result of the El Niño, the ocean still accumulated significant heat during the event. So clearly weather processes exchange plenty of heat back and forth between the ocean, atmosphere, and space – enough to accomplish considerable warming or cooling of the atmosphere over annual to multi-decadal periods. The real science question is whether this annual to multi-decadal intrinsic variability averages to a zero trend over the period in question. My point is that there is no physical law that says the inherent trend must in fact be zero. The notion is based on the ensemble mean of different GCMs run with stable CO2, all having similar core physics and slightly different parameterizations of weather processes. There is only one realization of the actual climate system, however, and clearly unforced variability can have a trend across many different scales. Roger Pielke Sr. has made an excellent point in stating that there is really no such thing as “natural variability”. It is like making a white cake batter, stirring in a little chocolate, and then trying to make both a white cake and a chocolate cake from the same batter: once the chocolate has been stirred in, you have a chocolate cake. The human influences, including aerosols, land use, and GHGs, have already been stirred together with natural variability.
Gavin makes an important statement when he points out that many people have mistaken the range of model trajectories for uncertainty in the ensemble mean from multiple models. An important question to ask is why there is uncertainty in the ensemble mean, and whether this uncertainty is bracketed properly (high side or low side). Another way to ask this is: what are the variables controlling the uncertainty in the magnitude of the forced component of climate change? I suggest that as more physical processes are added to the models, the range of uncertainty will grow. Ice sheet dynamics included in the models might extend the range on the high side, and more realistic representation of cloud feedback might extend it on the low side. Better land-use representation might go either way.
Ray Ladbury says
Nylo says: “The CO2 we emit is increasing, but on the other hand, the GH effect of any extra CO2 we emit is decreasing exponentially.”
OK, I don’t have to go any further than this. How can you expect to be taken seriously when you haven’t even bothered to acquaint yourself with the physics of the model you are arguing against?
Gary Plyler says
Can you please explain why you have decided to base your bet on surface temperatures (HadCRUT) instead of satellite measurements of global atmospheric temperature? After all, no matter what corrections are made to account for the urban heat island effect or for sensor relocation, they are corrections that cannot be independently verified. I really think that your bet should be based on satellites, which look at the whole atmosphere with no local bias possible.
[Response: The data and time periods for this wager are based purely on the targets suggested by Keenlyside et al. You will however find that no source of data is unaffected by structural uncertainty. – gavin]
Axel says
Gavin, great post. The discussion regarding the Keenlyside et al paper has naturally been focusing on what the paper is predicting. I’d be interested to hear comments about the fact that the paper claims to have made advances in multi-decadal climate prediction (title). Their figure 1d shows considerable improvement in skill (correlations) over the uninitialized simulations over ocean areas. I understand that there is a similar paper (Smith et al.) which also shows that it is apparently possible to nudge models into reproducing decadal variability of the “real world” realization and use this for decadal climate prediction? Is this an appropriate reading? But maybe this is a discussion for a separate thread.
[Response: We’ll discuss the K et al studies in greater depth at some point soon. – gavin]
Gavin, the forcing by CO2 is measured in ºC for a DOUBLING, which means that it follows an exponentially decreasing trend: when we add 280 ppm we will have doubled, but in order to experience the same warming again we would have to add a further 560 ppm, not a further 280. The more CO2 we already have, the more quickly we need to continue adding CO2 to maintain the same warming. It’s how the physics works.
[sarcasm]Gavin is only a professional climate scientist — so he must not have known this.[/sarcasm]
And by the way, CO2 isn’t increasing linearly, so it turns out that in the real world CO2 forcing is increasing faster than logarithmic, in fact over the time span of the Mauna Loa record it’s faster than linear.
Ray Ladbury says
Bryan S., what is typically going on is a change in the amount of cold water that comes to the surface. However, what you are failing to consider is that such changes do not persist for long. And if you have warming due to such a fluctuation in the absence of increased GHGs, you get more radiation escaping to space (and vice versa). It is only with a GHG mechanism, or a sustained trend in some other forcer, that you get sustained warming. What is your candidate for a mystery sustained forcer?
Gary Fletcher says
We could have 10 consecutive years of .04 C global annual mean temp rise, each year being an “ambiguous” record, resulting in a cumulative .4 C mean temp rise over the period, yet never have an “unambiguous” .1 C record year. Unless, of course, we keep a separate record of “unambiguous” years, so that unambiguous record years are considered separately from ambiguous ones. Which did you mean?
[Response: After 3 years the previous record would have been unambiguously broken. – gavin]
The GISS, in their 2007 summary, indicates that “projection of near-term global temperature trends with reasonably high confidence” can be made. They predict a record global temperature year “clearly exceeding that of 2005 can be expected within the next 2-3 years.” They base that prediction largely on the solar cycle. Your considerations are more general; even so, your graph indicates about a 2/3 chance of a record year in any three-year period. Do you agree with the more confident and specific GISS prediction?
[Response: The 50% level (ie. when you would expect to see a new record half the time) is between 1 and 6 years depending on how ambiguous you want to be. So it isn’t contradictory, but they are probably a tad more confident than these statistics would imply. – gavin]
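For readers who want to reproduce this kind of statistic, here is a crude Monte Carlo sketch; the trend, noise and margin values are illustrative assumptions, not the numbers behind the figures in the post.

```python
import numpy as np

rng = np.random.default_rng(7)
TREND, SIGMA = 0.02, 0.1      # degC/yr underlying trend; interannual noise sd
RECORD, MARGIN = 0.10, 0.05   # a warm outlier year, and the "unambiguous" margin

waits = []
for _ in range(10_000):
    # One synthetic 30-year future; find the first year beating the record by MARGIN.
    temps = TREND * np.arange(1, 31) + SIGMA * rng.standard_normal(30)
    hits = np.nonzero(temps > RECORD + MARGIN)[0]
    waits.append(hits[0] + 1 if hits.size else 31)   # 31 = not within 30 years

print("median wait for an unambiguous new record:", int(np.median(waits)), "years")
```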
Mick says
So what results would falsify this chart?
Is it possible to observe something that contradicts the IPCC?
[edit]
[Response: Sure. Data that falls unambiguously outside it. – gavin]
Lamont says
I was playing around over the weekend with ENSO data and NASA global temperature data. I get a fairly good fit if I smooth the global temperature data to a 6-month average, advance the ENSO data by 6 months and divide by 10 (so a +1.0 El Niño results in +0.1 global climate forcing), and then detrend a +0.5C rise in temperature since 1978.
It’s not entirely scientific, since I’ve just eyeballed the smoothing and fit parameters, but Pinatubo is clearly identified and I “discovered” the eruption of Mount Agung in 1963.
I find it highly implausible that the global warming since 1978 has anything to do with ENSO, based on the lack of correlation between the warming trend since 1978 and any warming trend in the ENSO pattern over the same period.
One thing I don’t quite understand about my fit is that I can identify two periods of cooling which are not correlated with ENSO or AGW: Pinatubo and Agung. However, there are a few anomalous transient warm spots, like around 1980-1982, which are not explained by AGW or ENSO. What other factors could cause the globe to warm by a few 0.1C for a year or two, similarly to how the globe cools in response to a large volcano?
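Lamont’s recipe can be written down explicitly. A minimal sketch under the same assumptions (6-month smoothing, 6-month lag, 0.1 scaling) follows; the demo inputs are synthetic stand-ins for, say, the GISS monthly anomalies and a NINO3.4 index.

```python
import numpy as np

def enso_residual(temp, enso, lag=6, scale=0.1, smooth=6):
    """Temperature residual after removing a linear trend and a lagged,
    scaled ENSO contribution (both series monthly)."""
    k = np.ones(smooth) / smooth
    t = np.convolve(temp, k, mode="valid")      # 6-month running means
    e = np.convolve(enso, k, mode="valid")
    t, e = t[lag:], e[:-lag]                    # pair temp with ENSO 6 months earlier
    m = np.arange(t.size)
    t = t - np.polyval(np.polyfit(m, t, 1), m)  # detrend (the ~+0.5C rise)
    return t - scale * e                        # what remains: volcanoes, noise...

# Synthetic demo inputs (stand-ins only, not real indices):
rng = np.random.default_rng(3)
months = np.arange(360)
enso = np.sin(2 * np.pi * months / 44)          # fake ~3.7-year ENSO cycle
temp = 0.0015 * months + 0.1 * np.roll(enso, 6) + 0.05 * rng.standard_normal(360)
print("residual sd:", enso_residual(temp, enso).std().round(3))
```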
David Abrams says
As I read your graph, you are predicting better than 50/50 odds that there will be a new record temp set in the next 2 years. Would you be interested in a wager on this?
Or am I misreading the graph somehow?
[Response: For a record that would be unambiguous (and therefore clear in all estimates of the trend), the 50% waiting period is somewhere around 6 years according to this rough study. Therefore we are slightly overdue for such a record (but not so much that you’d be worried it wasn’t coming). Let me think about the bet. – gavin]
richard says
I had been vaguely working on a manuscript about waiting times for new records in the AR4 models. I like the approach you have used here, but by treating all the models together you obscure the fact that some of the models have much more decadal-scale variability than others. Analysing just this sub-set of models, which probably (but I need to test) have a better representation of 20th-century variability, gives a larger tail to the waiting-time distribution, and suggests that the current waiting time is far from exceptional.
[Response: I would definitely recommend doing a better job for a publication. You would want to do the calculation as a function of magnitude/structure of the residuals from the expected pattern and then see where the real world would fall. – gavin]
Jared says
#11
Excellent points, and ones that are largely (and conveniently) ignored by the AGW community.
Why is it that natural variability can be given credit for short term trends (10, 20 year) when they might result in “relaxing” of global warming, but GHG-induced warming is hailed as the MAIN factor that led to the 30 year warming up to 1998? Never mind that this same period also happened to coincide with the warm +PDO phase, or that El Ninos outnumbered La Ninas 6 to 3 during this period, or that the warmest year on record also happened to feature the strongest El Nino on record.
In other words, basically ALL of the natural factors favored warming from 1977-1998, yet AGW is given nearly all the credit. Yet now that warming has obviously slowed the past 10 years, natural variability is to blame. Sorry, but it’s a two-way street, and this really needs to be acknowledged for a more balanced look at climate change.
Larry says
My struggle for understanding continues. Can you please run the drill on this article from the Skeptic?
A key point appears to be “It turns out that uncertainties in the energetic responses of Earth climate systems are more than 10 times larger than the entire energetic effect of increased CO2”
Is this right? Does it have the implications that Frank claims?
[Response: No and no. Frank confuses the error in an absolute value with the error in a trend. It is equivalent to assuming that if a clock is off by about a minute today, that tomorrow it will be off by two minutes, and in a year off by 365 minutes. In reality, the errors over a long time are completely unconnected with the offset today. – gavin]
It’s also interesting that a simple linear model replicates the GCM results with none of the complexity.
Finally the notion that the uncertainties introduced by weakness in cloud modeling are easily large enough to overwhelm GHG-related impacts really makes me want to throw up my hands.
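The clock analogy in the response above can be rendered numerically in a few lines – a constant offset in a series does not leak into its trend, which is the step Frank’s error propagation gets wrong:

```python
import numpy as np

years = np.arange(50)
truth = 0.02 * years          # a 0.02 degC/yr trend
biased = truth + 1.0          # same series, off by a full degree throughout

print(np.polyfit(years, truth, 1)[0])   # 0.02
print(np.polyfit(years, biased, 1)[0])  # still 0.02: the offset never compounds
```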
Gavin – I do not have funding to analyze the trends in the upper ocean heat content. However, if you direct me to where the specific files are, I will see if I can interest a student in completing this analysis.
Since the use of ocean heat content changes is such an effective way to diagnose the radiative imbalance of the climate system (and avoids the multitude of problems with the use of surface temperatures), it is a disappointment that GISS does not make this a higher priority. Jim Hansen has also emphasized the value of using ocean heat content changes, so I would expect he would support this analysis.
[Response: Roger, everyone’s time is limited. This is why public archives exist (here is the link again). As I have said many times, if you want an analysis done, you are best off doing it yourself. – gavin]
Alf Jones says
re#38 “I really think that your bet should be based on satellites, which look at the whole atmosphere with no local bias possible.”
See the US CCSP report on satellite temperature reconstructions to find out about the uncertainties that lie in those data sets.
Donald E. Flood says
So, to give one possibility, if the global mean temperature from 2050 to 2070 would end up being lower than the 1950 to 1970 global mean temperature, would that be enough to falsify the IPCC projections, assuming no volcanic eruptions, cometary impacts, etc.?
[Response: …and that the trajectories of the GHGs and aerosols looked something like this scenario. Yes. – gavin]
Richard Pauli says
A caveat clearly seen on some IPCC charts:
“Model-based range excluding future rapid dynamical changes in ice flow” Was this authored in 2005 or 2006?
We cultivate confusion by failing to have constant IPCC studies and updated reports.
A. Fucaloro says
It would be impossible to deconvolve a trend signal caused by CO2 increase if the climate were mediated by a cycle that is long enough and strong enough. Do we know for sure that the medieval warming and subsequent Little Ice Age are not manifestations of such a cycle?
tharanga says
A relevant post; some sceptics love to say that the projections have been wrong, without actually knowing what the projections were.
Some basic questions:
If I understand it correctly, Keenlyside et al attempted to achieve more realistic realizations by using realistic initial values. Can you explain the standard ‘un-initiallised’ ensemble approach? Surely, a model that runs over time requires some sort of initial conditions; are these randomly chosen for each realization within the ensemble? It couldn’t be that random, though – to some extent, they must be constrained by observational data, no?
Also, I’ve noticed that the various models tend to agree with each other within hindcasts, but there is rather more of a spread in the future projections. I’m told that the hindcasts are honest exercises, and not curve-fits, but in that case, shouldn’t there be more of a spread amongst the models in the hindcasts, as well?
Finally – any attempts I’ve seen to judge prior model projections involve picking the results for the scenario (A1B, or what have you) which came closest to the actual forcings over the period in question. Instead of that, why not dig up those prior versions of the models and re-run them with the actual forcings: CO2, sulphates, volcanos, etc? It’s the range of unforced natural variability we are interested in here, not the ability of modelers to predict external forcings.
One Salient Oversight says
I love the “Pinatubo dip” in the first graph (1991).
Maybe there is some legitimacy in the idea of “Dr Evil” to seed the upper atmosphere with particulates via 747s… but it only works in the short term. Once the aerosols dissipate, the curve keeps going up.
David says
When the models show cooling for a few years, is this due to heat actually leaving the (simulated) planet, or due to heat being stored in the ocean ?
[Response: You’d need to look directly at the TOA net radiation. I would imagine it’s a bit of both. – gavin]
Sascha Samadi says
Thanks for the interesting and easy to understand read, Gavin. It’s hard for me to understand why some people apparently have a hard time distinguishing between individual model runs and ensemble means. It doesn’t seem to be too complicated…
Gareth Evans says
Back to scientific business and a welcome post by Real Climate. The important message in layman terms is that we must not confuse “weather” with climate. The greenhouse gases we emit warm the earth – this has been known for a long time (back to Arrhenius). The temperature of the earth would be much colder, roughly that of the moon, were it not for green house gas warming. Extra global energy, from increased greenhouse gas concentrations in the atmosphere, is redistributed around the earth by natural circluation processses. These are complex processses that may be interelated. In adition, there are natural cyclic events that may affect weather (and climate) and the unexpected (e.g. a significant volcanic erruption) is always a possibility. There will always be “weather” fluctuations and the various climate models produce a range of possible future outcomes. So, what we must focus on in this debate are the mean trends (and climate). This is exactly what IPCC and groups, like Real Climate, have been telling us. We need to develop ways, however, of introduing a regional focus into this debate and the important role of other warming influences such as land use, urbanisation etc. This would help to improve the general understanding and wider acceptance of the issues involved. The focus on global, annual means does not always make the necessary local impact (and may be concealing important subtleties – such as any seasonal impact variations of an increasing global temperature).
pete best says
So in the grand scheme of GCM analysis these recent model runs that made it into the media as cooling are what exactly, inadequate? I am desperately attempting to find out the reason why a reputable preliminary scientific analysis went to the media spouting this via a peer reviewed journal when in actual reality the analysis seems flawed.
Is it statistics or the methods used I wonder. I just feel that the public are left frustrated and confused as to the reality of AGW. No wonder the deniers are still in the game when this sort of science is splattered all over the media large bold fonts.
Klaus Flemløse says
I will be pleased if you can answer the following question:
Is the variation in the number of sunspots, the ENSO, changes in the thermohaline circulation and other periodic phenomenon included in the IPCC simulations? How good are then simulations to replicate the variations in the global temperature ?
For me it is unlikely to see a monotonic increasing global temperature.
[Response: Some of the models include solar cycle effects, all have their own ENSO-like behaviour (of varying quality) and THC variability. – gavin]
Nylo says
Certainly, weather influences climate trends.
Is there ANY chance that the observed temperature increase since the 70s (and till 1998) is due mainly to weather (PDO, ENSO, cosmic rays, sun irradiation, solar cycles, cloud cover), or is weather only going to be responsible for cooling or a lack of warming?
Is current La Niña “weather”? If so, was El Niño in 2002 and 2005 weather as well? Should we then say that the high temperatures we saw those years were because of weather, and not climate? Are their temperature records dismisable then? If not, will 2008’s decadal low temperature record be dismisable when it happens?
I saw nobody claim anything about how weather influences the apparent climate trend when it was an all-rise problem in the nineties. But now that we are not warming, weather comes to rescue of the AGW theory.
You have confidence in the models because the average of the ensemble seems to explain well the somewhat recent warming. But what if the warming was caused by weather? It is possible, because reality is just one realisation of a complex system. So all of your models could be completely wrong ans still their average be coincidental with the observations.
In the GH theory, the surface temperatures increase because there is a previous increase of the temperature of the atmosphere, which then emits some extra infrared energy to the surface. In that scenario, the troposphere warms faster than the surface. Otherwise its emissions would not be too big and we would not have so much surface warming. This happens almost in every model run. There are only a handful of model runs that correctly guess nowadays mild tropospheric temperature increase in the tropics. I would like to know what is the surface temperature trend predicted by exactly those model runs which managed to get nowadays tropical tropospheric temperatures correctly. It seems like they got the “weather” right and seem more trustworthy, for me.
bi -- Intl. J. Inact. says
Nylo: cloud cover, solar activity, etc. has always been factored into climate models, from what I understand. And no climate model has been able to model the recent warming without taking CO2 into account.
Gavin: Will you be discussing Monaghan et al.’s recent paper “Twentieth century Antarctic air temperature and snowfall simulations by IPCC climate models” (Geophy. Res. Lett.) some time? The handling of model uncertainties in the paper seems a bit weird to me…
— bi, Intl. J. Inact.
Olee says
I would like to paraphrase the late Douglas Adams on this – to remind of us all of the “Whole Sort Of General Mish Mash” (WSOGMM) that one must consider in complex systems.
Two model runs for a century starting from the exact same initial conditions but with the same forcing may well end up in different states (yielding different trends) at some point of the run. Different models with same or different initial conditions but same forcing also spread in their states throughout the runs. Hence there is a lot of WSOGMM going on as seen in Figure 1.
What is rarely discussed is that WSOGMM is not something that is exclusively associated with climate models. WGSOGMM is an inherent property of the “real” climate system as well. It is most likely that if we had measurements of our instrumental period in one or several parallel universe ‘Earths’ the inter-annual to decadal temperature evolution of these parallel worlds would deviate from each other to some extent. The current near-decadal relaxation of the global temperature-trend may for example have started in 1994 or 2003 rather than 1998 on one of our ‘parallel planets’ since it is largely defined by the 1998 El Nino event – that may have occurred during any year when “conditions were favourable” on some particular ‘parallel universe’ Earth. Thus, to use our instrumental records as the “perfect answer” is probably faulty below some decadal time-scale because this notion mean we think that the climate system is 100% deterministic on this time-scale. This is, however, unlikely since many of the sub-decadal patterns (NAO, PDO, ENSO for example) seems to be resonating more or less stochastically.
All this remind us that a relaxation of the temperature trend for a decade or so is not falsification of the multi-ensemble IPCC-runs – also because the real-world data represent only one realisation of the WSOGMM on these short time-scales.
Barton Paul Levenson says
Nylo posts:
Were those, in fact, El Niño years? I knew 1998 was but I hadn’t heard about the other two. Does anybody know?
Ray Ladbury says
Nylo: Fascinating theory. Explain to me exactly how weather will cause warming over, say, 20 years. I will leave as an exercise to the reader a comparison of the amount of energy needed to warm Earth’s climate by 0.2 degrees and that of a hurricane. Here’s a hint. One’s gonna be a whole helluva lot bigger than the other.
Nylo says
I agree with #13, a relaxation of the temperature trend for a decade or so is not falsification of the multi-ensemble IPCC runs. In fact, a relaxation for 20 years would not be either. The problem with the models is that their error bars are so huge, compared to the trend that they are intended to predict, that they basically cannot be falsified during the academic lifetime of their creators, no matter what happens. However science MUST be falsifiable and at the same time not falsified by events in order to be science. As long as anyone claims that another 10 years of no warming or even cooling would not falsify the models, I cannot give the models any real value or contribution to science. A nice hobbie, at most.
@12: That no climate model has been able to predict the recent warming without an increasing CO2 doesn’t mean that it is not possible, it only means that they all share common beliefs that could be right or could be not. For example, no climate model has been able to get right, at the same time, the current surface temperature trend and the current tropical tropospheric temperature trend, but still it is happening, they are roughly the same. Only non-GH influenced warming has such a fingerprint. How can they all be wrong?
And then there is the fact that the models include things such as cloud cover. Given how unknown the process of cloud formation is, and given that their average results fail to correctly show the real anual variation of cloud cover – they all give too much cloud cover for winter and too little for the summer compared to reality, which means that the clouds of the models fail to cool as much as they cool in real life -, well, it doesn’t speak wonders of the models.
Anyway, it looks interesting for me that the models cannot predict the warming without CO2, but on the other hand, they can predict cooling in spite of CO2 (so that falisification is imposible). How can it be? The models in Gavin’s article show a variability of up to 0.2ºC in a period of 20 years, but they cannot explain a 0.3ºC rise in global temperatures between 1980 and 2000 without CO2? Using a similar reasoning, I would admit as good a model which showed that only an increase of 0.1ºC between 1980 and 2000 was because of CO2, with the remaining 0.2ºC being weather. Such a model would predict immediate cooling now, and only a total +0.4ºC between now and 2100. And you could not say that such a model was falsified by the data either.
[Response: You are too focussed on the global mean temperature. There are plenty of other trends and correlations that can be deduced from the models which can be independently validated (water vapour, sea ice, response to volcanoes/ENSO, ocean heat content, hindcasts etc.). Or you can go back twenty years and see what was said then. Either way, it is a balance of evidence argument. On one hand you have physically consistent models that match multiple lines of evidence, or … nothing. Given that the first indicates serious consequences for the coming decades, and the latter implies you have no clue, there is a big procrastination penalty for sticking your head in the sand. None of the issues you raise are ignored in the models and yet no model agrees with your conclusion. If there was, don’t you think we’d have heard about it? PS. you don’t need climate models to know we have a problem. – gavin]
Nylo says
@Ray Ladbury: it can. Gavin just showed it to you. The same model with just 5 runs can give differences of 0.2ºC in its trend for a period of 20 years. What is it, if not weather?
Nylo says
@ Barton Paul Levenson:
ttp://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml
steven mosher says
great post Gavin, that cleared up a lot of questons. thx.
Ray Ladbury says
Nylo, variability is not just weather–it includes initial conditions, and depending on the model may include variations in a variety of factors (many of which we could measure if they were occuring). Ultimately, what matters are long-term trends. Organisms are adapted to survive weather. Human civilization has done well to adapt to weather. However, sustained changes in climate are something that we haven’t had to deal with in about 10000 years, and certainly not on this order.
To explain the trends of the past 20 years would take a veritable conspiracy of natural variations–or you could assume that a process that is known to operate is still operating. Me, I’ll stick with physics over conspiracy.
steven mosher says
re 17. well put Ray. It could be numeric drift, but I’m sure that is well accounted for. Gavin?
Nylo says
Gavin,
One of the dangers of using an ensemble of models is that it can give you the false feeling that you cover every posibility. I will explain. The PDO is included in the models, as well as solar forcing, ENSO, etc. But because they are considered unpredictable, they are set random and averaged out by the ensemble of models because of pure statistics. They are actually ignored. And that is OK if you want to predict climate without weather, but then you cannot look at the real data and validate your climate-only models with it. Because real data is climate PLUS weather. So both temperature trends being similar says little until you use such a long period of time as to be able to claim that the weather component is irrelevant.
In normal conditions, one century could be enough. But we are not in normal conditions. Why? Because roughly 50% of the century’s warming happened in only 20 years, and is therefore possibly weather-influenced. If it is, we should start to see cooling any time now, as I think we will. So only some of the remaining warming of the century can be trusted as climate change, and therefore it is not clear what you can compare your models against in order to verify whether their predictions can be trusted.
What you cannot do is say that the stable temperatures we have now are due to weather, and that some hypothetical future cooling would be weather too, but that the warming of the last decades of the 20th century was, on the other hand, “weather-clean”.
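A minimal sketch, in Python, of the ensemble-averaging point under discussion, with made-up numbers: the “weather” here is AR(1) noise around an assumed 0.02ºC/yr forced trend, and nothing comes from an actual GCM. Individual runs give widely scattered 20-year trends; the ensemble mean recovers the imposed signal.

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1980, 2101)
    forced = 0.02 * (years - years[0])          # assumed forced signal, degC

    def ar1_noise(n, phi=0.6, sigma=0.1):
        """Red 'weather' noise: x[t] = phi*x[t-1] + eps[t]."""
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
        return x

    runs = np.array([forced + ar1_noise(len(years)) for _ in range(20)])

    # 20-year trends (degC/decade) of individual runs scatter widely...
    single = [10 * np.polyfit(years[:20], r[:20], 1)[0] for r in runs]
    print("single-run 20-yr trends: %.2f +/- %.2f" % (np.mean(single), np.std(single)))
    # ...while the ensemble mean recovers the imposed 0.2 degC/decade.
    print("ensemble-mean 20-yr trend: %.2f" %
          (10 * np.polyfit(years[:20], runs.mean(axis=0)[:20], 1)[0]))

With these (assumed) noise parameters, single runs spread by roughly ±0.1ºC/decade around the true 0.2ºC/decade, which is exactly the kind of run-to-run difference being debated above.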
Richard Treadgold says
I’m surprised how many people take 50-year computer climate model forecasts seriously when we still can’t predict the weather accurately two weeks in advance. Uncertainty in projected temperature from models can approach ±55° after 50 years, which makes the forecast worth nothing.
This is according to Patrick Frank in Skeptic Magazine, http://tinyurl.com/635bf8. I’m not without skill, but I’m no scientist, so I judge by who sounds honest.
[Response: Try judging who sounds credible. Frank’s estimate is naive beyond belief – how can it possibly be that the uncertainty is that large when you look at the stability of the control runs or the spread in the different models as shown above? – gavin]
I’m more interested in the real world and the real climate. So rather than asking what models tell us about variability, I’d like to ask about the science. What are the ramifications for the AGW hypothesis of the lack of atmospheric warming over the ten years since 1998? Arguably, since 1998 was driven by an exceptional El Nino, there’s been no real warming since about 1979, just going by eyeball. It’s up and down, but no trend you could hang your hat on. Temperature today is the same as in 1979. See Junk Science.
[Response: You are joking right? Junk Science indeed. – gavin]
I can understand people shouting warnings about future warming, since models fire them up (hmmm, sorry about the pun, it was actually unintentional), but some people have been screaming about the world growing hotter right now. I honestly can’t see high temperatures anywhere.
Last point: If CO2 is to warm the atmosphere, and warmer still with more CO2, then if CO2 rises but temperature is constant or falls, the theory is disproved. Done. Where is the faulty reasoning? Or what is the change to the theory?
[Response: The ‘theory’ that there is no weather, no other forcings, and no interannual or interdecadal variability would indeed have been falsified. Congratulations. Maybe you’d care to point me to any publications that promote that theory though, because I certainly don’t recognise it as a credible position. Perhaps you’d like to read the IPCC report to see what theories are in fact being proposed, so that you can work on understanding them. – gavin]
Richard Treadgold
Roger A. Pielke Sr. says
Gavin – Your plot of the individual realizations is quite useful. To add to its value, I recommend that you also plot the globally averaged upper-ocean heat storage change for each year, in Joules, that each model produces, along with the resulting diagnosed global average radiative forcing in Watts per square meter, as done by Jim Hansen [see http://climatesci.colorado.edu/publications/pdf/1116592Hansen.pdf].
[Response: Great idea – why don’t you do it? – gavin]
Ray Ladbury says
Has anybody else noticed how fixated the denialosphere is on Karl Popper? Everything is about “falsifiability”. It is as if the past 70 years of philosophy of science never happened for them. Popper’s concept of falsifiability is important, but it isn’t particularly helpful when considering a complicated model with many interacting factors. The reason is that most of the factors included in the model probably contribute to some extent, and, especially for dynamical models, the selection of parameter ranges may be dictated (supported) by independent data. To “falsify” the model would mean giving up the explanatory and predictive power of a model that gets many aspects right. Rather, it makes more sense to keep the basic structure of a model with a proven track record and add factors as needed and as supported by evidence, or to modify the strengths of the various contributors – again, as supported by evidence.
It makes a lot more sense to look at this in terms of model selection (or even model averaging) than in terms of “falsification”. So all you denialists have to do is come up with a model that does a better job of explaining the preponderance of information (and more) explained by the current crop of GCMs. Go ahead. We’ll wait.
(crickets chirping)
Michael Lucking says
With these models, I assume the total heat absorbed by the yearly melting of ice has been included?
Gaelan Clark says
#21, Yeah Roger, why don’t you do it? I mean, it’s not like Gavin won’t share his code with you, is it? Surely Gavin will give you his complete model runs and the exact parameters used in all of them so that you can expand on his Science.
He says it is a “Great idea”, so I expect you to have this information before you ask for it.
[Response: Don’t be an ass. The data is all available at PCMDI and Roger has frequently expressed interest in it. I do think it is a good idea, but I have other things I am working on. The whole point of this post is to point people to the fact that the data is available and people should look at it for anything they particularly care about. If Roger cares about that metric, he should download it and look. I have not done so, and do not have the time to service every request that comes my way. FYI our complete model source code and runs are all available on the GISS web site. -gavin]
dhogaza says
Climate science predicts nothing about your willingness to pay attention, and the fact that you didn’t notice all the hoo-raw about the exceptionally strong El Niño in 1998 doesn’t mean that millions of other people didn’t.
Alexander Harvey says
Gavin,
It is my understanding that once all the known forcings are taken into account, using their measured values, the models reproduce the temperature history from 1950 to the present with a high degree of accuracy, both in trend and in accounting for variation due to volcanoes. What they cannot easily account for is the precise timing of effects like ENSO.
Viewing the difference between the mean of multiple runs (or a similar process) and the real temperature record as the “weather”, or erratic, component, I believe that its amplitude is less than +/-0.15C around 90% of the time and peaks at around +/-0.25C.
Now, visually, your first figure is telling the same story, which is heartening. If it were predicting a tighter band it would be contrary to reality.
Is there a recognised “profile” of the erratic part of the real temperature record, i.e. how much of the time the record should be 0.1C, 0.2C, 0.3C etc. above and below trend? I mean after all known forcings, including volcanoes, are taken into account.
Your second figure seems to tell the same story. All the regression lines are confined inside a “pencil” of uncertainty with a width of about +/-0.25C. The longer the pencil you choose, the tighter the degC/decade band.
It is possible that this is the fundamental limit to the accuracy of prediction, but in the long run, 50 years plus (a very long pencil), it gives a very narrow band for the degC/decade figure.
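For reference, the narrowing of the “pencil” with length follows from ordinary least squares. Assuming (optimistically) uncorrelated residuals with standard deviation $\sigma$ at $n$ equally spaced annual points, the trend estimate $\hat{b}$ has

    \mathrm{Var}(\hat{b}) \;=\; \frac{\sigma^2}{\sum_t (t - \bar{t})^2} \;=\; \frac{12\,\sigma^2}{n(n^2 - 1)},

so its standard error falls off roughly as $n^{-3/2}$: a pencil four times longer is about eight times tighter in ºC/decade. Serially correlated “weather” reduces the effective $n$ but leaves the qualitative picture intact.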
Now what interests me is why the uncertainties for a doubling of CO2 (or equivalent) are still so poorly constrained in comparison. (I think 3C +/- 1.5C is still the quoted band.)
We now have some reasonably good figures for what the oceans have done over the last 50 years, and the amount of heat taken up by the oceans does constrain the value of the climate sensitivity over that period. Is it the case that the models make different assumptions about how the sensitivity will evolve in the coming decades, or is it simply that the models are constrained during the known historic period and then diverge in the future for lack of constraint? That is, does the “pencil” turn into a “cone”? I can see no convincing tendency towards divergence in your first figure. Perhaps a figure extending a few more decades would help.
Finally, do the individual runs that make up your first figure simply reflect different initial conditions, or are certain parameters varied between runs?
Best Wishes
Alexander Harvey
[Response: The variations for single models are related to the initial conditions. The variations across different models are related to both initial conditions and structural uncertainties (different parameterisations, solvers, resolution etc.). The two sorts of variation overlap. – gavin]
Hank Roberts says
http://julesandjames.blogspot.com/2008/05/are-you-avin-laff.html#comments
Nylo says
http://www.elsideron.com/GlobalTempPredictions.jpg
In the graph at the link above you can see GISS temperature data for the last 125 years. On top of it I have drawn a light blue line approximating the catastrophic predictions of the models (the trend being 1.5ºC/century in the final part, so not even as catastrophic as some of the models predict). I have also drawn a green line with an alternative forecast, which assumes that the warming between 1980 and 2000 was mostly, but not entirely, due to weather. This line shows only 0.6ºC/century of warming, and would also NOT be falsified by the real temperature data.
As you can easily see, because of the last decade of stabilised temperatures, we are now at a crucial point. In dotted lines, again in blue and green, I have tried to represent the logical evolution of temperature needed to more or less match each of the predictions. The 0.6ºC/century prediction desperately needs cooling ASAP, and I would call it falsified if it doesn’t cool within 2 years. But the 1.5ºC/century AGW prediction also needs some warming pretty quickly, or it will be all but impossible to catch up with the prediction. I wouldn’t wait more than 5 years before deciding which of the two, if either, is more accurate. I don’t think stable temperatures, with no warming or cooling, would support either prediction; it would rather prove both of them wrong.
By the way, I chose a straight green line on purpose. The CO2 we emit is increasing, but on the other hand, the GH effect of any extra CO2 we emit is decreasing exponentially.
[Response: No it’s not. The forcing is increasing slightly faster than linearly. – gavin]
Patrick M. says
re 26 (gavin):
Could you post a link to where the source code can be downloaded?
Thanks!
[Response: The ModelE source code can be downloaded from http://www.giss.nasa.gov/tools/modelE or ftp://ftp.giss.nasa.gov/pub/modelE/ , the output data are available at http://data.giss.nasa.gov and the full AR4 diagnostics from all the models at http://www-pcmdi.llnl.gov/ipcc/about_ipcc.php – gavin]
Ice says
Thanks for this very interesting post.
I was just wondering how independent, and hence non-redundant, all these different climate models really are; that is, whether one should somehow account, when “averaging” them, for some particular (maybe historical) “closeness” between some of them. For example, I would think there aren’t 20-something different and independent schemes of sub-grid parametrization for convection or cloudiness (are there?), or maybe I’m raising a false problem here…
[Response: No, it’s a real issue. IPCC does exactly that. I didn’t bother. – gavin]
Nylo says
Gavin, the forcing by CO2 is measured in ºC for a DOUBLING, which means that it follows an exponentially decreasing trend: when we add 280 ppm we will have doubled, but in order to achieve the same warming again we would have to add a further 560 ppm, not a further 280. The more CO2 we already have, the faster we need to keep adding CO2 to maintain the same warming. That is how the physics works. What really counts is the fractional change in the existing concentration of CO2, not how much “raw” CO2 we add. Adding 5 ppm was far more important when the concentration was 180 ppm than it is now.
[Response: We all know that the forcing is not linear in concentration. But it isn’t decreasing, it is increasing logarithmically. And it is certainly not decreasing exponentially. – gavin]
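For the record, the standard simplified expression for CO2 forcing is F = 5.35 ln(C/C0) W/m2 (Myhre et al. 1998). A quick sketch, using an illustrative exponential concentration path (not any actual scenario), shows why “logarithmic in concentration” does not mean “decreasing in time”:

    import numpy as np

    def co2_forcing(c, c0=280.0):
        """Simplified CO2 radiative forcing in W/m2 (Myhre et al. 1998)."""
        return 5.35 * np.log(c / c0)

    years = np.arange(0, 101)
    conc = 385.0 * 1.005 ** years    # illustrative: ~0.5%/yr growth from 385 ppm

    f = co2_forcing(conc)
    print("forcing added in first decade: %.3f W/m2" % (f[10] - f[0]))
    print("forcing added in last decade : %.3f W/m2" % (f[100] - f[90]))
    # Equal fractional increases give equal forcing increments, so exponential
    # concentration growth yields *linearly* growing forcing; faster-than-
    # exponential concentration growth gives super-linear forcing.

The two printed increments are identical (about 0.27 W/m2 per decade here), which is the point: the per-ppm effect declines, but the forcing keeps climbing as long as the fractional growth rate is sustained.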
jae says
Just squinting at those individual realizations, I sure don’t see any that show a ten-year-long stretch of level or decreasing temperature.
[Response: The histogram shows at least one that has a negative trend from 1995 to 2014, and there are nine that have negative trends from 2000 to 2007. For 1998 to 2007 there are 7 downward trending realisations (down to -0.15 degC/dec). Actual calculation trumps eyeballing almost every single time. – gavin]
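For anyone who wants to repeat the calculation rather than squint, the per-window trends are a few lines of Python; `realizations` below is a hypothetical array of annual means, shape (n_runs, n_years), of the kind downloadable from the PCMDI archive:

    import numpy as np

    def window_trends(realizations, years, start, end):
        """OLS trend in degC/decade for each run over [start, end] inclusive."""
        mask = (years >= start) & (years <= end)
        t = years[mask]
        return np.array([10 * np.polyfit(t, run[mask], 1)[0]
                         for run in realizations])

    # Example (with realizations/years loaded from the archive):
    # trends = window_trends(realizations, years, 2000, 2007)
    # print((trends < 0).sum(), "of", len(trends), "runs cool over 2000-2007")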
Bryan S says
Re #20: Ray, it requires an anomalous accumulation of heat of about 0.2 W/m2 over a single annual period (maybe 18 months) to heat the atmosphere 0.3-0.4 degrees C. Compare this to a modeled net upward radiative flux from the ocean surface of about 0.7 W/m2 during the 1998 El Nino alone. Now consider that the observed change in upper-ocean heat storage (net TOA radiative imbalance) over this same interval, as reported in Willis 2004, is around +1.2 W/m2. This means that even though there was a theoretical loss of 0.5 W/m2 from the atmosphere to space as a result of the El Nino, the ocean still accumulated significant heat during it. So clearly weather processes exchange plenty of heat back and forth between the ocean, atmosphere, and space – enough to accomplish considerable warming or cooling of the atmosphere over annual to multi-decadal periods. The real science question is whether this annual to multi-decadal intrinsic variability averages to a zero trend over the period in question. My point is that no physical law requires the inherent trend to be zero. The notion is based on the ensemble mean of different GCMs run with stable CO2, all having similar core physics and slightly different parameterizations of weather processes. There is only one realization of the actual climate system, however, and clearly unforced variability can have a trend across many different scales. Roger Pielke Sr. has made an excellent point in stating that there is really no such thing as “natural variability”. It is like making a white cake batter, stirring in a little chocolate, and then trying to make both a white cake and a chocolate cake from the same batter. Once the chocolate has been stirred in, you have a chocolate cake. The human influences, including aerosols, land use, and GHGs, have already been stirred together with natural variability.
Gavin makes an important statement when he points out that many people have mistaken the range of model trajectories for uncertainty in the ensemble mean from multiple models. An important question to ask is why there is uncertainty in the ensemble mean, and whether this uncertainty is bracketed properly (high side or low side). Another way to ask this is: what are the variables controlling the uncertainty in the magnitude of the forced component of climate change? I suggest that as more physical processes are added to the models, the range of uncertainty will grow. Ice sheet dynamics included in the models might increase the high side, and more realistic representation of cloud feedback might increase the range on the low side. Better land use representation might go either way.
Ray Ladbury says
Nylo says: “The CO2 we emit is increasing, but on the other hand, the GH effect of any extra CO2 we emit is decreasing exponentially.”
OK, I don’t have to go any further than this. How can you expect to be taken seriously when you haven’t even bothered to acquaint yourself with the physics of the model you are arguing against?
Gary Plyler says
Can you please explain why you have decided to base your bet on surface temperatures (HadCRUT) instead of satellite measurements of global atmospheric temperature? After all, no matter what corrections are made to account for the urban heat island effect or for sensor relocation, they are corrections that cannot be independently verified. I really think that your bet should be based on satellites, which look at the whole atmosphere with no possibility of local bias.
[Response: The data and time periods for this wager are based purely on the targets suggested by Keenlyside et al. You will however find that no source of data is unaffected by structural uncertainty. – gavin]
Axel says
Gavin, great post. The discussion of the Keenlyside et al paper has naturally focused on what the paper is predicting. I’d be interested to hear comments on the paper’s claim to have made advances in multi-decadal climate prediction (its title). Their figure 1d shows considerable improvement in skill (correlations) over the uninitialized simulations over ocean areas. I understand there is a similar paper (Smith et al.) which also suggests it is possible to nudge models into reproducing the decadal variability of the “real world” realization and to use this for decadal climate prediction. Is this an appropriate reading? But maybe this is a discussion for a separate thread.
[Response: We’ll discuss the K et al studies in greater depth at some point soon. – gavin]
tamino says
Re: #34 (Nylo)
[sarcasm]Gavin is only a professional climate scientist — so he must not have known this.[/sarcasm]
And by the way, CO2 isn’t increasing linearly, so it turns out that in the real world CO2 forcing is increasing faster than logarithmically; in fact, over the time span of the Mauna Loa record it’s faster than linear.
Ray Ladbury says
Bryan S., what is typically going on is a change in the amount of cold water coming to the surface. However, what you are failing to consider is that such changes do not persist for long. If you have warming due to such a fluctuation in the absence of increased GHGs, you get more radiation escaping to space (and vice versa). It is only with a GHG mechanism, or a sustained trend in some other forcing, that you get sustained warming. What is your candidate for a mystery sustained forcer?
Gary Fletcher says
We could have 10 consecutive years of 0.04 C global annual mean temperature rise, each year being an “ambiguous” record, resulting in a cumulative 0.4 C rise over the period, yet never have an “unambiguous” 0.1 C record year. Unless, of course, we keep a separate record of “unambiguous” years, so that unambiguous record years are considered separately from ambiguous ones. Which did you mean?
[Response: After 3 years the previous record would have been unambiguously broken (3 × 0.04 C = 0.12 C, which exceeds the 0.1 C ambiguity margin). – gavin]
GISS, in their 2007 summary, indicate that “projection of near-term global temperature trends with reasonably high confidence” can be made. They predict that a record global temperature year “clearly exceeding that of 2005 can be expected within the next 2-3 years.” They base that prediction largely on the solar cycle. Your considerations are more general; even so, your graph indicates about a 2/3 chance of a record year in any three-year period. Do you agree with the more confident and specific GISS prediction?
http://data.giss.nasa.gov/gistemp/2007/
[Response: The 50% level (ie. when you would expect to see a new record half the time) is between 1 and 6 years depending on how ambiguous you want to be. So it isn’t contradictory, but they are probably a tad more confident than these statistics would imply. – gavin]
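A rough Monte Carlo version of this statistic, under assumptions chosen for illustration only (0.02ºC/yr trend, 0.1ºC white-noise interannual scatter, and an “unambiguous” record defined as beating the old one by a 0.1ºC margin; none of these numbers come from the post):

    import numpy as np

    rng = np.random.default_rng(1)

    def record_gaps(trend=0.02, sigma=0.1, margin=0.0, years=5000):
        """Gaps (yr) between successive records in a synthetic trend+noise series."""
        temps = trend * np.arange(years) + rng.normal(0.0, sigma, years)
        gaps, record, last = [], temps[0], 0
        for t in range(1, years):
            if temps[t] > record + margin:      # beat the old record by `margin`
                gaps.append(t - last)
                record, last = temps[t], t
        return gaps

    for margin in (0.0, 0.1):                   # 0.1 ~ an "unambiguous" record
        gaps = record_gaps(margin=margin)
        print("margin %.1f: median waiting time %d yr" % (margin, int(np.median(gaps))))

With a zero margin, records come almost every year or two; with the 0.1ºC margin, the median waiting time stretches to several years, consistent with the 1-to-6-year range quoted in the response.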
Mick says
So what results would falsify this chart?
Is it possible to observe something that contradicts the IPCC?
[edit]
[Response: Sure. Data that falls unambiguously outside it. – gavin]
Lamont says
I was playing around over the weekend with ENSO data and NASA global temperature data. I get a fairly good fit if I smooth the global temperature data to a 6-month average, advance the ENSO data by 6 months and divide it by 10 (so a +1.0 El Nino produces +0.1 of global temperature forcing), and then detrend a +0.5C rise in temperature since 1978.
It’s not entirely scientific, since I’ve just eyeballed the smoothing and fit parameters, but Pinatubo is clearly identified and I “discovered” the eruption of Mount Agung in 1963.
I find it highly implausible that the global warming since 1978 has anything to do with ENSO, given the lack of correlation between the warming trend since 1978 and any trend in the ENSO pattern over the same period.
One thing I don’t quite understand about my fit is that I can identify two periods of cooling which are not correlated with ENSO or AGW: Pinatubo and Agung. However, there are a few anomalous transient warm spots, such as around 1980-1982, which are not explained by AGW or ENSO. What other factors could cause the globe to warm by a few tenths of a degree C for a year or two, similar to how it cools in response to a large volcano?
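Lamont’s eyeball fit is easy to make reproducible. A sketch of the procedure he describes (the file names are placeholders, and the 6-month lag, 6-month smoothing, and ~0.1ºC-per-ENSO-unit scaling are his eyeballed choices, not established values):

    import numpy as np

    # Placeholder inputs: monthly global-mean anomalies and an ENSO index
    # (e.g. Nino 3.4), as 1-D arrays aligned on the same months.
    temp = np.loadtxt("global_anomaly_monthly.txt")   # hypothetical file
    enso = np.loadtxt("nino34_monthly.txt")           # hypothetical file

    def smooth(x, n=6):
        """Simple n-month running mean."""
        return np.convolve(x, np.ones(n) / n, mode="same")

    lag = 6                                  # ENSO leads temperature by ~6 months
    t = smooth(temp)[lag:]
    e = smooth(enso)[:-lag]

    months = np.arange(len(t))
    trend = np.polyfit(months, t, 1)         # remove the secular warming first
    residual = t - np.polyval(trend, months)

    scale = np.polyfit(e, residual, 1)[0]    # fitted degC per ENSO-index unit
    print("fitted ENSO scaling: %.3f degC per index unit" % scale)

Fitting the scaling by regression, rather than eyeballing it, would also give a residual series in which volcanic dips (and any unexplained warm spots) stand out cleanly.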
David Abrams says
As I read your graph, you are predicting better than 50/50 odds that there will be a new record temperature set in the next 2 years. Would you be interested in a wager on this?
Or am I misreading the graph somehow?
[Response: For a record that would be unambiguous (and therefore clear in all estimates of the trend), the 50% waiting period is somewhere around 6 years according to this rough study. Therefore we are slightly overdue for such a record (but not so much that you’d worry it wasn’t coming). Let me think about the bet. – gavin]
richard says
I had been vaguely working on a manuscript about waiting times for new records in the AR4 models. I like the approach you have used here, but by treating all the models together you obscure the fact that some of the models have much more decadal-scale variability than others. Analysing just the sub-set of models which probably (but I need to test this) have a better representation of 20th-century variability gives a larger tail to the waiting-time distribution, and suggests that the current waiting time is far from exceptional.
[Response: I would definitely recommend doing a better job for a publication. You would want to do the calculation as a function of magnitude/structure of the residuals from the expected pattern and then see where the real world would fall. – gavin]
Jared says
#11
Excellent points, and ones that are largely (and conveniently) ignored by the AGW community.
Why is it that natural variability can be credited with short-term (10-, 20-year) trends when they might produce a “relaxing” of global warming, but GHG-induced warming is hailed as the MAIN factor behind the 30-year warming up to 1998? Never mind that this same period coincided with the warm +PDO phase, or that El Ninos outnumbered La Ninas 6 to 3 during it, or that the warmest year on record also featured the strongest El Nino on record.
In other words, basically ALL of the natural factors favored warming from 1977-1998, yet AGW is given nearly all the credit. Yet now that warming has obviously slowed over the past 10 years, natural variability gets the blame. Sorry, but it’s a two-way street, and this needs to be acknowledged for a more balanced look at climate change.
Larry says
My struggle for understanding continues. Can you please run the drill on this article from the Skeptic?
A key point appears to be “It turns out that uncertainties in the energetic responses of Earth climate systems are more than 10 times larger than the entire energetic effect of increased CO2”
Is this right? Does it have the implications that Frank claims?
[Response: No and no. Frank confuses the error in an absolute value with the error in a trend. It is equivalent to assuming that if a clock is off by about a minute today, that tomorrow it will be off by two minutes, and in a year off by 365 minutes. In reality, the errors over a long time are completely unconnected with the offset today. – gavin]
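The clock analogy in code form. A constant model bias (here an assumed 0.5ºC offset, purely for illustration) shifts every year equally and cancels out of any trend; a procedure that instead treats the offset as a fresh, independent error accumulating each step grows without bound:

    import numpy as np

    years = np.arange(50)
    truth = 0.02 * years                        # some underlying trend, degC

    biased = truth + 0.5                        # constant bias: offset, not drift
    trend_error = np.polyfit(years, biased, 1)[0] - 0.02
    print("trend error from a constant 0.5 degC bias: %.2e degC/yr" % trend_error)

    # Treating the same 0.5 as a fresh random error each year instead gives
    # root-sum-square growth, i.e. the "off by 365 minutes in a year" fallacy:
    print("year-50 'uncertainty' if compounded: %.1f degC" % (0.5 * np.sqrt(50)))

The first print is numerically zero: an offset changes the intercept, not the slope, which is why the spread of the control runs, not the absolute bias, is the relevant measure of trend uncertainty.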
It’s also interesting that a simple linear model replicates the GCM results with none of the complexity.
Finally, the notion that the uncertainties introduced by weaknesses in cloud modeling are easily large enough to overwhelm GHG-related impacts really makes me want to throw up my hands.
Roger A. Pielke Sr. says
Gavin – I do not have funding to analyze the trends in upper-ocean heat content. However, if you direct me to where the specific files are, I will see if I can interest a student in completing this analysis.
Since ocean heat content change is such an effective way to diagnose the radiative imbalance of the climate system (and avoids the multitude of problems with the use of surface temperatures), it is a disappointment that GISS does not make this a higher priority. Jim Hansen has also emphasized the value of using ocean heat content changes, so I would expect he would support this analysis.
[Response: Roger, everyone’s time is limited. This is why public archives exist (here is the link again). As I have said many times, if you want an analysis done, you are best off doing it yourself. – gavin]
Alf Jones says
re #38: “I really think that your bet should be based on satellites, which look at the whole atmosphere with no possibility of local bias.”
See the US CCSP report on satellite temperature reconstructions to find out about the uncertainties that lie in those data sets.