Over the last couple of months there has been much blog-viating about what the models used in the IPCC 4th Assessment Report (AR4) do and do not predict about natural variability in the presence of a long-term greenhouse gas related trend. Unfortunately, much of the discussion has been based on graphics, energy-balance models and descriptions of what the forced component is, rather than the full ensemble from the coupled models. That has led to some rather excitable but ill-informed buzz about very short time scale tendencies. We have already discussed how short term analysis of the data can be misleading, and we have previously commented on the uncertainty in the ensemble mean being confused with the envelope of possible trajectories (here). The actual model outputs have been available for a long time, and it is somewhat surprising that no-one has looked specifically at them given the attention the subject has garnered. So in this post we will examine directly what the individual model simulations actually show.
First, what does the spread of simulations look like? The following figure plots the global mean temperature anomaly for 55 individual realizations of the 20th Century and their continuation for the 21st Century following the SRES A1B scenario. For our purposes this scenario is close enough to the actual forcings over recent years for it to be a valid approximation to the simulations up to the present and probable future. The equal weighted ensemble mean is plotted on top. This isn’t quite what IPCC plots (since they average over single model ensembles before averaging across models) but in this case the difference is minor.
It should be clear from the above plot that the long term trend (the global warming signal) is robust, but it is equally obvious that the short term behaviour of any individual realisation is not. This is the impact of the uncorrelated stochastic variability (weather!) in the models that is associated with interannual and interdecadal modes – these can be associated with tropical Pacific variability or fluctuations in the ocean circulation, for instance. Different models have different magnitudes of this variability, spanning the range that can be inferred from the observations, and in a more sophisticated analysis you would want to adjust for that. For this post, however, it suffices to use them ‘as is’.
We can characterise the variability very easily by looking at the range of regressions (linear least squares) over various time segments and plotting the distribution. This figure shows the results for the period 2000 to 2007 and for 1995 to 2014 (inclusive) along with a Gaussian fit to the distributions. These two periods were chosen since they correspond with some previous analyses. The mean trend (and mode) in both cases is around 0.2ºC/decade (as has been widely discussed) and there is no significant difference between the trends over the two periods. There is of course a big difference in the standard deviation – which depends strongly on the length of the segment.
Over the short 8 year period, the regressions range from -0.23ºC/dec to 0.61ºC/dec. Note that this is over a period with no volcanoes, and so the variation is predominantly internal (some models have solar cycle variability included which will make a small difference). The model with the largest trend has a range of -0.21 to 0.61ºC/dec in 4 different realisations, confirming the role of internal variability. 9 simulations out of 55 have negative trends over the period.
Over the longer period, the distribution becomes tighter, and the range is reduced to -0.04 to 0.42ºC/dec. Note that even for a 20 year period, there is one realisation that has a negative trend. For that model, the 5 different realisations give a range of trends of -0.04 to 0.19ºC/dec.
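For anyone who wants to play with the segment-trend idea at home, it is easy to sketch with synthetic data. The trend, AR(1) coefficient and noise amplitude below are made-up stand-ins chosen only for illustration – this is not the actual archive, just a demonstration that the spread of short-segment trends shrinks as the segment lengthens:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the model archive: 55 realizations of annual
# global-mean anomaly = 0.02 C/yr forced trend + AR(1) "weather" noise.
# (Assumed parameters for illustration only, not fitted to model output.)
n_real, n_years = 55, 30
trend = 0.02                       # C per year (~0.2 C/decade)
phi, sigma = 0.5, 0.1              # AR(1) coefficient and innovation s.d.

years = np.arange(n_years)
noise = np.empty((n_real, n_years))
noise[:, 0] = rng.normal(0, sigma, n_real)
for t in range(1, n_years):
    noise[:, t] = phi * noise[:, t - 1] + rng.normal(0, sigma, n_real)
anoms = trend * years + noise

def decadal_trends(data, length):
    """OLS trend (C/decade) over the first `length` years of each realization."""
    x = np.arange(length)
    slopes = np.polyfit(x, data[:, :length].T, 1)[0]   # one slope per realization
    return 10 * slopes                                  # C/yr -> C/decade

t8 = decadal_trends(anoms, 8)    # cf. the 8-year segments
t20 = decadal_trends(anoms, 20)  # cf. the 20-year segments

print(f"8-yr trends:  mean {t8.mean():.2f}, s.d. {t8.std():.2f} C/dec")
print(f"20-yr trends: mean {t20.mean():.2f}, s.d. {t20.std():.2f} C/dec")
print("negative 8-yr trends:", int((t8 < 0).sum()))
```

Even with these invented numbers you see the same qualitative picture: both distributions centre near 0.2ºC/dec, some 8-year trends are negative, and the 20-year standard deviation is several times smaller.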
Therefore:
- Claims that GCMs project monotonic rises in temperature with increasing greenhouse gases are not valid. Natural variability does not disappear because there is a long term trend. The ensemble mean is monotonically increasing in the absence of large volcanoes, but this is the forced component of climate change, not a single realisation or anything that could happen in the real world.
- Claims that a negative observed trend over the last 8 years would be inconsistent with the models cannot be supported. Similar claims that the IPCC projection of about 0.2ºC/dec over the next few decades would be falsified with such an observation are equally bogus.
- Over a twenty year period, you would be on stronger ground in arguing that a negative trend would be outside the 95% confidence limits of the expected trend (the one model run in the above ensemble suggests that would only happen ~2% of the time).
A related question that comes up is how often we should expect a global mean temperature record to be broken. This too is a function of the natural variability (the smaller it is, the sooner you expect a new record). We can examine the individual model runs to look at the distribution. There is one wrinkle here though which relates to the uncertainty in the observations. For instance, while the GISTEMP series has 2005 being slightly warmer than 1998, that is not the case in the HadCRU data. So what we are really interested in is the waiting time to the next unambiguous record i.e. a record that is at least 0.1ºC warmer than the previous one (so that it would be clear in all observational datasets). That is obviously going to take a longer time.
This figure shows the cumulative distribution of waiting times for new records in the models starting from 1990 and going to 2030. The curves should be read as the percentage of new records that you would see if you waited X years. The two curves are for a new record of any size (black) and for an unambiguous record (> 0.1ºC above the previous, red). The main result is that 95% of the time, a new record will be seen within 8 years, but that for an unambiguous record, you need to wait for 18 years to have a similar confidence. As I mentioned above, this result is dependent on the magnitude of natural variability which varies over the different models. Thus the real world expectation would not be exactly what is seen here, but this is probably reasonably indicative.
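The record-counting logic can be sketched on synthetic series in the same hedged spirit (again, assumed numbers rather than the archive): track the running maximum of each series and note how long it takes to be beaten, either by any margin or by more than 0.1ºC.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy version of the waiting-time calculation: many synthetic annual
# series (0.02 C/yr trend + white noise; invented values).
n_series, n_years = 500, 60
t = np.arange(n_years)
series = 0.02 * t + rng.normal(0, 0.1, (n_series, n_years))

def waiting_times(x, margin=0.0):
    """Gaps (years) between successive records beaten by at least `margin`."""
    waits = []
    for s in x:
        record, last = s[0], 0
        for yr in range(1, len(s)):
            if s[yr] > record + margin:
                waits.append(yr - last)
                record, last = s[yr], yr
    return np.array(waits)

any_rec = waiting_times(series)              # record of any size
unamb = waiting_times(series, margin=0.1)    # unambiguous record

for name, w in [("any record", any_rec), ("unambiguous (>0.1C)", unamb)]:
    print(f"{name}: 95% of waits are within {np.percentile(w, 95):.0f} years")
```

As in the model ensemble, the unambiguous records take systematically longer to arrive, and the exact waiting times depend on the assumed noise amplitude.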
We can also look at how the Keenlyside et al results compare to the natural variability in the standard (un-initialised) simulations. In their experiments, the decadal means of the periods 2001-2010 and 2006-2015 are cooler than 1995-2004 (using the closest approximation to their results with only annual data). In the IPCC runs, this only happens in one simulation, and then only for the first decadal mean, not the second. This implies that there may be more going on than just tapping into the internal variability in their model. We can specifically look at the same model in the un-initialised runs. There, the differences between first decadal means span the range 0.09 to 0.19ºC – significantly above zero. For the second period, the range is 0.16 to 0.32ºC. One could speculate that there is actually a cooling that is implicit to their initialisation process itself. It would be instructive, though, to try some similar ‘perfect model’ experiments (where you try to replicate another model run rather than the real world) to investigate this further.
Finally, I would just like to emphasize that for many of these examples, claims have circulated about the spectrum of the IPCC model responses without anyone actually looking at what those responses are. Given that the archive of these models exists and is publicly available, there is no longer any excuse for this. Therefore, if you want to make a claim about the IPCC model results, download them first!
Much thanks to Sonya Miller for producing these means from the IPCC archive.
Gerald Browning says
Jonas (300),
I am willing to continue to discuss well posedness, but ill posedness of the hydrostatic system is the problem, and the unbounded growth shows up faster and faster as the mesh is reduced (more waves are resolved), exactly as predicted by the continuum theory in the above reference by Heinz and me. There are computations that show this on Climate Audit under the thread called Exponential Growth in Physical Systems. I ran those tests just to illustrate the theoretical results.
I cannot here explain the connection between linear and nonlinear theory,
but there are theorems discussing this issue, especially for hyperbolic and
parabolic systems. See the book by Kreiss and Lorenz on the Navier-Stokes equations.
For some beautiful nonlinear theory on the NS equations, look at the minimal scale estimates by Henshaw, Kreiss and Reyna and associated numerical convergence tests.
Jerry
Jim Galasyn says
Something fun to model:
Gerald Browning says
Ray Ladbury (#297),
I am sure modeling will go on. That doesn’t mean it necessarily leads anywhere.
Jerry
Raven says
Ray Ladbury Says:
“So, Jerry, given that without the models, we do not have a way of establishing a limit on risk or of directing our efforts to best effect, is it your contention that dynamical modeling of climate is impossible given current computing resources and understanding. Or do you have concrete suggestions for improving the models?”
The issue is many AGW advocates have grossly oversold the reliability of the models and are using them to justify extremely radical policy actions on CO2 (calls for a WW2 style effort to ‘combat’ CO2 is one example of this kind of thinking).
If we did not have the models we would still be able to formulate policy, but it would likely be heavily weighted towards adaptation, R&D and a long term shift away from CO2 producing energy sources. Demands for radical cuts in emissions over short periods of time appear to be driven primarily by model outputs, since there is little empirical evidence that shows that warming is a bad thing when the positives are weighed against the negatives.
This is one case where no information would actually be better than unreliable information because the unreliable information lulls people into believing that they know more than they do.
[Response: Ignorance is bliss then? That would be the difference between us. The fact is that people cling to the ‘we don’t know everything so we don’t know anything’ mantra because it is the only attitude that leaves a tiny window open for their earnest desire for this not to be a problem. In reality, it’s just a fig leaf to cover their wishful thinking. It doesn’t work in any other walk of life and it isn’t useful here. – gavin]
Raven says
Gavin,
Making decisions based on bad information is worse than making decisions based on no information. I realize that you believe the models are accurate predictors of the future but I have seen little compelling evidence of that (using extremely wide uncertainty intervals to explain why the actual data does not match the central tendency of the models actually undermines their credibility more than it supports them).
This issue will be resolved in 5-6 years when the next solar max rolls around. If the current flat trend continues through the solar max then it will be virtually impossible to argue that the models accurately reflect reality. If warming resumes rapidly then the models will be vindicated. However, until then no one can reasonably claim that the models are known to be reliable predictors of the future.
In the meantime, policy makers will have to wait and see what happens.
[Response: Models quantify what is known. You are arguing that this quantification should not be done, and that all arguments should be resolved purely by the passage of time. Why bother to study anything, then? This kind of argument is completely specious. For example, would you apply it to medicine as well? – “well I don’t want to give a diagnosis because I’m not certain of every detail and a little information can be a dangerous thing. Let’s just wait and see if you get better on your own.”… Of course not. This is simply a faux logical argument, made because you don’t like what the models tell us. This extremely partial philosophizing about science is simply noise. – gavin]
Ray Ladbury says
Raven, Sorry, but without the models, we would be flying blind in a physical system with known positive feedbacks that could rip any control we have away at any moment. That would be a situation demanding even more radical action, because we could not bound risk. The models give us at least some understanding of how much time we have before changes become irreversible and of what changes are likely to occur and whether we can adapt to them.
My day job involves risk management, and a risk we cannot bound is the worst kind. It demands we throw everything at the problem until we can at least bound the risk. To bound the risk, we must be able to model it.
Moreover, the models have yielded valuable physical insight into the climate system–and they’ve got a good and improving track record, despite what some on this thread have claimed. The information we have that we can rely on is that we are changing the climate. The models tell us how much, and if anything they are conservative.
David B. Benson says
Ray Ladbury (306) wrote “… and if anything they are conservative.” That is, the models may be erring on the side of less AGW effects than will actually occur. In the risk management sense, that is not being conservative, is it?
Pat Frank says
Re. #127 — Anthony Kendall wrote, “I just finished Frank’s article, and I have to say that it makes really two assumptions that aren’t valid …1) The cloudiness error…”
[snip]
“… he uses this number 10%, to then say that there is a 2.7 W/m^2 uncertainty in the radiative forcing in GCMs. This is not true. Globally-averaged, the radiative forcing uncertainty is much smaller, because here the appropriate error metric is not to say, as Frank does: “what is the error in cloudiness at a given latitude” but rather “what is the globally-averaged cloudiness error”. This error is much smaller, (I don’t have the numbers handy, but look at his supporting materials and integrate the area under Figure S9), indeed it seems that global average cloud cover is fairly well simulated. So, this point becomes mostly moot. ”
Your description is incorrect. Table 1 plus discussion in Hartmann, 1992 (article ref. 27) indicate that –27.6 Wm^-2 is the globally averaged net cloud forcing. That makes the (+/-)10.1 % calculated in the Skeptic article Supporting Information (SI) equal to an rms global average cloud forcing error of the ten tested GCMs. Further, the global rms cloud percent errors in Tables 1 and 2 of Gates, et al., 1999 (article ref. 24), are ~2x and ~1.5x of that 10.1%, respectively.
Your quote above, “what is the error in cloudiness at a given latitude,” appears to be paraphrased from the discussion in the SI about the Phillips-Perron tests, and has nothing to do with the meaning of the global cloud forcing error in the article.
“2) He then takes this 10% number, and applies it to a linear system to show that the “true” physical uncertainty in model estimates grows by compounding 10% errors each year. There are two problems here: a) as Gavin mentioned, the climate system is not an “initial value problem” but rather more a “boundary value problem”… ”
It’s both. Collins, 2002 (article ref. 28) shows how very small initial value errors produce climate (not weather) projections that have zero fidelity after one year.
Collins’ test of the HadCM3 has only rarely been applied to other climate models in the published literature. Nevertheless, he has shown a way that climate models can be tellingly, if minimally, tested. That is, how well do they reproduce their own artificially generated climate, given small systematic changes in initial values? The HadCM3 failed, even though it was a perfect model of the target climate.
The central point, though, is that your objection is irrelevant. See below.
“…–more on that in a second, and b) the climate system is highly non-linear. ”
But it’s clear that projection of GHG forcing emerges in a linear way from climate models. This shows up in Gates, 1999, in AchutaRao, 2004 (Skeptic ref. 25; the citation year is in error), and in the SRES projections. The congruence of the simple linear forcing projection with the GCM outputs shows that none of the non-linear climate feedbacks appear in the excess GHG temperature trend lines of the GCMs. So long as that is true, there is no refuge for you in noting that climate itself is non-linear.
[snip]
“The significance of the non-linearity of the system, along with feedbacks, is that uncertainties in input estimates do not propagate as Frank claims.”
To be sure. And theory-bias? How does that propagate?
“Indeed, the cloud error is a random error, which further limits the propagation of that error in the actual predictions. Bias, or systematic, errors would lead to an increasing magnitude of uncertainty. But the errors in the GCMs are much more random than bias.”
SI Sections 4.2 and 4.4 tested that very point. The results were that cloud error did not behave like a random, but instead like a systematic bias. The correlation matrix in Table S3 is not consistent with random error. Recall that the projections I tested were already 10-averages. Any random error would already have been reduced by a factor of 3.2. And despite this reduction, the average ensemble rms error was still (+/-)10.1 %.
This average cloud error is a true error that, according to statistical tests, behaves like systematic error; like a theory bias. Theory bias error produces a consistent divergence of the projection from the correct physical trajectory. When consistent theory bias passes through stepwise calculations, the divergence is continuous and accumulates.
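The generic distinction here is easy to demonstrate numerically. The following toy calculation is not a claim about any particular GCM; it simply shows the textbook behaviour that independent random per-step errors of size σ accumulate as σ√N, while a constant per-step bias of the same size accumulates as σN:

```python
import numpy as np

rng = np.random.default_rng(2)

# Generic illustration (not a GCM): accumulate a per-step error of
# magnitude 0.1 over 100 steps, either as independent random draws
# or as a constant systematic bias.
n_steps, n_trials, err = 100, 10000, 0.1

random_walk = rng.normal(0, err, (n_trials, n_steps)).cumsum(axis=1)
bias_walk = np.full(n_steps, err).cumsum()

print(f"random after {n_steps} steps: spread ~ {random_walk[:, -1].std():.2f}"
      f"  (err*sqrt(N) = {err * np.sqrt(n_steps):.2f})")
print(f"bias   after {n_steps} steps: offset = {bias_walk[-1]:.2f}"
      f"  (err*N = {err * n_steps:.2f})")
```

Which of these regimes the cloud error actually falls into is of course the point in dispute; the sketch only illustrates why the random/systematic distinction matters so much for stepwise propagation.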
“Even more significantly, the climate system is a boundary-value problem more than an initial-value problem. ”
Speaking to initial value error vs. boundary value error is irrelevant to the cloud forcing error described in my article, which is neither one.
Consider, however, the meaning of Collins, 2002. The HadCM3 predicted a climate within a bounded state-space that nevertheless had zero fidelity with respect to the target climate.
[snip]
“To summarize my points:
“1) Frank asserts that there is a 10% error in the radiative forcing of the models, which is simply not true. ”
That’s wrong. An integrated 10.1 % difference in global modeled cloudiness relative to observed cloudiness is not an assertion. It’s a factual result. Similar GCM global cloud errors are reported in Gates, et al., 1999.
“At any given latitude there is a 10% uncertainty in the amount of energy incident, but the global average error is much smaller. ”
I calculated a global average cloud forcing error, not a per-latitude error. The global average error was (+/-)2.8 Wm^-2. You mentioned having looked at Figure S9. That Figure shows the CNRM model per-latitude error ranges between about +60% and -40%. Figure S11 shows the MPI model has a similar error-range. Degree-latitude model error can be much larger than, or smaller than, 10%. This implies, by the way, that the regional forcings calculated by the models must often be badly wrong, which may partially explain why regional climate forecasts are little better than guesses.
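The distinction between a signed global average and an rms average is the crux here. A short illustration with an entirely invented per-latitude error profile (nothing to do with any actual model output): positive and negative errors at different latitudes can cancel in the area-weighted mean while the rms error stays large.

```python
import numpy as np

# Hypothetical per-latitude cloud-error profile (invented numbers):
# large positive errors at some latitudes, negative at others.
lat = np.linspace(-87.5, 87.5, 36)            # band centres, degrees
err = 30 * np.sin(np.deg2rad(3 * lat))        # percent error, roughly +/-30%

weights = np.cos(np.deg2rad(lat))             # area weighting by latitude band
mean_err = np.average(err, weights=weights)   # signed errors cancel
rms_err = np.sqrt(np.average(err**2, weights=weights))

print(f"area-weighted mean error: {mean_err:+.1f}%")
print(f"area-weighted rms error:  {rms_err:.1f}%")
```

With this symmetric toy profile the signed global mean is essentially zero even though the typical (rms) error at any latitude is over 20% – which is why the choice of error metric matters to the argument.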
“2) Frank mis-characterizes the system as a linear initial value problem, instead of a non-linear boundary value problem. ”
If you read the article SI more closely, you’ll see that I characterize the error as theory-bias.
Specific to your line of argument (but not mine), Collins, 2002, mentioned above, shows the initial value problem is real and large at the level of climate. The modeling community has yet to repeat the perfect-model verification test with the rest of the GCMs used in the IPCC AR4. One can suppose these would be very revealing.
[snip]
“Let me also state here, Frank is a PhD chemist, not a climate scientist…”
Let me state here that my article is about error estimation and model reliability, and not about climate physics.
[snip]
“There’s also a reason why this article is in Skeptic instead of Nature or Science. It would not pass muster in a thorough peer-review because of these glaring shortcomings.”
The professionals listed in the acknowledgments reviewed my article. I submitted the manuscript to Skeptic because it has a diverse and intelligent readership that includes professionals from many disciplines. I’ve also seen how articles published in the more professional literature that are critical of AGW never find their way into the public sphere, and wanted to avoid that fate.
Dr. Shermer at Skeptic also sent the manuscript to two climate scientists for comment. I was required to respond in a satisfactory manner prior to a decision about acceptance.
Barton Paul Levenson says
Raven posts:
Raven is notorious on “Open Mind” for insisting that any negative effect of global warming a poster can think of or cite either isn’t true or is really beneficial; e.g., people who get their fresh water from melting glaciers will benefit from global warming because the ice will melt faster. The fact is, there is plenty of empirical evidence that warming is, net, a bad thing for humanity, and that is not disputed by people with a clue.
Barton Paul Levenson says
Frank’s article assumes that global warming goes away if you take out the models. It doesn’t. And you don’t need computer models to predict global warming. The first estimate of global warming from doubled carbon dioxide was made by Svante Arrhenius in 1896. He did not use a computer model. Nor did G.S. Callendar in 1938. More CO2 in the air means the ground has to warm, unless some countervailing process happens. We’ve looked for such a process for many decades now without finding one.
But in a larger sense Frank’s argument is ridiculous. The models work. They have predicted many things that have come true — the magnitude and direction of the warming, the stratosphere cooling while the troposphere warms, polar amplification, reduction in day-night temperature differences, and the magnitude and duration of the cooling from the eruption of Mount Pinatubo. In the face of the clear evidence that the models give reliable answers, any argument that they don’t is out of court from the beginning. Robert A. Heinlein famously said that “When you see a rainbow, you don’t stop to argue the laws of optics. There it is, in the sky.” Frank’s article amounts to a lengthy argument that something we can see happening isn’t happening. Logicians call this the “fallacy of subverted support.”
Ray Ladbury says
Barton, re: #310. Actually, I suspect the motive is more insidious–if they can banish physical models, then the nonphysical assumptions they must make to explain the current warming can be more easily hidden. It is rather like the anti-evolution types seeking to discredit radiological dating and the other tools that make the fossil record make sense: their goal is to make their nonsense look less absurd in comparison.
Dan says
From 308: ” I’ve also seen how articles published in the more professional literature that are critical of AGW never find their way into the public sphere,…”
Proof? No, articles published in the “more professional literature” are subject to peer review, one of the cornerstones of scientific advancement. Ironically, many non-peer reviewed articles appear in the grey literature, on Fox News or on op-ed pages, with little credibility and away from scientific review. They receive absurd publicity despite the fact that the scientific debate has long been settled. See recent Wall Street Journal op-eds for example. Or George Will’s (a political writer with no scientific background at all) pathetic commentary just this week which dredges up his old tired lines about scientists supposed clamoring about global cooling. Which was thoroughly debunked the last time he wrote about it.
Jim Galasyn says
Re Raven’s extraordinary claim in 304 that warming won’t be a Bad Thing for the world, I have to ask, what’s your opinion about ocean acidification? Because we’re forcing ocean chemistry to change 100x faster than at any time in at least the last 650,000 years.
It’s hard for me to imagine how that can be for the better.
Jim Eager says
Re Jim Galasyn @313:
I would think it would be rather hard for Raven to imagine how that could be for the better, too, so it will likely be ignored.
Geoff Beacon says
Gavin
The BBC reports today
http://news.bbc.co.uk/1/hi/sci/tech/7408808.stm
Is this a positive feedback? Is it incorporated in GCMs properly? Does it change the odds?
[Response: Yes. No. Maybe. (see previous discussions on the topic) – gavin]
Ray Ladbury says
Re: Ocean acidification–the bright side. I suspect it will be argued that the whole thing will become a giant fizzy drink–and think of the money we’ll save not having to buy soda at the beach! In the face of a crisis, the only thing that maintains optimism better than a little ignorance is a lot of ignorance.
Gerald Browning says
The Jablonowski and Williamson manuscript shows both the exponential growth of errors for extremely small perturbations and the divergence of solutions of very high resolution models in less than 10 days. This publication is by very reputable authors and the results speak for themselves, i.e. the models have serious problems even for the simple dynamical core tests in the manuscript.
The Lu et al. tests show that as the mesh sizes start to resolve mesoscale features (an important part of weather and climate), the fast exponential growth in the continuum solution starts to appear, and in the case of the hydrostatic equations the continuum system is ill posed. Both of these results were predicted from mathematical theory for continuum partial differential equations.
Adding forcing terms will not solve these inherent problems.
Jerry
[Response: But Jerry, a) what are the high resolution models diverging from? b) the climate models don’t resolve mesoscale features, and c) what added forcing terms are you talking about? – gavin]
Chris says
I think there’s a lot of confusion about two questions concerning models:
(i) what do models tell us that we don’t otherwise know?
(ii) how much of our concern about the consequences of continued large scale enhancement of atmospheric greenhouse gas concentrations is the result of inspection of model projections?
My feeling is that the answer to (i) is “not as much as one might think”, and the answer to (ii) is “rather little”.
These questions are linked, and it’s useful to keep the real source of our concerns [point (ii)] in view when addressing them. I would say these concerns result from the abundant scientific evidence that the Earth has a climate sensitivity of around 3ºC (plus or minus a bit) of warming per doubling of CO2. Thus one can make a reasonable projection (assuming confidence in the evidence) of the future global warming expected at equilibrium. We don’t need “models” to tell us this (although a simple projection of global warming according to a known climate sensitivity is a “model” in itself, of course). We don’t need a model to tell us the extent of ocean acidification as atmospheric CO2 concentrations rise, and our understanding of the consequences of marked and rapid ocean acidification doesn’t come from models. Our knowledge that the last interglacial, with a slightly higher than present temperature, was associated with a sea level around 3-4 metres higher than present is a real concern (but is not something we need a model to enlighten us about)…and so on….
So the dominant concerns are essentially independent of the complex climate models used to project future responses to enhanced greenhouse gas concentrations (Ray Ladbury has made this point several times!).
What do models add to our knowledge? I would say that they are (i) a systemization of our understanding, (ii) that they allow a continual reassessment of our understanding via model/real world comparisons, (iii) that (in the case of climate models) they allow us to project our understanding onto three dimensional spatial scales that would be extremely cumbersome without a model (e.g. we can calculate ever more fine-grained spatial distributions of excess heat in a warming world), and (iv) they allow a relatively straightforward means of testing various scenarios by rational adjustment of the parameters of a model.
That’s a simple-minded description of what I think modelling is about. I think it’s important to recognise that modelling has this sort of rationale (I’m an occasional modeller in an entirely different context – protein folding – and I’m sure that a climate modeller could make a more appropriate qualitative description of climate modelling).
In recognising what models are about, one can better understand:
(i) that assertions that inappropriate policy responses may be made on the basis of projections from incorrect models are misplaced (since policy responses are made in relation to our general understanding of the role of CO2/methane etc. as greenhouse gases, even if this understanding is “systematized” in models)
(ii) that while some consider that modelling is the “soft underbelly” of climate science, whose attack is likely to yield the greatest dividends in influencing (downplaying) public perception of the dangers of enhancement of the Earth’s greenhouse effect, in fact our understanding of the climate system is based on a robust body of independent scientific evidence and isn’t some “emergent” property of models…..and so while constructive criticism of modelling (so as to improve it!) is valuable, attempts to trash it are also misplaced.
Geoff Beacon says
Gavin
Thanks for the “no” to “Is [methane] incorporated in GCMs properly?”
The next question is about stochastics in GCMs in the presence of positive feedbacks. A simple example: Assume last year’s record arctic ice minimum had a random element to it. But this means the albedo changes and more radiation is absorbed, changing the future trend. Thus a random variation (with positive feedback) has caused a one-way change to the trend. Do GCM’s allow for such effects in general?
From what you say, they don’t in the case of the methane emissions mentioned earlier.
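To make the question concrete, here is a toy sketch (all numbers invented): an AR(1) ‘weather’ series in which any record warm excursion – standing in for a record ice minimum – permanently adds a small forcing, mimicking a one-way albedo shift.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: AR(1) variability, with and without a crude one-way
# positive feedback. Every new record excursion permanently adds a
# small forcing (an invented stand-in for an albedo change).
n_years, phi, sigma = 100, 0.6, 0.5
feedback = 0.05                     # warming locked in per record year

shocks = rng.normal(0, sigma, n_years)

def run(with_feedback):
    T, forcing, record, out = 0.0, 0.0, 1.0, []
    for eps in shocks:
        T = phi * T + eps + forcing
        if with_feedback and T > record:   # a new record excursion
            forcing += feedback            # one-way shift: never removed
            record = T
        out.append(T)
    return np.array(out)

base = run(False)
fed = run(True)
print(f"mean warming added by the one-way feedback: {(fed - base).mean():.2f}")
```

In this toy, a random excursion does change the subsequent trend, which is the effect I am asking about.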
Geoff
Phil. Felton says
Re #319
As I understood it, it’s not whether methane emissions are correctly incorporated but whether an additional growth term (i.e. permafrost) is correctly incorporated.
As I recall, the models of the arctic sea ice did capture those effects; for example, some of the model runs did show a sudden drop in ice area which, when it happened, was a ‘one way’ change.
The simulations had their sudden drops at slightly later dates than the actual one.
https://www.realclimate.org/images/bitz_fig2.jpg
Chris Colose says
The direct radiative forcing from CH4 doesn’t appear to have a whole lot of uncertainty since its radiative properties can be measured in the laboratory and its concentration is well known. Of more uncertainty are the sources and sinks behind the background concentration, the decrease of the current growth rate, and implications for future change. The “permafrost feedback” is not something that I see as troublesome anytime soon (more of a “slow” feedback), but I have no idea how GCM’s treat that.
Gerald Browning says
Gavin (#317)
>[Response: But Jerry, a) what are the high resolution models diverging from? b) the climate models don’t resolve mesoscale features, and c) what added forcing terms are you talking about? – gavin]
But Gavin, a) the models are diverging from each other in a matter of less than 10 days due to a small perturbation in the jet of 1 m/s (compared to 40 m/s), as expected from mathematical theory; b) the climate models certainly do not resolve any features less than 100 km in scale, and features of this size, e.g. mesoscale storms, fronts, hurricanes, etc., are very important to both the weather and climate. They are prevented from forming by the large unphysical dissipation used in the climate models. c) any added forcing terms (inaccurate parameterizations) will not solve the ill posedness problem; only unphysically large dissipation that prevents the correct cascade of vorticity to smaller scales can do that.
Jerry
Hank Roberts says
What kind of supercomputer did those people use? What model were they running, were they using one of those otherwise in use, or did they write their own? What’s puzzling is that of the models that are written up most often, while there are differences, they all seem quite credibly similar and none of them has had one of these runaway behaviors.
What’s so different about the one Dr. Browning is talking about? How can it go squirrely so quickly compared to the other climate models?
Gerald Browning says
Hank Roberts (#323),
> What kind of supercomputer did those people use? What model were they running, were they using one of those otherwise in use, or did they write their own? What’s puzzling is that of the models that are written up most often, while there are differences, they all seem quite credibly similar and none of them has had one of these runaway behaviors.
> What’s so different about the one Dr. Browning is talking about? How can it go squirrely so quickly compared to the other climate models?
For a summary read the manuscript’s abstract. Jablonowski and Williamson used 4 different models. One was the NASA/NCAR Finite Volume dynamical core, one was the NCAR spectral transform Eulerian core of CAM3, one was the NCAR Lagrangian core of CAM3, and one was the German Weather Service GME dynamical core. Note that the models were run using varying horizontal and vertical resolutions (convergence tests) for an analytic and realistic steady state zonal flow case and a small perturbation on that state. Although a dynamical core theoretically should be only a numerical approximation of the inviscid, unforced (adiabatic) hydrostatic system, the models all used either explicit or implicit forms of dissipation. One can choose just the Eulerian core to see how the vorticity cascades to smaller scales very rapidly as the mesh is refined and the dissipation reduced appropriately. This cascade cannot be reproduced by the models with larger dissipation coefficients.
As I have repeatedly stated, unphysically large dissipation can keep a model bounded, but not necessarily accurate. And because the dynamics are wrong, the forcings (inaccurate approximations of the physics) must be tuned to overcome the incorrect vorticity cascade.
Jerry
[Response: Your (repeated) statements do not prove this to be the case. Climate models do not tune the radiation or the clouds or the surface fluxes to fix the dynamics – it’s absurd to think that it would even be possible, let alone practical. Nonetheless, the large scale flows, their variability and characteristics compare well to the observations. You keep dodging the point – if the dynamics are nonsense why do they work at all? Why do the models have storm tracks and jet streams in the right place and eddy energy statistics and their seasonal cycle etc. etc. etc.? The only conclusion that one can draw is that the equations they are solving do have an affiliation with the true physics and that the dissipation at the smallest scales does not dominate the large scale circulation. It is not that these issues are irrelevant – indeed the tests proposed by Jablonowski et al are useful for making sure they make as little difference as possible – but your fundamentalist attitude is shared by none of these authors who you use to support your thesis. Instead of quoting Williamson continuously as having demonstrated the futility of modeling, how about finding a quote where he actually agrees with your conclusion? – gavin]
Lawrence McLean says
Gerald Browning:
I cannot understand why you think that initial conditions are so crucial to climate models. Do you realise that climate is NOT chaotic?
If climate were chaotic then it would be not unexpected to find: some tropical climates in Siberia, some tundra along the coast of Vietnam, a bit of Mediterranean climate on the coast of Norway, some tropical rainforest in Antarctica!
Hank Roberts says
> why do they work at all?
Yep, that’s what I’m wondering. If these folks found something that when varied only slightly causes the model to fall apart in ten days (and presumably it never recovers when run past ‘weather’ out to ‘climate’ time spans?) — seems all it diverges from is the observed world.
Barely-able-to-follow grade question: what’s the difference between “dissipation” and “horizontal diffusion” in models?
http://www.agu.org/pubs/crossref/2008/2008GL033483.shtml
“… Reducing the horizontal diffusion by a factor of 3 leads to an increase of the equilibrium climate sensitivity by 13%.”
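A toy calculation can illustrate the distinction Hank asks about: ordinary (∇²) diffusion damps a Fourier mode of wavenumber k at rate νk², while the hyperdiffusion (∇⁴ or higher) often used as “horizontal diffusion” in dynamical cores damps like νk⁴ and is therefore more scale-selective. The coefficients below are made up for illustration and are not taken from any model:

```python
import numpy as np

# Toy comparison of ordinary diffusion (nabla^2) vs hyperdiffusion (nabla^4)
# acting on Fourier modes. Coefficients are illustrative, not from any model;
# they are chosen so the two schemes damp equally at k = 50.
k = np.array([1.0, 50.0, 100.0])        # wavenumbers: large, medium, small scale
t = 1.0                                 # elapsed time (arbitrary units)

nu2 = 1e-3                              # ordinary diffusion coefficient
nu4 = 4e-7                              # hyperdiffusion coefficient

decay_diff  = np.exp(-nu2 * k**2 * t)   # amplitude surviving under nabla^2
decay_hyper = np.exp(-nu4 * k**4 * t)   # amplitude surviving under nabla^4

for ki, d2, d4 in zip(k, decay_diff, decay_hyper):
    print(f"k={ki:5.0f}  diffusion keeps {d2:.2e}  hyperdiffusion keeps {d4:.2e}")
```

With the coefficients matched at k = 50, hyperdiffusion barely touches the large scale (k = 1) but removes the small scale (k = 100) far more aggressively, which is why scale-selective operators are preferred when the goal is to drain the smallest resolved scales without damping the large-scale circulation.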
JCH says
“One of the best groups of fluid dynamicists in the world is arguably at Los Alamos National Laboratory. About 10 years ago, they were looking to redirect some of this brain power into climate modelling. After looking at the various elements of the climate models, they judged that there was little to do with the dynamical core of the atmospheric model (that it was quite mature and performing quite well), although there were issues with the parameterizations of convection and the atmospheric boundary layer. Hence they have focused their efforts on the ocean and sea ice models (and a new focus area is ice sheet modelling). Note, the LANL group collaborates closely with the NCAR group and NCAR is using their ocean (POP) and sea ice (CICE) models. Information on this group can be found at the LANL COSIM website http://climate.lanl.gov/
Now maybe Gerald is smarter than all the people at LANL and ECMWF, or even just has a plain good idea about something that is wrong or an idea for fixing it (I certainly can’t find evidence of this in his publication record, but I have an open mind). So far, all I’ve heard are innuendoes. …” – Judith Curry, comment 166 [snip – c’mon, Jerry]
Pat Cassen says
Gerald Browning –
As you can tell by the preceding posts, you’re still not making your point very effectively. Of course the vorticity cascade to fine scales will be improperly captured to a degree dependent on the coarseness of the grid. I suppose this is a big deal if one is trying to calculate the details of a Kelvin-Helmholtz instability or some phenomenon for which the smallest scales grow the fastest to dominate the flow characteristics. But there are plenty of situations in which this is not the case at all, or in which the large scale features are insensitive to the small scale, fast growing modes. Isn’t this why ‘artificial viscosity’ is used so successfully in so many applications? Although “unphysically large dissipation can keep a model bounded, but not necessarily accurate”, there are also many cases where unphysically large dissipation does not preclude the accurate representation of large scale flow features.
So it would be useful if you told us exactly what the practical consequences for climate models are due to their inability to model the leaves swirling in my yard or the surf conditions at Malibu.
As for the hydrostatic model being ill-posed, I confess that I wouldn’t know an ill-posed climate model from an ill-posed swim-suit model. I did look at one of your papers mentioned above to learn more, and was happy that my profs never assigned it.
Not all of us here are mathematicians or numerical analysts, so a little more careful explanation and less casual jargon and erudite references might help your case.
dhogaza says
JCH, thanks for that link to Climate Audit. Jerry, if your brand of denialism can’t even get love at Climate Audit, you’re in a world of hurt.
Gerald Browning says
Pat Cassen (#328),
Please cite a reference containing a mathematical proof of this assertion.
Otherwise it is just hand waving. The scales I am referring to are those of mesoscale storms, fronts, and hurricanes. I guess you don’t consider those important to climate.
Jerry
[Response: It remains to be seen. – gavin]
Gerald Browning says
Gavin (#324),
[edit – stay polite or don’t bother]
>Nonetheless, the large scale flows, their variability and characteristics compare well to the observations.
Over what time period? That is not the case for the Jablonowski and Williamson test that is a small perturbation of a large scale flow.
[Response: Over monthly, annual and interannual timescales. – gavin]
>You keep dodging the point – if the dynamics are nonsense why do they work at all?
No you keep dodging the point that the dynamics are not correct and the parameterizations are tuned to hide the fact.
[Response: No they aren’t. How pray should I fix the radiation code for instance to hide a dynamical instability? It’s a ridiculous notion. – gavin]
>Why do the models have storm tracks and jet streams in the right place and eddy energy statistics and their seasonal cycle etc. etc. etc.?
Are we talking about a weather model or a climate model?
[Response: Both. Higher resolution models do better, but both give dynamical solutions that are clearly realistic. – gavin]
Pat Frank (and others) have shown that there are major biases in the cloudiness (water cycle), i.e. the models are not accurate.
[Response: Bait and switch. There are biases – particularly in clouds (but also rainfall), but I am making no claim to perfection. You on the other hand are claiming they have no skill whatsoever. – gavin]
>The only conclusion that one can draw is that the equations they are solving do have an affiliation with the true physics and that the dissipation at the smallest scales does not dominate the large scale circulation.
Or that they have been tuned to overcome the improper cascade of vorticity.
By “smaller scales” I assume you mean that mesoscale storms, fronts, and hurricanes are not important to the climate or that there is no reverse cascade over longer periods of time. That is news to me and I would guess many other scientists.
[Response: Climate models don’t in general have mesoscale storms or hurricanes. Therefore those features are sub-gridscale. Nonetheless, the climatology of the models is realistic. Ipso facto they are not a first order control on climate. As far as I understand it, the inverse cascade to larger-scales occurs mainly from baroclinic instability, not mesoscale instability, and that is certainly what dominates climate models. – gavin]
> It is not that these issues are irrelevant – indeed the tests proposed by Jablonowski et al are useful for making sure they make as little difference as possible – but your fundamentalist attitude is shared by none of these authors who you use to support your thesis. Instead of quoting Williamson continuously as having demonstrated the futility of modeling, how about finding a quote where he actually agrees with your conclusion? – gavin]
The tests speak for themselves. That is why I cited the references.
Did you ask Dave? I worked with him for years [edit]
[Response: Did you? I generally find that people who work hard on trying to make models better haven’t generally come to the conclusion that they are wasting their time. – gavin]
Jerry
Gerald Browning says
Lawrence McLean (#325),
Ill posedness has nothing to do with the initial conditions. The unbounded exponential growth will be triggered by any error no matter how small.
And no one has mathematically proved that the climate is or is not chaotic.
Jerry
[Response: Weather and climate models clearly are – and since that is what we are addressing, it’s relevant. Any perturbed initial condition will diverge from the original path on the order of a few days – hence my question above. Divergence of different model simulations of baroclinic instability is an expected result – not a symptom of ill-posedness. – gavin]
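The distinction drawn in the response – individual trajectories being exquisitely sensitive to tiny perturbations while the attractor itself stays put – is the textbook behaviour of the Lorenz-63 system, and can be sketched in a few lines. This is an analogy for the weather/climate distinction, not a climate model:

```python
import numpy as np

# Lorenz-63: the classic toy system for sensitive dependence on initial
# conditions. An analogy for trajectory divergence, not a climate model.
def lorenz_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])   # simple forward-Euler step

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-8, 0.0, 0.0])   # tiny perturbation of the initial state

for _ in range(20000):               # integrate 40 time units
    a, b = lorenz_step(a), lorenz_step(b)

separation = np.linalg.norm(a - b)
print(separation)   # many orders of magnitude larger than the initial 1e-8
```

Both trajectories remain bounded on the same attractor (the “climate” of the toy system is stable) even though their pointwise separation grows until it saturates at the attractor’s size.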
David B. Benson says
Gavin — Thank you very much for your replies to Gerald Browning.
I’m learning from them.
Gerald Browning says
Gavin (332),
Please cite a reference that contains a rigorous mathematical proof that the climate is chaotic. As usual you make statements without the scientific facts to back them up. I suggest that the readers review Tom Vonk’s very clear exposition on Climate Audit, in response to this exact claim, on the thread called Koutsoyiannis 2008 Presentation (comment #174) if you want to know the scientific facts.
Jerry
[Response: No such proof exists, I never claimed it did, and nor do I suspect it is. However, NWP models and climate models are deterministic and individual realisations have a strong sensitivity to initial conditions. Empirically they show all the signs of being chaotic in the technical sense though I doubt it could be proved formally. This is a different statement to claiming the climate (the attractor) itself is chaotic – the climate in the models is stable, and it’s certainly not obvious what the real climate is. (NB Your definition of ‘clear’ is ‘clearly’ different to mine). – gavin]
Gerald Browning says
Gavin (#332),
So reduce the dissipation and mesh size to see what happens if you are so certain.
Jerry
[Response: Go to Jablonowski’s workshop and see. – gavin]
Gerald Browning says
Gavin (#330)
So you don’t know the answer for the need to resolve the mesoscale storms, fronts and hurricanes, but then you state that a climate model is accurate without resolving them. Is there a contradiction here? Wouldn’t a good scientist determine the facts before making such a statement?
Jerry
[Response: But the fact is that climate models do work – and I’ve given a number of examples. Thus since those models did not have mesoscale circulations, such circulations are clearly not necessary to getting a reasonable answer. I’m perfectly happy to concede that the answer might be better if they were included – but that remains an open research question. – gavin]
Chris Colose says
I still want to know how a hurricane or a front is going to stop the general physical reality that a body which takes in more heat via radiation than it releases will warm.
Gerald Browning says
Gavin (#231),
> [Response: Climate models don’t in general have mesoscale storms or hurricanes. Therefore those features are sub-gridscale.
And thus all of the dynamics and physics from these components of climate are not included nor accurately modeled. Fronts are one of the most important controllers of weather and climate. You cannot justify neglecting them in a climate model, yet claim a climate model accurately describes the climate.
>Nonetheless, the climatology of the models is realistic.
Realistic and accurate are two very different terms. You are stating that fronts are not important to climate, is that correct?
>Ipso facto they are not a first order control on climate.
Please cite a mathematical proof of this affirmation.
> As far as I understand it, the inverse cascade to larger-scales occurs mainly from baroclinic instability, not mesoscale instability, and that is certainly what dominates climate models. – gavin]
If this assertion is correct (please cite a mathematical reference) the jet cannot be accurately approximated by a 100 km mesh across its width. Therefore the model does not accurately model the jet that you claim is important to the inverse cascade. Now you have a scientific contradiction based on your own statements.
Jerry
[Response: Many things are not included in climate models, who ever claimed otherwise? Models are not complete and won’t be for many years, if ever. That is irrelevant to deciding whether they are useful now. You appear to be making the statement that it is necessary to have every small scale feature included before you can say anything. That may be your opinion, but the history of modelling shows that it is not the case. Fronts occur in the models, but obviously they will not be as sharply defined – similarly the Gulf Stream is too wide, and the Agulhas retroflection in the South Atlantic is probably absent. These poorly resolved features and others like them are key deficiencies. But poleward heat transport at about the right rate is still captured and the sensitivity of many issues – like the sensitivity of the storm tracks to volcanic forcing – match observations. This discussion is not about whether models are either perfect or useless, it is whether given their imperfections they are still useful. Continuing to insist that models are not perfect when I am in violent agreement with you is just a waste of everyone’s time. (PS. If A & !C => B, then C is not necessary for B. It’s not very hard to follow). (PPS, try this seminar). – gavin]
Bryan S says
“But the fact is that climate models do work”
Gavin, I have been studying AchutaRao et al. (2007), figure 1. I have also examined upper ocean heat content through this period in the GFDL CM2.1 model, and noticed a fairly large negative anomaly (in the model) presumably associated with Pinatubo. Why don’t the observations in upper ocean heat show this cooling that is seen in the models? This seems to challenge the assertion of model skill. If you say the ocean is too noisy to measure, that is fine, but we are still left with the observation of apparently increasing upper ocean heat content following the Pinatubo eruption.
[Response: If you look at the latest reanalyses of the OHC data that deal with the XBT and Argo problems (Wijffels et al, in press for instance), I think you’ll see that there is a decrease in the OHC after each big eruption. – gavin]
Pat Frank says
Re. #310 — B. P. Levenson wrote, “Frank’s article assumes that global warming goes away if you take out the models.”
It assumes no such thing.
“But in a larger sense Frank’s argument is ridiculous.”
You’ll have to demonstrate that on the merits. Unsupported assertions won’t do it. GCMs may have a million floated variables. John von Neumann reportedly said that he could model an elephant with 4 adjustable parameters and with 5 could wave the trunk. With a million variables, GCMs can be tuned to match any past climate, and that doesn’t mean anything about predicting future climate.
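The von Neumann quip is really a point about overfitting, which is easy to demonstrate in miniature. The toy polynomial fit below illustrates only the general principle; whether it applies to GCMs is exactly what is disputed in this thread:

```python
import numpy as np

# Toy overfitting demo: fit 6 noisy points exactly with a degree-5 polynomial
# (6 free parameters for 6 points), then check its out-of-sample prediction.
rng = np.random.default_rng(0)
x = np.arange(6, dtype=float)
y = 2.0 * x + rng.normal(0.0, 1.0, size=6)   # truth: y = 2x, plus noise

coeffs = np.polyfit(x, y, deg=5)             # as many parameters as data
fit_in = np.polyval(coeffs, x)               # "hindcast" of the fitted points

x_future = 10.0
pred_overfit = np.polyval(coeffs, x_future)  # extrapolated "forecast"
truth_future = 2.0 * x_future

print(np.max(np.abs(fit_in - y)))            # ~0: the past is matched exactly
print(pred_overfit, truth_future)            # extrapolation typically far off
```

The degree-5 polynomial reproduces its six “past” points essentially exactly, yet its extrapolation to x = 10 misses the underlying line badly: matching the past with many free parameters, by itself, says little about predicting the future.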
“Frank’s article amounts to a lengthy argument that something we can see happening isn’t happening.”
The article says nothing about climate as such. It’s about error assessment and model reliability; nothing more.
[Response: Actually not even that. – gavin]
#311 — Ray Ladbury, your supposed “insidious motives” are products of your mind, not mine.
#312 — Dan, for proof see article references 13 and 40.
[edit – random contrarian noise deleted]
Martin Vermeer says
#333 David B. Benson:
Good for you… I am not. Sainthood was never in the cards for me ;-)
Barton Paul Levenson says
Gerald Browning writes:
Sure he can, if the climate model is getting the right results. This is something your arguments fail to address again and again — that the models give the right answers. That’s what “accurate” MEANS.
Geoff Beacon says
Gavin
Methane is beginning to worry me even more, despite your soothing words. The BBC report I referenced earlier says.
http://news.bbc.co.uk/1/hi/sci/tech/7408808.stm
This gives the impression that methane has 25 times the effect of CO2 when a release into the atmosphere happens, but that it fairly rapidly disappears.
But the Environmental Change Institute, University of Oxford say this
http://www.eci.ox.ac.uk/research/energy/downloads/methaneuk/chapter02.pdf
In the previous RealClimate discussion it says
https://www.realclimate.org/index.php/archives/2005/12/methane-hydrates-and-global-warming/
This seems to assume that the lifetime of methane is constant independent of its concentration. The Oxford paper seems to have a different assumption: The rate of extraction of methane from the atmosphere is at a constant rate determined by the size of the sinks. In a situation of rising methane levels and therefore saturated sinks, new methane emissions (or equal amounts of the total atmospheric methane) stay in the atmosphere until the levels fall below the capacity of the sinks.
Taking figures from Wikipedia I calculate the effect of methane compared to CO2 (by weight) as:
0.48 (W m-2, CH4) / 1.46 (W m-2, CO2) × 384 (ppm CO2) / 1,745 (ppb CH4) × 44 (CO2) / 16 (CH4)
I hope I have got these correct. But last night I calculated methane has 180 times the effect of CO2, under the current circumstances.
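For what it’s worth, the arithmetic in the expression above can be checked directly, using the same Wikipedia-sourced figures quoted in the comment (which are inputs of this check, not authoritative values). It comes out near 200 rather than 180, so one of the inputs in the original calculation presumably differed:

```python
# Check the per-kilogram forcing ratio quoted above, using the same figures
# (taken from Wikipedia in the original comment; treat them as illustrative).
f_ch4 = 0.48      # W m-2 attributed to CH4
f_co2 = 1.46      # W m-2 attributed to CO2
c_co2 = 384.0     # CO2 concentration, ppm
c_ch4 = 1745.0    # CH4 concentration, ppb
m_co2 = 44.0      # molar mass of CO2, g/mol
m_ch4 = 16.0      # molar mass of CH4, g/mol

# the factor 1000 converts the ppm/ppb mismatch in the concentration ratio
ratio = (f_ch4 / f_co2) * (c_co2 * 1000.0 / c_ch4) * (m_co2 / m_ch4)
print(round(ratio))   # ~199 per unit mass, with these inputs
```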
[Response: My words were not intended to soothe you, merely to inform. Increases in CH4 do end up increasing the lifetime, and really large increases could end up extending the lifetime significantly (see Schmidt and Shindell, 2003 for some simple calculations). – gavin]
Dan says
re: 340. Referencing Lindzen as “proof”? Oh puh-lease!
Ray Ladbury says
Pat Frank, I’m willing to withdraw my speculations about insidious motives (and even apologize) if you will simply admit that the case for anthropogenic causation of the current warming epoch is not at all contingent on the results of climate models and that the evidence is quite strong. Indeed, if it’s all about error analysis, then the basic physics should not be at issue, correct? After all, nobody has even come close to constructing a model (well or ill posed) without including anthropogenic warming. Given the lack of productivity along these lines (as represented by the lack of publishing activity), I’d call the question of anthropogenic causation “settled”.
What I object to about your approach is that you seem to be trying to negate the known physics by questioning its implementation. If you can clarify that this is not your intent, that would go a long way toward clarifying your motives.
Pat Cassen says
Gerald Browning, #330, referring to #328:
“Please site a reference containing a mathematical proof of this assertion.”
Which assertion are you referring to? That there are flows in which growing perturbations never dominate the flow?
Jerry, take a break from the math. Sit down for a while on the bank of a deep mountain stream. Watch the eddies grow, break up, and get swept along. It’ll clear your head.
JCH says
It appears to me that David L. Williamson is a contributing author to the IPCC.
If you look at his working group’s (Climate & Global Dynamics Division: Atmospheric Modeling and Predictability) site on NCAR, there are several FAQs. One concerns the shortcomings of climate models:
They have been extensively tested and evaluated using observations. They are exceedingly useful tools for carrying out numerical climate experiments, but they are not perfect, …
I can’t figure this out. It appears to me that the group, Atmospheric Modeling and Predictability, uses a fairly wide array of climate models in their research, and interacts with a wide array of climate modelers around the world – on an almost perpetual basis. To be honest, I can’t see that they do much of anything else. What am I missing here?
Marcus says
Geoff Beacon #343: The major methane sink is reaction with the hydroxyl radical. Therefore, the sink is roughly proportional to the concentration of methane. The hydroxyl radical concentration can drop in response to increased emissions of methane, increasing methane lifetime, and therefore a 10% increase in methane emissions is likely to lead to a larger than 10% increase in concentration, but that’s a 2nd order effect.
The “methane factor of 25 larger than CO2” is actually taking into account some lifetime effects already: it is the “GWP” of methane, which is the integral of a radiative forcing from a pulse of methane over 100 years, divided by that of CO2. This therefore deals with an additional ton of methane adding 60 times the forcing of an additional ton of CO2, but a lifetime on the order of a decade rather than a century.
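Marcus’s description of the GWP – the time-integrated forcing of a one-tonne pulse relative to that of CO2 – can be sketched with a one-line integral. The numbers below (a 60x instantaneous per-tonne forcing ratio, a 12-year CH4 lifetime, and a single 100-year effective decay time for CO2) are deliberately crude stand-ins; real assessments use a multi-exponential CO2 impulse response and include indirect CH4 effects, which is why they arrive at ~25:

```python
import numpy as np

# Sketch of a Global Warming Potential calculation with toy decay models.
# The single 100-year e-folding time for CO2 is a crude stand-in for the
# multi-exponential impulse response used in real assessments.
horizon = 100.0        # integration horizon, years
ratio_instant = 60.0   # per-tonne instantaneous forcing, CH4 relative to CO2
tau_ch4 = 12.0         # CH4 e-folding lifetime, years
tau_co2 = 100.0        # illustrative single-lifetime stand-in for CO2

def agwp(tau):
    # integral of exp(-t/tau) from t = 0 to the time horizon
    return tau * (1.0 - np.exp(-horizon / tau))

gwp = ratio_instant * agwp(tau_ch4) / agwp(tau_co2)
print(gwp)   # ~11 with these toy numbers; the assessed value is ~25
```

The gap between this toy answer and the assessed ~25 reflects both the oversimplified CO2 response and the indirect CH4 effects (ozone, stratospheric water vapour) omitted here; the point of the sketch is only the structure of the calculation, not its value.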
Finally, in the past decade CH4 actually came very close to stabilization, though recent results in the past year suggest a renewed increase (not yet known whether it is temporary or not).
Geoff Beacon says
Gavin
Thanks. I note two things about the Schmidt and Shindell paper.
1. “The stratospheric sink is a photolytic reaction and is presumed proportional to the concentration.” I was under the impression that the photolytic reaction that creates the sink (of OH- ?) did not involve methane and so is not proportional to methane concentration. Is their assumption of proportionality (with methane concentration) correct?
[Response: In the stratosphere, the rate limiting step is not the availability of OH-, but the availability of CH4. Which is not the case in the troposphere. – gavin]
2. The measure of the residency time of methane in the atmosphere is relative to the total methane in the atmosphere. This means that an extra tonne (or gigatonne) of methane released not only stays in the atmosphere longer but causes the existing methane already in the atmosphere to stay longer. Is this correct?
[Response: Yes, if you increase sources by 10%, then the concentration rises by ~12%, implying a slight increase in residence time of just under 2%. This is because net OH levels decrease. – gavin]
3. If the sinks were to be of a fixed size (and not dependent on methane concentration), we could describe an equivalent (but counterfactual) model of climate in which the “new methane” stayed in the atmosphere until all the “old methane” had disappeared into the sinks – a long time. If I have done my earlier sums properly, this would set methane emissions (i.e. “new methane”) much closer to 180 times the global warming effect of carbon dioxide emissions than the often quoted 23 times. Do you agree?
[Response: No. All methane is equal. – gavin]
Gavin, my purpose in asking these questions is to try and inform the people that do carbon foot-printing in order to influence policy makers. I think it important that reasonable numbers are used.
Would you make a judgement on this? If not who will?
CH4Emissions = x * CO2Emissions. What is x?
[Response: x is between 23 and 60 depending on timescale. Look up Global Warming Potential. – gavin]
tamino says
Gerald Browning utterly fails to realize, or simply refuses to acknowledge, that models which give useful answers and make accurate predictions, although not correct in all details, still give useful answers and make accurate predictions.
Instead of repeated, rude comments insulting the entire field of climate modelling, I suggest Browning should face the truth: they work. If he’s anywhere near as smart as he thinks he is, perhaps he should apply his intellect to investigating why, rather than denying what discomforts his small-minded viewpoint.