Over the last couple of months there has been much blog-viating about what the models used in the IPCC 4th Assessment Report (AR4) do and do not predict about natural variability in the presence of a long-term greenhouse gas related trend. Unfortunately, much of the discussion has been based on graphics, energy-balance models and descriptions of what the forced component is, rather than the full ensemble from the coupled models. That has led to some rather excitable but ill-informed buzz about very short time scale tendencies. We have already discussed how short term analysis of the data can be misleading, and we have previously commented on the uncertainty in the ensemble mean being confused with the envelope of possible trajectories (here). The actual model outputs have been available for a long time, and it is somewhat surprising that no-one has looked specifically at them given the attention the subject has garnered. So in this post we will examine directly what the individual model simulations actually show.
First, what does the spread of simulations look like? The following figure plots the global mean temperature anomaly for 55 individual realizations of the 20th Century and their continuation for the 21st Century following the SRES A1B scenario. For our purposes this scenario is close enough to the actual forcings over recent years for it to be a valid approximation to the simulations up to the present and probable future. The equal weighted ensemble mean is plotted on top. This isn’t quite what IPCC plots (since they average over single model ensembles before averaging across models) but in this case the difference is minor.
It should be clear from the above plot that the long term trend (the global warming signal) is robust, but it is equally obvious that the short term behaviour of any individual realisation is not. This is the impact of the uncorrelated stochastic variability (weather!) that is associated with interannual and interdecadal modes in the models – these can be associated with tropical Pacific variability or fluctuations in the ocean circulation for instance. Different models have different magnitudes of this variability, spanning the range that can be inferred from the observations, and in a more sophisticated analysis you would want to adjust for that. For this post however, it suffices to just use them ‘as is’.
We can characterise the variability very easily by looking at the range of regressions (linear least squares) over various time segments and plotting the distribution. This figure shows the results for the period 2000 to 2007 and for 1995 to 2014 (inclusive) along with a Gaussian fit to the distributions. These two periods were chosen since they correspond with some previous analyses. The mean trend (and mode) in both cases is around 0.2ºC/decade (as has been widely discussed) and there is no significant difference between the trends over the two periods. There is of course a big difference in the standard deviation – which depends strongly on the length of the segment.
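For anyone who wants to reproduce this kind of calculation from the archived runs, here is a minimal sketch in Python (assuming each realisation has already been reduced to an annual global mean series; the file name and array shape are placeholders rather than the archive's actual layout):

```python
import numpy as np

def segment_trends(anoms, years, start, end):
    """OLS trends (degC/decade) over [start, end] inclusive, one per realisation.

    anoms : array of shape (n_runs, n_years), annual global mean anomalies
    years : 1-D array of calendar years matching the second axis
    """
    mask = (years >= start) & (years <= end)
    t = years[mask]
    trends = []
    for run in anoms:
        slope = np.polyfit(t, run[mask], 1)[0]  # degC per year
        trends.append(10.0 * slope)             # convert to degC per decade
    return np.array(trends)

# Example usage with a placeholder data file (one row per realisation):
# anoms = np.loadtxt("a1b_annual_gmst.txt")   # shape (55, n_years)
# years = np.arange(1900, 2100)
# tr = segment_trends(anoms, years, 2000, 2007)
# print(tr.mean(), tr.std(), tr.min(), tr.max(), (tr < 0).sum())
```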
Over the short 8 year period, the regressions range from -0.23ºC/dec to 0.61ºC/dec. Note that this is over a period with no volcanoes, and so the variation is predominantly internal (some models have solar cycle variability included which will make a small difference). The model with the largest trend has a range of -0.21 to 0.61ºC/dec in 4 different realisations, confirming the role of internal variability. 9 simulations out of 55 have negative trends over the period.
Over the longer period, the distribution becomes tighter, and the range is reduced to -0.04 to 0.42ºC/dec. Note that even for a 20 year period, there is one realisation that has a negative trend. For that model, the 5 different realisations give a range of trends of -0.04 to 0.19ºC/dec.
Therefore:
- Claims that GCMs project monotonic rises in temperature with increasing greenhouse gases are not valid. Natural variability does not disappear because there is a long term trend. The ensemble mean is monotonically increasing in the absence of large volcanoes, but this is the forced component of climate change, not a single realisation or anything that could happen in the real world.
- Claims that a negative observed trend over the last 8 years would be inconsistent with the models cannot be supported. Similar claims that the IPCC projection of about 0.2ºC/dec over the next few decades would be falsified with such an observation are equally bogus.
- Over a twenty year period, you would be on stronger ground in arguing that a negative trend would be outside the 95% confidence limits of the expected trend (the one model run in the above ensemble suggests that would only happen ~2% of the time).
A related question that comes up is how often we should expect a global mean temperature record to be broken. This too is a function of the natural variability (the smaller it is, the sooner you expect a new record). We can examine the individual model runs to look at the distribution. There is one wrinkle here though which relates to the uncertainty in the observations. For instance, while the GISTEMP series has 2005 being slightly warmer than 1998, that is not the case in the HadCRU data. So what we are really interested in is the waiting time to the next unambiguous record i.e. a record that is at least 0.1ºC warmer than the previous one (so that it would be clear in all observational datasets). That is obviously going to take a longer time.
This figure shows the cumulative distribution of waiting times for new records in the models starting from 1990 and going to 2030. The curves should be read as the percentage of new records that you would see if you waited X years. The two curves are for a new record of any size (black) and for an unambiguous record (> 0.1ºC above the previous, red). The main result is that 95% of the time, a new record will be seen within 8 years, but that for an unambiguous record, you need to wait for 18 years to have a similar confidence. As I mentioned above, this result is dependent on the magnitude of natural variability which varies over the different models. Thus the real world expectation would not be exactly what is seen here, but this is probably reasonably indicative.
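For those who want to repeat this with their own series, here is a minimal sketch of the record-counting logic (the 0.1ºC margin follows the ‘unambiguous record’ definition above; the data loading is left as a placeholder):

```python
import numpy as np

def waiting_times(series, margin=0.0):
    """Waiting times (in years) between successive records in an annual series,
    where a 'record' must exceed the previous record by at least `margin`."""
    waits = []
    record = series[0]
    last_record_index = 0
    for i, val in enumerate(series[1:], start=1):
        if val > record + margin:
            waits.append(i - last_record_index)
            record = val
            last_record_index = i
    return waits

# run = np.loadtxt("one_realisation_annual_gmst.txt")   # placeholder
# any_record = waiting_times(run)                 # any new record
# clear_record = waiting_times(run, margin=0.1)   # unambiguous record (> 0.1 degC)
```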
We can also look at how the Keenlyside et al results compare to the natural variability in the standard (un-initialised) simulations. In their experiments, the decadal means of the periods 2001-2010 and 2006-2015 are cooler than 1995-2004 (using the closest approximation to their results with only annual data). In the IPCC runs, this only happens in one simulation, and then only for the first decadal mean, not the second. This implies that there may be more going on than just tapping into the internal variability in their model. We can specifically look at the same model in the un-initialised runs. There, the differences between first decadal means span the range 0.09 to 0.19ºC – significantly above zero. For the second period, the range is 0.16 to 0.32ºC. One could speculate that there is actually a cooling that is implicit to their initialisation process itself. It would be instructive to try some similar ‘perfect model’ experiments (where you try and replicate another model run rather than the real world) to investigate this further though.
Finally, I would just like to emphasize that for many of these examples, claims have circulated about the spectrum of the IPCC model responses without anyone actually looking at what those responses are. Given that the archive of these models exists and is publicly available, there is no longer any excuse for this. Therefore, if you want to make a claim about the IPCC model results, download them first!
Much thanks to Sonya Miller for producing these means from the IPCC archive.
JCH says
On Jared’s statement that El Nino has dominated most of the 2000s, this website has the years categorized by strength:
http://ggweather.com/enso/oni.htm
Click on this link to see 2007 and 2008 data:
http://www.cpc.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml
Given that the kickoff years of 1998 and 1999 are classified as strong La Nina, and 2007 looks to be in the cold camp, and 2008 is cold so far, is it really correct to classify the “pause” period as being dominated by El Nino?
tamino says
Let’s use Jared’s methodology to determine the present temperature trend.
Using GISS data, from January through April of this year the trend rate is warming at 1.416 deg.C/yr. Using HadCRUT3v data, from January through March of this year the trend rate is warming at 2.112 deg.C/yr. Conclusion: over the next century we can expect at least 140 deg.C warming.
Hank Roberts says
Jared, have you taken a statistics course yet? People aren’t “talking down” — they are pointing out that your claims don’t show signs you’ve studied statistics. People are trying to talk to you across that gap. You’re repeating what you believe; and what you say seems to indicate you don’t understand how statistics helps understand trends. Give us some idea where to start, in explaining this.
stevenmosher says
http://www.telegraph.co.uk/news/newstopics/theroyalfamily/1961719/Prince-Charles-Eighteen-months-to-stop-climate-change-disaster.html
here’s a good one. There is no shortage of verbiage when someone talks about 7 year trends.
RC, have a whack at the prince.
you won’t.
[Response: There’s certainly no shortage of biased headline writers with made up quotes and people who are willing to jump to conclusions without checking their facts. Perhaps you’d like to show you aren’t one of those? (original Radio 4 interview). – gavin]
Chris says
Bryan S: I’ve just read Carl Wunsch’s Royal Soc article of which you stated:
I think he gives an abbreviated layman’s version of his sentiments here: http://royalsociety.org/page.asp?id=4688&tip=1 (there is caution for everyone here)
I would say that the “caution here” relates to summarising (for the layman presumably), in such a way that a relatively uninformed reader is almost certain to be misled. I’m sure that Professor Wunsch is being entirely genuine in attempting to summarise his viewpoint with simple examples and analogies. However his examples and analogies are problematic.
Wunsch’s argument is (paraphrasing; and please correct me if you feel I’m misrepresenting his article):
1. He asks the pertinent question “to what extent can the climate change all by itself?”
2. He says the answer is “a very great deal.”
3. The example he elaborates on is the ice age cycles. Clearly the ice age cycles resulted in dramatic changes in climate. No one would disagree with that.
4. He then discusses “the counter-intuitive (for most people)” behaviour of the consequences of random fluctuation in systems with memory.
5. He uses an analogy of coin tossing (where’s the “memory” btw?). He reminds us that if one tosses a coin 2 million times “the probability of exactly 1 million heads and 1 million tails is very small”. He uses this as a simple example that can be applied to ocean heat oscillations (flip the coin…if it’s heads the ocean is heated..if it’s tails it’s cooled…and so on….). Clearly, taking the coin-flipping as an analogy, the probability of the ocean being exactly at its equilibrium temperature is small.
6. He finishes his article with a reference to the “modern climate problem”. He’s already shown (point 3 above) that the climate can change rather dramatically without human intervention, and (point 4 and 5 above) that internal fluctuations means that the “natural state” is a fluctuating one that is never truly at equilibrium. He describes the fact that our direct knowledge of these fluctuations of the natural state is somewhat limited by a short instrumental record.
7. He points out that scientists use numerical models of the climate system to calculate natural variability with and without “human effects”, but that since our observational data base is limited there is some uncertainty about the realism of the models.
8. He concludes that it may be difficult to separate human induced change from natural change with the confidence that we would all seek.. that most scientists consider it highly probable that human-induced change is already strongly present, but not proven….and that public policy has to be made on the basis of probabilities, not firm proof.
Points 6 and 8 are not really controversial (‘though, re #8, we’d probably consider that we’re past the point at which we can be confident in distinguishing man-made from natural variations; re #7, one might point out that our understanding of “natural variability” doesn’t come largely from models but mostly from direct observation and understanding of the physics of various forcings, and paleoproxy data especially for the last couple of millennia), but the less well informed reader is likely to be misled in considering the relevance of these points to our present situation, having been primed by some inappropriate examples/analogies in points 1-5. There are two problems:
(a) The issue at hand relates to climate changes resulting from internal fluctuations of the climate system (coin tossing and so on). So the ice age cycles is an inappropriate example. Ice age cycles are not examples of “the climate chang(ing) all by itself”. We know pretty well why the ice ages happened; they’re the result of external forcing (cyclic variations in solar forcing due to orbital variations, with greenhouse and albedo feedbacks and so on). The ice age cycles are neither internal fluctuations in the climate system, nor are they good examples of the natural variations relevant to our consideration of the effects of man-made enhancement of the greenhouse effect.
(b) The coin-tossing analogy is a poor one in the context, and while it’s obvious that a system with internal fluctuations is never totally at equilibrium, the pertinent question is the amplitude of fluctuations (and their timescales) around that equilibrium. Wunsch doesn’t address this at all, although the uninformed reader has just been given the example of ice age cycles as an indication of the potential variation in the climate system, and may well be gobsmacked at the potential for extreme temperature variations resulting from internal fluctuation! The relevant point, which is not addressed, is whether internal fluctuations can give us significant persistent trends on the decadal timescale. Here the coin-tossing analogy isn’t helpful. For example, if we were to toss a coin 2 million times in 1000 separate sessions, and graph the number of heads, we would expect to obtain a Gaussian distribution centered around 1 million. We’d be very surprised if in the 1000 sessions there was a rising trend for the first 300 sessions, followed by a falling trend in the next 700 and so on…
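To make the contrast concrete, here is a small simulation sketch (scaled down from the 2-million-toss example for speed, and only illustrative – not from Wunsch’s article): independent sessions of coin tossing scatter around their expected value with no persistent session-to-session trend, whereas a process with “memory” – here simply the running sum of the same fair flips – wanders away from its starting point for long stretches.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 independent sessions of 2000 tosses each: the heads counts are i.i.d.
# and roughly Gaussian around 1000, with no persistent run-to-run trend.
sessions = rng.integers(0, 2, size=(1000, 2000)).sum(axis=1)
print(sessions.mean(), sessions.std())   # ~1000 and ~sqrt(2000)/2 ~ 22

# A process with "memory": the running sum of +/-1 flips (a random walk)
# can drift far from zero and stay on one side for a very long time.
walk = np.cumsum(rng.choice([-1, 1], size=100_000))
print(walk.min(), walk.max())
```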
Wunsch’s final statement is a truism. “Public policy has to be made on the basis of probabilities, not firm proof”. Of course…and so the question relates to the strength of the evidence upon which we assess probabilities. Wunsch doesn’t address that.
Tapio Schneider says
Regarding the articles in Skeptic Magazine and claims made by Browning
(e.g., #91, #152) and others, some (late) comments to clarify the
history and some of the science.
In September 2007, Michael Shermer, the publisher of the magazine,
sent me Frank’s submitted article (an earlier but essentially similar version
thereof), asking me what I think of it. This was not a request to
review it but an informal email among acquaintances. I pointed Shermer
to some of the most glaring errors in Frank’s article. Some of these
have come up in posts and comments here. For example, Frank confuses
predictability of the first kind (Lorenz 1975), which is concerned
with how uncertainties in the initial state of a system amplify and
affect predictions of later states, with predictability of the second
kind, which is concerned with how (possibly chaotic) internal dynamics
affect predictions of the response of a system to changing boundary
conditions. As discussed by Lorenz, even after predictability of the
first kind is lost (“weather” predictability), changes in statistics
of a system in response to changes in boundary conditions may be
predictable (“climate” predictability). Frank cites Matthew Collins’s
(2002) article on limits to predictability of the first kind to infer
that climate prediction of the second kind (e.g., predicting the mean
climate response to changing GHG concentrations) is impossible beyond
timescales of order a year; he does not cite articles by the same
Matthew Collins and others on climate prediction of the second kind,
which contradict Frank’s claims and show that statistics of the
response of the climate system to changing boundary conditions can be
predicted (e.g., Collins and Allen 2002). After pointing this and
other errors and distortions out to Shermer–all of which are common
conceptual errors of the sort repeatedly addressed on this site, with
the presentation in Frank’s article just dressed up with numbers
etc. to look more “scientific”–I had thought this was the last I had
seen of this article.
Independently of Michael Shermer asking my opinion of Frank’s article,
I had agreed to write an overview article of the scientific basis of
anthropogenic global warming for Skeptic Magazine. I did not know that
Frank’s article would be published in the magazine along with mine, so
my article certainly was not meant to be a rebuttal of his
(#91). Indeed, I was surprised by the decision to publish it given the
numerous errors.
Regarding some of Browning’s other points here, the ill-posedness of
the primitive equations means that unbalanced initial conditions
excite unphysical internal waves on the grid scale of hydrostatic
numerical models, in lieu of smaller-scale waves that such models
cannot represent. This again limits predictability of the first kind
(weather forecasts on mesoscales with such models) but not necessarily
of the second kind. Browning is right that the models require
artificial dissipation at their smallest resolved scales. However,
from this it does not follow that the total dissipation in climate
models is “unphysically large” (#152) (the dissipation is principally
controlled by large-scale dynamics that can be resolved in climate
models). And it does not follow that the “physical forcings are
necessarily wrong” (#152). Moreover, the “forcing” relevant for the
dynamical energy dissipation Browning is concerned with is not the
forcing associated with increases in GHG concentrations, but the
differential heating of the planet owing to insolation gradients (this
is the forcing that principally drives the atmospheric dynamics and
ultimately the dynamical energy dissipation). As Gavin pointed out, many aspects
of anthropogenic global warming can be understood without considering
turbulent dynamics.
Jared says
#197
The linear trend for the past 10 years is still basically flat (at least for HAD, RSS, and UAH). But people don’t like that since it starts with 1998. So by going with the mean of 1998-99, I was trying to be more fair. Something I’ve pointed out, that I have not seen anyone respond to, is that if one takes into account that 1998 was heavily influenced by the strong El Nino…then one should also remember that the 2002-2007 period was dominated by El Nino. Now that we have entered La Nina conditions again, global temps are right back around where they were in the last La Nina, 1999-2000.
#199
Ron, I wasn’t trying to put words in your mouth, I was trying to clarify what you meant. I still don’t understand how a flat trend supports continued global warming. What if that flat trend continued for 20 years?
#201
JCH, since the 1999-2001 La Nina (2001 was basically recovering from the strong La Nina), there have been three El Ninos (2002-03, 2004-05, and 2006-07) and now there is finally a La Nina again. So since 2001, three El Ninos to one La Nina…that’s pretty one-sided if you ask me.
If you want to go back over the entire 10 year period, 1997-98 was the strong El Nino, so there have been 4 El Ninos and 2 La Ninas.
#202
Tamino, you are not addressing my actual points (that the rate of warming this decade had at the very least not been as great as the two previous decades, when all indications were that it should have warmed even faster than the 1980s and 1990s).
#203
Hank, yes I’ve taken a statistics course (I have a master’s degree, fwiw). And few people on here are actually addressing the fact that the same statistics that were used to illustrate the global warming of the 1980s and 1990s are now being downplayed because they don’t show the same thing for the past 10 years. Again…why is it that you cannot find another 10 year period from 1977 on that shows a flat trend? Could it just be an anomaly? Perhaps…but the trend definitely should not stay flat much longer if that is the case.
Jared says
One more thing regarding statistics…they also tell me that according to GHG/AGW calculations, the odds of any ten year period showing a flat trend are quite low (barring major volcanic eruption). Anyone disagree?
[Response: The ‘calculations’ are in the figures above, and I gave the actual distribution of expected trends for 7 years, 8 years and even 20 years. But if that isn’t enough, the distribution of trends for 10 years (1998-2007 inclusive) is N(0.195,0.172) and there are 7 realisations (out of 55) with negative trends. Therefore, assuming that you aren’t cherry picking your dates, the probability of a negative trend given the model distribution is roughly 12%. If you are cherry picking your dates, then the odds are much greater of course. – gavin]
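The arithmetic in the response above can be checked directly from the quoted distribution – a minimal sketch, assuming N(0.195, 0.172) denotes a mean of 0.195 and a standard deviation of 0.172 ºC/decade:

```python
from scipy.stats import norm

# Probability of a negative 10-year trend if trends are ~ N(0.195, 0.172) degC/decade
p_negative = norm.cdf(0.0, loc=0.195, scale=0.172)
print(round(p_negative, 3))   # ~0.13, consistent with 7 of the 55 runs being negative
```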
David B. Benson says
Chris (205) — Less surprised than you might think. I quote from page 84 of “An Introduction to Probability Theory and Its Applications, Volume I, Third Edition” by William Feller (John Wiley & Sons, 1968):
“The theoretical study of chance fluctuations confronts us with many paradoxes. For example, one should expect naively that in a prolonged coin-tossing game the observed number of changes of lead should increase roughly in proportion to the duration of the game. In a game that lasts twice as long, Peter should lead about twice as often. This intuitive reasoning is false. We shall show that, in a sense to be made precise, the number of changes of lead in n trials increases only as sqrt n: in 100n trials one should expect only 10 times as many changes of lead as in n trials. This proves once again that the waiting times between successive equalizations are likely to be fantastically long.”
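A quick way to see the sqrt(n) behaviour Feller describes is to simulate it (only a sketch, not from the book): count how often the lead changes sign in games of increasing length.

```python
import numpy as np

rng = np.random.default_rng(1)

def lead_changes(n_tosses):
    """Number of sign changes of the running lead in a fair +/-1 coin game."""
    lead = np.cumsum(rng.choice([-1, 1], size=n_tosses))
    signs = np.sign(lead)
    signs = signs[signs != 0]          # ignore exact ties
    return int(np.sum(signs[1:] != signs[:-1]))

for n in (10_000, 1_000_000):
    # Averaged over a few games; expect roughly 10x more lead changes for 100x more tosses.
    print(n, np.mean([lead_changes(n) for _ in range(20)]))
```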
JCH says
(where’s the “memory” btw?) – Chris at 205
Chris, I was puzzled by the coin flipping thing, but for a different reason. I think the memory is in the oceans, not the proposed coin-flip mechanism, but I don’t understand any of this stuff too well.
In his example is not the coin flip essentially standing in place of natural variability?
Jared says
Here is one last way to look at it (for some reason my two other comments are still awaiting moderation):
Let’s forget about 1998 for a minute, and consider a hypothetical. 2000-2005 had about an even amount of ENSO influence (1999-2000 strong La Nina, 2000-01 weak La Nina; 2002-03 moderate/strong El Nino, 2004-05 weak El Nino)…and let’s say 2006-2011 ends up with an equal amount of ENSO influence (so far, so good as we had the 2006-07 El Nino and 2007-08 La Nina). And let’s say that the average global temperature from 2006-2011 is about the same or even a little bit lower than the average from 2000-2005. Would that be enough to illustrate a flat trend, and therefore a significant decline in global warming? Or, what if by 2011 there is still no consensus record warm year above 1998? Would that prove anything?
[Response: you need on the order of 20 years to get past the interannual variability. – gavin]
tamino says
Re: #207 (Jared)
Jared, you’re not addressing my actual point: that using GISS data, the rate of warming so far this year is 70 times greater than the long-term rate from 1975 to the present, using HadCRU data it’s 100 times greater.
Is there something wrong with that argument? What might it be?
Jared says
Thank you for the response, Gavin. I could be wrong, …. [edit]
[Response: Yup. – gavin]
Ron Taylor says
Jared, the established temperature trend I referred to is not flat. It is increasing. So if a new data point is on or above the projected trend line, it represents an increase and supports continuing warming.
Gerald Browning says
Ray Ladbury or Gavin (ref #161),
I have asked you a very specific scientific question. If you cannot respond to the question of the impact of the upper boundary treatment in your climate model on your results over a period of time, what does that say about your comprehension of the errors in the rest of your results based on more serious forms of error in the models?
Let us address one issue at a time so other readers can see just exactly how much you know about the impact of numerical gimmicks in your models. If you continue to avoid these direct questions, the silence will be deafening.
Jerry
[Response: Treatment of the upper boundary generally affects the stratospheric circulation and can have impacts on the mean sea level pressure. However, the higher up you put it, the less effect it generally has (Boville, Rind et al). It doesn’t appear to have any noticeable effect on the climate sensitivity though, but it can impact the sensitivity of dynamical modes (such as the NAO) to forcing (Shindell et al, 1999; 2001). It’s a very interesting question, and one in which I’ve been involved on many papers. But I have no idea what point you are making. If it is as trivial as ‘details matter’, then we are obviously in agreement. If it is that ‘details matter therefore we know nothing’, then we are not. – gavin]
Gerald Browning says
Gavin (#182),
> [Response: The argument for AGW is based on energy balance, not turbulence.
So mesoscale storms, fronts, and turbulence are now classified as turbulence.
Oh my.
> The argument existed before GCMs were invented, and the addition of dynamical components has not provided any reason to adjust the basic picture.
So why have so many computer resources been wasted on models if the “proof” already existed? A slight contradiction.
> As resolution increases more and finer spatial scale processes get included, and improved approximations to the governing equations get used (such as moving to non-hydrostatic solvers for instance).
I have published a manuscript on microphysics for smaller scale motions and they are just as big a kluge as the parameterizations used in the large scale models. And it has also been pointed out that there is fast exponential growth in numerical models based on the nonhydrostatic models. Numerical methods will not converge to a continuum solution that has an exponential growth.
> Yet while many features of the models improve at higher resolution, there is no substantial change to the ‘big issue’ – the sensitivity to radiative forcing.
Pardon me, but isn’t radiative forcing dependent on water vapor (clouds), which Pat Frank and others have shown is one of the biggest sources of error in the models?
> It should also be pointed out (again) that if you were correct, then why do models show any skill at anything? If they are all noise, why do you get a systematic cooling of the right size after Pinatubo? Why do you get a match to the global water vapour amounts during an El Niño? Why do you get a shift north of the rainfall at the mid-Holocene that matches the paleo record? If you were correct, none of these things could occur.
How was the forcing for Pinatubo included? It can be shown in a simple 3 line proof that by including an appropriate forcing term, one can obtain any solution one wants. Even from an incorrect differential system exactly as you have done.
> Yet they do. You keep posting your claim that the models are ill-posed yet you never address the issue of their demonstrated skill.
There are any number of manuscripts that have questioned the “skill” of the models. I have specifically mentioned Dave Williamson’s results that you continue to ignore. Please address any of the issues, e.g. with the nonlinear cascade of vorticity that produces unresolved features in a climate model within a few days. How does that impact the difference between the model solution and reality? Or the impact of the upper boundary using numerical gimmicks? Or the use of inaccurate parameterizations as shown by Sylvie Gravel (see Climate Audit) or Dave Williamson? The ill posedness will also show up when the mesoscale storms are resolved and the dissipation reduced, exactly as in the Lu et al. manuscript.
>In fact, you are wrong about what the models solve in any case. Without even addressing the merits of your fundamental point, the fact that the models are solving a well posed system is attested to by their stability and lack of ‘exponential unbounded growth’.
I have specifically addressed this issue. The unphysically large dissipation in the models that is preventing the smaller scales from forming is also hiding the ill posedness (along with the hydrostatic readjustment of the solution when overturning occurs due to heating – a very unphysical gimmick).
> Now this system is not the exact system that one would ideally want – approximations are indeed made to deal with sub-gridscale processes and numerical artifacts – but the test of whether this is useful lies in the comparisons to the real world – not in some a priori claim that the models can’t work because they are not exact.
And it is those exact sub grid scale processes that are causing much of the inaccuracy in the models along with the hydrostatic approximation.
> So, here’s my challenge to you – explain why the models work in the three examples I give here and tell me why that still means that they can’t be used for the CO2 issue. Further repetition of already made points is not requested. – gavin]
If you address any one of my points above in a rigorous mathematical manner with no handwaving, I will be amazed. I am willing to back up (and have backed up) my statements with mathematics and numerical illustrations. So far I have only heard that you have tuned the model to balance the unphysically large dissipation against the inaccurate forcing to provide the answer you want. This is not science, but trial and error.
Jerry
[Response: This is not an argument, it is just contradiction. You absolutely refuse to consider the implications of your points and your punt on providing any justification for the three examples of demonstrated skill – even allowing for your points – speaks volumes. Your response on Pinatubo is particularly revealing. Aerosol optical depth is obtained from satellite and in situ observations and in the models the response is made up of the increases in reflectivity, LW absorption and subsequent changes in temperature gradients in the lower stratosphere, water vapour amounts etc. If you think the surface temperature response is obvious, it must surely be that you think the impact on the overall energy budget is dominant over any mis-specification of the details. That seems rather contradictory to the main thrust of your points. If you think it was a fix, then please explain how Hansen et al 1992 predicted the correct magnitude of cooling of 1992 and 1993 before it happened? The models therefore have some skill in responding to energy balance perturbations – how can you then continue to insist that sub-gridscale processes (which everyone acknowledges to be uncertain) preclude any meaningful results? Please clarify. – gavin]
Bryan S says
Gavin: In Hansen (2007) Figure 3, does this figure suggest that the combined forcings + natural variability produce a positive net TOA radiative imbalance (gain in OHC) in every single year following 2000? I see no negative imbalance over even an annual period. What am I not seeing here?
[Response: Nothing. That is what the model suggests. In our other configuration, (GISS-EH) the trend is smaller and the interannual variability is larger. A look at the same diagnostics across all the models would be instructive since this particular metric will depend on the amount of deep ocean mixing and tropical variability – both of which vary widely in the models. – gavin]
Barton Paul Levenson says
Jared repeats, for the Nth time, the same misinformation:
No it isn’t:
http://members.aol.com/bpl1960/Ball.html
And even if it were, ten years is too short a period to tell anything where climate is concerned. Climate is DEFINED as mean regional or global climate “over a period of 30 years or more.”
What you have done is isolate a portion of the curve that seems to your naked-eye observation to be going down and call it a trend. That’s wrong. Trends are defined mathematically, and I told you how to do it above. Go back and read it again.
J says
In #177, Barton Paul Levenson writes:
Type “NASA GISTEMP” into Google and click on the first link that comes up.
Sorry if the question wasn’t clear. I was asking whether anyone has archived the IPCC AR4 model realizations of global mean temperature in an easy-to-obtain way. In other words, I was looking for the data shown in the first figure at the top of this post.
In an earlier post here at RC, the authors noted that the archive of AR4 outputs, while useful, could be made more so if supplemented with various zonally or globally averaged products … that would save others from having to do their own spatial and temporal averaging, and it would reduce the amount of data that need to be downloaded from the archive. If I’m curious about predictions of particular models for annual global mean temperature under scenario SRES A1B, ideally I shouldn’t have to download the entire data set and make my own averages….
[Response: That would indeed be ideal. Unfortunately that hasn’t (yet) been set up. In the meantime, there is a GUI interface for much of this data available at http://dapper.pmel.noaa.gov/dchart/ or at http://climexp.knmi.nl/ (the latter does global means etc.). Both take a little getting used to. But I think it should be doable. – gavin]
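In the meantime, for anyone doing the reduction themselves, the spatial part is just a latitude-weighted average – a minimal sketch (the variable names, grid and file handling are placeholders, not the archive's actual conventions):

```python
import numpy as np

def global_annual_mean(field, lats):
    """Area-weighted global mean of a (time, lat, lon) array of monthly means,
    then averaged from monthly to annual values (assumes whole calendar years)."""
    w = np.cos(np.deg2rad(lats))                          # area weight ~ cos(latitude)
    zonal = field.mean(axis=2)                            # average over longitude
    monthly_gm = np.average(zonal, axis=1, weights=w)     # weighted average over latitude
    return monthly_gm.reshape(-1, 12).mean(axis=1)        # monthly -> annual

# field = ...  # e.g. a temperature array read from one of the archived netCDF files
# lats = np.linspace(-87.5, 87.5, field.shape[1])         # placeholder grid
# annual = global_annual_mean(field, lats)
```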
Hank Roberts says
J. 19 May 2008 at 12:49 PM
Read the first paragraph of the opening post. That contains the answer to your question, with a hot-link.
Bryan S says
In our other configuration, (GISS-EH) the trend is smaller and the interannual variability is larger.
Are the improved resolution of ENSO in GISS-EH and the smaller trend related? Many here have asserted that a long-term trend in net TOA radiative balance cannot be related to tropical variability. Please advise whether the larger variability and smaller trend are related in the model atmosphere.
[Response: No idea. You’d need to do the whole ensemble to investigate that (ask RP Sr when he’s done). I would not expect so – however, there may be some rectification that goes on. – gavin]
Gerald Browning says
Gavin (#216),
Let’s see. Did you respond technically to all of the points I raised, or did you just select one for which you thought you could repeat the same illogical argument?
So we are back to one issue at a time.
What is the altitude of the lid in your model?
How is the upper boundary treated, i.e. do you use a sponge layer in the upper layers of your model?
Is the upper layer a physical approximation or a numerical gimmick to damp the upward propagating gravity waves?
What is the impact of that treatment on information propagating upward and down from above?
Once you answer these specific scientific questions (if you can), we can begin to discuss the errors that arise from such an ad hoc treatment of the upper boundary in terms of how quickly it impacts the solution of the continuum system. Then we will proceed to each subsequent question one at a time.
For those interested, Sylvie Gravel’s results (see her manuscript on Climate Audit) showed that ad hoc treatment of the upper boundary layer affected the accuracy of the model within a matter of a few days. Physically this should come as no surprise because the downward propagating slow mode information and gravity waves have an impact on the continuum solution.
Jerry
[Response: Jerry, you are not running a tutorial here, and my interest in indulging you is limited (all of the answers are in Schmidt et al, 2006 in any case). We all agree that sub-grid scale issues and treatment of the model top make differences to the solution. The differences they make to the climatology are small yet significant – particularly in this case, for strat-trop exchange. This is not in dispute. Thus let’s move on to what it means. Do you claim that it precludes the use of models in any forced context? If no, then what do you claim? If yes, then explain the demonstrated skill of the models in the cases I raised above. This is not dodging the point – it is the point! Can models be useful in climate predictions if they are not perfect? Since the models are never going to be perfect, a better question might be what would it take for you to acknowledge that any real model was useful? (If the answer is nothing, then of course, our conversation here is done). – gavin]
Anthony Kendall says
Gerald Browning,
Reading your responses, it seems to me that you have your nose too close to the paper on this one.
Step back a moment and ask the question: “How can we produce an estimate of the response of the climate to specified (and variable) forcings/boundary conditions?” In answering that question, we can choose any number of approaches, but the one that has the most skill at prediction is the GCM-type.
So, start from that fact: we have no better tool of simulating global climate under varying conditions than the GCM.
It is entirely possible to create a model of the system that does a better job than a GCM. However, as in Frank’s linear model, any change of the system away from the linearity that is embedded will result in a marked decrease in modeling skill.
Now, going from that point. You argue that GCMs are ill-posed because they cannot represent fully the dynamics of the systems described by their PDEs. This is entirely correct. A GCM is not a DNS (direct numerical simulation, in fluid dynamics lingo). It does not explicitly represent the entire Earth using the Navier-Stokes equations at the tiniest possible resolutions.
By that argument, no model short of a complete molecular-scale Lagrangian simulation is well-posed. But wait, what about random nuclear decay occurring at the sub-molecular scale? That could clearly introduce random oscillations into the system that, undamped, could result in completely spurious results. A simulation is never a complete re-creation of a physical system; it’s a model.
What that means is that GCMs, as all models, are wrong. And, it means that GCMs can be improved. Nevertheless, they can, and do, provide us with valuable information about the trends in overall climate behavior. They cannot, yet, simulate even meso-scale “weather” events. Nevertheless, their demonstrated modeling skill means that their predictions of future behavior should at least be given serious consideration.
The alternative is to say: “Global climate is so complex we cannot model it.” But then, if we did that, all we’d have to go on is the basic physics of the situation.
If you put a bunch of CO2 into the atmosphere, it will stop more of the earth’s longwave radiation from escaping back to space. It will mean that, until equilibrium is achieved, the Earth absorbs more heat than it re-radiates. We won’t be able to say much about when or where the warming will occur, but we still know it will warm.
That’s what the bigger picture looks like. AGW is here, it’s real, and GCMs offer us the best chance of predicting the effects so that we, as a species, can respond intelligently–or at least respond with some knowledge of the possible effects of our actions.
Gerald Browning says
Gavin (#216),
So Hansen ran his model 1 year with increased reflectivity. I am less than impressed. An eruption starts as a small scale feature that is unresolvable by the mesh of a climate model. Thus the impact had to be entered at a later time, when it was more global in scale. So Hansen did not compute the impact from the beginning and had to enter the forcing at a later time.
Please provide details on the implementation of the Pinatubo forcing including spatial extent and size (mathematical formula) and what parameters were adjusted from the normal model runs. Also please indicate when the forcing modification was first added (in real time) and when the eruption occurred.
Given these problems, how was volcanic activity entered into climate models for periods before the advent of satellites?
Jerry
[Response: For volcanic forcing history usually used see here. For the experiment done in 1991 see here (note that the paper was submitted Oct 3, 1991). For an a priori calculations of the aerosol distribution from a point source of SO2 see here. For a discussion of the evaluation of the Pinatubo forcing see here. For an evaluation of the impact of volcanoes on dynamical modes in the models see here. – gavin]
BRIAN M FLYNN says
Gavin:
In answer to Bryan S at #217, I understand you to write:
“A look at the same diagnostics across all the models would be instructive since [combined forcings + natural variability produce a positive net TOA radiative imbalance (gain in OHC)] will depend on the amount of deep ocean mixing and tropical variability – both of which vary widely in the models.”
When commenting on GISS modelE, Hansen et al. (2007) write, “Measured ocean heat storage in the past decade (Willis et al., 2004; Lyman [Willis] et al., 2006) presents limited evidence of [deep ocean temperature change], but the record is too short and the measurements too shallow for full confirmation. Ongoing simulations with modelE coupled to the current version of the Bleck (2002) ocean model show less deep mixing of heat anomalies.”
Do you have a description of the Bleck (2002) ocean model and its current version and, if so, would you please post sites to them? Thank you for your time.
[Response: Try Sun and Bleck, 2006 and references within. Data is available at PCMDI and ftp://data.giss.nasa.gov/pub/pcmdi/ – gavin]
Pat Cassen says
Gerald Browning –
I’m a bit baffled by what appears to be your blanket dismissal of climate GCMs. Part of the business of numerical modeling is figuring out what a model does well and what it doesn’t do well. I get the impression that you have concluded that sponge layers, parameterizations, unresolved vorticity cascades, etc. preclude any value of the GCMs. (Of course, the same or similar objections would then preclude the value of many other astrophysical, geophysical and engineering numerical simulations.) So do the GCMs do anything adequately? Do you have some suggestions as to how to improve their performance in any area, perhaps by incorporating some of your own work? Do you have some specific, constructive recommendations? Or is the main conclusion of your work that toy models are the best we can do for now?
Chris Colose says
Gavin, I would not entertain Gerald any further. From my limited experience, convincing someone who thinks models are worthless is just an exercise in futility, and you would have better luck with a creationist. If the models get something wrong, or we cannot model every little thing in the universe then it is all crap; if the models get something right, then no one is impressed because it was either by coincidence or a “well duh” thing anyway.
Of course, the person who denies AGW on the basis of GCMs being worthless, should also explain the paleoclimatic record, basic radiative physics, other planets, etc. I still want an answer from someone: what kind of GCM did Arrhenius use back in 1896?
[Response: I concur. Round and round the mulberry bush gets tedious after a while. – gavin]
David B. Benson says
Chris Colose (227) — A pen and lots and lots of paper, it seems…
Gerald Browning says
Gavin (#222),
I am not running a tutorial. I am asking specific scientific questions that you continue to avoid. If you know the answers, then state them and quit circumventing them.
The model is not the climate. It is a heavily damped ill posed system closer to a heat equation than the continuum solution of the NS equations with the correct Reynolds number. How do you determine the impact of the sponge layer and the incorrect cascade? Isn’t that exactly what Dave Williamson did?
Please summarize his results.
Yes, when the model is too far from the continuum solution for the period of integration. That is exactly what is happening in the climate models.
Did you state the number of runs that have been made for the Pinatubo results, i.e. how much tuning was done? Is there a place at GISS where I can determine this number? Did you state how the volcano was started from the initial blast? These should be straightforward to explain. Why not do so?
I have stated very clearly and precisely what it takes for a model to be useful. It must be close to the continuum solution for the entire period of integration. If it deviates too far from that solution, then the results are nonsense.
[edit]
Jerry
[Response: Ok. Game over. I gave you copious and specific references that answered all your questions which you obviously did not bother to read. Instead you change the subject again and insist that I regurgitate other peoples’ papers. Very odd. Do you find that this is a fruitful method of exchange? Judging from your other internet forays, it wouldn’t seem so. You might want to think about that…. BTW you have neither been clear nor precise in any part of this ‘conversation’. But in my last word on this topic, you will find that most people define ‘useful’ as something that ends up having some use. Therefore a model that makes predictions that end up happening is, by definition, useful. It does not become nonsense because it is not arbitrarily close to an ideal solution that we cannot know. Since you refuse to accept a definition of useful that has any practical consequence, this conversation is done. Shame really. – gavin]
Gerald Browning says
Anthony Kendall (#223)
> Gerald Browning,
> Reading your responses, it seems to me that you have your nose too close to the paper on this one.
> Step back a moment and ask the question: “How can we produce an estimate of the response of the climate to specified (and variable) forcings/boundary conditions?” In answering that question, we can choose any number of approaches, but the one that has the most skill at prediction is the GCM-type.
This has not been shown and in fact it now appears that Pat’s simple linear forcing does a better job.
>So, start from that fact: we have no better tool of simulating global climate under varying conditions than the GCM.
So instead of running the atmospheric component ad nauseum, run it with finer resolution (~3 km mesh) for a few weeks to see what happens when the dissipation is reduced, as Lu et al. did. That would be a much better use of computer resources, and the results would demonstrate who is right. The solution will be quite different, i.e. the rate of cascade will lead to a quite different numerical solution, revealing just how far away the current numerical solution is from the continuum solution.
> It is entirely possible to create a model of the system that does a better job than a GCM. However, as in Frank’s linear model, any change of the system away from the linearity that is embedded will result in a marked decrease in modeling skill.
If Frank’s error analysis is anywhere near correct (and I believe that it is), then the GCMs have no skill against reality without considerable tuning (trial and error).
> Now, going from that point. You argue that GCMs are ill-posed because they cannot represent fully the dynamics of the systems described by their PDEs. This is entirely correct. A GCM is not a DNS (direct numerical simulation, in fluid dynamics lingo). It does not explicitly represent the entire Earth using the Navier-Stokes equations at the tiniest possible resolutions.
You have entirely missed the point. It is the hydrostatic approximation that causes the ill posedness. One does not have to go any finer than the above suggested experiment to see the problem. The original nonhydrostatic
system is not ill posed (but it still has fast exponential growth).
By that argument, no model short of a complete molecular-scale Lagrangian simulation is well-posed.
The mathematical definition of well posedness is that perturbations of a solution of the continuum time dependent system grow at worst exponentially in time. All physically reasonable systems satisfy this requirement, including the nonhydrostatic system or the NS equations. Perturbations in an ill posed system grow exponentially with a larger exponent as the mesh is refined, i.e. there is no hope for convergence of a numerical solution. This is a serious problem.
> But wait, what about random nuclear decay occurring at the sub-molecular scale? That could clearly introduce random oscillations into the system that, undamped, could result in completely spurious results. A simulation is never a complete re-creating of a physical system; it’s a model.
Stick to the issue at hand.
> What that means is that GCMs, as all models, are wrong. And, it means that GCMs can be improved. Nevertheless, they can, and do, provide us with valuable information about the trends in overall climate behavior. They cannot, yet, simulate even meso-scale “weather” events. Nevertheless, their demonstrated modeling skill means that their predictions of future behavior should at least be given serious consideration.
Models do not have to be wrong. They are only wrong if they are based on a time dependent system that is ill posed, or if, even when the system is well posed, the numerical solution is propagated beyond where it accurately approximates the continuum solution.
> The alternative is to say: “Global climate is so complex we cannot model it.” But then, if we did that, all we’d have to go on is the basic physics of the situation.
That is entirely possible. And I point out that there is much to learn about the basic physics. That would be a much better investment of resources.
> If you put a bunch of CO2 into the atmosphere, it will stop more of the earth’s longwave radiation from escaping back to space. It will mean that, until equilibrium is achieved, the Earth absorbs more heat than it re-radiates. We won’t be able to say much about when or where the warming will occur, but we still know it will warm.
Is there a scientific argument here? I recently read a manuscript that pointed out that an incorrect treatment of the upper boundary condition in the basic radiation argument had been made (I can find the article if you want).
When the correction was made, the results were very different.
> That’s what the bigger picture looks like. AGW is here, it’s real, and GCMs offer us the best chance of predicting the effects so that we, as a species, can respond intelligently–or at least respond with some knowledge of the possible effects of our actions.
Politics, anyone?
Jerry
Rod B says
re the discussion between Gavin and G. Browning, et al. I’m way out of my league here, but have a simple question that I hope is not a non sequitur: How can a model possibly predict the effects of a Pinatubo without the model user providing an input specific to such a thing?
[Response: Read the paper I linked to – it really is quite good. Pinatubo erupted in June 1991, and it was apparent very quickly that there was a large amount of SO2 emitted into the lower stratosphere. That was enough to be able to scale the distribution of aerosols from El Chichon (in 1982) up to the amount from Pinatubo and project the models forward with that. Compare the model output (from the paper written that October) with the record we know today. It’s pretty good. If a similar event were to happen today, we’d be able to do a more a priori calculation – just putting a point source of SO2 and having the model calculate the sulphates, ozone response etc. – gavin]
Anthony Kendall (223) says, “…The alternative is to say: “Global climate is so complex we cannot model it.”…” Might that in fact be true (well, not model it within some kind of range)? Aren’t climate models about the most complex and difficult models going, including quantum physics and astrophysics?
Hank Roberts says
Rod, did you read Judith Curry’s review? “Op. cit.”
Nylo says
I still have two unanswered questions.
First, provided that in the greenhouse effect theory the warming in the surface temperatures comes from the emissions of a warmer troposphere, and that the models predict far more warming in the troposphere than what has been measured in the real world, why should we rely on the predictions of the models?
Second, why has the first question been ignored at first, and censored later, without an explanation as to why it should be ignored or censored? What is wrong with the question? Am I being impolite? Should I use a different language? Are some of the stated facts wrong? Please, give me a clue.
[Response: The facts, premise and implication are all wrong. It sounds logical, but its purpose as a ‘gotcha’ question is neither to learn nor to inform. Tropospheric warming is driven from the surface through adiabatic effects, not the GHE; the troposphere is warming globally at about the rate predicted by models; in the tropics there are discrepancies but the uncertainty in the observations is huge; the model predictions for dozens of other issues are in line with observations; even if you don’t like models the empirical evidence is sufficient to imply continued warming with potentially serious consequences. That better? – gavin]
Alexander Harvey says
Re #143:
Chris,
I had thought I had been clear that the lack of analysis referred to the 75% claim. If not, I will state now that I thought it was that point:
“In terms of climate forcing, greenhouse gases added to the atmosphere through mans activities since the late 19th Century have already produced three-quarters of the radiative forcing that we expect from a doubling of CO2.”
That statement you seem to have dismissed out of hand, and unfortunately it, as stated, is possibly correct; I do not think you have challenged that. His statement is not particularly important but it is a bit of an attention grabber.
I do not think Lindzen is a stupid man and his use of words does require my attention.
For instance:
“Even if we attribute all warming over the past century to man made greenhouse gases (which we have no basis for doing), the observed warming is only about 1/3-1/6 of what models project.”
It is his use of the word “project”, not “projected”. This allows him to refer to the equilibrium temperature, which he maintains is commonly 4C. Arguing from there he can be seen to be more or less correct “in his own terms”. Refuting it by using other figures for the warming and other figures for the climatic sensitivity will not make it incorrect “in his own terms”. On the other hand you seem to have interpreted the statement by replacing “project” with “expect” (quote: “he asserts we’ve only had 16-33% of the expected warming”). Hence you are, I feel, arguing different cases.
Finally a word of caution regarding the Hansen “pipeline” figures. I think in the Hansen 1985 paper it is explicit that the temperatures are ocean surface temperatures not globally averaged temperatures and I believe that style is carried forward to his (2005?) paper. That said I think it is more accurate to add the pipeline increase to the sea surface temperature increase to obtain the final temperature.
During the post 1970 surge in temperatures, GISS give at least twice the rate of increase over land as over the ocean. So the Land, Ocean and Global temperatures have diverged significantly and it is not totally obvious how to do the sums when adding back in the “pipeline” figure.
Personally I have little problem at this moment with a 0.5C +/- 0.2C range for future ocean temperature increase. Which (not all that coincidentally) would bring ocean temperatures back into line with the increase in land temperatures. Which, quite plausibly, track the equilibrium rather closely.
Neither here nor before am I attacking you, nor am I defending Lindzen “point of view” whatever that may be. I am just saying that you can both be right in “your own terms” on his two points that you brought to our attention. As for the rest of what he has to say I have not commented.
Best Wishes
Alexander Harvey
Nylo says
Gavin, certainly my posts can contain inaccuracies. But that is precisely the reason why I am asking. If I find anything that doesn’t look good in the AGW reasoning, I will say it, and I will ask, so that you have the opportunity to explain it better; you are the scientist here. The purpose of a “gotcha” question is not to inform, of course; no question has the ability to inform, what informs is the answer. But the purpose of any question IS to learn. If you can provide an answer to a gotcha question, “you win”, and everybody learns. If you can’t, then we also learn, and the gotcha question proves to be one that had to be asked in order to force people to improve the science. Science progresses by asking questions that need answers and looking for those answers.
I don't understand very well what you said about what drives the tropospheric warming. I know what AGW defenders tell the general public: we add CO2 to the atmosphere, mainly the troposphere, and because of that the troposphere warms and radiates extra energy to the surface. If this is wrong, and humanity is influencing the climate in some way other than by increasing the GH effect, shouldn't that be explained better? I didn't know that we were provoking climate change in ways other than strengthening the greenhouse effect. As far as I knew, the GHE was the only important thing we were affecting that could cause a big warming.
About the discrepancies between models and real data, you are right that they arise mostly in the tropics. But the tropical discrepancy shown in the article by Douglass et al. covers an area as big as 30ºS to 30ºN. This is half of the total surface of the planet, not just one small region. Furthermore, it is the half of the planet where most of the emission from the troposphere reaches the surface, because it is the half where the tropospheric temperature is higher and therefore emits the most. Shouldn't we be relying on the predictions of the models that show the correct warming in the troposphere in this area, instead of on the average of the models?
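For reference, the "half of the planet" figure checks out exactly: the fraction of a sphere's surface lying between latitudes ±φ is sin φ, so for φ = 30º,

\[
\frac{A(\pm 30^\circ)}{A_{\mathrm{sphere}}}
 = \frac{\int_{-30^\circ}^{+30^\circ} \cos\phi \, d\phi}{\int_{-90^\circ}^{+90^\circ} \cos\phi \, d\phi}
 = \frac{2\sin 30^\circ}{2} = \frac{1}{2}.
\]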
Regarding the inaccuracy in the observations: are the models' predictions to be trusted more than the observations in this case? And does the inaccuracy mean that the troposphere is probably warmer than measured, or does it also work the other way, i.e. the troposphere could be cooler than measured with a similar uncertainty?
Thanks.
[Response: See, just repeating the same statement again with more words doesn't help. The GHE does not rely on the troposphere getting warmer – it relies on the increased difficulty for the whole system to lose heat to space. Because of convection, tropospheric profiles, particularly in the tropics, are pinned to the surface temperature (the moist adiabat). The tropics are certainly an important part of the planet, but they are also (unfortunately) rather poorly observed – the uncertainties in the tropospheric trends are a function of the latter, not the former. And Douglass et al's claims were overblown on a number of fronts (more on that tomorrow). – gavin]
Hank Roberts says
Re the failures to make a go of Internet conversations, MT’s suggestion over at his blog thread is interesting:
“Why don’t you write Collins and ask him whether he thinks GCMs are of no use in predicting the future on multidecadal scales?
Since he has done some paleoclimate work I expect he will disagree.”
Perhaps one of the climate scientists/modelers will do so. Even if it's not good blog material, it could produce an interesting joint paper among those who can't make a go of the blog conversation. I realize this suggestion probably belongs at the other place.
Russ Willis says
Don't all the IPCC models assume an infinitely thick atmosphere? And hasn't this recently been suggested to be a dodgy/incorrect assumption?
[Response: No. That would indeed be dodgy. IPCC-type models tend to go up to 35km, 50km or even 80km and more specialised ones up to 110 km. 35km is probably a little too low (since it cuts off the stratosphere and affects planetary waves), 50km/80km is better (top at the mesopause/mesosphere), 110km is unnecessary for getting the surface climate right. However, even with a top at 35km, the model contains 99% of the atmospheric mass. -gavin]
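As a rough sketch of where that 99% figure comes from (the 7.5 km scale height below is an assumed round number, not a value taken from the response): in hydrostatic balance the mass above a level is proportional to the pressure there, so the fraction of the atmosphere's mass below altitude z is roughly 1 - p(z)/p0 ≈ 1 - exp(-z/H).

```python
import math

H_KM = 7.5  # assumed isothermal scale height (km); an approximation, not a model value

def mass_fraction_below(z_km, scale_height_km=H_KM):
    """Approximate fraction of atmospheric mass below altitude z_km,
    using hydrostatic balance with an exponential pressure profile."""
    return 1.0 - math.exp(-z_km / scale_height_km)

for top_km in (35, 50, 80):
    print(f"model top at {top_km} km: ~{mass_fraction_below(top_km):.1%} of the atmosphere's mass")
```

With these numbers a 35 km top already sits above about 99% of the mass, consistent with the figure quoted in the response.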
Ray Ladbury says
#231, Rod B. asks if climate might not be too complex to model.
The purpose of modeling is not to reproduce the system being modeled, but rather to gain understanding of it. As George Box said, “All models are wrong. Some models are useful.” So the question is not whether climate models can have 100% fidelity, but whether they can have sufficient fidelity to yield insight into the system and whether the departures from fidelity compromise any of those insights.
GCMs are quite complex, but I have seen other simulations that rival them. DOD nuclear codes have to be, since we do not allow nuclear testing any more. I have also seen simulations of a DNA molecule moving under the influence of an electric field, and of high-energy collisions of uranium nuclei (remember these are Monte Carlo, so they are repeated many times). As part of my day job, I work with people who simulate failures in complex microcircuits – submicron CMOS, high-electron-mobility transistors, heterojunction bipolar transistors, etc. Circuits fabricated in these technologies are way too complicated to simulate on even the largest supercomputers, so you have to make compromises. It would be foolish to take model output as gospel for what you will see in the real device, but the insights guiding you to the physics are invaluable.
Rod B says
Gavin, so the models are not predicting the eruption itself, which is put in manually, but on their own they predict the downstream effects and variance post-eruption pretty well; the former of course being hokey, the latter not bad and positive. Do I have it generally right?
Chris says
Alexander,
When we arrive at the situation of debating what an author might mean with the use of a word (like “project”) then we know that either the author has been remiss in conveying his meaning, or that potential ambiguities are being used to pursue dubious interpretations.
When Lindzen states: “Even if we attribute all warming over the past century to man made greenhouse gases (which we have no basis for doing), the observed warming is only about 1/3-1/6 of what models project.”
It seems pretty clear to me that he means to imply that we’ve had much less warming than the models indicate that we should have.
After all, Lindzen’s very next sentences are [*]:
We are logically led to two possibilities:
1. Our models are greatly overestimating the sensitivity of climate to man made greenhouse gases, or
2. The models are correct, but there is some unknown process that has cancelled most of the warming.
You are also questioning what Hansen et al refer to when they state that, according to their analysis of the Earth's "heat imbalance", the Earth has 0.6ºC of warming "in the pipeline". You suggest that this refers to ocean surface temperature and not globally averaged temperature.
Again I think the meaning is quite clear by reference to what the authors say:
Of the 1.8 W/m2 forcing, 0.85 W/m2 remains, i.e., additional global warming of 0.85 x 0.67 ≈ 0.6°C is "in the pipeline" and will occur in the future even if atmospheric composition and other climate forcings remain fixed at today's values (3, 4, 23). [**]
[*] http://www.ycsg.yale.edu/climate/forms/LindzenYaleMtg.pdf (slide 12)
[**] J. Hansen et al. (2005), "Earth's Energy Imbalance: Confirmation and Implications", Science, 308, 1431-1435 (third column on p. 1432).
http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf
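Spelled out, the arithmetic in the quoted sentence is just the unrealized forcing multiplied by the sensitivity parameter the paper uses:

\[
\Delta T_{\text{pipeline}} \approx 0.85\ \mathrm{W\,m^{-2}} \times 0.67\ \mathrm{^{\circ}C\,(W\,m^{-2})^{-1}} \approx 0.6\ ^{\circ}\mathrm{C}.
\]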
Russ Willis says
Thanks for the quick response. My confusion regarding the thickness of the atmosphere comes from a report I was reading about a couple of Hungarian geezers called Miklós Zágoni and Ferenc Miskolczi. I think they were claiming that there was an error in the equations used to predict future climate, something about applying boundary constraints on the thickness of the atmosphere. It sounded quite interesting at the time, but unfortunately I've heard nothing more on the subject since. Was there anything to this story, or is it a load of tosh?
[Response: The latter. But we will have an undergraduate take-down of this new ‘theory’ up at some point soon. – gavin]
Clive van der Spuy says
Gavin #229 “Game over…”
I sense your frustration but remember us laymen are also trying to follow this. So perhaps a bit more latitude?
Anyhow Jerry asked (inter alia):
“Did you state the number of runs that have been made for the Pinatubo results,
i.e. how much tuning was done….?”
[Response: Please read the paper. (3 runs, no tuning). – gavin]
Up to this point I have actually followed the gist of the reasoning. He says your GCM is inherently flawed – it merely reproduces the same linear graph as Frank's super-simplistic model and it was cooked to do so. You say no, it actually predicts real events with a good measure of accuracy, e.g. post-Pinatubo cooling. He then fails to read your reference. OK, so I did:
“We use the GISS global-climate model to make a preliminary estimate of Mount Pinatubo’s climate impact. ….”
The authors made a number of assumptions and then their GCM predicted fairly clear-cut results regarding cooling, the time periods involved, even a possibly later than usual time for cherry blossoms in Tokyo!
Question one (if you will be so kind): How accurate did these predictions turn out to be?
[Response: pretty good. Max cooling of about 0.5 deg C in the monthly means which compares well with observations. They didn’t predict the concurrent El Niño, which may or may not have been connected and they got the tailing of the cooling pretty well too. This might make a good post to discuss… – gavin]
Question two: how difficult would it be to predict these results by other means? I.e., could not someone like Frank or Jerry simply do a little seat-of-the-pants calculation, tag it onto Frank's simplistic linear simulator, and presto, we have a reasonably accurate cooling scenario?
[Response: You are right, it’s not obvious. The effects involve LW and SW radiation changes, water vapour and dynamical feedbacks which give not only global cooling, but also regional patterns (winter warming in Europe for instance). Additionally, the timescale for the tailing off is a complicated function of ocean mixing processes and timescales. – gavin]
You will get my drift here – I am trying to assess for myself as well as I can how significant the prediction by the GCM is for the purposes of judging its reliability/sophistication.
Perhaps another way to put it is to ask "How unexpected were the predicted results?" Assuming the predictions came true, I would think that the more unexpected they were (i.e. the GCM picking, by and large, the right numbers from A BIG POPULATION of possibilities), the higher the level of confidence we can have in the GCM – not?
Please remember my lay status if you do have time to answer. May Jerry comment?
Thanks for a very informative site (and comments section). For once I sense I am approaching some sort of understanding.
[Response: Well it’s not unexpected that volcanic aerosols cool, but the quantification of the impact is a function of many feedbacks – radiative and dynamic – which are of course important for the response to GHGs too. The models today do a better job, but it’s tiresome to hear how models can’t possibly work when they do. – gavin]
Nylo says
Gavin at #235 said: “The GHE does not rely on the troposphere getting warmer – it relies on the increased difficulty for the whole system to lose heat to space”. “Because of convection tropospheric profiles, particularly in the tropics are pinned to the surface temperature (the moist adiabat)”.
I thought that the whole system had increased difficulty losing heat to space because some of the heat emitted at the surface was trapped by the troposphere, making the troposphere warmer.
The tropical troposphere doesn't look to be warming the way the average model predicts, which doesn't mean that it is not absorbing the energy. It only means that the energy the troposphere absorbs is redistributed later, by processes like the convection profiles you mention. Correct me if I am wrong, Gavin, but these convection profiles only accelerate the exchange of energy between the surface and the troposphere. So they "cool" the troposphere by "warming" the surface. They move the heat; they don't make it disappear.
If the models don't handle the convection profiles correctly, then they are delaying the response of the surface temperatures to the GHE. If this response is delayed in the models, and the models still correctly predict today's surface temperature trends, then the models are over-reacting. A delayed response should show less warming at the surface together with more warming in the troposphere. They cannot get one right and the other so wrong; they should get both wrong, in opposite directions, or else they are not showing the true energy balance of the system. Their whole atmosphere-plus-surface system warms more than the observations show.
Finally, don't these convection profiles help the surface to release heat directly to the higher layers, with the heat carried by big masses of air going upwards, and therefore bypassing the problem of infrared absorption by the GHGs along the way? Isn't that a way for the planet to release heat to space without interference by the GHGs? Do the models include this source of energy dissipation?
Would it be very difficult to include in the models these well known convection profiles in the tropics, so that they do not show this delayed response of the surface temperatures to the GHE? I would be very interested in learning what the resulting prediction for the future would be, and how it would match the current temperature trends at the surface.
[Response: This is just nonsense piled on top of nonsense. Convection doesn’t cool the atmosphere – it warms it (in the tropics that is mainly by latent heat release). The moist adiabat governs tropical profiles on all timescales – monthly, interannual and in the long term trends – with the lone exception of one satellite record and particular readings of the radiosondes (but as I said, there will be more on that soon). All of these features are included in GCMs. Your faith in the fidelity of short term records in the sparsely sampled tropics with multiple proven biases is touching. – gavin]
Clive van der Spuy says
Re # 242 Gavin
Ok so if I understand correctly, the GCM produced a set of predictions in close conformity with the actual outcome – at least close enough to infer that it has reasonable predictive value regarding aerosol cooling from volcanic eruption. This is of course of probative value when judging the GCM. It is difficult for me to get a keen appreciation of exactly how much value to place on the ability of the GCM to do this.
What can be inferred about the GCM’s ability to predict OTHER climate features from its proven ability to predict aerosol cooling? Ie I now know the model is quite good at dealing with volcanic aerosols but does that necessarily tell me anything about its ability to predict for instance the consequences of a meteorite impact (providing it with different assumptions regarding particle size and the like) or most importantly, does it provide confidence regarding its ability to predict future temperature?
[Response: First off, it is a simple demonstration that models can predict useful things about the global mean temperature change in the face of radiative forcings. This clearly undermines arguments that climate modelling is inherently useless, such as those put forward by Browning. Does that imply they can predict anything else? Not necessarily. However, many of the things that make a difference to the results for doubled CO2 also come into play in this test case – water vapour and radiative feedbacks for instance. As you line up other test cases that test the models in other circumstances (such as the mid-Holocene, or their response to ENSO, or the LGM, or the 8.2 kyr event, or the 20th C, etc.), you get to try out most of the parts that factor into the overall climate sensitivity. Good (but necessarily imperfect) matches to these examples demonstrate that the models do capture the gross phenomenology of the system despite the flaws – and that is what determines our confidence going forward. – gavin]
Patrick M says
It seems that natural variability has now been shown to obscure the AGW signal for up to a decade, and thus even a 20-year trend might overstate or understate AGW trends. So why not focus on the whole 1950s-to-present period, since we have Mauna Loa CO2 data and temperature data for the whole period, and such a longer period reduces the natural-variability biases?
From 1957 to 2007 there is a 50-year warming trend: HadCRUT3v global temperature anomalies went from -0.083ºC in 1957 to 0.396ºC in 2007, i.e. 0.48ºC in 50 years, a bit under 0.1ºC/decade. Since a single annual value can be biased by year-to-year variability, let's instead use the -0.15ºC 1950s decadal average and the 0.44ºC five-year average for 2002-2007 (again HadCRUT numbers), yielding a 50-year 1955-2005 trend of 0.12ºC per decade. This compares with a GCM AR4 model-average change of around 0.7ºC or more over the same period. The measured trends of 0.1 to 0.12ºC/decade are at the low end of the GCM models, which average almost double that.
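For reference, the two trend figures follow directly from the quoted endpoints:

\[
\frac{0.396 - (-0.083)}{5\ \text{decades}} \approx 0.096\ \mathrm{^{\circ}C/decade},
\qquad
\frac{0.44 - (-0.15)}{5\ \text{decades}} \approx 0.12\ \mathrm{^{\circ}C/decade}.
\]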
Gavin correctly states that “claims that the IPCC projection of about 0.2ºC/dec over the next few decades would be falsified with such an observation are equally bogus.” (observation of 8-year negative trend)
However, we have a 50-year trend in the data that is much less than that projected 0.2ºC/decade trend. The next decade might see significant warming and pull the trend up a bit, or it might not, in which case we will have a 60-year trend of approximately 0.12ºC/decade, give or take. Neither outcome will completely falsify nor validate the AGW models.
The comparison between data and models over 50 years suggests that much of the warming trend predicted by the models hasn't shown up in the temperature records, and/or that the models are overstating the trend and the climate sensitivity by a factor of 1.5 to 2. The 'pipeline' argument cannot account for much of this difference, since 80% of the AGW signal is more than 10 years old, and as the temperature record lengthens that share only grows. The absence of any significant acceleration in warming above this 0.12ºC/decade trend in the near term would suggest that the GCMs with trends of 0.2ºC/decade or above are overstating the impact of AGW.
[Response: Your calculations are not comparing like with like. If you do the 1957-2007 trend in HadCRUT3v you get 0.13 +/- 0.02 deg C/dec. The equivalent calculation for the models has a distribution of 0.15 +/- 0.08 deg C/dec (95% conf, OLS, annual means). The spread is wider than for the future trends, likely because of the varying specifications of the anthropogenic forcings in the models, but the model mean is clearly close to the observed value. There is also a clear acceleration towards the present in the obs (1967-2007: 0.16, 1977-2007: 0.17, 1987-2007: 0.19). – gavin]
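For anyone wanting to reproduce the kind of OLS trend and 95% interval quoted in the response, here is a minimal sketch. The anomaly series is only a placeholder (substitute the actual HadCRUT3v annual means), and the interval uses the simple slope standard error, ignoring the autocorrelation that a more careful analysis would account for.

```python
import numpy as np
from scipy import stats

# Placeholder annual global-mean anomalies (deg C) for 1957-2007;
# replace with the real HadCRUT3v series before drawing any conclusions.
years = np.arange(1957, 2008)
anomalies = np.zeros_like(years, dtype=float)

res = stats.linregress(years, anomalies)
trend_per_decade = 10.0 * res.slope
# ~95% interval from the slope's standard error (normal approximation,
# no correction for serially correlated residuals).
ci95_per_decade = 1.96 * 10.0 * res.stderr
print(f"trend: {trend_per_decade:.2f} +/- {ci95_per_decade:.2f} deg C/decade")
```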
Nylo says
I am sorry to have expressed the idea badly. What I meant is that the convection processes make the warming trends at the surface and in the troposphere more similar. Without the convection processes, the warming in the troposphere would be faster and the warming at the surface slower, because all of the energy transfer would happen only by radiation, which is much slower than massive convection.
I will not continue with this; your position is clear and too far from mine. If I understand you correctly, your position is that the models already include all the convection processes, and that if the warming trend distribution in the tropical troposphere doesn't match the observed data, it is because the observed data are obviously wrong. The models have to be right, and in the remote case that they weren't, it wouldn't be important.
Fine for you, I guess. Difficult to sound convincing to anyone else, though.
[Response: If convection were the only thing going on, then tropical tropospheric trends would be larger than at the surface. Therefore a departure from that either implies that convection is not the only thing going on, or that the data aren't good enough to say. I at no time indicated that the models 'have to be right'; I merely implied that an apparent model-data discrepancy doesn't automatically imply that the models are wrong. That may be too subtle though. If the data were so good that they precluded any accommodation with the model results, then of course the models would need to be looked at (they have been looked at in any case). However, that isn't the case. That doesn't imply that modelled convection processes are perfect (far from it), but it has proven very hard to get the models to deviate much from the moist adiabat in the long term. Simply assuming that the data must be perfect is not sensible, however. Jumping to conclusions about my meaning is not either. – gavin]
BRIAN M FLYNN says
When listing some of the deficiencies of ModelE (2006), Dr. Hansen et al. (2007) mentioned the absence of a gravity wave representation for the atmosphere.
Can such a wave be vertical for a particular column of the atmosphere? If so, would that phenomenon make the earth more efficient in radiating heat than the said model suggests?
As an aside, Miskolczi, in his paper "Greenhouse effect in semi-transparent planetary atmospheres" (2006), appeared to suggest greater efficiency in radiating heat. But with the dense equations in the paper, public attention focused more on the phrase "runaway greenhouse", which I believe overshadowed the point about such efficiency. Miskolczi's findings were rejected by the IPCC as unsupported by the literature, but you would expect an absence of support, since he questioned the conventional approach in the first instance. I look forward to RC's perspective on his idea.
Lastly, does such a gravity wave exist for the ocean, and can it likewise be vertical for a particular column of the ocean? If so, would that phenomenon make the earth more efficient at sending heat to the deep ocean?
Thank you for your time.
David B. Benson says
BRIAN M FLYNN (247) — Start with the Wikipedia page on Rossby waves.
Chris Colose says
Nylo,
In short, the whole troposphere is basically linked by convection and stays close to the moist adiabat. In fact, it's precisely because temperature drops with altitude that an enhanced greenhouse effect is possible: adding GHGs increases the "opacity" of the atmosphere to infrared radiation, forcing the altitude of emission to higher layers where the radiation is weaker (because of the T^4 dependence). Since the troposphere is linked by convection, once you create a situation where the planet takes in more radiation than it gives off (the same amount coming in, but less being emitted), all the layers, including the surface, will warm, at least up to the stratosphere.
Now, if the tropical troposphere were not actually warming as much as the models say, that could imply a still higher climate sensitivity (or that the models are underestimating surface warming), since the lapse-rate feedback is the most strongly negative feedback (aside from the OLR). That's because the sharper the temperature drop with height, the stronger the greenhouse effect.
Evaporation and sensible heat mainly belong to the surface energy budget, not the TOA energy budget. The latter is more in line with the AGW argument of a radiative imbalance causing warming, while the surface budget simply closes (evaporation is how it comes back to equilibrium after the radiative perturbation), regulating the gradient between the surface and the overlying atmosphere. I don't know whether the experts agree, but I think the Miskolczi paper over-emphasizes the surface energy budget (among many other things).
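A minimal sketch of the "emission from higher, colder layers" point made above, using illustrative numbers (288 K surface, a 6.5 K/km lapse rate, emission levels near 5-6 km) that are assumptions for the example rather than values from the thread:

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant (W m^-2 K^-4)
T_SURFACE = 288.0  # assumed surface temperature (K)
LAPSE_RATE = 6.5   # assumed lapse rate (K per km)

def olr_if_emitting_from(z_km):
    """Outgoing longwave flux if the planet radiated as a blackbody
    at the temperature found at altitude z_km on a fixed lapse rate."""
    t_emit = T_SURFACE - LAPSE_RATE * z_km
    return SIGMA * t_emit**4

before, after = olr_if_emitting_from(5.0), olr_if_emitting_from(6.0)
print(f"raising the emission level from 5 to 6 km cuts emission from "
      f"{before:.0f} to {after:.0f} W/m^2")
```

Less outgoing radiation for the same incoming sunlight is the imbalance that then warms the whole convectively linked column, which is the chain of reasoning in the comment above.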
Gerald Browning says
Clive van der Spuy (#242),
You might want to read Tom Vonk's discussion of the Pinatubo climate run on Climate Audit. He makes the very valid scientific point that the ocean does not change that much during the time frame of the climate model. Now ask Gavin what changed the most in the atmospheric portion of the run, i.e. exactly which terms deviated the most from the "standard" run. And recall that the volcano was put into the model in an ad hoc fashion, i.e. the initial explosion cannot be modeled by a coarse-mesh climate model. Also, we are talking about the equivalent of multiple nuclear bombs.
I started to read the first manuscript cited by Gavin and could not believe the number of caveats and the amount of excess verbiage. Why not provide the exact formulas for the changes?
And finally you might ask Gavin if all the other climate models use the same formula?
Jerry