This month’s open thread. We’re going to guess that most of what people want to talk about is related to the IPCC WG1 AR5 report… Have at it!
286 Responses to "Unforced Variations: Oct 2013"
nico says
FP @ 12 you ask about smoke from Australian and US bushfires. I’ve attended lectures by Mike Fromm, USN, who is interested in pyrocumulonimbus clouds created by large fires. Why is the US Navy concerned about fires? It seems they rely on satellites to observe the “enemy” and – well – smoke gets in your eyes. See for example: http://www.bushfirecrc.com/sites/default/files/managed/resource/thur_p110a_1520_mike_fromm.pdf
Watcher says
I’ve appreciated Gavin stepping in from time to time with the questions I have posed. I’m hoping it will happen again.
Unlike some of you folks (the childish stuff going on with 15 and 25, for example) I feel no compunction about visiting a variety of websites which discuss AGW issues. One of them is Judith Curry’s. The other day she posted a graphic showing the output of a bunch of GCMs. Sorry, I don’t know how to post images so here’s the link:
http://curryja.files.wordpress.com/2013/10/figure.jpg
The accompanying text indicated that what are normally shown are anomalies, i.e. each GCM run is normalised by subtracting off a baseline value, however determined, to line up observations and calculations at some chosen reference point. Honestly, this took me by surprise at the time and I continue to find my thoughts returning to it. I had always assumed that GCM results were plotted “as is”, and that the process of tuning ensured they would match observations over some sort of calibration period.
In contrast, the graphic posted by Dr. Curry shows a large spread in temperature between the various models and is not at all like the graphics normally used to present the results. In fact, the spread in models is just as large as the spread between the RCP8.5 and RCP2.6 scenario projections, which I take it are “business as usual” and “boy we did a great job” emission scenarios.
To the question, then:
Is this graphic right? Does it provide an accurate description of the sort of output one obtains from a representative set of ‘current’ GCMs?
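To make Watcher’s anomaly point concrete, here is a minimal sketch (in Python, with made-up numbers standing in for real CMIP output) of why absolute model temperatures can spread by a few kelvin while the familiar anomaly plots line up: subtracting each run’s own reference-period mean removes the constant offset but leaves the trend untouched.

```python
import numpy as np

years = np.arange(1850, 2013)
rng = np.random.default_rng(0)

# Five synthetic "model runs": a shared warming trend, a per-model
# absolute offset of up to +/- 1.5 K, and some year-to-year noise.
trend = 0.7 * (years - 1850) / (years[-1] - 1850)
offsets = rng.uniform(-1.5, 1.5, size=(5, 1))
runs = 286.5 + offsets + trend + 0.1 * rng.standard_normal((5, years.size))

# "Anomaly" plotting: subtract each run's own mean over a reference
# period (1961-1990 here), which removes the constant offsets.
ref = (years >= 1961) & (years <= 1990)
anomalies = runs - runs[:, ref].mean(axis=1, keepdims=True)

print(runs[:, -1].ptp())       # absolute spread between models: up to ~3 K
print(anomalies[:, -1].ptp())  # anomaly spread: a few tenths of a kelvin
```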
Peter Cook says
First let me express my appreciation to all contributors for the immense effort you put in to providing accurate information and rigorous interpretations. I am a non-scientist.
I notice in a recent post by Richard Heinberg at http://www.resilience.org/stories/2013-10-01/fingers-in-the-dike, he states the following:
“There’s no way to know how long this current cool cycle will last, though the previous Pacific cool phase, which started in the 1940s, continued for about 30 years. If the present cycle is of the same duration, then in about 15 years much of the heat currently being dumped in deep oceans may begin instead to remain in the atmosphere. At that point we will likely see unprecedented rates of climate warming, and far worse episodes of extreme weather.”
Aside from the perhaps inadvisable use of the term ‘cooling’ for the current situation, what do people think of the prospects that this current slower rate of warming (‘pause’) could continue for another 10 or 15 years?
Hank Roberts says
> Exactly the effect some wanted.
And for others, other advantages no doubt happen to accrue.
A few days or a week in which there’s nobody watching the financial transactions, or the toxic waste truck’s pickups and deliveries, or the stuff going on in the slaughterhouses, or the pension plans.
A few things that get default approval if nobody says hold on, within some span of days, maybe.
…. hey, what could go wrong?
And if some well meaning ‘ibbertarian who’s one of the self-identified good guys could stumble across a bit more profit just by happenstance when the inspectors and regulators are all off on unpaid leave, well, would that be so terrible?
/ sarcasm
Steve L says
I seem to recall a study highlighted a few years ago indicating that surface temp increase could take a 30 year hiatus. I don’t think it was Swanson and Tsonis 2009. Was there another paper like that?
Chris Dudley says
The anonymous coward (#50) has mischaracterized my posts. I’ve been pointing out that while good science is censored by the shutdown of the government, junk science promoted by the government is still up on the web. I’ll admit that junk science is confusing and intended to confuse, but it is a bit of a stretch to say that a false claim that using coal increases radiation exposure, promoted by Oak Ridge National Laboratory in what amounts to a national embarrassment, is about nuclear power. It isn’t. It is about coal power.
I lived there at the height of the damage that strip mining for coal was doing to the land around the laboratory and I can certainly understand an animosity to coal power stemming from that as well as from their uncritical lust for nuclear power, but making things up out of whole cloth and claiming that it is science is completely inappropriate for a government laboratory.
Magma says
@ Watcher (comment 53): the figure is correct, although grossly misrepresented or misunderstood by Curry on her blog (http://judithcurry.com/2013/10/02/spinning-the-climate-model-observation-comparison-part-ii/). It is Figure 1 from ‘Tuning the climate of a global model’, Mauritsen et al. (2012), Journal of Advances in Modeling Earth Systems (open access at http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/abstract).
The CMIP3 and CMIP5 1850-2000 model runs in question were started using initial 1850 global mean temperatures ranging from 12 to 15 °C, a +/- 1.5 K range around the best reconstructed estimate of 13.5 °C. The intent was clearly to examine the robustness of the modeling with respect to initial conditions, not to accurately match the actual global temperatures themselves. In this sense the broadly similar changes shown by the different models (0.7 K of 20th century warming) indicated a low sensitivity to starting temperatures.
Steve Fish says
Re- Comment by Steve L — 3 Oct 2013 @ 10:46 PM
See here- https://www.realclimate.org/index.php/archives/2010/11/so-how-did-that-global-cooling-bet-work-out/
Steve
Ray Ladbury says
See, Watcher, that’s what you get when you go to Aunt Judy’s blog–confused. It’s her product. Your question betrays a deep misunderstanding of what climate models are and how they work. Just because you have the same starting point doesn’t mean that the weather is going to be the same for every model run. Our particular climate is only one possible realization out of an infinite variety of possible realizations.
CM says
Gavin,
A question, if I may, about your recent Santer et al. PNAS paper (doi:10.1073/pnas.1305332110). I’m looking at the charts of zonal-mean atmospheric temperature trends, specifically for CMIP5 models with all the forcings (fig.2A). The warming trends appear to be strongest near the surface and decrease with height. Why is there no tropical tropospheric amplification to be seen? What happened to the “hot spot”?
I’d naively expect to see a less pronounced version of the baleful Eye of Sauron that glared from the graphs in your “Tropical Troposphere Trends” post, and I can’t make out what makes these graphs different.
[Response: Different vertical extent. The Santer et al figures are from 700 hPa on up for matches to the MSU records, not the surface. – gavin]
Brennan says
Re 53 & Judith Curry.
Curry’s blog was brought to my attention recently in an exchange on the Guardian website. It took me a while to get my head round something she had posted. The reason it took a while is that you don’t expect such a colossal misrepresentation of the facts. The figures she had on a graph of Arctic Sea Ice Extent bore absolutely no correlation to the figures she was talking about or the figures I have seen elsewhere. It was as if she had just made up a graph to suit her argument and mislabelled the axes.
Looking now, she made a recent post about how the IPCC wasn’t mentioning the ‘pause’. Then she made another post about how the pause was being ‘reasoned away’ but still wasn’t mentioned. Then she quotes from the report “Box 9.2: Climate Models and the Hiatus in Global-Mean Surface Warming of the Past 15 Years”. Did she just miss the word Hiatus? Maybe she doesn’t know what it means? But then she quotes a big chunk of text which repeatedly mentions the slowdown in atmospheric warming and again uses the word ‘hiatus’.
http://judithcurry.com/2013/09/30/ipccs-pause-logic/#more-13176
I’ve noticed a more desperate tone about the ‘sceptics’ of late. Curry seems a good example of one just making it up as she goes along. I wouldn’t trust anything from her at all.
Hank Roberts says
Citation: “Radiological impact of airborne effluents of coal and nuclear plants”, JP McBride, RE Moore, JP Witherspoon, RE Blanco, Science, December 1978, Vol. 202 no. 4372, pp. 1045-1050, DOI: 10.1126/science.202.4372.1045. Cited by 133 subsequent papers.
Chris Colose says
Watcher,
The model-tuning process is done to give a stable climatology (e.g., cloud properties to get an appropriate albedo in the climatology). This is not the case for future climate change projections or for out-of-sample (paleo) validation.
It is not self-evident that the different absolute temperatures amongst the CMIP5 ensemble members would be independent of the things we are interested in (e.g., their equilibrium climate sensitivity). But this is a hypothesis that can be tested, and was done in Figure 9.42 of the new IPCC report.
In that graph, they plotted the equilibrium sensitivity against the global mean surface temperature in the CMIP5 models. There is no correlation whatsoever between these, and thus no evidence for (and in fact strong evidence against) the hypothesis that a model with an absolute temperature e.g, 1-2 C “below observations” (which themselves are much more uncertain in an absolute than in an anomaly sense) vs. one “above observations” will give a biased estimate in one way or the other of the sensitivity. It would be interesting to explore this dependence for other variables like e.g., the sea ice edge. This is a technical issue, however, and Judith Curry’s blog is not an appropriate resource for trying to gain insight into the implications of these things.
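A sketch of the test Chris Colose describes, using synthetic stand-in numbers (the real values come from the CMIP5 archive and are plotted in AR5 Figure 9.42). Here the two quantities are drawn independently of each other, so the near-zero correlation is by construction; that is exactly the null result the IPCC figure shows for the real models.

```python
import numpy as np
from scipy import stats

# Stand-ins for the ensemble: one absolute global-mean surface temperature
# (K) and one equilibrium climate sensitivity (K per CO2 doubling) per
# model. These are made-up values, drawn independently of each other.
rng = np.random.default_rng(1)
gmst_abs = 287.0 + rng.uniform(-1.5, 1.5, size=30)
ecs = 3.2 + 0.7 * rng.standard_normal(30)

r, p = stats.pearsonr(gmst_abs, ecs)
print(f"r = {r:+.2f}, p = {p:.2f}")  # near-zero r: no baseline/sensitivity link
```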
Steve L says
Thank you Steve Fish @ #60!
Hank Roberts says
http://www.theatlanticwire.com/politics/2013/10/secret-plea-help-some-poor-national-weather-service-forecaster/70218/
CM says
Gavin @62, thanks for bothering to answer, and sorry about the noise. I really should know how to read a y axis. (Slaps forehead, repeatedly).
Peter Cook says
Re 53, 63 and 66 (Judith Curry)
The Australian newspaper (part of the Murdoch stable) has adopted Judith Curry as their favourite ‘climate scientist’ as part of their ongoing campaign to muddy the waters about climate change. For more details see http://www.peakdecisions.org
I appreciate you all have much better things to do, but any efforts to assess the validity of Curry’s postings (and explain, step-by-step, your reasoning) are a great help to those of us in the lay public wanting accurate and rigorous analysis.
Chris Dudley says
Hank,
Try using your noodle. It almost rhymes with google but allows perception rather than recitation. Why is there uranium in coal? Because there is uranium in dirt. Dirt is where forests grow and that is what turns into coal. Why does coal have ash when it is burned? Because dirt is hard to burn. What happens to the uranium? It is in the ash just like it was in the dirt. What happens to radiation? Nothing. There is just as much screening from ash as there is from dirt. It is like pushing dirt around with a bulldozer. Nothing changes on the Geiger counter. The background level does not go up.
[Response: That would only be true if there were no changes in concentration – but burning coal and producing ash will concentrate the ‘dirt’ as you put it. – gavin]
WebHubTelescope says
Ed B: “I’ve been wondering what temperatures would look like today in the absence of CO2, and assuming the models are correct.”
FYI, I spend way too much time on Curry’s blog, trying to battle the nonsense comments. I put together this post, which shows how the current “hiatus” or “pause” is simply the result of transient SOI effects:
http://contextearth.com/2013/10/04/climate-variability-and-inferring-global-warming/
What is very interesting is that a simple log regression fit between land warming and CO2 concentration can extrapolate backward to more than a 25C cooling as CO2 approaches 1PPM. Modtran shows that CO2 loses its GHG warming effectiveness as its concentration dips below 1 PPM.
This assumes the best fit of ECS of 3C for doubling of atmospheric concentration of CO2. I hope this answers your question Ed Barbar, of what happens if the CO2 is removed. Now you can figure out for yourself what happens when CO2 is doubled or tripled from the pre-industrial levels.
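For anyone who wants to check WebHubTelescope’s numbers, the arithmetic follows from the standard logarithmic CO2 relation with the assumed 3 °C-per-doubling sensitivity. The function below is just that relation, not his actual regression code:

```python
import math

ecs = 3.0    # K per doubling of CO2 (the assumed best-fit value above)
c0 = 280.0   # pre-industrial CO2 concentration, ppm

def delta_t(c):
    """Equilibrium temperature change under the logarithmic CO2 relation."""
    return ecs * math.log2(c / c0)

print(delta_t(1.0))      # about -24 K: the ~25 C cooling as CO2 nears 1 ppm
print(delta_t(2 * c0))   # +3 K for a doubling of pre-industrial CO2
print(delta_t(3 * c0))   # about +4.8 K for a tripling
```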
Watcher says
Re 58: Magma, thanks very much for that reference. Now that I go back I see Dr. Curry had posted it herself.
I don’t agree that Dr. Curry grossly misrepresented that figure at all. Her point was simply that it seems odd that models can differ so much when important underlying processes — specifically phase transitions of water — are highly dependent on absolute temperature. This is a point echoed — or should I say presaged since they said it first — in the article you link to:
To parameterized processes that are non-linearly dependent on the absolute temperature it is a prerequisite that they be exposed to realistic temperatures for them to act as intended. Prime examples are processes involving phase transitions of water.
The point is reinforced in their discussion of model tuning,
To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act.
Given all of this, I do find it troubling that models that differ by 3 degrees can continue to run parallel to each other. I don’t think the text indicates they are testing ‘robustness’ to initial conditions; but even if they were, a simulation that was ‘robust’ to initial temperature should relax to some ‘proper’ temperature, and not depend linearly on it. Given that they fail to converge to a common value, how is it that a second order property (i.e. trends) can be regarded as more ‘robust’ than a first order property (i.e. temperature)?
Dreadful word. And now I’ve used it three times!
Here’s something else I find troubling: their statement in SS2.3 that “Climate models may not exactly conserve energy.” Indeed, they go on to discuss energy “leakages” in the models that arise from, e.g., artifacts due to gridding, and that are of exactly the same order of magnitude as one of the key model outputs, the TOA imbalance (around 0.5 W/m²).
Though I sense a swat coming on, let me press on: that paper itemises 25 separate parameters (their Table 1) used to tune their model. They refer to several more in the text, with the sensible statement that where possible they use published values for things that can be independently measured. They also state that “By doing so [tuning] we clearly run the risk of building the models’ performance upon compensating errors”
and go on to give examples. I did not get the sense that they have 25 (or more) separate observables to constrain the parameter choices, which would be the minimum necessary to get a unique solution. I’m not knocking them: they’ve laid out several issues that obviously trouble them, and that’s what research is all about.
So where am I going with this? Maybe just to say that the way I read Dr. Curry’s position is that climate models are a work in progress. Having read through Mauritsen et al I’m inclined to agree with her.
[Response: You conclude this as if it were a profound statement and the sum total of what is being alleged. That is wrong on both counts. No-one has claimed that models are perfect or that further development is not needed so that is not the point in question. As I stated above, tuning for the absolute planetary temperature is not trivial and generally not done. But Figure 9.42 in the AR5 shows that sensitivity is not dependent on this (mainly because the global mean offset is small compared to the spatial and temporal ranges of the temperatures that local feedbacks are sensitive to). As for energy conservation, this is something that all models should strive for, but some of the terms are subtle and it takes work to track all the ‘leaks’ down. (FWIW the GISS models conserve energy to machine precision). However, while these small leaks do not have as much of an impact on the simulation as you might think, fixing them does allow you to ask a wider and clearer range of questions. – gavin]
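Gavin’s point about tracking down energy “leaks” can be illustrated with a toy bookkeeping check: in a conserving model, the time-integrated net top-of-atmosphere flux must equal the change in total heat content, so any residual between the two is a leak. All numbers below are synthetic, chosen to mimic a spurious 0.1 W/m² loss:

```python
import numpy as np

seconds_per_year = 3.156e7
years = 100
rng = np.random.default_rng(2)

# Hypothetical diagnostics: annual global-mean net TOA flux (W/m^2), and
# the heat actually stored, reduced by an imposed 0.1 W/m^2 "leak" of the
# kind that gridding artifacts can produce.
toa_net = 0.5 + 0.3 * rng.standard_normal(years)
leak = 0.1
heat_content = np.cumsum((toa_net - leak) * seconds_per_year)  # J/m^2

# Residual between integrated TOA flux and stored heat = diagnosed leak.
residual = (toa_net.sum() * seconds_per_year - heat_content[-1]) \
           / (years * seconds_per_year)
print(f"diagnosed leak: {residual:.2f} W/m^2")  # recovers the imposed 0.1
```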
Magma says
And then there’s Curry’s caricature of a post today, “Skeptics vs. academics”.
Perhaps somebody with more time and patience than I have could answer whether Curry’s professional standards have eroded with time, or whether they were always this poor.
Follow the links in Scholar; read at least some of the cited papers. On average and approximately, the level doesn’t go down as you claim to believe — diluting the atmosphere with fossil carbon isn’t significant.
CO2 is well mixed, unlike heavy metals.
Know how and why the metals got -into- the coal?
You can look this stuff up.
Your logic led you to your opinion not supported by science.
Do you like the result? Why?
My apologies if I made it sound as if this were a profound observation. All I was trying to say was that it represented a revelation to me. It can’t have escaped you from other threads that I am nothing more than a dilettante in this climate business. Perhaps I should preface each post with an acknowledgement that anything I think I know about the subject should be considered fragmentary at best.
Clearly, since you still have a job (if not a paycheque at the moment!) there is a continuing need to develop the models. At the same time the tenor of many of the pronouncements I have read (though not here, admittedly) is that the models are pretty much dead on. So dead on that it’s a no-brainer that the world needs to spend bazillions of $$ to fundamentally reorganise itself or we’re all going to die. Hence my bemusement when I see just how much development there is left to do.
You commented that the absolute temp didn’t matter much since the annual/daily/latitudinal variations are so much bigger. Does that imply that for a higher-running model these cyclic variations are larger (to e.g. match ice coverage at the poles) and vice versa?
Finally, I assume you had a hand in the GISS models’ conservation of energy. Good on ya, then.
Watcher says
Re: Gavin’s response to 73, and 74.
I thought it best to separate my reply to Gavin’s answers about modeling and address four of his words separately, mostly because Magma’s comment 74 irked me.
Gavin first:
As to “what is being alleged”. I take it you are referring to Dr. Curry’s discussion of the differences between the draft and released SPM. I read her post as a recap of what others have said — something generally true of her blog, I might add. That there are significant differences I don’t think can be disputed. The two versions of Figure 1.4 could hardly be more different in the view they present of the consistency between models and observations.
Then there are the statements she quotes:
“Models do not generally reproduce the observed reduction in surface warming trend over the last 10–15 years” from the draft, and
“the observations through 2012 generally fall within the projections made in all past assessments” from the released version.
On the face of it these are pretty different. As I understand it, the IPCC position on this discrepancy is pretty much, “oops, my bad”. Whatever your take on the subject, surely it’s valid for interested parties to want to discuss it? Again, as far as I can see Dr. Curry merely summarises other peoples’ discussions about which is the better approach and whether it was sneaky to make the change during a POLITICAL meeting set up to discuss a final SCIENTIFIC draft. Another valid point, I would say. To the extent that she comes to any conclusion of her own, all I get is this one: “What is wrong is the failure of the IPCC to note the failure of nearly all climate model simulations to reproduce a pause of 15+ years.” It’s hard to consider this off-the-wall crazy when it clearly had the support of those writing the draft version.
Magma:
Having looked at the post you refer to I see no way you can impugn anybody’s “professional standards”. Your very words are evidence that there is a vituperative aspect to almost any climate discussion that doesn’t toe the IPCC line. Whatever your take on whether “the science is settled” or not, surely the often vicious interactions are a valid point of discussion?
[Response: I find that discussions predicated on the supposed fact that the IPCC is incompetent and corrupt are generally not productive explorations of the true state of climate science. – gavin]
Ray Ladbury says
Watcher,
Your comment @73 reveals a profound ignorance of how scientific modeling–especially physics-based modeling–works. First, to quote Richard Hamming, “The purpose of computing is not numbers, but understanding.”
Let me repeat that–the numbers are less important than the insight we gain from the models. Imperfections in the models are not a barrier to their yielding that insight.
Second, no computer-based modeling technique reproduces conservation laws such as energy, momentum, angular momentum, etc. with 100% fidelity. They are models, not the physical systems themselves.
Third, of course the models are works in progress. This isn’t a fricking science-fair project where you know the outcome in advance.
Fourth, it is not terribly surprising that the results would be robust within some temperature range.
Finally, I don’t think Aunt Judy has any choice but to misrepresent research. I don’t think she herself understands the research sufficiently to present it correctly. One of the reasons she is so good at confusing her readers is that she is mightily confused, herself.
You raise a fair point. But recall, the carbon in the coal initially diluted the uranium concentration. It came from the air, not the soil. So, re-concentration is only back to the level of a clay soil, not heightened concentration. The ash is chemically active and nasty in that way, but it is not particularly radioactive. That is, it is radioactive the way dirt is radioactive, being the same composition, and so is neutral in terms of exposure.
I’m not trying to sound like a dick-head here, but my career for the past 25 years has largely consisted of physics-based modeling. If it is a ‘profound misunderstanding’ to expect my models to give accurate predictions of how the real world is going to behave then mea culpa. If they didn’t I expect my customers would be … well, not my customers any more.
Yes, the systems are simpler than a global climate and, yes I can augment them with things I can measure in a lab; but the central requirement of a scientific model is to produce measurable predictions. It has no value otherwise. Yes, during development one can tinker with poorly understood parameters to make a better fit, but this doesn’t always lead to understanding. Indeed, Mauritsen et al lament that they encounter situations in their climate model where one wrong parameter is offsetting one or more other wrong parameters but they don’t have a ready way to disentangle the errors. In some cases they can identify the parameters in the trade-off, in which case understanding is gained even though they still can’t pin down the balance; but in other cases they admit to not even knowing what parameters will affect the metric they want to change,
“In many cases, however, we do not know how to tune a certain aspect of a model that we care about representing with fidelity”
so it’s hard to say that getting a better fit can be called an improvement in understanding.
Again, from the same paper the single most important performance metric is the system temperature,
To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act.
And besides, that’s what all the hand-wringing is about. Striving to model that has to be of importance, and observing that the models as a group don’t do a very good job yet can hardly be dismissed as irrelevant. What’s a ‘very good job’? If the spread in models is 3 K, and the ‘dangerous threshold’ is 2 K…. Like they say, the models are a work in progress.
Finally, I can’t help but notice that “Judy’s stupid” seems to be the crux of the rebuttals presented in several posts, not least of them yours. I’ll take that as supporting the second part of my post 78.
> physics-based modeling…. to give accurate
> predictions …. my customers ….
Very different kind of model, right?
Watcher — I’m guessing you are not creating models that include random events?
Running a climate model repeatedly gives a spread of results because some elements are inescapably random (like volcanic eruptions, which differ for each run of the model).
Maybe I’m guessing wrong, but I’d guess Watcher is modeling clockwork kinds of physics.
Watcher — care to say more specifically what systems you model for your customers? Do you get a spread of results when you run the model repeatedly, as a normal result?
Here your noodle will help you. What is fly ash? Quick lime and pozzolan. An exothermic reaction involving water and carbon dioxide rather rapidly dilutes the radioactivity back down to the soil range in your link.
Your noodle can help you again. If these exaggerated claims about radioactivity of coal were true, then we could add coal ash to soil, grow some trees, make some charcoal, and produce even more uranium. Eventually, we could turn all of the stuff into uranium using this form of alchemy.
But transmutation requires nuclear reactions, not chemical reactions. So, your leg is being pulled.
In fact, burning coal cuts radiation exposure owing to the reduction in carbon-14 in our food. That is not a good reason to burn coal. But, it is a good reason, along with evidence of data falsification by government scientists at Yucca Mountain, to be distrustful of government nuclear power enthusiasts. Their devotion to truth seems to be too weak to be compatible with science.
Far be it from me to tell you what you should believe.
I believe I get better information from the science.
You believe you can trust whatever source you rely on.
Are you reasoning to your own conclusion, or drawing from some external source you trust for what you believe?
Retrograde Orbit says
I find this whole discussion on the inability of the models to reproduce recent global temperature (‘the hiatus’) disturbing. Very disturbing.
Let me explain: Skeptics have always made the baseless accusation that global warming is a fraud and climate scientists are ‘covering up’ the truth. Now I am afraid this might become a self-fulfilling prophecy. If it hasn’t already.
Consider: There are errors in the scientific results we publish. There always will be; models are no exception, as Gavin has nicely explained. However, when skeptics look at these errors they will say: “Aha! Told you so! There is no global warming!” Which is nonsense, but embarrassing. And so there is a growing temptation for researchers to downplay the errors in their research – even if there is no rational reason to do so.
That troubles me. And I could easily see that Watcher is insinuating exactly that.
Dave123 says
My own modeling experience in chemical reactor design gives me a different perspective from what I see Watcher saying. Watcher appears to be agreeing with Professor Curry that an error in absolute temperature of a degree or two is significant. First, based on my experience I’m not inclined to agree. Second, I’m seeing a ‘god of the gaps’ argument that appears to have no target for how close the absolute temperatures need to be before some folks pick up their knitting again.
Let me elaborate on the first: An error of 1 K out of numbers that could be from 220-340 K (stratospheric to desert surface) is an error of well less than 1%. A typical rate equation, d[C]/dt = A·e^(−k/T)·[c1]^i·[c2]^j…, probably has larger errors in the estimates of A and k and in the concentrations influencing the rate than the error in T. Concentration and partial pressure estimates would seem likely even less influenced by an error of 1 part in 293 (room temperature in K).
I guess if I had a climate model in hand to tinker with, I could test for how results would differ if I could systematically bias calculated temperature through the iterations by steps of 0.01, 0.1, 1, 2 and 5 K with appropriate precautions to censor wandering into physically absurd outputs.
Which is back to the second point: How close on an absolute scale is good enough? How would you know? When the trend is what you’re after, how does agreement at some point in time, to an arbitrary closeness in absolute temperature, improve your confidence in the trend?
Let me add that chemical reactor modeling, in my experience, is pretty clockwork, yet the errors I describe didn’t prevent the models from being used, as we did, to great effect. I’ll save comments on how I see scenarios and uncontrolled variables (atmospheric humidity) for some other time.
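Dave123’s point about where the error budget sits is easy to put numbers on: for a rate of the form A·e^(−k/T), a 1 K warm bias multiplies the rate by roughly e^(k/T²). A quick check with illustrative (made-up) parameter values; whether a few percent in the rate matters then comes down to how well A and k themselves are known, which is the comparison he is making:

```python
import numpy as np

A = 1.0e6   # pre-exponential factor, arbitrary units (made up)
k = 6000.0  # activation temperature Ea/R in kelvin (made up)

def rate(T):
    return A * np.exp(-k / T)

# Relative rate change from a 1 K warm bias across Dave123's 220-340 K range.
for T in (220.0, 293.0, 340.0):
    rel = rate(T + 1.0) / rate(T) - 1.0
    print(f"T = {T:5.1f} K: a 1 K bias changes the rate by {100 * rel:.1f}%")
```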
Thomas says
Chris,
I don’t think you can simply assume that coal will have the same radioactive concentration as the original peat plus dirt. It has lain underground for millions of years, exposed to groundwater flows. Material would be exchanged between the coal and the groundwater. It could either absorb radioactive substances, or have radioactive substances dissolved and carried away. Only detailed investigation – also guided by experimental results – can resolve such an issue.
“There is just as much screening from ash as there is from dirt.”
Major fail–as ‘thought experiments’ arguing in the face of actual data tend to be.
Patrick 027 says
re Watcher – the climate models of the sort being discussed cover the globe in latitude, longitude, height, and time, as opposed to 1-dimensional (height) models, which don’t resolve synoptic-scale storms, Hadley cells, or ocean circulation, but which are useful for finding equilibrium given some simplifications*, and which are interesting to compare to the fuller behavior of the climate in other models or the real world. These models reproduce much of the behavior in the real climate system, including internal variability. That’s not to say they’re perfect (last I heard – which was a while ago – there was trouble with the MJO, but I’m not up on all the details of that; I still don’t really understand what the MJO is, to be honest, though I think it’s based in the tropics). Some of that internal variability produces temporary disequilibria in the climate system on the global annual average scale. So there are decades that warm up faster or slower, or warm up and cool down if there is no underlying trend. The observations of global surface temperature fit that behavior.
* a 1-dimensional model may find an equilibrium temperature profile, given external forcing, by finding the temperature distribution for which upward net LW radiative flux + convective heat flux = net downward SW flux at each level . Globally and temporally averaged, that actually is the case for an equilibrium climate, so this isn’t totally removed from reality. Convection may be parameterized by setting a maximum-allowable lapse rate; if temperature drops with height too quickly, convection must be increased. There will be some feedbacks as a change in the temperature profile caused by convection will alter the radiative flux. Clouds in such a model might be set as a boundary condition (external forcing), which of course is unrealistic, but if realistic clouds are used, you still get something realistic. Etc.
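Here is a minimal, runnable version of the 1-dimensional scheme Patrick sketches: blackbody longwave layers over a solar-heated surface, time-stepped to equilibrium, with a hard cap on the layer-to-layer temperature drop standing in for convective adjustment. Every parameter value is illustrative rather than tuned to Earth:

```python
import numpy as np

sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S = 240.0         # absorbed solar flux, W/m^2 (all deposited at the surface)
n = 6             # atmospheric layers; index 0 sits just above the surface
C = 1.0e7         # heat capacity per level, J/(m^2 K) (same for all levels)
dt = 1.0e5        # time step, s
dT_max = 10.0     # max allowed temperature drop per layer (lapse-rate cap)

Ts = 280.0
T = np.linspace(270.0, 220.0, n)

for _ in range(20000):
    E = sigma * T**4
    up = np.empty(n)
    dn = np.empty(n)
    up[0] = sigma * Ts**4   # LW reaching each layer from below...
    up[1:] = E[:-1]
    dn[:-1] = E[1:]         # ...and from above (nothing above the top layer)
    dn[-1] = 0.0
    # Radiative tendencies: each blackbody layer absorbs from its
    # neighbours and emits 2*E (up and down); the surface absorbs S plus
    # back-radiation from the lowest layer.
    T += dt / C * (up + dn - 2.0 * E)
    Ts += dt / C * (S + E[0] - sigma * Ts**4)
    # Convective adjustment: wherever the drop between adjacent levels
    # exceeds dT_max, mix heat upward; with equal heat capacities this
    # conserves energy exactly.
    col = np.concatenate(([Ts], T))
    for j in range(len(col) - 1):
        excess = (col[j] - col[j + 1]) - dT_max
        if excess > 0.0:
            col[j] -= excess / 2.0
            col[j + 1] += excess / 2.0
    Ts, T = col[0], col[1:]

print(f"surface: {Ts:.1f} K")
print("layers :", np.round(T, 1))
```

Without the adjustment this column relaxes to pure radiative equilibrium (a much hotter surface and a steeper profile); with it, convection carries heat upward, exactly Patrick’s description of a maximum-allowable lapse rate.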
> we could add coal ash to soil …
> and produce even more uranium
Oh.
I see where you’re coming from.
Far out.
Ray Ladbury says
Watcher,
Ah, I see. You use physics-based models. You just don’t understand them.
Yes, of course we must compare model performance to reality. However, we have to make meaningful comparisons. Reality is only one possible realization of the climate system. Are you actually saying that not a single realization of the model runs produced a “16 year hiatus”? Are you really calling it a hiatus when it indicates that a La Nina year now is as warm as a big-assed El Nino 15 years ago?
What is more, we often learn more from models that fail in interesting ways than we do from models that reproduce the trends exactly.
As to Aunt Judy, I have never learned anything from her. She provides no insights, no clarity, nothing useful. She is often flat wrong, and never even in an interesting way. Judy is worse than a waste of time.
Watcher says
Re: 91 etc.
Dave123 (and others) I appreciate your thoughts.
I’m not really trying to insinuate anything, but the glaring changes between the draft and final SPM concerning model/observation agreement surely have to give one pause. I’m merely defending Dr. Curry’s right to say, “WTF?”
As for Dave123’s comment about models being good to 1%: in many if not most cases that would be ‘good enough’. However, given the absolutely key role of water in the climate system and its well-known phase transition at 273.15K I would think that some explicit test of what is ‘good enough’ would be in order. I have no idea how small ‘the gaps’ need to be before I believe in ‘the god’, I’m just saying that it makes me uneasy.
Furthermore, to continue the point, if you had a model that was known to be good only to 1% and it told you some process was going to change by 1%, what would be your confidence in the prediction? Myself, with what is being called my clockwork models, I would say pretty close to zero.
And since you ask, yes, you could call them clockwork models: laser cavity dynamics, non-linear fibre processes, that sort of thing. Non-random except in the trivial sense that Maxwell-Boltzmann statistics can be assumed, or that randomness is used to generate bit patterns. However, my understanding of climate systems (close to pathetic, admittedly) is not that they are random, but rather chaotic. Thus, the presence of multiple, interconnected, non-linear interactions leads to unpredictability on large scales. This is not the same as randomness, which I would call unpredictability on small scales that becomes predictable when taken in the aggregate.
The implication of a chaotic system is that there are multiple possible large scale states which can arise from infinitesimally different initial conditions. However, if you look at that figure from Mauritsen et al again, you can see that while each of the hindcasts wiggles up and down, if it starts out high it pretty much stays high. So the randomness or chaos or whatever you want to call it doesn’t look as though it accounts for the differences between the runs. Magma way back in 58 said he thought they were testing different starting temps. I got the impression from Mauritsen’s paper that it was more about different tuning strategies and that the plot was a randomly chosen set of archived runs, but I didn’t see an explicit statement. Whatever the case, each ‘solution’ seems pretty stable.
Of course I’m only guessing about this. It would be interesting to get an answer from a climate modeler (you know who you are!) about whether the same set of forcings and tuning parameters can generate runs that differ by 3K. In other words, is the chaotic nature responsible for only the fluctuations within a given ‘solution’, or is it responsible for the ‘choice’ of solution?
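Watcher’s closing question – does chaos pick the ‘solution’, or only the wiggles around it? – can be caricatured with the Lorenz-63 system. In the sketch below (standard textbook parameters), a tiny initial-condition change scrambles the trajectory (the ‘weather’) but leaves the long-run statistics (the ‘climate’) essentially unchanged, whereas changing a parameter (the analogue of a different model formulation or forcing) shifts the statistics themselves:

```python
import numpy as np

def lorenz_z(x0, rho=28.0, sigma=10.0, beta=8.0 / 3.0, dt=0.005, n=200_000):
    """Euler-integrate Lorenz-63 and return the z time series."""
    x, y, z = x0
    zs = np.empty(n)
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        zs[i] = z
    return zs

a = lorenz_z((1.0, 1.0, 1.0))
b = lorenz_z((1.0, 1.0, 1.0 + 1e-9))     # infinitesimally different start
c = lorenz_z((1.0, 1.0, 1.0), rho=35.0)  # different parameter ("forcing")

print(np.abs(a - b).max())   # O(attractor size): trajectories decorrelate
print(a.mean(), b.mean())    # long-run means nearly identical
print(c.mean())              # parameter change shifts the mean itself
```

On this cartoon, chaotic spread shows up as different wiggles about the same mean; a run that starts high and stays high for 150 years looks less like chaos picking a solution and more like the rho change, i.e. a difference in formulation or tuning.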
Radge Havers says
Retro’ @86
It troubles me too.
You know, it takes a certain willfully obtuse mean spirit to try to wipe out of discourse a pretty basic intuition that almost everybody already understands: You don’t let the perfect be the enemy of the good. That goes for pretty much everything in life, except apparently if you’re an ideological bampot. Just a reminder that you shouldn’t be too passive in responding to such malicious propaganda.
Steve Fish says
I have been staring at the decadal average surface temperature anomaly graph, under the new IPCC Climate Report topic here on RC, and wondering where the 15 year slowdown in warming is. The 2000 to 2010 step is completely contained within the last 15 years, yet is as big a step up as the previous two. I would like for all those trying to distract, with talk about how models don’t show something that they admittedly can’t (short term variation), to climb up those steep steps, stand on the top one, and point out the slowdown for me.
Steve
Mike Donald says
Booker’s at it again and I’m sure you good folk will have something to say.
[Response: Christopher Booker is wrong about the history, wrong about the present, and will be wrong about the future. -gavin]
Retrograde Orbit says
Bampot? Hmm, whatever. Don’t forget, most of us here are human.
The problem is that most skeptics are in essence wishful thinkers (e.g. model discrepancy => wishful thought => maybe the models are incorrect in their long term predictions too and it won’t be as bad as they predict).
You can encourage this kind of wishful thinking by simply asking pointed questions. And it’s effective. Wishful thinking consistently trumps rational arguments. I don’t think we should give Dr. Curry (or anybody) a free pass on that.
Retrograde Orbit says
Which leads me to a question for Gavin:
Isn’t this whole discourse on models way over the top? I had that feeling already when I read your model-error post (after somebody claimed that all models were “falsified”).
What is wrong with simply saying: The actual temperature is within the error margin of the models and therefore we cannot draw any conclusion from this discrepancy? And that – in particular – the idea that long term model predictions may be incorrect because of the short term discrepancy is merely a wishful thought?
Patrick 027 says
re 94 Watcher
Would you have confidence in a model’s trend if the absolute value were 1 % off but the trend, integrated over time, produced a 10 % difference? If so, what would you expect if you reduced the change in external forcing to 10 % of the original value? Even if you had no knowledge of the general behavior (is it a sinusoidal dependence? Parabolic?), a fair best first guess, albeit with minimal or no confidence, would be a 1 % change. But say you did know something about its behavior, enough to have some expectation of roughly/approximately linear proportionality, at least within a range of conditions that this case falls into. Then a 1 % result has more confidence.
With regard to the Earth’s climate system, there is some highly nonlinear behavior well outside the range of conditions being dealt with – runaway H2O vapor feedback (which, from what I’ve read, is hard to get to with non-feedback GHG forcing – it requires solar brightening) and snowball Earth (hysteresis, runaway albedo feedback). There are certainly other complexities, especially if we consider Earth-system sensitivity (which includes feedbacks on CO2 amount, ice sheets, vegetation albedo, and aerosols) rather than just Charney sensitivity (which includes the Planck response – the increase in outgoing LW radiation due to an increase in temperature – plus H2O vapor, lapse rate, cloud, and, if I’m not mistaken, snow and sea ice feedbacks; I always forget whether sea ice is included, but it would make sense). Charney feedbacks are not perfectly linear over very large ranges of conditions but I think they tend to be smooth enough to be approximated as such over smaller ranges – of course snow and sea ice feedbacks approach zero at sufficiently high temperatures. Charney feedbacks are fast-acting relative to the equilibration time given by heat capacity and climate sensitivity. The response of the climate to orbital forcing is a great example of the complexity of the full Earth-system response; the global annual average forcing in that case is quite small, and what is really important is the redistribution of solar radiation over latitude and season, where feedbacks to changes at some locations have a global-annual average impact. The importance of such spatial and temporal distributions in forcing could be measured by their effect on efficacy – the global time-average sensitivity to a particular type of forcing relative to a standard type of forcing.
But as far as the effect of absolute temperature errors are concerned, consider that the troposphere varies from above 293 K at the surface in many places (~288 K global average) to ~220 or even less at the tropopause. How far up or down, and for that matter, north or south, does the 273.15 K isotherm shift among models for the same forcing? The snow and ice are still there, there is still a freezing level in many clouds, etc. Consider the changes expected with global warming – shifts in storm tracks will leave some places dry and others wetter, and a shift in precipitation toward heavier downpours will occur, and we’ll lose snow and ice, and yet, at least within some limits, we’re still going to be within the range where there’s still significant snow and ice coverage, there will still be significant extratropical storm track activity, etc.
When and if we get to the point that feedback values change rapidly over temperature, we’ll probably be farther along than we’d ever want to be, wouldn’t we? (and it will probably take time for Greenland and Antarctica together to lose much of their ice – not as long as many of us would prefer, but…)
Except for the possibility of stepped sensitivity – for example, if (for illustrative purposes I constructed this example) there were a series of ice fields which remained intact up to some threshold and completely disappeared above that – then the equilibrium sensitivity would have a series of jumps – but if these jumps were not too large and were not tightly clustered in a bundle (relative to the temperature range considered), then it can be approximated by something more smooth – especially for the predictions if the thresholds have uncertainty.
Concerning chaos: Weather is chaotic, with small changes in initial conditions resulting in large changes in conditions at any point in time, after some period of time over which predictability is lost. But general characteristics of weather may remain the same – in the sense that nothing seems out-of-whack if there’s 4 blizzards instead of 6 in one year and 7 the next, or a tornado hits one place rather than another. You weren’t expecting the alternative scenario in the first place – you couldn’t predict it that far ahead with any confidence. The general characteristics are the climate. It’s more than just averages; I like the analogy of texture – consider two lawns with the same variety of grass, same soil, maintained the same way – on a climatic level they tend to look the same, although each individual blade of grass is different – there isn’t even necessarily a one-to-one correspondence between the two sets.
What is weather and climate can shift depending on scale – for example, each individual snow flake is like a weather event in a blizzard climate – an ice age that lasts maybe a few hours to a day or so. On that timescale the blizzard is predictable. The mantle has weather – and so far as I know we can’t model exactly where the continents will be or have been beyond some time horizon even if we could model mantle convection as well as the atmosphere (of course, we have the geologic record to tell us about continental drift in the past), but there is a climate of mantle convection and plate tectonics behavior – which will change over Earth’s history as there is cooling and associated effects (layered convection due to the thermodynamics of the Perovskite phase transition may (have/will?/is) shift(ing?) toward whole mantle convection).
Climate is predictable because the chaos of weather is bounded. There are conservation laws to consider, for example – the whole ocean won’t spontaneously heat up or cool off – there must be a heat source or sink. A thunderstorm, turbulent eddy, or extratropical cyclone may grow from some instability (instability from CAPE, Kelvin-Helmholtz, baroclinic wave instability (although latent heating is there too)) but can’t grow forever due to limits (spatial, material, energetic) or keep reforming without a source of energy to drive it. Some freak events may occur rarely due to some random (in effect, chaotic in origin) alignment; they can’t be expected to occur all the time unless climate changes sufficiently. If there are two distinct equilibrium states for the same external forcing, then climate is stuck in one or the other until the forcing puts it on a trajectory which connects the two. If the states are not truly full equilibria, then the climate may fluctuate between them, and thus a complete description of climate encompasses both states and the shifting behavior (ENSO, NAM, SAM, NAO, MJO, PDO, AMO, QBO – actually, not all of these have two or more distinct (approximate and/or partial) equilibria – the QBO (the one I believe I understand the best, except of course for whatever aspects I don’t yet know :) ) has no separate equilibrium states; each state in a continuum leads as smoothly to the next as any other, so far as I know; I have a vague understanding of ENSO, NAM, and SAM; can’t explain what the MJO even is).
nico says
FP @ 12 you ask about smoke from Australian and US bushfires. I’ve attended lectures by Mike Fromm, USN, who is interested in pyrocumulonimbus clouds created by large fires. Why is the US Navy concerned about fires? It seems they rely on satellites to observe the “enemy” and – well – smoke gets in your eyes. See for example: http://www.bushfirecrc.com/sites/default/files/managed/resource/thur_p110a_1520_mike_fromm.pdf
nico says
FB @ 12 you ask about the smoke from Australian and US wildfires. See the work on pyrocumulonimbus clouds by Mike Fromm, USN. Why is the navy interested in fires? Because they use satellites to observe the “enemy”. And smoke gets in your eyes …
Watcher says
I’ve appreciated Gavin stepping in from time to time with the questions I have posed. I’m hoping it will happen again.
Unlike some of you folks (the childish stuff going on with 15 and 25, for example) I feel no compunction about visiting a variety of websites which discuss AGW issues. One of them is Judith Curry’s. The other day she posted a graphic showing the output of a bunch of GCMs. Sorry, I don’t know how to post images so here’s the link:
http://curryja.files.wordpress.com/2013/10/figure.jpg
The accompanying text indicated that what are normally shown are anomalies, i.e. each GCM run is normalised by subtracting off a baseline value, however determined, to line up observations and calculations at some chosen reference point. Honestly, this took me by surprise at the time and I continue to find my thoughts returning to it. I had always assumed that GCM results were plotted “as is”, and that the process of tuning ensured they would match observations over some sort of calibration period.
In contrast, the graphic posted by Dr. Curry shows a large spread in temperature between the various models and is not at all like the graphics normally used to present the results. In fact, the spread in models is just as large as the spread in RCP85 and RCP26 scenario projects, which I take it are “business as usual” and “boy we did a great job” emission scenarios.
To the question, then:
Is this graphic right? Does it provide an accurate description of the sort of output one obtains from a representative set of ‘current’ GCMs?
Peter Cook says
First let me express my appreciation to all contributors for the immense effort you put in to providing accurate information and rigorous interpretations. I am a non-scientist.
I notice in a recent post by Richard Heinberg, he states the following at http://www.resilience.org/stories/2013-10-01/fingers-in-the-dike:
Aside from the perhaps inadvisable use of the term ‘cooling’ for the current situation, what do people think of the prospects that this current slower rate of warming (‘pause’) could continue for another 10 or 15 years?
Hank Roberts says
> Exactly the effect some wanted.
And for others, other advantages no doubt happen to accrue.
A few days or a week in which there’s nobody watching the financial transactions, or the toxic waste truck’s pickups and deliveries, or the stuff going on in the slaughterhouses, or the pension plans.
A few things that get default approval if nobody says hold on, within some span of days, maybe.
…. hey, what could go wrong?
And if some well meaning ‘ibbertarian who’s one of the self-identified good guys could stumble across a bit more profit just by happenstance when the inspectors and regulators are all off on unpaid leave, well, would that be so terrible?
/ sarcasm
Steve L says
I seem to recall a study highlighted a few years ago indicating that surface temp increase could take a 30 year hiatus. I don’t think it was Swanson and Tsonis 2009. Was there another paper like that?
Chris Dudley says
The anonymous coward (#50) has mischaracterized my posts. I’ve been pointing out that while good science is censored by the shutdown of the government, junk science promoted by the government is still up on the web. I’ll admit that junk science is confusing and intended to confuse, but it is a bit of a stretch to say that a false claim that using coal increases radiation exposure, promoted by Oak Ridge National Laboratory in what amounts to a national embarrassment, is about nuclear power. It isn’t. It is about coal power.
I lived there at the height of the damage that strip mining for coal was doing to the land around the laboratory and I can certainly understand an animosity to coal power stemming from that as well as from their uncritical lust for nuclear power, but making things up out of whole cloth and claiming that it is science is completely inappropriate for a government laboratory.
Magma says
@ Watcher (comment 53) the figure is correct, although grossly misrepresented or misunderstood by Curry on her blog (http://judithcurry.com/2013/10/02/spinning-the-climate-model-observation-comparison-part-ii/). It is Figure 1 from ‘Tuning the climate of a global model’, Mauritsen et al. (2012), Journal of Advances in Modeling Earth Systems (open access at http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/abstract).
The CMIP3 and CMIP5 1850-2000 model runs in question were started using initial 1850 global mean temperatures ranging from 12 to 15 °C, a +/- 1.5 K range around the best reconstructed estimate of 13.5 °C. The intent was clearly to examine the robustness of the modeling with respect to initial conditions, and in this sense the broadly similar results (0.7 K 20th century warming) showed a low sensitivity to starting temperatures.
Magma says
@ Watcher (comment 53) the figure is correct, although grossly misrepresented or misunderstood by Curry on her blog (http://judithcurry.com/2013/10/02/spinning-the-climate-model-observation-comparison-part-ii/). It is Figure 1 from ‘Tuning the climate of a global model’, Mauritsen et al. (2012), Journal of Advances in Modeling Earth Systems (open access at http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/abstract).
The CMIP3 and CMIP5 1850-2000 model runs in question were started using initial 1850 global mean temperatures ranging from 12 to 15 °C, a +/- 1.5 K range around the best reconstructed estimate of 13.5 °C. The intent was clearly to examine the robustness of the modeling with respect to initial conditions, not to accurately match the actual global temperatures themselves. In this sense the broadly similar changes shown by the different models (0.7 K 20th century warming) indicated a low sensitivity to starting temperatures.
Steve Fish says
Re- Comment by Steve L — 3 Oct 2013 @ 10:46 PM
See here- https://www.realclimate.org/index.php/archives/2010/11/so-how-did-that-global-cooling-bet-work-out/
Steve
Ray Ladbury says
See, Watcher, that’s what you get when you go to Aunt Judy’s blog–confused. It’s her product. Your question betrays a deep misunderstanding of what climate models are and how they work. Just because you have the same starting point doesn’t mean that the weather is going to be the same for every model run. Our particular climate is only one possible realization out of an infinite variety of possible realizations.
CM says
Gavin,
A question, if I may, about your recent Santer et al. PNAS paper (doi:10.1073/pnas.1305332110). I’m looking at the charts of zonal-mean atmospheric temperature trends, specifically for CMIP5 models with all the forcings (fig.2A). The warming trends appear to be strongest near the surface and decrease with height. Why is there no tropical tropospheric amplification to be seen? What happened to the “hot spot”?
I’d naively expect to see a less pronounced version of the baleful Eye of Sauron that glared from the graphs in your “Tropical Troposphere Trends” post, and I can’t make out what makes these graphs different.
[Response: Different vertical extent. The Santer et al figures are from 700 hPa on up for matches to the MSU records, not the surface. – gavin]
Brennan says
Re 53 & Judith Curry.
Curry’s blog was brought to my attention recently in an exchange on the Guardian website. It took me a while to get my head round something she had posted. The reason it took a while is that you don’t expect such a colossal misrepresentation of the facts. The figures she had on a graph of Arctic Sea Ice Extent bore absolutely no correlation to the figures she was talking about or the figures I have seen elsewhere. It was as if she had just made up a graph to suit her argument and mislabelled the axes.
Looking now, she made a recent post about how the IPCC wasn’t mentioning the ‘pause’. Then she made another post about how the pause was being ‘reasoned away’ but still wasn’t mentioned. Then she quotes from the report “Box 9.2: Climate Models and the Hiatus in Global-Mean Surface Warming of the Past 15 Years”. Did she just miss the word Hiatus? Maybe she doesn’t know what it means? But the n she quotes a big chunk of text which repeatedly mentions the slowdown in atmospheric warming and again uses the word ‘hiatus’.
http://judithcurry.com/2013/09/30/ipccs-pause-logic/#more-13176
I’ve noticed a more desperate tone about the ‘sceptics’ of late. Curry seems a good example of one just making it up as she goes along. I wouldn’t trust anything from her at all.
Hank Roberts says
Citations:
Radiological impact of airborne effluents of coal and nuclear plants”>JP McBride, RE Moore, JP Witherspoon, RE Blanco – Science, 1978 – sciencemag.org
December 1978: Vol. 202 no. 4372 pp. 1045-1050
DOI: 10.1126/science.202.4372.
1045.
Cited by 133
Hank Roberts says
that’s cited by 133 subsequent papers
Chris Colose says
Watcher,
The model-tuning process is done to give a stable climatology (e.g., cloud properties to get an appropriate albedo in the climatology). This is not the case for future climate change projections or for out-of-sample (paleo) validation.
It is not self-evident that the different absolute temperatures amongst the CMIP5 ensemble members would be independent of the things we are interested in (e.g., their equilibrium climate sensitivity). But this is a hypothesis that can be tested, and was done in Figure 9.42 of the new IPCC report.
In that graph, they plotted the equilibrium sensitivity against the global mean surface temperature in the CMIP5 models. There is no correlation whatsoever between these, and thus no evidence for (and in fact strong evidence against) the hypothesis that a model with an absolute temperature e.g, 1-2 C “below observations” (which themselves are much more uncertain in an absolute than in an anomaly sense) vs. one “above observations” will give a biased estimate in one way or the other of the sensitivity. It would be interesting to explore this dependence for other variables like e.g., the sea ice edge. This is a technical issue, however, and Judith Curry’s blog is not an appropriate resource for trying to gain insight into the implications of these things.
Steve L says
Thank you Steve Fish @ #60!
Hank Roberts says
http://www.theatlanticwire.com/politics/2013/10/secret-plea-help-some-poor-national-weather-service-forecaster/70218/
CM says
Gavin @62, thanks for bothering to answer, and sorry about the noise. I really should know how to read a y axis. (Slaps forehead, repeatedly).
Peter Cook says
Re 53, 63 and 66 (Judith Curry)
The Australian newspaper (part of the Murdoch stable) has adopted Judith Curry as their favourite ‘climate scientist’ as part of their ongoing campaign to muddy the waters about climate change. For more details see http://www.peakdecisions.org
I appreciate you all have much better things to do, but any efforts to assess the validity of Curry’s postings (and explain, step-by-step, your reasoning) are a great help to those of us in the lay public wanting accurate and rigorous analysis.
Chris Dudley says
Hank,
Try using your noodle. It almost rhymes with google but allows perception rather than recitation. Why is there uranium in coal? Because there is uranium in dirt. Dirt is where forests grow and that is what turns into coal. Why does coal have ash when it is burned? Because dirt is hard to burn. What happens to the uranium? It is in the ash just like it was in the dirt. What happens to radiation? Nothing. There is just as much screening from ash as there is from dirt. It is like pushing dirt around with a bulldozer. Nothing changes on the Geiger counter. The background level does not go up.
[Response: That would only be true if there were no changes in concentration – but burning coal and producing ash will concentrate the ‘dirt’ as you put it. – gavin]
WebHubTelescope says
Ed B :
FYI, I spend way too much time on Curry’s blog, trying to battle the nonsense comments. I put together this post which shows how the current “hiatus” or “pause” is simply the result of transient SOI effects.
http://contextearth.com/2013/10/04/climate-variability-and-inferring-global-warming/
What is very interesting is that a simple log regression fit between land warming and CO2 concentration can extrapolate backward to more than a 25 °C cooling as CO2 approaches 1 ppm. MODTRAN shows that CO2 loses its GHG warming effectiveness as its concentration dips below 1 ppm.
This assumes the best-fit ECS of 3 °C for doubling of atmospheric concentration of CO2. I hope this answers your question, Ed Barbar, of what happens if the CO2 is removed. Now you can figure out for yourself what happens when CO2 is doubled or tripled from the pre-industrial levels.
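For the curious, here is the arithmetic behind that extrapolation, as a sketch: it assumes the simple form ΔT = ECS × log₂(C/C₀) with C₀ = 280 ppm taken as pre-industrial; the actual fitted regression in the linked post may differ in detail.

```python
import math

# Sketch of the logarithmic extrapolation described above. Assumes the
# simple form dT = ECS * log2(C / C0); the post's fitted regression may
# differ in detail. C0 = 280 ppm (pre-industrial) is an assumption here.
ECS = 3.0    # K per doubling of CO2, the best-fit value cited above
C0 = 280.0   # ppm

for C in (1.0, 280.0, 560.0, 840.0):
    dT = ECS * math.log2(C / C0)
    print(f"CO2 = {C:6.1f} ppm -> dT = {dT:+6.1f} K vs. pre-industrial")
# CO2 -> 1 ppm gives about -24 K, i.e. on the order of the ~25 C cooling
# quoted above (the exact figure depends on the regression).
```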
Watcher says
Re 58: Magma, thanks very much for that reference. Now that I go back I see Dr. Curry had posted it herself.
I don’t agree that Dr. Curry grossly misrepresented that figure at all. Her point was simply that it seems odd that models can differ so much when important underlying processes — specifically phase transitions of water — are highly dependent on absolute temperature. This is a point echoed — or should I say presaged since they said it first — in the article you link to:
To parameterized processes that are non-linearly dependent on the absolute temperature it is a prerequisite that they be exposed to realistic temperatures for them to act as intended. Prime examples are processes involving phase transitions of water.
The point is reinforced in their discussion of model tuning,
To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act.
Given all of this, I do find it troubling that models that differ by 3 degrees can continue to run parallel to each other. I don’t think the text indicates they are testing ‘robustness’ to initial conditions; but even if they were, a simulation that was ‘robust’ to initial temperature should relax to some ‘proper’ temperature, and not depend linearly on it. Given that they fail to converge to a common value, how is it that a second order property (i.e. trends) can be regarded as more ‘robust’ than a first order property (i.e. temperature)?
Dreadful word. And now I’ve used it three times!
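As an aside on the trends-versus-absolutes question above, the narrow arithmetic point (not the physics, which Gavin’s response below addresses) can be shown in a few lines: a constant offset between runs lands entirely in the intercept and leaves the fitted trend untouched. A sketch with illustrative noise, not model output:

```python
import numpy as np

# Illustrative noise only, not model output: two "runs" identical except
# for a constant 3 K offset. The offset lands entirely in the intercept;
# the fitted trend (the slope) is unchanged.
rng = np.random.default_rng(7)
t = np.arange(100)                                # years
y1 = 287.0 + 0.02 * t + rng.normal(0, 0.1, 100)   # K, trend 0.02 K/yr
y2 = y1 + 3.0                                     # same run, 3 K warmer

slope1, slope2 = np.polyfit(t, y1, 1)[0], np.polyfit(t, y2, 1)[0]
print(slope1, slope2)   # identical slopes despite the 3 K offset
```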
Here’s something else I find troubling: their statement in Section 2.3 that “Climate models may not exactly conserve energy.” Indeed, they go on to discuss energy “leakages” in the models that arise from, e.g., artifacts due to gridding, etc., that are of exactly the same order of magnitude as one of the key model outputs, the TOA imbalance (around 0.5 W/m²).
Though I sense a swat coming on, let me press on: that paper itemises 25 separate parameters (their Table 1) used to tune their model. They refer to several more in the text, with the sensible statement that where possible they use published values for things that can be independently measured. They also make the statement that
By doing so [tuning] we clearly run the risk of building the models’ performance upon compensating errors
and go on to give examples. I did not get the sense that they have 25 (or more) separate observables to constrain the parameter choices, which would be the minimum necessary to get a unique solution. I’m not knocking them: they’ve laid out several issues that obviously trouble them, and that’s what research is all about.
So where am I going with this? Maybe just to say that the way I read Dr. Curry’s position is that climate models are a work in progress. Having read through Mauritsen et al I’m inclined to agree with her.
[Response: You conclude this as if it were a profound statement and the sum total of what is being alleged. That is wrong on both counts. No-one has claimed that models are perfect or that further development is not needed so that is not the point in question. As I stated above, tuning for the absolute planetary temperature is not trivial and generally not done. But Figure 9.42 in the AR5 shows that sensitivity is not dependent on this (mainly because the global mean offset is small compared to the spatial and temporal ranges of the temperatures that local feedbacks are sensitive to). As for energy conservation, this is something that all models should strive for, but some of the terms are subtle and it takes work to track all the ‘leaks’ down. (FWIW the GISS models conserve energy to machine precision). However, while these small leaks do not have as much of an impact on the simulation as you might think, fixing them does allow you to ask a wider and clearer range of questions. – gavin]
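For readers wondering what “tracking the leaks down” involves, here is a toy sketch of the kind of budget audit meant (an illustration only, not any GCM’s actual diagnostic): a closed diffusion column must keep its total heat content constant, so the budget residual directly measures any numerical “leak”.

```python
import numpy as np

# Toy energy audit: explicit heat diffusion in a closed (insulated)
# column. With zero-flux boundaries the column total must stay constant;
# a nonzero residual would flag a "leak" from a gridding or coding
# artifact. A sketch, not any GCM's actual conservation diagnostic.
rng = np.random.default_rng(1)
T = rng.normal(280.0, 5.0, 50)       # K, equal-mass boxes (an assumption)
E0 = T.sum()                         # proxy for total heat content

d = 0.4                              # diffusion number, stable for <= 0.5
for _ in range(10_000):
    Tp = np.pad(T, 1, mode="edge")   # edge padding => zero boundary flux
    T = T + d * np.diff(Tp, 2)       # explicit diffusion step

print("budget residual:", T.sum() - E0)   # rounding-level, i.e. no leak
```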
Magma says
And then there’s Curry’s caricature of a post today, “Skeptics vs. academics”.
Perhaps somebody with more time and patience than I have could answer whether Curry’s professional standards have eroded with time, or whether they were always this poor.
Hank Roberts says
> The background level does not go up.
So, your noodle tells you that?
Follow the links in Scholar; read at least some of the cited papers. On average and approximately, the level doesn’t go down as you claim to believe — diluting the atmosphere with fossil carbon isn’t significant.
CO2 is well mixed, unlike heavy metals.
Know how and why the metals got -into- the coal?
You can look this stuff up.
Your logic led you to your opinion not supported by science.
Do you like the result? Why?
Hank Roberts says
PS, I suggest pursuing the tangent in the currently available discussion at SciAm’s blog, that cites sources:
http://www.scientificamerican.com/article.cfm?id=coal-ash-is-more-radioactive-than-nuclear-waste&page=2
Watcher says
Re: 73
Gavin,
Thanks much for your response.
My apologies if I made it sound as if this were a profound observation. All I was trying to say was that it represented a revelation to me. It can’t have escaped you from other threads that I am nothing more than a dilettante in this climate business. Perhaps I should preface each post with an acknowledgement that anything I think I know about the subject should be considered fragmentary at best.
Clearly, since you still have a job (if not a paycheque at the moment!) there is a continuing need to develop the models. At the same time the tenor of many of the pronouncements I have read (though not here, admittedly) is that the models are pretty much dead on. So dead on that it’s a no-brainer that the world needs to spend bazillions of $$ to fundamentally reorganise itself or we’re all going to die. Hence my bemusement when I see just how much development there is left to do.
You commented that the absolute temp didn’t matter much since the annual/daily/latitudinal variations are so much bigger. Does that imply that for a higher-running model these cyclic variations are larger (to e.g. match ice coverage at the poles) and vice versa?
Finally, I assume you had a hand in the GISS models’ conservation of energy. Good on ya, then.
Watcher says
Re: Gavin’s response to 73, and 74.
I thought it best to separate my reply to Gavin’s answers about modeling and address four of his words separately, mostly because Magma’s comment 74 irked me.
Gavin first:
As to “what is being alleged”. I take it you are referring to Dr. Curry’s discussion of the differences between the draft and released SPM. I read her post as a recap of what others have said — something generally true of her blog, I might add. That there are significant differences I don’t think can be disputed. The two versions of Figure 1.4 could hardly be more different in the view they present of the consistency between models and observations.
Then there are the statements she quotes:
“Models do not generally reproduce the observed reduction in surface warming trend over the last 10–15 years” from the draft, and
“the observations through 2012 generally fall within the projections made in all past assessments” from the released version.
On the face of it these are pretty different. As I understand it, the IPCC position on this discrepancy is pretty much, “oops, my bad”. Whatever your take on the subject, surely it’s valid for interested parties to want to discuss it? Again, as far as I can see Dr. Curry merely summarises other people’s discussions about which is the better approach and whether it was sneaky to make the change during a POLITICAL meeting set up to discuss a final SCIENTIFIC draft. Another valid point, I would say. To the extent that she comes to any conclusion of her own, all I get is this one:
“What is wrong is the failure of the IPCC to note the failure of nearly all climate model simulations to reproduce a pause of 15+ years.” It’s hard to consider this off-the-wall crazy when it clearly had the support of those writing the draft version.
Magma:
Having looked at the post you refer to I see no way you can impugn anybody’s “professional standards”. Your very words are evidence that there is a vituperative aspect to almost any climate discussion that doesn’t toe the IPCC line. Whatever your take on whether “the science is settled” or not, surely the often vicious interactions are a valid point of discussion?
[Response: I find that discussions predicated on the supposed fact that the IPCC is incompetent and corrupt are generally not productive explorations of the true state of climate science. And this. And this. – gavin]
Ray Ladbury says
Watcher,
Your comment@73 reveals a profound ignorance of how scientific modeling–especially, physics-based modeling–works. First, to quote Richard Hamming, “The purpose of computing is not numbers, but understanding.”
Let me repeat that – the numbers are less important than the insight we gain from the models. Imperfections in the models are not a barrier to their yielding that insight.
Second, no computer-based modeling technique reproduces conservation laws such as energy, momentum, angular momentum, etc. to 100%. They are models, not the physical systems themselves.
Third, of course the models are works in progress. This isn’t a fricking science-fair project where you know the outcome in advance.
Fourth, it is not terribly surprising that the results would be robust within some temperature range.
Finally, I don’t think Aunt Judy has any choice but to misrepresent research. I don’t think she herself understands the research sufficiently to present it correctly. One of the reasons she is so good at confusing her readers is that she is mightily confused, herself.
Chris Dudley says
Gavin (#71),
You raise a fair point. But recall, the carbon in the coal initially diluted the uranium concentration. It came from the air, not the soil. So, re-concentration is only back to the level of a clay soil, not heightened concentration. The ash is chemically active and nasty in that way, but it is not particularly radioactive. That is, it is radioactive the way dirt is radioactive, being the same composition, and so is neutral in terms of exposure.
Hank Roberts says
> dirt
That’s a belief.
But facts are available.
You can look this stuff up:
http://www.epa.gov/radiation/tenorm/sources.html#summary-table
Watcher says
Re: 79
Ray,
I’m not trying to sound like a dick-head here, but my career for the past 25 years has largely consisted of physics-based modeling. If it is a ‘profound misunderstanding’ to expect my models to give accurate predictions of how the real world is going to behave then mea culpa. If they didn’t I expect my customers would be … well, not my customers any more.
Yes, the systems are simpler than a global climate and, yes I can augment them with things I can measure in a lab; but the central requirement of a scientific model is to produce measurable predictions. It has no value otherwise. Yes, during development one can tinker with poorly understood parameters to make a better fit, but this doesn’t always lead to understanding. Indeed, Mauritsen et al lament that they encounter situations in their climate model where one wrong parameter is offsetting one or more other wrong parameters but they don’t have a ready way to disentangle the errors. In some cases they can identify the parameters in the trade-off, in which case understanding is gained even though they still can’t pin down the balance; but in other cases they admit to not even knowing what parameters will affect the metric they want to change,
“In many cases, however, we do not know how to tune a certain aspect of a model that we care about representing with fidelity”
so it’s hard to say that getting a better fit can be called an improvement in understanding.
Again, from the same paper the single most important performance metric is the system temperature,
To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act.
Besides, that’s what all the hand-wringing is about. Striving to model that has to be of importance, and observing that the models as a group don’t do a very good job yet can hardly be dismissed as irrelevant. What’s a ‘very good job’? If the spread in models is 3 K, and the ‘dangerous threshold’ is 2 K …. Like they say, the models are a work in progress.
Finally, I can’t help but notice that “Judy’s stupid” seems to be the crux of the rebuttals presented in several posts, not least of them yours. I’ll take that as supporting the second part of my post 78.
Hank Roberts says
> physics-based modeling…. to give accurate
> predictions …. my customers ….
Very different kind of model, right?
Watcher — I’m guessing you are not creating models that include random events?
Running a climate model repeatedly gives a spread of results because some elements are inescapably random (volcanic eruptions, for example, differ for each run of the model).
Maybe I’m guessing wrong, but I’d guess Watcher is modeling clockwork kinds of physics.
Watcher — care to say more specifically what systems you model for your customers? Do you get a spread of results when you run the model repeatedly, as a normal result?
Chris Dudley says
Hank (#81),
Here your noodle will help you. What is fly ash? Quicklime and pozzolan. An exothermic reaction involving water and carbon dioxide rather rapidly dilutes the radioactivity back down to the soil range in your link.
Your noodle can help you again. If these exaggerated claims about radioactivity of coal were true, then we could add coal ash to soil, grow some trees, make some charcoal, and produce even more uranium. Eventually, we could turn all of the stuff into uranium using this form of alchemy.
But transmutation requires nuclear reactions, not chemical reactions. So, your leg is being pulled.
In fact, burning coal cuts radiation exposure owing to the reduction in carbon-14 in our food. That is not a good reason to burn coal. But, it is a good reason, along with evidence of data falsification by government scientists at Yucca Mountain, to be distrustful of government nuclear power enthusiasts. Their devotion to truth seems to be too weak to be compatible with science.
Hank Roberts says
> http://web.ornl.gov/info/ornlreview/rev26-34/text/colmain.html
You’re stating your belief, not a fact.
Levels have been measured.
Numbers are published.
You can look this stuff up:
http://www.epa.gov/radiation/tenorm/sources.html#summary-table
Far be it from me to tell you what you should believe.
I believe I get better information from the science.
You believe you can trust whatever source you rely on.
Are you reasoning to your own conclusion, or drawing from some external source you trust for what you believe?
Retrograde Orbit says
I find this whole discussion on the inability of the models to reproduce recent global temperature (‘the hiatus’) disturbing. Very disturbing.
Let me explain: Skeptics have always made the baseless accusation that global warming is a fraud and climate scientists are ‘covering up’ the truth. Now I am afraid this might become a self-fulfilling prophecy. If it hasn’t already.
Consider: There are errors in the scientific results we publish. There always will be; models are no exception, as Gavin has nicely explained. However, when skeptics look at these errors they will say: “Aha! Told you so! There is no global warming!” Which is nonsense, but embarrassing. And so there is a growing temptation for researchers to downplay the errors in their research – even if there is no rational reason to do so.
That troubles me. And I could easily see that Watcher is insinuating exactly that.
Dave123 says
My own modeling experience in chemical reactor design gives me a different perspective from what I see Watcher saying. Watcher appears to be agreeing with Professor Curry that an error in absolute temperature of a degree or two is significant. First, based on my experience I’m not inclined to agree. Second, I’m seeing a ‘god of the gaps’ argument that appears to have no target for how close the absolute temperatures need to be before some folks pick up their knitting again.
Let me elaborate on the first: An error of 1 K out of numbers that could be from 220–340 K (stratospheric to desert surface) is an error of well less than 1%. A typical rate equation, d[C]/dt = A e^(−k/T) [c1]^i [c2]^j …, probably has larger errors in the estimates of A and k and of the concentrations influencing the rate than the error in T. Concentration and partial pressure estimates would seem likely even less influenced by an error of 1 part in 293 (room temperature in K).
I guess if I had a climate model in hand to tinker with, I could test for how results would differ if I could systematically bias calculated temperature through the iterations by steps of 0.01, 0.1, 1, 2 and 5 K with appropriate precautions to censor wandering into physically absurd outputs.
Which is back to the second point: How close on an absolute scale is good enough? How would you know? When the trend is what you’re after, how does agreement with absolute temperature, to some arbitrary closeness at some point in time, improve your confidence in the trend?
Let me add that chemical reactor modeling is, in my experience, pretty clockwork, yet the errors I describe didn’t prevent the models from being used, as we did, to great effect. I’ll save comments on how I see scenarios and uncontrolled variables (atmospheric humidity) for some other time.
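To make the proposed bias-step test concrete for a single rate term, here is a sketch applying those temperature offsets to an Arrhenius factor A e^(−k/T). The activation value is a made-up but typical one (Ea ≈ 50 kJ/mol, so k = Ea/R ≈ 6000 K), not taken from any particular reactor or climate model.

```python
import numpy as np

# Bias-step test on one Arrhenius-type rate term, r(T) = A * exp(-k/T).
# k = Ea/R with an assumed Ea of 50 kJ/mol -- illustrative only, not from
# any reactor or climate model. A and the concentration factors cancel
# when taking ratios, so they are omitted.
k = 50e3 / 8.314            # "activation temperature" Ea/R, in kelvin
T0 = 293.0                  # reference temperature, K

def rate(T):
    return np.exp(-k / T)

for dT in (0.01, 0.1, 1.0, 2.0, 5.0):
    change = rate(T0 + dT) / rate(T0) - 1.0
    print(f"bias {dT:5.2f} K -> rate changes by {change:+7.2%}")
# A 1 K bias moves this rate by roughly 7% even though 1/293 is well
# under 1% -- the exponential is the sensitive part.
```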
Thomas says
Chris,
I don’t think you can justify your assumption that coal will have the same radioactive concentration as the original peat plus dirt. It has lain underground for millions of years, exposed to groundwater flows. Material would be exchanged between the coal and the groundwater: it could either absorb radioactive substances, or have radioactive substances dissolved and carried away. Only detailed investigation – guided also by experimental results – can resolve such an issue.
Kevin McKinney says
“There is just as much screening from ash as there is from dirt.”
Major fail–as ‘thought experiments’ arguing in the face of actual data tend to be.
Patrick 027 says
re Watcher – the climate models of the sort being discussed cover the globe in latitude, longitude, height, and time. (Contrast 1-dimensional models, which resolve height – and maybe time – but obviously not synoptic-scale storms, Hadley cells, or ocean circulation; still, the true 1-dimensional model is useful for finding equilibrium given some simplifications*, and it is interesting to compare that to the fuller behavior of the climate in other models or the real world(s).) These models reproduce much of the behavior of the real climate system, including internal variability. That’s not to say they’re perfect – last I heard, which was a while ago, there was trouble with MJO, but I’m not up on all the details of that (I still don’t really understand what MJO is, to be honest, though I think it’s based in the tropics). Some of that internal variability produces temporary disequilibria in the climate system on the global annual average scale. So there are decades that warm up faster or slower, or warm up and cool down if there is no underlying trend. The observations of global surface temperature fit that behavior.
* a 1-dimensional model may find an equilibrium temperature profile, given external forcing, by finding the temperature distribution for which upward net LW radiative flux + convective heat flux = net downward SW flux at each level. Globally and temporally averaged, that actually is the case for an equilibrium climate, so this isn’t totally removed from reality. Convection may be parameterized by setting a maximum-allowable lapse rate; if temperature drops with height too quickly, convection must be increased. There will be some feedbacks, as a change in the temperature profile caused by convection will alter the radiative flux. Clouds in such a model might be set as a boundary condition (external forcing), which of course is unrealistic, but if realistic clouds are used, you still get something realistic. Etc.
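Since the footnote describes convective adjustment only in words, here is a minimal sketch of that step under assumed values (equal-weight layers and a 6.5 K/km critical lapse rate; real schemes weight layers by mass and tie the critical rate to moisture): wherever temperature falls with height faster than the cap, heat is shuffled between adjacent levels, conserving their mean, until no layer exceeds it.

```python
import numpy as np

def convective_adjustment(T, z, gamma_crit=6.5e-3):
    """Relax a temperature profile to a critical lapse rate -- the simple
    convection parameterization described in the footnote. Pairwise
    adjustments conserve the (equal-weight) column-mean temperature;
    real models would weight layers by mass. T in K, z in m, surface first."""
    T = T.copy()
    for _ in range(10_000):                      # sweep until stable
        changed = False
        for i in range(len(T) - 1):
            dz = z[i + 1] - z[i]
            if (T[i] - T[i + 1]) / dz > gamma_crit + 1e-9:
                m = 0.5 * (T[i] + T[i + 1])      # conserve the pair's mean
                T[i] = m + 0.5 * gamma_crit * dz
                T[i + 1] = m - 0.5 * gamma_crit * dz
                changed = True
        if not changed:
            break
    return T

# A radiatively driven profile that is too steep (12 K/km) gets relaxed
# to the 6.5 K/km cap while keeping the same column-mean temperature.
z = np.linspace(0.0, 10e3, 21)
T = 320.0 - 12e-3 * z
T_adj = convective_adjustment(T, z)
print(T.mean(), T_adj.mean())                    # equal: energy conserved
print((T_adj[0] - T_adj[1]) / (z[1] - z[0]))     # ~0.0065 K/m, at the cap
```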
Hank Roberts says
> we could add coal ash to soil …
> and produce even more uranium
Oh.
I see where you’re coming from.
Far out.
Ray Ladbury says
Watcher,
Ah, I see. You use physics-based models. You just don’t understand them.
Yes, of course we must compare model performance to reality. However, we have to make meaningful comparisons. Reality is only one possible realization of the climate system. Are you actually saying that not a single realization of the model runs produced a “16 year hiatus”? Are you really calling it a hiatus when it indicates that a La Nina year now is as warm as a big-assed El Nino 15 years ago?
What is more, we often learn more from models that fail in interesting ways than we do from models that reproduce the trends exactly.
As to Aunt Judy, I have never learned anything from her. She provides no insights, no clarity, nothing useful. She is often flat wrong, and never even in an interesting way. Judy is worse than a waste of time.
Watcher says
Re: 91 etc.
Dave123 (and others) I appreciate your thoughts.
I’m not really trying to insinuate anything, but the glaring changes between the draft and final SPM concerning model/observation agreement surely have to give one pause. I’m merely defending Dr. Curry’s right to say, “WTF?”
As for Dave123’s comment about models being good to 1%: in many if not most cases that would be ‘good enough’. However, given the absolutely key role of water in the climate system and its well-known phase transition at 273.15 K, I would think that some explicit test of what is ‘good enough’ would be in order. I have no idea how small ‘the gaps’ need to be before I believe in ‘the god’; I’m just saying that it makes me uneasy.
Furthermore, to continue the point, if you had a model that was known to be good only to 1% and it told you some process was going to change by 1%, what would be your confidence in the prediction? Myself, with what is being called my clockwork models, I would say pretty close to zero.
And since you ask, yes you could call them clockwork models: laser cavity dynamics, non-linear fibre processes, that sort of thing. Non-random except in the trivial sense that Maxwell-Boltzmann statistics can be assumed or to generate bit patterns. However, my understanding of climate systems (close to pathetic, admittedly) is not that they are random, but rather chaotic. Thus, the presence of multiple, interconnected, non-linear interactions leads to unpredictability on large scales. This is not the same as randomness, which I would call unpredictability on small scales, but when taken in the aggregate becomes predictable.
The implication of a chaotic system is that there are multiple possible large scale states which can arise from infinitesimally different initial conditions. However, if you look at that figure from Mauritsen et al again, you can see that while each of the hindcasts wiggles up and down, if it starts out high it pretty much stays high. So the randomness or chaos or whatever you want to call it doesn’t look as though it accounts for the differences between the runs. Magma way back in 58 said he thought they were testing different starting temps. I got the impression from Mauritsen’s paper that it was more about different tuning strategies and that the plot was a randomly chosen set of archived runs, but I didn’t see an explicit statement. Whatever the case, each ‘solution’ seems pretty stable.
Of course I’m only guessing about this. It would be interesting to get an answer from a climate modeler (you know who you are!) about whether the same set of forcings and tuning parameters can generate runs that differ by 3K. In other words, is the chaotic nature responsible for only the fluctuations within a given ‘solution’, or is it responsible for the ‘choice’ of solution?
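To make Watcher’s weather/climate distinction concrete with the standard toy example rather than a GCM, here is a sketch using the Lorenz-63 system: two runs whose initial states differ by one part in a billion diverge completely point by point, yet their long-run statistics on the bounded attractor agree.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the classic Lorenz-63 system.
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    # One fourth-order Runge-Kutta step.
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def run(s0, n=100_000):
    s, out = np.array(s0, float), np.empty((n, 3))
    for i in range(n):
        s = rk4_step(s)
        out[i] = s
    return out

a = run([1.0, 1.0, 1.0])
b = run([1.0, 1.0, 1.0 + 1e-9])   # "infinitesimally different" start

# "Weather": pointwise states end up completely different...
print("final-state difference:", np.abs(a[-1] - b[-1]))
# ..."climate": long-run statistics agree (spin-up discarded).
print("long-run mean of z:", a[20_000:, 2].mean(), b[20_000:, 2].mean())
```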
Radge Havers says
Retro’ @86
It troubles me too.
You know, it takes a certain willfully obtuse mean spirit to try to wipe out of discourse a pretty basic intuition that almost everybody already understands: You don’t let the perfect be the enemy of the good. That goes for pretty much everything in life, except apparently if you’re an ideological bampot. Just a reminder that you shouldn’t be too passive in responding to such malicious propaganda.
Steve Fish says
I have been staring at the decadal average surface temperature anomaly graph, under the new IPCC Climate Report topic here on RC, and wondering where the 15 year slowdown in warming is. The 2000 to 2010 step is completely contained within the last 15 years, yet is as big a step up as the previous two. I would like for all those trying to distract, with talk about how models don’t show something that they admittedly can’t (short term variation), to climb up those steep steps, stand on the top one, and point out the slowdown for me.
Steve
Mike Donald says
Booker’s at it again and I’m sure you good folk will have something to say.
http://www.telegraph.co.uk/earth/environment/climatechange/10356276/Climate-change-scientists-are-just-another-pressure-group.html
[Response: Christopher Booker is wrong about the history, wrong about the present, and will be wrong about the future. -gavin]
Retrograde Orbit says
Bampot? Hmm, whatever. Don’t forget, most of us here are human.
The problem is that most skeptics are in essence wishful thinkers (e.g. model discrepancy => wishful thought => maybe the models are incorrect in their long term predictions too and it won’t be as bad as they predict).
You can encourage this kind of wishful thinking by simply asking pointed questions. And it’s effective. Wishful thinking consistently trumps rational arguments. I don’t think we should give Dr. Curry (or anybody) a free pass on that.
Retrograde Orbit says
Which leads me to a question for Gavin:
Isn’t this whole discourse on models way over the top? I had that feeling already when I read your model-error post (after somebody claimed that all models were “falsified”).
What is wrong with simply saying: The actual temperature is within the error margin of the models and therefore we cannot draw any conclusion from this discrepancy? And that – in particular – the idea that long term model predictions may be incorrect because of the short term discrepancy is merely a wishful thought?
Patrick 027 says
re 94 Watcher
Would you have confidence in a model’s trend if the absolute value were 1% off but the trend, integrated over time, produced a 10% difference? If so, what would you expect if you reduced the change in external forcing to 10% of the original value? Even if you had no knowledge of the general behavior (is it a sinusoidal dependence? Parabolic?), a fair best first guess, albeit with minimal or no confidence, would be a 1% change. But say you did know something about its behavior, enough to have some expectation of roughly/approximately linear proportionality, at least within a range of conditions that this case falls into. Then a 1% result has more confidence.
With regard to the Earth’s climate system, there is some highly nonlinear behavior well outside the range of conditions being dealt with – runaway H2O vapor feedback (which, from what I’ve read, is hard to get to with non-feedback GHG forcing – it requires solar brightening) and snowball Earth (hysteresis, runaway albedo feedback). There are certainly other complexities, especially if we consider Earth-system sensitivity (which includes feedbacks on CO2 amount, ice sheets, vegetation albedo, and aerosols) rather than just Charney sensitivity, which includes the Planck response (the increase in outgoing LW radiation due to an increase in temperature) plus the H2O vapor, lapse rate, cloud, and, if I’m not mistaken, snow and sea ice feedbacks (I always forget whether sea ice is included, but it would make sense). Charney feedbacks are not perfectly linear over very large ranges of conditions but I think they tend to be smooth enough to be approximated as such over smaller ranges – of course snow and sea ice feedbacks approach zero at sufficiently high temperatures. Charney feedbacks are fast-acting relative to the equilibration time given by heat capacity and climate sensitivity. The response of the climate to orbital forcing is a great example of the complexity of the full Earth-system response; the global annual average forcing in that case is quite small, and what is really important is the redistribution of solar radiation over latitude and season, where feedbacks to changes at some locations have a global annual average impact. The importance of such spatial and temporal distributions in forcing could be measured by their effect on efficacy – the global time-average sensitivity to a particular type of forcing relative to a standard type of forcing.
But as far as the effect of absolute temperature errors is concerned, consider that the troposphere varies from above 293 K at the surface in many places (~288 K global average) to ~220 K or even less at the tropopause. How far up or down, and for that matter, north or south, does the 273.15 K isotherm shift among models for the same forcing? The snow and ice are still there, there is still a freezing level in many clouds, etc. Consider the changes expected with global warming – shifts in storm tracks will leave some places dry and others wetter, a shift in precipitation toward heavier downpours will occur, and we’ll lose snow and ice, and yet, at least within some limits, we’re still going to be within the range where there’s still significant snow and ice coverage, there will still be significant extratropical storm track activity, etc.
When and if we get to the point that feedback values change rapidly over temperature, we’ll probably be farther along than we’d ever want to be, wouldn’t we? (and it will probably take time for Greenland and Antarctica together to lose much of their ice – not as long as many of us would prefer, but…)
Except for the possibility of stepped sensitivity – for example, if (for illustrative purposes I constructed this example) there were a series of ice fields which remained intact up to some threshold and completely disappeared above that – then the equilibrium sensitivity would have a series of jumps – but if these jumps were not too large and were not tightly clustered in a bundle (relative to the temperature range considered), then it can be approximated by something more smooth – especially for the predictions if the thresholds have uncertainty.
Concerning chaos: Weather is chaotic, with small changes in initial conditions resulting in large changes in conditions at any point in time, after some period of time over which predictability is lost. But general characteristics of weather may remain the same – in the sense that nothing seems out-of-whack if there’s 4 blizzards instead of 6 in one year and 7 the next, or a tornado hits one place rather than another. You weren’t expecting the alternative scenario in the first place – you couldn’t predict it that far ahead with any confidence. The general characteristics are the climate. It’s more than just averages; I like the analogy of texture – consider two lawns with the same variety of grass, same soil, maintained the same way – on a climatic level they tend to look the same, although each individual blade of grass is different – there isn’t even necessarily a one-to-one correspondence between the two sets.
What is weather and climate can shift depending on scale – for example, each individual snow flake is like a weather event in a blizzard climate – an ice age that lasts maybe a few hours to a day or so. On that timescale the blizzard is predictable. The mantle has weather – and so far as I know we can’t model exactly where the continents will be or have been beyond some time horizon even if we could model mantle convection as well as the atmosphere (of course, we have the geologic record to tell us about continental drift in the past), but there is a climate of mantle convection and plate tectonics behavior – which will change over Earth’s history as there is cooling and associated effects (layered convection due to the thermodynamics of the Perovskite phase transition may (have/will?/is) shift(ing?) toward whole mantle convection).
Climate is predictable because the chaos of weather is bounded. There are conservation laws to consider, for example – the whole ocean won’t spontaneously heat up or cool off; there must be a heat source or sink. A thunderstorm, turbulent eddy, or extratropical cyclone may grow from some instability (instability from CAPE, Kelvin-Helmholtz, baroclinic wave instability (although latent heating is there too)) but can’t grow forever, due to limits (spatial, material, energetic), or keep reforming without a source of energy to drive it. Some freak events may occur rarely due to some random (in effect, chaotic in origin) alignment; they can’t be expected to occur all the time unless climate changes sufficiently. If there are two distinct equilibrium states for the same external forcing, then climate is stuck in one or the other until the forcing puts it on a trajectory which connects the two. If the states are not truly fully equilibria, then the climate may fluctuate between them, and thus a complete description of climate encompasses both states and the shifting behavior (ENSO, NAM, SAM, NAO, MJO, PDO, AMO, QBO – actually, not all of these have two or more distinct (approximate and/or partial) equilibria – the QBO (the one I believe I understand the best, except of course for whatever aspects I don’t yet know :) ) has no separate equilibrium states; each state in a continuum leads as smoothly to the next as any other, so far as I know; I have a vague understanding of ENSO, NAM, and SAM; can’t explain what MJO even is).