It is a truism that all models are wrong. Just as no map can capture the real landscape and no portrait the true self, numerical models must contain approximations to the complexity of the real world and so can never be perfect replications of reality. Similarly, any specific observations are only partial reflections of what is actually happening and have multiple sources of error. It is therefore to be expected that there will be discrepancies between models and observations. However, why these arise and what one should conclude from them are more interesting and more subtle than most people realise. Indeed, such discrepancies are the classic way we learn something new – and what we learn often isn’t what people first thought of.
The first thing to note is that any climate model-observation mismatch can have multiple (non-exclusive) causes which (simply put) are:
- The observations are in error
- The models are in error
- The comparison is flawed
In climate science there have been multiple examples of each possibility and multiple ways in which each set of errors has arisen, and so we’ll take them in turn.
1. Observational Error
These errors can be straight-up mistakes in transcription, instrument failures, data corruption, etc., but these are generally easy to spot and so I won’t dwell on this class of error. More subtly, most of the “observations” that we compare climate models to are actually syntheses of large amounts of raw observations. These data products are not just a function of the raw observations, but also of the assumptions and the “model” (usually statistical) that go into building the synthesis. These assumptions can relate to space or time interpolation, corrections for non-climate related factors, or inversions of the raw data to get the relevant climate variable. Examples of these kinds of errors being responsible for a climate model/observation discrepancy range from the omission of orbital decay effects in producing the UAH MSU data sets to the no-modern-analog problem in the CLIMAP reconstruction of ice age ocean temperatures.
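As a toy illustration of the synthesis point (all numbers invented, and not the algorithm of any real product), the very same sparse raw data can yield different “observed” regional means depending on the interpolation assumption baked into the data product:

```python
import numpy as np

# Hypothetical sparse stations with a strong high-latitude anomaly
stations_lat = np.array([10.0, 20.0, 30.0, 70.0])
stations_anom = np.array([0.2, 0.3, 0.4, 1.5])

grid_lat = np.linspace(0, 90, 19)  # target grid for the "product"

# Product A: nearest-neighbour fill of the grid
nearest = stations_anom[np.abs(grid_lat[:, None] - stations_lat).argmin(axis=1)]

# Product B: linear interpolation (np.interp extrapolates flat at the ends)
linear = np.interp(grid_lat, stations_lat, stations_anom)

mean_a, mean_b = nearest.mean(), linear.mean()
# same raw observations, different "observed" means: the difference is
# entirely an artifact of the statistical model inside the synthesis
```

The raw stations never changed; only the assumption used to fill the gaps did, yet the two “observations” disagree.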
In other fields, these kinds of issues arise in unacknowledged laboratory effects or instrument calibration errors. Examples abound, most recently for instance, the supposed ‘observation’ of ‘faster-than-light’ neutrinos.
2. Model Error
There are of course many model errors. These range from the inability to resolve sub-grid features of the topography, approximations made for computational efficiency, the necessarily incomplete physical scope of the models, and inevitable coding bugs. Sometimes model-observation discrepancies can be easily traced to such issues. However, more often, model output is a function of multiple aspects of a simulation, and so even if the model is undoubtedly biased (a good example is the persistent ‘double ITCZ’ bias in simulations of tropical rainfall) it can be hard to associate this with a specific conceptual or coding error. The most useful comparisons are therefore those that allow for the most direct assessment of the cause of any discrepancy. “Process-based” diagnostics – where comparisons are made for specific processes rather than for specific fields – are becoming very useful in this respect.
When a comparison is being made in a specific experiment though, there are a few additional considerations. Any particular simulation (and hence any diagnostic from it) arises as the result of a collection of multiple assumptions – in the model physics itself, in the forcings of the simulation (such as the history of aerosols in a 20th Century experiment), and in the initial conditions used in the simulation. Each potential source of the mismatch needs to be independently examined.
3. Flawed Comparisons
Even with a near-perfect model and accurate observations, model-observation comparisons can show big discrepancies because the diagnostics being compared, while nominally similar, actually end up being subtly (and perhaps importantly) biased. This can be as simple as assuming an estimate of the global mean surface temperature anomaly is truly global when it in fact has large gaps in regions that are behaving anomalously. This can be dealt with by masking the model fields prior to averaging, but it isn’t always done. Other examples have involved assuming the MSU-TMT record can be compared to temperatures at a specific height in the model, instead of using the full weighting profile. Yet another might be comparing satellite retrievals of low clouds with the model averages, but forgetting that satellites can’t see low clouds if they are hiding behind upper level ones. In paleo-climate, simple transfer functions of proxies like isotopes can often be complicated by other influences on the proxy (e.g. Werner et al, 2000). It is therefore incumbent on the modellers to try and produce diagnostics that are commensurate with what the observations actually represent.
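A minimal sketch of the masking point, using an invented model field and a hypothetical observational coverage mask (not any specific product):

```python
import numpy as np

def area_weights(lats):
    """Cosine-of-latitude weights for a regular latitude grid (degrees)."""
    w = np.cos(np.deg2rad(lats))
    return w / w.sum()

def global_mean(field, obs_mask, lats):
    """Area-weighted mean of `field` (lat x lon), using only observed cells."""
    w = area_weights(lats)[:, None] * np.ones_like(field)
    w = np.where(obs_mask, w, 0.0)           # drop unobserved cells
    return (field * w).sum() / w.sum()

# Toy field: anomaly of 1.0 everywhere, 3.0 poleward of 60N (anomalous
# Arctic warmth); the hypothetical "observations" have no Arctic coverage.
lats = np.linspace(-89.0, 89.0, 90)
field = np.ones((90, 180))
field[lats > 60, :] = 3.0
obs_mask = np.broadcast_to((lats <= 60)[:, None], field.shape)

truly_global = global_mean(field, np.ones_like(field, dtype=bool), lats)
masked = global_mean(field, obs_mask, lats)
# comparing the model's truly global mean against the gappy "observed"
# mean manufactures a discrepancy; masking the model first removes it
```

The two numbers differ purely because of coverage, which is exactly why masking the model to the observational footprint before averaging matters.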
Flaws in comparisons can be more conceptual as well – for instance, comparing the ensemble mean of a set of model runs to the single realisation of the real world. Or comparing a single run with its own weather to a short-term observational record. These are not wrong so much as potentially misleading – since it is obvious why there is going to be a discrepancy, albeit one that doesn’t have many implications for our understanding.
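The ensemble-mean point can be illustrated with an entirely synthetic toy example (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(30)
forced = 0.02 * years                   # shared forced signal

# 20 runs with identical forcing but independent internal "weather"
ensemble = forced + rng.normal(0.0, 0.15, size=(20, years.size))
obs = forced + rng.normal(0.0, 0.15, size=years.size)  # one realisation

ens_mean = ensemble.mean(axis=0)
# averaging 20 runs shrinks the weather noise by ~sqrt(20), so the
# ensemble mean is far smoother than the single "observed" realisation
noise_in_mean = (ens_mean - forced).std()
noise_in_obs = (obs - forced).std()
```

Short-term wiggles in the single realisation will routinely depart from the smooth ensemble mean, without that departure telling us anything about the forced response.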
Implications
The implications of any specific discrepancy therefore aren’t immediately obvious (for those who like their philosophy a little more academic, this is basically a rephrasing of the Quine/Duhem position on scientific underdetermination). Since any actual model prediction depends on a collection of hypotheses together, as do the ‘observation’ and the comparison, there are multiple chances for errors to creep in. It takes work to figure out where though.
The alternative ‘Popperian’ view – well encapsulated by Richard Feynman:
… we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong.
actually doesn’t work except in the purest of circumstances (and I’m not even sure I can think of a clean example). A recent obvious counter-example in physics was the fact that the ‘faster-than-light’ neutrino experiment has not falsified special relativity – despite Feynman’s dictum.
But does this exposition help in any current issues related to climate science? I think it does – mainly because it forces one to think about what the other ancillary hypotheses are. For three particular mismatches – sea ice loss rates being much too low in CMIP3, tropical MSU-TMT rising too fast in CMIP5, or the ensemble mean global mean temperatures diverging from HadCRUT4 – it is likely that there are multiple sources of these mismatches across all three categories described above. The sea ice loss rate seems to be very sensitive to model resolution and has improved in CMIP5 – implicating aspects of the model structure as the main source of the problem. MSU-TMT trends have a lot of structural uncertainty in the observations (note the differences in trends between the UAH and RSS products). And global mean temperature trends are quite sensitive to observational products, masking, forcings in the models, and initial condition sensitivity.
Working out what is responsible for what is, as they say, an “active research question”.
Update: From the comments:
“our earth is a globe
whose surface we probe
no map can replace her
but just try to trace her”
– Steve Waterman, The World of Maps
References
- M. Werner, U. Mikolajewicz, M. Heimann, and G. Hoffmann, "Borehole versus isotope temperatures on Greenland: Seasonality does matter", Geophysical Research Letters, vol. 27, pp. 723-726, 2000. http://dx.doi.org/10.1029/1999GL006075
Lichanos says
Under the heading of ‘Model Error’, shouldn’t you include as well the possibility that the model omits or misrepresents elements of the subject?
[Response: I did. In the first line of the section. – gavin]
Susan Anderson says
I’ve been (trying to) follow all this with baffled admiration, but can’t resist a brief lay interpolation:
“If at first you don’t succeed, try, try again”
Seems to me climate models have done a dam’ good job at continuously observing and upgrading and finding new ways to measure and approximate. Difficult, but how else do people suggest we try and get a grip? A whole lot of people are good at tearing down (an adolescent exercise) but if they have a contribution, how about buckling down and trying to help? It might feel like work, but work is good for you!
Rob Quayle says
Climate consensus will probably come no time soon. After all, Australia, one of the more progressive countries on this earth (it’s been illegal NOT to vote there since 1925) has recently changed governments partly because their carbon tax was so unpopular. My simple regression-based statistical climate model predicts global carbon dioxide, surface temperature & sea level at yearly time steps. It is now calibrated on actual 1959-2012 data & its results are generally in the same ball park as the IPCC. When recalibrated on real 1959-2012 plus fake 2013-2027 data, assuming nearly flat surface temperatures for 2013-2027 (just like 1998-2012), the result is that the 21st century warming estimate goes from about 2.74 deg C (4.93 deg F) to 1.86 deg C (3.35 deg F). That’s still a significant empirical climate sensitivity to carbon dioxide. Results available on request: rgquayle@gmail.com
jdeuxf says
How do you analyse the mismatch between model simulations of the Pliocene climatic optimum (constrained by proxy-derived sea surface temperatures) and the proxy records of temperature and precipitation for that period? PS: especially the mismatch in precipitation. One example: High resolution climate and vegetation simulations of the Late Pliocene, a model-data comparison over western Europe and the Mediterranean region, A. Jost et al., Clim. Past, 5, 585-606, 2009.
[Response: Not yet clear. There is much that could be improved in model set-up for Pliocene climates – CO2 is only approximate, CH4 unknown, aerosols unknown, land surface types approximated etc. There are clear mismatches – particularly in the equator-to-pole temperature gradient which points to some kind of missing physics relevant to warm climates. But then the observed data are not perfect by any means and span a long time period (multiple orbital cycles) and so there may be some apples/oranges comparisons going on. For some more recent discussions try Lunt et al (2009) and Lunt et al (2012). – gavin]
simon abingdon says
#52 Susan Anderson
“I’ve been (trying to) follow all this with baffled admiration”.
Stick with it Susan. All will come into clear focus before very long.
Charles Stack, MPH says
Dear Gavin and all, I advise policy-makers within the GOP on environmental matters. It’s been a struggle, but I’d been making some headway in this area, thanks to excellent remarks by senior leaders, especially former Sec of State George Schultz.
I cannot begin to tell you the damage that has occurred by the past exaggeration of climate predictions. I understand very well the issues of model accuracy & believe in “climate disruption” (the proper term), particularly ocean acidification.
However, we are losing the policy argument due to past claims vs. present results. It is time to reboot the entire process. Reach out to skeptics and engage the public. Your intransigence is harming the entire planet. Thank you, Charles Stack, MPH
[Response: My ‘intransigence’ is harming the planet? Not journalists lying, politicians denying, companies polluting, or the whole host of perverse incentives society has created that make it cheaper to do the wrong thing for the environment? None of those things matter compared to my blogging? Phew – I had no idea! Thanks. – gavin]
prokaryotes says
Link
Hank Roberts says
> Carbon Dioxide and Climate: A Scientific Assessment
that’s the Charney report.
Cited by 192 other papers (links in Scholar)
Dave123 says
Charles,
You wouldn’t suppose misrepresentation of past claims has anything to to with it? A deliberate disinformation campaign using methods honed and tested by the tobacco companies? I’d find your note more ‘credible’ if you gave example of a past claim made by say Gavin, and why it was wrong. As a man who claims to be advising ‘Republican’ leadership, surely you have enough command of the facts to substantiate your allegations.
captcha coincidence: lityHpe formulate (which I read as Lity Hype)…let’s see if I got it right!
Berényi Péter says
“[Response: Interesting, but not relevant. This presupposes a perfectly known set of basic equations that we can test for convergence as scales get arbitrarily small. That isn’t the case for climate models – too many magnitudes of scale between cloud microphysics or under-ice salt fingering and grid box averages. – gavin]”
“[Response: Only specific processes can be examined in the lab. Radiative transfer, aerosol formation, some aspects of cloud microphysics, ocean diffusion etc. – but the real world has many good experiments that the numerical models can be evaluated against (some mentioned above). – gavin]”
Well, I think I still could not drive my point through. It is not about climate science as such, it is about physics. If we were dealing with a reproducible system, the MEP principle would hold along with the fluctuation theorem, see Dewar 2003. Those would put strict constraints on any computational model, one could literally test model output against them.
However, the climate system is clearly not reproducible; it is chaotic. Indeed, if it were reproducible, it would have to linger around a Maximum Entropy Production state. But it does not, for most of the entropy production on Earth happens when shortwave radiation is absorbed and thermalized. That is, by decreasing the (rather high) Bond albedo of Earth one could increase the rate of entropy production, which is inconsistent with a MEP state.
Therefore the rules of the game should be different for some non reproducible systems. Please note shortwave albedo of Earth has large spatio-temporal variations, but its annual global average is restricted to a narrow range, even if it is not determined by simple material constraints, but by an intricate interplay between many internal degrees of freedom. And the value it fluctuates around is very different from the one we would expect for a non-chaotic non-equilibrium quasi steady state thermodynamic system for which energy exchange with its environment is dominated by radiation. Mercury is black, Earth is not.
Questions:
1. Do you believe the MEP principle can’t be generalized to another, deeper extremum principle which would hold to a class of nonequilibrium thermodynamic systems terrestrial climate belongs to? If so, why?
2. Are multiscale properties of climate you have mentioned not connected to a SOC state? In the vicinity of a critical state one would expect scale invariant behavior in all state variables. Is it seen in climate?
One could, of course, take a different track and delve into Dewar 2003 deeper to see where reproducibility comes into the picture and how far one can get without it.
However, even in that case one would need actual experiments to verify theoretical expectations, that is, a model that would fit into the lab.
If an extremum principle, valid for climate, could be found and verified experimentally, that would make testing computational climate models much easier.
Consider the case of celestial mechanics. With a naive computational model, coding Newton’s laws in a straightforward manner, one gets into trouble soon. We do know both mechanical energy and angular momentum are conserved quantities in any setup (with no dissipative processes, of course). However, due to subtle computational errors which add up, the model lacks these properties, which means it should be rejected as a device to compute future states of the system. On the other hand, it also shows the way to improve the model, that is, to take care of conservation laws at each algorithmic step.
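A minimal sketch of this point, using a unit harmonic oscillator standing in for the orbital problem (invented step sizes): a naive explicit Euler integrator lets energy drift systematically, while simply reordering the same updates (symplectic Euler) keeps it bounded:

```python
def explicit_euler(x, v, dt, steps):
    # naive update: uses the old state for both x and v
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x
    return x, v

def symplectic_euler(x, v, dt, steps):
    # same cost, but v is updated first and the *new* v moves x;
    # this map conserves a nearby "shadow" energy, so orbits stay bounded
    for _ in range(steps):
        v = v - dt * x
        x = x + dt * v
    return x, v

def energy(x, v):
    # unit harmonic oscillator: H = (x^2 + v^2) / 2, conserved exactly
    return 0.5 * (x * x + v * v)

e0 = energy(1.0, 0.0)                                # = 0.5
xe, ve = explicit_euler(1.0, 0.0, 0.01, 10_000)
xs, vs = symplectic_euler(1.0, 0.0, 0.01, 10_000)
# explicit Euler multiplies the energy by (1 + dt^2) every step, so it
# grows without bound; the symplectic variant stays within O(dt) of e0
```

The point is the one made above: building the conservation structure into the algorithm itself, rather than hoping errors cancel, is what rescues the model.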
Similarly, an underlying principle, if one exists, could take care of some multiscale phenomena in climate models, reducing the need for guessing parameterization schemes greatly while improving model quality.
[Response: Nice thought. If such principles can be found, they might indeed be useful. However, I am not optimistic – the specifics of the small scale physics (aerosol indirect effects on clouds, sea ice formation, soil hydrology etc.) are so heterogeneous that I don’t see how you can do without calculating the details. The main conservation principles (of energy, mass, momentum etc) are already incorporated, but beyond that I am not aware of anything of this sort. – gavin]
Radge Havers says
Joe,
For the purposes of communicating basic ideas of modeling to a broad audience, maps would seem to offer a ready and intuitive lead-in. They are models, after all, that people use in every day life. But maybe that goes out the window if your audience insists on taking maps (and portraits for that matter) for granted. I suppose in that case you might as well be talking about sofas and dust ruffles.
Still, don’t social scientists create maps, and aren’t they capable of handling figures of speech? And even if all you can call to mind are road maps, what, you’ve never encountered examples that are illegible, contain misinformation, are out of date, etc.? You’ve never heard of people getting lost and driving into a ditch following their GPS? I certainly have. Being able to locate Orlando in Florida is a pretty low standard. By contrast is not the London Subway Map useful, elegant and wondrous?
Nor is interpretation anything to sneer at. For instance, consider the process of geologic mapping where interpretation can be an ongoing part, from beginning to end, of making sense of apparently chaotic terrain. It’s a fair analogy to portraiture– if you’ve ever tried your hand at it and understand that analogies are by definition imperfect. So there is real beauty in some maps, as there is in some models, theories, solutions. Great!
Unclench. ‘Mapping’ in a colloquial sense isn’t just about making maps, it’s about how humans make sense of the world.
And btw, what’s with everybody “shuddering” at this and that already? I admit I have a soft spot for stroppy nonsense, but I don’t get all the banal melodrama.
Lichanos says
@ 51 – Gavin’s response:
I asked:
-Under the heading of ‘Model Error’, shouldn’t you include as well the possibility that the model omits or misrepresents elements of the subject?
and got this:
~[Response: I did. In the first line of the section. – gavin]
This is very interesting, because he is referring, I believe, to this text of his in the post:
~”There are of course many model errors. These range from the inability to resolve sub-grid features of the topography, approximations made for computational efficiency, the necessarily incomplete physical scope of the models and inevitable coding bugs.”
Now, grid resolution is a mechanical problem that can be improved with computing power, and has been, although I guess there is a limit, unless we go back to Borges & Morehouse and build a 1:1 scale model.
Presumably approximations made for efficiency may drop away as computers get more powerful and programming tools get more sophisticated.
Coding bugs? I see those as simple blunders, hard to catch sometimes, but with time…
“The necessarily incomplete physical scope of the models,” is…what exactly? Elements of the total system that are left out because there are only 24 hours in a day? This is the only bit of GS’s text that deals, obliquely, with my question. What he is describing are the inevitable limitations on models, things we all accept in a GOOD model. But what if the model is simply incorrect? Wrong? Makes the wrong connections? Is wrong at a conceptual level about what are the forcings, and how they interact? With such a complex system, such an error is not hard to imagine, and GS implicitly accepts its possibility.
The entire discussion of error in this post is, however, based on the assumption that the model is correct, even though it’s wrong, as it must be to some extent, but fundamentally correct. What if that assumption is wrong? The fact that it performs well in hindcasting simply makes it plausible, not correct. I was simply raising this possibility, and GS seems to think it is out of the question.
Susan Anderson says
Simon A, I am significantly less baffled than you would like to imply, unlike yourself. Humility is not a sin, but pretense is.
I would maintain a dignified silence except I just picked up the terrific link in the addendum. I love maps! Of course they are limited, but they present such a nice example of useful metaphor and that is a gorgeous collection of well crafted wordsmithing:
https://www.realclimate.org/index.php/archives/2013/09/on-mismatches-between-models-and-observations/comment-page-1/#comment-408422
missoula says
This (if I understand your statement correctly) is one of the core problems with the interaction between climate modeling and public policy. Modeling-based claims (or predictions) about future climate often pertain to trends that can only be unequivocally observed on 10-year to 100-year timescales. (E.g. CO2 is expected to rise and temperature is expected to rise over the next decades.) However, the climate, and thus the observational record, is sensitive to many factors, some of which are difficult to predict with any certainty in the short-term, e.g. volcanoes (cooling by aerosols), land-cover (changes to cloud-cover from transpiration), solar intensity, and stochastic internal dynamics, like El Nino/La Nina. Indeed, these (short-term) factors may oppose long-term trends. (Take a look at the jagged terrain of the 20th century temperature record.) Herein lies the rub.
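A deterministic toy example (invented numbers, with a sinusoid standing in for internal variability like El Nino/La Nina) shows the rub concretely:

```python
import numpy as np

# Steady warming trend plus a multidecadal oscillation; all values invented
years = np.arange(60)
temps = 0.02 * years + 0.3 * np.sin(2 * np.pi * years / 30)

def trend_per_decade(y, t):
    # least-squares linear trend, expressed per decade
    return 10 * np.polyfit(y, t, 1)[0]

full = trend_per_decade(years, temps)    # positive long-term trend
windows = [trend_per_decade(years[i:i + 10], temps[i:i + 10])
           for i in range(50)]
# several 10-year windows show strongly negative trends even though
# the underlying forced trend is unambiguously positive
```

Anyone judging the long-term trend from a single decade of this record, the "real time" experience of weather, could easily conclude it is cooling.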
Policy-makers, and the people they represent, experience the world in real time. Weather is extremely important, and immediate, to almost everyone in the world. Average climate trends are not. When asked “how’s the weather?”, who responds with the average rate of temperature change over the last 30 years?
As trivial as this may seem to some scientists, I believe it is one of the central reasons that there is little government action on climate change (or on most long-term problems, for that matter).
In short, we are ill-equipped to imagine geological (earth) time or to think on the scale of an entire planet. As long as this situation continues, I see little reason to expect a change in the political dialogue.
missoula says
Gavin,
I’d be curious to hear more about the further complications you talk about here:
It seems like this should be an active area of research, so it’s surprising that there are no papers.
I would imagine that the understanding of “structural” model lacunas is itself evolving. For example, how many climate models include the bacterial dynamics associated with cloud formation in the ocean? (http://ucsdnews.ucsd.edu/pressreleases/biological_activity_alters_the_ability_of_particles_from_sea_spray_to_seed_clouds) This might be an extreme example, but it seems like there is a certain hubris in discounting the uncertainty associated with undiscovered climate forcings and dynamics, given that understanding in this area is clearly advancing.
John West says
@ Charles Stack
in·tran·si·gent also in·tran·si·geant (ĭn-trăn′sə-jənt, -zə-)
adj.
Refusing to moderate a position, especially an extreme position; uncompromising.
——————————————————————————–
I don’t think that’s an appropriate contention here at all, scientific questions aren’t resolved by compromise or by moderating a position; they are resolved by evidence. While I would characterize myself as unconvinced with respect to the magnitude of the expected warming and the timeframe for realization of that warming, if I were advising the GOP I’d pull out the IPCC report for the science. IMO, the political questions for now should be centralized around risk tolerance, determining what constitutes something being “worrying” to society as a whole, and the adaptability of both civilizations and ecosystems. These aren’t climate science questions.
Also, RC has quelled many “extreme positions” such as the recent methane bomb scare stories.
Charles Stack, MPH says
Folks, I’ve been working in climate science since the mid 1970s, focused upon engineering controls for agricultural and industrial methane. Projects have won awards from the British government, and Kyoto CDM projects completed in Asia and Latin America. Most recently, I’m certified in former VP Gore’s “Climate Reality Leadership Corps” training. Trust me, I know my stuff.
There is an old story about “The Boy Who Cried Wolf,” you should all read it.
You may not like them, but you MUST engage the entire voting public, including (*gasp!*) dreaded Republicans and others, to tackle this problem. Drop the term “denier,” it is insulting and stupid. Forget the blame; many of the largest polluters are taking some of the biggest technology risks to reduce emissions. Most of all, don’t be so damn strident with your models & predictions, you don’t know everything. Australia’s gutting of their carbon laws is bound to be followed by other nations. Recent events, back-pedaling and poor responses are no less than a disaster. BTW, acidification is a MUCH worse looming problem than temperature ever will be….if we shut down photosynthesis in the ocean’s euphotic zone, it’s game over. Have a good night.
[Response: Perhaps you could point out where I claimed to know everything? Or where I advocated never talking to republicans or conservatives? Or where I cried wolf? You obviously have a beef with someone, but I suggest you track them down and take it up with them – rather than with me. – gavin]
Nickc says
Gavin, you cite Charney in the context of having high confidence … From Charney …
“However, we have not examined anew the many uncertainties in these projections, such as their implicit assumptions with regard to the workings of the world economy and the role of the biosphere in the carbon cycle. These impose an uncertainty beyond that arising from our necessarily imperfect knowledge of the manifold and complex climatic system of the earth.”
If we assume the uncertainties could be quantified as 10 in 1979, you then think that we have revealed enough to decrease the imperfect knowledge back to an uncertainty of a smaller value? Would it be correct to say that the majority of the original uncertainty still exists considering the complexity?
You also appear to say in the following response to an earlier post that GCM’s are evaluation targets:
“However, we don’t calibrate the emergent properties of the GCMs to the emergent properties derived from observations – they stay (more or less) as evaluation targets.”
I agree, they should evaluate something … I am not as sanguine as Ray though about how a mismatch ‘insight’ would be employed in the science when the belief remains that the underlying assumptions remain robust enough coupled with the scientist as advocate model you support. We have been brilliant at communicating and encouraging the downside risk without much attention to the uncertainty at all.
Mitch Golden says
Of course Feynman is not here to defend himself or interpret his words, but I think he would reject the use of the neutrino-faster-than-light experiment to demonstrate the limitations of his metaphysics as expounded by that quote. There is a simple reason one knows that Feynman couldn’t possibly have meant what this argument seems to attribute to him (namely that as soon as a new experiment comes along, any model with which it disagrees must immediately be chucked out): Feynman lived to see loads and loads of wrong experiments. The problem with the cited experiment was that it was not reproducible (odd that this word hasn’t come up in the discussion in this context so far).
[Response: I agree that Feynman likely didn’t take his dictum literally (though the spirit is right). There are many examples in his career where challenges to seemingly conclusive experimental data lead to theoretical breakthroughs (and subsequent experimental verification). But the dictum as written – and often quoted – is too simplistic to be useful as anything other than a reminder that nature is the final arbiter of our understanding. – gavin]
The broader point is that there is a fundamental difference between wrong models and wrong experiments, that the discussion in this post blurs. In general, you don’t need *any* model to invalidate an experiment. The neutrino-faster-than-light experiment was found to be wrong without any reference to Special Relativity – it was actually a rather simple electronic metering issue.
[Response: No-one would have done this experiment except for special relativity and no-one would have cared about the error without it appearing to contradict SR. Neither the experiment nor the cabling issue have any import except for that context. – gavin]
On the other hand, if the experiment had been found to be free from errors, and someone else had established the same result with different apparatus, then Special Relativity would be toast and that is it.
[Response: I very much doubt it. Given the amount of support SR has in observation and previous experiment, I would predict that an enormous amount of effort would have been devoted to finding the flaws in the concept or execution of this experiment and only after far more work had been done would people slowly come around to the idea. The fact is it is far more likely that an experiment was flawed than such a standard was wrong. Not impossible, just unlikely. – gavin]
It is certainly true that the fact that it disagrees with something as well-established as Special Relativity made people look harder for issues in the experiment – but that doesn’t change Feynman’s point.
Now there can be subtleties about whether it’s possible to define the concepts involved in a measurement without any underlying metaphysics. For this sort of thing you can read Kant I suppose. But while this might sometimes bear on discussions of quantum mechanics or cosmology, the concepts involved in climate science are straightforward enough that I think Feynman’s point stands as stated.
[Response: Experiments and observations in climate science are far less controlled than the neutrino experiment, and yet you think that they somehow rise to the level of unchallengeable? And despite plentiful examples of where challenges were ultimately correct? How odd. – gavin]
Dan Miller says
Gavin:
1. Would you agree that if we continue with BAU emissions, then there is “high confidence” of “catastrophic” AGW? If not, what climate science are you studying? I would love to be less alarmed than I am. So you might be able to add another source of error: scientists not wanting to alarm the public (Kevin Anderson cites this as a source of error in his “Going Beyond Dangerous” article).
2. I want to add another reason for flawed communications of climate science by scientists. To publish a paper in a peer reviewed journal, a scientist must be quite sure of his or her claims… perhaps 90% sure. This is fine for studies of exoplanets or black holes, but not when civilization must take steps to protect itself. If the military worked the same way, we would lose every war we ever fought. I would like to know what sea level rise will be with a 50% probability, and even a 20, 10, 5, and 1% probability.
Also, a scientist would much rather not predict something that comes true (“Type 1 Error”) than predict something that does not come true (“Type 2 Error”). They are both equally wrong, but in the Type 1 case there was no paper published!
I think this helps explain part of the reason predictions of Arctic sea ice melt were so far off and why there was/is so much focus on 2~3 feet of SLR this century, when the actual numbers could be much larger (according to Jim Hansen and others). In fact, Jim Hansen wrote about this in his “Scientific Reticence” paper.
Anonymous Coward says
“You may not like them, but you MUST engage the entire voting public, including (*gasp!*) dreaded Republicans and others, to tackle this problem.”
Since Charles Stack advises Republicans perhaps he could be bothered to study the history of the GOP, specifically the part concerning Lincoln whose administration abolished slavery with <40% of the popular vote.
For the big lie, use ALL CAPS.
Dr. Punnett says
Interesting to see Gavin is actually allowing people who fundamentally disagree with him to be included in the discussion.
[Response: Of course, because you bring so much substance to the discussion. – gavin]
Of course, his dismissive and rather nasty personality shines through as usual.
[Response: Your obligation to pay attention to anything I have to say is precisely zero. But you should be clear, it is not my personality that is dismissive, it is my attitude towards people who decide first and look at the science later. – gavin]
I really think the tipping point has been reached now. Watch for more climate scientists to begin speaking out in a similar way that J. Curry has.
[Response: We’ll see. – gavin]
prokaryotes says
John West #66
A new report is due out soon, but the last one was based on conservative estimates from 2007. Being unconvinced by them means doubting the scientific facts and the consensus on climate change. Politically, observational developments are much more relevant and widely recognised, particularly for businesses. Innovation is key to combating dangerous climate change, and we have only a short window of opportunity to act.
We are underestimating climate change and underfunding innovation
prokaryotes says
Nickc #68
This is not a rhetorical game or a political show that one could win by seizing a majority. Today’s uncertainty comes from evaluating feedbacks and tipping points, such as how much longer the ocean will keep taking up heat and CO2 (OHC), how fast non-linear developments will occur, or how long we can sustain civilisation under conservative scenario assessments such as RCP8.5.
Example of uncertainty in today’s climate science
Link
NickC #68
No. Example Study of “True Global Warming Signal” Finds “Remarkably Steady” Rate of Manmade Warming Since 1979
Charles Stack, MPH #67
Yet, a new study concludes: Climate Scientists Erring on the Side of Least Drama
John Benton says
This article contains so many errors and false comparisons the whole thing is just delusional. Whoever wrote this is not a scientist.
[Response: And a good morning to you too. – gavin]
Hank Roberts says
Al Gore on climate communication.
“Gore: Climate Dialogue ‘Not Won Yet, but Very Nearly’” — August 28, 2013
Alan Millar says
Well of course there is a mismatch between the Models outputs and the current temperature trend.
That would be a sign of a GCM that was potentially accurate.
The models MUST be either running hot or cold for periods of a decade or so if they are indeed anywhere near accurate.
It is so obvious that I don’t know why it is not stated more often (actually I do have a suspicion, but more of that later).
The reason the models cannot match the short- or medium-term global temperature records is that they are not measuring the same thing! Oh, the underlying signal is the same in both outputs, that is, the climate change signal. However, the temperature record has an added signal, either a cooling or a warming one, based on current ‘weather’ influenced by ENSO inter alia, and this additional ‘weather’ signal is only averaged in the models, if included at all.
Now scientists have concluded, this century, that this natural-variation ‘weather’ signal can be large enough to put a significant mismatch between model output and the current decadal temperature record. Had to, really, or the models would be kaput.
It is obvious really an apple doesn’t equal an orange no matter which way you cut it.
No it is not a problem that the models are running hot this century it does not prove that they are inaccurate. It doesn’t prove that they are accurate either but it is the behaviour an accurate model should be displaying.
No, the problem for the models is their hindcast for the 20th century. The models track the temperature record really, really well during that period. The only trouble is, they shouldn’t if they were anywhere near accurate!
It is impossible for an accurate model to track the temperature record in this way in the short to medium term unless the ‘weather’ signal was neutral for nearly the whole time.
Also the forcing effect of the increasing CO2, taken over the whole 20th century, was on average weaker than this century due to the ramping up of CO2 emissions during the latter half of that century. This should have made it easier and more obvious for the ‘weather’ signal to create a mismatch between the two outputs.
So, it is not the current mismatch that is a problem for the models, it is the previous excellent correlation between the global temperature record and the models outputs that is the problem for the models.
That behaviour is absolutely impossible for an accurate model.
Alan
[Response: Hmm, an interesting and testable argument. Well, let’s go to the tape:
Umm… no obvious sign of some huge increase in fidelity prior to 2000. So that would be a “no” then. – gavin]
MARodger says
[edit – no disrespect intended, but I’d prefer if comments focussed on substance, not pedigree]
Lichanos says
@ #69 – Gavin’s Response, again:
[Response: I agree that Feynman likely didn’t take his dictum literally… But the dictum … is too simplistic to be useful as anything other than a reminder that nature is the final arbiter of our understanding. – gavin]
I find this remark amazing coming from a scientist. I would think that this point, that nature is the final arbiter, is of monumental importance, and he trivializes it. The history of early modern science is of a tremendous effort to establish exactly this principle.
For Gavin, it is simply a ‘reminder’ of something that is presumably obvious to all. But reading and watching the controversy, I’d say it’s a reminder that is not heard often enough.
Ray Ladbury says
Lichanos@79,
Your amazement amazes me. Gavin is NOT saying that nature is not the ultimate arbiter. Of course it is. However, the question is how one responds to a discrepancy: does one check the measurement again, modify the theory, or scrap the theory? It is not a simple matter that if prediction diverges from observation then the theory must be wrong. The theory may be more right than wrong. I wonder why that is so difficult for you to get?
Hank Roberts says
> nature is the final arbiter
Begin by forming identical Earths in a thousand identical Solar systems.
Run each of those over time up to the present.
Nature doesn’t give you one answer.
Nature gives you a range of possible outcomes.
Our species got smart enough to fiddle with nature during an unprecedented single opportunity — everything came together to make us possible here.
What odds that we can improve on the outcome?
What odds that we can make things worse?
Nature says — do you feel lucky, punks? Do ya?
And rattles the dice.
Mitch Golden says
Given that Feynman was a pretty smart guy and a pretty experienced physicist, I think one has to be pretty careful in interpreting his words. I don’t think it’s likely he said something that is just trivially useless. I am sure Feynman would agree with us (as we’re in fact agreeing) that one uses models both to decide what experiments to do and to evaluate how quickly to trust them. (As in, we’d need a damned good, repeated experiment before we’d throw out Special Relativity.) But we can determine whether the experiment is *right* or not without any reference to Special Relativity, and that is what makes the experiment different from the model.
[Response: Experiments and observations in climate science are far less controlled than the neutrino experiment, and yet you think that they somehow rise to the level of unchallengeable? And despite plentiful examples of where challenges were ultimately correct? How odd. – gavin]
It’s “odd” because it’s not what I was saying. I am simply pointing out that it is possible to evaluate climate experiments and observations without getting into the sort of philosophical discussions one sometimes has to have when dealing with experiments in quantum mechanics or special relativity. “Temperature”, “radiation”, “water vapor” are all pretty well-defined concepts in this context.
I agree that physics experiments are far better controlled than those of climate – which means that the former can generally be trusted much more quickly than the latter. But it’s still the case that you just don’t need to look at a climate model to evaluate the correctness of the experiment. For example, ultimately the 1990s UAH temperature data was found to be wrong because of the technical mistakes that were being made, not because it disagreed with models – though of course it did.
[Response: I think you are missing the point I am making – it is the mismatches between experiment and theory that drive people to look harder for overlooked technical issues or interpretations. It is true that people often find bugs in code or miscalibrations in equipment on their own with no external prodding, but people are more strongly motivated to do so when there is mismatch of the sort we are discussing. Mismatches are clues that we should pay heed to. – gavin]
SecularAnimist says
Charles Stack wrote: “I advise policy-makers within the GOP on environmental matters … I cannot begin to tell you the damage that has occurred by the past exaggeration of climate predictions.”
I would respectfully suggest that “policy-makers within the GOP” have suffered much more “damage” from the millions of dollars in campaign contributions they receive from the fossil fuel corporations.
Perhaps you would care to explain exactly how these alleged “past exaggerations of climate predictions” compelled numerous GOP elected officials to deliberately and repeatedly lie about climate science, while seeking to abuse their positions of authority to defund climate research and attack and destroy the careers of leading climate scientists.
Really, if you want your portrayal of GOP politicians as the well-meaning, innocent victims of “exaggeration” by climate scientists to pass the laugh test, you’re going to have to work harder.
Geoff Wexler says
I am not sure whether it is fair to lay the blame and credit for that claim onto Popper, because it was probably a widely held simplification before he came on the scene. As evidence for this, there is no entry for Popper in the index to Duhem’s book, The Aim and Structure of Physical Theory, which Gavin may have quoted above, and which was written just before 1906.
Chapter 6 has a whole series of case studies which demonstrate problems with the above simplification. The latter appears to have been quite fashionable at the time of writing. Because Duhem was a good theoretical physicist, he was in a better position than some philosophers to choose realistic examples drawn from science. He says:
Of course the discussion does impinge on the usefulness of Popper’s falsifiability model which came later.
———————-
* Popper would probably have added
” .. if it agrees with experiment it might still be wrong”
as well as his falsifiability criterion ***
** He did not intend to include biology in that remark.
*** This may have been introduced by one of Bertrand Russell’s graduate students, but I have lost the reference.
Mal Adapted says
Charles Stack:
I’m skeptical of your claim, Mr. Stack. Which GOP policy-makers, exactly? Have you been paid as a consultant with funds from the Republican party or individual officials? Can you link to any reports, position papers, or other documents you’ve authored that would give us any reason to take you seriously? The more verifiable details you can provide, the better. Thank you.
Doug Bostrom says
I really think the tipping point has been reached now. Watch for more climate scientists to begin speaking out in a similar way that J. Curry has.
Something escaping from the diode bubble.
Jacob says
Is it true that most (a great majority) of models run “too hot”, i.e. the discrepancy between model projections and measured temp, so far, is that projections are higher than measurements?
The general explanation, above, about model-measurement mismatch ignores the specifics of this case.
Can we infer anything from the characteristic of this particular mismatch?
SteveF says
A new paper featuring models and observations and the like :)
http://www.pnas.org/content/early/2013/09/10/1305332110.abstract
Jacob says
About ‘Observation error”.
Do you think that it is possible that there are systematic or considerable errors in the temperature data sets?
If not, could we, in this case, eliminate this possibility from consideration ?
[Response: For the surface temperature data the picture is quite robust – using different methods, different subsets of input data, including corrections or not – so I doubt that there is much uncertainty there that has not already been explored. In some regions there is more uncertainty than others (the arctic, tropical pacific, Africa) but the global picture is clear and consistent with multiple independent sources of information. – gavin]
Hank Roberts says
> “I advise policy-makers within the GOP
> on environmental matters …
> the damage that has occurred by
> the past exaggeration of climate predictions.”
Some advisor surely has been feeding the GOP policymakers spin for a very long time. When did the GOP policymakers stop getting bad advice?
I know your name’s not Surely. But your quote says
> … I can not tell you …
So. Who can? Where were the GOP policymakers getting the exaggerations? Who was it they believed, and was that advisor doing the exaggerating, or merely echoing it uncritically?
Figuring out how the GOP policy makers got such bad advice for so long is a worthwhile study. Maybe not here.
Tokodave says
81, Hank Roberts…Also expressed as “Mother Nature Bats Last.”
And she always bats 1.000. Always.
http://en.wikipedia.org/wiki/Robert_K._Watson
David B. Benson says
Far better commentary than Feynman’s misunderstood remarks on the role of theory and experiment is from Eugene Wigner:
https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences
The paper itself is the first reference in the Wikipedia article. I strongly recommend taking the time to read this short paper.
Jacob says
Could you advance from the generic discussion, above, of model-reality mismatch to a more focused discussion of this specific case: the climate models vs. observed temperatures mismatch?
You agree that observation error can be ruled out in our case.
So, what would be, in your opinion, the most plausible area where you would look for an explanation of this particular mismatch?
Ray Ladbury says
Jacob@93,
Please refer to the chart in Gavin’s in-line response to #77.
Given that chart, my question is “what mismatch?”
Given the results of Foster and Rahmstorf 2011, my question is “what mismatch?”
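The Foster and Rahmstorf point is easy to illustrate with a toy calculation: regress a known natural-variability index out of a synthetic temperature series and the residual warming is much steadier. Everything below is invented for the sketch (the fake “ENSO” index, the trend, the coefficient); it illustrates the regression-adjustment technique, not their actual analysis.

```python
import random

random.seed(1)

# Synthetic data: trend + a "natural variability" term + small noise.
N, TREND, COEF = 120, 0.015, 0.08         # all numbers invented
enso = [random.gauss(0.0, 1.0) for _ in range(N)]      # fake ENSO index
temp = [TREND * t + COEF * enso[t] + random.gauss(0.0, 0.02)
        for t in range(N)]

def ols_slope(xs, ys):
    # Ordinary least-squares slope of ys against xs.
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

t_axis = list(range(N))
raw_slope = ols_slope(t_axis, temp)
# Estimate the ENSO coefficient on detrended data, then subtract it out.
detrended = [y - raw_slope * t for t, y in zip(t_axis, temp)]
beta = ols_slope(enso, detrended)
adjusted = [y - beta * e for y, e in zip(temp, enso)]

def spread(series):
    # Standard deviation of residuals about the series' own linear trend.
    s = ols_slope(t_axis, series)
    resid = [y - s * t for t, y in zip(t_axis, series)]
    m = sum(resid) / len(resid)
    return (sum((r - m) ** 2 for r in resid) / len(resid)) ** 0.5

print(f"estimated index coefficient: {beta:.3f} (true value {COEF})")
print(f"scatter about trend, raw:      {spread(temp):.3f}")
print(f"scatter about trend, adjusted: {spread(adjusted):.3f}")
```

The recovered coefficient lands near the true value, and the adjusted series scatters far less about its trend, which is the “remarkably steady” underlying signal the adjustment is meant to expose.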
t_p_hamilton says
Jacob (and many, many others) seems to think that model A, which, when run from 1900 to the present, reproduces the relatively flat global average surface temperature record of the past decade, is a better match to reality than model B, which does not. That is a flawed comparison (quoting the end of Gavin’s point 3):
“Flaws in comparisons can be more conceptual as well – for instance comparing the ensemble mean of a set of model runs to the single realisation of the real world. Or comparing a single run with its own weather to a short term observation. These are not wrong so much as potentially misleading – since it is obvious why there is going to be a discrepancy, albeit one that doesn’t have much implications for our understanding.”
Obvious to people who understand weather and climate, that is. If it is not obvious to a person why a “mismatch” between a model and the temperature record is expected, this is a clue that their understanding is far below what it should be for a well-read, science-literate person who claims to be interested in this issue. More reading of high-quality, educational sources is required.
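The ensemble-mean versus single-realisation point quoted above can be demonstrated with a toy calculation: give every “run” the same forced warming trend plus its own internal-variability noise, and single runs show decade-long trends all over the place while the ensemble mean stays close to the forcing. This is a minimal sketch with invented numbers, not output from any GCM.

```python
import random

random.seed(0)

# Toy setup (all numbers invented): 100 runs, each 30 "years" of the
# same forced trend plus its own red-ish internal-variability noise.
YEARS, TREND, NOISE_SD, N_RUNS = 30, 0.02, 0.1, 100   # degC/yr, degC

def one_run():
    # Forced signal + AR(1) noise with lag-1 autocorrelation 0.5.
    noise, series = 0.0, []
    for t in range(YEARS):
        noise = 0.5 * noise + random.gauss(0.0, NOISE_SD)
        series.append(TREND * t + noise)
    return series

runs = [one_run() for _ in range(N_RUNS)]
ens_mean = [sum(r[t] for r in runs) / N_RUNS for t in range(YEARS)]

def decadal_trend(series):
    # Ordinary least-squares slope over the final 10 "years".
    ys = series[-10:]
    xbar, ybar = 4.5, sum(ys) / 10
    return sum((x - xbar) * (y - ybar)
               for x, y in zip(range(10), ys)) / 82.5

single_trends = [decadal_trend(r) for r in runs]
print(f"ensemble-mean decadal trend: {decadal_trend(ens_mean):+.3f}")
print(f"single-run decadal trends:   {min(single_trends):+.3f} "
      f"to {max(single_trends):+.3f}")
```

The single-run trends span a wide range (some negative, i.e. a “hiatus”) even though every run has the identical forcing, while the ensemble-mean trend sits near the forced value, so comparing the smooth ensemble mean to the one noisy realisation we actually live in guarantees a “mismatch”.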
Hank Roberts says
…the discrepancy of our results … highlights the wide divergence that now exists in recent values of G.
DOI:10.1103/PhysRevLett.111.101102
hat tip to: http://www.newscientist.com/article/dn24180-strength-of-gravity-shifts–and-this-time-its-serious.html
Jacob says
Continuing the map analogy:
maps are not the terrain, they are a model, but they are a useful model. They are useful because the relation of map to terrain is well determined and known, and meticulously maintained. When there is a mismatch between map and terrain (e.g., a terrain feature omitted from the map) we don’t say: “Duh, maps are a model, not the terrain, don’t expect a full match”. We identify it as a model error and rush to correct it (update the map).
The trouble with climate models is – we don’t know, maybe even can’t know, if models really represent climate, and what the relation is between model and nature (climate), i.e. what is the extent of the match, or, in which area is the model more reliable and in which less.
Mitch says
[Response: I think you are missing the point I am making – it is the mismatches between experiment and theory that drive people to look harder for overlooked technical issues or interpretations. It is true that people often find bugs in code or miscalibrations in equipment on their own with no external prodding, but people are more strongly motivated to do so when there is mismatch of the sort we are discussing. Mismatches are clues that we should pay heed to. – gavin]
We are in complete agreement on this. My concern is that you seem to think that Feynman would have disagreed, or that his statement was somehow incomplete. It is only this I am taking issue with.
Again, here is the full quote from Wikipedia:
“In general we look for a new law by the following process. First we guess it. Then we compute the consequences of the guess to see what would be implied if this law that we guessed is right. Then we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is – if it disagrees with experiment it is wrong. That is all there is to it.”
The emphasis here is on which thing falsifies the other. He’s not going into how one decides whether an experiment is right. In fact, he doesn’t even mention the possibility of wrong experiments, but it would be foolish, I am sure you agree, to presume that he doesn’t know what a wrong experiment is.
He says nothing that implies, as you seem to be stating, that hunches, mismatch with other experiments, mismatch with theory, and a host of other considerations won’t play a role in the decision of what to look at when trying to decide whether an experiment is right. So you are correct in this – but I don’t think you have any argument with Feynman.
BTW, readers might be interested in the book “How Experiments End” by Peter Galison, who discusses exactly these issues.
http://www.amazon.com/How-Experiments-End-Peter-Galison/dp/0226279154/ref=sr_1_1?ie=UTF8&qid=1379442213&sr=8-1&keywords=%22how+experiments+end%22
pete best says
I thought that GCMs were useful, but that the real story of climate is the one told in the paleoclimatic record. The difference between humankind’s climate change and natural climate change is obvious, really.
GHGs are rising faster than at any known time in the past (10 to 50x as fast). Humans emit a lot of warming and cooling agents, which create their own novel global climatic change, not all of it warming of course.
Models tell us interesting things about possible future climate; useful and interesting they are, in the main, but not gospel. One question is how climate models fare against recent warming over the past 50 years. Useful, I would suggest, but certainty is not what they are run for, surely.
Hank Roberts says
> When there is a mismatch between map and terrain
Always
> We identify it as a model error
Nope; models aren’t maps, they’re tools.
Run the model to generate the “map” — which is probabilities, not certainties.
Run out enough scenarios; sure, some will match some of what happened on this Earth when looked at in retrospect.
The model tells you you’re in the same ballpark and how the game is played; run the model multiple times and you get a range of outcomes, and hope that, in retrospect, reality fell somewhere among those scenarios.
A map describes some few specific details and you can get ground truth by looking.