From time to time, there is discussion about whether the recent warming trend is due just to chance. We have heard arguments that a so-called ‘random walk’ can produce similar hikes in temperature (is there any reason why the global mean temperature should behave like the displacement of a molecule in Brownian motion?). The latest in this category of discussions was provided by Cohn and Lins (2005), who in essence pitch statistics against physics. They observe that tests for trends are sensitive to the expectations, i.e. the choice of the null hypothesis.
Cohn and Lins argue that long-term persistence (LTP) makes standard hypothesis testing difficult. While it is true that statistical tests depend on the underlying assumptions, it is not a given that statistical models such as AR (autoregressive), ARMA, ARIMA, or FARIMA provide an adequate representation of the null distribution. All of these statistical models represent some type of structure in time, be it as simple as a serial correlation, persistence, or more complex recurring patterns. Thus, the choice of model determines what kind of temporal pattern one expects to be present in the process analysed. Although these models tend to be referred to as ‘stochastic models’ (a random number generator is usually used to drive their behaviour), I think this is a misnomer, and that the labels ‘pseudo-stochastic’ or ‘semi-stochastic’ are more appropriate. It is important to keep in mind that these models are not necessarily representative of nature – they are just convenient models which to some degree mimic the empirical data. In fact, I would argue that all these models are far inferior to the general circulation models (GCMs) for the study of our climate, and that the most appropriate null distributions are derived from long control simulations performed with such GCMs. The GCMs embody much more physically based information and provide a physically consistent representation of the radiative balance, energy distribution and dynamical processes in our climate system. No GCM produces a global mean temperature hike like the one observed unless an enhanced greenhouse effect is taken into account. The question of whether the recent global warming is natural or not belongs to the ‘detection and attribution’ topic in climate research.
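To illustrate what such a ‘semi-stochastic’ null model amounts to in practice, here is a minimal sketch (my own toy example with invented parameter values, not the model fitted by Cohn and Lins): generate many synthetic AR(1) series that contain no trend by construction, and look at how large a linear trend serial correlation alone can produce.

```python
# Illustrative Monte Carlo null distribution for a linear trend under an AR(1)
# ("red noise") model. Parameter values are invented for illustration and are
# NOT those of Cohn and Lins (2005) or of any GCM.
import numpy as np

rng = np.random.default_rng(0)
n_years, phi, sigma, n_sim = 150, 0.6, 0.1, 5000
t = np.arange(n_years)

trends = np.empty(n_sim)
for i in range(n_sim):
    eps = rng.normal(0.0, sigma, n_years)
    y = np.empty(n_years)
    y[0] = eps[0]
    for k in range(1, n_years):
        y[k] = phi * y[k - 1] + eps[k]      # AR(1): serial correlation, no trend
    trends[i] = np.polyfit(t, y, 1)[0]      # least-squares slope of each series

# 95% range of trends that persistence alone can produce under this null model
print(np.percentile(trends, [2.5, 97.5]))
```

An observed trend would then be called ‘significant’ only if it falls outside this spread – which makes it obvious how much the answer hinges on the assumed null model.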
One difficulty with the notion that the global mean temperature behaves like a random walk is that it would imply a much more unstable system, with hikes similar to the one we now observe occurring throughout our history. However, the indications are that the historical climate has been fairly stable. An even more serious problem with Cohn and Lins’ paper, as well as with the random walk notion, is that a hike in the global surface temperature would have physical implications – be it energetic (Stefan-Boltzmann, heat budget) or dynamic (vertical stability, circulation). In fact, one may wonder whether an underlying assumption of stochastic behaviour is representative at all, since, after all, the laws of physics seem to rule our universe. On the very smallest scales, processes obey quantum physics and events are stochastic. Nevertheless, the probability of their position or occurrence is determined by a set of rules (e.g. the Schrödinger equation). Still, on a macroscopic scale, nature follows a set of physical laws, as a consequence of the way the probabilities are determined. After all, changes in the global mean temperature of a planet must be consistent with the energy budget.
Is the question of LTP then relevant for testing a planet’s global temperature for a trend? To some degree, all processes involving a trend also exhibit some LTP, and it is also important to ask whether the test by Cohn and Lins involves circular logic: for our system, the forcings increase the LTP, so an LTP estimate derived from the data already contains the forcings and is not a measure of the intrinsic LTP of the system. The real issue is the true degrees of freedom – the number of truly independent observations – and the question of independent and identically distributed (iid) data. Long-term persistence may imply dependency between adjacent measurements, as slow systems may not have had time to change appreciably between two successive observations (more or less the same state is observed in successive measurements). Are there reasons to believe that this is the case for our planet? Predictions for the subsequent month or season (seasonal forecasting) are tricky at higher latitudes but reasonably skilful regarding the El Niño Southern Oscillation (ENSO). However, it is extremely difficult to predict ENSO one or more years ahead. Year-to-year fluctuations thus tend to be difficult to predict, suggesting that LTP is not the ‘problem’ with our climate. On the other hand, there is also the thermal momentum of the oceans, which implies that the radiative forcing up to the present time has implications for the following decades. Thus, in order to be physically consistent, arguing for the presence of LTP also implies an acknowledgement of past radiative forcing, in favour of an enhanced greenhouse effect: if there were no trend, the oceanic memory would not be very relevant (the short-term effects of ENSO and volcanoes would destroy the LTP).
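One standard way of expressing this ‘true degrees of freedom’ point is the textbook rule of thumb for the effective sample size of a series with lag-1 autocorrelation r1 (a generic approximation, not a result from the Cohn and Lins paper):

```python
# Rule-of-thumb effective sample size under lag-1 autocorrelation r1:
# N_eff ~= N * (1 - r1) / (1 + r1). Numbers below are purely illustrative.
def effective_sample_size(n, r1):
    return n * (1.0 - r1) / (1.0 + r1)

for r1 in (0.0, 0.3, 0.6, 0.9):
    print(f"r1 = {r1}: N_eff ~ {effective_sample_size(150, r1):.0f}")
# 150 annual values shrink to ~81 independent values at r1 = 0.3
# and to only ~8 at r1 = 0.9
```

The stronger the persistence one assumes, the fewer independent observations one effectively has, and the harder it becomes to call any trend ‘significant’ – which is exactly why the choice and calibration of the persistence model matters so much.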
Another common false statement, which some contrarians may also find support for in the Cohn and Lins paper, is that the climate system is not well understood. I think this statement is somewhat ironic, but the people who make it must be allowed to speak for themselves. If this statement were generally true, then how could climate scientists make complex models – GCMs – that replicate the essential features of our climate system? The fact that GCMs exist and that they provide a realistic description of our climate system is overwhelming evidence demonstrating that such a statement must be false – at least concerning the climate scientists. I’d like to reiterate this: If we did not understand our atmosphere very well, then how can a meteorologist make atmospheric models for weather forecasts? It is indeed impressive to see how some state-of-the-art atmospheric and oceanic models, and coupled atmosphere-ocean GCMs, reproduce features such as ENSO and the North Atlantic Oscillation (or Arctic or Antarctic Oscillation) on the larger scales, as well as smaller-scale systems such as mid-latitude cyclones (the German model ECHAM5 really produces impressive results for the North Atlantic!) and tropical instability waves, with such realism. The models are not perfect and have some shortcomings (e.g. clouds and the planetary boundary layer), but these are not necessarily due to a lack of understanding, but rather due to limited computational resources. Take an analogy: how the human body works, consciousness, and our minds. These are aspects the medical profession does not understand in every detail due to their baffling complexity, but medical doctors nevertheless do a very good job curing us of diseases, and shrinks heal our mental illnesses.
In summary, statistics is a powerful tool, but blind statistics is likely to lead one astray. Statistics does not usually incorporate physically-based information, but derives an answer from a set of given assumptions and mathematical logic. It is important to combine physics with statistics in order to obtain true answers. And, to return to the issue I began with: It’s natural for molecules under Brownian motion to go on a hike through their random walks (this is known as diffusion), however, it’s quite a different matter if such behaviour was found for the global planetary temperature, as this would have profound physical implications. Nature is not ‘trendy’ in our case, by the way – because of the laws of physics.
Update & Summary
This post has provoked various responses, both here and on other Internet sites. Some of these responses have been very valuable, but I believe that some are based on a misunderstanding. For instance, some seem to think that I am claiming that there is no autocorrelation in the temperature record! For those who have this impression, I would urge you to please read my post more carefully, because that is not my message. The same comment goes for those who think that I’m arguing that the temperature is iid, as this is definitely not what I say. It is extremely important to understand the message before one can make a sensible response.
I will try to summarise my arguments and at the same time address some of the comments. Planetary temperatures are governed by physics, and it is crucial that any hypotheses regarding their behaviour are both physically and statistically consistent. This does not mean that I’m dismissing statistics as a tool. Setting up such statistical tests is often a very delicate exercise, and I do question whether the ones in this case provide a credible answer.
Some of the responses to my post on other Internet sites seem to completely dismiss the physics. Temperature increases involve changes in energy (temperature is a measure of the bulk kinetic energy of the molecules), thus the first law of thermodynamics must come into consideration. ARIMA models are not based on physics, but GCMs are.
When ARIMA-type models are calibrated on empirical data to provide a null distribution which is then used to test the same data, the design of the test is likely to be seriously flawed. To reiterate: since the question is whether the observed trend is significant or not, we cannot derive a null distribution using statistical models trained on the same data that contain the trend we want to assess. Hence, the use of GCMs, which both incorporate the physics and are not prone to circular logic, is the appropriate choice.
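A small synthetic example makes the circularity concrete (my own illustration with invented numbers): construct a series that is, by design, a deterministic trend plus independent noise, and then estimate its persistence the naive way.

```python
# Sketch of the circularity problem: the series below contains a deterministic
# trend plus iid noise by construction, yet the trend itself masquerades as
# "persistence" when estimated from the raw record. Numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 150
t = np.arange(n)
y = 0.007 * t + rng.normal(0.0, 0.15, n)             # trend (deg C/yr) + iid noise

r1_raw = np.corrcoef(y[:-1], y[1:])[0, 1]            # persistence with trend left in
resid = y - np.polyval(np.polyfit(t, y, 1), t)       # remove the fitted trend
r1_detr = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # persistence of the residuals

print(f"lag-1 autocorrelation, raw: {r1_raw:.2f}, detrended: {r1_detr:.2f}")
# The raw estimate is large even though the noise is iid by construction:
# a null model calibrated on the raw record absorbs the very trend it is
# supposed to test.
```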
There seems to be a mix-up between ‘random walk’ and temperatures. Random walk typically concerns the displacement of a molecule, whereas the temperature is a measure of the average kinetic energy of the molecules. The molecules are free to move away, but the mean energy of the molecules is conserved, unless there is a source (first law of thermodynamics). [Of course, if the average temperature is increased, this affects the random walk as the molecules move faster (higher speed).]
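To make the distinction concrete, here is a toy comparison (my own illustration with arbitrary parameters, not taken from any of the papers discussed): feed the same random shocks into a pure random walk and into a process with a restoring term of the kind an energy budget implies.

```python
# Toy contrast between a pure random walk and a damped ("mean reverting")
# process, a crude analogue of radiative damping. Parameter values are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
eps = rng.normal(0.0, 0.1, n)

walk = np.cumsum(eps)                          # random walk: the shocks accumulate forever

damped = np.zeros(n)
for k in range(1, n):
    damped[k] = 0.8 * damped[k - 1] + eps[k]   # restoring term pulls it back toward zero

print(f"std of walk: {walk.std():.2f}, std of damped process: {damped.std():.2f}")
# The walk drifts arbitrarily far from its starting point; the damped process
# stays bounded near its equilibrium, as an energy budget demands.
```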
Steve Latham says
Setting aside temporal autocorrelation for now, Cohn and Lins should reject their null hypothesis that temps can follow random walks in an unconstrained way, because multicellular life would have been wiped out by now if there were no negative feedbacks. Then they should construct a new null hypothesis which includes some feedbacks. What would be the nature of those feedbacks? They would have to build something like today’s GCMs to encompass them.
Vahan Hartooni says
Meteorologists make short-term predictions. What about long-term? Can you predict the location of a thunderstorm a few years from now? A few months?
[Response:Short-term predictions depend more on the initial conditions, whereas long-term predictions depend on changes in the boundary conditions, the forcings. You cannot predict the exact weather (atmospheric state) because of the chaotic nature of the atmosphere, so a specific thunderstorm cannot be predicted. It may, however, be possible to predict the frequency of future thunderstorms, given suitable models. It’s a bit like the seasonal cycle: I can easily predict that the summer will be warmer than current (winter) conditions, but I cannot say much about a specific date during the summer. -rasmus]
I’m not a Global Warming skeptic, I’m just angered by how the scientific community is handling this subject. G.W. (Global Warming) is too politicized; by that I mean the scientists who believe in G.W. always interpret their evidence to agree with their point of view, and skeptics of G.W. interpret their evidence to agree with their point of view. Also, the alarmists (G.W. believers) are supported by politicians and environmentalists, because they know that the study they are supporting will agree with their opinions. Same with the skeptics: they are paid by Oil and Industrial companies because the study will favor their reasoning. Global Warming is happening, but it’s a scientific circus.
[Response:The issue has been politicised, but my view is that it’s because of people outside the scientific community. Think about the scientists (like me) who get caught in this ‘cross-fire’ (I honestly did not envisage this when I entered the field of climate science – call me a nerd if you like). It has really been frustrating to see our field become distorted through the media, and this is one reason why RC came about. Here at RC, we aim to stay out of the political aspects and focus on the scientific issues. For the political aspects you should visit Prometheus. -rasmus]
Douglas Watts says
Perhaps I’m not getting it. Objects the size of baseballs do not just randomly leap into the air from a position of repose. Fires do not just randomly start in the absence of fuel. Rain does not just randomly fall from a clear blue sky. A multi-decade trend in global temperature must have some physical cause. The increased energy has to come from somewhere, be it natural or anthropogenic or a combination of both. An increase in CO2 must come from somewhere; an increase in solar radiation must come from somewhere. How can a measured increase in CO2 or solar radiation or planetary albedo be “random”? What does random even mean in such a macroscopic context? Perhaps this is why I’m not getting it.
Doug Watts
[Response:You are getting it! I think it’s others who don’t. The universe is not random. Physics rule. -rasmus]
Alastair McDonald says
Re #3 I have to say that I agree with Douglas Watts. Scientists seem to believe in these natural cycles, but they must have a cause. They seem to admit that El Nino exists, but deny the tidal atmospheric effects of the sun and the moon. This shows that scientific opinion is driven more by public opinion than by the facts.
Cheers, Alastair.
[Response:???. But atmospheric and oceanic tides are well-established now, and explained in terms of physics. El Nino also has a theory (or several). -rasmus]
Michael Tobis says
Re #2; it is reasonable to conclude that at least one side of the debate is behaving inappropriately.
However, it is not so easy to conclude that both sides are unreasonable. One side may be advancing positions so far outside what is supported by the evidence, that the other side is essentially forced to be adamant about what is supported by the evidence.
In that case, the unreasonable side will have achieved a public relations victory if you perceive both sides as comparably unreasonable.
Consider the case of evolution, or the case of the health impacts of tobacco. Organized groups spend a great deal of effort raising doubt and confusion in opposition to sound scientific conclusions. The only way to decide these questions is to appeal to the evidence, and this is difficult in a case where one side is obfuscating the evidence.
This sort of polarization does not occur very often within the scientific community itself. Rather, if you see something like this going on, one side usually has the agreement of the scientific community, and typically that side, though not by any means infallible, has dramatically more useful and informed opinions. The other side will be expressing commercial interests and/or strongly held philosophical preconceptions but will not be expressing science.
In such cases, every time a member of the public sees a moral equivalence between the two groups constitutes a defeat for truth.
If you don’t have the time to investigate the evidence for yourself you are best off looking to established scientific groups in related disciplines for advice on where the evidence truly points.
[Response:You only have to trawl the scientific literature! Then you may ask whether the scientific community is reliable. Think about the state of our modern civilisation: what would it have been without scientific progress? I would argue that many things taken for granted in our modern society have piggybacked on science. Science (here used in a wider meaning, including engineering) has also formed our culture and enabled you guys to read this blog. Nevertheless, this is a far cry from the argument that the global mean temperature behaves like a random walk. -rasmus]
Alastair McDonald says
Rasmus wrote “It’s natural for molecules under Brownian motion to go on a hike through their random walks (this is known as diffusion), however, it’s quite a different matter if such behaviour was found for the global planetary temperature, as this would have profound physical implications.”
Well!
Global warming is caused by the emissions from greenhouse gases which radiate based on their excitation, not on their temperature. Brownian motion proves that atoms exist. It does not explain the emissions from greenhouse gases which need quantum mechanics to understand how they operate.
[Response:Right, there are two aspects to this radiation: the continuum associated with the atoms’ kinetic energy and the band absorption associated with the atomic electron configurations. The excitation of the molecules is caused by the absorption of a photon, as they cannot keep losing energy through radiation without gaining some. Quantum physics determines what the electronic levels are, i.e. at which frequencies the line spectra lie. But, in the real world, the lines broaden into frequency bands, due to several complicating factors. -rasmus]
Douglas Watts says
The paper at hand seems to draw an analogy between Brownian movement and natural variation in the Earth’s climate and suggests researchers adopt as the null hypothesis that observed climate changes are due to “random” forces akin to those ascribed to Brownian movement (or the ‘drunkard’s walk’).
George Gamow gave a nice illustration of how Brownian movement of air molecules could conceivably cause all of the air in a room to collect in one corner of the room, thus suffocating the person sitting in a chair in the opposite corner of the room reading Gamow’s book (One, Two, Three … Infinity). Gamow then calculated the probability of this occurring, which was exceedingly small, and therefore, provided a good fit with empirical observation.
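A rough back-of-the-envelope version of the size of that probability (my own round numbers, not Gamow’s exact figures): if each of N molecules is independently on either side of the room with probability 1/2, the chance that all of them are in the same half at once is (1/2)^N.

```python
# Back-of-the-envelope estimate of Gamow's probability; N is only an assumed
# order of magnitude for the number of molecules in an ordinary room.
import math

N = 1e27                          # assumed number of molecules in a room
log10_p = N * math.log10(0.5)     # log10 of (1/2)**N
print(f"log10(probability) ~ {log10_p:.1e}")
# about -3e26: a probability so small that, for all practical purposes,
# the event never happens
```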
[Response:The probability of this happening is infinitesimally small, so for practical purposes it can be regarded as an impossibility (unless you are a fan of the Hitchhiker’s Guide to the Galaxy). -rasmus ]
Consistent with Gamow’s line of reasoning and evidence, one can say it is possible a large, documented variation in climate is “random” in Gamow’s sense of the word, but the probability of such a purely random event actually occurring on Earth is very low (ie. about the same probability as all of Earth’s atmosphere collecting in one corner of the Earth). As such, the use of such an event as the null hypothesis seems to be stretching the matter rather thin.
Moreover, invoking a purely “random” cause for observed climate trends on Earth seems to represent a confusion of logical type. Brownian movement at the molecular scale is not caused by forces unknown to science. The forces are very well known. However, the collisions are so numerous as to make predictions of exactly where one molecule will end up after 5 minutes so difficult that we call this motion “random.” But even within this “random” pattern of molecular movement at the microscopic scale, we can, like George Gamow, safely presume the unlikelihood of all of the air molecules in a room randomly collecting in one small corner and suffocating us.
Empirical observation supports this. The sun cannot “randomly” increase its total energy output because its output is due to hydrogen fusion and there is no known physical explanation for how the sun could suddenly burn hydrogen in its core at a lower rate, or how the energy output of each fusion reaction could suddenly become greater or less. If empirical observations showed an increase in solar radiation over time, it would be inappropriate to use as a null hypothesis that this change is “random” in the sense that this change has no physical explanation or cause.
A similar argument could be made for the orbits of the planets. If Mars suddenly flew out of its orbit and sped out of the solar system, few astronomers would cite “randomness” as the null hypothesis for this phenomenon. At the galactic, solar and planetary scale very few, if any, phenomena occur “randomly.” Stars with plenty of hydrogen left in their cores to burn do not just “randomly” stop fusing hydrogen into helium at a given moment (or alternately, become supernovae).
Oceans do not just “randomly” increase 10C in temperature or precipitate all of their dissolved salts onto the ocean floor. Ice sheets 100,000 years old do not just “randomly” melt in a few decades. Asteroids do not “randomly” crash into Earth. Rather, physical forces require asteroids crash into Earth if their orbital path coincides with the position of the Earth. In fact, if “randomness” actually existed at the megascopic, planetary scale we would have to envision an asteroid heading straight to Earth and randomly “jumping” out of its trajectory at the last possible second in defiance of gravity.
This is why I think the use of the “randomness” concept drawn from Brownian movement at the molecular level is inappropriate for planetary, megascopic phenomena like climatic trends and this use is not supported by empirical observation. If I’ve made a mistake here, I would welcome a correction. Thanks
[Response:I agree with you! -rasmus]
TCO says
The issue of autocorrelation (random walk is a very particular form of autocorrelation) has been discussed in detail on the Climate Audit website. It’s kind of ripping apart a straw man to argue against temps as a complete random walk. The question is, is there some autocorrelation character to the data? Comparison versus ARIMA models suggests that temps are not iid (independent noise) or iid on a trend. Various physical rationales (El Nino, ocean effects, damage to trees (for proxies)) suggest themselves to explain the degree of autocorrelation. Also, those who want to assume that the data is iid ought to put a little onus on themselves to prove it…rather than to expect the McKintyres of the world to disprove the converse. After all…it is the iiders who are advancing temperature reconstructions with various assumptions of iid in the standard deviations and such.
[Response:This is why it’s best to use control integrations with GCMs to obtain null distributions. -rasmus]
Tim McDermott says
adding to #7: It seems to me that the analogy between Brownian motion and weather/climate is that next month’s weather is random in the Brownian sense, but the climate must obey the laws of physics, just like a gas obeys PV = nRT.
[Response:Not random, but chaotic and unpredictable. -rasmus]
What happens to the random walk of a smoke particle if we increase temperature of the system? I don’t know, but I do know that, volume held constant, the pressure will increase. And, barring some as yet undiscovered negative feedback, I have a pretty good idea what will happen as we add GHGs to the atmosphere.
[Response:The molecules’ kinetic energy – speed – would increase. The mean free path would be the same if the density is the same. But this is not really directly relevant to the issue that I discussed. -rasmus]
Armand MacMurray says
Re: “Take an analogy: how the human body works, consciousness, and our minds. These are aspects the medical profession does not understand in every detail due to their baffling complexity, but medical doctors nevertheless do a very good job curing us of diseases, and shrinks heal our mental illnesses.”
You may wish to choose different analogies; if consciousness and our minds are truly “well understood,” some Nobel prizes are clearly in order! As for doctors curing us of diseases, I think we can agree that the vast majority of infectious disease cures arise from imitating nature (e.g. antibiotics, immunizations) rather than being created de novo based on our understanding of the body. The development of statins for artery disease does fit your analogy, but most other chronic disease treatments seem to be derived from natural sources or trial and error, or just treat symptoms rather than the underlying problem. Finally, and most unfortunately, our ability to heal mental illnesses still lags far behind our ability to heal physical illnesses.
[Response:OK. I’m no medical doctor. -rasmus]
Armand MacMurray says
Re: “If we did not understand our atmosphere very well, then how can a meteorologist make atmospheric models for weather forecasts?”
The persistent inability to correctly forecast whether it will rain tomorrow makes this argument a hard sell. Certainly, much of the problem is a lack of detailed-enough knowledge of initial conditions. However, a lack of modeling topography, local water/atmosphere interactions, 3-D patterns of cloud cover, and so forth likely play major roles. Without a “gold standard” model that is able to reliably predict future conditions (and thus cannot possibly have been tweaked to produce a desired answer, even inadvertently), one cannot show that all the relevant factors have been understood and included in the model.
[Response:Re: hard to sell: If this were not true, why do you think most countries keep running those weather models several times a day, every day of the year? -rasmus]
Perhaps part of the issue is the definition of “well understood”. Naively, one would expect that “well understood” means one can robustly predict future conditions. The fact that certain GCMs can produce realistic features such as ENSO is certainly extremely promising. However, if it is not understood why other models do not produce a realistic ENSO, that would argue against a good understanding of ENSO, and so forth. Again naively, I would expect that a “well understood” climate system would imply a good understanding of the sign & magnitude of cloud effects on the climate, and an ability to model those.
[Response:We have some idea about this -rasmus]
Rather than dealing with malleable and easily-misunderstood phrases like “well understood” and “not well understood”, it would be useful to specify the list of real-world climate system features (both positive, e.g. ENSO and negative, e.g. lack of runaway heating/cooling) that a realistic model would reproduce, and to refer to our current understanding in terms of which features are understood/reproduced correctly and which are not.
[Response:As I stated, there are some shortcomings. There is also the issue of the range of scales, from microscopic to planetary, and the question of how to take all these scales into account in a computer code. Tell me another field which runs model predictions as extensively as weather prediction. -rasmus]
Geoff says
Dear Dr. Benestad,
I think you are to be commended for bringing this article to the attention of RC readers. I hope anyone who’s not a member of AGU will buy it (for the modest price of US$ 9.00) and read it.
With all due respect, your argument against the thesis of this article seems circular. You argue against the statement in the Cohn article that “the climate system is not well understood” by saying climate scientists make “complex models – GCMs – that replicate the essential features of our climate system” that “provide a realistic description of our climate system”. This would seem to be (at least) begging the question. Citing the models meteorologists use to make weather forecasts would not seem to inspire confidence in predictions by models of persistent long term trends. Having a model that looks correct for even a decade may be analogous to a stopped watch being right twice a day when talking about the millennium time scales usually seen in climate systems.
You start your article by questioning whether current perceived trends are happening by chance. However, that’s not Dr. Cohn’s argument. As he puts it in conclusion: “powerful trend tests are available that can accommodate LTP”, and therefore it is “surprising that nearly every assessment of trend significance in geophysical variables published during the past few decades has failed to account properly for long-term persistence”. He further concludes: “These findings have implications for both science and public policy. For example, with respect to temperature data there is overwhelming evidence that the planet has warmed during the past century. But could this warming be due to natural dynamics? Given what we know about the complexity, long-term persistence, and non-linearity of the climate system, it seems the answer might be yes. Finally, that reported trends are real yet insignificant indicates a worrisome possibility: natural climatic excursions may be much larger than we imagine. So large, perhaps, that they render insignificant the changes, human-induced or otherwise, observed during the past century”. You may disagree with Dr. Cohn, but his arguments need to be taken into account (and surely will be, being published in the prestigious GRL).
[Response:Dear Geoff. This issue is known as ‘attribution’ and is a wide topic in climate research. I think that simple statistical models are not adequate for proper attribution, and that control simulations with GCMs are needed instead. It’s important to take physical considerations into account and to get both the physics and the statistics right. -rasmus]
Mark A. York says
“However, it is not so easy to conclude that both sides are unreasonable.”
Science is neutral, so one side is just skewing reality. Uncertainties exist, but the naysayer argument fails on its face. How can both sides be culpable when so much data indicate warming is occurring at record levels? What we have here is a political shell game, and science can lose that because of the he-said-she-said reporting. Sure, one side consists of NASA and other top climate specialists, but then Bjorn Lomborg, a social statistician, comes along and poof, reality is out of the public window. It’s amazing and insidious.
Lynn Vincentnathan says
Sorry, I only read the first part & am jumping in perhaps too fast. In the social sciences there is a big difference between experiments and, say, surveys, that use stats. The latter relies a lot more on good theory (your physics) for understanding and interpretation.
Since our experiment with planet earth is only in its initial phase, we have to rely on all sorts of stats that really need a lot of good theory – good physics.
Now, we could just complete the experiment — pump as much GHGs into the atmosphere as we possibly can (I think we have 200+ years left of coal, which we might be able to burn in 100 years if we really try hard) — and see who’s right. Or, we can use the best stats AND theory (& geological knowledge) we have to date, figure out what’s happening & work to prevent what looks to be the mother of all disasters (from our 2 million year human perspective).
Steve Latham says
Just an aside regarding comment #10. In support of the analogy: we understand bodies quite well and things like many infectious and chronic diseases can be diagnosed with great success. Curing those things is the hard part. I think the science suggesting that anthropogenic GHGs are warming the planet is like the diagnosis; getting humanity to stop dumping so much into the atmosphere is the cure (there are other ways to treat the symptoms, maybe) and is more difficult.
APRIL STEWARD says
i have recorded the 3 day forcast every day this week on my home page – ur – not a scientist – but every day it forcast rain for the next 3 days – everyday here in llandudno (where the forcast was for) we have had blue skies and sun. These models based on statistics and physics aren’t much cop if you cancel your holiday because of them on the other hand i saved my money by staying here in the sun – don’t know where the rain clouds went but something was right and wrong and i was right to be suspicious as this has happened before. – dont worry – the drs cant cure my mental illness but it disapears with medication. – and YES i have taken it !
[Response:Keep in mind that the skill of forecasts is often limited by politics, i.e. how many resources are going to be spent getting the best initial conditions – that means, how many real-time observations to incorporate in the observational network (from weather stations to satellite programs), how much computer resource to make available several times every day, and so on. One such consideration is the model resolution, which has implications for whether the forecast will be right for every place within the model’s grid box (~10 by 10 km). Nevertheless, I would argue that the forecasts in general are quite good – otherwise we wouldn’t have operational weather forecasts. I think it’s also fair to say that people tend to remember when the forecasts miss rather than when they are correct. -rasmus]
Pat Neuman says
… “there was slow global warming, with large fluctuations, over the century up to 1975 and subsequent rapid warming of almost 0.2°C per decade”. http://data.giss.nasa.gov/gistemp/2005/
GISS NASA Figure 1 (a) shows a 5 year trend line for global surface temperature anomaly (1880 to 2005).
The trend line hits 0.0 in 1937 and again in 1976.
Between 1937 and 1976:
max ~ 0.1 Deg C in 1942
min ~ -0.05 Deg C in 1965
I agree that “there was slow global warming, with large fluctuations, over the century up to 1975 and subsequent rapid warming of almost 0.2°C per decade”, as said in the link above.
Many people want to know what caused the minor max and min between 1937 and 1976. Some scientists have said that the slight cooling in mid century can be attributed to aerosols.
I think ENSO explains both the max and min between 1937 and 1976.
El Nino dominated the 1930s-1940s.
La Nina or neutral conditions dominated the 1950s-1960s.
Armand MacMurray says
Re: response to #11: “Tell me anouther field which runs model predictions as extensively as weather prediction.”
I think we’d all agree that popularity isn’t really the best measure of effectiveness. Most of the hair regrowth treatments out there don’t work, yet are still popular. :)
My point is that it would be very useful to have a list of agreed-upon climate features that any “gold standard” model should be able to reproduce, without reproducing any non-realistic features. Each model would then have its own “scorecard,” which would give a rough idea of how close it was to the desired “gold standard.” The predictions of a 49/50 model would inspire more confidence than the predictions of a 30/50 model. Have there been any (formal or informal) movements in the modelling community to establish such benchmarks?
[Response:I wouldn’t know much about hair growth products, but I am convinced that the field of meteorology is well-established and the forecasts useful. Science in general – climate science has a common base with the general sciences in terms of physics & chemistry – has also proved successful in terms of advancing our civilisation. Regarding the “gold standard”, there exists one: the CMIP2 and the most recent integrations with the climate models done for the next IPCC report. One such “gold standard” is the climate sensitivity. -rasmus]
joel Hammer says
Anybody notice in this hottest year on record that the temperature in the Northern Hemisphere has been essentially flat for the last three years?
Why isn’t the trend accelerating? Could there be, gasp, negative feedbacks as yet not defined?
Pat Neuman says
“Recent warming coincides with rapid growth of human-made greenhouse gases. Climate models show that the rate of warming is consistent with expectations (5). The observed rapid warming thus gives urgency to discussions about how to slow greenhouse gas emissions (6)”.
http://data.giss.nasa.gov/gistemp/2005/
The observed rapid warming gives more than an “urgency to discussions about how to slow greenhouse gas emissions”, it gives urgency to cut greenhouse gas emissions in any way possible.
wayne davidson says
#19 Joel, GT temperatures are not flatlining; they were slowly increasing, with a bit of a faster rate by now. The Northern Hemisphere in 2005 had 6 months with monthly anomalies above +1 degree C. 2004 had 3, 2003 had 3, and the famous 1998 had 3. All other previous years combined, from 1997 going back to 1880, had no months above 1 C. Furthermore, other independent fields of research have shown the same thing. My own research, the oblate sun method, saw 2005’s all-time high temps coming through stunning observations of consistently, vastly expanded sun disks measured in the winter-early spring of 2005; statistics usually suggests that the sun disk may vary wildly, but not at one point, in an expanded streak never seen since the beginning of my observations 4 seasons ago. Nowhere will you find more dramatic change than in the Polar regions; many experienced Arctic hunters, adventurers and scientists had equally surprising 2005 experiences during the same time period: they either saw early thaws, running rivers when there shouldn’t be, or shrinking of once-permanent lake ice sheets. An impressive experience to add, despite all these unusual events, was flying over Arctic Quebec well after it had +30 C weather in early May, while the great number of lakes there were still covered with ice. Somehow heat prevailed despite ice and snow still on the surface. These events are no statistical blips, but rather overwhelming evidence of stronger warming.
JS says
“If this statement were generally true, then how could climate scientists make complex models – GCMs – that replicate the essential features of our climate system? The fact that GCMs exist and that they provide a realistic description of our climate system is overwhelming evidence demonstrating that such a statement must be false – at least concerning the climate scientists.”
This, of itself, is no argument. Astrologers create incredibly complicated models of planetary motion and what it means for people’s lives. The fact that complicated astrological models exist is no proof that there is any scientific content to these models.
[Response:I think most people understand my message here, but the proof lies in the evaluation. When it comes to your example, a model for planetary motion is one thing (it could even be scientific); to say what it means for people’s lives is another (religion, in my eyes). Climate models are ‘extended versions’ of weather models. To my knowledge, there is no other scientific model ‘exposed’ as much to the public as weather models. They have succeeded in predicting features like tropical instability waves, which have subsequently been found in nature. That’s impressive. It was atmospheric models that helped unveil the ‘chaos effect’ to Lorenz, leading to a fundamental and profound understanding of our nature. Weather models help save lives when extreme weather arises. These models can be broken down to a small number of equations, some of which can be isolated and solved analytically (actually, the way they were designed was the other way round…). These analytical solutions help us understand important aspects of atmospheric phenomena. -rasmus]
Slightly less rhetorically, economists create complicated models of the economy. They incorporate a huge amount of knowledge about how the economy works. And yet their ability to forecast is pitiful.
I believe that climate science has a lot to learn from economics. Both are non-experimental sciences. This creates unique statistical challenges. Economists have, for a large part, realised the limitations of their forecasting ability. Climate scientists seem a little less aware of the limitations of their craft because of their being based in the physical and experimental sciences (and the statistical techniques that implies).
[Response:Personally, I think it’s the other way round – to my knowledge, no economist sent people to the moon – the scientists & engineers did! My proposition is that our highly advanced society is built foremost on science, and secondarily on economics (which is primarily a means of distributing our goods). Can you prove that economic forecasts have ever been correct? (Economic tigers, the World Bank, etc. have not impressed…) -rasmus]
Not being able to read the paper in question makes it difficult for me to say more. But I sense that there is a straw man somewhere here. Statistics never proves the null distribution…“it is not a given that statistical models such as AR (autoregressive), ARMA, ARIMA, or FARIMA provide an adequate representation of the null distribution”… it merely fails to reject the null hypothesis. The same is true of any statistical model – to which all climate models belong. Thus, I could equally say it is not a given that chosen GCMs provide an adequate representation of the null distribution. At heart, they are both vacuous statements. But further, the implication seems to be that a random walk model is simplistic – if anyone believes this they haven’t studied the stock market. It remains fundamentally true that a random walk is the best model of the stock market – but it is so much more complicated than that statement would make it seem.
[Response:You have to get both the physics right and the statistics right! Basically, your statistical model needs to be representative of the process you are analysing. -rasmus]
Tom Fiddaman says
Re #22
The funny thing about economic models is that they don’t directly incorporate a huge amount of knowledge about how the economy works. For example, models used in the climate integrated assessment space distill a large number of disparate facts into a few principles that are often more aspirational than empirical (e.g. perfect foresight, market equilibrium) then use those with limited attention to fit to data. Forecasting models are typically statistically sophisticated and dynamically trivial, i.e. they use a lot of data in the context of a simple model that is abstract about the underlying ‘physics’ of information and material flows in the economy. The fact that they can’t forecast is probably partly a consequence of the complexity of behavior, partly lack of ‘physics’.
I’m sure that there are technical things that climate scientists could learn from economists and vice versa. But personally I hope climate scientists avoid learning too much from economics – economists may be aware of their limits at forecasting things like business cycles, but they are nevertheless paradigm bound and cheerfully contribute to policy design with models that are largely assumption-driven.
Global climate may be non-experimental because we only have one, but many of the more micro physical aspects of climate are subject to experiment or at least detailed measurements. Climate data now comes in terabytes, while all the global macroeconomic data used to inform the majority of models would probably fit on one CD. Economists routinely discount micro experimental data when it contradicts established macro approaches. Certainly climate models have to parameterize some relationships that are sub-grid-scale or poorly understood, but at least those can be somewhat constrained by data and physical principles.
One shouldn’t be too hard on economists – economics may be fundamentally a harder nut to crack than climate, due to the infinite regress of modeling human behavior and the paucity of measurements compared to the number of agents or dimensions in the system. I don’t think analogies based on economics shed much light on climate science.
[Response:You are right – one should always be open to the possibility that one can learn from other disciplines. And to some extent, climatologists do pick up some ideas from economics – econometrics. I was perhaps unduly hard on the economics community; however, I’m not really qualified to discuss economic matters as I’m not an expert in that field (although I admittedly have taken a couple of courses in economics, which I found dull – still got good grades though…). I still could not resist making the critical (and provoking) remarks. -rasmus]
JohnLopresti says
As with most sub-topics in this field to which I am new, I sense some solutions and methodologies are partial glimpses of the very complex process of climate emulation on computer.
At first view, application of a Brownian movement view of the hockeystick upramp in global warming seems worthwhile, if only for its application of delimited randomness onto what is known to be a cyclical process albeit one of very long periodicity. Yet, the author, rasmus, has dissected deftly the sampling methodology required to justify a Brownian interpretation’s superimposition on what is actually a very well tempered equilibrium; so, the commenters who jocundly remind the author about the preposterousness of ignoring gas diffusion laws or harmonic motion lemmas which describe planetary systems motion have touched on the same weak spot in the Brownian analysis as applied to climate. I found an interesting comment, above, in the reference to noise; for me, this engendered other considerations such as Laplace and Fourier timeslices of climate processes; so, for instance, a small timespan could be described within the long cycle of, e.g., CO2 dissolved in ice as presented in the EPICA data recently presented on this realclimate.org site; the EPICA graph showing 600 kYears as a 7-cycle 90 kYear/cycle fairly sinusoidal waveform really brought home the harmonic nature of the climate process, though, as mentioned, the cycle is quite long, thereby enhancing the utility of partial differential views, of which the Brownian paradigm might serve to help comprise the noise. Yet, in a sense, I agree with the immediate intuition of our presenter, that the presentation of a proposed overarching solution based on Brownian motion, a theoretical realm which challenged even the curious Einstein, might itself distract the neophyte from more robust research and conceptualization. Incidentally, I might well be disinclined to adopt the purported Prometheus website approach, if that website were accessible, which it is not at the present moment; though even political approaches are de rigueur. However, for the scientific part, clearly, as the author observes, well tempered computerized reference systems are consulted for weather prediction over short term in most countries and are sufficiently reliable for that application; much math far beyond time derivatives is entailed in that software. The piquant part of the computerized view of near-future climate is that it is now being called upon to contribute to longer timeframe climate trend definition. The realclimate.org site has provided a suitable forum to encourage that endeavor for the newcomers, and clearly often helps with peer interchange, much as any meritorious professional association would, serving as a substrate and reference.
If we need to examine longterm climate with Brownian glasses, a similar but distinctly separate pair of lenses should be formed of Van der Waals concepts, though the Cohn and Lins US Geological Survey article discussed by rasmus is a Brownian overture exclusively; there is a key two-state component of weather in the fluid phase, gas and liquid; but certainly multiple other properties of matter are involved and the depiction extends well beyond a three-state model incorporating solids, as well as, as alluded to above by one contributor, the quantum-like shift which occurs during transition from one phase to the other: a boiling Brownian movement would look different from an oscillating solid crystal lattice. At first view, gas plasma, thin-film coating physics, and a goodly measure of electricity and magnetism tempered by other insights of particle physics and celestial systems dynamics are all first areas which attract my interest as ways to look outside of that very long 90 kYear sine wave, to perceive perhaps the sawtooth form it is taking and the reasons from many sectors of human understanding reflecting elements of that.
In the biologic sphere I would question how much genetic work, so to speak, it takes to form a species such as polar bears; and how anomalous it might be to find that within a single span of a century all that genetic impetus was lost to a receding polar ice cap, thereby driving polar bears into extinction likely within our lifetime.
Pat Neuman says
re 24
Clips from National Wildlife Federation’s Bear Family Tree, at:
http://www.nwf.org/wildlife/grizzlybear/familytree.cfm
—
There are only eight species of bear living in the world today.
All eight species have a common ancestor, Ursavus, that lived more than 20 million years ago.
The Ursavus family line split into two subfamilies of what are considered ancestral bear-dogs: the Ailuropodinae (which ultimately evolved into the giant panda (Ailuropoda melanoleuca) that lives in China today) and the Agriotherium (which ultimately evolved into the Ursidae lineage).
About 15 million years ago, Ursidae diverged into two new lineages: the Tremarctinae, known as short-faced bears; and the Ursinae, known as true bears.
Ursinae gave rise to the six other bear species that exist in the world today. About 3.5 million years ago, early Ursine bears began migrating to North America by way of the Bering Land Bridge. These bears evolved into the American black bear (Ursus americanus).
The brown or grizzly bear (Ursus arctos) began to evolve 1.6 million years ago. Brown bears were once found throughout Europe and Asia and eventually wandered into North America, following the same route taken by ancestors of the black bear. Scientists believe that the brown bear lineage split over 300,000 years ago to form the polar bear (Ursus maritimus), theorizing that a group of early brown bears became isolated in colder regions and ultimately adapted to life on ice.
—
Hank Roberts says
Hey, if economists could issue storm warnings as readily as the Weather Service can issue a Severe Weather Warning, would you want them so you could take your money out of the stock market before each little crash?
Somehow I doubt we’ll ever see such a forecast, even if it’s possible.
The economists can start ignoring a third of the money supply
— “cease publication of the M3 monetary aggregate. …
http://www.federalreserve.gov/releases/h6/discm3.htm
Imagine some climate scientists announcing that a third of the data collected on the climate system was being discontinued — we’ll quit looking at solar output, or carbon dioxide output, or methane residence time in the atmosphere — people would raise holy hell, because the information’s producing useful predictions.
The economists? They don’t have predictions that work, they don’t have any data they know are indispensable; if it’s inconvenient to keep publishing info they can stop without a lot of bother.
I think it tells you who’s “trendy” — who’s able to actually use information to publish predictions, lots of models with differences — and realistically compare results season and year at a time.
Is it a trend? Trust your climatologist before your economist, for answers that are testable.
Armand MacMurray says
Re: response to #18
Rasmus, as a biologist myself I certainly agree that science is important and vital for civilization. However, as you may have (sadly) noticed in the news recently for biology, testing results is most important (of course usually not because of falsification!). Thanks for the pointer to the CMIP2. I took a quick look there and it seems that the benchmarking there is all related to CO2 sensitivity. Is there another site/resource/paper that has done more basic benchmarking along the lines of what I mentioned in posts #11 & #18, or has that not been done yet?
per says
I think your phrase “… who in essence pitch statistics against physics” represents a false dichotomy.
If we had a perfect record of temperatures, etc., for the last couple of millennia, we would be able to look at the data and see if it was autoregressive – or not. We do not have this information.
It is entirely wrong to suggest that a GCM can substitute. The historical record is what it is- even if we do not have that information. If the GCM fails to replicate reality, it is the GCM which is at fault; so the GCM provides no proof that historical records were autoregressive or not.
Of course, if the GCM is perfect and models reality exactly, then there will be no problem if the GCM shows no autoregression. But since we don’t have the validation that GCMs are perfect, and there is no perfect historical record to check against, it is a bit of a moot point.
In the meantime, you have done work on i.i.d. models; but did you test to see if the data had an autoregressive character?
cheers
per
[Response:Here is a reason why meteorologists do not use statistical models for weather forecasts: when you travel by plane, the aviation authority depends on good forecasts for your safety, and statistical models are not adequate. You really need to include the physics!!! The same argument goes for climate research. Physics is a vital basis for much of our understanding (statistics maybe isn’t…?). Nobody has, to my knowledge, provided a proof that the statistical autoregressive models that you point to are appropriate for the earth’s climate. But you have also misunderstood me if you think that I say our climate is iid (a climate change is not iid by definition). Rather, the choice of the autoregressive models and their representation of temporal structure may be wrong. These models are chosen because they are simple and convenient. They may be adequate for some types of questions, but not for detection and attribution when it comes to climate change. It’s not sufficient to say that the global mean temperature is stochastic (random); you really need to address the physics. -rasmus]
Robert K. Kaufmann says
This discussion is directly relevant to several papers that I have published in the peer-reviewed literature regarding the effect of human activity on climate change. First, let’s be very specific about the mathematical formula for a random walk. The simplest representation states that the current value of a variable Y equals its value during the previous period plus some random variable. For temperature data, this would imply that this year’s temperature depends on temperature from the previous period. But this formula cannot describe how air surface temperature carries over from one period to the next. Think how quickly temperature dissipates from day to night – there is no physical mechanism by which the atmosphere can carry additional warmth from one year to the next.
That said, the effect of human activity on temperature can be modeled as a random walk because these effects carry over from one year to the next. For example, carbon dioxide persists for a long period in the atmosphere, so the atmosphere ‘carries over’ the warming effects of anthropogenic carbon emissions from one year to the next. Similarly, the capital stock that emits greenhouse gases and sulfur persists for one year to the next, and so imparts a signal in the temperature record.
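A toy sketch of that distinction (illustrative only, with invented numbers; this is not the model used in the papers cited below): let the forcing itself follow a stochastic trend, because emissions and capital stock accumulate, while the temperature merely relaxes toward the level set by that forcing.

```python
# Toy illustration: a stochastic trend in the forcing, and a temperature that
# relaxes toward it. "Sensitivity" and "memory" values are invented.
import numpy as np

rng = np.random.default_rng(3)
n = 130                                           # roughly the length of the record
forcing = np.cumsum(rng.normal(0.02, 0.05, n))    # emissions accumulate: a stochastic trend

temp = np.zeros(n)
lam, mem = 0.5, 0.7                               # assumed sensitivity and thermal memory
for k in range(1, n):
    # temperature relaxes toward (lam * forcing), plus short-term "weather" noise
    temp[k] = mem * temp[k - 1] + (1 - mem) * lam * forcing[k] + rng.normal(0.0, 0.08)

print(f"final forcing: {forcing[-1]:.2f}, final temperature: {temp[-1]:.2f}")
```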
We use these signals as ‘fingerprints’ to detect the effects of human activity on the historical temperature record. Statistical techniques to do so have been developed over the last decade, and Clive Granger won the 2003 Nobel Prize in economics for his pioneering work in this area. Using two separate statistical techniques, I have been able to show (along with David I. Stern, James H. Stock, and Heikki Kauppi) that the stochastic trends in the radiative forcing of greenhouse gases and sulfur emissions can ‘explain’ much of the general increase in temperature over the last 130 years. One of these papers appears in the Journal of Geophysical Research (Kaufmann, R.K. and D.I. Stern. 2002. Cointegration analysis of hemispheric temperature relations. Journal of Geophysical Research. 107 D2 10.1029, 2000JD000174) and another has been accepted for publication in Climatic Change (Kaufmann, R.K., H. Kauppi, and James H. Stock, Emissions, concentrations, and temperature: a time series approach, Climatic Change). This second paper is available at my home page http://www.bu.edu/cees/people/faculty/kaufmann/index.html
JS says
Re #23
“Climate data now comes in terabytes, while all the global macroeconomic data used to inform the majority of models would probably fit on one CD. Economists routinely discount micro experimental data when it contradicts established macro approaches.”
Quantity does not equal quality. And, apart from anything else, you would be wrong about the quantity of data available in macroeconomics – let alone microeconomics.
As to the other, perhaps you should read up about the 2002 Nobel laureates in economics. Alternatively, perhaps you could discuss how physicists reconcile quantum and macro phenomena? Isn’t quantum physics routinely discounted when discussing planetary motion, atmospheric circulation and practically any other macro-scale phenomena?
[Response: The beauty of quantum physics is that it provides a picture consistent with the macrophysics when scaled up (because of the vast number of particles and the laws of probability). But to use quantum physics on macroscales is in general silly (unless you look at line emissions and the like), as you would spend the rest of your life calculating. -rasmus]
JS says
Thank you Robert for your contribution. It is good to see econometricians getting involved in this area – as I suggested above, I think that climate scientists could learn a thing or two from them with regard to statistical technique.
For those of you who may not have the motivation to follow the link I’ll provide a quote from that paper that I think is important:
I made a similar point in an earlier comment that seems to have tripped the filters here for manual review so hasn’t appeared yet. This is directly relevant because as Robert observes in the linked paper:
So, to those climate scientists who don’t believe there is anything to learn from economics, open your mind – you might learn something.
[Response: This touches an interesting point: what would the physical implications be of a ‘stochastic trend’? In physics, there are certain things that aren’t really stochastic, but nevertheless can be modelled as stochastic because we do not have sufficient information about the system. E.g. the displacement of molecules follows Brownian motion (diffusion). This enables us to predict the displacement of the bulk of molecules, but is not very good for one specific particle. For that, we need to know its interactions (collisions) with other particles. Then there are a number of other physical examples which have much less of a ‘stochastic’ appearance: planetary motions and oscillators. In fact – and this discussion is now veering off into philosophy – the world is not stochastic, but there are physical laws which create order (if it were merely stochastic, you could not explain how we could sit here having this discussion – there would be no life forms…). To say there is a ‘stochastic trend’ is a cop-out – there must be an underlying physical cause. Also, it would not be possible to make predictions with the ‘stochastic trend models’, because that would be a contradiction. We have one explanation for the trend based on physics: GW (global warming); there are no good alternatives. -rasmus]
Tom Fiddaman says
Re #30
Don’t forget Herbert Simon in 1978. Generally, though, behavioral and experimental economics have commanded limited attention. Certainly there’s next to nothing of behavioral economics embedded in the dominant CGE and intertemporal optimization models used for policy.
The reconciliation of quantum and macro in physics is quite different from building a micro foundation for macroeconomics. The macro physical principles employed in climate models are well-supported by experiment; quantum mechanics is generally just a refinement that applies at scales irrelevant to climate. Not so in economics. It’s a long way from PV=nRT to “assume an infinitely lived representative agent with logarithmic utility…”.
I agree that quantity is not quality, but consider what’s available to someone wanting to build a regional energy-economy model: energy supply and demand with a little bit of fuel detail, some technical detail on electric power plant efficiency and the like, GDP, prices on a limited range of commodities, national accounts, population, and not a heck of a lot else. Some, like GDP, come with all sorts of measurement and interpretation baggage that make the problems of temperature pale by comparison. Other key pieces, like capital stocks, are largely missing or inferred from the same fundamental sources. I’m sure most economists would kill for the equivalent of 650,000 years of deltaD, or a satellite that would take real-time gridded measurements of household consumption.
JS says
“the world is not stochastic, but there are physical laws which create order”
You seem to misunderstand the meaning of stochastic. Do they not teach statistics to physicists anymore? Who was it that said “God does not play dice”? You seem to be applying an incredibly classical picture of the universe – wasn’t it once believed that if you knew the positions of all particles in the universe you could then completely forecast the future? Hasn’t that been shown to be false?
[Response: Stochastic = ‘A process with an indeterminate or random element as opposed to a deterministic process that has no random element’. Other definitions are: ‘Applied to processes that have random characteristics’ or ‘This is the adjective applied to any phenomenon obeying the laws of probability’. So it depends on which definition you choose. The latter definition appears to apply to almost everything, whereas the former seems to equate stochastic with ‘random’.
I think this is quite an interesting side of this discussion. As far as I remember, the existence of atoms was gleaned from Einstein’s work on Brownian motion. Thus, there had to be some physics behind the ‘stochastic’ motion. It’s therefore somewhat ironic that a process once used to derive knowledge about the underlying physics is now presented as if things just happen randomly, without any thought about the physics.
Sadly, the degree to which statistics is taught in physics is, in my opinion, not enough. It was also Einstein who famously said that God does not play dice.
It is one thing to predict everything in a deterministic way – the classical Newtonian universe – and another matter to argue that everything is stochastic or random, as order does emerge at least on local scales. You are right that an established view in the past was that the universe worked like clockwork and in theory could be predicted if one had perfect knowledge of the initial state of the universe and sufficient resources. This is no longer the paradigm, but the new paradigm does not preclude the role of physical laws. There are quantum physics and non-linear chaotic effects which limit our ability to predict. Still, energy cannot be created or destroyed, and a planet’s global mean temperature must fit into the greater picture of physics. -rasmus]
To take an example from the paper linked above – would you care to discuss the non-stochastic process by which CO2 emissions from power plants are determined? Once the CO2 is in the atmosphere there may be physical laws which create order – but the amount of CO2 released into the atmosphere is determined by fundamentally stochastic processes.
[Response: I take this to mean ‘random or probabilistic but with some direction’. The question is then to what degree ‘random’ and to what degree ‘direction’. There are well-established increases in atmospheric CO2, and there are indications that isotope ratios can provide a clue about the portion that comes from fossil sources. But in our case I don’t think the question of the CO2 sources, and whether the gas’ pathway is stochastic or not, is all that relevant, given the atmospheric concentrations. For a single molecule that absorbs IR, the re-emission is random in terms of direction, but for a volume of air there is such a vast number of molecules that the statistical properties of the bulk action can easily be predicted. I would say the definition ‘random or probabilistic but with some direction’ applies, with a small degree of randomness for the bulk property and a high degree of ‘direction’ (equal amounts going up as going down). -rasmus]
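A small Monte Carlo sketch, with purely hypothetical numbers and an assumed isotropic re-emission, illustrates that last point: each individual re-emission direction is random, yet the bulk upward fraction is predictable.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each re-emission direction is random (assumed isotropic here), so the
# vertical component is equally likely to point up or down. Individually
# unpredictable; in bulk, the upward fraction converges to 1/2.
for n_emissions in (10, 1_000, 100_000):
    cos_theta = rng.uniform(-1.0, 1.0, n_emissions)   # isotropic directions in 3D
    upward = np.mean(cos_theta > 0)
    print(f"{n_emissions:>7} re-emissions: fraction going up = {upward:.3f}")
```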
Next topic… Occam’s razor. If it walks like a stochastic process, and if it quacks like a stochastic process, what is wrong with calling it a stochastic process?
[Response: I would remind you again about Einstein’s work on Brownian motion, and that this phenomenon was used to infer the existence of atoms. You could just as well say that the particles were merely random, and be happy with that. But insight shows that there are underlying physical laws even to processes that appear stochastic. In gases and diffusion, it’s the interactions (collisions) between the gas molecules. It’s hard to predict, but the physics is there nevertheless. And even more so when you consider the average kinetic energy of the molecules – the temperature. If the temperature takes a hike, then there must be a supply of energy: the first law of thermodynamics. -rasmus]
Robert K. Kaufmann says
I want to re-iterate the meaning of stochastic that is described by JS. We can think about stochastic trends in terms of Occam’s Razor. There are two general types of time series: stationary and non-stationary. Stationary variables have a constant mean and variance that can be approximated from a long set of observations. As such, these variables have “no trend.” Clearly, the global temperature time series is not stationary.
Nonstationary variables have no long-run mean, and the variance goes to infinity as time goes to infinity. Put simply, a nonstationary time series tends to increase or decrease over time. As such, the time series for global temperature is non-stationary.
The increase/decrease can be caused by a deterministic trend or a stochastic trend. A deterministic trend is one in which a variable increases year after year at a fixed rate. But we know that human activities that affect climate change are not deterministic. For example, carbon emissions do not increase year after year at a constant rate (yes, they generally grow, but in a stochastic manner). There are economic recessions, in which emissions decrease – witness the Great Depression in the US or the collapse of the Former Soviet Union.
A stochastic trend derives its increase or decrease from the cumulated effects of a stationary variable (or variables). Note that the variable does not have to have a mean value of zero. So, you can think of emissions as a non-zero disturbance that is “integrated” by the atmosphere. This approach allows scientists to “match” the stochastic trends in the radiative forcing of radiatively active gases with temperature. When we do that, the link is very clear – the stochastic trends in the radiative forcing of greenhouse gases and human sulfur emissions match (or cointegrate, in technical terms) the stochastic trends in global and hemispheric surface temperatures.
[Response: Maybe this was implied before, but while a time series with no long-term mean and a variance that increases with the length of the sample is clearly (pathologically?) non-stationary, there are many non-stationary processes (where the mean or variance change through time) that nevertheless remain within bounds. For instance, ENSO may be non-stationary as a function of the base state of the tropical Pacific, but it still has a bounded mean. – gavin]
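A minimal sketch of the distinction Kaufmann draws above, using invented numbers rather than actual emissions or temperature data: a deterministic trend grows by the same amount every step, whereas a stochastic trend is the running sum of a stationary disturbance with a non-zero mean, so it generally grows but can also decline for a while.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 130

# Deterministic trend: exactly the same increment every year
deterministic = 0.02 * np.arange(n)

# Stochastic trend: the cumulated sum of a stationary disturbance with a
# non-zero mean -- growth on average, but with occasional declines
disturbance = rng.normal(0.02, 0.05, n)      # stationary, mean above zero
stochastic = np.cumsum(disturbance)

print("deterministic trend after", n, "steps:", round(deterministic[-1], 2))
print("stochastic trend after", n, "steps:  ", round(stochastic[-1], 2))
print("years with a negative 'disturbance':", int((disturbance < 0).sum()), "of", n)
```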
Terry says
A couple of comments.
1) Many climate time series, especially temperature, are obviously not i.i.d. (I think you understand this, and have said so yourself. I say this only because you seem to stray away from this obvious point at times.) There is obviously persistence in temperature from day-to-day, and from year-to-year. Any model which assumes i.i.d. data is obviously wrong and inferences from it will be wrong.
2) Temperature is also obviously not a pure random walk because there seem to be upper and lower limits on temperature. I would be astonished if anyone truly believed that temperature were a pure random walk. ARIMA processes (and their relatives), however, are not random walks, and there are many time-series models which can easily incorporate mean reversion and avoid the absurd implications of a pure random walk model (in which temperature would eventually wander arbitrarily high or low).
3) The statistical implications of autocorrelation in time series are well known, and it is well known that the conclusions you can legitimately draw from autocorrelated series are highly dependent on the nature of the autocorrelation. It is equally well known that ignoring long-run persistence in such series can easily lead to grossly incorrect inferences.
4) The distinction you draw between physics and statistics is, I think, a bit too simplistic. Statistics are used all the time in physics — they are used anywhere there is a residual “noise” that cannot be accounted for explicitly. The use of statistics here (including ARIMA models and their relatives) is completely appropriate. Indeed, I think it helps a great deal (and is, in fact, essential) to understanding how certain we can be about the results in this area. This is exactly what statistics are for.
5) An understanding of the statistical properties of climate-related time series is essential here because it gives a great deal of insight into how certain we can be of the conclusions we draw and (perhaps more importantly) how sensitive the conclusions are to different assumptions. What statistics is telling us here is that seemingly small tweaks to our assumptions about long-term trends in the temperature record are extremely important to the conclusions. Statistics also tells us that such long-term persistence is EXTREMELY difficult to detect, much less to measure accurately. This is because you need an extremely long and accurate time series to be able to say anything with confidence about long-run persistence.
6) It is far from obvious that GCMs are any better than simple-minded time-series models at prediction here. Yes, they are based on physical models, but they are always calibrated to the data and so inherit the statistical properties of the data. Therefore, they can be seen as just very complicated statistical models themselves. They also have a host of other assumptions built in, and (as are all models) they are only approximations of reality, and can be quite sensitive to seemingly innocuous assumptions. For instance, they may assume linearity in some variables which is reasonably accurate over a large range, but is completely inappropriate when the system reaches extreme states. FWIW, in economics, in the fifties and sixties, econometricians built extremely complicated models that they thought would yield a big increase in predictive power. But the models turned out to be a disappointment — it turned out that very simple time-series models were actually better at economic prediction than the hyper-sophisticated models. The economy is just too complicated for the models. This is why economists are so much more aware of these time-series issues.
[Response: I agree on points #1, #2 & #3. On point #4 you are right that physics uses statistics, but statistical analyses often take no account of the physics, so the relationship between the two is often not reciprocal. I believe that both the physics and the statistics must be right. I do not believe in just applying a statistical analysis and leaving it at that (Occam’s razor?). The analysis must be accompanied by a consideration of the physics. I think that the use of ARIMA-type models, calibrated on the trended series itself, will not provide a good test of the null-hypothesis, since you do not know a priori whether the process you examine is a ‘null-process’. The risk is that the test is already biased by using the empirical data both for testing and for tuning the statistical models used to represent the null-process. Regarding #5, I think it is not necessary to go to great lengths with statistical analysis to arrive at the conclusion that a discrimination of the null-hypothesis depends on how you model the null-process – I’d be surprised if it were otherwise (isn’t it logical?). This also relates back to #4, and I think that you have similar arguments embedded in your comments. No surprise, I disagree on #6, partly because there is more to the story: model evaluation and experience with these models. GCMs also embed more information (physics) about how the climate system works. ARIMA-type models do not contain any physics, and one never knows whether ARIMA-type models really are representative or just seem to be so, but I agree that they are convenient tools when we have nothing else. The question is not about using ARIMA-type models or not, but what conclusions you really can infer from them. Here we are looking at a test, and it is extremely important that this test is not pre-disposed. Such tests are extremely delicate, as you say, since they depend on an assumption about the null-process that in this case we do not really know (a significant trend or not?). On the other hand, if we can utilise insight about the underlying physics (which GCMs do), we can do far better. After all, the discovery of atoms as a result of stochastic Brownian motion has enabled far more useful predictions than a simple stochastic view ever could. -rasmus]
Hank Roberts says
Is the pattern NASA GISS described (first 3 lines of comment #17 above) something that scientists agree is real — whether or not it’s a trend, is there agreement that we’ve had the pattern of warming as described?
It’s hard (as a non-scientist, reading along) to figure out what people agree is happening, let alone whether trends and causes can explain observations.
I find myself wishing for a (small, quiet) weblog where those who disagree on so much would post only what they agree upon. Like the NASA summary, perhaps?
Pat Neuman says
re 35 … Many climate time series, especially temperature, are obviously not i.i.d. …
You lost me there, but I kept reading anyway.
As a hydrologist involved with prediction, I understand what you said here: “it turned out that very simple time-series models were actually better at economic prediction than the hyper-sophisticated models. The economy is just too complicated for the models. This is why economists are so much more aware of these time-series issues”. Modeling the runoff from varying intensities of rainfall and snowmelt splattered over a partially porous canvas of variable terrain, soils and land use, varying in time, and with multiple sources of input all having varying degrees of error and bias, makes it more than difficult to keep a complex runoff model updated in real time, as needed for making instant crest predictions based on forecast precipitation, temperatures, winds and humidity. It’s no wonder we rarely get it right to the tenth of a foot.
JS says
Gavin,
Time to dust off your statistics. Non-stationarity, even within bounds, has profound implications for the validity of standard statistical hypothesis testing. Many fundamental results about the properties of OLS regression and t-statistics rely on variances and means having a defined limit as n->infinity – that is, they require stationarity not just boundedness.
[Response: One has to be very careful here. Many variables which are quite stationary may appear not to be if described by an inappropriate metric. A simple example is a lake that responds to the difference between random inputs of precipitation and random losses through evaporation. Let us, for simplicity, suppose that runoff is minimal, and that the boundaries of the lake are vertical. Then changes in the level of the lake over a given time interval (say, a month) are linearly proportional to the average difference between the precipitation and evaporation rates over that time interval, with the proportionality constant determined by the lake geometry. Let us suppose that the precipitation and evaporation rates are both normally-distributed white noise (not a bad approximation in many cases). Then the change in lake level from one month to the next, like the precipitation-evaporation series, is described by a normally-distributed white noise process. About as stationary as can be! But suppose that you decide, for sake of ease, not to measure the average precipitation and evaporation rates (the climatological variables responsible for any observed changes in the lake level), but to measure, instead, the lake level (e.g. in meters) from one month to the next, and you eventually obtain a very long series of monthly measurements of the monthly mean lake level. Any guesses as to the statistical properties of this alternative ‘climate’ variable? – mike]
[Response: For your main point I wouldn’t disagree. But non-stationarity needs to be demonstrated and it is often difficult to distinguish from stationary, but oddly non-Gaussian behaviour (i.e. the GISP ice core record). -gavin]
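A toy simulation, with all quantities hypothetical, makes the closing question in mike’s lake example concrete: the monthly changes in the level are stationary white noise, while the level itself integrates them into a random walk whose variance keeps growing.

```python
import numpy as np

rng = np.random.default_rng(4)
months = 1200

# Precipitation minus evaporation: normally distributed white noise
p_minus_e = rng.normal(0.0, 1.0, months)

# Monthly changes in lake level are proportional to P - E (stationary);
# the level itself is their running sum (a random walk, non-stationary)
level = np.cumsum(p_minus_e)

half = months // 2
print("variance of monthly changes, 1st vs 2nd half:",
      round(p_minus_e[:half].var(), 2), round(p_minus_e[half:].var(), 2))
print("variance of the lake level,  1st vs 2nd half:",
      round(level[:half].var(), 1), round(level[half:].var(), 1))
```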
JS says
Mike,
Absolutely. At an abstract level, one needs to consider very carefully whether one is measuring a stock or a flow. The simplest method of dealing with integrated/non-stationary series is to difference them. In your lake example, you would observe that the level was non-stationary but that its first difference was stationary and apply your statistical methods to this difference. That is the point – you need to know the statistical properties of the variable you are measuring; you can not afford to be ignorant of these properties or your statistics will be fundamentally flawed.
In a similar manner, because of very long persistence, atmospheric concentrations of gases are much more like the level of the lake than the rainfall. You need to take account of that in any statistical modelling of them. But, to break the analogy with the lake, additions of GHGs to the atmosphere from human sources are also non-stationary, because they derive from human economic activity, which does not follow a stationary process (or even a trend-stationary process). Any statistical analysis which does not take account of these statistical features of the data will be on shaky ground.
In sum, it is not that stationary or non-stationary analysis is better, but that one needs to know the statistical properties of the variables one is analysing and apply the appropriate techniques. Application of standard OLS techniques to non-stationary variables (of which, let us not kid ourselves, there are a lot in climatology) will lead to flawed results. I have yet to see Dickey-Fuller or similar tests considered as standard in climatology, yet in the econometrically based contribution from Robert above it is the first test that is conducted. It is fundamental to establish whether the series you are dealing with are stationary or not before running your regressions.
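As an illustration of the kind of unit-root test mentioned here, the sketch below applies an augmented Dickey–Fuller test (from statsmodels) to a synthetic random-walk ‘level’ and to its first difference; the data are invented and only the qualitative pattern of results is the point:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
level = np.cumsum(rng.normal(0.0, 1.0, 500))   # synthetic 'lake level': a random walk

for name, series in [("level", level), ("first difference", np.diff(level))]:
    stat, pvalue = adfuller(series)[:2]
    # A large p-value means the unit-root (non-stationarity) hypothesis cannot
    # be rejected; differencing typically restores stationarity.
    print(f"ADF test on {name:>16}: statistic = {stat:6.2f}, p-value = {pvalue:.3f}")
```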
JS says
Re response to #33
Thank you for the link to the God does not play dice article. Quite interesting.
While tangential to the current discussion, it captures the point I was trying to make earlier (somewhat inadequately) about reconciling micro and macro phenomena.
(And a minor request to the mods – could you tweak the formatting so that your comments in #33 are all in green – it’s pretty clear who is saying what but some confusion might result at the moment.)
Pat Neuman says
Water temperature has a large influence on evaporation rates from large lakes. Evap from Superior and Michigan-Huron has been greatest in winter.
Terry says
Rasmus:
Thanks for the thoughtful reply to #35.
I agree that physics should inform the statistical analysis. Understanding the heat budget helps establish priors on what the boundaries of the model should be and so helps us form priors on the appropriate choice of statistics that should be applied.
I just wanted to reinforce my point that statistics can also be very helpful in understanding the power of our models and the confidence we should have in their predictions, and some elementary time-series analysis goes a long way here.
Just looking at the various temperature plots and proxy series makes me think that we are dealing with highly autocorrelated series with long-term, medium-term, and short-term trends superimposed. We know there is short-term correlation just by looking at the last hundred years of surface temperature data. We know there is long-term correlation from the fact of repeated glaciations, with swings of (I am told) 6 degrees or so. I would be very surprised if there were not medium-term correlations also (with periodicities of 100 to 1000 years).
With such priors, it is very difficult to say with certainty that recent temperature movements are historically anomalous based on the temperature record alone. This, in fact, is I think at the heart of much of the debate. (Much of the impact of MBH was the result that reconstructed temperatures were found to be extremely stable over time. You cite to it for that proposition. This was powerful because it makes the recent temperature increase appear anomalous. Essentially, much of the argument about MBH is whether it correctly estimates the variance of past temperatures.)
Supporting the AGW conclusion is the recent temperature increase which seems to be undeniable. The increase in CO2 is also undeniable and given the physics, it seems likely that increased CO2 has some relationship to increased temperature.
But, the statistics of autocorrelated series should teach us some humility, and inference based on such highly autocorrelated series should give us pause. It is very difficult to estimate long term trends. It is even more difficult to estimate long-term variances (because long-term variances depend critically on the long-term trend and the stationarity of the series.) And that is what much of this debate is about … what is the natural variability of temperature?
TCO says
Can you please explain the within-the-post reply to my comment number 8? I’m not even disagreeing (yet), just don’t understand what point was being made. Thanks.
[Response: GCMs do give a reasonable representation of the persistence and time structure of the global mean temperature. GCMs can also be run with a constant forcing, providing a null-distribution for the case when there is no change in the forcing (which is not the case for the real world).
By the way, I do not say the global mean temperature is iid – rather the reverse, since there is a trend! (A climate change implies a change in the distribution, and hence the data cannot then also be identically distributed.)
But now we are mixing the two aspects of iid: (1) independence and (2) identical distribution. By referring to autocorrelation, your arguments concern the former. If you subsample the data with a sufficient interval so that there is no memory between subsequent observations (which requires a long data series), then it is reasonable to say that the data are independent (let’s say ‘chaos’ erases the memory of a previous state). Then, if the climate is stable, the pdf of the climatic parameter (global mean temperature) is constant, and it is reasonable to say that the subsampled data would be iid. But if there is a trend, then the data would not be iid. -rasmus]
[Response: To save on pointless blog-to-blog ping pong, the difference between this statement and Manabe and Stouffer (1996) is that MS96 describe results from a control run (no forcing), while the real world and the latest AR4 runs include forcing effects (D. A. Stone et al, in press). Since forcings have trends they necessarily impart autocorrelation structure into the temperature spectra. This is reasonably modelled (and I suggest you register for access to the IPCC AR4 model data to check for yourselves). The key thing is to compare apples with apples. -gavin]
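The subsampling point in the response above is often expressed through a rule of thumb for serially correlated data: under an AR(1) assumption, the effective number of independent observations is roughly n(1 - r1)/(1 + r1), where r1 is the lag-one autocorrelation. A short sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(6)
n, phi = 150, 0.7              # hypothetical: 150 'annual' values with AR(1) memory

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Lag-one autocorrelation and the standard AR(1) approximation for the
# effective sample size, n_eff = n * (1 - r1) / (1 + r1)
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
print(f"lag-1 autocorrelation = {r1:.2f}; roughly {n_eff:.0f} independent values out of {n}")
```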
TCO says
Has the title post been edited/updated since originally written?
Hank Roberts says
Re #8, can you point to an explanation of your very brief point “why it’s best to use control integrations with GCMs to obtain null-distributions” — I am wondering if this is a shorthand reference to an ongoing argument about what’s best to use and to obtain, or if it’s a generally agreed basis for making models.
CapitalistImperialistPig says
You say: “We have heard arguments that so-called ‘random walk’ can produce similar hikes in temperature (any reason why the global mean temperature should behave like the displacement of a molecule in Brownian motion?).”
Well, there is the fact that climate, like a molecule undergoing Brownian motion, is subject to a very large number of forcings (anthropogenic, meteorological, astronomical, biological, and geological for climate, to name a few) with pseudo-random distributions in time. There is also the fact that the historical climate record as revealed by ice cores exhibits a lot of apparently random variation. I’m finding some difficulty in appreciating the point of your snark here. Exactly how much evidence do you need?
[Response:Do you not believe that the first law of thermodynamics matters for the global mean temperature? -rasmus]
You also say: “Another common false statement, which some contrarians may also find support for from the Cohn and Lins paper, is that the climate system is not well understood.”
So does that mean that climate models are now making accurate predictions? Can they, for example, predict right now which of the next 5 years will be the warmest? Can they, without special ex-post facto tuning, accurately reproduce the climate behavior of the last 150 years? If the answers to any or (more likely) all of these questions is no, what is the content of your implication that the climate system *is* well understood.
[Response: I think you are mixing up the concepts of ‘understanding’ and ‘predicting’. Have you heard about the so-called ‘butterfly effect‘/chaos? -rasmus]
Kooiti Masuda says
Climate is something like Brownian motion in the sense mentioned by C.I.P. (#46). On the other hand, the variation of temperature cannot be pure Brownian motion, or pure Brownian motion plus a constant linear trend as in Cohn and Lins’s model, because it has lower and upper bounds of physically possible values. (Even the motion of Brown’s pollen grains or spores may not be pure Brownian motion near the edge of the container.) Probably a better stochastic model of climate would be Brownian motion plus some restoring force, or a forced-dissipative system. The question here, I think, should be how good unbounded Brownian motion (with a linear trend) is as an approximation of a more realistic model in the context at hand.
[Response: The mean kinetic energy of the molecules in a gas is conserved, but the molecules are free to ‘wander off’ without constraints (until they hit a boundary, such as the walls of a container). -rasmus]
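Masuda’s suggestion of ‘Brownian motion plus some restoring force’ corresponds to a discretised Ornstein–Uhlenbeck (or stationary AR(1)) process. A small sketch with arbitrary parameters contrasts it with unbounded Brownian motion:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
shocks = rng.normal(0.0, 1.0, n)

# Pure Brownian motion (random walk): no restoring force, unbounded spread
brownian = np.cumsum(shocks)

# Brownian motion with a restoring force (discretised Ornstein-Uhlenbeck):
# each step relaxes part of the way back towards the equilibrium value
restoring = 0.05
ou = np.zeros(n)
for t in range(1, n):
    ou[t] = ou[t - 1] - restoring * ou[t - 1] + shocks[t]

print("spread of pure Brownian motion:", round(brownian.std(), 1))
print("spread with a restoring force: ", round(ou.std(), 1))
```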
Arun says
I take the statement that the climate system is well-understood means that all relevant physical causes have been identified, and their effects have been quantified (which is part of knowing whether something is relevant or not). The lack of predictability has to do with limited computational power and lack of initial value data. These in turn have to do with the scale of the climate system and the fact of the chaotic dynamics of the system.
The question is then how do we know that the climate system is well understood (i.e., all relevant physical causes have been identified) if we cannot predict? I assume the answer is partly that global climate models produce qualitatively the features we observe, including large-scale circulations, climate cycles, etc. Secondly, for, say, a couple of recent decades for which we have substantial data, the global climate models also match the data statistically.
Are the above statements correct? How would you amend them?
[Response: I believe your statements are fair. You could even stretch a bit further and ask ‘what makes us think that the ARIMA-type models are right?’ and apply the same demands to them. What do you get? -rasmus]
Terry says
Thanks for the update. I think there has been an abnormally large amount of confusion in this thread, and the update helps clear up quite a bit of the confusion. In the end, I don’t think there is really much disagreement here … just confusion.
I would like to take one last try at a point you make in the update.
“Some of the response to my post on other Internet sites seem to completely dismiss the physics. Temperature increases involve changes in energy (temperature is a measure for the bulk kinetic energy of the molecules), thus the first law of thermodynamics must come into consideration. ARIMA models are not based on physics, but GCMs are.”
I think a distinction between statistics and physics isn’t the right way to think about things for a few reasons.
1) It isn’t either/or. Statistical tools are an extra layer of analysis laid on top of the physics. Most importantly here, they tell us what inferences we can legitimately draw from the data … how confident can we be in our statements about what the data shows and the predictions we can make from the data and the physics. Where statistics can be especially helpful is in giving us very quick insight into the uncertainty of our results, and a little statistical insight goes a long way. In this case, the insight is that statistical significance is MUCH lower (perhaps orders of magnitude smaller) in the presence of autocorrelation than absent autocorrelation.
2) I don’t think it is accurate to say that statistical models such as ARIMA models are not based on physics. I think it is more accurate to say they incorporate the physics because they operate on the physical data, which incorporates all of the physical phenomena of interest. If the physical system you are studying exhibits persistence, then it shows up in the data as autocorrelation. Thermodynamics, heat budgets, you name it — they are all in there, in the data.
[Response: I think you are correct in your assertion that statistical models should implicitly reflect underlying physical processes explaining, for instance, the degree of persistence/serial correlation. But I do not think that ARIMA models are constructed out of physical considerations – they merely reflect the empirical data. You are also right that one should not separate ‘physics’ and ‘statistics’. I argue that you have to get both right, and I am sceptical of analyses where only statistical aspects are taken into account. A warming trend cannot just happen without a cause, and there must be some physics driving it. If you are looking into the cause, as in this particular case, I do not believe that statistical models are appropriate because (i) they are used to test a null-hypothesis where no anthropogenic forcing (or only solar/volcanic forcing) is assumed, and (ii) they are trained on empirical data subject to forcings (anthropogenic as well as solar/volcanic). -rasmus]
Alastair McDonald says
In reply to my #6 rasmus wrote “Right, there are two aspects to this radiation: the continuum associated with the atoms kinetic energy and the band absorption associated with the atomic electron configurations. ”
N2 and O2 radiate neither continuum nor band radiation. CO2 does not radiate continuum radiation, and H2O only radiates continuum radiation at a very low level. Thus, temperature-dependent continuum radiation can be ignored when considering atmospheric gases. The band radiation is broadened by pressure, not temperature. This means that the current breed of GCMs, which contain layers of atmosphere emitting radiation based on their temperatures, are not using the correct physics.
Surely there is at least one scientist at RealClimate with enough scepticism of the established science to see that Dr John Christy is right and the models are wrong.