From time to time, there is discussion about whether the recent warming trend is due just to chance. We have heard arguments that a so-called ‘random walk’ can produce similar hikes in temperature (is there any reason why the global mean temperature should behave like the displacement of a molecule in Brownian motion?). The latest in this category of discussions was provided by Cohn and Lins (2005), who in essence pitch statistics against physics. They observe that tests for trends are sensitive to the expectations, or the choice of the null-hypothesis.
Cohn and Lins argue that long-term persistence (LTP) makes standard hypothesis testing difficult. While it is true that statistical tests depend on the underlying assumptions, it is not a given that statistical models such as AR (autoregressive), ARMA, ARIMA, or FARIMA provide an adequate representation of the null-distribution. All of these statistical models represent some type of structure in time, be it as simple as a serial correlation, persistence, or more complex recurring patterns. Thus, the choice of model determines what kind of temporal pattern one expects to be present in the process analysed. Although these models tend to be referred to as ‘stochastic models’ (a random number generator is usually used to provide the underlying input for their behaviour), I think this is a misnomer, and that the labels ‘pseudo-stochastic’ or ‘semi-stochastic’ are more appropriate. It is important to keep in mind that these models are not necessarily representative of nature – they are just convenient models which to some degree mimic the empirical data. In fact, I would argue that all of these models are far inferior to the general circulation models (GCMs) for the study of our climate, and that the most appropriate null-distributions are derived from long control simulations performed with such GCMs. The GCMs embody much more physically-based information, and provide a physically consistent representation of the radiative balance, the energy distribution and the dynamical processes in our climate system. No GCM produces a global mean temperature hike like the one observed unless an enhanced greenhouse effect is taken into account. The question of whether the recent global warming is natural or not belongs to the ‘detection and attribution’ topic in climate research.
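To make the contrast concrete, here is a minimal sketch – in Python, with made-up numbers rather than any actual temperature record – of how a null-distribution for a trend is typically built from a semi-stochastic model such as an AR(1) process. The persistence parameter and the ‘observed’ trend below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_series(n, phi, sigma=1.0):
    """One realisation of an AR(1) ('red noise') process."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def ols_trend(y):
    """Ordinary least-squares trend (units per time step)."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

# Build a null-distribution of trends under an assumed AR(1) null hypothesis
n_years, phi = 150, 0.6                      # illustrative values only
null_trends = np.array([ols_trend(ar1_series(n_years, phi))
                        for _ in range(5000)])

observed_trend = 0.007                       # hypothetical trend, same arbitrary units
p_value = np.mean(np.abs(null_trends) >= abs(observed_trend))
print(f"two-sided p-value under the AR(1) null: {p_value:.3f}")
```

The verdict depends entirely on the assumed persistence parameter and on the choice of AR(1) rather than, say, FARIMA – which is exactly the point: the null-distribution is only as trustworthy as the model behind it, and a long GCM control run is a physically better-founded source for it.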
One difficulty with the notion that the global mean temperature behaves like a random walk is that it would imply a more unstable system, with hikes similar to the one we now observe occurring throughout our history. However, the indications are that the historical climate has been fairly stable. An even more serious problem with Cohn and Lins’ paper, as well as with the random-walk notion, is that a hike in the global surface temperature would have physical implications – be it energetic (Stefan-Boltzmann, heat budget) or dynamic (vertical stability, circulation). In fact, one may wonder whether an underlying assumption of stochastic behaviour is representative, since, after all, the laws of physics seem to rule our universe. On very microscopic scales, processes obey quantum physics and events are stochastic. Nevertheless, the probability of their position or occurrence is determined by a set of rules (e.g. the Schrödinger equation). Still, on a macroscopic scale, nature follows a set of physical laws, as a consequence of the way the probabilities are determined. After all, changes in the global mean temperature of a planet must be consistent with the energy budget.
Is the question of LTP then relevant for testing a planet’s global temperature for a trend? To some degree, all processes involving a trend also exhibit some LTP, and it is also important to ask whether the test by Cohn and Lins involves circular logic: for our system, forcings increase the LTP, so an LTP estimate derived from the data already contains the forcings and is not a measure of the intrinsic LTP of the system. The real issue is the true degrees of freedom – the number of truly independent observations – and the question of independent and identically distributed (iid) data. Long-term persistence may imply dependency between adjacent measurements, as slow systems may not have had time to change appreciably between two successive observations (more or less the same state is observed in successive measurements). Are there reasons to believe that this is the case for our planet? Predictions for the subsequent month or season (seasonal forecasting) are tricky at higher latitudes but reasonably skilful regarding the El Niño-Southern Oscillation (ENSO). However, it is extremely difficult to predict ENSO one or more years ahead. The year-to-year fluctuations thus tend to be difficult to predict, suggesting that LTP is not the ‘problem’ with our climate. On the other hand, there is also the thermal momentum in the oceans, which implies that the radiative forcing up to the present time has implications for the following decades. Thus, in order to be physically consistent, arguing for the presence of LTP also implies an acknowledgement of past radiative forcing, in favour of an enhanced greenhouse effect, since if there were no trend, the oceanic memory would not be very relevant (the short-term effects of ENSO and volcanoes would destroy the LTP).
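As an aside on the ‘true degrees of freedom’ mentioned above, a common rule of thumb for an AR(1)-like series is that serial correlation shrinks the effective number of independent observations. A minimal sketch (Python; the persistence value is invented for illustration):

```python
import numpy as np

def effective_sample_size(y):
    """Approximate number of effectively independent values in an
    AR(1)-like series: n_eff = n * (1 - r1) / (1 + r1),
    where r1 is the lag-one autocorrelation."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    y = y - y.mean()
    r1 = np.sum(y[1:] * y[:-1]) / np.sum(y * y)
    return n * (1.0 - r1) / (1.0 + r1)

# 150 'annual' values with strong persistence behave like far fewer
rng = np.random.default_rng(1)
x = np.zeros(150)
for t in range(1, 150):
    x[t] = 0.8 * x[t - 1] + rng.normal()
print(round(effective_sample_size(x)))   # far fewer than 150 (of order 20 here)
```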
Another common false statement, which some contrarians may also find support for in the Cohn and Lins paper, is that the climate system is not well understood. I think this statement is somewhat ironic, but the people who make it must be allowed to speak for themselves. If this statement were generally true, then how could climate scientists make complex models – GCMs – that replicate the essential features of our climate system? The fact that GCMs exist and that they provide a realistic description of our climate system is overwhelming evidence that such a statement must be false – at least concerning the climate scientists. I’d like to reiterate this: if we did not understand our atmosphere very well, then how could meteorologists make atmospheric models for weather forecasts? It is indeed impressive to see how some state-of-the-art atmospheric models, oceanic models, and coupled atmosphere-ocean GCMs reproduce features such as ENSO and the North Atlantic Oscillation (or Arctic or Antarctic Oscillation) on the larger scales, as well as smaller-scale systems such as mid-latitude cyclones (the German model ECHAM5 really produces impressive results for the North Atlantic!) and Tropical Instability Waves, with such realism. The models are not perfect and have some shortcomings (e.g. clouds and the planetary boundary layer), but these are not necessarily due to a lack of understanding; rather, they are due to limited computational resources. Take an analogy: how the human body works, consciousness, and our minds. These are aspects the medical profession does not understand in every detail due to their baffling complexity, but medical doctors nevertheless do a very good job curing us of diseases, and shrinks heal our mental illnesses.
In summary, statistics is a powerful tool, but blind statistics is likely to lead one astray. Statistics does not usually incorporate physically-based information, but derives an answer from a set of given assumptions and mathematical logic. It is important to combine physics with statistics in order to obtain true answers. And, to return to the issue I began with: it is natural for molecules under Brownian motion to go on a hike through their random walks (this is known as diffusion); however, it would be quite a different matter if such behaviour were found for the global planetary temperature, as this would have profound physical implications. Nature is not trendy in our case, by the way – because of the laws of physics.
Update & Summary
This post has provoked various responses, both here and on other Internet sites. Some of these responses have been very valuable, but I believe that some are based on a misunderstanding. For instance, some seem to think that I am claiming that there is no autocorrelation in the temperature record! For those who have this impression, I would urge you to please read my post more carefully, because that is not my message. The same goes for those who think that I am arguing that the temperature is iid, as this is definitely not what I say. It is important to understand the message before one can make a sensible response.
I will try to make a summary of my arguments and at the same time address some of the comments. Planetary temperatures are governed by physics, and it is crucial that any hypotheses regarding their behaviour are both physically and statistically consistent. This does not mean that I am dismissing statistics as a tool. Setting up such statistical tests is often a very delicate exercise, and I do question whether the ones in this case provide a credible answer.
Some of the responses to my post on other Internet sites seem to completely dismiss the physics. Temperature increases involve changes in energy (temperature is a measure of the mean kinetic energy of the molecules), thus the first law of thermodynamics must come into consideration. ARIMA models are not based on physics, but GCMs are.
When ARIMA-type models are calibrated on empirical data to provide a null-distribution which is then used to test the same data, the design of the test is likely to be seriously flawed. To reiterate: since the question is whether the observed trend is significant or not, we cannot derive a null-distribution using statistical models trained on the same data that contain the trend we want to assess. Hence GCMs, which both incorporate the physics and are not prone to circular logic, are the appropriate choice.
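A minimal illustration of the circularity (Python, synthetic numbers): if a deterministic trend is present and a persistence model is fitted to the raw series, the estimated persistence absorbs part of the trend, and a null-distribution built from that fit will then make the very same trend look unremarkable.

```python
import numpy as np

rng = np.random.default_rng(2)

def lag1_autocorr(y):
    """Sample lag-one autocorrelation."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    return np.sum(y[1:] * y[:-1]) / np.sum(y * y)

n = 150
noise = np.zeros(n)
for t in range(1, n):                  # weakly persistent 'natural' variability
    noise[t] = 0.3 * noise[t - 1] + rng.normal()
trend = 0.02 * np.arange(n)            # an imposed (forced) trend

print(lag1_autocorr(noise))            # close to the intrinsic value (about 0.3)
print(lag1_autocorr(noise + trend))    # noticeably higher: the forced trend
                                       # shows up as spurious extra 'persistence'
```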
There seems to be a mix-up between ‘random walk’ and temperatures. A random walk typically concerns the displacement of a molecule, whereas the temperature is a measure of the average kinetic energy of the molecules. The molecules are free to move away, but the mean energy of the molecules is conserved unless there is a source (first law of thermodynamics). [Of course, if the average temperature is increased, this affects the random walk, as the molecules move faster.]
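The difference can also be illustrated numerically. In the sketch below (Python, arbitrary units), the spread of a random walk keeps growing with time, whereas a stationary persistent process stays bounded – and it is the unbounded behaviour that is physically implausible for a planetary mean temperature constrained by an energy budget.

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, n_realisations = 1000, 2000
shocks = rng.normal(size=(n_realisations, n_steps))

# Random walk: cumulative sum of shocks -> variance grows linearly with time
random_walk = np.cumsum(shocks, axis=1)

# Stationary AR(1): variance stays bounded near sigma^2 / (1 - phi^2)
phi = 0.7
ar1 = np.zeros_like(shocks)
for t in range(1, n_steps):
    ar1[:, t] = phi * ar1[:, t - 1] + shocks[:, t]

for t in (10, 100, 1000):
    print(t,
          float(random_walk[:, t - 1].var()),   # roughly t
          float(ar1[:, t - 1].var()))           # roughly 1 / (1 - 0.49) ~ 2
```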
CapitalistImperialistPig says
Re: reply to comment 46; Rasmus – “Do you not believe that the first law of thermodynamics matters for the global mean temperature? -rasmus]”
Well, duh! What are forcings but effects that change the energy fluxes? I’m afraid I don’t understand this snark of yours either.
Rasmus again: “[Response:I think you are mixing the concepts ‘understand’ with ‘predicting’ Have you hear about the so-called ‘butterfly effect’/chaos? -rasmus]”
Let’s see if I understand your point about predict vs. understand. Historical sciences understand, physical science predicts, or as Lord Rutherford put it, “there are only two kinds of science, physics and stamp collecting.” Sure do sound like stamp collecting to me.
Since you mention it, there is a question that’s been on my mind. As you know, chaotic systems have both a fast and a slow manifold, so what would a slow-manifold eigenvector look like, and why should we expect that butterflies always flap in the fast one?
JS says
Re #49 and your update
I have to reiterate some of the things Terry has said and comment that we seem to be talking at cross purposes.
On one level, your description of statistics bears no relation to what I’m thinking of and, to my mind, what statistics really is. You seem to be talking about a philosophical model of the universe that is either deterministic or random. That is really irrelevant to statistical analysis.
Statistics is a wonderful tool to wield Occam’s razor with. It also minimises the all-too-human tendency to see only the results one wants to see. Statistical tests ultimately tell you whether your chosen model actually has some predictive ability or is merely a terribly complicated black box. If your model has no ability to outperform a simple univariate model then you should wield Occam’s razor. One of the most insightful and telling developments in the field of finance was that you can’t beat the random walk model of exchange rates. Billions of dollars have been expended on trying to beat the market and predict exchange rates even a few minutes ahead, but none of these models can beat a simple random walk model of exchange rates.
But even regardless of that point – statistics is not about simple univariate random walk models. It is a tool for evaluating the results from your model. Your model can incorporate as many physical laws as it likes and it will still need to be evaluated using statistics. It is never a case of physics or statistics – it must be physics and statistics. And the point here is that even if you are the best physicist in the world, there are elementary errors you can make when evaluating your model if you don’t apply the appropriate statistical techniques. I have discussed non-stationarity here because it seems most relevant, but there are other errors you can make. And the point of Cohn and Lins as I understand it is that climatologists are making elementary errors because they are not properly accounting for the autocorrelated nature of their data. It is not that you should throw out physics (although beware of Occam’s razor).
[Response: Maybe I can interject. First, I think we really all agree that statistics and physics are both useful in this endeavour. The ‘problem’, such as it is, with Cohn and Lins’ conclusions (not their methodology) is the idea that you can derive the LTP behaviour of the unforced system purely from the data. This is not the case, since the observed data clearly contain signals of both natural and anthropogenic forcings. Those forcings inter alia impart LTP into the data. The models’ attribution of the trends to the forcings depends not on the observed LTP, but on the ‘background’ LTP (in the unforced system). Rasmus’ point is that the best estimate of that is probably from a physically-based model – which nonetheless needs to be validated. That validation can come from comparing the LTP behaviour in models with forcing and the observations. Judging from preliminary analyses of the IPCC AR4 models (Stone et al, op cit), the data and models seem to have similar power-law behaviour, but obviously more work is needed to assess that in greater detail. What is not a good idea is to use the observed data (with the 20th century trends) to estimate the natural LTP and then calculate the likelihood of the observed trend with the null hypothesis of that LTP structure. This ‘purely statistical’ approach is somewhat circular. Maybe that is clearer? -gavin]
David Stockwell says
Gavin, I am not following this. When I fit an ARMA model to the CRU annual instrumental data using R I had to detrend it first. It then yielded very high AR values (>0.95). R would not actually let me fit it without detrending and gave a message to that effect. I am not sure, as I haven’t gone into it in detail, but (1) the trend may not affect the AR coefficient, and hence the ‘trendiness’ of the series, (2) it is easy to get rid of anyway, and (3) the 20th century trend may not have actually affected Cohn and Lins’ results. Anyway, you can get the ARMA structure, and hence ‘trendiness’, in spite of the trend. Care to explain how forcing affects LTP estimates?
[Response: The forcing is not linear, so linear detrending will not remove correlations related to the changes in forcings. There will always be red noise in a climate record due to the thermal inertia of the ocean. The issue is whether there is any LTP in the absence of forcings. That is what is relevant for the null hypothesis – gavin]
per says
Let me see if I can follow gavin’s reply to #52?
Although we have accurate temperature data for the last two centuries, and it does show autocorrelation, you are hypothesising that this is due to the natural and anthropogenic forcing. We cannot extrapolate from this to previous times, because there was no anthropogenic forcing.
It is very tempting to conclude that you are suggesting that it is only anthropogenic forcing which causes autocorrelation. Clearly, if natural forcings can cause autocorrelative behaviour, then we would have to conclude that previous temperature records could be autocorrelated.
I have to say it is very difficult to understand why we should accept your hypothesis that only current conditions, and no other, should result in autocorrelated temperatures. It seems to me to be speculation.
I believe that there are historical temperature records going back over thousands of years. Is there not evidence of autocorrelation from these series?
I do not understand your logic with respect to GCMs. You say that the behaviour of GCMs must be validated. But it appears to be an integral part of your case that you cannot do that validation with the temperature records of the last two centuries. How then will we ever be able to test whether GCMs adequately represent the autocorrelative (or otherwise) properties of nature, if we do not have a database to test them against?
yours
per
[Response: It may help you to follow if you actually read what is said. All forcings (specifically solar, but also GHGs and aerosols etc.) impart LTP, so the observed LTP is a mix of the LTP in the unforced system (which we want to know) and the LTP imparted by the forcings (which is already known). How then is one to estimate LTP in the unforced climate? We can take a GCM and see what it does in the absence of forcings. But to compare it to the real world we need to run it with as much of the observed forcings as we can manage. Then, comparing the forced GCM to the real world and doing the same analyses, we can ask whether the autocorrelation structure of the model is similar to that of the data. If so, we would then have some grounds for supposing that the background LTP as estimated from the control GCM might be reasonable. Why you appear to think that GCMs should not be tested against the real world is beyond me. – gavin]
Terry says
Gavin:
A request for clarification. When you talk about estimating the LTP in the “unforced system,” do you mean unforced just by AGW or do you mean unforced by everything, i.e., unforced by either AGW or “natural” forcings?
I am guessing that you mean unforced by just AGW, and that you want to estimate the LTP of the “natural” or “non-AGW” system. If so, why isn’t a reasonable estimate of the natural LTP the LTP we observe in the natural + AGW system? Is there some reason to believe that AGW seriously distorts the estimated LTP? Perhaps because AGW is somehow larger or more persistent than “natural” forcings? Off the top of my head, I don’t see why this would be the case.
Or maybe I’m just completely wrong. I don’t pretend to understand GCMs very well.
Armand MacMurray says
Re: #55
Terry, Gavin means “unforced by everything,” not just by AGW forcings.
Terry says
Re: #56
Armand:
Oh. … Then I missed the boat on this one. I have no idea why you would care about long-term persistence in a system with no forcings whatsoever. I thought we were interested in whether recent trends were consistent with a system without AGW forcing. Then, I thought the relevant question was whether a non-AGW system with non-AGW forcings can exhibit trends comparable to the recent one, in which case the recent trend could be non-AGW.
What am I missing? Does it have something to do with understanding whether the climate system itself can generate persistence (as opposed to persistence generated by the forcings)? Why should we care whether the persistence comes from the system or the forcings?
I am beginning to suspect that I have missed something fundamental here. Perhaps I should just be quiet for a while.
per says
Dear Gavin
Re: 54
I put my question to see if I could follow what you wrote. We agree that GCMs should be tested against real world data.
I am a bit confused about your suggestion of a climate with no forcings; if I understand correctly, that would be a climate with no solar input, GHGs or aerosols, for example, and hence it is not a realistic prospect to have data from such a situation.
Surely discussions about extrapolating the historical temperature records must use the historical temperature data, which are subject to all the normal, natural forcings? If you are accepting that the natural forcings impart LTP (or autocorrelation), surely you are accepting that the historical temperature record is autocorrelated? Surely, then, you are at one with Cohn and Lins?
It has been brought to my attention that Pelletier (PNAS 99, 2546) describes autocorrelation in deuterium concentrations in the Vostok ice core over periods up to 200,000 years, which I would understand to be a temperature proxy.
yours
per
[Response: Forcings in the sense meant are the changes to the solar input/GHGs/aerosols etc. Attribution is all about seeing whether you can match specific forcings to observed changes and this is used for all forcings solar, volcanic and GHG included. The baseline against which this must be tested is a control run with no changes in any forcing. -gavin]
Ferdinand Engelbeen says
Re #18,
A project similar to CMIP-2, AMIP-2 (for atmospheric models only), compared the results of several (20) climate models for a first-order forcing: the distribution of the amount of the sun’s energy reaching the top of the atmosphere (TOA), dependent on latitude and longitude, in the period 1985-1988. See the recently published work of Raschke et al.
Robert K. Kaufmann says
I would like to pick up on a comment made by per (#58) about testing GCMs against real-world data. As an outsider to the GCM community, I did such an analysis by testing whether the exogenous inputs to a GCM (radiative forcing of greenhouse gases and anthropogenic sulfur emissions) have explanatory power about observed temperature relative to the temperature forecast generated by the GCM. In summary, I found that the data used to simulate the model have information about observed temperature beyond the temperature data generated by the GCM. This implies that the GCMs tested do not incorporate all of the explanatory power in the radiative forcing data in the temperature forecast. If you would like to see the paper, it is titled “A statistical evaluation of GCM’s: Modeling the temporal relation between radiative forcing and global surface temperature” and is available from my website
http://www.bu.edu/cees/people/faculty/kaufmann/index.html
Needless to say, this paper was not received well by some GCM modelers. The paper would usually have two good reviews and one review that wanted more changes. Together with my co-author, I made the requested changes (including adding an “errors-in-variables” approach). The back and forth was so time consuming that in the most recent review, one reviewer now argues that we have to analyze the newest set of GCM runs – the runs from 2001 are too old.
The reviewer did not state what the “current generation” of GCM forecasts are! Nor would the editor really push the reviewer to clarify which GCM experiments would satisfy him/her. I therefore ask readers what are the most recent set of GCM runs that simulate global temperature based on the historical change in radiative forcing and where I could obtain these data?
[Response: The ‘current runs’ are the ones made available as part of the IPCC 4AR. For your purposes, you will want to look at the simulations made for the 20th Century and there are (I think) 20 different models from 14 institutions with multiple ensembles available. You need to register for the data, but there are no restrictions on the analyses you can do (info at http://www-pcmdi.llnl.gov/ipcc/about_ipcc.php ). Many of the runs have many more forcings than you considered in your paper, which definitely improves the match to the obs. However, I am a little puzzled by one aspect of your work – you state correctly that the realisation of the weather ‘noise’ in the simulations means that the output from any one GCM run will not match the data as well as a statistical model based purely on the forcings (at least for the global mean temperature). This makes a lot of sense and seems to be equivalent to the well-known result that the ensemble mean of the simulations is a better predictor than any individual simulation (specifically because it averages over the non-forced noise). I think this is well accepted in the GCM community, at least for the global mean SAT. That is why simple EBMs (such as Crowley (2000)) do as good a job for this as GCMs. The resistance to your work probably stems from a feeling that you are extrapolating that conclusion to all other metrics, which doesn’t follow at all. As I’ve said in other threads, the ‘cutting edge’ for GCM evaluation is at the regional scale and for other fields such as precipitation; the global mean SAT is pretty much a ‘done deal’ – it reflects the global mean forcings (as you show). I’d be happy to discuss this some more, so email me if you are interested. – gavin]
Hank Roberts says
In today’s news:
http://www.nature.com/nature/journal/v438/n7071/abs/nature04348.html#a1
Eli Rabett says
It seems to me that you are kicking the can down the road. Physics provides a connection between forcings and temperatures, and as I understand it, you want to use GCMs to obtain temperature series which can be compared to measurements (either proxy or instrumental) to determine whether there is a trend. You contrast this with statistical analysis of the temperature measurements.
However, this merely displaces the problem to one of whether the forcings have trends, and how you will determine that. The physics of some of the forcings is in really rough shape; there are lots of forcings, some of them go one way, some go the other, etc. Worse, the GCM models only provide a range of temperatures, so even more uncertainty is introduced.
A minor niggle: in answer to number 6, gavin did not point out that molecules can be excited by collisions as well as by absorption of photons. The interchange of energy between translational and vibrational modes leads to heating of the atmosphere by absorption of IR radiation and to the thermal population of vibrationally excited states of CO2 and H2O which radiate.
Hans Erren says
Indeed, now if you don’t understand the behaviour of your system (i.e. physics) you can’t model it, right?
How about tidal effects?
C. D. Keeling and T. P. Whorf, 2000, The 1,800-year oceanic tidal cycle: A possible cause of rapid climate change, PNAS, April 11, 2000; 97(8): 3814 – 3819.
C. D. Keeling and T. P. Whorf, 1997, Possible forcing of global temperature by the oceanic tides, PNAS 1997 Aug 5;94(16):8321-8
Eli Rabett says
Hans Erren is making a typical argument in denial, that if you don’t understand everything you don’t understand anything. Just about every scientifically based issue that must be dealt with in the policy arena must endure this attack, for example the discussions about tobacco and CFCs. The argument is useful politically for two reasons. First it casts doubt on what the overwhelming science points to, second, it is an excuse to delay (more study needed). However, this argument ignores a basic truth about physics.
A major reason that physics is useful is that even complicated systems have only one or two dominant “forcings” so that simple models incorporating only a few elements are useful, even accurate.
Detailed modeling may require addition of complications, but it is rare to unheard of that all of the added features push the system in the same direction, and for the most part they cancel out. What more complex models allow you to do is to gain insight into the behavior of the system beyond the coarse grained simple model. That has been the story of the anthropic greenhouse effect, and why predictions of global effects have remained relatively stable over 100 years, no matter how many additional details are added to the model or additional forcings are included.
The first paper that Hans points to raises an important point in its penultimate paragraph
“Even without further warming brought about by increasing concentrations of greenhouse gases, this natural warming at its greatest intensity would be expected to exceed any that has occurred since the first millennium of the Christian era, as the 1,800-year tidal cycle progresses from climactic cooling during the 15th century to the next such episode in the 32nd century.”
Those in denial insist that this is an either/or problem (either anthropically driven warming OR something else) and are busy throwing every piece of something else they can think of against the wall to see what sticks. Unfortunately it is a problem of A AND B, as is illustrated here. Frankly, at this time I have no idea of the importance of the cited oceanic tidal cycle, but C. D. Keeling was certainly not a doubter on greenhouse warming.
A particularly frequent example of this diversion is the argument about CO2 mixing ratios rising after warming began to bring the planet out of an ice age. To the extent that the evidence supports this (and the data are interesting, but perhaps not rock solid), it is clear that increased solar input caused by orbital effects increased CO2 concentrations, which then in turn reinforced the warming – a positive feedback, as it were. What this says is that increasing CO2 mixing ratios clearly warms the surface. It does not matter whether the jolt is delivered by anthropic fossil fuel burning, or from the effect of any other positive forcing.
Hank Roberts says
I’m going to make a prediction (grin)
The next big idea will be cooling the earth by using comet dust or other material from earth-orbit-crossing objects — blowing them up to introduce large quantities of fine dust into the upper atmosphere — creating “dust events” like those that show up in the climate cores, without introducing large volumes of water to the stratosphere; this will require finding a dry, dusty, frangible comet.
Then, if other forcings change and we get too cold, follow with a nice wet comet, to warm things up.
I suppose climate modeling is going to lead to terraforming, eventually. I hope we get it right.
http://freefall.purrsia.com/ff1200/fv01190.htm
Alastair McDonald says
Eli, I feel I must reply to your remark “The interchange of energy between translational and vibrational modes leads to heating of the atmosphere by absorption of IR radiation and to the thermal population of vibrationally excited states of CO2 and H2O which radiate.” The heating of the atmosphere means, from energy considerations, that the radiation absorbed by the greenhouse gases does not equal the radiation emitted, the normal expression of Kirchhoff’s Law. Furthermore, the effect of temperature on greenhouse gas emissions is called Doppler broadening. The American Meteorological Glossary remarks that ‘At normal temperatures and pressures Doppler broadening is dwarfed by collision broadening, but high in the atmosphere Doppler broadening may dominate and, indeed, provides a means of remotely inferring temperatures.’ See:
http://amsglossary.allenpress.com/glossary/browse?s=d&p=40
In other words, Doppler broadening does not affect the troposphere, where the climate is decided.
wayne davidson says
#66, Alastair: that may be true – the stratosphere should warm, for example – but that was not the case for 2005 (reference the WMO 2005 summary: no stratospheric warming); all the action was in the troposphere, especially from shortly above the surface to the tropopause, where the warming is happening. Although some controversy has been raised about radiosonde thermistor accuracy and satellite resolution, there are other ways to see this. Literally see it, especially in the polar regions, by the increasing brightness of twilight during the long night, a product of ever-expanding warmer upper air interfacing with colder surface air, trapping light more often than ever. There are other ways as well to show that it is mostly in the troposphere. I would suggest that water vapour is the biggest factor, being increased by greenhouse gases. I see this now, in clear polar nights with star magnitudes not as dim as in previous years (maximum of 4.7 mag.) and unusually warm surface temperatures given the lack of clouds.
Stephen Berg says
Polar bears treading on thin ice
Climate change blamed for decline in population along Hudson Bay coast
http://www.theglobeandmail.com/servlet/ArticleNews/TPStory/LAC/20051224/POLAR24/TPEnvironment/
Hans Erren says
http://www.nwtwildlife.rwed.gov.nt.ca/Publications/speciesatriskweb/polarbear.htm
Don Baccus says
#69: The trend data they’re reporting are for the Northwest Territories ONLY. That’s the NWT wildlife agency site you’re referencing.
22,000-27,000 animals worldwide.
Of these, 3,000 can be found along the Arctic coasts of the NWT. And of these 3,000 ONLY, two [sub]populations are stable; the third, a SMALL population (hundreds? it doesn’t say), is increasing.
The 3,000 do not form a random sample; you can’t extrapolate data from the small numbers in the NWT to the worldwide population.
Predictions that polar bears may face extinction are based on their natural history. They den on land, they wander and feed on polar ice after the winter freeze sets in. If polar ice sheets no longer connect with land in winter, polar bears will disappear, that’s a given. Even if there remains a winter freeze-up connecting ice sheets with land, if the bears are stuck on land too long, they’ll starve or be in poor health before they can travel to their seal hunting grounds on ice. Unlike other bear species, which are omnivorous, polar bears are more strictly carnivorous (they’ll eat vegetation as an extreme measure only).
Hudson’s Bay is apparently warming to the point where changes in the amount of time it is frozen are affecting polar bear populations. Most of the Hudson’s Bay summer habitat (the most famous being the area surrounding Churchill) lies far to the south of the summer habitat utilized by the NWT subpopulations you reference. We would expect the effects of warming to appear in Churchill in advance of problems showing up further north.
Eli Rabett says
In reply to #66.
1. Kirchhoff explicitly allowed for systems which only absorb and emit in restricted wavelength regions (as molecules do). The argument he presented starts by considering two parallel plates, one of which absorbs and emits at all wavelengths, and the other of which only absorbs and emits between LAMBDA and LAMBDA + dLAMBDA (if HTML was designed at CERN why the devil didn’t Berners-Lee build decent sub/superscripting and Greeks into the thing?). At equilibrium the ratios of emissivity to absorptivity of both systems are equal. This can be generalized to all substances with a zeroth-law-of-thermodynamics type argument. In other words, Kirchhoff’s law applies to molecules.
2. High in the atmosphere is a relative concept. For practical purposes when considering the greenhouse effect, the top of the atmosphere is a few kilometers, while much of the absorption and emission of radiation occurs relatively close to the surface. In any case the number density of molecules at 6 km is only about a factor of 2 less than at the surface (average velocity decreases by ~10%).
Hans Erren says
re 64:
Eli, I published a peer-reviewed model calculation on coal maturation using physics first principles. GCMs do not use first principles; they use parameterisations to calculate sub-grid-cell relationships, e.g. between sea surface temperature and precipitation, as individual thunderstorms cannot be modeled.
Arctic cloud modeling is a joke.
[Response: There is no parameterisation in a GCM that connects SST to precipitation. Arctic cloud modelling is difficult, it is not a joke. -gavin]
Hans Erren says
re 70
http://www.polarbearsinternational.org/bear-facts/
Pat Neuman says
re 73.
Hans,
On average, the data used in your summary of polar bear status are 10 years out of date and were of poor-to-fair quality.
See: http://pbsg.npolar.no/status-table.htm
Hans Erren says
re 74:
Thanks for the update
The table cited shows a lot of unknowns, two decreasing and two increasing populations.
The decreasing groups (3600) are the most “harvested” with 202 kills per year. I’d suggest a moratorium here…
Don Baccus says
#73: In what way does your comment address the point that predictions of future troubles for polar bears are based on their natural history? Are you seriously trying to argue that the fact that populations are stable today indicates that they’ll survive if their habitat changes significantly?
Such thinking is just silly. Predicting the polar bear’s future depends on the accuracy of two things: our ability to predict climate change and the species’ response to the habitat change that follows.
Current population numbers aren’t relevant.
Stephen Berg says
Re: #73,
From the same site:
http://www.polarbearsinternational.org/bear-facts/climate-change/
“Climate Change
The Arctic’s climate is changing, with a noticeable warming trend that is affecting polar bears. The region is experiencing the warmest air temperatures in four centuries. The Intergovernmental Panel on Climate Change, the U.S. EPA, and the Arctic Climate Impact Assessment all report on the effect of this climatic change on sea-ice patterns. A recent report notes that there has been a 7% reduction in ice cover in just 25 years and a 40% loss of ice thickness. It also predicts a mostly ice-free arctic summer by 2080 if present trends continue. Many scientists believe that the Arctic will continue to grow warmer as a result of human activity, namely, the introduction into the atmosphere of increasing quantities of carbon dioxide and other “greenhouse gases”. While there is no consensus on whether human activity is the most significant factor, the Arctic has in fact been warming, whatever the cause.
Anecdotal evidence indicates that polar bears may be leaving the sea ice to den on land in winter. In Russia, large numbers of bears have been stranded on land by long summers that prevent the advance of the permanent ice pack. Some Inuit hunters in Canada say they can no longer hunt polar bears in the spring because of early ice melts. In the Hudson Bay area, research (sponsored in part by PBI) has found that areas of permafrost have declined, leaving polar-bear denning areas susceptible to destruction by forest fires in the summer. A warm spring might also lead to increased rainfall, which can cause dens to collapse.
Polar bears depend on a frozen platform from which to hunt seals, the mainstay of their diet. Without ice, the bears are unable to reach their prey. In fact, for the western Hudson Bay population of polar bears (the population near Churchill in the Province of Manitoba, Canada), researchers have correlated earlier melting of spring ice with lower fitness in the bears and lower reproduction success. If the reduced ice coverage results in more open water, cubs and young bears may also not be able to swim the distances required to reach solid ice.
Further north, in areas where the ice conditions have not changed as much, seal populations have grown (either through migration or more successful reproduction) and polar bear populations are expanding.
Because polar bears are a top predator in the Arctic, changes in their distribution or numbers could affect the entire arctic ecosystem. There is little doubt that ice-dependent animals such as polar bears will be adversely affected by continued warming in the Arctic. It is therefore crucial that all factors which may affect the well-being of polar bears be carefully analyzed. Conservative precautionary decisions can only be made with a full understanding of the living systems involved.”
Alastair McDonald says
Eli, in 71.1 you describe a thought experiment where a black body is separated from a grey body by air or a vacuum. Even in the case where they are separated by air, the gas plays no part in the experiment. For you to then assert that air is a grey body is a non sequitur, because Kirchhoff is clearly ascribing a solid to that role. Moreover, Kirchhoff was the first to discover that the strength of emission lines is independent of the radiation in which they form. In other words, as the background radiation increases they change from being emission into absorption lines. As you must be aware, lines are not formed by the same process as that which causes continuous blackbody radiation, i.e. the effect from electronic vibrations. The radiation from greenhouse gases is due to molecular vibrations.
In reply to 71.2, when I say high, I mean above the height at which the radiation at the effective temperature is emitted. Doppler broadening starts to be important at heights above 40 km, well above the 6 km where conventional wisdom says the greenhouse effect operates.
Hans Erren says
re 77:
four?
Which means 1600 was hotter than present?
Stephen Berg says
Re: #79, “Which means 1600 was hotter than present?”
No. Not necessarily. The statement says that the temperatures today are greater than those of any year in the past 400+ years.
It does not state whether the temperatures prior to 1600 were warmer than today, but given the great accuracy of the Hockey Stick graph, it is likely that temperatures today are the warmest in far more than 400 years.
Alastair McDonald says
Re 79. No, it means that records only go back 400 years, to the time when Frobisher, Davis, Baffin, and Hudson were the first Europeans to explore there and make records of the conditions. That was the time of the Maunder Minimum at the end of the Little Ice Age, which drove the Vikings out of Greenland, so it seems rather silly for you to assert that temperatures in 1600 AD were warmer than today.
[Response:I’m not sure if we can conclude that the LIA (Little Ice Age) drove the Vikings out of Greenland (where did they go?), although it may be one plausible explanation. There could also be other explanations for why the Viking settlements on Greenland perished. After all, the Inuits seemed to manage to survive. Furthermore, the conditions on Greenland may have been local and not necessarily the same as for the entire globe. -rasmus]
Demetris Koutsoyiannis says
Even though I do not concur with the views of this article (Naturally trendy? rasmus; 16 Dec 2005 @ 4:50 pm), I must congratulate the author for discussing and disseminating to climatologists the recent work of Cohn and Lins and, indirectly, the consequences of the related natural behaviour (which this work examines) for statistical inferences and modelling.
In fact this “trendy” behaviour has been known for at least 55 years since Hurst reported it as a geophysical behaviour or for 65 years since Kolmogorov introduced a mathematical model for this. It is known under several, more or less successful, names such as: Long Term Persistence, Long Range Dependence, Long Term Memory, Multi-Scale Fluctuation, the Hurst Phenomenon, the Joseph Effect and Scaling Behaviour; other names have been used for mathematical models describing it such as: Wiener Spiral (the first term used by Kolmogorov), Semi-Stable Process, Fractional Brownian Noise, Self-Similar Process with Stationary Intervals, or Simple Scaling Stochastic Process.
I think that this behaviour relates to climatology far more than to any other discipline, and I wonder why it has not been generally accepted so far in climatological studies (or am I wrong?). In contrast, in many engineering studies, for example of reservoir design, the consequences of this behaviour are analysed. Also, the same behaviour has been studied by economists and by computer and communication scientists in their own time series.
Of course, this behaviour is not only met in the record analyzed in Cohn and Lins’ work. On the contrary, a lot of studies have provided evidence that it is probably omnipresent in all series and at all times (i.e., not only in the 20th century) – but it can be seen only in long records. To mention a single example, the Nilometer data series (maximum and minimum levels of the Nile River), which clearly exhibits this behaviour, extends from the 7th to at least the 13th century AD. Cohn and Lins’ article contains a lot of references to works that have provided this evidence. For those who may be interested in more recent references, here is a list of three contributions of mine, trying to reply to some questions related to the present discussion:
What is a “trend”? What is the meaning of a “nonstationary time series”? How are these related to the scaling behaviour? See: Koutsoyiannis, D., Nonstationarity versus scaling in hydrology, Journal of Hydrology, 2006 (article in press; http://dx.doi.org/10.1016/j.jhydrol.2005.09.022 ).
Can simple dynamics (which do not change in time) produce scaling (“trendy”, if you wish) behaviour? See: Koutsoyiannis, D., A toy model of climatic variability with scaling behaviour, Journal of Hydrology, 2006 (article in press; http://dx.doi.org/10.1016/j.jhydrol.2005.02.030 ).
Why the scaling behaviour (rather than more familiar ones described by classical statistics) seems to be so common in nature? See: Koutsoyiannis, D., Uncertainty, entropy, scaling and hydrological stochastics, 2, Time dependence of hydrological processes and time scaling, Hydrological Sciences Journal, 50(3), 405-426, 2005 (http://www.extenza-eps.com/IAHS/doi/abs/10.1623/hysj.50.3.405.65028;jsessionid=noyCMpKB1OFcDF7U5H?cookieSet=1&journalCode=hysj).
[Response:Thanks for your comment. I am not saying there is no long-term persistence; that is well known. But I am saying that there are physical reasons for such behaviour, and that this must be acknowledged in order to understand the phenomenon. The time structure – persistence and some of the short-term hikes which can be ascribed to ‘natural variations’ – can for instance be explained by the oceans’ heat capacity and either changes in natural forcings (volcanoes or solar) or chaos. They do not happen spontaneously and randomly (I suppose there was some confusion about what I meant by ‘random’, which I used in the meaning ‘just happens’, without a cause). Thus, my point is that physical processes are at play giving rise to these phenomena, and pure statistical models therefore do not reveal all sides of the process. Although these statistical models may give a behaviour similar to the variations of the earth – if their parameters are optimally set – they do not necessarily prove that the process always behaves that way. There may be changes in the circumstances (e.g. different external forcing). There have been claims that GCMs have not been proved to be representative of our climate, but I believe this is more true for the statistical models.
A change in the global mean temperature is different to, say, the flow of the Nile, since the former implies a vast shift in heat (energy), and there have to be physical explanations for this. It just does not happen by itself. Again, some such temperature variations can be explained by changes in the forcing. Hence, when dealing with attribution, the question is to which degree the variations are ‘natural’. When one uses the observations to derive a null-distribution, and one does not know how much of the trend is natural and how much is anthropogenic, this may lead to circular reasoning and a false acceptance of the null-hypothesis. This is not a problem with GCMs, which can be run with natural forcing only and with combined natural and anthropogenic forcing. The GCMs also give a good description of our climate’s main features. -rasmus]
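For readers unfamiliar with the scaling (Hurst) behaviour discussed in the comment above, here is a minimal sketch (Python, synthetic data) of the aggregated-variance method for estimating a Hurst-like exponent; it is only one of several estimators and is shown purely for illustration. A strongly persistent record would yield an estimate well above 0.5.

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes):
    """Estimate the Hurst exponent H by the aggregated-variance method:
    for a self-similar process, Var(block means of size m) ~ m**(2H - 2)."""
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(4)
white_noise = rng.normal(size=20000)
print(hurst_aggregated_variance(white_noise, [2, 4, 8, 16, 32, 64]))  # about 0.5
```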
Eli Rabett says
To #78
1. A black body is one whose absorptivity and emissivity are unity at all wavelengths.
2. A grey body is one whose absorptivity and emissivity are the same at all wavelengths but less than unity.
Therefore what I described in 71 is NOT a grey body but one whose absorptivity and emissivity change as a function of wavelength, e.g. a molecule.
Kirchhoff’s law is quite general. A simple derivation can be found at http://ceos.cnes.fr:8100/cdrom/ceos1/science/dg/dg10.htm.
The idea is that a body (such as a volume of atmosphere, a point that Alastair does not appear to recognize) at equilibrium has a constant temperature. Therefore the amounts of energy absorbed and emitted must be equal. It is then trivial to show that the absorptivity and emissivity at any wavelength must be equal (see the URL for a detailed derivation)
For a molecule, the absorptivity is zero at most wavelengths and so is the emissivity. Both are non-zero only where there are molecular absorption lines.
Kirchhoff’s law applies to the atmosphere, both for components that consist only of molecules and for those where there are aerosols (for example clouds).
To move to Alastair’s second point, line shapes are determined by a combination of pressure broadening (Lorentzian**) and Doppler broadening (Gaussian). The combination of these two functions produces a Voigt line shape. This is the appropriate shape to use under atmospheric conditions. While Doppler is not dominant in the troposphere, it cannot be neglected.
The Gaussian Doppler-broadening profile is determined by the velocity distribution of the molecules and is thus only temperature dependent. The Lorentzian** “pressure”-broadened lineshape is determined by the number of collisions per second and their duration. At normal temperatures and pressures (and lower in both parameters) one can safely assume that collisions are binary and instantaneous, which yields a Lorentzian line shape. The number of collisions per second is determined both by number density and by temperature, thus pressure broadening is also a function of temperature. You could look up the various parameters for CO2 in databases.
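For reference, a sketch of the standard expressions behind the statements above, in LaTeX notation (symbols are defined in the comments; the temperature exponent n is an empirically tabulated value):

```latex
% nu_0: line-centre frequency, T: temperature, m: molecular mass,
% k_B: Boltzmann's constant, c: speed of light.

% Doppler (Gaussian) full width at half maximum -- depends on temperature only:
\Delta\nu_D = \nu_0 \sqrt{\frac{8 k_B T \ln 2}{m c^2}}

% Pressure (Lorentzian) half width -- set by the collision rate, hence by both
% pressure and temperature (n is typically of order 0.5-0.8 for CO2 lines):
\gamma_L(p, T) = \gamma_{\mathrm{ref}} \, \frac{p}{p_{\mathrm{ref}}}
                 \left( \frac{T_{\mathrm{ref}}}{T} \right)^{n}

% The observed line shape is the convolution of the two (a Voigt profile):
\phi_V(\nu) = \int_{-\infty}^{\infty} \phi_G(\nu') \, \phi_L(\nu - \nu') \, d\nu'
```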
Avert your eyes if you are really not interested in ultimate details.
**If you get REALLY good at measuring line shapes you have to start allowing for collision times that are finite (collision time means the time during which the collision partners interact). This modifies the Lorentzian line shape and is called a Chi factor. You could google it.
Demetris Koutsoyiannis says
A few points on the response to #82 by rasmus, which I appreciate:
1. “Statistical questions demand, essentially, statistical answers”. (Here I have quoted Karl Popper’s second thesis on quantum physics interpretation – from his book “Quantum Theory and the Schism in Physics”). The question whether “The GCMs […] give a good description of our climate’s main features” (quoted from rasmus’s response) or not is, in my opinion, a statistical question as it implies comparisons of real data with model simulations. A lot of similar questions (e.g., Which of the GCMs perform better? Are GCMs’ future predictions good enough? Do GCM simulations reproduce important natural behaviours?) are all statistical questions. Most of all, the “attribution” questions (to quote again rasmus, “how much of the trend is natural and how much is anthropogenic” and “to which degree the variations are ‘natural’”) are statistical questions as they imply statistical testing. And undoubtedly, questions related to the uncertainty of future climate are clearly statistical questions. Even if one believes that the climate system is perfectly understood (which I do not believe, thus not concurring with rasmus), its complex dynamics entail uncertainty (this has been well documented nowadays). Thus, I doubt whether one can avoid statistics in climatic research.
2. Correct statistical answers demand correct statistics, appropriate for the statistical behaviours exhibited in the phenomena under study. So, if it is “well known” that there is long-term persistence (I was really happy to read this in rasmus’s response), then the classical statistical methods, which are essentially based on an Independent Identically Distributed (IID) paradigm, are not appropriate. This I regard as a very simple, almost obvious, truth, and I wonder why climatic studies are still based on the IID statistical methods. This query, as well as my own answer, which is very similar to Cohn and Lins’ one, I expressed publicly three years ago (Koutsoyiannis, D., Climate change, the Hurst phenomenon, and hydrological statistics, Hydrological Sciences Journal, 48(1), 3-24, 2003 – http://www.extenza-eps.com/IAHS/doi/abs/10.1623/hysj.48.1.3.43481). In this respect, I am happy for the discussion of Cohn and Lins’ work, hoping that this discussion will lead to more correct statistical methods and more consistent statistical thinking.
3. Consequently, to incorporate the scaling behaviour in the null hypothesis is not a matter of “circular reasoning”. Simply, it is a matter of doing correct statistics. But if one worries too much about “circular reasoning” there is a very simple technique to avoid it, proposed ten years ago in this very important paper: H. von Storch, Misuses of statistical analysis in climate research. In H. von Storch and A. Navarra (eds.): Analysis of Climate Variability Applications of Statistical Techniques. Springer Verlag, 11-26, 1995 (http://w3g.gkss.de/staff/storch/pdf/misuses.pdf). This technique is to split the available record into two parts and formulate the null hypothesis based on the first part.
4. Using probabilistic and statistical methods should not be confused with admitting that things “happen spontaneously and randomly” or “without a cause” (again I have quoted rasmus’s response). Rather, it is an efficient way to describe uncertainty and even to make good predictions under uncertainty. Take the simple example of the movement of a die and eventually its outcome. We use probabilistic laws (in this case the Principle of Insufficient Reason or, equivalently, the Principle of Maximum Entropy) to conclude that the probability of a certain outcome is 1/6, because we cannot arrive at a better prediction using a deterministic (causative) model. This is not a denial of causal mechanisms. If we had perfectly measured the position and momentum of the die at a certain moment and the problem at hand was to predict its position one millisecond later, then the causal mechanisms would undoubtedly help us to derive a good prediction. But if the lead time needs to be a few seconds rather than one millisecond (i.e. if we are interested in the eventual outcome), then the causal mechanisms do not help and the probabilistic answers become better. May I add here my opinion that the climate system is perhaps more complex than the movement of a die. And may I support this thesis by noting that statistical thermophysics, which is based on probabilistic considerations, is not at all a denial of causative mechanisms. Here, I must admit that I am ignorant of the detailed structure of GCMs, but I cannot imagine that they are not based on statistical thermophysics.
5. I have difficulties understanding rasmus’s point “A change in the global mean temperature is different to, say, the flow of the Nile, since the former implies a vast shift in heat (energy), and there have to be physical explanations for this.” Is it meant that there should not be physical explanations for the flow of the Nile River? Or is it meant that the changes in this flow do not reflect changes in rainfall or temperature? I used the example of the Nile for three reasons. Firstly, because its basin is huge and its flow manifests an integration of climate over an even more extended area. Secondly, because it is the only case in history for which we have an instrumental record of a length of so many centuries (note that the measurements were taken in a solid construction known as the Nilometer), and the record is also validated by historical evidence, which, for example, witnesses that there were long periods with consecutive (or very frequent) droughts and others with much higher water levels. And thirdly, because this record clearly manifests a natural behaviour (it is totally free of anthropogenic influences because it covers a period starting in the 6th century AD).
6. I hope that my points above will not be given a “political” interpretation. The problem I try to address is not related to the political debate about the reduction of CO2 emissions. I simply believe that scientific views have to be as correct and sincere as possible; I also believe that the more correct and sincere these views are, the more powerful and influential they will be.
[Response:Thank you Demetris! I think this discussion is a very good one and it is important to look at the different sides. I do think you make some very valid points. For one thing, I agree that the climate system is complex; however, I think that although we do not have a ‘perfect knowledge’ (whatever one means by this term, if one chooses to be philosophical…) about our climate, we still have sufficient knowledge to make climate models and make certain statements. I am not an ‘anti-statistics’ guy. Statistics is a fascinating field. In fact, most of my current work heavily embraces statistics. But statistics is only so much, and there are, as you say, inappropriate ways and appropriate ways to apply statistics. In addition, I argue that you need the physical insight (theory). I do not propose that the Nile river levels are not a result of physical processes, but I argue that the physical processes behind the river discharge are different to those behind the global mean temperature, and the displacement of a molecule (Brownian motion) if you like. Because they are affected by different processes, there is no a priori reason to think that they should behave similarly. Yes, they may have some similar characteristics, but the global mean temperature represents the heat content of the surface air averaged over the globe, whereas the Nile river discharge is affected by the precipitation over a large river basin, which again is affected by the transport of humidity, i.e. the trajectories of storms (or whatever cloud systems cause the rainfall). When it comes to using statistical models to derive null-distributions for testing the significance of trends, it should be noted that the actual data have always been subjected to natural variations in the forcing, be it the orbital parameters (Milankovitch) for the proxy data, volcanoes, solar, or anthropogenic (GHGs or landscape changes). I think that only such changes in forcing can produce changes in the global mean temperature, because energy has to be conserved. If you use ARIMA-type models tuned to mimic the past, then the effect of changes in forcing is part of the null-process. For instance, using proxy data that include the past ice ages, the transitions between warm eras and cold glacial periods are part of the null-process. You may say that over the entire record of hundreds of thousands of years you would see little trend, but that’s not really the issue. We are concerned about time scales of decades to a century when we talk about global warming. I would therefore argue that at these time scales there would be a significant trend during the transitional periods between warm interglacial periods and the ice ages. We also know (or think) that there are physical reasons for the ice age cycle (changes in the orbital parameters). Now, the transition between the ice ages and warmer climates is slow compared to the present global warming. Also, we know that the orbital parameters are not responsible for the current warming. One can more or less rule out a solar effect as well, as there is little evidence for any increase in solar activity since 1950. Have to dash now. Thanks for your comments and a happy New Year to you! -rasmus]
Isaac Held says
Re #84 and the response to it:
I do not think we have any foolproof physical intuition as to what the noise level in a global mean temperature time series should be. The claim that only changes in forcing can produce changes in global mean temperature because of “conservation of energy” is clearly not correct. One expects internal variability to create some noise in the balance between incoming and outgoing radiation; additionally, the global mean temperature need not be proportional to the total energy in the ocean-atmosphere system. ENSO produces a global mean temperature response of a few tenths of a degree after all. One can easily imagine that variability in oceanic convection and associated sea ice changes could produce even larger variations on longer time scales. I am impressed with how small the noise level in the global mean temperature generated by GCMs is, and how small it seems to be in reality, but it is not obvious to me why it is that small. The size of this noise level is centrally important, but it would be better to say that this is an emergent property of our models rather than something that we understand intuitively from first principles. We should not overstate our understanding of the underlying physics. Given a fixed strength of the “noise source”, the plausible argument that the resulting noise level is proportional to the climate sensitivity (the externally forced perturbations and naturally occurring fluctuations being restored by the same “restoring forces”) is currently being discussed on another RC thread. So the substantial uncertainty in climate sensitivity should translate into uncertainty in the low frequency noise level for global mean temperature.
With regard to questions of statistical methods, I would only add that analysis of the global mean temperature, however sophisticated one’s methods, is unnecessarily limiting — what we really need are alternative approaches to multi-variate analyses of the full space-time temperature record. A focus on the global mean IS arbitrary.
[Response:Thanks Isaac for your comment. I agree with you that internal variations can produce fluctuations in the global mean temperature, and that if the system is chaotic (which I believe), the magnitude of these variations is determined by the system’s attractor. Hence, there may be internal shifts in heat which subsequently affect the global mean surface temperature estimates, like ENSO (this is a physical reason why the temperature varies). I agree that ENSO can cause temperature fluctuations of a few tenths of a degree Celsius, but the time scale of ENSO (~3-8 years) is too short to explain the recent temperature hike that has taken place over the last 3 decades. However, in order for the global mean temperature to move away from the attractor, I believe you need to change the energy balance as well. I interpret the evidence of past glacial periods as periods when this happened, and would expect an enhanced greenhouse effect to have a similar effect. I think you are absolutely right that full space-time multi-variate analyses are needed to resolve this question. -rasmus]
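To make the ‘restoring force’ point above concrete, here is a minimal, illustrative sketch in Python of a zero-dimensional energy-balance model with a stochastic heat-flux term, C dT/dt = -lambda*T + noise. The parameter values are hypothetical and chosen only to show that the unforced temperature noise grows as the restoring feedback lambda weakens, i.e. as the sensitivity increases.

import numpy as np

def simulate(lam, C=8.0, sigma=0.3, n_years=2000, seed=1):
    """Zero-dimensional EBM: C dT/dt = -lam*T + random heat flux.
    lam: restoring feedback (W m-2 K-1); C: effective heat capacity (W yr m-2 K-1);
    sigma: standard deviation of the stochastic flux (W m-2). Illustrative values only."""
    rng = np.random.default_rng(seed)
    T = np.zeros(n_years)
    for t in range(1, n_years):                # annual time step
        F = rng.normal(scale=sigma)            # internal heat-flux fluctuation
        T[t] = T[t - 1] + (-lam * T[t - 1] + F) / C
    return T

for lam in (2.0, 1.0, 0.5):   # stronger to weaker restoring force (lower to higher sensitivity)
    print(lam, round(simulate(lam).std(), 3))  # the unforced 'noise level' grows as lam shrinks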
Pat Neuman says
Comment #84, response, rasmus wrote: … statistics is only so much, and there are, as you say, inappropriate and appropriate ways to apply statistics. …
People, please review the procedure below, which was used in providing the latest flood advisory for the St. Louis River at Scanlon, Minnesota.
—
Procedure used to make flood advisories at Scanlon, MN
Please go to the web page at:
http://www.crh.noaa.gov/images/ncrfc/data/ahps/SCNM5.traces.gif
The page shows ensemble trace plots for:
St. Louis River at Scanlon, MN
Latitude 47.1 Longitude 92.6
Forecast for the period 12/26/2005 12hr – 3/26/2006 12 hr
Conditional simulation based on current conditions of 12/19/2005
Right side of page shows: [Trace Start Date]
Below the column heading [Trace Start Date] are historical year dates (1948-2002) for the processed precipitation and temperature time series data (P and T data for the basin area, on a six-hourly basis, in units of mm and degrees Celsius).
The processed P and T data were used, along with starting-condition model states (snow water equivalent, soil moisture, and frozen ground indexes on 12/19/2005), to generate the 55 conditional flow traces shown in the plot.
The conditional flow traces at Scanlon show that most traces with large values (peaks greater than 6500 cubic feet per second before the ending date of 3/26/2006) were based on P and T input time series from later years (1975-2002) of the historical period (1948-2002).
I think that the 90-day trace plots at Scanlon indicate that the seasonal warmth producing snowmelt came earlier in the year during the more recent period (1975-2002) than during the older period (1948-1974) used in generating the conditional flow traces. The conditional flow traces are being used operationally to provide exceedance probabilities for maximum river flow and stage at Scanlon for the forecast period (12/26/2005 12hr – 3/26/2006 12hr).
http://www.crh.noaa.gov/images/ncrfc/data/ahps/SCNM5.exc.90day.gif
To find the probabilities of exceedance of maximum flow and stage at other river gage sites in the Upper Midwest:
click: New Probabilistic Products now Operational
at: http://www.crh.noaa.gov/ncrfc/
and proceed by clicking the small circles on the maps of the Upper Midwest.
—
People, do you think the procedure used above is an appropriate way to apply statistics in making flood advisories for rivers, to be used by agencies and the public interested in potential river conditions for the 90-day period ending March 26, 2006?
Please post your questions or comments to realclimate.org.
https://www.realclimate.org/index.php?p=228
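For readers unfamiliar with how exceedance probabilities are derived from conditional ensemble traces like those above, here is a minimal sketch in Python of the final step. The peak flows are randomly generated stand-ins rather than actual Scanlon data, and the operational AHPS procedure is of course more elaborate than this.

import numpy as np

# Stand-in peak flows (cfs) for 55 conditional traces, one per historical
# precipitation/temperature input year (1948-2002), all started from the same
# current model states (snow water equivalent, soil moisture, frozen ground).
rng = np.random.default_rng(42)
peak_flows = rng.gamma(shape=3.0, scale=1500.0, size=55)

def exceedance_probability(peaks, threshold):
    """Fraction of ensemble traces whose seasonal peak exceeds the threshold."""
    return float(np.mean(peaks > threshold))

print(exceedance_probability(peak_flows, 6500.0))   # e.g. chance of a peak above 6500 cfs

# A simple exceedance curve: sort peaks high-to-low and assign Weibull plotting positions
sorted_peaks = np.sort(peak_flows)[::-1]
prob = np.arange(1, sorted_peaks.size + 1) / (sorted_peaks.size + 1)
for p, q in zip(prob[:5], sorted_peaks[:5]):
    print(f"P(exceed) ~ {p:.2f}  peak ~ {q:.0f} cfs")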
wayne davidson says
About the complexities of GT (global temperature) stemming from its inherently chaotic behaviour: I don’t believe it is so chaotic; it can be read at a local level. I am basing this opinion on my own work predicting GTs, specifically for the Northern Hemisphere. I consider a local surface air temperature measurement to be the result of the influx of air coming from everywhere else, and therefore a GT measurement of sorts. This would mean that ENSO is not strictly a regional phenomenon spreading its influence on a global basis; rather, ENSO is itself a result of weather characteristics from everywhere. The proof is in the pudding: having predicted NH GTs accurately by looking at vertical sun disk measurements, which are influenced by the very thick near-horizon atmosphere, I was looking at multiple influxes of air giving a net average sun disk size, a summing up of advection and Hadley circulation in a simplified way. I have done the same measurements in polar and temperate climate zones and found the same expanding sun-disk-size trend. It is therefore possible to measure GT trends by analysing what extra-regional air masses are doing to the one you are living in. The same rule applies for the hurricane region: the current concept of a regional cycle, with hurricane activity depending on what phase of this alleged cycle is on, is an incorrect interpretation; regional isolationism of weather does not exist on a meso scale.
Willis Eschenbach says
Re 84:
Rasmus, you say that the physical processes behind the Nile river discharge and the global mean temperature are different. In fact, while the physical processes behind the two are different, the physical laws governing them may be the same. Constructal theory, which has been much in the news lately, explains widely disparate physical systems (heat loss by animals, flying speed of birds, the formation of drainage systems, heat transport, and many more). See “The constructal law of organization in nature: tree-shaped flows and body size”, at http://jeb.biologists.org/cgi/content/full/208/9/1677, for a good discussion of constructal theory. It has wide, and largely unrealized, potential application in climate science.
w.
[Response:Thanks Willis. I think that these ideas are interesting and that there may be something in them. But these ideas apply more to biological matters, don’t they? Although all processes are based on the same fundamental physics principles, there are also important differences between rainfall over a given region and Brownian motion on the one hand, and a planet’s mean surface temperature on the other. The latter is constrained by some restoring effect, such as increased or reduced heat loss when it makes an excursion away from its equilibrium, while I think it’s hard to find such restoring effects in the former.
On another note, it occurs to me that people may think that I have my lines of logic crossed somewhere: on the one hand I argue that we do have substantial knowledge about our climate system (which is not the same as saying we know everything or have a ‘perfect’ understanding!), whereas some disagree and say we do not really know that much (I think the view on this depends a bit on your expectations). On the other hand, I also argue that we do not know the real null-distribution that is required for testing trends, since the past observations are affected by (natural) changes in the forcing. In that sense, I argue that we do not have perfect knowledge. Then there is the argument that the kind of structure constructal theory predicts should be valid for most processes, or that ARIMA-type models are representative of the null-process; that implies that we do know a lot about the climate system. I think we have a substantial body of knowledge about our system, but there are also many things we do not know. When it comes to the original question about determining the significance of trends, I think the cleanest way to carry out the test is to use a GCM in experiments with and without prescribed variations in the forcings. I think that part of the issue is also how one defines a ‘trend’ in this case: is it the long-term rate of change over the entire history (e.g. the temperature change over millions of years), or is it a systematic rate of change caused by changes in the forcing (i.e. a response; it may for instance be the systematic change in temperature in the transitions between glacial and warm periods)? I have used the latter interpretation in this discussion because I think it is more relevant to the present question. Anyway, I think this is a good discussion, and there are probably people who disagree with me(?). -rasmus]
Michael Jankowski says
RE #85 Comment: “I agree that ENSO can cause temperature fluctuations of a few tenths of a degree Celsius, but the time scale of ENSO (~3-8 years) is too short to explain the recent temperature hike that has taken place over the last 3 decades.”
Then how about the PDO phase shift circa 1976-1977?
Kenneth Blumenfeld says
Re: #86:
Pat, I would guess (without offending anyone, hopefully), that your question is more within the domain of hydrometeorology than climate science per se.
Judging by the breakthroughs in short-term flood products, I tend to have (possibly blind) faith in NWS flood estimates. Seasonal flood products (looking out 30-90 days) are less reliable than short-term ones. Consider, as one example, the loss in power when going from a flash flood warning to a probabilistic 90-day outlook.
Pat Neuman says
re 90.
I’d like to duck general questions for now, and focus instead on trying to make the specific example in 86. more understandable.
Question #1:
Do you think the conditional flow traces at Scanlon, Minnesota (St. Louis R. basin) are an indication that winter (Jan through Mar 26) climate has warmed within the Upper Midwest in recent decades?
See: http://www.crh.noaa.gov/images/ncrfc/data/ahps/SCNM5.traces.gif
Question #2:
Do you think the procedure explained in 86. is an appropriate way to apply statistics in providing agencies and the public with spring flood advisories for rivers in the Upper Midwest?
JohnLopresti says
Re: 25 To: Pat Neuman In re: polar bear evolution
I appreciate the 1600 K-year timeline. I had broad knowledge of the approximate figure from work I did as a protégé of people working in the Martin Almagro group through the German Institute of Archeology, Madrid, albeit a long time ago, and topically mostly Pithecanthropus-oriented.
Fortunately we have a recent report of a dog genome. If there is an interested party considering the Ursus maritimus (polar bear) genome, that might be an energizing study for the purpose of interpreting the impact on polar bears of 130 years of industrial-revolution-sourced climate warming. I will check your paleontology link and others to begin formulating an improved perspective on how to incorporate this into a genetic-work model. Perhaps it is a futile hope at this point, but it seems to me Joule was a creator of concepts in his own time, and there might be a way to characterize the work that evolutionary forces must exert to develop a working genetic lifeform such as the polar bear. This particular lens might be helpful, as well, in quantifying other quantum changes, such as species extinctions. Although several thread contributors are following these matters from a science vantage point, the following are a U. maritimus image from a government link to the biologists overseeing bear counting, and an image from the legal entity organizing a diverse assemblage of smaller and less equipped groups to petition for better population tracking and a kind of EIR.
Kenneth Blumenfeld says
Re: Pat’s recent comments (I note that we are somewhat off-topic):
“Do you think the conditional flow traces at Scanlon, Minnesota (St. Louis R. basin) are an indication that winter (Jan through Mar 26) climate has warmed within the Upper Midwest in recent decades?”
I think they indicate that the water has begun flowing earlier in northeast MN, which would indicate warmer conditions. I would not jump to conclusions about the entire Upper Midwest based on those data alone… though it is probably true.
“Do you think the procedure explained in 86. is an appropriate way to apply statistics in providing agencies and the public with spring flood advisories for rivers in the Upper Midwest?”
I do, because it integrates historical data with current conditions. I believe large short-term hydrologic events do make it into the probabilistic outlooks also. It’s not perfect, but I do think it is a reasonable product.
Pat Neuman says
re 93.
Kenny’s note in 93. said: ‘we are somewhat off-topic’. I think the questions in 90. were on topic for ‘Naturally trendy’. The conditional flow simulation trace plots at Scanlon MN on the St. Louis River (west of Duluth MN), which are shown at:
http://www.crh.noaa.gov/images/ncrfc/data/ahps/SCNM5.traces.gif
do not appear ‘Naturally trendy’, but they do appear ‘trendy’, which I think is an indication that some regional winter climate elements have warmed in part of the Upper Midwest in recent decades.
The traces at Scanlon, which were based on processed historical precipitation and temperature data from the 1948-2002 records, are part of a larger body of evidence that many hydrologic climate elements have warmed within the Upper Midwest since the late 1970s, showing unnatural warming trends.
Other Upper Midwest winter climate trends showing warming in recent decades can be viewed in my 2003 article at the Minnesotans For Sustainability website, titled Earlier in the Year Snowmelt Runoff for Rivers in Minnesota, Wisconsin and Minnesota; see Figure 1 of the article at:
http://www.mnforsustain.org/climate_snowmelt_dewpoints_minnesota_neuman_table_figure1.htm
In the article, I showed that the timing of the beginning of spring snowmelt runoff shifted to 2-4 weeks earlier in the year after the late 1970s, compared to the timing from 1900 to the late 1970s. The Julian days for the beginning of spring snowmelt runoff are shown in Figure 1 for three river stations located within the Upper Midwest, including:
Red River at Fargo ND
St Croix River at Wisconsin/Minnesota border
St. Louis River at Scanlon near Duluth MN
Although I understand that things can’t be perfect, I believe that professional hydrologists should try to adjust for inadequacies in modeling procedures and forecasts, which currently do not take into account the large amount of evidence that hydrologic climate warming has been happening in the Upper Midwest. As noted in 86, the conditional flow traces are being used operationally to provide exceedance probabilities for maximum river flow and stage at Scanlon for the forecast period (12/26/2005 12hr – 3/26/2006 12hr).
http://www.crh.noaa.gov/images/ncrfc/data/ahps/SCNM5.exc.90day.gif
Even more importantly, more recent data indicate that rainfall has become more frequent in winter months for parts of the Upper Midwest, especially in central and southern parts of the region (Illinois, Iowa and southern Wisconsin). In January 2005, major near-record flooding occurred on rivers in the Illinois River basin, due mainly to December and January rainfall.
I think it would be helpful for RC moderators to comment on this, especially on the note by Kenny that we are somewhat off-topic with this discussion.
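The shift in snowmelt-onset timing described in the comment above can be quantified with a simple before/after comparison. Below is a minimal sketch in Python using made-up Julian days rather than the actual Figure 1 data; it only illustrates the kind of test one might apply to a single station record.

import numpy as np
from scipy import stats

# Made-up Julian days for the beginning of spring snowmelt runoff at one station
rng = np.random.default_rng(7)
early = rng.normal(loc=90, scale=10, size=79)   # 1900 to the late 1970s
late = rng.normal(loc=72, scale=10, size=24)    # late 1970s to 2002, a few weeks earlier

shift = early.mean() - late.mean()
t_stat, p_value = stats.ttest_ind(early, late, equal_var=False)   # Welch's t-test
print(f"mean shift: {shift:.1f} days earlier; p = {p_value:.3g}")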
Hank Roberts says
For any time series, figuring out what statistic would be appropriate and running the test might contribute to discussing whether there’s a trend. Here’s one page of tools (picked at random from a Google search; I’ve found such pages from time to time online):
http://members.aol.com/johnp71/javastat.html
The biggest surprises and the lessons I carry with me from graduate statistics in the 1970s were:
1) Ask a statistician before collecting the data, not afterward, if you want to collect useful data.
2) It takes appallingly more samples over a short period, or collection of data over a far longer period, than the naive grad student would imagine, before you have collected enough data for meaningful statistical analysis.
Lots of what’s online is just numbers and graphs but not statistical evaluations. Yes, the curves are scary ….
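The second lesson can be made concrete: with serial correlation, a common rough adjustment is an effective sample size of n_eff = n*(1 - r1)/(1 + r1), where r1 is the lag-1 autocorrelation, so a persistent series contains far fewer independent observations than its length suggests. A minimal sketch in Python with synthetic series:

import numpy as np

def lag1_autocorr(x):
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x * x)

def effective_sample_size(x):
    """Rough adjustment for serial correlation: n_eff = n*(1 - r1)/(1 + r1)."""
    r1 = lag1_autocorr(x)
    return len(x) * (1 - r1) / (1 + r1)

rng = np.random.default_rng(3)
white = rng.normal(size=100)               # independent values
red = np.zeros(100)
for t in range(1, 100):                    # persistent ('red') series, phi = 0.8
    red[t] = 0.8 * red[t - 1] + rng.normal()

print(round(effective_sample_size(white)))   # close to 100
print(round(effective_sample_size(red)))     # far fewer effectively independent points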
Pat Neuman says
re 93. 94. 95. Thank you all for your comments.
I’m thinking the links and description I provided above on the
probabilistic products may be difficult to understand for people who are unfamiliar with the Advanced Hydrologic Prediction Service (AHPS).
General description of the U.S. operational AHPS probability service can be viewed at: http://www.weather.gov/ahps/about/about.php
I think the additional operational trace plots below in Michigan and Minnesota may help in developing an improved understanding of the system.
AREA MAP: U.P. MICHIGAN
http://www.crh.noaa.gov/ncrfc/ahps/esp_maps/map/zoomin_fcst_18_m10000.php
Ontonagon River at Rockland, MI
http://www.crh.noaa.gov/images/ncrfc/data/ahps/RKLM4.traces.gif
Sturgeon R. at Alston, MI
http://www.crh.noaa.gov/images/ncrfc/data/ahps/ALSM4.traces.gif
AREA MAP: MINNESOTA
http://www.crh.noaa.gov/ncrfc/ahps/esp_maps/map/zoomin_fcst_6_m10000.php
Mississippi R. at Aitkin, MN
http://www.crh.noaa.gov/images/ncrfc/data/ahps/ATKM5.traces.gif
St. Louis R. at Scanlon, MN
http://www.crh.noaa.gov/images/ncrfc/data/ahps/SCNM5.traces.gif
Willis Eschenbach says
Re 88, Rasmus, thanks for your comment. Regarding constructal theory, you say that these ideas apply more to biological matters.
The amazing thing about constructal theory is that it applies to any system with flow, whether biological or physical, organic or inorganic. For example, see AH Reis, A. Bejan, “Constructal theory of global circulation and climate”; Journal of Geophysical Research Atmospheres
w.
Pat Neuman says
A reply to John Lopresti in comment 92.
Tracking the Great Bear: Mystery Bears
By Jim Halfpenny, Ph.D.
Website: Bears and Other Top Predators Magazine
http://www.cryptozoology.com/articles/mysterybears.php
http://groups.yahoo.com/group/Paleontology_and_Climate/message/13646
Stephen Berg says
Re: #89, “RE #85 Comment: ‘I agree that ENSO can cause temperature fluctuations of a few tenths of a degree Celsius, but the time scale of ENSO (~3-8 years) is too short to explain the recent temperature hike that has taken place over the last 3 decades.’
Then how about the PDO phase shift circa 1976-1977?”
http://www.atmos.washington.edu/~mantua/REPORTS/egec_pdo.pdf
Compare the graphs shown in the above article with the global temperature anomaly graph (i.e. the Hockey Stick).
Pat Neuman says
Stephen, thank you for the link on PDO in your comment (99).
I recently downloaded annual temperature data for stations with monthly and annual temperature data in Alaska (1950-2005, some 1930-2005). I used Excel software to create annual temperature time plots for the stations that have good quality data (few missing daily max/mins). It’s helpful to look at figure 2 of your link on PDO as I’m evaluating the rate of surface warming at the Alaska stations. PDO has had some influence on the overall rising trend in air temperatures at some of the stations, but not much influence compared to the pronounced warming trend coincident with rising concentrations of GHGs in the atmosphere.
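As a footnote on this kind of single-station analysis: the warming rate can be estimated from annual means with an ordinary least-squares fit. A minimal sketch in Python with made-up values rather than the actual Alaska station data:

import numpy as np

# Made-up annual mean temperatures (deg C) for a single station, 1950-2005
years = np.arange(1950, 2006)
rng = np.random.default_rng(11)
temps = -3.0 + 0.03 * (years - 1950) + rng.normal(scale=0.8, size=years.size)

slope, intercept = np.polyfit(years, temps, 1)   # least-squares trend, deg C per year
print(f"trend: {slope * 10:.2f} deg C per decade")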