#63 Icarus62,
The “quantity absorbed by the natural world” is a function of atmospheric concentrations, not emissions. Initially, a very large cut in emissions would cause a yearly decrease in concentrations. A stop to emissions isn’t required, but emissions do add up over time as well. The larger the remaining emissions, the slower concentrations would decrease and the quicker they would start to increase again (even without taking into account the uncertain contributions from “the warming world”).
As to “how easy it’s going to be”, in my opinion “a huge effort” is a massive understatement. The forecast for the CO2 atmospheric concentrations is clearly: up, up, up.
#23 Mike Roberts,
(re-posting since I seem to be having an issue with reCAPTCHA and “you’ve already said that”)
Your link is currently down for maintenance but you’re evidently comparing very different things.
The Hansen paper you reference states “Only 60 percent of the equilibrium response is achieved in a century.” (not 40 years). But that’s the response to a constant forcing, not a GHG emission pulse. In order to achieve a constant forcing you’d need either continuing (but diminishing) emissions or an unexpectedly strong carbon cycle feedback.
And going by the press release, the Ricke and Caldeira I assume your link is pointing to is about the effect of an emission pulse with an initially quickly declining atmospheric fraction, not continuing emissions causing a constant forcing. Details and assumptions might be “incompatible” with Hansen’s paper but at first glance the papers do not seem obviously contradictory, only concerned about different things.
For what it’s worth, I would tend to take a dim view of the string of papers which have been trying to put an encouraging political spin on the issue by reframing it in an unrealistic manner. Outside of policy fiction, people do not decide on short emission pulses. Instead, long-lived infrastructure is being developed and somewhat less long-lived energy-consuming products are being marketed. While their construction and production might be considered as causing an emission pulse, their useful lives are typically expected to cause continuing emissions over decades (with even longer-lasting impacts on future choices). A constant forcing is therefore arguably a better (but clearly far from ideal) analogy to the impact of the choices being made today.
I just published a long list of absorption coefficients and band schemes on my web site. If you don’t know what that means, ignore this, but if, like me, you’re really into radiative transfer and climate studies, enjoy:
Victor, it’s rather ironic that you started this whole schlemozzle off with bashing folks for alleged ‘confirmation bias,’ and now you are convinced that your Mark I Eyeball has definitively, positively proved ‘no trend’ and standard statistical procedure be damned.
There is a reason for standards, as I’ve said before, and it is to avoid fooling yourself.
Icarus @32.
There is a reasonably useful carbon model (actually a series of them) called the Bern carbon cycle, which given a CO2 impulse estimates the atmospheric concentration as a function of time. There are both fast terms, and slow terms. And in their model almost 14% of the CO2 stays forever.
Long story short, if we made an instantaneous 50% cut, shortly afterwards the concentration would appear to stabilize, but then, as the faster CO2 reservoirs saturate, the rate of absorption would decrease and you would see the concentration begin to climb again (although more slowly than before). In the long term, the only way to stabilize or reduce the concentration is a virtually 100% cutback.
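The behavior described above can be sketched with the Bern-style impulse-response fit. The coefficients below are the Bern2.5CC values quoted in IPCC AR4 (Table 2.14); treating this simple sum of exponentials as valid for all times and pulse sizes is an assumption, not a property of the full model:

```python
import numpy as np

# Bern2.5CC impulse-response fit (IPCC AR4 Table 2.14 coefficients).
# The infinite time constant is the ~22% of a pulse that, on these
# timescales, effectively "stays forever".
A = [0.217, 0.259, 0.338, 0.186]      # airborne fractions
TAU = [np.inf, 172.9, 18.51, 1.186]   # e-folding times in years

def airborne_fraction(t):
    """Fraction of a CO2 emission pulse still airborne after t years."""
    return sum(a * np.exp(-t / tau) for a, tau in zip(A, TAU))

for t in (0, 10, 100, 1000):
    print(f"after {t:>4} years: {float(airborne_fraction(t)):.3f} of the pulse remains")
```

Note how the fast terms decay within decades while the constant term never does, which is why a 50% cut only buys a temporary plateau.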
The climate response function is a different calculation. Carbon dioxide concentration is doubled instantaneously and then is held that way while the climate responds. To keep the carbon dioxide concentration constant requires ongoing emissions. That is different from looking at the warming from a pulse of carbon dioxide emission which then is partly absorbed back out of the atmosphere. It should be noted too that the pulse calculation does not really get at the effects of, say, the emissions of one particular year, since subsequent emissions will drive carbon dioxide out of the atmosphere at a higher rate than in a simple pulse calculation.
I have actually tried your fitting function T(x) = A + Bx + D sin(E + Fx). It doesn’t work. The residuals are larger than the errors of the data. The form T(x) = A + Bx + Cx^2 + D sin(E + Fx) is the best form I could find. Maybe there is a better representation. A Fourier transform is simply another representation of the data; you cannot use it for a forecast. Linear trend calculations also lead to unacceptable residuals. Please download the file KlimaGlobal.zip from my website and run the program KlimaGlobal.exe. The software is for a Windows platform; texts are in English.
[Response: NO offense, but I don’t recommend anyone downloading .exe files from people they do not know.–eric]
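For readers who would rather not run an unknown .exe, a fit of the same functional form can be sketched in a few lines of Python. The synthetic data, parameter values, and starting guesses below are purely illustrative assumptions, not Paul Berberich’s data or results:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, A, B, C, D, E, F):
    # T(x) = A + B*x + C*x^2 + D*sin(E + F*x)
    return A + B * x + C * x**2 + D * np.sin(E + F * x)

# Illustrative synthetic "temperature anomaly" series, NOT real data
rng = np.random.default_rng(0)
x = np.arange(1850, 2015, dtype=float) - 1850.0
true = (-0.4, 0.002, 2e-5, 0.1, 1.0, 2 * np.pi / 65)   # ~65-year cycle
y = model(x, *true) + rng.normal(0.0, 0.02, x.size)

# Sinusoid fits have many local minima, so the starting guess matters a lot
p0 = (-0.3, 0.001, 1e-5, 0.05, 0.5, 2 * np.pi / 65)
popt, _ = curve_fit(model, x, y, p0=p0, maxfev=20000)
resid = y - model(x, *popt)
print(f"RMS residual: {resid.std():.4f}")
```

The need for a good starting guess is itself part of MARodger’s point @56: with six free parameters and an iterative search, finding *some* form that fits tells you little about predictive skill.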
Hank Roberts says
>52, 48, 19
Oh, dear. Victor is now pointing to 19 as though it supported his misunderstanding of that blogger’s notion about an old paper.
Sorry, Victor.
When someone shows you how to find the library, that doesn’t entitle you to claim you discovered everything that’s found inside the library.
You have to read the subject, before pontificating on your superior understanding.
Paul Berberich @56.
You demonstrate exactly what I was suggesting @44. You have not developed a model for the functioning of global average temperature using solely 1850-1954 data and then gone on to test it against 1955-2014 data. You have instead repeatedly carried out this process until you happened upon a form of model that fitted. This iterative process does not test your final function, because if that final function had failed to reduce your residuals, you would presumably have kept going systematically until you found one that did. By analogy, you have picked a combination lock, not by deriving the required combination but by searching for it until the lock came undone.
You have thus not demonstrated (and indeed cannot demonstrate) that there is any repeating process in operation within the climate system that determines the average global temperature. So your projection into the future is worthless.
Steve Fish says
Re- Comment by Victor — 5 Dec 2014 @ 10:46 PM, ~#53
So, you are unable to provide expert criticism of Burnett et al. No surprise. The comment at #19 is whacking your mole and messing up your troll.
#50 Meow sez: “The statistical definition of “trend” is meant to help us avoid deceiving ourselves. It is a mostly-objective way of deciding (probabilistically, of course) whether a given change in a random variable is due to chance. If you abandon that definition in favor of eyeballing, you’re no longer practicing statistics, which means you’ve abandoned the best tool we have to make sense of data series.”
Statistics is a valuable tool when dealing with a large body of data — too large to evaluate in any other way. When dealing with a small body of data, statistics is not necessary and can, in fact, be misleading. What you dismiss as “eyeballing” is in fact the basis of all scientific observation. Results that can be evaluated directly by eye are beyond question more reliable than statistical results. And the proper way to deal with possible bias is via replication, NOT shoehorning unnecessary statistics into the mix. Use of statistics might make you look more like a “real scientist”, but that’s another matter.
The most profound insights of relativity theory didn’t require much in the way of statistics. Darwin did very well without it. So did Copernicus, Kepler, Newton, etc.
The dataset in question consists of only 70 points and as a result, the graph in question can very easily be evaluated by eye, without benefit of statistics. You may not be happy about this because you routinely use statistics, and feel comfortable with it, but from an epistemological standpoint when evaluation via direct observation is feasible, it is always the best recourse.
In my recent post I went beyond simple eyeballing, however, to analyze the data in detail, and again, because we’re dealing with only 70 points such an analysis poses no problem. If the statistics tell us something different from what we can clearly see with our eyes, and clearly evaluate via an analysis based on direct observation, then there is something wrong with the statistics — or the method used in evaluating it.
I’m not claiming that climate scientists should abandon statistics in favor of eyeballing — in many cases datasets are simply too large or too complex to evaluate directly and statistics is necessary. And yes, statistical evaluation can be a very sophisticated and meaningful tool, no question. But it is a tool, NOT an oracle, and should be treated as such.
Chuck Hughes says
Enuf said.
Comment by Victor — 3 Dec 2014
I agree with you Vic.
Matthew R Marler says
Here is a paper that I thought might be interesting. I sent a copy to Dr Gavin Schmidt.
Change Points and Temporal Dependence in Reconstructions of Annual Temperature: Did Europe Experience a Little Ice Age?
by Morgan Kelly and Cormac Ó Gráda, University College Dublin
We analyze the timing and extent of Northern European temperature falls during the Little Ice Age, using standard temperature reconstructions. However, we can find little evidence of temporal dependence or structural breaks in European weather before the twentieth century. Instead, European weather between the fifteenth and nineteenth centuries resembles uncorrelated draws from a distribution with a constant mean (although there are occasional decades of markedly lower summer temperature) and variance, with the same behavior holding more tentatively back to the twelfth century. Our results suggest that observed conditions during the Little Ice Age in Northern Europe are consistent with random climate variability. The existing consensus about apparent cold conditions may stem in part from a Slutsky effect, where smoothing data gives the spurious appearance of irregular oscillations when the underlying time series is white noise.
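The Slutsky effect the abstract invokes is easy to reproduce: a moving average of pure white noise acquires strong serial correlation and can look convincingly like irregular oscillations. A minimal sketch (the window length and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.normal(size=500)             # pure white noise: no real signal

window = 11                               # e.g., an 11-year moving average
smooth = np.convolve(noise, np.ones(window) / window, mode="valid")

def lag1_autocorr(x):
    """Lag-1 serial correlation of a series."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

print(f"raw lag-1 autocorrelation:      {lag1_autocorr(noise):.3f}")
print(f"smoothed lag-1 autocorrelation: {lag1_autocorr(smooth):.3f}")
```

The raw series has essentially no memory, while the smoothed series is strongly autocorrelated (theoretically (n-1)/n ≈ 0.91 for an 11-point window), so a plot of it shows slow, plausible-looking "oscillations" that are pure artifact.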
Meow says
OT, raised by Eric in an editor’s comment:
NO offense, but I don’t recommend anyone downloading .exe files from people they do not know.
Good advice. Here are 5 rules for safer computing:
1. Run all network-facing applications (e.g., browser, email client) in limited (unprivileged) user accounts, and deny those accounts access to data other than that which they need to operate.
2. Never run any executable or install any program that lacks a valid digital signature traceable to a known entity or person.
3. (a) As much as possible, avoid running executables or installing programs in administrative (privileged) accounts; (b) As much as possible, avoid running executables or installing programs in any account. Let someone else volunteer to cut herself on the bleeding edge.
4. Scan every executable or installer for malware before running/installing it.
5. Don’t click on attachments. Download and scan them first, follow the rules for executables/installers, and open any survivors in a limited/unprivileged account.
Now, I’m curious. If there is anyone posting or lurking here who actually wants to claim they see in this graph clear evidence of a pattern of steadily increasing lake effect snowfall from 1971 to 2001, will that person kindly raise his or her hand?
No hands raised? Well, maybe no one wants to make a fool of himself. Or — maybe you just need some time to think about it? I’ll be waiting.
sidd says
Prof. Rignot, breaking rank in the Washington Post:
“Eric Rignot of UC-Irvine, suggested that in his view, within 100 to 200 years, one-third of West Antarctica could be gone.
Rignot noted that the scientific community “still balks at this” – particularly the 100-year projection — but said he thinks observational studies are showing that ice sheets can melt at a faster pace than model based projections take into account.”
Wonder if Prof. Box will chime in, I think he would be one of those on the far right in the distribution function for SLR from continental ice sheets, as referenced for example in Jevrejeva (2014) doi:10.1088/1748-9326/9/10/104008 fig 2.
Any ice aficionados care to comment ?
Also, would those haruspices divining trends or lacks thereof from temperature entrails please take it to the latest thread ?
sidd
Meow says
Statistics is a valuable tool when dealing with a large body of data — too large to evaluate in any other way. When dealing with a small body of data, statistics is not necessary and can, in fact, be misleading.
You enter a business in a large city in Nevada, and place a series of wagers on a coin flipped by the house. You bet heads every time, and lose 25 times in a row. Because statistics is “not necessary” to understand this result, you say, “Aw shucks, guess my luck just wasn’t so good”, and go home.
I play the same game, except I use statistics to calculate the probability of losing 25 times in a row. Finding it to be 0.5^25 = 3*10^-8, I call a lawyer to sue the house for cheating.
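The arithmetic here is one line of Python (25 losses and a fair coin, as in the example above):

```python
# Probability of losing 25 consecutive fair coin flips
p_all_losses = 0.5 ** 25
print(f"P(25 straight losses) = {p_all_losses:.2e}")   # ~3e-8, as stated above

# How long a losing streak is already one-in-a-million territory?
p, n = 1.0, 0
while p >= 1e-6:
    n += 1
    p = 0.5 ** n
print(f"a streak of {n} losses already has probability {p:.2e}")
```

The second calculation shows you would not even need the full 25 losses: well before then, "bad luck" stops being a credible explanation.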
What you dismiss as “eyeballing” is in fact the basis of all scientific observation.
Humbug. Eyeballing can be useful for hypothesis generation, which is not at all the same thing as series analysis.
Results that can be evaluated directly by eye are beyond question more reliable than statistical results.
Wow, just wow.
The most profound insights of relativity theory didn’t require much in the way of statistics. Darwin did very well without it. So did Copernicus, Kepler, Newton, etc.
Every tool has its place. If the issue is understanding whether a data series contains a meaningful trend, the only useful techniques are statistical. Depending on context, you can quibble whether to use a frequentist approach or a Bayesian approach, and if the latter, what priors to assume. But proclaiming trends without statistical analysis is meaningless.
…
phil mattheis says
Hey – my wife asked what I’m doing, and I had to say “hold on, somebody’s wrong on the internet…”. She was not impressed, and is off eating the rest of the salmon, while I bother with this.
victor, I’ll raise my hand, but only in farewell as you depart, not at your bidding or in response to a question that requires me to visit americanthinker. “Consider the source” is an essential corollary of a scientific approach, if only to avoid rabbit holes and wasted time.
You ask us to look at a site specializing in smug self-deception, to view a graph apparently available nowhere else, and then you dance a silly two-step that shows you understand none of us will do so. Doesn’t make you smarter, though I suppose it does play well to a subset of lurkers come to watch the show.
I will add my voice to those that find your “Mark I eyeball” seriously deficient in analytic value, especially on a blog that (patiently) offers active updates and translation of real science in a field where math is necessary and complicated.
You’ve been present on the neighboring thread here, and have responded (tangentially at best), but your decision to reject change point analysis in favor of sequential cherry picking only regenerates what most everyone else here would recognize as the escalator at Skeptical Science (find it yourself – we all know how).
The whole point of statistics is to use math to formalize analysis into a standard format, improving the odds that an analytic consensus can be found. Certainly, statistics can be and often is misused. However, your stated intent to trust your lying eyes over scientific method has moved from tired and boring, to stagnant and stultifying. Please stop, or move on elsewhere.
‘Victor’ has become the latest in a sad series of posting id’s to be avoided, unfortunately with an increasing footprint here to the point that avoidance pretty much means staying away entirely.
My wife likes that last idea.
Steve Fish says
Re- Comment by Victor — 6 Dec 2014 @ 2:06 PM, ~#66
You ask- “If there is anyone posting or lurking here who actually wants to claim they see in this graph clear evidence of a pattern of steadily increasing lake effect snowfall from 1971 to 2001, will that person kindly raise his or her hand?”
Your behavior is getting very silly because you propose a straw man in your question. Nobody, including the authors of the study, has proposed or expected a “steady increase.” Your ignorant focus on the regression line has led you astray. Regression is both a useful analysis and data presentation tool. For example, its visualization helps you see a potential inflection in these data. If Burnett et al had not used this statistical tool I would have been concerned. Its use does not suggest that the authors saw a monotonic increase in lake effect snow. Your selection of this straw man highlights your own confirmation bias.
The study demonstrates that snowfall in the Great Lakes region has increased while, outside of the lake-effect area, snowfall has declined slightly, and the authors attribute the difference to increased lake-effect snow with convincing data. They also provide a preliminary analysis of how lake water temperature affects snowfall. The authors also present a new method for isotopic identification of lake effect snow. It is a typical and solid study.
In addition to your straw man, you claim that the 2001 data point is an outlier. No, it is not. You apparently don’t know what an outlier is. If the data had ended in 1971, would you have called this point an outlier? Your “outlier” contention is especially dumb in light of the fact that the study is 14 years old and there have been several further, so-called, “outliers” since.
So, considering problems of confirmation bias, epistemological questions, Ockham’s razor, and basic statistical concepts, you fail. You apparently believe that your naïve opinion trumps expert scientific data and analysis. Yours is amazing postmodernist thinking! Steve
Yeah, I tried email to the editorial contact. No response.
Mike Roberts says
Anonymous Coward, thanks for that. I was beginning to realise that the Hansen and Caldeira graphs were apples and oranges. Both are unrealistic but presumably have some utility.
Victor @66.
Re-read the comment @26. If that is too difficult for you to translate into your trollish tongue, two clicks down here is a graphical representation suitable for even the puerile at heart.
And if anyone is interested, here (two clicks) is Burnett et al (2003) Figure 1 of the Syracuse, NY snowfall with an extra decade’s worth of data added (thus 1915-2012). The lake-effect snowfall trend trends ever onward.
#60 MARodger
I will never find a final function, and there will never be a final climate model.
SecularAnimist says
MARodger wrote: “This foolish troll we have is so pathetic he cannot even keep in mind what has already been told to him.”
Victor knows exactly what has already been told to him. He simply dismisses or ignores it, and posts the same thoroughly debunked falsehoods, distortions, non sequiturs, condescending pretentious claptrap and outright nonsense over and over again.
It’s plain and simple trolling, in bad faith and boring, and why the moderators have not long since consigned Victor’s posts to the Bore Hole is beyond me.
66 Victor said, “If there is anyone posting or lurking here who actually wants to claim they see in this graph clear evidence of a pattern of steadily increasing lake effect snowfall from 1971 to 2001, will that person kindly raise his or her hand?”
I do. It’s called the trend line.
Edward Greisch says
66 Victor: My guess is as follows: Any snow increase would be from the lakes not freezing over. Frozen lake surfaces don’t provide much water to the air. It is snowing less for another reason. The recent snowstorm near Buffalo is nothing out of the ordinary. Olean had a snow decrease because it is now too warm to snow as much. It rains instead. Snow used to start in the 3rd week of September.
Your canard is worse than unbelievable. It fails the giggle test. It really sounds like you are trying to score a quote you can use against the scientists.
Chuck Hughes says
What’s to keep the WAIS from coming apart rather suddenly, say within the next few decades rather than centuries? If a crack formed somewhere in just the right spot why wouldn’t it be like cutting a diamond? Of course that’s a hypothetical but it seems possible given that so many predictions have happened sooner than anticipated.
#65 Meow
Proposals for CO2-saving
I think following ideas for saving CO2-emissions can be realized at once.
1. Computer updates only when the sun is shining or the wind is blowing.
2. My screen shows the current temperature of Munich, where I am living. It could also show the percentage of electrical energy which is produced at the moment by CO2-emitting power plants.
Since Meow was the only one to offer a meaningful response, I’ll start with him (her?):
1. I wouldn’t need statistics to tell me something was very wrong if I lost 25 times in a row on a simple coin toss.
2. “If the issue is understanding whether a data series contains a meaningful trend, the only useful techniques are statistical.” Then what’s the point of graphing your data?
#69, Phil and his wife:
American Thinker was not the source of that graph, but a peer reviewed paper published in a legit journal (Burnett et al http://journals.ametsoc.org/doi/abs/10.1175/1520-0442%282003%29016%3C3535%3AIGLSDT%3E2.0.CO%3B2). If you’d been following the discussion you’d have known that. The rest of your post isn’t worth discussing, especially since you’ve demonstrated very clearly that you don’t know what you’re talking about.
#70 Steve Fish:
“Nobody, including the authors of the study, have proposed or expected a “steady increase.””
As far as the Burnett et al paper in general is concerned, that was not the object of my analysis, only that one graph, with its clearly misleading trend line. And as far as 2001 is concerned, yes that data point is an outlier because it’s clearly NOT a part of any trend, as is clearly evident from simply looking at the graph. Graphs such as this are used all the time by climate scientists to convince both the public and world leaders, most of whom have no idea what linear regression means. In this case, the graph would mean nothing at all if it weren’t for that misleading trend line.
“You apparently believe that your naïve opinion trumps expert scientific data and analysis.” I offered a detailed analysis of that dataset, NOT an opinion. You are the one offering an opinion. And by the way I didn’t notice you raising your hand, Steve. Wise decision.
#73, the indefatigable Mr. Rodger:
Your claim regarding the 1973 – 2000 trend is interesting. Thanks for doing the math. But I’m wondering how steep that trend actually was, since you failed to mention it. My guess is that it was minimal. You said your statistics applied to every possible pairing but you seem to have ignored 1971-2000. Was that an oversight? I’d be very curious to see that particular trend line. The graphs you’ve linked to are irrelevant. My analysis pertained only to the graph critiqued by Rayne. What happened in Syracuse is beside the point. I wasn’t critiquing their paper or their premise, only that one graph. For someone so pompously preaching scientific righteousness, you are remarkably careless regarding the simplest observations.
#77 Jim Larsen raised his hand. He’s not afraid to play the fool. Good for him! Neither am I.
#78 At the risk of sounding tedious: my critique was of the graph and its misleading trend line, NOT the paper as a whole. [edit – less name calling please]
sidd says
Prof. Steig writes:
“Even if the WAIS is actually in irreversible decline (which I have my doubts about) … ”
Do tell.
[Response: I’ll write something on this in the next month or so. Short answer is that I think the role of variability in climate forcing of ice sheets is under-estimated by many glaciologists. For a preview of where I’m coming from on this, see Dutrieux et al, 2014, in Science –eric
Chuck Hughes says
Thanks Eric. I guess that was a dumb question but I keep running across the word “collapse” which makes me think of a ceiling collapsing or some other structure. I was thinking more in terms of snow piling up on a tin roof and then suddenly the structure collapses when enough weight is applied or maybe something similar to a pebble hitting a windshield in just the right spot, causing it to crack or shatter. I’m not implying that all the ice would suddenly melt but that it would just break into smaller pieces and slide off. Knowing the Physics of ice and the process of melting would help I’m sure. Of course being a layman/novice etc. puts me at a disadvantage.
Also, whenever I hear or read about how much the oceans will rise it’s always based on a 100 year timescale and nobody goes much beyond that. Then I hear or read numbers like 60′ without it being put in any sort of time frame that corresponds to a particular level. I understand the 3-9′ by the end of this century but then others claim it will be much less and still others claim it will be much more than that by the end of the century. I won’t be around anyway but the entire SLR concept is foggy. The IPCC predictions are supposedly “conservative estimates” and that adds even more mystery. Is there a straightforward mathematical formula for figuring this out based on CO2 levels and Global Average Temperature and other known factors? Thanks.
While I’m not saying those sources are more reliable than NOAA GSOD, any large dataset can have errors. Can you suggest a way to validate the GSOD reports of snow?
> I wouldn’t need statistics to tell me something was very wrong if I lost 25 times in a row on a simple coin toss.
You are deceiving yourself. Pray tell, how do you know then? You didn’t explain *how* you know something is wrong. Your technique is simply knowing that you are multiplying .5 many times to get a very small number. How would you prove your case without explaining it that way? Just because you wouldn’t come up with the exact number doesn’t mean you aren’t doing the exact same thing, just poorly.
> From Burnett et al: “The lake-effect sites (Fig. 3a) reveal an overall statistically significant increasing trend since 1931 (P value = 0.0000).” “Overall increasing trend” sounds like steady increase to me.
Well, it’s not. “Overall” is not synonymous with “steady.” So you’re simply not understanding what the paper is saying. Your refusal to get into the standard language of science and math is causing you to misinterpret what is being said. It is causing confusion that could be avoided if you would just step up and take the time to become educated on the language and tools used by science professionals.
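The difference between “overall” and “steady” is easy to demonstrate with synthetic data: a series can carry a highly significant upward trend while still declining in nearly half of its individual years. A sketch (the slope, noise level, and years are arbitrary choices, not Burnett et al’s data):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
years = np.arange(1931, 2001)
# Upward trend of 0.5 units/year buried in noisy year-to-year variability
snow = 100 + 0.5 * (years - 1931) + rng.normal(0, 8, years.size)

fit = linregress(years, snow)
down_years = int(np.sum(np.diff(snow) < 0))

print(f"slope = {fit.slope:.2f} per year, p = {fit.pvalue:.2g}")
print(f"years with less snow than the year before: {down_years} of {years.size - 1}")
```

The regression finds the overall trend with an extremely small P value even though many individual years buck it, which is exactly why "overall increasing trend" cannot be read as "steady increase".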
Over the years since 2007 I added notes to Stoat’s Why do Science in Antarctica? as published research changed estimates of how fast change could happen. This one really surprised me:
Rapid Sediment Erosion and Drumlin Formation Observed Beneath a Fast-Flowing Antarctic Ice Stream
Smith, A. M.; Murray, T.; Nicholls, K. W.; Makinson, K.; Adalgeirsdottir, G.; Behar, A. E.
American Geophysical Union, Fall Meeting 2005, abstract #C13A-04 http://adsabs.harvard.edu/abs/2005AGUFM.C13A..04S
One citation: Till characteristics, genesis and transport beneath Antarctic paleo-ice streams, 21 JUL 2007,
DOI: 10.1029/2006JF000606
That for me raised significant questions from experience with surface erosion:
I know the erosional force of water carrying silt changes: water, when flowing fast, can carry a large amount of solid material which scours whatever it hits. When the water slows for any reason (at any slow/turbulent point) it drops the material. That increases the flow’s capacity for picking up solid material at the next opportunity. That kind of alternation carves away soil and soft rock very fast.
But then ANDRILL reported the Ross Sea has been open water, off and on, suggesting melt has happened more often across Antarctica than we imagined.
And we got more and more news about water found at the bottom of boreholes between ice and bedrock, and then about lakes and streams flowing under the ice cap, and about the surface elevation changing as water flows under the ice; some of that noted at the Stoat thread.
__________________
So from here on below these are my questions — just my speculation — based on knowing we’ve been surprised by how much (and increasing) water is flowing at the bottom of the icecaps lately.
But I look back at that immensely surprising drumlin study — a geological formation we had thought took centuries to form, discovered forming under the ice in a matter of days. If that can happen, other erosional processes can be happening.
_____________
There’s a thing that happens with erosion on the surface that I suspect would happen under the ice as well.
When liquid water flows through soft material down to a harder surface,
then flows along the hard surface, it carves out a tunnel.
Then the softer material pushes down into the opening — and gets carved away.
First a tunnel forms from water flowing on the harder surface deep under the soft material
Then material from above drops into the tunnel and that also washes away
Once the bottom of the soft material is removed, material slumps down from both sides as well as above.
It’s dramatic in surface erosion — I watched it happen on a forest fire restoration site, when a new road added tens of acres of rain catchment above a little valley that had only five acres’ catchment in the past.
First a very narrow slot appeared.
Then the sides of that fell in, to the angle of repose.
Erosion didn’t stop there because the whole slope on either side lost its support at the ‘toe’ (bottom).
Kind of the way the glaciers move more once the ice shelves are gone.
I’d think that movement will happen underneath the icecap vertically, as well as at the downhill end.
On our little restoration site it was a fast and short term process, over five years, then it stopped for several reasons. During that time both sides of that site, several feet deep soil, slowly slid down into that slot and washed away. The total amount of material washed away was in the end huge — three feet deep but very wide.
All because of a hole a couple of feet in diameter created underneath that material.
It’s a transient: the volume of water in a drainage goes up dramatically, as it does for a few years after a big fire. The erosion was well known but the mechanism wasn’t talked about here (once we understood it, we found some ways to reduce the damage — and plants recovering uphill slowly reduce the intensity of flows).
So I really wonder, won’t this happen with ice as well, with an unusually large amount of melt water flowing underneath it?
Ice will move downhill — and if a stream flowing on the base rock undermines the ice sheet overlying that area, it will sink, then more drainage can enter the ice at that area.
I realize this is a big reach for speculation based only on an analogy from an amateur.
Thanx for the response, Prof. Steig. I had noticed the Dutrieux paper when it appeared, as well of course, your work with Ding and others. I remember thinking that it would take quite a wind regime shift to overcome the Weertman instability. I await your article with anticipation.
sidd
John Atkeison says
Many of you will be interested in the new authoritative and influential climate report for the state of Nebraska. http://bit.ly/UnlCCImplications2014
Steve Fish says
Re- Comment by Victor — 8 Dec 2014 @ 12:46 PM, ~#82
Burnett et al say there is an “overall increasing trend,” which the graph demonstrates quite clearly with or without the trend line. They don’t say “steady increase,” and they obviously wouldn’t even expect it to be steady because this is a small sample and meteorological data are notoriously noisy. The 2001 data point is valid, it is in the direction of the authors’ expert analysis, and it is consistent with the physics. And, the kicker is that all of this is validated by more recent data in the NOAA graph shown in the Holthaus piece you linked to. You didn’t read it!
You did a detailed analysis? Nope! You say that I wisely didn’t raise my hand? You are trying to bait me with troll behavior. You are a science denier who is admittedly ignorant of how science is done, think inexpert opinion can be substituted for expert statistical analysis, and believe that you are qualified to blog on science. Yep. Amazing!
Steve
Aaron Lewis says
In any ice situation, you need to calculate total energy available (including potential energy) against the energy needed to fracture the ice. Ice does not need to melt in place to calve into the ocean or fracture into the debris/water flow.
Watch minute 64 of Chasing Ice by Balog, and remember that we now know that all of the big ice sheets have deep fjords under them. Deep fjords are one way nature moves ice.
1. I wouldn’t need statistics to tell me something was very wrong if I lost 25 times in a row on a simple coin toss.
Your intuition might say that something is wrong, but without basic statistics (in this case, just probability), you don’t know how wrong, and thus you don’t know whether to curse your luck or to call a lawyer.
2. “If the issue is understanding whether a data series contains a meaningful trend, the only useful techniques are statistical.” Then what’s the point of graphing your data?
Graphs let the reader quickly apprehend something *qualitative* about the data. It might be a trend, or it might be any other kind of relationship. In all cases, the author of the paper will tell you what the probability is that the relationship is not spurious. This is usually expressed in terms of a “P value”: the probability that the result is due to chance. By convention in most scientific endeavors, a P value of less than or equal to 0.05 is considered “significant”. (All this is, of course, calculated statistically. See https://en.wikipedia.org/wiki/P_value ).
The relationship between the graphed values also will be described qualitatively, and there will be various caveats to help the reader evaluate the results (e.g., “The voltage fluctuated over the interval [5.0, 5.2] due to unknown reasons, which could have caused…”, “The lab could not procure sufficient Purina Mouse Chow for the entire experiment. At week 7, all groups were switched to Mouseketeers Chow, which means that…”, etc.)
Finally, there are innumerable different kinds of graphical presentations, many of which are very different from the standard scatter-plot.
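To make the P value concrete, here is a minimal sketch (assuming Python with numpy and scipy; the snowfall series is synthetic, standing in for data like Burnett et al’s Fig. 3a — the numbers are illustrative, not the real dataset):

```python
# Computing the P value of a linear trend in a noisy annual series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1931, 2002)
# Synthetic "snowfall": a genuine upward trend buried in heavy noise.
snowfall = 100 + 0.5 * (years - 1931) + rng.normal(0, 15, len(years))

result = stats.linregress(years, snowfall)
print(f"slope = {result.slope:.3f} per year, p = {result.pvalue:.2e}")

# By convention, p <= 0.05 is "significant": the apparent trend is
# unlikely to be an artifact of chance, however noisy it looks by eye.
trend_is_significant = result.pvalue <= 0.05
```

The point is exactly the one made above: eyeballing this series might or might not suggest a trend, but the regression quantifies how improbable the observed slope would be under a no-trend null hypothesis.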
From Burnett et al: “The lake-effect sites (Fig. 3a) reveal an overall statistically significant increasing trend since 1931 (P value = 0.0000).” “Overall increasing trend” sounds like steady increase to me.
No. “Increasing trend” has a specific statistical definition, and that definition is very different from “steady increase”. Reading scientific papers with comprehension requires familiarity with scientific concepts, and one of the rock-bottom scientific concepts is basic statistics. Proceeding without it is flying blind, at night, in solid cloud, with a dead instrument panel and a failed compass.
We cannot rely on the word of Victor the Troll to assist us in this discussion concerning Burnett et al (2003) figure 3a. With all his blather, perhaps we need to remind ourselves of the issues being discussed.
This point of discussion was brought to this forum by Victor the Troll. His source, he admitted, was a website of dubious repute. The Troll describes it thus:-
“I know about that site, and strongly disagree with just about everything I’ve seen there, including most of their “denier” accusations. Yet Rayne has some very meaningful points to make and makes them very clearly.”
So not the best place to begin. The Troll then quotes allegedly from Rayne but the quote is not actually that of Rayne but rather a quote from an earlier article that Rayne is attacking and copied by Rayne. Rayne’s ‘very clearly made points’ are not presented.
The difference between this earlier article and Rayne’s arguments are that the earlier article presents the whole of Burnett et al Figure 3 and points out the difference between the two graphs. This is also the actual point made by Burnett et al (2003) –
“Results reveal a statistically significant increasing trend in snowfall for the lake-effect sites, whereas no trend is observed in the non-lake-effect settings.”
Rayne, on the other hand, considers only the top graph (Figure 3a) and accuses Burnett et al (2003) of not demonstrating a single trend 1931-2001 because “since 1970 there appears to be no significant upward trend in the data.” Rayne scaled the graph and confirms this to be true. So have I, with the same result. But with such noisy data, is the absence of statistical significance over the shorter period any surprise? Note that the central regression value remains positive throughout, whatever start date is chosen for analysis.
Rayne, by ignoring the second graph (Fig. 3b), has thus set up a flimsy straw man of questionable merit. It is this straw man that Victor the Troll parades before us.
Now, while there is a less robust trend 1970-2001 in the graphed data in Fig. 3a, the data in Fig. 3b rather show that most of this has less to do with a reversal of lake-effect snowfall 1971-83 and more to do with a period of decreasing snowiness generally. The apparent reversal is not lake-effect but good old weather. The first graph I presented @73 now sports a yellow trace of the rolling 10-year averages of Fig. 3a minus Fig. 3b, which shows the decade of reversal has pretty much disappeared.
Of course, such graphical representations will not satisfy the Troll, who apparently is only interested in the original artwork. Either that, or perhaps when the Troll said @82 “The graphs you’ve linked to are irrelevant,” he failed to recognise the Fig. 3a data being presented. But then anything is possible with Victor the Troll. He says of me “You said your statistics applied to every possible pairing.” Did I? I think I should put this invention down to wishful thinking (ie denialism) by the Troll, just as he is determined to expunge the 2001 data point from Fig. 3a because it messes up his comforting fantasies. It seems it was too snowy in 2001, so that whole year can be classified as an “outlier” and expunged from the analysis, apparently.
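For anyone wanting to reproduce that kind of rolling-average differencing, a sketch (assuming Python with numpy; the two series here are synthetic stand-ins for the digitized Fig. 3a and Fig. 3b data, not the real values):

```python
import numpy as np

def rolling_mean(x, window=10):
    # Trailing rolling average via convolution with a flat kernel.
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# Hypothetical stand-ins for the lake-effect and non-lake-effect series.
rng = np.random.default_rng(2)
years = np.arange(1931, 2002)
lake = 100 + 0.5 * (years - 1931) + rng.normal(0, 15, years.size)
non_lake = 80 + rng.normal(0, 15, years.size)

# Differencing removes region-wide weather common to both series,
# leaving the lake-effect component, which is then smoothed.
smoothed_diff = rolling_mean(lake - non_lake)
print(smoothed_diff.size, "smoothed values")
```

The design choice is the one described above: subtract first to cancel shared weather variability, then smooth, so that a regional snowy or lean decade does not masquerade as a change in lake effect.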
The first hourly CO2 readings of 400+ppm for the season have been taken at MLO by Scripps Inst. and ESRL are showing a daily reading for 7 December of 400.46ppm, although that could be subject to revision.
So should we expect a weekly 400+ppm level before the end of the year? Will January 2015 be a 400+ppm month?
Whether “overall increasing trend” means the same as “steady increase” is beside the point. There is no overall increasing trend in that dataset. The totality of the increase is limited to the first segment, up to 1971. That’s NOT overall, it’s partial. The “overall increasing trend” is an illusion, produced by a misleadingly rising trend line, due to a poorly applied statistical procedure.
Statistics can be a powerful scientific tool, obviously, but ONLY when tempered by critical analysis, supplemented by simple common sense. I’m surprised you guys are so willing to embarrass yourselves by insisting on the blind acceptance of an obviously flawed statistical procedure.
Moving along, I’d like to congratulate The Pompous Mr. Rodger for actually engaging with my argument for a change rather than limiting himself to ad hominem attacks and arguments from authority. Comparing those two graphs is quite interesting, and your attempt to get around the problem you yourself have (graciously?) acknowledged (“Rayne scaled the graph and confirms this to be true. So have I and ditto.”) is ingenious, yes. But once again I must remind you that my critique was based on a claim made for the lake effect graph per se, NOT any other graph or any other aspect of the Burnett et al argument. Comparing this dataset with any other dataset does not change the fact that the former reveals NO overall increasing trend. What you’ve accomplished is a neat bit of statistical legerdemain, granted. But no cigar, sorry.
As for the year 2001. If there truly was a trend from 1971 through 2001, that trend would be apparent already from 1971 through 2000. One single datapoint shouldn’t make a difference in that respect. Of course, it could make a difference as far as the statistics is concerned. Especially if it’s a very high number, which this one is. That’s what I’D call noise, since that one single point is clearly NOT part of an overall trend.
Now as for the word “trend.” Excuse my presumption but as I recall from all my many past lives, that word was part of the English vocabulary long before the advent of statistics. While it’s true that statistical methods enable us to quantify a trend and thus to compare trends with some degree of precision, there are many instances when neither quantification nor comparison are necessary. Back in the Renaissance, me and my buddies could always spot a trend, as, for example, when we got progressively drunker beer after beer. Or when it got progressively warmer from December to June.
Public Release: 9-Dec-2014
Proceedings of the National Academy of Sciences
Abandoned wells can be ‘super-emitters’ of greenhouse gas
Princeton University researchers have uncovered a previously unknown, and possibly substantial, source of the greenhouse gas methane to the Earth’s atmosphere. After testing a sample of abandoned oil and natural gas wells in northwestern Pennsylvania, the researchers found that many of the old wells leaked substantial quantities of methane. Because there are many abandoned wells nationwide, the researchers believe the overall contribution of leaking wells could be significant.
NOAA, National Sciences and Engineering Research Council of Canada, Yale Center for Environmental Law and Policy, Princeton Environmental Institute
Contact: John Sullivan js29@princeton.edu
609-258-4597
Princeton University, Engineering School
Until recently there were no regulations (or no inspection and enforcement) when wells were abandoned, and even now coverage is spotty; what regulation exists relies on local people and agencies, with inconsistent requirements that are easily ignored.
Perhaps the new technology for locating concentrations of greenhouse gases will be usable for locating abandoned wells — remembering that it may be old water wells that are leaking methane, now, as gas moves laterally through aquifers.
The problem won’t be only the oil and gas wells that were abandoned and left unfilled or inadequately closed.
It seems obvious, but it’s worth noting (since there are those who are naive about different disciplines, how to classify them and how they relate and work in the real world) that math is logic. If you want to go beyond the dibby dab of logic you get in Critical Thinking 101 for reading newspapers, especially as it applies to science and statistics, math is a good way to go. It is foundational.
There are other modes of thinking of course. Darwin was a whiz at naturalistic thinking, that is at classification–which is also foundational to science. You can see how quickly people (ok, trolls) wander into weird territory when they have no facility for conceptual organization. They’re the ones who end up relying on specious b.s. in attempts to keep up with more mature conversationalists. (Note the idea that you don’t need to understand any science to effectively tell scientists that they don’t know what they’re doing. Apparently you just need an impression, a gut feeling, questionable debating skills, and a rabid belief in your own innate superiority.)
Russell says
Here’s a refresher for Victor and others who need to catch up on recent developments in postmodern physics, pure, applied, and Geo-.
Anonymous Coward says
#63 Icarus62,
The “quantity absorbed by the natural world” is a function of atmospheric concentrations, not emissions. Initially, a very large cut in emissions would cause a yearly decrease in concentrations. A stop to emissions isn’t required, but emissions do add up over time as well. The larger the remaining emissions, the slower concentrations would decrease and the quicker they would start to increase again (even without taking into account the uncertain contributions from “the warming world”).
As to “how easy it’s going to be”, in my opinion “a huge effort” is a massive understatement. The forecast for the CO2 atmospheric concentrations is clearly: up, up, up.
#23 Mike Roberts,
(re-posting since I seem to be having an issue with reCAPTCHA and “you’ve already said that”)
Your link is currently down for maintenance but you’re evidently comparing very different things.
The Hansen paper you reference states “Only 60 percent of the equilibrium response is achieved in a century.” (not 40 years). But that’s the response to a constant forcing, not a GHG emission pulse. In order to achieve a constant forcing you’d need either continuing (but diminishing) emissions or an unexpectedly strong carbon cycle feedback.
And going by the press release, the Ricke and Caldeira I assume your link is pointing to is about the effect of an emission pulse with an initially quickly declining atmospheric fraction, not continuing emissions causing a constant forcing. Details and assumptions might be “incompatible” with Hansen’s paper but at first glance the papers do not seem obviously contradictory, only concerned about different things.
For what that’s worth, I would tend to take a dim view of the string of papers which have been trying to put an encouraging political spin on the issue by reframing it in an unrealistic manner. Outside of policy-fiction, people do not decide on short emission pulses. Instead, long-lived infrastructure is being developed and somewhat less long-lived energy-consuming products are being marketed. While their construction and production might be considered as causing an emission pulse, their useful lives are typically expected to cause continuing emissions over decades (with even longer-lasting impacts on future choices). A constant forcing is therefore arguably a better (but clearly far from ideal) analogy to the impact of the choices being made today.
Barton Paul Levenson says
I just published a long list of absorption coefficients and band schemes on my web site. If you don’t know what that means, ignore this, but if, like me, you’re really into radiative transfer and climate studies, enjoy:
http://bartonlevenson.com/AbsorptionCoefficients.html
Kevin McKinney says
Victor, it’s rather ironic that you started this whole schlemozzle off with bashing folks for alleged ‘confirmation bias,’ and now you are convinced that your Mark I Eyeball has definitively, positively proved ‘no trend’ and standard statistical procedure be damned.
There is a reason for standards, as I’ve said before, and it is to avoid fooling yourself.
Victor says
#48
See #19
Thomas says
Icarus @32.
There is a reasonably useful carbon model (actually a series of them) called the Bern carbon cycle, which given a CO2 impulse estimates the atmospheric concentration as a function of time. There are both fast terms, and slow terms. And in their model almost 14% of the CO2 stays forever.
Long story short, if we made an instantaneous 50% cut, shortly afterwards the concentration would appear to stabilize, but then, as the faster CO2 reservoirs saturate, the rate of absorption would decrease and you would see the concentration begin to climb again (although more slowly than before). In the long term, the only way to stabilize or reduce the concentration is a virtually 100% cutback.
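For the curious, the pulse-response form of the Bern model is easy to evaluate. A sketch (assuming the Bern2.5CC parameterization quoted in IPCC AR4, whose “forever” fraction is about 22%; Thomas’s “almost 14%” presumably comes from a different fit of the model family):

```python
import math

def bern_fraction(t):
    # Airborne fraction of a CO2 pulse remaining after t years,
    # as a constant plus three decaying exponentials.
    a = [0.217, 0.259, 0.338, 0.186]   # weights; a[0] never decays
    tau = [None, 172.9, 18.51, 1.186]  # e-folding times in years
    return a[0] + sum(a[i] * math.exp(-t / tau[i]) for i in (1, 2, 3))

for t in (0, 10, 100, 1000):
    print(t, round(bern_fraction(t), 3))
```

Roughly a third of a pulse is still airborne after a century, and the constant term is why, as Thomas says, nothing short of near-total cuts stabilizes concentrations in the long run.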
Chris Dudley says
Mike (#23),
The climate response function is a different calculation. Carbon dioxide concentration is doubled instantaneously and then is held that way while the climate responds. To keep the carbon dioxide concentration constant requires ongoing emissions. That is different from looking at the warming from a pulse of carbon dioxide emission which then is partly absorbed back out of the atmosphere. It should be noted too that the pulse calculation does not really get at the effects of, say, the emissions of one particular year, since subsequent emissions will drive carbon dioxide out of the atmosphere at a higher rate than in a simple pulse calculation.
Paul Berberich says
#44 MARodger
I have actually tried your fitting function T(x) = A+Bx+D sin(E+Fx). It doesn’t work. The residuals are larger than the errors of the data. The form T(x) = A+Bx+Cx^2+D sin(E+Fx) is the best form I could find. Maybe there is a better representation. A Fourier transform is simply another representation of the data. You cannot use it for a forecast. Linear trend calculations also lead to unacceptable residuals. Please download the file KlimaGlobal.zip from my website and run the program KlimaGlobal.exe. The software is for a Windows platform; texts are in English.
[Response: NO offense, but I don’t recommend anyone downloading .exe files from people they do not know.–eric]
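For what it’s worth, fits of this form can be reproduced with standard open tools rather than a downloaded .exe. A sketch (assuming Python with numpy and scipy, applied to synthetic data rather than the actual temperature series; all parameter values here are made up for illustration):

```python
# Fitting T(x) = A + B*x + C*x^2 + D*sin(E + F*x) with least squares.
import numpy as np
from scipy.optimize import curve_fit

def model(x, A, B, C, D, E, F):
    return A + B * x + C * x**2 + D * np.sin(E + F * x)

rng = np.random.default_rng(1)
x = np.linspace(0, 160, 161)  # e.g. years since 1850
true = model(x, -0.4, 0.002, 2e-5, 0.1, 0.0, 2 * np.pi / 60)
y = true + rng.normal(0, 0.05, x.size)

# Nonlinear fits like this need decent initial guesses, especially
# for the frequency F: a poor guess converges to a local minimum.
p0 = [0.0, 0.001, 1e-5, 0.1, 0.0, 0.1]
popt, _ = curve_fit(model, x, y, p0=p0, maxfev=20000)
resid = y - model(x, *popt)
print("RMS residual:", resid.std())
```

This also illustrates MARodger’s point @60: with enough free parameters and enough tries at the functional form, small residuals alone say nothing about forecasting skill.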
Hank Roberts says
>52, 48, 19
Oh, dear. Victor is now pointing to 19 as though it supported his misunderstanding of that blogger’s notion about an old paper.
Sorry, Victor.
When someone shows you how to find the library, that doesn’t entitle you to claim you discovered everything that’s found inside the library.
You have to read the subject, before pontificating on your superior understanding.
https://www.google.com/search?q=robert+grumbine+defining+trends
MARodger says
Paul Berberich @56.
You demonstrate exactly what I was suggesting @44. You have not developed a model for the functioning of global average temperature using solely 1850-1954 data and then gone on to test it against 1955-2014 data. You have instead repeatedly carried out this process until you happened upon a form of model that fitted. This iterative process does not test your final function because if that final function had failed to reduce your residuals, you would have presumably kept going systematically until you found one that did. In analogy, you have picked a combination lock, not by deriving the required combination but by searching for it until the lock came undone.
You have thus not demonstrated (and indeed cannot demonstrate) that there is any repeating process in operation within the climate system that determines the average global temperature. So your projection into the future is worthless.
Steve Fish says
Re- Comment by Victor — 5 Dec 2014 @ 10:46 PM, ~#53
So, you are unable to provide expert criticism of Burnett et al. No surprise. The comment at #19 is whacking your mole and messing up your troll.
Victor says
#50 Meow sez: “The statistical definition of “trend” is meant to help us avoid deceiving ourselves. It is a mostly-objective way of deciding (probabilistically, of course) whether a given change in a random variable is due to chance. If you abandon that definition in favor of eyeballing, you’re no longer practicing statistics, which means you’ve abandoned the best tool we have to make sense of data series.”
Statistics is a valuable tool when dealing with a large body of data — too large to evaluate in any other way. When dealing with a small body of data, statistics is not necessary and can, in fact, be misleading. What you dismiss as “eyeballing” is in fact the basis of all scientific observation. Results that can be evaluated directly by eye are beyond question more reliable than statistical results. And the proper way to deal with possible bias is via replication, NOT shoehorning unnecessary statistics into the mix. Use of statistics might make you look more like a “real scientist”, but that’s another matter.
The most profound insights of relativity theory didn’t require much in the way of statistics. Darwin did very well without it. So did Copernicus, Kepler, Newton, etc.
The dataset in question consists of only 70 points and as a result, the graph in question can very easily be evaluated by eye, without benefit of statistics. You may not be happy about this because you routinely use statistics, and feel comfortable with it, but from an epistemological standpoint when evaluation via direct observation is feasible, it is always the best recourse.
In my recent post I went beyond simple eyeballing, however, to analyze the data in detail, and again, because we’re dealing with only 70 points such an analysis poses no problem. If the statistics tell us something different from what we can clearly see with our eyes, and clearly evaluate via an analysis based on direct observation, then there is something wrong with the statistics — or the method used in evaluating it.
I’m not claiming that climate scientists should abandon statistics in favor of eyeballing — in many cases datasets are simply too large or too complex to evaluate directly and statistics is necessary. And yes, statistical evaluation can be a very sophisticated and meaningful tool, no question. But it is a tool, NOT an oracle, and should be treated as such.
Chuck Hughes says
Enuf said.
Comment by Victor — 3 Dec 2014
I agree with you Vic.
Matthew R Marler says
Here is a paper that I thought might be interesting. I sent a copy to Dr Gavin Schmidt.
The Annals of Applied Statistics
2014, Vol. 8, No. 3, 1372–1394
DOI: 10.1214/14-AOAS753
© Institute of Mathematical Statistics, 2014
CHANGE POINTS AND TEMPORAL DEPENDENCE IN
RECONSTRUCTIONS OF ANNUAL TEMPERATURE:
DID EUROPE EXPERIENCE A LITTLE ICE AGE?
BY MORGAN KELLY AND CORMAC Ó GRÁDA
University College Dublin
We analyze the timing and extent of Northern European temperature falls during the Little Ice Age, using standard temperature reconstructions. However, we can find little evidence of temporal dependence for structural breaks in European weather before the twentieth century. Instead, European weather between the fifteenth and nineteenth centuries resembles uncorrelated draws from a distribution with a constant mean (although there are occasional decades of markedly lower summer temperature) and variance, with the same behavior holding more tentatively back to the twelfth century. Our results suggest that observed conditions during the Little Ice Age in Northern Europe are consistent with random climate variability. The existing consensus about apparent cold conditions may stem in part from a Slutsky effect, where smoothing data gives the spurious appearance of irregular oscillations when the underlying time series is white noise.
Meow says
OT, raised by Eric in an editor’s comment:
Good advice. Here are 5 rules for safer computing:
1. Run all network-facing applications (e.g., browser, email client) in limited (unprivileged) user accounts, and deny those accounts access to data other than that which they need to operate.
2. Never run any executable or install any program that lacks a valid digital signature traceable to a known entity or person.
3. (a) As much as possible, avoid running executables or installing programs in administrative (privileged) accounts; (b) As much as possible, avoid running executables or installing programs in any account. Let someone else volunteer to cut herself on the bleeding edge.
4. Scan every executable or installer for malware before running/installing it.
5. Don’t click on attachments. Download and scan them first, follow the rules for executables/installers, and open any survivors in a limited/unprivileged account.
Victor says
Here’s the graph in question, one more time: http://admin.americanthinker.com/images/bucket/2014-11/193487_5_.png
Now, I’m curious. If there is anyone posting or lurking here who actually wants to claim they see in this graph clear evidence of a pattern of steadily increasing lake effect snowfall from 1971 to 2001, will that person kindly raise his or her hand?
No hands raised? Well, maybe no one wants to make a fool of himself. Or — maybe you just need some time to think about it? I’ll be waiting.
sidd says
Prof. Rignot, breaking rank in the Washington Post:
“Eric Rignot of UC-Irvine, suggested that in his view, within 100 to 200 years, one-third of West Antarctica could be gone.
Rignot noted that the scientific community “still balks at this” – particularly the 100-year projection — but said he thinks observational studies are showing that ice sheets can melt at a faster pace than model based projections take into account.”
Wonder if Prof. Box will chime in. I think he would be one of those on the far right in the distribution function for SLR from continental ice sheets, as referenced for example in Jevrejeva (2014) doi:10.1088/1748-9326/9/10/104008 fig 2.
Any ice aficionados care to comment ?
Also, would those haruspices divining trends or lacks thereof from temperature entrails please take it to the latest thread ?
sidd
Meow says
You enter a business in a large city in Nevada, and place a series of wagers on a coin flipped by the house. You bet heads every time, and lose 25 times in a row. Because statistics is “not necessary” to understand this result, you say, “Aw shucks, guess my luck just wasn’t so good”, and go home.
I play the same game, except I use statistics to calculate the probability of losing 25 times in a row. Finding it to be 0.5^25 = 3*10^-8, I call a lawyer to sue the house for cheating.
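The arithmetic is one line in any language; for instance (Python, just restating the calculation above):

```python
# Probability of losing 25 independent fair coin tosses in a row.
p = 0.5 ** 25
print(p)  # about 3e-08, i.e. roughly 3 in 100 million
```

Intuition says “bad luck”; the number says “call a lawyer.”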
Humbug. Eyeballing can be useful for hypothesis generation, which is not at all the same thing as series analysis.
Wow, just wow.
Every tool has its place. If the issue is understanding whether a data series contains a meaningful trend, the only useful techniques are statistical. Depending on context, you can quibble whether to use a frequentist approach or a Bayesian approach, and if the latter, what priors to assume. But proclaiming trends without statistical analysis is meaningless.
…
phil mattheis says
Hey – my wife asked what I’m doing, and I had to say “hold on, somebody’s wrong on the internet…”. She was not impressed, and is off eating the rest of the salmon, while I bother with this.
victor, I’ll raise my hand, but only in farewell as you depart, not at your bidding or in response to a question that requires me to visit americanthinker. “Consider the source” is an essential corollary of a scientific approach, if only to avoid rabbit holes and wasted time.
You ask us to look at a site specializing in smug self-deception, to view a graph apparently available nowhere else, and then you dance a silly two-step that shows you understand none of us will do so. Doesn’t make you smarter, though I suppose it does play well to a subset of lurkers come to watch the show.
I will add my voice to those that find your “Mark I eyeball” seriously deficient in analytic value, especially on a blog that (patiently) offers active updates and translation of real science in a field where math is necessary and complicated.
You’ve been present on the neighboring thread here, and have responded (tangentially at best), but your decision to reject change point analysis in favor of sequential cherry picking only regenerates what most everyone else here would recognize as the escalator at Skeptical Science (find it yourself – we all know how).
The whole point of statistics is the use of math to formalize analysis into a standard format, improving the odds that an analytic consensus can be found. Certainly, statistics can be and often is misused. However, your stated intent to trust your lying eyes over scientific method has moved from tired and boring, to stagnant and stultifying. Please stop, or move on elsewhere.
‘Victor’ has become the latest in a sad series of posting id’s to be avoided, unfortunately with an increasing footprint here to the point that avoidance pretty much means staying away entirely.
My wife likes that last idea.
Steve Fish says
Re- Comment by Victor — 6 Dec 2014 @ 2:06 PM, ~#66
You ask- “If there is anyone posting or lurking here who actually wants to claim they see in this graph clear evidence of a pattern of steadily increasing lake effect snowfall from 1971 to 2001, will that person kindly raise his or her hand?”
Your behavior is getting very silly because you propose a straw man in your question. Nobody, including the authors of the study, has proposed or expected a “steady increase.” Your ignorant focus on the regression line has led you astray. Regression is both a useful analysis and data presentation tool. For example, its visualization in these data helps you see a potential inflection. If Burnett et al had not used this statistical tool I would have been concerned. Its use does not suggest that the authors saw a monotonic increase in lake-effect snow. Your selection of this straw man highlights your own confirmation bias.
The study demonstrates that snowfall in the Great Lakes region has increased, while outside of the lake-effect area snowfall has declined slightly, and they attribute the difference to increased lake-effect snow with convincing data. They also provide a preliminary analysis of how lake water temperature affects snowfall. The authors also present a new method for isotopic identification of lake-effect snow. It is a typical and solid study.
In addition to your straw man, you claim that the 2001 data point is an outlier. No, it is not. You apparently don’t know what an outlier is. If the data had ended in 1971, would you have called this point an outlier? Your “outlier” contention is especially dumb in light of the fact that the study is 14 years old and there have been several further, so-called “outliers” since.
So, considering problems of confirmation bias, epistemological questions, Ockham’s razor, and basic statistical concepts, you fail. You apparently believe that your naïve opinion trumps expert scientific data and analysis. Yours is amazing postmodernist thinking! Steve
Hank Roberts says
How to confuse people with trend charts, by “Business Insider”
in a decently written article on Antarctic sea ice
http://www.businessinsider.com/antarctic-sea-ice-climate-change-2014-12
someone chose to illustrate a side point with this animated chart, attributed to NASA, showing Arctic sea ice September extent — through 2010.
See how the trend line neatly starts on the first and ends on the last data point? Isn’t that special …
They’re omitting the rather dramatic data points for 2011, 2012, 2013, and 2014, which are easy to find: https://www.google.com/search?q=arctic+sea+ice+september+extent+2014
You can imagine how this is going to confuse people about trends, I bet:
http://video.businessinsider.com/627f23f7-36c8-42ff-b03f-02b72fbbceb6.webm
Yeah, I tried email to the editorial contact. No response.
Mike Roberts says
Anonymous Coward, thanks for that. I was beginning to realise that the Hansen and Caldeira graphs were apples and oranges. Both are unrealistic but presumably have some utility.
MARodger says
[edit]
Victor @66.
Re-read the comment @26. If that is too difficult for you to translate into your trollish tongue, two clicks down here is a graphical representation suitable for even the puerile at heart.
And if anyone is interested, here (two clicks) is Burnett et al (2003) Figure 1 of the Syracuse NY snowfall with an extra decade’s-worth of data added (thus 1915-2012). The lake-effect snowfall trend trends ever onward.
Paul Berberich says
#60 MARodger
I will never find a final function, and there will never be a final climate model.
SecularAnimist says
MARodger wrote: “This foolish troll we have is so pathetic he cannot even keep in mind what has already been told to him.”
Victor knows exactly what has already been told to him. He simply dismisses or ignores it, and posts the same thoroughly debunked falsehoods, distortions, non sequiturs, condescending pretentious claptrap and outright nonsense over and over again.
It’s plain and simple trolling, in bad faith and boring, and why the moderators have not long since consigned Victor’s posts to the Bore Hole is beyond me.
Hank Roberts says
http://qz.com/307271/this-browser-plug-in-will-offer-a-credibility-meter-for-bad-climate-change-reporting/
Jim Larsen says
66 Victor said, “If there is anyone posting or lurking here who actually wants to claim they see in this graph clear evidence of a pattern of steadily increasing lake effect snowfall from 1971 to 2001, will that person kindly raise his or her hand?”
I do. It’s called the trend line.
Edward Greisch says
66 Victor: My guess is as follows: Any snow increase would be from the lakes not freezing over. Frozen lake surfaces don’t provide much water to the air. It is snowing less for another reason. The recent snowstorm near Buffalo is nothing out of the ordinary. Olean had a snow decrease because it is now too warm to snow as much. It rains instead. Snow used to start in the 3rd week of September.
Your canard is worse than unbelievable. It fails the giggle test. It really sounds like you are trying to score a quote you can use against the scientists.
Chuck Hughes says
What’s to keep the WAIS from coming apart rather suddenly, say within the next few decades rather than centuries? If a crack formed somewhere in just the right spot why wouldn’t it be like cutting a diamond? Of course that’s a hypothetical but it seems possible given that so many predictions have happened sooner than anticipated.
http://www.livescience.com/44434-west-antarctica-glaciers-speed-up.html
[Response: Ice is nothing like diamond. It’s a viscous fluid. Even if the WAIS is actually in irreversible decline (which I have my doubts about), we’re still talking about hundreds to thousands of years. Ice can only flow so fast. See Tad Pfeffer’s paper on this from a few years back (http://www.sciencemag.org/content/321/5894/1340.abstract). You might find our RealClimate back and forth with Tad interesting, as well: https://www.realclimate.org/index.php/archives/2008/09/on-straw-men-and-greenland-tad-pfeffer-responds/ –eric]
Paul Berberich says
#65 Meow
Addendum:
6. Warning! Computers waste good electrical energy. Be careful!
Paul Berberich says
#65 Meow
Proposals for CO2-saving
I think the following ideas for saving CO2 emissions can be realized at once.
1. Computer updates only when the sun is shining or the wind is blowing.
2. My screen shows the current temperature of Munich, where I live. It could also show the percentage of electrical energy which is produced at the moment by CO2-emitting power plants.
Victor says
[edit – stick to the issues]
Since Meow was the only one to offer a meaningful response, I’ll start with him (her?):
1. I wouldn’t need statistics to tell me something was very wrong if I lost 25 times in a row on a simple coin toss.
2. “If the issue is understanding whether a data series contains a meaningful trend, the only useful techniques are statistical.” Then what’s the point of graphing your data?
#69, Phil and his wife:
American Thinker was not the source of that graph, but a peer reviewed paper published in a legit journal (Burnett et al http://journals.ametsoc.org/doi/abs/10.1175/1520-0442%282003%29016%3C3535%3AIGLSDT%3E2.0.CO%3B2). If you’d been following the discussion you’d have known that. The rest of your post isn’t worth discussing, especially since you’ve demonstrated very clearly that you don’t know what you’re talking about.
#70 Steve Fish:
“Nobody, including the authors of the study, have proposed or expected a “steady increase.””
From Burnett et al: “The lake-effect sites (Fig. 3a) reveal an overall statistically significant increasing trend since 1931 (P value = 0.0000).” “Overall increasing trend” sounds like steady increase to me. Moreover, as noted by Rayne, this same graph has been used, by Eric Holthaus and others, to argue that “Global Warming Is Probably Boosting Lake-Effect Snows.” (http://www.slate.com/blogs/future_tense/2014/11/19/lake_effect_snow_in_buffalo_climate_change_is_making_snowstorms_more_extreme.html)
As far as the Burnett et al paper in general is concerned, that was not the object of my analysis, only that one graph, with its clearly misleading trend line. And as far as 2001 is concerned, yes that data point is an outlier because it’s clearly NOT a part of any trend, as is clearly evident from simply looking at the graph. Graphs such as this are used all the time by climate scientists to convince both the public and world leaders, most of whom have no idea what linear regression means. In this case, the graph would mean nothing at all if it weren’t for that misleading trend line.
“You apparently believe that your naïve opinion trumps expert scientific data and analysis.” I offered a detailed analysis of that dataset, NOT an opinion. You are the one offering an opinion. And by the way I didn’t notice you raising your hand, Steve. Wise decision.
#73, the indefatigable Mr. Rodger:
Your claim regarding the 1973 – 2000 trend is interesting. Thanks for doing the math. But I’m wondering how steep that trend actually was, since you failed to mention it. My guess is that it was minimal. You said your statistics applied to every possible pairing but you seem to have ignored 1971-2000. Was that an oversight? I’d be very curious to see that particular trend line. The graphs you’ve linked to are irrelevant. My analysis pertained only to the graph critiqued by Rayne. What happened in Syracuse is beside the point. I wasn’t critiquing their paper or their premise, only that one graph. For someone so pompously preaching scientific righteousness, you are remarkably careless regarding the simplest observations.
#77 Jim Larsen raised his hand. He’s not afraid to play the fool. Good for him! Neither am I.
#78 At the risk of sounding tedious: my critique was of the graph and its misleading trend line, NOT the paper as a whole. [edit – less name calling please]
sidd says
Prof. Steig writes:
“Even if the WAIS is actually in irreversible decline (which I have my doubts about) … ”
Do tell.
[Response: I’ll write something on this in the next month or so. Short answer is that I think the role of variability in climate forcing of ice sheets is under-estimated by many glaciologists. For a preview of where I’m coming from on this, see Dutrieux et al, 2014, in Science –eric]
Chuck Hughes says
Thanks Eric. I guess that was a dumb question but I keep running across the word “collapse” which makes me think of a ceiling collapsing or some other structure. I was thinking more in terms of snow piling up on a tin roof and then suddenly the structure collapses when enough weight is applied or maybe something similar to a pebble hitting a windshield in just the right spot, causing it to crack or shatter. I’m not implying that all the ice would suddenly melt but that it would just break into smaller pieces and slide off. Knowing the Physics of ice and the process of melting would help I’m sure. Of course being a layman/novice etc. puts me at a disadvantage.
Also, whenever I hear or read about how much the oceans will rise it’s always based on a 100 year timescale and nobody goes much beyond that. Then I hear or read numbers like 60′ without it being put in any sort of time frame that corresponds to a particular level. I understand the 3-9′ by the end of this century but then others claim it will be much less and still others claim it will be much more than that by the end of the century. I won’t be around anyway but the entire SLR concept is foggy. The IPCC predictions are supposedly “conservative estimates” and that adds even more mystery. Is there a straightforward mathematical formula for figuring this out based on CO2 levels and Global Average Temperature and other known factors? Thanks.
Mike S says
We have been looking at the NOAA Global Summary of the Day dataset (https://data.noaa.gov/dataset/global-surface-summary-of-the-day-gsod) and have found some surprising data. For example, it shows snow on dates such as March 16, 2008 at San Diego International Airport (station number 722900), which sits at sea level on the coast, in spite of a temperature that never went below 46 degrees, and in conflict with sources that say the last snow on the coast in San Diego was in 1967 (e.g., http://en.wikipedia.org/wiki/Climate_of_San_Diego#Snow and http://www.accuweather.com/en/weather-news/snow-in-san-diego-it-happened/58957).
While I’m not saying those sources are more reliable than NOAA GSOD, any large dataset can have errors. Can you suggest a way to validate the GSOD reports of snow?
Any suggestions appreciated.
Thanks,
Mike
Chris Dudley says
Europe is in for more frequent heat extremes: http://www.nytimes.com/2014/12/09/world/europe/global-warming-to-make-european-heat-waves-commonplace-by-2040s-study-finds.html
Unsettled Scientist says
> I wouldn’t need statistics to tell me something was very wrong if I lost 25 times in a row on a simple coin toss.
You are deceiving yourself. Pray tell, how do you know then? You didn’t explain *how* you know something is wrong. Your technique is simply knowing that you are multiplying .5 many times to get a very small number. How would you prove your case without explaining it that way? Just because you wouldn’t come up with the exact number doesn’t mean you aren’t doing the exact same thing, just poorly.
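To make the arithmetic in question concrete, here is the whole calculation Unsettled Scientist is describing, as a quick Python sketch (the variable names are mine):

```python
# Probability of losing 25 independent tosses of a fair coin in a row:
# just 0.5 multiplied by itself 25 times.
p_streak = 0.5 ** 25
print(p_streak)  # about 3e-08, i.e. roughly 1 in 33.6 million
```

Intuiting "something is very wrong" and computing `0.5 ** 25` are the same reasoning; the computation just tells you *how* wrong.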
> From Burnett et al: “The lake-effect sites (Fig. 3a) reveal an overall statistically significant increasing trend since 1931 (P value = 0.0000).” “Overall increasing trend” sounds like steady increase to me.
Well, it’s not. “Overall” is not synonymous with “steady.” So you’re simply not understanding what the paper is saying. Your refusal to get into the standard language of science and math is causing you to misinterpret what is being said. It is causing confusion that could be avoided if you would just step up and take the time to become educated on the language and tools used by science professionals.
Hank Roberts says
For Dr. Steig, a question on your followup response inline above:
> we’re still talking about hundreds to thousands of years.
> Ice can only flow so fast. (https://www.realclimate.org/index.php/archives/2014/12/unforced-variations-dec-2014/comment-page-2/#comment-619970)
Over the years since 2007 I added notes to Stoat’s Why do Science in Antarctica? as I saw research publishing changed estimates about how fast change could happen. This one really surprised me:
Rapid Sediment Erosion and Drumlin Formation Observed Beneath a Fast-Flowing Antarctic Ice Stream
Smith, A. M.; Murray, T.; Nicholls, K. W.; Makinson, K.; Adalgeirsdottir, G.; Behar, A. E.
American Geophysical Union, Fall Meeting 2005, abstract #C13A-04
http://adsabs.harvard.edu/abs/2005AGUFM.C13A..04S
One citation: Till characteristics, genesis and transport beneath Antarctic paleo-ice streams, 21 JUL 2007,
DOI: 10.1029/2006JF000606
That for me raised significant questions from experience with surface erosion:
I know the erosional force of water carrying silt changes — water when flowing fast can carry a large amount of solid material which scours whatever it hits. When the water slows for any reason (at any slow/turbulent point) it drops the material. That increases the flow’s capacity for picking up solid material at next opportunity. That kind of alternation carves away soil and soft rock away very fast.
I asked a handful of questions about that which weren’t answerable at the time:
http://scienceblogs.com/stoat/2007/02/05/why-do-science-in-antarctica/#comment-3386
In 2008 the best we knew said that ice flows fast enough to close up any seasonal melt openings: https://www.realclimate.org/index.php/archives/2008/04/moulins-calving-fronts-and-greenland-outlet-glacier-acceleration/langswitch_lang/in#comment-85450
But then ANDRILL reported the Ross Sea has been open water, off and on, suggesting melt has happened more often across Antarctica than we imagined.
And we got more and more news about water found at the bottom of boreholes between ice and bedrock, and then about lakes and streams flowing under the ice cap, and about the surface elevation changing as water flows under the ice; some of that noted at the Stoat thread.
__________________
So from here on below these are my questions — just my speculation — based on knowing we’ve been surprised by how much (and increasing) water is flowing at the bottom of the icecaps lately.
But I look back at that immensely surprising drumlin study — a geological formation we had thought took centuries to form, discovered forming under the ice in a matter of days. If that can happen, other erosional processes can be happening.
_____________
There’s a thing that happens with erosion on the surface that I suspect would happen under the ice as well.
When liquid water flows through soft material down to a harder surface,
then flows along the hard surface, it carves out a tunnel.
Then the softer material pushes down into the opening — and gets carved away.
When this happens in soil, it’s what the Australians call a “tunnel gully”:
https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcR6EhO5QwfGWqmGm28O27boByjRVHNEgguOys2rhs0eMrvoZIRb_Q
http://ecan.govt.nz/advice/your-business/farming/Pages/tunnel-gully-erosion-control.aspx
First a tunnel forms from water flowing on the harder surface deep under the soft material
Then material from above drops into the tunnel and that also washes away
Once the bottom of the soft material is removed, material slumps down from both sides as well as above.
It’s dramatic in surface erosion — I watched it happen on a forest fire restoration site, when a new road added tens of acres of rain catchment above a little valley that had only five acres’ catchment in the past.
First a very narrow slot appeared.
Then the sides of that fell in, to the angle of repose.
Erosion didn’t stop there because the whole slope on either side lost its support at the ‘toe’ (bottom).
Kind of the way the glaciers move more once the ice shelves are gone.
I’d think that movement will happen underneath the icecap vertically, as well as at the downhill end.
On our little restoration site it was a fast and short term process, over five years, then it stopped for several reasons. During that time both sides of that site, several feet deep soil, slowly slid down into that slot and washed away. The total amount of material washed away was in the end huge — three feet deep but very wide.
All because of a hole a couple of feet in diameter created underneath that material.
It’s a transient, when the volume of water in a drainage goes up dramatically as it does for a few years after a big fire. The erosion was well known but the mechanism wasn’t talked about here (once we knew we found some ways to reduce the damage — and plants recovering uphill slowly reduce the intensity of flows).
So I really wonder, won’t this happen with ice as well, with an unusually large amount of melt water flowing underneath it?
Ice will move downhill — and if a stream flowing on the base rock undermines the ice sheet overlying that area, it will sink, then more drainage can enter the ice at that area.
I realize this is a big reach for speculation based only on an analogy from an amateur.
Hank Roberts says
Hm, I think my earlier questions are being addressed in the literature:
http://rspa.royalsocietypublishing.org/content/470/2171/20140340
Subglacial swamps
T. M. Kyrke-Smith, A. C. Fowler
DOI: 10.1098/rspa.2014.0340, published 3 September 2014
sidd says
Thanx for the response, Prof. Steig. I had noticed the Dutrieux paper when it appeared, as well of course, your work with Ding and others. I remember thinking that it would take quite a wind regime shift to overcome the Weertman instability. I await your article with anticipation.
sidd
John Atkeison says
Many of you will be interested in the new authoritative and influential climate report for the state of Nebraska. http://bit.ly/UnlCCImplications2014
Steve Fish says
Re- Comment by Victor — 8 Dec 2014 @ 12:46 PM, ~#82
Burnett et al say there is an “overall increasing trend,” which the graph demonstrates quite clearly with or without the trend line. They don’t say “steady increase,” and they obviously wouldn’t even expect it to be steady because this is a small sample and meteorological data are notoriously noisy. The 2001 data point is valid, it is in the direction of the authors’ expert analysis, and is consistent with the physics. And, the kicker is that all of this is validated by more recent data in the NOAA graph shown in the Holthaus piece you linked to. You didn’t read it!
You did a detailed analysis? Nope! You say that I wisely didn’t raise my hand? You are trying to bait me with troll behavior. You are a science denier who is admittedly ignorant of how science is done, think inexpert opinion can be substituted for expert statistical analysis, and believe that you are qualified to blog on science. Yep. Amazing!
Steve
Aaron Lewis says
In any ice situation, you need to calculate total energy available (including potential energy) against the energy needed to fracture the ice. Ice does not need to melt in place to calve into the ocean or fracture into the debris/water flow.
Watch minute 64 of Chasing Ice by Balog, and remember that we now know that all of the big ice sheets have deep fjords under them. Deep fjords are one way nature moves ice.
Meow says
@8 Dec 2014 @ 12:46 PM:
Your intuition might say that something is wrong, but without basic statistics (in this case, just probability), you don’t know how wrong, and thus you don’t know whether to curse your luck or to call a lawyer.
Graphs let the reader quickly apprehend something *qualitative* about the data. It might be a trend, or it might be any other kind of relationship. In all cases, the author of the paper will tell you how likely it is that the relationship is spurious. This is usually expressed as a “P value”: the probability of obtaining a result at least as extreme as the one observed if only chance were at work. By convention in most scientific endeavors, a P value of less than or equal to 0.05 is considered “significant”. (All this is, of course, calculated statistically. See https://en.wikipedia.org/wiki/P_value .)
The relationship between the graphed values also will be described qualitatively, and there will be various caveats to help the reader evaluate the results (e.g., “The voltage fluctuated over the interval [5.0, 5.2] due to unknown reasons, which could have caused…”, “The lab could not procure sufficient Purina Mouse Chow for the entire experiment. At week 7, all groups were switched to Mouseketeers Chow, which means that…”, etc.)
Finally, there are innumerable different kinds of graphical presentations, many of which are very different from the standard scatter-plot.
No. “Increasing trend” has a specific statistical definition, and that definition is very different from “steady increase”. Reading scientific papers with comprehension requires familiarity with scientific concepts, and one of the rock-bottom scientific concepts is basic statistics. Proceeding without it is flying blind, at night, in solid cloud, with a dead instrument panel and a failed compass.
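The distinction Meow draws can be demonstrated in a few lines: a series can carry a statistically significant increasing trend while being anything but steady. A minimal sketch in plain Python with invented data; `ols_slope` and `trend_p_value` are helpers written for this illustration, not from any paper:

```python
import random

random.seed(0)

def ols_slope(y):
    """Ordinary least-squares slope of y against its index."""
    n = len(y)
    mx = (n - 1) / 2
    my = sum(y) / n
    sxy = sum((i - mx) * (v - my) for i, v in enumerate(y))
    sxx = sum((i - mx) ** 2 for i in range(n))
    return sxy / sxx

def trend_p_value(y, n_perm=10_000):
    """One-sided permutation test: the fraction of random shufflings
    of y whose fitted slope is at least as large as the observed one."""
    observed = ols_slope(y)
    hits = sum(ols_slope(random.sample(y, len(y))) >= observed
               for _ in range(n_perm))
    return hits / n_perm

# A noisy series with a genuine upward drift but nothing "steady" about it.
series = [0.5 * i + random.gauss(0, 4) for i in range(40)]

slope = ols_slope(series)    # clearly positive despite the noise
p = trend_p_value(series)    # small: the trend is very unlikely to be chance
print(slope, p)
```

Set the drift to zero and the p-value drifts toward 0.5, which is exactly the sense in which "significant increasing trend" differs from "goes up every year."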
MARodger says
We cannot rely on the word of Victor the Troll to assist us in this discussion concerning Burnett et al (2003) figure 3a. With all his blather, perhaps we need to remind ourselves of the issues being discussed.
This point of discussion was brought to this forum by Victor the Troll. His source he admitted was a website of dubious repute. The Troll describes it thus:-
So not the best place to begin. The Troll then quotes, allegedly from Rayne, but the quote is not actually Rayne’s own words; it is copied by Rayne from an earlier article that Rayne is attacking. Rayne’s ‘very clearly made points’ are not presented.
The difference between this earlier article and Rayne’s arguments is that the earlier article presents the whole of Burnett et al Figure 3 and points out the difference between the two graphs. This is also the actual point made by Burnett et al (2003) –
Rayne, on the other hand, considers only the top graph (Figure 3a) and accuses Burnett et al (2003) of not demonstrating a single trend 1931-2001 because ” since 1970 there appears to be no significant upward trend in the data.” Rayne scaled the graph and confirms this to be true. So have I and ditto. But being such noisy data, is the absence of statistical significance any surprise? Note, the central regression value remains positive throughout whatever start date is chosen for analysis.
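MARodger’s start-date exercise is easy to reproduce in spirit. A toy sketch (invented numbers standing in for a noisy 1931-2001 record, not Burnett et al’s actual data; `ols_slope` is a local helper) showing how the fitted slope is recomputed as the start year moves forward, with shorter windows giving noisier estimates:

```python
import random

random.seed(1)

def ols_slope(y):
    """Ordinary least-squares slope of y against its index."""
    n = len(y)
    mx = (n - 1) / 2
    my = sum(y) / n
    sxy = sum((i - mx) * (v - my) for i, v in enumerate(y))
    sxx = sum((i - mx) ** 2 for i in range(n))
    return sxy / sxx

# Invented stand-in: a modest upward drift buried in heavy year-to-year noise.
years = list(range(1931, 2002))
snow = [100 + 0.4 * (y - 1931) + random.gauss(0, 10) for y in years]

# Central slope estimate of the remaining record for each candidate start year.
for start in (1931, 1951, 1971):
    tail = snow[years.index(start):]
    print(start, round(ols_slope(tail), 3))
```

With fewer points the slope estimate wobbles more and statistical significance is harder to reach, which is why the absence of a significant post-1970 trend in such noisy data is, as MARodger says, no surprise.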
Rayne, by ignoring the second graph, Fig 3b, has thus set up a flimsy straw man of questionable merit. It is this straw man that Victor the Troll parades before us.
Now, while there is a less robust trend 1970-2001 in the graphed data in fig3a, the data in fig3b rather shows that most of this has less to do with a reversal of lake-effect snowfall 1971-83 and more to do with a period of decreasing snowiness generally. The apparent reversal is not lake-effect but good old weather. The first graph I presented @73 now sports a yellow trace of the rolling 10-year averages, fig3a-minus-fig3b, which shows the decade of reversal has pretty much disappeared.
Of course, such graphical representations will not satisfy the Troll, who is apparently only interested in the original artwork. Either that, or perhaps when the Troll said @82 “The graphs you’ve linked to are irrelevant,” he failed to recognise the fig3a data being presented. But then anything is possible with Victor the Troll. He says of me “You said your statistics applied to every possible pairing.” Did I? I think I should put this invention down to wishful thinking (ie denialism) by the Troll, just as he is determined to expunge the 2001 data point from fig3a because it messes up his comforting fantasies. It seems it was too snowy in 2001, so that whole year can be classified as an “outlier” and expunged from the analysis, apparently.
MARodger says
The first hourly CO2 readings of 400+ppm for the season have been taken at MLO by Scripps Inst., and ESRL is showing a daily reading for 7 December of 400.46ppm, although that could be subject to revision.
So should we expect a weekly 400+ppm level before the end of the year? Will January 2015 be a 400+ppm month?
Victor says
Re Steve Fish et al:
Whether “overall increasing trend” means the same as “steady increase” is beside the point. There is no overall increasing trend in that dataset. The totality of the increase is limited to the first segment, up to 1971. That’s NOT overall, it’s partial. The “overall increasing trend” is an illusion, produced by a misleadingly rising trend line, due to a poorly applied statistical procedure.
Statistics can be a powerful scientific tool, obviously, but ONLY when tempered by critical analysis, supplemented by simple common sense. I’m surprised you guys are so willing to embarrass yourselves by insisting on the blind acceptance of an obviously flawed statistical procedure.
Moving along, I’d like to congratulate The Pompous Mr. Rodger for actually engaging with my argument for a change rather than limiting himself to ad hominem attacks and arguments from authority. Comparing those two graphs is quite interesting, and your attempt to get around the problem you yourself have (graciously?) acknowledged (“Rayne scaled the graph and confirms this to be true. So have I and ditto.”) is ingenious, yes. But once again I must remind you that my critique was based on a claim made for the lake effect graph per se, NOT any other graph or any other aspect of the Burnett et al argument. Comparing this dataset with any other dataset does not change the fact that the former reveals NO overall increasing trend. What you’ve accomplished is a neat bit of statistical legerdemain, granted. But no cigar, sorry.
As for the year 2001. If there truly was a trend from 1971 through 2001, that trend would be apparent already from 1971 through 2000. One single datapoint shouldn’t make a difference in that respect. Of course, it could make a difference as far as the statistics is concerned. Especially if it’s a very high number, which this one is. That’s what I’D call noise, since that one single point is clearly NOT part of an overall trend.
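For what it’s worth, a single endpoint really can exert outsized leverage on a fitted slope in a short series, which is precisely why eyeballing is no substitute for the formal test. A toy illustration (invented numbers; `ols_slope` is a local helper, not from any paper):

```python
def ols_slope(y):
    """Ordinary least-squares slope of y against its index."""
    n = len(y)
    mx = (n - 1) / 2
    my = sum(y) / n
    sxy = sum((i - mx) * (v - my) for i, v in enumerate(y))
    sxx = sum((i - mx) ** 2 for i in range(n))
    return sxy / sxx

base = [0, 1, -1, 2, 0, 1, -2, 1, 0, 2]  # short, noisy, near-flat series
with_spike = base + [10]                 # same series plus one big final value

print(ols_slope(base))        # small slope (5/82.5, about 0.06)
print(ols_slope(with_spike))  # 53/110, about 0.48 -- roughly eight times larger
```

Whether such a point is "noise" to be discarded or part of the signal is exactly the question that statistics, rather than intuition, is meant to settle.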
Now as for the word “trend.” Excuse my presumption, but as I recall from all my many past lives, that word was part of the English vocabulary long before the advent of statistics. While it’s true that statistical methods enable us to quantify a trend and thus to compare trends with some degree of precision, there are many instances when neither quantification nor comparison is necessary. Back in the Renaissance, me and my buddies could always spot a trend, as, for example, when we got progressively drunker beer after beer. Or when it got progressively warmer from December to June.
Hank Roberts says
http://www.scientificamerican.com/article/climate-science-predictions-prove-too-conservative/
Hank Roberts says
From the “well, Duh!” desk:
Until recently, and only spottily now, there have been no regulations (or no inspection and enforcement) when wells were abandoned, and what there is relies on local people and agencies, with inconsistent requirements easily ignored.
Perhaps the new technology for locating concentrations of greenhouse gases will be usable for locating abandoned wells — remembering that it may be old water wells that are leaking methane, now, as gas moves laterally through aquifers.
The problem won’t be only the oil and gas wells that were abandoned and left unfilled or inadequately closed.
Radge Havers says
FWIW, here’s a fun article on Darwin and statistics:
https://www.sciencenews.org/article/darwin-reluctant-mathematician
And Galileo was no slouch when it came to math:
http://www.math.wichita.edu/history/men/galileo.html
It seems obvious, but it’s worth noting (since there are those who are naive about different disciplines, how to classify them and how they relate and work in the real world) that math is logic. If you want to go beyond the dibby dab of logic you get in Critical Thinking 101 for reading newspapers, especially as it applies to science and statistics, math is a good way to go. It is foundational.
There are other modes of thinking of course. Darwin was a whiz at naturalistic thinking, that is at classification–which is also foundational to science. You can see how quickly people (ok, trolls) wander into weird territory when they have no facility for conceptual organization. They’re the ones who end up relying on specious b.s. in attempts to keep up with more mature conversationalists. (Note the idea that you don’t need to understand any science to effectively tell scientists that they don’t know what they’re doing. Apparently you just need an impression, a gut feeling, questionable debating skills, and a rabid belief in your own innate superiority.)
So it goes.