Schwartz in the news again:
Stephen Schwartz of Brookhaven National Laboratory makes our weekly roundup again this week. This time, it's for a comment/reply in the latest issue of Nature concerning a previously published Nature piece, "Quantifying climate change — too rosy a picture?" by Schwartz et al. In the original piece, Schwartz and co-authors argue that the IPCC Fourth Assessment Report (AR4) presents an overly confident assessment of climate sensitivity and potential future climate change. In the response by Forster et al., a number of IPCC lead authors point out that the Schwartz et al. critique ignores or misinterprets several key IPCC findings.
update: if you don’t have a subscription, the original Schwartz et al Nature article is available here and the recent comment/reply is available here
update #2: It has been pointed out to us that the commentary by Stephen Schwartz and co-authors was published on the Nature Reports Climate Change website, rather than in the print journal Nature.
Chuck Booth says
Is there something missing from this story, such as an RC commentary about the response by Schwartz to the criticism by Forster et al?
Falafulu Fisi says
Forster et al. said:
"A further problem in the analysis of Schwartz et al. is that they assume the uncertainties associated with forcing and with temperature response at year 2000 are linearly related (Schwartz et al. Figure 2). Such a simple relationship does not exist because temperature today is not in equilibrium with current forcing, but instead is influenced by historical forcing."
I noted Schwartz's linear model when I first read his paper. How about this scenario: would it be correct to say that all the values of sensitivity published in the literature so far, including Schwartz's calculation, fall within the range that physics allows, but that perhaps we don't yet understand all the processes involved? Why does it have to be a single value? This is just a bold assertion on my part, based on what I read in the published work of Rossow/Aires on multivariate and nonlinear sensitivities for the analysis of feedback processes, in which they discuss the inappropriateness of using a linear control model. So, if Schwartz's linearity assumption is wrong, then why not adopt the formulation described in the Rossow/Aires paper, where climate dynamical processes are treated according to real-world physics, which is non-linear?
I found the Rossow/Aires work interesting. They treated each climate process as parallel to the others, but I think their work is just the start of adopting non-linear control theory for climate data analysis. It would be interesting if future work in this area discovered feed-forward processes taking place at the same time as the feedback processes. Again, this is just a thought, since both feedback and feed-forward control processes really do occur simultaneously in the real world.
Hank Roberts says
> perhaps we don’t know about these processes yet?
I’ve never seen anyone argue otherwise. Very different processes may happen depending on how much, how fast, and when we stop. Nature always stopped boosting CO2 eventually; outcomes varied greatly.
> Why does it has to be a single value?
It’s the temperature difference measured once the planet is again at equilibrium — a single value, for those who will be alive at the time to do the measurement.
> This is just a bold assertion
Seems a mainstream statement to me, based on a few years’ reading.
> the real world physics
After the first century or so, it’s not just physics and chemistry, it will be studies in biogeochemistry and geomicrobiology.
Walt Bennett says
Since this is the roundup, here is a general question. I have been asked to evaluate Idso 98 with regard to the issue of climate sensitivity to CO2.
I found an inline from Gavin in April which states “Idso’s experiments all involve random temperatures divided by random fluxes – all at the surface and none related to global long term feedbacks – they have absolutely no relevance to climate sensitivity as normally defined. There were whole journal issues devoted to pointing out the flaws in Idso’s arguments (Climatic Change for instance).”
Unfortunately, I cannot seem to track that reference down, even using the site’s search engine.
Can anybody point me to critical examinations of that paper?
Hank Roberts says
After a little reading, just a few examples:
“Oceanographers are currently facing a profound challenge to their understanding of one of the ocean’s most important physical processes, turbulent mixing…. A problem currently puzzling the world’s oceanographers is that independent measurements of the ocean’s temperature profile and the amount of turbulent mixing are not consistent. Measurements find approximately one tenth the mixing necessary to agree with the warm temperatures detected in the ocean’s depth.” http://chowder.ucsd.edu/home/
The Hawaii Ocean-Mixing Experiment (HOME)
Science 303(5655):210-213, 9 January 2004 (PMID 14716008)
Widespread intense turbulent mixing in the Southern Ocean.
Alberto C. Naveira Garabato, Kurt L. Polzin, Brian A. King, Karen J. Heywood, Martin Visbeck
"Observations of internal wave velocity fluctuations show that enhanced turbulent mixing over rough topography in the Southern Ocean is remarkably intense and widespread. Mixing rates exceeding background values by a factor of 10 to 1000 are common above complex bathymetry over a distance of 2000 to 3000 kilometers at depths greater than 500 to 1000 meters. This suggests that turbulent mixing in the Southern Ocean may contribute crucially to driving the upward transport of water closing the ocean's meridional overturning circulation, and thus needs to be represented in numerical simulations of the global ocean circulation and the spreading of biogeochemical tracers." http://lib.bioinfo.pl/pmid:14716008
Hank Roberts says
Walt, try searching at the journal’s page:
http://www.springerlink.com/content/?k=Idso
steven mosher says
Falafulu,
Remember what Peter Stone said:
“In the last five years, we have become aware that climate sensitivity is not a constant, but evolves with time as the climate changes. For example, in a global warming scenario in which sea ice retreats, the ice albedo temperature feedback decreases as the ice retreats, and the temperature sensitivity decreases correspondingly. Thus, the climate sensitivity that one should use in making projections has to be matched to the time scale and scenario for which the projections apply.”
Hank Roberts says
Today’s news:
Friday, September 7, 2007 — Arctic ice cap to melt faster than feared, scientists say
By Sandi Doughton
Seattle Times science reporter
http://nsidc.org/news/press/2007
… a new analysis from Seattle scientists says global warming will accelerate future melting much more than previously expected.
About 40 percent of the floating ice … during the summer will be gone by 2050, says James Overland, an oceanographer at the National Oceanic and Atmospheric Administration’s Pacific Marine Environmental Laboratory. Earlier studies had predicted it would be nearly a century before that much ice vanished.
“This is a major change,” Overland said. “This is actually moving the threshold up.”
The finding, to be published Saturday in Geophysical Research Letters, adds to a growing body of evidence that the ecosystem around the North Pole is rapidly transforming, says Mark Serreze, of the National Snow and Ice Data Center at the University of Colorado. He goes even further than Overland, predicting the Arctic Ocean will be completely ice-free in summer by 2030….
—– end excerpt —–
Mike Alexander says
I've studied Schwartz's J. Geophysical Research paper more thoroughly. He gets a very low sensitivity value of about 0.3 K per W/m^2 (corresponding to about 1.1 C of temperature change from a CO2 doubling). Very crudely, the idea can be conveyed by his assertion of a 2.2 watt per square meter greenhouse forcing over the 20th century and a 0.57 C increase in temperature: 0.57 / 2.2 = 0.26 K per W/m^2, even smaller than 0.3.
He gets the 2.2 watt per sq meter from this figure from IPCC 2001:
http://www.grida.no/climate/ipcc_tar/wg1/fig6-8.htm
I see about 2.0 there. I have detailed CO2, N2O and CH4 data. When I calculate using just CO2, I get only 1.2 watts per square meter for the 20th-century forcing increase. If I add N2O and CH4 using the IPCC 2001 simplified expressions I get 1.6, and if I then tack on the CFC forcings from IPCC 2001 I get 1.9, so the ~2.0 from the figure roughly holds up when I check it.
Schwartz uses a 0.57 C temperature increase for the 20th century. I use the HadCRUT3v data from here:
http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3vgl.txt
Now if I simply read the 1900 and 2000 temperatures off this sheet, I see they differ by 0.53 C. If you look at the CRUTEMv data the difference is 0.55 C, so I can see where the 0.57 might have come from.
But when I pass a trend line (a running 20-year linear regression) through the data and look at the 1900-2000 change, I get about 0.77 C. And if I look at the 1905-2005 difference I get about 0.87 C. Since Schwartz finds a 5-year lag, it makes sense to compare the 1905-2005 temperature change to a 1900-2000 forcing change. Even with this revised temperature change and the 1.9 forcing (instead of 2.0), the sensitivity value comes out at 0.46 (corresponding to a 1.7 degree increase from a CO2 doubling).
Now the IPCC 2001 report (same figure as before) suggests a -0.2 net forcing from solar plus aerosols. With their 2.0 watt/m^2 value for greenhouse gases this gives 1.8 for the total forcing; with my 1.9 value it comes to 1.7 watt/m^2. Even with this reduced forcing and the 0.87 C temperature change I get, a sensitivity of 0.51 is obtained, which corresponds to 1.9 C for a CO2 doubling.
The consensus sensitivity is about 3 C for a CO2 doubling. To get that we need another 0.46 C temperature increase that is unrealized. This is what is considered “in the pipeline”.
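The arithmetic above is easy to lose track of, so here is a minimal Python sketch of it. The forcing and temperature numbers are the commenter's own estimates, and the 3.7 W/m^2 figure for a CO2 doubling is the commonly quoted value; none of this comes from Schwartz's actual code.

# Back-of-envelope sensitivity arithmetic from the comment above.
# Input numbers are the commenter's estimates; 3.7 W/m^2 is the usual
# forcing assigned to a CO2 doubling.

F_2XCO2 = 3.7  # W/m^2 per CO2 doubling

def sensitivity(delta_T, delta_F):
    """Return sensitivity in K per W/m^2 and the implied warming per doubling."""
    s = delta_T / delta_F
    return s, s * F_2XCO2

cases = [
    ("Schwartz-style: 0.57 C over 2.2 W/m^2", 0.57, 2.2),
    ("Trend-based dT, GHG forcing: 0.87 C over 1.9 W/m^2", 0.87, 1.9),
    ("Trend-based dT, net forcing: 0.87 C over 1.7 W/m^2", 0.87, 1.7),
]
for label, dT, dF in cases:
    s, dT2x = sensitivity(dT, dF)
    print(f"{label}: {s:.2f} K/(W/m^2), ~{dT2x:.1f} C per doubling")

—– end sketch —–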
The source of this delayed heating is the effect of the deep ocean. Using a simple model in which I divide the world into three compartments, (1) land plus the atmosphere above land, (2) surface ocean plus the atmosphere above the ocean, and (3) the deep ocean, I can explore the effects of lags on the system.
The land and deep-ocean compartments each have an interchange rate with the surface-ocean compartment. The land compartment has a small heat capacity and so has little impact on system dynamics over a period of decades (which is the time scale of interest, since my trend amounts to a running 20-year smoothing).
Assuming a fairly rapid interchange between land and ocean air masses, the effect of deep-ocean interchange on system dynamics in the multi-decade time frame is not very important. What the deep ocean *does* affect is the extent of the temperature change due to a forcing change. For example, over a period of a few decades, the impact of a step increase in forcing will be about 85-90% of the full effect for a deep-ocean exchange rate consistent with a ~2000-year equilibration time. Increasing the exchange rate to what would produce equilibrium in, say, ~1000 years would reduce the impact to about 80% or so. That is, the *true* sensitivity would be 18-25% greater than the figures above suggest, depending on the importance of deep-water heat exchange.
Applying these corrections to my 0.51 sensitivity value from above, I get 0.6-0.64 for the sensitivity, corresponding to 2.3-2.4 degrees for a CO2 doubling. It's still not 3 C, but if I assume larger deep-ocean interchange rates I can make it bigger. The issue is that this heating "in the pipeline", which reflects deep-ocean effects, isn't going to show up quickly. It might take a couple of centuries to manifest, so is it relevant? Obviously it will be relevant when looking at paleoclimate forcing-versus-response behavior, but is it relevant today?
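For anyone who wants to try the same kind of toy, here is a rough Python sketch of a three-box energy-balance calculation of the sort described above. The heat capacities, exchange coefficients and feedback parameter are illustrative guesses rather than the commenter's actual values, so the exact percentages will come out somewhat differently.

# Toy three-compartment energy balance model: land, surface ocean (each with
# its overlying air), and deep ocean. A step forcing is applied at t=0 and the
# model is integrated forward to see how much of the equilibrium warming is
# realized after a century. All parameter values are illustrative guesses.

dt, years = 0.1, 100          # time step and run length (years)
F = 3.7                       # W/m^2 step forcing (roughly a CO2 doubling)
lam = 1.0                     # W/m^2 per K net radiative feedback (illustrative)

C_land, C_surf, C_deep = 2.0, 15.0, 400.0   # heat capacities, W yr m^-2 K^-1
k_ls = 5.0                    # land <-> surface-ocean exchange coefficient (fast)
k_sd = 0.2                    # surface <-> deep-ocean exchange coefficient (slow)

T_land = T_surf = T_deep = 0.0
for _ in range(int(years / dt)):
    dT_land = (F - lam * T_land - k_ls * (T_land - T_surf)) / C_land
    dT_surf = (F - lam * T_surf + k_ls * (T_land - T_surf)
               - k_sd * (T_surf - T_deep)) / C_surf
    dT_deep = k_sd * (T_surf - T_deep) / C_deep
    T_land += dT_land * dt
    T_surf += dT_surf * dt
    T_deep += dT_deep * dt

T_eq = F / lam                # full equilibrium response once the deep ocean catches up
print(f"surface warming after {years} yr: {T_surf:.2f} C "
      f"({100 * T_surf / T_eq:.0f}% of the equilibrium {T_eq:.1f} C)")

—– end sketch —–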
Aaron Lewis says
The global climate models' failure to predict this summer's Arctic sea ice melt event would seem to confirm Schwartz's statement that we have "a false sense of (the models') predictive capability." The fact that the IPCC inserted caveats, weasel words, and disclaimers into the report does not change the fact that some of those disclaimers (key findings) did not find their way into the Executive Summary. Therefore, journalists, the general public, and executives/decision makers who only read the Executive Summary missed those key findings. The only people who read all of the key findings were those who read the full text of the IPCC reports. Even most lawyers did not get that deep into the fine print.
We need to face the fact that the best models failed, and rerunning suites of failed models (Overland) is not going to give us a good answer. It may give us a better answer, but it will not be a projection that can be used confidently for public policy planning. Moreover, the effects of the Arctic sea ice melt event are likely to affect other parts of the Earth's climate and increase the divergence between the models' projections and reality. We need better models that produce projections of known quality.
David B. Benson says
I have suggested a climate remediation policy as a comment to the “Climate Management 101 – 3. Changes and Times” post (currently the most recent) at
http://climatepolicy.org
Comments about the policy may be left there.
Here I have some climate change questions. We suppose it is possible to safely and securely sequester about 14 billion tonnes of carbon per year from biomass sources. About half of that counter-balances the annual addition to the atmosphere from fossil and deforestation sources.
(1) Is this the correct estimate?
The other half goes into reducing the concentration of carbon dioxide in the atmosphere. In general terms, I assume that the climate begins responding by moving back toward ‘normal’.
(2) But in detail, it is not clear to me, either globally or regionally, just what the response would be. What will it be?
I propose continuing at this rate for 30 years, sequestering 210 billion tonnes of carbon. I believe this is about the correct amount to return the concentration of carbon dioxide in the atmosphere to 315 ppm.
(3) Is this about right? Is the response immediate or somewhat delayed?
Suppose that at this level just enough carbon is sequestered to balance the additions from fossil and deforestation sources, no more.
(4) What is the climatic response for the next 50–150 years?
I think I sorta know, but I’m interested in having a better grasp of the climate issues involved with this scenario. Thank you for your responses, which I shall read with great interest.
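Not an answer to the questions, but a rough Python check of the scenario's arithmetic, using the common conversion of about 2.13 GtC per ppm of atmospheric CO2. It deliberately ignores the ocean and biosphere response, which is exactly what questions (2) through (4) are asking about.

# Back-of-envelope arithmetic for the sequestration scenario above.
# Conversion: ~2.13 GtC of atmospheric carbon per ppm of CO2. Ocean and
# biosphere responses (which would give back some CO2 as the atmosphere is
# drawn down) are ignored here.

GTC_PER_PPM = 2.13

sequestered = 14.0      # GtC per year in the scenario
offsets = 7.0           # GtC per year roughly balancing fossil + deforestation emissions
years = 30

net_drawdown = (sequestered - offsets) * years     # GtC actually removed from the air
ppm_drop = net_drawdown / GTC_PER_PPM              # naive, no carbon-cycle feedback

print(f"net drawdown: {net_drawdown:.0f} GtC over {years} yr "
      f"~ {ppm_drop:.0f} ppm if the ocean did not respond")
# Starting from roughly 383 ppm (2007), ~100 ppm of naive drawdown would
# overshoot the 315 ppm target; in practice the upper ocean would outgas CO2
# as the atmosphere falls, which is why a larger total (the ~270 GtC mentioned
# later in the thread) is needed to actually reach 315 ppm.

—– end sketch —–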
Hank Roberts says
Just a note on a way to avoid building lots more dirty coal plants fast — increase short term energy storage to smooth out the current grid’s performance.
You can’t do this with biodiesel.
Possible storage forms: batteries, flywheels.
Here’s a good review and example:
Energy Policy, Volume 35, Issue 4, April 2007, Pages 2558-2568
doi:10.1016/j.enpol.2006.09.005
"Economics of electric energy storage for energy arbitrage and regulation in New York"
Rahul Walawalkar, Jay Apt, and Rick Mancini
Abstract:
Unlike markets for storable commodities, electricity markets depend on the real-time balance of supply and demand. Although much of the present-day grid operates effectively without storage, cost-effective ways of storing electrical energy can help make the grid more efficient and reliable. We investigate the economics of two emerging electric energy storage (EES) technologies: sodium sulfur batteries and flywheel energy storage systems in New York state’s electricity market. The analysis indicates that there is a strong economic case for EES installations in the New York City region for applications such as energy arbitrage, and that significant opportunities exist throughout New York state for regulation services.
Benefits from deferral of system upgrades may be important in the decision to deploy EES.
Market barriers currently make it difficult for energy-limited EES such as flywheels to receive revenue for voltage regulation. Charging efficiency is more important to the economics of EES in a competitive electricity market than has generally been recognized.
—– end abstract —–
Adrianne says
Schwartz et al. assume that "The challenge of climate change research is to develop confident predictive capability". It would, presumably, be more helpful to first explain the big warming at Spitsbergen in the winter of 1918/19, the most puzzling climate event (L. Bengtsson et al., JoC, 2004) to have happened under the eyes of modern science, only 90 years ago, which definitely also involved "one of the ocean's most important physical processes" (#5). During the Arctic night, only the sea could have started the Arctic warming from 1919 to 1940.
Francis says
I’m a lawyer so a lot of this goes over my head. Nevertheless, isn’t it a little odd that the 1940’s observed global ocean temperature anomaly falls so far outside the 5-95% confidence range? Wouldn’t that suggest that current ocean temperature models need some work?
Thanks,
Adrianne says
That's great that you are a lawyer, because one of the most comprehensive papers related to climate and the oceans is Bernaerts' guide to the 1982 United Nations Convention on the Law of the Sea, written by a lawyer and international consultant. I am sure that it would answer your question better than I could.
Aaron Lewis says
I would suggest that
http://www.guardian.co.uk/international/story/0,,2164776,00.html
and in particular its paragraph 10, indicates that we may still have a false sense of the authority of IPCC AR4.
Lawrence Brown says
Re No. 10: "The global climate models' failure to predict this summer's Arctic sea ice melt event would seem to confirm Schwartz's statement that we have 'a false sense of (the models') predictive capability.'"
I've never seen any claim that GCMs can predict individual annual events, in either the short or the long term. The models, to the best of my knowledge, make general projections of what is likely to happen in the future with given inputs, e.g. a business-as-usual scenario and the resulting greenhouse gas emissions. Expecting the models to say what the Arctic sea ice melt will be in, say, the summer of 2008 is unrealistic.
Walt Bennett says
Re: #6
Hank,
I had zero success searching that site, and I struck out using Google to find other critiques. I find that to be highly unusual for any well established paper.
Is Idso 98 taken seriously in climate circles? Gavin asserted that the paper’s flaws had been well vetted, but I find no such evidence.
Any help at all would be appreciated.
Hank Roberts says
Jeez, Adrienne, this again?? I answered that one the last two times you posted it: there was a documented weather system involved that pushed warm air into that area. Look it up.
ray ladbury says
Re 13 and 14. Spam, spam, spam, spam, spam. Spam, spam, spam, spam. Lovely spam…
Mike Alexander says
I have only been looking at this global warming issue for about eight months, and there is a lot I don't know yet. I find most of the work largely incomprehensible to a layman, which should not be the case for a field where the models are as wrong as they are.
One of the things not addressed very often (at least I haven't seen it) is a characterization of what the global temperature profiles look like. If you plot the annual data you will see that it jumps around quite a bit. For example, I took the HadCRUT3v series of annual average global temperatures and ran a trend line through it. The trend is just a centered running 21-year linear regression up to 1996, with the post-1996 trend obtained from the 1986-2006 regression.
Here is what it looks like
http://my.net-link.net/~malexan/Climate-Model_files/image005.gif
I drew lines around it that encompass 82% of the data. These lines are spaced plus or minus 0.12 degrees from the trend. I note that in about two out of nine years the actual temperature falls outside this band, so we can consider 0.12 C a not atypical variation from year to year.
Now we know from experience that on any given day, the temperature can be as much as 20 C off from the average for that day. If we average the daily temperatures of 1600 locations, this average might be as much as 0.5 C off from its average value [0.5 C = 20 C/sqrt(1600)]. This scaling follows from the central limit theorem: the spread of an average of n independent values falls as 1/sqrt(n). And if we then average 365 of these daily values to obtain an annual average, this value will be within 0.03 C [= 0.5 C/sqrt(365)] of its average value. In other words, the sort of "noise" that comes from random weather effects is much smaller than the 0.12 C variation the temperature plot shows.
These variations reflect real things like (possibly) the solar cycle and ENSO-type phenomena, not just "noise".
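The 1/sqrt(n) arithmetic above is easy to check numerically; here is a small Python sketch. The station count, the 20 C daily spread, and the independence assumption are all taken from the comment, and (as a later reply points out) real stations are spatially correlated, so treat this purely as an illustration of the averaging arithmetic.

import numpy as np

# Numerical check of the averaging argument: the spread of a mean of n
# independent values falls as 1/sqrt(n). Station count, daily spread and the
# independence assumption are taken from the comment and are illustrative only.

rng = np.random.default_rng(0)
n_stations, n_days, daily_sigma = 1600, 365, 20.0

n_trials = 200
annual_means = np.array([
    rng.normal(0.0, daily_sigma, size=(n_days, n_stations)).mean()
    for _ in range(n_trials)
])

predicted = daily_sigma / np.sqrt(n_days * n_stations)
print(f"simulated spread of the annual global mean: {annual_means.std():.3f} C")
print(f"1/sqrt(n) prediction:                       {predicted:.3f} C")

—– end sketch —–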
When we consider that the size of these fluctuations is about one-third of the entire trend change in temperature over the past century, we can see the problem with short-term forecasts. What happens in the near future heavily depends on these fluctuations.
We can also see why the Markov-type analysis that Schwartz uses isn't valid. This analysis attempts to identify system lags by measuring autocorrelation. Autocorrelation is the tendency for a system to follow its past behavior, what one might call the "bandwagon effect". Stock markets are a good example: an up day is more likely to be followed by another up day than not, and the same is true for down days. The explanation is that stock investors act like a herd; when the market goes up, investors get on the "bull" bandwagon, and when it goes down, it's the bears' turn. As a result of this behavior the stock market shows "trendiness", behavior that is variously characterized as corrections/rallies or bear/bull markets depending on size.
Applied to climate, the idea is that above-average temperatures will be followed by more above-average temperatures, because the elevated temperatures will be incorporated into the oceanic lag of the climate system and so will persist. The system will show hysteresis or a "memory effect", like a rubber band which "remembers" its former length and returns to it after it has been stretched. Such hysteresis can be picked up by a Markov-type analysis, which provides an estimate of the characteristic lag (or response time) of the system under consideration. Schwartz does such an analysis and comes up with a lag time of 5 years.
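As I understand Schwartz's approach, the time constant is inferred from the lag autocorrelation of (detrended) temperature. Here is a minimal Python sketch of that general idea applied to synthetic red noise with a known 5-year time constant; it is not a reproduction of Schwartz's data or method details, just an illustration of how a single-lag estimate behaves.

import numpy as np

# Illustration of estimating a time constant from lag-1 autocorrelation, the
# general idea behind the Markov-type analysis discussed above. The series is
# synthetic AR(1) "red noise" built with a known 5-year time constant.

rng = np.random.default_rng(1)

tau_true, dt = 5.0, 1.0                 # years; annual sampling
phi = np.exp(-dt / tau_true)            # AR(1) coefficient implied by tau

n = 150                                 # roughly the length of the instrumental record
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0.0, 0.1)

r1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # lag-1 autocorrelation
tau_est = -dt / np.log(r1)
print(f"lag-1 autocorrelation {r1:.2f} -> estimated time constant {tau_est:.1f} yr "
      f"(true value {tau_true:.0f} yr)")
# Rerunning with different seeds shows the estimate scattering by a year or
# more, one reason a single-number lag from a short record is fragile.

—– end sketch —–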
Analysis of the temperature plot above shows that the trend change in temperature over the 1900-2000 period is +0.78 C, but the total variation is 8.3 C. If we assume that 0.03 C (times 100 years), or 3 C, is weather "noise", this leaves around 4 C of temperature change over the last 100 years to be explained. Only about a fifth of it is due to the trend change that reflects the long-term forcings with which climatologists are concerned. The other 80% reflects cyclical and other types of systematic changes, like ENSO, that might reflect internal system dynamics and not forcings per se.
Schwartz's Markov analysis should be valid if the temperature change he is studying is mostly due to forcings. But if not, then it is meaningless.
So the question becomes: are things like ENSO related to forcings or not? We know that the heat capacity of the atmosphere is small compared to that of the surface ocean. As far as I can tell, the mixing times for both are not that different, so for climatic purposes we can consider them a single system. If during an ENSO-like event the atmospheric temperature is higher, but the temperature of the combined surface ocean and atmosphere is unaffected, then you would see a "spike" in recorded temperature that is unrelated to a forcing. That is, there would be no physical reason for this temperature spike to be "remembered" in future temperatures (i.e. through a system lag). This is because the temperature spike does not reflect a real increase in the energy absorbed by the Earth (which would have to show up in future temperatures because of conservation of energy) but rather a rearrangement of the current energy between surface ocean and atmosphere, with no increase in the total energy inventory of the system (i.e. no associated forcing). Such a rearrangement could then disappear and leave no "memory" at all. The observed lag would then reflect the diluted effect of the portion of the temperature change (about 20%) that was due to actual forcings, as opposed to "rearrangements" like ENSO.
Hank Roberts says
Walt, I recall Gavin’s on vacation; the Contact info in the “about” link, top of page would be a way to leave email to ask for more.
This link is the journal, but you’ll have to get it from interlibrary loan if you’re not a subscriber:
http://www.springerlink.com/content/?mode=allwords&k=climatic+change&sortorder=asc&Publication=Climatic+Change
Steve Bloom says
Re #18: Walt, IIRC the Climatic Change article was considered to be so thorough and definitive that it pretty well ended the discussion (except insofar as Idso continues to try to promote his paper to this day). I also recall looking for a copy at one point and not being able to find a free one. Springer and Elsevier are noted pains in the butt when it comes to that sort of thing.
There are three things you can do: One, if there is a large university near you, you can likely see it there. Two, email Steve Schneider, explain the public-service nature of what you're doing, and ask him for a copy. I'm sure he'll be happy to send you one as long as you promise not to post it on the web. OTOH, there's a chance that, given how old it is, it might be allowable for him to post it on his site, so ask about that too. Three, buy it from Springer at their outrageous price.
Good luck!
Falafulu Fisi says
Here is an interesting paper that I've just come across. I haven't read it yet; I am still awaiting a response from the authors, having requested a copy of their paper yesterday.
“Modelling greenhouse temperature using system identification by means of neural networks”
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V10-4B5C1BD-1&_user=10&_coverDate=01%2F31%2F2004&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=0babfad1f39555419a4b75b990684ce4
Abstract:
An NNARX system is proposed for modelling the internal greenhouse temperature as a function of outside air temperature and humidity, global solar radiation and sky cloudiness. The models show a good performance over a complete year using only two training periods, 1 week in winter and one in September. Finding the balance between the number of neurons in the hidden layer of the NNARX system and the number of iterations for model training is found to play an important role in obtaining this good performance.
Author Keywords: Auto-regressive model; Recurrent neural network; Backpropagation
The algorithm used in the paper quoted above is similar to the ones described in the Rossow/Aires non-linear sensitivity analysis paper, where they mention NARMAX. NARMAX is similar to the NNARX approach described in the abstract above.
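For the curious, here is a minimal Python sketch of the NNARX structure the abstract describes: the next value of a target series is regressed on its own lagged values plus lagged exogenous inputs, with a small neural network as the regressor. The data are synthetic and the variable names, lag counts and layer sizes are invented for illustration; nothing here is taken from the paper itself.

import numpy as np
from sklearn.neural_network import MLPRegressor

# A minimal sketch of the NNARX idea: predict the next value of a target
# series (e.g. greenhouse internal temperature) from its own past values plus
# lagged exogenous inputs (outside temperature, radiation, ...). The data are
# synthetic and only meant to show the lagged-regressor structure.

rng = np.random.default_rng(2)
n = 2000
u1 = np.sin(np.linspace(0, 60, n)) + 0.1 * rng.normal(size=n)   # "outside temperature"
u2 = rng.normal(size=n).cumsum() * 0.01                          # "solar radiation" proxy
y = np.zeros(n)
for t in range(2, n):   # a made-up nonlinear system to learn
    y[t] = 0.7 * y[t - 1] - 0.1 * y[t - 2] + 0.5 * np.tanh(u1[t - 1]) + 0.2 * u2[t - 1]

lags = 2
X = np.column_stack([y[lags - 1:-1], y[lags - 2:-2],      # autoregressive terms
                     u1[lags - 1:-1], u2[lags - 1:-1]])    # exogenous terms
target = y[lags:]

split = int(0.8 * len(target))
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
model.fit(X[:split], target[:split])
print("held-out R^2:", round(model.score(X[split:], target[split:]), 3))

—– end sketch —–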
Tom Fiddaman says
Re 21 The 1600 stations aren’t independent, because many are in proximity to one another. I don’t know what the characteristic scale of weather systems is, but the station correlation alone could invalidate your analysis. – Tom
Daniel C. Goodwin says
Re: 10 & 16
Aaron Lewis's crisp comments are much appreciated. The problem with the 2007 Arctic events, vis-a-vis the models (that is, the ones used in AR4), is not that models are not intended to predict single-year events, but that the biogeochemical and geomicrobiological (as Hank puts it) behavior of the actual physical Earth, in all its formidable complexity, is making the models look ridiculous. Current observations keep falling so far off the bottom of AR4's worst-case scenarios that it won't be long before it's obvious to most that this generation of models, for all its usefulness, requires fundamental re-examination.
Coming from a software engineering background myself: if this were my piece of code, I'd have a strong suspicion by now that there's a major bug somewhere that I haven't tracked down yet.
barry says
With Roger Pielke Snr’s weblog no longer being added to, what’s another good site that vigorously questions AGW theory purely on the science? Surely his wasn’t the only one. (I don’t think climateaudit makes the cut)
Adrianne says
Re #19: Claiming that an extreme rise of temperatures at the edge of the Arctic Ocean during the winter season could have been caused by warm air "pushed into that area" is mere guessing and practically impossible; just read the arctic-warming site.
Aaron Lewis says
RE 17
I do not expect climate models to predict annual events.
However, some of the GCMs did predict rapid collapses of the Arctic sea ice in 4 or 5 decades as an unlikely event. Human nature assumes that if the GCMs can forecast such distant events, they should also be able to predict similar events that are only 1 or 2 decades away. If the GCMs had made a prediction of possible sea ice melt within 10 years of the actual event, I would say they were a great success. I would say they were good if they came within 20 years. However, they missed. The GCMs did not forecast significant ice loss within the next couple of decades.
What is the value of these very expensive GCMs if they cannot inform our decisions?
ray ladbury says
Aaron Lewis, don't forget that the dynamics of melting ice (especially sea ice) have greater uncertainties than do the climate models. Also, GCMs do a very poor job of looking at fluctuations, precisely because they are designed not to. The GCMs have done a pretty good job of doing what they were designed to do: follow energy flows over large scales. Projecting these flows onto regional scales is a fraught proposition, however, as previous posts have discussed. And yes, the Arctic is regional. They are also all we have to guide policy decisions.
So, the thing to do is look at where they were wrong and improve them. Look at where the melting-ice models were wrong, and improve them. And as we make these improvements, we use the current models to implement sensible policies to mitigate the effects we are seeing.
Hank Roberts says
Adrienne, people have been spamming climate websites for years now with these links to what must be dozens of differently-named web pages, which all lead to Mr. Bernaerts's pages claiming World War I and II naval warfare caused global warming.
It’s just blogspamming, advertising his book. There’s no science, no math, just correlation, at the multiple websites:
seaclimate
arctic-warming
oceanclimate
warchangesclimate
1-ocean-1-climate
And you drop this stuff everywhere, always pointing back to those websites, sometimes at great length:
http://climatesci.colorado.edu/2007/01/01/a-breath-of-fresh-air-in-the-media-on-climate-an-interview-and-article-by-andy-revkin/#comment-120688
Sometimes it’s a Mike Kauso posting the same thing, as someone noticed in the above thread
http://www.inkstain.net/fleck/?p=1866#comment-58527
Yes, the big year 1918, the year the Spanish Flu blew up across the world, was a warm year.
Most of the other claims you’re making are quotes from newspapers, and nothing supporting you has been published in the climatology papers. You have no research, no math, no idea how what Bernaerts is claiming could work. It gets really tiresome seeing it flogged.
Consider why. Just repeating the same thing over and over in many people’s blogs may increase your Google Page Rank in searches and perhaps your book sales. It doesn’t help advance the science.
Lynn Vincentnathan says
Although I can't address the technical aspects and deficiencies of the Schwartz et al. commentary, my general gut feeling is that they're right (though perhaps not for the reasons they posit): the IPCC AR4 may be painting too rosy a picture about global warming.
Even Stefan here, who was part of IPCC AR4, suggested in an earlier thread that AR4 may not have given the best picture of sea level rise, because of unquantifiable ice sheet dynamics.
And then there seems to be a general trend in studies of "it's worse than we had earlier thought", regarding such things as: the permafrost (and its ice-locked carbon) in the Arctic being much deeper than we thought, and a lot of the ice-locked methane hydrates or clathrates in the ocean sitting at shallower levels (closer to the warming layer of the ocean) than we thought. Or findings about hydrogen sulfide gas being released during past warmings and killing off life lickety-split.
Or the fact that scientists aren't talking much about the scenarios after 2100, once the warming starts to catch up with the GHGs; I guess that is more because it's just too hard to project with any accuracy at all, especially with all the wild-card positive feedback potential, than because no one cares about future generations.
Compared to what happened 55 and 251 million years ago (and so we know it could happen again, only this time much faster, given the demon-like speed and vigor with which we're pumping GHGs into the atmosphere), the AR4 does seem a bit on the tame side.
Lawrence Brown says
Climate models admittedly have shortcomings, and the scientists and modelers involved will say so. Yet there have been many successes. See "Hansen's 1988 projections",
posted by Gavin on 15 May on RealClimate, for verification of this.
Models are continuously being updated and improved. There is a certain randomness in natural phenomena, and compensating for this is difficult, but possible.
Daniel C. Goodwin says
By now I've developed a mental filter for processing the error bars of GCMs: I take the worst case as my best case, and shorten the interval by (at least) 50%. It's easy to imagine a heuristic like this consistently outperforming the models in terms of predictive accuracy.
This is nothing rigorous, and I don't mean to recommend such a heuristic to anyone. But the steady stream of "much sooner than anyone previously thought possible" climate stories makes an impression on one's thinking after a while, so I've acquired this mostly-unconscious rule of thumb.
Eyal Morag says
The problem with models is that we know they can predict the past, but we can't know whether they can predict the future, partly because of the complexity we do know about, and more because of processes that are hard to model. For example,
sea ice can have an albedo anywhere from 0.05 to 0.9; the value depends on its evolution over the years and on the last few hours of weather.
[Sea Ice Albedo classification]
http://www.gi.alaska.edu/%7Eeicken/he_teach/GEOS615icenom/albedo/albedo%20classification.htm
That may be part of the reason why the models don't predict the observed sea ice decline:
http://nsidc.org/news/press/20070430_StroeveGRL.html
But other processes are maybe not in the models at all, like Mediterranean drought or the duration of snow cover in Europe:
http://nsidc.org/news/press/20070623_PainterGRL.html
Or the rate of methane release from hydrates in the Ocean
Or, as stated in the WG1 SPM:
“Dynamical processes related to ice flow not included in current models but suggested by recent observations could increase the vulnerability of the ice sheets to warming, increasing future sea level rise. Understanding of these processes is limited and there is no consensus on their magnitude.”
Or the reaction of the tropical rain forest to changes in rain patterns that have not yet occurred.
That's why I've called my Hebrew blog on climate change "The Black Butterfly Effect".
So all the models are no better than Hansen's claim that we had better stay within the limits of the interglacial periods and learn from history.
No model can be trusted at temperatures 2 C higher than the pre-industrial temperature.
David B. Benson says
Rather than continue to be off-topic on the Regional Climate Change thread, I am commenting here. Hank Roberts brought up the notion that we (all 6+ billion of my closest friends) are over-consuming the world’s resources. Certainly so. There is only a finite supply of metal ores, for example.
However, the potential for biomass to provide bioenergy is surprisingly high, over twice the approximately 400-420 exajoules currently consumed from all sources. The following link contains a link to a thesis (mostly in English) which analyzes the situation:
http://www.biopact.com/africa-latin-america-china-and-biofuel.html
Somewhat related to this, I was hoping for comments regarding my comment #11, which wonders exactly what happens if the atmospheric carbon dioxide concentration is decreased by sequestering carbon…
Phil. Felton says
“If the GCM had made a prediction of possible sea ice melt within 10 years of the actual event, I would say that the GCM was a great success. I would say that they were good if they came within 20 years. However, they missed. The GCMs did not forecast significant ice loss within the next couple of decades.”
Well apparently this meets your standards.
https://www.realclimate.org/images/bitz_fig2.jpg
David B. Benson says
The correct link to the Biopact page is actually a previous post:
http://biopact.com/2006/07/look-at-africas-biofuels-potential.html
Apologies.
ray ladbury says
Daniel and Aaron, When you have a model that performs well generally, but which fails to perform for a specific class of phenomena, the problem is usually not a “bug”, but rather physics pertinent to the weak point that is missing. In this case, the missing physics is that of melting ice. This is a known unknown in the latest IPCC assessment. Such pessimism about the general health of the models is unwarranted.
Hank Roberts says
Current condition:
http://visibleearth.nasa.gov/view_rec.php?id=16858
http://veimages.gsfc.nasa.gov/16858/HANPP_1982-98.jpg
David, on the page you point to, their link is still broken to the document they cite as their source for these numbers:
“… the global, sustainable bioenergy production potential in 2050 is between 273 Exajoules (scenario 1) and 1471 EJ (scenario 4).
Now consider that the world’s total current energy consumption from all sources (nuclear, coal, oil, natural gas and renewables) amounts to 440 EJ of energy per year….”
But another paper that _is_ currently available at the same website contradicts these numbers:
http://www.bioenergytrade.org/t40reportspapers/otherreportspublications/fairbiotradeproject20012004/00000098ae0d94705.html
“Existing scenario studies indicated that such increases in productivity may be unrealistically high, although these studies generally excluded the impact of large scale bioenergy crop production. The global potential of bioenergy production from agricultural and forestry residues and wastes was calculated to be 76-96 EJ y-1 in the year 2050. The potential of bioenergy production from surplus forest growth (forest growth not required for the production of industrial roundwood and traditional woodfuel) was calculated to be 74 EJ y-1 in the year 2050.”
I’m not going to pursue this further — it’s going way off topic — but you were picking the most optimistic scenarios and stating them as facts — that always worries me when I know there’s other information available that people need to evaluate the claims. The argument that it’s possible to exceed nature’s total productivity from photosynthesis without stealing anything from nature is most improbable.
I’m asking you to be even-handed and give people all the evidence relevant to your claim, not just the references that support what you prefer to believe, not just the most optimistic scenarios.
This is always the right approach — full, best available information so people can evaluate claims and think about the facts available.
David B. Benson says
Hank Roberts — The link to the Hoogwijk thesis provides the .pdf file. I read enough of it to stand by my statement, which is reinforced by the Smeets et al. abstracts that you so kindly provided a link to.
The discrepancy comes from employing currently degraded or unused, but usable, lands to grow biomass for bioenergy. Jatropha is a good choice and there is lots of land which can be employed for this purpose.
This is a ‘Friday Report’ thread, whose purpose — as I understand it — is to provide a place for comments which would be off-topic elsewhere.
And I do try to provide the links to appropriate references. I’ve been following Biopact for some time now and have learned to largely trust their judgement. You are, of course, welcome to disagree. But with regard to the potential for about 840 exajoules of bioenergy, both the Hoogwijk thesis and the Smeets, et al., papers seem to agree that this is possible, albeit with some investment.
This is, to me, of great importance. It means that if the will is there, it is possible to replace all fossil fuels with biofuels and still have enough to sequester massive amounts of carbon each year.
Mike Alexander says
RE 24. 1600 was an illustrative example. Obviously one would use only the *independent* stations. Say there are only 300 independent stations; in that case the maximum variation in annual temperature would be 0.06 C, still much less than the 0.12 C, and my point stands.
Technically my analysis applies to standard deviations, not maximum variations, and the former are considerably smaller, probably on the order of 5-10 C. Thus for 0.12 C-sized *regular* global fluctuations to be the result of random 10 C "weather effects", you would need fewer than *twenty* independent weather stations.
This is why the modern instrumental record (with >> 20 independent measurements) is so much more reliable than pre-1850 data. For example, 17th-century temperature changes of ~0.1 C may well be "weather noise", whereas 20th-century changes of this magnitude most certainly are not.
TokyoTom says
James Annan has also commented on Schwartz's earlier Nature piece concerning the IPCC: http://julesandjames.blogspot.com/2007/09/pile-on.html#links
pete best says
ARCTIC ICE LOSS 2007
Looking at the recent events in the Arctic, it can currently be shown that all 18 models used to predict future sea ice loss are too conservative.
http://nsidc.org/news/press/20070430_StroeveGRL.html
Is this just a modelling parameter error (sea ice too thick in the models, ocean circulation not correct, etc.), or is it something else?
The recent ice loss seems to be alarming some people, and predictions of 2030 for Arctic sea ice loss are now being made. I remember a recent article on RC stipulating that around 2080 to 2100 was the most likely date, and that even the models' earliest dates were 2040, which was considered unlikely!
Is polar amplification a stronger effect than currently thought?
Randy Ross says
In connection with David Thompson’s comments on biofuel, the New York Times just had an article on this very promising plant:
http://www.nytimes.com/2007/09/09/world/africa/09biofuel.html
Obviously, this will not stop global warming, but it can help provide non-fossil fuel energy to the developing world, and help poor farmers in those areas.
Hank Roberts says
Pete, much of the data collected is still classified; the Naval Postgraduate School thesis collection has consistently had more modeling information on the Arctic than any other site. It's hard to search; each time I go looking for something I know is there and supposedly public, they've rearranged the links or the home page.
The articles found by this search are a way to start into that:
"naval postgraduate school" Arctic sea ice. I'm only guessing that they may be drawing on classified data to feed their models but are allowed to publish some of the model results: not allowed to tell enemy submariners where the ice is thick or thin, but perhaps allowed to reveal the overall trends, for example.
I’ve seen one name, Dr. Maslowski, turn up as thesis advisor on several interesting documents that used to be findable there.
Here’s a published paper from him on the topic, quite recent:
http://www.ametsoc.org/atmospolicy/documents/May032006_Dr.WieslawMaslowski.pdf
David B. Benson says
Re #11, #35, #37, #39, #40: Using the figure that burning one kilogram of carbon in the air releases 33 megajoules (I had to ask so I don’t have a reference), 14 billion tonnes of carbon is equivalent to 462 exajoules. This is towards the low end of the four scenarios in the Hoogwijk thesis.
Actually, I forgot about the carbon dioxide concentration in the upper ocean when arriving at the figure of 14 billion tonnes posed in comment #11. I believe this means that it will take the removal of about 270 billion tonnes of carbon from the atmosphere to restore the carbon dioxide concentration to 315 ppm. At an average of 7 billion tonnes of carbon sequestered per year, this requires about 40 years.
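A quick Python check of the unit conversions (the 33 MJ/kg figure and the carbon tonnages are from the comments above; only the conversions are added here):

# Check of the arithmetic above. 1 GtC = 1e12 kg; 1 EJ = 1e18 J.
MJ_PER_KG = 33.0                      # energy from burning 1 kg of carbon (figure quoted above)

carbon_gt = 14.0                      # GtC per year in the scenario
energy_ej = carbon_gt * 1e12 * MJ_PER_KG * 1e6 / 1e18
print(f"{carbon_gt:.0f} GtC/yr of carbon burned ~ {energy_ej:.0f} EJ/yr")    # ~462 EJ

drawdown_gt, rate = 270.0, 7.0        # GtC to remove and GtC sequestered per year
print(f"{drawdown_gt:.0f} GtC at {rate:.0f} GtC/yr ~ {drawdown_gt / rate:.0f} years")  # ~39 yr

—– end sketch —–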
John Mashey says
re: #44
Agreed. Jatropha is interesting, as are some others, of which miscanthus looks especially good in place of corn for ethanol, sooner or later. In particular, poor third-world farmers aren't *ever* going to get much, if any, gasoline or diesel fuel (Peak Oil), so maybe they can skip straight to renewables or PV-solar-powered tractors and pumps.
Of course, better crops in general would help: although Norman Borlaug is amazing, and still out there at 93, I hope there are younger equivalents to give Africa more help with crop engineering.
Daniel C. Goodwin says
Re 38: I’m glad to hear your optimism regarding the models, Ray. I have no doubt you know vastly more than me whereof you speak.
Still, the engineer's "we know we can fix it" is automatically much less credible to me than the quantum physicist's "we don't know anything". The nagging doubts I have about something being basically off in the GCMs (if you'll indulge a piker):
1) Perhaps the GCMs may be described as equilibrium machines. Is there something fundamentally different about modeling disequilibrium, if that’s what the earth is experiencing?
2) What about non-linear versus linear effects? Easier math, at least, might prejudice models in the linear direction.
3) The proliferation of feedbacks from biological sources (which may be a restatement of (2), come to think of it). As this progresses, the prognosis for predictively-successful modeling gets that much worse, again.
pete best says
Re #45: Hank, that's interesting stuff. It looks like the ice measurements and assumptions in the models might have been incorrect, or that computational power is not sufficient to accurately represent oceanic heat transport.
Still, as all 18 models are somewhat inaccurate even with varying parameters, is there something missing from the models, I wonder?