Martin Vermeer says
Meow #32, I don’t see that… visually they scale to ~1 for me. Do you have access to their data?
Meow #34, yes it is circular where temperatures and transient sensitivities themselves are concerned, as they use “observations” from a GCM. But the uncertainty estimates (which is their focus) will be meaningful nevertheless — provided the GCM, and the forcings driving it, aren’t too far off. That’s how I take it.
BTW the paper is still “in review” (where?) according to this. Hope the reviewers catch some of the “warts” like in the intro, “after about 70 years given a 1% CO2 doubling rate”. Surely they mean “after about 70 years given a 1% CO2 annual rate of growth” (which produces a doubling in 70 years).
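For the record, the arithmetic behind that correction is just compound growth (a quick check, not from the paper):

```python
import math

# Compounded growth of 1% per year: concentration after t years is (1.01)**t.
# The doubling time solves (1.01)**t = 2, i.e. t = ln(2) / ln(1.01).
doubling_time = math.log(2) / math.log(1.01)
print(round(doubling_time, 1))  # ~69.7 years, i.e. "about 70 years"
```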
Nice stuff… funny that they’re coming from Mechanical and Aerospace Engineering…
wili says
http://english.ruvr.ru/2011/09/28/56886547.html
“scientists who found the new fields of outgoing methane in the Arctic region have not defined yet whether it is the consequence of hydrates failure or result of high activity of sea microorganisms. To know this for sure they should first analyze the samples they gathered during the expedition.”
What kind of sea microorganism activity would be likely to create massive quantities of methane?
Septic Matthew says
51, Martin Vermeer, I am glad you liked it. On the pdf itself it says “J. Climate, in press”.
Surely they mean “after about 70 years given a 1% CO2 annual rate of growth”
It’s funny no one caught that in review. But most papers have a few odd locutions.
“Warts and all”, right now I think it is the best work on the most important topic.
Meow #32, I don’t see that… visually they scale to ~1 for me. Do you have access to their data?
They’re pdfs: the areas under the curves are 1, not the peaks of the curves.
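A quick illustration of the point (my own toy curve, not the paper's): a probability density can peak well below 1 while its area is still exactly 1.

```python
import math

def normal_pdf(x, mu=0.0, sigma=2.0):
    """Gaussian density; with sigma=2 the peak is ~0.2, far below 1."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Trapezoid-rule integral over a wide interval: the area still comes out ~1
xs = [-20 + 0.01 * i for i in range(4001)]
area = sum(0.01 * 0.5 * (normal_pdf(a) + normal_pdf(b)) for a, b in zip(xs, xs[1:]))
peak = normal_pdf(0.0)
print(round(peak, 3), round(area, 3))  # peak ~0.2, area ~1.0
```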
Hank Roberts says
http://rsta.royalsocietypublishing.org/content/369/1943/1980.full
10.1098/rsta.2011.0003 Phil. Trans. R. Soc. A 28 May 2011 vol. 369 no. 1943 1980-1996
Warming up, turning sour, losing breath: ocean biogeochemistry under global change — Nicolas Gruber
“… the ocean’s biogeochemical cycles and ecosystems will become increasingly stressed by at least three independent factors. Rising temperatures, ocean acidification and ocean deoxygenation will cause substantial changes in the physical, chemical and biological environment, which will then affect the ocean’s biogeochemical cycles and ecosystems in ways that we are only beginning to fathom.
Ocean warming will not only affect organisms and biogeochemical cycles directly, but will also increase upper ocean stratification. The changes in the ocean’s carbonate chemistry induced by the uptake of anthropogenic carbon dioxide (CO2) (i.e. ocean acidification) will probably affect many organisms and processes, although in ways that are currently not well understood.
Ocean deoxygenation, i.e. the loss of dissolved oxygen (O2) from the ocean, is bound to occur in a warming and more stratified ocean, causing stress to macro-organisms that critically depend on sufficient levels of oxygen. These three stressors—warming, acidification and deoxygenation—will tend to operate globally, although with distinct regional differences.
The impacts of ocean acidification tend to be strongest in the high latitudes, whereas the low-oxygen regions of the low latitudes are most vulnerable to ocean deoxygenation. Specific regions, such as the eastern boundary upwelling systems, will be strongly affected by all three stressors, making them potential hotspots for change.
Of additional concern are synergistic effects, such as ocean acidification-induced changes in the type and magnitude of the organic matter exported to the ocean’s interior, which then might cause substantial changes in the oxygen concentration there. Ocean warming, acidification and deoxygenation are essentially irreversible on centennial time scales, i.e. once these changes have occurred, it will take centuries for the ocean to recover.
With the emission of CO2 being the primary driver behind all three stressors, the primary mitigation strategy is to reduce these emissions.”
[extra paragraph breaks added for readability — hr]
Paul S says
#21, Septic Matthew – ‘The authors estimate a most likely increase of 1.6K by 2080 if the atmospheric concentration of CO2 doubles gradually throughout that time span.’
Surely that only holds if climate is currently in equilibrium? What should happen by 2080 is that we get ‘pipeline’ warming associated with the ~100ppm increase seen so far + an extra 1.6K transient response to a doubling (increase to 780ppm).
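A back-of-envelope version of that "pipeline" argument (an illustrative sketch with assumed round numbers, not Paul S's own calculation): take a transient response of 1.6 K per doubling and, purely for illustration, an assumed equilibrium sensitivity of 3 K per doubling; the difference between the two responses to the forcing already in place is warming still owed.

```python
import math

def warming(sensitivity_per_doubling, c, c0=280.0):
    # Warming for a CO2 change from c0 to c ppm, assuming logarithmic forcing
    return sensitivity_per_doubling * math.log(c / c0, 2)

TCR, ECS = 1.6, 3.0          # assumed values, K per CO2 doubling
c_now = 390.0                # ppm, roughly the 2011 concentration

realized = warming(TCR, c_now)     # transient warming realized so far
committed = warming(ECS, c_now)    # eventual equilibrium warming at today's CO2
pipeline = committed - realized    # still "in the pipeline" with no further emissions
print(round(realized, 2), round(committed, 2), round(pipeline, 2))
```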
For comparison, the IPCC A2 scenario reaches about 660ppm by 2080. The A2 scenario includes other long-lived GHGs, so overall forcing is probably fairly similar to 780ppm CO2. The IPCC model ensemble has a transient response of around 1.6K too, yet the model-mean projection is for about 2.5K warming from 2000.
‘Personally, I think that the transient climate sensitivity is the single most important quantity wrt climate change for public policy purposes.’
Transient sensitivity is a useful metric but I would think it far more important for policymakers to understand that there could be a large amount of warming waiting to occur after we manage to stabilise CO2 concentration. Hence it is incomplete without also knowing the equilibrium response. Another nugget of knowledge that would be very useful for policymakers would be the length of time it takes for equilibrium to be achieved. I don’t see this discussed very often.
Martin Vermeer says
#53
> They’re pdfs: the areas under the curves are 1, not the peaks of the curves.
Precisely. That’s why I wondered how Meow could be so sure… the eye is not very good at integration.
Martin, did you email them a pointer to the comments on errors in the review draft? Email is at the bottom of that page you linked to.
Meow says
@56:
Precisely. That’s why I wondered how Meow could be so sure… the eye is not very good at integration.
You’re right about that. Visually I had estimated each curve’s area to be significantly > 1, but having measured the blue curve’s area in pixels, I find it’s ~1, so I withdraw that criticism.
Meow says
@51:
Meow #34, yes it is circular where temperatures and transient sensitivities themselves are concerned, as they use “observations” from a GCM. But the uncertainty estimates (which is their focus) will be meaningful nevertheless — provided the GCM, and the forcings driving it, aren’t too far off. That’s how I take it.
But they’re estimating the uncertainty in the 2030 transient sensitivity by assimilating the synthetic 2008-2030 “temperature record”, which appears, in turn, to be derived from the 2008 transient sensitivity. How can that procedure possibly yield a 2030 TCS with a smaller uncertainty than that of the 2008 TCS?
Maya says
“Mann’s piece has collected 170 “recommends” so far ”
281 with the one I added. You’re right, in the grand scheme of things, it’s not important, but if even one person notices, and goes “hmmm” and starts to read to find out more about the science of this whole global warming thing, and maybe clicks on one of those links at the bottom … then maybe it’s one more person who’ll understand, who’ll care, who’ll try to make a difference.
Septic Matthew says
55, PaulS,
Yes. The other really important quantities are (1) the equilibrium climate sensitivity; and (2) how long it takes to achieve 99% of equilibrium after it gets halfway there.
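For a single exponential relaxation toward equilibrium (a gross simplification of the real ocean, which has multiple timescales), that "99% after halfway" interval has a closed form:

```python
import math

# Fraction of equilibrium reached at time t: f(t) = 1 - exp(-t / tau)
# Time to reach fraction f: t(f) = -tau * ln(1 - f)
def time_to_fraction(f, tau=1.0):
    return -tau * math.log(1.0 - f)

# Interval from 50% to 99% of equilibrium, in units of the relaxation time tau
interval = time_to_fraction(0.99) - time_to_fraction(0.50)
print(round(interval, 2))  # ln(50) ~ 3.91 relaxation times
```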
59, Meow: How can that procedure possibly yield a 2030 TCS with a smaller uncertainty than that of the 2008 TCS?
I think they mean to say that, with temperature records available in 2030, the estimated transient climate sensitivity will then be more precise; their computation with a simulated future illustrates how that can come about. However, I often write something like “In 20 years we’ll have a much clearer idea which of the models is most accurate,” so you could make a case that I am projecting my own belief onto them.
Septic Matthew says
does anyone else have references to good estimates of the transient climate sensitivity — peer-reviewed papers?
Hank Roberts says
… Between 18 and 20 kilometres up, over 80 per cent of the existing ozone was destroyed. “The loss in 2011 was twice that in the two previous record-setting Arctic winters, 1996 and 2005,” says Nathaniel Livesey …
… But we don’t know why the stratosphere stayed cold for so long. “That will be studied for years to come,” Santee says.
Climate change could be partly responsible. That may seem counter-intuitive, but global warming occurs only at the bottom of the atmosphere. “Climate change warms the surface but cools the stratosphere,” Harris explains.
In 2007 the Intergovernmental Panel on Climate Change concluded that “there has been global stratospheric cooling since 1979”. “Whether that is because of climate change is speculation,” Santee says.
Are there other predictions for, or ways in hindsight to explain, the stratosphere showing a cooling trend for over 30 years?
Martin Vermeer says
I think they mean to say that, with temperature records available in 2030, the estimated transient climate sensitivity will then be more precise; their computation with a simulated future illustrates how that can come about. However, I often write something like “In 20 years we’ll have a much clearer idea which of the models is most accurate,” so you could make a case that I am projecting my own belief onto them.
No, I think this is about right. The new information is how global mean temperatures will respond to much larger (and properly known!) forcings 2008-2030 according to one model. The response itself cannot be trusted any better than the GCM used, but the propagation of uncertainties may be realistic (or that’s the idea I get).
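One way to see how a longer record can shrink the uncertainty even when the "observations" are synthetic (a toy Bayesian sketch of my own, not the Padilla et al. method): if each year contributes an independent noisy constraint on a fixed sensitivity, the posterior standard deviation falls roughly as one over the square root of the record length, regardless of where the data came from.

```python
import math

def posterior_std(prior_std, obs_std, n_years):
    """Gaussian prior combined with n independent Gaussian constraints: precisions add."""
    precision = 1.0 / prior_std**2 + n_years / obs_std**2
    return 1.0 / math.sqrt(precision)

# Toy numbers: vague prior on TCS, each year's constraint individually very noisy
sigma_2008 = posterior_std(prior_std=1.0, obs_std=5.0, n_years=30)   # record to 2008
sigma_2030 = posterior_std(prior_std=1.0, obs_std=5.0, n_years=52)   # 22 more years
print(round(sigma_2008, 3), round(sigma_2030, 3))
```

Whether the *value* the posterior converges to is right depends entirely on the GCM supplying the pseudo-observations; only the shrinkage rate is informative, which is the point above.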
—
By the way, SM #64 is a good illustration of why one should work with climate scientists when doing climate related work, or risk egg on one’s face :-)
Andy says
ref 26. I’m fairly sure the writer was Harry Harrison and I think the title was “The Invisible Idiot”
I tried to sign your petition, but the “SIGN THIS PETITION” button was grayed out and wouldn’t do anything.
ldavidcooke says
RE:23
Hey Pete,
Please do not forget the role that convection plays in part of the heat transport. Rather than just radiative emission, a lot of energy goes into evaporation or sublimation, which is transported to between 2-6km and radiates out from the elevated altitude. The other missing link is the change in the adiabatic wet/dry transition height. Increase the atmospheric heat content and you increase the adiabatic height; complicate that with changes in aerosol CCNs and added water vapor/condensation UV absorption and re-emission as LW, and you have a wonderful story to share. I am concerned that many get stuck in the first chapter or chorus and miss the rest…
Jane Long, an associate director of the Lawrence Livermore National Laboratory and the panel’s co-chairwoman, said that by spewing greenhouse gases into the atmosphere, human activity was already engaged in climate modification. “We are doing it accidentally, but the Earth doesn’t know that,” she said, adding, “Going forward in ignorance is not an option.”
Richard Hawes says
Richard Hawes @ 26
“The Monkey Wrench” Gordon R. Dickson in “The Penguin Science Fiction Omnibus”. Aldiss, B. ed 1973
I bought my copy new in 1987 so good second hand numbers should be available.
Thanks Darv …. this story has stuck with me a long long time, especially dealing with some of the trogs and cornucopians in the oil patch.
Thomas says
Hank @67.
There is a short, simple explanation for stratospheric cooling. The upper atmosphere obtains heat input from two sources: solar UV, and upgoing longwave from below. It loses heat via longwave emission. The solar UV is (to first order) unaffected by the CO2 concentration. There is an excess of IR emission versus absorption, since the upper atmosphere is colder than the radiative temperature of the planet, so increasing the IR opacity increases the cooling efficiency via LW emission. So the upper atmosphere should cool with increasing concentration of greenhouse gases. Of course, subtle changes in chemistry and/or circulation could complicate matters, but that's it in a nutshell.
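Thomas's nutshell can be put in numbers with the standard grey-gas "skin layer" result (a textbook idealization, not a model of the real stratosphere): an optically thin top layer absorbs the outgoing longwave flux once but emits from both its upper and lower faces, so in equilibrium sigma*T_skin^4 = OLR/2.

```python
# Grey-gas skin temperature: the optically thin top layer absorbs the outgoing
# longwave flux once but radiates from two faces, so it equilibrates at
# sigma*T_skin^4 = (1/2)*sigma*T_eff^4, i.e. T_skin = T_eff / 2**0.25.
T_eff = 255.0                      # Earth's effective radiating temperature, K
T_skin = T_eff / 2 ** 0.25
print(round(T_skin, 1))            # ~214 K, well below the effective temperature
```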
Paul S says
Hank Roberts – ‘Are there other predictions for, or ways in hindsight to explain, the stratosphere showing a cooling trend for over 30 years?‘
Ozone depletion also causes stratospheric cooling (I’m sure there must be a RealClimate post on this). Temperature trends in the stratosphere have been flat for about 15 years now, perhaps partly due to a small recent ozone recovery(?).
I found this recent paper a few weeks ago (Forster et many als. 2011). It’s an assessment of CCM (chemistry-climate model) performance against stratospheric trends, but obviously it discusses the factors forcing the trends along the way.
ldavidcooke says
Hey Dr. Schmidt and Co.,
Is it possible we can revisit Drs. Lin and Chambers’ “how clouds work” again, though just from a vertical RH profile, based on the radiosonde/CALIPSO/CLOUDSAT, ocean versus land, data and not the Dr. Lindzen SST basis? It would be nice to see how this plays into the PSC/MSC formation, increasing drought mechanics and atmospheric heat content without dragging in CCNs. (I know, my fault, I just did not understand the mechanics at the time and jumped on a Colorado State grad student’s paper without thinking things through.)
> SM
> … In response to a paper that is in press you go
> to a blog?
Nope. I looked for mention and noted that it’s already a hot item in the discussions among the septic crowd.
> You did much the same when you decided not to read
> the 110 pp of AOAS that I referenced. Didn’t you?
Nope.
But thanks for asking.
Always happy to disabuse you. (wry grin)
I think we’re getting somewhere.
Aside — email addresses @princeton.edu given for the authors of the Padilla ‘in press’ paper don’t work; I’d hoped the author could be invited to look in here.
The book that is cited by them has a price tag of $203, so I must skip it for now.
68, Martin Vermeer: No, I think this is about right.
I hope the “no” refers to my “projection” comment, and the “about right” to how the uncertainty declines with time.
You are right about statisticians working with climate scientists. Dr. Douglas Nychka at NCAR is an example. Statistics (and statisticians) grows by addressing new challenges in fields that statisticians have not studied until then. I hope that this topic (the transient climate sensitivity) stimulates some interest at the 2012 Joint Statistical Meetings in San Diego: http://www.amstat.org/meetings/jsm/2012/index.cfm
“Statistics: Growing to serve a data-dependent society.”
I also hope that Padilla et al. will expand their exposition and submit it to the Annals of Applied Statistics, as McShane and Wyner did with theirs.
Thanks to all for your help.
Septic Matthew says
Thanks to all above for your comments. To mike (64 inline) that paper is high on my reading list, thanks again. 68, Martin Vermeer — I agree on the necessity to work with climate scientists.
Each year more statisticians get involved with climate scientists. Hopefully, this topic (estimating transient climate response) will be on the schedule next year in San Diego: http://www.amstat.org/meetings/jsm/2012/index.cfm
Mal Adapted says
Los Alamos National Laboratory is hosting the Third Santa Fe Conference on Global and Regional Climate Change Oct. 31 thru Nov. 4. A lot of good science has come out of LANL, but the conference program is dismaying. I’m not familiar with many of the names on it, but I do know a few of them, e.g. Lindzen, Singer and Monckton! What can the conference organizers be thinking?
[Response: The organizer is Peter Chylek, and I have no idea. – gavin]
1. First distinguish between transient and full equilibrium responses (with only the Planck response; add in Charney feedbacks and then non-Charney feedbacks later):
If a sudden jump in greenhouse gas concentration occurs, you can have cooling in the upper atmosphere that disappears later (assuming heat capacity is distributed in a sufficient manner?). For example, in a grey gas case with no direct solar heating above some height z (assuming z is above the tropopause and approximating the upper atmosphere as being in pure radiative equilibrium), the net LW flux above z must be constant with height when in equilibrium and equal to the OLR at TOA (outgoing longwave radiation at the (effective) top of the atmosphere). Increasing LW opacity will at first reduce OLR. The skin layer’s temperature will fall in response, coming to equilibrium with OLR. But the radiative imbalance causes heat to accumulate below. Eventually a full equilibrium is achieved when OLR is restored, and the skin temperature must be restored as well.
2. But if opacity is only increased in some bands and not others, the restored OLR can have a different spectral distribution than before. Depending on how the spectrum of atmospheric opacity has changed, some may be displaced from the bands that exert greatest influence on the skin layer, and thus result in a persistent cooling near TOA. On the other hand, if optical thickness is increased in an atmospheric window (assuming surface emissivity is high enough in that window, to generalize this more), this tends to warm the skin layer by intercepting OLR that is originating in warmer places.
3. Other effects of changing the spectrum can occur due to the shape of the Planck function over the spectrum and over temperature (consider whether the lapse rate, in terms of the Planck function, is convex (tends to cause net cooling) or concave (tends to cause net warming) – in particular, on the distance scale of moderate opacity (~ mean free path of photons, maybe give or take?) – this will vary over frequency – even in a grey gas, one may go from convex to concave at some point in the spectrum. In pure LW radiative equilibrium, net warming and net cooling at different parts of the LW portion of the spectrum must balance; otherwise an imbalance will balance solar heating, and/or convective heating/cooling (such as at the surface or in the troposphere, or regionally/seasonally/etc. in the stratosphere (convection) or in the stratosphere due to the ozone layer). The skin Planck function is half of OLR in the absence of direct solar heating or other complications; the temperature may range from half the brightness temperature of OLR to near the same brightness temperature of the OLR depending on where it is determined in the spectrum.
4. If there is direct solar heating sufficiently (depending on opacity) near TOA, adding LW opacity thins the skin layer, removing solar heating from it, bringing the equilibrium skin layer temperature closer to pure LW equilibrium value.
Hunt Janin says
I need a professional-quality color photo for use on the front cover of my coauthored book, “Rising Sea Levels.”
Since from a photographic point of view, sea level rise is not a good subject (it is much too slow), I’m thinking of using a dramatic photo of a storm surge hitting the shore.
Anything WRONG with this idea?
[Response: Hmm…. There is an implicit attribution which is not what you are trying to convey (I think). However, there are photos of impacts, and in particular increased erosion, that might convey what you want more directly. I’m thinking of images of Shishmaref falling into the sea, or abandoned houses on the Carolina barrier islands. Gary Braasch might be your man. – gavin]
As someone who researches in rather non-controversial arenas (Mol. Biol./Medical Biophys.) unrelated to climate science, I am routinely flabbergasted by some of the (wilfully) dismal rubbish that is occasionally published in climate science presumably in pursuit of dreary agendas.
One area that seems a focus for some of this stuff is the insinuation of “evidence” for low climate sensitivity. As an outsider I tend to take papers rather for granted unless something stands out as being patently dodgy. There’s no escaping the fact that Petr Chylek’s effort to insinuate low climate sensitivity by ludicrous datapoint selection of ice core temperature proxies and dust levels in his 2008 GRL paper (with U. Lohmann) is a deeply flawed analysis (see, for example, the comment by Annan and Hargreaves, Clim. Past, 5, 143–145, 2009; http://www.clim-past.net/5/143/2009/cp-5-143-2009.pdf ). The nature of this particular flaw is rather similar to the one Lindzen and Choi made in their insinuation of low climate sensitivity (negative feedback) from a similarly astonishing selection of data points comparing TOA radiative flux in response to surface temperature variation. I don’t think it’s unreasonable to point out that Dr. Spencer has also made some rather flawed analyses in pursuit of negative feedbacks/low climate sensitivity (as have a very small number of other authors). I don’t see why we should have to “pussy-foot” around this subject – the papers (and their rebuttals) are there in cold black and white, and those authors wrote them.
So perhaps Dr. Chylek has some sympathy for that particular point of view (the pursuit of low climate sensitivity), which is shared by some of the meeting participants you mentioned. Incidentally, I don’t think this is a problem for the science. One might say that the assumptions underlying our knowledge base should be tested towards destruction, and we might even be reassured that the individuals who do their damnedest to pursue the contrary point of view tend, in fact, rather to reinforce the pukka science. Perhaps we might be saddened by the observation that their efforts are so puny, scientifically speaking.
Anyway, that might be relevant to the question of why Dr. Chylek has organized a meeting with some odd participants.
M says
Mal Adapted: Add to your list Easterbrook, Scafetta, Morner, Loehle… plus Schwartz on climate sensitivity and Garrett on his wacky thermodynamic economy theory… they’re not all as bad as Singer and Monckton, but not people I associate with high quality science. And probably another half-dozen names there are people who might not be contrarians themselves, but are often quoted by the Moranos of the world nonetheless.
And yet, there are a number of top-notch names there too – I have to wonder what they’ll think when they have to sit through the junk from the above folks…
1. Increasing LW opacity will at first reduce OLR. – assuming no direct solar heating of the air above some height, or otherwise assuming a distribution such that temperature still declines with height, at least on the vertical distance scale that is similar to a unit of optical thickness, at least within a sufficient optical depth from TOA down into the atmosphere.
2. On the other hand, if optical thickness is increased in an atmospheric window (assuming surface emissivity is high enough in that window, to generalize this more), this tends to warm the skin layer by intercepting OLR that is originating in warmer places.
– generalizes to any atmospheric window down to a sufficiently warm level – for example, a humid or cloudy air mass (provided the clouds are not too high in the troposphere, or else that the lapse rate is still positive going into the stratosphere, etc.) would optically act like an elevated surface that may be cooler (unless there’s an inversion under it) than the actual surface but still supply a relatively ‘warm’ upward LW flux that the air above must adjust to if it has optical thickness (of the emission/absorption type, as opposed to pure scattering) in that band.
David Young says
I hope someone will help me again because after significant research, my numerical analysis questions just won’t go away. I was pointed to a video by Cambridge University Isaac Newton Institute of P. Williams discussing time stepping errors and he shows some rather alarming things for simple models, namely, that the apparently standard leapfrog scheme with Robert-Asselin filter is pretty dissipative and damps the real oscillations in the system. He studied a few explicit methods of different orders and seemed to conclude that higher order was good and that the problems with leapfrog are not hard to fix. Then I went back to my graduate school text and I found that “The leapfrog scheme is also prone to nonlinear instability.” Comments?
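A minimal demonstration of the damping Williams discusses (my own sketch on the standard test oscillator dx/dt = i*omega*x, not his code): the Robert-Asselin filter keeps leapfrog's spurious computational mode under control, at the price of damping the physical oscillation, whose amplitude should ideally stay at 1.

```python
# Leapfrog with Robert-Asselin filter on dx/dt = 1j*omega*x (|x| should stay 1).
def leapfrog_ra(omega=1.0, dt=0.1, steps=2000, nu=0.1):
    x_prev = 1.0 + 0j
    x = x_prev * (1 + 1j * omega * dt)              # crude first step (forward Euler)
    for _ in range(steps):
        x_next = x_prev + 2 * dt * 1j * omega * x   # leapfrog step
        # RA filter: nudge the middle time level toward the mean of its neighbours
        x_prev = x + nu * (x_next - 2 * x + x_prev)
        x = x_next
    return abs(x)

amp_filtered = leapfrog_ra(nu=0.1)    # typical filter strength: visible amplitude loss
amp_light = leapfrog_ra(nu=0.01)      # lighter filter: much less damping
print(round(amp_filtered, 2), round(amp_light, 2))
```

The damping of the physical mode scales with the filter coefficient, which is the effect Williams' higher-order modifications are designed to remove.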
71 BPL: A whitehouse.gov account is required to sign Petitions. That may be the problem. You have to create an account. It is not as user friendly as a Macintosh.
David B. Benson says
Barton Paul Levenson @71 — First register; unfortunately that causes a login. So logout. Now start over and log in. Then you can sign.
Remember that it’s good enuf for government work.
Septic Matthew says
90, David Young:
It is nearly always possible to come up with an interesting and relevant example that defeats each scheme.
Sorry, but you always have to experiment a lot with problems similar to yours and with different schemes. I always use an implicit scheme with stepsize adjustment and error corrections, and I nearly always test the final results against the simplest Euler’s method with diminishing step sizes. It takes a long time, and with large enough systems it can be discouraging.
With these large, meaning high-dimensional, climate systems run for long times, I am glad that I am not the one responsible for claiming that they achieve what they aim for.
I am not an expert — I merely have some good books and have spent time in the library stacks with other good books. Every differential equation solver can be defeated with a problem reasonably similar to the problem that you are working on. As far as I can tell.
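The sanity check Septic Matthew describes, in miniature (a generic sketch, not his code): solve a problem with a known answer and confirm that forward Euler converges to it as the step shrinks, which gives a cheap cross-check for any fancier scheme.

```python
import math

def euler(f, y0, t_end, n_steps):
    """Plain forward Euler for dy/dt = f(t, y), integrated from t=0 to t_end."""
    t, y = 0.0, y0
    h = t_end / n_steps
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

# Test problem dy/dt = -y, y(0) = 1: the exact answer at t = 1 is exp(-1)
exact = math.exp(-1.0)
errors = [abs(euler(lambda t, y: -y, 1.0, 1.0, n) - exact) for n in (10, 100, 1000)]
print(errors)  # each tenfold step refinement cuts the error ~tenfold (first order)
```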
Martin Vermeer says
64, mike, inline. Thank you much. It’s high on my current reading list. I also got some papers from Stephen E. Schwartz, cited inline.
SM #79, mike is pulling your leg, making fun of you unearthing, with unerring ‘skepticism’, the one paper that is seriously flawed… he is the number four author on the “comment” he links to, that demolishes the Schwartz paper. But by all means, read both ;-)
Off topic, why am I reminded of some women serially finding men that are bad for them? Something in the water?
ldavidcooke says
RE:90
Hey David,
I do not pretend to know anything about models; however, to me the biggest issue is the weight of the modifiers of the initial parameters. You cannot simply plug in a stepping value and not have the system quickly detach from reality. At one point in a former life, doing capacity planning, we would build a model that used the past, and we derived a weighting factor from projected sales; of course that was a fiasco. Then we discovered we could apply a range-bounded random value, with the weight adjusted by the change in a leading economic indicator and the industry projections of the customer. In that manner we were able to be within 1% of variability over a 6-year period with a bi-monthly cycle.
Given this, it suggests that maybe we have a disconnect in model creation. It is important that the initial values are the correct values and the interdependencies are well defined. The next step is to examine the range of the modifiers and to define the range based on a weighting with 1-4 normalized standard deviations. The issue is to apply weight to the range of the modifiers so that they change in step with the change of the results of the prior step. I.e., if the value of a modifier from a previous step caused one of the calculated steps to demonstrate a large change, there needs to be feedback to the bounded-range modifier to deselect a 4th and possibly 3rd standard deviation in the next step, though in nature it has been demonstrated that repetitive outliers can exist, though eventually they terminate the trend and reverse.
In short, you will have to re-initialize each time. The actual issue is not so much the model’s results; rather, the modifying factors’ static values are not real world. Yes, they simplify the model and help point a direction; but in the end they will not track real-world events more than three to maybe ten cycles, depending on your resolution and interrelationship description. As to leapfrogging with a linear modifying factor, this is like starting over and plugging in a larger-amplitude, inaccurate modifying factor. If you attempt a coarse or low-resolution run, you still must run a high-resolution modifying factor; otherwise all you are doing is amplifying the error.
As to what this has to do with your mechanics and formula adjustment filters, I can be no help. However, I can tell you that tight model tracking is rocket science and should not be left up to the student or amateur. To devise a model with a high number of variables, over a high range of values, such that the result appears to be chaotic, takes more man-hours and expertise than any simplifying mathematical tool kit can account for.
Cheers!
Dave Cooke
Hunt Janin says
Re sea level rise photo (83 & 84 above):
Many thanks, Gavin and dhogaza. I’m emailing Gary Braasch to see if he has such a photo.
The passive and active nature of ocean heat uptake in idealized climate change experiments [PDF] from princeton.edu — P Xie, GK Vallis – Climate Dynamics, 2011 DOI 10.1007/s00382-011-1063-8
It begins:
“Abstract: The influence of ocean circulation changes on heat uptake is explored using a simply-configured primitive equation ocean model resembling a very idealized Atlantic Ocean. We focus on the relative importance of the redistribution of the existing heat …”
and ends
“… Evidently, the warming occurs first in the mixed layer and then, on the multi-decadal timescale, in the main thermocline and southern ocean (Fig. 3), with a potential localized cooling in high northern latitudes due to a weakening of the MOC, with a significant warming of the abyss only on century- to multi-century timescales. Note that a warming of the thermocline only, with a depth of 1 km, produces a sea-level rise of about 20 cm per degree, and a warming of the entire water column, with a lower average coefficient of thermal expansion but greater volume, translates to a rise of about 50 cm per degree. Thus, understanding how the heat uptake reaches the deep ocean, and on what timescales, is an important problem that we need to better understand….”
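The excerpt's 20-cm-per-degree figure is just thermal expansion arithmetic: sea-level rise ≈ expansion coefficient × warmed-layer depth × warming. (The coefficient value here, ~2×10⁻⁴ per K for the upper ocean, is an assumed round number, not taken from the paper.)

```python
# Thermosteric sea-level rise: delta_h = alpha * h * delta_T
alpha = 2.0e-4      # thermal expansion coefficient of seawater, per K (assumed)
h = 1000.0          # depth of the warmed layer in metres (thermocline only)
delta_T = 1.0       # warming in K

rise = alpha * h * delta_T
print(round(rise * 100, 1))  # ~20 cm per degree, matching the excerpt
```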
[extra paragraph breaks added for readability — hr]
Paul S says
#21, Septic Matthew – ‘The authors estimate a most likely increase of 1.6K by 2080 if the atmospheric concentration of CO2 doubles gradually throughout that time span.’
Surely that only holds if climate is currently in equilibrium? What should happen by 2080 is that we get ‘pipeline’ warming associated with the ~100ppm increase seen so far + an extra 1.6K transient response to a doubling (increase to 780ppm).
For comparison, the IPCC A2 scenario reaches about 660ppm by 2080. The A2 scenario includes other long-lived GHGs, so overall forcing is probably roughly similar to 780ppm CO2. The IPCC model ensemble has a transient response of around 1.6K too, yet the ensemble-mean projection is for about 2.5K of warming from 2000.
‘Personally, I think that the transient climate sensitivity is the single most important quantity wrt climate change for public policy purposes.’
Transient sensitivity is a useful metric but I would think it far more important for policymakers to understand that there could be a large amount of warming waiting to occur after we manage to stabilise CO2 concentration. Hence it is incomplete without also knowing the equilibrium response. Another nugget of knowledge that would be very useful for policymakers would be the length of time it takes for equilibrium to be achieved. I don’t see this discussed very often.
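Paul S's point about "pipeline" warming can be illustrated with a one-box energy balance model (every number below is an illustrative assumption, not a value from the papers discussed): ramp the forcing to its 2xCO2 value over 70 years and compare the warming realized at that moment with the eventual equilibrium warming.

```python
# Minimal one-box energy balance model; all parameter values
# are illustrative assumptions, not from any paper discussed here.
lam = 1.2            # feedback parameter, W m^-2 K^-1 (assumed)
tau = 30.0           # response e-folding time, years (assumed)
C = lam * tau        # implied effective heat capacity in these units
F2x = 3.7            # forcing for doubled CO2, W m^-2
dt = 0.01            # time step, years

T, t = 0.0, 0.0
while t < 70.0:                       # forcing ramps linearly to F2x over 70 yr
    F = F2x * min(t / 70.0, 1.0)
    T += dt * (F - lam * T) / C       # C dT/dt = F - lam * T
    t += dt

print(T)             # transient warming at the moment of doubling
print(F2x / lam)     # equilibrium warming; the difference is "in the pipeline"
```

With these assumed numbers roughly 40% of the equilibrium warming is still unrealized when the doubling is reached, which is why a transient sensitivity alone understates the long-run commitment.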
Martin Vermeer says
#53
> They’re pdfs: the areas under the curves are 1, not the peaks of the curves.
Precisely. That’s why I wondered how Meow could be so sure… the eye is not very good at integration.
Hank Roberts says
Martin, did you email them a pointer to the comments on errors in the review draft? Email is at the bottom of that page you linked to.
Meow says
@56:
You’re right about that. Visually I had estimated each curve’s area to be significantly > 1, but having measured the blue curve’s area in pixels, I find it’s ~1, so I withdraw that criticism.
Meow says
@51:
But they’re estimating the uncertainty in the 2030 transient sensitivity by assimilating the synthetic 2008-2030 “temperature record”, which appears, in turn, to be derived from the 2008 transient sensitivity. How can that procedure possibly yield a 2030 TCS with a smaller uncertainty than that of the 2008 TCS?
Maya says
“Mann’s piece has collected 170 “recommends” so far ”
281 with the one I added. You’re right, in the grand scheme of things, it’s not important, but if even one person notices, and goes “hmmm” and starts to read to find out more about the science of this whole global warming thing, and maybe clicks on one of those links at the bottom … then maybe it’s one more person who’ll understand, who’ll care, who’ll try to make a difference.
Septic Matthew says
55, PaulS,
Yes. The other really important quantities are (1) the equilibrium climate sensitivity; and (2) how long it takes to achieve 99% of equilibrium after it gets halfway there.
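For a single-exponential (one-box) response, that "99% after it gets halfway" interval is a fixed multiple of the e-folding time tau, whatever the forcing (a sketch; real models have multiple timescales, notably the deep ocean's):

```python
import math

# one-box response: T(t) = T_eq * (1 - exp(-t / tau))
tau = 1.0                            # e-folding time of the response
t_half = tau * math.log(2.0)         # time to reach 50% of equilibrium
t_99 = tau * math.log(100.0)         # time to reach 99% of equilibrium
print(t_99 - t_half)                 # tau * ln(50), about 3.91 * tau
```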
59, Meow: How can that procedure possibly yield a 2030 TCS with a smaller uncertainty than that of the 2008 TCS?
I think they mean to say that, with the temperature records available in 2030, the estimated transient climate sensitivity will then be more precise; their computation with a simulated future illustrates how that can come about. However, I often write something like "In 20 years we'll have a much clearer idea which of the models is most accurate," so you could make a case that I am projecting my own belief onto them.
Septic Matthew says
does anyone else have references to good estimates of the transient climate sensitivity — peer-reviewed papers?
Hank Roberts says
Many (tho’ many are paywalled) from Scholar’s “Related Articles” link:
http://scholar.google.com/scholar?q=related:9JzhT9w6e_UJ:scholar.google.com/&hl=en&as_sdt=0,5&as_ylo=2011
Septic Matthew says
Here’s one:
Schwartz, S.E. Heat capacity, time constant, and sensitivity of Earth’s climate system. J. Geophysical Research, 112, D24S05, doi: 10.1029/2007JD008746, 2007. [BNL-79148-2007-JA]
[Response: Here’s another: Foster, G., Annan, J.D., Schmidt, G.A., Mann, M.E., Comment on “Heat Capacity, Time Constant, and Sensitivity of Earth’s Climate System” by S. E. Schwartz, J. Geophys. Res., 113, D15102, doi: 10.1029/2007JD009373, 2008. -mike]
Septic Matthew says
Hank Roberts,
Thank you.
flxible says
willi@52
one just never knows about unknowns. ;)
Hank Roberts says
http://www.newscientist.com/article/dn20988-arctic-ozone-hole-breaks-all-records.html
— excerpt follows —
… Between 18 and 20 kilometres up, over 80 per cent of the existing ozone was destroyed. “The loss in 2011 was twice that in the two previous record-setting Arctic winters, 1996 and 2005,” says Nathaniel Livesey …
… But we don’t know why the stratosphere stayed cold for so long. “That will be studied for years to come,” Santee says.
Climate change could be partly responsible. That may seem counter-intuitive, but global warming occurs only at the bottom of the atmosphere. “Climate change warms the surface but cools the stratosphere,” Harris explains.
In 2007 the Intergovernmental Panel on Climate Change concluded that “there has been global stratospheric cooling since 1979”. “Whether that is because of climate change is speculation,” Santee says.
— end excerpt —
Livesey: http://science.jpl.nasa.gov/people/Livesey/
Harris: http://www.cei.cam.ac.uk/directory/nrh1000@cam.ac.uk
Santee: http://science.jpl.nasa.gov/people/Santee/
Are there other predictions for, or ways in hindsight to explain, the stratosphere showing a cooling trend for over 30 years?
Martin Vermeer says
No, I think this is about right. The new information is how global mean temperatures will respond to much larger (and properly known!) forcings 2008-2030 according to one model. The response itself cannot be trusted any better than the GCM used, but the propagation of uncertainties may be realistic (or that’s the idea I get).
—
By the way, SM #64 is a good illustration of why one should work with climate scientists when doing climate related work, or risk egg on one’s face :-)
Andy says
ref 26. I’m fairly sure the writer was Harry Harrison and I think the title was “The Invisible Idiot”
Barton Paul Levenson says
Hertzberg is the guy advising left-wing crazy Alexander Cockburn. See:
http://bartonpaullevenson.com/Cockburn.html
[Response: Also mentioned here “Cockburn’s form” – gavin]
Barton Paul Levenson says
Edward,
I tried to sign your petition, but the “SIGN THIS PETITION” button was grayed out and wouldn’t do anything.
ldavidcooke says
RE:23
Hey Pete,
Please do not forget the role that convection plays in part of the heat transport. Rather than just radiative emission, a lot of energy goes into evaporation or sublimation, which is transported to between 2 and 6 km and radiates out from that elevated altitude. The other missing link is the change in the adiabatic wet/dry transition height. Increase the atmospheric heat content and you increase the adiabatic height; complicate that with changes in aerosol CCNs and added wv/condensation UV absorption and re-emission as LW, and you have a wonderful story to share. I am concerned that many get stuck in the first chapter or chorus and miss the rest…
Cheers!
Dave Cooke
Peter Backes says
Geoengineering raises its head again:
http://www.nytimes.com/2011/10/04/science/earth/04climate.html
Significant quote:
Jane Long, an associate director of the Lawrence Livermore National Laboratory and the panel’s co-chairwoman, said that by spewing greenhouse gases into the atmosphere, human activity was already engaged in climate modification. “We are doing it accidentally, but the Earth doesn’t know that,” she said, adding, “Going forward in ignorance is not an option.”
Richard Hawes says
Richard Hawes @ 26
“The Monkey Wrench” Gordon R. Dickson in “The Penguin Science Fiction Omnibus”. Aldiss, B. ed 1973
I bought my copy new in 1987 so good second hand numbers should be available.
Thanks Darv …. this story has stuck with me a long long time, especially dealing with some of the trogs and cornucopians in the oil patch.
Thomas says
Hank @67.
There is a short, simple explanation for stratospheric cooling. The upper atmosphere obtains heat input from two sources: solar UV and upgoing longwave from below. It loses heat via longwave emission. The solar UV is (to first order) unaffected by the CO2 concentration, and there is an excess of IR emission over absorption, since the upper atmosphere lies above the level from which the planet effectively radiates, so increasing the IR opacity increases the cooling efficiency via LW emission. So the upper atmosphere should cool with increasing concentration of greenhouse gases. Of course subtle changes in chemistry and/or circulation could complicate matters, but that's it in a nutshell.
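Thomas's nutshell can be put in numbers. In the simplest "skin layer" picture, a thin top layer absorbs the upwelling OLR but emits from both its faces, so sigma * T_skin^4 = OLR / 2 (a textbook sketch, not a calculation from any paper under discussion):

```python
sigma = 5.67e-8                         # Stefan-Boltzmann constant, W m^-2 K^-4
OLR = 240.0                             # rough global-mean outgoing LW, W m^-2

T_eff = (OLR / sigma) ** 0.25           # effective emission temperature, ~255 K
T_skin = (OLR / (2 * sigma)) ** 0.25    # skin temperature: 255 / 2**0.25, ~214 K
print(T_eff, T_skin)
```

The skin equilibrates well below the planet's effective temperature, and adding greenhouse gases pushes more of the upper atmosphere toward this cold limit.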
Paul S says
Hank Roberts – ‘Are there other predictions for, or ways in hindsight to explain, the stratosphere showing a cooling trend for over 30 years?‘
Ozone depletion also causes stratospheric cooling (I’m sure there must be a RealClimate post on this). Temperature trends in the stratosphere have been flat for about 15 years now, perhaps partly due to a small recent Ozone recovery(?).
I found this recent paper a few weeks ago (Forster et many als. 2011). It's an assessment of CCM (chemistry-climate model) performance against stratospheric trends, but it obviously discusses the factors forcing the trends along the way.
ldavidcooke says
Hey Dr. Schmidt and Co.,
Is it possible we can revisit Drs. Lin and Chambers’ “how clouds work” again, though just from a vertical RH profile, based on the radiosonde/CALIPSO/CLOUDSAT, ocean versus land, data and not the Dr. Lindzen SST basis? It would be nice to see how this plays into the PSC/MSC formation, increasing drought mechanics and atmospheric heat content without dragging in CCNs. (I know, my fault, I just did not understand the mechanics at the time and jumped on a Colorado State grad student’s paper without thinking things through.)
Cheers!
Dave Cooke
Hank Roberts says
> SM
> … In response to a paper that is in press you go
> to a blog?
Nope. I looked for mention and noted that it’s already a hot item in the discussions among the septic crowd.
> You did much the same when you decided not to read
> the 110 pp of AOAS that I referenced. Didn’t you?
Nope.
But thanks for asking.
Always happy to disabuse you. (wry grin)
I think we’re getting somewhere.
Aside — email addresses @princeton.edu given for the authors of the Padilla ‘in press’ paper don’t work; I’d hoped the author could be invited to look in here.
Septic Matthew says
64, mike, inline. Thank you much. It’s high on my current reading list. I also got some papers from Stephen E. Schwartz, cited inline.
Here is an introduction to the method used by Padilla et al. http://www.image.ucar.edu/~nychka/manuscripts/nychka_anderson2.pdf
The book that is cited by them has a price tag of $203, so I must skip it for now.
68, Martin Vermeer: No, I think this is about right.
I hope the “no” refers to my “projection” comment, and the “about right” to how the uncertainty declines with time.
You are right about statisticians working with climate scientists. Dr. Douglas Nychka at NCAR is an example. Statistics (and statisticians) grows by addressing new challenges in fields that statisticians have not studied up til then. I hope that this topic (the transient climate sensitivity) stimulates some interest at the 2012 Joint Statistical Meetings in San Diego: http://www.amstat.org/meetings/jsm/2012/index.cfm
“Statistics: Growing to serve a data-dependent society.”
I also hope that Padilla et al will expand their exposition and submit it to the Annals of Applied Statistics, as McShane and Wyner did with theirs.
Thanks to all for your help.
Mal Adapted says
Los Alamos National Laboratory is hosting the Third Santa Fe Conference on Global and Regional Climate Change Oct. 31 thru Nov. 4. A lot of good science has come out of LANL, but the conference program is dismaying. I’m not familiar with many of the names on it, but I do know a few of them, e.g. Lindzen, Singer and Monckton! What can the conference organizers be thinking?
[Response: The organizer is Peter Chylek, and I have no idea. – gavin]
Patrick 027 says
Re 75 Thomas, 67 Hank Roberts
Stratospheric cooling more here:
http://scienceblogs.com/stoat/2011/09/why_does_the_stratosphere_cool.php
The goal was a brief explanation. My own attempts there pretty much failed at being brief. But maybe another go at it:
1. First distinguish between transient and full equilibrium responses (with only the Planck response; add in Charney feedbacks and then non-Charney feedbacks later):
If a sudden jump in greenhouse gas concentration occurs, you can have cooling in the upper atmosphere that disappears later (assuming heat capacity is distributed in a sufficient manner?). For example, in a grey-gas case with no direct solar heating above some height z (assuming z is above the tropopause and approximating the upper atmosphere as being in pure radiative equilibrium), the net LW flux above z must be constant with height when in equilibrium and equal to the OLR at TOA (outgoing longwave radiation at the (effective) top of the atmosphere). Increasing LW opacity will at first reduce OLR. The skin layer’s temperature will fall in response, coming to equilibrium with OLR. But the radiative imbalance causes heat to accumulate below. Eventually a full equilibrium is achieved when OLR is restored, and the skin temperature must be restored as well.
2. But if opacity is only increased in some bands and not others, the restored OLR can have a different spectral distribution than before. Depending on how the spectrum of atmospheric opacity has changed, some may be displaced from the bands that exert greatest influence on the skin layer, and thus result in a persistent cooling near TOA. On the other hand, if optical thickness is increased in an atmospheric window (assuming surface emissivity is high enough in that window, to generalize this more), this tends to warm the skin layer by intercepting OLR that is originating in warmer places.
3. Other effects of changing the spectrum can occur due to the shape of the Planck function over the spectrum and over temperature. Consider whether the lapse rate, in terms of the Planck function, is convex (tends to cause net cooling) or concave (tends to cause net warming), in particular on the distance scale of moderate opacity (~ mean free path of photons, maybe give or take?); this will vary over frequency, and even in a grey gas one may go from convex to concave at some point in the spectrum. In pure LW radiative equilibrium, net warming and net cooling in different parts of the LW portion of the spectrum must balance; otherwise the imbalance will balance solar heating and/or convective heating/cooling (such as at the surface or in the troposphere, or regionally/seasonally/etc. in the stratosphere due to convection or the ozone layer). The skin Planck function is half of OLR in the absence of direct solar heating or other complications; the skin temperature may range from half the brightness temperature of the OLR to near that same brightness temperature, depending on where in the spectrum it is determined.
4. If there is direct solar heating sufficiently near TOA (how near depends on opacity), adding LW opacity thins the skin layer, removing solar heating from it and bringing the equilibrium skin-layer temperature closer to the pure LW equilibrium value.
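Patrick's point 1 (the equilibrium skin temperature is pinned by the restored OLR while deeper levels warm) shows up in the standard grey two-stream radiative-equilibrium profile, sigma * T^4(tau) = (OLR / 2) * (1 + 3 * tau / 2), with tau measured down from TOA (a textbook sketch under the same grey-gas assumptions, not a result from any paper discussed here):

```python
sigma = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
OLR = 240.0          # W m^-2, held fixed at its equilibrium value

def T(tau):
    """Grey two-stream radiative-equilibrium temperature at optical depth tau."""
    return (OLR * (1.0 + 1.5 * tau) / (2.0 * sigma)) ** 0.25

print(T(0.0))            # skin temperature: depends on OLR only
print(T(1.0), T(2.0))    # larger total opacity warms the deeper levels
```

Doubling the column optical depth moves every level to a larger tau, hence a warmer equilibrium temperature, while T(0) is unchanged: exactly the "skin temperature must be restored" statement.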
Hunt Janin says
I need a professional-quality color photo for use on the front cover of my coauthored book, “Rising Sea Levels.”
Since from a photographic point of view, sea level rise is not a good subject (it is much too slow), I’m thinking of using a dramatic photo of a storm surge hitting the shore.
Anything WRONG with this idea?
[Response: Hmm…. There is an implicit attribution which is not what you are trying to convey (I think). However, there are photos of impacts, and in particular increased erosion, that might convey what you want more directly. I’m thinking of images of Shishmaref falling into the sea, or abandoned houses on the Carolina barrier islands. Gary Braasch might be your man. – gavin]
dhogaza says
Hunt, Gavin’s suggestion of Gary Braasch is an excellent one. Here’s a link to the “climate change” category on his website.
chris says
Mal Adapted (re @80)
As someone who researches in rather non-controversial arenas (Mol. Biol./Medical Biophys.) unrelated to climate science, I am routinely flabbergasted by some of the (wilfully) dismal rubbish that is occasionally published in climate science presumably in pursuit of dreary agendas.
One area that seems a focus for some of this stuff is the insinuation of “evidence” for low climate sensitivity. As an outsider I tend to take papers rather for granted unless something stands out as being patently dodgy. There’s no escaping that fact that Petr Chylek’s effort to insinuate low climate sensitivity by ludicrous datapoint selection of ice core temperature proxies and dust levels in his 2008 GRL paper (with U. Lohmann) is a deeply flawed analysis (see, for example, the comment by Annan and Hargreaves Clim. Past, 5, 143–145, 2009; http://www.clim-past.net/5/143/2009/cp-5-143-2009.pdf ). The nature of this particular flaw is rather similar to that which Lindzen and Choi made in their insinuation of low climate sensitivity (negative feedback) from a similarly astonishing selection of data points comparing TOA radiative flux in response to surface temperature variation. I don’t think it’s unreasonable to point out that Dr. Spencer has also made some rather flawed analyses in pursuit of negative feedbacks/low climate sensitivity (as have a very small number of other authors). I don’t see why we should have to “pussy-foot” around this subject – the papers (and their rebuttals) are there in cold black and white, and those authors wrote them.
So perhaps Dr. Chylek has some sympathy for that particular point of view (the pursuit of low climate sensitivity), which is shared by some of his meeting participants you mentioned. Incidentally, I don’t think this is a problem for the science. One might say that the assumptions underlying our knowledge base should be tested towards destruction, and we might even be reassured by the fact that some individuals that do their damnest to pursue the contrary point of view, rather, in fact, tend to reinforce the pukka science. Perhaps we might be saddened by the observation that their efforts are so puny, scientifically-speaking.
Anyway, that might be relevant to the question of why Dr. Chylek has organized a meeting with some odd participants.
M says
Mal Adapted: Add to your list Easterbrook, Scafetta, Morner, Loehle… plus Schwartz on climate sensitivity and Garrett on his wacky thermodynamic economy theory… they’re not all as bad as Singer and Monckton, but not people I associate with high quality science. And probably another half-dozen names there are people who might not be contrarians themselves, but are often quoted by the Moranos of the world nonetheless.
And yet, there are a number of top-notch names there too – I have to wonder what they’ll think when they have to sit through the junk from the above folks…
Edward Greisch says
71 BPL Thanks.
Edward Greisch says
DotEarth http://community.nytimes.com/comments/dotearth.blogs.nytimes.com/2011/10/02/a-map-of-organized-climate-change-denial
A Map of Organized Climate Change Denial
has over 130 comments. Links to a very good very expensive book on the sociology of GW.
See also:
“Bloomberg’s Bombshell Report on “The Koch Method”: How to Steal, Cheat and Lie Your Way to the Top” on
http://thinkprogress.org/romm/2011/10/03/334001/bloomberg-the-koch-method/
Patrick 027 says
Re my last comment:
1. “Increasing LW opacity will at first reduce OLR.” – assuming no direct solar heating of the air above some height, or otherwise assuming a distribution such that temperature still declines with height, at least on the vertical distance scale that is similar to a unit of optical thickness, at least within a sufficient optical depth from TOA down into the atmosphere.
2. “On the other hand, if optical thickness is increased in an atmospheric window (assuming surface emissivity is high enough in that window, to generalize this more), this tends to warm the skin layer by intercepting OLR that is originating in warmer places.”
– generalizes to any atmospheric window down to a sufficiently warm level – for example, a humid or cloudy air mass (provided the clouds are not too high in the troposphere, or else that the lapse rate is still positive going into the stratosphere, etc.) would optically act like an elevated surface that may be cooler (unless there’s an inversion under it) than the actual surface but still supply a relatively ‘warm’ upward LW flux that the air above must adjust to if it has optical thickness (of the emission/absorption type, as opposed to pure scattering) in that band.
David Young says
I hope someone will help me again, because after significant research my numerical analysis questions just won’t go away. I was pointed to a video, from the Cambridge University Isaac Newton Institute, of P. Williams discussing time-stepping errors, and he shows some rather alarming things for simple models: namely, that the apparently standard leapfrog scheme with Robert-Asselin filter is pretty dissipative and damps the real oscillations in the system. He studied a few explicit methods of different orders and seemed to conclude that higher order was good and that the problems with leapfrog are not hard to fix. Then I went back to my graduate school text and found that “The leapfrog scheme is also prone to nonlinear instability.” Comments?
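The damping Williams discusses is easy to reproduce on the standard test problem dy/dt = i*omega*y, whose exact solution has constant amplitude (a minimal sketch with made-up parameter values; nu is the Robert-Asselin filter coefficient):

```python
import cmath

def leapfrog_ra(omega=1.0, dt=0.1, nsteps=10000, nu=0.01):
    """Integrate dy/dt = i*omega*y with leapfrog + Robert-Asselin filter.

    The exact solution has |y| = 1 for all time, so any decay in |y|
    is numerical damping introduced by the scheme/filter.
    """
    f = lambda y: 1j * omega * y
    y_prev = 1.0 + 0j                     # filtered value at step n-1
    y_curr = cmath.exp(1j * omega * dt)   # exact value at step n (clean start-up)
    for _ in range(nsteps):
        y_next = y_prev + 2.0 * dt * f(y_curr)                   # leapfrog step
        y_filt = y_curr + nu * (y_prev - 2.0 * y_curr + y_next)  # RA filter
        y_prev, y_curr = y_filt, y_next
    return abs(y_curr)

print(leapfrog_ra(nu=0.0))    # stays near 1: unfiltered leapfrog is neutral
print(leapfrog_ra(nu=0.01))   # well below 1: the filter damps the physical mode
```

The unfiltered scheme is neutrally stable for omega*dt < 1, which is also why its undamped computational mode can go nonlinearly unstable, as your textbook says; the filter suppresses that mode but, as shown above, bleeds amplitude from the physical one too.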
Hank Roberts says
For anyone wondering about SM’s mention of AOAS, here’s what that’s about:
https://www.realclimate.org/index.php/archives/2010/12/responses-to-mcshane-and-wyner/
and
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoas/1300715166
Edward Greisch says
71 BPL: A whitehouse.gov account is required to sign Petitions. That may be the problem. You have to create an account. It is not as user friendly as a Macintosh.
David B. Benson says
Barton Paul Levenson @71 — First register; unfortunately that causes a login. So logout. Now start over and log in. Then you can sign.
Remember that it’s good enuf for government work.
Septic Matthew says
90, David Young:
It is nearly always possible to come up with an interesting and relevant example that defeats each scheme.
Sorry, but you always have to experiment a lot with problems similar to yours and with different schemes. I always use an implicit scheme with stepsize adjustment and error correction, and I nearly always test the final results against the simplest Euler’s method with diminishing step sizes. It takes a long time, and with large enough systems it can be discouraging.
With these large, meaning high-dimensional, climate systems run for long times, I am glad that I am not the one responsible for claiming that they achieve what they aim for.
I am not an expert — I merely have some good books and have spent time in the library stacks with other good books. Every differential equation solver can be defeated with a problem reasonably similar to the problem that you are working on. As far as I can tell.
Martin Vermeer says
SM #79, mike is pulling your leg, making fun of you for unearthing, with unerring ‘skepticism’, the one paper that is seriously flawed… he is the fourth author on the comment he links to, which demolishes the Schwartz paper. But by all means, read both ;-)
Off topic, why am I reminded of some women serially finding men that are bad for them? Something in the water?
ldavidcooke says
RE:90
Hey David,
I do not pretend to know anything about models; however, to me the biggest issue is the weight of the modifiers of the initial parameters. You cannot simply plug in a stepping value and not have the system quickly detach from reality. At one point in a former life, doing capacity planning, we built a model that used the past, and we derived a weighting factor from projected sales; of course that was a fiasco. Then we discovered we could apply a range-bounded random value, with the weight adjusted by the change in a leading economic indicator and the industry projections of the customer. In that manner we were able to stay within 1% of variability over a 6-year period with a bi-monthly cycle.
Given this, it suggests that maybe we have a disconnect in model creation. It is important that the initial values are the correct values and that the interdependencies are well defined. The next step is to examine the range of the modifiers and to define the range based on a weighting with 1-4 normalized standard deviations. The issue is to apply weight to the range of the modifiers so that they change in step with the change in the results of the prior step. I.e., if the value of a modifier from a previous step caused one of the calculated steps to demonstrate a large change, there needs to be a feedback to the bounded-range modifier to deselect a 4th and possibly 3rd standard deviation in the next step, though in nature it has been demonstrated that repetitive outliers can exist, though eventually they terminate the trend and reverse it.
In short, you will have to re-initialize each time. The actual problem is not so much the model’s results; rather, the modifying factors’ static values are not real world. Yes, they simplify the model and help point a direction; but in the end they will not track real-world events for more than three to maybe ten cycles, depending on your resolution and interrelationship description. As to leapfrogging with a linear modifying factor, this is like starting over and plugging in an inaccurate modifying factor of greater amplitude. If you attempt a coarse or low-resolution run, you still must run a high-resolution modifying factor; otherwise all you are doing is amplifying the error.
As to what this has to do with your mechanics and formula-adjustment filters, I can be no help. However, I can tell you that tight model tracking is rocket science and should not be left up to the student or amateur. To devise a model with a high number of variables, with a high range of values, such that the result appears to be chaotic, takes more man-hours and expertise than any simplifying mathematical tool kit can account for.
Cheers!
Dave Cooke
Hunt Janin says
Re sea level rise photo (83 & 84 above):
Many thanks, Gavin and dhogaza. I’m emailing Gary Braasch to see if he has such a photo.
Best,
Hunt
Hank Roberts says
> Padilla
already Cited by 1
Hank Roberts says
Oops. “Padilla Cited by 1”
link broke when posted; Scholar pointed to:
http://www.springerlink.com/index/M0825V6218725240.pdf
or http://www.princeton.edu/~gkv/papers/Xie_Vallis11.pdf
The passive and active nature of ocean heat uptake in idealized climate change experiments [PDF] from princeton.edu — P Xie, GK Vallis – Climate Dynamics, 2011 DOI 10.1007/s00382-011-1063-8
It begins:
“Abstract: The influence of ocean circulation changes on heat uptake is explored using a simply-configured primitive equation ocean model resembling a very idealized Atlantic Ocean. We focus on the relative importance of the redistribution of the existing heat …”
and ends
“… Evidently, the warming occurs first in the mixed layer and then, on the multi-decadal timescale, in the main thermocline and southern ocean (Fig. 3), with a potential localized cooling in high northern latitudes due to a weakening of the MOC, with a significant warming of the abyss only on century- to multi-century timescales. Note that a warming of the thermocline only, with a depth of 1 km, produces a sea-level rise of about 20 cm per degree, and a warming of the entire water column, with a lower average coefficient of thermal expansion but greater volume, translates to a rise of about 50 cm per degree. Thus, understanding how the heat uptake reaches the deep ocean, and on what timescales, is an important problem that we need to better understand….”
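The quoted sea-level numbers follow from thermosteric expansion, dSL ≈ alpha * H * dT (a back-of-envelope sketch; the expansion coefficients below are representative values I have assumed, not taken from the paper):

```python
# thermosteric sea-level rise per degree of warming: dSL = alpha * H * dT
alpha_thermocline = 2.0e-4   # 1/K, warm upper-ocean water (assumed)
alpha_column = 1.25e-4       # 1/K, colder whole-column average (assumed)

print(alpha_thermocline * 1000.0 * 1.0)   # ~0.2 m per degree, 1 km thermocline
print(alpha_column * 4000.0 * 1.0)        # ~0.5 m per degree, full ~4 km column
```

The whole-column number is larger despite the smaller expansion coefficient simply because four times as much water is warming, which is the paper's point about the timescale of deep-ocean heat uptake mattering for sea level.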