Simple question: We hear that there is a new ‘ozone hole’ over the Arctic due to a cooling stratosphere, and a cooling stratosphere is (as I understand it) a classic ‘fingerprint’ of an enhanced greenhouse effect, so would it be reasonable to attribute this ozone hole to rising anthropogenic greenhouse gas emissions?
His presentations have long been popular in, er, uncertain circles.
That’s hard to understand unless all they care about is short term, because
his conclusion seems to always be along the lines
‘We ain’t seen nothin’ YET but for sure it is a’comin’ …’
Today I present an analysis of a 2009 article by Yasmin Said and Edward Wegman of George Mason University. “Roadmap for Optimization” was published in the inaugural edition of WIREs Comp Stats, one of a new family of Wiley publications conceived as a “serial encyclopedia”.
…
As the title implies, the article was meant to provide a broad overview of the mathematical optimization and set the stage for subsequent articles detailing various optimization techniques. However my analysis, entitled Suboptimal Scholarship: Antecedents of Said and Wegman 2009, demonstrates the highly problematic scholarship of the “Roadmap” article.
* No fewer than 15 likely online antecedent sources, all unattributed, have been identified, including 13 articles from Wikipedia and two others from Prof. Tom Ferguson and Wolfram MathWorld.
* Numerous errors have been identified, apparently arising from mistranscription, faulty rewording, or omission of key information.
* The scanty list of references appears to have been “carried along” from the unattributed antecedents; thus, these references may well constitute false citations.
ldavidcookesays
Re:99
Hey Hank,
Concur, note that the increase in volume should be constrained within the thermocline. Hence, you should have to distribute the expansion factor. Though the water column covering the polar basins should be uniform as they fill to feed the THC. I guess the question is what happens if you fill the basins faster then they empty? Does the THC rate increase or does the spill over inflate the area above the whole of the abysmal plain?
Cheers!
Dave
Harmensays
“I need a professional-quality color photo for use on the front cover of my coauthored book, “Rising Sea Levels.””
Given the Nobel news, brace yourselves for howls of outrage from WUWT at the Committee’s failure to declare Viscount Monckton and Lubos Motl co-Laureates in Peace, Physics & Medicine :
For curing Bright’s Syndrome by using cosmic ray neutrinos to cause faster than light climate change in 11 dimensions.
wilisays
Picking up on the topic of the enormous and unexpected ozone hole over the Arctic discussed by hank, thomas, patric, icarus, etc:
Is there any chance that the dramatic increases in methane output from Siberian Continental Shelf that Shakhova and others have been reporting on is related to this?
–methane is a powerful ghg, 105 times the global warming potential of CO2, so one expected result of a massive release would be that the stratosphere would cool as the troposphere warmed
–being lighter than other common atmospheric gasses, much of the methane would rise to the stratosphere, and there, react with the ozone, destroying it (and, in the process, producing CO2 and H2O, both ghg’s in their own right–and am I remembering wrong, or did recent studies not suggest that increases of stratospheric water vapor was a more powerful driver of gw than had previously been assumed?)
–all that lighter-than-air methane rising rapidly would, I should think, increase the vortex that was keeping in all that cold.
Is anyone else making these connections, or am I way off base. If the latter, please let me know where I erred.
–a large mass of rising gas would presumably add t
David Youngsays
I guess there is no comment from Gavin on the startling evidence (both empirical and theoretical) of large numerical errors in climate models. Sceptic Matthew, its true that many schemes can be defeated by challenging problems. That’s the point of using modern methods that are more robust.
What is most disturbing about the Williams material is that if a dissipative scheme is at the heart of most climate models, then the empirically observed fact that climate models seem to converge in average properties or patterns could be simply an artifact of the numerical scheme.
It is shocking to me that such studies have not been done until recently. In any other field, people would have insisted on it.
[Response: What is shocking is how willing you are to believe the worst without any actual knowledge of the subject. Methods can always be improved (and are being), but the biggest problem with this genre of argument – I call it the ‘a priori climate models can’t work’ argument – is that it is trivially refuted by the fact that climate models do actually work – they match climatologies, seasonality, response to volcanoes, ozone holes, ice ages, dam bursts, ENSO events etc. etc. pretty well (albeit not perfectly, and with clear systematic problems that remain foci of much research). One shouldn’t be complacent, but people like yourself who come in with a preconceived notion that climate modelling is fatally flawed end up revealing far more about their preconceptions than they do about climate models. – gavin]
David Youngsays
Gavin, finally you respond. I have no preconceived notions. You are reading my mind, a rather prejudiced way to do things. I merely know from 30 years experience that numerical errors are a common source of large errors in models of everything from aircraft flow problems, to climate models. Perhaps Williams is likewise the victim of preconceived notions. My challenge to you ( which judging from you last post you are not taking seriously) is to actually do what people in most fields do, namely, do Paul Williams’ checks on your model. If its fine, its fine. But your lack or curiosity is puzzling. I think Bob Richtmyer would be concerned for your reputation.
By the way, the way errors can vitiate model results is documented in some of the references I cited in previous posts. Perhaps you should look at some of them. They are indeed mainstream mathematics where there are real standards of proof.
[Response: “Finally”? Sorry, but I have both a job and a life, and playing games with you is not my no. 1 priority. I do find it amusing that you conflate the fact that I am not hanging on every one of your wise words with a lack of respect for Ulam, Richtmyer and Lorenz (and why not add Feynman and Hawking just for fun as well?). Whether you appreciate it or not, checks and improvements of all sorts are ongoing at all climate model development centers (oh, look, just like other fields!), and that occurs completely independently of anything you suggest, or I do. Williams stuff does indeed look interesting – but expecting me to suddenly drop everything I am doing and move into a new sub-domain of climate modeling in less than 24 hrs because you are ‘shocked’ at the state of the field is, shall we say, a tad optimistic. – gavin]
David Youngsays
Gavin, Have you viewed the P. Williams presentation? I’m giving you a fair shot here to look into it. Your response is so typical of engineers and scientists to mathematical theory, namely, that our models “work.” Where is the proud history of modeling in engineering? Von Neuman, Ulam, even Lorentz are standard we should aspire to, not denigrate
Brian Dodgesays
“What kind of sea microorganism activity would be likely to create massive quantities of methane?” wili — 3 Oct 2011 @ 8:51 AM
re: 108
David Young:
So, I see Williams conferences including several at Fall AGU. (It would be nice, if someone actually cites what they’ve seen, instead of a vague reference that makes people waste time looking for it.) I’ll try to attend one of those sessions and see what live expert reaction is … since watching a video does not provide such feedback.
“… 6. Transient response to the well-mixed greenhouse gases
Mar 28, 2011 – Global mean surface air warming due to well-mixed greenhouse gases … gases (WMGGs: essentially carbon dioxide, methane, nitrous oxide, …
How long must we tolerate those who are either willfully ignorant or actively trolling for emotional reactions to obviously faulty data? Isn’t that what the borehole is for?
David Youngsays
Mal Adapted: You are not well adapted on this one. Los Alamos has a reputation for scientific independence and being difficult for administrators to control. That’s probably a good thing. Science is advanced by allowing all ideas to be heard and Los Alamos has a stellar history in this regard. I could name the names from the past, but perhaps you already know them (on second thought, perhaps you don’t). Stan Ulam’s autobiography should be required reading for scientists in graduate school, especially climate scientists.
David Youngsays
Jphn, My field is fluid dynamics, i.e., the solution of the Navier-Stokes equatons which govern weather and climate. This is not about application specific experiences but about the mathematics of the system, which is common to all fields. I’ve seen the Gavin response many times. Its understandable but counterproductive and not scientific. It’s basically, “Yes, I know there is a theoretical problem, but my simulations “work”.” I hope Gavin takes note of this: in my experience the next step is that someone like Paul Williams goes off and shows that the current numerics is causing large errors and then people scramble to fix the problem, often claiming that they knew about the problem all along.
[Response: What on earth makes you think I have any objection to people fixing problems that are found? I do it all the time and so do most of my colleagues. “Objecting to your condescension” != “thinking models are perfect”. – gavin]
Hunt @ 84,
The people who made The Hungry Tide ( http://www.imdb.com/title/tt2011296/ ), about the effect of rising sea level on Kiribati, may well have relevant stills.
David Youngsays
Yea, I’m not expecting you to drop everything and fix the problem. It’s just ironic that people continue to insist that model simulations have meaning, or at least imply that. You can look up my publications on the web (Im using my real name because this is a serious scientific discussion). We have developed models that do meet numerical tests for grid convergence. You are tempting me to get out my checkbook and write a check to Heartland Institute or maybe write my congressman asking that climate modeling be monitored by Los Alamos, where there are still a few mathematicians in the theoretical group.
But the main point, and I hope you see this, is that science demands that these things be investigated very carefully. You don’t have to do it yourself, but you I’m sure have a mathematically inclined scientist on your team. You need to assign a sceptical team member to break the models, that’s what I do myself. It is very helpful.
[Response: I agree – and we break the models all the time. Why not simply ask as what we do instead of assuming that we don’t do anything? By the way, LANL does a huge amount of climate modeling in collaboration with NCAR (i.e. http://climate.lanl.gov/) – again, something that is easily asked and answered in contrast to your assumptions. – gavin]
David Youngsays
Gavin, I hope you have the fortitude to post this. I am not saying that you are claiming that “the models are perfect” I am saying that due diligence with regard to issues of numerical consistency and accuracy should be a high priority — a higher priority than publishing more and more papers based on the model results.
So, it’s more important to keep on documenting a tool’s potential usefulness than actually to use it?
Especially during what some see (with evidentiary support) as a crisis for which that tool has a crucial diagnostic role?
Don’t know much at all about the modeling issues raised in this discussion, but that last statement just seems bizarre to me. By all means, keep checking model validity and reliability, but “drop everything?” Really??
ldavidcookesays
Re:121
Hey Dr. Young,
We are talking about a group that is 20yrs into a 50yr project. They have a vast accumulated data set and have been substituting the various discreet processes in for large scale patterns as confirmation is established. As to the ability to account for fluctuations of discreet processes they have improved well over the last 15yrs. At issue is the conversion of the systems from large scale to discreet is difficult to extract from an apparent chaotic system.
As to the main issues with modeling as a whole is the idea that there is one solution set when the influence of a discreet process runs a knife edge which can switch by chance. You can plan for the chance by weighing factors though you cannot predict direction or amplitude. The other issue is the basis of homoginization, in essence the idea that all variables for a given grid block are the same. Smaller grid blocks increase the resolution; but, if you do not account for neighboring influences prior to roll up you miss the range of probability, in essence, damp down the potential change.
As a whole I believe the NCAR, NASA and NOAA teams have done an excellent job with the resources allowed. To me it seems more wrong for criticism when it is clear that additional expertise could compress the development time and effort. Which is better, hobble the horse so it takes all day to go a mile, or judicious put funds and resources in place so that it can reach the goal in 2 min. It really is easy to sit on the sidelines throwing spitballs it takes more (edited) to help lift the load. Care to share your preference or expertise?
Cheers!
Dave Cooke
ldavidcookesays
Hey All,
Now back to the Arctic Ozone hole. As we know an unusual pattern set up prior to the formation. A Blocking High at high latitude formed over the region of the NAD. The shift of the winds dramatically changed the heat/wv flow into the Arctic region last Fall. It appears that as of the second week in Jan. that the Arctic Circulation formed two seperate pools a strong pool over the Northern Eur-Asian region and a weaker one over the NA region. The end result was a very strong vortex with extreme upper atmospheric cooling parked over the Eur-Asian region, both deforming the N. Jet Stream elipse more then prior seasonal monitored deviations and provided a much better ground for tri-nitrics and chlorinated compounds to reduce the Stratospheric Ozone there from the normal 430-460 to 230 Dobson concentration. With an area about the area of Germany virtually devoid of Ozone.
As to root cause, GHG, warm water vapor or aerosols, I do not believe so. As to a high level of low latitude heat content escaping, definitely. At issue remains the question of the drivers of long resident Blocking Highs and Cut-Off Lows which seem to be increasing in frequency and now latitude range.
My best guess will be related to the changes in the flow patterns of both the NH Northern and Southern Jet Streams. That aerosols and GHGs play a part is undeniable. The question is the mechanics.
Mayasays
I thought this was interesting, and didn’t know climate models were the topic of discussion on the open thread until I popped over here to see if anyone else had mentioned this. So, it’s not a commentary on anything, just a “oh look, this is interesting”.
David Young,
OK. Let me get this straight. You are claiming that despite:
1) the overwhelming evidence that the climate is changing,
2) the overwhelming evidence that the change is due to greenhouse gasses; 3) thae fact that GCMs are in no way essential for attribution of the current warming to greenhouse gasses
4) the fact that the small amount of warming we’ve had is already causing serious consequences
that the mere fact that GCMs might have some imperfections is sufficient for you to write a check to the professional liars at Heartland?
When might the Discovery Institute or Jenny McCarthy expect their checks?
t_p_hamiltonsays
“I hope Gavin takes note of this: in my experience the next step is that someone like Paul Williams goes off and shows that the current numerics is causing large errors and then people scramble to fix the problem, often claiming that they knew about the problem all along.”
So according to David Young’s reasoning, people shouldn’t be using Navier-Stokes at all, because you never know when a Paul Williams will show what was thought to be OK wasn’t.
Rob Nichollssays
Are anthropogenic sulphate aerosol emissions rising at the moment (e.g. due to increased coal burning in China)?
If so, how big an effect is the rise in sulphate aerosol emissions thought to be having on global temperatures at the moment? Is there much possibility that sulphate emissions will cause a temporary halt to global temperature rises over the next few years or decades? (I suspect not, but I thought I’d ask).
This question was triggered by a BBC report that I saw recently ( http://www.bbc.co.uk/news/science-environment-14002264 “Global warming lull down to China’s coal growth”) suggesting that a study had concluded that the “lull” in global temperature rise between 1998 and 2008 was due to sulphate aerosols arising from a sharp increase in coal burning in China (a bit like the lull in warming from the 1950s to 70s caused by US and European coal burning).
I’ve only been able to see the abstract of that study, (“Reconciling anthropogenic climate change with observed temperature 1998–2008” by Robert Kaufmann, Heikki Kauppi, Michael Mann [editorial note–this is not RealClimate contributor Michael E. Mann] and James Stock, see http://www.pnas.org/content/108/29/11790.abstract ); the abstract paints a more complex picture than the BBC report. (The abstract mentions the solar cycle and the change from El Nino to La Nina as dominating anthropogenic effects between 1998 to 2008, because CO2 rises were partially offset by the influence of rapid increases in sulphate aerosols from increased coal burning).
Mal Adaptedsays
David Young,
I’m acquainted with much LANL history, and I first heard of Stanislaw Ulam as a schoolboy. Regardless, I don’t claim expertise in nuclear physics, or climate science. Your comments here suggest that you consider yourself expert enough to contend with genuine climate experts like Gavin. Are you aware of the Dunning-Kruger effect?
Anybody want to discuss the Younger Dryas? Didn’t think so.
No matter. I see Thomas Lowell is out there promoting his personal crackpot theory ahead of the GSA meeting this week in Minneapolis. I predict many intellectual meltdowns and the usual heated arguments at this meeting. Yay!
The more the merrier as far as I’m concerned. What’s the difference?
[Response:This? – very unlikely in my opinion. The watershed drainage area for lakes is much greater than the surface available for evaporation, and so for evaporation to control major lake level changes you need to be in really arid conditions (think Lake Chad). Doesn’t seem likely for the boreal regions – even during the ice age termination. – gavin]
>> David Young
>> write my congressman asking that climate
>> modeling be monitored by Los Alamos
answered by:
> LANL does a huge amount of climate modeling
> in collaboration with NCAR (i.e. http://climate.lanl.gov/)
> … easily asked and answered
> in contrast to your assumptions. – gavin]
The exchange between real scientists helps us readers learn a lot.
Thanks.
I was referring to his newswire self promotional article. ‘Not that there’s anything wrong with that’. A little self promotion never hurts, even when you might be wrong. It’s amusing though that we still can’t pin down a simple 9500 cubic kilometer water leak thirteen thousand years ago, when our planet is still hemorrhaging fresh water as we speak. Luckily though, we are getting it back in droves in the form of increased humidity.
I also find it amusing that nobody comments on the almost obvious climate inversion going on in the Midwest – massive almost permanent morning dews far beyond anything I have ever observed, and cloudless high pressure days unlike anything I have observed in my fifty years of weather observing. Hot days, cold nights (almost desert like) along with the increased humidity even in the presence (or in this case absence) of any clouds.
I skimmed the results, you may find something to answer your questions, e.g. this article might help:
http://www.atmos-chem-phys-discuss.net/11/21971/2011/acpd-11-21971-2011.pdf
“Anthropogenic SO2 emissions increased alongside economic development in China at a rate of 12.7%yr−1 from 2000 to 2005. However, under new Chinese government policy, SO2 emissions declined by 3.9 % yr−1 between 2005 and 2009….”
siddsays
Eminent scientist David Young wrote:
“…startling evidence…large numerical errors in climate models…It is
shocling…30 years experience…large errors…climate models…your lack or
curiosity…would be concerned for your reputation…errors can vitiate model results…
real standards of proof…giving you a fair shot here…Your response is so
typical…Von Neuman, Ulam, even Lorentz…My field is fluid dynamics…
Navier-Stokes equatons…seen the Gavin response many times. Its understandable but counterproductive…large errors…people continue to insist that model simulations have
meaning…Im using my real name because this is a serious scientific discussion
…check to Heartland Institute…sceptical team member…fortitude to post this
…due diligence…numerical consistency and accuracy…higher priority than
publishing…”
Dude, can you turn down that whistle ? All the dogs are freaking out…
sidd
Russellsays
Could this be the David Young seen attending a heartland function on the outskirts of Tulsa?
It’s just ironic that people can continue to insist that relativity has meaning. It’s basically, “Yes, I saw the press release about faster-than-light neutrinos, but my GPS ‘works’.”
;-)
Paul Williams, though, seems to be doing interesting and constructive work addressing model uncertainties. He also seems unlikely to be writing any checks to the Heartland Institute (he’s quoted in the Sunday Times as saying he keeps a record of all the climate skeptics’ threats, “partly to provide a list of suspects if I ever disappear, but mainly because it’s funny to read how many times people can call you a ‘caulkhead'”.)
David Young — Whatever imperfection you are troubled about has probably already been resolved (and likely some time ago). The is a best about climate models in “The Discovery of Global Warming” by Spencer Weart: http://www.aip.org/history/climate/index.html
and in more depth in
“A Climate Modelling Primer” by Henderson-Sellers
Introduction to Three-Dimensional Climate Modeling 2nd Edition
Warren M. Washington and Claire Parkinson http://www.palgrave.com/products/title.aspx?PID=270908
with possibly now a book about the history of climate model development. In any case, you could check the code from both NCAR and GISS if you wanted to. [I myself am satisfied that the lesson of Emmy Noether’s (first) Theorem has been learnt for these codes.
Brian Dodgesays
“Robert-Asselin filter is pretty dissipative and damps the real oscillations in the system.”
“…in my experience the next step is that someone like Paul Williams goes off and shows that the current numerics is causing large errors…”
“The effects of the RAWfilter on the climatology and forecast skill of the SPEEDY model,” Javier Amezcua, Eugenia Kalnay, and Paul Williams (the horse’s mouth, as it were)
“In a recent study, Williams (2009) introduced a simple modification to the widely used Robert–Asselin (RA) filter for numerical integration. The main purpose of the Robert–Asselin–Williams (RAW) filter is to avoid the undesired numerical damping of the RA filter and to increase the accuracy.”
“…in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter.”
I wouldn’t necessarily call 25% improvement a correction of ‘large errors’ – YMMV
> dewpoint
You can find papers on trends. I just glanced through scholar. Example:
Theor Appl Climatol (2009) 98:187–195 DOI 10.1007/s00704-008-0094-5
ORIGINAL PAPER
Trends in extremes of temperature, dew point, and precipitation from long instrumental series from central Europe
Published online: 7 February 2009 # Springer-Verlag 2009
“… While precipitation at Potsdam does not show pronounced trends, dew point does exhibit a change from maximum extremes during the 1960s to minimum extremes during the 1970s….”
David Youngsays
The thing that struck me about Williams presentation was the simple test integration where the solution is a sine wave for Y and a cosine for X. The RA result reduces the amplitude by 50% over just a few wave lengths. He picked nu = delta t so that d is third order in delta t. So the formal order of the scheme is maintained at second order. However, the problem here is that the errors accumulate in time, they add up and eventually the signal will disappear. Thus, only in a system where the dynamics are strongly forced all the time will the signal be accurately predicted. You might be lucky in any given problem, but you know when looking at small effects, this kind of thing makes it questionable. Williams’ example using the Lorentz attractor is perhaps an extreme case but shows that choice of delta t can have a big effect on long term simulations, exactly where modelers claim that climate models have the correct patterns. This example shows that numerics can have a strong effect on these patterns. In other words, just having a stable attractor does not guarantee that you can track it accurately. It’s nice to see Williams making these points.
There are more things like this: Keyes and Knoll in Journal of computational physics (around 2002-2005 somewhere) is excellent on the advantages of implicit methods in large simulation codes. Williams considers only explicit schemes. Advection is very accurately discretized by the Streamwise Upwind Petrov Galerkin method. Finite differences are not as good especially on grids that are not very uniform. Just look for T J R Hughes for SUPG.
ldavidcookesays
Hey Dr. Young,
I guess many of us are missing the point, not to discredit the works of Dr. Williams; however, wrt the scheme of things. For instance it is not unlikely for small feedback loops to occur between variables and spread. However, as the cycle periods are not in phase they naturally drift out of sync. As long as your steps are in sync with the modeled variable you would be fine. The problem is trying to find the lowest common denominator when combining multiple variables. Sure you can try to bound the systems for smaller cycles; however, then it is no longer representative of the object of the model. Put differently the processes do not match so the result becomes artificial.
As to gridding, my experience goes to the former CLI Rembrant Video codec in your two confrence rooms that were installed in the ’90s. The conversion of 15000 pixels or 90mb of digitized data to 1.5mb of DS-1 transport. It was accomplised via FTs to compress the intensity and color/gama data Into large data blocks. After the initial image was created persistence maintained that which had not changed or reduced the priority in the stack, that which changed little. The bulk of the data traffic was that which changed frequently. (Maybe it is time to entertain graphic/optical modeling and let the resultant be reflected by the color temperature.)
It seems that most of our models have to change in lockstep and if we attempt to change all variables at some prorated rate we have removed natural varibility, where we try to correct by injecting seasonality via a Holt-Winters curve filter.
So I believe the real question is what is the purpose of introducing various smoothing techniques when the process that they are modeling is not smooth?
Cheers!
Dave Cooke
(PS: Re:Kibitzers Hank, The DOE project is unlikely focused on NCAR support, most often it is DoD. I believe it is CPU cycle time where NCAR/UCAR gets its greatest LANL or LLNL support, seconded by shared data.)
Dan Schillereffsays
Indirectly climate-related, and certainly not as climate-related as the on-going modelling discussion but this graphic is perhaps of interest showing considerable recent (past 48h) seismic activity beneath Katla. Is this a warning for a potentially global-scale eruption?
You’re asking the climatologists? (grin)
You might better ask that question at the site you point to.
They have publications, including one on forecasting eruptions: “Long-term and short-term earthquake warnings based on seismic information in the SISZ (PDF)”
One way to phrase the question is, does a 48-hour period give useful information about any trend, or do you need a longer sample to distinguish a change from the natural variation. That’s a pretty basic statistics question for any data set — how much data do you need depends on how noisy (how much natural variation).
Dave Cook, I don’t quite understand your post. The idea is that in the limit of finer grid and smaller time steps, the numerical error goes to zero and so all the scales are tracked correctly. There is a relatively complete theory for this at least for some types of problems, for example analysis of elastic structures, where things are elliptic. The theory is the basis for the numerical solution of these problems. It gets harder as you go to hyperbolic systems and then to the high Reynolds’ number Navier-Stokes. In any case, there are some mathematical things that can help. I mentioned some of them in the previous post. If you can access journal of computational physics, the survey paper I mentioned is very good.
There is a great thread on Climate etc. about chaos, ergodicity and attractors that goes to the issues for Navier-Stokes, at least the continuous theory. We still need a comprehensive discrete numerical theory for this problem. The only people I know of who work on this are the CFD people with a more mathematical bent. I mentioned Hughes before. There is a lot of work on Discontinuous Galerkin methods and a pretty sound mathematical basis for it. The literature on this is vast, but these methods are a lot better than finite differences, which it sounds to me are the basis of all the climate models. These methods are 50 years old and reworking these codes could p;ay very big dividends. That’s my only point.
The problem here on Real Climate is that to rise above the noise of treating models as black boxes that are the basis of a thousand papers, you have to raise your profile enough to arouse the wolves (not Schmidt, but some of those who don’t use their real names). Actually, the most snide of the posts has been removed (that’s gratifying. Thanks, Gavin.) I’ve experienced this before, people are always defensive when someone tries to ask them to improve numerics. But I detect that climate science (at least the subset represented here) is more defensive than usual. I understand why, they have been assaulted on a professional and personal level often by ignorant people with a political agenda. Once a war gets started, it’s hard to get back to normal.
Anyway, my only point is that things could almost certainly be improved dramatically by a careful examination of the numerical PDE literature from 1980 to present. I would suggest that the modelers spend some time with experts already in Federal employ. Phil Collela at Berkley or David Keyes from Columbia are two who are approachable.
A little off topic again, if that is possible on an open thread, but Thomas Lowell certainly has pushed a lot of buttons with his public relations effort – now I see WUWT has picked up the feed in order to bash the status quo a bit on the uncertainties. Is there anyone else out there who thinks this is perhaps not the optimum manner that science ought to be publicized?
David Youngsays
By the way, one other critical thing. Once you clean up your numerics, you can really address the critical questions about ergodic behaviour of the system, stability, and sensitivity to parameters. With all due respect, just running models with little control of numerical error tends to degenerate into data fitting,i.e., you change the free parameters, the forcings, maybe even the time step or spacial gridding, etc to match data. It becomes impossible to systematically address the real issues because you can never distinguish numerical error from the actual physical effects. And you eventually are forced to face the fact that this way of doing it breaks down when you try a different problem, change boundary conditions, etc.
Russellsays
144 re 135
I am delighted that 144 demonstrates that David Young is not among those illustrated in 135
and I wish him a good deliverance from the district depicted.
David Youngsays
I now understand why GISS people are sensitive about this issue of numerical errors. It appears that Hansen was criticized for using too coarse a grid back in the 1990’s. I claim its much more critical now and I’ll explain why.
Simulations can have many purposes. If you just want the long term patterns, then IF you are lucky, your attractor is strong enough that bad numerics is not a guarantee of failure. But, if you want details and smaller quantitative things, like sensitivities then numerics is critical.
If I want to compute the lift of an airfoil, numerics is not that critical because the lift is a very large number. The drag is 100 times smaller and to compute it, I need much better numerics and much finer grids. Now, this is not a linear scale. Let’s suppose your numerical scheme is second order in space and time. If your grid spacing is Delta X, then the truncation error is O(Delta X)**2. Note, this is just measuring local error, global error can still be large. But, this will give us a necessary condition for accuracy. Now if I double the number of grid points, Delta X is halved and thus truncation error goes down by a factor of 4. However, in climate modeling there are 3 space dimensions, so to get this factor of 4, I need twice as many points in each of 3 directions, resulting in 8 times as many points and at least 8 times as much computer time. It can actually be much worse than 8 because of poor algorithms like banded sparse matrix solves, etc. Now, to get the error down a factor of 100 to get the drag, I need 1,000,000 times as much computer time.
What people like Curry are demanding is sensitivities to parameters and to really compute those accurately may require much better numerics. The Hansen doctrine may give things that look reasonable, but quantitatively, they are probably not good enough for the next step in modeling, sensitivities.
By the way, there is a whole literature on this issue of computing sensitivities for time dependent systems. In any case, I still claim that numerical issues are a huge deal and deserve much much more attention. No Gavin, you don’t have to drop everything to look at it. But Hansen should hire some good people to get it right. It’s a huge undertaking and may require a new computer model. Just think of it as employment security Gavin. If there is a serious problem with the models, then they need you to fix the problem.
David Youngsays
Sorry, there is an error in that last post. To get a factor of 100 reduction in the error, you need 1000 times as much computer time. Still, its not a very good rate of return.
If your linear solver is say a banded one, you take another hit of a factor of 10.
Icarus says
Simple question: We hear that there is a new ‘ozone hole’ over the Arctic due to a cooling stratosphere, and a cooling stratosphere is (as I understand it) a classic ‘fingerprint’ of an enhanced greenhouse effect, so would it be reasonable to attribute this ozone hole to rising anthropogenic greenhouse gas emissions?
Hank Roberts says
> unearthing, with unerring ‘skepticism’,
> the one paper that is seriously flawed
Yup. does seem to keep happening. How come?
> Schwartz
One of his edited versions of the forcings picture:
http://www.ecd.bnl.gov/steve/image/ipcc_forcing_bars_aerosol.gif
“Added to the figure (light blue bar) is an estimate of the total (direct plus first indirect) forcing, -1.2 W m-2, and the associated uncertainty range: -0.6 to -2.4 W m-2.”
Another:
http://www.nature.com/climate/2007/0707/images/climate.2007.22-f1.jpg
“Added to the figure (green bar at bottom and associated uncertainty range) is the estimate from the 2001 IPCC report2 of the total forcing projected for 2100, where the uncertainty denotes the range of estimates for different emission scenarios.”
http://www.nature.com/climate/2007/0707/full/climate.2007.22.html
His theme since the 1990s — uncertainty, no effect _yet_ from CO2, e.g.
http://tinyurl.com/Schwartz-Jan1993
His presentations have long been popular in, er, uncertain circles.
That’s hard to understand unless all they care about is short term, because
his conclusion seems to always be along the lines
‘We ain’t seen nothin’ YET but for sure it is a’comin’ …’
Deep Climate says
Said and Wegman 2009: Suboptimal scholarship
Today I present an analysis of a 2009 article by Yasmin Said and Edward Wegman of George Mason University. “Roadmap for Optimization” was published in the inaugural edition of WIREs Comp Stats, one of a new family of Wiley publications conceived as a “serial encyclopedia”.
…
As the title implies, the article was meant to provide a broad overview of the mathematical optimization and set the stage for subsequent articles detailing various optimization techniques. However my analysis, entitled Suboptimal Scholarship: Antecedents of Said and Wegman 2009, demonstrates the highly problematic scholarship of the “Roadmap” article.
* No fewer than 15 likely online antecedent sources, all unattributed, have been identified, including 13 articles from Wikipedia and two others from Prof. Tom Ferguson and Wolfram MathWorld.
* Numerous errors have been identified, apparently arising from mistranscription, faulty rewording, or omission of key information.
* The scanty list of references appears to have been “carried along” from the unattributed antecedents; thus, these references may well constitute false citations.
ldavidcooke says
Re:99
Hey Hank,
Concur, note that the increase in volume should be constrained within the thermocline. Hence, you should have to distribute the expansion factor. Though the water column covering the polar basins should be uniform as they fill to feed the THC. I guess the question is what happens if you fill the basins faster then they empty? Does the THC rate increase or does the spill over inflate the area above the whole of the abysmal plain?
Cheers!
Dave
Harmen says
“I need a professional-quality color photo for use on the front cover of my coauthored book, “Rising Sea Levels.””
Maybe..
A photo of Banksy’s masterpiece in London?
http://www.google.nl/imgres?q=banksy+climate+change&num=10&um=1&hl=nl&biw=800&bih=509&tbm=isch&tbnid=O43fUqljbFEg7M:&imgrefurl=http://www.guardian.co.uk/artanddesign/2009/dec/21/banksy-copenhagen-regents-canal&docid=ScR7c
Hank Roberts says
> Color photo
Maybe a predictive drawing? http://climatechangepsychology.blogspot.com/2009/10/sea-level-rises-would-flood-phillyand.html
Russell says
Given the Nobel news, brace yourselves for howls of outrage from WUWT at the Committee’s failure to declare Viscount Monckton and Lubos Motl co-Laureates in Peace, Physics & Medicine :
For curing Bright’s Syndrome by using cosmic ray neutrinos to cause faster than light climate change in 11 dimensions.
wili says
Picking up on the topic of the enormous and unexpected ozone hole over the Arctic discussed by hank, thomas, patric, icarus, etc:
Is there any chance that the dramatic increases in methane output from Siberian Continental Shelf that Shakhova and others have been reporting on is related to this?
–methane is a powerful ghg, 105 times the global warming potential of CO2, so one expected result of a massive release would be that the stratosphere would cool as the troposphere warmed
–being lighter than other common atmospheric gasses, much of the methane would rise to the stratosphere, and there, react with the ozone, destroying it (and, in the process, producing CO2 and H2O, both ghg’s in their own right–and am I remembering wrong, or did recent studies not suggest that increases of stratospheric water vapor was a more powerful driver of gw than had previously been assumed?)
–all that lighter-than-air methane rising rapidly would, I should think, increase the vortex that was keeping in all that cold.
Is anyone else making these connections, or am I way off base. If the latter, please let me know where I erred.
–a large mass of rising gas would presumably add t
David Young says
I guess there is no comment from Gavin on the startling evidence (both empirical and theoretical) of large numerical errors in climate models. Sceptic Matthew, its true that many schemes can be defeated by challenging problems. That’s the point of using modern methods that are more robust.
What is most disturbing about the Williams material is that if a dissipative scheme is at the heart of most climate models, then the empirically observed fact that climate models seem to converge in average properties or patterns could be simply an artifact of the numerical scheme.
It is shocking to me that such studies have not been done until recently. In any other field, people would have insisted on it.
[Response: What is shocking is how willing you are to believe the worst without any actual knowledge of the subject. Methods can always be improved (and are being), but the biggest problem with this genre of argument – I call it the ‘a priori climate models can’t work’ argument – is that it is trivially refuted by the fact that climate models do actually work – they match climatologies, seasonality, response to volcanoes, ozone holes, ice ages, dam bursts, ENSO events etc. etc. pretty well (albeit not perfectly, and with clear systematic problems that remain foci of much research). One shouldn’t be complacent, but people like yourself who come in with a preconceived notion that climate modelling is fatally flawed end up revealing far more about their preconceptions than they do about climate models. – gavin]
David Young says
Gavin, finally you respond. I have no preconceived notions. You are reading my mind, a rather prejudiced way to do things. I merely know from 30 years experience that numerical errors are a common source of large errors in models of everything from aircraft flow problems, to climate models. Perhaps Williams is likewise the victim of preconceived notions. My challenge to you ( which judging from you last post you are not taking seriously) is to actually do what people in most fields do, namely, do Paul Williams’ checks on your model. If its fine, its fine. But your lack or curiosity is puzzling. I think Bob Richtmyer would be concerned for your reputation.
By the way, the way errors can vitiate model results is documented in some of the references I cited in previous posts. Perhaps you should look at some of them. They are indeed mainstream mathematics where there are real standards of proof.
[Response: “Finally”? Sorry, but I have both a job and a life, and playing games with you is not my no. 1 priority. I do find it amusing that you conflate the fact that I am not hanging on every one of your wise words with a lack of respect for Ulam, Richtmyer and Lorenz (and why not add Feynman and Hawking just for fun as well?). Whether you appreciate it or not, checks and improvements of all sorts are ongoing at all climate model development centers (oh, look, just like other fields!), and that occurs completely independently of anything you suggest, or I do. Williams stuff does indeed look interesting – but expecting me to suddenly drop everything I am doing and move into a new sub-domain of climate modeling in less than 24 hrs because you are ‘shocked’ at the state of the field is, shall we say, a tad optimistic. – gavin]
David Young says
Gavin, Have you viewed the P. Williams presentation? I’m giving you a fair shot here to look into it. Your response is so typical of engineers and scientists to mathematical theory, namely, that our models “work.” Where is the proud history of modeling in engineering? Von Neuman, Ulam, even Lorentz are standard we should aspire to, not denigrate
Brian Dodge says
“What kind of sea microorganism activity would be likely to create massive quantities of methane?” wili — 3 Oct 2011 @ 8:51 AM
anaerobic decomposition (google biogas generators) – perhaps related to early algal blooms, out of sync with trophic consumer species.
http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2486.2010.02312.x/abstract
Hank Roberts says
> being lighter than other common atmospheric gasses,
> much of the methane would rise to the stratosphere
http://www.google.com/search?q=well+mixed+gases
John Mashey says
re: 108
David Young:
So, I see Williams conferences including several at Fall AGU. (It would be nice, if someone actually cites what they’ve seen, instead of a vague reference that makes people waste time looking for it.) I’ll try to attend one of those sessions and see what live expert reaction is … since watching a video does not provide such feedback.
A while back, I observed that technical people skeptical of climate models seemed to have different reasons that sometimes related to their own technical discipline. Perhaps you might read that discussion and let us know about your discipline and/or technical experience. I’ve found it quite valuable to understand where people are coming from in such discussions. For instance, the key to one discussion was realization that someone had bad experience with protein-folding. In anothehr case, the simulation experience was financial modeling.
Hank Roberts says
“… 6. Transient response to the well-mixed greenhouse gases
Mar 28, 2011 – Global mean surface air warming due to well-mixed greenhouse gases … gases (WMGGs: essentially carbon dioxide, methane, nitrous oxide, …
http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/28/6-transient-response-to-the-well-mixed-greenhouse-gases/
wili says
How long must we tolerate those who are either willfully ignorant or actively trolling for emotional reactions to obviously faulty data? Isn’t that what the borehole is for?
David Young says
Mal Adapted: You are not well adapted on this one. Los Alamos has a reputation for scientific independence and being difficult for administrators to control. That’s probably a good thing. Science is advanced by allowing all ideas to be heard and Los Alamos has a stellar history in this regard. I could name the names from the past, but perhaps you already know them (on second thought, perhaps you don’t). Stan Ulam’s autobiography should be required reading for scientists in graduate school, especially climate scientists.
David Young says
Jphn, My field is fluid dynamics, i.e., the solution of the Navier-Stokes equatons which govern weather and climate. This is not about application specific experiences but about the mathematics of the system, which is common to all fields. I’ve seen the Gavin response many times. Its understandable but counterproductive and not scientific. It’s basically, “Yes, I know there is a theoretical problem, but my simulations “work”.” I hope Gavin takes note of this: in my experience the next step is that someone like Paul Williams goes off and shows that the current numerics is causing large errors and then people scramble to fix the problem, often claiming that they knew about the problem all along.
[Response: What on earth makes you think I have any objection to people fixing problems that are found? I do it all the time and so do most of my colleagues. “Objecting to your condescension” != “thinking models are perfect”. – gavin]
MalcolmT says
Hunt @ 84,
The people who made The Hungry Tide ( http://www.imdb.com/title/tt2011296/ ), about the effect of rising sea level on Kiribati, may well have relevant stills.
David Young says
Yea, I’m not expecting you to drop everything and fix the problem. It’s just ironic that people continue to insist that model simulations have meaning, or at least imply that. You can look up my publications on the web (Im using my real name because this is a serious scientific discussion). We have developed models that do meet numerical tests for grid convergence. You are tempting me to get out my checkbook and write a check to Heartland Institute or maybe write my congressman asking that climate modeling be monitored by Los Alamos, where there are still a few mathematicians in the theoretical group.
But the main point, and I hope you see this, is that science demands that these things be investigated very carefully. You don’t have to do it yourself, but you I’m sure have a mathematically inclined scientist on your team. You need to assign a sceptical team member to break the models, that’s what I do myself. It is very helpful.
[Response: I agree – and we break the models all the time. Why not simply ask as what we do instead of assuming that we don’t do anything? By the way, LANL does a huge amount of climate modeling in collaboration with NCAR (i.e. http://climate.lanl.gov/) – again, something that is easily asked and answered in contrast to your assumptions. – gavin]
David Young says
Gavin, I hope you have the fortitude to post this. I am not saying that you are claiming that “the models are perfect” I am saying that due diligence with regard to issues of numerical consistency and accuracy should be a high priority — a higher priority than publishing more and more papers based on the model results.
Kevin McKinney says
#121–
So, it’s more important to keep on documenting a tool’s potential usefulness than actually to use it?
Especially during what some see (with evidentiary support) as a crisis for which that tool has a crucial diagnostic role?
Don’t know much at all about the modeling issues raised in this discussion, but that last statement just seems bizarre to me. By all means, keep checking model validity and reliability, but “drop everything?” Really??
ldavidcooke says
Re:121
Hey Dr. Young,
We are talking about a group that is 20yrs into a 50yr project. They have a vast accumulated data set and have been substituting the various discreet processes in for large scale patterns as confirmation is established. As to the ability to account for fluctuations of discreet processes they have improved well over the last 15yrs. At issue is the conversion of the systems from large scale to discreet is difficult to extract from an apparent chaotic system.
As to the main issues with modeling as a whole is the idea that there is one solution set when the influence of a discreet process runs a knife edge which can switch by chance. You can plan for the chance by weighing factors though you cannot predict direction or amplitude. The other issue is the basis of homoginization, in essence the idea that all variables for a given grid block are the same. Smaller grid blocks increase the resolution; but, if you do not account for neighboring influences prior to roll up you miss the range of probability, in essence, damp down the potential change.
As a whole I believe the NCAR, NASA and NOAA teams have done an excellent job with the resources allowed. To me it seems more wrong for criticism when it is clear that additional expertise could compress the development time and effort. Which is better, hobble the horse so it takes all day to go a mile, or judicious put funds and resources in place so that it can reach the goal in 2 min. It really is easy to sit on the sidelines throwing spitballs it takes more (edited) to help lift the load. Care to share your preference or expertise?
Cheers!
Dave Cooke
ldavidcooke says
Hey All,
Now back to the Arctic Ozone hole. As we know an unusual pattern set up prior to the formation. A Blocking High at high latitude formed over the region of the NAD. The shift of the winds dramatically changed the heat/wv flow into the Arctic region last Fall. It appears that as of the second week in Jan. that the Arctic Circulation formed two seperate pools a strong pool over the Northern Eur-Asian region and a weaker one over the NA region. The end result was a very strong vortex with extreme upper atmospheric cooling parked over the Eur-Asian region, both deforming the N. Jet Stream elipse more then prior seasonal monitored deviations and provided a much better ground for tri-nitrics and chlorinated compounds to reduce the Stratospheric Ozone there from the normal 430-460 to 230 Dobson concentration. With an area about the area of Germany virtually devoid of Ozone.
As to root cause, GHG, warm water vapor or aerosols, I do not believe so. As to a high level of low latitude heat content escaping, definitely. At issue remains the question of the drivers of long resident Blocking Highs and Cut-Off Lows which seem to be increasing in frequency and now latitude range.
My best guess will be related to the changes in the flow patterns of both the NH Northern and Southern Jet Streams. That aerosols and GHGs play a part is undeniable. The question is the mechanics.
Maya says
I thought this was interesting, and didn’t know climate models were the topic of discussion on the open thread until I popped over here to see if anyone else had mentioned this. So, it’s not a commentary on anything, just a “oh look, this is interesting”.
http://www.physorg.com/news/2011-10-climate-underestimate-arctic-sea-ice.html
I believe this is the text of the paper that’s referenced:
http://web.mit.edu/~rampal/rampal_homepage/Publications_files/Rampal_etal2011.pdf
Ray Ladbury says
David Young,
OK. Let me get this straight. You are claiming that despite:
1) the overwhelming evidence that the climate is changing,
2) the overwhelming evidence that the change is due to greenhouse gasses; 3) thae fact that GCMs are in no way essential for attribution of the current warming to greenhouse gasses
4) the fact that the small amount of warming we’ve had is already causing serious consequences
that the mere fact that GCMs might have some imperfections is sufficient for you to write a check to the professional liars at Heartland?
When might the Discovery Institute or Jenny McCarthy expect their checks?
t_p_hamilton says
“I hope Gavin takes note of this: in my experience the next step is that someone like Paul Williams goes off and shows that the current numerics is causing large errors and then people scramble to fix the problem, often claiming that they knew about the problem all along.”
So according to David Young’s reasoning, people shouldn’t be using Navier-Stokes at all, because you never know when a Paul Williams will show what was thought to be OK wasn’t.
Rob Nicholls says
Are anthropogenic sulphate aerosol emissions rising at the moment (e.g. due to increased coal burning in China)?
If so, how big an effect is the rise in sulphate aerosol emissions thought to be having on global temperatures at the moment? Is there much possibility that sulphate emissions will cause a temporary halt to global temperature rises over the next few years or decades? (I suspect not, but I thought I’d ask).
This question was triggered by a BBC report that I saw recently ( http://www.bbc.co.uk/news/science-environment-14002264 “Global warming lull down to China’s coal growth”) suggesting that a study had concluded that the “lull” in global temperature rise between 1998 and 2008 was due to sulphate aerosols arising from a sharp increase in coal burning in China (a bit like the lull in warming from the 1950s to 70s caused by US and European coal burning).
I’ve only been able to see the abstract of that study, (“Reconciling anthropogenic climate change with observed temperature 1998–2008” by Robert Kaufmann, Heikki Kauppi, Michael Mann [editorial note–this is not RealClimate contributor Michael E. Mann] and James Stock, see http://www.pnas.org/content/108/29/11790.abstract ); the abstract paints a more complex picture than the BBC report. (The abstract mentions the solar cycle and the change from El Nino to La Nina as dominating anthropogenic effects between 1998 to 2008, because CO2 rises were partially offset by the influence of rapid increases in sulphate aerosols from increased coal burning).
Mal Adapted says
David Young,
I’m acquainted with much LANL history, and I first heard of Stanislaw Ulam as a schoolboy. Regardless, I don’t claim expertise in nuclear physics, or climate science. Your comments here suggest that you consider yourself expert enough to contend with genuine climate experts like Gavin. Are you aware of the Dunning-Kruger effect?
Thomas Lee Elifritz says
Anybody want to discuss the Younger Dryas? Didn’t think so.
No matter. I see Thomas Lowell is out there promoting his personal crackpot theory ahead of the GSA meeting this week in Minneapolis. I predict many intellectual meltdowns and the usual heated arguments at this meeting. Yay!
The more the merrier as far as I’m concerned. What’s the difference?
[Response: This? – very unlikely in my opinion. The watershed drainage area for lakes is much greater than the surface available for evaporation, and so for evaporation to control major lake level changes you need to be in really arid conditions (think Lake Chad). Doesn’t seem likely for the boreal regions – even during the ice age termination. – gavin]
Hank Roberts says
>> David Young
>> write my congressman asking that climate
>> modeling be monitored by Los Alamos
answered by:
> LANL does a huge amount of climate modeling
> in collaboration with NCAR (i.e. http://climate.lanl.gov/)
> … easily asked and answered
> in contrast to your assumptions. – gavin]
The exchange between real scientists helps us readers learn a lot.
Thanks.
Ignore the kibitzers, please.
Thomas Lee Elifritz says
I was referring to his newswire self promotional article. ‘Not that there’s anything wrong with that’. A little self promotion never hurts, even when you might be wrong. It’s amusing though that we still can’t pin down a simple 9500 cubic kilometer water leak thirteen thousand years ago, when our planet is still hemorrhaging fresh water as we speak. Luckily though, we are getting it back in droves in the form of increased humidity.
I also find it amusing that nobody comments on the almost obvious climate inversion going on in the Midwest – massive almost permanent morning dews far beyond anything I have ever observed, and cloudless high pressure days unlike anything I have observed in my fifty years of weather observing. Hot days, cold nights (almost desert like) along with the increased humidity even in the presence (or in this case absence) of any clouds.
Hank Roberts says
For Rob Nicholls:
http://scholar.google.com/scholar?hl=en&q=sulphate+aerosol+trend&btnG=Search&as_sdt=1%2C5&as_ylo=2011&as_vis=1
I skimmed the results, you may find something to answer your questions, e.g. this article might help:
http://www.atmos-chem-phys-discuss.net/11/21971/2011/acpd-11-21971-2011.pdf
“Anthropogenic SO2 emissions increased alongside economic development in China at a rate of 12.7%yr−1 from 2000 to 2005. However, under new Chinese government policy, SO2 emissions declined by 3.9 % yr−1 between 2005 and 2009….”
sidd says
Eminent scientist David Young wrote:
“…startling evidence…large numerical errors in climate models…It is
shocling…30 years experience…large errors…climate models…your lack or
curiosity…would be concerned for your reputation…errors can vitiate model results…
real standards of proof…giving you a fair shot here…Your response is so
typical…Von Neuman, Ulam, even Lorentz…My field is fluid dynamics…
Navier-Stokes equatons…seen the Gavin response many times. Its understandable but counterproductive…large errors…people continue to insist that model simulations have
meaning…Im using my real name because this is a serious scientific discussion
…check to Heartland Institute…sceptical team member…fortitude to post this
…due diligence…numerical consistency and accuracy…higher priority than
publishing…”
Dude, can you turn down that whistle ? All the dogs are freaking out…
sidd
Russell says
Could this be the David Young seen attending a heartland function on the outskirts of Tulsa?
http://tinypic.com/r/23rok6e/7
CM says
A variation on #118–120:
;-)
Paul Williams, though, seems to be doing interesting and constructive work addressing model uncertainties. He also seems unlikely to be writing any checks to the Heartland Institute (he’s quoted in the Sunday Times as saying he keeps a record of all the climate skeptics’ threats, “partly to provide a list of suspects if I ever disappear, but mainly because it’s funny to read how many times people can call you a ‘caulkhead'”.)
(Kevin, there’s a lecture on the time-stepping issue that I found surprisingly accessible, even though I am a Bear of Very Little Math. And you at least will want to check out his paper on “Meteorological phenomena in Western classical orchestral music”…)
David B. Benson says
David Young — Whatever imperfection you are troubled about has probably already been resolved (and likely some time ago). The is a best about climate models in “The Discovery of Global Warming” by Spencer Weart:
http://www.aip.org/history/climate/index.html
and in more depth in
“A Climate Modelling Primer” by Henderson-Sellers
Introduction to Three-Dimensional Climate Modeling 2nd Edition
Warren M. Washington and Claire Parkinson
http://www.palgrave.com/products/title.aspx?PID=270908
with possibly now a book about the history of climate model development. In any case, you could check the code from both NCAR and GISS if you wanted to. [I myself am satisfied that the lesson of Emmy Noether’s (first) Theorem has been learnt for these codes.
Brian Dodge says
“Robert-Asselin filter is pretty dissipative and damps the real oscillations in the system.”
“…in my experience the next step is that someone like Paul Williams goes off and shows that the current numerics is causing large errors…”
“The effects of the RAWfilter on the climatology and forecast skill of the SPEEDY model,” Javier Amezcua, Eugenia Kalnay, and Paul Williams (the horse’s mouth, as it were)
“In a recent study, Williams (2009) introduced a simple modification to the widely used Robert–Asselin (RA) filter for numerical integration. The main purpose of the Robert–Asselin–Williams (RAW) filter is to avoid the undesired numerical damping of the RA filter and to increase the accuracy.”
“…in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter.”
I wouldn’t necessarily call 25% improvement a correction of ‘large errors’ – YMMV
Hank Roberts says
> dewpoint
You can find papers on trends. I just glanced through scholar. Example:
Theor Appl Climatol (2009) 98:187–195 DOI 10.1007/s00704-008-0094-5
ORIGINAL PAPER
Trends in extremes of temperature, dew point, and precipitation from long instrumental series from central Europe
Published online: 7 February 2009 # Springer-Verlag 2009
“… While precipitation at Potsdam does not show pronounced trends, dew point does exhibit a change from maximum extremes during the 1960s to minimum extremes during the 1970s….”
David Young says
The thing that struck me about Williams presentation was the simple test integration where the solution is a sine wave for Y and a cosine for X. The RA result reduces the amplitude by 50% over just a few wave lengths. He picked nu = delta t so that d is third order in delta t. So the formal order of the scheme is maintained at second order. However, the problem here is that the errors accumulate in time, they add up and eventually the signal will disappear. Thus, only in a system where the dynamics are strongly forced all the time will the signal be accurately predicted. You might be lucky in any given problem, but you know when looking at small effects, this kind of thing makes it questionable. Williams’ example using the Lorentz attractor is perhaps an extreme case but shows that choice of delta t can have a big effect on long term simulations, exactly where modelers claim that climate models have the correct patterns. This example shows that numerics can have a strong effect on these patterns. In other words, just having a stable attractor does not guarantee that you can track it accurately. It’s nice to see Williams making these points.
There are more things like this: Keyes and Knoll in Journal of computational physics (around 2002-2005 somewhere) is excellent on the advantages of implicit methods in large simulation codes. Williams considers only explicit schemes. Advection is very accurately discretized by the Streamwise Upwind Petrov Galerkin method. Finite differences are not as good especially on grids that are not very uniform. Just look for T J R Hughes for SUPG.
ldavidcooke says
Hey Dr. Young,
I guess many of us are missing the point, not to discredit the works of Dr. Williams; however, wrt the scheme of things. For instance it is not unlikely for small feedback loops to occur between variables and spread. However, as the cycle periods are not in phase they naturally drift out of sync. As long as your steps are in sync with the modeled variable you would be fine. The problem is trying to find the lowest common denominator when combining multiple variables. Sure you can try to bound the systems for smaller cycles; however, then it is no longer representative of the object of the model. Put differently the processes do not match so the result becomes artificial.
As to gridding, my experience goes to the former CLI Rembrant Video codec in your two confrence rooms that were installed in the ’90s. The conversion of 15000 pixels or 90mb of digitized data to 1.5mb of DS-1 transport. It was accomplised via FTs to compress the intensity and color/gama data Into large data blocks. After the initial image was created persistence maintained that which had not changed or reduced the priority in the stack, that which changed little. The bulk of the data traffic was that which changed frequently. (Maybe it is time to entertain graphic/optical modeling and let the resultant be reflected by the color temperature.)
It seems that most of our models have to change in lockstep and if we attempt to change all variables at some prorated rate we have removed natural varibility, where we try to correct by injecting seasonality via a Holt-Winters curve filter.
So I believe the real question is what is the purpose of introducing various smoothing techniques when the process that they are modeling is not smooth?
Cheers!
Dave Cooke
(PS: Re:Kibitzers Hank, The DOE project is unlikely focused on NCAR support, most often it is DoD. I believe it is CPU cycle time where NCAR/UCAR gets its greatest LANL or LLNL support, seconded by shared data.)
Dan Schillereff says
Indirectly climate-related, and certainly not as climate-related as the on-going modelling discussion but this graphic is perhaps of interest showing considerable recent (past 48h) seismic activity beneath Katla. Is this a warning for a potentially global-scale eruption?
http://en.vedur.is/earthquakes-and-volcanism/earthquakes/myrdalsjokull/#view=map
Hank Roberts says
> is this a warning ….?
You’re asking the climatologists? (grin)
You might better ask that question at the site you point to.
They have publications, including one on forecasting eruptions: “Long-term and short-term earthquake warnings based on seismic information in the SISZ (PDF)”
It’s on the page at:
http://en.vedur.is/earthquakes-and-volcanism/reports-and-publications/
One way to phrase the question is, does a 48-hour period give useful information about any trend, or do you need a longer sample to distinguish a change from the natural variation. That’s a pretty basic statistics question for any data set — how much data do you need depends on how noisy (how much natural variation).
Hank Roberts says
Comments on the recent WSJ piece about climate science elsewhere abound.
E.g. http://sci-ence.org/neutrino/
Hat tip to: http://www.metafilter.com/108152/If-Einstein-might-be-wrong-about-relativity-how-can-we-really-trust-any-scientist
David Young says
Dave Cook, I don’t quite understand your post. The idea is that in the limit of finer grid and smaller time steps, the numerical error goes to zero and so all the scales are tracked correctly. There is a relatively complete theory for this at least for some types of problems, for example analysis of elastic structures, where things are elliptic. The theory is the basis for the numerical solution of these problems. It gets harder as you go to hyperbolic systems and then to the high Reynolds’ number Navier-Stokes. In any case, there are some mathematical things that can help. I mentioned some of them in the previous post. If you can access journal of computational physics, the survey paper I mentioned is very good.
There is a great thread on Climate etc. about chaos, ergodicity and attractors that goes to the issues for Navier-Stokes, at least the continuous theory. We still need a comprehensive discrete numerical theory for this problem. The only people I know of who work on this are the CFD people with a more mathematical bent. I mentioned Hughes before. There is a lot of work on Discontinuous Galerkin methods and a pretty sound mathematical basis for it. The literature on this is vast, but these methods are a lot better than finite differences, which it sounds to me are the basis of all the climate models. These methods are 50 years old and reworking these codes could p;ay very big dividends. That’s my only point.
The problem here on Real Climate is that to rise above the noise of treating models as black boxes that are the basis of a thousand papers, you have to raise your profile enough to arouse the wolves (not Schmidt, but some of those who don’t use their real names). Actually, the most snide of the posts has been removed (that’s gratifying. Thanks, Gavin.) I’ve experienced this before, people are always defensive when someone tries to ask them to improve numerics. But I detect that climate science (at least the subset represented here) is more defensive than usual. I understand why, they have been assaulted on a professional and personal level often by ignorant people with a political agenda. Once a war gets started, it’s hard to get back to normal.
Anyway, my only point is that things could almost certainly be improved dramatically by a careful examination of the numerical PDE literature from 1980 to present. I would suggest that the modelers spend some time with experts already in Federal employ. Phil Collela at Berkley or David Keyes from Columbia are two who are approachable.
Thomas Lee Elifritz says
A little off topic again, if that is possible on an open thread, but Thomas Lowell certainly has pushed a lot of buttons with his public relations effort – now I see WUWT has picked up the feed in order to bash the status quo a bit on the uncertainties. Is there anyone else out there who thinks this is perhaps not the optimum manner that science ought to be publicized?
David Young says
By the way, one other critical thing. Once you clean up your numerics, you can really address the critical questions about ergodic behaviour of the system, stability, and sensitivity to parameters. With all due respect, just running models with little control of numerical error tends to degenerate into data fitting,i.e., you change the free parameters, the forcings, maybe even the time step or spacial gridding, etc to match data. It becomes impossible to systematically address the real issues because you can never distinguish numerical error from the actual physical effects. And you eventually are forced to face the fact that this way of doing it breaks down when you try a different problem, change boundary conditions, etc.
Russell says
144 re 135
I am delighted that 144 demonstrates that David Young is not among those illustrated in 135
http://tinypic.com/r/23rok6e/7
and I wish him a good deliverance from the district depicted.
David Young says
I now understand why GISS people are sensitive about this issue of numerical errors. It appears that Hansen was criticized for using too coarse a grid back in the 1990’s. I claim its much more critical now and I’ll explain why.
Simulations can have many purposes. If you just want the long term patterns, then IF you are lucky, your attractor is strong enough that bad numerics is not a guarantee of failure. But, if you want details and smaller quantitative things, like sensitivities then numerics is critical.
If I want to compute the lift of an airfoil, numerics is not that critical because the lift is a very large number. The drag is 100 times smaller and to compute it, I need much better numerics and much finer grids. Now, this is not a linear scale. Let’s suppose your numerical scheme is second order in space and time. If your grid spacing is Delta X, then the truncation error is O(Delta X)**2. Note, this is just measuring local error, global error can still be large. But, this will give us a necessary condition for accuracy. Now if I double the number of grid points, Delta X is halved and thus truncation error goes down by a factor of 4. However, in climate modeling there are 3 space dimensions, so to get this factor of 4, I need twice as many points in each of 3 directions, resulting in 8 times as many points and at least 8 times as much computer time. It can actually be much worse than 8 because of poor algorithms like banded sparse matrix solves, etc. Now, to get the error down a factor of 100 to get the drag, I need 1,000,000 times as much computer time.
What people like Curry are demanding is sensitivities to parameters and to really compute those accurately may require much better numerics. The Hansen doctrine may give things that look reasonable, but quantitatively, they are probably not good enough for the next step in modeling, sensitivities.
By the way, there is a whole literature on this issue of computing sensitivities for time dependent systems. In any case, I still claim that numerical issues are a huge deal and deserve much much more attention. No Gavin, you don’t have to drop everything to look at it. But Hansen should hire some good people to get it right. It’s a huge undertaking and may require a new computer model. Just think of it as employment security Gavin. If there is a serious problem with the models, then they need you to fix the problem.
David Young says
Sorry, there is an error in that last post. To get a factor of 100 reduction in the error, you need 1000 times as much computer time. Still, its not a very good rate of return.
If your linear solver is say a banded one, you take another hit of a factor of 10.