The point that climate downscaling must pay attention to the law of small numbers is no joke.
The World Climate Research Programme (WCRP) will become a ‘new’ WCRP with a “soft launch” in 2021. This is quite a big story, since the WCRP coordinates much of the research on which the Intergovernmental Panel on Climate Change (IPCC) builds its assessments.
Until now, the COordinated Regional Downscaling EXperiment (CORDEX) has been a major project sponsored by the WCRP. CORDEX has involved regional modelling and downscaling, with a focus on the models and methods rather than on providing climate services. In its new form, the activities that used to be carried out within CORDEX will belong to the WCRP community called ‘Regional information for society’ (RifS). This implies a slight shift in emphasis.
With this change, the WCRP signals a desire for the regional modelling results to become more useful and relevant for decision-makers. The change will also introduce a set of new requirements, which is where the law of small numbers comes in.
The law of small numbers is described in Daniel Kahneman’s book ‘Thinking, Fast and Slow’ and is a pitfall that can be explained by statistical theory: you are likely to draw a misleading conclusion if your sample is small.
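Kahneman’s point is easy to demonstrate. Here is a minimal sketch in Python (the numbers are made up for illustration) showing how often small samples land misleadingly far from the truth:

```python
import numpy as np

rng = np.random.default_rng(42)
truth, spread = 0.0, 1.0      # true mean and standard deviation of the process

for n in (5, 30, 100):        # sample sizes
    means = rng.normal(truth, spread, size=(10_000, n)).mean(axis=1)
    # How often does a sample mean land far (more than 0.5 sigma) from the truth?
    frac = np.mean(np.abs(means - truth) > 0.5)
    print(f"n={n:3d}: {100 * frac:5.1f}% of sample means are misleadingly extreme")
```

The smaller the sample, the larger the share of estimates that look like something real but are just noise.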
I’m no statistician, but a physicist who experienced a “statistical revelation” about a decade ago. Physics-based disciplines, such as meteorology, often approach a problem from a different angle than statisticians do, and there are often gaps in understanding and appreciation between the two communities.
A physicist would say that if we know one side of an equation, then we also know the other side. The statistician, on the other hand, would use data to prove there is an equation in the first place.
One of the key pillars of statistics is that we have a random sample that represents what we want to study. We have no such statistical samples for future climate outlooks, but we do have ensembles of simulations representing future projections.
We also have to keep in mind that regional climate behaves differently from global climate. There are pronounced stochastic variations on regional and decadal scales that may swamp the long-term trends due to greenhouse gases (Deser et al., 2012). These variations are subdued on a global scale, since opposite variations over different regions tend to cancel each other.
CORDEX has in the past produced ensembles that can be considered small, and Mezghani et al. (2019) demonstrated that the Euro-CORDEX ensemble is affected by the law of small numbers.
Even if you have a perfect global climate model and perfect downscaling, you risk getting misleading results with a small ensemble, thanks to the law of small numbers. The regional variations are non-deterministic due to the chaotic nature of the atmospheric circulation.
My take-home message is that there is a need for sufficiently large ensembles of downscaled results. Furthermore, it is the number of different simulations with global climate models that is key, since they provide the boundary conditions for the downscaling.
Hence, there is a need for a strong and continued coordination between the downscaling groups so that more scientists contribute to building such ensembles.
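To see why ensemble size matters so much, here is a hedged sketch with purely synthetic data (not CORDEX output): each ensemble member is a prescribed warming trend plus red noise standing in for decadal variability, and the trend estimated from a small ensemble scatters badly around the truth:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(50)            # a 50-year projection period
true_trend = 0.03                # degrees C per year (illustrative)

def member():
    """One synthetic regional projection: trend plus AR(1) 'decadal' noise."""
    noise = np.zeros(len(years))
    for t in range(1, len(years)):
        noise[t] = 0.9 * noise[t - 1] + rng.normal(0, 0.15)
    return true_trend * years + noise

def ensemble_trend(n):
    """Linear trend of the ensemble-mean series for an n-member ensemble."""
    mean_series = np.mean([member() for _ in range(n)], axis=0)
    return np.polyfit(years, mean_series, 1)[0]

for n in (3, 10, 50):
    estimates = [ensemble_trend(n) for _ in range(200)]
    print(f"{n:2d} members: trend = {np.mean(estimates):.3f} "
          f"+/- {np.std(estimates):.3f} deg C/yr   (truth: {true_trend})")
```

With three members, individual estimates can easily be off by a substantial fraction of the signal; with fifty, the spread shrinks markedly.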
Also, while CORDEX has been strong on regional climate modelling, the new RifS community needs additional new expertise. Perhaps a stronger presence of statisticians is a good thing. And while the downscaled results from large ensembles can provide a basis for a risk analysis, there is also another way to provide regional information for society: stress-testing.
References
- C. Deser, R. Knutti, S. Solomon, and A.S. Phillips, "Communication of the role of natural variability in future North American climate", Nature Climate Change, vol. 2, pp. 775-779, 2012. http://dx.doi.org/10.1038/nclimate1562
- A. Mezghani, A. Dobler, R. Benestad, J.E. Haugen, K.M. Parding, M. Piniewski, and Z.W. Kundzewicz, "Subsampling Impact on the Climate Change Signal over Poland Based on Simulations from Statistical and Dynamical Downscaling", Journal of Applied Meteorology and Climatology, vol. 58, pp. 1061-1078, 2019. http://dx.doi.org/10.1175/JAMC-D-18-0179.1
Thomas Fuller says
Could not sufficient hindcasting provide an adequate number of regional results that could be examined?
mike says
Thank you, nicely and clearly explained.
Guest (O.) says
Not sure if I’m addressing the problem right, but maybe adding mathematicians to the team would make sense too.
AFAIK there are two movements in addressing problems with accuracy in numerical math. One is rather on the engineering side: to provide 128-bit arithmetic (quadruple precision) or even higher precision.
The other is interval arithmetic (it looks like being the opposite of making precision higher).
Both seem to be separate, but in climate modelling these two approaches could maybe both be used to enhance the models and their results.
But maybe they are already used and it’s nothing new to you?
If not, let me just throw in:
https://en.wikipedia.org/wiki/Interval_arithmetic
Maybe it’s already used in your math libraries.
But would it make sense to add it to the climate models themselves (if not already done)?
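To illustrate the idea, here is a toy Interval class of my own (not any particular library’s API); a serious implementation would also control the floating-point rounding direction of each bound:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Sum of two intervals: add the corresponding bounds.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: the extremes lie among the four corner products.
        corners = (self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi)
        return Interval(min(corners), max(corners))

# Two quantities known only to +/- 0.05:
x = Interval(0.95, 1.05)
y = Interval(1.95, 2.05)
print(x + y)   # roughly Interval(lo=2.9, hi=3.1)
print(x * y)   # roughly Interval(lo=1.85, hi=2.15)
```

Every result carries rigorous bounds, so the accumulated uncertainty stays visible instead of being silently rounded away.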
Russell says
Rasmus wrote: “Even if you have a perfect global climate model and perfect downscaling, you risk getting misleading results with a small ensemble, thanks to the law of small numbers. The regional variations are non-deterministic due to the chaotic nature of the atmospheric circulation.
My take-home message is that there is a need for sufficiently large ensembles of downscaled results. Furthermore, it is the number of different simulations with global climate models that is key since they provide boundary conditions for the downscaling.
Hence, there is a need for a strong and continued coordination between the downscaling groups so that more scientists contribute to building such ensembles.”
As surely as diminishing scale renders climate models less determinate, it tends to amplify demands for purely global policy responses. But human experience of climate and climate change is intrinsically regional and largely emplaced: most people now live in cities and experience urban microclimates.
New human risks may arise from excluding regional modalities of mitigation from a climate policy conversation self-focused on managing the global atmosphere to the exclusion of regional land and water surfaces.
Al Bundy says
O: AFAIK there are two movements in addressing problems with accuracy in numerical math. One is rather on the engineering side: to provide 128-bit arithmetic (quadruple precision) or even higher precision.
The other is interval arithmetic (it looks like being the opposite of making precision higher).
AB: I’ll add shrinking the cubes. 128 bits is rather precise, but if the cube being described has tremendous internal variation over space and time, then what’s the point? As chips drop towards 2 or 3 nm, the models’ cubes and intervals will shrink. Like kids in the back seat, climate scientists will have to wait a bit longer. Exascale is coming.
Interval arithmetic looks interesting (thanks!), kind of a poor man’s creative conversion of an abacus so it has a bit of a quantum computer’s qualities.
Just pondering here, but a split-cycle system might help: run a global model with regular cubes concurrently with a regional model at perhaps ten times the resolution. The global model takes one step, the regional model takes the results and does its ten steps; repeat.
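Something like this toy loop (all functions and numbers invented for illustration; real one-way nesting is far more involved):

```python
# Toy sketch of the split-cycle idea: a coarse "global model" takes one
# big step, then a finer "regional model" takes ten substeps, using the
# fresh global state as its boundary forcing.

def step_global(state, dt):
    """Advance the toy global state: relax toward a forcing value of 1.0."""
    return state + dt * (1.0 - state)

def step_regional(state, boundary, dt):
    """Advance the toy regional state, nudged toward the global boundary."""
    return state + 5.0 * dt * (boundary - state)

def run(global_state=0.0, regional_state=0.0, n_steps=100, dt=0.1, substeps=10):
    for _ in range(n_steps):
        global_state = step_global(global_state, dt)        # one big step
        for _ in range(substeps):                           # ten small steps
            regional_state = step_regional(regional_state,
                                           boundary=global_state,
                                           dt=dt / substeps)
    return global_state, regional_state

print(run())   # both toy states relax toward 1.0
```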
Dougfir says
Large ensembles might help tell us what regional climates are most likely to do, but there is only one future climate, which is not an ensemble average but one of the many possible stochastic paths the climate could take. Each region’s climate experience might still deviate from the prediction. I’m trying to clarify my understanding; correct me if I am wrong.
Carbomontanus says
@ Benestad & al
This is a bit “alien” or strange to me. Thinking small or big hardly matters in science, because Microcosmos is in Macrocosmos and vice versa: Despicio suspiciendum / suspicio despiciendum.
For instance, do not badger the Erlenmeyer flask or the test tube, not even the raindrop, because it is too small. It has got to do with, and can allow you to draw sure conclusions about, the very ocean.
Only fools find that ridiculous.
I can also give you a most dramatic example of that.
I learnt recently that the hugest eruptions of magma and lava in the world, the mega- or super-volcanisms, are dated, and for sure, by using a microscope and picking out quite tiny, pure crystals of ThSiO4, a quite surprising formula for a silicate, but it is crystallized quite purely from molten ThO2 and SiO2, without any trace of lead.
Then, after a very long time, those pure crystals contain some lead due to the decay of thorium. By analyzing that lead, one can say for definite that it is not common lead, but the very special thorium-lead isotope. And thus, by making mass spectroscopy of those tiny crystals, you can discuss geophysics and even climate and life history in terms of aeons with 3 significant figures, for the Siberian Traps and the breakup of America from Eurasia and Africa, and further date Hawaii and Iceland and the Rift Valley quite surely and exactly, without any statistics at all.
Thus, Steve McIntyre can be ruled out and disqualified.
I am not impressed at all. I know and I have seen that way of propagating the art of writing hourly wages for poorly educated and politically employed industrial workers in the labs and the institutes, whereas chemical glasses and pioneering electronics and intelligent analytics and critical experiments are shown out, badgered, and forbidden.
On the contrary, I am really very impressed by Tyndall’s famous experimental measurements of IR-absorbing gases, and by Herschel’s and Arrhenius’ use of thermopiles and mirror galvanometers before the invention of op-amps and transistors. It took them no statistics at all, and it decided and defined for eternity.
I further sit back with the solid impression that one’s landing in the unions and the bitty (you call it Borehole) of climate surrealism and denialism is mostly the result of a religious-political lack of higher learnings, of Baccalaureus 1 and responsible education in the lab, on how to design critical chemical and physical methods and state empirical proof.
It is decided rather by intelligent and critical cross-examining by systematically independent methods than by thousands of industrially repeated measurements.
Because it hardly takes 100 doctors or 1000 repeated experiments for “good statistics”, because that religion hardly rules out quite severe systematic errors.
As for doctors, you hardly need more than one doctor if you remember also to ask a veterinarian (because we are fur-animals and we learn a lot from kits and dogs and rats), and remember to ask and to consult the patient also.
And if you can’t believe that, remember that Covid-19 is also about bats and pangolins; we hardly get infected by anything but zoonoses. And virus was discovered by Friedrich Löffler in the test tube, and further by electron microscopy on tobacco and potatoes, and it still obeys Darwin’s and Pasteur’s principles. Remember also to study mosses and ferns, lichens and mushrooms, which are our largest microbes, quite easy to find and to see.
And remember to examine and to study and compare virus denialism and election-result denialism also, in addition to climate surrealism.
I only use Poisson statistics, which was first developed for criminal statistics in Paris and proves further applicable to the number of raindrops in a cup of coffee under open sky, and to radioactive counting. I draw the square root of the counted amplitude, and that is all.
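That square-root rule is standard Poisson counting statistics: for a Poisson process, the standard deviation of a count N is close to sqrt(N). A toy simulation (with a made-up rate) confirms it:

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 100                          # expected counts per interval (illustrative)
counts = rng.poisson(rate, size=100_000)

print("empirical std of counts:", counts.std())            # close to 10
print("sqrt of the mean count :", np.sqrt(counts.mean()))  # also close to 10
```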
Gauss himself wrote:
“Nothing displays the lack of mathematical education more clearly than excessive precision in numerical calculation!” (“Durch nichts trägt sich der Mangel an mathematischer Bildung deutlicher zur Schau als durch maßlose Genauigkeit in der Zahlenrechnung!”)
Class-warfare social racism against the proper learnings of Darwin, Pasteur, Löffler, and Gauss is what costs, what infects, what goes wrong, and what kills.
It takes no statistics at all in order to see that.
Mr. Know It All says
I managed to read all of that including the first link summary on “climate downscaling”.
Whew! Follow the KISS principle – Keep It Simple, Stupid! If you want to know what the weather will be for the next week, look at the local forecast and realize it will not be a perfect match to the actual future weather. If you want to know what the weather is now, look out the window!
Are we wasting talent here that could be used for more important tasks?
Barton Paul Levenson says
KIA 8: Are we wasting talent here that could be used for more important tasks?
BPL: You definitely are. Go apply your immense talents somewhere else.
William B Jackson says
I suspect that what is really meant in #8 is KISFS keep it simple for (the) simple, but I could be wrong! KIA is a genius after all.
Jeremy Grimm says
I do not understand this post. It begins with the launch of the ‘new’ WCRP, which seems like at least a potentially political event: “CORDEX will belong to the WCRP community called ‘Regional information for society’ (RifS). This implies a slight shift in emphasis.” I do not understand the connection between the ‘new’ WCRP, RifS, more useful regional modelling results, new requirements, and the law of small numbers.
Would someone who does understand what this post is hinting at explain so a dumb layman might understand?
rasmus says
Sorry. So when the focus is shifted from improving regional climate models and downscaling to actually making use of them to provide information about the outlook for the future, there is also a subtle but significant change in what is required. There are random variations in the regional climate on top of the global warming trends, and such random swings may last for a decade and either reinforce the trend or go in the opposite direction. So to get a reliable account of future outcomes, you need to estimate the likelihood of a range of outcomes, for instance by making use of an ensemble of regional model simulations or downscaled results. You need a sufficient number of these to get a reliable answer, akin to having a sufficiently large sample in statistics. The law of small numbers explains why a small sample is likely to give misleading answers.
Al Bundy says
rasmus: to get a reliable account of future outcomes, you need to estimate the likelihood of a range of outcomes
AB: Yep. That’s why rolling the dice once can’t tell you whether the boxcars you rolled on your first toss were guaranteed, likely, or rare. Roll ’em 1000 times and you’ll know that they are 1/36 probable. You’re not looking for THE answer but the distribution of possible answers.
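A throwaway simulation (illustrative only) makes the same point:

```python
import random

random.seed(7)
rolls = 100_000
boxcars = 0
for _ in range(rolls):
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    if d1 == 6 and d2 == 6:     # boxcars: double sixes
        boxcars += 1

print(f"empirical: {boxcars / rolls:.4f}   theoretical: {1 / 36:.4f}")
```

One roll tells you almost nothing; a hundred thousand pin the probability down.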
______________
mrkia: keep it simple, stupid
AB: Doesn’t work. You simplified too much. The original quote is:
“Everything should be made as simple as possible, but no simpler.” Albert Einstein.
Jeremy Grimm says
Thank you for explaining, rasmus.
Keith Woollard says
So let me paraphrase AB@13’s summary of Rasmus@12’s explanation of his own post – You need to average lots of runs so that the noise cancels.
Great, assuming the noise is white. Which it isn’t; it is very coloured.
Let’s look at the far simpler, but effectively the same, models used to forecast weather. Typically they only look 7 days ahead. For some parameters (e.g., rainfall risk) they might push it to a month, but there is very low confidence in the predictions. Using the logic proposed here, all we need to do is run many more simulations and we will get a much better picture of the PDF. No: the reason they can’t forecast beyond 7 days is not white noise, it is that the system is far more complex than the model. No amount of added statistics will resolve that.
Is there anyone here willing to bet on the December 2021 anomaly value to within a tenth of a degree? If not, why not? Our models do not come close to having that sort of precision. Rather than focusing on element size or timesteps, we need to be confident that all the processes are accounted for correctly. If we can’t say with any certainty how the cloud cover will be different in 12 months, or the ocean currents, or humidity, then there is no point taking the forecast 80 years into the future.
Barton Paul Levenson says
KW 15: If we can’t say with any certainty how the cloud cover will be different in 12 months, or the ocean currents, or humidity, then there is no point taking the forecast 80 years into the future.
BPL: And Keith still, after all this time, doesn’t understand the difference between weather and climate.
Please try to get this, Keith:
Weather is local, day-to-day variation in pressure, temperature, rainfall, cloudiness, and wind.
Climate is weather averaged over a wide area, or the entire globe, for thirty years or more.
Weather is chaotic. Climate is deterministic.
Weather is an initial-values problem. Climate is a boundary-values problem.
Climate is what you expect. Weather is what you get. (Mark Twain)
I don’t know what the temperature will be at noon on June 5th in Cairo, Egypt. But I can be pretty damn sure it will be higher than the temperature in Oslo, Norway.
Ric Merritt says
Speaking of bets, is there anyone willing to bet on decade-over-decade global average surface temps? I think they’ve gone up, and will continue that way, the longer the period the surer the bet. Some folks SEEM to think something else, but I have offered the bet on various websites for many years. Never any takers, sigh.
zebra says
Here We Go Again,
BPL, KW, and even Rasmus…Words Matter!!
Last month we had multiple pontificators going on and on about thermodynamics while not knowing the actual definition of the word “heat”.
Not to pick on BPL, but:
“Weather is chaotic. Climate is deterministic.”
Except, wiki tells us this (my bolds): “Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems. … In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos.”
Again, it’s not just BPL; Rasmus uses “random” in trying to clarify the original post, which makes it more confusing not less, perhaps contributing to Keith’s misunderstanding.
Now I have to return to dealing with the results of a non-random Nor’easter; the difficulty of moving the snow determined by factors like temperature, wind, and rate of precipitation. But I can predict with some confidence, if not precision, that my back will hurt more than it already does when next we meet.
Ray Ladbury says
Zebra, does it really make sense to call a system defined only in terms of averages and variability “deterministic”? Predictable might work. Analyzable would be better.
Al Bundy says
Keith Woollard: You need to average lots of runs so that the noise cancels.
AB: I don’t think so. It’s more like the double slit experiment (or quantum mechanics). You won’t get an average, you’ll get somewhat discrete probabilities.
Come back, Richard Feynman. We need you.
Keith Woollard says
Fair point Al@20
Keith Woollard says
I am not conflating climate and weather, I am drawing comparisons with the modelling of the two.
So when I ask about predicting the cloud cover or humidity or temperature anomaly, I don’t mean on a particular day at a particular location; I mean the average worldwide values of these for the month. Do you honestly believe that the models are predicting these parameters?
You may well believe “Weather is an initial-values problem. Climate is a boundary-values problem.” (I don’t; I think they are both both.) But that is irrelevant. The modelling for both is the same: initial state ==> perform all cell calculations ==> current state becomes next initial state, and loop through all time steps.
jgnfld says
Re. “…You won’t get an average, you’ll get somewhat discrete probabilities.” vs “noise cancels”.
It really depends on whether the underlying reality is best viewed in a quantum versus a macro way. Climate values exist on continua, generally, not in categories.
zebra says
Ray Ladbury #19,
“predictable might work”
Ray, I’m quoting from Wikipedia, and I’m pretty sure it reflects the accepted terminology in the field.
So I don’t understand if you are disagreeing with them when they say “the deterministic nature of these systems does not make them predictable”, or you are disagreeing with the standard definition of “deterministic”, or what. You need to elaborate.
All I’m doing here is trying to get everyone speaking the same language, to avoid the pointless talking-past-each-other-definition-debate that often occurs, as in the “heat” example.
BTW, you never responded to my question on UV about the Hansen paper.
Kevin McKinney says
“Is there anyone here willing to bet on the December 2021 anomaly value to within a tenth of a degree?”
What odds are you giving?
jgnfld says
@20
You don’t GET probabilities from a double slit setup. You get data and samples. Aggregating these lets one _estimate_ probabilities, but the probabilities come from theories. The experiment provides data to compare to the theories.
Al Bundy says
BPL,
I probably need a spiritual advisor. You’d be my choice. And whatever level you are thinking this is at, you’re a few orders of magnitude low.
If you’re game, my initial contact is ManyAndVaried@hotmail.com
zebra says
Been Doing This A While, I Guess,
The Wikipedia page from which I quoted had an animation of a double pendulum, and I suddenly remembered how, what seems like a very long time ago, I was delighted to find a simple early one of these online. I said: “Hah, now I can get people to understand the nature of the Climate system; this is the perfect visual aid.”
Anyway, this one is pretty good:
https://www.myphysicslab.com/pendulum/double-pendulum-en.html
I don’t know if there is something out there that allows for more precise playing with constraints and perhaps a little damping, but if one is a ‘visual thinker’ to some degree, I think spending some tweaking time with it would help to clarify certain concepts.
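For anyone who prefers code to an applet, here is a self-contained sketch using the standard equal-arm double-pendulum equations of motion (parameters and step size are my own choices): two pendulums started a microradian apart part company completely within seconds:

```python
import numpy as np

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0

def deriv(s):
    """s = (th1, w1, th2, w2): textbook double-pendulum dynamics."""
    th1, w1, th2, w2 = s
    d = th1 - th2
    den = 2 * M1 + M2 - M2 * np.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * np.sin(th1)
          - M2 * G * np.sin(th1 - 2 * th2)
          - 2 * np.sin(d) * M2 * (w2**2 * L2 + w1**2 * L1 * np.cos(d))
          ) / (L1 * den)
    a2 = (2 * np.sin(d) * (w1**2 * L1 * (M1 + M2)
                           + G * (M1 + M2) * np.cos(th1)
                           + w2**2 * L2 * M2 * np.cos(d))
          ) / (L2 * den)
    return np.array([w1, a1, w2, a2])

def rk4(s, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = deriv(s)
    k2 = deriv(s + 0.5 * dt * k1)
    k3 = deriv(s + 0.5 * dt * k2)
    k4 = deriv(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Two pendulums differing by one microradian in the initial angle:
a = np.array([2.0, 0.0, 2.0, 0.0])
b = a + np.array([1e-6, 0.0, 0.0, 0.0])
dt = 0.01
for step in range(3001):
    if step % 500 == 0:
        print(f"t = {step * dt:5.1f} s   angle difference = {abs(a[0] - b[0]):.2e} rad")
    a, b = rk4(a, dt), rk4(b, dt)
```

The difference grows from 1e-06 to order one: deterministic equations, unpredictable trajectories.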
Kevin Donald McKinney says
KW, #22–
Yes, apparently so.
Cloud cover: https://eos.org/research-spotlights/evaluating-cloud-cover-predictions-in-climate-models
Water vapor: https://www.nasa.gov/topics/earth/features/vapor_warming.html
Temperature anomalies: https://www.realclimate.org/index.php/climate-model-projections-compared-to-observations/
Or perhaps you meant something slightly less straightforward?
Al Bundy says
zebra: I don’t know if there is something out there that allows for more precise playing with constraints and perhaps a little damping, but if one is a ‘visual thinker’ to some degree, I think spending some tweaking time with it would help to clarify certain concepts.
AB: Shades of Killian. I’m of the same bent (but further).
zebra says
Keith Woollard #22,
Keith, exactly what value would there be in predicting the value of the GMST for some arbitrary month within .1C?
I think maybe you did misunderstand the OP. The question is about regional values. So, for example, they want to be able to tell people in the Southwest USA whether their water supply problem is going to get even worse, for various scenarios of CO2 concentration.
I think it’s a very difficult question to answer in a way that will have a significant influence on policy, for the reasons Rasmus is trying to communicate and others. And there needs to be better definition of what counts as a “region”.
But, to get back to my point about language, you seem to be confusing precision with accuracy. I would certainly bet that, if CO2 continues to rise at the current rate, ceteris paribus, the temperature for, say, August 2050 in the US SW will be higher than it was in August 2020.
That’s not an absolutely sure bet, but the models can certainly provide sufficient information to make useful predictions for any number of criteria.
Ray Ladbury says
Zebra, the point is that climate is composed entirely of averages, and you don’t measure averages, you calculate them. As such, there are always errors, and it is not clear whether the result of adding or subtracting energy will be deterministic. Take one of the denialists’ favorite systems: cyclonic storms. Ceteris paribus, you would expect adding energy to the ocean-atmosphere system to increase total cyclonic energy, either due to storms becoming stronger or because you get more storms, or both.
But ceteris ain’t paribus. You also get increased shear and dust from the Sahara, which impede cyclone formation. Competing effects. Which one wins? Can we predict it? Will the result be the same in all cases for the same (at least nearly) starting conditions?
Yes, it is true that an “average” will usually be more “predictable” than an individual measurement, but 1) that depends on the underlying noise in the system, and 2) even well-behaved systems can be noisy.
Don’t believe me? Look at the “average” for Cauchy-distributed data as you add data points.
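A quick synthetic demonstration of that last point:

```python
import numpy as np

rng = np.random.default_rng(3)
for n in (100, 10_000, 1_000_000):
    normal_mean = rng.normal(size=n).mean()
    cauchy_mean = rng.standard_cauchy(size=n).mean()
    print(f"n = {n:>9,}:  normal mean {normal_mean:+.4f}   "
          f"cauchy mean {cauchy_mean:+.4f}")
# The normal mean settles toward zero; the Cauchy mean never does,
# because the Cauchy distribution has no defined mean.
```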
zebra says
Ray Ladbury #32,
Ray, the nice thing about Wikipedia is that it allows people to attempt to edit things with which they disagree. If you don’t accept the standard definition of a “deterministic” system as given, which comes with references and is used as I’ve always seen it, you should jump in and straighten them out.
But you are demonstrating exactly my point about language… BPL, Keith, you… are using your personal definitions for the purpose of the argument, rather than showing readers what a real scientific project is like, where everyone starts out on the same page. Such ‘definition debates’ are a waste of time.
Keith Woollard says
Zebra@22
I am definitely NOT confusing precision and accuracy. I didn’t say to predict to within 0.100000. It is the accuracy I have a problem with.
You completely miss the point about language and that is because you use the same old tired incorrect argument….
“I would certainly bet that, if CO2 continues to rise at the current rate, ceteris paribus, the temperature for, say, August 2050 in the US SW will be higher than it was in August 2020.”
This is the way all climate alarmists avoid the problem of the inaccurate models and try to make them out to be different from weather models: “I don’t know what temperature it will be, but it will be hotter.” I would be hugely surprised if you were wrong. The issue is that we are talking about quantifying that warming, and it matters immensely whether it is 0.2 degrees or 1 degree. Hansen 1988 had scenario B at 2.5 degrees per century. No matter how you spin it, that was wrong.
And using Latin when we have a perfectly good phrase in English doesn’t impress me.
zebra says
Keith Woolard #34,
Here’s what you said at #15 (my bold): “Our models do not come close to having that sort of *precision*.”
So I think my observation about your confusion is quite accurate.
I’ve been busy (and a bit lazy) lately so I am just referring people to Wikipedia, but I think it would help you to read the “accuracy and precision” article there.
And maybe your real problem is that you can’t answer the basic question: What’s the question?
In applied science or engineering, stuff like how much accuracy, precision, or resolution you want depends on the problem you are trying to solve.
So, for my US Southwest folks whose water supply is already shaky, the temperature is a proxy for the state of a complex non-linear system. Whether it is .2C or 1C, the physics tells us that continuing to move up in energy from the original equilibrium state is very likely to have negative consequences.
Are you saying that they shouldn’t worry because they don’t know if it will be bad or really bad?
Kevin McKinney says
#34, KW–
This is the “same old tired argument” that because knowledge is imperfect, it must therefore be worthless or negligible.
Yawn.
Ric Merritt says
“the problem of the inaccurate models”
Perhaps I have missed news about models (and people) that significantly disagree with the well-known mainstream ones and have shown better predictions than the mainstream models. Not just in small ways, though that could interest some folks, but in big ways meaningful for policy, which would interest folks like me.
Until I’m enlightened on that score, I’ll go with the mainstream predictions over the last 4 decades, which are well past the point of proving accurate enough to guide policy overall.
Ray Ladbury says
Zebra, OK, let’s play the definitions game:
From Wikipedia, but the definition is not controversial: “In mathematics, computer science and physics, a deterministic system is a system in which no randomness is involved in the development of future states of the system.”
Are you really going to assert that we have enough data to assert that climate is deterministic–that an input of a given amount of energy into 100000 Earths all prepared in the same identical state (to measurement error) will always yield the same outcome? There will be more probable outcomes and less probable outcomes, but only a single outcome?
Ray Ladbury says
Keith Woolard, I commend to you the words of George Box: “All models are wrong. Some models are useful.”
Useful implies a purpose, and that the model has succeeded in providing insight for that purpose. The questions are: 1) What is the purpose? and 2) Is the model sufficiently accurate, precise, and reliable that it fulfills that purpose?
Look, we don’t need to look at a very sophisticated model of climate to know that anthropogenic CO2 is adding huge amounts of energy to the climate–Arrhenius was able to discern it in 1896. The subsequent data and science have made this proposition about as close to 100% certainty as human knowledge ever gets. The questions are how much it will warm and what will be the consequences.
Without the models, we are flying blind with no instruments on a dark and stormy night with a near-empty fuel tank. Without the models, the only responsible thing to do is get ‘er on the ground as quickly as possible. Alarmism is the ONLY appropriate response in such a condition. If you are less of an alarmist, the models are your best friggin’ friends. You’d better hope we can rely on them.
Mal Adapted says
zebra:
OK, z, I think I understand your “definition debate” objection. I agree with you to this extent: rigorous science, to be published in peer-reviewed journals, does require practitioners to agree at the outset on a terminology for their topic. That’s because natural language is semantically ambiguous, and usually (always?) context-dependent: good for love and poetry, not so much for science. Often, scientific peers adopt an evocative word (‘species‘) or phrase (‘Big Bang‘) from a shared natural language, redefining it more precisely for their purpose. Of course, neologism is also rampant in science. So yes, some of the debate on RC is specific enough to a particular scientific topic, e.g. thermodynamics, that I agree arguments should use well-established terminology.
The problem for many regular commenters, myself included, is that we’re fluent enough in English, but often lack a specialized vocabulary for the topic at hand, which (let’s face it) isn’t always a real scientific project. We may not be satisfied with how well a particular Wikipedia definition lets us say what we want to say in the current context. If we choose to comment anyway, then rather than assuming everyone knows what we’re on about, it may be useful to spend time explaining what we mean by a term. Often, it’s sufficient to cite which Wikipedia definition (e.g. ‘denial‘ or ‘denial (Freud)‘) does fit what we want to say. Regardless, each of us is responsible for making ourselves understood here, so some negotiation of definitions is inevitable. As always, I beg readers scroll past any tl;dr comments of mine.
zebra says
Mal and Ray
Mal, I think it was Blaise Pascal who apologized, at the end of a long letter to a friend, for not taking the time to write a shorter one…
Ray, maybe you need to just take the time to read more slowly and thoroughly to understand what someone is saying. This started with BPL saying:
“Weather is chaotic. Climate is deterministic.”
In response, I quoted Wikipedia:
Did you not read or understand these sentences?:
Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems.
and
In other words, the deterministic nature of these systems does not make them predictable.[9][10] This behavior is known as deterministic chaos, or simply chaos.
So, I was pointing out to BPL that chaotic systems are in fact deterministic, so his distinction was invalid. Ray, I really don’t understand what you don’t understand here… it still sounds like you disagree with the Wikipedia statement. Do you not accept the concept of chaos?
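A tiny deterministic system makes those sentences concrete. The logistic map below (a toy, not a climate model) is fully deterministic, yet a rounding-sized nudge in the starting value destroys predictability within a few dozen steps:

```python
x, y = 0.4, 0.4 + 1e-12        # identical except for a rounding-sized nudge
for step in range(1, 61):
    x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)   # logistic map, r = 3.9
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x - y):.3e}")
# The difference grows from 1e-12 to order one: deterministic, yet unpredictable.
```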
Mal, the point is that ‘definition debate’ refers to using language for rhetorical purpose, as BPL seems to have done, rather than trying to educate our (hypothetical) readers-sincerely-interested-in-learning.
My experience with complex non-linear systems predates the full formal development of current Chaos Theory and its terminology, so I am far from proficient in it. And then there’s just getting old and remembering only a fraction of the physics that I did know.
That’s why I always try to take the time to write things in as plain-language a form as I can, and check back rather than casually use some jargon or term-of-art.
Mal Adapted says
Ray Ladbury @39, another superlative comment. Your command of English is impressive, as is the wisdom of your advice to KW. I’ll reply only to this:
That’s an evocative analogy, nonetheless limited like all such. For one thing, we’re not flying blind, we can see what’s happening through our windshield. In particular, we’ve seen a rising trend of GMST accelerate in the last 50 years, and the accumulating costs in money and tragedy of the accompanying changes in average weather around the world. Even a simple linear extrapolation of observed secular trends (informed solely by eyeball, admittedly a naive model) is intersubjectively alarming: it’s easy to imagine a storm coming for you, just over the horizon!
The “modulz is unreliable” undead denialist meme came up on RC a year ago. Quoting again the PNAS Perspective by climate modelers Tim Palmer and Bjorn Stevens (my emphasis):
Never mind: My best regards to all, on this Dies Natalis Solis Invicti of 2020 CE 8^D!
Keith Woollard says
Yes Mal, Ray’s grasp of American English is truly impressive. It’s just a pity this is meant to be a science forum rather than literature.
As you rightly point out, the whole thrust of his argument is incorrect.
Ray has also fallen for exactly the same fallacy that Zebra did in #22. Effectively he is saying there is way more heat in the system, so the models are right. That’s a giant leap of faith. Luckily, models are not our only tool; they are just the only tool that gets us up towards scary climate sensitivity values. Without the models, we just have physics and maths to tell us that doubling CO2 gives about 1.2 degrees increase, and that 1.2 degrees increase gives us up to about 7% more absolute humidity.
Al Bundy says
Ray: The questions are how much it will warm and what will be the consequences.
AB: And how fast the trip will be. Ask Trump. Steep ramps can be tricky for various species (including, apparently, a very stable genus, which makes sense since ‘variable’ aka ‘weedy’ is the winning strategy in our new biosphere).
______________
Ray: Are you really going to assert that we have enough data to assert that climate is deterministic–that an input of a given amount of energy into 100000 Earths all prepared in the same identical state (to measurement error) will always yield the same outcome? There will be more probable outcomes and less probable outcomes, but only a single outcome?
AB: Well, we don’t and can’t have perfect knowledge but assuming we did, yes, that is one of the Big Questions, especially with regard to black holes and the information paradox.
I’m of the side that perfect information doesn’t exist, as the past doesn’t exist in a “single outcome” sort of way. Until measured, a quantum system is in all of the more and less probable “outcomes that would have been outcomes had the system been measured at that point”.
Place various lengths and breadths of myriad quantum systems across the majority of spacetime and the islands of winking particlesqueness scale like nuclei in atoms.
_______________
Mal Adapted: That’s because natural language is semantically ambiguous, and usually (always?) context-dependent: good for love and poetry, not so much for science.
AB: Eh. Science, as practiced by Einstein, has two distinct phases. The bicycle phase lacks mathematics, lacks precision, lacks all that (to go Killianesque) all of you worship about science.
That dumbasses will beat you over the head with your bicycle is an unfortunate reality. I suggest donning a helmet and using one’s bicycle voice whenever speaking to the public.
Al Bundy says
zebra: Do you not accept the concept of chaos?
AB: No, I don’t. That Wiki is based on Newtonian physics instead of quantum mechanics. Pairs of particles are flashing in and out of existence all the time, everywhere. Occasionally, one interacts with a “real” particle instead of just reintegrating with its pair.
That can’t be replicated across Ray’s worlds.
jgnfld says
To DS:
Models don’t measure temps. Thermometers measure temps. Models (climate) try to explain the factors behind why the thermometer is measuring a given temp at a particular time and place. Or rather, in this case, slowly rising temps over the globe over time. That is just a plain, simple, readily verified fact, so easy that it is now apparent to anyone who has lived long enough anywhere away from the tropics.
Re. stats: One uses extreme values to define a _range_, not a trend (though aggregated extremes can provide some data, as we see here from others). To measure a trend, you see, one takes ALL the relevant values from the whole year, and then one looks at how the aggregated monthly/seasonal/annual means change over time. Weather variance within and across seasons is also not measured at all by the extreme values you report.
Trying to use extreme-value stats to make any _climate_ statement/inference (or general statements/inferences about any measured thing) is something you really ought to stay away from without advanced math/stats training, together with a great need to make the best inferences from poor/sparse data (unlike here).
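That point, that trends come from aggregated means rather than from extremes, in sketch form, with synthetic temperatures invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1980, 2021)
true_trend = 0.02                                      # deg C per year (made up)
daily = (true_trend * (years[:, None] - years[0])      # slow warming signal
         + rng.normal(0, 5, size=(len(years), 365)))   # weather noise

annual_means = daily.mean(axis=1)
annual_maxima = daily.max(axis=1)

print("trend from annual means :", np.polyfit(years, annual_means, 1)[0])
print("trend from annual maxima:", np.polyfit(years, annual_maxima, 1)[0])
# The means recover roughly 0.02 deg C/yr; the maxima give a far noisier answer.
```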
Skull sweat is necessary in science; in political propaganda, not so much. Just throw out a bunch of muddied statements (like the famous Svalbard news item you referenced, presumably via McIntyre, B.Sc. 1969, and his blog). He specializes in these sorts of cherry-picked factoids that mislead. You simply blindly throw them out here hoping something will stick to the wall, like any penny-ante agit-prop operative. None does. Actual scientists, you see, spend their lives disentangling verifiable ideas from rivers of mud and oceans of dross.
jgnfld says
Re. AB and truly random quantum chaos versus deterministic chaos.
Nice statement. Might even be true though the jury on QM isn’t fully in. Now tell us exactly how the quantum world manifests itself truly randomly at the macro scale in aggregated weather and climate data points. I’d be curious as to the answer as would be many others.
Mal Adapted says
Keith Woollard:
I pointed out no such thing. It’s sheer perversity to take that message from Palmer and Stevens, and to unilaterally declare the “whole thrust” of Ray’s argument incorrect. I’m not your ally here. That said, I’ll resist further responding to your provocations, leaving it to others as they see fit. Ray, in particular, doesn’t need my help!
Barton Paul Levenson says
KW 43: Without the models, we just have physics and maths to tell us that doubling CO2 gives about 1.2 degrees increase, and that 1.2 degrees increase gives us up to about 7% more absolute humidity.
BPL: Paleoclimatology, like the models, indicates a Charney sensitivity around 3 K. Sorry, but killing the messenger doesn’t affect reality.
Mal Adapted says
zebra:
Ah, doesn’t apply to us, then. I’m less likely to apologize to a frenemy (def. 3).
zebra says
jgnfld #47,
“truly random quantum chaos”
Whether something can be “truly random” and also chaotic is questionable.