Last week, there was a CORDEX workshop on regional climate modelling at the International Centre for Theoretical Physics (ICTP), near Trieste, Italy.
The CORDEX initiative, as the full name ‘COordinated Regional climate Downscaling Experiment’ suggests, aims to bring together the community of regional climate modellers, and it has received the blessing of the World Climate Research Programme (WCRP).
I think the most important take-home message from the workshop is that stakeholders and end users of climate information should not look at just one simulation from global climate models, or at just one downscaling method. This is very much in agreement with the recommendations of the IPCC Good Practice Guidance Paper. The main reason is the uncertainties involved in regional climate modelling, as discussed in a previous post.
I sense that the issue of uncertainty is sometimes seen as problematic and difficult to deal with. Uncertainty does not mean that we are completely clueless – it means that we do not have accurate knowledge about absolutely every detail. Uncertainty is nothing new – we live with it every day. All scientific disciplines have to live with uncertainty too.
Moreover, we can describe and model uncertainty. The question of uncertainty is really a question of information about processes: which are well understood, which are random variations (known as ‘noise’ or stochastic processes), and which are systematic model shortcomings (biases).
Theoretical physics has a long tradition of dealing with uncertainty – take quantum physics, for example. This has led to one famous anecdote, in which – according to Wikiquote – Albert Einstein is paraphrased as saying “God does not play dice with the universe”.
Uncertainty is often modelled in statistics through stochastic models. One common phrase in statistical texts is “take a random variable X…”. I believe that statisticians can contribute more to climate science through better descriptions of the uncertainties, in addition to better calibration of statistical models. And I’m not alone: a European COST-action initiative named ‘VALUE’, spearheaded by Douglas Maraun, intends to involve statisticians in improving climate modelling.
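To make the phrase concrete, here is a minimal sketch – my own illustration, not anything from the VALUE initiative – of treating an observed series as a random variable: a deterministic trend plus stochastic noise, with the uncertainty of the estimated trend quantified by a simple residual bootstrap. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stochastic model: an observed series as trend + noise.
# The trend and noise level are invented, not from any real record.
years = np.arange(1950, 2011)
true_trend = 0.015    # deg C per year (hypothetical)
noise_sd = 0.15       # interannual 'weather' noise (hypothetical)
X = true_trend * (years - years[0]) + rng.normal(0.0, noise_sd, years.size)

# Fit a linear trend, then estimate its uncertainty by resampling
# the residuals (a simple bootstrap).
slope, intercept = np.polyfit(years, X, 1)
resid = X - (slope * years + intercept)
boot = [np.polyfit(years,
                   slope * years + intercept +
                   rng.choice(resid, resid.size, replace=True), 1)[0]
        for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"trend = {slope:.4f} C/yr, 95% interval [{lo:.4f}, {hi:.4f}]")
```

The point is not this particular model, but that once uncertainty is written down as a stochastic term, it can be estimated, tested, and communicated rather than merely lamented.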
We can get much information about all these aspects from our models and from real observations (empirical data). The key words are evaluation and testing. Hence, it is so important to combine models based on our understanding of the world with empirical data.
Another point is that general circulation models have our understanding of the relevant processes encoded into lines of computer code, whereas empirical-statistical models capture all relevant processes simply because these are embedded in the data itself. The problem with statistical models is that one does not know whether they actually capture the real connections, or whether a good match is just a coincidence.
The beauty of computer modelling shows when real features are predicted, and the combination of empirical-statistical models with physics-based models enhances our confidence in actual predictive skill.
One thing is fairly clear from the CORDEX workshop – it has very strong relevance to the global climate services called for at the World Climate Conference-3 (WCC-3), as well as to the IAV community, ‘IAV’ being ‘Impact, Adaptation and Vulnerability’.
Global warming is bound to affect a great deal of Earth’s population. For many people, scientific facts do not really speak loud and clear. What does model bias mean, and what implications does it have for my risk assessment? Scientific facts must also be complemented with narratives, or a story line that visualises possible outcomes. For global climate services, science and infrastructure are not enough – we also need to interpret the information into knowledge, based on science. I wonder whether it is assumed that this communication work will happen effortlessly, or whether people see that it will be a herculean task. Maybe we will get the answer at the open science conference in Denver, 24-28 October this year.
Andrew McKeon says
I once heard John Holdren (President Obama’s science advisor) speak on the issue of uncertainty in climate predictions. He used the example you picture here (the dice), saying essentially that anthropogenic forcing could be thought of as loading the climate dice towards undesirable outcomes. Concepts like “loading the dice” create images that lay people such as myself can intuitively understand.
Edward Greisch says
Definitely a herculean task. See my comment at: http://rockblogs.psu.edu/climate/2011/03/universities-and-the-need-to-address-global-climate-change-across-disciplines-and-programs.html
“For many people, scientific facts do not really speak loud and clear” is an understatement of enormous proportions. For most people, science is utter nonsense. For most people, math is impossible to learn. That makes science impossible to learn, but that isn’t all. Almost all people have beliefs that contradict science. Those beliefs will not change until civilization falls.
Beliefs generally do change when something as momentous as the fall of civilization happens. “Our old gods must not be strong enough.” To change the average math IQ from 100 to what is presently 150 would require a lot of evolution. “This is a herculean task” is also an understatement of even greater proportions.
Jack Maloney says
“For many people, scientific facts do not really speak loud and clear…Scientific facts must also be complemented with narratives, or a story line which visualises possible outcomes. For global climate services, science and infrastructure is not enough – we also need to interpret the information into knowledge, based on science.”
Are climate models really “scientific facts”?
[Response: Models should embody scientific facts – they are often themselves complex constructs of facts. -rasmus]
Pete Dunkelberg says
The Navy has an Arctic model, regional not global. It seems to be somewhat discounted by global modelers. Maslowski finds Arctic summer sea ice endangered sooner rather than later. What do you think of it?
one of many links:
http://www.ees.hokudai.ac.jp/coe21/dc2008/DC/report/Maslowski.pdf
Ed H says
I often wonder whether as climate scientists we should avoid using the word ‘uncertainty’ in certain situations when communicating our science. We (generally) know what we mean by the term, but as the post author suggests, I think uncertainty can have a different meaning to the public, implying that not very much is known at all.
This may be obvious and oversimplistic, but perhaps we should try to use other words instead, such as ‘range’ or ‘spread’ where appropriate? Of course, this will depend on the situation – e.g.
“we are uncertain about whether we will get more or less rain in future summers over Asia”
is fine, and we might use
“we predict that global mean temperatures will increase over the 21st century, with a range of 1.5–4 degrees.”
rather than
“we are uncertain about how much global temperatures will increase…”
Isotopious says
How does combining a number of poorly performing models improve skill?
For example, instead of making an accurate prediction of tomorrow’s weather (will it be warm or cool?), we pretend to be clever and say it will be a multi-model ensemble of warm and cool model outputs (sounds technical!).
This process makes no attempt at improving model skill; in essence, aren’t you just making it seem like the models perform better by reducing the skill of the test?
[Response: Depends what you mean by ‘poorly performing models’. Tomorrow’s weather is predicted using an ensemble – known as ‘EPS‘ at the ECMWF. That way, we can get a better sense of the probability of rain or no rain. The point is that models can never provide complete information about all the details. They nevertheless manage to simulate the large scales, e.g. the flow, lows, and highs. In addition to using ensembles to get the best prognoses, it is also important to continually work on improving the models themselves. The statement about ‘reducing the skill of the test’ doesn’t make sense to me. -rasmus]
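To illustrate the response above, here is a toy sketch of how an ensemble forecast yields a probability rather than a single yes/no answer. The member values, their distribution, and the 1 mm threshold are all invented; a real EPS generates its spread by perturbing initial conditions and model physics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: 50 members 'forecast' tomorrow's precipitation (mm).
# The gamma distribution here is just a plausible-looking stand-in.
members = rng.gamma(shape=0.8, scale=3.0, size=50)

threshold = 1.0  # mm; call it 'rain' if accumulation exceeds this
p_rain = np.mean(members > threshold)

print(f"P(rain > {threshold} mm) = {p_rain:.0%}")
print(f"median = {np.median(members):.1f} mm, "
      f"10-90% range = {np.percentile(members, 10):.1f}"
      f"-{np.percentile(members, 90):.1f} mm")
```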
rc says
Sorry I missed the previous post by Rasmus, but it seems that there are important issues raised about the use of regional climate models. There was a point when there seemed to be consensus that RCMs would have a limited window of utility as the spatial resolution of global models increased. Well, it’s now ten years later and RCMs seem to be as widely used as ever. I speculate that this is due to (among other things) the use of the RCM as the poor man’s global model, and – as described above – a disconnect between stakeholders and global model output.

The former point can be seen in its use in polar regions, where there is a desire to locally tune many physical parameterizations and not fiddle with other parts of the model. But a problem here is that RCMs don’t provide a unique answer, even though their results are frequently treated that way: some large area-average is taken and a time series is plotted, but it is likely to be strongly dependent on boundary conditions. As for the latter point, there is a persuasive argument for the need for global model results to be applicable to the local scale. But the take-home message of using more than one global simulation must be sobering in light of recent large ensemble studies (e.g., Deser et al., Clim. Dyn., 10.1007/s00382-010-0977-x). More than one?! Perhaps forty are needed for a particular location to address internal climate variability.

So these issues lead back to questions about research resources. Are efforts best used in making global model output more applicable to the local scale, or in increasing the local skill and resolution of global models? And is the window for regional climate modeling closing, or is it here for the duration?
Paul Pentony says
The link points to a 2008 paper. Since then sea ice extent has become much more stable – though still low. Sea ice volume may be a different issue.
[Response: 2-3 years is a bit short for saying that the sea ice has become more stable. -rasmus]
One Anonymous Bloke says
Al Sommer #7. [edit. the troll comment you’re referring to was removed. -moderator]
Pete Dunkelberg says
Arctic sea ice does not look stable when you take a closer look; see here, for instance.
To learn about the Naval sea ice volume model and research see
Advancements and Limitations in Understanding and Predicting Arctic Climate Change
— Wieslaw Maslowski, Naval Postgraduate School
and this series (change ‘2008’ to other years as desired)
http://www.arsc.edu/challenges/pdf/annual_2008.pdf
Isotopious says
Rasmus,
Weather models are very good for 3-day outlooks; beyond this, you can shuffle them as much as you like.
Combining models which overstate an amount of rainfall with ones which understate it improves the correlation purely by chance, rather than by skill. Although this is a useful process for seeing which models should have more weight, and which ones should be discarded altogether, the average that the ensemble produces will automatically have a higher correlation with observational data simply because of how far the set of numbers is spread out.
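For readers wanting to see the statistical effect being argued about here, the following toy sketch shows that an ensemble mean can score better than its individual members simply because independent errors partially cancel – a genuine benefit of ensembles, but distinct from any single model having improved. All series are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 'truth' plus 10 models, each with its own bias and
# independent noise (all values invented).
truth = np.sin(np.linspace(0, 6 * np.pi, 200))
biases = rng.normal(0, 0.3, 10)
models = np.array([truth + b + rng.normal(0, 0.5, truth.size)
                   for b in biases])

def rmse(x):
    return np.sqrt(np.mean((x - truth) ** 2))

print(f"mean single-member RMSE: {np.mean([rmse(m) for m in models]):.2f}")
print(f"ensemble-mean RMSE:      {rmse(models.mean(axis=0)):.2f}")
# The ensemble mean scores better because independent errors partially
# cancel -- which says nothing about any individual model improving.
```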
Jack Maloney says
[Response: Models should embody scientific facts – they are often themselves complex constructs of facts. -rasmus]
Canned “Spam” embodies ham – it is itself a complex construct of ham. But it isn’t ham. Climate model scenarios are mistaken by many – including the media – to be scientific facts. But they aren’t. Frankness about the uncertainties won’t please the headline writers, but is essential to the credibility of climate science.
Susan Anderson says
Perhaps this should go on the open thread, but just saw this, so in case any of you are interested (Thursday 10:30 am):
http://theprojectonclimatescience.org/hearing/
Found at Tenney Naumer’s blog if you want more info:
http://climatechangepsychology.blogspot.com/2011/03/congressional-hearing-climate-change.html
“Congressional hearing: “Climate Change: Examining the Processes Used to Create Science and Policy,” on March 31, 2011, to have real time commentary by leading climate scientists in order to correct misleading and inaccurate testimony — available to journalists — additionally, a teleconference follows hearing (with Kevin Trenberth, Andrew Dessler, and Gary Yohe)”
Cat J says
If climate folks want to interface with math and stat people studying uncertainty quantification, here is a good opportunity:
http://www.samsi.info/workshop/2011-12-uq-program-climate-modeling-opening-workshop
Forest of Peace says
Talk, talk, talk … time goes on … let’s plant, and you scientists and politicians, do your job! They provide a framework, ergo … do it now, with the possibilities you have.
dhogaza says
Isotopious:
Even in Portland, Oregon, this is a false statement. The accuracy of the models and their outlooks varies by time of year, but overall your statement is simply false.
And the PNW is notoriously hard to predict.
Most of the errors over the short term are related to “how soon will the next front strike” and “how low will it be” and “will the two swamp a short-term high that might build between two fronts”.
Weather, weather, weather and not vaguely related to climate …
Ray Ladbury says
While Isotopious and Jack Maloney are mainly concern trolling, they do raise an important point about the public’s misunderstanding of the role of models in science.
Models aren’t there to provide answers, but rather to facilitate understanding. So, the skill required of a model varies with the aspect of climate we are trying to understand. Fortunately, the influence of CO2 is one of the easier aspects to understand. Were we trying to understand a climate-change mitigation program using sulfate aerosols, then the results of the model runs would have to be viewed with much more trepidation.
What is more, the models don’t have to do it all. There are tons of studies–ranging from paleoclimate studies to studies of volcanic effects, etc. that constrain climate response and which generally yield results consistent with the models.
Finally, it always astounds me when denialists attack the models. That the climate is changing in response to anthropogenic CO2 is beyond doubt–even Lindzen and Spencer concede that. The models are the best tool we have to place upper limits on that change. Without them, we are flying blind in a very dangerous landscape, and risk avoidance would be the only acceptable mitigation strategy. This would necessitate draconian action.
If you are a proponent of gradual responsible action, you had better hope and/or pray (as is your spiritual inclination) that the models are sufficiently reliable. Uncertainty is NOT your friend.
phill says
It was my understanding that the vast majority of climate models were in agreement. Is this not so?
Didactylos says
Jack Maloney: I think you have a fundamental misunderstanding about headline writers. They don’t care what the scientists say. They certainly aren’t interested in uncertainty (except for their “beloved” weasel quotes). Generally, they just make stuff up.
I have traced the origins of a few particularly unlikely headlines relating to global temperature, and they do things including converting wrongly from Celsius to Fahrenheit, taking the high bound (or the low bound) and ignoring the other (and ignoring the best estimate), or completely fabricating the numbers. And since headline writers are lazy (even more than journalists in general), these errors accumulate as they get copied from the original scientific publication and from direct quotes from the scientists, through press releases, wire stories, and sloppily cut and pasted news stories and editorials. Finding where the errors were introduced is like a form of archaeology.
What can scientists do to combat this? They don’t control the media. They can issue corrections, but in this instant news era, the correction can never catch up with the original error, which is recycled forever (particularly if the error looks attractive to political operators – they will do everything in their power to keep the error alive).
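As an aside, the Celsius-to-Fahrenheit blunder mentioned above is easy to make concrete: a temperature anomaly (a difference) converts with the scale factor alone, while the +32 offset applies only to absolute temperatures. A two-line illustration with a made-up figure:

```python
# A temperature *anomaly* (a difference) converts with the scale
# factor only; the +32 offset applies to absolute temperatures.
anomaly_c = 0.7                       # deg C of warming (illustrative)

correct_f = anomaly_c * 9 / 5         # 1.26 F of warming
wrong_f = anomaly_c * 9 / 5 + 32      # 33.26 F -- a headline-sized blunder

print(f"correct: {correct_f:.2f} F of warming")
print(f"wrong:   {wrong_f:.2f} F of warming")
```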
Paul Connolly says
I think the problem with using the term uncertainty is that few people understand what it means in a technical sense. The following link is a very good beginners’ guide.
http://www.ukas.com/Technical-Information/Publications-and-Tech-Articles/Technical/technical-uncertain.asp
Ron Manley says
One thing that the debate on climate has taught is that to plan for the future you have to take account of climate change. At present, all climate-influenced human infrastructure (reservoirs, irrigation, water supply, sea walls, etc.) is designed by analysing the past statistically and assuming it will recur with equal probability in the future. This is clearly false. The statistics themselves are implicitly based on the idea that the data come from stationary, homogeneous populations – another false assumption.
I am fully aware of the limitations of the current crop of models but strongly believe that climate modelling is important for the future.
Jack Maloney says
18.Jack Maloney: I think you have a fundamental misunderstanding about headline writers. Comment by Didactylos
Didactylos – I think you have a fundamental misunderstanding of what I said. How does your “they certainly aren’t interested in uncertainty” differ from my “frankness about the uncertainties won’t please the headline writers”? Both suggest the press is more interested in sensationalism than in realities, which is certainly true of MSM climate change coverage. And true according to my half-century experience in the writing business.
You ask, “What can scientists do to combat this?” Transparency and honesty about climate uncertainties are good first steps. And making a clear distinction between computer model scenarios and scientific fact.
Didactylos says
Jack Maloney: You don’t seem to have bothered to find out what scientists have to say on the subject. Isn’t that a serious omission on your part?
And what exactly is “scientific fact”? Now you are just making stuff up (or cheerfully oversimplifying).
You keep implying that scientists aren’t upfront about uncertainty. That’s simply false, so I can only conclude Ray pegged you accurately. Go and concern troll elsewhere, please.
Kevin McKinney says
#21–
“Transparency and honesty about climate uncertainties are good first steps.”
If you read AR4, or any other IPCC report for that matter, you’ll find painfully detailed treatment of the uncertainties. And in the climate literature, as in other fields, it’s normal to try to quantify uncertainties. So I’d say this ‘first step’ has been taken long ago.
“And making a clear distinction between computer model scenarios and scientific fact.”
This seems important to you. What do you understand by the term ‘scientific fact?’ Just data? Well-supported hypothesis? (Not playing rhetorical games here; I’d really like to know.)
As to distinguishing ‘model scenarios,’ I’d say that no ‘scenario’ is ever fact. ‘Scenario’ refers to a conditional given: that is, the ‘x’ in the proposition that begins “If you assume [x], then it follows that. . .” I suspect that you are thinking, not of [x], but of the conclusion that follows.
I presume climatologists as a class are pretty clear about that distinction–and also that they wish the rest of us were, too. So, what can they do to help us–bearing in mind, of course, that ‘present company’ already takes a whole bunch of personal time to maintain this website for our edification?
One Anonymous Bloke says
Ray Ladbury #16 ‘…to facilitate understanding’ – please elaborate on this. I think of the analogy of a “physics engine” in a computer game – build a virtual hill and a virtual ball will bounce and roll down it according to the force of virtual gravity. Or engineering software that can stress test a structure before you build. I’d like better analogies though…
Didactylos #18 The only way to counter that would be to make the facts as ubiquitous and easily available as the errors. As it is, they’re available (which begs the question of why ‘journalists’ don’t check them), but ubiquitous?
Didactylos says
One Anonymous Bloke: It’s something of a tautology. If scientists could get information out there in a ubiquitous and straightforward manner, then they would effectively control the media. Since they don’t, they can’t – and vice versa.
Ray Ladbury says
OAB, all I mean by this is that – as George Box said – “All models are wrong; some models are useful.” Models allow you to determine which factors are most important and how they interact. However, ultimately, a model is a simplification – it isn’t real. The models merely alert you to the physics; they themselves aren’t the physics. Often a simple model may give you the best insight even if it doesn’t give the best agreement. And sometimes a model can simply be flat wrong (viz. the Alpher–Bethe–Gamow model: http://en.wikipedia.org/wiki/Alpher%E2%80%93Bethe%E2%80%93Gamow_paper).
Lynn Vincentnathan says
This is all good for understanding regional/local impacts – for adaptation, for strengthening the science, and for inspiring people to implement mitigation measures. But from an ecological citizen’s view, all we need to know, even at a low level of confidence, is that AGW will cause some bad things or other to happen, somewhere or other, sometime or other, to people and other creatures – that is enough to feel the heavy responsibility to mitigate here and now.
Pete Dunkelberg says
Lynn @ 27, with a high level of confidence, physics never sleeps.
One Anonymous Bloke says
Ray Ladbury #26, I am approaching the conclusion that models are neither experiments nor observations, but tools that scientists use to test their ‘notions’ (thank you, random climate mythologist). The idea is that if the model accurately matches something we can observe, you can conclude that the maths for that part of the model may explain the phenomenon. Assuming I understand that correctly (always a work in progress), to take a specific example: Arrhenius’ model forecasts that nights will warm more than days. Is there a way to explain that without delving into maths?
One Anonymous Bloke says
I’m trying to tease out the “easily explained” parts of climatology – other than the blindingly stupidly obvious “add more energy and stuff heats up” that should’ve already swept all other arguments aside.
Chris Colose says
Great post! When I see an effort for “making climate science useful,” I am reminded of a professor I have for an informal seminar class on Climate Change in Wisconsin, in which he told a story of how a laborer came to him and asked how he would be impacted by climate change. The professor replied with something like “…well the IPCC projects global temperatures will rise 2-6 degrees…”.
That is an obvious disconnect between science and useful information, and the need to bridge these gaps is what makes regional climate syntheses and interaction between climate scientists and social scientists or policy makers a critical part in moving forward.
Here in Wisconsin, we have a great effort unfolding which could serve as a high standard for regional/state-wide efforts in down-scaling climate projections and communicating the information in a way that is useful for farmers, policy makers, natural resource managers, public health officials, etc. It is the Wisconsin Initiative on Climate Change Impacts (or WICCI).
The first assessment report was released just recently, and reflects the current science of climate change in Wisconsin. Apart from the physical science, there is focus on cold water fish and fisheries, agriculture, storm water, coastal communities, and so forth, with information presented in a way to be useful for people in locations like Green Bay, Madison, Milwaukee, etc.
Andrew Browne says
It is not the statement of uncertainty but rather the overstatement of certainty in press releases that causes the loss of confidence in climate science. Obviously while avoiding a near time testable statement
Kevin McKinney says
#32–Ah, so it’s not the scientists, it’s the folks who write press releases?
Nothing to do with those other folks who smear, lie, spin, exaggerate, wrench out of context, obfuscate, ridicule, mock, distort and otherwise ‘bend, fold and mutilate’ those self-same press releases? And who do not scruple to just flat make stuff up?
Geno Canto del Halcon says
I think one of the problems in communicating with a lay audience is that even some scientists are not very clear about the nature of knowledge. What is a “fact”? There are no “facts”. Too often this term gets abused and misused. More accurate and specific language is needed: observations, data, analysis, hypothesis, theory. Muddying the water by dumbing down the language isn’t a solution, it will merely add to public misunderstanding. One doesn’t have to present the general public with complex equations; present them with well-established principles, talk about why one has made certain assumptions, and talk about what the model predicts and the degree of uncertainty. Emphasize that estimates are being made because of uncertainties in the model and the input data. Most folks do understand the nature of estimates. As soon as you say something like “this is an established fact” you’re in trouble.
Didactylos says
“Obviously while avoiding a near time testable statement”
This would be called “weather”. And climate scientists aren’t weather forecasters. The lack of short term testability is built into the science. Climate isn’t weather.
You are curiously silent about long-term testable statements. All climate indicators have strongly confirmed the ongoing presence of human-caused global warming. And we haven’t done anything to stop it, so all these people who think it will just stop really aren’t thinking very clearly, are they?
As the delayers have spun years into decades of inaction, claim after claim has come to pass. Why focus on what can’t be tested when there is so much evidence we do have? Oh yes – they want more delay, and all that evidence we do have is so very inconvenient.
[Response: In my mind, a prediction is not restricted only to the future. As long as the information about the ‘truth’ is not used directly in the model design, you can test the model against independent data (I guess the laws of physics contain information about the truth, but not at the same level as statistical evaluation). Hence, you can look backwards, e.g. to the ice ages. You can also apply the model to a different region, to see if it captures the response to different geographical conditions. -rasmus]
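Here is a minimal sketch of the kind of test described in the response: calibrate a statistical model on one period, then verify it against independent data that were never used in the model design. The predictor/predictand series are invented stand-ins (think of a large-scale circulation index predicting a local temperature).

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented predictor/predictand pair, e.g. a circulation index
# and a local temperature that partly depends on it.
n = 120
index = rng.normal(0, 1, n)
local_temp = 0.8 * index + rng.normal(0, 0.5, n)

# Calibrate on the later half; test on the earlier half -- the 'truth'
# in the test period is never seen during calibration.
train, test = slice(60, 120), slice(0, 60)
coef = np.polyfit(index[train], local_temp[train], 1)
pred = np.polyval(coef, index[test])

r = np.corrcoef(pred, local_temp[test])[0, 1]
print(f"out-of-sample correlation: {r:.2f}")
```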
Tom Scharf says
You are missing the real elephant in the room.
You are pressing the perception here that the “uncertainty problem” is all about assessing known error rates of known processes, and that is not the largest problem.
1. Many inputs to models have unbounded error. E.g., there is simply no way to know the type and quantity of aerosols over Peru in 1912, and bounding that error is more guess than science.
2. The much larger issue is the “unknown unknowns”: the climate drivers that are yet to be discovered and modeled. Climate modeling is too immature to provide a compelling case that it has identified all the main drivers of climate, i.e. all the forcings and their magnitudes.
If this were the case, the simulations would be much more successful than they actually are. It’s going to rain more somewhere? Where and when? Drought? Where and when? Models have not even approached this level of success. Why? Because the understanding of the climate is not sophisticated enough to make such predictions.
We have been treated to many opportunistic hindsight “validations” of climate modeling (Pakistan, Russia, etc.) using the “consistent with” meme that most scientists would see as very weak evidence.
Show…me…the…money.
Make future predictions with models. Publish actual results. When your results start showing skill against a reasonable null model, I start believing you have begun to understand the problem.
Why in the world would anyone find them useful until they have passed this simple test? All modeling depends on this for usefulness. When did you start finding the weatherman useful?
If anyone can point me toward on-line data that documents near term regional modelling predictions and documents actual results after the fact I would appreciate it.
[Response: In my view, uncertainty covers not just the known unknowns, but also the unknown unknowns. But unknown unknowns will affect the real data, if they exist and matter. Hence the point about bringing in empirical-statistical models together with models which only capture known knowns and, to some degree, known unknowns (via parameterisation schemes).
Some of the climate models are now used for seasonal forecasting, e.g. for ENSO. There are some examples of this type of work: ECMWF.int, IRI (http://portal.iri.columbia.edu/portal/server.pt?open=512&objID=944&PageID=7868&mode=2), and CLIK/APCC (http://clik.apcc21.net/predictions/1595). The forecasts are not yet as skillful as we would like, but there are some regions with moderate skill (the Tropics). However, this comparison is not really representative, as these types of forecasts are an initial value problem, whereas climate change ought to be viewed as a boundary condition problem.
For boundary conditions, we must also rely on empirical observations, which provide us with quite a bit of information (though there are also errors in the observations). This information must be used to evaluate the models, so that we know how much we should rely on them for a given region and variable. It is also key to improving the models – if they are constructed to represent that kind of detail. Hence, the situation is not as hopeless as you think. -rasmus]
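For the evaluation described in the response (and the ‘null model’ comparison asked for above), one common approach is a skill score that measures a hindcast against a trivial baseline such as climatology. A toy sketch with invented series:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented seasonal-mean anomalies: observations and a model hindcast
# for one region and variable.
obs = rng.normal(0, 1, 40)
model = 0.6 * obs + rng.normal(0, 0.6, 40)

# Skill score relative to a null model -- here climatology, i.e.
# always forecasting the long-term mean anomaly.
mse_model = np.mean((model - obs) ** 2)
mse_null = np.mean((obs - obs.mean()) ** 2)
skill = 1.0 - mse_model / mse_null

print(f"MSE skill score vs climatology: {skill:.2f}")
# > 0: the model beats the null; <= 0: it adds nothing.
```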
Didactylos says
“the simulations would be much more successful than they actually are. It’s going to rain more somewhere? Where and when?”
Weather again. Tom Scharf, the very foundations of your beliefs are nothing but misconceptions.
“Make future predictions with models. Publish actual results.”
If you really can’t find these, then you just aren’t looking. It gets very tiresome when people don’t bother to look. Or wait – do you actually want a weather forecast for next year? Nice try, but that’s not climate modelling. It’s weather again.
For heaven’s sake, stop demanding things that aren’t relevant.
“skill against a reasonable null model”
Remind us again what you think is reasonable, so we can tell you why it isn’t. Or better yet, just go and find out what you were told last time. It will save ever so much time and effort.
Hank Roberts says
> Tom Scharf
> what you were told last time
https://www.realclimate.org/index.php/archives/2010/09/warmer-and-warmer/comment-page-1/#comment-186591
adelady says
Not for Tom, but an analogy that might be used in particular social or teaching settings.
We all know that the reason why tennis tournaments change the balls in use so often is that the pounding they get changes the physical properties of their surfaces. And that’s why tennis players inspect balls and discard any that look to be irregularly worn or more worn than others. They try to use those with the “best” surface properties.
But, these are mere technical details at the elite level of the game. What we all know perfectly well is that, regardless of the age or irregularity of a tennis ball, when it’s served by a top ten (or top 100) player we ordinary mortals have little chance of doing more than watching it speed by us.
And so it is with lack of knowledge of particular pre-conditions in climate science. Rainfall in the Solomon Islands during the 1920s, aerosols over Tanzania in the 1950s, speed of the Tasman Glacier in 1931. These are the equivalent of assigning values to the individual fibres of the nap on a tennis ball. No such technical detail can affect the reality that a supremely powerful athlete will snap any and every ball past everyone except another top competitor.
And so with climate change. The force is so powerful that the only problems lie with identifying what, if any, factors might influence outcomes by the equivalent of 0.05 mm on a tennis court.
Davis Straub says
Latest from Richard Muller:
http://berkeleyearth.org/resources
http://www.guardian.co.uk/science/blog/2011/mar/31/scienceofclimatechange-climate-change-scepticism
Clearly, there is very close agreement between the Berkeley analysis and the warming trends reported by the three major climate groups – that is, a rise of around 0.7 degrees C since 1957. In notes prepared in advance of Thursday’s hearings, Muller writes: “The Berkeley Earth agreement with the prior analysis surprised us, since our preliminary results don’t yet address many of the known biases. When they do, it is possible that the corrections could bring our agreement into disagreement.”
Another interesting outcome from the analysis so far regards the impact of temperature stations being located near buildings, car parks and other urban sources of heat. In 2009, a former TV weatherman, Anthony Watts, published a report claiming the problem with “poor stations” was serious enough to render the US temperature record unreliable. Based on preliminary work, Muller says this isn’t true. “Over the past 50 years the poor stations in the US network do not show greater warming than do the good stations,” his notes say.
dhogaza says
Davis Straub:
Nothing to celebrate here … it simply underscores his ignorance of the field, as it has been known for years now that slicing and dicing the data many different ways, or using unadjusted vs. adjusted data (as he discusses), has virtually no effect on the trend.
Muller:
Ever hopeful that Watts is right and the climate scientists wrong … they could save a lot of Koch’s money by spending an afternoon on a series of 15-minute calls with knowledgeable people in the field.
As I said over at Climateprogress … watching Muller try to reinvent climate science (guided by his advisor Watts, to some extent!) is a bit like watching Fleischer and Pons reinvent physics …
Bright guy, out of his field, looking foolish.
Cliff says
Sorry to hijack the post, but speaking of the IPCC: I’m French, and I’m torn by the presentation of Courtillot. I have read your articles “Les Chevaliers de l’Ordre de la Terre Plate”.
But nevertheless, some of his assertions (IPCC models discarded) do not sound right.
have you seen this video:
http://www.youtube.com/watch?v=IG_7zK8ODGA&t=0m29s
It would be great if a climate scientist could respond.
Dan H. says
Tom brings up some good points regarding uncertainty; mainly, we do not know the effect of the “unknown unknowns.” While the models are good based on the known inputs, the unknowns may change the outputs significantly.
Many of the models are becoming useful for seasonal predictions, namely ENSO. The recent forecast of a cool NH spring based on enlarged snow cover is another example. Longer term, we have greater uncertainty, as witnessed by the much wider ranges. Small factors may make large contributions when multiplied out over many years. Boundary conditions are another large uncertainty; without them, parameters may continue to affect results long beyond their realistic ranges.
It still appears that some people cannot distinguish weather from climate. Local events are simply part of the larger climate and may deviate significantly on a daily basis. Using models, we see for instance that the rainfall in Brisbane was not extreme this year compared to past amounts under similar oceanic conditions. The rainfall under different conditions is irrelevant for comparison’s sake; do we know all the unknowns?
Richard Muller is attempting to remove some of the uncertainty with his project at Berkeley. His first approach shows general agreement with the various agency records without any data adjustment. He admits that many factors, such as urban bias, have not been addressed, but station location has; he applauds Anthony Watts for his detail in station location. His results are still preliminary, and the final results may change dramatically from his presentation. He admits he was surprised that his results show a similar 0.6C temperature rise over the last 60-year temperature cycle as the other groups. I am eagerly awaiting his future work and publications.
Pete Dunkelberg says
@ 42 “watching Muller try to reinvent climate science….” – another Curry?
@ various “unknown unknowns” – overruled by Nature. Paleoclimate knows and shows all. See e.g. http://www.columbia.edu/~jeh1/mailings/2011/20110118_MilankovicPaper.pdf
Pete Dunkelberg says
Party at Romm’s!
Joe Cushley says
“He (Muller) admits that he was surprised that his results show a similar 0.6C temperature rise over the last 60 year temperature cycle as do the other groups. I am eagerly awaiting his future work and publications…”
Why is he surprised? Many, many superb scientists devote their lives to deliver data of a very high quality and he’s surprised that his results match theirs. Sheesh.
And what’s this with the temperature cycle? Is the next 60 years going to replicate the pattern of the foregoing 60? And “urban bias” not being addressed?! But perhaps the most hilarious line: “he applauds Anthony Watts” – not the excellent scientists who came up with the initial figures which his project has corroborated; oh no, he applauds Anthony Watts…. He applauds Watts for what? For getting it wrong about poor station siting having any effect on the temp records?
Dan, you’re good for a sardonic chuckle and a shake of the head, but nothing else.
Cue the first faux sceptic attack on BEST…
Didactylos says
Dan H.: Your strategy of blurring the boundary between weather and climate is really quite clever, well done! By cherry-picking a few successful (mostly) long range weather forecasts, you neatly imply that climate predictions are just a really, really long range weather forecast, and so while possibly right, most likely they will fall prey to all this “uncertainty” you wave around.
Doesn’t it bother you that this entire edifice you have constructed is false?
In all this, why don’t you pay attention to the physical models we have: the climate over the last few decades, and palaeoclimate. Both support the idea that climate models do not ignore any large unknown or unknown unknown. In fact, they indicate that the unknowns we are aware of may make climate change worse than current model estimates.
You did get one thing right, though: “It still appears that some people cannot distinguish weather from climate.”
Didactylos says
dhogaza: If deniers have taught us anything, it is that if you cook data long enough, you can make it say whatever you want. In the case of recent temperature data, with its inherent strong linear trend, the trick is just to remove a conveniently scaled linear trend, or to introduce a few discontinuities.
Making reality go away is easy. Just close your eyes and hum really loud.
The main problem at the moment, though, is Muller’s “random 2%” just isn’t a good way of eliminating any of the biases they claim it does. Hopefully their full analysis won’t be so scatterbrained.