We discuss climate models a lot, and from the comments here and in other forums it’s clear that there remains a great deal of confusion about what climate models do and how their results should be interpreted. This post is designed to be a FAQ for climate model questions – of which a few are already given. If you have comments or other questions, ask them as concisely as possible in the comment section and if they are of enough interest, we’ll add them to the post so that we can have a resource for future discussions. (We would ask that you please focus on real questions that have real answers and, as always, avoid rhetorical excesses).
Part II is here.
Quick definitions:
- GCM – General Circulation Model (sometimes Global Climate Model) which includes the physics of the atmosphere and often the ocean, sea ice and land surface as well.
- Simulation – a single experiment with a GCM
- Initial Condition Ensemble – a set of simulations using a single GCM but with slight perturbations in the initial conditions. This is an attempt to average over chaotic behaviour in the weather.
- Multi-model Ensemble – a set of simulations from multiple models. Surprisingly, an average over these simulations gives a better match to climatological observations than any single model.
- Model weather – the path that any individual simulation will take has very different individual storms and wave patterns than any other simulation. The model weather is the part of the solution (usually high frequency and small scale) that is uncorrelated with another simulation in the same ensemble.
- Model climate – the part of the simulation that is robust and is the same in different ensemble members (usually these are long-term averages, statistics, and relationships between variables).
- Forcings – anything that is imposed from the outside that causes a model’s climate to change.
- Feedbacks – changes in the model that occur in response to the initial forcing that end up adding to (for positive feedbacks) or damping (negative feedbacks) the initial response. Classic examples are the amplifying ice-albedo feedback, or the damping long-wave radiative feedback.
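The distinction between model weather and model climate above can be illustrated with a toy chaotic system. The sketch below uses the classic Lorenz-63 equations (a three-variable toy, nowhere near a GCM) to build a tiny initial condition ensemble; every number is illustrative only:

```python
import random

def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (a crude but
    common integration scheme for demos)."""
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

def run(x0, n=50000, burn=5000):
    """Integrate from a given initial x and record z after a burn-in."""
    x, y, z = x0, 1.0, 1.0
    traj = []
    for i in range(n):
        x, y, z = lorenz_step(x, y, z)
        if i >= burn:
            traj.append(z)
    return traj

# An "initial condition ensemble": the same model, tiny perturbations.
random.seed(0)
members = [run(1.0 + 1e-6 * random.random()) for _ in range(5)]

# "Model weather": instantaneous states differ between members.
print("z at one instant: ", [round(m[1000], 2) for m in members])

# "Model climate": long-term means nearly coincide across members.
print("long-term mean z: ", [round(sum(m) / len(m), 1) for m in members])
```

The instantaneous values (the “weather”) decorrelate completely between ensemble members, while the long-term statistics (the “climate”) are essentially the same in each.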
Questions:
- What is the difference between a physics-based model and a statistical model?
Models in statistics or in many colloquial uses of the term often imply a simple relationship that is fitted to some observations. A linear regression line through a change of temperature with time, or a sinusoidal fit to the seasonal cycle for instance. More complicated fits are also possible (neural nets for instance). These statistical models are very efficient at encapsulating existing information concisely and as long as things don’t change much, they can provide reasonable predictions of future behaviour. However, they aren’t much good for predictions if you know the underlying system is changing in ways that might possibly affect how your original variables will interact.
Physics-based models on the other hand, try to capture the real physical cause of any relationship, which hopefully are understood at a deeper level. Since those fundamentals are not likely to change in the future, the anticipation of a successful prediction is higher. A classic example is Newton’s Law of motion, F=ma, which can be used in multiple contexts to give highly accurate results completely independently of the data Newton himself had on hand.
Climate models are fundamentally physics-based, but some of the small scale physics is only known empirically (for instance, the increase of evaporation as the wind increases). Thus statistical fits to the observed data are included in the climate model formulation, but these are only used for process-level parameterisations, not for trends in time.
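As a concrete toy example of the purely statistical kind of model described above, here is a least-squares trend line fitted to an invented temperature series (synthetic data, nothing real):

```python
import random

def linear_fit(ts, ys):
    """Ordinary least-squares fit y ≈ a + b·t: a simple statistical model."""
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
            sum((t - tbar) ** 2 for t in ts)
    return ybar - slope * tbar, slope

# Synthetic "temperature record": a linear trend plus random "weather".
random.seed(0)
ts = [i / 12 for i in range(240)]                     # 20 years, monthly
ys = [0.02 * t + random.gauss(0.0, 0.1) for t in ts]  # true trend: 0.02/yr

intercept, slope = linear_fit(ts, ys)
print(f"fitted trend: {slope:.3f} deg/yr")
```

The fit summarises the data efficiently, but it contains no physics: nothing in it can anticipate a change in how the underlying system behaves.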
- Are climate models just a fit to the trend in the global temperature data?
No. Much of the confusion concerning this point comes from a misunderstanding stemming from the point above. Model development actually does not use the trend data in tuning (see below). Instead, modellers work to improve the climatology of the model (the fit to the average conditions) and its intrinsic variability (such as the frequency and amplitude of tropical variability). The resulting model is pretty much used ‘as is’ in hindcast experiments for the 20th Century.
- Why are there ‘wiggles’ in the output?
GCMs perform calculations with timesteps of about 20 to 30 minutes so that they can capture the daily cycle and the progression of weather systems. As with weather forecasting models, the weather in a climate model is chaotic. Starting from a very similar (but not identical) state, a different simulation will ensue – with different weather, different storms, different wind patterns – i.e. different wiggles. In control simulations, there are wiggles at almost all timescales – daily, monthly, yearly, decadally and longer – and modellers need to test very carefully how much of any change that happens after a change in forcing is really associated with that forcing and how much might simply be due to the internal wiggles.
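A tiny illustration of this sensitivity to initial conditions, using the one-line logistic map rather than anything resembling a GCM (purely a toy):

```python
# Two runs of a chaotic toy system from almost identical starting points:
# the "weather" soon decorrelates, even though the equations are identical.
def logistic(x, r=3.9):
    """The logistic map in its chaotic regime (r = 3.9)."""
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001   # differ only in the sixth decimal place
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step in (1, 10, 25, 50):
        print(f"step {step:2d}: {a:.4f} vs {b:.4f}  (gap {abs(a - b):.1e})")
```

The gap between the two runs grows roughly exponentially until the trajectories are effectively unrelated – the analogue of two ensemble members having entirely different storms.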
- What is robust in a climate projection and how can I tell?
Since every wiggle is not necessarily significant, modellers need to assess how robust particular model results are. They do this by seeing whether the same result is seen in other simulations, with other models, whether it makes physical sense and whether there is some evidence of similar things in the observational or paleo record. If that result is seen in multiple models and multiple simulations, it is likely to be a robust consequence of the underlying assumptions, or in other words, it probably isn’t due to any of the relatively arbitrary choices that mark the differences between different models. If the magnitude of the effect makes theoretical sense independently of these kinds of models, then that adds to its credibility, and if in fact this effect matches what is seen in observations, then that adds more. Robust results are therefore those that quantitatively match in all three domains. Examples are the warming of the planet as a function of increasing greenhouse gases, or the change in water vapour with temperature. All models show basically the same behaviour, which is in line with basic theory and observations. Examples of non-robust results are the changes in El Niño as a result of climate forcings, or the impact on hurricanes. In both of these cases, models produce very disparate results, the theory is not yet fully developed and observations are ambiguous.
- How have models changed over the years?
Initially (ca. 1975), GCMs were based purely on atmospheric processes – the winds, radiation, and with simplified clouds. By the mid-1980s, there were simple treatments of the upper ocean and sea ice, and cloud parameterisations started to get slightly more sophisticated. In the 1990s, fully coupled ocean-atmosphere models started to become available. This is when the first Coupled Model Intercomparison Project (CMIP) was started. This has subsequently seen two further iterations, the latest (CMIP3) being the database used in support of much of the model work in the IPCC AR4. Over that time, model simulations have become demonstrably more realistic (Reichler and Kim, 2008) as resolution has increased and parameterisations have become more sophisticated. Nowadays, models also include dynamic sea ice, aerosols and atmospheric chemistry modules. Issues like excessive ‘climate drift’ (the tendency for a coupled model to move away from a state resembling the actual climate), which were problematic in the early days, are now much reduced.
- What is tuning?
We are still a long way from being able to simulate the climate with a true first-principles calculation. While many basic aspects of physics can be included (conservation of mass, energy etc.), many need to be approximated for reasons of efficiency or resolution (i.e. the equations of motion need estimates of sub-gridscale turbulent effects, radiative transfer codes approximate the line-by-line calculations using band averaging), and still others are only known empirically (the formula for how fast clouds turn to rain for instance). With these approximations and empirical formulae, there is often a tunable parameter or two that can be varied in order to improve the match to whatever observations exist. Adjusting these values is described as tuning and falls into two categories. First, there is the tuning of a single formula in order for that formula to best match the observed values of that specific relationship. This happens most frequently when new parameterisations are being developed.
Secondly, there are tuning parameters that control aspects of the emergent system. Gravity wave drag parameters are not very constrained by data, and so are often tuned to improve the climatology of stratospheric zonal winds. The threshold relative humidity for making clouds is tuned often to get the most realistic cloud cover and global albedo. Surprisingly, there are very few of these (maybe a half dozen) that are used in adjusting the models to match the data. It is important to note that these exercises are done with the mean climate (including the seasonal cycle and some internal variability) – and once set they are kept fixed for any perturbation experiment.
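To make the idea concrete, here is a deliberately toy version of the second kind of tuning: a grid search over a hypothetical critical relative humidity so that a fake cloud diagnostic matches a fake “observed” value. The function, the parameter name and all the numbers are invented for illustration and do not come from any real GCM:

```python
def toy_cloud_cover(rh, rh_crit):
    """A toy parameterisation: cloud fraction ramps up above rh_crit."""
    return max(0.0, min(1.0, (rh - rh_crit) / (1.0 - rh_crit)))

observed_mean_cover = 0.62                     # pretend climatology
rh_samples = [0.55, 0.65, 0.75, 0.85, 0.95]    # pretend humidity field

def cost(rh_crit):
    """Mismatch between the toy diagnostic and the pretend observation."""
    mean_cover = sum(toy_cloud_cover(rh, rh_crit) for rh in rh_samples) / 5
    return abs(mean_cover - observed_mean_cover)

# A simple grid search over the single tunable parameter.
best = min((c / 100 for c in range(30, 90)), key=cost)
print("tuned rh_crit:", best)
```

Real tuning is done against the mean climate with far richer diagnostics, but the logic – vary a loosely constrained parameter to minimise a mismatch, then freeze it – is the same.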
- How are models evaluated?
The amount of data that is available for model evaluation is vast, but falls into a few clear categories. First, there is the climatological average (maybe for each month or season) of key observed fields like temperature, rainfall, winds and clouds. This is the zeroth-order comparison to see whether the model is getting the basics reasonably correct. Next comes the variability in these basic fields – does the model have a realistic North Atlantic Oscillation, or ENSO, or MJO? These are harder to match (and indeed many models do not yet have realistic El Niños). More subtle are comparisons of relationships in the model and in the real world. This is useful for short data records (such as those retrieved by satellite) where there is a lot of weather noise one wouldn’t expect the model to capture. In those cases, looking at the relationship between temperatures and humidity, or cloudiness and aerosols, can give insight into whether the model processes are realistic or not.
Then there are the tests of climate changes themselves: how does a model respond to the addition of aerosols in the stratosphere such as was seen in the Mt Pinatubo ‘natural experiment’? How does it respond over the whole of the 20th Century, or at the Maunder Minimum, or the mid-Holocene or the Last Glacial Maximum? In each case, there is usually sufficient data available to evaluate how well the model is doing.
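The zeroth-order comparison described above amounts to computing simple summary statistics between a model climatology and observations. A minimal sketch, with invented monthly numbers for a single location:

```python
import math

def rmse(model, obs):
    """Root-mean-square error between two equal-length series."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def correlation(model, obs):
    """Pearson correlation: does the model reproduce the pattern?"""
    n = len(obs)
    mb, ob = sum(model) / n, sum(obs) / n
    cov = sum((m - mb) * (o - ob) for m, o in zip(model, obs))
    sm = math.sqrt(sum((m - mb) ** 2 for m in model))
    so = math.sqrt(sum((o - ob) ** 2 for o in obs))
    return cov / (sm * so)

# Made-up monthly temperature climatologies (deg C) for one location.
obs   = [2.1, 3.0, 6.5, 10.2, 14.8, 18.0, 20.1, 19.6, 16.0, 11.2, 6.3, 3.0]
model = [1.5, 2.8, 7.0, 10.8, 15.5, 18.9, 21.0, 20.2, 16.4, 11.0, 5.8, 2.4]

print(f"RMSE: {rmse(model, obs):.2f} deg C")
print(f"pattern correlation: {correlation(model, obs):.3f}")
```

Real evaluations use gridded global fields and many more diagnostics, but these two numbers – error magnitude and pattern agreement – are the basic building blocks.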
- Are the models complete? That is, do they contain all the processes we know about?
No. While models contain a lot of physics, they don’t contain many small-scale processes that more specialised groups (of atmospheric chemists, or coastal oceanographers for instance) might worry about a lot. Mostly this is a question of scale (model grid boxes are too large for the details to be resolved), but sometimes it’s a matter of being uncertain how to include it (for instance, the impact of ocean eddies on tracers).
Additionally, many important bio-physical-chemical cycles (for the carbon fluxes, aerosols, ozone) are only just starting to be incorporated. Ice sheet and vegetation components are very much still under development.
- Do models have global warming built in?
No. If left to run on their own, the models will oscillate around a long-term mean that is the same regardless of what the initial conditions were. Given different drivers, volcanoes or CO2 say, they will warm or cool as a function of the basic physics of aerosols or the greenhouse effect.
- How do I write a paper that proves that models are wrong?
Much more simply than you might think since, of course, all models are indeed wrong (though some are useful – George Box). Showing a mismatch between the model output and the observational data is made much easier if you recall the signal-to-noise issue we mentioned above. As you go to smaller spatial and shorter temporal scales the amount of internal variability increases markedly and so the number of diagnostics which will be different to the expected values from the models will increase (in both directions of course). So pick a variable, restrict your analysis to a small part of the planet, and calculate some statistic over a short period of time and you’re done. If the models match through some fluke, make the space smaller, and use a shorter time period and eventually they won’t. Even if models get much better than they are now, this will always work – call it the RealClimate theory of persistence. Now, appropriate statistics can be used to see whether these mismatches are significant and not just the result of chance or cherry-picking, but a surprising number of papers don’t bother to check such things correctly. Getting people outside the, shall we say, more ‘excitable’ parts of the blogosphere to pay any attention is, unfortunately, a lot harder.
- Can GCMs predict the temperature and precipitation for my home?
No. There are often large variations in the temperature and precipitation statistics over short distances because the local climatic characteristics are affected by the local geography. The GCMs are designed to describe the most important large-scale features of the climate, such as the energy flow, the circulation, and the temperature in a grid-box volume (through the physical laws of thermodynamics, dynamics, and the ideal gas law). A typical grid box may have a horizontal area of ~100×100 km², but the size has tended to shrink over the years as computers have increased in speed. The shape of the landscape (the details of mountains, coastlines etc.) used in the models reflects the spatial resolution, hence the model will not have sufficient detail to describe local climate variations associated with local geographical features (e.g. mountains, valleys, lakes, etc.). However, it is possible to use a GCM to derive some information about the local climate through downscaling, since the local climate is affected by both the local geography (a more or less given constant) and the large-scale atmospheric conditions. The results derived through downscaling can then be compared with local climate variables, and can be used for further (and more stringent) assessments of the combined model-downscaling technique. This is, however, still an experimental technique.
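A bare-bones sketch of the empirical-statistical downscaling idea: regress a local station variable on the large-scale grid-box value, then apply the fitted relation to model output. All numbers below are invented for illustration:

```python
def fit_line(xs, ys):
    """Least-squares fit of y ≈ a + b·x (the downscaling relation)."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    slope = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / \
            sum((x - xb) ** 2 for x in xs)
    return yb - slope * xb, slope

# Training data: observed grid-box means vs. a valley station (deg C).
gridbox = [5.0, 7.0, 9.0, 11.0, 13.0]
station = [7.1, 9.4, 11.2, 13.6, 15.3]

a, b = fit_line(gridbox, station)

# Apply the relation to a (hypothetical) future grid-box value from a GCM.
projected_gridbox = 14.0
print(f"downscaled station estimate: {a + b * projected_gridbox:.1f} deg C")
```

The key assumption, as the text notes, is that the local-to-large-scale relationship (set by the fixed geography) carries over to the changed climate – which is exactly why such results need further testing.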
- Can I use a climate model myself?
Yes! There is a project called EdGCM which has a nice interface and works with Windows and lets you try out a large number of tests. ClimatePrediction.Net has a climate model that runs as a screensaver in a coordinated set of simulations. GISS ModelE is available as a download for Unix-based machines and can be run on a normal desktop. NCAR CCSM is the US community model and is well-documented and freely available.
Jim Eager says
Re Dave Andrews @31: “You have described very well the economists/bankers mathematical models that predicted everything was rosy in this best of all possible worlds.”
Did you even read the post? The very first “Question” outlines the fundamental difference between a statistical model (your economists/bankers mathematical model) and a dynamic physical model (general circulation model) based on physical laws and properties.
Ignorance is correctable. Willful ignorance is inexcusable.
Martin Vermeer says
Dietrich Hoecht #27:
the plateau is commonly, and credibly, attributed to industrial aerosols produced by Northern hemisphere industrialized countries. (Not to be confused with stratospheric aerosols due to large volcanic eruptions.) See, e.g., Figure 9.5 in the IPCC AR4 WG1 report. Or
http://tamino.wordpress.com/2008/01/09/dead-heat/
It was the cooling produced by these aerosols that offset the greenhouse warming and even led (in the popular press, not the scientific literature) to the “ice age scare” of the 1970’s:
https://www.realclimate.org/index.php/archives/2005/01/the-global-cooling-myth/
Yes, googling is an art :-)
Ark says
Mae (#33). Predictions of global warming are not “based on” doubling of CO2. Even the present 387 ppm would lead to further climate change, and ideas on a relatively ‘safe’ level range from 350 to 450 ppm. At the moment the CO2 concentration is increasing by approx. 3 ppm per year, so at this rate it would reach 450 ppm in little more than 20 years:
http://www.esrl.noaa.gov/gmd/ccgg/trends/co2_data_mlo.html
GlenFergus says
It’s semantics, but for cross-discipline understanding, you might reconsider that “physical model” coinage (yours?). I understand you to mean “a model built to represent basic physical principles”, but others could be confused.
The term physical model is widely used in engineering to mean a model built from actual physical stuff, usually to scale, sometimes in quite complex ways. Believe it or not, such models are still widely used to predict the behavior of things which are computationally difficult or even intractable. Examples include hydraulic structure models (e.g. dykes to resist hurricane waves and rising sea levels!) and the wind tunnel models used in aeronautics. In engineering jargon, a GCM would be a numerical model, so I guess the term you’d be looking for would be physically-based numerical model…
[Response: See #66 – I’ve gone with physics-based. Thanks – gavin]
Richard C says
Are there measurements of “Earthlight”? Do we have experimental records demonstrating the atmosphere is not CO2 saturated?
pete best says
Re #47 It might not save us but it will potentially disrupt the global economy and political order enough to make AGW a problem for future generations and not this one.
If economic growth is the raison d’être of the present globalized world, then it’s a pipe dream. Tar sands and oil shale cannot scale up in time, and neither can coal, to offset the impact of peak oil on a large scale. Recently, at $147 a barrel, it was starting to hurt; when we cannot pump more oil, a depression is not on the cards, and demand is still growing, it will spell significant problems for the world.
JCH says
On economic models, in 2004 representatives (Hank Paulson, then the CEO of Goldman Sachs, was one of them) of some the investment banks met with the SEC to request a change in their required capital reserves. There was one dissenting opinion, mailed in from a guy with a PO box:
http://www.sec.gov/rules/proposed/s72103/s72103-9.pdf
Barton Paul Levenson says
Mark writes:
Asteroids are very small compared to the Earth and would have great difficulty changing Earth’s orbit noticeably, even with a major impact.
Barton Paul Levenson says
Dietrich Hoecht writes:
The plateau of temperature from about 1940 to 1970 is generally attributed to an increase in high-altitude aerosols due to the tremendous ramping up of industry in response to World War II. When pollution controls began to be introduced in the 1970s, the temperature began rising again.
Barton Paul Levenson says
Dave Andrews writes:
There is considerable evidence that climate models are better. Just because you’re not familiar with it doesn’t mean the evidence doesn’t exist. You really ought to research these things more thoroughly before making pronouncements about them.
Global climate models have successfully predicted the rise in temperature as greenhouse gases increased, the cooling of the stratosphere as the troposphere warmed, polar amplification due the ice-albedo effect and other effects, greater increase in nighttime than in daytime temperatures, and the magnitude and duration of the cooling from the eruption of Mount Pinatubo.
Barton Paul Levenson says
Richard C writes:
How would it get CO2 saturated? What does that even mean?
Jaydee says
14 Samson
The best result I had discussing chaos with a climate skeptic was to show him two pictures of Jupiter taken 21 years apart. (From Voyager 1 and Cassini I think). At the large scale the images are almost identical it is the small scale weather type stuff that is different.
http://www.windows.ucar.edu/tour/link=/jupiter/images/jupiter_ir_vis_image.html
http://space.about.com/od/jupiter/ig/Jupiter-Pictures-Gallery/Jupiter-Portrait.htm
I’m sure that you can find better images than the ones above.
Another approach is to point out that the Earth’s orbit around the Sun is also chaotic (along with the other planets’), but we can still slingshot a space probe around Jupiter or Saturn very accurately.
http://www.fortunecity.com/emachines/e11/86/solarsys.html
http://en.wikipedia.org/wiki/N-body_problem
Richard says
Do you think it possible for the chaotic “weather” unsteadiness to modify the external forcings?
For example: could different oceanic circulation rates change the oceanic CO2 sink/source behaviour, or could different atmospheric conditions change the mixing rates of atmospheric gases and hence modify their effect on the solar forcing?
[Response: Those would be feedbacks in the full system. – gavin]
Hank says
You indicate it is surprising that multi-model ensembles give a better match to climatological observations. Why should that be surprising, and how exactly does it happen that multi-model ensembles give better matches?
[Response: It’s surprising because the different models are not really a random selection from the space of all possible models, and so wouldn’t necessarily be expected to conform to the central limit theorem or similar. The paper by Reichler and Kim in BAMS this year has some of the details. – gavin]
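A toy calculation shows the error-cancellation part of the story (though, as the response notes, it does not explain why non-randomly constructed models should behave this way). Every “model” here is just a synthetic truth plus its own invented bias and noise:

```python
import math
import random

random.seed(1)
truth = [math.sin(i / 5) for i in range(100)]   # a pretend observed field

def fake_model(bias):
    """A pretend model: truth plus a systematic bias plus random error."""
    return [t + bias + random.gauss(0.0, 0.3) for t in truth]

models = [fake_model(random.uniform(-0.2, 0.2)) for _ in range(10)]
ens_mean = [sum(vals) / len(vals) for vals in zip(*models)]

def rmse(field):
    return math.sqrt(sum((f - t) ** 2 for f, t in zip(field, truth)) /
                     len(truth))

print("single-model RMSEs:", [round(rmse(m), 2) for m in models[:3]])
print("ensemble-mean RMSE:", round(rmse(ens_mean), 2))
```

Because the individual errors are partly independent, they partially cancel in the ensemble mean, which therefore scores better than any single member in this construction.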
Mark says
Richard #55. Yes.
Go high enough and there isn’t enough CO2 to be saturated in ANY band. Yet you are still in the atmosphere.
Even at the ground level, the width of the saturated bands will get wider as you put more CO2.
Alternatively, since your wording is ATROCIOUS, yes, since anything less than 100% CO2 is a non-CO2 saturated atmosphere, then we don’t have CO2 saturated atmosphere.
Write your questions CORRECTLY.
[edit – stay polite]
Mark says
Glen, #54. How about “physics based model”?
[Response: I concur. I’ve changed it above. – gavin]
Like the old Looking Glass flight sim (I forget the name): it was based not on a model of how a plane acts but on a full physical simulation of airflow over the aircraft.
VERY computationally expensive, and they only managed to simulate a few airframes.
The next one used a model where it said things like “if you have the thrust at THIS level, operate like THIS if you are in THIS configuration”. Which is computationally MUCH simpler but you have to change the model for each airframe you want to include. This is the form all the other flight sims on PC used.
One used just the physics and simulated the situation. A physics based model (or Physical Model in this topic, see how you CAN get that as a perfectly cromulent word?). Or what I call a simulation. Anything you get is an emergent property of the physics involved (or errors in your discrete analogue needed for making a numerical model of the continuous equations).
The other used a constraint model that was based on finding the operating characteristics of real aircraft. A statistical model. Or what I call a real model: it does what you tell it to, the only surprises are emergent properties.
Ed says
RE #42 Craig
One example would be the Laffer Curve. Cutting taxes increases overall tax revenues.
David B. Benson says
jcbmack (41) — Suppose the relative humidity goes up; then the water vapor is more likely to condense and precipitate out. Suppose the relative humidity goes down; less likely.
There is a recent study tending to demonstrate that the average relative humidity is indeed close to constant.
Rod B says
Steve (7), you’re better off keeping your audience ignorant rather than teaching them science and then hitting ’em with Edward’s non-scientific litany of trivial individual cherry-picked anecdotes (21). “Podunk set a new September high temperature record two years in a row. Game over! World’s getting hotter!” There needs to be some level of knowledge, though not much, to see through that inductive logic as proof. Or, the argument that all sceptics are devils makes some proponents feel good but usually falls quite short of scientifically swaying others.
Rod B says
Very helpful informative post. Thanks
JCH says
“One example would be the Laffer Curve. Cutting taxes increases overall tax revenues. …”
First, it was not a model. He famously drew it on a napkin, and correctly admits it was not his idea.
“[T]he whole California gang had taken [the Laffer curve] literally (and primitively). The way they talked, they seemed to expect that once the supply-side tax cut was in effect, additional revenue would start to fall, manna-like, from the heavens. Since January, I had been explaining that there is no literal Laffer curve. …” – David Stockman, President Ronald Reagan’s budget director
Lynn Vincentnathan says
Timely post. Just read “Telling the truth about climate change has become a revolutionary act” at http://www.climateark.org/shared/reader/welcome.aspx?linkid=109667
Even supposing the models “lie,” they might be underestimating the problem. Why is it with denialists that if climate science is wrong, it’s got to be wrong in overestimating GW? (that’s just a frustrated rhetorical Q)
And I haven’t seen any “collective delusions and hysterical illness” like people trampling each other at Home Depot to buy compact fluorescent bulbs.
jcbmack says
Mark #66, well put. Of course emergent properties will foul up predictions versus actual results.
jcbmack says
David B. Benson #68, could you point me in the direction of this study? Thanks.
Stuart says
#67: One example would be the Laffer Curve. Cutting taxes increases overall tax revenues.
Only true on the right hand side of the curve. Empirically Sweden (I think) found it to be at around 70-80% of GDP taken in taxes. Note the curve is usually drawn very incorrectly – 100% tax could be considered communism, and while that tends to drop productivity significantly, it certainly doesn’t go to 0.
jcbmack says
Re #67 & 71: Of course, in the Laffer curve tax increases raise revenues until the optimum point is passed; beyond it, reducing taxes increases revenues. So revenues are a function of taxes. Then again, Roosevelt’s New Deal, in which the currency was devalued to increase the value of the gold backing it during the Great Depression (1933), worked with no graph, even if carried out somewhat arbitrarily; the empirical observation of its effectiveness was enough.
The models in the case of AGW are a little more scientific than either approach; not all the variables can be accounted for, but they do provide insights into the totality of the research. The models (and the data fed into them), satellite data, and observations from researchers in the geographical areas affected by GW agree more often than not, as long as the averages are taken into account.
Hank Roberts says
For clarity, the “Hank” at 4 novembre 2008 at 1:53 PM isn’t me. Not complaining; just to avoid confusion.
David B. Benson says
jcbmack (74) — Unfortunately, no. I saw it mentioned in some comment of some thread on one of the many climate blogs I frequent. You could try Google Scholar to search.
A good general overview of water vapor is in CaltechWater.pdf on Ray Pierrehumbert’s web site.
Nonlinear guy says
Question: before talking about simulating climate CHANGE, how long does the climate science community expect it to take before GCMs can reproduce the real-world climate PRIOR to the human-induced CO2 perturbation in terms of:
– “equilibrium point”, i.e. without artificial flux adjustment to avoid climatic drift,
– “natural variability”, in terms of, for instance, the Hurst coefficient at different locations on the planet?
Pat Neuman says
Question: What is wrong with this reply by Roger A. Pielke Sr.?
… “I think the IPCC was basically a very narrowly focused document. In fact it was basically written mostly by atmospheric scientists. And they’re focusing on a very narrow issue where the atmospheric increase of CO2 feeds down to affect the climate that has all these effects on resources, and I think that is so narrowly confined as to be of little use to policymakers in terms of what’s really going to happen.”
http://www.motherjones.com/interview/2008/11/sustainability-interviews-roger-a-pielke.html
[Response: He isn’t reading the same report as everyone else. – gavin]
Dave Andrews says
BPL, #60,
Come on, you KNOW the models can hindcast if the right parameters are put in, but they are pretty atrocious at forecasting.
I don’t suppose economic models were designed to hindcast, but, as in climate science, a whole world view was built upon their supposed mathematical and statistical prowess.
I do not doubt the sincerity and expertise of the climate modellers, but seriously question the policy actions that are based upon the models.
Hank Roberts says
Dave, your suppositions continue to show you don’t bother to look things up. Why not try?
Simple Google turns up quite a few studies of the sort you suppose don’t exist. From the first page, using a search string taken directly from your posting above, just as an example of what you can find out if you bother to check your suppositions:
http://www.diva-portal.org/diva/getDocument?urn_nbn_se_uu_diva-8053-2__fulltext.pdf
The Forecasting Power of Economic Growth Models
“Abstract
… In economics, forecasting power may be decisive for the success or failure of a particular policy. The forecasting power of economic growth models is investigated in this study. … Forecasts/hindcasts from the statistical model were tested ….”
Try facts. They help.
One of the problems the authors discuss with those failing economic models is that they relied on their theoretical beliefs instead of using facts.
What you say you wish were being done is being done by the very people you claim aren’t doing it.
Richard C says
Barton #61, Mark #65
Apologies for the lousy phrasing. I’ve been arguing with a skeptic on a different blog. If you thought my terminology was bad you ought to try listening to a skeptic making analogies using Pandas and bamboo! I think the saturation in question is based upon a mis-application of the Beer-Lambert law, i.e. an excess of CO2.
Are there any measurements made by satellites giving a time based record that shows variation in outgoing longwave radiation dependent on the amount of CO2 in the atmosphere? Is that better?
Richard says
Should the turbulent Prandtl and Schmidt numbers be treated as constants or variables and does the choice have any effect on the performance of GCM simulations?
Philip Machanick says
Here’s a potential FAQ question: what do you mean when you say model has “skill”?
Another: although physical and statistical models are different, there is a statistical component to physical models (eliminating noise and skewing effects such as urban heat islands): to what extent do these statistical components add to uncertainties in physical models?
Another: how much can we learn from the paleoclimate, e.g., previous periods of rapid sea level rise or rapid greenhouse gas increases? (This question would be a useful riposte to those pulling up ice core data purporting to show that CO2 rise always follows temperature rise.)
David W says
Is computer processing power a major limitation? In the future will simulations become more accurate with increasing processing power?
I imagine even with today’s supercomputers that some simulations would take a long time… days? weeks? years?
[Response: Most simulations (say 100 years of model time) take on the order of a month. We definitely want to be doing longer runs (the last millennium, the Holocene, the deglaciation, ice age cycles), but simulations that take longer than 6 months (or maybe a year) don’t tend to get started (because you know that if you wait, you’ll be able to do the same thing twice as fast). So longer runs tend to be made with slightly less complicated models. – gavin]
jcbmack says
This RC article from last year answers some questions as well:
27 May 2007
Why global climate models do not give a realistic description of the local climate
Bob H says
I have looked at the models and the information provided here, and I do not see where changes in the sun are incorporated into these models. My question is as follows:
How are cyclical changes in the sun reflected in these models?
[Response: They are imposed directly as cyclic changes in the amount (and spectra) of the incoming solar radiation. – gavin]
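To illustrate what a cyclic solar forcing looks like as a model input, here is a minimal sketch. The ~11-year period, the ~1361 W/m² baseline irradiance, and the ~0.5 W/m² amplitude are rough round numbers for illustration, not values from any particular GCM, which would use observed or reconstructed time series (and spectral changes) instead:

```python
import math

def solar_irradiance(year, s0=1361.0, amplitude=0.5, period=11.0, phase_year=2014.0):
    """Toy total solar irradiance (W/m^2): a baseline plus a sinusoidal
    solar-cycle modulation. This only shows the idea of imposing the cycle
    as a time-varying boundary condition."""
    return s0 + amplitude * math.sin(2.0 * math.pi * (year - phase_year) / period)

# The peak-to-trough swing here is ~1 W/m^2, i.e. less than 0.1% of the total,
# which is roughly the observed magnitude of the solar-cycle variation.
```

Note that the forcing is tiny relative to the mean, which is why solar-cycle signals are hard to pick out of individual simulations.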
Hank Roberts says
And, Dave, I expect many are in agreement with you about watching _policy_ decisions.
Estimating fossil CO2 produced from, for example, a grain ethanol program takes attention from voters, including scientists. But that would belong on another blog (like a better filtered DotEarth maybe?).
jcbmack says
OK, in response to relative humidity being at or near constant: it is the models that tend to represent it as such, because of the input assumptions. Real conditions fluctuate quite a bit. Relative humidity is the ratio of the actual vapor pressure to the saturation vapor pressure at a given air temperature, expressed as a percentage; in other words, how close to saturation the air is. Google provides plenty of research, some credible, some not, but what it reveals is that relative humidity is not constant in the real system.
The amount of oxygen in the atmosphere is fairly constant; water vapor levels are not. The models again do not capture sudden or rapid changes so well, but what they do is provide a guide alongside other data. Unsaturated air in the real system does raise questions about the models.
Without doing the math here, look up the Clausius-Clapeyron relation, its effects on OLR, and just how important CO2 is in interacting with this major GHG.
Just like any model, there are limits to what can be precisely and accurately shown, given the assumptions; holding anything constant to get a clearer depiction will skew the representation, regardless. Even in the absence of models, AGW is intact; the issue is how long, how much money, and what inputs are needed to bring the models closer to real-time conditions. As I stated in another blog, ideal conditions (or ideal gases) do not give the full picture. The MMEs have shown practical and revealing possibilities, this is true, but they still do not represent the full dynamics of the atmospheric system or the ocean-atmosphere interface. It is just as models of evolution do not capture all of the dynamics, or flight simulators (no matter how complex) do not account for dry versus wet runway conditions, or sudden changes from cold fronts, and so on. The models, however, have been useful in light of several other resources.
So let us not assume constant or near constant relative humidity.
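To make the relative-humidity definition above concrete, here is a minimal sketch using the Magnus approximation for saturation vapor pressure, one common empirical fit to the Clausius-Clapeyron relation. The coefficients are standard Magnus-formula values; none of this is how any particular GCM computes its moisture fields:

```python
import math

def saturation_vapor_pressure(temp_c):
    """Saturation vapor pressure over water (hPa), Magnus approximation."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def relative_humidity(vapor_pressure_hpa, temp_c):
    """RH (%) = actual vapor pressure / saturation vapor pressure * 100."""
    return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure(temp_c)

# Clausius-Clapeyron in action: saturation vapor pressure rises roughly
# exponentially with temperature, so constant *relative* humidity in a
# warming climate implies increasing *absolute* water vapor.
```

For example, at 20 °C the saturation vapor pressure is about 23 hPa, so air holding 11.7 hPa of vapor is near 50% RH; that same vapor amount at 10 °C would be close to saturation.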
jcbmack says
The policies that do not work, or the ones that I question, include: corn-based ethanol (and other agriculturally based fuels), carbon capture into the ground, and burying garbage in the ground to produce electricity (you cannot trust what they are putting there to produce the methane; some of that garbage could cause problems), along with the slow rate at which wind turbines are being constructed. There are a few technical problems, but the aesthetic objection is ridiculous in light of their being placed mostly in the middle of nowhere, and I have worked out the basic problems the applications face; they are not difficult to figure out. The problem is that this is a Wikipedia world instead of a Britannica and HowStuffWorks one. Bacteria could be applied in other modalities, but the application that is quite possible and safe for carbon capture has not been tried as of yet. I digress for now, however, to do more reading of my fellow posters; I am relatively new to this site, and with some down time, it certainly passed mine :)
Ray Ladbury says
Dave Anderson, one of the things I have never understood about denialists is the delight they take in the possibility that the models might be unreliable. Have you considered that once a threat is deemed credible, the models are crucial for limiting the risk? If the models are unreliable, then what you have is effectively unlimited risk, and any amount of mitigation can be justified.
Simple physics plus the paleoclimatic record are sufficient to make the threat credible. If you favor a measured response to climate change, you had better hope and pray you’ve got decent models to guide you.
jcbmack says
The models assume near-constant levels of water vapor, but they are not necessarily so close. However, water tends towards equilibrium, so in the long run the model averages may not be far off in that regard; relative humidity, though, does vary greatly, especially over shorter periods of time.
jcbmack says
Ray Ladbury always good to see your posts!
Thomas Lee Elifritz says
The models assume near-constant levels of water vapor (snip)
Doesn’t this rambling qualify as technobabble?
I know it’s been a long night, but …
Captcha : admits to
Ed Tredger says
“Multi-model Ensemble – a set of simulations from multiple models. Surprisingly, an average over these simulations gives a better match to climatological observations than any single model.”
Is there a reference for this that does NOT use RMS error to measure model performance (RMS error will always reward simulations with less variability)?
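The point about RMS error rewarding low variability can be demonstrated with a toy example using synthetic data (this is a sketch, not actual model output): a forecast carrying only the shared signal, with none of the uncorrelated “weather” noise, beats any single noisy realization, because independent noise terms add in quadrature.

```python
import math
import random

def rmse(a, b):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

random.seed(42)
n = 20000
signal = [math.sin(0.01 * i) for i in range(n)]           # shared "climate" signal
truth = [s + random.gauss(0.0, 1.0) for s in signal]      # observations with weather noise
member = [s + random.gauss(0.0, 1.0) for s in signal]     # one simulation: realistic variance, uncorrelated noise
ensemble_mean = signal                                     # noise averaged out: too smooth

# Expected: rmse(member, truth) is near sqrt(2), rmse(ensemble_mean, truth)
# is near 1, even though `member` has the more realistic variability.
```

So a metric like RMS error will systematically favor the smoother ensemble mean, which is one reason the multi-model average “wins” by this measure without necessarily being the more physically realistic trajectory.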
Lawrence Coleman says
Just to say a sincere ‘THANK YOU’ to all AMERICANS for making the RIGHT choice, not only for the country’s future but also the WORLD’S future. I think it’s obvious that for a country that basically single-handedly caused climate change, or at the least had the most influence in its creation, you now have a smart, switched-on leader who will, I’m sure, make the CORRECT decisions to lead the world to dramatic and vital emissions reductions. You now have, in my opinion, the best hope you are ever going to get to get this right, or in his term (hopefully 8 years) to lay the strongest global foundations for all countries to adhere to. The reason we have had such a lukewarm response from world leaders in regard to CC is that they were looking to America for leadership and found ‘absolutely nothing’. Now we have a competent steward to sail planet Earth through the eye of the perfect storm and out the other side, battered and damaged but still afloat, still able to be repaired.
THANK YOU AMERICA..I’M PROUD OF YOU!!!
Mark says
Richard #83. Sorry for being so piggin annoyed. However, although I love a good argument (even if I’m proven wrong or at best mistaken), I really loathe it when someone can’t even manage to ask a question properly. You end up spending a lot of time posing the fifteen questions they could have meant and then answering them all.
REALLY annoying.
Mark says
Richard #63, for that weather to last long enough to become a climatological forcing (it would have to take gigatons of carbon out of one system and dump it into another), it would either have to be a catastrophic event (clathrates suddenly erupting, a supervolcano erupting, etc.) or long-lasting (in which case it is open to climatological statistics rather than weather chaotics).
So no, but such a case is on the same level as aliens finding us and giving us new technologies (or making us work in their salt mines).
Mark says
BPL #58. But a BIG asteroid (say, for example, Ceres) could split us. Something as big coming from the Oort cloud would be going MUCH faster than Ceres (about 8 km/s, I think, compared to ~1 km/s), and the energy to convert would be 64 times greater.
Squoosh.
PS Turn up the sarcasm detector.
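For what it’s worth, the factor of 64 follows from kinetic energy scaling with the square of velocity. A quick sketch (the 8 km/s and 1 km/s figures are the commenter’s rough numbers, not measured impact speeds):

```python
def kinetic_energy_ratio(v_fast_km_s, v_slow_km_s):
    """Ratio of kinetic energies for two bodies of equal mass:
    KE = 0.5 * m * v**2, so the mass cancels and the ratio
    is simply (v_fast / v_slow)**2."""
    return (v_fast_km_s / v_slow_km_s) ** 2

# (8 / 1)**2 = 64, the factor quoted in the comment.
```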