We discuss climate models a lot, and from the comments here and in other forums it’s clear that there remains a great deal of confusion about what climate models do and how their results should be interpreted. This post is designed to be a FAQ for climate model questions – of which a few are already given. If you have comments or other questions, ask them as concisely as possible in the comment section and if they are of enough interest, we’ll add them to the post so that we can have a resource for future discussions. (We would ask that you please focus on real questions that have real answers and, as always, avoid rhetorical excesses).
Part II is here.
Quick definitions:
- GCM – General Circulation Model (sometimes Global Climate Model) which includes the physics of the atmosphere and often the ocean, sea ice and land surface as well.
- Simulation – a single experiment with a GCM
- Initial Condition Ensemble – a set of simulations using a single GCM but with slight perturbations in the initial conditions. This is an attempt to average over chaotic behaviour in the weather.
- Multi-model Ensemble – a set of simulations from multiple models. Surprisingly, an average over these simulations gives a better match to climatological observations than any single model.
- Model weather – the path that any individual simulation will take has very different individual storms and wave patterns than any other simulation. The model weather is the part of the solution (usually high frequency and small scale) that is uncorrelated with another simulation in the same ensemble.
- Model climate – the part of the simulation that is robust and is the same in different ensemble members (usually these are long-term averages, statistics, and relationships between variables).
- Forcings – anything that is imposed from the outside that causes a model’s climate to change.
- Feedbacks – changes in the model that occur in response to the initial forcing that end up adding to (for positive feedbacks) or damping (negative feedbacks) the initial response. Classic examples are the amplifying ice-albedo feedback, or the damping long-wave radiative feedback.
Questions:
- What is the difference between a physics-based model and a statistical model?
Models in statistics, or in many colloquial uses of the term, often imply a simple relationship that is fitted to some observations: a linear regression through temperature as a function of time, or a sinusoidal fit to the seasonal cycle, for instance. More complicated fits are also possible (neural nets, for instance). These statistical models are very efficient at encapsulating existing information concisely, and as long as things don't change much they can provide reasonable predictions of future behaviour. However, they aren't much good for prediction if you know the underlying system is changing in ways that might affect how your original variables interact.
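For concreteness, here is a minimal sketch of the statistical approach, fitting a trend plus an annual cycle to synthetic monthly temperatures (all numbers are invented for illustration; this is not output from any real dataset):

```python
# A minimal sketch of a statistical model: a linear trend plus a
# sinusoidal seasonal cycle, fitted by least squares to synthetic data.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(240) / 12.0                          # 20 years, in years
temp = 0.02 * t + 5.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)

# Design matrix: intercept, linear trend, and an annual harmonic.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print("fitted trend: %.3f deg/yr" % coef[1])       # recovers ~0.02
```

Such a fit encapsulates the data it was trained on, but by itself says nothing about what happens if the underlying system changes.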
Physics-based models, on the other hand, try to capture the real physical cause of any relationship, which is hopefully understood at a deeper level. Since those fundamentals are not likely to change in the future, the expectation of a successful prediction is higher. A classic example is Newton's second law of motion, F=ma, which can be used in multiple contexts to give highly accurate results completely independently of the data Newton himself had on hand.
Climate models are fundamentally physics-based, but some of the small scale physics is only known empirically (for instance, the increase of evaporation as the wind increases). Thus statistical fits to the observed data are included in the climate model formulation, but these are only used for process-level parameterisations, not for trends in time.
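The evaporation example above is usually handled with a bulk aerodynamic formula, E = ρ C_E U (q_s − q_a), where the transfer coefficient C_E is the empirically fitted part. A sketch (the coefficient value here is an illustrative assumption, not any particular model's setting):

```python
# A sketch of a process-level parameterisation: the bulk aerodynamic
# formula for evaporation, E = rho * C_E * U * (q_s - q_a).
# c_e is the empirically tuned transfer coefficient (value assumed here).
def evaporation_flux(wind_speed, q_surface, q_air,
                     rho_air=1.2, c_e=1.3e-3):
    """Evaporative moisture flux in kg m^-2 s^-1."""
    return rho_air * c_e * wind_speed * (q_surface - q_air)

# Evaporation increases with wind speed for a fixed humidity contrast.
print(evaporation_flux(5.0, 0.020, 0.015))    # ~3.9e-5 kg m^-2 s^-1
```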
- Are climate models just a fit to the trend in the global temperature data?
No. Much of the confusion on this point stems from the misunderstanding addressed above. Model development does not actually use the trend data in tuning (see below). Instead, modellers work to improve the climatology of the model (the fit to the average conditions) and its intrinsic variability (such as the frequency and amplitude of tropical variability). The resulting model is pretty much used 'as is' in hindcast experiments for the 20th Century.
- Why are there ‘wiggles’ in the output?
GCMs perform calculations with timesteps of about 20 to 30 minutes so that they can capture the daily cycle and the progression of weather systems. As with weather forecasting models, the weather in a climate model is chaotic. Starting from a very similar (but not identical) state, a different simulation will ensue – with different weather, different storms, different wind patterns – i.e. different wiggles. In control simulations, there are wiggles at almost all timescales – daily, monthly, yearly, decadally and longer – and modellers need to test very carefully how much of any change that follows a change in forcing is really associated with that forcing, and how much might simply be due to the internal wiggles.
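A toy illustration of why this happens, using the classic Lorenz-63 chaotic system rather than a real GCM (purely schematic): two runs from nearly identical initial states end up with completely different 'weather', yet very similar 'climate' statistics.

```python
# Lorenz-63 toy model: tiny initial differences -> divergent paths
# ('weather'), but similar long-term statistics ('climate').
import numpy as np

def step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

run_a = np.array([1.0, 1.0, 1.0])
run_b = run_a + np.array([1e-9, 0.0, 0.0])     # tiny perturbation
xs_a, xs_b = [], []
for _ in range(5000):
    run_a, run_b = step(run_a), step(run_b)
    xs_a.append(run_a[0]); xs_b.append(run_b[0])

# The late parts of the two paths are essentially uncorrelated...
print(np.corrcoef(xs_a[-1000:], xs_b[-1000:])[0, 1])
# ...but their variability statistics typically agree closely.
print(np.std(xs_a), np.std(xs_b))
```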
- What is robust in a climate projection and how can I tell?
Since every wiggle is not necessarily significant, modellers need to assess how robust particular model results are. They do this by seeing whether the same result is seen in other simulations, with other models, whether it makes physical sense and whether there is some evidence of similar things in the observational or paleo record. If that result is seen in multiple models and multiple simulations, it is likely to be a robust consequence of the underlying assumptions, or in other words, it probably isn't due to any of the relatively arbitrary choices that mark the differences between different models. If the magnitude of the effect makes theoretical sense independent of these kinds of models, then that adds to its credibility, and if in fact this effect matches what is seen in observations, then that adds more. Robust results are therefore those that quantitatively match in all three domains. Examples are the warming of the planet as a function of increasing greenhouse gases, or the change in water vapour with temperature. All models show basically the same behaviour, which is in line with basic theory and observations. Examples of non-robust results are the changes in El Niño as a result of climate forcings, or the impact on hurricanes. In both of these cases, models produce very disparate results, the theory is not yet fully developed and observations are ambiguous.
- How have models changed over the years?
Initially (ca. 1975), GCMs were based purely on atmospheric processes – the winds, radiation, and with simplified clouds. By the mid-1980s, there were simple treatments of the upper ocean and sea ice, and cloud parameterisations started to get slightly more sophisticated. In the 1990s, fully coupled ocean-atmosphere models started to become available. This is when the first Coupled Model Intercomparison Project (CMIP) was started. It has subsequently seen two further iterations, the latest (CMIP3) being the database used in support of much of the model work in the IPCC AR4. Over that time, model simulations have become demonstrably more realistic (Reichler and Kim, 2008) as resolution has increased and parameterisations have become more sophisticated. Nowadays, models also include dynamic sea ice, aerosols and atmospheric chemistry modules. Issues like excessive 'climate drift' (the tendency for a coupled model to move away from a state resembling the actual climate), which were problematic in the early days, are now much reduced.
- What is tuning?
We are still a long way from being able to simulate the climate with a true first-principles calculation. While many basic aspects of physics can be included (conservation of mass, energy etc.), many need to be approximated for reasons of efficiency or resolution (e.g. the equations of motion need estimates of sub-gridscale turbulent effects, and radiative transfer codes approximate the line-by-line calculations using band averaging), and still others are only known empirically (the formula for how fast clouds turn to rain, for instance). With these approximations and empirical formulae, there is often a tunable parameter or two that can be varied in order to improve the match to whatever observations exist. Adjusting these values is described as tuning, and it falls into two categories. First, there is the tuning of a single formula in order for that formula to best match the observed values of that specific relationship. This happens most frequently when new parameterisations are being developed.
Secondly, there are tuning parameters that control aspects of the emergent system. Gravity wave drag parameters are not well constrained by data, and so are often tuned to improve the climatology of stratospheric zonal winds. The threshold relative humidity for making clouds is often tuned to get the most realistic cloud cover and global albedo. Surprisingly, there are very few of these (maybe a half dozen) that are used in adjusting the models to match the data. It is important to note that these exercises are done with the mean climate (including the seasonal cycle and some internal variability) – and once set they are kept fixed for any perturbation experiment.
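A toy sketch of this second kind of tuning (the 'model' below is a made-up stand-in function, purely illustrative; real tuning requires a full GCM run for every candidate parameter value):

```python
# Tune one parameter (a cloud-formation RH threshold) so that an emergent
# quantity (global albedo) matches an observed target. Illustrative only.
from scipy.optimize import brentq

def toy_global_albedo(rh_threshold):
    # Lower threshold -> more cloud -> higher albedo (toy relationship).
    cloud_fraction = max(0.0, min(1.0, (1.0 - rh_threshold) * 2.0))
    return 0.15 + 0.30 * cloud_fraction

target = 0.30                 # observed planetary albedo
tuned = brentq(lambda rh: toy_global_albedo(rh) - target, 0.5, 1.0)
print("tuned RH threshold: %.2f" % tuned)      # -> 0.75 in this toy case
```

Once a value like this is set against the mean climate, it stays fixed for all perturbation experiments.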
- How are models evaluated?
The amount of data that is available for model evaluation is vast, but falls into a few clear categories. First, there is the climatological average (maybe for each month or season) of key observed fields like temperature, rainfall, winds and clouds. This is the zeroth-order comparison to see whether the model is getting the basics reasonably correct. Next comes the variability in these basic fields – does the model have a realistic North Atlantic Oscillation, or ENSO, or MJO? These are harder to match (and indeed many models do not yet have realistic El Niños). More subtle are comparisons of relationships in the model and in the real world. This is useful for short data records (such as those retrieved by satellite) where there is a lot of weather noise one wouldn't expect the model to capture. In those cases, looking at the relationship between temperatures and humidity, or cloudiness and aerosols, can give insight into whether the model processes are realistic or not.
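A sketch of the zeroth-order comparison (the fields below are random stand-ins; real evaluations use gridded observations and model output, and statistics like these feed Taylor diagrams):

```python
# Area-weighted RMSE and pattern correlation between a model's
# climatological field and observations. Synthetic stand-in fields.
import numpy as np

rng = np.random.default_rng(1)
lats = np.linspace(-89, 89, 90)
obs = rng.normal(size=(90, 180))                     # 2-deg 'observations'
model = obs + rng.normal(scale=0.3, size=obs.shape)  # imperfect 'model'

weights = np.cos(np.deg2rad(lats))[:, None] * np.ones((90, 180))
rmse = np.sqrt(np.average((model - obs) ** 2, weights=weights))
corr = np.corrcoef(model.ravel(), obs.ravel())[0, 1]
print("RMSE: %.2f, pattern correlation: %.2f" % (rmse, corr))
```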
Then there are the tests of climate changes themselves: how does a model respond to the addition of aerosols in the stratosphere such as was seen in the Mt Pinatubo ‘natural experiment’? How does it respond over the whole of the 20th Century, or at the Maunder Minimum, or the mid-Holocene or the Last Glacial Maximum? In each case, there is usually sufficient data available to evaluate how well the model is doing.
- Are the models complete? That is, do they contain all the processes we know about?
No. While models contain a lot of physics, they don’t contain many small-scale processes that more specialised groups (of atmospheric chemists, or coastal oceanographers for instance) might worry about a lot. Mostly this is a question of scale (model grid boxes are too large for the details to be resolved), but sometimes it’s a matter of being uncertain how to include it (for instance, the impact of ocean eddies on tracers).
Additionally, many important bio-physical-chemical cycles (for the carbon fluxes, aerosols, ozone) are only just starting to be incorporated. Ice sheet and vegetation components are very much still under development.
- Do models have global warming built in?
No. If left to run on their own, the models will oscillate around a long-term mean that is the same regardless of what the initial conditions were. Given different drivers, volcanoes or CO2 say, they will warm or cool as a function of the basic physics of aerosols or the greenhouse effect.
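The point can be made with a zero-dimensional energy-balance sketch: left alone, the model sits at its equilibrium; given a forcing, it warms by an amount set by the physics (here, F/λ). All parameter values below are illustrative assumptions.

```python
# Minimal energy-balance model: C dT/dt = F - lambda * T.
C = 8.4e8           # heat capacity of ~200 m of ocean, J m^-2 K^-1 (assumed)
lam = 1.2           # feedback parameter, W m^-2 K^-1 (assumed)
dt = 86400.0 * 30   # one-month timestep, seconds

def integrate(forcing, years=300):
    T = 0.0                          # temperature anomaly, K
    for _ in range(years * 12):
        T += dt * (forcing - lam * T) / C
    return T

print("no forcing:    %+.2f K" % integrate(0.0))  # stays at equilibrium
print("F = 3.7 W/m^2: %+.2f K" % integrate(3.7))  # warms to ~F/lam = 3.1 K
```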
- How do I write a paper that proves that models are wrong?
Much more easily than you might think since, of course, all models are indeed wrong (though some are useful – George Box). Showing a mismatch between the models and the observational data is made much easier if you recall the signal-to-noise issue we mentioned above. As you go to smaller spatial and shorter temporal scales the amount of internal variability increases markedly, and so the number of diagnostics that will be different from the expected values from the models will increase (in both directions, of course). So pick a variable, restrict your analysis to a small part of the planet, and calculate some statistic over a short period of time and you're done. If the models match through some fluke, make the space smaller and use a shorter time period and eventually they won't. Even if models get much better than they are now, this will always work – call it the RealClimate theory of persistence. Now, appropriate statistics can be used to see whether these mismatches are significant and not just the result of chance or cherry-picking, but a surprising number of papers don't bother to check such things correctly. Getting people outside the, shall we say, more 'excitable' parts of the blogosphere to pay any attention is, unfortunately, a lot harder.
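A quick Monte Carlo sketch of the signal-to-noise point, with invented numbers: the shorter the period, the more often a noisy realisation's trend 'contradicts' the underlying signal.

```python
# Fraction of noisy realisations whose fitted trend has the wrong sign,
# as a function of record length. Purely synthetic illustration.
import numpy as np

rng = np.random.default_rng(2)
true_trend = 0.02                     # K/yr underlying signal (assumed)

def frac_wrong_sign(n_years, n_trials=2000, sigma=0.15):
    t = np.arange(n_years, dtype=float)
    wrong = 0
    for _ in range(n_trials):
        series = true_trend * t + rng.normal(0, sigma, n_years)
        if np.polyfit(t, series, 1)[0] < 0:
            wrong += 1
    return wrong / n_trials

for n in (8, 15, 30):
    print("%2d-year window: ~%.0f%% wrong-sign trends"
          % (n, 100 * frac_wrong_sign(n)))   # shrinks as windows lengthen
```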
- Can GCMs predict the temperature and precipitation for my home?
No. There are often large variations in the temperature and precipitation statistics over short distances, because the local climatic characteristics are affected by the local geography. The GCMs are designed to describe the most important large-scale features of the climate, such as the energy flow, the circulation, and the temperature in a grid-box volume (through the physical laws of thermodynamics, the dynamics, and the ideal gas laws). A typical grid box may have a horizontal area of ~100×100 km², though the size has tended to shrink over the years as computers have increased in speed. The shape of the landscape (the details of mountains, coastlines etc.) used in the models reflects this spatial resolution, so the model will not have sufficient detail to describe local climate variations associated with local geographical features (e.g. mountains, valleys, lakes, etc.). However, it is possible to use a GCM to derive some information about the local climate through downscaling, since the local climate is affected both by the local geography (a more or less given constant) and by the large-scale atmospheric conditions. The results derived through downscaling can then be compared with local climate variables, and can be used for further (and more stringent) assessments of the combined model-plus-downscaling approach. This is, however, still an experimental technique.
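A sketch of the simplest empirical-statistical downscaling (synthetic numbers throughout; real applications train on observed station data paired with reanalysis or model grid-box values):

```python
# Train a linear relation between a grid-box value and a local station
# value, then apply it to a (hypothetical) projected grid-box value.
import numpy as np

rng = np.random.default_rng(3)
grid_box_temp = rng.normal(10, 3, 500)            # large-scale predictor
station_temp = 2.0 + 0.8 * grid_box_temp + rng.normal(0, 1, 500)

slope, intercept = np.polyfit(grid_box_temp, station_temp, 1)

projected_grid_box = 12.5                         # assumed GCM projection
print("downscaled local estimate: %.1f C"
      % (intercept + slope * projected_grid_box))
```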
- Can I use a climate model myself?
Yes! There is a project called EdGCM which has a nice interface and works with Windows and lets you try out a large number of tests. ClimatePrediction.Net has a climate model that runs as a screensaver in a coordinated set of simulations. GISS ModelE is available as a download for Unix-based machines and can be run on a normal desktop. NCAR CCSM is the US community model and is well-documented and freely available.
Nonlinear guy says
About “Tuning”:
“…It is important to note that these exercises are done with the mean climate (including the seasonal cycle and some internal variability) – and once set they are kept fixed for any perturbation experiment..”
By "mean climate", surely the model ensemble mean is meant; however, the "real data" on which to base the tuning is by definition restricted to the single realisation of Earth's climate (including cloud cover caused by, for instance, multi-decadal oscillations rather than AGW feedback). How is this taken into account in the estimation of climate sensitivity?
CM says
This is a very helpful post on a great site, thank you.
If you do requests, I’d also like to learn more about the chaos topic broached here in #14 (Samson + Gavin’s reply) and in #34 (Garry S-J).
That is: Is climate chaotic? Is climate change chaotic? In what way? (Gavin said: "…if there are aspects of climate change that are chaotic…") What does that actually mean? In particular, what are the implications for climate modeling, and what do you tell a skeptic who pooh-poohs the models because "it's chaotic"? Is there an easy-to-grasp example that helps unpack the point?
Gavin’s and Garry’s responses here are helpful. But I’d really appreciate a dedicated post — or failing that, a reading tip — amplifying on this.
Barton Paul Levenson says
Ed writes:
Only if you start out on the upper half of the curve. If you don’t, then cutting taxes cuts overall tax revenues.
Sekerob says
Not sure whose published numbers to trust:
NSIDC/NOAA posted their numbers yesterday for October 2008:
Extent: 8.40 million km², where the JAXA 31-day daily average is 7.21.
Area: 5.72 million km², including the 0.31 million km² for the blind spot. The Cryosphere Today 31 daily values (from the iPhone app) average to 5.0 million km².
What is "bad" is that the NOAA/NSIDC numbers suggest there was 2.68 million km² of average open water within their extent, implying a major break-up/dispersal.
Mark Smith says
Ray Ladbury, one thing that puzzles me is that human action to curb emissions is generally presented as being risk-free.
Among the consequences of the rush to bio-fuels appears to be food shortage and accelerated de-forestation.
Whatever the merits of the pro and anti AGW arguments, we need to be extremely wary of the law of unintended consequences. Hubris is always punished severely in the end.
Klaus Ragaller says
With increasing sophistication of the models and increasing precision and completeness of boundary conditions, will there be a reduction of the "wiggles at almost all time scales"? Can this be expected for large-scale phenomena like ENSO, zonal winds etc., and could this lead in the long term to better local climate forecasts and even weather forecasts?
[Response: No. The ‘wiggles’ are real phenomena. As we get better models, the realism and structure of those wiggles will likely become more realistic – but in the end they define the limits to what we will be able to predict at regional/decadal scales. – gavin]
Nico says
Dear realclimate crew.
I have been wondering for some time how far the findings of scientists – be it the IPCC, the PIK in Potsdam, or others – can be relied upon by policy makers in making major decisions (e.g. cutting carbon emissions by half by 2050). Might the uncertainties in the models be large enough that such decisions turn out to be insufficient to avoid "dangerous climate change"? In short, are the data and models reliable enough to support workable decisions?
Cheers,
Nico
Rod B says
Ray (92), the fact that “…the models are crucial for limiting the risk…” or that anyone is hoping and praying for decent models does not, by themselves, make the models any less inaccurate or unreliable.
Dietrich Hoecht says
I am back, re. #27 and #52, the temperature plateau caused by aerosols. I read through the feedback references, and am still intrigued. Taking my ruler and extending the warming line, bypassing the plateau, I get somewhere around 0.5 to 0.8 deg Celsius of prevented thermal rise! Thanks, aerosols! Here are more thoughts: if North America's dirty industries, like steel mills, were the culprit, then we should be able to trace back to those times a significant regional cooling differential in the downwind regions of places like Pittsburgh, PA. This cooling differential should be readable relative to today and to surrounding regions outside the downwind aerosol cones. Remember, these aerosols are short-lived, and, theoretically, the effect and changes should be obvious. Further, since these industrial pollution centers have had widespread aerosol thermal changes, the localized effects would have shown 'hotbeds' of cooling (sorry). Are there any good studies on this? Actually, cherchez l'acid rain, since those were the times BC (before clean-up) and BS (before scrubbers). Come to think of it, if nowadays we were to re-tune all jet engines for sooty output (large particles, to avoid breathing health hazards) we might mitigate all of the carbon curb hoopla (sorry again, but I could not resist)!
Now, I want to contrast these relatively drastic temperature influences with the recent (around 2003) surprising thermal rise in Europe. It was measured (Philipona et al) and found to be predominantly tracking increased water vapor. Wow, a totally different singular cause. So that begs the question of how the earlier aerosol effects might have interacted with the increased steam and water vapor emissions that accompany aerosol-emitting processes. Coal power plants burn slurries and use lots of cooling water for their turbines; Bessemer steel furnaces use lime and iron ore with lots of trapped water. Lastly, today, i.e. over the last 30 years, we should be able to observe the identical localized aerosol-caused cooling over the Chinese industrial zones, which are known for their constant brown haze. However, if I read the regional temperature distribution correctly, there is only warming. Where am I wrong?
Rod B says
Lawrence Coleman (97), which also cuts through your all’s canard of maintaining or even improving economic well being, which is hard to do by “bankrupting” (his words) the entire US coal industry as fast as can be by, in effect, shutting off 50% of all of our electricity production nearly over night.
Brian Dodge says
Re “the” Laffer curve;
If it did in fact represent the output of an economic model instead of a sketchy idea drawn on a napkin, it would be an ensemble of models with different underlying policy decisions regarding the distribution of tax expenditures. In scenario A, a higher percentage of tax revenues spent on things that increase productivity, such as health care, infrastructure such as more efficient transportation and power generation, and R & D which results in such advances as the internet, has an optimum tax rate that is higher than scenario B, where a higher percentage is spent on the military, which a soldier once told me is fundamentally set up to “break things and kill people.”
Not only are the underlying policy decisions mutable and arbitrary, the government doesn’t allocate spending as a percent of revenue, but as differing amounts unrelated to receipts. When expenditures exceed receipts (currently the case, arguably due to higher military costs), the government simply borrows or prints more money.
These hypothetical Laffer models(GCMs) have the tax revenues(global temperatures) decoupled from expenditures(OLR) by arbitrary policy decisions(denialist handwaving).
Hank Roberts says
http://www.basicinstructions.net/images/basic081103.gif
_______________
“police instantly” says ReCaptcha
Rod B says
Brian (111), you make some good points (though I admit not fully understanding your analogy between Laffer and GCM modelling), but why do you think military spending does not improve productivity, i.e. the economy? It got us out of the depression of the 30s. It does much productive R&D (your example of the [early] internet, e.g.), indirectly supports even Gavin and company (a little, maybe) by being NASA's largest customer, and tons of other stuff (even though you could probably find pieces of the military budget that one would be hard-pressed to call productive).
The military IS “fundamentally set up to “break things and kill people.””. What’s your point?
Mark says
Rod, #113: Who gets the lions share of the money?
Those big investors (with lots of money) and the directors/C*O that get paid lots of money.
The rich invest money in things to get it from people who can't afford a cash transaction (loans and mortgages) and so accrue more of the capital to themselves.
You need money to make money. And these guys have lots.
They don’t *spend* money, though.
If all your money goes on fags, booze, rent, food and clothes, you can’t afford NOT to spend all your money (the first two are to take your mind off the fact that you can only afford fags, booze, rent, food and clothes). However, the rich spend a lot of money, but not as much as if that money were in poor people’s hands.
E.g. Despite having given away billions, Bill Gates was still richer six years later than he was when he committed to spending all his money on charity before he dies.
And in the modern age, “defense spending is good” can be removed by one simple name:
Halliburton.
Investors and the senior management have made out like bandits, while billions upon billions have been LOST. That money didn't go back into the US economy.
You can’t say defense spending improves productivity.
Each Fox One shot at a target is 3 grand blown up. And the cash isn't moving around in the economy; it's sitting like some rich beggar's Mona Lisa print in their bathroom…
Mark says
Nico, #107
All of it, if they like.
This is why the summary and reports include all the uncertainties in the outputs. So that the politicians can treat it ALL as a given, since it already shows the possible errors in assumptions.
Then the summary is written BY those politicians, so you think they are going to say “How much of this what I wrote should I believe”?
Dan says
OT: Michael Crichton passed away. He was the classic anti-science denialist. http://www.etonline.com/news/2008/11/67369/
Hank Roberts says
“This post is designed to be a FAQ for climate model questions … so that we can have a resource for future discussions. (We would ask that you please focus on real questions that have real answers and, as always, avoid rhetorical excesses).” — the first post here.
Please.
David B. Benson says
jcbmack — Near constant average relative humidity (averaged over space and time) is surely correct, due to the negative feedbacks I previously indicated.
A bit more puzzling is the global precipitation product produced by a group in Italy. Twenty-eight years on, the global precipitation has been essentially constant on an annual basis. There will be another paper (for 29 years on) at the Fall AGU meeting (held in December, so I think of it as the winter AGU meeting. Oh well.)
This seems in disagreement with the argument in CaltechWater.pdf (and surely elsewhere) that with global warming precipitation ought to increase. However, aerosols are thought to decrease precipitation, so maybe this (partly) explains the situation. But then I would think that cloudiness would increase. Dunno.
Joseph O'Sullivan says
I have one question with two parts, is there a limit to how much regional downscaling can be done:
When GCMs are used to model atmospheric conditions and the spatial grid size is reduced, is there a scale at which chaotic conditions prevail and make modeling difficult, in the same way that weather is harder to model than climate? For example, going from 100km x 100km squares to 50km x 50km is possible to model, but 10km x 10km is not.
Does introducing regional geographical features make modeling more complex but theoretically achievable, or are there built-in barriers, again like weather vs climate? For example, an area with complex topography can't be modeled, but an area with relatively simple topography can.
recaptcha "drained reviewer" – I hope my usually off-topic comments haven't been too taxing on the people who must read them and decide to let them through. The oracle has spoken again.
jcbmack says
#95 it helps to talk it through; denialists see that a paper or piece of research is making assumptions, and since they have no idea why those 'assumptions' are made, they jump to conclusions of their own, a grave error. Also, without looking at the facts, people miss the point of the models. Sometimes both the argument for and the argument against must be presented. I know that modelling always plays an important role in science, and global climate change, such a vast phenomenon, needs all the relevant research that is available for the blind men (people in general) to understand the elephant :) The models are a necessary component, and interestingly some of these assumptions are based upon physics and chemistry just the same, otherwise the models would be really far off.
Pat Neuman says
Although average relative humidity may be constant, average melt rates are increasing (increasing latent heat of condensation and increasing temperatures), so why do models ignore increasing latent heat effects on the thaw of snow and ice?
jcbmack says
David #118, absolutely; I just wanted to talk it out so denialists do not get the wrong idea. Regarding the other research you cite, I will read what I can find before I comment, but aerosols are relatively temporary in their effects in this matter. Thanks for the references.
Kevin McKinney says
Re 113,
“The military IS “fundamentally set up to “break things and kill people.””. What’s your point?”
Well, breaking things and killing people is destroying value, for one thing. So when the military is in use, you have negative value as a “product”, and during peacetime you have no product.
Admittedly, the question of *whose* negative value–ours or, say, North Korea’s–is the whole point of having a military. But I believe it was Adam Smith himself who first characterized the military as an economic drag–not some raving leftist. The fact that you can use the military to deliver economic stimulus doesn’t mean it is the most efficient means of doing so.
Rod B says
Mark, so contracting out millions-billions so outfits can hire thousands of workers building tanks, carriers, rifles, airplanes, ships, uniforms, boots, etc, etc, etc does nothing for the pure poor little guy? How about the billions of wages we pay members of the military? Or did someone tell you that only greedy Generals get paid? Etc? Etc, ad infinitum? Or did someone tell you that none of this stuff is any help to the economy and productivity? Silly.
Kevin McKinney says
On second thought, I should probably admit that in his own day, Smith arguably *was* a “raving leftie.”
Rod B says
Hank Roberts (117), point well taken. Sorry
Dave Andrews says
Kevin (123)
“The fact that you can use the military to deliver economic stimulus doesn’t mean it is the most efficient means of doing so.”
Perhaps you should just give everyone in the country $1 million and stimulate the economy that way :-)
Dave Andrews says
Hank
“Abstract
… In economics, forecasting power may be decisive for the success or failure of a particular policy. The forecasting power of economic growth models is investigated in this study. … Forecasts/hindcasts from the statistical model were tested ….”
Well there you go, the economic models were rubbish at forecasting even though they could hindcast. Bit like the climate models really
Ray Ladbury says
Mark Smith, if you are looking for someone to blame for the push to biofuels, you’ve come to the wrong guy. Corn Ethanol doesn’t make sense. It is mainly driven by US farm belt (read agribusiness) politics. Sugar cane ethanol makes sense in Brazil mainly because they have a large, poor labor force to harvest the cane. It is as much a job creation program in the poor Nordeste as it is an energy program. Cellulosic ethanol makes a great deal more sense, but has a way to go before it is practical. Conservation, on the other hand, pretty much always makes sense and we still have LOTS of low hanging fruit there.
Look, if people are going to adopt stupid mitigation strategies and say, “Doctor, doctor, it hurts when I adopt stupid mitigation strategies,” I’m going to say, “Don’t adopt stupid mitigation strategies.”
I’m sorry, but I think that when you are confronted with a real threat, the argument that you can’t intervene because you might screw it up is a pretty piss poor one. Just don’t screw it up.
Ray Ladbury says
Rod B., You miss my point. The models actually perform remarkably well, yet the only folks who are denying this are precisely those arguing for “a measured response,” which can only be justified based on output from the very models they distrust. Kinda mavericky, huh?
David B. Benson says
The production of ethanol is concentrated in the Central and Southeast regions of the country, which includes the main producer, São Paulo State. … machines will gradually replace human labor as the means of harvesting cane, except where the abrupt terrain does not allow for mechanical harvesting. However, 150 out of 170 of São Paulo's sugar cane processing plants signed in 2007 a voluntary agreement with the state government to comply by 2014. Independent growers signed in 2008 the voluntary agreement to comply, … As production starts up in other states in Brazil, mainly in the Northeast Region, where the lack of jobs and social problems weigh much more heavily, incentives are given to incoming sugarcane producers as long as they employ harvest workers instead of implementing less labor-intensive, more modern techniques.
from
http://en.wikipedia.org/wiki/Ethanol_fuel_in_Brazil
[reCAPTCHA intones "stick legislation"]
Ray Ladbury says
David, most of the sugar cane is grown in the Northeast – Bahia and northwards. This area has been depressed since the collapse of the last sugar boom in the early 1900s. I'm not as familiar with the industry in São Paulo, which I tend to avoid when I'm in Brazil. However, it's interesting to hear about the shift to mechanized harvesting. Cane cutting is backbreaking work, but it's been the only work available for many of the poor in Brazil.
Hank Roberts says
> Dave Andrews … 5 November 2008 at 4:37 PM
> … much like climate models
Dave, if you’d read even the abstract, or the previous comments to you, you’d understand the difference.
You can’t be credible commenting on policy while you insist on making up your own facts about things you could easily read and understand.
Jim asked you earlier:
“Did you even read the post? The very first “Question” outlines the fundamental difference between a statistical model (your economists/bankers mathematical model) and a dynamic physical model (general circulation model) based on physical laws and properties.”
Try reading. It will improve your ability to comment.
You have a lot of energy. Focus and be useful, there’s work everyone can do to figure this stuff out.
David B. Benson says
Ray Ladbury (132) — Your information about Brazil’s Northeast region does not agree with the Wikipedia article, which has 92 references and which includes this map:
http://en.wikipedia.org/wiki/Image:Goldemberg_2008_Brazil_sugarcane_regions_1754-6834-1-6-1_Fig_1.jpg
Alexi Tekhasski says
I am totally confused by your definitions. What exactly is "forcing"? What do you mean by "imposed from the outside"? Do you "impose" new boundary conditions on the "physics-based" equations? Or do you change a parameter in the differential equations, like a gas mixing ratio? Is there any "forcing" if the atmospheric mix stays constant and the Sun shines steadily?
What do you mean by "Feedbacks are changes in the model that occur in response…"? Did you mean "changes in the state variables of the physics-based model"? What is "initial forcing"? The only forcing I am familiar with is a constant flux of SW solar energy. What are the other "forcings"?
What do you mean by "physics-based"? You mentioned F=ma, but for a continuous medium, the physical equivalent of the conservation laws would be the Navier-Stokes Equations (NSE). Do you mean that GCMs directly emulate the NSE by numerically iterating some finite-difference (or spectral) approximation of them?
What is the definition of “process-level parameterization”?
Thanks,
-Alexi
Aaron Lewis says
When are we going to see “the physics” of ice sheets on Greenland and Antarctica added to the models?
What is the center of rotation in the models? If it is not yet the Earth’s actual axis of rotation, are there plans to correct that little bit of physics in hopes of getting a better handle on heat flows over and around Greenland?
wayne davidson says
Do any of the models replicate the stratospheric polar vortex? And how successful are they at actually predicting its size and magnitude?
Ray Ladbury says
David, I won’t argue with Jose. He’s a good guy (I had an opportunity to interact with him a bit when he wrote an article for Physics Today). My impression was based on my own travels in Brazil and conversations with Brazilians I met, and I haven’t traveled in the northeast recently. My sources had said that the alcohol boom had brought employment if not prosperity to the northeast. So I’m willing to be wrong on this. Jose’ ought to know better than I do.
jcbmack says
Why is anyone still using wikipedia?
steven mosher says
RE 133, good point Hank! Dave Andrews should start by reading this.
You might as well too. It's a good place to start, to frame some questions.
http://www-pcmdi.llnl.gov/wgne2007/presentations/Oral-Presentations/mon/Taylor_metrics_err_wkshp_1.pdf
Hank Roberts says
One thought, a lot of the FAQ suggestions like some of Alexi’s are good ones for a collection of the very frequently asked and answered.
Some might be answered with a picture first rather than text first, then answer questions people ask about the picture. Example:
What are climate “forcings”?
http://www.giss.nasa.gov/research/briefs/hansen_05/fig2.gif
Some people will get that just from looking. Others will need more.
Mark says
Alexi, Gravity is a forcing. Air resistance is a forcing. Drop a feather.
It WILL go down. Gravity. It will go slowly. Air resistance. Its path will be chaotic. Like weather, because its path is based on the very fine small difference between air resistance and gravity.
Boundary condition: the sun is hot. What about when there's a CME? You change the external boundary condition contributed by the sun. We don't simulate the sun in a GCM.
Feedbacks: the warmer the air the more water it can hold before raining it out. The warmer it is and the sunnier it is, the more water evaporates. CO2 can cause it to be warmer. Water gets sucked into the air and that causes warming. And more water makes it warmer. Which makes more water get sucked up into the air. Which makes it warmer. Which …
Feedback.
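A sketch of why that loop converges rather than running away (illustrative numbers; the point is just that a feedback gain below one gives a finite, amplified total):

```python
# Each pass through the loop returns a fraction f < 1 of the previous
# warming, so the total is the geometric series dT0 / (1 - f).
dT0 = 1.2    # direct, no-feedback warming in K (assumed for illustration)
f = 0.5      # feedback gain per loop (assumed)

total, increment = 0.0, dT0
for _ in range(50):            # iterate 'which makes it warmer, which...'
    total += increment
    increment *= f

print("iterated total: %.3f K" % total)            # converges to 2.4
print("closed form:    %.3f K" % (dT0 / (1 - f)))  # dT0/(1-f) = 2.4
```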
Yes. That's what's done. But not just that. The blocking of IR from the earth doesn't obey Navier-Stokes.
Winds across the earth are slowed more, and this slows air higher up in the atmosphere, if they travel over trees rather than grassland. But in a 100km square you can't model the disrupted airflow over each and every tree, bush, kangaroo or ant. So you parameterise the entire effect of all the trees, grass, houses and rivers into one overall effect. Much like an army cook doesn't try to work out how much each and every squaddie will eat that day (it varies) but goes "A squaddie will eat this many calories on average and we have X squaddies to feed. We need Y calories," and then works out how many tins of beans need to be purchased for stores. A small difference is that you can't have squaddies without food, so you over-order, but you also parameterise this figure so that you aren't dinged by stores for wasting food.
Mark says
Rod 124. Thanks for taking the answer and making a straw man out of it.
Did I say it did NOTHING for them? No. It does a heck of a lot less than if that money were just given to all the poor people (who have to spend it all because they are poor and can't afford all the things the middle classes take for granted).
Are you saying that rich people spend all their money? If so, why do they hate inheritance tax?
(just thought I’d use the straw left over myself)
Mark says
Joseph #119. Actually, parameterisation models are made. Microclimate researchers really DO measure the turbulence around a Scots pine on its own and in a forest, checking what effect it has on cloud formation, convective lift and lots of other things.
These models are then used to make parameterisations to go in the big models.
The main limit on how small you can go, I would say, is that for every halving of the grid spacing you have 16x as much work (halve the cell in all three dimensions and you have 8x the cells, and your timestep has to halve at least).
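In code, the scaling Mark describes (a simple counting argument, assuming the timestep halves along with the grid spacing, as a CFL-type constraint requires):

```python
# Cost of halving the grid spacing h times: 2**3 more cells per halving
# (three dimensions), times 2 more timesteps per halving.
def relative_cost(halvings):
    return (2 ** 3) ** halvings * 2 ** halvings

for h in range(4):
    print("grid spacing / %d -> cost x %d" % (2 ** h, relative_cost(h)))
# grid spacing / 1 -> cost x 1
# grid spacing / 2 -> cost x 16
# grid spacing / 4 -> cost x 256
# grid spacing / 8 -> cost x 4096
```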
Bryce Anderson says
Unrelated note: Can anyone explain why the RC wiki is so thoroughly locked down? I found the Crichton page, and wanted to update it to A) mention his death, and B) update a couple of links that have gone stale. Even after creating an account, I found that everything was locked to prevent editing (which sort of defeats the purpose of a wiki).
[Response: The wiki is only editable by approved accounts, email the contact address to inform us of your background and what you’d like to edit and we’ll see. But please note that the wiki is basically just a clearing house for links to existing rebuttals/discussions of the contrarian arguments – it is not an encyclopedia, and the details of Crichton’s life are not relevant. – gavin]
jo says
There has been a lot of mention of the parameterisations within models, initial and boundary conditions, various forcing mechanisms, etc. Do all models assume the same boundary conditions? And if so, what are these conditions, and how are they calculated? There are hundreds of factors that influence the climate in different ways; how do you manage to cram all of these into the calculations so that the model represents the climate as accurately as possible? Is there a typical number of conditions imposed on models, and how do you decide which forcings and limitations are to be imposed?
Thanks
CM says
I’m sorry — in #102 I suggested a post on chaos and climate, without noticing you already ran one in 2005:
https://www.realclimate.org/index.php/archives/2005/11/chaos-and-climate
But perhaps a statement about this would be a useful part of this FAQ?
Martin Vermeer says
Dietrich Hoecht #109:
Hmmm yes… but I suspect that what you did was place your ruler through the pre-1950 temperature curve, which is somewhat affected by a lull in volcanism 1920-1950. See the following graphs:
http://data.giss.nasa.gov/modelforce/
The grey curve “stratospheric aerosols” represents volcanism.
If you look at the tropospheric aerosol effect, you have to add the light blue and the purple dotted curve, yielding -1.8 W/m^2. Scale that to the total forcing of 2 W/m^2 (right figure) which has produced so far (through feedbacks, and attenuated by ocean thermal inertia) a warming of 0.7 degrees. So yes, something like 0.5…0.6 degrees.
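Spelling out that scaling (my reading of the numbers above, as a rough proportionality rather than a rigorous attribution):

```latex
\Delta T_{\mathrm{aer}} \;\approx\; \frac{F_{\mathrm{aer}}}{F_{\mathrm{tot}}}\,\Delta T_{\mathrm{obs}}
  \;=\; \frac{1.8~\mathrm{W\,m^{-2}}}{2.0~\mathrm{W\,m^{-2}}} \times 0.7~\mathrm{K}
  \;\approx\; 0.6~\mathrm{K}
```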
Only for a while. Remember they are short lived (longer lived if you put them in the stratosphere, but even there only a few years) and not cumulative. CO2 is.
Something like this has been seriously proposed as “geo-engineering” (but using sulphuric acid rather than soot), see elsewhere on RC. It’s not really a solution, rather a balancing act (like drinking a lot of black coffee after a boozing spree).
https://www.realclimate.org/index.php/archives/2008/08/climate-change-methadone/langswitch_lang/sp
Human water vapour emissions are irrelevant, as water vapour is in dynamic equilibrium with ocean water, an equilibrium controlled by global mean temperature, i.e., other greenhouse gases etc. In other words, H2O is a feedback, not a forcing. Also discussed elsewhere on RC.
https://www.realclimate.org/index.php/archives/2005/04/water-vapour-feedback-or-forcing/
Perhaps the answer is that, while short-lived, these aerosols nevertheless spread around the globe, especially the small particles, which play a role in modifying cloud formation (the "indirect aerosol effect" in the above graph). So the most visible part of the aerosol emissions may not be the most climatically relevant. My guess. But the experts on this site should know best.
Hope this helps.
Lawrence Coleman says
Does anybody here know roughly how many tonnes of CO2 are released into the global atmosphere daily just by the consumption of carbonated drinks (e.g. Coke, Fanta, even sparkling mineral water) and all the aerosol cans that use compressed CO2 as the propellant? Say 1 in every 6-7 people on earth drinks 1L of soda per day…? How does that stack up against the emissions from vehicles, say? I just drank a can of Coke and burped, and immediately felt somewhat guilty.
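A back-of-envelope sketch of the comparison, with every input an assumed round number (~6 g of dissolved CO2 per litre of soda, ~1 billion litres/day from the 1-in-7 guess, and road transport very roughly 5 Gt CO2/yr):

```python
# Fermi estimate: CO2 from carbonated drinks vs road vehicles.
# All figures are rough assumptions for illustration only.
litres_per_day = 1e9                   # ~1 in 7 of ~7e9 people, 1 L each
co2_per_litre_kg = 6e-3                # typical carbonation, ~6 g/L
soda_t_per_day = litres_per_day * co2_per_litre_kg / 1000.0

vehicles_t_per_day = 5e9 / 365.0       # ~5 Gt CO2/yr road transport, assumed

print("soda:     ~%.0f tonnes CO2/day" % soda_t_per_day)       # ~6,000
print("vehicles: ~%.0f tonnes CO2/day" % vehicles_t_per_day)   # ~14,000,000
```

On these assumptions the soda contribution is of order 0.05% of vehicle emissions (and the CO2 used for carbonation is often captured as a byproduct of other industrial processes in the first place).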
Alexi Tekhasski says
Re #141 and #142: Mark and Hank, I am afraid you misunderstood my questions. I do not understand terms like "gravity", "air resistance", or "the sunnier it is, the more water evaporates" when we are talking about numeric calculations. Here is an example of a description of a GCM in normal scientific terms that I can understand:
http://www.ccsm.ucar.edu/models/atm-cam/docs/description/description.pdf
Still, this description is incomplete and requires additional information from articles published in 1959 and 1974, and it never mentions the term "feedback" (except once, in a reference to other articles).
So, my questions are: given the equations shown in the above document, which particular terms (or coefficients, or something else) do "you" call "forcings", and which ones are the "feedbacks"? Thanks.