This is a continuation of a previous post including interesting questions from the comments.
More Questions
- What are parameterisations?
Some physics in the real world that is necessary for a climate model to work is only known empirically. Or perhaps the theory only really applies at scales much smaller than the model grid size. This physics needs to be ‘parameterised’ i.e. a formulation is used that captures the phenomenology of the process and its sensitivity to change but without going into all of the very small scale details. These parameterisations are approximations to the phenomena that we wish to model, but which work at the scales the models actually resolve. A simple example is the radiation code – instead of using a line-by-line code which would resolve the absorption at over 10,000 individual wavelengths, a GCM generally uses a broad-band approximation (with 30 to 50 bands) which gives very close to the same results as a full calculation. Another example is the formula for the evaporation from the ocean as a function of the large-scale humidity, temperature and wind-speed. This is really a highly turbulent phenomenon, but there are good approximations that give the net evaporation as a function of the large scale (‘bulk’) conditions. In some parameterisations, the functional form is reasonably well known, but the values of specific coefficients might not be. In these cases, the parameterisations are ‘tuned’ to reproduce the observed processes as much as possible.
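As a concrete illustration, here is a minimal sketch (in Python, not taken from any actual GCM) of what a ‘bulk’ evaporation parameterisation looks like: the evaporative flux is written in terms of the large-scale wind, temperature and humidity, with an empirical transfer coefficient of the kind that can be tuned against process-level observations. The coefficient value and the saturation-humidity fit are illustrative assumptions only.

```python
# Minimal sketch (not any model's actual code) of a 'bulk' evaporation
# parameterisation: net evaporation from large-scale ('bulk') conditions.
# The transfer coefficient c_e is the kind of empirical parameter that
# may be tuned against process-level observations.
import math

def saturation_specific_humidity(T_kelvin, pressure_pa=101325.0):
    """Approximate saturation specific humidity (kg/kg) over water."""
    # Simple Clausius-Clapeyron-type fit for saturation vapour pressure (Pa)
    e_sat = 611.2 * math.exp(17.67 * (T_kelvin - 273.15) / (T_kelvin - 29.65))
    return 0.622 * e_sat / (pressure_pa - 0.378 * e_sat)

def bulk_evaporation(sst_k, q_air, wind_speed, rho_air=1.2, c_e=1.3e-3):
    """Evaporative moisture flux (kg m-2 s-1) from bulk conditions."""
    return rho_air * c_e * wind_speed * (saturation_specific_humidity(sst_k) - q_air)

# Example: warm tropical ocean, fairly dry air aloft, moderate wind
print(bulk_evaporation(sst_k=300.0, q_air=0.015, wind_speed=7.0))
```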
- How are the parameterisations evaluated?
In at least two ways. At the process scale, and at the emergent phenomena scale. For instance, taking one of the two examples mentioned above, the radiation code can be tested against field measurements at specific times and places where the composition of the atmosphere is known alongside a line-by-line code. It would need to capture the variations seen over time (the daily cycle, weather, cloudiness etc.). This is a test at the level of the actual process being parameterised and is a necessary component in all parameterisations. The more important tests occur when we examine how the parameterisation impacts larger-scale or emergent phenomena. Does changing the evaporation improve the patterns of precipitation? the match of the specific humidity field to observations? etc. This can be an exhaustive set of tests, but again most of them are necessary. Note that most ‘tunings’ are done at the process level. Only those that can’t be constrained using direct observations of the phenomena are available for tuning to get better large scale climate features. As mentioned in the previous post, there are only a handful of such parameters that get used in practice.
- Are clouds included in models? How are they parameterised?
Models do indeed include clouds, and do allow changes in clouds as a response to forcings. There are certainly questions about how realistic those clouds are and whether they have the right sensitivity – but all models do have them! In general, models suggest that they are a positive feedback – i.e. there is a relative increase in high clouds (which warm more than they cool) compared to low clouds (which cool more than they warm) – but this is quite variable among models and not very well constrained from data.
Cloud parameterisations are amongst the most complex in the models. The large differences in mechanisms for cloud formation (tropical convection, mid-latitude storms, marine stratus decks) require multiple cases to be looked at and many sensitivities to be explored (to vertical motion, humidity, stratification etc.). Clouds also have important micro-physics that determine their properties (such as cloud particle size and phase) and interact strongly with aerosols. Standard GCMs have most of this physics included, and some are even going so far as to embed cloud resolving models in each grid box. These models are supposed to do away with much of the parameterisation (though they too need some, smaller-scale, ones), but at the cost of greatly increased complexity and computation time. Something like this is probably the way of the future.
- What is being done to address the considerable uncertainty associated with cloud and aerosol forcings?
As alluded to above, cloud parameterisations are becoming much more detailed and are being matched to an ever larger amount of observations. However, there are still problems in getting sufficient data to constrain the models. For instance, it’s only recently that separate diagnostics for cloud liquid water and cloud ice have become available. We still aren’t able to distinguish different kinds of aerosols from satellites (though maybe by this time next year).
However, none of this is to say that clouds are a done deal; they certainly aren’t. In both cloud and aerosol modelling the current approach is to get as wide a spectrum of approaches as possible and to discern what is and what is not robust among those results. Hopefully soon we will start converging on the approaches that are the most realistic, but we are not there yet.
Forcings over time are a slightly different issue, and there it is likely that substantial uncertainties will remain because of the difficulty in reconstructing the true emission data for periods more than a few decades back. That involves making pretty unconstrained estimates of the efficiency of 1930s technology (for instance) and 19th Century deforestation rates. Educated guesses are possible, but independent constraints (such as particulates in ice cores) are partial at best.
- Do models assume a constant relative humidity?
No. Relative humidity is a diagnostic of the models’ temperature and water distribution and will vary according to the dynamics, convection etc. However, many processes that remove water from the atmosphere (i.e. cloud formation and rainfall) have a clear functional dependence on the relative humidity rather than the total amount of water (i.e. clouds form when air parcels are saturated at their local temperature, not when humidity reaches X g/m3). This leads to the phenomenon observed in the models and the real world that long-term mean relative humidity is pretty stable. In models it varies by a couple of percent over temperature changes that lead to specific humidity (the total amount of water) changing by much larger amounts. Thus a good estimate of the model relative humidity response is that it is roughly constant, similar to the situation seen in observations. But this is a derived result, not an assumption. You can see for yourself here (select Relative Humidity (%) from the diagnostics).
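For the curious, here is a minimal sketch (assumed formulas, not any model’s actual code) of how relative humidity is diagnosed from the prognostic temperature and specific humidity, which is the sense in which it is an output rather than an input:

```python
# Minimal sketch: relative humidity as a *diagnostic* computed from the
# model's prognostic temperature and specific humidity, not an assumed input.
import math

def saturation_vapour_pressure(T_kelvin):
    """Approximate saturation vapour pressure over water (Pa)."""
    return 611.2 * math.exp(17.67 * (T_kelvin - 273.15) / (T_kelvin - 29.65))

def relative_humidity(q, T_kelvin, pressure_pa):
    """Relative humidity (%) from specific humidity q (kg/kg), T and pressure."""
    e = q * pressure_pa / (0.622 + 0.378 * q)   # vapour pressure implied by q
    return 100.0 * e / saturation_vapour_pressure(T_kelvin)

# Warming the air while holding q fixed lowers RH; in a model q responds too.
print(relative_humidity(q=0.010, T_kelvin=288.0, pressure_pa=100000.0))
print(relative_humidity(q=0.010, T_kelvin=290.0, pressure_pa=100000.0))
```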
- What are boundary conditions?
These are the basic data input into the models that define the land/ocean mask, the height of the mountains, river routing and the orbit of the Earth. For standard models additional inputs are the distribution of vegetation types and their properties, soil properties, and mountain glacier, lake, and wetland distributions. In more sophisticated models some of what were boundary conditions in simpler models have now become prognostic variables. For instance, dynamic vegetation models predict the vegetation types as a function of climate. Other examples in a simple atmospheric model might be the distribution of ozone or the level of carbon dioxide. In more complex models that calculate atmospheric chemistry or the carbon cycle, the boundary conditions would instead be the emissions of ozone precursors or anthropogenic CO2. Variations in these boundary conditions (for whatever reason) will change the climate simulation and can be considered forcings in the most general sense (see the next few questions).
- Does the climate change if the boundary conditions are stable?
The answer to this question depends very much on perspective. On the longest timescales a climate model with constant boundary conditions is stable – that is, the mean properties and their statistical distribution don’t vary. However, the spectrum of variability can be wide, and so there are variations from one decade to the next, and from one century to the next, that are the result of internal variations in (for instance) the ocean circulation. While the long term stability is easy to demonstrate in climate models, it can’t be unambiguously determined whether this is true in the real world since boundary conditions are always changing (albeit slowly most of the time).
- Does the climate change if boundary conditions change?
Yes. If any of the factors that influence the simulation change, there will be a response in the climate. It might be large or small, but it will always be detectable if you run the model for long enough. For example, making the Rockies smaller (as they were a few million years ago) changes the planetary wave patterns and the temperature patterns downstream. Changing the ozone distribution changes temperatures, the height of the tropopause and stratospheric winds. Changing the land-ocean mask (because of sea level rise or tectonic changes for instance) changes ocean circulation, patterns of atmospheric convection and heat transports.
- What is a forcing then?
The most straightforward definition is simply that a forcing is a change in any of the boundary conditions. Note however that this definition is not absolute with respect to any particular bit of physics. Take ozone for instance. In a standard atmospheric model, the ozone distribution is fixed and any change in that fixed distribution (because of stratospheric ozone depletion, tropospheric pollution, or changes over a solar cycle) would be a forcing causing the climate to change. In a model that calculates atmospheric chemistry, the ozone distribution is a function of the emissions of chemical precursors, the solar UV input and the climate itself. In such a model, ozone changes are a response (possibly leading to a feedback) to other imposed changes. Thus it doesn’t make sense to ask whether ozone changes are or aren’t a forcing without discussing what kind of model you are talking about.
There is however a default model setup in which many forcings are considered. This is not always stated explicitly and leads to (somewhat semantic) confusion even among specialists. This setup consists of an atmospheric model with a simple mixed-layer ocean model, but that doesn’t include chemistry, aerosol, vegetation or dynamic ice sheet modules. Not coincidentally this corresponds to the state-of-the-art of climate models around 1980 when the first comparisons of different forcings started to be done. It persists in the literature all the way through to the latest IPCC report (figure xx). However, there is a good reason for this, and that is the observation that different forcings that have equal ‘radiative’ impacts have very similar responses. This allows many different forcings to be compared in magnitude and added up.
The ‘radiative forcing’ is calculated (roughly) as the net change in radiative fluxes (both short wave and long wave) at the top of the atmosphere when a component of the default model setup is changed. Increased solar irradiance is an easy radiative forcing to calculate, as is the value for well-mixed greenhouse gases. The direct effect of aerosols (the change in reflectance and absorption) is also easy (though uncertain due to the distributional uncertainty), while the indirect effect of aerosols on clouds is a little trickier. However, some forcings in the general sense defined above don’t have an easy-to-calculate ‘radiative forcing’ at all. What is the radiative impact of opening the Isthmus of Panama? or the collapse of Lake Agassiz? Yet both of these examples have large impacts on the models’ climate. Some other forcings have a very small global radiative forcing and yet lead to large impacts (orbital changes for instance) through components of the climate that aren’t included in the default set-up. This isn’t a problem for actually modelling the effects, but it does make comparing them to other forcings without doing the calculations a little more tricky.
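To make the bookkeeping explicit, here is a toy sketch of the idea: the forcing is the difference in net top-of-atmosphere flux between a perturbed and a control configuration. The simplified 5.35·ln(C/C0) expression for well-mixed CO2 (Myhre et al., 1998) is included purely as an illustration of the kind of number that comes out; it is a published fit, not how a GCM actually computes the fluxes, and the flux values below are made up.

```python
# Toy sketch: radiative forcing as the change in net top-of-atmosphere (TOA)
# flux between a perturbed and a control state. The 5.35*ln(C/C0) fit for
# well-mixed CO2 (Myhre et al., 1998) is shown only as an illustrative check.
import math

def net_toa_flux(absorbed_solar, outgoing_longwave):
    """Net downward flux at the top of the atmosphere (W/m2)."""
    return absorbed_solar - outgoing_longwave

def co2_forcing(c_new_ppm, c_ref_ppm=280.0):
    """Approximate forcing (W/m2) for a CO2 change, from a published fit."""
    return 5.35 * math.log(c_new_ppm / c_ref_ppm)

control = net_toa_flux(absorbed_solar=240.0, outgoing_longwave=240.0)
perturbed = net_toa_flux(absorbed_solar=240.0, outgoing_longwave=236.3)  # made-up OLR drop
print(perturbed - control)    # ~3.7 W/m2
print(co2_forcing(560.0))     # the CO2 fit gives a similar number for a doubling
```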
- What are the differences between climate models and weather models?
Conceptually they are very similar, but in practice they are used very differently. Weather models use as much data as is available to start off close to the current weather situation and then use their knowledge of physics to step forward in time. This has good skill for a few days and some skill for a little longer. Because they are run for short periods of time only, they tend to have much higher resolution and more detailed physics than climate models (but note that the Hadley Centre, for instance, uses the same model for climate and weather purposes). Weather models develop in ways that improve the short term predictions, though the impact for long term statistics or the climatology needs to be assessed independently. Curiously, the best weather models often have a much worse climatology than the best climate models. There are many current attempts to improve the short-term predictability in climate models in line with the best weather models, though it is unclear what impact that will have on projections.
- How are solar variations represented in the models?
This varies a lot because of uncertainties in the past record and complexities in the responses. But given a particular estimate of solar activity there are a number of modelled responses. First, the total amount of solar radiation (TSI) can be varied – this changes the total amount of energy coming into the system and is very easy to implement. Second, the variations over the solar cycle at different frequencies (from the UV to the near infra-red) don’t all have the same amplitude – UV changes are about 10 times as large as those in the total irradiance. Since UV is mostly absorbed by ozone in the stratosphere, including these changes increases the magnitude of the solar cycle variability in the stratosphere. Furthermore, the change in UV has an impact on the production of ozone itself (even down into the troposphere). This can be calculated with chemistry-climate models, and is increasingly being used in climate model scenarios (see here for instance).
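Before getting to the more speculative solar pathways below, the very simplest of these responses, converting a TSI change into a global-mean radiative forcing, can be sketched in a couple of lines (the albedo and the size of the solar-cycle TSI change used here are just illustrative round numbers):

```python
# Minimal sketch: mapping a change in total solar irradiance (TSI) onto a
# global-mean radiative forcing. Divide by 4 for the sphere/disc geometry and
# multiply by (1 - albedo) for the fraction actually absorbed. The albedo and
# the ~0.1% solar-cycle TSI change below are illustrative values only.
def solar_forcing(delta_tsi, planetary_albedo=0.3):
    """Global-mean forcing (W/m2) from a change in TSI (W/m2)."""
    return delta_tsi * (1.0 - planetary_albedo) / 4.0

print(solar_forcing(1.3))   # ~0.1% of ~1361 W/m2 over a solar cycle -> ~0.2 W/m2
```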
There are also other hypothesised impacts of solar activity on climate, most notably the impact of galactic cosmic rays (which are modulated by the solar magnetic activity on solar cycle timescales) on atmospheric ionisation, which in turn has been linked to aerosol formation, and in turn linked to cloud amounts. Most of these links are based on untested theories and somewhat dubious correlations; however, as was recognised many years ago (Dickinson, 1975), it is a plausible idea. Implementing it in climate models is however a challenge. It requires models to have a full model of aerosol creation, growth, accretion and cloud nucleation. There are many other processes that affect aerosols and GCR-related ionisation is only a small part of that. Additionally there is a huge amount of uncertainty in aerosol-cloud effects (the ‘aerosol indirect effect’). Preliminary work seems to indicate that the GCR-aerosol-cloud link is very small (i.e. the other effects dominate), but this is still in the early stages of research. Should this prove to be significant, climate models will likely incorporate this directly (using embedded aerosol codes), or will parameterise the effects based on calculated cloud variations from more detailed models. What models can’t do (except perhaps as a sensitivity study) is take purported global scale correlations and just ‘stick them in’ – cloud processes and effects are so tightly wound up in the model dynamics and radiation and have so much spatial and temporal structure that this couldn’t be done in a way that made physical sense. For instance, part of the observed correlation could be due to the other solar effects, and so how could they be separated out? (and that’s even assuming that the correlations actually hold up over time, which doesn’t seem to be the case).
- What do you mean when you say a model has “skill”?
‘Skill’ is a relative concept. A model is said to have skill if it gives more information than a naive heuristic. Thus for weather forecasts, a prediction is described as skillful if it works better than just assuming that each day is the same as the last (‘persistence’). It should be noted that ‘persistence’ itself is much more skillful than climatology (the historical average for that day) for about a week. For climate models, there is a much larger range of tests available and there isn’t necessarily an analogue for ‘persistence’ in all cases. For a simulation of a previous time period (say the mid-Holocene), skill is determined relative to an assumption of ‘no change from the present’. Thus if a model predicts a shift northwards of the tropical rain bands (as was observed), that would be skillful. This can be quantified and different models can exhibit more or less skill with respect to that metric. For the 20th Century, models show skill for the long-term changes in global and continental-scale temperatures – but only if natural and anthropogenic forcings are used – compared to an expectation of no change. Standard climate models don’t show skill at the interannual timescales, which depend heavily on El Niños and other relatively unpredictable internal variations (note that initialised climate model projections that use historical ocean conditions may show some skill, but this is still a very experimental endeavour).
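As a toy example of the kind of bookkeeping involved, here is a sketch of a mean-square-error skill score measured against a naive reference such as persistence or ‘no change’ (the numbers below are made up; real skill metrics are more involved):

```python
# Minimal sketch of a 'skill' score: how much better the model does than a
# naive reference (persistence, climatology, or 'no change'). Positive skill
# means the model beats the reference; the data below are made up.
def mse(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(model, reference, obs):
    """1 - MSE(model)/MSE(reference); 1 is perfect, 0 is no better than reference."""
    return 1.0 - mse(model, obs) / mse(reference, obs)

obs       = [0.1, 0.2, 0.15, 0.3, 0.45]     # e.g. observed anomalies
model     = [0.12, 0.18, 0.2, 0.28, 0.4]    # model hindcast
no_change = [0.0] * len(obs)                # naive 'no change' reference
print(skill_score(model, no_change, obs))
```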
- How much can we learn from paleoclimate?
Lots! The main issue is that for the modern instrumental period the changes in many aspects of climate have not been very large – either compared with what is projected for the 21st Century, or from what we see in the past climate record. Thus we can’t rely on the modern observations to properly assess the sensitivity of the climate to future changes. For instance, we don’t have any good observations of changes in the ocean’s thermohaline circulation over recent decades because a) the measurements are difficult, and b) there is a lot of noise. However, in periods in the past, say around 8,200 years ago, or during the last ice age, there is lots of evidence that this circulation was greatly reduced, possibly as a function of surface freshwater forcing from large lake collapses or from the ice sheets. If those forcings and the response can be quantified they provide good targets against which the models’ sensitivity can be tested. Periods that are of possibly the most interest for testing sensitivities associated with uncertainties in future projections are the mid-Holocene (for tropical rainfall, sea ice), the 8.2kyr event (for the ocean thermohaline circulation), the last two millennia (for decadal/multi-decadal variability), the last interglacial (for ice sheets/sea level) etc. There are plenty of other examples, and of course, there is a lot of intrinsic interest in paleoclimate that is not related to climate models at all!
As before, if there are additional questions you’d like answered, put them in the comments and we’ll collate the interesting ones for the next FAQ.
Vernon says
Gavin,
IRT #41 I was wondering if the GISS Model was changed in light of this April 2008 Study. It is a peer-reviewed paper. If the model was not changed to reflect these findings, why? If changes were made, how did those changes impact the climate forecast?
Regards
[Response: Many model intercomparisons appear in the peer-reviewed literature. Many of them speculate about the cause of some perceived discrepancy. Unfortunately, very few of these are useful in actually improving the models. The reason is that a) the speculation (in this case a larger sensitivity of water vapour transports to the forcing) is not actually compared to data (there are no water vapour transport observations in the paper), and b) water vapour transports are a function of dozens of bits of physics – evaporation, cloud formation, mixing in storms, cyclogenesis, topographic steering, etc… Which of these do you think should be changed? and how? Thus while a paper like the one you quote is a good benchmark to see if future versions of the models (or future revisions of the data) reduce the discrepancy, models don’t change specifically because of them. Models do change because resolution gets better (which will have impacts on many of the aspects that control water vapour transports), because individual parameterisations are improved based usually on specific measurements of the process in question, and because models become more complete (for instance, having a more a priori treatment of aerosol/cloud interactions). When we re-do the long runs that are analogous to the ones discussed in the paper (which are actually starting very soon), we’ll see if things get better or not. Note too that the repeat time for these kinds of comprehensive experiments is multiple years, not months. – gavin]
Shoshin says
Do you include sunspot cycles and their effect on cloud formation in your modeling?
[Response: The answer is already above. – gavin]
Bryan S says
Gavin,
In model intercomparisons, do models with ocean circulation, all having similar sensitivities for the same forcing scenarios, have similar time constants for fully relaxing the radiative imbalance? I understand the relationship between sensitivity and relaxation time, but do different models show different types of transient behavior?
[Response: Yes. Their uptake of heat into the deep ocean is different – those with less uptake warm faster than the others. – gavin]
Rod B says
T. Gannet (47), et al: This is one of my areas of interest (which is not the same as an area of knowledge). I’ll throw out some stuff that might shed some light or might generate informative rebuttals.
The atmosphere does radiate a broad spectrum à la Planck’s blackbody, though the view is that it is done poorly. At least when you get beyond dividing the atmosphere into 3 or 4 strata for analysis it gets very complex to analyze and figure out. [There is some debate over this assertion which, IMO, mostly boils down to what “blackbody radiation” is exactly.]
Non-CO2 molecules that have been thermally heated via collision with CO2, gaining translation energy from CO2 vibration energy, can, among other processes, later collide with another CO2 (say at a higher altitude), and transfer some of its translation energy back to the CO2 molecule, some of which will/may go into a vibration mode and at least be eligible then to radiate outward (or inward) as IR.
Vernon says
IRT #53 Since the Argo floats show no ocean warming past the surface mixing boundary and neither UAH nor RSS satellite data show tropospheric warming, both predicted by climate models, how do you go about determining what parameters need to be changed based on the real world data? What is the process used to determine what part of the science is being incorrectly parameterized?
I am not being critical of the models but rather looking for insight into the process you use to refine your model in light of new information.
[Response: For a start you get the characterisation of the data correct. Both RSS and UAH MSU-LT show warming, as does long term ocean heat content data (Domingues et al., 2008). But these aren’t the kind of data that are used to improve models – they are (if they are precise enough) the kind of data that is used to evaluate the models. To improve a model I need something like good data on how sea ice albedo varies with snow conditions or melt-pond extent; to evaluate a model I need to look at whether the interannual variability of sea ice extent is comparable to the obs. The former is specific to a process (and therefore a particular chunk of code), while the latter is an emergent property. I’ve said this before, and I’ll say it again, models are not tuned to match long-term time-series data of any sort. – gavin]
Vernon says
IRT #55 Thank you for your comment. I meant to say that both RSS and UAH do not show tropical upper tropo warming, which is called for in climate models, but I hashed it up.
Sorry about that, but thanks for the insight.
[Response: RSS does. – gavin]
Bryan S says
Re: #53
Gavin,
When an emission scenario is input into various ocean-atmosphere coupled models (known to have the same equilibrium sensitivities), and they are run out over hundreds to several thousand years, are there large differences in the length of time the temperature continues to rise after the emission change approaches 0? Conversely, what is the scatter (from the model comparisons) in the (fractional percentage) of the temperature rise that has already been realized at the time that the emission change approaches 0? Is there already significant scatter at time 0, or is there a tight cluster, with increasing scatter out in time?
kuhnkat says
OT
Mauna Loa posts a 0.24 ppm yearly rise in CO2 for 2008, the smallest since recording began in 1959!!!
http://www.esrl.noaa.gov/gmd/ccgg/trends/
Hopefully they fully checked these numbers before posting!!
Jim Eager says
As I work my way through David Archer’s Understanding the Forecast and read through other more technical references, I have a couple questions relating to the overlap of CO2 and H2O that maybe Hank, Ray or one of the other regulars could help me with.
I understand that the absorption spectrum is non-continuous, but rather made of discrete wavelength bands, i.e. the “picket fence” analogy.
My first question is: Do the CO2 bands, or “pickets,” coincide with those of H2O, or are they offset from each other?
I ask because if they are offset, it would undermine the popular argument that CO2 does not matter in the region of overlap.
Second question regarding pressure broadening: Does the broadening only occur outward in either wing of the wider frequency range, or does each discrete band, or “picket,” broaden as well?
Jim Eager says
Re 58, Did you read this disclaimer?
“The last year of data are still preliminary, pending recalibrations of reference gases and other quality control checks.”
And if you click on this link:
“globally averaged CO2 concentration at the surface.”
http://www.esrl.noaa.gov/gmd/ccgg/trends/index.html#global
The Annual Mean Growth Rate shows 1.82 for 2008.
Hank Roberts says
Jim, rather than divert this topic meant to collect FAQ suggestions — you’ve posted a good one — you might look at
http://www.aip.org/history/climate/Radmath.htm
where the explanation includes
“…. Take a single molecule of CO2 or H2O. It will absorb light only in a set of specific wavelengths, which show up as thin dark lines in a spectrum. In a gas at sea-level temperature and pressure, the countless molecules colliding with one another at different velocities each absorb at slightly different wavelengths, so the lines are broadened …. In cold air at low pressure, each band resolves into a cluster of sharply defined lines, like a picket fence. There are gaps between the H2O lines where radiation can get through unless blocked by CO2 lines….”
Read the whole thing and its pointers, not just my excerpt. The ‘band’ is a picture on an instrument, the instruments have continued to improve, but you’re probably asking about the radiation physics. My hunch (only a hunch) is that at the point where almost all of the molecules have time to wring out a photon before they bumble into one another and get their vibrations mixed up, the lines will be most precise.
Let me try an analogy — purely a hunch, someone knowledgeable will correct me. Have you seen a chaotic pendulum? It’s a simple device in which several different pieces can spin independently, and the energy is moving back and forth throughout the whole thing.
http://www.youtube.com/watch?v=BrMQ7G1DtPw
http://www.youtube.com/watch?v=mhxcMFQjVRs&NR=1
Let’s take a composite pendulum and enable it to capture and release pingpong balls, but the firing end has to be spinning at some high speed before it can fire off a pingpong ball.
One of them alone will eventually reach the firing point.
Take a bunch of such chaotic pendulum devices floating around in a small area (postulate zero gravity and a vacuum …) and they’ll run into one another far more often than any one of them will happen to concentrate enough of its energy into one particular arm of the device and emit a pingpong ball.
No, this isn’t a scientific explanation ….
Ray Ladbury says
Jim Eager, The 15 micron band for CO2 is on the edge of the H2O band. The best illustration I know of is this one:
http://www.globalwarmingart.com/wiki/Image:Atmospheric_Transmission_png
So while CO2 is absorbing very strongly in this band, water vapor is weakly absorbing.
WRT pressure and doppler broadening, my understanding is that the whole line broadens and flattens slightly.
Mark says
Pressure broadening makes ALL lines wider. Reason: severalfold (and I may have forgotten a few), but a major one is doppler shift (the velocity of a particle goes up when you increase pressure, PV=nRT, K.E. varies with T): the velocity is randomly distributed and in a random direction WRT the direction of the radiation, which broadens the lines. This affects the absorption spectra directly.
Other things affect the ability to absorb indirectly, by changing the energy or by siphoning off energy too quickly to hold on and emptying the band quickly to be refreshed anew.
But the broadening is used in stellar physics to see what the temperature of something is and, because that is symmetrical, doesn’t affect the doppler shift of recession for distant objects (which is only one way).
Bryan S says
Re #57: I will speculate on my own question in hope of receiving education.
*My hypothesis* is that in the actual climate system, most of the radiative imbalance will have already been equilibrated when the forcing change reaches 0. There is a long thermal lag, but the long tail of the transient response represents only a small fraction of the total temperature change needed to reach equilibrium.
The concept of significant heating “left in the pipeline” is a flawed hypothesis.
Reason: The portion of the shallow ocean, land and cryosphere that is effectively coupled with the atmosphere has a relatively small amount of mass, allowing the atmosphere temperature to increase rapidly. The temperature rise will nearly equilibrate the forcing change within only a very short time period (maybe a few years). The perturbation will not completely die off for several thousand years however due to the slow uptake of heat by the deep ocean. The deep ocean has a very long memory and will record a complex interference pattern of past, present, and future perturbations. Models not only fail to initialize past ocean conditions, but additionally are known not to accurately resolve ocean turbulent eddy motion on decadal to multi-decadal time scales; therefore the transient responses they produce cannot be considered skillful predictions of the real climate system. They are rather only process experiments.
Why is my hypothesis flawed?
Phil. Felton says
Ray Ladbury Says:
11 January 2009 at 3:45 PM
Jim Eager, The 15 micron band for CO2 is on the edge of the H2O band. The best illustration I know of is this one:
http://www.globalwarmingart.com/wiki/Image:Atmospheric_Transmission_png
So while CO2 is absorbing very strongly in this band, water vapor is weakly absorbing.
WRT pressure and doppler broadening, my understanding is that the whole line broadens and flattens slightly.
The trouble with that figure is that it’s such low resolution that it gives the false impression that the band absorbs at all wavelengths. A blown-up region, shown below, gives a more realistic picture.
http://i302.photobucket.com/albums/nn107/Sprintstar400/CO2H2O.gif
Y fouquart says
Mark: pressure broadening and Doppler broadening are quite different.
Doppler broadening is due to the component of the speed of the absorbing molecule along the line of sight.
To understand pressure line broadening, you must keep in mind that there are a huge number of transitions that occur simultaneously. Typically, the number of transitions is of the order of the Avogadro number.
Each transition occurs at a discrete frequency.
Each molecule generates an electric field which acts on the charged particles of any other molecule which is sufficiently close (this is what is called a “collision”). This results in a slight modification of the characteristics of the molecule so that the possible transition occurs at a slightly different frequency. If you consider all molecules of a given gas, that gives you a distribution of discrete transitions.
What you see is the envelope of that distribution.
Pressure line broadening occurs at all frequencies, but how much broadening depends upon the gas under consideration as well as upon the frequency.
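To put rough numbers on the two kinds of width being discussed, here is a small sketch comparing a Doppler width with a pressure-broadened (Lorentz) width for a line near 15 microns. The reference Lorentz width and its temperature scaling are typical orders of magnitude only, not values for any specific line.

```python
# Minimal sketch of the two half-widths discussed above for a line near
# 15 micron (the CO2 bending band). The pressure-broadened (Lorentz) width
# scales roughly linearly with pressure; the Doppler width depends only on T
# and the molecular mass. Reference values are illustrative assumptions.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
C   = 2.998e8           # speed of light, m/s

def doppler_half_width(nu0_hz, T_kelvin, mass_kg):
    """Doppler half-width at half maximum (Hz)."""
    return (nu0_hz / C) * math.sqrt(2.0 * math.log(2.0) * K_B * T_kelvin / mass_kg)

def lorentz_half_width(p_pa, T_kelvin, gamma_ref_hz=2.0e9):
    """Pressure-broadened half-width (Hz), scaled from a reference at 1 atm, 296 K."""
    return gamma_ref_hz * (p_pa / 101325.0) * (296.0 / T_kelvin) ** 0.7

nu0 = C / 15e-6                    # frequency of a 15-micron line
m_co2 = 44.0 * 1.6605e-27          # CO2 molecular mass (kg)
print(doppler_half_width(nu0, 250.0, m_co2))   # ~tens of MHz
print(lorentz_half_width(101325.0, 250.0))     # ~GHz near the surface
print(lorentz_half_width(10000.0, 220.0))      # much narrower high up
```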
Ray Ladbury says
Bryan S., Your view basically assumes that all the ocean below the first 30 meters is inert, as water above that depth has mass equal to the entire atmosphere. Since we have evidence of warming from below that depth, tain’t so. What is more, GCMs with more realistic oceans perform better. Your model wouldn’t look much like Earth.
Uli says
Bryan S., #57,#64
Try your own calculation.
A short Tutorial:
Step 1. Find values for the ocean mass and water heat capacity.
Hint: The difference between salty water and pure water could be ignored in this crude calculation.
Step 2. The ocean is divided into an upper part of 3% of total mass and the deep ocean of 97% of total mass. Assume that the temperatures of both parts are in equilibrium (but not necessarily equal).
Step 3. Calculate the energy E in Joule needed to heat up the deep ocean by 1 K.
Step 4. By the assumption that the energy that goes into the deep ocean is proportional to the difference in the temperature anomaly, the response to a (relatively) fast rise of the upper part temperature by 1 K at t=0 sec is
t_deep=(1-exp(-t/tau))
tau is the response time of the deep ocean.
t_deep is the temperature anomaly of the deep ocean, it is 0 at t=0.
Step 5. Choose different response times (likely between 100 and 1000 years) you like and convert it to seconds.
Step 6. The energy per second that goes into the deep ocean under these simple assumptions is
P=(1K-t_deep)*E/tau
Calculate it at least for t=0 in units of TW (TeraWatt) or PW.
You can also calculate as a function of t if you like.
Step 7. Choose a climate sensitivity s you like in K/(W/m^2). If you have it in K per doubling of CO2, divide it by 3.708 to get this value.
Step 8. Calculate the additional heating (for example by ‘radiative forcing’) H in TW to get 1K (long term) temperature response by multiplying s with 510e12 m² the area of Earth.
Step 9. Compare H and P for t=0. You will need H+P TW additional heating to reach 1K very soon in the presence of the deep ocean uptake. The total long term temperature response to H+P TW will be (H+P)/H K, the short term only 1K.
Step 10. If you like try more values for tau and s.
I hope that helps.
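For anyone who wants to run the numbers, here is one reading of the steps above as a short script. The ocean mass, heat capacity, response time and sensitivity used are assumed round values, so treat the output as illustrative only.

```python
# A sketch of the back-of-envelope calculation outlined in the steps above
# (one reading of it, with assumed round numbers for ocean mass, heat
# capacity, sensitivity and deep-ocean response time).
import math

M_OCEAN = 1.4e21          # total ocean mass, kg (assumed)
C_WATER = 4.0e3           # heat capacity, J/(kg K) (assumed)
AREA    = 510e12          # surface area of Earth, m^2

E_deep = 0.97 * M_OCEAN * C_WATER          # Step 3: J to warm deep ocean by 1 K

def deep_ocean_uptake(t_s, tau_s):
    """Steps 4-6: heat uptake by the deep ocean (W) after a 1 K surface step."""
    t_deep = 1.0 - math.exp(-t_s / tau_s)  # deep-ocean temperature anomaly (K)
    return (1.0 - t_deep) * E_deep / tau_s

tau = 500.0 * 3.15e7                       # Step 5: a 500-year response time, in s
s   = 3.0 / 3.708                          # Step 7: 3 K per doubling -> K/(W/m2)

P = deep_ocean_uptake(0.0, tau) / 1e12     # Step 6: deep-ocean uptake at t=0, TW
H = AREA / s / 1e12                        # Step 8 (corrected below): TW per 1 K long-term
print(f"H = {H:.0f} TW, P = {P:.0f} TW")   # Step 9: compare the two
```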
Bryan S says
Ray. Not inert. Only that the thermocline will appear as a sharp boundary in the transient response. The atmospheric temperature will appear to equilibrate very rapidly at first due to the heating of land and shallow ocean (the smaller the mass, the faster the temperature increases). If plotted graphically, dT/dt will be large at first, then will decrease rapidly along a log function until it becomes asymptotic. It is the complex dynamical structure of the upper ocean that will determine the exact shape of the curve.
Think about this analogy. If you run the faucet in your wash basin, the water level (temperature) will rise if the flow of water (heat) into the basin exceeds the rate that water drains out the bottom. How fast it rises is determined by 1) the size of the basin, 2) the rate of the water flow into the basin, and 3) the size of the drain. Your wash basin may drain into the ocean, but the size of the ocean will not be relevant to the problem of determining how long it will take to fill up the basin. Once water begins running over the side of your wash basin, it will by definition stop rising (equilibrium), despite the drain to the underlying ocean. If your basin had a sensor to detect when it was full, the faucet would shut off when it ran over. It would then intermittently kick on to keep the basin full. During this period, the rate of inflow would equal the rate of drainage. Once the ocean was full, the wash basin would stop draining completely, and it would be at complete equilibrium, with no water going in or out.
The increased GHG forcing just makes the sides of the basin taller, so the water level must get higher before it runs over. Despite the drain, as the sides of the basin slowly get taller, the water level is always lapping full at the sides, as the faucet is kicked on immediately as the sensor shows that water is not running over. There is very little lag time needed to keep the basin full as the sides get higher.
Jim Eager says
Thanks very much to Hank, Ray, and Mark.
Hank, The “picket fence” of sharply defined absorption lines described in the short excerpt from Spencer Weart’s The Discovery of Global Warming is exactly what I was asking about. I have read Weart, but mainly the book version as I found it too tedious to do so on-line, but I will go back through the on-line Basic Radiation Calculations chapter again, since I know the on-line version has embedded links to supporting material. I enjoyed your links to the chaotic pendulum (so that’s what those are called), but I’m not entirely grasping your analogy.
Ray, I understand that CO2’s absorption in the 15 micron band is much stronger than water vapour’s; it’s the total overlap further to the left that I’m more interested in. And that diagram is much too coarse to show the sharply defined absorption lines. Someone once posted a comparison here at RC showing the marked difference between a solid-appearing absorption curve and a high-resolution plot of the individual discrete absorption lines. This image showing the pressure broadening in the wings of the 15 micron band:
http://home.casema.nl/errenwijlens/co2/co205124.gif
doesn’t quite do it since it only resolves into discrete absorption spikes in the wings.
Ahhh, these two threads at Eli’s show what I mean:
http://rabett.blogspot.com/2007/07/pressure-broadening-eli-has-been-happy.html
http://rabett.blogspot.com/2007/07/temperature-anonymice-gave-eli-new.html
They also illustrate what Mark said, that pressure broadening causes all lines to become wider, expanding into gaps between the “pickets.”
So, my second question has been addressed, but the first remains:
In the region of CO2-H2O overlap do the absorptive lines of each coincide, or are they offset?
Jim Eager says
Yes Phil (65), that’s what I mean. It may even have been you who posted the comparison that I recall, although that’s not specifically it.
Kevin McKinney says
Bryan S., I appreciate your hypothesis for causing me to scurry off & review what I thought I knew about the thermocline. That review caused me to reflect that presumably one consequence of the reduced Arctic ice cover characterizing the last three years must be increased vertical mixing in the Arctic Ocean. This in turn should affect oceanic heat transport, although just in what manner I don’t dare speculate.
Anyway, turning to your hypothesis, my sense of the thermocline (post-review) would require that the bottom of the basin in your analogy be chaotically reforming itself on an ongoing basis. Moreover, I think the thermocline depth is frequently much deeper than Ray’s upper 30 m of ocean, which would suggest a much more gradual warming curve than you are imagining. My two cents. . .
Hank Roberts says
> chaotic pendulum … analogy
Someone who knows something should comment on that. Eli, you hereabouts?
Look at the online pictures showing the various different ways that a CO2 molecule can vibrate — angle changes, bond length changes, simultaneous or alternating. The energy there can move around the molecule “sorta kinda like” the various pieces of a chaotic pendulum can change their speed and direction as energy moves around in that system.
In an isolated molecule, if a photon is absorbed it adds more energy, and if the molecule doesn’t collide with another molecule and get rid of energy by collision, the energy moves chaotically (?) within the molecule’s many kinds and patterns of vibration, and if one of those happens to be the right [er uh size?] that vibration produces a photon and off that parcel of energy goes.
Or so I imagine. There, I’ve done the hard part, someone else can explain why it makes sense (grin) and do the computer animation ….
Ha!
ReCaptcha says for this post:
“publish tuned”
Hank Roberts says
PS, you can find triatomic molecule vibrations illustrated online; search for
co2 molecule vibration mode applet
Note how many _more_ possibilities are present with a triatomic molecule. A “chaotic pendulum” is very simple by comparison, with rotation but no stretch or bending modes. (I wonder if stretching a bond is like changing the length of a macroscopic pendulum?)
There’s a challenge for any Exploratorium or other science-museum hardware builders!
______________
“factors that”
Hank Roberts says
And one more to sum up the idea:
http://www.maths.ed.ac.uk/~s9905488/other/mol.pdf
“… considering the following basic view of the interaction of radiation and matter. Imagine a diatomic molecule which has a natural period of oscillation; if electromagnetic waves of a certain frequency pass by and somehow drive the oscillations of the molecule, then the molecule will extract more energy from the waves when their frequency matches the characteristic frequency of vibration of the molecule. Conversely, if the molecule was somehow able to store energy and then release it through its oscillations, again in the form of electromagnetic waves, then it would radiate waves with a frequency corresponding to the natural frequency of the molecule. This is a very qualitative argument but it does give a basic idea of the quantum mechanics which links molecular vibrations to observed spectral features.”
____________
“be- select”
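A small numerical toy of the qualitative argument quoted above: a driven, damped oscillator absorbs the most energy when the driving frequency matches its natural frequency. This is the classical resonance idea only, not molecular physics, and the mass, damping and forcing values are arbitrary units chosen just to show the peak.

```python
# Toy illustration of resonance: a driven, damped harmonic oscillator absorbs
# the most power when the driving frequency matches its natural frequency.
# Unit mass and forcing, arbitrary damping -- purely schematic.
def absorbed_power(omega_drive, omega_0=1.0, damping=0.05, force=1.0, mass=1.0):
    """Mean power absorbed by a driven damped harmonic oscillator."""
    num = force**2 * damping * omega_drive**2
    den = 2.0 * mass * ((omega_0**2 - omega_drive**2)**2 + (damping * omega_drive)**2)
    return num / den

for w in (0.5, 0.9, 1.0, 1.1, 2.0):
    print(f"drive at {w:.1f} x natural frequency: power {absorbed_power(w):.3f}")
```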
Jim Eager says
OK, Hank, that’s more or less what I thought you were driving at as I watched some of the secondary pendulums all of a sudden start spinning much faster.
Thanks –Jim
Mark says
Further to #73, the extra energy changes the entire system somewhat so that what used to be a vibrational mode now no longer is resonant, because, for example, the electron spends longer now between the O-C nuclei and so changes the electrostatic potential that causes the restoration of a vibrational mode.
The torsional force of flexion excitation changes for the same sort of reason (the C end can’t “flex” in as far because of electrostatic repulsion).
etc.
Very complicated.
Mark says
Jim, #70, your question doesn’t make sense. If “CO2-H2O overlap” means the absorption spectra overlapping (which is the only one that makes sense in the context asked), then yes the absorption lines do coincide. But that’s a tautology, so I can’t make out what you’re on about.
I doubt anyone else can either.
It doesn’t really matter anyway, since the collisional relaxation at about 15um is very much within the chances of happening at STP in our atmosphere by kinetic sources. So even if they didn’t overlap, they could populate each others’ absorption spectra interchangeably by hitting each other head on (to up the energy transfer to the resonance of the excitation energy) or by rear-ending (to reduce the energy transfer to the resonance of the excitation energy).
Or hit N2. Argon. Xenon. Yo momma. Whatever.
Hank Roberts says
Here’s what seems (to me, a purely amateurish reader) a helpful explanation of absorption lines in simple words.
http://www.applet-magic.com/absorptionspectra.htm
Written and coded by an economist, if I read the home page right.
Jim Eager says
Mark, it may well not make any sense, but I didn’t know that because I don’t have the physics background, so I asked.
The spectrum is continuous, no? While the absorption lines are centered on specific discrete wavelengths, right?
My thought was that if the absorption lines of CO2 and H2O do not coincide then the lines of one gas would be offset into the gap between the lines of the other gas, meaning that the absorption effect of CO2 would not be redundant to that of the more numerous H2O in the region of overlap, as some claim.
Perhaps my mistake is in my lack of understanding of what determines the discrete wavelengths of the absorption lines. Is it a characteristic of the particular molecule (CO2 vs H2O) or is it a characteristic of the bonds?
Hank Roberts says
Jim, the answer is yes, both. Complicated, hardly answerable in a blog posting. Lots of links posted above though. Maybe a FAQ will help.
Hank Roberts says
Potential FAQ material if anyone from someplace like the Exploratorium is checking in here — kids, you _can_ do this stuff at home nowadays.
Need a single-mode red laser, cheap? Check your pockets!
J. Chem. Phys. 124, 236101 (2006); DOI: 10.1063/1.2212940 – 16 June 2006
REFERENCES (6)
Joel Tellinghuisen
Department of Chemistry, Vanderbilt University
“An inexpensive (less than $5) key-chain model of red laser pointer (RLP) operates with typically 98% of its total emission in a single longitudinal cavity mode. The laser self-tunes with time, interpreted as due to thermal effects. The laser can also be tuned by varying its operating current and voltage. These properties permit one to quickly and easily record absorption spectral segments spanning ranges of 1–6 cm–1, with high quantitative reliability, resolution, and accuracy….”
Mark says
Jim, I think the problem is the wording of the question. Read up on some A-level physics first. But here’s a few that may help firm up what you’re asking.
There are continuous spectra and line spectra. It isn’t “The spectrum is continuous”. The line spectra have a very limited width. An example is the 15um line spectra. The width of a line spectrum depends on its half-life or stability. The definition of time is from a meta-stable (nearly stable) transition of caesium. Because the half-life of this excited state is so long, the width of the emission line is very thin and so the error in counting X vibrations is very accurate and the time measured therefrom likewise accurate.
The IR absorption is generally composed of many very close stable (or semi-stable) excited states. Because they are close together and very unstable, they can merge into a wide band of excited energies that are a resonant frequency of the system.
A resonant frequency is more likely to be absorbed.
That causes the energy to be absorbed and reemitted in a random direction. Therefore in this band, instead of shooting straight out into the universe, the photon and its concomitant energy take a random walk through the atmosphere, with a very short transfer time between reabsorptions. Only when the distance to absorption is getting to something of the order of the distance out of the atmosphere does it take a more direct route.
Now, the random walk will move much slower (to the square of the linear distance, given constant mean path between absorptions) the thicker the layer is.
Now what happens when you get more absorption elements per unit space?
Shorter mean free path.
Which means it takes longer to get out.
Because of the mixing, the height at which the mean free path is about the same as the distance to the exosphere must also get higher (because the density of absorbers goes up more quickly where it is low, with the proviso that VERY well mixed concentrations will not show this change).
So it doesn’t matter if the absorption bands overlap at some point: all that means is that the mean free path goes down a lot when they DO overlap for that overlapping energy domain. And that keeps the energy in that domain in the atmosphere much longer (doubling concentration can quadruple the residence time, for example).
IRL it’s not that simple, since doppler shifting can move an emitted photon from an absorbing particle into a band that isn’t an absorption band, or the energy can pass into another form whose product can be outside the absorption band too. Then again, that which is outside the absorption band can be seen (from the POV of the absorber) to be moved IN to the band it is resonating with and be captured.
But this sort of stuff is getting on toward degree level info to actually work out roughly, and PhD level to model with some degree of promise.
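A toy Monte Carlo version of the random-walk picture described above: a ‘photon’ is re-emitted in a random direction after each absorption, with a mean free path set by the absorber amount. Halving the mean free path (roughly what doubling an unsaturated absorber does) roughly quadruples the number of steps before escape, matching the scaling mentioned above. Purely illustrative; the slab depth and trial count are arbitrary.

```python
# Toy 1-D Monte Carlo: a 'photon' takes steps of one mean free path in a random
# up/down direction through a slab, reflecting at the bottom and escaping at the
# top. Halving the mean free path roughly quadruples the steps to escape.
import random

def steps_to_escape(slab_depth, mean_free_path, trials=2000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        height, steps = 0.0, 0
        while height < slab_depth:
            # step up or down by one mean free path; reflect at the surface
            height = abs(height + rng.choice((-1.0, 1.0)) * mean_free_path)
            steps += 1
        total += steps
    return total / trials

print(steps_to_escape(slab_depth=10.0, mean_free_path=1.0))   # baseline
print(steps_to_escape(slab_depth=10.0, mean_free_path=0.5))   # roughly 4x more steps
```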
Jim Eager says
Hank and Mark, thanks for bearing with me. Your replies have been quite helpful, even if my grasp of them has not been complete.
Yes, Mark, a better–and more recent–grounding in A-level physics would definitely help. It’s been a long time since I took basic level physics and chem as electives for non-science majors. That considered, I don’t think I’m doing that badly. I’ve been following the discussions at RC and Tamino’s and Eli’s and reading increasingly more technical references for a couple of years now. I’m now starting to drill into harder stuff, at least for me. I do understand the part about shorter mean free path and increased residence time in your reply @83, and I also realize that the overlap is not really the issue that some make it out to be; I just had the thought that it could perhaps be even less of an issue, hence my questions.
Time to finish reading the links that I’ve been directed to.
Hank Roberts says
Chuckle. Don’t miss the two earlier “saturated gassy argument” threads here, which many of us staggered through trying to grasp this stuff without using math.
I _think_ I sort of understand a bit more than before, in a vague and poetic sense. But I keep hoping someone who really does will straighten me out when I try to pass on my vague notion of how it works to others.
I’m aiming for fifth grade level comprehensible language, more or less, nothing more than that.
Jim Eager says
Yep, time to reread them, too. They might even make a bit more sense to me now than they did then. And trust me, I even slogged through your multi-thread back and forth with Rod over vibrational states and what temperature is and is not. Thanks to all of you for the education.
Uli says
Re:#68
there is an error in step 8, sorry. The correct version is:
“Step 8. Calculate the additional heating (for example by ‘radiative forcing’) H in TW to get 1K (long term) temperature response by dividing the area of Earth 510e12 m² by s.”
Bryan S says
Re:#87: Thank you Uli for taking interest in my comments. I have previously gone through this arithmetic, and the results plus some background reading and personal research with other types of transient systems have spurred the above comments. The background of this subject deals with the controversial Stephen Schwartz paper (2007), and his reply (2008) to the paper in rebuttal by Forster et al. (2008) in which both Gavin Schmidt and Mike Mann were co-authors. Schwartz used a simple energy balance model given by dH/dt = Q – E = C dTs/dt to relate the change in system heat content to the transient change in global surface temperature to try and constrain a system relaxation time and thus an estimate of the climate sensitivity given by S = t/C, where S is the climate sensitivity, t is the time constant, and C is the effective heat capacity. His analysis used a controversial method of autocorrelation assuming a linear trend plus a first order Markov process to estimate the time constant. From this analysis, he calculated a short time-constant (8.5 years) which was judged by the subsequent authors as being un-physical, due to the very slow uptake of heat by the deep ocean. His analysis is supposedly compromised by the various heat reservoirs in the climate system, each having their own time constant, plus a noisy temperature signal owing to natural variability.
As I have thought about this problem of various components of the system affecting the transient response, it turns out to be a similar type of problem to the practice of pressure-transient analysis in subsurface petroleum or groundwater reservoirs. When the pressure in a subsurface reservoir is perturbed, we can study the decay in the transient response with respect to time. As the ripple from the perturbation intersects various boundaries in the geological system (rocks with different permeability), it affects the transient response, and the shape of the transient curve when plotted along a graph. We see several characteristics which I think may have analogs in the climate system. If considering a pressure buildup test after a withdrawal of fluids, the transient response on a semi-log plot will generally have a form where the pressure moves toward equilibrium rapidly at first, then the change in slope decreases with time. The coupling of the components of differing permeability will determine the ultimate shape of the buildup response. It might be possible to visualize such a system in terms of several distinct time-constants since the graphical slope of the response may be defined by several straight lines of differing slopes.
If we consider a thin permeable rock formation (of low pore volume) that is bounded by much less permeable rock (of high pore volume), and bounded again by completely impermeable rock, the relaxation time will be very small for the permeable section, and very large for the almost impermeable section. Even though the bulk of the fluid might be held in the very impermeable section, it is not effectively coupled to the permeable section. The transient response will appear to approach an asymptote as the boundaries of the permeable section are reached. The pressure in the test might continue to build very slowly for very long periods of time if the bounding rock formation is very impermeable (as it slowly exchanges fluids with the permeable layer), but for the purposes of the transient response, the pressure will have appeared to almost completely equalize after only a much shorter time period.
The climate system is analogous in that it has components of small heat capacity that are very “permeable” to heating, which are weakly coupled to a component of very large heat capacity which is almost “impermeable”. I think that for all practical purposes, this almost impermeable component should appear as a strong boundary (change in slope) in the transient response, such that it can be almost ignored for the purposes of transient climate sensitivity. I also suspect that the Schwartz paper and rebuttals might lead to a better way to analyze the so-called time constant, by viewing the climate response as a continuous transient response of changing slopes.
This explanation is why I have convinced myself that the concept of “heating remaining in the pipeline” or “committed warming” is a seriously flawed concept. First, it does not necessarily convey any truly meaningful information to policy makers. If 85 percent of the response to a change in forcing is realized in 5-10 years, with the remaining portion realized over several thousand years (as Schwartz, 2008 suggests), the system is effectively at equilibrium after only 5-10 years. Technically, it is not, but practically it is. Now, if the radiative imbalance induced by the changing forcing allows heat to accumulate in the small permeable reservoirs much more rapidly than it can be leaked off to the deep impermeable reservoir, then the temperature will increase in the atmosphere (small reservoir) nearly as if the deep reservoir were not even present. The radiative imbalance due to the forcing change+feedbacks will be *almost* equilibrated quickly. The deep ocean is thus effectively decoupled from the climate system, and can be nearly trivial to the problem at hand, as I have explained in my lavoratory example above.
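For concreteness, here is a rough sketch of the autocorrelation-based time-constant estimate being debated, as I read it from the papers cited above: detrend the temperature series, take the lag-1 autocorrelation r, and set tau = -dt/ln(r). The data below are synthetic, generated with a known memory, so the example only shows the mechanics; whether a single tau estimated this way is meaningful for the real climate system is exactly what the rebuttals dispute.

```python
# Rough sketch of a single-time-constant estimate from an autocorrelation
# (my reading of the approach discussed above, applied to made-up data).
import numpy as np

def time_constant(temps, dt_years=1.0):
    """Estimate a single relaxation time (years) from a detrended series."""
    t = np.arange(len(temps))
    detrended = temps - np.polyval(np.polyfit(t, temps, 1), t)   # remove linear trend
    r1 = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]        # lag-1 autocorrelation
    return -dt_years / np.log(r1)

# Synthetic AR(1) 'temperature' series with a known 8-year memory, plus a trend
rng = np.random.default_rng(0)
true_tau, n = 8.0, 150
x = np.zeros(n)
for i in range(1, n):
    x[i] = np.exp(-1.0 / true_tau) * x[i - 1] + rng.normal(0, 0.1)
series = x + 0.01 * np.arange(n)
print(time_constant(series))    # scatters around the true 8-year value
```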
Hank Roberts says
Bryan, your “lavoratory” example isn’t credible.
You claim you’ve convinced yourself by logic and analogy, without doing the math. Faith not arithmetic.
Look at the ocean turnover numbers. Look how they change over time.
Ray Ladbury says
Bryan S., The problem with your model is that you are assuming only 2 timescales – one very short and one very long that can be neglected. This is, granted, an improvement on Schwartz’s 1 timescale, but the same criticism applies. I suggest going back and rereading:
https://www.realclimate.org/index.php/archives/2007/09/climate-insensitivity/
richard schumacher says
Has this possibility been addressed? Water is densest at 4°C. It may be that significant heat has been going into the warming of water which was at 3°C or less. If so then that water has been getting denser, and thus actually tending to cause sea level to fall, in opposition to the sea level rise caused by warming of >4°C waters and the melting of grounded ice. If that has been happening, then if the ocean substantially runs out of sub-4°C water the rate of sea level rise will increase significantly.
[Response: 4 C is only for fresh water. Salt water (greater than 25 psu or so) is densest at the freezing point (roughly -0.054*S, i.e. -1.9 deg C for S=35). – gavin]
richard schumacher says
Serves me right for being ignorant and lazy :_> Thanks!
Bryan S says
Say it ain’t so, Hank,
The lavoratory example is indeed credible for the intended purposes: to motivate one to think! My epistemology is indeed mathematical. Math is a pure form of logical thinking. Merely saying it ain’t so has no basis in either math or logic.
Ray, Please read the 2008 replies and responses (actual peer-reviewed papers) to the original paper for more background. The archive from 2007 is dated. I think everyone now agrees (Schwartz included) that there are several timescales that characterize the system. It is agreed that the components with small mass have a short time-constant, and the 10 ton gorilla is the deep ocean, with a very long time-constant. The point is that heat will accumulate immediately in the coupled smaller reservoirs at a rate equal to the net radiative imbalance minus the rate of heat uptake by the deep ocean. Since the diffusivity to the deep ocean is low and the mass of all other components is low, the atmosphere will realize increased temperature almost immediately. The net radiative imbalance (averaged over a number of years) will soon be diminished to the rate of heat uptake by the deep ocean.
Jim Eaton says
Re: #88 and #93: “lavoratory”
Urban Dictionary
1. lavoratory
A place where scientific research is conducted which doubles as a toilet.
“Dr Frank Fischenhoffer’s world-class lavoratory in California was one of the first in the world to create fluorescent urine after his initial discovery pissing in one of the cubicles.”
http://www.urbandictionary.com/define.php?term=lavoratory
Mark says
Bryan S, it surely ain’t spellin.
And it’s no good if you can convince yourself you’re right. There’s a bloke I know convinced himself he was napoleon.
For some reason, he couldn’t convince anyone else…
JCH says
Bryan S, I’ve found Schwartz 2008 and some related stuff, and reviewed some of your earlier posts and the responses. I’m just a lay person. Is it possible for you to better explain what you mean in your analogy when you say additional GHGs raise the sides of the basin?
Hank Roberts says
> Math is a pure form of logical thinking.
Show your work, please.
There is no single deep ocean. There are ocean basins; look at the recent studies for changes in them.
http://www.google.com/search?q=deep+ocean+basin+warming
Eschew the “21stcenturysciencetech” hit; the others on the first page are good sources to research.
Go figure.
Hank Roberts says
PS, Erik replied to your earlier idea on ocean heat storage here: https://www.realclimate.org/index.php/archives/2006/08/antarctica-snowfall/#comment-18305
Hank Roberts says
JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 113, D15103, doi:10.1029/2007JD009473, 2008
http://www.iac.ethz.ch/people/knuttir/papers/knutti08jgr.pdf (full text)
“… For the climate change problem, in order to achieve stabilization of global temperature, the relevant response timescales are those of the deep ocean, and the short timescales found by SES are therefore irrelevant to the problem of estimating climate sensitivity. The argument of abrupt temperature shifts in glacial periods is misleading, because these were local or regional warming events caused by a change in the ocean thermohaline circulation and sea ice, with little or no signal in global temperature.”
…
“… In his reply, Schwartz [2008] proposes revised methods to estimate the response time scale of the system (his equations (5) and (6)), but based on essentially the same arguments. We applied that method to all GCM control simulations and find that correlation is still insignificant, and that the revised method has no more skill in predicting climate sensitivity than the original one, even in the optimal situation ….”
Arthur Smith says
Thinking of another question for future FAQ’s…
I notice the answers in this edition seemed to carefully avoid much mention of the structure of the atmosphere – troposphere, stratosphere, tropopause, etc. Clearly that structure is not a “boundary condition”, but an outcome of modeling. Can you make any broad statements about the causes of atmospheric structure and its relation to the various forcings? For example, we know tropopause height increases with GHG forcing, but what else can we say generally about the origins and dependencies of that structure and how it impacts the other components of the system?
[Response: Well that’s not really a climate modelling issue per se. It’s more a climatology question. The tropopause exists because of the ozone in the stratosphere, which, due to its local heating effects, is a barrier against convection. Thus convection from near the surface (predicated on the atmosphere being mostly transparent to solar radiation) can only go so far up. Variations to the tropopause will then occur due to changes in stratification (i.e. an ozone change, or volcanic aerosols) or to the structure of the troposphere (temperatures, water vapour etc.). You can certainly explore these dependencies using models (for instance, try running with no ozone at all!). – gavin]