132 Responses to "The Physics of Climate Modelling"
Catherine Gautier says
It seems to me a little hasty to state that the ITCZ is an emergent phenomenon. What about the ITCZ is emergent? Is it the organization of small convective elements over the large scale? If so, how do you quantify that organization in climate models? And is it really emergent beyond the self-organization of systems at the mesoscale? Or are you referring to the dynamics (convergence and confluence of the wind) that result from the organization of the convection at the larger scale?
[Response: The point is to distinguish physics that is coded for in a GCM – such as the dependence of evaporation on wind-speed, and large scale organised features that emerge from the interaction of all the physics. There is no subroutine for ‘doing’ the ITCZ in other words. -gavin]
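(As an aside for readers wondering what "coded-for physics" looks like in practice, here is a minimal sketch of a bulk-aerodynamic evaporation formula of the kind gavin mentions. The transfer coefficient and the input values are illustrative assumptions, not the scheme from any particular GCM.)

```python
# Schematic bulk-aerodynamic evaporation flux: E = rho * C_E * U * (q_sat(SST) - q_air).
# The transfer coefficient C_E and the inputs below are illustrative assumptions,
# not values taken from any particular model.
import math

def saturation_specific_humidity(T_kelvin, pressure_pa=101325.0):
    """Approximate saturation specific humidity (kg/kg) over water."""
    T_c = T_kelvin - 273.15
    e_sat = 611.2 * math.exp(17.67 * T_c / (T_c + 243.5))  # saturation vapour pressure, Pa
    return 0.622 * e_sat / (pressure_pa - 0.378 * e_sat)

def evaporation_flux(wind_speed, sst, q_air, rho_air=1.2, c_e=1.3e-3):
    """Moisture flux (kg m-2 s-1); note the explicit dependence on wind speed."""
    return rho_air * c_e * wind_speed * (saturation_specific_humidity(sst) - q_air)

# Example: 8 m/s wind over a 300 K sea surface, air at 80% of saturation humidity
q_s = saturation_specific_humidity(300.0)
print(evaporation_flux(8.0, 300.0, 0.8 * q_s))  # ~a few 1e-5 kg m-2 s-1
```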
Steve Sadlov says
RE: Climate is instead a boundary value problem
I would make a distinction here. Boundary value problems of this type that deal with heat flow in solids have reasonably well behaved boundaries. Take for example the classic bar of uniform solid with a specific isotherm presented to one end of the bar. In climate, the mere task of determining just what should be the boundary value for a particular t0 parameter in question is perhaps not quite so straightforward. How stable do we assume things to be prior to t0? How uniform is the value presented at the boundary? Etc. Is there not a continuum between climate modelling and weather prediction, in point of fact?
neutrino says
At first blush, it would seem that the cumulus parameterization scheme(CPS) would have to provide fairly accurate moistening and warming rates in order that a realistic ITCZ show up. On the other hand, maybe the climate models are relatively insensitive to the CPS used? If so, ITCZ appearance in climate models is not so impressive.
neutrino
[Response: Fair point – without some coding for moist convection, an ITCZ is unlikely to emerge. But all GCMs since the 1970s have ‘had’ an ITCZ even though the parameterisations range from the very basic to the very complex – so the existence of the ITCZ is robust. Of course, there are specific aspects of tropical convection that aren’t, and that is where the evaluation of any specific scheme is focussed. -gavin]
L. David Cooke says
Dr. Schmidt;
I did not register to review the GCM that you had provided a link to in your article. I did read the article. Part of it prompted me to go in search of an earlier work that complemented Dr. Hansen’s work with a model. I went in search of a study that was published about two years ago regarding the BSRN at ETHZ (Swiss Technical Institute, Zurich), regarding radiative forcing, and ran across a reference to the ECHAM5 GCM:
http://www.mpimet.mpg.de/fileadmin/models/jclimate-mpim-2006/wild_roeckner_jclim_2006.pdf
Based on the descriptive update, I am impressed with the level of effort regarding the radiative forcing adjustment in the shortwave band; I think the increased resolution may have benefited the tracking accuracy.
The point is that I do not know whether enough of the radiative downwelling energy, through the entire spectrum, is represented yet. The ECHAM5 GCM does appear to have a fairly good match-up in the shortwave bands, though.
I still think there is work to do with the adiabatic change at altitude, as the ground-based Lidar in Colorado seems to indicate there are significant changes in the altitude, temperature and water vapor content of the tropopause and stratosphere that do not appear to be addressed. The recent paper by Dr. P. Chuang at UCSC:
http://www.ucsc.edu/news_events/press_releases/text.asp?pid=983
appears to represent a major change in the latent heat release at altitude. I suspect this may be related more to the saturation level than to the aerosol “seed” size or concentration; apparently the jury is still out in this regard.
Of greater interest is the apparent evidence in relation to the radiative shade and its effect on marine biological health. If the work of Dr. M. Behrenfeld at OSU is correct, there is the possibility that though radiative flux does not indicate a change, warming is having a negative effect:
http://www.livescience.com/environment/061206_phytoplankton_warming.html
By the same token it appears that just shading the coral reduces the loss:
Australia_Turns_To_Sunshades_Water_Spray_To_Save_Great_Barrier_Reef
This takes me back to the 2004 article regarding the Swiss resort shielding the snow base around snow-molding equipment with a simple tarp, not necessarily a thermal blanket, and preserving the snow over the summer. This brings me back to the question regarding the drivers for the possible reduction of precipitation. Why is there less precipitation when the apparent vapor content is higher in warm air and the condensation temperature in the troposphere is ‘cooler’ (providing I understand the 2006 spring study saying that GHGs reduced the long-wave energy reaching the 250 mb range)?
The most recent work I have been attempting as a layman is trying to break down the major oscillation patterns to see if the drivers can be described. I believe that if we can get to the point where we can decipher the drivers of phenomena such as the NAO or ENSO, the long-term models might improve dramatically.
I think the piece that Dr. Benestad posted earlier this week, “Mid-latitude Storms”, is beginning to get close to describing the drivers. Trying to map cause and effect through the interrelationships between large-scale phenomena does not seem to work very well; it almost seems like using bamboo cane fishing poles as chopsticks. Is there any work on, or signals of, possible drivers for these large-scale events in any of the models or recent studies you have seen?
Dave Cooke
Julian Flood says
I, too, find the statement about current models being non-chaotic surprising. I once saw a suggestion that lemming populations were following an attractor and painstakingly created a cellular automaton to look at population behaviours — no attractors that I could see in two or three dimensions, but the little bugs did strange and unexpected things. Perhaps, as the climate models become better, we are in for surprises.
The article (timely and extremely useful) mentions that the microphysics of clouds and aerosols needs research — the latter is an area where statements such as ‘this work is difficult to carry out in the field’ occur frequently, so let’s not hold our breath.
Might I ask about a couple of other points?
Do the models make allowance for variation (other than, obviously, temperature) in the condition of the ocean surface, such as variation in evaporation rates (not temperature related), variation in aerosol production (not related to windspeed), variation in CO2 incorporation (not related to either)?
The idea that mankind’s small percentage of the carbon cycle could be the cause of the Mauna Loa graph seems, to this layman, to be unlikely, the attribution owing more to man’s overweening vanity than scientific measurement, like claiming the Earth as the centre of all things. It seems more likely that we are disrupting one of the feedback mechanisms — my guess is that surfactant and oil sheen pollution of the ocean’s surface has led to reduced aerosol production, less biological CO2 pulldown and reduced mechanical incorporation of atmospheric gases. These effects will be subtle in highly productive waters — natural pollutants may mask the effect — and not present in shallow or coastal waters, but in the blue desert areas they will be highly significant and should be measurable. It might be possible, by piggybacking on Latham, Salter et al’s calculations (I’m afraid that I do not have access to a public domain version of their proposal for cooling using aerosol production vessels but a subscription, if required, is well worth it for the illustration alone) to predict how much albedo reduction we are suffering because of lower oceanic strato-cu cover.
I like numbers. I hope someone is out there with sample bottles.
[Response: Maybe this could have been clearer, but it is the climate within the models that is non-chaotic, not the model itself. All individual solutions in the models are chaotic in the ‘sensitivity to initial conditions’ sense, but their statistical properties are stable and are not sensitive to the initial conditions (though as I allude to in the article, I don’t know whether that will remain true as we add in more feedbacks).
As to the mechanisms you mention, evaporation generally depends on humidity, temperature, wind speed, and atmospheric stability. In models with interactive aerosols, ocean sources for sea salt are generally wind speed dependent, and for DMS or MSA they are now starting to put in biological activity feedbacks – but these are very uncertain as yet. There may be important secondary organic aerosol precursors that are climate dependent – but incorporating these effects is very much a hot research issue at the moment. -gavin]
Tom Pollard says
I don’t understand in what sense climate models solve “boundary-value problems.” They’re just dynamical models, which means they’re initial-value problems. This is just a matter of what sort of equations you’re solving. The fact that you run lots of trajectories to collect statistics doesn’t change the fact that those trajectories are solutions to an initial-value problem.
An unrelated quibble is that I have no idea what you mean when you say “Emergent qualities make climate modeling fundamentally different from numerically solving tricky equations.” Emergent behavior is something that may or may not arise in your dynamical model, but it doesn’t change the fact that your job as a modeller is still to properly define and accurately solve numerically tricky equations. I also don’t see the distinction you’re trying to draw between climate modelling and weather forecasting, here.
[Response: The distinction occurs precisely because I am interested in the statistics of the problem, not the individual trajectories. For instance, take storm tracks in the North Atlantic – I can be interested in the path of an individual storm (an initial value ‘weather’ problem), or I can be interested in the statistics of all such storms. The second does not depend on the initial values – I can perturb them to my heart’s content – yet the statistics of the storms once I’ve gone out long enough will converge to a ‘climatology’ of storms. This is true for even a perfect model (should any exist). Now, it is certainly my job to numerically solve tricky equations, but the point I was trying to make was that the emergent properties of dynamical systems make those solutions much less a priori predictable than simply a ‘tricky numerical problem’. – gavin]
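(A toy illustration of this point, using the Lorenz-63 system as a stand-in for a climate model rather than anything from the original post: individual trajectories started from nearly identical states diverge completely, yet their long-run statistics converge to the same ‘climatology’.)

```python
# Lorenz-63 toy model: trajectories are chaotic (an initial-value problem),
# but long-run statistics converge regardless of the starting point
# (the 'climate' of the model). Standard textbook parameter values.
import numpy as np

def lorenz_trajectory(x0, n_steps=200_000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0/3.0):
    state = np.array(x0, dtype=float)
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        state = state + dt * np.array([dx, dy, dz])  # simple Euler step (crude but adequate here)
        out[i] = state
    return out

a = lorenz_trajectory([1.0, 1.0, 1.0])
b = lorenz_trajectory([1.0, 1.0, 1.000001])  # tiny perturbation of the initial condition

print("final states differ:", a[-1], b[-1])    # the 'weather' diverges
print("mean z, run a:", a[50_000:, 2].mean())  # the 'climate' statistics...
print("mean z, run b:", b[50_000:, 2].mean())  # ...are nearly identical
```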
CobblyWorlds says
Once again thanks for that Gavin.
With regard to emergent behaviour, it was just one of the issues that changed my mind and defeated my scepticism about the models.
Some time back in my ‘studies’ I realised that the models are able to create things like the ITCZ, or recreate the total column water vapour response to Pinatubo. This was despite these being emergent phenomena of underlying modelled processes. I just couldn’t get how, if the models were wrong, they could produce virtual analogues of real phenomena without being coded to explicitly model those phenomena. The models aren’t perfect, but to dismiss them because of their flaws really is to “throw the baby out with the bathwater”.
Re #7 Are you running the Climate Prediction model? If so, does it have an ITCZ? I ask this because when I ran it, I could not see any ITCZ :-(
Peer says
Figure 7.5 and fig 7.9 at http://www.ux1.eiu.edu/~cfjps/1400/circulation.html make the ITCZ look like the simplest and most predictable weather phenomenon on Earth, until one realizes that the real-world ITCZ line (fig 7.9) deviates from the line of maximum downpour or convection. So my question is whether the ITCZ always behaves as simply as this, or whether it can sometimes split into two when passing obstacles like cold SSTs (East Pacific/La Niña) or dry areas like the Sahara desert.
Bryan Sralla says
Gavin: I realize that the boundary values vs initial values discussion is probably getting tired to you, but please indulge me.
It is my understanding (from the literature) that climate prediction takes two forms. Climate predictions of the first kind certainly are initial-value problems. Predictions of the second kind are instead solutions controlled by boundary values. I am unclear because some of the papers I have read reporting on AOGCM experiments of the first kind (with anthropogenic GHGs kept at pre-industrial levels) have shown very poor skill at predicting regional climates (high sensitivity to perturbations of initial conditions), and for the most part a saturation or loss of skill at short lead times.
Since things like ENSO prediction (among others) are problems of the first kind, and these are superimposed on predictions of the second kind (boundary values) when adding external forcings like GHG increases, is it not true that relevant climate predictions really can be thought of as a combination of both? In my way of thinking, it still seems like one could consider these initial-value problems.
Since the statistical properties of predictions of the second kind are stable, does this not imply that we have smoothed out some of the climate metrics that are relevant to where people actually live?
My mathematics background in chaos, attractors, and differential equations is limited to about one or two undergrad classes a long time ago, so please clarify my thinking on this.
[Response: Practically you can distinguish between the two types just by seeing whether the initial conditions matter. In a ‘perfect’ system – i.e. using a model to predict what another run of the same model produced – you can show that there is useful information in ocean initial conditions for about a decade – mostly based on the North Atlantic. However, in the presence of strong forcing (rising greenhouse gases, volcanoes etc.), the predictions become much less dependent on the start. So for the 20th century trends – where we have essentially no information about the ocean initial conditions – it is easy to see that the global trends even over decadal to multi-decadal time scales are robust to the starting fields. Skill in predicting regional changes however, even in a perfect set up, is not very high – mainly because of the amount of unforced ‘weather noise’ which neither kind of prediction can capture. – gavin]
CobblyWorlds says
Re #8,
No Alastair. The CC.net information page was, as implied, an addition for other interested amateur readers.
I’ve not run any climate models, merely read papers. Were I to run a climate model, I don’t know that I’d have the skills to be sure whether it had correctly produced such a thing as the ITCZ.
Re “The idea that mankind’s small percentage of the carbon cycle could be the cause of the Mauna Loa graph seems, to this layman, to be unlikely, the attribution owing more to man’s overwheening vanity than scientific measurement, like claiming the Earth as the centre of all things.”
From 1750 or so to 2005 the ambient CO2 concentration rose from about 280 parts per million by volume to 380 ppmv. We know it’s from fossil-fuel burning because we can measure the fraction of C14, and that has been declining. There’s virtually no CO2 left in old (“fossil”) fuels simply because their age is many times greater than the C14 half-life. This is the smoking gun that shows the CO2 is anthropogenic and not from some natural process. All the CO2 in the biosphere has roughly the same mean level of C14.
James says
Also, the amounts of fossil fuels used in the last century are quite well known – you can look up the numbers in various economic & statistical sources – and from that you can easily compute how much CO2 was produced. Add that up, compare it to the observed increase, and you’ll find the numbers match. (Actually the amount of CO2 from fossil fuels is somewhat greater than the observed atmospheric increase, because some gets dissolved in the ocean and so on.)
Thinking of fossil fuel CO2 as “man’s small percentage” is the wrong way to look at the problem. The key word there is “cycle”: all the rest (with small exceptions like volcanic sources and geological sequestration) is in a cycle that keeps going around and around. The fossil fuel CO2 is an addition. That addition may be comparatively small in any one year, but it doesn’t go away: it keeps adding up.
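(To make this bookkeeping concrete, here is a back-of-envelope sketch using round numbers quoted elsewhere in this thread, plus the commonly used conversion of roughly 2.13 GtC per ppm of CO2; the figures are illustrative assumptions, not an authoritative carbon budget.)

```python
# Rough carbon bookkeeping: cumulative fossil emissions vs. observed atmospheric rise.
# All numbers are round, illustrative values taken from this thread, not an official budget.
GTC_PER_PPM = 2.13           # approx. GtC of carbon per ppm of atmospheric CO2

cumulative_fossil_gtc = 305  # cumulative fossil-fuel carbon, roughly (GtC, CDIAC-style total)
co2_rise_ppm = 380 - 280     # pre-industrial ~280 ppm to ~380 ppm

rise_as_gtc = co2_rise_ppm * GTC_PER_PPM
airborne_fraction = rise_as_gtc / cumulative_fossil_gtc

print(f"observed rise ~{rise_as_gtc:.0f} GtC")                       # ~213 GtC
print(f"emitted       ~{cumulative_fossil_gtc} GtC (fossil only)")
print(f"fraction of fossil emissions still airborne ~{airborne_fraction:.2f}")
# Adding land-use emissions (very roughly another 150 GtC, an assumed round number)
# brings the airborne fraction down to about one half, the figure quoted in this thread.
```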
David Price says
One question. Have models taken into account the increase in the number of thunderstorms that will result from warming? I read once that thunderstorms keep the earth several degrees cooler than it would otherwise be. If more storms happen as it gets hotter, will they have a moderating impact?
[Response:Equilibrium requires a balance between downward and upward transport of energy within the atmosphere. Thunderstorms should be thought of as simply one of many modes of behavior by which the atmosphere attempts to achieve this balance, in this case largely through the vertical transport of latent heat associated with condensation of water vapor within storm clouds. Obviously, individual thunderstorms cannot be represented at the coarse spatial scales resolved by GCMs. However, their principal role in terms of energy balance, as described above, is represented in models through the parameterization of convective instability in the atmosphere. -Mike]
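(A very rough illustration of what such a parameterization does in spirit: the sketch below is a textbook-style convective adjustment, added purely for illustration and not the scheme in any particular GCM. Where the modelled lapse rate is unstable, the column is relaxed back toward a critical profile while its mean temperature is conserved.)

```python
# Toy convective adjustment: relax an unstable column back to a critical lapse rate
# while conserving the column-mean temperature (a stand-in for conserving energy).
# Purely schematic; real GCM schemes handle moisture, mass weighting, entrainment, etc.
import numpy as np

def convective_adjustment(temps, dz=1000.0, critical_lapse=0.0065, n_iter=200):
    """temps: temperatures (K) on equally spaced levels, index 0 = surface."""
    t = np.array(temps, dtype=float)
    for _ in range(n_iter):
        for k in range(len(t) - 1):
            lapse = (t[k] - t[k + 1]) / dz
            if lapse > critical_lapse:            # this pair of levels is unstable
                excess = (lapse - critical_lapse) * dz
                t[k] -= 0.5 * excess              # cool the lower level...
                t[k + 1] += 0.5 * excess          # ...warm the upper one (pair mean unchanged)
    return t

unstable = [300.0, 288.0, 276.0, 264.0]           # 12 K/km: super-critical lapse rate
adjusted = convective_adjustment(unstable)
print(adjusted)                                   # lapse rate relaxed to ~6.5 K/km
print(sum(unstable) / 4, adjusted.mean())         # column-mean temperature is conserved
```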
mzed says
#12, think you meant “There’s virtually no *C14* left in old (“fossil”) fuels” ;)
Re 5. Thank you for your reference — I’ve not replied to thank you until now because I’ve been thinking.
Your reference is dated 1996 and as such the author had no access to the work (was it by Morel et al? I’ve looked at so many abstracts today my gyros are toppled) about C4 and beta-carboxylation pathways in phytoplankton. Under stress some phytos change from C3 metabolism and the isotopic fractions they sequester change — the heavier molecules are not discriminated against so strongly. So, my ocean pollution hypothesis is up to this objection: reduced upwelling and lessened entrainment of the surface by wind results in depleted zinc and cadmium levels. Stressed, the plankton switches to C4/beta carb and begins to rain out C13-enriched detritus, depleting (relatively) the upper ocean of C13 in an isotopic refinement of the biological pump. The ratio of C12 rises. The normal process of mixing ensures that the air and water concentrations of the new isotopically balanced CO2 mix and match. The atmosphere exhibits depleted C12. Surface pollution is worse in the Northern hemisphere, and continued healthy upwelling in the Southern hemisphere — helped by being less complicated by constraints of narrow seas — means that the phytos there are less stressed and can continue their usual C3 way of life.
“How then should an oceanic CO2 source cause a simultaneous drop of 13C in both the atmosphere and ocean?”
Not a source. A relative sink for C13.
ref 13: ‘the numbers match’. Have I got this wrong? I’ve seen references that half of the anthropogenic carbon is sequestered. Half is not what I call a very good match. However, I’d be grateful for references which tie this down a little more strongly as I’m sure the science does not depend on a simple post hoc argument and I’d like reassurance.
ref 12: The smoking gun indeed. If a plant switches to C4 metabolism — a technique which apparently is less discriminatory between C12 and C13, would it also sequester unexpected amounts of C14? Presumably the differences are purely mechanical which would indicate this was so. Does anyone know?
If C14 is being used up unexpectedly, what does this do to the smoking gun? Maybe I need to think some more.
There should be some testable predictions from this. C14 levels in deep sediments should show increases from around 1850 as the stressed oceans began to react to the outward spewing of the nascent petrochemical industry. Plankton samples, if dead — maybe frozen — should show a lack of zinc and cadmium indicating that they have changed to different metabolic systems. Is the data already out there?
I leave other tests to the intelligent reader…
JF
Hank Roberts says
I’ve been wondering where to look for the surprises. One example, much belabored, is methane hydrates.
I see that only a few years ago geologists were still debating the origin of pingos (not the Linux penguin). This is fossil carbon too, presumably depleted of C14?
Is there any clear idea how and when the stuff is formed? I can imagine it could be either a cyclical process associated with glaciation — lower temperature at a given depth below sea level? or contrariwise, when the planet is warmer, CO2 higher, and sea level much higher — because pressure is greater at that same location.
I wonder if the models state an assumption about this stuff — either
— it’s been locked down by past extremes so is stable, or
— it’s at an equilibrium state, so released as warming proceeds (in local spikes as brief temperature extremes occur in the location).
I’d think that some would have bubbled out over the last geologically brief warmth at end of last ice age — but that would have been ‘almost done happening’ as the planet slowly cooled back toward the next ice age. (The big old ones look like hills, tree-covered.)
I wonder if the Navy has mapped the polar sea floor well enough to count undersea pingos — presumably secret if so, as detailed maps, but if so perhaps summary data on size and location would be interesting, to try to quantify what’s been happening.
As of 2003 the origin of pingo structures was apparently still being debated — lots of good pictures for example here.
Now it appears clear — as they’re being observed below sea level — that they do form as methane boils off.
Is there enough info developing to include this in models?
Jason Goodell says
After reading this month’s quick study on climate modeling in Physics Today, I’ve been unable to push this topic from my mind, especially with it being nearly 70 F here in New England yesterday. As a physicist with only a casual understanding of the issues surrounding global climate change, I’m drawn to the simple idea that humanity’s demand for energy is the central issue. As the global population increases and the spread of technology increases, the demand for energy generation will also increase. For our great-grandchildren, other factors like thermal pollution and the effects of large-scale solar and wind farms may be the climate issue of their era; CO2 emissions are only the beginning of what will be a constant need to consider our actions and their impact on Earth’s climate.
Sorry, typo in 17 — in spite of lots of thinking. It should read, of course ‘the atmosphere exhibits depleted C13’ not C12. Sorry about that.
It has me wondering about the mismatch of CO2 rising after temperature in the historical records — do the dating techniques depend on C14 levels and would C4 reduction of C14 pull the dates back into line?
Julian, Gavin’s reference was to a FAQ produced by Jan Schloerer of Ulm University, and hosted on the web by Dr Robert Grumbine of NOAA http://www.radix.net/~bobg/. Although the article is now over ten years old, it directly answers your question, one that has been asked by global warming skeptics for all those years.
Jan wrote “From its preindustrial level of about 280 ppmv (parts per million by volume) around the year 1800, atmospheric carbon dioxide rose to 315 ppmv in 1958 and to about 358 ppmv in 1994”. If you go to the Mauna Loa site you can see that it has continued to rise since then to about 375 ppmv in 2004. http://cdiac.ornl.gov/trends/co2/graphics/mlo145e_thrudc04.pdf
There is no doubt that the increase is due to fossil fuel burning. You only have to think about 100 million Americans taking their cars onto the road and burning on average one gallon of fuel each day. That is roughly a million tons of CO2 they are producing per day. Globally, we have put a trillion tons (Pt) of CO2 into the atmosphere. See http://cdiac.ornl.gov/trends/emis/tre_glob.htm 305 Gt of C equals 305 * 44 / 12 ≈ 1.12 Pt. Well that’s what 6.5 billion little unassuming people can do if you give them enough time :-(
Hank Roberts says
C12? C13? C14? All three contribute:
“… The 14C/12C and 13C/12C ratios ….” SCIENCE VOL. 279 20 FEBRUARY 1998 1187
Atmospheric Radiocarbon Calibration to 45,000 yr B.P.: Late Glacial Fluctuations and Cosmogenic Isotope Production
Charles Muller says
About parametrization
In your text (Gavin), you explain that equation-approximated physics (e.g. radiative transfer) and empirically-based physics (e.g. evaporation) need parametrization. That’s not really clear for a non-expert: do you mean, for example, that different coefficients of the equations (for these phenomena) are to be regularly corrected from real-world measurements and retro-validation?
About emergence
You say on one hand that large scale behaviors of climate are robust, but on the other hand that they emerge from small-scale and more chaotic features. Does it mean that chaotic small-scale behaviors of climate are, after all, without real concern for the accuracy of 2100 projection ?
Simple one-celled organisms in the soil are doing better than that — they push out 60 GtC/yr. I wonder why we assume that the C accumulating in the atmosphere is actually ‘our’ C. Why is there this tiny fraction of the overall flux which is not consumed? The smoking gun did indicate that it’s ex fossil fuel carbon because it is depleted in C14 as one would expect. If C4 metabolism is disturbing the expected C14 levels then the bets are off.
The deep sea reservoir of C is 380,000 Gt. Thinking non-anthropocentrically, why does our little flow go straight into the atmosphere? Does it? Obviously not, because some of it gets lost. Our flow is dwarfed by natural processes and we need to find a way of pointing at our emissions and proving that the trouble is what we’re up to. Otherwise we’re like a little boy peeing into a lake and taking the blame when the dam bursts.
There’s a sawtooth daily pattern to the Mauna Loa graph. Is there any difference in the gas make up between day and night? Does the isotopic makeup vary?
JF
Bryan Sralla says
Re: #10 Gavin, thank you for responding.
Since a climate prediction problem of the first kind (controlled by initial values) will involve numerous non-linear feedbacks, which ultimately will affect the degree of free variation in the climate trajectories, it seems well founded (to me) that many classes of climate prediction problems are controlled by initial values. Do you agree with my statement? Also, greater numbers of non-linear feedbacks added to the AOGCMs will most likely increase the degrees of free variation in modeling experiments. Right? It then seems feasible that improving the models (making them better resemble the real thing) may make skillful multi-decadal predictions more controlled by initial values, not less (since the amount of noise in the system is greater relative to the forced variations). Since a boundary-values problem is superimposed over the initial-values problem in real climate prediction, it seems necessary to obtain solutions which satisfy both types of equations (for many climate prediction problems). I think for some casual observers it might seem non-intuitive that improving the models adds more uncertainty to predictions. The more I read and study the problem, the more I am persuaded you modelers have a tough problem on your hands. Good luck.
yartrebo says
Re #24:
Those single-celled organisms are releasing carbon that was only recently fixed. Even in the short term (1 year period) it is within 1% of being in equilibrium with carbon fixed by life.
The C4 pathway is only used by grasses. Other plants and all bacteria and protozoa use the older C3 pathway. An organism cannot just choose which pathway it wants to use. Even if land-use patterns have changed the ratio of C4 to C3 plants (probably increased it, because so much crop and livestock land is planted with grasses), the effect will be quite minor as the carbon is released back into the atmosphere in short order.
As far as the deep sea reservoir of C goes, don’t forget that it takes thousands of years for the deep ocean to turn over and that while it is a reservoir, it was more or less in equilibrium at 285 ppm CO2 and thus that carbon wasn’t going anywhere. With current CO2 levels the deep ocean is actually a net sink of CO2, absorbing a fairly significant share of our emissions.
With regards to the sawtooth daily pattern at Mauna Loa, it’s due to photosynthesis during the day and respiration during the night. While there’s probably a difference in the isotopic makeup, it’s probably pretty small?
“Thinking non-anthropocentrically, why does our little flow go straight into the atmosphere? Does it? Obviously not, because some of it gets lost.”
Quite easy – just look at a smokestack or exhaust pipe. The CO2 literally goes straight into the atmosphere. As far as what gets ‘lost’, that’s mostly the ocean absorbing about half of it. It isn’t gone for good and if CO2 levels drop, the process will reverse, releasing the CO2 back to the atmosphere.
James says
Re #24: “Simple one-celled organisms in the soil are doing better than that — they push out 60 GtC/yr.”
Rather misleading language: they don’t “push out” that much CO2, except in the sense that you “push out” a certain amount of CO2 every time you exhale. A better way to put it would be to say that they _cycle through_ that much carbon: it comes in, it goes out, but only if there is a change in the mass of soil organisms (and their corpses, etc) does all that activity produce a net change in CO2.
This does bring up a question I’ve sometimes wondered about, but have never seen numbers on: suppose we could ignore the political obstacles, and make a serious attempt at re-vegetating areas – the western US, North Africa, the Mideast, Australia – that have been desertified by human activity. How much CO2 could we expect that to sequester?
Hank Roberts says
Julian, here are papers addressing the questions you posed above, just a couple from a large and interesting site.
You’d want to check the citation index forward because these are relatively old papers, not the current best info, but a place to start.
There has been a lot of research addressing the questions you mention above, and I am sure it isn’t news to the climate scientists. It’s an example of the details they have to consider.
“… One of the most striking results that 13C data (and now O2/N2 ratio data) unveiled is the existence of a very large repository of anthropogenic CO2 in Northern Hemisphere ecosystems during the early 1990’s when the atmospheric CO2 growth rate had diminished to only one third of its normal value. Still, the long term trend and interannual fluctuations of 13C at one given monitoring station is at the limit of detection of mass spectrometers, on the order of 0.01 per mil for 13C in CO2. Thus, even a very slight bias in the isotopic data would translate into different inferred magnitudes of the global land and ocean uptake of anthropogenic CO2….”
Biogenic aerosol formation in the boreal forest (BIOFOR)
Anomalous (or Not Strictly Mass Dependent) Isotope Variations Observed in Important Atmospheric Trace Gases
“… Variations in stable isotope ratios in the environment have generally been well understood and put to good use. However, the atmosphere appears to be the scene for a host of isotope effects that we do not yet understand. The prime example is ozone, whose anomalous enrichment has repeatedly defied correct interpretation.
“….Atmospheric studies also benefit from stable isotope variations. An illustration is the ongoing decline of the 13C/12C ratio of atmospheric carbon dioxide, largely in consequence of the increasing fraction of fossil fuel-derived carbon dioxide. This isotope effect is thus directly related to the isotopic composition of an important source of the gas. Fossil fuels have about 2% less 13C than atmospheric carbon dioxide. This in itself is obviously not a source effect (ambient CO2 is the carbon source for plants), but rather an isotope fractionation effect of photosynthesis. Plants favor 12CO2 slightly over 13CO2, so the assimilated carbon is depleted in 13C relative to the atmosphere. In isotope applications of interest to atmospheric chemistry, source signatures and fractionation effects in chemical reactions are both relevant….”
“… It has taken years to unravel the secrets of the anomalous isotope fractionation of ozone, perhaps the most extensively studied reactive atmospheric trace gas. In regard to molecular symmetry, 17O and 18O in an ozone molecule are identical (they are simply different from the abundant 16O isotope)…. However, theories based on symmetry have been challenged by the latest experimental data.
“… After ozone, it was found that carbon dioxide in the stratosphere exhibits MIF [Thiemens et al., 1995, Gamo et al., 1989]. A chemical mechanism was proposed by Yung et al. [1991], who showed that the observed 17O excess in CO2 could be explained by transfer of the enrichment present in ozone to CO2 via the excited oxygen radical ….”
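(Since the 13C ‘smoking gun’ keeps coming up in this thread, here is the simplest possible two-reservoir mass balance behind it. The numbers are round, assumed values: an atmosphere of ~600 GtC at about -6.5 per mil, fossil carbon at about -28 per mil, i.e. the “about 2% less 13C” in the quote above, and no exchange with the ocean or biosphere, which in reality damps the signal.)

```python
# Two-reservoir isotope mass balance: adding 13C-depleted fossil carbon to the
# atmosphere lowers its delta-13C. Numbers are round, assumed values for illustration;
# real budgets must include exchange with the ocean and biosphere, which dilutes the signal.
atm_mass_gtc = 600.0      # pre-industrial atmospheric carbon, roughly (GtC)
atm_delta13c = -6.5       # per mil, roughly
fossil_added_gtc = 150.0  # crude stand-in for the fossil carbon remaining airborne
fossil_delta13c = -28.0   # fossil fuels: ~2% (20 per mil) less 13C than the atmosphere

new_mass = atm_mass_gtc + fossil_added_gtc
new_delta = (atm_mass_gtc * atm_delta13c + fossil_added_gtc * fossil_delta13c) / new_mass

print(f"delta-13C shifts from {atm_delta13c} to {new_delta:.1f} per mil")
# ~ -10.8 per mil with no exchange; the observed decline is smaller precisely because
# the ocean and biosphere continually exchange carbon with the atmosphere.
```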
Hank Roberts says
Another good source on where CO2 is coming from here, dated early 2005.
Regrettably, the actual data illuminating this question was collected at stations funded by the USA and Australia that are now being shut down, the authors point out.
Sally says
Re: 24
“There’s a sawtooth daily pattern to the Mauna Loa graph. Is there any difference in the gas make up between day and night? Does the isotopic makeup vary?”
Is there a daily sawtooth pattern to the Mauna Loa graph? There is certainly an annual sawtooth pattern, caused by the northern hemisphere forests kicking in with photosynthesis every spring. That is probably rather simplistic but I think it is what Charles Keeling set out to measure initially. The carbon flux is affected by all photosynthesis, terrestrial and aquatic. There would be a daily flux however, as plants respire all the time but only photosynthesize during light conditions. Again this is a simple explanation and does not take into account the processes that go on in the dark to produce sugars.
You can look at graphs from the four collection stations at Barrow (Alaska), Samoa, the South Pole and Mauna Loa. You will see from the graph found here: http://cdiac.ornl.gov/trends/co2/nocm-sagr.htm
that there is far less fluctuation in Samoa, as the climate has little seasonal change at this location.
I hope this helps.
Donald G. says
I propose we invest massive resources into making a large array of petaflop-scale computers dedicated to the study of global warming. This way, if the simulations show that Global Warming isn’t an upcoming threat, we’ve ensured that it really is, by churning out more than enough CO2 while making those computers. ;)
John Dodds says
Re 24 30
Go to http://meteo.lcd.lu/today_01.html to see real live CO2 concentrations et al. in Luxembourg. Variations over 50%, and as high as 500+ ppm.
I agree that the patterns are complex. Just changing the water vapor content (& rainstorms) plays havoc with the CO2 concentrations, as does the daily commute (vehicle exhaust) near the measuring station.
In general it is my understanding that the daily fluctuations & even the yearly cycle have a relatively small impact on the global warming consequences. I think there is a Gavin comment to this effect from a year or so ago somewhere in the archives.
Re #24: “Simple one-celled organisms in the soil are doing better than that — they push out 60 GtC/yr.”
Yes, and they are better at sequestering carbon than us too. They absorb 60 GtC/yr too. That is the oscillation you thought was daily but is actually yearly. We are what is making the curve rise.
English says
The text has both punctuation errors and errors of grammar. Some of these errors are not insignificant. The text of the document can be seen as ambiguous.
I would most respectfully suggest that any text be checked for errors in grammar and punctuation by a technical author prior to publication.
[Response: Bit late now, but what specifically did you find ambiguous? – gavin]
Julian Flood says
re 28: thanks for the sites — even more thinking needed. I read a recent CO2 metabolism paper for a particular species of phytoplankton and began to wonder about the chance that they are less consistent than science currently understands. If the knowledge that certain marine plants can swap between C3 and C4 is only about 5 years old, I’d rather like to see whether their different fixation routes do odd things to oxygen isotope ratios. There is a rather disturbing illo of the degree of confidence in the science of global warming — low low and very low occur too often to be reassuring.
Governments make a lot of noise about global warming — how odd that they don’t throw large amounts of money to the only people who can demonstrate, by scientific measurement, exactly what’s happening. And yet monitoring sites are being abandoned — why is that? Do governments know something or is it merely complacency?
30: thank you. There’s a daily wiggle rather than an annual sawtooth I believe, but you’re right, I was conflating the two. I’m wondering if a signal of isotopic fractionation could be teased out of the daily signal, pinning it down to either worldwide or near-ocean effects.
If the ocean surface pollution hypothesis of global warming is correct then this is the sequence: the petrochemical industry kicks into life around 1850. Surfactant run-off and oil spill begin to reduce stratocumulus cover: the whole surface of the ocean warms, reducing nutrient upflows and encouraging the phytoplankton to kick into C4 metabolism. C4 phytoplankton increase in numbers as their relative advantage improves over C3 plants. C4 plankton sequesters more C13 than expected by conventional models, expected levels of C13 fall, producing a false anthropogenic signal.
Surfactant and oil pollution effects increase CO2 levels by reducing biological pull down, mechanical mixing and reduce solubility by warming the surface. (What happens to the albedo of a polluted surface? I don’t know.)
Deep water warms slightly. Methanophages begin to emit more light isotope CO2 as the clathrate deposits become more accessible. CO2 warming increases. The ocean surface becomes even more stable and the cycle continues. Eventually the clathrates boil off.
Result: large temperature spike and collapse of civilisation.
I wonder if there’s anything in the fossil record which would fit this scenario, triggered perhaps by the breaching of a large, light oil reservoir by coastal erosion? I wonder what signals a scenario like that would leave for us to interpret?
(Well, I like it better than a convenient volcano boiling off a carbon deposit!)
Re # 27: “This does bring up a question I’ve sometimes wondered about, but have never seen numbers on: suppose we could ignore the political obstacles, and make a serious attempt at re-vegetating areas – the western US, North Africa, the Mideast, Australia – that have been desertified by human activity.”
Most of those areas have not been “desertified by human activity” — they’re naturally deserts, part of the global desert belts at low latitudes. Vegetating them on a large scale would probably require significant energy expenditures (and accompanying CO2 release!) for fertilization, water transport, and building the necessary infrastructure. (This is not to say that it’s a bad idea to reverse desertification in local areas where it’s recently occured, of course.)
From the various Hansen et al and other papers, the computer calculations of global warming show that adding CO2 results in the added CO2 molecules delaying (not trapping!) the transport of energy out of the earth system, and the subsequent global warming of the air near the ground and cooling near the top of the atmosphere – TOA (see cartoon figure 2e from Hansen et al 2005). This is the Greenhouse Effect. The global warming effect results in a positive energy imbalance or disequilibrium at the TOA, where more solar energy is coming in than is being sent out because the TOA temperature is colder than the equilibrium. This apparently lasts for years if not forever per the GCMs (Hansen Nazarenko et al 2005).
However, on a daily basis the Earth goes through a daily rotating solar cycle where at night we have a negative imbalance with more energy out than in, then as the sun rises we warm up pass through the equilibrium point to get a positive imbalance and stay that way until the sun again starts to set and the energy-in again passes through the equilibrium point and we return to a negative imbalance. The same process happens when we get yearly/seasonal temperature cycles and also on the approximately 11 year solar energy cycles which change the energy-in amounts. At all times, the amount of energy being transported out of the Earth system is calculated by the Stefan-Boltzmann Law (SBL) based ONLY on the temperature of the Earth system, be it at the ground or at the TOA. The SBL, which says that the energy transported from an object is proportional to the temperature raised to the 4th power (ie hotter air rises faster-convection, or hotter objects radiate more energy per unit time- radiation transport), is forever trying to reestablish the earth to its equilibrium energy-in equals energy-out conditions, by either warming or cooling it, and it is perfectly successful exactly twice each and every day. This contradicts the conclusions of the Global Computer Models (GCMs) which require a permanent or multi-decade disequilibrium or energy imbalance to create the global warming.
So the question is: WHY doesn’t the Stefan-Boltzmann Law feedback also automatically compensate for the greenhouse effect? Why doesn’t the feedback from the SBL automatically return the air to its equilibrium (solar) energy-in conditions imposed by the daily solar cycles? Why isn’t the delay in energy transport from adding GHGs compensated for by the speedup in radiation transport at the speed of light caused by the increase in temperature and the SBL response? The SBL has no way of differentiating between a GHG- or solar-caused warming. According to the GCMs, we already know that the CO2-caused global warming results in hotter air rising faster (convection) at ground level, which Gavin says is included in the GCMs as a part of the water vapor feedback effect (see “How not to attribute climate change”, comment #182, 12:57 pm; see also #126 et seq & #208). Why doesn’t the hotter air at ground level also cause more energy to be radiated out faster per the SBL, so that the greenhouse-caused warming energy is returned to the TOA where the extra energy will cancel out the CO2-caused imbalance at the TOA? i.e. to return the Earth to its equilibrium conditions, which it actually does – twice a day?
The conventional wisdom that greenhouse gases cause global warming is based on the identification of the greenhouse effect (GHE) in the Svante Arrhenius 1896 paper (see Wikipedia ) ” On the Influence of Carbonic Acid in the Air Upon the Temperature of the Ground”. However, my simple reading of the paper shows that Arrhenius calculated the warming effect of CO2 energy absorption in academic isolation (CO2 absorbs energy) without considering the real world effects of the Stefan-Boltzmann Law Feedback.
Also a review of the Global Computer Models (GCMs) results shows that the amount of global warming calculated varies depending upon the duration of the period modeled. eg 1950-1997, 1880-2000, 1750-2000, or from the bottom of the ice age and bottom of the CO2 level (20,000 years ago) until 2000 etc. This is to be expected, except that the computer model also requires that the temperature imbalance at the TOA be equal to and opposite the warming to maintain conservation of energy. SO just what is the temperature at the TOA in the year 2000? It HAS to be a single value, not the infinite number of options that can result from the GCMs. Again, the GCMs are calculating results that are not possible. BUT if the SBL Feedback returns the TOA to the equilibrium conditions, then we have a single value, but then there is no energy imbalance. However there also is no global warming due to the GHE!
Sorry Gavin, but I feel that the GCMs are giving such fundamentally incorrect/inconsistent results that they can NOT be valid. They seem to be missing some of the SBL feedback. If you can explain these discrepancies please do.
Which brings us back to what you previously stated 15 months ago,
“… [Response: … You refuse to relax your (incorrect) assumption that the flux from the surface is the same as the flux from the top of the atmosphere, which is equivalent to assuming that there is no GHE at all. So you assume the result you wish to prove. … -gavin]
Comment by John Dodds – 30 Oct 2005 @ 8:04 pm ” (click on the time stamp to go to Gavin Schmidt’s RealClimate.org location)
To which I now respond, Yes Gavin, I intuitively assumed constant equilibrium flux, but it is not my assumption, it is the constant flux equilibrium IMPOSED by the Stefan-Boltzmann Law, & Mother Nature’s Laws of Physics, and seen on a daily basis. The SBL Feedback (which is dependent ONLY on the temperature and not the CO2 concentration) cancels out the GHE temperature effect, basically eliminating it as it occurs. No GHE means no GHG/CO2-induced warming, as you said. The ever-increasing GHG Forcing Curve (Planetary energy imbalance? May 2005 & IPCC) is effectively a flatline. Any research based on the increasing GHG forcing and the GCMs is invalid. The estimates of the temperature impact of doubling the CO2 levels in the next century are just plain wrong.
This does not mean that global warming does not exist or that adding CO2 does not cause problems such as ocean acidification. The evidence in melting polar ice caps, glaciers and measured temperatures etc. is too obvious. However, the increases must come from an external increase in energy-in (or maybe, if the solar increases do not explain it, some of the decreasing Earth magnetic field flux energy is leaking into the ground/air??? Are we seeing increases in the northern lights effects?). The solar increases documented by IPCC approximately account for the observed temperature increase from 1700-2000. For example, the solar insolation increases by about 4 W/m2 to 1364 since ~1700. This increase is 4/1364 or 0.3%, which is about the observed increase of 0.3% of 288, or 0.84 degrees absolute.
This “CO2 does NOT cause warming” conclusion does however mean that any efforts to control Carbon/GHG emissions (Kyoto Treaty, carbon taxes, carbon emissions trading, carbon sequestration research, lobbying, lawsuits etc) are totally worthless, a waste of resources, and can be eliminated since they will have no effect on global warming other than increasing taxes and getting me hot under the collar. :) However, for solving the problem of CO2 caused global warming and “saving the world” from the British/Stern estimate of 1 to 5% of annual global GNP, Companies and Governments are encouraged to send a fraction of their subsequent cost/tax savings to the John Dodds Foundation USA 94123-3404. :)
Sorry for the length.
Julian Flood says
Re 37:
Another triumph for the ocean surface pollution theory of global warming!
JF
(we can correct the offset of warming and CO2 rises as well.)
James says
Re #38: Ever park your car in the sun on a cold day? It gets quite a bit warmer inside, doesn’t it? Now if I understand the physical model you’re using in your description, it should instead be in equilibrium with the outside air. I’d take that as a strong hint that your physics is wrong :-)
And re #36: “Most of those areas have not been “desertified by human activity” — they’re naturally deserts, part of the global desert belts at low latitudes.”
While such areas are indeed natural drylands, I think there’s quite a bit of evidence showing that much of the area has in fact been turned to desert as a result of human activity. See for instance the early explorers’ descriptions of the American west compared to conditions today. There are prairie/steppe plant communities that, once established, do quite well with little rainfall. Destroy that community by over-grazing or farming, though, and it does not readily reestablish itself.
John Dodds says
Re #40
A car has a physical barrier – the roof – that prevents, or actually just slows down, the re-establishment of equilibrium to the rate that energy can transfer through the roof. Just like a glass greenhouse has a barrier that prevents the re-establishment of the equilibrium, BUT go into a glass greenhouse at 4 AM and it IS as cold as the outside – the equilibrium is re-established. The atmosphere has NO barrier to the flow of energy; the question is just how fast the equilibrium is re-established, and radiation of energy is as fast as the speed of light.
The question you should be looking at is if there is a GHG caused energy imbalance per the GCMs, and more energy than what the GHGs generate is continuously added then why haven’t we overheated already? CO2 has been increasing for 20,000 years according to Hansen. (& I agree) so the so called imbalance should have been there for 20,000 years. Besides if there is an imbalance just why would it result in that extra energy going into the ocean (per Hansen) instead of just going into the air & eliminating the imbalance like it does on a daily basis. Hansen’s model does not make sense.
Hank Roberts says
Mr. Dodds, if you understand how CO2 absorbs and re-radiates energy, you’ll understand how a CO2 laser works as well. This may help. It’s not marketing, it’s working hardware.
The Stefan-Boltzmann law does NOT apply to the Earth-atmospheric system. The SBL is a result derived from Planck’s radiation law, under the assumption of frequency-independent emissivity. This is flagrantly not true for a system with absorption lines.
English says
Re #41
I think that you make an interesting point.
If I can paraphrase, you are saying that the earth may heat up to above normal temperatures, because of GHG’s, during daytime. Each part of the earth will have lost all of the excess energy it may have gained (due to GHG) by dawn.
I think that this may be true but it would still result in an increase in mean temperature as the temperature has risen for some time (during the day) above that which it would have been if there were no GHG and the temperature will return to the usual ambient at night (and won’t go below it).
So, by your theory, we would get hotter days and no change in the dawn temperatures. I have no idea if this is the case or not.
I believe that there are more uncertainties in climate science than this (and I’m not the only one).
“There is international consensus that human activities are increasing the amounts of greenhouse gases in the atmosphere and that these increases are contributing to changes in the earth’s climate. However, there is scientific uncertainty regarding the sensitivity of climate to these increases, particularly the timing and regional character of climate change.”
Having engaged with Roger Pielke Sr. and cohorts on this issue, I begin to think that their definition of boundary value problem and mine is different. They are talking about the extreme limits of the system, whereas I (and I think most of the folk writing here) think of the hypersurface on which the trajectories evolve as being the boundary. Perhaps it is because they are locked to a three dimensional picture.
BTW this discussion has a very high S/N ratio.
James says
Re #41: “The atmosphere has NO barrier to the flow of energy…”
Huh? How did you reach that conclusion? Going back to basic physics, there are three modes of energy transport: conduction, convection, and radiation, no? Conduction is not significant in gases, so we’re left with two. Incoming solar radiation in the visible spectrum heats the ground, and can be transported away by convection or radiation.
Convection pretty much stops at the stratosphere, so that leaves radiation. That radiation takes place in the infrared, to which CO2 is not transparent, so the outgoing radiation is stopped. The atmosphere _is_ the physical barrier that stops re-radiation.
“….and radiation of energy is as fast as the speed of light.”
Humm… So you’re saying that a hot object placed in a vacuum should cool instantly? I don’t think it works that way :-)
Looking at the earth-atmosphere system, an infrared photon leaving the ground does travel at the speed of light – until it hits something, say a CO2 molecule. Then the energy it carried can either stay with the CO2, making it hotter, or it can re-radiate. If it re-radiates, it can go either up or down. The net effect is that the system stays warm longer. Left alone, it would eventually cool back down, but the sun comes up before that can happen, so the next day the system gets a little warmer still, and that keeps happening until a new, higher, equilibrium temperature is reached.
But then humans keep adding more CO2, which shifts the equilibrium still higher…
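(For what it’s worth, the standard one-layer ‘gray atmosphere’ energy-balance model makes this point quantitatively. It is a textbook sketch added here for illustration, not anyone’s GCM: the Stefan-Boltzmann law is applied at every level, yet an IR-absorbing layer still leaves the surface warmer in equilibrium than a transparent atmosphere would. The emissivity values below are illustrative assumptions.)

```python
# One-layer gray-atmosphere energy balance (textbook toy model, not a GCM).
# The Stefan-Boltzmann law holds everywhere, yet an IR-absorbing layer raises the
# equilibrium surface temperature:
#   T_surface = [ (S/4)(1 - albedo) / (sigma * (1 - eps/2)) ]^(1/4)
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m-2 K-4
S0 = 1364.0         # solar constant, W m-2 (the value mentioned earlier in this thread)
ALBEDO = 0.3

def surface_temperature(emissivity):
    """Equilibrium surface temperature (K) for a single gray layer of given IR emissivity."""
    absorbed = (S0 / 4.0) * (1.0 - ALBEDO)
    return (absorbed / (SIGMA * (1.0 - emissivity / 2.0))) ** 0.25

print(surface_temperature(0.0))   # transparent atmosphere: ~255 K
print(surface_temperature(0.78))  # partially absorbing layer: ~288 K
print(surface_temperature(0.80))  # slightly more absorbing: warmer still
```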
L. David Cooke says
RE: #46
Hey James;
The statement you made regarding “The net effect is that the system stays warm longer” – do you have an indication of this? Do you have a clear data table showing that the range between the daily high and low temperatures has diminished over the last 35 years under similar conditions? How about the typical surface temperature decay curve in a clear night sky, under similar input potentials and similar weather conditions?
That the CO2 increases, hence creating a greater opportunity for CO2 vibrations to maintain the energy or to delay the terrestrial release into space, should be detectable, don’t you think? I have not seen any indications of this – have you?
Dave Cooke
Hank Roberts says
Mr. Cooke, when you use phrases like “opportunity for CO2 vibrations to maintain the energy” you are making it hard for yourself. You won’t find that with Google — let’s check:
search – +opportunity +”CO2 vibrations” +”maintain the energy” – did not match any documents.
Yep, you’d find no evidence of it.
But look using the terms you find in the physics books, eh?
This is well documented as one of the basic predictions — greater warming in the nighttime, because less heat is being radiated off the planet by an optically clear sky.
Here, for example, just to give you a start looking into this:
“ABSTRACT Two sky brightness monitors – one for the near-infrared and one for the mid-infrared – have been developed for site survey work in Antarctica. The instruments, which we refer to as the NISM (Near-Infrared Sky Monitor) and the MISM (Mid-Infrared Sky Monitor), are part of a suite of instruments being deployed in the Automated Astrophysical Site-Testing Observatory (AASTO). The chief design constraints include reliable, autonomous operation, low power consumption, and of course the ability to operate under conditions of extreme cold. The instruments are currently operational at the Amundsen-Scott South Pole Station, prior to deployment at remote, unattended sites on the high antarctic plateau.”
James and L., the CO2 molecule absorbs a photon. It gains vibrational energy. This energy rapidly (~ a few tens of collisions, within nanoseconds) is collisionally transferred to other atmospheric molecules as translational energy (V–>T transfer in the argot of the field), mostly N2 and O2. This heats the atmosphere.
So how do CO2 (and H2O) molecules gain energy to radiate? Well, the unexcited CO2 molecules continually collide with other molecules in the atmosphere and a small percentage of the collisions leave the CO2 vibrationally excited (T–>V transfer). The average fraction of CO2 molecules that are vibrationally excited in the bend is something like 2*exp(-1000(K)/T(K))
Eye-crossing stuff: 1000 K multiplied by Boltzmann’s constant is roughly the energy in Joules of the CO2 vibrational bend; the bend is two-fold degenerate, which accounts for the 2.
The bottom line is that about 6% of the CO2 in the atmosphere is vibrationally excited at any time and can radiate; almost none of the molecules that absorb the IR emit directly.
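(Plugging numbers into the quoted Boltzmann factor, with 288 K taken as an assumed representative lower-atmosphere temperature, confirms the figure.)

```python
# Quick check of the quoted Boltzmann-factor estimate: fraction of CO2 molecules
# vibrationally excited in the (doubly degenerate) bending mode at temperature T.
import math

def excited_fraction(T_kelvin, mode_energy_kelvin=1000.0, degeneracy=2):
    return degeneracy * math.exp(-mode_energy_kelvin / T_kelvin)

print(excited_fraction(288.0))  # ~0.06, i.e. the ~6% figure quoted above
```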
Of greater interest is the apparent evidence relating radiative shading to marine biological health. If the work of Dr. M. Behrenfeld at OSU is correct, there is the possibility that, even though radiative flux does not indicate a change, warming is having a negative effect:
http://www.livescience.com/environment/061206_phytoplankton_warming.html
By the same token it appears that just shading the coral reduces the loss:
Australia_Turns_To_Sunshades_Water_Spray_To_Save_Great_Barrier_Reef
This takes me back to the 2004 article regarding the Swiss resort that shielded the snow base around its snow-molding equipment with a simple tarp, not necessarily a thermal blanket, and preserved the snow over the summer. This brings me back to the question regarding the drivers for the possible reduction of precipitation. Why is there less precipitation when the apparent vapor content of warm air is higher and the condensation temperature at the troposphere is ‘cooler’ (providing I understand the 2006 spring study saying that GHGs reduce the longwave energy reaching the 250 mb range)?
The most recent work I have been attempting as a layman is trying to break down the major oscillation patterns to see if the drivers can be described. I believe that if we can get to the point where we can decipher the drivers of phenomena such as the NAO or ENSO, the long-term models might improve dramatically.
I think the piece that Dr. Benestad posted earlier this week, “Mid-latitude Storms”, is beginning to get close to describing the drivers. Mapping cause and effect through the interrelationships between large-scale phenomena does not seem to work very well; it almost seems like using bamboo cane fishing poles as chopsticks. Is there any work on, or any signal of, possible drivers for these large-scale events in any of the models or recent studies you have seen?
Dave Cooke
Julian Flood says
I, too, find the statement about current models being non-chaotic surprising. I once saw a suggestion that lemming populations were following an attractor and painstakingly created a cellular automaton to look at population behaviours — no attractors that I could see in two or three dimensions, but the little bugs did strange and unexpected things. Perhaps, as the climate models become better, we are in for surprises.
The article (timely and extremely useful) mentions that the microphysics of clouds and aerosols needs research — the latter is an area where statements such as ‘this work is difficult to carry out in the field’ occur frequently, so let’s not hold our breath.
Might I ask about a couple of other points?
Do the models make allowance for variation (other than, obviously, temperature) in the condition of the ocean surface, such as variation in evaporation rates (not temperature related), variation in aerosol production (not related to windspeed), variation in CO2 incorporation (not related to either)?
The idea that mankind’s small percentage of the carbon cycle could be the cause of the Mauna Loa graph seems, to this layman, to be unlikely, the attribution owing more to man’s overweening vanity than scientific measurement, like claiming the Earth as the centre of all things. It seems more likely that we are disrupting one of the feedback mechanisms — my guess is that surfactant and oil sheen pollution of the ocean’s surface has led to reduced aerosol production, less biological CO2 pulldown and reduced mechanical incorporation of atmospheric gases. These effects will be subtle in highly productive waters — natural pollutants may mask the effect — and not present in shallow or coastal waters, but in the blue desert areas they will be highly significant and should be measurable. It might be possible, by piggybacking on Latham, Salter et al’s calculations (I’m afraid that I do not have access to a public domain version of their proposal for cooling using aerosol production vessels but a subscription, if required, is well worth it for the illustration alone) to predict how much albedo reduction we are suffering because of lower oceanic strato-cu cover.
I like numbers. I hope someone is out there with sample bottles.
JF
The solution to global warming is at http://www.floodsclimbers.co.uk. The relevant article is, modestly, in green ink.
[Response: Maybe this could have been clearer, but it is the climate within the models that is non-chaotic, not the model itself. All individual solutions in the models are chaotic in the ‘sensitivity to initial conditions’ sense, but their statistical properties are stable and are not sensitive to the initial conditions (though as I allude to in the article, I don’t know whether that will remain true as we add in more feedbacks).
CO2 increases are real and definitely anthropogenic.
As to the mechanisms you mention, evaporation generally depends on humidity, temperature, wind speed, and atmospheric stability. In models with interactive aerosols, ocean sources for sea salt are generally wind speed dependent, and for DMS or MSA they are now starting to put in biological activity feedbacks – but these are very uncertain as yet. There may be important secondary organic aerosol precursors that are climate dependent – but incorporating these effects is very much a hot research issue at the moment. -gavin]
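To give a concrete feel for what ‘wind speed dependent’ means here, a toy sketch (this is not the source code of any GCM; the exponent follows the often-quoted Monahan-type whitecap scaling, and the function name is invented for illustration):

def sea_salt_source_scaling(u10):
    # Toy relative sea-salt source strength as a function of 10 m wind speed (m/s).
    # Whitecap-type parameterisations scale roughly as wind speed to the ~3.4 power,
    # so modest wind changes give large changes in the source.
    return u10 ** 3.41

for u in (5.0, 10.0, 20.0):
    print(u, round(sea_salt_source_scaling(u) / sea_salt_source_scaling(5.0), 1))

The steep power is why these sources are tied to the model's own wind fields rather than prescribed as constants.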
Tom Pollard says
I don’t understand in what sense climate models solve “boundary-value problems.” They’re just dynamical models, which means they’re initial-value problems. This is just a matter of what sort of equations you’re solving. The fact that you run lots of trajectories to collect statistics doesn’t change the fact that those trajectories are solutions to an initial-value problem.
An unrelated quibble is that I have no idea what you mean when you say “Emergent qualities make climate modeling fundamentally different from numerically solving tricky equations.” Emergent behavior is something that may or may not arise in your dynamical model, but it doesn’t change the fact that your job as a modeller is still to properly define and accurately solve numerically tricky equations. I also don’t see the distinction you’re trying to draw between climate modelling and weather forecasting, here.
[Response: The distinction occurs precisely because I am interested in the statistics of the problem, not the individual trajectories. For instance, take storm tracks in the North Atlantic – I can be interested in the path of an individual storm (an initial value ‘weather’ problem), or I can be interested in the statistics of all such storms. The second does not depend on the initial values – I can perturb them to my heart’s content – yet the statistics of the storms once I’ve gone out long enough will converge to a ‘climatology’ of storms. This is true for even a perfect model (should any exist). Now, it is certainly my job to numerically solve tricky equations, but the point I was trying to make was that the emergent properties of dynamical systems make those solutions much less a priori predictable than simply a ‘tricky numerical problem’. – gavin]
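A toy numerical illustration of that distinction, using the Lorenz-63 system instead of a GCM (standard parameter values; the forward-Euler step and step size are chosen only for brevity, and nothing here represents real storm tracks):

import numpy as np

def lorenz_step(state, dt=0.005, s=10.0, r=28.0, b=8.0/3.0):
    # One forward-Euler step of the Lorenz-63 equations.
    x, y, z = state
    return np.array([x + dt * s * (y - x),
                     y + dt * (x * (r - z) - y),
                     z + dt * (x * y - b * z)])

def run(start, nsteps=200000):
    state = np.array(start, dtype=float)
    z_track = np.empty(nsteps)
    for i in range(nsteps):
        state = lorenz_step(state)
        z_track[i] = state[2]
    return z_track

a = run([1.0, 1.0, 20.0])
b = run([1.000001, 1.0, 20.0])          # a tiny perturbation of the start
print(abs(a[5000] - b[5000]))           # trajectories soon differ: the 'weather'
print(a.mean(), b.mean())               # long-run means agree closely: the 'climate'

The two runs separate quickly, yet their long-run statistics are nearly identical, which is the sense in which a climatology does not depend on the initial values.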
CobblyWorlds says
Once again thanks for that Gavin.
With regards emergent behaviour, it was just one of the issues that changed my mind and defeated my scepticism with regards the models.
Some time back in my ‘studies’ I realised that the models are able to create things like the ITCZ or, in the case of Pinatubo, recreate the total column water vapour response to the eruption. This was despite these being emergent phenomena of the underlying modelled processes. I just couldn’t see how, if the models were wrong, they could produce virtual analogues of real phenomena without being coded to explicitly model those phenomena. The models aren’t perfect, but to dismiss them because of their flaws really is to “throw the baby out with the bathwater”.
For other interested readers, ClimatePrediction.net have an easy to understand description of models here: http://climateprediction.net/science/model-intro.php
Alastair McDonald says
Re #7 Are you running the Climate Prediction model? If so, does it have an ITCZ? I ask this because when I ran it, I could not see any ITCZ :-(
Peer says
Figure 7.5 and fig. 7.9 at http://www.ux1.eiu.edu/~cfjps/1400/circulation.html make the ITCZ look like the simplest and most predictable weather phenomenon on Earth, until one realizes that the real-world ITCZ line (fig. 7.9) deviates from the line of maximum downpour or convection. So my question is whether the ITCZ always behaves as simply as this, or whether it can sometimes split into two when passing obstacles like cold SSTs (East Pacific/La Niña) or dry areas like the Sahara desert.
Bryan Sralla says
Gavin: I realize that the boundary values vs initial values discussion is probably getting tired to you, but please indulge me.
It is my understanding (from the literature) that climate prediction takes two forms. Climate predictions of the first kind are certainly initial-value problems. Predictions of the second kind are instead solutions controlled by boundary values. I am unclear because some of the papers I have read reporting on AOGCM experiments of the first kind (with anthropogenic GHGs kept at pre-industrial levels) have shown very poor skill at predicting regional climates (high sensitivity to perturbations of initial conditions), and for the most part a saturation or loss of skill at short lead times.
Since things like ENSO prediction (among others) are problems of the first kind, and these are superimposed on predictions of the second kind (boundary values) when adding external forcings like GHG increases, is it not true that relevant climate predictions can really be thought of as a combination of both? In my way of thinking, it still seems like one could consider these initial-value problems.
Since the statistical properties of predictions of the second kind are stable, does this not imply that we have smoothed out some of the climate metrics that are relevant to where people actually live?
My mathematics background in chaos, attractors, and differential equations is limited to about one or two undergrad classes a long time ago, so please clarify my thinking on this.
[Response: Practically you can distinguish between the two types just by seeing whether the initial conditions matter. In a ‘perfect’ system – i.e. using a model to predict what another run of the same model produced – you can show that there is useful information in ocean initial conditions for about a decade – mostly based on the North Atlantic. However, in the presence of strong forcing (rising greenhouse gases, volcanoes etc.), the predictions become much less dependent on the start. So for the 20th century trends – where we have essentially no information about the ocean initial conditions – it is easy to see that the global trends even over decadal to multi-decadal time scales are robust to the starting fields. Skill in predicting regional changes however, even in a perfect set up, is not very high – mainly because of the amount of unforced ‘weather noise’ which neither kind of prediction can capture. – gavin]
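The same point in a purely statistical toy (an imposed ‘forced’ trend plus red ‘weather’ noise with made-up magnitudes; no climate physics is involved):

import numpy as np

rng = np.random.default_rng(0)
nyears, nmembers = 100, 10
years = np.arange(nyears)
forced = 0.02 * years                     # imposed 'forced' trend, degrees per year

def weather_noise(n, phi=0.7, sigma=0.15):
    # AR(1) red noise standing in for unforced internal variability.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
    return x

ensemble = np.array([forced + weather_noise(nyears) for _ in range(nmembers)])

member_trends = [np.polyfit(years, m, 1)[0] for m in ensemble]
mean_trend = np.polyfit(years, ensemble.mean(axis=0), 1)[0]
print("imposed trend:", 0.02)
print("member trends:", np.round(member_trends, 3))   # spread, but all near 0.02
print("ensemble-mean trend:", round(mean_trend, 3))   # the forced signal is robust

Individual members wander on short timescales, but the century-scale trend recovered from each member, and especially from the ensemble mean, is insensitive to the particular noise realisation.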
CobblyWorlds says
Re #8,
No Alastair. The CC.net information page was, as implied, an addition for other interested amateur readers.
I’ve not run any climate models, merely read papers. Were I to run a climate model, I don’t know that I’d have the skills to be sure whether it had correctly produced such a thing as the ITCZ.
Barton Paul Levenson says
Re “The idea that mankind’s small percentage of the carbon cycle could be the cause of the Mauna Loa graph seems, to this layman, to be unlikely, the attribution owing more to man’s overwheening vanity than scientific measurement, like claiming the Earth as the centre of all things.”
From 1750 or so to 2005 the ambient CO2 concentration rose from about 280 parts per million by volume to 380 ppmv. We know it’s from fossil-fuel burning because we can measure the fraction of C14, and that has been declining. There’s virtually no CO2 left in old (“fossil”) fuels simply because their age is many times greater than the C14 half-life. This is the smoking gun that shows the CO2 is anthropogenic and not from some natural process. All the CO2 in the biosphere has roughly the same mean level of C14.
James says
Also, the amounts of fossil fuels used in the last century are quite well known – you can look up the numbers in various economic & statistical sources – and from that you can easily compute how much CO2 was produced. Add that up, compare it to the observed increase, and you’ll find the numbers match. (Actually the amount of CO2 from fossil fuels is somewhat greater than the observed atmospheric increase, because some gets dissolved in the ocean and so on.)
Thinking of fossil fuel CO2 as “man’s small percentage” is the wrong way to look at the problem. The key word there is “cycle”: all the rest (with small exceptions like volcanic sources and geological sequestration) is in a cycle that keeps going around and around. The fossil fuel CO2 is an addition. That addition may be comparatively small in any one year, but it doesn’t go away: it keeps adding up.
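Rough numbers for that bookkeeping, with round figures (the conversion of ~2.13 GtC per ppmv and the ~150 GtC land-use term are outside estimates added here for illustration, not taken from the comment above):

# Back-of-envelope carbon bookkeeping with round numbers.
gtc_per_ppm = 2.13                        # ~2.13 GtC of atmospheric carbon per ppmv of CO2
atmospheric_rise_gtc = (380.0 - 280.0) * gtc_per_ppm
cumulative_emissions_gtc = 305.0 + 150.0  # fossil fuel (~305 GtC) plus a rough land-use term

airborne_fraction = atmospheric_rise_gtc / cumulative_emissions_gtc
print(round(atmospheric_rise_gtc))        # ~213 GtC has stayed in the air
print(round(airborne_fraction, 2))        # ~0.47: roughly half, the rest taken up by ocean and land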
David Price says
One question. Have models taken into account the increase in the number of thunderstorms that will result from warming? I read once that thunderstorms keep the earth several degrees cooler than it would otherwise be. If more storms happen as it gets hotter, will they have a moderating impact?
[Response:Equilibrium requires a balance between downward and upward transport of energy within the atmosphere. Thunderstorms should be thought of as simply one of many modes of behavior by which the atmosphere attempts to achieve this balance, in this case largely through the vertical transport of latent heat associated with condensation of water vapor within storm clouds. Obviously, individual thunderstorms cannot be represented at the coarse spatial scales resolved by GCMs. However, their principal role in terms of energy balance, as described above, is represented in models through the parameterization of convective instability in the atmosphere. -Mike]
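For readers curious what ‘parameterization of convective instability’ can look like in its very simplest form, here is a sketch in the spirit of classic convective-adjustment schemes (dry adjustment toward a fixed critical lapse rate with equal-mass layers; the numbers are made up, and real GCM moist-convection schemes are far more elaborate):

import numpy as np

def convective_adjustment(temps_k, dz_m=1000.0, critical_lapse=0.0065):
    # Relax an unstable column toward a critical lapse rate (K per m),
    # conserving the column-mean temperature (equal layer masses assumed).
    t = np.array(temps_k, dtype=float)
    for _ in range(100):                          # sweep until no pair is unstable
        adjusted = False
        for k in range(len(t) - 1):
            lapse = (t[k] - t[k + 1]) / dz_m
            if lapse > critical_lapse:            # layer pair is convectively unstable
                pair_mean = 0.5 * (t[k] + t[k + 1])
                half_dt = 0.5 * critical_lapse * dz_m
                t[k], t[k + 1] = pair_mean + half_dt, pair_mean - half_dt
                adjusted = True
        if not adjusted:
            break
    return t

print(convective_adjustment([300.0, 290.0, 284.0, 280.0]))   # bottom layer first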
mzed says
#12, think you meant “There’s virtually no *C14* left in old (“fossil”) fuels” ;)
Barton Paul Levenson says
Re “#12, think you meant “There’s virtually no *C14* left in old (“fossil”) fuels””
Yep, you’ve got it. My bad. :)
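For the record, the arithmetic behind that smoking gun is just radioactive decay (half-life from standard tables; the ages below are merely illustrative):

# Fraction of the original C14 remaining after a given time.
half_life_years = 5730.0

def c14_fraction(age_years):
    return 0.5 ** (age_years / half_life_years)

print(c14_fraction(50000.0))       # ~0.002: near the practical limit of radiocarbon dating
print(c14_fraction(1.0e8))         # effectively zero: fossil fuels carry essentially no C14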
Julian Flood says
Re 5. Thank you for your reference — I’ve not replied to thank you until now because I’ve been thinking.
Your reference is dated 1996 and as such the author had no access to the work (was it by Morel et al? I’ve looked at so many abstracts today my gyros are toppled) about C4 and beta-carboxylation pathways in phytoplankton. Under stress some phytos change from C3 metabolism and the isotopic fractions they sequester change — the heavier molecules are not discriminated against so strongly. So, my ocean pollution hypothesis is up to this objection: reduced upwelling and lessened entrainment of the surface by wind results in depleted zinc and cadmium levels. Stressed, the plankton switches to C4/beta carb and begins to rain out C13-enriched detritus, depleting (relatively) the upper ocean of C13 in an isotopic refinement of the biological pump. The ratio of C12 rises. The normal process of mixing ensures that the air and water concentrations of the newly isotopically balanced CO2 mix and match. The atmosphere exhibits depleted C12. Surface pollution is worse in the Northern hemisphere, and continued healthy upwelling in the Southern hemisphere — helped by being less complicated by constraints of narrow seas — means that the phytos there are less stressed and can continue their usual C3 way of life.
“How then should an oceanic CO2 source cause a simultaneous drop of 13C in both the atmosphere and ocean?”
Not a source. A relative sink for C13.
ref 13: ‘the numbers match’. Have I got this wrong? I’ve seen references that half of the anthropogenic carbon is sequestered. Half is not what I call a very good match. However, I’d be grateful for references which tie this down a little more strongly as I’m sure the science does not depend on a simple post hoc argument and I’d like reassurance.
ref 12: The smoking gun indeed. If a plant switches to C4 metabolism, a pathway which apparently discriminates less between C12 and C13, would it also sequester unexpected amounts of C14? Presumably the differences are purely mechanical, which would indicate this is so. Does anyone know?
If C14 is being used up unexpectedly, what does this do to the smoking gun? Maybe I need to think some more.
There should be some testable predictions from this. C14 levels in deep sediments should show increases from around 1850 as the stressed oceans began to react to the outward spewing of the nascent petrochemical industry. Plankton samples, if dead — maybe frozen — should show a lack of zinc and cadmium indicating that they have changed to different metabolic systems. Is the data already out there?
I leave other tests to the intelligent reader…
JF
Hank Roberts says
I’ve been wondering where to look for the surprises. One example, much belabored, is methane hydrates.
I see that only a few years ago geologists were still debating the origin of pingos (not the Linux penguin). This is fossil carbon too, presumably depleted of C14?
Is there any clear idea how and when the stuff is formed? I can imagine it could be either a cyclical process associated with glaciation — lower temperature at a given depth below sea level? or contrariwise, when the planet is warmer, CO2 higher, and sea level much higher — because pressure is greater at that same location.
I wonder if the models state an assumption about this stuff — either
— it’s been locked down by past extremes so is stable, or
— it’s at an equilibrium state, so released as warming proceeds (in local spikes as brief temperature extremes occur in the location).
I’d think that some would have bubbled out over the last geologically brief warmth at end of last ice age — but that would have been ‘almost done happening’ as the planet slowly cooled back toward the next ice age. (The big old ones look like hills, tree-covered.)
I wonder if the Navy has mapped the polar sea floor well enough to count undersea pingos — presumably secret if so, as detailed maps, but if so perhaps summary data on size and location would be interesting, to try to quantify what’s been happening.
As of 2003 the origin of pingo structures was apparently still being debated — lots of good pictures for example here.
http://www.mbari.org/news/news_releases/2003/paull_pingos.html
Now it appears clear — as they’re being observed below sea level — that they do form as methane boils off.
Is there enough info developing to include this in models?
Jason Goodell says
After reading this month’s Quick Study on climate modeling in Physics Today, I’ve been unable to push this topic from my mind, especially with it being nearly 70 F here in New England yesterday. As a physicist with only a casual understanding of the issues surrounding global climate change, I’m drawn to the simple idea that humanity’s demand for energy is the central issue. As the global population increases and the spread of technology increases, the demand for energy generation will also increase. For our great-grandchildren, other factors like thermal pollution and the effects of large-scale solar and wind farms may be the climate issue of their era; CO2 emissions are only the beginning of what will be a constant need to consider our actions and their impact on Earth’s climate.
Julian Flood says
Sorry, typo in 17 — in spite of lots of thinking. It should read, of course ‘the atmosphere exhibits depleted C13’ not C12. Sorry about that.
It has me wondering about the mismatch of CO2 rising after temperature in the historical records — do the dating techniques depend on C14 levels and would C4 reduction of C14 pull the dates back into line?
JF
Alastair McDonald says
Re #20 etc.
Julian, Gavin’s reference was to a FAQ produced by Jan Schloerer of Ulm University, and hosted on the web by Dr Robert Grumbine of NOAA http://www.radix.net/~bobg/. Although the article is now over ten years old, it directly answers your question, one that has been asked by global warming skeptics for all those years.
Jan wrote “From its preindustrial level of about 280 ppmv (parts per million by volume) around the year 1800, atmospheric carbon dioxide rose to 315 ppmv in 1958 and to about 358 ppmv in 1994”. If you go to the Mauna Loa site you can see that it has continued to rise since then to about 375 ppmv in 2004. http://cdiac.ornl.gov/trends/co2/graphics/mlo145e_thrudc04.pdf
There is no doubt that the increase is due to fossil fuel burning. You only have to think about 100 million Americans taking their cars onto the road and burning on average one gallon of fuel each day. That is roughly a million tons of CO2 produced per day from that source alone. Globally, we have put over a trillion tons (Tt) of CO2 into the atmosphere. See http://cdiac.ornl.gov/trends/emis/tre_glob.htm 305 Gt of C equals 305 * 44 / 12 = 1.12 Tt. Well that’s what 6.5 billion little unassuming people can do if you give them enough time :-(
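Checking those figures explicitly (the ~8.9 kg of CO2 per US gallon of petrol is an outside emission factor assumed here, not a number from the comment above):

# Rough checks on the figures above.
gallons_per_day = 100e6                 # 100 million cars burning ~1 gallon each per day
kg_co2_per_gallon = 8.9                 # assumed emission factor for a US gallon of petrol
tons_co2_per_day = gallons_per_day * kg_co2_per_gallon / 1000.0
print(tons_co2_per_day)                 # ~890,000: roughly a million tons of CO2 per day

cumulative_gtc = 305.0                  # cumulative fossil carbon emissions, GtC
print(cumulative_gtc * 44.0 / 12.0)     # ~1118 Gt of CO2, i.e. just over a trillion tons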
Hank Roberts says
C12? C13? C14? All three contribute:
“… The 14C/12C and 13C/12C ratios ….” from “Atmospheric Radiocarbon Calibration to 45,000 yr B.P.: Late Glacial Fluctuations and Cosmogenic Isotope Production”, Science 279, 1187 (20 February 1998).
Charles Muller says
About parametrization
In your text (Gavin), you explain that both equation-approximated physics (e.g. radiative transfer) and empirically-based physics (e.g. evaporation) need parametrization. That’s not really clear for a non-expert: do you mean, for example, that the coefficients of the equations for these phenomena have to be regularly corrected against real-world measurements and validated retrospectively?
About emergence
You say on one hand that large-scale behaviors of climate are robust, but on the other hand that they emerge from small-scale and more chaotic features. Does that mean that the chaotic small-scale behavior of climate is, after all, of no real concern for the accuracy of 2100 projections?
Julian Flood says
Re 21:
Simple one-celled organisms in the soil are doing better than that — they push out 60 GtC/yr. I wonder why we assume that the C accumulating in the atmosphere is actually ‘our’ C. Why is there this tiny fraction of the overall flux which is not consumed? The smoking gun did indicate that it’s ex fossil fuel carbon because it is depleted in C14 as one would expect. If C4 metabolism is disturbing the expected C14 levels then the bets are off.
The deep sea reservoir of C is 380,000 Gt. Thinking non-anthropocentrically, why does our little flow go straight into the atmosphere? Does it? Obviously not, because some of it gets lost. Our flow is dwarfed by natural processes and we need to find a way of pointing at our emissions and proving that the trouble is what we’re up to. Otherwise we’re like a little boy peeing into a lake and taking the blame when the dam bursts.
There’s a sawtooth daily pattern to the Mauna Loa graph. Is there any difference in the gas make up between day and night? Does the isotopic makeup vary?
JF
Bryan Sralla says
Re: #10 Gavin, thank you for responding.
Since a climate prediction problem of the first kind (controlled by initial values) will affect numerous non-linear feedbacks, which will ultimately affect the degree of free variation in the climate trajectories, it seems well founded (to me) that many classes of climate prediction problems are controlled by initial values. Do you agree with my statement? Also, greater numbers of non-linear feedbacks added to the AOGCMs will most likely increase the degrees of free variation in modeling experiments. Right? It then seems feasible that improving the models (making them better resemble the real thing) may make skillful multi-decadal predictions more controlled by initial values, not less (since the amount of noise in the system is greater relative to the forced variations). Since a boundary-value problem is superimposed on the initial-value problem in real climate prediction, it seems necessary to obtain solutions which satisfy both (for many climate prediction problems). I think for some casual observers it might seem non-intuitive that improving the models adds to the uncertainty in predictions. The more I read and study the problem, the more I am persuaded you modelers have a tough problem on your hands. Good luck.
yartrebo says
Re #24:
Those single-celled organisms are releasing carbon that was only recently fixed. Even in the short term (1 year period) it is within 1% of being in equilibrium with carbon fixed by life.
The C4 pathway is used mainly by grasses. Other plants and all bacteria and protozoa use the older C3 pathway. An organism cannot just choose which pathway it wants to use. Even if land use patterns have changed the ratio of C4 to C3 plants (probably increased it, because so much crop and livestock land is planted with grasses), the effect will be quite minor as the carbon is released back into the atmosphere in short order.
As far as the deep sea reservoir of C goes, don’t forget that it takes thousands of years for the deep ocean to turn over and that while it is a reservoir, it was more or less in equilibrium at 285 ppm CO2 and thus that carbon wasn’t going anywhere. With current CO2 levels the deep ocean is actually a net sink of CO2, absorbing a fairly significant share of our emissions.
With regards to the sawtooth daily pattern at Mauna Loa, it’s due to photosynthesis during the day and respiration during the night. While there’s probably a difference in the isotopic makeup, it’s probably pretty small?
“Thinking non-anthropocentrically, why does our little flow go straight into the atmosphere? Does it? Obviously not, because some of it gets lost.”
Quite easy – just look at a smokestack or exhaust pipe. The CO2 literally goes straight into the atmosphere. As far as what gets ‘lost’, that’s mostly the ocean absorbing about half of it. It isn’t gone for good and if CO2 levels drop, the process will reverse, releasing the CO2 back to the atmosphere.
James says
Re #24: “Simple one-celled organisms in the soil are doing better than that — they push out 60 GtC/yr.”
Rather misleading language: they don’t “push out” that much CO2, except in the sense that you “push out” a certain amount of CO2 every time you exhale. A better way to put it would be to say that they _cycle through_ that much carbon: it comes in, it goes out, but only if there is a change in the mass of soil organisms (and their corpses, etc) does all that activity produce a net change in CO2.
This does bring up a question I’ve sometimes wondered about, but have never seen numbers on: suppose we could ignore the political obstacles, and make a serious attempt at re-vegetating areas – the western US, North Africa, the Mideast, Australia – that have been desertified by human activity. How much CO2 could we expect that to sequester?
Hank Roberts says
Julian, here are papers addressing the questions you posed above, just a couple from a large and interesting site.
You’d want to check the citation index forward because these are relatively old papers, not the current best info, but a place to start.
There has been a lot of research addressing the questions you mention above, and I am sure it isn’t news to the climate scientists. It’s an example of the details they have to consider.
http://www.igac.noaa.gov/newsletter/16/co2.php
Isotopomers of CO2 and their use in understanding the global carbon cycle
“… One of the most striking results that 13C data (and now O2/N2 ratio data) unveiled is the existence of a very large repository of anthropogenic CO2 in Northern Hemisphere ecosystems during the early 1990’s when the atmospheric CO2 growth rate had diminished to only one third of its normal value. Still, the long term trend and interannual fluctuations of 13C at one given monitoring station is at the limit of detection of mass spectrometers, on the order of 0.01 per mil for 13C in CO2. Thus, even a very slight bias in the isotopic data would translate into different inferred magnitudes of the global land and ocean uptake of anthropogenic CO2….”
—————
http://www.igac.noaa.gov/newsletter/16/mass-new.php
Anomalous (or Not Strictly Mass Dependent) Isotope Variations Observed in Important Atmospheric Trace Gases
“… Variations in stable isotope ratios in the environment have generally been well understood and put to good use. However, the atmosphere appears to be the scene for a host of isotope effects that we do not yet understand. The prime example is ozone, whose anomalous enrichment has repeatedly defied correct interpretation.
“….Atmospheric studies also benefit from stable isotope variations. An illustration is the ongoing decline of the 13C/12C ratio of atmospheric carbon dioxide, largely in consequence of the increasing fraction of fossil fuel-derived carbon dioxide. This isotope effect is thus directly related to the isotopic composition of an important source of the gas. Fossil fuels have about 2% less 13C than atmospheric carbon dioxide. This in itself is obviously not a source effect (ambient CO2 is the carbon source for plants), but rather an isotope fractionation effect of photosynthesis. Plants favor 12CO2 slightly over 13CO2, so the assimilated carbon is depleted in 13C relative to the atmosphere. In isotope applications of interest to atmospheric chemistry, source signatures and fractionation effects in chemical reactions are both relevant….”
“… It has taken years to unravel the secrets of the anomalous isotope fractionation of ozone, perhaps the most extensively studied reactive atmospheric trace gas. In regard to molecular symmetry, 17O and 18O in an ozone molecule are identical (they are simply different from the abundant 16O isotope)…. However, theories based on symmetry have been challenged by the latest experimental data.
“… After ozone, it was found that carbon dioxide in the stratosphere exhibits MIF [Thiemens et al., 1995, Gamo et al., 1989]. A chemical mechanism was proposed by Yung et al. [1991], who showed that the observed 17O excess in CO2 could be explained by transfer of the enrichment present in ozone to CO2 via the excited oxygen radical ….”
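A crude mixing calculation shows why adding isotopically light fossil carbon drags the atmospheric 13C/12C ratio down (two-end-member mixing only; the delta values are typical textbook figures, and in reality exchange with the ocean and biosphere strongly damps the shift, so this deliberately overstates it):

# Two-end-member mixing of delta-13C (per mil), ignoring ocean and biosphere exchange.
atm_ppm, atm_d13c = 280.0, -6.5         # rough pre-industrial atmosphere
added_ppm, fossil_d13c = 100.0, -28.0   # fossil-fuel CO2 is strongly depleted in 13C

mixed = (atm_ppm * atm_d13c + added_ppm * fossil_d13c) / (atm_ppm + added_ppm)
print(round(mixed, 1))                  # ~ -12.2 per mil if nothing else happened; the observed
                                        # value (~ -8) is less negative because exchange with the
                                        # ocean and biosphere dilutes the signal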
Hank Roberts says
Another good source on where CO2 is coming from here, dated early 2005.
http://www.publish.csiro.au/?act=view_file&file_id=EN05013.pdf
Recent Record Growth in Atmospheric CO2 Levels
Regrettably, the actual data illuminating this question was collected at stations funded by the USA and Australia that are now being shut down, the authors point out.
Sally says
Re: 24
“There’s a sawtooth daily pattern to the Mauna Loa graph. Is there any difference in the gas make up between day and night? Does the isotopic makeup vary?”
Is there a daily sawtooth pattern to the Mauna Loa graph? There is certainly an annual sawtooth pattern, caused by the northern hemisphere forests kicking in with photosynthesis every spring. That is probably rather simplistic but I think it is what Charles Keeling set out to measure initially. The carbon flux is affected by all photosynthesis, terrestrial and aquatic. There would be a daily flux however, as plants respire all the time but only photosynthesize during light conditions. Again this is a simple explanation and does not take into account the processes that go on in the dark to produce sugars.
If you go here: http://cdiac.ornl.gov/trends/co2/nocm.htm
you can look at graphs from the four collection stations at Barrow, Alaska, Samoa, the south pole and Mauna Loa. You will see from the graph found here: http://cdiac.ornl.gov/trends/co2/nocm-sagr.htm
that there is far less fluctuation in Samoa, as the climate has little seasonal change at this location.
I hope this helps.
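One way to see the two signals Sally describes, a rising trend plus an annual cycle, is to fit them simultaneously; this sketch uses synthetic monthly data standing in for the real record (the trend and amplitude below are invented, so only the method is meant to carry over):

import numpy as np

months = np.arange(12 * 40)                      # 40 years of synthetic monthly values
t = months / 12.0
rng = np.random.default_rng(1)
co2 = 315.0 + 1.5 * t + 3.0 * np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(months.size)

# Least-squares fit of a linear trend plus an annual harmonic.
design = np.column_stack([np.ones_like(t), t, np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coeffs, *_ = np.linalg.lstsq(design, co2, rcond=None)
print("fitted trend (ppm/yr):", round(coeffs[1], 2))                                  # recovers ~1.5
print("fitted seasonal amplitude (ppm):", round(np.hypot(coeffs[2], coeffs[3]), 2))   # recovers ~3.0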
Donald G. says
I propose we invest massive resources into making a large array of petaflop-scale computers dedicated to the study of global warming. This way, if the simulations show that Global Warming isn’t an upcoming threat, we’ve ensured that it really is, by churning out more than enough CO2 while making those computers. ;)
John Dodds says
Re 24 30
Go to http://meteo.lcd.lu/today_01.html to see real live CO2 concentrations (among other quantities) in Luxembourg. Variations of over 50%, and as high as 500+ ppm.
I agree that the patterns are complex. Just changing the water vapor content (& rainstorms) plays havoc with the CO2 concentrations, as does the daily commute (vehicle exhaust) near the measuring station.
In general it is my understanding that the daily fluctuations and even the yearly cycle have a relatively small impact on the global warming consequences. I think there is a Gavin comment to this effect from a year or so ago somewhere in the archives.
Alastair McDonald says
Re #24: “Simple one-celled organisms in the soil are doing better than that — they push out 60 GtC/yr.”
Yes, and they are better at sequestering carbon than we are too. They absorb 60 GtC/yr as well. That is the oscillation you thought was daily but is actually yearly. We are what is making the curve rise.
English says
The text has both punctuation errors and errors of grammar. Some of these errors are not insignificant. The text of the document can be seen as ambiguous.
I would most respectfully suggest that any text be checked for errors in grammar and punctuation by a technical author prior to publication.
[Response: Bit late now, but what specifically did you find ambiguous? – gavin]
Julian Flood says
re 28: thanks for the sites — even more thinking needed. I read a recent CO2 metabolism paper for a particular species of phytoplankton and began to wonder about the chance that they are less consistent than science currently understands. If the knowledge that certain marine plants can swap between C3 and C4 is only about 5 years old, I’d rather like to see whether their different fixation routes do odd things to oxygen isotope ratios. There is a rather disturbing illustration of the degree of confidence in the science of global warming — ‘low’, ‘low’ and ‘very low’ occur too often to be reassuring.
Governments make a lot of noise about global warming — how odd that they don’t throw large amounts of money to the only people who can demonstrate, by scientific measurement, exactly what’s happening. And yet monitoring sites are being abandoned — why is that? Do governments know something or is it merely complacency?
30: thank you. There’s a daily wiggle rather than an annual sawtooth I believe, but you’re right, I was conflating the two. I’m wondering if a signal of isotopic fractionation could be teased out of the daily signal, pinning it down to either worldwide or near-ocean effects.
If the ocean surface pollution hypothesis of global warming is correct then this is the sequence: the petrochemical industry kicks into life around 1850. Surfactant run-off and oil spills begin to reduce stratocumulus cover; the whole surface of the ocean warms, reducing nutrient upflows and encouraging the phytoplankton to switch to C4 metabolism. C4 phytoplankton increase in numbers as their relative advantage over C3 plants improves. C4 plankton sequester more C13 than expected by conventional models, expected levels of C13 fall, producing a false anthropogenic signal.
Surfactant and oil pollution increase CO2 levels by reducing biological pull-down and mechanical mixing, and by reducing solubility as the surface warms. (What happens to the albedo of a polluted surface? I don’t know.)
Deep water warms slightly. Methanophages begin to emit more light isotope CO2 as the clathrate deposits become more accessible. CO2 warming increases. The ocean surface becomes even more stable and the cycle continues. Eventually the clathrates boil off.
Result: large temperature spike and collapse of civilisation.
I wonder if there’s anything in the fossil record which would fit this scenario, triggered perhaps by the breaching of a large, light oil reservoir by coastal erosion? I wonder what signals a scenario like that would leave for us to interpret?
(Well, I like it better than a convenient volcano boiling off a carbon deposit!)
JF
Peter Erwin says
Re # 27: “This does bring up a question I’ve sometimes wondered about, but have never seen numbers on: suppose we could ignore the political obstacles, and make a serious attempt at re-vegetating areas – the western US, North Africa, the Mideast, Australia – that have been desertified by human activity.”
Most of those areas have not been “desertified by human activity” — they’re naturally deserts, part of the global desert belts at low latitudes. Vegetating them on a large scale would probably require significant energy expenditures (and accompanying CO2 release!) for fertilization, water transport, and building the necessary infrastructure. (This is not to say that it’s a bad idea to reverse desertification in local areas where it’s recently occurred, of course.)
Alastair McDonald says
Re #35
Julian, there is a major event in the fossil record called the PETM (Paleocene-Eocene Thermal Maximum.) See http://www.es.ucsc.edu/~silab/biocomplex/lptm_background.htm
John Dodds says
From the various Hansen et al and other papers, the computer calculations of global warming show that adding CO2 results in the added CO2 molecules delaying (not trapping!) the transport of energy out of the earth system, and in subsequent global warming of the air near the ground and cooling near the top of the atmosphere (TOA) (see cartoon figure 2e from Hansen et al 2005). This is the Greenhouse Effect. The global warming effect results in a positive energy imbalance or disequilibrium at the TOA, where more solar energy is coming in than is being sent out because the TOA temperature is colder than the equilibrium. This apparently lasts for years if not forever per the GCMs (Hansen, Nazarenko et al 2005).
However, the Earth goes through a daily rotating solar cycle where at night we have a negative imbalance with more energy out than in; then, as the sun rises, we warm up, pass through the equilibrium point to a positive imbalance, and stay that way until the sun again starts to set, the energy-in again passes through the equilibrium point, and we return to a negative imbalance. The same process happens with the yearly/seasonal temperature cycles and also with the approximately 11-year solar energy cycles which change the energy-in amounts. At all times, the amount of energy being transported out of the Earth system is calculated by the Stefan-Boltzmann Law (SBL) based ONLY on the temperature of the Earth system, be it at the ground or at the TOA. The SBL, which says that the energy transported from an object is proportional to the temperature raised to the 4th power (i.e. hotter air rises faster: convection; or hotter objects radiate more energy per unit time: radiation transport), is forever trying to re-establish the earth to its equilibrium energy-in equals energy-out conditions, by either warming or cooling it, and it is perfectly successful exactly twice each and every day. This contradicts the conclusions of the Global Computer Models (GCMs), which require a permanent or multi-decade disequilibrium or energy imbalance to create the global warming.
So the question is: WHY doesn’t the Stefan-Boltzmann Law feedback also automatically compensate for the greenhouse effect? Why doesn’t the feedback from the SBL automatically return the air to the equilibrium (solar) energy-in conditions imposed by the daily solar cycles? Why isn’t the delay in energy transport from adding GHGs compensated for by the speedup in radiation transport at the speed of light caused by the increase in temperature and the SBL response? The SBL has no way of differentiating between a GHG-caused or solar-caused warming. According to the GCMs, we already know that the CO2-caused global warming results in hotter air rising faster (convection) at ground level, which Gavin says is included in the GCMs as a part of the water vapor feedback effect (see “How not to attribute climate change”, comment #182, 12:57 pm; see also #126 et seq. and #208). Why doesn’t the hotter air at ground level also cause more energy to be radiated out faster per the SBL, so that the greenhouse-caused warming energy is returned to the TOA where the extra energy will cancel out the CO2-caused imbalance at the TOA, i.e. to return the Earth to its equilibrium conditions, which it actually does – twice a day?
The conventional wisdom that greenhouse gases cause global warming is based on the identification of the greenhouse effect (GHE) in the Svante Arrhenius 1896 paper (see Wikipedia), “On the Influence of Carbonic Acid in the Air Upon the Temperature of the Ground”. However, my simple reading of the paper shows that Arrhenius calculated the warming effect of CO2 energy absorption in academic isolation (CO2 absorbs energy) without considering the real-world effects of the Stefan-Boltzmann Law feedback.
Also, a review of the Global Computer Model (GCM) results shows that the amount of global warming calculated varies depending upon the duration of the period modeled, e.g. 1950-1997, 1880-2000, 1750-2000, or from the bottom of the ice age and the bottom of the CO2 level (20,000 years ago) until 2000, etc. This is to be expected, except that the computer model also requires that the temperature imbalance at the TOA be equal to and opposite the warming to maintain conservation of energy. So just what is the temperature at the TOA in the year 2000? It HAS to be a single value, not the infinite number of options that can result from the GCMs. Again, the GCMs are calculating results that are not possible. BUT if the SBL feedback returns the TOA to the equilibrium conditions, then we have a single value, but then there is no energy imbalance. However, there is then also no global warming due to the GHE!
Sorry Gavin, but I feel that the GCMs are giving such fundamentally incorrect/inconsistent results that they can NOT be valid. They seem to be missing some of the SBL feedback. If you can explain these discrepancies please do.
Which brings us back to what you previously stated 15 months ago,
“… [Response: … You refuse to relax your (incorrect) assumption that the flux from the surface is the same as the flux from the top of the atmosphere, which is equivalent to assuming that there is no GHE at all. So you assume the result you wish to prove. … -gavin]
Comment by John Dodds – 30 Oct 2005 @ 8:04 pm ” (click on the time stamp to go to Gavin Schmidt’s RealClimate.org location)
To which I now respond, Yes Gavin, I intuitively assumed constant equilibrium flux, but it is not my assumption, it is the constant flux equilibrium IMPOSED by the Stefan-Boltzmann Law, and Mother Nature’s Laws of Physics, and seen on a daily basis. The SBL feedback (which is dependent ONLY on the temperature and not the CO2 concentration) cancels out the GHE temperature effect, basically eliminating it as it occurs. No GHE means no GHG/CO2 induced warming, as you said. The ever increasing GHG Forcing Curve (Planetary energy imbalance? May 2005 & IPCC) is effectively a flatline. Any research based on the increasing GHG forcing and the GCMs is invalid. The estimates of the temperature impact of doubling the CO2 levels in the next century are just plain wrong.
This does not mean that global warming does not exist or that adding CO2 does not cause problems such as ocean acidification. The evidence in melting polar ice caps, glaciers and measured temperatures etc. is too obvious. However, the increases must come from an external increase in energy-in (or maybe, if the solar increases do not explain it, some of the decreasing Earth magnetic field flux energy is leaking into the ground/air??? Are we seeing increases in the northern lights effects?). The solar increases documented by the IPCC approximately account for the observed temperature increase from 1700-2000. For example, the solar insolation has increased by about 4 W/m2, to 1364, since ~1700. This increase is 4/1364 or 0.3%, which is about the observed increase: 0.3% of 288 is 0.84 degrees absolute.
This “CO2 does NOT cause warming” conclusion does however mean that any efforts to control Carbon/GHG emissions (Kyoto Treaty, carbon taxes, carbon emissions trading, carbon sequestration research, lobbying, lawsuits etc) are totally worthless, a waste of resources, and can be eliminated since they will have no effect on global warming other than increasing taxes and getting me hot under the collar. :) However, for solving the problem of CO2 caused global warming and “saving the world” from the British/Stern estimate of 1 to 5% of annual global GNP, Companies and Governments are encouraged to send a fraction of their subsequent cost/tax savings to the John Dodds Foundation USA 94123-3404. :)
Sorry for the length.
Julian Flood says
Re 37:
Another triumph for the ocean surface pollution theory of global warming!
JF
(we can correct the offset of warming and CO2 rises as well.)
James says
Re #38: Ever park your car in the sun on a cold day? It gets quite a bit warmer inside, doesn’t it? Now if I understand the physical model you’re using in your description, it should instead be in equilibrium with the outside air. I’d take that as a strong hint that your physics is wrong :-)
And re #36: “Most of those areas have not been “desertified by human activity” — they’re naturally deserts, part of the global desert belts at low latitudes.”
While such areas are indeed natural drylands, I think there’s quite a bit of evidence showing that much of the area has in fact been turned to desert as a result of human activity. See for instance the early explorers’ descriptions of the American west compared to conditions today. There are prairie/steppe plant communities that, once established, do quite well with little rainfall. Destroy that community by over-grazing or farming, though, and it does not readily reestablish itself.
John Dodds says
Re #40
A car has a physical barrier, the roof, that prevents, or actually just slows down, the re-establishment of equilibrium to the rate at which energy can transfer through the roof. Just as a glass greenhouse has a barrier that prevents the re-establishment of the equilibrium; BUT go into a glass greenhouse at 4 AM and it IS as cold as the outside: the equilibrium is re-established. The atmosphere has NO barrier to the flow of energy; the question is just how fast the equilibrium is re-established, and radiation of energy is as fast as the speed of light.
The question you should be looking at is: if there is a GHG-caused energy imbalance per the GCMs, and more energy than what the GHGs generate is continuously added, then why haven’t we overheated already? CO2 has been increasing for 20,000 years according to Hansen (and I agree), so the so-called imbalance should have been there for 20,000 years. Besides, if there is an imbalance, just why would it result in that extra energy going into the ocean (per Hansen) instead of just going into the air and eliminating the imbalance like it does on a daily basis? Hansen’s model does not make sense.
Hank Roberts says
Mr. Dodds, if you understand how CO2 absorbs and re-radiates energy, you’ll understand how a CO2 laser works as well. This may help. It’s not marketing, it’s working hardware.
http://www.laserk.com/media/whiteA.gif
http://www.laserk.com/newsletters/whiteTHE.html
http://www.laserk.com/newsletters/whiteCO.html
Neal J. King says
John Dodds,
The Stefan-Boltzmann law does NOT apply to the Earth-atmospheric system. The SBL is a result derived from Planck’s radiation law, under the assumption of frequency-independent emissivity. This is flagrantly not true for a system with absorption lines.
English says
Re #41
I think that you make an interesting point.
If I can paraphrase, you are saying that the earth may heat up to above-normal temperatures, because of GHGs, during daytime. Each part of the earth will have lost all of the excess energy it may have gained (due to GHGs) by dawn.
I think that this may be true, but it would still result in an increase in mean temperature, as the temperature has risen for some time (during the day) above what it would have been if there were no GHGs, and at night the temperature will only return to the usual ambient (and won’t go below it).
So, by your theory, we would get hotter days and no change in the dawn temperatures. I have no idea if this is the case or not.
I believe that there are more uncertainties in climate science than this (and I’m not the only one).
http://www.pco-bcp.gc.ca/default.asp?Language=E&page=publications&doc=precaution/precaution_e.htm
“There is international consensus that human activities are increasing the amounts of greenhouse gases in the atmosphere and that these increases are contributing to changes in the earth’s climate. However, there is scientific uncertainty regarding the sensitivity of climate to these increases, particularly the timing and regional character of climate change.”
Eli Rabett says
Having engaged with Roger Pielke Sr. and cohorts on this issue, I begin to think that their definition of boundary value problem and mine is different. They are talking about the extreme limits of the system, whereas I (and I think most of the folk writing here) think of the hypersurface on which the trajectories evolve as being the boundary. Perhaps it is because they are locked to a three dimensional picture.
BTW this discussion has a very high S/N ratio.
James says
Re #41: “The atmosphere has NO barrier to the flow of energy…”
Huh? How did you reach that conclusion? Going back to basic physics, there are three modes of energy transport: conduction, convection, and radiation, no? Conduction is not significant in gasses, so we’re left with two. Incoming solar radiation in the visible spectrum heats the ground, and can be transported away by convection or radiation.
Convection pretty much stops at the stratosphere, so that leaves radiation. That radiation takes place in the infrared, to which CO2 is not transparent, so the outgoing radiation is stopped. The atmosphere _is_ the physical barrier that stops re-radiation.
“….and radiation of energy is as fast as the speed of light.”
Humm… So you’re saying that a hot object placed in a vacuum should cool instantly? I don’t think it works that way :-)
Looking at the earth-atmosphere system, an infrared photon leaving the ground does travel at the speed of light – until it hits something, say a CO2 molecule. Then the energy it carried can either stay with the CO2, making it hotter, or it can re-radiate. If it re-radiates, it can go either up or down. The net effect is that the system stays warm longer. Left alone, it would eventually cool back down, but the sun comes up before that can happen, so the next day the system gets a little warmer still, and that keeps happening until a new, higher, equilibrium temperature is reached.
But then humans keep adding more CO2, which shifts the equilibrium still higher…
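A back-of-envelope version of the equilibrium shift James describes, using the Stefan-Boltzmann law in the standard textbook one-layer grey-atmosphere way (round numbers; the emissivity values are illustrative knobs, not measured quantities, and this is nothing like a real GCM):

# One-layer grey-atmosphere energy balance, textbook style.
sigma = 5.67e-8                           # Stefan-Boltzmann constant, W m^-2 K^-4
S0, albedo = 1364.0, 0.3
absorbed = S0 * (1.0 - albedo) / 4.0      # ~239 W m^-2 averaged over the sphere

def surface_temperature(eps):
    # A single layer of infrared emissivity eps emits eps*sigma*Ta^4 up and down,
    # and its balance gives Ta^4 = Ts^4 / 2, hence sigma*Ts^4*(1 - eps/2) = absorbed.
    return (absorbed / (sigma * (1.0 - eps / 2.0))) ** 0.25

print(round(surface_temperature(0.0), 1))   # ~255 K: no infrared absorption at all
print(round(surface_temperature(0.8), 1))   # ~290 K: close to the observed ~288 K surface
print(round(surface_temperature(0.9), 1))   # ~296 K: warmer still as infrared opacity increases

Raising the infrared opacity shifts the equilibrium surface temperature upward, which is the one-line version of the mechanism being discussed.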
L. David Cooke says
RE: #46
Hey James;
The statement you made regarding “The net effect is that the system stays warm longer”: do you have an indication of this? Have you a clear data table showing that the range between the daily high and low temperatures has diminished over the last 35 years under similar conditions? How about the typical surface temperature decay curve on a clear night, under similar input potentials and similar weather conditions?
That the CO2 increases, hence creating a greater opportunity for CO2 vibrations to maintain the energy, or to delay the terrestrial release of energy into space, should be detectable, don’t you think? I have not seen any indications of this; have you?
Dave Cooke
Hank Roberts says
Mr. Cooke, when you use phrases like “opportunity for CO2 vibrations to maintain the energy” you are making it hard for yourself. You won’t find that with Google — let’s check:
search – +opportunity +”CO2 vibrations” +”maintain the energy” – did not match any documents.
Yep, you’d find no evidence of it.
But look using the terms you find in the physics books, eh?
This is well documented as one of the basic predictions — greater warming in the nighttime, because less heat is being radiated off the planet by an optically clear sky.
Here, for example, just to give you a start looking into this:
http://blog.sciam.com/index.php?title=are_you_a_global_warming_skeptic_part_iv&more=1&c=1&tb=1&pb=1
Hank Roberts says
P.S., here’s an instrument; anyone have access to the data?
http://www.journals.uchicago.edu/PASP/journal/issues/v111n760/990028/990028.html
“ABSTRACT Two sky brightness monitors – one for the near-infrared and one for the mid-infrared – have been developed for site survey work in Antarctica. The instruments, which we refer to as the NISM (Near-Infrared Sky Monitor) and the MISM (Mid-Infrared Sky Monitor), are part of a suite of instruments being deployed in the Automated Astrophysical Site-Testing Observatory (AASTO). The chief design constraints include reliable, autonomous operation, low power consumption, and of course the ability to operate under conditions of extreme cold. The instruments are currently operational at the Amundsen-Scott South Pole Station, prior to deployment at remote, unattended sites on the high antarctic plateau.”
Eli Rabett says
James and L. David: The CO2 molecule absorbs a photon. It gains vibrational energy. This energy is rapidly (~ a few tens of collisions, within nanoseconds) collisionally transferred to other atmospheric molecules, mostly N2 and O2, as translational energy (V–>T transfer in the argot of the field). This heats the atmosphere.
So how do CO2 (and H2O) molecules gain energy to radiate? Well, the unexcited CO2 molecules continually collide with other molecules in the atmosphere and a small percentage of the collisions leave the CO2 vibrationally excited (T–>V transfer). The average fraction of CO2 molecules that are vibrationally excited in the bend is something like 2*exp(-1000(K)/T(K))
Eye-crossing stuff: 1000 K multiplied by Boltzmann’s constant is roughly the energy in joules of the CO2 vibrational bend; the bend is twofold degenerate, which accounts for the 2.
The bottom line is that about 6% of the CO2 in the atmosphere is vibrationally excited at any time and can radiate; almost none of the molecules that absorb the IR emit directly.
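Plugging numbers into Eli’s expression (a straight evaluation of the formula quoted above; 288 K and 220 K are just representative temperatures):

import math

def excited_fraction(temp_k):
    # Approximate fraction of CO2 with the (doubly degenerate) bending mode excited.
    return 2.0 * math.exp(-1000.0 / temp_k)

print(round(excited_fraction(288.0), 3))    # ~0.062, i.e. about 6% near the surface
print(round(excited_fraction(220.0), 3))    # ~0.021, smaller in the colder upper troposphere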