Guest Commentary by Jim Bouldin (UC Davis)
How much additional carbon dioxide will be released to, or removed from, the atmosphere by the oceans and the biosphere in response to global warming over the next century? That is an important question, and David Frank and his Swiss coworkers at WSL have just published an interesting new approach to answering it. They empirically estimate the distribution of gamma, the temperature-induced carbon dioxide feedback to the climate system, given the current state of knowledge of reconstructed temperature and carbon dioxide concentration over the last millennium. It is a macro-scale approach to constraining this parameter; it does not attempt to refine our knowledge of carbon dioxide flux pathways, rates or mechanisms. Regardless of general approach or specific results, I like studies like this. They bring together results from actually or potentially disparate data inputs and methods, which can be hard to keep track of, into a systematic framework. By organizing, they help to clarify, and for that there is much to be said.
Gamma has units of ppmv per ºC. It is thus the inverse of climate sensitivity, where CO2 is the forcing and T is the response. Carbon dioxide can, of course, act as both a forcing and a (relatively slow) feedback; slow at least when compared to faster feedbacks like water vapor and cloud changes. Estimates of the traditional climate sensitivity, e.g. Charney et al. (1979), are thus not affected by the study. Estimates of more broadly defined sensitivities that include slower feedbacks (e.g. Lunt et al. (2010), Pagani et al. (2010)) could be, however.
Existing estimates of gamma come primarily from analyses of coupled climate-carbon cycle (C4) models (analyzed in Friedlingstein et al., 2006), and a small number of empirical studies. The latter are based on a limited set of assumptions regarding historic temperatures and appropriate methods, while the models display a wide range of sensitivities depending on assumptions inherent to each. Values of gamma are typically positive in these studies (i.e. increased T => increased CO2).
To estimate gamma, the authors use an experimental (“ensemble”) calibration approach, analyzing the time courses of reconstructed Northern Hemisphere T estimates and ice core CO2 levels from 1050 to 1800 AD. This period represents a time when both high resolution T and CO2 estimates exist, and in which the confounding effects of other possible causes of CO2 fluxes are minimized, especially the massive anthropogenic input since 1800. That input could completely swamp the temperature signal; the authors’ choice is thus designed to maximize the likelihood of detecting the effect of T on CO2. The T estimates are taken from the recalibration of nine proxy-based studies from the last decade, and the CO2 from three Antarctic ice cores. Northern Hemisphere T estimates are used because their proxy sample sizes (largely dendro-based) are far higher than in the Southern Hemisphere. However, the results are considered globally applicable, due to the very strong correlation between hemispheric and global T values in the instrumental record (their Figure S3, r = 0.96, HadCRUT basis), and likewise between ice core and global mean atmospheric CO2.
The authors systematically varied both the proxy T data sources and the methodological variables that influence gamma, and then examined the distribution of the nearly 230,000 resulting values. The varying data sources include the nine T reconstructions (Fig 1), while the varying methods include things like the statistical smoothing method, and the time intervals used both to calibrate the proxy T record against the instrumental record and to estimate gamma.
Figure 1. The nine temperature reconstructions (a), and 3 ice core CO2 records (b), used in the study.
Some other variables were fixed, most notably the calibration method relating the proxy and instrumental temperatures (via equalization of the mean and variance for each, over the chosen calibration interval). The authors note that this approach is not only among the mathematically simplest, but also among the best at retaining the full variance (Lee et al, 2008), and hence the amplitude, of the historic T record. This is important, given the inherent uncertainty in obtaining a T signal, even with the above-mentioned considerations regarding the analysis period chosen. They chose the time lag, ranging up to +/- 80 years, which maximized the correlation between T and CO2. This was to account for the inherent uncertainty in the time scale, and even the direction of causation, of the various physical processes involved. They also estimated the results that would be produced from 10 C4 models analyzed by Friedlingstein (2006), over the same range of temperatures (but shorter time periods).
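A minimal sketch of this kind of ensemble estimation may make the approach concrete. Synthetic series stand in for the real reconstructions and ice core record; every number and methodological choice below is illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1050, 1801)

# Synthetic stand-ins for nine T reconstructions and one ice-core CO2 series.
true_gamma = 8.0  # ppm per deg C, illustrative value only
t_base = 0.2 * np.sin(2 * np.pi * years / 200.0)  # a made-up "climate" signal
recons = [t_base + rng.normal(0, 0.1, years.size) for _ in range(9)]
co2 = 280 + true_gamma * t_base + rng.normal(0, 1.0, years.size)

def smooth(x, w):
    """Simple moving average, one of many possible smoothing choices."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

# Vary the data source and two methodological choices, collect gamma estimates.
gammas = []
for T in recons:
    for w in (10, 30, 50):                      # smoothing window (years)
        for lo, hi in ((0, 480), (240, 700)):   # analysis sub-interval (indices)
            Ts, Cs = smooth(T, w)[lo:hi], smooth(co2, w)[lo:hi]
            slope = np.polyfit(Ts, Cs, 1)[0]    # ppm per deg C
            gammas.append(slope)

gammas = np.array(gammas)
print(f"n = {gammas.size}, median gamma = {np.median(gammas):.1f} ppm/degC")
```

The real study crosses many more choices (nine reconstructions, multiple calibration periods, lags, smoothers) to reach its ~230,000 ensemble members; the point here is only the structure: one regression slope per combination, and a distribution of gamma at the end.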
So what did they find?
In the highlighted result of the work, the authors estimate the mean and median of gamma to be 10.2 and 7.7 ppm/ºC respectively, but, as indicated by the difference between the two, with a long tail to the right (Fig. 2). The previous empirical estimates, by contrast, come in much higher–about 40 ppm/degree. The choice of the proxy reconstruction used, and the target time period analyzed, had the largest effect on the estimates. The estimates from the ten C4 models were higher on average; it is about twice as likely that the empirical estimates fall in the model estimates’ lower quartile as in the upper. Still, six of the ten models evaluated produced results very close to the empirical estimates, and the models’ range of estimates does not exclude those from the empirical methods.
Figure 2. Distribution of gamma. Red values are from 1050-1550, blue from 1550-1800.
Are these results cause for optimism regarding the future? Well, the problem with knowing the future, to flip the famous Niels Bohr quote, is that it involves prediction.
The question is hard to answer. Empirically oriented studies are inherently limited in applicability to the range of conditions they evaluate. As most of the source reconstructions used in the study show, there is no time period between 1050 and 1800, including the medieval times, which equals the global temperature state we are now in; most of it is not even close. We are in a no-analogue state with respect to mechanistic, global-scale understanding of the inter-relationship of the carbon cycle and temperature, at least for the last two or three million years. And no-analogue states are generally not a real comfortable place to be, either scientifically or societally.
Still, based on these low estimates of gamma, the authors suggest that surprises over the next century may be unlikely. The estimates are supported by the fact that more than half of the C4-based (model) results were quite close (within a couple of ppm) to the median values obtained from the empirical analysis, although the authors clearly state that the shorter time periods the models were originally run over make apples-to-apples comparisons with the empirical results tenuous. If so, this result may be evidence that the carbon cycle components of these models have, individually or collectively, captured the essential physics and biology needed to make them useful for predictions into the multi-decadal future. Also, some pre-1800, temperature-independent CO2 fluxes could have contributed to the observed CO2 variation in the ice cores, which would tend to exaggerate the empirically estimated values. The authors did attempt to control for the effects of land use change, but noted that modeled land use estimates going back 1000 years are inherently uncertain. Choosing the time lag that maximizes the T-to-CO2 correlation could also bias the estimates high.
On the other hand, arguments could also be made that the estimates are low. Figure 2 shows that the authors also performed their empirical analyses within two sub-intervals (1050-1550, and 1550-1800). Not only did the mean and variance differ significantly between the two (mean/s.d. of 4.3/3.5 versus 16.1/12.5 respectively), but the R squared values of the many regressions were generally much higher in the late period than in the early (their Figure S6). Given that the proxy sample size for all temperature reconstructions generally drops fairly drastically over the past millennium, especially before their 1550 dividing line, it seems at least reasonably plausible that the estimates from the later interval are more realistic. The long tail–the possibility of much higher values of gamma–also comes mainly from the later time interval, so values of gamma from say 20 to 60 ppm/ºC (e.g. Cox and Jones, 2008) certainly cannot be excluded.
But this wrangling over likely values may well be somewhat moot, given the real world situation. Even if the mean estimates as high as say 20 ppm/ºC are more realistic, this feedback rate still does not compare to the rate of increase in CO2 resulting from fossil fuel burning, which at recent rates would exceed that amount in between one and two decades.
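The comparison above is easy to put in numbers. Assuming a recent atmospheric CO2 growth rate of roughly 1 to 2 ppm per year (my figure, not one stated in the post):

```python
# How long fossil-fuel emissions take to match the CO2 released by a
# high-end feedback of ~20 ppm per degC of warming.
gamma_high = 20.0               # ppm per degC, high-end estimate discussed
for rate in (1.0, 1.5, 2.0):    # ppm per year, assumed recent growth rates
    print(f"{rate} ppm/yr -> {gamma_high / rate:.0f} years")
```

At 1–2 ppm per year, the direct emissions match that high-end feedback within about one to two decades, which is the point being made.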
I found some other results of this study interesting. One such involved the analysis of time lags. The authors found that in 98.5% of their regressions, CO2 lagged temperature. There will undoubtedly be those who interpret this as evidence that CO2 cannot be a driver of temperature, a common misinterpretation of the ice core record. Rather, these results from the past millennium support the usual interpretation of the ice core record over the later Pleistocene, in which CO2 acts as a feedback to temperature changes initiated by orbital forcings (see e.g. the recent paper by Ganopolski and Roche (2009)).
The study also points up the need, once again, to further constrain the carbon cycle budget. The fact that a pre-1800 time period had to be used to try to detect a signal indicates that this type of analysis is not likely to be sensitive enough to figure out how, or even if, gamma is changing in the future. The only way around that problem is via tighter constraints on the various pools and fluxes of the carbon cycle, especially those related to the terrestrial component. There is much work to be done there.
References
Charney, J.G., et al. Carbon Dioxide and Climate: A Scientific Assessment. National Academy of Sciences, Washington, DC (1979).
Cox, P. & Jones, C. Climate change – illuminating the modern dance of climate and CO2. Science 321, 1642-1644 (2008).
Frank, D. C. et al. Ensemble reconstruction constraints on the global carbon cycle sensitivity to climate. Nature 463, 527-530 (2010).
Friedlingstein, P. et al. Climate-carbon cycle feedback analysis: results from the (CMIP)-M-4 model intercomparison. J. Clim. 19, 3337-3353 (2006).
Ganopolski, A, and D. M. Roche, On the nature of lead-lag relationships during glacial-interglacial climate transitions. Quaternary Science Reviews, 28, 3361-3378 (2009).
Lee, T., Zwiers, F. & Tsao, M. Evaluation of proxy-based millennial reconstruction methods. Clim. Dyn. 31, 263-281 (2008).
Lunt, D.J., A.M. Haywood, G.A. Schmidt, U. Salzmann, P.J. Valdes, and H.J. Dowsett. Earth system sensitivity inferred from Pliocene modeling and data. Nature Geosci., 3, 60-64 (2010).
Pagani, M., Z. Liu, J. LaRiviere, and A.C. Ravelo. High Earth-system climate sensitivity determined from Pliocene carbon dioxide concentrations. Nature Geosci., 3, 27-30 (2010).
Richard Steckis says
“Response: But why? When there is an obvious nonsense in his derivation and the basic confusion of assuming a current observation must be an immutable fact, why can’t you – as a proclaimed scientific thinker – see past your desire for him to be correct and just acknowledge that this isn’t the magic bullet people have claimed?”
I do not have an immutable desire for him to be correct. But the fact remains that there is not a single peer reviewed paper that falsifies his theory. In fact, the only half intelligent attempt to challenge his theory is from Nick Stokes.
Therefore, in the absence of falsification, his theory must stand as valid.
It is up to those that find fault with it, to falsify his theory.
[Response: Strange. In the case of Schwartz and Chylek, even peer-reviewed critiques were not enough. Consistency fail? But look instead at whether people actually build on this stuff. How many papers have used this info? Or predicted something? None will, because this is garbage. Really, you don’t need to be an expert to work this stuff out. Use your own intellect – try it, it can be fun. – gavin]
Jim D says
Re: 299. Dave, the 6% is not RH, but saturated vapor pressure or number of water molecules per air molecule, which is currently averaging about 4000 ppm. This is what matters for radiative effect, not RH. Also, I am not sure why you mention sea-level. The sea-level change is an exchange with ice mass not vapor mass. Hope this clears it up a bit. RH only relates the vapor mass to its saturated value which is the maximum it can be for its temperature.
John E. Pearson says
301: Steckis continues nonsensically
I take it you are not a scientist. If you were, you would understand that nobody bothers to shoot down that paper because it is a waste of time. Progress is made by pushing forward, not looking backwards at irrelevant stuff. Citation counts are important for scientists. Works that aren’t cited at all, like the Miskolczi paper, are not cited because they haven’t said anything that anyone found worth discussing. The vast majority of scientific publications in all fields share the same fate. If you look at a histogram of the number of papers that have a given number of citations, you’ll find a huge spike at 0 and 1 citations which drops off very rapidly as the number of citations goes up. It is utterly ridiculous to assume that all the uncited/poorly cited papers are correct/interesting/relevant/etc. You’re arguing that the less attention a paper draws, the more important it is.
Anand says
Jim Bouldin:
A quick question –
With respect to this paper, do you believe that the uncertainties that were talked about – permafrost, methane, vegetation, insects etc – are subsumed under the single numerical value of gamma?
In other words, let’s say we look past the spread-eagled error bars of the temperature reconstruction, and the same being the case with the CO2 concentration reconstruction for the past millennium. If we claim CO2 to be the main driver of climatic temperature, would you agree that all the aforesaid minor influences will be included in either T – the temperature sensitivity to CO2 – or in gamma – the CO2 sensitivity to temperature?
Thanks
Anand
[Response: The goal of course, is to estimate the T effect alone, and therefore exclude any non-temperature related CO2 sources. That’s why they chose the time period they did, assuming these other things to be of minor importance–which I think is fairly reasonable. That approach is, I’d guess, difficult to impossible right now because the anthro CO2 source is so dominant. So it’s going to require better estimates of the different changes in natural C fluxes than we’ve had till now to know where all the atmospheric CO2 is coming from.–jim]
Marco says
@Richard Steckis:
I guess you therefore also say that Gerlich & Tscheuschner are correct? Their paper has not been rebutted in the peer reviewed literature either!
One problem, though: Miskolczi and G&T cannot both be right…
simon abingdon says
#278 Completely Fed Up “Did you know that RH is the amount of vapour held in water compared to the maximum amount of water that such vapour could hold?” Er, say again.
Ray Ladbury says
Stickis,
OK, a crank publishes an obviously flawed paper in an obscure journal that is outside the expertise of that journal. It doesn’t get cited in subsequent work. It lies there like a dog turd on a New York sidewalk. Why, in the normal process of scientific inquiry, would anyone bother to waste time and effort to publish a paper refuting it? This ain’t the sort of thing that’s going to get you into Nature.
In any case, I would contend that every peer-reviewed paper that uses the models to successfully predict some aspect of climate is a refutation of Miskolczi.
And do you know of any peer-reviewed papers that refute Velikovsky? Do you also subscribe to Plimer’s iron-sun musings because there’s no peer-reviewed paper refuting them?
Here’s the thing, Steckis. Although you see this “great debate” between “skeptics” and scientists as the main event, it is a sideshow. The main event for science is advancing understanding of Earth’s climate, and Ferenc Miskolczi isn’t even inside the big tent. Hell, he even missed the circus train and is back in the last town.
Ray Ladbury says
Steckis says, “Human nature is not inherently evil. Stop being an amateur philosopher.”
“Never attribute to evil that which can be explained by stupidity.”–Napoleon
Richard Steckis says
305
Marco says:
15 February 2010 at 1:26 AM
“@Richard Steckis:
I guess you therefore also say that Gerlich & Tscheuschner are correct? Their paper has not been rebutted in the peer reviewed literature either!”
I understand that it has been by none other than the Rabbett. Am I wrong?
Marco says
@Richard Steckis: yes, you are wrong. It has not been rebutted in the peer reviewed literature. Several people (including Arthur Smith) have pointed to the grave errors, but that’s it.
Ray Ladbury says
Anand asks: “With respect to this paper, do you believe that the uncertainties that were talked about – permafrost, methane, vegetation, insects etc – are subsumed under the single numerical value of gamma?”
Allow me to test my comprehension: I’d say they show that gamma does not have a single numerical value, but is likely a function of temperature. Am I far off the mark, Jim?
[Response: You got it Ray. I seem to have had a hard time making my points on this whole topic, the main point being that gamma can certainly vary as the carbon sources and temperature change. Predicting such changes in the future is tough, because we have so much uncertainty in anthro emissions, which drives T, which in turn drives C feedbacks. I was trying to walk the fine line between “yes there’s definitely the possibility for nasty surprises because of uncertainties” and “but this is the best empirical estimate we have to go on for now”–Jim]
dhogaza says
Steckis:
In other words, since physicists tend to ignore those who claim to have figured out how to build a perpetual motion machine, rather than dive in, find the error, and publicly falsify the claim … the world is filled with perpetual motion machines.
L. David Cooke says
RE: 302
Hey Jim D,
Thanks, yes that did clear things up a bit. Do I understand correctly that the principle is; in an ideal gas that, 1 Deg.C would increase the internal energy by approximately 4.8 Joules? Hence, this would relate to an ability of the atmosphere to support more water vapor content due to the ensuing vapor pressure 6% additional water vapor by what, mass? I am curious though where you get the 6% from.
Going back to the earlier table I referenced ( http://hyperphysics.phy-astr.gsu.edu/hbase/kinetic/watvap.html ), it seemed to suggest that 40 Deg. C would support approximately 51g/m^3 versus the roughly 17g/m^3 or roughly an increase in the mass /m^3 of roughly 300% (Sorry for my earlier error, must be a form of dyslexia.), suggesting that a 20 Deg. C rise (or doubling of the approximate GAT), results in a 300% increase in the mass of water vapor in the air or about a 15% rise in mass per degree per cubic meter. Am I still getting this wrong, if so would you please explain?
As to radiative effect I find I am a bit confused by your conclusion. We have a slug or parcel of water vapor laden air rising to an altitude where by the Dew Point and the surrounding air are matched (100% RH est.) and then we achieve a phase change by either the adiabatic expansion resulting in the cooling and reduction of vapor pressure or the radiative emission of the latent heat in the water vapor which in-turn will warm the surrounding air forcing the surrounding air parcel to rise while the partially condensed water vapor falls or re-evaporates in the warmer air. By the additional feeding of convective air parcels from the air column base, in stable or still upper level air, the local region or air rises further till it achieves a new equilibrium. In a Sun warmed land surface with high surface humidity the parcel will be forced higher to achieve the Dew Point. In a Sun warmed land surface with low humidity the parcel will be lower when it achieves the Dew Point. If the convective air parcel is raising in non-stable air or high advection conditions what happens to the precipitation and radiative potential then?
As to the Sea Level Change, I also made an error there. I was attempting to address the issue of CO2 as a Primary driver for increasing the secondary forcing of additional water vapor in the atmosphere. (The intent is to address the potential water vapor reservoir of the ocean versus the simple recycling of the atmospheric water vapor.) If the GAT were to rise say 20 Deg. C as was seen in the Paleozoic period and there was roughly 1200ppm more CO2 in the atmosphere at the beginning versus the end period and the Sea Level dropped from the 120m above current at the end of the Devonian to the 60m above current at the end of the Carboniferous, then that would suggest a reduction of 1200ppm of CO2 may be related to a reduction of 60m of sea surface.
My earlier error was the sign, the loss of Sea Level was during the drop in CO2 suggesting that increase in 10ppm of CO2 equates to a rise (not a loss) of 1/2 m^3 of the sea level. Then if gamma equates to 1 Deg. C rise equates to 10ppm more CO2 then the logic seemed to suggest that 1 Deg. C rise in GAT should equate to 0.5 meters cu added to the sea surface height.
I was trying to relate the temperature change, CO2 change and sea surface levels with water vapor change. As there are so many degrees of freedom I fear the idea was wrong. However, I was curious if you or anyone else had considered if there may be a macro or top down approach in the historic record we could then analog to the current changes.
The desire was, since the Carbon we are currently adding to the atmosphere came mainly from the Carboniferous period, I was hoping that we could then correlate the changes then to the inverse of changes today and, coupled with the change in Sol’s radiative strength, develop a confirmation for the purpose of drawing an analog to a prior data record, at the same time insuring that natural or biologic influences were included in the hypothesis. (As biologic functions have a tendency of confounding current calculations, the hope was an analog to a past record would help define the weight that biologic influences could have.)
Any additional insight you can offer are welcome. My apologies WRT my errors. Hopefully, I communicated my thoughts better this time.
Cheers!
Dave Cooke
Lynn Vincentnathan says
#308 & “Never attribute to evil that which can be explained by stupidity.”–Napoleon
Thanks, Ray. Actually when I’m being generous, giving the benefit of the doubt, and not being so evil myself, I do attribute it to stupidity.
I guess I’m just getting very impatient. For God’s sake, it’s been 20 years since we (all of us) should have known that we have to mitigate AGW. And 15 years since even science (the first studies) reached .05 on the issue, and science is usually the laggard on these types of life-threatening issues, obsessed with avoiding the false positive, with the laypeople rushing out ahead to save the world.
“You can fool all of the people some of the time, and some of the people all of the time, but you can’t fool all of the people all of the time” — Mark Twain
“You can fool all of the people most of the time, and most of the people all of the time….” — me
Dr Nick Bone says
RE: 132 and 145
I’ve done a bit more experimenting with the sheet at http://sites.google.com/site/essandgamma and found something quite interesting: for some parameters in the “tail” of gamma, the feedback series no longer converges.
In practice, we know that it must converge, otherwise very small natural perturbations around 280 ppm (due to volcanoes, say) would already have destabilized the climate, leading either to extreme warming or (in the other direction) a collapse of CO2 and extreme cooling.
So the convergence condition actually translates into a rather neat bound on gamma, which I’ve worked out as (ESS x gamma) < 280 x ln(2). Putting in an Earth System Sensitivity of 6 degrees C forces gamma < 32, so the case of gamma = 30 ppm per degree C in my table is near the upper limit of what's possible.
Another thought: if gamma has actually been constant since the LGM (we don't know that, but it's a plausible guess), then the same reasoning applies there, only using the lower CO2 concentration. We get (ESS x gamma) < 180 x ln(2) and now ESS = 6 gives gamma < 20.8.
I'm curious if there is any published analysis along these lines already, since the mathematics is quite straightforward (see below).
Suppose the CO2 concentration is perturbed from 280 to 280(1 + epsilon), and the ultimate temperature change after all feedbacks is Delta_T. This ultimately results in a further CO2 concentration increase of gamma.Delta_T. But the climate is now stable at the new temperature, so Delta_T is a solution to the equation:
Delta_T = ln [(280 + 280.epsilon + gamma.Delta_T) / 280] x [ESS / ln(2)]
Delta_T = ln [ 1 + epsilon + gamma.Delta_T / 280] x [ESS / ln(2)]
which for small epsilon and Delta_T implies:
Delta_T = [epsilon + gamma.Delta_T / 280] x [ESS / ln(2)]
and after re-arranging:
Delta_T / epsilon = [ESS / ln 2] / [1 - gamma.ESS / (280.ln 2)]
Since the right-hand side must be positive to allow a stable solution (we can't reduce the temperature by increasing CO2) we arrive at the stated condition that gamma.ESS < 280.ln(2)
[Response: Thanks Nick, I’ll look at this more closely when I get some time.–Jim]
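The bound in the derivation above is quick to check numerically. This is just a sketch re-deriving the two numbers quoted in the comment (32 and 20.8 ppm per degree C):

```python
import math

def gamma_limit(ess, c0=280.0):
    """Upper bound on gamma (ppm per degC) for the feedback series to
    converge: gamma * ESS < c0 * ln(2), i.e. gamma < c0 * ln(2) / ESS,
    where c0 is the baseline CO2 concentration in ppm."""
    return c0 * math.log(2) / ess

print(gamma_limit(6.0))          # baseline 280 ppm: bound is ~32.3
print(gamma_limit(6.0, 180.0))   # LGM baseline 180 ppm: bound is ~20.8
```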
Jim D says
Re: 313. Dave, quite an essay. I’ll only be brief though.
The 6% per degree is really to be thought of as an exponential, not linear growth, so for each degree it would be 1.06 times what it was, so for 20 degrees you get 1.06^20 which is over 300% (like compound interest). It refers to the same curve you have, and is an approximation. It actually varies from 6-8% as temperature gets lower.
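The compounding arithmetic is a one-liner (a trivial check of the figures in the comment, using the 6% value rather than the full 6–8% range):

```python
# ~6% more saturation vapor pressure per degree of warming, compounded
# like interest rather than added linearly:
factor = 1.06
print(f"+1  degC -> x{factor:.2f}")
print(f"+20 degC -> x{factor ** 20:.2f}")   # about 3.2x the original
```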
The third paragraph is hard to answer because it considers at least three separate physical processes in a combined question. Note that condensation leads to latent heating, regardless of any radiation effects, which should not be confused here. Also, a parcel closer to saturation has to be lifted less to achieve saturation. Radiation interacts in different ways with vapor and clouds, and does so independent of their motion.
I am not going to argue about the sea-level issue, but I think the 10 ppm/degree CO2 sensitivity only applies in the range 270-280 ppm, and anything else is a dangerous extrapolation to make. As I mentioned in an earlier post, we are not in this equilibrium state any more, and whether biological processes and ocean uptake alone can return us to this same equilibrium in the long term is a big question.
L. David Cooke says
RE: 316
Hey Jim D,
Thanks for the response. I was aware of the air parcel latent heating as a non-radiant process. (Hence, the turbulence generally found at cloud tops even in a stable air column.)
Another Error (as you suggested):
As to the third paragraph saturation/dew point example, I clicked on submit just as I saw the error. (If the water vapor is rising in a stable air parcel with a dew point of say 20 Deg. C it would not have to rise as high as say a parcel with a Dew Point of say 14 Deg. C providing the air column temperatures were the same for both parcels.)
Radiation and Heat Content Flow:
Where my observations of radiation come into play refers to the earlier discussion with Bob and BPL regarding the inter-zonal flow of heat content. It appears interesting to me that I can see a water vapor band associated with a pressure wave front extend from near the 25th N to the 45th N parallel (over the N. Pacific) and when looking down from the TOA it seems like a hosepipe until it reaches the 45th N and there it looks like a fountain with both a cyclonic and anti-cyclonic outflow.
The point being that I suspect the convective parcel should be radiating and losing energy as it increases in latitude. The problem is we can not see it from the side at the same time as the GEOS package provides us the TOA water vapor and IR optical depth image.
(Using the Lidar data from CloudSat or Calipso should demonstrate that the thickness of the vapor laden band/parcel is reducing in thickness or we should detect an increasing optical depth as it increases in latitude, which I suspect is due to radiation ( as a change in specific heat). (I am curious to see if there is a significant change in water vapor in these long fetch parcels of water laden air…))
Paleozoic Vectors:
I understood the lack of support to try to make a correlation to the paleozoic period. I am mainly curious if we might be able to find some vectors there that could assist us now.
Equilibrium:
As to a return to equilibrium, as I said earlier if the energy from Sol were the same and the influence of mankind were removed, I would expect that entropy driven weather would likely reestablish the initial conditions fairly quickly. However, if Sol is progressing into the next phase of its life cycle we may not see a return to the initial conditions.
(I suspect that may be part of the reason for the recent flurry of solar experiment packages…, though the ARM.gov down-welling short wave detectors do not support any significant change in the overall radiation budget.)
As an aside, sorry about the epistles, I just have many thoughts on my mind and it seems I see connections everywhere. I am afraid I am trying to “resolve too many equations simultaneously…”. Editorial comment: Life is short and everything fits, trying to figure out how it fits is real science.
Cheers!
Dave Cooke
Hank Roberts says
> not rebutted
That doesn’t mean it’s supported, mind you. A paper is valuable to the extent it leads to further interesting work, if it does. Many are published and never mentioned again in the research journals.
That one has been cited — but in PR/opinion pieces:
http://scholar.google.com/scholar?cites=523710823726435697&hl=en&as_sdt=2000
Brian Dodge says
simon abingdon — 14 February 2010 @ 4:10 AM
“This increase in AH should mean an increase in the GHG effect of the atmospheric water vapour, leading to increased global temperature, enabling a further increase in AH, leading to unstable runaway given the effectively inexhaustible availability of more water vapour.”
“Since this effect is obviously not seen in practice, there must be something missing. What is it?”
Short answer -rain.
long answer – part 1 – convection –
A parcel of air + water vapor warmer than the surrounding atmosphere is less dense, and will buoyantly rise, cooling at a rate of 9.8 deg C/km (dry adiabatic lapse rate) by expansion as the pressure drops with altitude, until
1. it’s the same density as the surrounding air, and stops rising, or
2. it cools to the dew point, the vapor starts condensing into cloud droplets, and the cooling rate drops to ~ 5 deg C/km(moist adiabatic lapse rate – depends on actual temperature – heat of condensation replaces some of the heat lost to expansion).
The average actual change in temperature of the atmosphere is ~6.5 degrees per km, (environmental lapse rate, which varies depending on the weather). Since the heat of condensation keeps a saturated rising parcel warmer(= less dense) than the surrounding atmosphere, it keeps rising until it runs out of steam(condensible water vapor – pun intended). The cloud droplets condense/freeze/coalesce out as rain.
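The parcel ascent described in part 1 can be sketched as a toy calculation. All starting values below are made up for illustration (and the dew point is held fixed with height, a simplification a real sounding would not make):

```python
def lift_parcel(t_surface, dew_point, env_surface,
                dry=9.8, moist=5.0, env=6.5, dz=0.01, z_max=15.0):
    """Lift a parcel in dz-km steps: it cools at the dry adiabatic rate
    (deg C/km) until it reaches its dew point, then at the moist rate;
    it stops rising once it is no warmer than the surrounding air
    (environmental lapse rate `env`).  Returns (height_km, parcel_temp)."""
    z, t = 0.0, t_surface
    while z < z_max:
        rate = dry if t > dew_point else moist
        t_next = t - rate * dz
        if t_next <= env_surface - env * (z + dz):  # no longer buoyant
            return z + dz, t_next
        z, t = z + dz, t_next
    return z, t

# Case 1: a dry-ish parcel equilibrates low, before any condensation.
print(lift_parcel(t_surface=25.0, dew_point=14.0, env_surface=23.0))
# Case 2: a moist parcel saturates early; the slower moist-adiabatic
# cooling keeps it warmer than the environment, so it keeps rising
# (to this toy model's ceiling, since moisture depletion is ignored).
print(lift_parcel(t_surface=25.0, dew_point=22.0, env_surface=23.0))
```

Case 2 is exactly the "keeps rising until it runs out of steam" behavior described above; a fuller model would deplete the condensible water and rain it out.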
part 2 – mechanical lift –
When large scale circulation (Ferrel cells, jetstreams, high/low pressure anticyclones/cyclones) push colder/dryer/denser air under warm/moist air(frontal), or push warm/moist air up mountain ranges(orographic), the mechanical lift created replaces the bouyancy, but cloud formation, condensation, and precipitation remove moisture from the atmosphere.
Both processes can occur together, often causing intense thunderstorms in the US Great Plains – http://www.weatheranswer.com/public/Thunderstorm.pdf.
The turnover rate is rapid – “Once evaporated, a water molecule spends about 10 days in the air.” http://ga.water.usgs.gov/edu/watercyclesummary.html
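[Editor’s note: Brian’s part-1 parcel logic can be sketched as a toy calculation. The lapse rates below are the ones he quotes; the 30 deg C surface temperature and 24 deg C dew point are illustrative values chosen for the example, and the slow fall of the dew point with height during ascent is ignored for simplicity.]

```python
# Toy sketch of the parcel logic in part 1 of Brian Dodge's answer.
# Lapse rates are the ones quoted above; surface temperature and dew point
# are illustrative values; the dew point's slow fall with height is ignored.

DRY_LAPSE = 9.8    # deg C per km, dry adiabatic
MOIST_LAPSE = 5.0  # deg C per km, approximate moist adiabatic
ENV_LAPSE = 6.5    # deg C per km, average environmental lapse rate

def cloud_base_km(surface_temp_c, dew_point_c):
    """Altitude where the dry-ascending parcel first saturates (cloud base)."""
    return (surface_temp_c - dew_point_c) / DRY_LAPSE

def parcel_temp_c(surface_temp_c, dew_point_c, z_km):
    """Parcel temperature: dry lapse below cloud base, moist lapse above."""
    base = cloud_base_km(surface_temp_c, dew_point_c)
    if z_km <= base:
        return surface_temp_c - DRY_LAPSE * z_km
    return surface_temp_c - DRY_LAPSE * base - MOIST_LAPSE * (z_km - base)

# A warm, moist surface parcel: 30 deg C with a 24 deg C dew point.
base = cloud_base_km(30.0, 24.0)            # ~0.61 km to cloud base
parcel_5km = parcel_temp_c(30.0, 24.0, 5.0)
env_5km = 30.0 - ENV_LAPSE * 5.0            # surrounding air at 5 km

# Above cloud base the condensing parcel stays warmer (less dense) than its
# surroundings, so it keeps rising -- the engine of deep convection.
print(base, parcel_5km, env_5km)  # parcel ~2 C vs environment ~-2.5 C at 5 km
```

The point of the sketch is the crossover: below cloud base the parcel cools faster than the environment, but once condensation begins the slower moist rate keeps it buoyant.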
Barton Paul Levenson says
RS, read my lips–Miskolczi was WRONG. I’ll just go over one point in intimate detail for you. I’ll use nothing harder than algebra, okay?
Miskolczi’s theory depends on the atmosphere being subject to something called “The Virial Theorem.” This is normally applied to two bodies in orbit. One of its consequences is that the magnitude of potential energy of one of the bodies in question must be twice that of its kinetic energy.
With me so far? When the virial theorem applies, P / K = 2. If the virial theorem doesn’t apply, P / K = some other number.
I divided the Earth’s atmosphere into 20 layers of equal mass, assigning temperatures from the US Standard Atmosphere of 1976. If the mass of the atmosphere is 5.136 x 10^18 kilograms (Walker 1977, p. 20), then each layer has a mass of 2.568 x 10^17 kg. Here are the details:
Layer P (Pa) T (K) z (m)
1 2468.6 216.67 26598.2
2 7405.9 216.67 18141.8
3 12343.1 216.67 14759.2
4 17280.4 216.67 12584.7
5 22217.7 216.82 10973.4
6 27154.9 225.30 9668.7
7 32092.2 232.60 8545.4
8 37029.4 239.04 7555.2
9 41966.7 244.81 6667.2
10 46904.0 250.06 5860.4
11 51841.2 254.87 5119.9
12 56778.5 259.33 4434.6
13 61715.7 263.48 3796.0
14 66653.0 267.37 3197.7
15 71590.3 271.03 2634.2
16 76527.5 274.49 2101.5
17 81464.8 277.78 1595.9
18 86402.0 280.91 1114.6
19 91339.3 283.89 655.0
20 96276.6 286.75 215.2
For the virial theorem, the Earth’s atmosphere being “gravitationally bound” to the Earth, as Miskolczi puts it, the gravitational potential energy is:
P = G M m / r
where P is potential energy in Joules, G the Newton-Cavendish constant (6.67428 x 10^-11 m^3 kg^-1 s^-2 in the SI), and M and m the masses of the two bodies. For Earth M = 5.9736 x 10^24 kg, and I told you m above. Separation r would be the Earth’s radius plus the altitude of the layer in question. Using R = 6,371,010 meters (Lodders and Fegley 1998, p. 128), I get (in computer math notation to shorten it):
P = 6.67428e-11 * 5.9736e24 * 2.568e17 / (6371010 + z)
for each layer, and they add up to
P = 3.21 x 10^26 Joules. Check my math, maybe I got it wrong.
The mean molecular velocity for a gas is
v = (8 R T / (π MW))^1/2
With v in m/s, the universal gas constant R = 8314.472 J/(kmol·K), the temperatures T from the table above, π the circle constant, and MW the mean molecular weight (it’s about 29 AMU, i.e. 29 kg/kmol). The mean atmospheric temperature of 249.76 K yields v = 427.6 m/s and a total kinetic energy:
K = (1/2) m v^2
of K = 4.696 x 10^23 J.
The ratio P/K for Earth’s atmosphere is thus approximately 684. Not 2.
Some deniers critical of my point here say I should be using potential energy relative to the Earth’s surface. They’re absolutely wrong if the virial theorem is supposed to apply, but let’s check it out anyway. For small heights, we have
P = m g z
where g is gravity, averaging about 9.80665 m/s^2 at Earth’s surface. That gives us P = 3.682 x 10^23 J. Much closer, but what it gives us is P/K = 0.74… which is still not 2. Correct for variation of gravity with height and it diverges further.
So when Miskolczi says the atmosphere is following the Virial Theorem, he is just plain bloody red-mark-from-the-teacher WRONG–and anybody who can do simple algebra can prove it.
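[Editor’s note: Barton’s sums are easy to re-run. The Python sketch below uses his stated constants and the z and T columns copied from his table; nothing here is new data, and it reproduces both the P/K ≈ 685 result and the surface-referenced check.]

```python
import math

# Re-running Barton Paul Levenson's arithmetic, with his constants and the
# z (altitude, m) and T (temperature, K) columns copied from the table above.
G = 6.67428e-11         # Newton-Cavendish constant, m^3 kg^-1 s^-2
M_EARTH = 5.9736e24     # kg
R_EARTH = 6371010.0     # m (Lodders and Fegley 1998)
M_ATM = 5.136e18        # kg, mass of the atmosphere (Walker 1977)
M_LAYER = M_ATM / 20.0  # 2.568e17 kg per equal-mass layer
R_GAS = 8314.472        # universal gas constant, J kmol^-1 K^-1
MW = 29.0               # mean molecular weight of air, kg/kmol

z = [26598.2, 18141.8, 14759.2, 12584.7, 10973.4, 9668.7, 8545.4, 7555.2,
     6667.2, 5860.4, 5119.9, 4434.6, 3796.0, 3197.7, 2634.2, 2101.5,
     1595.9, 1114.6, 655.0, 215.2]
T = [216.67, 216.67, 216.67, 216.67, 216.82, 225.30, 232.60, 239.04,
     244.81, 250.06, 254.87, 259.33, 263.48, 267.37, 271.03, 274.49,
     277.78, 280.91, 283.89, 286.75]

# Magnitude of gravitational potential energy, P = G M m / r, summed per layer
P = sum(G * M_EARTH * M_LAYER / (R_EARTH + zi) for zi in z)

# Kinetic energy: K = (1/2) m v^2 with mean speed v = sqrt(8 R T / (pi MW)),
# so (1/2) m v^2 = (1/2) m * 8 R T / (pi MW), summed per layer
K = sum(0.5 * M_LAYER * (8.0 * R_GAS * Ti / (math.pi * MW)) for Ti in T)

# Surface-referenced potential energy, P = m g z, for comparison
P_surface = sum(M_LAYER * 9.80665 * zi for zi in z)

print(P / K)          # ~685 -- nowhere near the 2 the virial theorem requires
print(P_surface / K)  # well under 1 -- still not 2
```

Small differences from his quoted figures (684 vs ~685, 0.74 vs ~0.79) come from rounding in the mean-temperature shortcut; either way, neither ratio is anywhere near 2.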
Brian Dodge says
I’ve been wading through Miskolczi’s paper, and it appears to me that there are internal inconsistencies. He shows in figure 1 that Sg (or Su, as it’s called later in the paper), the thermal radiation from the ground, is partitioned into St, the portion transmitted into space, and Aa, the portion absorbed by greenhouse gases. On p. 9 Miskolczi states that Ed, the downward radiation from the atmosphere, and Aa are always equal. An increase in GHG will decrease St and OLR, increase Aa, and increase Ed by the same amount. On p. 29 Miskolczi says “As long as the F0+P0 flux term is constant and the system is in radiative balance with a global average radiative equilibrium source function profile, global warming looks impossible.” But the ground can’t differentiate F0+P0 from Ed – it’s assumed to be a blackbody absorber, and radiation is radiation – so the effect of increasing Ed is indistinguishable from an increase in F0+P0. Since OLR must equilibrate with downward radiation, K (nonradiative transfer from the ground to the atmosphere) must increase, warming it and increasing Eu, upward atmospheric radiation, and/or Su/St must increase; both would require that the surface temperature increase.
simon abingdon says
#319 Brian Dodge
Thanks for your trouble in responding Brian. Though I know the general physics of how weather phenomena occur (but thanks for refreshing my memory) I still can’t see how the fact of precipitation occurring somewhere/sometime answers my question; (to quote “cloud droplets condense/freeze/coalesce out as rain”, “cloud formation, condensation, and precipitation remove moisture from the atmosphere”).
Every instance of weather is basically only a local phenomenon, is it not? Why should the fact that it is raining in X, Y and Z today remove just the right amount of moisture from the atmosphere to keep it constant globally?
I was trying to visualise what the effect of a general global rise in temperature might be. Since water vapour is a GHG any change in global conditions which increases its percentage in the atmosphere should cause a rise in global temperature – because that’s what GHGs do. Now it seems to me that a general rise in global temperature should result in a measurable increase in the amount of water as vapour globally, simply because the atmosphere would then be generally warmer (this notwithstanding the fact that water vapour takes part in the condensation/evaporation cycles of weather phenomena).
So what stops there being more water vapour in the atmosphere if the world heats up? That was my question. To say “because it rains” doesn’t seem convincing enough to my limited understanding.
Completely Fed Up says
“So what stops there being more water vapour in the atmosphere if the world heats up? That was my question. To say “because it rains” doesn’t seem convincing enough to my limited understanding.”
It rains out.
Why you don’t seem to have thought this through isn’t really available to us, because we cannot psychoanalyse you over the internet.
“Now it seems to me that a general rise in global temperature should result in a measurable increase in the amount of water as vapour globally,”
Yes, now do you know why you’ve forgotten that this is “RELATIVE humidity” we’re talking about?
Because this seems to be the source of your expressed problem and the cause of this the deeper problem you have not yet expressed.
L. David Cooke says
RE: 322
Hey Simon,
You have to keep in mind one thing, not all of the natural ice has melted yet… As long as there is a seasonal heat sink, the increase in temperature from any global source will have some form of compensation. It is when you no longer have a heat sink that can match the heat source that you have the conditions you are expecting. Currently we still have about 1/2 of the heat sink necessary (Polar/Glacial Ice) to overcome the rampant change in atmospheric humidity.
Hence, we get more dramatic weather conditions (rain/drought) that reflect the differences in the zonal heat content/flow. If the heat imbalance becomes more stable, with the reduction of heat sink capacity at 273K, then the system reaches a new plateau. The atmosphere will likely begin to fill with water vapor, then, as the oceans become the next heat sink at 316K.
It is likely you are just ahead of the curve. Thinking long term with short term data may be a bit confusing, sorry it just is…
Cheers!
Dave Cooke
simon abingdon says
#323 Completely Fed Up, you’ve responded to my #322 before responding to my #306. Disrupting the chronology of contributions in a debate can be very confusing and counterproductive. Please respond to my #306 and I will then be able to consider in context your #323. Thanks.
simon abingdon says
#324 L David Cooke
David, thanks for responding to my thought experiment. You say “Currently we still have about 1/2 of the heat sink necessary (Polar/Glacial Ice) to overcome the rampant change in atmospheric humidity”.
“rampant change in atmospheric humidity”? I hadn’t read about that before. Does the climatological community generally accept that such is happening, held back only by the polar/glacial heat sink? Do tell me.
Regards, simon
Good to talk to you David. Cheers. simon
Completely Fed Up says
“325
simon abingdon says:
16 February 2010 at 2:08 PM
#323 Completely Fed Up, you’ve responded to my #322 before responding to my #306. ”
I thought you had intelligence, simon.
The human brain is quite able to maintain several independent timelines, and that is, indeed, supposed to be one reason why Homo erectus grew bigger brains: the more independent timelines, the more “what if” scenarios can be processed and the more the brain can make up for the poor physiology of the species as a hunter.
And if I were to reply and get the next free number, then you’d be answering my post #327 before you answer my post #323, thereby breaking the timeline you’re so certain is an impediment.
Answer the 323.
Is the answer “I, simon abingdon, do not know what relative humidity is”?
If so we can investigate why you’re speaking so often and with such assumed authority about a subject where this is a rather central concept.
Completely Fed Up says
PS When you ask “Does the climatological community generally accept that such is happening,”
How about starting here:
http://www.ipcc.ch
and working from there?
L. David Cooke says
Hey Simon,
Sorry, I am not in vast company with this idea. As others have recently suggested, with the limited data it is “dangerous” ground I tread. This simply reinforces that I am not a professional, so I need you to understand that the ideas I share here are at best an educated guess.
(Note, contrary to your suggestion in your last post, a current increase in humidity is unlikely. Absolute humidity may not increase until the ice based heat sinks have been exhausted. As the zonal atmospheric heat content stabilizes and exceeds the heat sink of surface ice, the absolute humidity due to increasing Sea Surface Temperatures should begin to increase, substantially. It would likely begin as a seasonal thing and then slowly increase in duration, similar to how we are seeing the NH Polar Sea Ice diminish.)
We also have to keep in mind there may be a limitation to the amount of fossil/mineral Carbon that man can reach. When coupled with the amount of Carbon in long term sequestration in the ocean, I suspect the Earth may not return to a near Paleozoic condition, at least until Sol actually moves into a mature life cycle phase, a large extraterrestrial object hits the ocean or the tectonic subduction process increases the release of ocean sediments… The point being Carbon emissions may not be the only driver…
The main question as I see it becomes one of the total heat content potential of the total fossil/mineral Carbon released by mankind’s activities. The fossil/mineral Carbon as a participant in the total insolation versus the terrestrial emission / convection / evaporation / re-radiation equation is what the experts here are trying to determine. At least you are in the right place; you would be well served to learn what they have to share.
Cheers!
Dave Cooke
Hank Roberts says
> there may be a limitation to the amount of fossil/mineral Carbon that
> man can reach. When coupled with the amount of Carbon in long term
> sequestration in the ocean
Bzzzt. Rate of change. All those previous events were occurring at natural rates of change. This one is occurring so much faster (10x? 100x?) that the biogeochemical cycling isn’t handling the removal.
When you fill your bathtub from a firehose, you are bound to overflow it even if the tub drain is working at its full normal capacity. That’s our situation.
Hank Roberts says
Oh, yeah, and
> Carbon emissions may not be the only driver…
Nobody’s ever thought carbon emission was or is the only driver.
Strawman
> The main question as I see it becomes one of the total heat content
> potential of the total fossil/mineral Carbon
Nope, heat content is not the question — another common misunderstanding, that — because CO2 doesn’t simply hold heat and get warm, it transfers heat to the rest of the atmosphere, which transfers heat to the oceans.
Jim D says
Re: 322. Simon, at the risk of starting an infinite loop, I refer you to my #270 to answer your last two paragraphs. You have it right that there is a water vapor feedback, but the feedback is not of sufficient strength to run away. It converges.
L. David Cooke says
RE: 330/331
Hey Hank,
Sorry, did you get bored elsewhere…, great, I was too. Is it possible that we can discuss your viewpoints?
Let us see where to start. First, let me clarify the statement about the amount of fossil/mineral carbon accessible by current human technologies. At best we are able to recover about 50% of liquid deposits in each field. (Most of the balance remains locked up within the rock and is not retrievable.) As to tar sands, even attempting to recover them from the surface deposits would be unlikely to exceed 5 million barrels/day by 2020.
At best no more than 70% of estimated solid deposits are significant enough to mine. It appears that no more than 40% of those reserves are Anthracite, forcing the application of coking and reducing its economic efficiency. Which is a good thing, especially with President Obama championing Gen. 4 Nuclear Plant financing. (As long as this is a real offer and not a political maneuver.) Though if someone gets the great idea of applying geothermal technology to the coking process, we may have an uphill battle.
Any idea on your part of what the atmospheric balance of CO2 would be if all the recoverable fossil/mineral Carbon were re-introduced in the current eon? If it exceeds 800ppm I will be very surprised. Of course if the US gets the bright idea that it can simply start stuffing biomass (sludge) down old oil wells and mine shafts, it might be even less. In essence, returning something similar in a like for like exchange…
As to the rate of change, what has that got to do with the heat in the atmosphere? I concur wholeheartedly we should be actively dumping processed human wastes underground; the more humans, the faster the sequestration. Of course if there were fewer humans in the first place, there would likely be less fossil/mineral CO2 in the atmosphere as well. Population conditions notwithstanding, the rate of sequestration is not the issue, IMHO.
What I believe matters is the rate of exchange between incoming and outgoing atmospheric heat. (As to your fire hose allusion and a fixed drain, would you care to share why you are certain there is only one drain? I seem to remember an overflow drain as well in modern tubs…)
As to the transfer of heat to the oceans, obviously there needs to be better data collection there, as the measured evidence suggests that the overall long term NOAA and Woods Hole ocean measurements from below the 100 meter range do not support heating. This is further supported by observations of the Pacific Triton buoy data sets. (Now as to the PIRATA data sets you may have a point.) Do you have an alternative data set I should also monitor?
You might also have a point with the Triton 20 Deg. C isotherm depth; however, most measures seem to suggest that the long term values there were near their historic lows last year… I know natural variation…, funny though I seem to remember something about oceans being slow to respond being discussed and yet it certainly looked quick enough in 1998, why would it suddenly have slowed in 2004…? (I have a good idea why, would you care to share your insights, and no PDO/La Nina short cuts allowed!)
(Oh and please let us not get into a discussion of the ARGO systems at this time. At least until the roughly 10% of the Atlantic installed buoys has been retired or remedied and not just corrected for in the data set.)
Did you also want to expand the discussion regarding dissolved CO2 in the oceans? It certainly seems interesting that there could be both an increase in acidification and yet less dissolved CO2 due to warmer water temperatures… (Of course we should be talking about actual measures of dissolved CO2 in the oceans and not the ionic or conductivity measurements, which have no less than 3 degrees of freedom right off the bat, H2SO4 being the main one.) It seems Professor Andrew Watson at UAE is implementing a great CO2 sensor system to be employed on ships in the N. Atlantic. (I can’t wait to see what the results will offer us.)
Apparently, you do not consider that CO2 is fully mixed in the atmosphere if you do not consider the heat content of CO2 to be important. (To be honest, Dr. Georg Hoffman in 2007 suggested that most CO2 would likely be limited to the 4km region over land, and the recent MODIS GEOS packages seem to suggest the CO2 is found in pockets distributed downwind of many urban regions.)
Would you also clarify whether you consider that CO2 directly contributes to the heating of the ocean surface. Generally, my observations suggest that where there is significant SST increase, there are also indications of high salinity. This combination appears to suggest large amounts of evaporation, which generally points to a condition in which a cloudless stagnant sky has been overhead for some time… This appears to be even further substantiated by observing the NCDC SSRS Northern Hemisphere Analysis at the same time, while comparing the GEOS Eastern water vapor and IR images and reviewing the UCAR Cosmic SST anomaly data. Can you provide any references supporting your observation? I fully support the idea of a secondary forcing of cloud cover or Jet Stream meander if that is also acceptable to you.
(BTW, thank you to the folks who contacted the NCDC to get the SRRS system fixed and apparently funded for another year! This has been a fantastic tool.)
I am not saying I am correct or that you are wrong, so much as that I am limited to the public data sets, which do not seem to support your criticisms at this time. If you have data sets available that I can review with you, to help me understand your point, that would be most welcome…
Cheers!
Dave Cooke
Completely Fed Up says
“Nobody’s ever thought carbon emission was or is the only driver.”
Actually, Hank, the denialists often believe it.
They ALWAYS believe in only one cause.
Of course, they accuse the IPCC of doing this, because, like, THEY do it, don’t they, so doesn’t everyone?
Anand says
The goal of course, is to estimate the T effect alone, and therefore exclude any non-temperature related CO2 sources. That’s why they chose the time period they did, assuming these other things to be of minor importance–which I think is fairly reasonable. That approach is, I’d guess, difficult to impossible right now because the anthro- CO2 source is so dominant. So it’s going to require better estimates of the different changes in natural C fluxes than we’ve had till now to know where all the atmospheric CO2 is coming from.
Mr Bouldin, Just as I suspected :)
[Response: Not sure what you suspected, something nefarious I guess, but anyway…]
In my opinion, there are a few serious problems with this line of reasoning.
Firstly, the authors do not choose this time period to “minimise the importance of other things” as much as for the availability of “highly resolved temperature and atmospheric CO2 reconstructions”. I do agree with you, though; the authors also state what you say – about the limited possibilities of providing constraints on gamma given the high CO2.
[Response: Your first sentence there is incorrect. First, if high time resolution was what they were after, then T, and especially [CO2], are both much more highly resolved in their instrumental periods. Second, quoting the article: “The substantial anthropogenic perturbation of the carbon cycle and physical climate system characteristics limits possibilities of providing tight constraints during the recent (instrumental) period, while past glacial–interglacial CO2 fluctuations are governed by currently poorly quantified processes. This means that neither the interannual nor glacial–interglacial domains permit feedback quantification on timescales relevant for addressing amplification of anthropogenic global warming.” That’s about as clear as you can state it.]
But the overall direction we’re taking things in general, including this paper, is that we have reasonable evidence to believe that the paleo-reconstructions are good enough. In fact, good enough for us to graft the instrumental reconstruction onto. A graft, mind you, that is undertaken during the same said period of unprecedented high CO2 of the ‘recent period’. If we can do that, we should go the next step too. That is, accept that values of gamma in the past millennium are meaningful to the present situation.
[Response: Hold on Anand. The ‘grafting’ of instrumental T to proxy T estimates is done to account for the fact that the proxies do not always track actual T over the last few decades (i.e. the divergence problem). It does not follow logically from this that past estimates of gamma are necessarily valid (or invalid) for the future. You are comparing two very different processes (T effect on proxy variables vs its effect on [CO2]). In fact, if the recent divergence is unique (it may or may not actually be, depending on time frame chosen), it could be viewed as a no-analogue phenomenon too.]
In addition, the recent past (50-70 yrs) is contiguous with the paleo period. It is not as if we are measuring T and gamma in divorced time periods. The probability of general acceptance of estimates for both T and gamma values recedes in proportion to how much we go back in time to measure them. We have no other choice.
[Response: I think this argument has some validity (but they ended their analysis in 1800 so you’re going back 200 years, not 70), or at least should be considered. I think that’s why the authors argue that these estimates may be reasonable approximations out over the next several decades.]
Secondly, using the ‘excuse’ of ‘no-analogue’, especially in the context of this paper, is discomforting. It reminds me of a situation where certain ‘divergences’ were sought to be explained (away?). Why is it that the same no-analogue argument is not employed for studies that show indirect evidence of anthropogenic warming, for example? I do not see much in the way of “We know it is warming, but conditions during periods compared to which we are warmer, were very much different. So conclusions about causation cannot be made with full certainty”.
[Response: “No-analogue” is not offered as an excuse but as a simple statement of fact. You have to go back to the Pliocene (2.6+ mya) to find T and [CO2] conditions similar to now. But the time resolution of the data then is not nearly as good as now, and so mechanistic understanding of C fluxes occurring then is also lowered. Also, “no-analogue” is not meant (by me) to predict or explain anything. To the contrary, it indicates that prediction is difficult due to lack of good understanding of how the system will behave–i.e. there are possible (or likely?) surprises in store. Regarding your argument against people making unwarranted attributions of various observed effects to climate change, yes I agree–they should not do that (and yes, some have. And I’ve taken some to task for doing so).]
I think it is wrong to say we’ll do paleo, but not accept the lessons we learn because CO2, vegetation cover, anthropogenicity etc were different. Why should we dissemble so? Especially when we don’t, at times.
Thirdly, you point out, and the bimodal distribution of gamma during the past millennium (Fig 3) indicates, that gamma can change with time. Doesn’t this imply that factors other than CO2 concentrations could drive climate? (i.e. be the predominant driver?) For instance, the last time CO2 was high (~280+ ppm) and temperatures were high (beginning of the LIA), there was a dramatic increase in gamma. Meaning ‘x’ happened, drove temps down, drove CO2 down and ramped up gamma. Meaning CO2 was not the main driver, at least at one point. What that ‘x’ is – can it happen again?
[Response: Nobody’s arguing here that [CO2] declines caused the little ice age. But moreover, we already know full well that other things besides GHGs can change the climate, such as global albedo or ocean conditions. We’re not using this study to say that other things can’t change the climate.–Jim]
[Response: The bottom line here is that this study provides us very good constraints on likelihoods of gamma in the last 1000 years, but more information–from other approaches–is needed to know how well they will really apply in the coming century.–Jim]
Thank you for your time.
Regards
Anand
simon abingdon says
#323 #327 Completely Fed Up
Since you appear to regard your constant stream of logorrhoea as somehow authoritative, I thought you might like to revisit your own definition of relative humidity in #278, viz “RH is the amount of vapour held in water compared to the maximum amount of water that such vapour could hold“, which to my untutored eye looks to have about three mistakes in it, despite its brevity.
Jim says
Anand, see inline responses to your recent post.
Jim
simon abingdon says
#332 Jim D Thanks and roger the #270. Just so I might get a little closer to understanding, can you confirm that “leaving the Clausius-Clapeyron effect dominant” (penultimate line) refers to the (in this case excessive) heat required to effect the liquid water to vapour transition?
Hank Roberts says
> 334 Completely Fed Up says: 17 February 2010 at 3:56 AM
>> “Nobody’s ever thought carbon emission was or is the only driver.”
> Actually, Hank, the denialists often believe it.
[citation needed]
I’ll be surprised if you can find support for this claim. I sure can’t.
L. David Cooke says
RE:333
Hey All,
My apologies to Professor Watson and the great folks at UEA, I mistakenly associated him with UAE and not the UEA… for more on his efforts: http://www.uea.ac.uk/env/events/early-warning
Cheers!
Dave Cooke
Completely Fed Up says
“336
simon abingdon says: ”
Nothing.
Since you didn’t answer, I’ll answer for you: you do not understand what relative humidity is.
This is why you don’t understand climate science.
Completely Fed Up says
“339
Hank Roberts says:
17 February 2010 at 11:25 AM
> Actually, Hank, the denialists often believe it.
[citation needed]
I’ll be surprised if you can find support for this claim. I sure can’t.”
Hank, every time someone says to prove AGW wrong:”CO2 has been increasing but the temperatures haven’t gone up”, they’re stating that there is only one driver of climate: CO2.
Every time someone points to the sun and says “IT’S THE SUUUUN!!!”, they’re stating that there’s only one driver of climate: the sun.
Every time someone says “why then is it not warmer than 1998”, they’re stating that there’s nothing else driving the climate: No ENSO.
Brian Dodge says
“So what stops there being more water vapour in the atmosphere if the world heats up? That was my question. ” is not quite the same as “Since clouds are an inexhaustible supply of fresh water vapour this effect should cause catastrophic run away. It doesn’t. Why not?.”
There will be more water vapor (absolute humidity) in the atmosphere as the average temperature increases – that is a key driver of the 2-4.5 degree climate sensitivity – but because the temperature at some level in the atmosphere will always be cool enough for water to precipitate out, it doesn’t matter how big the (ocean) supply is: at some point, more evaporation gives more rain, not “a further increase in AH”. The other important consideration is that the absolute amount of water vapor varies dramatically with altitude because of the effect of the lapse rate removing water from rising parcels of moist air. At 30 deg C and 100% RH (tropical ocean surface) the partial pressure of water is ~4.2e-2 bar. At 230 mb (11 km) altitude, -56.5 deg C, and 40% RH (ICAO tropopause, humidity from http://www.esrl.noaa.gov/psd/cgi-bin/data/timeseries/timeseries1.pl), the water vapor has dropped to ~6.9e-6 bar (about a factor of 6000), even though there is still about a quarter of the atmosphere above this level. You can play with humidity vs temperature & pressure at http://www.humidity-calculator.com/index.php
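[Editor’s note: as a rough cross-check on those partial pressures, here is a sketch using the Magnus approximation for saturation vapour pressure. This is an assumed empirical fit, not necessarily the formula behind the quoted figures, and it applies over liquid water; at -56.5 deg C, where ice is the relevant surface, it overestimates saturation, so the computed ratio lands below the quoted factor of ~6000.]

```python
import math

# Rough cross-check of the partial pressures quoted above, using the Magnus
# empirical approximation for saturation vapour pressure over liquid water.
# (An assumed fit, not necessarily the formula behind the quoted numbers.)

def saturation_vapor_pressure_pa(temp_c):
    """Magnus approximation, in Pa; a common fit for roughly -40..50 deg C."""
    return 610.94 * math.exp(17.625 * temp_c / (temp_c + 243.04))

# Tropical ocean surface: 30 deg C, 100% RH
p_surface = 1.00 * saturation_vapor_pressure_pa(30.0)
# ICAO tropopause: -56.5 deg C, 40% RH
p_tropopause = 0.40 * saturation_vapor_pressure_pa(-56.5)

print(p_surface)                 # ~4.2e3 Pa, i.e. ~4.2e-2 bar as quoted
print(p_surface / p_tropopause)  # a several-thousand-fold drop
```

The ratio here comes out near 3600 rather than 6000; using a saturation formula over ice at the tropopause would push it further toward the quoted factor, but either way the drop with altitude is enormous.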
“Every instance of weather is basically only a local phenomenon, is it not? Why should the fact that it is raining in X, Y and Z today remove just the right amount of moisture from the atmosphere to keep it constant globally?”
All the X’s, Y’s, and Z’s, heterogeneously distributed around the globe, are a reflection of the heterogeneity of the parcels of various moisture content distributed around the globe. Since the cloud cover is about 60%, that percentage of the global area has saturated parcels of air. Those saturated parcels act as a buffer to rising water vapor – if you try to increase the water vapor in a cloud, you just get more or larger cloud droplets, and more rain. As more heat raises the humidity in areas that currently are cloud free, that will increase the greenhouse effect, but the devil is in the details – small clear areas can lose moisture to adjacent clouds, or pick up moisture from cloud evaporation, depending on humidity/temperature gradients, turbulence, mixing, latitude and solar input, diurnal changes, etc. The atmospheric bucket of water vapor has a lot of holes in it in the form of clouds; as we pour more water vapor into the bucket by raising CO2, capturing more solar energy, and causing more evaporation, the level in the bucket goes up, but the holes leak more out. We get a higher level in the bucket, and additional warming, but the increased flow through the leaks (rain) matches the increased evaporation, preventing runaway. Like Jim D said, the effects converge.
“Observations suggest that precipitation and total atmospheric water have increased at about the same rate over the past two decades.” – “How Much More Rain Will Global Warming Bring?”, Frank J. Wentz, Lucrezia Ricciardulli, Kyle Hilburn, Carl Mears, Science, 13 July 2007.
From http://www.ipcc.ch/ipccreports/tar/wg1/266.htm
“The atmospheric water vapour content responds to changes in temperature, microphysical processes and the atmospheric circulation.”
“The complexity of water vapour radiative impact is reflected in the intricate and strongly inhomogeneous patterns of the day-to-day water vapour distribution (Figure 7.1a). The very dry and very moist regions reveal a strong influence of the large-scale dynamical transport. Model simulations exhibit similar patterns (Figure 7.1b), with a notable qualitative improvement at higher resolution (Figure 7.1c).” http://www.ipcc.ch/ipccreports/tar/wg1/266.htm
L. David Cooke says
RE:342
Hey CFU,
To take the heat off Hank, let me suggest a different point of view. Other than the natural phenomena which have caused prior peaks and valleys in the Earth’s temperature record, there currently appears to be something different going on when compared to the weather of the prior 100 years. The primary focus over the last 10-12 years has been on the anthropogenic fossil/mineral Carbon that has been released into the atmosphere over the last 50 years, in an amount exceeding what had been deemed the Earth’s normal Carbon Cycle capacity.
Many, including myself, doubted that Carbon has been a direct contributor to the trend in the Earth’s temperature. When I did some early research with the ARM.gov short- and long-wave down-welling pyrolytic detectors, the noise was too high to make a determination. (Using the SkyRAD60 data set, prior to the DOE takeover.)
When looking further, it seemed there had to be a slightly different cause and effect than a direct forcing. Looking at the data sets between 2004 and 2007, many could see the large pattern drivers such as the ENSO you reference. However, it has been very difficult to determine how the large-scale phenomena were playing into a global warming observation. It was about this time that there was a lot of work surrounding the development of the Raman lidar. (Which I believe was incorporated in the CloudSat and Calipso packages.) The lidar systems offered the ability to estimate the water content and temperature of the overhead precipitable water.
It has not been until the last 2 years that there was the ability to see both sides of clouds. It is this ability, provided by Raman lidars, that began to offer detailed insights into weather patterns. This, coupled with the spate of water vapor research coming out of WSU and CSU pointing to the combination of aerosol, cloud and surface characteristics as participants in insolation and water vapor transfer, began to suggest some interesting things going on in the atmosphere.
Going back to the long-term data records and comparing them to recent events in the US, we see an unusual pattern of blocking Highs and cutoff Lows that seemed to pop up during the recent abnormal global temperature run-up. Hence the thoughts I shared with Simon: it appears to me that carbon could be playing a part, but not necessarily the major part, in the change in global warming. Rather, it seems the change in the weather patterns is the major part of the global warming phenomenon we are seeing.
This is what led me to this site, yet again, as I am trying to find my way through to the next phase of my research. Now I am curious whether the recent Polar Amplification may be part and parcel of the effects of carbon-driven warming. However, rather than the idea of dirty ice, which I initially thought, I am thinking: Part One, it may be that carbon is playing into insolation changes, forcing changes in the weather patterns. Part Two will be determining whether the Dry Slot areas are the same size in upper latitudes as at lower latitudes, even though the surface area at the Poles is smaller. I suspect this results in greater insolation in a region that may not have had a great deal of insolation in the past.
Part and parcel with this is the character of the Northern Jet Stream. I suspect that the spate of 2005 papers documenting the reduction in the Walker circulation may be related to the Northern Jet Stream meandering. The point being that the Jet Stream may be expanding or contracting based on the heat differential between the Sub-Polar and Temperate zones. If so, then I am hoping to see significant CO2 traces in this region, or at worst an unusual ozone pattern emerging there.
Does this help reduce your concerns over cause-and-effect assignment? After all, I am way too big to be standing on a sandy beach, in a white suit, crying out, “The Plane, The Plane”…
The key point, I suspect, is that deviations of the Northern Jet Stream may be the driver of the recent weather extremes, including the summer droughts in the SE US, the floods in SE Asia, the snows in the SE US and the sun in the NW US up through Alaska… Hence, IMHO, carbon plays into the heating of the Polar-Temperate Convergence Zone, which in turn seems to play into blocking Highs and cutoff Lows, resulting in extremes of droughts, floods and global warming along with polar amplification.
Cheers!
Dave Cooke
simon abingdon says
#343 Brian Dodge
Many thanks for a very informative and interesting explanation. Much appreciated. simon
Completely Fed Up says
David, the first four paragraphs say absolutely nothing not already known.
However, such natural effects would have appeared in previous warming and cooling events, and Annan and Hargreaves, one study often referred to here, has done just that analysis.
Such effects do not seem to change the median temperature sensitivity from that produced by the models, which you presuppose do not sufficiently include minor features of climate and weather.
This should be a strong signal that such feedbacks ARE indeed minor.
The classic school test for the value of g, the acceleration due to gravity, is never done except on paper, where you’re told that a log is on a slope of, say, 7 degrees. You are then asked to show the time taken to travel a certain distance, and the exercise goes on to say that this experiment would give you the experimental value of g.
However, this is not the case, since such a simple model doesn’t include such things as friction, stiction, slippage and, most importantly, rotational inertia.
If you were to do the experiment, you would see that your value for g would be, say, 9.58 +/- 0.0.
If such were the result, you would not conclude that the effect of stiction would be a change in the real value of g of 2 m/s^2.
There’s nowhere for that value to fit in the 0.13 m/s^2 difference between what a more accurate measuring technique produces and what you produced.
Doubly so if you found out that rotational inertia could account for 0.11 +/- 0.04 of that.
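The rotational-inertia correction in this example can be made concrete. For a uniform cylinder rolling without slipping, the acceleration is a = g·sin(θ)/(1 + I/(m·r²)) with I = m·r²/2, so naively inverting a = g·sin(θ) from the measured time under-reports g by a factor of 1.5. A minimal sketch (my own function names and numbers, not taken from the comment):

```python
import math

def roll_time(distance, angle_deg, g=9.81, inertia_factor=0.5):
    """Time for a uniform cylinder (I = k*m*r^2, k = 0.5) to roll
    `distance` metres from rest down an incline, no slipping."""
    a = g * math.sin(math.radians(angle_deg)) / (1.0 + inertia_factor)
    return math.sqrt(2.0 * distance / a)

def naive_g(distance, angle_deg, t):
    """Infer g from the timing while (wrongly) ignoring rotational
    inertia, i.e. assuming a = g*sin(theta)."""
    a = 2.0 * distance / t ** 2
    return a / math.sin(math.radians(angle_deg))

t = roll_time(2.0, 7.0)       # "measured" time for a 2 m run at 7 degrees
print(naive_g(2.0, 7.0, t))   # 6.54: a third of g has "gone missing"
```

The point of the analogy stands: a large systematic term left out of the model shows up as a large, diagnosable discrepancy, not as room to hide an arbitrary extra effect.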
But you would proffer clouds as being able to undo all the evidence that points to anthropogenic CO2 production being a problem.
Why?
How?
Para 6 is gibberish.
At one point you’re talking about CO2 in the atmosphere then segue into how this changes insolation at the poles.
I was not aware of any polar axis shift nor any mechanism how CO2 changes could cause this.
Para 7 is asking a question that has no genesis. Where do you get the feeling I have a problem with cause and effect assignment and how do you think the previous paragraphs could have affected one if such were extant?
Para 8, start: yes, this is what I’ve heard, and I suspect it is not news to the climatologists either. It then leads off into an assertion that, as far as I am aware, is also not news.
L. David Cooke says
RE 343
Hey Brian,
I do not agree with the conclusion regarding rain, as the total column is not condensing. (Based on ground-based Raman lidar measurements in the Southwest between OK, CO and NM in 2005 on the old SuomiNet site, it had been shown that the tropopause lifts in the presence of higher localized air temperatures and low air pressure. At the time there was no CloudSat/Calipso flyover, so I could not confirm the Mesosphere compression; however, Stratospheric compression was clear.) Are you considering the differences between the wet and dry adiabatic lapse rates?
If we want to go further, we might need to consider Polar Stratospheric Clouds and their origins. If you increase the heat, the air parcel rises higher in the atmosphere; however, it is only that portion which has changed state that can rain. So what happens to the rest of the column until it becomes cool enough to change state? What happens to the lapse rate of the total column? If the temperature of the air column is greater, would it not be able to hold more water? ( http://www.tis-gdv.de/tis_e/misc/klima.htm ) The result I see would suggest that a warmer parcel of air near the Sea Surface may have a greater Absolute Humidity.
If, globally, the air above the sea surfaces were to be warmer, are you suggesting that the atmospheric AH would not increase? As we go up in altitude, the pressure falls adiabatically and the water vapor cools. This begs the question of the rate of temperature drop versus altitude. For instance, starting from 40 Deg. C at 1033 mb at 90% humidity, with a dew point of around 37 Deg. C, what would the temperature of the same parcel be at, say, 550 mb, and again at 250 mb? (BTW, I broke the calculator…)
We both know that a lower vapor pressure cools the water vapor; however, in a warmer atmosphere the height for achieving state change increases, to a point. Would the maximum AH be related to the maximum height a parcel can rise before dropping to the dew point? If so, would that not suggest that there would be a greater AH in the atmosphere, as the entire atmosphere would contain more water vapor?
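For what it’s worth, the “broke the calculator” numbers can be sketched. The functions below are my own illustration (the Magnus approximation for saturation vapor pressure, ideal-gas absolute humidity, and Poisson’s equation for the dry-adiabatic limit); note that the dry adiabat only applies until saturation, and a parcel starting at 90% RH saturates almost immediately, then cools more slowly along the moist adiabat:

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation to saturation vapor pressure over water, hPa."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

def absolute_humidity(t_celsius, rh):
    """Water vapor density (g/m^3) from temperature and relative humidity,
    via the ideal gas law with R_v = 461.5 J/(kg K)."""
    e_pa = saturation_vapor_pressure(t_celsius) * 100.0 * rh
    return 1000.0 * e_pa / (461.5 * (t_celsius + 273.15))

def dry_adiabatic_temp(t1_celsius, p1_hpa, p2_hpa, kappa=0.2854):
    """Temperature (C) after a dry-adiabatic pressure change,
    via Poisson's equation: T2 = T1 * (p2/p1)**(R/cp)."""
    return (t1_celsius + 273.15) * (p2_hpa / p1_hpa) ** kappa - 273.15

# The parcel from the comment: 40 C at 1033 hPa, 90% RH.
print(absolute_humidity(40.0, 0.9))             # ~46 g/m^3, vs ~11.5 at 15 C
print(dry_adiabatic_temp(40.0, 1033.0, 550.0))  # roughly -12 C (dry limit)
print(dry_adiabatic_temp(40.0, 1033.0, 250.0))  # roughly -64 C (dry limit)
```

The first number illustrates the point being argued: a sea-surface parcel at 40 C and 90% RH carries about four times the vapor density of one at 15 C and the same relative humidity.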
Note: There likely is a maximum altitude limitation if you look at the total temperature gradient of the standard atmosphere at greater than 55 km. ( Page 26, Figure 3: http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19770009539_1977009539.pdf )
To suggest that higher temperatures are going to result in more rain and not more AH is insufficient unless you can describe the mechanism, IMHO. So far I do not see it. I do not think I am being dense; is there a practical example we could employ? Maybe that would help.
As for the IPCC radiative description, I know the distributions I have observed over the last 40 years have a very clear pattern. On the other hand, I have not seen a fixed number of global cyclonic and anti-cyclonic systems balancing out the global atmospheric systems. Is that what they mean?
Cheers!
Dave Cooke
L. David Cooke says
RE:346
Hey CFU,
I have not a hint of how your statements 5-10 are in any way related to what I posted.
The historic record seems to suggest the cycle of drought wrought by blocking Highs since 1970 is significant, with nearly 10 severe events since 1970, ranging from CA to the Sierras, the Mid-West and down through the SE, including Florida. During these periods, at least locally, there were record high temperatures and record low temperatures, to the point that dew formed on raised surfaces in the middle of the drought. These regional events certainly contributed to a slightly higher average temperature (regionally about 2 Deg. C over a period of about 6 months), though the daily range was abnormally high at an average 32 Deg. (The daily range was measured as the Daily High Temperature of the current day against the next morning’s Daily Low Temperature, in an attempt to measure the amount of radiant heat released. This was an experiment in which I attempted to account for any increased overhead radiant impedance.) I do not consider long-term drought a minor signal…
As to clouds being the difference, I do not know where you got that impression. Let’s reset that thought and talk total water vapor. Clouds, or their lack, are mainly only a signal, to a point. (That point being a vapor density of greater than roughly 1 g/m^3.) Water vapor and both its radiant and optical-depth effects are well documented. Increasing the amount of insolation by its reduction, and its effects on ice in the Arctic Region (my error for not being more precise), have been clearly demonstrated by recent experiments near Arctic ice melt ponds/pools. (Performed by Dr. Jason Box from Ohio State University; DVD available here: http://store.discoveryeducation.com/product/show/54306 )
The point I clearly see you missing is the cause of the Jet Stream deviations. I have monitored the GOES and SRRS inter-zonal patterns over the N. Pacific for the last 4 years. I feel fairly confident with both the patterns of the ITCZ and Northern Jet Streams, as the Rossby Wave steering energy and hence a driver for most of the ENSO, PDO and NAO activity since 2006.
The question I was attempting to discuss earlier is what drives the deviations in the Jet Stream. At best I can only theorize at this time that it has more to do with the heat content at the Hadley-Ferrel Cell convergence. Kind of like heating and cooling a small-diameter copper pipe coil: fill it with hot water and the coil expands; fill it with cold water and the coil contracts.
The hypothesis should be: if the major difference in the Earth’s atmosphere since 1950 has been the change in CO2, and all other things remain roughly the same, then CO2 would be assumed to be causing additional heating at the Hadley-Ferrel convergence. That assumption should be easy to measure; however, based on what I have seen from the Japanese satellite as reported by NASA, CO2 does not seem to be concentrated along the Jet Stream track indicated by the SRRS 200 mb Analysis. However, public access to the images has been hit and miss at best…
Cheers!
Dave Cooke
Anand says
Mr Bouldin
The ‘grafting’ of instrumental T to proxy T estimates is done to account for the fact that the proxies do not always track actual T over the last few decades (i.e. the divergence problem).
I don’t think I agree with this line of reasoning.
If we seek an explanation as to why the grafting of instrumental temperatures onto the paleo reconstructions is done, the most important reason, and probably the most valid one, is to demonstrate that the paleo reconstructions of the GTA track well with the instrumental record and therefore to increase confidence in the veracity of the paleo reconstructions. Because, like Phil Jones says, the instrumental series is the “best proxy” after all.
The divergence issue is not dealt with by the grafting; it is a consequence of it. The Briffa truncation in the TAR is how they dealt with it. Leaving out the same data for Briffa (2001), with an explanation, is how chapter 6 of AR4 deals with it.
[Response: Your points here are very unclear. You mentioned grafting without specifying exactly what you mean, and I was trying to interpret it (mistake!). It appears (though it’s hard to tell) that you are confusing proxy calibration with the completely separate issue of showing instrumental and proxy temperatures together in graphs. Furthermore, it is not clear why you are even discussing this.–Jim]
Check their own words:
[Response: Who’s?]
“Virtually all of these used chronologies or tree ring climate reconstructions produced using methods that preserve multi-decadal and centennial time scale variability”
“That this is a defensible simplification, however, is shown by the general strength of many such calibrated relationships, and their significant verification using independent instrumental data.”
Moreover, the instrumental series extends back about 150 years, well beyond the non-divergent period. Why did they do that then?
[Response: Sorry, not following. They used variable length calibration periods over the instrumental period, as stated in the article, and this post.–Jim]
Once again, thanks for your time and comments. This paper and this thread (your posts mainly) helped me understand how the same thing is understood differently by two sets of people. I agree with and understand many of your observations above.
With regards
Anand
Completely Fed Up says
“Because, like Phil Jones says, the instrumental series is the “best proxy” after all”
Please explain then why “hide the decline” is so bad when, to do that, he used the best proxy: real temperatures?
“Moreover, the instrumental series extend back to about 150 years, well beyond the non-divergent period. Why did they do that then?”
Because we didn’t have any thermometer readings from the indigenous Amerindian population, nor the Australian continent, etc.