In a new GRL paper, Svensmark et al. claim that the liquid water content of low clouds is reduced after Forbush decreases (FD), and that for the most influential FD events the liquid water content in the oceanic atmosphere can diminish by as much as 7%. In particular, they argue that there is a substantial decline in liquid water clouds, apparently tracking the declining flux of galactic cosmic rays (GCR) and reaching a minimum days after the drop in GCR levels. The implication would be that GCR can affect climate by modulating low-level cloudiness. The analysis is based on various remote sensing products.
The hypothesis is this: a rapid reduction in GCR due to an FD results in reduced ionization of the atmosphere, and hence fewer cloud drops and less liquid water in low clouds. Their analysis of various remote sensing products suggests that the opacity due to aerosols (measured in terms of the Angstrom exponent) reaches a minimum ~5 days after the FD, and that a minimum in the cloud liquid water content (CWC) occurs ~7 days after the FD. They also observe that the CWC minimum takes place ~4 days after the fine-aerosol minimum (the numbers here don’t seem to add up).
The paper is based on a small selection of events and on specific choices of events and bandwidths. It does not prove that GCR affect low clouds; at best it lends support to that hypothesis. A lot of hurdles remain before one could call it proof.
One requirement for successful scientific progress in general is that new explanations or proposed mechanisms must fit within the big picture, as well as being consistent with other observations. They must also be able to explain other relevant aspects. A thorough understanding of the broader subject is therefore often necessary to put the new pieces into the larger context. It’s typical of non-experts not to place their ideas in the context of the bigger picture.
If we look at the big picture, one immediate question is why it should take days for the alleged minimum in CWC to appear. The lifetime of clouds is usually thought to be on the order of hours, and it is likely that most of the CWC has precipitated out or re-evaporated within a day after the cloud has formed.
In this context, the FD is supposed to suppress the formation of new cloud condensation nuclei (CCN), and the time lag of the response must reflect the lifetime of the clouds plus the time it takes for new ultra-fine molecule clusters (tiny aerosols) to grow into CCN.
The next question, then, is why the process through which ultra-fine molecule clusters grow by a factor of ~1000 to become CCN takes several days, while the clouds themselves have a shorter lifetime.
There is also a recent study in GRL by Pierce and Adams on modeling CCN (also the subject of a comment in Science on May 1st, 2009), which is directly relevant to Svensmark et al.’s hypothesis but is not cited in their paper.
Pierce and Adams argue that the theory is not able to explain the growth from tiny molecule clusters to CCN. The work by Svensmark et al. is therefore not very convincing when it does not discuss these issues, on which their hypothesis hinges, even if the paper by Pierce and Adams was too recent to be cited.
But Svensmark et al. also fail to make reference to another relevant paper by Erlykin et al. (published January 2009), which argues that any effect on climate is more likely to be directly from solar activity rather than GCR, because the variations in GCR lag variations in temperature.
Furthermore, there are two recent papers in the Philosophical Transactions A of the Royal Society, ‘Enhancement of cloud formation by droplet charging‘ and ‘Discrimination between cosmic ray and solar irradiance effects on clouds, and evidence for geophysical modulation of cloud thickness‘, that are relevant for this study. Both support the notion that GCR may affect cloudiness, but in different ways than Svensmark et al. propose. The first of these studies focuses on time scales on the order of minutes and hours, rather than days. It is difficult to explain how changes in the current densities taking place minutes to hours after solar storms could have a lasting effect of 4-9 days.
There are many micro-physical processes known to be involved in low clouds, each affecting the cloud droplet spectra, the CWC and the cloud lifetimes. Such processes include collision & coalescence, mixing processes, winds, phase changes, heat transfer (e.g., diffusive and radiative), chemical reactions, and precipitation. The ambient temperature also matters: it determines the balance between the amount of liquid water and that of water vapour.
On the more technical side, the paper does not explain why 340 nm and 440 nm should be the magic numbers for the remote sensing data and for the Angstrom exponents calculated from the Aerosol Robotic Network (AERONET). Measurements exist for other wavelengths as well, and Svensmark et al. do not explain why these particular choices are best for the type of aerosols they want to study.
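For readers unfamiliar with the quantity, the Angstrom exponent relates aerosol optical depth at two wavelengths. A minimal sketch (the wavelengths match those used in the paper; the optical-depth values are made up purely for illustration):

```python
import math

def angstrom_exponent(tau1, tau2, lam1, lam2):
    """Angstrom exponent from aerosol optical depths tau1, tau2
    measured at wavelengths lam1, lam2 (any consistent units)."""
    return -math.log(tau1 / tau2) / math.log(lam1 / lam2)

# Illustrative (made-up) optical depths at the paper's two wavelengths:
alpha = angstrom_exponent(0.30, 0.20, 340.0, 440.0)  # ~1.57
```

A larger exponent corresponds to smaller (finer) particles, which is why changes in the exponent are read as changes in the fine-aerosol population; the open question raised above is why this particular wavelength pair best isolates the relevant aerosols.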
For a real effect, one would expect to see a response in the whole chain of the CCN-formation, from the smallest to the largest aerosols. So, what about the particles of other sizes (or different Angstrom exponents) than those Svensmark et al. have examined? Are they affected in the same way, or is there a reason to believe that the particles grow in jumps and spurts?
If one looks long enough at a large set of data, it is often possible to discern patterns just by chance. For instance, ancient scholars thought they found meaningful patterns in the constellations of the stars in the sky. Svensmark et al. selected a smaller number of FDs than Kristjansson et al. (published in 2008), who found no clear effect of GCR on cloudiness.
Also, statistics based on only 26 data points or only 5 events as presented in the paper is bound to involve a great deal of uncertainty, especially in a noisy environment such as the atmosphere. It is important to ask: Could the similarities arise from pure coincidence?
Applying filtering to the data can sometimes bias the results. Svensmark et al. applied Gaussian smoothing with a width of 2 days (and a maximum of 10 days) to reduce fluctuations. But did it reduce the ‘right’ fluctuations? If the aerosols need days to form CCN and hence clouds, wouldn’t there be an inherent time scale of several days? And is this accounted for in the Monte Carlo simulations they carried out to estimate the confidence limits? By requiring the minimum to take place in the interval 0-20 days after the FD, and defining the base reference as 15 to 5 days before the FD, a lot is already given. How sensitive are the results to these choices? The paper does not explore this.
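As an illustration of why these windowing choices matter, this kind of significance test can be sketched as a Monte Carlo exercise on synthetic noise (the red-noise model and its parameters are assumptions for illustration; the window definitions follow the description above, not the paper's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def min_depth(series):
    """Depth of the deepest dip in the search window, relative to the base period.
    Day 15 of the 36-day array is the FD: base = days -15..-5, window = days 0..20."""
    return series[0:11].mean() - series[15:36].min()

def ar1(n, phi=0.7):
    """Red (AR1) noise as a crude stand-in for atmospheric variability."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

# Null distribution: dip depths found in pure noise containing no FD signal at all.
null = np.array([min_depth(ar1(36)) for _ in range(2000)])
threshold_95 = np.quantile(null, 0.95)  # depth required for 5% significance
```

Because a minimum is actively searched for inside a 21-day window, even pure noise produces a positive ‘dip’, so the 5% threshold sits well above zero; and the longer the noise’s decorrelation time, the higher it sits, which is exactly why the smoothing width and window choices deserve the sensitivity tests asked for above.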
For a claimed ‘FD strength of 100%’ (whatever that means), the change in cloud fraction was found to be on the order of 4% +/- 2%, which, they argue, is ‘slightly larger than the changes observed during a solar cycle’ of ~2%. This is not a very precise statement. And when the FD is given only as a percentage, it is difficult to check the consistency of the numbers. For example, are the changes in GCR levels between solar minimum and maximum consistent with the cloud fraction changes during an FD? And how does cloud fraction relate to CWC?
Svensmark et al. used the South Pole neutron monitor to define the FDs; with a cut-off rigidity of 0.06 GV, it is also sensitive to low-energy particles from space. Higher energies are necessary for GCR to reach lower latitudes on Earth, and the flux tends to diminish with higher energy. Hence, the South Pole monitor is not necessarily a good indicator for the higher-energy GCR that could potentially influence stratiform clouds at low latitudes.
In their first figure, they show a composite of the 5 strongest FD events. But how robust are these results? Does including the 13 strongest FD events, or only the 3 leading events, alter the picture?
Svensmark et al. claim that the results are statistically significant at the 5% level, but in the quantitative comparison (their second figure) of the effect of FD magnitude in each of the four data sets studied, it is clear that there is strong scatter and that the data points do not lie neatly on a line. It looks as if the statistical test was biased, because the fit is not very impressive.
The GRL paper claims to focus on maritime clouds, but it is reasonable to question this, since the air moves some distance in 4-9 days (the time between the FD and the minimum in CWC) due to the winds. This suggests that the initial ionization probably takes place over regions other than those where the CWC minima are located 4-9 days afterward. It would be more convincing if the study accounted for the geographical patterns and for advection by the winds.
Does the width of the minimum peak reveal time scales associated with the clouds? The shape of the minimum suggests that some reduction starts shortly after the FD and reaches a minimum after several days. For some data, however, the reduction phase is slower; for others, the recovery phase is slower. The width of the minimum is 7-12 days. Do these variations reflect part of the uncertainty of the analysis, or is there some real information there?
The paper does not discuss the lack of trend in GCR at moderate energy levels, or what role GCR plays in climate change. They have done that before (see previous posts here, here, and here), and it is wise to leave out statements that lack scientific support. But it seems they are looking for ways to back up their older claim, and news reports and the press release on their paper make the outrageous claim that GCR have been demonstrated to play an important role in recent global warming.
A recent analysis carried out by myself and Gavin, and published in JGR, compares the response to solar forcing in the GISS GCM (ER) with the observations. Our analysis suggests that the GCM provides a realistic response in terms of the global mean temperature, well within the bounds of uncertainty (uncertainties are large when applying linear methods to analyse chaotic systems). The model does not include the GCR mechanism, and the general agreement between model and observations is therefore consistent with the effect of GCR on clouds being minor in terms of global warming.
As an aside to this issue, there have been some new developments regarding GCR, galaxy dynamics and our climate (see the commentary at environmentalresearchweb.org), discussed previously here.
Rod B says
Patrick (197), true. I thought that was a level of detail that wasn’t necessary and probably unhelpful to my basic explanation.
Jacob Mack says
I think the term proof is a good word choice, as it is a single event, or series of more causal events, as opposed to AGW, which of course is highly correlated with confidence greater than 95%. AGW has many more components and time series of well-correlated events, and a good physics exposition of the process of equilibration. GCR, on the other hand, needs more direct proof through observations; of course this may end up being a slippery slope for some time, but as well attributed as AGW is, it is not “proven” in the most standard way of thinking of something as proven.
Yet this GCR business can be shown to be true or not true, just as observing melting sea ice can be looked at directly (irrespective of attribution). Of course, with such a high level of confidence in the AR4 report and the more recent Climate Change and Water report, there is really little (if any) doubt that AGW is a real phenomenon and is in a trend. Of course there is NO doubt humans affect climate; this is 100% certain. Just look at the Asian cloud (spanning from China to Pakistan), which is destroying crops due to lower temps for a land mass that contains more than 50% of the world’s population. Of course the WMO’s definition of climate (not just Gavin’s) is 30 years, but we can look at 10-year changes, especially due to human activity directly, and ascertain a climate modification. The WMO and NASA-NOAA acknowledge it is helpful to also look at “climate” for less than 30 years too.
I am interested in seeing more research and blog posts from RC on GCR, of course, as there may be a smaller % contribution from this FD phenomenon, but I can see why the word proof is being maintained even after posters pointed it out. I think all these issues of GCR, FD, SOI, ENSO are important (as do the mods here at RC), but faulty research misplaces emphasis on them in regards to GHG forcings.
John P. Reisman (OSS Foundation) says
#200 Rod B
Just add water and stir.
If you want to argue with the principle physics, you better show up with something more powerful than your opinion.
For me, I will accept the principle physics unless of course you can prove it wrong, get it peer reviewed and have it survive peer response. However, I would point out that any such attempt would likely be akin to proving that the earth does not have gravity at this point.
Good luck, and remember to use a tether next time you step outside… you wouldn’t want to risk floating off into space.
I’m not a math guy, so I will just have to watch in shock and awe of your superior skills in proving the science wrong on the subject.
Doug Bostrom says
Rod B 7 August 2009 at 3:39 PM
“The current accepted estimate is that CO2 concentration going from 400 to 800 ppm will add 3.708 watts/m^2 forcing [5.35ln(800/400)]. I think the science supporting that projection from that level of concentration is sufficiently weak to warrant reasonable questioning.”
Ok, fair enough. Pursuing my earlier suggestion about identifying and prioritizing uncertainties, what part of the supporting science would you choose as the greatest liability to the accepted prediction of forcing?
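[For what it’s worth, the expression Rod B quotes is the standard simplified fit from Myhre et al. 1998, and the arithmetic checks out:]

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998 fit)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

delta_f = co2_forcing(800.0, 400.0)  # doubling: 5.35 * ln(2) ~ 3.71 W/m^2
```

Note that the same ~3.71 W/m^2 results for any doubling, since only the concentration ratio enters; the disagreement in this thread concerns the support for, and uncertainty of, the 5.35 coefficient, not the logarithm itself.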
Paul says
Many of the comments here seem bizarre from the perspective of scientific integrity. Svensmark in his paper is showing what he believes to be valid empirical evidence in support of his hypothesis of a postulated physical phenomenon, namely that the level of GCR can affect cloud formation. There may or may not be consequences in the event that this hypothesis is valid, particularly with respect to the existence of Shaviv’s “amplification factor” on TSI as a control of climate change.
However, the appropriate intelligent response should be to comment on the validity of the demonstration of the physical phenomenon.
Can we please all recall that at present the entire argument for CO2 causing dangerous global warming rests on a single proven demonstration of a physical phenomenon – namely that CO2 is a very effective absorber and emitter in certain LW bands. Everything else is an extrapolated deduction.
Patrick 027 says
Rod B. – if you say the burden of proof is on those who say it is 3.708 +/- 0.001 W/m2, then I guess that’s fine. But 3.7 +/- 0.3 W/m2 (or something like that) is much more solid – the burden of proof is on you (or whoever) to show otherwise. Anyway, the uncertainty in radiative forcing itself is not much of an issue for CO2 or greenhouse gases in general.
Patrick 027 says
“Everything else is an extrapolated deduction.”
Like how the sun’s mass was determined by planetary orbital characteristics was an extrapolation?
Brian Dodge says
“I don’t know how that Fourier proxy stuff works, …” Comment by Robert Bateman — 5 August 2009 @
“Beginning with work by Joseph Fourier in the 1820s, scientists had understood that gases in the atmosphere might trap the heat received from the Sun. As Fourier put it, energy in the form of visible light from the Sun easily penetrates the atmosphere to reach the surface and heat it up, but heat cannot so easily escape back into space. For the air absorbs invisible heat rays (“infrared radiation”) rising from the surface. ” http://www.aip.org/history/climate/co2.htm
“Joseph Fourier’s contributions to modern engineering science are so critically important and so pervasive that he is rightly regarded as the father of modern engineering.
Fourier’s contributions, many of which are presented in The Analytical Theory of Heat (1822), include:
• The original and still globally accepted view of dimensional homogeneity—the view that natural phenomena can be rigorously described only by equations that are dimensionally homogeneous—i.e. equations that are dimensionally identical.
Fourier’s view of homogeneity required the creation of contrived parameters such as electrical resistance, heat transfer coefficient, and material modulus. These parameters, and the myriad others like them, are the engineering tools now used to describe and to analyze natural phenomena. They exist only because of Fourier’s pioneering view of homogeneity, and they have all been contrived in the manner pioneered by Fourier.
• The original and still globally used concept of “flux”—of a flow of something per unit area and unit time.
• The original and still globally accepted sciences of convective heat transfer and conductive heat transfer.
• The original and still globally used concepts of heat transfer coefficient and thermal conductivity.
• The original and still globally used solution of “boundary condition” problems by matching the flux at the boundary.
• Many original and still globally used contributions in pure and applied mathematics widely used in modern engineering. ”
and
“Lienhard [1983] summarizes Fourier’s contributions in pure and applied mathematics presented in his 1807 memoir on heat transfer:
Fourier submitted a new 234 page manuscript to the Institut de France in Paris in 1807. In it he did something more important than determining how to formulate the laws governing the flow of heat in a solid. He did something beyond updating Bernoulli’s trigonometric series to solve the equation. He actually provided us with the strategies that would be basic to the entire field of continuum mechanics, of which heat conduction and convection are a major part. These are the identification of field differential equations and boundary conditions, the technique of separation of variables, and the idea of representing solutions in the form of series of arbitrary functions.”
http://memagazine.asme.org/web/Fourierthe_Father_Modern.cfm
“This method, later known as Fourier’s Theorem or Fourier series, was revolutionary in that it could also be applied to any recurring, oscillating motion… Using Fourier series, it is possible to reduce any complex, periodic wave form into a series of simple, sine waves whose sum produces the original complex wave. The use of Fourier series in this manner is called harmonic analysis.” http://www.bookrags.com/biography/jean-baptiste-joseph-fourier-wsd/
“In signal processing, the Fourier transform often takes a time series or a function of continuous time, and maps it into a frequency spectrum. That is, it takes a function from the time domain into the frequency domain; it is a decomposition of a function into sinusoids of different frequencies; in the case of a Fourier series or discrete Fourier transform, the sinusoids are harmonics of the fundamental frequency of the function being analyzed.” http://en.wikipedia.org/wiki/Fourier_analysis; also see; http://en.wikipedia.org/wiki/Fourier_transform and http://en.wikipedia.org/wiki/Discrete_Fourier_transform
“Figure 2A displays the correlation between 10Be concentration and sunspot number at sunspot minimum, This shows the remarkable result that the GCR intensity exhibits a well defined dependence upon the minimum sunspot number. … Figure 2A therefore conveys the important result that the sunspot number is providing a proxy for the magnetic conditions over a substantial region of space…” http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?bibcode=2003ICRC….7.4031M&db_key=AST&page_ind=1&data_type=GIF&type=SCREEN_VIEW&classic=YES
“We have tested the use of sunspot area as a long-term proxy for solar irradiance change, using observations made at the Coimbra Solar Observatory, from which we obtain both statistically weighted sunspot numbers and sunspot areas over the period 1980-1992. These are both correlated with solar irradiance values measured from Nimbus-7 spacecraft over the same time period, …”
Comparing Sunspot Area and Sunspot Number as Proxies for Long-term Solar Irradiance Variation
S. D. Jordan (NASA Goddard Space Flight Center, Greenbelt, MD), A. G. Garcia (Coimbra Solar Observatory, Coimbra, Portugal)
AAS 200th meeting, Albuquerque, NM, June 2002 Session 57. Living with a Star
I was too curt with my previous statement regarding the lack of correlation between GCR and global temperature, and didn’t explain my reasoning adequately. Since the woodfortrees site doesn’t have a GCR dataset that can be used, one must use a proxy or stand-in for such a set. As can be seen from the references above, there is a high correlation between sunspot number and GCR (as well as between sunspot area and solar irradiance), so sunspot number, though perhaps not ideal, can be used as a proxy for GCR. The woodfortrees site also has sunspot number data extending back to 1900, well before GCR were measured directly. E.g., “Cosmic ray measurements in Oulu started in 1964 in a wooden barrack with a standard 9-NM-64 neutron monitor (NM) consisting of three units each of three counters.” http://spaceweb.oulu.fi/projects/crs/

If one selects two data sets with periodic variations (in this case annual) http://www.woodfortrees.org/plot/esrl-co2/from:1980/scale:1/mean:1/offset:-310/plot/nsidc-seaice-n/from:1980/scale:1/mean:1 and then uses the Fourier transform tool to convert from the time domain to the frequency domain http://www.woodfortrees.org/plot/esrl-co2/from:1980/scale:1/mean:1/offset:-337/detrend:47/fourier/magnitude/from:1/to:100/plot/nsidc-seaice-n/from:1980/scale:0.5/mean:1/offset:-10/fourier/magnitude/from:10/to:100 we see correlated peaks in the frequency spectrum corresponding to the annual variation (plus the first harmonic, since the variation isn’t strictly sinusoidal) of both sea ice and CO2. When we make the same analysis of sunspot number (which we have seen to be correlated with, and a proxy for, GCR) versus global temperature, there is a peak in the sunspot spectrum that corresponds to the well-known solar cycle.
“Although noted astronomers such as LaLande and Wm. Herschel observed the Sun and recorded the presence of sunspots, it wasn’t until 1843 that Heinrich Schwabe noted that the number of sunspots varied with a period near 10 years. ” http://www.mtwilson.edu/hk/ARGD/Sunspot_Cycle/
There is no corresponding peak in the temperature spectrum. Since global temperature can vary significantly over periods much shorter than a sunspot cycle (e.g., the 1998 record temperature), and also shows significant variations and trends over longer periods, if the solar cycles were influencing temperature to a significant degree there would be a peak in the frequency spectrum of temperature corresponding to the peak we see in the sunspot number spectrum.
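[The spectral comparison described above can be sketched in a few lines. The series here are synthetic, with an 11-year cycle injected into one of them; amplitudes and noise levels are made up for illustration, not the actual woodfortrees data:]

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(110.0)  # 110 years of annual data

# Synthetic "sunspot" series: a clear ~11-year cycle plus noise.
sunspots = np.sin(2 * np.pi * years / 11.0) + 0.2 * rng.normal(size=years.size)
# Synthetic "temperature" series: a slow trend plus noise, no 11-year cycle.
temperature = 0.01 * years + 0.3 * rng.normal(size=years.size)

def spectrum(x):
    """One-sided FFT magnitude of a linearly detrended series."""
    t = np.arange(x.size)
    x = x - np.polyval(np.polyfit(t, x, 1), t)
    return np.abs(np.fft.rfft(x))

freqs = np.fft.rfftfreq(years.size, d=1.0)  # cycles per year
s_spec, t_spec = spectrum(sunspots), spectrum(temperature)
peak = freqs[s_spec.argmax()]  # lands near 0.09 cyc/yr, i.e. the 11-year cycle
```

The sunspot spectrum shows a sharp peak at 1/11 cycles per year while the temperature spectrum shows nothing comparable there, which is the shape of the argument: a forcing that matters should leave its spectral fingerprint in the response.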
chris says
Re #205
Well yes, scientific integrity is the issue, Paul. In my experience as a scientist, we work hard to raise research funds in areas we consider interesting/important, do careful experiments/analyses and publish the results, generally without much fanfare. We are careful to place our results and interpretations in the context of any wider issues. If Svensmark (and Shaviv) did that, then the CRF-cloud-climate issue would be just another element of climate research to be considered in the wider context according to a careful and honest consideration of the evidence. We could be clear, for example, that however interesting any potential CRF-cloud relationship might be, we know rather categorically that this can have little significance for the very marked global-scale warming of the last 30-odd years. There simply hasn’t been a secular trend in the CRF since the early 1950s at least, when the CRF began to be monitored in great detail. We could examine the climate/temperature data for the last 1000 years and conclude that the balance of evidence opposes any significant role of CRF in temperature variation during this period too.
Unfortunately scientific integrity over this issue has taken a battering. For some reason a very small number of scientists have participated in a circus in which any potential significance of these data are vastly overblown. According to the press release associated with Svensmark’s paper, Forbush phenomena are:
“…events that reveal in detail how the Sun and the stars control our everyday clouds.”
Shaviv accompanied his (very likely terminally flawed) hypothesis about CRF effects on earth temperature through the Phanerozoic with the assertion:
“The operative significance of our research is that a significant reduction of the release of greenhouse gases will not significantly lower the global temperature”
…and the web is simply crawling with overblown statements encompassing gross misrepresentations of this science:
“A systematic effect on the clouds – e.g. one of the cosmic origin – is a nightmare for the champions of the silly CO2 toy model of climatology because the cloud variations easily beat any effect of CO2. …”
None of this seems to be a concern for the advocates of this hypothesis. The unfortunate thing is that there are many scientists working pretty diligently on these issues (CRF effects on cloud formation), and their evidence is pretty uniformly unsupportive of the CRF-cloud-climate causality. Unfortunately, the casual reader wouldn’t know this, since these contrary analyses are pretty uniformly ignored on the blogosphere.
Of course one may have a sneaking admiration for the ideas of Paul Feyerabend and his “anything goes” picture of science, in which scientific argy-bargy, cherry-picking of data, and overblown self-promotion of one’s notions are basic elements of the progression of research fields in the real world! As scientists with access to the full range of evidence on these subjects, we can make a pretty informed interpretation of the likely significance of any CRF-cloud causality in relation to climate variation and its relevance to current concerns of greenhouse-induced warming. Unfortunately, Joe Public is confronted with a false picture of the subject in any investigation s/he might make via the web, or from the TV “documentaries” that have also disgracefully massacred this topic. As you suggest, this is all tied up with issues of scientific integrity.
Kevin McKinney says
Paul, it’s hardly true that “all else is extrapolated deduction.”
The deduction and calculation WRT AGW operate upon vast volumes of real-world data, obtained at considerable effort and expense, and parsed very carefully indeed. The observed responses of the various climate systems let us know that the “deductions” are largely correct.
For example, the ice albedo feedback–which is part of what makes the CO2 forcing “dangerous”–is not just deduction; it is also a matter of empirical data, obtained by researchers setting up camp and taking measurements under rather uncomfortable circumstances. (See, e.g., Hanesiak et al. 2001, “Local and regional observations of Arctic first-year sea ice during melt ponding.”)
Similar points obtain for atmospheric and oceanic circulation, etc., etc. There is an amazing amount of (perhaps inadvertent) dismissal involved in this idea that “all else is deduction.”
Rod B says
John P. Reisman (203), you miss the point. You’re just reiterating the well-stated line of reasoning that I am arguing against. What’s new?
Rod B says
Doug Bostrom (204), fair question. Kinda in rough priority: 1. the coefficient of the log (ln) relationship is mainly based on past numerical observations with little scientific support for its projected extrapolation. 2 (tie). The log relationship itself is to a large degree based on past observations with the same problem as #1 — though it has more scientific support than #1. 2 (tie). the mathematics and science of band spreading which keeps the concentration from saturating, IMO, is lacking beyond a good projected hypothesis level.
Rod B says
Patrick 027 (206), what makes 3.7 +/- 0.3 W/m2 a more solid (supported) forcing than 3.708 +/- 0.001 W/m2 in going from 400ppm to 800ppm CO2? If it’s due to the wider stated margins, what is the margin where it becomes solid? Why +/- 0.3? Why not +/- 0.6? 1.0? 3.7?
[Response: I don’t know what your hyper small error bar is from, but it is rather misleading. The 10% uncertainty in radiative forcing is related to the background climate – cloud, temperature and water vapour distributions for instance – identical radiative transfer codes will give a different global mean number as a function exactly where the clouds are etc. There is no way we know the current climate well enough to reduce that uncertainty to 0.003%. – gavin]
Richard C says
Rod B (212). Re your 1 & 2: may I suggest you pick up a Physical Chemistry book. This is hardly new science!
Doug Bostrom says
Rod B 8 August 2009 at 12:10 PM
Tractable I’m sure but not for me because I lack the necessary training.
To my naive ears it sounds as though 1 & 2 are closely related; a resolution of one question might eliminate the other. Though I can’t wade through the details without a repurposing of my life, I suppose a (though not the only) next logical question might be, “What would cause a breakdown of the previously observed behaviors we’re using as part of the basis for projections?”
Hank Roberts says
Rod, look at Gavin’s answer.
Then look at Robert Grumbine’s site, which is aimed at high school level understanding.
You’re asking why the error band is 0.3 out of 3.7 watts per square meter, rather than 0.001 or 0.003.
The error band is the uncertainty _in_the_data_available_.
See Robert’s several threads on how to determine trends — it’s the same statistical issue, you look at the data, look at how much it varies, and figure out (_see_his_site_ for this) how much uncertainty you have.
You can find this. You can understand it.
Just repeating oh noes, how can anyone possibly know, gets tedious.
You do better than that when you want to.
Hank Roberts says
PS, Rod, James has just posted a pointer to two pages with good explanations of the same issues:
http://julesandjames.blogspot.com/2009/08/on-statistical-significance.html
J. Bob says
208 – Brian
The following graph shows how Fourier spectral analysis can be used to compare sunspot activity with variations in temperature. The temperature base was the English long-term series. The bandpass filter only passed frequencies in the 0.06 – 0.12 cycles/year range.
http://www.imagenerd.com/uploads/t_est_03-0KLEO.gif
Patrick 027 says
“This is distinct from the ‘internal energy’ that is (per unit material) equal to cv*T; this internal energy includes translational energy in addition to the others.”
My bad. Actually it’s the integral of cv*dT if cv is not constant (and it generally is not). But changes in internal energy can be approximated as cv* changes in temperature over ranges where cv is approximately constant. And so on for enthalpy (cp*T), etc.
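[The point about non-constant cv can be made concrete with a short numerical check; the temperature-dependent cv below is a made-up illustration, not any real gas:]

```python
import numpy as np

def cv(T):
    """Illustrative (made-up) temperature-dependent specific heat, J/(kg K)."""
    return 700.0 + 0.2 * T

T = np.linspace(250.0, 300.0, 1001)
c = cv(T)

# Change in internal energy per unit mass: the integral of cv dT,
# done by the trapezoidal rule over a fine grid.
delta_u_exact = float(((c[1:] + c[:-1]) / 2.0 * np.diff(T)).sum())

# The constant-cv shortcut, with cv evaluated mid-range:
delta_u_approx = cv(275.0) * (300.0 - 250.0)
```

Here the two agree (37750 J/kg) because this cv is linear in T, so the mid-range value happens to be exact; for a strongly temperature-dependent cv the shortcut only holds over intervals where cv is roughly constant, which is the point being made.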
claudio costa says
http://www.springerlink.com/content/n57121r735134233/ Frank Arnold “Atmospheric Ions and Aerosol Formation” Space Science Reviews Volume 137, Numbers 1-4 / June, 2008 10.1007/s11214-008-9390-8
http://hal.archives-ouvertes.fr/docs/00/31/75/93/PDF/angeo-23-675-2005.pdf
A. Kasatkina and O. I. Shumilov:” Cosmic ray-induced stratospheric aerosols “30 March 2005 Annales Geophysicae, 23, 675–679, 2005: 1432-0576/ag/2005-23-675
http://www.agu.org/pubs/crossref/2000/2000GL012164.shtml
J. P. Abram; et al, “Hydroxyl Radical and Ozone Measurements in England During the Solar Eclipse of 11 August 1999” GEOPHYSICAL RESEARCH LETTERS, VOL. 27, NO. 21, PAGES 3437–3440, 2000
Patrick 027 says
Rod B. – your comment 212 is very inaccurate.
PS I’m not actually sure that the ~(?) 95 % confidence interval – if that’s what we want to go by – is +/- 0.3 W/m2, but Gavin didn’t correct us on that so it’s probably not far off (and I didn’t think it was far off before or I wouldn’t have mentioned it).
But anyway, generally such a transition from flimsy to firm will be graded, as you seem to imply, but the farther away you get from the range of reasonably expected values, the stronger the evidence supporting the claim that you will be wrong, and thus the more extraordinary work you will need to show otherwise.
Patrick 027 says
… If you claimed that the forcing is between 0 and 6 W/m2, you’d technically be correct, but needlessly vague.
… If you claimed that the forcing was 2.0 +/- 0.3 W/m2, or 5.5 +/- 0.3 W/m2, you’d have a lot of work to do to back it up (which I’d bet could not be done).
… If you claimed that the forcing was 3.708 +/- 0.001 W/m2, you’d also have a lot of work to do (and I might still bet against it because it could so easily be 3.702 or 3.711, etc.).
… So imagine how hard it would be to argue that it is 1.001 +/- 0.001 W/m2.
P.Wilson says
I’m still having difficulty understanding how CO2 traps heat in the atmosphere. It doesn’t have heat capacity itself, but transfers heat that would otherwise escape back to earth. Yet the atmosphere isn’t supposed to have heat capacity either, and if it does, it’s far less than that of CO2, so it’s assumed that the greenhouse effect acts only on oceans, landmasses, and other non-atmospheric matter. So if incoming radiation heats something to an optimum temperature, say 20C, then how does extra CO2 convert this optimum to a higher temperature? Are there any experiments on this? It was fascinating to read someone’s earlier post about bricks and their heat capacity, obviously greater than that of the atmosphere. However, how does a smaller temperature from longwave re-radiation increase an optimum temperature that results from shortwave radiation?
Hank Roberts says
What’s your source for this, P. Wilson?
> … the atmosphere isn’t supposed to have heat capacity either …
Says who, and why do you consider your source reliable on this?
Reasoning from that belief leads you completely astray.
You can look this stuff up. Here for example:
http://scholar.google.com/scholar?q=atmosphere+heat+capacity
If you don’t check the assumptions, you’re just inviting a lot of guys hanging out on the blogs to tell you what they think — which is often recreational typing and often comes with a lot of confusion and a lack of sources for the information. Try the first few hits from that search, then redo your question if you’re still having trouble. I think it’ll make sense.
Hank Roberts says
More for P. Wilson — this is a quote from an article you can find here at RC as a topic; this is the author’s revised version. This is by Spencer Weart; click the first link in the right sidebar under Science for more.
— excerpt follows—-
… We understand the basic physics just fine, and can explain it in a minute to a curious non-scientist. (Like this: greenhouse gases let sunlight through to the Earth’s surface, which gets warm; the surface sends infrared radiation back up, which is absorbed by the gases at various levels and warms up the air; the air radiates some of this energy back to the surface, keeping it warmer than it would be without the gases.) …
— end excerpt —-
http://www.aps.org/units/fps/newsletters/200810/weart.cfm
David B. Benson says
Brian Dodge (208) — You might care to read Tung & Camp (2008) for a recent determination of the temperature changes over the course of solar cycles.
Brian Dodge says
“The following graph shows how Fourier spectral analysis can be used to compare sunspot activity with variations in temperature. The temperature base was the English long-term series. The bandpass filter only passed frequencies in the 0.06–0.12 cycles/year range.”
Comment by J. Bob — 8 August 2009 @ 1:35 PM
I suspect that you are not such a moron that you are unaware that if you bandpass filter noise before frequency analysis/Fourier transform, you will get a peak in the frequency spectrum. But then again, you may have learned all you know of data analysis from McLean & Carter. If you are actually not so ignorant, what is the point of posting nonsense? Are you learning anything?
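Brian's point is easy to demonstrate: applying the FFT → mask → inverse scheme to pure white noise always produces a "peak" inside the passband, signal or no signal. Everything here, including the 350-point series length (roughly the span of the long English record), is illustrative.

```python
# Sketch: bandpass-filtering white noise by FFT -> mask -> inverse FFT
# (the scheme described above) leaves a spectral peak inside the
# passband even though the input contains no periodic signal at all.
import numpy as np

rng = np.random.default_rng(0)
years = 350                       # roughly the length of the record
x = rng.standard_normal(years)    # pure white noise, one sample per year

f = np.fft.rfftfreq(years, d=1.0)           # frequencies in cycles/year
X = np.fft.rfft(x)
mask = (f >= 0.06) & (f <= 0.12)            # the 0.06-0.12 cy/yr band
filtered = np.fft.irfft(X * mask, n=years)  # zero everything else

power = np.abs(np.fft.rfft(filtered)) ** 2
peak = f[np.argmax(power)]
print(peak)   # the "dominant period" always lands inside the passband
```

Run it with any seed: the location of the maximum is dictated by the filter, not by any real periodicity in the data.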
P.Wilson says
It’s a theoretical question: I received an email from the MET Office here in the UK asking them why the temperature plummeted during the solar eclipse. If greenhouse gases are more powerful than the effects of solar energy, then given all the CO2 emitted from India and China, the temperature should have stayed the same, or nearly the same, during the eclipse.
They replied that “CO2 doesn’t retain heat, as it doesn’t have heat capacity. It is its ability to transfer heat to the atmosphere that warms the atmosphere. The fall in temperature is in line with what we would expect during such an event”.
They also explained that you didn’t need an eclipse to verify how much cooler the temperature gets, as it can be observed between day and night. (OK, pretty obvious.) So what gets me now: where is this heat stored, if CO2 doesn’t hold it? Oxygen? Nitrogen? Or does non-atmospheric mass absorb it?
John P. Reisman (OSS Foundation) says
# 211 Rod B
Just add water and stir.
That’s my point. It’s hard to tell exactly what you are stating or asking sometimes, due to the varying degrees of ambiguity, or to specific phrasing that is often out of context.
You may be missing my point, however. As we add water (moisture) to the atmosphere (yes, I know the water was more of a joke) we will be amplifying the feedback effects, so if you limit your consideration to CO2 forcing alone, you will always miss the point.
Lawrence McLean says
Re #223, P. Wilson,
The heat capacity of gases is listed on the site:
http://www.engineeringtoolbox.com/spesific-heat-capacity-gases-d_159.html
I do not see 0.000 listed for any of them!
Ray Ladbury says
P. Wilson, you seem to be getting all wrapped around the axle based on an incorrect understanding of terminology. Heat capacity is merely the amount of energy required to raise the temperature of a mole of the material by a given amount (e.g. 1 degree). So, of course the atmosphere has a heat capacity. You seem to be thinking of energy as purely thermal energy, but electromagnetic waves (including IR light) are energy, too.
Hold your hand in front of a heat lamp. Why does it heat up? It is because the water in your hand absorbs the IR, and the resulting vibrational and rotational energy transfers to other molecules in the skin. The atmosphere is not as insubstantial as you think–there are nearly 10^22 CO2 molecules in every column of air with a base area of 1 square cm. That can stop a lot of radiation–and therefore a lot of energy.
Rod B says
P. Wilson (223), I’ll jump in again. 1) Like Hank said (though maybe not as nicely ;-) ), gases and the atmosphere most certainly do have heat capacity. In the technical sense that heat capacity is less than that of earth, water, or bricks, but all this means is that it takes less energy to heat the atmosphere by the same number of degrees (all else being equal, of course).
I’m not sure what you mean by the sun heating the earth to an optimum temperature. It heats it to whatever temperature the thermodynamics math dictates. As best I know, if there were no LW radiation (an impossibility, but a helpful thought experiment), my guess is that the Sun, given enough time, would heat the earth to a very, very high temperature, since the absorbed energy would have no way out. Likewise, the LW radiation is only indirectly related to the solar heating: find out what the surface temperature is and you can determine what the LW radiation is without knowing anything about the solar heating (though it helps in understanding the whole enchilada). Finally, the only thing that keeps the temperature of the earth finite is the LW radiation.
Rod B says
John P. Reisman (229), In response to someone’s clarifying question, I was simply explaining one (probably the largest) of my areas of skepticism and trying to properly define and confine it. I wasn’t stirring the AGW ocean, so to speak… ;-)
Hank Roberts says
> why the temperature plummeted during the solar eclipse
Same reason it plummets when a big cloud obscures the sun for a while. This was a huge area entirely blocked from all sunlight.
Besides losing the direct heat from the sunlight, when that happens you may also get moisture condensing. Last solar eclipse I saw long ago was through thin low cloud in western Oregon, and there was a spattering of rain moving right along with the shadow line.
Nobody’s saying CO2 has zero heat capacity; it’s a molecule, it can spin and bounce around among other molecules transferring energy like any other; if you cool it down enough you get a solid form (and under the right temperature and pressure conditions a liquid too). It has that heat capacity.
It also has the added feature, like H2O and chlorofluorocarbons and methane and other greenhouse gases, that those molecules will pick up infrared photons, turn the energy into vibration/rotation/stretch/wiggle/bump, and emit another infrared photon (unless they first collide with some other molecule and transfer some energy away in the collision).
Any gas has heat capacity; as Lawrence points out, look them up.
Point people have been trying to make is that CO2 isn’t acting like little tanker trucks cruising around soaking up energy and getting filled up with it.
Think of it — CO2 is a trace gas. Look through a sheet of window glass — it looks clear. Look crosswise through the length or width, and it looks dark greenish. There are some impurities in it, absorbing some of the light. Well, you’re looking through a hundred-odd miles of atmosphere, with a trace of CO2 in it, going on 400 parts per million.
Double the CO2, or double the impurities in the sheet of plate glass, and much more of the light that’s absorbable by that material does get absorbed.
Dang. Recreational typing. Look this up, you can find better clearer explanations. You will find Spencer Weart very helpful and clear.
And he will make the point that you cannot understand this just from questions and answers on blogs. You need the math.
Without the math, physics is poetry — at best. I know I’ve said that before.
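Hank's plate-glass analogy can be put in numbers with the Beer–Lambert law: the transmitted fraction through optical depth tau is exp(−tau). The tau values below are purely illustrative, not measured values for glass or for any CO2 band.

```python
# Illustrative Beer-Lambert sketch of the plate-glass analogy above.
# The optical depths are made up to show the qualitative behavior only.
import math

def transmitted(tau):
    """Fraction of radiation transmitted through optical depth tau."""
    return math.exp(-tau)

face = 0.05          # short path: looking through the face of the sheet
edge = 100 * face    # ~100x longer path: looking along the sheet

print(transmitted(face))       # ~0.95: looks clear
print(transmitted(edge))       # ~0.007: looks dark greenish

# Doubling the absorber doubles tau along every path:
print(transmitted(2 * face))   # the face still looks nearly clear
print(transmitted(2 * edge))   # the long path absorbs almost everything
```

The same qualitative point carries over to a long atmospheric path with a trace absorber: a small concentration times a long path can still add up to substantial absorption.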
Brian Dodge says
re David B. Benson — 8 August 2009 @ 6:08 PM
Fom Camp & Tung 2007 – Surface warming by the solar cycle as revealed by the composite mean difference projection.
“There have been thousands of reports over two hundred years of regional climate responses to the 11-year variations of solar radiation, ranging from cycles of Nile River flows, African droughts, to temperature measurements at various selected stations, but a coherent global signal at the surface has not yet been established statistically.”
Which means that one has to dig really hard (or creatively, in a good way like Camp & Tung, not like “The bandpass filter only passed frequencies in the 0.06 – 0.12 cycles/year range”) to see the solar signal in the temperature record.
Unfortunately woodfortrees doesn’t have the NCEP data that they use, but if one uses the same start time (1960), and the woodfortrees composite temperature index
http://www.woodfortrees.org/plot/wti/from:1960/mean:30/detrend:0.7/fourier/magnitude/from:1/to:20/plot/sidc-ssn/from:1960/mean:30/scale:0.002/fourier/magnitude:1/from:1/to:20
a correlated frequency peak appears. (Don’t tell J. Bob or Manacker – they’ll accuse me of cherrypicking in my earlier reference. ;-) )
J. Bob says
#227 – Brian – Nothing was said about band passing noise before the Fourier transform. The band pass was the Fourier transform, mask and inverse. The posting was a simple example of what one can do with Fourier convolution. As far as learning Fourier methods, my instructors were Blackman, Tukey and Cooley, and the posting was similar to an example they used.
Jacob Mack says
P Wilson, you may find a Gen Chem book useful; the only thing I would add is heat capacity is an extensive property and specific heat is an intensive property.
Patrick 027 says
P. Wilson – it’s not a matter of greenhouse gases ‘retaining’ heat in the atmosphere as a thermal mass. Adding anything to the atmosphere will raise its heat capacity (units of heat energy per unit temperature change) by adding mass, even if the specific heat (heat capacity per unit mass) stays the same. Since CO2 and H2O are triatomic molecules, I’d expect adding CO2 and the resulting H2O vapor feedback to increase the specific heat as well as the mass of the atmosphere as a whole (although with the uptake of some of the additional CO2 by the ocean that results from increased CO2 partial pressure, the net effect of fossil fuel combustion plus oceanic uptake is an atmospheric gain of CO2 but a greater molar loss of atmospheric O2).
*HOWEVER*, this is a very minor effect: the resulting changes in atmospheric heat capacity are negligible.
What is very important is that greenhouse gases and other agents contribute opacity to the atmosphere at some wavelengths in the longwave (LW) part of the spectrum – wavelengths longer than roughly 4 microns, where radiant energy fluxes are dominated by emissions from the Earth’s surface and atmosphere – as opposed to shortwave (SW) wavelengths dominated by solar radiation (at most wavelengths within these two intervals, the dominance is great – the solar and terrestrial emissions are only similar near the cutoff wavelength).
Opacity can take different forms. Opacity can involve scattering radiation, as clouds and aerosols do a great deal to SW radiation; on the microscopic scale, this can involve reflection, refraction, and diffraction. Absorption followed by reemission of the same photon energy is effectively scattering. There is also absorption followed by fluorescence or phosphorescence. There is reflection off of a surface, such as with a sheet of aluminum foil (some forms of scattering are reflections off of rough surfaces or from a broken-up material). Then there is absorption and conversion to enthalpy or internal energy. (Absorption and conversion to chemical energy will generally lead to either conversion of chemical energy to enthalpy, or conversion of chemical energy to radiation, so in net can be described as a combination of the above).
Any one of those forms could be used to construct a radiatively-forced greenhouse effect, but:
1. fluorescence and phosphorescence are not generally important processes in planetary atmospheres, at least regarding the energy budget of the bulk of the mass of the atmosphere and whatever lies beneath it – these occur when the energy of absorbed photons is not thermalized sufficiently rapidly relative to the time scale of subsequent emission. When energy is thermalized rapidly (by molecular collisions) relative to the rate of photon emissions, energy is redistributed toward an equilibrium distribution among various states of the population of molecules characteristic of local thermodynamic equilibrium (*LTE*). Phosphorescent aerosols are not a common thing – maybe on some interesting alien world (?).
2. A planetary atmosphere generally won’t have a solid or smooth fluid reflective surface within it.
This leaves scattering and absorption within the atmosphere and at the surface, with maybe some (quasi-)specular reflection at the surface as well (such as is seen with SW radiation reflected off of calm water).
Under Earthly conditions, scattering plays a minor role in LW radiation fluxes.
Aside from those forms of radiation that can be emitted in conditions not in local thermodynamic equilibrium (LTE), materials emit radiation as a function of their temperatures and emissivities, and at LTE, at any one wavelength, emissivity = absorptivity. A perfect blackbody has an emissivity of 1; at any one wavelength (and polarization, when that matters), in any given direction along a given path over some distance, the fraction of radiation absorbed from a direction (the absorptivity) is equal to the emitted radiation as a fraction of perfect blackbody radiation (the emissivity) toward that direction. Blackbody radiant intensity (flux per unit area normal to the direction, per unit solid angle of directions, and per unit wavelength or frequency interval if different wavelengths are being counted separately) increases with temperature in a nonlinear fashion – the fractional increase is very large at shorter wavelengths but approaches a linear proportionality at very long wavelengths – and the emission spectrum peaks at a wavelength that is inversely proportional to the temperature (this is why solar and terrestrial radiation can be approximated as divided into SW and LW radiation: the sun’s effective surface (optically) is several thousand Kelvin, while the Earth’s surface averages around 288 K and the atmosphere is mostly colder).
The net radiant intensity along any path at any point is the difference between the intensities in opposite directions. Opacity shifts the distribution of distances over which photons travel between emission and absorption toward shorter values (scattering does this by redirecting photons so that some are absorbed nearer to where they were emitted than otherwise). The net radiant intensity increases if the temperature difference between the points of emission and absorption for photons in one direction (reverse for photons in the other) is larger (though it is also larger if both temperatures are raised by the same amount without increasing the difference). If the opacity increases, then, unless there are temperature fluctuations on a spatial scale significantly smaller than the distances between emission and absorption, the net intensity decreases, because from any point the photons passing by are coming from and going to closer points at more similar temperatures. Thus, the net radiant energy flow is reduced.
(Integrating the radiant energy intensity in different directions, weighted by the cosine of the angle from vertical, over a hemisphere of solid angle symmetric about the vertical axis, gives the radiant energy flux across a horizontal surface, per unit area of that surface.)
If the net upward LW radiation is decreased while the net downward SW radiation (total downward minus scattered/reflected radiation coming back up) remains the same, then there is an increase in the net downward energy flux. A net downward energy flux will tend to increase the temperature below that point, which will tend to increase the upward LW flux, so that the temperature distribution changes until there is radiative equilibrium – except when convection is also involved. (See the above comment about the importance of radiative forcing at the tropopause level, with this clarification: regionally, the upper atmosphere above the tropopause can be disturbed from radiative equilibrium by motions forced by kinetic energy supplied from the circulation in the troposphere, which converts heat energy into kinetic energy. Motions can convert heat energy to kinetic energy when warmer air rises as colder air sinks; the reverse motion can convert kinetic energy back into heat energy. As I understand it, most of the kinetic energy produced in the troposphere is converted back to heat within the troposphere and at or just below the surface, and with an increase in entropy, so that it cannot be effectively recycled into kinetic energy. The flux of kinetic energy out of the troposphere is (at least globally) a fraction of the total kinetic energy generation, which itself is small compared to radiative energy fluxes, so the global average tropopause-level radiative forcing is still a key value.)
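The SW/LW split Patrick describes can be made concrete with Wien's displacement law (peak wavelength ≈ b/T). The temperatures below are the usual round numbers for the solar photosphere and the mean surface, used here purely for illustration.

```python
# Wien's displacement law: blackbody emission peaks near b / T.
# This illustrates why solar (SW) and terrestrial (LW) radiation
# separate cleanly around the ~4 micron cutoff mentioned above.
b = 2.8978e-3                     # Wien's displacement constant, m*K

sun_peak_um = b / 5778.0 * 1e6    # solar photosphere, ~5778 K
earth_peak_um = b / 288.0 * 1e6   # mean surface temperature, ~288 K

print(f"Sun:   peak near {sun_peak_um:.2f} microns (visible)")
print(f"Earth: peak near {earth_peak_um:.1f} microns (thermal IR)")
```

The two peaks sit roughly a factor of twenty apart in wavelength, one on each side of the ~4 micron cutoff, which is what makes the SW/LW approximation workable.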
Kevin McKinney says
P. Wilson, you wrote: “If greenhouse gases are more powerful than the effects of solar energy, then given all the CO2 emitted from India and China. . .”
A couple of misconceptions here are messing you up. First, the greenhouse effect does slow down the cooling rate, which means that during an eclipse today it will cool a bit more slowly than it would have during pre-industrial times. However, greenhouse gas forcings are something like 1.6 Watts per square meter, while solar input is something like 340 Watts per square meter! So it’s still going to cool plenty. But the big point is that solar input drives the greenhouse effect; it’s not a case of two completely independent factors.
There’s a term that I think may have fallen out of favor as more sophisticated concepts prevailed: you would hear it said that CO2 “thermalized” radiation. It’s maybe not a bad image even now, though; you could think of the surface and atmosphere together as converting solar (shortwave) radiation into infrared radiation (heat radiation, more or less). This radiation then heats up pretty much anything around, as Ray described the heat lamp doing. Air, soil, trees, people, and especially water.
Second–and this is more of a nit-pick, but not without significance–by far the greatest proportion of the anthropogenic CO2 in the atmosphere today was emitted by the First World, and especially the US. If you were to liken national total CO2 emissions to a horse race, China would be the fastest horse on the track today–and the US wouldn’t be much slower–but China would be pushing from a long, long way back. India wouldn’t even be in it.
Jacob Mack says
See specific heat capacity of C02 here: http://www.engineeringtoolbox.com/carbon-dioxide-d_974.html
Patrick 027 says
… to put that last point more clearly – while the upper atmosphere above the tropopause is not actually in radiative equilibrium, it is close to radiative equilibrium on a global, horizontally-averaged basis, as I understand it.
And of course, seasonal and diurnal and other fluctuations will prevent actual instantaneous radiative equilibrium, or even radiative-convective equilibrium, from occurring; the point is that a tendency to remain near equilibrium in the time average is a prerequisite for a stable climate, and time-averaged disequilibrium will cause climate change.
Hank Roberts says
This
> I received an email from the MET Office here in the UK asking them why
That sounds like you got it third hand from someone who corresponded with them?
Are you sure you’re quoting exactly from the original? Or have you got the original? Could it be whoever sent the text to you was paraphrasing?
Brian Dodge says
J.Bob I’m not following you – the caption on the graphics you presented said “Filtered Temperature(Bandpass + Lowpass Filters)”. Are you saying that the unfiltered temperature signal isn’t noisy? Or that the peak in the power spectrum that your graphics shows isn’t the result of filtering?
claudio costa says
http://www.atmos-chem-phys-discuss.net/9/10575/2009/acpd-9-10575-2009.html
B. A. Laken and D. R. Kniveton “The effects of Forbush decreases on Antarctic climate variability: a re-assessment” Atmos. Chem. Phys. Discuss., 9, 10575-10596, 2009
“In an attempt to test the validity of a relationship between Galactic cosmic rays (GCRs) and cloud cover, a range of past studies have performed composite analysis based around Forbush decrease (FD) events. These studies have produced a range of conflicting results, consequently reducing confidence in the existence of a GCR-cloud link. A potential reason why past FD based studies have failed to identify a consistent relationship may be that the FD events themselves are too poorly defined, and require calibration prior to analysis. Drawing from an initial sample of 48 FD events taken from multiple studies this work attempts to isolate a GCR decrease of greater magnitude and coherence than has been demonstrated by past studies. After this calibration composite analysis revealed increases in high level (10–180 mb) cloud cover (of ~20%) occurred over the Antarctic plateau in conjunction with decreases in the rate of GCR flux during austral winter (these results are broadly opposite to those of past studies). The cloud changes occurred in conjunction with locally significant surface level air temperature increases over the Antarctic plateau (~4 K) and temperature decreases over the Ross Ice Sheet (~8 K). These temperature variations appear to be indirectly linked to cloud via anomalous surface level winds rather than a direct radiative forcing. These results provide good evidence of a relationship between daily timescale GCR variations and Antarctic climate variability”
[Response: One needs to be careful citing discussion papers – this one was withdrawn following the discovery of errors in the calculations and a lack of “statistical rigour”. – gavin]
Barton Paul Levenson says
P. Wilson,
There’s a brief explanation here:
http://BartonPaulLevenson.com/Greenhouse101.html
Carsten Brinch says
Svensmark does not deny the effects of greenhouse gases. He just offers an explanation for anomalies in temperature which cannot be explained by CO2, CH4 or other greenhouse gas effects.
If you doubt the assumption that cloud formation and development is a matter of days – please look at your local TV weather forecast! There will be satellite pictures to assure you!
Kindly
Carsten Brinch
Hank Roberts says
Carsten, why do you think they’re connected? Who are you trusting about “anomalies … which cannot be explained”? Please point to your source.
Mark says
Robert Bateman states:
“I’m not a scientist and I don’t prove anything.”
You do if you want to have a theory considered.
Without proving YOUR point, you HAVE no point.
Mark says
“And please explain what you mean by a ‘rational sense’? Ron”
Comment by Ron
I mean “not irrational”, a statement based on reason and not rhetoric, “gut feeling” or lack of knowledge.
And the analogy is a good one, which may be why you want to “not get it” (like the nasty guy in “Big” with Tom Hanks, using that to get back at Tom’s character…).
Mark says
“127 – Mark – How many aero engineers do you know? ”
Read it in the same newspapers that you read, J. Bob.
Ones that say “Engineers can now prove Bees can fly”.
It wasn’t all that long ago.
And we’ve been flying complex jet planes for MUCH longer than that.
All without a scare that these doofuses who think bees can’t fly will design a jet plane that won’t fly either…