There is a new paper in Science this week on changes to atmospheric visibility. In clear sky conditions (no clouds), this is related mainly to the amount of aerosols (particulate matter) in the air (but is slightly dependent on the amount of water vapour as well, which is corrected for in this study). The authors report that the clear-sky visibility has decreased almost everywhere (particularly in Asia) from 1973 to 2007, with the exception of Europe where visibility has increased (consistent with the ‘brightening trend’ reported recently). Trends in North American stations seem relatively flat.
There is another story that didn’t get as much press when it came out late last year but that is highly relevant to this issue – whether the efforts the Chinese authorities made to reduce air pollution ahead of the Olympics last year had any impact. To the extent that they did, they might point the way to reducing aerosols and other pollutants across Asia, but they might also reveal how hard it is to do so.
The press release and abstract for the Science paper link their results to the ‘global dimming’ trends we have reported on in the past, but it’s worth perhaps pointing out that previous studies (and the term ‘global dimming’ itself) have referred to all-sky conditions. So that includes changes in clouds – which are obviously a big factor in how much sunlight gets to the surface. Looking at the clear sky conditions (i.e. only when there are no clouds) can help attribute changes to aerosols or atmospheric dynamics, say, but since aerosols affect clouds (the ‘indirect effect’) as well as circulation too, it is only a partial estimate of the true impact of aerosols.
But getting back to the Olympics…. Monitoring of pollutants near the surface has improved enormously in recent years with the various satellite instruments now in orbit (MOPITT, GOME, OMI and TES for instance (sounds like a comedy revue team, no?)). These instruments detect specific frequencies where pollutants are known to absorb and so can give a bird’s-eye view of where the pollutants are and how they are changing. Among other things, the satellites can detect ozone, NOx, SO2, the total amount of aerosols and carbon monoxide. Each of these has a different atmospheric lifetime and so can be used either to detect point sources (from pollutants that only last a short time) or long-range transport of pollution (from the longer-lived pollutants). NO2 (a big component of NOx – which lumps together NO, NO2 and the other reactive nitrogen oxides) is very short-lived and so tells you a lot about local sources. Carbon monoxide has a longer lifetime (a couple of months) and so can show the long-range impacts. Many of these pollutants have related industrial sources (car exhausts, coal burning, industrial production etc.) and so can be used as proxies for many other pollutants (such as specific aerosols) which can’t (yet) be directly measured.
What do the results show? The team at GSFC have released preliminary images from the NO2 analysis showing conditions before and during the pollution controls. In both images, Beijing shows up as a huge hotspot of pollution, but, relatively speaking, the levels during the Olympics were significantly smaller:
August 2008 levels were therefore about 50% lower than in a similar period the year before. Meanwhile values at other hotspots in China were steady or got even worse. So there was a significant effect, but the scale of the task was indeed Olympian.
David B. Benson says
Timothy Chase (275, 276) — Thank you for the clarification.
Martin Vermeer says
Snorbert:
I’m flattered :-)
What about trying a slightly bigger cube? Like, big enough to be opaque at the wavelengths considered?
Your claim was a much more sweeping one, if I may quote:
AKA “moving the goalposts”.
Yes, evidently…
He has read it — at least as much of it as he could before disgust overcame him.
If you don’t believe his judgment as a physicist, you’re not gonna believe his analysis as a physicist either. I’d rather see Ray do something useful and/or enjoyable instead :-)
Mark #288:
You got it.
I was one of those that got bored in math class when the teacher started dotting his deltas and crossing his epsilons… but that’s what it is.
Timothy Chase says
David B. Benson wrote in 301:
Actually I figured you already knew pretty much everything I had to say and were keeping your response short — albeit the humorous “hazy” — for lack of time. But I had some more time. However, I probably should have responded to veritas36 (comment 250) directly.
Chris Colose says
snorbert,
Please visit the site “Rabett Run” to see a recent string of posts by a few different contributors on the G&T paper.
Mark says
Snorbert, the problem with using a cavity at the wavelength of the radiation being considered is that you have resonance and destructive interference to worry about.
Read up on the Casimir effect. It relies on that: if you put two conductive surfaces close together, you cannot fit an EM wave with a wavelength longer than the distance between the plates, so that energy is excluded from that region. The inability to contain that wavelength means that there is more energy *outside* the two plates than between them, and therefore a pressure force will be exerted on the two plates, because the energy difference is effectively a mass difference.
Now if you have a cavity EXACTLY the wavelength of interest in diameter, then the EM wave will have its zero points at either side of the cavity. And if there’s no EM wave, there’s no EM response and therefore no EM absorption. You no longer have a blackbody cavity, you have a waveguide. And that’s a completely different kettle of piscine lifeform.
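For scale, a minimal back-of-the-envelope sketch (standard ideal-conductor formulas; the 1-micron gap is just an illustrative number):

    import math

    hbar = 1.0545718e-34  # reduced Planck constant, J s
    c = 2.998e8           # speed of light, m/s

    a = 1e-6  # plate separation: 1 micron

    # Ideal-conductor Casimir pressure between parallel plates: P = pi^2 hbar c / (240 a^4)
    P = math.pi**2 * hbar * c / (240 * a**4)

    # Longest wavelength the gap supports (lowest TE-mode cutoff): lambda_max = 2a
    lam_max = 2 * a

    print(f"Casimir pressure at a 1 micron gap: {P:.2e} Pa")       # ~1.3e-3 Pa
    print(f"Longest supported wavelength: {lam_max*1e6:.0f} microns")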
Mark says
“Mark, please read the article. Seriously, logic will only take you so far in refuting published work. Actually reading the article _will_ help you here.”
I did.
Nothing explaining that the dust stopped sunspots being discerned, and a lot about how there were no observations on those days.
What happens when you increase the condensation nuclei in a moist atmosphere?
What may be happening here is you’re talking about one effect of dust and I’m talking of a different one.
Hank Roberts says
Mark, that’s right — two observers, located far apart, have similar gaps in their sunspot records. They did not, as far as I know, record _why_ they did not record observations on those days. The correlation is interesting.
If you’re sure that stratospheric dust couldn’t possibly have caused this, what else might have?
The long-term argument has been that the sun really was very quiet; but the pattern compared to the volcanoes argues that dust might’ve interfered.
We don’t have the same baseline atmosphere they did — the seeing was on average better, less dust and smoke. How much better?
Again, I’m not arguing for or against the paper, just saying if you’re sure he’s wrong, and have a solid argument, that might be publishable.
Patrick 027 says
Re 283 – 286 nailed it.
Regarding a small cube of wavelength-size: Wouldn’t that also disprove that the sun radiates energy? That the light in the sky is a figment of our imaginations?
G&T p.12:
“If such an extreme effect existed, it would show up even in a laboratory experiment involving concentrated CO2 as a thermal conductivity anomaly. It would manifest itself as a new kind of ‘superinsulation’ violating the conventional heat conduction equation. However, for CO2 such anomalous heat transport properties never have been observed.”
Under some conditions (very short distances, high densities), thermal conductivity will become more important than radiative energy transfer; opacity cannot reduce energy fluxes below those sustained by conduction.
However, the optical properties of CO2 and other gases can be observed in the spectrum of radiation emitted to space from the Earth and atmosphere, and can also be measured in laboratories.
G&T p.20-21:
“In classical radiation theory radiation is not described by a vector field assigning to every space point a corresponding vector. Rather, with each point of space many rays are associated (Figure 3). This is in sharp contrast to the modern description of the radiation field as an electromagnetic field with the Poynting vector field as the relevant quantity [99].”
The Poynting vector describes the net radiant energy flux at any one point and at any one time, which at precise locations and times will fluctuate as individual photons pass by. There is no disagreement here. The understanding of the electromagnetic and quantum mechanical mechanisms underlying these phenomena has advanced, but geometric optics (ray-tracing) still applies to larger-scale radiant fluxes. The former explains how, when, and where scattering, reflection, refraction, absorption, and emission occur; to calculate their bulk effects, it is only necessary to know that these things happen, and how much, under different conditions.
“The constant [sigma] appearing in the T^4 law is not a universal constant of physics. It strongly depends on the particular geometry of the problem considered.”
It is proportional to the square of the real component of the index of refraction (of the material or space through which radiation is propagating, not of an emitter or absorber outside the location being considered) and thus to the inverse square of the phase speed of radiation (see equation (27) on p.20 – it looks correct, if ħ (h with a line drawn through it) is equal to h/2π). For some materials, the index of refraction varies by direction (and polarization?). None of this is of much consequence to radiative energy fluxes through space or air in bulk, though it will apply to the microphysics of the cloud droplets and ice particles, etc., that give rise to the macroscopic optical properties of clouds.
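For what it’s worth, here is a quick sketch computing sigma from fundamental constants and showing the n^2 scaling (textbook relations, nothing specific to G&T; the choice of water for n is just an example):

    import math

    h = 6.62607015e-34   # Planck constant, J s
    k = 1.380649e-23     # Boltzmann constant, J/K
    c = 2.99792458e8     # speed of light in vacuum, m/s

    # Stefan-Boltzmann constant for vacuum (n = 1): sigma = 2 pi^5 k^4 / (15 h^3 c^2)
    sigma = 2 * math.pi**5 * k**4 / (15 * h**3 * c**2)
    print(f"sigma = {sigma:.4e} W m^-2 K^-4")   # 5.6704e-08

    # In a medium of (real) refractive index n, the phase speed is c/n, so the
    # thermally emitted flux scales as n^2 sigma T^4 -- "not universal" only in that sense.
    n = 1.33  # e.g. water
    print(f"effective sigma in water: {n**2 * sigma:.4e}")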
“The T^4-law will no longer hold if one integrates only over a filtered spectrum, appropriate to real world situations.”
True, but in the real world, scientists use more accurate, wavelength-dependent descriptions of optical properties to calculate effects on radiant fluxes. See my own comments above and any listed at my comments 7,13 here: http://www.skepticalscience.com/climate-sensitivity.htm
“Many pseudo-explanations in the context of global climatology are already falsified by these three fundamental observations of mathematical physics.”
NO WAY. (Or perhaps they mean that the simple explanation given to the lay person is not quite accurate and precise. Well, of course it isn’t. Nor is the Earth a perfect sphere, nor is it any simple matter for water vapor to condense and precipitate (pages can be filled with the details of how that happens). But it is acceptable for introductory purposes to say that the Earth has a greenhouse effect that traps heat, that the Earth is spherical, and that water condenses from vapor into clouds (under certain conditions) and precipitates (under certain conditions). On the other hand, the grade-school explanation of how airplane wings produce lift is not quite right: the air above does not ‘have to keep up’ with the air below – it can actually end up farther along in the flow than the air below – it speeds up because it accelerates toward the wing while moving around the wing, which requires a pressure gradient away from the wing above it, which, when integrated in two dimensions, results in a low pressure above the wing.)
I shall waste no more time reading G&T. (Is that being closed-minded? You have to balance being open-minded with being efficient as well as being skeptical. Sometimes statements obviously stink of snake-oil. Or, to avoid implying anything nefarious: G&T are writing about subjects they do not appear to understand. I don’t plan my day around the horoscope.)
If you really want to know about the subject matter, check out any links from RealClimate, or in comments, or the resources I listed in comments 286,306 here:
http://www.skepticalscience.com/Arctic-sea-ice-melt-natural-or-man-made.html
… or my comments here:
http://www.skepticalscience.com/Is-Antarctic-ice-melting-or-growing.html
and/or here:
http://www.skepticalscience.com/volcanoes-and-global-warming.htm
…
or take it from a professional, for example, the book I refer to in comment 144 above.
Patrick 027 says
Okay, here’s the be-all and end-all of it:
Do you accept that there is thermally-emitted radiation? (Which I earlier referred to as blackbody radiation – radiation that may or may not come from a perfect blackbody but obeys some of the rules of blackbody radiation, given the emissivity and absorptivity of a real body.)
Do you accept the Second Law of Thermodynamics?
When there is thermal energy, any given energy transition may occur at a rate depending on the temperature and the nature of the transition. Some of those energy transitions can emit photons. The reverse transition can absorb those photons. When in local thermodynamic equilibrium, the Second Law requires absorptivity from a direction = emissivity toward a direction, at a location, along the same path, for a given wavelength and polarization. That path encounters crystal lattices, molecules, atoms, electrons and ions, etc., that are responsible for emission and absorption, and thus are mathematically equivalent in how much they absorb and emit and scatter to objects of some size with some albedo at any given wavelength, etc.; when that albedo is zero, they act like perfect blackbodies. This describes a bulk property of many such particles over some period of time – at any one moment an individual particle may or may not emit or absorb.
Local thermodynamic equilibrium requires all the energy involved in these energy transitions be ‘thermalized’ (I think that’s the term). An example of energy not involved in such transitions is the bulk motion – macroscopic kinetic energy – until viscosity would transform it into thermal energy. Molecular collisions tend to thermalize energy. CO2 and other molecules in the vast majority of the mass of the atmosphere collide with other molecules frequently enough – relative to photons coming from or going to layers with different temperatures – that their energy is thermalized and the different gaseous substances have the same temperature (or can be very closely approximated as such), so when one kind of gas radiates energy, it radiates as if part of a blackbody at the temperature of the air (a thin part of a blackbody that may not be a perfect blackbody in and of itself, although it can approach that over sufficiently thick layers relative to temperature variations), and when it absorbs radiation, it transfers it to the rest of the air.
Absorptivity cannot be greater than 1 or less than 0, so in LTE (local thermodynamic equilibrium), emissivity cannot be greater than 1 or less than 0.
If mixtures of gases with sufficiently infrequent collisions with each other occupy the same space but have different temperatures, they could be treated as two separate entities that are within themselves in LTE – or more generally, it may be possible to assign different temperatures to different subgroups of particles. More generally, energy that is not thermalized may not be distributed in such a way as to be easily assigned a temperature.
——————-
PS when I mentioned assigning a temperature to radiation, I forgot about polarization. If the same intensity over all polarizations is concentrated into a subgroup of polarizations, then it has less entropy. Its temperature would be that of the blackbody that emits that intensity for those polarizations.
Patrick 027 says
Re 281,282 – good points. The reason why the corona thermally emits such a very small amount compared to the sun is that it is optically very thin (I think it actually does have a rather hot temperature, which must be related to the random component of motions and not the bulk kinetic energy that it has). Although also, I think some of the coronal radiation is light scattered by solar wind particles.
Lawrence Coleman says
I would like to take this opportunity to pay a belated tribute to Steven Kimball, aka Johnny Rook of ‘Climaticide Chronicles’, who died of cancer on the 2nd of March. His hope and vision was much like mine: to leave the planet a little better for his children, and thus he devoted a large part of his final years to projecting through his blog the dangers we all face from climate change. I contributed to him information about Australia’s recent crazy weather north and south resulting in over 200 deaths, and he asked me to keep in regular contact with him; thus it was a great shock to hear of his untimely (only 53) death. There is a lot of information available to anyone on his site, written brilliantly and persuasively, about all topics relating to climate change. Rest in peace Steven Kimball; you and your blog, along with ‘real climate’, have made me vow to carry the torch and raise people’s awareness of this ‘elephant in the room’ for my little boy’s sake as well as yours.
Mark says
I’ll have to use the old “correlation != causation”.
That statement *really* means if you don’t have a causation then your correlation means NOTHING.
It is no proof of anything.
Alan of Oz says
Hank, I wasn’t really thinking of 150+ yrs ago, and quite possibly they did get fewer days suitable for observing due to major volcanoes back then. I didn’t get as far as examining what they did to relate them because that really wasn’t the question I was answering, and both Mark and I had already pointed to the “if you can see the sun” small print.
Disclaimer: I live, and in some cases work, with academics, but I’m not an academic and have zero interest in writing papers that have “sun-spot counting in the first half of the 19th century” as part of the title.
Barton Paul Levenson says
snorbert: Try here for a few analyses ripping G&T to shreds:
http://rabett.blogspot.com/
If you want a quick precis, here’s what I wrote on Open Mind in response to the six points in G&T’s abstract:
(a) there are no common physical laws between the warming phenomenon in glass houses and the fictitious atmospheric greenhouse effects,
Yes, “greenhouse effect” doesn’t really describe how a greenhouse works. Scientists have known that for longer than G&T have been alive.
(b) there are no calculations to determine an average surface temperature of a planet,
Take the temperature in representative areas and take the average. They had the figure approximately right as far back as the late 19th century.
(c) the frequently mentioned difference of 33 C is a meaningless number calculated wrongly,
It’s the difference between the Earth’s mean global annual surface temperature of 288 K and its radiative equilibrium temperature of 255 K (I get 254 K myself). Yes, if Earth didn’t have an atmosphere, its albedo would probably be different and Te would be a little different, but so what? What possible relevance does that have?
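For the record, the 255 K figure is a one-line calculation, and a toy one-layer model (a minimal sketch only – not how real climate models work; the emissivity value is picked purely for illustration) shows how a gap of roughly 33 K arises:

    sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S = 1366.0        # solar constant, W m^-2
    A = 0.3           # planetary albedo

    # Radiative equilibrium temperature with no atmosphere
    Te = (S * (1 - A) / (4 * sigma)) ** 0.25
    print(f"Te = {Te:.0f} K")   # ~255 K

    # Toy single-layer atmosphere, transparent to sunlight, LW emissivity eps:
    # energy balance gives Ts^4 = Te^4 * 2 / (2 - eps)
    eps = 0.78  # illustrative value only
    Ts = Te * (2 / (2 - eps)) ** 0.25
    print(f"Ts = {Ts:.0f} K")   # ~288 K, i.e. about 33 K warmer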
(d) the formulas of cavity radiation are used inappropriately,
The formulas of cavity radiation aren’t generally used at all in atmosphere physics unless one is discussing blackbodies. The Stefan-Boltzmann law:
I = s T^4
is the basic “cavity radiation law.” For a graybody one adds an emissivity term, and for a real body one adds a wavelength or frequency subscript to the emissivity term and accounts for the fraction of radiation output in the range of interest. Usually you can use the Planck law for the blackbody fraction, then multiply by the appropriate fractional constants.
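As a sketch of that last step – numerically integrating the Planck law to get the fraction of output in a band (the 13–17 micron band edges are arbitrary, chosen only for illustration):

    import numpy as np

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    sigma = 5.67e-8

    def planck(lam, T):
        # Spectral radiance per unit wavelength, W m^-3 sr^-1
        return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

    T = 288.0
    lam = np.linspace(13e-6, 17e-6, 2000)              # 13-17 micron band
    band_flux = np.pi * np.trapz(planck(lam, T), lam)  # hemispheric flux in band
    total_flux = sigma * T**4
    print(f"fraction of blackbody output in 13-17 um at {T:.0f} K: "
          f"{band_flux / total_flux:.2%}")             # roughly 19%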
(e) the assumption of a radiative balance is unphysical,
Very true. The Earth’s atmosphere is in radiative-convective balance, not radiative balance. G&T apparently think climatologists don’t know this.
(f) thermal conductivity and friction must not be set to zero, the atmospheric greenhouse conjecture is falsified.
Thermal conductivity and friction are covered in the expressions for surface cooling by sensible heat loss, which is part of what makes up the “convective” part of “radiative-convective equilibrium.” They are only set to zero for theoretical simplifications usually shown to students.
Mark says
“(f) thermal conductivity and friction must not be set to zero, the atmospheric greenhouse conjecture is falsified.”
But the earth is in space. In space, there’s nothing for you to rub against. There’s nothing for you to conduct through. It’s space.
So how are you supposed to remove energy from the earth system by conduction/convection/friction? Invoke the Aether?
Patrick 027 says
Re 314 – Nice job!
Patrick 027 says
Re 263:
“Is there a way to know if these 100 million would-be climate refugees might be better prepared for flood and famine if they had jobs, transportation/roads, education/expertise, emergency services, healthcare, etc.?”
I suspect migration (as opposed to building dikes/levees) will be the most cost-effective adaptation to sea level rise. Adaptation of farming to changing conditions may involve a combination of migration, changes in crops, and increasingly smart farming (having back-up crop plans for droughts, floods, cold and heat waves – see “Against the Grain” by Richard Manning, buckwheat example, I think). There are ultimate limits to what any plant can do, especially over a short time; evolution is slow, intentional breeding can only go so fast, and genetic engineering has risks (and it’s a lot harder to take salt out of water than to put it in – that’s an analogy to new genes, or new embodiments of them, let loose in the environment). There is room for improvement in irrigation efficiency. People can also adapt their diets (less meat). Peaks in solar power could be ‘stored’ by desalinating seawater.
(For that matter, greater use of perennial crops or some other changes could also potentially help mitigate climate change as well as help adapt to it. Progress in biofuel technology will open up markets for damaged (from weather) and diseased crops, and some crop residue, as well as non-food biofuel crops that can be grown in conditions outside where food and feed crops are best grown, etc. – switchgrass, wildflowers, used coffee grounds, used cooking oil, banana peels, paper plates, paper napkins, cupcake and muffin paper cups and the stuff that sticks to them, expired mayonnaise, grass clippings, crumbs, sewage, algae, sawdust, etc. Eating less meat would help, as would wiser use of fertilizer. I hope there are ways to reduce methane emissions from cows (because I really like cheese).)
Infrastructure – buildings, pipes, etc. – will need some updating (okay, it may need that anyway, but perhaps not so much; much infrastructure doesn’t migrate very fast, by the way).
But we also can’t forget about ecosystem services (fresh water and soil, some pollination, biodiversity as potential for future growth/maintenance of crop, medicine, and material resources; wetlands’ role in flood management).
Winter is a great disease control program (see link to be posted later).
Individuals and even whole nations can reduce their dependence on climate and weather by switching away from reliance on farming. BUT SOMEBODY has to grow the food.
—–
Adaptation has costs (loss of property values, realized when a farmer moves, or when people leave coastal areas and drought-stricken regions, for example; costs of dealing with increased disease; R&D for crops, etc.) – and climate change and adaptation have material, psychological, social, political (migrations, among other things), aesthetic, and scientific costs (can’t study that glacier anymore!).
Mitigation has some costs.
It makes perfect sense that some tax be applied to fossil C emissions – as a fossil fuel C sales tax, for example – and also that other emissions (farming, deforestation, cement) be taxed at a rate comparable to the equivalent CO2 emission, to reflect the public cost as a price signal in the market that drives changes in demands and investments in supplies (energy efficiency, clean energy, less fossil fuel use); and that some of that revenue go towards mitigation and adaptation R&D, sequestration, subsidies, adaptation cost compensation, cuts in other taxes, programs to reduce population growth, etc.
Alan Millar says
Can all those posters who feel that climate trends should be considered over periods like 30 years and feel that trends can be masked for a while by short term variation please consider the following graphs of global temperatures since 1880.
http://www.woodfortrees.org/plot/hadcrut3gl/from:1880/to:1940/trend/plot/hadcrut3gl/from:1880/to:1940
http://www.woodfortrees.org/plot/hadcrut3gl/from:1940/to:1978/trend/plot/hadcrut3gl/from:1940/to:1978
http://www.woodfortrees.org/plot/uah/from:1979/to:1995/trend/plot/uah/from:1979/to:1995
http://www.woodfortrees.org/plot/uah/from:1996/to:2000/trend/plot/uah/from:1996/to:2000
http://www.woodfortrees.org/plot/uah/from:2001/to:2010/trend/plot/uah/from:2001/to:2010
Could they please state in clear language what they consider the true climate trend to be and what is the short term variation.
Could they also state what they find so alarming in these trends.
Alan
[Response: Ironically enough given the website you use, you are completely missing the wood for the trees. The issue is not that the 20th C trend is alarming – it is that our understanding of why it trended implies that we can expect much more in the future. That is what is alarming. You also misunderstand attribution – it doesn’t matter for the models or the scientists whether a forcing causes cooling (i.e. a volcano) or warming (CO2) or whether it was a short term effect or a long term effect. All that matters is whether the forced effect can stand out from the noise of the internal variability. – gavin]
Michael says
Patrick, what do you mean by ‘adaptation cost compensation’ listed under mitigation costs?
Barton Paul Levenson says
Thanks, Patrick! :)
snorbert zangox says
Gavin,
As I have said, reading the Gerlich & Tscheuschner article is difficult at best. The authors spent most of the first 40 pages refuting example after example (ad infinitum) of explanations of the greenhouse gas effect by comparison to horticultural hot houses. I now well understand that hot houses work by trapping hot air. They do not work because the glass blocks infrared radiation. The atmosphere has no blankets that keep us warm. All of these things I already know. The effect, which Gerlich & Tscheuschner call the carbon dioxide greenhouse effect (accepting that other gases behave similarly), includes only absorption and re-emission of long-wave IR.
The rest of the paper addresses two other major points. The first is the calculation of absorption and emission of light and heat by the earth. The second is the modeling of absorption and re-emission of IR by atmospheric gases.
Gerlich & Tscheuschner develop a set of theoretical partial differential equations that one would have to solve if one were to calculate the temperature distribution on the surface of an airless obliquely rotating globe (page 68, et seq.). They then conclude,
“Rough estimates indicate that even these oversimplified problems cannot be tackled with any computer. Taking a sphere with dimensions of the Earth it will be impossible to solve this problem numerically even in the far future. Not only the computer would work ages, before a ‘balanced’ temperature distribution would be reached, but also the correct initial temperature distributions could not be determined at all.”
As I understand it they maintain that the problem of securing an analytical (or even a thorough numerical) solution to the gray body temperature of an airless earth is intractable.
I have read the article by Arthur P. Smith and compared his approach to that of Gerlich & Tscheuschner and find that there is less disagreement than I thought I might find. Smith develops a partial differential equation for a rotating globe, albeit a non-oblique globe. He does not mention the difficulty in finding analytical solutions for the equation. He proceeds to develop a simpler approach that allows him to calculate the surface temperature at several values of a parameter, λ. He makes other simplifying assumptions and finally calculates the expected temperatures on several planets as a function of λ.
Smith next develops differential equations for the case of a rotating globe, still non-oblique, having a variable albedo. The only variation that he considers is the increasing albedo of ice near the poles. Finally, he shows the calculation of the gray body temperature of the solar system rocks.
It looks to me like what Smith has done is to develop a theoretical basis for computation of the gray body temperature that incorporates a couple of arbitrary parameters and then used NASA data to find appropriate values for the parameters. I could be wrong about this last point and am willing to consider another analysis of Smith’s procedure.
However, it seems clear to me that Gerlich & Tscheuschner are correct in their conclusion that the IPCC approach to calculating gray body temperature is empirical and not analytical.
The last major point that Gerlich & Tscheuschner make is that the model that IPCC uses to calculate the absorption and re-emission of long wave IR by carbon dioxide is wrong. Gerlich & Tscheuschner have not denied that the Earth loses heat to the cosmos by radiative transfer. They are convinced that the overly simplistic model that IPCC uses does not rigorously describe what is really happening. The IPCC approach appears to be to develop relatively simple physical models and to adjust parameters to force those models to fit the historical data. The Smith paper appears to present an example of this approach in Section IV. Statistical models work well for many applications; their weakness is that you may not extrapolate beyond the envelope of the data. Antoine equations can interpolate vapor pressures quite accurately between measurement points, but you extrapolate beyond the data range at your own risk. To the extent that Gerlich & Tscheuschner are correct about the IPCC approach, there is risk that the ignored factors that probably exist in the data beyond the historical data set will ruin the predictive value of the models.
You may believe that what Gerlich & Tscheuschner advocate is to incorporate needless complexity and difficulty into the process of analyzing what is happening to our climate. You may also believe that should we undertake what Gerlich & Tscheuschner appear to be advocating, the result will be little different from the results that IPCC has obtained with the statistical approach. That could be correct. It could just as easily be incorrect.
I believe that a more analytical approach is justified. I also believe that the result of the extra work will be worth the effort.
Patrick 027 says
Re 319 –
A farmer’s property value drops due to climate change; he sells the farm at a low price and is partially compensated for the difference.
Expensive infrastructure and property in southern Florida becomes worthless; people can’t sell their property and are partially compensated for the difference. (Not completely compensated, because one could argue that they could have seen it coming.)
Some details to be worked out.
Obviously the policy will necessarily be more complicated because of the international scale of the problems and solutions…
(PS yes, some people in some areas could benefit from climate change (up to a point; generally, I expect the balance to be more and more obviously a net cost for more and more people with greater changes); but there are also other problems with coal mining in particular and with CO2 (ocean acidification) and other pollutants that come from burning fossil fuels (mercury).)
Patrick 027 says
Re 321 – analytical vs empirical is a false dichotomy – although to your credit you did mention numerical at one point.
Computer models used in climate simulations are numerical – they have to be, as there are no simple equations that encompass the entirety of radiation in the atmosphere at all wavelengths, or the distribution of clouds and pressure, etc., at any one moment in time (although one can separate fields into a number of linearly-superimposed sinusoidal functions in flat rectangular spaces, or some other set of orthogonal functions in more general domains such as on a sphere).
Computer models generally use parameterized equations to describe radiation – this is just to speed computation. These can be tested against line-by-line models, which essentially are as accurate as one can get, unless one wants to count individual photons (there is no need for that kind of accuracy). Climate modelling is a boundary value problem; sensitivity to initial conditions (the butterfly effect) is not a major concern, because the goal is not to forecast individual instances of cyclones, ENSOs (in long-term simulations, at least), etc., but the larger time-scale pattern of those things.
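To illustrate why parameterizations are tested against line-by-line results, here is a toy comparison (the line positions and strengths are entirely made up; the point is only that averaging optical depth over a band is not the same as averaging transmittance):

    import numpy as np

    nu = np.linspace(600.0, 700.0, 5000)      # wavenumber grid, cm^-1
    centers = np.linspace(605.0, 695.0, 10)   # fictitious line centers
    gamma, strength = 0.5, 3.0                # Lorentz half-width, line strength

    # "Line-by-line" optical depth: a sum of Lorentz profiles
    tau = sum(strength * gamma**2 / ((nu - c0)**2 + gamma**2) for c0 in centers)

    t_lbl  = np.trapz(np.exp(-tau), nu) / (nu[-1] - nu[0])   # average of transmittances
    t_grey = np.exp(-np.trapz(tau, nu) / (nu[-1] - nu[0]))   # transmittance of averaged tau

    print(f"line-by-line band transmittance: {t_lbl:.3f}")
    print(f"one grey tau for the band:       {t_grey:.3f}")
    # The two differ: gaps between lines leak radiation, so a good
    # parameterization must reproduce the line-by-line answer, not the shortcut.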
Climate models are formulated according to mainly established physics (conservation of energy, momentum, angular momentum, ideal gas laws, conservation of mass of substances except for chemical reactions, latent heat of phase changes). This is especially true of explicitly resolved (grid-scale) phenomena. There is some parameterization that is based on … well, see these:
https://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/
https://www.realclimate.org/index.php/archives/2009/01/faq-on-climate-models-part-ii/langswitch_lang/de
The point is that models are not parameterized in order to produce a fit to the instrumental record of climate change. They are not fundamentally statistical models, and although there is some uncertainty, it is not mainly from the inherent problem of extrapolating outside of a statistical sample.
(Then there’s paleoclimatic evidence, etc…)
It isn’t necessary to be exact (analytical); within limits, approximations can be reasonably expected to give approximately accurate results. (Taking out a handful of relatively small wavelength intervals from the radiation calculations would not result in major changes to radiative forcing values.)
It is true that it is not necessarily the case that the amount of warming by the total greenhouse effect (about 33 K) for global and annual average conditions should be the same as that resulting on a differentially and cyclically heated and cooled sphere (and obviously, many aspects of the climate cannot be resolved by considering a single column of air), but a calculation for global time average conditions (or representative conditions) is a good starting point. The more complex climate models do the more detailed calculations. In totality, for a long-term climatic equilibrium, the radiative fluxes at and above the tropopause (up and down, LW and SW) must sum to nearly zero (a tiny flux of kinetic energy does come out of the troposphere and into the stratosphere and some of that is converted to heat energy, but this is very small in comparison to other parts of the thermal energy budget); below the tropopause, convection makes up the difference. This will not be the case at all times at all places but must be the case for the total radiant and other heat fluxes across each closed surface, such as each vertical level taken over the entire globe (forming approximately concentric spheres). Considering the multidimensional nature of climate, there are other requirements for long-term equilibrium, involving balancing the fluxes (of mass, momentum, and heat) into and out of smaller closed surfaces, with patterns in imbalances that occur over smaller time units, etc…
Persistent imbalances/shifts in these fluxes drive long-term changes.
dhogaza says
As does anyone else who’s paid more than passing attention to the issue, including every climate scientist on the planet.
It’s a 40 page strawman. It’s the equivalent of saying “American football doesn’t exist because the Statue of Liberty play doesn’t actually involve the Statue of Liberty”.
If that alone isn’t enough to convince you that the authors are trying to mislead people with a bunch of sophisticated-sounding handwaving, I imagine that no amount of explanations from physicists will convince you.
Especially since I see no evidence that you’re paying attention to anything physicists here and elsewhere have said about the paper, which despite your claims, includes plenty of analysis.
You’re exhibiting classic denialist behavior.
Ray Ladbury says
Snorbert, OK, now given that the first 40 pages of the G&T snowjob are so transparently irrelevant to the “problem” they are supposedly addressing, shouldn’t this tip you off that maybe, just maybe, the rest is BS as well? Numerical doesn’t mean “wrong”. Most differential equations are solved numerically. That is what science does: it takes on a difficult problem that cannot usually be solved exactly, makes simplifications, approximations and estimates, and gets a solution that works well enough. G&T are a joke, and you are the only one who hasn’t gotten the punchline yet.
Patrick 027 says
Any ideas about the issue I mention in 151 above?
Alan of Oz says
RE # 321
“Rough estimates indicate that even these oversimplified problems cannot be tackled with any computer. Taking a sphere with dimensions of the Earth it will be impossible to solve this problem numerically even in the far future. Not only the computer would work ages, before a ‘balanced’ temperature distribution would be reached, but also the correct initial temperature distributions could not be determined at all.”
Without getting into the rest of the argument, I don’t see why you can’t set the Earth’s initial temperature to a random value, run the simulation, and get the same end state. The time taken to do this depends on the definition of “rough estimates” and the amount of number-crunching power you have.
I think the author underestimates similar problems that computers are routinely used to solve. As a computer scientist I can say that at the “high end” we are not that far off a cellular-level simulation of the mammalian brain (google “Blue Brain Project”).
The reason they use historical values to seed ANY numerical simulation is to TEST if it can PREDICT the rest of the historical data set that comes after it. Before that was possible, similar techniques were used to solve the three-body problem when planning space probe trajectories, and before that the technique was used by the very first mechanical computers to create artillery tables.
Climate models grew from weather models, which were also an early military/civilian problem for computers. Ironically, I believe climate is easier to predict, but only at the low end of the error bar, since the top end depends on biology and other complex feedbacks, most of which would seem to make things worse if they eventuated.
Looking at the simpler low end:
The only mathematical difference between numerical and analytical solutions is that delta is not zero in the numerical method. Analytical solutions are only available where nature allows us to divide by zero; the vast majority of “real world” problems do not have an analytical solution. As a matter of fact, we can’t even integrate the normal distribution curve analytically! Sure, an analytical solution would be way more efficient, but the point is we don’t NEED an analytical solution to get a good answer.
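On that point, a minimal sketch: the Gaussian has no closed-form antiderivative, yet a crude trapezoid rule gets as close as you like:

    import math

    def gauss(x):
        # Standard normal density
        return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

    # Trapezoid rule over [-1, 1]: should approach ~0.6827 ("one sigma")
    n, a, b = 10000, -1.0, 1.0
    h = (b - a) / n
    total = 0.5 * (gauss(a) + gauss(b)) + sum(gauss(a + i * h) for i in range(1, n))
    print(f"P(-1 < x < 1) ~= {total * h:.6f}")          # 0.682689...
    print(f"erf check:        {math.erf(1 / math.sqrt(2)):.6f}")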
A good demo simulation from Japan’s Earth Simulator can be found here (scroll half way down to watch the embedded movie). Note that not all of the simulated effects are visible, e.g. ocean currents and heights are not shown. Also take a look on YouTube for a clip of the hardware; it was quite a revolutionary machine, purpose-built for climate simulations, and as the WP entry states, “not to be confused with the videogame, SimEarth”.
Computer tech still moves quickly; the ES is now around #70 on the top 500 supercomputers after claiming #1 for its first 3 years of operation. There are now at least a dozen more powerful machines simulating climate. The difference between the paper in question and all that hardware is a concrete example of the difference between pure and applied maths. Until someone actually finds an analytical solution to the n-body problem, space probes will always need in-flight trajectory tweaks.
IMHO if the author wants to find out anything about his equations then he is going to have to look at “rough estimates” from numerical analysis techniques.
Barton Paul Levenson says
snorbert writes, after a long defense of G&T:
I believe that anyone who keeps looking for ways to defend an obvious piece of garbage like the G&T article has taken as a premise that they must have something to say, and therefore rejects any evidence that they don’t. Let me put this as simply as I can: G&T have nothing worthwhile to say. Nothing. No matter how carefully you interpret their paper, it’s still worthless. Stop looking for something that isn’t there. If somebody publishes a paper asserting that the Earth is a cube, it’s futile and self-defeating to go into long explanations of why the author may have meant something more subtle in calling Earth a “cube,” and arguing about what exactly a cube is and how it differs from a sphere. The paper is just prima facie stupid, and it remains stupid no matter how you try to justify it.
Mark says
For those who DEMAND that reducing emissions can only come from wasting money:
http://weblog.infoworld.com/sustainableit/archives/2009/03/pc_power_manage_2.html
Mark says
Mind you, some will complain about ANYTHING to avoid having to say “saving power saves money”. E.g. “When do you do updates then???” Uh, WoL exists. Machines off. Once a month after Patch Tuesday, machines woken up via WoL, patches installed, machines shut down.
Or “Huh, they waste that and much more while I sit about waiting for my PC to boot”. Well, they waste that much time taking a dump. Shove a cork up there and save your company billions!
snorbert zangox says
Patrick 027, dhogza, Ray Ladbury, Alan, Barton Paul Levenson;
I read your responses but can find no indication that any of you have read the Gerlich & Tscheuschner paper. You certainly have not responded to my penultimate post (321) on this subject.
Gerlich & Tscheuschner claim and support their claims that the IPCC modeling approach assumes an invalid physical model for the insolation and radiation of heat from the Earth and an invalid physical model for radiation among gas molecules. I do not believe that a numerical solution (or even an analytical solution) to the differential equations for an incorrect model is any better than or different from an empirical model.
[Response: They are simply wrong. The physical basis for radiative transfer modelling is validated every time you look at a satellite picture. – gavin]
snorbert zangox says
Gavin,
I think that a satellite picture proves that surfaces emit or reflect electromagnetic radiation at lots of wavelengths. I do not think that proves that a carbon dioxide (or any other) molecule receives and emits radiation in the same way that bulk objects do.
[Response: If you look at an SSMI picture of the sea ice, or the water vapour amount from GOES, or the CO2 distribution from AIRS, you are looking at a highly processed retrieval using multiple spectral intervals and using our full understanding of radiative transfer and all of the multiple absorptions due to dust, water vapour, trace gases and aerosols. If you think we don’t know anything about radiative transfer, or that it’s all based on some fundamental misunderstanding of the physics, you are a fool. – gavin]
Chris Colose says
snorbert,
G&T is a fraudulent puff-piece which is clearly succeeding in its intention to confuse. These are the same authors who think a pot of boiling water invalidates the atmospheric greenhouse effect. If you’re learning from these pretentious idiots rather than picking up an atmospheric science book, you’re just bound to look foolish. The whole paper is a pile of erroneous remarks, irrelevancies, and accusations.
Eli Rabett, myself, and Duae, among others, have very clearly shown at Rabett Run why G&T have absolutely no understanding of thermodynamics or atmospheric radiation. A more formal rebuttal is already available by Arthur Smith. It’s a shame we need to spend our time on this, but an even greater shame that they actually got this published in a respected physics journal.
Hank Roberts says
Snorb — look up the various kinds of lasers. Solids; gas molecules in rotation; gas molecules in vibration.
http://books.google.com/books?id=Dgk-HBVUxJcC&pg=PA60&lpg=PA60&dq=laser+co2+ruby+crystal&source=bl&ots=rHD6VD6UYB&sig=pubEXFVXhmLdpoGioytSO8LObTo
The physics works, for different forms of matter.
You’d know if it were different, you could look it up.
You rely on this stuff every day.
“Reality is whatever refuses to go away when I stop believing in it.”
— Philip K. Dick (1928-1982)
Hank Roberts says
Patrick, you asked about a possible typo above, on cloud feedback two places in the last IPCC report; have you checked at the source for an errata list? (I haven’t.) If not found, have you emailed the question to the people who produced the document? From the home page:
e-mail: IPCC-Sec@wmo.int (they’ll have the authors’ contact info for the two pages where you found an inconsistency). I’m sure this is obvious to you, but going to the source is always a good exercise.
Patrick 027 says
Re 335 – Thanks for the suggestions (I hadn’t noticed the errata sections before). I just checked and there was nothing about what I found, so I’ll try emailing.
Ray Ladbury says
Snorbert, I got my PhD in physics over two decades ago. In over 30 years of studying physics, G&T is the only paper I’ve ever read that made me angry. It made me angry because it was so transparently a fraud. Scientists are busy people. They usually read several papers a week. Knowing this, when they write a paper, they don’t waste 40 pages on irrelevant examples. And that is not the only irrelevancy they bring in–the paper is full of them.
As to the rest, if G&T were correct, lasers would not work, satellite measurement would not work, IR spectroscopy would not work. The physical world says G&T are wrong. The way the paper is written leaves no doubt that it is a fraud – out there to snare the gullible layman by masquerading as a scientific oeuvre.
Uncle Eli’s post at Rabett Run quite effectively eviscerates G&T’s ideas about modeling of Earth’s climate, as does Arthur Smith’s more involved takedown. If you haven’t read this, read it.
Patrick 027 says
Tying up loose ends:
Sample of commercially available PV (photovoltaic, not to be confused with PV, potential vorticity) modules:
“Real Goods Solar Living Sourcebook” John Schaeffer, Doug Pratt, 2005
http://books.google.com/books?id=im-No5TYyy8C
and
http://books.google.com/books?id=im-No5TYyy8C&pg=PA59&lpg=PA59&dq=weight+of+PV+module&source=bl&ots=EjhMG5B8US&sig=xRNhG2OAgQEzKwnIKrS_a0ukj6I&hl=en&ei=aaqhSdLJGpqWsAO_1ry5CQ&sa=X&oi=book_result&resnum=3&ct=result
Modules used in calculations:
Sharp 185
Sharp 167
Sharp 165
Sharp 140
Sharp 70 (triangular)
Sharp 123-Watt
Sharp 80-Watt
Kyocera KC120
Evergreen EC-115
Evergreen EC-110
Evergreen EC-55
Evergreen EC-51
Uni-Solar 64-Watt
Uni-Solar 42-Watt
Uni-Solar 5-Watt
—
CdTe cells:
From:
http://stockology.blogspot.com/2007/11/first-solar-has-dark-future.html
“Tellurium’s crust abundance is 1 pbb versus 37 ppb for platinum (pbb is “parts per billion”). Tellurium is mainly produced as a byproduct from the anode slime accumulated during copper refining. But not all copper mines contain significant amount of tellurium. Chile produces 1/3 of the world’s copper but virtually nothing in tellurium. According to USGS and Arizona State Geologist Lee Allison, the world produces any where from 160 to 215 metric tons of tellurium a year.”
“Tellurium was traditionally used in metal alloys and other uses. Demand from emerging new applications, like DVD discs, digital camera, computer flash memory and CPU thermoelectric cooling, among other things, has caused a severe shortage in recent years, and drove the price from below $4 a pound to over $100 in 2006, according to Lee Allison. Jack Lifton on Resource Investor suggested that investors could sense the shortage and start to hoard physical tellurium, adding fuel to the fire and causing a huge tellurium price run.”
“How much tellurium does FSLR use? They use about 7 grams of cadmium and about 8 grams of tellurium in each of the 2 feet x 4 feet CdTe solar panel. That’s roughly 135 metric tons per each 1 gig watts (GW) of products.” …
—
From:
http://www.absoluteastronomy.com/topics/Tellurium
“Treatment of 500 tons of copper ore typically yields one pound of tellurium.”
—
… to be continued….
Patrick 027 says
1 pound/500 tons = 1 ppm – a lower limit of Te concentration in Cu ore based on the last source.
If Cu ore is 1 % grade, then Te/Cu = 100 ppm at least. (How much Cu is recovered from the ore?)
If Cu is present in common rocks at 70 ppm and Te is present at 7 ppb (some sources say 1 ppb in crustal rocks, although one says 10 ppb (Encyclopedia Britannica)), then Te/Cu = 100 ppm in common rocks.
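Spelling out that arithmetic (assuming short US tons of 2000 lb, as the source seems to):

    # 1 pound of Te per 500 (short) tons of ore
    te_per_ore = 1.0 / (500 * 2000)                   # = 1e-6, i.e. 1 ppm of the ore
    print(f"Te in ore: {te_per_ore * 1e6:.1f} ppm")

    # If the ore is 1% Cu, the Te/Cu mass ratio is at least:
    print(f"Te/Cu from ore: {te_per_ore / 0.01 * 1e6:.0f} ppm")   # 100 ppm

    # Crustal rocks: 7 ppb Te vs 70 ppm Cu gives the same ratio
    print(f"Te/Cu in crust: {7e-9 / 70e-6 * 1e6:.0f} ppm")        # 100 ppm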
From:
http://en.wikipedia.org/wiki/Copper
“Most copper ore is mined or extracted as copper sulfides from large open pit mines in porphyry copper deposits that contain 0.4 to 1.0 percent copper. Examples include: Chuquicamata in Chile and El Chino Mine in New Mexico. The average abundance of copper found within crustal rocks is approximately 68 ppm by mass, and 22 ppm by atoms.”
(1%, 68 ppm in close agreement with Encyclopedia Britannica)
—-
PS about http://stockology.blogspot.com/2007/11/first-solar-has-dark-future.html
– comments are interesting. Remember my point that Te prices can get much much higher before significant price increases in CdTe solar cells.
Also, from closely related source:
http://stockology.blogspot.com/2008/04/tellurium-supernova-has-erupted.html
“At 3 microns CdTe layer thickness, there’s about 15 grams of CdTe per 2 feet x 4 feet panel of 70 watts. Allow some production waste, 0.25 grams/watt CdTe is reasonable. FSLR produced 77 MW in Q4,07, that’s a consumption of roughly 19.25 metric tons of CdTe. At over US$500 per kilogram, that’s worth $9.625M of purchase from VNP. Add CdS, which also came from VNP, total purchase should be almost US$11M for the quarter.”
(The area given, 8 square feet, is 0.743 square meters. 70 W per panel implies 94.2 W/m2. I haven’t checked some of the other numbers from these stockology sites.)
70 W / 15 g CdTe = 4.67 W/ g CdTe. The amount of waste allowed in the above quote implies 4 W/g CdTe. I had used 4.67 W/g in my calculation.
I had also used, in terms of Te, assuming no waste,
70 W / 7.975 g Te.
(About 53.16 % of the mass of CdTe is from Te – from periodic table atomic mass values).
The numbers given by this site and the closely related site seem about right in terms of layer thickness and density (about 6.7 kg/L), and the mass ratio of Te to CdTe is about right.
$500 per kg of CdTe adds $0.107 per peak W, and 0.102 cents/kWh over the equivalent of 60 years at rated power output / 5 (divide by 5 for an average of 200 W/m2 incident solar power; rated power (peak Watts) is for 1000 W/m2 incident solar power).
An increase in the cost of Te of $10,000 / kg would add $1.14/peak W, and 1.08 cents/kWh.
20,000 metric tons of Te is enough for 37,619 metric tons of CdTe and about 176 GW rated power, or at 200 W/m2 average incident solar power on the panels, about 35.1 GW average power output.
The effective energy density (for 60 years at rated power equivalent) relative to the CdTe layer is about 1,770,000 MJ/kg, which is about 136,000 times coal electricity (coal at 32.5 MJ/kg, it can vary – conversion to electricity assumed 40% efficient).
(Of course, giving the energy density of a single component based on total energy production may seem a bit meaningless. The numbers can be used this way: the energy per unit mass used to produce that layer can be divided by the above energy density to give a contribution to energy payback time as a fraction of the effective life of 60 years at rated power (a bit fuzzy given a gradual decay – it could actually be more like 70 or 80 years to produce that amount of energy).)
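A short sketch reproducing the numbers above (same assumptions as before: 15 g CdTe per 70 W panel, 60 years at rated power divided by 5 for a 200 W/m2 average resource, Te at 53.16% of CdTe by mass):

    g_per_W = 15.0 / 70.0    # g CdTe per rated W (no waste)
    te_frac = 0.5316         # Te mass fraction of CdTe

    # Cost added per rated W at $500/kg CdTe (i.e. $0.5/g)
    print(f"${0.5 * g_per_W:.3f}/W from CdTe at $500/kg")        # ~$0.107/W

    # Lifetime electricity per rated W: 60 yr at rated power / 5
    kWh_per_W = 60 * 8760 / 5 / 1000                              # 105.12 kWh
    print(f"{0.5 * g_per_W / kWh_per_W * 100:.3f} cents/kWh")     # ~0.102

    # A $10,000/kg rise in the Te price alone:
    print(f"${10.0 * g_per_W * te_frac:.2f}/W")                   # ~$1.14/W

    # Effective energy density of the CdTe layer vs coal electricity
    MJ_per_kg = kWh_per_W * 3.6 / (g_per_W / 1000)
    print(f"{MJ_per_kg:,.0f} MJ/kg, about "
          f"{MJ_per_kg / (32.5 * 0.4):,.0f} x coal electricity")  # ~1,770,000 and ~136,000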
—-
Decay rates – an exponential decay in power output of 0.5 % per year results in the equivalent energy production of 60 years at rated power over an actual time of about 71.4 years, and over indefinite time, total energy production approaches almost 200 years at rated power.
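Checking those figures (exponential decay at r = 0.5%/yr; cumulative output in rated-power years after T years is (1 − e^(−rT))/r):

    import math

    r = 0.005  # fractional output decay per year

    T = -math.log(1 - r * 60) / r          # time to accumulate 60 rated years
    print(f"{T:.1f} actual years for 60 rated years")    # ~71.3
    print(f"{1 / r:.0f} rated years as T -> infinity")   # 200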
—-
I have notes taken from “The Cambridge Encyclopedia of Earth Science” on the energy of extraction of Cu from ores of various grades:
mass fraction, MJ/kg of Cu (calculated from kWh/kg)
0.02, 54
0.01, 79.2
0.005, 126
0.002, 324
0.000259 (seawater – is this relative to total or to non-water component?), 1800
0.00007 (common rock), 5400
I may have rounded the common rock value up from 60-something and I may have gotten the seawater concentration of Cu from another page or source…
Subtracting 2 MJ/kg of chemical energy (a rough round figure), I found a rough fit that – I’m guessing – might tend to fit many mineral resources:
Energy of extraction = chemical energy + 1.6472 MJ/kg * (mass fraction)^(-0.8444).
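A quick comparison of that fit against the tabulated values (the fit is mine and rough, as stated):

    data = [(0.02, 54), (0.01, 79.2), (0.005, 126), (0.002, 324),
            (0.000259, 1800), (0.00007, 5400)]  # mass fraction, MJ/kg of Cu

    for frac, measured in data:
        fit = 2.0 + 1.6472 * frac ** -0.8444   # chemical energy + power law
        print(f"{frac:>9.6f}: fit {fit:7.1f} vs tabulated {measured}")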
a little bit left….
Patrick 027 says
“I may have rounded the common rock value up from 60-something ” … that’s 60-something ppm.
Hank Roberts says
Hat tip to: http://www.spacemart.com/reports/Beijing_extends_post-Olympics_car_rules_report_999.html
(Not the full scale Olympic rules, and only automobile limits as I read it; perhaps useful for comparison over the longer term)
Brief excerpt below:
by Staff Writers
Beijing (AFP) April 6, 2009
Beijing has extended its post-Olympics traffic control measures for one year after a successful initial effort at easing road congestion and curbing pollution, state media reported Monday.
The rules, first introduced in September last year following more stringent rules during the 2008 Olympic Games, will take 930,000 of the city’s 3.6 million vehicles off the roads every weekday, the China Daily reported.
“Beijing’s air quality is getting better,” Li Kunsheng, head of the vehicle management section of the Beijing municipal environmental protection bureau, was quoted as saying.
He said daily vehicle emissions had fallen 10 percent since the measures were introduced on September 20, the newspaper reported…..
Patrick 027 says
“The Geography of Poverty and Wealth” by Jeffrey D. Sachs, Andrew D. Mellinger, and John L. Gallup, Scientific American, March 2001, pp.70-75, http://www.earth.columbia.edu/sitefiles/File/about/director/documents/SCIAM032001_000.pdf
p.74:
“Winter could be considered the world’s most effective public health intervention.”
—
I had mentioned something about this above somewhere but didn’t give the reference, so there it is.