Russell says
Coauthors are invited for a paper on a topic at the bleeding edge of radiative heat transfer and animal physiology: the Underground Greenhouse Effect.
As some small furry creatures snooze through the winter in sealed burrows containing up to 13% exhaled CO2 and as little as 4% O2, how does the IR opacity of atmospheres enriched in CO2 by exhalation figure in the rate of heat loss in such environments as burrows, warrens, and, for that matter, sleeping bags?
Are any extant codes applicable to such problems?
wili says
I would be interested in anyone’s insights into a (maybe) hypothetical situation:
According to Shakhova (March 2011), the methane coming from the East Siberian Arctic Shelf is about 8 million tons. For the purposes of simplicity I will round this up to 10 megatons. If we multiply this by methane’s global warming potential of 105 (Shindell 2006), we get about one gigaton of CO2-equivalent from this source per year (compared to about 30 gigatons of CO2 from all direct human activity).
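(A quick check of that arithmetic for anyone who wants to rerun it; the sketch below simply reuses the comment’s figures, which are not verified here:)

```python
# Back-of-envelope check of the comment's arithmetic. The 10 Mt/yr
# emission and the GWP of 105 are the commenter's figures, not verified.
ch4_mt_per_yr = 10.0                      # Mt CH4/yr, rounded up from ~8
gwp_ch4 = 105.0                           # global warming potential, as cited
co2e_gt = ch4_mt_per_yr * gwp_ch4 / 1000  # convert Mt to Gt
human_co2_gt = 30.0                       # Gt CO2/yr, direct human activity
print(f"{co2e_gt:.2f} Gt CO2e/yr = {100 * co2e_gt / human_co2_gt:.1f}% "
      f"of direct human CO2 emissions")
# -> 1.05 Gt CO2e/yr = 3.5% of direct human CO2 emissions
```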
Reports earlier this year were that there was a ‘dramatic increase’ in methane release from the Arctic, which prompted a sudden mission by US and Russian researchers, put together ‘on short notice,’ to investigate.
My question is, if ‘dramatic’ here ends up being an increase of an order of magnitude, how long would it take for this forcing to significantly affect temperatures, particularly in the Northern Hemisphere?
The answer involves at least two considerations that I can identify:
How fast would the methane mix into the broader atmosphere?
How long would it take for the heat to build up that would be held in the troposphere by this new quantity?
(I suppose that whether this is a one-time emission or a new and increasing level of regular annual emissions from the Arctic would also have an effect on the answer.)
Thanks ahead of time for any insights or speculation.
Thomas says
Russell: Thinking about the CO2 column density within the burrow. Thirteen percent is roughly 300 times the atmospheric concentration. So if the animal-to-burrow-wall distance is 10 centimeters, there is as much CO2 per unit area as in roughly 30 meters of free air. If we made our atmosphere have a uniform density it would be maybe 5 kilometers deep, so we have less than 1% as much CO2 opacity within the burrow as within the atmosphere. However, CO2 absorption lines are saturated near line centers, so we would still get some absorption/reradiation within the burrow’s air. Assuming the air in the burrow is warmer than the walls, there would be some insulative effect, although I don’t think it is too great.
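(A minimal sketch of that column-density estimate; the ~390 ppm free-air background is an assumed value, and the rest follows the comment’s figures:)

```python
# Rough check of the burrow CO2 column argument. The ~390 ppm ambient
# level is assumed for illustration; other figures follow the comment.
burrow_co2 = 0.13                      # 13% CO2 in the burrow air
ambient_co2 = 390e-6                   # ~390 ppm CO2 in free air (assumed)
enrichment = burrow_co2 / ambient_co2  # ~330x
gap_m = 0.10                           # animal-to-wall distance, 10 cm
equiv_air_m = gap_m * enrichment       # equivalent free-air path
atmos_m = 5000.0                       # uniform-density atmosphere depth
print(f"~{enrichment:.0f}x enrichment; ~{equiv_air_m:.0f} m of free air; "
      f"{100 * equiv_air_m / atmos_m:.1f}% of the atmospheric CO2 column")
# -> ~333x; ~33 m; ~0.7% of the atmospheric CO2 column
```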
wayne davidson says
With respect to #386 Unforced Variations: Oct 2011
I halved the concentration of the water/sugar mix to less than 270 ppm. There is still a net temperature increase of +0.1 C over the plain-water warm-up. Done twice, with the weight of water set to exactly 174 grams. This is as far as rudimentary equipment can go. Trying other solutions to mimic the greenhouse effect is OK as long as the experiment is repeatable; the solution must be the same before and after microwaving. David suggested carbonated water, but it is unstable, changing in concentration with every heat spell.
For those following this little project, set room temperature to +23 C. A fever thermometer is ideal for this after a 20-second microwave; the water temperature should increase by +16.5 C, making the reading the same as human body temperature. Use a maximum thermometer, which is exactly what a medical thermometer is.
The idea of this is to prove physically, to any contrarian who claims that trace elements don’t do much to the temperature record, that they do. Water is homogeneous, as opposed to the hydrostatic atmosphere; CO2 affects specific layers more than others, as opposed to sugar well mixed with water. However, the similarities are meant to be as close as possible without going to a lab. The beauty of this experiment is that it’s repeatable in many ways. You doubt that trace elements affect temperature? Try it out, learn for yourself.
Chris Colose says
wili,
Unfortunately, the current estimates of gas hydrate storage in the Arctic region are very poor and non-existent for Antarctica, and we don’t really know the sensitivity of hydrates to future warming. So it is currently impossible to say with high confidence how methane responses will feed back onto future global warming.
Russell says
3
The saturation is indeed interesting because, as hibernating critters both radiate and absorb in the 10 micron thermal band, the gas trapped around them may add radiative thermal gain to the suppression of advection and convection by their fur and the air-trapping materials lining their nests.
Russell says
3
I agree the line saturation is important, but it seems an open quantitative question, as the ratio of the thermal radiative gain from trapped CO2 to the conductive loss arising from the gas mixture depends empirically on the combined advective and convective “R” value afforded by nest materials as well as the animal’s fur.
Dave Werth says
I have a guy who’s been telling me that because weather station temperatures are generally recorded in integer degrees, it’s not reasonable to express the global temperature to a precision greater than an integer degree. To quote him:
“Worse, however, is the real life debate that “over a decade, the temperature will rise 1/10 of F”. If the data was collected with accuracy of +- 0.5F, then you can see the spread in your (modified) example far exceeds the stated conclusion. If we measure 63.5F on a certain occasion, should we become alarmed at an increase of 1/10th F? It is smaller than the standard deviation, and this becomes obvious if more accurate instrumentation reveals that the average is actually 63.7 F and not 63.1 F. In this hypothetical, we’re actually seeing temperatures drop. It might not fit the desired model, but that’s how it would play out and a faithful scientist would allow the facts to prevail, or would refrain from faulty or premature conclusions unsupported by the data or the precision of the data.”
In other words he’s saying it’s not reasonable to express the average global temperature with any decimal places on it because the measurements are only accurate to the degree.
I don’t have the statistical chops to refute this convincingly so I was hoping someone could help me with this. What can I tell this guy to convince him it’s reasonable to express the global temperature anomaly in tenths of a degree when the individual measurements are only accurate to 1 degree?
Edward Greisch says
http://hosted.verticalresponse.com/569982/7333cbe3e8/1383039133/2fc08aded4/
David B. Benson says
Remember that all models are wrong; the practical question
is how wrong do they have to be to not be useful.
— George E.P. Box & Norman R. Draper
EMPIRICAL MODEL-BUILDING AND RESPONSE SURFACES 74 (1987).
CM says
Re: the NOAA Mediterranean drought paper, discussed at Climate Progress and brought up here by Kevin on the Berkeley thread,
That’s a wake-up call, all right. And too close to home for this reader’s comfort. But, a pet peeve if I may? If the researchers want to be taken seriously in the drying Southeast Europe, they’d better update their maps. It’s been 20 years since Yugoslavia went the way of the dodo, and people in the successor states tend to have strong opinions about political borders.
Martin Vermeer says
Russell #1,
ask again in five months sharp
vukcevic says
June:
maximum insolation, high radiative heat transfer and CO2 back radiation
temperature response: flat http://www.vukcevic.talktalk.net/CETjun.htm
December:
minimum insolation, low radiative heat transfer and CO2 back radiation
temperature response: rise 0.35 C/ century http://www.vukcevic.talktalk.net/CET-Dec.htm
I have strong reasons to think that the climate change in the CET area has a lot to do with the ocean currents moving heat from the tropics polewards and vice versa. http://www.vukcevic.talktalk.net/CDr.htm
Rich Creager says
Can someone provide me with the name of the purported law, which I have seen referenced in comment threads here, which states (to paraphrase) that no parody of fundamentalism can be so over-the-top that it will not be taken at face value by someone, unless it is accompanied by a disclaimer. Thanks
[Response: Poe’s Law. – gavin]
Kevin McKinney says
#1–
“. . .and for that matter, sleeping bags?”
Hmm. Less-than-pleasant personal experience suggests that the ‘anti-greenhouse’ effects of exhaled water vapor on bag insulation outweigh anything the CO2 can do–even at levels well short of actual ‘saturation’. . .
[Response: Hence synthetic fill, hood drawstrings and semi-permeable membrane shells–Jim]
t_p_hamilton says
Dave Werth,
I would refer that person to the fact that the current long-term temperature rise is 0.3 degrees Fahrenheit per decade, not 0.1; that water vapor is an exponential function of temperature; and, specifically on your question, that the standard error is made small through the use of multiple stations and multiple observations (divide by the square root of the number of observations).
harvey says
British Arctic Conference…
http://www.ukarcticscience.org/programme/Book%20of%20abstracts_v1_110911.pdf
tamino says
Re: #8 (Dave Werth)
Clearly your guy doesn’t have the statistical chops either.
Is he a sports fan? Maybe a fan of U.S. football? Ask him, “What’s Adrian Peterson’s average yards-per-carry?” When he says “4.8”, ask him how that’s possible — given that the yardage on any single run is only recorded to the nearest whole yard.
Ask him, “What was Ted Williams’ lifetime batting average?” When he says “.344”, ask him how that’s possible, given that hits are either zero or one — only recorded to the nearest whole hit.
And — don’t bother to explain anything else to him or argue further. Just let him chew on it.
Chris Ho-Stuart says
Hi Dave (in comment #8), asking about accuracy of weather station thermometers, generally recorded in integer degrees, and implications for accuracy of a global temperature.
Your friend is wrong, of course; it is basic statistics that when you have a lot of measurements, the standard error generally scales down by the square root of the number of measurements. Lots of measurements together can therefore give much greater accuracy than any individual measurement.
There’s a minor technical point to watch. People don’t define an “average global temperature”. They define an “average global anomaly”. This is the average CHANGE in temperature. Simplified, you take a whole bunch of temperature readings, at many different locations. Then you do it again, several years later. At each location, you calculate the change. Then you average the changes. The physical reasons for this have been explained here at realclimate several years ago, I think, but I can’t find a link. (Actually, you calculate a “norm” at each location, giving average temperatures AT THAT LOCATION ONLY for any given month. Then you convert any reading to the “anomaly”, or the difference from the norm. Then you average anomalies over the whole globe.)
Here is a very simple example to show why your friend is incorrect. Suppose we have N pairs of numbers, x[i] and y[i]. Conceptually you can think of them as a reading of a temperature in two different years. Suppose there’s a normally distributed error in each value, with standard deviation S. The standard error in x[i]-y[i] is sqrt(2)*S.
Then add all the pairs of differences.
Sum(i=1..N) (x[i]-y[i]).
The standard error is now sqrt(2N)*S.
To get the average difference, you divide this sum by N. The standard error in your result is S*sqrt(2N)/N, or S*sqrt(2/N).
If N is around 20000, you get an error of S/100. That means the “global anomaly” in this simple case has two additional figures of accuracy over the individual measurements.
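(A quick numerical sketch of this, with the integer-rounding objection thrown in; the station count, noise level, and 0.1-degree true shift are all invented for illustration:)

```python
# Monte Carlo sketch: averaging many whole-degree readings recovers a
# sub-degree shift. Station count, noise, and the 0.1-degree "true"
# warming are invented for illustration; this is not real station data.
import random

N = 20000          # station pairs
S = 0.5            # per-reading noise (std dev, degrees)
true_shift = 0.1   # actual change between the two years

diffs = []
for _ in range(N):
    base = random.uniform(-10.0, 30.0)             # station's true temperature
    x = round(random.gauss(base, S))               # year 1, whole degrees
    y = round(random.gauss(base + true_shift, S))  # year 2, whole degrees
    diffs.append(y - x)

print(f"recovered shift: {sum(diffs) / N:.3f} (true value {true_shift})")
# Typically prints ~0.10 +/- 0.006, consistent with the S*sqrt(2/N)
# standard error derived above, despite the integer-degree rounding.
```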
The actual global anomaly calculation is much more complex than this, and so too is the error analysis. The real problems show up in finding and addressing more systematic sources of error. A lot of work has been done on this, the most recent being the Berkeley group (BEST), which with much fanfare got the same basic result everyone else had obtained before them. There’s some heavy statistical work going on to get an idea of the errors in calculated global anomaly figures.
Rounding errors on reading instruments are not really a problem, and although his objection sounds superficially plausible, it’s a failure to understand introductory high-school-level statistics.
I used to debate with someone like this once. It sounds a lot like someone with the initials GM. But in any case, you are most unlikely to be able to convince him to see the error. The best bet, in my experience, is to explain as simply and as clearly as possible, for any onlookers, the real mathematics of how rounding errors impact averages. In practice, the accuracy of global anomalies is much more strongly limited by systematic errors than by simple rounding errors. The final global anomaly, like pretty much any other average, is known to greater accuracy than any individual measurement.
Martin Smith says
This claim sounds crazy, but I can’t find anything at the JAXA site:
Industrialized nations emit far less carbon dioxide than the Third World, according to latest evidence from Japan’s Aerospace Exploration Agency (JAXA).
http://www.suite101.com/news/new-satellite-data-contradicts-carbon-dioxide-climate-theory-a394975
[Response: Unbelievably stupid. I shouldn’t have to say anything more–Jim]
John E. Pearson says
Martin, as far as I know the only possibility of measuring regional CO2 output is via the Orbiting Carbon Observatory (OCO —heh—) http://oco.jpl.nasa.gov/ which has not yet been successfully launched.
[Response: GOSAT, which JAXA put up in 2009–Jim]
John E. Pearson says
Wayne, I sent you an e-mail to the address on your web-site. By the way, I used to know Andy Young who did the green flash page. I didn’t know he was still active. When I calculate the ppm for your recipe I get it coming in an order of magnitude lower. Your recipe is 0.1 grams of sugar in 174 grams of water.
The molecular weight of sucrose, according to Wikipedia, is 342 grams/mol, so 0.1 grams of sucrose = 2.9 × 10^-4 mol of sucrose. The molecular weight of water is 18 g/mol, so 174 grams of H2O = 9.7 mol of H2O. The ratio gives 3 × 10^-5 molecules of sugar for each H2O molecule, or 30 ppm. So this effect is due to a trace element present at a concentration a factor of 10 smaller than atmospheric CO2.
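(That mole-fraction arithmetic, as a short sketch anyone can rerun:)

```python
# Mole-fraction check of the sugar-water recipe from the comment.
mol_sucrose = 0.1 / 342.0   # 0.1 g sucrose at 342 g/mol -> ~2.9e-4 mol
mol_water = 174.0 / 18.0    # 174 g water at 18 g/mol    -> ~9.7 mol
print(f"{1e6 * mol_sucrose / mol_water:.0f} ppm sucrose molecules")  # -> 30 ppm
```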
John E. Pearson says
My crucet (my biggest pot) only holds 7 quarts so I added a teaspoon of sugar to 7 quarts of water. I filled another pot with as much water as it could hold.
Then I did 5 measurements each. For some reason the pure water pot stayed a little warmer than the other pot. I don’t know why. I filled identical plastic cups with water from a 1-cup measuring cup (spillage was probably my biggest variability). I measured the temp of the water in a cup, then heated that cup for 30 seconds and measured the final temp. Then I did that again and again for pure water and sugar water. I think it’s pretty convincing.
Pure water              Sugar water
To      Tf      dT      To      Tf      dT
66.9    103.8   36.9    67.1    109.1   42.0
66.7    105.5   38.8    67.0    108.2   41.2
66.8    106.4   39.6    67.2    107.2   40.0
66.7    103.5   36.8    67.0    106.8   39.8
67.0    103.8   36.8    67.6    105.7   38.1
Average dT      37.78   Average dT      40.22
Oh yes. My microwave has a little circle indentation in the bottom in the back left corner just slightly larger than the cup I used. I placed the cup in the center of that indentation each time (this is essentially what Wayne suggested). This is important and likely another source of variability. You’re in the near-field inside a microwave oven so you should expect strong variations in the amount of heating you get as you move stuff around.
Hank Roberts says
> JAXA
The ‘Suite101’ guy has taken one of the four pictures from the satellite information — he’s using Northern Hemisphere summertime, with forests in Siberia and Canada growing and soaking up CO2 — and pretended it’s the overall picture.
He links to a video at NHK that has the full set of four showing midway in the video.
Yesterday the GOSAT/IBUKI page had a tiny thumbnail image showing four different seasonal images, the same four you can see in the video linked from the ‘Suite101’ page. Today it has a single image.
Hank Roberts says
http://screencast.com/t/k7e1F1MncMJ shows the four images from which the Suite101 guy picked his cherry.
wayne davidson says
#23 John,
Replication may now proceed with anyone who wants to try. I must thank Mr Pearson for an important job well done, and will rewrite the recipe for those who don’t have a proper 0.1-gram-capable scale. This experiment is similar to our atmosphere, miniaturized in a plastic coffee cup.
Andy Young is surely very much the same great astronomer as he ever was, just a little older.
Robert Murphy says
Hank @24
The ‘Suite101′ guy is none other than John O’Sullivan, serial science distorter. He even mentions Murray Salby. ‘Nuff said.
CMsays
Re: JAXA, GOSAT, annual carbon cycle, and that utter moron O’Sullivan cherry-picking the NH growing season, and (to boot) suggesting that revisions to our understanding of regional CO2 fluxes somehow call into question the warming effect of CO2…
Anyway, apparently the GOSAT folks have some newsworthy findings. Would be nice to see the actual paper. The JAXA site indeed refers to a paper published Oct 29, but that is this paper (Takagi et al.) — which is relevant, but what it discusses is the uncertainty reduction achieved, and that’s what the color codings on their map represent, not the CO2 fluxes themselves. That paper does not show the four quarterly maps shown on the television news story that O’Sullivan garbled.
[Response: Correct, thank you. It’s all about the ability of the GOSAT spectrometer to reduce the uncertainty in the surface-based flux measurements. Basically, it’s most useful where those measurements are the sparsest. Possibly more on this later if time allows.–Jim]
Hank Roberts says
Oops, this is the NHK screenshot showing all four seasons: http://screencast.com/t/XJGD6ppG0g5h
John E. Pearson says
I used two different cups and two different measuring cups to avoid transferring sugar water into the non-sugar water and vice versa. Wayne suggested using a turkey baster to transfer the water, but I couldn’t find ours; I am pretty sure that would’ve reduced my variability a lot. I wasn’t super careful in pouring the water from the measuring cup into the plastic cup, and a little water probably always spilled back into the pot. This set of 10 measurements took a surprisingly long time to perform (about an hour). In retrospect it would’ve been worth the small additional effort to be more careful when transferring the water to the cup in order to reduce variability from that source. It was late and I was tired when I started. Occasionally I discovered that I’d forgotten to take the initial temperature, etc., which contributed to the time it took. I think this would be an ideal experiment to perform with a child of the appropriate age, say 8. I’m not sure why the sugar-free water stayed cooler than the sugar water. It might be that it simply came out of the tap cooler. I thought I had let it run long enough that I was getting the steady-state temperature, but maybe not. I did fill the tap water pot last.
J Bowers says
Wild weather worsening due to climate change, IPCC confirms.
The report from the Nobel Prize-winning Intergovernmental Panel on Climate Change will be issued in a few weeks, after a meeting in Uganda. It says there is at least a two-in-three probability that weather extremes have already worsened because of man-made greenhouse gases.
Get the gate-du-jour debunking bookmarks ready. “Himalayagate” will probably increase tenfold on Google in 3 weeks ;)
Russell says
15
The IR bands of water vapor presumably contribute to reducing radiative transfer as well.
There was a polar bear rug in my grandmother’s house that, though I was spared infant photography, I occasionally slept on as a child. As I was on the outboard side of the hide, it proved surprisingly chilly with a bare floor beneath.
JustaPhilosopher says
Data Assimilation and Retrieval
I’m interested in the big picture of scientific methodology, particularly when it comes to climate models and computer simulations.
I had this nice picture of what we were doing with data assimilation when it comes to computer models and data retrievals. The picture went something like this: the reason we simulate is to help us get better measurements, and the reason why we measure is to make better simulations. We are learning about the state of the climate in one case (data retrieval) and we are learning about the causal structure of the climate in another (simulation).
My reasoning was that in the simulation context, data assimilation provides us with a nice way of constraining the physics of our model. At every analysis point (depending on the method of assimilation you are using), the trajectory of the model would be adjusted to fit actual observations in that area, thus preventing drift and providing accurate initial conditions for the next analysis interval. The expected values given the model trajectory are weighted against the observational values to provide us with the most statistically likely value given our background knowledge.
In turn the results of our model are then used to produce better measurements, particularly when it comes to data retrievals from satellites. In those data retrievals that employ inverse theory, we use model outputs to set our a priori values for the retrieval, and then compare the values provided by our spectral analysis to the background. Again we get the statistically most likely values.
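(For concreteness, here is a minimal one-variable version of that weighting step, in the style of a scalar Kalman / optimal-interpolation update; the numbers and the single-variable setup are invented, and real assimilation systems apply this over enormous state vectors:)

```python
# Toy one-variable analysis step: blend a model forecast with an
# observation, each weighted by its error variance. Invented numbers;
# real assimilation systems apply this logic to huge state vectors.
def analysis(forecast, f_var, obs, o_var):
    """Variance-weighted blend of forecast and observation."""
    gain = f_var / (f_var + o_var)       # scalar Kalman gain
    value = forecast + gain * (obs - forecast)
    return value, (1.0 - gain) * f_var   # analysis value and variance

x_a, var_a = analysis(forecast=271.3, f_var=1.0, obs=272.1, o_var=0.25)
print(f"analysis = {x_a:.2f}, variance = {var_a:.2f}")
# -> analysis = 271.94, variance = 0.20: pulled toward the more
#    accurate observation, and more certain than either input alone.
```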
This picture was, however, complicated when an expert in atmospheric physics told me that modelers frequently won’t use retrieval data in models, because they don’t trust the error statistics and thus do not want to introduce correlations between the assimilated data and their background assumptions (since the mathematics of the model assumes they are not correlated).
How much does this complicate my big picture? It does seem that the better our models, the better our retrieval data will be. Since this represents a large portion of our data globally, we are very dependent on our models for this kind of measurement. But it might seem that our models are only coarsely constrained by our satellite observations (more so with in situ observations).
Does this sound right? How do scientists see these practices (and others) fitting together to create a rational form of investigation?
Robert Murphy says
The link to the John O’Sullivan article at suite101 now brings one to the website’s main page. The article is no longer listed among his works there either; the last article they have of his now is from May 2011. That’s odd.
Snapple says
Twitter posters are reporting that the court has granted the motion for climate scientist Michael Mann to intervene in the UVA email case. I don’t know if this claim is true, but the hearing was today.
Check the news.
http://legendofpineridge.blogspot.com/2011/11/dr-michael-mann-confronts-attorney.html
Maybe they will say something at UVA. http://www.virginia.edu/foia/
CM says
Robert #34, it’s still up at http://climaterealists.com/?id=8573; make sure to scroll down to O’Sullivan’s 9:33am comment, it’s priceless.
Hank Roberts says
> speaking with an expert in atmospheric physics, I was told that
> modelers frequently won’t use the retrieval data in models
Would you ask him for references? He’s probably referring to something that has been published.
Here’s one example of the argument against tuning:
http://climatemodellingprimer.net/node/82
I don’t know the book described there, perhaps someone else does:
A Climate Modelling Primer, 3rd Edition
Kendal McGuffie, Ann Henderson-Sellers
[Response: Climate models don’t ingest any data as they run. Weather models do. – gavin]
JustaPhilosopher says
@Hank
Tuning isn’t exactly the same thing as data assimilation (at least, I don’t think of them as the same, someone can correct me). I think of tuning as the comparison of model output to known data sets – and changing the parameters/physics in the model to ensure that the output matches the data set. The model, autonomously, will hit the data points.
Data assimilation is something different. It’s a system that runs alongside the physics. It uses statistics to compare the modeled values to observations, and weighs them against each other to determine a likely value somewhere within the expected region. This likely value serves as the initial condition for the next time step in the simulation. The model is forced to take up values close to the observations (in some sense).
@ Gavin
That used to be true, but recently 3dvar and 4dvar data assimilation systems have been incorporated into global climate models.
[Response: Not when doing standard climate model runs. If you want to run a climate model in weather forecasting mode for debugging purposes, that makes sense, but you are doing a weather forecast, not a climate prediction/projection. – gavin]
wili says
Chris Colose said at #5 “Unfortunately, the current estimates of gas hydrate storage in the Arctic region are very poor and non-existent for Antarctica, and we don’t really know the sensitivity of hydrates to future warming.”
Thanks. Do you have a reference for the most recent and most reliable estimates for Arctic hydrate storage?
“So, it is currently impossible to say with high confidence how methane responses will feedback onto future global warming.”
Because of this uncertainty, has it been completely left out of all models?
(Thanks ahead of time for any light anyone can cast in my generally dim direction’-)
Paul S says
There has been some blog excitement recently about modellers rejecting 20th Century hindcast runs that they don’t like. As far as I can work out it appears to stem from this paragraph in Gent et al. 2011:
Next, a twentieth-century run from 1850 to 2005 is completed, and a decision on whether this is acceptable is made based almost exclusively on two comparisons against observations. They are the globally averaged surface temperature against the historical reconstruction and the September Arctic sea ice extent from 1979 to 2005 against the satellite era observations. This second comparison is the reason that the sea ice albedos are allowed to change after the components are coupled. If these comparisons are deemed unacceptable, then the process of setting up the preindustrial control run would be repeated.
Is this a common practice across all modelling groups? What might constitute an unacceptable run, and is there nothing that could be learned from those runs?
[Response: No this is not common practice. GISS for instance assesses the models based on the pre-industrial control, and all the 20th C transient simulations have been submitted to CMIP5. – gavin]
Paul S says
Thanks Gavin.
I should clarify for everyone else that Gent et al. 2011 is the model description paper for NCAR’s CCSM4 GCM (more acronym soup).
Could there be a good justification for rejecting certain model runs? It sounds like a practice that would leave the way open for confirmation bias but then it’s not made clear why there is an a priori belief that some runs might be unacceptable or what criteria are used for rejection.
Chris Colose says
wili (39)
The estimates are rather wide… for example, Brook et al. 2008 (in the Abrupt Climate Change Final Report) suggested there is ~7.5 to 400 Gt C stored as methane in Arctic permafrost, a rather large uncertainty, and the sensitivity to warming is not clear either. There are also a lot of hydrates located at depths in soils and ocean sediments where anthropogenic warming will take place over millennia rather than decades.
The Shakhova paper is important for understanding the atmospheric methane budget, but it is a negligible contribution to the total global emissions, and the importance of permafrost seems far secondary to that of wetlands, agriculture, animal husbandry, etc. Furthermore, measuring emissions is not the same thing as *changing emissions,* much less attributing those changes to recent warming.
Other people (gavin, for example) will have better insight as to whether the latest generation of models going into the AR5 have interactive permafrost and methane responses (there’s been a lot of development in dynamic vegetation and carbon cycle modeling, and without knowing the details I will still say I don’t really think any methane feedbacks should be believable at this point); this has generally been neglected to date, or at least is only the focus of specialized modeling studies (see, e.g., Archer, 2004) without much meaning for AR4 21st-century projections. But these generally agree that there is no evidence for any “catastrophic” methane threat in the near future. Warming of the deep ocean waters might be important for these sorts of feedbacks on multi-millennial timescales, and there have been suggestions of the relevance of all this at the PETM (~55 million years ago), but this is a science still in its infancy.
Imback says
NOAA’s global weather forecast systems have not used satellite retrievals for data assimilation in over a decade. Instead they use the raw satellite radiances, applying the forward model (atmospheric temperature to infrared radiances at the top of the atmosphere) and its adjoint during the convergence iteration steps of the data assimilation. So for weather models, satellite radiances are certainly used to great effect, as evidenced by the skill in the Southern Hemisphere now being nearly as good as in the Northern Hemisphere, where there is much more conventional data, but inverse-model satellite retrievals are not. See Derber, J. C. and W.-S. Wu, 1998: The use of TOVS cloud-cleared radiances in the NCEP SSI analysis system. Mon. Wea. Rev., 126, 2287–2299.
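(Schematically, in one variable, that variational approach looks like the sketch below; the linear “radiance” model H and every number are invented for illustration, and the scalar derivative stands in for the adjoint:)

```python
# Toy 1D-Var sketch: find the state x minimizing a background term plus
# an observation term, using the forward model H and its derivative
# (the adjoint, in this scalar caricature). All values are invented.
def one_d_var(x_b, b_var, y_obs, r_var, H, dH, steps=200, lr=0.1):
    """Gradient descent on J(x) = (x-x_b)^2/(2 b_var) + (H(x)-y)^2/(2 r_var)."""
    x = x_b
    for _ in range(steps):
        grad = (x - x_b) / b_var + dH(x) * (H(x) - y_obs) / r_var
        x -= lr * grad
    return x

H = lambda t: 0.5 * t + 10.0   # fake linear temperature-to-radiance model
dH = lambda t: 0.5             # its derivative ("adjoint" in 1D)

x_a = one_d_var(x_b=270.0, b_var=4.0, y_obs=146.2, r_var=0.5, H=H, dH=dH)
print(f"analysis temperature: {x_a:.2f}")   # -> ~271.6, between the
# background (270.0) and the value implied by the radiance alone (272.4)
```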
blue sky says
Global temperatures are dropping like a rock at the end of the year here: channel 5, and the channels above it.
(Aside — errata happen; this is why it’s better to point or link to a source rather than say what one recollects from some past reading — you always want to check for current information.)
http://geology.geoscienceworld.org/cgi/content/abstract/39/11/1059
“… the end-Permian extinction was a physiological crisis, selecting against genera with poorly buffered respiratory physiology and calcareous shells. Genera with unbuffered physiology also fared poorly in the Guadalupian extinction, consistent with recognition of a pronounced crisis only among protists and reef-builders and implying similar respiratory physiological stresses. Despite sharing a similar trigger, the end-Permian extinction was considerably more severe than the Guadalupian or other Phanerozoic physiological crises. Its magnitude may have resulted from a larger environmental perturbation, although the combination of warming, hypercapnia, ocean acidification, and hypoxia during the end-Permian extinction likely exacerbated the crisis because of the multiplicative effects of those stresses. Although ocean carbon cycle and evolutionary changes have reduced the sensitivity of modern ecosystems to physiological stresses, extant marine invertebrates face the same synergistic effects of multiple stressors that were so severe during the end-Permian extinction.”
ldavidcooke says
Re: 4
Hey Wayne,
Okay, for S&G: 4 quart jars, initial temperature 48C. #1 tap water, #2 tap water and 1 tsp sucrose, #3 tap water and 3/4 tsp table salt, #4 Everclear. Each stirred 10 times with a silver spoon.
1 kW microwave, rotation rate one turn per 15 sec. Set for 1 min. 4 candy thermometers kept in a jar with 48C tap water.
Place all 4 quart jars within 1/4″ of the center of rotation. Turn on for 1 min. Immediately after the ding, insert the pre-warmed thermometers in the jars.
Result: #1 53C, #2 55C, #3 55C, #4 51C
No, not your model; however, it demos my point.
Cheers,
Dave Cooke
Rattus Norvegicus says
Paul S,
They seem to be describing the process they used for tuning the two parameters (sea ice albedo and the RH threshold for low cloud formation) in coupled mode to get acceptable reference runs. Doesn’t seem like a scandal; more like a normal part of model development.
Ian says
Apologies, but I’m not sure how exactly to bring the recent lecture (October 31st) by Matthew Ridley at the University of Edinburgh to your attention. It is not entirely on the side of the AGW proponents, but it argues a logical case and uses appropriate referencing. It also spends some time discussing confirmation bias. I’m sure there are points raised that you will be able to confirm or refute. I’m fairly sure you will have read it, but I draw your attention to it on the slim off chance you have not. It is often said that scientists don’t really know how to communicate with non-scientists; Matt Ridley is a scientist who certainly does.
vendicar decarian says
“The ‘Suite101′ guy is none other than John O’Sullivan, serial science distorter.” – 27
Some background
John O’Sullivan CBE (born April 25, 1942) is a leading British conservative political commentator and journalist, and currently Vice President and executive editor of Radio Free Europe/Radio Liberty. During the 1980s he was a senior policy writer and speechwriter at 10 Downing Street for Margaret Thatcher when she was British prime minister, and remains close to Thatcher to this day.
O’Sullivan is Editor-at-Large of the opinion magazine National Review and a Senior Fellow at the Hudson Institute. Prior to this, he was the Editor-in-Chief of United Press International, Editor-in-Chief of the international affairs magazine, The National Interest, and a Special Adviser to British Prime Minister Margaret Thatcher. He was made a Commander of the British Empire (CBE) in the 1991 New Year’s Honours List.
He is the founder and co-chairman of the New Atlantic Initiative, an international organization dedicated to reinvigorating and expanding the Atlantic community of democracies. The organization was created at the Congress of Prague in May 1996 by President Václav Havel and Lady Margaret Thatcher.
He is known for O’Sullivan’s First Law (a.k.a. O’Sullivan’s Law), paraphrased by George Will as stating that any institution that is not libertarian and classically liberal will, over time, become collectivist and statist.
O’Sullivan currently resides in Decatur, Alabama with his wife Melissa and stepdaughters – Katherine, who is 23, and Amanda, 17.
Russell says
Coauthors are invited for a paper on a topic at the bleeding edge of radiative heat transfer and animal physiology- the Underground Greenhouse Effect.
As some small furry creatures snooze through the winter in sealed burrows containing up to 13% exhaled CO2, and as little as 4% O2, how does the IR opacity of atmospheres enriched in CO2 by exhalation figure in the rate of heat loss in such environments as burrows, warrens, and for that matter, sleeping bags ?
Are any extant codes applicable to such problems ?
wili says
I would be interested in anyone’s insights into a (maybe) hypothetical situation:
According to Shakhova (March 2011), the methane coming from the East Siberian Arctic Shelf is about 8 million tons. For the purposes of simplicity I will round this up to 10 megatons. If we multiply this by the 105 times global warming potential of methane (Schindell 2006), we get about one gigaton of gwp from this source per year (compared to about 30 gigatons of CO2 from all direct human activity).
Reports earlier this year was that there was a ‘dramatic increase’ in methane release from the Arctic which prompted an sudden mission by US and Russian researchers to investigate put together ‘on short notice.’
My question is, if ‘dramatic’ here ends up being an increase of an order of magnitude, how long would it take for this forcing to significantly affect temperatures, particularly in the Northern Hemisphere?
The answer involves at least two considerations that I can identify:
How fast would the methane mix into the broader atmosphere?
How long would it take for the heat to build up that would be held in the troposphere by this new quantity?
(I suppose that whether this is a one-time emission or a new and increasing level of regular annual emissions from the Arctic would also have an effect on the answer.)
Thanks ahead of time for any insights or speculation.
Thomas says
Russel. Thinking about the CO2 column density within the burrow. Thirteen percent is roughly 300 times atmospheric density. So the animal to burrow wall is 10centimetrs, that means there is as much CO2 per unit area as roughly 30meters of free air. If we made our atmosphere have a uniform density it would be maybe 5kilometers deep, so we have less than 1% as much CO2 opacity within the burrow, as within the atmosphere. However, CO2 absorption lines are saturated near line centers, so we would still get some absorption/reradiation within the burrows air. Assuming the air in the burrow is warmer than the walls, there would be some insulative effect, although I don’t think it is too great.
wayne davidson says
With respect to #386 Unforced Variations: Oct 2011
I halved the concentration of water/sugar mix to less than 270 ppm. There is still a net increase temperature +0.1 C over plain water warm up. Done twice with exact weights of water set to 174 grams. This is as far as rudimentary equipment can go. Trying other solutions to mimic the greenhouse effect is OK as long as the experiment is repeatable, the solution must be the same before and after microwaving. David suggested unstable Carbonated water which changes in concentration with every heat spell.
For those following this little project, set room temperature to +23 C. Fever thermometer is ideal for this after a 20 second microwave. The water temperature should increase +16.5 C. Making the reading
same as human body temperature, with a maximum thermometer, which is a exactly what a medical thermometer is.
The idea of this is to prove physically to any contrarian who claim that trace elements don’t do much to the temperature record. Water is homogenous as opposed to the hydrostatic atmosphere, CO2 affects specific layers more than another as opposed to well mixed sugar with water. However, the similarities are meant to be as close as possible without going to a lab. The beauty of this experiment is that its repeatable in many ways.. You doubt trace elements have no effect with temperature ??? Try it out, learn for yourself.
Chris Colose says
wili,
Unfortunately, the current estimates of gas hydrate storage in the Arctic region are very poor and non-existent for Antarctica, and we don’t really know the sensitivity of hydrates to future warming. So, it is currently impossible to say with high confidence how methane responses will feedback onto future global warming.
Russell says
3
The saturation is indeed interesting, because , ashibernating critters both radiate and absorb in the 10 micron thermal band, the gas trapped around them may add radiative thermal gain to the suppression of advection and convection by their fur and the air-trapping materials lining their nests.
Russell says
3
I agree the line saturation is important , but it seems an open quantitative question , as the ratio of the thermal radiative gain from trapped CO2 to the conductive loss arising from the gas mixture depends empirically on the combined advective and convective “R” value afforded by nest materials as well as the animal’s fur.
Dave Werth says
I have a guy who’s been telling me because the accuracy of weather station thermometers is generally recorded in integer degrees that it’s not reasonable to express the global temperature to a precision greater an integer degree. To quote him:
“Worse, however, is the real life debate that “over a decade, the temperature will rise 1/10 of F”. If the data was collected with accuracy of +- 0.5F, then you can see the spread in your (modified) example far exceeds the stated conclusion. If we measure 63.5F on a certain occasion, should we become alarmed at an increase of 1/10th F? It is smaller than the standard deviation, and this becomes obvious if more accurate instrumentation reveals that the average is actually 63.7 F and not 63.1 F. In this hypothetical, we’re actually seeing temperatures drop. It might not fit the desired model, but that’s how it would play out and a faithful scientist would allow the facts to prevail, or would refrain from faulty or premature conclusions unsupported by the data or the precision of the data.”
In other words he’s saying it’s not reasonable to express the average global temperature with any decimal places on it because the measurements are only accurate to the degree.
I don’t have the statistical chops to refute this convincingly so I was hoping someone could help me with this. What can I tell this guy to convince him it’s reasonable to express the global temperature anomaly in tenths of a degree when the individual measurements are only accurate to 1 degree?
Edward Greisch says
http://hosted.verticalresponse.com/569982/7333cbe3e8/1383039133/2fc08aded4/
David B. Benson says
Remember that all models are wrong; the practical question
is how wrong do they have to be to not be useful.
— George E.P. Box & Norman R. Draper
EMPIRICAL MODEL-BUILDING AND RESPONSE SURFACES 74 (1987).
CM says
Re: the NOAA Mediterranean drought paper, discussed at Climate Progress and brought up here by Kevin on the Berkeley thread,
That’s a wake-up call, all right. And too close to home for this reader’s comfort. But, a pet peeve if I may? If the researchers want to be taken seriously in the drying Southeast Europe, they’d better update their maps. It’s been 20 years since Yugoslavia went the way of the dodo, and people in the successor states tend to have strong opinions about political borders.
Martin Vermeer says
Russell #1,
ask again in five months sharp
vukcevic says
June:
maximum insolation, high radiative heat transfer and CO2 back radiation
temperature response: flat
http://www.vukcevic.talktalk.net/CETjun.htm
December:
minimum insolation, the low radiative heat transfer and CO2 back radiation
temperature response: rise 0.35 C/ century
http://www.vukcevic.talktalk.net/CET-Dec.htm
I have strong reasons to think that the climate change in the CET area has lot to do with the ocean currents moving heat from tropics polewards and vice versa.
http://www.vukcevic.talktalk.net/CDr.htm
Rich Creager says
Can someone provide me with the name of the purported law, which I have seen referenced in comment threads here, which states (to paraphrase) that no parody of fundamentalism can be so over-the-top that it will not be taken at face value by someone, unless it is accompanied by a disclaimer. Thanks
[Response: Poe’s Law. – gavin]
Kevin McKinney says
#1–
“. . .and for that matter, sleeping bags ?”
Hmm. Less-than-pleasant personal experience suggests that the ‘anti-greenhouse’ effects of exhaled water vapor on bag insulation outweigh anything the CO2 can do–even at levels well short of actual ‘saturation’. . .
[Response: Hence synthetic fill, hood drawstrings and semi-permeable membrane shells–Jim]
t_p_hamilton says
Dave Werth,
I would refer that person to the fact that the current long-term temperature rise is 0.3 degrees Fahrenheit per decade, not 0.1, that water vapor is an exponential function of temperature, and specifically on your question: that standard error is small through the use of multiple stations and multiple observations (divide by square root of number of observations).
harvey says
British Arctic Conference…
http://www.ukarcticscience.org/programme/Book%20of%20abstracts_v1_110911.pdf
tamino says
Re: #8 (Dave Werth)
Clearly your guy doesn’t have the statistical chops either.
Is he a sports fan? Maybe a fan of U.S. Football? Ask him, “What’s Adrian Peterson’ average yards-per-carry?” When he says “4.8”, ask him how that’s possible — given that the yardage on any single run is only recorded to the nearest whole yard.
Ask him “What was Ted Williams lifetime batting average?” When he says “.344” ask him how that’s possible, given that hits are either zero or one — only recorded to the nearest whole hit.
And — don’t bother to explain anything else to him or argue further. Just let him chew on it.
Chris Ho-Stuart says
Hi Dave (in comment #8), asking about accuracy of weather station thermometers, generally recorded in integer degrees, and implications for accuracy of a global temperature.
Your friend is wrong, of course; it is basic statistics that when you have a lot of measurements, the standard error generally scales down by the square root of the number of measurements. Lots of measurements together can therefore give much greater accuracy than any individual measurement.
There’s a minor technical point to watch. People don’t define an “average global temperature”. They define an “average global anomaly”. This is the average CHANGE in temperature. Simplified, you take a whole bunch of temperature readings, at many different locations. Then you do it again, several years later. At each location, you calculate the change. Then you average the changes. The physical reasons for this have been explained here at realclimate several years ago, I think, but I can’t find a link. (Actually, you calculate a “norm” at each location, giving average temperatures AT THAT LOCATION ONLY for any given month. Then you convert any reading to the “anomaly”, or the difference from the norm. Then you average anomalies over the whole globe.)
Here is a very simple example to show why your friend is incorrect. Suppose we have N pairs of numbers, x[i] and y[i]. Conceptually you can think of them as a reading of a temperature in two different years. Suppose there’s a normally distributed error in each value, with standard deviation S. The standard error in x[i]-y[i] is sqrt(2)*S.
Then add all the pairs of differences.
Sum(i=1..N) (x[i]-y[i]).
The standard error is now sqrt(2N)*S.
To get the average difference, you divide this sum by N. The standard error in your result is S*sqrt(2N)/N, or S*sqrt(2/N).
If N is around 20000, you get an error of S/100. That means the “global anomaly” in this simple case has two additional figures of accuracy over the individual measurements.
The actual global anomaly calculation is much more complex than this, and so too is the error analysis. The real problems show up with finding and addressing more systematic sources of error. A lot of work has been done on this; the most recent being the Berkley group (BEST), which with much fan fare got the same basic result everyone else has obtained before them. There’s some heavy statistical work going on to get an idea of the errors in calculated global anomaly figures.
Rounding errors on reading instruments are not really a problem, and although his objection sounds superficially plausible, its a failure to understand introductory high school level statistics.
I used to discuss with someone like this once. It sounds a lot like someone with initials GM. But in any case, you are most unlikely to be able to convince him to see the error. The best bet, in my experience, is to explain as simply and as clearly as possible for any onlookers the real mathematics of how rounding errors impact averages. In practice, the accuracy of global anomalies are much more strongly limited by systematic errors, rather than simple rounding errors. The final global anomaly, like pretty much any other average, is known to greater accuracy than any individual measurements.
Martin Smith says
This claim sounds crazy, but I can’t find anything at the JAXA site:
Industrialized nations emit far less carbon dioxide than the Third World, according to latest evidence from Japan’s Aerospace Exploration Agency (JAXA).
http://www.suite101.com/news/new-satellite-data-contradicts-carbon-dioxide-climate-theory-a394975
[Response: Unbelievably stupid. I shouldn’t have to say anything more–Jim]
John E. Pearson says
Martin, as far as I know the only possibility of measuring regional CO2 output is via the Orbiting Carbon Observatory (OCO —heh—) http://oco.jpl.nasa.gov/ which has not yet been successfully launched.
[Response: GOSAT, which JAXA put up in 2009–Jim]
John E. Pearson says
Wayne, I sent you an e-mail to the address on your web-site. By the way, I used to know Andy Young who did the green flash page. I didn’t know he was still active. When I calculate the ppm for your recipe I get it coming in an order of magnitude lower. Your recipe is 0.1 grams of sugar in 174 grams of water.
The molecular wt. of sucrose according to wikipedia is 342 grams/mol. So .1 grams sucrose = 2.9 10^-4 mols sucrose. The molecular wt of water is 18g/mol so 174 grams H2O = 9.6 mols H2O. The ratio gives: 3 x 10^-5 molecules of sugar for each H2O molecule or 30 ppm. So this effect is due to a trace element that is present in concentrations a factor of 10 smaller than atmospheric CO2.
John E. Pearson says
My crucet (my biggest pot) only holds 7 quarts so I added a teaspoon of sugar to 7 quarts of water. I filled another pot with as much water as it could hold.
Then I did 5 measurements each. For some reason the pure water pot stayed a little warmer than the other pot. I don’t know why. I filled identical plastic cups with water from a 1 cup measuring cup (spillage was probably my biggest variability). I measured the temp of the water in a cup then heated that cup for 30 seconds and measured the final Temp. Then I did that again and again for pure water and sugar water. I think it’s pretty convincing.
Pure Water Sugar water
To Tf dT To Tf dT
66.9 103.8 36.9 67.1 109.1 42.0
66.7 105.5 38.8 67.0 108.2 41.2
66.8 106.4 39.6 67.2 107.2 40.0
66.7 103.5 36.8 67.0 106.8 39.8
67.0 103.8 36.8 67.6 105.7 38.1
Average dT 37.78 40.22
Oh yes. My microwave has a little circle indentation in the bottom in the back left corner just slightly larger than the cup I used. I placed the cup in the center of that indentation each time (this is essentially what Wayne suggested). This is important and likely another source of variability. You’re in the near-field inside a microwave oven so you should expect strong variations in the amount of heating you get as you move stuff around.
Hank Roberts says
> JAXA
The ‘Suite101’ guy has taken one of the four pictures from the satellite information — he’s using Northern Hemisphere summertime, with forests in Siberia and Canada growing and soaking up CO2 — and pretended it’s the overall picture.
He links to a video at NHK that has the full set of four showing midway in the video.
Yesterday the GOSAT /IBUKI page had a tiny thumbnail image showing four different seasonal images, the same four you can see in the video linked from the ‘Suite101’ page. Today it has a single image.
Hank Roberts says
http://screencast.com/t/k7e1F1MncMJ shows the four images from which the Suite101 guy picked his cherry.
wayne davidson says
#23 John,
Replication may now proceed with anyone who wants to try. I must thank Mr Pearson for an important job well done, will rewrite the recipe for those who don’t have a proper .1 gram capable scale. This experiment is similar to our atmosphere miniaturized in a plastic coffee cup..
Andy Young is surely very much the same great astronomer as he ever was, just a little older.
Robert Murphy says
Hank @24
The ‘Suite101′ guy is none other than John O’Sullivan, serial science distorter. He even mentions Murray Salby. ‘Nuff said.
CM says
Re: JAXA, GOSAT, annual carbon cycle, and that utter moron O’Sullivan cherry-picking the NH growing season, and (to boot) suggesting that revisions to our understanding of regional CO2 fluxes somehow call into question the warming effect of CO2…
Anyway, apparently the GOSAT folks have some newsworthy findings. Would be nice to see the actual paper. The JAXA site indeed refers to a paper published Oct 29, but that is this paper (Takagi et al.) — which is relevant, but what it discusses is the uncertainty reduction achieved, and that’s what the color codings on their map represent, not the CO2 fluxes themselves. That paper does not show the four quarterly maps shown on the television news story that O’Sullivan garbled.
[Response: Correct, thank you. It’s all about the ability of the GOSAT spectrometer to reduce the uncertainty in the surface-based flux measurements. Basically, it’s most useful where those measurements are the sparsest. Possibly more on this later if time allows.–Jim]
Hank Roberts says
Oops, this is the NHK screenshot showing all four seasons: http://screencast.com/t/XJGD6ppG0g5h
John E. Pearson says
I used two different cups and two different measuring cups to avoid transferring sugar water in the non-sugar water and vice versa. Wayne suggested using a turkey baster to transfer the water but I couldn’t find ours. I am pretty sure that would’ve reduced my variability a lot. I wasn’t super careful in pouring the water from the measuring cup into the plastic cup and always a little water probably spilled back into the pot. This set of 10 measurements took a surprisingly long time to perform (about an hour). In retrospect it would’ve been worth the small additional effort to be more careful when transferring the water to the cup in order to reduce variability from that source. It was late and I was tired when I started. Occasionally I discovered that I’d forgotten to take the initial temperature etc which contributed to the time it took. I think this would be an ideal experiment to perform with a child of the appropriate age, say 8. I’m not sure why the sugar free water stayed cooler than the sugar water. It might be that it simply came out of the tap cooler. I thought I had let it run long enough that I was getting the steady-state temperature but maybe not. I did fill the tap water pot last.
J Bowers says
Wild weather worsening due to climate change, IPCC confirms.
Get the gate-du-jour debunking bookmarks ready. “Himalayagate” will probably increase tenfold on Google in 3 weeks ;)
Russell says
15
The IR bands of water vapor presumably contribute to reducing radiative transfer as well.
There was a polar bear rug in my grandmother’s house, that, though I was spared infant photography , I occasionally slept on as a child. As I was on the outboard side of the hide, it proved surprisingly chilly with a bare floor beneath.
JustaPhilosopher says
Data Assimilation and Retrieval
I’m interested in the big picture of scientific methodology, particularly when it comes to climate models and computer simulations.
I had this nice picture of what we were doing with data assimilation when it comes to computer models and data retrievals. The picture went something like this: the reason we simulate is to help us get better measurements, and the reason why we measure is to make better simulations. We are learning about the state of the climate in one case (data retrieval) and we are learning about the causal structure of the climate in another (simulation).
My reasoning was that in the simulation context, data assimilation provides us with a nice way of constraining the physics of our model. At every analysis point (depending on the method of assimilation you are using), the trajectory of the model would be adjusted to fit actual observations in that area, thus preventing drift and providing accurate initial conditions for the next analysis interval. The expected values given the model trajectory are weighted against the observational values to provide us with the most stastically likely value given our background knowledge.
In turn the results of our model are then used to produce better measurements, particularly when it comes to data retrievals from satellites. In those data retrievals that employ inverse theory, we use model outputs to set our a priori values for the retrieval, and then compare the values provided by our spectral analysis to the background. Again we get the statistically most likely values.
This picture was however complicated when, speaking with an expert in atmospheric physics, I was told that modelers frequently won’t use the retrieval data in models because they don’t trust the error statistics, and thus do not want to introduce correlations between the assimilated data and their background assumptions (since the mathematics of the model assumes they are not correlated).
How much does this complicate my big picture? It does seem like the better our models, the better our retrieval data will be. Since this represents a large portion of our data globally – we are very dependent on our models for this kind of measurement. But it might seem that our models are only coarsely constrained by our satellite observations (more so with in situ observations).
Does this sound right? How do scientists see these practices (and others) fitting together to create a rational form of investigation?
Robert Murphy says
The link to the John O’Sullivan article at suite101 now brings one to the website’s main page. The article is no longer listed among his works there too; the last article they have of his now is from May, 2011. That’s odd.
Snapple says
Twitter posters are reporting that the court has granted the motion for climate scientist Michael Mann to intervene in the UVA email case. I don’t know if this claim is true, but the hearing was today.
Check the news.
http://legendofpineridge.blogspot.com/2011/11/dr-michael-mann-confronts-attorney.html
Maybe they will say something at UVA. http://www.virginia.edu/foia/
CM says
Robert #34, it’s still up at http://climaterealists.com/?id=8573; make sure to scroll down to O’Sullivan’s 9:33am comment, it’s priceless.
Hank Roberts says
> speaking with an expert in atmospheric physics, I was told that
> modelers frequently won’t use the retrieval data in models
Would you ask him for references? He’s probably referring to something that has been published.
Here’s one example of the argument against tuning:
http://climatemodellingprimer.net/node/82
I don’t know the book described there, perhaps someone else does:
A Climate Modelling Primer, 3rd Edition
Kendal McGuffie, Ann Henderson-Sellers
[Response: Climate models don’t ingest any data as they run. Weather models do. – gavin]
JustaPhilosopher says
@Hank
Tuning isn’t exactly the same thing as data assimilation (at least, I don’t think of them as the same, someone can correct me). I think of tuning as the comparison of model output to known data sets – and changing the parameters/physics in the model to ensure that the output matches the data set. The model, autonomously, will hit the data points.
Data assimilation is something different. Its a system that runs alongside the physics. It uses statistics to compare the modeled values to observations – and weighs them against each other to determine a likely value somewhere within the expected region. This likely value serves as the initial condition for the next time step in the simulation. The model is forced to take up values close to the observations (in some sense).
@ Gavin
That used to be true, but recently 3D-Var and 4D-Var data assimilation systems have been incorporated into global climate models.
[Response: Not when doing standard climate model runs. If you want to run a climate model in weather forecasting mode for debugging purposes, that makes sense, but you are doing a weather forecast, not a climate prediction/projection. – gavin]
wili says
Chris Colose said at #5 “Unfortunately, the current estimates of gas hydrate storage in the Arctic region are very poor and non-existent for Antarctica, and we don’t really know the sensitivity of hydrates to future warming.”
Thanks. Do you have a reference for the most recent and most reliable estimates for Arctic hydrate storage?
“So, it is currently impossible to say with high confidence how methane responses will feedback onto future global warming.”
Because of this uncertainty, has it been completely left out of all models?
(Thanks ahead of time for any light anyone can cast in my generally dim direction ;-)
Paul S says
There has been some blog excitement recently about modellers rejecting 20th Century hindcast runs that they don’t like. As far as I can work out it appears to stem from this paragraph in Gent et al. 2011:
Next, a twentieth-century run from 1850 to 2005 is completed, and a decision on whether this is acceptable is made based almost exclusively on two comparisons against observations. They are the globally averaged surface temperature against the historical reconstruction and the September Arctic sea ice extent from 1979 to 2005 against the satellite era observations. This second comparison is the reason that the sea ice albedos are allowed to change after the components are coupled. If these comparisons are deemed unacceptable, then the process of setting up the preindustrial control run would be repeated.
Is this a common practice across all modelling groups? What might constitute an unacceptable run, and is there nothing that could be learned from those runs?
[Response: No this is not common practice. GISS for instance assesses the models based on the pre-industrial control, and all the 20th C transient simulations have been submitted to CMIP5. – gavin]
Paul S says
Thanks Gavin.
I should clarify for everyone else that Gent et al. 2011 is the model description paper for NCAR’s CCSM4 GCM (more acronym soup).
Could there be a good justification for rejecting certain model runs? It sounds like a practice that would leave the way open for confirmation bias, and it’s not made clear why there is an a priori expectation that some runs might be unacceptable, or what criteria are used for rejection.
Chris Colose says
wili (39)
The estimates are rather wide… for example, Brook et al. 2008 (in the Abrupt Climate Change Final Report) suggested there are ~7.5 to 400 Gt C stored as methane in Arctic permafrost, a rather large uncertainty, and the sensitivity to warming is not clear either. There are also a lot of hydrates located at depths in soils and ocean sediments where anthropogenic warming will play out over millennia rather than decades.
The Shakhova paper is important for understanding the atmospheric methane budget, but the flux it reports is a negligible contribution to total global emissions, and permafrost seems far secondary in importance to wetlands, agriculture, animal husbandry, etc. Furthermore, measuring emissions is not the same thing as measuring *changes in emissions*, much less attributing those changes to recent warming.
Other people (gavin, for example) will have better insight as to whether the latest generation of models going into the AR5 have interactive permafrost and methane responses. There has been a lot of development in dynamic vegetation and carbon cycle modeling, but without knowing the details I will still say I don’t think any modeled methane feedbacks should be believable at this point; they have generally been neglected to date, or at least have only been the focus of specialized modeling studies (see, e.g., Archer, 2004) with little bearing on the AR4 21st-century projections. Those studies generally agree that there is no evidence for any “catastrophic” methane threat in the near future. Warming of the deep ocean waters might be important for these sorts of feedbacks on multi-millennial timescales, and there have been suggestions that all this was relevant at the PETM (~55 million years ago), but this is a science still in its infancy.
Imback says
NOAA’s global weather forecast systems have not used satellite retrievals for data assimilation for over a decade. Instead they use the raw satellite radiances and perform the forward model (atmospheric temperature to infrared radiances at top of atmosphere) and its adjoint during the convergence iteration steps of the data assimilation. So for weather models, satellite radiances are certainly used to great effect, as evidenced by the skill in the Southern Hemisphere now being nearly as good as in the Northern Hemisphere where there is much more conventional data, but inverse model satellite retrievals are not. See Derber, J. C. and W.-S. Wu, 1998: The use of TOVS cloud-cleared radiances in the NCEP SSI analysis system. Mon. Wea. Rev., 126, 2287 – 2299.
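For concreteness, here is a stripped-down sketch of that kind of variational analysis (a linear toy with invented numbers, not NCEP’s actual SSI system): the cost function penalizes departures from the background in state space and from the radiances in observation space, and its gradient requires the forward operator H and its adjoint, which for a linear H is just the transpose.

```python
import numpy as np

# Toy single-column 3D-Var: state x = temperatures at 3 levels,
# obs y = 2 simulated "radiances". All numbers invented.
H = np.array([[0.5, 0.3, 0.2],    # linear forward model: state -> radiances
              [0.1, 0.4, 0.5]])
B_inv = np.eye(3) / 1.5**2        # inverse background-error covariance
R_inv = np.eye(2) / 0.5**2        # inverse radiance-error covariance

x_b = np.array([250.0, 260.0, 270.0])  # background state (K)
y   = np.array([262.0, 266.0])         # observed radiances

def grad_J(x):
    # J(x) = 1/2 (x-x_b)' B^-1 (x-x_b) + 1/2 (y-Hx)' R^-1 (y-Hx)
    # For linear H the adjoint is simply H.T.
    return B_inv @ (x - x_b) - H.T @ (R_inv @ (y - H @ x))

x = x_b.copy()
for _ in range(300):          # plain gradient descent stands in for the
    x -= 0.05 * grad_J(x)     # conjugate-gradient iterations of a real system
```

The point of working in radiance space is that the observation errors stay closer to uncorrelated, which is exactly the assumption the retrieval route can violate.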
blue sky says
Global temperatures are dropping like a rock at the end of the year here, in channel 5 and the channels above it.
http://discover.itsc.uah.edu/amsutemps/
How is that explained?
Hank Roberts says
for Wili:
https://www.google.com/search?q=site%3Aipcc.ch+“methane”+forcin
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/fig/faq-2-1-figure-1-errata.jpeg
from
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/errataserrata-errata.html#faq21fig1
(Aside — errata happen; this is why it’s better to point or link to a source rather than say what one recollects from some past reading — you always want to check for current information.)
Hank Roberts says
http://geology.geoscienceworld.org/cgi/content/abstract/39/11/1059
“… the end-Permian extinction was a physiological crisis, selecting against genera with poorly buffered respiratory physiology and calcareous shells. Genera with unbuffered physiology also fared poorly in the Guadalupian extinction, consistent with recognition of a pronounced crisis only among protists and reef-builders and implying similar respiratory physiological stresses. Despite sharing a similar trigger, the end-Permian extinction was considerably more severe than the Guadalupian or other Phanerozoic physiological crises. Its magnitude may have resulted from a larger environmental perturbation, although the combination of warming, hypercapnia, ocean acidification, and hypoxia during the end-Permian extinction likely exacerbated the crisis because of the multiplicative effects of those stresses. Although ocean carbon cycle and evolutionary changes have reduced the sensitivity of modern ecosystems to physiological stresses, extant marine invertebrates face the same synergistic effects of multiple stressors that were so severe during the end-Permian extinction.”
ldavidcooke says
Re:4
Hey Wayne,
Okay, for S&G: 4 quart jars, initial temperature 48 C. #1 tap water, #2 tap water and 1 tsp sucrose, #3 tap water and 3/4 tsp table salt, #4 Everclear. Each stirred 10 times with a silver spoon.
1 kW microwave, rotation rate 1 per 15 sec. Set for 1 min. 4 candy thermometers kept in a jar of 48 C tap water.
Place all 4 quart jars within 1/4″ of the center of rotation. Turn on for 1 min. Immediately after the ding, insert the pre-warmed thermometers into the jars.
Result: #1 53 C, #2 55 C, #3 55 C, #4 51 C.
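As a rough sanity check on those numbers (assuming roughly 0.95 kg of water per jar and the full rated 1 kW absorbed, both guesses):

```python
# Expected average warming if a 1 kW oven heats four quart jars for 60 s.
# Assumed: ~0.95 kg water per jar, full rated power absorbed (both guesses).
P, t = 1000.0, 60.0        # watts, seconds
m = 4 * 0.95               # kg of water total
c = 4186.0                 # J/(kg K), specific heat of water
dT = P * t / (m * c)
print(round(dT, 1))        # ~3.8 K averaged across the four jars
```

The observed rises of 3 to 7 C straddle that average, so a couple of degrees of jar-to-jar spread sits close to the uncertainty from fill level, field nonuniformity, and thermometer response; weighing the water would tighten things up.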
No, not your model; however, it demos my point.
Cheers,
Dave Cooke
Rattus Norvegicus says
Paul S,
They seem to be describing the process they used for tuning the two parameters (sea ice albedo and the RH threshold for low cloud formation) in coupled mode to get acceptable reference runs. It doesn’t seem like a scandal, more like a normal part of model development.
Ian says
Apologies, but I’m not sure exactly how to bring to your attention the recent lecture (October 31st) by Matthew Ridley at the University of Edinburgh. It is not entirely on the side of the AGW proponents, but it argues a logical case and uses appropriate referencing, and it spends some time discussing confirmation bias. I’m sure there are points raised that you will be able to confirm or refute. I’m fairly sure you will have read it, but I draw your attention to it on the slim chance you have not. It is often said that scientists don’t really know how to communicate with non-scientists; Matt Ridley is a scientist who certainly does.
vendicar decarian says
“The ‘Suite101′ guy is none other than John O’Sullivan, serial science distorter.” – 27
Some background
John O’Sullivan CBE (born April 25, 1942) is a leading British conservative political commentator and journalist, currently Vice President and Executive Editor of Radio Free Europe/Radio Liberty. During the 1980s he was a senior policy writer and speechwriter at 10 Downing Street for Margaret Thatcher when she was British prime minister, and he remains close to Thatcher to this day.
O’Sullivan is Editor-at-Large of the opinion magazine National Review and a Senior Fellow at the Hudson Institute. Prior to this, he was Editor-in-Chief of United Press International, Editor-in-Chief of the international affairs magazine The National Interest, and a Special Adviser to British Prime Minister Margaret Thatcher. He was made a Commander of the British Empire (CBE) in the 1991 New Year’s Honours List.
He is the founder and co-chairman of the New Atlantic Initiative, an international organization dedicated to reinvigorating and expanding the Atlantic community of democracies. The organization was created at the Congress of Prague in May 1996 by President Václav Havel and Lady Margaret Thatcher.
He is known for O’Sullivan’s First Law (a.k.a. O’Sullivan’s Law), paraphrased by George Will as stating that any institution that is not libertarian and classically liberal will, over time, become collectivist and statist.
O’Sullivan currently resides in Decatur, Alabama with his wife Melissa and stepdaughters – Katherine, who is 23, and Amanda, 17.