As we did roughly a year ago (and as we will probably do every year around this time), we can add another data point to a set of reasonably standard model-data comparisons that have proven interesting over the years.
First, here is the update of the graph showing the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v, NCDC and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs.
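For readers who want to see how such a comparison is assembled, here is a minimal Python sketch of the two processing steps described above (re-baselining each series to 1980-1999 and taking the 2.5-97.5 percentile envelope across model runs). The arrays are random placeholders standing in for the AR4 runs and the observational indices, not real data.

```python
import numpy as np

# Minimal sketch of the processing described above: re-baseline each series
# to its 1980-1999 mean, then take the envelope enclosing 95% of model runs.
# Random numbers stand in for the actual AR4 runs and observational indices.

years = np.arange(1900, 2011)
rng = np.random.default_rng(0)
model_runs = rng.normal(0.0, 0.15, size=(55, years.size)).cumsum(axis=1) * 0.02
obs = rng.normal(0.0, 0.1, size=years.size).cumsum() * 0.02

base = (years >= 1980) & (years <= 1999)

def rebaseline(series):
    """Subtract the 1980-1999 mean so anomalies share a common baseline."""
    return series - series[..., base].mean(axis=-1, keepdims=True)

model_anom = rebaseline(model_runs)
obs_anom = rebaseline(obs)

# Grey envelope enclosing 95% of the (re-baselined) model runs each year.
lower, upper = np.percentile(model_anom, [2.5, 97.5], axis=0)
print(lower.shape, upper.shape, obs_anom.shape)
```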
The El Niño event that started off 2010 definitely gave last year a boost, despite the emerging La Niña towards the end of the year. An almost-record summer melt in the Arctic was also important (and probably key in explaining the difference between GISTEMP and the others). Checking up on our predictions from last year, we forecast that 2010 would be warmer than 2009 (because of the ENSO phase last January). Consistent with that, I predict that 2011 will not be quite as warm as 2010, but it will still rank easily amongst the top ten warmest years of the historical record.
The comments on last year’s post (and responses) are worth reading before commenting on this post, and there are a number of points that shouldn’t need to be repeated again:
- Short term (15 years or less) trends in global temperature are not usefully predictable as a function of current forcings. This means you can’t use such short periods to ‘prove’ that global warming has or hasn’t stopped, or that we are really cooling despite this being the warmest decade in centuries.
- The AR4 model simulations are an ‘ensemble of opportunity’ and vary substantially among themselves with the forcings imposed, the magnitude of the internal variability and of course, the sensitivity. Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.
- The model simulations use observed forcings up until 2000 (or 2003 in a couple of cases) and use a business-as-usual scenario subsequently (A1B). The models are not tuned to temperature trends pre-2000.
- Differences between the temperature anomaly products are related to: different selections of input data, different methods for assessing urban heating effects, and (most important) different methodologies for estimating temperatures in data-poor regions like the Arctic. GISTEMP assumes that the Arctic is warming as fast as the stations around the Arctic, while HadCRUT and NCDC effectively assume the Arctic is warming as fast as the global mean. The former assumption is more in line with the sea ice results and independent measures from buoys and the reanalysis products (a toy sketch of this last point follows after this list).
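The following toy calculation (invented numbers, two regions only) illustrates why the infilling assumption matters: treating an unsampled but rapidly warming Arctic as if it matched the sampled mean pulls the global anomaly down relative to extrapolating from nearby high-latitude stations.

```python
# Toy illustration of the infilling point above. Two-box "globe": a sampled
# region and an unsampled Arctic that is actually warming faster. Numbers
# and area weights are invented for illustration.

sampled_anomaly = 0.45      # degC, area-weighted mean of the sampled region
arctic_stations = 2.1       # degC, anomaly at stations bordering the Arctic
arctic_area_frac = 0.04     # fraction of global area left unsampled

# HadCRUT/NCDC-style: leaving the Arctic out is equivalent to assigning it
# the mean of the sampled area.
global_hadcrut_like = sampled_anomaly

# GISTEMP-style: extrapolate the nearby-station anomaly into the gap.
global_gistemp_like = ((1 - arctic_area_frac) * sampled_anomaly
                       + arctic_area_frac * arctic_stations)

print(round(global_hadcrut_like, 3), round(global_gistemp_like, 3))
# 0.45 vs ~0.516: the same observations, different global means.
```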
There is one upcoming development that is worth flagging. Long in development, the new Hadley Centre analysis of sea surface temperatures (HadSST3) will soon become available. This will contain additional newly-digitised data, better corrections for artifacts in the record (such as those highlighted by Thompson et al. 2008), and corrections to more recent parts of the record arising from better calibrations of some SST measuring devices. Once it is published, the historical HadCRUT global temperature anomalies will also be updated. GISTEMP uses HadISST for the pre-satellite era, and so long-term trends may be affected there too (though not the more recent changes shown above).
The next figure is the comparison of the ocean heat content (OHC) changes in the models compared to the latest data from NODC. As before, I don’t have the post-2003 model output, but the comparison between the 3-monthly data (to the end of Sep) and annual data versus the model output is still useful.
To include the data from the Lyman et al (2010) paper, I am baselining all curves to the period 1975-1989, and using the 1993-2003 period to match the observational data sources a little more consistently. I have linearly extended the ensemble mean model values for the post 2003 period (using a regression from 1993-2002) to get a rough sense of where those runs might have gone.
Update (May 2010): The figure has been corrected for an error in the model data scaling. The original image can still be seen here.
As can be seen, the long-term trends in the models match those in the data, but the short-term fluctuations are both noisy and imprecise.
Looking now to the Arctic, here’s a 2010 update (courtesy of Marika Holland) showing the ongoing decrease in September sea ice extent compared to a selection of the AR4 models, again using the A1B scenario (following Stroeve et al, 2007):
In this case, the match is not very good, and possibly getting worse, but unfortunately it appears that the models are not sensitive enough.
Finally, we update the Hansen et al (1988) comparisons. As stated last year, the Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%) (and high compared to A1B), and the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the best estimate (~3ºC).
For the period 1984 to 2010 (1984 being the year these projections started), scenario B has a trend of 0.27+/-0.05ºC/dec (95% uncertainties, no correction for auto-correlation). For GISTEMP and HadCRUT3, the trends are 0.19+/-0.05 and 0.18+/-0.04ºC/dec respectively (note that the GISTEMP met-station index has a trend of 0.23+/-0.06ºC/dec and has 2010 as a clear record high).
As before, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world. Repeating the calculation from last year, and assuming (again, a little recklessly) that the 27-year trend scales linearly with the sensitivity and the forcing, we can use this mismatch to estimate a sensitivity for the real world. That gives 4.2 x 0.19 / (0.27 x 0.9) ≈ 3.3ºC. And again, it’s interesting to note that the best estimate sensitivity deduced from this projection is very close to what we think in any case. For reference, the trends in the AR4 models for the same period have a range of 0.21+/-0.16ºC/dec (95%).
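A sketch of the trend and scaling arithmetic above is given below. The temperature series is synthetic (a placeholder with a built-in trend), so only the final rescaling line uses numbers actually quoted in the text.

```python
import numpy as np
from scipy import stats

# Sketch of the calculations described above: an OLS trend with a ~95%
# confidence interval (no autocorrelation correction), plus the crude
# sensitivity rescaling. The temperature series here is synthetic.

years = np.arange(1984, 2011)
rng = np.random.default_rng(1)
temps = 0.019 * (years - 1984) + rng.normal(0.0, 0.08, years.size)  # fake anomalies

fit = stats.linregress(years, temps)
trend_per_decade = 10.0 * fit.slope
ci95_per_decade = 10.0 * 1.96 * fit.stderr   # rough 95% interval, no AR(1) correction
print(f"{trend_per_decade:.2f} +/- {ci95_per_decade:.2f} degC/decade")

# Crude rescaling quoted in the text: model sensitivity 4.2 K, scenario B
# trend 0.27 K/decade with forcing ~10% high, observed trend 0.19 K/decade.
implied_sensitivity = 4.2 * 0.19 / (0.27 * 0.9)
print(round(implied_sensitivity, 1))   # ~3.3 K per doubling of CO2
```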
So to conclude, global warming continues. Did you really think it wouldn’t?
Hank Roberts says
> SM
Read what you linked.
Nature (a year ago) mentioned a study published in Science.
Here’s how to check:
http://www.google.com/search?q=site%3Arealclimate.org++“water+vapor”+stratosphere+Science
Didactylos says
Sometimes, I feel like knocking heads together.
On the one hand, there are people painting a picture based on extreme estimates, and even beyond extreme estimates. On the other, there are people rejecting the reality of these extreme estimates, and taking the selfish view that it’s not their problem.
Neither view is helpful.
The extreme scenarios may not come to pass tomorrow, but if we remain on this path, then they will, inevitably, become reality, in some form or another. Just because some of us may be dead before any of this becomes an issue is not an acceptable reason for ignoring it. Nor is it acceptable to ignore it because we live in comfortable conditions likely to be able to survive moderate warming for a while longer than most places.
Selfishness is a useful evolutionary trait, but when we are condemning our future to oblivion, it may be a good time to look up “tragedy of the commons”, and ponder the implications.
Jacob Mack says
Bayesian Statistics, a general primer:
As discussed by: Hierarchical Modeling for the Environmental Sciences: Statistical Methods and Applications (2006) by James S. Clark and Alan E. Gelfand;
Search For Certainty: On the Clash of Science and Philosophy of Probability (2009) by Krzysztof Burdzy;
and for more specific relevance and scope: Review of the U.S. Climate Change Science Program’s Synthesis and Assessment Product 5.2, “Best Practice Approaches for Characterizing, Communicating, and Incorporating Scientific Uncertainty in Climate Decision Making” (2007).
I also back up and provide a simpler but accurate explanation in a more algebraic and easier-to-read format taken from Statistics: A Step by Step Approach, International Edition (McGraw-Hill), written by Allan Bluman. Bluman, in my opinion, writes the best and most comprehensive self-teaching textbook for general statistics. He writes hypotheses as H0: μ = u and H1: μ ≠ u, instead of λ = θ or λ ≠ θ, but not everyone here reads those symbols that way.
In general we can look at Bayesian equations in the following manner:
p(θ|y, λ) = p(y, θ|λ) / p(y|λ) = p(y, θ|λ) / ∫ p(y, θ|λ) dθ = f(y|θ) π(θ|λ) / ∫ f(y|θ) π(θ|λ) dθ. What is going on here is an approach to modeling where the observed data and any unknowns are treated as random variables. As we will see in a minute this can be expanded upon to include other factors and to better bridge priors and current observations after they are treated separately.
f(y|θ) is the distributional model for the observed data y = (y1, y2, y3, y4, …). The vector of unknown parameters is θ, and keep in mind that some quantities do and some do not depend upon θ; the current data we observe do not. θ is assumed to be a random quantity sampled from a prior distribution (a lot of people simply call θ ‘the prior’, but we need to delve a little deeper), namely π(θ|λ), where λ is a vector of so-called hyperparameters. λ is itself a parameter that controls things like variation across populations or spatial similarity. If λ is known, then we can use inference (inferential statistics) for θ, the unknown parameters, from the general equation I opened this post with.
Bayesian inference, which is now a paradigm in its usage and importance in modern science, does have advantages over the frequentist statistical philosophy: it offers a more unified approach to data analysis, and it incorporates prior professional opinion and external empirical evidence into the results through the prior distribution π. Here lie some issues too: if a couple of heavily relied-upon studies have unknown flaws, the Bayesian approach may or may not be able to correct for those, whereas direct empirical observations can better correct for such issues, and some frequentist approaches, though much more tedious, can better control for such errors. Now, it is impossible to observe the whole human system in real time, all the time, just as we cannot observe the whole earth and see all heat flow and temperature fluxes and what is causing such fluxes, so certainly do not throw out the baby with the bathwater either. I think re-analysis with several methods in addition to what is used may be of use, including, but not limited to, counter-factuals and some falsifiable methodology. More on that in a separate post.
Now some more math:
p(θ|y) = N(y|θ, σ²) N(θ|μ, j²) / p(y) = N(θ | (σ²/(σ²+j²))μ + (j²/(σ²+j²))y, σ²j²/(σ²+j²)). Again this is a general form and there can be tweakings and rearrangements to perform various calculations and estimations. This last equation is used when we assume a Gaussian observation with a Gaussian prior, and from those assumptions we compute the posterior.
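As a concrete illustration of the Gaussian conjugate update above, here is a minimal Python sketch; the prior and observation numbers are made up purely for demonstration and are not drawn from any climate dataset.

```python
# Minimal sketch of the Gaussian (normal-normal) conjugate update above.
# All numbers are illustrative, not taken from any climate dataset.

def gaussian_posterior(mu_prior, var_prior, y_obs, var_obs):
    """Posterior N(mu_post, var_post) for one observation y_obs ~ N(theta, var_obs)
    with prior theta ~ N(mu_prior, var_prior)."""
    w_prior = var_obs / (var_obs + var_prior)   # weight on the prior mean
    w_data = var_prior / (var_obs + var_prior)  # weight on the observation
    mu_post = w_prior * mu_prior + w_data * y_obs
    var_post = var_obs * var_prior / (var_obs + var_prior)
    return mu_post, var_post

# Example: a loose prior centred on 3.0 with variance 4.0,
# one noisy observation of 2.0 with observation variance 1.0.
mu_post, var_post = gaussian_posterior(3.0, 4.0, 2.0, 1.0)
print(mu_post, var_post)  # posterior mean 2.2, posterior variance 0.8
```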
This is a very generalized beginning to the discussion and I welcome corrections from statisticians, physicists and mathematicians who work in climate science, as it specifically pertains to the work they do using Bayesian statistics or related methods of importance. The equations are straight from the textbooks and papers I use in my work, or use to analyze others’ work.
In the more simplified terms of Bluman (2009), Bayesian statistics takes the product of the values in the numerator and divides it by the sum of the values in the denominator to obtain a value. We can sketch a tree, apply potential or probable values, and get a rough outlook on where we are and where we may be going. However, minus critical data, Bayesian methods can fall short, even if the critical data or data points seem small and inconsequential.
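A toy numerical version of that Bluman-style tree calculation (the branch probabilities are invented for illustration, not taken from the thread):

```python
# Toy version of Bayes' rule on a two-branch probability tree.
# Numbers are invented for illustration.

prior = {"A": 0.3, "B": 0.7}            # P(branch)
likelihood = {"A": 0.8, "B": 0.1}       # P(evidence | branch)

# Numerator: product along each branch; denominator: sum over all branches.
joint = {k: prior[k] * likelihood[k] for k in prior}
evidence = sum(joint.values())
posterior = {k: v / evidence for k, v in joint.items()}

print(posterior)  # {'A': ~0.774, 'B': ~0.226}
```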
Frequency philosophy attempts to provide scientific explanations for probability, and has not been very successful; on the other hand, Bayesian philosophy can still be far too subjective. As I have said several times, these ideologies and applications should not be completely thrown out or written off as complete failures, as Burdzy and other experts assert, but his warnings about both methods are of immense relevance to mathematics and science in general. I also submit that the use of so much statistics in the absence of empirical evidence provides a telling point about some of the weaknesses in the GCMs. Since not all important and influential data can be input to the models, there must be assumptions being made, as in any mathematical or scientific method, but perhaps more so when modeling such a complex climate system.
Backing up a minute: I personally work with Bayesian statistics every day in my own work. It is also used by physicians, typically less experienced ones; as they become more experienced they can rely more on professional experience, but this is no bad reflection on the utility of Bayesian statistics itself. It does, however, have advantages and disadvantages: in climate science, genetics and the environmental sciences it takes supercomputers, utilizing Markov Chain Monte Carlo integration methods among others, to compute and solve otherwise intractable integrals. This is both a good thing and a call for caution. The computers make things much easier, and they are in fact indispensable; however, there remain good chances for input error, computer error and, at times, mistakes slipping past quality controls and error analysis. Also keep in mind that the subjectivity and heavy reliance upon judgment in Bayesian methods, though not absent in frequentist methodology and elsewhere, is more pronounced here. When dealing with one gene and one function at a time this can be an issue, but less so; in more vast and complex systems like the global climate system, or a whole genome, the probability of errors, and the unavoidable clashing of errors/cognitive dissonance in results, can be immense and yes, very robust. It is unavoidable that raw data must be worked out, summarized and have statistical methods applied. The incompleteness and subjectivity of Bayes makes it subject to at least some scrutiny and calls for careful re-analysis.

The other issue is that we must be very careful what ‘current data’ is entered, and this holds true of any field, not just climate science. Of late there have been numerous errors in studies conducted in medicine and psychiatry, for instance, as reported over at Neurocritic, the magazine Science and various online science blogs. The main issue was the misuse, misinterpretation and analysis of statistics, the setting up of control/experimental groups specifically, and inferring from larger populations in general.

In terms of climate science I want to see more relationships drawn between reported warming and thermodynamics, which as we all know contains immutable laws. Statistics serves to look at uncertainty and to summarize existing data in a more cohesive manner. I want to see more studies cited as reporting 3 degrees C from a doubling of CO2 not only statistically and with an analysis of albedo changes, IR trapped and CH4 emitted from clathrates, but showing, more precisely, how heat relates to an upward trend in temperature, thermodynamically. I want to see thermodynamics incorporated in the models, and in the Bayesian methods too. We know the four laws cannot be violated, and we know the first and second laws are most pertinent to this discussion of AGW.
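For readers unfamiliar with the Markov Chain Monte Carlo methods mentioned above, here is a minimal Metropolis sampler sketch in Python. It is a generic illustration only (an arbitrary Gaussian target density), not code from any climate model or analysis.

```python
import math
import random

# Minimal Metropolis sampler: a generic illustration of the MCMC integration
# mentioned above, not any climate-science code. Target: unnormalized density
# proportional to exp(-(x - 3)^2 / 2), i.e. a Gaussian centred on 3.

def log_target(x):
    return -0.5 * (x - 3.0) ** 2

def metropolis(n_samples, x0=0.0, step=1.0, seed=42):
    random.seed(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if math.log(random.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

draws = metropolis(50_000)
burned = draws[5_000:]                 # discard burn-in
print(sum(burned) / len(burned))       # should be close to 3.0
```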
I am interested in looking at, understanding and discussing further the uncertainty in uncertainty analysis. This is something I am looking to do myself in an upcoming lab job in another field of science, which is why this discussion on models, certainty, uncertainty and statistics is of such importance to me.
I am out of time and would like to at least get this post out prior to going to work. Oh, and I agree with David B. Benson 100% in the ongoing discussion on S and Cauchy. More on that in my next post.
Jacob Mack says
Ray Ladbury, please show me your work. Please show the general equations you are working with and why. In this thread I am making no personal attack on anyone. Let us speak of statistics and thermodynamics, one at a time, or in separate paragraphs in the same posts. I know many climate scientists are physicists, chemists, meteorologists and mathematicians. We are all prone to error, or to make judgments where there may be an error, at times. I do not know everything, nor will I, but let us behave civilly and continue the discussion. I just posted some generalized equations and some applications under specific assumptions, and some of my concerns about the methods I actually use myself and have studied in the classroom, in textbooks and in the real world. Let us continue from there, Dr. Ladbury :)
john byatt says
#235 OT. Ken, you can drastically reduce your use of heating oil.
I lived in Tasmania for two years and purchased lightweight thermal underwear for the whole family; Google DAMART.
David B. Benson says
Pete Dunkelberg @222 — I’m not sure what you mean by a paleo prior, but if you mean actually looking at the evidence then that is not in the spirit of the cloistered expert. The cloistered expert knows the physics (and geology), but has no access to the actual data. The cloistered expert determines, as best she can, her subjective (but informed) prior and only then looks at the evidence to update to the posterior pdf.
Ray Ladbury & Tamino — I’m certainly finding this discussion of great interest. For reasons I’ll explain in a subsequent comment, can we restrict attention to a cloistered expert in the spring of 1959 CE? The cloistered expert knows physics and geology and has access to all the literature up to that time; in particular, whichever of “The Warming Papers” appeared before then, and so of course Guy Callendar’s work as well as Arrhenius’s two attempts to compute S.
This cloistered expert isn’t a good enough mathematician to use a uniform pseudo-distribution over the entire real line; she insists that her subjective pdf integrated over the entire real line has value 1.
Ray Ladbury — I fear you are informed by the data when you insist that S cannot be other than non-negative. I am quite, quite certain that it is, but nowhere near the absolute certainty I have that entropy is non-negative. For entropy the pdf certainly has support bounded below by 0, but for S? I’m not so certain, so make the probability of negative values just an exceedingly small one.
Ray Ladbury says
Jacob, I’m more than happy to have a discussion based on evidence. I just haven’t seen any coming from you in support of your position yet. I also have trouble with arguments that impugn the competence or integrity of an entire field of scientific endeavor. Moreover, since virtually every professional or honorific organization of scientists has taken a position in support of the science, and since the scientific consensus is arrived at via the scientific method, when you impugn the consensus, you are impugning the entire scientific community AND the scientific method.
The thing is, Jacob, I am not an expert in climate science. I’ve worked at it and understand the basics well enough to see that the science is pretty coherent. I understand most of the statistical and analytical techniques. I know and understand a good portion of the evidence. However, ultimately I tend to buy into the consensus because I can see that it is arrived at via the scientific method. And I know from experience that the scientific method generally yields reliable consensus.
I look at the other side of the argument, and I see these guys aren’t doing science. They are not stating clear, testable hypotheses. Their story is switching from day to day. They aren’t gathering data, and most important, they aren’t developing new understandings of the climate. When I see two groups of scientists–one developing new techniques, making new, testable hypotheses and steadily advancing their model, and the other saying, “Oh, it’s too complex to understand,” I’m going to throw my weight behind the former.
So, Jacob, if you can show me a theory that makes as much sense of Earth’s climate and makes as many verified predictions as the current consensus model and which doesn’t imply serious problems due to warming, I’ll be the first to pat you on the back. Until then, I’m going to have to go with the folks who are doing science.
Recaptcha: Fricking Chinese characters? Come on.
Ray Ladbury says
David Benson, Based solely on the fact that Earth was 33 degrees warmer than its blackbody temperature, on what was known of the absorption spectrum of CO2, and on the fact that Earth’s climate did not exhibit the exceptional stability characteristic of systems with negative feedback, I’d probably still go with restricting CO2 sensitivity to 0 to +infinity. I just don’t see any reason, empirical or physical, to go with a nonzero probability for a negative sensitivity. Now the exact form for the sensitivity Prior probability distribution based on 1959… that I’ll have to think about.
dhogaza says
Jacob Mack:
So it’s you against thousands of physicists, chemists, meteorologists, and mathematicians.
And we’re supposed to believe that you’ve shown them all to be wrong, based on some hand-waving posts absent of much detail, and no willingness to summarize your astounding, paradigm-scuttling, nobel-prize winning achievement in the form of a scientific paper that will lead to your name being established in the firmament with the likes of Galileo, Einstein, and Bohr.
Why don’t you claim your laurels by codifying your rock-solid debunking of physics, chemistry, meteorology, and mathematics?
Beating down Ray in a blog thread (not that he’s actually being beaten down, I’m being hypothetical here) isn’t doing science.
C’mon, reap the laurels, the rewards, if you can actually do it you’re a shoo-in for a seat in the House, if not the Senate, and if you don’t want that, the tea party lecture circuit’s your dime.
Lay your cards on the table in a credible venue …
Edward Greisch says
See:
http://dotearth.blogs.nytimes.com/2011/01/27/on-hollywood-hiv-alcohol-and-warming
which contains a comment by Gavin. The dotearth article is on a subject I have advocated for RC: How to put the science into action. We all know that the models work well enough to support strong action. Updates to models are not required to decide that a radical departure from BAU is required immediately.
Geoff Beacon says
Pete Dunkelberg 231
Thanks for the reference to Zhang et al.,
“Arctic sea ice response to atmospheric forcings with varying levels of anthropogenic warming and climate variability”
http://www.agu.org/pubs/crossref/2010/2010GL044988.shtml
The abstract suggests some good news, modelling a later time for a summer free of Arctic sea-ice than one might expect
from extrapolating Arctic sea-ice volumes. If falls in Arctic sea-ice volume were to keep up the pace they have shown over the past decade, the Arctic would be all open sea in summer in under ten years. See http://psc.apl.washington.edu/ArcticSeaiceVolume/IceVolume.php
From the point of view of climate modelling the all-gone moment isn’t as important as the magnitude of the change in albedo – particularly in the spring, summer and autumn.
I don’t particularly bet to make money but because I think a market in environmental futures is important.
I suppose I’ll have to find the time to negotiate the payment system and then read the actual paper from Zhang et al.
Anne van der Bom says
Ken Lowe,
27 Jan 2011 at 12:51 PM
A back-of-the-envelope calculation suggests you can save very little in GHG emissions by optimising the delivery of your heating oil. If you want to really put a dent in your GHG emissions, reduce your consumption of heating oil. Insulate your home as well as you can. Lower the thermostat. Install a solar water heater; with a somewhat oversized collector and storage tank, it can help to heat your home too. A ground-sourced heat pump is also a good, if somewhat expensive, option to completely eliminate the need for heating oil.
Install solar panels, the UK has an exceptionally generous feed-in-tariff, use it. It won’t save you heating oil, but will lower GHG emissions from powerplants.
I don’t think however this is the blog for this topic. There are certainly countless forums in the UK where you can get much better advice than here.
Martin Smith says
Re #241 (tamino & gavin)
The denial space template has been adapted to be a rhetorical blog post that extracts key paragraphs and graphics from an allegedly peer reviewed and published scientific paper. A link to the paper is always provided, but that link refers to the paper in some aggregation site, which often charges for access to a pdf of the paper. It all looks so legitimate, and few people have the time to fact check claims in the denialist blog, let alone the scientific knowledge to be able to do it.
That’s what was done with this Patrick Frank paper, which you guys quickly refuted, but which I only had a strong suspicion was total gibber. I suspect that most people who get referred to these denial space blogs will read them and take their conclusions on board without trying to verify them. I see lots of these denial blogs, because I am in a never-ending argument with a denier on an Investor Village message board. He claims to be an engineer, yet he continually posts denier blogs without doing any fact checking. I have refuted many of them myself, and often I refute them with info from Real Climate (thanks much), but there is so much of this denial stuff being published as rhetorical blogs, there is little hope for the casual reader. And most people are casual readers.
Clearly, the guy I argue with is only interested in short term stock market gains, and his attitude seems to be typical of people with some connection to the coal and oil industries. It all seems orchestrated, but I am loath to wear a tinfoil conspiracy hat. And more often now, I catch myself in a dark place where I am hoping AGW will increase suddenly and shut these people up once and for all. I don’t like that.
Septic Matthew says
258, Ray Ladbury: exceptional stability characteristic of systems with negative feedback,
What did you mean by that?
Negative feedback can produce oscillations, the simplest case being a harmonic oscillator. Increasing the negative feedback, as might happen in the atmosphere if global warming creates increased cloud cover (hence albedo), can increase the amplitude of the oscillations.
FurryCatHerder says
Pete Dunkleberry @ 237:
Well … “deny harder” is always an option, but one of the primary claims supporting the denialosphere is the relatively unavoidable fact that we’ve not had the kind of record high that is unarguably a new record high.
When we break whatever the denialosphere think is the record, which we almost certainly will in the next 3 years, they won’t have that excuse anymore.
David B. Benson says
Ray Ladbury @258 — Carl Hauser is also quite interested in Bayesian reasoning, and today over a long lunch we considered the question of how the expert in 1959 CE would be able to construct a prior pdf based on what was known then. Carl was opposed to a uniform distribution over an interval [a,b] on the general grounds that a Bayesian does not exclude any values in a prior, since no amount of evidence can ever restore a non-zero probability to values the prior rules out; one’s mind is made up. That would be ok in situations such as a pdf for, say, temperature, where we already knew in 1959 that 0 K is the lower limit, but not for S. The other problem with a uniform pdf is the assumption of uniformity. In 1959 we have Arrhenius’s initial estimate of 6 K, his revision to 1.6 K and then Guy Callendar’s revision back to a higher value. So given all the things which might matter left out of those estimates, one is still left with a sense that around 1.6–6 K is more likely than either smaller or larger values.
Carl suggested attempting a reductionist program, to estimate priors for those factors, such as cloud changes, about which nothing was known. We were unable to see how to combine those into a prior for S. We closed by agreeing that the prior for S would need support over the entire real line, going to zero rapidly for both negative and large positive values.
Along the way I did suggest consulting several experts to pool their estimates for S. This was done in Tol, R.S.J. and A.F. de Vos (1998), “A Bayesian statistical analysis of the enhanced greenhouse effect”, Climatic Change 38, 87–112, but we agreed that in 1959 it might have been difficult to find enough experts.
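For illustration only, here is one crude way to pool several expert point estimates into a prior with support on the whole real line: an equally weighted mixture of normals centred on the estimates. This is a stand-in sketch, not the actual Tol & de Vos procedure, and the per-expert spread is an assumed value.

```python
import math

# Crude pooled prior: an equally weighted mixture of normals centred on
# expert point estimates of S. Illustrative only; not the Tol & de Vos (1998)
# procedure, just a density with support on the whole real line that decays
# quickly away from the expert values.

expert_estimates = [5.5, 1.6, 2.0, 4.0, 3.8]   # K per doubling (values quoted upthread)
spread = 1.0                                   # assumed per-expert uncertainty (made up)

def pooled_prior(s):
    """Equally weighted mixture density for sensitivity s."""
    norm = 1.0 / (spread * math.sqrt(2.0 * math.pi))
    return sum(norm * math.exp(-0.5 * ((s - e) / spread) ** 2)
               for e in expert_estimates) / len(expert_estimates)

for s in (-1.0, 0.0, 2.0, 3.0, 6.0, 10.0):
    print(s, round(pooled_prior(s), 4))
# Density is non-zero everywhere but tiny for s < 0 or s > 8.
```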
John Dixon says
Congratulations; you’ve built a highly effective autocorrelative model with no explanatory power.
Jacob Mack says
David B Benson, frequentist approaches still have some well documented advantages as well.
Septic Matthew says
266, David B. Benson, that is an interesting post. I hope that you are able to follow it up with calculations of the posterior distribution of S given current and future data.
dhogaza says
SM:
You’re suggesting a harmonic oscillator is unstable?
Gilles says
Spontaneous cycles do not appear with negative feedbacks: they are intrinsically non-linear. They occur when the equilibrium solution is unstable (with positive feedbacks), leading the system to diverge from this solution. Then non-linear negative feedbacks kick in above some finite amplitude and produce hysteresis cycles. The system revolves around the “stable solution”, satisfying the global budget requirements (such as energy conservation) on average. Both the frequency and the amplitude of the cycles are very difficult to predict, because they rely entirely on precise non-linear feedbacks which are not easily derived from fundamental laws (not at all, actually): well-known examples are solar cycles, ENSO, etc., whose characteristics cannot be precisely reproduced up to now.
In my view, climate scientists utterly ignore the possibility of long, secular cycles that could trigger variability on timescales of hundreds of years. This is by no means excluded, neither by observations nor by theory, and could contribute a fair part of the natural variability of the 20th century, lowering accordingly the sensitivity to GHGs.
[Response: Oh dear. So the thousands of papers on internal variability of the climate system were apparently written by Martian bloggers and not climate scientists? – interesting…. And we can ignore the fact that the sensitivity is barely constrained at all by 20th Century trends (mainly because of the uncertainty in aerosol forcing) and go back to making naive correlations with single factors in a hugely multivariate system…. Is there no sense in which these conversations can progress past politically-tainted declarations of personal belief? Please at least try to assimilate something from the science. – gavin]
Jacob Mack says
Aerosols can both cool and warm depending upon a range of physical factors. Getting to understand the effects of aerosols, timescales and complex interactions seems to be, as it should be, an active ongoing area of study. There are some very cool papers published in peer review on internal variability, to be sure.
Brian Klappstein says
Pardon if I’m repeating the obvious observation….
In 1998 there was a very strong El Nino and the global annual surface air temperature surged enough to equal the upper bound of the GCM model ensemble. In 2010, the year again started with an El Nino, not as strong as 1998, but probably the strongest since then. This time global annual SAT surged again but only enough to equal the average of the model ensemble.
Geoff Beacon says
Pete Dunkelberg 231
I’ve paid the $25 and read Zhang et al., “Arctic sea ice response to atmospheric forcings with varying levels of anthropogenic warming and climate variability”
http://www.agu.org/pubs/crossref/2010/2010GL044988.shtml
It looks like a serious piece of work and gives estimates of the date that the Arctic will be free of sea-ice in summer, out to 2050 or beyond, but as I understand it:
1. It concedes previous climate models were underestimating the fall in Arctic sea-ice.
2. It assumes climate variability in the Arctic to be consistent with either 1948 to 2009 (or alternatively 1989 to 2009). I doubt that the past two years are typical of either of these periods.
3. It makes no mention of “rotten ice” as reported by David Barber
http://climateprogress.org/2009/11/08/arctic-multiyear-sea-ice-nsidc-david-barber/
4. It necessarily has no mention of the just-published paper by Spielhagen et al., “Enhanced Modern Heat Transfer to the Arctic by Warm Atlantic Water”. Reports of this paper suggest that it is not clear how this warmer water entering the lower depths of the Arctic seas affects the sea-ice, but this seems another unknown. If it were an unknown unknown, that would be worrying.
Allen et al, “Warming caused by cumulative carbon emissions towards the trillionth tonne”, Nature 458, 1163-1166 (30 April 2009) may be the underlying basis for the UK Government’s concentration on carbon dioxide and its downplaying of other climate forcing agents such as methane and black carbon.
Underestimated feedback effects in its climate models undermine this claim. The Arctic sea-ice may be one of them. Zhang et al. may be interesting but it does not give me more confidence in this Trillion Tonne Scenario.
Rejection of the Trillion Tonne Scenario has enormous consequences for public policy.
David B. Benson says
Jacob Mack @268 — I’m attempting to be a compleat Bayesian just now.
Septic Matthew @269 — Thank you. The current attempt is to find a rational way to establish a prior distribution when little is known. For two posterior distributions, see Annan & Hargreaves; I’ll not attempt to replicate that work.
Septic Matthew says
270, dhogaza: You’re suggesting a harmonic oscillator is unstable?
Sounds like it, but really I am just asking Ray Ladbury for his definition of exceptional stability. You could call a Lorenz system or the Brusselator “stable”, even though they generate chaotic trajectories.
Ray Ladbury says
David Benson,
I still don’t see how you get a negative sensitivity given what was known in 1959. There is nothing in the climate that suggests a negative sensitivity. It would require a conspiracy of feedbacks that somehow overshoot zero net forcing. I just don’t see how you get there, and certainly large negative sensitivities can be entirely ruled out. I mean, if you wanted to be conservative, you could maybe use a 3-parameter lognormal with a negative location parameter. What is more, if you have 3 estimates varying between 1.6 and 6, there’s certainly nothing there to suggest a sensitivity even below 1 degree per doubling.
I agree that a uniform prior is problematic. If we take the estimates up to 1959, we have Arrhenius (5.5), Arrhenius (1.6), Callendar (2), Hulburt (4) and Plass (3.8). That gives us an average of 3.3 with standard deviation 1.6. A fit to a lognormal with log-scale mean 1.18 and standard deviation 0.51 doesn’t give a terrible fit. If you want a more noninformative prior you could take a broader standard deviation, say somewhere between 0.65 and 0.85.
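A quick check of those summary numbers, using the five point estimates quoted above; the "lognormal fit" here is simple moment-matching in log space, which need not match whatever fitting procedure Ray used, so the values agree only approximately.

```python
import math

# Quick check of the summary statistics quoted above, using the five
# pre-1959 point estimates of S. The "lognormal fit" is simple
# moment-matching in log space, which need not match Ray's exact procedure.

estimates = [5.5, 1.6, 2.0, 4.0, 3.8]   # K per doubling of CO2
n = len(estimates)

mean = sum(estimates) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in estimates) / (n - 1))

logs = [math.log(x) for x in estimates]
log_mean = sum(logs) / n
log_sd = math.sqrt(sum((l - log_mean) ** 2 for l in logs) / (n - 1))

print(round(mean, 2), round(sd, 2))          # ~3.38 and ~1.59
print(round(log_mean, 2), round(log_sd, 2))  # ~1.12 and ~0.52
```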
Interestingly, if you take all the point estimates of sensitivity Arrhenius to the present, you get a moderately symmetric distribution, centered on about 2.8 with standard deviation of about 1.5. The point estimates are roughly Weibull distributed with shape parameter ~2.
The average value for S has not changed by more than a percent (from ~2.8) since 1989, and the standard deviation of estimates has fallen steadily since 1963. That’s indicative of a pretty mature understanding.
Ray Ladbury says
SM, what I am saying is that if you had negative sensitivity, that would imply strong negative feedback, and you wouldn’t see much change in the climate system–in contrast to the climate we see on Earth. My sole intent is to suggest that negative sensitivities can safely be excluded from consideration for an Earthlike planet.
Septic Matthew says
278, Ray Ladbury, that answers my question.
Jacob Mack says
David B. Benson # 275, no problem, I understand.
captdallas2 says
Ray Ladbury,
Annan’s Bayesian with expert prior approach seemed to me to be a reasonable method to fine tune the range of sensitivity which would help fine tune the risk assessment. No approach would eliminate the possibility of values outside the predicted range unless the range was uselessly broad.
The tighter range would just give policy makers a better target to use to determine mitigation/adaption policy. “What if”, scenarios based on the tighter range should help in making pragmatic decisions.
A more interesting statistical exercise would be what actions are likely to be taken. Personally, I believe that a gaseous fuel infrastructure should be a priority because it increases transportation fuel options without demanding one engine technology be scrapped in favor of another. Consumers, at least American consumers, would be more likely to accept personal transportation alternatives that allow for larger vehicles without increasing dependence on foreign oil. Others would pick another priority. Positive action will be a great compromise.
The only reason I “cherry picked” the 1913 to 1940 range is that it might help improve our understanding of climate response.
In any case, a less contested range of climate sensitivity would help change the debate to what to do, instead of whether to do anything.
John E. Pearson says
I recently had an old friend send me the following claim, which he said was made by a “NASA engineer”: that without CO2 the climate would be much hotter. Coincidentally I found Ray et al. discussing the possibility/impossibility of negative sensitivity. The “engineer” did not invoke feedback at all. Below is what he wrote. The numbered stuff is my summary of his exact words; the exact quotation follows my summary. I am tempted to simply tell my correspondent that his “NASA engineer” is deluded and leave it at that. I hate these zombie arguments and I hate replying to them. The only reason I do it now is out of respect for my friend. I read this stuff and it conjures up images from “Night of the Living Dead”, which I saw in the theater when it came out and which I would really rather forget.

The argument is below. I think (1) is correct. (1A) is false in that it implies that only at ground level can CO2 absorb more energy than it emits, which is equivalent to claiming that the atmosphere can’t be heated except at ground level. I believe (2) is correct. I believe (3) would be correct with or without the pressure-broadening argument, simply because the atmosphere thins with altitude; even so, I would say that (3) is just obfuscating more than right/wrong. Ditto (4). I don’t know off the top of my head how absorption probabilities go with pressure, but for sure they decrease with decreasing pressure; focusing on band structure while excluding the huge drop in number density with altitude is obfuscatory. (5) and (6) are simply wrong. No gas at all is needed in order for radiation to escape to space. As far as I know, if the only physical mechanism under consideration is the radiative cooling of the planet’s surface (which was heated by shortwave solar radiation and reradiated at longer wavelengths in the infrared) via radiative transport, additional gas of any kind can only result in a higher equilibrium temperature. I suppose that with a sufficient change in atmospheric density from adding a gas, one might expect changes in physical processes like thermal conduction and/or advection to make a difference, but that isn’t what the engineer was claiming, by my reading.
The strangest thing is that he begins by claiming that CO2 is a trace gas and can’t have an effect, and finishes by claiming that it has a major effect but that we poor physicists have gotten the sign of that effect wrong for the past century. Comments appreciated.
(1) At ground level the spectral bands are at their maximum widths.
(1A) It is here that CO2 can absorb more energy than it emits.
(2) Farther from the Earth, the temperature and pressure both decrease and the bands get narrower.
(3) This means that when CO2 emits energy towards space, only some of it will be absorbed by the CO2 above it. However, a very very small amount will not be absorbed because the absorption bands are narrower.
(4) The rate of this cooling is partly related to the mean free path – how far the radiation travels before it is reabsorbed. Basically, the farther radiation travels toward space (lower temperature and pressure) before being reabsorbed, the narrower the absorption band and the more heat is lost to space.
(5) When this band spreading is taken into effect, it quickly becomes apparent that carbon dioxide is actually the only gas that cools the atmosphere. Without carbon dioxide the atmosphere has no way to release its energy to space and the planet quickly overheats.
(6) Up to about 11,000 feet (top of the troposphere), water vapor provides this capability. But above that level, there are few, if any, gases to cool the atmosphere.
THE ENGINEER’S EXACT WORDS
My research shows that heat comes first, like heating the ocean, then
the CO2 emitted from the ocean cools the earth back down.
The biggest problem I have is that CO2 represents only .039% of our
atmosphere (thats 390 parts in 1,000,000). Further, according to
Robert Clemenzi, “While the spectra of each gas is different, the
absorption and emission spectra for a specific gas are usually
identical. (The primary exception is fluorescence.)
(1)Though it is
seldom mentioned, this means that CO2 absorbs and emits IR radiation
at exactly the same frequencies. Note however that all the radiation
in a spectral line is not at exactly a single frequency, but instead
in a small range (band) of frequencies. It is the width of these
spectral lines that is affected by temperature and pressure.
Basically, at ground level the spectral bands are at their maximum
widths. (Maximum pressure – but not always the maximum temperature.)
It is here that CO2 can absorb more energy than it emits. As you get
farther from the Earth, the temperature and pressure both decrease and
the bands get narrower.
This means that when CO2 emits energy towards space, only some of it
will be absorbed by the CO2 above it. However, a very very small
amount will not be absorbed because the absorption bands are narrower.
The rate of this cooling is partly related to the mean free path – how
far the radiation travels before it is reabsorbed. Basically, the
farther radiation travels toward space (lower temperature and
pressure) before being reabsorbed, the narrower the absorption band
and the more heat is lost to space.
The funny thing is that when this band spreading is taken into effect,
it quickly becomes apparent that carbon dioxide is actually the only
gas that cools the atmosphere. That’s right, without carbon dioxide
the atmosphere has no way to release its energy to space and the
planet quickly over heats.
Up to about 11,000 feet (top of the troposphere), water vapor provides
this capability. But above that level, there are few, if any, gases to
cool the atmosphere.”
dhogaza says
John E Pearson:
Feed the dude 390 micrograms of LSD and, a day later, ask him if he still thinks tiny amounts of stuff can’t have big impacts …
When I see nonsense like his statement, frankly, I don’t bother reading further. It’s that dumb, and as an engineer, I’m sure he knows of many counterexamples. As a NASA engineer, you could ask him whether or not a bit of O-ring material comprising far less than 0.039% of the total weight of a solid fuel booster could’ve caused a significant systems failure 25 years and a couple of days ago …
Hank Roberts says
Suggest something directly educational on the question.
(You might want to ask the supposed ‘NASA engineer’ how this old notion that’s been widely debunked (and rebunked) is presented as his new idea.)
http://www.google.com/search?q=co2+absorbtion+band+spread+altitude
http://www.physicsforums.com/showthread.php?p=2373492
http://www.skepticalscience.com/The-first-global-warming-skeptic.html
CM says
John (#282),
Can I try my layman’s take? (Corrections welcome, as always.)
Your engineer loses the plot from the beginning by assuming that the warming effect of CO2 is to do with the CO2 absorbing more energy than it emits. The imbalance is not between IR absorbed and IR emitted by a layer of atmosphere, but between the incoming shortwave solar energy from space and the outgoing longwave energy emitted to space, due to the increasing difference between the ground temperature and the temperature of the level from which re-emitted radiation can escape to space. Moreover, without GHGs in the atmosphere, getting rid of heat would not be hard, it would be easy. It would just radiate out into space directly from ground level at all wavelengths. Your engineer is correct only that increased CO2 helps cool the stratosphere.
adelady says
jep@282 Oh dear.
Looking at stuff like this, I’m always perplexed when deciding whether this kind of word salad is above my pay grade or below it.
I’ve probably spent too much of my life dealing with stroppy teenagers who resist rules, logic, structure and everything else about algebra, science, spelling. The breathtaking certainty when assembling poorly understood facts and asserting that their own idiosyncratic connections between them deserve marks I’m not prepared to give is all too familiar.
I call it smart-aleckry.
John E. Pearson says
284:
Hank I don’t recognize anything in your links as pertaining to his argument. Am I missing something? I didn’t hear anything about saturation in his argument. He started off by claiming that 390 ppm was too little to have an effect and ended by claiming that it has a big effect, but that physicists got the sign (of the effect) wrong for the past century. That doesn’t sound like saturation to me. I think it’s incoherent, but that is a separate issue. In any event he says that CO2 actually produces a cooling effect. I believe that cooling by adding trace amounts of a gas to an atmosphere is physically impossible under the assumption that only radiation physics is responsible for heat transport, which is what the guy was arguing. I don’t have a mathematical argument for my claim, just a physical one, which is that as photons pass through a gas they’re either absorbed or not. Absorption means the molecules that did the absorbing end up with additional energy, so that absorption can only result in heating, never cooling. As far as I can tell the whole issue of band structure is entirely superfluous to the sign of the effect of adding gas to an atmosphere. I think that adding trace amounts of gas to an atmosphere can only heat. The heating might be negligible, or not, depending on the gas and the radiation, but it necessarily has a positive sign, doesn’t it? It seems to me that to get cooling, the added gas would have to do something really weird like decreasing the absorption probability for the molecules that were already there before the gas was added. But maybe I’m missing something. I was hoping someone here who knows more about this than me might say something useful and perhaps corroborate/correct my response.
Regarding the guy’s honesty and whether this was his “work” or someone else’s: it occurs to me that when scientists say they’ve “done research” they generally mean they actually did original research. When other people say they’ve “researched” something they often mean only that they read about it somewhere. A general remark: I’ve found that arguments in which disrespect plays a major part are far less convincing than arguments which start like this: http://www.youtube.com/watch?v=k80nW6AOhTs, which is what impugning someone’s honesty is.
David B. Benson says
Ray Ladbury @277 — I wouldn’t have found a negative S in 1959. But with such a limited understanding of how the climate actually works, I (and Carl Hauser) prefer a more conservative prior distribution which allows for that possibility, assuming it actually is found through Bayesian analysis of the evidence collected later than 1959.
Using a translated lognormal is better, but it still has a cutoff at wherever you place the translated zero. Since I don’t see how to justify any particular cutoff, I’d rather have a prior distribution with none at all; supported on the entire real line and vanishingly small for large absolute values.
With five experts giving values, I’d be tempted to use the Tol & de Vos procedure to construct such a prior. Thank you for checking.
I agree that by now there is an exceedingly good grip on the pdf for S with Annan & Hargreaves latest suggesting there isn’t much of a heavy tail.
John E. Pearson says
286 Adelady, I hear ya!
CM said “Moreover, without GHGs in the atmosphere, getting rid of heat would not be hard, it would be easy.”
Thanks for that.
Am I correct that adding any gas at all to the atmosphere can only result in heating at least at the level where the gas is located? That seems to be what you are saying. I wasn’t thinking about the stratosphere. Delinquent that I am, I haven’t followed the whole issue of stratospheric cooling. Does the stratosphere cool because of the CO2 in the stratosphere or because of the CO2 in the troposphere? I’m thinking it cools because of a decreased heat flux into the stratosphere because outbound heat is building up?
Ray Ladbury says
captdallas2, I agree with you about Annan’s motivation, and while I share the motivation, I have great qualms with using a Prior that qualitatively alters the conclusions of the analysis. The question is whether it is reasonable to use a Prior that is 1) symmetric, rather than skewed right, and 2) allows negative sensitivities (which I think are unphysical). I think that perhaps one approach that might make more sense is to characterize each sensitivity determination in terms of both a “best fit” or location and a width, and look at the distributions over these parameters. I’m doing something like that in my day job, so maybe I’ll look at it.
I realize that I’ve given you a bit of a hard time. Don’t take it personally. My goal is to ensure that we 1) all stick to the evidence when it comes to science, and 2) use valid risk mitigation techniques.
Ray Ladbury says
David@288, I’m just going with physics, and I don’t see how you get enough negative feedback to get a negative sensitivity AND get 33 degrees of warming over Earth’s blackbody temperature. Such a strong negative feedback would have to apply to any forcing, right? So, how would you get glacial/interglacial cycles with such a strong negative feedback? There is a difference between what is mathematically possible and what is physically possible–and I’d contend that if you can get negative sensitivity values for a positive forcing, then our understanding of climate would have to be so flawed that we would need to toss the whole thing out and start over again.
David B. Benson says
Ray Ladbury @291 — In the spring of 1959 in my Physics 1c lab I designed, built and tested a negative thermometer; the mercury went down when the bulb was immersed in a beaker of warm water.
With that easily performed experiment in mind, the five expert values all derive from calculations which vary one factor while holding many aspects of the climate system constant (when there is no knowledge that those aspects actually are constant). So to be safe, use a prior distribution with support on the entire real line; after all, the glacial cycles might be due to something else completely un-understood in 1959.
As for symmetry, a pdf constructed from the five experts’ estimated sensitivities using the Tol & de Vos procedure won’t be symmetric, but will have support over the entire real line. It’ll have all moments as well.
captdallas2 says
Ray Ladbury,
I don’t take it personally; really it is just an illustration of the current conundrum. Think of it as moving away from science and into marketing. While the science is still there, it has to be communicated as an opportunity. A differential equations professor from Bell Labs told me there are no problems, only opportunities. That stuck with me, just like KISS did. He was trying to teach me how to work with uncertainties (tolerances) to design things that would work reliably with available technology. So I can live with a 95% probability, while knowing that Murphy is still out there.
Because of Murphy, I would never totally rule out S > 6 or S < 0. We would either be looking at a new Carboniferous period or a glacial period, but both would be bad for business. It would also be cost prohibitive to design around either.
Business wise, it would also be difficult to design for 100 plus governments. A contract with one government is enough of a PITA.
The government we in the US have to work with has already invested in promising technology; we just need to pitch the right blends of technology.
Co-generation has higher efficiency than stand-alone generation. Sulfur-iodine cycle hydrogen production is a neat co-generation product. There has already been a great deal of research to improve high-pressure hydrogen storage and fuel cells, and decent work on combination natural gas/hydrogen pipeline designs. Pretty impressive, because hydrogen is a bitch to contain, and platinum-free PEM fuel cells are becoming affordable. Selling the US and the G6 on a cost-effective energy plan would be a solid start that the ROW would follow if proven.
Clean coal is a given because coal, clean or not, is an abundant interim resource. It is better if clean is used, but without co-generation of hydrogen it is not all that cost effective yet.
Higher temperature nuclear reactors offer the option of hydrogen co-generation. While less efficient, off peak or remote solar and wind offer clean hydrogen production with electrolysis.
Energy independence and a cleaner world without sacrificing creature comforts, all while saving the world. Chaining ourselves to trees is nowhere near as effective as selling a good idea :)
dhogaza says
captdallas2:
So because of this, you never go fishing on a boat, because of the fact that because of Murphy, you can’t rule out that the specific gravity or density of water is so low that your boat will sink before you can say “physics sucks!”
I could lay out an infinite series of everyday actions we all take that, because of Murphy, are irrational because after all, you can never rule out Murphy.
Chris Colose says
John Pearson (on the engineer),
You need to be very careful thinking about individual components of a local energy budget. The troposphere is currently cooling radiatively at about 2K/day, and adding CO2 to the atmosphere generally increases the radiative cooling (primarily through increases in water vapor, though how these details play out also depend on the details of the surface budget). In the stratosphere, the increased radiative cooling with more CO2 is a ubiquitous feature of double-CO2 simulations and this leads to a drop in the temperature there. But the troposphere can still warm with an increased radiative cooling term because it is also balanced by heating through latent heat release, subsidence, solar absorption, increased IR flux from the surface, etc.
The increased troposphere-surface warming from more CO2 is best thought of by the rate of IR escape out the top of the atmosphere, which is reduced for a given temperature. Let’s step back though and think about absorption and emission in the atmosphere.
Suppose first that we are looking through a pinhole at an empty isothermal cavity at some temperature T. The observer will of course see Planck radiation emanating from the back wall of the cavity according to the Planck function B(T), which gives the distribution of energy flux as a function of wavelength (or frequency). We now put an air parcel between the observer and the back wall, a parcel which absorbs IR radiation exiting from the wall to the observer. A parcel means that the medium is small enough to be isothermal and in local thermodynamic equilibrium (which then ensures that the population of the molecular energy levels will be set by molecular collisions at the local atmospheric temperature), but the parcel is also large enough to contain a large enough sample of molecules to represent a statistically significant mass of air for thermodynamics to apply. IR photons are absorbed and knock molecules into higher-energy quantum states. The collision time between molecules in the medium (which is representative of our current atmosphere) is several orders of magnitude shorter than the radiative lifetime of the excited state, so the energy of the molecule will go into the energy reservoir of the local matter, establishing a Maxwell-Boltzmann distribution at a new temperature, T+dT. Meanwhile, when radiation is emitted and escapes without being reabsorbed, the local matter cools.
Our observer will look into the medium and see a transmitted (t) portion of the Planckian radiation B*t=B*exp(-τ) and the medium radiates as B(T)*[1 – exp(-τ)]. A warm parcel of air will radiate more than a colder parcel, even at the same 390 ppm of CO2 in the air due to the population of the different rotational and vibrational energy states of the GHGs from collisions with other atmospheric molecules in the LTE limit. (Also see Ray Pierrehumbert’s Physics Today article for a more thorough description).
In the context of the real atmosphere, an observer looking down from space will see Planckian radiation upwelling at the surface temperature for those wavelengths where the air is very transparent. For those wavelengths in which the air absorbs effectively (such as the 15 micron CO2 band), surface radiation is effectively replaced by colder emission aloft, and is manifest as a bite in the spectrum of Earth’s emission (see this image). Because the temperature aloft is colder than the surface temperature, you can clearly see a bite in the emission at the center of the GHG absorption band, corresponding to predominant emission from the cold upper troposphere or stratosphere.
Increasing the GHG content increases the depth or width of this bite, with the depth constrained by the coldest altitude of the body in consideration (where the “emission height” eventually propagates to) and the width increases as the wings of the absorption features become important. For doppler-broadened (primarily stratospheric) lines, the absorption becomes logarithmic in absorber amount. Physically, the extra GHG is causing a reduction in the total outgoing radiation at a certain T, and so the planet must warm to re-satisfy radiative equilibrium with the absorbed incoming stellar flux. The whole troposphere is basically yoked together by convection, and the warming is communicated throughout the depth below the tropopause.
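To make the single-layer picture above concrete, here is a small Python sketch of what the observer sees at one wavelength: transmitted surface radiation B(T_surf)·exp(−τ) plus layer emission B(T_layer)·(1 − exp(−τ)). The temperatures and the optical depth are illustrative assumptions, not results from any radiative-transfer model.

```python
import math

# Single-wavelength, single-layer illustration of the transmission/emission
# picture described above. Temperatures and optical depth are illustrative
# assumptions, not results from any radiative-transfer model.

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck(wavelength_m, temp_k):
    """Planck spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = math.expm1(H * C / (wavelength_m * KB * temp_k))
    return a / b

wavelength = 15e-6   # 15 microns, centre of the CO2 band
t_surface = 288.0    # surface temperature (K)
t_layer = 225.0      # cold emitting layer aloft (K)
tau = 2.0            # assumed optical depth of the layer at this wavelength

transmitted = planck(wavelength, t_surface) * math.exp(-tau)
emitted = planck(wavelength, t_layer) * (1.0 - math.exp(-tau))
outgoing = transmitted + emitted

print(outgoing, planck(wavelength, t_surface))
# The outgoing radiance is well below the surface Planck value: the "bite"
# in the emission spectrum. Increasing tau deepens it toward B(t_layer).
```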
Chris
ccpo says
“Energy independence and cleaner world without sacrificing creature comforts all while saving the world. Chaining ourselves to trees is no where near as effective as selling a good idea :)
Comment by captdallas2 — 30 Jan 2011 @ 9:57 PM”
This exhibits a very limited understanding of the ecological services of the planet. Just scaling any of these to fully serve 9 billion – the minimum we will hit just from inertia – makes the futility of your claims obvious. When we look at all the other depletions, declines, alterations of habitat, temp increases, etc….
It’s really important to remember we live in a finite world with myriad system faults occurring simultaneously. If we do nothing worse than burn all the coal, there is basically a slim and a none chance of a comfortable, non-tree hugging world ever existing.
When designing for a sustainable system, you must include *everything*.
CM says
John,
Far as I understand (not very far!), it’s both. (1) CO2 in the troposphere, by repeatedly absorbing and re-emitting IR, reduces upwelling IR to the stratosphere (over the wavelengths where stratospheric CO2 would absorb). (2) In the stratosphere, CO2, being a good IR emitter, radiates to space the heat energy it gains from collisions with other molecules. (3) The net effect is cooling as stratospheric CO2 emits more IR than it absorbs.
captdallas2 says
dhogaza,
LOL, there is probably no better example of “dealing” with Murphy than a boat. I think you may misunderstand me.
Didactylos says
“Clean coal is a given”
Clean coal is a myth.
There’s dirty coal, and very dirty coal, like the nasty stuff they mine in Germany.
Why can’t we make the coal companies put their own money into carbon capture research? As ideas go, it’s not the best plan we have. If they can make it work, fine – win for capitalism. If they can’t – capitalism wins again, and we can kiss goodbye to those fossilised irrelevant industries.
Ray Ladbury says
David, it’s easy to see how a negative thermometer would work: you just have to have the glass expand more than the mercury. That’s just physics. What physics could you possibly have that would cool the planet when it traps more energy? And, equally important, would that planet resemble Earth at all in its climate?
Keep in mind that feedback is pretty indiscriminate wrt the source of the energy, so you are positing a mechanism that decreases planetary temperature when you add energy to it–and so increases planetary temperature when you take energy away. I don’t see how you get there with physics.
Besides physics, my objection to including the negative reals is that that probability must come from somewhere, and if you are stealing that probability from the positive tail (and where else would it come from), you are biasing your result toward that which you would desperately like to achieve. To that end, what is to stop you from positing a prior that peaks at zero?
The spirit of maximum entropy/minimum information says that we have to incorporate the physics, and especially the symmetries, into the Prior. That also includes the antisymmetries–e.g. the skew. The physics tells us that it is much easier to get an Earthlike climate with a high sensitivity than with a very low sensitivity. Maybe in that sense I am an Empirical Bayesian, but I get very uncomfortable when I have a Prior that qualitatively changes the conclusions of the analysis, AND my data are not dominant, AND I have no good physics-motivated reason for choosing a prior with very different characteristics than my data.
I think we must also look at what we mean by a conservative analysis. In a scientific sense, the most conservative choice would be to take a prior that includes the negative reals. Hell, take the entire complex plane while we are at it! However, in an engineering sense, the question we must be conservative with regard to is: How bad can it be? If our prior significantly changes the answer to that question without a good physical reason, then we have good reason to call into question our choice of Prior.