The great thing about complex data is that one can basically come up with any number of headlines describing it – all of which can be literally true – but that give very different impressions. Thus we are sure that you will soon read that 2008 was warmer than any year in the 20th Century (with the exception of 1998), that it was the coolest year this century (starting from 2001), and that 7 or 8 of the 9 warmest years have occurred since 2000. There will undoubtedly also be a number of claims made that aren’t true: 2008 is not the coolest year this decade (that was 2000), global warming hasn’t ‘stopped’, CO2 continues to be a greenhouse gas, and such variability is indeed predicted by climate models. Today’s post is therefore dedicated to cutting through the hype and looking at the bigger picture.
As is usual, today marks the release of the ‘meteorological year’ averages for the surface temperature records (GISTEMP, HadCRU, NCDC). This time period runs from December last year through to the end of November this year, and is so called because it divides more naturally into seasons than the calendar year does. That is, the met year consists of the average of the DJF (winter), MAM (spring), JJA (summer) and SON (autumn) periods (using the standard shorthand for the month names). This makes a little more sense than including the JF from one winter and the D from another, as you do in the calendar-year calculation. But since the correlation between the D-N and J-D averages is very high (r=0.997), it makes little practical difference. Annual numbers are a little more useful than monthly anomalies for determining long-term trends, but are still quite noisy.
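The near-identity of the D-N and J-D averages is easy to verify numerically. A minimal sketch, using a made-up monthly anomaly series in place of the real GISTEMP data (the trend and noise levels here are illustrative assumptions, not observations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up monthly anomalies, January of year 1 through December of year 59:
# a modest warming trend plus month-to-month noise (illustrative values).
n_years = 59
months = np.arange(n_years * 12)
anom = 0.015 * (months / 12.0) + rng.normal(0.0, 0.1, n_years * 12)

# Calendar-year (J-D) means: January through December of each year.
jd = anom.reshape(n_years, 12).mean(axis=1)

# Meteorological-year (D-N) means: December of the previous year through
# November.  Shifting the series by one month makes each block of 12 start
# in December; the first met year we can form is year 2.
dn = anom[11:-1].reshape(n_years - 1, 12).mean(axis=1)

# The two annual series share 11 of their 12 months, so they correlate
# very highly even before the common trend is taken into account.
r = np.corrcoef(jd[1:], dn)[0, 1]
print(f"r(D-N, J-D) = {r:.3f}")
```

With real monthly data the same calculation gives the r=0.997 quoted above; with any reasonable synthetic series the correlation is similarly close to 1, simply because the two windows overlap by eleven months.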
The bottom line: In the GISTEMP, HadCRU and NCDC analyses, D-N 2008 was at 0.43, 0.42 and 0.47ºC above the 1951-1980 baseline (respectively). In GISTEMP both October and November came in quite warm (0.58ºC), the former edging up slightly on last month’s estimate as more data came in. This puts 2008 at #9 (or #8) in the yearly rankings, but given the uncertainty in the estimates, the real ranking could be anywhere between #6 and #15. More robustly, the most recent 5-year averages are all significantly higher than any in the last century. The last decade is by far the warmest decade globally in the record. These big-picture conclusions are the same whichever data set you look at, though the actual numbers differ slightly (relating principally to the data extrapolation – particularly in the Arctic).
So what to make of the latest year’s data? First off, we expect that there will be oscillations in the global mean temperature. No climate model shows a monotonic year-on-year increase in temperatures at the currently expected rate of global warming. A big factor in those oscillations is ENSO – whether there is a warm El Niño event or a cool La Niña event makes an appreciable difference in the global mean anomalies – about 0.1 to 0.2ºC for significant events. There was a significant La Niña at the beginning of this year (and that is fully included in the D-N annual mean), and that undoubtedly played a role in this year’s relative coolness. It’s worth pointing out that 2000 also had a similarly sized La Niña but was notably cooler than this last year.
While ENSO is one factor in the annual variability, it is not the only one. There are both other sources of internal variability and external forcings. The other internal variations can be a little difficult to characterise (it isn’t as simple as just a superposition of all the climate acronyms you’ve ever heard of: NAO+SAM+PDO+AMO+MJO, etc.), but the external (natural) forcings are a little easier. The two main ones are volcanic variability and solar forcing. There have been no climatically significant volcanoes since 1991, so that is not a factor. However, we are at a solar minimum. The impacts of the solar cycle on the surface temperature record are somewhat disputed, but it might be as large as 0.1ºC from solar min to solar max, with a lag of a year or two. Thus for 2008, one might expect a deviation below trend (the difference between mean solar and solar min, with the impact not yet fully felt) of up to 0.05ºC. Not a very big signal, and not one that would shift the rankings significantly.
There were a number of rather overheated claims earlier this year that ‘all the global warming had been erased’ by the La Niña-related anomaly. This was always ridiculous, and now that most of that anomaly has passed, we aren’t holding our breath waiting for the ‘global warming is now back’ headlines from the same sources.
Taking a longer perspective, the 30-year mean trends aren’t greatly affected by a single year (GISTEMP: 1978-2007 0.17+/-0.04ºC/dec; 1979-2008 0.16+/-0.04 – OLS trends, annual data, 95% CI, no correction for auto-correlation; identical for HadCRU); they are still solidly upwards. The match to the Hansen et al 1988 scenario B projections is similarly little affected (GISTEMP 1984-2008 0.19+/-0.05 (LO-index), 0.22+/-0.07 (Met-station index); HansenB 1984-2008 0.25+/-0.05 ºC/dec) – the projections run slightly warmer, as one would expect given the slightly greater (~10%) forcing in the projection than occurred in reality. This year’s data therefore don’t really change our expectations much.
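For readers who want to reproduce numbers like “0.16+/-0.04 ºC/dec”: these are ordinary least squares slopes with a 95% confidence interval and no autocorrelation correction. A sketch of that calculation on a hypothetical 30-year annual series (the synthetic data stand in for the real GISTEMP numbers; the 0.016 ºC/yr slope and noise level are illustrative assumptions):

```python
import numpy as np

# Hypothetical annual anomalies for 1979-2008, standing in for the real
# GISTEMP series: a 0.016 C/yr trend plus year-to-year noise.
rng = np.random.default_rng(1)
years = np.arange(1979, 2009)
anom = 0.016 * (years - 1979) + rng.normal(0.0, 0.1, years.size)

# OLS slope and intercept.
slope, intercept = np.polyfit(years, anom, 1)

# Standard error of the slope from the residuals (n-2 degrees of freedom),
# with no correction for autocorrelation, as in the post.
n = years.size
resid = anom - (slope * years + intercept)
se = np.sqrt(resid @ resid / (n - 2) / np.sum((years - years.mean()) ** 2))

# 95% CI uses the t critical value for n-2 = 28 dof (~2.048).
trend, ci = slope * 10, 2.048 * se * 10  # converted to C/decade
print(f"trend = {trend:.2f} +/- {ci:.2f} C/decade")
```

Note that ignoring autocorrelation (as here, and as stated in the post) makes the quoted interval somewhat narrower than a fully corrected one would be.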
Finally, as we’ve discussed before, what climate models did or did not predict is available for all to see. Despite many cautions about using short-term changes to imply something about the long-term trend, these comparisons will still be made. So just for fun, here is a comparison of the observations with the model projections from 1999 to 2008 using 1999 as a baseline. The answer might be surprising for some:
You can get slightly different pictures if you pick the start year differently, and so this isn’t something profound. Picking any single year as a starting point is somewhat subjective and causes the visual aspect to vary – looking at the trends is more robust. However, this figure does show that in models, as in data, some years will be above trend, and some will be below trend. Anyone who expresses shock at this is either naive or … well, you know.
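Re-baselining to a chosen start year shifts the whole curve up or down but leaves the year-to-year differences, and hence any fitted trend, exactly unchanged; only the visual impression varies. A small illustration (the anomaly values here are made up for demonstration, not the actual observations):

```python
import numpy as np

def rebaseline(anom, years, base_year):
    """Shift an anomaly series so it reads zero in the chosen baseline year.

    Only the constant offset changes; year-to-year differences, and hence
    any fitted trend, are untouched.
    """
    anom = np.asarray(anom, dtype=float)
    years = np.asarray(years)
    return anom - anom[years == base_year][0]

# Illustrative (not observed) annual anomalies for 1999-2008.
years = np.arange(1999, 2009)
anom = np.array([0.33, 0.24, 0.40, 0.46, 0.46, 0.43, 0.48, 0.42, 0.40, 0.44])

a99 = rebaseline(anom, years, 1999)  # a warm start year makes the rest sit low
a00 = rebaseline(anom, years, 2000)  # a cool start year makes the rest sit high

# The two pictures differ by a constant offset, but the OLS trends match.
t99 = np.polyfit(years, a99, 1)[0]
t00 = np.polyfit(years, a00, 1)[0]
print(f"trend (1999 baseline) = {t99:.4f}, trend (2000 baseline) = {t00:.4f}")
```

This is the sense in which trends are “more robust”: the start-year choice moves the curves relative to each other on the page, but contributes nothing to the slope.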
As for the next few years, our expectations are not much changed. This coming winter is predicted to be ENSO neutral, so on that basis one would expect a warmer year next year than this year (though probably not quite record breaking). Barring any large volcanic eruption, I don’t see any reason for the decadal trends to depart much from the anticipated ~0.2ºC/decade.
Update: Just FYI, the same figure as above baselined to 1990, and 1979.
dhogaza says
National Geographic is entertaining, and educational in its own way, but don’t mistake it for being a definitive resource for understanding the state of science.
Rick Brown says
Re #345 (cw00p)
As Hank and dhogaza suggest, your opinions will be better informed if you rely on peer-reviewed articles, and on sites such as RealClimate that are based on, and cite, them. Nat’l Geo should be taken with a few grains of salt. The website you reference (incorrectly, as tamino points out) for the 7000 ppm figure may have some interesting information on West Virginia fossils, but given that it’s apparently put together by an engineer for the WV Office of Miner’s Safety, it might best be accompanied by a bucketful of salt if you’re reading it for information on climate change. A more complex, fully referenced depiction of estimates of CO2 levels during Permian times can be found at
http://www.globalwarmingart.com/wiki/Image:Phanerozoic_Carbon_Dioxide_png
JB says
What about the temperatures and sea levels of the ocean basins? Has the melting at the poles been more or less on par with most predictions? The trend of ~0.2C/decade, although significant, is considerably less than a “worst case scenario”. I wonder if the positive feedback of the climatic warming and melting of glaciers will lag behind, not becoming manifest until several decades of warming have already transpired. The ocean, after all, does take quite a while to warm up, but once it does, it retains energy extremely well. Perhaps all of this newly freed-up ice-cold water at the poles is temporarily acting as a negative feedback, but as it absorbs more of the solar radiation over time, it will transform into what we rightly think it is: a predominantly positive feedback system, rapidly intensifying the warming. Therefore, with all other variables being equal, I would suggest a more rapid warming towards the latter part of this century, that is, given the plausibility of the former conjecture.
James says
Re #342: “…my point is simply that laws of physics can predict the future – hence providing an example for a source of information that provides certainty about the future. I am not specifically talking about climate/weather, just science.”
We don’t even need to resort to arcane (to the non-physics major, anyway) examples like the 3-body problem. The problem is to predict the climate in 2050, no? So how about a few scenarios:
1) The world continues more or less on its present course, with recessions, booms, a reasonable amount of technological progress. Business as usual, IOW.
2) In 2015, scientists at LLNL develop cheap, reliable & portable fusion reactors. Within a few years fossil fuel use plummets to about 1% of its 2008 value.
3) Al Qaeda (or some other jihadist group) obtains a significant nuclear capability, and the aftermath of the nuclear exchange of 2019 reduces the population to about 5% of the present number.
None of those scenarios are inherently implausible, and any of them, or many others that even a hack SF author could dream up, would make great differences in the amount of CO2 emitted (and other environmental effects) between now and 2050. Are they predictable by Newtonian physics? I hardly think so…
Chris Colose says
As for predictability (I think what krib really means is projectability): even if you ignore Gavin’s comments about emission scenarios and set a specific target that is independent of human decisions (such as a doubling of carbon dioxide), can you determine the statistics of weather events, or the weather events themselves? Traditional ideas of chaos and climate say yes to the first question and no to the second.
The large-scale climate dynamics are well represented in modern AOGCMs, so failures in the treatment of cloud forcing, the MJO, etc. are likely due to deficiencies in the models’ physical parameterizations.
The microphysical cloud processes, aerosol interactions, etc., which are not resolved (spatially) by models or are not reducible to simple algebraic answers, must be constrained heavily and discussed with uncertainty. There’s no problem with this, except when people believe those uncertainties can be stretched to infinity in either direction.
Chris Colose says
cw00p,
Please cite a scholarly source saying CO2 levels rose as fast as today (I don’t think you’ll have success).
The graph presented on that site of the CO2-temperature correlation over geologic time is essentially worthless, since it doesn’t account for the *very large* uncertainties in both variables as you go back tens of millions of years. The error bars on CO2, for instance, can exceed 1000 ppmv that far back.
You should read some of the work by Dana Royer on the influence of carbon dioxide on climate over geologic time (full-text PDFs are available from his home page).
FurryCatHerder says
In re #316 —
Don’t say that too loud, Jim, or people will call you a kook, crazy, denialist and other bad names.
Rod B says
Chris C., you of course make a valid point, but there is a shadow of an implication in there that CO2 levels going back hundreds of millions of years are pretty accurate if they support AGW, but largely uncertain if not. Is this the case?
Chris Colose says
Rod,
I don’t think Royer or anyone else ignores these uncertainties.
Jim Galasyn says
Edward says
#29 Phil and #35 Pete (in reference to the 12-12-07 BBC article “Arctic Summer Ice Free by 2013”)
Sorry for the delayed response, but it’s my opinion that not many readers or editors of this blog would seriously support the conclusions stated in that article. But you don’t have to take my word for it.
Let me refer you to William M. Connolley’s blog post of Nov 27, 2008, in which he stated:
“And it is not true that The trajectory of current melting plummets through the graphs like a meteorite falling to earth – as we all know, there was marginally more ice this year than last – and if Monbiot, PIRC, or anyone from the Naval Postgraduate School in Monterey, California, or indeed anyone else is stupid enough to believe that all the late-summer ice will be gone by 2013 (or within “within three to seven years”), I’ve got money that says otherwise: wanna bet?”
My money’s on William.
Thanks
Ed
Jim Eaton says
#345 cw00p Says: “Whereas the CO2 levels increased during the Permian, and the earth did warm by 5 degree c total, it took 70,000 years with Co2 at much high ppm than today (7000 ppm by the time of the die off – http://www.geocraft.com/WVFossils/Carboniferous_climate.html )”
I did go to your reference, “Plant fossils of West Virginia” by Monte Hieb. The information on that site seemed to conflict with what I know about the PT extinction. Wondering who Monte Hieb is, I ran across the following:
http://globalwarmingwatch.blogspot.com/2006/04/end-of-our-epoch-all-in-good-time-2.html
“If I am looking for an ‘education’ on mining engineering, I would possibly consult Monte, since he worked as chief engineer for the West Virginia Office of Miner’s Safety. If I were an fossil hobbyist I would probably look at his amateur fossil website.
“But he is not a climate scientist, which is why he has not published any scientific papers on causes of global warming. A search through the peer-review scientific journals where all new claims are tested and confirmed by scientists specializing in the field shows no ground-breaking advances in understanding by Monte Hieb or Harrison Hieb.
“However he has published a lot of opinion pieces on global cooling, and the role of water vapor in global warming.”
For a better understanding of the Permian extinction and the role of CO2 in it, I would suggest reading Peter Ward’s “Under a Green Sky.”
ReCaptcha agrees: “today learn.”
Jared says
One interesting thing that I think is left out when comparing the 1991-00 decadal average to the 2001-08 average is that 1991-00 was heavily influenced by the Pinatubo eruption of late 1991. 1992-94 saw significant global cooling due to Pinatubo, so the 1991-00 average temperature is skewed by that.
In contrast, 2001-07 has seen no such significant volcanic eruption. This should be kept in mind when assessing the decadal trends.
Alf Jones says
Does RC have any comments on the Met Office’s new forecast for global temperatures in 2009?
http://www.metoffice.gov.uk/corporate/pressoffice/2008/pr20081230.html
They are projecting a warmer year than 2008. Is this a useful forecast, or are the uncertainties so wide that it’s not particularly helpful?
I think I am right in saying that all their forecasts, since the first one for 2000, were warmer than what actually happened.
Alf
[Response: All of their press releases are on line, so it should be easy to check. Their release for this last year came in pretty accurate: “Global temperature for 2008 is expected to be 0.37 °C above the long-term (1961-1990) average of 14.0 °C, the coolest year since 2000” (here). – gavin]
Chris Colose says
Alf,
2009 will probably be warmer than 2008, because of the La Nina in 2008. It will be much tougher to say 2009 will be warmer than, say, 2005 which didn’t have much “interesting” going on (and is already anomalous in the context of surrounding years anyway).
Jared says
Chris…
It appears that another La Nina is rapidly developing just in time for 2009, so depending on how long it persists and how strong it gets, 2009 may not be much warmer than 2008.
If 2009 does turn out to be a La Nina year, then we will have had three La Nina-influenced years in the 2001-09 decade (2001, 2008, 2009), and four El Nino-influenced years (2002, 2003, 2005, 2007). In the 1991-00 period (minus 92-94 Pinatubo years), we had three El Nino-influenced years (1995, 1997, 1998), and three La Nina influenced years (1996, 1999, 2000). So we should be able to compare 1991-00 (minus Pinatubo years) to 2001-09 and get a fairly accurate, ENSO-balanced look at decadal trends.
Andrew says
Re #363
Agree; he should not have included the early 90’s as a comparison. However, if he had used a 5-year average, then he could have said that every 5-year period that included 1998 was cooler than the most recent 5 years.
wayne davidson says
365-366 Chris and Jared, there are deep distinctions between late ’07 and late ’08, at least in the Arctic, especially with cloud extent and the resulting not-so-strong Arctic ice freeze-up which exists right now. The December just ending is not showing a specter of super-deep freezing, despite what it seems like in North America. Rather, a lack of high polar stratospheric winds, not as strong as last year’s, should encourage cloud formation further below. But I see a weak La Niña on the maps just as much as anyone else; combinations of mega-regions are crucial, and while it might be cooler at some places on the Pacific equator, there is not the same coordinated synergy for cooling as with the winter of 07-08.
Ron says
Re: Ray Ladbury Says, 29 December at 8:13AM (#340)
Ray, regarding your statement that BeetleB’s “…post is not even logically correct: there are no degrees of certainty”, maybe you’re being a bit too harsh, and quite a bit too dogmatic here. You might, as a starting point, reflect on Descartes’ distinction between a degree of certainty sufficient for ordinary life, or about things we wouldn’t normally doubt, and those things that simply cannot be doubted. The former, I seem to recall, he referred to as moral certainty, whereas the latter would be logical, deductive, etc. In fairness to BeetleB, and with what would seem to be “a reasonable degree of certainty”, he was using the word in the other-than-logical sense.
On another matter, you speak of the models having been “validated”. As it is difficult to imagine you could be using “validated” as a term synonymous with “valid” in its logical sense, I checked Wikipedia. After a cursory reading of the Wikipedia list of about a dozen or so usages, it’s not immediately clear which, if any, of these would apply. Hopefully you’ll be able to help me out. Thanks and a Happy New Year.
Ron
Ray Ladbury says
Ron, Last I saw, “certain” meant sure, positive, beyond doubt. I don’t see much scope for degrees there. Rather, in science, we use the more precise term “confidence”.
As to validated, I was speaking in terms of model validation – verification of trends in the model against data, etc. Certainly the fact that the past 30 years are the warmest 30 in the temperature record provides validation, as do many other stringent tests.
Ron says
Ray, thanks for your quick reply at a time when our minds and bodies are preoccupied with good cheer etc. First the serious stuff.
When you say you were “…speaking in terms of model validation – verification of trends in the model against data, etc.”, is it correct for me to say that this can be diagrammed (verbally) as: (a) the model produces statements of climate trends, (b) these statements are compared to climate data, and if there is (c) correspondence of the model statements to the data from the real world, then the model is validated? In summary, validation is produced by verification. But is this validation process “valid”? It may be perfectly clearly so in your use of the terms, but it isn’t in mine. So, here’s how I understand them.
To say a model is validated is to say that it passes some sort of test that is equivalent to the test of a proper logical argument, e.g. the conditions for a valid syllogism. Thus I understand validation to be a demonstration that a process (method) of doing something or other is capable of producing proper results, i.e. given true inputs the valid process must produce true outputs. So, a validated climate model would be one that takes in all the physical environment data it should have and omits all the data it shouldn’t have, and then is able to process the data in the proper sequences etc. to produce true results.
Verifying, on the other hand, would relate only to determining the accuracy of the data itself.
With due regard for complexities of the issue, if my understanding of these terms is basically correct, then I have a problem in that while these two elements must be integrated to produce climate truth, it’s not clear to me how, without a validated model in the first place, all the proper data can be gathered. If we start first with the data before we try to build a model, what bit of data would tell us that our collection is now complete and it’s time to build? So Ray, I hope you can further elaborate on this issue, or show me the alternative uses of these terms that would overcome the appearance of a bootstrapping process here.
Now for a bit of less serious banter. Ray, with all due respect, maybe you should expand your scope about the use of the word “certain”. In addition to your synonyms (“…sure, positive, beyond doubt…”, included in #6 of my Webster’s New World Dictionary), there’s also #1, “fixed, settled, or determined” (concepts that suggest the arbitrary positions of an establishment and its counter-establishment, e.g. a paradigm shift in science – in fact, didn’t Hume make it clear that what we call “science” is essentially never logically certain, as would be the case with 2+2=4). Then we have #4, which includes “reliable; dependable”. There’s a lot of “so far, so good… but let’s wait and see” mixed into these concepts. I’m positive you would not want to claim these uses have the same degree of certainty as “beyond doubt”. Or, how about definition #8, “some, but not very much; appreciable [as in the expression, “to a certain extent”]”. But more importantly, Ray, spend a few hours listening to people talk about a whole lot of things and count the number of times you hear “I’m almost certain”, “I’m pretty certain”, “I’m very certain”, “I’m positively certain”, “I’m absolutely certain”. That’s the way people really talk, and that suggests to me that there’s a lot of scope for degrees here. But if you want more formal stuff, try a recent article on Certainty in the Stanford Encyclopedia of Philosophy, which I came across after deciding to do a bit of homework to back up my somewhat spontaneous earlier remarks.
Cheers,
Ron
Alf Jones says
Re #364
Here are the met office forecasts for the years 2000-2008 (with uncertainties around +/- 0.15)
0.41,0.47,0.47,0.55,0.50,0.51,0.45,0.54,0.37 C
and here are the “actuals” (www.hadobs.org)
0.24,0.40,0.46,0.46,0.43,0.48,0.42,0.40,0.31 C
They claim a mean error of 0.06C for those forecasts, but the mean error if your forecast method is just to use the same temperature as the current year is -0.005C. Doesn’t that suggest “persistence” is more accurate than their forecast?
My point isn’t that this shows that global warming isn’t happening (clearly the long-term trend shows that it is). Rather, these short-term forecasts are apparently not that useful, and they give the impression that a forecast is evidence of global warming. Some sceptics are even using their press releases about “2007 likely to be warmest year” and “2009 in top 5 warmest years” to claim that global warming is being exaggerated.
Shouldn’t the Met Office be a little more careful about how they present the forecasts? It seems to be a very similar problem to the presentation of the Keenlyside decadal projections.
Alf
[Response: Science is about making predictions and seeing if they pan out. This is a relatively new endeavour and has a number of issues (the limits to predictability of ENSO, other sources of variation etc.). But if you look at that data in more detail, there is definite skill – i.e. the forecast outperforms persistence (r=0.74 vs 0.36), OLS regression = 0.98 compared to 0.5 (though the bias is less, -0.01 compared to 0.07). It might not be perfect, but the variations are being tracked quite well. Given that, their estimate of relative warmth in 2009 is probably a good bet. – gavin]
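The forecast-skill correlation can be checked directly from the numbers posted in #372. A quick sketch (assuming those figures are transcribed correctly; the persistence window here necessarily starts in 2001, since the 1999 observation isn’t listed above, so its correlation won’t exactly match the quoted 0.36, but the ordering is the same):

```python
import numpy as np

# Met Office forecasts and HadCRUT observations for 2000-2008,
# as listed in comment #372 above.
forecast = np.array([0.41, 0.47, 0.47, 0.55, 0.50, 0.51, 0.45, 0.54, 0.37])
actual   = np.array([0.24, 0.40, 0.46, 0.46, 0.43, 0.48, 0.42, 0.40, 0.31])

# Skill of the issued forecasts over all nine years.
r_forecast = np.corrcoef(forecast, actual)[0, 1]

# Persistence: "forecast" each year with the previous year's observation.
# Defined here only for 2001-2008, since the 1999 value isn't listed above.
r_persist = np.corrcoef(actual[:-1], actual[1:])[0, 1]

print(f"forecast r = {r_forecast:.2f}, persistence r = {r_persist:.2f}")
```

On these numbers the issued forecast correlates with the observations at about r=0.74, comfortably better than persistence, which is the point of the response: mean error alone understates the skill because persistence tracks none of the year-to-year variation.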
Ray Ladbury says
Ron, in science the proper term is not “degree of certainty”, which has no meaning there, but rather confidence, which can be quantified. Moreover, while common usage may sanction “fuzzy” certainty, this usage simply takes a perfectly good word and weakens its meaning, even though that is avoidable, as there are other words that serve the purpose better.
First, Ron, do you understand the difference between dynamical and statistical models? Briefly, a dynamical model is an attempt to reproduce the mathematics of the physical system as closely as you can. Parameters are determined with data independent from that to be used for the validation test. The validation test for such a model is whether it reproduces trends in the validation dataset. For noisy systems like climate, the validation is based on many realizations of the model. A stringent validation test would be one where the model would be very unlikely to reproduce the observed trends if it is incorrect. Climate models have passed a broad range of validation tests – e.g. a 30-year warming trend, response to perturbations like ENSO and volcanic eruptions…
On the other hand, in a statistical model, the parameters are determined by a fit to the data. Validation here is a matter of goodness of fit (e.g. likelihood, chi-square…) or an information criterion (e.g. AIC, BIC, DIC…).
Different aspects of the climate models have been validated to different degrees. Climate sensitivity to doubling of CO2 is among the best determined parameters–it’s very hard to get models to work with a sensitivity less than 2 or more than 5 to look anything like Earth. Clouds and aerosols less so. The way climate models are validated is pretty standard in science for dynamical models.
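Ray’s statistical-model case can be illustrated concretely: an information criterion such as AIC trades goodness of fit against the number of fitted parameters. A toy example comparing a no-trend model against a linear-trend model on a synthetic annual series (the 0.016 ºC/yr slope and noise level are illustrative assumptions, not data):

```python
import numpy as np

def aic_gaussian(y, yhat, k):
    """AIC (up to an additive constant) for a least-squares fit with
    Gaussian errors: n*ln(RSS/n) + 2k, where k counts fitted parameters."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Synthetic 30-year annual series with a real underlying trend.
rng = np.random.default_rng(2)
years = np.arange(1979, 2009)
anom = 0.016 * (years - 1979) + rng.normal(0.0, 0.1, years.size)

# Statistical model 1: constant (no trend).  Statistical model 2: linear trend.
aic_flat = aic_gaussian(anom, np.full_like(anom, anom.mean()), k=1)
coef = np.polyfit(years, anom, 1)
aic_trend = aic_gaussian(anom, np.polyval(coef, years), k=2)

# Lower AIC is preferred: here the fit improvement from the trend term
# far outweighs the 2-point penalty for the extra parameter.
print(f"AIC flat = {aic_flat:.1f}, AIC trend = {aic_trend:.1f}")
```

This is only the statistical-model half of Ray’s distinction; validating a dynamical model, as he says, is a matter of comparing model realizations against independent observations, not of minimizing an information criterion.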
Hank Roberts says
Ron, you’re thinking of the wrong kind of model.
You’ve ended up with an impossible goal that sounds like an argument for doing nothing, because each step toward perfection can only take you half of the remaining way.
This isn’t like a model of an airplane at 1:1 where if you put all the pieces together exactly right, it will fly, and otherwise not at all.
This is more like resolution in an image file, where you’re never going to get close to 1:1 but you’ll be able, once you have enough pixels, to recognize the image.
Start with the very simple. Add more detail. Reading the history of modeling makes it much easier to figure out how this is done — compare what happened to your long post above.
Eli gave good advice on how to use one of the online models here:
http://tamino.wordpress.com/2008/12/26/the-other-anthropogenic-greenhouse-gas/#comment-25656
wayne davidson says
It is again astonishing to realize that the High Arctic is warmer than regions further to the south, a common winter feature nowadays. Contrarian lobbyists will easily ignore this weather contradiction and flaunt the dreadfully cold temperatures of winter as nails in the AGW coffin.
I am chasing low-RH stratums to determine apparent Cp’s (the specific heat, calculated from the lapse rate and gravity with altitude); “apparent” because moisture and other factors distort its dry sea-level value. And I am beginning to see a pattern…
But first, can anyone calculate the RH forcing equivalent to CO2 at 384 ppm, at -35 C and 500 mb? That is, what amount of RH would give as much forcing as CO2 can at 384 ppm in -35 C air at 500 mb? (I suspect it to be small.) Cp also varies with temperature, pressure and RH, so it’s good to know roughly the RH/CO2 forcing equivalence… I’ll come back with results in a little while.
FurryCatHerder says
In re Jared @ 363:
I think that trying to say “this year was affected by this one thing” is dangerous, because the decade from 1991 through 2000 wasn’t monotonically increasing from, say, 1994, after some number of years of cooling due to Pinatubo. And likewise, the ten years from 1999 through 2008 haven’t continued in some linearly upward trend now that there aren’t any volcanic eruptions of the magnitude of Pinatubo.
Jared says
#376
I’m not sure what you mean here. No one knows exactly what the temperatures of 1992-94 would have been if Pinatubo had not happened…that is why I think it is best (safest) to just leave them out when comparing the 1991-00 decadal average to the 2001-08 decadal average. We do know that there was a strong Nino in 1992, so that year at least would have been much warmer had Pinatubo not occurred.
Jared says
Alf…do you have a link to those projections from #372?
Ron says
Re : Ray Ladbury Says: 1 January 2009 at 8:03 AM (373) and Hank Roberts Says:
1 January 2009 at 10:13 AM (374)
Just returned and noted your immediate replies. Tied up with other matters, but hope to study them in the next few days. Meanwhile, please accept my thanks.
Ron
Alf Jones says
#372
Gavin, Thanks for the explanations. It is clear that the more normal measures of skill seem to show higher skill for the met office’s projections than persistence. Perhaps they should advertise those measures as well as the bias they use ;-)
#378
Jared… I got some of the numbers for the projections from press releases on the met office site but after a bit of digging all the numbers (and science behind them) can be found via a link at
http://www.metoffice.gov.uk/science/creating/monthsahead/seasonal/index.html
registration is needed to get the seasonal forecast and the global annual forecasts.
Alf
Wayne Davidson says
I would be grateful if someone could use his/her model to answer what I asked for in #375. I’ll expand a little: what would be the average RH of a stratum 1 kilometer thick, with an average temperature of -35 C at 500 mb, if all the CO2 were replaced with equivalent forcing strictly from RH? I am getting low-RH stratums (10-20%) with lapse rates of 4 or 5 C/km at or about 500 mb. -g/Cp = lapse rate (the pretty equation) makes the apparent Cp = 1.9, which is perhaps double what Cp should be. If I am not clear with this question, let me know!
Mark says
Wayne, what will an answer tell us?
I.e. what does the person answering get out of it.
Wayne Davidson says
Mark, I don’t know; I am exploring the invisible sky. I really would like to have an answer though. I am seeing some weird stuff: apparent Cp’s varying a great deal in very low RH stratums. Very interesting stuff…
Mark says
Wayne, if you want to explore the invisible sky, work it out yourself.
Then, if you’re skeptical of things, see if you can work out whether you’re wrong and how that could happen.
Hank Roberts says
Wayne’s got a perspective worth paying attention to, Mark. At least look up prior postings here before telling people to go away. The Contributors make the decision about what appears here, not us readers.
Wayne Davidson says
Thanks Hank! Wow….. Mark, how can I be wrong when I have made no conclusions? You seem to lack curiosity, or jump to criticize before there is anything to criticize! Exploration is never-ending and abides by no doctrine; it is the fuel of science.
Mark says
384, Wayne, read. comprehend.
Why not answer it yourself and the bonus is you can then check where you’ve got it wrong.
If you don’t answer, your question wasn’t associated with being wrong. You can only be wrong in your answers, if you give one.
Sheesh. English not your first language???
Mark says
Hank, the contributors don’t make the decision about what I think to say to wayne.
This isn’t a dictatorship.
Why don’t you answer it, Hank?
wayne davidson says
Mark, you absolutely make no sense whatsoever! I am asking for a model calculation of equivalent forcing given in RH in place of CO2 for a 1 km stratum at about 500 mb… It’s a question for those who have a model which can calculate this… Capish? You are getting deeper and deeper into nowhere…..
Erkan says
Rod,
I don’t think Royer or anyone else ignores these uncertainties.
:)
Mark says
Wayne, likewise.
You said in 386 “how can I be wrong when I have made no conclusions?”
Which is an odd question since the post you replied to said:
“Why not answer it yourself and the bonus is you can then check where you’ve got it wrong”
Which precludes your question since it is predicated on you answering it yourself.
Hence you make no sense.
Jim Eager says
Mark, this is a public comments section on a blog dedicated to discussing and educating the lay public about the science of global warming/climate change.
Your constant abrasiveness and your quickness to dismiss long-time regular participants as trolls or slackers is more than a little annoying, and counterproductive to the stated aims of RC.
wayne davidson says
Mark , there is absolutely no way you can argue yourself out of the hole you dug. Enough, I patiently await the answer to the question I’ve asked, thanks for bringing it up again….