Does a global temperature exist? This is the question asked in a recently published article in the Journal of Non-Equilibrium Thermodynamics by Christopher Essex, Ross McKitrick, and Bjarne Andresen. The paper argues that the global mean temperature is not physical, and that there are many other ways of computing a mean, which may give different trends.
The common arithmetic mean is just an estimate that provides a measure of the centre value of a batch of measurements (the centre of a cloud of data points), and can be written more formally as the integral ∫ x f(x) dx. The whole paper is irrelevant in the context of climate change because it misses a very central point: CO2 affects all surface temperatures on Earth, and in order to improve the signal-to-noise ratio, an ordinary arithmetic mean enhances the common signal in all the measurements and suppresses the internal variations which are spatially incoherent (i.e. not caused by CO2 or other external forcings). Thus the choice may not need a physical justification, but it is part of a scientific test which enables us to get a clearer ‘yes’ or ‘no’. One could choose to look at the global mean sea level instead, which does have a physical meaning because it represents an estimate of the volume of water in the oceans, but the choice is not crucial as long as the indicator used really responds to the conditions under investigation. And the global mean temperature is indeed a function of the temperature over the whole planetary surface.
Is this paper a joke then? It is long-standing knowledge that the temperature measurements made in meteorological and climatological studies are supposed to be representative of a certain volume of air, i.e. an arithmetic mean over that volume. Essex et al. argue that this is not really physical, but surely the temperature measurements do have clear practical implications? Temperature itself can be inferred directly from several physical laws, such as the ideal gas law, the first law of thermodynamics and the Stefan-Boltzmann law, so it’s not the temperature itself which is ‘unphysical’. Even though the final temperature of two bodies in contact may not be the arithmetic mean, it will still be a weighted arithmetic mean of the two initial temperatures if no heat is lost to the surroundings. Besides, grid boxes in numerical weather models often have a minimum spatial scale of 10–20 km, and the temperature may be regarded as a mean for this scale. Numerical weather models usually provide useful forecasts.
And what distinguishes the mean temperature representing a small volume from one representing a larger volume? Or do Essex et al. think the limit lies at greater scales, for instance at the synoptic spatial scale (~1000 km)? The funny thing then is that the concept of a regional mean temperature would also not be meaningful according to Essex et al. And one may also wonder whether computing a mean temperature over time is meaningful, such as a summer-mean or winter-mean temperature.
Essex et al. suggest that there are many different ways of computing the mean, and that it is difficult to know which makes more sense. But when they compute the geometric mean, they should not forget that the temperature must then be expressed in kelvin (the absolute temperature) as opposed to degrees Celsius. One argument used by Essex et al. is that the temperatures are not in equilibrium. Strictly speaking, this applies to most cases, but in general these physical laws still give reasonable results because the temperatures are close to equilibrium in meteorology and climatology. The paper doesn’t bring any new revelations – I thought that these aspects were already well known.
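To see why the choice of scale matters for the geometric mean, here is a minimal Python sketch; the station values are made up purely for illustration. The arithmetic mean is merely offset by the unit conversion, while the geometric mean depends on where the zero of the scale sits.

```python
# Minimal sketch (hypothetical values): the arithmetic mean is the same in
# Celsius and kelvin up to the 273.15 offset, but the geometric mean is not.
import numpy as np

temps_c = np.array([-40.0, 0.0, 15.0, 30.0])  # made-up temperatures, deg C
temps_k = temps_c + 273.15                    # the same temperatures in kelvin

print(np.mean(temps_c))            # 1.25 deg C
print(np.mean(temps_k) - 273.15)   # 1.25 deg C -- identical

# The geometric mean requires positive values, hence the absolute scale;
# converted back to Celsius it differs from the arithmetic mean (GM <= AM).
gm_k = np.exp(np.mean(np.log(temps_k)))
print(gm_k - 273.15)               # about -0.05 deg C, not 1.25
```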
Update: Rabett Run has a very detailed set of posts pulling apart this paper more thoroughly.
Dick Veldkamp says
#150 Heat from the interior (Chuck)
Back of the envelope calculation: some googling gives me heat conductivity figures for rock lambda = 1-50 W/m/K. Going down, temperature rises by about 0.01 K/m, hence heat flow from the interior is p = 0.01-0.5 W/m^2. Your p = 0.09 W/m^2 fits nicely in between and gives (ca.): P = 4 Pi R^2 * p = 4.6e13 W as you say (R = 6.4e6 m).
Insolation is 1400 W/m^2 at the top of the atmosphere, or 350 W/m^2 if we average over the entire Earth surface area (including the night side). Of course not all of this reaches the surface, but in any case heat from the interior does not seem to make much difference for the climate.
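A quick Python version of this estimate, using only the figures quoted above:

```python
# Back-of-the-envelope check of the numbers in this comment.
import math

R = 6.4e6                     # Earth radius, m
area = 4 * math.pi * R**2     # surface area, ~5.1e14 m^2

p_geo = 0.09                  # geothermal heat flux, W/m^2
P_geo = p_geo * area          # total geothermal power, ~4.6e13 W

S_avg = 350.0                 # insolation averaged over the sphere, W/m^2
print(P_geo, p_geo / S_avg)   # flux ratio ~2.6e-4: a tiny fraction of sunlight
```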
Lynn Vincentnathan says
RE #116, “As the evidence for global warming gets stronger, Republicans are actually getting more skeptical.”
I’m wondering if perhaps the number of Republicans is decreasing, esp. as those who believe AGW is real leave the GOP — and perhaps those remaining are falling more into the party line.
This is bizarre. A whole party becoming increasingly anti-science, anti-reality, and anti-common sense. (BTW, I used to be a Republican, but guess I got out in plenty of time to avoid the “great brain wash.”)
This to me might indicate we are going through a period of cultural distortion (when the culture gets really out of whack with reality & real problems, and the strategies become more and more counterproductive, if not downright harmful). If so, we might be on the brink of a revitalization movement, a rather sudden change to a more satisfying culture more in line with reality & solutions that work.
In that case we might soon be seeing a lot of Republicans leaving their sinking ship — if the sensible ones remaining can’t take charge and restructure the GOP, pushing the fanatics to the fringe.
But another thought is that the big fissure is not between right and left, but between those adamant about staying plugged into the matrix (the illusion of an endless consumer utopia), and those who have freed themselves and know we must take responsibility and work to solve this problem of AGW. Both the right and the left want endless prosperity without price (externalities), and perhaps the poor nations want their chance to live like the rich nations before the party of profligate extravagance is finally over.
So that leaves a small number of people to start this revitalization movement — but that’s all it really takes. Since the effects of AGW are not going away soon, eventually others may follow & the movement will gain momentum. I hope in time to avert the worst.
Chuck Booth says
RE #151
Dick,
Those are not my calculations – that was Barton Paul Levenson. My problem with his calculation is that it is based on the Stefan-Boltzmann equation. I’m no physicist, but I’m pretty sure that equation deals with the emission of electromagnetic radiation, and has nothing to do with nuclear radiation which is the source of heat in the earth’s core. I’m sure one of the physical scientists will clarify this point.
David B. Benson says
Geothermal heat flux is on the order of 0.01 W/m^2 according to Barry Saltzman, Dynamical Paleoclimatology, page 12.
Hank Roberts says
Rafael (147), try here for what you asked for, the magical “double CO2 without changing anything else” — perhaps only possible for a string theorist, even in theory.
http://motls.blogspot.com/2006/05/climate-sensitivity-and-editorial.html
Look here for the real world:
http://fermiparadox.wordpress.com/2007/03/30/greenhouse-gas-effect-consistent-over-420-million-years/
Ray Ladbury says
Re 153. Chuck, the Stefan-Boltzmann equation has to do with the electromagnetic radiation emitted by a black body at temperature T. Barton is just saying that if you took all the energy from the interior and that were the only energy source, Earth would look like a black body at 32 K. That’s probably valid to first order.
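For concreteness, a small sketch of that inversion; the 0.09 W/m^2 flux is the figure quoted earlier in this thread, and 0.06 W/m^2 is simply the value that reproduces the 32 K number:

```python
# Invert the Stefan-Boltzmann law F = sigma * T^4 for the blackbody
# temperature that a given flux can sustain.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_temperature(flux):
    """Blackbody temperature (K) radiating the given flux (W/m^2)."""
    return (flux / SIGMA) ** 0.25

print(blackbody_temperature(0.09))  # ~35.5 K for the 0.09 W/m^2 flux above
print(blackbody_temperature(0.06))  # ~32.1 K, close to the quoted figure
```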
My reference for the heat coming from the core is
http://www.sg.geophys.ethz.ch/geodynamics/klaus/WS_99_00/Earth/Earth's%20Thermal%20Regimes.htm
About half the energy from the interior is still thought to be due to radioactivity–the rest due to cooling of the interior.
and I meant to say it’s about 1/10%, not 10%.
Ray Ladbury says
How appropriate. Interesting article in Science
http://www.sciencemag.org/cgi/content/abstract/315/5820/1813
Seismostratigraphy and Thermal Structure of Earth’s Core-Mantle Boundary Region
R. D. van der Hilst, M. V. de Hoop, P. Wang, S.-H. Shim, P. Ma, L. Tenorio
We used three-dimensional inverse scattering of core-reflected shear waves for large-scale, high-resolution exploration of Earth’s deep interior (D”) and detected multiple, piecewise continuous interfaces in the lowermost layer (D”) beneath Central and North America. With thermodynamic properties of phase transitions in mantle silicates, we interpret the images and estimate in situ temperatures. A widespread wave-speed increase at 150 to 300 kilometers above the core-mantle boundary is consistent with a transition from perovskite to postperovskite. Internal D” stratification may be due to multiple phase-boundary crossings, and a deep wave-speed reduction may mark the base of a postperovskite lens about 2300 kilometers wide and 250 kilometers thick. The core-mantle boundary temperature is estimated at 3950 ± 200 kelvin. Beneath Central America, a site of deep subduction, the D” is relatively cold (ΔT = 700 ± 100 kelvin). Accounting for a factor-of-two uncertainty in thermal conductivity, core heat flux is 80 to 160 milliwatts per square meter (mW m^-2) into the coldest D” region and 35 to 70 mW m^-2 away from it. Combined with estimates from the central Pacific, this suggests a global average of 50 to 100 mW m^-2 and a total heat loss of 7.5 to 15 terawatts.
Even so, this amount of heat can only decrease with time, and so cannot be responsible for warming.
Dick Veldkamp says
Re #157 Heat flux (ray)
The Science data seem to be inconsistent. 50 mW/m2 gives 25 TW (2.5e13 W), and 100 mW/m2 gives 51 TW (5.1e13 W).
Am I misunderstanding something here?
Hank Roberts says
Possibly they’re talking about the heat flux between core and mantle rather than from surface to space?
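That would check out numerically. A small sketch, taking the core-mantle boundary radius to be roughly 3480 km (a standard seismological value, assumed here):

```python
# If the 50-100 mW/m^2 flux applies at the core-mantle boundary rather than
# at the surface, the totals match the abstract's 7.5-15 TW.
import math

R_CMB = 3.48e6                      # core-mantle boundary radius, m (assumed)
area_cmb = 4 * math.pi * R_CMB**2   # ~1.5e14 m^2

for flux in (0.050, 0.100):         # W/m^2
    print(f"{flux * 1e3:.0f} mW/m^2 -> {flux * area_cmb / 1e12:.1f} TW")
# 50 mW/m^2 -> 7.6 TW ; 100 mW/m^2 -> 15.2 TW
```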
Barton Paul Levenson says
[[The Science data seem to be inconsistent. 50 mW/m2 gives 25 TW (2.5e13 W), and 100 mW/m2 gives 51 TW (5.1e13 W).
Am I misunderstanding something here? ]]
No, you’ve bracketed the data for the geothermal flux neatly. Recent estimates all fall in the range 41-46 TW, the upper end yielding the 0.090 W/m2 figure I used earlier. Just divide the total power by the Earth’s surface area (about 5.1007 x 10^14 m2). Solar influx is many times that level, both total and per square meter.
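The conversion in code, for reference:

```python
# Converting total geothermal power to a mean surface flux, as described above.
EARTH_AREA = 5.1007e14                    # Earth's surface area, m^2

for total_tw in (41.0, 46.0):             # recent estimates, TW
    flux = total_tw * 1e12 / EARTH_AREA   # mean flux, W/m^2
    print(f"{total_tw:.0f} TW -> {flux:.3f} W/m^2")
# 41 TW -> 0.080 W/m^2 ; 46 TW -> 0.090 W/m^2
```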
John Mashey says
[Note: none of this supports the McKitrick article; it is just a note on Geometric Means and the lognormal distribution.]
One can get really hung up on choice of means – in computer performance analysis we’ve fought about the right choice for decades.
For many data sets, a Mean alone is fairly meaningless, if unaccompanied (at least) by measures of dispersion and skewness, because otherwise one doesn’t have much feel for the shape of the underlying *distribution*. It’s always nice to see entire distributions, but summarization is needed for sanity.
So, if one is using the usual Arithmetic Mean, one wants:
– Standard Deviation (or variance, i.e., 2nd moment) as measure of dispersion.
Smaller is better.
– Skewness (3rd moment)
And you want this to be not too far from zero (~symmetric distribution), because if you have a seriously-skewed distribution, the Mean gets further and further from the Median, and is more influenced by outliers on one side or the other.
Then, given skew ~0, one might hope that the data is approximated by a normal (Gaussian) distribution (given all the wonderful properties, including being able to use Mean & Std deviation as a really good summary): one would use a normality test to check, and probably compute the Kurtosis and confidence intervals. Normal distributions arise as sums of random collections of small additive factors.
Suppose one finds that the distribution of x is far from normal. A common method is to transform the data onto a different scale, say y = 1/x, y = x^2, or y = ln(x), analyze the distribution of y to see if it is normal, compute the various metrics, and then back-transform them onto the original x scale. Often, the transformation may give insight into underlying processes.
The transform ln(x) is sometimes useful, and it turns out that the Geometric Mean is just an equivalent way of computing the correct mean:
Geometric Mean (GM) = Product(x) ^ (1/n) = exp((1/n) Sum(ln(x)))
Geometric (multiplicative) Std Dev (Sigma) = exp(stddev(ln(x)))
If the logarithms ln(x) are normally distributed, the original x is called *lognormal* or log-normal, and this is a well-known distribution of use in many areas of science. It happens when the original distribution is created by combinations of small multiplicative effects (which are of course additive on a log scale). Just as about 68% of normal data is in the range [mean-stddev, mean+stddev], about 68% of lognormal data is in the range [GM/sigma, GM*sigma]. Lognormals are right-skewed, so when one sees right-skewed data, it’s at least worth trying, and it copes better with larger standard deviations. On the other hand, normal and lognormal distributions with small dispersions look pretty similar, so one might as well use normal. Given this math, one of course cannot have negative data for lognormals, so that for instance, one needs ratios, rather than differences (which is where Kelvin might come in, and where we’ve used the Geometric Mean for years in the SPEC Benchmarking effort).
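A short sketch of those formulas on synthetic data (the parameters are arbitrary):

```python
# Geometric mean and multiplicative standard deviation via logarithms,
# demonstrated on synthetic lognormal data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=0.4, size=10_000)

log_x = np.log(x)
gm = np.exp(log_x.mean())    # geometric mean, ~e^1.0 ~ 2.72
gsd = np.exp(log_x.std())    # multiplicative sigma, ~e^0.4 ~ 1.49

# About 68% of the data should fall within [GM/sigma, GM*sigma].
frac = np.mean((x >= gm / gsd) & (x <= gm * gsd))
print(gm, gsd, frac)
```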
meteora.ucsd.edu/~pierce/docs/Pierce_2004_CiSE.pdf talks about this general topic in climatology.
Sometimes one runs into datasets that have several modes, and turn out to be aggregations of two normal distributions, and good insight can be gained if there is a simple rule to tell the two apart.
Anyway, it is wise to be careful with *any* Mean unsupported by characterization of the underlying distribution. If one can say: “The Mean is X, the std-deviation is Y, and the data is normal”, one need say no more.
Ray Ladbury says
Re 161: John, I agree that the distribution of the data determines the appropriate statistics. The mean is just a measure of the central tendency of the distribution and is equivalent to the first moment of the distribution. Standard deviation or variance look at distribution width (variance is a centralized 2nd moment). Skew looks at asymmetry and is related to the 3rd moment. Kurtosis looks at how peaked the distribution is and is related to the 4th moment. And so on.
I wonder if anyone has looked at the standard deviation of global temperatures. Wouldn’t a greenhouse mechanism be expected on average to narrow the standard deviation? Or does the added variability in a more energetic climate overwhelm this tendency? Has anybody looked at higher moments?
tamino says
Re: 161, 162
It’s well to remember that temperature has physical significance in terms of a conserved quantity: energy. Therefore if one mass of air increases temperature by 1 deg.C, while another equal-size mass of air decreases temperature by the same 1 deg.C, we can say that the net change in total thermal energy of the combined system is zero (at least as a first approximation).
I’m not claiming that global average temperature is a measure of atmospheric thermal energy; there are too many other variables at work, and too many unmeasured quantities, to make that statement. But can one not claim that it is a lowest-order approximation to the thermal energy content of the troposphere? As such, the only average which is appropriate is the arithmetic average, regardless of the distribution of temperatures.
Lynn Vincentnathan says
RE #162, I’ve thrown it out here before that perhaps the standard deviation might be greater in a warming world, but didn’t get any response. I’m sort of imagining weather around the world thrashing around kicking & screaming as it gets dragged into a hotter scenario from the GHGs; I’m sort of imagining a natural rubber-band type pull away from the heating that we are forcing on the earth, since we are already at a natural thermal maximum (but that might be anthropomorphizing too much). Then if that rubber band breaks (because nature also starts pulling in our direction of heating the world, by releasing, rather than absorbing, GHGs, & reducing albedo), we’re done for.
Anyway, I’m thinking of the extreme off-season cold snap much of the U.S. underwent in late Jan & Feb as a weird GW effect. It was a short, but very cold winter.
Also, if we do get greater and more frequent storms & hurricanes, then there will be more times when heat energy gets translated into kinetic energy. So you go from very hot & sultry to a quick cool-down during & after the storm.
OTOH, I do know that GW entails a narrowing of the difference between daytime and nighttime temps (due to the GHG “blanket” effect at night, I think). That’s apparently why there were so many heat deaths in Europe in 2003, because people could not recuperate during the night from the heat stress during the day.
This is all just speculation from a non-scientist (or perhaps nonsense).
Ray Ladbury says
Tamino, I’m not sure that is quite right. Say we have a mass of air at 0 degrees C over a mass of ice. Start adding energy to the air–the temperature will stay the same, but ice will melt. Now say we have a mass of wet air moving over a mountain range. The air cools adiabatically as it rises, and eventually the water drops out as snow or rain. Now the air moves downslope, heating adiabatically, except the adiabatic heating rate for the dry air is higher than the adiabatic cooling rate for wet air. Net result: A high-temperature gravity-driven wind like a chinook. Have we added energy to the air?
So maybe to first order, but I think you will systematically undercount the added energy if you do so. After all, one of the favorite arguments of the denialists is that 0.6 degrees is insignificant. Yet, one need only look in polar regions to put the lie to that misunderstanding.
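To put rough numbers on the chinook example: the lapse rates below are textbook values (a representative moist rate is assumed, and the air is taken to be saturated from the base up); the base temperature and ridge height are made up for illustration.

```python
# Rough numbers for the chinook example: air cools at the moist adiabatic
# rate going up (rain removes the water), warms at the dry rate coming down.
DRY_LAPSE = 9.8     # dry adiabatic lapse rate, K/km
MOIST_LAPSE = 6.0   # representative moist rate, K/km (varies with temperature)

T_base = 15.0       # windward base temperature, deg C (assumed)
ridge_km = 3.0      # ridge height, km (assumed)

T_ridge = T_base - MOIST_LAPSE * ridge_km   # -3 deg C at the crest
T_lee = T_ridge + DRY_LAPSE * ridge_km      # ~26.4 deg C at the lee base
print(T_lee - T_base)                       # ~11.4 K warmer, with no heat added
```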
John Mashey says
Re: #163, #164 “As such, the only average which is appropriate is the arithmetic average, regardless of the distribution of temperatures.”
Having originally trained as a physicist, that makes sense to me, but from a statistics sense, I still want to know the shape of the distribution. For some problems, one *expects* to find lognormals, and thus use Geometric Means. [Computer benchmark sets expressed as sets of performance ratios work that way.] Sometimes datasets happen to be fit better by lognormals, for no obvious reason. I don’t know of any reason why temperature data should be lognormal, but people mentioned Geometric Means, prompting my post.
I simply never like averages alone without some sense of the shape of the distribution [skew, dispersion], because I remember the old adage about someone drowning in a river whose depth averaged 6 inches. In any case, if somebody tells me data is normally distributed with a modest std. deviation, from then on, I’ll happily just use the average.
John Mashey says
Re: does a global temperature exist
The Wall Street Journal’s editorial page continued as usual, with a Holman Jenkins Jr opinion piece called “Climate of Opinion”, which includes:
“And that’s without considering whether a planetary ‘average’ temperature is even a meaningful datapoint (some have likened it to averaging all the phone numbers in the phone book.)” Hmmm, I wonder where that came from?
[A: the Essex/McKitrick article that started this]
Hank Roberts says
> The air cools adiabatically as it rises, and eventually
> the water drops out as snow or rain.
Ray, the “heat of condensation” at that point goes into the surrounding molecules — the _gas_ picks up heat from the molecules of water as they condense; the water molecules have lost energy so they can stick together in droplets. On the far side of the ridge, the now much drier air warms up as it descends. There’s no fog in it, so the heat stays in the air rather than going into turning mist back into vapour.
This is also what makes a towering cumulus cloud go boiling up into colder air once it starts to form, if the air above the condensation height is colder than the air in the rising thermal column as the water condenses out.
One example: pyrocumulus (lots of moisture coming off a forest fire; once it condenses, it becomes a cumulus cloud and gets tall fast)
http://www.atmos.washington.edu/2003Q3/101/notes/Pyrocumulus.jpg
On a lower or cooler ridge, where you don’t lose most of the humidity falling out as rain, the condensing cloud/fog rises over the top and falls down the far side, and is available to absorb heat again as the droplets of water evaporate into single molecules again, so the air doesn’t warm as much.
Ray Ladbury says
John, Correct me if I’m wrong, but the distribution we’re interested in here is the distribution of temperatures at every point on Earth’s surface at a given moment in time. Since we are interested in whether the center of the distribution is changing, it makes sense to look at the arithmetic mean. I mean, after all, since we’re looking at temperature, the distribution will be bounded above and below, right, so really, it’s neither lognormal nor normal. Now, I believe that the arithmetic mean will significantly underestimate the energy going into the system–it doesn’t take into account increased evaporation, ice melting, etc. However, other estimators will probably do an even worse job. So, as an intuitive measure of increasing energy density, it may be as good as we’ll get.
Ray Ladbury says
Re 168: Hank, my point is that that temperature doesn’t reflect energy absorbed or released in a phase change, or heating of the ocean, etc.–thus, it tends to underestimate warming in a warming climate or cooling in a cooling climate. You get a damped picture of what’s going on. Thus, I think it can be misleading to look at temperature alone. I don’t, however, liken it to averaging the numbers in a phone book. If you see a change in temperature, it is a real indication of change in the climate.
John Mashey says
Re: 169
“distribution we’re interested in here is the distribution of temperatures at every point on Earth’s surface at a given moment in time.”
Well, that’s one of the distributions that might be useful, but I think, when looking at the yearly plots, people have averaged across:
- geography (doing their best to derive homogeneity from heterogeneous sources)
– across the year [unless doing season-season comparison]
– across time-of-day [unless doing daytime/nighttime studies]
Of course, weather stations are where they are, and many daily temperatures are not real averages across 24 hours, but 0.5 * (min + max).
Thought experiment:
Consider the *ideal* distribution that we’d love to have, if we wanted to compute exactly one temperature for a year/month/day/minute
– Pick a random minute from the year/month/day/minute
– on the Earth’s surface, pick a random location [not just at stations]
– Measure the temperature then and there.
If one does it N times, as N gets larger, the resulting datasets would better and better approximate the continuous probability distribution function for the chosen time-period’s data.
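A toy Monte Carlo version of this thought experiment; the temperature field below is invented purely to make the sampling concrete, and is not a climate model.

```python
# Toy version of the thought experiment: N random (minute, place) samples
# approximate the planet's temperature distribution for the chosen period.
import numpy as np

rng = np.random.default_rng(42)

def sample_temperatures(n):
    # Uniform sampling over the sphere: longitude uniform, sin(latitude) uniform.
    lat = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, n)))
    minute = rng.uniform(0, 365 * 24 * 60, n)   # random minute of the year
    # Invented field: warm equator, cold poles, opposite seasons by hemisphere,
    # plus weather noise. Purely illustrative.
    season = 10.0 * np.sin(2 * np.pi * minute / (365 * 24 * 60))
    return (30.0 * np.cos(np.radians(lat)) - 15.0
            + season * np.sign(lat)
            + rng.normal(0.0, 5.0, n))

temps = sample_temperatures(5000)
print(temps.mean(), temps.std())   # with N large, these characterize the pdf
```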
NCDC has a nice chart on recorded extremes:
http://www.ncdc.noaa.gov/oa/climate/globalextremes.html
which range from -129F to 136F [i.e., the outliers are not pleasant weather!]
The existence of bounds on temperature ranges doesn’t stop them from being well fit by normal, lognormal, or other distributions: the only way to know is to actually look at the data:
– quick check: do histograms and eyeball the data
– compute Skew and excess Kurtosis, and if those are close to 0:
– run Normality tests, as for example:
http://www.cimis.water.ca.gov/cimis/resourceArticleOthersQcStatControl.jsp
http://asae.frymulti.com/abstract.asp?aid=3512&t=2
(I haven’t found one simple reference, but people seem to use such techniques often on various climate datasets: the Google Scholar query “climate temperature normality test” gives 15K hits, and in general the words “normal” and “distribution” have multiple meanings, so I haven’t yet located the sort of reference I’m looking for, although I’ve tried many queries.)
If the data sort of looks normal, but is right-skewed, perhaps it can be better modeled as lognormal, in which case one computes the logs, and then goes back through the above. Of course, it might be something else.
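One possible coding of that checklist, using scipy’s D’Agostino-Pearson normality test; the data here is a synthetic stand-in:

```python
# Skew, excess kurtosis, and a normality test, on the raw data and on its logs.
import numpy as np
from scipy import stats

x = np.random.default_rng(1).lognormal(3.0, 0.3, 2000)   # stand-in data

for label, data in (("raw", x), ("log", np.log(x))):
    skew = stats.skew(data)
    ex_kurt = stats.kurtosis(data)      # Fisher convention: excess kurtosis
    stat, p = stats.normaltest(data)    # D'Agostino-Pearson, H0 = normal
    print(f"{label}: skew={skew:+.2f} excess_kurtosis={ex_kurt:+.2f} p={p:.3g}")
# The raw data is right-skewed and fails the test; its logs pass,
# i.e. x is well modeled as lognormal.
```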
I have no idea whether the distribution, say for 5000 samples as above, for a year, is normal, lognormal, or some left-skewed distribution. Can anybody point me at a paper or website that has such? I’ve looked at USHCN, various NOAA websites, etc.
Ideal would be histograms, with the key statistical metrics for datasets that approximate the thought experiment above for sometime period.
I would be perfectly happy if the resulting distributions were ~normal, or at least symmetric, as the arithmetic mean (which does seem to fit the physics) would also be in the intuitive “center” of the distribution.
Lynn Vincentnathan says
RE #167, well that would be something if the average of all the phone numbers in the book were going up over the years, over the decades. You’d think something might be causing it…..I.e., it probably wouldn’t just be due to random fluctuations.
tamino says
Just a note: the normal distribution has 0 skewness, but not 0 kurtosis; for the normal distribution the kurtosis is 3.
Ray Ladbury says
John, I agree, a Normal or lognormal will do fine in the center–not in the tails. A bounded distribution looks Weibull in the tail (Extreme Value Type III, I believe). Now I agree that we probably don’t care too much about the extremes–though you might expect to see trends there, too. The reason the lognormal or normal look reasonable is because of the CLT. BTW, the common Weibull is the only distribution I know of that is skewed left (for shape parameters above a certain value). Moreover, if the probability in the tails is sufficiently small, the skews won’t matter–guess that would be ~normal for our purposes. However, even for a skewed distribution, the arithmetic mean approximately tracks central tendency–that is, if the center of the distribution shifts, so will the mean, unless you get some really freaky cooperation from the other moments.
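That Weibull remark can be checked numerically; the sign of the skewness flips near shape ~3.6:

```python
# Skewness of the Weibull distribution as a function of its shape parameter:
# positive (right-skewed) for small shapes, negative above roughly 3.6.
from scipy import stats

for shape in (1.0, 2.0, 3.6, 5.0):
    skew = float(stats.weibull_min.stats(shape, moments="s"))
    print(f"shape={shape}: skewness={skew:+.3f}")
# shape=1.0: +2.000 ; shape=2.0: +0.631 ; shape=3.6: ~0 ; shape=5.0: -0.254
```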
tamino says
Just for laughs, I took the daily mean temperature for Geneva, Switzerland (from the European Climate Assessment and Dataset Project) and subtracted the seasonal pattern to define temperature anomaly. Then I did a histogram of the anomalies, and superimposed a normal distribution with the same mean (0, by definition) and standard deviation (3.0527 deg.C). I’ve posted the graph on my blog, and you can go straight to the graph here:
http://tamino.files.wordpress.com/2007/04/geneva2.jpg
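A hedged reconstruction of that procedure; the inputs ‘dates’ and ‘temps’ are assumed to be a daily mean temperature series (e.g. read from an ECA&D station file), and the anomaly definition follows the description above:

```python
# Sketch: subtract a day-of-year climatology to get anomalies, then overlay
# a normal curve with mean 0 and the same standard deviation.
import numpy as np
import matplotlib.pyplot as plt

def plot_anomaly_histogram(dates, temps):
    doy = np.array([d.timetuple().tm_yday for d in dates])
    clim = {d: temps[doy == d].mean() for d in np.unique(doy)}
    anom = temps - np.array([clim[d] for d in doy])   # mean ~0 by construction

    sd = anom.std()
    grid = np.linspace(-4 * sd, 4 * sd, 200)
    gauss = np.exp(-grid**2 / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi))

    plt.hist(anom, bins=50, density=True, alpha=0.5, label="anomalies")
    plt.plot(grid, gauss, label="normal, same sd")
    plt.xlabel("temperature anomaly (deg C)")
    plt.legend()
    plt.show()
```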
John Mashey says
re: #173: Tamino
Sorry I was a little sloppy earlier, but in #171, I was precise, and said if the “excess Kurtosis” were 0… Excess kurtosis = kurtosis – 3.
See:
http://en.wikipedia.org/wiki/Kurtosis OR
http://www.itl.nist.gov/div898/handbook/eda/section3/eda35b.htm
as far as I can tell, people are sometimes imprecise as to which they’re using.
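The two conventions side by side:

```python
# scipy's default is the Fisher convention (excess kurtosis, 0 for a normal);
# fisher=False gives the Pearson convention (3 for a normal).
import numpy as np
from scipy import stats

z = np.random.default_rng(2).normal(size=100_000)
print(stats.kurtosis(z, fisher=True))    # ~0: excess kurtosis
print(stats.kurtosis(z, fisher=False))   # ~3: plain kurtosis
```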
re: #174, Ray: yes, we agree, and in particular, with the numbers of data points usually available in climatology, one doesn’t get weird results from a few outliers (unlike computer benchmarking, which often has small datasets from which people draw more conclusions than they ought) and the arithmetic mean is likely to be fine. In any case, I’d guess the real issue is the motion of some mean, calculated in a consistent way. [I’ve never used Weibull, not having done reliability studies.]
As always, one hopes to end up with something ~normal [as Tamino’s #175 certainly looks: thanks!], in which case Life Is Good, but I still think:
a) One should never just assume normality, one should look at the data and see, and if it is, it’s nice to be able to say: data is normal, and be done with it.
b) One may gain insight from noticing outliers or multi-modal data.
c) We’re probably deeper into the stat than most people want to go, but the great John Tukey’s pithy observations seem relevant and memorable:
“Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made more precise.”
“The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.”
http://en.wikipedia.org/wiki/John_Tukey
[I overlapped with Tukey 10 years at Bell Labs. He was associate executive director of a division that contained Joseph Kruskal, John Chambers (S), and Paul Tukey, among other star statisticians. Bell Labs required that when someone wanted to publish a paper outside, it had to be first reviewed by several departments outside their line of management. Of course, anything in which statistics was important went to these folks, and I understand that their comments could be acerbic, a not-uncommon effect amongst statisticians. That meant that other people learned to get help from the statisticians earlier, not later, and also to treat data with suspicion, by default.]
Eric says
While I only understood the introduction and the conclusion, I agree with the bottom line of what is being said. But that does not change the fact that a statistical measurement can be useful in seeing patterns, as many here have stated.
So did this paper change anyone’s mind one way or the other? :)
I must say that when I saw the title my first thought was, “Here we go again.” I have to constantly re-ground myself from what “I already know”, and from what is being added to the pool of knowledge by new data.
I am a bad scientist.
Not that this paper added anything really useful to my layman’s understanding.
peter says
How can the IPCC use temperature data with three significant figures? How can they be confident that the temperature at any point on the earth in 1000 A.D. was 32.1 degrees C rather than 32.2 degrees C? If the data were reliable to two significant figures the profile would be a flat line with a possible step change of 1 degree C around 2000. Am I wrong?
Lynn Vincentnathan says
Okay, re the need for more than the mean (we’re all agreed we need the mean, elsewise how could we calc the s.d.)…
I read today that the AR4 IPCC said, with regard to N. America, “In the short term, crop yields may increase by 5 to 20 percent from a longer growing season, but will plummet if temperatures rise by 7.2 F.”
I’m not even sure there will be such increases. I’m thinking there may be these wild fluctuations in weather so we get some fluke late spring and early fall killing frosts, and it only takes one of these to do in a crop. So while the AVERAGE may be a warmer & longer summer, the fluky outliers may increase (I have no idea, but this could be a possibility).
Not to mention those killing heat spikes (outliers in the hot direction) that kill much of the crop mid-summer.
So from that POV, we really do need much more than the mean.
James says
Nor is temperature the only factor in crop yields, or even the most important one. There’ve been several news articles in the last couple of days on a study predicting that the southwestern US and northern Mexico will be much drier in the future as a result of AGW. Since the area’s mostly desert now, with crops being grown only with irrigation, that doesn’t bode well for future productivity.
Looking back at the warming at the end of the last Ice Age, we see the whole western half of the US becoming much drier. It seems reasonable to think that further warming would only intensify this, which would seem to indicate that the northwest (east of the Cascades) and Great Plains might become too dry for agriculture as well. So perhaps we’d better start hoping that those areas which still retain enough moisture do become more productive, else we all may be getting a bit hungry.
Tom Boucher says
Re #170: “If you see a change in temperature, it is a real indication of change in the climate.”
Perhaps…in some subset of the measurements, or perhaps overall.
Or… perhaps not… it is known that the summing of even spatially independent but temporally correlated series can produce spurious breaks or trends (long memory) in the averaged series.
I find the Essex paper interesting, and agree with their conclusions for the reasons above. I would not rely upon globally averaged temperature as an indicator of climate change.
Barton Paul Levenson says
[[I find the Essex paper interesting, and agree with their conclusions for the reasons above. I would not rely upon globally averaged temperature as an indicator of climate change. ]]
You would seem to be wrong on both counts. The average temperature of the atmosphere is a measure of how much energy the atmosphere is holding, via the old H = m cp T relation, where H is heat content (in joules in SI units), m is mass (kg), cp is constant-pressure specific heat or heat capacity (J kg-1 K-1), and T is temperature (K). I would guess that the more energy in the climate system, the more energetic the weather, on average.
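As an order-of-magnitude illustration of that relation: the atmospheric mass and cp below are standard values, and the 0.6 K is the warming figure mentioned earlier in the thread.

```python
# Order-of-magnitude use of H = m * cp * T for the whole atmosphere.
M_ATM = 5.1e18    # mass of the atmosphere, kg (standard estimate)
CP_AIR = 1004.0   # specific heat of air at constant pressure, J/(kg K)

def heat_content_change(delta_T):
    """Change in atmospheric heat content (J) for a uniform delta_T (K)."""
    return M_ATM * CP_AIR * delta_T

print(heat_content_change(0.6))   # ~3.1e21 J for a 0.6 K warming
```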