Every so often people who are determined to prove a particular point will come up with a new way to demonstrate it. This new methodology can initially seem compelling, but if the conclusion is at odds with other more standard ways of looking at the same question, further investigation can often reveal some hidden dependencies or non-robustness. And so it is with the new graph being cited purporting to show that the models are an “abject” failure.
The figure in question was first revealed in Michaels’ recent testimony to Congress:
The idea is that you calculate the trends in the observations to 2008 starting in 2003, 2002, 2001…. etc, and compare that to the model projections for the same period. Nothing wrong with this in principle. However, while it initially looks like each of the points is bolstering the case that the real world seems to be tracking the lower edge of the model curve, these points are not all independent. For short trends, there is significant impact from the end points, and since each trend ends on the same point (2008), an outlier there can skew all the points significantly. An obvious question then is how does this picture change year by year? or if you use a different data set for the temperatures? or what might it look like in a year’s time? Fortunately, this is not rocket science, and so the answers can be swiftly revealed.
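The trend-against-start-year calculation is simple enough to sketch. Below is a minimal illustration in Python using NumPy; the function name and the anomaly numbers are invented for illustration and are not the actual analysis code or any real dataset:

```python
import numpy as np

def trends_to_endpoint(years, temps, end_year, start_years):
    """OLS trend, in deg C per decade, from each start year through end_year."""
    trends = {}
    for start in start_years:
        mask = (years >= start) & (years <= end_year)
        slope = np.polyfit(years[mask], temps[mask], 1)[0]  # deg C per year
        trends[start] = 10.0 * slope
    return trends

# Hypothetical annual anomalies (deg C), 1995-2008, for illustration only.
years = np.arange(1995, 2009)
rng = np.random.default_rng(0)
temps = 0.02 * (years - 1995) + rng.normal(0.0, 0.08, years.size)

trends = trends_to_endpoint(years, temps, end_year=2008,
                            start_years=range(1995, 2004))
```

Note that every trend in the dictionary shares the same 2008 endpoint, so a single anomalous final year shifts all of them at once, and the shortest trends most of all. That shared endpoint is exactly why the points in the graph are not independent.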
First off, this is what you would have got if you’d done this last year:
which might explain why it never came up before. I’ve plotted both the envelope of all the model runs I’m using and 2 standard deviations from the mean. Michaels appears to be using a slightly different methodology that involves grouping the runs from a single model together before calculating the 95% bounds. Depending on the details that might or might not be appropriate – for instance, averaging the runs and calculating the trends from the ensemble means would incorrectly reduce the size of the envelope, but weighting the contribution of each run to the mean and variance by the number of model runs might be ok.
Of course, even using the latest data (up to the end of 2008), the impression one gets depends very much on the dataset you are using:
More interesting perhaps is what it will likely look like next year once 2009 has run its course. I made two different assumptions – that this year will be the same as last year (2008), or that it will be the same as 2007. These two assumptions bracket the result you get if you simply assume that 2009 will equal the mean of the previous 10 years. Which of these assumptions is most reasonable remains to be seen, but the first few months of 2009 are running significantly warmer than 2008. Nonetheless, it’s easy to see how sensitive the impression being given is to the last point and the dataset used.
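The sensitivity to the assumed 2009 value can be sketched the same way. The anomaly numbers below are invented (chosen only so that the final year is relatively cool, as 2008 was); the point is the direction of the effect, not the magnitudes:

```python
import numpy as np

def trend_per_decade(years, temps):
    """OLS slope in deg C per decade."""
    return 10.0 * np.polyfit(years, temps, 1)[0]

# Invented annual anomalies (deg C) for 1999-2008, with a cool final year.
years = np.arange(1999, 2009)
temps = np.array([0.32, 0.33, 0.48, 0.56, 0.55, 0.49,
                  0.62, 0.54, 0.57, 0.44])

# Three assumptions for 2009: repeat 2008, repeat 2007, or the 10-yr mean.
scenarios = {
    "2009 = 2008":       temps[-1],
    "2009 = 2007":       temps[-2],
    "2009 = 10-yr mean": temps.mean(),
}
extended = {label: trend_per_decade(np.append(years, 2009),
                                    np.append(temps, t))
            for label, t in scenarios.items()}
```

Because the appended point sits at the right-hand end of the regression, a warmer assumed 2009 pulls every trend ending in 2009 upward, and the 10-year-mean assumption lands between the other two, as described above.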
It is thus unlikely this new graph would have seen the light of day had it come up in 2007; and given that next year will likely be warmer than last year, it is not likely to come up again; and since the impression of ‘failure’ relies on you using the HadCRUT3v data, we probably won’t be seeing too many sensitivity studies either.
To summarise, initially compelling pictures whose character depends on a single year’s worth of data, and which only emerge if you use one very specific dataset, are unlikely to be robust or to provide much guidance for future projections. Instead, this methodology tells us a) that 2008 was relatively cool compared to recent years and b) short term trends don’t tell you very much about longer term ones. Both things we knew already.
Next.
Jim Bouldin says
Jason:
“Off the top of my head I can think of two different blogs that performed substantially similar analyses a year ago.”
OK, and they prove what exactly?
“There are two reasons (besides declining global temperatures) why this issue is receiving increased attention now:”
So now global temperatures have not just plateaued, they’re declining? Surface temps? Upper parts of troposphere? Upper ocean? Lower ocean?
“This means that for the first time a large number of models can be readily tested against temperature data recorded AFTER those models were finalized.”
Wrong. In no particular order: (1) Models are never “finalized” and stating they ARE is pretty revealing, (2) No, there are climate models going back to at least the 1950s that you can test subsequent T data on, (3) You’re saying that the six years between the TAR and AR4 are capable of “testing” whether the IPCC TAR models are good or not? Then you absolutely do not understand the point of these two posts on the topic.
“Each year that passes will give us another year of real data to compare to the models. More years mean more statistical certainty when performing these analyses.”
Yes, and we now have about 120 years of pretty good data against which to evaluate the models, and they show unequivocally that GHGs are driving global temperature increases. So your additional years are going to change that?
“I AM very fond of odds making. I would bet…”
Who said anything about betting? Hansen’s not the only one with a ~3 degree sensitivity estimate, and by no means is it even the highest. Anyway, what are your odds for a “very cold 2009”, you didn’t say.
“Is this something you are interested in?”
A bet that can’t be decided for more than 20 years and proves what (other than grandstanding)? Not in the least.
Hank Roberts says
> betting on climate
Google for it. Stoat keeps a list; so do other climate bloggers (not here at RC though).
And relevant to betting and phenology (as Jim Bouldin’s guest topic is now closed), this:
http://news.stanford.edu/news/2001/october31/alaskabet-1031.html
Someone might want to look at what’s happened with that, seems suitable for statistical treatment.
Timothy Chase says
David B. Benson wrote in 197:
I’m certainly not an expert, but I would expect ENSO to remain fairly neutral for the rest of the year — possibly beginning to climb towards the end. A good El Nino? 2 or 3 years. Watch the North Pacific Gyre Oscillation. From what I understand it tends to lead ENSO by 10 to 12 months. Not sure how well the pattern is holding this time around, though.
Steve says
Just a thought – I’ve just read Michaels’ article and your rebuttal, and I think your point about endpoints is well taken: what it means to me is that he’s a little early to press – if 2009 is warmer, then his premise is trashed. On the other hand, your suggestion that he is using data selectively seems more appropriately directed at your own rebuttal – there are four global temperature series that I know of – you mention only two. Of these, GISS seems to offer the data kindest to your hypothesis, but significantly diverges from the other three – perhaps, if averages are to be used, all of the congruent temperature records should be averaged for this analysis.
[Response: Your claim about GISTEMP is unfounded. The outlier series is UAH, not GISTEMP. However, both UAH and RSS are measures of MSU-LT – a different quantity than the SAT – and need to be compared with the same diagnostic from the models (which I don’t have handy). The variance of the MSU data is not the same as the SAT and so could make a significant difference. – gavin]
Wayne Davidson says
For chaps like Michaels, smug in their graph prowess and making fun of our good game of hockey (of hockey-stick fame).
Take a look:
http://www.eh2r.com/index_pop_ups/warming.html
of warming evidence not needing data, just pictures: the sun itself, used as a fixed sphere, gets mangled by the atmosphere, so it is a very good temperature evaluator, since the density of the atmosphere as a whole depends on its temperature. If it is supposed to have cooled since 1998 (or, better, since 2005 if you like),
why are high Arctic sunsets trending as if they were from further south since 2005?
Why not look around for alternative ways of measuring/observing GW?
Isn’t that better than simple ad hoc, half-baked graph manipulations? Hey?
Timothy Chase says
Jason wrote in 196:
“… how very limited our understanding of the climate system is”? I believe you are overstating your case.
Climate models have done fairly well in terms of a variety of predictions. For example, they predicted the expansion of the Hadley cells, the poleward movement of storm tracks, the rising of the tropopause, the rising of the effective radiating altitude, the circulation of aerosols in the atmosphere, the transmission of radiation through the atmosphere, the clear sky super greenhouse effect that results from increased water vapor in the tropics, the near constancy of relative humidity, polar amplification, and the cooling of the stratosphere while the troposphere warmed.
I understand they do rather well with the ocean. And they predicted the expansion of the range of hurricanes and cyclones — about a year before Catrina showed up off the coast of Brazil. Not the sort of thing that had ever happened before.
Of course there are areas where the models seem to do less well. For example, while they have predicted the expansion of the Hadley cells, they appear to have underestimated the rate of that expansion — and thus how rapidly the subtropics would move north. They appear to have underestimated the rate at which sea ice would be lost in the Arctic. They did not take into account glacier slippage until fairly recently — and had in essence assumed that ice would simply melt in place. As such, they had underestimated the rate at which we would lose glaciers.
These are areas where they have tended to fall down, but typically this has been due to their underestimating the effects of climate change by failing to take into account all of the positive feedbacks. Not the sort of thing I would be playing up if I wanted to say that global warming isn’t a serious issue.
Then there are areas where the results of models have been more mixed. For example, they did not predict the trend in cloud loss in the tropics, which means that they underestimated the positive trend in outgoing infrared radiation. However, it also means that they underestimated the negative trend in outgoing visible light. And the net effect in terms of warming has been roughly that of one trend cancelling the other.
But in any area where models are doing poorly, this suggests that there are certain phenomena which are not being taken into account in terms of the physical processes that are included in the models. And once these phenomena are included, it is my understanding that models perform better overall — not just in the areas which modelers sought to improve. Currently it is my understanding that models could be improved with respect to aerosols and clouds. But I understand we are making progress in both. Particularly clouds.
Timothy Chase says
CORRECTION to the above post…
With respect to the hurricane that showed up off the coast of Brazil, I had called it “Catrina,” but that should have been “Catarina.”
MarkB says
Here’s an amusing blast from the past regarding Michaels.
http://www.cato.org/testimony/ct-pm072998-2.html
Showing Scenario A and removing B and C amounts to fraud in my view. Also cherry-picking bad satellite data (granted it wasn’t known to be bad at the time) of the southern hemisphere seems almost comical. Does anyone here have experience with Congressional testimony? Are policymakers this gullible?
GBS - Aesthetic Engineer says
David B. Benson (123) Thank you. I saw the PDS series on the first you mentioned. My problem with it was that it did not seem rigorous enough. I will check out the other one. I appreciate the time you took to respond to my inquiry.
Deep Climate says
#204
In my quest to show more trend info on one graph, I display GISTemp 5-year, 10-year and 20-year trends for all end points from 2008 back to 1990.
http://deepclimate.files.wordpress.com/2009/04/gistemp-trends.gif
The wild fluctuations of the 5 and 10 year trends can be clearly seen, as can the relative stability of the 20-year trend (and the simple decade over decade measure, dubbed “10-yr-diff”). The two trend measures based on 20 years of data dipped ever so slightly in 2008, but are still ahead of 1998 end point counterparts. So much for the “stopping” of global warming in 1998!
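The stability point can be checked directly: compute the trend over every 5-year and every 20-year window of a synthetic trend-plus-noise series and compare the spreads. This is a sketch with invented data and a made-up function name, not the GISTemp calculation itself:

```python
import numpy as np

def rolling_trends(years, temps, window):
    """Per-decade OLS trend for every window of the given length,
    keyed by the window's end year."""
    return {int(years[i]): 10.0 * np.polyfit(years[i - window + 1:i + 1],
                                             temps[i - window + 1:i + 1],
                                             1)[0]
            for i in range(window - 1, len(years))}

# Synthetic series: a steady 0.15 deg C/decade trend plus year-to-year noise.
years = np.arange(1960, 2009)
rng = np.random.default_rng(42)
temps = 0.015 * (years - 1960) + rng.normal(0.0, 0.1, years.size)

spread_5 = np.std(list(rolling_trends(years, temps, 5).values()))
spread_20 = np.std(list(rolling_trends(years, temps, 20).values()))
```

The 20-year windows average over the noise, while the 5-year windows are dominated by it, which is why the short trends fluctuate wildly in the figure while the 20-year trend barely moves.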
IPCC AR4 WG1 Chapter 10 doesn’t seem to give projections that are directly comparable to the MSU-LT swathe, but they would appear to be close to the surface projections. As I recall, though, there’s a lot more variance in both the model projections and the observations.
It’s also true that UAH is the outlier – and in more ways than one. The UAH annual cycle, as elaborated at Open Mind (Tamino) and my blog at Deep Climate, remains unexplained.
http://tamino.wordpress.com/2008/10/30/annual-cycle-in-uah-tlt/
http://deepclimate.org/2009/03/26/seasonal-divergence-in-tropospheric-temperature-trends-part-2/
It’s safe to say that both Mears (of RSS) and Christy (UAH) are aware of the issue, but so far there’s no published work or commentary on this. Maybe soon …
Jason says
““Off the top of my head I can think of two different blogs that performed substantially similar analyses a year ago.”
OK, and they prove what exactly?”
They prove that your suggestion (That Michaels’ analysis was timed to take advantage of a cold 2008) is unwarranted.
““There are two reasons (besides declining global temperatures) why this issue is receiving increased attention now:”
So now global temperatures have not just plateaued, they’re declining? Surface temps? Upper parts of troposphere? Upper ocean? Lower ocean?”
I thought that YOUR claim is that 2008 surface temps were lower than 2007, and that is why this analysis has been published now. Have I misunderstood?
““This means that for the first time a large number of models can be readily tested against temperature data recorded AFTER those models were finalized.”
Wrong. In no particular order: (1) Models are never “finalized” and stating they ARE is pretty revealing,”
You are arguing semantics to avoid admitting the obvious point. Climate models are only valuable if they can tell us something about the FUTURE.
If I produce a GCM that models perfectly the state of the world up until the day I publish it, but looks like pink noise afterwards, it is utterly useless.
The archived model runs were each produced using a specific version of a specific model with specific inputs. While the models and scenarios may evolve, the specific version of the models and scenarios used to create these runs will remain fixed for all eternity. Call this fixed, finalized, formalized or a snapshot. The point remains the same.
The experiment that Michaels and Lucia and many others are performing is designed to test whether or not the models are capable of providing us with information about periods of time AFTER they are published.
“(2) No, there are climate models going back to at least the 1950s that you can test subsequent T data on,”
First, those models were not held out as accurate forecasts of future temperature. It would have been helpful if, in 1975, the owners of these climate models had written to Newsweek informing them that: A) their story about global cooling was wrong because B) climate models have clearly demonstrated that temperatures are about to head up rapidly. Had they done so, maybe The New York Times wouldn’t have repeated the story one month later.
Second, many previously published model runs lack sufficient specificity to be tested. Was Hansen’s 1988 congressional testimony an accurate forecast of future temperatures, or a gross exaggeration? It depends on how you interpret his words, and how you center the 1988 data point. You can find numerous analyses making contradictory arguments while using the same data thanks to this lack of specificity. The model runs which are available today are much less vulnerable to this sort of argument.
“(3) You’re saying that the six years between the TAR and AR4 are capable of “testing” whether the IPCC TAR models are good or not?”
I’m not sure how many years are sufficient. Obviously I am not yet convinced that the models are an abject failure, so I suppose that not enough time has passed to make such an assessment. If the next 8 years of global temperature look like the last 8 years, I’ll almost certainly conclude that the models greatly overstated climate sensitivity. (I still won’t call them an abject failure. By establishing a testable hypothesis, they will have benefited science, even if that hypothesis is ultimately rejected.)
“Yes, and we now have about 120 years of pretty good data against which to evaluate the models, and they show unequivocally that GHGs are driving global temperature increases. So your additional years are going to change that?”
It is VERY easy to model the past. Models should not be published until they have successfully modeled the past.
It is VERY difficult to model the future of a system as complex as the earth.
If GCMs prove unable to do the latter, there are very few people who will care about the former, and attempts to legislate greenhouse emissions will fail.
“Who said anything about betting? Hansen’s not the only one with a ~3 degree sensitivity estimate, and by no means is it even the highest. Anyway, what are your odds for a “very cold 2009″, you didn’t say.”
I’m not making odds for that. Neither a very cold 2009 nor a very warm 2009 would mean very much to me.
“A bet that can’t be decided for more than 20 years and proves what (other than grandstanding)? Not in the least.”
But that is precisely what this thread is about. A bet HAS been made by the climate community that the models will prove to be accurate. If the bet is lost, so are efforts to reduce emissions. Michaels is basically saying: “Hey guys; Remember that bet you made? Things aren’t going the way you expected.”
I agree that it is too early to call the bet. But people are going to keep on performing this analysis until, one way or another, the bet is called.
Tim McDermott says
Jason said:
Wow, your concept of a GCM is very different from mine. Could you explain to me please how you can easily model past climate behavior, but not be able to model future behavior? A GCM that is not built from first principles is not a model, it is a fraud. I hope you don’t think that climate modelers keep a table of past weather somewhere so that their GCMs can spit out the right numbers for historical runs.
So why do you think that modeling the past is easier? Does the physics change? Does the chemistry? Is the fluid dynamics of the future more complex?
Timothy Chase says
Re Jason’s 211
Jason wrote in 211:
Tim McDermott responded in 212:
Jason, as I alluded to earlier, climate models aren’t based upon simple correlations and aren’t instances of line-fitting. They are built upon physics. Radiation transfer theory, thermodynamics, fluid dynamics, etc.. They don’t tinker with the model each time to make it fit the phenomena they are trying to model. With the sort of curve-fitting which your statement implies, tightening the fit in one area would loosen the fit in others. But with actual climate models, when they improve the physics — because it is actual physics — tightening the fit in one area almost inevitably means tightening the fit in numerous others.
This is why our knowledge of climatology has grown and grown quite rapidly. And this is why, as I indicated above in 206, your stating in 196:
… betrays a profound ignorance of the state — or for that matter even the nature — of climatology as a science.
*
Jim Bouldin wrote in 201
Jason responds in 211:
Don’t flatter yourself — or Michaels for that matter.
Hansen’s Scenario B from 1988 proved to be rather accurate over the past twenty years. Even though his climate sensitivity was a little off at the time. Even though he was specifically focusing on carbon dioxide. Climate models are approximations. Uncertainties exist, but — as in the case of the law of large numbers — they tend to cancel each other out. And getting things approximately right is often more than enough.
When predicting which states will be suffering from extreme drought by 2100, when all of the glaciers in the Himalayas will be gone, or how high the sea level will rise, approximately right is good enough. If it is a meter and a half by 2110 instead of 2100, or two-thirds of Florida is under water when they were projecting half, I doubt people will say that listening to the climatologists was a complete waste.
Yes, this is a bet of sorts. But you are adding up all the costs associated with doing something about climate change without taking into account all the costs associated with doing nothing, aren’t you? And instead of focusing on the fairly accurate projections that Hansen made in 1988, you have chosen in essence to focus on one year — 2008 as the endpoint of Michaels’ “test” — and say that the models aren’t performing that well for that year — given the internal variability of the climate system which plays such a big role in the short run, but such a minor role in the long.
Now given the actual nature of the “bet,” perhaps you can be a little more clear about whose interests you and Michaels are actually looking out for.
Jim Bouldin says
Jason, you need to read this:
http://www.aip.org/history/climate/index.html
and this:
https://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/
and this:
https://www.realclimate.org/index.php/archives/2008/01/uncertainty-noise-and-the-art-of-model-data-comparison/
Deep Climate says
#211 Jason said:
They prove that your suggestion (That Michaels’ analysis was timed to take advantage of a cold 2008) is unwarranted.
Not really. The earlier analyses were timed to take advantage of the exceptionally cool La Nina winter. For example, WattsUpWithThat had a particularly cretinous post comparing January 2008 to January 2007 (the warmest January on record in most datasets).
BFJ Cricklewood says
Re: Politicised science
#159 JBL
This misses the point, which is how the NSF (or other beneficiary) is selected in the first place. This issue sits above the science community rather than within it.
#161 Timothy Chase
(The above was by way of a suggestion that political funding would then equally colour these issues.)
No, since – unlike with AGW – there is no obvious political spin to be put on them.
#167 Ray Ladbury
I do elsewhere attempt to discuss the science. But even if I didn’t, that still would not invalidate discussion of how science is structured.
FWIW, I am undecided on the evidence, but unequivocal that blinkering oneself to the relationship between funding and evidence submitted is no answer.
#168 Dan
And what is that?
#171 Timothy Chase
Within the narrow confines of politics though.
#174 Mark
Mark says
Thetan level 10 says “No, don’t. Overall government spending on climatology issues is thousands of times larger than all industry put together.”
Nope, lobbying alone is a million dollar business. The US government even keeps a record of such expenditure.
And when websites are supported by Exxon, that’s money to add. When David Evans is paid to make a speech about how AGW models are a farce, his expenses are paid by Exxon and pals.
Millions more is spent on ensuring a business-friendly atmosphere for the Big Oil and Big Tobacco industries than is spent on climatology.
Hank Roberts says
Yes, “blinkering oneself to the relationship between funding and evidence submitted is no answer.”
You can look that up, you know. Did you bother?
http://scholar.google.com/scholar?q=research+funding+affects+results%3F
You’re right to suspect something might be happening.
You’re wrong to assume you know the answer without investigating.
As long as you keep going in the wrong direction, you’re most likely going to end up where you’re headed.
Pray consider the possibility that you may be mistaken.
http://scholar.google.com/scholar?q=research+funding+affects+results%3F
http://www.springerlink.com/content/r654521305u8547k/
Referenced by 34 newer articles
Take your time. Facts are hard to choke down; they don’t completely support anyone’s closely held assumptions.
Ray Ladbury says
Cricklewood, Note the title of this blog. Now go to the “About” button at the top of the page and read. To wit:
“The discussion here is restricted to scientific topics…”
Much as we all might love to eviscerate your tired regurgitation of the arguments of Crichton on the right and Feyerabend et al. on the left, that’s off topic.
The problem with your argument is that your very premise doesn’t even hold up. Climate change is not good for governments because it disrupts business as usual, and that disrupts tax revenues. It’s clear you are as ignorant of politics and government as you are of science.
What is more, you need posit no dark conspiracies–political or otherwise–to understand why contrarian climate science doesn’t prosper. It doesn’t prosper because it isn’t productive. It doesn’t advance understanding of climate. In short, it’s a dry well.
Here is my recommendation. Go out and figure out how science actually works. Talk to some actual scientists. Send off a letter to a grant-making organization or two. LISTEN. Read Spencer Weart’s excellent history of climate change. Maybe try to learn some of the science. Don’t keep going down your current path. It leads straight to crackpotville.
There are plenty of scientists here. We do science every day. We compete for grants. We publish. Trust me. Your paranoid fantasies don’t ring a bell with us. That is not how science works.
Mark says
“FWIW, I am undecided on the evidence, but unequivocal that blinkering oneself to the relationship between funding and evidence submitted is no answer.”
Then why do you continue to blinker yourself to funding of the denialists?
Kevin McKinney says
BJ, it’s ridiculous to ascribe all funding of climate research to a “pro-AGW machine.” The only way to get the huge dollar amounts people claim is to include all sorts of normal duties as part of the “machine.” For example, a university professor’s activities involve things like, say, actual teaching, advising, and committee work. But if his salary is counted in toto as part of said “machine,” you’ve effectively decided that most of his time is actually spent pro bono, not doing his “job” of proving that AGW is all that it is claimed to be.
Also, the idea of the “pro-AGW machine” ignores the facts that:
a) Most papers are relatively focussed and technical, hence neither support nor controvert the AGW thesis directly; why are the dollars that paid for them counted towards “machine funding?”
b) The contrarian papers that do get published should, by this conspiratorial logic, be counted as part of the “AGW machine,” since Richard Lindzen’s or Roy Spencer’s salary (for instance) is under the academic umbrella; clearly this is nonsensical, however. They shouldn’t be counted as “machine funding,” either.
c) Since we see most of the academic research output as neutral, and a small amount actually antagonistic with respect to the AGW question, the money funding legitimate climate research does not “buy results,” as claimed. Maybe most scientists and most funders are actually concerned with what they claim: advancing our understanding of the universe?
By contrast, I know of no reason to think that Big Oil is anything but satisfied with the money they put into the Heartland Institute, the Cato Institute, etc., etc.
Timothy Chase says
Responding to BFJ Cricklewood, Ray Ladbury wrote in 156:
I responded to Ray Ladbury in 161:
BFJ Cricklewood responds to me in 215:
The trouble is that what I am describing is the physical basis for the greenhouse effect.
The greenhouse effect is the result of molecules being stimulated into vibrational, rotational and rovibrational states of excitation. This may be the result of the absorption of photons or molecular collisions in which they gain energy. De-excitation occurs through either molecular collisions in which they lose energy or the emission of photons.
The absorption of photons results in the warming of the atmosphere and their emission results in the cooling of the atmosphere. Absorption of thermal radiation makes the thermal spectrum of the earth as seen from space appear cooler, radiation emitted by de-excitation is what results in the further warming of the surface, and the surface continues to warm until the rate at which energy is radiated from the earth’s climate system (given the increased opacity of the atmosphere to longwave radiation) is equal to the rate at which energy enters it.
The wavelengths at which they absorb and emit photons are described by their absorption/emission spectra, and give rise to images like this:
Aqua/AIRS Global Carbon Dioxide
http://svs.gsfc.nasa.gov/vis/a000000/a003400/a003440/index.html
We had a couple of posts on it here:
Part I: A Saturated Gassy Argument
by Spencer Weart and Raymond T. Pierrehumbert
26 June 2007
https://www.realclimate.org/index.php?p=455
… and here:
Part II: What Ångström didn’t know
by Raymond T. Pierrehumbert
26 June 2007
https://www.realclimate.org/index.php?p=456
The physical basis for the greenhouse effect is principally that of Quantum Mechanics and more broadly that of Quantum Statistical Mechanics.
*
I wrote in 171:
BFJ Cricklewood responds to me in 215:
But I wrote in 171:
You see, the trouble is you can’t separate science in that fashion.
The basis in physics for explaining the greenhouse effect is essentially the same as that for describing photovoltaic devices, or that which Einstein used to suggest the possibility of lasers. The same principles form the basis for our ability to perform calculations in chemistry and biochemistry at the quantum level. It is how we are able to understand and predict the behavior of tunnel diodes.
It is the same as what goes into infrared detection used in the military by fighter jets:
AFRL-VS-HA-TR-2004-1145
Environmental Research Papers, No. 1260
Users’ Manual for SAMM2, SHARC-4 and MODTRAN4 Merged
H. Dothe, et al.
http://www.dtic.mil/…GetTRDoc.pdf
Now since you have helped me illustrate this principle, clearly you are deeply involved in the conspiracy, and as such only two questions remain to be answered.
First, as one of the conspirators, are you using your real name or a pseudonym?
Second, if it is the latter, what name should I write the check out to?
Barton Paul Levenson says
BJFC continues to use ad hominem arguments. Attention BJ: IT DOESN’T MATTER what the sources of funding are. It only matters WHETHER THE ARGUMENTS PRESENTED ARE CORRECT OR NOT. Why don’t you get this?
Google “ad hominem wikipedia”
Tim McDermott says
Barton: Rather than look at BJFC as an ad hominem wielding troll, I think it is useful to consider him as a prime example of how we all see the world through our own filters. I think if you asked him what he thought of the Millikan oil drop experiment, his response would be “huh?” But I suspect the initial response of the vast majority of the folks who went on to become working scientists would include “cool” and “elegant.”
It may be impossible for the BJFCs of the world to understand why someone with the intelligence and accomplishments needed to become a working scientist would settle for scientist pay when a fresh MBA starts in six figures (the median package for Wharton grads is $145K). I suspect that they assume ulterior motives because they have no other explanation.
Mark says
“I think it is useful to consider him as a prime example of how we all see the world through our own filters.”
That’s not a filter. That’s “AGW sensitive glasses” like Zaphod has…
We *don’t* think like him (in the main).