It has taken 17 months to get a comment published pointing out the obvious errors in the Scafetta (2022) paper in GRL.
Back in March 2022, Nicola Scafetta published a short paper in Geophysical Research Letters (GRL) purporting to show through ‘advanced’ means that ‘all models with ECS > 3.0°C overestimate the observed global surface warming’ (as defined by ERA5). We (me, Gareth Jones and John Kennedy) wrote a note up within a couple of days pointing out how wrongheaded the reasoning was and how the results did not stand up to scrutiny.
At the time, GRL had a policy not to accept comments for publication. Instead, they had a somewhat opaque and, I assume, rarely used, process by which you could submit a complaint about a paper and upon review, the editors would decide whether a correction, amendment, or even retraction, was warranted. We therefore submitted our note as a complaint that same week.
For whatever reason (and speculation may abound), the original process hit the buffers after a few months, which possibly contributed to a reassessment in December 2022 by the GRL editors and AGU of their policy regarding comments. Henceforth, they would be accepted! [Note this is something many people had been wanting for some time]. After some back and forth on how exactly this would work (including updating the GRL website to accept comments), we reformatted our note as a comment and submitted it formally on December 12, 2022. We were assured by the editor-in-chief and publications manager that this would be a ‘streamlined’ and ‘timely’ review process.
With respect to our comment, that appeared to be the case: It was reviewed, received minor comments, was resubmitted, and accepted on January 28, 2023.
But there it sat for 7 months!
The issue was that the GRL editors wanted to have both the comment and a reply appear together. However, the reply had to pass peer review as well, and that seems to have been a bit of a bottleneck. But while the reply wasn’t being accepted, our comment sat in limbo. Indeed, the situation inadvertently gives the criticized author(s) an effective delaying tactic since, as long as a reply is promised but not delivered, the comment doesn’t see the light of day. After multiple successive reassurances that it would just take a few weeks longer, Scafetta’s reply was finally accepted (through exhaustion?) and we were finally moved into production on August 18, 2023. The comment (direct link) and reply (direct link) appeared online on September 21, 2023.
All in all, it took 17 months, two separate processes, dozens of emails, and who knows how much internal deliberation, for an official comment to get into the journal pointing out issues that were obvious as soon as the paper came out.
Why bother?
This is a perennial question. Why do we need to correct the scientific record in formal ways when we have abundant blogs, PubPeer, and social media to get the message out? Clearly, many people who come across a technical paper in the literature won’t instantly be aware of criticisms of it on Twitter or someone’s blog. Not everyone has installed the PubPeer browser extension that flags when a paper you are looking at or citing has comments [though you really should!]. And so, since journals remain extremely reluctant to point to third-party commentary on their published papers, going through the journals’ own process seems to be the only way to get a comment or criticism noticed by the people who are reading the original article. Without that, people may read and cite the material without being aware of the criticisms (having said that, the original Scafetta paper has so far amassed only 4 citations, half of which are from Scafetta himself, not counting this comment and reply. For comparison, our Hausfather et al (2022) commentary on ‘hot models’ in CMIP6, which came out in May 2022, has been cited 118 times).
The odd thing about how long this has taken is that the substance of the comment was produced extremely quickly (a few days) because the errors in the original paper were both commonplace and easily demonstrated. The time, instead, has been entirely taken up by the process itself. It shouldn’t be beyond the wit of people to reduce that burden considerably. [Let us know in the comments if more recent experiences with GRL have been any better.]
I’ve previously discussed other ideas, such as short-form journals that could be devoted to post-publication peer reviews and extensions, but this has not (yet!) gained much traction, though AGU has suggested that the online Earth and Space Sciences could be used as such a platform.
What should have happened?
Every claim in the Scafetta paper was wrong and wrongly reasoned. As soon as the journal was notified of that, and had the original process reviewed by independent editors and reviewers, it should have been clear that the paper would never have passed competent peer review. At that point, the author could have been given a chance to amend the paper to pass a new review, and if they were unwilling or unable to do so, the paper should have been retracted. The COPE guidelines are clear that retraction is warranted in the case of unreliable results arising from major errors. It does no-one any good to have faulty reasoning, inappropriate analyses, and unsupported claims in the literature. The paper would not have appeared in GRL if it had been competently reviewed, so why should it remain now that it has been?
Of course, people sometimes get a bit bent out of shape when their papers are retracted, or even when that is threatened, and some authors have gone as far as instigating legal action for defamation against the journals pursuing a retraction or the newspapers reporting on it. I am, however, unaware of any such suit succeeding. Authors do not have any right to be published in their journal of choice, and the judgement of a journal in deciding what does or does not get published in its pages is (and should be) pretty much absolute.
So was the reply worth waiting 7 months for?
Nope. Not in the slightest.
He spends most of the response arguing incorrectly about the accuracy of the ERA5 surface temperatures – something that isn’t even in question; they could be perfect and it would not affect the point we were making. His confusion is that he thinks that the specific realization of the internal variability that the real world followed is the same as the forced component of the temperature trends that we would hope to capture with climate models. It is not. We discussed this in some detail in a subsequent post when he first made this error in his 2023 Climate Dynamics paper. To be specific, the observed temperature record can be thought of as consisting of a climatological (forced) trend, internal variability with a mean of zero, plus structural uncertainty related to how well the observational estimate matches the real world:

\( T_{obs}(t) = T_{clim} + T_{int}(t) + \epsilon_{obs} \)

with \( T_{clim} \) assumed to be constant by definition over each decade, and so the decadal mean is

\( \overline{T_{obs}} = T_{clim} + \overline{T_{int}} + \overline{\epsilon_{obs}} \)

The \( \sigma_{int} \) (the spread of \( \overline{T_{int}} \) across realizations) can be estimated from the decadal sample, and for GISTEMP or ERA5 it’s around 0.05ºC, while \( \sigma_{obs} \) is much smaller (0.016ºC or so). So the 95% confidence interval on the decadal change due to internal variability is therefore around \( 2\sqrt{2}\,\sigma_{int} \approx \) ±0.14ºC. With models you can actually run an ensemble and estimate this more directly, and, for consistency, the two methods should be comparable. Curiously though, the 95% ensemble spread for the models (with 3 or more simulations) has quite a wide range, from 0.05ºC to a whopping 0.42ºC (EC-Earth, a definite outlier), though the model mean is a more reasonable 0.17ºC.
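For concreteness, here is a minimal sketch (in Python; an illustration only, not the calculation from the comment itself) of how those two uncertainty terms combine into 95% intervals for the change between two decadal means, assuming the two decades are independent:

```python
# Illustrative only (not the code from Schmidt et al., 2023): combine the
# uncertainty terms quoted above into 95% intervals for the change between
# two decadal means, assuming the two decades are independent.
import math

sigma_int = 0.05    # C: spread of a decadal mean due to internal variability
sigma_obs = 0.016   # C: structural uncertainty of the observational estimate

ci_int   = 1.96 * math.sqrt(2) * sigma_int                               # internal variability only
ci_total = 1.96 * math.sqrt(2) * math.sqrt(sigma_int**2 + sigma_obs**2)  # adding the structural term

print(f"95% CI on the decadal change (internal variability only): +/- {ci_int:.2f} C")
print(f"95% CI on the decadal change (including structural term): +/- {ci_total:.2f} C")
```

The internal variability term clearly dominates; the structural term barely changes the answer.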
Curiously, Scafetta associates the structural uncertainty in annual temperature anomalies in the ERA5 reanalysis with the uncertainty in the in situ surface temperature analyses (like GISTEMP or HadCRUT5) – products that use a totally different methodology and whose error characteristics aren’t obviously related at all. In any case, it’s a very small number, and the uncertainty in our estimate of the climatological trend is totally dominated by the variance due to the specific realization of the weather. Also curious is his insistence that the calculation of an internal variability component can’t be fundamental because he gets a different number using the monthly variations as opposed to the annual ones. He seems unaware of the influence of auto-correlation.
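To illustrate that last point, here is a minimal sketch using a purely hypothetical AR(1) series (not any of the datasets discussed here): when monthly anomalies are autocorrelated, the spread of annual means is not simply the monthly spread divided by the square root of 12, so monthly-based and annual-based variability estimates will disagree unless the autocorrelation is taken into account.

```python
# Hypothetical AR(1) example, for illustration only: autocorrelated monthly
# anomalies do not average down by sqrt(12) when forming annual means.
import numpy as np

rng = np.random.default_rng(0)
r, n_years = 0.8, 5000                       # assumed lag-1 autocorrelation
eps = rng.standard_normal(12 * n_years)

monthly = np.empty_like(eps)
monthly[0] = eps[0]
for i in range(1, monthly.size):             # AR(1) monthly anomalies
    monthly[i] = r * monthly[i - 1] + eps[i]

annual = monthly.reshape(n_years, 12).mean(axis=1)

print("naive (monthly std / sqrt(12)):", monthly.std(ddof=1) / np.sqrt(12))
print("actual std of annual means:    ", annual.std(ddof=1))   # much larger
```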
Also amusing is his excuse for not looking at the full ensemble in assessing the consistency of specific models. He claims to have taken three runs from each model. But he’s being very disingenuous here. His ‘three runs’ were the ensemble means from three scenarios (the different SSPs from 2015 to 2020), which a) barely differ from each other in forcing because of the minor differences in GHG concentrations over a mere five years, and b) are the ensemble means (at least for the runs with multiple ensemble members)! It is possible that he isn’t aware of what he actually did, since it is not stated clearly either in the original paper or in this reply, but it is obvious from comparing his results for the SSP2-4.5 scenario with our Figure 1. For instance, for NCAR CESM2, there are six simulations with deltas of [0.788, 0.861, 0.735, 0.653, 0.682, 0.795] ºC (using the period definition in the original paper) and an ensemble mean change of 0.752ºC. Scafetta’s value for this model is … 0.75ºC. Similarly, for NorESM2-LM, the individual runs have changes of 0.772, 0.632, & 0.444ºC, with an ensemble mean of 0.616ºC. Scafetta’s number? You guessed it, 0.62ºC. It is simply not possible to estimate the ensemble spread using only the ensemble means. Another oddity of this methodology is that the spread for the models with many ensemble members is much smaller than the spread for the models with only a single simulation per scenario, since for the latter you actually do sample some of the internal variability across the three scenarios. For instance, CanESM5 (50 ensemble members) has a spread of 0.03ºC across the three scenarios, and IPSL-CM6A-LR (11 ensemble members) has no spread at all! Meanwhile MCM-UA-1-0 and HadGEM3-GC31-LL (with only single runs) have spreads in Scafetta’s table of 0.11ºC and 0.17ºC respectively. [All that effort put in to running initial condition ensembles for nought!]
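As a quick check, here is a minimal sketch using just the CESM2 numbers quoted above: averaging the individual runs reproduces the ensemble-mean change that appears in Scafetta’s table, but the run-to-run spread that samples the internal variability is lost in that average and cannot be recovered from it.

```python
# Sketch using the CESM2 deltas quoted in the text (C, period definition as
# in the original paper): the ensemble mean matches the tabulated value,
# while the run-to-run spread is simply discarded by the averaging.
import statistics

cesm2_runs = [0.788, 0.861, 0.735, 0.653, 0.682, 0.795]

print(f"ensemble mean change: {statistics.mean(cesm2_runs):.3f} C")   # ~0.75
print(f"run-to-run range:     {max(cesm2_runs) - min(cesm2_runs):.3f} C")
print(f"run-to-run std dev:   {statistics.stdev(cesm2_runs):.3f} C")
```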
Thus the two points that we made in our comment – that he misunderstood the uncertainty in the climatological trends in the observations and that he didn’t utilize the spread in the model ensembles – and our conclusion that this fatally compromises his results, stand even more clearly now. The additional spin he now wants to put on his results, defining a new concept apparently called a ‘macro-GCM’, has the internal consistency of whipped cream. None of it rescues his patently incorrect conclusions.
Summary
I have absolutely no expectation that this episode will encourage Scafetta to improve his analyses. He’s been doing this kind of thing for almost two decades now. He is too wedded to his desired conclusions to let little things like the actual results or internal consistency intrude. I am slightly more confident that processes at GRL may improve, and the recent change to allow comments is very welcome. Hopefully, this exchange might be helpful for other researchers thinking about the appropriate way to compare models to observations (for instance, it made an appearance in Jain et al, 2023).
The last paragraph in our comment sums it up:
In critiquing the tests in this particular paper, we are not suggesting that hindcast comparisons should not be performed, nor are we claiming that all models in the CMIP6 archive perform equally well. […] However, the claims in Scafetta (2022) are simply not supported by an appropriate analysis and should be withdrawn or amended.
Schmidt et al, 2023
Let us all try to do better in future.
References
- N. Scafetta, "Advanced Testing of Low, Medium, and High ECS CMIP6 GCM Simulations Versus ERA5‐T2m", Geophysical Research Letters, vol. 49, 2022. http://dx.doi.org/10.1029/2022GL097716
- G.A. Schmidt, G.S. Jones, and J.J. Kennedy, "Comment on “Advanced Testing of Low, Medium, and High ECS CMIP6 GCM Simulations Versus ERA5‐T2m” by N. Scafetta (2022)", Geophysical Research Letters, vol. 50, 2023. http://dx.doi.org/10.1029/2022GL102530
- N. Scafetta, "Reply to “Comment on ‘Advanced Testing of Low, Medium, and High ECS CMIP6 GCM Simulations Versus ERA5‐T2m’ by N. Scafetta (2022)” by Schmidt et al. (2023)", Geophysical Research Letters, vol. 50, 2023. http://dx.doi.org/10.1029/2023GL104960
- Z. Hausfather, K. Marvel, G.A. Schmidt, J.W. Nielsen-Gammon, and M. Zelinka, "Climate simulations: recognize the ‘hot model’ problem", Nature, vol. 605, pp. 26-29, 2022. http://dx.doi.org/10.1038/d41586-022-01192-2
- S. Jain, A.A. Scaife, T.G. Shepherd, C. Deser, N. Dunstone, G.A. Schmidt, K.E. Trenberth, and T. Turkington, "Importance of internal variability for climate model assessment", npj Climate and Atmospheric Science, vol. 6, 2023. http://dx.doi.org/10.1038/s41612-023-00389-0
Susan Anderson says
A stolen poem (a comment on the Guardian’s John Crace column re Sunak), a powerful way of viewing the problem, from “RockyRex”: https://discussion.theguardian.com/comment-permalink/164481324
THE CLIMATE PROTESTS………
Arrest the rain.
How dare it remind us that the climate is changing?
Let’s pretend it always rained like this.
Handcuff the wildfire.
Why should we care about the trees?
What did koalas ever do for us?
Take the flood to court.
It has no business blocking roads, or sweeping away bridges.
What if an ambulance was stuck on the wrong side of the river?
Give the clouds a suspended sentence.
They are trying to remind us that a warmer world has wetter air.
They should go and tell China, not us.
Dress the heatwave in an orange jumpsuit.
It is probably a fault in the thermometer anyway.
These scientists – what do they know?
Sentence the melting ice to 10 years in the cooler.
Why does it matter if the sea floods Bangladesh?
And Miami can be the New Atlantis.
All these forces of nature are just causing trouble.
They are just a rabble of hippies – don’t listen to them.
[RR 2022]
Paul Pukite (@whut) says
The problem with Scafetta is that he uses a pasta approach — he throws everything against the wall to see if anything sticks. How many papers has he written that attribute climate variations to planetary influences, sunspot cycles, orbital resonances, etc.? Yet none of these are self-consistent. He’s even written about a correlation to COVID-19!
In conventional science research circles, you’re given a couple of chances to prove your worthiness. After that, you’re considered a pariah and shunned thereafter.
I realize it’s difficult to make judgments because none of the models can be routinely debunked, as controlled experiments are not available to easily falsify the results — yet all the editors have to do is look at a scientist’s track record to evaluate their sincerity and diligence in how they publish their research results.
With that said, let us all embrace the new-and-improved pasta approach — applying machine learning! The difference here is that the practitioners actually know how to do cross-validation to determine if the ML models have any practicality. Should be fun times ahead.
Jerry (Jerome A.) Smith says
As the editor-in-chief of the AMS Journal of Physical Oceanography, I have also expressed my dismay at the current “comment and reply” protocol here, where basically the original author can “stonewall” the process. I’d suggest (and I will suggest) that the process be changed to a finite (and fairly short) deadline for the “reply”, after which the (approved) comment is published. A later “reply” might also be published (after review), and the two would then likely be linked, at least online. My initial suggestion for the “short deadline” would be the same as for reviewers: 3 or 4 weeks, max. The clock starts upon approval of the comment.
[Response: Agreed. That would be a big improvement. – gavin]
TheWarOnEntropy says
I would have thought that a critical comment could be peer-reviewed quickly, published without a reply, and then republished *with* the eventual reply. This puts the onus on the author to respond quickly, rather than rewarding them for stonewalling.
John N-G says
A more incremental improvement would be: when a comment is submitted, the comment is reviewed and the original authors are notified. When the comment is accepted, a response is solicited with a deadline of one month. The response is reviewed, but rather than being revised, the reviews are published along with the response.
(P.S. I’m getting the comment (kommentti) form instructions in Finnish. I don’t know why it’s happening, but it’s kind of refreshing…)
Michael Tobis says
https://frog.gatech.edu/Pubs/How-to-Publish-a-Scientific-Comment-in-123-Easy-Steps.pdf
Paul Pukite (@whut) says
That’s an interesting story by Trebino, but the thing I don’t get is this passage:
It seems obvious that the physicist Trebino has invented something. If the device actually works as described in his original paper, that’s enough to validate his research, and whatever someone else is criticizing should be devalued. That’s the concept often known as “the proof is in the pudding”
In the end, science is self-correcting — the fight is usually over who gets the credit.
Keith Woollard says
Yes, science is self correcting, but typically with a lag of more than a generation.
Companies come and go on a whiff of news
Piotr says
[ Re: Keith Woolard Sept.22]
Piotr: So what are you saying, Mr. Woolard? That in a generation the science will … self-correct itself by admitting that there is no climate change???
And only one generation? I keep waiting for this retraction from Copernicus and … still nothing!
And which are those companies that have “ shut down on a whiff of news ” on a scientific paper that supports the existence of climate change? Exxon Mobile? BP? Aramco? Gazprom?
Don’t you, deniers, have some peer-review process that would sieve out the more bizarre claims by the members of the denier community?
Piotr says
Keith Woollard: Sept.23: “I am not talking about global warming, nor was I even thinking it“.
Lady doth protest too much, methinks. And the protestations would have been much more believable if this site weren’t about climate change, and if you hadn’t promoted climate change denial tropes here for years – in this case your:
KW :” Yes, science is self correcting, but typically with a lag of more than a generation”
being a rehash of the old denier trope that climate change is a global conspiracy of corrupt scientists, who block the denier heroes from publishing the findings that would have proved that global warming is not happening, or does not cause bad things, or, at the very least, is not caused by us.
KW: And I didn’t say “only” – why would you write that and italicise it?
Huh? Italic font is NOT a form of quotation marks – it was used for emphasis – and my irony was built on the suggestion that scientific censorship of opposing views may be … far more entrenched than you suggested:
Piotr 22Sep. “And only one generation? I keep waiting for this retraction from Copernicus and … still nothing!”
You know, Copernicus, as in:
Copernicus, N., 1543. “De revolutionibus orbium coelestium”, Johannes Petreius (Nuremberg), 540 pp. ?
Ray Ladbury says
Horse Puckey! Most scientific errors are found and corrected on a timescale of months. It is only when the theory also has some evolving to do that it can take longer–and it should, as doing science with a slightly wrong theory is often less dangerous than doing it with a theory you don’t understand.
Paul Pukite (@whut) says
Ray said:
Welcome to the world of geophysics. In just about any other scientific discipline, say solid-state physics for example, round-trip analysis is relatively quick. Especially when it involves a controlled experiment. Recall how quickly the recent pseudo-finding of room-temperature superconductivity resolved itself. Other scientists tried to replicate the findings and were rapidly able to come to an understanding of the mechanism, within a week or two IIRC. Alas, nothing is that quick in geophysics because there are no controlled experiments to provide a means of falsification. Everything is slowly chewed on because apparently the only factor that matters is the long waiting time for the results of predictions to dribble in.
A personal case in point: I have a model of QBO originally presented at an AGU meeting in 2016 and published in 2018. Perhaps no one is criticizing it because it continues to explain the QBO behavior better than any other model out there. From an article published last month:
So it’s really a waiting game to sort things out. Unlike Scafetta, I’m holding steady and not shooting randomly at anything that moves. Like I said in an earlier comment and from what I was taught, you don’t get a do-over.
Keith Woollard says
Piotr,
I am not talking about global warming, nor was I even thinking it.
And I didn’t say “only” – why would you write that and italicise it? Most established scientific theories take more than a generation as you need to have the gatekeepers die or lose their influence.. I was thinking of subjects closer to my own field such as plate tectonics and sequence stratigraphy. I am sure all scientific fields will have their own examples. I won’t even pretend to address either of your last two paragraphs. What a meaningless waste of typing!. And at least I can copy someone;’s name without making a istake
And Ray, what can I say….. sure simple mistakes in certain calculations can be spotted and corrected but we aren’t really talking about that are we. We are talking about differing opinions about how the world works. Things like what gives people gastric ulcers
My point was not about climate science, it was rather the difference in rate between scientific acceptance and (small) company growth/decline.
Paul Pukite (@whut) says
KW said:
Interesting that the “gatekeeper” of the original QBO model (see above comment of mine) is the notorious Richard Lindzen. Back in the 1960’s when he took up explaining QBO as a research topic, he apparently went through all the possible forcing causes, which you can find from his papers. He dismissed the obvious cause of tides:
and
Yet, Lindzen never considered that tides act non-linearly with the annual cycle, thus generating sidebands that aren’t normally considered in conventional tidal analysis. That is the basis of my hypothesis, that the lunar tidal factor with the only symmetry that can effect a wavenumber=0 behavior such as QBO is the 27.212 day lunar Draconic cycle. And that cycle will create a frequency sideband that matches that of the average QBO period, and will also well approximate the square-wave-like shape. See Pukite(2018)
The point is that scientific influencers such as Lindzen may make assertions that prevent advances for the span of their careers, as other researchers decline to pursue these paths fearing they are dead-ends.
I believe the changing of the guard will be the application of machine learning, which based on the way it works will ignore subjective advice and instead plow through all the combinations so as to match the climate patterns. Example is that NVIDIA is looking for a ” Senior AI Research Scientist for Climate & Weather Prediction” to apply to their team:
https://twitter.com/SciPritchard/status/1705357784072761718
They will certainly clean things up, if not shake up the status quo.
Thomas W Fuller says
I have seen many adjectives applied to Richard Lindzen, most of them disparaging. Notorious is a new one. Might I suggest a slightly longer descriptor? ‘A credentialed climate scientist with whom I disagree,’ for example.
[Response: ‘A credentialed climate scientist who at one time (many decades ago) made interesting and challenging points, but whose points have long been dealt with or have been overtaken by the ‘facts on the ground’.’ – gavin]
Thomas W Fuller says
I like mine better. I have advocated to skeptics that they use the same for you, Gavin.
Carbomontanus says
Dr Schmidt
I have actually worked with things similar to Lindzens contributions namely on oscillators of several kinds and especially on pnevmatic oscillators
And have had my upper hand on it from other fields and areas of science and technologies, but hardly from Lindzens.
So I am more and more coming to the conclusion that if your results are appliciable and fruitful back to the domaines and traditions from where you got it and borrowed it, then your results are valid and fruitful. But if not, you may be misconsceived and you may even have cheated your results.
Richard Lindzen spoke:
://”Where there is Science, there is not Consensus. And where there is Consensus, there is not SCIENCE,….. PERIOD!”//:
I suddenly woke up hearing that with my own ears and saw it with my own eyes on Youtube Video TV from Domus Academica Royal Frederiks Oslo downtown , by Richard Lindzen giving a lecture for the local climate surrealists.
Hearing that doctrine quite clearly spoken from the Cateter in Domus Academica, , I went to sleep again and lost nothing.
That doctrine of Richard Lindzen, known as Lindzens teorem,…………….. is not fruitful and appliciable elsewhere.
This is quite a good rule for what to take for serious and what not to take for serious and waste your talents and time on. Such as, where to go to scool and where not to go to school if you have any choise.
Ray Ladbury says
Richard Lindzen ceased being a scientist when he started making arguments he know are specious to court public opinion. A minimum requirement for a scientist is adherence to truth.
The term is shill.
MA Rodger says
Thomas W Fuller,
Dickie Lindzen is an entirely unreliable source of climate science. The question of whether this is because he is a bare-faced liar, a senile old fool, or somebody so wrapped up in his scientific endeavours that he has become incapable of properly reporting their scientific significance is really immaterial. His campaigning against the science has been going on for far, far too long for such niceties.
MA Rodger says
Carbomontanus,
The argument that science does not co-exist with consensus, and thus that consensus is non-scientific, is a much-used crazy denialist argument. But I don’t think I have heard of Dickie Lindzen making such an argument. The quote you provide (which I’m not familiar with), I would suggest, is not saying that consensus is non-scientific (which is the puerile gobshite) but that the science does not co-exist with consensus (which is a useful argument). I recall even our hosts here at RC employing it.

Perhaps the interpretive problem lies in differing understandings of the word ‘science’. For some, e=mc^2 is ‘science’. For others this is incorrect: ‘science’ is the reason we know without any controversy that e=mc^2. But if you see the world from a scientist’s perspective, the ‘science’ was the work that established the relationship e=mc^2, and today there is consensus over this finding.
Dickie Lindzen would I’m sure entirely agree there is a massive difference between saying science does not co-exist with consensus and saying consensus is non-scientific and he would (or should) entirely disagree with the latter, although I cannot be sure as the old twit does often kick-over the scientifical traces. The closest I can see that he gets to kicking-over these particular traces is seen in this video (@2:00) where he says:-
His reason here for denying that CO2 is the control knob of Earth’s climate and that the system must be “robust” enough not to result in any problematic outcome from AGW rests on the Faint Young Sun Paradox and by implication the bulk of the climatology community are utter fools to ignore this evidence, although he “can’t say for sure.”
“Set back the science of climate generations”? I would suggest this is a very good demonstration of a mucky old pot calling the shiny electric kettles black.
Paul Pukite (@whut) says
The problem with Richard Lindzen’s research results is that he appears to refuse to consider the powerful impact of forcing on climate. I gave the example of his early research on QBO, where he paid minimal lip service to how external tidal forces were ultimately responsible for the reversing of the equatorial stratospheric winds. Instead he generated a hypothesis that the QBO was a natural resonance in the atmosphere. That belief could have laid a foundation for his assertion that man-made forcing via increased atmospheric CO2 is likely NOT a source of temperature rise. Many belief systems are built up this way, as previous ideas influence how one models new problems. Additionally, Lindzen may have been motivated to protect his reputation, given that the scientific foundation he had constructed was so precariously assembled.
I’d offer up this advice: Very few large scale phenomena (if any) spontaneously develop in the absence of any forcing. Even something seemingly spontaneous such as isostatic rebound can be considered as a release from a previous forcing.
Paul Pukite (@whut) says
I have no idea whether Richard Lindzen is aware of the alternative QBO model, which is notorious in its own way.

This is a Google spreadsheet of the model plotted against the 30 hPa QBO data, if one would like to see the deep complexity …. errr, I mean striking simplicity of the details,
https://docs.google.com/spreadsheets/d/1QjBtVeD0rvXZc24TyxiKXp6mp0DqozgeWZL_mXuYKNc/edit?usp=sharing
Probably should have made a spreadsheet like this long ago.
Carbomontanus says
@ MA Rodger
Thank you very much for your long and thorrough paragraph to the possible defence of Ricyhard Lindzen. in the purgatory.
But my spontaneous reaction and judgement this time was quite especially experienced, also from the festival auditorium in DOMVS ACADEMICA of The Royal Frederics.
About my autenticity & qualifications:
I was once told to have slept my way to a very high LAVD in general chemistery, as I had been seen sleeping up on the gallery for 2 semesters in the large chemical auditory
The secret is that I was a highly trained listener from before, even asleep. and could wake up 5 seconds before any BANG! experiment on the Cateter. , , as I heard them coming by half an ear and by closed eyes even asleep.. And then simply go to sleep again after each BANG!
I thrived very much better in the lab, where I learnt it better.
It is a matter of higher, transcendental meditation at slumbering you see,
People should be aquainted to slumbering in church allready, , not just in class, at Lindzens performances. ( Aint that not so Levenson?)
I had to learn that on the scooldesk in public school and highscool allready to be able to sit out all those long and irrelevant lectures. An earloy training that later showed very valuable in the army. . I was keeping the radio, and we were set on watch alert. Where I could rather dare to go to sleep. And wake up every time when personally called up. It worked each time. .
Which is a remaining archaic animal instinct not shared and trained by everyone.
Namely to be able to be aware and on alert also asleep. with just 1/4 ear 1/8 ey open all the time
which is autentic higher and deep, transcendental meditation.
The cat can rest and sleep with all those irrelevant noises around but wakes up immediately on alert if only a tiny mouse is rustling behind the walls. Being not so mysterious you see, but rather quite practical.
It suddenly worked for me agan with Richard Lindzen in town in Domus Academica.
Lindzens teorem
:/”Where there is science there is not consensus,… and where there is consensus, there is not SCIENCE…. PERIOD!”/:
Which is a most remarkable, fameous BANG! experiment for the audience and camera, in the fameous grand festival auditory in DOMVS ACAQDEMICA at the Royal Frederiks,
That is especially known from earlier for its fameos BANG experiments. Such as professwor Kristian Birkelands sun. Planned to be a noiseless gun against the swedes, a pioneering electromagnetic linear accelerator. Today known as “A railgun”.
They were sitting for 1/4 hour at least curbing their electrifying machines loading up big chests of stanniol & glass plate condensers with poor Professor Birkeland teaching: “This is not dangerous, it will be all fine Don`t worry, be happy…., you will hardly see and hear anything, This is very scientific and stealthy against the swedes, , smile smile.”
With all the ambassadeurs and noblesses even the French artilleries seated in the AVDITORIVM. .
But then an ulucky shortcut at very high charge and voltage, …. Birkelands sun stood up over the railgun, as .. a large electric arch spread out in a high frequency electyromagnetic field by a really big BANG!
All the people were dazzled , light went out, Panick broke out, Known as Birkelands scandal in the old festival auditory. Royal Frederiks
So whenever you go there, expect something really dazzling and educating.
Richard Lindzens teorem from his fameous lecture in DOMVS ACADEMICA is Lindzen on his very best and most autentic, it is really all you have to know about him.
===================000
I brought Otto Øgrims fameous “Størrelser enheter og symboler i fysikken” magnitudes units and symbols in physics 46 pages to the next Climate- surrealist meeting Oslo downtown and handed it over to Prof. Emeritus Jan Erik Solheim at his table..
” Look here, the consensus- book!” I said
It was not opened.
It was the local university small catechismj of the CGS and SI systems with formulas and definitions for his generation. Whithout it and for whoever is giving a snobbish damnn to it, there hardly is physics or astrophysics of any kind going on.
I told around further of the fameous CRC Handbook of chemistery and physics, the so- called and fameous “Rubber- bible” measuring 6″ x 8″ x 2 1/2″ .
Wherever “The bible” is seen on the accute writing desks and in the library rather laying on the table, there is SCIENCE.
But, where the CRC Handbook is not seen and not even known, there is not SCIENCE.
That is for sure, and easily explained
Surrealism is wherever such basic catechisms and bibles of scientific con- sensus and con- ventions are given a damn to.
Thus when the very conscept of Con- Sensus is being stolen- occupied and given anti facultary meanings for politicalo propagandistic sales promotion and career purposes, then we can draw some elementary conclusions.
That is all to be said about Richard Lindzen after all.
I came to that conclusdion asleep allready. as I was highly trained from before on Peculiar Professors..
Eli Rabett says
FROG is an improved basis for measuring the pulse width of fs lasers, and yes, it works. Rick Trebino had a small business selling instruments. The problem is that the LANL guys were selling doubt.
The same thing here.
Paul Pukite (@whut) says
Wow, thanks Joshua! That was the context for the correction saga? “The Frequency-Resolved Optical-Gating (FROG) technique has revolutionized our ability to measure and understand ultrashort laser pulses”

BTW, this brings up an old pet peeve of mine — that autocorrelation of signals is often reflexively assumed to be a bad thing in many papers. I note that Gavin mentions auto-correlation above. The fact of the matter is that any time-series that is not random shows autocorrelation. Note that Scafetta in the past has tried to attribute climate change to deterministic factors, such as sunspot forcing. The ensuing signal would show strong autocorrelation. Yet in this new paper, Scafetta is entering the realm of uncertainty where the ensembles have a stochastic (not deterministic) component. Gavin is thus correct that Scafetta needs to account for autocorrelation, as there is likely to still be a deterministic component to the signal. The existential question is what is that component and what causes it?
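A minimal sketch of that point (purely hypothetical series, nothing to do with any particular climate dataset): a deterministic component such as a cycle produces a large lag-1 autocorrelation, while white noise does not.

```python
# Illustration only: lag-1 autocorrelation of white noise vs. a noisy cycle.
import numpy as np

def lag1_autocorr(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(1)
n = 1200
noise = rng.standard_normal(n)
cycle_plus_noise = np.sin(2 * np.pi * np.arange(n) / 132) + 0.2 * noise

print("white noise:        ", round(lag1_autocorr(noise), 2))             # near 0
print("cycle + small noise:", round(lag1_autocorr(cycle_plus_noise), 2))  # large (~0.9)
```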
If anyone is interested in autocorrelation, ChatGPT (like it or not) is great as an informative learning tool. I gave it this prompt; click the link to see how it responded:
https://chat.openai.com/share/3d0d01a7-8cf1-4718-9e1c-d43626316bfb
Susan Anderson says
Fabulous, if sad. (Have to admit I skimmed some of the later steps.)
Are my eyes deceiving me, or do too many people here lack the ability to recognize irony or humor?
tamino says
I expect Scafetta to write garbage like this; it’s what he does. But perhaps a bigger issue for GRL is to address how this paper got accepted in the first place. Maybe the referees who approved it should be banned from acting as reviewers in the future, and their names and misdeeds made public.
Carbomontanus says
Yes, mee also go for AD-HOMINEM- methods in such, and in similar cases. , wherever there is SENSOR behind the Iron Curtain or Big Arsh, shortened B.A , sitting secretly in social or media keye- positions
It is a traditional and most efficient method of getting rid of Trolls , Light onto them and guess and publish their full name and adress.
It is called corrupt or mafiotic wherever the traditional, quite ugly conscept of Trolls is less known,
Piotr says
Re: tamino: Sept.22
The question could also be asked of the reviewers of the response – should they have picked up on the fact that he: “spends most of the response arguing incorrectly about the accuracy of the ERA5 surface temperatures – something that isn’t even in question ” Which is the common straw-man argument defense – aimed at distracting from the lack/weakness of the response to the major points of criticism.
Obviously, missing that is still much less of a problem than the original referees not finding a problem with the paper itself – and I think there should be some responsibility for their missing all the problems with the Scafetta paper, which calls into question their expertise and/or scientific integrity. And banning them from further reviewing only in the journal in question does not warn other journals about them – so in the most egregious cases, publishing their names alongside the critique of the paper they approved would be a proper warning to the scientific community and the publishers of other journals.
The goal for keeping the identity of the reviewers confidential should be to preserve their objectivity by protecting them from the pressure during the peer review process and retribution after the rejection
of the papers, not to hide their errors that question their expertise and/or integrity.
But if there are formal problems with that (threat of legal action over disclosing their identity to the public at large), there should at least be a provision for telling other publishers who the reviewer was in such an obvious case. But then again, the same referees offended by having their names made public would bitterly complain about the secrecy of the process that blackballed them.
Ray Ladbury says
Actually, it doesn’t surprise me too much. Scafetta likes analysis with lots of bells and whistles and moving parts. Unless a reviewer is familiar with the subject matter AND the analysis technique, they may get lost and simply decide to err on the side of letting the work pass on to his peers. And then the craptastic piece gets shredded, as it should.
Journals have a very difficult time getting mathematically intensive papers reviewed – not a lot of reviewers will be up to the task, and often those that are will be too busy with their own research to bother with a review. Most journals don’t have a deep bench of reviewers.
Karsten V. Johansen says
In a newspaper or journal, at least here in Scandinavia and, as far as I know, also in Germany and Britain, it would be considered absurd if critical comments on an article couldn’t be published until the author(s) of the criticized article had replied. In fact, such a practice is seen as intolerable and condemned as a kind of editorial partisanship, or even corruption.
Rory Allen says
If you need evidence to see how this kind of bad science is picked up and used by the climate science denial industry, look at an article in ‘The Daily Sceptic’ of 3 November 2021. Headlined ‘Dodgy Climate Models should be Discarded’, the article continues:
‘A devastating indictment of the accuracy of climate models is contained in a paper just published by the highly credentialed Physicist Nicola Scafetta from the University of Naples. Professor Scafetta analysed 38 of the main models and found that most had over-estimated global warming over the last 40 years and many of them should be “dismissed and not used by policymakers”.
…
At the heart of the climate model problem is determining the equilibrium climate sensitivity (ECS). This is defined in climate science as the increase in the global mean surface temperature that follows a doubling of atmospheric CO2. Nobody knows what this figure is – the science for this crucial piece of the jigsaw is missing, unsettled you may say. So guesses are made and they usually range from 1C to as high as 6C. Models that use a higher figure invariably run hot and Professor Scafetta has proved them to be the least accurate in their forecasts.
…
More detailed research into this by Professor William Happer at Princeton has led him to conclude that a very low ECS, suggesting gentle if any warming, occurs when CO2 rises above the current atmospheric level of 420 parts per million. Far from being harmful, the extra CO2 is highly beneficial for plant growth and food. Slightly warmer temperatures can also be desirable. Homo Sapiens started in the tropics and only ventured out when the ice age started to lift – we like being warm and far more people die of the cold than the heat.
Failing to discuss the science behind climate change and simply blaming it all on humans is not science, it is anti-science, leading to faith-based green ideology. A plea for a more scientific approach was made two years ago by Professor Scaffeta along with a group of over 70 Italian scientists, including many distinguished academics, in a direct plea to Italian politicians. They stated that the human responsibility for climate change observed in the last century was “unjustifiably exaggerated and catastrophic predictions are not realistic”. Signatories of the letter included Antonino Zichichi, Professor emeritus of Physics and the discoverer of nuclear antimatter, and Renato Angelo Ricci, also an emeritus Professor of Physics and former President of the Italian Society of Physics. In total it was signed by 48 science professors. Needless to say it went unreported in the mainstream media at the time.’
Ah, the mainstream media! I am surprised they didn’t bring in the WEF. The Daily Sceptic is the go-to source for many climate deniers. You can prove Scafetta wrong all you like: the public doesn’t read the journals, it goes to online trash dumps like this for its information. I agree that the work of refutation has to be done anyway.
Mal Adapted says
Heh. ‘The Daily Sceptic’ is rated by the Media Bias Chart as “Skews Right” to “Hyper-Partisan Right” on the Bias axis, and “Wide Variation in Reliability” to “Contains Misleading Info” (the least-reliable category) on the Reliability axis.
The Media Bias/Fact Check website says:
Overall, we rate the Daily Sceptic a far-right biased quackery level pseudoscience website that frequently publishes false and misleading information regarding covid-19 and science in general.
Sounds like an “online trash dump”, alright! How do we compete with that?
Barton Paul Levenson says
Dr. Schmidt,
Thank you for taking the time and trouble to publish the refutation of Scafetta. I know you’d much rather be working on your own interests and your own papers, but we need this kind of thing. Thank you for being involved.
Susan Anderson says
Indeed!
Gavin says
thanks!
Mal Adapted says
My profuse thanks also, Gavin, for keeping us up to date on not only the right way to do climate science, but the wrong way as well!
Silvia Leahu-Aluas says
Agree, thank you Gavin.
Has anybody been fired at GRL for publishing incorrect data and conclusions? At this point in time, no “alternative facts” in climate science or anything are allowed.
Russell Seitz says
One way to enliven those who practice to disinform by publishing in predatory or pay-for-play journals is to drop in on them as they celebrate their work and dismiss their critics on YouTube.
Watch Willie Soon and Anthony Watts toss the Scafetta “What Retraction ?” climate ball around at
https://www.youtube.com/watch?v=HTq8kNDk3JA
Where, unlike at WUWT, Watts can’t cut or censor comments at will.
Susan Anderson says
re “consensus” (argument above, but this is so ridiculous I’m giving it a new item). Lemme see if I have this right:
If a few scientists disagree it must be wrong/dubious.
If most scientists agree that makes it wrong too, especially if it’s a large majority.
Words fail …
=====
Thing about reality is it’s, well, real.
Carbomontanus says
Yes!
Such is dia- lectic materialism.
The contra- diction is there what states the scientific proof, creates the matter, mooves and changes it, and anihilates it again.
Moral:
Better take to reason & experience and reallize who they really are at theiir bottoms.
Guest (O.) says
I think there is a fundamental problem in today’s science, regarding (internal and external) communication.
To have browser-plugins for getting updates is better than not to have such a plugin.
But I think the problems go much deeper.
The science(s) need a more formal approach. In any scientific work there are propositions that are shown to be correct or incorrect (falsification).

The tool for handling this situation is predicate logic / first-order logic. This is inherent, but not used in a formalized way. Instead, domain language (as an overloading of natural language) is used.
For transporting understanding this is fine. But for automated checking of propositions and relating to other peoples research, a more formal approach would be helpful.
The hype technology of LLMs, with their hallucinations, is nothing I would recommend here (or only as partial help in certain scenarios).
The well-understood first-order logic, formalized into a data format and provided via code versioning systems (e.g. git clone), which can be related to locally and changed/adapted (add propositions, or show where something went wrong in the arguments of paper doi-xxx), would be quite easy to automate.
Need an update? Then git pull
Relating to other people’s work in such a based-on-logic and related-to-by-references way would allow automated finding of research contents and propositions easily.
Of course it must be integrated with empirical findings, which in climate science would be temperature measurements etc.

But using a browser plugin by hand to look for some updated text looks like a way outdated approach.

To get the research findings into the formalized description and vice versa, maybe some LLM-based AI stuff (but only with explainable-AI methods) could help here (formalize/unformalize, or to-logic and to-domain-language).

A Gödelisation of the terms of a domain could be done, as for example WikiData does with IDs for terms.
I wonder, why this has not already been used in science.
That approach would not only allow faster and better science-internal communication, but also easier communication with the outside world.
Finding the again-and-again-and-again served agw-deniers talking points would be easy.
Just do a lookup for a group of propositions that they make, and if that group of propositions has already been falsified, you get the link to the research. The laypersons (including journalists) may then pick that up and read it.

That would be much easier than having an unclear base of many research papers, where it is not easy to find what propositions are addressed in which papers and in which context.
If not-already-falsified agw-deniers claims are found, this points into the direction of potential research questions.
It’s just too easy for the deniers to mislead people (some of the deniers are misled themselves, as they are not domain experts, even though they think they are … Dunning-Kruger effect).
If such a formal system existed (don’t expect proprietary solutions by some publishers to be the answer), and if its usage were easy enough for non-domain-experts, then one could insist that participation in the discourse be accepted only through such a formal system.

Free data, free data formats, free tools (free software, GPL’ed), freely cloning and relating to the “official” (aka science ‘consensus’) strain of thought, so anybody could just pick the official arguments and add their own, but in that formalized way.

Then the deniers’/laypersons’ claim of “we are the sceptics and are excluded from the discourse” would not work, as they can attach their own propositions, but already-falsified claims would not be accepted – not even in their own attempts, as the tools show that these claims have already been falsified.
And journalists could not come up with “oh, we thought this was a real expert” when they asked the deniers, who do math-magic to lure the journalists into so-called scepticism.

The problem of reproducibility, which came up maybe 10 or 20 years ago, has not been solved, but it is at least being addressed now in the sciences. The problem of an ever-growing body of research findings (which have not necessarily been reproduced), together with the growing number of pseudo-experts and deniers who lure people into wrong assumptions, can’t be resolved just “by hand”.

There must be a more automated approach to science, and doing it more formally, with built-in logic-checking of propositions, is IMO indispensable.
And I think the technology developed enough to achieve such a thing.
It just must be done. And it can only be done by the science community.
Not big-tech, not the publishers or other interest groups, which would have somewhat different interests in the game.
Paul Pukite (@whut) says
O. described well many of the challenges of applying LLMs. I have a background and experience in applying first-order logic (FOL) to AI and in organizing and querying from knowledgebases. We have to first realize that LLMs use a completely different approach than a declarative FOL architecture. The outputs of an LLM are based on a nonlinear combination of inputs that are trained by a neural network. What that means is that an LLM may be able to find B given A, but will not necessarily come up with the inverse result of A given B. That’s because of the impossibility of creating a generalized inversion algorithm for a nonlinear transformation. However, a declarative FOL formulation will do the forward and inverse just as easily if the knowledge and rules are structured properly.
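A minimal sketch of that difference (toy facts invented for illustration, not the earth-science knowledgebase mentioned below): when knowledge is stored declaratively as relations, forward and inverse queries use exactly the same machinery, whereas a learned one-way mapping offers no such guarantee.

```python
# Toy declarative fact base, for illustration only: the same query function
# answers both "children of X" (forward) and "parents of Y" (inverse).
facts = {("parent", "ann", "bob"), ("parent", "ann", "sue")}

def query(rel, x=None, y=None):
    """Return all (x, y) pairs in the relation, with either argument left free."""
    return [(a, b) for (r, a, b) in facts
            if r == rel and (x is None or a == x) and (y is None or b == y)]

print(query("parent", x="ann"))   # forward query: children of ann
print(query("parent", y="sue"))   # inverse query: parents of sue, same machinery
```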
ChatGPT agrees with me: https://chat.openai.com/share/2f26bc23-cf8f-46b9-814f-039a0165471d
We had a project to create a knowledgebase of earth science information that could be applied to various tasks. Unfortunately, it takes a lot of maintenance compared to an LLM.
Guest (O.) says
OK, I found a nice example for LLM logic-failures:
“Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?”
https://benchmarks.llmonitor.com/sally
Regarding the energy efficiency of the so called “AI”, you might have a look at this article:
Microsoft Needs So Much Power to Train AI That It’s Considering Small Nuclear Reactors
https://futurism.com/the-byte/microsoft-power-train-ai-small-nuclear-reactors
Paul Pukite (@whut) says
“Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have — not counting herself?” When it fails, it’s because of self-counting in this case.
Kevin McKinney says
If by “self-counting,” you also mean the triple count of sisters that results when “each brother has two sisters” is interpreted as an exclusive category, then yes.
Paul Pukite (@whut) says
The way this works is that you have to think like a software programmer would. It’s not excellent at solving stumpers like this unless it’s been exposed to them before. So in this case it’s essentially counting all the sisters, and you have to add a rule that says to subtract a sister if the subject sister (Sally) needs to be excluded from the final count. This is the reason that LLMs are so good at producing workable software source code: if you specify the requirements and constraints completely, it will do exactly as it is told.
Carbomontanus says
@ Paul pukite @ whut
I only had to think of brotherhood and sisterhood within the famework of the holy matrimony Hr Pu8kite. . Then it showed quite elementary. But not everyone seems to be aquainted to that.
If any of the brothers had sisters that were not Sallys sister and any brother a sister that was not any next brothers sister as well, then they should have been called half brothers and half sisters.
As the conscept of half- siblings is not given here, the solution is very simple.
You will be unqualoified and confused and commit severe errors, confusion and contamination of samples, in human and animal biolotgical genetics also, as in rational computer programming, if that is not exactly defined and understood.
And that is the problem here.
Carbomontanus says
I have seen a lot of answers.
Since sally has 3 brothers, they all must have the same 2 parents. All brothers have the same sister Sally. Then we need only one more sister namely sallys one sister. all of the same 2 parents.
How can that be so difficult?
Kevin McKinney says
Apparently AI can make the difficult easy, and the easy difficult.
Carbomontanus says
@ all and everyone
As we ought to be discussing Scafettas and other presumably paralell SAGAs here,…..
I come to think that we also ought to have some common PENSVM and systematics for it, .
or some possible, systematic con- sensus with axioms, defrinitions, formulas, and examples. , else no SCIENCE.
That is actually delivered quite well on Wikipedia, it seems, under the keyeword para- sciences.
With several examjples over a wide scale and with quite good definitions and analytic criteria. So we can search for ourselves and make up our own minds.
I show the asirands behind the barn or out in the snow or throw them to sea ( one can kiddle- hauwl them also) or simply inform obvious rascals of a quite much hotter place with their owner in the high seat.
See also Pseudosciences, and cross- examine it also in several languages, in as many as you can, whenever using Wikipedia.,
Those 2 articles, on Psevdosciences and on Parasciences on Wikipedia are quite valuable and informative.
Para- and Psevdosciences belong on museum indeed, as all good museums have got special chambers for it. Where there may be a lot to be studied.
Randomguy says
Have you ever considered that Scafetta isn’t actually “wedded to his conclusions” in the sense that he would believe them? That, instead, he is just satisfying a demand for links to ‘studies’ and ‘papers’ that can be injected into the discussion to stir further confusion, so to speak?
That he’s just another “drug dealer” of the climate catastrophe who “just cares for his customers” and satisfies their demand for distraction and consolation?
Paul Pukite (@whut) says
More garbage by Scafetta in this paper with a November publication date, “Empirical assessment of the role of the Sun in climate change using balanced multi-proxy solar records”, Geoscience Frontiers, Volume 14, Issue 6, 2023, https://doi.org/10.1016/j.gsf.2023.101650.
shorter Scafetta: “I’m making up stuff transformed from sunspot data that kinda reflects the AGW signal and then ascribing 80% of the AGW signal to the made-up stuff.”
RJ says
pretty disturbing that this kind of stuff makes it through the peer review process. JGR has three reviewers, does GRL?
Atomsk's Sanakan says
Beyond me why Scafetta’s work is being cited in “State of the Climate in 2022” as if the climastrologer’s work is in any way credible.
“Several GCMs also exceed the likely range of estimates of climate sensitivity (Forster et al. 2021)—the global surface warming response to a doubling of atmospheric carbon dioxide—which in turn contributes to overestimates of historical warming (Scafetta 2023).
[…]
Scafetta, N., 2023: CMIP6 GCM ensemble members versus global surface temperatures. Climate Dyn., 60, 3091–3120”
https://web.archive.org/web/20231111214303/https://ametsoc.net/sotc2022/SOTC2022_FullReport_final.pdf