Claims this week that climate scientists have "5-sigma" certainty for their findings are pure hype and false advertising. Lubos Motl explains at his website The Reference Frame in "Five-sigma proof" of man-made climate change is complete nonsense. Excerpts in italics with my bolds.
Notorious climate fearmonger Gavin Schmidt tweeted the following:
40 years since:
– the Charney report
– Hasselmann’s paper on detection & attribution
– the satellite era
https://rdcu.be/bowzn @NatureClimate
Put it together and what have you got?
Greater than 5σ detection of anthropogenic climate change.
He picks about three scientific teams and praises them for reaching the "gold standard" of science (which is how the journalists hype it) – a five-sigma proof of man-made global warming. The signal-to-noise ratio has supposedly reached the critical five-sigma threshold, so man-made climate change is proven at the same level of certainty we demanded, for example, before CERN's particle physicists could announce the discovery of the Higgs boson.
It sounds great except it's complete nonsense. When we discover something at five sigma, it means something that clearly cannot be the case in climatology. When we discover new physics at five sigma, it means that we experimentally rule out a well-defined null hypothesis at a confidence level of about 99.99997% – a p-value near 3×10⁻⁷. Note that a "well-defined null hypothesis" is always needed before one can even talk about "five sigma".
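The translation between a sigma level and a p-value in the paragraph above is just Gaussian tail arithmetic. A minimal sketch (pure standard library; the one-sided convention used for discovery claims in particle physics):

```python
# Sketch: the p-value implied by a 5-sigma exclusion of a well-defined
# null hypothesis, computed from the standard normal tail.
import math

def gaussian_tail(z):
    """One-sided tail probability P(Z > z) for a standard normal Z."""
    return 0.5 * math.erfc(z / math.sqrt(2))

p_one_sided = gaussian_tail(5)   # about 2.87e-7
confidence = 1 - p_one_sided     # about 99.99997%
print(f"5-sigma one-sided p-value: {p_one_sided:.3e}")
print(f"implied confidence level:  {confidence:.7%}")
```

The whole calculation is meaningful only because the null hypothesis (no new particle, background-only) is precisely defined; that is exactly the ingredient the climate claim lacks.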
In the case of the man-made climate change discussion, there is clearly no such “well-defined null hypothesis”. In particular, when Schmidt and others discuss the “signal-to-noise ratio”, they don’t really know what part of the observed data is “noise” and how strong it should be. The assumption must be that the “noise” is some natural variability of the climate. But we don’t really have any precise enough and canonical enough model of the natural variability. The natural variability is undoubtedly very complex and has contributions from lots of natural and statistical phenomena and their mixtures. Cloud variations, irregular seasons, solar variability, volcanoes, even earthquakes, annual ocean cycles, decadal ocean cycles, centennial ocean cycles, 1500-year ocean cycles, irregularities in tropical cyclones, plants’ albedo variations, residuals from a way to compute the average, butterfly wings in China, and tons of other things.
So we can't really separate the measured data into "signal" and "noise". Even if we knew the relevant definition of the natural noise, we just don't know how large it was before industrialization began. The arguments about the "hockey stick graph" are the clearest tangible proof of this statement. Some papers show the variability in 1000-1900 AD as 5 times larger than others do – so what one paper calls "5 sigma" could very well be "1 sigma" by another's estimate.
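The sensitivity of the claimed significance to the assumed noise level is simple arithmetic. A toy illustration (the numbers are illustrative only; the factor of 5 mirrors the spread between reconstructions mentioned above, not a measurement):

```python
# Illustrative arithmetic only: how a claimed significance rescales with
# the assumed natural-variability sigma. If a competing reconstruction
# implies a sigma 5x larger, a "5-sigma" detection becomes 1 sigma.
signal = 5.0   # claimed signal, in units of the low variability estimate
for scale in (1, 2, 5):
    sigma = 1.0 * scale   # natural variability under a rival estimate
    print(f"sigma x{scale}: significance = {signal / sigma:.1f} sigma")
```

In hard sciences the denominator (the noise model) is pinned down independently of the claim; here it is itself the contested quantity.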
Just as before Schmidt's tweet, it remains perfectly possible that all the data we observe may be labeled "noise" and attributed to natural causes. There may obviously be natural causes whose effect on the global mean temperature and other quantities is virtually indistinguishable from the effect expected from man-made global warming.
If people observed a strong high-frequency correlation between changes in CO2 and temperature, close agreement between those two functions of time could become strong evidence of the anthropogenic greenhouse effect. But that is clearly impossible, because we surely can't measure the effect of the tiny seasonal variations of the CO2 concentration – those variations are just a few ppm, while the observed seasonal temperature changes are hugely pronounced and driven mostly by things other than CO2 (especially by the Sun directly).
So the growth of the CO2 was almost monotonic – and in recent decades, almost precisely linear. Nature may also add lots of contributions that change almost monotonically or linearly for a few decades. So the summary is that Gavin Schmidt and his fellow fearmongers are trying to make the man-made climate science look like a hard science – perhaps even as particle physics – but it is not really possible for the climate science to be analogous to a hard science. The reason is that particle physics and hard sciences have nicely understood, unique, and unbelievably precise null hypotheses that may be supported by the data or refuted; while the climate science doesn’t have any very precise null hypotheses.
At most, the attribution of the climate change is as messy a problem as the attribution of the discrepancies between Hubble's constant obtained from various sources. It's just not possible to make any reliable enough attribution, because the number of parameters that we may adjust in our explanations is larger than the number of independent values, useful for the attribution, that we may obtain from observations. In effect, the task to "attribute" is an underdetermined set of equations: the number of unknowns is larger than the number of known conditions or constraints that they obey (i.e. than the number of observed relevant data).
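The underdetermination argument can be made concrete with a toy example. In the sketch below, everything is hypothetical: one observed warming trend, two candidate causes with unknown coefficients a ("anthro") and b ("natural"). One equation, two unknowns, so infinitely many attributions reproduce the observation exactly:

```python
# Toy underdetermined "attribution": any split of the observed trend
# between the two candidate causes fits the single observation exactly.
# All numbers are illustrative, not climate data.
observed_trend = 0.18   # degrees C per decade (made-up number)
anthro_forcing = 1.0    # hypothetical normalized forcing
natural_forcing = 1.0   # hypothetical normalized forcing

# Three of the infinitely many (a, b) pairs that match the observation:
candidate_attributions = [(a, observed_trend - a) for a in (0.0, 0.09, 0.18)]
for a, b in candidate_attributions:
    reconstructed = a * anthro_forcing + b * natural_forcing
    assert abs(reconstructed - observed_trend) < 1e-12
    print(f"anthro={a:.2f}, natural={b:.2f} -> trend {reconstructed:.2f}")
```

The data alone cannot distinguish a 100% anthropogenic attribution from a 0% one; extra constraints have to come from the contested noise model itself.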
Gavin Schmidt and everyone else who tries to paint hysterical climatology as a hard science analogous to particle physics is simply lying. Particle physics is a hard science and “five sigma proofs” are possible in it, climatology is a soft science and “five sigma proofs” in it are just marketing scams, and cosmology is somewhere in between. We all hope that cosmology will return closer to particle physics but we can’t be sure.
Update March 1, 2019
Ross McKitrick posted at Climate Etc. Critique of the new Santer et al. (2019) paper
H/T Philip Dean
“I will discuss four aspects of this study which I think weaken the conclusions considerably: (a) the difference between the existence of a signal and the magnitude of the effect; (b) the confounded nature of their experimental design; (c) the invalid design of the natural-only comparator; and (d) problems relating “sigma” boundaries to probabilities.”
“The authors’ conclusions depend critically on the assumption that their “natural” model variability estimate is a plausible representation of what 1979-2018 would have looked like without greenhouse gases. The authors note the importance of this assumption in their Supplement.”
“Thus, it seems to me that the lines in Figure 1 are based on comparing an artificially exaggerated resemblance between observations and tuned models versus an artificially worsened counterfactual. This is not a gold standard of proof.”
“I’ll just point out that if time series data have unit roots they are nonstationary and you can’t use them in an autoregression because the t-statistics follow a nonstandard distribution and Gaussian (or even Student’s t) tables will give seriously biased probability values.”
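McKitrick's unit-root point can be demonstrated with the classic "spurious regression" experiment: regressing one random walk on an independent random walk routinely produces t-statistics that look significant under Gaussian tables, while the same regression in first differences behaves normally. The sketch below uses simulated series only (not climate data) and a hand-rolled OLS so it needs nothing beyond the standard library:

```python
# Spurious regression sketch: two INDEPENDENT random walks (unit-root,
# nonstationary series) regressed on each other often yield large OLS
# t-statistics, because the t-statistic does not follow the usual
# distribution. First-differencing restores stationarity.
import math
import random

def ols_t_stat(y, x):
    """Slope t-statistic from a simple OLS regression of y on x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx
    alpha = my - beta * mx
    resid = [yi - alpha - beta * xi for xi, yi in zip(x, y)]
    s2 = sum(e ** 2 for e in resid) / (n - 2)   # residual variance
    return beta / math.sqrt(s2 / sxx)

rng = random.Random(0)
n = 500
x, y = [0.0], [0.0]
for _ in range(n - 1):                 # build two independent random walks
    x.append(x[-1] + rng.gauss(0, 1))
    y.append(y[-1] + rng.gauss(0, 1))

t_levels = ols_t_stat(y, x)            # often spuriously "significant"
dx = [x[i + 1] - x[i] for i in range(n - 1)]   # stationary differences
dy = [y[i + 1] - y[i] for i in range(n - 1)]
t_diffs = ols_t_stat(dy, dx)           # typically small, as it should be
print(f"levels t-stat:      {t_levels:+.2f}")
print(f"differences t-stat: {t_diffs:+.2f}")
```

This is the same remedy McKitrick applies below: move to first differences, and the apparent significance of the nonstationary regressor tends to collapse.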
“I ran Phillips-Perron unit root tests and found that anthro is nonstationary, while Temp and natural are stationary. . . A possible remedy is to construct the model in first differences. . . The coefficient magnitudes remain comparable but—oh dear—the t-statistic on anthro has collapsed from 8.56 to 1.32, while those on natural and lagged temperature are now larger.”
“The fact that in my example the t-statistic on anthro falls to a low level does not “prove” that anthropogenic forcing has no effect on tropospheric temperatures. It does show that in the framework of my model the effects are not statistically significant.”
“In the same way, since I have reason to doubt the validity of the Santer et al. model I don’t accept their conclusions. They haven’t shown what they say they showed. In particular they have not identified a unique anthropogenic fingerprint, or provided a credible control for natural variability over the sample period. Nor have they justified the use of Gaussian p-values. Their claim to have attained a “gold standard” of proof are unwarranted, in part because statistical modeling can never do that, and in part because of the specific problems in their model.”
See also: The Limitations of Climate Science