Greta’s Spurious “Carbon Budget”

Many have noticed that recent speeches written for child activist Greta Thunberg base the climate “emergency” on the rapidly closing “carbon budget”. This post summarizes how alarmists define the so-called carbon budget, and why their claims to its authority are spurious. In the text and at the bottom are links to websites where readers can access both the consensus science papers and the analyses showing the flaws in the carbon budget notion. Excerpts are in italics with my bolds.

The 2019 update on the Global Carbon Budget was reported in a Future Earth article entitled Global Carbon Budget Estimates Global CO2 Emissions Still Rising in 2019. The results were published by the Global Carbon Project in the journals Nature Climate Change, Environmental Research Letters, and Earth System Science Data. Excerpts below are in italics with my bolds.

History of Growing CO2 Emissions

“Carbon dioxide emissions must decline sharply if the world is to meet the ‘well below 2°C’ mark set out in the Paris Agreement, and every year with growing emissions makes that target even more difficult to reach,” said Robbie Andrew, a Senior Researcher at the CICERO Center for International Climate Research in Norway.

Global emissions from coal use are expected to decline 0.9 percent in 2019 (range: -2.0 percent to +0.2 percent) due to an estimated 10 percent fall in the United States and a 10 percent fall in Europe, combined with weak growth in coal use in China (+0.8 percent) and India (+2 percent).


Shifting Mix of Fossil Fuel Consumption

“The weak growth in carbon dioxide emissions in 2019 is due to an unexpected decline in global coal use, but this drop is insufficient to overcome the robust growth in natural gas and oil consumption,” said Glen Peters, Research Director at CICERO.

“Global commitments made in Paris in 2015 to reduce emissions are not yet being matched by proportionate actions,” said Peters. “Despite political rhetoric and rapid growth in low carbon technologies such as solar and wind power, electric vehicles, and batteries, global fossil carbon dioxide emissions are likely to be more than four percent higher in 2019 than in 2015 when the Paris Agreement was adopted.

“Compared to coal, natural gas is a cleaner fossil fuel, but unabated natural gas merely cooks the planet more slowly than coal,” said Peters. “While there may be some short-term emission reductions from using natural gas instead of coal, natural gas use needs to be phased out quickly on the heels of coal to meet ambitious climate goals.”

Oil and gas use have grown almost unabated in the last decade. Gas use has been pushed up by declines in coal use and increased demand for gas in industry. Oil is used mainly to fuel personal transport, freight, aviation and shipping, and to produce petrochemicals.

“This year’s Carbon Budget underscores the need for more definitive climate action from all sectors of society, from national and local governments to the private sector,” said Amy Luers, Future Earth’s Executive Director. “Like the youth climate movement is demanding, this requires large-scale systems changes – looking beyond traditional sector-based approaches to cross-cutting transformations in our governance and economic systems.”

Burning gas emits about 40 percent less CO2 than coal per unit energy, but it is not a zero-carbon fuel. While CO2 emissions are likely to decline when gas displaces coal in electricity production, Global Carbon Project researchers say it is only a short-term solution at best. All CO2 emissions will need to decline rapidly towards zero.

The Premise: Rising CO2 Emissions Cause Global Warming

Atmospheric CO2 concentration is set to reach 410 ppm on average in 2019, 47 percent above pre-industrial levels.

Glen Peters on the carbon budget and global carbon emissions is a Future Earth interview explaining the carbon budget notion. Excerpts in italics with my bolds.

In many ways, the global carbon budget is like any other budget. There’s a maximum amount we can spend, and it must be allocated to various countries and various needs. But how do we determine how much carbon each country can emit? Can developing countries grow their economies without increasing their emissions? And if a large portion of China’s emissions come from products made for American and European consumption, who’s to blame for those emissions? Glen Peters, Research Director at the Center for International Climate Research (CICERO) in Oslo, explains the components that make up the carbon budget, the complexities of its calculation, and its implications for climate policy and mitigation efforts. He also discusses how emissions are allocated to different countries, how emissions are related to economic growth, what role China plays in all of this, and more.

The carbon budget generally has two components: the source component, what’s going into the atmosphere; and the sink component, what’s more or less going out of the atmosphere.

So in terms of sources, we have fossil fuel emissions: we dig up coal, oil, and gas, burn them, and emit CO2. We have cement, a chemical reaction which emits CO2. That’s one important component on the source side. We also have land use change, i.e. deforestation: we’re chopping down a lot of trees, burning them, using the wood products and so on. Then on the other side of the equation, the sink side, we have some carbon coming back out of the atmosphere, in a sense. The land sucks up about 25% of the carbon that we put into the atmosphere, and the ocean sucks up about 25%. So for every ton we put into the atmosphere, only about half a ton of CO2 remains there. In a sense, the oceans and the land are cleaning up half of our mess, if you like.

The other half just stays in the atmosphere. Half a ton stays in the atmosphere; the other half is cleaned up. It’s that carbon that stays in the atmosphere which is causing climate change and temperature increases and changes in precipitation and so on.

The carbon budget is like a balance: something comes in and something goes out, and by mass balance the two have to be equal. So we take an estimate of how much carbon we have emitted by burning fossil fuels or by chopping down forests, and we estimate how much carbon has gone into the ocean or the land; meanwhile we can measure quite well how much carbon is in the atmosphere. We can add all those measurements together and compare the two totals; they should be equal. But they are not. And this is part of the science: determining whether we overestimated emissions, or over- or underestimated the strength of the land sink or the oceans. We can also cross-check with what our models say.
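Peters’ mass-balance description can be sketched numerically. The flux values below are round illustrative numbers of my own choosing, not GCP’s published figures:

```python
# A numerical sketch of the carbon budget mass balance Peters describes.
# Flux values are round illustrative numbers (GtCO2/yr), not GCP figures.
fossil_and_cement = 37.0   # fossil fuel burning plus cement
land_use_change   = 5.5    # deforestation and other land use change
land_sink         = 12.5   # uptake by the terrestrial biosphere
ocean_sink        = 9.5    # uptake by the ocean
atmos_growth      = 18.0   # measured growth of CO2 in the atmosphere

sources = fossil_and_cement + land_use_change
sinks_plus_growth = land_sink + ocean_sink + atmos_growth

# By mass balance the two totals should be equal; the residual is the
# "budget imbalance" that the science works to explain.
imbalance = sources - sinks_plus_growth
print(f"sources {sources}, sinks+growth {sinks_plus_growth}, "
      f"imbalance {imbalance:+.1f} GtCO2/yr")
```

With these illustrative numbers the two totals differ by 2.5 GtCO2/yr, which is the kind of residual the interview refers to.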

My Comment:

Several things are notable about the carbon cycle diagram from GCP. It claims the atmosphere gains 18 GtCO2 per year, and that this gain drives global warming. Yet estimates of emissions from burning fossil fuels and from land use combined range from 36 to 45 GtCO2 per year, i.e. 40.5 +/- 4.5. The combined uptake by the biosphere and ocean ranges from 16 to 25 GtCO2 per year, i.e. 20.5, also +/- 4.5. The relative uncertainty on emissions is thus 11 percent, while on natural sequestration it is 22 percent, twice as much.

Furthermore, the gross fluxes to and from the biosphere and ocean are presented as balanced, with no error range, while the diagram assumes the natural sinks/sources are not in balance but are taking up more CO2 than they release. Yet the IPCC reported that gross fluxes generally have uncertainties of more than +/- 20% (IPCC AR4 WG1 Figure 7.3). Applying that to land and ocean, the estimates range as follows (GtCO2 per year):

Land: 440, with uncertainty between 352 and 528, a range of 176
Ocean: 330, with uncertainty between 264 and 396, a range of 132
Nature: 770, with uncertainty between 616 and 924, a range of 308

So the natural flux uncertainty is 7.5 times the estimated human emissions of 41 GtCO2 per year.
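The arithmetic above can be laid out in a few lines (a sketch reproducing the post’s numbers; all values in GtCO2 per year):

```python
# Reproducing the uncertainty arithmetic from the comment above.
# All values in GtCO2 per year, taken from the text.

def mid_and_pct(lo, hi):
    """Midpoint and relative (half-range) uncertainty in percent."""
    mid = (lo + hi) / 2
    return mid, 100 * (hi - lo) / 2 / mid

e_mid, e_pct = mid_and_pct(36.0, 45.0)   # human emissions: 40.5, ~11%
s_mid, s_pct = mid_and_pct(16.0, 25.0)   # natural uptake:  20.5, ~22%

# Gross natural fluxes with the +/-20% uncertainty cited from IPCC AR4:
land, ocean = 440.0, 330.0
nature_range = 0.4 * (land + ocean)      # 20% each way -> 308
ratio = nature_range / 41.0              # vs ~41 GtCO2/yr human emissions
print(round(e_pct, 1), round(s_pct, 1), round(nature_range), round(ratio, 1))
# -> 11.1 22.0 308 7.5
```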

For more detail see CO2 Fluxes, Sources and Sinks and Who to Blame for Rising CO2?

The Fundamental Flaw: Spurious Correlation

Beyond the uncertainty of the amounts is a method error in claiming rising CO2 drives temperature changes. For this discussion I am drawing on work by chaam jamal at her website Thongchai Thailand. A series of articles there explain in detail how the mistake was invented and why it is faulty. A good starting point is The Carbon Budgets of Climate Science. Below is my attempt at a synopsis from her writings with excerpts in italics and my bolds.

Simplifying Climate to a Single Number

Figure 1 above shows the strong positive correlation between cumulative emissions and cumulative warming used by climate science and by the IPCC to track the effect of emissions on temperature and to derive the “carbon budget” for various acceptable levels of warming, such as 2C and 1.5C. These so-called carbon budgets then serve as policy tools for international climate action agreements and climate action imperatives of the United Nations. And yet, all such budgets are numbers with no interpretation in the real world, because they are derived from spurious correlations. Source: Matthews et al 2009

Carbon budget accounting is based on the TCRE (Transient Climate Response to Cumulative Emissions), which is derived from the observed correlation between temperature and cumulative emissions. A comprehensive explanation of an application of this relationship in climate science is found in the IPCC SR15 2018. This IPCC description is quoted below in paragraphs #1 to #7, where the IPCC describes how climate science uses the TCRE for climate action mitigation of AGW in terms of the so-called carbon budget. Also included are some of the difficult issues in carbon budget accounting and the methods used in their resolution.

It has long been recognized that the climate sensitivity of surface temperature to the logarithm of atmospheric CO2 (ECS), which lies at the heart of the anthropogenic global warming and climate change (AGW) proposition, was a difficult issue for climate science because of the large range of empirical values reported in the literature and the so-called “uncertainty problem” it implies.

The ECS uncertainty issue was interpreted in two very different ways. Climate science took the position that ECS uncertainty implies climate action must be greater than that implied by the mean value of ECS, to ensure that the higher possible values of ECS are accommodated. Skeptics argued that the large range means we don’t really know. At the same time, skeptics also presented convincing arguments against the assumption that observed changes in atmospheric CO2 concentration can be attributed to fossil fuel emissions.

A breakthrough came in 2009 when Damon Matthews, Myles Allen, and a few others almost simultaneously published almost identical papers reporting the discovery of a “near perfect” correlation (ρ≈1) between surface temperature and cumulative emissions {2009: Matthews, H. Damon, et al. “The proportionality of global warming to cumulative carbon emissions” Nature 459.7248 (2009): 829}. They had found that, irrespective of the timing of emissions or of atmospheric CO2 concentration, emitting a trillion tonnes of carbon will cause 1.0 – 2.1 C of global warming. This linear regression coefficient corresponding with the near perfect correlation between cumulative warming and cumulative emissions (note: temperature=cumulative warming), initially described as the Climate Carbon Response (CCR) was later termed the Transient Climate Response to Cumulative Emissions (TCRE).

Initially a curiosity, it gained in importance when it was found to predict future temperatures consistent with model predictions. The consistency with climate models was taken as a validation of the new tool, and the TCRE became integrated into the theory of climate change. However, as noted in a related post, the consistency likely derives from the assumption that emissions accumulate in the atmosphere.

Thereafter the TCRE became incorporated into the foundation of climate change theory, particularly in terms of its utility in the construction of carbon budgets for climate action plans for any given target temperature rise, an application for which the TCRE appeared to be tailor-made. Most importantly, it solved, or perhaps bypassed, the messy and inconclusive uncertainty issue in ECS climate sensitivity that remained unresolved. The importance of this aspect of the TCRE is seen in the 2017 paper “Beyond Climate Sensitivity” by prominent climate scientist Reto Knutti, in which he declared that the TCRE metric should replace the ECS as the primary tool for relating warming to human-caused emissions {2017: Knutti, Reto, Maria A. A. Rugenstein, and Gabriele C. Hegerl. “Beyond equilibrium climate sensitivity.” Nature Geoscience 10.10 (2017): 727}. The anti-ECS Knutti paper was not only published but received with great fanfare by the journal and by the climate science community in general.

The TCRE has continued to gain in importance and prominence as a tool for the practical application of climate change theory in terms of its utility in the construction and tracking of carbon budgets for limiting warming to a target such as the Paris Climate Accord target of +1.5C above pre-industrial. {Matthews, H. Damon. “Quantifying historical carbon and climate debts among nations.” Nature climate change 6.1 (2016): 60}. A bibliography on the subject of TCRE carbon budgets is included below at the end of this article (here).

However, a mysterious and vexing issue has arisen in the practical matter of applying and tracking TCRE-based carbon budgets: the remaining carbon budget puzzle {Rogelj, Joeri, et al. “Estimating and tracking the remaining carbon budget for stringent climate targets.” Nature 571.7765 (2019): 335-342}. It turns out that, midway in the implementation of a carbon budget, the remaining carbon budget computed by subtraction does not match the TCRE carbon budget for the latter period computed directly using the Damon Matthews proportionality of temperature with cumulative emissions for that period. As it turns out, the difference between the two estimates of the remaining carbon budget has a rational explanation in terms of the statistics of a time series of cumulative values of another time series, described in a related post.

It is shown that a time series of the cumulative values of another time series has neither time scale nor degrees of freedom and that therefore statistical properties of this series can have no practical interpretation.

It is demonstrated with random numbers that the only practical implication of the “near perfect proportionality” correlation reported by Damon Matthews is that the two time series being compared (annual warming and annual emissions) tend to have positive values. In the case of emissions we have all positive values, and during a time of global warming, the annual warming series contains mostly positive values. The correlation between temperature (cumulative warming) and cumulative emissions derives from this sign bias as demonstrated with random numbers with and without sign bias.

Figure 4: Random Numbers without Sign Bias

Figure 5: Random Numbers with Sign Bias
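The random-number demonstration described above can be sketched in a few lines. This is my own illustration, not the code behind the figures: draw two independent random series, with and without a positive mean (the "sign bias"), cumulate them, and correlate the cumulative series.

```python
import numpy as np

rng = np.random.default_rng(42)
n, trials = 100, 200

def mean_cum_corr(bias):
    """Average correlation between cumulative sums of two INDEPENDENT
    random series whose annual values have mean `bias`."""
    corrs = []
    for _ in range(trials):
        x = np.cumsum(rng.normal(loc=bias, scale=1.0, size=n))
        y = np.cumsum(rng.normal(loc=bias, scale=1.0, size=n))
        corrs.append(np.corrcoef(x, y)[0, 1])
    return float(np.mean(corrs))

print("no sign bias:  ", round(mean_cum_corr(0.0), 2))  # averages near 0
print("with sign bias:", round(mean_cum_corr(1.0), 2))  # close to +1
```

With a positive mean, both cumulative series acquire a shared deterministic trend, and the correlation approaches 1 even though the underlying annual series are completely independent.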

The sign bias explains the correlation between cumulative values of time series data and also the remaining carbon budget puzzle. It is shown that the TCRE regression coefficient between these time series of cumulative values derives from the positive value bias in the annual warming data. Thus, during a period of accelerated warming, the second half of the carbon budget period may contain a higher percentage of positive values for annual warming and it will therefore show a carbon budget that exceeds the proportional budget for the second half computed from the full span regression coefficient that is based on a lower bias for positive values.

In short, the bias for positive annual warming is highest for the second half, lowest for the first half, and midway between these two values for the full span – and therein lies the simple statistical explanation of the remaining carbon budget issue that climate science is trying to solve in terms of climate theory and its extension to Earth System Models. The Millar and Friedlingstein 2018 paper is yet another in a long line of studies that ignore the statistical issues of the TCRE correlation and instead try to explain its anomalous behavior in terms of climate theory, whereas in fact the explanation lies in statistical issues that have been overlooked by these young scientists.

The fundamental problem with the construction of TCRE carbon budgets and their interpretation in terms of climate action is that the TCRE is a spurious correlation that has no interpretation in terms of a relationship between emissions and warming. Complexities in these carbon budgets such as the remaining carbon budget are best understood in these terms and not in terms of new and esoteric variables such as those in earth system models.


An independent study by Jamal Munshi comes to a similar conclusion: Climate Sensitivity and the Responsiveness of Temperature to Atmospheric CO2

Detrended correlation analysis of global mean temperature observations and model projections is used to test the theory that surface temperature is responsive to atmospheric CO2 concentration, in terms of the GHG forcing of surface temperature implied by the climate sensitivity parameter ECS. The test shows strong evidence of GHG forcing of warming in the theoretical RCP8.5 temperature projections made with CMIP5 forcings. However, no evidence of GHG forcing by CO2 is found in observational temperatures from four sources, including two from satellite measurements. The test period is set to 1979-2018 so that satellite data can be included on a comparable basis. No empirical evidence is found in these data for a climate sensitivity parameter that determines surface temperature according to atmospheric CO2 concentration, or for the proposition that reductions in fossil fuel emissions will moderate the rate of warming.
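The kind of detrended correlation test the abstract describes can be illustrated in outline: remove each series' linear trend, then correlate the residuals. This sketch is my own illustration with synthetic data, not Munshi's code or data; it shows how two series that share only a trend correlate strongly before detrending but not after:

```python
import numpy as np

def detrended_corr(x, y):
    """Correlation of the residuals after removing each series'
    linear trend -- the kind of test the Munshi abstract describes."""
    t = np.arange(len(x))
    rx = x - np.polyval(np.polyfit(t, x, 1), t)
    ry = y - np.polyval(np.polyfit(t, y, 1), t)
    return float(np.corrcoef(rx, ry)[0, 1])

# Two synthetic series sharing only a trend, no year-to-year coupling:
rng = np.random.default_rng(0)
t = np.arange(200)
a = 0.05 * t + rng.normal(size=200)
b = 0.05 * t + rng.normal(size=200)
print(round(float(np.corrcoef(a, b)[0, 1]), 2))  # high: the shared trend
print(round(detrended_corr(a, b), 2))            # near zero: no coupling
```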

Postscript on Spurious Correlations

I am not a climate, environment, geology, weather, or physics expert. However, I am an expert on statistics, so I recognize bad statistical analysis when I see it. There are quite a few problems with the use of statistics within the global warming debate. The use of Gaussian statistics is the first error. In his first movie, Gore used a linear regression of CO2 and temperature. If he had done the same regression using the number of zoos in the world, or the worldwide use of atomic energy, or sunspots, he would have gotten the same result. A linear regression by itself proves nothing. –Dan Ashley, PhD statistics, PhD Business, Northcentral University


I Want You Not to Panic


I’ve been looking into claims for concern over rising CO2 and temperatures, and this post provides reasons why the alarms are exaggerated. It involves looking into the data and how it is interpreted.

First, the longer view suggests where to focus for understanding. Consider a long-term temperature record such as Hadcrut4. Taking it at face value, setting aside concerns about revisions and adjustments, we can see the pattern of the last 120 years following the Little Ice Age. The period between 1850 and 1900 is often considered pre-industrial, since modern energy and machinery took hold later on. The graph shows that warming was not much of a factor until temperatures rose to a peak in the 1940s, cooled off into the 1970s, then ended the century with a rise matching the rate of the earlier warming. Overall, the accumulated warming was 0.8C.

Next, regard the record of CO2 concentrations in the atmosphere. It is important to know that modern measurement of CO2 really began in 1959 with the Mauna Loa observatory, coinciding with the mid-century cool period. The earlier values in the chart are reconstructed by NASA GISS from various sources and calibrated to reconcile with the modern record. It is also evident that the first 60 years saw minimal change in the values compared to the post-1959 rise, after WWII ended and manufacturing turned from military production to meeting consumer needs. So again the mid-20th century appears as a change point.

It becomes interesting, then, to look at the last 60 years of temperature and CO2, from 1959 to 2019, particularly with so much clamour about climate emergency and crisis. This graph puts together rising CO2 and temperatures for the period. First, note that the accumulated warming is about 0.8C after fluctuations. And remember that those decades witnessed great human flourishing and prosperity by any standard of life quality. The rise of CO2 was steady and monotonic, with some acceleration into the 21st century.

Now let’s look at projections into the future, bearing in mind Mark Twain’s warning not to trust future predictions. No scientist knows all or most of the surprises that overturn continuity from today to tomorrow. Still, as weathermen well know, the best forecasts are built from present conditions and adding some changes going forward.

Here is a look to century end as a baseline for context. No one knows what cooling and warming periods lie ahead, but one scenario is that the next 80 years see continued warming at the same rate as the last 60. That presumes the forces that have made the weather in the lifetimes of many of us seniors will continue to operate. Of course factors beyond our ken may deviate from that baseline, and humans will notice and adapt as they have always done. And in the back of our minds is the knowledge that we are 11,500 years into an interglacial period before the cold returns, the cold being the greater threat to both humanity and the biosphere.

Those who believe CO2 causes warming advocate reducing the use of fossil fuels for fear of overheating, apparently discounting the need for energy should winters grow harsher. The graph shows one CO2 projection analogous to the temperature one, with the next 80 years accumulating at the same rate as the last 60. A second projection, in green, takes the somewhat higher rate of the last 10 years and projects it to century end. The latter trend would achieve a doubling of CO2.

What those two scenarios mean depends on how sensitive you think Global Mean Temperature is to changing CO2 concentrations. Climate models attempt to consider all relevant and significant factors and produce future scenarios for GMT. CMIP6 is the current group of models, displaying a wide range of warming presumably from rising CO2. The one model that closely replicates Hadcrut4 back to 1850 projects GMT 1.8C higher for a doubling of CO2 concentrations. If that held true going from 300 ppm to 600 ppm, the trend would resemble the red dashed line continuing the observed warming of the past 60 years: 0.8C up to now and another 1C over the rest of the century. Of course there are other models programmed for warming at 2 or 3 times the observed rate.
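The arithmetic behind that red dashed line can be checked with the standard logarithmic CO2 response; a minimal sketch, where 1.8C per doubling is the model value quoted above:

```python
import math

def warming_C(c_new, c_old, per_doubling=1.8):
    """Warming from a logarithmic CO2 response; 1.8 C per doubling is
    the model value quoted in the text (a sketch, not a model run)."""
    return per_doubling * math.log2(c_new / c_old)

print(round(warming_C(600, 300), 2))  # a full doubling: 1.8
print(round(warming_C(410, 300), 2))  # 300 -> 410 ppm: about 0.81
```

The rise from 300 to 410 ppm accounts for roughly 0.8C on this response, leaving about 1C for the remainder of the doubling to 600 ppm, matching the figures in the text.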

People who take to the streets with signs forecasting doom in 11 or 12 years have fallen victim to the IPCC 450 and 430 ppm scenarios. For years activists asserted that warming from pre-industrial can be contained to 2C if CO2 concentrations peak at 450 ppm. Last year, the SR1.5 report lowered the threshold to 430 ppm, hence the shortened timetable for the end of life as we know it.

For the sake of brevity, this post leaves aside many technical issues. Uncertainties about the temperature record, about early CO2 levels, and the questions around Equilibrium Climate Sensitivity (ECS) and the Transient Climate Response (TCR) are for another day. It should also be noted that GMT, as an average, hides a huge variety of fluxes over the globe’s surface, and thus larger warming in some places, such as Canada, and cooling in others, like the Southeast US. Ross McKitrick pointed out that Canada has already gotten more than 1.5C of warming, and it has been a great social, economic and environmental benefit.

So I want people not to panic about global warming/climate change. Should we do nothing? On the contrary, we must invest in robust infrastructure to ensure reliable affordable energy and to protect against destructive natural events. And advanced energy technologies must be developed for the future since today’s wind and solar farms will not suffice.

It is good that Greta’s demands were unheeded at the Davos gathering. Panic is not useful for making wise policies, and as you can see above, we have time to get it right.

CO2, SO2, O3: A Journey of Discovery

A previous post Light Bulbs Disprove Global Warming presented an article by Dr. Peter Ward along with some scientific discussion from his website. This post presents an excerpt from Chapter One of his book which helpfully explains his journey of discovery from his field of volcanism to the larger question of global warming.

The Chapter is How I Came to Wonder about Climate Change. Excerpts in italics with my bolds.

Discovering a More Likely Cause of Global Warming

The evidence for volcanism in the ice layers under Summit, Greenland, consists of sulfate deposits. Sulfate comes from sulfur dioxide, megatons of which are emitted during each volcanic eruption. At first, I thought that the warming was caused by the sulfur dioxide, which is observed to absorb solar energy passing through the atmosphere. My thinking was influenced by greenhouse warming theory, which assumes that carbon dioxide causes global warming because it is observed to absorb infrared energy radiated by Earth as it passes upward through the atmosphere and is then thought to re-radiate it back down to the surface, thus causing warming. The sulfur dioxide story, however, just wasn’t adding up quantitatively.

Figure 1.9 Average temperatures per century (black) increased at the same time as the amount of volcanic sulfate per century (red). The greatest warming occurred when volcanism was more continuous from year to year, as shown by the blue circles surrounding the number of contiguous layers (7 or more) containing volcanic sulfate. It was this continuity over two millennia that finally warmed the world out of the last ice age. Data are from the GISP2 drill hole under Summit, Greenland. Periods of major warming are labeled in black. Periods of major cooling are labeled in blue.

Eventually, after publishing two papers that developed this story, I came to realize that sulfur dioxide was actually just the “footprint” of volcanism—a measure of how active volcanoes were at any given time. The real breakthrough came when I came across a paper reporting that the lowest concentrations of stratospheric ozone ever recorded were for the two years after the 1991 eruption of Mt. Pinatubo, the largest volcanic eruption since the 1912 eruption of Mt. Katmai. As I dug deeper, analyzing ozone records from Arosa, Switzerland—the longest running observations of ozone in the world, begun in 1927 (Figure 8.15 on page 119)—I found that ozone spiked in the years of most volcanic eruptions but dropped dramatically and precipitously in the year following each eruption. There seemed to be a close relationship between volcanism and ozone. What could that relationship be?

Increased SO2 pollution (dotted black line) does not appear to contribute to substantial global warming (red line) until total column ozone decreased (black line, y-axis inverted), most likely due to increasing tropospheric chlorine (green line). Mean annual temperature anomaly in the Northern Hemisphere (red line) and ozone (black line) are smoothed with a centered 5 point running mean. OHC is ocean heat content (dotted purple line).

The answer was not long in coming. I knew that all volcanoes release hydrogen chloride when they erupt, and I also knew that chlorine from man-made chlorofluorocarbon compounds had been identified in the 1970s as a potent agent of stratospheric ozone depletion. From these two facts, and a third one, I deduced that it must be the depletion of ozone by chlorine in volcanic hydrogen chloride—and not the absorption of solar radiation by sulfur dioxide—that was driving the warming events that followed volcanic eruptions. The third fact in the equation was the well-known interaction of stratospheric ozone with solar radiation.

Figure 1.10 When ozone is depleted, a narrow sliver of solar ultraviolet-B radiation with wavelengths close to 0.31 µm (yellow triangle) reaches Earth. The red circle shows that the energy of this ultraviolet radiation is around 4 electron volts (eV) on the red scale on the right, 48 times the energy absorbed most strongly by carbon dioxide (blue circle, 0.083 eV at 14.9 micrometers (µm) wavelength). Shaded grey areas show the bandwidths of absorption by different greenhouse gases. Current computer models calculate radiative forcing by adding up the areas under the broadened spectral lines that make up these bandwidths. Net radiative energy, however, is proportional to frequency only (red line), not to amplitude, bandwidth, or amount.

The ozone layer, at altitudes of 12 to 19 miles (20 to 30 km) up in the lower stratosphere, absorbs very energetic solar ultraviolet radiation, thereby protecting life on Earth from this very “hot,” DNA-destroying radiation. When the concentration of ozone is reduced, more ultraviolet radiation is observed to reach Earth’s surface, increasing the risk of sunburn and skin cancer. There is no disagreement among climate scientists about this, but I went one step further by deducing that this increased influx of “super-hot” ultraviolet radiation also actually warms Earth.

All ultraviolet UV-C is absorbed in the upper atmosphere. Most UV-B is absorbed in the stratosphere. The wavelengths of UV are shown in nanometers.

All current climate models assume that radiation travels through space as waves and that energy in radiation is proportional to the square of the amplitude of these waves and to the bandwidth of the radiation, i.e. to the range of wavelengths or frequencies involved. Figure 1.10 shows the percent absorption for different greenhouse gases as a function of wavelength or frequency. It is generally assumed that the energy absorbed by greenhouse gases is proportional to the areas shaded in gray. From this perspective, absorption by carbon dioxide of wavelengths around 14.9 and 4.3 micrometers in the infrared looks much more important than absorption by ozone of ultraviolet-B radiation around 0.31 micrometers. Climate models thus calculate that ultraviolet radiation is relatively unimportant for global warming because it occupies a rather narrow bandwidth in the solar spectrum compared to Earth’s much lower frequency infrared radiation.

The models neglect the fact, shown by the red line in Figure 1.10 and explained in Chapter 4, that due to its higher frequency, ultraviolet radiation (red circle) is 48 times more energy-rich, 48 times “hotter,” than infrared absorbed by carbon dioxide (blue circle), which means that there is a great deal more energy packed into that narrow sliver of ultraviolet (yellow triangle) than there is in the broad band of infrared. This actually makes very good intuitive sense. From personal experience, we all know that we get very hot and are easily sunburned when standing in ultraviolet sunlight during the day, but that we have trouble keeping warm at night when standing in infrared energy rising from Earth.
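The 48 times figure follows from the per-photon energy relation E = hc/λ; a quick check of the caption’s arithmetic (my sketch, using standard physical constants):

```python
# Checking the caption's 48x per-photon energy ratio with E = h*c/wavelength.
H  = 6.62607015e-34    # Planck constant (J*s)
C  = 2.99792458e8      # speed of light (m/s)
EV = 1.602176634e-19   # joules per electron volt

def photon_energy_eV(wavelength_m):
    return H * C / wavelength_m / EV

uv = photon_energy_eV(0.31e-6)   # UV-B at 0.31 um: ~4 eV
ir = photon_energy_eV(14.9e-6)   # CO2 band at 14.9 um: ~0.083 eV
print(round(uv, 2), round(ir, 3), round(uv / ir, 1))  # 4.0 0.083 48.1
```

The ratio is simply the ratio of wavelengths, 14.9/0.31, about 48; this confirms the per-photon energies in the figure caption.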

These flawed assumptions in the climate models are based on equations that were written in 1865 by James Clerk Maxwell and have been used very successfully to design every piece of electronics that we depend on today, including our electric grid. Maxwell assumed that electromagnetic energy travels as waves through matter, air, and space. His wave equations seem to work well in matter, but not in space. Even though Albert Michelson and Edward Morley demonstrated experimentally in 1887 that there is no medium in space, no so-called luminiferous aether, through which waves could travel, most physicists and climatologists today still assume that electromagnetic radiation does in fact travel through space at least partially in the form of waves.

They also erroneously assume that energy in these imagined waves is proportional to
the square of their amplitude, which is true in matter, but cannot be true in space. They
calculate that there is more energy in the broad band of low-frequency infrared radiation
emitted by Earth and absorbed by greenhouse gases than there is in the narrow sliver of
additional high-frequency ultraviolet solar radiation that reaches Earth when ozone is
depleted (Figure 1.10). Nothing could be further from the truth.

The energy of radiation absorbed by carbon dioxide around 14,900 nanometers (blue circle) is near 0.08 electron volts (green circle) while the energy that reaches Earth when the ozone layer is depleted around 310 nanometers (red circle) is near 4 electron volts, 48 times larger.
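
The 48-fold ratio quoted above follows directly from the Planck relation E = hc/λ. Here is a quick check one can do (my own illustrative calculation, not from the book):

```python
# Photon energy E = h*c/lambda, expressed in electron volts.
# Illustrative check of the 48x ratio claimed in the text.
H_C_EV_NM = 1239.84  # h*c in eV*nm (CODATA value)

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon of the given wavelength, in eV."""
    return H_C_EV_NM / wavelength_nm

e_ir = photon_energy_ev(14900)  # CO2 absorption band, ~14.9 micrometers
e_uv = photon_energy_ev(310)    # UV-B absorbed by ozone, ~0.31 micrometers

print(f"IR photon:  {e_ir:.3f} eV")     # ~0.083 eV
print(f"UV photon:  {e_uv:.3f} eV")     # ~4.0 eV
print(f"Ratio:      {e_uv / e_ir:.1f}") # ~48
```

The ratio is simply 14900/310, so the per-photon energy comparison is not in dispute; the argument is over whether per-photon energy or total band energy is what matters.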

The story got even more convoluted by the rise of quantum mechanics at the dawn
of the 20th century when Max Planck and Albert Einstein introduced the idea that energy
in light is quantized. These quanta of light ultimately became known as photons. In order
to explain the photoelectric effect, Einstein proposed that radiation travels as particles, a
concept that scientists and natural philosophers had debated for 2500 years before him.
I will explain in Chapter 4 why photons traveling from Sun cannot physically exist, even
though they provide a very useful mathematical shorthand.

Max Planck postulated, in 1900, that the energy in radiation is equal to vibrational
frequency times a constant, as is true of an atomic oscillator, in which a bond holding two
atoms together is oscillating in some way. He needed this postulate in order to derive an
equation by trial and error that could account for and calculate the observed properties of
radiation. Planck’s postulate led to Albert Einstein’s light quanta and to modern physics,
dominated by quantum mechanics and quantum electrodynamics. Curiously, however,
Planck didn’t fully appreciate the far-reaching implications of his simple postulate, which
states that the energy in radiation is equal to frequency times a constant. He simply saw it as a useful mathematical trick.

Energy is a function of frequency and should therefore be plotted on the x-axis (top of this figure) and units of watts should not be included on the y-axis. The colored lines show the spectral radiance predicted by Planck’s law for black bodies with different absolute temperatures.
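
Planck’s law referenced in the caption can be evaluated directly. A short sketch (my illustration, using standard SI constants) of black-body spectral radiance as a function of frequency and temperature:

```python
import math

# Physical constants (SI)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light in vacuum, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def spectral_radiance(freq_hz: float, temp_k: float) -> float:
    """Planck's law: black-body spectral radiance B(nu, T) in W * m^-2 * sr^-1 * Hz^-1."""
    return (2 * H * freq_hz**3 / C**2) / math.expm1(H * freq_hz / (KB * temp_k))

# A hotter black body is brighter at every frequency:
nu = 2e13  # ~15-micrometer infrared, near the CO2 band discussed in the text
for t in (255, 288, 5772):  # Earth's effective temp, surface temp, Sun's photosphere
    print(f"T = {t:5d} K: B = {spectral_radiance(nu, t):.3e}")
```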

As I dug deeper, it took me several years to become comfortable with those implications.
It was not the way we were trained to think. It was not the way most physicists think, even
today. Being retired turned out to be very useful because I could give my brain time to mull
this over. Gradually, it began to make sense. The take-away message for me was that the
energy in the kind of ultraviolet radiation that reaches Earth when ozone is depleted is 48 times “hotter” than infrared energy absorbed by greenhouse gases. In sufficient quantities, it should be correspondingly 48 times more effective in raising Earth’s surface temperature than the weak infrared radiation from Earth’s surface that is absorbed by carbon dioxide in the atmosphere and supposedly re-radiated back to the ground.

There simply is not enough energy involved with greenhouse gases to have a significant
effect on global warming. Reducing emissions of greenhouse gases will therefore not be
effective in reducing global warming. This conclusion is critical right now because most of
the world’s nations are planning to meet in Paris, France, in late November 2015, to agree
on legally binding limits to greenhouse-gas emissions. Such limits would be very expensive
as well as socioeconomically disruptive. We depend on large amounts of affordable energy to support our lifestyles, and developing countries also depend on large amounts of affordable energy to improve their lifestyles. Increasing the cost of energy by even a few percent would have major negative financial and societal repercussions.

This book is your chance to join my odyssey. You do not need to have majored in
science or even to be familiar with physics, chemistry, mathematics, or climatology. You
just need to be curious and be willing to work. You also need to be willing to think critically
about observations, and you may need to reevaluate some of your own ideas about climate.
You will learn that there was a slight misunderstanding in science made back in the 1860s
that has had profound implications for understanding climate change and physics today. It took me many years of hard work to gain this insight, and I will discuss that in Chapter 4. First, however, we need to look at some fundamental observations that cause us to wonder: Could the greenhouse warming theory of climate change actually be mistaken?


I welcome this analysis and assessment that explain why rising CO2 concentrations in the satellite era have no discernible impact on the radiative profile of the atmosphere. See Global Warming Theory and the Tests It Fails

Raman Effect Not a Climate Factor

When the Raman effect came up last year in relation to GHGs (greenhouse gases), at first I was confused, thinking it referred to Asian noodles. So I have had to learn more, and while the effect is real and useful, I doubt it is a factor in global warming/climate change. This post provides information principally from two sources consistent with many others I read.

One article is Raman Spectroscopy from University of Pennsylvania.  Excerpts in italics with my bolds.

Raman Effect

Raman spectroscopy is often considered to be complementary to IR spectroscopy. For symmetrical molecules with a center of inversion, Raman and IR are mutually exclusive. In other words, bonds that are IR-active will not be Raman-active and vice versa. Other molecules may have bonds that are either Raman-active, IR-active, neither or both.

Raman spectroscopy measures the scattering of light by matter. The light source used in Raman spectroscopy is a laser.

The laser light is used because it is a very intense beam of nearly monochromatic light that can interact with sample molecules. When matter absorbs light, the internal energy of the matter is changed in some way. Since this site is focused on the complementary nature of IR and Raman, the infrared region will be discussed. Infrared radiation causes molecules to undergo changes in their vibrational and rotational motion. When the radiation is absorbed, a molecule jumps to a higher vibrational or rotational energy level. When the molecule relaxes back to a lower energy level, radiation is emitted. Most often the emitted radiation is of the same frequency as the incident light. Since the radiation was absorbed and then emitted, it will likely travel in a different direction from which it came. This is called Rayleigh scattering. Sometimes, however, the scattered (emitted) light is of a slightly different frequency than the incident light. This effect was first noted by Chandrasekhara Venkata Raman who won the Nobel Prize for this discovery. (6) The effect, named for its discoverer, is called the Raman effect, or Raman scattering.

Raman scattering occurs in two ways. If the emitted radiation is of lower frequency than the incident radiation, then it is called Stokes scattering. If it is of higher frequency, then it is called anti-Stokes scattering.

Energy Diagram Scattering (Source: Wikipedia)

The Blue arrow in the picture to the left represents the incident radiation. The Stokes scattered light has a frequency lower than that of the original light because the molecule did not relax all the way back to the original ground state. The anti-Stokes scattered light has a higher frequency than the original because it started in an excited energy level but relaxed back to the ground state.

Though any Raman scattering is very low in intensity, the Stokes scattered radiation is more intense than the anti-Stokes scattered radiation.

The reason for this is that very few molecules would exist in the excited level as compared to the ground state before the absorption of radiation. The diagram shown represents electronic energy levels as shown by the labels “n=”. The same phenomenon, however, applies to radiation in any of the regions.
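
The excerpt’s point that anti-Stokes lines are weaker because few molecules start in an excited state can be quantified with a Boltzmann population ratio. A rough illustration (my own sketch, ignoring the frequency-dependent scattering prefactor, for a typical vibrational mode of ~1000 cm^-1 at room temperature):

```python
import math

KB_CM = 0.6950    # Boltzmann constant expressed in cm^-1 per kelvin
MODE_CM = 1000.0  # illustrative vibrational mode energy, cm^-1 (an assumption)
T = 300.0         # room temperature, K

# Fraction of molecules thermally populating the upper vibrational level,
# relative to the ground state (two-level Boltzmann approximation):
ratio = math.exp(-MODE_CM / (KB_CM * T))
print(f"anti-Stokes / Stokes population ratio ~ {ratio:.4f}")  # about 0.8%
```

So at room temperature fewer than one molecule in a hundred sits in the excited level, which is why the anti-Stokes signal is so much fainter.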

Another article is Raman Techniques: Fundamentals and Frontiers by Robin R. Jones et al., published in 2019 and available at the US National Library of Medicine.


Driven by applications in chemical sensing, biological imaging and material characterisation, Raman spectroscopies are attracting growing interest from a variety of scientific disciplines. The Raman effect originates from the inelastic scattering of light, and it can directly probe vibration/rotational-vibration states in molecules and materials.

Despite numerous advantages over infrared spectroscopy, spontaneous Raman scattering is very weak, and consequently, a variety of enhanced Raman spectroscopic techniques have emerged.

These techniques include stimulated Raman scattering and coherent anti-Stokes Raman scattering, as well as surface- and tip-enhanced Raman scattering spectroscopies. The present review provides the reader with an understanding of the fundamental physics that govern the Raman effect and its advantages, limitations and applications. The review also highlights the key experimental considerations for implementing the main experimental Raman spectroscopic techniques. The relevant data analysis methods and some of the most recent advances related to the Raman effect are finally presented. This review constitutes a practical introduction to the science of Raman spectroscopy; it also highlights recent and promising directions of future research developments.

Fundamental Principles

When light interacts with matter, the oscillatory electro-magnetic (EM) field of the light perturbs the charge distribution in the matter which can lead to the exchange of energy and momentum leaving the matter in a modified state. Examples include electronic excitations and molecular vibrations or rotational-vibrations (ro-vibrations) in liquids and gases, electronic excitations and optical phonons in solids, and electron-plasma oscillations in plasmas [108].

Spontaneous Raman

When an incident photon interacts with a crystal lattice or molecule, it can be scattered either elastically or inelastically. Predominantly, light is elastically scattered (i.e. the energy of the scattered photon is equal to that of the incident photon). This type of scattering is often referred to as Rayleigh scattering. The inelastic scattering of light by matter (i.e. the energy of the scattered photon is not equal to that of the incident photon) is known as the Raman effect [1, 4, 6]. This inelastic process leaves the molecule in a modified (ro-)vibrational state.

In the case of spontaneous Raman scattering, the Raman effect is very weak; typically, 1 in 10^8 of the incident radiation undergoes spontaneous Raman scattering [6].
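
To put the quoted 1-in-10^8 figure in context, here is a back-of-envelope sketch (my illustration, assuming a hypothetical 1 mW, 532 nm laser) of how many photons a laboratory source must supply to get a usable Raman signal:

```python
PLANCK = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

power_w = 1e-3         # hypothetical 1 mW laser (an assumption for illustration)
wavelength_m = 532e-9  # common green laser line (an assumption for illustration)
raman_fraction = 1e-8  # ~1 in 10^8 photons Raman-scattered (figure from the text)

photon_energy_j = PLANCK * C / wavelength_m
photons_per_s = power_w / photon_energy_j
raman_per_s = photons_per_s * raman_fraction
print(f"Incident photons/s: {photons_per_s:.2e}")  # ~2.7e15
print(f"Raman photons/s:    {raman_per_s:.2e}")    # ~2.7e7
```

Even a weak laser delivers enough photons per second to make the tiny Raman fraction detectable; diffuse natural light offers nothing comparable.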

The transition from the virtual excited state to the final state can occur at any point in time and to any possible final state based on probability. Hence, spontaneous Raman scattering is an incoherent process. The output signal power is proportional to the input power, scattered in random directions and is dependent on the orientation of the polarisation. For example, in a system of gaseous molecules, the molecular orientation relative to the incident light is random and hence their polarisation wave vector will also be random. Furthermore, as the excited state has a finite lifetime, there is an associated uncertainty in the transition energy which leads to natural line broadening of the wavelength as per the Heisenberg uncertainty principle (∆E∆t ≥ ℏ/2) [1]. The scattered light, in general, has polarisation properties that differ from that of the incident radiation. Furthermore, the intensity and polarisation are dependent on the direction from which the light is measured [1]. The scattered spectrum exhibits peaks at all Raman active modes; the relative strength of the spectral peaks are determined by the scattering cross-section of each Raman mode [108]. Photons can undergo successive Rayleigh scattering events before Raman scattering occurs as Raman scattering is far less probable than Rayleigh scattering.

Laser Empowered Raman Scattering

Coherent light-scattering events involving multiple incident photons simultaneously interacting with the scattering material was not observed until laser sources became available in the 1960s, despite predictions being made as early as the 1930s [37, 38]. The first laser-based Raman scattering experiment was demonstrated in 1961 [39]. Stimulated Raman scattering (SRS) and CARS have become prominent four-wave mixing techniques and are of interest in this review.

SRS is a coherent process providing much stronger signals relative to spontaneous Raman spectroscopy as well as the ability to time-resolve the vibrational motions.

Raman is generally a very weak process; it is estimated that approximately one in every 10^8 photons undergo Raman scattering spontaneously [6]. This inherent weakness poses a limitation on the intensity of the obtainable Raman signal. Various methods can be used to increase the Raman throughput of an experiment, such as increasing the incident laser power and using microscope objectives to tightly focus the laser beam into small areas. However, this can have negative consequences such as sample photobleaching [139]. Placing the analyte on a rough metal surface can provide orders of magnitude enhancement of the measured Raman signal, i.e. SERS.


It seems to me that spontaneous scattering is the only possible way that the Raman effect could influence the radiative profile of the atmosphere.  Sources like those above convince me that, lacking laser intensity, natural light does not produce a Raman effect in air of sufficient significance to be considered a climate factor.

Dipole Down Under

Vijay Jayaraj explains how weather is created around the Indian Ocean in this article Record Heat and Cold Expose Climate Alarmists’ Bias. Excerpts in italics with my bolds and images.

Australia was literally on fire in December. Record heat made headlines in global media. So did the extreme rainfall in east Africa.

You and everybody else on earth can guess what climate alarmists blamed for both: man-made global warming, a.k.a. climate change.

But record cold in northern India at the same time didn’t make headlines in any major media in the United States or the United Kingdom.

Why? Because it didn’t fit expectations.

It’s a perfect example of climate alarmists’ obvious bias that’s seldom brought to light.

In December, east Africa received extremely heavy rainfall, causing widespread floods in Kenya and Djibouti. The floods impacted more than one million people and killed scores already challenged by extreme poverty.

During the same month, Australia recorded all-time highs. Widespread, devastating wildfires made the situation worse.

Climate alarmists predictably claimed these weather events for their propaganda.  Almost all news articles about the Australian heat and wildfires ultimately blamed man-made climate change. But more than four-fifths of Australia’s wildfires were caused by arson, not climate change.

And what caused the extreme hot weather was not global warming but a phenomenon called Positive Indian Ocean Dipole (PIOD).

PIOD is a seasonal weather phenomenon that can affect climate in east Africa, south Asia, and Australia all at once.

The same PIOD that caused Australia’s heat (but not its wildfires) caused the year-end floods in east Africa. It also caused extreme cold in northern India in the same month. Largely underreported in global media, the cold continued right through to the end of December.

Delhi, India’s capital, recorded its second-coldest December in 118 years. Intermittent cold waves gripped Punjab, Jammu and Kashmir, Ladakh, Himachal Pradesh, Uttarakhand, and Delhi.

On December 28, the heart of Delhi recorded a minimum of 1.7˚C (35˚F). The temperature likely reached freezing outside the city’s urban heat island effect. The cold wave impacted everyday life for 29 million people in Delhi.

But neither CNN nor BBC headlines ever mentioned it. It runs contrary to their narrative. Winters are supposed to become warmer. Though the mainstream media do link the PIOD to the Australian heat and the east African floods, they never shy away from blaming man-made climate change and find ways to link both.

Now their new theory is that the PIOD itself has become more intense because of climate change. In other words, weather events are non-existent in their dictionary. Each and every extreme weather event is blamed on man-made climate change.

This is what happens when people read every weather event through the preconceived lenses of climate alarmism.

Closer inspection reveals no change in very hot days in Australia since World War I. So hot weather (short term) and hot climate (long term) have nothing to do with the wildfire outbreak.

December’s extremes — heat in Australia, flooding in east Africa, cold in India — all were caused by a strong PIOD, not climate change.

These weather events neither prove nor disprove man-made climate change. But they do expose the bias of climate alarmists who blame them on man-made global warming.

Vijay Jayaraj (M.Sc., Environmental Science, University of East Anglia, England) is a research contributor for the Cornwall Alliance for the Stewardship of Creation.


The real tragedy is that Australian officials keep obsessing over their bogus climate models instead of taking seriously real-world weather warnings. It is not as though they had no advance notice; this was published May 16, 2019, which should have triggered major efforts to reduce the fuel load long before summer.

ABC Online: A positive Indian Ocean Dipole this winter is bad news for drought-hit parts of Australia

Cool seas off WA’s north-west could kick off a climatic phenomenon that may exacerbate a winter drought across central and southern Australia.

See Also Aussies’ Choice:  Burn Cool or Burn Hot

How Water Warms Our Planet

The hydrological cycle. Estimates of the observed main water reservoirs (black numbers, in 10^3 km3) and the flow of moisture through the system (red numbers, in 10^3 km3 yr-1). Adjusted from Trenberth et al. [2007a] for the period 2002-2008 as in Trenberth et al. [2011].

This site has long asserted that “Oceans Make Climate”. Now a recent study reveals the dynamics by which water influences temperatures over land as well. The paper is Testing the hypothesis that variations in atmospheric water vapour are the main cause of fluctuations in global temperature by Ivan R. Kennedy and Migdat Hodzic, published in Periodicals of Engineering and Natural Sciences, August 2019.  Excerpts in italics with my bolds. H/T Notrickszone.


Global warming issues have caused intensive research work in related areas, from land use, to urban environment to data science use in order to understand its effects better [25], [26], [27]. In this paper we focus on water related effects on global warming. Although water is recognised as the main cause of the greenhouse effect warming the Earth 33 oC above its black body temperature, water vapour is usually given a secondary role in global models, as a positive feedback from warming by all other causes. Despite its dominant effect in generating the weather, changes related to water are not seen as having a primary role in climate change, the focus being primarily on CO2. With positive feedback from primary warming, the effect of increasing CO2 is trebled [15] by water vapour increase. This conclusion is based on the perception that there are no significant trends in the hydrological cycle that could cause climate forcing. But this overlooks the effect of more than 3500 km3 of extra surface and ground water used annually in irrigation [17] to grow food for the human population. This quantity of extra water increases steadily year by year, well correlated with increasing atmospheric CO2, growing about 60% of world food requirements. Even so, the amount used in irrigation probably only adds about 3% to the annual hydrological cycle [9] of 113,000 km3. Is this sufficient to exert a significant extra greenhouse effect? Here we advance the hypothesis that it does and should be included in climate models.
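
The arithmetic behind the excerpt’s "about 3%" figure is straightforward; an illustrative check using the paper’s own numbers:

```python
irrigation_km3_per_yr = 3500.0     # extra surface/ground water used in irrigation (from the text)
hydro_cycle_km3_per_yr = 113000.0  # annual hydrological cycle (from the text)

fraction = irrigation_km3_per_yr / hydro_cycle_km3_per_yr
print(f"Irrigation adds about {fraction:.1%} to the annual cycle")  # ~3.1%
```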

A critical assumption of the IPCC consensus of global warming is that an increasing concentration of CO2 causes more retention of radiant heat near the top of the atmosphere, largely as a result of reduced emission of its spectral wavelengths centred on 15 microns. The radiative-convective model assumes that the lowered emissions at reduced pressure, number density and higher, colder altitudes from this GHG now provides an independent and sustained forcing exceeding 1-2 W per m2. It is assumed that once this reduction in OLR in the air column from increasing CO2 has occurred it must be compensated by increased OLR at different wavelengths elsewhere, maintaining balance with incoming radiation.

This critical assumption still lacks empirical confirmation.

Water Drives Atmospheric Warming

The importance of water in helping to keep the Earth’s atmosphere warm in the short term is beyond dispute. Table 1 summarises previously estimated rates for thermal energy flows into and out of the atmosphere [23]. As shown in the table, more than 80% of the power by which the temperature of air is maintained above the Earth’s black body temperature of -18 C is facilitated by water. Most significant of these air warming inputs from water is the greenhouse effect by which water vapour absorbs longwave radiation emitted from the surface, retaining more energy in air. However, warming from absorption of specific quanta by water vapour of incoming short wave solar radiation (ISR) and the latent heat of condensation of water vapour, exceeding the cooling effect of vertical convection, also contribute to warming of air.

Thus, the greenhouse gas (GHG) content of the atmosphere effectively provides resistance to heat flow to space increasing the transient storage of solar energy, with a warming effect analogous to resistances in an electrical circuit. By comparison to water, other polyatomic greenhouse gases like CO2 play a minor role in this process, totalling less than 20% of warming. Furthermore, the fact that the minor GHGs are relatively well-mixed by the turbulence in the troposphere, unlike water, means that we cannot expect to observe spatial variations in their effects. Furthermore, the heat capacity of non-greenhouse gases provides some 99% of the thermal inertia of the troposphere, although only greenhouse gases capable of longwave radiation by vibrational and rotational quanta can contribute to cooling by radiation through the top of the atmosphere as OLR. Figure 1 contrasts schematically the typical variation of outgoing longwave radiation (OLR) over marine and terrestrial environments.

On well-watered land such as southern China much less direct emission of OLR to space occurs, in contrast to Quetta, Pakistan, on the same latitude with similar incoming shortwave radiation (ISR). In contrast to humid atmospheres on land and tropical seas, relatively arid regions such as the Sahara, the Middle East and Australia provide heat vents effectively cooling the Earth, solely as a result of the radiant emissions from GHGs as OLR. The varying global emissions of OLR estimated for typical marine and terrestrial regions shown in Figure 2 mirror this scheme.

Clearly, water vapour is the most critical factor in the mechanism by which the air column of the lower troposphere is charged with heat energy. It is of interest from this figure and in Table 1 that the exact sum of the effects of all greenhouse gases in directly warming air, including conduction from the surface, charges the lower atmosphere with sufficient heat to generate the downwelling radiation from greenhouse gases directed towards the surface [12]. Water is the main source of this back radiation [18], well understood to be responsible for keeping the surface air warmer in humid atmospheres, thus raising the minimum temperature.

None of the variation in OLR in Figure 1 can be attributed to the well-mixed GHGs such as CO2.

Furthermore, unlike the greenhouse effect of CO2, which is regarded as increasing only in a logarithmic manner as its concentration rises, the greenhouse effect of water on retaining heat in the atmosphere should vary more linearly, even in the case of absorption of surface radiation, as its vapour spreads into dryer atmospheres; this potential is illustrated in Fig.1 in the descending zones of Hadley cells at sub-tropical latitudes.

Fig. 1 Global values of mean OLR from 2003-2011 (AIRS OLR 2003-2011 average, estimated by Giorgio, G.P., June 24, 2014; downloaded August 2, 2017). The russet areas show regions of greater OLR, with outgoing radiation above the average of ca. 240 W per m2, thus tending to cool the Earth. Note how the upper troposphere above arid continental regions provides a vent for the greatest rate of cooling.

Thermal Effects from Water are Direct and Linear

An approximately linear response in increasing air temperature to changes in atmospheric water content is reasonable. Unlike the well-mixed CO2, there are marked spatial and temporal variations in atmospheric water content, with much of the Earth’s surface in significant deficit, particularly in the sub-tropical zone subject to Hadley cell recycling, emphasised over semi-arid land. To the extent that additional water vapour spills over into these dryer regions on land the greater the area of the Earth that is subject to the greenhouse effect. This response can be contrasted to the effect of increasing CO2, which has a logarithmic relationship between climate forcing and concentration in the atmosphere [14], [15], each doubling causing a similar increase in temperature. Because there is no obvious regional effect of CO2 on the weather or regional climate, the effect of any increases in its concentration can only be theoretically inferred. If additional heat is retained in the atmosphere by increasing greenhouse effects from CO2 or water, the air temperature near the surface is expected to increase to keep global values of ISR and OLR in balance. A critical assumption of the IPCC consensus for climate change is that increasing CO2 causes more retention of heat in air near the top of the troposphere, largely as reduced emission from the edges of its spectral peak centred on 15 microns. This edge effect is predicted to be visible from space as a cooling of its spectrum, providing a negative forcing of 1-2 W per m2. It is assumed that this forcing must be compensated by increased OLR at different wavelengths as a result of the increased temperature.
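
The logarithmic relationship mentioned here is usually written as a forcing of the form α ln(C/C0). Using the 5.3 W/m2 coefficient the paper quotes in its conclusion, each doubling of CO2 yields the same increment (my own illustrative calculation):

```python
import math

ALPHA = 5.3  # W/m^2, coefficient quoted in the paper's conclusion

def co2_forcing(c_ppm: float, c0_ppm: float) -> float:
    """Radiative forcing relative to a baseline CO2 concentration (ppmv)."""
    return ALPHA * math.log(c_ppm / c0_ppm)

# Each doubling adds the same ~3.7 W/m^2, illustrating the logarithmic saturation:
for c in (280, 560, 1120):
    print(f"{c:5d} ppm: {co2_forcing(c, 280):.2f} W/m^2")
```

This saturation is why the paper contrasts CO2 with water vapour, whose effect it argues scales more nearly linearly as vapour spreads into dry regions.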

Fig. 3 Satellite measurements of global-zonal OLR ( NOAA website, downloaded August 20, 2017). The 1998-2000 El Nino peaked at about 1.03 C above the minimum temperature in the preceding La Nina, with zonal OLR varying approximately 4 W/m2; see also (8)

This is regarded as a result of convective elevation of the maritime atmosphere, reducing the outgoing longwave radiation (OLR) about 100 W/m2 locally and 4 W/m2 globally from an increase in global water vapour of about 4%. This suggests a linear response from greenhouse warming to increased water vapour content of the atmosphere. Note that the extra heat in the atmosphere during an El Nino is controlled by all these sources of warming, as shown in Figure 2. Whatever the source of extra heat in the ocean, by moving extra water into the atmosphere as vapour it warms the atmosphere by the resultant greenhouse effect, reducing OLR, as well as by direct warming by sunlight in the air column. In Table 4, another estimate of the possible effect of irrigation on global warming is made by comparison with the El Nino-La Nina cycle [22]. Consistent with the irrigation water hypothesis, the El Nino has long been known to significantly reduce the OLR over the Pacific Ocean by up to 25% [3], recognised as a result of the OLR emission from water vapour occurring at an elevated and therefore colder altitude. Assuming 60% of irrigation water becomes vapour in the troposphere and a longer rain-out time of 15 days in dry regions compared to less than a week over the oceans with a global average of 8.5 days [19], a steady state of about 100 km3 of extra water vapour results from irrigation.
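
The steady-state figure of about 100 km3 follows from a simple reservoir calculation, input rate times residence time; a sketch using the assumptions stated in the excerpt:

```python
irrigation_km3_per_yr = 3500.0  # annual irrigation water (from the text)
vapour_fraction = 0.60          # fraction assumed to enter the troposphere as vapour
rainout_days = 15.0             # assumed residence time over dry regions

vapour_input = irrigation_km3_per_yr * vapour_fraction  # km3/yr entering as vapour
# Steady-state reservoir = input rate * residence time:
steady_state_km3 = vapour_input * rainout_days / 365.0
print(f"Steady-state extra water vapour: ~{steady_state_km3:.0f} km3")  # ~86, i.e. "about 100"
```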

This estimate also suggests an increase in temperature near 0.2C from 0.84 W/m2 of forcing based on the data given in Figure 3. This is consistent with the total effect of water vapour on global warming exceeding 25 C.

It should be noted that this dynamic effect of water on warming air includes heat pumping by evapotranspiration as well as significant warming by direct absorption of short wave solar radiation (see Fig. 2), also contributing to a more linear effect by water on warming. Since this increase estimates a primary forcing effect of new water, a positive feedback is also anticipated from increased evaporation of the ocean, suggesting that the total increase from irrigation could be of the order of 0.5 oC in the 20th century.

These global results may have more accuracy than the results obtained from the numerous grid points in global circulation models, given the additivity of errors.

Empirical Proof Comparing Dry and Irrigated Land

Figure 4 shows, using the same modelling as in Figure 2, the predicted steady-state greenhouse effect of adding irrigation water in a comparison between dryland and irrigated land. In fact the effect of water on heat transfer to the atmospheric column is not only a result of the greenhouse effect given in the equation in the figure but also of direct absorption by water of shortwave ISR and evapotranspiration, similar in total magnitude. These latter effects will be a linear function of the water vapour involved. The evaporative effect cools the surface but must transfer a similar amount of heat to the atmosphere as infrared radiation (ca. 6 microns) associated with condensation of water vapour into droplets under convective cooling as in [21]. Paradoxically, the modelling paper in [6] failed to account for any of these effects, specifically dismissing significant transfer of water vapour into the atmosphere from the growth of irrigated crops as noted above. This provides a clue to a possible flaw in their models. Except for environments already very humid where evapotranspiration is limited, this cannot be true.

Fig. 4 Comparison of dryland and irrigated land for effect of water on heat retention in the atmosphere as an enhanced greenhouse effect. The El Nino condition of enhanced evaporation from the ocean known to strongly reduce OLR In [3] is shown as an analogue.

NCEP/ NCAR Reanalyses Coincident with the Periodic Flooding of Lake Eyre

Fig. 5 Variation in OLR from flooding of lake Eyre using NCEP-NCAR reanalysis datasets. a.Difference in OLR values between 1978 and 1974, dry and wet years. b. Difference in OLR values between 1978 and 1973, two dry years.

Rarely, during the La Nina phase of the climate cycle, the dry interior of northern Australia overlying the Great Artesian Basin may flood. Lacking riverine exits to the ocean, the massive runoff flowed southwards, mainly accumulating in the depression below sea level in central South Australia known as Lake Eyre. In late January and February of 1974 Lake Eyre filled to a depth of six metres, its surface only returning to its hot, dry state three years later in 1977-78. This was the greatest flood ever recorded. The hypothesis in [4] suggests that this flooding should also lead to persistently elevated water vapour content of the atmosphere, predominantly downwind from the Lake Eyre basin. Using the NCEP-NCAR reanalysis datasets, which are informed by Nimbus and other satellite observations since 1970, the OLR emissions to space and the variation in humidity from this region were compared for 12 months of 1974 against the same period in 1978 by subtracting one year from the other. A significant elevation of OLR, by more than 10 W/m2, was observed for the 12-month period when the lake was dry (Figure 5). This result is accompanied by increases in specific humidity consistent with an elevated greenhouse effect such as would be experienced in semi-arid areas when irrigated. The area affected downwind also showing elevated humidity is estimated as 35 times the flooded area, showing that the magnitude of this regional greenhouse effect was indeed significant.

Conclusion:  Thankfully, A Wet World is a Warm World

The neglect of irrigation as a possible significant source of anthropogenic climate change may be a result of reluctance to consider the relatively small share of irrigation in the hydrological cycle. Because water has been considered as providing positive feedback to warming primarily from CO2, its possible forcing effect has been overlooked. But as shown here by several different means, the more potent effect of applying water, previously in the ocean or deep in the ground, to dry surfaces with air in strong water deficit can be sufficient to affect global temperature. Clearly, the water vapour content of the troposphere is the major cause of the natural greenhouse effect, contributing up to two-thirds of the 33 °C warming.

Spatial and temporal variations in soil moisture and relative humidity of the atmosphere are the main factors controlling the regional outgoing longwave radiation (OLR), in contrast to the more even effects from well-mixed greenhouse gases such as CO2.

This is well illustrated in the 4-6 year El Nino cycles, which result in a global mean temperature variation approaching 1 °C compared with La Nina years. Longer term, the proposed Milankovitch glaciations of paleoclimates result in declines of atmospheric temperature around 10 °C, consistent with the major reduction in tropospheric water vapour approaching 50%. Weather and climate, as expressed in the greenhouse effect, are clearly demonstrated in the distribution of water, particularly on land. The apparently linear relationship between the water content of the atmosphere and temperature is direct verification of the greenhouse warming effect of this greenhouse gas. By contrast, other than by correlation, there is no such direct verification possible for the greenhouse effect of CO2. We rely on the forcing equation 5.3ln[(CO2)t /(CO2)o] to estimate the climate sensitivity with respect to varying concentration (ppmv) of this greenhouse gas. Early hopes that a clear spectral signal was available showing significantly reduced OLR from increasing CO2, proving the hypothesis of climate forcing by permanent GHGs, have not been realised [5]. A focus using new satellites on the longer wavelength OLR associated with rotations of water might help resolve this question; until now, OLR for this region has been estimated from shorter wavelengths. The natural experiment provided by the flooding of Lake Eyre, which significantly reduced the OLR, confirms that irrigation water typically applied to dry land will have a measurable greenhouse effect.
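As a quick illustration, the forcing equation above is easy to evaluate directly. A minimal sketch using the 5.3 coefficient quoted in the text; the 280 ppmv pre-industrial baseline is my illustrative assumption, not a figure from this post:

```python
import math

def co2_forcing(c_now: float, c_ref: float, coeff: float = 5.3) -> float:
    """Logarithmic CO2 forcing in W/m2: coeff * ln(C_t / C_0)."""
    return coeff * math.log(c_now / c_ref)

# Forcing for a doubling of CO2 from an assumed 280 ppmv baseline:
delta_f = co2_forcing(560.0, 280.0)
print(f"Forcing for 2xCO2: {delta_f:.2f} W/m2")  # about 3.67 W/m2
```

The logarithm is why each successive increment of CO2 produces less additional forcing than the last.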

One year time lapse of precipitable water (amount of water in the atmosphere) from Jan 1, 2016 to Dec 31, 2016, as modeled by the GFS. The Pacific ocean rotates into view just as the tropical cyclone season picks up steam.

Corrections to CO2 Post

This is an update correcting a previous post Fear Not CO2!  I discovered math errors that invalidated the main conclusion.  I apologize for not seeing the problem before posting.

At Quora Paul Noel answers the question Is climate change the biggest catastrophic risk facing humanity today? Text below in italics with my bolds.

NO it is not even an issue you need to worry about.

Look closely. The biological adaptation is eating up the CO2 almost as fast as it is being emitted and the adaptation is getting faster every year.

Out of the over 38 GT output in 2019 only 0.02 GT will not be sequestered naturally and by next year that will be gone. The plants are eating up the CO2 just a few days after the release. They are happily eating it up just fine. You don’t even need to plant trees. Nothing against trees here. I like them.

This is the TRUMP CARD on the game. With this known, it is impossible to imagine the problems proposed are happening regardless of all other issues.

Correction Update

I reblogged an answer from Quora with an analysis and conclusions new to me. I thought it interesting if it held up to scrutiny. Afterward I became uncomfortable when double-checking the math, and so I am retracting my support. One smaller issue was noted in my post regarding CO2 having a larger molecular weight (44) than the average air molecule (29); thus calculating CO2 mass in the atmosphere should apply a ratio of 1.52. That does not in itself materially affect the finding.

The Table produced by Paul Noel is shown above.

The more substantial issue is having equivalent units of mass for comparing atmospheric CO2 and human emissions of CO2. The proper unit is Gigatons since that is how emissions are reported. One GT is defined as 1 billion (10^9) metric tons and 1 metric ton is 1000 (10^3) kilograms. So one GT is 10^12 kg.

The mass of the atmosphere is calculated using air pressure and area, with a little variation in the results obtained by researchers. A typical standard is 5.148 x 10^18 kg. That converts to 5148000 x 10^12 kg or 5148000 GT. Once that value is plugged into the table, the results are very different. I had recognized that 10^15 was not GT, but thought it was only mislabeling. Later I found that the comparison was distorted in the process.
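The conversions above can be checked in a few lines. The sketch below derives the atmospheric CO2 mass in GT using the 44/29 weight ratio from the correction; the 410 ppmv concentration is my illustrative assumption, not a figure from the post:

```python
ATMOSPHERE_MASS_KG = 5.148e18   # standard estimate of total atmospheric mass
KG_PER_GT = 1e12                # 1 GT = 10^9 metric tons = 10^12 kg

atmosphere_gt = ATMOSPHERE_MASS_KG / KG_PER_GT  # 5,148,000 GT

# Convert a volume fraction (ppmv) to a mass fraction using the
# molecular-weight ratio 44/29 (about 1.52) noted in the correction.
ppmv_co2 = 410                  # assumed present-day concentration
mass_fraction = ppmv_co2 * 1e-6 * (44 / 29)
co2_mass_gt = mass_fraction * atmosphere_gt

print(f"Atmosphere: {atmosphere_gt:,.0f} GT")
print(f"CO2 mass:   {co2_mass_gt:,.0f} GT")    # roughly 3,200 GT
```

Against a total of roughly 3,200 GT of CO2 in the atmosphere, annual emissions on the order of tens of GT are about one percent, which is why getting the units consistent matters so much to the comparison.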

My revised Table 1 applies a weighted calculation for CO2 compared to average air molecules and derives masses and percentages using GT consistently.

It is clear that the claim of 99% sequestration of emissions is an artifact of faulty math. A better approximation is 57% for emissions reduced by natural fluxes.

This does not mean we should fear CO2. For one thing the greening of the planet and record annual crop yields are a great benefit from both warming and higher atmospheric CO2. It is also the case that estimates of human emissions (fraught with uncertainty) are small compared to natural fluxes, which are estimated with error ranges exceeding emission amounts. Further, the sensitivity of temperature to rising CO2 is assumed to lie in a wide range.

My views on the CO2 cycle are in the posts:

CO2 Fluxes, Sources and Sinks

Who to Blame for Rising CO2?

Oceanic Forcing Rules, Not Radiative Effects

Roy Clark explains how climate science built a house of cards obscuring the actual physical mechanisms driving observed climate fluctuations. Text of his recent post at WUWT in italics with my bolds.

The basic issue is that there is no such thing as a climate sensitivity to CO2 or any other so-called ‘greenhouse gas’. Radiative forcing can politely be described as climate theology – how does a change in the atmospheric concentration of CO2 change the number of angels that may dance on the head of a climate pin? The climate equilibrium assumption was used by Arrhenius in his 1896 estimate of global warming. In that paper he traced the concept back to Pouillet in 1838. Speculation that changes in atmospheric CO2 concentration could somehow cause an Ice Age started with John Tyndall in 1863.

To get to the bottom of the radiative forcing nonsense it is necessary to go back to Fourier in 1827 and start over with the real physics of the surface energy transfer.

The essential part that almost everyone seems to have missed in this paper is the time delay or phase shift between the solar flux and the surface temperature response. The daily phase shift in MSAT can reach 2 hours and the seasonal phase shift can reach 6 to 8 weeks. This is clear evidence for non-equilibrium thermal storage. The same kind of non-equilibrium phase shift on different time and energy scales occurs with electrical energy storage in capacitors and inductors in AC circuits – low pass filters, tank circuits etc.

The equilibrium average climate assumption was used by Manabe and Wetherald (M&W) in their 1967 climate modeling paper. They abandoned physical reality and created global warming as a mathematical artifact of their input modeling assumptions. The rest of the climate modelers followed like lemmings jumping off a cliff. In the 1979 Charney report, no one looked at the underlying assumptions. The radiative transfer results were reasonable – for the total long wave IR (LWIR) flux at the top and bottom of the atmosphere – and the mathematical derivation of the flux balance equations was correct. The increase in surface temperature was the a priori expected result. Radiative forcing and the invalid equilibrium flux balance equations were discussed by Ramanathan and Coakley in 1978. The prescribed mathematical ritual of radiative forcing in climate models was described by Hansen et al in 1981. They also introduced a fraudulent ‘slab’ ocean model and did a bait and switch from surface to weather station temperatures.

The LWIR flux interacts with the surface, not the weather station thermometer at eye level above the ground.

Radiative forcing is still an integral part of IPCC climate models [IPCC, 2013]. Physical reality has been abandoned in favor of mathematical simplicity. Among other things, M&W threw out the Second Law of Thermodynamics along with at least 4 other Laws of Physics. The underlying requirement for climate stability is that the absorbed solar heat be dissipated by the surface. This requires a time dependent thermal and/or humidity gradient at the surface.

The starting point for any realistic climate system is that the upward LWIR flux from the top of the atmosphere does not define an equilibrium average temperature of 255 K. Instead it is the cumulative cooling flux emitted from multiple levels down through the atmosphere. The upward emission from each level is then attenuated by the LWIR absorption/emission along the upward path to space [Feldman et al, 2008]. Another fundamental error in the radiative forcing argument is the failure to consider the molecular line width effects. Part of this was due to the band model simplifications that are still used in the climate models to speed up the calculations. The IR flux through the atmosphere consists of absorption and emission from many thousands of overlapping molecular lines, mainly from CO2 and water vapor [Rothman et al, 2005].

As the temperature and pressure decrease with altitude, these lines become narrower and transmission ‘gaps’ open up between the lines. This produces a gradual transition from absorption/emission to a free photon flux to space.

The radiative forcing argument has also obscured the fact that the heat lost to space is replaced by convection, not LWIR radiation. The troposphere is an open cycle heat engine that transports heat from the surface by moist convection. It is stored in the troposphere as gravitational potential energy. As a high altitude air parcel cools by LWIR emission, it contracts and sinks back down through the troposphere. The upward LWIR flux to space is decoupled from the surface by the linewidth effects. The downward LWIR flux from the upper troposphere cannot reach the surface and cause any kind of change in the surface temperature. Almost all of the downward LWIR flux reaching the surface originated from within the first 2 km layer of the troposphere, and about half of this comes from the first 100 m layer.

Figure 2: Thermal reservoirs, surface energy transfer and thermal storage (schematic). The surface is heated by the sun and cooled by a combination of net LWIR emission, convection and evaporation. Heat is stored below the surface and released over a range of time scales. There is no ‘equilibrium average temperature’. Source: Roy Clark


Near the surface, the lines in the main bands for CO2 and water vapor are sufficiently broadened that they merge into a continuum. There is an atmospheric transmission window in the 8 to 12 micron spectral region that allows part of the surface LWIR flux to escape directly to space. The magnitude of this transmitted cooling flux varies with cloud cover and humidity. The downward LWIR flux to the surface from the broad molecular emission bands provides an LWIR exchange energy that ‘blocks’ the upward LWIR flux from the surface. Photons are exchanged without any net heat transfer.

In order for the surface to cool, it must heat up until the excess absorbed solar heat is removed by moist convection. This is the real cause of the so-called ‘greenhouse effect’.

It requires the application of the Second Law of Thermodynamics to the surface exchange energy. There is no equilibrium average climate so there can be no average ‘greenhouse effect temperature’ of 33 K. Instead, the greenhouse effect is just the downward LWIR flux from the lower troposphere to the surface. It can be defined as the downward flux or as an ‘opacity factor’ [Rorsch, 2019]. This is the ratio of the downward flux to the total blackbody surface emission.
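The ‘opacity factor’ defined above is simple to compute. In the sketch below, the 288 K surface temperature and 333 W/m2 downward LWIR flux are illustrative global-mean-like values of my own choosing, not figures from Clark's post:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def opacity_factor(downward_lwir: float, surface_temp_k: float) -> float:
    """Ratio of downward LWIR flux to total blackbody surface emission."""
    blackbody = SIGMA * surface_temp_k ** 4   # sigma * T^4
    return downward_lwir / blackbody

# Assumed illustrative values: 333 W/m2 downward flux, 288 K surface.
f = opacity_factor(333.0, 288.0)
print(f"Opacity factor: {f:.2f}")  # about 0.85
```

With these assumed inputs the blackbody emission is about 390 W/m2, so the ratio comes out near 0.85; the factor varies locally with humidity and cloud cover.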

The surface temperature has to be calculated at the surface using the surface flux balance. The change in local surface temperature is determined by the change in heat content or enthalpy of the local surface thermal reservoir divided by the specific heat [Clark, 2013a, b]. The LWIR flux cannot be separated from the other flux terms and analyzed independently. The land and ocean surface behave differently and have to be considered separately.

Over land, the various flux terms interact with a thin surface layer. During the day, the surface heating produces a thermal gradient both with the cooler air layer above and the subsurface layers below. The surface-air gradient drives the convection or sensible heat flux. The subsurface thermal gradient conducts heat into the first 0.5 to 2 meter layer of the ground. Later in the day this thermal gradient reverses and the stored heat is released back into the troposphere. The thermal gradients are reduced by evaporation if the land surface is moist. An important consideration in setting the land surface temperature is the night time convection transition temperature at which the surface and surface air temperatures equalize. Convection then essentially stops and the surface continues to cool more slowly by net LWIR emission. This convection transition temperature is reset each day by the local weather conditions.

The ocean surface is almost transparent to the solar flux. Approximately 90% of the solar flux is absorbed within the first 10 m ocean layer. The surface-air temperature gradient is quite small, usually less than 2 K. The excess absorbed solar heat is removed through a combination of net LWIR emission and wind driven evaporation. The penetration depth of the LWIR flux into the ocean surface is 100 µm or less and the evaporation involves the removal of water molecules from a thin surface layer [Hale and Querry, 1972]. These two processes combine to produce cooler water at the surface that sinks and is replaced by warmer water from below. This is a Rayleigh-Benard convection process, not simple diffusion. There are distinct columns of water moving in opposite directions. The upwelling warmer water allows the wind driven ocean evaporation to continue at night. As the cooler water sinks, it carries with it the surface momentum or linear motion produced by the wind coupling at the surface. This establishes the subsurface ocean gyre currents. Outside of the tropics there is a seasonal phase shift that may reach 6 to 8 weeks.

This phase shift can only occur with ocean solar heating. The heat capacity of the land thermal reservoir is too small to produce this effect. In many parts of the world, the prevailing weather systems are formed over the ocean. The temperature changes related to the ocean surface are stored by the weather system as the bulk surface air temperature and this information can be transported over very long distances. Such ocean related phase shifts can be found in the daily climate data for weather stations in places like Sioux Falls SD.

Over the oceans, the wind driven evaporation can never exactly balance the solar heating. This produces the ocean oscillations such as the ENSO, PDO and AMO.

These surface temperature changes are incorporated into the various weather systems and can be seen in the long term climate data, particularly the minimum MSAT. The whole global warming scam is based on nothing more than the last AMO warming cycle coupled into the weather station data [Akasofu, 2010].

Figure 1: Change in wind speed (cm s-1) needed to restore the ocean surface cooling flux when the downward LWIR flux is increased by 2 W m-2. Both the fixed and the temperature-dependent LWIR window flux cases are shown.

A fundamental failure of the radiative forcing argument is the lack of any error analysis. Over the last 200 years, the atmospheric CO2 concentration has increased by a little over 120 ppm. This has produced an increase in the downward LWIR flux at the surface of about 2 W m-2 [Harde, 2017]. Over the oceans this is coupled into the first 100 micron layer of the ocean surface. Here it is fully coupled to the wind driven evaporation. Using long term ocean evaporation data from Yu et al, 2008, an approximate estimate of the evaporation sensitivity within the ±30 degree latitude region is 15 Watts per square meter for each change in wind speed of 1 meter per second.

This means that the radiative forcing from an increase of 120 ppm in the CO2 concentration amounts to a change in wind speed of about 13 CENTIMETERS per second.

This is at least two orders of magnitude below the normal variation in ocean wind speed. Similarly, a reasonable estimate of the bulk convection coefficient for dry land is 20 Watts per square meter per degree C difference between surface and air temperature. Here a 2 W m-2 change in convection requires a change of 0.1 C in the surface air thermal gradient.
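The two equivalences above are straightforward arithmetic; a minimal check of the figures as quoted:

```python
# Claimed increase in downward LWIR flux from ~120 ppm more CO2:
forcing = 2.0            # W/m2

# Ocean: ~15 W/m2 of evaporative cooling per 1 m/s of wind speed.
wind_sensitivity = 15.0  # W/m2 per (m/s)
dv = forcing / wind_sensitivity
print(f"Equivalent wind-speed change: {dv * 100:.0f} cm/s")  # ~13 cm/s

# Dry land: ~20 W/m2 of convection per degree C of surface-air gradient.
conv_coeff = 20.0        # W/m2 per K
dt = forcing / conv_coeff
print(f"Equivalent gradient change: {dt:.1f} C")  # 0.1 C
```

Both results simply divide the claimed forcing by the stated sensitivity, which is the whole of the comparison being made in the text.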

Once the physics of the time dependent surface energy transfer is restored, global warming and radiative forcing disappear into the realm of computerized climate fiction.

The topic of radiative forcing was recently reviewed in detail by Ramaswamy et al [2019] as part of the American Meteorological Society monographs series. This review provides a good start for a scientific and criminal fraud investigation into the climate modeling fraud. To begin, the scientific community should demand that this particular monograph be retracted and all further work on equilibrium climate modeling be stopped. Any climate model that uses radiative forcing is by definition invalid. There is no need to try to validate the computer code of any equilibrium climate model. The use of radiative forcing alone is sufficient to render the results totally useless. These modelers are not scientists, they are mathematicians playing with a set of physically meaningless equations. They left physical reality behind when they made the climate equilibrium assumption. They are now members of a rather unpleasant quasi-religious cult. They believe that the divine spaghetti plots created by the computer climate models come from a higher authority than the Laws of Physics.

Figure 12: The ocean surface energy balance in the tropical warm pool. The evaporative surface cooling is strongly dependent on the wind speed. Source: Roy Clark

Any realistic climate model must correctly predict the changes in ocean temperature caused by the ocean oscillations. These must then be used to predict the changes in the weather station data.

This must include the minimum and maximum surface air temperatures, surface temperatures and the phase shifts. There are no forcings, feedbacks or climate sensitivities, just time dependent rates of heating and cooling. It is time to welcome the Second Law of Thermodynamics back to the climate models. It has always been part of the Earth’s climate system. [See linked post for references]

Roy Clark’s research studies are available at his website Ventura Photonics

See Also Bill Gray: H20 is Climate Control Knob, not CO2


Q&A Why So Many Climate Skeptics

An extensive and documented reply is given at Quora from John Walker, former Laboratory Medical Director/Pathologist (1984-2011). Excerpts in italics with my bolds. (Red text is a link.)

Perhaps you really mean “Why are there so many catastrophic anthropogenic CO2 global warming (CAGW) skeptics?”

There are very few individuals who are skeptical that the climate changes. But there are millions and millions of individuals (and growing), who are quite skeptical that human emissions of CO2 are causing apocalyptic global warming, including many scientists, climate scientists, Nobel Laureates, and other highly educated individuals.

The reason for this is multi-factorial and very voluminous. The following presents condensed summaries of 12 of the reasons that so many individuals have become highly skeptical of the theory of CAGW. Even though it is rather long, it represents only a small portion of the information, studies and references engendering skepticism of this unproven assemblage of hypotheses. Most of it is taken from my 250+ page treatise on the fallacies of the theory of CAGW.

1 . First and foremost is the fact that there is currently NO experimental evidence validating the theory of CAGW. Rather CAGW is a collection of unproven/unvalidated hypotheses, which can only be accepted by faith. However, most of these hypotheses have been shown to contain fallacies and/or misinformation.

2 . The “science” behind the theory of CAGW has not been sufficiently rigorous, non-biased, or open, and, crucially, does not comply with the tenets of the scientific method since it is not subject to potential falsification by testing/experiment.

3 . The theory of CAGW is based entirely upon:

a . Atmospheric CO2 versus temperature correlation studies (which are not proof of cause and effect, are partially based upon fictitious/manipulated/estimated temperature data [as in the “hockey stick” graph and altered NASA/NOAA/CRU data], and actually do not correlate all that well):

The original MBH “hockey stick” graph compared to a corrected version produced by MacIntyre and McKitrick after undoing Mann’s errors.

b . Partially altered, manipulated, selective, imprecise, incomplete, extrapolated and unverified/fictitious temperature data (as revealed by Climategate, the “Hockey Stick” confutation, and other sources), with frequent measurements selected from urban concrete and asphalt hot spots, naturally producing higher temps, which increase in number over time due to continued urban growth.

“Government reports, writers of opinion pieces, and bloggers posting graphs purporting to show rising or record air temperatures or ocean heat, are misleading you. This is not actual raw data. It is plots of data that have been ‘adjusted’ or ‘homogenized’ (i.e., manipulated) by scientists – or it is output from models that are based on assumptions, many of them incorrect. UK Meteorological Office researcher Chris Folland makes no apologies for this. ‘The data don’t matter. We’re not basing our recommendations [for reductions in carbon dioxide emissions] upon the data. We’re basing them upon the climate models’.” Climate: The Real ‘Worrisome Trend’ (Part I: Faulty Science) – Master Resource

c . Unreliable computer models (based upon partially altered, manipulated, selective, imprecise, incomplete, extrapolated and unverified/fictitious temperature data, woefully inadequate/incomplete input data regarding thousands of climate parameters, and “educated guesses” about the climate sensitivity to atmospheric CO2), which can be programmed to reveal whatever result the programmer desires, and many of which have already been proven incorrect or exaggerated.

d . Insufficient understanding of the role and relative magnitude/sensitivity of CO2 as a “greenhouse” gas, and the unproven (and many would say ludicrous) hypothesis that the earth’s atmosphere (with all its enormity, complexity, multiple layers, convection, layers of exceedingly cold air [as low as -60F] and even colder adjacent outer space [-455F] as well as extremely hot (high kinetic energy) upper layers, huge underlying oceans with complex currents and temperature fluctuations, varying molecular compositions, stratospheric ozone [which absorbs both UV and IR radiation], variable humidity, massive heat-absorbing evaporative processes, extensive cloud formations, variably intense winds, the jet stream, varying barometric pressures, cosmic ray effects, and NO glass ceiling or walls) functions identically to a glass-enclosed greenhouse. (Yes, that does seem rather ludicrous!)

5 . Promoters of the theory of CAGW falsely claim there is a 97% “consensus” among climate scientists that the theory is true. Indeed the 97% figure is false and based upon poorly contrived surveys/studies by CAGW promoters and “peer-reviewed” (i.e., “pal-reviewed”) by other CAGW promoters. If one reads the original papers where the 97% figure was contrived, it is quite easy to see how poorly designed and biased these surveys were. All of these surveys/studies have been debunked by multiple statistical analyses and better defined and controlled surveys and studies, revealing that less than half of climate scientists believe in the theory of CAGW.

“Claims that a ‘consensus’ exists among climate experts regarding the causes of the modest warming of the past century are contradicted by thousands of independent scientists.” – International Climate Science Coalition Core Principles

The fact is that tens of thousands of scientists, including climate scientists and many Nobel Laureates, do NOT accept the theory of CAGW:[Numerous examples are provided in linked article]

6 . Thus, in essence, CAGW promoters are demanding we accept their conclusions based upon consensus and faith (normally antithetical to most modern liberals’ thinking), just as theocrats and other religious fundamentalists argue. But the inability to follow the rigorous scientific method by the use of repeatable double blind, controlled experiments for validation does not justify acceptance of a theory without such experiments simply because they cannot be performed. It may be fine to accept beliefs by consensus or even by faith on personal or other matters which do not materially affect other people. But those pushing the theory of CAGW are demanding draconian changes affecting everyone on the planet, such as diverting tens of trillions of dollars from solving known existing existential problems (poverty, hunger, violence, war, infectious disease, cancer research, pollution and over-fishing of our oceans, lack of adequate sanitation, education and clean water, etc.) in order to “fight” an unproven future potentially existential problem with costly methods which have not been proven effective, replacing capitalism and democracy with global socialism and authoritarian one world government, and redistributing global wealth. Such actions would be premature, irresponsible, illogical, socialistic, cruel and lead to massive morbidity and mortality!

7 . In addition to the “97% consensus” falsehood, CAGW promoters and alarmists have promulgated many other lies, failed predictions (for both catastrophic global cooling and global warming) based upon their flawed computer models, abundant misinformation and disinformation. If the theory of CAGW is true, why the need to prevaricate? Anyone who is aware of this widespread sophistry must become skeptical of the theory. [Many examples are given in the linked article]

8 . Another clue that the theory of CAGW is fallacious is the fact that many promoters and alarmists so frequently resort to ad hominem attacks or demand that skeptics be banned from discussions. The former is another logical fallacy, which is used when the promoter has no real evidence to back up his/her claim and is unable to respond in a logical and respectful manner. They feel cornered because of their lack of intelligent retort. They hope that such attacks will make the skeptic afraid to make further comments.

Banning and refusing to hear/discuss information contrary to one’s dogmatic belief is characteristic of a fundamentalist who has been indoctrinated, often with propaganda. It is characteristic of religious fanaticism, not science. While it is prohibited under Quora policy, you will discover that some CAGW alarmist authors just can’t stop themselves from indulging in this fallacious and destructive tactic.

Again, this engenders more skepticism in their beliefs.

9 . Climategate. Climategate was a notorious event initiated by leaked emails in 2009 (with a second batch released in 2011) allegedly revealing the deceit and deception practiced by a prominent group of British (Climatic Research Unit or CRU) and American climate researchers (including Michael Mann of Penn State) who promote the theory of CAGW and supply much of the climate and temperature data and reports to the IPCC. The latter gives this group tremendous influence regarding the UN’s climate change agenda.

“There are three threads in particular in the leaked documents which have sent a shock wave through informed observers across the world. Perhaps the most obvious, as lucidly put together by Willis Eschenbach (see McIntyre’s blog Climate Audit and Anthony Watt’s blog Watts Up With That ), is the highly disturbing series of emails which show how Dr Jones and his colleagues have for years been discussing the devious tactics whereby they could avoid releasing their data to outsiders under freedom of information laws.

“But the question which inevitably arises from this systematic refusal to release their data is – what is it that these scientists seem so anxious to hide? The second and most shocking revelation of the leaked documents is how they show the scientists trying to manipulate data through their tortuous computer programmes, always to point in only the one desired direction – to lower past temperatures and to ‘adjust’ recent temperatures upwards, in order to convey the impression of an accelerated warming. This is what Mr McIntyre caught Dr Hansen doing with his GISS temperature record last year (after which Hansen was forced to revise his record), and two further shocking examples have now come to light from Australia and New Zealand.

“The third shocking revelation of these documents is the ruthless way in which these academics have been determined to silence any expert questioning of the findings they have arrived at by such dubious methods – not just by refusing to disclose their basic data but by discrediting and freezing out any scientific journal which dares to publish their critics’ work. It seems they are prepared to stop at nothing to stifle scientific debate in this way, not least by ensuring that no dissenting research should find its way into the pages of IPCC reports.”

10 . The IPCC, which is the primary authority driving the CAGW agenda, is a political body, not a scientific body. Its originating mission was to find human causes of climate change.

“It is to specifically find and report a human impact on climate, and thereby make a scientific case for the adoption of national and international policies that would supposedly reduce that impact.

[Thus, the IPCC has been directed to attribute the cause, or at least significant portions of the cause, of climate change to human influences. If it does not make claims of significant human influence, its function would be obviated and its members likely out of their UN jobs!]

The IPCC is also designed to put political leaders and bureaucrats rather than scientists in control of the research project. It is a membership organization composed of governments, not scientists. The governments that created the IPCC fund it, staff it, select the scientists who get to participate, and revise and rewrite the reports after the scientists have concluded their work. Obviously, this is not how a real scientific organization operates.

11 . Much of the motive behind the promotion of the theory of CAGW is driven by money, power, and politics. Socialists, globalists and radical environmentalists are using the fear of CAGW to convince the world to replace capitalism with authoritative global socialism. The climate change industry now exceeds $1.5 trillion. If “cap-and-trade” legislation is ever passed in the US, as Al Gore, Goldman-Sachs, and other wealthy investors hope, they could potentially make $trillions via the buying and selling of carbon credits on a commodities exchange. Gore and Goldman tried desperately to get such legislation passed during the Obama administration. They were major investors in the Chicago Climate Exchange, which would have been the commodities exchange for carbon credits.

12. There are better alternative theories regarding the mechanisms that drive the earth’s climate. Most of the theories derive from the observation that the earth’s climate goes through multiple, well-defined cycles (and cycles within cycles) of warming and cooling, and has done so for millennia. They generally involve various changes in total solar radiation reaching the earth and the adiabatic heating of the earth’s atmosphere due to atmospheric pressure. The theory of Cosmoclimatology is gaining credence among many climate scientists and astrophysicists.

Many other theories about the cause of climate change also involve solar influences. Consider the extreme temperature changes caused by variations in the amount of solar radiation the earth receives. The tilt of the earth’s axis alone produces four seasons, with temperatures in many locations varying from over 100 F in summer to minus 20 F (or even colder) in winter. Temperatures increase dramatically simply by moving from higher latitudes toward the equator, due to differences in solar radiation. Day and night temperatures can easily differ by 30 F or more within a 12-hour span. Temperatures on a sunny summer day can drop by 5 F in a matter of seconds if a cloud passes overhead. Compare that to the claimed increase in global temperature of 1.4 F over 150 years supposedly caused by anthropogenic CO2.

In addition to the multiple periodic clusterings creating grand solar minima and maxima, there are multiple additional cyclic changes of solar activity, which are being elucidated with continuing climate research (another reason to stop making the absurd, counter-productive and pseudoscientific claim that the “science is settled”). There are centennial and bicentennial cycles of grand solar minima and maxima, along with many other cyclic processes of longer time intervals related to celestial changes:

There have been numerous glacial cycles, each lasting an average of 100,000 years. They coincide with the Milankovitch eccentricity cycle of the earth’s orbit around the sun. Within each cycle is a period of marked global cooling (lasting from 70,000 to 90,000 years) in which immense glaciers cover much of the land surface, and much of the ocean surface freezes. The cold periods are followed by interglacial (warm) periods lasting from 10,000 to 30,000 years. Some climatologists believe that the Earth is on the downward slope of the current interglacial period and headed towards the next ice age, which could arrive in the next several thousand years. During this downward slope, global temperatures are expected to slowly decrease with intermittent warmer and cooler trends. (See Milankovitch cycles – Wikipedia.)

And there is so much more but not enough time and space to present it all.

[Figure: September Arctic sea ice, 2007 to 2019]

Munich Climate Conference 2019


Antifa thugs outside Munich Conference Center.

Thanks to Andreas Müller for writing at his blog hintermbusch on four key presentations at the EIKE Climate Conference on Nov. 23, 2019. As many have read, eco-terrorists forced the sessions out of the scheduled venue, but the gatherings went on elsewhere.  So much for dialogue in search of scientific truth. Here are some excerpts in italics with my bolds to encourage readers to read his informative report. (link in red above).

In this blog post, I summarize these lectures and add links to the video clips for you to follow the lectures on your own and in full detail (Only the first talk was in German and is not easily accessible for most of the international public).

Christian Schlüchter, Switzerland

Prof. em. Christian Schlüchter is a geologist and has studied the glaciers of the Alps in great detail. He reports the findings of very old timber in and below glaciers and what those trees taught him about the glacial epochs of the Alps.

One of Schlüchter’s most striking finds is this huge tree trunk, found at a glacier tongue (see the most beautiful glacier snout behind!).

This place is nowadays clearly above the limit of vegetation, and still there is this tree, which attracted Schlüchter’s curiosity and fuelled his research: How old is it? Where and under what conditions did it grow, and why is it here?

The key message from his slides is that all of these records were left in times when the alpine glacier extent was smaller than in 2005.

Warm periods: more life

The timberline was at least 300 meters higher, which indicates temperatures at least 1.8° C higher. An example of this is Hannibal, who managed to cross the Alps with elephants because the higher regions were much less covered by ice than in recent centuries.
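The 1.8° C figure is presumably derived from the standard atmospheric lapse rate (a commonly used value is about 0.6° C of cooling per 100 m of altitude, so a timberline 300 m higher implies correspondingly warmer conditions at a given elevation). A quick check under that assumption:

```latex
\Delta T \approx 300\,\mathrm{m} \times \frac{0.6\,^{\circ}\mathrm{C}}{100\,\mathrm{m}} = 1.8\,^{\circ}\mathrm{C}
```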

Warm periods: more civilization

As his summary, Schlüchter gave the following facts:

  • For more than 50% of the last 11,000 years, alpine glaciers were smaller than in 2005
  • He dubbed this fact the “dominance of the Hannibalistic world”
  • Alpine glaciers have shown huge dynamics
  • Events of glacier growth were fast and short
  • The Little Ice Age (from the end of the Medieval Warm Period to about 1850) was the longest glacier extension since the last ice age 12,000 years ago
  • Each warming followed accelerated glacier growth

Nicola Scafetta, Italy

Nicola Scafetta is an Italian physicist and climate modeller who works at the University of Naples. He is well-known for his criticism of IPCC climate models and, of course, not uncontested for creating his own climate models and comparing them to IPCC results. His talk in Munich again took aim at the weaknesses and faults of IPCC climate models.

He notes that the models tend to reproduce the notorious “hockeystick” shape and therefore fail to capture the medieval warming bump! (This is an echo of a very old climate change controversy, documented here.)

He also shows similar failures for longer periods and demonstrates that the ultimate reason for this is that the models are incapable of reproducing climate variations that follow periodic solar activity.

Therefore, ten years ago, he contrasted his own model forecasts, which take those variations into account (black line), with those of the IPCC (dashed blue line), which leads to the climax of the talk. Below is an updated graph of his 2010 projections (cyan color) compared to observations and to IPCC models (green).

Nir Shaviv, Israel

Nir Shaviv is a well-known but moderate climate skeptic. Along with Christian Schlüchter’s lecture on alpine glaciers, his talk was my main reason to choose this half day from the conference program. Shaviv continued where Scafetta had ended and discussed the IPCC world and its errors.

For a start, he presented its lines of thinking:

Next, he discussed the validity of each building block, marked the errors and deconstructed the standard picture. He emphasized that the climate sensitivity of CO2 is a priori unknown and largely overestimated by the IPCC. He judged it a severe shortcoming that forcings other than greenhouse gases are ruled out by the IPCC: the sun!

He pointed out that the IPCC overestimates the climate sensitivity of CO2 at the expense of solar influences. While IPCC modelers managed to hide this for 20th century data, it will lead to a serious overestimate of temperatures in the 21st century, when solar influences will be cooling. He therefore expects a much lower temperature rise than predicted by the IPCC, a modest (and manageable) one:

Using physical arguments, Shaviv manages to set an upper limit on the climate sensitivity of CO2. This should convince the audience to accept additional forcings behind the 20th century temperature rise. Like Scafetta, he points to a solar-driven forcing.

Henrik Svensmark, Denmark

Henrik Svensmark is a Danish physicist and climate researcher. Like other speakers, he reports that he finds it more and more difficult to raise funds for his research because its results contradict the IPCC position:

Through experiments and also correlation measurements, Svensmark has investigated this mechanism of cloud creation by cosmic rays. This is interesting because IPCC researchers cite reduction of cloud creation by global warming as a possible positive feedback mechanism that could escalate global warming to catastrophic levels. Svensmark assumes that it provides a way in which solar activity, via the solar wind, has a climate impact on the earth that adds to the direct impact of solar irradiation on the earth.

Scientific summary

As a physicist I found all four talks interesting. They demonstrate that real and sophisticated science on climate was presented at that conference. Nothing suggests that this is less serious or valid science than anything I experienced during my time as a master’s and Ph.D. student in physics. The presented results make it seem rather improbable that the climate models of the IPCC are complete, beyond any doubt, or worth a 97% consensus.


For more on Scafetta’s theory see 2019 Update Scafetta vs. IPCC: Dueling Climate Theories

For more on Svensmark’s theory see The cosmoclimatology theory

Regarding solar influence on climate due to orbital mechanics see this short informative video by Bill Sellers: