Global Warming Abates in Autumn

Hot, Hot, Hot.  You will have noticed that the term “climate change” is now synonymous with “summer”.  Since the northern hemisphere is where most of the world’s land, people and media are located, two typical summer months (June was not so hot) have been depicted as the fires of hell awaiting any and all who benefit from fossil fuels. If you were wondering what the media would do, apart from obsessing over the many small storms this year, you are getting the answer.

Fortunately, Autumn is on the way and already bringing cooler evenings in Montreal where I live. Once again open windows provide fresh air for sleeping, while mornings bring condensation and sometimes frost. This year’s period of “climate change” is winding down, unless of course we get some hurricanes in the next two months. Below is a repost on seasonal changes in temperature and climate for those who may have been misled by media reports of a forever hotter future.

[Note:  The text below refers to human migratory behavior now prohibited because, well Coronavirus.]

[Image: geese in V formation]

Autumnal Climate Change

Seeing a lot more of this lately, along with hearing the geese honking. And in the next month or so, we expect that trees around here will lose their leaves. It definitely is climate change of the seasonal variety.

Interestingly, the science on this is settled: It is all due to reduction of solar energy because of the shorter length of days (LOD). The trees drop their leaves and go dormant because of less sunlight, not because of lower temperatures. The latter is an effect, not the cause.

Of course, the farther north you go, the more remarkable the seasonal climate change. St. Petersburg, Russia has their balmy “White Nights” in June when twilight is as dark as it gets, followed by the cold, dark winter and a chance to see the Northern Lights.

And as we have been monitoring, the Arctic ice has been melting from sunlight in recent months, but is already building again in the twilight, to reach its maximum in March under the cover of darkness.

We can also expect in January and February for another migration of millions of Canadians (nicknamed “snowbirds”) to fly south in search of a summer-like climate to renew their memories and hopes. As was said to me by one man in Saskatchewan (part of the Canadian wheat breadbasket region): “Around here we have Triple-A farmers: April to August, and then Arizona.” Here’s what he was talking about: Quartzsite Arizona annually hosts 1.5M visitors, mostly between November and March.

Of course, this is just North America. Similar migrations occur in Europe, and in the Southern Hemisphere the climate is changing in the opposite direction: it is currently springtime there. Since it is so obviously the sun causing this seasonal change, the question arises: does sunlight vary on longer than annual timescales?

The Solar-Climate Debate

And therein lies a great, enduring controversy between those (like the IPCC) who dismiss the sun as a driver of multi-decadal climate change, and those who see a connection between solar cycles and Earth’s climate history. One side can be accused of ignoring the sun because of a prior commitment to CO2 as the climate “control knob”.

The other side is repeatedly denounced as “cyclomaniacs” in search of curve-fitting patterns to prove one or another thesis. It is also argued that a claim of 60-year cycles cannot be validated with only 150 years or so of reliable data. That point has weight, but it is usually made by those on the CO2 bandwagon, despite temperature and CO2 trends correlating for only two decades during the last century.

One scientist in this field is Nicola Scafetta, who presents the basic concept this way:

“The theory is very simple in words. The solar system is characterized by a set of specific gravitational oscillations due to the fact that the planets are moving around the sun. Everything in the solar system tends to synchronize to these frequencies beginning with the sun itself. The oscillating sun then causes equivalent cycles in the climate system. Also the moon acts on the climate system with its own harmonics. In conclusion we have a climate system that is mostly made of a set of complex cycles that mirror astronomical cycles. Consequently it is possible to use these harmonics to both approximately hindcast and forecast the harmonic component of the climate, at least on a global scale. This theory is supported by strong empirical evidences using the available solar and climatic data.”

He goes on to say:

“The global surface temperature record appears to be made of natural specific oscillations with a likely solar/astronomical origin plus a noncyclical anthropogenic contribution during the last decades. Indeed, because the boundary condition of the climate system is regulated also by astronomical harmonic forcings, the astronomical frequencies need to be part of the climate signal in the same way the tidal oscillations are regulated by soli-lunar harmonics.”

He has concluded that “at least 60% of the warming of the Earth observed since 1970 appears to be induced by natural cycles which are present in the solar system.” For the near future he predicts a stabilization of global temperature and cooling until 2030-2040.
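Scafetta’s approach amounts to decomposing the temperature record into harmonics at assumed astronomical periods. A minimal sketch of that idea (not his actual model — the periods, amplitudes, and data here are synthetic, chosen for illustration) fits fixed-period sinusoids plus a trend by least squares:

```python
import numpy as np

# A toy illustration (not Scafetta's actual model): fit fixed-period
# harmonics plus a linear trend to a synthetic "temperature" series,
# then read back the amplitude of the assumed 60-year cycle.
rng = np.random.default_rng(0)
years = np.arange(1880, 2021)
t = (years - years[0]).astype(float)

# Synthetic record: 60-yr and 20-yr cycles, a small trend, and noise
series = (0.12 * np.sin(2 * np.pi * t / 60)
          + 0.04 * np.sin(2 * np.pi * t / 20)
          + 0.005 * t
          + rng.normal(0, 0.05, t.size))

# Design matrix: intercept, trend, and a sine/cosine pair per period
periods = [60.0, 20.0]
cols = [np.ones_like(t), t]
for p in periods:
    cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
X = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(X, series, rcond=None)
fitted = X @ coef
amp60 = np.hypot(coef[2], coef[3])  # recovered 60-yr amplitude
print(f"recovered 60-yr amplitude: {amp60:.3f} (true value 0.120)")
```

Fitting recovers the planted 60-year amplitude closely — which also illustrates the critics’ point: a least-squares fit will always find *some* amplitude at any period you assume, so the real test is whether the fitted harmonics hold up out of sample.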

For more see Scafetta vs. IPCC: Dueling Climate Theories

A Deeper, but Accessible Presentation of Solar-Climate Theory

I have found this presentation by Ian Wilson to be persuasive while honestly considering all of the complexities involved.

The author raises the question: What if there is a third factor that not only drives the variations in solar activity that we see on the Sun but also drives the changes that we see in climate here on the Earth?

The linked article is quite readable by a general audience, and comes to a similar conclusion as Scafetta above: There is a connection, but it is not simple cause and effect. And yes, length of day (LOD) is a factor beyond the annual cycle.

Click to access IanwilsonForum2008.pdf

It is fair to say that we are still at the theorizing stage of understanding a solar connection to earth’s climate. And at this stage, investigators look for correlations in the data and propose theories (explanations) for what mechanisms are at work. Interestingly, despite the lack of interest from the IPCC, solar and climate variability is a very active research field these days.

For example, Svensmark now has a Cosmoclimatology theory supported by empirical studies, described in more detail at the link.

A summary of recent studies is provided at NoTricksZone: Since 2014, 400 Scientific Papers Affirm A Strong Sun-Climate Link

Ian Wilson has much more to say at his blog: http://astroclimateconnection.blogspot.com.au/

Once again, it appears that the world is more complicated than a simple cause and effect model suggests.

Fluctuations in observed global temperatures can be explained by a combination of oceanic and solar cycles. See the engineering analysis from first principles, Quantifying Natural Climate Change.

For everything there is a season, a time for every purpose under heaven.

What has been will be again, what has been done will be done again;
there is nothing new under the sun.
(Ecclesiastes 3:1 and 1:9)

Footnote:

[Cartoon: jimbob child activist]

Greenland Ice Varies, Don’t Panic

Update September 1, 2020 on GIS Math (at end)

The scare du jour is about Greenland Ice Sheet (GIS) and how it will melt out and flood us all.  It’s declared that GIS has passed its tipping point, and we are doomed.  Typical is the Phys.org hysteria: Sea level rise quickens as Greenland ice sheet sheds record amount:  “Greenland’s massive ice sheet saw a record net loss of 532 billion tonnes last year, raising red flags about accelerating sea level rise, according to new findings.”

Panic is warranted only if you treat this as proof of an alarmist narrative and ignore the facts and context in which natural variation occurs. For starters, consider the last four years of GIS fluctuations reported by DMI and summarized in the eight graphs above.  Note the noisy blue lines showing how the surface mass balance (SMB) changes its daily weight by 8 or 10 gigatonnes (Gt) around the baseline mean from 1981 to 2010.  Note also the summer decrease between May and August each year before recovering to match or exceed the mean.

The other four graphs show the accumulation of SMB for each of the last four years including 2020.  Tipping Point?  Note that in both 2017 and 2018, SMB ended about 500 Gt higher than the year began, and way higher than 2012, which added nothing.  Then came 2019 dropping below the mean, but still above 2012.  Lastly, this year is matching the 30-year average.  Note also that the charts do not integrate from previous years; i.e. each year starts at zero and shows the accumulation only for that year.  Thus the gains from 2017 and 2018 do not result in 2019 starting the year up 1000 Gt, but from zero.

The Truth about Sliding Greenland Ice

Researchers know that the small flows of water from surface melting are not the main way GIS loses ice in the summer.  Neil Humphrey explains in this article from last year: Nate Maier and Neil Humphrey Lead Team Discovering Ice is Sliding Toward Edges Off Greenland Ice Sheet.  Excerpts in italics with my bolds.

While they may appear solid, all ice sheets—which are essentially giant glaciers—experience movement: ice flows downslope either through the process of deformation or sliding. The latest results suggest that the movement of the ice on the GIS is dominated by sliding, not deformation. This process is moving ice to the marginal zones of the sheet, where melting occurs, at a much faster rate.

“The study was motivated by a major unknown in how the ice of Greenland moves from the cold interior, to the melting regions on the margins,” Neil Humphrey, a professor of geology from the University of Wyoming and author of the study, told Newsweek. “The ice is known to move both by sliding over the bedrock under the ice, and by oozing (deforming) like slowly flowing honey or molasses. What was unknown was the ratio between these two modes of motion—sliding or deforming.

“This lack of understanding makes predicting the future difficult, since we know how to calculate the flowing, but do not know much about sliding,” he said. “Although melt can occur anywhere in Greenland, the only place that significant melt can occur is in the low altitude margins. The center (high altitude) of the ice is too cold for the melt to contribute significant water to the oceans; that only occurs at the margins. Therefore ice has to get from where it snows in the interior to the margins.

“The implications for having high sliding along the margin of the ice sheet means that thinning or thickening along the margins due to changes in ice speed can occur much more rapidly than previously thought,” Maier said. “This is really important; as when the ice sheet thins or thickens it will either increase the rate of melting or alternatively become more resilient in a changing climate.

“There has been some debate as to whether ice flow along the edges of Greenland should be considered mostly deformation or mostly sliding,” Maier says. “This has to do with uncertainty of trying to calculate deformation motion using surface measurements alone. Our direct measurements of sliding-dominated motion, along with sliding measurements made by other research teams in Greenland, make a pretty compelling argument that no matter where you go along the edges of Greenland, you are likely to have a lot of sliding.”

The sliding ice does two things, Humphrey says. First, it allows the ice to slide into the ocean and make icebergs, which then float away. Two, the ice slides into lower, warmer climate, where it can melt faster.

While it may sound dire, Humphrey notes the entire Greenland Ice Sheet is 5,000 to 10,000 feet thick.

“In a really big melt year, the ice sheet might melt a few feet. It means Greenland is going to be there another 10,000 years,” Humphrey says. “So, it’s not the catastrophe the media is overhyping.”

Humphrey has been working in Greenland for the past 30 years and says the Greenland Ice Sheet has only melted 10 feet during that time span.

Summary

The Greenland ice sheet is more than 1.2 miles thick in most regions. If all of its ice was to melt, global sea levels could be expected to rise by about 25 feet. However, this would take more than 10,000 years at the current rates of melting.

Background from Previous Post: Greenland Glaciers: History vs. Hysteria

The modern pattern of environmental scares started with Rachel Carson’s Silent Spring claiming chemicals are killing birds, only today it is windmills doing the carnage. That was followed by ever expanding doomsday scenarios, from DDT, to SST, to CFC, and now the most glorious of them all, CO2. In all cases the menace was placed in remote areas difficult for objective observers to verify or contradict. From the wilderness bird sanctuaries, the scares are now hiding in the stratosphere and more recently in the Arctic and Antarctic polar deserts. See Progressively Scaring the World (Lewin book synopsis)

The advantage of course is that no one can challenge the claims with facts on the ground, or on the ice. Correction: Scratch “no one”, because the climate faithful are the exception. Highly motivated to go to the ends of the earth, they will look through their alarmist glasses and bring back the news that we are indeed doomed for using fossil fuels.

A recent example is a team of researchers from Abu Dhabi (the hot and sandy petro kingdom) going to Greenland to report on the melting of Helheim glacier there.  The article is NYUAD team finds reasons behind Greenland’s glacier melt.  Excerpts in italics with my bolds.

First the study and findings:

For the first time, warm waters that originate in the tropics have been found at uniform depth, displacing the cold polar water at the Helheim calving front, causing an unusually high melt rate. Typically, ocean waters near the terminus of an outlet glacier like Helheim are at the freezing point and cause little melting.

NYUAD researchers, led by Professor of Mathematics at NYU’s Courant Institute of Mathematical Sciences and Principal Investigator for NYU Abu Dhabi’s Centre for Sea Level Change David Holland, on August 5, deployed a helicopter-borne ocean temperature probe into a pond-like opening, created by warm ocean waters, in the usually thick and frozen melange in front of the glacier terminus.

Normally, warm, salty waters from the tropics travel north with the Gulf Stream, where at Greenland they meet with cold, fresh water coming from the polar region. Because the tropical waters are so salty, they normally sink beneath the polar waters. But Holland and his team discovered that the temperature of the ocean water at the base of the glacier was a uniform 4 degrees Centigrade from top to bottom at depth to 800 metres. The finding was also recently confirmed by Nasa’s OMG (Oceans Melting Greenland) project.

“This is unsustainable from the point of view of glacier mass balance as the warm waters are melting the glacier much faster than they can be replenished,” said Holland.

Surface melt drains through the ice sheet and flows under the glacier and into the ocean. Such fresh waters input at the calving front at depth have enormous buoyancy and want to reach the surface of the ocean at the calving front. In doing so, they draw the deep warm tropical water up to the surface, as well.

All around Greenland, at depth, warm tropical waters can be found at many locations. Their presence over time changes depending on the behaviour of the Gulf Stream. Over the last two decades, the warm tropical waters at depth have been found in abundance. Greenland outlet glaciers like Helheim have been melting rapidly and retreating since the arrival of these warm waters.

Then the Hysteria and Pledge of Allegiance to Global Warming

“We are surprised to learn that increased surface glacier melt due to warming atmosphere can trigger increased ocean melting of the glacier,” added Holland. “Essentially, the warming air and warming ocean water are delivering a troubling ‘one-two punch’ that is rapidly accelerating glacier melt.”

My comment: Hold on. They studied effects from warmer ocean water gaining access underneath that glacier. Oceans have roughly 1,000 times the heat capacity of the atmosphere, so the idea that the air is warming the water is far-fetched. And remember also that longwave radiation of the sort CO2 can emit cannot penetrate beyond the first millimeter or so of the water surface. So how did warmer ocean water get attributed to rising CO2? Don’t ask, don’t tell.  And the idea that air is melting Arctic glaciers is also unfounded.

Consider the basics of air parcels in the Arctic.

The central region of the Arctic is very dry. Why? Firstly because the water is frozen and releases very little water vapour into the atmosphere. And secondly because (according to the laws of physics) cold air can retain very little moisture.
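The point about cold air retaining little moisture is easy to quantify. A quick sketch using the Tetens approximation, a standard empirical fit for saturation vapour pressure over liquid water (chosen here only for illustration; fits over ice give somewhat lower values):

```python
import math

# Saturation vapour pressure via the Tetens approximation (over liquid
# water): e_s = 610.78 * exp(17.27*T / (T + 237.3)), T in deg C, in Pa.
def saturation_vapour_pressure_pa(t_celsius: float) -> float:
    return 610.78 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

warm = saturation_vapour_pressure_pa(20.0)   # mild mid-latitude air
cold = saturation_vapour_pressure_pa(-30.0)  # central Arctic winter air
print(f"20 C: {warm:.0f} Pa, -30 C: {cold:.0f} Pa, ratio ~{warm/cold:.0f}x")
```

Air at 20 °C can hold on the order of 40–50 times more water vapour than air at −30 °C, which is why the high Arctic is effectively a desert.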

Greenland has the only veritable polar ice cap in the Arctic, meaning that the climate is even harsher (10°C colder) than at the North Pole, except along the coast and in the southern part of the landmass where the Atlantic has a warming effect. The marked stability of Greenland’s climate is due to a layer of very cold air just above ground level, air that is always heavier than the upper layers of the troposphere. The result of this is a strong, gravity-driven air flow down the slopes (i.e. catabatic winds), generating gusts that can reach 200 kph at ground level.

Arctic air temperatures

Some history and scientific facts are needed to put these claims in context. Let’s start with what is known about Helheim Glacier.

Holocene history of the Helheim Glacier, southeast Greenland

Helheim Glacier ranks among the fastest flowing and most ice discharging outlets of the Greenland Ice Sheet (GrIS). After undergoing rapid speed-up in the early 2000s, understanding its long-term mass balance and dynamic has become increasingly important. Here, we present the first record of direct Holocene ice-marginal changes of the Helheim Glacier following the initial deglaciation. By analysing cores from lakes adjacent to the present ice margin, we pinpoint periods of advance and retreat. We target threshold lakes, which receive glacial meltwater only when the margin is at an advanced position, similar to the present. We show that, during the period from 10.5 to 9.6 cal ka BP, the extent of Helheim Glacier was similar to that of today, after which it remained retracted for most of the Holocene until a re-advance caused it to reach its present extent at c. 0.3 cal ka BP, during the Little Ice Age (LIA). Thus, Helheim Glacier’s present extent is the largest since the last deglaciation, and its Holocene history shows that it is capable of recovering after several millennia of warming and retreat. Furthermore, the absence of advances beyond the present-day position during for example the 9.3 and 8.2 ka cold events as well as the early-Neoglacial suggest a substantial retreat during most of the Holocene.

Quaternary Science Reviews, Holocene history of the Helheim Glacier, southeast Greenland
A. A. Bjørk et al., 1 August 2018

The topography of Greenland shows why its ice cap has persisted for millennia despite its southerly location.  It is a bowl surrounded by ridges except for a few outlets, Helheim being a major one.

And then, what do we know about the recent history of glacier changes? See Two Decades of Changes in Helheim Glacier

Helheim Glacier is the fastest flowing glacier along the eastern edge of Greenland Ice Sheet and one of the island’s largest ocean-terminating rivers of ice. Named after the Vikings’ world of the dead, Helheim has kept scientists on their toes for the past two decades. Between 2000 and 2005, Helheim quickly increased the rate at which it dumped ice to the sea, while also rapidly retreating inland, a behavior also seen in other glaciers around Greenland. Since then, the ice loss has slowed down and the glacier’s front has partially recovered, readvancing by about 2 miles of the more than 4 miles it had initially retreated.

NASA has compiled a time series of airborne observations of Helheim’s changes into a new visualization that illustrates the complexity of studying Earth’s changing ice sheets. NASA uses satellites and airborne sensors to track variations in polar ice year after year to figure out what’s driving these changes and what impact they will have in the future on global concerns like sea level rise.

Since 1997, NASA has collected data over Helheim Glacier almost every year during annual airborne surveys of the Greenland Ice Sheet using an airborne laser altimeter called the Airborne Topographic Mapper (ATM). Since 2009 these surveys have continued as part of Operation IceBridge, NASA’s ongoing airborne survey of polar ice and its longest-running airborne mission. ATM measures the elevation of the glacier along a swath as the plane flies along the middle of the glacier. By comparing the changes in the height of the glacier surface from year to year, scientists estimate how much ice the glacier has lost.

The animation begins by showing the NASA P-3 plane collecting elevation data in 1998. The laser instrument maps the glacier’s surface in a circular scanning pattern, firing laser shots that reflect off the ice and are recorded by the laser’s detectors aboard the airplane. The instrument measures the time it takes for the laser pulses to travel down to the ice and back to the aircraft, enabling scientists to measure the height of the ice surface. In the animation, the laser data is combined with three-dimensional images created from IceBridge’s high-resolution camera system. The animation then switches to data collected in 2013, showing how the surface elevation and position of the calving front (the edge of the glacier, from where it sheds ice) have changed over those 15 years.
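The timing principle behind a laser altimeter like ATM can be sketched with hypothetical numbers (the aircraft altitude and pulse timing below are made up for illustration): the round-trip travel time of each pulse gives the range to the ice, and subtracting that range from the aircraft’s known altitude gives the surface elevation.

```python
# Time-of-flight principle behind a laser altimeter (illustrative only):
# range = c * t / 2, surface elevation = aircraft altitude - range.
C = 299_792_458.0  # speed of light, m/s

def surface_elevation_m(aircraft_altitude_m: float, round_trip_s: float) -> float:
    one_way_range_m = C * round_trip_s / 2.0
    return aircraft_altitude_m - one_way_range_m

# A pulse returning after ~3.2 microseconds from a plane at 1,500 m altitude
elev = surface_elevation_m(1500.0, 3.2e-6)
print(f"surface elevation: {elev:.1f} m")
```

Repeating such measurements along the same flight lines in different years, and differencing the elevations, is what yields the thinning estimates quoted below.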

Helheim’s calving front retreated about 2.5 miles between 1998 and 2013. It also thinned by around 330 feet during that period, one of the fastest thinning rates in Greenland.

“The calving front of the glacier most likely was perched on a ledge in the bedrock in 1998 and then something altered its equilibrium,” said Joe MacGregor, IceBridge deputy project scientist. “One of the most likely culprits is a change in ocean circulation or temperature, such that slightly warmer water entered into the fjord, melted a bit more ice and disturbed the glacier’s delicate balance of forces.”

Update September 1, 2020 Greenland Ice Math

Prompted by comments from Gordon Walleville, let’s look at Greenland ice gains and losses in context.  The ongoing SMB (surface mass balance) estimates the ice sheet’s net mass change from melting and sublimation losses and precipitation gains.  Dynamic ice loss is a separate calculation of calving chunks of ice off the edges of the sheet, as discussed in the post above.  The two factors are combined in a paper Forty-six years of Greenland Ice Sheet mass balance from 1972 to 2018 by Mouginot et al. (2019) Excerpt in italics. (“D” refers to dynamic ice loss.)

Greenland’s SMB averaged 422 ± 10 Gt/y in 1961–1989 (SI Appendix, Fig. S1H). It decreased from 506 ± 18 Gt/y in the 1970s to 410 ± 17 Gt/y in the 1980s and 1990s, 251 ± 20 Gt/y in 2010–2018, and a minimum at 145 ± 55 Gt/y in 2012. In 2018, SMB was above equilibrium at 449 ± 55 Gt, but the ice sheet still lost 105 ± 55 Gt, because D is well above equilibrium and 15 Gt higher than in 2017. In 1972–2000, D averaged 456 ± 1 Gt/y, near balance, to peak at 555 ± 12 Gt/y in 2018. In total, the mass loss increased to 286 ± 20 Gt/y in 2010–2018 due to an 18 ± 1% increase in D and a 48 ± 9% decrease in SMB. The ice sheet gained 47 ± 21 Gt/y in 1972–1980, and lost 50 ± 17 Gt/y in the 1980s, 41 ± 17 Gt/y in the 1990s, 187 ± 17 Gt/y in the 2000s, and 286 ± 20 Gt/y in 2010–2018 (Fig. 2). Since 1972, the ice sheet lost 4,976 ± 400 Gt, or 13.7 ± 1.1 mm SLR.
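The paper’s closing conversion can be checked with a rough rule of thumb: the oceans cover about 3.618 × 10^8 km², and one gigatonne of water is about one km³, so roughly 361.8 Gt spread over the oceans raises sea level by 1 mm. A quick check of the 4,976 Gt figure:

```python
# Sanity-check the paper's closing figure: Gt of ice loss -> mm of sea level.
# 1 Gt of water ~ 1 km^3; 1 mm of depth = 1e-6 km.
OCEAN_AREA_KM2 = 3.618e8
GT_PER_MM_SLR = OCEAN_AREA_KM2 * 1e-6  # km^3 (= Gt) per mm of sea level

loss_gt = 4976
slr_mm = loss_gt / GT_PER_MM_SLR
print(f"{loss_gt} Gt ~ {slr_mm:.1f} mm of sea level rise")
```

The result, about 13.8 mm, matches the paper’s 13.7 ± 1.1 mm within its stated uncertainty.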

Doing the numbers: Greenland’s area is 2.1 × 10^6 km², about 80% ice-covered, with ice averaging 1,500 m thick. That is about 2.5 million Gt, using the simplification that 1 km³ of ice ≈ 1 Gt.

The estimated loss since 1972 is 5,000 Gt (rounded off), which averages about 110 Gt a year.  The more recent estimates are higher, in the 200 Gt range.

200 Gt is about 0.008% of the Greenland ice sheet’s mass.

Annual snowfall: from the Lost Squadron we know that at that particular spot the ice increased by about 1.5 m/year between 1942 and 1990 (the planes were found 75 m below the surface).
Assume a yearly precipitation of 100 mm/year over the ice-covered surface (about 1.68 million km²).
That is about 168 Gt/year of inflow.
For a net loss of 200 Gt/year, outflow must then be about 368 Gt/year.

So if that 200 Gt/year rate continued (assuming, as models do, a steady loss despite air photos showing fluctuations), Greenland would lose 1% of its ice in roughly 125 years, and the whole sheet would take more than 12,000 years to melt. (H/t Bengt Abelsson)
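The back-of-envelope arithmetic above can be reproduced in a few lines, using the same rounded inputs as the text:

```python
# Back-of-envelope Greenland ice-sheet arithmetic, same inputs as the text.
area_km2 = 2.1e6          # Greenland's total area
ice_fraction = 0.80       # share that is ice-covered
mean_thickness_km = 1.5   # average ice thickness

# 1 km^3 of ice ~ 1 Gt (the text's simplification)
total_mass_gt = area_km2 * ice_fraction * mean_thickness_km
annual_loss_gt = 200      # recent net loss estimates

loss_fraction = annual_loss_gt / total_mass_gt   # fraction lost per year
years_to_melt = total_mass_gt / annual_loss_gt   # at a constant rate
print(f"total: {total_mass_gt:.2e} Gt, annual loss: {loss_fraction:.3%}, "
      f"full melt in ~{years_to_melt:,.0f} years")
```

At a constant 200 Gt/year the annual loss is about 0.008% of the sheet, and complete melt would take well over 10,000 years, consistent with the Summary above.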

Comment:

Once again, history is a better guide than hysteria.  Over time glaciers advance and retreat, and incursions of warm water are a key factor.  Greenland ice cap and glaciers are part of the Arctic self-oscillating climate system operating on a quasi-60 year cycle.

Why CO2 Can’t Warm the Planet

Figure 1. The global annual mean energy budget of Earth’s climate system (Trenberth and Fasullo, 2012.)

Recently in a discussion thread a warming proponent suggested we read this paper as conclusive evidence: The greenhouse effect and carbon dioxide, by Wenyi Zhong and Joanna D. Haigh (2013), Imperial College London. Indeed, as advertised, the paper staunchly presents IPCC climate science. Excerpts in italics with my bolds.

IPCC Conception: Earth’s radiation budget and the Greenhouse Effect

The Earth is bathed in radiation from the Sun, which warms the planet and provides all the energy driving the climate system. Some of the solar (shortwave) radiation is reflected back to space by clouds and bright surfaces but much reaches the ground, which warms and emits heat radiation. This infrared (longwave) radiation, however, does not directly escape to space but is largely absorbed by gases and clouds in the atmosphere, which itself warms and emits heat radiation, both out to space and back to the surface. This enhances the solar warming of the Earth producing what has become known as the ‘greenhouse effect’. Global radiative equilibrium is established by the adjustment of atmospheric temperatures such that the flux of heat radiation leaving the planet equals the absorbed solar flux.

The schematic in Figure 1, which is based on available observational data, illustrates the magnitude of these radiation streams. At the Earth’s distance from the Sun the flux of radiant energy is about 1365 W m−2 which, averaged over the globe, amounts to 1365/4 = 341 W for each square metre. Of this about 30% is reflected back to space (by bright surfaces such as ice, desert and cloud) leaving 0.7 × 341 = 239 W m−2 available to the climate system. The atmosphere is fairly transparent to short wavelength solar radiation and only 78 W m−2 is absorbed by it, leaving about 161 W m−2 being transmitted to, and absorbed by, the surface. Because of the greenhouse gases and clouds the surface is also warmed by 333 W m−2 of back radiation from the atmosphere. Thus the heat radiation emitted by the surface, about 396 W m−2, is 157 W m−2 greater than the 239 W m−2 leaving the top of the atmosphere (equal to the solar radiation absorbed) – this is a measure of ‘greenhouse trapping’.
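The excerpt’s arithmetic is easy to reproduce. A short sketch of the same budget numbers (values in W per m²):

```python
# Reproduce the budget arithmetic in the excerpt (all values W per m^2).
solar_constant = 1365.0
top_of_atmosphere = solar_constant / 4        # sphere/disc geometry -> ~341
albedo = 0.30
absorbed = (1 - albedo) * top_of_atmosphere   # ~239 available to climate

surface_emission = 396.0                      # heat radiated by the surface
outgoing_longwave = 239.0                     # flux leaving the atmosphere
greenhouse_trapping = surface_emission - outgoing_longwave  # ~157
print(top_of_atmosphere, absorbed, greenhouse_trapping)
```

The division by 4 reflects that the Earth intercepts sunlight over a disc of area πr² but radiates over a sphere of area 4πr².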

Why This Line of Thinking is Wrong and Misleading

Short Answer: Greenhouse Gases Cannot Physically Cause Observed Global Warming. Dr. Peter Langdon Ward explains more fully in the linked text. Excerpts in italics with my bolds.

Key Points:
Thus greenhouse-warming theory and the diagram above are based on these mistaken assumptions:

(1) that radiative energy can be quantified by a single number of watts per square meter,
(2) the assumption that these radiative forcings can be added together, and
(3) the assumption that Earth’s surface temperature is proportional to the sum of all of these radiative forcings.

There are other serious problems:

(4) greenhouse gases absorb only a small part of the radiation emitted by Earth,
(5) they can only reradiate what they absorb,
(6) they do not reradiate in every direction as assumed,
(7) they make up only a tiny part of the gases in the atmosphere, and
(8) they have been shown by experiment not to cause significant warming.
(9) The thermal effects of radiation are not about the amount of radiation absorbed, as currently assumed; they are about the temperature of the emitting body and the difference in temperature between the emitting and the absorbing bodies, as described below.

Back to the Basics of Radiative Warming in Earth’s Atmosphere

What Physically Is Thermal Radiation?

We physically measure visible light as containing all frequencies of oscillation ranging from 450 to 789 terahertz, where one terahertz is one trillion cycles per second (10^12 cycles per second). We also observe that the visible spectrum is but a very small part of a much wider continuum that we call electromagnetic radiation: a continuum with frequencies extending over more than 20 orders of magnitude, from extremely low-frequency radio signals at a few cycles per second, through microwave, infrared, visible, ultraviolet and X-rays, to gamma rays with frequencies of more than 100 million million million cycles per second (10^20 cycles per second).
Thermal radiation is a portion of this continuum of electromagnetic radiation radiated by a body of matter as a result of the body’s temperature—the hotter the body, shown here at the bottom as Temperature, the higher the radiated frequencies of oscillation with significant amplitudes of oscillation.

We observe that electromagnetic radiation has two physical properties: 1) frequency of oscillation, which is color in the visible part of the continuum, and 2) amplitude of oscillation, which we perceive as intensity or brightness at each frequency. In 1900, Max Planck, one of the fathers of modern physics, derived by trial and error an equation that has become known as Planck’s empirical law. It is not based on theory, although several derivations have been proposed; it was formulated solely to calculate correctly the intensities at each frequency observed during extensive direct observations of Nature. Planck’s empirical law gives the observed intensity, or amplitude of oscillation, at each frequency of oscillation for radiation emitted by a black body of matter at a specific temperature and at thermal equilibrium. A black body is simply a perfect absorber and emitter of all frequencies of radiation.

Thermal radiation from Earth, at a temperature of 15C, consists of the narrow continuum of frequencies of oscillation shown in green in this plot of Planck’s empirical law. Thermal radiation from the tungsten filament of an incandescent light bulb at 3000C consists of a broader continuum of frequencies shown in yellow and green. Thermal radiation from Sun at 5500C consists of a much broader continuum of frequencies shown in red, yellow and green.

Note in this plot of Planck’s empirical law that the higher the temperature, 1) the broader the continuum of frequencies, 2) the higher the amplitude of oscillation at each and every frequency, and 3) the higher the frequencies of oscillation that are oscillating with the largest amplitudes of oscillation.

Radiation from Sun shown in red, yellow, and green clearly contains much higher frequencies and amplitudes of oscillation than radiation from Earth shown in green. Planck’s empirical law shows unequivocally that the physical properties of radiation are a function of the temperature of the body emitting the radiation.
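These properties of Planck’s law can be checked numerically. A short sketch (using rounded physical constants) compares black-body spectra for the Earth’s surface at about 15 °C and the Sun’s surface at about 5500 °C:

```python
import numpy as np

# Rounded physical constants (CODATA values, truncated for illustration)
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(freq_hz, temp_k):
    """Black-body spectral radiance B(nu, T), W m^-2 sr^-1 Hz^-1."""
    return (2 * H * freq_hz**3 / C**2) / np.expm1(H * freq_hz / (KB * temp_k))

freqs = np.logspace(12, 15.5, 2000)       # ~1 THz up to ~3000 THz
earth = planck_radiance(freqs, 288.0)     # Earth's surface, about 15 C
sun = planck_radiance(freqs, 5773.0)      # Sun's surface, about 5500 C

# The hotter body radiates more at every frequency, and its peak sits
# at a higher frequency -- the two properties noted in the text.
print(bool(np.all(sun > earth)))                   # True
print(freqs[sun.argmax()] > freqs[earth.argmax()]) # True
```

Both checks hold across the whole range: the solar curve sits above the terrestrial curve at every frequency, and its peak falls in the visible rather than the infrared.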

Heat, defined in concept as that which must be absorbed by solid matter to increase its temperature, is similarly a broad continuum of frequencies of oscillation and corresponding amplitudes of oscillation.

For example, the broad continuum of heat that Earth, at a temperature of 15°C, must absorb to reach a temperature of 3000°C is shown by the continuum of values within the yellow-shaded area in this plot of Planck’s empirical law.

Heat is, therefore, a broad continuum of frequencies and amplitudes of oscillation that cannot be described by a single number of watts per square meter as currently assumed in physics and in greenhouse-warming theory. The physical properties of heat as described by Planck’s empirical law and the thermal effects of this heat are determined both by the temperature of the emitting body and, as we will see below, by the difference in temperature between the emitting body and the absorbing body.

Greenhouse Gases Limited to Low Energy Frequencies

Figure 1.10 When ozone is depleted, a narrow sliver of solar ultraviolet-B radiation with wavelengths close to 0.31 µm (yellow triangle) reaches Earth. The red circle shows that the energy of this ultraviolet radiation is around 4 electron volts (eV) on the red scale on the right, 48 times the energy absorbed most strongly by carbon dioxide (blue circle, 0.083 eV at a wavelength of 14.9 micrometers (µm)). Shaded grey areas show the bandwidths of absorption by different greenhouse gases. Current computer models calculate radiative forcing by adding up the areas under the broadened spectral lines that make up these bandwidths. Net radiative energy, however, is proportional to frequency only (red line), not to amplitude, bandwidth, or amount.

Greenhouse gases absorb only certain limited bands of frequencies of radiation emitted by Earth as shown in this diagram. Water is, by far, the strongest absorber, especially at lower frequencies.

Climate models neglect the fact, shown by the red line in Figure 1.10 and explained in
Chapter 4, that due to its higher frequency, ultraviolet radiation (red circle) is
48 times more energy-rich, 48 times “hotter,” than infrared absorbed by
carbon dioxide (blue circle), which means that there is a great deal more energy packed
into that narrow sliver of ultraviolet (yellow triangle) than there is in the broad band
of infrared. This actually makes very good intuitive sense. From personal experience,
we all know that we get very hot and are easily sunburned when standing in ultraviolet
sunlight during the day, but that we have trouble keeping warm at night when standing
in infrared energy rising from Earth.
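The factor of 48 is simply the ratio of the two wavelengths, since photon energy is E = hc/λ. A quick Python sketch checking the numbers quoted from Figure 1.10:

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron volt

def photon_energy_ev(wavelength_m):
    """Photon energy E = h*c/lambda, converted to electron volts."""
    return H * C / wavelength_m / EV

uv = photon_energy_ev(0.31e-6)   # ultraviolet-B sliver near 0.31 um
ir = photon_energy_ev(14.9e-6)   # CO2's strongest absorption at 14.9 um
print(round(uv, 2), round(ir, 3), round(uv / ir))  # 4.0 0.083 48
```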

Ångström (1900) showed that “no more than about 16 percent of earth’s radiation can be absorbed by atmospheric carbon dioxide, and secondly, that the total absorption is very little dependent on the changes in the atmospheric carbon dioxide content, as long as it is not smaller than 0.2 of the existing value.” Extensive modern data agree that carbon dioxide absorbs less than 16% of the frequencies emitted by Earth, shown by the vertical black lines in this plot of Planck’s empirical law, where frequencies are plotted on a logarithmic x-axis. These vertical black lines show frequencies and relative amplitudes only; their absolute amplitudes on this plot are arbitrary.

Temperature at Earth’s surface is the result of the broad continuum of oscillations shown in green. Absorbing less than 16% of the frequencies emitted by Earth cannot have much effect on the temperature of anything.

Summary

Greenhouse warming theory depends on at least nine assumptions that appear to be mistaken. Greenhouse warming theory has never been shown to be physically possible by experiment, a cornerstone of the scientific method. Greenhouse warming theory is rapidly becoming the most expensive mistake ever made in the history of science, economically, politically, and environmentally.

Resources: Light Bulbs Disprove Global Warming

CO2, SO2, O3: A journey of Discovery

What Climate Crisis?


Dr. D.E. Koelle published in EIKE, here in English translation: Where’s the “climate crisis”? Excerpts in italics with my bolds.

Today it can be found in the brains of (unfortunately too many) people who have a pronounced ignorance of climate – what it is and what it has meant in the past.

But unfortunately there are also incomprehensible statements by “climate experts” who should actually know better. Prof. Dr. Katja Matthes, GEOMAR staff member and its designated director, states in a Spiegel interview (issue 21/2020) on the climate issue, under the apparently inevitable title “Corona crisis as an opportunity,” that the climate situation is “dramatic,” and that a reduction in CO2 emissions, and even CO2’s “artificial” removal from the atmosphere, is necessary to limit further temperature increase to 1.5 or 2°C. This means that, in accordance with IPCC dogma, she sees only CO2 as a climate factor and ignores all other climate-influencing factors, especially the natural climate fluctuations that have occurred for millions and thousands of years.

In any case, there is no factual reason for a “climate drama” – quite the contrary: in the past (before the existence of mankind) there were repeated temperature fluctuations between 0°C and 28°C. Today we are at around 14.5°C, exactly in the middle between the extremes, i.e. it couldn’t be better. Today we have the best possible, the optimal climate, as the Wikipedia diagram shows:

Figure 2: The temperature history of the earth in the past 500 million years. It shows tremendous climate change in the past (without people!) and a near-constant mean temperature, with fluctuations of only ±1°C, in the past 10,000 years. It couldn’t be better.

What happened that such climate hysteria could arise, as spread competitively in the media – from CO2 as “climate killer” to “climate catastrophe” and “doomsday”?

The global temperature has actually increased by 1°C in the last 100 years! One degree C. Incredible!

Only the very naive can believe or expect that nature can or must deliver exactly the same temperatures every year. There are about a dozen climate-influencing factors – long-term, medium-term and short-term – and CO2 is simply not one of them. There is no evidence for it in Earth’s climate history. Rather, the reverse applies:

Global warming increases the CO2 level in the atmosphere through outgassing from the oceans: warmer water can store less CO2. And since it is known that 50 times more CO2 is stored in the oceans than in the atmosphere today, this effect has occurred often in the past. Apparently some people have confused cause and effect here.

Climate change, perceived primarily as temperature fluctuation, is by no means a new phenomenon caused by humanity’s CO2 emissions, as some climate charlatans and ideologists would have us believe (in their fight against capitalism and industrial society), but a completely normal natural phenomenon of our planet throughout its existence, just like earthquakes and volcanic eruptions – and no one can do anything about it! The “fight against climate change” is reminiscent of Don Quixote.

In the past 8000 years, the mean global temperature has fluctuated regularly by about ±1°C (the Eddy cycle of approx. 1070 years).

And that was as much the case 8,000 years ago as it is today! There is no recognizable anthropogenic influence. On the contrary: the regular temperature maxima of the Eddy cycle have fallen by 0.7°C since the Holocene maximum 8000 years ago, despite the continuous increase in CO2 from 200 to 400 ppm, which according to the IPCC hypothesis should have produced an increase of around +3°C. The IPCC, which has propagated the CO2 hypothesis, has so far been unable to provide any factual or historical evidence for it – other than “confidence.” This confirms that the IPCC is not a scientific but a political institution, where scientists are misused for ideological and political goals. Its reports must, before publication, be checked by the participating governments and modified according to their wishes – a process that exists only in climate research, which has thereby largely lost its formerly scientific character.

The above temperature diagram of the last 3200 years clearly shows the dominance of the natural 1000-year cycle, as it has occurred for at least 8000 years. We have just passed the latest maximum, and in the future it will go down again (unless astrophysics has changed), completely independent of the evolution of CO2. CO2 has doubled in the last 8000 years from 200 to 400 ppm, but the temperature of the maxima has decreased by 0.7°C (instead of rising by 3°C according to the IPCC theory!). In fact, the current CO2 level is among the lowest in the history of the earth, which several times reached values of 4000 to 6000 ppm – without causing damage, only much stronger plant growth. To this we owe the earth’s coal deposits. If CO2 is released by combustion today, it is nothing more than the CO2 that plants once extracted from the atmosphere.
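The +3°C figure in the excerpt follows from the logarithmic sensitivity relation ΔT = S · log₂(C/C₀), since 200 → 400 ppm is exactly one doubling. A hypothetical check, assuming the roughly 3°C-per-doubling sensitivity the excerpt attributes to the IPCC:

```python
import math

S = 3.0  # assumed climate sensitivity, C per CO2 doubling (the figure cited above)
delta_t = S * math.log2(400 / 200)  # 200 -> 400 ppm is exactly one doubling
print(delta_t)  # 3.0
```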

Note: I used an online translate utility for this English text, so blame any errors on Mr. Google.

 

Solar Cycles in Earth’s Atmosphere

H/T to Ireneusz Palmowski for pointing me to this presentation of the paper Regional and temporal variability of solar activity and galactic cosmic ray effects on the lower atmosphere circulation by S. Veretenenko and M. Ogurtsov, Advances in Space Research, February 2012.

Background

A previous post Quantifying Natural Climate Change presented a study by Dan Pangburn demonstrating that earth temperature fluctuations can be explained by oceanic and solar variations.  The oceanic factors are elaborated in numerous posts here under the category Oceans Make Climate.  The solar mechanisms are more mysterious, making it more difficult to show how solar activity influences cooler or warmer eras.  Cosmoclimatology is a theory advanced by Svensmark that draws a connection between GCRs (Galactic Cosmic Rays) and cloudiness.

This post presents evidence from Russian scientists describing how those same Cosmic Rays (GCR) have a dramatic top-down effect on atmospheric circulation by interacting with ozone in the stratosphere.  The basic idea is that the climate effects from increasing cosmic rays vary according to Arctic polar vortex shifts from fast and strong, to weak and wavy, resulting in alternating climate epochs.

The published paper can be accessed by the linked title at the top.  The slide presentation is here.

Conclusions: In the paper three important findings are described. Text in italics with my bolds.

1. Disturbances of the lower atmosphere circulation associated with solar activity and galactic cosmic ray variations take place over the entire globe, with the processes developing in different latitudinal belts and regions being closely interconnected. The SA/GCR effects on pressure variations reveal a distinct latitudinal and regional character depending on the circulation peculiarities in the regions under study. The spatial structure of pressure variations correlated with SA/GCR variations is closely related to their influence on the main elements of the large-scale atmospheric circulation, namely on the polar vortex, planetary frontal zones and extratropical cyclones and anticyclones.

2. The temporal structure of the SA/GCR effects on pressure variations at high and middle latitudes of the Northern hemisphere is characterized by a pronounced ~60-year periodicity which is apparently related to the epochs of the large-scale atmospheric circulation. The reversals of the correlation sign between pressure and sunspot numbers were detected in the 1890s, 1920s, 1950s and in the early 1980s. The sign of the SA/GCR effects seems to depend on the evolution of meridional processes in the atmosphere which, in turn, is determined by the state of the polar vortex.

3. A mechanism of SA/GCR influences on the troposphere circulation is likely to involve changes in the evolution of the polar vortex in the stratosphere of high latitudes. Intensification of the polar vortex associated with solar activity and cosmic ray variations may contribute to the increase of temperature contrasts in planetary frontal zones and, then, to the intensification of extratropical cyclogenesis.

Comment and Further Discussion

It takes some effort to grasp the import of this research.  If I understand correctly, they looked at the impact of increasing GCRs during periods of quiet SA, and found the effects on earth atmosphere differed depending on another factor: strength or weakness of the polar vortex, which is an internal feature of the Arctic region.  At one point, the paper says:

Vangengeim–Girs classification defines three main forms of circulation: the westerly (zonal) form W, the easterly form E and the meridional form C. A distinguishing feature of the form W is the development of zonal circulation when the pressure field is characterized by small amplitude waves rapidly moving from west to east. The forms C and E are characterized by the development of meridional processes in the  atmosphere when slowly moving or stationary large-amplitude waves are observed in the pressure field.

Fig. 9. Top panel: annual frequencies of occurrence (number of days during a year) of the main forms of the large-scale circulation (20-year running averages) (a); Bottom panel: correlation coefficient R(SLP,Rz) between mean yearly values of pressure in the region of the polar vortex center and sunspot numbers for sliding 17-year periods (b) and the Fourier spectrum of the annual frequency of occurrence of the meridional circulation C (c). The vertical dotted lines indicate the moments of the correlation sign reversals.

The data in Fig. 9a show the evolution of annual frequency of occurrence (expressed as a number of days during a year) of these circulation forms. The time variation of the correlation R(SLP,Rz) in the region of the Arctic polar vortex is presented in Fig. 9b. Comparing these data, we can see that the latest reversal of the correlation sign in the early 1980s was preceded by noticeable changes in the evolution of all the circulation forms. Since the late 1970s the frequency of occurrence of the zonal form W has been increasing. The frequency of the meridional form C started increasing too, with a simultaneous decrease of the frequency of the form E.

The results presented in Fig. 9 show that the time behavior of the correlation between pressure at high and middle latitudes and SA/GCR variability depends on the evolution of meridional processes in the atmosphere. In the epochs of increasing frequency of the meridional circulation C (~1920–1950 and since the 1980s) we can see that an increase of GCR fluxes in the 11-year solar cycle is accompanied by an intensification of polar anticyclones (an increase of the troposphere pressure at polar latitudes), an intensification of extratropical cyclogenesis (a decrease of pressure at polar fronts at middle latitudes) and a weakening of the equatorial trough (an increase of pressure at low latitudes). The long-term GCR effects on extratropical cyclogenesis during these epochs are in good agreement with the GCR effects on the development of baric systems detected on the time scale of a few days.  These epochs coincide with the periods of a strong polar vortex. 

In the epochs of decreasing meridional circulation C (~1890–1920 and ~1950–1980), corresponding to a weak polar vortex, we observe the spatial distribution of the correlations between the troposphere pressure and GCR intensity with the opposite sign: an increase of GCR is accompanied by a weakening of polar anticyclones, a weakening of extratropical cyclogenesis and an intensification of the equatorial trough.

A possible reason for these correlation reversals may be significant changes in a dynamic coupling between the troposphere and the stratosphere during the periods of a weak and strong polar vortex. According to the data of Perlwitz and Graf (2001), the stratosphere may influence the troposphere only when the polar vortex is strong. When the vortex is weak, only the troposphere influences the stratosphere. So, if GCR (or some other factor of solar activity) produce any effect in the stratosphere in the period of a strong vortex (i.e., in the period of increasing meridional circulation), this effect may be transferred to the troposphere and we can see a pronounced correlation of extratropical cyclogenesis with GCR intensity. As the strength of the vortex reveals ~60-year variations (Gudkovich et al., 2009; Frolov et al., 2009), which influence the circulation state, this can explain the detected temporal variability of the SA/GCR effects.

And thus we can appreciate the summary slide shown at the top. It would appear that we have been in a period of weak and wavy polar vortices as well as strong GCRs (minimal solar activity), a continuation of the epoch since 1982 (on the left). It also suggests that the ~60-year vortex cycle is due for a shift to the epoch on the right.

IPCC Wants a 10-year Lockdown

You’ve seen the News:

While analysts agree the historic lockdowns will significantly lower emissions, some environmentalists argue the drop is nowhere near enough.–USA Today

Emissions Declines Will Set Records This Year. But It’s Not Good News.  An “unprecedented” fall in fossil fuel use, driven by the Covid-19 crisis, is likely to lead to a nearly 8 percent drop, according to new research.–New York Times

The COVID-19 pandemic cut carbon emissions down to 2006 levels.  Daily global CO2 emissions dropped 17 percent in April — but it’s not likely to last–The Verge

In fact, the drop is not even enough to get the world back on track to meet the target of the 2015 Paris Agreement, which aims for a global temperature rise of no more than 1.5 degrees above pre-industrial levels, said WMO Secretary-General Taalas. That would require at least a 7% annual drop in emissions, he added.–Reuters

An article at the Las Vegas Review-Journal draws out the implications: Environmentalists want 10 years of coronavirus-level emissions cuts.  Excerpts in italics with my bolds.

It’s always been difficult for the layman to comprehend what it would entail to reduce carbon emissions enough to satisfy environmentalists who fret over global warming. Then came coronavirus.

The once-in-a-century pandemic has devastated the U.S. economy. The April unemployment rate is 14.7 percent and rising. Nevada has the highest unemployment rate in the country at 28.2 percent. One-third of families with children report feeling “food insecure,” according to the Institute for Policy Research, Northwestern University. Grocery prices saw their largest one-month increase since 1974.

To keep the economy afloat, the U.S. government has spent $2.4 trillion on coronavirus programs. Another expensive relief bill seems likely. Unless the end goal is massive inflation, this level of spending can’t continue.

Amazingly, the United States has it good compared with many other places in the world. David Beasley, director of the U.N. World Food Program, has estimated that the ongoing economic slowdown could push an additional 130 million “to the brink of starvation.”

That’s the bad news. The good news for global warming alarmists is that economic shutdowns reduce carbon emissions. If some restrictions remain in place worldwide through the end of the year, researchers writing in Nature estimate emissions will drop in 2020 by 7 percent.

For some perspective, the U.N.’s climate panel calls for a 7.6 percent reduction in emissions — every year for a decade. And now we get a real-world glimpse of the cost.
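Compounding that 7.6 percent cut each year for a decade makes the scale concrete; this small sketch is my own arithmetic, not the article’s:

```python
# Apply a 7.6%-per-year emissions cut for ten consecutive years
annual_cut = 0.076
remaining = (1 - annual_cut) ** 10
print(round(remaining, 3))  # 0.454: emissions fall to ~45% of today's level
```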

What happens if we return to the course we were on before this pandemic?  A previous post shows that the past continuing into the future is not disastrous, and that panic is unwarranted.

I Want You Not to Panic

I’ve been looking into claims for concern over rising CO2 and temperatures, and this post provides reasons why the alarms are exaggerated. It involves looking into the data and how it is interpreted.

First, the longer view suggests where to focus for understanding. Consider a long-term temperature record such as Hadcrut4. Taking it at face value, setting aside concerns about revisions and adjustments, we can see the pattern of the last 120 years following the Little Ice Age. The period between 1850 and 1900 is often considered pre-industrial, since modern energy and machinery took hold later. The graph shows that warming was not much of a factor until temperatures rose to a peak in the 1940s, then cooled into the 1970s, before the century ended with a rise matching the rate of the earlier warming. Overall, the accumulated warming was 0.8°C.

Then consider the record of CO2 concentrations in the atmosphere. It is important to know that modern measurement of CO2 really began in 1959 with the Mauna Loa observatory, coinciding with the mid-century cool period. The earlier values in the chart are reconstructed by NASA GISS from various sources and calibrated to reconcile with the modern record. It is also evident that the first 60 years saw minimal change in the values compared to the post-1959 rise, after WWII ended and manufacturing turned from military production to consumer needs. So again the mid-20th century appears as a change point.

It becomes interesting to look at the last 60 years of temperature and CO2, from 1959 to 2019, particularly with so much clamour about climate emergency and crisis. This graph puts together rising CO2 and temperatures for this period. Note first that the accumulated warming is about 0.8°C after fluctuations. And remember that those decades witnessed great human flourishing and prosperity by any standard of life quality. The rise of CO2 was a monotonic, steady climb with some acceleration into the 21st century.

Now let’s look at projections into the future, bearing in mind Mark Twain’s warning not to trust future predictions. No scientist knows all or most of the surprises that overturn continuity from today to tomorrow. Still, as weathermen well know, the best forecasts are built from present conditions and adding some changes going forward.

Here is a look to century end as a baseline for context. No one knows what cooling and warming periods lie ahead, but one scenario is that the next 80 years could see continued warming at the same rate as the last 60 years. That presumes the forces making the weather in the lifetimes of many of us seniors will continue into the future. Of course factors beyond our ken may deviate from that baseline, and humans will notice and adapt as they always have. And in the back of our minds is the knowledge that we are 11,500 years into an interglacial period before the cold returns – the cold being the greater threat to both humanity and the biosphere.

Those who believe CO2 causes warming advocate for reducing use of fossil fuels for fear of overheating, apparently discounting the need for energy should winters grow harsher. The graph shows one projection similar to that of temperature, showing the next 80 years accumulating at the same rate as the last 60. A second projection in green takes the somewhat higher rate of the last 10 years and projects it to century end. The latter trend would achieve a doubling of CO2.

What those two scenarios mean depends on how sensitive you think Global Mean Temperature (GMT) is to changing CO2 concentrations. Climate models attempt to consider all relevant and significant factors and produce future scenarios for GMT. CMIP6 is the current group of models, displaying a wide range of warming presumably from rising CO2. The one model that closely replicates Hadcrut4 back to 1850 projects GMT 1.8°C higher for a doubling of CO2 concentrations. If that held true going from 300 ppm to 600 ppm, the trend would resemble the red dashed line continuing the observed warming of the past 60 years: 0.8°C up to now and another 1°C over the rest of the century. Of course there are other models programmed for warming at 2 or 3 times the observed rate.
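Both the baseline extrapolation and the sensitivity figure reduce to simple arithmetic, sketched below with the rates and years given in the text:

```python
import math

# Baseline: continue the observed 0.8 C per 60 years (1959-2019) to century end
rate_per_year = 0.8 / 60
future_warming = rate_per_year * (2100 - 2019)
print(round(future_warming, 2))  # 1.08, i.e. about another 1 C by 2100

# Sensitivity framing: Delta T = S * log2(C / C0), with S = 1.8 C per doubling
S = 1.8
print(S * math.log2(600 / 300))  # 1.8 C of total warming for 300 -> 600 ppm
```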

People who take to the streets with signs forecasting doom in 11 or 12 years have fallen victim to the IPCC 450 and 430 ppm scenarios.  For years activists asserted that warming from pre-industrial could be contained to 2°C if CO2 concentrations peak at 450 ppm.  Last year, the SR1.5 report lowered the threshold to 430 ppm, hence the shortened timetable for the end of life as we know it.

For the sake of brevity, this post leaves aside many technical issues. Uncertainties about the temperature record, and about early CO2 levels, and the questions around Equilibrium CO2 Sensitivity (ECS) and Transient CO2 Sensitivity (TCS), are for another day. It should also be noted that GMT as an average hides a huge variety of fluxes over the globe’s surface, with larger warming in some places, such as Canada, and cooling in others, like the Southeast US. Ross McKitrick pointed out that Canada has already gotten more than 1.5°C of warming, and it has been a great social, economic and environmental benefit.

So I want people not to panic about global warming/climate change. Should we do nothing? On the contrary, we must invest in robust infrastructure to ensure reliable affordable energy and to protect against destructive natural events. And advanced energy technologies must be developed for the future since today’s wind and solar farms will not suffice.

It is good that Greta’s demands were unheeded at the Davos gathering. Panic is not useful for making wise policies, and as you can see above, we have time to get it right.

Greta’s Spurious “Carbon Budget”

Many have noticed that recent speeches written for child activist Greta Thunberg are basing the climate “emergency” on the rapidly closing “carbon budget”. This post aims to summarize how alarmists define the so-called carbon budget, and why their claims to its authority are spurious. In the text and at the bottom are links to websites where readers can access both the consensus science papers and the analyses showing the flaws in the carbon budget notion. Excerpts are in italics with my bolds.

The 2019 update on the Global Carbon Budget was reported in a Future Earth article entitled Global Carbon Budget Estimates Global CO2 Emissions Still Rising in 2019. The results were published by the Global Carbon Project in the journals Nature Climate Change, Environmental Research Letters, and Earth System Science Data. Excerpts below in italics with my bolds.

History of Growing CO2 Emissions

“Carbon dioxide emissions must decline sharply if the world is to meet the ‘well below 2°C’ mark set out in the Paris Agreement, and every year with growing emissions makes that target even more difficult to reach,” said Robbie Andrew, a Senior Researcher at the CICERO Center for International Climate Research in Norway.

Global emissions from coal use are expected to decline 0.9 percent in 2019 (range: -2.0 percent to +0.2 percent) due to an estimated 10 percent fall in the United States and a 10 percent fall in Europe, combined with weak growth in coal use in China (+0.8 percent) and India (+2 percent).

 

Shifting Mix of Fossil Fuel Consumption

“The weak growth in carbon dioxide emissions in 2019 is due to an unexpected decline in global coal use, but this drop is insufficient to overcome the robust growth in natural gas and oil consumption,” said Glen Peters, Research Director at CICERO.

“Global commitments made in Paris in 2015 to reduce emissions are not yet being matched by proportionate actions,” said Peters. “Despite political rhetoric and rapid growth in low carbon technologies such as solar and wind power, electric vehicles, and batteries, global fossil carbon dioxide emissions are likely to be more than four percent higher in 2019 than in 2015, when the Paris Agreement was adopted.”

“Compared to coal, natural gas is a cleaner fossil fuel, but unabated natural gas merely cooks the planet more slowly than coal,” said Peters. “While there may be some short-term emission reductions from using natural gas instead of coal, natural gas use needs to be phased out quickly on the heels of coal to meet ambitious climate goals.”

Oil and gas use have grown almost unabated in the last decade. Gas use has been pushed up by declines in coal use and increased demand for gas in industry. Oil is used mainly to fuel personal transport, freight, aviation and shipping, and to produce petrochemicals.

“This year’s Carbon Budget underscores the need for more definitive climate action from all sectors of society, from national and local governments to the private sector,” said Amy Luers, Future Earth’s Executive Director. “Like the youth climate movement is demanding, this requires large-scale systems changes – looking beyond traditional sector-based approaches to cross-cutting transformations in our governance and economic systems.”

Burning gas emits about 40 percent less CO2 than coal per unit energy, but it is not a zero-carbon fuel. While CO2 emissions are likely to decline when gas displaces coal in electricity production, Global Carbon Project researchers say it is only a short-term solution at best. All CO2 emissions will need to decline rapidly towards zero.

The Premise: Rising CO2 Emissions Cause Global Warming

Atmospheric CO2 concentration is set to reach 410 ppm on average in 2019, 47 percent above pre-industrial levels.

Glen Peters on the carbon budget and global carbon emissions is a Future Earth interview explaining the carbon budget notion. Excerpts in italics with my bolds.

In many ways, the global carbon budget is like any other budget. There’s a maximum amount we can spend, and it must be allocated to various countries and various needs. But how do we determine how much carbon each country can emit? Can developing countries grow their economies without increasing their emissions? And if a large portion of China’s emissions come from products made for American and European consumption, who’s to blame for those emissions? Glen Peters, Research Director at the Center for International Climate Research (CICERO) in Oslo, explains the components that make up the carbon budget, the complexities of its calculation, and its implications for climate policy and mitigation efforts. He also discusses how emissions are allocated to different countries, how emissions are related to economic growth, what role China plays in all of this, and more.

The carbon budget generally has two components: the source component, so what’s going into the atmosphere; and the sink component, so the components which are more or less going out of the atmosphere.

So in terms of sources, we have fossil fuel emissions; so we dig up coal, oil, and gas and burn them and emit CO2. We have cement, which is a chemical reaction, which emits CO2. That’s sort of one important component on the source side. We also have land use change, so deforestation. We’re chopping down a lot of trees, burning them, using the wood products and so on. And then on the other side of the equation, sort of the sink side, we have some carbon coming back out in a sense to the atmosphere. So the land sucks up about 25% of the carbon that we put into the atmosphere and the ocean sucks up about 25%. So for every ton we put into the atmosphere, then only about half a ton of CO2 remains in the atmosphere. So in a sense, the oceans and the land are cleaning up half of our mess, if you like.

The other half just stays in the atmosphere. Half a ton stays in the atmosphere; the other half is cleaned up. It’s that carbon that stays in the atmosphere which is causing climate change and temperature increases and changes in precipitation and so on.

The carbon budget is like a balance, so you have something coming in and something going out, and in a sense by mass balance, they have to equal. So if we go out and we take an estimate of how much carbon have we emitted by burning fossil fuels or by chopping down forests and we try and estimate how much carbon has gone into the ocean or the land, then we can measure quite well how much carbon is in the atmosphere. So we can add all those measurements together and then we can compare the two totals — they should equal. But they don’t equal. And this is sort of part of the science, if we overestimated emissions or if we over or underestimated the strength of the land sink or the oceans or something like that. And we can also cross check with what our models say.
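Peters’ description amounts to a simple mass balance: of each tonne emitted, roughly a quarter goes into the land, a quarter into the ocean, and about half stays in the air. A toy sketch with illustrative numbers:

```python
def partition(emissions_gt):
    """Split annual emissions per the ~25/25/50 land/ocean/atmosphere rule."""
    land = 0.25 * emissions_gt
    ocean = 0.25 * emissions_gt
    atmosphere = emissions_gt - land - ocean  # the remainder stays airborne
    return land, ocean, atmosphere

land, ocean, air = partition(40.0)  # ~40 GtCO2/yr of human emissions assumed
assert abs(land + ocean + air - 40.0) < 1e-9  # the balance must close
print(land, ocean, air)  # 10.0 10.0 20.0
```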

My Comment:

Several things are notable about the carbon cycle diagram from GCP. It shows the atmosphere gaining 18 GtCO2 per year, the increment said to drive global warming. Yet estimates of emissions from burning fossil fuels and from land use combined range from 36 to 45 GtCO2 per year, i.e. 40.5 +/- 4.5. The uptake by the biosphere and ocean combined ranges from 16 to 25 GtCO2 per year, i.e. 20.5 +/- 4.5. The uncertainty on emissions is thus 11%, while the natural sequestration uncertainty is 22%, twice as much.

Furthermore, the gross fluxes from biosphere and ocean are both presented as balanced, with no error range. The diagram assumes the natural sinks/sources are not in balance, but are taking up more CO2 than they release. Yet the IPCC reported that gross fluxes generally have uncertainties of more than +/- 20% (IPCC AR4 WG1 Figure 7.3). Thus for land and ocean the estimates range as follows:

Land: 440 GtCO2/yr, with uncertainty between 352 and 528, a range of 176
Ocean: 330 GtCO2/yr, with uncertainty between 264 and 396, a range of 132
Nature: 770 GtCO2/yr, with uncertainty between 616 and 924, a range of 308

So the natural flux uncertainty is 7.5 times the estimated human emissions of 41 GtCO2 per year.
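The arithmetic behind these ranges is easy to verify. A sketch using only the figures quoted above (the helper name `pct_range` is invented here for the illustration):

```python
def pct_range(estimate, pct):
    """Return (low, high, width) for an estimate with +/- pct fractional uncertainty."""
    low, high = estimate * (1 - pct), estimate * (1 + pct)
    return low, high, high - low

land = pct_range(440, 0.20)    # about (352, 528), width 176
ocean = pct_range(330, 0.20)   # about (264, 396), width 132
nature = pct_range(770, 0.20)  # about (616, 924), width 308

human_emissions = 41  # GtCO2 per year, as estimated above
print(round(nature[2] / human_emissions, 1))  # the 7.5 ratio quoted above
```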

For more detail see CO2 Fluxes, Sources and Sinks and Who to Blame for Rising CO2?

The Fundamental Flaw: Spurious Correlation

Beyond the uncertainty of the amounts is a method error in claiming rising CO2 drives temperature changes. For this discussion I am drawing on work by chaam jamal at her website Thongchai Thailand. A series of articles there explain in detail how the mistake was invented and why it is faulty. A good starting point is The Carbon Budgets of Climate Science. Below is my attempt at a synopsis from her writings with excerpts in italics and my bolds.

Simplifying Climate to a Single Number

Figure 1 above shows the strong positive correlation between cumulative emissions and cumulative warming used by climate science and by the IPCC to track the effect of emissions on temperature and to derive the “carbon budget” for various acceptable levels of warming such as 2C and 1.5C. These so called carbon budgets then serve as policy tools for international climate action agreements and climate action imperatives of the United Nations. And yet, all such budgets are numbers with no interpretation in the real world because they are derived from spurious correlations. Source: Matthews et al 2009

Carbon budget accounting is based on the TCRE (Transient Climate Response to Cumulative Emissions). It is derived from the observed correlation between temperature and cumulative emissions. A comprehensive explanation of an application of this relationship in climate science is found in the IPCC SR15 (2018). This IPCC description is quoted below in paragraphs #1 to #7, where the IPCC describes how climate science uses the TCRE for climate action mitigation of AGW in terms of the so-called carbon budget. Also included are some of the difficult issues in carbon budget accounting and the methods used in their resolution.

It has long been recognized that the climate sensitivity of surface temperature to the logarithm of atmospheric CO2 (ECS), which lies at the heart of the anthropogenic global warming and climate change (AGW) proposition, was a difficult issue for climate science because of the large range of empirical values reported in the literature and the so called “uncertainty problem” it implies.

The ECS uncertainty issue was interpreted in two very different ways. Climate science took the position that ECS uncertainty implies climate action has to be greater than that implied by the mean value of ECS, in order to ensure that the higher possible values of ECS are accommodated; skeptics argued that the large range means we don’t really know. At the same time, skeptics also presented convincing arguments against the assumption that observed changes in atmospheric CO2 concentration can be attributed to fossil fuel emissions.

A breakthrough came in 2009 when Damon Matthews, Myles Allen, and a few others almost simultaneously published almost identical papers reporting the discovery of a “near perfect” correlation (ρ≈1) between surface temperature and cumulative emissions {2009: Matthews, H. Damon, et al. “The proportionality of global warming to cumulative carbon emissions” Nature 459.7248 (2009): 829}. They had found that, irrespective of the timing of emissions or of atmospheric CO2 concentration, emitting a trillion tonnes of carbon will cause 1.0 – 2.1 C of global warming. This linear regression coefficient corresponding with the near perfect correlation between cumulative warming and cumulative emissions (note: temperature=cumulative warming), initially described as the Climate Carbon Response (CCR) was later termed the Transient Climate Response to Cumulative Emissions (TCRE).

Initially a curiosity, it gained in importance when it was found that it was in fact predicting future temperatures consistent with model predictions. The consistency with climate models was taken as a validation of the new tool and the TCRE became integrated into the theory of climate change. However, as noted in a related post the consistency likely derives from the assumption that emissions accumulate in the atmosphere.

Thereafter the TCRE became incorporated into the foundation of climate change theory particularly so in terms of its utility in the construction of carbon budgets for climate action plans for any given target temperature rise, an application for which the TCRE appeared to be tailor made. Most importantly, it solved or perhaps bypassed the messy and inconclusive uncertainty issue in ECS climate sensitivity that remained unresolved. The importance of this aspect of the TCRE is found in the 2017 paper “Beyond Climate Sensitivity” by prominent climate scientist Reto Knutti where he declared that the TCRE metric should replace the ECS as the primary tool for relating warming to human caused emissions {2017: Knutti, Reto, Maria AA Rugenstein, and Gabriele C. Hegerl. “Beyond equilibrium climate sensitivity.” Nature Geoscience 10.10 (2017): 727}. The anti ECS Knutti paper was not only published but received with great fanfare by the journal and by the climate science community in general.

The TCRE has continued to gain in importance and prominence as a tool for the practical application of climate change theory in terms of its utility in the construction and tracking of carbon budgets for limiting warming to a target such as the Paris Climate Accord target of +1.5C above pre-industrial. {Matthews, H. Damon. “Quantifying historical carbon and climate debts among nations.” Nature climate change 6.1 (2016): 60}. A bibliography on the subject of TCRE carbon budgets is included below at the end of this article (here).

However, a mysterious and vexing issue has arisen in the practical matter of applying and tracking TCRE-based carbon budgets. The unsolved matter in the TCRE carbon budget is the remaining carbon budget puzzle {Rogelj, Joeri, et al. “Estimating and tracking the remaining carbon budget for stringent climate targets.” Nature 571.7765 (2019): 335-342}. It turns out that midway in the implementation of a carbon budget, the remaining carbon budget computed by subtraction does not match the TCRE carbon budget for the latter period computed directly using the Damon Matthews proportionality of temperature with cumulative emissions for that period. As it turns out, the difference between the two estimates of the remaining carbon budget has a rational explanation in terms of the statistics of a time series of cumulative values of another time series, described in a related post.

It is shown that a time series of the cumulative values of another time series has neither time scale nor degrees of freedom and that therefore statistical properties of this series can have no practical interpretation.

It is demonstrated with random numbers that the only practical implication of the “near perfect proportionality” correlation reported by Damon Matthews is that the two time series being compared (annual warming and annual emissions) tend to have positive values. In the case of emissions we have all positive values, and during a time of global warming, the annual warming series contains mostly positive values. The correlation between temperature (cumulative warming) and cumulative emissions derives from this sign bias as demonstrated with random numbers with and without sign bias.

Figure 4: Random Numbers without Sign Bias

Figure 5: Random Numbers with Sign Bias
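The random-number demonstration behind Figures 4 and 5 can be reproduced in a few lines. The series length, drift, and trial count below are assumptions of this sketch, not values from the original study:

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_cum_corr(annual_mean, trials=300, n=100):
    """Average correlation between cumulative sums of two independent
    random 'annual' series drawn with the given mean (the sign bias)."""
    rs = []
    for _ in range(trials):
        x = np.cumsum(rng.normal(annual_mean, 1.0, n))
        y = np.cumsum(rng.normal(annual_mean, 1.0, n))
        rs.append(np.corrcoef(x, y)[0, 1])
    return float(np.mean(rs))

r_unbiased = mean_cum_corr(0.0)  # zero-mean annual values: no sign bias
r_biased = mean_cum_corr(1.0)    # mostly positive annual values: sign bias
print(r_unbiased, r_biased)      # near zero on average vs. near one
```

Two entirely unrelated series acquire a “near perfect” correlation once both have a bias toward positive annual values, which is the point of the sign-bias argument.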

The sign bias explains the correlation between cumulative values of time series data and also the remaining carbon budget puzzle. It is shown that the TCRE regression coefficient between these time series of cumulative values derives from the positive value bias in the annual warming data. Thus, during a period of accelerated warming, the second half of the carbon budget period may contain a higher percentage of positive values for annual warming and it will therefore show a carbon budget that exceeds the proportional budget for the second half computed from the full span regression coefficient that is based on a lower bias for positive values.

In short, the bias for positive annual warming is highest for the second half, lowest for the first half, and midway between these two values for the full span – and therein lies the simple statistics explanation of the remaining carbon budget issue that climate science is trying to solve in terms of climate theory and its extension to Earth System Models. The Millar and Friedlingstein 2018 paper is yet another in a long line of studies that ignore the statistical issues with the TCRE correlation and instead try to explain its anomalous behavior in terms of climate theory, whereas in fact the explanation lies in statistical issues that have been overlooked by these young scientists.

The fundamental problem with the construction of TCRE carbon budgets and their interpretation in terms of climate action is that the TCRE is a spurious correlation that has no interpretation in terms of a relationship between emissions and warming. Complexities in these carbon budgets such as the remaining carbon budget are best understood in these terms and not in terms of new and esoteric variables such as those in earth system models.

Footnote:

An independent study by Jamal Munshi comes to a similar conclusion: Climate Sensitivity and the Responsiveness of Temperature to Atmospheric CO2

In a detrended correlation analysis, global mean temperature observations and model projections are compared in a test of the theory that surface temperature is responsive to atmospheric CO2 concentration in terms of the GHG forcing of surface temperature implied by the climate sensitivity parameter ECS. The test shows strong evidence of GHG forcing of warming in the theoretical RCP8.5 temperature projections made with CMIP5 forcings. However, no evidence of GHG forcing by CO2 is found in observational temperatures from four sources, including two from satellite measurements. The test period is set to 1979-2018 so that satellite data can be included on a comparable basis. No empirical evidence is found in these data for a climate sensitivity parameter that determines surface temperature according to atmospheric CO2 concentration, or for the proposition that reductions in fossil fuel emissions will moderate the rate of warming.
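A minimal sketch of the detrending method described in the abstract. All series here are invented random illustrations, not Munshi’s data, and the helper name `detrended_corr` is mine:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 40                  # e.g. a 1979-2018 style test period
t = np.arange(n)

co2 = 340 + 1.8 * t + rng.normal(0, 0.5, n)   # hypothetical trending CO2 series
temp = 0.015 * t + rng.normal(0, 0.1, n)      # hypothetical trending temperatures

def detrended_corr(x, y):
    """Correlation of the residuals after removing each series' linear trend."""
    rx = x - np.polyval(np.polyfit(t, x, 1), t)
    ry = y - np.polyval(np.polyfit(t, y, 1), t)
    return np.corrcoef(rx, ry)[0, 1]

raw = np.corrcoef(co2, temp)[0, 1]  # high, driven by the two shared trends
det = detrended_corr(co2, temp)     # near zero when fluctuations are unrelated
print(raw, det)
```

The raw correlation is dominated by the trends; detrending asks whether the year-to-year fluctuations actually track each other, which is the test applied in the study.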

Postscript on Spurious Correlations

I am not a climate, environment, geology, weather, or physics expert. However, I am an expert on statistics, so I recognize bad statistical analysis when I see it. There are quite a few problems with the use of statistics within the global warming debate. The use of Gaussian statistics is the first error. In his first movie, Gore used a linear regression of CO2 and temperature. If he had done the same regression using the number of zoos in the world, or the worldwide use of atomic energy, or sunspots, he would have gotten the same result. A linear regression by itself proves nothing. –Dan Ashley, PhD statistics, PhD Business, Northcentral University

 

I Want You Not to Panic

 

I’ve been looking into claims for concern over rising CO2 and temperatures, and this post provides reasons why the alarms are exaggerated. It involves looking into the data and how it is interpreted.

First the longer view suggests where to focus for understanding. Consider a long-term temperature record such as Hadcrut4. Taking it at face value, setting aside concerns about revisions and adjustments, we can see the pattern of the last 120 years following the Little Ice Age. Often the period between 1850 and 1900 is considered pre-industrial, since modern energy and machinery took hold later on. The graph shows that warming was not much of a factor until temperatures rose to a peak in the 1940s, cooled off into the 1970s, then ended the century with a rise matching the rate of the earlier warming. Overall, the accumulated warming was 0.8C.

Then regard the record of CO2 concentrations in the atmosphere. It’s important to know that modern measurement of CO2 really began in 1959 with the Mauna Loa observatory, coinciding with the mid-century cool period. The earlier values in the chart are reconstructed by NASA GISS from various sources and calibrated to reconcile with the modern record. It is also evident that the first 60 years saw minimal change in the values compared to the rise after WWII ended and manufacturing turned from military production to meeting consumer needs. So again the mid-20th century appears as a change point.

It becomes interesting to look at the last 60 years of temperature and CO2, from 1959 to 2019, particularly with so much clamour about climate emergency and crisis. This graph puts together rising CO2 and temperatures for this period. First, note that the accumulated warming is about 0.8C after fluctuations. And remember that those decades witnessed great human flourishing and prosperity by any standard of life quality. The rise of CO2 was steady and monotonic, with some acceleration into the 21st century.

Now let’s look at projections into the future, bearing in mind Mark Twain’s warning not to trust future predictions. No scientist knows all or most of the surprises that overturn continuity from today to tomorrow. Still, as weathermen well know, the best forecasts are built from present conditions and adding some changes going forward.

Here is a look to century end as a baseline for context. No one knows what cooling and warming periods lie ahead, but one scenario is that the next 80 years could see continued warming at the same rate as the last 60 years. That presumes that forces at play making the weather in the lifetime of many of us seniors will continue in the future. Of course factors beyond our ken may deviate from that baseline and humans will notice and adapt as they have always done. And in the back of our minds is the knowledge that we are 11,500 years into an interglacial period before the cold returns, being the greater threat to both humanity and the biosphere.

Those who believe CO2 causes warming advocate for reducing use of fossil fuels for fear of overheating, apparently discounting the need for energy should winters grow harsher. The graph shows one projection similar to that of temperature, showing the next 80 years accumulating at the same rate as the last 60. A second projection in green takes the somewhat higher rate of the last 10 years and projects it to century end. The latter trend would achieve a doubling of CO2.

What those two scenarios mean depends on how sensitive you think Global Mean Temperature is to changing CO2 concentrations. Climate models attempt to consider all relevant and significant factors and produce future scenarios for GMT. CMIP6 is the current group of models displaying a wide range of warming presumably from rising CO2. The one model closely replicating Hadcrut4 back to 1850 projects 1.8C higher GMT for a doubling of CO2 concentrations. If that held true going from 300 ppm to 600 ppm, the trend would resemble the red dashed line continuing the observed warming from the past 60 years: 0.8C up to now and another 1C the rest of the century. Of course there are other models programmed for warming 2 or 3 times the rate observed.
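The red-dashed-line arithmetic can be sketched from the single assumption stated above, a fixed response of 1.8C per doubling. The function below is an illustration of that assumption, not a climate model:

```python
import math

def warming_for(co2_ppm, base_ppm=300.0, per_doubling=1.8):
    """Warming in C for a CO2 rise, assuming a fixed response per doubling."""
    return per_doubling * math.log2(co2_ppm / base_ppm)

print(warming_for(600))  # full doubling by century end: 1.8 C
print(warming_for(410))  # roughly the present level: about 0.8 C
```

The roughly 0.8C at present-day concentrations matches the observed warming cited above, with about another 1C implied by century end if concentrations reach 600 ppm.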

People who take to the streets with signs forecasting doom in 11 or 12 years have fallen victim to the IPCC 450 and 430 ppm scenarios. For years activists asserted that warming from pre-industrial can be contained to 2C if CO2 concentrations peak at 450 ppm. Last year, SR1.5 lowered the threshold to 430 ppm, hence the shortened timetable for the end of life as we know it.

For the sake of brevity, this post leaves aside many technical issues. Uncertainties about the temperature record, about early CO2 levels, and the questions around Equilibrium CO2 Sensitivity (ECS) and Transient CO2 Sensitivity (TCS) are for another day. It should also be noted that GMT as an average hides a huge variety of fluxes over the globe’s surface: larger warming in some places such as Canada, and cooling in others like the Southeast US. Ross McKitrick pointed out that Canada has already gotten more than 1.5C of warming, and it has been a great social, economic and environmental benefit.

So I want people not to panic about global warming/climate change. Should we do nothing? On the contrary, we must invest in robust infrastructure to ensure reliable affordable energy and to protect against destructive natural events. And advanced energy technologies must be developed for the future since today’s wind and solar farms will not suffice.

It is good that Greta’s demands were unheeded at the Davos gathering. Panic is not useful for making wise policies, and as you can see above, we have time to get it right.

CO2, SO2, O3: A Journey of Discovery

A previous post Light Bulbs Disprove Global Warming presented an article by Dr. Peter Ward along with some scientific discussion from his website. This post presents an excerpt from Chapter One of his book which helpfully explains his journey of discovery from his field of volcanism to the larger question of global warming.

The Chapter is How I Came to Wonder about Climate Change. Excerpts in italics with my bolds.

Discovering a More Likely Cause of Global Warming

The evidence for volcanism in the ice layers under Summit, Greenland, consists of sulfate deposits. Sulfate comes from sulfur dioxide, megatons of which are emitted during each volcanic eruption. At first, I thought that the warming was caused by the sulfur dioxide, which is observed to absorb solar energy passing through the atmosphere.[17] My thinking was influenced by greenhouse warming theory, which assumes that carbon dioxide causes global warming because it is observed to absorb infrared energy radiated by Earth as it passes upward through the atmosphere and is then thought to re-radiate it back down to the surface, thus causing warming. The sulfur dioxide story, however, just wasn’t adding up quantitatively.

Figure 1.9 Average temperatures per century (black) increased at the same time as the amount of volcanic sulfate per century (red). The greatest warming occurred when volcanism was more continuous from year to year, as shown by the blue circles surrounding the number of contiguous layers (7 or more) containing volcanic sulfate. It was this continuity over two millennia that finally warmed the world out of the last ice age. Data are from the GISP2 drill hole under Summit, Greenland. Periods of major warming are labeled in black. Periods of major cooling are labeled in blue.

Eventually, after publishing two papers that developed this story, I came to realize that sulfur dioxide was actually just the “footprint” of volcanism—a measure of how active volcanoes were at any given time. The real breakthrough came when I came across a paper reporting that the lowest concentrations of stratospheric ozone ever recorded were for the two years after the 1991 eruption of Mt. Pinatubo, the largest volcanic eruption since the 1912 eruption of Mt. Katmai. As I dug deeper, analyzing ozone records from Arosa, Switzerland[18]—the longest running observations of ozone in the world, begun in 1927 (Figure 8.15 on page 119)—I found that ozone spiked in the years of most volcanic eruptions but dropped dramatically and precipitously in the year following each eruption. There seemed to be a close relationship between volcanism and ozone. What could that relationship be?

Increased SO2 pollution (dotted black line) does not appear to contribute to substantial global warming (red line) until total column ozone decreased (black line, y-axis inverted), most likely due to increasing tropospheric chlorine (green line). Mean annual temperature anomaly in the Northern Hemisphere (red line) and ozone (black line) are smoothed with a centered 5 point running mean. OHC is ocean heat content (dotted purple line).

The answer was not long in coming. I knew that all volcanoes release hydrogen chloride when they erupt, and I also knew that chlorine from man-made chlorofluorocarbon compounds had been identified in the 1970s as a potent agent of stratospheric ozone depletion. From these two facts, and a third one, I deduced that it must be the depletion of ozone by chlorine in volcanic hydrogen chloride—and not the absorption of solar radiation by sulfur dioxide—that was driving the warming events that followed volcanic eruptions. The third fact in the equation was the well-known interaction of stratospheric ozone with solar radiation.

Figure 1.10 When ozone is depleted, a narrow sliver of solar ultraviolet-B radiation with wavelengths close to 0.31 µm (yellow triangle) reaches Earth. The red circle shows that the energy of this ultraviolet radiation is around 4 electron volts (eV) on the red scale on the right, 48 times the energy absorbed most strongly by carbon dioxide (blue circle, 0.083 eV at 14.9 micrometers (µm) wavelength). Shaded grey areas show the bandwidths of absorption by different greenhouse gases. Current computer models calculate radiative forcing by adding up the areas under the broadened spectral lines that make up these bandwidths. Net radiative energy, however, is proportional to frequency only (red line), not to amplitude, bandwidth, or amount.

The ozone layer, at altitudes of 12 to 19 miles (20 to 30 km) up in the lower stratosphere, absorbs very energetic solar ultraviolet radiation, thereby protecting life on Earth from this very “hot,” DNA-destroying radiation. When the concentration of ozone is reduced, more ultraviolet radiation is observed to reach Earth’s surface, increasing the risk of sunburn and skin cancer. There is no disagreement among climate scientists about this, but I went one step further by deducing that this increased influx of “super-hot” ultraviolet radiation also actually warms Earth.

All ultraviolet UV-C is absorbed in the upper atmosphere. Most UV-B is absorbed in the stratosphere. The wavelengths of UV are shown in nanometers.

All current climate models assume that radiation travels through space as waves and that energy in radiation is proportional to the square of the amplitude of these waves and to the bandwidth of the radiation, i.e. to the range of wavelengths or frequencies involved. Figure 1.10 shows the percent absorption for different greenhouse-gases as a function of wavelength or frequency. It is generally assumed that the energy absorbed by greenhouse-gases is proportional to the areas shaded in gray. From this perspective, absorption by carbon dioxide of wavelengths around 14.9 and 4.3 micrometers in the infrared looks much more important than absorption by ozone of ultraviolet-B radiation around 0.31 micrometers. Climate models thus calculate that ultraviolet radiation is relatively unimportant for global warming because it occupies a rather narrow bandwidth in the solar spectrum compared to Earth’s much lower frequency, infrared radiation.

The models neglect the fact, shown by the red line in Figure 1.10 and explained in Chapter 4, that due to its higher frequency, ultraviolet radiation (red circle) is 48 times more energy-rich, 48 times “hotter,” than infrared absorbed by carbon dioxide (blue circle), which means that there is a great deal more energy packed into that narrow sliver of ultraviolet (yellow triangle) than there is in the broad band of infrared. This actually makes very good intuitive sense. From personal experience, we all know that we get very hot and are easily sunburned when standing in ultraviolet sunlight during the day, but that we have trouble keeping warm at night when standing in infrared energy rising from Earth.

These flawed assumptions in the climate models are based on equations that were written in 1865 by James Clerk Maxwell and have been used very successfully to design every piece of electronics that we depend on today, including our electric grid. Maxwell assumed that electromagnetic energy travels as waves through matter, air, and space. His wave equations seem to work well in matter, but not in space. Even though Albert Michelson and Edward Morley demonstrated experimentally in 1887 that there is no medium in space, no so-called luminiferous aether, through which waves could travel, most physicists and climatologists today still assume that electromagnetic radiation does in fact travel through space at least partially in the form of waves.

They also erroneously assume that energy in these imagined waves is proportional to the square of their amplitude, which is true in matter, but cannot be true in space. They calculate that there is more energy in the broad band of low-frequency infrared radiation emitted by Earth and absorbed by greenhouse gases than there is in the narrow sliver of additional high-frequency ultraviolet solar radiation that reaches Earth when ozone is depleted (Figure 1.10). Nothing could be further from the truth.

The energy of radiation absorbed by carbon dioxide around 14,900 nanometers (blue circle) is near 0.08 electron volts (green circle) while the energy that reaches Earth when the ozone layer is depleted around 310 nanometers (red circle) is near 4 electron volts, 48 times larger.
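The per-photon numbers in this caption follow directly from the Planck relation E = hc/λ. The check below verifies only the energy of individual photons, which is standard physics; it says nothing about total energy transfer, which also depends on how many photons are involved:

```python
H_C_EV_NM = 1239.841984  # Planck constant times speed of light, in eV·nm

def photon_energy_ev(wavelength_nm):
    """Energy of one photon at the given wavelength, in electron volts."""
    return H_C_EV_NM / wavelength_nm

uv = photon_energy_ev(310)     # UV-B reaching the surface when ozone thins
ir = photon_energy_ev(14900)   # CO2 absorption band in the infrared
print(round(uv, 2), round(ir, 3), round(uv / ir, 1))  # 4.0 0.083 48.1
```

The 48 ratio is simply the ratio of the two wavelengths, 14900/310.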

The story got even more convoluted by the rise of quantum mechanics at the dawn of the 20th century when Max Planck and Albert Einstein introduced the idea that energy in light is quantized. These quanta of light ultimately became known as photons. In order to explain the photoelectric effect, Einstein proposed that radiation travels as particles, a concept that scientists and natural philosophers had debated for 2500 years before him. I will explain in Chapter 4 why photons traveling from Sun cannot physically exist, even though they provide a very useful mathematical shorthand.

Max Planck postulated, in 1900, that the energy in radiation is equal to vibrational frequency times a constant, as is true of an atomic oscillator, in which a bond holding two atoms together is oscillating in some way. He needed this postulate in order to derive an equation by trial and error that could account for and calculate the observed properties of radiation. Planck’s postulate led to Albert Einstein’s light quanta and to modern physics, dominated by quantum mechanics and quantum electrodynamics. Curiously, however, Planck didn’t fully appreciate the far-reaching implications of his simple postulate, which states that the energy in radiation is equal to frequency times a constant. He simply saw it as a useful mathematical trick.

Energy is a function of frequency and should therefore be plotted on the x-axis (top of this figure) and units of watts should not be included on the y-axis. The colored lines show the spectral radiance predicted by Planck’s law for black bodies with different absolute temperatures.

As I dug deeper, it took me several years to become comfortable with those implications. It was not the way we were trained to think. It was not the way most physicists think, even today. Being retired turned out to be very useful because I could give my brain time to mull this over. Gradually, it began to make sense. The take-away message for me was that the energy in the kind of ultraviolet radiation that reaches Earth when ozone is depleted is 48 times “hotter” than infrared energy absorbed by greenhouse gases. In sufficient quantities, it should be correspondingly 48 times more effective in raising Earth’s surface temperature than the weak infrared radiation from Earth’s surface that is absorbed by carbon dioxide in the atmosphere and supposedly re-radiated back to the ground.

There simply is not enough energy involved with greenhouse gases to have a significant effect on global warming. Reducing emissions of greenhouse gases will therefore not be effective in reducing global warming. This conclusion is critical right now because most of the world’s nations are planning to meet in Paris, France, in late November 2015, to agree on legally binding limits to greenhouse-gas emissions. Such limits would be very expensive as well as socioeconomically disruptive. We depend on large amounts of affordable energy to support our lifestyles, and developing countries also depend on large amounts of affordable energy to improve their lifestyles. Increasing the cost of energy by even a few percent would have major negative financial and societal repercussions.

This book is your chance to join my odyssey. You do not need to have majored in science or even to be familiar with physics, chemistry, mathematics, or climatology. You just need to be curious and be willing to work. You also need to be willing to think critically about observations, and you may need to reevaluate some of your own ideas about climate. You will learn that there was a slight misunderstanding in science made back in the 1860s that has had profound implications for understanding climate change and physics today. It took me many years of hard work to gain this insight, and I will discuss that in Chapter 4. First, however, we need to look at some fundamental observations that cause us to wonder: Could the greenhouse warming theory of climate change actually be mistaken?

Footnote:

I welcome this analysis and assessment, which explains why rising CO2 concentrations in the satellite era have no discernible impact on the radiative profile of the atmosphere. See Global Warming Theory and the Tests It Fails

Raman Effect Not a Climate Factor

When the Raman effect came up last year in relation to GHGs (Greenhouse Gases), I was at first confused, thinking it was talk of Asian noodles. So I have had to learn more, and while the effect is real and useful, I doubt it is a factor in global warming/climate change. This post provides information principally from two sources, consistent with many others I read.

One article is Raman Spectroscopy from University of Pennsylvania.  Excerpts in italics with my bolds.

Raman Effect

Raman spectroscopy is often considered to be complementary to IR spectroscopy. For symmetrical molecules with a center of inversion, Raman and IR are mutually exclusive. In other words, bonds that are IR-active will not be Raman-active and vice versa. Other molecules may have bonds that are either Raman-active, IR-active, neither or both.

Raman spectroscopy measures the scattering of light by matter. The light source used in Raman spectroscopy is a laser.

The laser light is used because it is a very intense beam of nearly monochromatic light that can interact with sample molecules. When matter absorbs light, the internal energy of the matter is changed in some way. Since this site is focused on the complementary nature of IR and Raman, the infrared region will be discussed. Infrared radiation causes molecules to undergo changes in their vibrational and rotational motion. When the radiation is absorbed, a molecule jumps to a higher vibrational or rotational energy level. When the molecule relaxes back to a lower energy level, radiation is emitted. Most often the emitted radiation is of the same frequency as the incident light. Since the radiation was absorbed and then emitted, it will likely travel in a different direction from which it came. This is called Rayleigh scattering. Sometimes, however, the scattered (emitted) light is of a slightly different frequency than the incident light. This effect was first noted by Chandrasekhara Venkata Raman who won the Nobel Prize for this discovery. (6) The effect, named for its discoverer, is called the Raman effect, or Raman scattering.

Raman scattering occurs in two ways. If the emitted radiation is of lower frequency than the incident radiation, then it is called Stokes scattering. If it is of higher frequency, then it is called anti-Stokes scattering.
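To make the Stokes/anti-Stokes distinction concrete, here is a small sketch of the wavenumber arithmetic involved. The 532 nm excitation wavelength and 1000 cm⁻¹ Raman shift are illustrative values I chose, not figures from the article.

```python
def raman_wavelengths(laser_nm: float, shift_cm1: float):
    """Return (stokes_nm, anti_stokes_nm) for a given excitation wavelength
    and Raman shift: 1/lambda_scattered = 1/lambda_laser -/+ shift."""
    laser_cm1 = 1e7 / laser_nm                    # excitation in wavenumbers (cm^-1)
    stokes = 1e7 / (laser_cm1 - shift_cm1)        # lower frequency, longer wavelength
    anti_stokes = 1e7 / (laser_cm1 + shift_cm1)   # higher frequency, shorter wavelength
    return stokes, anti_stokes

# A green 532 nm laser and a typical 1000 cm^-1 vibrational mode:
s, a = raman_wavelengths(532.0, 1000.0)
print(f"Stokes: {s:.1f} nm, anti-Stokes: {a:.1f} nm")
```

The Stokes line lands near 562 nm (red-shifted) and the anti-Stokes line near 505 nm (blue-shifted), straddling the Rayleigh-scattered light at the laser wavelength.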

Energy Diagram Scattering (Source: Wikipedia)

The Blue arrow in the picture to the left represents the incident radiation. The Stokes scattered light has a frequency lower than that of the original light because the molecule did not relax all the way back to the original ground state. The anti-Stokes scattered light has a higher frequency than the original because it started in an excited energy level but relaxed back to the ground state.

Though any Raman scattering is very low in intensity, the Stokes scattered radiation is more intense than the anti-Stokes scattered radiation.

The reason for this is that very few molecules would exist in the excited level as compared to the ground state before the absorption of radiation. The diagram shown represents electronic energy levels as shown by the labels “n=”. The same phenomenon, however, applies to radiation in any of the regions.
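The population argument above can be sketched numerically: the anti-Stokes line is weaker by roughly the Boltzmann factor exp(−hcν̃/kT), since only thermally excited molecules can contribute to it. The 1000 cm⁻¹ mode and 290 K temperature are assumed illustrative values, and the frequency-dependent prefactors are deliberately ignored.

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e10    # speed of light in cm/s, so shifts in cm^-1 work directly
KB = 1.380649e-23    # Boltzmann constant, J/K

def anti_stokes_to_stokes_ratio(shift_cm1: float, temp_k: float) -> float:
    """Boltzmann population ratio exp(-h*c*nu/kT) -- the leading factor
    governing anti-Stokes vs Stokes intensity (frequency^4 terms ignored)."""
    delta_e = H * C * shift_cm1          # energy of one vibrational quantum, J
    return math.exp(-delta_e / (KB * temp_k))

# A 1000 cm^-1 mode near room temperature (290 K):
ratio = anti_stokes_to_stokes_ratio(1000.0, 290.0)
print(f"anti-Stokes / Stokes ~ {ratio:.3f}")   # well under 1% of the Stokes intensity
```

For this mode the anti-Stokes line comes out below one percent of the Stokes intensity, which is why Raman instruments normally record the Stokes side of the spectrum.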

Another article is Raman Techniques: Fundamentals and Frontiers by Robin R. Jones et al., published in 2019 at the US National Library of Medicine.

Abstract

Driven by applications in chemical sensing, biological imaging and material characterisation, Raman spectroscopies are attracting growing interest from a variety of scientific disciplines. The Raman effect originates from the inelastic scattering of light, and it can directly probe vibration/rotational-vibration states in molecules and materials.

Despite numerous advantages over infrared spectroscopy, spontaneous Raman scattering is very weak, and consequently, a variety of enhanced Raman spectroscopic techniques have emerged.

These techniques include stimulated Raman scattering and coherent anti-Stokes Raman scattering, as well as surface- and tip-enhanced Raman scattering spectroscopies. The present review provides the reader with an understanding of the fundamental physics that govern the Raman effect and its advantages, limitations and applications. The review also highlights the key experimental considerations for implementing the main experimental Raman spectroscopic techniques. The relevant data analysis methods and some of the most recent advances related to the Raman effect are finally presented. This review constitutes a practical introduction to the science of Raman spectroscopy; it also highlights recent and promising directions of future research developments.

Fundamental Principles

When light interacts with matter, the oscillatory electro-magnetic (EM) field of the light perturbs the charge distribution in the matter which can lead to the exchange of energy and momentum leaving the matter in a modified state. Examples include electronic excitations and molecular vibrations or rotational-vibrations (ro-vibrations) in liquids and gases, electronic excitations and optical phonons in solids, and electron-plasma oscillations in plasmas [108].

Spontaneous Raman

When an incident photon interacts with a crystal lattice or molecule, it can be scattered either elastically or inelastically. Predominantly, light is elastically scattered (i.e. the energy of the scattered photon is equal to that of the incident photon). This type of scattering is often referred to as Rayleigh scattering. The inelastic scattering of light by matter (i.e. the energy of the scattered photon is not equal to that of the incident photon) is known as the Raman effect [1, 4, 6]. This inelastic process leaves the molecule in a modified (ro-)vibrational state.

In the case of spontaneous Raman scattering, the Raman effect is very weak; typically, 1 in 10^8 of the incident radiation undergoes spontaneous Raman scattering [6].

The transition from the virtual excited state to the final state can occur at any point in time and to any possible final state based on probability. Hence, spontaneous Raman scattering is an incoherent process. The output signal power is proportional to the input power, scattered in random directions and is dependent on the orientation of the polarisation. For example, in a system of gaseous molecules, the molecular orientation relative to the incident light is random and hence their polarisation wave vector will also be random. Furthermore, as the excited state has a finite lifetime, there is an associated uncertainty in the transition energy which leads to natural line broadening of the wavelength as per the Heisenberg uncertainty principle (∆E∆t ≥ ℏ/2) [1]. The scattered light, in general, has polarisation properties that differ from that of the incident radiation. Furthermore, the intensity and polarisation are dependent on the direction from which the light is measured [1]. The scattered spectrum exhibits peaks at all Raman active modes; the relative strength of the spectral peaks are determined by the scattering cross-section of each Raman mode [108]. Photons can undergo successive Rayleigh scattering events before Raman scattering occurs as Raman scattering is far less probable than Rayleigh scattering.
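A back-of-the-envelope sketch shows just how weak spontaneous Raman scattering is, taking the 1-in-10^8 figure quoted above at face value. The 1 mW power and 532 nm wavelength are assumed values for illustration.

```python
H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s

def raman_photon_rate(power_w: float, wavelength_m: float,
                      raman_fraction: float = 1e-8) -> float:
    """Rough rate of spontaneously Raman-scattered photons per second,
    assuming ~1 in 10^8 incident photons is Raman scattered."""
    photon_energy = H * C / wavelength_m       # energy per photon, J
    incident_rate = power_w / photon_energy    # incident photons per second
    return incident_rate * raman_fraction

# 1 mW of 532 nm laser light:
rate = raman_photon_rate(1e-3, 532e-9)
print(f"~{rate:.2e} Raman photons per second")
```

Even with a laser, only a few tens of millions of Raman photons per second emerge from some 10^15 incident photons, scattered in random directions; this is why the enhancement techniques discussed next were developed.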

Laser Empowered Raman Scattering

Coherent light-scattering events involving multiple incident photons simultaneously interacting with the scattering material were not observed until laser sources became available in the 1960s, despite predictions being made as early as the 1930s [37, 38]. The first laser-based Raman scattering experiment was demonstrated in 1961 [39]. Stimulated Raman scattering (SRS) and CARS have become prominent four-wave mixing techniques and are of interest in this review.

SRS is a coherent process providing much stronger signals relative to spontaneous Raman spectroscopy as well as the ability to time-resolve the vibrational motions.

Raman is generally a very weak process; it is estimated that approximately one in every 10^8 photons undergo Raman scattering spontaneously [6]. This inherent weakness poses a limitation on the intensity of the obtainable Raman signal. Various methods can be used to increase the Raman throughput of an experiment, such as increasing the incident laser power and using microscope objectives to tightly focus the laser beam into small areas. However, this can have negative consequences such as sample photobleaching [139]. Placing the analyte on a rough metal surface can provide orders of magnitude enhancement of the measured Raman signal, i.e. SERS.

Summary

It seems to me that spontaneous scattering is the only possible way the Raman effect could influence the radiative profile of the atmosphere.  Sources like those above convince me that, lacking laser intensity, natural light does not produce a Raman effect in the air significant enough to be considered a climate factor.