I Want You Not to Panic

 

I’ve been looking into the claims behind concern over rising CO2 and temperatures, and this post lays out reasons why the alarms are exaggerated. It involves examining the data and how they are interpreted.

First, the longer view suggests where to focus for understanding. Consider a long-term temperature record such as HadCRUT4. Taking it at face value, setting aside concerns about revisions and adjustments, we can see the pattern over the last 120 years following the Little Ice Age. The period between 1850 and 1900 is often considered pre-industrial, since modern energy and machinery took hold later on. The graph shows that warming was not much of a factor until temperatures rose to a peak in the 1940s, cooled off into the 1970s, and then ended the century with a rise matching the rate of the earlier warming. Overall, the accumulated warming was 0.8C.

Next, consider the record of CO2 concentrations in the atmosphere. It’s important to know that modern measurement of CO2 really began in 1959 at the Mauna Loa Observatory, coinciding with the mid-century cool period. The earlier values in the chart are reconstructed by NASA GISS from various sources and calibrated to reconcile with the modern record. It is also evident that the first 60 years saw minimal change in the values compared to the post-1959 rise, after WWII ended and manufacturing turned from military production to meeting consumer needs. So again the mid-20th century appears as a change point.

It becomes interesting to look at the last 60 years of temperature and CO2, from 1959 to 2019, particularly with so much clamour about climate emergency and crisis. This graph puts together rising CO2 and temperatures for that period. First, note that the accumulated warming is about 0.8C after fluctuations. And remember that those decades witnessed great human flourishing and prosperity by any standard of life quality. CO2 rose steadily and monotonically, with some acceleration into the 21st century.

Now let’s look at projections into the future, bearing in mind Mark Twain’s warning not to trust future predictions. No scientist knows all or most of the surprises that overturn continuity from today to tomorrow. Still, as weathermen well know, the best forecasts are built by starting from present conditions and adding some changes going forward.

Here is a look to century’s end as a baseline for context. No one knows what cooling and warming periods lie ahead, but one scenario is that the next 80 years could see continued warming at the same rate as the last 60 years. That presumes the forces that made the weather during the lifetimes of many of us seniors will continue in the future. Of course factors beyond our ken may pull the future away from that baseline, and humans will notice and adapt as they have always done. And in the back of our minds is the knowledge that we are 11,500 years into an interglacial period before the cold returns, the cold being the greater threat to both humanity and the biosphere.

Those who believe CO2 causes warming advocate reducing the use of fossil fuels for fear of overheating, apparently discounting the need for energy should winters grow harsher. The graph shows one projection similar to that for temperature: the next 80 years accumulating CO2 at the same rate as the last 60. A second projection, in green, takes the somewhat higher rate of the last 10 years and extends it to century’s end. The latter trend would achieve a doubling of CO2.
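To make those two projections concrete, here is a minimal sketch of the arithmetic, assuming approximate Mauna Loa annual means (about 316 ppm in 1959, 387 ppm in 2009 and 411 ppm in 2019); the exact values plotted in the graph may differ slightly.

```python
# Sketch: extrapolate CO2 to 2100 under two assumed linear growth rates.
# Approximate Mauna Loa annual means (ppm); the plotted series may differ slightly.
co2_1959, co2_2009, co2_2019 = 316.0, 387.0, 411.0

rate_60yr = (co2_2019 - co2_1959) / 60.0   # ~1.6 ppm/yr over the last 60 years
rate_10yr = (co2_2019 - co2_2009) / 10.0   # ~2.4 ppm/yr over the last 10 years

years_ahead = 2100 - 2019                  # 81 years to century's end
proj_low  = co2_2019 + rate_60yr * years_ahead
proj_high = co2_2019 + rate_10yr * years_ahead

print(f"At the 60-year rate: {proj_low:.0f} ppm in 2100")
print(f"At the 10-year rate: {proj_high:.0f} ppm in 2100")
```

With these assumed values the higher recent rate lands near 600 ppm at century’s end, roughly double the 300 ppm starting point.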

What those two scenarios mean depends on how sensitive you think Global Mean Temperature is to changing CO2 concentrations. Climate models attempt to consider all relevant and significant factors and produce future scenarios for GMT. CMIP6 is the current group of models, displaying a wide range of warming presumably from rising CO2. The one model closely replicating HadCRUT4 back to 1850 projects 1.8C higher GMT for a doubling of CO2 concentrations. If that held true going from 300 ppm to 600 ppm, the trend would resemble the red dashed line continuing the observed warming from the past 60 years: 0.8C up to now and another 1C for the rest of the century. Of course there are other models programmed for warming 2 or 3 times the rate observed.
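A short sketch of the arithmetic behind that red dashed line, assuming the warming response scales with the logarithm of CO2 concentration and using the 1.8C-per-doubling figure from the model just mentioned:

```python
import math

def warming(c_start, c_end, sensitivity_per_doubling=1.8):
    """Temperature change (C) for a CO2 change, assuming a logarithmic response."""
    return sensitivity_per_doubling * math.log2(c_end / c_start)

# 300 ppm (roughly pre-industrial) to ~410 ppm today, then on to 600 ppm (a doubling)
print(f"To date (300 -> 410 ppm): {warming(300, 410):.1f} C")
print(f"Remaining (410 -> 600 ppm): {warming(410, 600):.1f} C")
print(f"Full doubling (300 -> 600 ppm): {warming(300, 600):.1f} C")
```

With those assumptions, 300 to 410 ppm accounts for roughly 0.8C and 410 to 600 ppm for roughly another 1C, which is how the two legs of the dashed line add up to 1.8C.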

People who take to the streets with signs forecasting doom in 11 or 12 years have fallen victim to the IPCC 450 and 430 ppm scenarios. For years activists asserted that warming from pre-industrial times can be contained to 2C if CO2 concentrations peak at 450 ppm. Last year, the SR1.5 report lowered the threshold to 430 ppm, hence the shortened timetable for the end of life as we know it.

For the sake of brevity, this post leaves aside many technical issues. Uncertainties about the temperature record and about early CO2 levels, and the questions around Equilibrium Climate Sensitivity (ECS) and Transient Climate Response (TCR), are for another day. It should also be noted that GMT as an average hides a huge variety of fluxes over the globe’s surface, and thus larger warming in some places such as Canada, and cooling in other places like the Southeast US. Ross McKitrick pointed out that Canada has already gotten more than 1.5C of warming, and it has been a great social, economic and environmental benefit.

So I want people not to panic about global warming/climate change. Should we do nothing? On the contrary, we must invest in robust infrastructure to ensure reliable affordable energy and to protect against destructive natural events. And advanced energy technologies must be developed for the future since today’s wind and solar farms will not suffice.

It is good that Greta’s demands were unheeded at the Davos gathering. Panic is not useful for making wise policies, and as you can see above, we have time to get it right.

Climate Models: Good, Bad and Ugly

Several posts here discuss INM-CM4, the Good CMIP5 climate model, since it alone closely replicates the HadCRUT temperature record, as well as approximating the BEST and satellite datasets. This post is prompted by recent studies comparing various CMIP6 models, the new generation intended to hindcast history through 2014 and forecast to 2100.

Background

Much revealing information is provided in an AGU publication Causes of Higher Climate Sensitivity in CMIP6 Models by Mark D. Zelinka et al. (2019). H/T Judith Curry.  Excerpts in italics with my bolds.

The severity of climate change is closely related to how much the Earth warms in response to greenhouse gas increases. Here we find that the temperature response to an abrupt quadrupling of atmospheric carbon dioxide has increased substantially in the latest generation of global climate models. This is primarily because low cloud water content and coverage decrease more strongly with global warming, causing enhanced planetary absorption of sunlight—an amplifying feedback that ultimately results in more warming. Differences in the physical representation of clouds in models drive this enhanced sensitivity relative to the previous generation of models. It is crucial to establish whether the latest models, which presumably represent the climate system better than their predecessors, are also providing a more realistic picture of future climate warming.

The objective is to understand why the models are getting badder and uglier, and whether the increased warming is realistic. This issue was previously noted by John Christy last summer:

Figure 8: Warming in the tropical troposphere according to the CMIP6 models.
Trends 1979–2014 (except the rightmost model, which is to 2007), for 20°N–20°S, 300–200 hPa.

Christy’s comment: We are just starting to see the first of the next generation of climate models, known as CMIP6. These will be the basis of the IPCC assessment report, and of climate and energy policy for the next 10 years. Unfortunately, as Figure 8 shows, they don’t seem to be getting any better. The observations are in blue on the left. The CMIP6 models, in pink, are also warming faster than the real world. They actually have a higher sensitivity than the CMIP5 models; in other words, they’re apparently getting worse! This is a big problem.

Why CMIP6 Models Are More Sensitive

Zelinka et al. (2019) delve into the issue by comparing attributes of the CMIP6 models currently available for diagnostics.

1 Introduction

Determining the sensitivity of Earth’s climate to changes in atmospheric carbon dioxide (CO2) is a fundamental goal of climate science. A typical approach for doing so is to consider the planetary energy balance at the top of the atmosphere (TOA), represented as

N = F + λΔT

where N is the net TOA radiative flux anomaly, F is the radiative forcing, λ is the radiative feedback parameter, and ΔT is the global mean surface air temperature anomaly. The sign convention is that N is positive down and λ is negative for a stable system.

Conceptually, this equation states that the TOA energy imbalance can be expressed as the sum of the radiative forcing and the radiative response of the system to a global surface temperature anomaly. The assumption that the radiative damping can be expressed as a product of a time‐invariant and global mean surface temperature anomaly is useful but imperfect (Armour et al., 2013; Ceppi & Gregory, 2019). Under this assumption, one can estimate the effective climate sensitivity (ECS), the ultimate global surface temperature change that would restore TOA energy balance

ECS = -F_2x / λ

where F_2x is the radiative forcing due to doubled CO2.

ECS therefore depends on the magnitude of the CO2 radiative forcing and on how strongly the climate system radiatively damps planetary warming. A climate system that more effectively radiates thermal energy to space or more strongly reflects sunlight back to space as it warms (larger magnitude of λ) will require less warming to restore planetary energy balance in response to a positive radiative forcing, and vice versa.
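As a worked illustration of that relationship, using round illustrative numbers rather than values from the paper:

```python
# ECS = -F_2x / lambda: a stronger (more negative) feedback means a lower sensitivity.
F_2x = 3.7                      # W/m^2, a commonly cited forcing for doubled CO2 (illustrative)
for lam in (-0.6, -1.0, -2.0):  # W/m^2 per K, illustrative feedback parameters
    ecs = -F_2x / lam
    print(f"lambda = {lam:+.1f} W/m^2/K  ->  ECS = {ecs:.1f} K")
```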

Because GCMs attempt to represent all relevant processes governing Earth’s response to CO2, they provide the most direct means of estimating ECS. ECS values diagnosed from CO2 quadrupling experiments performed in fully coupled GCMs as part of the fifth phase of the Coupled Model Intercomparison Project ranged from 2.1 to 4.7 K. It is already known that several models taking part in CMIP6 have values of ECS exceeding the upper limit of this range. These include CanESM5.0.3, CESM2, CNRM‐CM6‐1, E3SMv1, and both HadGEM3‐GC3.1 and UKESM1.

In all of these models, high ECS values are at least partly attributed to larger cloud feedbacks than their predecessors.

In this study, we diagnose the forcings, feedbacks, and ECS values in all available CMIP6 models. We assess in each model the individual components that make up the climate feedback parameter and quantify the contributors to intermodel differences in ECS. We also compare these results with those from CMIP5 to determine whether the multimodel mean or spread in ECS, feedbacks, and forcings have changed.

The range of ECS values across models has widened in CMIP6, particularly on the high end, and now includes nine models with values exceeding the CMIP5 maximum (Figure 1a). Specifically, the range has increased from 2.1–4.7 K in CMIP5 to 1.8–5.6 K in CMIP6, and the intermodel variance has significantly increased (p = 0.04).

One model’s ECS is below the CMIP5 minimum (INM‐CM4‐8).

This increased population of high ECS models has caused the multimodel mean ECS to increase from 3.3 K in CMIP5 to 3.9 K in CMIP6. Though substantial, this increase is not statistically significant (p = 0.16). ERF_2x has increased slightly on average in CMIP6 and its intermodel standard deviation has been reduced by nearly 30%, from 0.50 W/m² in CMIP5 to 0.36 W/m² in CMIP6 (Figure 1b).

This ECS increase is primarily attributable to an increased multimodel mean feedback parameter due to strengthened positive cloud feedbacks, as all noncloud feedbacks are essentially unchanged on average in CMIP6. However, it is the unique combination of weak overall negative feedback and moderate radiative forcing that allows several CMIP6 models to achieve high ECS values beyond the CMIP5 range.

The increase in cloud feedback arises solely from the strengthened SW low cloud component, while the non‐low cloud feedback has slightly decreased. The SW low cloud feedback is larger on average in CMIP6 due to larger reductions in low cloud cover and weaker increases in cloud liquid water path with warming. Both of these changes are much more dramatic in the extratropics, such that the CMIP6 mean low cloud amount feedback is now stronger in the extratropics than in the tropics, and the fraction of multimodel mean ECS attributable to extratropical cloud feedback has roughly tripled.

The aforementioned increase in CMIP6 mean cloud feedback is related to changes in model representation of clouds. Specifically, both low cloud cover and water content increase less dramatically with SST in the middle latitudes as estimated from unforced climate variability in CMIP6.

Figure 1. INM-CM5 representation of temperature history. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the following figures, numbers on the time axis indicate the first year of the 5-year mean.

The Nitty Gritty

Open image in new tab to enlarge.

The details are shown in Supporting Information for “Causes of higher climate sensitivity in CMIP6 models”. Here we can see how specific models stack up on the key variables driving ECS attributes.

Open image in new tab to enlarge.

Figure S1. Gregory plots showing global and annual mean TOA net radiation anomalies
plotted against global and annual mean surface air temperature anomalies. Best-fit ordinary linear least squares lines are shown. The y-intercept of the line (divided by 2) provides an estimate of the effective radiative forcing from CO2 doubling (ERF2x), the slope of the line provides an estimate of the net climate feedback parameter (λ), and the x-intercept of the line (divided by 2) provides an estimate of the effective climate sensitivity (ECS). These values are printed in each panel. Models are ordered by ECS.
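For readers wanting to reproduce a Gregory-style estimate, here is a minimal sketch assuming you already have annual global-mean anomalies of surface air temperature and net TOA flux from an abrupt CO2-quadrupling run; the synthetic numbers at the bottom are made up purely to exercise the function, and the paper’s own fitting choices may differ in detail.

```python
import numpy as np

def gregory_fit(dT, dN):
    """Estimate ERF_2x, feedback lambda, and ECS from abrupt-4xCO2 anomalies.

    dT : global annual-mean surface air temperature anomaly (K)
    dN : global annual-mean net TOA radiation anomaly (W/m^2)
    Fits dN = F_4x + lam * dT; values are halved to express them per CO2 doubling.
    """
    lam, F_4x = np.polyfit(dT, dN, 1)   # slope, y-intercept
    erf_2x = F_4x / 2.0                 # y-intercept / 2
    ecs = -F_4x / (2.0 * lam)           # x-intercept / 2
    return erf_2x, lam, ecs

# Example with made-up numbers (not from any model): F_4x ~ 7 W/m^2, lam ~ -1 W/m^2/K
rng = np.random.default_rng(0)
dT = np.linspace(0.5, 6.0, 150)
dN = 7.0 - 1.0 * dT + rng.normal(0, 0.3, dT.size)
print(gregory_fit(dT, dN))   # roughly (3.5, -1.0, 3.5)
```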

Open image in new tab to enlarge.

Figure S7. Contributions of forcing and feedbacks to ECS in each model and for the multimodel means. Contributions from the tropical and extratropical portion of the feedback are shown in light and dark shading, respectively. Black dots indicate the ECS in each model, while upward and downward pointing triangles indicate contributions from non-cloud and cloud feedbacks, respectively. Numbers printed next to the multi-model mean bars indicate the cumulative sum of each plotted component. Numerical values are not printed next to residual, extratropical forcing, and tropical albedo terms for clarity. Models within each collection are ordered by ECS.

Open image in new tab to enlarge.

Figure S8. Cloud feedbacks due to low and non-low clouds in the (light shading) tropics and (dark shading) extratropics in each model and for the multi-model means. Non-low cloud feedbacks are separated into LW and SW components, and SW low cloud feedbacks are separated into amount and scattering components. “Others” represents the sum of LW low cloud feedbacks and the small difference between kernel- and APRP-derived SW low cloud feedback. Insufficient diagnostics are available to compute SW cloud amount and scattering feedbacks for the FGOALSg2 and CAMS-CSM1-0 models. Black dots indicate the global mean net cloud feedback in each model, while upward and downward pointing triangles indicate total contributions from non-low and low clouds, respectively. Models within each collection are ordered by global mean net cloud feedback.

My Summary

Once again the Good Model INM-CM4-8 is bucking the model builders’ consensus. The new revised INM model has a reduced ECS, and it flipped its cloud feedback from positive to negative. The description of improvements made to the INM modules includes how clouds are handled:

One of the few notable changes is the new parameterization of clouds and large-scale condensation. In INMCM5, cloud area and cloud water are computed prognostically according to Tiedtke (1993). That includes the formation of large-scale cloudiness as well as the formation of clouds in the atmospheric boundary layer and clouds of deep convection. Decrease of cloudiness due to mixing with the unsaturated environment and precipitation formation are also taken into account. Evaporation of precipitation is implemented according to Kessler (1969).

Cloud radiation forcing (CRF) at the top of the atmosphere is one of the most important climate model characteristics, as errors in CRF frequently lead to an incorrect surface temperature.

In the high latitudes, model errors in shortwave CRF are small. The model underestimates longwave CRF in the subtropics but overestimates it in the high latitudes. Errors in longwave CRF in the tropics tend to partially compensate errors in shortwave CRF. Both errors have positive sign near 60S, leading to a warm bias in the surface temperature there. As a result, we have some underestimation of the net CRF absolute value at almost all latitudes except the tropics. Additional experiments with tuned conversion of cloud water (ice) to precipitation (for upper cloudiness) showed that the model bias in the net CRF could be reduced, but that the RMS bias for the surface temperature would increase in this case.

Resources:

Temperatures According to Climate Models  Initial Discovery of the Good Model INM-CM4 within CMIP5

Latest Results from First-Class Climate Model INMCM5 The new version improvements and historical validation

 

Planetary CO2 in the Long Run

This is a new slide from Raymond at RiC-Communications, added to twelve others in a project entitled The World of CO2.  Below is a reprinted post with the background and complete set of exhibits, or infographics as he calls them. Recently Dr. William Happer referred to this long historical view to correct activists who claim we are conducting a dangerous experiment on the planet by burning fossil fuels and releasing CO2.  As the chart shows, CO2 atmospheric concentrations have been much higher throughout history, with today being a period of CO2 famine.  As well, the graph shows that temperatures can crash even when CO2 is high, and that some periods remained warm while CO2 declined. Also apparent is our current time, well into an interglacial period, classified by paleoclimatologists as an “Icehouse.”  See the post Climate Advice: Don’t Worry Be Happer.

Previous Post Here’s Looking at You CO2 

Raymond of the RiC-Communications studio commented on a recent post and offered to share here some graphics on CO2 for improving public awareness.  This post presents the eleven charts he has produced so far. I find them straightforward and useful, and appreciate his excellent work on this. The project title is a link to RiC-Communications.

Updates January 21 and 26, 2020, with added slides

This project is: The world of CO2

Infographics can be helpful in making things simple to understand. CO2 is a complex topic with a lot of information and statistics. These simple step-by-step charts should help give you an idea of CO2’s importance. Without CO2, plants wouldn’t be able to live on this planet. Just remember that if CO2 falls below 150 ppm, all plant life would cease to exist.

– N° 1 Earth’s atmospheric composition
– N° 2 Natural sources of CO2 emissions
– N° 3 Global anthropogenic CO2 emissions
– N° 4 CO2 – Carbon dioxide molecule
– N° 5 The global carbon cycle
– N° 6 Carbon and plant respiration
– N° 7 Plant categories and abundance (C3, C4 & CAM Plants)
– N° 8 Photosynthesis, the C3 vs C4 gap
– N° 9 Plant respiration and CO2
– N° 10 The logarithmic temperature rise of higher CO2 levels.
– N° 11 Earth’s atmospheric composition in relationship to CO2
– N° 12 Human respiration and CO2 concentrations.
– N° 13 600 million years of temperature change and atmospheric CO2

And in Addition

Note that illustration #10 assumes (as is the “consensus”) that doubling atmospheric CO2 produces a 1C rise in GMT (Global Mean Temperature).  Even if true, the warming would be gentle and not cataclysmic.  Greta and XR are foolishly thinking the world goes over a cliff if CO2 hits 430 ppm.  I start to wonder if Greta really can see CO2 as she claims.

It is also important to know that natural CO2 sources and sinks are estimated with large error ranges.  For example this table from earlier IPCC reports:

Below are some other images I find meaningful, though they lack Raymond’s high production values.

 

[Image: CO2 levels, 2018]

Simple Science 2: The World of Climate

Raymond of the RiC-Communications studio commented on a recent post and offered to share here some graphics on CO2 for improving public awareness.  He has produced 12 interesting slides, which are presented in the post Here’s Looking at You, CO2.  This post presents the three initial charts he has so far created on a second theme, The World of Climate Change.  I find them straightforward and useful, and appreciate his excellent work on this. The project title is a link to RiC-Communications.

This project is The World of Climate Change

Infographics can be helpful in making things simple to understand. Climate change is a complex topic with a lot of information and statistics. These simple step-by-step charts are here to help readers understand what is occurring naturally and what could be caused by humans, what is cause for alarm and what isn’t. Only through learning is it possible to get the big picture so as to make the right decisions for the future.

– N° 01 120 m of sea level rise over the past 20,000 years.
– N° 02 Holocene period and average northern hemispheric temperatures
– N° 03 140 years of global mean temperature

Comment:

This project will explore how aspects of the world climate system have changed from the past up to the present time.  Understanding the range of historical variation and the factors involved is essential for anticipating how future climate parameters might fluctuate.

For example:

The Climate Story (Illustrated) looks at the temperature record.

H20 the Gorilla Climate Molecule looks at precipitation patterns.

Data vs. Models #2: Droughts and Floods looks at precipitation extremes.

Data vs. Models #3: Disasters looks at extreme weather events.

Data vs. Models #4: Climates Changing looks at boundaries of defined climate zones.

And in Addition

Since the Statue of Liberty features in the sea level graphic, here are observations from there

[Image: New York City sea level, observed and projected]


CO2, SO2, O3: A Journey of Discovery

A previous post Light Bulbs Disprove Global Warming presented an article by Dr. Peter Ward along with some scientific discussion from his website. This post presents an excerpt from Chapter One of his book which helpfully explains his journey of discovery from his field of volcanism to the larger question of global warming.

The Chapter is How I Came to Wonder about Climate Change. Excerpts in italics with my bolds.

Discovering a More Likely Cause of Global Warming

The evidence for volcanism in the ice layers under Summit, Greenland, consists of sulfate
deposits. Sulfate comes from sulfur dioxide, megatons of which are emitted during each
volcanic eruption. At first, I thought that the warming was caused by the sulfur dioxide,
which is observed to absorb solar energy passing through the atmosphere. My thinking
was influenced by greenhouse warming theory, which assumes that carbon dioxide causes
global warming because it is observed to absorb infrared energy radiated by Earth as it
passes upward through the atmosphere and is then thought to re-radiate it back down to
the surface, thus causing warming. The sulfur dioxide story, however, just wasn’t adding
up quantitatively.

Figure 1.9 Average temperatures per century (black) increased at the same time as the amount of volcanic sulfate per century (red). The greatest warming occurred when volcanism was more continuous from year to year, as shown by the blue circles surrounding the number of contiguous layers (7 or more) containing volcanic sulfate. It was this continuity over two millennia that finally warmed the world out of the last ice age. Data are from the GISP2 drill hole under Summit, Greenland. Periods of major warming are labeled in black. Periods of major cooling are labeled in blue.

Eventually, after publishing two papers that developed this story, I came to realize
that sulfur dioxide was actually just the “footprint” of volcanism—a measure of how
active volcanoes were at any given time. The real breakthrough came when I came across
a paper reporting that the lowest concentrations of stratospheric ozone ever recorded were for the two years after the 1991 eruption of Mt. Pinatubo, the largest volcanic eruption since the 1912 eruption of Mt. Katmai. As I dug deeper, analyzing ozone records from Arosa, Switzerland—the longest-running observations of ozone in the world, begun in 1927 (Figure 8.15 on page 119)—I found that ozone spiked in the years of most volcanic eruptions but dropped dramatically and precipitously in the year following each eruption. There seemed to be a close relationship between volcanism and ozone. What could that relationship be?

Increased SO2 pollution (dotted black line) does not appear to contribute to substantial global warming (red line) until total column ozone decreased (black line, y-axis inverted), most likely due to increasing tropospheric chlorine (green line). Mean annual temperature anomaly in the Northern Hemisphere (red line) and ozone (black line) are smoothed with a centered 5 point running mean. OHC is ocean heat content (dotted purple line).

The answer was not long in coming. I knew that all volcanoes release hydrogen chloride
when they erupt, and I also knew that chlorine from man-made chlorofluorocarbon
compounds had been identified in the 1970s as a potent agent of stratospheric ozone
depletion. From these two facts, and a third one, I deduced that it must be the depletion of
ozone by chlorine in volcanic hydrogen chloride—and not the absorption of solar radiation
by sulfur dioxide—that was driving the warming events that followed volcanic eruptions.
The third fact in the equation was the well-known interaction of stratospheric ozone with
solar radiation.

Figure 1.10 When ozone is depleted, a narrow sliver of solar ultraviolet-B radiation with wavelengths close to 0.31 µm (yellow triangle) reaches Earth. The red circle shows that the energy of this ultraviolet radiation is around 4 electron volts (eV) on the red scale on the right, 48 times the energy absorbed most strongly by carbon dioxide (blue circle, 0.083 eV at 14.9 micrometers (µm) wavelength). Shaded grey areas show the bandwidths of absorption by different greenhouse gases. Current computer models calculate radiative forcing by adding up the areas under the broadened spectral lines that make up these bandwidths. Net radiative energy, however, is proportional to frequency only (red line), not to amplitude, bandwidth, or amount.

The ozone layer, at altitudes of 12 to 19 miles (20 to 30 km) up in the lower
stratosphere, absorbs very energetic solar ultraviolet radiation, thereby protecting life on
Earth from this very “hot,” DNA-destroying radiation. When the concentration of ozone is
reduced, more ultraviolet radiation is observed to reach Earth’s surface, increasing the risk
of sunburn and skin cancer. There is no disagreement among climate scientists about this,
but I went one step further by deducing that this increased influx of “super-hot” ultraviolet
radiation also actually warms Earth.

All ultraviolet UV-C is absorbed in the upper atmosphere. Most UV-B is absorbed in the stratosphere. The wavelengths of UV are shown in nanometers.

All current climate models assume that radiation travels through space as waves and
that energy in radiation is proportional to the square of the amplitude of these waves
and to the bandwidth of the radiation, i.e. to the range of wavelengths or frequencies
involved. Figure 1.10 shows the percent absorption for different greenhouse-gases as a
function of wavelength or frequency. It is generally assumed that the energy absorbed
by greenhouse-gases is proportional to the areas shaded in gray. From this perspective,
absorption by carbon dioxide of wavelengths around 14.9 and 4.3 micrometers in
the infrared looks much more important than absorption by ozone of ultraviolet-B
radiation around 0.31 micrometers. Climate models thus calculate that ultraviolet
radiation is relatively unimportant for global warming because it occupies a rather
narrow bandwidth in the solar spectrum compared to Earth’s much lower frequency,
infrared radiation.

The models neglect the fact, shown by the red line in Figure 1.10 and explained in
Chapter 4, that due to its higher frequency, ultraviolet radiation (red circle) is
48 times more energy-rich, 48 times “hotter,” than infrared absorbed by
carbon dioxide (blue circle), which means that there is a great deal more energy packed
into that narrow sliver of ultraviolet (yellow triangle) than there is in the broad band
of infrared. This actually makes very good intuitive sense. From personal experience,
we all know that we get very hot and are easily sunburned when standing in ultraviolet
sunlight during the day, but that we have trouble keeping warm at night when standing
in infrared energy rising from Earth.

These flawed assumptions in the climate models are based on equations that were
written in 1865 by James Clerk Maxwell and have been used very successfully to design
every piece of electronics that we depend on today, including our electric grid. Maxwell
assumed that electromagnetic energy travels as waves through matter, air, and space.
His wave equations seem to work well in matter, but not in space. Even though Albert
Michelson and Edward Morley demonstrated experimentally in 1887 that there is no
medium in space, no so-called luminiferous aether, through which waves could travel,
most physicists and climatologists today still assume that electromagnetic radiation does
in fact travel through space at least partially in the form of waves.

They also erroneously assume that energy in these imagined waves is proportional to
the square of their amplitude, which is true in matter, but cannot be true in space. They
calculate that there is more energy in the broad band of low-frequency infrared radiation
emitted by Earth and absorbed by greenhouse gases than there is in the narrow sliver of
additional high-frequency ultraviolet solar radiation that reaches Earth when ozone is
depleted (Figure 1.10). Nothing could be further from the truth.

The energy of radiation absorbed by carbon dioxide around 14,900 nanometers (blue circle) is near 0.08 electron volts (green circle) while the energy that reaches Earth when the ozone layer is depleted around 310 nanometers (red circle) is near 4 electron volts, 48 times larger.
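The 48-times figure is straightforward photon arithmetic: the energy per photon is hc/λ, or roughly 1240 eV·nm divided by the wavelength in nanometres. A quick check of the per-photon numbers:

```python
# Photon energy from wavelength: E(eV) ~ 1239.84 / wavelength(nm)
def photon_energy_ev(wavelength_nm):
    return 1239.84 / wavelength_nm

uv_b = photon_energy_ev(310)      # ~4.0 eV, the UV-B sliver near 0.31 um
ir_co2 = photon_energy_ev(14900)  # ~0.083 eV, the CO2 band near 14.9 um
print(f"UV-B 310 nm : {uv_b:.2f} eV")
print(f"IR 14900 nm : {ir_co2:.3f} eV")
print(f"Ratio       : {uv_b / ir_co2:.0f}")   # ~48
```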

The story got even more convoluted by the rise of quantum mechanics at the dawn
of the 20th century when Max Planck and Albert Einstein introduced the idea that energy
in light is quantized. These quanta of light ultimately became known as photons. In order
to explain the photoelectric effect, Einstein proposed that radiation travels as particles, a
concept that scientists and natural philosophers had debated for 2500 years before him.
I will explain in Chapter 4 why photons traveling from Sun cannot physically exist, even
though they provide a very useful mathematical shorthand.

Max Planck postulated, in 1900, that the energy in radiation is equal to vibrational
frequency times a constant, as is true of an atomic oscillator, in which a bond holding two
atoms together is oscillating in some way. He needed this postulate in order to derive an
equation by trial and error that could account for and calculate the observed properties of
radiation. Planck’s postulate led to Albert Einstein’s light quanta and to modern physics,
dominated by quantum mechanics and quantum electrodynamics. Curiously, however,
Planck didn’t fully appreciate the far-reaching implications of his simple postulate, which
states that the energy in radiation is equal to frequency times a constant. He simply saw it as a useful mathematical trick.

Energy is a function of frequency and should therefore be plotted on the x-axis (top of this figure) and units of watts should not be included on the y-axis. The colored lines show the spectral radiance predicted by Planck’s law for black bodies with different absolute temperatures.

As I dug deeper, it took me several years to become comfortable with those implications.
It was not the way we were trained to think. It was not the way most physicists think, even
today. Being retired turned out to be very useful because I could give my brain time to mull
this over. Gradually, it began to make sense. The take-away message for me was that the
energy in the kind of ultraviolet radiation that reaches Earth when ozone is depleted is 48 times “hotter” than infrared energy absorbed by greenhouse gases. In sufficient quantities, it should be correspondingly 48 times more effective in raising Earth’s surface temperature than the weak infrared radiation from Earth’s surface that is absorbed by carbon dioxide in the atmosphere and supposedly re-radiated back to the ground.

There simply is not enough energy involved with greenhouse gases to have a significant
effect on global warming. Reducing emissions of greenhouse gases will therefore not be
effective in reducing global warming. This conclusion is critical right now because most of
the world’s nations are planning to meet in Paris, France, in late November 2015, to agree
on legally binding limits to greenhouse-gas emissions. Such limits would be very expensive
as well as socioeconomically disruptive. We depend on large amounts of affordable energy to support our lifestyles, and developing countries also depend on large amounts of affordable energy to improve their lifestyles. Increasing the cost of energy by even a few percent would have major negative financial and societal repercussions.

This book is your chance to join my odyssey. You do not need to have majored in
science or even to be familiar with physics, chemistry, mathematics, or climatology. You
just need to be curious and be willing to work. You also need to be willing to think critically
about observations, and you may need to reevaluate some of your own ideas about climate.
You will learn that there was a slight misunderstanding in science made back in the 1860s
that has had profound implications for understanding climate change and physics today. It took me many years of hard work to gain this insight, and I will discuss that in Chapter 4. First, however, we need to look at some fundamental observations that cause us to wonder: Could the greenhouse warming theory of climate change actually be mistaken?

Footnote:

I welcome this analysis and assessment that explains why rising CO2 concentrations in the satellite era have no discernible impact on the radiative profile of the atmosphere.  See Global Warming Theory and the Tests It Fails.

Recycling Climate Trash Papers

There they go again with the ocean heating claims. Media alarms are rampant, triggered by a new publication, Record-Setting Ocean Warmth Continued in 2019, in Advances in Atmospheric Sciences.
Authors: Lijing Cheng, John Abraham, Jiang Zhu, Kevin E. Trenberth, John Fasullo, Tim Boyer, Ricardo Locarnini, Bin Zhang, Fujiang Yu, Liying Wan, Xingrong Chen, Xiangzhou Song, Yulong Liu, Michael E. Mann.

Reasons for doubting the paper and its claims go well beyond the listing of so many names, including several of the usual suspects. No, this publication is tarnished by its implausible provenance. It rests upon and repeats analytical mistakes that have been pointed out, but the true believers carry on without batting an eye.

It started with Resplandy et al. in 2018, who became an overnight sensation with their paper Quantification of ocean heat uptake from changes in atmospheric O2 and CO2 composition in Nature, October 2018, leading to media reports of extreme ocean heating. Nic Lewis published a series of articles at his own site and at Climate Etc. in November 2018, leading to the paper being withdrawn and eventually retracted. Those authors acknowledged the errors and did the honorable thing.

Then in 2019, Cheng et al. published the same claim in their Science paper of January 2019, drawing on Resplandy et al. as a reference. That publication was featured in the IPCC Special Report on the Ocean and Cryosphere in a Changing Climate (SROCC).

Benny Peiser of GWPF objected in writing to IPCC, saying inter alia:

Your report (SROCC, p. 5-14) concludes that
“The rate of heat uptake in the upper ocean (0-700m) is very likely higher in the 1993-2017 (or 2005-2017) period compared with the 1969-1993 period (see Table 5.1).”

We would like to point out that this conclusion is based to a significant degree on a paper
by Cheng et al. (2019) which itself relies on a flawed estimate by Resplandy et al. (2018).
An authors’ correction to this paper and its ocean heat uptake (OHU) estimate was under
review for nearly a year, but in the end Nature requested that the paper be retracted
(Retraction Note, 2019).

That was not the only objection. Nic Lewis examined Cheng et al. 2019 and found it wanting. That discussion is also at Climate Etc.: Is ocean warming accelerating faster than thought? The authors replied to Lewis’s critique but did not refute or correct the identified errors.

A year later, in January 2020, the same people have processed another year of data in the same manner and proclaimed the same result. The only differences are the addition of several high-profile alarmists and the removal of Resplandy et al. from the references.  It looks like the group is emulating Michael Mann’s blueprint:  The Show Must Go On.  The noble cause justifies any and all means.  Show no weaknesses, admit no mistakes, correct nothing, sue if you have to.


SSTs Keep Cool at Year End

The best context for understanding decadal temperature changes comes from the world’s sea surface temperatures (SST), for several reasons:

  • The ocean covers 71% of the globe and drives average temperatures;
  • SSTs have a constant water content, (unlike air temperatures), so give a better reading of heat content variations;
  • A major El Nino was the dominant climate feature in recent years.

HadSST is generally regarded as the best of the global SST data sets, and so the temperature story here comes from that source, the latest version being HadSST3.  More on what distinguishes HadSST3 from other SST products at the end.

The Current Context

The chart below shows SST monthly anomalies as reported in HadSST3 starting in 2015 through December 2019.
A global cooling pattern is seen clearly in the Tropics since its peak in 2016, joined by NH and SH cycling downward since 2016.  In 2019 all regions had been converging to reach nearly the same value in April.

Then  NH rose exceptionally by almost 0.5C over the four summer months, in August exceeding previous summer peaks in NH since 2015.  Now in the last 4 months that warm NH pulse has reversed sharply.  Meanwhile the SH and Tropics bumped upward, but despite that the global anomaly dropped a little due to strong NH cooling.

Note that higher temps in 2015 and 2016 were first of all due to a sharp rise in Tropical SST, beginning in March 2015, peaking in January 2016, and steadily declining back below its beginning level. Secondly, the Northern Hemisphere added three bumps on the shoulders of Tropical warming, with peaks in August of each year.  A fourth NH bump was lower and peaked in September 2018.  As noted above, a fifth peak in August 2019 exceeded the four previous upward bumps in NH.

And as before, note that the global release of heat was not dramatic, due to the Southern Hemisphere offsetting the Northern one.  The major difference between now and 2015-2016 is the absence of Tropical warming driving the SSTs.

A longer view of SSTs

The graph below  is noisy, but the density is needed to see the seasonal patterns in the oceanic fluctuations.  Previous posts focused on the rise and fall of the last El Nino starting in 2015.  This post adds a longer view, encompassing the significant 1998 El Nino and since.  The color schemes are retained for Global, Tropics, NH and SH anomalies.  Despite the longer time frame, I have kept the monthly data (rather than yearly averages) because of interesting shifts between January and July.

To enlarge, open image in new tab.

1995 is a reasonable (ENSO neutral) starting point prior to the first El Nino.  The sharp Tropical rise peaking in 1998 is dominant in the record, starting Jan. ’97 to pull up SSTs uniformly before returning to the same level Jan. ’99.  For the next 2 years, the Tropics stayed down, and the world’s oceans held steady around 0.2C above 1961 to 1990 average.

Then comes a steady rise over two years to a lesser peak Jan. 2003, but again uniformly pulling all oceans up around 0.4C.  Something changes at this point, with more hemispheric divergence than before. Over the 4 years until Jan 2007, the Tropics go through ups and downs, NH a series of ups and SH mostly downs.  As a result the Global average fluctuates around that same 0.4C, which also turns out to be the average for the entire record since 1995.

2007 stands out with a sharp drop in temperatures, so that Jan. ’08 matches the low in Jan. ’99, but starting from a lower high. The oceans all decline as well, until temps build to a peak in 2010.

Now again a different pattern appears.  The Tropics cool sharply to Jan. ’11, then rise steadily for 4 years to Jan. ’15, at which point the most recent major El Nino takes off.  But this time, in contrast to ’97-’99, the Northern Hemisphere produces peaks every summer, pulling up the Global average.  In fact, these NH peaks appear every July starting in 2003, growing stronger to produce 3 massive highs in 2014, ’15 and ’16.  NH July 2017 was only slightly lower, and a fifth NH peak was still lower in Sept. 2018.

The highest summer NH peak came in 2019, only this time the Tropics and SH are offsetting rather than adding to the warming. Since 2014 the SH has played a moderating role, offsetting the NH warming pulses. (Note: these are high anomalies on top of the highest absolute temps in the NH.)

What to make of all this? The patterns suggest that in addition to El Ninos in the Pacific driving the Tropics SSTs, something else is going on in the NH.  The obvious culprit is the North Atlantic, since I have seen this sort of pulsing before.  After reading some papers by David Dilley, I confirmed his observation of Atlantic pulses into the Arctic every 8 to 10 years.

But the peaks coming nearly every summer in HadSST require a different picture.  Let’s look at August, the hottest month in the North Atlantic from the Kaplan dataset.
The AMO Index is from Kaplan SST v2, the unaltered and not detrended dataset. By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N. The graph shows warming began after 1992, rising up to 1998, with a series of matching years since. Because the N. Atlantic has partnered with the Pacific ENSO recently, let’s take a closer look at some AMO years in the last 2 decades.
This graph shows monthly AMO temps for some important years. The Peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note the red 2018 line is at the bottom of all these tracks. The black line shows that 2019 began slightly cooler, then tracked 2018, then rose to match previous summer pulses, before dropping the last four months to be slightly above 2018 and below other years.
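For anyone wanting to reproduce something like this index from a gridded SST product, here is a rough sketch of an area-weighted North Atlantic (0-70N) average; the longitude box and simple cosine-latitude weighting are my simplifications, since Kaplan’s actual basin masking and interpolation are more involved.

```python
import numpy as np

def basin_mean(sst, lats, lons, lat_range=(0, 70), lon_range=(-80, 0)):
    """Area-weighted mean SST over a lat/lon box (here roughly the North Atlantic).

    sst  : 2-D array (lat x lon) of monthly mean SSTs, NaN over land
    lats : 1-D array of grid latitudes (degrees)
    lons : 1-D array of grid longitudes (degrees, -180..180)
    """
    lat_mask = (lats >= lat_range[0]) & (lats <= lat_range[1])
    lon_mask = (lons >= lon_range[0]) & (lons <= lon_range[1])
    box = sst[np.ix_(lat_mask, lon_mask)]
    # cosine-latitude weights, broadcast across the longitude dimension
    weights = np.cos(np.deg2rad(lats[lat_mask]))[:, None] * np.ones(lon_mask.sum())
    valid = ~np.isnan(box)
    return np.sum(box[valid] * weights[valid]) / np.sum(weights[valid])
```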

Summary

The oceans are driving the warming this century.  SSTs took a step up with the 1998 El Nino and have stayed there with help from the North Atlantic, and more recently the Pacific northern “Blob.”  The ocean surfaces are releasing a lot of energy, warming the air, but eventually this will have a cooling effect.  The decline after 1937 was rapid by comparison, so one wonders: How long can the oceans keep this up? If the pattern of recent years continues, NH SST anomalies may rise slightly in coming months, but once again, ENSO, which has weakened, will probably determine the outcome.

Footnote: Why Rely on HadSST3

HadSST3 is distinguished from other SST products because HadCRU (Hadley Climatic Research Unit) does not engage in SST interpolation, i.e. infilling estimated anomalies into grid cells lacking sufficient sampling in a given month. From reading the documentation and from queries to Met Office, this is their procedure.

HadSST3 imports data from gridcells containing ocean, excluding land cells. From past records, they have calculated daily and monthly average readings for each grid cell for the period 1961 to 1990. Those temperatures form the baseline from which anomalies are calculated.

In a given month, each gridcell with sufficient sampling is averaged for the month and then the baseline value for that cell and that month is subtracted, resulting in the monthly anomaly for that cell. All cells with monthly anomalies are averaged to produce global, hemispheric and tropical anomalies for the month, based on the cells in those locations. For example, Tropics averages include ocean grid cells lying between latitudes 20N and 20S.

Gridcells lacking sufficient sampling that month are left out of the averaging, and the uncertainty from such missing data is estimated. IMO that is more reasonable than inventing data to infill. And it seems that the Global Drifter Array displayed in the top image is providing more uniform coverage of the oceans than in the past.
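As a concrete sketch of the procedure just described, stripped of the quality control, sampling thresholds and uncertainty estimation that the real product applies, the per-month logic looks roughly like this:

```python
import numpy as np

def monthly_anomaly(obs, climatology):
    """Grid-cell anomalies for one calendar month, HadSST-style (no infilling).

    obs         : 2-D array of this month's gridcell mean SSTs, NaN where a cell
                  had insufficient sampling (or is land)
    climatology : 2-D array of 1961-1990 baseline means for the same calendar month
    Returns the anomaly grid (NaN cells stay NaN), the simple average over sampled
    cells only, and the count of sampled cells; empty cells are left out, not infilled.
    A real implementation would also weight cells by area and compute regional
    (global, hemispheric, tropical) averages from subsets of latitudes.
    """
    anom = obs - climatology          # NaN propagates for missing cells
    sampled = ~np.isnan(anom)
    region_mean = np.nanmean(anom)    # average of sampled cells only
    return anom, region_mean, sampled.sum()
```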


USS Pearl Harbor deploys Global Drifter Buoys in Pacific Ocean

Raman Effect Not a Climate Factor

When the Raman effect came up last year in relation to GHGs (Greenhouse Gases), I was at first confused, thinking it was talk of Asian noodles.  So I have had to learn more, and while the effect is real and useful, I doubt it is a factor in global warming/climate change.  This post provides information principally from two sources, consistent with many others I read.

One article is Raman Spectroscopy from University of Pennsylvania.  Excerpts in italics with my bolds.

Raman Effect

Raman spectroscopy is often considered to be complementary to IR spectroscopy. For symmetrical molecules with a center of inversion, Raman and IR are mutually exclusive. In other words, bonds that are IR-active will not be Raman-active and vice versa. Other molecules may have bonds that are either Raman-active, IR-active, neither or both.

Raman spectroscopy measures the scattering of light by matter. The light source used in Raman spectroscopy is a laser.

The laser light is used because it is a very intense beam of nearly monochromatic light that can interact with sample molecules. When matter absorbs light, the internal energy of the matter is changed in some way. Since this site is focused on the complementary nature of IR and Raman, the infrared region will be discussed. Infrared radiation causes molecules to undergo changes in their vibrational and rotational motion. When the radiation is absorbed, a molecule jumps to a higher vibrational or rotational energy level. When the molecule relaxes back to a lower energy level, radiation is emitted. Most often the emitted radiation is of the same frequency as the incident light. Since the radiation was absorbed and then emitted, it will likely travel in a different direction from which it came. This is called Rayleigh scattering. Sometimes, however, the scattered (emitted) light is of a slightly different frequency than the incident light. This effect was first noted by Chandrasekhara Venkata Raman who won the Nobel Prize for this discovery. (6) The effect, named for its discoverer, is called the Raman effect, or Raman scattering.

Raman scattering occurs in two ways. If the emitted radiation is of lower frequency than the incident radiation, then it is called Stokes scattering. If it is of higher frequency, then it is called anti-Stokes scattering.

Energy Diagram Scattering (Source: Wikipedia)

The Blue arrow in the picture to the left represents the incident radiation. The Stokes scattered light has a frequency lower than that of the original light because the molecule did not relax all the way back to the original ground state. The anti-Stokes scattered light has a higher frequency than the original because it started in an excited energy level but relaxed back to the ground state.

Though any Raman scattering is very low in intensity, the Stokes scattered radiation is more intense than the anti-Stokes scattered radiation.

The reason for this is that very few molecules would exist in the excited level as compared to the ground state before the absorption of radiation. The diagram shown represents electronic energy levels as shown by the labels “n=”. The same phenomenon, however, applies to radiation in any of the regions.
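The imbalance follows from the Boltzmann population of the initial states. A rough sketch of that factor for an assumed vibrational mode of 1000 cm⁻¹ at room temperature (the full Stokes/anti-Stokes intensity ratio also carries a frequency-to-the-fourth-power prefactor, omitted here):

```python
import math

def boltzmann_ratio(mode_cm1, temp_k=300.0):
    """Approximate anti-Stokes/Stokes population factor exp(-h*c*nu/kT)."""
    h = 6.626e-34   # J s
    c = 2.998e10    # cm/s, so a wavenumber in cm^-1 gives a frequency in Hz
    k = 1.381e-23   # J/K
    return math.exp(-h * c * mode_cm1 / (k * temp_k))

# For a 1000 cm^-1 mode at 300 K, well under 1% of molecules start in the excited level
print(f"{boltzmann_ratio(1000):.3f}")
```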

Another article is Raman Techniques: Fundamentals and Frontiers by Robin R. Jones et al. (2019) at the US National Library of Medicine.

Abstract

Driven by applications in chemical sensing, biological imaging and material characterisation, Raman spectroscopies are attracting growing interest from a variety of scientific disciplines. The Raman effect originates from the inelastic scattering of light, and it can directly probe vibration/rotational-vibration states in molecules and materials.

Despite numerous advantages over infrared spectroscopy, spontaneous Raman scattering is very weak, and consequently, a variety of enhanced Raman spectroscopic techniques have emerged.

These techniques include stimulated Raman scattering and coherent anti-Stokes Raman scattering, as well as surface- and tip-enhanced Raman scattering spectroscopies. The present review provides the reader with an understanding of the fundamental physics that govern the Raman effect and its advantages, limitations and applications. The review also highlights the key experimental considerations for implementing the main experimental Raman spectroscopic techniques. The relevant data analysis methods and some of the most recent advances related to the Raman effect are finally presented. This review constitutes a practical introduction to the science of Raman spectroscopy; it also highlights recent and promising directions of future research developments.

Fundamental Principles

When light interacts with matter, the oscillatory electro-magnetic (EM) field of the light perturbs the charge distribution in the matter which can lead to the exchange of energy and momentum leaving the matter in a modified state. Examples include electronic excitations and molecular vibrations or rotational-vibrations (ro-vibrations) in liquids and gases, electronic excitations and optical phonons in solids, and electron-plasma oscillations in plasmas [108].

Spontaneous Raman

When an incident photon interacts with a crystal lattice or molecule, it can be scattered either elastically or inelastically. Predominantly, light is elastically scattered (i.e. the energy of the scattered photon is equal to that of the incident photon). This type of scattering is often referred to as Rayleigh scattering. The inelastic scattering of light by matter (i.e. the energy of the scattered photon is not equal to that of the incident photon) is known as the Raman effect [1, 4, 6]. This inelastic process leaves the molecule in a modified (ro-)vibrational state

In the case of spontaneous Raman scattering, the Raman effect is very weak; typically, 1 in 10^8 of the incident radiation undergoes spontaneous Raman scattering [6].

The transition from the virtual excited state to the final state can occur at any point in time and to any possible final state based on probability. Hence, spontaneous Raman scattering is an incoherent process. The output signal power is proportional to the input power, scattered in random directions and is dependent on the orientation of the polarisation. For example, in a system of gaseous molecules, the molecular orientation relative to the incident light is random and hence their polarisation wave vector will also be random. Furthermore, as the excited state has a finite lifetime, there is an associated uncertainty in the transition energy which leads to natural line broadening of the wavelength as per the Heisenberg uncertainty principle (∆E∆t ≥ ℏ/2) [1]. The scattered light, in general, has polarisation properties that differ from that of the incident radiation. Furthermore, the intensity and polarisation are dependent on the direction from which the light is measured [1]. The scattered spectrum exhibits peaks at all Raman active modes; the relative strength of the spectral peaks are determined by the scattering cross-section of each Raman mode [108]. Photons can undergo successive Rayleigh scattering events before Raman scattering occurs as Raman scattering is far less probable than Rayleigh scattering.

Laser Empowered Raman Scattering

Coherent light-scattering events involving multiple incident photons simultaneously interacting with the scattering material was not observed until laser sources became available in the 1960s, despite predictions being made as early as the 1930s [37, 38]. The first laser-based Raman scattering experiment was demonstrated in 1961 [39]. Stimulated Raman scattering (SRS) and CARS have become prominent four-wave mixing techniques and are of interest in this review.

SRS is a coherent process providing much stronger signals relative to spontaneous Raman spectroscopy as well as the ability to time-resolve the vibrational motions.

Raman is generally a very weak process; it is estimated that approximately one in every 10^8 photons undergo Raman scattering spontaneously [6]. This inherent weakness poses a limitation on the intensity of the obtainable Raman signal. Various methods can be used to increase the Raman throughput of an experiment, such as increasing the incident laser power and using microscope objectives to tightly focus the laser beam into small areas. However, this can have negative consequences such as sample photobleaching [139]. Placing the analyte on a rough metal surface can provide orders of magnitude enhancement of the measured Raman signal, i.e. SERS.

Summary

It seems to me that spontaneous scattering is the only possible way that the Raman effect could influence the radiative profile of the atmosphere.  Sources like those above convince me that, lacking laser intensity, natural light does not produce a Raman effect in the air of enough significance for it to be considered a climate factor.

Light Bulbs Disprove Global Warming

Dr. Peter Ward explains at The Hill in Greenhouse gases simply do not absorb enough heat to warm Earth. Excerpts in italics with my bolds.

Science is not done by consensus, by popular vote, or by group think. As Michael Crichton put it: “In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.”

The drive to demonstrate scientific consensus over greenhouse-warming theory has had the unintended consequence of inhibiting genuine scientific debate about the ultimate cause of global warming.

Believers of “the consensus” argue that anyone not agreeing with them is uninformed, an idiot or being paid by nefarious companies. The last thing most climate scientists want to consider at this point, when they think they are finally winning the climate wars, is the possibility of some problem with the science of greenhouse-warming theory. Believe me, I have tried for several years to communicate the problem to numerous leading climate scientists.

New data and improved understanding now show that there is a fatal flaw in greenhouse-warming theory. Simply put: greenhouse gases do not absorb enough of the heat radiated by Earth to cause global warming.

Understanding this very surprising and rather blunt statement is much easier than you might think. It gets down to understanding why a traditional light bulb gives off a great deal of heat whereas a new LED light bulb producing the same amount of light remains quite cool.

Heat is what makes us feel warm. More formally, heat is thermal energy flowing spontaneously from a warmer body to a cooler body. Thermal energy is well observed at the molecular level to be the oscillation of all the bonds that hold matter together. The hotter the body of matter, the higher the frequencies of oscillation and the higher the amplitudes of oscillation at each frequency of oscillation. In this way, heat and the temperature that results from absorbing heat both consist of a very broad spectrum of all of these frequencies of oscillation.

A traditional light bulb uses a large amount of electricity to heat the tungsten filament to temperatures around 5500 degrees, causing the filament to glow white hot. This high temperature is required to produce visible white light. The glowing filament, however, gives off a very broad spectrum of frequencies of radiation that we perceive as heat. Only a very small number of the highest of these frequencies are useful as visible light.

A new LED light bulb, on the other hand, uses a very small amount of electricity to cause a diode to emit a very narrow range of frequencies within the spectrum of visible light. The LED radiates only visible light — it does not radiate heat.

If you look at the LED with an infrared camera, you can see just where it gets hot. The hottest part is the base of the bulb, where an AC-to-DC converter is the primary source of heat for this bulb. For incandescent bulbs, the hottest part is the top of the bulb.

The primary purpose of a light bulb is to provide visible light. To repeat, a traditional light bulb radiates heat, a small portion of which is visible light. An LED on the other hand, only radiates visible light, requiring much less electricity. This is why you can substantially reduce your electric bills by replacing traditional incandescent light bulbs with LED light bulbs.

How does this apply to greenhouse gases?

Detailed laboratory studies of absorption of radiation show that carbon dioxide absorbs less than 16 percent of all the frequencies making up the heat radiated by Earth. Just as with LEDs, this limited set of frequencies absorbed by carbon dioxide does not constitute heat. This limited set of frequencies cannot cause an absorbing body of matter to get much hotter because it contains only a very small part of the heat required to do so.

Current radiation theory and current climate models assume that all radiation is created equal—that all radiation is the same no matter the temperature of the radiating body. Current theory simply assumes that what changes is the amount of such generic radiation measured in watts per square meter.

Extensive observations of radiation emitted by matter at different temperatures, however, show us clearly that the physical properties of radiation, the frequencies and amplitudes of oscillation making up radiation, increase in value rapidly with increasing temperature of the radiating body.

Climate scientists argue that the thermal energy absorbed by greenhouse gases is re-radiated, causing warming of air, slowing cooling of Earth and even directly warming Earth.

There simply is not enough heat involved in any of these proposed processes to have any significant effect on global warming. Greenhouse-warming theory “just ain’t so.”

Peter L. Ward worked 27 years with the United States Geological Survey. He was the chairman of the White House Working Group on Natural Disaster Information Systems during the Clinton administration. He’s published more than 50 scientific papers. He retired in 1998 but continues working to resolve several enigmatic observations related to climate change. His work is described in detail at WhyClimateChanges.com and in his book What Really Causes Global Warming? Greenhouse gases or ozone depletion? Follow him on Twitter at @yclimatechanges.

Comment:

The above article is an opinion piece that does not go deeply into the scientific case underlying its conclusions. That analysis can be read in Ward's paper, Why greenhouse-warming theory is physically impossible.

Overview

Thus greenhouse-warming theory rests on three assumptions: (1) that radiative energy can be quantified by a single number of watts per square meter, (2) that these radiative forcings can be added together, and (3) that Earth's surface temperature is proportional to the sum of all of these radiative forcings. A fundamentally new understanding of the physics of thermal energy and the physics of heat, described below, shows that all three assumptions are mistaken. There are other serious problems: (4) greenhouse gases absorb only a small part of the radiation emitted by Earth, (5) they can only reradiate what they absorb, (6) they do not reradiate in every direction as assumed, (7) they make up only a tiny part of the gases in the atmosphere, and (8) they have been shown by experiment not to cause significant warming. (9) The thermal effects of radiation are not about the amount of radiation absorbed, as currently assumed; they are about the temperature of the emitting body and the difference in temperature between the emitting and the absorbing bodies, as described below.

[Figure: Planck's law plotted against frequency on a linear scale]

Thermal radiation from Earth, at a temperature of 15 °C, consists of the narrow continuum of frequencies of oscillation shown in green in this plot of Planck's empirical law. Thermal radiation from the tungsten filament of an incandescent light bulb at 3000 °C consists of a broader continuum of frequencies shown in yellow and green. Thermal radiation from Sun at 5500 °C consists of a much broader continuum of frequencies shown in red, yellow and green.

Note in this plot of Planck’s empirical law that the higher the temperature, 1) the broader the continuum of frequencies, 2) the higher the amplitude of oscillation at each and every frequency, and 3) the higher the frequencies of oscillation that are oscillating with the largest amplitudes of oscillation. Radiation from Sun shown in red, yellow, and green clearly contains much higher frequencies and amplitudes of oscillation than radiation from Earth shown in green. Planck’s empirical law shows unequivocally that the physical properties of radiation are a function of the temperature of the body emitting the radiation.
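For readers who want to reproduce the behaviour described above, here is a minimal sketch (my own, not from Ward's paper) that evaluates Planck's law in frequency at the three temperatures quoted; the sampled frequencies are arbitrary grid points chosen only for illustration.

```python
# Illustrative sketch: spectral radiance from Planck's law,
# B(nu, T) = 2 h nu^3 / c^2 / (exp(h nu / (k T)) - 1),
# evaluated at the three temperatures discussed above (Earth ~15 C, filament ~3000 C, Sun ~5500 C).
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck_freq(nu_hz: float, temp_k: float) -> float:
    """Spectral radiance in W * m^-2 * sr^-1 * Hz^-1."""
    x = H * nu_hz / (K * temp_k)
    return (2.0 * H * nu_hz**3 / C**2) / math.expm1(x)

temps = {"Earth, 15 C": 288.0, "Filament, 3000 C": 3273.0, "Sun, 5500 C": 5773.0}

# Sample a few frequencies from the thermal infrared up into the visible (arbitrary grid points).
for nu in (3e13, 3e14, 6e14):
    row = ", ".join(f"{name}: {planck_freq(nu, t):.2e}" for name, t in temps.items())
    print(f"nu = {nu:.1e} Hz  ->  {row}")
# Hotter bodies emit over a broader range of frequencies and with larger amplitude at every frequency,
# which is the behaviour the plot above is describing.
```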

[Figure: Planck's law plotted against frequency on a logarithmic scale]

Ångström (1900) showed that “no more than about 16 percent of earth’s radiation can be absorbed by atmospheric carbon dioxide, and secondly, that the total absorption is very little dependent on the changes in the atmospheric carbon dioxide content, as long as it is not smaller than 0.2 of the existing value.” Extensive modern data agree that carbon dioxide absorbs less than 16% of the frequencies emitted by Earth shown by the vertical black lines of this plot of Planck’s empirical law where frequencies are plotted on a logarithmic x-axis. These vertical black lines show frequencies and relative amplitudes only. Their absolute amplitudes on this plot are arbitrary.

Temperature at Earth’s surface is the result of the broad continuum of oscillations shown in green. Absorbing less than 16% of the frequencies emitted by Earth cannot have much effect on the temperature of anything.
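As an illustration of the kind of calculation that lies behind such percentage figures, here is a rough sketch of my own that numerically integrates Planck's law at 288 K over an assumed CO2 absorption band. The band limits (roughly the 15 micron band, about 600 to 750 cm-1) are my assumption, not a figure from the article, and the resulting fraction is quite sensitive to how wide the band is taken to be.

```python
# Rough sketch only: estimating the fraction of Earth's thermal emission that falls inside an
# assumed CO2 absorption band, by numerically integrating Planck's law at 288 K.
import math

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck_lambda(lam_m: float, temp_k: float) -> float:
    """Spectral radiance per unit wavelength, W * m^-3 * sr^-1."""
    x = H * C / (lam_m * K * temp_k)
    return (2.0 * H * C**2 / lam_m**5) / math.expm1(x)

def band_fraction(lo_um: float, hi_um: float, temp_k: float, n: int = 20000) -> float:
    """Fraction of emission between lo_um and hi_um (trapezoid rule over 1-100 microns)."""
    lams = [1e-6 * (1.0 + i * (99.0 / n)) for i in range(n + 1)]
    vals = [planck_lambda(lam, temp_k) for lam in lams]
    def trapz(mask):
        return sum(0.5 * (vals[i] + vals[i + 1]) * (lams[i + 1] - lams[i])
                   for i in range(n) if mask(lams[i]))
    total = trapz(lambda lam: True)
    in_band = trapz(lambda lam: lo_um * 1e-6 <= lam <= hi_um * 1e-6)
    return in_band / total

# Assumed band limits: roughly the strong 15-micron CO2 band (about 600-750 cm^-1).
print(f"Assumed 13.3-16.7 micron band at 288 K: ~{100 * band_fraction(13.3, 16.7, 288.0):.0f}% of emission")
```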

Update January 17, 2020

Dr. Ward’s journey of discovery is provided here: CO2, SO2, O3: A journey of Discovery

Here’s Looking at You, CO2 Updated

Raymond of the RiC-Communications studio commented on a recent post and offered to share here some graphics on CO2 aimed at improving public awareness. This post presents the charts he has produced so far. I find them straightforward and useful, and appreciate his excellent work. The project title is a link to RiC-Communications.

Update January 21, 2020 with two added slides

This project is: The world of CO2

Infographics can be helpful in making things simple to understand. CO2 is a complex topic with a lot of information and statistics. These simple step-by-step charts should help give you an idea of CO2's importance. Without CO2, plants wouldn't be able to live on this planet. Just remember that if CO2 were to fall below 150 ppm, all plant life would cease to exist.

– N° 1 Earth's atmospheric composition
– N° 2 Natural sources of CO2 emissions
– N° 3 Global anthropogenic CO2 emissions
– N° 4 CO2 – Carbon dioxide molecule
– N° 5 The global carbon cycle
– N° 6 Carbon and plant respiration
– N° 7 Plant categories and abundance (C3, C4 & CAM Plants)
– N° 8 Photosynthesis, the C3 vs C4 gap
– N° 9 Plant respiration and CO2
– N° 10 The logarithmic temperature rise of higher CO2 levels
– N° 11 Earth's atmospheric composition in relationship to CO2
– N° 12 Human respiration and CO2 concentrations

And in Addition

Note that illustration N° 10 assumes (as does the “consensus”) that doubling atmospheric CO2 produces a 1C rise in GMT (Global Mean Temperature). Even if true, the warming would be gentle, not cataclysmic. Greta and XR foolishly think the world goes over a cliff if CO2 hits 430 ppm. I start to wonder if Greta really can see CO2, as she claims.
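To put numbers on that logarithmic relationship, here is a minimal sketch (my own) applying the 1C-per-doubling sensitivity assumed in illustration N° 10 to a few CO2 levels; the 300 ppm baseline and the sensitivity value are inputs to the calculation, not results.

```python
# Illustrative sketch of a logarithmic CO2-temperature relation:
# delta_T = S * log2(C / C0), with S taken as 1.0 C per doubling as assumed in illustration N° 10.
import math

def warming_from_co2(c_ppm: float, c0_ppm: float = 300.0, sensitivity_per_doubling: float = 1.0) -> float:
    """Temperature change (C) for a rise from c0_ppm to c_ppm under a purely logarithmic response."""
    return sensitivity_per_doubling * math.log2(c_ppm / c0_ppm)

for c in (400, 430, 500, 600):
    print(f"{c} ppm (from 300 ppm): {warming_from_co2(c):+.2f} C")
# Each successive increment of CO2 adds less warming than the one before it,
# which is why the curve flattens rather than running over a cliff at some threshold like 430 ppm.
```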

It is also important to know that natural CO2 sources and sinks are estimated with large error ranges. For example, this table from earlier IPCC reports:

Since the Statue of Liberty features in the sea level graphic, here are observations from there:

[Figure: New York City sea level, past observations and projections]

Below are some other images I find meaningful, though they lack Raymond’s high production values.

 

[Figure: CO2 levels, 2018]