Ocean SSTs Mixed in June

The best context for understanding decadal temperature changes comes from the world’s sea surface temperatures (SST), for several reasons:

  • The ocean covers 71% of the globe and drives average temperatures;
  • SSTs have a constant water content (unlike air temperatures), so they give a better reading of heat content variations;
  • A major El Nino was the dominant climate feature in recent years.

HadSST is generally regarded as the best of the global SST data sets, and so the temperature story here comes from that source, the latest version being HadSST3.  More on what distinguishes HadSST3 from other SST products at the end.

The Current Context

The chart below shows SST monthly anomalies as reported in HadSST3 starting in 2015 through June 2019.
A global cooling pattern has been clearly seen in the Tropics since its peak in 2016, joined by the NH and SH cycling downward since 2016.  2018 started with slow warming after the low point of December 2017, led by a steadily rising NH, which peaked in September and has cooled since.  The Tropics rose steadily until November, then cooled before returning to the same level.

In 2019 all regions had been converging, reaching nearly the same value in April.  Now in June, the NH rose sharply while the SH dropped by the same amount, and Tropics SSTs are holding steady.  As a result the Global average anomaly is up 0.04 to 0.56C.  All regions are about the same as in 05/2017, which led into a cooling period despite NH warming at the time.

Note that higher temps in 2015 and 2016 were first of all due to a sharp rise in Tropical SST, beginning in March 2015, peaking in January 2016, and steadily declining back below its beginning level. Secondly, the Northern Hemisphere added three bumps on the shoulders of Tropical warming, with peaks in August of each year.  A fourth NH bump was lower and peaked in September 2018.  Also, note that the global release of heat was not dramatic, due to the Southern Hemisphere offsetting the Northern one.

The annual SSTs for the last five years are as follows:

Annual SSTs   Global   NH      SH      Tropics
2014          0.477    0.617   0.335   0.451
2015          0.592    0.737   0.425   0.717
2016          0.613    0.746   0.486   0.708
2017          0.505    0.650   0.385   0.424
2018          0.480    0.620   0.362   0.369

2018 annual average SSTs across the regions are close to 2014, slightly higher in SH and much lower in the Tropics.  The SST rise from the global ocean was remarkable, peaking in 2016, higher than 2011 by 0.32C.
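The 2018-vs-2014 comparison can be checked directly from the table. A minimal sketch (values copied from the table above; the dictionary layout is my own):

```python
# Differences between 2018 and 2014 annual SST anomalies, per region
annual_sst = {
    2014: {"Global": 0.477, "NH": 0.617, "SH": 0.335, "Tropics": 0.451},
    2018: {"Global": 0.480, "NH": 0.620, "SH": 0.362, "Tropics": 0.369},
}
delta = {region: round(annual_sst[2018][region] - annual_sst[2014][region], 3)
         for region in annual_sst[2014]}
# Global and NH nearly unchanged, SH slightly higher, Tropics much lower
```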

A longer view of SSTs

The graph below is noisy, but the density is needed to see the seasonal patterns in the oceanic fluctuations.  Previous posts focused on the rise and fall of the last El Nino starting in 2015.  This post adds a longer view, encompassing the significant 1998 El Nino and since.  The color schemes are retained for Global, Tropics, NH and SH anomalies.  Despite the longer time frame, I have kept the monthly data (rather than yearly averages) because of interesting shifts between January and July.

HadSST3 1995 to 06/2019.  Open image in new tab to enlarge.

1995 is a reasonable starting point prior to the first El Nino.  The sharp Tropical rise peaking in 1998 is dominant in the record, starting Jan. ’97 to pull up SSTs uniformly before returning to the same level Jan. ’99.  For the next 2 years, the Tropics stayed down, and the world’s oceans held steady around 0.2C above 1961 to 1990 average.

Then comes a steady rise over two years to a lesser peak Jan. 2003, but again uniformly pulling all oceans up around 0.4C.  Something changes at this point, with more hemispheric divergence than before. Over the 4 years until Jan 2007, the Tropics go through ups and downs, NH a series of ups and SH mostly downs.  As a result the Global average fluctuates around that same 0.4C, which also turns out to be the average for the entire record since 1995.

2007 stands out with a sharp drop in temperatures, so that Jan. ’08 matches the low of Jan. ’99, but starting from a lower high.  The oceans all decline as well, until temps build to a peak in 2010.

Now again a different pattern appears.  The Tropics cool sharply to Jan. ’11, then rise steadily for 4 years to Jan. ’15, at which point the most recent major El Nino takes off.  But this time, in contrast to ’97-’99, the Northern Hemisphere produces peaks every summer, pulling up the Global average.  In fact, these NH peaks appear every July starting in 2003, growing stronger to produce 3 massive highs in 2014, ’15 and ’16.  NH July 2017 was only slightly lower, and a fifth NH peak, still lower, came in Sept. 2018.  Note also that starting in 2014 the SH plays a moderating role, offsetting the NH warming pulses. (Note: these are high anomalies on top of the highest absolute temps in the NH.)

What to make of all this? The patterns suggest that in addition to El Ninos in the Pacific driving the Tropic SSTs, something else is going on in the NH.  The obvious culprit is the North Atlantic, since I have seen this sort of pulsing before.  After reading some papers by David Dilley, I confirmed his observation of Atlantic pulses into the Arctic every 8 to 10 years.

But the peaks coming nearly every summer in HadSST require a different picture.  Let’s look at August, the hottest month in the North Atlantic from the Kaplan dataset.
AMO August 2018

The AMO Index is from Kaplan SST v2, the unaltered and not detrended dataset.  By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N.  The graph shows warming began after 1992 up to 1998, with a series of matching years since.  Because the N. Atlantic has partnered with the Pacific ENSO recently, let’s take a closer look at some AMO years in the last 2 decades.
This graph shows monthly AMO temps for some important years. The Peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note the red 2018 line is at the bottom of all these tracks. The short black line shows that 2019 began slightly cooler and is now tracking last year closely.

Summary

The oceans are driving the warming this century.  SSTs took a step up with the 1998 El Nino and have stayed there with help from the North Atlantic, and more recently the Pacific northern “Blob.”  The ocean surfaces are releasing a lot of energy, warming the air, but eventually will have a cooling effect.  The decline after 1937 was rapid by comparison, so one wonders: How long can the oceans keep this up?  If the pattern of recent years continues, NH SST anomalies may rise slightly in coming months, but once again ENSO, which has weakened, will probably determine the outcome.

Postscript:

In the most recent GWPF 2017 State of the Climate report, Dr. Humlum made this observation:

“It is instructive to consider the variation of the annual change rate of atmospheric CO2 together with the annual change rates for the global air temperature and global sea surface temperature (Figure 16). All three change rates clearly vary in concert, but with sea surface temperature rates leading the global temperature rates by a few months and atmospheric CO2 rates lagging 11–12 months behind the sea surface temperature rates.”

Footnote: Why Rely on HadSST3

HadSST3 is distinguished from other SST products because HadCRU (Hadley Climatic Research Unit) does not engage in SST interpolation, i.e. infilling estimated anomalies into grid cells lacking sufficient sampling in a given month.  From reading the documentation and from queries to the Met Office, their procedure is as follows.

HadSST3 imports data from gridcells containing ocean, excluding land cells. From past records, they have calculated daily and monthly average readings for each grid cell for the period 1961 to 1990. Those temperatures form the baseline from which anomalies are calculated.

In a given month, each gridcell with sufficient sampling is averaged for the month and then the baseline value for that cell and that month is subtracted, resulting in the monthly anomaly for that cell. All cells with monthly anomalies are averaged to produce global, hemispheric and tropical anomalies for the month, based on the cells in those locations. For example, Tropics averages include ocean grid cells lying between latitudes 20N and 20S.

Gridcells lacking sufficient sampling that month are left out of the averaging, and the uncertainty from such missing data is estimated. IMO that is more reasonable than inventing data to infill. And it seems that the Global Drifter Array displayed in the top image is providing more uniform coverage of the oceans than in the past.
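The procedure described above can be sketched in a few lines. This is a toy illustration only, with invented cells, readings and baselines; the real HadSST3 processing adds bias adjustments and uncertainty estimates not shown here:

```python
# Sketch of the HadSST3-style anomaly procedure: baseline subtraction per
# cell, undersampled cells dropped (never infilled), then regional averaging.
def cell_anomaly(readings, baseline, min_samples=3):
    """Monthly anomaly for one grid cell, or None if undersampled."""
    if len(readings) < min_samples:
        return None                      # left out of the averaging
    return sum(readings) / len(readings) - baseline

def region_anomaly(cells, lat_min=-90, lat_max=90):
    """Average anomalies of sufficiently sampled cells in a latitude band."""
    anoms = [cell_anomaly(r, b) for lat, r, b in cells
             if lat_min <= lat <= lat_max]
    anoms = [a for a in anoms if a is not None]
    return sum(anoms) / len(anoms) if anoms else None

# (latitude, month's readings, 1961-1990 baseline) for three toy ocean cells
cells = [
    (10, [27.0, 27.4, 27.2], 26.8),     # Tropics cell, anomaly +0.4
    (45, [14.1, 14.3], 13.9),           # undersampled: excluded
    (-30, [18.0, 18.2, 18.4], 18.0),    # SH cell, anomaly +0.2
]
tropics = region_anomaly(cells, lat_min=-20, lat_max=20)
global_avg = region_anomaly(cells)
```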


USS Pearl Harbor deploys Global Drifter Buoys in Pacific Ocean


Spaceship Earth Ideology Officers

The image above is from the Hunt for Red October (1990). Sean Connery played Marko Alexandrovich Ramius, a Soviet submarine captain, here in a confrontation with the on board Political Officer Ivan Putin, responsible to ensure the crew conforms to the Communist Party Line and Directives.

The real life parallel to the submarine drama is reported at New Scientist Journal criticised for study claiming sun is causing global warming. Excerpts in italics with my bolds. (H/T GWPF)

A high profile scientific journal is investigating how it came to publish a study suggesting that global warming is down to natural solar cycles. The paper was criticised by scientists for containing “very basic errors” about how Earth moves around the sun.

The study was published online on 24 June by Scientific Reports, an open access journal run by Nature Research, which also lists the prestigious Nature journal among its titles. A spokesperson told New Scientist that it is aware of concerns raised over the paper, which was authored by four academics based at Northumbria University, the University of Bradford and the University of Hull, all in the UK, plus the Nasir al-Din al-Tusi Shamakhi Astrophysical Observatory in Azerbaijan.

The authors suggest that Earth’s 1°C temperature rise over the past two centuries could largely be explained by the distance between Earth and the sun changing over time as the sun orbits around our solar system’s barycentre, its centre of mass. The phenomenon would see temperatures rise a further 3°C by 2600, they say.

Ken Rice of the University of Edinburgh, UK, criticised the paper for an “elementary” mistake about celestial mechanics. “It’s well known that the sun moves around the barycentre of the solar system due to the influence of the other solar system bodies, mainly Jupiter,” he says. “This does not mean, as the paper is claiming, that this then leads to changes in the distance between the sun and the Earth.”

“The claim that we will see warming in the coming centuries because the sun will move closer to the Earth as it moves around the solar system barycentre is very simply wrong,” adds Rice. He is urging the journal to withdraw the paper, and says it is embarrassing it was published.

Gavin Schmidt of the NASA Goddard Institute for Space Studies says the paper contains egregious errors. “The sun-Earth distance does not vary with the motion of the sun-Earth system around the barycentre of the sun-Jupiter system, nor the sun-galactic centre system or any other purely mathematical reference point,” he says. He says the journal must retract the paper if it wants to retain any credibility.

The Dispute

Michael Brown of Monash University in Australia lamented uncritical media coverage of the paper in Australia.

Following criticism of the paper, lead author Valentina Zharkova, of Northumbria University, described Rice as a “climate alarmist” in an online discussion.

“The close links between oscillations of solar baseline magnetic field, solar irradiance and temperature are established in our paper without any involvement of solar inertial motion,” Zharkova told New Scientist.

Scientific Reports says it has begun an “established process” to investigate the paper it has published. “This process is ongoing and we cannot comment further at this stage,” a spokesperson said.

Ken Rice has form as a Climate Ideology Officer, having led a successful takedown of Hermann Harde’s paper showing human CO2 emissions are only 4% of atmospheric CO2, which is itself only 0.04% of the air.  Rice declared at the outset: “Any paper concluding that humans are not causing the rise in CO2 is obviously wrong.”  He and fellow ideology officers quickly cobbled together an attack paper, which was immediately published in the journal.  Harde wrote a paper describing the errors and misconstructions in the attack paper, but his response was denied publication, sealing the issue in favor of the party line.  This saga of censorship can be read at the No Tricks Zone article AGW Gatekeepers Censor The CO2-Climate Debate By Refusing To Publish Author’s Response To Criticism.

Now Rice and his gang are at it again, this time targeting lead author Valentina Zharkova. The tactics are familiar, starting with outrage against findings deviating from their beliefs, in this case the notion that the sun could in any way influence the climate. The comment thread shows the intensity and venom applied to digging up any mistake, no matter how trivial or peripheral to the central argument. These are declared “egregious” and justification to ignore and censor the contrary understanding of nature.

The comment thread started 9 days ago: Oscillations of the baseline of solar magnetic field and solar irradiance on a millennial timescale

Zharkova stands her ground, though always on the defensive and surrounded by a pack of attackers. Rice, as usual, displays his mastery of the English language to demean and undermine while appearing to be reasonable. His Russian opponent is clearly at a disadvantage when it comes to putdowns.

I don’t know enough to judge the substance of the claims, or the pertinence of the details, but can observe that the intensity shows how much is at stake for the attackers. A major irony is that Zharkova forecasts significant warming in the future, which would seem to confirm the warmists’ expectations. However, since she puts the sun as the cause, the finding pulls the rug from under anti-fossil fuel activists, hence the outrage. This also shows the dispute is not about climate or temperature, but about politics.

Footnote:

Michael Mann has been the most aggressive Inquisitor against climate heretics. In his book, “The Hockey Stick and the Climate Wars,” he presented an analogy to explain why he and other researchers have become the objects of such fierce public scrutiny and vilification, which he terms “the Serengeti strategy.” Likening climate scientists to zebras, he writes, “The climate change deniers isolate individual scientists just as predators on the Serengeti Plain of Africa hunt their prey: picking off vulnerable individuals from the rest of the herd.” He asserts that he and others have become targets because their findings challenge the entrenched fossil-fuel industries, which have tried to discredit them.

No one has more chutzpah than M. Mann: he and his pack apply the Serengeti strategy repeatedly against scientists finding factors other than CO2 driving climate changes, all the while claiming to be victims rather than predators.

Science 101: Null Test All Claims

Francis Menton provides some essential advice for non-scientists in his recent essay at Manhattan Contrarian You Don’t Need To Be A Scientist To Know That The Global Warming Alarm “Science” Is Fake. Excerpts in italics with my bolds.

When confronted with a claim that a scientific proposition has been definitively proven, ask the question: What was the null hypothesis, and on what basis has it been rejected?

As Menton explains, you don’t need the skills to perform the null test yourself, just the boldness to check how they dismissed the null hypothesis.

Consider first a simple example, the question of whether aspirin cures headaches. Make that our scientific proposition: aspirin cures headaches. How would this proposition be established? You yourself have taken aspirin many times, and your headache always went away. Doesn’t that prove that the aspirin worked? Absolutely not. The fact that you took aspirin 100 times and the headache went away 100 times proves nothing. Why? Because there is a null hypothesis that must first be rejected. Here the null hypothesis is that headaches will go away just as quickly on their own. How do you reject that? The standard method is to take some substantial number of people with headaches, say 2000, and give half of them the aspirin and the other half a placebo. Two hours later, of the 1000 who took the aspirin, 950 feel better and only 50 still have the headache; and of the 1000 who took the placebo, 500 still have the headache. Now you have very, very good proof that aspirin cured the headaches.
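Menton’s aspirin numbers are strong enough to reject the null hypothesis by any standard test. A minimal sketch using a hand-rolled Pearson chi-square on his 2×2 table (the function and counts are illustrative, not from the essay):

```python
# Chi-square test of the null hypothesis "headaches go away on their own"
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Rows: aspirin, placebo; columns: headache gone, headache remains
stat = chi_square_2x2([[950, 50], [500, 500]])
# stat is far above the 3.84 critical value (df=1, p=0.05), so the null
# hypothesis is rejected and causation is supported
```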

The point to focus on is that the most important evidence — the only evidence that really proves causation — is the evidence that requires rejection of the null hypothesis.

Over to climate science. Here you are subject to a constant barrage of information designed to convince you of the definitive relationship between human carbon emissions and global warming. The world temperature graph is shooting up in hockey stick formation! Arctic sea ice is disappearing! The rate of sea level rise is accelerating! Hurricanes are intensifying! June was the warmest month EVER! And on and on and on. All of this is alleged to be “consistent” with the hypothesis of human-caused global warming.

But, what is the null hypothesis, and on what basis has it been rejected? Here the null hypothesis is that some other factor, or combination of factors, rather than human carbon emissions, was the dominant cause of the observed warming.

Once you pose the null hypothesis, you immediately realize that all of the scary climate information with which you are constantly barraged does not even meaningfully address the relevant question. All of that information is just the analog of your 100 headaches that went away after you took aspirin. How do you know that those headaches wouldn’t have gone away without the aspirin? You don’t know unless someone presents data that are sufficient to reject the null hypothesis. Proof of causation can only come from disproof of the null hypothesis or hypotheses, that is, disproof of other proposed alternative causes. This precept is fundamental to the scientific method, and therefore fully applies to “climate science” to the extent that that field wishes to be real science versus fake science.

Now, start applying this simple check to every piece you read about climate science. Start looking for the null hypothesis and how it was supposedly rejected. In mainstream climate literature — and I’m including here both the highbrow media like the New York Times and also the so-called “peer reviewed” scientific journals like Nature and Science — you won’t find that. It seems that people calling themselves “climate scientists” today have convinced themselves that their field is such “settled science” that they no longer need to bother with tacky questions like worrying about the null hypothesis.

When climate scientists start addressing the alternative hypotheses seriously, then it will be real science. In the meantime, it’s fake science.

Summary

The null test can be applied to any scientific claim.  If there is no null hypothesis considered, then you can add the report  to the file “Unproven Claims,” or “Unfounded Suppositions.”  Some researchers call them SWAGs: Scientific Wild Ass Guesses.  These are not useless, since any discovery starts with a SWAG.  But you should avoid believing that they describe the way the world works until alternative explanations have been tested and dismissed.

See Also: No “Gold Standard” Climate Science

No GHG Warming Fingerprints in the Sky

No Causation Without Correlation (FF↔GMT)

Previous posts addressed the claim that fossil fuels are driving global warming. This post updates that analysis with the latest (2018) numbers from BP Statistics and compares World Fossil Fuel Consumption (WFFC) with three estimates of Global Mean Temperature (GMT). More on both these variables below.

WFFC

2018 statistics are now available from BP for international consumption of Primary Energy sources in the 2019 Statistical Review of World Energy.

The reporting categories are:

  • Oil
  • Natural Gas
  • Coal
  • Nuclear
  • Hydro
  • Renewables (other than hydro)

This analysis combines the first three (Oil, Gas, and Coal) for total fossil fuel consumption worldwide. The chart below shows the patterns for WFFC compared to world consumption of Primary Energy from 1965 through 2018.

The graph shows that Primary Energy consumption has grown continuously for more than 5 decades. Over that period oil, gas and coal (sometimes termed “Thermal”) averaged 89% of PE consumed, ranging from 94% in 1965 to 85% in 2018.  MToe is millions of tons of oil equivalents.

Global Mean Temperatures

Everyone acknowledges that GMT is a fiction since temperature is an intrinsic property of objects, and varies dramatically over time and over the surface of the earth. No place on earth determines “average” temperature for the globe. Yet for the purpose of detecting change in temperature, major climate data sets estimate GMT and report anomalies from it.

The UAH record consists of satellite-era global temperature estimates for the lower troposphere, a layer of air from 0 to 4 km above the surface. HadSST estimates sea surface temperatures from oceans covering 71% of the planet. HADCRUT combines HadSST estimates with records from land stations whose elevations range up to 6 km above sea level.

Both GISS LOTI (land and ocean) and HADCRUT4 (land and ocean) use 14.0 Celsius as the climate normal, so I will add that number back into the anomalies. This is done without claiming any validity other than to achieve a reasonable measure of magnitude for the observed fluctuations.

No doubt global sea surface temperatures are typically higher than 14C, more like 17 or 18C, and of course warmer in the tropics and colder at higher latitudes. Likewise, the lapse rate in the atmosphere means that air temperatures both from satellites and elevated land stations will range colder than 14C. Still, that climate normal is a generally accepted indicator of GMT.
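The adjustment described above is simple arithmetic, sketched here with illustrative anomaly values (the function name and inputs are my own, not from any data product):

```python
# Adding the 14.0 C climate normal back into reported anomalies to give
# indicative absolute temperatures, as described above
CLIMATE_NORMAL_C = 14.0

def to_indicative_absolute(anomalies_c):
    """Convert anomaly values (C) to indicative absolute temperatures (C)."""
    return [round(a + CLIMATE_NORMAL_C, 2) for a in anomalies_c]

indicative = to_indicative_absolute([0.74, 0.42, 0.44])
```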

Correlations of GMT and WFFC

The next graph compares WFFC to GMT estimates over the 5+ decades from 1965 to 2018 from HADCRUT4, which includes HadSST3.

Over the last five decades the increase in fossil fuel consumption is dramatic and monotonic, steadily increasing by 234% from 3.5B to 11.7B oil equivalent tons.  Meanwhile the GMT record from Hadcrut shows multiple ups and downs with an accumulated rise of 0.74C over 52 years, 5% of the starting value.

The second graph compares WFFC to GMT estimates from UAH6, and HadSST3 for the satellite era from 1979 to 2018, a period of 39 years.

In the satellite era WFFC has increased at a compounded rate of nearly 2% per year, for a total increase of 91% since 1979. At the same time, SST warming amounted to 0.42C, or 3% of the starting value.  UAH warming was 0.44C, or 3% up from 1979.  The temperature compounded rate of change is 0.1% per year, an order of magnitude less.  Even more obvious is the 1998 El Nino peak and flat GMT since.
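The compounded rates quoted above can be reproduced from the stated totals. A quick sketch, assuming the article's figures of a 91% WFFC increase and a 3% temperature-ratio increase over the 39 years since 1979:

```python
# Compound annual growth rate implied by a total growth ratio over n years
def cagr(ratio, years):
    return ratio ** (1 / years) - 1

wffc_rate = cagr(1.91, 39)   # roughly 1.7% per year, "nearly 2%"
temp_rate = cagr(1.03, 39)   # under 0.1% per year, an order of magnitude less
```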

Summary

The climate alarmist/activist claim is straightforward: burning fossil fuels makes measured temperatures warmer. The Paris Accord further asserts that by reducing human use of fossil fuels, further warming can be prevented.  Those claims do not bear up under scrutiny.

It is enough for simple minds to see that two time series are both rising and to think that one must be causing the other. But both scientific and legal methods assert causation only when the two variables are both strongly and consistently aligned. The above shows a weak and inconsistent linkage between WFFC and GMT.

Going further back in history shows even weaker correlation between fossil fuels consumption and global temperature estimates:

wfc-vs-sat

Figure 5.1. Comparative dynamics of the World Fuel Consumption (WFC) and Global Surface Air Temperature Anomaly (ΔT), 1861-2000. The thin dashed line represents annual ΔT, the bold line—its 13-year smoothing, and the line constructed from rectangles—WFC (in millions of tons of nominal fuel) (Klyashtorin and Lyubushin, 2003). Source: Frolov et al. 2009

In legal terms, as long as there is another equally or more likely explanation for the set of facts, the claimed causation is unproven. The more likely explanation is that global temperatures vary due to oceanic and solar cycles. The proof is clearly and thoroughly set forward in the post Quantifying Natural Climate Change.

Background context for today’s post is at Claim: Fossil Fuels Cause Global Warming.

LA Times Misreports Mexican Energy Realism

Emily Green writes at LA Times Alternative energy efforts in Mexico slow as Lopez Obrador prioritizes oil. Excerpts in italics with my bolds.

The title of the article is not wrong, as we shall see below. But as usual climatists leave out the reality so obvious in the pie chart above. Seeing which energy sources are driving his nation’s prosperity provides the missing context for understanding the priorities of Mexican President Andres Manuel Lopez Obrador.

The alarmist/activist hand-wringing is in full display:

With its windy valleys and wide swaths of desert, Mexico has some of the best natural terrain to produce wind and solar energy. And, in recent years, the country has attracted alternative energy investors from across the globe.

An aerial view of the Villanueva photovoltaic power plant in the municipality of Viesca, Coahuila state, Mexico. The plant covers an area the size of 40 football fields making it the largest solar plant in the Americas. (Alfredo Estrella / AFP/Getty Images)

But the market has taken a step back under Mexico’s new president, who has made clear his priority is returning Mexico’s oil company to its former dominance.

Since taking office Dec. 1, President Andres Manuel Lopez Obrador has canceled a highly anticipated electricity auction, as well as two major transmission-line projects that would have transported power generated by renewable energy plants around the country. He has also called for more investment in coal, and stood by as his director of Mexico’s electric utility dismissed wind and solar energy as unreliable and expensive.

It’s too soon to forecast the long-term consequences, but business leaders and energy consultants are seeing a trend: a chilling in the country’s up-and-coming renewable energy market.

Further on we get the usual distortions and misdirection: renewables capacities and low prices are cited while ignoring the low actual production and the intermittency mismatch with actual needs.

Energy and oil remain sensitive topics in Mexico, where people still recall the glory days of state-owned oil company Pemex, when it was the country’s economic lifeblood. There’s even a day commemorating Mexico’s 1938 nationalization of its oil and mineral wealth.

In recent years, however, Mexico’s energy market has undergone a transformation and reached out to investors. In 2014, Lopez Obrador’s predecessor, Enrique Peña Nieto, fully opened up the country’s oil, gas and electricity sector to private investment for the first time in 70 years.

The effects were immediate. In the oil sector, companies such as ExxonMobil and Chevron clamored to explore large deposits that had once been the sole purview of Pemex.

On the electricity side, the reform led to billions of dollars in private investment in Mexico’s power sector, both in renewable energy and traditional sources such as natural gas.

Through a series of auctions, Mexico’s state-owned utility awarded long-term power contracts to private developers. Although the auctions were open to all energy technologies, wind and solar companies won the bulk of the contracts because they offered among the lowest prices in the world. Solar developers won contracts to generate electricity in Mexico at around $20 per megawatt-hour, according to the government. Industry sources said that is about half the going price for coal and gas.

The country’s wind generation capacity jumped from 2,360 megawatts at the end of 2014 to 5,382 megawatts this April, according to the Mexican wind energy association. The numbers were even more stark in solar, which soared from 166 megawatts of capacity in 2014 to 2,900 megawatts in April, according to the Mexican solar energy association.

Virtue Signalling is an Expensive Way to Run an Economy

The electricity auctions were also seen as the main vehicle for Mexico to reach its clean energy commitments made as part of the Paris climate accord to produce 35% of its electricity from clean energy sources by 2024, and 50% by 2050. Under Mexico’s definition, clean energy sources include solar and wind generation, as well as sources that some critics say aren’t environmentally friendly — such as hydroelectric dams, nuclear energy and efficient natural gas plants. Currently, 24% of Mexico’s electricity comes from clean energy sources.

Summary

Note that for true believers, no energy is “clean” except wind and solar. And Mexico is another example of how renewables cannibalize your electrical grid while claiming to be cheaper than FF sources and saving the planet from the plant food gas CO2. Meanwhile those two “zero carbon” sources provide only 2% of the energy consumed, despite the billions invested.

I get the impression that ALO is much smarter than AOC.

See Also

Exaggerating Green Energy Supply

Cutting Through the Fog of Renewable Power Costs

Superhuman Renewables Targets


More 2019 Evidence of Nature’s Sunscreen

Greenhouse with adjustable sun screens to control warming.

Update July 12, 2019

A paper was just published by an IPCC reviewer No Empirical Evidence for Significant Anthropogenic Climate Change by J. Kauppinen and P. Malmi. Excerpts in italics with my bolds. H/T WUWT

An analysis by Finnish researchers adds to the chain of studies supporting the cosmoclimatology theory first proposed by Svensmark. Their focus is on the relation between the changes in temperatures and the changes in low cloud cover.  Their findings are consistent with the global brightening and dimming research centered at ETH Zurich, which is elaborated later on.

Figure 2. [2] Global temperature anomaly (red) and the global low cloud cover changes (blue) according to the observations. The anomalies are between summer 1983 and summer 2008. The time resolution of the data is one month, but the seasonal signal is removed. Zero corresponds about 15°C for the temperature and 26 % for the low cloud cover.

It turns out that the changes in the relative humidity and in the low cloud cover depend on each other [4]. So, instead of low cloud cover we can use the changes of the relative humidity in order to derive the natural temperature anomaly. According to the observations 1 % increase of the relative humidity decreases the temperature by 0.15°C, and consequently the last term in the above equation can be approximated by −15°C∆φ, where ∆φ is the change of the relative humidity at the altitude of the low clouds.

Figure 4 shows the sum of the temperature changes due to the natural and CO2 contributions compared with the observed temperature anomaly. The natural component has been calculated using the changes of the relative humidity. Now we see that the natural forcing does not explain fully the observed temperature anomaly. So we have to add the contribution of CO2 (green line), because the time interval is now 40 years (1970–2010). The concentration of CO2 has now increased from 326 ppm to 389 ppm. The green line has been calculated using the sensitivity 0.24°C, which seems to be correct.

In Fig. 4 we see clearly how well a change in the relative humidity can model the strong temperature minimum around the year 1975. This is impossible to interpret by CO2 concentration.

The IPCC climate sensitivity is about one order of magnitude too high, because a strong negative feedback of the clouds is missing in climate models. If we pay attention to the fact that only a small part of the increased CO2 concentration is anthropogenic, we have to recognize that anthropogenic climate change does not exist in practice. The major part of the extra CO2 is emitted from the oceans [6], according to Henry‘s law. The low clouds practically control the global average temperature. During the last hundred years the temperature has increased by about 0.1°C because of CO2. The human contribution was about 0.01°C.

We have proven that the GCM models used in IPCC report AR5 cannot correctly compute the natural component included in the observed global temperature. The reason is that the models fail to capture the influence of low cloud cover fraction on the global temperature. Too small a natural component results in too large a portion being attributed to greenhouse gases like carbon dioxide. That is why the IPCC presents a climate sensitivity more than one order of magnitude larger than our sensitivity of 0.24°C. Because the anthropogenic portion of the increased CO2 is less than 10%, we have practically no anthropogenic climate change. The low clouds mainly control the global temperature.
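The two-term model quoted above reduces to simple arithmetic. Here is a minimal sketch using the paper's stated values (−15°C per unit change in relative humidity fraction; 0.24°C sensitivity; 326 → 389 ppm CO2 over 1970–2010). Treating the 0.24°C figure as a per-doubling sensitivity with a logarithmic response is an assumption on my part; the excerpt does not spell out the functional form.

```python
import math

# Natural component: the excerpt states a 1% rise in relative humidity
# lowers temperature by about 0.15 C, i.e. dT_nat = -15 C * d_phi,
# where d_phi is the fractional change in relative humidity.
def natural_component(d_phi):
    return -15.0 * d_phi

# CO2 component: assumed logarithmic response with the quoted
# sensitivity of 0.24 C per doubling of concentration.
def co2_component(c, c0, sensitivity=0.24):
    return sensitivity * math.log2(c / c0)

# CO2 rise quoted for 1970-2010: 326 ppm -> 389 ppm
dT_co2 = co2_component(389.0, 326.0)
print(f"CO2 contribution 1970-2010: {dT_co2:.3f} C")  # ~0.061 C

# Example: a 1% (0.01 fractional) increase in relative humidity
print(f"Natural term for +1% RH: {natural_component(0.01):.2f} C")  # -0.15 C
```

The tiny CO2 term relative to the humidity term is exactly why the authors attribute most of the observed anomaly to cloud-related natural variation.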

Previous Update  Hard Evidence of Solar Impact upon Earth Cloudiness

Later on is a reprinted discussion of global dimming and brightening resulting from fluctuating cloud cover.  This is topical because of new empirical research findings coming out of Asia.  H/T GWPF.  A study published by a Kobe University research center is Revealing the impact of cosmic rays on the Earth’s climate.  Excerpts in italics with my bolds.

New evidence suggests that high-energy particles from space known as galactic cosmic rays affect the Earth’s climate by increasing cloud cover, causing an “umbrella effect”.

When galactic cosmic rays increased during the Earth’s last geomagnetic reversal transition 780,000 years ago, the umbrella effect of low-cloud cover led to high atmospheric pressure in Siberia, causing the East Asian winter monsoon to become stronger. This is evidence that galactic cosmic rays influence changes in the Earth’s climate. The findings were made by a research team led by Professor Masayuki Hyodo (Research Center for Inland Seas, Kobe University) and published on June 28 in the online edition of Scientific Reports.

The Svensmark Effect is a hypothesis that galactic cosmic rays induce low cloud formation and influence the Earth’s climate. Tests based on recent meteorological observation data only show minute changes in the amounts of galactic cosmic rays and cloud cover, making it hard to prove this theory. However, during the last geomagnetic reversal transition, when the amount of galactic cosmic rays increased dramatically, there was also a large increase in cloud cover, so it should be possible to detect the impact of cosmic rays on climate at a higher sensitivity.

(The Svensmark Effect is explained in the essay The cosmoclimatology theory)

How Nature’s Sunscreen Works (from Previous Post)

A recent post, Planetary Warming: Back to Basics, discussed a paper by Nikolov and Zeller on the atmospheric thermal effect measured on various planets in our solar system. They mentioned that an important source of temperature variation around the earth’s energy balance state can be traced to global brightening and dimming.

This post explores fluctuations in the amount of solar energy reflected rather than absorbed by the atmosphere and surface. Brightening refers to more incoming solar energy from clear and clean skies. Dimming refers to less solar energy due to more sunlight reflected in the atmosphere by clouds and aerosols (airborne particles like dust and smoke).

The energy budget above from ERBE shows how important this issue is. On average, half of incoming sunlight is either absorbed in the atmosphere or reflected before it can be absorbed by the surface land and ocean. Any shift in reflectivity (albedo) greatly impacts the solar energy warming the planet.
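To put rough numbers on that sensitivity: using standard textbook values (solar constant ≈ 1361 W/m², planetary albedo ≈ 0.30 — figures of my own, not read off the ERBE chart), a back-of-envelope calculation shows what a small albedo shift does to absorbed solar energy:

```python
S = 1361.0     # solar constant, W/m^2 (standard value)
albedo = 0.30  # planetary albedo, approximate textbook value

# Average absorbed solar flux over the sphere: S/4 * (1 - albedo)
absorbed = S / 4 * (1 - albedo)
print(f"Mean absorbed solar flux: {absorbed:.0f} W/m^2")  # 238 W/m^2

# Each 0.01 shift in albedo changes absorbed energy by S/4 * 0.01
per_point = S / 4 * 0.01
print(f"Per 0.01 albedo change: {per_point:.1f} W/m^2")  # 3.4 W/m^2
```

For scale, 3.4 W/m² from a one-percentage-point albedo shift is comparable to commonly cited estimates of the total forcing from doubled CO2, which is why reflectivity changes matter so much in this discussion.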

The leading research on global brightening/dimming is done at the Institute for Atmospheric and Climate Science of ETH Zurich, led by Martin Wild, senior scientist specializing in the subject.

Special instruments have been recording the solar radiation that reaches the Earth’s surface since 1923. However, it wasn’t until the International Geophysical Year in 1957/58 that a global measurement network began to take shape. The data thus obtained reveal that the energy provided by the sun at the Earth’s surface has undergone considerable variations over the past decades, with associated impacts on climate.

The initial studies were published in the late 1980s and early 1990s for specific regions of the Earth. In 1998 the first global study was conducted for larger areas, like the continents Africa, Asia, North America and Europe for instance.

Now ETH has announced The Global Energy Balance Archive (GEBA) version 2017: A database for worldwide measured surface energy fluxes. The title is a link to that paper published in May 2017 explaining the facility and some principal findings. The Archive itself is at  http://www.geba.ethz.ch.

For example, Figure 2 below provides the longest continuous record available in GEBA: surface downward shortwave radiation measured in Stockholm since 1922. Five year moving average in blue, 4th order regression model in red. Units Wm-2. Substantial multidecadal variations become evident, with an increase up to the 1950s (“early brightening”), an overall decline from the 1950s to the 1980s (“dimming”), and a recovery thereafter (“brightening”).
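The smoothing used in that figure — a five-year moving average plus a 4th-order polynomial regression — can be sketched as follows. The data here are a synthetic stand-in, since the GEBA Stockholm values are not reproduced in this post:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1922, 2014)
# Synthetic stand-in for annual surface solar radiation, W/m^2
ssr = 115 + 5 * np.sin((years - 1922) / 30) + rng.normal(0, 2, years.size)

# Five-year centered moving average (valid region only)
window = np.ones(5) / 5
smoothed = np.convolve(ssr, window, mode="valid")

# 4th-order polynomial regression, as in the Stockholm figure
coeffs = np.polyfit(years, ssr, deg=4)
trend = np.polyval(coeffs, years)

print(smoothed.size, trend.size)  # 88 92
```

A 4th-order polynomial is flexible enough to capture one rise, one decline, and one recovery over a century, which is presumably why it was chosen for a series showing early brightening, dimming, and brightening.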
Figure 5. Composite of 56 European GEBA time series of annual surface downward shortwave radiation (thin line) from 1939 to 2013, plotted together with a 21 year Gaussian low-pass filter (thick line). The series are expressed as anomalies (in Wm-2) from the 1971–2000 mean. Dashed lines are used prior to 1961 due to the lower number of records for this initial period. Updated from Sanchez-Lorenzo et al. (2015) including data until December 2013.
Martin Wild explains in a 2016 article Decadal changes in radiative fluxes at land and ocean surfaces and their relevance for global warming. From the Conclusion (SSR refers to solar radiation incident upon the surface)

However, observations indicate not only changes in the downward thermal fluxes, but even more so in their solar counterparts, whose records have a much wider spatial and temporal coverage. These records suggest multidecadal variations in SSR at widespread land-based observation sites. Specifically, declining tendencies in SSR between the 1950s and 1980s have been found at most of the measurement sites (‘dimming’), with a partial recovery at many of the sites thereafter (‘brightening’).

With the additional information from more widely measured meteorological quantities which can serve as proxies for SSR (primarily sunshine duration and DTR), more evidence for a widespread extent of these variations has been provided, as well as additional indications for an overall increasing tendency in SSR in the first part of the 20th century (‘early brightening’).

It is well established that these SSR variations are not caused by variations in the output of the sun itself, but rather by variations in the transparency of the atmosphere for solar radiation. It is still debated, however, to what extent the two major modulators of the atmospheric transparency, i.e., aerosol and clouds, contribute to the SSR variations.

The balance of evidence suggests that on longer (multidecadal) timescales aerosol changes dominate, whereas on shorter (decadal to subdecadal) timescales cloud effects dominate. More evidence is further provided for an increasing influence of aerosols during the course of the 20th century. However, aerosol and clouds may also interact, and these interactions were hypothesized to have the potential to amplify and dampen SSR trends in pristine and polluted areas, respectively.

No direct observational records are available over ocean surfaces. Nevertheless, based on the presented conceptual ideas of SSR trends amplified by aerosol–cloud interactions over the pristine oceans, modeling approaches as well as the available satellite-derived records it appears plausible that also over oceans significant decadal changes in SSR occur.

The coinciding multidecadal variations in SSTs and global aerosol emissions may be seen as a smoking gun, yet it is currently an open debate to what extent these SST variations are forced by aerosol-induced changes in SSR, effectively amplified by aerosol–cloud interactions, or are merely a result of unforced natural variations in the coupled ocean–atmosphere system. Resolving this question could be a major step toward a better understanding of multidecadal climate change.

Another paper co-authored by Wild discusses the effects of aerosols and clouds: The solar dimming/brightening effect over the Mediterranean Basin in the period 1979−2012. (NSWR is Net Short Wave Radiation, equal to surface solar radiation less the reflected portion)

The analysis reveals an overall increasing trend in NSWR (all skies) corresponding to a slight solar brightening over the region (+0.36 Wm−2 per decade), which is not statistically significant at the 95% confidence level (C.L.). An increasing trend (+0.52 Wm−2 per decade) is also shown for NSWR under clean skies (without aerosols), which is statistically significant (P=0.04).

This indicates that NSWR increases at a higher rate over the Mediterranean due to cloud variations only, because of a declining trend in COD (Cloud Optical Depth). The peaks in NSWR (all skies) in certain years (e.g., 2000) are attributed to a significant decrease in COD (see Figs. 9 and 10), while the two data series (NSWRall and NSWRclean) are highly correlated (r=0.95).

This indicates that cloud variation is the major regulatory factor for the amount and multi-decadal trends in NSWR over the Mediterranean Basin. (Note: Lower cloud optical depth is caused by less opaque clouds and/or decrease in overall cloudiness)

On the other hand, the results do not reveal a reversal from dimming to brightening during the 1980s, as shown in several studies over Europe (Norris and Wild, 2007; Sanchez-Lorenzo et al., 2015), but rather a steady slight increasing trend in solar radiation, which, however, seems to have stabilized during the last years of the data series, in agreement with Sanchez-Lorenzo et al. (2015). Similarly, Wild (2012) reported that the solar brightening was less distinct at European sites after 2000 compared to the 1990s.

In contrast, the NSWR under clear (cloudless) skies shows a slight but statistically significant decreasing trend (−0.17 Wm−2 per decade, P=0.002), indicating an overall decrease in NSWR over the Mediterranean due to water-vapor variability, suggesting a transition to a more humid environment under a warming climate.
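Trend figures of this kind (e.g. +0.52 Wm−2 per decade, P=0.04) are ordinary least-squares slopes with a significance test. A generic sketch of how such a number is produced, using illustrative data rather than the study's own:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
years = np.arange(1979, 2013)
# Illustrative annual NSWR anomalies with a small imposed trend, W/m^2
nswr = 0.05 * (years - years[0]) + rng.normal(0, 1.0, years.size)

# Ordinary least-squares fit: slope in W/m^2 per year, with p-value
res = linregress(years, nswr)
trend_per_decade = res.slope * 10
print(f"Trend: {trend_per_decade:+.2f} W/m^2 per decade, p = {res.pvalue:.3f}")
# Significant at the 95% level when p < 0.05
```

Note how a trend of a few tenths of a W/m² per decade can be statistically significant or not depending on the interannual noise, which is exactly the distinction the paper draws between its all-sky and clean-sky series.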

Other researchers find cloudiness more dominant than aerosols. For example, The cause of solar dimming and brightening at the Earth’s surface during the last half century: Evidence from measurements of sunshine duration by Gerald Stanhill et al.

Analysis of the Angstrom-Prescott relationship between normalized values of global radiation and sunshine duration, measured during the last 50 years at five sites with a wide range of climate and aerosol emissions, showed few significant differences in atmospheric transmissivity under clear or cloud-covered skies between years when global dimming occurred and years when global brightening was measured. Nor, in most cases, were there any significant changes in the parameters or in their relationships to annual rates of fossil fuel combustion in the surrounding 1° cells. It is concluded that at the sites studied, changes in cloud cover rather than anthropogenic aerosol emissions played the major role in determining solar dimming and brightening during the last half century, and that there are reasons to suppose these findings may have wider relevance.

Summary

The final words go to Martin Wild from Enlightening Global Dimming and Brightening.

Observed Tendencies in surface solar radiation
Figure 2.  Changes in surface solar radiation observed in regions with good station coverage during three periods. (left column) The 1950s–1980s show predominant declines (“dimming”); (middle column) the 1980s–2000 indicate partial recoveries (“brightening”) at many locations, except India; and (right column) recent developments after 2000 show mixed tendencies. Numbers denote typical literature estimates for the specified region and period in W m–2 per decade.  Based on various sources as referenced in Wild (2009).

The latest updates on solar radiation changes observed since the new millennium show no globally coherent trends anymore (see above and Fig. 2). While brightening persists to some extent in Europe and the United States, there are indications for a renewed dimming in China associated with the tremendous emission increases there after 2000, as well as unabated dimming in India (Streets et al. 2009; Wild et al. 2009).

We cannot exclude the possibility that we are currently again in a transition phase and may return to a renewed overall dimming for some years to come.

One can’t help but see the similarity between dimming/brightening and patterns of Global Mean Temperature, such as HadCrut.

Footnote: For more on clouds, precipitation and the ocean, see Here Comes the Rain Again

The End of Wind and Solar Parasites

Norman Rogers writes at American Thinker What It Will Take for the Wind and Solar Industries to Collapse. Excerpts in italics with my bolds.

The solar electricity industry is dependent on federal government subsidies for building new capacity. The subsidy consists of a 30% tax credit and the use of a tax scheme called tax equity finance. These subsidies are delivered during the first five years.

For wind, there is subsidy during the first five to ten years resulting from tax equity finance. There is also a production subsidy that lasts for the first ten years.

The other subsidy for wind and solar, not often characterized as a subsidy, is state renewable portfolio laws, or quotas, that require that an increasing portion of a state’s electricity come from renewable sources. Those state mandates result in wind and solar electricity being sold via profitable 25-year power purchase contracts. The buyer is generally a utility with good credit. The utilities are forced to offer these terms in order to cause sufficient supply to emerge to satisfy the renewable energy quotas.

The rate of return from a wind or solar investment can be low and credit terms favorable because the investors see the 25-year contract by a creditworthy utility as a guarantee of a low risk of default. If the risk were to be perceived as higher, then a higher rate of return and a higher interest rate on loans would be demanded. That in turn would increase the price of the electricity generated.

The bankruptcy of PG&E, the largest California utility, has created some cracks in the façade. A bankruptcy judge has ruled that cancellation of up to $40 billion in long-term energy contracts is a possibility. These contracts are not essential to preserving the supply of electricity, because they are mostly for wind or solar supply that varies with the weather and can’t be counted on. As a consequence, the infrastructure necessary to meet electricity needs without wind or solar has to exist, and does.

Probably the judge will be overruled for political reasons, or the state will step in with a bailout. Utilities have to keep operating, no matter what. Ditching wind and solar contracts would make California politicians look foolish because they have long touted wind and solar as the future of energy.

PG&E is in bankruptcy because California applies strict liability for damages from forest fires started by electric lines, no matter who is really at fault. Almost certainly the government is at fault for not anticipating the danger of massive fires and for not enforcing strict fire prevention and protection. Massive fire damage should be covered by insurance, not by the utility, even if the fire was started by a power line. The fire in question could just as well have been started by lightning or a homeless person. PG&E previously filed for bankruptcy in 2001, also a consequence of abuse of the utility by the state government.

By far the most important subsidy is the renewable portfolio laws. Even if the federal subsidies are reduced, the quota for renewable energy will force price increases to keep the renewable energy industry in business, because it has to stay in business to supply energy to meet the quota. Other plausible methods of meeting the quota have been outlawed by the industry’s friends in the state governments. Nuclear and hydro, neither of which generates CO2 emissions, are not allowed. Hydro is not strictly prohibited — only hydro that involves dams and diversions. That is very close to all hydro. Another reason hydro is banned is that environmental groups don’t like dams.

For technical reasons, an electrical grid cannot run on wind or solar much more than 50% of the time. The fleet of backup plants must be online to provide adjustable output to compensate for erratic variations in wind or solar. Output has to be ramped up to meet early-evening peaks. Wind suffers from a cube power law: because power scales with the cube of wind speed, a 10% drop in wind cuts the electricity by nearly 30%. Solar suffers from too much generation in the middle of the day and not enough generation to meet early-evening peaks in consumption.
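The cube law is worth checking: a 10% drop in wind speed leaves 0.9³ ≈ 73% of the power, a loss of about 27% — close to the rounded 30% figure in the text:

```python
def wind_power_fraction(speed_fraction):
    """Power available in the wind scales with the cube of wind speed."""
    return speed_fraction ** 3

# A 10% drop in wind speed
remaining = wind_power_fraction(0.90)
print(f"Power remaining: {remaining:.1%}")     # 72.9%
print(f"Power lost:      {1 - remaining:.1%}") # 27.1% (~30% as quoted)
```

The asymmetry cuts both ways: a 10% gust above rated conditions would deliver 33% more power, which is part of why wind output is so erratic.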

When a “too much generation” situation happens, the wind or solar has to be curtailed. That means the operators are told to stop delivering electricity. In many cases, they are not paid for the electricity they could have delivered. Some contracts require that they be paid according to a model that estimates how much they could have generated under the recorded weather conditions. The more wind and solar, the more curtailments, as the amount of erratic electricity approaches the allowable limits. As quotas increase, curtailment is a growing threat to the financial health of wind and solar.

There is a movement to include batteries with solar installations to move excessive middle-of-the-day generation to the early evening. This is a palliative to extend the time before solar runs into the curtailment wall. The batteries are extremely expensive and wear out every five years.

Neither wind nor solar is competitive without subsidies. If the subsidies and quotas were taken away, no wind or solar operation outside very special situations would be built. Further, the existing installations would continue only as long as their contracts are honored and they are cash flow–positive. In order to be competitive, without subsidies, wind or solar would have to supply electricity for less than $20 per megawatt-hour, the marginal cost of generating the electricity with gas or coal. Only the marginal cost counts, because the fossil fuel plants have to be there whether or not there is wind or solar. Without the subsidies, quotas, and 25-year contracts, wind or solar would have to get about $100 per megawatt-hour for its electricity. That gap, between $100 and $20, is a wide chasm only bridged by subsidies and mandates.

The cost of using wind and solar for reducing CO2 emissions is very high. The most authoritative and sincere promoters of global warming loudly advocate using nuclear, a source that is not erratic, does not emit CO2 or pollution, and uses the cheapest fuel. One can buy carbon offsets for 10 or 20 times less than the cost of reducing CO2 emissions with wind or solar. A carbon offset is a scheme where the buyer pays the seller to reduce world emissions of CO2. This is done in a variety of ways by the sellers.

The special situations where wind and solar can be competitive are remote locations using imported oil to generate electricity. In those situations, the marginal cost of the electricity may be $200 per megawatt-hour or more. Newfoundland comes to mind — for wind, not solar.

Maintenance costs for solar are low. For wind, maintenance costs are high, and major components, such as propeller blades and gearboxes, may fail, especially as the turbines age. These heavy and awkward objects are located hundreds of feet above ground. There is a danger that wind farms will fail once the inflation-protected subsidy of $24 per megawatt-hour runs out after ten years. At that point, turbines that need expensive repairs may be abandoned. Wind turbine graveyards from the first wind fad in the 1970s can be seen near Palm Springs, California. Wind farms can’t receive the production subsidy unless they can sell the electricity. That has resulted in wind farms paying customers to “buy” the electricity.

Tehachapi’s dead turbines.

A significant financial risk is that the global warming narrative may collapse. If belief in the reality of the global warming threat collapses, then the major intellectual support for renewable energy will collapse. It is ironic that the promoters of global warming are campaigning to require companies to take into account the threat of global warming in their financial projections. If the companies do this in an honest manner, they also have to take into account the possibility that the threat will evaporate. My own best guess, after considerable technical study, is that it is near a sure thing that the threat of global warming is imaginary and largely invented by the people who benefit. Adding CO2 to the atmosphere has well understood positive effects for the growth of crops and the greening of deserts.

The conservative investors who make long-term investments in wind or solar may be underestimating the risks involved. For example, an article in Chief Investment Officer magazine stated that CalPERS, the giant California public employees retirement fund, is planning to expand investments in renewable energy, characterized as “stable cash flowing assets.” That article was written before the bankruptcy of PG&E. The article also stated that competition among institutional investors for top yielding investments in the alternative energy space is fierce.

Wind and solar are not competitive and never will be. They have been pumped up into supposedly solid investments by means of ill-advised subsidies and mandates. At some point, the governments will wake up to the waste and foolishness involved. At that point, the value of these investments will collapse. It won’t be the first time that investment experts made bad investments because they didn’t really understand what was going on.

Footnote:  There is also a report from GWPF on environmental degradation from industrial scale wind and solar:

N. Atlantic Staying Cool

RAPID Array measuring North Atlantic SSTs.

For the last few years, observers have been speculating about when the North Atlantic will start the next phase shift from warm to cold. Given the way 2018 went and 2019 is following, this may be the onset.  First some background.

Source: Energy and Education Canada

An example is this report from May 2015, The Atlantic is entering a cool phase that will change the world’s weather, by Gerald McCarthy and Evan Haigh of the RAPID Atlantic monitoring project. Excerpts in italics with my bolds.

This is known as the Atlantic Multidecadal Oscillation (AMO), and the transition between its positive and negative phases can be very rapid. For example, Atlantic temperatures declined by 0.1ºC per decade from the 1940s to the 1970s. By comparison, global surface warming is estimated at 0.5ºC per century – half that rate.
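As an aside, the rate comparison just quoted is easy to verify by putting both figures on the same per-century basis:

```python
# AMO decline quoted: 0.1 C per decade; global warming: 0.5 C per century
amo_decline_per_century = 0.1 * 10   # -> 1.0 C per century
global_warming_per_century = 0.5

ratio = amo_decline_per_century / global_warming_per_century
print(ratio)  # 2.0: the quoted AMO decline ran at twice the global warming rate
```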

In many parts of the world, the AMO has been linked with decade-long temperature and rainfall trends. Certainly – and perhaps obviously – the mean temperature of islands downwind of the Atlantic such as Britain and Ireland show almost exactly the same temperature fluctuations as the AMO.

Atlantic oscillations are associated with the frequency of hurricanes and droughts. When the AMO is in the warm phase, there are more hurricanes in the Atlantic and droughts in the US Midwest tend to be more frequent and prolonged. In the Pacific Northwest, a positive AMO leads to more rainfall.

A negative AMO (cooler ocean) is associated with reduced rainfall in the vulnerable Sahel region of Africa. The prolonged negative AMO was associated with the infamous Ethiopian famine in the mid-1980s. In the UK it tends to mean reduced summer rainfall – the mythical “barbeque summer”.

Our results show that ocean circulation responds to the first mode of Atlantic atmospheric forcing, the North Atlantic Oscillation, through circulation changes between the subtropical and subpolar gyres – the intergyre region. This is a major influence on the wind patterns and the heat transferred between the atmosphere and ocean.

The observations that we do have of the Atlantic overturning circulation over the past ten years show that it is declining. As a result, we expect the AMO is moving to a negative (colder surface waters) phase. This is consistent with observations of temperature in the North Atlantic.

Cold “blobs” in the North Atlantic have been reported, but they are usually winter phenomena. For example, in April 2016 the SST anomalies looked like this

But by September, the picture changed to this

And we know from the Kaplan AMO dataset that 2016 summer SSTs were right up there with 1998 and 2010 as the highest recorded.

As the graph above suggests, this body of water is also important for tropical cyclones, since warmer water provides more energy.  But those are annual averages, and I am interested in the summer pulses of warm water into the Arctic. As I have noted in my monthly HadSST3 reports, most summers since 2003 have shown warm pulses in the North Atlantic.
The AMO Index is from Kaplan SST v2, the unaltered and not detrended dataset. By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N.  The graph shows the warmest month, August, beginning to rise after 1993 up to 1998, with a series of matching years since.  December 2016 set a record at 20.6C, but note the plunge down to 20.2C for December 2018, matching 2011 as the coldest year since 2000.  Because McCarthy refers to hints of cooling to come in the N. Atlantic, let’s take a closer look at some AMO years in the last 2 decades.
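The index construction described — monthly SSTs on a 5×5° grid over roughly 0–70N, averaged without detrending — amounts to an area-weighted basin mean. A sketch with synthetic grid values (cosine-of-latitude weighting is standard practice for lat-lon grids, not a detail taken from the Kaplan documentation):

```python
import numpy as np

# 5x5 degree grid over a North Atlantic box, roughly 0-70N, 75W-0
lats = np.arange(2.5, 70, 5.0)    # cell-center latitudes
lons = np.arange(-72.5, 0, 5.0)   # cell-center longitudes
rng = np.random.default_rng(2)
sst = 20.0 + rng.normal(0, 0.5, (lats.size, lons.size))  # synthetic monthly SSTs, C

# Grid cells shrink toward the pole, so weight each row by cos(latitude)
weights = np.cos(np.radians(lats))[:, None] * np.ones(lons.size)
amo_index = np.average(sst, weights=weights)
print(f"Area-weighted basin mean SST: {amo_index:.2f} C")
```

Without the latitude weighting, the high-latitude (and typically colder) cells would be over-counted, biasing the basin mean low.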

June begins the summer, and does serve to show the pattern of North Atlantic pulses related to ENSO events. In the last two decades there were four El Nino events, peaking in 1998, 2005, 2010 and 2016.  Three of those years appear in the June AMO record as over 21.8C, a level not previously reached in the North Atlantic. Note the dropoff to 21.4C last year, and a rebound this year.

This graph shows monthly AMO temps for some important years. The peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note the red 2018 line is at the bottom of all these tracks.  The short black line shows that 2019 began slightly cooler than January 2018, then tracked closely before rising slightly in June.  The average for the first six months is virtually the same, with 2019 just 0.02C higher.

With all the talk of AMOC slowing down and a phase shift in the North Atlantic, it seems the annual average for 2018, matched so far by 2019, confirms that cooling has set in.  Through December the momentum is certainly heading downward, despite the band of warming ocean that gave rise to European heat waves last summer.


Permafrost Scare (again)

The Permafrost Bogeyman is back!

This post is prompted by noticing that alarmists are again trying to leverage permafrost to frighten people into anti-fossil fuel compliance. I have pushed back against this previously, but the PR continues and is successful when people lack information and historical context to see through the claims.

Basically, the fear is that organic material underneath ice and frozen ground will decompose when permafrost melts, and the emissions of CO2 and CH4 will drive the planetary climate into runaway warming. First you should ask yourself how those organisms got sequestered under the ice, and what has happened to frozen soil down through history. As you will see below, the evidence shows that warm climate periods in the past caused that terrain to be filled with plant life. And discovered remains prove that animals and even humans lived in those places between frozen periods.

Then ask yourself why there was not runaway warming when the land thawed previously. After all, the fear mongers are eager to inform us that permafrost covers soil and vegetation that can produce amounts of GHGs several multiples larger than all the emissions from human activity. And as history shows, ice and permafrost have melted and refrozen several times during our current Holocene period. Yet no runaway warming occurred, or we would not be here to fret about it.

Europe, like North America, had four periods of glaciation. Successive ice caps reached limits that differed only slightly. The area covered by ice at any time is shown in white.

The Big Picture

From Encyclopaedia Britannica

An ice age, also called a glacial age, is any geologic period during which thick ice sheets cover vast areas of land. Such periods of large-scale glaciation may last several million years and drastically reshape surface features of entire continents. A number of major ice ages have occurred throughout Earth history. The earliest known took place during Precambrian time, dating back more than 570 million years. The most recent periods of widespread glaciation occurred during the Pleistocene Epoch (2.6 million to 11,700 years ago).

A lesser, recent glacial stage called the Little Ice Age began in the 16th century and advanced and receded intermittently over three centuries in Europe and many other regions. Its maximum development was reached about 1750, at which time glaciers were more widespread on Earth than at any time since the last major ice age ended about 11,700 years ago.

The colored areas are those that were covered by ice sheets in the past. The Kansan and Nebraskan sheets overlapped almost the same areas, and the Wisconsin and Illinoisan sheets covered approximately the same territory. In the high altitudes of the West are the Cordilleran ice sheets. An area at the junction of Wisconsin, Minnesota, Iowa, and Illinois was never entirely covered with ice. Encyclopaedia Britannica

What was Covered by the Ice Sheets from UC Berkeley

This mammoth, found in deposits in Russia, was one of the largest land mammals of the Pleistocene, the time period that spanned from 1.8 million to ~10,000 years ago. Pleistocene biotas were extremely close to modern ones — many genera and even species of Pleistocene conifers, mosses, flowering plants, insects, mollusks, birds, mammals, and others survive to this day. Yet the Pleistocene was also characterized by the presence of distinctive large land mammals and birds. Mammoths and their cousins the mastodons, longhorned bison, sabre-toothed cats, giant ground sloths, and many other large mammals characterized Pleistocene habitats in North America, Asia, and Europe. Native horses and camels galloped across the plains of North America. Great teratorn birds with 25-foot wingspans stalked prey. Around the end of the Pleistocene, all these creatures went extinct (the horses living in North America today are all descendants of animals brought from Europe in historic times).

It was during the Pleistocene that the most recent episodes of global cooling, or ice ages, took place. Much of the world’s temperate zones were alternately covered by glaciers during cool periods and uncovered during the warmer interglacial periods when the glaciers retreated. Did this cause the Pleistocene extinctions? It doesn’t seem likely; the large mammals of the Pleistocene weathered several climate shifts.

The Holocene Glacial Retreat from Wikipedia

The Holocene glacial retreat is a geographical phenomenon that involved the global deglaciation of glaciers that previously had advanced during the Last Glacial Maximum. Ice sheet retreat initiated ca. 19,000 years ago and accelerated after ca. 15,000 years ago. The Holocene, starting with abrupt warming 11,700 years ago, resulted in rapid melting of the remaining ice sheets of North America and Europe.

During the various Ice Ages in the Pleistocene Epoch, the continent of North America was covered by a massive ice sheet, which advanced as far south as 37 degrees North latitude. Centered in the Hudson Bay region, it later combined with other glaciers and covered a maximum of 5 million square miles (“Ice Age”), as seen in Figure 1.1. In some places it was upwards of 10 thousand feet thick (“Laurentide Ice Sheet”). This was the Laurentide Ice Sheet, responsible for much of the topography seen in Canada and the United States.

[Note that Alaska has always been an exception within the Arctic climate due to the influence of warm pulses of Pacific water.]

The Laurentide Ice Sheet last reached its maximum extent during the Last Glacial Maximum, 21,000 years ago, just ahead of present-day Cape Cod. After achieving equilibrium, global temperatures began to warm, triggering the ice sheet’s retreat around 18,000 years ago. By 5,000 years ago, most of the ice sheet had completely melted except for a small chunk near Baffin Island (Martin). Figure 1.2 shows the ice sheet in retreat, between 18,000 and 5,000 years ago. The retreat pockmarked the North American continent with numerous depositional features, many of which can still be seen and studied today.

What Happens to Permafrost During Warmer or Cooler Periods

From Mountains, Lowlands, and Coasts: the Physiography of Cold Landscapes
Tobias Bolch and Hanne H. Christiansen.  Excerpts in italics with my bolds.

The general characteristics of the physiography of the cold regions on Earth are important background information to understand the distribution of processes associated with the cryosphere, such as glacier or permafrost-related hazards. Glaciers and permafrost comprise an important part of the cryosphere.

FIGURE 7.4 The distribution of the different permafrost types on the Northern Hemisphere as compiled by the International Permafrost Association, IPA. Source: AMAP (2011) based on Brown et al. (1997).

Permafrost is soil, rock, sediment, or other earth material that remains at or below 0 °C for two or more consecutive years (van Everdingen, 1998). Thus, it is solely defined on the basis of temperature and duration.
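The two-year criterion above is simple enough to state as a check over a yearly temperature record. A minimal sketch (the function name `is_permafrost` and the use of annual maximum ground temperature are my own illustration, not from the source):

```python
def is_permafrost(annual_max_temps_c):
    """True if ground temperature stayed at or below 0 °C
    for two or more consecutive years.

    annual_max_temps_c: one value per year, the maximum
    ground temperature in °C for that year.
    """
    run = 0  # count of consecutive years at or below freezing
    for t in annual_max_temps_c:
        run = run + 1 if t <= 0.0 else 0
        if run >= 2:
            return True
    return False

# Ground that thaws every summer (the active layer) never qualifies:
print(is_permafrost([1.5, 2.0, 0.8]))     # False
# Ground frozen through two consecutive years does:
print(is_permafrost([-0.5, -1.2, -0.3]))  # True
```

The point of the sketch is that the definition is purely thermal: no reference to ice content, depth, or material, only temperature and duration.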

Typically, permafrost does not occur beneath glaciers, as they isolate the ground from the necessary atmospheric cooling, but permafrost can exist under thin cold-based glaciers or along the margins of polythermal glaciers.

Terrestrial permafrost thickness ranges from a few decimeters at the southern limit of the permafrost zone to about 1,500 m in the north of the Arctic region (Figure 7.7). The thickest permafrost is found in areas that have not recently been covered by glaciers, such as Siberia, where ground cooling for a longer time has allowed for >1,000-m-thick permafrost to develop. Areas that have been glacier covered during the last glaciation typically do not have >200- to 500-m-thick permafrost (French, 2007).

Permafrost is controlled by climate and also by a combination of several local factors (Streletskiy et al., 2014). Thus, the permafrost thermal regime is controlled by the exchanges of heat and moisture between the atmosphere and the Earth’s surface, and by the thermal properties of the underlying ground (Williams and Smith, 1989). The existence of permafrost depends on past and present states of energy fluxes through the active layer. This layer experiences seasonal variations in ice/water content, thermal conductivity, density, mechanical properties, and solute redistribution. Other important factors include snow cover, vegetation, soil organic layer thickness, soil moisture, ice content, and drainage conditions controlled by the local geomorphology.

Figure 7.7

The upper parts of permafrost can experience freezing and thawing at centennial to millennial scales. French and Shur (2010) conclude that permafrost can be stable under fluctuating climatic conditions if the ground is protected by a high ground-ice content during warm periods. Such stability of the permafrost toward climatic fluctuations is a consequence of a layer of the ground that, although a part of the active layer during warm summers, under normal climatic conditions is the upper part of the permafrost. If this layer has a high ice content, it provides thermal inertia. The net result is that permafrost can have a relatively low sensitivity to atmospheric temperature rise, or anthropogenic disturbance, when the top permafrost is ice-rich (Shur et al., 2005). This is called the transient layer (Shur et al., 2005). The transient layer experiences high and quasi-uniform ice content and undergoes freezing/thawing at decadal to century scales.

Obviously, the most important condition in the permafrost is its temperature. This is typically monitored in boreholes to varying depths in the ground in different landforms. Permafrost temperatures vary from being very close to 0 °C at the southern extent of permafrost, to being down to −15 °C in the high Arctic (Romanovsky et al., 2010).

Summary from Hugh M French

The permafrost history of the high northern latitudes over the last two million years indicates that perennially frozen ground formed and thawed repeatedly, probably in close synchronicity with the climate changes that led to the expansion and subsequent shrinkage of continental ice sheets.

There is convincing evidence to suggest that much of today’s permafrost probably originated during the fluctuating climate of the Pleistocene. Some of the most striking evidence includes the remains of woolly mammoths and other Pleistocene animals found preserved in permafrost in Siberia, Alaska and north-western Arctic Canada. Another line of evidence is cryostratigraphic: in some areas, the upper boundary of permafrost lies below the depth of modern seasonal freezing and the temperature of permafrost sometimes decreases with increasing depth. Both phenomena indicate residual (i.e. relict) cold. Another clue lies in the fact that the thickest permafrost occurs in areas which escaped glaciation and which were not protected from cold subaerial conditions by a thick ice cover.

Class Zero Grad Speeches: Fail

Once upon a time at secondary school graduation ceremonies, students who finished with the best grades (Valedictorians and Salutatorians) took to the podium to deliver a speech each one wrote, expressing his or her personal thoughts on that life passage. Not anymore. In progressive, post-modern places, these speeches are now a canned performance: apparently accomplished scholars choose not to express themselves, not to speak out as individuals who have earned the right to be heard. Instead they read out words written by others to proselytize for a cause.

This Spring We’re Taking Over Commencements Everywhere to Demand a Zero Emissions Future

According to Class of 0000: Starting in May, hundreds of Valedictorians and Salutatorians will deliver the same message in their commencement speeches.  The speech in italics with my bolds:

Today, we celebrate our achievements from the last 4 years. But I want to focus on what we need to achieve in the next 11.

That’s how long climate scientists have given us; 11 years to avoid catastrophic climate change. It’s already damaging our homes, our health, our safety and our happiness. We won’t let it take our futures too.

Our diplomas may say Class of 2019, but marked in history, we are the Class of Zero.
Zero emissions.
Zero excuses.
Zero time to waste.

Across the country, our class stands 7.5 million strong.

And in unity, we’re giving 2020 political candidates a choice:

Have a plan to get to zero emissions, or get zero of our votes.

Together, we have the power to solve the climate crisis.

Every student. Every parent. Every teacher. Every leader.

The future is in our hands.

Local school children join Greta Thunberg's initiative on climate strike during the COP24 UN Climate Change Conference 2018 in Katowice

But it didn’t go off everywhere as planned by the movement.

As Lathan Watts reported at Town Hall, many of these performances were stopped by educators.

Kriya Naidu was the valedictorian of her Florida high school, but she was recently prohibited by school officials from giving her graduation speech because of its content.

Cait Christenson was one of six valedictorians at her Wisconsin high school, but again, school administrators found the content of her graduation speech too controversial and prohibited her from addressing her fellow graduates.

Lulabel Seitz was the valedictorian at her California high school. And, you guessed it, her graduation speech was also censored by school administrators – in fact, her mic was turned off in the midst of her address.

For Lathan Watts, this is a problem. I would agree if they were expressing something other than a call for political and social action. This is the most striking example yet of young people being subsumed into a social group and losing their individual identity. I don’t know whether to call this Artificial Intelligence or Artificial Stupidity; but it is certainly not genuine, not authentic. Maybe this is what we have to endure in the Age of Greta.

The graduates would have been better served if Robert Curry were at the podium:

As we all know, acquiring common sense can be a matter of life and death. I’m thinking, for example, of the teenage boy who swallowed a garden slug on a dare, became paralyzed, and died recently. Because children lack common sense, parents must do what they have always done, trying to instill common sense in their children while at the same time using their own common sense to encompass the growing child.

Becoming a person of common sense has always been a life-defining challenge, but acquiring common sense has gotten a lot more difficult for young people in our time, especially if they have spent some time in our institutions of higher learning. My witty friend Robert Godwin has this to say about that: “Say what you want about the liberal arts, but they’ve found a cure for common sense.”

When I headed off to college, my high school teacher who was my mentor offered me two commonsense rules to follow: “Take care to stay well, and choose professors, not courses.” Because of my high regard for him, I took his words to heart. Later, when I saw the problems my fellow students brought on themselves by not getting enough sleep and generally being careless about their health, I understood the practical wisdom of what he had told me. And the second rule helped me more quickly understand the value of navigating my way through college by who was teaching the course rather than by the course title.

For years, I handed on the same commonsense wisdom to young folks I knew when they headed off to college. But I have not offered that advice for some years now. Here is what I tell them now: “They are going to try to knock common sense out of you; don’t let them.”

Say what you want about the liberal arts, but they’ve found a cure for common sense.

Robert Curry writes at American Thinker: Making Sense of Common Sense. Excerpts in italics with my bolds.