More Methane Madness

The US Senate is considering a measure to repeal, with prejudice, an Obama anti-methane regulation. The story from the activist outlet Climate Central is headlined:
Senate Mulls ‘Kill Switch’ for Obama Methane Rule

The U.S. Senate is expected to vote soon on whether to use the Congressional Review Act to kill an Obama administration climate regulation that cuts methane emissions from oil and gas wells on federal land. The rule was designed to reduce oil and gas wells’ contribution to climate change and to stop energy companies from wasting natural gas.

The Congressional Review Act is rarely invoked. It was used this month to reverse a regulation for the first time in 16 years and it’s a particularly lethal way to kill a regulation as it would take an act of Congress to approve a similar regulation. Federal agencies cannot propose similar regulations on their own.

The Claim Against Methane

Now some Republican senators are hesitant to take this step because of claims like this one in the article:

Methane is 86 times more potent as a greenhouse gas than carbon dioxide over a period of 20 years and is a significant contributor to climate change. It warms the climate much more than other greenhouse gases over a period of decades before eventually losing its potency. Atmospheric carbon dioxide remains a potent greenhouse gas for thousands of years.

Essentially the journalist is saying: however afraid you are of CO2, you should be 86 times more afraid of methane. Which also means that if CO2 is not a warming problem, your fear of methane is 86 times zero. The thousands-of-years claim is also bogus, but that is beside the point of this post, which is methane.

IPCC Methane Scare

The article helpfully provides a link to Chapter 8 of the IPCC AR5 Working Group 1 report, Anthropogenic and Natural Radiative Forcing.

The document is full of sophistry and creative accounting in order to produce as scary a number as possible. Table 8.7 provides the number for CH4 potency: 86 times that of CO2 over a 20-year horizon. They note they were able to increase the Global Warming Potential (GWP) of CH4 by 20% over the estimate in AR4. The increase comes from adding in more indirect effects and feedbacks, as well as from increased concentration in the atmosphere.

In the details are some qualifying notes like these:

Uncertainties related to the climate–carbon feedback are large, comparable in magnitude to the strength of the feedback for a single gas.

For CH4 GWP we estimate an uncertainty of ±30% and ±40% for 20- and 100-year time horizons, respectively (for 5 to 95% uncertainty range).
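To see what that stated uncertainty does to the headline figure, here is a minimal back-of-envelope sketch in Python, using only the numbers quoted above (the excerpt does not give the 100-year GWP value, so only the 20-year case is computed):

```python
# Range implied by the IPCC's stated uncertainty on the 20-year GWP of CH4.
# Both inputs (86, +/-30%) are taken from the quoted text above.
gwp_20yr = 86
uncertainty = 0.30  # quoted 5-95% range for the 20-year horizon

low, high = gwp_20yr * (1 - uncertainty), gwp_20yr * (1 + uncertainty)
print(f"20-year GWP of CH4: roughly {low:.0f} to {high:.0f} (central value {gwp_20yr})")
```

In other words, the "86 times" figure carries a 5-95% range of roughly 60 to 112 by the IPCC's own accounting.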

Methane Facts from the Real World
From Sea Friends (here):

Methane is natural gas CH4 which burns cleanly to carbon dioxide and water. Methane is eagerly sought after as fuel for electric power plants because of its ease of transport and because it produces the least carbon dioxide for the most power. Also cars can be powered with compressed natural gas (CNG) for short distances.

In many countries CNG has been widely distributed as the main home heating fuel. As a consequence, methane has leaked to the atmosphere in large quantities, though leaks are now firmly controlled. Grazing animals also produce methane in their complicated stomachs, and methane escapes from rice paddies and peat bogs such as the Siberian permafrost.

It is thought that methane is a very potent greenhouse gas because it absorbs some infrared wavelengths 7 times more effectively than CO2, molecule for molecule, and by weight even 20 times. As we have seen previously, this also means that within a distance of metres, its effect has saturated, and further transmission of heat occurs by convection and conduction rather than by radiation.

Note that when H2O is present in the lower troposphere, there are few photons left for CH4 to absorb.

Even if the IPCC radiative greenhouse theory were true, methane occurs only in minute quantities in air: 1.8 ppm versus 390 ppm for CO2. By weight, CH4 is only 5.24 Gt versus 3140 Gt of CO2 (on this assumption). If it truly were twenty times more potent, it would amount to an equivalent of 105 Gt CO2, or one thirtieth that of CO2. A doubling in methane would thus have no noticeable effect on world temperature.

However, the factor of 20 is entirely misleading because absorption is proportional to the number of molecules (=volume), so the factor of 7 (7.3) is correct and 20 is wrong. With this in mind, the perceived threat from methane becomes even less.

Further still, methane has been rising from 1.6 ppm to 1.8 ppm over 30 years (1980-2010); assuming it has not stopped rising, this amounts to a doubling in 2-3 centuries. In other words, methane can never have any measurable effect on temperature, even if the IPCC radiative cooling theory were right.
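The arithmetic behind these figures is easy to check. Here is a minimal Python sketch reproducing the back-of-envelope numbers above; every input is a value quoted in the text, and the doubling-time step simply extrapolates the stated 1980-2010 trend:

```python
# Reproduces the back-of-envelope figures quoted above (Sea Friends).
ch4_mass_gt  = 5.24    # atmospheric CH4, Gt (from the text)
co2_mass_gt  = 3140.0  # atmospheric CO2, Gt (from the text)
potency_mass = 20      # claimed per-weight potency factor
potency_mol  = 7.3     # per-molecule factor the author prefers

# CO2-equivalent of the methane burden using the per-weight factor of 20:
equiv_co2 = ch4_mass_gt * potency_mass
print(f"CH4 as CO2-equivalent: {equiv_co2:.0f} Gt "
      f"(~1/{co2_mass_gt / equiv_co2:.0f} of the CO2 burden)")

# Same calculation with the per-molecule factor, as the text argues:
print(f"With factor 7.3: {ch4_mass_gt * potency_mol:.0f} Gt CO2-equivalent")

# Doubling time implied by the observed rise of 1.6 -> 1.8 ppm over 30 years:
rise_per_year = (1.8 - 1.6) / 30          # ppm per year
years_to_double = 1.8 / rise_per_year     # another 1.8 ppm needed
print(f"Implied doubling time: ~{years_to_double:.0f} years")  # ~270 years
```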

Because only a small fraction of the rise of methane in air can be attributed to farm animals, it is ludicrous to worry about this aspect, or to try to farm with smaller emissions of methane, or to tax it or to trade credits.

The fact that methane in air has been leveling off in the past two decades, even though we do not know why, implies that it plays absolutely no role as a greenhouse gas.

More information at THE METHANE MISCONCEPTIONS by Dr Wilson Flood (UK) here

Summary:

Natural Gas (75% methane) burns the cleanest with the least CO2 for the energy produced.

Leakage of methane is already addressed by efficiency improvements for its economic recovery, and will apparently be subject to even more regulations.

The atmosphere is a methane sink where the compound is oxidized through a series of reactions producing one CO2 and two H2O after a few years.

GWP (Global Warming Potential) is CO2-equivalent heat trapping based on laboratory measurements, not real-world effects.

Any IR absorption by methane is limited by H2O absorbing in the same low energy LW bands.

There is no danger this century from natural or man-made methane emissions.

Conclusion

Senators and the public are being bamboozled by opaque scientific bafflegab. The plain truth is much different. The atmosphere is a methane sink in which CH4 is oxidized away within a few years, and its infrared absorption is saturated within the first few meters. The amount of CH4 available in the air is minuscule, even compared to the trace gas CO2, and it is not accelerating. Methane is the obvious choice to signal virtue on the climate issue since governmental actions will not make a bit of difference anyway, except perhaps to do some economic harm.

Give a daisy a break (h/t Derek here)


Footnote:

For a more thorough and realistic description of atmospheric warming see:

Fearless Physics from Dr. Salby

Time Mag Misreads Science

 

This blog is dedicated to science as a process of discovery, rather than a catechism of truths to be embraced.  This article in the Washington Times discussed that issue in relation to a recent Time Magazine essay that comes down on the catechism side.

Time’s Misreading of Science: The magazine would rather settle than search

As demonstrated by the confirmation hearings of Scott Pruitt for new Environmental Protection Agency chief, all-out war is being waged against the Trump administration by leftists who believe science is under attack from the evil empire.

Belief that this new administration puts science in jeopardy is not surprising given the fact that so many are confused about what science is, how it is practiced, and what it can tell us about the future.

The popular press adds to the confusion about science. Take the Feb. 13 issue of Time magazine, for example. In an article titled “How a war on science could hurt the U.S. — and its citizens,” the authors open with this assessment of science: “The discipline of science is one where the facts, once they are peer-reviewed and published in scientific journals, are fixed. They’re not open to interpretation, or at least not much.”

There are numerous problems with this confused understanding of science. Regardless, the authors continue by contrasting “science” with politics “in which nearly everything can be negotiated. But as the first days of the Trump administration have shown, many of those seemingly settled scientific facts — the ones that have informed countless policies from previous U.S. administrations — are once more up for debate.”


Science can be defined at its most basic level as “knowledge,” or what we think we know about a given topic. Since absolute truth on a subject is elusive, science is tentative, adjusted as additional information is accumulated through more research and wider perspective and, yes, even debate.

In practice, science can certainly be influenced by politics or, essentially, ideology. Those on the left apparently do not see a leftist ideology permeating certain areas of contemporary scientific practice and so equate scientific conclusions that endorse their beliefs as being absolutely irrefutable.

This blinkered perception manifests itself as “settled science” and is apparent in climate change science, and especially the power of this science to ascertain Earth’s future climate.

Accurate prediction is one of the biggest challenges in scientific practice, and indeed an accurate prediction for the right reasons is one of the conditions for a scientific assertion to be correct.

Here’s where climate science has fallen woefully short in recent decades.

The prediction that man-made carbon-dioxide emissions drive catastrophic climate change, beginning with mounting global temperatures, has proven paltry at best. Yet the dire global warming prediction evolved, years ago, into a belief brandished as a proselytizing mantra by climate change crusaders.

Now the current climate change hypothesis is struggling and can use some insight from qualified, skeptical scientists to broaden the ambient landscape.

That broadening is difficult with a Time-skewed understanding of science and scientific practice. To say that the discipline of science is where facts are fixed once they are peer-reviewed and published is confused at best. Scientists use facts (like those associated with the fundamental principles of physics) as they observe natural events, propose hypotheses, and test their explanations of what they observe. Hypotheses are submitted to peer-reviewed scientific journals for critique.

The peer-review process is assumed to be rigorous, fair and balanced; however, that is not always the case. Documented instances have occurred where data in published reports were discovered to be falsified, or when work described was never actually performed, or when only friendly reviewers were chosen to assure acceptance of the conclusions, and the like. So, facts cannot be determined by peer review any more than real truth can be decided by an ad hoc committee. And published results are always open to further review, challenges and certainly interpretation.

True believers trust that their concept of science is rock-solid, especially when the science they choose to believe conforms to their preconceived notions.

But, the current world of climate science has been astutely branded by some challengers as a “climate-industrial complex.” The moniker may be well suited to describe the seemingly enormous political and monetary influence exerted on this particular field by left-leaning vested interests.

Perhaps, with the arrival of the pragmatic Trump team, including Scott Pruitt, the climate world of “seemingly settled scientific facts” is about to be rocked by a bit more conservative assessment.

• Anthony J. Sadar is a Certified Consulting Meteorologist and author of “In Global Warming We Trust: Too Big to Fail” (Stairway Press, 2016).

Bertrand Russell makes a related point with his celestial teapot.


More on the Climate Crisis industry

Climate Crisis Inc.

For more on belief related to science and religion:

Head, Heart and Science

Arctic Ice Great Leap Upward

Arctic ice extents at day 50, 2017 (graph)

Recent posts on Arctic ice talked about an ice dance and seesaws in both Atlantic and Pacific basins.  But the fluctuations are over, and ice is forming fast.

In just six days Arctic ice extent grew 455k km2, now tied with 2016.  Ice extent in 2006 has fallen 490k behind at this date, and will go lower by month end.  Sea Ice Index from NASA@NSIDC also shows ice increasing, but still lags behind by ~400k km2.

Here are images of the great leap upward in the last week, from day 44 to day 50 (Feb. 19, 2017).

Ice growing in the Atlantic seas:


 

Ice growing in the Pacific seas, and showing 2006 day 050 as a reference.


The table below shows this year compared to the 11-year average and to 2006.

Region  2017050  Day 050 Average  2017-Ave.  2006050  2017-2006
 (0) Northern_Hemisphere 14743198 14851871 -108673 14251849 491349
 (1) Beaufort_Sea 1070445 1070111 334 1069711 734
 (2) Chukchi_Sea 966006 964552 1454 954122 11884
 (3) East_Siberian_Sea 1087137 1087038 99 1086081 1056
 (4) Laptev_Sea 897845 897835 10 897773 71
 (5) Kara_Sea 928805 918103 10702 902175 26631
 (6) Barents_Sea 508018 600373 -92355 488506 19512
 (7) Greenland_Sea 606503 635799 -29296 572979 33525
 (8) Baffin_Bay_Gulf_of_St._Lawrence 1517595 1446871 70725 1286797 230799
 (9) Canadian_Archipelago 853214 852984 230 852715 499
 (10) Hudson_Bay 1260903 1260391 512 1257433 3470
 (11) Central_Arctic 3238980 3214106 24874 3207097 31882
 (12) Bering_Sea 717887 771682 -53796 649938 67949
 (13) Baltic_Sea 52798 114323 -61525 93848 -41050
 (14) Sea_of_Okhotsk 957006 954616 2390 852621 104386
 (15) Yellow_Sea 15475 22222 -6747 13586 1889
 (16) Cook_Inlet 9671 11778 -2107 9530 141

The only deficit to 2006 is in the Baltic, while major surpluses are in Baffin, Bering and Okhotsk.  2017 is slightly below average in Barents, Bering and the Baltic, partly offset by being above average in Baffin and Central Arctic.
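For anyone who wants to verify the table, the two delta columns are simple differences. A minimal Python sketch with a few rows copied from the table (tiny ±1 km² mismatches can appear in other rows because the tabulated values are rounded):

```python
# Check the delta columns in the table above: each is the 2017 day-50 extent
# minus the 11-year average (or minus 2006). Values in km^2, copied from the table.
regions = {
    # name: (2017 day 50, day-50 average, 2006 day 50)
    "Northern_Hemisphere": (14_743_198, 14_851_871, 14_251_849),
    "Barents_Sea":         (508_018,       600_373,    488_506),
    "Baltic_Sea":          (52_798,        114_323,     93_848),
}

for name, (yr2017, avg, yr2006) in regions.items():
    print(f"{name:20s}  2017-Ave: {yr2017 - avg:+10,d}   2017-2006: {yr2017 - yr2006:+10,d}")
```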

 


Ocean Oxygen Misdirection


The climate scare machine is again promoting the fear of suffocating oceans. For example, an article this week by Chris Mooney in the Washington Post: It’s Official, the Oceans are Losing Oxygen.

A large research synthesis, published in one of the world’s most influential scientific journals, has detected a decline in the amount of dissolved oxygen in oceans around the world — a long-predicted result of climate change that could have severe consequences for marine organisms if it continues.

The paper, published Wednesday in the journal Nature by oceanographer Sunke Schmidtko and two colleagues from the GEOMAR Helmholtz Centre for Ocean Research in Kiel, Germany, found a decline of more than 2 percent in ocean oxygen content worldwide between 1960 and 2010.

Climate change models predict the oceans will lose oxygen because of several factors. Most obvious is simply that warmer water holds less dissolved gases, including oxygen. “It’s the same reason we keep our sparkling drinks pretty cold,” Schmidtko said.

But another factor is the growing stratification of ocean waters. Oxygen enters the ocean at its surface, from the atmosphere and from the photosynthetic activity of marine microorganisms. But as that upper layer warms up, the oxygen-rich waters are less likely to mix down into cooler layers of the ocean because the warm waters are less dense and do not sink as readily.

And of course, other journalists pile on with ever more catchy headlines.

The World’s Oceans Are Losing Oxygen Due to Climate Change

How Climate Change Is Suffocating The Oceans

Overview of Oceanic Oxygen

Once again climate alarmists/activists have seized upon an actual environmental issue, but misdirect the public toward their CO2 obsession, and away from practical efforts to address a real concern. Some excerpts from scientific studies serve to put things in perspective.

How the Ocean Breathes

Variability in oxygen and nutrients in South Pacific Antarctic Intermediate Water by J. L. Russell and A. G. Dickson

The Southern Ocean acts as the lungs of the ocean, drawing in oxygen and exchanging carbon dioxide. A quantitative understanding of the processes regulating the ventilation of the Southern Ocean today is vital to assessments of the geochemical significance of potential circulation reorganizations in the Southern Hemisphere, both during glacial-interglacial transitions and into the future.

Traditionally, the change in the concentration of oxygen along an isopycnal due to remineralization of organic material, known as the apparent oxygen utilization (AOU), has been used by physical oceanographers as a proxy for the time elapsed since the water mass was last exposed to the atmosphere. The concept of AOU requires that newly subducted water be saturated with respect to oxygen and is calculated from the difference between the measured oxygen concentration and the saturated concentration at the sample temperature.

This study has shown that the ratio of oxygen to nutrients can vary with time. Since Antarctic Intermediate Water provides a necessary component to the Pacific equatorial biological regime, this relatively high-nutrient, high-oxygen input to the Equatorial Undercurrent in the Western Pacific plays an important role in driving high rates of primary productivity on the equator, while limiting the extent of denitrifying bacteria in the eastern portion of the basin. 
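The AOU bookkeeping described in the excerpt is straightforward to express. A minimal Python sketch, in which the saturation concentration is a placeholder input (in practice it is computed from the sample's temperature and salinity using standard solubility relations):

```python
def apparent_oxygen_utilization(o2_measured_umol_kg: float,
                                o2_saturation_umol_kg: float) -> float:
    """AOU in umol/kg: saturation O2 minus measured O2. Positive values mean
    oxygen has been consumed by respiration since the water left the surface."""
    return o2_saturation_umol_kg - o2_measured_umol_kg

# Hypothetical numbers, purely for illustration:
print(apparent_oxygen_utilization(o2_measured_umol_kg=210.0,
                                  o2_saturation_umol_kg=265.0))  # -> 55.0
```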

Uncertain Measures of O2 Variability and Linkage to Climate Change

A conceptual model for the temporal spectrum of oceanic oxygen variability by Taka Ito and Curtis Deutsch

Changes in dissolved O2 observed across the world oceans in recent decades have been interpreted as a response of marine biogeochemistry to climate change. Little is known however about the spectrum of oceanic O2 variability. Using an idealized model, we illustrate how fluctuations in ocean circulation and biological respiration lead to low-frequency variability of thermocline oxygen.

Because the ventilation of the thermocline naturally integrates the effects of anomalous respiration and advection over decadal timescales, short-lived O2 perturbations are strongly damped, producing a red spectrum, even in a randomly varying oceanic environment. This background red spectrum of O2 suggests a new interpretation of the ubiquitous strength of decadal oxygen variability and provides a null hypothesis for the detection of climate change influence on oceanic oxygen. We find a statistically significant spectral peak at a 15–20 year timescale in the subpolar North Pacific, but the mechanisms connecting to climate variability remain uncertain.

The spectral power of oxygen variability increases from inter-annual to decadal frequencies, which can be explained using a simple conceptual model of an ocean thermocline exposed to random climate fluctuations. The theory predicts that the bias toward low-frequency variability is expected to level off as the forcing timescales become comparable to that of ocean ventilation. On time scales exceeding that of thermocline renewal, O2 variance may actually decrease due to the coupling between physical O2 supply and biological respiration [Deutsch et al., 2006], since the latter is typically limited by the physical nutrient supply.
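The "red spectrum" idea can be illustrated with a toy model: a reservoir that integrates random forcing while being damped on a ventilation timescale ends up with most of its variance at low frequencies, even though the forcing is pure noise. A minimal Python sketch, with illustrative parameter values that are not taken from Ito and Deutsch:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, years = 1.0 / 12.0, 1000             # monthly steps, 1000 simulated years
n = int(years / dt)
tau = 15.0                                # assumed ventilation/damping timescale (years)

o2 = np.zeros(n)
forcing = rng.normal(size=n)              # random respiration/advection anomalies
for i in range(1, n):
    o2[i] = o2[i - 1] + dt * (-o2[i - 1] / tau + forcing[i])

# Spectrum: variance rises toward low frequencies ("red") and levels off once
# the period exceeds the damping timescale, as the conceptual model predicts.
freqs = np.fft.rfftfreq(n, d=dt)                            # cycles per year
power = np.abs(np.fft.rfft(o2 - o2.mean())) ** 2
decadal = power[(freqs > 0) & (freqs < 0.1)].mean()         # periods > 10 years
interannual = power[(freqs > 0.5) & (freqs < 1.0)].mean()   # 1-2 year periods
print(f"decadal / interannual spectral power: {decadal / interannual:.0f}")
```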

Climate Model Projections are Confounded by Natural Variability

Natural variability and anthropogenic trends in oceanic oxygen in a coupled carbon cycle–climate model ensemble by T. L. Frolicher et al.

Internal and externally forced variability in oceanic oxygen (O2) are investigated on different spatiotemporal scales using a six-member ensemble from the National Center for Atmospheric Research CSM1.4-carbon coupled climate model. The oceanic O2 inventory is projected to decrease significantly in global warming simulations of the 20th and 21st centuries.

The anthropogenically forced O2 decrease is partly compensated by volcanic eruptions, which cause considerable interannual to decadal variability. Volcanic perturbations in oceanic oxygen concentrations gradually penetrate the ocean’s top 500 m and persist for several years. While well identified on global scales, the detection and attribution of local O2 changes to volcanic forcing is difficult because of unforced variability.

Internal climate modes can substantially contribute to surface and subsurface O2 variability. Variability in the North Atlantic and North Pacific are associated with changes in the North Atlantic Oscillation and Pacific Decadal Oscillation indexes. Simulated decadal variability compares well with observed O2 changes in the North Atlantic, suggesting that the model captures key mechanisms of late 20th century O2 variability, but the model appears to underestimate variability in the North Pacific.

Our results suggest that large interannual to decadal variations and limited data availability make the detection of human-induced O2 changes currently challenging.

The concentration of dissolved oxygen in the thermocline and the deep ocean is a particularly sensitive indicator of change in ocean transport and biology [Joos et al., 2003]. Less than a percent of the combined atmosphere and ocean O2 inventory is found in the ocean. The O2 concentration in the ocean interior reflects the balance between O2 supply from the surface through physical transport and O2 consumption by respiration of organic material.

Our modeling study suggests that over recent decades internal natural variability tends to mask simulated century-scale trends in dissolved oxygen from anthropogenic forcing in the North Atlantic and Pacific. Observed changes in oxygen are similar or even smaller in magnitude than the spread of the ensemble simulation. The observed decreasing trend in dissolved oxygen in the Indian Ocean thermocline and the boundary region between the subtropical and subpolar gyres in the North Pacific has reversed in recent years [McDonagh et al., 2005; Mecking et al., 2008], implicitly supporting this conclusion.

The presence of large-scale propagating O2 anomalies, linked with major climate modes, complicates the detection of long-term trends in oceanic O2 associated with anthropogenic climate change. In particular, we find a statistically significant link between O2 and the dominant climate modes (NAO and PDO) in the North Atlantic and North Pacific surface and subsurface waters, which are causing more than 50% of the total internal variability of O2 in these regions.

To date, the ability to detect and interpret observed changes is still limited by lack of data. Additional biogeochemical data from time series and profiling floats, such as the Argo array (http://www.argo.ucsd.edu), are needed to improve the detection of ocean oxygen and carbon system changes and our understanding of climate change.

The Real Issue is Ocean Dead Zones, Both Natural and Man-made

Since 1994, Diaz and the World Resources Institute (report here) in Washington, D.C., have identified and mapped 479 dead zones around the world. That’s more than nine times as many as scientists knew about 50 years ago.

What triggers the loss of oxygen in ocean water is the explosive growth of sea life fueled by the release of too many nutrients. As they grow, these crowds can simply use up too much of the available oxygen.

Many nutrients entering the water — such as nitrogen and phosphorus — come from meeting the daily needs of some seven billion people around the world, Diaz says. Crop fertilizers, manure, sewage and exhaust spewed by cars and power plants all end up in waterways that flow into the ocean. Each can contribute to the creation of dead zones.

Ordinarily, when bacteria steal oxygen from one patch of water, more will arrive as waves and ocean currents bring new water in. Waves also can grab oxygen from the atmosphere.

Dead zones develop when this ocean mixing stops.

Rivers running into the sea dump freshwater into the salty ocean. The sun heats up the freshwater on the sea surface. This water is lighter than cold saltier water, so it floats atop it. When there are not enough storms (including hurricanes) and strong ocean currents to churn the water, the cold water can get trapped below the fresh water for long periods.

Dead zones are seasonal events. They typically last for weeks or months. Then they’ll disappear as the weather changes and ocean mixing resumes.

Solutions are Available and do not Involve CO2 Emissions

Helping dead zones recover

The Black Sea is bordered by Europe and Asia. Dead zones used to develop here that covered an area as large as Switzerland. Fertilizers running off of vast agricultural fields and animal feedlots in the former Soviet Union were a primary cause. Then, in 1989, parts of the Soviet Union began revolting. Two years later, this massive nation broke apart into 15 separate countries.

The political instability hurt farm activity. In short order, use of nitrogen and phosphorus fertilizers by area farmers declined. Almost at once, the size of the Black Sea’s dead zone shrank dramatically. Now if a dead zone forms there it’s small, Rabalais says. Some years there is none.

Chesapeake Bay, the United States’ largest estuary, has its own dead zone. And the area affected has expanded over the past 50 years due to pollution. But since the 1980s, farmers, landowners and government agencies have worked to reduce the nutrients flowing into the bay.

Farmers now plant cover crops, such as oats or barley, that use up fertilizer that once washed away into rivers. Growers have also established land buffers to absorb nutrient runoff and to keep animal waste out of streams. People have even started to use laundry detergents made without phosphorus.

In 2011, scientists reported that these efforts had achieved some success in shrinking the size of the bay’s late-summer dead zones.

The World Resources Institute lists 55 dead zones as improving. “The bottom line is if we take a look at what is causing a dead zone and fix it, then the dead zone goes away,” says Diaz. “It’s not something that has to be permanent.”

Summary

Alarmists/activists are again confusing the public with their simplistic solution for a complex situation. And actual remedies are available, just not the agenda preferred by climatists.


Waste Management Saves the Ocean

 

Ocean Climate Ripples

Dr. Arnd Bernaerts is again active with edifying articles on how humans impact the oceans and thereby the climate. His recent post is Global Cooling 1940 – 1975 explained for climate change experts

Like others, I first approached Dr. Bernaerts’ theory relating naval warfare to climate change with a properly skeptical observation: The ocean is so vast, covering 71% of our planet’s surface and up to 11,000 meters deep, with such a storage of solar energy that it counteracts all forcings, including human ones.

As an oceanographer, Bernaerts is well aware of that generalization, having named his website Oceans Govern Climate. But his understanding is much more particular, and it has become clearer to me in these recent presentations. His information is encyclopedic and his grasp of the details can be intimidating, but I think I get his main point.

When there is intense naval warfare concentrated in a small, shallow basin like the North Sea, the disturbance of the water structure and circulation is profound. The atmosphere responds, resulting in significant regional climate effects. Nearby basins and continents are impacted and eventually it ripples out across the globe.

The North Atlantic example is explained by Bernaerts in Cooling of North Sea – 1939 (2_16). Some excerpts below.

Follow the Water

Water, among all solids and liquids, has the highest heat capacity except for liquid ammonia. If water within a water body remained stationary and did not move (which is what it does abundantly and often forcefully for a number of reasons), the uppermost water surface layer would, to a very high percentage, almost stop the transfer of any heat from a water body to the atmosphere.

However, temperature and salt are the biggest internal dynamic factors and they make the water move permanently. How much the ocean can transfer heat to the surface depends on how warm the surface water is relative to atmospheric air. Of no lesser importance is the question, as to how quickly and by what quantities cooled-down surface water is replaced by warmer water from sub-surface level. Wind, cyclones and hurricanes are atmospheric factors that quickly expose new water masses at the sea surface. Another ‘effective’ way to replace surface water is to stir the water body itself. Naval activities are doing just this.

War in the North Sea

Since the day the Second World War had started naval activities moved and turned the water in the North Sea at surface and lower levels at 5, 10, 20 or 30 metres or deeper on a scale that was possibly dozens of times higher than any comparable other external activity over a similar time period before. Presumably only World War One could be named in comparison.

The combatants arrived on the scene when the volume of heat from the sun had reached its annual peak. Impacts on temperatures and icing are listed in the last section: ‘Events’ (see below). The following circumstantial evidences help conclude with a high degree of certainty that the North Sea contributed to the arctic war winter of 1939/40.

Climate Change in Response

On the basis of the sea surface temperature record at Helgoland Station and subsequent air temperature developments, there is strong indication that the evaporation rate was high. This is confirmed by the following impacts observed:

More wind: As the rate of evaporation over the North Sea has not been measured and recorded, it seems there is little chance to prove that more vapour moved upwards during autumn 1939 than usual. It can be proved that the direction of the inflow of wind had changed from the usually most prevailing SW winds, to winds from the N to E, predominantly from the East. At Kew Observatory (London) general wind direction recorded was north-easterly only three times during 155 winter years; i.e. in 1814, 1841 and 1940[6]. This continental wind could have significantly contributed to the following phenomena of 1939: ‘The Western Front rain’.

More rain: One of the most immediate indicators of evaporation is the excessive rain in an area stretching from Southern England to Saxony, Silesia and Switzerland. Southern Baltic Sea together with Poland and Northern Germany were clearly separated from the generally wet weather conditions only three to four hundred kilometres further south. A demonstration of the dominant weather situation occurred in late October, when a rain section (supplied from Libya) south of the line Middle Germany, Hungary and Romania was completely separated from the rain section at Hamburg – Southern Baltic[7].

More cooling: Further, cooling observed from December 1939 onward can be linked to war activities in two ways. The most immediate effect, as has been explained (above), is the direct result from any excessive evaporation process. The second (at least for the establishment of global conditions in the first war winter) is the deprivation of the Northern atmosphere of its usual amount of water masses, circulating the globe as humidity.

Rippling Effects in Northern Europe and Beyond

Next to the Atlantic Gulf Current, the North Sea (Baltic Sea is discussed in the next chapter) plays a key role in determining the winter weather conditions in Northern Europe. The reason is simple. As long as these seas are warm, they help sustain the supremacy of maritime weather conditions. If their heat capacity turns negative, their feature turns ‘continental’, giving high air pressure bodies an easy opportunity to reign, i.e. to come with cold and dry air. Once that happens, access of warm Atlantic air is severely hampered or even prevented from moving eastwards freely.

The less moist air is circulating the globe south of the Arctic, the more easily cold polar air can travel south. A good piece of evidence is the record lack of rain in the USA from October – December 1939 followed by a colder than average January 1940, a long period of low water temperatures in the North Sea from October-March (see above) and the ‘sudden’ fall of air temperatures to record low in Northern Europe.

The graph above suggests that naval warfare is linked to rapid cooling. The climate system responds with negative feedbacks to restore equilibrium. Following WWI, limited to the North Atlantic, the system overshot and the momentum continued upward into the 1930s. Following WWII, with both Pacific and Atlantic theaters, the climate feedbacks show several peaks trying to offset the cooling, but the downward trend persisted until about 1975.

Summary

The Oceans Govern Climate. Man influences the ocean governor by means of an expanding fleet of motorized propeller-driven ships. Naval warfare in the two World Wars provides the most dramatic examples of the climate effects.

Neither I nor Dr. Bernaerts claim that shipping and naval activity are the only factors driving climate fluctuations. But it is disturbing that so much attention and money is spent on a bit player CO2, when a much more plausible human influence on climate is ignored and not investigated.

Scafetta vs. IPCC: Dueling Climate Theories

In one corner, Darth Vader, the Prince of CO2, filling the air with the overwhelming sound of his poison breath. Opposing him, Luke Skywalker, a single skeptic armed only with facts and logic.

OK, that’s over the top, but it’s what came to mind while reading a new paper by Nicola Scafetta in which he goes up against the IPCC empire. And Star Wars came to mind since Scafetta’s theory involves astronomical cycles. The title below links to the text, which is well worth reading.  Some excerpts follow. H/T GWPF

CMIP5 General Circulation Models versus a Semi-Empirical Model Based on Natural Oscillations

Scafetta comes out swinging: From the Abstract

Since 1850 the global surface temperature has warmed by about 0.9 °C. The CMIP5 computer climate models adopted by the IPCC have projected that the global surface temperature could rise by 2-5 °C from 2000 to 2100 for anthropogenic reasons. These projections are currently used to justify expensive mitigation policies to reduce the emission of anthropogenic greenhouse gases such as CO2.

However, recent scientific research has pointed out that the IPCC climate models fail to properly reconstruct the natural variability of the climate. Indeed, advanced techniques of analysis have revealed that the natural variability of the climate is made of several oscillations spanning from the decadal to the millennial scales (e.g. with periods of about 9.1, 10.4, 20, 60, 115, 1000 years and others). These oscillations likely have an astronomical origin.

In this short review I briefly summarize some of the main reasons why the AGWT should be questioned. In addition, I show that an alternative interpretation of climate change based on the evidences that a significant part of it is due to specific natural oscillations is possible. A modeling based on such interpretation agrees better with the climatic comprehensive picture deduced from the data.

The Missing Hot-Spot

It has been observed that for the last decades climate models predict a hot-spot, that is, a significant warming of a band of the upper troposphere 10 km over the tropics and the equator. The presence of this hot-spot is quite important because it would indicate that the water-vapor feedback to radiative forcing would be correctly reproduced by the models.

However, this predicted hot-spot has never been found in the tropospheric temperature records [20,21]. This could only be suggesting either that both the temperature records obtained with satellite measures and balloons have been poorly handled or that the models severely fail to properly simulate the water-vapor feedback. In the latter case, the flaw of the models would be fatal because the water-vapor feedback is the most important among the climate feedbacks.

Without a strong feedback response from water vapor the models would only predict a moderate climate sensitivity to radiative forcing of about 1.2 °C for CO2 doubling instead of about 3 °C. Figure 8 compares the observed temperature trend in the troposphere versus the climate model predictions: from Ref. [21]. The difference between the two record sets is evident.


Figure 8. Comparison between observed temperature trend in the troposphere (green-blue) versus the climate model predictions (red). From Ref. [21].

Observations Favor Scafetta’s Model Over GCM Models

I have proposed that the global surface temperature record could be reconstructed from the decadal to the millennial scale using a minimum of 6 harmonics at 9.1, 10.4, 20, 60, 115 and 983 years plus an anthropogenic and volcano contribution that can be evaluated from the CMIP5 GCM outputs reduced by half because, as discussed above, the real climate sensitivity to radiative forcing appears to be about half of what is assumed by the current climate models. The figure highlights the better performance of the solar–astronomical semi-empirical model versus the CMIP5 models. This is particularly evident since 2000, as shown in the inserts.


Figure 12. [A] The four CMIP5 ensemble average projections versus the HadCRUT4 GST record (black). [B] The solar–astronomical semi-empirical model. From Ref. [4]. Left axis shows temperature anomalies in degrees Celsius.

Forecast Validation

In 2011 I prepared a global surface temperature forecast based on a simplified climate model with four natural oscillations (9.1, 10.4, 20 and 60 year) plus an estimate of a realistic anthropogenic contribution [25]: for example, see Refs. [33,34,35] referring to the 60-year cycle. Figure 13 compares my 2011 forecast (red curve) against the global surface temperature record I used in 2011 (HadCRUT3, blue curve) and a modern global surface temperature record updated at June/2016 (RSS MSU record, black line, http://www.remss.com/measurements/upper-air-temperature).

The RSS MSU record, which is a global surface temperature estimate using satellite measurements, was linearly rescaled to fit the original HadCRUT3 global surface temperature record for optimal comparison. Other global temperature reconstructions perform similarly. Note that HadCRUT3 was discontinued in 2014. Figure 13 also shows in green a schematic representation of the IPCC GCMs prediction since 2000 [25].


Left axis shows temperature anomalies in degrees Celsius.

Figure 13. Comparison of the forecast (red-yellow curve) made in Scafetta (2011) [25] against (1) the temperature record used in 2011 (HadCRUT3, blue curve), (2) the IPCC climate model projections since 2000 (green area), (3) a recent global temperature record (RSS MSU record, black line, linearly re-scaled to match the HadCRUT3 from 1979 to 2014). The temperature record has followed Scafetta’s forecast better than the IPCC ones. In 2015-2016 there was a strong El-Nino Pacific Ocean natural warming that caused the observed temperature peak.

Summary

The considerations emerging from these findings lead to the conclusion that the IPCC climate models severely overestimate the anthropogenic climatic warming by about two times. I have finally proposed a semi-empirical climate model calibrated to reconstruct the natural climatic variability since Medieval times. I have shown that this model projects a very moderate warming until 2040 and a warming less than 2 °C from 2000 to 2100 using the same anthropogenic emission scenarios used by the CMIP5 models: see Figure 12.

This result suggests that climatic adaptation policies, which are less expensive than the mitigation ones, could be sufficient to address most of the consequences of a climatic change during the 21st century. Similarly, fossil fuels, which have contributed significantly to the development of our societies, can still be used to fulfill our energy necessities until equally efficient alternative energy sources could be determined and developed.

Scafetta Briefly Explains the Harmonic oscillation theory

“The theory is very simple in words. The solar system is characterized by a set of specific gravitational oscillations due to the fact that the planets are moving around the sun. Everything in the solar system tends to synchronize to these frequencies beginning with the sun itself. The oscillating sun then causes equivalent cycles in the climate system. Also the moon acts on the climate system with its own harmonics. In conclusion we have a climate system that is mostly made of a set of complex cycles that mirror astronomical cycles. Consequently it is possible to use these harmonics to both approximately hindcast and forecast the harmonic component of the climate, at least on a global scale. This theory is supported by strong empirical evidences using the available solar and climatic data.”
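For readers who want to see how such a harmonic model is built, here is a minimal Python sketch of the regression idea: sinusoids with the fixed periods listed earlier, plus a linear term standing in for the (halved) anthropogenic contribution, fit by least squares. The temperature series below is a synthetic placeholder, not the HadCRUT data Scafetta actually used.

```python
import numpy as np

# Periods (years) named in the text; the data series is a stand-in only.
periods = np.array([9.1, 10.4, 20.0, 60.0, 115.0, 983.0])
years = np.arange(1850, 2017)
temps = 0.005 * (years - 1850) + 0.1 * np.sin(2 * np.pi * years / 60.0)  # placeholder anomalies

# Design matrix: intercept, linear trend, and a cosine/sine pair for each period.
cols = [np.ones(years.size), years - years.mean()]
for p in periods:
    cols.append(np.cos(2 * np.pi * years / p))
    cols.append(np.sin(2 * np.pi * years / p))
X = np.column_stack(cols)

# Least-squares fit; the fitted coefficients give amplitude and phase of each harmonic.
coeffs, *_ = np.linalg.lstsq(X, temps, rcond=None)
fit = X @ coeffs
print(f"RMS residual of harmonic fit: {np.sqrt(np.mean((temps - fit) ** 2)):.4f} C")
```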

Footnote: Scafetta is not alone.  Dr. Norman Page has a new paper going into detail about forecasting climate by means of  solar-astronomical patterns.

The coming cooling: Usefully accurate climate forecasting for policy makers

Meet Richard Muller, Lukewarmist

Richard Muller, head of the Berkeley Earth project, makes a fair and balanced response to a question regarding the “97% consensus.”  Are any of the US Senators listening?  Full text below from Forbes, 97%: An Inconvenient Truth About The Oft-Cited Polling Of Climate Scientists, including a reference to Will Happer, potentially Trump’s science advisor.

Read it and see that he sounds a lot like Richard Lindzen.

What are some widely cited studies in the news that are false?

Answer by Richard Muller, Professor of Physics at UC Berkeley, on Quora:

That 97% of all climate scientists accept that climate change is real, large, and a threat to the future of humanity. That 97% basically concur with the vast majority of claims made by Vice President Al Gore in his Nobel Peace Prize winning film, An Inconvenient Truth.

The question asked in typical surveys is neither of those. It is this: “Do you believe that humans are affecting climate?” My answer would be yes. Humans are responsible for about a 1 degree Celsius rise in the average temperature in the last 100 years. So I would be included as one of the 97% who believe.

Yet the observed changes that are scientifically established, in my vast survey of the science, are confined to temperature rise and the resulting small (4-inch) rise in sea level. (The huge “sea level rise” seen in Florida is actually subsidence of the land mass, and is not related to global warming.) There is no significant change in the rate of storms, or of violent storms, including hurricanes and volcanoes. The temperature variability is not increasing. There is no scientifically significant increase in floods or droughts. Even the widely reported warming of Alaska (“the canary in the mine”) doesn’t match the pattern of carbon dioxide increase–it may have an explanation in terms of changes in the northern Pacific and Atlantic currents. Moreover, the standard climate models have done a very poor job of predicting the temperature rise in Antarctica, so we must be cautious about the danger of confirmation bias.

My friend Will Happer believes that humans do affect the climate, particularly in cities where concrete and energy use cause what is called the “urban heat island effect.” So he would be included in the 97% who believe that humans affect climate, even though he is usually included among the more intense skeptics of the IPCC. He also feels that humans cause a small amount of global warming (he isn’t convinced it is as large as 1 degree), but he does not think it is heading towards a disaster; he has concluded that the increase in carbon dioxide is good for food production, and has helped mitigate global hunger. Yet he would be included in the 97%.

The problem is not with the survey, which asked a very general question. The problem is that many writers (and scientists!) look at that number and mischaracterize it. The 97% number is typically interpreted to mean that 97% accept the conclusions presented in An Inconvenient Truth by former Vice President Al Gore. That’s certainly not true; even many scientists who are deeply concerned by the small global warming (such as me) reject over 70% of the claims made by Mr. Gore in that movie (as did a judge in the UK; see the following link: Gore climate film’s nine ‘errors‘).

The pollsters aren’t to blame. Well, some of them are; they too can do a good poll and then misrepresent what it means. The real problem is that many people who fear global warming (include me) feel that it is necessary to exaggerate the meaning of the polls in order to get action from the public (don’t include me).

There is another way to misrepresent the results of the polls. Yes, 97% of those polled believe that there is human caused climate change. How did they reach that decision? Was it based on a careful reading of the IPCC report? Was it based on their knowledge of the potential systematic uncertainties inherent in the data? Or was it based on their fear that opponents to action are anti-science, so we scientists have to get together and support each other. There is a real danger in people with Ph.D.s joining a consensus that they haven’t vetted professionally.

I like to ask scientists who “believe” in global warming what they think of the data. Do they believe hurricanes are increasing? Almost never do I get the answer “Yes, I looked at that, and they are.” Of course they don’t say that, because if they did I would show them the actual data! Do they say, “I’ve looked at the temperature record, and I agree that the variability is going up”? No. Sometimes they will say, “There was a paper by Jim Hansen that showed the variability was increasing.” To which I reply, “I’ve written to Jim Hansen about that paper, and he agrees with me that it shows no such thing. He even expressed surprise that his paper has been so misinterpreted.”

A really good question would be: “Have you studied climate change enough that you would put your scientific credentials on the line that most of what is said in An Inconvenient Truth is based on accurate scientific results?” My guess is that a large majority of the climate scientists would answer no to that question, and the true percentage of scientists who support the statement I made in the opening paragraph of this comment, that true percentage would be under 30%. That is an unscientific guesstimate, based on my experience in asking many scientists about the claims of Al Gore.

This question originally appeared on Quora, the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Compare Muller’s statement with a short video by Lindzen.

 

Update: EU Leads in Climate Blame and Shame

Update February 15, 2017

The EU is already loading climate reporting requirements onto pension funds.

On December 8th, 2016 the EU adopted a new regulation regarding Pension Funds, the IORP II Directive — the successor of the Institutions for Occupational Retirement Provision Directive adopted in 2003.

A key feature of the directive is the consideration of environmental, social and governance (ESG) factors as part of pension providers’ investment. In particular, pension providers are now required to carry out their own risk assessment, including climate change-related risks, as well as risks caused by the use of resources and regulatory changes.

IORP II applies to all the 14,358 registered EU pension funds, among which 160 have cross-border activities.

Member States (EU countries) have until January 13, 2019 to transpose IORP II, which was published in early January in the Official Journal of the European Union, into their national law. According to current projections, the implementation deadline should therefore fall before Brexit, an important fact considering that the UK accounts for 50 percent of the EU occupational pension fund sector, followed by the Netherlands (33 percent).

New EU Directive Requires Pension Funds to Assess Climate-related Risks

The Climate Disclosure Standards Board provides an insight into the expanding bureaucracy working to impose climatism on financial and business institutions around the world. Since the Paris COP agreement is not legally binding, the effort is focused on forcing reporting on national commitments and pointing fingers at laggards.

At the microeconomic level, the mission is to load regulatory requirements onto corporations and investors to force them into statements of belief and responsibility for mythical changes in future weather and climate.

The Mission is presented in Making Climate Disclosure the New Norm in Business

In short, the Task Force Recommendations report encourages all financial organizations, ranging from banks, insurance companies, to asset managers and asset owners, and companies with public debt or equity, to disclose in a transparent and consistent way their financial risks and opportunities associated with climate change.

Image: Recommendations of the Task Force on Climate-related Financial Disclosures

The report is the result of one year of work by the Task Force on climate-related financial disclosures, a business and investors-led initiative, launched at the COP21 climate negotiations in Paris, and convened by the Financial Stability Board.

The aim of the initiative is to drive the adoption of the recommendations across the G20 countries, as the final version of the report will be released in July and presented to the G20 leaders gathering in Hamburg. Having the support of the governments of the largest economies in the world would be the ultimate step to make climate disclosure the new norm.

The CDSB Board of Directors (all carrying climate activist resumes)

Pankaj Bhatia Director of GHG Protocol Initiative, World Resources Institute

Henry Derwent Honorary Vice President, International Emissions Trading Association

Dr Rodney Irwin Managing Director, Redefining Value & Education, World Business Council for Sustainable Development

Mindy S. Lubber JD, MBA President, Ceres Director, Investor Network on Climate Risk

David Rosenheim Executive Director, The Climate Registry

Damian Ryan Acting CEO, The Climate Group

Richard Samans (Chairman) Managing Director and Member of the Managing Board, World Economic Forum

Paul Simpson Chief Executive Officer, CDP (formerly Carbon Disclosure Project)

Gordon Wilson Senior Manager PwC, Chairman, Technical Working Group, Climate Disclosure Standards Board

Rough seas ahead for Captains of Industry

 

 

Arctic Ice Seesaw

Mid February is about a month away from the annual maximum Arctic ice extent, and measurements continue to seesaw in the two dynamic places where freezing and drifting cause gains and losses in sea ice. In each region, the gains and losses teeter-totter between two basins.

Here is the Atlantic seesaw with Barents and Baffin.


And here is the Pacific seesaw with Bering and Okhotsk.

While the seesaws are tilting back and forth on the margins, the bulk of the Arctic is frozen solid. And with limited places where more extent can be added, the pace of overall growth has slowed.

Arctic ice extents at day 44, 2017 (graph)
The graph shows that 2017 and 2006 are virtually tied at this date. It shows both years are below average by about 450k km2, and SII adds a further deficit by showing 2017 averaging in February ~400k km2 lower than MASIE.

The table below shows ice extents in the seas comprising the Arctic, comparing day 044 2017 with the same day average over the last 11 years and with 2006.

Region  2017044  Day 044 Average  2017-Ave.  2006044  2017-2006
 (0) Northern_Hemisphere 14287848 14759423 -471575 14318694 -30846
 (1) Beaufort_Sea 1070445 1070111 334 1069711 734
 (2) Chukchi_Sea 966006 965614 392 966006 0
 (3) East_Siberian_Sea 1087137 1087131 6 1087103 35
 (4) Laptev_Sea 897845 897835 10 897773 71
 (5) Kara_Sea 908380 908367 12 932924 -24545
 (6) Barents_Sea 363927 581052 -217125 507771 -143844
 (7) Greenland_Sea 565090 633257 -68167 592221 -27131
 (8) Baffin_Bay_Gulf_of_St._Lawrence 1564353 1451561 112792 1209203 355150
 (9) Canadian_Archipelago 853214 852984 230 852715 499
 (10) Hudson_Bay 1260903 1260476 427 1257433 3470
 (11) Central_Arctic 3209792 3215238 -5446 3178718 31074
 (12) Bering_Sea 564241 759583 -195342 889465 -325224
 (13) Baltic_Sea 59994 105815 -45822 68543 -8549
 (14) Sea_of_Okhotsk 834828 895634 -60806 720201 114628
 (15) Yellow_Sea 17654 31061 -13407 20909 -3255
 (16) Cook_Inlet 9131 12083 -2952 9530 -399

The table indicates some differences in locations of ice surpluses and deficits. Bering Sea has been the largest deficit this year, while Barents is now matching it by losing ~100k in the last week. Greenland Sea is also down slightly compared to average and to 2006. Baffin Bay is the largest surplus to average and to 2006. Okhotsk lost more than 200k km2 in recent days, but still exceeds 2006 by 115k.

The second half of February will be interesting. The average year in the last eleven gained about 200k km2 from now to month end. But the variability ranged from 2006 losing 170k km2 to 2012 gaining 590k km2. What will the ice do this year?

The polar bears have a Valentine’s Day wish for Arctic Ice.


And Arctic Ice loves them back, returning every year so the bears can roam and hunt for seals.

Footnote:

Seesaw accurately describes Arctic ice in another sense:  The ice we see now is not the same ice we saw previously.  It is better to think of the Arctic as an ice blender than as an ice cap, explained in the post The Great Arctic Ice Exchange.

The Green Energy Money Pit

As we know, politicians are throwing money away on mad green energy schemes in Australia, Germany and Canada. In the USA, bad examples are found in the left-leaning coastal states of California and New York.

California Dreaming

From the LA Times: Californians are paying billions for power they don’t need
We’re using less electricity. Some power plants have even shut down. So why do state officials keep approving new ones?

At its 2001 launch, the Sutter Energy Center was hailed as the nation’s cleanest power plant. It generated electricity while using less water and natural gas than older designs.

A year ago, however, the $300-million plant closed indefinitely, just 15 years into an expected 30- to 40-year lifespan. The power it produces is no longer needed — in large part because state regulators approved the construction of a plant just 40 miles away in Colusa that opened in 2010.

Sutter Energy Center has been offline since 2016, after just 15 years of an expected 30- to 40-year lifespan. (David Butow / For The Times)

California has a big — and growing — glut of power, an investigation by the Los Angeles Times has found. The state’s power plants are on track to be able to produce at least 21% more electricity than it needs by 2020, based on official estimates. And that doesn’t even count the soaring production of electricity by rooftop solar panels that has added to the surplus. (my bold)

This translates into a staggering bill. Although California uses 2.6% less electricity annually from the power grid now than in 2008, residential and business customers together pay $6.8 billion more for power than they did then. The added cost to customers will total many billions of dollars over the next two decades, because regulators have approved higher rates for years to come so utilities can recoup the expense of building and maintaining the new plants, transmission lines and related equipment, even if their power isn’t needed. (my bold)

“We overbuilt the system because that was the way we provided that degree of reliability,” explained Michael Picker, president of the California Public Utilities Commission. “Redundancy is important to reliability.”

Some of the excess capacity, he noted, is in preparation for the retirement of older, inefficient power plants over the next several years. The state is building many new plants to try to meet California environmental standards requiring 50% clean energy by 2030, he said. (my bold)

“California has this tradition of astonishingly bad decisions,” said McCullough, the energy consultant. “They build and charge the ratepayers. There’s nothing dishonest about it. There’s nothing complicated. It’s just bad planning.”

Pacific Gas & Electric’s Colusa Generating Station has operated at well below its generating capacity — just 47% in its first five years. (Rich Pedroncelli / AP)

Sutter isn’t alone. Other natural gas plants once heralded as the saviors of California’s energy troubles have found themselves victims of the power glut. Independent power producers have announced plans to sell or close the 14-year-old Moss Landing power plant at Monterey Bay and the 13-year-old La Paloma facility in Kern County.

New York Blowing It in the Wind

From the New York Post: Cash in the Wind: New York’s Wind-power Giveaway

Gov. Cuomo doesn’t like nuclear energy.  Last month, he finalized a deal that will prematurely shutter the Indian Point Energy Center, the twin-reactor facility that supplies about 25 percent of New York City’s electricity.

Cuomo doesn’t like natural gas, either. In 2014, after a years-long moratorium, he banned fracking, the process used to get oil or gas from underground rock formations.

But there’s one thing the governor just loves: wind energy. Indeed, three days after the Indian Point closure was announced, Cuomo’s appointees at the New York State Energy Research and Development Authority provided details on $360 million in subsidies for a handful of renewable-energy projects.

Roughly 80 percent of that money will be doled out to two wind companies: Florida-based NextEra Energy Inc. and Illinois-based Invenergy.

Plus, when the new subsidies are combined with existing federal cash, the amount in subsidies NextEra and Invenergy will be collecting will exceed the prevailing wholesale price of electricity in the state by nearly $13 per megawatt-hour.

Even more remarkable: those same subsidies, on an energy-equivalent basis — comparing the amount of energy we get from different sources — come to four times the current market price of natural gas. (my bold)

The companies will receive the NYSERDA subsidies over a period of 20 years. Given the size of their wind projects, which are about 101 megawatts and 106 megawatts, respectively, the two companies will likely collect about $286 million from the state over the next two decades. And remember, NextEra and Invenergy will collect those subsidies in addition to the cash they get for actually selling their product. (my bold)
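A rough cross-check of the quoted figures is sketched below in Python. The capacity factor is an assumption made only for illustration, while the other inputs come from the article; the result is the state-subsidy piece alone, expressed per megawatt-hour, and it is quite sensitive to the assumed capacity factor.

```python
# Back-of-envelope check of the NYSERDA subsidy figures quoted above.
project_mw      = 101 + 106     # the two wind projects, MW (from the article)
subsidy_total   = 286e6         # estimated state subsidies over 20 years, $ (from the article)
years           = 20
capacity_factor = 0.30          # assumed typical onshore wind output share (illustrative)

mwh_per_year = project_mw * 8760 * capacity_factor
implied_subsidy_per_mwh = subsidy_total / years / mwh_per_year
print(f"Implied state subsidy: ~${implied_subsidy_per_mwh:.0f} per MWh of wind output")
```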

I’ve heard of sweetheart deals, but this one deserves a medal.

Is there no bottom to the green energy money pit?