Ocean Physics in a Cup of Coffee

 

The Great Arctic Cyclone of 2012 from satellite.

Recently I posted Ocean Climate Ripples, summarizing an article by Dr. Arnd Bernaerts on how human activities impact the oceans and thereby the climate. His references to activities in the North and Baltic Seas included this comment:

It works like a spoon stirring hot coffee, attracting cold air from Siberia. In this respect they serve as confined research regions, like a unique field laboratory experiment.

This post presents an article by John S. Wettlaufer, who sees not only oceanic but also cosmic patterns in coffee-cup vortices. His essay is The universe in a cup of coffee.  (Bolded text is my emphasis.)

John Wettlaufer is the A. M. Bateman Professor of Geophysics, Physics, and Applied Mathematics at Yale University in New Haven, Connecticut.


As people throughout the world awake, millions of them every minute perform the apparently banal act of pouring cold milk into hot coffee or tea. Those too groggy to reach for a spoon might notice upwelling billows of milk separated by small, sinking, linear dark features such as shown in panel a of the figure. The phenomenon is such a common part of our lives that even scientists—trained to be observant—may overlook its importance and generality. The pattern bears resemblance to satellite images of ocean color, and the physics behind it is responsible for the granulated structure of the Sun and other cosmic objects less amenable to scrutiny.

(a) Everyone knows that if you wait for a while coffee will get cold. The primary agent doing the cooling is evaporatively driven convection. Pour cold milk into hot coffee and wait. The cold milk mixes very little as it sinks to the bottom of the cup, but eventually cold plumes created by evaporation at the surface sink down and displace the milk. In time, a pattern forms of upwelling (lighter) and downwelling (darker) fluid.

Archimedes pondered the powerful agent of motion known as buoyancy more than two millennia ago. Children do, too, when they imagine the origins of cloud animals on a summer’s day. The scientific study of thermal and compositional buoyancy originated in 1798 with a report by Count Rumford intended to disabuse believers of the caloric theory. Nowadays, buoyancy is at the heart of some of the most challenging problems in nonlinear physics—problems that are increasingly compelling. Answers to fundamental questions being investigated today will have implications for understanding Earth’s heat budget, the transport of atmospheric and oceanographic energy, and, as a corollary, the climate and fate of stars and the origins of planets. Few avenues of study combine such basic challenges with such a broad swath of implications. Nonetheless, the richness of fluid flow is rarely found in undergraduate physics courses. 

Wake up and smell the physics

The modern theory of hydrodynamic stability arose from experiments by Henri Bénard, who heated, from below, a thin horizontal layer of spermaceti, a viscous, fluid wax. For small vertical temperature gradients, Bénard observed nothing remarkable; the fluid conducted heat up through its surface but exhibited no wholesale motion as it did so. However, when the gradient reached a critical value, a hexagonal pattern abruptly appeared as organized convective motions emerged from what had been a homogeneous fluid. The threshold temperature gradient was described by Lord Rayleigh as reflecting the balance between thermal buoyancy and viscous stresses, embodied in a dimensionless parameter now called the Rayleigh number. 

When the momentary thermal buoyancy of a blob of fluid—provided by the hot lower boundary—overcomes the viscous stresses of the surrounding fluid, wholesale organized motion ensues. The strikingly structured fluid, with its up-and-down flow assuming specific geometries, is an iconic manifestation of how a dissipative system can demonstrate symmetry breaking (the up-and-down flow distinguishes horizontal positions even though the lower boundary is at a uniform temperature), self-organization, and beauty. (See the article by Leo Kadanoff in PHYSICS TODAY, August 2001, page 34.)
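Rayleigh's balance between thermal buoyancy and viscous stresses can be made concrete with a back-of-envelope calculation. The sketch below uses illustrative water-like property values (not taken from Bénard's experiment) to compute the Rayleigh number for a thin heated layer and compare it with the classical critical value of about 1708 for rigid boundaries:

```python
# Rough estimate of the Rayleigh number for a thin heated fluid layer.
# Ra = g * alpha * dT * d**3 / (nu * kappa); convection onset occurs near
# Ra_c ~ 1708 for rigid top and bottom boundaries. Values are illustrative.

def rayleigh_number(g, alpha, dT, d, nu, kappa):
    """Ratio of thermal buoyancy to viscous/diffusive damping (dimensionless)."""
    return g * alpha * dT * d**3 / (nu * kappa)

# Water-like fluid, 1 cm deep layer, 1 K temperature difference across it
Ra = rayleigh_number(
    g=9.81,        # gravity, m/s^2
    alpha=2.1e-4,  # thermal expansion coefficient, 1/K
    dT=1.0,        # temperature difference, K
    d=0.01,        # layer depth, m
    nu=1.0e-6,     # kinematic viscosity, m^2/s
    kappa=1.4e-7,  # thermal diffusivity, m^2/s
)

print(f"Ra = {Ra:.0f}")
print("convecting" if Ra > 1708 else "conducting only")
```

Even a modest one-degree gradient across a centimeter of water gives a Rayleigh number of order 10^4, well past threshold, which is why the pattern in a cooling cup forms so readily.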

Astrophysicists and geophysicists can hardly make traction on many of the problems they face unless they come to grips with convection—and their quests are substantially complicated by their systems’ rotations. Despite the 1835 publication of Gaspard-Gustave Coriolis’s Mémoire sur les équations du mouvement relatif des systèmes de corps (On the Equations of Relative Motion of a System of Bodies), debate on the underlying mechanism behind the deflection of the Foucault pendulum raged in the 1905 volume of Annalen der Physik, the same volume in which Albert Einstein introduced the world to special relativity. Maybe the lack of comprehension is not so surprising: Undergraduates still more easily grasp Einstein’s theory than the Coriolis effect, which is essential for understanding why, viewed from above, atmospheric circulation around a low pressure system over a US city is counterclockwise but circulation over an Australian city is clockwise. 

Practitioners of rotating-fluid mechanics generally credit mathematical physicist Vagn Walfrid Ekman for putting things in the modern framework, in another key paper from 1905. Several years earlier, during his famous Fram expedition, explorer Fridtjof Nansen had observed that ice floes moved to the right of the wind that imparted momentum to them. Nansen then suggested to Ekman that he investigate the matter theoretically. That the deflection was due to the ocean’s rotating with Earth was obvious, but Ekman described the corrections that must be implemented in a noninertial reference frame. Since so much in the extraterrestrial realm is spinning, scientists taken by cosmological objects eventually embraced Ekman’s formulation and sought evidence for large-scale vortex structures in the accretion disks around stars. Vortices don’t require convection, and when convection is part of a vortex-producing system, additional and unexpected patterns ensue. 

Cream, sugar, and spinning

The Arctic Ocean freezes, cooling and driving salt into the surface layers. Earth’s inner core solidifies, leaving a buoyant, iron-depleted metal. Rapidly rising air from heated land surfaces creates thunderstorms. Planetary accretion disks receive radiation from their central stars. In all these systems, rotation has a hand in the fate of rising or sinking fluid. What about your steaming cup of coffee: What happens when you spin that?

(b) Several views of a volume of water 11.4 cm deep with a cross section of 22.9 × 22.9 cm. Panel b shows the liquid about 7.5 minutes after the fluid is set in motion at a few tenths of a radian per second. The principal image indicates particle density (light is denser) at a depth of 0.6 cm below the surface. The inset is a thermal image of the surface.

Place the cup in the center of a spinning record player— some readers may even remember listening to music on one of those. The friction from the wall of the cup transmits stresses into the fluid interior. If the coffee is maintained at a fixed temperature for about a minute, every parcel of fluid will move at the same angular velocity; the coffee is said to be spun up.
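That "about a minute" figure is consistent with the classical Ekman spin-up timescale, τ ≈ d/√(νΩ). A rough calculation with assumed coffee-cup values (not measurements from the article's experiment):

```python
import math

# Ekman spin-up timescale: tau ~ d / sqrt(nu * Omega),
# equivalently E**-0.5 / Omega with Ekman number E = nu / (Omega * d**2).
# Illustrative coffee-cup values, assumed for this sketch.

nu = 1.0e-6      # kinematic viscosity of water-like coffee, m^2/s
d = 0.05         # fluid depth, m (a typical mug)
omega = 3.0      # rotation rate, rad/s (a 33 rpm turntable is ~3.5 rad/s)

E = nu / (omega * d**2)           # Ekman number, ~1.3e-4
tau = d / math.sqrt(nu * omega)   # spin-up time, seconds

print(f"Ekman number E = {E:.1e}")
print(f"spin-up time ~ {tau:.0f} s")
```

The estimate comes out near half a minute, so waiting "about a minute" comfortably spins up the whole cup.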

On the time scales of contemporary atmospheric and oceanographic phenomena, Earth’s rotation is indeed a constant, whereas the time variation of the rotation could be important for phenomena in planetary interiors, the evolution of an accretion disk, or tidal perturbations of a distant moon. Thus convective vortices are contemplated relative to a rotating background flow. Perturbations in the rotation rate revive the role of boundary friction and substantially influence the interior circulation. Moreover, evaporation and freezing represent additional perturbations, which alter how the fluid behaves as stresses attempt to enforce uniform rotation. Returning to the coffee mug as laboratory, the model system shown in panel b of the figure reveals how the added complexity of rotation momentarily organizes the pattern seen in panel a into concentric rings of cold and warm fluid.

(c) Panel c shows the breakup of the rings, 11 minutes after the initiation of rotation, due to a shearing instability.

Fundamental competitions play out when you rotate your evaporating coffee. As we have seen, evaporative cooling drives narrow regions of downward convection; significant viscous and Coriolis effects balance each other in those downwelling regions. Rotation then dramatically organizes the sinking cold sheets and rising warm billows into concentric rings that first form at the center of the cup. By about 7.5 minutes after rotation has been initiated, the rings shown in panel b have grown to cover most of the horizontal plane. Their uniform azimuthal motion exists for about 3.5 minutes, at which time so-called Kelvin–Helmholtz billows associated with the shearing between the rings appear at their boundaries, grow, and roll up into vortices; see panel c. Three minutes later, as shown in panel d, those vortices lose their azimuthal symmetry and assemble into a regular vortex grid whose centers contain sinking fluid.

(d) As panel d shows, at 14 minutes the breakup leads to a grid of vortices. (Adapted from J.-Q. Zhong, M. D. Patterson, J. S. Wettlaufer, Phys. Rev. Lett. 105, 044504, 2010.)

Panel d shows one type of coherent structure that forms in rotating fluids and other mathematically analogous systems if the persistence time of the structure—vortices here— is much longer than the rotational period. Other well-known examples are Jupiter’s Great Red Spot, which is an enduring feature of the chaotic Jovian atmosphere, and the meandering jet streams on Earth.

Moreover, persistent vortices in superconductors and superfluids organize themselves. Indeed, it appears that vortices in superconductors are as mobile as their counterparts in inviscid fluids. And although scientists have long studied rotating convective superfluids, the classical systems considered in this Quick Study suggest that we may yet find surprising analogies in superconductors. Will we one day see superconducting jet streams?

If you are reading this article with a cup of coffee, put it down and take a closer look at what is going on in your cup.

Summary

Wettlaufer has been an advocate for getting the physics right in climate models.  His cup-of-coffee analogy is actually a demonstration of mesoscale fluid and rotational dynamics, and of perturbations that still defy human attempts to simulate the climate system.

 

The Limitations of Climate Science

Here is a fine exposition of Bob Carter’s thoughts on the field of climate science and why we should not jump to conclusions concerning global warming/climate change.  The text and some illustrations are provided by Russ Swan in his post (here).  I added one illustration at the end.

Have you ever wondered about these people when they are so definite about mankind causing climate change? Have you ever wondered how much of the information is from their own expertise and how much is what they’ve learned from someone else? Are they really passing on real proven scientific facts or just what they believe to be true from information provided by someone else?

Or do you just accept what they are telling you?

The average person on the street might be forgiven for thinking that climate change scientists are primarily meteorologists or climatologists plus perhaps some others with supporting expertise.  But that would be only partially right.

The subjects relating to climate change actually diverge into more than 100 scientific sub-disciplines, the elements of which can be exceptionally intricate, highly complicated and intertwined.  Changing just one of the many data inputs to a climate change puzzle (e.g. the output chemistry of sub-sea volcanoes) can flow on to incorrect, or at least misleading, changes in the final solution. And the answer will still be a “best probable” result – not fact.

At most there may be a handful of scientists who have mastery of two or three scientific disciplines, such as Professor Robert M. Carter (deceased), who was a qualified palaeontologist, stratigrapher and marine geologist.  Yet even if a scientist does have expertise in two or more of the climate change elements, he/she still needs to find and use data from other sources to cover the gaps in his/her own knowledge. Such data may in turn only be a “best probable” solution as opposed to fact(s), as will be explained further below.

[Image: climate components]

No Such Thing as a Climate Expert

It must therefore be obvious that there can be no such thing as a climate “expert”, simply because no one can fully comprehend it all.

This doesn’t stop the media, in particular the TV media, from regularly presenting interviewees as experts to lend credibility to their shows. But anyone who claims or admits to being an expert in climate change is either kidding themselves, being egocentric, or being deceitful.

The bottom line is that when a supposed expert fronts up in the media – watch it guardedly or else switch the channel.   At the end of the day everyone, including the scientists themselves, is basically an amateur when a topic is outside his or her own field of expertise – even if an educated amateur.

But having someone with at least some scientific background involved in climate change discussions has got to be far preferable to pulling celebrities into the debate. These people, despite their best intentions, simply promote their own views and muddy the waters, making it harder for the public to reach a realistic conclusion in their own minds.

[Image: DiCaprio cartoon]

Conclusion

Apart from that, all three groups of scientists generally DO agree that the Earth’s climate has always changed, that human emissions affect local climates (e.g. urban areas) and have a summed potential to affect climate globally, and that carbon dioxide is a mild greenhouse gas – note the word “mild”.

The real argument then is not about whether the Earth is heating up, but about how relevant AGW is when considered against natural climate change processes.

The Blind Men and the Elephant (Indian Fable)

[Image: The Blind Men and the Elephant]

Footnote:  For more on science as knowledge rather than opinion see Yellow Climate Journalism

 

Reservoirs and Methane: Facts and Fears

 

A previous post explained how methane has been hyped in support of climate alarmism/activism. Now we have an additional campaign to disparage hydropower because of methane emissions from dam reservoirs. File this under “They have no shame.”

Here’s a recent example of the claim from Asia Times Global hydropower boom will add to climate change

The study, published in BioScience, looked at the carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) emitted from 267 reservoirs across six continents. In total, the reservoirs studied have a surface area of more than 77,287 square kilometers (29,841 square miles). That’s equivalent to about a quarter of the surface area of all reservoirs in the world, which together cover 305,723 sq km – roughly the combined size of the United Kingdom and Ireland.

“The new study confirms that reservoirs are major emitters of methane, a particularly aggressive greenhouse gas,” said Kate Horner, Executive Director of International Rivers, adding that hydropower dams “can no longer be considered a clean and green source of electricity.”

In fact, methane’s effect is 86 times greater than that of CO2 when considered on this two-decade timescale. Importantly, the study found that methane is responsible for 90% of the global warming impact of reservoir emissions over 20 years.

Alarmists are Wrong about Hydropower

Now CH4 is proclaimed the primary culprit in the case against hydropower. As usual, there is a kernel of truth buried beneath this obsessive campaign: Flooding of biomass does result in decomposition accompanied by some release of CH4 and CO2. From HydroQuebec:  Greenhouse gas emissions and reservoirs

Impoundment of hydroelectric reservoirs induces decomposition of a small fraction of the flooded biomass (forests, peatlands and other soil types) and an increase in the aquatic wildlife and vegetation in the reservoir.

The result is higher greenhouse gas (GHG) emissions after impoundment, mainly CO2 (carbon dioxide) and a small amount of CH4 (methane).

However, these emissions are temporary and peak two to four years after the reservoir is filled.

During the ensuing decade, CO2 emissions gradually diminish and return to the levels given off by neighboring lakes and rivers.

Hydropower generation, on average, emits 50 times less GHGs than a natural gas generating station and about 70 times less than a coal-fired generating station.

The Facts about Tropical Reservoirs

Activists claim that methane emissions from dams and reservoirs across the planet, including hydropower reservoirs, are significantly larger than previously thought – approximately 1 gigaton per year.

Activists also claim that dams in boreal regions like Quebec are not the problem, but tropical reservoirs are a big threat to the climate. Contradicting that is an intensive study of Brazilian dams and reservoirs, Greenhouse Gas Emissions from Reservoirs: Studying the Issue in Brazil

The Itaipu Dam is a hydroelectric dam on the Paraná River located on the border between Brazil and Paraguay. The name “Itaipu” was taken from an isle that existed near the construction site. In the Guarani language, Itaipu means “the sound of a stone”. The American composer Philip Glass has also written a symphonic cantata named Itaipu, in honour of the structure.

Five Conclusions from Studying Brazilian Reservoirs

1) The budget approach is essential for a proper grasp of the processes going on in reservoirs. This approach involves taking into account the ways in which the system exchanged GHGs with the atmosphere before the reservoir was flooded. Older studies measured only the emissions of GHG from the reservoir surface or, more recently, from downstream de-gassing. But without the measurement of the inputs of carbon to the system, no conclusions can be drawn from surface measurements alone.

2) When you consider the total budgets, most reservoirs acted as sinks of carbon in the short run (our measurements covered one year in each reservoir). In other words, they received more carbon than they exported to the atmosphere and to downstream.

3) Smaller reservoirs are more efficient as carbon traps than the larger ones.

4) As for the GHG impact, in order to determine it, we should add the methane (CH4) emissions to the fraction of carbon dioxide (CO2) emissions which comes from the flooded biomass and organic carbon in the flooded (terrestrial) soil. The other CO2 emissions, arising from the respiration of aquatic organisms or from the decomposition of terrestrial detritus that flows into the reservoir (including domestic sewage), are not impacts of the reservoir. From this sum, we should deduct the amount of carbon that is stored in the sediment and which will be kept there for at least the life of the reservoir (usually more than 80 years). This “stored carbon” ranges from as little as 2 percent of the total carbon output to more than 25 percent, depending on the reservoirs.

5) When we assess the GHG impacts following the guidelines just described, all of FURNAS’s reservoirs have lower emissions than the cleanest European oil plant. The worst case – Manso, which was sampled only three years after the impoundment, and therefore in a time in which the contribution from the flooded biomass was still very significant – emitted about half as much carbon dioxide equivalents (CO2 eq) as the average oil plant from the United States (CO2 eq is a metric measure used to compare the emissions from various greenhouse gases based upon their global warming potential, GWP. CO2 eq for a gas is derived by multiplying the tons of the gas by the associated GWP.) We also observed a very good correlation between GHG emissions and the age of the reservoirs. The reservoirs older than 30 years had negligible emissions, and some of them had a net absorption of CO2eq.
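The accounting described in point 4 can be sketched as a simple budget in CO2-equivalents. The function and its input figures below are hypothetical illustrations of the method described, not numbers from the Brazilian study:

```python
# Sketch of the budget accounting in point 4, in CO2-equivalent tonnes/yr.
# All figures below are hypothetical; the study reports measured fluxes.

GWP_CH4 = 28  # 100-year GWP for methane (AR5 central value, no feedbacks)

def reservoir_ghg_impact(ch4_emitted_t, co2_from_flooded_t, co2_other_t,
                         carbon_stored_t):
    """Net GHG impact of a reservoir, t CO2-eq/yr.

    ch4_emitted_t      : CH4 flux to the atmosphere, t/yr
    co2_from_flooded_t : CO2 from flooded biomass and soil carbon, t/yr
    co2_other_t        : CO2 from respiration and inflowing detritus --
                         deliberately excluded, since point 4 says it is
                         not an impact of the reservoir itself
    carbon_stored_t    : CO2-eq buried in sediment for the reservoir's life
    """
    return ch4_emitted_t * GWP_CH4 + co2_from_flooded_t - carbon_stored_t

# Hypothetical reservoir: 100 t CH4, 5000 t flooded-source CO2,
# 20000 t other CO2 (excluded), 1000 t CO2-eq stored in sediment
net = reservoir_ghg_impact(100, 5000, 20000, 1000)
print(f"net impact: {net} t CO2-eq/yr")
```

The key design point is what the function leaves out: respiration-driven CO2 is not charged to the reservoir, and sediment burial is credited against it, exactly the two adjustments the study's authors insist on.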

Keeping Methane in Perspective

Over the last 30 years, CH4 in the atmosphere increased from 1.6 ppm to 1.8 ppm, while CO2 presently stands at 400 ppm. So all the dam building over three decades, along with all other land use, was part of a minuscule increase in a gas more than 200 times less abundant than the trace gas CO2.
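A quick check of the arithmetic behind that comparison, using the figures as quoted:

```python
# Figures as quoted in the text above
ch4_1980, ch4_2010 = 1.6, 1.8   # ppm
co2 = 400.0                      # ppm

rise = ch4_2010 - ch4_1980       # CH4 increase over 30 years, ppm
ratio = co2 / ch4_2010           # how many times more abundant CO2 is

print(f"CH4 rise: {rise:.1f} ppm in 30 years")
print(f"CO2/CH4 abundance ratio: {ratio:.0f}")
```

The ratio works out to roughly 220, consistent with the "more than 200 times" characterization.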

 

Background Facts on Methane and Climate Change

The US Senate is considering an act to repeal with prejudice an Obama anti-methane regulation. The story from activist source Climate Central is
Senate Mulls ‘Kill Switch’ for Obama Methane Rule

The U.S. Senate is expected to vote soon on whether to use the Congressional Review Act to kill an Obama administration climate regulation that cuts methane emissions from oil and gas wells on federal land. The rule was designed to reduce oil and gas wells’ contribution to climate change and to stop energy companies from wasting natural gas.

The Congressional Review Act is rarely invoked. It was used this month to reverse a regulation for the first time in 16 years and it’s a particularly lethal way to kill a regulation as it would take an act of Congress to approve a similar regulation. Federal agencies cannot propose similar regulations on their own.

The Claim Against Methane

Now some Republican senators are hesitant to take this step because of claims like this one in the article:

Methane is 86 times more potent as a greenhouse gas than carbon dioxide over a period of 20 years and is a significant contributor to climate change. It warms the climate much more than other greenhouse gases over a period of decades before eventually losing its potency. Atmospheric carbon dioxide remains a potent greenhouse gas for thousands of years.

Essentially the journalist is saying: however afraid you are of CO2, you should be 86 times more afraid of methane. Which also means that if CO2 is not a warming problem, your fear of methane is 86 times zero. The thousands-of-years claim is also bogus, but that is beside the point of this post, which is methane.

IPCC Methane Scare

The article helpfully provides a link referring to Chapter 8 of IPCC AR5 report by Working Group 1 Anthropogenic and Natural Radiative Forcing.

The document is full of sophistry and creative accounting in order to produce as scary a number as possible. Table 8.7 provides the number for CH4 potency of 86 times that of CO2.  They note they were able to increase the Global Warming Potential (GWP) of CH4 by 20% over the estimate in AR4. The increase comes from adding in more indirect effects and feedbacks, as well as from increased concentration in the atmosphere.
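To see what the 86× figure and its stated ±30% uncertainty imply in practice, here is a CO2-equivalent conversion for a hypothetical emission (the emission size is invented for illustration; the GWP and uncertainty are the AR5 values discussed above):

```python
# CO2-equivalent of a methane emission under AR5's 20-year GWP of 86,
# with the +/-30% (5-95%) uncertainty AR5 attaches to that number.
# The 1000 t emission is hypothetical.

gwp20 = 86       # AR5 20-year GWP for CH4 (Table 8.7, with feedbacks)
unc20 = 0.30     # AR5's stated uncertainty for the 20-year horizon

ch4_tonnes = 1000.0
central = ch4_tonnes * gwp20
low, high = central * (1 - unc20), central * (1 + unc20)

print(f"1000 t CH4 over 20 yr: {central:.0f} t CO2-eq "
      f"(range {low:.0f}-{high:.0f})")
```

Note how wide the 5–95% band is: the same emission could plausibly be scored anywhere from about 60,000 to 112,000 t CO2-eq, which is the uncertainty the headline "86 times" number quietly carries.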

In the details are some qualifying notes like these:

Uncertainties related to the climate–carbon feedback are large, comparable in magnitude to the strength of the feedback for a single gas.

For CH4 GWP we estimate an uncertainty of ±30% and ±40% for 20- and 100-year time horizons, respectively (for 5 to 95% uncertainty range).

Methane Facts from the Real World
From Sea Friends (here):

Methane is natural gas CH4 which burns cleanly to carbon dioxide and water. Methane is eagerly sought after as fuel for electric power plants because of its ease of transport and because it produces the least carbon dioxide for the most power. Also cars can be powered with compressed natural gas (CNG) for short distances.

In many countries CNG has been widely distributed as the main home heating fuel. As a consequence, methane has leaked to the atmosphere in large quantities, now firmly controlled. Grazing animals also produce methane in their complicated stomachs and methane escapes from rice paddies and peat bogs like the Siberian permafrost.

It is thought that methane is a very potent greenhouse gas because it absorbs some infrared wavelengths 7 times more effectively than CO2, molecule for molecule, and by weight even 20 times. As we have seen previously, this also means that within a distance of metres, its effect has saturated, and further transmission of heat occurs by convection and conduction rather than by radiation.

Note that when H2O is present in the lower troposphere, there are few photons left for CH4 to absorb:

Even if the IPCC radiative greenhouse theory were true, methane occurs only in minute quantities in air, 1.8ppm versus CO2 of 390ppm. By weight, CH4 is only 5.24Gt versus CO2 3140Gt (on this assumption). If it truly were twenty times more potent, it would amount to an equivalent of 105Gt CO2 or one thirtieth that of CO2. A doubling in methane would thus have no noticeable effect on world temperature.
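The by-weight arithmetic quoted above can be verified directly (using the figures and the factor of 20 exactly as given in that paragraph):

```python
# Checking the quoted by-weight comparison of atmospheric CH4 and CO2.
# All inputs are the figures quoted in the Sea Friends excerpt above.

ch4_gt = 5.24     # Gt of CH4 in the atmosphere
co2_gt = 3140.0   # Gt of CO2 in the atmosphere
factor = 20       # by-weight potency factor asserted in the excerpt

ch4_co2e = ch4_gt * factor          # CO2-equivalent of all atmospheric CH4
fraction = ch4_co2e / co2_gt        # share of the CO2 burden

print(f"CH4 as CO2-eq: {ch4_co2e:.1f} Gt")
print(f"fraction of CO2 burden: 1/{1 / fraction:.0f}")
```

The product is 104.8 Gt, matching the "equivalent of 105 Gt CO2" in the excerpt, and the fraction comes out at almost exactly one thirtieth.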

However, the factor of 20 is entirely misleading because absorption is proportional to the number of molecules (=volume), so the factor of 7 (7.3) is correct and 20 is wrong. With this in mind, the perceived threat from methane becomes even less.

Further still, methane has been rising from 1.6ppm to 1.8ppm in 30 years (1980-2010), assuming that it has not stopped rising, this amounts to a doubling in 2-3 centuries. In other words, methane can never have any measurable effect on temperature, even if the IPCC radiative cooling theory were right.
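The doubling-time claim follows from a simple linear extrapolation of the quoted figures:

```python
# Linear extrapolation of the quoted CH4 trend: 1.6 -> 1.8 ppm over
# 30 years (1980-2010), assuming the same rate continues indefinitely.

ppm_1980, ppm_2010 = 1.6, 1.8
rate = (ppm_2010 - ppm_1980) / 30.0      # ppm per year

# Years to add another 1.8 ppm (i.e. double the 2010 level) at that rate
years_to_double = ppm_2010 / rate

print(f"linear rate: {rate:.4f} ppm/yr")
print(f"doubling takes ~{years_to_double:.0f} years")
```

At the quoted rate a doubling takes about 270 years, which is the "2–3 centuries" in the excerpt; the estimate is of course only as good as the assumption that the trend stays linear.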

Because only a small fraction in the rise of methane in air can be attributed to farm animals, it is ludicrous to worry about this aspect or to try to farm with smaller emissions of methane, or to tax it or to trade credits.

The fact that methane in air has been leveling off in the past two decades, even though we do not know why, implies that it plays absolutely no role as a greenhouse gas.

More information at THE METHANE MISCONCEPTIONS by Dr Wilson Flood (UK) here

Summary:

Natural Gas (75% methane) burns the cleanest with the least CO2 for the energy produced.

Leakage of methane is already addressed by efficiency improvements for its economic recovery, and will apparently be subject to even more regulations.

The atmosphere is a methane sink where the compound is oxidized through a series of reactions producing one CO2 and two H2O molecules after a few years.

GWP (Global Warming Potential) is CO2 equivalent heat trapping based on laboratory, not real world effects.

Any IR absorption by methane is limited by H2O absorbing in the same low energy LW bands.

There is no danger this century from natural or man-made methane emissions.

Conclusion

Senators and the public are being bamboozled by opaque scientific bafflegab. The plain truth is much different. The atmosphere is a methane sink in which CH4 is oxidized in the first few meters. The amount of CH4 available in the air is minuscule, even compared to the trace gas CO2, and it is not accelerating. Methane is the obvious choice for signaling virtue on the climate issue, since governmental actions will not make a bit of difference anyway, except perhaps to do some economic harm.

Give a daisy a break (h/t Derek here)

[Image: Daisy methane cartoon]

Footnote:

For a more thorough and realistic description of atmospheric warming see:

Fearless Physics from Dr. Salby

More Methane Madness

The US Senate is considering an act to repeal with prejudice an Obama anti-methane regulation. The story from activist source Climate Central is
Senate Mulls ‘Kill Switch’ for Obama Methane Rule

The U.S. Senate is expected to vote soon on whether to use the Congressional Review Act to kill an Obama administration climate regulation that cuts methane emissions from oil and gas wells on federal land. The rule was designed to reduce oil and gas wells’ contribution to climate change and to stop energy companies from wasting natural gas.

The Congressional Review Act is rarely invoked. It was used this month to reverse a regulation for the first time in 16 years and it’s a particularly lethal way to kill a regulation as it would take an act of Congress to approve a similar regulation. Federal agencies cannot propose similar regulations on their own.

The Claim Against Methane

Now some Republican senators are hesitant to take this step because of claims like this one in the article:

Methane is 86 times more potent as a greenhouse gas than carbon dioxide over a period of 20 years and is a significant contributor to climate change. It warms the climate much more than other greenhouse gases over a period of decades before eventually losing its potency. Atmospheric carbon dioxide remains a potent greenhouse gas for thousands of years.

Essentially the journalist is saying: As afraid as you are about CO2, you should be 86 times more afraid of methane. Which also means, if CO2 is not a warming problem, your fear of methane is 86 times zero. The thousands of years claim is also bogus, but that is beside the point of this post, which is Methane.

IPCC Methane Scare

The article helpfully provides a link referring to Chapter 8 of IPCC AR5 report by Working Group 1 Anthropogenic and Natural Radiative Forcing.

The document is full of sophistry and creative accounting in order to produce as scary a number as possible. Table 8.7 provides the number for CH4 potency of 86 times that of CO2.  They note they were able to increase the Global Warming Potential (GWP) of CH4 by 20% over the estimate in AR4. The increase comes from adding in more indirect effects and feedbacks, as well as from increased concentration in the atmosphere.

In the details are some qualifying notes like these:

Uncertainties related to the climate–carbon feedback are large, comparable in magnitude to the strength of the feedback for a single gas.

For CH4 GWP we estimate an uncertainty of ±30% and ±40% for 20- and 100-year time horizons, respectively (for 5 to 95% uncertainty range).

Methane Facts from the Real World
From Sea Friends (here):

Methane is natural gas CH4 which burns cleanly to carbon dioxide and water. Methane is eagerly sought after as fuel for electric power plants because of its ease of transport and because it produces the least carbon dioxide for the most power. Also cars can be powered with compressed natural gas (CNG) for short distances.

In many countries CNG has been widely distributed as the main home heating fuel. As a consequence, methane has leaked to the atmosphere in large quantities, now firmly controlled. Grazing animals also produce methane in their complicated stomachs and methane escapes from rice paddies and peat bogs like the Siberian permafrost.

It is thought that methane is a very potent greenhouse gas because it absorbs some infrared wavelengths 7 times more effectively than CO2, molecule for molecule, and by weight even 20 times. As we have seen previously, this also means that within a distance of metres, its effect has saturated, and further transmission of heat occurs by convection and conduction rather than by radiation.

Note that when H20 is present in the lower troposphere, there are few photons left for CH4 to absorb:

Even if the IPCC radiative greenhouse theory were true, methane occurs only in minute quantities in air, 1.8ppm versus CO2 of 390ppm. By weight, CH4 is only 5.24Gt versus CO2 3140Gt (on this assumption). If it truly were twenty times more potent, it would amount to an equivalent of 105Gt CO2 or one thirtieth that of CO2. A doubling in methane would thus have no noticeable effect on world temperature.

However, the factor of 20 is entirely misleading because absorption is proportional to the number of molecules (=volume), so the factor of 7 (7.3) is correct and 20 is wrong. With this in mind, the perceived threat from methane becomes even less.

Further still, methane has been rising from 1.6ppm to 1.8ppm in 30 years (1980-2010), assuming that it has not stopped rising, this amounts to a doubling in 2-3 centuries. In other words, methane can never have any measurable effect on temperature, even if the IPCC radiative cooling theory were right.

Because only a small fraction in the rise of methane in air can be attributed to farm animals, it is ludicrous to worry about this aspect or to try to farm with smaller emissions of methane, or to tax it or to trade credits.

The fact that methane in air has been leveling off in the past two decades, even though we do not know why, implies that it plays absolutely no role as a greenhouse gas.

More information at THE METHANE MISCONCEPTIONS by Dr Wilson Flood (UK) here

Summary:

Natural gas (75% methane) burns the cleanest, producing the least CO2 for the energy produced.

Leakage of methane is already being addressed through efficiency improvements driven by its economic recovery, and will apparently be subject to even more regulation.

The atmosphere is a methane sink in which the compound is oxidized through a series of reactions, yielding one CO2 and two H2O molecules after a few years.

GWP (Global Warming Potential) is a CO2-equivalent measure of heat trapping based on laboratory measurements, not real-world effects.

Any IR absorption by methane is limited by H2O, which absorbs in the same low-energy longwave bands.

There is no danger this century from natural or man-made methane emissions.

Conclusion

Senators and the public are being bamboozled by opaque scientific bafflegab. The plain truth is much different. The atmosphere is a methane sink in which CH4 is oxidized away within a few years, and its absorption is saturated within the first few meters. The amount of CH4 in the air is minuscule, even compared to the trace gas CO2, and it is not accelerating. Methane is the obvious choice for signaling virtue on the climate issue, since governmental actions will not make a bit of difference anyway, except perhaps to do some economic harm.

Give a daisy a break (h/t Derek here)

Daisy methane

Footnote:

For a more thorough and realistic description of atmospheric warming see:

Fearless Physics from Dr. Salby

Ocean Oxygen Misdirection


The climate scare machine is again promoting the fear of suffocating oceans. For example, see an article this week by Chris Mooney in the Washington Post, It's Official, the Oceans Are Losing Oxygen.

A large research synthesis, published in one of the world’s most influential scientific journals, has detected a decline in the amount of dissolved oxygen in oceans around the world — a long-predicted result of climate change that could have severe consequences for marine organisms if it continues.

The paper, published Wednesday in the journal Nature by oceanographer Sunke Schmidtko and two colleagues from the GEOMAR Helmholtz Centre for Ocean Research in Kiel, Germany, found a decline of more than 2 percent in ocean oxygen content worldwide between 1960 and 2010.

Climate change models predict the oceans will lose oxygen because of several factors. Most obvious is simply that warmer water holds less dissolved gases, including oxygen. “It’s the same reason we keep our sparkling drinks pretty cold,” Schmidtko said.

But another factor is the growing stratification of ocean waters. Oxygen enters the ocean at its surface, from the atmosphere and from the photosynthetic activity of marine microorganisms. But as that upper layer warms up, the oxygen-rich waters are less likely to mix down into cooler layers of the ocean because the warm waters are less dense and do not sink as readily.

And of course, other journalists pile on with ever more catchy headlines.

The World’s Oceans Are Losing Oxygen Due to Climate Change

How Climate Change Is Suffocating The Oceans

Overview of Oceanic Oxygen

Once again climate alarmists/activists have seized upon an actual environmental issue, but they misdirect the public toward their CO2 obsession and away from practical efforts to address a real concern. Some excerpts from scientific studies serve to put things in perspective.

How the Ocean Breathes

Variability in oxygen and nutrients in South Pacific Antarctic Intermediate Water by J. L. Russell and A. G. Dickson

The Southern Ocean acts as the lungs of the ocean; drawing in oxygen and exchanging carbon dioxide. A quantitative understanding of the processes regulating the ventilation of the Southern Ocean today is vital to assessments of the geochemical significance of potential circulation reorganizations in the Southern Hemisphere, both during glacial-interglacial transitions and into the future.

Traditionally, the change in the concentration of oxygen along an isopycnal due to remineralization of organic material, known as the apparent oxygen utilization (AOU), has been used by physical oceanographers as a proxy for the time elapsed since the water mass was last exposed to the atmosphere. The concept of AOU requires that newly subducted water be saturated with respect to oxygen and is calculated from the difference between the measured oxygen concentration and the saturated concentration at the sample temperature.

This study has shown that the ratio of oxygen to nutrients can vary with time. Since Antarctic Intermediate Water provides a necessary component to the Pacific equatorial biological regime, this relatively high-nutrient, high-oxygen input to the Equatorial Undercurrent in the Western Pacific plays an important role in driving high rates of primary productivity on the equator, while limiting the extent of denitrifying bacteria in the eastern portion of the basin. 

Uncertain Measures of O2 Variability and Linkage to Climate Change

A conceptual model for the temporal spectrum of oceanic oxygen variability by Taka Ito and Curtis Deutsch

Changes in dissolved O2 observed across the world oceans in recent decades have been interpreted as a response of marine biogeochemistry to climate change. Little is known however about the spectrum of oceanic O2 variability. Using an idealized model, we illustrate how fluctuations in ocean circulation and biological respiration lead to low-frequency variability of thermocline oxygen.

Because the ventilation of the thermocline naturally integrates the effects of anomalous respiration and advection over decadal timescales, shortlived O2 perturbations are strongly damped, producing a red spectrum, even in a randomly varying oceanic environment. This background red spectrum of O2 suggests a new interpretation of the ubiquitous strength of decadal oxygen variability and provides a null hypothesis for the detection of climate change influence on oceanic oxygen. We find a statistically significant spectral peak at a 15–20 year timescale in the subpolar North Pacific, but the mechanisms connecting to climate variability remain uncertain.

The spectral power of oxygen variability increases from inter-annual to decadal frequencies, which can be explained using a simple conceptual model of an ocean thermocline exposed to random climate fluctuations. The theory predicts that the bias toward low-frequency variability is expected to level off as the forcing timescales become comparable to that of ocean ventilation. On time scales exceeding that of thermocline renewal, O2 variance may actually decrease due to the coupling between physical O2 supply and biological respiration [Deutsch et al., 2006], since the latter is typically limited by the physical nutrient supply.
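The idea quoted above, that integrating random forcing over decadal timescales yields a red spectrum, can be illustrated with a toy damped-integrator (AR(1)) model; the damping timescale and white-noise forcing here are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 100_000, 20.0        # years of record; damping timescale (assumed)
x = np.zeros(n)
for t in range(1, n):
    # O2 anomaly: damped integration of random respiration/advection forcing
    x[t] = (1 - 1 / tau) * x[t - 1] + rng.standard_normal()

spec = np.abs(np.fft.rfft(x)) ** 2          # periodogram
freq = np.fft.rfftfreq(n)                   # cycles per year
low = spec[(freq > 0) & (freq < 0.02)].mean()   # multidecadal band
high = spec[freq > 0.2].mean()                  # sub-5-year band
print(low > high)   # True: power concentrates at low frequencies
```

Even though the forcing is white, the integrated response carries far more variance at low frequencies, which is the "background red spectrum" the authors describe.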

Climate Model Projections are Confounded by Natural Variability

Natural variability and anthropogenic trends in oceanic oxygen in a coupled carbon cycle–climate model ensemble by T. L. Frolicher et al.

Internal and externally forced variability in oceanic oxygen (O2) are investigated on different spatiotemporal scales using a six-member ensemble from the National Center for Atmospheric Research CSM1.4-carbon coupled climate model. The oceanic O2 inventory is projected to decrease significantly in global warming simulations of the 20th and 21st centuries.

The anthropogenically forced O2 decrease is partly compensated by volcanic eruptions, which cause considerable interannual to decadal variability. Volcanic perturbations in oceanic oxygen concentrations gradually penetrate the ocean’s top 500 m and persist for several years. While well identified on global scales, the detection and attribution of local O2 changes to volcanic forcing is difficult because of unforced variability.

Internal climate modes can substantially contribute to surface and subsurface O2 variability. Variability in the North Atlantic and North Pacific are associated with changes in the North Atlantic Oscillation and Pacific Decadal Oscillation indexes. Simulated decadal variability compares well with observed O2 changes in the North Atlantic, suggesting that the model captures key mechanisms of late 20th century O2 variability, but the model appears to underestimate variability in the North Pacific.

Our results suggest that large interannual to decadal variations and limited data availability make the detection of human-induced O2 changes currently challenging.

The concentration of dissolved oxygen in the thermocline and the deep ocean is a particularly sensitive indicator of change in ocean transport and biology [Joos et al., 2003]. Less than a percent of the combined atmosphere and ocean O2 inventory is found in the ocean. The O2 concentration in the ocean interior reflects the balance between O2 supply from the surface through physical transport and O2 consumption by respiration of organic material.

Our modeling study suggests that over recent decades internal natural variability tends to mask simulated century-scale trends in dissolved oxygen from anthropogenic forcing in the North Atlantic and Pacific. Observed changes in oxygen are similar or even smaller in magnitude than the spread of the ensemble simulation. The observed decreasing trend in dissolved oxygen in the Indian Ocean thermocline and the boundary region between the subtropical and subpolar gyres in the North Pacific has reversed in recent years [McDonagh et al., 2005; Mecking et al., 2008], implicitly supporting this conclusion.

The presence of large-scale propagating O2 anomalies, linked with major climate modes, complicates the detection of long-term trends in oceanic O2 associated with anthropogenic climate change. In particular, we find a statistically significant link between O2 and the dominant climate modes (NAO and PDO) in the North Atlantic and North Pacific surface and subsurface waters, which are causing more than 50% of the total internal variability of O2 in these regions.

To date, the ability to detect and interpret observed changes is still limited by lack of data. Additional biogeo-chemical data from time series and profiling floats, such as the Argo array (http://www.argo.ucsd.edu) are needed to improve the detection of ocean oxygen and carbon system changes and our understanding of climate change.

The Real Issue is Ocean Dead Zones, Both Natural and Man-made

Since 1994, Diaz and the World Resources Institute (report here) in Washington, D.C., have identified and mapped 479 dead zones around the world. That's more than nine times as many as scientists knew about 50 years ago.

What triggers the loss of oxygen in ocean water is the explosive growth of sea life fueled by the release of too many nutrients. As they grow, these crowds can simply use up too much of the available oxygen.

Many nutrients entering the water — such as nitrogen and phosphorus — come from meeting the daily needs of some seven billion people around the world, Diaz says. Crop fertilizers, manure, sewage and exhaust spewed by cars and power plants all end up in waterways that flow into the ocean. Each can contribute to the creation of dead zones.

Ordinarily, when bacteria steal oxygen from one patch of water, more will arrive as waves and ocean currents bring new water in. Waves also can grab oxygen from the atmosphere.

Dead zones develop when this ocean mixing stops.

Rivers running into the sea dump freshwater into the salty ocean. The sun heats up the freshwater on the sea surface. This water is lighter than cold saltier water, so it floats atop it. When there are not enough storms (including hurricanes) and strong ocean currents to churn the water, the cold water can get trapped below the fresh water for long periods.

Dead zones are seasonal events. They typically last for weeks or months. Then they’ll disappear as the weather changes and ocean mixing resumes.
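The layering argument above can be sketched with a toy linearized equation of state for seawater; the coefficients below are rough illustrative values I have chosen for the sketch, not a real seawater equation of state:

```python
# Toy linear equation of state: rho = rho0*(1 - alpha*(T-T0) + beta*(S-S0)).
# Coefficients are rough illustrative values, not a real seawater EOS.
RHO0, T0, S0 = 1027.0, 10.0, 35.0   # kg/m^3, deg C, psu (reference state)
ALPHA, BETA = 2.0e-4, 8.0e-4        # thermal expansion, haline contraction

def rho(temp_c, sal_psu):
    """Density under the linearized equation of state."""
    return RHO0 * (1.0 - ALPHA * (temp_c - T0) + BETA * (sal_psu - S0))

warm_fresh = rho(25.0, 5.0)     # sun-warmed river outflow at the surface
cold_salty = rho(8.0, 35.0)     # cooler, saltier ocean water beneath
print(warm_fresh < cold_salty)  # True: the fresh layer floats and caps mixing
```

Warmth and freshness both lower the density, so the surface layer stays on top until storms or currents supply the energy to mix it down.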

Solutions are Available and do not Involve CO2 Emissions

Helping dead zones recover

The Black Sea is bordered by Europe and Asia. Dead zones used to develop here that covered an area as large as Switzerland. Fertilizers running off of vast agricultural fields and animal feedlots in the former Soviet Union were a primary cause. Then, in 1989, parts of the Soviet Union began revolting. Two years later, this massive nation broke apart into 15 separate countries.

The political instability hurt farm activity. In short order, use of nitrogen and phosphorus fertilizers by area farmers declined. Almost at once, the size of the Black Sea’s dead zone shrunk dramatically. Now if a dead zone forms there it’s small, Rabalais says. Some years there is none.

Chesapeake Bay, the United States' largest estuary, has its own dead zone. And the area affected has expanded over the past 50 years due to pollution. But since the 1980s, farmers, landowners and government agencies have worked to reduce the nutrients flowing into the bay.

Farmers now plant cover crops, such as oats or barley, that use up fertilizer that once washed away into rivers. Growers have also established land buffers to absorb nutrient runoff and to keep animal waste out of streams. People have even started to use laundry detergents made without phosphorus.

In 2011, scientists reported that these efforts had achieved some success in shrinking the size of the bay’s late-summer dead zones.

The World Resources Institute lists 55 dead zones as improving. “The bottom line is if we take a look at what is causing a dead zone and fix it, then the dead zone goes away,” says Diaz. “It’s not something that has to be permanent.”

Summary

Alarmists/activists are again confusing the public with a simplistic solution for a complex situation. Actual remedies are available, just not the agenda preferred by climatists.


Waste Management Saves the Ocean

 

Ocean Climate Ripples

Dr. Arnd Bernaerts is again active with edifying articles on how humans impact upon the oceans and thereby the climate. His recent post is Global Cooling 1940 – 1975 explained for climate change experts

I and others first approached Dr. Bernaerts' theory relating naval warfare to climate change with a properly skeptical observation: the ocean is so vast, covering 71% of our planet's surface and reaching depths of 11,000 meters, and stores so much solar energy, that it counteracts all forcings, including human ones.

As an oceanographer, Bernaerts is well aware of that generalization, having named his website Oceans Govern Climate. But his understanding is much more particular, and it became clearer to me in these recent presentations. His information is encyclopedic and his grasp of the details can be intimidating, but I think I get his main point.

When there is intense naval warfare concentrated in a small, shallow basin like the North Sea, the disturbance of the water structure and circulation is profound. The atmosphere responds, resulting in significant regional climate effects. Nearby basins and continents are impacted and eventually it ripples out across the globe.

The North Atlantic example is explained by Bernaerts in Cooling of North Sea – 1939 (2_16). Some excerpts below.

Follow the Water

Water has the highest heat capacity of all solids and liquids except liquid ammonia. If the water within a water body remained stationary and did not move (in fact it moves abundantly, and often forcefully, for a number of reasons), the uppermost surface layer would almost entirely stop the transfer of heat from the water body to the atmosphere.

However, temperature and salt are the biggest internal dynamic factors, and they keep the water permanently in motion. How much heat the ocean can transfer to the surface depends on how warm the surface water is relative to the atmospheric air. Of no lesser importance is how quickly, and in what quantities, cooled-down surface water is replaced by warmer water from the sub-surface level. Wind, cyclones and hurricanes are atmospheric factors that quickly expose new water masses at the sea surface. Another 'effective' way to replace surface water is to stir the water body itself. Naval activities do just this.

War in the North Sea

Since the day the Second World War started, naval activities moved and turned the water in the North Sea, at the surface and at lower levels of 5, 10, 20 or 30 metres or deeper, on a scale possibly dozens of times greater than any comparable external activity over a similar period before. Presumably only World War One could be named in comparison.

The combatants arrived on the scene when the volume of heat from the sun had reached its annual peak. Impacts on temperatures and icing are listed in the last section, 'Events' (see below). The following circumstantial evidence supports, with a high degree of certainty, the conclusion that the North Sea contributed to the arctic war winter of 1939/40.

Climate Change in Response

On the basis of the sea surface temperature record at Helgoland Station and subsequent air temperature developments, there is strong indication that the evaporation rate was high. This is confirmed by the following observed impacts:

More wind: As the rate of evaporation over the North Sea was not measured and recorded, there seems little chance of proving that more vapour moved upwards during autumn 1939 than usual. It can be shown, however, that the direction of the inflowing wind had changed from the usually prevailing SW winds to winds from the N to E, predominantly from the east. At Kew Observatory (London) the general wind direction recorded was north-easterly in only three of 155 winter years: 1814, 1841 and 1940 [6]. This continental wind could have contributed significantly to the following phenomenon of 1939: 'the Western Front rain'.

More rain: One of the most immediate indicators of evaporation is the excessive rain in an area stretching from Southern England to Saxony, Silesia and Switzerland. The southern Baltic Sea, together with Poland and Northern Germany, was clearly separated from the generally wet weather conditions only three to four hundred kilometres further south. A demonstration of the dominant weather situation occurred in late October, when a rain section (supplied from Libya) south of the line Middle Germany – Hungary – Romania was completely separated from the rain section at Hamburg – Southern Baltic [7].

More cooling: Further, the cooling observed from December 1939 onward can be linked to war activities in two ways. The most immediate effect, as explained above, is the direct result of any excessive evaporation process. The second (at least for the establishment of global conditions in the first war winter) is the deprivation of the Northern atmosphere of its usual supply of water masses circulating the globe as humidity.

Rippling Effects in Northern Europe and Beyond

Next to the Atlantic Gulf Current, the North Sea (the Baltic Sea is discussed in the next chapter) plays a key role in determining winter weather conditions in Northern Europe. The reason is simple. As long as these seas are warm, they help sustain the supremacy of maritime weather conditions. If their heat capacity turns negative, their character turns 'continental', giving high-pressure air bodies an easy opportunity to reign, i.e. to come with cold and dry air. Once that happens, warm Atlantic air is severely hampered or even prevented from moving eastwards freely.

The less moist air is circulating the globe south of the Arctic, the more easily cold polar air can travel south. Good evidence is the record lack of rain in the USA from October to December 1939, followed by a colder-than-average January 1940, a long period of low water temperatures in the North Sea from October to March (see above) and the 'sudden' fall of air temperatures to record lows in Northern Europe.

The graph above suggests that naval warfare is linked to rapid cooling. The climate system responds with negative feedbacks to restore equilibrium. Following WWI, when warfare was limited to the North Atlantic, the system overshot and the momentum continued upward into the 1930s. Following WWII, with both Pacific and Atlantic theaters, the climate feedbacks show several peaks trying to offset the cooling, but the downward trend persisted until about 1975.

Summary

The Oceans Govern Climate. Man influences the ocean governor by means of an expanding fleet of motorized, propeller-driven ships. Naval warfare in the two World Wars provides the most dramatic examples of the climate effects.

Neither I nor Dr. Bernaerts claims that shipping and naval activity are the only factors driving climate fluctuations. But it is disturbing that so much attention and money are spent on a bit player, CO2, while a much more plausible human influence on climate goes ignored and uninvestigated.

Scafetta vs. IPCC: Dueling Climate Theories

In one corner, Darth Vader, the Prince of CO2, filling the air with the overwhelming sound of his poison breath. Opposing him, Luke Skywalker, a single skeptic armed only with facts and logic.

OK, that’s over the top, but it’s what came to mind while reading a new paper by Nicola Scafetta in which he goes up against the IPCC empire. And Star Wars came to mind since Scafetta’s theory involves astronomical cycles. The title below links to the text, which is well worth reading.  Some excerpts follow. H/T GWPF

CMIP5 General Circulation Models versus a Semi-Empirical Model Based on Natural Oscillations

Scafetta comes out swinging: From the Abstract

Since 1850 the global surface temperature has warmed by about 0.9 °C. The CMIP5 computer climate models adopted by the IPCC have projected that the global surface temperature could rise by 2-5 °C from 2000 to 2100 for anthropogenic reasons. These projections are currently used to justify expensive mitigation policies to reduce the emission of anthropogenic greenhouse gases such as CO2.

However, recent scientific research has pointed out that the IPCC climate models fail to properly reconstruct the natural variability of the climate. Indeed, advanced techniques of analysis have revealed that the natural variability of the climate is made of several oscillations spanning from the decadal to the millennial scales (e.g. with periods of about 9.1, 10.4, 20, 60, 115, 1000 years and others). These oscillations likely have an astronomical origin.

In this short review I briefly summarize some of the main reasons why the AGWT should be questioned. In addition, I show that an alternative interpretation of climate change is possible, based on the evidence that a significant part of it is due to specific natural oscillations. A model based on this interpretation agrees better with the comprehensive climatic picture deduced from the data.

The Missing Hot-Spot

It has been observed that for recent decades climate models predict a hot-spot, that is, a significant warming of a band of the upper troposphere about 10 km above the tropics and the equator. The presence of this hot-spot is important because it would indicate that the water-vapor feedback to radiative forcing is correctly reproduced by the models.

However, this predicted hot-spot has never been found in the tropospheric temperature records [20,21]. This suggests either that the temperature records obtained with satellite measurements and balloons have been poorly handled, or that the models severely fail to simulate the water-vapor feedback properly. In the latter case, the flaw in the models would be fatal, because the water-vapor feedback is the most important of the climate feedbacks.

Without a strong feedback response from water vapor, the models would predict only a moderate climate sensitivity to radiative forcing of about 1.2 °C for CO2 doubling instead of about 3 °C. Figure 8 compares the observed temperature trend in the troposphere against the climate model predictions, from Ref. [21]. The difference between the two sets of records is evident.


Figure 8. Comparison between observed temperature trend in the troposphere (green-blue) versus the climate model predictions (red). From Ref. [21].

Observations Favor Scafetta’s Model Over GCM Models

I have proposed that the global surface temperature record can be reconstructed from the decadal to the millennial scale using a minimum of six harmonics at 9.1, 10.4, 20, 60, 115 and 983 years, plus an anthropogenic and volcanic contribution that can be evaluated from the CMIP5 GCM outputs reduced by half because, as discussed above, the real climate sensitivity to radiative forcing appears to be about half of what is assumed by the current climate models. The figure highlights the better performance of the solar-astronomical semi-empirical model versus the CMIP5 models. This is particularly evident since 2000, as shown in the inserts.


Figure 12. [A] The four CMIP5 ensemble average projections versus the HadCRUT4 GST record (black). [B] The solar-astronomical semi-empirical model. From Ref. [4]. The left axis shows temperature anomalies in degrees Celsius.

Forecast Validation

In 2011 I prepared a global surface temperature forecast based on a simplified climate model with four natural oscillations (9.1, 10.4, 20 and 60 years) plus an estimate of a realistic anthropogenic contribution [25]: see, for example, Refs. [33,34,35] on the 60-year cycle. Figure 13 compares my 2011 forecast (red curve) against the global surface temperature record I used in 2011 (HadCRUT3, blue curve) and a modern global surface temperature record updated to June 2016 (RSS MSU record, black line, http://www.remss.com/measurements/upper-air-temperature).

The RSS MSU record, a global temperature estimate based on satellite measurements, was linearly rescaled to fit the original HadCRUT3 global surface temperature record for optimal comparison. Other global temperature reconstructions perform similarly. Note that HadCRUT3 was discontinued in 2014. Figure 13 also shows, in green, a schematic representation of the IPCC GCM predictions since 2000 [25].
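The linear rescaling described here amounts to a least-squares fit of a scale and an offset over the overlap period; a sketch with synthetic stand-in series (the data below are invented, not the real HadCRUT3 or RSS records):

```python
import numpy as np

# Hypothetical overlapping anomaly series (synthetic stand-ins for the
# surface and satellite records over 36 overlap years)
rng = np.random.default_rng(2)
had = 0.2 * rng.standard_normal(36)
rss = 1.3 * had + 0.1 + 0.02 * rng.standard_normal(36)  # scaled/offset version

# Linear rescale: choose a, b minimizing |a*rss + b - had|^2
A = np.column_stack([rss, np.ones_like(rss)])
(a, b), *_ = np.linalg.lstsq(A, had, rcond=None)
rescaled = a * rss + b

# The rescaled series matches the target far better than the raw one
print(np.mean((rescaled - had) ** 2) < np.mean((rss - had) ** 2))
```

The fitted scale and offset absorb the difference in baseline and amplitude between the two records, which is all a linear rescaling can do.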


Left axis shows temperature anomalies in degrees Celsius.

Figure 13. Comparison of the forecast (red-yellow curve) made in Scafetta (2011) [25] against (1) the temperature record used in 2011 (HadCRUT3, blue curve), (2) the IPCC climate model projections since 2000 (green area), and (3) a recent global temperature record (RSS MSU record, black line, linearly rescaled to match HadCRUT3 from 1979 to 2014). The temperature record has followed Scafetta's forecast better than the IPCC's. In 2015-2016 a strong El Niño in the Pacific Ocean caused natural warming and the observed temperature peak.

Summary

The considerations emerging from these findings lead to the conclusion that the IPCC climate models overestimate anthropogenic climatic warming by about a factor of two. I have finally proposed a semi-empirical climate model calibrated to reconstruct the natural climatic variability since Medieval times. I have shown that this model projects very moderate warming until 2040 and warming of less than 2 °C from 2000 to 2100 using the same anthropogenic emission scenarios used by the CMIP5 models: see Figure 12.

This result suggests that climate adaptation policies, which are less expensive than mitigation policies, could be sufficient to address most of the consequences of climatic change during the 21st century. Similarly, fossil fuels, which have contributed significantly to the development of our societies, can still be used to fulfill our energy needs until equally efficient alternative energy sources can be identified and developed.

Scafetta Briefly Explains the Harmonic oscillation theory

“The theory is very simple in words. The solar system is characterized by a set of specific gravitational oscillations due to the fact that the planets are moving around the sun. Everything in the solar system tends to synchronize to these frequencies beginning with the sun itself. The oscillating sun then causes equivalent cycles in the climate system. Also the moon acts on the climate system with its own harmonics. In conclusion we have a climate system that is mostly made of a set of complex cycles that mirror astronomical cycles. Consequently it is possible to use these harmonics to both approximately hindcast and forecast the harmonic component of the climate, at least on a global scale. This theory is supported by strong empirical evidences using the available solar and climatic data.”
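As an illustration of the kind of harmonic regression Scafetta describes, here is a sketch that fits sine/cosine pairs at the periods named in the text to a synthetic series; the data and amplitudes are invented for illustration and this is not Scafetta's code:

```python
import numpy as np

periods = [9.1, 10.4, 20.0, 60.0, 115.0, 983.0]   # years, as listed in the text
t = np.arange(1850, 2017, dtype=float)

# Synthetic "temperature" series: two of the cycles plus noise (invented data)
rng = np.random.default_rng(1)
y = (0.3 * np.sin(2 * np.pi * t / 60.0)
     + 0.1 * np.sin(2 * np.pi * t / 20.0)
     + 0.05 * rng.standard_normal(t.size))

# Design matrix: constant term plus a sine/cosine pair for each period
cols = [np.ones_like(t)]
for p in periods:
    cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares amplitudes
fit = A @ coef
print(np.corrcoef(y, fit)[0, 1])               # correlation close to 1
```

Fitting sine and cosine at each period lets the regression recover both amplitude and phase; whether real climate data are well described by such fixed astronomical periods is, of course, the contested question.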

Footnote: Scafetta is not alone.  Dr. Norman Page has a new paper going into detail about forecasting climate by means of  solar-astronomical patterns.

The coming cooling: Usefully accurate climate forecasting for policy makers

Meet Richard Muller, Lukewarmist

Richard Muller, head of the Berkeley Earth project, gives a fair and balanced response to a question regarding the "97% consensus." Are any of the US Senators listening? Full text below from Forbes, 97%: An Inconvenient Truth About The Oft-Cited Polling Of Climate Scientists, including a reference to Will Happer, potentially Trump's science advisor.

Read it and see that he sounds a lot like Richard Lindzen.

What are some widely cited studies in the news that are false?

Answer by Richard Muller, Professor of Physics at UC Berkeley, on Quora:

That 97% of all climate scientists accept that climate change is real, large, and a threat to the future of humanity. That 97% basically concur with the vast majority of claims made by Vice President Al Gore in his Nobel Peace Prize winning film, An Inconvenient Truth.

The question asked in typical surveys is neither of those. It is this: “Do you believe that humans are affecting climate?” My answer would be yes. Humans are responsible for about a 1 degree Celsius rise in the average temperature in the last 100 years. So I would be included as one of the 97% who believe.

Yet the observed changes that are scientifically established, in my vast survey of the science, are confined to temperature rise and the resulting small (4-inch) rise in sea level. (The huge "sea level rise" seen in Florida is actually subsidence of the land mass, and is not related to global warming.) There is no significant change in the rate of storms, or of violent storms, including hurricanes and tornadoes. The temperature variability is not increasing. There is no scientifically significant increase in floods or droughts. Even the widely reported warming of Alaska ("the canary in the mine") doesn't match the pattern of carbon dioxide increase; it may have an explanation in terms of changes in the northern Pacific and Atlantic currents. Moreover, the standard climate models have done a very poor job of predicting the temperature rise in Antarctica, so we must be cautious about the danger of confirmation bias.

My friend Will Happer believes that humans do affect the climate, particularly in cities where concrete and energy use cause what is called the “urban heat island effect.” So he would be included in the 97% who believe that humans affect climate, even though he is usually included among the more intense skeptics of the IPCC. He also feels that humans cause a small amount of global warming (he isn’t convinced it is as large as 1 degree), but he does not think it is heading towards a disaster; he has concluded that the increase in carbon dioxide is good for food production, and has helped mitigate global hunger. Yet he would be included in the 97%.

The problem is not with the survey, which asked a very general question. The problem is that many writers (and scientists!) look at that number and mischaracterize it. The 97% number is typically interpreted to mean that 97% accept the conclusions presented in An Inconvenient Truth by former Vice President Al Gore. That’s certainly not true; even many scientists who are deeply concerned by the small global warming (such as me) reject over 70% of the claims made by Mr. Gore in that movie (as did a judge in the UK; see the following link: Gore climate film’s nine ‘errors‘).

The pollsters aren’t to blame. Well, some of them are; they too can do a good poll and then misrepresent what it means. The real problem is that many people who fear global warming (include me) feel that it is necessary to exaggerate the meaning of the polls in order to get action from the public (don’t include me).

There is another way to misrepresent the results of the polls. Yes, 97% of those polled believe that there is human caused climate change. How did they reach that decision? Was it based on a careful reading of the IPCC report? Was it based on their knowledge of the potential systematic uncertainties inherent in the data? Or was it based on their fear that opponents to action are anti-science, so we scientists have to get together and support each other? There is a real danger in people with Ph.D.s joining a consensus that they haven’t vetted professionally.

I like to ask scientists who “believe” in global warming what they think of the data. Do they believe hurricanes are increasing? Almost never do I get the answer “Yes, I looked at that, and they are.” Of course they don’t say that, because if they did I would show them the actual data! Do they say, “I’ve looked at the temperature record, and I agree that the variability is going up”? No. Sometimes they will say, “There was a paper by Jim Hansen that showed the variability was increasing.” To which I reply, “I’ve written to Jim Hansen about that paper, and he agrees with me that it shows no such thing. He even expressed surprise that his paper has been so misinterpreted.”

A really good question would be: “Have you studied climate change enough that you would put your scientific credentials on the line that most of what is said in An Inconvenient Truth is based on accurate scientific results?” My guess is that a large majority of the climate scientists would answer no to that question, and that the true percentage of scientists who support the statement I made in the opening paragraph of this comment would be under 30%. That is an unscientific guesstimate, based on my experience in asking many scientists about the claims of Al Gore.

This question originally appeared on Quora.

Compare Muller’s statement with a short video by Lindzen.

 

Precipitation Misunderstandings

 

A previous post on Temperature Misunderstandings addressed mistaken notions about the meaning of temperature measurements and records. This post looks at rainfall, the other primary determinant of climates. For this topic California provides the means for everyone to see how misconceptions arise, and how to see precipitation statistics in context.

Lessons learned from the end of California’s “permanent drought”

A report by Larry Kummer documents how extensively California’s recent shortage of water was proclaimed as a “permanent drought”. And it goes on to document how El Nino conditions have ended the water shortage.

Status of the California drought

“During the past week, a series of storms bringing widespread rain and snow showers impacted the states along the Pacific Coast and northern Rockies. In California, the cumulative effect of several months of abundant precipitation has significantly improved drought conditions across the state.”
— US Drought monitor – California, February 9.

Precipitation over California in the water year so far (October 1 to January 31) is 178% of average for this date. The snowpack is 179% of average, as of Feb 8. Reservoirs hold 125% of their average storage for this date. See the bottom line summary as of February 7, from the US Drought Monitor for California.

The improvement has been tremendous. The area in exceptional drought has gone year over year from 38% of California to 0%, extreme drought from 23% to 1%, and severe drought from 20% to 10%, while abnormally dry and moderate drought went from 18% to 48%, and no drought from <1% to 41%. See the map below. And the rain continues to fall.

In addition there is the saga of Oroville Dam, threatened by its overfilling reservoir.

Confusing Weather and Climate

As with temperature, rainy weather is not climate; neither is fair, sunny weather. Precipitation varies within any particular climate, with the seasons and on decadal and multi-decadal time scales. For context on precipitation patterns around the world see Here Comes the Rain Again.

It is a mistake to call a temporary lack of rain a drought, or worse a permanent drought, and equally a mistake to call a return of rainfall the end of a drought. California’s history as a desert environment does not change just because politicians and the public have short memories.

H/T to Eric Simpson for reminding us of that history:

There is also this perceptive comment by tomholsinger:

I wouldn’t be so quick about the drought ending. Droughts are ALWAYS multi-season events. I was very impressed by the references below, which made the point that the 20th Century average of ~200 million acre feet of precipitation in California (rain and snow combined) is way more than the average of ~140 million acre feet over the last 2000 years.

Drying of the West, National Geographic

The West without Water: What Past Floods, Droughts, and Other Climatic Clues Tell Us about Tomorrow, Ingram, B. Lynn, and Malamud-Roam, Frances, 2013, University of California Press

Tom goes on to quote himself from a Modesto Bee op-ed almost two years ago.

Global warming has nothing to do with this – history is bad enough. A long-standing pre-industrial regional climate fluctuation seems underway, returning us from the wettest century in the past 1000 years to at least the historic average of much less (~70%) rain and snow. Many paleoclimatologists believe we are entering a still worse mega-drought.

An extreme drought by historic standards means a drop to 35-40% of the 20th Century average for 10-20 years. California has experienced two centuries-long such extreme mega-droughts in the past 2000 years.

Our average 20th Century precipitation (rain and snow combined) produced about 200 million acre feet of water annually over the whole state. 118 million acre feet went to nature in 2000, and 82 million was allocated by humans – 39 million for federal mandates, 9 million for people and industry, and 34 million for irrigation. A drop to the historic average of ~140 million acre feet over the past 2000 years means extinction for California agriculture – it would bear almost all the burden of the decrease even if the federal water is released. An extreme drought means a drop to about 75 million acre feet, and we might be starting 1-2 centuries of that.

This is happening to the entire Southwest. ~20 million acre feet of the Southwest’s precipitation annually entered the Colorado River in the 20th Century, of which ~12 million is currently withdrawn by Americans. Colorado River flow too has averaged much less over the past 2000 years (12-14 million acre feet annually), and it drops to 7-8 million in droughts which sometimes last centuries.

A drop to only the historic average precipitation over the past 2000 years means catastrophe for the Southwest. Two-thirds of the very wet 20th Century average is normal for the entire area. We can expect ALL of California’s allotment of Colorado River water to be diverted to urban areas in Arizona and Nevada in the decades of drought the region seems to be entering.
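The water-budget arithmetic in the quoted comment can be checked directly. The figures below are the acre-feet numbers from the op-ed; holding nature’s share and urban use fixed under the historic-average scenario is an illustrative assumption for the sketch, not a claim about actual allocation policy.

```python
# Illustrative check of the quoted California water budget.
# All values in millions of acre-feet per year.
AVG_20TH_CENTURY = 200   # 20th Century average statewide precipitation
TO_NATURE = 118          # water that went to nature in 2000
FEDERAL = 39             # federal mandates
PEOPLE_INDUSTRY = 9      # people and industry
IRRIGATION = 34          # agricultural irrigation

# Sanity check: the quoted allocations sum to the 20th Century average.
assert TO_NATURE + FEDERAL + PEOPLE_INDUSTRY + IRRIGATION == AVG_20TH_CENTURY

# Historic (2000-year) average scenario.
HISTORIC_AVG = 140

# If nature's share and urban use were held constant, what would remain
# for federal mandates plus irrigation combined?
remaining = HISTORIC_AVG - TO_NATURE - PEOPLE_INDUSTRY
print(remaining)  # 13, versus 73 (39 + 34) in the 20th Century
```

The 60-million-acre-foot shortfall falls almost entirely on the 73 million now split between federal mandates and irrigation, which is why the comment concludes that agriculture bears nearly all of the decrease.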

Summary

As with temperatures, changes in precipitation are misinterpreted when taken out of historical context. This is usually done to hype a sociopolitical agenda by distracting people from baseline realities that we can only adapt to, not prevent.

The rainfall measures above show that California enjoyed an unusually wet century and it would have been prudent to take advantage of it by storing water resources. As the fable tells us, grasshoppers live for today, ants prepare for tomorrow.

AMO: Atlantic Climate Pulse

I was inspired by David Dilley’s weather forecasting based upon Atlantic water pulsing into the Arctic Ocean (see post: Global Weather Oscillations). So I went looking for that signal in the AMO dataset, our best long-term measure of sea surface temperature variations in the North Atlantic.

ATLANTIC MULTI-DECADAL OSCILLATION (AMO)

For this purpose, I downloaded the AMO Index from Kaplan SST v2, the unaltered and undetrended dataset. By definition, the data are monthly average SSTs interpolated to a 5° × 5° grid over the North Atlantic, basically 0 to 70N.

For an overview, the graph below compares Annual, March and September averages from 1856 to 2016 inclusive.

[Figure: AMO index – Annual, March and September averages, 1856-2016]

We see about a 4°C difference between the cold month of March and the warm month of September. The overall trend is slightly positive at 0.27°C per century, about 10% higher in September and 10% lower in March. It is also clear that the monthly patterns closely resemble the annual pattern, so it is reasonable to look more closely at the annual variability.
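A per-century trend like the one quoted comes from a least-squares line fitted to the annual means. The snippet below sketches that calculation on a synthetic stand-in series (the actual Kaplan SST v2 AMO index is a monthly text file from NOAA PSL; the data here are generated, not downloaded, with a 0.27°C-per-century slope built in).

```python
import numpy as np

# Synthetic annual-mean series for 1856-2016 inclusive:
# a weak warming trend (0.27 deg C per century) plus noise.
rng = np.random.default_rng(0)
years = np.arange(1856, 2017)
annual = 0.0027 * (years - years[0]) + rng.normal(0, 0.1, years.size)

# Least-squares linear fit; the slope is in degrees C per year.
slope, intercept = np.polyfit(years, annual, 1)
trend_per_century = slope * 100
print(round(trend_per_century, 2))  # close to the 0.27 built into the series
```

Fitting the March and September series separately in the same way would give the seasonal trends quoted above.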

The details of the Annual fluctuations in AMO reveal the pulse pattern suggested by Dilley.

[Figure: AMO annual index with pulse peaks marked by arrows]

We note first the classic pattern of temperature cycles seen in all datasets featuring quality-controlled, unadjusted data: a low in 1913, a high in 1944, a low in 1975, and a high in 1998. Also evident are the matching El Nino years 1998, 2009 and 2016, indicating that what happens in the Pacific does not stay in the Pacific.

Most interesting is the periodic peaking of the AMO on an 8-to-10-year time frame. The arrows indicate the peaks, which, as Dilley describes, produce a greater influx of warm Atlantic water under the Arctic ice. And as we know from historical records and naval ice charts, Arctic ice extents were indeed low in the 1930s, high in the 1970s, low in the 1990s, and on a plateau presently.

Conclusion

I am intrigued by, but do not yet subscribe to, the lunisolar explanation for these pulses; still, the AMO index does provide impressive indication of the North Atlantic’s role as a climate pacemaker. Oceans make up 71% of the planet’s surface, so SSTs directly drive global mean temperatures (GMT). But beyond the math, Atlantic pulses set up oscillations in the Arctic that impact the world.
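The 71% figure implies a simple area weighting whenever ocean and land temperatures are blended into a global mean. The anomaly values below are hypothetical, chosen only to illustrate the arithmetic, not taken from any dataset.

```python
# Area-weighted blend of ocean and land temperature anomalies into a
# global mean, using the 71% ocean fraction of Earth's surface.
OCEAN_FRACTION = 0.71

sst_anomaly = 0.3    # hypothetical ocean (SST) anomaly, deg C
land_anomaly = 0.5   # hypothetical land anomaly, deg C

gmt = OCEAN_FRACTION * sst_anomaly + (1 - OCEAN_FRACTION) * land_anomaly
print(round(gmt, 3))  # 0.358
```

Because the ocean weight dominates, the global mean tracks the SST anomaly far more closely than the land anomaly, which is the arithmetic behind the "pacemaker" claim.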

In the background is a large-scale actor, the Atlantic Meridional Overturning Circulation (AMOC), the Atlantic part of the global “conveyor belt” moving warm water from the equatorial oceans to the poles and back again. For more on this deep circulation pattern see Climate Pacemaker: The AMOC.