Business Climate Advice

Climate Activists storm the bastion of Exxon Mobil, here seen without their shareholder disguises.

It’s not only big oil companies under siege from anti-fossil fuel activists. All businesses, small and large, private and publicly traded, are in the crosshairs.

Robert Bradley has a recent post directed at leaders of businesses, both large and small, with respect to global warming/climate change. The article is Climate Alarmism and Corporate Responsibility at Master Resource. It is important advice, since the large majority of business people are ethical and want to do the right thing, but their good intentions can be used against them in the current feverish media and political environment.

Everyone including business leaders needs to distinguish between two related but different threats from global warming/climate change. The first is the direct impacts upon one’s business and livelihood from natural conditions. The second is the indirect threat to prosperity from misguided public policies to “fight climate change.”

Climate Science: Where’s the Alarm?

On the outlook for the climate, there are of course both risks and opportunities to anticipate. Corporate plans typically involve assumptions and projections considered credible to a “reasonable person.” And as Bradley points out, no reasonable person can be expected to go beyond IPCC science, which is premised on concerns about global warming/climate change. Bradley:

Climate alarmism is not supported by the scientific reports of the Intergovernmental Panel on Climate Change (IPCC) on close inspection. There is no direct linkage between the IPCC finding that “the balance of evidence suggests a discernible human influence on climate” and climate alarmism. In fact, there is ample evidence in the scientific literature that the enhanced greenhouse effect is benign. Top climate economists have gone further to conclude that a warmer and wetter world predicted by climate models would produce net benefits in future decades for the United States and other areas of the world with the free market means to adapt. (my bold)

Corporate policymakers can discount climate alarmism by understanding several key arguments and facts:

  • Increasing atmospheric concentrations of carbon dioxide (CO2), the preeminent anthropogenic greenhouse gas, benefit plant life and agricultural productivity and are not a direct human health issue. It will be centuries before the plant optimum concentration is reached and exceeded, creating a long potential glide path for hydrocarbon energies to “green” planet earth.

  • The surface warming (“greenhouse signal”) of recent decades shows a relatively benign distribution. Minimum (night, winter) temperatures have been increasing twice as much as maximum (daytime, summer) temperatures.[9] Higher nighttime temperatures and longer growing seasons reinforce the aforementioned carbon fertilization effect to aid plant growth and agricultural productivity.

  • Model-estimated warming from anthropogenic effects has fallen over time. The chart above shows IPCC (2007) projecting warming at 0.38 °C per decade, while the last report (2013) dropped to 0.17 °C per decade. With a more than 50 percent drop in the IPCC estimate in six years, there is clear evidence that model revision and more realistic forcing scenarios have weakened the alarmist scenario.
  • Today’s lower model-predicted warming estimates may still be too high. At the halfway point of the feared doubling of the warming potential of greenhouse gases in the atmosphere, the model scenarios are overpredicting warming by a factor of two or more.
  • The two global temperature measurements from satellites and balloons in their two decades of existence have not picked up the “greenhouse signal” where it should be most pronounced or at least discernible—the lower troposphere. This suggests that surface thermometers may be overestimating warming and/or the surface warming is primarily the result of other factors than just the enhanced greenhouse effect (such as increased solar radiation). A natural warming trend neuters the case for climate alarmism.
  • The reduced growth rate of greenhouse gas buildup in the atmosphere in the last decade, as much as half the rate of some alarmist scenarios, extends the warming timetable to facilitate adaptation under any scenario. The reduced buildup is primarily related to greater carbon dioxide intake—the “greening of planet earth” phenomenon of robust carbon sinks.
  • Scientists who are confident about pinpointing the greenhouse signal from the surface temperature record have not substantiated a greenhouse signal with weather extremes. “Overall,” concluded the IPCC, “there is no evidence that extreme weather events, or climate variability, has increased, in a global sense, through the 20th century, although data and analyses are poor and not comprehensive.”


Business leaders have to balance the needs and interests of various stakeholders in the enterprise: principally the customers, the employees and the investors, as well as the larger society. Today’s social context presents the hazards of misguided public policies and regulations against fossil fuels, despite the unconvincing science case for them. The main risks are more expensive and unreliable power, more costly energy of all kinds, and a sluggish economy burdened with unnecessary expenses. To cope, Bradley makes some “no regrets” suggestions:

“No regret” policies—policies that are economical whether or not GHG emissions are worth addressing—should be pursued in their own right to help defuse the climate change issue. A prominent example is for businesses to profitably lower their energy usage wherever possible. Energy service companies in recent years have executed long-term contracts with guaranteed savings to completely manage the energy function of commercial and industrial users. Total energy outsourcing improves the allocation of core competencies and creates new incentives for optimal energy usage to benefit each party to the agreement.  (my bold)

Other pro-market “no regret” public policies that would have the effect of profitably reducing greenhouse gas emissions over time include:

  • Reducing criteria air pollutants in urban areas not in compliance with the Clean Air Act.

  • Modernizing and simplifying the tax code to provide more incentives for capital-intensive businesses to modernize their physical capital, thus lowering energy usage and related GHG emissions.

  • Maintaining incentives (or removing disincentives) for hydroelectric and nuclear power generation facilities that produce carbon-free electricity.

Increasing energy conversion efficiencies of new electric generation capacity and a growing market share of natural gas-fired power relative to coal are natural market processes that will complement the above public policy reforms. Together, they will ensure that GHG emissions are not greater than their free market levels and will continue to fall per unit of output (the decarbonization phenomenon).

Summary

Agenda-driven climate alarmism should be rejected by corporate America on pragmatic and social responsibility grounds. Not only does the balance of evidence point toward net social benefits from a carbon dioxide enriched and moderately warmer and wetter world; energy reality also suggests that any short-term regulatory approach is futile and wasteful compared to perfecting business-as-usual strategies and using the wealth of energy abundance and free global markets to adapt to any weather and climate conditions in the future.

 

Resilient Arctic Ice

 

Source: NASA Worldview July 18, 2017.

July is showing again the resilience of Arctic ice this year. The graph below shows 2017 extents for the first 19 days of July compared to the average for the previous 11 years, to 2016, to 2007 and the SII (Sea Ice Index) estimates for 2017.

The graph shows 2017 holding to the decadal average, just yesterday dropping below 8M km2, one day ahead of average. Meanwhile the other extents are much lower than 2017: 2016 is down 357k km2, 2007 is down 379k km2, and SII shows 2017 480k km2 less than MASIE at day 200.

As we shall see, this year’s extents are in surplus on the Atlantic side, offset by deficits on the Pacific side and in Hudson Bay.  The image shows the evolution of Arctic ice from 2007 to this year for day 200.


The table compares 2017 day-200 ice extents (in km2) with the 11-year day-200 average and with 2007:

Region                                  2017      Average   2017-Ave.     2007   2017-2007
 (0) Northern_Hemisphere             7997823      8064957     -67133   7618029      379795
 (1) Beaufort_Sea                     806596       819503     -12906    797272        9324
 (2) Chukchi_Sea                      514591       619294    -104704    488952       25638
 (3) East_Siberian_Sea                744800       937942    -193142    707353       37447
 (4) Laptev_Sea                       666317       584009      82308    455463      210854
 (5) Kara_Sea                         321934       310630      11304    377648      -55714
 (6) Barents_Sea                       74053        45893      28160     55933       18120
 (7) Baffin_Bay_Gulf_of_St._Lawrence  371429       238537     132893    278443       92986
 (8) Greenland_Sea                    478308       388587      89721    375816      102492
 (9) Canadian_Archipelago             624373       685089     -60717    686749      -62376
 (10) Hudson_Bay                      172045       258697     -86652    170690        1355
 (11) Central_Arctic                 3222235      3173093      49143   3221912         323

2007 overall ice extent on day 200 was lower by 380k km2, with 2017 showing surpluses everywhere except Kara and the CAA (Canadian Arctic Archipelago). Compared to the decadal average, the larger 2017 deficits are in the Pacific (Chukchi and East Siberian) and in Canada (Hudson Bay and CAA). These are offset by above-average extents elsewhere, especially in Laptev, Greenland, Baffin and Central Arctic. Barents is still in surplus relative to average, but has now fallen behind 2014 as the highest in the last decade.
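The delta columns in the table can be recomputed directly from the extent figures. A minimal sketch in Python (values in km²; the published deltas can differ by ±1 km² from the recomputed ones because the source rounded its inputs):

```python
# Recompute the 2017-vs-average and 2017-vs-2007 deltas from the MASIE
# day-200 extents quoted in the table above (all values in km^2).
# Three rows shown; the rest follow the same pattern.
rows = {
    "Northern_Hemisphere": (7997823, 8064957, 7618029),
    "Laptev_Sea": (666317, 584009, 455463),
    "Hudson_Bay": (172045, 258697, 170690),
}

for region, (ext_2017, ext_avg, ext_2007) in rows.items():
    vs_avg = ext_2017 - ext_avg    # positive = surplus vs decadal average
    vs_2007 = ext_2017 - ext_2007  # positive = surplus vs 2007
    print(f"{region}: vs average {vs_avg:+d}, vs 2007 {vs_2007:+d}")
```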

The black line is average for the last 11 years.  2007 in purple appears close to an average year.  2014 had the highest annual extent in Barents Sea, due to higher and later maximums, holding onto ice during the summer, and recovering quickly.  In contrast, 2016 was the lowest annual extent, melting out early and recovering later.  2017 in blue started out way behind, but grew rapidly to reach average, and then persisted longer to exceed even 2014 before falling behind just recently.

For more on why Barents Sea matters, see Barents Icicles.

 

Eemian and Holocene Climates

 

Hansen is publishing a new paper in support of the children suing the government for not fighting climate change. In it he claims temperatures now are higher than the Holocene range and match the Eemian, so we should expect comparable sea levels.

Paper is here:
http://www.earth-syst-dynam.net/8/577/2017/

Abstract. Global temperature is a fundamental climate metric highly correlated with sea level, which implies that keeping shorelines near their present location requires keeping global temperature within or close to its preindustrial Holocene range. However, global temperature excluding short-term variability now exceeds +1 °C relative to the 1880–1920 mean and annual 2016 global temperature was almost +1.3 °C. We show that global temperature has risen well out of the Holocene range and Earth is now as warm as it was during the prior (Eemian) interglacial period, when sea level reached 6–9 m higher than today. Further, Earth is out of energy balance with present atmospheric composition, implying that more warming is in the pipeline, and we show that the growth rate of greenhouse gas climate forcing has accelerated markedly in the past decade. The rapidity of ice sheet and sea level response to global temperature is difficult to predict, but is dependent on the magnitude of warming. Targets for limiting global warming thus, at minimum, should aim to avoid leaving global temperature at Eemian or higher levels for centuries. Such targets now require negative emissions, i.e., extraction of CO2 from the air. If phasedown of fossil fuel emissions begins soon, improved agricultural and forestry practices, including reforestation and steps to improve soil fertility and increase its carbon content, may provide much of the necessary CO2 extraction. In that case, the magnitude and duration of global temperature excursion above the natural range of the current interglacial (Holocene) could be limited and irreversible climate impacts could be minimized. In contrast, continued high fossil fuel emissions today place a burden on young people to undertake massive technological CO2 extraction if they are to limit climate change and its consequences. 
Proposed methods of extraction such as bioenergy with carbon capture and storage (BECCS) or air capture of CO2 have minimal estimated costs of USD 89–535 trillion this century and also have large risks and uncertain feasibility. Continued high fossil fuel emissions unarguably sentences young people to either a massive, implausible cleanup or growing deleterious climate impacts or both. (my bolds)

The image at the top shows that the Eemian climate was very different due to orbital mechanics, which were nothing like today.  And as Rud points out, the rise in sea levels took thousands of years at a rate similar to today: 2 mm a year.

In addition, Hansen et al. appear to have erased not only the Medieval Warm Period, but also the Roman and Minoan warm periods before it. Perhaps they are using 2016 temps as a trampoline for their claims, even though we are already well down from that El Nino event.

Hansen et al. are going over the top, exaggerating even beyond IPCC in order to proclaim Waterworld is at hand.

It seems to me that the kind of rise Hansen is looking for comes after an ice age freezes up lots of water, resulting in a very low baseline. In the graph below, you can see the rise beginning around 14 thousand years ago at the start of the Holocene, and then slowing to the present rate around 6000 years ago.

More information is available here:
https://wattsupwiththat.com/2015/06/01/ice-core-data-shows-the-much-feared-2c-climate-tipping-point-has-already-occurred/

For more on the children’s crusade against global warming, see Climate War Human Shields.

 

 

Climate Biorhythms

Human Biorhythms

The question–whether monitoring biorhythm cycles can actually make a difference in people’s lives–has been studied since the 1960s, when the writings of George S. Thommen popularized the idea.

Several companies began experimenting, and although the Japanese were the first nation to apply biorhythms on a large scale, the Swiss were the first to realize the benefits of biorhythms in reducing accidents.

Hans Frueh invented the Bio-Card and Bio-Calculator, and Swiss municipal and national authorities appear to have been applying biorhythms for many years before the Japanese experiments. Swissair, which reportedly had been studying the critical days of its pilots for almost a decade previously, did not allow either a pilot or a co-pilot experiencing a critical day to fly with another experiencing the same kind of instability. Reportedly, Swissair had no accidents on those flights where biorhythm had been applied.

Most biorhythm models use three cycles: a 23-day physical cycle, a 28-day emotional cycle, and a 33-day intellectual cycle.[8] Each of these cycles varies between high and low extremes sinusoidally, with days where the cycle crosses the zero line described as “critical days” of greater risk or uncertainty.

The numbers from +100% (maximum) to -100% (minimum) indicate where on each cycle the rhythms are on a particular day. In general, a rhythm at 0% is crossing the midpoint and is thought to have no real impact on your life, whereas a rhythm at +100% (at the peak of that cycle) would give you an edge in that area, and a rhythm at -100% (at the bottom of that cycle) would make life more difficult in that area. There is no particular meaning to a day on which your rhythms are all high or all low, except the obvious benefits or hindrances that these rare extremes are thought to have on your life.
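The standard model described above is easy to compute: each cycle is a pure sine wave in the number of days lived. A minimal sketch (the function names are mine, for illustration only):

```python
from datetime import date
from math import sin, pi

# The three classical cycle lengths in days.
CYCLES = {"physical": 23, "emotional": 28, "intellectual": 33}

def biorhythm(birth: date, day: date) -> dict:
    """Value of each cycle in [-1, +1] (i.e. -100%..+100%) on the given day."""
    t = (day - birth).days  # days lived
    return {name: sin(2 * pi * t / period) for name, period in CYCLES.items()}

def is_critical(birth: date, day: date, cycle: str) -> bool:
    """A 'critical day' is when the cycle crosses zero: the days lived are a
    multiple of the period, or fall on (or straddle) the half-period."""
    t = (day - birth).days
    period = CYCLES[cycle]
    r = t % period
    return r == 0 or abs(r - period / 2) <= 0.5
```

For example, a person's physical cycle crosses zero on days 0, 23, 46, … after birth (and near the half-periods in between), which the model flags as days of greater risk.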

Human Biorhythms are not proven

Various attempts have been made to validate this biorhythm model, with inconclusive results. It is fair to say that this particular definition of physical, emotional, and intellectual cycles has not been proven. I do not myself subscribe to it, nor have I ever attempted to follow it. My point is mainly to draw an analogy: what if fluctuations in global temperatures are the combined result of multiple cycles of varying lengths?

What About Climate Biorhythms?

At the longer end, we have astronomical cycles on millennial scales, and at the shorter end, seasonal cycles. In between there are a dozen or so oceanic cycles, such as ENSO, AMO, and AMOC, that have multi-decadal phases. Then there are solar cycles, ranging from the basic quasi-11-year sunspot cycle to centennial maxima and minima. AARI scientists have documented a quasi-60-year cycle in Arctic ice extents. ETH Zurich has a solar radiation database showing an atmospheric sunscreen that alternately dims or brightens the incoming sunshine over decades (see Nature’s Sunscreen).

It could be that observed warming and cooling periods occur when several of the more powerful cycles coincide in their phases. For example, we are at the moment anticipating an unusually quiet solar cycle, a Pacific Decadal Oscillation (PDO) negative phase, a cooler North Atlantic (AMO), and possibly a dimming period. Will that coincidence result in temperatures dropping? Was the Little Ice Age caused, and then ended after 1850, by such a coincidence of climate biorhythms?
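The idea of coinciding phases can be illustrated numerically. Below is a toy superposition of three sinusoidal "cycles" whose periods and amplitudes are purely hypothetical (not fitted to any temperature record): when the phases align, the combined swing exceeds anything a single cycle can produce, and most of the time the cycles partly cancel.

```python
from math import sin, pi

# Hypothetical (period in years, amplitude in degrees C) -- illustrative only.
CYCLES = [(1000, 0.30), (60, 0.20), (11, 0.05)]

def combined_anomaly(year: float) -> float:
    """Sum of the cycles, with phases chosen so all three peak at year 0."""
    return sum(amp * sin(2 * pi * year / period + pi / 2)
               for period, amp in CYCLES)

# At year 0 all phases coincide and the anomaly is the sum of the
# amplitudes (0.55); at other times the cycles partially cancel.
```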

Summary

Our knowledge of these cycles is limited because we have not yet untangled them to see their individual periodicities, as a basis for probing into their interactions and combined influences. Until that day, we should refrain from singling out one factor, like CO2, as though it were a control knob for the whole climate.

Nature’s Sunscreen

Greenhouse with adjustable sun screens to control warming.

A previous post, Planetary Warming: Back to Basics, discussed a recent paper by Nikolov and Zeller on the atmospheric thermal effect measured on various planets in our solar system. They mentioned that an important source of temperature variation around the earth’s energy balance state can be traced to global brightening and dimming.

This post explores fluctuations in the amount of solar energy reflected rather than absorbed by the atmosphere and surface. Brightening refers to more incoming solar energy from clear and clean skies. Dimming refers to less solar energy due to more sunlight reflected in the atmosphere by the presence of clouds and aerosols (airborne particles like dust and smoke).

The energy budget above from ERBE shows how important this issue is. On average, half of the incoming sunlight is either absorbed in the atmosphere or reflected before it can be absorbed by the surface land and ocean. Any shift in reflectivity (albedo) has a large impact on the solar energy warming the planet.

The leading research on global brightening/dimming is done at the Institute for Atmospheric and Climate Science of ETH Zurich, led by Martin Wild, senior scientist specializing in the subject.

Special instruments have been recording the solar radiation that reaches the Earth’s surface since 1923. However, it wasn’t until the International Geophysical Year in 1957/58 that a global measurement network began to take shape. The data thus obtained reveal that the energy provided by the sun at the Earth’s surface has undergone considerable variations over the past decades, with associated impacts on climate.

The initial studies were published in the late 1980s and early 1990s for specific regions of the Earth. In 1998 the first global study was conducted for larger areas, such as the continents of Africa, Asia, North America and Europe.

Now ETH has announced The Global Energy Balance Archive (GEBA) version 2017: A database for worldwide measured surface energy fluxes. The title is a link to that paper, published in May 2017, explaining the facility and some principal findings. The Archive itself is at http://www.geba.ethz.ch.

For example, Figure 2 below provides the longest continuous record available in GEBA: surface downward shortwave radiation measured in Stockholm since 1922 (five-year moving average in blue, 4th-order regression model in red; units W m−2). Substantial multidecadal variations become evident, with an increase up to the 1950s (“early brightening”), an overall decline from the 1950s to the 1980s (“dimming”), and a recovery thereafter (“brightening”).

Figure 5. Composite of 56 European GEBA time series of annual surface downward shortwave radiation (thin line) from 1939 to 2013, plotted together with a 21-year Gaussian low-pass filter (thick line). The series are expressed as anomalies (in W m−2) from the 1971–2000 mean. Dashed lines are used prior to 1961 due to the lower number of records in this initial period. Updated from Sanchez-Lorenzo et al. (2015) including data until December 2013.

Martin Wild explains in a 2016 article, Decadal changes in radiative fluxes at land and ocean surfaces and their relevance for global warming. From the Conclusion (SSR refers to solar radiation incident upon the surface):

However, observations indicate not only changes in the downward thermal fluxes, but even more so in their solar counterparts, whose records have a much wider spatial and temporal coverage. These records suggest multidecadal variations in SSR at widespread land-based observation sites. Specifically, declining tendencies in SSR between the 1950s and 1980s have been found at most of the measurement sites (‘dimming’), with a partial recovery at many of the sites thereafter (‘brightening’).

With the additional information from more widely measured meteorological quantities which can serve as proxies for SSR (primarily sunshine duration and DTR), more evidence for a widespread extent of these variations has been provided, as well as additional indications for an overall increasing tendency in SSR in the first part of the 20th century (‘early brightening’).

It is well established that these SSR variations are not caused by variations in the output of the sun itself, but rather by variations in the transparency of the atmosphere for solar radiation. It is still debated, however, to what extent the two major modulators of the atmospheric transparency, i.e., aerosol and clouds, contribute to the SSR variations.

The balance of evidence suggests that on longer (multidecadal) timescales aerosol changes dominate, whereas on shorter (decadal to subdecadal) timescales cloud effects dominate. More evidence is further provided for an increasing influence of aerosols during the course of the 20th century. However, aerosol and clouds may also interact, and these interactions were hypothesized to have the potential to amplify and dampen SSR trends in pristine and polluted areas, respectively.

No direct observational records are available over ocean surfaces. Nevertheless, based on the presented conceptual ideas of SSR trends amplified by aerosol–cloud interactions over the pristine oceans, modeling approaches as well as the available satellite-derived records it appears plausible that also over oceans significant decadal changes in SSR occur.

The coinciding multidecadal variations in SSTs and global aerosol emissions may be seen as a smoking gun, yet it is currently an open debate to what extent these SST variations are forced by aerosol-induced changes in SSR, effectively amplified by aerosol–cloud interactions, or are merely a result of unforced natural variations in the coupled ocean–atmosphere system. Resolving this question could constitute a major step toward a better understanding of multidecadal climate change.

Another paper co-authored by Wild discusses the effects of aerosols and clouds: The solar dimming/brightening effect over the Mediterranean Basin in the period 1979−2012. (NSWR is Net Short Wave Radiation, equal to surface solar radiation less the reflected portion.)

The analysis reveals an overall increasing trend in NSWR (all skies) corresponding to a slight solar brightening over the region (+0.36 W m−2 per decade), which is not statistically significant at the 95% confidence level (C.L.). An increasing trend (+0.52 W m−2 per decade) is also shown for NSWR under clean skies (without aerosols), which is statistically significant (P=0.04).

This indicates that NSWR increases at a higher rate over the Mediterranean due to cloud variations only, because of a declining trend in COD (Cloud Optical Depth). The peaks in NSWR (all skies) in certain years (e.g., 2000) are attributed to a significant decrease in COD (see Figs. 9 and 10), while the two data series (NSWRall and NSWRclean) are highly correlated (r=0.95).

This indicates that cloud variation is the major regulatory factor for the amount and multi-decadal trends in NSWR over the Mediterranean Basin. (Note: Lower cloud optical depth is caused by less opaque clouds and/or decrease in overall cloudiness)

On the other hand, the results do not reveal a reversal from dimming to brightening during the 1980s, as shown in several studies over Europe (Norris and Wild, 2007; Sanchez-Lorenzo et al., 2015), but rather a steady slight increasing trend in solar radiation, which, however, seems to have stabilized during the last years of the data series, in agreement with Sanchez-Lorenzo et al. (2015). Similarly, Wild (2012) reported that the solar brightening was less distinct at European sites after 2000 compared to the 1990s.

In contrast, the NSWR under clear (cloudless) skies shows a slight but statistically significant decreasing trend (−0.17 W m−2 per decade, P=0.002), indicating an overall decrease in NSWR over the Mediterranean due to water-vapor variability, suggesting a transition to a more humid environment under a warming climate.

Other researchers find cloudiness more dominant than aerosols. For example, The cause of solar dimming and brightening at the Earth’s surface during the last half century: Evidence from measurements of sunshine duration by Gerald Stanhill et al.

Analysis of the Angstrom-Prescott relationship between normalized values of global radiation and sunshine duration measured during the last 50 years made at five sites with a wide range of climate and aerosol emissions showed few significant differences in atmospheric transmissivity under clear or cloud-covered skies between years when global dimming occurred and years when global brightening was measured, nor in most cases were there any significant changes in the parameters or in their relationships to annual rates of fossil fuel combustion in the surrounding 1° cells. It is concluded that at the sites studied changes in cloud cover rather than anthropogenic aerosols emissions played the major role in determining solar dimming and brightening during the last half century and that there are reasons to suppose that these findings may have wider relevance.

Summary

The final words go to Martin Wild from Enlightening Global Dimming and Brightening.

Observed Tendencies in surface solar radiation
Figure 2. Changes in surface solar radiation observed in regions with good station coverage during three periods. (Left column) The 1950s–1980s show predominant declines (“dimming”); (middle column) the 1980s–2000 indicate partial recoveries (“brightening”) at many locations, except India; and (right column) recent developments after 2000 show mixed tendencies. Numbers denote typical literature estimates for the specified region and period in W m–2 per decade. Based on various sources as referenced in Wild (2009).

The latest updates on solar radiation changes observed since the new millennium show no globally coherent trends anymore (see above and Fig. 2). While brightening persists to some extent in Europe and the United States, there are indications for a renewed dimming in China associated with the tremendous emission increases there after 2000, as well as unabated dimming in India (Streets et al. 2009; Wild et al. 2009).

We cannot exclude the possibility that we are currently again in a transition phase and may return to a renewed overall dimming for some years to come.

One can’t help but see the similarity between dimming/brightening and patterns of Global Mean Temperature, such as HadCrut.

Footnote: For more on clouds, precipitation and the ocean, see Here Comes the Rain Again

Climate Policies Failure, the Movie

 

H/T GWPF for pointing (here) to this documentary film on how climate change policies are threatening modern civilization.  Trailer can be viewed above.  My recent post on this subject is below.

The Failure of Climate Policies

“Primum non nocere” means “First, do no harm.”

Medical practitioners know this principle, the closest approximation in the Hippocratic corpus coming from Epidemics: “The physician must be able to tell the antecedents, know the present, and foretell the future – must mediate these things, and have two special objects in view with regard to disease, namely, to do good or to do no harm.”

Every intervention has consequences by which its success is measured. Effectiveness regards the quality of outcomes: Good things happened, Nothing happened, or Bad things happened. Of course, it may be a mixed bag in which the net must be weighed.

In addition, efficiency is considered (“evidence-based” in today’s jargon): it was worth it, it was not worth it, or it was worse than doing nothing. Both the attainment of intended consequences and collateral, unintended damages bear on the judgment.

More and more in the nations “leading on climate change,” people are starting to question the actions of policymakers. Recently Robert Lyman, an Ottawa energy policy analyst, presented on the theme Can Canada Survive Climate Change Policy? From Friends of Science:

It must indeed seem strange that someone would wonder about the effects of the policies now proposed to reduce greenhouse gas emissions as though the policies themselves are the threat. And yet they are.

I am not here to address the issue of how much human-related greenhouse gas emissions are contributing to increased concentrations of carbon dioxide in the atmosphere nor on the sensitivity of global temperatures and climate to the increases in those concentrations over time. There are others here far more qualified than I to discuss that.

Instead, I want to discuss the policy and program measures that the people of Canada and other countries, especially in the industrialized world, are being urged to adopt and what will be the implications of those policies and programs.

Edmonton one winter night.

Canada is the second largest country in the world, sparsely populated, with vast transportation needs. We withstand long, cold winters featuring short days, extremely low temperatures and lots of snow. Our energy and resource industries would be penalized for providing the valuable materials the rest of the world demands and uses.

The article goes into the history of how we all, including Canada, got to this point. Then comes this:

Ladies and gentlemen, these commitments are just the beginning, the mere “foot in the door” for the more radical demands that lie ahead. We are still bound in principle to reduce Canadian GHG emissions by 50% from 2005 levels by 2050. The U.N still wants us to “show leadership” by reducing emissions by 80% from 2010 levels by 2050. A number of environmental groups in Canada and other countries have recently endorsed the Wind, Water and Sunlight, or WWS, vision. This vision seeks completely to eliminate the use of all fossils fuels – coal, oil, and natural gas – in the world by 2050. The New Democratic Party’s LEAP Manifesto endorses this vision, as does the Green Party and most of Canada’s influential environmental organizations. The government of Ontario also has formally committed the province to this vision. So have a number of large Canadian municipal governments.

In practice, consumers pay twice: once for the (expensive) renewable generation and then for the capital costs of the backup thermal plants.

How can we even begin to understand the magnitude of the changes being proposed? One way is to look at the sources of energy consumption and related emissions today. In 2005, Canadian emissions were 738 megatonnes of carbon dioxide equivalent. In 2014, after six years of the worst recession since the Great Depression, Canadians emitted less, 722 megatonnes. Twenty-six per cent of those emissions were from oil and gas production, 23 per cent were from transportation, and roughly equal portions of around 10 per cent were from electricity generation, buildings, industry and agriculture, with waste and other sources making up a residual 7 per cent. Assuming that emissions do not grow one bit over the next 32 years as a result of increased economic activity or increased population, achieving a 50 per cent emissions reduction from 2005 levels would mean reducing emissions to 369 megatonnes CO2 equivalent. That is comparable to completely eliminating the current emissions from oil and gas production, electricity generation, and all emissions-intensive industries like mining, petrochemicals, auto and parts manufacturing, iron, steel and cement. Gone. Achieving the aspirational goal of 80 per cent reduction recommended by the IPCC would mean reducing emissions to 147 megatonnes CO2 equivalent. That would be comparable to reducing Canada’s per capita emissions and our energy economy to the current levels of Bolivia, Sudan or Iraq. (original bold)
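The arithmetic in the quoted passage checks out. A short Python sketch using only the figures quoted in the speech:

```python
# Figures quoted in the speech (megatonnes of CO2 equivalent)
emissions_2005 = 738
emissions_2014 = 722

# A 50% cut from 2005 levels
target_50 = emissions_2005 * 0.50
print(target_50)  # 369.0 Mt, as stated

# The aspirational 80% cut (computed here from the same 2005 base)
target_80 = emissions_2005 * 0.20
print(round(target_80, 1))  # 147.6 Mt, matching the quoted 147

# Cut required from 2014 levels even if emissions never grow again
print(emissions_2014 - target_50)  # 353.0 Mt that must disappear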

What benefits would be achieved by incurring such costs?

Despite all the rhetoric about reducing world carbon dioxide emissions from fuel combustion and gas flaring, according to the U.S. Carbon Dioxide Information Analysis Center, they rose steadily from 16.6 Gigatonnes carbon dioxide equivalent in 1973 to 34.1 Gigatonnes in 2014. So, they more than doubled over that timeframe. Importantly, though, the origins of the emissions changed significantly. In 1973, the countries of the Organisation for Economic Co-operation and Development, or OECD, accounted for two-thirds of global CO2 emissions from fuel combustion; by 2014, the OECD share had declined to just over a third. So all, or almost all, of the emissions growth occurred outside of the OECD.

So, we have two sharply different perspectives of the future, the EIA’s projections of what probably will happen and the aspirations of the U.N. and many environmental groups as to what in their view should happen. Reducing emissions by 50% by 2050 to meet the U.N.’s vision would mean a global total of about 16 Gigatonnes, in contrast to the EIA’s projection of 43 Gigatonnes (Gt). The OECD countries – the United States, Canada, most of Europe, Japan, Australia and others – could eliminate 100% of their projected emissions of 14 Gt, and the world would still be over its target by 13 Gt.
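The gap described here is simple subtraction. A sketch with the quoted round numbers:

```python
# Figures quoted above (gigatonnes of CO2 equivalent, for 2050)
eia_projection = 43   # EIA projection of what probably will happen
un_target = 16        # roughly half of today's ~34 Gt, per the U.N. vision
oecd_projected = 14   # the OECD countries' projected emissions

# Even if the OECD eliminated 100% of its projected emissions...
rest_of_world = eia_projection - oecd_projected
print(rest_of_world)              # 29 Gt from the rest of the world
print(rest_of_world - un_target)  # 13 Gt over target, as the speech says
```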

A Tangled Pile of Wasteful Climate Programs

There is not in Canada a comprehensive list of the measures that have been implemented by all orders of government to reduce greenhouse gas emissions. They have been increasing in number, reach and cost since 1988. I counted 37 different generic types of measures now in use. Large bureaucracies exist to design, implement, and (less frequently) evaluate these measures. They stretch like the tentacles of some vast octopus across every aspect of the Canadian economy and touch everyone’s life. As no one has ever established an inventory of the measures now in place or of those under consideration, no one knows how much these measures already cost Canadians. Two things are certain – they cost billions of dollars annually, and they are not going away soon, regardless of the taxes imposed on carbon. I might add a third certainty, which is that the government will continue to develop and implement more and more programs and regulations as time goes on.

Let me remind you of the conclusions reached by the federal government’s own monitor of program effectiveness, the Commissioner of the Environment and Sustainable Development.  Starting in 1998, the commissioner began to critique the government’s approach to managing emission reduction measures. In the seven reports that followed, there were five consistent themes.

  • First, the government has not created effective governance structures for managing climate change activities. In fact, there have been weaknesses in horizontal governance across departments, accountability and coordination.
  • Second, there has been, and remains, no overall implementation plan. The government has produced no estimate of the emission reductions expected from each sector. Without an implementation plan, industry, consumers and other levels of government lack a solid basis for knowing how to apply technology or make investment decisions.
  • Third, as a result, Canada cannot determine whether the targets for emissions reduction already announced will be met or how much it will cost to do so.
  • Fourth, there are few mechanisms in place to measure the performance of the emission-reduction measures that have been implemented so far.
  • Fifth, the federal and provincial governments do poorly in coordinating their approaches to emissions reduction.

I agree that we need an honest dialogue about climate change mitigation. It should start with the recognition that governments to date have publicly embraced emission reduction targets that are unachievable with present technology and at acceptable economic costs. We should acknowledge that we as a society have multiple goals of which environmental quality, however important one might think it is, represents only one. If we value our prosperity and unity as a federal, geographically diverse country, we must approach the climate change issue with a respect for all our collective goals.

Much of Canada’s current political elite favours the pursuit of international goals over the steadfast promotion of the Canadian interest, whether on issues of trade, security or the environment. Never before, however, have we faced a situation in which commitment to an international objective may well impose enormous and divisive costs on Canada for no discernible global environmental benefit. Climate change thus offers a clear dichotomy between the Canadian national interest and the global environmental agenda.

See also Trump Did the Right Thing in the Right Way

Updated: Climates Don’t Start Wars, People Do

Update July 14

A new study has looked into the Syrian civil war, which has been the poster child for those claiming that climate causes human conflict. H/t to Mike Hulme, who posted Climate Change and the Syrian Civil War Revisited. The study concluded:

“For proponents of the view that anthropogenic climate change will become a ‘threat multiplier’ for instability in the decades ahead, the Syrian civil war has become a recurring reference point, providing apparently compelling evidence that such conflict effects are already with us. According to this view, human-induced climatic change was a contributory factor in the extreme drought experienced within Syria prior to its civil war; this drought in turn led to large-scale migration; and this migration in turn exacerbated the socio-economic stresses that underpinned Syria’s descent into war. This article provides a systematic interrogation of these claims, and finds little merit to them. Amongst other things it shows that there is no clear and reliable evidence that anthropogenic climate change was a factor in Syria’s pre-civil war drought; that this drought did not cause anywhere near the scale of migration that is often alleged; and that there exists no solid evidence that drought migration pressures in Syria contributed to civil war onset. The Syria case, the article finds, does not support ‘threat multiplier’ views of the impacts of climate change; to the contrary, we conclude, policymakers, commentators and scholars alike should exercise far greater caution when drawing such linkages or when securitising climate change.”  (my bold)

Original Post

Once again the media are promoting a link between climate change and human conflicts. It is obvious to anyone in their right mind that wars correlate with environmental destruction. From the rioting in Watts to the wars in Iraq and the current chaos in Syria, there’s no doubt that fighting degrades the environment big time.

What is strange here is the notion that changes in temperatures and/or rainfall cause the conflicts in the first place. The researchers that advance this claim are few in number and are hotly disputed by many others in the field, but you would not know that from the one-sided coverage in the mass media.

The Claim

Lately the fuss arises from this study: Climate, conflict, and social stability: what does the evidence say?, Hsiang, S.M. & Burke, M. Climatic Change (2014) 123: 39. doi:10.1007/s10584-013-0868-3

Hsiang and Burke (2014) examine 50 quantitative empirical studies and find a “remarkable convergence in findings” (p. 52) and “strong support for a causal association” (p. 42) between climatological changes and conflict at all scales and across all major regions of the world. A companion paper by Hsiang et al. (2013) that attempts to quantify the average effect from these studies indicates that a 1 standard deviation (σ) increase in temperature or rainfall anomaly is associated with an 11.1 % change in the risk of “intergroup conflict”. Assuming that future societies respond similarly to climate variability as past populations, they warn that increased rates of human conflict might represent a “large and critical impact” of climate change.

The Bigger Picture

This assertion is disputed by numerous researchers, some 26 of whom joined in a peer-reviewed comment: One effect to rule them all? A comment on climate and conflict, Buhaug, H., Nordkvelle, J., Bernauer, T. et al. Climatic Change (2014) 127: 391. doi:10.1007/s10584-014-1266-1

In contrast to Hsiang and coauthors, we find no evidence of a convergence of findings on climate variability and civil conflict. Recent studies disagree not only on the magnitude of the impact of climate variability but also on the direction of the effect. The aggregate median effect from these studies suggests that a one-standard deviation increase in temperature or loss of rainfall is associated with a 3.5 % increase in conflict risk, although the 95 % highest density area of the distribution of effects cannot exclude the possibility of large negative or positive effects. With all contemporaneous effects, the aggregate point estimate increases somewhat but remains statistically indistinguishable from zero.

To be clear, this commentary should not be taken to imply that climate has no influence on armed conflict. Rather, we argue – in line with recent scientific reviews (Adger et al. 2014; Bernauer et al. 2012; Gleditsch 2012; Klomp and Bulte 2013; Meierding 2013; Scheffran et al. 2012a,b; Theisen et al. 2013; Zografos et al. 2014) – that research to date has failed to converge on a specific and direct association between climate and violent conflict.
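The disputed magnitudes can be put side by side. A hypothetical Python sketch comparing the two headline effect sizes against an assumed baseline conflict risk (the 10% baseline and the multiplicative compounding are illustrative assumptions of mine, not taken from either paper):

```python
# Per-1-sigma changes in conflict risk reported by the two camps
hsiang_burke = 0.111   # +11.1% per sigma (Hsiang et al. 2013)
buhaug_median = 0.035  # +3.5% aggregate median (Buhaug et al. 2014),
                       # statistically indistinguishable from zero

baseline_risk = 0.10   # hypothetical 10% baseline risk of conflict

# Risk after 1-sigma and 2-sigma climate anomalies, assuming the
# effect compounds multiplicatively (an illustrative assumption)
for sigmas in (1, 2):
    risk_hb = baseline_risk * (1 + hsiang_burke) ** sigmas
    risk_bu = baseline_risk * (1 + buhaug_median) ** sigmas
    print(f"{sigmas} sigma: Hsiang/Burke {risk_hb:.4f}, Buhaug {risk_bu:.4f}")
```

The point of the comparison is the spread: two aggregations of the same literature differ by roughly a factor of three, and the smaller estimate cannot be distinguished from no effect at all.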

The Root of Climate Change Bias

The two sides have continued to publish, and the issue is far from settled. Interested observers have described how serious people can disagree so sharply about such findings in climate science.

Modeling and data choices sway conclusions about climate-conflict links, Andrew M. Linke, and Frank D. W. Witmer, Institute of Behavioral Science, University of Colorado, Boulder, CO 80309-0483 here

Conclusions about the climate–conflict relationship are also contingent on the assumptions behind the respective statistical analyses. Although this simple fact is generally understood, we stress the disciplinary preferences in modeling decisions.

However, we believe that the Burke et al. finding is not a “benchmark” in the sense that it is the scientific truth or an objective reality because disciplinary-related modeling decisions, data availability and choices, and coding rules are critical in deriving robust conclusions about temperature and conflict.

After adding additional covariates (models 4 and 6), the significant temperature effect in the Burke et al. (1) model disappears, with sociopolitical variables predicting conflict more effectively than the climate variables. Furthermore, this specification provides additional insights into the between- and within-effects that vary for factors such as political exclusion and prior conflict.

Summary

Sociopolitical variables predict conflict more effectively than climate variables. It is well established that poorer countries, such as those in Africa, are more likely to experience chronic human conflicts. It is also obvious that failing states fall into armed conflicts, being unable to govern effectively due to corruption and illegitimacy.

It boggles the mind that activists promote policies to deny cheap, reliable energy for such countries, perpetuating or increasing their poverty and misery, while claiming such actions reduce the chances of conflicts in the future.

Halvard Buhaug concludes (here):

Vocal actors within policy and practice contend that environmental variability and shocks, such as drought and prolonged heat waves, drive civil wars in Africa. Recently, a widely publicized scientific article appears to substantiate this claim. This paper investigates the empirical foundation for the claimed relationship in detail. Using a host of different model specifications and alternative measures of drought, heat, and civil war, the paper concludes that climate variability is a poor predictor of armed conflict. Instead, African civil wars can be explained by generic structural and contextual conditions: prevalent ethno-political exclusion, poor national economy, and the collapse of the Cold War system.

Footnote:  The Joys of Playing Climate Whack-A-Mole

Dealing with alarmist claims is like playing whack-a-mole. Every time you beat down one bogeyman, another one pops up in another field, and later the first one returns, needing to be confronted again. I have been playing Climate Whack-A-Mole for a while, and if you are interested, there are some hammers supplied below.

The alarmist methodology is repetitive; only the subject changes. First, create a computer model purporting to be a physical or statistical representation of the real world. Then play with the parameters until fears are supported by the model outputs. Disregard or discount divergences from empirical observations. This pattern is described in more detail at Chameleon Climate Models.

This post is the latest in a series here which apply reality filters to test climate models. The first was Temperatures According to Climate Models, where both hindcasting and forecasting were seen to be flawed.

Others in the Series are:

Sea Level Rise: Just the Facts

Data vs. Models #1: Arctic Warming

Data vs. Models #2: Droughts and Floods

Data vs. Models #3: Disasters

Data vs. Models #4: Climates Changing

Climate Medicine

Beware getting sucked into any model.

Planetary Warming: Back to Basics

 

It is often said we must rely on projections from computer simulations of earth’s climate since we have no other earth on which to experiment. That is not actually true, since we have observations of a number of planetary objects in our solar system that also have atmospheres.

This is brought home by a paper, published recently in the journal “Environment Pollution and Climate Change,” written by Ned Nikolov, a Ph.D. in physical science, and Karl Zeller, a retired Ph.D. research meteorologist (the title below is a link to the paper). H/T to Tallbloke for posting on this (here), along with comments by one of the authors.

New Insights on the Physical Nature of the Atmospheric Greenhouse Effect Deduced from an Empirical Planetary Temperature Model

Nikolov and Zeller have written before on this topic, but this paper takes advantage of data from recent decades of space exploration as well as improved observatories. It is thorough, educational and makes a convincing case that a planet’s surface temperature can be predicted from two variables: distance from the sun, and atmospheric mass. This post provides some excerpts and exhibits as a synopsis, hopefully to encourage reading the paper itself.

Abstract

A recent study has revealed that the Earth’s natural atmospheric greenhouse effect is around 90 K or about 2.7 times stronger than assumed for the past 40 years. A thermal enhancement of such a magnitude cannot be explained with the observed amount of outgoing infrared long-wave radiation absorbed by the atmosphere (i.e. ≈ 158 W m-2), thus requiring a re-examination of the underlying Greenhouse theory.

We present here a new investigation into the physical nature of the atmospheric thermal effect using a novel empirical approach toward predicting the Global Mean Annual near-surface equilibrium Temperature (GMAT) of rocky planets with diverse atmospheres. Our method utilizes Dimensional Analysis (DA) applied to a vetted set of observed data from six celestial bodies representing a broad range of physical environments in our Solar System, i.e. Venus, Earth, the Moon, Mars, Titan (a moon of Saturn), and Triton (a moon of Neptune).

Twelve relationships (models) suggested by DA are explored via non-linear regression analyses that involve dimensionless products comprised of solar irradiance, greenhouse-gas partial pressure/density and total atmospheric pressure/density as forcing variables, and two temperature ratios as dependent variables. One non-linear regression model is found to statistically outperform the rest by a wide margin.

Above: Venusian Atmosphere

Our analysis revealed that GMATs of rocky planets with tangible atmospheres and a negligible geothermal surface heating can accurately be predicted over a broad range of conditions using only two forcing variables: top-of-the-atmosphere solar irradiance and total surface atmospheric pressure. The hereto discovered interplanetary pressure-temperature relationship is shown to be statistically robust while describing a smooth physical continuum without climatic tipping points.

This continuum fully explains the recently discovered 90 K thermal effect of Earth’s atmosphere. The new model displays characteristics of an emergent macro-level thermodynamic relationship heretofore unbeknown to science that has important theoretical implications. A key entailment from the model is that the atmospheric ‘greenhouse effect’ currently viewed as a radiative phenomenon is in fact an adiabatic (pressure-induced) thermal enhancement analogous to compression heating and independent of atmospheric composition. (my bold)

Earth Atmosphere Density and Temperature Profile

Consequently, the global down-welling long-wave flux presently assumed to drive Earth’s surface warming appears to be a product of the air temperature set by solar heating and atmospheric pressure. In other words, the so-called ‘greenhouse back radiation’ is globally a result of the atmospheric thermal effect rather than a cause for it. (my bold)

Our empirical model has also fundamental implications for the role of oceans, water vapour, and planetary albedo in global climate. Since produced by a rigorous attempt to describe planetary temperatures in the context of a cosmic continuum using an objective analysis of vetted observations from across the Solar System, these findings call for a paradigm shift in our understanding of the atmospheric ‘greenhouse effect’ as a fundamental property of climate.

The paper demonstrates sound scientific practice: data and sources are fully explained, the pattern analysis is replicable, and the conclusions are set forth in a logical manner. Alternative hypotheses were explored and rejected in favor of one explaining the observations to near perfection, and also showing applicability to other cases.

Equation (10a) implies that GMATs of rocky planets can be calculated as a product of two quantities: the planet’s average surface temperature in the absence of an atmosphere (Tna, K) and a nondimensional factor (Ea ≥ 1.0) quantifying the relative thermal effect of the atmosphere.
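As an illustration, Eq. (10a) can be exercised with Earth’s numbers. A sketch assuming a mean surface temperature of about 287 K (a conventional round figure of mine, not taken from the paper’s tables) together with the 90 K thermal effect quoted in the abstract:

```python
# Illustrative check of Eq. (10a): GMAT = Tna * Ea.
# The 287.4 K surface temperature is an assumed round figure;
# the 90 K enhancement is the value quoted in the paper's abstract.
gmat = 287.4           # K, Earth's observed mean surface temperature (assumed)
thermal_effect = 90.0  # K, atmospheric thermal enhancement per the abstract

tna = gmat - thermal_effect  # airless reference temperature, ~197 K
ea = gmat / tna              # nondimensional enhancement factor, Ea >= 1.0
print(round(tna, 1), round(ea, 3))  # 197.4 1.456
```

On these assumed numbers, Earth’s atmosphere raises the surface temperature about 46% above its airless reference value, consistent with the paper’s claim that the effect is far larger than the conventionally assumed 33 K.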

As an example of technical descriptions, consider how the paper describes issues relating to the calculation of Tna.

For bodies with tangible atmospheres (such as Venus, Earth, Mars, Titan and Triton), one must calculate Tna using αe=0.132 and ηe=0.00971, which assumes a Moon-like airless reference surface in accordance with our pre-analysis premise. For bodies with tenuous atmospheres (such as Mercury, the Moon, Callisto and Europa), Tna should be calculated from Eq. (4a) (or Eq. 4b respectively if S>0.15 W m-2 and/or Rg ≈ 0 W m-2) using the body’s observed values of Bond albedo αe and ground heat storage fraction ηe.

In the context of this model, a tangible atmosphere is defined as one that has significantly modified the optical and thermo-physical properties of a planet’s surface compared to an airless environment and/or noticeably impacted the overall planetary albedo by enabling the formation of clouds and haze. A tenuous atmosphere, on the other hand, is one that has not had a measurable influence on the surface albedo and regolith thermo-physical properties and is completely transparent to shortwave radiation.

The need for such delineation of atmospheric masses when calculating Tna arises from the fact that Eq. (10a) accurately describes RATEs of planetary bodies with tangible atmospheres over a wide range of conditions without explicitly accounting for the observed large differences in albedos (i.e., from 0.235 to 0.90) while assuming constant values of αe and ηe for the airless equivalent of these bodies. One possible explanation for this counterintuitive empirical result is that atmospheric pressure alters the planetary albedo and heat storage properties of the surface in a way that transforms these parameters from independent controllers of the global temperature in airless bodies to intrinsic byproducts of the climate system itself in worlds with appreciable atmospheres. In other words, once atmospheric pressure rises above a certain level, the effects of albedo and ground heat storage on GMAT become implicitly accounted for by Eq. (11). (my bold)

Significance

Equation (10b) describes the long-term (30 years) equilibrium GMATs of planetary bodies and does not predict inter-annual global temperature variations caused by intrinsic fluctuations of cloud albedo and/or ocean heat uptake. Thus, the observed 0.82 K rise of Earth’s global temperature since 1880 is not captured by our model, since this warming was likely not the result of an increased atmospheric pressure. Recent analyses of observed dimming and brightening periods worldwide [97-99] suggest that the warming over the past 130 years might have been caused by a decrease in global cloud cover and a subsequent increased absorption of solar radiation by the surface. Similarly, the mega shift of Earth’s climate from a ‘hothouse’ to an ‘icehouse’ evident in the sedimentary archives over the past 51 My cannot be explained by Eq. (10b) unless caused by a large loss of atmospheric mass and a corresponding significant drop in surface air pressure since the early Eocene.

Role of greenhouse gases from the new model perspective

Our analysis revealed a poor relationship between GMAT and the amount of greenhouse gases in planetary atmospheres across a broad range of environments in the Solar System (Figures 1-3 and Table 5). This is a surprising result from the standpoint of the current Greenhouse theory, which assumes that an atmosphere warms the surface of a planet (or moon) via trapping of radiant heat by certain gases controlling the atmospheric infrared optical depth [4,9,10]. The atmospheric opacity to LW radiation depends on air density and gas absorptivity, which in turn are functions of total pressure, temperature, and greenhouse-gas concentrations [9]. Pressure also controls the broadening of infrared absorption lines in individual gases. Therefore, the higher the pressure, the larger the infrared optical depth of an atmosphere, and the stronger the expected greenhouse effect would be. According to the present climate theory, pressure only indirectly affects global surface temperature through the atmospheric infrared opacity and its presumed constraint on the planet’s LW emission to Space [9,107].

The artificial decoupling between radiative and convective heat-transfer processes adopted in climate models leads to mathematically and physically incorrect solutions with regard to surface temperature. The LW radiative transfer in a real climate system is intimately intertwined with turbulent convection/advection as both transport mechanisms occur simultaneously. Since convection (and especially the moist one) is orders of magnitude more efficient in transferring energy than LW radiation [3,4], and because heat preferentially travels along the path of least resistance, a properly coupled radiative-convective algorithm of energy exchange will produce quantitatively and qualitatively different temperature solutions in response to a changing atmospheric composition than the ones obtained by current climate models. Specifically, a correctly coupled convective-radiative system will render the surface temperature insensitive to variations in the atmospheric infrared optical depth, a result indirectly supported by our analysis as well. This topic requires further investigation beyond the scope of the present study. (my bold)

The direct effect of atmospheric pressure on the global surface temperature has received virtually no attention in climate science thus far. However, the results from our empirical data analysis suggest that it deserves a serious consideration in the future.

How did Saturn’s moon Titan secure an atmosphere when no other moons in the solar system did? The answer lies largely in its size and location. Here, Titan as imaged in May 2005 by the Cassini spacecraft from about 900,000 miles away. Photo credit: Courtesy NASA/JPL/Space Science Institute

Physical nature of the atmospheric ‘greenhouse effect’

According to Eq. (10b), the heating mechanism of planetary atmospheres is analogous to a gravity-controlled adiabatic compression acting upon the entire surface. This means that the atmosphere does not function as an insulator reducing the rate of planet’s infrared cooling to space as presently assumed [9,10], but instead adiabatically boosts the kinetic energy of the lower troposphere beyond the level of solar input through gas compression. Hence, the physical nature of the atmospheric ‘greenhouse effect’ is a pressure-induced thermal enhancement independent of atmospheric composition. (my bold)

This mechanism is fundamentally different from the hypothesized ‘trapping’ of LW radiation by atmospheric trace gases first proposed in the 19th century and presently forming the core of the Greenhouse climate theory. However, a radiant-heat trapping by freely convective gases has never been demonstrated experimentally. We should point out that the hereto deduced adiabatic (pressure-controlled) nature of the atmospheric thermal effect rests on an objective analysis of vetted planetary observations from across the Solar System and is backed by proven thermodynamic principles, while the ‘trapping’ of LW radiation by an unconstrained atmosphere surmised by Fourier, Tyndall and Arrhenius in the 1800s was based on a theoretical conjecture. The latter has later been coded into algorithms that describe the surface temperature as a function of atmospheric infrared optical depth (instead of pressure) by artificially decoupling radiative transfer from convective heat exchange. Note also that the Ideal Gas Law (PV=nRT) forming the basis of atmospheric physics is indifferent to the gas chemical composition. (my bold)
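The closing point about the Ideal Gas Law is easy to demonstrate: PV = nRT contains no term for composition, so at a given pressure, volume and molar amount the temperature is the same whatever the gas. A minimal Python sketch:

```python
# The Ideal Gas Law, PV = nRT, is indifferent to gas composition:
# given P, V and n, the temperature T is the same for any ideal gas.
R = 8.314  # J/(mol K), universal gas constant

def temperature(p_pa, v_m3, n_mol):
    """Temperature of n moles of ANY ideal gas at pressure p and volume v."""
    return p_pa * v_m3 / (n_mol * R)

# One mole at one atmosphere in 22.4 litres: the same answer whether
# the gas is CO2, N2 or argon
t = temperature(101325, 0.0224, 1.0)
print(round(t, 1))  # 273.0 K, about 0 degrees C
```

This illustrates only the quoted point about the law itself; whether that law alone accounts for surface warming is exactly what is in dispute between the paper’s authors and mainstream Greenhouse theory, which locates the composition dependence in the radiative transfer, not in the gas law.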

Climate stability

Our semi-empirical model (Equations 4a, 10b and 11) suggests that, as long as the mean annual TOA solar flux and the total atmospheric mass of a planet are stationary, the equilibrium GMAT will remain stable. Inter-annual and decadal variations of global temperature forced by fluctuations of cloud cover, for example, are expected to be small compared to the magnitude of the background atmospheric warming because of strong negative feedbacks limiting the albedo changes. This implies a relatively stable climate for a planet such as Earth absent significant shifts in the total atmospheric mass and the planet’s orbital distance to the Sun. Hence, planetary climates appear to be free of tipping points, i.e., functional states fostering rapid and irreversible changes in the global temperature as a result of hypothesized positive feedbacks thought to operate within the system. In other words, our results suggest that the Earth’s climate is well buffered against sudden changes.

The hypothesis that a freely convective atmosphere could retain (trap) radiant heat due its opacity has remained undisputed since its introduction in the early 1800s even though it was based on a theoretical conjecture that has never been proven experimentally. It is important to note in this regard that the well-documented enhanced absorption of thermal radiation by certain gases does not imply an ability of such gases to trap heat in an open atmospheric environment. This is because, in gaseous systems, heat is primarily transferred (dissipated) by convection (i.e., through fluid motion) rather than radiative exchange.  (my bold)

If gases of high LW absorptivity/emissivity such as CO2, methane and water vapor were indeed capable of trapping radiant heat, they could be used as insulators. However, practical experience has taught us that thermal radiation losses can only be reduced by using materials of very low IR absorptivity/emissivity and correspondingly high thermal reflectivity such as aluminum foil. These materials are known among engineers at NASA and in the construction industry as radiant barriers [129]. It is also known that high-emissivity materials promote radiative cooling. Yet, all climate models proposed since the 1800s were built on the premise that the atmosphere warms Earth by limiting radiant heat losses of the surface through the action of IR-absorbing gases aloft.

If a trapping of radiant heat occurred in Earth’s atmosphere, the same mechanism should also be expected to operate in the atmospheres of other planetary bodies. Thus, the Greenhouse concept should be able to mathematically describe the observed variation of average planetary surface temperatures across the Solar System as a continuous function of the atmospheric infrared optical depth and solar insolation. However, to our knowledge, such a continuous description (model) does not exist. 

Summary

The planetary temperature model consisting of Equations (4a), (10b), (11) has several fundamental theoretical implications, i.e.,
• The ‘greenhouse effect’ is not a radiative phenomenon driven by the atmospheric infrared optical depth as presently believed, but a pressure-induced thermal enhancement analogous to adiabatic heating and independent of atmospheric composition;
• The down-welling LW radiation is not a global driver of surface warming as hypothesized for over 100 years but a product of the near-surface air temperature controlled by solar heating and atmospheric pressure;
• The albedo of planetary bodies with tangible atmospheres is not an independent driver of climate but an intrinsic property (a byproduct) of the climate system itself. This does not mean that the cloud albedo cannot be influenced by external forcing such as solar wind or galactic cosmic rays. However, the magnitude of such influences is expected to be small due to the stabilizing effect of negative feedbacks operating within the system. This novel understanding explains the observed remarkable stability of planetary albedos;
• The equilibrium surface temperature of a planet is bound to remain stable (i.e., within ± 1 K) as long as the atmospheric mass and the TOA mean solar irradiance are stationary. Hence, Earth’s climate system is well buffered against sudden changes and has no tipping points;
• The proposed net positive feedback between surface temperature and the atmospheric infrared opacity controlled by water vapor appears to be a model artifact resulting from a mathematical decoupling of the radiative-convective heat transfer rather than a physical reality.
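The "pressure-induced thermal enhancement analogous to adiabatic heating" in the first bullet can be illustrated (this is a generic textbook relation, not the paper's actual model) by Poisson's equation for a dry adiabatic process, which shows how an air parcel's temperature scales with pressure:

```python
# Illustrative only: Poisson's relation T = T0 * (P/P0)**(R/cp) for a dry
# adiabatic process, showing temperature rising with pressure. The constants
# are standard values for dry air, not parameters from the paper above.

R_CP = 287.0 / 1004.0  # gas constant / specific heat for dry air (~0.286)

def adiabatic_temperature(t0_k, p0_hpa, p_hpa):
    """Temperature of a parcel moved adiabatically from pressure p0 to p."""
    return t0_k * (p_hpa / p0_hpa) ** R_CP

# A parcel at 236 K and 500 hPa, compressed adiabatically to 1000 hPa,
# warms to roughly 288 K -- near Earth's mean surface temperature.
print(adiabatic_temperature(236.0, 500.0, 1000.0))
```

The point of the analogy is simply that compression alone raises temperature; whether this accounts for the full surface thermal enhancement is the paper's claim, not something this sketch can establish.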

Update July 13, 2017

Michael Lewis pointed to a link on this subject in his comment below.  Rereading the discussion thread, I appreciated again this point-by-point response from Kristian to Tim Folkerts, so I am adding it to the post. (To be clear, T_e means emission temperature, the same as Tna in the article above, while T_s means surface temperature.)

Kristian says:

August 4, 2016 at 9:24 AM

Tim Folkerts says, August 3, 2016 at 3:33 PM:
“In any case, it seems we both agree that the atmosphere has some warming effect.”
That’s quite obvious. You only need to compare Earth’s T_s with the Moon’s.

“I agree that the mass itself plays a role. Mass creates thermal inertia to even out temperature swings. The mass of the atmosphere (and oceans) also allows convection to carry energy from warmer areas to cooler areas, which further reduces variations. By themselves, these could do no more than bring T_s UP TOWARD T_e.”
True.

“I see no physics that would explain mass itself raising T_s ABOVE T_e.”
Just as the radiative properties of gaseous molecules are also not able – all by themselves – to raise a planet’s T_s above its T_e. No, both mass and radiative properties are needed.

“To get above T_e we need something to change the outgoing thermal radiation, eg GHGs at a high enough altitude to be significantly cooler than the surface.”
Yes, but then we also need an air column above the solar-heated surface that can have such a “high enough altitude” in the first place. We also need that altitude to be cooler on average than the surface. IOW, we need mass. A certain gas density/pressure (molecular interaction). And we need fluid dynamics.

“So the key factor is ALTITUDE here (with some definite dependence of the concentrations of the GHGs as well).”
No. There is no dependence on the CONCENTRATION/CONTENT of IR-active constituents in an atmosphere. An atmosphere definitely needs to be IR active (although it’s evidently not enough) for a planet’s T_s to become higher than its T_e. It also needs to be IR active to be able to adequately rid itself of its absorbed energy from the surface (radiatively AND non-radiatively transferred) and directly from the Sun. But once it’s IR active, there is no dependence on the degree of activity. Because then the atmosphere has become stably convectively operative. And all that matters from then on is atmospheric MASS and SOLAR INPUT (TSI and global albedo).

“You say that atmospheric mass seems to force. Do you think that mass alone without GHGs could force temperatures higher than T_e?”
No. Just like “GHGs” alone could also not force T_s higher than T_e. You need both.
* * *
So I say: There IS a “GHE”. But it’s ultimately massively caused. The radiative properties are simply a tool. A means to an end. And there definitely ISN’T an “anthropogenically enhanced GHE” (AGW). It cannot happen.
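For readers who want to check the T_e figures behind this exchange: the emission (effective) temperature follows from the Stefan-Boltzmann law, T_e = (S(1−A)/4σ)^(1/4). A quick sketch with commonly quoted round-number inputs (the albedo values are my assumptions, not figures from the comment):

```python
# Effective (emission) temperature from the Stefan-Boltzmann law.
# S = TOA solar irradiance (W/m^2), A = Bond albedo. Inputs are commonly
# quoted round numbers, not values taken from the article.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emission_temperature(s, albedo):
    return (s * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(emission_temperature(1361.0, 0.30))  # Earth: ~255 K (observed T_s ~288 K)
print(emission_temperature(1361.0, 0.11))  # Moon:  ~270 K (mean T_s is far lower)
```

The Earth/Moon contrast is exactly Kristian's opening point: Earth's T_s sits well above its T_e, while the airless Moon's mean surface temperature sits well below its T_e.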

July 10 Arctic Ice Report

The extent of Arctic ice fell to a new wintertime low in March 2017, but springtime ice persisted, and June and July extents are hovering near the decadal average.

The graph shows the last two weeks ending on day 190 (July 9, 2017).  2016 and 2017 are nearly average, just under 9M km2; 2007 is about 150k km2 lower, and SII 2017 lower still. The recent drop was largely due to Hudson Bay going to open water in just ten days (images at Ten Days in Hudson Bay).

As we shall see, this year’s extents are in surplus on the Atlantic side, offset by deficits on the Pacific side and in Hudson Bay.  The image compares day 190 with one year ago.

The Table compares 2017 day 190 ice extents with the decadal average and with 2007.

Region | 2017 Day 190 | Day 190 Average | 2017-Ave. | 2007 Day 190 | 2017-2007
(0) Northern_Hemisphere | 8877716 | 8991896 | -114181 | 8732146 | 145570
(1) Beaufort_Sea | 825960 | 866156 | -40196 | 860404 | -34443
(2) Chukchi_Sea | 563718 | 683345 | -119626 | 609005 | -45287
(3) East_Siberian_Sea | 868691 | 1000309 | -131618 | 871751 | -3060
(4) Laptev_Sea | 719324 | 674515 | 44809 | 647038 | 72285
(5) Kara_Sea | 538340 | 437243 | 101097 | 499369 | 38971
(6) Barents_Sea | 125872 | 69548 | 56324 | 77180 | 48692
(7) Greenland_Sea | 563021 | 450768 | 112253 | 475611 | 87410
(8) Baffin_Bay_Gulf_of_St._Lawrence | 419134 | 364194 | 54941 | 379529 | 39606
(9) Canadian_Archipelago | 702592 | 750592 | -48000 | 743621 | -41030
(10) Hudson_Bay | 306542 | 499414 | -192873 | 360041 | -53499
(11) Central_Arctic | 3243319 | 3183825 | 59494 | 3205488 | 37831

The deficits in BCE (Beaufort, Chukchi, East Siberian) are offset by surpluses elsewhere.  2017 would be above average were it not for the 193k km2 deficit in Hudson Bay.
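The difference columns in the table are straightforward subtractions, and readers can reproduce them. A minimal sketch for a few regions, with extents (km2) copied from the table above:

```python
# Reproduce the table's difference columns (2017-Ave. and 2017-2007)
# for a few regions. Extents in km^2, copied from the table above;
# small +/-1 discrepancies vs. the table reflect rounding in the source.

regions = {
    # name: (2017 day 190, day-190 average, 2007 day 190)
    "Hudson_Bay":     (306542,  499414,  360041),
    "Barents_Sea":    (125872,   69548,   77180),
    "Central_Arctic": (3243319, 3183825, 3205488),
}

for name, (y2017, avg, y2007) in regions.items():
    print(f"{name}: 2017-Ave. = {y2017 - avg:+d}, 2017-2007 = {y2017 - y2007:+d}")
```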

The graph below shows Barents this year continues to be above average matching the record year of 2014.  It will be interesting to see if 2017 hits its minimum around day 210 like 2014 did.

The black line is average for the last 11 years.  2007 in purple appears close to an average year.  2014 had the highest annual extent in Barents Sea, due to higher and later maximums, holding onto ice during the summer, and recovering quickly.  In contrast, 2016 was the lowest annual extent, melting out early and recovering later.  2017 in blue started out way behind, but grew rapidly to reach average, and then persisted longer to exceed even 2014.  It may yet beat out 2014 as the highest in the last 11 years.

For more on why Barents Sea matters, see Barents Icicles.

Ocean Cools and Air Temps Follow

June Sea Surface Temperatures (SSTs) are now available, and we can see ocean temps dropping further after a short pause, resuming the downward trajectory of the previous 12 months.

HadSST is generally regarded as the best of the global SST data sets, and so the temperature story here comes from that source, the latest version being HadSST3.

The chart below shows the last two years of SST monthly anomalies as reported in HadSST3 including June 2017.

In May, despite a slight rise in the Tropics, declines in both hemispheres and globally caused SST cooling to resume after an upward bump in April.  Now in June a large upward spike in NH was overcome by an even larger drop in SH, which is three months into a cooling phase. The Tropics also cooled off, so the Global anomaly continued to decline.  Presently NH and SH are both changing strongly, but in opposite directions.

Note that the higher temps in 2015 and 2016 were due firstly to a sharp rise in Tropical SST, beginning in March 2015, peaking in January 2016, and steadily declining back to its starting level. Secondly, the Northern Hemisphere added two bumps on the shoulders of the Tropical warming, with peaks in August of each year. The global release of heat was not dramatic, because the Southern Hemisphere offset the Northern one. June 2017 matches June 2015 closely, with almost the same anomalies for NH, SH and Global; the Tropics are lower now and trending down, compared with an upward trend in 2015.
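For readers new to these charts, the "anomalies" plotted are departures from a climatological baseline (HadSST3 anomalies are relative to a 1961-1990 baseline), not absolute temperatures. A minimal sketch of the calculation, using invented numbers:

```python
# How a monthly SST anomaly is formed: observed monthly mean minus the
# climatological mean for that calendar month over a baseline period.
# HadSST3 uses a 1961-1990 baseline; the numbers below are invented.

baseline_june = 16.20   # hypothetical 1961-1990 mean June SST, deg C
observed_june = 16.65   # hypothetical observed June SST, deg C

anomaly = observed_june - baseline_june
print(f"June anomaly: {anomaly:+.2f} C")  # prints "June anomaly: +0.45 C"
```

Working in anomalies is what allows NH, SH, Tropical and Global series to be compared on one chart despite very different absolute temperatures.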

June satellite measures of air over the land and oceans also show a sharp drop.  The graph below provides UAH v6 TLT (lower troposphere temperatures), confirming the general impression from SSTs.

In contrast with the SST measurements, air temps in the TLT upticked in May, with all areas participating in a rise of almost 0.2C.  Then in June SH dropped 0.4C and NH 0.2C, while the Tropics declined slightly. The end result has all areas back to March values, except for the Tropics.  June 2017 compares closely with July 2015, but with no signs of an impending El Nino.

We have seen many claims that the temperature records for 2016 and 2015 prove dangerous man-made warming; at least one senator stated as much in a confirmation hearing.  Yet HadSST3 data for the last two years show clearly how the oceans govern global average temperatures.

USS Pearl Harbor deploys Global Drifter Buoys in Pacific Ocean

The best context for understanding these two years comes from the world’s sea surface temperatures (SST), for several reasons:

  • The ocean covers 71% of the globe and drives average temperatures;
  • SSTs have a constant water content (unlike air temperatures), so they give a better reading of heat content variations;
  • A major El Nino was the dominant climate feature these years.

Solar energy accumulates massively in the ocean and is variably released during circulation events.
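The bullet points above can be put in rough numbers: per square metre, the entire atmospheric column has about the same heat capacity as only the top few metres of ocean, which is why SSTs dominate the global average. A back-of-envelope sketch with standard textbook constants (my own rounding, not figures from any dataset):

```python
# Back-of-envelope: depth of ocean whose heat capacity per m^2 equals
# that of the entire atmospheric column. Standard textbook constants.

G = 9.81            # gravity, m/s^2
P_SURFACE = 101325  # mean sea-level pressure, Pa
CP_AIR = 1004.0     # specific heat of air, J/(kg K)
RHO_SEA = 1025.0    # seawater density, kg/m^3
CP_SEA = 3990.0     # specific heat of seawater, J/(kg K)

# Atmospheric column mass per m^2 is P/g; multiply by cp for heat capacity.
air_column_heat_capacity = (P_SURFACE / G) * CP_AIR        # J/(m^2 K)
equivalent_ocean_depth = air_column_heat_capacity / (RHO_SEA * CP_SEA)

print(f"{equivalent_ocean_depth:.1f} m")  # roughly 2-3 m of ocean
```

With these constants the answer comes out near 2.5 m: a thin skin of ocean can absorb or release as much heat as the whole atmosphere, so variable ocean heat release (as in an El Nino) readily moves global air temperatures.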