USCS Warnings of Coastal Flooding

Be not confused: USCS is not the US Coastal Service; rather, it stands for the Union of Super Concerned Scientists, or UCS for short. Using their considerable PR skills and budgets, they have plastered warnings across the media targeting major coastal cities, designed to strike terror into anyone holding real estate in those places. Example headlines include:

Sea level rise could put thousands of homes in this SC county at risk, study says – The State, South Carolina

Taxpayers in the Hamptons among the most exposed to rising seas – Crain’s New York Business

Adapting to Climate Change Will Take More Than Just Seawalls and Levees – Scientific American

The Biggest Threat Facing the City of Miami – Smithsonian Magazine

What Does Maryland’s Gubernatorial Race Mean For Flood Management? – The Real News Network

Study: Thousands of Palm Beach County homes impacted by sea-level rise – WPTV, Florida

Sinking Land and Climate Change Are Worsening Tidal Floods on the Texas Coast – Texas Observer

Sea Level Rise Will Threaten Thousands of California Homes – Scientific American

300,000 coastal homes in US, worth $120 billion, at risk of chronic floods from rising seas – USA Today

That last headline captures the thrust of the UCS study Underwater: Rising Seas, Chronic Floods, and the Implications for US Coastal Real Estate (2018):

Sea levels are rising. Tides are inching higher. High-tide floods are becoming more frequent and reaching farther inland. And hundreds of US coastal communities will soon face chronic, disruptive flooding that directly affects people’s homes, lives, and properties.

Yet property values in most coastal real estate markets do not currently reflect this risk. And most homeowners, communities, and investors are not aware of the financial losses they may soon face.

This analysis looks at what’s at risk for US coastal real estate from sea level rise—and the challenges and choices we face now and in the decades to come.

The report and supporting documents give detailed dire warnings state by state, and even down to counties and townships. As an example of the damage projections, consider this table estimating 2030 impacts:

| State | Homes at Risk | Value at Risk | Property Tax at Risk | Population in At-Risk Homes |
|-------|--------------:|--------------:|---------------------:|----------------------------:|
| AL | 3,542 | $1,230,676,217 | $5,918,124 | 4,367 |
| CA | 13,554 | $10,312,366,952 | $128,270,417 | 33,430 |
| CT | 2,540 | $1,921,428,017 | $29,273,072 | 5,690 |
| DC | – | $0 | $0 | – |
| DE | 2,539 | $127,620,700 | $2,180,222 | 3,328 |
| FL | 20,999 | $7,861,230,791 | $101,267,251 | 32,341 |
| GA | 4,028 | $1,379,638,946 | $13,736,791 | 7,563 |
| LA | 26,336 | $2,528,283,022 | $20,251,201 | 63,773 |
| MA | 3,303 | $2,018,914,670 | $17,887,931 | 6,500 |
| MD | 8,381 | $1,965,882,200 | $16,808,488 | 13,808 |
| ME | 788 | $330,580,830 | $3,933,806 | 1,047 |
| MS | 918 | $100,859,844 | $1,392,059 | 1,932 |
| NC | 6,376 | $1,449,186,258 | $9,531,481 | 10,234 |
| NH | 1,034 | $376,087,216 | $5,129,494 | 1,659 |
| NJ | 26,651 | $10,440,814,375 | $162,755,196 | 35,773 |
| NY | 6,175 | $3,646,706,494 | $74,353,809 | 16,881 |
| OR | 677 | $110,461,140 | $990,850 | 1,277 |
| PA | 138 | $18,199,572 | $204,111 | 310 |
| RI | 419 | $299,462,350 | $3,842,996 | 793 |
| SC | 5,779 | $2,882,357,415 | $22,921,550 | 8,715 |
| TX | 5,505 | $1,172,865,533 | $19,453,940 | 9,802 |
| VA | 3,849 | $838,437,710 | $8,296,637 | 6,086 |
| WA | 3,691 | $1,392,047,121 | $13,440,420 | 7,320 |

The methodology, of course, is climate models all the way down. They explain:

Three sea level rise scenarios, developed by the National Oceanic and Atmospheric Administration (NOAA) and localized for this analysis, are included:

  • A high scenario that assumes a continued rise in global carbon emissions and an increasing loss of land ice; global average sea level is projected to rise about 2 feet by 2045 and about 6.5 feet by 2100.
  • An intermediate scenario that assumes global carbon emissions rise through the middle of the century then begin to decline, and ice sheets melt at rates in line with historical observations; global average sea level is projected to rise about 1 foot by 2035 and about 4 feet by 2100.
  • A low scenario that assumes nations successfully limit global warming to less than 2 degrees Celsius (the goal set by the Paris Climate Agreement) and ice loss is limited; global average sea level is projected to rise about 1.6 feet by 2100.
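To make these scenarios concrete, the quoted endpoints can be converted into implied average rates of rise. A minimal Python sketch, using only the feet-by-year figures stated above and a 2000 baseline (the baseline year is an assumption consistent with the gauge charts discussed later):

```python
# Convert the NOAA scenario endpoints quoted above into implied average
# rates of sea level rise (mm/yr), measured from a 2000 baseline.
FT_TO_MM = 304.8  # feet to millimeters

scenarios = {
    # scenario name: list of (year, projected rise in feet since 2000)
    "high":         [(2045, 2.0), (2100, 6.5)],
    "intermediate": [(2035, 1.0), (2100, 4.0)],
    "low":          [(2100, 1.6)],
}

for name, points in scenarios.items():
    for year, feet in points:
        rate = feet * FT_TO_MM / (year - 2000)  # average mm/yr since 2000
        print(f"{name:>12}: {feet} ft by {year} = {rate:.1f} mm/yr average")
```

With these inputs the high scenario implies roughly 20 mm/yr averaged over the century, an order of magnitude above a long-term gauge trend of 20 cm per century (2 mm/yr).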

Oh, and they did not forget the disclaimer:

Disclaimer
This research is intended to help individuals and communities appreciate when sea level rise may place existing coastal properties (aggregated by community) at risk of tidal flooding. It captures the current value and tax base contribution of those properties (also aggregated by community) and is not intended to project changes in those values, nor in the value of any specific property.

The projections herein are made to the best of our scientific knowledge and comport with our scientific and peer review standards. They are limited by a range of factors, including but not limited to the quality of property-level data, the resolution of coastal elevation models, the potential installment of defensive measures not captured by those models, and uncertainty around the future pace of sea level rise. More information on caveats and limitations can be found at http://www.ucsusa.org/underwater.

Neither the authors nor the Union of Concerned Scientists are responsible or liable for financial or reputational implications or damages to homeowners, insurers, investors, mortgage holders, municipalities, or any other entities. The content of this analysis should not be relied on to make business, real estate or other real-world decisions without independent consultation with professional experts with relevant experience. The views expressed by individuals in the quoted text of this report do not represent an endorsement of the analysis or its results.

The need for a disclaimer becomes evident when looking into the details. The NOAA reference is Global and Regional Sea Level Rise Scenarios for the United States, NOAA Technical Report NOS CO-OPS 083.

Since the text emphasizes four examples of their scenarios, let’s consider them here. First there is San Francisco, a city currently suing oil companies over sea level rise. From tidesandcurrents comes this tidal gauge record:
It’s a solid, long-term record providing a century of measurements from 1900 through 2017. The graph below compares the present observed trend with climate model projections out to 2100.

Since the record is set at zero in 2000, the difference in 21st-century expectations is stark. Instead of the existing trend continuing to around 20 cm, the models project a 2.5-meter rise by 2100.

New York City is represented by the Battery tidal gauge:
Again, a respectable record with good 20th-century coverage. And the models say:
The red line projects a 2,500 mm rise vs. 284 mm, almost a factor of 10 more. The divergence is evident even in the first 17 years.

Florida comes in for a lot of attention, especially the Keys, so here is Key West:
A similar pattern to the NYC Battery gauge, and here is the projection:
The pattern is established: Instead of a rise of about 30 cm, the models project 250 cm.

Finally, probably the worst case, and well-known to all already is Galveston, Texas:
The water has been rising there for a long time, so maybe the models got this one close.
The gap is less than the others since the rising trend is much higher, but the projection is still four times the past.  Galveston is at risk, all right, but we didn’t need this analysis to tell us that.
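The gauge-versus-model gaps described above can be tabulated directly from the numbers quoted in this post. A short sketch (the Galveston trend is not stated explicitly, so it is back-calculated from the "four times" remark and should be treated as an assumption):

```python
# Ratio of the modeled 2100 rise to the extrapolated tide-gauge trend for
# the four gauges discussed above. Values in mm, taken from the post.
gauges = {
    "San Francisco": (200, 2500),  # ~20 cm trend vs 2.5 m projected
    "NY Battery":    (284, 2500),
    "Key West":      (300, 2500),  # ~30 cm trend vs 250 cm projected
    "Galveston":     (625, 2500),  # assumed trend: 2500 / 4
}

for name, (trend_mm, model_mm) in gauges.items():
    print(f"{name:>13}: model/trend = {model_mm / trend_mm:.1f}x")
```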

A previous post, Unbelievable Climate Models, goes into why they are running so hot and so extreme, and why they cannot be trusted.


Unbelievable Climate Models

It is not just you thinking the world is not warming the way climate models predicted. The models are flawed, and their estimates of the climate’s future response to rising CO2 are way too hot. Yet these overcooked forecasts are the basis for policy makers to consider all kinds of climate impacts, from sea level rise to food production and outbreaks of acne.

The models’ outputs are contradicted by the instrumental temperature records. So a choice must be made: Shall we rely on measurements of our past climate experience, or embrace the much warmer future envisioned by these models?

Ross McKitrick takes us through this fundamental issue in his Financial Post article All those warming-climate predictions suddenly have a big, new problem. Excerpts below with my bolds, headers and images.

Why ECS is Important

One of the most important numbers in the world goes by the catchy title of Equilibrium Climate Sensitivity, or ECS. It is a measure of how much the climate responds to greenhouse gases. More formally, it is defined as the increase, in degrees Celsius, of average temperatures around the world, after doubling the amount of carbon dioxide in the atmosphere and allowing the atmosphere and the oceans to adjust fully to the change. The reason it’s important is that it is the ultimate justification for governmental policies to fight climate change.

The United Nations Intergovernmental Panel on Climate Change (IPCC) says ECS is likely between 1.5 and 4.5 degrees Celsius, but it can’t be more precise than that. Which is too bad, because an enormous amount of public policy depends on its value. People who study the impacts of global warming have found that if ECS is low — say, less than two — then the impacts of global warming on the economy will be mostly small and, in many places, mildly beneficial. If it is very low, for instance around one, it means greenhouse gas emissions are simply not worth doing anything about. But if ECS is high — say, around four degrees or more — then climate change is probably a big problem. We may not be able to stop it, but we’d better get ready to adapt to it.

So, somebody, somewhere, ought to measure ECS. As it turns out, a lot of people have been trying, and what they have found has enormous policy implications.

The violins span 5–95% ranges; their widths indicate how PDF values vary with ECS. Black lines show medians, red lines span 17–83% ‘likely’ ranges. Published estimates based directly on observed warming are shown in blue. Unpublished estimates of mine based on warming attributable to greenhouse gases inferred by two recent detection and attribution studies are shown in green. CMIP5 models are shown in salmon. The observational ECS estimates have broadly similar medians and ‘likely’ ranges, all of which are far below the corresponding values for the CMIP5 models. Source: Nic Lewis at Climate Audit https://climateaudit.org/2015/04/13/pitfalls-in-climate-sensitivity-estimation-part-2/

Methods Matter

To understand why, we first need to delve into the methodology a bit. There are two ways scientists try to estimate ECS. The first is to use a climate model, double the modeled CO2 concentration from the pre-industrial level, and let it run until temperatures stabilize a few hundred years into the future. This approach, called the model-based method, depends for its accuracy on the validity of the climate model, and since models differ quite a bit from one another, it yields a wide range of possible answers. A well-known statistical distribution derived from modeling studies summarizes the uncertainties in this method. It shows that ECS is probably between 2 and 4.5 degrees, possibly as low as 1.5 but not lower, and possibly as high as nine degrees. This range of potential warming is very influential on economic analyses of the costs of climate change.

The second method is to use long-term historical data on temperatures, solar activity, carbon-dioxide emissions and atmospheric chemistry to estimate ECS using a simple statistical model derived by applying the law of conservation of energy to the planetary atmosphere. This is called the Energy Balance method. It relies on some extrapolation to satisfy the definition of ECS but has the advantage of taking account of the available data showing how the actual atmosphere has behaved over the past 150 years.
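The Energy Balance method boils down to a single conservation-of-energy identity: ECS = F_2xCO2 × ΔT / (ΔF − ΔQ), where ΔT is the observed warming, ΔF the change in radiative forcing, and ΔQ the change in planetary heat uptake between a base and a final period. A minimal sketch follows; the input values are round illustrative numbers, not the actual Lewis and Curry inputs:

```python
# A minimal sketch of the Energy Balance ECS estimate described above.
#     ECS = F_2xCO2 * dT / (dF - dQ)
# All inputs below are illustrative round numbers (assumptions), chosen
# only to show the arithmetic of the method.
F_2XCO2 = 3.7  # W/m^2, canonical forcing from a doubling of CO2
dT = 0.8       # K, observed warming between periods (illustrative)
dF = 2.3       # W/m^2, change in radiative forcing (illustrative)
dQ = 0.5       # W/m^2, change in ocean heat uptake (illustrative)

ecs = F_2XCO2 * dT / (dF - dQ)
print(f"Energy-balance ECS estimate: {ecs:.2f} K")  # ~1.6 K with these inputs
```

Note how the estimate is anchored to measured quantities: larger observed heat uptake or smaller forcing change pushes the estimate up, while larger forcing for the same warming pushes it down.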

The surprising thing is that the Energy Balance estimates are very low compared to model-based estimates. The accompanying chart compares the model-based range to ECS estimates from a dozen Energy Balance studies over the past decade. Clearly these two methods give differing answers, and the question of which one is more accurate is important.

Weak Defenses for Model Discrepancies

Climate modelers have put forward two explanations for the discrepancy. One is called the “emergent constraint” approach. The idea is that models yield a range of ECS values, and while we can’t measure ECS directly, the models also yield estimates of a lot of other things that we can measure (such as the reflectivity of cloud tops), so we could compare those other measures to the data, and when we do, sometimes the models with high ECS values also yield measures of secondary things that fit the data better than models with low ECS values.

This argument has been a bit of a tough sell, since the correlations involved are often weak, and it doesn’t explain why the Energy Balance results are so low.

The second approach is based on so-called “forcing efficacies,” which is the concept that climate forcings, such as greenhouse gases and aerosol pollutants, differ in their effectiveness over time and space, and if these variations are taken into account the Energy Balance sensitivity estimates may come out higher. This, too, has been a controversial suggestion.

Challenges to Oversensitive Models

A recent Energy Balance ECS estimate was just published in the Journal of Climate by Nicholas Lewis and Judith Curry. There are several features that make their study especially valuable. First, they rely on IPCC estimates of greenhouse gases, solar changes and other climate forcings, so they can’t be accused of putting a finger on the scale by their choice of data. Second, they take into account the efficacy issue and discuss it at length. They also take into account recent debates about how surface temperatures should or shouldn’t be measured, and how to deal with areas like the Arctic where data are sparse. Third, they compute their estimates over a variety of start and end dates to check that their ECS estimate is not dependent on the relative warming hiatus of the past two decades.

Their ECS estimate is 1.5 degrees, with a probability range between 1.05 and 2.45 degrees. If the study were a one-time outlier we might be able to ignore it. But it is part of a long list of studies from independent teams (as this interactive graphic shows), using a variety of methods that take account of critical challenges, all of which conclude that climate models exhibit too much sensitivity to greenhouse gases.

Change the Sensitivity, Change the Future

Policy-makers need to pay attention, because this debate directly impacts the carbon-tax discussion.

The Environmental Protection Agency uses social cost of carbon models that rely on the model-based ECS estimates. Last year, two colleagues and I published a study in which we took an earlier Lewis and Curry ECS estimate and plugged it into two of those models. The result was that the estimated economic damages of greenhouse gas emissions fell by between 40 and 80 per cent, and in the case of one model the damages had a 40 per cent probability of being negative for the next few decades — that is, they would be beneficial changes. The new Lewis and Curry ECS estimate is even lower than their old one, so if we re-did the same study we would find even lower social costs of carbon.

Conclusion

If ECS is as low as the Energy Balance literature suggests, it means that the climate models we have been using for decades run too hot and need to be revised. It also means that greenhouse gas emissions do not have as big an impact on the climate as has been claimed, and the case for costly policy measures to reduce carbon-dioxide emissions is much weaker than governments have told us. For a science that was supposedly “settled” back in the early 1990s, we sure have a lot left to learn.

Ross McKitrick is professor of economics at the University of Guelph and senior fellow at the Fraser Institute.

2018 Update: Fossil Fuels ≠ Global Warming

Previous posts addressed the claim that fossil fuels are driving global warming. This post updates that analysis with the latest (2017) numbers from BP Statistics and compares World Fossil Fuel Consumption (WFFC) with three estimates of Global Mean Temperature (GMT). More on both these variables below.

WFFC

2017 statistics are now available from BP for international consumption of Primary Energy sources in the 2018 Statistical Review of World Energy.

The reporting categories are:
Oil
Natural Gas
Coal
Nuclear
Hydro
Renewables (other than hydro)

This analysis combines the first three, Oil, Gas, and Coal, for total fossil fuel consumption worldwide. The chart below shows the patterns for WFFC compared to world consumption of Primary Energy from 1965 through 2017.

WFFC2017

The graph shows that Primary Energy consumption has grown continuously for five decades. Over that period oil, gas and coal (sometimes termed “Thermal”) averaged 89% of PE consumed, ranging from 94% in 1965 to 85% in 2017. MToe is millions of tonnes of oil equivalent.

Global Mean Temperatures

Everyone acknowledges that GMT is a fiction since temperature is an intrinsic property of objects, and varies dramatically over time and over the surface of the earth. No place on earth determines “average” temperature for the globe. Yet for the purpose of detecting change in temperature, major climate data sets estimate GMT and report anomalies from it.

The UAH record consists of satellite-era global temperature estimates for the lower troposphere, a layer of air from 0 to 4 km above the surface. HadSST estimates sea surface temperatures from oceans covering 71% of the planet. HadCRUT combines HadSST estimates with records from land stations whose elevations range up to 6 km above sea level.

Both GISS LOTI (land and ocean) and HadCRUT4 (land and ocean) use 14.0 Celsius as the climate normal, so I will add that number back into the anomalies. This is done not to claim any special validity, but to achieve a reasonable sense of the magnitude of the observed fluctuations.

No doubt global sea surface temperatures are typically higher than 14C, more like 17 or 18C, and of course warmer in the tropics and colder at higher latitudes. Likewise, the lapse rate in the atmosphere means that air temperatures both from satellites and elevated land stations will range colder than 14C. Still, that climate normal is a generally accepted indicator of GMT.

Correlations of GMT and WFFC

The next graph compares WFFC to GMT estimates over the five decades from 1965 to 2017 from HADCRUT4, which includes HadSST3.

WFFC&GMT2017

Over the last five decades the increase in fossil fuel consumption is dramatic and monotonic, steadily increasing by 227% from 3.5B to 11.5B oil equivalent tons.  Meanwhile the GMT record from Hadcrut shows multiple ups and downs with an accumulated rise of 0.9C over 52 years, 6% of the starting value.

The second graph compares to GMT estimates from UAH6, and HadSST3 for the satellite era from 1979 to 2017, a period of 38 years.

WFFC&UAH&HAD2017

In the satellite era WFFC has increased at a compounded rate of nearly 2% per year, for a total increase of 87% since 1979. At the same time, SST warming amounted to 0.44C, or 3.1% of the starting value.  UAH warming was 0.58C, or 4.2% up from 1979.  The temperature compounded rate of change is 0.1% per year, an order of magnitude less.  Even more obvious is the 1998 El Nino peak and flat GMT since.
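The growth-rate comparison above is simple compound arithmetic; a short sketch using the figures quoted in this section:

```python
# Check the growth-rate claims above: compounded annual growth of fossil
# fuel consumption versus temperature change over the same windows.
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# WFFC 1965-2017: 3.5 -> 11.5 billion tonnes oil equivalent, 52 years.
print(f"WFFC 1965-2017:      {cagr(3.5, 11.5, 52):.1%} per year")
# WFFC 1979-2017: +87% total over 38 years ("nearly 2% per year").
print(f"WFFC 1979-2017:      {cagr(1.0, 1.87, 38):.1%} per year")
# UAH warming 1979-2017: +4.2% of the starting value over 38 years.
print(f"GMT (UAH) 1979-2017: {cagr(1.0, 1.042, 38):.2%} per year")
```

With these inputs the fossil fuel series grows at roughly 2% per year while the temperature series changes at roughly 0.1% per year, the order-of-magnitude gap noted above.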

Summary

The climate alarmist/activist claim is straightforward: burning fossil fuels makes measured temperatures warmer. The Paris Accord further asserts that by reducing human use of fossil fuels, further warming can be prevented. Those claims do not bear up under scrutiny.

It is enough for simple minds to see that two time series are both rising and to think that one must be causing the other. But both scientific and legal methods assert causation only when the two variables are both strongly and consistently aligned. The above shows a weak and inconsistent linkage between WFFC and GMT.

Going further back in history shows even weaker correlation between fossil fuels consumption and global temperature estimates:

wfc-vs-sat

Figure 5.1. Comparative dynamics of the World Fuel Consumption (WFC) and Global Surface Air Temperature Anomaly (ΔT), 1861-2000. The thin dashed line represents annual ΔT, the bold line—its 13-year smoothing, and the line constructed from rectangles—WFC (in millions of tons of nominal fuel) (Klyashtorin and Lyubushin, 2003). Source: Frolov et al. 2009

In legal terms, as long as there is another equally or more likely explanation for the set of facts, the claimed causation is unproven. The more likely explanation is that global temperatures vary due to oceanic and solar cycles. The proof is clearly and thoroughly set forward in the post Quantifying Natural Climate Change.

Background context for today’s post is at Claim: Fossil Fuels Cause Global Warming.

Blinded by Antarctica Reports

Special snow goggles for protection in polar landscapes.

Someone triggered Antarctica for this week’s media alarm blitz.

Antarctic ice loss increases to 200 billion tonnes a year – Climate Action

Antarctica is now melting three times faster than ever before – Euronews

Antarctica is shedding ice at an accelerating rate – Digital Journal

Al Gore Sounds the Alarm on 0.3 inches of Sea Level Rise from Ice Sheets– Daily Caller

Antarctica is losing an insane amount of ice. Nothing about this is good. – Fox News

Looks like it’s time yet again to play Climate Whack-A-Mole. That means stepping back to get some perspective on the reports and the interpretations applied by those invested in alarmism.

Antarctic Basics

The Antarctic Ice Sheet extends almost 14 million square kilometers (5.4 million square miles), roughly the area of the contiguous United States and Mexico combined. The Antarctic Ice Sheet contains 30 million cubic kilometers (7.2 million cubic miles) of ice. (Source: NSIDC: Quick Facts Ice Sheets)

The Antarctic Ice Sheet covers an area larger than the U.S. and Mexico combined. This photo shows Mt. Erebus rising above the ice-covered continent. Credit: Ted Scambos & Rob Bauer, NSIDC

The study of ice sheet mass balance underwent two major advances, one during the early 1990s, and again early in the 2000s. At the beginning of the 1990s, scientists were unsure of the sign (positive or negative) of the mass balance of Greenland or Antarctica, and knew only that it could not be changing rapidly relative to the size of the ice sheet.

Advances in glacier ice flow mapping using repeat satellite images, and later using interferometric synthetic aperture radar (SAR) methods, facilitated the mass budget approach, although this still requires an estimate of snow input and a cross-section of the glacier as it flows out from the continent and becomes floating ice. Satellite radar altimetry mapping and change detection, developed in the early to mid-1990s, allowed the research community to finally extract reliable quantitative information regarding the overall growth or reduction of the volume of the ice sheets.

By 2002, publications were able to report that both large ice sheets were losing mass (Rignot and Thomas 2002). Then in 2003 the launch of two new satellites, ICESat and GRACE, led to vast improvements in one of the methods for mass balance determination, volume change, and introduced the ability to conduct gravimetric measurements of ice sheet mass over time. The gravimetric method helped to resolve remaining questions about how and where the ice sheets were losing mass. With this third method, and with continued evolution of mass budget and geodetic methods, it was shown that the ice sheets were in fact losing mass at an accelerating rate by the end of the 2000s (Velicogna 2009, Rignot et al. 2011b).

Contradictory Findings

NASA Study: Mass Gains of Antarctic Ice Sheet Greater than Losses

A new 2015 NASA study says that an increase in Antarctic snow accumulation that began 10,000 years ago is currently adding enough ice to the continent to outweigh the increased losses from its thinning glaciers.

The research challenges the conclusions of other studies, including the Intergovernmental Panel on Climate Change’s (IPCC) 2013 report, which says that Antarctica is overall losing land ice.

According to the new analysis of satellite data, the Antarctic ice sheet showed a net gain of 112 billion tons of ice a year from 1992 to 2001. That net gain slowed to 82 billion tons of ice per year between 2003 and 2008.

“We’re essentially in agreement with other studies that show an increase in ice discharge in the Antarctic Peninsula and the Thwaites and Pine Island region of West Antarctica,” said Jay Zwally, a glaciologist with NASA Goddard Space Flight Center in Greenbelt, Maryland, and lead author of the study, which was published on Oct. 30 in the Journal of Glaciology. “Our main disagreement is for East Antarctica and the interior of West Antarctica – there, we see an ice gain that exceeds the losses in the other areas.” Zwally added that his team “measured small height changes over large areas, as well as the large changes observed over smaller areas.”

Scientists calculate how much the ice sheet is growing or shrinking from the changes in surface height that are measured by the satellite altimeters. In locations where the amount of new snowfall accumulating on an ice sheet is not equal to the ice flow downward and outward to the ocean, the surface height changes and the ice-sheet mass grows or shrinks.

Snow covering Antarctic peninsula.

Keeping Things in Perspective

Such reports often include scary graphs like this one, and the reader is usually provided no frame of reference or context to interpret the image. First, the chart shows cumulative loss of mass arising from an average rate of 100 Gt lost per year since 2002. Many years had gains, including 2002, and the cumulative balance went below zero only in 2006. Also, various methods of measuring and analyzing give different results, as indicated by the earlier section.

Most important is understanding the fluxes in proportion to the Antarctic Ice Sheet. Let’s do the math. Above it was stated that Antarctica contains ~30 million cubic kilometers of ice. One km3 of water is 1 billion cubic meters and weighs 1 billion tonnes, or 1 gigatonne (Gt). So a like volume of water would weigh about 30,000,000 gigatonnes. Since ice is slightly less dense than water, the total should be adjusted by a factor of 0.92, for an estimate of 27.6 million Gt of ice comprising the Antarctic Ice Sheet.

So in the recent decade, an average year went from 27,600,100 Gt to 27,600,000 Gt, according to one analysis. Other studies range from losing 200 Gt/yr to gaining 100 Gt/yr.

Even if Antarctica lost 200 Gt/yr for the next 1,000 years, the cumulative loss would still be less than 1% of the ice sheet.

If, like Al Gore, you are concerned about sea level rise, that calculation starts with the ocean area, estimated to be 3.618 x 10^8 km2 (361,800,000 km2). To raise that area 1 mm requires 3.618 x 10^2 km3, or 361.8 km3 of water (1 km3 of water = 1 Gt). So 200 Gt/yr is about 0.55 mm/yr, or 5.5 mm per decade, or about 5.5 cm per century.
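The back-of-envelope math in the last three paragraphs can be checked in a few lines; the constants are those given above, and the 200 Gt/yr figure is the high-end loss estimate quoted earlier:

```python
# Step-by-step check of the Antarctic ice mass and sea level arithmetic.
ICE_VOLUME_KM3 = 30e6     # Antarctic Ice Sheet volume, km^3 (NSIDC)
ICE_DENSITY = 0.92        # density of ice relative to water
OCEAN_AREA_KM2 = 3.618e8  # global ocean area, km^2

# 1 km^3 of water weighs 1 Gt, so mass = volume * relative density.
ice_mass_gt = ICE_VOLUME_KM3 * ICE_DENSITY
print(f"Ice sheet mass: {ice_mass_gt / 1e6:.1f} million Gt")  # 27.6

# A millennium of losses at the high-end 200 Gt/yr estimate.
loss_rate_gt = 200
fraction_lost = loss_rate_gt * 1000 / ice_mass_gt
print(f"1000 years at 200 Gt/yr: {fraction_lost:.2%} of the sheet")  # 0.72%

# Raising the whole ocean 1 mm takes OCEAN_AREA_KM2 * 1e-6 km^3 of water.
gt_per_mm = OCEAN_AREA_KM2 * 1e-6  # 361.8 Gt per mm of sea level
print(f"Sea level contribution: {loss_rate_gt / gt_per_mm:.2f} mm/yr")  # 0.55
```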

By all means let’s pay attention to things changing in our world, but let’s also notice the scale of the reality and not make mountains out of molehills.

Let’s also respect the scientists who study glaciers and their subtle movements over time (“glacial pace”).  Below is an amazing video showing the challenges and the beauty of working on Greenland Glacier.

From Ice Alive: Uncovering the secrets of Earth’s Ice

For more on the Joys of Playing Climate Whack-A-Mole 

Cooling Ocean Air Temps

Presently sea surface temperatures (SST) are the best available indicator of heat content gained or lost from earth’s climate system.  Enthalpy is the thermodynamic term for total heat content in a system, and humidity differences in air parcels affect enthalpy.  Measuring water temperature directly avoids distorted impressions from air measurements.  In addition, ocean covers 71% of the planet surface and thus dominates surface temperature estimates.  Eventually we will likely have reliable means of recording water temperatures at depth.

Recently, Dr. Ole Humlum reported from his research that air temperatures lag 2-3 months behind changes in SST.  He also observed that changes in CO2 atmospheric concentrations lag behind SST by 11-12 months.  This latter point is addressed in a previous post Who to Blame for Rising CO2?

The May update to HadSST3 will appear later this month, but in the meantime we can look at lower troposphere temperatures (TLT) from UAHv6 which are already posted for May. The temperature record is derived from microwave sounding units (MSU) on board satellites like the one pictured above.

The UAH dataset includes temperature results for air above the oceans, and thus should be most comparable to the SSTs. The graph below shows monthly anomalies for ocean temps since January 2015.

UAH May2018


The anomalies have reached the same levels as 2015. Taking a longer view, we can look at the record since 1995, that year being an ENSO-neutral year and thus a reasonable starting point for considering the past two decades. On that basis we can see the plateau in ocean temps is persisting. Since last October all oceans have cooled, with the upward bumps of February 2018 now erased.

UAHv6 TLT Monthly Ocean Anomalies

| Region | Average Since 1995 | Ocean 5/2018 |
|---------|-------------------:|-------------:|
| Global | 0.13 | 0.09 |
| NH | 0.16 | 0.33 |
| SH | 0.11 | -0.09 |
| Tropics | 0.12 | 0.02 |

As of May 2018, global ocean temps are slightly lower than April and below the average since 1995.  NH remains higher, but not enough to offset much lower temps in SH and Tropics (between 20N and 20S latitudes).  Global ocean air temps are now the lowest since April 2015, and SH the lowest since May 2013.

The details of UAH ocean temps are provided below.  The monthly data make for a noisy picture, but seasonal fluxes between January and July are important.


The greater volatility of the Tropics is evident, leading the oceans through three major El Nino events during this period.  Note also the flat period between 7/1999 and 7/2009.  The 2010 El Nino was erased by La Nina in 2011 and 2012.  Then the record shows a fairly steady rise peaking in 2016, with strong support from warmer NH anomalies, before returning to the 22-year average.

Summary

TLTs include mixing above the oceans and probably some influence from nearby more volatile land temps.  They started the recent cooling later than SSTs from HadSST3, but are now showing the same pattern.  It seems obvious that despite the three El Ninos, their warming has not persisted, and without them it would probably have cooled since 1995.  Of course, the future has not yet been written.

 

Chicxulub asteroid Apocalypse? Not so fast.

The Daily Mail would have you believe Apocalyptic asteroid that wiped out the dinosaurs 66 million years ago triggered 100,000 years of global warming, claiming the Chicxulub asteroid triggered a global temperature rise of 5°C (9°F).

This notion has been around for years, but dredged up now to promote fears of CO2 and global warming. And maybe it’s because of a new Jurassic Park movie coming this summer.  But it doesn’t take much looking around to discover experts who have a sober, reasonable view of the situation.

Gerta Keller, Professor of Geosciences at Princeton, has studied this issue since the 1990s and tells all at her website CHICXULUB: THE IMPACT CONTROVERSY. Excerpts below with my bolds.

Introduction to The Impact Controversy

In the 1980s, as the impact-kill hypothesis of Alvarez and others gained popular and scientific acclaim and the mass extinction controversy took an increasingly rancorous turn in scientific and personal attacks, fewer and fewer dared to voice critique. Two scientists stand out: Dewey McLean (VPI) and Chuck Officer (Dartmouth College). Dewey proposed as early as 1978 that Deccan volcanism was the likely cause of the KTB mass extinction; Officer also proposed a likely volcanic cause. Both were vilified and ostracized by the increasingly vocal group of impact hypothesis supporters. By the middle of the 1980s Vincent Courtillot (Institut de Physique du Globe de Paris) also advocated Deccan volcanism, though not as the primary cause but rather as supplementary to the meteorite impact. Since 2008 Courtillot has strongly advocated Deccan volcanism as the primary cause of the KTB mass extinction.

(Overview from Tim Clarey, Ph.D., questioning the asteroid) In secular literature and movies, the most popular explanation for the dinosaurs’ extinction is an asteroid impact. The Chicxulub crater in Mexico is often referred to as the “smoking gun” for this idea. But do the data support an asteroid impact at Chicxulub?

The Chicxulub crater isn’t visible on the surface because it is covered by younger, relatively undeformed sediments. It was identified from a nearly circular gravity anomaly along the northwestern edge of the Yucatán Peninsula (Figure 1). There’s disagreement on the crater’s exact size, but its diameter is approximately 110 miles—large enough for a six-mile-wide asteroid or meteorite to have caused it.

Although some of the expected criteria for identifying a meteorite impact are present at the Chicxulub site—such as high-pressure and deformed minerals—not enough of these materials have been found to justify a large impact. And even these minerals can be caused by other circumstances, including rapid crystallization and volcanic activity.

The biggest problem is what is missing. Iridium, a chemical element more abundant in meteorites than on Earth, is a primary marker of an impact event. A few traces were identified in the cores of two drilled wells, but no significant amounts have been found in any of the ejecta material across the Chicxulub site. The presence of an iridium-rich layer is often used to identify the K-Pg (Cretaceous-Paleogene) boundary, yet ironically there is virtually no iridium in the ejecta material at the very site claimed to be the “smoking gun”!

In addition, secular models suggest melt-rich layers resulting from the impact should have exceeded a mile or two in thickness beneath the central portion of the Chicxulub crater. However, the oil wells and cores drilled at the site don’t support this. The thickest melt-rich layers encountered in the wells were between 330 and 990 feet—nowhere near the expected thicknesses of 5,000 to 10,000 feet—and several of the melt-rich layers were much thinner than 300 feet or were nonexistent.

Finally, the latest research even indicates that the tsunami waves claimed to have been generated by the impact across the Gulf of Mexico seem unlikely.

Summary from Keller

The Cretaceous-Tertiary boundary (KTB) mass extinction is primarily known for the demise of the dinosaurs, the Chicxulub impact, and the frequently rancorous thirty-year-old controversy over the cause of this mass extinction. Since 1980 the impact hypothesis has steadily gained support, which culminated in 1990 with the discovery of the Chicxulub crater on Yucatan as the KTB impact site and “smoking gun” that proved this hypothesis. In a perverse twist of fate, this discovery also began the decline of this hypothesis, because for the first time it could be tested directly based on the impact crater and impact ejecta in sediments throughout the Caribbean, Central America and North America.

Two decades of multidisciplinary studies amassed a database with a sum total that overwhelmingly reveals the Chicxulub impact predates the KTB mass extinction. It’s been a wild and frequently acrimonious ride through the landscape of science and personalities. The highlights of this controversy, the discovery of facts inconsistent with the impact hypothesis, the denial of evidence, misconceptions, and misinterpretations are recounted here. (Full paper in Keller, 2011, SEPM 100, 2011).

Chicxulub Likely Happened ~100,000 years Before the KTB Extinction

Figure 42. Planktic foraminiferal biostratigraphy, biozone ages calculated based on time scales where the KTB is placed at 65Ma, 65.5Ma and 66Ma, and the relative age positions of the Chicxulub impact, Deccan volcanism phases 2 and 3 and climate change, including the maximum cooling and maximum warming (greenhouse warming) and the Dan-2 warm event relative to Deccan volcanism.

Most studies surrounding the Chicxulub impact crater have concentrated on the narrow interval of the sandstone complex or so-called impact-tsunami. Keller et al. (2002, 2003) placed that interval in zone CF1 based on planktic foraminiferal biostratigraphy, and specifically the range of the index species Plummerita hantkeninoides, which spans the topmost Maastrichtian. The age of zone CF1 was estimated to span the last 300ky of the Maastrichtian based on the old time scale of Cande and Kent (1995), which places the KTB at 65Ma. The newer time scale (Gradstein et al., 2004) places the KTB at 65.5Ma, which reduces zone CF1 to 160ky.

By early 2000 our team embarked on an intensive search for impact spherules below the sandstone complex throughout NE Mexico. Numerous outcrops were discovered with impact spherule layers in planktic foraminiferal zone CF1 below the sandstone complex and we suggested that the Chicxulub impact predates the KTB by about 300ky (Fig. 42; Keller et al., 2002, 2003, 2004, 2005, 2007, 2009; Schulte et al., 2003, 2006).

Time scales change with improved dating techniques. Gradstein et al. (2004) proposed placing the KTB at 65.5 Ma (Abramovich et al., 2010). This time scale is now undergoing further revision (Renne et al., 2013), placing the KTB at 66 Ma, which reduces zone CF1 to less than 100ky. By this time scale, the Chicxulub impact predates the KTB by less than 100ky, based on impact spherule layers in the lower part of zone CF1. See Fig. 42 for illustration.

Unfortunately, this wide interest rarely resulted in integrated interdisciplinary studies or joint discussions to search for common solutions to conflicting results. Increasingly, in a perverse twist of science, new results came to be judged by how well they supported the impact hypothesis, rather than how well they tested it. An unhealthy US-versus-THEM culture developed in which those who dared to question the impact hypothesis, regardless of the solidity of the empirical data, were derided, dismissed as poor scientists, blocked from publication and grant funding, or simply ignored. Under this assault, more and more scientists dropped out, leaving a nearly unopposed ruling majority claiming victory for the impact hypothesis. In this adverse, high-stress environment just a small group of scientists doggedly pursued evidence to test the impact hypothesis.

No debate has been more contentious during the past thirty years, or has more captured the imagination of scientists and public alike, than the hypothesis that an extraterrestrial bolide impact was the sole cause of the KTB mass extinction (Alvarez et al., 1980). How did this hypothesis evolve so quickly into a virtually unassailable “truth” where questioning could be dismissed by phrases such as “everybody knows that an impact caused the mass extinction”, “only old fashioned Darwinian paleontologists can’t accept that the mass extinction was instantaneous”, “paleontologists are just bad scientists, more like stamp collectors”, and “it must be true because how could so many scientists be so wrong for so long.” Such phrases are reminiscent of the beliefs that the Earth is flat, that the world was created 6000 years ago, that Noah’s flood explains all geological features, and the vilification of Alfred Wegener for proposing that continents moved over time.

Update Published at National Geographic February 2018 By Shannon Hall: Volcanoes, Then an Asteroid, Wiped Out the Dinosaurs

What killed the dinosaurs? Few questions in science have been more mysterious—and more contentious. Today, most textbooks and teachers tell us that nonavian dinosaurs, along with three-fourths of all species on Earth, disappeared when a massive asteroid hit the planet near the Yucatán Peninsula some 66 million years ago.

But a new study published in the journal Geology shows that an episode of intense volcanism in present-day India wiped out several species before that impact occurred.

The result adds to arguments that eruptions plus the asteroid caused a one-two punch. The volcanism provided the first strike, destabilizing the climate so much that a meteor—the more deafening blow—was able to spell disaster for Tyrannosaurus rex and its late Cretaceous kin.

A hotter climate certainly helped send the nonavian dinosaurs to their early grave, says Paul Renne, a geochronologist at the University of California, Berkeley, who was not involved in the study. That’s because the uptick in temperature was immediately followed by a cold snap—a drastic change that likely set the stage for planet-wide disaster.

Imagine that some life managed to adapt to those warmer conditions by moving closer toward the poles, Renne says. “If you follow that with a major cooling event, it’s more difficult to adapt, especially if it’s really rapid,” he says.

In this scenario, volcanism likely sent the world into chaos, driving many extinctions alone and increasing temperatures so drastically that most of Earth’s remaining species couldn’t protect themselves from that second punch when the asteroid hit.

“The dinosaurs were extremely unlucky,” Wignall says.

But it will be hard to convince Sean Gulick, a geophysicist at the University of Texas at Austin, who co-led recent efforts to drill into the heart of the impact crater in Mexico. He points toward several studies that have suggested that ecosystems remained largely intact until the time of the impact.

Additionally, a forthcoming paper might make an even stronger case that the impact drove the extinction alone, notes Jay Melosh, a geophysicist at Purdue University who has worked on early results from the drilling project. It looks as though the divisive debate will continue with nearly as much ferocity as the events that rocked our world 66 million years ago.

Summary:

So if the Chicxulub asteroid didn’t kill the dinosaurs, what did? Paleontologists have advanced all manner of other theories over the years, including the appearance of land bridges that allowed different species to migrate to different continents, bringing with them diseases to which native species hadn’t developed immunity. Keller and Adatte do not see any reason to stray so far from the prevailing model. Some kind of atmospheric haze might indeed have blocked the sun, making the planet too cold for the dinosaurs — it just didn’t have to have come from an asteroid. Rather, they say, the source might have been massive volcanoes, like the ones that blew in the Deccan Traps in what is now India at just the right point in history.

For the dinosaurs that perished 65 million years ago, extinction was extinction and the precise cause was immaterial. But for the bipedal mammals who were allowed to rise once the big lizards were finally gone, it is a matter of enduring fascination.

This science seems as settled as climate change/global warming, and with many of the same shenanigans.

New Zealand Warming Disputed

Mount Cook National Park, New Zealand.

A dust-up over the temperature trend in New Zealand is discussed at the Climate Conversation (New Zealand): Response to NIWA comment on de Freitas reanalysis of the NZ temperature record, by Barry Brill, Chairman of the New Zealand Climate Science Coalition.  Excerpts with my bolds.

Conclusions

de Freitas finds that New Zealand has experienced an insignificant warming trend of 0.28°C/century during 1909-2008. Using the same data, the Mullan Report calculates that trend at 0.91°C/century. Both studies claim to apply the statistical technique described in RS93, and each alleges that the other has departed from that methodology. This core issue has been described in the graph above but has not been addressed in this note.

A second core issue relates to reliance upon inhomogeneous Auckland and Wellington data despite the extensive contamination of both sites by sheltering and UHI. That matter has not been addressed here either.

Instead, this limited reply deals with the raft of peripheral allegations contained in the NIWA Comment. In particular, it sets out to show that all plausible published records, as well as the scientific literature, support the view that New Zealand’s temperature record has remained remarkably stable over the past century or so.

Some of the Issues Rebutted

Other temperature records:

The de Freitas warming trend of 0.28°C/century is wholly consistent with the synchronous Southern Hemisphere trend reported in IPCC’s AR5. Both the IPCC and NIWA have long reported that anthropogenic warming trends in ocean-dominated New Zealand would be materially lower than global averages. The S81/Mullan Report trend of 0.91°C/century is clearly anomalous.

Official New Zealand temperature records for eight years in the 1860s, which are both reliable and area-representative, show the absolute mean temperature was then 13.1°C. A 30-year government record for the period ending 1919 shows the mean temperature to be 12.8°C. The current normal (30-year) mean 7SS temperature is 12.9°C. Clearly, New Zealand mean temperatures have remained almost perfectly stable during the past 150 years.

Use of RS93 Statistical Method:

The Mullan Report (along with other NIWA articles that are not publicly available) does purport to use RS93 comparison techniques, so this assertion is naturally accepted whenever these ‘grey’ papers are mentioned in the peer-reviewed literature. However, the Mullan Report sits outside the literature and clearly fails to execute its stated intention to apply RS93 methods. The de Freitas paper rectifies those omissions.

NZ Glaciers

In this area, the most recent authority is Mackintosh et al. (2017), entitled “Regional cooling caused recent New Zealand glacier advances in a period of global warming.” After observing that at least 58 Southern Alps glaciers advanced during the period 1983-2008, the abstract notes:

“Here we show that the glacier advance phase resulted predominantly from discrete periods of reduced air temperature, rather than increased precipitation. The lower temperatures were associated with anomalous southerly winds and low sea surface temperature in the Tasman Sea region. These conditions result from variability in the structure of the extratropical atmospheric circulation over the South Pacific.”

This Nature paper, of which James Renwick was an author, notes that the World Glacier Monitoring Service database shows that in 2005 “15 of the 26 advancing glaciers observed worldwide were in New Zealand.”

BEST Data

Using up to 52 auto-adjusted datasets, the Berkeley Earth group derives an absolute New Zealand temperature range of 9.5°C to 11°C over the 160-year period from 1853 to 2013.

The mid-point of this range is very far from the mid-point of the 12.3°C to 13.2°C range recorded in the 7SS (whether raw or adjusted) and is clearly wrong. Nonetheless, for the 100-year period 1909-2008, the BEST adjusted anomalies are said to show a perfect correlation with those of the Mullan Report (to three decimal places). The claimed independence of such an immaculate outcome is entirely implausible.

Anthropogenic Warming

The Mullan Report’s 1960-90 upwards spike could not have occurred whilst the south-west Pacific region was in a cooling phase – which is confirmed by Mackintosh et al. (2017). Further, the final 30-year period of the Mullan Report shows an insignificant trend of only 0.12°C/century, demonstrating that New Zealand has not yet been affected by global warming trends.

Summary

Good to see that de Freitas et al. are again speaking climate truth to entrenched alarmists.  Go Kiwis!

Climate Canary? N. America Cooling

Hidden amid reports of recent warmest months and years based on global averages, there is a significant departure in North America. Those of us living in Canada and USA have noticed a distinct cooling, and our impressions are not wrong.

The image above shows how much lower have been April 2018 temperatures. The table below provides the numbers behind the graphs from NOAA State of the Climate.

CONTINENT       ANOMALY (1910-2000)   TREND (1910-2018)   RANK (OUT OF 109 YEARS)   RECORD YEAR   RECORD
                °C / °F               °C / °F                                                     °C / °F
North America   -0.97 / -1.75         0.11 / 0.19         94th warmest              2010          2.65 / 4.77
South America    1.34 /  2.41         0.13 / 0.24         1st warmest               2018          1.34 / 2.41
Europe           2.82 /  5.08         0.14 / 0.25         1st warmest               2018          2.82 / 5.08
Africa           1.23 /  2.21         0.12 / 0.22         5th warmest               2016          1.72 / 3.10
Asia             1.66 /  2.99         0.18 / 0.32         9th warmest               2016          2.40 / 4.32
Oceania          2.47 /  4.45         0.14 / 0.25         2nd warmest               2005          2.54 / 4.57

The table shows how different the North American experience was: 94th out of 109 years.  But when we look at the first four months of the year, North America is more in line with the rest of the globe.

 

As the image shows, cooling was more widespread during the first third of 2018, particularly in NA, Northern Europe and Asia, as well as a swath of cooler mid ocean latitudes in the Southern Hemisphere.

CONTINENT       ANOMALY (1910-2000)   TREND (1910-2018)   RANK (OUT OF 109 YEARS)   RECORD YEAR   RECORD
                °C / °F               °C / °F                                                     °C / °F
North America    0.44 / 0.79          0.16 / 0.29         44th warmest              2016          2.71 / 4.88
South America    0.94 / 1.69          0.13 / 0.24         6th warmest               2016          1.39 / 2.50
Europe           1.35 / 2.43          0.13 / 0.24         13th warmest              2014          2.46 / 4.43
Africa           1.08 / 1.94          0.10 / 0.18         3rd warmest               2010          1.62 / 2.92
Asia             1.57 / 2.83          0.19 / 0.34         8th warmest               2002          2.72 / 4.90
Oceania          1.58 / 2.84          0.12 / 0.22         1st warmest               2018          1.58 / 2.84

The table confirms that Europe and Asia were cooler in 2018 than in other recent years of the decade.

Summary

These data show again that temperature indicators of climate are not global but regional, and even local in their manifestations.  At the continental level there are significant differences.  North America is an outlier, but who is to say whether it is an aberration that will join the rest, or whether it is the trend setter signaling a widespread cooler future.

See Also:  Is This Cold the New Normal?


Correcting Flaws in Global Warming Projections

William Mason Gray (1929-2016), pioneering hurricane scientist and forecaster and professor of atmospheric science at Colorado State University.

Thanks to GWPF for publishing posthumously Bill Gray’s understanding of global warming/climate change.  The paper was compiled at his request and completed after his death, and is now available as Flaws in applying greenhouse warming to Climate Variability This post provides some excerpts in italics with my bolds and some headers.  Readers will learn much from the entire document (title above is link to pdf).

The Fundamental Correction

The critical argument that is made by many in the global climate modeling (GCM) community is that an increase in CO2 warming leads to an increase in atmospheric water vapor, resulting in more warming from the absorption of outgoing infrared radiation (IR) by the water vapor. Water vapor is the most potent greenhouse gas present in the atmosphere in large quantities. Its variability (i.e. global cloudiness) is not handled adequately in GCMs in my view. In contrast to the positive feedback between CO2 and water vapor predicted by the GCMs, it is my hypothesis that there is a negative feedback between CO2 warming and water vapor. CO2 warming ultimately results in less water vapor (not more) in the upper troposphere. The GCMs therefore predict unrealistic warming of global temperature. I hypothesize that the Earth’s energy balance is regulated by precipitation (primarily via deep cumulonimbus (Cb) convection) and that this precipitation counteracts warming due to CO2.

Figure 14: Global surface temperature change since 1880. The dotted blue and dotted red lines illustrate how much error one would have made by extrapolating a multi-decadal cooling or warming trend beyond a typical 25-35 year period. Note the recent 1975-2000 warming trend has not continued, and the global temperature remained relatively constant until 2014.

Projected Climate Changes from Rising CO2 Not Observed

Continuous measurements of atmospheric CO2, which were first made at Mauna Loa, Hawaii in 1958, show that atmospheric concentrations of CO2 have risen since that time. The warming influence of CO2 increases with the natural logarithm (ln) of the atmosphere’s CO2 concentration. With CO2 concentrations now exceeding 400 parts per million by volume (ppm), the Earth’s atmosphere is slightly more than halfway to containing double the 280 ppm CO2 amount of 1860 (at the beginning of the Industrial Revolution).
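The "slightly more than halfway" claim follows from the logarithmic dependence and can be checked with the standard simplified forcing formula ΔF = 5.35 ln(C/C₀) W/m². This is my illustrative sketch, not a calculation from Gray's paper:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing relative to c0_ppm, in W/m^2 (standard
    simplified logarithmic approximation)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

f_now = co2_forcing(400.0)      # forcing at today's ~400 ppm
f_double = co2_forcing(560.0)   # forcing at a full doubling of 280 ppm

print(f"Forcing at 400 ppm: {f_now:.2f} W/m^2")
print(f"Forcing at 560 ppm: {f_double:.2f} W/m^2")
print(f"Fraction of a doubling: {f_now / f_double:.0%}")  # ~51%
```

In concentration terms 400 ppm is only 43% of the way from 280 to 560 ppm, but in forcing (warming-influence) terms the logarithm puts it just past the halfway mark, as the text says.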

We have not observed the global climate change we would have expected to take place, given this increase in CO2. Assuming that there has been at least an average of 1 W/m2 CO2 blockage of IR energy to space over the last 50 years and that this energy imbalance has been allowed to independently accumulate and cause climate change over this period with no compensating response, it would have had the potential to bring about changes in any one of the following global conditions:

  • Warm the atmosphere by 180°C if all CO2 energy gain was utilized for this purpose – actual warming over this period has been about 0.5°C, or many hundreds of times less.
  • Warm the top 100 meters of the globe’s oceans by over 5°C – actual warming over this period has been about 0.5°C, or 10 or more times less.
  • Melt sufficient land-based snow and ice as to raise the global sea level by about 6.4 m. The actual rise has been about 8–9 cm, or 60–70 times less. The gradual rise of sea level has been only slightly greater over the last ~50 years (1965–2015) than it has been over the previous two ~50-year periods of 1915–1965 and 1865–1915, when atmospheric CO2 gain was much less.
  • Increase global rainfall over the past ~50-year period by 60 cm.
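These bullet figures can be roughly reproduced from the stated 1 W/m² sustained over 50 years using standard physical constants. Gray does not publish his exact assumptions, so this sketch of mine only aims at the same order of magnitude:

```python
# Accumulated energy from a sustained 1 W/m^2 imbalance over ~50 years
SECONDS_50YR = 50 * 365.25 * 86400
E = 1.0 * SECONDS_50YR                       # ~1.6e9 J per m^2 of Earth

# 1) Warm the whole atmosphere (~1.03e4 kg/m^2 column mass, cp ~ 1004 J/kg/K):
dT_atm = E / (1.03e4 * 1004)                 # ~150 C, same order as the quoted 180 C

# 2) Warm the top 100 m of ocean; oceans cover ~71% of the globe, so the
#    global-mean flux concentrates over the ocean area (cp ~ 4186 J/kg/K):
dT_ocean = (E / 0.71) / (100 * 1000 * 4186)  # ~5.3 C, matching "over 5 C"

# 3) Melt land ice (latent heat of fusion 3.34e5 J/kg), rise spread over oceans:
sea_rise_m = (E / 3.34e5) / 0.71 / 1000      # ~6.7 m, close to the quoted 6.4 m

# 4) Evaporate extra rainfall (latent heat of vaporization 2.5e6 J/kg):
rain_m = E / 2.5e6 / 1000                    # ~0.63 m, close to the quoted 60 cm

print(dT_atm, dT_ocean, sea_rise_m, rain_m)
```

Three of the four bullets come out within a few percent of the text's numbers; the atmospheric figure lands near 150°C rather than 180°C, suggesting Gray used a somewhat different column heat capacity, but the "many hundreds of times more than observed" point stands either way.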

Earth Climate System Compensates for CO2

If CO2 gain is the only influence on climate variability, large and important counterbalancing influences must have occurred over the last 50 years in order to negate most of the climate change expected from CO2’s energy addition. Similarly, this hypothesized CO2-induced energy gain of 1 W/m2 over 50 years must have stimulated a compensating response that acted to largely negate energy gains from the increase in CO2.

The continuous balancing of global average in-and-out net radiation flux is therefore much larger than the radiation flux from anthropogenic CO2. For example, 342 W/m2, the total energy budget, is almost 100 times larger than the amount of radiation blockage expected from a CO2 doubling over 150 years. If all other factors are held constant, a doubling of CO2 requires a warming of the globe of about 1°C to enhance outward IR flux by 3.7 W/m2 and thus balance the blockage of IR flux to space.
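The "about 1°C balances 3.7 W/m²" figure follows from linearizing the Stefan-Boltzmann law at the planet's effective emission temperature. A sketch assuming the standard textbook value of ~255 K for that temperature (not a number taken from Gray's paper):

```python
# dF/dT = 4 * sigma * T^3 at the effective emission temperature
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
T_EMIT = 255.0     # effective emission temperature, K (standard value)

sensitivity = 4 * SIGMA * T_EMIT**3   # ~3.76 W/m^2 per K of warming
dT_full = 3.7 / sensitivity           # ~0.98 K if all blockage warms the air
dT_half = 0.5 * dT_full               # ~0.5 K if half instead drives evaporation,
                                      # as Gray argues below

print(f"{sensitivity:.2f} W/m^2/K -> {dT_full:.2f} K full, {dT_half:.2f} K half")
```

The `dT_half` line anticipates Gray's argument in the following paragraphs that only half of the 3.7 W/m² goes into a temperature increase.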

Figure 2: Vertical cross-section of the annual global energy budget. Determined from a combination of satellite-derived radiation measurements and reanalysis data over the period of 1984–2004.

This pure IR energy blocking by CO2, versus compensating temperature increase for radiation equilibrium, is unrealistic for the long-term and slow CO2 increases that are occurring. Only half of the blockage of 3.7 W/m2 at the surface should be expected to go into a temperature increase. The other half (about 1.85 W/m2) of the blocked IR energy to space will be compensated by surface energy loss to support enhanced evaporation. This occurs in a similar way to how the Earth’s surface energy budget compensates for half its solar gain of 171 W/m2 by surface-to-air upward water vapor flux due to evaporation.

Assuming that the imposed extra CO2 doubling IR blockage of 3.7 W/m2 is taken up and balanced by the Earth’s surface in the same way as the solar absorption is taken up and balanced, we should expect a direct warming of only ~0.5°C for a doubling of CO2. The 1°C expected warming that is commonly accepted incorrectly assumes that all the absorbed IR goes to the balancing outward radiation with no energy going to evaporation.

Consensus Science Exaggerates Humidity and Temperature Effects

A major premise of the GCMs has been their application of the National Academy of Sciences (NAS) 1979 study – often referred to as the Charney Report – which hypothesized that a doubling of atmospheric CO2 would bring about a general warming of the globe’s mean temperature of 1.5–4.5°C (or an average of ~3.0°C). These large warming values were based on the report’s assumption that the relative humidity (RH) of the atmosphere remains quasi-constant as the globe’s temperature increases. This assumption was made without any type of cumulus convective cloud model and was based solely on the Clausius–Clapeyron (CC) equation and the assumption that the RH of the air will remain constant during any future CO2-induced temperature changes. If RH remains constant as atmospheric temperature increases, then the water vapor content in the atmosphere must rise exponentially.

With constant RH, the water vapor content of the atmosphere rises by about 50% if atmospheric temperature is increased by 5°C. Upper tropospheric water vapor increases act to raise the atmosphere’s radiation emission level to a higher and thus colder level. This reduces the amount of outgoing IR energy that can escape to space, since emission scales as T⁴ and a colder emission level radiates less.
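The constant-RH figure can be checked directly from the Clausius-Clapeyron relation, e_s(T₂)/e_s(T₁) = exp[(L/R_v)(1/T₁ − 1/T₂)]. A sketch with standard constants (my choice of temperatures, not Gray's): the ~50% increase for 5°C fits the cold upper troposphere better than the warm surface:

```python
import math

L_V = 2.5e6    # latent heat of vaporization, J/kg
R_V = 461.5    # specific gas constant for water vapor, J/kg/K

def vapor_increase(t_kelvin, dt=5.0):
    """Fractional rise in saturation vapor pressure for a dt warming
    at constant relative humidity (Clausius-Clapeyron)."""
    return math.exp((L_V / R_V) * (1 / t_kelvin - 1 / (t_kelvin + dt))) - 1

print(f"Near surface (288 K): {vapor_increase(288):.0%}")        # ~38%
print(f"Upper troposphere (250 K): {vapor_increase(250):.0%}")   # ~53%
```

The fractional increase per degree is larger at colder temperatures, which is why the upper-tropospheric moistening assumed by the models matters so much for the emission-level argument in this section.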

These model predictions of large upper-level tropospheric moisture increases have persisted in the current generation of GCM forecasts. These models significantly overestimate globally-averaged tropospheric and lower stratospheric (0–50,000 feet) temperature trends since 1979 (Figure 7).

Figure 8: Decline in upper tropospheric RH. Annually-averaged 300 mb relative humidity for the tropics (30°S–30°N). From NASA-MERRA2 reanalysis for 1980–2016. Black dotted line is linear trend.

All of these early GCM simulations were destined to give unrealistically large upper-tropospheric water vapor increases for doubling of CO2 blockage of IR energy to space, and as a result large and unrealistic upper tropospheric temperature increases were predicted. In fact, if data from NASA-MERRA2 and NCEP/NCAR can be believed, upper tropospheric RH has actually been declining since 1980 as shown in Figure 8. The top part of Table 1 shows temperature and humidity differences between very wet and dry years in the tropics since 1948; in the wettest years, precipitation was 3.9% higher than in the driest ones. Clearly, when it rains more in the tropics, relative and specific humidity decrease. A similar decrease is seen when differencing 1995–2004 from 1985–1994, periods for which the equivalent precipitation difference is 2%. Such a decrease in RH would lead to a decrease in the height of the radiation emission level and an increase in IR to space.

The Earth’s natural thermostat – evaporation and precipitation

What has prevented this extra CO2-induced energy input of the last 50 years from being realized in more climate warming than has actually occurred? Why was there recently a pause in global warming, lasting for about 15 years?  The compensating influence that prevents the predicted CO2-induced warming is enhanced global surface evaporation and increased precipitation.

Annual average global evaporational cooling is about 80 W/m2, or about 2.8 mm per day.  A little more than 1% extra global average evaporation per year would amount to 1.3 cm per year, or 65 cm of extra evaporation integrated over the last 50 years. This is the only way that such a CO2-induced 1 W/m2 IR energy gain sustained over 50 years could occur without a significant alteration of globally-averaged surface temperature.

This hypothesized increase in global surface evaporation as a response to CO2-forced energy gain should not be considered unusual. All geophysical systems attempt to adapt to imposed energy forcings by developing responses that counter the imposed action. In analysing the Earth’s radiation budget, it is incorrect to simply add or subtract energy sources or sinks to the global system and expect the resulting global temperatures to proportionally change. This is because the majority of CO2-induced energy gains will not go into warming the atmosphere. Various amounts of CO2-forced energy will go into ocean surface storage or into ocean energy gain for increased surface evaporation. Therefore a significant part of the CO2 buildup (~75%) will bring about the phase change of surface liquid water to atmospheric water vapour. The energy for this phase change must come from the surface water, with an expenditure of around 580 calories of energy for every gram of liquid that is converted into vapour. The surface water must thus undergo a cooling to accomplish this phase change.
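The evaporation arithmetic above checks out with standard constants. This is my sketch; the 1.3% fraction is a back-fit of the text's "a little more than 1%":

```python
L_V = 2.5e6     # latent heat of vaporization, J/kg
FLUX = 80.0     # global-mean evaporational cooling, W/m^2

# Mass evaporated per m^2 per day; 1 kg/m^2 of water is 1 mm of depth.
mm_per_day = FLUX * 86400 / L_V          # ~2.8 mm/day, as stated

# "A little more than 1%" extra evaporation: ~1.3% reproduces 1.3 cm/yr.
extra_cm_per_yr = 0.013 * mm_per_day * 365.25 / 10
extra_cm_50yr = extra_cm_per_yr * 50     # ~65 cm over 50 years, as stated

print(f"{mm_per_day:.1f} mm/day; {extra_cm_per_yr:.1f} cm/yr; {extra_cm_50yr:.0f} cm / 50 yr")
```

This also shows why the 1 W/m² imbalance maps so neatly onto evaporation: it is 1/80 of the evaporational flux, i.e. a ~1.25% speed-up of the hydrologic cycle, consistent with the ~0.8–1.3% figures quoted in this section.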

Therefore, increases in anthropogenic CO2 have brought about a small (about 0.8%) speeding up of the globe’s hydrologic cycle, leading to more precipitation and relatively little global temperature increase. Greenhouse gases are indeed playing an important role in altering the globe’s climate, but they are doing so primarily by increasing the speed of the hydrologic cycle rather than by increasing global temperature.

Figure 9: Two contrasting views of how the continuous intensification of deep cumulus convection would act to alter radiation flux to space. The top (bottom) diagram represents a net increase (decrease) in radiation to space.

Tropical Clouds as an Energy Control Mechanism

It is my hypothesis that the increase in global precipitation primarily arises from an increase in deep tropical cumulonimbus (Cb) convection. The typical enhancement of rainfall and updraft motion in these areas together act to increase the return flow mass subsidence in the surrounding broader clear and partly cloudy regions. The upper diagram in Figure 9 illustrates the increasing extra mass flow return subsidence associated with increasing depth and intensity of cumulus convection. Rainfall increases typically cause an overall reduction of specific humidity (q) and relative humidity (RH) in the upper tropospheric levels of the broader scale surrounding convective subsidence regions. This leads to a net enhancement of radiation flux to space due to a lowering of the upper-level emission level. This viewpoint contrasts with the position in GCMs, which suggest that an increase in deep convection will increase upper-level water vapour.

Figure 10: Conceptual model of typical variations of IR, albedo and net (IR + albedo) associated with three different areas of rain and cloud for periods of increased precipitation.

The albedo enhancement over the cloud and rain areas tends to increase the net (IR + albedo) radiative energy loss to space more than the weak suppression of (IR + albedo) in the clear areas; near-neutral conditions prevail in the partly cloudy areas. The bottom diagram of Figure 9 illustrates how, in GCMs, Cb convection erroneously increases upper-tropospheric moisture. Based on reanalysis data (Table 1, Figure 8), this is not observed in the real atmosphere.

Ocean Overturning Circulation Drove Last Century's Warming

A slowing down of the global ocean's meridional overturning circulation (MOC) is the likely cause of most of the global warming observed since the latter part of the 19th century.15 I hypothesize that shorter multi-decadal changes in the MOC16 are responsible for the more recent global warming periods of 1910–1940 and 1975–1998 and for the global-warming hiatus periods of 1945–1975 and 2000–2013.

Figure 12: The effect of strong and weak Atlantic THC: an idealized portrayal of the primary Atlantic upper-ocean currents during strong and weak phases of the thermohaline circulation (THC).

Figure 13 shows the circulation features that typically accompany periods when the MOC is stronger than normal and periods when it is weaker. In general, a strong MOC is associated with a warmer-than-normal North Atlantic, increased Atlantic hurricane activity, increased blocking action in both the North Atlantic and North Pacific, and weaker westerlies in the mid-latitude Southern Hemisphere. There is more upwelling of cold water in the South Pacific and Indian Oceans, and global rainfall increases by a few percent, which causes global surface temperatures to cool. The opposite occurs when the MOC is weaker than normal.

The average strength of the MOC over the last 150 years has likely been below the multimillennium average, and that is the primary reason we have seen this long-term global warming since the late 19th century. The globe appears to be rebounding from the conditions of the Little Ice Age to conditions that were typical of the earlier ‘Medieval’ and ‘Roman’ warm periods.

Summary and Conclusions

Liquid water covers 71% of the Earth's surface. Sub-saturated winds blow over the ocean surface, forcing continuous evaporation. Observations and energy-budget analyses indicate that the surface of the globe loses about 80 W/m2 of energy through evaporation. This evaporative energy loss is a necessary part of balancing the surface's absorption of large amounts of incoming solar energy. Variations in the strength of the globe's hydrologic cycle are the way the global climate is regulated: the stronger the hydrologic cycle, the more surface evaporative cooling occurs and the greater the globe's IR flux to space. The globe's surface cools when the hydrologic cycle is stronger than average and warms when it is weaker than normal. The strength of the hydrologic cycle is thus the primary regulator of the globe's surface temperature. Variations in global precipitation are linked to long-term changes in the MOC (or THC).
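The thermostat-like regulation described above can be illustrated with a toy equilibrium model. Everything here is illustrative only: the 80 W/m2 base evaporative flux comes from the text, while the 3.2 W/m2 per K radiative damping is an assumed standard value, not the author's:

```python
# Toy thermostat model (illustrative only): the equilibrium surface
# temperature anomaly is where extra evaporative cooling plus extra IR
# loss balance any imposed forcing. cycle_strength = 1.0 is the reference
# hydrologic cycle; the 80 W/m2 base flux is the text's figure, and the
# 3.2 W/m2 per K radiative damping is an assumed value.

def equilibrium_anomaly(cycle_strength: float,
                        forcing: float = 0.0,
                        evap_base: float = 80.0,  # W/m2 mean evaporative cooling
                        damping: float = 3.2) -> float:
    """Surface temperature anomaly (K) at energy balance."""
    extra_evap = evap_base * (cycle_strength - 1.0)  # W/m2 of extra cooling
    return (forcing - extra_evap) / damping

print(f"{equilibrium_anomaly(1.01):+.2f} K")  # 1% stronger cycle -> cooling
print(f"{equilibrium_anomaly(0.99):+.2f} K")  # 1% weaker cycle   -> warming
```

On these assumed numbers, a 1% stronger cycle yields a cooling anomaly and a 1% weaker cycle a warming one, matching the sign of the relationship stated in the paragraph.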

I have proposed that any additional warming from an increase in CO2 added to the atmosphere is offset by an increase in surface evaporation and increased precipitation (an increase in the water cycle). My prediction seems to be supported by evidence of upper tropospheric drying since 1979 and the increase in global precipitation seen in reanalysis data. I have shown that the additional heating that may be caused by an increase in CO2 results in a drying, not a moistening, of the upper troposphere, resulting in an increase of outgoing radiation to space, not a decrease as proposed by the most recent application of the greenhouse theory.

Deficiencies in the ability of GCMs to adequately represent variations in global cloudiness, the water cycle, the carbon cycle, long-term changes in deep-ocean circulation, and other important mechanisms that control the climate reduce our confidence in the ability of these models to adequately forecast future global temperatures. It seems that the models do not correctly handle what happens to the added energy from CO2 IR blocking.

Figure 13: Effect of changes in the MOC: top, strong MOC; bottom, weak MOC. SLP: sea level pressure; SST: sea surface temperature.

Solar variations, sunspots, volcanic eruptions and cosmic-ray changes are, in energy terms, too small to play a significant role in the large energy changes that occur during important multi-decadal and multi-century temperature swings. It is the Earth's internal fluctuations that are the most important cause of climate and temperature change. These internal fluctuations are driven primarily by deep multi-decadal and multi-century ocean circulation changes, of which naturally varying upper-ocean salinity is hypothesized to be the primary driving mechanism. Salinity controls ocean density at the cold temperatures and high latitudes where the potential deep-water formation sites of the THC and SAS are located, and natural variability on both multi-decadal and multi-century timescales brings about the North Atlantic upper-ocean salinity changes.

Footnote:

The main point from Bill Gray was nicely summarized in a previous post, Earth Climate Layers.

The most fundamental of the many fatal mathematical flaws in the IPCC related modelling of atmospheric energy dynamics is to start with the impact of CO2 and assume water vapour as a dependent ‘forcing’.  This has the tail trying to wag the dog. The impact of CO2 should be treated as a perturbation of the water cycle. When this is done, its effect is negligible. — Dr. Dai Davies
