Alberta Set to Imitate Ontario’s Electrical Mess

Albertans pay around five cents a kilowatt hour — compared with the up to 18 cents Ontarians experienced. But for how long? (Postmedia News)

Kevin Libin writes in the Financial Post: Alberta’s now copying Ontario’s disastrous electricity policies. What could go wrong? Get ready, Albertans: a new report reveals that all the thrills and spills that follow when politicians start meddling in a boring but well-functioning electricity market are coming your way. Excerpts in italics below, with my bolds.

A report released Thursday by the University of Calgary’s School of Public Policy gives a sneak peek of how the Alberta script could play out. It begins once again with a “progressive” government convinced that its legacy lies in climate activism, out to redesign an electricity grid from something meant to provide affordable, reliable power into a showpiece of uncompetitive solar and wind power. And like Ontario, the Alberta NDP is determined to turn its provincial electricity grid into not just a green project that ignores economics, but an affirmative-action diversity project that sets aside certain renewable deals for producers owned by First Nations.

Alberta Premier Rachel Notley’s plan, like former Ontario premier Dalton McGuinty’s, is to phase out all of Alberta’s cheap, abundant but terribly uncool coal-fired power (by 2030, in Alberta’s case) and force onto the grid instead large amounts of unreliable, expensive solar and wind power. Albertans have been so preoccupied fighting through a barrage of energy woes since Notley’s NDP was elected — the oil-price crash, government-imposed carbon taxes and emission caps, blocked and cancelled pipelines and the Trudeau government’s wholesale politicization of energy regulation — that they probably haven’t realized yet how vast an overhaul Notley was talking about when she began revealing this plan in 2015. But the report’s author, Brian Livingston, an engineer and lawyer with deep experience in the energy business in Alberta, runs through the shocking numbers: As of last year, Alberta’s grid had a capacity of roughly 17,000 megawatts, but the envisioned grid of 2032 will require nearly 13,000 megawatts that do not currently exist. Think of it as rebuilding 75 per cent of Alberta’s current grid in less than 15 years. Hey, what could go wrong?
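That 75 per cent figure follows directly from the two capacity numbers quoted above; a quick back-of-the-envelope check (a sketch using only the report’s figures, nothing more):

```python
# Back-of-the-envelope check of the grid-rebuild claim, using the
# capacity figures quoted from Livingston's report.
current_capacity_mw = 17_000  # Alberta grid capacity as of last year
needed_new_mw = 13_000        # envisioned 2032 capacity that does not yet exist

rebuild_fraction = needed_new_mw / current_capacity_mw
print(f"{rebuild_fraction:.0%}")  # prints "76%" -- roughly three-quarters of the current grid
```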


And if Ontarians thought their government was obsessed with green power, Livingston notes that the Alberta Electricity System Operator is planning for so much wind power that the province will blow past Ontario, a province three times its size, with 5,000 megawatts of wind compared to Ontario’s 4,213 megawatts, and nearly twice as much solar power, 700 megawatts, compared to Ontario’s 380 megawatts.
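The scale of that comparison is easy to verify from the quoted megawatt figures (a sketch of the arithmetic, using only the numbers in the paragraph above):

```python
# Ratio of Alberta's planned renewable capacity to Ontario's installed
# capacity, using the megawatt figures quoted above.
alberta_wind_mw, ontario_wind_mw = 5_000, 4_213
alberta_solar_mw, ontario_solar_mw = 700, 380

print(f"wind:  {alberta_wind_mw / ontario_wind_mw:.2f}x Ontario")   # prints "wind:  1.19x Ontario"
print(f"solar: {alberta_solar_mw / ontario_solar_mw:.2f}x Ontario") # prints "solar: 1.84x Ontario" -- "nearly twice"
```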

Learning from McGuinty’s mistake, the Alberta NDP is smart enough to ensure the extra cost of all this uneconomic power won’t show up printed in black and white on consumers’ power bills, likely hoping that spares them the political fallout that now threatens the Ontario Liberals. Rather than ratepayers shouldering the pain, it will be taxpayers — largely the same people — who pay most of the additional costs through added deficits and debts, at least for the next few years. That’s because Notley has ordered a temporary cap on household electricity rates of 6.8 cents per kilowatt hour (which is still significantly higher than the current rate). When wholesale rates rise higher than that, the government will use carbon-tax revenues to pay the difference. But businesses pay full freight from the get-go.

Hiding the real costs of using energy is a curious move for a government that gives away energy-efficient light bulbs and other products designed to conserve while imposing carbon taxes to try suppressing energy use. It’s also a costly move. The C.D. Howe Institute estimates it will cost Alberta taxpayers up to $50 million this year alone; a recent report from electricity consultants at EDC Associates estimates that by 2021, the extra costs moved off electric bills and onto tax bills will total $700 million. That’s when the price cap expires and costs could start showing up on power bills instead.

Of course, Ontario has proven that it’s easy to underestimate how expensive these political experiments can get, but the Alberta redesign is already getting pricey. First, Notley accidentally stuck Alberta consumers with nearly $2 billion in extra surcharges when she rewrote carbon policies without realizing that gave producers the right to cancel unprofitable contracts. Her plan also requires the government to create a new “capacity” payment system for electricity producers, who will be able to charge substantial sums even if they don’t produce a single watt. Livingston shows that many producers can earn almost as much just for offering capacity to the grid as they do for producing. Meanwhile, since solar power is perennially and embarrassingly uncompetitive economically, even compared with expensive wind power, the government plans to let solar providers sell electricity at premium rates to government facilities, with taxpayers covering that cost, too, just as they’ll cover the cost of overpriced wind power, which doesn’t approach the affordability of fossil fuels.

In his report, Livingston drily notes that the way Albertans think of the future of their electricity system could probably be summed up as: “Whatever we do here in Alberta, please let us not do it like they did it in Ontario.” They have reason to fear, since Livingston shows Ontario households have faced rates as much as four times higher than those in Alberta. Even if it doesn’t look exactly like the way they did things in Ontario, that doesn’t mean it still can’t go very wrong. Whenever progressive politics infests the electrical grid, people always pay for it in the end.

Background:  Climate Policies: Real Economic Damage Fighting Imaginary Problem



US State Attorneys Push Back on Climate Lawsuits

A friend-of-the-court brief has been filed by the attorneys general of Indiana, Alabama, Arkansas, Colorado, Georgia, Kansas, Louisiana, Nebraska, Oklahoma, South Carolina, Texas, Utah, West Virginia, Wisconsin, and Wyoming. They urge the federal court in California to dismiss the lawsuit against five major oil companies for claimed climate damages. Previous posts discussed the scientific briefs against these lawsuits; this post adds the legal reasons why these court actions are unreasonable.

An article in Forbes summarizes: As Boulder Sues, 15 States – Including Colorado – Oppose Global Warming Lawsuits

On April 19, Colorado Attorney General Cynthia Coffman joined 14 colleagues in a friend-of-the-court brief filed in the California litigation that finds fault with the idea of using public nuisance lawsuits to address climate change. The text of the pleading is AMICUS BRIEF OF INDIANA AND FOURTEEN OTHER STATES IN SUPPORT OF DISMISSAL. Excerpts in italics below from Forbes, with my bolds.

“Plaintiffs’ theory of liability involves nothing more specific than promoting the use of fossil fuels,” the brief says.

“As utility owners, power plant operators and generally significant users of fossil fuels (through facilities, vehicle fleets and highway construction, among other functions), States and their political subdivisions themselves may be future defendants in similar actions.”

For now, those political subdivisions are plaintiffs – and the newest are the city and county of Boulder and San Miguel County. Their lawsuit was filed April 17 by two environmental firms and a Denver environmental/personal injury lawyer.

According to the Boulder County website, private attorneys will charge up to a 20% contingency fee. The City of Boulder has not yet produced a copy of its contract with these attorneys. Legal Newsline requested it April 18.

The City of Boulder was tight-lipped in the months leading up to the lawsuit, saying only that the City Council had approved a plan to hire a Washington, D.C., firm on a pro bono basis.

Like the California cases, Boulder’s makes a claim under the public nuisance theory. Climate change has caused a nuisance in the Boulder area, and the plaintiffs have to mitigate its impacts, the suit alleges.

The states, led by Indiana, say that theory isn’t good enough. Federal judges should not be asked to establish emissions policy, the brief says.

“But the questions of global climate change and its effects – and the proper balance of regulatory and commercial activity – are political questions not suited for resolution by any court,” the states say.

“Indeed, such judicial resolution would trample Congress’ carefully calibrated process of cooperative federalism where States work in tandem with EPA to administer the federal Clean Air Act.”

Background Is Global Warming A Public Nuisance?

Footnotes:  Notable quotes in italics from the State Attorneys’ brief (with my bolds):

To permit federal adjudication of claims for abatement fund remedies would disrupt carefully calibrated state regulatory schemes devised by politically accountable officials. Federal courts should not use public nuisance theories to confound state and federal political branches’ legislative and administrative processes by establishing emissions policy (or, as is more likely, multiple conflicting emissions policies) on a piecemeal, ad hoc, case-by-case basis under the aegis of federal common law.

As utility owners, power plant operators, and generally significant users of fossil fuels (through facilities, vehicle fleets and highway construction, among other functions), States and their political subdivisions themselves may be future defendants in similar actions.

Similarly, they request relief in the form of an “abatement fund remedy” rather than outright abatement, but the Ninth Circuit has already said that the remedy requested is irrelevant to the displacement issue. Ultimately, neither stratagem changes the essential nature of Plaintiffs’ claim or of the liability that they are asking the court to impose—liability that could serve as the predicate for myriad remedies in future cases or even in this one.

Plaintiffs are asking the court to order Defendants to pay to build sea walls, raise the elevation of low-lying property and buildings, and construct other infrastructure projects necessary to combat the effects of global climate change for the major cities of Oakland and San Francisco. Such a remedy could cost several billion dollars and seriously impact Defendants’ ability to provide energy to the rest of the country.

As the weight of authority demonstrates, Plaintiffs’ claims in this case may be styled as torts, but they are in substance political, and thus nonjusticiable.

To determine liability, the court would need to determine that plaintiffs have a “right” to the climate—in all of its infinite variations—as it stood at some unspecified time in the past, then find not only that this idealized climate has changed, but that Defendants caused that change through “unreasonable” action that deprived Plaintiffs of their right to the idealized climate.

Plaintiffs’ desired remedies are nothing more than a form of regulatory enforcement and creation of policy through the use of judicial remedies. Plaintiffs seek to inject their political and policy opinions into the national regulatory scheme of energy production, promotion, and use. Yet all States play a critical regulatory role within their borders, and Congress has leveraged and augmented that authority by way of the Clean Air Act, a cooperative federalist program designed to permit each State to achieve its optimal balance of regulation and commercial activity. Cooperative federalism in the environmental and energy production policy arena underscores the political nature of this case.

Thus, through the cooperative federalism model, States use their political bodies to secure environmental benefits for their citizens without sacrificing their livelihoods, and each does so in a different fashion—a natural result of the social, political, environmental, and economic diversity that exists among States. A plan to modify greenhouse gas emissions that is acceptable to California or Vermont may be unacceptable to Indiana, Georgia, or Texas, for example.

Plaintiffs are worried not about national climate change, but about global climate change. And, indeed, the global nature of concerns over anthropogenic climate change has spawned a variety of treaties and other international initiatives aimed at addressing air emissions. This activity has been multifaceted, balancing a variety of economic, social, geographic, and political factors and emphasizing multiparty action rather than arbitrarily focusing on a single entity or small group of entities.

The past two decades have thus seen four Presidencies with widely divergent views of what the United States’ foreign policy on climate change and greenhouse gas emissions should be. These shifts in direction further demonstrate the political nature of environmental and fossil fuel regulation and reaffirm the need for such decisions to be the subject of political debate and accountability.

Focusing on energy production rather than emissions does not make this case any less inherently political. If anything, it underscores the political nature of the global climate change problem by casting a spotlight on yet more political choices that bear on the issue. In some instances States themselves promote the very energy production and marketing targeted in this case. For example, the California State Oil and Gas Supervisor is charged with “encourag[ing] the wise development of oil and gas resources” and “permit[ing] the owners or operators of the wells to utilize all methods and practices known to the oil industry for the purpose of increasing the ultimate recovery of underground hydrocarbons[.]” Cal. Pub. Res. Code §§ 3004, 3106(b).

California cannot evade the application of the Commerce Clause by using common law rather than state statutory law to regulate commerce occurring outside its borders. The constitutional restrictions on California’s ability to regulate out-of-state commerce “reflect the Constitution’s special concern both with the maintenance of a national economic union unfettered by state imposed limitations on interstate commerce and with the autonomy of the individual States within their respective spheres.” Healy v. Beer Inst., Inc., 491 U.S. 324, 335–36 (1989).

At the most basic level, such remedies represent an effort by one state to occupy the field of environmental and energy production regulation across the nation, and to do so by superseding sound, reasonable, and longstanding standards adopted by other states in a system of cooperative federalism and by the federal government. Indeed, even if the Plaintiffs’ desired remedies do not directly conflict with other states’ existing laws and regulatory framework, it nonetheless would “arbitrarily . . . exalt the public policy of one state over that of another” in violation of the Commerce Clause.

By asking a single federal judge to impose energy production penalties on defendant companies, each of which is presumably compliant with the regulations of each state in which it operates, Plaintiffs are attempting to export their preferred environmental policies and their corresponding economic effects to other states. Allowing them to do so would be detrimental to state innovation and regional approaches that have prevailed through the political branches of government to date. California’s attempt to regulate out-of-state production of fossil fuels by suing producers with a common law cause of action implicates the constitutional doctrine against extraterritorial regulation. This is yet another reason to reject Plaintiffs’ novel theory of liability.

Take the Climate Challenge

Last night PBS aired the most impressive presentation yet of “Official” climate doctrine. I don’t say “science” because it mounts a powerful advocacy for a particular viewpoint and entertains no alternative perspectives. The broadcast is extremely well crafted with great imagery, crisp sound bite dialogue and sincere acting.

With all the invested effort, talent and expense, it is probably the strongest Blue Team argument yet for climate alarm and against fossil fuel consumption. As such, we can expect that large audiences of impressionable people of all ages will be exposed to it. It behooves anyone who stands on skeptical ground, and wants to hold that position, to study what is asserted and decide what points are acceptable and what claims are disputed.

The telecast will be repeatedly aired this month on NOVA on US PBS stations. The website apparently blocks viewing in foreign countries, but the transcript is available and I will refer to it in comments below.

Update April 20: An independent review of the documentary is added at the end.

Decoding the Weather Machine: Discover how Earth’s intricate climate system is changing. Airing April 18, 2018, at 8 p.m. on PBS.

Program Description
Disastrous hurricanes. Widespread droughts and wildfires. Withering heat. Extreme rainfall. It is hard not to conclude that something’s up with the weather, and many scientists agree. It’s the result of the weather machine itself—our climate—changing, becoming hotter and more erratic. In this two-hour documentary, NOVA will cut through the confusion around climate change. Why do scientists overwhelmingly agree that our climate is changing, and that human activity is causing it? How and when will it affect us through the weather we experience? And what will it take to bend the trajectory of planetary warming toward more benign outcomes? Join scientists around the world on a quest to better understand the workings of the weather and climate machine we call Earth, and discover how we can be resilient—even thrive—in the face of enormous change.

Outline Of Themes (Excerpts in italics from the transcript with my added images and pushback)

Introduction (The video clip above)
This is the essence of science …a global investigation of our climate machine.

We’re poking at the climate system with a long, sharp, carbon-tipped spear. And we cannot perfectly predict all of the consequences.

It’s a planetary crisis, but we’re clever enough to think our way out of this.

Alarming Weather and Wildfires

The rhythm of the atmosphere was off. We were seeing more freakish weather; storms were stronger and wetter.  We’ve got a multitude of active large fires, and another megastorm en route.

Douglas had heard about global warming, but given all the crazy weather he’d experienced, he was skeptical. And he’s not alone. A third of Americans doubt humans are changing the climate.

But: Weather is not more extreme.
And Wildfires were worse in the past.

Litany of Changes

Seven of the ten hottest years on record have occurred within the last decade; wildfires are at an all-time high, while Arctic Sea ice is rapidly diminishing.

We are seeing one-in-a-thousand-year floods with astonishing frequency.

When it rains really hard, it’s harder than ever.

We’re seeing glaciers melting, sea level rising.

The length and the intensity of heatwaves has gone up dramatically.

Plants and trees are flowering earlier in the year. Birds are moving polewards.

We’re seeing more intense storms.

But: All of these are within the range of past variability.

In fact our climate is remarkably stable.

And many aspects follow quasi-60 year cycles.

Climate is Changing the Weather

Changes like these have led an overwhelming majority of climate scientists to an alarming conclusion: it isn’t just the weather that’s changing, it’s what drives the weather, Earth’s climate.

But: Actual climate zones are local and regional in scope, and they show little boundary change.

The Journey to Blaming CO2 and Humans

In 1824, Fourier was the first to deduce that it’s the composition of the atmosphere that governs the surface temperature of the earth; 1824, almost 200 years ago, and climate science has been accumulating ever since.

(Forty years later) Tyndall figured out that carbon dioxide traps heat. But even more importantly, Tyndall realized that when we dig up coal and burn it, it’s actually releasing more of these heat-trapping gases.

(In the 1950s) This annual rise and fall of carbon dioxide is what Dave Keeling discovered. It is the breath of the world’s forests. The Keeling Curve established, without question, that the carbon dioxide content in the atmosphere was going up steeply, sharply, rapidly.

But these (Antarctic) ice cores can extend the Keeling Curve back in time and reveal that today’s concentration of carbon dioxide is unusually high. The current concentration of carbon dioxide in the atmosphere is higher than it has been for 800,000 years.

From ocean mud, emerges a record of temperature that goes back tens of millions of years. That record shows temperature swings from warm periods to ice ages triggered by changes in Earth’s orbit. But when these temperature changes are paired with the levels of carbon dioxide from ice cores, a startling correlation emerges. The two graphs are a near perfect match.

Fossil fuels have been locked up underground for millions of years. So, when we emit fossil fuels into the atmosphere, we’re emitting carbon that is very different. It has a very distinct fingerprint. This chemical fingerprint and many other lines of evidence leave no doubt that we are responsible for the skyrocketing levels of carbon dioxide.

But: Ice cores show that it was warmer in the past, not due to humans.

And CO2 relation to Temperature is Inconsistent.

Linking CO2 to Climate and Weather

Climate and weather are flip sides of the same coin. You impact climate, it’s going to impact weather. Weather is what is happening in the atmosphere at a given time and place: hot, cold, rain or snow. Climate is an average of that weather, over longer periods.

It is fundamentally these two factors, Earth’s spin and heat differences between the poles and the equator that create the weather patterns we know. So, if you trap more heat in the system, you change the weather.

We are more powerful than nature in the push we are putting on climate. And we don’t entirely understand and cannot perfectly predict all of the consequences. It’s not we’re worried because it’s never happened before, Earth’s climate has changed. What hasn’t happened before is to change it this quickly.

But: Human emissions are dwarfed by CO2 from estimated natural sources.

The Race to Understand the Climate Machine

Across the globe, scientists are now racing to understand and model Earth’s climate system, trying to figure out just how damaging climate change will be.

The evidence is clear that by burning fossil fuels, we humans have changed the composition of the atmosphere, which is now trapping more heat. How the other parts of the climate machine will respond will determine how much our climate will change and how much the great diversity of life that it supports will be affected.

The land, part of Earth’s climate machine, is playing an essential role, because trees are absorbing about 25 percent of the extra carbon dioxide that is heating our atmosphere. It turns out that the oceans are doing the same.

Probing the Ocean’s Mysteries

When we talk about warming of the climate system, we tend to focus on the atmosphere, but the lion’s share of the warming of our climate system is in the ocean.

Along with teams from around the world, (Stephen Riser) is building fleets of underwater drones, called “Argo floats,” to do the work. These robots are pioneering explorers, designed to probe parts of the earth never seen before.

The Southern Ocean is this gateway between the deep ocean and the atmosphere. There’s not many places in the global ocean where that deep water can contact the atmosphere. Once at the surface, the deep cold water, that scientists call “old water,” soaks up heat like a sponge.

The Argo floats reveal that over the last 30 years, the ocean has heated up by an average of a half-degree Fahrenheit. If we put all of that heat into the lower atmosphere, the atmosphere would heat up by about 20 degrees Fahrenheit, that’s how much heat we’re talking about here.

In all, a staggering 93 percent of the heat that we’re putting into our atmosphere is getting soaked up by our oceans. This comes with consequences. Heating the ocean and adding carbon dioxide are damaging to life in the sea.

But:  The Argo record is short and shows a mixed picture.

Studying Ice and Sea Levels

The data from the motion trackers and other high tech devices, like this radar, are giving Holland new insights into how glaciers disappear. What he has found is surprising. For glaciers in contact with the ocean, warmer air causes some of the loss of ice, but the real trigger for intense calving is warmer water coming underneath the glacier and destabilizing it.

Locked up in the Antarctic ice sheet is a total of 200 feet of possible sea level rise. And this vast continent of ice, especially the western part, is breaking up faster than anyone thought possible.

The melting or break up of all that ice would devastate much of civilization as we know it, as sea levels rise and flood cities and coasts.

By mapping this ancient Australian reef, Andrea Dutton is able to tell how high sea levels were the last time Earth was as warm as today. Our research shows that with just the amount of warming we’ve seen today, the seas could rise much higher, up to 20 to 30 feet higher than today.

The big question is how fast? Does it take us 500 years to get there? Well that’s one thing. Or does it take us 100 years to get there. That’s three feet in a decade. That’s a lot.
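The per-decade rate in that quote follows from simple division of the 30-foot upper bound over the two time horizons mentioned (a sketch of the arithmetic only, using the transcript’s figures):

```python
# Spreading the quoted 30 feet of potential sea level rise over the two
# time horizons mentioned in the transcript.
rise_ft = 30
for years in (500, 100):
    ft_per_decade = rise_ft / years * 10
    print(f"{years} years -> {ft_per_decade:.1f} ft per decade")
# The 100-year case works out to 3.0 ft per decade, matching the quote.
```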

But: Sea Level Rise is not accelerating.

Sea Level Rise Today

So, when will we start to feel the impact of sea level rise? Some people already are. The Marshall Islands are a nation of low-lying islands in the Pacific. They are home to 50,000 people and a vibrant culture. Today, they face becoming a new kind of refugee: a climate refugee.

Sea level rise is now a reality even in the United States. And low-lying cities, like Norfolk, Virginia, are on the front line.

But: On site observations show no alarming sea level rise.

Rising Costs and Feedbacks

For the people of Norfolk, climate change is already affecting their lives. And across all of America, the costs are mounting. 2017 was the costliest hurricane season on record. Harvey alone caused catastrophic flooding in southeastern Texas, with financial damages that rival Katrina, and Puerto Rico was devastated by Hurricane Maria.

Wildfires in the western United States have quadrupled since the 1980s, exacerbated by drought. Effects like these are being felt across the planet, and some are even accelerating the warming itself. When trees that have been helping by pulling carbon dioxide out of the atmosphere burn down, much of that carbon is pumped back into the air.

And in the Arctic, ice that has been cooling the planet by reflecting away some of the sun’s heat is melting. The loss of ice means more warming.

But: Arctic Ice has not declined since 2007.

Models Foretell the Future

Using nothing but basic physics, we can actually produce, in our computers, a virtual Earth. With this virtual Earth, scientists like Kirsten Findell work to predict where our climate is going, before it’s too late to change course.

Worldwide, there are dozens of models. They predict how each part of the climate machine will change, like sea surface temperature, storm intensity or the extent of the ice caps. Every detail is included. But the path to perfect models is still a work in progress, because Earth’s climate machine is such a complicated one.

The role that clouds play, for instance, is important, but poorly understood. And the speed at which ice sheets will break apart is another big unknown.

Computer models don’t exist in isolation. We calibrate them against what we’ve observed. We test them against the history of climate change. And we now know they’re pretty good.

But: Those models are running hot and vary greatly despite shared assumptions.

And the models only come close to observations when CO2 is left out.

The Grim Outlook from Models

The models can be used to run a virtual experiment: if we continue emitting carbon dioxide on the path we are on, what do they say our world will look like in 2100?

This map shows how temperatures could change. The models predict the average temperature could be 5 to 10 degrees Fahrenheit hotter. That means in New York City, days with temperatures over 90 degrees would more than triple. And in the Arctic, which will heat up even faster, it could rise, on average, more than 15 degrees.

Their results suggest we will see more Category 4 and 5 hurricanes, and the prevalence of devastating heatwaves will be much more extreme.

The models also show that by the end of the century, it is likely the ocean will rise one-and-a-half to four feet. Without major changes, this would put parts of cities like Miami under water.

But: We have heard all this before.

What Are the Options

The path ahead comes down to three basic options. We can do nothing and suffer the consequences; we can adapt as the changes unfold; or we can act now to mitigate, or limit the damage. The options are connected. The more we mitigate, the less we would need to adapt. The more we adapt and mitigate, the less we would suffer.

Adaptation is perhaps most urgent in the ocean, which, right now, is bearing the brunt of climate change by absorbing most of the heat. Billions of people depend on the sea for food or their livelihood. As temperatures rise, many species of marine life are moving to cooler waters, threatening local fisheries. And warmer water is killing off coral reefs, which support about 25 percent of all life in the sea.

Across America, cities are drawing up plans to adapt to the impacts of climate change, whether that’s too much water from rising sea levels and stronger storms, or too little water from harsher, longer droughts.

But there is a way to avoid the worst impacts of climate change in the first place. The more we mitigate, or limit, how much our climate changes, the less we will have to adapt. That will require shifting our economies away from burning fossil fuels. The good news is technology is moving so fast, there are many alternatives.

But: Fossil fuel consumption is poorly related to temperatures.

Technology Solutions

The scientific toolkit finally got big enough to crack this thing. Wind and solar are much further ahead than anybody ever thought they would be 10 years ago. They’re growing impossibly rapidly.

These turbines are 40 stories high, with rotors the size of a football field. Each can produce enough electricity to power up to 400 homes or make a lot of dishwashers. It’s time to innovate, and it’s time to change. Instead of having one plant that makes 1,000 megawatts, let’s have 100 plants and make 10 megawatts, or 1,000 plants that make one megawatt.

They’re working on endgame technologies that fully fill the gap between where we need to go and the track that we’ve been on since the beginning in the Industrial Revolution. So where do we need to go? Jet fuel made from plants; taller, more powerful wind turbines; better batteries; and the next generation solar cell.

Lisa Dyson envisions a day that our choices for solving the climate crisis are not just suffer, adapt or mitigate, but also prosper, by learning to recycle carbon dioxide into useful everyday products. If carbon capture and renewable technologies become more widespread, carbon dioxide levels will stop increasing.

But: Modern nations (G20) depend on fossil fuels for nearly 90% of their energy.

Negative Emissions

But even reaching that goal may not be enough, because we still would have record high levels, continuing to warm up our planet. We may need to find a way to pull more carbon dioxide out of the air than we emit into it, to go into what’s called “negative emissions.”

On most farms, the soil is tilled, or plowed, to reduce weeds and pests. But in the process, much of the carbon gets dug up and released back to the atmosphere. Dave decided to go another route called “no-till” farming. Every time you harvest, you leave the residue from that crop in place, so there is a protective blanket on the top of the soil. So, here we have residue left from last year’s corn crop. Corn stalks, leaves, an occasional corncob. Not tilling helped the soil become healthier.

We need to fundamentally rethink how we do agriculture, focused on soil building, soil health, putting carbon back in the ground. And if we’re able to do that, then agriculture could be a major contributor to very positive changes related to global climate.

But: The planet is greener because of rising CO2.


For over 200 years, in every corner of the globe, scientists have probed Earth’s climate machine, developing a deep understanding of how it works.

They have proven beyond reasonable doubt that climate change is happening and that burning fossil fuels is the primary cause. They have built computer models that can predict the road ahead, and they have come up with ways to adapt, or solutions to avoid the worst of the impacts. But there is one powerful piece of the climate machine so unpredictable and inconsistent that no computer model could ever guess how it will behave: us.

The scientific evidence is so clear about where we’re going, but there is an astonishing inertia. We’re not mitigating fast enough to stop the train crash. The technological solutions make it inevitable that we will solve this problem. The question is just how much damage we create before we finally rein it in.

Update April 20:  This review of the documentary was posted by cerescokid  at Climate Etc.

I watched the PBS show. Perfect……..for an 8 year old. Could it be more simplistic? It is warming. CO2 did it. That was the sum and substance of it.

No mention of previous warm periods or the debate about them. Not a word about questions over SLR acceleration. Nothing on previous warm periods in the Arctic. Silence on East Antarctica gaining ice or Antarctic Peninsula Cooling. Not a peep about geothermal activity in Greenland and Antarctica. Nothing about the 12 year hiatus in Cat 3 hurricanes. Nothing about trendless tornadic activity. Nothing about trendless snow levels in North America. Not any explanation why temperatures are believed to be unprecedented and not just natural variability. Why no discussion of endless stacked Oscillations. Why wasn’t the sun dismissed? Hasn’t glacier calving been happening for eons? They made a big deal of an iceberg the size of the Empire State Building. Big deal.

But there were plenty of pretty pictures and age appropriate explanations of the issues. See spot run.

It was nothing more than a propaganda piece, perfect for the marginally competent HP aficionados.

They did, however, have a nice voiceover stating that temperatures haven’t been this warm in 800,000 years while showing a graph, not of temperatures, but of spiking CO2 levels over the last 800,000 years. Nice Trick.

Alarmists Fret, while Farmers Adapt


Update April 18 at End

This latest alarm is about the eastward shift of the above climate zone boundary, which historically was located upon the 100th meridian. The narrative by alarmists is along the lines of “OMG, we are screwed because drylands are replacing wetlands. There goes our food supply.” Some of the story headlines are these:

As World Warms, America’s Invisible ‘Climate Curtain’ Creeps East

The arid US midwest just crept 140 miles east thanks to climate change

America’s Arid West Is Invading the Fertile East

A major climate boundary in the central U.S. has shifted 140 miles due to global warming

From USA Today

Both population and development are sparse west of the 100th meridian, where farms are larger and primarily depend on arid-resistant crops like wheat, the Yale School of Forestry & Environmental Studies said. To the more humid east, more people and infrastructure exist. Farms are smaller and a large portion of the harvested crop is moisture-loving corn.

Now, due to shifting patterns in precipitation, wind and temperature since the 1870s — due to man-made climate change — the boundary between the dry West and the wetter East has shifted to roughly 98 degrees west longitude, the 98th meridian.

For instance, in Texas, the boundary has moved approximately from Abilene to Fort Worth.

According to Columbia University’s Earth Institute, Seager predicts that as the line continues to move farther East, farms will have to consolidate and become larger to remain viable.

And unless farmers are able to adapt, such as by using irrigation, they will need to consider growing wheat or another more suitable crop than corn.

“Large expanses of cropland may fail altogether, and have to be converted to western-style grazing range. Water supplies could become a problem for urban areas,” the Earth Institute said.

The studies appeared in the journal Earth Interactions, a publication of the American Meteorological Society.

What They Didn’t Tell You:  Context Makes All the Difference

This is another example of misdirection to push FFF (Fear of Fossil Fuels) by ignoring history and human ingenuity, while kowtowing to climate models as infallible oracles. The truth is, we didn’t get here by being victims, and lessons from the past will serve in the future.

First, the West Was Settled by Adaptive Farmers

One of the best researchers and historians is Geoff Cunfer, who with Fridolin Krausmann wrote Adaptation on an Agricultural Frontier: The Socio-Ecological Metabolism of Great Plains Settlement, 1875-1936.  Excerpts with my bolds.

The most important agricultural development of the nineteenth century was a massive and rapid expansion of farmland in the world’s grasslands, a process that doubled global land in farms. Displacing indigenous populations, European settlers plowed and fenced extensive new territories in North America’s Great Plains, South America’s campos and pampas, the Ukrainian and Russian steppes, and parts of Australia and New Zealand. Between 1800 and 1920 arable land increased from 400 million hectares to 950 million, and pasture land from 950 to 2,300 million hectares; much of that expansion occurred in grasslands. These regions became enduring “breadbaskets” for their respective nations and fed the nineteenth century’s 60 percent increase in world population. Never had so much new land come into agricultural production so fast. This episode was one of the most extensive and important environmental transformations in world history.

Most agro-ecologists and sustainability scientists focus on the present and the future. This article adapts their approach in order to understand agricultural change in the past, integrating socio-economic and physical-ecological characteristics that reveal both natural and cultural drivers of change. Socio-ecological profiles embrace land use, soil nitrogen, and food energy as key characteristics of agricultural sustainability. Ten descriptive measures link biophysical and socio-economic processes in farm communities to create socio-ecological profiles revealing human impacts on nature as well as environmental endowments, opportunities, constraints, and limitations that influenced settlers’ choices.

Tracing these characteristics from the beginning of agricultural colonization through sixty years reveals a pattern of expansion and growth, maturity, and adaptation. Agricultural systems are seldom static. Farmers interact with constantly varying natural forces and with social processes always in flux. The Kansas agricultural frontier reveals adjustments and readjustments to an ever-changing world and, especially, to environmental  forces beyond settlers’ control. Three distinct socio-ecological profiles emerged in Kansas: a) high productivity mixed farming; b) low productivity ranching; and c) market-oriented dryland wheat farming. The following narrative addresses each profile in chronological order and from east to west across the state, revealing settlers’ rapid adaptation to environmental constraints; accompanying figures allow simultaneous spatial comparison.

Second, Farming was Sustained through Environmental Changes

Cunfer wrote a book, On the Great Plains: Agriculture and Environment (reviewed here).  Some excerpts with my bolds.

Though it may seem inconceivable to characterize the history of Great Plains land use as stable, Cunfer uncovers a persistent theme in his research: Great Plains farmers surprisingly found an optimal mix between agricultural uses (in particular, plowing vs. pasture) quickly and maintained this mix within the limits of the natural environment for a surprisingly long period of time. Only occasionally, in particular during the mid 1930s, did farmers push the boundaries of this regional environment; however, they quickly returned to a “steady-state” land-use equilibrium.

In particular, Cunfer blends together these two extreme approaches and summarizes Great Plains agricultural history in three components: (1) the rapid build-up of farm settlements from 1870-1920, which substantially altered the surrounding environment; (2) relative land-use stability from 1920 to 2000; and (3) the occasional transition in agricultural techniques which resulted in a quick shift away from this land-use equilibrium.

The Dust Bowl still remains an important environmental crisis and it is often a rallying point for federal government conservation programs. Cunfer adds to this literature by applying GIS maps to the entire Great Plains and interpreting comparative sand, rainfall, and temperature differential data to conclude that “human land-use choices were less prominent in creating dust storms than was the weather” (p. 163).[1] The localized portion of the Great Plains where dust storms were magnified contained substantially more sandy soil, only a small percentage of land devoted for crops, and the greatest degree of rainfall deficits from past trends. This non-exploitative argument contradicts the conventional wisdom which maintains that a massive plow-up followed the trail of increasing wheat prices and low cost of farming.

Our Ancestors Prevailed and We have Additional Advantages

Just as pioneer colonization inscribed a new cultural signature onto a plains landscape constructed by Native Americans, industrial agriculture began to over-write the settlement-era landscape. Fossil fuel-powered technologies brought powerful new abilities to deliver irrigation water, apply synthetic fertilizers, control pests, and reconstruct landscapes with tractors, trucks, and mechanical harvesters. A new equilibrium between environmental alteration and adaptation emerged. Industrial agriculture’s remarkable ability to alter and manage natural systems depends on a massive mobilization of fossil fuel energy. But until the early twentieth century farmers accommodated and adapted to natural constraints to a considerable extent.

Fig. 1 An energy model of agroecosystems, optimized for estimation based on historical sources (adapted from Tello et al. 2015)


It is disrespectful and demeaning for the activist media types to pretend we are unprepared and incapable of adapting to changing environmental and climate conditions.  Present day knowledge of agroecosystems is highly advanced, supported by modern technologies and experience with crop selections and choices for diverse microclimates.  Cunfer and colleagues discuss the possibilities in a paper Agroecosystem energy transitions: exploring the energy-land nexus in the course of industrialization.

A previous post at this blog was Adapting Works! Mitigating Fails, discussing how farmers pushed the extent of wheat production 1000 km north through adaptation and innovation.

Warming has produced bumper crops most everywhere.

Update April 18

Dr. Roy Spencer has also weighed in on these scare stories, and adds considerable perspective.  He challenges the claim that the eastward shift has happened.

Since I’ve been consulting for U.S. grain interests for the last seven or eight years, I have some interest in this subject. Generally speaking, climate change isn’t on the Midwest farmers’ radar because, so far, there has been no sign of it in agricultural yields. Yields (production per acre) of all grains, even globally, have been on an upward trend for decades. This is fueled mainly by improved seeds, farming practices, and possibly by the direct benefits of more atmospheric CO2 on plants. If there has been any negative effect of modestly increasing temperatures, it has been buried by other, positive, effects.

And so, the study begs the question: how has growing season precipitation changed in this 100th meridian zone? Using NOAA’s own official statewide average precipitation statistics, this is how the rainfall observations for the primary agricultural states in the zone (North and South Dakota, Nebraska, Kansas, and Oklahoma) have fared every year between 1900 and 2017:


What we see is that there has been, so far, no evidence of decreasing precipitation amounts exactly where the authors claim it will occur (and according to press reports, has already occurred).

Spencer’s post is The 100th Meridian Agricultural Scare: Another Example of Media Hype Exceeding Reality




Mar. 2018 Ocean Cooling? Wait and See


The best context for understanding decadal temperature changes comes from the world’s sea surface temperatures (SST), for several reasons:

  • The ocean covers 71% of the globe and drives average temperatures;
  • SSTs have a constant water content, (unlike air temperatures), so give a better reading of heat content variations;
  • A major El Nino was the dominant climate feature in recent years.

HadSST is generally regarded as the best of the global SST data sets, and so the temperature story here comes from that source, the latest version being HadSST3.  More on what distinguishes HadSST3 from other SST products at the end.

The Current Context

The chart below shows SST monthly anomalies as reported in HadSST3 starting in 2015 through March 2018.

A global cooling pattern has persisted, seen clearly in the Tropics since their peak in 2016, joined by the NH and SH dropping since last August. Upward bumps occurred last October, in January and again in March 2018.  Three months of 2018 now show slight warming since the low point of December 2017.  Only the Tropics are showing their lowest temperatures of this time frame.  Globally and in both hemispheres, anomalies closely match March 2015.

Note that higher temps in 2015 and 2016 were first of all due to a sharp rise in Tropical SST, beginning in March 2015, peaking in January 2016, and steadily declining back below its beginning level. Secondly, the Northern Hemisphere added three bumps on the shoulders of Tropical warming, with peaks in August of each year. Also, note that the global release of heat was not dramatic, due to the Southern Hemisphere offsetting the Northern one.

With ocean temps positioned the same as three years ago, we can only wait and see whether the previous cycle will repeat or something different appears.  As the analysis below shows, the North Atlantic has been the wild card bringing warming this decade, and cooling will depend upon a phase shift in that region.

A longer view of SSTs

The graph below  is noisy, but the density is needed to see the seasonal patterns in the oceanic fluctuations.  Previous posts focused on the rise and fall of the last El Nino starting in 2015.  This post adds a longer view, encompassing the significant 1998 El Nino and since.  The color schemes are retained for Global, Tropics, NH and SH anomalies.  Despite the longer time frame, I have kept the monthly data (rather than yearly averages) because of interesting shifts between January and July.


Open image in new tab for sharper detail.

1995 is a reasonable starting point prior to the first El Nino.  The sharp Tropical rise peaking in 1998 is dominant in the record, starting Jan. ’97 to pull up SSTs uniformly before returning to the same level Jan. ’99.  For the next 2 years, the Tropics stayed down, and the world’s oceans held steady around 0.2C above 1961 to 1990 average.

Then comes a steady rise over two years to a lesser peak Jan. 2003, but again uniformly pulling all oceans up around 0.4C.  Something changes at this point, with more hemispheric divergence than before. Over the 4 years until Jan 2007, the Tropics go through ups and downs, NH a series of ups and SH mostly downs.  As a result the Global average fluctuates around that same 0.4C, which also turns out to be the average for the entire record since 1995.

2007 stands out with a sharp drop in temperatures so that Jan. ’08 matches the low in Jan. ’99, but starting from a lower high. The oceans all decline as well, until temps build, peaking in 2010.

Now again a different pattern appears.  The Tropics cool sharply to Jan 11, then rise steadily for 4 years to Jan 15, at which point the most recent major El Nino takes off.  But this time in contrast to ’97-’99, the Northern Hemisphere produces peaks every summer pulling up the Global average.  In fact, these NH peaks appear every July starting in 2003, growing stronger to produce 3 massive highs in 2014, 15 and 16, with July 2017 only slightly lower.  Note also that starting in 2014 SH plays a moderating role, offsetting the NH warming pulses. (Note: these are high anomalies on top of the highest absolute temps in the NH.)

What to make of all this? The patterns suggest that in addition to El Ninos in the Pacific driving the Tropic SSTs, something else is going on in the NH.  The obvious culprit is the North Atlantic, since I have seen this sort of pulsing before.  After reading some papers by David Dilley, I confirmed his observation of Atlantic pulses into the Arctic every 8 to 10 years as shown by this graph:

The data is annual averages of absolute SSTs measured in the North Atlantic.  The significance of the pulses for weather forecasting is discussed in AMO: Atlantic Climate Pulse

But the peaks coming nearly every July in HadSST require a different picture.  Let’s look at August, the hottest month in the North Atlantic, from the Kaplan dataset.  Now the regime shift appears clearly. Starting with 2003, seven times the August average has exceeded 23.6C, a level that prior to ’98 registered only once before, in 1937.  And other recent years were all greater than 23.4C.


The oceans are driving the warming this century.  SSTs took a step up with the 1998 El Nino and have stayed there with help from the North Atlantic, and more recently the Pacific northern “Blob.”  The ocean surfaces are releasing a lot of energy, warming the air, but eventually will have a cooling effect.  The decline after 1937 was rapid by comparison, so one wonders: How long can the oceans keep this up?

To paraphrase the wheel of fortune carnival barker:  “Down and down she goes, where she stops nobody knows.”  As this month shows, nature moves in cycles, not straight lines, and human forecasts and projections are tenuous at best.



In the most recent GWPF 2017 State of the Climate report, Dr. Humlum made this observation:

“It is instructive to consider the variation of the annual change rate of atmospheric CO2 together with the annual change rates for the global air temperature and global sea surface temperature (Figure 16). All three change rates clearly vary in concert, but with sea surface temperature rates leading the global temperature rates by a few months and atmospheric CO2 rates lagging 11–12 months behind the sea surface temperature rates.”

Footnote: Why Rely on HadSST3

HadSST3 is distinguished from other SST products because HadCRU (Hadley Climatic Research Unit) does not engage in SST interpolation, i.e. infilling estimated anomalies into grid cells lacking sufficient sampling in a given month. From reading the documentation and from queries to Met Office, this is their procedure.

HadSST3 imports data from gridcells containing ocean, excluding land cells. From past records, they have calculated daily and monthly average readings for each grid cell for the period 1961 to 1990. Those temperatures form the baseline from which anomalies are calculated.

In a given month, each gridcell with sufficient sampling is averaged for the month and then the baseline value for that cell and that month is subtracted, resulting in the monthly anomaly for that cell. All cells with monthly anomalies are averaged to produce global, hemispheric and tropical anomalies for the month, based on the cells in those locations. For example, Tropics averages include ocean grid cells lying between latitudes 20N and 20S.

Gridcells lacking sufficient sampling that month are left out of the averaging, and the uncertainty from such missing data is estimated. IMO that is more reasonable than inventing data to infill. And it seems that the Global Drifter Array displayed in the top image is providing more uniform coverage of the oceans than in the past.
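As a toy illustration of this cell-by-cell procedure, here is a minimal sketch; the three-cell grid, baselines and readings are invented for the example and are not HadSST3 data:

```python
import numpy as np

# Hypothetical mini-grid: three ocean cells in one month.
# Baseline = 1961-1990 climatology for each cell and calendar month.
baseline = np.array([18.2, 24.5, 26.1])        # deg C per grid cell

# This month's average reading per cell; NaN marks a cell with
# insufficient sampling, which is left out rather than infilled.
monthly_mean = np.array([18.9, np.nan, 26.4])  # deg C

# Anomaly per cell = observed monthly mean minus that cell's baseline.
anomaly = monthly_mean - baseline

# Regional anomaly = average over only the cells that reported.
regional_anomaly = np.nanmean(anomaly)
print(round(float(regional_anomaly), 2))       # 0.5 here
```

Averaging only the sampled cells, as here, matches the no-infilling choice described above; interpolating products would instead estimate a value for the NaN cell before averaging.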


USS Pearl Harbor deploys Global Drifter Buoys in Pacific Ocean


Arctic Ice Mid April



Click on image to enlarge.

The most obvious Arctic ice feature this year has been the shrinkage in the Pacific basins, especially Bering Sea (on the right).  The image shows extents on day 104 from the decadal high in 2012 to 2018 (yesterday).  Bering has only 200k km2 mid April 2018 compared to 1100k km2 six years ago.  On the left, Okhotsk has gone through ups and downs, but 2018 is comparable to 2012.  It appears Bering is dominated by Northeast Pacific warming, whose effects are moderated in Okhotsk by Siberian conditions.

This is evident in the current nullschool simulation of wind patterns in the region (link to animation).

On the European side, Barents continues to show more ice than in recent years. Ice in Barents Sea has retreated lately, but extent there is still above average and slightly larger than in 2014, the iciest year.  As the graph shows, 2017 came on late in spring to surpass 2014 for a while.

The graph below shows how Arctic extent over the last six weeks compared to the 11 year average and to some years of interest.

Note the average max on day 62 and 2018 max on day 74.  In recent weeks 2018 is matching 2017 and slightly higher than 2007. SII (NOAA) continues to show ~200k km2 less extent. The graph below shows that the deficit to average is entirely due to Bering and Okhotsk Seas, since removing those two basins eliminates the shortfall.

The table below confirms that the core Arctic ice remains firmly in place.

Region (extents in km2)                 2018 day 104   Day 104 Average   2018-Ave.   2007 day 104   2018-2007
 (0) Northern_Hemisphere                    13956065          14373298     -417234       13862996       93068
 (1) Beaufort_Sea                            1070445           1068880        1565        1058157       12288
 (2) Chukchi_Sea                              962477            965131       -2654         960944        1532
 (3) East_Siberian_Sea                       1087137           1085763        1374        1074001       13136
 (4) Laptev_Sea                               897845            894331        3514         866524       31321
 (5) Kara_Sea                                 934919            925323        9596         912398       22521
 (6) Barents_Sea                              708699            609715       98984         521344      187355
 (7) Greenland_Sea                            575274            663379      -88104         691751     -116477
 (8) Baffin_Bay_Gulf_of_St._Lawrence         1340040           1352819      -12779        1222152      117888
 (9) Canadian_Archipelago                     853109            852426         683         846282        6827
 (10) Hudson_Bay                             1260022           1245760       14263        1212987       47035
 (11) Central_Arctic                         3200334           3238761      -38426        3245148      -44813
 (12) Bering_Sea                              189180            780469     -591289         645687     -456507
 (13) Baltic_Sea                               68363             44683       23681          20075       48289
 (14) Sea_of_Okhotsk                          805400            639794      165606         576913      228487

The overall deficit is ~3%, entirely due to Bering Sea.  Okhotsk and Barents are above average, but not by enough to offset the lack of ice in Bering.
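A quick arithmetic check of that claim, using the day-104 numbers from the table above:

```python
# Extents in km^2, taken from the day-104 table above.
nh_2018 = 13956065
nh_avg = 14373298
bering_2018, bering_avg = 189180, 780469

deficit_pct = 100 * (nh_2018 - nh_avg) / nh_avg     # overall shortfall

# Remove Bering from both 2018 and the average: the deficit flips to
# a surplus, so Bering alone accounts for the whole shortfall.
ex_bering = (nh_2018 - bering_2018) - (nh_avg - bering_avg)

print(round(deficit_pct, 1), ex_bering)  # -2.9 174056
```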

Drift ice in Okhotsk Sea at sunrise.


Who to Blame for Rising CO2?

Source: NOAA

Blaming global warming on humans comes down to two assertions:

Rising CO2 in the atmosphere causes earth’s surface temperature to rise.

Humans burning fossil fuels cause rising atmospheric CO2.

For this post I will not address the first premise, instead referring the reader to a previous article referencing Fred Singer. He noted that greenhouse gas theory presumes surface warming arises because heat is forced to escape at a higher, colder altitude. In fact, temperatures in the tropopause do not change with altitude (the “pause”), and in the stratosphere temperatures increase with altitude. That post also includes the “meat” of the brief submitted to Judge Alsup’s court by Happer, Koonin and Lindzen, which questions CO2 driving global warming in the face of other more powerful factors. See Courtroom Climate Science.

The focus in this piece is the claim that fossil fuel emissions drive observed rising CO2 concentrations. IPCC consensus scientists and supporters note that human emissions are about twice the measured rise and presume that natural sinks absorb half, leaving the other half to accumulate in the atmosphere. Thus they conclude all of the increase in atmospheric CO2 is from fossil fuels.

This simple-minded conclusion takes the only two things we measure in the carbon cycle, CO2 in the atmosphere and fossil fuel emissions, and asserts that one causes the other. But several elephants are in the room: the carbon reservoirs that dwarf human activity in size and throughput, and cannot be measured because of their complexity.

The consensus notion is based on a familiar environmental paradigm: The Garden of Eden. This is the modern belief that nature, and indeed the climate is in balance, except for humans disrupting it by their activities. In the current carbon cycle context, it is the supposition that all natural sources and sinks are in balance, thus any additional CO2 is because of humans.

Now, a curious person might wonder: How is it that for decades as the rate of fossil fuel emissions increased, the absorption by natural sinks has also increased at exactly the same rate, so that 50% is always removed and 50% remains? It can only be that nature is also dynamic and its flows change over time!

That alternative paradigm is elaborated in several papers that are currently under vigorous attack from climatists. As one antagonist put it: Any paper concluding that humans don’t cause rising CO2 is obviously wrong. One objectionable study was published by Hermann Harde, another by Ole Humlum, and a third by Ed Berry is delayed in pre-publication review.

The methods and analyses are different, but the three skeptical papers argue that the levels and flows of various carbon reservoirs fluctuate over time with temperature itself as a causal variable. Some sinks are stimulated by higher temperatures to release more CO2 while others respond by capturing more CO2. And these reactions occur on a range of timescales. Once these dynamics are factored in, the human contribution to rising atmospheric CO2 is negligible, much to the ire of alarmists.

Ed Berry finds IPCC carbon cycle metrics illogical.

Dr. Ed Berry provides a preprint of his submitted paper at a blog post entitled Why human CO2 does not change climate. He welcomes comments and uses the discussion to revise and improve the text. Excerpts with my bolds.

The United Nations Intergovernmental Panel on Climate Change (IPCC) claims human emissions raised the carbon dioxide level from 280 ppm to 410 ppm, or 130 ppm. Physics proves this claim is impossible.

The IPCC agrees today’s annual human carbon dioxide emissions are 4.5 ppm per year and nature’s carbon dioxide emissions are 98 ppm per year. Yet, the IPCC claims human emissions have caused all the increase in carbon dioxide since 1750, which is 30 percent of today’s total.

How can human carbon dioxide, which is only 5 percent of natural carbon dioxide, add 30 percent to the level of atmospheric carbon dioxide? It can’t.

This paper derives a Model that shows how human and natural carbon dioxide emissions independently change the equilibrium level of atmospheric carbon dioxide. This Model should replace the IPCC’s invalid Bern model.

The Model shows the ratio of human to natural carbon dioxide in the atmosphere equals the ratio of their inflows, independent of residence time.

Fig. 5. The sum of nature’s inflow is 20 times larger than the sum of human emissions. Nature balances inflow with or without human emissions.

The model shows, contrary to IPCC claims, that human emissions do not continually add carbon dioxide to the atmosphere, but rather cause a flow of carbon dioxide through the atmosphere. The flow adds a constant equilibrium level, not a continuing increasing level, of carbon dioxide.

Fig. 2. Balance proceeds as follows: (1) Inflow sets the balance level. (2) Level sets the outflow. (3) Level moves toward balance level until outflow equals inflow.
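Berry’s three-step balance description amounts to a first-order flow model. Here is a minimal sketch of that logic; the inflow and residence-time numbers are illustrative assumptions, not values from his paper:

```python
# (1) Inflow sets the balance level; (2) level sets the outflow;
# (3) level moves until outflow equals inflow.
def step(level, inflow, residence_time, dt=0.1):
    outflow = level / residence_time      # outflow proportional to level
    return level + (inflow - outflow) * dt

inflow = 100.0   # ppm/yr total inflow (illustrative)
tau = 4.0        # yr residence time (illustrative)
level = 280.0    # ppm starting level

for _ in range(2000):                     # integrate ~200 model years
    level = step(level, inflow, tau)

print(round(level))  # settles at inflow * tau = 400
```

A constant inflow thus yields a constant equilibrium level (inflow times residence time), which is the sense in which, in this model, emissions set a level rather than a continually accumulating stock.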

Ole Humlum proves that CO2 follows temperature also for interannual/decadal periods.

Humlum et al. look at the modern record of fluctuating temperatures and atmospheric CO2 and conclude that CO2 changes follow temperature changes over these timescales. The paper is The phase relation between atmospheric carbon dioxide and global temperature, by Ole Humlum, Kjell Stordahl and Jan-Erik Solheim.  Excerpts with my bolds.

From the Abstract:
Using data series on atmospheric carbon dioxide and global temperatures we investigate the phase relation (leads/lags) between these for the period January 1980 to December 2011. Ice cores show atmospheric CO2 variations to lag behind atmospheric temperature changes on a century to millennium scale, but modern temperature is expected to lag changes in atmospheric CO2, as the atmospheric temperature increase since about 1975 generally is assumed to be caused by the modern increase in CO2.

In our analysis we used eight well-known datasets. . . We find a high degree of co-variation between all data series except 7) and 8), but with changes in CO2 always lagging changes in temperature.

► Changes in global atmospheric CO2 are lagging 11–12 months behind changes in global sea surface temperature. ► Changes in global atmospheric CO2 are lagging 9.5–10 months behind changes in global air surface temperature. ► Changes in global atmospheric CO2 are lagging about 9 months behind changes in global lower troposphere temperature. ► Changes in ocean temperatures explain a substantial part of the observed changes in atmospheric CO2 since January 1980. ► Changes in atmospheric CO2 are not tracking changes in human emissions.


Summing up, monthly data since January 1980 on atmospheric CO2 and sea and air temperatures unambiguously demonstrate the overall global temperature change sequence of events to be 1) ocean surface, 2) surface air, 3) lower troposphere, and with changes in atmospheric CO2 always lagging behind changes in any of these different temperature records.

A main control on atmospheric CO2 appears to be the ocean surface temperature, and it remains a possibility that a significant part of the overall increase of atmospheric CO2 since at least 1958 (start of Mauna Loa observations) simply reflects the gradual warming of the oceans, as a result of the prolonged period of high solar activity since 1920 (Solanki et al., 2004).

Based on the GISP2 ice core proxy record from Greenland it has previously been pointed out that the present period of warming since 1850 to a high degree may be explained by a natural c. 1100 yr periodic temperature variation (Humlum et al., 2011).
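The lead/lag finding above can be illustrated with a toy cross-correlation analysis on synthetic series; the data below are randomly generated with a built-in 12-month lag, not Humlum’s actual datasets:

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_lag = 400, 12

# Random-walk "temperature" series, centered.
temp = rng.standard_normal(n).cumsum()
temp = temp - temp.mean()

# "CO2 change rate" copies temperature 12 months later, plus noise.
dco2 = np.empty(n)
dco2[true_lag:] = temp[:-true_lag]
dco2[:true_lag] = 0.0
dco2 += 0.1 * rng.standard_normal(n)

# Correlate dCO2 against temperature shifted by each candidate lag
# and pick the lag with the highest correlation.
lags = range(25)
corrs = [np.corrcoef(dco2[k:], temp[:n - k])[0, 1] for k in lags]
best = max(lags, key=lambda k: corrs[k])
print(best)  # recovers the 12-month lag
```

The correlation peak identifies which series leads; Humlum et al. apply this kind of phase analysis to the observed CO2 and temperature records.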

Hermann Harde sets realistic proportions for the carbon cycle.

Hermann Harde applies a comparable perspective to consider the carbon cycle dynamics. His paper is Scrutinizing the carbon cycle and CO2 residence time in the atmosphere. Excerpts with my bolds.

From the Abstract:

Climate scientists presume that the carbon cycle has come out of balance due to the increasing anthropogenic emissions from fossil fuel combustion and land use change. This is made responsible for the rapidly increasing atmospheric CO2 concentrations over recent years, and it is estimated that the removal of the additional emissions from the atmosphere will take a few hundred thousand years. Since this goes along with an increasing greenhouse effect and a further global warming, a better understanding of the carbon cycle is of great importance for all future climate change predictions. We have critically scrutinized this cycle and present an alternative concept, for which the uptake of CO2 by natural sinks scales proportionally with the CO2 concentration. In addition, we consider temperature dependent natural emission and absorption rates, by which the paleoclimatic CO2 variations and the actual CO2 growth rate can well be explained. The anthropogenic contribution to the actual CO2 concentration is found to be 4.3%, its fraction of the CO2 increase over the Industrial Era is 15%, and the average residence time is 4 years.
Fig. 1. Simplified schematic of the global carbon cycle. Black numbers and arrows indicate reservoir mass in PgC and exchange fluxes in PgC/yr before the Industrial Era. Red arrows and numbers show annual ‘anthropogenic’ flux changes averaged over the 2000–2009 time period. Graphic from AR5-Chap.6-Fig.6.1.


Climate scientists assume that a disturbed carbon cycle, which has come out of balance by the increasing anthropogenic emissions from fossil fuel combustion and land use change, is responsible for the rapidly increasing atmospheric CO2 concentrations over recent years. While over the whole Holocene up to the start of the Industrial Era (1750) natural emissions by heterotrophic processes and fire were supposed to be in equilibrium with the uptake by photosynthesis and the net ocean-atmosphere gas exchange, with the onset of the Industrial Era the IPCC estimates that about 15–40% of the additional emissions cannot further be absorbed by the natural sinks and are accumulating in the atmosphere. The IPCC further argues that CO2 emitted until 2100 will remain in the atmosphere longer than 1000 years, and in the same context it is even mentioned that the removal of human-emitted CO2 from the atmosphere by natural processes will take a few hundred thousand years (high confidence) (see AR5, Chap. 6 Executive Summary). Since the rising CO2 concentrations go along with an increasing greenhouse effect and, thus, a further global warming, a better understanding of the carbon cycle is a necessary prerequisite for all future climate change predictions.

In their accounting schemes and models of the carbon cycle the IPCC uses many new and detailed data which are primarily focussing on fossil fuel emission, cement fabrication or net land use change (see AR5-WG1-Chap.6.3.2), but it largely neglects any changes of the natural emissions, which contribute more than 95% of the total emissions and by no means can be assumed to be constant over longer periods (see, e.g.: variations over the last 800,000 years (Jouzel et al., 2007); the last glacial termination (Monnin et al., 2001); or the younger Holocene (Monnin et al., 2004; Wagner et al., 2004)).

Since our own estimates of the average CO2 residence time in the atmosphere differ by several orders of magnitude from the announced IPCC values, and on the other hand actual investigations of Humlum et al. (2013) or Salby (2013, 2016) show a strong relation between the natural CO2 emission rate and the surface temperature, this was motivation enough to scrutinize the IPCC accounting scheme in more detail and to contrast this to our own calculations.

Different from the IPCC, we start with a rate equation for the emission and absorption processes, where the uptake is not assumed to be saturated but scales proportionally with the actual CO2 concentration in the atmosphere (see also Essenhigh, 2009; Salby, 2016). This is justified by the observed exponential decay of 14C. A fractional saturation, as assumed by the IPCC, can directly be expressed by a larger residence time of CO2 in the atmosphere and makes a distinction between a turnover time and an adjustment time unnecessary.

Based on this approach and as solution of the rate equation we derive a concentration at steady state, which is determined solely by the product of the total emission rate and the residence time. Under present conditions the natural emissions contribute 373 ppm and anthropogenic emissions 17 ppm to the total concentration of 390 ppm (2012). For the average residence time we find only 4 years.
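Harde's steady-state bookkeeping is easy to check numerically. The sketch below assumes the rate equation the excerpt describes, dC/dt = e_total − C/τ, whose steady state is C = e_total·τ; the emission rates are back-calculated from the quoted concentrations purely for illustration:

```python
# Steady state of dC/dt = e_total - C/tau is C_ss = e_total * tau:
# uptake C/tau grows with concentration until it balances total emission.
tau = 4.0                # average residence time, years (Harde's value)
e_natural = 373.0 / tau  # natural emission rate, ppm/yr (illustrative)
e_human = 17.0 / tau     # anthropogenic emission rate, ppm/yr (illustrative)

def steady_state(e_total, tau):
    """Concentration at which removal exactly balances emission."""
    return e_total * tau

c_total = steady_state(e_natural + e_human, tau)
print(c_total)                           # 390.0 ppm, the quoted 2012 value
print(round(17.0 / c_total * 100, 2))    # 4.36, the "4.3%" anthropogenic share
```

The two-line balance reproduces the paper's 373 + 17 = 390 ppm split; the whole argument then turns on whether the 4-year residence time or the IPCC's much longer adjustment time is the right constant.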

The stronger increase of the concentration over the Industrial Era up to present times can be explained by introducing a temperature dependent natural emission rate as well as a temperature affected residence time. With this approach not only the exponential increase with the onset of the Industrial Era but also the concentrations at glacial and cooler interglacial times can well be reproduced in full agreement with all observations.

So, contrary to the IPCC’s interpretation, the steep increase of the concentration since 1850 finds its natural explanation in self-accelerating processes: on the one hand, stronger degassing of the oceans and faster plant growth and decomposition; on the other, an increasing residence time due to reduced solubility of CO2 in oceans. Together this results in a dominating temperature-controlled natural gain, which contributes about 85% of the 110 ppm CO2 increase over the Industrial Era, whereas the actual anthropogenic emissions of 4.3% contribute only 15%. These results indicate that almost all of the observed change of CO2 during the Industrial Era followed, not from anthropogenic emission, but from changes of natural emission. The results are consistent with the observed lag of CO2 changes behind temperature changes (Humlum et al., 2013; Salby, 2013), a signature of cause and effect. Our analysis of the carbon cycle, which exclusively uses data for the CO2 concentrations and fluxes as published in AR5, shows that a completely different interpretation of these data is also possible, in complete conformity with all observations and natural causalities.


CO2 Fluxes, Sources and Sinks

Obsessed with Human CO2

Not Worried About CO2


Why People Rely on Pipelines

The Trans-Alaska Pipeline system’s 420 miles above ground segments are built in a zig-zag configuration to allow for expansion or contraction of the pipe.

In their near-religious belief about demonic CO2, activists are obstructing construction or expansion of pipelines delivering the energy undergirding our civilization. The story is explained by James Conca in a recent Forbes article, Supersize It! Building Bigger Pipelines Over Old Ones Is A Good Idea. Excerpts below with my bolds.

The idea of replacing an older small pipe with a wider new pipe is not new. But it really makes a difference when applied to the oil and natural gas pipelines that crisscross North America.

The concept of supersizing pipelines is taking hold in the United States and Canada to address the growing shortage of pipelines in areas where oil production is increasing but the crude can’t get to the refineries on the Gulf.

At the same time, natural gas use is increasing in places that don’t have many pipelines and where the public is strongly against building new pipelines.

A bigger pipeline along the same route as the older smaller one is cheaper to build than a bunch of new small ones. It also falls under the existing permits and rights-of-way of the old pipe. And the public doesn’t have to approve any new pipeline routes.

Supersizing also increases safety and ensures less environmental impact since you’re not building additional lines, but replacing an old one with a new one, and older lines are more likely to leak or fail completely, like the Colonial Pipeline Spill.

Not only do we need more crude oil pipelines, but the ones we have are quite old. Still, they can be supersized without finding and permitting new routes, which saves a lot of money and time and doesn’t increase their environmental footprint. Of course, if there are no pipelines to supersize, as in much of New England and New York, that’s a whole other problem. Photo is of a section of the Alaska Pipeline.

Increasing the diameter of a pipe increases the flow within it much more than you might think. In fact, the volume of flow per minute goes as the fourth power of the ratio of the diameters. This means, if you double the diameter, you increase the flow by 16 times (2^4 = 2 x 2 x 2 x 2 = 16).

Enbridge is a good example. The company has replaced a 26-inch-diameter shale gas line running from Pennsylvania into New England with a 42-inch line. That doesn’t seem like much; the new pipe is only 1.6 times as big. But raised to the power of four, this new pipeline can carry six times as much gas as the old one.
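The fourth-power rule quoted above is easy to verify. A tiny sketch (using the article's scaling; real pipeline throughput also depends on pressure and friction regime):

```python
def flow_ratio(d_old, d_new):
    """Relative volumetric flow after swapping a pipe of diameter d_old
    for one of diameter d_new, under the fourth-power scaling quoted."""
    return (d_new / d_old) ** 4

print(flow_ratio(1, 2))               # 16.0: doubling the diameter gives 16x
print(round(flow_ratio(26, 42), 1))   # 6.8: the Enbridge 26" -> 42" upgrade
```

The exact Enbridge ratio is about 6.8; the article rounds it down to "six times."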

‘Once the pipe is in the ground, you can do a lot of things: reverse flows, expand it, optimize it,’ said Al Monaco, President and Chief Executive of Enbridge.

These types of upgrades have essentially made the controversial portion of the Keystone XL pipeline unnecessary.

Of course, if you don’t have old pipelines to begin with, supersizing doesn’t help. Parts of the U.S. simply need more pipelines, especially where fracking has unleashed huge amounts of oil and gas in areas that previously had few pipelines.

New England is a good example.

New England is closing their nuclear plants faster than you can say “who cares about science.” They are replacing them with natural gas and renewables. But there are hardly any gas pipelines in the region. They are forced to import liquefied natural gas from places like Russia or use dirty oil plants when demand is high, like this last winter. When this happens the customers in New England pay a high price for their electricity, sometimes ten times the normal price.

Yet, the people of New England do not want more gas pipelines.

The citizens of New England also want more renewables, whose electricity has to be imported into the northeast from other regions, especially hydropower from Canada. But this requires installing high voltage transmission lines which the public keeps voting down.

When the public does not listen to scientists and experts, things don’t go well. This problem is engulfing the energy sector. The public seems to care about climate change, but doesn’t want to do what the climate scientists advise – to increase nuclear and renewables – because it rejects them, or their infrastructure requirements, out of ignorance.

And they aren’t putting solar panels on their roofs very much either.

So next year, they will wonder why electricity costs keep going up, their carbon emissions keep going up, and their black-outs keep getting longer.

Natural gas requires the least steel and concrete of any energy source, is cheap and quick to build, has cheap fuel, and is beloved by state legislatures. But you need pipelines to deliver it, and no one wants them in their backyard. Supersizing helps with this problem.

Natural gas is the fastest growing energy source in America. Replacing coal plants with gas plants is the only reason our carbon emissions are at a 27-year low. It’s the easiest power plant to build, requires the least amount of steel and concrete (see figure above), is the easiest to permit, will have cheap fuel for decades, and is vastly preferred over coal.

However, it requires pipelines to deliver the gas and no one seems to like them. You can ship gas as liquefied natural gas (LNG), which happens a lot in New England, but that’s three times more expensive and we have very limited LNG facilities.

This is a classic case of the public misunderstanding how the real world works. These issues are somewhat complex and you have to juggle a lot of conflicting demands. Our oil pipelines themselves are generally old (see figure above). Half of them are older than 40 years. Some date from before World War One.

They need to be replaced, and new ones built, or we risk the very environmental damage feared by those who don’t want them at all. Just supersizing the oldest half would be tremendous, and remove most of the need for additional pipelines.

Our pipelines, refineries and transmission lines are all at capacity. When problems occur, there’s no back-up plan. We have over 3,000 blackouts a year, the highest in the developed world. It’s why the American Society of Civil Engineers gave America a D+ on our 2017 Infrastructure Report Card.

Somewhere around 1985, we became the world’s richest, most powerful, greatest nation in the history of the world. But we still have to work hard to remain the best. It doesn’t just maintain itself. And infrastructure is where long-term neglect first shows, often catastrophically.

So we need to upgrade our transmission lines and transformers, build new non-fossil power plants and supersize our pipelines.

We can’t afford not to.

Dr. James Conca is an expert on energy, nuclear and dirty bombs, a planetary geologist, and a professional speaker.


Dr. Conca is writing from an American perspective, while in Canada a major pipeline supersizing project is being resisted by one province, British Columbia, despite the pipeline gaining federal approval after a long, tortuous environmental assessment process.

The BC Premier is holding power in a coalition government beholden to a few Greens, who are adamant about “keeping it in the ground.”

Latest news at The many ways B.C. Premier John Horgan is wrong about Trans Mountain.

“The current developments are a real test of Canada’s commitment to the rule of law and the ability of any resource company to rely on the legal approval process for projects,” said Dwight Newman, one of Canada’s top constitutional scholars and Munk senior fellow at the Macdonald-Laurier Institute.

To diminish the importance to Alberta and Canada of the Trans Mountain project, Horgan refers to it as a Texas project, adopting the pejorative jargon of eco-activists dependent on U.S. funding and organizational support to run their campaigns.

“The interest of the Texas boardrooms are not the interests of British Columbians,” Horgan said. In fact, Kinder Morgan Canada has been publicly traded since last year and is 77-per-cent owned by Canadians, with Manulife and TD the biggest shareholders. Its shippers are predominantly Canadian oilsands producers and Canada as a whole benefits from the pipeline’s opening of the Asian market.

Horgan’s actions have already triggered a trade war with Alberta, fomented irrational fears about oil spills, put the spotlight on B.C. for all the wrong reasons, and exposed his province to potentially harsh retribution from both Alberta and Ottawa. If Horgan doesn’t see that, he’s not looking.

Media Raises False Alarms of Ocean Cooling

The RAPID moorings being deployed. Credit: National Oceanography Centre.

The usual suspects, such as BBC, the Guardian, New York Times, Washington Post etc., are reporting that the Atlantic gulf stream is slowing down due to climate change, threatening an ice age.  That’s right, warmists are now claiming fossil fuels do cooling when they are not warming.  As usual the headlines are not supported by the details.

The AMOC is back in the news following a recent Ocean Sciences meeting. This update adds to the theme Oceans Make Climate. Background links are at the end, including one where chief alarmist M. Mann claims fossil fuel use will stop the ocean conveyor belt and bring a new ice age. Actual scientists are working away methodically on this part of the climate system, and are more level-headed. H/T GWPF for noticing the recent article in Science, Ocean array alters view of Atlantic ‘conveyor belt’, by Katherine Kornei, Feb. 17, 2018. Excerpts with my bolds.

The powerful currents in the Atlantic, formally known as the Atlantic meridional overturning circulation (AMOC), are a major engine in Earth’s climate. The AMOC’s shallower limbs—which include the Gulf Stream—transport warm water from the tropics northward, warming Western Europe. In the north, the waters cool and sink, forming deeper limbs that transport the cold water back south—and sequester anthropogenic carbon in the process. This overturning is why the AMOC is sometimes called the Atlantic conveyor belt.

Fig. 1. Schematic of the major warm (red to yellow) and cold (blue to purple) water pathways in the NASPG (North Atlantic subpolar gyre). Credit: H. Furey, Woods Hole Oceanographic Institution. Denmark Strait (DS), Faroe Bank Channel (FBC), East and West Greenland Currents (EGC and WGC, respectively), NAC, DSO, and ISO.

In February at the American Geophysical Union’s (AGU’s) Ocean Sciences meeting, scientists presented the first data from an array of instruments moored in the subpolar North Atlantic. The observations reveal unexpected eddies and strong variability in the AMOC currents. They also show that the currents east of Greenland contribute the most to the total AMOC flow. Climate models, on the other hand, have emphasized the currents west of Greenland in the Labrador Sea. “We’re showing the shortcomings of climate models,” says Susan Lozier, a physical oceanographer at Duke University in Durham, North Carolina, who leads the $35-million, seven-nation project known as the Overturning in the Subpolar North Atlantic Program (OSNAP).

Fig. 2. Schematic of the OSNAP array. The vertical black lines denote the OSNAP moorings with the red dots denoting instrumentation at depth. The thin gray lines indicate the glider survey. The red arrows show pathways for the warm and salty waters of subtropical origin; the light blue arrows show the pathways for the fresh and cold surface waters of polar origin; and the dark blue arrows show the pathways at depth for waters that originate in the high-latitude North Atlantic and Arctic.

The research and analysis is presented by Dr. Lozier et al. in the publication Overturning in the Subpolar North Atlantic Program: A New International Ocean Observing System. Images above and text excerpted below with my bolds.

For decades oceanographers have assumed the AMOC to be highly susceptible to changes in the production of deep waters at high latitudes in the North Atlantic. A new ocean observing system is now in place that will test that assumption. Early results from the OSNAP observational program reveal the complexity of the velocity field across the section and the dramatic increase in convective activity during the 2014/15 winter. Early results from the gliders that survey the eastern portion of the OSNAP line have illustrated the importance of these measurements for estimating meridional heat fluxes and for studying the evolution of Subpolar Mode Waters. Finally, numerical modeling data have been used to demonstrate the efficacy of a proxy AMOC measure based on a broader set of observational data, and an adjoint modeling approach has shown that measurements in the OSNAP region will aid our mechanistic understanding of the low-frequency variability of the AMOC in the subtropical North Atlantic.

Fig. 7. (a) Winter [Dec–Mar (DJFM)] mean NAO index. Time series of temperature from the (b) K1 and (c) K9 moorings.

Finally, we note that while a primary motivation for studying AMOC variability comes from its potential impact on the climate system, as mentioned above, additional motivation for the measure of the heat, mass, and freshwater fluxes in the subpolar North Atlantic arises from their potential impact on marine biogeochemistry and the cryosphere. Thus, we hope that this observing system can serve the interests of the broader climate community.

Fig. 10. Linear sensitivity of the AMOC at (d),(e) 25°N and (b),(c) 50°N in Jan to surface heat flux anomalies per unit area. Positive sensitivity indicates that ocean cooling leads to an increased AMOC—e.g., in the upper panels, a unit increase in heat flux out of the ocean at a given location will change the AMOC at (d) 25°N or (e) 50°N 3 yr later by the amount shown in the color bar. The contour intervals are logarithmic. (a) The time series show linear sensitivity of the AMOC at 25°N (blue) and 50°N (green) to heat fluxes integrated over the subpolar gyre (black box with surface area of ∼6.7 × 10 m2) as a function of forcing lead time. The reader is referred to Pillar et al. (2016) for model details and to Heimbach et al. (2011) and Pillar et al. (2016) for a full description of the methodology and discussion relating to the dynamical interpretation of the sensitivity distributions.

In summary, while modeling studies have suggested a linkage between deep-water mass formation and AMOC variability, observations to date have been spatially or temporally compromised and therefore insufficient either to support or to rule out this connection.

Current observational efforts to assess AMOC variability in the North Atlantic.

The U.K.–U.S. Rapid Climate Change–Meridional Overturning Circulation and Heatflux Array (RAPID–MOCHA) program at 26°N successfully measures the AMOC in the subtropical North Atlantic via a transbasin observing system (Cunningham et al. 2007; Kanzow et al. 2007; McCarthy et al. 2015). While this array has fundamentally altered the community’s view of the AMOC, modeling studies over the past few years have suggested that AMOC fluctuations on interannual time scales are coherent only over limited meridional distances. In particular, a break point in coherence may occur at the subpolar–subtropical gyre boundary in the North Atlantic (Bingham et al. 2007; Baehr et al. 2009). Furthermore, a recent modeling study has suggested that the low-frequency variability of the RAPID–MOCHA appears to be an integrated response to buoyancy forcing over the subpolar gyre (Pillar et al. 2016). Thus, a measure of the overturning in the subpolar basin contemporaneous with a measure of the buoyancy forcing in that basin likely offers the best possibility of understanding the mechanisms that underpin AMOC variability. Finally, though it might be expected that the plethora of measurements from the North Atlantic would be sufficient to constrain a measure of the AMOC within the context of an ocean general circulation model, recent studies (Cunningham and Marsh 2010; Karspeck et al. 2015) reveal that there is currently no consensus on the strength or variability of the AMOC in assimilation/reanalysis products.

Atlantic Meridional Overturning Circulation (AMOC). Red colours indicate warm, shallow currents and blue colours indicate cold, deep return flows. Modified from Church, 2007, A change in circulation? Science, 317(5840), 908–909. doi:10.1126/science.1147796

In addition we have a recent report from the United Kingdom Marine Climate Change Impacts Partnership (MCCIP), lead author G.D. McCarthy: Atlantic Meridional Overturning Circulation (AMOC) 2017.


Figure 1: Ten-day (colours) and three month (black) low-pass filtered timeseries of Florida Straits transport (blue), Ekman transport (green), upper mid-ocean transport (magenta), and overturning transport (red) for the period 2nd April 2004 to end-February 2017. Florida Straits transport is based on electromagnetic cable measurements; Ekman transport is based on ERA winds. The upper mid-ocean transport, based on the RAPID mooring data, is the vertical integral of the transport per unit depth down to the deepest northward velocity (~1100 m) on each day. Overturning transport is then the sum of the Florida Straits, Ekman, and upper mid-ocean transports and represents the maximum northward transport of upper-layer waters on each day. Positive transports correspond to northward flow.

The RAPID/MOCHA/WBTS array (hereinafter referred to as the RAPID array) has revolutionized basin scale oceanography by supplying continuous estimates of the meridional overturning transport (McCarthy et al., 2015), and the associated basin-wide transports of heat (Johns et al., 2011) and freshwater (McDonagh et al., 2015) at 10-day temporal resolution. These estimates have been used in a wide variety of studies characterizing temporal variability of the North Atlantic Ocean, for instance establishing a decline in the AMOC between 2004 and 2013.
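As the Figure 1 caption describes, the RAPID overturning estimate is itself a simple sum of three measured components. A toy illustration (the Sverdrup values below are invented for the example, not RAPID data):

```python
# Overturning = Florida Straits + Ekman + upper mid-ocean transport.
# Positive values are northward; the interior flow is southward.
florida_straits = 31.6   # Sv, from electromagnetic cable measurements
ekman = 3.4              # Sv, wind-driven, from reanalysis winds
upper_mid_ocean = -17.8  # Sv, from the mooring array (southward)

overturning = florida_straits + ekman + upper_mid_ocean
print(round(overturning, 1))  # 17.2 Sv of northward upper-layer transport
```

Because the estimate is a residual of large opposing terms, modest errors in any one component shift the reported AMOC strength, which is one reason independent arrays like OSNAP matter.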

Summary from RAPID data analysis

MCCIP reported in 2006 that:

  • a 30% decline in the AMOC has been observed since the early 1990s based on a limited number of observations. There is a lack of certainty and consensus concerning the trend;
  • most climate models anticipate some reduction in strength of the AMOC over the 21st century due to increased freshwater influence in high latitudes. The IPCC project a slowdown in the overturning circulation rather than a dramatic collapse.

And in 2017 that:
  • a substantial increase in the observations available to estimate the strength of the AMOC indicate, with greater certainty, a decline since the mid 2000s;
  • the AMOC is still expected to decline throughout the 21st century in response to a changing climate. If and when a collapse in the AMOC is possible is still open to debate, but it is not thought likely to happen this century.

And also that:

  • a high level of variability in the AMOC strength has been observed, and short term fluctuations have had unexpected impacts, including severe winters and abrupt sea-level rise;
  • recent changes in the AMOC may be driving the cooling of Atlantic ocean surface waters which could lead to drier summers in the UK.


  • The AMOC is key to maintaining the mild climate of the UK and Europe.
  • The AMOC is predicted to decline in the 21st century in response to a changing climate.
  • Past abrupt changes in the AMOC have had dramatic climate consequences.
  • There is growing evidence that the AMOC has been declining for at least a decade, pushing the Atlantic Multidecadal Variability into a cool phase.
  • Short term fluctuations in the AMOC have proved to have unexpected impacts, including being linked with severe winters and abrupt sea-level rise.


Oceans Make Climate: SST, SSS and Precipitation Linked

Climate Pacemaker: The AMOC

Evidence is Mounting: Oceans Make Climate

Mann-made Global Cooling