My university degree is a Bachelor’s in Organic Chemistry from Stanford. For that and other reasons, it has always annoyed me that some lawyers decided CO2 could be called a “pollutant,” all the while exhaling the supposedly toxic gas themselves.
This nonsense forms the root of all the ridiculous regulations that POTUS ordered reviewed and rescinded yesterday. Thus I agree completely with this Wall Street Journal article by Paul Tice, “Trump’s Next Step on Climate Change.” Full text below.
Reconsider the EPA’s labeling of carbon dioxide as a pollutant, based on now-outdated science.
By PAUL H. TICE
March 28, 2017 6:41 p.m. ET
The executive orders on climate change President Trump signed this week represent a step in the right direction for U.S. energy policy and, importantly, deliver on Mr. Trump’s campaign promise to roll back burdensome regulations affecting American companies. But it will take more than the stroke of a pen to make lasting progress and reverse the momentum of the climate-change movement.
On Tuesday, in a series of orders, Mr. Trump instructed the Environmental Protection Agency to rework its Clean Power Plan, which would restrict carbon emissions from existing power plants, mainly coal-fired ones. Last year the U.S. Supreme Court stayed enforcement of the CPP pending judicial review.
Mr. Trump also directed the Interior Department to lift its current moratorium on federal coal leasing and loosen restrictions on oil and gas development (including methane flaring) on federal lands. And he instructed all government agencies to stop factoring climate change into the environmental-review process for federal projects. The federal government will recalculate the “social cost of carbon.”
These actions are a good start, but all they do is reverse many of the executive orders President Obama signed late in his second term. While easy to implement and theatrical to stage, such measures are largely superficial and may prove as temporary as the decrees they rescind.
Because they don’t attack the climate-change regulatory problem at its root, Mr. Trump’s orders will not provide enough clarity to U.S. energy companies—particularly electric utilities and coal-mining companies—for their long-term business forecasting or short-term capital investment and head-count planning.
To accomplish that, the Trump administration, led by EPA Administrator Scott Pruitt, needs to target the EPA’s 2009 “endangerment finding,” which labeled carbon dioxide as a pollutant. That foundational ruling provided the legal underpinnings for all of the EPA’s follow-on carbon regulations, including the CPP.
It also provided the rationale for the previous administration’s anti-fossil-fuel agenda and its various climate-change initiatives and programs, which spanned more than a dozen federal agencies and cost the American taxpayer roughly $20 billion to $25 billion a year during Mr. Obama’s presidency.
The endangerment finding was the product of a rush to judgment. Much of the scientific data upon which it was predicated—chiefly, the 2007 Fourth Assessment Report of the U.N.’s Intergovernmental Panel on Climate Change—was already dated by the time of its publication and arguably not properly peer-reviewed as federal law requires.
With the benefit of hindsight—including more than a decade of actual-versus-modeled data, plus the insights into the insular climate-science community gleaned from the University of East Anglia Climategate email disclosures—there would seem to be strong grounds now to reconsider the EPA’s 2009 decision and issue a new finding.
In 2013, the IPCC issued a more circumspect Fifth Assessment Report, which noted a hiatus in global warming since 1998 and a breakdown in correlation between the world’s average surface temperatures and atmospheric carbon dioxide levels, causing the U.N. body to revise down its 2007 projections for the rate of planetary warming over the first half of the 21st century.
Although this initially reported “pause” was subsequently eliminated through the downward manipulation of historical temperature data, this latest IPCC assessment calls into question both the predictive power and input data quality of most global climate models, and further highlights the scientific uncertainty surrounding the basic premise of anthropogenic climate change.
An updated EPA endangerment finding based on an objective review of the latest available scientific data is warranted, along with a more sober discussion of the threat posed by carbon dioxide and other greenhouse gases to the “public health and welfare of current and future generations,” in the words of the original endangerment finding.
As long as the 2009 finding remains on the books, it will provide legal ammunition for environmentalists, academics and state government officials seeking to sue the administration for any actions related to climate change, including this week’s executive orders.
Issuing a new endangerment finding would be a bold move requiring thorough work, but the Trump EPA would be well within its legal rights to undertake such an updated review process. In Massachusetts v. EPA (2007), the Supreme Court ruled that the Clean Air Act gives the EPA the authority, but not the obligation, to regulate carbon dioxide and other greenhouse gases. The EPA needs to “ground its reasons for action or inaction” with “reasoned judgment” and scientific analysis.
Addressing the 2009 endangerment finding head-on would show that Mr. Trump is serious about challenging climate-change orthodoxy. Thus far he has sent a mixed message, as demonstrated by this week’s ambivalence on CPP (reworking rather than repealing) and his administration’s silence on U.S. participation in the U.N.’s 2015 Paris Agreement.
Simply standing down on regulatory enforcement, cutting government funding for climate-change research and stopping data collection for the next four years will not suffice. Ignoring the EPA’s 2009 endangerment finding would mean that it is only a matter of time before another liberal-minded occupant of the White House reasserts this regulatory power, bringing the country and the domestic energy sector back where Mr. Obama left them.
Mr. Tice is an executive-in-residence at New York University’s Stern School of Business and a former Wall Street energy research analyst.
Ethan Siegel provides an informative primer on energy physics: Is There Any Such Thing As Pure Energy? It is useful background for anyone interested in energy and climate science. Some excerpts below.
What is the nature of Energy?
Energy plays a tremendous role, not only in our technology-rich daily lives, but in fundamental physics as well. The chemical energy stored in gasoline gets converted into kinetic energy that propels our vehicles, while the electrical energy from our power plants gets converted into light, heat and other forms of energy at our homes. But this energy always seems to exist as merely one property of an otherwise independently-existing system. Must it always be so? Alex from Moscow writes in with a question about energy itself:
“Does pure energy [exist], maybe very shortly before turning into a particle or a photon? Or is it just a useful mathematical abstraction, an equivalent that we use in physics?”
At a fundamental level, energy can take on many forms.
Mass = Energy
The simplest, most familiar form of energy of all is in terms of mass. You don’t normally think in terms of Einstein’s E = mc², but every physical object that’s ever existed in this Universe is made of massive particles, and simply by having mass, these particles have energy.
Mass in Motion = Kinetic Energy
If these particles are moving, they have an additional form of energy as well: kinetic energy, or the energy of motion.
Particles Linked Together = Binding Energy
Finally, these particles can link together in a variety of ways, forming more complex structures like nuclei, atoms, molecules, cells, organisms, planets and more. This form of energy is known as binding energy, and is actually negative in its effect. It reduces the rest mass of the overall system, which is why nuclear fusion, taking place in the cores of stars, can emit so much light and heat: by converting mass into energy via that same E = mc². Over the 4.5-billion-year history of the Sun, it’s lost approximately the mass of Saturn from simply fusing hydrogen into helium.
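The Saturn-mass claim is easy to sanity-check with E = mc². A minimal arithmetic sketch, assuming a constant solar luminosity over the Sun’s lifetime (a simplification; the young Sun was somewhat dimmer) and using standard reference values for the luminosity and Saturn’s mass:

```python
# Rough check: total mass radiated away by the Sun via E = mc^2.
L_sun = 3.83e26              # solar luminosity, W (reference value)
c = 2.998e8                  # speed of light, m/s
seconds = 4.5e9 * 3.156e7    # 4.5 billion years in seconds
m_radiated = L_sun * seconds / c**2   # kg of mass converted to light
m_saturn = 5.68e26           # Saturn's mass, kg (reference value)
print(m_radiated / m_saturn)  # ratio close to 1: roughly one Saturn mass
```

The ratio comes out near unity, so the figure in the excerpt is the right order of magnitude.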
Massless Particles in Motion = Restless Kinetic Energy
The Sun itself gives another example of energy: light and heat, which come in the form of photons, a form of energy different from those we’ve considered so far. There exist massless particles as well — particles with no rest energy — and these particles, like photons, gluons and (hypothetically) gravitons, all move at the speed of light. However, they do carry energy in the form of kinetic energy, and, in the case of gluons, are responsible for the binding energy inside atomic nuclei and protons themselves.
Energy is Always Conserved
Energy comes in a variety of forms, and some of those forms are fundamental. A particle’s rest mass energy doesn’t change over time, and in fact doesn’t change from particle to particle of the same type. It’s a type of energy that is inherent to everything in the Universe itself. But all the other forms of energy that exist are relative. An atom in an excited state has more energy than an atom in a ground state, and that’s due to the difference in binding energy. And if you want to make that transition to the lower-energy state? You have to emit a photon to get there; you cannot make that transition without conserving energy, and that energy needs to be carried by a particle — even a massless one — in order to make that happen.
Energy is Relative to the Observer
Perhaps an oddity here is that the value of photon energy, or of any form of kinetic energy (i.e., the energy of motion), is not fundamental, but rather is dependent on the motion of the observer. If you move towards a photon, you’ll find its energy appears greater (as its wavelength is blueshifted), and if you move away from it, its energy will be lesser, and it will appear redshifted. Energy is relative, but what’s interesting is that for any observer, it’s always conserved. No matter what the interactions are, energy is never seen to exist on its own, but only as part of a system of particles, whether massive or massless.
There is one form of energy, however, that may not need a particle at all: dark energy. The form of energy that causes the expansion of the Universe to accelerate may very well be energy inherent to the fabric of the Universe itself! This interpretation of dark energy is self-consistent and matches the observations of distant, receding galaxies and quasars that we see exactly. The only problem? This form of energy, as far as we can tell, can neither be used to create or destroy particles, nor can it be inter-converted to and from other forms of energy. It seems to be its own entity, disconnected from interacting with the other forms of energy present within the Universe.
So the full answer to the question of whether pure energy exists is:
- For all of the particles that exist, massive and massless, energy is only one property of them, and cannot exist independently.
- For all of the situations where energy appears to be lost in a system, such as through gravitational decay, there exists some form of radiation carrying off that energy, leaving it conserved.
- And dark energy itself may be the purest form of energy, existing independent of particles; but apart from driving the expansion of the Universe, that energy is inaccessible to everything else in the Universe.
As far as we can tell, energy is not something we can isolate in a laboratory, but only one of many properties that matter, antimatter and radiation all possess. Creating energy independent of particles? It might be something the Universe itself does, but until we learn how to create (or destroy) spacetime itself, we find ourselves unable to make it so.
What’s in the executive order? The text is hard to find.
First and most prominently, the executive order directs the Environmental Protection Agency to review the Clean Power Plan, one of Obama’s key regulatory actions to drive down greenhouse gas emissions in the electric power sector. Because an executive order cannot directly overturn a regulation, the EPA will have to come to a finding about whether the CPP should be revised or repealed.
The Supreme Court ruled in a 7-to-2 decision in June 2014 that the Obama administration’s Environmental Protection Agency is free to regulate carbon dioxide in the atmosphere, as long as the source of emissions in question is a traditional polluter, like a factory or a power plant, rather than a school or a shopping mall. The decision was largely written by conservative Justice Antonin Scalia. However, the Court also chastised the EPA for acting without a clear directive from Congress.
Some claim that the Supreme Court requires EPA to regulate greenhouse gases, but that is not correct. The Court ruled that CO2 can be considered a “pollutant” under the Clean Air Act, but EPA decides what, if anything, to do about it. Expect lots of legal activity around this, including EPA seeking congressional legislation before regulating.
While determining the fate of the CPP could end up being a complex multi-year undertaking, the order also includes the following actions that can be carried out quickly:
- Reversing Obama’s moratorium on new coal mining leases on federal lands;
- Removing the consideration of greenhouse gases from permit reviews under the National Environmental Policy Act;
- Formally abandoning Obama’s roadmap on how to achieve U.S. emissions reductions;
- Eliminating a tool for cost-benefit analysis in regulatory review called the “Social Cost of Carbon.”
Finally, although Trump’s directive does not directly address American engagement in the Paris Agreement or other international climate agreements, it does have some implications for broader U.S. engagement in international climate policy. Rolling back the CPP would remove an important component of the American climate strategy and make it more difficult to achieve Obama’s U.S. climate targets. Other players, including big emitters like China, the European Union, and India, are aware of Trump’s stance on climate and will not be surprised by this action: most countries have committed to continuing to pursue their own goals in development as well as climate actions.
Thanks to Junk Science for putting up the full text (here)
Update March 29
Lots of freaking out by true believers. Here is a balanced review of this EO.
President Donald Trump’s executive order dismantling large chunks of Barack Obama’s environmental legacy is a cleverly written document that avoids the pitfalls of Trump’s controversial orders on immigration. Unlike those orders, which have been suspended by federal courts, this one bears the clear stamp of experienced government lawyers and leaves the administration with a rich variety of tactical choices on how to eliminate Obama-era regulations on fossil fuels.
Eliminating the previous administration’s legal memorandum could be a speedier way to get rid of the CPP, although it would still have to go through a notice and comment period as well as the inevitable legal challenges. The government wouldn’t have to delve as deeply into the scientific record, however, which the Obama administration provided in ample detail to justify its plan. Instead, the Trump administration would argue the CPP, which takes a systemwide approach toward reducing CO2 emissions, is based on an incorrect reading of federal law.
The order also calls for the elimination of the Interagency Working Group on Social Cost of Greenhouse Gases, as well as its findings on the cost of global warming, which it pegged at $42 a ton by 2020. Effective immediately, the administration will use Bush-era standards to judge the cost of carbon emissions.
Environmentalists and states can and will sue to try to force the administration to stick to the Obama-era goals for reducing CO2 emissions. But the EPA can only work with the tools Congress gave it, and Chevron deference allows the agency to determine how powerful those tools are. So it can simply say that federal statutes don’t give it the power to reorder the electric grid to cut emissions by 30%; perhaps the limit, achievable by ordering existing plants to run at the highest efficiency current technology allows, is a few percent. The agency can then argue that further cuts have to come from Congress.
The usual suspects are sounding alarms about the ocean “conveyor belt” AMOC slowing down, and predicting that global warming will result in global cooling.
Using data gathered from ice cores, tree rings and coral samples, the research team (led by the Potsdam Institute for Climate Impact Research in Germany) posits that weakness in the AMOC after 1975 “is an unprecedented event in the past millennium.”
Planet Expert Michael Mann co-authored the study and told ClimateCentral that a “full-on collapse” of the AMOC could be possible in the coming decades.
The exact consequences will be difficult to predict, but it will definitely have an impact on marine life, which benefits from the nutrients the AMOC delivers up from the ocean depths. “The most productive region, in terms of availability of nutrients, is the high latitudes of the North Atlantic,” said Mann. “If we lose that, that’s a fundamental threat to our ability to continue to fish.”
Hurricanes and nor’easters like the recent Winter Storm Juno that brought Snowmageddon to the East Coast could also become more common, he warned.
“If you shut down this mode of ocean circulation, you’re denying the climate system one of its modes of heat transport,” said Mann. “If you deny it one mode of transport, it’s often the case that you will see other modes of transport increase.”
The published paper is Exceptional twentieth-century slowdown in Atlantic Ocean overturning circulation by Stefan Rahmstorf, Jason E. Box, Georg Feulner, Michael E. Mann, Alexander Robinson, Scott Rutherford & Erik J. Schaffernicht 23 March 2015 (Three of the first four names are the most outspoken climatists of our time.) The Abstract is not as alarming as Mann’s interview.
Possible changes in Atlantic meridional overturning circulation (AMOC) provide a key source of uncertainty regarding future climate change. Maps of temperature trends over the twentieth century show a conspicuous region of cooling in the northern Atlantic. Here we present multiple lines of evidence suggesting that this cooling may be due to a reduction in the AMOC over the twentieth century and particularly after 1970. Since 1990 the AMOC seems to have partly recovered. This time evolution is consistently suggested by an AMOC index based on sea surface temperatures, by the hemispheric temperature difference, by coral-based proxies and by oceanic measurements. We discuss a possible contribution of the melting of the Greenland Ice Sheet to the slowdown. Using a multi-proxy temperature reconstruction for the AMOC index suggests that the AMOC weakness after 1975 is an unprecedented event in the past millennium (p > 0.99). Further melting of Greenland in the coming decades could contribute to further weakening of the AMOC. (Note the wiggle words: “may,” “seems,” “suggests,” “could.”)
Recent Observations of AMOC Trends
Activists pushing global warming/climate change are trying to get ahead of the likely coming cooling phase following the recent warming phase of the natural climate cycle. The kernel of truth in their hand waving resides in the initial report from the RAPID project.
The RAPID moorings being deployed. Credit: National Oceanography Centre
The RAPID project report is Observed decline of the Atlantic meridional overturning circulation 2004–2012 by D. A. Smeed, G. D. McCarthy et al.
“We have shown that there was a slowdown in the AMOC transport between 2004 and 2012 amounting to an average of −0.54 Sv/yr (95% c.i. −0.08 to −0.99 Sv/yr) at 26° N, and that this was primarily due to a strengthening of the southward flow in the upper 1100 m and a reduction of the southward transport of NADW below 3000 m. This trend is an order of magnitude larger than that predicted by climate models associated with global climate change scenarios, suggesting that this decrease represents decadal variability in the AMOC system rather than a response to climate change. (NADW: North Atlantic Deep Water; LNADW/UNADW: its lower and upper layers.) . . . Our observations show no significant change in the Gulf Stream transport over the 2004–2012 period when the AMOC is decreasing.”
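To put the quoted trend in context, a back-of-envelope conversion of the per-year rate into a cumulative change over the observation window. The ~17 Sv mean AMOC strength at 26° N is an assumed reference value, not a figure from the excerpt:

```python
# Convert the RAPID trend into a cumulative change over 2004-2012.
trend_sv_per_yr = -0.54            # Sv per year, from the paper
years = 2012 - 2004                # 8-year observation window
total_change = trend_sv_per_yr * years   # cumulative change in Sv
mean_amoc = 17.0                   # Sv, typical mean at 26 N (assumed)
fraction = total_change / mean_amoc
print(round(total_change, 1), round(100 * fraction))  # -4.3 -25
```

A ~25% decline in eight years would be dramatic if sustained, which is precisely why the paper reads it as decadal variability rather than a forced trend.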
AMOC Observations in Historical Context
Oceanographers are not alarmed, unlike activists Rahmstorf, Mann and Box. For example, these recent papers: Recent slowing of Atlantic overturning circulation as a recovery from earlier strengthening by Laura C. Jackson, K. Andrew Peterson, Chris D. Roberts & Richard A. Wood 23 May 2016. Abstract:
The Atlantic meridional overturning circulation (AMOC) has weakened substantially over the past decade. Some weakening may already have occurred over the past century, and global climate models project further weakening in response to anthropogenic climate change. Such a weakening could have significant impacts on the surface climate.
However, ocean model simulations based on historical conditions have often found an increase in overturning up to the mid-1990s, followed by a decrease. It is therefore not clear whether the observed weakening over the past decade is part of decadal variability or a persistent weakening. Here we examine a state-of-the-art global-ocean reanalysis product, GloSea5, which covers the years 1989 to 2015 and closely matches observations of the AMOC at 26.5° N, capturing the interannual variability and decadal trend with unprecedented accuracy.
The reanalysis data place the ten years of observations—April 2004 to February 2014—into a longer-term context and suggest that the observed decrease in the overturning circulation is consistent with a recovery following a previous increase. We find that density anomalies that propagate southwards from the Labrador Sea are the most likely cause of these variations. We conclude that decadal variability probably played a key role in the decline of the AMOC observed over the past decade. (my bolds)
And this paper: A reversal of climatic trends in the North Atlantic since 2005
by Jon Robson, Pablo Ortega & Rowan Sutton 06 June 2016. Abstract:
In the mid-1990s the North Atlantic subpolar gyre warmed rapidly, which had important climate impacts such as increased hurricane numbers and changes to rainfall over Africa, Europe and North America. Evidence suggests that the warming was largely due to a strengthening of the ocean circulation, particularly the Atlantic Meridional Overturning Circulation. Since the mid-1990s direct and indirect measurements have suggested a decline in the strength of the ocean circulation, which is expected to lead to a reduction in northward heat transport.
Here we show that since 2005 a large volume of the upper North Atlantic Ocean has cooled significantly by approximately 0.45 °C or 1.5 × 10²² J, reversing the previous warming trend. By analysing observations and a state-of-the-art climate model, we show that this cooling is consistent with a reduction in the strength of the ocean circulation and heat transport, linked to record low densities in the deep Labrador Sea. The low density in the deep Labrador Sea is primarily due to deep ocean warming since 1995, but a long-term freshening also played a role. The observed upper ocean cooling since 2005 is not consistent with the hypothesis that anthropogenic aerosols directly drive Atlantic temperatures.
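The abstract’s two numbers (0.45 °C of cooling and 1.5 × 10²² J of heat) are mutually consistent for a plausible upper-ocean volume. A quick check, using standard values for seawater density and heat capacity; the area-and-depth comparison at the end is my own illustration, not from the paper:

```python
# Implied volume of seawater that must cool by 0.45 C to release 1.5e22 J.
rho = 1025.0    # seawater density, kg/m^3
c_p = 3990.0    # seawater specific heat, J/(kg K)
dT = 0.45       # cooling, K
Q = 1.5e22      # heat released, J
volume = Q / (rho * c_p * dT)   # m^3
# Comparable to ~4e13 m^2 (roughly the North Atlantic's area) to ~200 m depth.
print(f"{volume:.1e}")  # 8.2e+15
```

So the quoted joules do indeed correspond to a “large volume of the upper North Atlantic,” as the abstract says.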
Once again climatists exaggerate and ignore ocean oscillations in favor of their CO2 hysteria. In the details of the AMOC report are observations that the heat transports slowed in part, increased in another part, and the warm gulf stream flow remained the same.
The AMO index supports this latter point, showing the continuing pulses of warm Atlantic water into the Arctic.
The next phase of the AMO will be cooler than the present, and will not be caused by human activity.
Background on AMOC is at Climate Pacemaker: The AMOC
The media are again amping up claims of bad weather to be feared from “climate change.” It is Whack-A-Mole time again, so here is a complete debunking of such media reports, compiled to refute a particularly bad speech by Mark Carney Governor of the Bank of England. H/T Friends of Science
Fact Checking Mark Carney’s Climate Claims is a useful reference document written by Steven Kopits of Princeton Energy Advisors. A few examples below show his systematic dismantling of the alarmist narrative by referencing publicly available sources, many of them on government or corporate sites.
We do have long-time series data for Central England, extending back to 1772. To the extent this measurement is reliable and can be extrapolated to hemispheric averages, it shows a step-up of about 1 deg Celsius from 1980 to 2005, which supports Governor Carney’s assertions. On the other hand, it also shows a drop of 0.5 deg Celsius from 2005 to the present—which does not.
As with just about every other metric the Governor mentions, we have data. Sea level is measured by tide gauges, and also by satellites. Satellite measurements suggest that sea level has been rising steadily by roughly 3 mm / year, which equates to about 1 foot per century.
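The unit conversion behind “about 1 foot per century” is simple arithmetic on the 3 mm/year satellite figure quoted above:

```python
# 3 mm/year of sea-level rise expressed per century, in feet.
mm_per_year = 3.0
mm_per_century = mm_per_year * 100         # 300 mm per century
feet_per_century = mm_per_century / 304.8  # 1 ft = 304.8 mm exactly
print(round(feet_per_century, 2))  # 0.98
```

That is just under one foot per century, matching the text’s rounding.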
Weather-related Insurance Losses
Hurricanes account for 75% of catastrophic losses, with typhoons representing an additional 8%. Thus, hurricanes and typhoons represent $6 of every $7 paid out in ‘top ten’ catastrophic weather-related insurance claims.
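The “$6 of every $7” figure follows from the two percentages just quoted, with a bit of rounding:

```python
# Hurricanes (75%) plus typhoons (8%) as a share of 'top ten' losses.
share = 0.75 + 0.08          # 83% of catastrophic losses
print(round(share * 7, 1))   # 5.8, i.e. roughly $6 of every $7
```

Strictly, 83% is $5.80 of every $7, so “roughly $6 of every $7” is a slight overstatement but fair as a round number.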
And this in turn tells us a great deal about the nature of insurance. Where do insured hurricane losses occur? Principally in the United States. Where do insured typhoon losses occur? Principally in Japan and Taiwan. Why these places? Because all of these are wealthy countries. Hurricane and typhoon losses will be greater where there is, first, a concentration of physical assets, and second, where those assets are valuable. In other words, in the advanced countries exposed to hurricanes and typhoons.
In this, no country is more exposed than the United States. Of overall losses due to top ten catastrophic weather events, nearly 2/3 occurred in the United States alone.
Insured Weather-related Losses
Indeed, if we restrict this to insured losses (including floods and tornadoes), the US accounts for 84% by itself. Thus, if we are speaking of insured weather-related losses, as a practical matter we are speaking of hurricane damage in the US. The rest is largely incidental. For example, Superstorm Sandy caused more insured losses in one event than the cumulative and collective top ten catastrophic, weather-related losses from Europe, China, and Japan since 1980. And Sandy was only the second worst insurance event in recent times.
Now, why are US losses so great? Is it due to the number or strength of storms making landfall in the United States?
In fact, there is no such pattern discernible in the data. Indeed, the last few years have seen fewer than average hurricanes globally, with a recovery to up-cycle numbers in the last year or so.
Rather, reinsurance data hints at the source of losses: higher payouts for assets in harm’s way.
Further, more and more expensive assets are exposed to hurricanes in particular. In the US, for example, ever more people are living on the coasts, and beach front property has become prized and expensive. One need only look out the window on a flight approaching Miami International Airport to be appalled at the sheer concentration of high-end housing built just above sea level on islands dotting Florida’s Atlantic Coast. How long until a hurricane wipes a good number of these off their foundations? And what kind of insurance losses will that involve?
Indeed, an examination of catastrophic losses suggests a decisive role for government policy. Hurricane Katrina, which destroyed New Orleans in 2005, represents alone more than one-quarter of all insured top ten losses globally since 1980. In just one event.
The article goes on to deal with other claims regarding Floods, Droughts, Tornadoes, and Wildfires before reaching this conclusion.
In his speech to London’s insurance community, Mark Carney, Governor of the Bank of England, asserted a series of claims about climate change. Some of these are widely accepted. The climate does change. The world has warmed. Atmospheric CO2 has increased, half of the increment due to human activities.
Beyond this, there is no consensus, and indeed, the available data in many cases directly refutes the Governor’s more extreme assertions. There is no consensus that humans are the primary drivers of climate change. As we can see, sea levels, for example, were rising well before the 1950s date Carney gives as the start of modern anthropogenic warming.
Importantly, the increase in losses since the 1980s is more likely to reflect expanded insurance coverage, increasing payouts as a percent of losses incurred, and an increased number of assets with higher values placed in harm’s way. Increases in losses have not occurred due to increases in hurricane, tornado, flooding, drought or fire frequency or strength, at least not in the United States, which represents the lion’s share of insurance claims. In many cases, either frequency or intensity of weather-related events has actually declined. Sea level rise has not accelerated, not as measured by either satellites or tide gauges. Sea level has been rising for well over 100 years, and continues on that pace.
Like so many other economists, Governor Carney seems to operate under the assumption that current CO2 levels are just on the edge of some catastrophic acceleration. For some reason, 320 ppm of atmospheric CO2 is safe, but 540 ppm is not, because there is some precipice—an inflection point or boundary—between here and there. The limit is not 1,000 ppm, or 5,000 ppm, or 42,448 ppm, but right here, right now. A little more CO2, a trace more of a harmless trace gas, and we are doomed.
The climate is complex and the future uncertain. It is possible the worst fears may prove correct. Nevertheless, such an assertion is not supported by the historical data, not for US droughts, floods, tornados, hurricanes or fires. But it does show up. In politics. If sea levels were 20 cm higher in New York and this contributed to the damage from Superstorm Sandy, well, any middling analyst could have predicted the rise back in 1940, just as we can predict today that sea levels will be one foot higher a century hence. The failure was not of CO2 emissions, but squarely a failure of governance. And that goes doubly so for the fate of New Orleans. If Governor Carney wanted to make a constructive proposal, he should have called for Lloyds to create macro audits of risk zones and censure or refuse to insure jurisdictions where governance is not up to par. If insurers had refused to insure New Orleans unless the levees were sound, they could have saved themselves $30 bn in payouts and probably twice that in losses.
As an analyst, I find Mr. Carney’s speech truly dismaying. For the Governor of the Bank to claim that climate change is leading to rapidly rising insurance claims is, at best, a critical failure of analysis. As discussed above, insurance claims are a function of a number of factors, including the type and country of the weather event, as well as the extent of insurance coverage and payout ratios. A hurricane in the US may see one hundred times the payouts of a major flood in India. Payouts will rise as a function of nominal GDP, as both inflation and the value and concentration of assets will play a crucial role in overall losses. The specific path of a storm can also be decisive for global averages. It goes without saying that a storm which strikes in Philadelphia, marches up the New Jersey coast, slams into Manhattan and turns towards New Haven is going to cost a bundle. That same storm hitting, say, rural Mississippi would cause a fraction of the monetary damages. And this matters, because Superstorm Sandy caused more insured damages than all the leading weather events in Europe, Japan, and China combined. Single events can move long-term global averages.
If the Bank missed this, it is not because the necessary data is hard to find. Information on weather-related events is readily and publicly accessible on the internet. Almost every graph I use above relating to hurricanes, tornadoes, floods and droughts comes from the US government itself. Apparently, the Bank of England could not be bothered to consult the underlying climate data before making hyperbolic claims. Thus, at best, the Bank was careless with data analysis.
A worse interpretation of events suggests that Mr. Carney was willing to blindly accept the conventional wisdom, the ‘consensus of scientists’ regarding global warming, without any will or curiosity to dig deeper and form a personal view. One can only hope that monetary policy in the UK is not informed by such superficiality or passivity.
The very worst interpretation is that Mr. Carney is in fact aware of the source data, but chose to make hysterical claims to promote a personal political agenda. I cannot imagine a more ill-considered idea. For those of us who consider central bank independence sacred, the appearance of a national bank taking sides in a highly charged political debate—and doing so with scant regard for the underlying data—will establish the Bank of England as partisan and the political opponent of conservative politicians. Given that Janet Yellen, the Chairman of the US Federal Reserve Bank, hails from Berkeley, a hotbed of climate activism, should the Republican Party consider the Fed also its opponent? If so, I can assure you, the Republicans will find some support to ‘audit’ the institution.
At the end of the day, political neutrality is a pre-condition for central bank independence. If a political party deems the central bank to be an opponent, then it will take measures to gain political control over the bank, with the result that monetary policy itself may become politicized. If the Bank nevertheless feels compelled to champion a particular side in a political debate, its analysis must be watertight and its communication, impartial. That Governor Carney violated both dictums is simply stunning and a huge blow to the prestige of the Bank of England. It was a very bad call indeed.
More anti-alarmist information at Climate Whack-A-Mole
An earlier post Arctic Ice Factors discussed how ice extent varies in the Arctic primarily due to the three Ws: Water, Wind and Weather. There are other posts on the details of Water and Wind linked below at the end, but this post looks at some ordinary and repeating Weather events in the Arctic that influence ice formation. An interesting new study prompted this essay, but first some background on heat exchange observations in the Arctic.
One project in particular has provided comprehensive empirical data on the energy interface between Arctic Sea Ice and the atmosphere. The SHEBA project collected heat exchange data on site in the Arctic as described in this article SHEBA: The Surface Heat Budget of the Arctic Ocean by Donald K. Perovich and John Weatherly, U.S. Army Engineer Research and Development Center, Cold Regions Research and Engineering Laboratory, Hanover, New Hampshire; and Richard C. Moritz, Polar Science Center, University of Washington, Seattle.
The combination of the importance of the Arctic sea ice cover to climate and the uncertainties of how to treat the sea ice cover led directly to SHEBA: the Surface Heat Budget of the Arctic Ocean. SHEBA is a large, interdisciplinary project that was developed through several workshops and reports. SHEBA was governed by two broad goals: understand the ice–albedo and cloud–radiation feedback mechanisms and use that understanding to improve the treatment of the Arctic in large-scale climate models. The SHEBA project was sponsored jointly by the National Science Foundation’s Office of Polar Programs Arctic System Science program and the Office of Naval Research’s High Latitude Dynamics program.
Ice Station SHEBA
On 2 October 1997, the Canadian Coast Guard icebreaker Des Groseilliers stopped in the middle of an ice floe in the Arctic Ocean, beginning the year-long drift of Ice Station SHEBA. For the next 12 months, until 11 October 1998, Ice Station SHEBA drifted with the pack ice from 75°N, 142°W to 80°N, 162°W. At any given time, there were 20–50 researchers at Ice Station SHEBA. During the year over 200 researchers participated in the field campaign, spending anywhere from just a few days to the entire year. Conducting a year-long sea ice experiment provided daunting scientific and logistic challenges: low temperatures, high winds, ice breakup, demanding instruments, and polar bears.
There was an intense measurement program designed to obtain a complete, integrated time series of every possible variable defining the state of the “SHEBA column” over an entire annual cycle. This column is an imaginary cylinder stretching from the top of the atmosphere through the ice into the upper ocean. Observations included longwave and shortwave radiative fluxes; the turbulent fluxes of latent and sensible heat; cloud height, thickness, phase, and properties; energy exchange in the boundary layers of the atmosphere and ocean; snow depth and ice thickness; and upper ocean salinity, temperature, and currents. This year-long, integrated data set provides a test bed for exploring the feedback mechanisms and for model development.
The full set of observations is available in a report entitled Reconciling different observational data sets from Surface Heat Budget of the Arctic Ocean (SHEBA) for model validation purposes.
All the detailed measurements are in the report, and the takeaway findings are summarized in Figure 8 below.
Figure 8a shows how the conductive heat flux in winter (October–March) is controlled by the net longwave radiation. The net longwave radiation has large variability. It is generally high for clear-sky conditions and low for cloudy skies, and constitutes a heat loss from the surface throughout the whole year. The net shortwave radiation (Figure 8a) grows steadily in spring and early summer, with a sudden increase in mid-June when the snow cover starts disappearing and the albedo drops to a lower value. When the surface temperature is at the melting point, the energy surplus is used for melting. This heat flux becomes the major counterbalance of the net solar flux during summer (April–September).
The sensible heat flux (Figure 8b) is usually small, except in winter during clear-sky conditions when the air temperature is higher than that of the surface and the wind speed is higher [see Walsh and Chapman, 1998] (see Figure 1). In general, the surface is colder than the overlying air and the sensible heat flux is downward. During the winter, the sensible heat flux and the net longwave radiation are generally anticorrelated (Figures 8a–8b). That is, the heat loss from the surface to the atmosphere during clear-sky conditions leads to a positive temperature gradient in the air and results in a downward sensible heat flux. The coupling between these two fluxes is discussed in more detail by Makshtas et al. The latent heat flux (Figure 8b) is close to zero except after the onset of the melt season, when it has several peaks indicating moisture transport from the surface to the atmosphere. Figure 8a shows most components of the surface energy budget together, along with the residual from all fluxes.
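The budget logic in the excerpt can be made concrete with a back-of-envelope winter balance. The magnitudes below are illustrative round numbers, not SHEBA measurements; the point is simply that with no sunlight, conduction through the ice must supply whatever the longwave loss and turbulent fluxes leave unbalanced:

```python
# Illustrative winter surface energy budget over Arctic ice (W/m^2).
# Sign convention: positive fluxes warm the surface. Magnitudes are
# round numbers for illustration, not SHEBA measurements.
fluxes = {
    "net_shortwave": 0.0,    # polar night: no solar input
    "net_longwave": -40.0,   # radiative heat loss, largest under clear skies
    "sensible": 15.0,        # downward: winter air warmer than the surface
    "latent": 0.0,           # negligible before the melt season
}

# With the surface temperature quasi-steady, conduction through the ice
# must balance the sum of the other fluxes.
conductive = -sum(fluxes.values())
print(f"implied conductive flux: {conductive:+.1f} W/m^2")
```

A positive result means heat conducted upward through the ice, which is why the winter conductive flux tracks the net longwave loss in Figure 8a.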
The Effects of Polar Weather Intrusions
With this background understanding of the winter heat flux over Arctic ice, let us consider the implications of the recent study.
An interesting paper analyzes intrusive weather and estimates the connection between such events and ice extents in the Arctic. The paper is: The role of moist intrusions in winter Arctic warming and sea ice decline in Journal of Climate 29(12):160314091706008 · March 2016 by Cian Woods and Rodrigo Caballero, Department of Meteorology, and Bolin Centre for Climate Research, Stockholm University, Stockholm, Sweden
This paper examines the trajectories followed by intense intrusions of moist air into the Arctic polar region during autumn and winter and their impact on local temperature and sea ice concentration. It is found that the vertical structure of the warming associated with moist intrusions is bottom amplified, corresponding to a transition of local conditions from a ‘‘cold clear’’ state with a strong inversion to a ‘‘warm opaque’’ state with a weaker inversion. In the marginal sea ice zone of the Barents Sea, the passage of an intrusion also causes a retreat of the ice margin, which persists for many days after the intrusion has passed. The authors find that there is a positive trend in the number of intrusion events crossing 70°N during December and January that can explain roughly 45% of the surface air temperature and 30% of the sea ice concentration trends observed in the Barents Sea during the past two decades.
An injection event is defined as a vertically integrated northward moisture flux across 70°N in excess of 200 Tg day⁻¹ deg⁻¹ that is sustained for at least 1.5 days and occupies a contiguous zonal extent of at least 9° at all times.
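That definition can be read as a screening rule over a flux field sampled in time and longitude. The sketch below is one reading of the criterion, not the authors' code: the 0.5-day sampling interval, 1° grid, and synthetic data are assumptions, and longitude wrap-around at 0°/360° is ignored for simplicity:

```python
import numpy as np

# Sketch of the Woods & Caballero event criterion (our reading, not the
# authors' code). flux[t, lon] is vertically integrated northward moisture
# flux across 70N in Tg/day/deg, sampled every 0.5 days at 1-deg resolution.
FLUX_THRESHOLD = 200.0   # Tg/day/deg
MIN_DURATION_STEPS = 3   # 1.5 days at 0.5-day sampling
MIN_ZONAL_EXTENT = 9     # degrees = grid cells at 1-deg resolution

def widest_contiguous_run(mask):
    """Length of the longest contiguous True run in a 1-D boolean mask."""
    best = run = 0
    for v in mask:
        run = run + 1 if v else 0
        best = max(best, run)
    return best

def is_injection_event(flux):
    """True if a contiguous sector of >=9 deg exceeds the threshold at every
    time step, and the window spans at least 1.5 days."""
    if flux.shape[0] < MIN_DURATION_STEPS:
        return False
    return all(
        widest_contiguous_run(row >= FLUX_THRESHOLD) >= MIN_ZONAL_EXTENT
        for row in flux
    )

# Synthetic example: a 12-degree-wide plume of 250 Tg/day/deg lasting 2 days.
flux = np.zeros((4, 360))
flux[:, 30:42] = 250.0
print(is_injection_event(flux))  # True
```

A plume only 5° wide, or one lasting under 1.5 days, would fail the same test, which is what makes the criterion select strong, coherent intrusions rather than noise.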
The case study in Fig. 2 shows that the passage of an intrusion can induce local warming of over 20 K in the central Arctic. Here, we examine the typical thermodynamic impact of intrusions, focusing on the fully ice-covered interior of the Arctic basin—specifically, the region where monthly climatological SIC exceeds 90% and shows negligible trend across the data record. This region is shaded gray in Fig. 1.
An example intrusion event is shown in Fig. 2. The injection occurs over the northern tip of Norway and lasts for 1.75 days, yielding seven centroid trajectories. As the injection event progresses, its centroid shifts slowly eastward, giving some zonal spread in centroid trajectories. The flow field during the event features a large-scale dipole straddling the North Pole, with cyclonic circulation over the Atlantic/North American sector and an anticyclone over Eurasia. The trajectories reflect this structure, heading toward the North Pole after injection and then curving cyclonically to exit the Arctic over North America. The intrusion event is associated with large surface air temperature anomalies in the central Arctic and a retreat of the sea ice margin in the Barents Sea, topics we discuss in detail in sections 4 and 5 below.
To focus on intrusions that reach deep into the Arctic basin, events in which fewer than 40% of the trajectory ensemble members reach 80°N over 5 days are discarded. This leaves us with a final dataset of 359 intrusion events from 1990 to 2012, or about 16 per ONDJ season.
It is clear from Fig. 3 that by far the largest fraction of intrusions enters the Arctic through the Atlantic sector, with smaller numbers entering over the Labrador Sea and Greenland and from the Pacific. Interestingly, intrusions entering via the Atlantic and the Barents/Kara sector typically turn cyclonically toward North America—just as in the case study above—while those entering to the east of the Kara Sea typically turn anticyclonically and exit over Siberia. This suggests that moist intrusions into the Arctic are typically associated with cyclonic anomalies over eastern North America and anticyclonic anomalies over western Siberia, consistent with previous work.
A key feature of the warming trend in the Arctic is that it is bottom amplified (i.e., that it is in fact a trend toward a weakening of the climatological temperature inversion that prevails in ice-covered regions of the Arctic basin in winter). This feature has previously been mostly attributed to increased upward turbulent heat flux due to sea ice loss (Serreze et al. 2009; Screen and Simmonds 2010a,b).
Our results suggest a more nuanced view. The passage of an intrusion affects local conditions by inducing a transition from a “cold clear” state with a strong inversion to a “warm opaque” state with a much weaker inversion, in agreement with recent modeling work (Pithan et al. 2014; Cronin and Tziperman 2015). This yields an overall bottom-amplified local temperature perturbation, owing largely to surface heating by increased downwelling longwave radiation.
An increase in the frequency of intrusions can therefore drive a bottom-amplified warming trend even in the absence of sea ice loss. In addition, the intrusions themselves drive sea ice retreat in the marginal zone and thus promote the upward turbulent fluxes that help produce bottom-amplified warming.
Our results agree with other recent work showing a strong impact of poleward moisture flux on Arctic sea ice variability and trends (D.-S. R. Park et al. 2015; H.-S. Park et al. 2015a,b). Since most of the moisture flux into the Arctic occurs in a small number of extreme events (Woods et al. 2013; Liu and Barnes 2015), it is natural to take an event-based approach as we do here, which allows us to study the structure of the intrusion events and their link to dynamical processes in the Arctic region and at lower latitudes.
Predicted surface air temperature trends (Fig. 9f) are greatest in the Barents Sea area extending into the central Arctic, in agreement with observations (Fig. 9k), with the average trend predicted in the Barents Sea box approximately 45% of that observed. This localization of the trends arises both because intrusion counts have risen most rapidly in that region (Fig. 8b) and because individual intrusions have the greatest impact in that region (Fig. 9a). The predicted trend has a peak amplitude of about 3 K decade⁻¹, about half of the observed value. For SIC the predicted trend (Fig. 9g) again coincides spatially with the observed trend (Fig. 9l) and peaks at about 10% decade⁻¹, or about one-third of the observed value at the same location, with the average predicted trend in the Barents Sea box being approximately 30% of that observed.
Current wind patterns over Barents and the Atlantic gateway to the Arctic can be viewed at nullschool:
Arctic Shifts between Cyclonic and Anticyclonic Wind Regimes The Great Arctic Ice Exchange
Previous posts have noted that in March, all the Arctic seas are locked in ice, the exceptions being Bering and Okhotsk in the Pacific, and Barents and Baffin Bay in the Atlantic. And the seesaw continues, as shown in the images below. First, the Atlantic side, featuring Baffin Bay and the Kara, Barents and Greenland Seas.
And on the Pacific side, the only action is in Bering and Okhotsk Seas.
The overall NH extents are below the 11-year average, mostly due to deficits in the usual places: Barents, Bering and Okhotsk, somewhat offset by a surplus in Baffin. All of them melt out in September, and the Bering and Okhotsk basins are effectively outside the Arctic Ocean per se.
As reported previously, 2017 peaked early, rising close to the average on day 53 in February, then losing extent and never achieving the 15M km2 threshold. 14.8 M km2 proved to be the 2017 peak daily ice extent. 2016 also lost extent throughout March, though it stayed higher than the current year and will likely end with a higher monthly average. 2006 and 2017 are virtually tied at this point, though 2017 will likely end up higher on the month. SII shows about 300 km2 less extent for the month, though drawing closer lately.
The Table below presents the ice extents reported by MASIE for day 80 in the years 2017, 2006 and the 11-year average (2006 through 2016).
The 2017 deficit to average is largely due to Okhotsk and Bering declining early, along with Barents and Kara. A surplus in Baffin somewhat offsets these, especially in comparison with 2006.
To summarize, central Arctic seas are locked in ice, while extents have started to decline in the peripheral basins. As of day 80, extents in 2017 are 4% below average and tied with 2006.
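The day-80 comparison reduces to simple arithmetic on the MASIE extents. The values below are illustrative round numbers chosen to be consistent with the stated 4% deficit, not the actual table entries:

```python
# Day-80 ice extents in M km^2 (illustrative round numbers consistent with
# the stated 4% deficit, not the actual MASIE table entries).
average_11yr = 14.9   # 2006-2016 average
extent_2017 = 14.3
extent_2006 = 14.3    # 2017 and 2006 virtually tied

deficit_pct = 100.0 * (average_11yr - extent_2017) / average_11yr
print(f"2017 vs 11-year average: {deficit_pct:.1f}% below")
print(f"2017 vs 2006: {extent_2017 - extent_2006:+.2f} M km^2")
```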
These peripheral basins work like a spoon stirring hot coffee, attracting cold air from Siberia. In this respect they serve as confined research regions, like a unique field laboratory experiment.
This post presents an article by John S. Wettlaufer, who sees not only the oceans but cosmic patterns in coffee-cup vortices. His essay is The universe in a cup of coffee. (Bolded text is my emphasis.)
John Wettlaufer is the A. M. Bateman Professor of Geophysics, Physics, and Applied Mathematics at Yale University in New Haven, Connecticut.
As people throughout the world awake, millions of them every minute perform the apparently banal act of pouring cold milk into hot coffee or tea. Those too groggy to reach for a spoon might notice upwelling billows of milk separated by small, sinking, linear dark features such as shown in panel a of the figure. The phenomenon is such a common part of our lives that even scientists—trained to be observant—may overlook its importance and generality. The pattern bears resemblance to satellite images of ocean color, and the physics behind it is responsible for the granulated structure of the Sun and other cosmic objects less amenable to scrutiny.
Archimedes pondered the powerful agent of motion known as buoyancy more than two millennia ago. Children do, too, when they imagine the origins of cloud animals on a summer’s day. The scientific study of thermal and compositional buoyancy originated in 1798 with a report by Count Rumford intended to disabuse believers of the caloric theory. Nowadays, buoyancy is at the heart of some of the most challenging problems in nonlinear physics—problems that are increasingly compelling. Answers to fundamental questions being investigated today will have implications for understanding Earth’s heat budget, the transport of atmospheric and oceanographic energy, and, as a corollary, the climate and fate of stars and the origins of planets. Few avenues of study combine such basic challenges with such a broad swath of implications. Nonetheless, the richness of fluid flow is rarely found in undergraduate physics courses.
Wake up and smell the physics
The modern theory of hydrodynamic stability arose from experiments by Henri Bénard, who heated, from below, a thin horizontal layer of spermaceti, a viscous fluid wax. For small vertical temperature gradients, Bénard observed nothing remarkable; the fluid conducted heat up through its surface but exhibited no wholesale motion as it did so. However, when the gradient reached a critical value, a hexagonal pattern abruptly appeared as organized convective motions emerged from what had been a homogeneous fluid. The threshold temperature gradient was described by Lord Rayleigh as reflecting the balance between thermal buoyancy and viscous stresses, embodied in a dimensionless parameter now called the Rayleigh number.
When the momentary thermal buoyancy of a blob of fluid—provided by the hot lower boundary—overcomes the viscous stresses of the surrounding fluid, wholesale organized motion ensues. The strikingly structured fluid, with its up-and-down flow assuming specific geometries, is an iconic manifestation of how a dissipative system can demonstrate symmetry breaking (the up-and-down flow distinguishes horizontal positions even though the lower boundary is at a uniform temperature), self-organization, and beauty. (See the article by Leo Kadanoff in PHYSICS TODAY, August 2001, page 34.)
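The balance Rayleigh described can be checked with rough numbers for a coffee cup. The property values below are generic figures for warm water, so the result is an order-of-magnitude estimate only:

```python
# Rayleigh number for a layer heated from below (standard definition):
#   Ra = g * alpha * dT * d^3 / (nu * kappa)
# Convection sets in when Ra exceeds a critical value of order 10^3
# (~1708 for rigid boundaries). Values below are rough figures for warm
# water in a cup, for illustration only.
g = 9.81        # gravity, m/s^2
alpha = 3e-4    # thermal expansion coefficient of warm water, 1/K
dT = 10.0       # bottom-to-top temperature difference, K
d = 0.05        # layer depth, m
nu = 1e-6       # kinematic viscosity, m^2/s
kappa = 1.4e-7  # thermal diffusivity, m^2/s

Ra = g * alpha * dT * d**3 / (nu * kappa)
print(f"Ra ~ {Ra:.1e}")
```

The result is around 10⁷, vastly beyond the onset threshold, which is why a cooling cup of coffee convects vigorously rather than sitting still.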
Astrophysicists and geophysicists can hardly make traction on many of the problems they face unless they come to grips with convection—and their quests are substantially complicated by their systems’ rotations. Despite the 1835 publication of Gaspard-Gustave Coriolis’s Mémoire sur les équations du mouvement relatif des systèmes de corps (On the Equations of Relative Motion of a System of Bodies), debate on the underlying mechanism behind the deflection of the Foucault pendulum raged in the 1905 volume of Annalen der Physik, the same volume in which Albert Einstein introduced the world to special relativity. Maybe the lack of comprehension is not so surprising: Undergraduates still more easily grasp Einstein’s theory than the Coriolis effect, which is essential for understanding why, viewed from above, atmospheric circulation around a low pressure system over a US city is counterclockwise but circulation over an Australian city is clockwise.
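The hemispheric difference mentioned above comes down to the sign of the Coriolis parameter, f = 2Ω sin(latitude), which flips across the equator. The city latitudes below are merely illustrative:

```python
import math

# Coriolis parameter f = 2 * Omega * sin(latitude). Its sign flips across
# the equator, which is why low-pressure circulation (viewed from above) is
# counterclockwise over a US city but clockwise over an Australian one.
OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def coriolis_parameter(lat_deg):
    return 2 * OMEGA * math.sin(math.radians(lat_deg))

f_us = coriolis_parameter(40.7)    # roughly New York's latitude
f_aus = coriolis_parameter(-33.9)  # roughly Sydney's latitude
print(f"f at 40.7N: {f_us:+.2e} 1/s")
print(f"f at 33.9S: {f_aus:+.2e} 1/s")
```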
Practitioners of rotating-fluid mechanics generally credit mathematical physicist Vagn Walfrid Ekman for putting things in the modern framework, in another key paper from 1905. Several years earlier, during his famous Fram expedition, explorer Fridtjof Nansen had observed that ice floes moved to the right of the wind that imparted momentum to them. Nansen then suggested to Ekman that he investigate the matter theoretically. That the deflection was due to the ocean’s rotating with Earth was obvious, but Ekman described the corrections that must be implemented in a noninertial reference frame. Since so much in the extraterrestrial realm is spinning, scientists taken by cosmological objects eventually embraced Ekman’s formulation and sought evidence for large-scale vortex structures in the accretion disks around stars. Vortices don’t require convection and when convection is part of a vortex-producing system, additional and unexpected patterns ensue.
Cream, sugar, and spinning
The Arctic Ocean freezes, cooling and driving salt into the surface layers. Earth’s inner core solidifies, leaving a buoyant, iron-depleted metal. Rapidly rising air from heated land surfaces creates thunderstorms. Planetary accretion disks receive radiation from their central stars. In all these systems, rotation has a hand in the fate of rising or sinking fluid. What about your steaming cup of coffee: What happens when you spin that?
Place the cup in the center of a spinning record player— some readers may even remember listening to music on one of those. The friction from the wall of the cup transmits stresses into the fluid interior. If the coffee is maintained at a fixed temperature for about a minute, every parcel of fluid will move at the same angular velocity; the coffee is said to be spun up.
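The "about a minute" figure is consistent with the standard Ekman spin-up estimate, tau ~ H / sqrt(nu * Omega). The cup depth and viscosity below are assumed values, so this is a rough check rather than a measurement:

```python
import math

# Ekman spin-up timescale tau ~ H / sqrt(nu * Omega): roughly how long wall
# and bottom friction take to bring the whole fluid into solid-body rotation.
# Depth and viscosity are assumed values for a coffee cup on a turntable.
H = 0.08     # fluid depth, m
nu = 1e-6    # kinematic viscosity of water, m^2/s
rpm = 100.0 / 3.0                 # 33 1/3 rpm record player
Omega = rpm * 2 * math.pi / 60.0  # rotation rate, rad/s

tau = H / math.sqrt(nu * Omega)
print(f"spin-up time ~ {tau:.0f} s")
```

The estimate comes out at a few tens of seconds, in line with the "about a minute" quoted in the essay.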
On the time scales of contemporary atmospheric and oceanographic phenomena, Earth’s rotation is indeed a constant, whereas the time variation of the rotation could be important for phenomena in planetary interiors, the evolution of an accretion disk, or tidal perturbations of a distant moon. Thus convective vortices are contemplated relative to a rotating background flow. Perturbations in the rotation rate revive the role of boundary friction and substantially influence the interior circulation. Moreover, evaporation and freezing represent additional perturbations, which alter how the fluid behaves as stresses attempt to enforce uniform rotation. Returning to the coffee mug as laboratory, the model system shown in panel b of the figure reveals how the added complexity of rotation momentarily organizes the pattern seen in panel a into concentric rings of cold and warm fluid.
Fundamental competitions play out when you rotate your evaporating coffee. As we have seen, evaporative cooling drives narrow regions of downward convection; significant viscous and Coriolis effects balance each other in those downwelling regions. Rotation then dramatically organizes the sinking cold sheets and rising warm billows into concentric rings that first form at the center of the cup. By about 7.5 minutes after rotation has been initiated, the rings shown in panel b have grown to cover most of the horizontal plane. Their uniform azimuthal motion exists for about 3.5 minutes, at which time so-called Kelvin–Helmholtz billows associated with the shearing between the rings appear at their boundaries, grow, and roll up into vortices; see panel c. Three minutes later, as shown in panel d, those vortices lose their azimuthal symmetry and assemble into a regular vortex grid whose centers contain sinking fluid.
Panel d shows one type of coherent structure that forms in rotating fluids and other mathematically analogous systems if the persistence time of the structure—vortices here— is much longer than the rotational period. Other well-known examples are Jupiter’s Great Red Spot, which is an enduring feature of the chaotic Jovian atmosphere, and the meandering jet streams on Earth.
Moreover, persistent vortices in superconductors and superfluids organize themselves. Indeed, it appears that vortices in superconductors are as mobile as their counterparts in inviscid fluids. And although scientists have long studied rotating convective superfluids, the classical systems considered in this Quick Study suggest that we may yet find surprising analogies in superconductors. Will we one day see superconducting jet streams?
If you are reading this article with a cup of coffee, put it down and take a closer look at what is going on in your cup.
Wettlaufer has been an advocate for getting the physics right in climate models. His cup-of-coffee analogy is in fact a demonstration of the mesoscale fluid and rotational dynamics, and the perturbations, that still defy human attempts to simulate how the climate operates.
A concise summary is provided by Paul Driessen and Roger Bezdek in the article Anti-fossil fuel SCC relies on garbage models, ignores carbon benefits and hurts the poor. Excerpts below.
The UN Development Program also calls energy “central to poverty reduction.” And International Energy Agency Executive Director Dr. Fatih Birol notes that “coal is raising living standards and lifting hundreds of millions of people out of poverty.” In fact, all fossil fuels are doing so.
Indeed, fossil fuels created the modern world and the housing, transportation, other technologies and living standards so many of us take for granted. They are essential for electricity and life, and over the past 250 years they more than doubled average life expectancy in countries that took advantage of them.
But the Obama Administration and radical environmentalists despise fossil fuels and used every tactic they could devise to eliminate them. One of their most important schemes was the “social cost of carbon.”
Six Things Wrong with Social Cost of Carbon
1. Each ton of U.S. emissions averted would initially have prevented a hypothetical $25/ton in global societal costs allegedly resulting from dangerous manmade climate change: less coastal flooding and tropical disease, fewer droughts and extreme weather events, for example. But within three years regulators arbitrarily increased the SCC to around $40/ton.
That made it easier to justify the Clean Power Plan, Paris climate agreement, and countless Obama Era actions on electricity generation, fracking, methane, pipelines, vehicle mileage and appliance efficiency standards, livestock operations, carbon taxes, and wind, solar and biofuel mandates and subsidies.
2. The supposed bedrock for the concept is the now rapidly shifting sands of climate chaos theory. New questions are arising almost daily about data quality and manipulation, the degree to which carbon dioxide affects global temperatures, the complex interplay of solar, cosmic ray, oceanic and other natural forces, and the inability of computer models to predict temperatures, sea level rise or hurricanes.
3. The SCC scheme blames American emissions for supposed costs worldwide (even though U.S. CO2 emissions are actually declining). It incorporates almost every conceivable cost of oil, gas and coal use on crops, forests, coastal cities, property damage, “forced migration,” and human health, nutrition and disease. However, it utterly fails to mention, much less analyze, tremendous and obvious carbon benefits.
4. SCC schemes likewise impute only costs to carbon dioxide emissions. However, as thousands of scientific studies verify, rising levels of this miracle molecule are “greening” the Earth – reducing deserts and improving forests, grasslands, drought resistance, crop yields and human nutrition. No matter which government report or discount rate is used, asserted social costs of more CO2 in Earth’s atmosphere are infinitesimal compared to its estimated benefits.
5. Government officials claim they can accurately forecast damages to the world’s climate, economies, civilizations, populations and ecosystems from U.S. carbon dioxide emissions over the next three centuries. They say we must base today’s energy policies, laws, and regulations on those forecasts. The notion is delusional and dangerous.
6. Finally, the most fundamental issue isn’t even the social cost of carbon. It is the costs inflicted on society by anti-carbon regulations. Those rules replace fossil fuel revenues with renewable energy subsidies; reliable, affordable electricity with unreliable power that costs two to three times as much; and mines, drill holes, cropland and wildlife habitats with tens of millions of acres of wind, solar and biofuel “farms.”
Anti-carbon rules are designed to drive energy de-carbonization and modern nation de-industrialization. Perhaps worst, their impacts fall hardest on poor, minority and blue-collar families. . . Worldwide, billions of people still do not have electricity – and the SCC would keep them deprived of its benefits.
It’s time to rescind and defund the SCC – and replace it with honest, objective cost-benefit analyses.
Roger Bezdek is an internationally recognized energy analyst and president of Management Information Services, Inc. Paul Driessen is senior policy analyst for the Committee For A Constructive Tomorrow and author of books and articles on energy, climate change and human rights.