Red Flag: Ontario’s Green Energy Debacle

Babatunde Williams writes at Spiked on Ontario’s green-energy catastrophe. Excerpts are in italics with my bolds.

A transition to renewables sent energy prices soaring, pushed thousands into poverty and fueled a populist backlash.

In February 2009, Ontario passed its Green Energy Act (GEA). It was signed a week after Obama’s American Recovery and Reinvestment Act in the US, following several months of slow and arduous negotiations. Like the American act, it had grand plans to start a ‘green’ recovery following the financial crash – although on a more modest scale.

This was the plan: increased integration of wind and solar energy into Ontario’s electricity grid would shut down coal plants and create 50,000 green jobs in the first three years alone.

Additionally, First Nations communities would manage their own electricity supply and distribution – what observers would later call the ‘decolonisation’ of energy – empowering Canada’s indigenous communities who had been disenfranchised by historical trauma. Lawmakers promised that clean and sustainable energy provided by renewables would also reduce costs for poorer citizens. This won an endorsement from Ontario’s Low Income Energy Network – a group which campaigns for universal access to affordable energy.

But on 1 January 2019, Ontario repealed the GEA, one month before its tenth anniversary. The 50,000 guaranteed jobs never materialised. The ‘decolonisation’ of energy didn’t work out, either. A third of indigenous Ontarians now live in energy poverty. Ontarians watched in dismay as their electricity bills more than doubled during the life of the GEA. Their electricity costs are now among the highest in North America.

To understand how the GEA went irreparably wrong, we must look at Ontario’s contracts with its green-energy suppliers. Today, Ontario’s contracts guarantee to electricity suppliers that they ‘will be paid for each kWh of electricity generated from the renewable energy project’, regardless of whether this electricity is consumed. As preposterous as this may seem, it’s actually an improvement on many of the original contracts the Ontario government locked itself into.

Earlier contracts guaranteed payments that benchmarked close to 100 per cent of the supplier’s capacity, rather than the electricity generated. So if a participating producer supplied only 33 per cent of its capacity in a given year, the state would still pay it as if it had produced 100 per cent.

This was especially alarming in context, as 97 per cent of the applicants to the GEA programme were using wind or solar energy. These are both intermittent forms of energy. In an hour, day or month with little wind or sun, wind and solar farms can’t supply the grid with electricity, and other sources are needed for back-up. As a result, wind and solar electricity providers can only supplement the grid but cannot replace consistently reliable power plants like gas or nuclear.

Many governments, including other Canadian provinces, have used subsidies of all hues to incentivise renewables. But Ontario put this strategy on steroids. For example, the Council for Clean and Reliable Energy found that ‘in 2015, Ontario’s wind farms operated at less than one-third capacity more than half (58 per cent) the time’. Regardless, Ontarians paid out on multiple contracts as if the wind farms had operated at full capacity all year round. To add insult to injury, Ontario’s GEA contracts guaranteed exorbitant prices for renewable energy – often at up to 40 times the cost of conventional power for 20 years.

By 2015, Ontario’s auditor general, Bonnie Lysyk, concluded that citizens had paid ‘a total of $37 billion’ above the market rate for energy. They were even ‘expected to pay another $133 billion from 2015 to 2032’, again, ‘on top of market valuations’. (One steelmaker has taken the Ontarian government to court for these exorbitant energy costs.)

Today, this problem persists.  Furthermore, electricity demand from ratepayers declined between 2011 and 2015, and has continued to fall. Ontarians were forced to pay higher prices for new electricity capacity, even as their consumption was going down.

Ontario’s auditor general in 2015 stated that: ‘The implied cost of using non-hydro renewables to reduce carbon emissions in the electricity sector was quite high: approximately $257 million [£150million] for each megatonne of emissions reduced.’ Per tonne of carbon reduced, the Ontario scheme has cost 48 per cent more than Sweden’s carbon tax – the most expensive carbon tax in the world.

Clearly, bad policy has led to exorbitant waste. This wasn’t the result of corruption or conspiracy – it was sheer incompetence. It’s a meandering story of confusion and gross policy blunders that will fuel energy poverty in Ontario for at least another decade.

As democracies across the West respond to the coronavirus crisis with hastily prepared financial packages for a ‘green recovery’, they should consider the cautionary tale of Ontario.

The GEA’s stubborn defenders refuse to recognise that poor policy, even with the best intentions, discredits future efforts at cutting emissions. ‘Green New Deals’ for the post-pandemic recovery in the US and Europe should learn from the GEA. Clean energy at any cost will be rightfully short-lived and repealed, and its supporters will be unceremoniously booted out of power.

See Also:  Electrical Madness in Green Ontario

 

New York Nukes Itself

This post is not about WuHanFlu, but about New York’s insane decision to close nuclear power plants in favor of wind farms.  Robert Bryce writes at Forbes New York Has 1,300 Reasons Not To Close Indian Point. Excerpts in italics with my bolds.

At the end of this month, the Unit 2 reactor at the Indian Point Energy Center in Buchanan, New York will be permanently shut down. Next April, the final reactor at the site, Unit 3, will also be shuttered.

TOMKINS COVE, NY: The Indian Point nuclear power plant seen from Tomkins Cove, New York (Corbis via Getty Images)

But the premature closure of the 2,069-megawatt nuclear plant is even worse land-use policy. Here’s why: replacing the 16 terawatt-hours of carbon-free electricity that is now being produced by the twin-reactor plant with wind turbines will require 1,300 times as much territory as what is now covered by Indian Point.

Here are the facts: Indian Point covers 239 acres, or about 1 square kilometer. To put Indian Point’s footprint into context, think of it this way: you could fit three Indian Points inside Central Park in Manhattan.

Based on projected output from offshore wind projects (which have higher capacity factors than onshore wind projects), producing that same amount of electricity as is now generated by Indian Point – about 16 terawatt-hours per year – would require installing about 4,000 megawatts of wind turbines. That estimate is based on the proposed South Fork offshore wind project, a 90-megawatt facility that is expected to produce 370 gigawatt-hours per year. (Note that these output figures are substantially higher than what can be expected from onshore wind capacity.) Using the numbers from South Fork, a bit of simple division shows that each megawatt of wind capacity will produce about 4.1 gigawatt-hours per year. Thus, matching the energy output of Indian Point will require about 4,000 megawatts of wind capacity.

That’s a lot of wind turbines. According to the American Wind Energy Association, existing wind-energy capacity in New York state now totals about 1,987 megawatts. That capacity will require enormous amounts of land. Numerous studies, including ones by the Department of Energy, have found that the footprint, or capacity density, of wind energy projects is about 3 watts per square meter. Thus, 4,000 megawatts (four billion watts) divided by 3 watts per square meter = 1.33 billion square meters, or 1,333 square kilometers (roughly 515 square miles).
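The arithmetic above can be checked in a few lines. The South Fork figures and the 3 W/m² capacity density are the article’s; the Central Park area (about 3.41 km²) is my addition for the comparison that follows:

```python
# Back-of-envelope check of the article's figures; not an independent estimate.

indian_point_gwh = 16_000      # Indian Point's annual output, GWh (16 TWh)
south_fork_mw = 90             # South Fork offshore project capacity, MW
south_fork_gwh = 370           # its expected annual output, GWh
capacity_density = 3.0         # wind-farm capacity density, W per square metre

# Energy yield per MW of offshore wind capacity
gwh_per_mw = south_fork_gwh / south_fork_mw        # ~4.1 GWh per MW per year

# Wind capacity needed to match Indian Point's output
wind_mw = indian_point_gwh / gwh_per_mw            # ~3,900 MW, i.e. "about 4,000"

# Land area at 3 W/m^2: convert MW to W, then m^2 to km^2
area_km2 = wind_mw * 1e6 / capacity_density / 1e6  # ~1,300 km^2

# Central Park covers roughly 3.41 km^2 (my figure, not the article's)
central_parks = area_km2 / 3.41

print(round(gwh_per_mw, 1), round(wind_mw), round(area_km2), round(central_parks))
```

Carrying the unrounded 3,900 MW through gives about 380 Central Parks; the article’s figure of 400 comes from rounding up to 4,000 MW first.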

UNITED STATES: Aerial view of New York City’s Central Park (Getty Images)

Those numbers are almost too big to imagine. Therefore, let’s look again at Central Park. Recall that three Indian Points could fit inside the confines of the famed park. Thus, replacing the energy production from Indian Point would require paving a land area equal to 400 Central Parks with forests of wind turbines.

Put another way, the 1,300 square kilometers of wind turbines needed to replace the electricity output of Indian Point is nearly equal to the size of Albany County. Would New York legislators who convene in the capitol in Albany consent to having the entire county covered in wind turbines? I can’t be sure, but I am guessing that they might oppose such a plan. (See yellow area in Google Earth image at top.)

These basic calculations prove some undeniable facts. Among them: Indian Point represents the apogee of densification. The massive amount of energy being produced by the two reactors on such a small footprint provides a perfect illustration of what may be nuclear energy’s single greatest virtue: its unsurpassed power density. (Power density is a measure of energy flow from a given area, volume, or mass.) High power density sources, like nuclear, allow us to spare land for nature. Density is green.

Alas, the environmental groups that are influencing policymakers in New York and in other states are strident in their belief that nuclear energy is bad and that renewables are good. But that theology ignores the greenness of density and the essential role that nuclear energy must play if we are to have any hope of making significant reductions in carbon-dioxide emissions.

In short, the premature closure of Indian Point – and the raging land-use battles over renewable energy siting in New York – should lead environmental groups to rethink their definition of what qualifies as “green.” Just because wind and solar are renewable doesn’t mean they are green. In fact, the land-use problems with renewables show the exact opposite.

Kelly’s Climate Clarity

Michael Kelly was the inaugural Prince Philip Professor of Technology at the University of Cambridge. His interest in the topic of this lecture was roused during 2006–9 when he was a part time Chief Scientific Adviser to the Department for Communities and Local Government. On his return full-time to Cambridge he was asked by his engineering colleagues to lead the teaching of final-year and graduate engineers on present and future energy systems, which he did until he retired in 2016. Michael Kelly recently spoke on the topic Energy Utopias and Engineering Reality. The text of his remarks is published by GWPF. This post provides a synopsis consisting of excerpts in italics with associated images and my bolds.

Overview

Just so that there can be no doubt whatsoever, the real-world data shows me that the climate is changing, as indeed it has always changed. It would appear by correlation that mankind’s activity, by way of greenhouse gas emissions, is now a significant contributory factor to that change, but the precise percentage quantification of that factor is far from certain. The global climate models seem to show heating at least twice as fast as the observed data over the last three decades. I am unconvinced that climate change represents a proximate catastrophe, and I suggest that a mega-volcano in Iceland that takes out European airspace for six months would eclipse the climate concerns in short order.

The detailed science is not my concern here. The arguments in this lecture would still apply if the actual warming were twice as fast as model predictions.

Project engineering has rules of procedure and performance that cannot be circumvented, no matter how much one would wish it. Much of what is proposed by way of climate change mitigation is simply pie-in-the-sky, and I am particularly pleased to have so many parliamentarians here tonight, as I make the case for engineering reality to underpin the public debate.

I plan to describe:

(i) the global energy sector,
(ii) the current drivers of energy demand,
(iii) progress to date on decarbonisation, and the treble challenges represented by
(iv) factors of thousands in the figures of merit between various forms of energy,
(v) the energy return on energy invested for various energy sources, and
(vi) the energising of future megacities.

I make some miscellaneous points and then sum up. The main message is that our present energy infrastructure is vast and has evolved over 200 years. So the chances of revolutionising it in short order, on the scale envisaged by Parliament’s net-zero target, are pretty close to zero; zero being exactly the chance of meeting Extinction Rebellion’s demands.

The energy sector – its scale and pervasiveness

As society evolves and civilisation advances, energy demands increase. As well as increasing
demand for energy, the Industrial Revolution led to an increase in global population, which had been rather static until about 1700. Since then, both the number of people and the energy consumption per person have increased, and from Figure 2 we can see the steady growth of gross domestic product per person and energy consumption through the 19th and 20th centuries until now.
Energy is the essential driver of modern civilisation. World GDP this year is estimated at $88 trillion, growing to $108 trillion by 2023, with the energy sector then being of order $10 trillion. But renewables have played, and will continue to play, a peripheral role in this growth. Industrialisation was accompanied by a steady and almost complete reduction in the use of renewables (Figure 4).

In recent years, there has been an uptick in renewables use, but this has been entirely the result of the pressure to decarbonise the global economy in the context of mitigating climate change, and the impact has again been nugatory. Modern renewables remain an insignificant share of the energy supply. Indeed MIT analysts suggest the transition away from fossil fuel energies will take 400 years at the current rate of progress.

Figure 6 shows the scale of what has been proposed. Even reaching the old target of an
80% reduction in carbon dioxide emissions would be miraculous; this is a level of emissions
not seen since 1880. I assert that a herd of unicorns will be needed to deliver this target,
let alone full decarbonisation. I also point out the utter nonsense of Extinction Rebellion’s
demands to complete the task by 2025.

Figure 6. Source: after Glen Peters.

Contemporary drivers of energy needs 1995–2035

I wish to focus on the drivers of global energy demands today by looking back and forward
twenty years. Figure 7 shows data from BP covering the period 1965–2035 on the demand
for global energy by fuel type. The data to 2015 is historic and not for challenge.

One notes that we have not had an ‘energy transition’: fossil fuels have continued to grow steadily at a rate about 7–8 times that of renewable technologies over the last 20 years. The energy demand of the major developed countries has been static or in small decline over that period. Most of the increase has come from growth in the global middle class, which increased by 1.5 billion people in the 20 years to 2015.

The whole of Figure 7 can be explained quantitatively if one assumes that a middle-class person (living in a high-rise building with running water and electricity, without any mention of personal mobility – the World Bank definition of a middle-class existence) uses between three and four times the amount of energy per day as a poor person in a rural hovel or urban slum.

You should be under no illusions: this is a humanitarian triumph. It is the delivery of the top Sustainable Development Goals – the elimination of poverty and hunger – that has been and will remain the main driver of energy demand for the foreseeable future.

Decarbonisation progress to date

In the UK, the Climate Change Committee has, on the face of it, overseen a steady fall in UK emissions of carbon dioxide since its formation in 2008. However, the fall started in 1990 and has continued at a very steady rate since (Figure 8a).
However, UK decreases are dwarfed by global increases. After no-growth years in 2016 and 2017, global carbon dioxide emissions grew by 3% in 2018 (Figure 8b). European emissions fell but the growth in all the other parts of the world was 17 times greater. The emissions reductions in the UK have also come at a considerable cost. The deficit of the UK balance of payments with respect to manufactures has been increasing since then. In other words, a significant proportion of our emissions have been exported to China and elsewhere. Indeed, over the period 1991– 2007, the emissions associated with rising imports almost exactly cancelled the UK emissions reduction!

There was much publicity in late summer this year when 50% of the UK’s electricity was (briefly) generated from renewables. Few people realised that electricity is only 16% of our total energy usage, and it is a common error, even in Parliament, to think that we are making enormous progress on the whole energy front. The real challenge is shown in Figure 10, where the energy used in fuels, heating and electricity are directly compared over a three year period. Several striking points emerge from this one figure.
First, we use twice as much energy in the UK for transport as we do for electricity. Little progress has been made in converting the fuel energy to electricity, as there are few electric vehicles and no ships or aircraft that are battery powered.

Note that if such a conversion of transport fuel to electricity were to take place, the grid capacity would have to treble from what we have today.

Second, most of the electricity use today is baseload, with small daily and seasonal variations (one can see the effect of the Christmas holidays). The more intermittent wind and solar energy is used, the more back-up has to be ready for nights, for periods of anticyclones, or both. That back-up capacity could have been used all along to produce higher levels of baseload electricity, and because it is being used less efficiently, the resulting back-up generation costs more as it pays off the same total capital costs.

But in fact it is the heating that is the real problem. Today that is provided by gas, with gas flows varying by a factor of eight between highs in winter and lows in summer. If heat were to be electrified along with transport, the grid capacity would have to be expanded by a factor between five and six from today. How many more wind and solar farms would we need?

Initial conclusions

So far, I have described the scale of the global energy sector, how it has come to be the size it is, the current drivers for more energy and the current status of attempts to decarbonise the global economy. I can draw some initial conclusions at this point.

• Energy equals quality of life and we intervene there only with the most convincing of cases.
• Renewables do not come close to constituting a solution to the climate change problem for an industrialised world.
• China is not the beacon of hope it is portrayed to be.
• There is no ground shift in energy sources despite claims to contrary.

The engineering challenges implied by factors of hundreds and thousands

Many people do not realise the very different natures of the forms of energy we use today.  But energy generation technologies can differ by factors of hundreds or thousands on key measures, such as the efficiency of materials use, the land area needed, the whole-life costs of ownership, and matters associated with energy storage.

Here are four statements about the efficiency with which energy generation systems use
high-value advanced materials:

• A Siemens gas turbine weighs 312 tonnes and delivers 600 MW. That translates to 1920 W/kg of firm power over a 40-year design life.

• The Finnish PWR reactor weighs 500 tonnes and produces 860 MW of power, equivalent to 1700 W/kg of firm supply over 40 years. When combined with a steam turbine, the figure is 1000 W/kg.

• A 1.8-MW wind turbine weighs 164 tonnes, made up of a 56-tonne nacelle, 36 tonnes
for the blades, and a 71-tonne tower. That is equivalent to 10 W/kg for the nameplate
capacity, but at a typical load factor of 30%, this corresponds to 3 W/kg of firm power.
A 3.6-MW offshore turbine, with its 400-tonne above-water assembly, and with a 40%
load factor, comes out at 3.6 W/kg over a 20-year life.

• Solar panels for roof-top installation weigh about 16 kg/m2, and with about 40 W/m2 of firm power provided over a year, that translates to about 2.5 W/kg over a 20-year life.

The figures are shown in Figure 12, although the wind and solar bars are all but invisible.
You’d need 360 5-MW wind turbines (of 33% efficiency) to produce the same output as a gas turbine, each with concrete foundations of comparable volume.
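Each of the mass-efficiency figures in the bullet points above reduces to a one-line calculation; a quick cross-check using the lecture’s own tonnages and load factors:

```python
# Firm watts per kilogram = power (W) x load factor / mass (kg),
# using the figures quoted in the bullet points above.

def firm_w_per_kg(power_mw, mass_tonnes, load_factor=1.0):
    return power_mw * 1e6 * load_factor / (mass_tonnes * 1000)

gas_turbine   = firm_w_per_kg(600, 312)          # ~1920 W/kg
pwr_reactor   = firm_w_per_kg(860, 500)          # ~1720 W/kg
onshore_wind  = firm_w_per_kg(1.8, 164, 0.30)    # ~3.3 W/kg of firm power
offshore_wind = firm_w_per_kg(3.6, 400, 0.40)    # ~3.6 W/kg of firm power

print(round(gas_turbine), round(pwr_reactor),
      round(onshore_wind, 1), round(offshore_wind, 1))
```

The factor-of-hundreds gap between thermal plant and wind or solar is visible directly in these four numbers, which is why the wind and solar bars in Figure 12 are all but invisible.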

The late David MacKay showed that the land areas needed to produce 225 MW of power were very different: 15 acres for a small modular nuclear reactor, 2400 acres for average solar cell arrays, and 60,000 acres for an average wind farm.

Approximate area required for all of London’s electricity to come from wind farms: gray area for wind farms, yellow area for solar farms.

The challenge of megacities

In 2050 over half the world’s population will be living in megacities with populations of more than 5 million people. The energising of such cities is at present achieved with fossil and nuclear fuels, as it can be for the cities of the future. The impact of renewable energies will be very small, as the vast areas of land needed – often taken away from local areas devoted to food production, as around London or Beijing – will limit their contribution. The extreme examples are Hong Kong and Singapore, neither of which has any available hinterland.

Conclusions

It is clear to me that, for the sake of the whole of mankind, we must stay with business as usual, which has always had a focus on the efficient use of energy and materials. Climate change mitigation projects are inappropriate while large-scale increases in energy demand continue. If renewables prove insufficiently productive, research should be diverted to focus on genuinely new technologies. It is notable that within a few decades of Watt’s steam engine becoming available, the windmills of Europe ceased turning; we should not be reversing that process if the relative efficiencies have not changed. We must de-risk major infrastructure projects, such as mass decarbonisation: they are too serious to get wrong. Human lifestyle changes can have a greater and quicker impact: they could deliver a 10% drop in our energy consumption from tomorrow. This approach would not be without consequences, however. For example, airlines might well collapse if holidaymakers stayed, or were made to stay, at home.

Who owns the integrity of engineering in the climate debate in the United Kingdom? Globally? The Royal Society, the Royal Academy of Engineering and the Engineering Institutions should all be holding the fort for engineering integrity, and not letting the engineering myths of a Swedish teenager go unchallenged.

Footnote:  See also a previous 2015 article by Kelly in Standpoint Magazine: For Climate Alarmism, The Poor Pay The Price  Some excerpts in italics with my bolds.

During a period as a scientific adviser in Whitehall, I quickly learned the elements of sound advice given to politicians — a process that is quite distinct from lobbying. A well-briefed minister knows about the general area in which a decision is sought, and is given four scenarios before any recommendation. Those scenarios are the upsides and the downsides both of doing nothing and of doing something. Those who give only the upside of doing something and the downside of doing nothing are in fact lobbying.

In his introduction he (Stern) makes it clear that he has consulted many scientists, businessmen, philosophers and economists, but in his book I find not a single infrastructure project engineer asked about the engineering reality of any of his propositions, nor a historian of technology about the elementary fact that technological breakthroughs are not pre-programmable. Lord Stern’s description of the climate science is an uncritical acceptance of the worst case put by the Intergovernmental Panel on Climate Change (IPCC), one from which many in the climate science community are now distancing themselves.

Those building the biblical Tower of Babel, intending to reach heaven, did not know where heaven was and hence when the project would be finished, or at what cost. Those setting out to solve the climate change problem now are in the same position. If we were to spend 10 or even 100 trillion dollars mitigating carbon dioxide emissions, what would happen to the climate? If we can’t evaluate whether reversing climate change would be value for money, why should we bother, when we can clearly identify many and better investments for such huge resources?

The Paris meeting on climate change will be setting out to build a modern Tower of Babel.

On Stable Electric Power: What You Need to Know

electric-power-system

nzrobin commented on my previous post Big Wind Blacklisted that he had more to add. So this post provides excerpts from a seven-part series Anthony wrote at kiwithinker on Electric Power System Stability. Excerpts are in italics with my bolds to encourage you to go read the series of posts at kiwithinker.

1. Electrical Grid Stability is achieved by applying engineering concepts of power generation and grids.

Some types of generation provide grid stability; other types undermine it. Grid stability is an essential requirement for power supply reliability and security. However, there is insufficient understanding of what grid stability is, and of the risk that exists if stability is undermined to the point of collapse. Increasing grid instability will lead to power outages. The stakes are very high.

2. Electric current is generated ‘on demand’. There is no stored electric current in the grid.

The three fundamental parts of a power system are:

its generators, which make the power,
its loads, which use the power, and
its grid, which connects them together.

The electric current delivered when you turn on a switch is generated from the instant you operate the switch. There is no store of electric current in the grid. Only certain generators can provide this instant ‘service’.

So if there is no storage in the grid the amount of electric power being put into the grid has to very closely match that taken out. If not, voltage and frequency will move outside of safe margins, and if the imbalance is not corrected very quickly it will lead to voltage and frequency excursions resulting in damage or outages, or both.

3. A stable power system is one that continuously responds to power/frequency disturbances, and completes the required compensating adjustments within an acceptable timeframe.

Voltage is an important performance indicator and it should of course be kept within acceptable tolerances. However, voltage excursions tend to be reasonably local events. So while voltage excursions happen from place to place, and cause damage and disruption, it turns out that voltage alone is not the main ‘system-wide’ stability indicator.

The key performance indicator of an acceptably stable power system is its frequency being within a close margin from its target value, typically within 0.5 Hz from either 50 Hz or 60 Hz, and importantly, the rise and fall rate of frequency deviations need to be managed to achieve that narrow window.

An increasing frequency indicates more power is entering the system than is being taken out. Likewise, a reducing frequency indicates more is being taken out than is entering. For a power supply system to be stable it is necessary to control the frequency. Control systems continuously observe the frequency, and the rate of change of the frequency. The systems control generator outputs up or down to restore the frequency to the target window.

Of course, energy imbalances of varying size are occurring all the time. Every moment of every day the load is continuously changing, generally following a daily load curve. These changes tend to be gradual and lead to a small rate of change of frequency. Now and then, however, faults occur. Large power imbalances mean a proportionately faster frequency change occurs, and consequently the response has to be bigger and faster, typically within two or three seconds if stability is to be maintained. If not, in a couple of blinks of an eye the power is off – across the whole grid.

If the system can cope with the range of disturbances thrown at it, it is described as ‘stable’. If it cannot cope with the disturbances it is described as ‘unstable’.

4. There are two main types of alternating current machines used for the generation of electricity: synchronous and asynchronous. The difference between them begins with the way the magnetic field of the rotor interacts with the stator. Both types of machine can be used as either a generator or motor.

There are two key differences affecting their contribution to stability.

The kinetic energy of the synchronous machine’s rotor is closely coupled to the power system and therefore available for immediate conversion to power. The rotor kinetic energy of the asynchronous machine is decoupled from the system by virtue of its slip and is therefore not easily available to the system.

Synchronous generators are controllable by governors, which monitor system frequency and adjust prime mover input to bring correction to frequency movements. Asynchronous generators are typically used in applications where the energy source is not controllable, e.g. wind turbines. These generators cannot respond to frequency movements representing a system energy imbalance; they are instead a cause of energy imbalance.

Short-term stability

The spinning kinetic energy in the rotors of the synchronous machines is measured in megawatt seconds. Synchronous machines provide stability under power system imbalances because the kinetic energy of their rotors (and prime movers) is locked in synchronism with the grid through the magnetic field between the rotor and the stator. The provision of this energy is essential to short duration stability of the power system.

Longer-term stability

Longer term stability is managed by governor controls. These devices monitor system frequency (recall that the rate of system frequency change is proportional to energy imbalance) and automatically adjust machine power output to compensate for the imbalance and restore stability.

5. For a given level of power imbalance, the rate of rise and fall of system frequency is directly dependent on synchronously connected angular momentum.

The rotational form of Newton’s second law of motion (Force = Mass × Acceleration) describes the power flow between the rotating inertia (rotational kinetic energy) of a synchronous generator and the power system. It applies for the first few seconds after the onset of a disturbance, i.e. before the governor and prime mover have had the opportunity to adjust the input power to the generator.

Pm – Pe = M * dw/dt

Pm is the mechanical power being applied to the rotor by the prime mover. We consider this is a constant for the few seconds that we are considering.

Pe is the electrical power being taken from the machine. This is variable.

M is the angular momentum of the rotor and the directly connected prime mover. We can also consider M a constant, although strictly speaking it isn’t constant because it depends on w. However as w is held within a small window, M does not vary more than a percent or so.

dw/dt is the rate of change of rotor speed, which relates directly to the rate of increasing or reducing frequency.

The machine is in equilibrium when Pm = Pe. This results in dw/dt being 0, which represents the rotor spinning at a constant speed. The frequency is constant.

When electrical load is lost, Pe is less than Pm and the machine accelerates, increasing frequency. Conversely, when electrical load is added, Pe is greater than Pm and the machine slows down, reducing frequency.

Here is the key point: for a given level of power imbalance, the rate of rise or fall of system frequency is directly dependent on the synchronously connected angular momentum, M.

It should now be clear how central a role that synchronously connected angular momentum plays in power system stability. It is the factor that determines how much time generator governors and automatic load shedding systems have to respond to the power flow variation and bring correction.
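The swing equation lends itself to a quick numerical check. The sketch below (with made-up machine figures, not data from any real grid) shows how the sign of Pm − Pe determines whether frequency rises, falls, or holds steady:

```python
# A minimal numerical sketch of the swing equation Pm - Pe = M * dw/dt.
# All figures are invented for illustration.

def frequency_ramp_rate(p_mech_mw, p_elec_mw, angular_momentum_mws_per_hz):
    """Rate of frequency change (Hz/s) implied by a power imbalance."""
    return (p_mech_mw - p_elec_mw) / angular_momentum_mws_per_hz

# Equilibrium: Pm = Pe, so dw/dt = 0 and frequency holds steady.
print(frequency_ramp_rate(500.0, 500.0, 1000.0))   # 0.0

# Load added (Pe > Pm): frequency falls, here at 0.1 Hz per second.
print(frequency_ramp_rate(500.0, 600.0, 1000.0))   # -0.1

# Load lost (Pe < Pm): frequency rises.
print(frequency_ramp_rate(500.0, 400.0, 1000.0))   # 0.1
```

Note how a larger M in the denominator slows the frequency drift for the same imbalance, which is exactly why inertia buys response time.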

 

6. Generation Follows Demand. The machine governor acts in the system as a feedback controller. The governor’s purpose is to sense the shaft rotational speed and the rate of speed increase/decrease, and to adjust machine input via a gate control.

The governor’s job is to continuously monitor the rotational speed w of the shaft and the rate of change of shaft speed dw/dt, and to control the gate(s) to the prime mover. In the example below, a hydro turbine, the control applied is to adjust the flow of water into the turbine, increasing or reducing the mechanical power Pm to compensate for the increase or reduction in electrical load, i.e. to approach equilibrium.

It should be pointed out that while the control systems aim for equilibrium, true equilibrium is never actually achieved. Disturbances are always happening, and they have to be compensated for continuously, every second of every minute of every hour, 24 hours a day, 365 days a year, year after year.

The discussion so far has covered a single synchronous generator, whereas of course the grid has hundreds of generators. In order for each governor controlled generator to respond fairly and proportionately to a network power imbalance, governor control is implemented with what is called a ‘droop characteristic’. Without a droop characteristic, governor controlled generators would fight each other, each trying to control the frequency to its own setting. A droop characteristic provides a controlled increase in generator output in proportion to a small drop in frequency.

In New Zealand the normal operational frequency band is 49.8 to 50.2 Hz. An under frequency event is declared when the frequency drops below 49.25 Hz. It is the generators controlled by governors with a droop characteristic that pick up the load increase and thereby maintain stability. If the event is large and the governor response is insufficient to arrest the falling frequency, under frequency load shedding relays disconnect load.
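A droop characteristic can be sketched in a few lines. The figures below (a 4% droop, a hypothetical 100 MW unit at an 80 MW setpoint) are illustrative assumptions, not actual New Zealand grid parameters:

```python
# Sketch of a governor droop characteristic, assuming 4% droop on a 50 Hz system.
# The unit sizes and setpoints are hypothetical.

F_NOMINAL = 50.0  # Hz

def droop_output_mw(p_setpoint_mw, p_rated_mw, freq_hz, droop=0.04):
    """Output called for under droop control, clamped to the unit's limits.

    With 4% droop, a frequency fall of 4% of nominal (2 Hz) calls for a
    full rated-output increase, so every unit shares a disturbance in
    proportion to its size instead of fighting over the setpoint.
    """
    delta_mw = (F_NOMINAL - freq_hz) / (F_NOMINAL * droop) * p_rated_mw
    return min(max(p_setpoint_mw + delta_mw, 0.0), p_rated_mw)

# At nominal frequency the unit holds its setpoint.
print(droop_output_mw(80.0, 100.0, 50.0))          # 80.0
# A dip to 49.8 Hz asks this 100 MW unit for about 10 MW more.
print(round(droop_output_mw(80.0, 100.0, 49.8)))   # 90
# A deep dip saturates the unit at its rated output.
print(droop_output_mw(80.0, 100.0, 48.0))          # 100.0
```

Because every machine responds in proportion to the same frequency error, the burden of a disturbance is shared automatically, with no communication between units required.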

Here is a record of an under frequency event earlier this month, where a power station tripped.

The generator tripped at point A, which started the frequency drop. The rate of drop dw/dt is determined by the size of the power imbalance divided by the synchronous angular momentum, (Pm – Pe)/M. In only 6 seconds the frequency drop was arrested at point B by other governor controlled generators and under frequency load shedding; in about 6 further seconds additional power was generated, again under the control of governors, and the frequency was restored to normal at point C. The whole event lasted merely 12 seconds.

So why would we care about a mere 12-second dip in frequency of less than 1 Hz? The reason is that without governor action and under frequency load shedding, a mere 12-second dip would instead have been a complete power blackout of the North Island of New Zealand.

Local officials standing outside a substation in Masterton, NZ.

7. An under frequency event on the North Island of New Zealand demonstrates how critical electrical system stability is.

The graph below, which is based on 5-minute load data from NZ’s System Operator, confirms that load shedding occurred. The North Island load can be seen to drop 300 MW, from 3700 MW at 9:50 to 3400 MW at 9:55. The load restoration phase can also be observed in this graph: from 10:15 through 10:40 the shed load is restored in several steps.

The high resolution data that we’ll be looking at more closely was recorded by a meter with power quality and transient disturbance recording capability. It is situated in Masterton, Wairarapa, about 300 km south of the power station that tripped. The meter is triggered to capture frequency excursions below 49.2 Hz. The graph below shows the captured excursion on June 15th. The graph covers a total period of only one minute. It shows the frequency and Masterton substation’s load. I have highlighted and numbered several parts of the frequency curve to help with the discussion.

The first element we’ll look at is the segment from 1 to 2. The grid has just lost 310 MW of generation and the frequency is falling. Neither governors nor load shedding will have responded yet. The frequency falls 0.192 Hz in 0.651 seconds, giving a fall rate df/dt of -0.295 Hz/s. From this df/dt, and knowing the lost generation is 310 MW, we can derive the system angular momentum M as 1,052 MWs/Hz from -310 = M * -0.295.

It is interesting (and chilling) to calculate how long it would take for blackout to occur if no corrective action were taken to restore system frequency and power balance. Most generators cannot operate safely below 47 Hz, and under frequency protection relays disconnect generators to protect them from damage; this sets 47 Hz as the point at which cascade outage and complete grid blackout become likely. A frequency falling at 0.295 Hz/s would take only 10.2 seconds to drop from 50 Hz to 47 Hz. That is not very long, and automated systems are obviously required to arrest the decline. The two automatic systems that have been in place for decades are governor controlled generators and various levels of load shedding.
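The arithmetic can be checked directly. The short script below uses only the figures quoted above (310 MW lost, 0.192 Hz fall in 0.651 s, 47 Hz cascade-trip threshold):

```python
# Reproducing the event arithmetic from the measured figures quoted in the text.

lost_generation_mw = 310.0
freq_fall_hz = 0.192
interval_s = 0.651

fall_rate = -freq_fall_hz / interval_s          # df/dt in Hz/s
print(round(fall_rate, 3))                      # -0.295

# Swing equation with Pm - Pe = -310 MW:  -310 = M * df/dt
M = -lost_generation_mw / fall_rate
print(round(M))                                 # 1051, close to the article's 1,052

# Time to coast from 50 Hz down to the 47 Hz cascade-trip threshold.
t_blackout_s = (50.0 - 47.0) / -fall_rate
print(round(t_blackout_s, 1))                   # 10.2
```

The small difference from the quoted 1,052 MWs/Hz comes only from where the fall rate is rounded.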

The fall arrest between 4 and 5 was due to automatic load shedding. New Zealand has a number of customers contracted to disconnect load at 49.2 Hz. From these figures we can estimate a net shed load of 214 MW (114 MW + 100 MW).

From 7 to 8 the frequency is increasing with a df/dt of 0.111 Hz/s, and the system has a surplus of 117 MW of generation. At point 8 the system reaches 50 Hz again, but then overshoots a little, and governor action works to reduce generation and control the overshoot between 8 and 9.

This analysis shows how system inertia, under frequency load shedding and governor action work together to maintain system stability.

Summary: The key points

  • The system needs to be able to maintain stability second by second, every minute, every hour, every day, year after year. Yet when a major disturbance happens, the time available to respond is only a few seconds.
  • This highlights the essential role of system inertia in providing those precious few seconds. System inertia defines the relationship between power imbalance and frequency fall rate: the less inertia, the faster the collapse and the less time we have to respond. Nearly all system inertia is provided by synchronous generators.
  • Control of the input power to the generators by governor action is essential to control frequency and power balance, bringing correction to maintain stability. This requires a controllable prime mover; typically only hydro and thermal stations have one.
  • When the fall rate is too fast for governor response, automatic load shedding can provide a lump of very helpful correction, which the governors later tidy up by fine-tuning the response.

Big Wind Blacklisted

What is wrong with wind farms? Let us count the ways.

Dear Congress, stop subsidizing wind like it’s 1999 and let the tax credit expire is written by Richard McCarty at Daily Torch.  Excerpts in italics with my bolds.

Congress created the production tax credit for wind energy in 1992. Under this credit, wind turbine owners receive a tax credit for each kilowatt hour of electricity their turbines generate, whether the electricity is needed or not. The production tax credit was supposed to have expired in 1999; instead, Congress has repeatedly extended it. After nearly three decades of propping up the wind industry, it is past time to let the tax credit expire in 2020.

All Congress needs to do is nothing.

Addressing the issue of wind production tax credits, Americans for Limited Government President Rick Manning stated, “Wind energy development is no longer a nascent industry, having grown from 0.7 percent of the grid in 2007 to 6.6 percent in 2018 at 275 billion kWh. The rationale behind the wind production tax credit has always been that it is necessary to attract investors.”

Manning added, “wind energy development has matured to the point where government subsidization of billionaires like Warren Buffett cannot be justified, neither from an energy production standpoint nor a fiscal one. Americans for Limited Government strongly urges Congress to end the Wind Production Tax Credit. The best part is, they only need to do nothing as it expires at the end of the year.”

There are plenty of reasons for ending the tax credit. Here are some of them:

  • Wind energy is unreliable. Wind turbines require winds of six to nine miles per hour to produce electricity; when wind speeds reach approximately 55 miles per hour, turbines shut down to prevent damage to the equipment. Wind turbines also shut down in extremely cold weather.
  • Due to this unreliability, relatively large amounts of backup power capacity must be kept available.
  • Wind energy often requires the construction of costly, new high-voltage transmission lines. This is because some of the best places to generate wind energy are in remote locations far from population centers or offshore.
  • Generating electricity from wind requires much more land than does coal, natural gas, nuclear, or even solar power. According to a 2017 study, generating one megawatt of electricity from coal, natural gas, or nuclear power requires about 12 acres; producing one megawatt of electricity from solar energy requires 43.5 acres; and harnessing wind energy to generate one megawatt of electricity requires 70.6 acres.
  • Wind turbines have a much shorter life span than other energy sources. According to the Department of Energy’s National Renewable Energy Laboratory, the useful life of a wind turbine is 20 years while coal, natural gas, nuclear, and hydroelectric power plants can remain in service for more than 50 years.
  • Wind power’s inefficiencies lead to higher rates for customers.
  • Higher electricity rates can have a chilling effect on the local economy. Increasing electricity rates for businesses makes them less competitive and can result in job losses or reduced investments in businesses.
  • Increasing rates on poor consumers can have an even more negative impact sometimes forcing them to go without heat in the winter or air conditioning in the summer.
  • Wind turbines are a threat to aviators. Wind turbines are a particular concern for crop dusters, who must fly close to the ground to spray crops. Earlier this summer, a crop dusting plane clipped a wind turbine tower and crashed.
  • Wind turbines are deadly for birds and bats, which help control the pest population. Even if bats are not struck by the rotors, some evidence suggests that they may be injured or killed by the sudden drop in air pressure around wind turbines.

Large wind turbines endanger lives, the economy, and the environment. Even after decades of heavy subsidies, the wind industry has failed to solve these problems. For these and other reasons, Congress should finally allow the wind production tax credit to expire.

Richard McCarty is the Director of Research at Americans for Limited Government Foundation.

Update August 16, 2019

nzrobin commented with more technical detail about managing grid reliability. A new post provides a synopsis of his series on the subject: On Stable Electric Power: What You Need to Know

LA Times Misreports Mexican Energy Realism

 

Emily Green writes at LA Times Alternative energy efforts in Mexico slow as Lopez Obrador prioritizes oil. Excerpts in italics with my bolds.

The title of the article is not wrong, as we shall see below. But as usual, climatists leave out the reality so obvious in the pie chart above. Seeing which energy sources are driving his nation’s prosperity provides the missing context for understanding the priorities of Mexican President Andres Lopez Obrador.

The alarmist/activist hand-wringing is on full display:

With its windy valleys and wide swaths of desert, Mexico has some of the best natural terrain to produce wind and solar energy. And, in recent years, the country has attracted alternative energy investors from across the globe.

An aerial view of the Villanueva photovoltaic power plant in the municipality of Viesca, Coahuila state, Mexico. The plant covers an area the size of 40 football fields, making it the largest solar plant in the Americas. (Alfredo Estrella / AFP/Getty Images)

But the market has taken a step back under Mexico’s new president, who has made clear his priority is returning Mexico’s oil company to its former dominance.

Since taking office Dec. 1, President Andres Manuel Lopez Obrador has canceled a highly anticipated electricity auction, as well as two major transmission-line projects that would have transported power generated by renewable energy plants around the country. He has also called for more investment in coal, and stood by as his director of Mexico’s electric utility dismissed wind and solar energy as unreliable and expensive.

It’s too soon to forecast the long-term consequences, but business leaders and energy consultants are seeing a trend: a chilling in the country’s up-and-coming renewable energy market.

Further on we get the usual distortions and misdirection: renewables’ capacities and low prices are cited while ignoring their low actual production and the intermittency mismatch with actual needs.

Energy and oil remain sensitive topics in Mexico, where people still recall the glory days of state-owned oil company Pemex, when it was the country’s economic lifeblood. There’s even a day commemorating Mexico’s 1938 nationalization of its oil and mineral wealth.

In recent years, however, Mexico’s energy market has undergone a transformation and reached out to investors. In 2014, Lopez Obrador’s predecessor, Enrique Peña Nieto, fully opened up the country’s oil, gas and electricity sector to private investment for the first time in 70 years.

The effects were immediate. In the oil sector, companies such as ExxonMobil and Chevron clamored to explore large deposits that had once been the sole purview of Pemex.

On the electricity side, the reform led to billions of dollars in private investment in Mexico’s power sector, both in renewable energy and traditional sources such as natural gas.

Through a series of auctions, Mexico’s state-owned utility awarded long-term power contracts to private developers. Although the auctions were open to all energy technologies, wind and solar companies won the bulk of the contracts because they offered among the lowest prices in the world. Solar developers won contracts to generate electricity in Mexico at around $20 per megawatt-hour, according to the government. Industry sources said that is about half the going price for coal and gas.

The country’s wind generation capacity jumped from 2,360 megawatts at the end of 2014 to 5,382 megawatts this April, according to the Mexican wind energy association. The numbers were even more stark in solar, which soared from 166 megawatts of capacity in 2014 to 2,900 megawatts in April, according to the Mexican solar energy association.

Virtue Signalling is an Expensive Way to Run an Economy

The electricity auctions were also seen as the main vehicle for Mexico to reach its clean energy commitments made as part of the Paris climate accord to produce 35% of its electricity from clean energy sources by 2024, and 50% by 2050. Under Mexico’s definition, clean energy sources include solar and wind generation, as well as sources that some critics say aren’t environmentally friendly — such as hydroelectric dams, nuclear energy and efficient natural gas plants. Currently, 24% of Mexico’s electricity comes from clean energy sources.

Summary

Note that for true believers, no energy is “clean” except wind and solar. And Mexico is another example of how renewables cannibalize your electrical grid while claiming to be cheaper than fossil-fuel sources and to save the planet from the plant-food gas CO2. Meanwhile those two “zero carbon” sources provide only 2% of the energy consumed, despite the billions invested.

I get the impression that AMLO is much smarter than AOC.
See Also

Exaggerating Green Energy Supply

Cutting Through the Fog of Renewable Power Costs

Superhuman Renewables Targets


The End of Wind and Solar Parasites

Norman Rogers writes at American Thinker What It Will Take for the Wind and Solar Industries to Collapse. Excerpts in italics with my bolds.

The solar electricity industry is dependent on federal government subsidies for building new capacity. The subsidy consists of a 30% tax credit and the use of a tax scheme called tax equity finance. These subsidies are delivered during the first five years.

For wind, there is subsidy during the first five to ten years resulting from tax equity finance. There is also a production subsidy that lasts for the first ten years.

The other subsidy for wind and solar, not often characterized as a subsidy, is state renewable portfolio laws, or quotas, that require that an increasing portion of a state’s electricity come from renewable sources. Those state mandates result in wind and solar electricity being sold via profitable 25-year power purchase contracts. The buyer is generally a utility with good credit. The utilities are forced to offer these terms in order to cause sufficient supply to emerge to satisfy the renewable energy quotas.

The rate of return from a wind or solar investment can be low and credit terms favorable because the investors see the 25-year contract by a creditworthy utility as a guarantee of a low risk of default. If the risk were to be perceived as higher, then a higher rate of return and a higher interest rate on loans would be demanded. That in turn would increase the price of the electricity generated.

The bankruptcy of PG&E, the largest California utility, has created some cracks in the façade. A bankruptcy judge has ruled that cancellation of up to $40 billion in long-term energy contracts is a possibility. These contracts are not essential to preserving the supply of electricity, because they are mostly for wind or solar supply that varies with the weather and can’t be counted on. As a consequence, the infrastructure needed to supply electricity without the wind or solar energy has to exist, and does.

Probably the judge will be overruled for political reasons, or the state will step in with a bailout. Utilities have to keep operating, no matter what. Ditching wind and solar contracts would make California politicians look foolish because they have long touted wind and solar as the future of energy.

PG&E is in bankruptcy because California applies strict liability for damages from forest fires started by electric lines, no matter who is really at fault. Almost certainly the government is at fault for not anticipating the danger of massive fires and for not enforcing strict fire prevention and protection. Massive fire damage should be covered by insurance, not by the utility, even if the fire was started by a power line. The fire in question could just as well have been started by lightning or a homeless person. PG&E previously filed for bankruptcy in 2001, also a consequence of abuse of the utility by the state government.

By far the most important subsidy is the renewable portfolio laws. Even if the federal subsidies are reduced, the quota for renewable energy will force price increases to keep the renewable energy industry in business, because it has to stay in business to supply energy to meet the quota. Other plausible methods of meeting the quota have been outlawed by the industry’s friends in the state governments. Nuclear and hydro, neither of which generates CO2 emissions, are not allowed. Hydro is not strictly prohibited — only hydro that involves dams and diversions. That is very close to all hydro. Another reason hydro is banned is that environmental groups don’t like dams.

For technical reasons, an electrical grid cannot run on wind or solar much more than 50% of the time. The fleet of backup plants must be online to provide adjustable output to compensate for erratic variations in wind or solar. Output has to be ramped up to meet early-evening peaks. Wind suffers from a cube power law: because power varies with the cube of wind speed, a 10% drop in wind speed cuts output by nearly 30%. Solar suffers from too much generation in the middle of the day and not enough to meet early-evening peaks in consumption.
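The cube law is easy to verify. Below rated wind speed, turbine output scales with the cube of wind speed, so a 10% speed drop removes about 27% of output, which the author rounds to 30%:

```python
# Quick check of the cube power law for wind turbine output below rated speed.

def output_fraction(speed_fraction):
    """Fraction of output remaining when wind speed scales by speed_fraction."""
    return speed_fraction ** 3

lost = 1.0 - output_fraction(0.9)   # effect of a 10% wind-speed drop
print(round(lost, 3))               # 0.271, i.e. roughly the 30% cited
```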

When a “too much generation” situation happens, the wind or solar has to be curtailed. That means the operators are told to stop delivering electricity. In many cases, they are not paid for the electricity they could have delivered. Some contracts require that they be paid according to a model that estimates how much they could have generated given the recorded weather conditions. The more wind and solar, the more curtailments, as the amount of erratic electricity approaches the allowable limits. As quotas increase, curtailment is a growing threat to the financial health of wind and solar.

There is a movement to include batteries with solar installations to move excessive middle-of-the-day generation to the early evening. This is a palliative to extend the time before solar runs into the curtailment wall. The batteries are extremely expensive and wear out every five years.

Neither wind nor solar is competitive without subsidies. If the subsidies and quotas were taken away, no wind or solar operation outside very special situations would be built. Further, the existing installations would continue only as long as their contracts are honored and they are cash flow–positive. In order to be competitive, without subsidies, wind or solar would have to supply electricity for less than $20 per megawatt-hour, the marginal cost of generating the electricity with gas or coal. Only the marginal cost counts, because the fossil fuel plants have to be there whether or not there is wind or solar. Without the subsidies, quotas, and 25-year contracts, wind or solar would have to get about $100 per megawatt-hour for its electricity. That gap, between $100 and $20, is a wide chasm only bridged by subsidies and mandates.

The cost of using wind and solar for reducing CO2 emissions is very high. The most authoritative and sincere promoters of global warming loudly advocate using nuclear, a source that is not erratic, does not emit CO2 or pollution, and uses the cheapest fuel. One can buy carbon offsets for 10 or 20 times less than the cost of reducing CO2 emissions with wind or solar. A carbon offset is a scheme where the buyer pays the seller to reduce world emissions of CO2. This is done in a variety of ways by the sellers.

The special situations where wind and solar can be competitive are remote locations using imported oil to generate electricity. In those situations, the marginal cost of the electricity may be $200 per megawatt-hour or more. Newfoundland comes to mind — for wind, not solar.

Maintenance costs for solar are low. For wind, maintenance costs are high, and major components, such as propeller blades and gearboxes, may fail, especially as the turbines age. These heavy and awkward objects are located hundreds of feet above the ground. There is a danger that wind farms will fail once the inflation-protected subsidy of $24 per megawatt-hour runs out after ten years. At that point, turbines that need expensive repairs may be abandoned. Wind turbine graveyards from the first wind fad in the 1970s can be seen near Palm Springs, California. Wind farms can’t receive the production subsidy unless they can sell the electricity. That has resulted in paying customers to “buy” the electricity.

Tehachapi’s dead turbines.

A significant financial risk is that the global warming narrative may collapse. If belief in the reality of the global warming threat collapses, then the major intellectual support for renewable energy will collapse. It is ironic that the promoters of global warming are campaigning to require companies to take into account the threat of global warming in their financial projections. If the companies do this in an honest manner, they also have to take into account the possibility that the threat will evaporate. My own best guess, after considerable technical study, is that it is near a sure thing that the threat of global warming is imaginary and largely invented by the people who benefit. Adding CO2 to the atmosphere has well understood positive effects for the growth of crops and the greening of deserts.

The conservative investors who make long-term investments in wind or solar may be underestimating the risks involved. For example, an article in Chief Investment Officer magazine stated that CalPERS, the giant California public employees retirement fund, is planning to expand investments in renewable energy, characterized as “stable cash flowing assets.” That article was written before the bankruptcy of PG&E. The article also stated that competition among institutional investors for top yielding investments in the alternative energy space is fierce.

Wind and solar are not competitive and never will be. They have been pumped up into supposedly solid investments by means of ill-advised subsidies and mandates. At some point, governments will wake up to the waste and foolishness involved. At that point, the value of these investments will collapse. It won’t be the first time that investment experts made bad investments because they did not really understand what was going on.

Footnote:  There is also a report from GWPF on environmental degradation from industrial scale wind and solar:

Superhuman Renewables Targets

Faster than a speeding bullet! More powerful than a locomotive! Able to leap tall buildings in a single bound! It’s Superman.

New York is not the only climate cuckoo’s nest in the United States. Here are four more states promising efforts to install wind and solar power at rates that would exhaust Superman. EIA reports: Four states updated their renewable portfolio standards in the first half of 2019. Excerpts in italics with my bolds.

As of the end of 2018, 29 states and the District of Columbia had renewable portfolio standards (RPS), or policies that require electricity suppliers to source a certain portion of their electricity from designated renewable resources or eligible technologies. Four states—New Mexico, Washington, Nevada, and Maryland—and the District of Columbia have updated their RPS since the start of 2019.

States with legally binding RPS collectively accounted for 63% of electricity retail sales in the United States in 2018. In addition to the 29 states with binding RPS policies, 8 states have nonbinding renewable portfolio goals.

New Mexico increased its overall RPS target in March 2019 to 100% of electricity sales from carbon-free generation by 2045, up from the previous target of 20% renewable generation by 2020. The new policy applies only to investor-owned utilities; cooperative electric utilities have until 2050 to reach the 100% carbon-free generation goal. The target has interim goals of 50% renewable generation by 2030 and 80% renewable generation by 2040.

In April 2019, the Nevada legislature increased its RPS to 50% of sales from renewable generation by 2030, including a goal of 100% of electricity sales from clean energy by 2050. Later that month, Washington increased its RPS target to 100% of sales from carbon-neutral generation by 2045, an increase from the previous target of 15% of sales from renewable generation by 2020. In addition, the policy mandates a phaseout of coal-fired electricity generation in Washington by 2025. Nevada and Washington became the fourth and fifth states, respectively, to pass legislation for 100% clean electricity, following Hawaii, California, and New Mexico.

In May 2019, Maryland increased its overall RPS target to 50% of electricity sales from renewable generation by 2030, replacing the earlier target of 22.5% by 2024. In addition, the legislation mandates further study of the effects and the possibility of Maryland reaching 100% generation from renewables by 2040.

Cutting Through the Fog of Renewable Power Costs

 

Almost every day there are media reports saying solar and wind power plants are now cheaper than coal. Recently UCS expressed outrage that some coal plants remain viable because industrial customers are able to commit to purchasing the reliable coal-fired supply.

Joe Daniel writes at Forbes The Billion-Dollar Coal Bailout Nobody Is Talking About: Self-Committing In Power Markets. A typical companion piece at Forbes claims The Coal Cost Crossover: 74% Of US Coal Plants Now More Expensive Than New Renewables, 86% By 2025.

Having acquired some knowledge of this issue, I wondered how these cost comparisons dealt with the intermittency problem of wind and solar, and the requirement for backup dispatchable power to balance the grid.

EIA has developed a dual assessment of power plants using both the Levelized Cost and the Levelized Avoided Cost of Electricity. The first metric estimates the cost of building and operating power plants; the second estimates the value of the electricity to the grid. Source: EIA uses two simplified metrics to show future power plants’ relative economics. Excerpts in italics with my bolds.

EIA calculates two measures that, when used together, largely explain the economic competitiveness of electricity generating technologies.

The levelized cost of electricity (LCOE) represents the installed capital costs and ongoing operating costs of a power plant, converted to a level stream of payments over the plant’s assumed financial lifetime. Installed capital costs include construction costs, financing costs, tax credits, and other plant-related subsidies or taxes. Ongoing costs include the cost of the generating fuel (for power plants that consume fuel), expected maintenance costs, and other related taxes or subsidies based on the operation of the plant.

The levelized avoided cost of electricity (LACE) represents that power plant’s value to the grid. A generator’s avoided cost reflects the costs that would be incurred to provide the electricity displaced by a new generation project as an estimate of the revenue available to the plant. As with LCOE, these revenues are converted to a level stream of payments over the plant’s assumed financial lifetime.

Power plants are considered economically attractive when their projected LACE (value) exceeds their projected LCOE (cost). Both LCOE and LACE are levelized over the expected electricity generation during the lifetime of the plant, resulting in values presented in dollars per megawatthour. These values range across geography, as resource availability, fuel costs, and other factors often differ by market. LCOE and LACE values also change over time as technology improves, tax credits and other taxes or subsidies expire, and fuel costs change.

The relative difference between LCOE and LACE is a better indicator of economic competitiveness than either metric alone. A comparison of only LCOE across technology types fails to capture the differences in value provided by different types of generators to the grid.
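EIA’s decision rule can be sketched with a few lines of code. The $/MWh figures below are invented purely for illustration; they are not EIA estimates:

```python
# Sketch of EIA's test: a plant is economically attractive when its value
# to the grid (LACE) exceeds its cost (LCOE). All figures are hypothetical.

plants = {
    "solar_pv":       {"lcoe": 40.0, "lace": 45.0},
    "onshore_wind":   {"lcoe": 42.0, "lace": 38.0},
    "combined_cycle": {"lcoe": 45.0, "lace": 47.0},
}

for name, p in plants.items():
    net = p["lace"] - p["lcoe"]
    verdict = "attractive" if net > 0 else "not attractive"
    print(f"{name}: LACE - LCOE = {net:+.1f} $/MWh ({verdict})")
```

The point of the rule is visible in the hypothetical numbers: a plant can have a lower LCOE than a rival and still be the worse investment if its output arrives when prices, and hence LACE, are low.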

Some power plants can be dispatched, while some—such as those powered by the wind or solar—operate only when resources are available. Some power plants provide electricity during parts of the day or year when power prices are higher, while others may produce electricity during times of relatively low power prices.

Solar PV’s economic competitiveness is relatively high through 2022 as federal tax credits reduce PV’s LCOE. As those tax credits are phased out, technology costs are expected to have declined to the point where solar PV remains economically competitive in most parts of the country. Because solar PV provides electricity during the middle of the day, when electricity prices are relatively high, solar PV’s value to the grid (i.e., LACE) tends to be higher than other technologies.

Onshore wind also sees higher economic competitiveness in the earlier part of the projection, prior to the expiration of federal tax credits in 2020. Over time, wind remains competitive in the Plains states, where wind resources are highest. Wind’s LACE is relatively low in most areas, as wind output tends to be highest at times when power prices are low.

For a comparison of coal with renewables, there is the study Benchmark Levelized Cost of Electricity Estimates from the National Academies Press. Excerpts in italics with my bolds.

The EIA Annual Energy Outlook supporting information identifies the methodology and assumptions that affect the reported estimates of LCOE for utility-scale generation technologies. The reported estimates are for the years 2022 and 2040. The focus here is on the 2022 estimates as the benchmark for the “current” costs. The assumptions include choices regarding the effects of learning, capital costs, transmission investment, operating characteristics, and externalities. These choices are either important and appropriate for the benchmark comparison (e.g., learning rates), important but requiring some adjustment (e.g., capital costs), or supplemental to the EIA assumptions (e.g., externality costs).

Note:  For the externality of CO2 emissions, the chart below shows a $15/ton “Social Cost of Carbon.”

EIA separates electricity generation technologies into categories of dispatchable and nondispatchable (EIA, 2015f, p. 6). The former include conventional fossil fuel plants that have a fairly consistent available capacity and can follow dispatch instructions to increase or decrease production. The latter consist of intermittent plants such as wind and solar, which depend on the availability of the wind and sunlight and typically cannot follow dispatch instructions easily or at all. It is generally recognized that the different operating profiles create different values for the technologies (Borenstein, 2012; Joskow, 2011). Empirical estimates for existing technologies show that the value of wind, which blows more at night when prices are low, can be 12 percent below the unweighted average price of electricity; and the value of solar, with the sun tending to shine when prices are higher, can be 16 percent greater than the unweighted average (Schmalensee, 2013).
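
The value-weighting effect behind those Schmalensee figures is easy to reproduce. The hourly prices and output profiles below are invented toy numbers (the exact percentages they produce are arbitrary), chosen only so that wind peaks overnight when prices are low and solar peaks at midday when prices are high; the capture ratio is the generation-weighted price divided by the unweighted average price.

```python
# Toy 24-hour price curve ($/MWh): cheap overnight, dear in daytime.
prices = [20] * 6 + [35] * 4 + [50] * 6 + [35] * 4 + [20] * 4
# Toy output profiles (MWh per hour): wind blows mostly at night,
# solar produces only around midday.
wind  = [90] * 6 + [60] * 4 + [30] * 6 + [60] * 4 + [90] * 4
solar = [0] * 6 + [40] * 4 + [100] * 6 + [40] * 4 + [0] * 4

def capture_ratio(output, prices):
    """Generation-weighted average price / unweighted average price."""
    weighted = sum(o * p for o, p in zip(output, prices)) / sum(output)
    return weighted / (sum(prices) / len(prices))

print(f"wind:  {capture_ratio(wind, prices):.2f}")   # below 1.0
print(f"solar: {capture_ratio(solar, prices):.2f}")  # above 1.0
```

A ratio below 1 means the plant’s output is worth less than the average megawatt-hour; above 1, more, which is exactly the wind-versus-solar asymmetry the excerpt describes.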

One procedure utilized for putting nondispatchable technologies on an equivalent basis is to pair them with appropriately scaled dispatchable peaking technologies to produce an output that is like that of a conventional fossil fuel plant (Greenstone and Looney, 2012). Another approach, used by Schmalensee (2013), is to calculate the value of nondispatchable technologies based on spot prices. EIA provides a similar estimate based on its projected simulations, which is known as the levelized avoided cost estimate (LACE).

For purposes of equivalent comparison of the LCOE, the approach here combines these adjustments to provide an estimate of the net difference between the LACEs for the technology and for a conventional combined-cycle natural gas plant. The net differences are added to (e.g., for wind) or subtracted from (e.g., for solar) the other components of the LCOE.
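
That adjustment can be illustrated with a few invented dollar figures (none of them from the report): the gap between a technology’s LACE and the combined-cycle gas LACE is folded into its LCOE, penalizing technologies whose output arrives when power is worth less and crediting those whose output arrives when it is worth more.

```python
def equivalent_lcoe(lcoe_tech, lace_tech, lace_gas):
    """Adjust a technology's LCOE by its LACE shortfall (or surplus)
    relative to a conventional combined-cycle natural gas plant."""
    return lcoe_tech + (lace_gas - lace_tech)

# Invented figures in $/MWh. Wind's LACE trails gas (night-heavy
# output), so its equivalent LCOE rises; solar's LACE exceeds gas
# (midday output), so its equivalent LCOE falls.
wind_eq = equivalent_lcoe(lcoe_tech=55, lace_tech=30, lace_gas=35)
solar_eq = equivalent_lcoe(lcoe_tech=60, lace_tech=40, lace_gas=35)
print(wind_eq)   # 60: a $5 LACE shortfall is added
print(solar_eq)  # 55: a $5 LACE surplus is subtracted
```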

With the above assumptions and adjustments to obtain an approximation of equivalent LCOE, the results appear in Figure B-1 and Table B-1.


FIGURE B-1 Levelized cost of electricity for plants entering service in 2022 (2015 $/MWh).
SOURCE: EIA, 2015f, 2016g. Because Annual Energy Outlook 2016 does not assess conventional coal and IGCC technologies, their values (in 2013 dollars) were sourced from Annual Energy Outlook 2015 and then converted to 2015 dollars using the Bureau of Economic Analysis’ gross domestic product (GDP) implicit price deflator.

It is clear from Figure B-1 that new natural gas plants are the dominant technology. And without accounting for the costs of externalities, new IGCC coal plants are more competitive than even the best wind and solar. Onshore wind comes closest to being competitive. The relative cost estimates shown here are similar to those in Greenstone and Looney (2012). The primary renewable technologies are not cost-competitive, and the differences are significant. This is for entry year 2022. Looking ahead to 2040, with some additional cost reductions for renewables and more substantial increased fuel costs for natural gas, the situation changes for wind but not for solar.

CONCLUSION

Equivalent estimates of the LCOE are available from the supporting analyses of AEO2016. The data without the effect of selective policies indicate that existing technologies for clean energy are not competitive with new natural gas. And without accounting for the costs of externalities, the principal renewable technologies of wind and solar are not cost-competitive with new coal plants.

FIGURE B-2 Electric power generation by fuel (billions of kilowatt hours [kWh]) assuming No Clean Power Plan, 2000-2040. SOURCE: EIA, 2016f, Figure IF3-6.

Footnote: The above analyses do not adequately consider the effect of cheap subsidized solar and wind power driving dispatchable power plants into bankruptcy. For more on electricity economics see Climateers Tilting at Windmills.

Cyber Solutions Can’t Fix the Climate

This post is dedicated to the Silicon Valley nouveau riche and their Cyber-Space Cadets now in the streets demanding that adults fix the climate, and fix it now!  Their thinking is fatally flawed by the simplistic transfer of tactics from the cyber world to the real, physical world.

Mark P. Mills writes at City Journal Want an Energy Revolution?  It won’t come from renewables—which can never supply all the power we need—but from foundational scientific discoveries. Excerpts in italics with my bolds.

Throughout history, some 60 percent to 90 percent of every nation’s economy has been consumed by food and fuel costs. Hydrocarbons changed the way that humans organize their productive capacity. The coal age, followed by the oil age, and now by the ascendant age of natural gas, has (at least for developed nations) driven the share of GDP devoted to acquiring food and fuel down to around 10 percent. That transformation constitutes one of the great pivots for civilization.

Many analysts claim that yet another such consequential energy revolution is upon us: “clean energy,” in the form of wind turbines, solar arrays, and batteries, they say, is about to become incredibly cheap, making it possible to create a “new energy economy.” Polls show that nearly 80 percent of voters believe that America is “capable of creating a new electricity system.”

We can thank Silicon Valley for popularizing “exponential change” and “disruptive innovations.” The computing and communications revolutions that have transformed many industries have also shaped both expectations and rhetoric about how other technologies evolve. We hear claims, as one Stanford professor put it, that clean tech will follow digital technology in a “10x exponential process which will wipe fossil fuels off the market in about a decade.” Or, as the International Monetary Fund recently summarized, “smartphone substitution seemed no more imminent in the early 2000s than large-scale energy substitution seems today.” The mavens at Singularity University tell us that with clean tech, we’re “on the verge of a new, radically different point in history.” Solar, wind, and batteries are “on a path to disrupt” the old order dominated by fossil fuels.

Never mind that wind and solar—the focus of all “new energy economy” aspirations, including its latest incarnation in the Green New Deal—supply just 2 percent of global energy, despite hundreds of billions of dollars in subsidies. After all, it wasn’t long ago that only 2 percent of the world owned a pocket-sized computer. “New energy economy” visionaries believe that a digital-like energy disruption is not just possible, but imminent. One professor predicts that we will see an “Apple of clean energy.”

A similar transformation in how energy is produced or stored isn’t just unlikely: it’s impossible. Drawing an analogy between information production and energy production is a fundamental category error. They entail different laws of physics. Logic engines don’t produce physical action or energy; they manipulate the idea of the numbers one and zero. Silicon logic is rooted in simply knowing and storing the position of a binary switch—on or off.

But the energy needed to move a ton of people, heat a ton of steel or silicon, or grow a ton of food is determined by properties of nature, whose boundaries are set by laws of gravity, inertia, friction, and thermodynamics—not clever software or marketing. Indeed, the differences between the physical and virtual are best illustrated by the fact that, using mathematical magic, one can do things like “compress” information to reduce the energy needed to transport that information. But in the world of humans and objects with mass, comparable “compression” options exist only in Star Trek.

Spending $1 million on wind or solar hardware in order to capture nature’s diffuse wind and sunlight will yield about 50 million kilowatt-hours of electricity over a 30-year period. Meantime, the same money spent on a shale well yields enough natural gas over 30 years to produce 300 million kilowatt-hours. That difference is anchored in the far higher, physics-based energy density of hydrocarbons. Subsidies can’t change that fact.
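
Taking Mills’s 30-year output figures at face value, his $1 million comparison reduces to simple arithmetic:

```python
spend = 1_000_000            # dollars, either technology (Mills's figure)
wind_solar_kwh = 50_000_000  # claimed 30-year output of $1M of wind/solar
shale_gas_kwh = 300_000_000  # claimed 30-year output of a $1M shale well

ratio = shale_gas_kwh / wind_solar_kwh
print(ratio)                          # 6.0x the energy per dollar
print(spend / wind_solar_kwh * 100)   # 2.0 cents of capital per kWh
print(spend / shale_gas_kwh * 100)    # ~0.33 cents of capital per kWh
```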

And then batteries are needed, and widely promoted, as the way to convert wind or solar into useable on-demand power. While the physical chemistry of batteries is indeed nearly magical in storing tiny quantities of energy, it doesn’t scale up efficiently. When it comes to storing energy at country scales, or for cargo ships, cars and aircraft, engineers start with a simple fact: the maximum potential energy contained in hydrocarbon molecules is about 1,500 percent greater, pound for pound, than the maximum theoretical lithium chemistries. That’s why the cost to store a unit of energy in a battery is 200 times more than storing the same amount of energy as natural gas. And why, today, it would take $60 million worth of Tesla batteries—weighing five times as much as the entire aircraft—to hold the same energy as is held in a transatlantic plane’s onboard fuel tanks.
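
The “1,500 percent” figure is a density ratio that can be sanity-checked with round numbers. The values below are my own ballpark assumptions (liquid hydrocarbons at roughly 12–13 kWh per kilogram; a theoretical-limit lithium-ion chemistry under 1 kWh per kilogram), not figures from Mills’s sources.

```python
hydrocarbon_kwh_per_kg = 12.7  # ballpark energy density of diesel/jet fuel
lithium_kwh_per_kg = 0.8       # rough theoretical lithium-ion limit

ratio = hydrocarbon_kwh_per_kg / lithium_kwh_per_kg
# Roughly 16x, i.e. on the order of 1,500 percent greater, pound for pound.
print(f"{ratio:.0f}x, about {(ratio - 1) * 100:.0f}% greater")
```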

For a practical example of the physics-anchored gap between aspiration and reality, consider Florida Power & Light’s (FPL) recently announced plan to replace an old gas-fired power station with the world’s biggest battery project—promised to be four times bigger than the current number one, a system Tesla installed, to much fanfare, last year in South Australia. The monster FPL battery “farm” will be able to store just two minutes of Florida’s electricity needs. That’s not going to change the world, or even Florida.

Moreover, it takes the energy equivalent of about 100 barrels of oil to manufacture a battery that can store the energy equal to one oil barrel. That means that batteries fabricated in China (most already are) by its predominantly coal-powered grid result in more carbon-dioxide emissions than those batteries, coupled with wind/solar, can eliminate. It’s true that wind turbines, solar cells, and batteries will get better, but so, too, will drilling rigs and combustion engines. The idea that “old” hydrocarbon technologies are about to be displaced wholesale by a digital-like, clean-tech energy revolution is a fantasy.

If we want a disruption to the energy status quo, we will need new, foundational discoveries in the sciences. As Bill Gates has put it, the challenge calls for scientific “miracles.” Any hoped-for technological breakthroughs won’t emerge from subsidizing yesterday’s technologies, including wind and solar. The Internet didn’t emerge from subsidizing the dial-up phone, or the transistor from subsidizing vacuum tubes, or the automobile from subsidizing railroads. If policymakers were serious about the pursuit of the next energy revolution, they’d be talking a lot more about reinvigorating support for basic science.

It bears noting that over the past decade, U.S. production of oil and natural gas has increased by 2,000 percent more than the combined growth of (subsidized) wind and solar. Shale technology has utterly transformed the global energy landscape. After a half-century of hand-wringing about import dependencies, America is now a major exporter. Now that’s a revolution.

See also Energy Changes Society: Transition Stories