Data Show Wind Power Messed Up Texas

Yes, with hindsight you can blame Texas for not weatherproofing its fossil fuel supplies for winter, as more northern regions do.  But it was over-reliance on wind power that caused the problem and made it intractable.  John Peterson explains in his TalkMarkets article How Wind Power Caused The Great Texas Blackout Of 2021.  Excerpts in italics with my bolds.

  • The State of Texas is suffering from a catastrophic power grid failure that’s left 4.3 million homes without electricity, including 1.3 million homes in Houston, the country’s fourth-largest city.
  • While talking heads, politicians, and the press are blaming fossil fuels and claiming that more renewables are the solution, hard data from the Energy Information Administration paints a very different picture.
  • The generation failures that led to The Great Texas Blackout of 2021 began at 6 pm on Sunday. Wind power fell from 36% of nameplate capacity to 22% before midnight and plummeted to 3% of nameplate capacity by 8 pm on Monday.
  • While power producers quickly ramped production to almost 90% of dedicated natural gas capacity, a combination of factors, including shutdowns for scheduled maintenance and a statewide increase in natural gas demand, began to overload safety systems and set off a cascade of shutdowns.
  • Similar overload-induced shutdowns followed in coal and nuclear plants, but the domino effect began with ERCOT’s reckless reliance on unreliable wind power.

The ERCOT grid has 85,281 MW of operational generating capacity if no plants are offline for scheduled maintenance. Under the “Winter Fuel Types” tab of its Capacity, Demand and Reserves Report dated December 16, 2020, ERCOT described its operational generating capacity by fuel source as follows:

Since power producers frequently take gas-fired plants offline for scheduled maintenance in February and March, when power demand is typically low, ERCOT’s systemwide generating capacity was less than 85 GW and its total power load was 59.6 GW at 9:00 am on Valentine’s Day. By 8:00 pm, power demand had surged to 68 GW (a 14% increase). Then hell froze over. Over the next 24 hours, statewide power production collapsed to 43.5 GW (a 36% decline) and millions of households were plunged into darkness in freezing weather conditions.
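
To make the parenthetical percentages explicit, here is the arithmetic as a short Python sketch, using the GW figures quoted above:

```python
# Demand surge and production collapse, from the figures above (GW).
load_9am = 59.6     # total load, 9:00 am Feb. 14
peak_8pm = 68.0     # demand peak, 8:00 pm Feb. 14
low_point = 43.5    # statewide production low, Feb. 15

print(f"demand surge: {(peak_8pm - load_9am) / load_9am:+.0%}")          # +14%
print(f"production collapse: {(low_point - peak_8pm) / peak_8pm:+.0%}")  # -36%
```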

I went to the US Energy Information Administration’s website and searched for hourly data on electricity production by fuel source in the State of Texas. The first treasure I found was this line graph that shows electricity generation by fuel source from 12:01 am on February 10th through 11:59 pm on February 16th.

The second and more important treasure was a downloadable spreadsheet file containing the hourly data used to build the graph. An analysis of the hourly data shows (a code sketch of the computation follows the list):

  • Wind power collapsing from 9 GW to 5.45 GW between 6 pm and 11:59 pm on the 14th with natural gas ramping from 41 GW to 43 GW during the same period.
  • Wind power falling from 5.45 GW to 0.65 GW between 12:01 am and 8:00 pm on the 15th with natural gas spiking down from 40.4 GW to 33 GW between 2 am and 3 am as excess demand caused a cascade of safety events that took gas-fired plants offline.
  • Coal power falling from 11.1 GW to 7.65 GW between 2:00 am and 3:00 pm on the 15th as storm-related demand overwhelmed generating capacity.
  • Nuclear power falling from 5.1 GW to 3.8 GW at 7:00 am on the 15th as storm-related demand overwhelmed generating capacity.
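
For readers who want to reproduce these numbers, here is a minimal sketch of the kind of script involved. It assumes the EIA spreadsheet has been saved as a CSV with hypothetical column names ("timestamp", "wind", "natural_gas", in MW); the actual download labels its columns differently.

```python
# Sketch: compute fuel-source swings from EIA hourly generation data.
# File name and column names are assumptions, not the EIA's actual schema.
import pandas as pd

df = pd.read_csv("texas_hourly_generation.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp").sort_index()

def swing(fuel, start, end):
    """Print generation (GW) for one fuel at two timestamps and the % change."""
    a = df.loc[start, fuel] / 1000.0   # MW -> GW
    b = df.loc[end, fuel] / 1000.0
    print(f"{fuel}: {a:.2f} GW -> {b:.2f} GW ({(b - a) / a:+.0%})")

swing("wind", "2021-02-14 18:00", "2021-02-14 23:00")         # ~9 -> ~5.45 GW
swing("wind", "2021-02-15 00:00", "2021-02-15 20:00")         # ~5.45 -> ~0.65 GW
swing("natural_gas", "2021-02-15 02:00", "2021-02-15 03:00")  # ~40.4 -> ~33 GW
```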

The following table summarizes the capacity losses of each class of generating assets.

The Great Texas Blackout of 2021 was a classic domino-effect chain reaction where unreliable wind power experienced a 40% failure before gas-fired power plants began to buckle under the strain of an unprecedented winter storm. There were plenty of failures by the time the dust settled, but ERCOT’s reckless reliance on unreliable wind power set up the chain of dominoes that brought untold suffering and death to Texas residents.

The graph clearly shows that during their worst-performing hours (the wind share arithmetic is sketched in code after this list):

  • Natural gas power plants produced at least 60.2% of the power available to Texas consumers, or 97% of their relative contribution to power supplies at 6:00 pm on Valentine’s Day;
  • Coal-fired power plants produced at least 15.6% of the power available to Texas consumers, or 95% of their relative contribution to power supplies at 6:00 pm on Valentine’s Day;
  • Nuclear power plants produced at least 7.5% of the power available to Texas consumers, or 97% of their relative contribution to power supplies at 6:00 pm on Valentine’s Day;
  • Wind power plants produced 1.5% of the power available to Texas consumers, or 11% of their relative contribution to power supplies at 6:00 pm on Valentine’s Day; and
  • Solar power plants did what solar power plants do and had no meaningful impact.
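
As a check on the wind figures, here is the share arithmetic in the same Python style. The 6 pm systemwide total is my approximation; the other inputs are the GW values quoted earlier in this post, not authoritative EIA numbers.

```python
# Wind's share of available power at the worst hour, and its "relative
# contribution" versus 6 pm on Feb. 14. The 66.2 GW total is an estimate.
total_6pm   = 66.2    # GW, approximate systemwide generation, 6 pm Feb. 14
total_worst = 43.5    # GW, statewide low on Feb. 15 (quoted above)
wind_6pm, wind_worst = 9.0, 0.65   # GW

share_6pm   = wind_6pm / total_6pm        # ~13.6%
share_worst = wind_worst / total_worst    # ~1.5%, matching the bullet above
print(f"worst-hour share: {share_worst:.1%}")                   # 1.5%
print(f"relative contribution: {share_worst / share_6pm:.0%}")  # ~11%
```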

Conclusion

Now that temperatures have moderated, things are getting back to normal, and The Great Texas Blackout of 2021 is little more than an unpleasant memory. While some Texas consumers are up in arms over blackout-related injuries, the State has rebounded, and many of us believe a few days of inconvenience is a fair price to pay for decades of cheap electric power. I think the inevitable investigations and public hearings will be immensely entertaining. I hope they lead to modest reforms of the free-wheeling ERCOT market that prevent irresponsible behavior by low-cost but wildly unreliable wind-power producers.

Over the last year, wind stocks like Vestas Wind Systems (VWDRY), TPI Composites (TPIC), Northland Power (NPIFF), American Superconductor (AMSC), and NextEra Energy (NEE) have soared on market expectations of unlimited future growth. As formal investigations into the root cause of The Great Texas Blackout of 2021 proceed to an inescapable conclusion that unreliable wind power is not suitable for use in advanced economies, I think market expectations are likely to turn and turn quickly. I won’t be surprised if the blowback from The Great Texas Blackout of 2021 rapidly bleeds over to other overvalued sectors that rely on renewables as the heart of their raison d’être, including vehicle electrification.

Supremes Steer Clear of Pennsylvania Election Fraud Cases

JUST IN – U.S. Supreme Court refuses to review #Pennsylvania election cases. No standing before an election, moot after. Justices Alito, Gorsuch, and Thomas dissent from the denial. Since it only takes 4 justices to hear a case, these cases were only one vote away from getting a full hearing at the SCOTUS. (Source: Disclose.tv tweet)  Excerpts in italics with my bolds from dissenting opinions. Full text available at Gateway Pundit post Supreme Court Refuses to Review Pennsylvania Election Cases – Alito, Gorsuch and Thomas Dissent.

Justice Thomas:

Changing the rules in the middle of the game is bad enough. Such rule changes by officials who may lack authority to do so is even worse. When those changes alter election results, they can severely damage the electoral system on which our self-governance so heavily depends. If state officials have the authority they have claimed, we need to make it clear. If not, we need to put an end to this practice now before the consequences become catastrophic.

Because the judicial system is not well suited to address these kinds of questions in the short time period available immediately after an election, we ought to use available cases outside that truncated context to address these admittedly important questions. Here, we have the opportunity to do so almost two years before the next federal election cycle. Our refusal to do so by hearing these cases is befuddling. There is a clear split on an issue of such great importance that both sides previously asked us to grant certiorari. And there is no dispute that the claim is sufficiently meritorious to warrant review. By voting to grant emergency relief in October, four Justices made clear that they think petitioners are likely to prevail. Despite pressing for review in October, respondents now ask us not to grant certiorari because they think the cases are moot. That argument fails.

The issue presented is capable of repetition, yet evades review. This exception to mootness, which the Court routinely invokes in election cases, “applies where (1) the challenged action is in its duration too short to be fully litigated prior to cessation or expiration, and (2) there is a reasonable expectation that the same complaining party will be subject to the same action again.”

And there is a reasonable expectation that these petitioners—the State Republican Party and legislators—will again confront nonlegislative officials altering election rules. In fact, various petitions claim that no fewer than four other decisions of the Pennsylvania Supreme Court implicate the same issue.  Future cases will arise as lower state courts apply those precedents to justify intervening in elections and changing the rules.

One wonders what this Court waits for. We failed to settle this dispute before the election, and thus provide clear rules. Now we again fail to provide clear rules for future elections. The decision to leave election law hidden beneath a shroud of doubt is baffling. By doing nothing, we invite further confusion and erosion of voter confidence. Our fellow citizens deserve better and expect more of us. I respectfully dissent.

Justice Alito, joined by Justice Gorsuch:

Now, the election is over, and there is no reason for refusing to decide the important question that these cases pose. . .A decision in these cases would not have any implications regarding the 2020 election. . . But a decision would provide invaluable guidance for future elections.

Some respondents contend that the completion of the 2020 election rendered these cases moot and that they do not fall within the mootness exception for cases that present questions that are “capable of repetition” but would otherwise evade review.  They argue that the Pennsylvania Supreme Court’s decision “arose from an extraordinary and unprecedented confluence of circumstances”—specifically, the COVID–19 pandemic, an increase in mail-in voting, and Postal Service delays—and that such a perfect storm is not likely to recur.

That argument fails for three reasons. First, it does not acknowledge the breadth of the Pennsylvania Supreme Court’s decision. That decision claims that a state constitutional provision guaranteeing “free and equal” elections gives the Pennsylvania courts the authority to override even very specific and unambiguous rules adopted by the legislature for the conduct of federal elections. . .That issue is surely capable of repetition in future elections. Indeed, it would be surprising if parties who are unhappy with the legislature’s rules do not invoke this decision and ask the state courts to substitute rules that they find more advantageous.

Second, the suggestion that we are unlikely to see a recurrence of the exact circumstances we saw this fall misunderstands the applicable legal standard. In order for a question to be capable of repetition, it is not necessary to predict that history will repeat itself at a very high level of specificity.

Third, it is highly speculative to forecast that the Pennsylvania Supreme Court will not find that conditions at the time of a future federal election are materially similar to those last fall. The primary election for Pennsylvania congressional candidates is scheduled to occur in 15 months, and the rules for the conduct of elections should be established well in advance of the day of an election. . .As voting by mail becomes more common and more popular, the volume of mailed ballots may continue to increase and thus pose delivery problems similar to those anticipated in 2020.

For these reasons, the cases now before us are not moot. There is a “reasonable expectation” that the parties will face the same question in the future. . ., and that the question will evade future pre-election review, just as it did in these cases. These cases call out for review, and I respectfully dissent from the Court’s decision to deny certiorari.

Background:  SCOTUS Conference on Election Integrity

Election Integrity is up for conference at SCOTUS on Friday.  The petition to be discussed is the complaint by the Pennsylvania legislature against the state Election Officer Boockvar, a proceeding that began on Sept. 28, 2020.  The petition makes clear the intent is not to overturn any completed election, but to ensure future elections are conducted according to laws in force.  From scotusblog:

Republican Party of Pennsylvania v. Boockvar

Issue:  (1) Whether the Pennsylvania Supreme Court usurped the Pennsylvania General Assembly’s plenary authority to “direct [the] Manner” for appointing electors for president and vice president under Article II of the Constitution, as well as the assembly’s broad power to prescribe “[t]he Times, Places, and Manner” for congressional elections under Article I, when the court issued a ruling requiring the state to count absentee ballots that arrive up to three days after Election Day as long as they are not clearly postmarked after Election Day; and (2) whether that decision is preempted by federal statutes that establish a uniform nationwide federal Election Day.

The petition to be discussed is the December 15, 2020 brief from the petitioners Republican Party:

No. 20-542 REPLY BRIEF IN SUPPORT OF PETITION FOR A WRIT OF CERTIORARI

Respondents’ Oppositions only confirm what some Respondents told the Court just weeks ago: that the Court should grant review and resolve the important and recurring questions presented in this case. Pa. Dems. Br. 9, No. 20A54 (Oct. 5, 2020) (advocating for review because the questions presented are “of overwhelming importance for States and voters across the country”); Sec’y Br. 2-3, No. 20A54 (Oct. 5, 2020). Respondents uniformly fail to mention that after the Republican Party of Pennsylvania (RPP) filed its Petition but more than a month before Respondents filed their Oppositions, the Eighth Circuit created a split on the question whether the Electors Clause constrains state courts from altering election deadlines enacted by state legislatures. See Carson v. Simon, 978 F.3d 1051 (8th Cir. 2020). Instead, Respondents seek to obfuscate the matter with a welter of vehicle arguments turning on the fact that Pennsylvania has certified the results of the 2020 general election. In reality, however, this case is an ideal vehicle, in part precisely because it will not affect the outcome of this election.

Indeed, this Court has repeatedly emphasized the imperative of settling the governing rules in advance of the next election, in order to promote the public “[c]onfidence in the integrity of our electoral processes [that] is essential to the functioning of our participatory democracy.” Purcell v. Gonzalez, 549 U.S. 1, 4 (2006). This case presents a vital and unique opportunity to do precisely that. By resolving the important and recurring questions now, the Court can provide desperately needed guidance to state legislatures and courts across the country outside the context of a hotly disputed election and before the next election. The alternative is for the Court to leave legislatures and courts with a lack of advance guidance and clarity regarding the controlling law, only to be drawn into answering these questions in future after-the-fact litigation over a contested election, with the accompanying time pressures and perceptions of partisan interest.

Note:  As reported in Gateway Pundit, legally required chain of custody for ballots was broken in every battleground state and in other states as well.

Democrats Were ONLY Able to “Win” in 2020 By Breaking Chain of Custody Laws in EVERY SWING STATE

President Trump was ahead in Pennsylvania by nearly 700,000 votes.
In Michigan Trump was ahead by over 300,000 votes.
In Wisconsin Trump was ahead by 120,000 votes.

Trump was also ahead in Georgia and Nevada.

And President Trump already trounced Joe Biden in Ohio, Florida, and Iowa — three states that ALWAYS go to the eventual presidential winner.

Then suddenly Pennsylvania, Michigan, and Wisconsin announced they would not be announcing their winner that night. Such a coordinated move was unprecedented in US history.

Then many crimes occurred to swing the election to Biden, but perhaps the greatest crime was the lack of dual controls and chain of custody records that ensure a fair and free election. At a high level, when ballots are transferred or changes are made in voting machines, these moves and changes should be done with two individuals present (dual control), one from each party, and the movements of ballots should be recorded.

So when states inserted drop boxes into the election, these changes first needed to be updated through the legislature, which they weren’t, and all movements from the time when the ballots were inserted into drop boxes needed to be recorded, which they weren’t.

Immunity by Easter?

Could it be that doors and societies will open and life be reborn as early as Easter 2021?  That depends upon lockdown politicians and the scientists who advise them.  One such is Dr. Marty Makary, a professor at the Johns Hopkins School of Medicine and Bloomberg School of Public Health, chief medical adviser to Sesame Care, and author of “The Price We Pay.”  His article at the Wall Street Journal is We’ll Have Herd Immunity by April.  Excerpts in italics with my bolds.

Covid cases have dropped 77% in six weeks. Experts should level with the public about the good news.

Amid the dire Covid warnings, one crucial fact has been largely ignored: Cases are down 77% over the past six weeks. If a medication slashed cases by 77%, we’d call it a miracle pill. Why is the number of cases plummeting much faster than experts predicted?

In large part because natural immunity from prior infection is far more common than can be measured by testing.

Testing has been capturing only from 10% to 25% of infections, depending on when during the pandemic someone got the virus. Applying a time-weighted case capture average of 1 in 6.5 to the cumulative 28 million confirmed cases would mean about 55% of Americans have natural immunity.
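
The arithmetic behind that estimate, written out as a short sketch (the 330 million population figure is my round number, not from the article):

```python
# Back-of-envelope natural-immunity estimate from the paragraph above.
confirmed_cases = 28_000_000   # cumulative confirmed US cases
capture_ratio = 6.5            # assumed true infections per confirmed case
population = 330_000_000       # approximate US population

estimated_infections = confirmed_cases * capture_ratio   # 182 million
print(f"{estimated_infections / population:.0%}")         # ~55%
```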

Now add people getting vaccinated. As of this week, 15% of Americans have received the vaccine, and the figure is rising fast. Former Food and Drug Administration Commissioner Scott Gottlieb estimates 250 million doses will have been delivered to some 150 million people by the end of March.

There is reason to think the country is racing toward an extremely low level of infection. As more people have been infected, most of whom have mild or no symptoms, there are fewer Americans left to be infected. At the current trajectory, I expect Covid will be mostly gone by April, allowing Americans to resume normal life.

Antibody studies almost certainly underestimate natural immunity. Antibody testing doesn’t capture antigen-specific T-cells, which develop “memory” once they are activated by the virus. Survivors of the 1918 Spanish flu were found in 2008—90 years later—to have memory cells still able to produce neutralizing antibodies.

Researchers at Sweden’s Karolinska Institute found that the percentage of people mounting a T-cell response after mild or asymptomatic Covid-19 infection consistently exceeded the percentage with detectable antibodies. T-cell immunity was even present in people who were exposed to infected family members but never developed symptoms. A group of U.K. scientists in September pointed out that the medical community may be under-appreciating the prevalence of immunity from activated T-cells.

Covid-19 deaths in the U.S. would also suggest much broader immunity than recognized. About 1 in 600 Americans has died of Covid-19, which translates to a population fatality rate of about 0.15%. The Covid-19 infection fatality rate is about 0.23%. These numbers indicate that roughly two-thirds of the U.S. population has had the infection.
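
The implied calculation, made explicit with the rounded rates quoted above:

```python
# If ~0.15% of all Americans have died of Covid-19 and the infection
# fatality rate is ~0.23%, the implied share ever infected is the ratio.
population_fatality_rate = 0.0015   # ~1 in 600, rounded as in the text
infection_fatality_rate = 0.0023

print(f"{population_fatality_rate / infection_fatality_rate:.0%}")  # ~65%
```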

In my own conversations with medical experts, I have noticed that they too often dismiss natural immunity, arguing that we don’t have data. The data certainly doesn’t fit the classic randomized-controlled-trial model of the old-guard medical establishment. There’s no control group. But the observational data is compelling.

I have argued for months that we could save more American lives if those with prior Covid-19 infection forgo vaccines until all vulnerable seniors get their first dose. Several studies demonstrate that natural immunity should protect those who had Covid-19 until more vaccines are available. Half my friends in the medical community told me: Good idea. The other half said there isn’t enough data on natural immunity, despite the fact that reinfections have occurred in less than 1% of people—and when they do occur, the cases are mild.

But the consistent and rapid decline in daily cases since Jan. 8 can be explained only by natural immunity. Behavior didn’t suddenly improve over the holidays; Americans traveled more over Christmas than they had since March. Vaccines also don’t explain the steep decline in January. Vaccination rates were low and they take weeks to kick in.

My prediction that Covid-19 will be mostly gone by April is based on laboratory data, mathematical data, published literature and conversations with experts. But it’s also based on direct observation of how hard testing has been to get, especially for the poor. If you live in a wealthy community where worried people are vigilant about getting tested, you might think that most infections are captured by testing. But if you have seen the many barriers to testing for low-income Americans, you might think that very few infections have been captured at testing centers. Keep in mind that most infections are asymptomatic, yet they still trigger natural immunity.

Many experts, along with politicians and journalists, are afraid to talk about herd immunity. The term has political overtones because some suggested the U.S. simply let Covid rip to achieve herd immunity. That was a reckless idea. But herd immunity is the inevitable result of viral spread and vaccination. When the chain of virus transmission has been broken in multiple places, it’s harder for it to spread—and that includes the new strains.

Herd immunity has been well-documented in the Brazilian city of Manaus, where researchers in the Lancet reported the prevalence of prior Covid-19 infection to be 76%, resulting in a significant slowing of the infection. Doctors are watching a new strain that threatens to evade prior immunity. But countries where new variants have emerged, such as the U.K., South Africa and Brazil, are also seeing significant declines in daily new cases. The risk of new variants mutating around the prior vaccinated or natural immunity should be a reminder that Covid-19 will persist for decades after the pandemic is over. It should also instill a sense of urgency to develop, authorize and administer a vaccine targeted to new variants.

Some medical experts privately agreed with my prediction that there may be very little Covid-19 by April but suggested that I not talk publicly about herd immunity because people might become complacent and fail to take precautions or might decline the vaccine. But scientists shouldn’t try to manipulate the public by hiding the truth. As we encourage everyone to get a vaccine, we also need to reopen schools and society to limit the damage of closures and prolonged isolation. Contingency planning for an open economy by April can deliver hope to those in despair and to those who have made large personal sacrifices.

Don’t Fence Me In!

Why Team Left Cheats More than Team Right

One of the few pleasures remaining during pandemania involves sports competitions where rules are followed and enforced by unbiased officials, so that teams or individuals win or lose based solely on the merit of their performances.  Elsewhere with identity politics and political correctness, it is a different story.  People on the right perceive accurately that their opponents on the left are not bound by the rules, and break them readily in order to win.

Brent E. Hamachek explains in his blog post Why They Cheat, a look at the behavioral differences between Team Right and Team Left.  Excerpts in italics with my bolds.

America is divided into two political teams; Team Right and Team Left. As Joe Biden and Kamala Harris assume office, many Team Right members are still trying to come to terms with the results of the 2020 election. They feel certain that Team Left cheated in a variety of ways in order to produce enough votes to secure victory.

Setting aside the MSM’s agreed-upon talking points of “baseless accusations” of election fraud and their “despite there being no evidence to support such claims” mantra, we now know that there was significant evidence of election tampering. That is actually a “fact” about which I’ve previously written. It is also, at this point, irrelevant. Joe Biden is in office. Focusing on 2020 election cheating is fine for investigators in various states if they so choose (there will be no federal investigation), but it is not helpful for ordinary citizens who would like to reverse trends.

The more helpful issue to explore in order to make a difference going forward is in answering this question: Why do Team Left members seem to be more willing to cheat than do Team Right members?

This is a question, I believe, that we can answer without needing any sort of physical proof. We can prove it solely through the use of our reason and with a clear understanding of the ethical structure, and attendant influences on behavior, of modern-day Team Left members (many of whom were election officials and vote counters).

When the typical person says they are “ethical,” they really mean that in their mind the things they do are the right things to do. This suggests a sort of self-legislating capability on the part of each person to know right from wrong. An idea like this can be found in the work of famous philosophers ranging from Immanuel Kant, to Karl Marx, to many others. They argue that each person is capable of such self-legislating and engages in the process constantly.

Very few people realize that there are actual ethical systems that have been “constructed” to help direct us on the path to making consistent and appropriate decisions as to how to act and behave in any given situation. We have the above-referenced Kant’s categorical imperative (if what I’m thinking of doing now were a rule that everyone had to follow, would it be workable for society?). We have Jeremy Bentham’s utilitarianism (pure cost-benefit analysis) or John Stuart Mill’s more refined and kinder version, which calls for cost-benefit analysis with an allowance for the subjective nature of “higher” human values.

There are a number of ways to view the development and deployment of moral and ethical behavior, but the typical person knows little, if any, of this. Yet they will tell you that they are ethical, and others are not. By what standard? How do they know? This logical dilemma, by the way, exists in people whether they were supporters of Donald Trump or Joe Biden; whether they are members of Team Right or Team Left. There is absolutely no difference in that respect. There is a difference we will get to eventually, but it does not involve ethics.

Hobbes was right!

It is my opinion, based upon many years of studying political philosophy, working in a large corporate environment, working with and running privately owned businesses, and doing political advising and writing, that the greatest of all the political philosophers, the one who got the most important thing right, was Englishman Thomas Hobbes. Hobbes was born in 1588, the year of the Spanish Armada; it is said that his mother went into premature labor upon seeing the ships off the English coast, thereby birthing poor Thomas out of fear.

Hobbes spent the rest of his life focusing on the fearful nature of humans, among other things.

He is the father of social contract theory, which describes man’s compact to enter into civil society as a way to control his more primitive impulses. He is famous for his line about man’s life in the state of nature, before the social contract, which he describes as being “solitary, poor, nasty, brutish, and short.” Hobbes suggested that, owing to their nature, men are unable to be left to govern themselves without stern direction. His diagnosis of us as people? Fearful and self-destructive. His prescription? A strong sovereign.

Hobbes is also the father of the idea of moral relativism. His contention is that, for the typical human, their calculation of whether or not something is “right or wrong” is nothing more than a reduction to looking at things that please them and things that offend them. They maximize the one and avoid the other. In that process, they create their own morality, or set of ethics, that is based solely upon their own desires and aversions.

My own fifty-eight years of study and empirical observations have led me to conclude that this theory of human behavior and ethical development most accurately describes the greatest number of people. Assuming a human population distributed under a bell curve, Hobbes’s ethical construct describes the greatest number of people gathered around the mean.

At this point you might think I’m suggesting that Biden supporters, Team Left members, are moral relativists and Trump supporters, Team Right members, are not. That somehow I believe we are inherently better creatures than are they. You’d be wrong. I am not. I believe that most people are moral relativists in general, and even that people who attempt to operate under a more disciplined structure of ethics, including the Christian ethic, can become moral relativists at the very moment that they find themselves placed most at risk.

Survival is in our nature. When it is in jeopardy, even the most truly righteous can attempt to hedge their ethical bets.

Since I am concluding that there is no fundamental difference in ethics between the typical Trump supporter and the typical Biden supporter, why go through all the trouble to share this background on ethics? After all, the purpose is to demonstrate how we can prove that Team Left members are more likely to cheat. I walked through the ethical piece because people typically consider cheating to be “unethical.” Yet it happens, and it happens more on their team than on ours.

To understand why, I believe we need to look beyond ethics and consider Tom Hanks, World War II, and the ancient Stoics.

Duty as a differentiator

Love or hate his personal life and politics, Tom Hanks makes spectacular movies and is especially good in war roles. A few months back, I had a chance to watch him in the Apple Television release of Greyhound. It is a story based on the U.S. Navy convoys that brought supplies and armaments across the Atlantic during World War II. It is not a long film, but it is nonstop action-packed. For ninety minutes, there is nothing but German U-boat peril. American sailors show incredible courage, some losing their lives, others saving lives, up against challenging odds.

What happens to make men so courageous in one moment and so devoid of any kind of ethical or moral compass in the next? I think the answer lies in the notion of duty. Those men on the ship with Tom Hanks in that movie were driven in those moments by a higher calling. They had a sense of duty. Some, when they returned home, might for whatever reason have lost their way and found themselves with no higher calling. Absent duty, they were left with only their own personal moral and ethical framework in which to operate. Given moral relativism, they became able to justify almost any behavior.

This notion of duty is a very Stoic concept. Stoicism, which dates back to Ancient Greece, emphasizes duty and the importance of virtue. There are four attributes of virtue: wisdom, justice, courage, and moderation. Doing one’s duty was central to the Stoics. Duty manifested itself in more than just following orders; it meant adhering to the four key elements of virtue and keeping in sync with all of nature.

One does not have to buy into all of Stoic philosophy to grasp the importance of duty. It is with duty that we can begin to answer our question: How can we know that Team Left members will cheat?

The answer lies in the absence of a sense of duty to something outside themselves. The typical contemporary Team Left member does not have any external force that commands him or her to “behave better.”

Again, operating under the bell curve, the mainstream Trump supporter tries to follow either the voice of God, the call of patriotism, or both. Both are external to themselves. Both set standards for behavior that transcend their own personal calculations of convenience. Both provide fairly clear direction, either through Scripture or the Constitution. Both rest like weights upon their shoulders, burdening them with a non-ignorable sense of obligation.

It doesn’t mean they won’t fail. It doesn’t mean they will not behave badly. It simply means they have a better chance of making a better choice than does a person who is not encumbered by any sense of duty other than to themselves. Duty is typically viewed as a call to act. It can just as easily be seen as the antithesis to action, which means it can inhibit. I must because it’s my duty. I must not because it betrays my duty.

Common responses I have received from Team Left members over the years when I ask them about feeling a sense of duty include:

• I have a duty to those around me.

• I have a duty to those less fortunate than myself.

• I have a duty to humanity.

The shared characteristic of each of those “duties” is that although they sound as if they reside “outside” the individual, they are wholly subjective with regard to their definition. Each individual person gets to define their “duty to others” however they see fit. There is no separate standard. For those focused on a Christian duty, there is the reasonable clarity of the Bible. For those who pledge allegiance to the United States of America, there is our Constitution bolstered by the original Declaration of Independence.

For those, however, who say that they simply have a duty to help “others,” the others can be whomever they so choose, and need whatever kind of help it is the helper decides they should provide.

Machiavelli provides the final element

To succinctly summarize my thoughts to this point, it is my personal belief that the members of Team Right are not inherently any more ethical than are their counterparts on Team Left. When it comes right down to it, individual to individual, most people are basic moral relativists as identified and defined by Hobbes, and given no other considerations, most people conduct themselves under an ethical code that is simply one of convenience.

The difference between the two is that those who answer to a calling of duty that is outside themselves and more objective than subjective in nature can have their individual passions held in check. It gives their better angels a chance to be heard and followed.

Machiavelli’s maxim that the ends justify the means explains why the modern-day Team Left member, almost always a Democrat, is so willing to cheat. For a typical moral relativist, to whom little or nothing is malum in se and who is mostly unconstrained by any sense of duty beyond what they conveniently self-define, any sort of activity is permissible so long as they end up getting what they want. They give cover to this behavior by saying their actions are necessary to “help others.” As has been shown, that statement can mean whatever they want it to mean.

By our nature as humans, we are flawed and sinful creatures. That goes for Trump supporters as well as those who lined up behind Joe Biden. The difference is that for those of us who truly have a good old-fashioned love for God, country, or both, we have a voice outside ourselves warning us to control our nature. It asks us to heed a higher calling. It limits us in a way that is beneficial to maintaining an ordered, predictable, and just society.

Those who operate without that sense of duty are left to do whatever their free will wishes, unbound by any real constraints. They can justify their actions through the simple pleasure they feel or the pain they avoid. Their ends always can justify their means.  That is why they cheat. That is how we can use our reason to know they cheat.

Postscript:  Dennis Prager sees the left/right distinction in terms of focus on politics vs. persons.

That’s a major difference between the right and the left concerning the way each seeks to improve society. Conservatives believe that the way to a better society is almost always through the moral improvement of the individual, by each person doing battle with his or her own weaknesses and flaws. It is true that in violent and evil societies, such as fascist, communist, or Islamist tyrannies, the individual must be preoccupied with battling outside forces. Almost everywhere else, though, certainly in a free and decent country such as America, the greatest battle of the individual must be with inner forces, that is, with his or her moral failings.

The left on the other hand, believes that the way to a better society is almost always through doing battle with society’s moral failings. Thus, in America, the left concentrates its efforts on combating sexism, racism, intolerance, xenophobia, homophobia, Islamophobia, and the many other evils that the left believes permeate American society.

One important consequence of this left/right distinction is that those on the left are far more preoccupied with politics than those on the right. Since the left is so much more interested in fixing society than in fixing the individual, politics inevitably becomes the vehicle for societal improvement. That’s why whenever the term activist is used, we almost always assume that the term refers to someone on the left.

See also: Left and Right on Climate (and so much else)

See also: Climate Science, Ethics and Religion

Feb. 2021 Polar Vortex Hits Okhotsk Ice

Update Feb. 19, 2021 to previous post

This update notes a dramatic effect on Okhotsk sea ice coinciding with the Polar Vortex event that froze Texas and other midwestern US states.  When Arctic air extends so far south due to the weak and wavy vortex, warmer air replaces the icy air in Arctic regions.  In this case, the deficits to sea ice extent appear mostly in the Sea of Okhotsk in the Pacific.

The graph below shows a sharp drop in ice extent the last three days.

A closer look into the regions shows that Okhotsk peaked at 1.1M km2 on day 37 and lost 217k km2, down to 0.9M km2 yesterday.  That loss, along with flat Bering extent, makes up 70% of the present deficit to average.
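
A quick check of that arithmetic (extents in km2):

```python
# Okhotsk extent: day-37 peak minus the three-day loss.
peak_day37 = 1_100_000   # ~1.1M km^2
loss = 217_000
print(f"{peak_day37 - loss:,} km^2")   # 883,000 -> rounds to ~0.9M
```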

Some comments from Dr. Judah Cohen, Feb. 15, from his AER blog Arctic Oscillation and Polar Vortex Analysis and Forecasts.  Excerpts in italics with my bolds.

I have been writing how the stratospheric PV disruption that has been so influential on our weather since mid-January has been unusual and perhaps even unique in the observational record, so I guess then it should be no surprise that its ending is also highly unusual. I was admittedly skeptical, but it does seem that the coupling between the stratospheric PV and the tropospheric circulation is about to come to an abrupt end.

The elevated polar cap geopotential height anomalies (PCHs) related to what I like to refer to as the third and final PV disruption at the end of January/early February quickly propagated to the surface and even amplified, peaking this past weekend. And as I have argued, it is during spikes in PCH that severe winter weather is most likely across the NH mid-latitudes, as demonstrated in Cohen et al. (2018).

But rather than the typical gradual influence from the stratospheric PV disruption over many weeks, maybe akin to the drip, drip, drip of a leaky faucet, the entire signal dropped all at once like an anchor. This also likely contributed to the severity of the current Arctic outbreak in the Central US that is generational and even historical in its severity. But based on the forecast the PV gave all it had all at once, and the entire troposphere-stratosphere-troposphere coupling depicted in Figure ii is about to abruptly end in the next few days.

I am hesitant to bring analogs before 2000 but the extreme cold in Texas did remind me of another winter that brought historic Arctic outbreaks including cold to Texas – January 1977. It does appear that the downward influence from the stratospheric PV to the surface came to an abrupt end at the end of January 1977 . . . Relative to normal, January 1977 was the coldest month for both Eurasia and the US when stratosphere-troposphere coupling was active. But the relative cold did persist in both the Eastern US and northern Eurasia in February post the stratosphere-troposphere coupling. By March the cold weather in the Eastern US was over but persisted for northern Eurasia.

See also No, CO2 Doesn’t Drive the Polar Vortex

Background from Previous Post

In January, most of the Arctic ocean basins are frozen over, and so the growth of ice extent slows down.  According to SII (Sea Ice Index) January on average adds 1.3M km2, and this month it was 1.4M.  (background is at Arctic Ice Year-End 2020).  The few basins that can grow ice this time of year tend to fluctuate and alternate waxing and waning, which appears as a see saw pattern in these images.

Two weeks into February Arctic ice extents are growing faster than the 14-year average, such that they are approaching the mean.  The graph below shows the ice recovery since mid-January for 2021, the 14-year average and several recent years.

The graph shows in mid-January a small deficit to average, then slow 2021 growth for some days before a pickup in pace in the later weeks.  Presently extents are slightly (1%) below average, close to 2019 and 2020 and higher than 2018.

February Ice Growth Despite See Saws in Atlantic and Pacific

As noted above, this time of year the Arctic adds ice on the fringes since the central basins are already frozen over.  The animation above shows Barents Sea on the right (Atlantic side) grew in the last two weeks by 175k km2 and is now 9% greater than the maximum last March.  Meanwhile on the left (Pacific side), Bering below and Okhotsk above wax and wane over this period. Okhotsk is seen growing 210k km2 the first week, then giving half of it back the second week.  Bering waffles up and down, ending slightly higher.

The table below presents ice extents (in km2) in the Arctic regions for day 44 (Feb. 13) compared to the 14-year average and to 2018.

Region                                2021 day 44   Day 44 average   2021-Avg   2018 day 44   2021-2018
(0)  Northern_Hemisphere                14546503         14678564    -132061      14140166      406337
(1)  Beaufort_Sea                        1070689          1070254        435       1070445         244
(2)  Chukchi_Sea                          966006           965691        315        965971          35
(3)  East_Siberian_Sea                   1087120          1087134        -14       1087120           0
(4)  Laptev_Sea                           897827           897842        -15        897845         -18
(5)  Kara_Sea                             934988           906346      28642        874714       60274
(6)  Barents_Sea                          837458           563224     274235        465024      372434
(7)  Greenland_Sea                        645918           610436      35482        529094      116824
(8)  Baffin_Bay_Gulf_of_St._Lawrence     1057623          1487547    -429924       1655681     -598058
(9)  Canadian_Archipelago                 854597           853146       1451        853109        1489
(10) Hudson_Bay                          1260471          1260741       -270       1260838        -367
(11) Central_Arctic                      3206263          3211892      -5630       3117143       89120
(12) Bering_Sea                           559961           674196    -114235        319927      240034
(13) Baltic_Sea                           116090            94341      21749         76404       39686
(14) Sea_of_Okhotsk                      1027249           930357      96892        911105      116144
(15) Yellow_Sea                             9235            28237     -19002         33313      -24078
(16) Cook_Inlet                              223            11137     -10914         11029      -10806

The table shows that the Bering deficit to average is offset by the surplus in Okhotsk.  Baffin Bay shows the largest deficit, mostly offset by surpluses in Barents, Kara and Greenland Seas.
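
For readers who want to check or extend the table, here is a minimal Python sketch of the two difference columns. The dictionary copies a few rows from the table; the layout is mine, not SII's.

```python
# Recompute the two difference columns of the regional table above.
# Extents are in km^2, copied from the table (a few rows shown);
# source values were rounded, so a difference can be off by 1 km^2
# (e.g., Barents shows 274235 in the table, 274234 here).
extents = {
    # region: (2021 day 44, day-44 average, 2018 day 44)
    "Northern_Hemisphere": (14546503, 14678564, 14140166),
    "Barents_Sea":         (837458,   563224,   465024),
    "Bering_Sea":          (559961,   674196,   319927),
    "Sea_of_Okhotsk":      (1027249,  930357,   911105),
}

for region, (y2021, avg, y2018) in extents.items():
    print(f"{region}: 2021-Avg = {y2021 - avg:+d}, 2021-2018 = {y2021 - y2018:+d}")
```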

The polar bears have a Valentine’s Day wish for Arctic Ice.

And Arctic Ice loves them back, returning every year so the bears can roam and hunt for seals.

Footnote:

Seesaw accurately describes Arctic ice in another sense:  The ice we see now is not the same ice we saw previously.  It is better to think of the Arctic as an ice blender than as an ice cap, explained in the post The Great Arctic Ice Exchange.

After Counting Mail-in Ballots, Senate Finds Trump Guilty

Babylon Bee has the special report In Mail-In Impeachment Vote, Senate Convicts Trump 8275 To 3.  Excerpts in italics with my bolds.

WASHINGTON, D.C.—In a historic move, the U.S. Senate decided to switch to voting by mail for Trump’s second impeachment trial. After all the votes were counted by an intern in a back room with no cameras, the Senate ruled to convict President Trump of incitement to violence by a vote of 8275 to 3.

“Our holy democracy has spoken,” said Senator Chuck Schumer. “Do not ask any questions or you are a blasphemer against the sacred sacredness of our vote. Everyone can go home now!”

A couple of troublemaking Senators attempted to overthrow the Constitution by bringing up the point that there are only 100 Senators, making it impossible to arrive at a tally of 8275 to 3, but they were quickly removed from the Senate Chambers and condemned for “attempting to suppress the votes of people of color.”

The Senate then moved on to other business, passing universal healthcare by a margin of 320,000 to 4.

Footnote:  SCOTUS Conference on Election Integrity

Humor aside, Election Integrity is up for conference at SCOTUS on Friday.  The petition to be discussed is the Pennsylvania legislature’s complaint against state Election Officer Boockvar, a proceeding that began on Sept. 28, 2020.  The issue statement, the December 15, 2020 reply brief, and the chain-of-custody note are reproduced above under Background: SCOTUS Conference on Election Integrity.

Media Chose to Lie, Not Go Broke

Martin Gurri tells the story of how legacy print and TV news descended into deceit and rabble-rousing when faced with decline and eventual bankruptcy.  His article Slouching Toward Post-Journalism at City Journal is a thorough and probing analysis, of which only some excerpts are posted here, in italics with my bolds and images. The journey of the NY Times exemplifies how and why mass media went from informing to inflaming the public.

The New York Times and other elite media outlets have openly embraced advocacy over reporting.

Traditional newspapers never sold news; they sold an audience to advertisers. To a considerable degree, this commercial imperative determined the journalistic style, with its impersonal voice and pretense of objectivity. The aim was to herd the audience into a passive consumerist mass. Opinion, which divided readers, was treated like a volatile substance and fenced off from “factual” reporting.

The digital age exploded this business model. Advertisers fled to online platforms, never to return. For most newspapers, no alternative sources of revenue existed: as circulation plummets to the lowest numbers on record, more than 2,000 dailies have gone silent since the turn of the century. The survival of the rest remains an open question.

Led by the New York Times, a few prominent brand names moved to a model that sought to squeeze revenue from digital subscribers lured behind a paywall. This approach carried its own risks. The amount of information in the world was, for practical purposes, infinite. As supply vastly outstripped demand, the news now chased the reader, rather than the other way around. Today, nobody under 85 would look for news in a newspaper.

Under such circumstances, what commodity could be offered for sale?

During the 2016 presidential campaign, the Times stumbled onto a possible answer. It entailed a wrenching pivot from a journalism of fact to a “post-journalism” of opinion—a term coined, in his book of that title, by media scholar Andrey Mir. Rather than news, the paper began to sell what was, in effect, a creed, an agenda, to a congregation of like-minded souls. Post-journalism “mixes open ideological intentions with a hidden business necessity required for the media to survive,” Mir observes. The new business model required a new style of reporting. Its language aimed to commodify polarization and threat: journalists had to “scare the audience to make it donate.” At stake was survival in the digital storm.

The experiment proved controversial. It sparked a melodrama over standards at the Times, featuring a conflict between radical young reporters and befuddled middle-aged editors. In a crucible of proclamations, disputes, and meetings, the requirements of the newspaper as an institution collided with the post-journalistic call for an explicit struggle against injustice.

The old media had needed happy customers. The goal of post-journalism, according to Mir, is to “produce angry citizens.” The August 2016 article [a media column by the Times’s Jim Rutenberg on covering Trump] marked the point of no return in the spiritual journey of the New York Times from newspaper of record to Vatican of liberal political furor. While the impulse originated in partisan herd instinct, the discovery of a profit motive would make the change irrevocable. Rutenberg professed to find the new approach “uncomfortable” and, “by normal standards, untenable”—but the fault, he made clear, lay entirely with the “abnormal” Trump, whose toxic personality had contaminated journalism. He was the active principle in the headline “The Challenge Trump Poses to Objectivity.”

A cynic (or a conservative) might argue that objectivity in political reporting was more an empty boast than a professional standard and that the newspaper, in pandering to its audience, had long favored an urban agenda, liberal causes, and Democratic candidates. This interpretation misses the transformation in the depths that post-journalism involved. The flagship American newspaper had turned in a direction that came close to propaganda. The oppositional stance, as Mir has noted, cannot coexist with newsroom independence: writers and editors were soon to be punished for straying from the cause. The news agenda became narrower and more repetitive as journalists focused on a handful of partisan controversies—an effect that Mir labeled “discourse concentration.”  The New York Times, as a purveyor of information and a political institution, had cut itself loose from its own history.

[The Russia Collusion story] was one of the most extraordinary episodes in American politics—and the first sustained excursion into post-journalism by the American news media, led every step of the way by the New York Times.

Future media historians may hold the Trump-Russia story to be a laboratory-perfect specimen of discourse concentration. For nearly two years, it towered over the information landscape and devoured the attention of the media and the public. The total number of articles on the topic produced by the Times is difficult to measure, but a Google search suggests that it was more than 3,000—the equivalent, if accurate, of multiple articles per day for the period in question. This was journalism as if conducted under the impulse of an obsessive-compulsive personality. Virtually every report either implied or proclaimed culpability. Every day in the news marked the beginning of the Trumpian End Times.

The sum of all this sound and fury was . . . zero. The most intensively covered story in history turned out to be empty of content. Mueller’s investigation “did not identify evidence that any US persons conspired or coordinated” with the Russians. Mueller’s halting television appearance in July 2019 convinced even the most vehement partisans that he was not the knight to slay the dragon in the White House. After two years of media frenzy came an awkward moment. The New York Times had reorganized its newsroom to pursue this single story—yet, just as it had missed Trump’s coming, the paper failed to see that Trump would stay.

Yet what looked like journalistic failure was, in fact, an astonishing post-journalistic success. The intent of post-journalism was never to represent reality or inform the public but to arouse enough political fervor in readers that they wished to enter the paywall in support of the cause. This was ideology by the numbers—and the numbers were striking. Digital subscriptions to the New York Times, which had been stagnant, nearly doubled in the first year of Trump’s presidency. By August 2020, the paper had 6 million digital subscribers—six times the number on Election Day 2016 and the most in the world for any newspaper.

The Russian collusion story, though refuted objectively, had been validated subjectively, by the growth in the congregation of the paying faithful.

In throwing out the old textbook, post-journalism made transgression inevitable. In July 2019, Jonathan Weisman, who covered Congress for the Times and happened to be white, questioned on Twitter the legitimacy of leftist members of the House who happened to be black. Following criticism, Weisman deleted the offending tweets and apologized elaborately, but he was demoted nonetheless.

The dramatic confrontation had been triggered by Weisman’s tweets and the heretical headline but was really about the boundaries of expression—what was allowed and what was taboo—in a post-objective, post-journalistic time. On the contentious subjects of Trump and race, managers and reporters at the paper appeared to hold similar opinions. No one in the room defended Trump as a normal politician whose views deserved a hearing. No one questioned the notion that the United States, having elected Trump, was a fundamentally racist country. But as Baquet fielded long and pointed questions from his staff, it became clear that management and newsroom—which translated roughly to middle age and youth—held radically divergent visions of the post-journalism future.

Unlike management, the reporters were active on social media, where they had to face the most militant elements of the subscriber base. In this way, they represented the forces driving the information agenda. Baquet had disparaged Twitter and insisted that the Times would not be edited by social media. He was mistaken. The unrest in the newsroom had been propelled by outrage on the web, and the paper had quickly responded. Generational attitudes, displayed on social media, allowed no space for institutional loyalty. Baquet had demoted Weisman because of his inappropriate behavior—but the newsroom turned against him because he had picked a fight with the wrong enemy.

Two days after the town hall meeting, the New York Times inaugurated, in its magazine section, the “1619 Project”—an attempt, said Baquet, “to try to understand the forces that led to the election of Donald Trump.” Rather than dig deep into the “half of America” that had voted for the president, the newspaper chose to blame the events of 2016 on the country’s pervasive racism, not only here and now but everywhere and always.

The 1619 Project rode the social-justice ambitions of the newsroom to commodify racial polarization—and, not incidentally, to fill the void left by Robert Mueller’s failure to launch.

The project showed little interest in investigative reporting or any other form of old-school journalism. It produced no exposés of present-day injustice. Instead, it sold agenda-setting on a grand scale: the stated mission was to “reframe the country’s history by placing the consequences of slavery and the contributions of black Americans at the center of our national narrative.” The reportorial crunch implicit in this high-minded posture might be summarized as “All the news that’s fit to reframe history.”

The 1619 Project has come under fire for its extreme statements and many historical inaccuracies. Yet critics missed the point of the exercise, which was to stake out polarizing positions in the mode of post-truth: opinions could be transformed into facts if held passionately enough. The project became another post-journalistic triumph for the Times. Public school systems around the country have included the material in their curricula. Hannah-Jones received a Pulitzer Prize for her “sweeping, provocative, and personal essay”—possibly the first award offered for excellence in post-journalism. The focus on race propelled the Times to the vanguard of establishment opinion during the convulsions that followed the death of George Floyd under the knee of a white Minneapolis police officer in May 2020.

That episode replaced the Russia collusion story as the prime manufacturer of “angry citizens” and added an element of inflexibility to the usual rigors of post-journalism. Times coverage of antipolice protests was generally sympathetic to the protesters. Trump was, of course, vilified for “fanning the strife.” But the significant change came in the severe tightening of discourse: the reframing imperative now controlled the presentation of news. Reporting minimized the violence that attended the protests, for example, and sought to keep the two phenomena sharply segregated.

Less than two weeks after Floyd’s death, amid spreading lawlessness in many American cities, the paper posted an opinion piece by Republican senator Tom Cotton in its online op-ed section, titled “Time to Send in the Troops.” It called for “an overwhelming show of force” to pacify troubled urban areas. To many loyal to the New York Times, including staff, allowing Cotton his pitch smacked of treason. Led by young black reporters, the newsroom rebelled.

Once again, the mutiny began on Twitter. Many reporters had large followings; they could appeal directly to readers. In the way of social media, the most excited voices dominated among subscribers. As the base roared, the rebels moved to confront their employer.

The history-reframing mission is now in the hands of a deeply self-righteous group that has trouble discerning the many human stopping places between true and false, good and evil, objective and subjective. According to one poll, a majority of Americans shared the opinion that Cotton expressed in his op-ed. That had no bearing on the discussion. In the letter and the town hall meetings, the rebels wielded the word “truth” as if they owned it. By their lights, Cotton had lied, and the fact that the public approved of his lies was precisely what made his piece dangerous.

Revolutions tend to radicalization. The same is true of social media mobs: they grow ever more extreme until they explode.

But the New York Times is neither of these things—it’s a business, and post-journalism is now its business model. The demand for moral clarity, pressed by those who own the truth, must increasingly resemble a quest for radical conformism; but for nonideological reasons, the demand cannot afford to leave subscriber opinion too far behind. Radicalization must balance with the bottom line.

The final paradox of post-journalism is that the generation most likely to share the moralistic attitude of the newsroom rebels is the least likely to read a newspaper. Andrey Mir, who first defined the concept, sees post-journalism as a desperate gamble, doomed in the end by demographics. For newspapers and their multiple art forms developed over a 400-year history, Mir writes, the collision with the digital tsunami was never going to be a challenge to surmount but rather “an extinction-level event.”

Biden’s Bizarre Climate Charade

David Krayden explains in his Human Events article Joe Biden Thinks He’s Tackling Climate Change, but He’s Really Sacking the U.S. Economy.  Excerpt in italics with my bolds and images.

The Paris Accords and cancelling Keystone is just the beginning of life under the new climate regime.

President Biden’s vision is to “lead a clean energy revolution” that will free the United States from the “pollution” of carbon dioxide by 2035 and have “net-zero emissions” by 2050.

Of course, the President himself will likely not be around to see if the United States achieves either target, even if his insane plan survives successive administrations. Instead, he sits in his chair like a languorous old man assiduously reading his speaking notes from his desk, looking like he is under house arrest. Still, he is governing—or at least, appearing to do so—by executive order, and the sheer mass of those dictates is not just staggering but terrifying.

The new President had barely warmed his Oval Office seat when he announced that the U.S. would return to the Paris climate accord—a job-destroying bit of global authoritarianism that is not worth the diplomatic paper it is printed on, let alone the lavish parties staged while it was being negotiated. Then, he quickly produced an executive order to cancel the XL pipeline. With the flash of another one of those pens that Biden runs through on a daily basis, he canceled 10,000 jobs in the U.S., along with another 3,000 in Canada. And this in the midst of a pandemic that even Biden has called our “dark winter!” Even uber-environmentalist Canadian Prime Minister Justin Trudeau supports the XL pipeline, and promptly said so.

Has President Biden discovered the miracle fuel that is going to make petroleum obsolete and put the oil industry out of business—even before his administration decides to do it for them? Is that what he was up to during all those months when he cowered in his basement instead of campaigning for the presidency? Clearly, the Biden administration has not thought this through beyond the talking points.

Whether the President chooses to acknowledge it or not, oil will continue to be the principal source of energy for American consumers for quite some time to come—at least until perpetual motion is discovered. The oil that the XL pipeline was supposed to transport from America’s closest ally—Canada—will now have to be brought in by rail, a potentially more dangerous and far less environmentally friendly method than a pipeline.

Fossil fuels remain the overwhelming source of all of America’s energy needs: petroleum and natural gas account for 69% of energy usage, coal 11%, and nuclear power 8%. Renewable energy accounts for 11%, and that includes the wood you burn in the fireplace or woodstove every winter. Solar and wind power account for only a fraction of that 11%.

So clearly, with all his activist policy around climate change, President Biden has America on track for a return trip to the Middle Ages.

And like they did in the Middle Ages, the President expects Americans to have blind faith in the climate change priests who will be integral to his administration. If you don’t think the climate change movement is a religion or at least a passable cult, just listen to how its adherents talk about environmental policy. When Democrats were trying to convince us that the California wildfires were somehow the result of climate change, and not just bad forestry management, House Speaker Nancy Pelosi, sounding more like a pagan devotee than the good Catholic she claims to be, exploded: “Mother earth is angry, she is telling us. Whether she’s telling us with hurricanes in the Gulf Coast, fires in the West, whatever it is, the climate crisis is real.”

So if climate change is the culprit for every Act of God, will President Biden’s plan for Americans to live in caves and shut off the heat actually work? Not without the cooperation of China, which emits 29% of the world’s greenhouse gases. Without addressing that reality, we’ll continue to spend untold trillions, lose the energy independence that we gained under former President Donald Trump, and sit in the dark, while China continues to play by its own rules—just as it has throughout the coronavirus pandemic.

What is so undemocratic about President Biden’s climate change plan is that it has been served up as an executive order, without debate and without Congressional approval. What is so ominous about it is not its specificity—which sounds relatively harmless—but its vagueness and political potential. It’s a veritable environmental Enabling Act that can be used to justify any economic dictate, any security violation, or any foreign policy entanglement. Senate Majority Leader Chuck Schumer (D-NY) publicly advised Biden to “call a climate emergency … He can do many, many things under the emergency powers… that he could do without legislation.”

Even the President’s promise to replace the federal government’s gas-powered vehicles with electric versions is contained in another executive order to “buy American.”

The Biden administration is lying about the economic opportunities embedded in green energy, and its decision to “tackle” climate change is a blatant attempt to appease the left-wing Democrats who see Biden as their puppet. In the process, as he is doing with so many of these executive orders,

President Biden is destroying the American economy and naively trusting that brutal dictatorships like China will surrender before a bourgeois fetish like a greenhouse gas reduction target.

So much will be lost for nothing except America’s further prostration to China.

Feb. 2021 Arctic Ice Stays the Course

In January, most of the Arctic ocean basins are frozen over, and so the growth of ice extent slows down.  According to SII (Sea Ice Index), January on average adds 1.3M km2, and this January added 1.4M.  (Background is at Arctic Ice Year-End 2020.)  The few basins that can grow ice this time of year tend to fluctuate, alternately waxing and waning, which appears as a see-saw pattern in these images.

Two weeks into February, Arctic ice extents are growing faster than the 14-year average and are approaching the mean.  The graph below shows the ice recovery since mid-January for 2021, the 14-year average, and several recent years.

The graph shows a small deficit to average in mid-January, then several days of slow 2021 growth before the pace picks up in the later weeks.  Presently extents are slightly (1%) below average, close to 2019 and 2020 and higher than 2018.
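
The growth comparisons above are easy to reproduce.  Here is a minimal Python sketch, assuming a daily-extent CSV in the general shape of the NSIDC Sea Ice Index file (columns Year, Month, Day, Extent in millions of km2); the file name and column layout are assumptions to check against the actual download, not a verified specification.

```python
import pandas as pd

# Daily Northern Hemisphere extent; file name and headers are assumed --
# the real Sea Ice Index download may include a units row (adjust skiprows).
df = pd.read_csv("N_seaice_extent_daily.csv", skipinitialspace=True)

def january_gain(data: pd.DataFrame, year: int) -> float:
    """Extent added during January of `year`, in millions of km2."""
    jan = data[(data["Year"] == year) & (data["Month"] == 1)].sort_values("Day")
    return jan["Extent"].iloc[-1] - jan["Extent"].iloc[0]

# Compare January 2021 against a 14-year (2007-2020) average gain.
years = range(2007, 2021)
avg_gain = sum(january_gain(df, y) for y in years) / len(years)
print(f"January 2021 gain:    {january_gain(df, 2021):.2f} M km2")
print(f"14-year average gain: {avg_gain:.2f} M km2")
```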

February Ice Growth Despite See-Saws in Atlantic and Pacific

As noted above, at this time of year the Arctic adds ice on the fringes, since the central basins are already frozen over.  The animation above shows that Barents Sea on the right (Atlantic side) grew in the last two weeks by 175k km2 and is now 9% greater than its maximum last March.  Meanwhile, on the left (Pacific side), Bering below and Okhotsk above wax and wane over this period.  Okhotsk grows 210k km2 the first week, then gives half of it back the second week.  Bering waffles up and down, ending slightly higher.

The table below presents ice extents (km2) in the Arctic regions for day 44 (Feb. 13), compared to the 14-year average and to 2018.

Region                                2021 Day 44   Day 44 Average    2021-Ave.   2018 Day 44   2021-2018
(0) Northern_Hemisphere                  14546503         14678564      -132061      14140166      406337
(1) Beaufort_Sea                          1070689          1070254          435       1070445         244
(2) Chukchi_Sea                            966006           965691          315        965971          35
(3) East_Siberian_Sea                     1087120          1087134          -14       1087120           0
(4) Laptev_Sea                             897827           897842          -15        897845         -18
(5) Kara_Sea                               934988           906346        28642        874714       60274
(6) Barents_Sea                            837458           563224       274235        465024      372434
(7) Greenland_Sea                          645918           610436        35482        529094      116824
(8) Baffin_Bay_Gulf_of_St._Lawrence       1057623          1487547      -429924       1655681     -598058
(9) Canadian_Archipelago                   854597           853146         1451        853109        1489
(10) Hudson_Bay                           1260471          1260741         -270       1260838        -367
(11) Central_Arctic                       3206263          3211892        -5630       3117143       89120
(12) Bering_Sea                            559961           674196      -114235        319927      240034
(13) Baltic_Sea                            116090            94341        21749         76404       39686
(14) Sea_of_Okhotsk                       1027249           930357        96892        911105      116144
(15) Yellow_Sea                              9235            28237       -19002         33313      -24078
(16) Cook_Inlet                               223            11137       -10914         11029      -10806

The table shows that the Bering deficit to average is offset by the surplus in Okhotsk.  Baffin Bay shows the largest deficit, mostly offset by surpluses in the Barents, Kara and Greenland Seas.
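
The difference columns in the table are straightforward subtractions.  A short sketch of the computation, using a few rows from the table above (values in km2; tiny discrepancies against the published differences can arise because the 14-year averages are rounded):

```python
# Day-44 extents (km2) copied from the table: (2021, 14-year average, 2018)
regions = {
    "Barents_Sea":                     (837458,  563224,  465024),
    "Bering_Sea":                      (559961,  674196,  319927),
    "Sea_of_Okhotsk":                  (1027249, 930357,  911105),
    "Baffin_Bay_Gulf_of_St._Lawrence": (1057623, 1487547, 1655681),
}

print(f"{'Region':<34}{'2021-Ave.':>12}{'2021-2018':>12}")
for name, (y2021, avg, y2018) in regions.items():
    print(f"{name:<34}{y2021 - avg:>12}{y2021 - y2018:>12}")
```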

The polar bears have a Valentine’s Day wish for Arctic Ice.

And Arctic Ice loves them back, returning every year so the bears can roam and hunt for seals.

Footnote:

See-saw accurately describes Arctic ice in another sense:  the ice we see now is not the same ice we saw previously.  It is better to think of the Arctic as an ice blender than as an ice cap, explained in the post The Great Arctic Ice Exchange.

IPCC Scenarios Ensure Unreal Climate Forecasts

Roger Pielke Jr. has a new paper at ScienceDirect, Distorting the view of our climate future: The misuse and abuse of climate pathways and scenarios.  Excerpt in italics with my bolds.

Abstract

Climate science research and assessments under the umbrella of the Intergovernmental Panel on Climate Change (IPCC) have misused scenarios for more than a decade. Symptoms of misuse have included the treatment of an unrealistic, extreme scenario as the world’s most likely future in the absence of climate policy and the illogical comparison of climate projections across inconsistent global development trajectories.

Reasons why such misuse arose include (a) competing demands for scenarios from users in diverse academic disciplines that ultimately conflated exploratory and policy relevant pathways, (b) the evolving role of the IPCC – which extended its mandate in a way that creates an inter-relationship between literature assessment and literature coordination, (c) unforeseen consequences of employing a temporary approach to scenario development, (d) maintaining research practices that normalize careless use of scenarios, and (e) the inherent complexity and technicality of scenarios in model-based research and in support of policy.

Consequently, much of the climate research community is presently off-track from scientific coherence and policy-relevance.

Attempts to address scenario misuse within the community have thus far not worked. The result has been the widespread production of myopic or misleading perspectives on future climate change and climate policy. Until reform is implemented, we can expect the production of such perspectives to continue, threatening the overall credibility of the IPCC and associated climate research. However, because many aspects of climate change discourse are contingent on scenarios, there is considerable momentum that will make such a course correction difficult and contested – even as efforts to improve scenarios have informed research that will be included in the IPCC 6th Assessment.

Discussion of How Imaginary Scenarios Spoil Attempts to Envision Climate Futures

The article above is paywalled, but a previous post, reprinted below, goes into the background of the role of scenarios in climate modelling and demonstrates the effects by referring to results from the most realistic model, INMCM5.

Roger Pielke Jr. explains that climate model projections are unreliable because they are based on scenarios no longer bounded by reality.  His article is The Unstoppable Momentum of Outdated Science.  Excerpts in italics with my bolds.

Much of climate research is focused on implausible scenarios of the future, but implementing a course correction will be difficult.

In 2020, climate research finds itself in a similar situation to that of breast cancer research in 2007. Evidence indicates the scenarios of the future to 2100 that are at the focus of much of climate research have already diverged from the real world and thus offer a poor basis for projecting policy-relevant variables like economic growth and carbon dioxide emissions. A course-correction is needed.

In a new paper of ours just out in Environmental Research Letters we perform the most rigorous evaluation to date of how key variables in climate scenarios compare with data from the real world (specifically, we look at population, economic growth, energy intensity of economic growth and carbon intensity of energy consumption). We also look at how these variables might evolve in the near-term to 2040.
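
The four variables named here are the factors of the Kaya identity, the standard decomposition behind emissions scenarios (the identity is general background, not quoted from the paper).  Written out, it makes plain why an overestimate of any single driver multiplies directly into overestimated emissions:

```latex
% Kaya identity: CO2 emissions as a product of the four scenario drivers
% F: CO2 emissions, P: population, G: GDP, E: primary energy consumption
F = P \times \frac{G}{P} \times \frac{E}{G} \times \frac{F}{E}
% (G/P: economic growth, E/G: energy intensity, F/E: carbon intensity)
```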

We find that the most commonly-used scenarios in climate research have already diverged significantly from the real world, and that divergence is going to only get larger in coming decades. You can see this visualized in the graph above, which shows carbon dioxide emissions from fossil fuels from 2005, when many scenarios begin, to 2045. The graph shows emissions trajectories projected by the most commonly used climate scenarios (called SSP5-8.5 and RCP8.5, with labels on the right vertical axis), along with other scenario trajectories. Actual emissions to date (dark purple curve) and those of near-term energy outlooks (labeled as EIA, BP and ExxonMobil) all can be found at the very low end of the scenario range, and far below the most commonly used scenarios.

Our paper goes into the technical details, but in short, an important reason for the lower-than-projected carbon dioxide emissions is that economic growth has been slower than expected across the scenarios, and rather than seeing coal use expand dramatically around the world, it has actually declined in many regions.

It is even conceivable, if not likely, that the world passed “peak carbon dioxide emissions” in 2019. Crucially, the projections in the figure above are pre-Covid19, which means that actual emissions from 2020 to 2045 will be even lower than was projected in 2019.

While it is excellent news that the broader community is beginning to realize that scenarios are increasingly outdated, voluminous amounts of research have been and continue to be produced based on the outdated scenarios. For instance, O’Neill and colleagues find that “many studies” use scenarios that are “unlikely.” In fact, in their literature review such “unlikely” scenarios comprise more than 20% of all scenario applications from 2014 to 2019. They also call for “re-examining the assumptions underlying” the high-end emissions scenarios that are favored in physical climate research, impact studies and economic and policy analyses.

Make no mistake. The momentum of outdated science is powerful. Recognizing that a considerable amount of climate science is outdated is, in the words of the late Steve Rayner, “uncomfortable knowledge” — that knowledge which challenges widely-held preconceptions. According to Rayner, in such a context we should expect to see reactions to uncomfortable knowledge that include:

  • denial (that scenarios are off track),
  • dismissal (the scenarios are off track, but it doesn’t matter),
  • diversion (the scenarios are off track, but saying so advances the agenda of those opposed to action) and,
  • displacement (the scenarios are off track but there are perhaps compensating errors elsewhere within scenario assumptions).

Such responses reinforce the momentum of outdated science and make it more difficult to implement a much needed course correction.

Responding to climate change is critically important. So too is upholding the integrity of the science which helps to inform those responses. Identification of a growing divergence between scenarios and the real-world should be seen as an opportunity — to improve both science and policy related to climate — but also to develop new ways for science to be more nimble in getting back on track when research is found to be outdated.

[A previous post is reprinted below since it demonstrates how the scenarios drive forecasting by CMIP6 models, including the example of the best performing model: INMCM5]

Background from Previous Post: Best Climate Model: Mild Warming Forecasted

Links are provided at the end to previous posts describing versions 4 and 5 of the climate model from the Institute of Numerical Mathematics in Moscow, Russia.  Now we have forecasts for the 21st century published for INM-CM5 at Izvestiya, Atmospheric and Oceanic Physics, volume 56, pages 218–228 (July 7, 2020). The article is Simulation of Possible Future Climate Changes in the 21st Century in the INM-CM5 Climate Model by E. M. Volodin & A. S. Gritsun.  Excerpts are in italics with my bolds, along with a contextual comment.

Abstract

Climate changes in 2015–2100 have been simulated with the use of the INM-CM5 climate model following four scenarios: SSP1-2.6, SSP2-4.5, and SSP5-8.5 (single model runs) and SSP3-7.0 (an ensemble of five model runs). Changes in the global mean temperature and spatial distribution of temperature and precipitation are analyzed. The global warming predicted by the INM-CM5 model in the scenarios considered is smaller than that in other CMIP6 models. It is shown that the temperature in the hottest summer month can rise more quickly than the seasonal mean temperature in Russia. An analysis of a change in Arctic sea ice shows no complete Arctic summer ice melting in the 21st century under any model scenario. Changes in the meridional stream function in atmosphere and ocean are studied.

Overview

The climate is understood as the totality of statistical characteristics of the instantaneous states of the atmosphere, ocean, and other climate system components averaged over a long time period.

Therefore, we restrict ourselves to an analysis of some of the most important climate parameters, such as average temperature and precipitation. A more detailed analysis of individual aspects of climate change, such as changes in extreme weather and climate situations, will be the subject of another work. This study is not aimed at a full comparison with the results of other climate models, where calculations follow the same scenarios, since the results of other models had not yet been published in peer-reviewed journals at the time of this writing.

The INM-CM5 climate model [1, 2] is used for the numerical experiments. It differs from the previous version, INMCM4, which was also used for experiments on reproducing climate change in the 21st century [3], in the following:

  • an aerosol block has been added to the model, which allows inputting anthropogenic emissions of aerosols and their precursors;
  • the concentrations and optical properties of aerosols are calculated rather than prescribed, as they were in the previous version;
  • the parametrizations of cloud formation and condensation are changed in the atmospheric block;
  • the upper boundary in the atmospheric block is raised from 30 to 60 km;
  • the horizontal resolution in the ocean block is doubled along each coordinate; and,
  • the software related to adaptation to massively parallel computers is improved, which allows the effective use of a larger number of compute cores.

The model resolution in the atmospheric and aerosol blocks is 2° × 1.5° in longitude and latitude and 73 levels and, in the ocean, 0.5° × 0.25° and 40 levels. The calculations were performed at supercomputers of the Joint Supercomputer Center, Russian Academy of Sciences, and Moscow State University, with the use of 360 to 720 cores. The model calculated 6–10 years per 24 h in the above configuration.
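
As a rough back-of-envelope check on what those resolutions mean (my own arithmetic, ignoring land masking and any grid irregularities):

```python
# Approximate cell counts implied by the stated INM-CM5 resolutions.
atm_cells = int(360 / 2) * int(180 / 1.5) * 73      # 2° x 1.5°, 73 levels
ocn_cells = int(360 / 0.5) * int(180 / 0.25) * 40   # 0.5° x 0.25°, 40 levels
print(f"atmosphere: ~{atm_cells / 1e6:.1f} million cells")  # ~1.6 million
print(f"ocean:      ~{ocn_cells / 1e6:.1f} million cells")  # ~20.7 million
```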

Four scenarios were used to model the future climate: SSP1-2.6, SSP2-4.5, SSP3-7.0, and SSP5-8.5. The scenarios are described in [4]. The figure after the abbreviation SSP (Shared Socioeconomic Pathway) is the number of the societal development pathway (see the values in [4]). The number after the dash means the radiative forcing (W m–2) in 2100 compared to the preindustrial level. Thus, the SSP1-2.6 scenario is the most moderate and assumes rapid actions which sharply limit and then almost completely stop anthropogenic emissions. Within this scenario, greenhouse gas concentrations are maximal in the middle of the 21st century and then slightly decrease by the end of the century. The SSP5-8.5 scenario is the warmest and implies the fastest climate change. The scenarios are recommended for use in the project on comparing CMIP6 (Coupled Model Intercomparison Project, Phase 6, [5]) climate models.  Each scenario includes the time series of:

  • carbon dioxide, methane, nitrous oxide, and ozone concentrations;
  • emissions of anthropogenic aerosols and their precursors;
  • the concentration of volcanic sulfate aerosol; and
  • the solar constant. 

One model experiment was carried out for each of the above scenarios. It began at the beginning of 2015 and ended at the end of 2100. The initial state was taken from the so-called historical experiment with the same model, where climate changes were simulated for 1850–2014, and all impacts on the climate system were set according to observations. The results of the ensemble of historical experiments with the model under consideration are given in [6, 7]. For the SSP3-7.0 scenario, five model runs were performed, differing in the initial data taken from different historical experiments. The ensemble of numerical experiments is required to increase the statistical confidence of conclusions about climate changes.
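
The scenario naming convention just described is mechanical, so a tiny helper (hypothetical, purely for illustration) can unpack any label into its two parts:

```python
def parse_ssp(label: str) -> tuple[int, float]:
    """Split e.g. 'SSP5-8.5' into (pathway number, 2100 forcing in W/m2).

    Requires Python 3.9+ for str.removeprefix.
    """
    pathway, forcing = label.removeprefix("SSP").split("-")
    return int(pathway), float(forcing)

for scenario in ("SSP1-2.6", "SSP2-4.5", "SSP3-7.0", "SSP5-8.5"):
    print(scenario, "->", parse_ssp(scenario))
```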

[My Contextual Comment inserted Prior to Consideration of Results]

Firstly, the INM-CM5 historical experiment can be read about in detail by following a linked post (see Resources at the end), but this graphic summarizes the model hindcast of past global mean temperatures compared to HadCRUTv4.

Figure 1. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the next figures numbers on the time axis indicate the first year of the 5-year mean.

Secondly, the scenarios are important to understand since they stipulate data inputs the model must accept as conditions for producing forecasts according to a particular scenario (set of assumptions).  The document with complete details referenced as [4] is The Scenario Model Intercomparison Project (ScenarioMIP) for CMIP6.

All the details are written there but one diagram suggests the implications for the results described below.

Figure 5. CO2 emissions (a) and concentrations (b), anthropogenic radiative forcing (c), and global mean temperature change (d) for the three long-term extensions. As in Fig. 3, concentration, forcing, and temperature outcomes are calculated with a simple climate model (MAGICC version 6.8.01 BETA; Meinshausen et al., 2011a, b). Outcomes for the CMIP5 versions of the long-term extensions of RCP2.6 and RCP8.5 (Meinshausen et al., 2011c), as calculated with the same model, are shown for comparison.

As shown, SSP1-2.6 is virtually the same scenario as the former RCP2.6, while SSP5-8.5 is virtually the same as RCP8.5, the wildly improbable scenario (impossible according to some analysts).  Note that fossil-fuel CO2 emissions are assumed to quadruple in the next 80 years, with atmospheric CO2 rising from 400 to 1000 ppm (+150%).  Bear these suppositions in mind when considering the INMCM5 forecasts below.

Results [Continuing From Volodin and Gritsun]

Fig. 1. Changes in the global average surface temperature (K) with respect to the pre-industrial level in experiments according to the SSP1-2.6 (triangles), SSP2-4.5 (squares), SSP3-7.0 (crosses), and SSP5-8.5 (circles) scenarios.

Let us describe some simulation results of climate change in the 21st century. Figure 1 shows the change in the globally averaged surface air temperature with respect to the data of the corresponding historical experiment for 1850–1899. In the warmest SSP5-8.5 scenario (circles), the temperature rises by more than 4° by the end of the 21st century. In the SSP3-7.0 scenario (crosses), different members of the ensemble show warming by 3.4°–3.6°. In the SSP2-4.5 scenario (squares), the temperature increases by about 2.4°. According to the SSP1-2.6 scenario (triangles), the maximal warming of ~1.7° occurs in the middle of the 21st century, and the temperature exceeds the preindustrial temperature by 1.4° by the end of the century.

[My comment: Note that the vertical scale starts at +1.0C, the warming already realized in the historical experiment. Thus an anomaly of 1.4C by 2100 is an increase of only 0.4C over the present, while the SSP2-4.5 result adds 1.4C to the present.] 
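
The arithmetic behind that comment, written out (taking ~1.0° as the warming already realized by the start of the scenario runs, per the historical experiment):

```latex
\underbrace{1.4^{\circ}}_{\text{2100 anomaly vs. 1850--1899}}
\;-\; \underbrace{1.0^{\circ}}_{\text{already realized}}
\;=\; 0.4^{\circ}\ \text{additional warming under SSP1-2.6}
```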

The results for other CMIP6 models have not yet been published in peer-reviewed journals. However, according to a preliminary analysis (see, e.g., https://cmip6workshop19.sciencesconf.org/data/Session1_PosterSlides.pdf, p. 29), the INM-CM5 model shows the lowest temperature increase among the CMIP6 models considered for all the scenarios, due to its minimal equilibrium sensitivity to CO2 concentration doubling, which is ~2.1° for the current model version, as for the previous version, despite new condensation and cloud formation blocks. [For more on CMIP6 comparisons see the post Climate Models: Good, Bad and Ugly]
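
A crude illustration of how that low sensitivity caps the warming, using the CO2-only equilibrium relation ΔT ≈ S × log2(C/C0) and the concentration assumption noted earlier (400 → 1000 ppm).  This is a rough consistency check only; it ignores non-CO2 forcings and the ocean's delay, so the model's actual SSP5-8.5 response differs:

```latex
\Delta T_{\mathrm{eq}} \approx S \cdot \log_2\!\left(\frac{C}{C_0}\right)
  = 2.1^{\circ} \times \log_2\!\left(\frac{1000}{400}\right)
  \approx 2.1^{\circ} \times 1.32 \approx 2.8^{\circ}\ \text{(CO2-only)}
```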

Fig. 2. Differences between the annual average surface air temperatures (K) in 2071–2100 and 1981–2010 for the (a) SSP5-8.5 and (b) SSP1-2.6 scenarios.

The changes in the surface air temperature are similar for all scenarios; therefore, we analyze the difference between temperatures in 2071–2100 and 1981–2010 under the SSP5-8.5 and SSP1-2.6 scenarios (Fig. 2). The warming is maximal in the Arctic; it reaches 10° and 3°, respectively. Other features mainly correspond to CMIP5 data [8], including the INMCM4 model, which participates in the comparison. The warming on the continents of the Northern Hemisphere is about 2 times higher than the mean, and the warming in the Southern Hemisphere is noticeably less than in the Northern Hemisphere. The land surface is getting warmer than the ocean surface in all the scenarios except SSP1-2.6, because the greenhouse effect is expected to weaken in the second half of the 21st century in this scenario, and the higher heat capacity of the ocean prevents it from cooling as quickly as the land.

The changes in precipitation in December–February and June–August for the SSP3-7.0 scenario averaged over five members of the ensemble are shown in Fig. 4. All members of the ensemble show an increase in precipitation in the winter in a significant part of middle and high latitudes. In summer, the border between the increase and decrease in precipitation in Eurasia passes mainly around or to the north of 60°. In southern and central Europe, all members of the ensemble show a decrease in precipitation. Precipitation also increases in the region of the summer Asian monsoon, over the equatorial Pacific, due to a decrease in the upwelling and an increase in ocean surface temperature (OST). The distribution of changes in precipitation mainly corresponds to that given in [6, Fig. 12.22] for all CMIP5 models.

The change in the Arctic sea ice area in September, when the ocean ice cover is minimal over the year, is of interest. Figure 5 shows the sea ice area in September 2015–2019 to be 4–6 million km2 in all experiments, which corresponds to the estimate from observations in [11]. The Arctic sea ice does not completely melt in any of the experiments and under any scenario. However, according to [8, Figs. 12.28 and 12.31], many models participating in CMIP6, where the Arctic ice area is similar to that observed at the beginning of the 21st century, show the complete absence of ice by the end of the 21st century, especially under the RCP8.5 scenario, which is similar to SSP5-8.5.

The reason for these differences is the lower equilibrium sensitivity of the INM-CM5 model.

Note that the scatter of data between experiments under different scenarios in the first half of the 21st century is approximately the same as between different members of the ensemble under the SSP3-7.0 scenario and becomes larger only after 2070. The sea ice area values sort in accordance with the radiative forcing of the scenarios only after 2090. This indicates the large contribution of natural climate variability to the Arctic ice area. In the SSP1-2.6 experiment, the Arctic ice area at the end of the 21st century approximately corresponds to its area at the beginning of the experiment.

Climate changes can be also traced in the ocean circulation. Figure 6 shows the change in the 5-year averaged intensity of the Atlantic meridional circulation, defined as the maximum of the meridional streamfunction at 32° N. All experiments show a decrease in the intensity of meridional circulation in the 21st century and natural fluctuations against this decrease. The decrease is about 4.5–5 Sv for the SSP5-8.5 scenario, which is close to values obtained in the CMIP5 models [8, Fig. 12.35] under the RCP8.5 scenario. Under milder scenarios, the weakening of the meridional circulation is less pronounced. The reason for this weakening of the meridional circulation in the Atlantic, as far as we know, is not yet fully understood.

Conclusion

Numerical experiments have been carried out to reproduce climate changes in the 21st century according to four scenarios of the CMIP6 program [4, 5], including an ensemble of five experiments under the SSP3-7.0 scenario. The changes in the global mean surface temperature are analyzed. It is shown that the global warming predicted by the INM-CM5 model is the lowest among the currently published CMIP6 model data. The geographical distribution of changes in the temperature and precipitation is considered. According to the model, the temperature in the warmest summer month will increase faster than the summer average temperature in Russia.

None of the experiments show the complete melting of the Arctic ice cover by the end of the 21st century. Some changes in the ocean dynamics, including the flow velocity and the meridional stream function, are analyzed. The changes in the Hadley and Ferrel circulation in the atmosphere are considered.

Resources:

Climate Models: Good, Bad and Ugly

2018 Update: Best Climate Model INMCM5

Temperatures According to Climate Models