IPCC Wants a 10-year Lockdown

You’ve seen the News:

While analysts agree the historic lockdowns will significantly lower emissions, some environmentalists argue the drop is nowhere near enough.–USA Today

Emissions Declines Will Set Records This Year. But It’s Not Good News.  An “unprecedented” fall in fossil fuel use, driven by the Covid-19 crisis, is likely to lead to a nearly 8 percent drop, according to new research.–New York Times

The COVID-19 pandemic cut carbon emissions down to 2006 levels.  Daily global CO2 emissions dropped 17 percent in April — but it’s not likely to last–The Verge

In fact, the drop is not even enough to get the world back on track to meet the target of the 2015 Paris Agreement, which aims for global temperature rise of no more than 1.5 degrees above pre-industrial levels, said WMO Secretary-General Taalas. That would require at least a 7% annual drop in emissions, he added.–Reuters

An article at the Las Vegas Review-Journal draws out the implications: Environmentalists want 10 years of coronavirus-level emissions cuts. Excerpts in italics with my bolds.

It’s always been difficult for the layman to comprehend what it would entail to reduce carbon emissions enough to satisfy environmentalists who fret over global warming. Then came coronavirus.

The once-in-a-century pandemic has devastated the U.S. economy. The April unemployment rate is 14.7 percent and rising. Nevada has the highest unemployment rate in the country at 28.2 percent. One-third of families with children report feeling “food insecure,” according to the Institute for Policy Research, Northwestern University. Grocery prices saw their largest one-month increase since 1974.

To keep the economy afloat, the U.S. government has spent $2.4 trillion on coronavirus programs. Another expensive relief bill seems likely. Unless the end goal is massive inflation, this level of spending can’t continue.

Amazingly, the United States has it good compared with many other places in the world. David Beasley, director of the U.N. World Food Program, has estimated that the ongoing economic slowdown could push an additional 130 million “to the brink of starvation.”

That’s the bad news. The good news for global warming alarmists is that economic shutdowns reduce carbon emissions. If some restrictions remain in place worldwide through the end of the year, researchers writing in Nature estimate emissions will drop in 2020 by 7 percent.

For some perspective, the U.N.’s climate panel calls for a 7.6 percent reduction in emissions — every year for a decade. And now we get a real-world glimpse of the cost.

What happens if we return to the course we were on before this pandemic?  A previous post shows that the past continuing into the future is not disastrous, and that panic is unwarranted.

I Want You Not to Panic

I’ve been looking into claims for concern over rising CO2 and temperatures, and this post provides reasons why the alarms are exaggerated. It involves looking into the data and how it is interpreted.

First, the longer view suggests where to focus for understanding. Consider a long-term temperature record such as HadCRUT4. Taking it at face value, setting aside concerns about revisions and adjustments, we can see the pattern of the last 120 years following the Little Ice Age. The period between 1850 and 1900 is often considered pre-industrial, since modern energy and machinery took hold later on. The graph shows that warming was not much of a factor until temperatures rose to a peak in the 1940s, then cooled off into the 1970s, before ending the century with a rise matching the rate of the earlier warming. Overall, the accumulated warming was 0.8C.
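
For readers who want to check that arithmetic themselves, here is a minimal sketch of the calculation, assuming the HadCRUT4 annual global anomalies have been saved locally as a CSV named hadcrut4_annual.csv with columns year and anomaly (hypothetical file and column names; the actual download format differs).

```python
import pandas as pd

# Annual global mean temperature anomalies, one row per year
# (hypothetical layout: columns "year" and "anomaly" in degrees C).
df = pd.read_csv("hadcrut4_annual.csv")

# Use 1850-1899 as the "pre-industrial" reference, as in the text.
baseline = df.loc[df.year.between(1850, 1899), "anomaly"].mean()

# Compare with the most recent decade in the record.
recent = df.loc[df.year >= df.year.max() - 9, "anomaly"].mean()

print(f"Accumulated warming since pre-industrial: {recent - baseline:.2f} C")
```

The exact figure depends on the baseline and end window chosen, which is part of the reason different sources quote slightly different totals.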

Next, consider the record of CO2 concentrations in the atmosphere. It is important to know that modern measurement of CO2 really began in 1959 with the Mauna Loa observatory, coinciding with the mid-century cool period. The earlier values in the chart are reconstructed by NASA GISS from various sources and calibrated to reconcile with the modern record. It is also evident that the first 60 years saw minimal change in the values compared to the post-1959 rise, after WWII ended and manufacturing turned from military production to meeting consumer needs. So again the mid-20th century appears as a change point.

It becomes interesting to look at the last 60 years of temperature and CO2, from 1959 to 2019, particularly with so much clamour about climate emergency and crisis. This graph puts together rising CO2 and temperatures for that period. First, note that the accumulated warming is about 0.8C after fluctuations. And remember that those decades witnessed great human flourishing and prosperity by any standard of life quality. The rise of CO2 was steady and monotonic, with some acceleration into the 21st century.

Now let’s look at projections into the future, bearing in mind Mark Twain’s warning not to trust future predictions. No scientist knows all or most of the surprises that overturn continuity from today to tomorrow. Still, as weathermen well know, the best forecasts start from present conditions and add expected changes going forward.

Here is a look to century end as a baseline for context. No one knows what cooling and warming periods lie ahead, but one scenario is that the next 80 years could see continued warming at the same rate as the last 60 years. That presumes that the forces making the weather in the lifetime of many of us seniors will continue into the future. Of course, factors beyond our ken may deviate from that baseline, and humans will notice and adapt as they have always done. And in the back of our minds is the knowledge that we are 11,500 years into an interglacial period; the eventual return of the cold is the greater threat to both humanity and the biosphere.

Those who believe CO2 causes warming advocate reducing the use of fossil fuels for fear of overheating, apparently discounting the need for energy should winters grow harsher. The graph shows one projection similar to that for temperature, with the next 80 years accumulating CO2 at the same rate as the last 60. A second projection, in green, takes the somewhat higher rate of the last 10 years and extends it to century end. The latter trend would achieve a doubling of CO2.

What those two scenarios mean depends on how sensitive you think Global Mean Temperature is to changing CO2 concentrations. Climate models attempt to consider all relevant and significant factors and produce future scenarios for GMT. CMIP6 is the current group of models displaying a wide range of warming presumably from rising CO2. The one model closely replicating HadCRUT4 back to 1850 projects 1.8C higher GMT for a doubling of CO2 concentrations. If that held true going from 300 ppm to 600 ppm, the trend would resemble the red dashed line continuing the observed warming from the past 60 years: 0.8C up to now and another 1C for the rest of the century. Of course there are other models programmed for warming 2 or 3 times the rate observed.
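
The arithmetic behind that red dashed line can be written down directly. Here is a minimal sketch, assuming the conventional logarithmic relationship between CO2 concentration and temperature response and taking 1.8C per doubling as the sensitivity; the start, present and end concentrations are the round values used in the text and are approximations.

```python
import math

ECS = 1.8          # degrees C per doubling of CO2 (the model value quoted above)
C_START = 300.0    # ppm, taken as the starting level in the text
C_NOW = 410.0      # ppm, approximate present-day concentration
C_END = 600.0      # ppm, the doubling reached by the faster (green) projection

def warming(c_from, c_to, sensitivity=ECS):
    """Warming implied by a logarithmic response to a change in CO2 concentration."""
    return sensitivity * math.log2(c_to / c_from)

total = warming(C_START, C_END)    # full 300 -> 600 ppm doubling: 1.8 C
so_far = warming(C_START, C_NOW)   # roughly 0.8 C, if CO2 were the only driver
remaining = total - so_far         # about 1 C left for the rest of the century

print(f"{total:.1f} {so_far:.1f} {remaining:.1f}")
```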

People who take to the streets with signs forecasting doom in 11 or 12 years have fallen victim to the IPCC 450 and 430 ppm scenarios.  For years activists asserted that warming from pre-industrial levels can be contained to 2C if CO2 concentrations peak at 450 ppm.  Last year, the SR1.5 report lowered the threshold to 430 ppm, thus the shortened timetable for the end of life as we know it.

For the sake of brevity, this post leaves aside many technical issues. Uncertainties about the temperature record and about early CO2 levels, and the questions around Equilibrium Climate Sensitivity (ECS) and Transient Climate Response (TCR), are for another day. It should also be noted that GMT as an average hides a huge variety of fluxes over the globe’s surface, and thus larger warming in some places, such as Canada, and cooling in others, like the Southeast US. Ross McKitrick pointed out that Canada has already gotten more than 1.5C of warming and it has been a great social, economic and environmental benefit.

So I want people not to panic about global warming/climate change. Should we do nothing? On the contrary, we must invest in robust infrastructure to ensure reliable affordable energy and to protect against destructive natural events. And advanced energy technologies must be developed for the future since today’s wind and solar farms will not suffice.

It is good that Greta’s demands went unheeded at the Davos gathering. Panic is not useful for making wise policies, and as you can see above, we have time to get it right.

Media Turn Math Dopes into Dupes

Those who have investigated global warming/climate change discovered that the numbers don’t add up. But if you don’t do the math, you won’t know that, because the truth is found in the details (the devilish contradictions to sweeping claims). Those without numerical literacy (including, apparently, most journalists) are at the mercy of the loudest advocates. Social policy then becomes a matter of going along with herd popularity. Shout out to AOC!

Now we get an additional revelation regarding pandemic math and the refusal to correct over-the-top predictions. It’s the same dynamic, but accelerated, because pandemic models are falsified by reality much more quickly. Sean Trende writes at Real Clear Politics The Costly Failure to Update Sky-Is-Falling Predictions. Excerpts in italics with my bolds.

On March 6, Liz Specht, Ph.D., posted a thread on Twitter that immediately went viral. As of this writing, it has received over 100,000 likes and almost 41,000 retweets, and was republished at Stat News. It purported to “talk math” and reflected the views of “highly esteemed epidemiologists.” It insisted it was “not a hypothetical, fear-mongering, worst-case scenario,” and that, while the predictions it contained might be wrong, they would not be “orders of magnitude wrong.” It was also catastrophically incorrect.

The crux of Dr. Specht’s 35-tweet thread was that the rapid doubling of COVID-19 cases would lead to about 1 million cases by the end of April, 2 million by May 5, 4 million by May 11, and so forth. Under this scenario, with a 10% hospitalization rate, we would expect approximately 400,000 hospitalizations by mid-May, which would more than overwhelm the estimated 330,000 available hospital beds in the country. This would combine with a lack of protective equipment for health care workers and lead to them “dropping from the workforce for weeks at a time,” to shortages of saline drips and so forth. Half the world would be infected by the summer, and we were implicitly advised to buy dry goods and to prepare not to leave the house.
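
The thread’s projection is just compound doubling, which is easy to reproduce. Here is a minimal sketch, assuming a fixed doubling time of about six days (roughly what the thread worked with), a start of about 1 million cases at the end of April, and the 10% hospitalization rate and 330,000 beds quoted above; the anchor date is an assumption for illustration.

```python
from datetime import date

DOUBLING_TIME_DAYS = 6        # assumed fixed doubling time
HOSPITALIZATION_RATE = 0.10   # 10% of cases needing a hospital bed, per the thread
US_HOSPITAL_BEDS = 330_000    # estimated available beds quoted above

def projected_cases(cases_now, start, when, doubling_days=DOUBLING_TIME_DAYS):
    """Cases at a future date under uninterrupted exponential doubling."""
    elapsed = (when - start).days
    return cases_now * 2 ** (elapsed / doubling_days)

start = date(2020, 4, 29)   # ~1 million cases assumed at the end of April
for d in (date(2020, 5, 5), date(2020, 5, 11)):
    cases = projected_cases(1_000_000, start, d)
    beds_needed = cases * HOSPITALIZATION_RATE
    status = "overwhelmed" if beds_needed > US_HOSPITAL_BEDS else "within capacity"
    print(d, f"{cases:,.0f} cases", f"{beds_needed:,.0f} beds needed", status)
```

Two doubling periods take 1 million cases to 4 million, and 10% of 4 million is the 400,000 hospitalizations that would swamp the estimated bed count.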

Interestingly, this thread was wrong not because we managed to bend the curve and stave off the apocalypse; for starters, Dr. Specht described the cancellation of large events and workplace closures as something that would shift things by only days or weeks.

Instead, this thread was wrong because it dramatically understated our knowledge of the way the virus worked; it fell prey to the problem, common among experts, of failing to address adequately the uncertainty surrounding its point estimates. It did so in two opposing ways. First, it dramatically understated the rate of spread. If serological tests are to be remotely believed, we likely hit the apocalyptic milestone of 2 million cases quite some time ago. Not in the United States, mind you, but in New York City, where 20% of residents showed positive COVID-19 antibodies on April 23. Fourteen percent of state residents showed antibodies, suggesting 2.5 million cases in the Empire State alone; since antibodies take a while to develop, this was likely the state of affairs in mid-April or earlier.
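
The back-calculation from antibody prevalence to implied infections is a one-line multiplication. Here is a minimal sketch, using rounded population figures (roughly 8.4 million for New York City and 19.5 million for the state, both assumptions for illustration); a slightly different population estimate moves the statewide number toward the 2.5 million quoted above, and the order of magnitude is the point.

```python
# Implied infections = seroprevalence x population (back-of-envelope only).
NYC_POPULATION = 8_400_000        # approximate, assumed for illustration
NY_STATE_POPULATION = 19_500_000  # approximate, assumed for illustration

nyc_infections = 0.20 * NYC_POPULATION         # roughly 1.7 million in the city
state_infections = 0.14 * NY_STATE_POPULATION  # roughly 2.7 million statewide

print(f"Implied NYC infections:   {nyc_infections:,.0f}")
print(f"Implied state infections: {state_infections:,.0f}")
```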

But in addition to being wrong about the rate of spread, the thread was also very wrong about the rate of hospitalization. While New York City found its hospital system stretched, it avoided catastrophic failure, despite having within its borders the entire number of cases predicted for the country as a whole, a month earlier than predicted. Other areas of the United States found themselves with empty hospital beds and unused emergency capacity.

One would think that, given the amount of attention this was given in mainstream sources, there would be some sort of revisiting of the prediction. Of course, nothing of the sort occurred.

This thread has been absolutely memory-holed, along with countless other threads and Medium articles from February and March. We might forgive such forays on sites like Twitter and Medium, but feeding frenzies from mainstream sources are also passed over without the media ever revisiting to see how things turned out.

Consider Florida. Gov. Ron DeSantis was castigated for failing to close the beaches during spring break, and critics suggested that the state might be the next New York. I’ve written about this at length elsewhere, but Florida’s new cases peaked in early April, at which point it was a middling state in terms of infections per capita. The virus hasn’t gone away, of course, but the five-day rolling average of daily cases in Florida is roughly where it was in late March, notwithstanding the fact that testing has increased substantially. Taking increased testing into account, the positive test rate has gradually declined since late March as well, falling from a peak of 11.8% on April 1 to a low of 3.6% on May 12.
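
Both measures used in that paragraph, the five-day rolling average of new cases and the daily positive test rate, are simple to compute from a daily series. Here is a minimal sketch, assuming a DataFrame with hypothetical columns date, new_cases, and new_tests; the example numbers are made up purely to show the shape of the calculation.

```python
import pandas as pd

def summarize(daily: pd.DataFrame) -> pd.DataFrame:
    """Add a 5-day rolling average of new cases and the daily test positivity rate."""
    daily = daily.sort_values("date").copy()
    daily["cases_5day_avg"] = daily["new_cases"].rolling(window=5).mean()
    daily["positivity"] = daily["new_cases"] / daily["new_tests"]
    return daily

# Made-up illustration: flat-to-falling cases while testing expands,
# so the positivity rate falls even though daily counts barely move.
example = pd.DataFrame({
    "date": pd.date_range("2020-04-01", periods=7),
    "new_cases": [1100, 1200, 1000, 900, 950, 800, 700],
    "new_tests": [9300, 11000, 12000, 13000, 15000, 17000, 19000],
})
print(summarize(example)[["date", "cases_5day_avg", "positivity"]])
```

With testing expanding, the case count can hold roughly steady while the share of tests coming back positive falls, which is the pattern described above.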

Notwithstanding this, the Washington Post continues to press stories of public health officials begging state officials to close beaches (a more interesting angle at this point might be why these health officials were so wrong), while the New York Times noted a few days ago (misleadingly, and grossly so) that “Florida had a huge spike in cases around Miami after spring break revelry,” without providing the crucial context that the caseload mimicked increases in other states that did not play host to spring break. Again, perhaps the real story is that spring breakers passed COVID-19 among themselves and seeded it when they got home. I am sure some of this occurred, but it seems exceedingly unlikely that they would have spread it widely among themselves and not also spread it widely to bartenders, wait staff, hotel staff, and the like in Florida.

Florida was also one of the first states to experiment with reopening. Duval County (Jacksonville) reopened its beaches on April 19 to much national skepticism. Yet daily cases are lower today than they were the day that it reopened; there was a recent spike in cases associated with increased testing, but it is now receding.

Or consider Georgia, which one prominent national magazine claimed was engaging in “human sacrifice” by reopening. Yet, after nearly a month, a five-day average of Georgia’s daily cases looks like this:

What about Wisconsin, which was heavily criticized for holding in-person voting? It has had an increased caseload, but that is largely due to increased testing (up almost six-fold since early April) and an idiosyncratic outbreak in its meatpacking plants. The latter is tragic, but it is not related to the election; in fact, a Milwaukee Journal-Sentinel investigation failed to link any cases to the election; this has largely been ignored outside of conservative media sites such as National Review.

We could go on – after being panned for refusing to issue a stay-at-home order, South Dakota indeed suffered an outbreak (once again, in its meatpacking plants), but deaths there have consistently averaged less than three per day, to little fanfare – but the point is made. Some “feeding frenzies” have panned out, but many have failed to do so; rather than acknowledging this failure, the press typically moves on.

This is an unwelcome development, for a few reasons. First, not everyone follows this pandemic closely, and so a failure to follow up on how feeding frenzies end up means that many people likely don’t update their views as often as they should. You’d probably be forgiven if you suspected hundreds of cases and deaths followed the Wisconsin election.

Second, we obviously need to get policy right here, and to be sure, reporting bad news is important for producing informed public opinion. But reporting good news is equally important. Third, there are dangers to forecasting with incredible certitude, especially with a virus that was detected less than six months ago. There really is a lot we still don’t know, and people should be reminded of this. Finally, among people who do remember things like this, a failure to acknowledge errors foments cynicism and further distrust of experts.

The damage done to this trust is dangerous, for at this time we desperately need quality expert opinions and news reporting that we can rely upon.

Addendum:  Tilak Doshi makes the comparison to climate crisis claims Coronavirus And Climate Change: A Tale Of Two Hysterias writing at Forbes.  Excerpts in italics with my bolds.

It did not take long after the onset of the global pandemic for people to observe the many parallels between the covid-19 pandemic and climate change. An invisible novel virus of the SARS family now represents an existential threat to humanity. As does CO2, a colourless trace gas constituting 0.04% of the atmosphere which allegedly serves as the control knob of climate change. Lockdowns are to the pandemic what decarbonization is to climate change. Indeed, lockdowns and decarbonization share much in common in what they would curtail, from tourism and international travel to shopping and having a good time. It would seem that Greta Thunberg’s dreams have come true, and perhaps that is why CNN announced on Wednesday that it is featuring her on a coronavirus town-hall panel alongside health experts.

But, beyond being a soundbite and means of obtaining political cover, ‘following the science’ is neither straightforward nor consensual. The diversity of scientific views on covid-19 became quickly apparent in the dramatic flip-flop of the UK government. In the early stages of the spread in infection, Boris Johnson spoke of “herd immunity”, protecting the vulnerable and common sense (à la Sweden’s leading epidemiologist Professor Johan Giesecke) and rejected banning mass gatherings or imposing social distancing rules. Then, an unpublished bombshell March 16th report by Professor Neil Ferguson of Imperial College, London, warned of 510,000 deaths in the country if the country did not immediately adopt a suppression strategy. On March 23, the UK government reversed course and imposed one of Europe’s strictest lockdowns. For the US, the professor had predicted 2.2 million deaths absent similar government controls, and here too, Ferguson’s alarmism moved the federal government into lockdown mode.

Unlike climate change models that predict outcomes over a period of decades, however, it takes only days and weeks for epidemiological model forecasts to be falsified by data. Thus, by March 25th, Ferguson’s prediction of half a million fatalities in the UK was adjusted downward to “unlikely to exceed 20,000”, a reduction by a factor of 25. This drastic reduction was credited to the UK’s lockdown, which, however, had been imposed only 2 days previously, before any social distancing measures could possibly have had enough time to work.

For those engaged in the fraught debates over climate change over the past few decades, the use of alarmist models to guide policy has been a familiar point of contention. Much as Ferguson’s model drove governments to impose Covid-19 lockdowns affecting nearly 3 billion people on the planet, Professor Michael Mann’s “hockey stick” model was used by the IPCC, mass media and politicians to push the man-made global warming (now called climate change) hysteria over the past two decades.

As politicians abdicate policy formulation to opaque expertise in highly specialized fields such as epidemiology or climate science, a process of groupthink emerges as scientists generate ‘significant’ results which reinforce confirmation bias, affirm the “scientific consensus” and marginalize sceptics.

Rather than allocating resources and efforts towards protecting the vulnerable old and infirm while allowing the rest of the population to carry on with their livelihoods with individuals taking responsibility for safe socializing, most governments have opted to experiment with top-down economy-crushing lockdowns. And rather than mitigating real environmental threats such as the use of traditional biomass for cooking indoors that is a major cause of mortality in the developing world or the trade in wild animals, the climate change establishment advocates decarbonisation (read de-industrialization) to save us from extreme scenarios of global warming.

Taking the wheels off entire economies on the basis of wildly exaggerated models is not the way to go.

Footnote: Mark Hemingway sees how commonplace is the problem of uncorrected media falsity in his article When Did the Media Stop Running Corrections? Excerpts in italics with my bolds.

Vanity Fair quickly recast Sherman’s story without acknowledging its error: “This post has been updated to include a denial from Blackstone, and to reflect comments received after publication by Charles P. Herring, president of Herring Networks, OANN’s parent company.” In sum, Sherman based his piece on a premise that was wrong, and Vanity Fair merely acted as if all the story needed was a minor update.

Such post-publication “stealth editing” has become the norm. Last month, The New York Times published a story on the allegation that Joe Biden sexually assaulted a former Senate aide. After publication, the Times deleted the second half of this sentence: “The Times found no pattern of sexual misconduct by Mr. Biden, beyond the hugs, kisses and touching that women previously said made them uncomfortable.”

In an interview with Times media columnist Ben Smith, Times’ Executive Editor Dean Baquet admitted the sentence was altered at the request of Biden’s presidential campaign. However, if you go to the Times’ original story on the Biden allegations, there’s no note saying how the story was specifically altered or why.

It’s also impossible not to note how this failure to issue proper corrections and penchant for stealth editing goes hand-in-hand with the media’s ideological preferences.

In the end the media’s refusal to run corrections is a damnable practice for reasons that have nothing to do with Christianity. In an era when large majorities of the public routinely tell pollsters they don’t trust the media, you don’t have to be a Bible-thumper to see that admitting your mistakes promptly, being transparent about trying to correct them, and when appropriate, apologizing and asking for forgiveness – are good secular, professional ethics.


On Following the Science

H/T to Luboš Motl for posting at his blog Deborah Cohen, BBC, and models vs theories. Excerpts in italics with my bolds.

Dr Deborah Cohen is an award-winning health journalist who has a doctoral degree – which actually seems to be related to medical sciences – and who is working for BBC Newsnight now. I think that the 13-minute-long segment above is an excellent piece of journalism.

It seems to me that she primarily sees that the “models” predicting half a million dead Britons have spectacularly failed, and it is something that an honest health journalist simply must be interested in. And she seems to be an accomplished and award-winning journalist. Second, she seems to see through some of the “more internal” defects of bad medical (and not only medical) science. Her PhD almost certainly helps in that. Someone whose background is purely in humanities or the PR-or-communication gibberish simply shouldn’t be expected to be on par with a real PhD.

So she has talked to the folks at the “Oxford evidence-based medicine” institute and others who understand the defects of “computer models” as the basis of science or policymaking. Unsurprisingly, she is more or less led to the conclusion that the lockdown (in the U.K.) was a mistake.

If your equation – or computer model – assumes that 5% of those who contract the virus die (i.e. the probability is 5% that they die in a week if they get the virus), then your predicted fatality count may be inflated by a factor of 25, assuming that the actual case fatality rate is 0.2% – and it is something comparable to that. It should be common sense that if someone makes a factor-of-25 error in the choice of this parameter, his predictions may be wrong by a factor of 25, too. It doesn’t matter if the computer program looks like SimCity with 66.666 million Britons represented by a piece of a giant RAM memory of a supercomputer. This brute force obviously cannot compensate for a fundamental ignorance or error in your choice of the fatality rate.
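
The point that the parameter error propagates straight through to the output can be shown in a few lines. A minimal sketch, assuming predicted deaths are simply infections multiplied by the fatality rate, which is all the argument needs:

```python
infections = 4_000_000        # any fixed number of infections will do

assumed_ifr = 0.05            # 5% fatality rate fed into the model
plausible_ifr = 0.002         # 0.2% fatality rate, as suggested in the text

deaths_assumed = infections * assumed_ifr
deaths_plausible = infections * plausible_ifr

# The forecast is inflated by exactly the ratio of the two input parameters.
print(deaths_assumed / deaths_plausible)   # 25.0
```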

I would think that most 3-year-old kids get this simple point and maybe this opinion is right. Nevertheless, most adults seem to be completely braindead today and they don’t get this point. When they are told that something was calculated by a computer, they worship the predictions. They don’t ask “whether the program was based on a realistic or scientifically supported theory”. Just the brute power of the pile of silicon seems to amaze them.

So we always agreed, e.g. with Richard Lindzen, that an important part of the degeneration of climate science was the drift away from proper “theory” to “modeling”. A scientist may lean more towards doing experiments, finding facts and measuring parameters with her hands (and much of the experimental climate science remained OK; after all, Spencer and Christy are still measuring the temperature by satellites etc.); or be a theorist, for whom the brain is (even) more important than for the experimenter. Experimenters sort of continued to do their work. However, it’s mainly the “theorists” who hopelessly degenerated in climate science, under the influence of toxic ideology, politics, and corruption.

The real problem is that proper theorists – those who actually understand the science, can solve basic equations on the top of their heads, and are aware of all the intricacies in the process of finding the right equations, equivalence and inequivalence of equations, universal behavior, statistical effects etc. – were replaced by “modelers”, i.e. people who don’t really have a clue about science, who write computer-game-like code, worship their silicon, and mindlessly promote what comes out of this computer game. It is a catastrophe for the field – and the same was obviously happening to “theoretical epidemiology”, too.

“Models” and “good theory” aren’t just orthogonal. The culture of “models” is actively antiscientific because it comes with the encouragement to mindlessly trust in what happens in computer games. This isn’t just “different and independent from” the genuine scientific method. It just directly contradicts the scientific method. In science, you just can’t ever mindlessly trust something just because expensive hardware was used or a high number of operations was made by the CPU. These things are really negative for the trustworthiness and expected accuracy of the science, not positive. In science, you want to make things as simple as possible (because the proliferation of moving parts increases the probability of glitches) but not simpler; and you want to solve a maximum fraction of the issues analytically, not numerically or by a “simulation”.

Science is a systematic framework to figure out which statements about Nature are correct and which are incorrect.

And according to quantum mechanics, the truth values of propositions must be probabilistic. Quantum mechanics only predicts the “similarity [of propositions] to the truth” which is the translation of the Czech word for probability (pravděpodobnost).

It is the truth values (or probabilities) that matter in science – the separation of statements to right and wrong ones (or likely and unlikely ones). Again, I think that I am saying something totally elementary, something that I understood before I was 3 and so did many of you. But it seems obvious that the people who need to ask whether Leo’s or Stephen’s pictures are “theories of everything” must totally misunderstand even this basic point – that science is about the truth, not just representation of objects.

See also: The Deadly Lockdowns and Covid19 Linked to Affluence

Footnote:  Babylon Bee Has Some Fun with this Topic.

‘The Science On Climate Change Is Settled,’ Says Man Who Does Not Believe The Settled Science On Gender, Unborn Babies, Economics

PORTLAND, OR—Local man Trevor J. Gavyn pleaded with his conservative coworker to “believe the science on climate change,” though he himself does not believe the science on the number of genders there are, the fact that unborn babies are fully human, and that socialism has failed every time it has been tried.

“It’s just like, the science is settled, man,” he said in between puffs on his vape. “We just need to believe the scientists and listen to the experts here.”

“Facts don’t care about your feelings on the climate, bro,” he added, though he ignores the fact that there are only two biological genders. He also hand-waves away the science that an unborn baby is 100% biologically human the moment it is conceived and believes economics is a “conservative hoax foisted on us by the Illuminati and Ronald Reagan.”

“That whole thing is, like, a big conspiracy, man,” he said.

The conservative coworker, for his part, said he will trust the science on gender, unborn babies, and economics while simply offering “thoughts and prayers” for the climate.

Virus Models Are Accountable. Climate Models Not.

Paul Driessen and David Legates write at The Hill about what we are learning from coronavirus epidemic models, and why we should remain skeptical about forecasts from climate models. Their article is Fauci-Birx Climate Models? Excerpts in italics with my bolds and images.

President Trump and his Coronavirus Task Force presented some frightening numbers during their March 31 White House briefing. Based on now two-week-old data and models, as many as 100,000 Americans at the models’ low end, to 2.2 million at their high end, could die from the fast-spreading virus, they said.

However, the president, vice president, and Drs. Anthony Fauci and Deborah Birx hastened to add that those high-end numbers are based on computer models. And they are “unlikely” if Americans keep doing what they are doing now to contain, mitigate and treat the virus. Although that worst-case scenario “is possible,” it is “unlikely if we do the kinds of things that we’re essentially outlining right now.”

On March 31, Dr. Fauci said, the computer models were saying that, even with full mitigation, it is “likely” that America could still suffer at least 100,000 deaths. But he then added a very important point:

“The question is, are the models really telling us what’s going on? When someone creates a model, they put in various assumptions. And the models are only as good and as accurate as the assumptions you put into them. As we get more data, as the weeks go by, that might change. We feed the data back into the models and relook at the models.”

The data can change the assumptions – and thus the models’ forecasts.

“If we have more data like the NY-NJ metro area, the numbers could go up,” Dr. Birx added. But if the numbers coming in are more like Washington or California, which reacted early and kept their infection and death rates down – then the models would likely show lower numbers. “We’re trying to prevent that logarithmic increase in New Orleans and Detroit and Chicago – trying to make sure those cities work more like California than like the New York metro area.” That seems to be happening, for the most part.

If death rates from corona are misattributed or inflated, if other model assumptions should now change, if azithromycin, hydroxychloroquine and other treatments, and people’s immunities are reducing infections – then business shutdowns and stay-home orders could (and should) end earlier, and we can go back to work and life, rebuild America and the world’s economies …

And avoid different disasters, like these:

    • Millions of businesses that never reopen.
    • Tens of millions of workers with no paychecks.
    • Tens of trillions of dollars vanished from our economy.
    • Millions of families with lost homes and savings.
    • Millions of cases of depression, stroke, heart attack, domestic violence, suicide, murder-suicide, and early death due to depression, obesity and alcoholism, due to unemployment, foreclosure and destroyed dreams.

In other words, numerous deaths because of actions taken to prevent infections and deaths from COVID-19.

It is vital that they recheck the models and assumptions – and distinguish between COVID-19 deaths actually due to the virus … and not just associated with or compounded by it, but primarily due to age, obesity, pneumonia or other issues. We can’t afford a cure that’s worse than the disease – or a prolonged and deadly national economic shutdown that could have been shortened by updated and corrected models.

Now just imagine: What if we could have that same honest, science-based approach to climate models?

What if the White House, EPA, Congress, UN, EU and IPCC acknowledged that climate models are only as good and as accurate as the assumptions built into them? What if – as the months and years went by and we got more real-world temperature, sea level and extreme weather data – we used that information to honestly refine the models? Would the assumptions and therefore the forecasts change dramatically?

What if we use real science to help us understand Earth’s changing climate and weather? And base energy and other policies on real science that honestly examines manmade and natural influences on climate?

Many climate modelers claim we face existential manmade climate cataclysms caused by our use of fossil fuels. They use models to justify calls to banish fossil fuels that provide 80% of US and global energy; close down countless industries, companies and jobs; totally upend our economy; give trillions of dollars in subsidies to fossil fuel replacement companies; and drastically curtail our travel and lifestyles.

Shouldn’t we demand that these models be verified against real-world evidence?

Natural forces have caused climate changes and extreme weather events throughout history. What proof is there that what we see today is due to fossil fuel emissions, and not to those same natural forces? We certainly don’t want energy “solutions” that don’t work and are far worse than the supposed manmade climate and weather ‘virus.’

And we have the climate data. We’ve got years of data. The data show the models don’t match reality.

Model-predicted temperatures are more than 0.5 degrees F above actual satellite-measured average global temperatures – and “highest ever” records are mere hundredths of a degree above previous records from 50 to 80 years ago. Actual hurricane, tornado, sea level, flood, drought, and other historic records show no unprecedented trends or changes, no looming crisis, no evidence that humans have replaced the powerful natural forces that have always driven climate and weather in the real world outside the modelers’ labs.

Real science – and real scientists – seek to understand natural phenomena and processes. They pose hypotheses that they think best explain what they have witnessed, then test them against actual evidence, observations and data. If the hypotheses (and predictions based on them) are borne out by their subsequent observations or findings, the hypotheses become theories, rules or laws of nature – at least until someone finds new evidence that pokes holes in their assessments, or devises better explanations.

Real scientists often employ computers to analyze data more quickly and accurately, depict or model complex natural systems, or forecast future events or conditions. But they test their models against real-world evidence. If the models, observations and predictions don’t match up, real scientists modify or discard the models, and the hypotheses behind them. They engage in robust discussion and debate.

Real scientists don’t let models or hypotheses become substitutes for real-world data, evidence and observations.

They don’t alter or “homogenize” raw or historic data to make it look like the models actually work. They don’t tweak their models after comparing predictions to actual subsequent observations, to make it look like the models “got it right.” They don’t “lose” or hide data and computer codes, restrict peer review to closed circles of like-minded colleagues who protect one another’s reputations and funding, claim “the debate is over,” or try to silence anyone who asks inconvenient questions or criticizes their claims or models. Climate modelers have done all of this – and more.

Climate models have always overstated the warming. But even though modelers have admitted that their models are “tuned” – revised after the fact to make it look like they predicted temperatures accurately – the modelers have made no attempt to change the climate sensitivity to match reality. Why not?

They know disaster scenarios sell. Disaster forecasts keep them employed, swimming in research money – and empowered to tell legislators and regulators that humanity must take immediate, draconian action to eliminate all fossil fuel use – the economic, human and environmental consequences be damned. And they probably will never admit their mistakes or duplicity, much less be held accountable.

“Wash your hands! You could save millions of lives!” has far more impact than “You could save your own life, your kids’ lives, dozens of lives.” When it comes to climate change, you’re saving the planet.

With ‘Mann-made’ climate change, we are always shown the worst-case scenario: RCP 8.5, the “business-as-usual” … ten times more coal use in 2100 than now … “total disaster.” Alarmist climatologists know their scenario has maybe a 0.1 percent likelihood, and assumes no new energy technologies over the next 80 years. But energy technologies have evolved incredibly over the last 80 years – since 1940, the onset of World War II! Who could possibly think technologies won’t change at least as much going forward?

Disaster scenarios are promoted because most people don’t know any better – and voters and citizens won’t accept extreme measures and sacrifices unless they are presented with extreme disaster scenarios.

The Fauci-Birx team has been feeding updated data into their models, and forecasts for infections and deaths from the ChiCom-WHO coronavirus are going down. The team is doing data-based science. Let’s start demanding a similarly honest, factual, evidence-based approach to climate models and “dangerous manmade climate change.” Our energy, economy, livelihoods, lives and liberties depend on it.

Paul Driessen is senior policy analyst for the Committee For A Constructive Tomorrow (www.CFACT.org) and author of books and articles on energy, environment, climate and human rights issues. David R. Legates is a Professor of Climatology at the University of Delaware.

Top Climate Model Gets Better

Figure S7. Contributions of forcing and feedbacks to ECS in each model and for the multimodel means. Contributions from the tropical and extratropical portion of the feedback are shown in light and dark shading, respectively. Black dots indicate the ECS in each model, while upward and downward pointing triangles indicate contributions from non-cloud and cloud feedbacks, respectively. Numbers printed next to the multi-model mean bars indicate the cumulative sum of each plotted component. Numerical values are not printed next to residual, extratropical forcing, and tropical albedo terms for clarity. Models within each collection are ordered by ECS.

A previous post here discussed discovering that INMCM4 was the best CMIP5 model in replicating historical temperature records. Additional posts described improvements built into INMCM5, the next generation model included for CMIP6 testing. Later on is a reprint of the temperature history replication and the parameters included in the revised model. This post focuses on a recent report of additional enhancements by the modelers in order to better represent precipitation and extreme rainfall events.

The paper is Influence of various parameters of INM RAS climate model on the results of extreme precipitation simulation by M A Tarasevich and E M Volodin (2019). Excerpts in italics with my bolds.

Modern models of the Earth’s climate can reproduce not only the average climate condition, but also extreme weather and climate phenomena. Therefore, there arises the problem of comparing climate models for observable extreme weather events.

In [1, 2], various extreme weather and climatic situations are considered. According to the paper, 27 extreme indices are defined, characterizing different situations with high and low temperatures, with heavy precipitation or with absence of precipitation.

The results of simulation of the extreme indices with the INMCM4 [3] climate model were compared with the results of other models which took part in the CMIP5 project (Coupled Model Intercomparison Project, Phase 5) [2]. The comparison demonstrates that this model performs well for most indices except for those related to daily minimum temperature. For those indices the model shows one of the worst results.

The parameterizations of physical processes in the next model version, INMCM5, were replaced or tuned [4, 5], so that changes in the extreme indices simulation are expected.

The simulation results were compared to the ERA-Interim [6] reanalysis data, which were considered as the observational data for this study. Indices averaged over the 1981–2010 period were compared. A Mann-Whitney test with a 1% significance level was used to examine where changes are significant.
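
As an illustration of how such a grid-point significance test might be run, here is a minimal sketch using SciPy’s Mann-Whitney U test on two 30-year samples of an index, one from each model version; the sample values are randomly generated stand-ins, and the paper does not spell out exactly how its samples were constructed.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical 30-year samples (1981-2010) of one extreme index at one grid cell.
inmcm4_sample = rng.normal(loc=180, scale=10, size=30)  # made-up values
inmcm5_sample = rng.normal(loc=170, scale=10, size=30)  # made-up values

stat, p_value = mannwhitneyu(inmcm4_sample, inmcm5_sample, alternative="two-sided")
significant = p_value < 0.01   # 1% significance level, as in the paper
print(p_value, significant)
```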

To evaluate the quality of simulation of extreme weather phenomena, the extreme indices were calculated [7] using the results of computations performed by two versions of the INM RAS climate model (INMCM4 and INMCM5) and the ERA-Interim reanalysis. We took the root mean square deviation between the index values computed from the modeled and reanalysis data as the measure of simulation quality. The average is taken over land only.
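
As a concrete picture of that quality measure, here is a minimal sketch of a land-only root mean square deviation between a modeled index field and the reanalysis field, each already averaged over 1981–2010. The cosine-of-latitude area weighting is my assumption; the paper only states that the average is taken over land.

```python
import numpy as np

def land_rmsd(model_index, reanalysis_index, land_mask, lat=None):
    """RMS deviation of an extreme index over land grid cells.

    model_index, reanalysis_index : 2D numpy arrays (lat x lon) of the 1981-2010 mean index
    land_mask : boolean 2D array, True over land
    lat : optional 1D array of latitudes for cosine area weighting (assumed choice)
    """
    diff2 = (model_index - reanalysis_index) ** 2
    if lat is not None:
        weights = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(diff2)
    else:
        weights = np.ones_like(diff2)
    weights = np.where(land_mask, weights, 0.0)
    return np.sqrt(np.sum(diff2 * weights) / np.sum(weights))
```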

Tables 1 and 2 present the names of extreme indices related to temperature and precipitation, their labels and measurement units, as well as the land only averaged standard deviations for these indices between the ERA-Interim reanalysis and different versions of the INM RAS climate model.

Table 1 shows that the simulation of almost all temperature indices has improved in the INMCM5 compared to INMCM4. In particular, the simulation of the following extreme indices related to the minimum daily temperature improved significantly (by 37–56%): the annual daily minimum temperature (TNn), the number of frost days (FD) and tropical nights (TR), the diurnal temperature range (DTR), and the growing season length (GSL).

[Comment: Note that values in these tables are standard deviations from observations as presented by ERA reanalysis. So for example, growing season length (GSL) varied from mean ERA values by 24 days in INMCM4, but improved to a 15 day differential in INMCM5.]

Table 2 shows that the simulation of the number of heavy (R10mm) and very heavy (R20mm) precipitation days, consecutive wet days (CWD), simple daily intensity (SDII), and total wet-day precipitation (PRCPTOT) noticeably improved in INMCM5. At the same time, the simulation of indices related to the intensity (RX5day) and the amount (R95p) of precipitation on very rainy days became worse.

Improvements Added to INMCM5

To improve the simulation of extreme precipitation by the INMCM5 model, the following physical processes were considered: evaporation of precipitation in the upper atmosphere; mixing of horizontal velocity components due to large-scale condensation and deep convection; air resistance acting on falling precipitation particles.

Both large-scale condensation and deep convection cause vertical motion, which redistributes the horizontal momentum between the nearby air layers. The implementation of mixing due to large-scale condensation was added to the model. For short we will refer to the INMCM5 version with these changes as INMCM5VM (INMCM5 Velocities Mixing).

Since precipitation particles (water droplets or ice crystals) move in the surrounding air, a drag force arises that carries the air along with the particles. This resistance force can be included in the right hand side of the momentum balance equation, which is part of the atmosphere hydrothermodynamic system of equations. Accurate accounting for the effect of this force requires numerical solving of an additional Poisson-type equation. For short, we will refer to the INMCM5 model version with the air resistance and vertical mixing of the horizontal velocity components as INMCM5AR (INMCM5 Air Resistance).

Figure 3. (a) RX5day index values averaged over 1981–2010 according to ERA-Interim data. (b-d)  Deviations of the same average obtained from INMCM5, INMCM5VM, and INMCM5AR data. Statistically insignificant deviations are presented as white.

Table 2 shows that the quality of simulation of all precipitation-related extreme indices in INMCM5AR either improved by 3–21 % compared to INMCM5 or remained unchanged.

Figures 2d, 3d show the spatial distribution of the deviations for max 1 day (RX1day) and 5 day (RX5day) precipitation according to INMCM5AR compared to INMCM5. The model with air resistance acting on falling precipitation particles compared to INMCM5 significantly underestimates RX1day and RX5day in South Africa, South and East Asia, and slightly underestimates the indicated extreme indices in Tibet.

Taking into account the air resistance acting on falling precipitation particles significantly reduces  the overestimation of RX1day and RX5day observed in INMCM5 in South Africa, South and East Asia, and leads to an improvement in the quality of extreme indices associated with the precipitation amount on very rainy days and their intensity simulation by 9–21 %. At the same time, a significant overestimation of the RX1day and RX5day indices in the Amazon basin and Southeast Asia, as well as their underestimation in West Africa, still remain.

Footnote: 

A simple analysis shows that if the climate sensitivity estimated by INMCM5 (1.8C per doubling of CO2) were realized over the next 80 years, it would mean a continuation of the warming of the last 60 years.  The accumulated rise in GMT would be about 1.2C for the 21st century, well below the IPCC 1.5C aspiration.  See I Want You Not to Panic
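
One back-of-envelope way to reproduce a figure in that range, assuming the same logarithmic response used earlier and round values of roughly 370 ppm at the start of the century and 600 ppm at its end (both assumptions for illustration; the exact result shifts with the assumed start value):

```python
import math

ECS = 1.8        # INMCM5 estimate, degrees C per doubling of CO2
C_2000 = 370.0   # ppm, roughly the concentration at the start of the century (assumed)
C_2100 = 600.0   # ppm, if the faster recent growth continues to a doubling (assumed)

rise_21st_century = ECS * math.log2(C_2100 / C_2000)
print(f"{rise_21st_century:.2f} C")   # on the order of 1.2-1.3 C
```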

Update February 4, 2020

A recent comparison of INMCM5 and other CMIP6 climate models is discussed in the post
Climate Models: Good, Bad and Ugly

Updated with October 25, 2018 Report

A previous analysis, Temperatures According to Climate Models, showed that only one of 42 CMIP5 models was close to hindcasting past temperature fluctuations. That model was INMCM4, which also projected an unalarming 1.4C warming to the end of the century, in contrast to the other models programmed for future warming at five times the past rate.

In a recent comment thread, someone asked what has been done recently with that model, given that it appears to be “best of breed.” So I went looking and this post summarizes further work to produce a new, hopefully improved version by the modelers at the Institute of Numerical Mathematics of the Russian Academy of Sciences.


A previous post a year ago went into the details of improvements made in producing the latest iteration INMCM5 for entry into the CMIP6 project.  That text is reprinted below.

Now a detailed description of the model’s global temperature outputs has been published October 25, 2018 in Earth System Dynamics: Simulation of observed climate changes in 1850–2014 with climate model INM-CM5 (title is a link to the pdf). Excerpts below with my bolds.

Figure 1. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the next figures numbers on the time axis indicate the first year of the 5-year mean.

Abstract

Climate changes observed in 1850-2014 are modeled and studied on the basis of seven historical runs with the climate model INM-CM5 under the scenario proposed for Coupled Model Intercomparison Project, Phase 6 (CMIP6). In all runs global mean surface temperature rises by 0.8 K at the end of the experiment (2014) in agreement with the observations. Periods of fast warming in 1920-1940 and 1980-2000 as well as its slowdown in 1950-1975 and 2000-2014 are correctly reproduced by the ensemble mean. The notable change here with respect to the CMIP5 results is correct reproduction of the slowdown of global warming in 2000-2014, which we attribute to a more accurate description of the Solar constant in the CMIP6 protocol. The model is able to reproduce correct behavior of global mean temperature in 1980-2014 despite incorrect phases of the Atlantic Multidecadal Oscillation and Pacific Decadal Oscillation indices in the majority of experiments. The Arctic sea ice loss in recent decades is reasonably close to the observations in just one model run; overall the model underestimates Arctic sea ice loss by a factor of 2.5. The spatial pattern of the model mean surface temperature trend during the last 30 years looks close to the one for the ERA-Interim reanalysis. The model correctly estimates the magnitude of stratospheric cooling.

Additional Commentary

Observational data of GMST for 1850-2014 used for verification of model results were produced by HadCRUT4 (Morice et al 2012). Monthly mean sea surface temperature (SST) data ERSSTv4 (Huang et al 2015) are used for comparison of the AMO and PDO indices with that of the model. Data of Arctic sea ice extent for 1979-2014 derived from satellite observations are taken from Comiso and Nishio (2008). Stratospheric temperature trend and geographical distribution of near surface air temperature trend for 1979-2014 are calculated from ERA Interim reanalysis data (Dee et al 2011).

Keeping in mind the arguments that the GMST slowdown at the beginning of the 21st century could be due to the internal variability of the climate system, let us look at the behavior of the AMO and PDO climate indices. Here we calculated the AMO index in the usual way, as the SST anomaly in the Atlantic at the latitudinal band 0N-60N minus the anomaly of the GMST. Model and observed 5-year mean AMO index time series are presented in Fig.3. The well known oscillation with a period of 60-70 years can be clearly seen in the observations. Among the model runs, only one (dashed purple line) shows an oscillation with a period of about 70 years, but without a significant maximum near year 2000. In other model runs there is no distinct oscillation with a period of 60-70 years; a period of 20-40 years prevails. As a result, none of the seven model trajectories reproduces the behavior of the observed AMO index after year 1950 (including its warm phase at the turn of the 20th and 21st centuries). One can conclude that anthropogenic forcing is unable to produce any significant impact on the AMO dynamics, as its index averaged over 7 realizations stays around zero within a one sigma interval (0.08). Consequently, the AMO dynamics is controlled by internal variability of the climate system and cannot be predicted in historical experiments. On the other hand, the model can correctly predict GMST changes in 1980-2014 while having the wrong phase of the AMO (blue, yellow, orange lines in Figs. 1 and 3).
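
For readers who want to see what that index calculation looks like in practice, here is a minimal sketch using xarray. It assumes a dataset holding a monthly sea surface temperature field named sst on a regular latitude/longitude grid (0-360 longitudes) and a global mean surface temperature series named gmst; the variable names, the Atlantic box, and the anomaly baseline are all my assumptions, not details taken from the paper.

```python
import numpy as np
import xarray as xr

def amo_index(ds: xr.Dataset) -> xr.DataArray:
    """AMO index as the North Atlantic (0-60N) SST anomaly minus the GMST anomaly."""
    # Anomalies relative to an assumed 1981-2010 monthly climatology.
    clim = ds.sel(time=slice("1981", "2010")).groupby("time.month").mean("time")
    anom = ds.groupby("time.month") - clim

    # Area-weighted mean SST anomaly over a crude Atlantic box, 0-60N
    # (assumes latitude ascending and longitude in 0-360 degrees).
    natl = anom.sst.sel(lat=slice(0, 60), lon=slice(280, 360))
    weights = np.cos(np.deg2rad(natl.lat))
    natl_mean = natl.weighted(weights).mean(("lat", "lon"))

    return natl_mean - anom.gmst
```

A 5-year running mean of the monthly result (for example, .rolling(time=60, center=True).mean()) gives the kind of smoothed series shown in the paper’s Figure 3.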

Conclusions

Seven historical runs for 1850-2014 with the climate model INM-CM5 were analyzed. It is shown that the magnitude of the GMST rise in model runs agrees with the estimate based on the observations. All model runs reproduce the stabilization of GMST in 1950-1970, fast warming in 1980-2000 and a second GMST stabilization in 2000-2014, suggesting that the major factor for predicting GMST evolution is the external forcing rather than system internal variability. Numerical experiments with the previous model version (INMCM4) for CMIP5 showed unrealistic gradual warming in 1950-2014. The difference between the two model results could be explained by more accurate modeling of stratospheric volcanic and tropospheric anthropogenic aerosol radiation effects (stabilization in 1950-1970) due to the new aerosol block in INM-CM5 and more accurate prescription of the Solar constant scenario (stabilization in 2000-2014) in the CMIP6 protocol. Four of seven INM-CM5 model runs simulate the acceleration of warming in 1920-1940 in a correct way; the other three produce it earlier or later than in reality. This indicates that for the warming of 1920-1940 the climate system's natural variability plays a significant role.

No model trajectory reproduces the correct time behavior of the AMO and PDO indices. Taking into account our results on the GMST modeling, one can conclude that anthropogenic forcing does not produce any significant impact on the dynamics of the AMO and PDO indices, at least for the INM-CM5 model. In turn, correct prediction of the GMST changes in 1980-2014 does not require correct phases of the AMO and PDO, as all model runs have correct values of the GMST while in at least three model experiments the phases of the AMO and PDO are opposite to the observed ones in that time. The North Atlantic SST time series produced by the model correlates better with the observations in 1980-2014. Three out of seven trajectories have a strongly positive North Atlantic SST anomaly as in the observations (in the other four cases we see near-to-zero changes for this quantity).

The INMCM5 has the same skill for prediction of the Arctic sea ice extent in 2000-2014 as the CMIP5 models, including INMCM4. It underestimates the rate of sea ice loss by a factor between two and three. In one extreme case the magnitude of this decrease is as large as in the observations, while in the other the sea ice extent does not change compared to preindustrial ages. In part this could be explained by the strong internal variability of the Arctic sea ice, but obviously the new version of the INMCM model and the new CMIP6 forcing protocol do not improve prediction of the Arctic sea ice extent response to anthropogenic forcing.

Previous Post:  Climate Model Upgraded: INMCM5 Under the Hood

Earlier in 2017 came this publication Simulation of the present-day climate with the climate model INMCM5 by E.M. Volodin et al. Excerpts below with my bolds.

In this paper we present the fifth generation of the INMCM climate model that is being developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INMCM5). The most important changes with respect to the previous version (INMCM4) were made in the atmospheric component of the model. Its vertical resolution was increased to resolve the upper stratosphere and the lower mesosphere. A more sophisticated parameterization of condensation and cloudiness formation was introduced as well. An aerosol module was incorporated into the model. The upgraded oceanic component has a modified dynamical core optimized for better implementation on parallel computers and has two times higher resolution in both horizontal directions.

Analysis of the present-day climatology of the INMCM5 (based on the data of historical run for 1979–2005) shows moderate improvements in reproduction of basic circulation characteristics with respect to the previous version. Biases in the near-surface temperature and precipitation are slightly reduced compared with INMCM4 as  well as biases in oceanic temperature, salinity and sea surface height. The most notable improvement over INMCM4 is the capability of the new model to reproduce the equatorial stratospheric quasi-biannual oscillation and statistics of sudden stratospheric warmings.

The family of INMCM climate models, as most climate system models, consists of two main blocks: the atmosphere general circulation model, and the ocean general circulation model. The atmospheric part is based on the standard set of hydrothermodynamic equations with hydrostatic approximation written in advective form. The model prognostic variables are wind horizontal components, temperature, specific humidity and surface pressure.

Atmosphere Module

The INMCM5 borrows most of the atmospheric parameterizations from its previous version. One of the few notable changes is the new parameterization of clouds and large-scale condensation. In the INMCM5 cloud area and cloud water are computed prognostically according to Tiedtke (1993). That includes the formation of large-scale cloudiness as well as the formation of clouds in the atmospheric boundary layer and clouds of deep convection. Decrease of cloudiness due to mixing with unsaturated environment and precipitation formation are also taken into account. Evaporation of precipitation is implemented according to Kessler (1969).

In the INMCM5 the atmospheric model is complemented by the interactive aerosol block, which is absent in the INMCM4. Concentrations of coarse and fine sea salt, coarse and fine mineral dust, SO2, sulfate aerosol, hydrophilic and hydrophobic black and organic carbon are all calculated prognostically.

Ocean Module

The oceanic module of the INMCM5 uses generalized spherical coordinates. The model “South Pole” coincides with the geographical one, while the model “North Pole” is located in Siberia beyond the ocean area to avoid numerical problems near the pole. A vertical sigma coordinate is used. The finite-difference equations are written using the Arakawa C-grid. The differential and finite-difference equations, as well as methods of solving them, can be found in Zalesny et al. (2010).

The INMCM5 uses explicit schemes for advection, while the INMCM4 used schemes based on splitting upon coordinates. Also, the iterative method for solving linear shallow water equation systems is used in the INMCM5 rather than direct method used in the INMCM4. The two previous changes were made to improve model parallel scalability. The horizontal resolution of the ocean part of the INMCM5 is 0.5 × 0.25° in longitude and latitude (compared to the INMCM4’s 1 × 0.5°).

Both the INMCM4 and the INMCM5 have 40 levels in the vertical. The parallel implementation of the ocean model can be found in Terekhov et al. (2011). The oceanic block includes vertical mixing and isopycnal diffusion parameterizations (Zalesny et al. 2010). Sea ice dynamics and thermodynamics are parameterized according to Iakovlev (2009). Assumptions of elastic-viscous-plastic rheology and a single ice thickness gradation are used. The time step in the oceanic block of the INMCM5 is 15 min.
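As a rough check on what that doubling of ocean resolution means in grid-point terms, here is a back-of-envelope sketch in Python. It assumes an idealized regular latitude-longitude grid, whereas the actual INMCM ocean grid is curvilinear with the pole displaced into Siberia, so the counts are only indicative:

```python
# Indicative horizontal cell counts for an idealized global lat-lon ocean grid.
# (The real INMCM grid is curvilinear with its "North Pole" moved onto land.)
def cell_count(dlon_deg, dlat_deg):
    return int(360 / dlon_deg) * int(180 / dlat_deg)

inmcm4 = cell_count(1.0, 0.5)    # 1 x 0.5 deg    -> 360 * 360 = 129,600 columns per level
inmcm5 = cell_count(0.5, 0.25)   # 0.5 x 0.25 deg -> 720 * 720 = 518,400 columns per level
print(inmcm4, inmcm5, inmcm5 / inmcm4)   # four times as many water columns, each with 40 levels
```

Doubling the resolution in both directions quadruples the number of water columns, which is why the changes aimed at parallel scalability (explicit advection, iterative shallow-water solver) mattered for this upgrade.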

Note the size of the human emissions next to the red arrow.

Carbon Cycle Module

The climate model INMCM5 has a carbon cycle module (Volodin 2007), where atmospheric CO2 concentration and carbon in vegetation, soil and ocean are calculated. In soil, a single carbon pool is considered. In the ocean, the only prognostic variable in the carbon cycle is total inorganic carbon. The biological pump is prescribed. The model calculates methane emission from wetlands and has a simplified methane cycle (Volodin 2008). Parameterizations of some electrical phenomena, including calculation of the ionospheric potential and flash intensity (Mareev and Volodin 2014), are also included in the model.

Surface Temperatures

When compared to the INMCM4 surface temperature climatology, the INMCM5 shows several improvements. Negative bias over continents is reduced mainly because of the increase in daily minimum temperature over land, which is achieved by tuning the surface flux parameterization. In addition, positive bias over southern Europe and eastern USA in summer typical for many climate models (Mueller and Seneviratne 2014) is almost absent in the INMCM5. A possible reason for this bias in many models is the shortage of soil water and suppressed evaporation leading to overestimation of the surface temperature. In the INMCM5 this problem was addressed by the increase of the minimum leaf resistance for some vegetation types.

Nevertheless, some problems migrate from one model version to the other: negative bias over most of the subtropical and tropical oceans, and positive bias over the Atlantic to the east of the USA and Canada. Root mean square (RMS) error of annual mean near surface temperature was reduced from 2.48 K in the INMCM4 to 1.85 K in the INMCM5.
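For readers wondering how such an RMS error figure is computed, here is a minimal sketch of an area-weighted root-mean-square difference between a model climatology and observations (Python/numpy, with synthetic placeholder fields; the actual INMCM5 grids and observational datasets are not reproduced here):

```python
import numpy as np

def area_weighted_rmse(model, obs, lat_deg):
    """RMS difference between two (lat, lon) fields, weighted by cos(latitude).

    model, obs : 2-D arrays of shape (nlat, nlon), e.g. annual-mean near-surface
                 temperature climatologies (hypothetical inputs).
    lat_deg    : 1-D array of grid latitudes in degrees.
    """
    w = np.cos(np.deg2rad(lat_deg))[:, None]       # weight each row by its cell area
    w = np.broadcast_to(w, model.shape)
    return float(np.sqrt(np.sum(w * (model - obs) ** 2) / np.sum(w)))

# Example with toy fields on a 2-degree grid:
lat = np.arange(-89, 90, 2.0)
lon = np.arange(0, 360, 2.0)
rng = np.random.default_rng(0)
obs = 288 + 30 * np.cos(np.deg2rad(lat))[:, None] + 0 * lon   # toy climatology (K)
mod = obs + rng.normal(0, 1.85, size=obs.shape)               # toy model with ~1.85 K errors
print(round(area_weighted_rmse(mod, obs, lat), 2))
```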

Precipitation

In mid-latitudes, the positive precipitation bias over the ocean prevails in winter while the negative bias occurs in summer. Compared to the INMCM4, the biases over the western Indian Ocean, Indonesia, the eastern tropical Pacific and the tropical Atlantic are reduced. A possible reason for this is the better reproduction of the tropical sea surface temperature (SST) in the INMCM5 due to the increase of the spatial resolution in the oceanic block, as well as the new condensation scheme. The RMS annual mean model bias for precipitation is 1.35 mm/day for the INMCM5 compared to 1.60 mm/day for the INMCM4.

Cloud Radiation Forcing

Cloud radiation forcing (CRF) at the top of the atmosphere is one of the most important climate model characteristics, as errors in CRF frequently lead to an incorrect surface temperature.

In the high latitudes model errors in shortwave CRF are small. The model underestimates longwave CRF in the subtropics but overestimates it in the high latitudes. Errors in longwave CRF in the tropics tend to partially compensate errors in shortwave CRF. Both errors have positive sign near 60S leading to warm bias in the surface temperature here. As a result, we have some underestimation of the net CRF absolute value at almost all latitudes except the tropics. Additional experiments with tuned conversion of cloud water (ice) to precipitation (for upper cloudiness) showed that model bias in the net CRF could be reduced, but that the RMS bias for the surface temperature will increase in this case.

A table from another paper provides the climate parameters described by INMCM5.

Climate Parameter | Observations | INMCM3 | INMCM4 | INMCM5
Incoming solar radiation at TOA | 341.3 [26] | 341.7 | 341.8 | 341.4
Outgoing solar radiation at TOA | 96–100 [26] | 97.5 ± 0.1 | 96.2 ± 0.1 | 98.5 ± 0.2
Outgoing longwave radiation at TOA | 236–242 [26] | 240.8 ± 0.1 | 244.6 ± 0.1 | 241.6 ± 0.2
Solar radiation absorbed by surface | 154–166 [26] | 166.7 ± 0.2 | 166.7 ± 0.2 | 169.0 ± 0.3
Solar radiation reflected by surface | 22–26 [26] | 29.4 ± 0.1 | 30.6 ± 0.1 | 30.8 ± 0.1
Longwave radiation balance at surface | –54 to –58 [26] | –52.1 ± 0.1 | –49.5 ± 0.1 | –63.0 ± 0.2
Solar radiation reflected by atmosphere | 74–78 [26] | 68.1 ± 0.1 | 66.7 ± 0.1 | 67.8 ± 0.1
Solar radiation absorbed by atmosphere | 74–91 [26] | 77.4 ± 0.1 | 78.9 ± 0.1 | 81.9 ± 0.1
Direct heat flux from surface | 15–25 [26] | 27.6 ± 0.2 | 28.2 ± 0.2 | 18.8 ± 0.1
Latent heat flux from surface | 70–85 [26] | 86.3 ± 0.3 | 90.5 ± 0.3 | 86.1 ± 0.3
Cloud amount, % | 64–75 [27] | 64.2 ± 0.1 | 63.3 ± 0.1 | 69 ± 0.2
Solar radiation-cloud forcing at TOA | –47 [26] | –42.3 ± 0.1 | –40.3 ± 0.1 | –40.4 ± 0.1
Longwave radiation-cloud forcing at TOA | 26 [26] | 22.3 ± 0.1 | 21.2 ± 0.1 | 24.6 ± 0.1
Near-surface air temperature, °C | 14.0 ± 0.2 [26] | 13.0 ± 0.1 | 13.7 ± 0.1 | 13.8 ± 0.1
Precipitation, mm/day | 2.5–2.8 [23] | 2.97 ± 0.01 | 3.13 ± 0.01 | 2.97 ± 0.01
River water inflow to the World Ocean, 10^3 km^3/year | 29–40 [28] | 21.6 ± 0.1 | 31.8 ± 0.1 | 40.0 ± 0.3
Snow coverage in Feb., mil. km^2 | 46 ± 2 [29] | 37.6 ± 1.8 | 39.9 ± 1.5 | 39.4 ± 1.5
Permafrost area, mil. km^2 | 10.7–22.8 [30] | 8.2 ± 0.6 | 16.1 ± 0.4 | 5.0 ± 0.5
Land area prone to seasonal freezing in NH, mil. km^2 | 54.4 ± 0.7 [31] | 46.1 ± 1.1 | 48.3 ± 1.1 | 51.6 ± 1.0
Sea ice area in NH in March, mil. km^2 | 13.9 ± 0.4 [32] | 12.9 ± 0.3 | 14.4 ± 0.3 | 14.5 ± 0.3
Sea ice area in NH in Sept., mil. km^2 | 5.3 ± 0.6 [32] | 4.5 ± 0.5 | 4.5 ± 0.5 | 6.1 ± 0.5

Heat flux units are given in W/m^2; the other units are given with the title of corresponding parameter. Where possible, ± shows standard deviation for annual mean value.  Source: Simulation of Modern Climate with the New Version Of the INM RAS Climate Model (Bracketed numbers refer to sources for observations)
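One quick way to read the cloud-forcing rows of that table: the net cloud radiative forcing is simply the sum of the shortwave and longwave components, and in the observations the shortwave cooling dominates. A trivial check using the observed values quoted above (Python):

```python
# Net cloud radiative forcing at TOA from the observed values in the table (W/m^2):
sw_crf = -47.0   # clouds reflect sunlight back to space (cooling)
lw_crf = +26.0   # clouds trap outgoing thermal radiation (warming)
print(sw_crf + lw_crf)   # about -21 W/m^2: on net, clouds cool the planet
```

The corresponding INMCM5 values (–40.4 and +24.6) give a weaker net cooling of about –16 W/m^2, which is part of what the cloud-tuning discussion above is about.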

Ocean Temperature and Salinity

The model biases in potential temperature and salinity averaged over longitude with respect to WOA09 (Antonov et al. 2010) are shown in Fig.12. Positive bias in the Southern Ocean penetrates from the surface downward for up to 300 m, while negative bias in the tropics can be seen even in the 100–1000 m layer.

Nevertheless, the zonal mean temperature error at any level from the surface to the bottom is small. This was not the case for the INMCM4, where one could see a negative temperature bias of up to 2–3 K from 1.5 km to the bottom at nearly all latitudes, and a 2–3 K positive bias at levels of 700–1000 m. The reason for this improvement is the introduction of a higher background coefficient for vertical diffusion at great depth (3000 m and below) than at intermediate depths (300–500 m). The positive temperature bias at 45–65 N at all depths could probably be explained by shortcomings in the representation of deep convection [similar errors can be seen for most of the CMIP5 models (Flato et al. 2013, their Fig. 9.13)].

Another feature common to many present-day climate models (and to the INMCM5 as well) is a negative bias in southern tropical ocean salinity from the surface to 500 m. It can be explained by overestimation of precipitation at the southern branch of the Intertropical Convergence Zone. Meridional heat flux in the ocean (Fig. 13) is not far from available estimates (Trenberth and Caron 2001). It looks similar to the one for the INMCM4, but the maximum of northward transport in the Atlantic in the INMCM5 is about 0.1–0.2 × 10^15 W higher than in the INMCM4, probably because of the increased horizontal resolution in the oceanic block.

Sea Ice

In the Arctic, the model sea ice area is just slightly overestimated. Overestimation of the Arctic sea ice area is connected with negative bias in the surface temperature. At the same time, a connection of the sea ice area error with the positive salinity bias is not evident, because ice formation is almost compensated by ice melting, and the total salinity source for this pair of processes is not large. The amplitude and phase of the sea ice annual cycle are reproduced correctly by the model. In the Antarctic, sea ice area is underestimated by a factor of 1.5 in all seasons, apparently due to the positive temperature bias. Note that the correct simulation of sea ice area dynamics in both hemispheres simultaneously is a difficult task for climate modeling.

The analysis of the model time series of the SST anomalies shows that the El Niño event frequency is approximately the same in the model and data, but the model El Niños happen too regularly. The atmospheric response to El Niño events is also underestimated in the model by a factor of 1.5 with respect to the reanalysis data.

Conclusion

Based on the CMIP5 model INMCM4 the next version of the Institute of Numerical Mathematics RAS climate model was developed (INMCM5). The most important changes include new parameterizations of large scale condensation (cloud fraction and cloud water are now the prognostic variables), and increased vertical resolution in the atmosphere (73 vertical levels instead of 21, top model level raised from 30 to 60 km). In the oceanic block, horizontal resolution was increased by a factor of 2 in both directions.

The climate model was supplemented by the aerosol block. The model got a new parallel code with improved computational efficiency and scalability. With the new version of climate model we performed a test model run (80 years) to simulate the present-day Earth climate. The model mean state was compared with the available datasets. The structures of the surface temperature and precipitation biases in the INMCM5 are typical for the present climate models. Nevertheless, the RMS error in surface temperature, precipitation as well as zonal mean temperature and zonal wind are reduced in the INMCM5 with respect to its previous version, the INMCM4.

The model is capable of reproducing the equatorial stratospheric QBO and SSWs. The model biases for the sea surface height and surface salinity are reduced in the new version as well, probably due to the increased spatial resolution in the oceanic block. Bias in ocean potential temperature at depths below 700 m in the INMCM5 is also reduced with respect to the one in the INMCM4. This is likely because of the tuning of the background vertical diffusion coefficient.

Model sea ice area is reproduced well enough in the Arctic, but is underestimated in the Antarctic (as a result of the overestimated surface temperature). RMS error in the surface salinity is reduced almost everywhere compared to the previous model except the Arctic (where the positive bias becomes larger). As a final remark one can conclude that the INMCM5 is substantially better in almost all aspects than its previous version and we plan to use this model as a core component for the coming CMIP6 experiment.

Summary

On the one hand, this model example shows that the intent is simple: to represent dynamically the energy balance of our planetary climate system. On the other hand, the model description shows how many parameters are involved, and the complexity of the interacting processes. The attempt to simulate the operations of the climate system is a monumental task with many outstanding challenges, and this latest version is another step in an iterative development.

Note:  Regarding the influence of rising CO2 on the energy balance.  Global warming advocates estimate a CO2 perturbation of 4 W/m^2.  In the climate parameters table above, observations of the radiation fluxes have a 2 W/m^2 error range at best, and in several cases are observed in ranges of 10 to 15 W/m^2.
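For context, the roughly 4 W/m^2 figure comes from the standard simplified expression for CO2 radiative forcing, ΔF = 5.35 ln(C/C0) W/m^2 (Myhre et al. 1998), which gives about 3.7 W/m^2 for a doubling. A quick check of the arithmetic (Python; the coefficient and concentrations are conventional textbook values, not INMCM5 output):

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    # Simplified CO2 radiative forcing (Myhre et al. 1998): dF = 5.35 * ln(C/C0) W/m^2
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(560, 280), 2))  # doubling from 280 ppm -> ~3.71 W/m^2
print(round(co2_forcing(413, 280), 2))  # ~2020 level vs pre-industrial -> ~2.08 W/m^2
```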

We do not yet have access to the time series temperature outputs from INMCM5 to compare with observations or with other CMIP6 models.  Presumably that will happen in the future.

Early Schematic: Flows and Feedbacks for Climate Models

I Want You Not to Panic

 

I’ve been looking into claims for concern over rising CO2 and temperatures, and this post provides reasons why the alarms are exaggerated. It involves looking into the data and how it is interpreted.

First the longer view suggests where to focus for understanding. Consider a long term temperature record such as Hadcrut4. Taking it at face value, setting aside concerns about revisions and adjustments, we can see what has been the pattern in the last 120 years following the Little Ice Age. Often the period between 1850 and 1900 is considered pre industrial since modern energy and machinery took hold later on. The graph shows that warming was not much of a factor until temperatures rose peaking in the 1940s, then cooling off into the 1970s, before ending the century with a rise matching the rate of earlier warming. Overall, the accumulated warming was 0.8C.

Then regard the record concerning CO2 concentrations in the atmosphere. It’s important to know that modern measurement of CO2 really began in 1959 with Mauna Loa observatory, coinciding with the mid-century cool period. The earlier values in the chart are reconstructed by NASA GISS from various sources and calibrated to reconcile with the modern record. It is also evident that the first 60 years saw minimal change in the values compared to the post 1959 rise after WWII ended and manufacturing was turned from military production to meet consumer needs. So again the mid-20th century appears as a change point.

It becomes interesting to look at the last 60 years of temperature and CO2 from 1959 to 2019, particularly with so much clamour about climate emergency and crisis. This graph puts together rising CO2 and temperatures for this period. Firstly note that the accumulated warming is about 0.8C after fluctuations. And remember that those decades witnessed great human flourishing and prosperity by any standard of life quality. The rise of CO2 was a monotonic steady rise with some acceleration into the 21st century.

Now let’s look at projections into the future, bearing in mind Mark Twain’s warning not to trust future predictions. No scientist knows all or most of the surprises that overturn continuity from today to tomorrow. Still, as weathermen well know, the best forecasts are built from present conditions and adding some changes going forward.

Here is a look to century end as a baseline for context. No one knows what cooling and warming periods lie ahead, but one scenario is that the next 80 years could see continued warming at the same rate as the last 60 years. That presumes that forces at play making the weather in the lifetime of many of us seniors will continue in the future. Of course factors beyond our ken may deviate from that baseline and humans will notice and adapt as they have always done. And in the back of our minds is the knowledge that we are 11,500 years into an interglacial period before the cold returns, being the greater threat to both humanity and the biosphere.

Those who believe CO2 causes warming advocate for reducing use of fossil fuels for fear of overheating, apparently discounting the need for energy should winters grow harsher. The graph shows one projection similar to that of temperature, showing the next 80 years accumulating at the same rate as the last 60. A second projection in green takes the somewhat higher rate of the last 10 years and projects it to century end. The latter trend would achieve a doubling of CO2.

What those two scenarios mean depends on how sensitive you think Global Mean Temperature is to changing CO2 concentrations. Climate models attempt to consider all relevant and significant factors and produce future scenarios for GMT. CMIP6 is the current group of models displaying a wide range of warming presumably from rising CO2. The one model closely replicating Hadcrut4 back to 1850 projects 1.8C higher GMT for a doubling of CO2 concentrations. If that held true going from 300 ppm to 600 ppm, the trend would resemble the red dashed line continuing the observed warming from the past 60 years: 0.8C up to now and another 1C the rest of the century. Of course there are other models programmed for warming 2 or 3 times the rate observed.
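For the curious, the arithmetic behind the red dashed line is simple if one assumes, as above, a sensitivity of 1.8C per doubling and a logarithmic temperature response to CO2. A minimal sketch (Python; the concentrations are illustrative round numbers, not model output):

```python
import math

def warming(ecs_per_doubling, c_ppm, c0_ppm):
    # Warming relative to the reference concentration, assuming a logarithmic response.
    return ecs_per_doubling * math.log2(c_ppm / c0_ppm)

ECS = 1.8                                  # deg C per CO2 doubling, the low-sensitivity model cited above
print(round(warming(ECS, 600, 300), 2))    # 300 -> 600 ppm: 1.8 C in total
print(round(warming(ECS, 410, 300), 2))    # roughly up to now: ~0.8 C
print(round(warming(ECS, 600, 410), 2))    # remainder of the century: ~1.0 C
```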

People who take to the streets with signs forecasting doom in 11 or 12 years have fallen victim to IPCC 450 and 430 scenarios.  For years activists asserted that warming from pre industrial can be contained to 2C if CO2 concentrations peak at 450 ppm.  Last year, the SR1.5 lowered the threshold to 430 ppm, thus the shortened timetable for the end of life as we know it.

For the sake of brevity, this post leaves aside many technical issues. Uncertainties about the temperature record, about early CO2 levels, and the questions around Equilibrium Climate Sensitivity (ECS) and the Transient Climate Response (TCR) are for another day. It should also be noted that GMT as an average hides a huge variety of fluxes over the globe's surface, and thus larger warming in some places, such as Canada, and cooling in other places, like the Southeast US. Ross McKitrick pointed out that Canada has already gotten more than 1.5C of warming, and it has been a great social, economic and environmental benefit.

So I want people not to panic about global warming/climate change. Should we do nothing? On the contrary, we must invest in robust infrastructure to ensure reliable affordable energy and to protect against destructive natural events. And advanced energy technologies must be developed for the future since today’s wind and solar farms will not suffice.

It is good that Greta’s demands were unheeded at the Davos gathering. Panic is not useful for making wise policies, and as you can see above, we have time to get it right.

Climate Models: Good, Bad and Ugly

Several posts here discuss INM-CM4, the Good CMIP5 climate model since it alone closely replicates the Hadcrut temperature record, as well as approximating BEST and satellite datasets. This post is prompted by recent studies comparing various CMIP6 models, the new generation intending to hindcast history through 2014, and forecast to 2100.

Background

Much revealing information is provided in an AGU publication Causes of Higher Climate Sensitivity in CMIP6 Models by Mark D. Zelinka et al. (2019). H/T Judith Curry.  Excerpts in italics with my bolds.

The severity of climate change is closely related to how much the Earth warms in response to greenhouse gas increases. Here we find that the temperature response to an abrupt quadrupling of atmospheric carbon dioxide has increased substantially in the latest generation of global climate models. This is primarily because low cloud water content and coverage decrease more strongly with global warming, causing enhanced planetary absorption of sunlight—an amplifying feedback that ultimately results in more warming. Differences in the physical representation of clouds in models drive this enhanced sensitivity relative to the previous generation of models. It is crucial to establish whether the latest models, which presumably represent the climate system better than their predecessors, are also providing a more realistic picture of future climate warming.

The objective is to understand why the models are getting badder and uglier, and whether the increased warming is realistic. This issue was previously noted by John Christy last summer:

Figure 8: Warming in the tropical troposphere according to the CMIP6 models.
Trends 1979–2014 (except the rightmost model, which is to 2007), for 20°N–20°S, 300–200 hPa.

Christy’s comment: We are just starting to see the first of the next generation of climate models, known as CMIP6. These will be the basis of the IPCC assessment report, and of climate and energy policy for the next 10 years. Unfortunately, as Figure 8 shows, they don’t seem to be getting any better. The observations are in blue on the left. The CMIP6 models, in pink, are also warming faster than the real world. They actually have a higher sensitivity than the CMIP5 models; in other words, they’re apparently getting worse! This is a big problem.

Why CMIP6 Models Are More Sensitive

Zelinka et al. (2019) delve into the issue by comparing attributes of the CMIP6 models currently available for diagnostics.

1 Introduction

Determining the sensitivity of Earth’s climate to changes in atmospheric carbon dioxide (CO2) is a fundamental goal of climate science. A typical approach for doing so is to consider the planetary energy balance at the top of the atmosphere (TOA), represented as

ΔN = ΔF + λΔT

where ΔN is the net TOA radiative flux anomaly, ΔF is the radiative forcing, λ is the radiative feedback parameter, and ΔT is the global mean surface air temperature anomaly. The sign convention is that ΔN is positive down and λ is negative for a stable system.

Conceptually, this equation states that the TOA energy imbalance can be expressed as the sum of the radiative forcing and the radiative response of the system to a global surface temperature anomaly. The assumption that the radiative damping can be expressed as a product of a time-invariant λ and the global mean surface temperature anomaly is useful but imperfect (Armour et al., 2013; Ceppi & Gregory, 2019). Under this assumption, one can estimate the effective climate sensitivity (ECS), the ultimate global surface temperature change that would restore TOA energy balance:

ECS = −ΔF_2x / λ

where ΔF_2x is the radiative forcing due to doubled CO2.

ECS therefore depends on the magnitude of the CO2 radiative forcing and on how strongly the climate system radiatively damps planetary warming. A climate system that more effectively radiates thermal energy to space or more strongly reflects sunlight back to space as it warms (larger magnitude λ) will require less warming to restore planetary energy balance in response to a positive radiative forcing, and vice versa.
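To make that dependence concrete, here is a small worked example under round-number assumptions: a forcing of about 3.7 W/m^2 for doubled CO2 and a few illustrative feedback parameters. These are not the values of any particular CMIP6 model:

```python
F_2x = 3.7                      # W/m^2, approximate forcing for doubled CO2
for lam in (-1.8, -1.2, -0.8):  # W/m^2 per K, illustrative feedback parameters
    ecs = -F_2x / lam           # ECS = -F_2x / lambda
    print(f"lambda = {lam:+.1f} W/m^2/K  ->  ECS = {ecs:.1f} K")
# Weaker (less negative) net feedback means less radiative damping per degree of warming,
# so more warming is needed to restore TOA balance: ECS rises from ~2.1 K to ~4.6 K.
```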

Because GCMs attempt to represent all relevant processes governing Earth’s response to CO2, they provide the most direct means of estimating ECS. ECS values diagnosed from CO2 quadrupling experiments performed in fully coupled GCMs as part of the fifth phase of the Coupled Model Intercomparison Project ranged from 2.1 to 4.7 K. It is already known that several models taking part in CMIP6 have values of ECS exceeding the upper limit of this range. These include CanESM5.0.3 , CESM2, CNRM‐CM6‐1, E3SMv1, and both HadGEM3‐GC3.1 and UKESM1.

In all of these models, high ECS values are at least partly attributed to larger cloud feedbacks than their predecessors.

In this study, we diagnose the forcings, feedbacks, and ECS values in all available CMIP6 models. We assess in each model the individual components that make up the climate feedback parameter and quantify the contributors to intermodel differences in ECS. We also compare these results with those from CMIP5 to determine whether the multimodel mean or spread in ECS, feedbacks, and forcings have changed.

The range of ECS values across models has widened in CMIP6, particularly on the high end, and now includes nine models with values exceeding the CMIP5 maximum (Figure 1a). Specifically, the range has increased from 2.1–4.7 K in CMIP5 to 1.8–5.6 K in CMIP6, and the intermodel variance has significantly increased (p = 0.04).

One model’s ECS is below the CMIP5 minimum (INM‐CM4‐8).

This increased population of high ECS models has caused the multimodel mean ECS to increase from 3.3 K in CMIP5 to 3.9 K in CMIP6. Though substantial, this increase is not statistically significant (p = 0.16). The effective radiative forcing from CO2 doubling (ERF_2x) has increased slightly on average in CMIP6 and its intermodel standard deviation has been reduced by nearly 30%, from 0.50 W/m^2 in CMIP5 to 0.36 W/m^2 in CMIP6 (Figure 1b).

This ECS increase is primarily attributable to an increased multimodel mean feedback parameter due to strengthened positive cloud feedbacks, as all noncloud feedbacks are essentially unchanged on average in CMIP6. However, it is the unique combination of weak overall negative feedback and moderate radiative forcing that allows several CMIP6 models to achieve high ECS values beyond the CMIP5 range.

The increase in cloud feedback arises solely from the strengthened SW low cloud component, while the non‐low cloud feedback has slightly decreased. The SW low cloud feedback is larger on average in CMIP6 due to larger reductions in low cloud cover and weaker increases in cloud liquid water path with warming. Both of these changes are much more dramatic in the extratropics, such that the CMIP6 mean low cloud amount feedback is now stronger in the extratropics than in the tropics, and the fraction of multimodel mean ECS attributable to extratropical cloud feedback has roughly tripled.

The aforementioned increase in CMIP6 mean cloud feedback is related to changes in model representation of clouds. Specifically, both low cloud cover and water content increase less dramatically with SST in the middle latitudes as estimated from unforced climate variability in CMIP6.

Figure 1. INM-CM5 representation of temperature history. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the next figures numbers on the time axis indicate the first year of the 5-year mean
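As an aside, the quantity plotted there is easy to reproduce in principle: 5-year means of annual GMST expressed as anomalies from the 1850–1899 average. A minimal sketch (Python/numpy, with a synthetic annual series standing in for a model run or HadCRUT4):

```python
import numpy as np

def five_year_anomalies(years, gmst, base=(1850, 1899)):
    """Non-overlapping 5-year mean GMST anomalies relative to a base period."""
    baseline = gmst[(years >= base[0]) & (years <= base[1])].mean()
    anom = gmst - baseline
    n = (years.size // 5) * 5                       # trim to a whole number of pentads
    return years[:n:5], anom[:n].reshape(-1, 5).mean(axis=1)

# Synthetic stand-in series, 1850-2014:
years = np.arange(1850, 2015)
rng = np.random.default_rng(3)
gmst = 13.6 + 0.8 * (years - 1850) / 165 + rng.normal(0, 0.1, years.size)
t5, a5 = five_year_anomalies(years, gmst)
print(t5[-1], round(a5[-1], 2))   # first year of the last pentad (2010) and its anomaly
```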

The Nitty Gritty


The details are shown in the Supporting Information for “Causes of higher climate sensitivity in CMIP6 models”. Here we can see how specific models stack up on the key variables driving ECS.


Figure S1. Gregory plots showing global and annual mean TOA net radiation anomalies
plotted against global and annual mean surface air temperature anomalies. Best-fit ordinary linear least squares lines are shown. The y-intercept of the line (divided by 2) provides an estimate of the effective radiative forcing from CO2 doubling (ERF2x), the slope of the line provides an estimate of the net climate feedback parameter (λ), and the x-intercept of the line (divided by 2) provides an estimate of the effective climate sensitivity (ECS). These values are printed in each panel. Models are ordered by ECS.
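For readers who want to see the mechanics of such a Gregory regression, here is a minimal sketch (Python/numpy) using synthetic annual anomalies from an imagined abrupt-4xCO2 run; the numbers are placeholders, not diagnostics of any CMIP6 model:

```python
import numpy as np

# Synthetic annual-mean anomalies from a hypothetical abrupt-4xCO2 experiment:
# the planet warms toward equilibrium while the TOA imbalance N decays toward zero.
rng = np.random.default_rng(1)
years = np.arange(150)
dT = 5.0 * (1 - np.exp(-years / 40.0)) + rng.normal(0, 0.1, years.size)   # K
dN = 7.4 - 1.48 * dT + rng.normal(0, 0.3, years.size)                     # W/m^2

# Ordinary least squares fit of N against T.
slope, intercept = np.polyfit(dT, dN, 1)
lam    = slope                       # net climate feedback parameter (W/m^2/K, negative)
erf_2x = intercept / 2.0             # forcing for doubled CO2 = y-intercept / 2
ecs    = -intercept / slope / 2.0    # effective climate sensitivity = x-intercept / 2

print(f"lambda ~ {lam:.2f} W/m^2/K, ERF_2x ~ {erf_2x:.2f} W/m^2, ECS ~ {ecs:.2f} K")
```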


Figure S7. Contributions of forcing and feedbacks to ECS in each model and for the multimodel means. Contributions from the tropical and extratropical portion of the feedback are shown in light and dark shading, respectively. Black dots indicate the ECS in each model, while upward and downward pointing triangles indicate contributions from non-cloud and cloud feedbacks, respectively. Numbers printed next to the multi-model mean bars indicate the cumulative sum of each plotted component. Numerical values are not printed next to residual, extratropical forcing, and tropical albedo terms for clarity. Models within each collection are ordered by ECS.


Figure S8. Cloud feedbacks due to low and non-low clouds in the (light shading) tropics and (dark shading) extratropics in each model and for the multi-model means. Non-low cloud feedbacks are separated into LW and SW components, and SW low cloud feedbacks are separated into amount and scattering components. “Others” represents the sum of LW low cloud feedbacks and the small difference between kernel- and APRP-derived SW low cloud feedback. Insufficient diagnostics are available to compute SW cloud amount and scattering feedbacks for the FGOALSg2 and CAMS-CSM1-0 models. Black dots indicate the global mean net cloud feedback in each model, while upward and downward pointing triangles indicate total contributions from non-low and low clouds, respectively. Models within each collection are ordered by global mean net cloud feedback.

My Summary

Once again the Good Model INM-CM4-8 is bucking the model builders’ consensus. The new revised INM model has a reduced ECS and it flipped its cloud feedback from positive to negative. The description of improvements made to the INM modules includes how clouds are handled:

One of the few notable changes is the new parameterization of clouds and large-scale condensation. In the INMCM5 cloud area and cloud water are computed prognostically according to Tiedtke (1993). That includes the formation of large-scale cloudiness as well as the formation of clouds in the atmospheric boundary layer and clouds of deep convection. Decrease of cloudiness due to mixing with unsaturated environment and precipitation formation are also taken into account. Evaporation of precipitation is implemented according to Kessler (1969).

Cloud radiation forcing (CRF) at the top of the atmosphere is one of the most important climate model characteristics, as errors in CRF frequently lead to an incorrect surface temperature.

In the high latitudes model errors in shortwave CRF are small. The model underestimates longwave CRF in the subtropics but overestimates it in the high latitudes. Errors in longwave CRF in the tropics tend to partially compensate errors in shortwave CRF. Both errors have positive sign near 60S leading to warm bias in the surface temperature here. As a result, we have some underestimation of the net CRF absolute value at almost all latitudes except the tropics. Additional experiments with tuned conversion of cloud water (ice) to precipitation (for upper cloudiness) showed that model bias in the net CRF could be reduced, but that the RMS bias for the surface temperature will increase in this case.

Resources:

Temperatures According to Climate Models  Initial Discovery of the Good Model INM-CM4 within CMIP5

Latest Results from First-Class Climate Model INMCM5 The new version improvements and historical validation

 

Climate Models Argue from False Premises

Roger Pielke Jr. Explains at Forbes If Climate Scenarios Are Wrong For 2020, Can They Get 2100 Right? Excerpts in italics with my bolds.

How we think and talk about climate policy is profoundly shaped by 31 different computer models which produce a wide range of scenarios of the future, starting from a base year of 2005. With 2020 right around the corner, we now have enough experience to ask how well these models are doing. Based on my preliminary analysis reported below, the answer appears to be not so well.

Background

Climate policy discussions are framed by the assessment reports of the Intergovernmental Panel on Climate Change (IPCC). There are of course discussions that occur outside the boundaries of the IPCC, but the IPCC analyses carry enormous influence. At the center of the IPCC approach to climate policy analyses are scenarios of the future. The IPCC reports that its database contains 1,184 scenarios from 31 models.

Some of these scenarios are the basis for projecting future changes in climate (typically using what are called Global Climate Models or GCMs). Scenarios are also the basis for projecting future impacts of climate change, as well as the consequences of climate policy for the economy and environment (often using what are called Integrated Assessment Models or IAMs).

Chain of suppositions comprising Integrated Assessment Models.

Here I focus on two key metrics directly relevant to climate policy that come from the scenarios of fifth assessment report (AR5) of the IPCC: economic growth and atmospheric carbon dioxide concentrations. The scenarios of the AR5 begin in 2005 and most project futures to 2100, with some looking only to 2050. We now have almost 15 years of data to compare against projections, allowing us to assess how they are doing.

Economic Growth Scenarios Way Too High

Economic growth is important because it is one of the elements of the so-called Kaya Identity, the basis for projecting future carbon dioxide emissions, and a key input to GCMs that produce projections of future climate change. Economic growth, in the context of climate change is a double-edged sword. On the one hand, high rates of growth can mean more individual and societal wealth, which is generally viewed to be a good thing. On the other hand, high rates of economic growth, all else equal, means greater amounts of carbon dioxide emitted into the atmosphere, decidedly not a good thing.
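The Kaya Identity referred to above decomposes emissions into population, GDP per capita, the energy intensity of GDP, and the carbon intensity of energy. A minimal illustration with made-up, order-of-magnitude numbers (not IPCC scenario inputs):

```python
def kaya_emissions(population, gdp_per_capita, energy_per_gdp, co2_per_energy):
    # Kaya Identity: CO2 = P * (GDP/P) * (Energy/GDP) * (CO2/Energy)
    return population * gdp_per_capita * energy_per_gdp * co2_per_energy

# Illustrative round numbers only (order of magnitude, not scenario values):
P = 7.8e9      # people
g = 11_000     # GDP per capita, USD/person/year
e = 6.5e6      # energy intensity, J per USD of GDP
c = 6.5e-11    # carbon intensity, tonnes CO2 per J of energy

print(f"{kaya_emissions(P, g, e, c):.2e} tonnes CO2 per year")   # ~3.6e10, i.e. roughly 36 Gt
```

The point of the decomposition is that emissions scenarios stand or fall on their assumptions for each factor, which is why the growth-rate comparison below matters.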

The vast majority of scenarios reported by the IPCC AR5 include rates of economic growth (measured as GDP per capita using market exchange rates) that are greater than what has been observed since 2010. Specifically, more than 99.5% of IPCC AR5 scenarios – all but 5 of about 1,100 — have GDP growth rates for the period 2010 to 2020 in excess of that which has been observed in the real world from 2010 to 2018. The International Monetary Fund recently lowered its expectations for global economic growth in 2019 and 2020 to below that of 2018. So it seems highly unlikely that the real-world will “catch up.”

What is clear is that, to date, the vast majority of IPCC scenarios are far more aggressive in their projections of economic growth than has been observed. For the scenarios to “catch up” would require growth rates in future years even more aggressive than those built into the scenarios. If the IPCC projections are indeed too aggressive, then this has implications for the results of analyses that depend upon such assumptions for projecting future climate changes, impacts and the costs and benefits of policy action.

Models Overstate CO2 Concentrations in 2020

We see a similar aggressiveness in scenarios when looking at the concentration of carbon dioxide in the atmosphere. Based on data from the National Oceanic and Atmospheric Administration, in 2020 global carbon dioxide concentrations will be at about 413 parts per million (ppm). To put this into context, the oft-cited 2 degree Celsius temperature target is sometimes associated with a carbon dioxide concentration level of 450 ppm, and concentration levels are currently increasing by about 2-3 ppm per year.
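A quick back-of-envelope on those numbers, assuming the recent rate of increase simply continues (simple arithmetic, not a scenario calculation):

```python
current_ppm = 413          # approximate 2020 concentration cited above
target_ppm  = 450          # level often associated with the 2 C target
for rate in (2.0, 2.5, 3.0):   # ppm per year
    years = (target_ppm - current_ppm) / rate
    print(f"at {rate} ppm/yr: ~{years:.0f} years to reach 450 ppm")
# roughly 12-19 years at recent rates of increase
```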

All of the scenarios in the IPCC database that assume no climate policy (called reference scenarios) have carbon dioxide concentrations above 413 ppm. Across all scenarios, including those that assume successful implementation of climate policies such as a globally harmonized carbon price, 86% have concentrations levels above 413 ppm.

There is little evidence to suggest that climate policies have accelerated rates of decarbonization, leading to lower carbon dioxide concentrations than previously expected. One reason for this is that the world has not actually adopted climate policies of the sort assumed in policy scenarios. Thus, the fact that carbon dioxide concentrations in 2020 will be at the lower end of scenarios almost certainly says something about what is going on in the models rather than unexpected good news about climate policy success.

Flawed Scenarios Give False Projections

It seems obvious that we should ask hard questions of scenarios initiated in 2005 to project outcomes for 2050 or 2100 that fail to accurately describe what is observed in 2020. Individual scenarios are not predictions, but they can certainly be more or less consistent with how the world actually evolves. We should also ask questions when an entire set of scenarios collectively fails to encompass real-world observations – such as is the case with the reference scenarios of the IPCC AR5 database and actual atmospheric concentrations of carbon dioxide.

To the extent that flawed scenarios make their way into GCMs, we would be using misleading projections of climate futures and their probabilities, of possible future climate impacts and their likelihoods, and, crucially, of the costs and benefits of alternative approaches to climate policy. It is thus imperative that the forthcoming sixth IPCC assessment – or a separate group — ensures that its scenario database is consistent with real-world evidence, and that we understand why many scenarios have fared so poorly since 2005 with respect to key metrics.

More generally, it is important that the knowledge base that informs climate policy discussions be opened up to a broader diversity of methodologies and perspectives, and that all approaches are rigorously scrutinized. Climate policy is too important for anything less.

See also Models Wrong About the Past Produce Unbelievable Futures

And Unbelievable Climate Models

Beware getting sucked into any model, climate or otherwise.

El Nino’s Cold Tongue Baffles Climate Models


This post is prompted by an article published by Richard Seager et al. at AMS Journal Is There a Role for Human-Induced Climate Change in the Precipitation Decline that Drove the California Drought? Excerpts in italics with my bolds.

Overview

The recent California drought was associated with a persistent ridge at the west coast of North America that has been associated with, in part, forcing from warm SST anomalies in the tropical west Pacific. Here it is considered whether there is a role for human-induced climate change in favoring such a west coast ridge. The models from phase 5 of the Coupled Model Intercomparison Project do not support such a case either in terms of a shift in the mean circulation or in variance that would favor increased intensity or frequency of ridges. The models also do not support shifts toward a drier mean climate or more frequent or intense dry winters or to tropical SST states that would favor west coast ridges. However, reanalyses do show that over the last century there has been a trend toward circulation anomalies over the Pacific–North American domain akin to those during the height of the California drought.

Position of the Warm Pool in the western Pacific under La Niña conditions, and the convergence zone where the Warm Pool meets nutrient-enriched waters of the eastern equatorial Pacific. Tuna and their prey are most abundant in this convergence zone (source: HadISST).

First we plot together the history of California winter precipitation and Arctic sea ice anomaly in terms of area covered by ice at the annual minimum month of September and also as the November through April winter average (Fig. 9, top). While all three are of course negative during the drought years there is no year to year relationship between these quantities.

Next we composite 200-mb height anomalies, U.S. precipitation, and sea ice concentration for, during the period covered by sea ice data, the driest 15% of California winters and subtract the climatological winter values (Fig. 9, bottom). As in Seager et al. (2015), the composites show that when California is dry the entire western third of the United States tends to be dry and that there is a high pressure ridge located immediately off the west coast, which does not appear to be connected to a tropically sourced wave train. There also tends to be a trough over the North Atlantic, similar to winter 2013/14. There are notable localized sea ice concentration anomalies with increased ice in the Sea of Okhotsk, reduced ice in the Bering Sea, and increased ice in Hudson Bay and Labrador Sea, though the anomalies are small. These ice anomalies are consistent with atmospheric forcing. The Sea of Okhotsk and Hudson Bay/Labrador Sea anomalies appear under northerly flow that would favor cold advection and increased ice. The Bering Sea anomaly appears under easterly flow that would drive ice offshore. As shown by Seager et al. (2015), the dry California winters are also associated with North Pacific SST anomalies forced by the atmospheric wave train and the sea ice anomalies appear part of this feature rather than as causal drivers of the atmospheric circulation anomalies.

These analyses do not support the idea that variations in sea ice extent influence the prevalence of west coast ridges or dry winters in California.

Source: NASA

On the basis of the above analysis we conclude that the occurrence of persistent ridges at the west coast is more connected to SST anomalies than it is to sea ice anomalies. The CMIP5 model ensemble lends no support to the idea that ridge-inducing SST patterns become more likely as a result of rising GHGs. However, the models could be wrong so we next examine whether trends in observed SSTs lend any support to this idea. Trends were computed by straightforward linear least squares regression.
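For those unfamiliar with the procedure, fitting such a trend is a one-line least-squares calculation applied at each grid point. A minimal sketch for a single hypothetical SST time series (Python/numpy, synthetic data):

```python
import numpy as np

def linear_trend_per_decade(years, series):
    """Ordinary least squares slope of `series` against `years`, expressed per decade."""
    slope, _ = np.polyfit(years, series, 1)
    return slope * 10.0

# Synthetic example: an SST anomaly series with an imposed 0.1 C/decade trend plus noise.
years = np.arange(1950, 2015)
rng = np.random.default_rng(2)
sst = 0.01 * (years - years[0]) + rng.normal(0, 0.15, years.size)
print(round(linear_trend_per_decade(years, sst), 3))   # close to the imposed 0.1 C per decade
```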

A number of features stand out in these trends regardless of the time period used.

  • Amid near-ubiquitous warming of the oceans the central equatorial Pacific stands out as a place that has not warmed.
  • The west–east SST gradient across the tropical Pacific has strengthened as the west Pacific has warmed.
  • Increased reanalysis precipitation over the Indian Ocean–Maritime Continent–tropical west Pacific and reduced reanalysis precipitation over the central equatorial Pacific Ocean were found.
  • Tropical geopotential heights have increased at all longitudes.
  • A trend toward a localized high pressure ridge extending from the subtropics toward Alaska across western North America.

These associations in the trends—a strengthened west–east SST gradient across the tropical Pacific and localized high pressure at the North American west coast—are in line with every piece of evidence based on observations and SST-forced models presented so far that there is a connection between drought-inducing circulation anomalies and tropical Pacific SSTs. The mediating influence is seen in the precipitation trends that show enhanced zonal gradients of tropical Indo-Pacific precipitation and a marked increase centered over the Maritime Continent region.

Conclusions and discussion

We have examined whether there is any evidence, observational and/or model based, that the precipitation decline that drove the California drought was contributed to by human-driven climate change. Findings are as follows:

  • The CMIP5 model ensemble provides no evidence for mean drying or increased prevalence of dry winters for California or a shift toward a west coast ridge either in the mean or as a more common event. They also provide no evidence of a shift in tropical SSTs toward a state with an increased west–east SST gradient that has been invoked as capable of forcing a west coast ridge and drought.
  • Analysis of observations-based reanalyses shows that west coast ridges, akin to that in winter 2013/14, are related to an increased west–east SST gradient across the tropical Pacific Ocean and have repeatedly occurred over past decades though as imperfect analogs.
  • SST-forced models can reproduce such ridges and their connection to tropical SST anomalies.  Century-plus-long reanalyses and SST-forced models indicate a long-term trend toward circulation anomalies more akin to that of winter 2013/14.
  • The trends of heights and SSTs in the reanalyses also show both an increased west–east SST gradient and a 200-mb ridge over western North America that, in terms of association between ocean and atmospheric circulation, matches those found via the other analyses on interannual time scales.
  • However, SST-forced models when provided the trends in SSTs create a 200-mb ridge over the central North Pacific and, in general, a circulation pattern that cannot be said to truly match that in reanalyses.

So can a case be made that human-driven climate change contributed to the precipitation drop that drives the drought? Not from the simulations of historical climate and projections of future climate of the CMIP5 multimodel ensemble.

These simulations show no current or future increase in the likelihood or extremity of negative precipitation, precipitation minus evaporation, west coast ridges, or ridge-forcing tropical SST patterns. However, when examining the observational record a case can be made that the climate system has been moving in a direction that favors both a ridge over the west coast, which has a limited similarity to that observed in winter 2013/14, the driest winter of the drought, and a ridge-generating pattern of increased west–east SST gradient across the tropical Pacific Ocean with warm SSTs in the Indo–west Pacific region. This observations-based argument then gets tripped up by SST-forced models, which know about the trends in SST but fail to simulate a trend toward a west coast ridge. On the other hand, idealized modeling indicates that preferential warming in the Indo–west Pacific region does generate a west coast ridge.

To make the argument we outline above requires rejecting the CMIP5 ensemble as a guide to how tropical climate responds to increased radiative forcing since this tropical ocean response is at odds with what they do. To do so follows in the footsteps of Kohyama and Hartmann (2017, p. 4248), who correctly point out that “El Niño–like mean-state warming is only a ‘majority decision’ based on currently available GCMs, most of which exhibit unrealistic nonlinearity of the ENSO dynamics” (see also Kohyama et al. 2017). The implications of changing tropical SST gradients would extend far beyond just California and include most regions of the world sensitive to ENSO-generated climate anomalies.

We believe that the current state of observational information, analysis of it, and climate modeling does not allow a confident rejection of the CMIP5 model responses and/or a confident assertion of human role in the precipitation drop of the California drought. We also believe that for the same reasons a human role cannot be excluded.

Comment:

The researchers set out to prove man-made global warming contributes to droughts in California, but their findings put them in a quandary. The models include CO2 forcings, yet do not predict the conditions resulting in west coast droughts. They have to admit the models are wrong in this respect (what else do the models get wrong?). They cling to the hope that global warming can be tied to droughts, but have to admit there is no evidence from the failed models.

Postscript:

(a) Annual variation (Annual RMSE) of SST and Chl-a globally (units are °C/decade for SST and log(mg/m3/decade) for Chl-a). (b) The pattern of annual variation in the Bonney Upwelling, Southern Australia. (c) The pattern of annual variation in the Florida Current, South East USA.

 

A separate study is Global patterns of change and variation in sea surface temperature and chlorophyll by Piers K. Dunstan et al.

The blue tongue shows up as an equatorial pacific region that shows little variability over the 14 year period of study.  From the article:

The interaction between annual variation in SST and Chl-a provides insights into how and where linkages occur on annual time scales. Our analysis shows strong latitudinal bands associated with variation in seasonal warming (Fig. 4a). The equatorial Pacific, Indian and Atlantic Oceans are all characterised by very low annual RMSE for both SST and Chl-a. The mid latitudes of each ocean basin have higher variance in SST and/or Chl-a.

It’s Models All the Way Down

In Rapanos v. United States, Justice Antonin Scalia offered a version of the traditional tale of how the Earth is carried on the backs of animals. In this version of the story, an Eastern guru affirms that the earth is supported on the back of a tiger. When asked what supports the tiger, he says it stands upon an elephant; and when asked what supports the elephant he says it is a giant turtle. When asked, finally, what supports the giant turtle, he is briefly taken aback, but quickly replies “Ah, after that it is turtles all the way down.”  By this analogy, Scalia was showing how other judges were substituting the “purpose” of a law for the actual text written by Congress.

The moral of the story is that our perceptions of reality are built upon assumptions. The facts from our experience are organized by means of a framework that provides a worldview, a mental model or paradigm of the way things are. Through the history of science, various pieces of that paradigm have been challenged and have collapsed when contradicted by fresh observations and measurements from experience. Today a small group of scientists have declared themselves climate experts and claim their computer models predict a dangerous future for the planet because of our energy choices.

The Climate Alarmist paradigm is described and refuted in an essay by John Christy published by GWPF, The Tropical Skies: Falsifying climate alarm. The content comes from his presentation on 23 May 2019 to a meeting in the Palace of Westminster in London, England. Excerpts in italics with my bolds.

At the global level a significant discrepancy has been confirmed between empirical measurements and computer predictions.

“The global warming trend for the last 40 years, starting in 1979 when satellite measurements began, is +0.13C per decade or about half of what climate models predicted.”

Figure 3: Updating the estimate.
Redrawn from Christy and McNider 2017.

The top line is the actual temperature of the global troposphere, with the range of the original 1994 study shown as the shaded area. We were able to calculate and remove the El Niño effect, which accounts for a lot of the variance, but has no trend to it. Then there are these two dips in global temperature after the El Chichón and Mt Pinatubo eruptions. Volcanic eruptions send aerosols up into the stratosphere, and these reflect sunlight, so fewer units of energy get in and the earth cools. I developed a mathematical function to simulate this, as shown in Figure 3d.

After eliminating the effect of volcanoes, we were left with a line that was approximately straight, apart from some noise. The trend, the dark line in Figure 3e, was 0.095°C per decade, almost exactly the same as in our earlier study, 25 years before.

Our result is that the transient climate response – the short-term warming – in the troposphere is 1.1°C at the point in time when carbon dioxide levels double. This is not a very alarming number. If we perform the same calculation on the climate models, you get a figure of 2.31°C, which is significantly different. The models’ response to carbon dioxide is twice what we see in the real world. So the evidence indicates the consensus range for climate sensitivity is incorrect.
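A minimal arithmetic sketch consistent with the figures quoted in this essay (it only rescales the stated numbers; it is not a reproduction of the Christy and McNider method):

```python
# Figures quoted in the text:
obs_trend_global   = 0.13    # deg C/decade, observed global trend since 1979
model_trend_global = 0.26    # deg C/decade, roughly twice observed ("about half of what models predicted")
model_tcr          = 2.31    # deg C at CO2 doubling, the climate-model figure quoted above

# If the realized trend scales roughly in proportion to sensitivity, the same ratio
# applied to the model TCR gives an observation-implied transient response:
implied_tcr = model_tcr * obs_trend_global / model_trend_global
print(round(implied_tcr, 2))   # about 1.2 deg C, close to the 1.1 deg C figure quoted above
```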

Almost all climate models have predicted rapid warming at high altitudes in the tropics due to greenhouse gas forcing.

They all have rapid warming above 30,000 feet in the tropics – it’s effectively a diagnostic signal of greenhouse warming. But in reality it’s just not happening. It’s warming up there, but at only about one third of the rate predicted by the models.”

Figure 5: The hot spot in the Canadian model.
The y-axis is denominated in units of pressure, but the scale makes it linear in altitude.

Almost all of the models show such a warming, and none show it when extra greenhouse gas forcing is not included. Figure 6 shows the warming trends from 102 climate models, and the average trend is 0.44°C per decade. This is quite fast: over 40 years, it amounts to almost 2°C, although some models have slower warming and some faster. However, the real-world warming is much lower; around one third of the model average.


Figure 7: Tropical mid-tropospheric temperatures, models vs. observations.
Models in pink, against various observational datasets in shades of blue. Five-year averages
1979–2017. Trend lines cross zero at 1979 for all series.

Figure 7 shows the model projections in pink and different observational datasets in shades of blue. You can also easily see the difference in warming rates: the models are warming too fast. The exception is the Russian model, which has much lower sensitivity to carbon dioxide, and therefore gives projections for the end of the century that are far from alarming. The rest of them are already falsified, and their predictions for 2100 can’t be trusted.

The next generation of climate models show that lessons are not being learned.

“An early look at some of the latest generation of climate models reveals they are predicting even faster warming. This is simply not credible.”

Figure 8: Warming in the tropical troposphere according to the CMIP6 models.
Trends 1979–2014 (except the rightmost model, which is to 2007), for 20°N–20°S, 300–200 hPa.

We are just starting to see the first of the next generation of climate models, known as CMIP6. These will be the basis of the IPCC assessment report, and of climate and energy policy for the next 10 years. Unfortunately, as Figure 8 shows, they don’t seem to be getting any better. The observations are in blue on the left. The CMIP6 models, in pink, are also warming faster than the real world. They actually have a higher sensitivity than the CMIP5 models; in other words, they’re apparently getting worse! This is a big problem.


Figure 9(b): Enlargement and simplification of the tropical troposphere in the Fifth Assessment Report.
The coloured bands represent the range of warming trends: red is the model runs incorporating natural and anthropogenic forcings, blue is natural forcings only. The range of the observations is in grey.

Conclusion

So the rate of accumulation of joules of energy in the tropical troposphere is significantly less than predicted by the CMIP5 climate models. Will the next IPCC report discuss this long running mismatch? There are three possible ways they could handle the problem:
• The observations are wrong, the models are right.
• The forcings used in the models were wrong.
• The models are failed hypotheses.

I predict that the ‘failed hypothesis’ option will not be chosen. Unfortunately, that’s exactly what you should do when you follow the scientific method.