It’s Models All the Way Down

In Rapanos v. United States, Justice Antonin Scalia offered a version of the traditional tale of how the Earth is carried on the backs of animals. In this version of the story, an Eastern guru affirms that the Earth is supported on the back of a tiger. When asked what supports the tiger, he says it stands upon an elephant; and when asked what supports the elephant, he says it is a giant turtle. When asked, finally, what supports the giant turtle, he is briefly taken aback, but quickly replies “Ah, after that it is turtles all the way down.”  With this analogy, Scalia was showing how other judges were substituting the “purpose” of a law for the actual text written by Congress.

The moral of the story is that our perceptions of reality are built upon assumptions. The facts from our experience are organized by means of a framework that provides a worldview, a mental model or paradigm of the way things are. Through the history of science, various pieces of that paradigm have been challenged and have collapsed when contradicted by fresh observations and measurements from experience. Today a small group of scientists have declared themselves climate experts and claim their computer models predict a dangerous future for the planet because of our energy choices.

The Climate Alarmist paradigm is described and refuted in an essay by John Christy published by the GWPF, The Tropical Skies: Falsifying Climate Alarm. The content comes from his presentation on 23 May 2019 to a meeting in the Palace of Westminster in London. Excerpts in italics with my bolds.

At the global level a significant discrepancy has been confirmed between empirical measurements and computer predictions.

“The global warming trend for the last 40 years, starting in 1979 when satellite measurements began, is +0.13C per decade or about half of what climate models predicted.”

Figure 3: Updating the estimate.
Redrawn from Christy and McNider 2017.

The top line is the actual temperature of the global troposphere, with the range of the original 1994 study shown as the shaded area. We were able to calculate and remove the El Niño effect, which accounts for a lot of the variance, but has no trend to it. Then there are these two dips in global temperature after the El Chichón and Mt Pinatubo eruptions. Volcanic eruptions send aerosols up into the stratosphere, and these reflect sunlight, so fewer units of energy get in and the earth cools. I developed a mathematical function to simulate this, as shown in Figure 3d.

After eliminating the effect of volcanoes, we were left with a line that was approximately straight, apart from some noise. The trend, the dark line in Figure 3e, was 0.095°C per decade, almost exactly the same as in our earlier study, 25 years before.
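As an illustration of the procedure just described, here is a minimal sketch (not Christy's actual code) that regresses a temperature series on a linear trend, an ENSO index and a volcanic aerosol proxy, then reads the trend off the fit; all of the input series are synthetic stand-ins for the real satellite, ENSO and aerosol data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_months = 480                               # 40 years of monthly data, 1979-2018
t = np.arange(n_months) / 120.0              # time in decades

enso = rng.normal(0.0, 1.0, n_months)        # stand-in ENSO index (no trend)
aod = np.zeros(n_months)
aod[40:70] = 0.10                            # stand-in El Chichon aerosol pulse
aod[150:190] = 0.15                          # stand-in Pinatubo aerosol pulse

true_trend = 0.095                           # degC per decade, as quoted above
temp = (true_trend * t + 0.1 * enso - 2.0 * aod
        + rng.normal(0.0, 0.08, n_months))   # synthetic tropospheric anomaly series

# Design matrix: intercept, linear trend, ENSO, volcanic aerosol
X = np.column_stack([np.ones(n_months), t, enso, aod])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(f"trend with ENSO and volcanoes regressed out: {coef[1]:.3f} degC/decade")
```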

Our result is that the transient climate response – the short-term warming – in the troposphere is 1.1°C at the point in time when carbon dioxide levels double. This is not a very alarming number. If we perform the same calculation on the climate models, we get a figure of 2.31°C, which is significantly different. The models’ response to carbon dioxide is twice what we see in the real world. So the evidence indicates the consensus range for climate sensitivity is incorrect.
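The headline numbers follow from a back-of-envelope scaling of this kind (an illustration of the logic, not the paper's actual method): if effective forcing grows roughly linearly and reaches the CO2-doubling value after some number of decades, a warming trend in °C per decade translates directly into a transient response at the doubling point. The forcing growth rate below is an assumed round number chosen only to make the arithmetic visible.

```python
F_2X = 3.7                # W/m^2, canonical forcing for doubled CO2
forcing_rate = 0.32       # W/m^2 per decade, assumed growth rate for illustration
decades_to_doubling = F_2X / forcing_rate        # ~11.6 decades

obs_trend = 0.095         # degC/decade, the observed tropospheric trend above
model_trend = 0.20        # degC/decade, illustrative model-mean trend

print(f"implied transient response, observations: {obs_trend * decades_to_doubling:.1f} C")
print(f"implied transient response, models:       {model_trend * decades_to_doubling:.1f} C")
```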

Almost all climate models have predicted rapid warming at high altitudes in the tropics due to greenhouse gas forcing.

“They all have rapid warming above 30,000 feet in the tropics – it’s effectively a diagnostic signal of greenhouse warming. But in reality it’s just not happening. It’s warming up there, but at only about one third of the rate predicted by the models.”

Figure 5: The hot spot in the Canadian model.
The y-axis is denominated in units of pressure, but the scale makes it linear in altitude.

Almost all of the models show such a warming, and none show it when extra greenhouse gas forcing is not included. Figure 6 shows the warming trends from 102 climate models, and the average trend is 0.44°C per decade. This is quite fast: over 40 years, it amounts to almost 2°C, although some models have slower warming and some faster. However, the real-world warming is much lower; around one third of the model average.


Figure 7: Tropical mid-tropospheric temperatures, models vs. observations.
Models in pink, against various observational datasets in shades of blue. Five-year averages
1979–2017. Trend lines cross zero at 1979 for all series.

Figure 7 shows the model projections in pink and different observational datasets in shades of blue. You can also easily see the difference in warming rates: the models are warming too fast. The exception is the Russian model, which has much lower sensitivity to carbon dioxide, and therefore gives projections for the end of the century that are far from alarming. The rest of them are already falsified, and their predictions for 2100 can’t be trusted.

The next generation of climate models show that lessons are not being learned.

“An early look at some of the latest generation of climate models reveals they are predicting even faster warming. This is simply not credible.”

Figure 8: Warming in the tropical troposphere according to the CMIP6 models.
Trends 1979–2014 (except the rightmost model, which is to 2007), for 20°N–20°S, 300–200 hPa.

We are just starting to see the first of the next generation of climate models, known as CMIP6. These will be the basis of the IPCC assessment report, and of climate and energy policy for the next 10 years. Unfortunately, as Figure 8 shows, they don’t seem to be getting any better. The observations are in blue on the left. The CMIP6 models, in pink, are also warming faster than the real world. They actually have a higher sensitivity than the CMIP5 models; in other words, they’re apparently getting worse! This is a big problem.


Figure 9(b): The tropical troposphere in the Fifth Assessment Report (enlargement and simplification).
The coloured bands represent the range of warming trends: red is the model runs incorporating natural and anthropogenic forcings, blue is natural forcings only. The range of the observations is in grey.

Conclusion

So the rate of accumulation of joules of energy in the tropical troposphere is significantly less than predicted by the CMIP5 climate models. Will the next IPCC report discuss this long running mismatch? There are three possible ways they could handle the problem:
• The observations are wrong, the models are right.
• The forcings used in the models were wrong.
• The models are failed hypotheses.

I predict that the ‘failed hypothesis’ option will not be chosen. Unfortunately, that is exactly the option the scientific method requires you to choose.


Models Wrong About the Past Produce Unbelievable Futures

Models vs. Observations. Christy and McKitrick (2018) Figure 3

The title of this post is the theme driven home by Patrick J. Michaels in his critique of the most recent US National Climate Assessment (NA4). The failure of General Circulation Models (GCMs) is the focal point of his presentation of February 14, 2018, Comments on the Fourth National Climate Assessment. Excerpts in italics with my bolds.

NA4 uses a flawed ensemble of models that dramatically overforecast warming of the lower troposphere, with even larger errors in the upper tropical troposphere. The model ensemble also could not accommodate the “pause” or “slowdown” in warming between the two large El Niños of 1997-8 and 2015-6. The distribution of warming rates within the CMIP5 ensemble is not a true indication of a statistical range of prospective warming, as it is a collection of systematic errors. The Assessment’s glib statement that it fulfills the terms of the federal Data Quality Act is fatuous. The use of systematically failing models does not fulfill the “maximizing the quality, objectivity, utility, and integrity of information” provision of the Act.

USGCRP should produce a reset Assessment, relying on a model or models that work in four dimensions for future guidance and ignoring the ones that don’t.

Why wasn’t this done to begin with? The model INM-CM4 is spot on, both at the surface and in the vertical, but using it would have largely meant the end of warming as a significant issue. Under a realistic emission scenario (which USGCRP also did not use), INM-CM4 strongly supports the “lukewarm” synthesis of global warming. Given the culture of alarmism that has infected the global change community since before the first (2000) Assessment, using this model would have been a complete turnaround with serious implications.

The new Assessment should employ best scientific practice, and one that weather forecasters use every day. In the climate sphere, billions of dollars are at stake, and reliable forecasts are also critical.

The theme is now picked up in the latest NIPCC report on Fossil Fuels. Chapter 2 is the Climate Science background and the statements below in italics with my bolds come from there.

Chapter 2 Climate Science Climate Change Reconsidered II: Fossil Fuels

Of the 102 model runs considered by Christy and McKitrick, only one comes close to accurately hindcasting temperatures since 1979: the INM-CM4 model produced by the Institute for Numerical Mathematics of the Russian Academy of Sciences (Volodin and Gritsun, 2018). That model projects only 1.4°C warming by the end of the century, similar to the forecast made by the Nongovernmental International Panel on Climate Change (NIPCC, 2013) and many scientists, a warming only one-third as much as the IPCC forecasts. Commenting on the success of the INM-CM model compared to the others (as shown in an earlier version of the Christy graphic), Clutz (2015) writes,

(1) INM-CM4 has the lowest CO2 forcing response at 4.1K for 4xCO2. That is 37% lower than the multi-model mean.

(2) INM-CM4 has by far the highest climate system inertia: deep ocean heat capacity in INM-CM4 is 317 W yr m^-2 K^-1, 200% of the mean (which excluded INM-CM4 because it was such an outlier).

(3) INM-CM4 exactly matches observed atmospheric H2O content in the lower troposphere (215 hPa), and is biased low above that. Most others are biased high.

So the model that most closely reproduces the temperature history has high inertia from ocean heat capacities, low forcing from CO2 and less water for feedback. Why aren’t the other models built like this one?

The outputs of GCMs are only as reliable as the data and theories “fed” into them, which scientists widely recognize as being seriously deficient (Bray and von Storch, 2016; Strengers, et al., 2015). The utility and skillfulness of computer models are dependent on how well the processes they model are understood, how faithfully those processes are simulated in the computer code, and whether the results can be repeatedly tested so the models can be refined (Loehle, 2018). To date, GCMs have failed to deliver on each of these counts.

The reference above is to a study published in July 2018 by John Christy and Ross McKitrick, A Test of the Tropical 200‐ to 300‐hPa Warming Rate in Climate Models. Excerpts in italics with my bolds.

Abstract

Overall climate sensitivity to CO2 doubling in a general circulation model results from a complex system of parameterizations in combination with the underlying model structure. We refer to this as the model’s major hypothesis, and we assume it to be testable. We explain four criteria that a valid test should meet: measurability, specificity, independence, and uniqueness. We argue that temperature change in the tropical 200‐ to 300‐hPa layer meets these criteria. Comparing modeled to observed trends over the past 60 years using a persistence‐robust variance estimator shows that all models warm more rapidly than observations and in the majority of individual cases the discrepancy is statistically significant. We argue that this provides informative evidence against the major hypothesis in most current climate models.

Discussion

All series‐specific trends and confidence intervals are reported in the supporting information Table S1. The mean restricted trend (without a break term) is 0.325 ± 0.132°C per decade in the models and 0.173 ± 0.056°C per decade in the observations. With a break term included they are 0.389 ± 0.173°C per decade (models) and 0.142 ± 0.115°C per decade (observed). Figure 4 shows the individual trend magnitudes: the red circles and confidence interval whiskers are from models, and the blue are observed.

Figure 4: Trend magnitudes and 95% confidence intervals.
The number in the upper left corner indicates the number of model trends (out of 102) that exceed the observed average trend.

If models accurately represented the magnitude of 200‐ to 300‐hPa warming with only nonsystematic errors contributing noise, these distributions would be centered on zero. Clearly, they are centered above zero, in fact in both the restricted and general cases, the entire distribution is above zero.

Table S2 presents individual run test results. In the restricted case, 62 of the 102 divergence terms are significant, while in the general case, 87 of 102 are. The model‐observational discrepancy is not simple uncertainty or random noise but represents a structural bias shared across models.
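The comparison logic can be sketched in a few lines. This is not the authors' code: it substitutes a Newey-West HAC estimator for the persistence-robust variance estimator used in the paper, and synthetic series stand in for the model runs and observations.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years = np.arange(1958, 2018)                     # a 60-year test interval
t = (years - years[0]) / 10.0                     # time in decades

def fit_trend(y):
    """OLS trend with a Newey-West (HAC) standard error, degC per decade."""
    res = sm.OLS(y, sm.add_constant(t)).fit(cov_type="HAC", cov_kwds={"maxlags": 5})
    return res.params[1], res.bse[1]

obs = 0.173 * t + rng.normal(0.0, 0.15, t.size)            # stand-in observed series
models = [0.325 * t + rng.normal(0.0, 0.15, t.size) for _ in range(102)]

b_obs, se_obs = fit_trend(obs)
exceed = sum(1 for run in models if fit_trend(run)[0] > b_obs)
print(f"observed trend: {b_obs:.3f} +/- {2 * se_obs:.3f} degC/decade")
print(f"model runs warming faster than observed: {exceed} of {len(models)}")
```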

Worst and Best Models (Table S2)

Model            No Break   With Break
bcc‐csm1‐1          220.1        593.3
CanESM2             410.3        534.4
CCSM4               258.1        430.6
EC‐EARTH            296.0        222.5
FIO‐ESM             129.2        310.9
GISS‐E2‐H           157.3        444.8
GISS‐E2‐H‐CC        139.0        468.5
GISS‐E2‐R           382.4        237.7
HadGEM2‐ES           50.0        575.4
INMCM4                0.0          2.9

Note. First column: test score for restricted case (no break). Score is significant at 5% if it exceeds 41.53. Second column: test score for unrestricted case (with break at 1979). Score is significant at 5% if it exceeds 50.48.

Conclusion

Comparing observed trends to those predicted by models over the past 60 years reveals a clear and significant tendency on the part of models to overstate warming. All 102 CMIP5 model runs warm faster than observations, in most individual cases the discrepancy is significant, and on average the discrepancy is significant. The test of trend equivalence rejects whether or not we include a break at 1979 for the PCS, though the rejections are stronger when we control for its influence. Measures of series divergence are centered at a positive mean and the entire distribution is above zero. While the observed analogue exhibits a warming trend over the test interval it is significantly smaller than that shown in models, and the difference is large enough to reject the null hypothesis that models represent it correctly, within the bounds of random uncertainty.

Footnote:

The reference to Clutz (2015) is the post Temperatures According to Climate Models

See also: 2018 Update: Best Climate Model INMCM5

On Thermodynamic Climate Modelling

The Earth’s climate system functions as a massive heat engine.

Some years ago I wrote a post called Climate Thinking Out of the Box (reprinted later on) which was prompted by a conclusion from Lucarini et al. 2014:

“In particular, it is not obvious, as of today, whether it is more efficient to approach the problem of constructing a theory of climate dynamics starting from the framework of hamiltonian mechanics and quasi-equilibrium statistical mechanics or taking the point of view of dissipative chaotic dynamical systems, and of non-equilibrium statistical mechanics, and even the authors of this review disagree. The former approach can rely on much more powerful mathematical tools, while the latter is more realistic and epistemologically more correct, because, obviously, the climate is, indeed, a non-equilibrium system.”

Now we have a publication discussing progress in applying the latter approach, using thermodynamic concepts in the effort to model climate processes. The article is A new diagnostic tool for water, energy and entropy budgets in climate models by Valerio Lembo, Frank Lunkeit, and Valerio Lucarini, February 14, 2019. Overview in italics with my bolds.

Abstract: This work presents a novel diagnostic tool for studying the thermodynamics of the climate system, with a wide range of applications, from sensitivity studies to model tuning. It includes a number of modules for assessing the internal energy budget, the hydrological cycle, the Lorenz Energy Cycle and the material entropy production, respectively.

The routine receives as inputs energy fluxes at the surface and at the Top-of-Atmosphere (TOA), for the computation of energy budgets at the TOA, at the surface, and in the atmosphere as a residual. Meridional enthalpy transports are also computed from the divergence of the zonal mean energy budget fluxes; location and intensity of peaks in the two hemispheres are then provided as outputs. Rainfall, snowfall and latent heat fluxes are received as inputs for computing the water mass and latent energy budgets. If a land-sea mask is provided, the required quantities are separately computed over continents and oceans. The diagnostic tool also computes the Lorenz Energy Cycle (LEC) and its storage/conversion terms as annual mean global and hemispheric values.

In order to achieve this, one needs to provide as input three-dimensional daily fields of horizontal wind velocity and temperature in the troposphere. Two methods have been implemented for the computation of the material entropy production: one relying on the convergence of radiative heat fluxes in the atmosphere (the indirect method), the other combining the irreversible processes occurring in the climate system, particularly heat fluxes in the boundary layer, the hydrological cycle and the kinetic energy dissipation as retrieved from the residuals of the LEC.

A version of the diagnostic tool is included in the Earth System Model eValuation Tool (ESMValTool) community diagnostics, in order to assess the performance of the soon-to-be-available CMIP6 model simulations. The aim of this software is to provide a comprehensive picture of the thermodynamics of the climate system as reproduced in the state-of-the-art coupled general circulation models. This can prove useful for better understanding anthropogenic and natural climate change, paleoclimatic variability, and climatic tipping points.
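As a rough illustration of the budget arithmetic described above (a sketch with made-up zonal-mean fluxes, not the ESMValTool diagnostic's actual code), a TOA energy budget and the implied meridional enthalpy transport can be obtained by area-weighting the fluxes and integrating the imbalance from the South Pole:

```python
import numpy as np

nlat = 90
lat = np.linspace(-89.0, 89.0, nlat)                 # cell-centre latitudes
coslat = np.cos(np.deg2rad(lat))

# Toy zonal-mean annual-mean fluxes in W/m^2 (shapes are illustrative only)
asr = 60.0 + 290.0 * coslat**2                       # absorbed solar radiation
olr = 160.0 + 95.0 * coslat                          # outgoing longwave radiation
net_toa = asr - olr                                  # TOA energy budget per latitude band

R = 6.371e6                                          # Earth radius, m
dphi = np.deg2rad(lat[1] - lat[0])
band_area = 2.0 * np.pi * R**2 * coslat * dphi       # area of each latitude band

global_net = np.sum(net_toa * band_area) / np.sum(band_area)
print(f"global-mean TOA budget: {global_net:+.1f} W/m^2")

# Implied northward enthalpy transport: integrate the (de-meaned) imbalance
# from the South Pole so the transport closes at both poles.
transport = np.cumsum((net_toa - global_net) * band_area)     # W
print(f"peak implied poleward transport: {transport.max() / 1e15:.1f} PW")
```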

Energy: Rather than a proxy of a changing climate, surface temperature and precipitation changes should be better viewed as a consequence of a non-equilibrium steady state system which is responding to a radiative energy imbalance through a complex interaction of feedbacks. A changing climate, under the effect of an external transient forcing, can only be properly addressed if the energy imbalance, and the way it is transported within the system and converted into different forms, is taken into account. The models’ skill in representing the history of energy and heat exchanges in the climate system has been assessed by comparing numerical simulations against observations, where available, including the fundamental problem of ocean heat uptake.

Heat Transport: In order to understand how the heat is transported by the geophysical fluids, one should clarify what sets them into motion. We focus here on the atmosphere. A comprehensive view of the energetics fuelling the general circulation is given by the Lorenz Energy Cycle (LEC) framework. This provides a picture of the various processes responsible for conversion of available potential energy (APE), i.e. the excess of potential energy with respect to a state of thermodynamic equilibrium, into kinetic energy and dissipative heating. Under stationary conditions, the dissipative heating exactly equals the mechanical work performed by the atmosphere. In other words, the LEC formulation allows one to constrain the atmosphere to the first law of thermodynamics, and the system as a whole can be seen as a pure thermodynamic heat engine under dissipative non-equilibrium conditions.
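That first-law closure can be written out explicitly (generic textbook notation, not the notation of Lembo et al.): with G the generation of available potential energy, C its conversion to kinetic energy, D the dissipation and W the mechanical work done by the atmospheric heat engine, the steady-state LEC budgets are

\[
\frac{d}{dt}\,\mathrm{APE} = G - C \approx 0,
\qquad
\frac{d}{dt}\,\mathrm{KE} = C - D \approx 0,
\]

so that W = C = D = G: the dissipative heating equals the mechanical work, which is the constraint referred to above.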

Water: On the one hand, the energy budget is significantly affected by semi-empirical formulations of the water vapor spectrum; on the other hand, the energy budget influences the moisture budget by means of uncertainties in aerosol-cloud interactions and mechanisms of tropical deep convection. A global scale evaluation of the hydrological cycle, both from a moisture and an energetic perspective, is thus considered an integral part of an overall diagnostics for the thermodynamics of the climate system.

Entropy: From a macroscopic point of view, one usually refers to “material entropy production” as the entropy produced by the geophysical fluids in the climate system, which is not related to the properties of the radiative fields, but rather to the irreversible processes related to the motion of these fluids. Mainly, this has to do with phase changes and water vapor diffusion. Lucarini (2009) underlined the link between entropy production and the efficiency of the climate engine, which was then used to understand climatic tipping points, and, in particular, the snowball/warm Earth critical transition, to define a wider class of climate response metrics, and to study planetary circulation regimes. A constraint has also been proposed on the entropy production of the atmospheric heat engine, given the emerging importance of non-viscous processes in a warming climate.

The goal here is to look at models through the lens of their dynamics and thermodynamics, in view of the ideas about complex non-equilibrium systems enunciated above. The metrics that we propose here are based on the analysis of the energy and water budgets and transports, of the energy transformations, and of the entropy production.

Previous Post: Climate Thinking Out of the Box 

CMIP5 vs RSS

It seems that climate modelers are dealing with a quandary: How can we improve on the unsatisfactory results from climate modeling?

Shall we:
A. Continue tweaking models using classical maths, though they depend on climate being in quasi-equilibrium; or,
B. Start over from scratch, applying non-equilibrium maths to the turbulent climate, though this branch of math is immature with limited expertise.

In other words, we are confident in classical maths, but does climate have features that disqualify it from their application? We are confident that non-equilibrium maths were developed for systems such as the climate, but are these maths robust enough to deal with such a complex reality?

It appears that some modelers are coming to grips with the turbulent quality of climate due to convection dominating heat transfer in the lower troposphere. Heretofore, models put in a parameter for energy loss through convection, and proceeded to model the system as a purely radiative dissipative system. Recently, it seems that some modelers are striking out in a new, possibly more fruitful direction. Herbert et al 2013 is one example exploring the paradigm of non-equilibrium steady states (NESS). Such attempts are open to criticism from a classical position, but may lead to a breakthrough for climate modeling.

That is my layman’s POV. Here is the issue stated by practitioners, more elegantly with bigger words:

“In particular, it is not obvious, as of today, whether it is more efficient to approach the problem of constructing a theory of climate dynamics starting from the framework of hamiltonian mechanics and quasi-equilibrium statistical mechanics or taking the point of view of dissipative chaotic dynamical systems, and of non-equilibrium statistical mechanics, and even the authors of this review disagree. The former approach can rely on much more powerful mathematical tools, while the latter is more realistic and epistemologically more correct, because, obviously, the climate is, indeed, a non-equilibrium system.”

Lucarini et al 2014
http://arxiv.org/pdf/1311.1190.pdf

Here’s how Herbert et al address the issue of a turbulent, non-equilibrium atmosphere. Their results show that convection rules in the lower troposphere and direct warming from CO2 is quite modest, much less than current models project.

“Like any fluid heated from below, the atmosphere is subject to vertical instability which triggers convection. Convection occurs on small time and space scales, which makes it a challenging feature to include in climate models. Usually sub-grid parameterizations are required. Here, we develop an alternative view based on a global thermodynamic variational principle. We compute convective flux profiles and temperature profiles at steady-state in an implicit way, by maximizing the associated entropy production rate. Two settings are examined, corresponding respectively to the idealized case of a gray atmosphere, and a realistic case based on a Net Exchange Formulation radiative scheme. In the second case, we are also able to discuss the effect of variations of the atmospheric composition, like a doubling of the carbon dioxide concentration.

The response of the surface temperature to the variation of the carbon dioxide concentration — usually called climate sensitivity — ranges from 0.24 K (for the sub-arctic winter profile) to 0.66 K (for the tropical profile), as shown in table 3. To compare these values with the literature, we need to be careful about the feedbacks included in the model we wish to compare to. Indeed, if the overall climate sensitivity is still a subject of debate, this is mainly due to poorly understood feedbacks, like the cloud feedback (Stephens 2005), which are not accounted for in the present study.”

Abstract from:
Vertical Temperature Profiles at Maximum Entropy Production with a Net Exchange Radiative Formulation
Herbert et al 2013
http://arxiv.org/pdf/1301.1550.pdf
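A toy version of the maximum entropy production idea can be run in a few lines. The sketch below is a two-box (warm/cold) model in the spirit of the Paltridge and Lorenz box models rather than Herbert et al.'s column calculation, and every number in it is an illustrative assumption: it picks the inter-box heat flux that maximizes the entropy production rate while each box stays in radiative balance.

```python
import numpy as np

SIGMA = 5.67e-8                   # Stefan-Boltzmann constant, W m^-2 K^-4
S_WARM, S_COLD = 300.0, 160.0     # absorbed solar in each box, W/m^2 (toy values)

def box_temps(flux):
    """Steady-state box temperatures when `flux` W/m^2 is exported warm -> cold."""
    t_warm = ((S_WARM - flux) / SIGMA) ** 0.25
    t_cold = ((S_COLD + flux) / SIGMA) ** 0.25
    return t_warm, t_cold

fluxes = np.linspace(0.0, 70.0, 701)
production = [f * (1.0 / box_temps(f)[1] - 1.0 / box_temps(f)[0]) for f in fluxes]

best = int(np.argmax(production))
t_w, t_c = box_temps(fluxes[best])
print(f"entropy-production-maximizing heat flux: {fluxes[best]:.1f} W/m^2")
print(f"box temperatures at that flux: {t_w:.1f} K (warm), {t_c:.1f} K (cold)")
```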

In this modeling paradigm, we have to move from a linear radiative Energy Budget to a dynamic steady state Entropy Budget. As Ozawa et al. explain, this is a shift from current modeling practices, but is based on concepts going back to Carnot.

“Entropy of a system is defined as a summation of “heat supplied” divided by its “temperature” [Clausius, 1865]. Heat can be supplied by conduction, by convection, or by radiation. The entropy of the system will increase by equation (1) no matter which way we may choose. When we extract the heat from the system, the entropy of the system will decrease by the same amount. Thus the entropy of a diabatic system, which exchanges heat with its surrounding system, can either increase or decrease, depending on the direction of the heat exchange. This is not a violation of the second law of thermodynamics since the entropy increase in the surrounding system is larger.

Carnot regarded the Earth as a sort of heat engine, in which a fluid like the atmosphere acts as working substance transporting heat from hot to cold places, thereby producing the kinetic energy of the fluid itself. His general conclusion about heat engines is that there is a certain limit for the conversion rate of the heat energy into the kinetic energy and that this limit is inevitable for any natural systems including, among others, the Earth’s atmosphere.

Thus there is a flow of energy from the hot Sun to cold space through the Earth. In the Earth’s system the energy is transported from the warm equatorial region to the cool polar regions by the atmosphere and oceans. Then, according to Carnot, a part of the heat energy is converted into the potential energy which is the source of the kinetic energy of the atmosphere and oceans.

Thus it is likely that the global climate system is regulated at a state with a maximum rate of entropy production by the turbulent heat transport, regardless of the entropy production by the absorption of solar radiation. This result is also consistent with a conjecture that entropy of a whole system connected through a nonlinear system will increase along a path of evolution, with a maximum rate of entropy production among a manifold of possible paths [Sawada, 1981]. We shall resolve this radiation problem in this paper by providing a complete view of dissipation processes in the climate system in the framework of an entropy budget for the globe.

The hypothesis of maximum entropy production (MEP) thus far seems to have been dismissed by some as coincidence. The fact that the Earth’s climate system transports heat to the same extent as a system in a MEP state does not prove that the Earth’s climate system is necessarily seeking such a state. However, the coincidence argument has become harder to sustain now that Lorenz et al. [2001] have shown that the same condition can reproduce the observed distributions of temperatures and meridional heat fluxes in the atmospheres of Mars and Titan, two celestial bodies with atmospheric conditions and radiative settings very different from those of the Earth.”

THE SECOND LAW OF THERMODYNAMICS AND THE GLOBAL CLIMATE SYSTEM: A REVIEW OF THE MAXIMUM ENTROPY PRODUCTION PRINCIPLE
Hisashi Ozawa et al 2003
http://www.knmi.nl/~laagland/cursus/presentaties_voorjaar11/Ozawa.pdf

Climate Models Cover Up

Making Climate Models Look Good

Clive Best dove into climate model temperature projections and discovered how the data can be manipulated to make model projections look closer to measurements than they really are. His first post was A comparison of CMIP5 Climate Models with HadCRUT4.6, January 21, 2019. Excerpts in italics with my bolds.

Overview: Figure 1. shows a comparison of the latest HadCRUT4.6 temperatures with CMIP5 models for Representative Concentration Pathways (RCPs). The temperature data lies significantly below all RCPs, which themselves only diverge after ~2025.

Modern climate models originate from the general circulation models used for weather forecasting. These simulate the 3D hydrodynamic flow of the atmosphere and ocean on the Earth as it rotates daily on its tilted axis while orbiting the Sun annually. The meridional flow of energy from the tropics to the poles generates convective cells, prevailing winds, ocean currents and weather systems. Energy must be balanced at the top of the atmosphere between incoming solar energy and outgoing infra-red energy. This depends on changes in the solar heating, water vapour, clouds, CO2, ozone etc. This energy balance determines the surface temperature.

Weather forecasting models use live data assimilation to fix the state of the atmosphere in time and then extrapolate forward one or more days up to a maximum of a week or so. Climate models however run autonomously from some initial state, stepping far into the future assuming that they correctly simulate a changing climate due to CO2 levels, incident solar energy, aerosols, volcanoes etc. These models predict past and future surface temperatures, regional climates, rainfall, ice cover etc. So how well are they doing?

Fig 2. Global Surface temperatures from 12 different CMIP5 models run with RCP8.5

The disagreement on the global average surface temperature is huge – a spread of 4C. This implies that there must still be a problem relating to achieving overall energy balance at the TOA. Wikipedia tells us that the average temperature should be about 288K or 15C. Despite this discrepancy in reproducing the net surface temperature, the model trends in warming for RCP8.5 are similar.
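The 288 K figure comes from the simplest possible version of that TOA balance; a zero-dimensional sketch (textbook arithmetic, not any GCM's calculation) looks like this:

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0          # solar constant, W/m^2
ALBEDO = 0.30        # planetary albedo

absorbed = S0 * (1.0 - ALBEDO) / 4.0          # ~238 W/m^2 averaged over the sphere
t_effective = (absorbed / SIGMA) ** 0.25      # ~255 K emission temperature

print(f"effective emission temperature: {t_effective:.0f} K")
print(f"greenhouse elevation needed to reach 288 K: {288.0 - t_effective:.0f} K")
```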

Likewise, weather station measurements of temperature have changed with time and place, so they too do not yield a consistent absolute temperature average. The ‘solution’ to this problem is to use temperature ‘anomalies’ instead, relative to some fixed normal monthly period (baseline). I always use the same baseline as CRU, 1961-1990. Global warming is then measured by the change in such global average temperature anomalies. The implicit assumption of this is that nearby weather station and/or ocean measurements warm or cool coherently, such that the changes in temperature relative to the baseline can all be spatially averaged together. The usual example of this is that two nearby stations with different altitudes will have different temperatures but produce similar ‘anomalies’. A similar procedure is used on the model results to produce temperature anomalies. So how do they compare to the data?
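Before turning to that comparison, here is a minimal sketch of the anomaly procedure just described, using a synthetic monthly series and the CRU 1961-1990 baseline; note that changing the baseline shifts every anomaly by a constant, which is exactly the lever discussed later in this post.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
dates = pd.date_range("1950-01-01", "2019-12-01", freq="MS")
annual_cycle = 10.0 * np.sin(2 * np.pi * (dates.month - 1) / 12.0)
trend = 0.012 * np.arange(dates.size) / 12.0          # ~0.12 C per decade, toy value
series = pd.Series(14.0 + annual_cycle + trend
                   + rng.normal(0, 0.5, dates.size), index=dates)

# Monthly climatology over the 1961-1990 baseline (CRU convention)
baseline = series.loc["1961":"1990"]
climatology = baseline.groupby(baseline.index.month).mean()

# Anomaly = each month's value minus that calendar month's climatological mean
clim_per_month = np.array([climatology[m] for m in series.index.month])
anomalies = series - clim_per_month

print(f"2016 annual-mean anomaly: {anomalies.loc['2016'].mean():.2f} C")
```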

Fig 4. Model comparisons to data 1950-2050

Figure 4 shows a close-up detail from 1950-2050. This shows how there is a large spread in model trends even within each RCP ensemble. The data falls below the bulk of model runs after 2005, except briefly during the recent El Niño peak in 2016. Figure 4 also shows that the data are now lower than the mean of every RCP; furthermore, we won’t be able to distinguish between RCPs until after ~2030.

Zeke Hausfather’s Tricks to Make the Models Look Good

Clive’s second post is Zeke’s Wonder Plot, January 25, 2019. Excerpts in italics with my bolds.

Zeke Hausfather who works for Carbon Brief and Berkeley Earth has produced a plot which shows almost perfect agreement between CMIP5 model projections and global temperature data. This is based on RCP4.5 models and a baseline of 1981-2010. First here is his original plot.

I have reproduced his plot and  essentially agree that it is correct. However, I also found some interesting quirks.

The apples to apples comparison (model SSTs blended with model land 2m temperatures) reduces the model mean by about 0.06C. Zeke has also smoothed out the temperature data by using a 12 month running average. This has the effect of exaggerating peak values as compared to using the annual averages.

Effect of changing normalisation period. Cowtan & Way uses kriging to interpolate Hadcrut4.6 coverage into the Arctic and elsewhere.

Shown above is the result for a normalisation from 1961-1990. First, look at how the lowest two model projections now drop further down, while the data seemingly now lies below both the blended (thick black) and the original CMIP average (thin black). HadCRUT4 2016 is now below the blended value.

This improved model agreement has nothing to do with the data itself but instead is due to a reduction in warming predicted by the models. So what exactly is meant by ‘blending’?

Measurements of global average temperature anomalies use weather stations on land and sea surface temperatures (SST) over oceans. The land measurements are “surface air temperatures”(SAT) defined as the temperature 2m above ground level. The CMIP5 simulations however used SAT everywhere. The blended model projections use simulated SAT over land and TOS (temperature at surface) over oceans. This reduces all model predictions slightly, thereby marginally improving agreement with data. See also Climate-lab-book

The detailed blending calculations were done by Kevin Cowtan using a land mask and ice mask to define where TOS and SAT should be used in forming the global average. I downloaded his python scripts and checked the algorithms, and they look good to me. His results are based on the RCP8.5 ensemble.
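The blending idea itself reduces to a masked, area-weighted average. The sketch below uses synthetic fields and a random toy land mask, not Cowtan's scripts or real CMIP5 output, but it shows why substituting TOS for SAT over the ocean nudges the blended global mean downward.

```python
import numpy as np

nlat, nlon = 36, 72
lat = np.linspace(-87.5, 87.5, nlat)
weights = np.cos(np.deg2rad(lat))[:, None] * np.ones((nlat, nlon))

rng = np.random.default_rng(3)
sat = rng.normal(1.0, 0.3, (nlat, nlon))       # modelled 2m air temperature anomaly
tos = sat - 0.1                                # ocean surface anomaly, toy offset (cooler)
land_frac = (rng.random((nlat, nlon)) < 0.3).astype(float)    # toy land mask, ~30% land

def global_mean(field):
    return np.sum(field * weights) / np.sum(weights)

sat_everywhere = global_mean(sat)
blended = global_mean(land_frac * sat + (1.0 - land_frac) * tos)
print(f"SAT-everywhere global mean: {sat_everywhere:.3f} C")
print(f"blended (SAT land / TOS ocean): {blended:.3f} C")      # slightly lower, as described
```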

The solid blue curve is the CMIP5 RCP4.5 ensemble average after blending. The dashed curve is the original.

Again the models mostly lie above the data after 1999.

This post is intended to demonstrate just how careful you must be when interpreting plots that seemingly demonstrate either full agreement of climate models with data, or else total disagreement.

In summary, Zeke Hausfather, writing for Carbon Brief, was able to show an almost perfect agreement between data and models by 1) a clever choice of baseline, 2) a choice of RCP for the blended models, and 3) the use of a 12-month running average. His plot is 100% correct. However, exactly the same data plotted with a different baseline and using annual values (exactly like those in the models) instead of 12-month running averages shows instead that the models are still lying consistently above the data. I know which one I think best represents reality.

Moral to the Story:
There are lots of ways to make computer models look good. Try not to be distracted.

Latest Results from First-Class Climate Model INMCM5

Updated with October 25, 2018 Report

A previous analysis, Temperatures According to Climate Models, showed that only one of 42 CMIP5 models was close to hindcasting past temperature fluctuations. That model was INM-CM4, which also projected an unalarming 1.4C warming to the end of the century, in contrast to the other models, programmed for future warming five times that of the past.

In a recent comment thread, someone asked what has been done recently with that model, given that it appears to be “best of breed.” So I went looking and this post summarizes further work to produce a new, hopefully improved version by the modelers at the Institute of Numerical Mathematics of the Russian Academy of Sciences.

Institute of Numerical Mathematics, Russian Academy of Sciences, Moscow, Russia

A previous post a year ago went into the details of improvements made in producing the latest iteration INMCM5 for entry into the CMIP6 project.  That text is reprinted below.

Now a detailed description of the model’s global temperature outputs has been published, October 25, 2018, in Earth System Dynamics: Simulation of observed climate changes in 1850–2014 with climate model INM-CM5 (title is a link to the pdf). Excerpts below with my bolds.

Figure 1. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the next figures numbers on the time axis indicate the first year of the 5-year mean.

Abstract

Climate changes observed in 1850-2014 are modeled and studied on the basis of seven historical runs with the climate model INM-CM5 under the scenario proposed for the Coupled Model Intercomparison Project, Phase 6 (CMIP6). In all runs global mean surface temperature rises by 0.8 K at the end of the experiment (2014), in agreement with the observations. Periods of fast warming in 1920-1940 and 1980-2000 as well as its slowdown in 1950-1975 and 2000-2014 are correctly reproduced by the ensemble mean. The notable change here with respect to the CMIP5 results is correct reproduction of the slowdown of global warming in 2000-2014, which we attribute to more accurate description of the Solar constant in the CMIP6 protocol. The model is able to reproduce correct behavior of global mean temperature in 1980-2014 despite incorrect phases of the Atlantic Multidecadal Oscillation and Pacific Decadal Oscillation indices in the majority of experiments. The Arctic sea ice loss in recent decades is reasonably close to the observations in just one model run; the model underestimates Arctic sea ice loss by a factor of 2.5. The spatial pattern of the model mean surface temperature trend during the last 30 years looks close to the one for the ERA Interim reanalysis. The model correctly estimates the magnitude of stratospheric cooling.

Additional Commentary

Observational data of GMST for 1850-2014 used for verification of model results were produced by HadCRUT4 (Morice et al 2012). Monthly mean sea surface temperature (SST) data ERSSTv4 (Huang et al 2015) are used for comparison of the AMO and PDO indices with that of the model. Data of Arctic sea ice extent for 1979-2014 derived from satellite observations are taken from Comiso and Nishio (2008). Stratospheric temperature trend and geographical distribution of near surface air temperature trend for 1979-2014 are calculated from ERA Interim reanalysis data (Dee et al 2011).

Keeping in mind the arguments that the GMST slowdown at the beginning of the 21st century could be due to the internal variability of the climate system, let us look at the behavior of the AMO and PDO climate indices. Here we calculated the AMO index in the usual way, as the SST anomaly in the Atlantic at the latitudinal band 0N-60N minus the anomaly of the GMST. Model and observed 5-year mean AMO index time series are presented in Fig.3. The well known oscillation with a period of 60-70 years can be clearly seen in the observations. Among the model runs, only one (dashed purple line) shows oscillation with a period of about 70 years, but without a significant maximum near year 2000. In other model runs there is no distinct oscillation with a period of 60-70 years but a period of 20-40 years prevails. As a result none of the seven model trajectories reproduces the behavior of the observed AMO index after year 1950 (including its warm phase at the turn of the 20th and 21st centuries). One can conclude that anthropogenic forcing is unable to produce any significant impact on the AMO dynamics as its index averaged over 7 realizations stays around zero within a one sigma interval (0.08). Consequently, the AMO dynamics is controlled by internal variability of the climate system and cannot be predicted in historic experiments. On the other hand the model can correctly predict GMST changes in 1980-2014 while having the wrong phase of the AMO (blue, yellow, orange lines in Fig.1 and 3).
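The index calculation described here is simple to reproduce in outline. The sketch below uses synthetic gridded anomalies in place of ERSSTv4 or model SST, and a crude rectangular North Atlantic mask, so it only illustrates the recipe (North Atlantic 0-60N anomaly minus global anomaly, then a 5-year mean), not the paper's actual processing.

```python
import numpy as np

nyears, nlat, nlon = 165, 36, 72                   # 1850-2014 on a toy 5-degree grid
lat = np.linspace(-87.5, 87.5, nlat)
lon = np.linspace(2.5, 357.5, nlon)
w = np.cos(np.deg2rad(lat))[:, None] * np.ones((nlat, nlon))   # area weights

rng = np.random.default_rng(4)
sst_anom = rng.normal(0, 0.3, (nyears, nlat, nlon))            # stand-in SST anomalies

def area_mean(field, mask):
    return np.sum(field * w * mask) / np.sum(w * mask)

# Crude North Atlantic box: 0-60N, 280E-360E (illustrative mask only)
natl = ((lat[:, None] >= 0) & (lat[:, None] <= 60) & (lon[None, :] >= 280)).astype(float)
globe = np.ones((nlat, nlon))

amo = np.array([area_mean(f, natl) - area_mean(f, globe) for f in sst_anom])
amo_5yr = np.convolve(amo, np.ones(5) / 5, mode="valid")       # 5-year running mean
print(amo_5yr[:3])
```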

Conclusions

Seven historical runs for 1850-2014 with the climate model INM-CM5 were analyzed. It is shown that the magnitude of the GMST rise in the model runs agrees with the estimate based on the observations. All model runs reproduce the stabilization of GMST in 1950-1970, fast warming in 1980-2000 and a second GMST stabilization in 2000-2014, suggesting that the major factor for predicting GMST evolution is the external forcing rather than the system’s internal variability. Numerical experiments with the previous model version (INMCM4) for CMIP5 showed unrealistic gradual warming in 1950-2014. The difference between the two model results could be explained by more accurate modeling of stratospheric volcanic and tropospheric anthropogenic aerosol radiation effects (stabilization in 1950-1970), due to the new aerosol block in INM-CM5, and more accurate prescription of the Solar constant scenario (stabilization in 2000-2014) in the CMIP6 protocol. Four of the seven INM-CM5 model runs simulate the acceleration of warming in 1920-1940 in a correct way; the other three produce it earlier or later than in reality. This indicates that for the 1920-1940 warming the climate system’s natural variability plays a significant role. No model trajectory reproduces the correct time behavior of the AMO and PDO indices. Taking into account our results on the GMST modeling, one can conclude that anthropogenic forcing does not produce any significant impact on the dynamics of the AMO and PDO indices, at least for the INM-CM5 model. In turn, correct prediction of the GMST changes in 1980-2014 does not require correct phases of the AMO and PDO, as all model runs have correct values of the GMST while in at least three model experiments the phases of the AMO and PDO are opposite to the observed ones in that time. The North Atlantic SST time series produced by the model correlates better with the observations in 1980-2014. Three out of seven trajectories have a strongly positive North Atlantic SST anomaly, as in the observations (in the other four cases we see near-to-zero changes for this quantity). The INMCM5 has the same skill for prediction of the Arctic sea ice extent in 2000-2014 as the CMIP5 models, including INMCM4. It underestimates the rate of sea ice loss by a factor of between two and three. In one extreme case the magnitude of this decrease is as large as in the observations, while in the other the sea ice extent does not change compared to the preindustrial ages. In part this could be explained by the strong internal variability of the Arctic sea ice, but obviously the new version of the INMCM model and the new CMIP6 forcing protocol do not improve the prediction of the Arctic sea ice extent response to anthropogenic forcing.

Previous Post:  Climate Model Upgraded: INMCM5 Under the Hood

Earlier in 2017 came this publication Simulation of the present-day climate with the climate model INMCM5 by E.M. Volodin et al. Excerpts below with my bolds.

In this paper we present the fifth generation of the INMCM climate model that is being developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INMCM5). The most important changes with respect to the previous version (INMCM4) were made in the atmospheric component of the model. Its vertical resolution was increased to resolve the upper stratosphere and the lower mesosphere. A more sophisticated parameterization of condensation and cloudiness formation was introduced as well. An aerosol module was incorporated into the model. The upgraded oceanic component has a modified dynamical core optimized for better implementation on parallel computers and has two times higher resolution in both horizontal directions.

Analysis of the present-day climatology of the INMCM5 (based on the data of the historical run for 1979–2005) shows moderate improvements in the reproduction of basic circulation characteristics with respect to the previous version. Biases in the near-surface temperature and precipitation are slightly reduced compared with INMCM4, as well as biases in oceanic temperature, salinity and sea surface height. The most notable improvement over INMCM4 is the capability of the new model to reproduce the equatorial stratospheric quasi-biennial oscillation and the statistics of sudden stratospheric warmings.

The family of INMCM climate models, as most climate system models, consists of two main blocks: the atmosphere general circulation model, and the ocean general circulation model. The atmospheric part is based on the standard set of hydrothermodynamic equations with hydrostatic approximation written in advective form. The model prognostic variables are wind horizontal components, temperature, specific humidity and surface pressure.

Atmosphere Module

The INMCM5 borrows most of the atmospheric parameterizations from its previous version. One of the few notable changes is the new parameterization of clouds and large-scale condensation. In the INMCM5 cloud area and cloud water are computed prognostically according to Tiedtke (1993). That includes the formation of large-scale cloudiness as well as the formation of clouds in the atmospheric boundary layer and clouds of deep convection. Decrease of cloudiness due to mixing with unsaturated environment and precipitation formation are also taken into account. Evaporation of precipitation is implemented according to Kessler (1969).

In the INMCM5 the atmospheric model is complemented by the interactive aerosol block, which is absent in the INMCM4. Concentrations of coarse and fine sea salt, coarse and fine mineral dust, SO2, sulfate aerosol, hydrophilic and hydrophobic black and organic carbon are all calculated prognostically.

Ocean Module

The oceanic module of the INMCM5 uses generalized spherical coordinates. The model “South Pole” coincides with the geographical one, while the model “North Pole” is located in Siberia beyond the ocean area to avoid numerical problems near the pole. A vertical sigma-coordinate is used. The finite-difference equations are written using the Arakawa C-grid. The differential and finite-difference equations, as well as methods of solving them, can be found in Zalesny et al. (2010).

The INMCM5 uses explicit schemes for advection, while the INMCM4 used schemes based on splitting upon coordinates. Also, the iterative method for solving linear shallow water equation systems is used in the INMCM5 rather than direct method used in the INMCM4. The two previous changes were made to improve model parallel scalability. The horizontal resolution of the ocean part of the INMCM5 is 0.5 × 0.25° in longitude and latitude (compared to the INMCM4’s 1 × 0.5°).

Both the INMCM4 and the INMCM5 have 40 levels in the vertical. The parallel implementation of the ocean model can be found in Terekhov et al. (2011). The oceanic block includes vertical mixing and isopycnal diffusion parameterizations (Zalesny et al. 2010). Sea ice dynamics and thermodynamics are parameterized according to Iakovlev (2009). Assumptions of elastic-viscous-plastic rheology and a single ice thickness gradation are used. The time step in the oceanic block of the INMCM5 is 15 min.

Note the size of the human emissions next to the red arrow.

Carbon Cycle Module

The climate model INMCM5 has a carbon cycle module (Volodin 2007), where atmospheric CO2 concentration and carbon in vegetation, soil and ocean are calculated. In soil, a single carbon pool is considered. In the ocean, the only prognostic variable in the carbon cycle is total inorganic carbon. The biological pump is prescribed. The model calculates methane emission from wetlands and has a simplified methane cycle (Volodin 2008). Parameterizations of some electrical phenomena, including calculation of the ionospheric potential and flash intensity (Mareev and Volodin 2014), are also included in the model.

Surface Temperatures

When compared to the INMCM4 surface temperature climatology, the INMCM5 shows several improvements. Negative bias over continents is reduced mainly because of the increase in daily minimum temperature over land, which is achieved by tuning the surface flux parameterization. In addition, positive bias over southern Europe and eastern USA in summer typical for many climate models (Mueller and Seneviratne 2014) is almost absent in the INMCM5. A possible reason for this bias in many models is the shortage of soil water and suppressed evaporation leading to overestimation of the surface temperature. In the INMCM5 this problem was addressed by the increase of the minimum leaf resistance for some vegetation types.

Nevertheless, some problems migrate from one model version to the other: negative bias over most of the subtropical and tropical oceans, and positive bias over the Atlantic to the east of the USA and Canada. Root mean square (RMS) error of annual mean near surface temperature was reduced from 2.48 K in the INMCM4 to 1.85 K in the INMCM5.

Precipitation

In mid-latitudes, the positive precipitation bias over the ocean prevails in winter while negative bias occurs in summer. Compared to the INMCM4, the biases over the western Indian Ocean, Indonesia, the eastern tropical Pacific and the tropical Atlantic are reduced. A possible reason for this is the better reproduction of the tropical sea surface temperature (SST) in the INMCM5 due to the increase of the spatial resolution in the oceanic block, as well as the new condensation scheme. RMS annual mean model bias for precipitation is 1.35mm day−1 for the INMCM5 compared to 1.60mm day−1 for the INMCM4.

Cloud Radiation Forcing

Cloud radiation forcing (CRF) at the top of the atmosphere is one of the most important climate model characteristics, as errors in CRF frequently lead to an incorrect surface temperature.

In the high latitudes model errors in shortwave CRF are small. The model underestimates longwave CRF in the subtropics but overestimates it in the high latitudes. Errors in longwave CRF in the tropics tend to partially compensate errors in shortwave CRF. Both errors have positive sign near 60S leading to warm bias in the surface temperature here. As a result, we have some underestimation of the net CRF absolute value at almost all latitudes except the tropics. Additional experiments with tuned conversion of cloud water (ice) to precipitation (for upper cloudiness) showed that model bias in the net CRF could be reduced, but that the RMS bias for the surface temperature will increase in this case.

A table from another paper provides the climate parameters described by INMCM5.

Climate Parameters Observations INMCM3 INMCM4 INMCM5
Incoming solar radiation at TOA 341.3 [26] 341.7 341.8 341.4
Outgoing solar radiation at TOA   96–100 [26] 97.5 ± 0.1 96.2 ± 0.1 98.5 ± 0.2
Outgoing longwave radiation at TOA 236–242 [26] 240.8 ± 0.1 244.6 ± 0.1 241.6 ± 0.2
Solar radiation absorbed by surface 154–166 [26] 166.7 ± 0.2 166.7 ± 0.2 169.0 ± 0.3
Solar radiation reflected by surface     22–26 [26] 29.4 ± 0.1 30.6 ± 0.1 30.8 ± 0.1
Longwave radiation balance at surface –54 to –58 [26] –52.1 ± 0.1 –49.5 ± 0.1 –63.0 ± 0.2
Solar radiation reflected by atmosphere      74–78 [26] 68.1 ± 0.1 66.7 ± 0.1 67.8 ± 0.1
Solar radiation absorbed by atmosphere     74–91 [26] 77.4 ± 0.1 78.9 ± 0.1 81.9 ± 0.1
Direct heat flux from surface     15–25 [26] 27.6 ± 0.2 28.2 ± 0.2 18.8 ± 0.1
Latent heat flux from surface     70–85 [26] 86.3 ± 0.3 90.5 ± 0.3 86.1 ± 0.3
Cloud amount, %     64–75 [27] 64.2 ± 0.1 63.3 ± 0.1 69 ± 0.2
Solar radiation-cloud forcing at TOA         –47 [26] –42.3 ± 0.1 –40.3 ± 0.1 –40.4 ± 0.1
Longwave radiation-cloud forcing at TOA          26 [26] 22.3 ± 0.1 21.2 ± 0.1 24.6 ± 0.1
Near-surface air temperature, °С 14.0 ± 0.2 [26] 13.0 ± 0.1 13.7 ± 0.1 13.8 ± 0.1
Precipitation, mm/day 2.5–2.8 [23] 2.97 ± 0.01 3.13 ± 0.01 2.97 ± 0.01
River water inflow to the World Ocean, 10^3 km^3/year 29–40 [28] 21.6 ± 0.1 31.8 ± 0.1 40.0 ± 0.3
Snow coverage in Feb., mil. Km^2 46 ± 2 [29] 37.6 ± 1.8 39.9 ± 1.5 39.4 ± 1.5
Permafrost area, mil. Km^2 10.7–22.8 [30] 8.2 ± 0.6 16.1 ± 0.4 5.0 ± 0.5
Land area prone to seasonal freezing in NH, mil. Km^2 54.4 ± 0.7 [31] 46.1 ± 1.1 48.3 ± 1.1 51.6 ± 1.0
Sea ice area in NH in March, mil. Km^2 13.9 ± 0.4 [32] 12.9 ± 0.3 14.4 ± 0.3 14.5 ± 0.3
Sea ice area in NH in Sept., mil. Km^2 5.3 ± 0.6 [32] 4.5 ± 0.5 4.5 ± 0.5 6.1 ± 0.5

Heat flux units are given in W/m^2; the other units are given with the title of corresponding parameter. Where possible, ± shows standard deviation for annual mean value.  Source: Simulation of Modern Climate with the New Version Of the INM RAS Climate Model (Bracketed numbers refer to sources for observations)

Ocean Temperature and Salinity

The model biases in potential temperature and salinity averaged over longitude with respect to WOA09 (Antonov et al. 2010) are shown in Fig.12. Positive bias in the Southern Ocean penetrates from the surface downward for up to 300 m, while negative bias in the tropics can be seen even in the 100–1000 m layer.

Nevertheless, zonal mean temperature error at any level from the surface to the bottom is small. This was not the case for the INMCM4, where one could see a negative temperature bias of up to 2–3 K from 1.5 km to the bottom at nearly all latitudes, and a 2–3 K positive bias at levels of 700–1000 m. The reason for this improvement is the introduction of a higher background coefficient for vertical diffusion at high depth (3000 m and deeper) than at intermediate depth (300–500 m). The positive temperature bias at 45–65 N at all depths could probably be explained by shortcomings in the representation of deep convection [similar errors can be seen for most of the CMIP5 models (Flato et al. 2013, their Fig.9.13)].

Another feature common for many present day climate models (and for the INMCM5 as well) is a negative bias in southern tropical ocean salinity from the surface to 500 m. It can be explained by overestimation of precipitation at the southern branch of the Inter Tropical Convergence zone. Meridional heat flux in the ocean (Fig.13) is not far from available estimates (Trenberth and Caron 2001). It looks similar to the one for the INMCM4, but the maximum of northward transport in the Atlantic in the INMCM5 is about 0.1–0.2 × 10^15 W higher than the one in the INMCM4, probably because of the increased horizontal resolution in the oceanic block.

Sea Ice

In the Arctic, the model sea ice area is just slightly overestimated. Overestimation of the Arctic sea ice area is connected with a negative bias in the surface temperature. At the same time, the connection of the sea ice area error with the positive salinity bias is not evident, because ice formation is almost compensated by ice melting, and the total salinity source for this pair of processes is not large. The amplitude and phase of the sea ice annual cycle are reproduced correctly by the model. In the Antarctic, sea ice area is underestimated by a factor of 1.5 in all seasons, apparently due to the positive temperature bias. Note that the correct simulation of sea ice area dynamics in both hemispheres simultaneously is a difficult task for climate modeling.

The analysis of the model time series of the SST anomalies shows that the El Niño event frequency is approximately the same in the model and the data, but the model El Niños happen too regularly. The atmospheric response to El Niño events is also underestimated in the model by a factor of 1.5 with respect to the reanalysis data.

Conclusion

Based on the CMIP5 model INMCM4 the next version of the Institute of Numerical Mathematics RAS climate model was developed (INMCM5). The most important changes include new parameterizations of large scale condensation (cloud fraction and cloud water are now the prognostic variables), and increased vertical resolution in the atmosphere (73 vertical levels instead of 21, top model level raised from 30 to 60 km). In the oceanic block, horizontal resolution was increased by a factor of 2 in both directions.

The climate model was supplemented by the aerosol block. The model got a new parallel code with improved computational efficiency and scalability. With the new version of climate model we performed a test model run (80 years) to simulate the present-day Earth climate. The model mean state was compared with the available datasets. The structures of the surface temperature and precipitation biases in the INMCM5 are typical for the present climate models. Nevertheless, the RMS error in surface temperature, precipitation as well as zonal mean temperature and zonal wind are reduced in the INMCM5 with respect to its previous version, the INMCM4.

The model is capable of reproducing the equatorial stratospheric QBO and SSWs. The model biases for sea surface height and surface salinity are reduced in the new version as well, probably due to the increased spatial resolution in the oceanic block. The bias in ocean potential temperature at depths below 700 m in the INMCM5 is also reduced with respect to the INMCM4, likely because of the tuning of the background vertical diffusion coefficient.

Model sea ice area is reproduced well enough in the Arctic, but is underestimated in the Antarctic (as a result of the overestimated surface temperature). The RMS error in surface salinity is reduced almost everywhere compared to the previous model, except in the Arctic (where the positive bias becomes larger). As a final remark, one can conclude that the INMCM5 is substantially better in almost all aspects than its previous version, and we plan to use this model as a core component for the coming CMIP6 experiment.

Summary

On the one hand, this model example shows that the intent is simple: to represent dynamically the energy balance of our planetary climate system. On the other hand, the model description shows how many parameters are involved and how complex the interacting processes are. The attempt to simulate the operation of the climate system is a monumental task with many outstanding challenges, and this latest version is another step in an iterative development.

Note: Regarding the influence of rising CO2 on the energy balance: global warming advocates estimate a CO2 perturbation of about 4 W/m^2. In the climate parameters table above, observations of the radiation fluxes have an error range of 2 W/m^2 at best, and in several cases the observed ranges span 10 to 15 W/m^2.
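
For context, the commonly cited perturbation comes from the standard simplified expression for CO2 radiative forcing, ΔF ≈ 5.35 ln(C/C0) W/m^2 (Myhre et al. 1998); a quick sketch of that arithmetic for a doubling of CO2:

```python
import math

# Simplified CO2 radiative forcing (Myhre et al. 1998): dF = 5.35 * ln(C/C0) W/m^2
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(560.0), 2))   # doubling from 280 ppm -> about 3.7 W/m^2
print(round(co2_forcing(410.0), 2))   # present-day-ish level -> about 2.0 W/m^2
```

Either figure is comparable to, or smaller than, the observational uncertainty in several of the fluxes listed in the table.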

We do not yet have access to the time series temperature outputs from INMCM5 to compare with observations or with other CMIP6 models.  Presumably that will happen in the future.

Early Schematic: Flows and Feedbacks for Climate Models

2018 Update: Best Climate Model INMCM5

A previous analysis, Temperatures According to Climate Models, showed that only one of 42 CMIP5 models was close to hindcasting past temperature fluctuations. That model was INMCM4, which also projected an unalarming 1.4C of warming to the end of the century, in contrast to the other models, which are programmed for future warming at five times the past rate.

In a recent comment thread, someone asked what has been done recently with that model, given that it appears to be “best of breed.” So I went looking and this post summarizes further work to produce a new, hopefully improved version by the modelers at the Institute of Numerical Mathematics of the Russian Academy of Sciences.

Institute of Numerical Mathematics, Russian Academy of Sciences, Moscow, Russia

A previous post a year ago went into the details of improvements made in producing the latest iteration, INMCM5, for entry into the CMIP6 project. That text is reprinted below. Now we have some initial and promising results: Simulation of observed climate changes in 1850-2014 with climate model INM-CM5, published May 8, 2018 by Evgeny Volodin and Andrey Gritsun in Earth System Dynamics. Excerpts in italics with my bolds.


Figure 1. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the following figures, numbers on the time axis indicate the first year of the 5-year mean.

Abstract

Climate changes observed in 1850-2014 are modeled and studied on the basis of seven historical runs with the climate model INM-CM5 under the scenario proposed for the Coupled Model Intercomparison Project, Phase 6 (CMIP6). In all runs the global mean surface temperature rises by 0.8 K by the end of the experiment (2014), in agreement with the observations. Periods of fast warming in 1920-1940 and 1980-2000, as well as its slowdown in 1950-1975 and 2000-2014, are correctly reproduced by the ensemble mean. The notable change here with respect to the CMIP5 results is the correct reproduction of the slowdown of global warming in 2000-2014, which we attribute to a more accurate description of the solar constant in the CMIP6 protocol. The model is able to reproduce the correct behavior of global mean temperature in 1980-2014 despite incorrect phases of the Atlantic Multidecadal Oscillation (AMO) and Pacific Decadal Oscillation (PDO) indices in the majority of experiments. The Arctic sea ice loss in recent decades is reasonably close to the observations in just one model run; overall the model underestimates Arctic sea ice loss by a factor of 2.5. The spatial pattern of the model mean surface temperature trend during the last 30 years looks close to that of the ERA-Interim reanalysis. The model correctly estimates the magnitude of stratospheric cooling.

Additional Commentary

Observational data of GMST for 1850-2014 used for verification of the model results were produced by HadCRUT4 (Morice et al 2012). Monthly mean sea surface temperature (SST) data from ERSSTv4 (Huang et al 2015) are used for comparison of the AMO and PDO indices with those of the model. Data on Arctic sea ice extent for 1979-2014, derived from satellite observations, are taken from Comiso and Nishio (2008). The stratospheric temperature trend and the geographical distribution of the near-surface air temperature trend for 1979-2014 are calculated from ERA-Interim reanalysis data (Dee et al 2011).
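
As a minimal sketch of the bookkeeping behind Figure 1 (not the authors' actual processing), annual GMST values can be converted to anomalies relative to the 1850–1899 mean and then averaged in non-overlapping 5-year blocks:

```python
import numpy as np

def five_year_anomalies(years, gmst, base=(1850, 1899)):
    """Anomalies w.r.t. the 1850-1899 mean, averaged in non-overlapping 5-year blocks."""
    years = np.asarray(years)
    gmst = np.asarray(gmst, dtype=float)
    baseline = gmst[(years >= base[0]) & (years <= base[1])].mean()
    anom = gmst - baseline
    n = (anom.size // 5) * 5                       # drop any incomplete final block
    block_years = years[:n].reshape(-1, 5)[:, 0]   # first year of each 5-year block
    block_means = anom[:n].reshape(-1, 5).mean(axis=1)
    return block_years, block_means

# Tiny synthetic example (illustrative numbers only, not HadCRUT4 data)
yrs = np.arange(1850, 2015)
series = 0.005 * (yrs - 1850) + np.random.normal(0.0, 0.1, yrs.size)
print(five_year_anomalies(yrs, series)[1][-3:])
```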

Keeping in mind the argument that the GMST slowdown at the beginning of the 21st century could be due to the internal variability of the climate system, let us look at the behavior of the AMO and PDO climate indices. Here we calculated the AMO index in the usual way, as the SST anomaly in the Atlantic over the latitudinal band 0–60N minus the GMST anomaly. Model and observed 5-year mean AMO index time series are presented in Fig. 3. The well-known oscillation with a period of 60–70 years can be clearly seen in the observations. Among the model runs, only one (dashed purple line) shows an oscillation with a period of about 70 years, but without a significant maximum near the year 2000. In the other model runs there is no distinct oscillation with a period of 60–70 years; a period of 20–40 years prevails instead. As a result, none of the seven model trajectories reproduces the behavior of the observed AMO index after 1950 (including its warm phase at the turn of the 20th and 21st centuries). One can conclude that anthropogenic forcing is unable to produce any significant impact on the AMO dynamics, as its index averaged over the 7 realizations stays around zero within a one-sigma interval (0.08). Consequently, the AMO dynamics is controlled by internal variability of the climate system and cannot be predicted in historical experiments. On the other hand, the model can correctly predict GMST changes in 1980–2014 while having the wrong phase of the AMO (blue, yellow, orange lines in Figs. 1 and 3).
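
A minimal Python sketch of that index calculation, assuming annual-mean North Atlantic SST and GMST series are already in hand (the area averaging and any detrending details are simplified here):

```python
import numpy as np

def amo_index(sst_north_atlantic, gmst, smooth_years=5):
    """AMO index as described in the text: North Atlantic (0-60N) SST anomaly
    minus the GMST anomaly, smoothed with a running 5-year mean (annual inputs)."""
    sst = np.asarray(sst_north_atlantic, dtype=float)
    gmst = np.asarray(gmst, dtype=float)
    amo = (sst - sst.mean()) - (gmst - gmst.mean())
    kernel = np.ones(smooth_years) / smooth_years
    return np.convolve(amo, kernel, mode="valid")
```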

Conclusions

Seven historical runs for 1850–2014 with the climate model INM-CM5 were analyzed. It is shown that the magnitude of the GMST rise in the model runs agrees with the estimate based on the observations. All model runs reproduce the stabilization of GMST in 1950–1970, fast warming in 1980–2000 and a second GMST stabilization in 2000–2014, suggesting that the major factor for predicting GMST evolution is the external forcing rather than the system's internal variability. Numerical experiments with the previous model version (INMCM4) for CMIP5 showed unrealistic gradual warming in 1950–2014. The difference between the two model results can be explained by more accurate modeling of stratospheric volcanic and tropospheric anthropogenic aerosol radiation effects (the stabilization in 1950–1970), due to the new aerosol block in INM-CM5, and by a more accurate prescription of the solar constant scenario in the CMIP6 protocol (the stabilization in 2000–2014). Four of the seven INM-CM5 model runs simulate the acceleration of warming in 1920–1940 correctly; the other three produce it earlier or later than in reality. This indicates that for the 1920–1940 warming the climate system's natural variability plays a significant role.

No model trajectory reproduces the correct time behavior of the AMO and PDO indices. Taking into account our results on the GMST modeling, one can conclude that anthropogenic forcing does not produce any significant impact on the dynamics of the AMO and PDO indices, at least for the INM-CM5 model. In turn, correct prediction of the GMST changes in 1980–2014 does not require correct phases of the AMO and PDO, as all model runs have correct values of the GMST while in at least three model experiments the phases of the AMO and PDO are opposite to the observed ones in that time. The North Atlantic SST time series produced by the model correlates better with the observations in 1980–2014. Three out of seven trajectories have a strongly positive North Atlantic SST anomaly, as in the observations (in the other four cases we see near-zero changes for this quantity).

The INMCM5 has the same skill for prediction of the Arctic sea ice extent in 2000–2014 as the CMIP5 models, including INMCM4. It underestimates the rate of sea ice loss by a factor of between two and three. In one extreme case the magnitude of the decrease is as large as in the observations, while in the other the sea ice extent does not change compared to the preindustrial state. In part this could be explained by the strong internal variability of Arctic sea ice, but evidently the new version of the INMCM model and the new CMIP6 forcing protocol do not improve the prediction of the Arctic sea ice extent response to anthropogenic forcing.

Previous Post:  Climate Model Upgraded: INMCM5 Under the Hood

Earlier in 2017 came this publication: Simulation of the present-day climate with the climate model INMCM5, by E.M. Volodin et al. Excerpts below with my bolds.

In this paper we present the fifth generation of the INMCM climate model that is being developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INMCM5). The most important changes with respect to the previous version (INMCM4) were made in the atmospheric component of the model. Its vertical resolution was increased to resolve the upper stratosphere and the lower mesosphere. A more sophisticated parameterization of condensation and cloudiness formation was introduced as well. An aerosol module was incorporated into the model. The upgraded oceanic component has a modified dynamical core optimized for better implementation on parallel computers and has two times higher resolution in both horizontal directions.

Analysis of the present-day climatology of the INMCM5 (based on the data of a historical run for 1979–2005) shows moderate improvements in the reproduction of basic circulation characteristics with respect to the previous version. Biases in near-surface temperature and precipitation are slightly reduced compared with INMCM4, as well as biases in oceanic temperature, salinity and sea surface height. The most notable improvement over INMCM4 is the capability of the new model to reproduce the equatorial stratospheric quasi-biennial oscillation and the statistics of sudden stratospheric warmings.

The family of INMCM climate models, like most climate system models, consists of two main blocks: the atmosphere general circulation model and the ocean general circulation model. The atmospheric part is based on the standard set of hydrothermodynamic equations with the hydrostatic approximation written in advective form. The model prognostic variables are the horizontal wind components, temperature, specific humidity and surface pressure.

Atmosphere Module

The INMCM5 borrows most of the atmospheric parameterizations from its previous version. One of the few notable changes is the new parameterization of clouds and large-scale condensation. In the INMCM5, cloud area and cloud water are computed prognostically according to Tiedtke (1993). That includes the formation of large-scale cloudiness as well as the formation of clouds in the atmospheric boundary layer and clouds of deep convection. The decrease of cloudiness due to mixing with the unsaturated environment and the formation of precipitation are also taken into account. Evaporation of precipitation is implemented according to Kessler (1969).

In the INMCM5 the atmospheric model is complemented by the interactive aerosol block, which is absent in the INMCM4. Concentrations of coarse and fine sea salt, coarse and fine mineral dust, SO2, sulfate aerosol, hydrophilic and hydrophobic black and organic carbon are all calculated prognostically.

Ocean Module

The oceanic module of the INMCM5 uses generalized spherical coordinates. The model "South Pole" coincides with the geographical one, while the model "North Pole" is located in Siberia, beyond the ocean area, to avoid numerical problems near the pole. A vertical sigma coordinate is used. The finite-difference equations are written on the Arakawa C-grid. The differential and finite-difference equations, as well as methods of solving them, can be found in Zalesny et al. (2010).

The INMCM5 uses explicit schemes for advection, while the INMCM4 used schemes based on splitting along coordinates. Also, an iterative method for solving the linear shallow-water equation system is used in the INMCM5 rather than the direct method used in the INMCM4. These two changes were made to improve the model's parallel scalability. The horizontal resolution of the ocean part of the INMCM5 is 0.5 × 0.25° in longitude and latitude (compared to the INMCM4's 1 × 0.5°).

Both the INMCM4 and the INMCM5 have 40 levels in the vertical. The parallel implementation of the ocean model can be found in Terekhov et al. (2011). The oceanic block includes vertical mixing and isopycnal diffusion parameterizations (Zalesny et al. 2010). Sea ice dynamics and thermodynamics are parameterized according to Iakovlev (2009). Elastic-viscous-plastic rheology and a single ice thickness gradation are assumed. The time step in the oceanic block of the INMCM5 is 15 min.

Note the size of the human emissions next to the red arrow.

Carbon Cycle Module

The climate model INMCM5 has a carbon cycle module (Volodin 2007), in which atmospheric CO2 concentration and carbon in vegetation, soil and the ocean are calculated. In soil, a single carbon pool is considered. In the ocean, the only prognostic variable in the carbon cycle is total inorganic carbon. The biological pump is prescribed. The model calculates methane emission from wetlands and has a simplified methane cycle (Volodin 2008). Parameterizations of some electrical phenomena, including the calculation of the ionospheric potential and flash intensity (Mareev and Volodin 2014), are also included in the model.
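
As a toy illustration of what a "single carbon pool" means in practice (this is not the INMCM5 formulation, and the numbers are invented), soil carbon in such a scheme grows with litter input and decays at a first-order rate:

```python
def step_soil_carbon(c, litter_input, k_decay=0.02, dt=1.0):
    """One annual step of a single-pool soil carbon model:
    dC/dt = litter_input - k_decay * C  (all values illustrative)."""
    return c + dt * (litter_input - k_decay * c)

c = 1500.0            # gC/m^2, assumed initial stock
for _ in range(100):
    c = step_soil_carbon(c, litter_input=30.0)
print(round(c, 1))    # stays near litter_input / k_decay = 1500 gC/m^2 at equilibrium
```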

Surface Temperatures

When compared to the INMCM4 surface temperature climatology, the INMCM5 shows several improvements. The negative bias over the continents is reduced, mainly because of an increase in daily minimum temperature over land achieved by tuning the surface flux parameterization. In addition, the positive bias over southern Europe and the eastern USA in summer, typical of many climate models (Mueller and Seneviratne 2014), is almost absent in the INMCM5. A possible reason for this bias in many models is a shortage of soil water and suppressed evaporation, leading to overestimation of the surface temperature. In the INMCM5 this problem was addressed by increasing the minimum leaf resistance for some vegetation types.

Nevertheless, some problems migrate from one model version to the next: a negative bias over most of the subtropical and tropical oceans, and a positive bias over the Atlantic to the east of the USA and Canada. The root mean square (RMS) error of annual mean near-surface temperature was reduced from 2.48 K in the INMCM4 to 1.85 K in the INMCM5.
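
The RMS figures quoted here are area-weighted statistics of the model-minus-observation field; a minimal sketch of such a calculation on a regular latitude-longitude grid (an assumed layout, not the authors' code):

```python
import numpy as np

def area_weighted_rms(model, obs, lats_deg):
    """Area-weighted RMS of (model - obs) on a regular lat-lon grid.
    model, obs: 2-D arrays shaped (nlat, nlon); lats_deg: 1-D latitudes in degrees."""
    diff = np.asarray(model, dtype=float) - np.asarray(obs, dtype=float)
    weights = np.cos(np.deg2rad(lats_deg))[:, None] * np.ones_like(diff)
    return float(np.sqrt(np.sum(weights * diff**2) / np.sum(weights)))
```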

Precipitation

In mid-latitudes, a positive precipitation bias over the ocean prevails in winter, while a negative bias occurs in summer. Compared to the INMCM4, the biases over the western Indian Ocean, Indonesia, the eastern tropical Pacific and the tropical Atlantic are reduced. A possible reason for this is the better reproduction of tropical sea surface temperature (SST) in the INMCM5, due to the increased spatial resolution in the oceanic block as well as the new condensation scheme. The RMS annual mean model bias for precipitation is 1.35 mm/day for the INMCM5, compared to 1.60 mm/day for the INMCM4.

Cloud Radiation Forcing

Cloud radiation forcing (CRF) at the top of the atmosphere is one of the most important climate model characteristics, as errors in CRF frequently lead to an incorrect surface temperature.

In the high latitudes, model errors in shortwave CRF are small. The model underestimates longwave CRF in the subtropics but overestimates it in the high latitudes. Errors in longwave CRF in the tropics tend to partially compensate the errors in shortwave CRF. Both errors have a positive sign near 60S, leading to a warm bias in the surface temperature there. As a result, there is some underestimation of the absolute value of the net CRF at almost all latitudes except the tropics. Additional experiments with a tuned conversion of cloud water (ice) to precipitation (for upper-level cloudiness) showed that the model bias in net CRF could be reduced, but the RMS bias for the surface temperature would increase in that case.
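
For readers unfamiliar with the diagnostic, TOA cloud radiative forcing is conventionally the difference between clear-sky and all-sky fluxes; a minimal sketch under the usual sign convention (the sample numbers are merely of the observed global-mean magnitude):

```python
def cloud_radiative_forcing(olr_all, olr_clear, rsw_all, rsw_clear):
    """TOA cloud radiative forcing in W/m^2, usual sign convention.
    olr_*: outgoing longwave; rsw_*: reflected (outgoing) shortwave."""
    lw_crf = olr_clear - olr_all      # clouds trap longwave -> positive (warming)
    sw_crf = rsw_clear - rsw_all      # clouds reflect sunlight -> usually negative (cooling)
    return lw_crf, sw_crf, lw_crf + sw_crf

# Illustrative global-mean numbers of roughly the observed magnitude
print(cloud_radiative_forcing(olr_all=240.0, olr_clear=266.0,
                              rsw_all=100.0, rsw_clear=53.0))
# -> (26.0, -47.0, -21.0): compare the +26 and -47 observational entries in the table below
```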

A table from another paper provides the climate parameters described by INMCM5.

Climate Parameter | Observations | INMCM3 | INMCM4 | INMCM5
--- | --- | --- | --- | ---
Incoming solar radiation at TOA | 341.3 [26] | 341.7 | 341.8 | 341.4
Outgoing solar radiation at TOA | 96–100 [26] | 97.5 ± 0.1 | 96.2 ± 0.1 | 98.5 ± 0.2
Outgoing longwave radiation at TOA | 236–242 [26] | 240.8 ± 0.1 | 244.6 ± 0.1 | 241.6 ± 0.2
Solar radiation absorbed by surface | 154–166 [26] | 166.7 ± 0.2 | 166.7 ± 0.2 | 169.0 ± 0.3
Solar radiation reflected by surface | 22–26 [26] | 29.4 ± 0.1 | 30.6 ± 0.1 | 30.8 ± 0.1
Longwave radiation balance at surface | –54 to –58 [26] | –52.1 ± 0.1 | –49.5 ± 0.1 | –63.0 ± 0.2
Solar radiation reflected by atmosphere | 74–78 [26] | 68.1 ± 0.1 | 66.7 ± 0.1 | 67.8 ± 0.1
Solar radiation absorbed by atmosphere | 74–91 [26] | 77.4 ± 0.1 | 78.9 ± 0.1 | 81.9 ± 0.1
Direct heat flux from surface | 15–25 [26] | 27.6 ± 0.2 | 28.2 ± 0.2 | 18.8 ± 0.1
Latent heat flux from surface | 70–85 [26] | 86.3 ± 0.3 | 90.5 ± 0.3 | 86.1 ± 0.3
Cloud amount, % | 64–75 [27] | 64.2 ± 0.1 | 63.3 ± 0.1 | 69 ± 0.2
Solar radiation cloud forcing at TOA | –47 [26] | –42.3 ± 0.1 | –40.3 ± 0.1 | –40.4 ± 0.1
Longwave radiation cloud forcing at TOA | 26 [26] | 22.3 ± 0.1 | 21.2 ± 0.1 | 24.6 ± 0.1
Near-surface air temperature, °C | 14.0 ± 0.2 [26] | 13.0 ± 0.1 | 13.7 ± 0.1 | 13.8 ± 0.1
Precipitation, mm/day | 2.5–2.8 [23] | 2.97 ± 0.01 | 3.13 ± 0.01 | 2.97 ± 0.01
River water inflow to the World Ocean, 10^3 km^3/year | 29–40 [28] | 21.6 ± 0.1 | 31.8 ± 0.1 | 40.0 ± 0.3
Snow coverage in February, mil. km^2 | 46 ± 2 [29] | 37.6 ± 1.8 | 39.9 ± 1.5 | 39.4 ± 1.5
Permafrost area, mil. km^2 | 10.7–22.8 [30] | 8.2 ± 0.6 | 16.1 ± 0.4 | 5.0 ± 0.5
Land area prone to seasonal freezing in NH, mil. km^2 | 54.4 ± 0.7 [31] | 46.1 ± 1.1 | 48.3 ± 1.1 | 51.6 ± 1.0
Sea ice area in NH in March, mil. km^2 | 13.9 ± 0.4 [32] | 12.9 ± 0.3 | 14.4 ± 0.3 | 14.5 ± 0.3
Sea ice area in NH in September, mil. km^2 | 5.3 ± 0.6 [32] | 4.5 ± 0.5 | 4.5 ± 0.5 | 6.1 ± 0.5

Heat flux units are given in W/m^2; other units are given with the title of the corresponding parameter. Where possible, ± shows the standard deviation of the annual mean value. Source: Simulation of Modern Climate with the New Version of the INM RAS Climate Model (bracketed numbers refer to sources for the observations).
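
One way to read the table is to check how nearly the surface energy budget closes in each column; a quick arithmetic check using the INMCM5 values above:

```python
# Surface energy budget residual for the INMCM5 column of the table (all W/m^2)
absorbed_solar   = 169.0   # solar radiation absorbed by surface
longwave_balance = -63.0   # net longwave at surface (negative = cooling)
sensible_flux    = 18.8    # direct heat flux from surface
latent_flux      = 86.1    # latent heat flux from surface

residual = absorbed_solar + longwave_balance - sensible_flux - latent_flux
print(residual)   # about +1.1 W/m^2 left over to warm the surface and ocean
```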

Ocean Temperature and Salinity

The model biases in potential temperature and salinity averaged over longitude with respect to WOA09 (Antonov et al. 2010) are shown in Fig. 12. Positive bias in the Southern Ocean penetrates from the surface down to about 300 m, while negative bias in the tropics can be seen even in the 100–1000 m layer.

Nevertheless, the zonal mean temperature error at any level from the surface to the bottom is small. This was not the case for the INMCM4, where one could see a negative temperature bias of up to 2–3 K from 1.5 km down to the bottom at nearly all latitudes, and a 2–3 K positive bias at levels of 700–1000 m. The reason for this improvement is the introduction of a higher background coefficient for vertical diffusion at great depth (3000 m and deeper) than at intermediate depth (300–500 m). The positive temperature bias at 45–65 N at all depths can probably be explained by shortcomings in the representation of deep convection [similar errors can be seen for most of the CMIP5 models (Flato et al. 2013, their Fig. 9.13)].

Another feature common to many present-day climate models (and to the INMCM5 as well) is a negative bias in southern tropical ocean salinity from the surface to 500 m. It can be explained by an overestimation of precipitation at the southern branch of the Intertropical Convergence Zone. The meridional heat flux in the ocean (Fig. 13) is not far from available estimates (Trenberth and Caron 2001). It looks similar to that of the INMCM4, but the maximum northward transport in the Atlantic in the INMCM5 is about 0.1–0.2 × 10^15 W higher than in the INMCM4, probably because of the increased horizontal resolution in the oceanic block.

Sea Ice

In the Arctic, the model sea ice area is only slightly overestimated. The overestimation of Arctic sea ice area is connected with the negative bias in surface temperature. At the same time, a connection between the sea ice area error and the positive salinity bias is not evident, because ice formation is almost compensated by ice melting and the total salinity source from this pair of processes is not large. The amplitude and phase of the sea ice annual cycle are reproduced correctly by the model. In the Antarctic, sea ice area is underestimated by a factor of 1.5 in all seasons, apparently due to the positive temperature bias. Note that the correct simulation of sea ice area dynamics in both hemispheres simultaneously is a difficult task for climate modeling.

The analysis of the model time series of SST anomalies shows that the El Niño event frequency is approximately the same in the model and the data, but the model El Niños happen too regularly. The atmospheric response to El Niño events is also underestimated in the model by a factor of 1.5 with respect to the reanalysis data.

Conclusion

Based on the CMIP5 model INMCM4, the next version of the Institute of Numerical Mathematics RAS climate model (INMCM5) was developed. The most important changes include new parameterizations of large-scale condensation (cloud fraction and cloud water are now prognostic variables) and increased vertical resolution in the atmosphere (73 vertical levels instead of 21, with the top model level raised from 30 to 60 km). In the oceanic block, horizontal resolution was increased by a factor of 2 in both directions.

The climate model was supplemented by an aerosol block. The model received a new parallel code with improved computational efficiency and scalability. With the new version of the climate model we performed a test run (80 years) to simulate the present-day Earth climate. The model mean state was compared with the available datasets. The structures of the surface temperature and precipitation biases in the INMCM5 are typical of present climate models. Nevertheless, the RMS errors in surface temperature and precipitation, as well as in zonal mean temperature and zonal wind, are reduced in the INMCM5 with respect to its previous version, the INMCM4.

The model is capable of reproducing the equatorial stratospheric QBO and SSWs. The model biases for sea surface height and surface salinity are reduced in the new version as well, probably due to the increased spatial resolution in the oceanic block. The bias in ocean potential temperature at depths below 700 m in the INMCM5 is also reduced with respect to the INMCM4, likely because of the tuning of the background vertical diffusion coefficient.

Model sea ice area is reproduced well enough in the Arctic, but is underestimated in the Antarctic (as a result of the overestimated surface temperature). The RMS error in surface salinity is reduced almost everywhere compared to the previous model, except in the Arctic (where the positive bias becomes larger). As a final remark, one can conclude that the INMCM5 is substantially better in almost all aspects than its previous version, and we plan to use this model as a core component for the coming CMIP6 experiment.

Summary

On the one hand, this model example shows that the intent is simple: to represent dynamically the energy balance of our planetary climate system. On the other hand, the model description shows how many parameters are involved and how complex the interacting processes are. The attempt to simulate the operation of the climate system is a monumental task with many outstanding challenges, and this latest version is another step in an iterative development.

Note: Regarding the influence of rising CO2 on the energy balance: global warming advocates estimate a CO2 perturbation of about 4 W/m^2. In the climate parameters table above, observations of the radiation fluxes have an error range of 2 W/m^2 at best, and in several cases the observed ranges span 10 to 15 W/m^2.

We do not yet have access to the time series temperature outputs from INMCM5 to compare with observations or with other CMIP6 models.  Presumably that will happen in the future.

Early Schematic: Flows and Feedbacks for Climate Models

Unbelievable Climate Models

It is not just you thinking the world is not warming the way climate models predicted. The models are flawed, and their estimates of the climate's future response to rising CO2 run way too hot. Yet these overcooked forecasts are the basis for policy makers to consider all kinds of climate impacts, from sea level rise to food production and outbreaks of acne.

The models’ outputs are contradicted by the instrumental temperature records. So a choice must be made: Shall we rely on measurements of our past climate experience, or embrace the much warmer future envisioned by these models?

Ross McKitrick takes us through this fundamental issue in his Financial Post article All those warming-climate predictions suddenly have a big, new problem. Excerpts below with my bolds, headers and images.

Why ECS is Important

One of the most important numbers in the world goes by the catchy title of Equilibrium Climate Sensitivity, or ECS. It is a measure of how much the climate responds to greenhouse gases. More formally, it is defined as the increase, in degrees Celsius, of average temperatures around the world, after doubling the amount of carbon dioxide in the atmosphere and allowing the atmosphere and the oceans to adjust fully to the change. The reason it’s important is that it is the ultimate justification for governmental policies to fight climate change.

The United Nations Intergovernmental Panel on Climate Change (IPCC) says ECS is likely between 1.5 and 4.5 degrees Celsius, but it can’t be more precise than that. Which is too bad, because an enormous amount of public policy depends on its value. People who study the impacts of global warming have found that if ECS is low — say, less than two — then the impacts of global warming on the economy will be mostly small and, in many places, mildly beneficial. If it is very low, for instance around one, it means greenhouse gas emissions are simply not worth doing anything about. But if ECS is high — say, around four degrees or more — then climate change is probably a big problem. We may not be able to stop it, but we’d better get ready to adapt to it.

So, somebody, somewhere, ought to measure ECS. As it turns out, a lot of people have been trying, and what they have found has enormous policy implications.

The violins span 5–95% ranges; their widths indicate how PDF values vary with ECS. Black lines show medians, red lines span 17–83% ‘likely’ ranges. Published estimates based directly on observed warming are shown in blue. Unpublished estimates of mine based on warming attributable to greenhouse gases inferred by two recent detection and attribution studies are shown in green. CMIP5 models are shown in salmon. The observational ECS estimates have broadly similar medians and ‘likely’ ranges, all of which are far below the corresponding values for the CMIP5 models. Source: Nic Lewis at Climate Audit https://climateaudit.org/2015/04/13/pitfalls-in-climate-sensitivity-estimation-part-2/

Methods Matter

To understand why, we first need to delve into the methodology a bit. There are two ways scientists try to estimate ECS. The first is to use a climate model, double the modeled CO2 concentration from the pre-industrial level, and let it run until temperatures stabilize a few hundred years into the future. This approach, called the model-based method, depends for its accuracy on the validity of the climate model, and since models differ quite a bit from one another, it yields a wide range of possible answers. A well-known statistical distribution derived from modeling studies summarizes the uncertainties in this method. It shows that ECS is probably between two and 4.5 degrees, possibly as low as 1.5 but not lower, and possibly as high as nine degrees. This range of potential warming is very influential on economic analyses of the costs of climate change.
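
A toy zero-dimensional energy balance model illustrates the logic of the model-based method: impose the forcing from doubled CO2 and integrate until the temperature stops changing. The feedback and heat-capacity values below are illustrative assumptions, not any GCM's:

```python
def equilibrium_warming(forcing=3.7, feedback=1.3, heat_capacity=8.0,
                        dt=0.1, years=1000):
    """Integrate C dT/dt = F - lambda*T until it settles; returns the final T.
    forcing in W/m^2, feedback lambda in W/m^2/K, heat capacity in W yr/m^2/K."""
    t = 0.0
    for _ in range(int(years / dt)):
        t += dt * (forcing - feedback * t) / heat_capacity
    return t

print(round(equilibrium_warming(), 2))   # -> forcing / feedback, about 2.85 K here
```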

The second method is to use long-term historical data on temperatures, solar activity, carbon-dioxide emissions and atmospheric chemistry to estimate ECS using a simple statistical model derived by applying the law of conservation of energy to the planetary atmosphere. This is called the Energy Balance method. It relies on some extrapolation to satisfy the definition of ECS but has the advantage of taking account of the available data showing how the actual atmosphere has behaved over the past 150 years.
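
The core arithmetic of the Energy Balance method fits in a few lines: scale the observed warming by the ratio of the forcing from doubled CO2 to the observed forcing change net of ocean heat uptake. The inputs below are rough illustrations of the kind of values used, not the published estimates:

```python
def energy_balance_ecs(delta_t, delta_f, delta_q, f2x=3.7):
    """Energy-budget ECS estimate: ECS = F_2x * dT / (dF - dQ).
    delta_t: observed warming (K) between base and final periods,
    delta_f: change in radiative forcing (W/m^2),
    delta_q: change in planetary heat uptake (W/m^2)."""
    return f2x * delta_t / (delta_f - delta_q)

# Illustrative inputs of roughly the magnitude used in recent studies
print(round(energy_balance_ecs(delta_t=0.8, delta_f=2.5, delta_q=0.6), 2))  # ~1.6 K
```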

The surprising thing is that the Energy Balance estimates are very low compared to model-based estimates. The accompanying chart compares the model-based range to ECS estimates from a dozen Energy Balance studies over the past decade. Clearly these two methods give differing answers, and the question of which one is more accurate is important.

Weak Defenses for Model Discrepancies

Climate modelers have put forward two explanations for the discrepancy. One is called the “emergent constraint” approach. The idea is that models yield a range of ECS values, and while we can’t measure ECS directly, the models also yield estimates of a lot of other things that we can measure (such as the reflectivity of cloud tops), so we could compare those other measures to the data, and when we do, sometimes the models with high ECS values also yield measures of secondary things that fit the data better than models with low ECS values.

This argument has been a bit of a tough sell, since the correlations involved are often weak, and it doesn’t explain why the Energy Balance results are so low.

The second approach is based on so-called “forcing efficacies,” which is the concept that climate forcings, such as greenhouse gases and aerosol pollutants, differ in their effectiveness over time and space, and if these variations are taken into account the Energy Balance sensitivity estimates may come out higher. This, too, has been a controversial suggestion.

Challenges to Oversensitive Models

A recent Energy Balance ECS estimate was just published in the Journal of Climate by Nicholas Lewis and Judith Curry. There are several features that make their study especially valuable. First, they rely on IPCC estimates of greenhouse gases, solar changes and other climate forcings, so they can’t be accused of putting a finger on the scale by their choice of data. Second, they take into account the efficacy issue and discuss it at length. They also take into account recent debates about how surface temperatures should or shouldn’t be measured, and how to deal with areas like the Arctic where data are sparse. Third, they compute their estimates over a variety of start and end dates to check that their ECS estimate is not dependent on the relative warming hiatus of the past two decades.

Their ECS estimate is 1.5 degrees, with a probability range between 1.05 and 2.45 degrees. If the study were a one-time outlier we might be able to ignore it. But it is part of a long list of studies from independent teams (as this interactive graphic shows), using a variety of methods that take account of critical challenges, all of which conclude that climate models exhibit too much sensitivity to greenhouse gases.

Change the Sensitivity, Change the Future

Policy-makers need to pay attention, because this debate directly impacts the carbon-tax discussion.

The Environmental Protection Agency uses social cost of carbon models that rely on the model-based ECS estimates. Last year, two colleagues and I published a study in which we took an earlier Lewis and Curry ECS estimate and plugged it into two of those models. The result was that the estimated economic damages of greenhouse gas emissions fell by between 40 and 80 per cent, and in the case of one model the damages had a 40 per cent probability of being negative for the next few decades — that is, they would be beneficial changes. The new Lewis and Curry ECS estimate is even lower than their old one, so if we re-did the same study we would find even lower social costs of carbon.

Conclusion

If ECS is as low as the Energy Balance literature suggests, it means that the climate models we have been using for decades run too hot and need to be revised. It also means that greenhouse gas emissions do not have as big an impact on the climate as has been claimed, and the case for costly policy measures to reduce carbon-dioxide emissions is much weaker than governments have told us. For a science that was supposedly “settled” back in the early 1990s, we sure have a lot left to learn.

Ross McKitrick is professor of economics at the University of Guelph and senior fellow at the Fraser Institute.

Warming from CO2 Unlikely

Figure 5. Simplification of IPCC AR5 shown above in Fig. 4. The colored lines represent the range of results for the models and observations. The trends here represent trends at different levels of the tropical atmosphere from the surface up to 50,000 ft. The gray lines are the bounds for the range of observations, the blue for the range of IPCC model results without extra GHGs, and the red for IPCC model results with extra GHGs. The key point displayed is the lack of overlap between the GHG model results (red) and the observations (gray). The non-GHG model runs (blue) overlap the observations almost completely.

A recent post at Friends of Science alerted me to an important proof against the CO2 global warming claim. It was included in John Christy's testimony of 29 Mar 2017 to the House Committee on Science, Space and Technology. The text below is from that document, which can be accessed here. (My bolds)

Main Point: IPCC Assessment Reports show that the IPCC climate models performed best versus observations when they did not include extra GHGs and this result can be demonstrated with a statistical model as well.

(5)  A simple statistical model that passed the same “scientific-method” test

The IPCC climate models performed best versus observations when they did not include extra GHGs, and this result can be demonstrated with a statistical model as well. I was a coauthor of a report which produced such an analysis (Wallace, J., J. Christy, and J. D'Aleo, "On the existence of a 'Tropical Hot Spot' & the validity of the EPA's CO2 Endangerment Finding – Abridged Research Report", August 2016, available here).

In this report we examine annual estimates from many sources of global and tropical deep-layer temperatures since 1959 and since 1979, utilizing explanatory variables that did not include rising CO2 concentrations. We applied the model to estimates of global and tropical temperature from the satellite and balloon sources, individually, shown in Fig. 2 above. The explanatory variables are those that have been known for decades, such as indices of the El Niño–Southern Oscillation (ENSO), volcanic activity, and solar activity (e.g. see Christy and McNider, 1994, "Satellite greenhouse signal", Nature, 367, 27 Jan). [One of the ENSO explanatory variables was the accumulated MEI (Multivariate ENSO Index, see https://www.esrl.noaa.gov/psd/enso/mei/), in which the index was summed through time to provide an indication of its accumulated impact. This "accumulated MEI" was shown to be a potential factor in global temperatures by Spencer and Braswell, 2014 ("The role of ENSO in global ocean temperature changes during 1955-2011 simulated with a 1D climate model", Asia-Pac. J. Atmos. Sci. 50(2), 229-237, DOI:10.1007/s13143-014-001-z). Interestingly, later work has shown that this "accumulated MEI" has virtually the same impact as the accumulated solar index, both of which generally paralleled the rise in temperatures through the 1980s and 1990s and the slowdown in the 21st century. Thus our report would have the same conclusion with or without the "accumulated MEI."]

The basic result of this report is that the temperature trend of several datasets since 1979 can be explained by variations in the components that naturally affect the climate, just as the IPCC inadvertently indicated in Fig. 5 above. The advantage of the simple statistical treatment is that the complicated processes, such as clouds, ocean–atmosphere interaction, aerosols, etc., are implicitly incorporated through the statistical relationships discovered from the actual data. Climate models attempt to calculate these highly non-linear processes from imperfect parameterizations (estimates), whereas the statistical model directly accounts for them, since the bulk atmospheric temperature is the response variable these processes impact. It is true that the statistical model does not know what each sub-process is or how each might interact with other processes. But it also must be made clear: it is an understatement to say that no IPCC climate model accurately incorporates all of the non-linear processes that affect the system. I simply point out that because the model is constrained by the ultimate response variable (bulk temperature), these highly complex processes are included.
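
A minimal sketch of that kind of statistical model, assuming annual series for the MEI, a volcanic aerosol index and a solar index are available (the variable names and any data you feed it are placeholders, not the report's actual inputs):

```python
import numpy as np

def fit_natural_factors(temp, mei, volcanic, solar):
    """Ordinary least squares fit of annual temperature anomalies on natural
    factors, using the accumulated (cumulative-sum) MEI as described in the text."""
    y = np.asarray(temp, dtype=float)
    acc_mei = np.cumsum(np.asarray(mei, dtype=float))       # accumulated ENSO influence
    X = np.column_stack([np.ones(y.size), acc_mei,
                         np.asarray(volcanic, dtype=float),
                         np.asarray(solar, dtype=float)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ coeffs
    r2 = 1.0 - np.var(y - fitted) / np.var(y)                # share of variability explained
    return coeffs, fitted, r2
```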

The fact that this statistical model explains 75–90 percent of the real annual temperature variability, depending on the dataset, using these influences (ENSO, volcanoes, solar) is an indication that the statistical model is useful. In addition, the trends produced from this statistical model are not statistically different from the actual data (i.e. passing the "scientific-method" trend test, which assumes the natural factors are not influenced by increasing GHGs). This result promotes the conclusion that this approach achieves greater scientific (and policy) utility than results from elaborate climate models, which on average fail to reproduce the real world's global average bulk temperature trend since 1979.

The over-warming of the atmosphere by the IPCC models relates to a problem the IPCC AR5 encountered elsewhere. In trying to determine climate sensitivity, which is how sensitive the global temperature is to increases in GHGs, the IPCC authors chose not to give a best estimate. [A high climate sensitivity is a foundational component of the last Administration's Social Cost of Carbon.] The reason? … climate models were showing about twice the sensitivity to GHGs of calculations based on real, empirical data. I would encourage this committee, and our government in general, to consider empirical data, not climate model output, when dealing with environmental regulations.

Summary

Planning requires assumptions because no one has knowledge of the future, only informed opinions. Christy makes the case that our assumptions should be based on empirical data rather than on models driven by theoretical assumptions. When the CO2 sensitivity assumption is removed from climate models, they come much closer to observed temperature measurements. Statistical analysis shows that at least 75% of the observed warming can be attributed to factors other than CO2, and that analysis points to the accumulated effects of oceanic circulations, principally the ENSO index.

Putting Climate Models in Their Place

A previous post, Chameleon Climate Models, described the general issue of whether a model belongs on the bookshelf (theoretically useful) or whether it passes real-world filters of relevance and thus qualifies as useful for policy considerations.

Following an interesting discussion on her blog, Dr. Judith Curry has written an important essay on the usefulness and limitations of climate models.

The paper was developed to respond to a request from a group of lawyers wondering how to regard claims based upon climate model outputs. The document is entitled Climate Models (here) and is a great informative read for anyone. Some excerpts that struck me:

Climate model development has followed a pathway mostly driven by scientific curiosity and computational limitations. GCMs were originally designed as a tool to help understand how the climate system works. GCMs are used by researchers to represent aspects of climate that are extremely difficult to observe, experiment with theories in a new way by enabling hitherto infeasible calculations, understand a complex system of equations that would otherwise be impenetrable, and explore the climate system to identify unexpected outcomes. As such, GCMs are an important element of climate research.

Climate models are useful tools for conducting scientific research to understand the climate system. However, the above points support the conclusion that current GCM climate models are not fit for the purpose of attributing the causes of 20th century warming or for predicting global or regional climate change on timescales of decades to centuries, with any high level of confidence. By extension, GCMs are not fit for the purpose of justifying political policies to fundamentally alter world social, economic and energy systems. It is this application of climate model results that fuels the vociferousness of the debate surrounding climate models.

Evolution of state-of-the-art Climate Models from the mid 70s to the mid 00s. From IPCC (2007)

The actual equations used in the GCM computer codes are only approximations of the physical processes that occur in the climate system. While some of these approximations are highly accurate, others are unavoidably crude. This is because the real processes they represent are either poorly understood or too complex to include in the model given the constraints of the computer system. Of the processes that are most important for climate change, parameterizations related to clouds and precipitation remain the most challenging, and are the greatest source of disagreement among different GCMs.

There are literally thousands of different choices made in the construction of a climate model (e.g. resolution, complexity of the submodels, parameterizations). Each different set of choices produces a different model having different sensitivities. Further, different modeling groups have different focal interests, e.g. long paleoclimate simulations, details of ocean circulations, nuances of the interactions between aerosol particles and clouds, the carbon cycle. These different interests focus their limited computational resources on a particular aspect of simulating the climate system, at the expense of others.


Overview of the structure of a state-of-the-art climate model. See Climate Models Explained by R.G. Brown

Human-caused warming depends not only on how much CO2 is added to the atmosphere, but also on how ‘sensitive’ the climate is to the increased CO2. Climate sensitivity is defined as the global surface warming that occurs when the concentration of carbon dioxide in the atmosphere doubles. If climate sensitivity is high, then we can expect substantial warming in the coming century as emissions continue to increase. If climate sensitivity is low, then future warming will be substantially lower.
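As a back-of-the-envelope illustration of what the sensitivity number means in practice, the widely used logarithmic scaling of CO2 forcing translates a sensitivity per doubling into warming for any CO2 rise. The sensitivity values below are placeholders spanning a low and a high case, not estimates endorsed by Curry or Christy.

```python
# Illustrative arithmetic only: logarithmic scaling of warming with CO2.
import math

def warming(c_ppm, c0_ppm, sensitivity_per_doubling):
    """Warming (degC) for a CO2 rise from c0 to c, assuming each doubling
    of CO2 produces 'sensitivity_per_doubling' degrees."""
    return sensitivity_per_doubling * math.log(c_ppm / c0_ppm) / math.log(2)

for s in (1.5, 3.0):  # degC per doubling; placeholder low and high cases
    dt_now = warming(420, 280, s)   # roughly pre-industrial to present-day CO2
    dt_2x = warming(560, 280, s)    # a full doubling
    print(f"sensitivity {s:.1f}: {dt_now:.2f} C so far, {dt_2x:.2f} C at doubling")
```

Because the response is logarithmic, the rise from 280 to 420 ppm already represents a bit more than half a doubling, which is why the assumed sensitivity matters so much for end-of-century projections.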

In GCMs, the equilibrium climate sensitivity is an ‘emergent property’ that is not directly calibrated or tuned. While there has been some narrowing of the range of modeled climate sensitivities over time, models still can be made to yield a wide range of sensitivities by altering model parameterizations. Model versions can be rejected or retained subject to the modelers’ own preconceptions, expectations and biases about the outcome of the equilibrium climate sensitivity calculation.

Further, the discrepancy between observational and climate model-based estimates of climate sensitivity is substantial and of significant importance to policymakers. Equilibrium climate sensitivity, and the level of uncertainty in its value, is a key input into the economic models that drive cost-benefit analyses and estimates of the social cost of carbon.

Variations in climate can be caused by external forcing, such as solar variations, volcanic eruptions or changes in atmospheric composition such as an increase in CO2. Climate can also change owing to internal processes within the climate system (internal variability). The best-known example of internal climate variability is El Niño/La Niña. Modes of decadal to centennial to millennial internal variability arise from the slow circulations in the oceans. As such, the ocean serves as a ‘flywheel’ on the climate system, storing and releasing heat on long timescales and acting to stabilize the climate. As a result of the time lags and storage of heat in the ocean, the climate system is never in equilibrium.

The combination of uncertainty in the transient climate response (sensitivity) and the uncertainties in the magnitude and phasing of the major modes in natural internal variability preclude an unambiguous separation of externally forced climate variations from natural internal climate variability. If the climate sensitivity is on the low end of the range of estimates, and natural internal variability is on the strong side of the distribution of climate models, different conclusions are drawn about the relative importance of human causes to the 20th century warming.

Figure 5.1. Comparative dynamics of the World Fuel Consumption (WFC) and Global Surface Air Temperature Anomaly (ΔT), 1861-2000. The thin dashed line represents annual ΔT, the bold line—its 13-year smoothing, and the line constructed from rectangles—WFC (in millions of tons of nominal fuel) (Klyashtorin and Lyubushin, 2003). Source: Frolov et al. 2009

Anthropogenic (human-caused) climate change is a theory in which the basic mechanism is well understood, but whose potential magnitude is highly uncertain. What does the preceding analysis imply for the IPCC’s ‘extremely likely’ attribution of anthropogenically caused warming since 1950? Climate models infer that all of the warming since 1950 can be attributed to humans. However, there have been large-magnitude variations in global/hemispheric climate on 30-year timescales, the same duration as the late 20th-century warming. The IPCC does not have convincing explanations for previous 30-year periods in the 20th century, notably the 1910-1945 warming and the 1945-1975 grand hiatus. Further, there is a secular warming trend at least since 1800 (and possibly as long as 400 years) that cannot be explained by CO2, and is only partly explained by volcanic eruptions.

Summary

There is growing evidence that climate models are running too hot and that climate sensitivity to CO2 is on the lower end of the range provided by the IPCC. Nevertheless, these lower values of climate sensitivity are not accounted for in IPCC climate model projections of temperature at the end of the 21st century or in estimates of the impact on temperatures of reducing CO2 emissions.

The climate modeling community has been focused on the response of the climate to increased human-caused emissions, and the policy community accepts (either explicitly or implicitly) the results of the 21st century GCM simulations as actual predictions. Hence we don’t have a good understanding of the relative climate impacts of these natural factors or their potential influence on the evolution of the 21st century climate.

Footnote:

There is a series of posts here applying reality filters to test climate models.  The first was Temperatures According to Climate Models, where both hindcasting and forecasting were seen to be flawed.

Others in the Series are:

Sea Level Rise: Just the Facts

Data vs. Models #1: Arctic Warming

Data vs. Models #2: Droughts and Floods

Data vs. Models #3: Disasters

Data vs. Models #4: Climates Changing

Climate Medicine

Climates Don’t Start Wars, People Do


Beware getting sucked into any model, climate or otherwise.