Top Climate Model Gets Better

Figure S7. Contributions of forcing and feedbacks to ECS in each model and for the multimodel means. Contributions from the tropical and extratropical portion of the feedback are shown in light and dark shading, respectively. Black dots indicate the ECS in each model, while upward and downward pointing triangles indicate contributions from non-cloud and cloud feedbacks, respectively. Numbers printed next to the multi-model mean bars indicate the cumulative sum of each plotted component. Numerical values are not printed next to residual, extratropical forcing, and tropical albedo terms for clarity. Models within each collection are ordered by ECS.

A previous post here discussed discovering that INMCM4 was the best CMIP5 model at replicating historical temperature records. Additional posts described improvements built into INMCM5, the next-generation model included for CMIP6 testing. Reprinted further below are the temperature history replication and the parameters included in the revised model. This post focuses on a recent report of additional enhancements made by the modelers to better represent precipitation and extreme rainfall events.

The paper is Influence of various parameters of INM RAS climate model on the results of extreme precipitation simulation by M A Tarasevich and E M Volodin 2019. Excerpts in italics with my bolds.

Modern models of the Earth’s climate can reproduce not only the mean climate state but also extreme weather and climate phenomena. This raises the problem of comparing climate models against observed extreme weather events.

In [1, 2], various extreme weather and climatic situations are considered. In these papers, 27 extreme indices are defined, characterizing different situations with high and low temperatures, with heavy precipitation, or with an absence of precipitation.

The results of simulation of the extreme indices with the INMCM4 [3] climate model were compared with the results of other models which took part in the CMIP5 project (Coupled Model Intercomparison Project, Phase 5) [2]. The comparison demonstrates that this model performs well for most indices except for those related to daily minimum temperature. For those indices the model shows one of the worst results.

The parameterizations of physical processes in the next model version, INMCM5, were replaced or tuned [4, 5], so that changes in the extreme indices simulation are expected.

The simulation results were compared to the ERA-Interim [6] reanalysis data, which were treated as the observational data for this study. Indices averaged over 1981–2010 were compared. A Mann–Whitney test at the 1% significance level was used to determine where the changes are significant.
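As a concrete illustration of the significance screening described above, here is a minimal pure-Python sketch of a two-sided Mann–Whitney U test at the 1% level. The yearly growing-season-length values are invented for illustration, and the normal approximation without tie correction is a simplification; it is an assumption that the authors used a comparable formulation.

```python
# Sketch: two-sided Mann-Whitney U test at the 1% level, as used in the study
# to screen where index changes are significant. Normal approximation, no tie
# correction; the yearly GSL values below are invented for illustration.
import math

model_gsl = [150, 148, 155, 160, 152, 149, 158, 151, 154, 157]  # hypothetical
reana_gsl = [165, 170, 168, 172, 166, 171, 169, 174, 167, 173]  # hypothetical

nx, ny = len(model_gsl), len(reana_gsl)
rank = {v: i + 1 for i, v in enumerate(sorted(model_gsl + reana_gsl))}
r_x = sum(rank[v] for v in model_gsl)          # rank sum of the first sample
u_x = r_x - nx * (nx + 1) / 2
u = min(u_x, nx * ny - u_x)                    # U statistic

mu = nx * ny / 2                               # mean of U under H0
sigma = math.sqrt(nx * ny * (nx + ny + 1) / 12)
z = (u - mu) / sigma
p = math.erfc(abs(z) / math.sqrt(2))           # two-sided p-value
print(f"U = {u}, p = {p:.2e}, significant at 1%: {p < 0.01}")
```

In practice a library routine such as SciPy's `mannwhitneyu` would handle ties and small samples more carefully; the sketch only shows the shape of the screening.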

To evaluate the quality of simulation of extreme weather phenomena, the extreme indices were calculated [7] using the results of computations performed by two versions of the INM RAS climate model (INMCM4 and INMCM5) and the ERA-Interim reanalysis. We took the root mean square deviation between the index values computed from the model and from the reanalysis data as the measure of simulation quality, with the mean taken over land only.
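The land-only averaged deviation measure can be sketched as follows. The tiny latitude-longitude grid, index values, and land mask are invented, and the cosine-latitude area weighting is an assumption about how the land average is taken, not a detail stated in the paper.

```python
# Sketch: land-only, area-weighted root mean square deviation between a model
# index field and the reanalysis field on a lat-lon grid. The 3x4 grid, the
# values, and the land mask are invented; cos(lat) weighting is an assumption.
import math

lats = [-45.0, 0.0, 45.0]                       # grid-cell center latitudes
model = [[3.0, 4.0, 2.0, 5.0],                  # hypothetical index values
         [6.0, 7.0, 5.0, 8.0],
         [2.0, 3.0, 1.0, 4.0]]
reana = [[2.5, 4.5, 2.0, 5.5],
         [6.5, 6.0, 5.0, 9.0],
         [2.0, 3.5, 1.5, 3.0]]
land  = [[1, 0, 1, 0],                          # 1 = land, 0 = ocean
         [1, 1, 0, 0],
         [0, 1, 1, 1]]

num = den = 0.0
for i, lat in enumerate(lats):
    w = math.cos(math.radians(lat))             # area weight shrinks poleward
    for j in range(len(model[i])):
        if land[i][j]:                          # land cells only
            num += w * (model[i][j] - reana[i][j]) ** 2
            den += w
rmsd = math.sqrt(num / den)
print(f"land-averaged RMSD = {rmsd:.3f}")
```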

Tables 1 and 2 present the names of extreme indices related to temperature and precipitation, their labels and measurement units, as well as the land only averaged standard deviations for these indices between the ERA-Interim reanalysis and different versions of the INM RAS climate model.

Table 1 shows that the simulation of almost all temperature indices has improved in the INMCM5 compared to INMCM4. In particular, the simulation of the following extreme indices related to the minimum daily temperature improved significantly (by 37–56%): the annual daily minimum temperature (TNn), the number of frost days (FD) and tropical nights (TR), the diurnal temperature range (DTR), and the growing season length (GSL).

[Comment: Note that values in these tables are standard deviations from observations as presented by ERA reanalysis. So for example, growing season length (GSL) varied from mean ERA values by 24 days in INMCM4, but improved to a 15 day differential in INMCM5.]

Table 2 shows that the simulation of the number of heavy (R10mm) and very heavy (R20mm) precipitation days, consecutive wet days (CWD), simple daily intensity (SDII), and total wet-day precipitation (PRCPTOT) noticeably improved in INMCM5. At the same time, the simulation of indices related to the intensity (RX5day) and the amount (R95p) of precipitation on very rainy days became worse.

Improvements Added to INMCM5

To improve the simulation of extreme precipitation by the INMCM5 model, the following physical processes were considered: evaporation of precipitation in the upper atmosphere; mixing of horizontal velocity components due to large-scale condensation and deep convection; and air resistance acting on falling precipitation particles.

Both large-scale condensation and deep convection cause vertical motion, which redistributes the horizontal momentum between the nearby air layers. The implementation of mixing due to large-scale condensation was added to the model. For short we will refer to the INMCM5 version with these changes as INMCM5VM (INMCM5 Velocities Mixing).

Since precipitation particles (water droplets or ice crystals) move in the surrounding air, a drag force arises that carries the air along with the particles. This resistance force can be included in the right hand side of the momentum balance equation, which is part of the atmosphere hydrothermodynamic system of equations. Accurate accounting for the effect of this force requires numerical solving of an additional Poisson-type equation. For short, we will refer to the INMCM5 model version with the air resistance and vertical mixing of the horizontal velocity components as INMCM5AR (INMCM5 Air Resistance).

Figure 3. (a) RX5day index values averaged over 1981–2010 according to ERA-Interim data. (b–d) Deviations of the same average obtained from INMCM5, INMCM5VM, and INMCM5AR data. Statistically insignificant deviations are shown as white.

Table 2 shows that the quality of simulation of all precipitation-related extreme indices in INMCM5AR either improved by 3–21% compared to INMCM5 or remained unchanged.

Figures 2d and 3d show the spatial distribution of the deviations of the maximum 1-day (RX1day) and 5-day (RX5day) precipitation according to INMCM5AR. Compared to INMCM5, the version with air resistance acting on falling precipitation particles produces significantly lower RX1day and RX5day values in South Africa and in South and East Asia, and slightly lower values over Tibet.

Taking into account the air resistance acting on falling precipitation particles significantly reduces the overestimation of RX1day and RX5day seen in INMCM5 over South Africa and South and East Asia, and improves the simulation quality of the extreme indices associated with the amount and intensity of precipitation on very rainy days by 9–21%. At the same time, a significant overestimation of the RX1day and RX5day indices in the Amazon basin and Southeast Asia, as well as their underestimation in West Africa, still remains.

Footnote: 

A simple analysis shows that if the climate sensitivity estimated by INMCM5 (1.8C per doubling of CO2) were realized over the next 80 years, it would mean a continuation of the warming rate of the last 60 years. The accumulated rise in GMT would be about 1.2C for the 21st century, well below the IPCC 1.5C aspiration. See I Want You Not to Panic
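The back-of-envelope arithmetic behind that footnote can be reproduced with the standard logarithmic scaling of warming with CO2 concentration. The start and end concentrations below are assumptions for illustration (roughly year-2000 and a year-2100 continuation of recent growth), not figures stated in the post.

```python
# Sketch of the footnote's arithmetic: with a sensitivity of 1.8 C per CO2
# doubling, warming scales with the logarithm of the concentration ratio.
# The ppm values are assumptions for illustration, not from the post.
import math

sensitivity = 1.8      # C per doubling, the INMCM5 estimate cited in the post
co2_start = 370.0      # ppm, ~year 2000 (assumed)
co2_end = 580.0        # ppm, ~year 2100 under continued growth (assumed)

doublings = math.log2(co2_end / co2_start)
warming = sensitivity * doublings
print(f"{doublings:.2f} doublings -> about {warming:.1f} C over the century")
```

With these assumed concentrations the result lands near the 1.2C figure quoted in the footnote.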

Update February 4, 2020

A recent comparison of INMCM5 and other CMIP6 climate models is discussed in the post
Climate Models: Good, Bad and Ugly

Updated with October 25, 2018 Report

A previous analysis Temperatures According to Climate Models showed that only one of 42 CMIP5 models was close to hindcasting past temperature fluctuations. That model was INMCM4, which also projected an unalarming 1.4C warming to the end of the century, in contrast to the other models programmed for future warming five times the past.

In a recent comment thread, someone asked what has been done recently with that model, given that it appears to be “best of breed.” So I went looking and this post summarizes further work to produce a new, hopefully improved version by the modelers at the Institute of Numerical Mathematics of the Russian Academy of Sciences.

Institute of Numerical Mathematics, Russian Academy of Sciences, Moscow, Russia

A previous post a year ago went into the details of improvements made in producing the latest iteration INMCM5 for entry into the CMIP6 project.  That text is reprinted below.

Now a detailed description of the model’s global temperature outputs has been published October 25, 2018 in Earth System Dynamics: Simulation of observed climate changes in 1850–2014 with climate model INM-CM5 (title is a link to the pdf). Excerpts below with my bolds.

Figure 1. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the next figures numbers on the time axis indicate the first year of the 5-year mean.

Abstract

Climate changes observed in 1850-2014 are modeled and studied on the basis of seven historical runs with the climate model INM-CM5 under the scenario proposed for the Coupled Model Intercomparison Project, Phase 6 (CMIP6). In all runs the global mean surface temperature rises by 0.8 K by the end of the experiment (2014), in agreement with the observations. Periods of fast warming in 1920-1940 and 1980-2000, as well as its slowdown in 1950-1975 and 2000-2014, are correctly reproduced by the ensemble mean. The notable change here with respect to the CMIP5 results is the correct reproduction of the slowdown of global warming in 2000-2014, which we attribute to a more accurate description of the Solar constant in the CMIP6 protocol. The model is able to reproduce the correct behavior of global mean temperature in 1980-2014 despite incorrect phases of the Atlantic Multidecadal Oscillation and Pacific Decadal Oscillation indices in the majority of experiments. The Arctic sea ice loss in recent decades is reasonably close to the observations in just one model run; on average the model underestimates Arctic sea ice loss by a factor of 2.5. The spatial pattern of the model mean surface temperature trend during the last 30 years looks close to the one from the ERA Interim reanalysis. The model correctly estimates the magnitude of stratospheric cooling.

Additional Commentary

Observational data of GMST for 1850-2014 used for verification of model results were produced by HadCRUT4 (Morice et al 2012). Monthly mean sea surface temperature (SST) data ERSSTv4 (Huang et al 2015) are used for comparison of the AMO and PDO indices with that of the model. Data of Arctic sea ice extent for 1979-2014 derived from satellite observations are taken from Comiso and Nishio (2008). Stratospheric temperature trend and geographical distribution of near surface air temperature trend for 1979-2014 are calculated from ERA Interim reanalysis data (Dee et al 2011).

Keeping in mind the arguments that the GMST slowdown in the beginning of the 21st century could be due to the internal variability of the climate system, let us look at the behavior of the AMO and PDO climate indices. Here we calculated the AMO index in the usual way, as the SST anomaly in the Atlantic at the latitudinal band 0N-60N minus the anomaly of the GMST. Model and observed 5-year mean AMO index time series are presented in Fig.3. The well-known oscillation with a period of 60-70 years can be clearly seen in the observations. Among the model runs, only one (dashed purple line) shows an oscillation with a period of about 70 years, but without a significant maximum near year 2000. In the other model runs there is no distinct oscillation with a period of 60-70 years; a period of 20-40 years prevails. As a result, none of the seven model trajectories reproduces the behavior of the observed AMO index after year 1950 (including its warm phase at the turn of the 20th and 21st centuries). One can conclude that anthropogenic forcing is unable to produce any significant impact on the AMO dynamics, as its index averaged over 7 realizations stays around zero within a one sigma interval (0.08). Consequently, the AMO dynamics is controlled by internal variability of the climate system and cannot be predicted in historical experiments. On the other hand, the model can correctly predict GMST changes in 1980-2014 while having the wrong phase of the AMO (blue, yellow, orange lines in Fig.1 and 3).
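The AMO index calculation described in the excerpt (North Atlantic 0N-60N SST anomaly minus the GMST anomaly) can be sketched in a few lines; the short yearly series below are invented for illustration, and taking anomalies relative to each series' own mean is a simplification of the usual fixed-baseline convention.

```python
# Sketch: the AMO index as described in the excerpt -- North Atlantic (0N-60N)
# SST anomaly minus the GMST anomaly. The yearly series are invented, and
# anomalies are taken against each series' own mean for simplicity.

natl_sst = [20.1, 20.3, 20.0, 20.4, 20.6, 20.2]   # hypothetical N. Atlantic SST
gmst     = [14.0, 14.1, 14.0, 14.2, 14.3, 14.1]   # hypothetical GMST

def anomalies(series):
    """Deviation of each value from the series mean."""
    mean = sum(series) / len(series)
    return [v - mean for v in series]

amo = [a - g for a, g in zip(anomalies(natl_sst), anomalies(gmst))]
print([round(v, 3) for v in amo])
```

Subtracting the GMST anomaly removes the common global warming signal, which is why the index isolates the Atlantic's multidecadal swing rather than the secular trend.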

Conclusions

Seven historical runs for 1850-2014 with the climate model INM-CM5 were analyzed. It is shown that the magnitude of the GMST rise in the model runs agrees with the estimate based on the observations. All model runs reproduce the stabilization of GMST in 1950-1970, fast warming in 1980-2000 and a second GMST stabilization in 2000-2014, suggesting that the major factor for predicting GMST evolution is the external forcing rather than system internal variability. Numerical experiments with the previous model version (INMCM4) for CMIP5 showed unrealistic gradual warming in 1950-2014. The difference between the two model results could be explained by more accurate modeling of the stratospheric volcanic and tropospheric anthropogenic aerosol radiation effect (stabilization in 1950-1970), due to the new aerosol block in INM-CM5, and more accurate prescription of the Solar constant scenario (stabilization in 2000-2014) in the CMIP6 protocol. Four of the seven INM-CM5 model runs simulate the acceleration of warming in 1920-1940 in a correct way; the other three produce it earlier or later than in reality. This indicates that for the warming of 1920-1940 the climate system's natural variability plays a significant role. No model trajectory reproduces the correct time behavior of the AMO and PDO indices. Taking into account our results on the GMST modeling, one can conclude that anthropogenic forcing does not produce any significant impact on the dynamics of the AMO and PDO indices, at least for the INM-CM5 model. In turn, correct prediction of the GMST changes in 1980-2014 does not require correct phases of the AMO and PDO, as all model runs have correct values of the GMST while in at least three model experiments the phases of the AMO and PDO are opposite to the observed ones in that time. The North Atlantic SST time series produced by the model correlates better with the observations in 1980-2014.
Three out of seven trajectories have a strongly positive North Atlantic SST anomaly, as do the observations (in the other four cases we see near-to-zero changes for this quantity). The INMCM5 has the same skill for prediction of the Arctic sea ice extent in 2000-2014 as the CMIP5 models, including INMCM4. It underestimates the rate of sea ice loss by a factor of between two and three. In one extreme case the magnitude of this decrease is as large as in the observations, while in the other the sea ice extent does not change compared to the preindustrial age. In part this could be explained by the strong internal variability of the Arctic sea ice, but obviously the new version of the INMCM model and the new CMIP6 forcing protocol do not improve the prediction of the Arctic sea ice extent response to anthropogenic forcing.

Previous Post:  Climate Model Upgraded: INMCM5 Under the Hood

Earlier in 2017 came this publication Simulation of the present-day climate with the climate model INMCM5 by E.M. Volodin et al. Excerpts below with my bolds.

In this paper we present the fifth generation of the INMCM climate model that is being developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INMCM5). The most important changes with respect to the previous version (INMCM4) were made in the atmospheric component of the model. Its vertical resolution was increased to resolve the upper stratosphere and the lower mesosphere. A more sophisticated parameterization of condensation and cloudiness formation was introduced as well. An aerosol module was incorporated into the model. The upgraded oceanic component has a modified dynamical core optimized for better implementation on parallel computers and has two times higher resolution in both horizontal directions.

Analysis of the present-day climatology of the INMCM5 (based on the data of the historical run for 1979–2005) shows moderate improvements in the reproduction of basic circulation characteristics with respect to the previous version. Biases in near-surface temperature and precipitation are slightly reduced compared with INMCM4, as are biases in oceanic temperature, salinity and sea surface height. The most notable improvement over INMCM4 is the capability of the new model to reproduce the equatorial stratospheric quasi-biennial oscillation and the statistics of sudden stratospheric warmings.

The family of INMCM climate models, as most climate system models, consists of two main blocks: the atmosphere general circulation model, and the ocean general circulation model. The atmospheric part is based on the standard set of hydrothermodynamic equations with hydrostatic approximation written in advective form. The model prognostic variables are wind horizontal components, temperature, specific humidity and surface pressure.

Atmosphere Module

The INMCM5 borrows most of the atmospheric parameterizations from its previous version. One of the few notable changes is the new parameterization of clouds and large-scale condensation. In the INMCM5 cloud area and cloud water are computed prognostically according to Tiedtke (1993). That includes the formation of large-scale cloudiness as well as the formation of clouds in the atmospheric boundary layer and clouds of deep convection. Decrease of cloudiness due to mixing with unsaturated environment and precipitation formation are also taken into account. Evaporation of precipitation is implemented according to Kessler (1969).

In the INMCM5 the atmospheric model is complemented by the interactive aerosol block, which is absent in the INMCM4. Concentrations of coarse and fine sea salt, coarse and fine mineral dust, SO2, sulfate aerosol, hydrophilic and hydrophobic black and organic carbon are all calculated prognostically.

Ocean Module

The oceanic module of the INMCM5 uses generalized spherical coordinates. The model “South Pole” coincides with the geographical one, while the model “North Pole” is located in Siberia, beyond the ocean area, to avoid numerical problems near the pole. A vertical sigma-coordinate is used. The finite-difference equations are written using the Arakawa C-grid. The differential and finite-difference equations, as well as the methods of solving them, can be found in Zalesny et al. (2010).

The INMCM5 uses explicit schemes for advection, while the INMCM4 used schemes based on splitting upon coordinates. Also, the iterative method for solving linear shallow water equation systems is used in the INMCM5 rather than direct method used in the INMCM4. The two previous changes were made to improve model parallel scalability. The horizontal resolution of the ocean part of the INMCM5 is 0.5 × 0.25° in longitude and latitude (compared to the INMCM4’s 1 × 0.5°).

Both the INMCM4 and the INMCM5 have 40 levels in the vertical. The parallel implementation of the ocean model can be found in Terekhov et al. (2011). The oceanic block includes vertical mixing and isopycnal diffusion parameterizations (Zalesny et al. 2010). Sea ice dynamics and thermodynamics are parameterized according to Iakovlev (2009). Assumptions of elastic-viscous-plastic rheology and a single ice thickness gradation are used. The time step in the oceanic block of the INMCM5 is 15 min.

Note the size of the human emissions next to the red arrow.

Carbon Cycle Module

The climate model INMCM5 has a carbon cycle module (Volodin 2007), where atmospheric CO2 concentration and carbon in vegetation, soil and ocean are calculated. In soil, a single carbon pool is considered. In the ocean, the only prognostic variable in the carbon cycle is total inorganic carbon. The biological pump is prescribed. The model calculates methane emission from wetlands and has a simplified methane cycle (Volodin 2008). Parameterizations of some electrical phenomena, including calculation of the ionospheric potential and flash intensity (Mareev and Volodin 2014), are also included in the model.

Surface Temperatures

When compared to the INMCM4 surface temperature climatology, the INMCM5 shows several improvements. Negative bias over continents is reduced mainly because of the increase in daily minimum temperature over land, which is achieved by tuning the surface flux parameterization. In addition, positive bias over southern Europe and eastern USA in summer typical for many climate models (Mueller and Seneviratne 2014) is almost absent in the INMCM5. A possible reason for this bias in many models is the shortage of soil water and suppressed evaporation leading to overestimation of the surface temperature. In the INMCM5 this problem was addressed by the increase of the minimum leaf resistance for some vegetation types.

Nevertheless, some problems migrate from one model version to the other: negative bias over most of the subtropical and tropical oceans, and positive bias over the Atlantic to the east of the USA and Canada. Root mean square (RMS) error of annual mean near surface temperature was reduced from 2.48 K in the INMCM4 to 1.85 K in the INMCM5.

Precipitation

In mid-latitudes, the positive precipitation bias over the ocean prevails in winter while negative bias occurs in summer. Compared to the INMCM4, the biases over the western Indian Ocean, Indonesia, the eastern tropical Pacific and the tropical Atlantic are reduced. A possible reason for this is the better reproduction of the tropical sea surface temperature (SST) in the INMCM5 due to the increase of the spatial resolution in the oceanic block, as well as the new condensation scheme. RMS annual mean model bias for precipitation is 1.35mm day−1 for the INMCM5 compared to 1.60mm day−1 for the INMCM4.

Cloud Radiation Forcing

Cloud radiation forcing (CRF) at the top of the atmosphere is one of the most important climate model characteristics, as errors in CRF frequently lead to an incorrect surface temperature.

In the high latitudes model errors in shortwave CRF are small. The model underestimates longwave CRF in the subtropics but overestimates it in the high latitudes. Errors in longwave CRF in the tropics tend to partially compensate errors in shortwave CRF. Both errors have positive sign near 60S leading to warm bias in the surface temperature here. As a result, we have some underestimation of the net CRF absolute value at almost all latitudes except the tropics. Additional experiments with tuned conversion of cloud water (ice) to precipitation (for upper cloudiness) showed that model bias in the net CRF could be reduced, but that the RMS bias for the surface temperature will increase in this case.

A table from another paper provides the climate parameters described by INMCM5.

Climate Parameters | Observations | INMCM3 | INMCM4 | INMCM5
Incoming solar radiation at TOA | 341.3 [26] | 341.7 | 341.8 | 341.4
Outgoing solar radiation at TOA | 96–100 [26] | 97.5 ± 0.1 | 96.2 ± 0.1 | 98.5 ± 0.2
Outgoing longwave radiation at TOA | 236–242 [26] | 240.8 ± 0.1 | 244.6 ± 0.1 | 241.6 ± 0.2
Solar radiation absorbed by surface | 154–166 [26] | 166.7 ± 0.2 | 166.7 ± 0.2 | 169.0 ± 0.3
Solar radiation reflected by surface | 22–26 [26] | 29.4 ± 0.1 | 30.6 ± 0.1 | 30.8 ± 0.1
Longwave radiation balance at surface | –54 to –58 [26] | –52.1 ± 0.1 | –49.5 ± 0.1 | –63.0 ± 0.2
Solar radiation reflected by atmosphere | 74–78 [26] | 68.1 ± 0.1 | 66.7 ± 0.1 | 67.8 ± 0.1
Solar radiation absorbed by atmosphere | 74–91 [26] | 77.4 ± 0.1 | 78.9 ± 0.1 | 81.9 ± 0.1
Direct heat flux from surface | 15–25 [26] | 27.6 ± 0.2 | 28.2 ± 0.2 | 18.8 ± 0.1
Latent heat flux from surface | 70–85 [26] | 86.3 ± 0.3 | 90.5 ± 0.3 | 86.1 ± 0.3
Cloud amount, % | 64–75 [27] | 64.2 ± 0.1 | 63.3 ± 0.1 | 69 ± 0.2
Solar radiation-cloud forcing at TOA | –47 [26] | –42.3 ± 0.1 | –40.3 ± 0.1 | –40.4 ± 0.1
Longwave radiation-cloud forcing at TOA | 26 [26] | 22.3 ± 0.1 | 21.2 ± 0.1 | 24.6 ± 0.1
Near-surface air temperature, °C | 14.0 ± 0.2 [26] | 13.0 ± 0.1 | 13.7 ± 0.1 | 13.8 ± 0.1
Precipitation, mm/day | 2.5–2.8 [23] | 2.97 ± 0.01 | 3.13 ± 0.01 | 2.97 ± 0.01
River water inflow to the World Ocean, 10^3 km^3/year | 29–40 [28] | 21.6 ± 0.1 | 31.8 ± 0.1 | 40.0 ± 0.3
Snow coverage in Feb., mil. km^2 | 46 ± 2 [29] | 37.6 ± 1.8 | 39.9 ± 1.5 | 39.4 ± 1.5
Permafrost area, mil. km^2 | 10.7–22.8 [30] | 8.2 ± 0.6 | 16.1 ± 0.4 | 5.0 ± 0.5
Land area prone to seasonal freezing in NH, mil. km^2 | 54.4 ± 0.7 [31] | 46.1 ± 1.1 | 48.3 ± 1.1 | 51.6 ± 1.0
Sea ice area in NH in March, mil. km^2 | 13.9 ± 0.4 [32] | 12.9 ± 0.3 | 14.4 ± 0.3 | 14.5 ± 0.3
Sea ice area in NH in Sept., mil. km^2 | 5.3 ± 0.6 [32] | 4.5 ± 0.5 | 4.5 ± 0.5 | 6.1 ± 0.5

Heat flux units are given in W/m^2; the other units are given with the title of the corresponding parameter. Where possible, ± shows the standard deviation of the annual mean value. Source: Simulation of Modern Climate with the New Version of the INM RAS Climate Model (bracketed numbers refer to sources for the observations).

Ocean Temperature and Salinity

The model biases in potential temperature and salinity averaged over longitude with respect to WOA09 (Antonov et al. 2010) are shown in Fig.12. Positive bias in the Southern Ocean penetrates from the surface downward for up to 300 m, while negative bias in the tropics can be seen even in the 100–1000 m layer.

Nevertheless, the zonal mean temperature error at any level from the surface to the bottom is small. This was not the case for the INMCM4, where one could see a negative temperature bias of up to 2–3 K from 1.5 km to the bottom at nearly all latitudes, and a 2–3 K positive bias at levels of 700–1000 m. The reason for this improvement is the introduction of a higher background coefficient for vertical diffusion at great depth (3000 m and below) than at intermediate depths (300–500 m). The positive temperature bias at 45–65 N at all depths could probably be explained by shortcomings in the representation of deep convection [similar errors can be seen for most of the CMIP5 models (Flato et al. 2013, their Fig. 9.13)].

Another feature common to many present-day climate models (and to the INMCM5 as well) is a negative bias in southern tropical ocean salinity from the surface to 500 m. It can be explained by overestimation of precipitation at the southern branch of the Intertropical Convergence Zone. Meridional heat flux in the ocean (Fig. 13) is not far from available estimates (Trenberth and Caron 2001). It looks similar to the one for the INMCM4, but the maximum of northward transport in the Atlantic in the INMCM5 is about 0.1–0.2 × 10^15 W higher than the one in the INMCM4, probably because of the increased horizontal resolution in the oceanic block.

Sea Ice

In the Arctic, the model sea ice area is just slightly overestimated. Overestimation of the Arctic sea ice area is connected with the negative bias in the surface temperature. At the same time, a connection of the sea ice area error with the positive salinity bias is not evident, because ice formation is almost compensated by ice melting, and the total salinity source from this pair of processes is not large. The amplitude and phase of the sea ice annual cycle are reproduced correctly by the model. In the Antarctic, sea ice area is underestimated by a factor of 1.5 in all seasons, apparently due to the positive temperature bias. Note that the correct simulation of sea ice area dynamics in both hemispheres simultaneously is a difficult task for climate modeling.

The analysis of the model time series of the SST anomalies shows that the El Niño event frequency is approximately the same in the model and the data, but the model El Niños happen too regularly. The atmospheric response to El Niño events is also underestimated in the model by a factor of 1.5 with respect to the reanalysis data.

Conclusion

Based on the CMIP5 model INMCM4 the next version of the Institute of Numerical Mathematics RAS climate model was developed (INMCM5). The most important changes include new parameterizations of large scale condensation (cloud fraction and cloud water are now the prognostic variables), and increased vertical resolution in the atmosphere (73 vertical levels instead of 21, top model level raised from 30 to 60 km). In the oceanic block, horizontal resolution was increased by a factor of 2 in both directions.

The climate model was supplemented by the aerosol block. The model got a new parallel code with improved computational efficiency and scalability. With the new version of climate model we performed a test model run (80 years) to simulate the present-day Earth climate. The model mean state was compared with the available datasets. The structures of the surface temperature and precipitation biases in the INMCM5 are typical for the present climate models. Nevertheless, the RMS error in surface temperature, precipitation as well as zonal mean temperature and zonal wind are reduced in the INMCM5 with respect to its previous version, the INMCM4.

The model is capable of reproducing the equatorial stratospheric QBO and SSWs. The model biases for sea surface height and surface salinity are reduced in the new version as well, probably due to the increased spatial resolution in the oceanic block. The bias in ocean potential temperature at depths below 700 m in the INMCM5 is also reduced with respect to the one in the INMCM4. This is likely because of the tuned background vertical diffusion coefficient.

Model sea ice area is reproduced well enough in the Arctic, but is underestimated in the Antarctic (as a result of the overestimated surface temperature). RMS error in the surface salinity is reduced almost everywhere compared to the previous model except the Arctic (where the positive bias becomes larger). As a final remark one can conclude that the INMCM5 is substantially better in almost all aspects than its previous version and we plan to use this model as a core component for the coming CMIP6 experiment.

Summary

On the one hand, this model example shows that the intent is simple: to represent dynamically the energy balance of our planetary climate system. On the other hand, the model description shows how many parameters are involved and how complex the interacting processes are. The attempt to simulate the operations of the climate system is a monumental task with many outstanding challenges, and this latest version is another step in an iterative development.

Note:  Regarding the influence of rising CO2 on the energy balance.  Global warming advocates estimate a CO2 perturbation of 4 W/m^2.  In the climate parameters table above, observations of the radiation fluxes have a 2 W/m^2 error range at best, and in several cases are observed in ranges of 10 to 15 W/m^2.

We do not yet have access to the time series temperature outputs from INMCM5 to compare with observations or with other CMIP6 models.  Presumably that will happen in the future.

Early Schematic: Flows and Feedbacks for Climate Models

N. Atlantic 2020 Surprise

RAPID Array measuring North Atlantic SSTs.

For the last few years, observers have been speculating about when the North Atlantic will start the next phase shift from warm to cold. The way 2018 went, and 2019 followed, suggested this may be the onset. However, 2020 is starting out against that trend. First, some background.

Source: Energy and Education Canada

An example is this report in May 2015 The Atlantic is entering a cool phase that will change the world’s weather by Gerald McCarthy and Evan Haigh of the RAPID Atlantic monitoring project. Excerpts in italics with my bolds.

This is known as the Atlantic Multidecadal Oscillation (AMO), and the transition between its positive and negative phases can be very rapid. For example, Atlantic temperatures declined by 0.1ºC per decade from the 1940s to the 1970s. By comparison, global surface warming is estimated at 0.5ºC per century – a rate twice as slow.

In many parts of the world, the AMO has been linked with decade-long temperature and rainfall trends. Certainly – and perhaps obviously – the mean temperature of islands downwind of the Atlantic such as Britain and Ireland show almost exactly the same temperature fluctuations as the AMO.

Atlantic oscillations are associated with the frequency of hurricanes and droughts. When the AMO is in the warm phase, there are more hurricanes in the Atlantic and droughts in the US Midwest tend to be more frequent and prolonged. In the Pacific Northwest, a positive AMO leads to more rainfall.

A negative AMO (cooler ocean) is associated with reduced rainfall in the vulnerable Sahel region of Africa. The prolonged negative AMO was associated with the infamous Ethiopian famine in the mid-1980s. In the UK it tends to mean reduced summer rainfall – the mythical “barbeque summer”.

Our results show that ocean circulation responds to the first mode of Atlantic atmospheric forcing, the North Atlantic Oscillation, through circulation changes between the subtropical and subpolar gyres – the intergyre region. This is a major influence on the wind patterns and the heat transferred between the atmosphere and ocean.

The observations that we do have of the Atlantic overturning circulation over the past ten years show that it is declining. As a result, we expect the AMO is moving to a negative (colder surface waters) phase. This is consistent with observations of temperature in the North Atlantic.

Cold “blobs” in the North Atlantic have been reported, but they are usually winter phenomena. For example, in April 2016 the SST anomalies looked like this:

But by September the picture had changed to this:

And we know from the Kaplan AMO dataset that 2016 summer SSTs were right up there with 1998 and 2010 as the highest recorded.

As the graph above suggests, this body of water is also important for tropical cyclones, since warmer water provides more energy.  But those are annual averages, and I am interested in the summer pulses of warm water into the Arctic. As I have noted in my monthly HadSST3 reports, most summers since 2003 have had warm pulses in the North Atlantic, and 2019 was one of them.

The AMO Index is from Kaplan SST v2, the unaltered and not detrended dataset. By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N.  The graph shows the warmest month, August, beginning to rise after 1993 up to 1998, with a series of matching years since.  December 2017 set a record at 20.6C, but note the plunge down to 20.2C for December 2018, matching 2011 as the coldest December since 2000. December 2019 shows an uptick but is still lower than 2016-2017.

December 2019 confirms the summer pulse weakening, along with 2018 well below other recent peak years since 1998.  Because McCarthy refers to hints of cooling to come in the N. Atlantic, let’s take a closer look at some AMO years in the last 2 decades.

The 2020 North Atlantic Surprise
This graph shows monthly AMO temps for some important years. The peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note the red 2018 line was at the bottom of all these tracks.  The black line shows that 2019 began slightly cooler than January 2018, then tracked closely before rising in the summer months, though still lower than the peak years. Through December, 2019 is again tracking warmer than 2018 but cooler than other recent years in the North Atlantic.

Now in 2020, following a warm January, N. Atlantic temps in February are the highest in the record.  This is consistent with reports of unusually warm February weather in the Northern Hemisphere.

WryHeat Climate Wisdom

A decade ago when I became curious about the issue of global warming/climate change, Jonathan DuHamel was one of the voices persuading me to look critically and investigate claims carefully.  He wrote a regular column in an Arizona newspaper (Arizona Daily Independent) under the banner WryHeat exploring a wide range of scientific issues, including but not limited to global warming.  This post is to celebrate his publishing a compilation of articles on climate concerns over the years, entitled  Summary Of Climate Change Principles & State Of The Science – A Rebuttal Of Climate Alarmism at the Arizona Independent News Network.

The excerpts below show the themes of articles DuHamel wrote.  To access the original published columns, readers can go to the link in red above, where links to each article are provided.

This post collects several past articles which review climate science and bring together some main points on the state of the climate debate. These points show that the politically correct, carbon dioxide driven meme is wrong. Readers can use these articles to counter climate alarmists. Read each article for more details. (Note: many of these articles appeared in ADI; however, the links below go to my Wryheat blog, where the articles may be expanded and updated from the ADI versions. The articles also provide additional links to more articles.) [The first heading below links to a summary pdf file with comprehensive discussion.]

Climate change in perspective

Climate change is a major issue of our times. Concern is affecting environmental, energy, and economic policy decisions. Many politicians are under the mistaken belief that legislation and regulation can significantly control our climate to forestall any deviation from “normal” and save us from a perceived crisis. This post is intended as a primer for politicians so they can cut through the hype and compare real observational data against the flawed model prognostications.
The data show that the current warming is not unusual, but part of a natural cycle; that greenhouse gases, other than water vapor, are not significant drivers of climate; that human emissions of carbon dioxide are insignificant when compared to natural emissions of greenhouse gases; and that many predictions by climate modelers and hyped by the media are simply wrong.

A simple question for climate alarmists – where is the evidence

“What physical evidence supports the contention that carbon dioxide emissions from burning fossil fuels are the principal cause of global warming since 1970?”
(Remember back in the 1970s, climate scientists and media were predicting a return to an “ice age.”)
I have posed that question to five “climate scientist” professors at the University of Arizona who claim that our carbon dioxide emissions are the principal cause of dangerous global warming. Yet, when asked the question, none could cite any supporting physical evidence.

Carbon dioxide is necessary for life on Earth

Rather than being a “pollutant,” carbon dioxide is necessary for life on Earth as we know it. Earth’s climate has been changing for at least four billion years in cycles large and small. Few in the climate debate understand those changes and their causes. Many are fixated on carbon dioxide (CO2), a minor constituent of the atmosphere, but one absolutely necessary for life as we know it. Perhaps this fixation derives from ulterior political motives for controlling the global economy. For others, the true believers, perhaps this fixation derives from ignorance.

Carbon Dioxide and the Greenhouse Effect

The “greenhouse effect,” very simplified, is this: solar radiation penetrates the atmosphere and warms the surface of the earth. The earth’s surface radiates thermal energy (infrared radiation) back into space. Some of this radiation is absorbed and re-radiated back to the surface and into space by clouds, water vapor, methane, carbon dioxide, and other gases. Water vapor is the principal greenhouse gas; the others are minor players. It is claimed that without the greenhouse effect the planet would be an iceball, about 34ºC colder than it is.* The term “greenhouse effect” with respect to the atmosphere is an unfortunate usage because it is misleading. The interior of a real greenhouse (or your automobile parked with windows closed and left in the sun) heats up because there is a physical barrier to convective heat loss. There is no such physical barrier in the atmosphere.

*There is an alternate hypothesis:

What keeps Earth warm – the greenhouse effect or something else?

Scottish physicist James Clerk Maxwell proposed in his 1871 book “Theory of Heat” that the temperature of a planet depends only on gravity, the mass of the atmosphere, and the heat capacity of the atmosphere. Temperature is independent of atmospheric composition. Greenhouse gases have nothing to do with it. Many publications since have expounded on Maxwell’s theory and have shown that it applies to all planets in the Solar System.
The Grand Canyon of Arizona provides a practical demonstration of this principle.

Evidence that CO2 emissions do not intensify the greenhouse effect

The U.S. government’s National Climate Assessment report and the UN IPCC both claim that human carbon dioxide emissions are “intensifying” the greenhouse effect and causing global warming. The carbon dioxide driven global warming meme makes four specific predictions. Physical evidence shows that all four of these predictions are wrong.
“It doesn’t matter how beautiful your theory is; it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” – Richard Feynman

An examination of the relationship between temperature and carbon dioxide

In this article, we will examine the Earth’s temperature and the carbon dioxide (CO2) content of the atmosphere at several time scales to see if there is any relationship. I stipulate that the greenhouse effect does exist. I maintain, however, that the ability of CO2 emissions to cause global warming is tiny and overwhelmed by natural forces. The main effect of our “greenhouse” is to slow cooling.

How much global warming is dangerous?

The United Nation’s IPCC and other climate alarmists say all hell will break loose if the global temperature rises more than an additional 2º C (3.6ºF). That number, by the way, is purely arbitrary with no basis in science. It also ignores Earth’s geologic history which shows that for most of the time global temperatures have been much warmer than now. Let’s look back at a time when global temperatures are estimated to have been as much as 34ºF warmer than they are now. Hell didn’t break loose then.

Effects of global warming on humans

The EPA’s “endangerment finding” classified carbon dioxide as a pollutant and claimed that global warming will have adverse effects on human health. Real research says the opposite: cold is deadlier. The scientific evidence shows that warming is good for health.

Geology is responsible for some phenomena blamed on global warming

Melting of the Greenland and West Antarctic ice sheets has been blamed on global warming, but both have a geologic origin. The “Blob,” a recent warm ocean area off the Oregon coast responsible in part for the hot weather and drought in California, has also been blamed on global warming, but it too may have a geologic cause.

The 97 percent consensus for human caused climate change debunked again

It has been claimed that 97% of climate scientists say humans are causing most of the global warming. An examination of the numbers and how those numbers have been reached show that only 8.2% of scientists polled explicitly endorse carbon dioxide as the principal driver.
Read also a more general article: On consensus in science

Conclusion:

The basic conclusion of this review is that carbon dioxide has little effect on climate and all attempts to control carbon dioxide will be a futile and expensive exercise to no end. All the dire predictions are based on flawed computer models. Carbon dioxide is a phantom menace.

“The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.” – H. L. Mencken

ABOUT THE AUTHOR Jonathan DuHamel

I am a retired economic geologist and have worked as an explorationist in search of economic mineral deposits, mainly copper, molybdenum, and gold. My exploration activities have been mainly in the Western U.S. including Alaska. I have also worked in Mexico, South Africa, Ireland, and Scotland.

Exploration geologists are trained not only in the geologic sciences, but also in chemistry, physics, botany, and geostatistics. I am also trained in the natural history of the Sonoran Desert.

After graduating from The Colorado School of Mines with a Geologic Engineering degree and Master of Science degree, and before practicing as a geologist, I served as an officer in the Army Chemical Corps assigned to a unit that tested experimental weapons and equipment.

I currently reside in Tucson, AZ.

Greta’s Spurious “Carbon Budget”

Many have noticed that recent speeches written for child activist Greta Thunberg are basing the climate “emergency” on the rapidly closing “carbon budget”. This post aims to summarize how alarmists define the so-called carbon budget, and why their claims to its authority are spurious. In the text and at the bottom are links to websites where readers can access both the consensus science papers and the analyses showing the flaws in the carbon budget notion. Excerpts are in italics with my bolds.

The 2019 update on the Global Carbon Budget was reported at Future Earth article entitled Global Carbon Budget Estimates Global CO2 Emissions Still Rising in 2019. The results were published by the Global Carbon Project in the journals Nature Climate Change, Environmental Research Letters, and Earth System Science Data. Excerpts below in italics with my bolds.

History of Growing CO2 Emissions

“Carbon dioxide emissions must decline sharply if the world is to meet the ‘well below 2°C’ mark set out in the Paris Agreement, and every year with growing emissions makes that target even more difficult to reach,” said Robbie Andrew, a Senior Researcher at the CICERO Center for International Climate Research in Norway.

Global emissions from coal use are expected to decline 0.9 percent in 2019 (range: -2.0 percent to +0.2 percent) due to an estimated 10 percent fall in the United States and a 10 percent fall in Europe, combined with weak growth in coal use in China (+0.8 percent) and India (+2 percent).

 

Shifting Mix of Fossil Fuel Consumption

“The weak growth in carbon dioxide emissions in 2019 is due to an unexpected decline in global coal use, but this drop is insufficient to overcome the robust growth in natural gas and oil consumption,” said Glen Peters, Research Director at CICERO.

“Global commitments made in Paris in 2015 to reduce emissions are not yet being matched by proportionate actions,” said Peters. “Despite political rhetoric and rapid growth in low carbon technologies such as solar and wind power, electric vehicles, and batteries, global fossil carbon dioxide emissions are likely to be more than four percent higher in 2019 than in 2015 when the Paris Agreement was adopted.

“Compared to coal, natural gas is a cleaner fossil fuel, but unabated natural gas merely cooks the planet more slowly than coal,” said Peters. “While there may be some short-term emission reductions from using natural gas instead of coal, natural gas use needs to be phased out quickly on the heels of coal to meet ambitious climate goals.”

Oil and gas use have grown almost unabated in the last decade. Gas use has been pushed up by declines in coal use and increased demand for gas in industry. Oil is used mainly to fuel personal transport, freight, aviation and shipping, and to produce petrochemicals.

“This year’s Carbon Budget underscores the need for more definitive climate action from all sectors of society, from national and local governments to the private sector,” said Amy Luers, Future Earth’s Executive Director. “Like the youth climate movement is demanding, this requires large-scale systems changes – looking beyond traditional sector-based approaches to cross-cutting transformations in our governance and economic systems.”

Burning gas emits about 40 percent less CO2 than coal per unit energy, but it is not a zero-carbon fuel. While CO2 emissions are likely to decline when gas displaces coal in electricity production, Global Carbon Project researchers say it is only a short-term solution at best. All CO2 emissions will need to decline rapidly towards zero.

The Premise: Rising CO2 Emissions Cause Global Warming

Atmospheric CO2 concentration is set to reach 410 ppm on average in 2019, 47 percent above pre-industrial levels.

Glen Peters on the carbon budget and global carbon emissions is a Future of Earth interview explaining the Carbon Budget notion. Excerpts in italics with my bolds.

In many ways, the global carbon budget is like any other budget. There’s a maximum amount we can spend, and it must be allocated to various countries and various needs. But how do we determine how much carbon each country can emit? Can developing countries grow their economies without increasing their emissions? And if a large portion of China’s emissions come from products made for American and European consumption, who’s to blame for those emissions? Glen Peters, Research Director at the Center for International Climate Research (CICERO) in Oslo, explains the components that make up the carbon budget, the complexities of its calculation, and its implications for climate policy and mitigation efforts. He also discusses how emissions are allocated to different countries, how emissions are related to economic growth, what role China plays in all of this, and more.

The carbon budget generally has two components: the source component, so what’s going into the atmosphere; and the sink component, so the components which are more or less going out of the atmosphere.

So in terms of sources, we have fossil fuel emissions; so we dig up coal, oil, and gas and burn them and emit CO2. We have cement, which is a chemical reaction, which emits CO2. That’s sort of one important component on the source side. We also have land use change, so deforestation. We’re chopping down a lot of trees, burning them, using the wood products and so on. And then on the other side of the equation, sort of the sink side, we have some carbon coming back out of the atmosphere in a sense. So the land sucks up about 25% of the carbon that we put into the atmosphere and the ocean sucks up about 25%. So for every ton we put into the atmosphere, then only about half a ton of CO2 remains in the atmosphere. So in a sense, the oceans and the land are cleaning up half of our mess, if you like.

The other half just stays in the atmosphere. Half a ton stays in the atmosphere; the other half is cleaned up. It’s that carbon that stays in the atmosphere which is causing climate change and temperature increases and changes in precipitation and so on.

The carbon budget is like a balance, so you have something coming in and something going out, and in a sense by mass balance, they have to equal. So if we go out and we take an estimate of how much carbon have we emitted by burning fossil fuels or by chopping down forests and we try and estimate how much carbon has gone into the ocean or the land, then we can measure quite well how much carbon is in the atmosphere. So we can add all those measurements together and then we can compare the two totals — they should equal. But they don’t equal. And this is sort of part of the science, if we overestimated emissions or if we over or underestimated the strength of the land sink or the oceans or something like that. And we can also cross check with what our models say.
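The mass-balance bookkeeping Peters describes can be sketched in a few lines of Python. The figures below are illustrative round numbers of my own choosing, not the GCP's published estimates:

```python
# Carbon-budget mass balance: sources should equal sinks plus atmospheric
# growth. All values are illustrative round figures in GtCO2 per year
# (assumed for this sketch, not official GCP estimates).
fossil = 37.0        # fossil fuel and cement emissions (assumed)
land_use = 6.0       # land-use change / deforestation (assumed)
ocean_sink = 9.5     # ocean uptake, roughly a quarter of sources (assumed)
land_sink = 12.0     # land uptake (assumed)
atmos_growth = 18.0  # measured increase of CO2 in the atmosphere (assumed)

sources = fossil + land_use
sinks_plus_air = ocean_sink + land_sink + atmos_growth

# In principle the two totals should be equal; in practice they are not,
# and the residual is the "budget imbalance" Peters alludes to.
imbalance = sources - sinks_plus_air
print(f"sources = {sources:.1f}, sinks + atmosphere = {sinks_plus_air:.1f}")
print(f"budget imbalance = {imbalance:+.1f} GtCO2/yr")
```

With these assumed numbers the two sides differ by a few GtCO2 per year, which is exactly the kind of residual the scientists then try to attribute to over- or under-estimated terms.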

My Comment:

Several things are notable about the carbon cycle diagram from GCP. It claims the atmosphere gains 18 GtCO2 per year, and that this drives global warming. Yet estimates of emissions from burning fossil fuels and from land use combined range from 36 to 45 GtCO2 per year, i.e. 40.5 +/- 4.5. The uptake by the biosphere and ocean combined ranges from 16 to 25 GtCO2 per year, i.e. 20.5 +/- 4.5. The uncertainty on emissions is 11%, while the natural sequestration uncertainty is 22%, twice as much.

Furthermore, the gross fluxes from the biosphere and ocean are both presented as balanced, with no error range, even though the diagram assumes the natural sinks/sources are not in balance but are taking up more CO2 than they release. The IPCC reported that gross fluxes generally have uncertainties of more than +/- 20% (IPCC AR4 WG1, Figure 7.3). Thus for land and ocean the estimates range as follows:

Land: 440 GtCO2 per year, with uncertainty between 352 and 528, a range of 176
Ocean: 330 GtCO2 per year, with uncertainty between 264 and 396, a range of 132
Nature: 770 GtCO2 per year, with uncertainty between 616 and 924, a range of 308

So the natural flux uncertainty is 7.5 times the estimated human emissions of 41 GtCO2 per year.
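The arithmetic above is easy to verify. Here is a minimal Python check using only the figures quoted in this post (the +/- 20% comes from the IPCC statement cited above):

```python
# Check of the uncertainty ranges quoted above: gross natural fluxes with
# +/- 20% uncertainty (per IPCC AR4 WG1 Fig. 7.3), all in GtCO2 per year.
def pct_range(central, pct=0.20):
    """Return (low, high, full range) for a central value +/- pct."""
    half = central * pct
    return central - half, central + half, 2 * half

land = pct_range(440)    # gross land flux
ocean = pct_range(330)   # gross ocean flux
nature = pct_range(770)  # combined natural flux

human_emissions = 41     # estimated human emissions, GtCO2/yr
ratio = nature[2] / human_emissions

print(f"Land:   low {land[0]:.0f}, high {land[1]:.0f}, range {land[2]:.0f}")
print(f"Ocean:  low {ocean[0]:.0f}, high {ocean[1]:.0f}, range {ocean[2]:.0f}")
print(f"Nature: low {nature[0]:.0f}, high {nature[1]:.0f}, range {nature[2]:.0f}")
print(f"Natural flux uncertainty is {ratio:.1f}x human emissions")
```

Running this reproduces the ranges of 176, 132 and 308 GtCO2 per year, and the factor of 7.5 versus human emissions.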

For more detail see CO2 Fluxes, Sources and Sinks and Who to Blame for Rising CO2?

The Fundamental Flaw: Spurious Correlation

Beyond the uncertainty of the amounts is a method error in claiming rising CO2 drives temperature changes. For this discussion I am drawing on work by chaam jamal at her website Thongchai Thailand. A series of articles there explain in detail how the mistake was invented and why it is faulty. A good starting point is The Carbon Budgets of Climate Science. Below is my attempt at a synopsis from her writings with excerpts in italics and my bolds.

Simplifying Climate to a Single Number

Figure 1 above shows the strong positive correlation between cumulative emissions and cumulative warming used by climate science and by the IPCC to track the effect of emissions on temperature and to derive the “carbon budget” for various acceptable levels of warming such as 2C and 1.5C. These so called carbon budgets then serve as policy tools for international climate action agreements and climate action imperatives of the United Nations. And yet, all such budgets are numbers with no interpretation in the real world because they are derived from spurious correlations. Source: Matthews et al 2009

Carbon budget accounting is based on the TCRE (Transient Climate Response to Cumulative Emissions), which is derived from the observed correlation between temperature and cumulative emissions. A comprehensive explanation of an application of this relationship in climate science is found in the IPCC SR15 2018. That IPCC description is quoted below in paragraphs #1 to #7, where the IPCC describes how climate science uses the TCRE for climate action mitigation of AGW in terms of the so-called carbon budget. Also included are some of the difficult issues in carbon budget accounting and the methods used in their resolution.

It has long been recognized that the climate sensitivity of surface temperature to the logarithm of atmospheric CO2 (ECS), which lies at the heart of the anthropogenic global warming and climate change (AGW) proposition, was a difficult issue for climate science because of the large range of empirical values reported in the literature and the so called “uncertainty problem” it implies.

The ECS uncertainty issue was interpreted in two very different ways. Climate science took the position that ECS uncertainty implies climate action has to be greater than that implied by the mean value of ECS, in order to ensure that the higher possible values of ECS are accommodated, while skeptics argued that the large range means we don’t really know. At the same time, skeptics also presented convincing arguments against the assumption that observed changes in atmospheric CO2 concentration can be attributed to fossil fuel emissions.

A breakthrough came in 2009 when Damon Matthews, Myles Allen, and a few others almost simultaneously published almost identical papers reporting the discovery of a “near perfect” correlation (ρ≈1) between surface temperature and cumulative emissions {2009: Matthews, H. Damon, et al. “The proportionality of global warming to cumulative carbon emissions” Nature 459.7248 (2009): 829}. They had found that, irrespective of the timing of emissions or of atmospheric CO2 concentration, emitting a trillion tonnes of carbon will cause 1.0 – 2.1 C of global warming. This linear regression coefficient corresponding with the near perfect correlation between cumulative warming and cumulative emissions (note: temperature=cumulative warming), initially described as the Climate Carbon Response (CCR) was later termed the Transient Climate Response to Cumulative Emissions (TCRE).

Initially a curiosity, it gained in importance when it was found to predict future temperatures consistent with model predictions. The consistency with climate models was taken as a validation of the new tool, and the TCRE became integrated into the theory of climate change. However, as noted in a related post, the consistency likely derives from the assumption that emissions accumulate in the atmosphere.

Thereafter the TCRE became incorporated into the foundation of climate change theory particularly so in terms of its utility in the construction of carbon budgets for climate action plans for any given target temperature rise, an application for which the TCRE appeared to be tailor made. Most importantly, it solved or perhaps bypassed the messy and inconclusive uncertainty issue in ECS climate sensitivity that remained unresolved. The importance of this aspect of the TCRE is found in the 2017 paper “Beyond Climate Sensitivity” by prominent climate scientist Reto Knutti where he declared that the TCRE metric should replace the ECS as the primary tool for relating warming to human caused emissions {2017: Knutti, Reto, Maria AA Rugenstein, and Gabriele C. Hegerl. “Beyond equilibrium climate sensitivity.” Nature Geoscience 10.10 (2017): 727}. The anti ECS Knutti paper was not only published but received with great fanfare by the journal and by the climate science community in general.

The TCRE has continued to gain in importance and prominence as a tool for the practical application of climate change theory in terms of its utility in the construction and tracking of carbon budgets for limiting warming to a target such as the Paris Climate Accord target of +1.5C above pre-industrial. {Matthews, H. Damon. “Quantifying historical carbon and climate debts among nations.” Nature climate change 6.1 (2016): 60}. A bibliography on the subject of TCRE carbon budgets is included below at the end of this article (here).

However, a mysterious and vexing issue has arisen in the practical matter of applying and tracking TCRE-based carbon budgets. The unsolved matter is the remaining carbon budget puzzle {Rogelj, Joeri, et al. “Estimating and tracking the remaining carbon budget for stringent climate targets.” Nature 571.7765 (2019): 335-342}. It turns out that midway in the implementation of a carbon budget, the remaining carbon budget computed by subtraction does not match the TCRE carbon budget for the latter period computed directly, using the Damon Matthews proportionality of temperature with cumulative emissions for that period. The difference between the two estimates of the remaining carbon budget has a rational explanation in terms of the statistics of a time series of cumulative values of another time series, described in a related post.

It is shown that a time series of the cumulative values of another time series has neither time scale nor degrees of freedom and that therefore statistical properties of this series can have no practical interpretation.

It is demonstrated with random numbers that the only practical implication of the “near perfect proportionality” correlation reported by Damon Matthews is that the two time series being compared (annual warming and annual emissions) tend to have positive values. In the case of emissions we have all positive values, and during a time of global warming, the annual warming series contains mostly positive values. The correlation between temperature (cumulative warming) and cumulative emissions derives from this sign bias as demonstrated with random numbers with and without sign bias.

Figure 4: Random Numbers without Sign Bias

Figure 5: Random Numbers with Sign Bias
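The sign-bias demonstration described above is simple to reproduce. The following Python sketch (my own toy example, not chaam jamal's actual code) correlates cumulative sums of two independent random series, first with a positive mean (sign bias) and then centred on zero:

```python
# Demonstration: cumulative sums of two INDEPENDENT random series become
# near-perfectly correlated when both series have a positive sign bias.
import random
from itertools import accumulate

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

random.seed(42)
n = 5000

# Stand-ins for "annual warming" and "annual emissions" with a positive bias:
warm_biased = [random.gauss(1.0, 1.0) for _ in range(n)]
emit_biased = [random.gauss(1.0, 1.0) for _ in range(n)]

# The same series re-centred on zero, i.e. no sign bias:
warm_unbiased = [w - 1.0 for w in warm_biased]
emit_unbiased = [e - 1.0 for e in emit_biased]

r_biased = pearson(list(accumulate(warm_biased)), list(accumulate(emit_biased)))
r_unbiased = pearson(list(accumulate(warm_unbiased)), list(accumulate(emit_unbiased)))

print(f"cumulative correlation with sign bias:    {r_biased:.3f}")
print(f"cumulative correlation without sign bias: {r_unbiased:.3f}")
```

With sign bias the correlation of the cumulative series comes out near 1 even though the underlying series are independent; without the bias it is erratic from run to run. That is the spurious correlation at issue.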

The sign bias explains the correlation between cumulative values of time series data and also the remaining carbon budget puzzle. It is shown that the TCRE regression coefficient between these time series of cumulative values derives from the positive value bias in the annual warming data. Thus, during a period of accelerated warming, the second half of the carbon budget period may contain a higher percentage of positive values for annual warming and it will therefore show a carbon budget that exceeds the proportional budget for the second half computed from the full span regression coefficient that is based on a lower bias for positive values.

In short, the bias for positive annual warming is highest for the second half, lowest for the first half, and midway between these two values for the full span – and therein lies the simple statistical explanation of the remaining carbon budget issue that climate science is trying to solve in terms of climate theory and its extension to Earth System Models. The Millar and Friedlingstein 2018 paper is yet another in a long line of studies that ignore the statistical issues in the TCRE correlation and instead try to explain its anomalous behavior in terms of climate theory, whereas in fact the explanation lies in statistical issues that have been overlooked by these scientists.

The fundamental problem with the construction of TCRE carbon budgets and their interpretation in terms of climate action is that the TCRE is a spurious correlation that has no interpretation in terms of a relationship between emissions and warming. Complexities in these carbon budgets such as the remaining carbon budget are best understood in these terms and not in terms of new and esoteric variables such as those in earth system models.

Footnote:

An independent study by Jamal Munshi comes to a similar conclusion: Climate Sensitivity and the Responsiveness of Temperature to Atmospheric CO2.

Detrended correlation analysis of global mean temperature observations and model projections is carried out as a test of the theory that surface temperature is responsive to atmospheric CO2 concentration, in terms of GHG forcing of surface temperature implied by the climate sensitivity parameter ECS. The test shows strong evidence of GHG forcing of warming in the theoretical RCP8.5 temperature projections made with CMIP5 forcings. However, no evidence of GHG forcing by CO2 is found in observational temperatures from four sources, including two from satellite measurements. The test period is set to 1979-2018 so that satellite data can be included on a comparable basis. No empirical evidence is found in these data for a climate sensitivity parameter that determines surface temperature according to atmospheric CO2 concentration, or for the proposition that reductions in fossil fuel emissions will moderate the rate of warming.
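To illustrate what a detrended correlation test does, here is a toy Python sketch (my own synthetic example, not Munshi's actual analysis): two series that share a linear trend but have independent noise correlate strongly in raw form, yet show essentially no correlation once each series' trend is removed:

```python
# Detrended correlation test (toy example): shared trend, independent noise.
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def detrend(y):
    """Subtract the least-squares linear trend from a series."""
    n = len(y)
    mx, my = (n - 1) / 2, sum(y) / n
    sxx = sum((t - mx) ** 2 for t in range(n))
    slope = sum((t - mx) * (v - my) for t, v in zip(range(n), y)) / sxx
    return [v - (my + slope * (t - mx)) for t, v in zip(range(n), y)]

random.seed(0)
n = 400
a = [0.01 * t + random.gauss(0, 0.5) for t in range(n)]  # trending series 1
b = [0.01 * t + random.gauss(0, 0.5) for t in range(n)]  # trending series 2

raw = pearson(a, b)
det = pearson(detrend(a), detrend(b))
print(f"raw correlation:       {raw:.2f}")
print(f"detrended correlation: {det:.2f}")
```

The raw correlation is high purely because of the shared trend; the detrended correlation, which is what actually tests year-to-year responsiveness, is near zero.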

Postscript on Spurious Correlations

I am not a climate, environment, geology, weather, or physics expert. However, I am an expert on statistics, so I recognize bad statistical analysis when I see it. There are quite a few problems with the use of statistics within the global warming debate. The use of Gaussian statistics is the first error. In his first movie Gore used a linear regression of CO2 and temperature. If he had done the same regression using the number of zoos in the world, or the worldwide use of atomic energy, or sunspots, he would have gotten the same result. A linear regression by itself proves nothing. – Dan Ashley, PhD statistics, PhD Business, Northcentral University

 

NH Land & Ocean Air Warms in February


With apologies to Paul Revere, this post is on the lookout for cooler weather with an eye on both the Land and the Sea.  UAH has updated their tlt (temperatures in lower troposphere) dataset for February 2020.  Previously I have done posts on their reading of ocean air temps as a prelude to updated records from HADSST3. This month also has a separate graph of land air temps because the comparisons and contrasts are interesting as we contemplate possible cooling in coming months and years.

Presently sea surface temperatures (SST) are the best available indicator of heat content gained or lost from earth’s climate system.  Enthalpy is the thermodynamic term for total heat content in a system, and humidity differences in air parcels affect enthalpy.  Measuring water temperature directly avoids distorted impressions from air measurements.  In addition, ocean covers 71% of the planet surface and thus dominates surface temperature estimates.  Eventually we will likely have reliable means of recording water temperatures at depth.

Recently, Dr. Ole Humlum reported from his research that air temperatures lag 2-3 months behind changes in SST.  He also observed that changes in CO2 atmospheric concentrations lag behind SST by 11-12 months.  This latter point is addressed in a previous post Who to Blame for Rising CO2?

After a technical enhancement to HadSST3 delayed March and April updates, May resumed a pattern of HadSST updates mid month.  For comparison we can look at lower troposphere temperatures (TLT) from UAHv6 which are now posted for February. The temperature record is derived from microwave sounding units (MSU) on board satellites like the one pictured above.

The UAH dataset includes temperature results for air above the oceans, and thus should be most comparable to the SSTs. There is the additional feature that ocean air temps avoid Urban Heat Islands (UHI). Recently there was a change in UAH processing of satellite drift corrections, including dropping one platform which can no longer be corrected. The graphs below are taken from the new and current dataset.

The graph below shows monthly anomalies for ocean temps since January 2015. After a June rise in ocean air temps, all regions dropped back down to May levels in July and August.  A spike occurred in September, followed by plummeting October ocean air temps in the Tropics and SH. In November that drop partly recovered, then leveled off with a slight downward tilt and continued cooling in NH.

2020 started with NH warming slightly, though still cooler than the previous months back to September.  SH and the Tropics also rose slightly, resulting in a Global rise. Now in February there is an anomaly spike of 0.32C in NH, rarely seen in the ocean data.  The Tropics and SH also rose, resulting in an uptick Globally.

Land Air Temperatures Showing a Seesaw Pattern

We sometimes overlook that in climate temperature records, while the oceans are measured directly with SSTs, land temps are measured only indirectly.  The land temperature records at surface stations sample air temps at 2 meters above ground.  UAH gives tlt anomalies for air over land separately from ocean air temps.  The graph updated for February 2020 is below.

Here we have fresh evidence of the greater volatility of land temperatures, along with extraordinary departures, first by SH land and then by NH.  Despite the small amount of SH land, it spiked in July, then dropped in August so sharply along with the Tropics that it pulled the global average downward against slight warming in NH.  In November SH jumped up beyond any month in this period.  Despite this spike along with a rise in the Tropics, NH land temps dropped sharply.  The larger NH land area pulled the Global average downward.  December reversed the situation, with SH dropping as sharply as it rose, while NH rose to the same anomaly, pulling the Global average up slightly.

2020 started with sharp drops in both SH and NH, with the Global anomaly dropping as a result.  Now in February comes a spike of 0.42C in NH land air, nearing 2016 levels. Meanwhile SH land continued dropping.  The behavior of SH and NH land temps is puzzling, to say the least.  It is also a reminder that global averages can conceal important underlying volatility.

The longer term picture from UAH is a return to the mean for the period starting with 1995.  The 2019 average rose, but there is currently no El Nino to sustain it.

TLTs include mixing above the oceans and probably some influence from nearby more volatile land temps.  Clearly NH and Global land temps have been dropping in a seesaw pattern, more than 1C lower than the 2016 peak, prior to these last several months. TLT measures started the recent cooling later than SSTs from HadSST3, but are now showing the same pattern.  It seems obvious that despite the three El Ninos, their warming has not persisted, and without them it would probably have cooled since 1995.  Of course, the future has not yet been written.

Climate Models Fail from Radiative Obsession

Peter Stallinga provides a thorough analysis explaining why models based purely on radiative heat exchanges fail without incorporating other thermodynamic processes.  This post is a synopsis of the structure of his position without the extensive mathematical expressions of the relationships discussed.  The full text of his paper can be accessed by linking to the title below.

Comprehensive Analytical Study of the Greenhouse Effect of the Atmosphere
By Peter Stallinga. Atmospheric and Climate Sciences, 2020. H/T NoTricksZone. Excerpts in italics with my bolds.

Introduction

Climate change is an important societal issue. Large effort in society is spent on addressing it. For adequate measures, it is important that the phenomenon of climate change is well understood, especially the effect of adding carbon dioxide to the atmosphere. In this work, a theoretical fully analytical study is presented of the so-called greenhouse effect of carbon dioxide. The effect of this gas in the atmosphere itself was already determined as being of little importance based on empirical analysis. In the current work, the effect is studied both phenomenologically and analytically.

In a new approach, the atmosphere is solved by taking both radiative as well as thermodynamic processes into account. The model fully fits the empirical data and an analytical equation is given for the atmospheric behavior. Upper limits are found for the greenhouse effect ranging from zero to a couple of mK per ppm CO2. (mK is 1/1000 degree Kelvin). It is shown that it cannot explain the observed correlation of carbon dioxide and surface temperature. This correlation, however, is readily explained by Henry’s Law (outgassing of oceans), with other phenomena insignificant.

Finally, while the greenhouse effect can thus, in a rudimentary way, explain the behavior of the atmosphere of Earth, it fails describing other atmospheres such as that of Mars. Moreover, looking at three cities in Spain, it is found that radiation balances only cannot explain the temperature of these cities. Finally, three data sets with different time scales (60 years, 600 thousand years, and 650 million years) show markedly different behavior, something that is inexplicable in the framework of the greenhouse theory.

The Greenhouse Effect

The greenhouse effect is phenomenologically introduced and compared to the alternative explanation for the data, namely Henry’s Law. The observed correlations between temperature and CO2 are presented; these are the data we are going to attempt to explain.

Henry’s Law

The correlation between temperature and [CO2] is readily explained by another phenomenon, called Henry’s Law: The capacity of liquids to hold gases in solution is depending on temperature. When oceans heat up, the capacity decreases and the oceans thus release CO2 (and other gases) into the atmosphere. When we quantitatively analyze this phenomenon, we see that it perfectly fits the observations, without the need of any feedback [1]. We thus now have an alternative hypothesis for the explanation of the observations presented by Al Gore.
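The temperature dependence of solubility that Henry's Law describes can be sketched with a van 't Hoff correction. The constants below are generic textbook values for CO2 in water, used purely for illustration; they are not taken from the paper.

```python
import numpy as np

# Van 't Hoff form of Henry's Law for CO2 in water (illustrative sketch).
# kH0 and the temperature coefficient are textbook values for pure water,
# not oceanographic constants from the paper.
kH0 = 3.3e-4          # mol/(m3*Pa) at the reference temperature 298.15 K
vant_hoff = 2400.0    # K, the coefficient d(ln kH)/d(1/T)

def henry_kH(T):
    """Solubility constant kH(T): warmer water holds less dissolved CO2."""
    T_ref = 298.15
    return kH0 * np.exp(vant_hoff * (1.0 / T - 1.0 / T_ref))

# Fractional drop in solubility for a 1 K ocean warming near 288 K:
frac = 1 - henry_kH(289.15) / henry_kH(288.15)
print(f"~{100 * frac:.1f}% less CO2 held per +1 K")  # roughly 3% per kelvin
```

Under these assumed constants, each kelvin of ocean warming reduces the water's capacity to hold CO2 by roughly 3%, which is the outgassing mechanism the paper invokes.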

The greenhouse effect can be as good as rejected and Henry’s Law stays firmly standing. We concluded that the effect of anthropogenic CO2 on the climate is negligible and that the effect of the ocean temperature on atmospheric [CO2] is exactly equal, in both sign and magnitude, to that expected on the basis of Henry’s Law [1].

Contemporary Correlation

Correlations are best shown in correlation plots; instead of showing both series as time series, they are better shown as one vs. the other: if they are correlated, a straight line should result. Figure 2(b) shows a correlation plot of the same data as used for Figure 2(a) (but without averaging). We see that there is an apparent correlation between the two datasets and we can fit a line to them to find the coefficient. The value is 10.2 mK/ppm. (See Table 2 for a summary of all data sets and models described here.)

This experimental value of 10.2 mK/ppm is highly interesting. It is neither close to the value expected for the greenhouse effect (1.4 mK/ppm), nor close to the value of Henry’s Law (100 mK/ppm). Even stranger, it is also not close to the value of the 600-ka ice core data (95 mK/ppm).

A missing response might be explained in a relaxation model. After all, induced changes take time to materialize; the system needs time to settle to the new equilibrium value. However, too big responses compared to the model are not possible. A more likely cause for the divergence between model and data is that the correlation is merely coincidental [2]. In a Henry’s-Law (HL) analysis, the CO2 has no effect on the temperature, but a concurrent temperature rise is merely a coincidence. That is, because the [CO2] rise in contemporary data is possibly of anthropogenic origin and not (much) caused by the temperature rise. In the HL framework, the ca. 0.8 degree temperature rise has contributed a meager 8 ppm to the CO2 in the atmosphere. The rest might be coming from anthropogenic sources, or from nature itself.

If, on the other hand, we want to attribute the temperature rise to CO2, we must build in a delay, since most of the effect of the alleged greenhouse effect has apparently not occurred yet. Using the value of 95 mK/ppm (Table 2), the 80 ppm of Figure 2(b) should have produced (will produce?) a staggering 7.6 degree temperature rise. A meager 0.8 degrees is observed after 60 years (13 mK/a, Figure 2). From these data we can deduce a virtual relaxation time τ, namely a relaxation time of about half a millennium. We can thus expect a temperature rise of some tenths of a degree per decade in the coming centuries.
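The arithmetic in this passage can be checked directly. Assuming a first-order relaxation toward the ice-core equilibrium value (my framing of the paper's "virtual relaxation time," not its exact derivation):

```python
import numpy as np

# Equilibrium response implied by the 600-ka ice-core correlation (Table 2)
dCO2 = 80.0            # ppm rise in the contemporary record
s_ice = 0.095          # K/ppm
dT_expected = s_ice * dCO2   # 7.6 K, the "staggering" equilibrium rise
dT_observed = 0.8            # K actually observed over ~60 years

# First-order relaxation: dT_obs = dT_exp * (1 - exp(-t/tau))
t = 60.0  # years
tau = -t / np.log(1.0 - dT_observed / dT_expected)

print(f"equilibrium rise: {dT_expected:.1f} K")
print(f"implied relaxation time ~ {tau:.0f} years")  # about half a millennium
```

The implied relaxation time comes out near 540 years, matching the paper's "about half a millennium."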

This causes a problem. Having set out to explain things in known physical laws, we have no idea what physical process might be the origin of this relaxation. The radiation balance of the atmosphere has a relaxation time of about a month and a half, as evidenced by the 30-day delay between shortest/longest day and coldest/warmest day [1]. No millennium-scale relaxation mechanisms are easily identifiable in the atmosphere, where the GHE resides. We can even exclude that the oceans act as thermal sinks for the heat generated in the atmosphere, because then the effect in the atmosphere would initially be even larger than theoretically predicted, in an effect called overshoot. We thus conclude that, upon scrutiny, the (alleged) greenhouse effect also has to be rejected as a hypothesis to explain contemporary data.

Can the data be explained by Henry’s Law, instead? The observed correlation is 10 mK/ppm, or conversely 100 ppm/K. That is a factor of 10 too big for Henry’s Law, and relaxation processes can only make the effect smaller. We thus exclude Henry’s Law as an explanation for the contemporary steady [CO2] rise in the atmosphere; it is not caused by the steady rise in temperature.

A Radiative Greenhouse Model

A “classic” greenhouse model is presented where energy transfer is uniquely by radiation. First with the atmosphere as single body, then with the atmosphere as an infinite set of identical layers, and finally with a multi-layer model in which the atmosphere is in thermodynamic equilibrium, so that layers get thinner and colder upwards. 

Absorption in the Atmosphere

An intrinsic assumption we make here is that all incoming energy comes from radiation from the Sun. Heat coming from the Earth itself, from below the crust, is too small (about 50 mW/m2; estimation by the authors) to be significant. Moreover, all heat must be dissipated to the universe by radiation only; things such as evaporation of hot molecules from the top of the atmosphere are too insignificant. This is a rather trivial assumption and will therefore not be further justified. Perhaps more questionable is the assumption that the atmosphere is a well mixed chamber, meaning that all gases occur in the same ratios everywhere. The theoretical greenhouse effect is governed by optical absorption and emission processes in the atmosphere. As such, the Beer-Lambert rule of absorption plays an important role.

The greenhouse effect is often erroneously presented as being caused by the fact that σ_x (and thus also α, τ and the absorbance) depends on the wavelength: that visible sunlight can reach the surface while infrared terrestrial light cannot easily escape through the atmosphere. This schematic presentation is wrong, however, as we will discuss. It does not matter how and where the solar radiation is absorbed, or where terrestrial IR radiation is absorbed and reemitted in the atmosphere, whether sunlight reaches the surface or not, etc. The only thing that matters is the amount of radiation received by Earth (that is, the amount that is not reflected directly back into space) and the amount of IR radiation emitted.

Since the absorption coefficient depends on capture cross-sections as well as concentrations, pumping CO2 into the atmosphere might increase heat absorption in the atmosphere and might thus heat it up. However, at first sight, the effect is probably minimal, because nearly all infrared light is already absorbed; at the top of the atmosphere, according to the Beer-Lambert Equation (Equation (2)): J(h) ≈ 0. We thus do not expect much effect from adding CO2 to the atmosphere as long as there are other channels open for emission. Imagine: if 99% of the light that can possibly be absorbed is already absorbed (say, 350 W/m2), doubling CO2 in the atmosphere will not double the absorption (to 700 W/m2), but just add something close to 1% to it (3.5 W/m2), as Equation (2) tells us. We call this the “forcing” of the atmosphere.
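The 99%-absorbed example works out as follows under the Beer-Lambert law; the 350 W/m2 figure is the one used in the text:

```python
import numpy as np

# Beer-Lambert: absorbed fraction A = 1 - exp(-tau).
# If 99% is already absorbed, doubling the absorber barely matters.
tau = -np.log(0.01)      # optical depth giving exactly 99% absorption
J_in = 350.0             # W/m2 available in the absorbed band (text's example)

absorbed_1x = J_in * (1 - np.exp(-tau))        # 346.5 W/m2
absorbed_2x = J_in * (1 - np.exp(-2 * tau))    # nearly all of 350 W/m2

print(absorbed_1x)
print(absorbed_2x - absorbed_1x)   # ~3.5 W/m2 extra: the "forcing"
```

Doubling the optical depth adds only about 3.5 W/m2 of absorption, roughly 1% of the original, exactly as the paragraph states.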

We must remark at this moment that 254 K is the temperature of Earth as seen from outer space, irrespective of any greenhouse or other effect. If we, from outer space, point a radiometer at the planet it will have a temperature signature of 254.0 K. (If we also include the visible light that is reflected (aS), then the total radiation power is equal to that of a black sphere with temperature T = (S/(4σ))^(1/4) = 278.3 K.) The greenhouse effect does not change the apparent temperature of the planet as seen from outer space, but only that of a hidden layer (e.g., the solid surface). Radiation into space effectively comes from the atmosphere at an altitude where the temperature is 254.0 K, which is in the troposphere at about 6 km height.
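Both temperatures quoted here follow from the Stefan-Boltzmann law. A quick check; the solar constant and albedo below are standard round values I supply, not numbers from the paper:

```python
# Effective (bolometric) temperature of Earth as seen from space.
sigma = 5.670e-8   # W/(m2 K4), Stefan-Boltzmann constant
S = 1361.0         # W/m2, solar constant (standard value, my assumption)
a = 0.306          # Bond albedo (standard value, my assumption)

# Balance absorbed sunlight against blackbody emission over the full sphere:
T_eff = ((1 - a) * S / (4 * sigma)) ** 0.25
print(T_eff)   # close to the paper's 254.0 K

# Including the reflected fraction too (drop the 1-a factor):
T_total = (S / (4 * sigma)) ** 0.25
print(T_total)  # close to the paper's 278.3 K
```

With these standard inputs the two figures reproduce to within a fraction of a kelvin.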

In this analysis we assume that radiation balances determine the atmosphere. The temperature can then be found from the radiation intensity by the inverse function of Stefan-Boltzmann. With the emission α(U+D)dz depending linearly on height, this results in a fourth-root curve of height, with T = 288 K at the surface (z = 0) and about 210 K (−60˚C) at the top of the atmosphere. This is obviously incorrect. In the extreme, in this model attributing all greenhouse effect to CO2, doubling c = [CO2] will double α and that will result in a temperature of T0 = 313.2 K, or in other words, a sensitivity of ΔT0/Δ[CO2] = (25.1 K)/(350 ppm) = 71.7 mK/ppm.

A Non-Uniform Atmosphere

In the analysis above, it was assumed that the atmosphere was a homogeneous well-mixed closed box of height h with constant properties everywhere: pressure, concentrations, absorption constants, etc. (Except for a radiation gradient of Figure 4(b)). A real atmosphere, on the other hand, has no upper boundary and is contained by gravity on one side, making it ever thinner with height. We can calculate the properties of such a system, starting with the ideal gas law.

Figure 6. Adiabatic, static atmosphere without heat sources or sinks. (a) Temperature as a function of height; the slope of this linear curve is called the lapse rate and is −6.49 K/km. The black open circles and “+” signs are the US standard atmosphere (Refs. [8] and [14], respectively). The green line is Equation (36). The vertical line at 254 K is where the outward radiation apparently comes from; (b) Pressure as a function of height. The black dots are from Ref. [8]. The blue dashed line is the classical barometric equation, Equation (25), and the green solid line is Equation (38); (c) Density as a function of height, according to Equation (39). (Parameters used: m = 4.82 × 10^−26 kg, c_p = 1.51 kJ/(kg·K), T0 = 15 ˚C, P0 = 1013.2 hPa, g = 9.81 m/s².) The curves here are indistinguishable from the empirical data; the atmosphere is apparently in thermodynamic equilibrium.

We have to make the very important observation that these curves and equations are independent of any absorption of radiation in the atmosphere, for instance the greenhouse effect discussed earlier (Figure 4(b)). The lapse rate is not caused by radiation, but is a thermodynamic property of the atmosphere. That is, as long as there is thermodynamic equilibrium in the atmosphere, the curves are as given by the above three formulas, which are determined only by the surface temperature T0 and total air weight P0, the physical properties m and c_p of the gas, and the gravitational acceleration g. It might, of course, be the case that the atmosphere is not in equilibrium. In that case, mass and heat can be transported through convection, diffusion, conduction and radiation; the latter two transport only heat (and no mass).
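The adiabatic profile behind Figure 6 can be sketched from first principles: a linear temperature profile with lapse rate g/c_p, and the corresponding pressure law from the ideal gas equation. The effective c_p below is chosen to reproduce the paper's −6.49 K/km; treat this as an illustration, not the paper's exact Equations (36)-(39).

```python
g = 9.81        # m/s2, gravitational acceleration
cp = 1510.0     # J/(kg K), effective value reproducing the -6.49 K/km slope
T0 = 288.15     # K, surface temperature
P0 = 1013.2     # hPa, surface pressure
M = 0.02896     # kg/mol, mean molar mass of air
R = 8.314       # J/(mol K), gas constant

lapse = g / cp                      # K/m; about 6.5 K per km
print(1000 * lapse)

def profile(z):
    """Adiabatic T(z) and the matching pressure P(z) from the gas law."""
    T = T0 - lapse * z
    P = P0 * (T / T0) ** (g * M / (R * lapse))
    return T, P

for z in (0, 5000, 11000):
    T, P = profile(z)
    print(f"z={z:6d} m  T={T:6.1f} K  P={P:7.1f} hPa")
```

Even this bare-bones sketch lands within a few hPa of the US standard atmosphere through the troposphere, which is the point the paper makes about Figure 6: the profile follows from thermodynamics alone, with no radiative input.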

These results are phenomenologically equal to the case of the closed-box Beer-Lambert model. It has the same linear radiation forcing behavior of D(0), depending linearly on the total amount of absorbant in the atmosphere. For instance, doubling the total atmosphere with all constituents in it doubles P0 and that doubles the downward radiation (Equation (45)). Moreover, the dependence of temperature on the total amount of [CO2] is likewise of the same behavior. If CO2 is the only gas contributing to the greenhouse effect in the atmosphere, the above number would imply a climate sensitivity of ΔT/Δ[CO2] = (25.1 K)/(350 ppm) = 72 mK/ppm. Now, estimates are given that CO2 contributes to about 3.62% of the greenhouse effect [18]. Substituting σ_x′ = 1.0362 σ_x gives a temperature of 289.2 K, and thus a climate sensitivity of s = ΔT/Δ[CO2] = (1.0 K)/(350 ppm) = 2.9 mK/ppm.

Failure of the Model

Now, as mentioned before, we have to conclude that this entire idea of an analysis is wrong, and it is only performed here to show what values we might get on the basis of the (faulty) analysis. Because of the before-mentioned heat creep and thermodynamic equilibrium in general, it does not matter where and how the planet is heated (where radiation is absorbed, etc.); what matters is only the total amount of power, (1−a)S absorbed by the Earth system and U(h) emitted by it; in equilibrium they are equal.

For ease of calculation we put it all at the surface, but it makes no difference whatsoever. Thus, as a consequence, and even more important to observe, radiation coming from the surface, U(0), or from anywhere else in the atmosphere itself should not be taken into account, since it does not add anything to the total energy input; it is merely a way of redistributing internal heat, just like transport by convection and evaporation. The final distribution is given by the equations presented here based on ideal gas laws.

The idea that gases in the atmosphere work as some sort of “mirror” to reflect heat back to the surface is incorrect, because a very efficient and functional “mirror” already exists: Because the atmosphere is in thermodynamic equilibrium (as evidenced by the perfect fit of thermodynamic-equilibrium equations to empirical reality, see Figure 6) adding a heat flux F to the system from the top of the atmosphere to the bottom (see Figure 8)—for instance an absorption of IR and reemission downwards—will be fully counteracted by an equal flux F from the bottom of the atmosphere to the top. The net effect will always be zero!

Adding internal radiation to the radiation balance only, while ignoring the other, annulling effects, would allow an atmosphere to boot-strap itself, heating itself somehow. No object can heat itself to higher temperatures when it is already in thermal equilibrium. The only radiation that matters is the one coming from the Sun, and it does not matter where and how it enters into the heat balance of the atmosphere. Heat creep (convection, evaporation and radiation) will redistribute the heat to result in the distribution given here (Figure 6); according to Sorokhtin, 66.56% by convection, 24.90% by condensation and 8.54% by radiation [19].

An added process (by multiple absorption-emission) might, at best, speed up the equilibration. There is nothing CO2 would add to the current heat balance in the atmosphere, if the outward radiation no longer comes from the surface of the planet, but from a layer high up in the atmosphere. As long as the radiation does not come from the surface, making the layer blacker (more emissive) will radiate—“mirror”—more heat downwards, which is irrelevant (since it will only speed up the rate of thermalization), but also more heat upwards (F’ in Figure 8), cooling down that layer and thermodynamically the surface layer and the entire planet with it! Opening a radiative channel to the cold universe will rather cool an object.

Thermodynamic-Radiative Atmospheric Model

A thermodynamic-radiative model is presented in which each part of the atmosphere is in thermodynamic equilibrium and, moreover, exchanges heat not only by radiation, but also by other ways, such as convection, etc.

This brings us to the final model that will be presented here. It is based on combining the thermodynamic and radiative analyses given above. Considering the fact that the atmosphere is in thermodynamic equilibrium, we must assume that absorbed radiation does get assimilated by the heat bath, just like in classic Beer-Lambert theory. Radiation absorbed is distributed instantaneously all over the atmosphere and the surface. The surface emits with σT0^4; a part λ passes directly to the universe unhindered, and 1−λ is absorbed. The atmosphere emits too, with total emissivity equal to ε. Since, for radiation properties, it does not matter where the absorbing/emitting mass resides (z), but only how much radiation it receives and what temperature it has, we start by describing the atmosphere's ρ and T not as a function of height z, but as a function of temperature T.

The total energy in the atmosphere can easily be calculated. Since in thermodynamic equilibrium all air packages have the same specific energy, given by c_p T + gz, independent of z, all mass must have specific energy (per kg) equal to that at z = 0, namely c_p T0. The total thermodynamic energy of the atmosphere is then E_total = c_p T0 M. The atmosphere tries to shed energy by radiation, and receives energy from the surface. The surface also tries to shed energy, either by radiation into the universe, or by transfer to the atmosphere somehow (conduction, etc.). This is an intricate interplay of energy transfer.

We go back to the Beer-Lambert analysis. Once again, this is justified by the assumption that internal absorption is simply recycled back into the system. The heat is rapidly distributed to maintain thermodynamic equilibrium. Radiation leaving the surface, σT0^4, is partly absorbed by the atmosphere and partly transmitted. The absorbed energy goes back to the heat bath c_p T0 M and is redistributed therein. It does not heat up this bath (that would be counting the solar radiation twice); rather, reabsorbed energy is heat that was not allowed to escape the system and is thus blocked from cooling it.

We can now make a plot of the total radiation coming out of the atmosphere as a function of the total optical depth of the atmosphere (τ = σ_x M/m). Combining Equations (52) and (55), Figure 9 shows the fraction which is the planetary emissivity ε, as a function of optical depth τ = σ_x M/m and thermodynamic molecular heat capacity η = m c_p / k.

Figure 9. Left: Earth planetary emissivity, the fraction of the radiation emitted by the surface that escapes from the top of the atmosphere. 1 is like a black body, 0 is white. The red curve is the contribution from the surface and the blue curve from the atmosphere. The sum is ε, shown by the green curve. Right: The effect of augmenting the specific heat c_p of air is an increment of the planetary emissivity, and thus cooling of the surface if η = m c_p / k increases, but also a heating up of the atmosphere from where now more radiation originates. (Gray lines are the situation of the left figure; colored lines are for a doubling of c_p.)

The effect of the atmosphere can be found by knowing the thermodynamic parameter and the optical parameter, more precisely the molecular heat capacity η = m c_p / k and the optical depth τ = σ_x M/m. The latter is complicated, since it is not simply a linear function of M; we have to know the complete spectrum. But before we continue, it has to be pointed out that the emissivity equation (Equation (56)) is a monotonically decreasing function of τ for any value of η. That means that increasing the optical depth of the atmosphere will always reduce the emissivity and increase the surface temperature, in contrast to our earlier thoughts. Reabsorbed radiation goes back to the heat bath and gets a second chance to be radiated out to the universe, maybe through another channel: emitted at a wavelength for which the atmosphere is more transparent, or maybe from a place higher up in the atmosphere.

Comparing to Reality

Section 5 discusses some test cases of the model. They show mixed success. 

Mars

The relevant parameters from the NASA Mars Fact Sheet [22] are given in Table 4. Because most of the atmosphere consists of CO2, the average molecular mass is close to the value for CO2 (44.01 g/mol); the fact sheet gives 43.34 g/mol [23]. Therefore, the atmosphere has 3.955 kmol/m² of molecules. 95.32% of that is CO2, that is 3.770 kmol/m². That is much more than above any point on Earth. Yet, the effect of all that CO2 is unmeasurable. The black body (atmosphereless) temperature is T_bb,♂ = 209.8 K, which can be calculated on the basis of the solar irradiance W♂ = 586.2 W/m² and the albedo of Mars (a♂ = 0.250): σT_bb,♂^4 = (1 − a♂)W♂/4. The real temperature of the Mars surface is just this 210 K, making the measured greenhouse effect within the measurement error.
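The Mars blackbody temperature can be verified directly from the fact-sheet numbers quoted above:

```python
# Blackbody temperature of Mars from irradiance and albedo:
# sigma * T^4 = (1 - a) * W / 4
sigma = 5.670e-8   # W/(m2 K4), Stefan-Boltzmann constant
W_mars = 586.2     # W/m2, solar irradiance at Mars (NASA fact sheet value)
a_mars = 0.250     # Mars albedo (value quoted in the text)

T_bb = ((1 - a_mars) * W_mars / (4 * sigma)) ** 0.25
print(T_bb)   # close to the quoted 209.8 K
```

The result matches the quoted 209.8 K, and since the observed surface temperature is about 210 K, any Mars greenhouse effect is indeed within measurement error.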

Of course, what is important is not so much how much CO2 is absorbing, but how much the atmosphere in its entirety is absorbing; better to say, how much it is letting through. Even with CO2 fully saturated, nearly all radiated heat easily escapes the atmosphere; a tiny unmeasurable effect remains, and doubling it will have no effect. Imagine 1% of the spectrum is covered, in which part 90% of the radiation is absorbed. Thus 99.1% of all radiation escapes. Doubling this constituent will make the absorption in that 1% part go only to 99%; still 99.01% of all radiation escapes. In this particular case of Mars, CO2 has little effect in whatever quantity it is in the atmosphere.
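The channel arithmetic in this paragraph checks out:

```python
# Doubling an already-saturated band, using the text's worked numbers.
band = 0.01              # 1% of the spectrum is covered by the absorber
absorbed_in_band = 0.90  # 90% of in-band radiation is absorbed

escapes = 1 - band * absorbed_in_band
print(escapes)           # 0.991 -> 99.1% of all radiation escapes

# Doubling the constituent pushes in-band absorption toward 99%:
escapes_doubled = 1 - band * 0.99
print(escapes_doubled)   # 0.9901 -> still 99.01% escapes
```

The total escaping fraction drops only from 99.1% to 99.01%, which is why further additions of a nearly saturated absorber are negligible when the rest of the spectrum stays open.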

Earth

Ångström, in his classical work, wrote “[…] it is clear, first, that no more than about 16 percent of earth’s radiation can be absorbed by atmospheric carbon dioxide, and secondly, that the total absorption is very little dependent on the changes in the atmospheric carbon dioxide content, as long as it is not smaller than 0.2 of the existing value” [5].

This basically states that there is no further contribution to the greenhouse effect from CO2 for concentrations above about 60 ppm. It is another way of saying that a radiation window that is closed cannot contribute further to the greenhouse effect, as long as there are other windows still open. This is not strictly true, however: absorption lines have tails that never saturate, so absorption can be much larger. A HITRAN simulation of only CO2 absorption (with concentrations as found in the Earth atmosphere) results in absorption of 17.4% (see Figure 11; direct emission 82.6%), close to the value estimated by Ångström. Yet, as we have seen for Mars, with 30 times more CO2 this absorption is 26.4%. It shows the complexity of the subject. This manuscript is not about numerical simulations, but about analytical understanding of the greenhouse effect. We will now make an empirical estimation here in the framework of our analytical model.

The real emissivity of Earth (surface plus atmosphere) can easily be determined on the basis of empirical data, according to Equation (57), where we defined the emissivity through the ratio of radiation coming out to the radiation emitted by the surface.  Taking the thermodynamic parameters (m, c_p) as constant and as used before (Table 3), from the equations we find that this value occurs for an optical depth of τ = 0.754 (see the green dot of Figure 9). We can even see that 77.7% (0.470) of the radiation going into space comes from the surface and 22.3% (0.135) from the atmosphere. This contrasts with the notion mentioned earlier that the radiation comes from 6 km altitude (most still comes from the surface), and also contrasts with the value of 28.5% direct emission found earlier for a radiation-only atmosphere. Yet, we also see that if the opacity of the atmosphere increases, more radiation, also in absolute terms, comes from the atmosphere. This is the cooling effect of the atmosphere described earlier.
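An emissivity defined this way can be estimated from round-number energy-budget values (my illustrative inputs, not the paper's Table 3):

```python
# Emissivity = outgoing longwave radiation / surface blackbody emission.
sigma = 5.670e-8   # W/(m2 K4), Stefan-Boltzmann constant
T_surface = 288.0  # K, typical global mean surface temperature
OLR = 239.0        # W/m2, typical outgoing longwave value (my assumption)

surface_emission = sigma * T_surface ** 4   # ~390 W/m2
epsilon = OLR / surface_emission
print(surface_emission)
print(epsilon)   # ~0.61
```

The result, about 0.61, is consistent with the paper's 0.605 (= 0.470 + 0.135) for the combined surface and atmosphere contributions.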

Figure 13. Schematic picture of what happens at Earth. Two channels: one (A) is fully open and one (B) is nearly closed. Closing B further has little-to-no effect.

However, the situation is much closer to situation (f) of Figure 10. That is, a part of the spectrum emission is fully open, and the part of CO2 is as good as closed (shown black in the figure). This situation is depicted in Figure 13, with an open channel A and a closed channel B. Now, further closing the channel (B) that is already as good as closed has little-to-no effect, as long as a significant part of the rest of the spectrum is open.

Note that this is not envisaged in a radiation balance analysis.

In that model, once a heat package has “decided” to opt for a certain wavelength, the only way to make it out of the atmosphere is by multiple emission-absorption events, or by crawling its way back to the surface. As such, adding CO2 to a closed channel still has a lot of impact. In the current thermodynamic-radiative model, absorbed radiation is given back to the heat bath that is the surface plus the atmosphere (c_p T0 M), from where it can have a second chance of escaping into the universe, by the same or by a different channel. To say it in another way, the best-case climate sensitivity of CO2 is zero. The optical length of CO2 in the atmosphere is about 25 m. That is, 25 meters up in the air the radiation emitted by the surface in the spectrum of CO2 is already attenuated by a factor e. In this 25-m layer resides only 1/1773th part of the atmosphere (and of the CO2). The total transmission of the entire atmosphere is thus exp(−1773), which any calculator shows as zero. Doubling the CO2 in the atmosphere will have no measurable effect: exp(−3546) ≈ 0. Heat had no chance of escaping to the universe through this channel, and now even less so. As long as there is a sliver of the emission spectrum for which the atmosphere is transparent, the effect of doubling agents such as CO2, whose spectrum is close to saturation, is close to nil. This is the lower limit of the effect.
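The transmission estimate at the end of this paragraph is a straightforward exponential, and it indeed underflows to zero in double-precision floating point:

```python
import math

# Attenuation through n stacked e-folding layers is exp(-n).
# One CO2 optical length is ~25 m; the text counts ~1773 of them
# in the full atmospheric column.
n_layers = 1773
transmission = math.exp(-n_layers)
print(transmission)   # underflows to 0.0 (smallest double is ~5e-324)

# Doubling CO2 halves the optical length, giving exp(-3546): also 0.0.
print(math.exp(-2 * n_layers))
```

Any value of exp(−x) with x beyond about 745 is below the smallest representable double, so "any calculator shows zero" for both the single and doubled CO2 cases.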

Sevilla-Córdoba-Granada

In this thermodynamic analysis, the temperature at a point on the planet is, for a certain radiative input (1 − a)S, mainly determined by the altitude z. The radiative greenhouse effect states that it mainly depends on the total amount of carbon dioxide floating above the point. To test these hypotheses, we can look at cities with the same or similar radiative solar input, at the same latitude on the planet, but at different altitudes. Without doing an exhaustive study, we take as an example three neighboring cities in the south of Spain, namely Sevilla, Córdoba and Granada, each at a different elevation (Table 5).

The question now is: why is Granada not much warmer in 2019 than Sevilla was in 1951? It is actually still colder. This seriously undermines the idea that carbon dioxide determines the temperature on our planet. Figure 14 plots the temperature of these cities versus the carbon content above them. The linear-regression quality parameter is R² = 0.21; that is, temperature is not well correlated with [CO2].
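The Figure 14 regression can be reproduced in a few lines. The (CO2 column, temperature) pairs below are hypothetical placeholders, since the actual Table 5 values are not reproduced in this post:

```python
# Least-squares sketch of the Figure 14 regression. The data points are
# HYPOTHETICAL placeholders, not the actual Table 5 values.
def linear_fit_r2(xs, ys):
    """Return (slope, R^2) of the ordinary least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - slope * x - intercept) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, 1.0 - ss_res / ss_tot

co2_column = [330.0, 360.0, 410.0]  # hypothetical [CO2] column above each city
mean_temp = [18.0, 17.2, 17.4]      # hypothetical annual mean temperatures (C)
slope, r2 = linear_fit_r2(co2_column, mean_temp)
print(slope, r2)  # a weak correlation, in the spirit of the quoted R^2 = 0.21
```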

Venus

When we look at reality, the greenhouse effect on Venus is enormous. Comparing the real temperature of 737 K with the blackbody temperature of only 226.6 K, based on the solar radiance and albedo, we determine that the emissivity of Venus is very small: 0.00894. For these values the radiation no longer comes from the surface at all, but from the atmosphere instead, which is very opaque, with an optical depth τ of about 81 (see Figure 12). Venus connects to the universe at a high altitude in the atmosphere. The heat finds its way to the surface by thermodynamic means, resulting in a high surface temperature.
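The quoted emissivity follows directly from the Stefan-Boltzmann relation and the two temperatures above:

```python
# Effective emissivity of Venus from the Stefan-Boltzmann relation:
#   eps * sigma * T_surface^4 = sigma * T_blackbody^4
# => eps = (T_blackbody / T_surface)^4
T_blackbody = 226.6  # K, from solar radiance and albedo (value from the text)
T_surface = 737.0    # K, observed surface temperature

emissivity = (T_blackbody / T_surface) ** 4
print(round(emissivity, 5))  # 0.00894
```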

Geological Time Scales

Figure 16. (a) (Holocene) ten-thousand-year time scale data of temperature (blue curve) and [CO2] (purple curve) of Greenland; (b) a correlation curve shows an absence of correlation between the two on this time scale. Plots made with the help of WebPlotDigitizer 4.2 [29] from a plot based on GISP 2 (temperature) and EPICA Dome C (CO2) data, found at Climate4You [31].

Another experiment nature throws at us is the geological time scale, from times long before humans appeared on this planet. Figure 15 shows these data, and it is obvious that on this time scale there is no correlation between carbon-dioxide concentrations in the air and surface temperatures. In fact, a fit to the data yields a correlation of dT/d[CO2] = −0.43 mK/ppm, which is probably accidental, because we do not know of any theory that might explain an inverse correlation between the two quantities. We see a similar non-correlation in the Holocene data of Greenland (Figure 16). Here a (pseudo)correlation of dT/d[CO2] = −0.43 mK/ppm is found, two orders of magnitude more than in the paleontological data of Figure 15. These results undermine the hypothesis that CO2 is the only climate forcing, as some climatologists claim.

Other Effects: Feedback, Delay, Water

Section 6 augments the model by including feedback and secondary effects, such as water. It also tries to establish relaxation times of the system.

Delay

We might think that the atmosphere has not yet had time to reach the new equilibrium. The observed effects would then always be less than the calculated ones. For the greenhouse effect, however, we need to explain a signal that is larger than theory predicts. Allowing that our calculations may be wrong, we can still estimate how long it would take to reach equilibrium and turn the observed short-term contemporary 10.2 mK/ppm into the observed long-term 95 mK/ppm. The specific heat capacity of air is c_p ≈ 1.0 kJ/(K·kg). The surface pressure is 1013.2 hPa, so the column mass of the atmosphere is M_0 = P/g = 10.3 × 10³ kg/m². We found a radiative forcing of w = 8.6 mW/m² per ppm (Equation (61)) and a temperature effect of ΔT/Δ[CO2] = 3.3 mK/ppm (Equation (60)). The characteristic temperature adjustment time of just the atmosphere alone is then τ = c_p M_0 (ΔT/Δ[CO2])/w ≈ 46 days. That is about a month and a half, a value very similar to the one we found empirically in the phase shift between yearly-periodic solar radiation and temperature data, namely about 1.2 months [32], and to the 23 days obtained from a simple relaxation analysis of daily temperature variations [1]. We can thus exclude any substantial delay in the greenhouse effect [CO2] → T on the time scales of the contemporary and ice-core-drilling datasets, 60 a and 600 ka, respectively. On the other hand, we can expect long delays between T and [CO2] in the framework of Henry’s Law. Imagine the atmosphere warms up for some reason (perhaps solar activity). This warmed-up air must then heat the relevant layer of the ocean and expose that layer to the surface, where the surplus CO2 can outgas.
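The time-constant estimate above can be checked with a few lines of arithmetic. The standard dry-air heat capacity c_p ≈ 1.005 kJ/(K·kg) is assumed here, since that value reproduces the quoted 46 days; the remaining numbers are taken from the text:

```python
# Characteristic temperature adjustment time of the atmosphere:
#   tau = c_p * M0 * (dT/d[CO2]) / (dw/d[CO2])
c_p = 1.005e3        # J/(K*kg), specific heat of dry air (assumed standard value)
M0 = 10.3e3          # kg/m^2, column mass of the atmosphere (P/g)
dT_per_ppm = 3.3e-3  # K/ppm, temperature effect (Equation (60))
w_per_ppm = 8.6e-3   # W/m^2 per ppm, radiative forcing (Equation (61))

tau_days = c_p * M0 * dT_per_ppm / w_per_ppm / 86400
print(tau_days)  # about 46 days
```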

Water Effect

Water has several effects. First, it changes the specific heat of the air (c_p) and thus the lapse rate of the atmosphere, Equation (36). This slightly lowers the temperature at the surface. As a secondary effect, it dramatically increases the absorption cross-section for infrared, and this changes the ground temperature T0. The feedback effect of water on outgoing radiation is beyond the Barkhausen criterion, and thus any addition of water to the atmosphere, however tiny, would result in a runaway scenario. However, this effect is fully canceled by the albedo effect, resulting in a net negative effect of water on the temperature.

Conclusions

We have analyzed the greenhouse effect here using fully analytical techniques, without resorting anywhere to finite-element calculations. This gave important insight into the phenomenon. An important conclusion is that an analysis in terms of radiation balances only cannot explain the situation in the atmosphere. In the extreme case, a differential equation of layers with absorption coefficients, etc., gave the same results as a much simpler two-box mixed-chamber model. However, the underlying assumptions in these calculations are not physical.

Therefore we set out to model the greenhouse effect ab initio, and came up with the thermodynamic-radiation model. The atmosphere is close to thermodynamic equilibrium and based on that we can calculate where and how radiation is absorbed and emitted. This model can explain phenomenologically and analytically how big the effect of the atmosphere is, specifically Equations (56) and (58).

Continuing with the reasoning, we find that the alleged greenhouse effect cannot explain the empirical data; orders of magnitude are missing. Henry’s Law, the outgassing of the oceans, can easily explain all the observed phenomena.

Moreover, the greenhouse hypothesis—as presented here—cannot explain the atmosphere on Mars, nor can it explain the geological data, where no correlation between [CO2] and temperature is observed. Nor can it explain why a different correlation is observed in contemporary data of the last 60 years compared to historical data (600 thousand years).

We thus reject the anthropogenic global warming (AGW) hypothesis, both on empirical grounds and on the basis of theoretical analysis.

Oceans Erase Last Summer’s Warming

The best context for understanding decadal temperature changes comes from the world’s sea surface temperatures (SST), for several reasons:

  • The ocean covers 71% of the globe and drives average temperatures;
  • SSTs have a constant water content (unlike air temperatures), so they give a better reading of heat content variations;
  • A major El Nino was the dominant climate feature in recent years.

HadSST is generally regarded as the best of the global SST data sets, and so the temperature story here comes from that source, the latest version being HadSST3.  More on what distinguishes HadSST3 from other SST products at the end.

The Current Context

The chart below shows SST monthly anomalies as reported in HadSST3 starting in 2015 through January 2020.
A global cooling pattern is seen clearly in the Tropics since its peak in 2016, joined by NH and SH cycling downward since 2016.  In 2019 all regions had been converging to reach nearly the same value in April.

Then NH rose exceptionally by almost 0.5C over the four summer months, in August exceeding previous summer peaks in NH since 2015.  In the 4 succeeding months, that warm NH pulse reversed sharply.  The January NH anomaly is little changed from December.  SH and Tropics SSTs bumped upward since September, but despite that the global anomaly dropped a little due to strong NH cooling. The Global anomaly is now the same as in June 2019.

Note that higher temps in 2015 and 2016 were first of all due to a sharp rise in Tropical SST, beginning in March 2015, peaking in January 2016, and steadily declining back below its beginning level. Secondly, the Northern Hemisphere added three bumps on the shoulders of Tropical warming, with peaks in August of each year.  A fourth NH bump was lower and peaked in September 2018.  As noted above, a fifth peak in August 2019 exceeded the four previous upward bumps in NH.

And as before, note that the global release of heat was not dramatic, due to the Southern Hemisphere offsetting the Northern one.  The major difference between now and 2015-2016 is the absence of Tropical warming driving the SSTs.

A longer view of SSTs

The graph below  is noisy, but the density is needed to see the seasonal patterns in the oceanic fluctuations.  Previous posts focused on the rise and fall of the last El Nino starting in 2015.  This post adds a longer view, encompassing the significant 1998 El Nino and since.  The color schemes are retained for Global, Tropics, NH and SH anomalies.  Despite the longer time frame, I have kept the monthly data (rather than yearly averages) because of interesting shifts between January and July.

1995 is a reasonable (ENSO neutral) starting point prior to the first El Nino.  The sharp Tropical rise peaking in 1998 is dominant in the record, starting Jan. ’97 to pull up SSTs uniformly before returning to the same level Jan. ’99.  For the next 2 years, the Tropics stayed down, and the world’s oceans held steady around 0.2C above 1961 to 1990 average.

Then comes a steady rise over two years to a lesser peak Jan. 2003, but again uniformly pulling all oceans up around 0.4C.  Something changes at this point, with more hemispheric divergence than before. Over the 4 years until Jan 2007, the Tropics go through ups and downs, NH a series of ups and SH mostly downs.  As a result the Global average fluctuates around that same 0.4C, which also turns out to be the average for the entire record since 1995.

2007 stands out with a sharp drop in temperatures, so that Jan. ’08 matches the low in Jan. ’99, but starting from a lower high. The oceans all decline as well, until temps build to a peak in 2010.

Now again a different pattern appears.  The Tropics cool sharply to Jan 11, then rise steadily for 4 years to Jan 15, at which point the most recent major El Nino takes off.  But this time in contrast to ’97-’99, the Northern Hemisphere produces peaks every summer pulling up the Global average.  In fact, these NH peaks appear every July starting in 2003, growing stronger to produce 3 massive highs in 2014, 15 and 16.  NH July 2017 was only slightly lower, and a fifth NH peak still lower in Sept. 2018.

The highest summer NH peak came in 2019, only this time the Tropics and SH are offsetting rather than adding to the warming. Since 2014 SH has played a moderating role, offsetting the NH warming pulses. Now in January 2020 last summer’s unusually high NH SSTs have been erased. (Note: these are high anomalies on top of the highest absolute temps in the NH.)

What to make of all this? The patterns suggest that in addition to El Ninos in the Pacific driving the Tropic SSTs, something else is going on in the NH.  The obvious culprit is the North Atlantic, since I have seen this sort of pulsing before.  After reading some papers by David Dilley, I confirmed his observation of Atlantic pulses into the Arctic every 8 to 10 years.

But the peaks coming nearly every summer in HadSST require a different picture.  Let’s look at August, the hottest month in the North Atlantic from the Kaplan dataset.
The AMO Index is from Kaplan SST v2, the unaltered and not detrended dataset. By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N. The graph shows warming began after 1992 up to 1998, with a series of matching years since. Because the N. Atlantic has partnered with the Pacific ENSO recently, let’s take a closer look at some AMO years in the last 2 decades.
This graph shows monthly AMO temps for some important years. The Peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note the red 2018 line is at the bottom of all these tracks. The black line shows that 2019 began slightly cooler, then tracked 2018, then rose to match previous summer pulses, before dropping the last four months to be slightly above 2018 and below other years.

Summary

The oceans are driving the warming this century.  SSTs took a step up with the 1998 El Nino and have stayed there with help from the North Atlantic, and more recently the Pacific northern “Blob.”  The ocean surfaces are releasing a lot of energy, warming the air, but eventually will have a cooling effect.  The decline after 1937 was rapid by comparison, so one wonders: How long can the oceans keep this up? If the pattern of recent years continues, NH SST anomalies may rise slightly in coming months, but once again ENSO, which has weakened, will probably determine the outcome.

Footnote: Why Rely on HadSST3

HadSST3 is distinguished from other SST products because HadCRU (Hadley Climatic Research Unit) does not engage in SST interpolation, i.e. infilling estimated anomalies into grid cells lacking sufficient sampling in a given month. From reading the documentation and from queries to Met Office, this is their procedure.

HadSST3 imports data from gridcells containing ocean, excluding land cells. From past records, they have calculated daily and monthly average readings for each grid cell for the period 1961 to 1990. Those temperatures form the baseline from which anomalies are calculated.

In a given month, each gridcell with sufficient sampling is averaged for the month and then the baseline value for that cell and that month is subtracted, resulting in the monthly anomaly for that cell. All cells with monthly anomalies are averaged to produce global, hemispheric and tropical anomalies for the month, based on the cells in those locations. For example, Tropics averages include ocean grid cells lying between latitudes 20N and 20S.

Gridcells lacking sufficient sampling that month are left out of the averaging, and the uncertainty from such missing data is estimated. IMO that is more reasonable than inventing data to infill. And it seems that the Global Drifter Array displayed in the top image is providing more uniform coverage of the oceans than in the past.
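The procedure described above can be sketched as follows. This is an illustrative toy, not the actual HadSST code; the cell IDs and values are made up:

```python
# Per-cell anomaly: monthly mean minus that cell's 1961-1990 baseline for the
# same calendar month. Cells lacking sufficient sampling simply do not report.
def monthly_anomalies(cell_means, baselines, month):
    """cell_means: {cell_id: mean SST this month}; baselines: {cell_id: {month: value}}."""
    return {cell: cell_means[cell] - baselines[cell][month]
            for cell in cell_means if cell in baselines}

def region_average(anomalies, cells_in_region):
    # Under-sampled cells are absent from `anomalies` and left out of the average.
    vals = [anomalies[c] for c in cells_in_region if c in anomalies]
    return sum(vals) / len(vals) if vals else None

baselines = {"A": {1: 20.0}, "B": {1: 18.0}, "C": {1: 26.0}}
january_means = {"A": 20.4, "B": 18.1}        # cell "C" under-sampled this month
anoms = monthly_anomalies(january_means, baselines, month=1)
print(region_average(anoms, ["A", "B", "C"]))  # (0.4 + 0.1) / 2 = 0.25
```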


USS Pearl Harbor deploys Global Drifter Buoys in Pacific Ocean

Land Air Temps Continue Cooling in January


With apologies to Paul Revere, this post is on the lookout for cooler weather with an eye on both the Land and the Sea.  UAH has updated their tlt (temperatures in lower troposphere) dataset for January 2020.  Previously I have done posts on their reading of ocean air temps as a prelude to updated records from HADSST3. This month also has a separate graph of land air temps because the comparisons and contrasts are interesting as we contemplate possible cooling in coming months and years.

Presently sea surface temperatures (SST) are the best available indicator of heat content gained or lost from earth’s climate system.  Enthalpy is the thermodynamic term for total heat content in a system, and humidity differences in air parcels affect enthalpy.  Measuring water temperature directly avoids distorted impressions from air measurements.  In addition, ocean covers 71% of the planet surface and thus dominates surface temperature estimates.  Eventually we will likely have reliable means of recording water temperatures at depth.

Recently, Dr. Ole Humlum reported from his research that air temperatures lag 2-3 months behind changes in SST.  He also observed that changes in CO2 atmospheric concentrations lag behind SST by 11-12 months.  This latter point is addressed in a previous post Who to Blame for Rising CO2?
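The kind of lag Humlum describes can be illustrated with a toy lagged-correlation calculation. The series below are synthetic sinusoids with a known 3-month shift; a real analysis would of course use the actual SST and air-temperature records:

```python
import math

def best_lag(x, y, max_lag):
    """Return the lag (months y trails x) with the highest Pearson correlation."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        sa = math.sqrt(sum((u - ma) ** 2 for u in a))
        sb = math.sqrt(sum((v - mb) ** 2 for v in b))
        return cov / (sa * sb)
    scores = {lag: corr(x[:len(x) - lag], y[lag:]) for lag in range(max_lag + 1)}
    return max(scores, key=scores.get)

months = range(120)
sst = [math.sin(2 * math.pi * t / 36) for t in months]        # synthetic SST
air = [math.sin(2 * math.pi * (t - 3) / 36) for t in months]  # trails SST by 3 months
print(best_lag(sst, air, max_lag=12))  # 3
```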

After a technical enhancement to HadSST3 delayed March and April updates, May resumed a pattern of HadSST updates mid month.  For comparison we can look at lower troposphere temperatures (TLT) from UAHv6 which are now posted for January. The temperature record is derived from microwave sounding units (MSU) on board satellites like the one pictured above. Recently there was a change in UAH processing of satellite drift corrections, including dropping one platform which can no longer be corrected. The graphs below are taken from the new and current dataset.

The UAH dataset includes temperature results for air above the oceans, and thus should be most comparable to the SSTs. There is the additional feature that ocean air temps avoid Urban Heat Islands (UHI).  The graph below shows monthly anomalies for ocean air temps since January 2015. After a June rise in ocean air temps, all regions dropped back down to May levels in July and August.  A spike occurred in September, followed by plummeting October ocean air temps in the Tropics and SH. In November that drop partly reversed, then leveled off slightly downward with continued cooling in NH. 2020 starts with NH warming slightly, still cooler than the previous months back to September.  SH and Tropics also rose slightly, resulting in a Global rise.

Land Air Temperatures Cooling in Seesaw Pattern

We sometimes overlook that in climate temperature records, while the oceans are measured directly with SSTs, land temps are measured only indirectly.  The land temperature records at surface stations sample air temps at 2 meters above ground.  UAH gives tlt anomalies for air over land separately from ocean air temps.  The graph updated for January 2020 is below.

Here we have fresh evidence of the greater volatility of the Land temperatures, along with an extraordinary departure by SH land.  Despite the small amount of SH land, it spiked in July, then dropped in August so sharply along with the Tropics that it pulled the global average downward against slight warming in NH.  In November SH jumped up beyond any month in this period.  Despite this spike, along with a rise in the Tropics, NH land temps dropped sharply.  The larger NH land area pulled the Global average downward.  December reversed the situation, with SH dropping as sharply as it rose, while NH rose to the same anomaly, pulling the Global up slightly.

2020 starts with sharp drops in both SH and NH, with the Global anomaly dropping as a result.  The behavior of SH land temps is puzzling, to say the least.  It is also a reminder that global averages can conceal important underlying volatility.

The longer term picture from UAH is a return to the mean for the period starting with 1995.  2019 average rose but currently lacks any El Nino to sustain it.

TLTs include mixing above the oceans and probably some influence from nearby more volatile land temps.  Clearly NH and Global land temps have been dropping in a seesaw pattern, more than 1C lower than the 2016 peak, prior to these last several months. TLT measures started the recent cooling later than SSTs from HadSST3, but are now showing the same pattern.  It seems obvious that despite the three El Ninos, their warming has not persisted, and without them it would probably have cooled since 1995.  Of course, the future has not yet been written.

Historic Climate Cycles (glaciers added)

Update: February 7, 2020

This is an update to a post The Ever Changing Climate with a new slide showing fluctuating Alpine glaciers over several thousand years.  Context below is from the previous post along with the new content.

Raymond of RiC-Communications studio commented on a recent post and made an offer to share here some graphics on CO2 for improving public awareness.  He produced 12 interesting slides, which are presented in the post Here’s Looking at You, CO2.   I find them straightforward and useful, and appreciate his excellent work on this. The project title is a link to RiC-Communications. This post presents the five initial charts he has so far created on a second theme, The World of Climate Change, and adds another regarding Alpine glacier studies by two prominent geologists.  In addition, Raymond was able to consult the work of these two experts in their native German.

This project is The World of Climate Change

Infographics can be helpful in making things simple to understand. Climate change is a complex topic with a lot of information and statistics. These simple step-by-step charts are meant to help readers understand what is occurring naturally and what could be caused by humans, what is cause for alarm and what is not. Only through learning is it possible to get the big picture so as to make the right decisions for the future.

– N° 1 600 million years of global temperature change
– N° 2 Earth‘s temperature record for the last 400,000 years
– N° 3 Holocene period and average northern hemispheric temperatures
– N° 4 140 years of global mean temperature
– N° 5 120 m of sea level rise over the past 20‘000 years
– N° 6 Eastern European alpine glacier history during the Holocene period.



Summer Temperatures (May to September): A rise in temperature during a warming period will result in a glacier losing more surface area or vanishing completely. This can happen very rapidly, in only a few years, or over a longer period of time. If temperatures drop during a cooling period and summer temperatures are too low, glaciers will begin to grow and advance with each season. This too can happen very rapidly or over a longer period of time. Special thanks to Prof. em. Christian Schlüchter (Quartärgeologie, Umweltgeologie), Institut für Geologie, Universität Bern. His work is on the Western Alps, and he was kind enough to help Raymond make this graphic as correct as possible.

Comment:

This project will explore information concerning how aspects of the world climate system have changed in the past up to the present time.  Understanding the range of historical variation and the factors involved is essential for anticipating how future climate parameters might fluctuate.

For example:

The Climate Story (Illustrated) looks at the temperature record.

H20 the Gorilla Climate Molecule looks at precipitation patterns.

Data vs. Models #2: Droughts and Floods looks at precipitation extremes.

Data vs. Models #3: Disasters looks at extreme weather events.

Data vs. Models #4: Climates Changing looks at boundaries of defined climate zones.

And in addition, since Chart #5 features the Statue of Liberty, here are the tide gauge observations there compared to climate model projections:


The Ever Changing Climate

Update: January 31, 2020

This is an update to a post Simple Science 2: World of Climate Change with two new slides and a revised sequence. Context below is from the previous with the new content.

Raymond of RiC-Communications studio commented on a recent post and made an offer to share here some graphics on CO2 for improving public awareness.  He has produced 12 interesting slides, which are presented in the post Here’s Looking at You, CO2.  This post presents the three initial charts he has so far created on a second theme, The World of Climate Change.  I find them straightforward and useful, and appreciate his excellent work on this. The project title is a link to RiC-Communications. (For some reason I had problems getting my Opera browser to load the revised links, but Edge worked fine.)

This project is The World of Climate Change

Infographics can be helpful in making things simple to understand. Climate change is a complex topic with a lot of information and statistics. These simple step-by-step charts are here to help readers understand what is occurring naturally and what could be caused by humans, what is cause for alarm and what is not. Only through learning is it possible to get the big picture so as to make the right decisions for the future.

– N° 1 600 million years of global temperature change
– N° 2 Earth‘s temperature record for the last 400,000 years
– N° 3 Holocene period and average northern hemispheric temperatures
– N° 4 140 years of global mean temperature
– N° 5 120 m of sea level rise over the past 20‘000 years

Comment:

This project will explore information concerning how aspects of the world climate system have changed in the past up to the present time.  Understanding the range of historical variation and the factors involved is essential for anticipating how future climate parameters might fluctuate.

For example:

The Climate Story (Illustrated) looks at the temperature record.

H20 the Gorilla Climate Molecule looks at precipitation patterns.

Data vs. Models #2: Droughts and Floods looks at precipitation extremes.

Data vs. Models #3: Disasters looks at extreme weather events.

Data vs. Models #4: Climates Changing looks at boundaries of defined climate zones.

And in addition, since Chart #5 features the Statue of Liberty, here are the tide gauge observations there compared to climate model projections: