Pushing for Climate Diversity

Amidst all the concerns for social diversity, let’s raise a cry for scientific diversity. No, I am not referring to the gender or racial identities of people doing science, but rather to the diversity of climates and their divergent patterns over time. The actual climate realities affecting people’s lives are hidden within global averages and abstractions. A previous post, Concurrent Warming and Cooling, presented research findings that on long time scales maritime climates can shift toward inland patterns, including both colder winters and warmer summers.

It occurred to me that Frank Lansner had done studies of weather stations showing different trends depending on whether or not they are exposed to ocean breezes. That led me to his recent publication Temperature trends with reduced impact of ocean air temperature, Lansner and Pedersen, March 21, 2018. Excerpts in italics with my bolds.

Abstract

Temperature data 1900–2010 from meteorological stations across the world have been analyzed and it has been found that all land areas generally have two different valid temperature trends. Coastal stations and hill stations facing ocean winds are normally more warm-trended than the valley stations that are sheltered from dominant ocean winds.

Thus, we found that in any area with variation in the topography, we can divide the stations into the more warm trended ocean air-affected stations, and the more cold-trended ocean air-sheltered stations. We find that the distinction between ocean air-affected and ocean air-sheltered stations can be used to identify the influence of the oceans on land surface. We can then use this knowledge as a tool to better study climate variability on the land surface without the moderating effects of the ocean.

We find a lack of warming in the ocean air sheltered temperature data – with less impact of ocean temperature trends – after 1950. The lack of warming in the ocean air sheltered temperature trends after 1950 should be considered when evaluating the climatic effects of changes in the Earth’s atmospheric trace amounts of greenhouse gases as well as variations in solar conditions.

As a contrast to the ocean air-sheltered (OAS) stations, we compare with what we designate as ocean air-affected (OAA) stations, which are more exposed to the influence of the ocean, see Figure 1. The optimal OAA locations are defined as positions with potential first contact with ocean air. In general, stations where the location offers no shelter in the directions of predominant winds are best categorized as OAA stations.

Conversely, the optimal OAS area is a lower point surrounded by mountains in all directions. In this case, the existence of a predominant wind direction is not needed. In locations with a predominant wind direction, the leeward side of the mountains can also form an OAS region.

Figure 2. The optimal OAA and OAS locations with respect to dominating wind direction.

A total of 10 areas were chosen for this work to present the temperature trends of OAS areas (typically valley areas) and OAA areas from Scandinavia, Central Siberia, Central Balkan, Midwest USA, Central China, Pakistan/North India, the Sahel Area, Southern Africa, Central South America, and Southeast Australia. In this work, we have only considered an area as “OAS” or “OAA” if it comprises at least eight independent temperature sets. In the following, temperature data 1900–2010 from individual areas are discussed.

As an example, we show in Figure 3 the results for the Scandinavian area where we have used a total of 49 OAS stations and 18 OAA stations. The large number of stations available is due to the use of meteorological yearbooks as a supplement to data sources such as ECA&D climate data and the Nordklim database.

Figure 3. OAS and OAA temperature stations, Scandinavia.

The upper set of curves is from the OAS areas: Here the blue lines show one-year mean temperature averages for each temperature station, the red lines show the average of all stations of the area, and the thick black line is a five-year running mean of the station average. The reference period is 1951–1980. The middle set of curves is from the OAA areas. Here the orange lines show one-year mean temperature averages for each temperature station, the red lines show average of the stations of the area, and the thick black line is a five-year running mean of the station average. The reference period is 1951–1980.
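The curve construction described above can be sketched in a few lines. This is not the authors’ code, just a minimal illustration of the two steps: expressing each station’s annual means as anomalies against a 1951–1980 reference period, then smoothing with a centered five-year running mean.

```python
def anomalies(yearly_means, years, ref_start=1951, ref_end=1980):
    """Express one station's annual mean temperatures as anomalies
    relative to the average over the reference period (here 1951-1980)."""
    ref = [t for t, y in zip(yearly_means, years) if ref_start <= y <= ref_end]
    baseline = sum(ref) / len(ref)
    return [t - baseline for t in yearly_means]

def running_mean(series, window=5):
    """Centered running mean (the thick black curve); years without a
    full window at the edges are omitted."""
    half = window // 2
    return [sum(series[i - half:i + half + 1]) / window
            for i in range(half, len(series) - half)]
```

Averaging the anomaly series of all stations in an area would give the red curve; smoothing that average gives the thick black curve.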

On the lower set of curves labeled “OAS vs. OAA areas,” a comparison of the two data sets of stations is shown. The blue lines are the one-year average of OAS stations of the area and the red lines are the one-year average of OAA stations of the area. The reference period is 1995–2010. We note that these Scandinavian OAS stations are not well shielded from easterly winds.

Although easterly winds are not frequent (see Figure 2), the OAS area used cannot be characterized as an optimal OAS area. Despite this, we find a difference between the OAS and OAA area temperature data. While the general five-year running mean temperature curves (left panel in Figure 3) show resemblance in warming/cooling cycles, the OAA stations show less variation than the OAS stations.

We also find the absolute temperature anomalies for the Scandinavian OAS areas deviate from the OAA area with the OAS stations showing less warming than the OAA stations during the 20th century. For the years 1920–1950, we thus find temperatures in the OAS area to be up to 1 K warmer than temperature in the OAA area. In recent years, there is a closer agreement between OAS and OAA trends and even though the Scandinavian OAS data generally are warmer than OAA data for 1920–1950, we also note that in some very cold years, OAS temperatures are slightly colder than the OAA temperatures.

The paper presents all ten regions analyzed, but I will include here the USA example to see how it compares with other depictions of US regions. For example, the map at the top shows the dramatic difference between temperature records at Eastern versus Western US stations. Here is the assessment from Lansner and Pedersen. Note the topographical realities.

For the USA (Figure 6), we defined the OAS area as consisting of eight boxes, each of size 5° × 5°. The boxes were defined as 40–45N × 100–95W, 40–45N × 95–90W, 35–40N × 100–95W, 35–40N × 95–90W, 35–40N × 90–85W, 35–30N × 100–95W, 35–30N × 95–90W, and 35–30N × 90–85W. A total of 236 temperature stations were used from this area. Full 5° × 5° grids were not found to be suited as OAA areas, but 27 stations indicated on the map were used for the OAA data set. All data were taken from GHCN v2 raw data. The OAS area in the US Midwest is well protected against westerly oceanic (Pacific) winds due to the Rocky Mountains. The US Midwest is also to some degree sheltered against easterly winds due to the Appalachian mountain range. Again the temperature trends from the OAS area as defined above show the 1920–1955 period in most years to be around 1 K warmer than temperature trends from the OAA areas.
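The box definitions above amount to a simple point-in-rectangle test. A hypothetical sketch (the function name and example coordinates are my own; latitudes are degrees north, longitudes degrees west):

```python
# The eight 5-degree Midwest OAS boxes listed above, stored as
# (lat_min, lat_max, lon_min, lon_max), longitudes in degrees west.
OAS_BOXES = [
    (40, 45, 95, 100), (40, 45, 90, 95),
    (35, 40, 95, 100), (35, 40, 90, 95), (35, 40, 85, 90),
    (30, 35, 95, 100), (30, 35, 90, 95), (30, 35, 85, 90),
]

def in_oas_area(lat_n, lon_w):
    """True if a station at (lat N, lon W) falls inside any of the boxes."""
    return any(lat0 <= lat_n <= lat1 and lon0 <= lon_w <= lon1
               for lat0, lat1, lon0, lon1 in OAS_BOXES)
```

For instance, a station at 42N, 97W lies inside the grid, while one at 42N, 80W (east of the Appalachians) does not.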

Summation

Figure 13. OAS and OAA temperature averages, Northern Hemisphere.

In Figure 13 we have combined average temperature trends for all seven NH OAS areas (blue curves) and OAA areas (brown curves), where areas are divided into low (0–45N) and high (45–90N) latitudes (dark colors are used for low and light colors for high latitudes). Both for the OAS areas and the OAA areas we see that the seven NH areas have similar development of temperature trends for 1900–2010. The larger variation in data from high latitudes (45–90N) is likely to reflect the Arctic amplification of temperature variations. OAS temperature stations further away from the Arctic (0–45N) seem to show less temperature increase during 1980–2010 than the OAS areas most affected by the Arctic (45–90N). The NH OAS data all reveal a period of heating of the Earth’s surface 1920–1950 that the OAA data do not reflect well.

Figure 19. OAS and OAA temperatures, all regions.

Conclusion

Bromley et al. raise shifts in seasonality as a factor in climate change. Now Lansner and Pedersen show differences in temperature trends due to ocean exposure, and also greater fluctuations at higher latitudes. Note that the cooling in the USA is replicated in the pattern shown worldwide in OAS regions. The key factor is the hotter temperatures prior to the 1950s appearing in OAS records but not in OAA records.

Despite all the clamor about global warming (or recently global cooling since the hiatus), it all depends on where you are.  Recognizing the diversity of local and regional climates is the sort of climate justice I can support.

Footnote:

I do not subscribe to Arctic “Amplification” to explain latitudinal differences. Since earth’s climate system is always working to transport energy from the equator to the poles, any additional heat shows up at higher latitudes through meridional transport. Previous posts have noted how anomalies give a distorted picture since temperatures are more volatile at higher (colder) NH latitudes.

See: Temperature Misunderstandings

Clive Best provides this animation of recent monthly temperature anomalies, which demonstrates how most variability in anomalies occurs over northern continents.


Concurrent Warming and Cooling

Rannoch Moor, Scotland

This post highlights recent interesting findings regarding past climate change in the NH, Scotland in particular. The purpose of the research was to better understand how glaciers could be retreating during the Younger Dryas Stadial (YDS), the abrupt cold period immediately preceding our Holocene epoch.

The lead researcher is Gordon Bromley, and the field work was done at the site of the last ice fields in the highlands of Scotland. 14C dating was used to estimate the timing of glacial events, such as when vegetation colonized these places. Bromley explains in the article Shells found in Scotland rewrite our understanding of climate change at Silicon Republic. Excerpts in italics with my bolds.

By analysing ancient shells found in Scotland, the team’s data challenges the idea that the period was an abrupt return to an ice age climate in the North Atlantic, by showing that the last glaciers there were actually decaying rapidly during that period.

The shells were found in glacial deposits, and one in particular was dated as being the first organic matter to colonise the newly ice-free landscape, helping to provide a minimum age for the glacial advance. While all of these shell species are still in existence in the North Atlantic, many are extinct in Scotland, where ocean temperatures are too warm.

This means that although winters in Britain and Ireland were extremely cold, summers were a lot warmer than previously thought, more in line with the seasonal climates of central Europe.

“There’s a lot of geologic evidence of these former glaciers, including deposits of rubble bulldozed up by the ice, but their age has not been well established,” said Dr Gordon Bromley, lead author of the study, from NUI Galway’s School of Geography and Archaeology.

“It has largely been assumed that these glaciers existed during the cold Younger Dryas period, since other climate records give the impression that it was a cold time.”

He continued: “This finding is controversial and, if we are correct, it helps rewrite our understanding of how abrupt climate change impacts our maritime region, both in the past and potentially into the future.”

The recent report is Interstadial Rise and Younger Dryas Demise of Scotland’s Last Ice Fields, by G. Bromley, A. Putnam, H. Borns Jr., T. Lowell, T. Sandford, and D. Barrell. First published: 26 April 2018. (my bolds)

Abstract

Establishing the atmospheric expression of abrupt climate change during the last glacial termination is key to understanding driving mechanisms. In this paper, we present a new 14C chronology of glacier behavior during late‐glacial time from the Scottish Highlands, located close to the overturning region of the North Atlantic Ocean. Our results indicate that the last pulse of glaciation culminated between ~12.8 and ~12.6 ka, during the earliest part of the Younger Dryas stadial and as much as a millennium earlier than several recent estimates. Comparison of our results with existing minimum‐limiting 14C data also suggests that the subsequent deglaciation of Scotland was rapid and occurred during full stadial conditions in the North Atlantic. We attribute this pattern of ice recession to enhanced summertime melting, despite severely cool winters, and propose that relatively warm summers are a fundamental characteristic of North Atlantic stadials.

Plain Language Summary

Geologic data reveal that Earth is capable of abrupt, high‐magnitude changes in both temperature and precipitation that can occur well within a human lifespan. Exactly what causes these potentially catastrophic climate‐change events, however, and their likelihood in the near future, remains frustratingly unclear due to uncertainty about how they are manifested on land and in the oceans. Our study sheds new light on the terrestrial impact of so‐called “stadial” events in the North Atlantic region, a key area in abrupt climate change. We reconstructed the behavior of Scotland’s last glaciers, which served as natural thermometers, to explore past changes in summertime temperature. Stadials have long been associated with extreme cooling of the North Atlantic and adjacent Europe and the most recent, the Younger Dryas stadial, is commonly invoked as an example of what might happen due to anthropogenic global warming. In contrast, our new glacial chronology suggests that the Younger Dryas was instead characterized by glacier retreat, which is indicative of climate warming. This finding is important because, rather than being defined by severe year‐round cooling, it indicates that abrupt climate change is instead characterized by extreme seasonality in the North Atlantic region, with cold winters yet anomalously warm summers.

The complete report is behind a paywall, but a 2014 paper by Bromley discusses the evidence and analysis behind these conclusions: Younger Dryas deglaciation of Scotland driven by warming summers. Excerpts with my bolds.

Significance: As a principal component of global heat transport, the North Atlantic Ocean also is susceptible to rapid disruptions of meridional overturning circulation and thus widely invoked as a cause of abrupt climate variability in the Northern Hemisphere. We assess the impact of one such North Atlantic cold event—the Younger Dryas Stadial—on an adjacent ice mass and show that, rather than instigating a return to glacial conditions, this abrupt climate event was characterized by deglaciation. We suggest this pattern indicates summertime warming during the Younger Dryas, potentially as a function of enhanced seasonality in the North Atlantic.


Fig. 1. Surface temperature and heat transport in the North Atlantic Ocean.  The relatively mild European climate is sustained by warm sea-surface temperatures and prevailing southwesterly airflow in the North Atlantic Ocean (NAO), with this ameliorating effect being strongest in maritime regions such as Scotland. Mean annual temperature (1979 to present) at 2 m above surface (image obtained using University of Maine Climate Reanalyzer, http://www.cci-reanalyzer.org). Locations of Rannoch Moor and the GISP2 ice core are indicated.

Thus the Scottish glacial record is ideal for reconstructing late glacial variability in North Atlantic temperature (Fig. 1). The last glacier resurgence in Scotland—the “Loch Lomond Advance” (LLA)—culminated in a ∼9,500-km2 ice cap centered over Rannoch Moor (Fig. 2A) and surrounded by smaller ice fields and cirque glaciers.

Fig. 2. Extent of the LLA ice cap in Scotland and glacial geomorphology of western Rannoch Moor. (A) Maximum extent of the ∼9,500 km2 LLA ice cap and larger satellite ice masses, indicating the central location of Rannoch Moor. Nunataks are not shown. (B) Glacial-geomorphic map of western Rannoch Moor. Distinct moraine ridges mark the northward active retreat of the glacier margin (indicated by arrow) across this sector of the moor, whereas chaotic moraines near Lochan Meall a’ Phuill (LMP) mark final stagnation of ice. Core sites are shown, including those (K1–K3) of previous investigations (14, 15).

When did the LLA itself occur? We consider two possible resolutions to the paradox of deglaciation during the YDS. First, declining precipitation over Scotland due to gradually increasing North Atlantic sea-ice extent has been invoked to explain the reported shrinkage of glaciers in the latter half of the YDS (18). However, this course of events conflicts with recent data depicting rapid, widespread imposition of winter sea-ice cover at the onset of the YDS (9), rather than progressive expansion throughout the stadial.

Loch Lomond

Furthermore, considering the gradual active retreat of LLA glaciers indicated by the geomorphic record, our chronology suggests that deglaciation began considerably earlier than the mid-YDS, when precipitation reportedly began to decline (18). Finally, our cores contain lacustrine sediments deposited throughout the latter part of the YDS, indicating that the water table was not substantially different from that of today. Indeed, some reconstructions suggest enhanced YDS precipitation in Scotland (24, 25), which is inconsistent with the explanation that precipitation starvation drove deglaciation (26).

We prefer an alternative scenario in which glacier recession was driven by summertime warming and snowline rise. We suggest that amplified seasonality, driven by greatly expanded winter sea ice, resulted in a relatively continental YDS climate for western Europe, both in winter and in summer. Although sea-ice formation prevented ocean–atmosphere heat transfer during the winter months (10), summertime melting of sea ice would have imposed an extensive freshwater cap on the ocean surface (27), resulting in a buoyancy-stratified North Atlantic. In the absence of deep vertical mixing, summertime heating would be concentrated at the ocean surface, thereby increasing both North Atlantic summer sea-surface temperatures (SSTs) and downwind air temperatures. Such a scenario is analogous to modern conditions in the Sea of Okhotsk (28) and the North Pacific Ocean (29), where buoyancy stratification maintains considerable seasonal contrasts in SSTs. Indeed, Haug et al. (30) reported higher summer SSTs in the North Pacific following the onset of stratification than previously under destratified conditions, despite the growing presence of northern ice sheets and an overall reduction in annual SST. A similar pattern is evident in a new SST record from the northeastern North Atlantic, which shows higher summer temperatures during stadial periods (e.g., Heinrich stadials 1 and 2) than during interstadials on account of amplified seasonality (30).

Our interpretation of the Rannoch Moor data, involving the summer (winter) heating (cooling) effects of a shallow North Atlantic mixed layer, reconciles full stadial conditions in the North Atlantic with YDS deglaciation in Scotland. This scenario might also account for the absence of YDS-age moraines at several higher-latitude locations (12, 36–38) and for evidence of mild summer temperatures in southern Greenland (11). Crucially, our chronology challenges the traditional view of renewed glaciation in the Northern Hemisphere during the YDS, particularly in the circum-North Atlantic, and highlights our as yet incomplete understanding of abrupt climate change.

Summary

Several things are illuminated by this study. For one thing, glaciers grow or recede because of multiple factors, not just air temperature. The study noted that glaciers require precipitation (snow) in order to grow, but also melt under warmer conditions. For background on the complexities of glacier dynamics see Glaciermania.

Also, paleoclimatology relies on temperature proxies that respond to changes on multicentennial scales at best. 14C dating brings higher resolution to the table.

Finally, it is interesting to consider climate change with respect to seasonality. Bromley et al. observe that during the Younger Dryas, Scotland shifted from a moderate maritime climate to one with more seasonal extremes, like that of inland continental regions. In that light, what should we expect from cooler SSTs in the North Atlantic?

Note also that our modern warming period has been marked by the opposite pattern. Many NH temperature records show slight summer cooling along with somewhat stronger warming in winter, the net being the modest (fearful?) warming in estimates of global annual temperatures.

It seems that climate shifts are still events we see through a glass darkly.


EU Joins Alarmist Beehive

Update April 28, 2018

The European Union has decided to ban bee-killing pesticides

This post is about bees since they are also victims of science abuse by environmental activists, aided and abetted by the media. The full story is told by Jon Entine at Slate: Do Neonics Hurt Bees? Researchers and the Media Say Yes. The Data Do Not.
A new, landmark study provides plenty of useful information. If only we could interpret it accurately. Synopsis below.

Futuristic Nightmare Scenarios

“Neonicotinoid Pesticides Are Slowly Killing Bees.”

No, there is no consensus evidence that neonics are “slowly killing bees.” No, this study did not add to the evidence that neonics are driving bee health problems. And yet . . .

Unfortunately, and predictably, the overheated mainstream news headlines also generated a slew of even more exaggerated stories on activist and quack websites where undermining agricultural chemicals is a top priority (e.g., Greenpeace, End Times Headlines, and Friends of the Earth). The takeaway: The “beepocalypse” is accelerating. A few news outlets, such as Reuters (“Field Studies Fuel Dispute Over Whether Banned Pesticides Harm Bees”) and the Washington Post (“Controversial Pesticides May Threaten Queen Bees. Alternatives Could Be Worse.”), got the contradictory findings of the study and the headline right.

But based on the study’s data, the headline could just as easily have read: “Landmark Study Shows Neonic Pesticides Improve Bee Health”—and it would have been equally correct. So how did so many people get this so wrong?


Bouncing off a database can turn your perspective upside down.

Using Data as a Trampoline rather than Mining for Understanding

This much-anticipated two-year, $3.6 million study is particularly interesting because it was primarily funded by two major producers of neonicotinoids, Bayer Crop Science and Syngenta. They had no involvement with the analysis of the data. The three-country study was led by the Centre for Ecology and Hydrology, or CEH, in the U.K.—a group known for its skepticism of pesticides in general and neonics in particular.

The raw data—more than 1,000 pages of it (only a tiny fraction is reproduced in the study)—are solid. It’s a reservoir of important information for entomologists and ecologists trying to figure out the challenges facing bees. It’s particularly important because to date, the problem with much of the research on neonicotinoids has been the wide gulf between the findings from laboratory-based studies and field studies.

Some, but not all, results from lab research have claimed neonics cause health problems in honeybees and wild bees, endangering the world food supply. This has been widely and often breathlessly echoed in the popular media—remember the execrably reported Time cover story on “A World Without Bees.” But the doses and time of exposure have varied dramatically from lab study to lab study, so many entomologists remain skeptical of these sweeping conclusions. Field studies have consistently shown a different result—in the field, neonics seem to pose little or no harm. The overwhelming threat to bee health, entomologists now agree, is a combination of factors led by the deadly Varroa destructor mite, the miticides used to control them, and beekeeping practices. Relative to these factors, neonics are seen as relatively inconsequential.

The Bees are all right. Carry on.

Disparity between Field and Lab Research (sound familiar?)

Jon Entine addressed this disparity between field and lab research in a series of articles at the Genetic Literacy Project, and specifically summarized two dozen key field studies, many of which were independently funded and executed. This study was designed in part to bridge that gulf. And the devil is in the interpretation.

Overall, the data collected from 33 different fields covered 42 analyses and 258 endpoints—a staggering number. The paper only presented a sliver of that data—a selective glimpse of what the research, in its entirety, showed.

What patterns emerged when examining the entire data set? . . . In sum, of 258 endpoints, 238—92 percent—showed no effects. (Four endpoints didn’t yield data.) Only 16 showed effects. Negative effects showed up 9 times—3.5 percent of all outcomes; 7 showed a benefit from using neonics—2.7 percent.

As one scientist pointed out, in statistics there is a widely accepted standard that spurious results are generated about 5 percent of the time—which means by chance alone we would expect about 13 results meaninglessly showing up as positive or negative.
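The arithmetic behind that figure is a simple back-of-envelope check, assuming the conventional 5 percent false-positive rate applies to each endpoint that yielded data:

```python
endpoints = 258
no_data = 4                        # four endpoints yielded no data
alpha = 0.05                       # conventional false-positive rate
expected_by_chance = (endpoints - no_data) * alpha
print(round(expected_by_chance))   # → 13
```

The 16 observed effects (9 negative, 7 positive) are thus close to what chance alone would be expected to produce.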

Norman Carreck, science director of the International Bee Research Association, who was not part of either study, noted that the small number of significant effects “makes it difficult to draw any reliable conclusions.”

Moreover, Bees Are Not in Decline

The broader context of the bee health controversy is also important to understand; bees are not in sharp decline—not in North America nor in Europe, where neonics are under a temporary ban that shows signs of becoming permanent, nor worldwide. Earlier this week, Canada reported that its honeybee colonies grew 10 percent year over year and now stand at about 800,000. That’s a new record, and the growth tracks the increased use of neonics, which are critical to canola crops in Western Canada, where 80 percent of the nation’s honey originates.

Managed beehives in the U.S. had been in steady decline since the 1940s, as farm land disappeared to urbanization, but began stabilizing in the mid-1990s, coinciding with the introduction of neonicotinoids. They hit a 22-year high in the last count.

Global hive numbers have steadily increased since the 1960s except for two brief periods—the emergence of the Varroa mite in the late 1980s and the brief outbreak of colony collapse disorder, mostly in the U.S., in the mid-2000s.

Conclusion

So the bees, contrary to widespread popular belief, are actually doing all right in terms of numbers, although the Varroa mite remains a dangerous challenge. But still, a cadre of scientists well known for their vocal opposition to and lobbying against neonics have already begun trying to leverage the misinterpretation of the data. Within hours of the release of the study, advocacy groups opposed to intensive agricultural techniques had already begun weaponizing the misreported headlines.

But viewing the data from the European study in context makes it even more obvious that sweeping statements about the continuing beepocalypse and the deadly dangers to bees from pesticides, and neonicotinoids in particular, are irresponsible. That’s on both the scientists and the media.

Summary

The comparison with climate alarmism is obvious. The data are equivocal and subject to interpretation. Lab studies cannot be replicated in the real world. Activists make mountains out of molehills. Reasonable, balanced analysts are ignored or silenced. Media outlets proclaim the end of life as we know it to capture ears and eyeballs for advertisers, and to build their audiences (CNN: “All the fear all the time”). Business as usual for Crisis Inc.

Update April 29

Entine posted regarding the just-announced full EU ban: Global consensus finds neonicotinoids not driving honeybee health problems—Why is Europe so determined to ban them?

The whole article is enlightening, and especially this part describing the research protocols:

“The BRGD (Bee Research Guidance Document) insists that, in order to be considered valid, field experiments must demonstrate that 90 percent of the hive has been exposed to the neonic. The biggest problem with this is that there are generally no neonic residues detectable in crops by the time bees are foraging on them, and if there are residues, the amount is miniscule.”

“The authors of the 2017 CEH study (cited by innumerable reporters as condemning neonics) noted that neonic “residues were detected infrequently and rarely exceeded [1.5 parts per billion].” (To put 1.5 parts per billion in context, the EPA has determined that levels below 25 parts per billion have no effect at all on bees.)”

“At the same time, the bee-hive is a dynamic community and has a considerable capacity to detoxify itself from contaminants. So even the vanishingly small quantities brought into the hives by foragers might very well be wholly or partially eliminated before researchers could test for them.”

“The BRGD thus presents those researchers with a Catch 22: In order to meet the 90th percent exposure requirement they would have to massively over-treat their crops with neonics, creating a foraging environment that simply would not occur in real life. But this defeats the entire purpose of a Tier III field trial, which is to recreate realistic, controlled conditions to see how bees are affected—or not—in the real world. The BRGD requirement has the effect of ‘forcing’ certain pesticides to fail or the studies that don’t comply are invalidated.”


Only Two Energy Sources

This incisive (cutting to the core) essay is from Darrin Qualman: There are just two sources of energy. Excerpts below in italics with my bolds.

Our petro-industrial civilization produces and consumes a seemingly diverse suite of energies: oil, coal, ethanol, hydroelectricity, gasoline, geothermal heat, hydrogen, solar power, propane, uranium, wind, wood, dung. At the most foundational level, however, there are just two sources of energy. Two sources provide more than 99 percent of the power for our civilization: solar and nuclear. Every other significant energy source is a form of one of these two. Most are forms of solar.

When we burn wood we release previously captured solar energy. The firelight we see and the heat we feel are energies from sunlight that arrived decades ago. That sunlight was transformed into chemical energy in the leaves of trees and used to form wood. And when we burn that wood, we turn that chemical-bond energy back into light and heat. Energy from wood is a form of contemporary solar energy because it embodies solar energy mostly captured years or decades ago, as distinct from fossil energy sources such as coal and oil that embody solar energy captured many millions of years ago.

Straw and other biomass are a similar story: contemporary solar energy stored as chemical-bond energy then released through oxidation in fire. Ethanol, biodiesel, and other biofuels are also forms of contemporary solar energy (though subsidized by the fossil fuels used to create fertilizers, fuels, etc.).

Coal, natural gas, and oil products such as gasoline and diesel fuel are also, fundamentally, forms of solar energy, but not contemporary solar energy: fossil. The energy in fossil fuels is the sun’s energy that fell on leaves and algae in ancient forests and seas. When we burn gasoline in our cars, we are propelled to the corner store by ancient sunlight.

Wind power is solar energy. Heat from the sun creates air-temperature differences that drive air movements that can be turned into electrical energy by wind turbines, mechanical work by windmills, or geographic motion by sailing ships.

Hydroelectric power is solar energy. The sun evaporates and lifts water from oceans, lakes, and other water bodies, and that water falls on mountains and highlands where it is aggregated by terrain and gravity to form the rivers that humans dam to create hydro-power.

Of course, solar energy (both photovoltaic electricity and solar-thermal heat) is solar energy.

Approximately 86 percent of our non-food energy comes from fossil-solar sources such as oil, natural gas, and coal. Another 9 percent comes from contemporary solar sources, mostly hydro-electric, with a small but rapidly growing contribution from wind turbines and solar photovoltaic panels. In total, then, 95 percent of the energy we use comes from solar sources—contemporary or fossil. As is obvious upon reflection, the Sun powers the Earth.
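The essay's percentages can be checked with simple arithmetic. A minimal sketch, using the essay's round figures (86 percent fossil-solar, 9 percent contemporary solar); the nuclear and tidal/geothermal shares below are assumptions filled in only so the categories sum to 100, not figures from the essay:

```python
# Illustrative arithmetic only: the first two shares are the essay's round
# numbers; "nuclear" and "tidal / residual geothermal" are assumed remainders,
# not an authoritative energy inventory.
shares = {
    "fossil solar (oil, gas, coal)": 86.0,
    "contemporary solar (hydro, wind, PV, biomass)": 9.0,
    "nuclear": 4.5,                       # assumption: most of the remainder
    "tidal / residual geothermal": 0.5,   # assumption: the small balance
}

solar_total = (shares["fossil solar (oil, gas, coal)"]
               + shares["contemporary solar (hydro, wind, PV, biomass)"])

print(f"solar-derived total: {solar_total:.0f}%")  # 86 + 9 = 95, as the essay says
print(f"all categories:      {sum(shares.values()):.0f}%")
```

The point of the exercise is simply that once every source is traced back to sun or stars, the bookkeeping closes: solar-derived sources account for roughly 95 percent, and nuclear plus the minor tidal and residual-geothermal contributions cover the rest.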

The only major energy source that is not solar-based is nuclear power: energy from the atomic decay of unstable, heavy elements buried in the ground billions of years ago when our planet was formed. We utilize nuclear energy directly, in reactors, and also indirectly, when we tap geothermal energies (atomic decay provides 60-80 percent of the heat from within the Earth). Uranium and other radioactive elements were forged in the cores of stars that exploded before our Earth and Sun were created billions of years ago. The source for nuclear energy is therefore not solar, but nonetheless stellar; energized not by our sun, but by another. Our universe is energized by its stars.

There are two minor exceptions to the rule that our energy comes from nuclear and solar sources: Tidal power results from the interaction of the moon’s gravitational field and the initial rotational motion imparted to the Earth; and geothermal energy is, in its minor fraction, a product of residual heat within the Earth, and of gravity. Tidal and geothermal sources provide just a small fraction of one percent of our energy supply.

Some oft-touted energy sources are not mentioned above because they are not energy sources at all. Rather, they are energy-storage media. Hydrogen is one example. We can create purified hydrogen by, for instance, using electricity to split water into its oxygen and hydrogen atoms. But this requires energy inputs, and the energy we get out when we burn hydrogen or react it in a fuel cell is less than the energy we put in to purify it. Hydrogen, therefore, functions like a gaseous battery: energy carrier, not energy source.

Knowing that virtually all energy flows have their origins in our sun or other stars helps us critically evaluate oft-heard ideas that there may exist undiscovered energy sources. To the contrary, it is extremely unlikely that there are energy sources we’ve overlooked. The solution to energy supply constraints and climate change is not likely to be “innovation” or “technology.” Though some people hold out hope for nuclear fusion (creating a small sun on Earth rather than utilizing the conveniently-placed large sun in the sky) it is unlikely that fusion will be developed and deployed this century. Thus, the suite of energy sources we now employ is probably the suite that will power our civilization for generations to come. And since fossil solar sources are both limited and climate-disrupting, an easy prediction is that contemporary solar sources such as wind turbines and solar photovoltaic panels will play a dominant role in the future.

Summary

Understanding that virtually all energy sources are solar or nuclear in origin reduces the intellectual clutter and clarifies our options. We are left with three energy supply categories when making choices about our future:
Fossil solar: oil, natural gas, and coal;
Contemporary solar: hydroelectricity, wood, biomass, wind, photovoltaic electricity, ethanol and biodiesel (again, often energy-subsidized from fossil-solar sources); and
Nuclear.

Footnote: The author ends with support for windmills and solar panels, but drops nuclear without explanation. There are also presently unsolved problems in substituting those intermittent power sources for fossil fuels. Details are at Climateers Tilting at Windmills.

Fortunately, we have time to adapt to the ongoing slight fluctuations in weather while working out the longer-term transition to nuclear and other energy sources. Unfortunately, a recent study of energy subsidies in the US shows only a small amount is directed toward nuclear, and its sole purpose is decommissioning.

Update April 27

Some good news today: Secretary of Energy Rick Perry Announces $60 Million for U.S. Industry Awards in Support of Advanced Nuclear Technology Development

WASHINGTON, D.C. – U.S. Secretary of Energy Rick Perry announced today that the U.S. Department of Energy (DOE) has selected 13 projects to receive approximately $60 million in federal funding for cost-shared research and development for advanced nuclear technologies. These selections are the first under DOE’s Office of Nuclear Energy’s U.S. Industry Opportunities for Advanced Nuclear Technology Development funding opportunity announcement (FOA), and subsequent quarterly application review and selection processes will be conducted over the next five years. DOE intends to apply up to $40 million of additional FY 2018 funding to the next two quarterly award cycles for innovative proposals under this FOA.

“Promoting early-stage investment in advanced nuclear power technology will support a strong, domestic, nuclear energy industry now and into the future,” said Secretary Perry. “Making these new investments is an important step to reviving and revitalizing nuclear energy, and ensuring that our nation continues to benefit from this clean, reliable, resilient source of electricity. Supporting existing as well as advanced reactor development will pave the way to a safer, more efficient, and clean baseload energy that supports the U.S. economy and energy independence.”

Is Global Warming a Necessity Defense?

On April 23, the Minnesota valve turners gained permission from the state appellate court to claim their admittedly illegal actions were necessary in light of global warming from pipeline fuels. But there are still several slips between this cup and their lips. The report below from the Minnesota Star Tribune raises several points of interest:

First, the ruling did not endorse the defendants’ claims; it held only that the prosecution had not proved that allowing the defense would damage the trial process.

Second, prosecutors may appeal this ruling to the Supreme Court.

Third, the district court judge who will preside over the trial says there is a high standard for that defense to succeed.

Fourth, as the dissenting appellate justice said: “This case is about whether respondents have committed the crimes of damage to property and trespass. It is not about global warming.”

A three-judge appeals panel ruled 2-1 Monday that the prosecutor has failed to show that allowing the necessity defense will have a “critical impact” on the outcome of the protesters’ trials. The decision was labeled “unpublished,” which means it sets no binding precedent for other cases.

The appellate court allowed the so-called “necessity” defense to go forward. In a pretrial hearing in District Court, the protesters testified about their “individual perceptions of the necessity of their actions in preventing environmental harm caused by the use of fossil fuels, particularly the tar sands oil carried by the pipeline with which they interfered.”

The state objected to the defense, and the appeals court, on a 2-1 vote, has now dismissed that objection. The state can ask the Supreme Court to take up the issue.

The Court of Appeals didn’t rule on whether the defendants’ actions were necessary but said the state failed to show that allowing the defense would significantly reduce the likelihood of a successful prosecution. The court said state law doesn’t allow objections to the necessity defense before trial.

In a pretrial ruling, Clearwater County District Judge Robert Tiffany allowed the defense but said the necessity evidence must be “focused, direct, and presented in a noncumulative manner.” The state argued that the necessity defense would “unnecessarily confuse the jury.”

The defendants have said they intend to call expert witnesses to testify about global warming and their belief the federal government’s response has been ineffective.

Connolly’s dissent cited a 1971 state court ruling that said the necessity defense “applies only in emergency situations where the peril is instant, overwhelming, and leaves no alternative but the conduct in question.” The defendants in this case cannot meet that requirement, Connolly wrote.

Footnote: In a previous valve turner trial, the judge refused to have testimony from witnesses such as James Hansen unless they could testify regarding defendants’ state of mind.  In other words, that judge was willing to consider global warming as a plea of temporary insanity.

Campus Thought Control


Art credit: Chris Gall

This post is based on one of the best things written lately on the toxic mentality dominating today’s college campus. It is a rich, in-depth exploration of the issue, and rewarding to those reading the whole article. Some excerpts in italics here with my bolds to show the train of thought and some of the pearls.

Peter Berkowitz writes in The Standard on Liberal Education and Liberal Democracy. Excerpts in italics with my bolds.

Colleges foster smugness on the left and resentment on the right.

According to John Stuart Mill, liberal education furnishes and refines the mind. It furnishes the mind with general knowledge of history and literature, science, economics and politics, morality, religion, and philosophy. It refines the mind by teaching students to grasp the complexities of critical issues and to appreciate the several sides of moral and political questions. In furnishing and refining the mind, liberal education tends to temper judgment, elevate character, and form richer and fuller human beings.

Though different from professional education, liberal education improves the ability of professionals to practice their professions wisely. As Mill observed, “Men may be competent lawyers without general education, but it depends on general education to make them philosophic lawyers who demand, and are capable of apprehending, principles, instead of merely cramming their memory with details.”

Berkowitz recalls some examples where Justice Scalia demonstrated Mill’s point, and then digs into the heart of the matter.

Unfortunately, liberal education in America is in bad shape. Our colleges have exposed it to three major threats. They have attacked and curtailed free speech. They have denigrated and diluted due process. And they have hollowed and politicized the curriculum. These threats are not isolated and independent. They are intertwined. All are rooted in the conceit of infallibility. To remedy one requires progress in remedying all.

Free Speech Curtailed

From speech codes, trigger warnings, microaggressions, and safe spaces to disinviting speakers and shouting down lecturers, free speech is under assault on college campuses. One reason is that, as polls by Gallup and others show, many students do not understand the First Amendment. And when they learn that it protects offensive and even hateful speech, they dislike it.

Why has free speech fallen out of favor? Many university students, faculty, and administrators suppose there is a fundamental conflict between free speech on one side and diversity and inclusion on the other. The freer the speech, the argument goes, the more pain and suffering for marginalized students. This way of thinking springs from a faulty understanding of free speech and of diversity and inclusion in education.

Yes, words wound. Children learn that from experience. History teaches, however, that beyond certain narrow exceptions—such as true threats, direct and immediate incitement to violence, defamation, and sexual harassment—the costs of regulating speech greatly exceed the benefits. One cost is that regulating speech disposes majorities to ban opinions that differ from their own.

Well-meaning people will say, “I hear you, I’m with you, I support free speech, too. But what does free speech offer to historically discriminated-against minorities and women?” The short answer is the same precious goods that it offers to everyone else: knowledge and truth. The long answer begins with three observations.

First, for many years women have formed the majority on campuses around the country. Approximately 56 percent of university students are female. On any given campus, women and historically discriminated-against minorities are together likely to represent a large majority. Thus, the curtailing of campus speech on behalf of these minorities and women reflects the will of a new campus majority. This new majority exhibits the same old antipathy to free speech. It plays the same old trick of repressing speech it labels offensive. And it succumbs to the same old tyrannical impulse to silence dissenting views that has always been a bane of democracy.

Second, as Erwin Chemerinsky and Howard Gillman argued last year in their book Free Speech on Campus, far from serving as an instrument of oppression and a tool of white male privilege, free speech has always been a weapon of those challenging the authorities—on the side of persecuted minorities, dissenters, iconoclasts, and reformers. In the United States, free speech has been essential to abolition, women’s suffrage, the civil rights movement, feminism, and gay rights. All took advantage of the room that free speech creates to criticize and correct the established order. Restricting speech—that is, censorship—has been from time immemorial a favorite weapon of authoritarians.

Third, a campus that upholds free speech and promotes its practice is by its very nature diverse and inclusive. Such a campus offers marvelous benefits to everyone regardless of race, class, or gender. These benefits include the opportunity to express one’s thoughts with the best evidence and arguments at one’s disposal; the opportunity to listen to and learn from a variety of voices, some bound to complement and some sure to conflict with one’s own convictions; and, not least, the opportunity to live in a special sort of community, one dedicated to intellectual exploration and the pursuit of truth.

Instead of touting free speech’s benefits, however, schools are encouraging students—especially but not only historically discriminated-against minorities and women—to see themselves as unfit for free speech, as weak and wounded, as fragile and vulnerable, as subjugated by invisible but pervasive social and political forces. Standing liberal education on its head, colleges and universities enlist students in cracking down on the lively exchange of opinion.

Liberal education ought to champion the virtues of freedom. It ought to cultivate curiosity and skepticism in inquiry, conscientiousness and boldness in argument, civility in speaking, attentiveness in listening, and coolness and clarity in responding to provocation. These virtues enable students—regardless of race, class, or gender—to take full advantage of free speech.

Since free speech is essential to liberal education, we must devise reforms that will enable colleges and universities to reinvigorate it on their campuses. Last year, the Phoenix-based Goldwater Institute developed “model state-level legislation designed to safeguard freedom of speech at America’s public university systems.” Consistent with its recommendations, universities could take several salutary steps:

Abolish speech codes and all other forms of censorship.
Publish a formal statement setting forth the purposes of free speech.
Create freshman orientation programs on free speech.
Punish those who attempt to disrupt free speech.
Host an annual lecture on the theory and practice of free speech.
Issue an annual report on the state of free speech on campus.
Strive where possible for institutional neutrality on partisan controversies, the better to serve as an arena for vigorous debate of the enduring controversies.
Many colleges and universities won’t act on such principles. Public universities, however, are subject to the First Amendment, and state representatives can enact legislation to assist state schools in complying with their constitutional obligations.

Due Process Denigrated

The curtailing of free speech on campus has not occurred in a vacuum. It is closely connected to the denial of due process in disciplinary proceedings dealing with allegations of sexual misconduct. Both suppose that little is to be gained from listening to the other side. Both rest on the conceit of infallibility.

Campus practices, for example, can presume guilt by designating accusers as “victims” and those accused as “perpetrators.” Universities sometimes deprive the accused of full knowledge of the charges and evidence and of access to counsel. It is typical for them to use the lowest standard of proof—a preponderance of the evidence—despite the gravity of allegations. In many instances, universities withhold exculpatory evidence and prevent the accused from presenting what exculpatory evidence is available; they deny the accused the right to cross-examine witnesses, even indirectly; and they allow unsuccessful complainants to appeal, effectively exposing the accused to double jeopardy. To achieve their preferred outcomes in disciplinary hearings and grievance procedures, universities have even been known to flout their own published rules and regulations.

There is, of course, no room for sexual harassment on campus or anywhere else. Predators must be stopped. Sexual assault is a heinous crime. Allegations should be fully investigated. Universities should provide complainants immediate medical care and where appropriate psychological counseling and educational accommodations. Students found guilty should be punished to the full extent of the law.

At the same time, schools must honor due process, which rightly embodies the recognition that accusations and defenses are put forward by fallible human beings and implementing justice is always the work of fallible human beings. Some would nevertheless truncate due process on the grounds that a rape epidemic plagues higher education, but, fortunately, there is no such thing. The common claim that women who attend four-year colleges face a one in five chance of being sexually assaulted has been debunked. According to the most recent Department of Justice data, 6.1 in every 1,000 female students will be raped or sexually assaulted; the rate for non-student females in the same age group is 7.6 per 1,000. Yes, even one incident of sexual assault is too many. Yes, women’s safety must be a priority. And yes, we can do more. But contrary to conventional campus wisdom, university women confront a lower incidence of sexual assault than do women outside of higher education.

Others would curb due process because all women should just be believed. Certainly they should be heard. But no one should just be believed, especially when another’s rights are at stake. And for a simple reason: Human beings are fallible. As Harvard professor of psychology Daniel Schacter amply demonstrated in The Seven Sins of Memory: How the Mind Forgets and Remembers (2001), we humans routinely forget, routinely remember things that never were, and routinely reconstruct the past in ways that serve our passions and interests.

Then there’s the question of why universities are involved at all in adjudicating allegations of nonconsensual sex. Nonconsensual sex is a common statutory definition of rape. Generally, universities leave violent crimes to the police and courts. If a student were accused of murdering a fellow student, who would dream of convening a committee of administrators, professors, and students to investigate, prosecute, judge, and punish? For that matter, if a student were accused of stealing or vandalizing a fellow student’s car, would we turn to a university committee for justice? If both murder, the gravest crime, and crimes much less grave than sexual assault—theft and vandalism—are matters for the criminal justice system, why isn’t the violent crime of sexual assault?

The denial of female agency, which follows from the claim that women are incapable of truly consenting to sex, implies that a man who acknowledges having had sex with a woman has prima facie committed assault. This approach—common on campuses—may be illegal. Insofar as it presumes male guilt and denies men due process, it appears to violate Title IX by discriminating against men on the basis of sex. It is also profoundly illiberal and anti-woman. It turns out that the denial of due process for men rests on the rejection of the belief—central to liberal democracy—that women, as human beings, are free and equal, able to decide for themselves, and responsible for their actions.

The willingness of university officials to deny female agency, presume male guilt, and dispense with due process is on display in the more than 150 lawsuits filed since 2011 in state and federal courts challenging universities’ handlings of sexual-assault accusations. Lawsuits arising from allegations of deprivation of due process at Amherst, Berkeley, Colgate, Oberlin, Swarthmore, USC, Yale, and many more make chilling reading. Numerous plaintiff victories have already been recorded.

To take advantage of their newfound freedom to provide due process for all their students, universities might consult the October 2014 statement published by 28 Harvard Law School professors in the Boston Globe. The statement offers guidance in reconciling the struggle against sexual misconduct with the imperatives of due process. It counsels universities to adopt several measures:

Inform accused students in a timely fashion of the precise charges against them and of the facts alleged.
Ensure that accused students have adequate representation.
Adopt a standard of proof and other procedural protections commensurate with the gravity of the charge, which should include the right to cross-examine witnesses, even if indirectly, and the opportunity to present a full defense at an adversarial hearing.
Avoid assigning any one office—particularly the Title IX office, which is an interested party because maximizing convictions justifies its presence—responsibility for fact-finding, prosecuting, adjudicating, and appeals.
In addition, universities ought to make sessions on due process an essential part of freshman orientation.

It is unreasonable, however, to expect the restoration of due process on campuses anytime soon. For starters, it depends on reinvigoration of free speech. A culture of free speech presupposes and promotes a healthy sense of fallibility. That opens one to the justice of due process. For what is due process but formalization of the effort by fallible human beings to fairly evaluate other fallible human beings’ conflicting claims?

Free speech, however, is not enough on its own to rehabilitate due process. Commitment to both is rooted in an understanding of their indispensable role in vindicating liberal democracy’s promise of freedom and equality. To recover that understanding, it is necessary to renovate the curriculum so that liberal education prepares students for freedom.

The Curriculum Politicized

The college curriculum has been hollowed out and politicized. The conceit of infallibility is again at work—in the conviction that the past is either a well-known and reprehensible repository of cruel ideas and oppressive practices or not worth knowing because progress has refuted or otherwise rendered irrelevant the foolish old ways of comprehending the world and organizing human affairs.

The disdain for the serious study of the history of literature, philosophy, religion, politics, and war that our colleges and universities implicitly teach by neglecting them, denigrating them, or omitting them entirely from the curriculum, has devastating consequences for liberal education. Without a solid foundation of historical knowledge, students cannot understand the ideas and events that have shaped our culture, the practices and institutions that undergird liberal democracy in America, the advantages and weaknesses of constitutional self-government, and the social and political alternatives to regimes based on freedom and equality. Absent such an understanding, students’ reasoning lacks suppleness, perspective, and depth. Consequently, graduates of America’s colleges and universities, many of whom will go on to occupy positions of leadership in their communities and in the nation, are poorly equipped to form reasoned judgments about the complex challenges America faces and the purposes to which they might wish to devote their lives.

To say that the curriculum has been hollowed is not to say that it fails to deliver a message but that it lacks a core. Much of college education is a mishmash of unconnected courses. Most undergraduates are required to fulfill some form of distribution requirements. Typically, this involves a few classes in the humanities, a few in the social sciences, and a few in the natural sciences. Within those broad parameters, students generally pick and choose as they like. For fulfilling requirements in the humanities, schools tend to treat courses on the sociology of sports, American film and race, and queer literary theory as just as good as classical history, Shakespeare, or American political thought.

The most common objection to a coherent and substantive core curriculum is that it would impair students’ freedom. Each undergraduate is different, the argument goes, and each knows best the topics and courses that will advance his or her educational goals. What right do professors and administrators have to tell students what they must study?

The better question is why we put up with professors and administrators who lack the confidence and competence to fashion and implement a core curriculum that provides a solid foundation for a lifetime of learning. Every discipline recognizes that one must learn to walk before one learns to run.

In every discipline, excellence depends on the acquisition of primary knowledge and necessary skills. Even the ability to improvise effectively—with a game-winning shot, a searing riff, or a devastating cross-examination—is acquired initially through submission to widely shared standards and training in established practices. It is peculiar, to put it mildly, that the authorities on college campuses are in the habit of insisting on their lack of qualifications to specify for novices the proper path to excellence.

For many professors, ideological opposition to a core curriculum on the grounds that it interferes with students’ freedom merges with self-interested opposition to it on the grounds that having to teach a common and required course of study would interfere with faculty members’ freedom. University hiring, promotion, and tenure decisions usually turn on scholarly achievement in rarefied areas of research. Powerful professional interests impel faculty to avoid teaching the sort of courses that provide students with general introductions, solid foundations, and broad overviews because those take time away from the specialized scholarly labors that confer prestige and status. Much better for professors, given the incentives for professional advancement entrenched by university administrations, to offer courses that focus on small aspects of arcane issues.

The hollowed-out curriculum, moreover, is politicized as much by routine exclusion of conservative perspectives as by aggressive promulgation of progressive doctrines. Students who express conservative opinions—about romance, sex, and the family; abortion and affirmative action; and individual liberty, limited government, and capitalism—often encounter mockery, incredulity, or hostile silence. Few professors who teach moral and political philosophy recognize the obligation to ensure in their classroom the full and energetic representation of the conservative sides of questions. Courses featuring Jean-Jacques Rousseau, Karl Marx, and John Rawls abound; those featuring Adam Smith, Edmund Burke, and Friedrich Hayek are scant.

Worse still, higher education fails to teach the truly liberal principles that explain why study of both conservative and progressive ideas nourishes the virtues of toleration and civility so vital to liberal democracy. Many faculty in the humanities and social sciences suppose they are champions of pluralism even as they inculcate progressive ideas. The cause of their delusion is that the rightward extreme of their intellectual universe extends no further than the center-left. Many were themselves so thoroughly cheated of a liberal education that, unaware of their loss, they blithely perpetuate the crime against education by cheating their students.

Small wonder that our politics is polarized. Both through their content and their omissions, college curricula teach students on the left that their outlook is self-evidently correct and that the purpose of intellectual inquiry is to determine how best to implement progressive ideas. At the same time, students on the right hear loud and clear that their opinions are ugly expressions of ignorance and bigotry and do not deserve serious consideration in pressing public-policy debates. By fostering smugness on the left and resentment on the right, our colleges and universities make a major contribution to polarizing young voters and future public officials.

What should be done?

First, freshman orientation must be restructured. Schools should not dwell on diversity, equality, and inclusion while excluding diversity of thought. In addition to providing sessions on the fundamentals of free speech and the essentials of due process, they ought to give pride of place in orientation to explaining the proper purposes of liberal education. This means, among other things, reining in the routine exhortations to students to change the world—as if there were no controversial issues wrapped up in determining which changes would be for the better and which for the worse. Instead, orientation programming should concentrate on helping students understand the distinctive role higher education plays in preserving civilization’s precious inheritance and the distinctive role such preservation plays in enriching students’ capacity for living free and worthy lives.

Second, curricula must be restructured to make room for a core. In our day and age, undergraduate specialization in the form of a major is inevitable. And students accustomed to a wealth of choice and to personalizing their music lists and news sources cannot be expected to abide a curriculum that does not provide a generous offering of electives. But even if a third of college were devoted to a major and a third to pure electives, that would leave a third—more than a year’s worth of study—to core knowledge.

A proper curriculum should not only introduce students to the humanities, social sciences, and natural sciences. It should also make mandatory a course on the tradition of freedom that underlies the American constitutional order and clarifies the benefits of a liberal education. In addition, the curriculum should require study of the great moral, political, and religious questions, and the seminal and conflicting answers, that define Western civilization. And it should require study of the seminal and conflicting answers to those great questions about our humanity and our place in the world given by non-Western civilizations.

Third, professors must bring the spirit of liberal education to their classrooms. The most carefully crafted and farsighted revisions of the curriculum will not succeed in revivifying liberal education unless professors teach in the spirit of Mill’s dictum from On Liberty, “He who knows only his own side of the case, knows little of that.” Indeed, unless professors recognize the wisdom of Mill’s dictum, they will fail to grasp the defects of the contemporary curriculum that make its revision urgent.

The Professor’s Vocation

To provide a properly liberal education, then, our colleges and universities must undertake three substantial reforms. They must institutionalize the unfettered exchange of ideas. They must govern campus life on the premise that students are endowed with equal rights and therefore equally deserving of due process without regard to race, class, or gender. And they must renovate the curriculum by introducing all students to the principles of freedom; to the continuities, cleavages, and controversies that constitute America and the West; and to the continuities, cleavages, and controversies that constitute at least one other civilization.

To accomplish these reforms, the conceit of infallibility must be tamed. Progress in one area of reform depends on progress in all. But to recall a matter Marx touched on and, long before him, Plato pursued: Who will educate the educators?

Times have changed. The academy has undergone a kind of religious awakening. These days many professors resemble priests who believe their job is to impose their faith. But the zealous priest is no more suited to the vocation of liberal education than is the cynical priest. Professors would do better to take the midwife—in the Socratic spirit that Mill embraced—as their model.

Liberal education’s task is to liberate students from ignorance and emancipate them from dogma so that they can live examined lives. It does this by furnishing and refining minds—transmitting knowledge and equipping students to think for themselves.

What about political responsibility? What about justice? What about saving the country and the world?

Through the discipline of liberal education, professors do what is in their limited power to cultivate citizens capable of self-government. And law professors do what is in their limited power to cultivate thoughtful lawyers. Those are lofty contributions since self-government and the rule of law are essential features of liberal democracy—the regime most compatible with our freedom, our equality, and our natural desire to understand the world and live rightly and well in it.

Peter Berkowitz is the Tad and Dianne Taube senior fellow at the Hoover Institution, Stanford University. This is a revised and expanded version of the 2018 Justice Antonin Scalia Lecture delivered on February 5 at Harvard Law School. It draws on previously published essays.

Barents Sea Ice Stays Put

[Figure: Barents Sea ice extent, days 104 to 113]

Over the last nine days, sea ice in the Barents Sea has persisted above 700k km2, well above the decadal average and the previous high year, 2014.  Melting is confined mostly to the Bering Sea on the Pacific side, and to a lesser extent the neighboring Sea of Okhotsk.

[Figure: Bering and Okhotsk Sea ice extents, days 104 to 113]

The April pattern of ice extent decline is shown in the graph below:

[Graph: NH ice extent through day 113, 2018]

2018 is tracking close to 2007 and 2017, all more than 400k km2 below the 11-year average (2007 through 2017 inclusive).  SII (Sea Ice Index) is showing ~200k km2 less ice throughout.  The graph below shows that, excluding the Bering and Okhotsk Seas, the two Pacific basins, 2018 ice extent is close to the decadal average.

[Graph: NH ice extent excluding Bering and Okhotsk, day 113, 2018]

The table below shows regional ice extents on day 113 compared to decadal averages and to 2017.

Region                                 2018 Day 113   Day 113 Avg   2018-Ave.   2017 Day 113   2018-2017
(0) Northern_Hemisphere                    13515699      14083321     -567621       13651810     -136111
(1) Beaufort_Sea                            1070445       1069106        1339        1070445           0
(2) Chukchi_Sea                              954262        965239      -10977         961723       -7461
(3) East_Siberian_Sea                       1086737       1086195         542        1083967        2770
(4) Laptev_Sea                               897845        894453        3392         897326         518
(5) Kara_Sea                                 934867        916778       18090         932153        2715
(6) Barents_Sea                              724756        572825      151931         546422      178334
(7) Greenland_Sea                            516420        670606     -154186        673722      -157302
(8) Baffin_Bay_Gulf_of_St._Lawrence         1239506       1338185      -98679        1444616     -205110
(9) Canadian_Archipelago                     853109        850093        3015         853214        -106
(10) Hudson_Bay                             1244858       1252135       -7277        1258453      -13595
(11) Central_Arctic                         3208617       3242368      -33751        3245713      -37096
(12) Bering_Sea                               88256        689111     -600856        374254      -285998
(13) Baltic_Sea                               44869         32599       12270         23289        21579
(14) Sea_of_Okhotsk                          648464        499591      148873        283164       365300

Overall, the 2018 deficit to average is 4%, or 570k km2. The difference is due entirely to open water in the Bering Sea, now a deficit of 600k km2 (down almost 90%).  Barents and Okhotsk are both above average by ~30%, while Greenland Sea is down about 20%.  It remains to be seen how fast or slowly the Arctic core regions, solidly frozen at this point in the year, will melt.
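The headline percentages can be reproduced directly from the table figures; here is a minimal Python sketch (extents in km2, taken from the regional table above):

```python
# Regional ice extents on day 113 (km^2), from the table above
nh_2018, nh_avg = 13_515_699, 14_083_321
bering_2018, bering_avg = 88_256, 689_111
barents_2018, barents_avg = 724_756, 572_825

deficit = nh_avg - nh_2018                                              # ~570k km^2
deficit_pct = 100 * deficit / nh_avg                                    # ~4%
bering_loss_pct = 100 * (bering_avg - bering_2018) / bering_avg         # ~87%
barents_surplus_pct = 100 * (barents_2018 - barents_avg) / barents_avg  # ~27%

print(f"NH deficit: {deficit:,} km^2 ({deficit_pct:.1f}%)")
print(f"Bering down {bering_loss_pct:.0f}%, Barents up {barents_surplus_pct:.0f}%")
```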

[AARI ice chart dated 24 April 2018]

Current Arctic ice conditions according to AARI, St. Petersburg, Russia. Old ice shown in brown.



Boris Vilkitsky, the 172,000 m3 Arc7 ice-class LNG carrier, violated a number of safety rules on a ballast voyage to Yamal LNG terminal in the Russian High Arctic port of Sabetta earlier in April.

An Arc4 rating effectively prohibits the ship from operating independently or even with an icebreaker escort in the waters of the Kara Sea when ice conditions are medium to heavy. Roshydromet, Russia’s Federal Service for Hydrometeorology and Environmental Monitoring, has reported recently that first-year ice in the region is up to 2 metres thick.

Image from 4 days ago, source LNGworldshipping.

Pensioners Pay for Climate Activism

The Perpetrators: Huge Fund Managers like BlackRock and Vanguard apply Proxy Power against companies in their portfolios.

Reports such as these have been appearing in the media:

BlackRock Wields Its $6 Trillion Club to Combat Climate Risks

Big investors press major companies to step up climate action

‘Money talks’: A $1.2 trillion fund manager is about to pull investment from companies that won’t act on climate change

BlackRock’s Message: Contribute to Society, or Risk Losing Our Support

The irresponsible behavior of these perps is explained by Tim Doyle in Crain’s newsletter: BlackRock mustn’t mimic underperforming NYC pension funds. Excerpts below with my bolds.

Financial firms and fund managers must focus on returns, not political and social causes

Earlier this year, BlackRock CEO Larry Fink released his annual letter to CEOs, in which he called for a greater focus on “societal impacts” by the companies in which BlackRock invests. The lengthy letter went into considerable detail to explain the firm’s position, but failed to specify how this initiative will be carried out and what it means for the millions of hardworking Americans investing in BlackRock’s passive index funds.

But what is really behind this call to action? And why are passive-fund managers becoming active?

BlackRock’s newfound focus comes at a time when pension funds—such as the New York City Employment Retirement Systems and California Public Employees Retirement System (CalPERS)—are taking an increasingly aggressive position toward investments that align with social and political agendas, with returns often taking a backseat.

In New York, the city’s contributions to its pension funds reached $9.3 billion in fiscal 2017, up from $1.4 billion in 2002. The city’s budget will soon allocate more spending on pension costs than on social services (excluding education), while the funds are estimated to be at least $65 billion in the red, with a weighted average funded ratio of only 62%, or 10% below the national average. These are alarming figures.
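As a back-of-the-envelope check, the quoted shortfall and funded ratio imply the funds’ total assets and liabilities, assuming both figures describe the same valuation (a simplifying assumption for illustration only):

```python
# NYC pension figures as quoted: $65B unfunded, 62% funded ratio
shortfall = 65e9          # liabilities minus assets, in dollars
funded_ratio = 0.62       # assets / liabilities

liabilities = shortfall / (1 - funded_ratio)   # implied total liabilities, ~$171B
assets = liabilities * funded_ratio            # implied total assets, ~$106B
print(f"Implied liabilities: ${liabilities / 1e9:.0f}B, assets: ${assets / 1e9:.0f}B")
```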

Ironically, as the pension system struggles to meet its obligations, city Comptroller Scott Stringer is ramping up the funds’ focus on matters that on their face have little to do with performance and much more to do with public relations, submitting 92 separate shareholder proposals to 88 different companies in fiscal year 2017, the majority focused on giving large passive funds and pensions special status to elect directors and on social and environmental issues.

Stringer even stated in his inaugural address that he wants to “remake the office of comptroller into a think tank for innovation and ideas” and recently came out in support of fossil-fuel divestment at the city’s pension funds—another politically motivated move that could cost taxpayers billions.

The same can be seen at the nation’s largest public pension fund. CalPERS has increased its environmental and social investing and activism while converting a $3 billion pension surplus in 2007 to a $138 billion deficit today. CalPERS leaders have also taken an increasingly active public role, using the fund’s environmental and social platform to push a larger agenda on the world stage, all while struggling to meet its obligations.

Despite poor returns, these funds exert a large influence on companies like BlackRock that manage trillions in pension fund money, pressuring them also to take an aggressive stance on these issues. CalPERS, for instance, has paid BlackRock millions of dollars in fees and is negotiating with the company over management of the fund’s $26 billion private-equity program. BlackRock also voted in support of a CalPERS-sponsored climate-related proposal for the first time in 2017.

These are no small matters for a company that manages roughly $6 trillion in assets, the large majority of which are passive investments meant to track an index for steady value creation, not to act as active managers to push political and societal agendas unrelated to returns.

Since BlackRock’s announcement, its CEO has made a number of public appearances but has yet to reassure those who have placed trust in his firm’s passive-investment vehicles. After all, passive investors are looking to make gains based on the market, not pick winners and losers based on the social and political influences of mismanaged pension funds. Instead, the firm’s inability to share concrete details about the path forward creates more confusion for investors, who are worried what this new focus on “social impact” will mean for their retirement funds.

We all care about the environment and our own social causes, but an increased focus on issues that have no concrete connection to value has proven costly for the retirees who rely on their pension fund for their livelihood and the taxpayers that backfill their underperformance.

Now the nation’s largest passive-investment fund is moving toward implementing the same types of policies as the pension funds that happen to provide it billions of dollars in business each year, and millions of everyday investors could be affected. Anyone with an investment account should take note.

Tim Doyle is vice president of policy and general counsel at the American Council for Capital Formation.

The Sky is Not Falling

Bjorn Lomborg brings perspective to doomsday hyperbole in his article The Sky Is Not Falling.  Excerpts in italics below with my bolds.

Main Point: Long, slow, positive trends don’t make it to the front page or to water-cooler conversations. So we develop peculiar misperceptions, especially the idea that a preponderance of things are going wrong.

When I published The Skeptical Environmentalist in 2001, I pointed out that the world was getting better in many respects. Back then, this was viewed as heresy, as it punctured several common and cherished misperceptions, such as the idea that natural resources were running out, that an ever-growing population was leaving less to eat, and that air and water were becoming ever-more polluted.

In each case, careful examination of the data established that the gloomy scenarios prevailing at the time were exaggerated. While fish stocks, for example, are depleted because of a lack of regulation, we can actually eat more fish than ever, thanks to the advent of aquaculture. Worries that we are losing forests overlook the reality that as countries become richer, they increase their forest cover.

Since I wrote the book, the world has only become better, according to many important indicators. We have continued to see meaningful reductions in infant mortality and malnutrition, and there have been massive strides toward eradication of polio, measles, malaria, and illiteracy.

By focusing on the most lethal environmental problem – air pollution – we can see some of the reasons for improvement. As the world developed, deaths from air pollution have declined dramatically, and that trend is likely to continue. Looking at a polluted city in a country like China might suggest otherwise, but the air inside the homes of most poor people is about ten times more polluted than the worst outdoor air in Beijing. The most serious environmental problem for humans is indoor air pollution from cooking and heating with dirty fuels like wood and dung – which is the result of poverty.

In 1900, more than 90% of all air pollution deaths resulted from indoor air pollution. Economic development has meant more outdoor pollution, but also much less indoor pollution. Reductions in poverty have gone hand in hand with a four-fold reduction in global air pollution mortality. Yet more people today still die from indoor air pollution than from outdoor pollution. Even in China, while outside air has become a lot more polluted, poverty reduction has caused a lower risk of total air pollution death. And as countries become richer, they can afford to regulate and cut even outdoor air pollution.

Two hundred years ago, almost every person on the planet lived in poverty, and a tiny elite in luxury. Today just 9.1% of the population, or almost 700 million people, lives on less than $1.90 per day (or what used to be one dollar in 1985). And just in the last 20 years, the proportion of people living in extreme poverty has almost halved. Yet few of us know this. The Gapminder foundation surveyed the UK and found that just 10% of people believe poverty has decreased. In South Africa and in Sweden, more people believe extreme poverty has doubled than believe – correctly – that it has plummeted.

How do we continue our swift progress? There has been no shortage of well-intentioned policy interventions, so we have decades of data showing what works well and what doesn’t.

In the latter category, even well-considered ideas from the world’s most eminent thinkers can fall short. The ambitious Millennium Villages concept was supposed to create simultaneous progress on multiple fronts, producing “major results in three or fewer years,” according to founder Jeffrey D. Sachs. But a study by the United Kingdom’s Department for International Development shows the villages had “moderately positive impacts,” and “little overall impact on poverty.”

It’s more constructive to focus on what works. A global analysis of development targets, conducted for the Copenhagen Consensus by a panel of Nobel laureate economists, showed where additional money can achieve the most. They concluded that improved access to contraception and family-planning services would reduce maternal and child mortality, and also – through a demographic dividend – increase economic growth.

Likewise, research assessing the best development policies for Haiti found that focusing on improvements in nutrition through the use of fortified flour would transform the health of young children, creating lifelong benefits.

And the most powerful weapon in the fight against poverty is the one that got us where we are today: broad-based economic growth. Over the past 30 years, China’s growth spurt alone lifted an unprecedented 680 million people above the poverty line.

Humanity’s success in reducing poverty is an extraordinary achievement, and one that we are far too reticent about acknowledging. We need to make sure that we don’t lose sight of what got us this far – and what justifies the hope of an even better future.

Background:  Why climate activism has become a doomsday cult Clexit Gloom and Doom

Alberta Set to Imitate Ontario’s Electrical Mess

Albertans pay around five cents a kilowatt hour, compared to rates of up to 18 cents that Ontarians have experienced. But for how long? (Postmedia News)

Kevin Libin writes in the Financial Post: Alberta’s now copying Ontario’s disastrous electricity policies. What could go wrong? Get ready, Albertans: a new report reveals that all the thrills and spills that follow when politicians start meddling in a boring but well-functioning electricity market are coming your way. Excerpts in italics below with my bolds.

A report released Thursday by the University of Calgary’s School of Public Policy gives a sneak peek of how the Alberta script could play out. It begins once again with a “progressive” government convinced that its legacy lies in climate activism, out to redesign an electricity grid from something meant to provide affordable, reliable power into a showpiece of uncompetitive solar and wind power. And like Ontario, the Alberta NDP is determined to turn its provincial electricity grid into not just a green project that ignores economics, but an affirmative-action diversity project that sets aside certain renewable deals for producers owned by First Nations.

Alberta Premier Rachel Notley’s plan, like former Ontario premier Dalton McGuinty’s, is to phase out all of Alberta’s cheap, abundant but terribly uncool coal-fired power (by 2030, in Alberta’s case) and force onto the grid instead large amounts of unreliable, expensive solar and wind power. Albertans have been so preoccupied fighting through a barrage of energy woes since Notley’s NDP was elected — the oil-price crash, government-imposed carbon taxes and emission caps, blocked and cancelled pipelines and the Trudeau government’s wholesale politicization of energy regulation — that they probably haven’t realized yet how vast an overhaul Notley was talking about when she began revealing this plan in 2015. But the report’s author, Brian Livingston, an engineer and lawyer with deep experience in the energy business in Alberta, runs through the shocking numbers: As of last year, Alberta’s grid had a capacity of roughly 17,000 megawatts, but the envisioned grid of 2032 will require nearly 13,000 megawatts that do not currently exist. Think of it as rebuilding 75 per cent of Alberta’s current grid in less than 15 years. Hey, what could go wrong?

The Alberta Electricity System Operator is planning for so much wind power that the province will blow past Ontario, a province three times its size. (Postmedia News)

And if Ontarians thought their government was obsessed with green power, Livingston notes that the Alberta Electricity System Operator is planning for so much wind power that the province will blow past Ontario, a province three times its size, with 5,000 megawatts of wind compared to Ontario’s 4,213 megawatts, and nearly twice as much solar power, 700 megawatts, compared to Ontario’s 380 megawatts.
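The capacity comparisons reduce to simple ratios; a quick arithmetic check on the figures as quoted:

```python
# Alberta grid overhaul (megawatts, figures as quoted from the report)
existing, new_required = 17_000, 13_000
rebuild_share = 100 * new_required / existing   # ~76%, i.e. the "75 per cent"

# Planned wind and solar capacity, Alberta vs. Ontario (MW)
wind_ab, wind_on = 5_000, 4_213
solar_ab, solar_on = 700, 380
print(f"Rebuild share: {rebuild_share:.0f}%")
print(f"Wind AB/ON: {wind_ab / wind_on:.2f}x, solar AB/ON: {solar_ab / solar_on:.2f}x")
```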

Learning from McGuinty’s mistake, the Alberta NDP is smart enough to ensure the extra cost of all this uneconomic power won’t show up printed in black and white on consumers’ power bills, likely hoping that spares them the political fallout that now threatens the Ontario Liberals. Rather than ratepayers shouldering the pain, it will be taxpayers — largely the same people — who pay most for any additional costs through added deficits and debts, at least for the next few years. That’s because Notley has ordered a temporary cap on household electricity rates of 6.8 cents per kilowatt hour (which is still significantly higher than the current rate). When wholesale rates rise higher than that, the government will use carbon-tax revenues to pay the difference. But businesses pay full freight from the get go.

Hiding from the real costs of using energy is a curious move for a government that gives away energy-efficient light bulbs and other products designed to conserve while imposing carbon taxes to try to suppress energy use. It’s also a costly move. The C. D. Howe Institute estimates it will cost Alberta taxpayers up to $50 million this year alone; a recent report from electricity consultants at EDC Associates estimates that by 2021, the extra costs moved off electricity bills and onto tax bills will total $700 million. That’s when the price cap expires and costs could start showing up on power bills instead.

Of course, Ontario has proven that it’s easy to underestimate how expensive these political experiments can get, but the Alberta redesign is already getting pricey. First, Notley accidentally stuck Alberta consumers with nearly $2 billion in extra surcharges when she rewrote carbon policies without realizing that this gave producers the right to cancel unprofitable contracts. Her plan also requires the government to create a new “capacity” payment system for electricity producers, who will be able to charge substantial sums even if they don’t produce a single watt. Livingston shows that many producers can earn almost as much just for offering capacity to the grid as they do for producing. Meanwhile, since solar power is perennially and embarrassingly uncompetitive economically, even compared with expensive wind power, the government plans to let solar providers sell electricity at premium rates to government facilities, with taxpayers covering that cost, too, just as they’ll cover the cost of overpriced wind power, which doesn’t approach the affordability of fossil fuels.

In his report, Livingston drily notes that the way Albertans think of the future of their electricity system could probably be summed up as: “Whatever we do here in Alberta, please let us not do it like they did it in Ontario.” They have reason to fear, since Livingston shows Ontario households have faced rates as much as four times higher than those in Alberta. Even if it doesn’t look exactly like the way they did things in Ontario, that doesn’t mean it still can’t go very wrong. Whenever progressive politics infests the electrical grid, people always pay for it in the end.

Background:  Climate Policies: Real Economic Damage Fighting Imaginary Problem