Science 101: Null Test All Claims

Francis Menton provides some essential advice for non-scientists in his recent essay at Manhattan Contrarian, You Don’t Need To Be A Scientist To Know That The Global Warming Alarm “Science” Is Fake. Excerpts in italics with my bolds.

When confronted with a claim that a scientific proposition has been definitively proven, ask the question: What was the null hypothesis, and on what basis has it been rejected?

As Menton explains, you don’t need the skills to perform the null test yourself, just the boldness to ask how the null hypothesis was dismissed.

Consider first a simple example, the question of whether aspirin cures headaches. Make that our scientific proposition: aspirin cures headaches. How would this proposition be established? You yourself have taken aspirin many times, and your headache always went away. Doesn’t that prove that the aspirin worked? Absolutely not. The fact that you took aspirin 100 times and the headache went away 100 times proves nothing. Why? Because there is a null hypothesis that must first be rejected. Here the null hypothesis is that headaches will go away just as quickly on their own. How do you reject that? The standard method is to take some substantial number of people with headaches, say 2000, and give half of them the aspirin and the other half a placebo. Two hours later, of the 1000 who took the aspirin, 950 feel better and only 50 still have the headache; and of the 1000 who took the placebo, 500 still have the headache. Now you have very, very good proof that aspirin cured the headaches.
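To make the example concrete, here is a minimal sketch of how such a null hypothesis is actually tested, using the invented aspirin and placebo counts from the paragraph above. The chi-squared contingency test shown is one standard choice, not anything prescribed in Menton’s essay.

```python
# A minimal sketch using the hypothetical counts above
# (950/1000 recovered on aspirin vs. 500/1000 on placebo). Requires scipy.
from scipy.stats import chi2_contingency

# Rows: aspirin, placebo; columns: headache gone, headache remains
table = [[950, 50],
         [500, 500]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.1f}, p-value = {p_value:.3g}")
# A vanishingly small p-value rejects the null hypothesis that headaches
# go away just as quickly on their own; only then is causation supported.
```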

The point to focus on is that the most important evidence — the only evidence that really proves causation — is the evidence that requires rejection of the null hypothesis.

Over to climate science. Here you are subject to a constant barrage of information designed to convince you of the definitive relationship between human carbon emissions and global warming. The world temperature graph is shooting up in hockey stick formation! Arctic sea ice is disappearing! The rate of sea level rise is accelerating! Hurricanes are intensifying! June was the warmest month EVER! And on and on and on. All of this is alleged to be “consistent” with the hypothesis of human-caused global warming.

But, what is the null hypothesis, and on what basis has it been rejected? Here the null hypothesis is that some other factor, or combination of factors, rather than human carbon emissions, was the dominant cause of the observed warming.

Once you pose the null hypothesis, you immediately realize that all of the scary climate information with which you are constantly barraged does not even meaningfully address the relevant question. All of that information is just the analog of your 100 headaches that went away after you took aspirin. How do you know that those headaches wouldn’t have gone away without the aspirin? You don’t know unless someone presents data that are sufficient to reject the null hypothesis. Proof of causation can only come from disproof of the null hypothesis or hypotheses, that is, disproof of other proposed alternative causes. This precept is fundamental to the scientific method, and therefore fully applies to “climate science” to the extent that that field wishes to be real science versus fake science.

Now, start applying this simple check to every piece you read about climate science. Start looking for the null hypothesis and how it was supposedly rejected. In mainstream climate literature — and I’m including here both the highbrow media like the New York Times and also the so-called “peer reviewed” scientific journals like Nature and Science — you won’t find that. It seems that people calling themselves “climate scientists” today have convinced themselves that their field is such “settled science” that they no longer need to bother with tacky questions like worrying about the null hypothesis.

When climate scientists start addressing the alternative hypotheses seriously, then it will be real science. In the meantime, it’s fake science.

Summary

The null test can be applied to any scientific claim. If no null hypothesis was considered, then you can add the report to the file “Unproven Claims,” or “Unfounded Suppositions.” Some researchers call them SWAGs: Scientific Wild Ass Guesses. These are not useless, since any discovery starts with a SWAG. But you should avoid believing that they describe the way the world works until alternative explanations have been tested and dismissed.

See Also: No “Gold Standard” Climate Science

No GHG Warming Fingerprints in the Sky


Scientific vs. Social Authenticity

Credit: Stanislaw Pytel Getty Images

This post was triggered by an essay in Scientific American, Authenticity under Fire by Scott Barry Kaufman. He raises modern issues and expresses a social and psychological sense of authenticity that left me unsatisfied. Following that, I turn to a scientific standard much richer in meaning and closer to my understanding.

Social Authenticity

Researchers are calling into question authenticity as a scientifically viable concept

Authenticity is one of the most valued characteristics in our society. As children we are taught to just “be ourselves”, and as adults we can choose from a large number of self-help books that will tell us how important it is to get in touch with our “real self”. It’s taken as a given by everyone that authenticity is a real thing and that it is worth cultivating.

Even the science of authenticity has surged in recent years, with hundreds of journal articles, conferences, and workshops. However, the more that researchers have put authenticity under the microscope, the more muddied the waters of authenticity have become.

Many common ideas about authenticity are being overturned.
Turns out, authenticity is a real mess.

One big problem with authenticity is that there is a lack of consensus among both the general public and among psychologists about what it actually means for someone or something to be authentic. Are you being most authentic when you are being congruent with your physiological states, emotions, and beliefs, whatever they may be?

Another thorny issue is measurement. Virtually all measures of authenticity involve self-report measures. However, people often do not know what they are really like or why they actually do what they do. So tests that ask people to report how authentic they are are unlikely to be truly accurate measures of their authenticity.

Perhaps the thorniest issue of them all though is the entire notion of the “real self”. The humanistic psychotherapist Carl Rogers noted that many people who seek psychotherapy are plagued by the question “Who am I, really?” While people spend so much time searching for their real self, the stark reality is that all of the aspects of your mind are part of you.

So what is this “true self” that people are always talking about? Once you take a closer scientific examination, it seems that what people refer to as their “true self” really is just the aspects of themselves that make them feel the best about themselves.

Even more perplexing, it turns out that most people’s feelings of authenticity have little to do with acting in accord with their actual nature. The reality appears to be quite the opposite. All people tend to feel most authentic when having the same experiences, regardless of their unique personality.

Another counterintuitive finding is that people actually tend to feel most authentic when they are acting in socially desirable ways, not when they are going against the grain of cultural dictates (which is how authenticity is typically portrayed). On the flip side, people tend to feel inauthentic when they are feeling socially isolated, or feel as though they have fallen short of the standards of others.

Therefore, what people think of as their true self may actually just be what people want to be seen as. According to social psychologist Roy Baumeister, we will report feeling highly authentic and satisfied when the way others think of us matches up with how we want to be seen, and when our actions “are conducive to establishing, maintaining, and enjoying our desired reputation.”

Conversely, Baumeister argues that when people fail to achieve their desired reputation, they will dismiss their actions as inauthentic, as not reflecting their true self (“That’s not who I am”). As Baumeister notes, “As familiar examples, such repudiation seems central to many of the public appeals by celebrities and politicians caught abusing illegal drugs, having illicit sex, embezzling or bribing, and other reputation-damaging actions.”

Kaufman Conclusion

As long as you are working towards growth in the direction of who you truly want to be, that counts as authentic in my book regardless of whether it is who you are at this very moment. The first step to healthy authenticity is shedding your positivity biases and seeing yourself for who you are, in all of your contradictory and complex splendor. Full acceptance doesn’t mean you like everything you see, but it does mean that you’ve taken the most important first step toward actually becoming the whole person you most wish to become. As Carl Rogers noted, “the curious paradox is that when I accept myself just as I am, then I can change.”

My Comment:
Kaufman describes contemporary ego-centric group-thinking, which leads to the philosophical dead end called solipsism. As an epistemological position, solipsism holds that knowledge of anything outside one’s own mind is unsure; the external world and other minds cannot be known and might not exist outside the mind.

His discussion proves the earlier assertion that authenticity (in the social or psychological sense) is indeed a mess. The author finds no objective basis to determine fidelity to reality, thus leaving everyone struggling whether to be self-directed or other-directed. As we know from Facebook, most resolve that conflict by competing to see who can publish the most selfies while acquiring the most “friends.” This is the best Scientific American can do? The swamp is huge and deep indeed.

It reminds me of what Ross Pomeroy wrote at Real Science: “Psychology, as a discipline, is a house made of sand, based on analyzing inherently fickle human behavior, held together with poorly-defined concepts, and explored with often scant methodological rigor. Indeed, there’s a strong case to be made that psychology is barely a science.”

Scientific Authenticity

In contrast, let us consider some writing by Philip Kanarev. A practicing physicist, he is concerned with the demise of scientific thinking and teaching and calls for a return to fundamentals. His essay, Scientific Authenticity Criteria by Ph. M. Kanarev, appears in the General Science Journal. Excerpts in italics with my bolds.

A conjunction of scientific results in the 21st century has reached a level that provides an opportunity to find and to systematize the scientific authenticity criteria of precise knowledge already gained by mankind.

Neither Euclid, nor Newton gave precise definitions of the notions of an axiom, a postulate and a hypothesis. As a result, Newton called his laws the axioms, but it was in conflict with the Euclidean ideas concerning the essence of the axioms. In order to eliminate these contradictions, it was necessary to give a definition not only to the notions of the axiom and the postulate, but also to the notion of the hypothesis. This necessity is stipulated by the fact that any scientific research begins with an assumption regarding the reason causing a phenomenon or process being studied. A formulation of this assumption is a scientific hypothesis.

Thus, the axioms and the postulates are the main criteria of authenticity of any scientific result.

An axiom is an obvious statement, which requires no experimental check and has no exceptions. The absolute authenticity of an axiom follows from this definition; it is protected by its evident connection with reality. The scientific value of an axiom does not depend on its recognition; that is why disregarding an axiom as a scientific authenticity criterion amounts to ineffectual scientific work.

A postulate is a non-obvious statement whose reliability is proven by experiment or by a set of theoretical results originating from experiments. The reliability of a postulate is determined by the level of acknowledgement by the scientific community; that is why its value is not absolute.

A hypothesis is an unproven statement which is not yet a postulate. A proof can be theoretical or experimental. Both kinds of proof must not be at variance with the axioms and the recognized postulates. Only after that do hypothetical statements gain the status of postulates, and the statements which sum up a set of axioms and postulates gain the status of a trusted theory.

The first axioms were formulated by Euclid. Here are some of them:
1 – To draw a straight line from any point to any point.
2 – To produce a finite straight line continuously in a straight line.
3 – That all right angles equal one another.

Euclid’s formulation concerning the parallelism of two straight lines proved to be less concise. As a result, it was questioned and analyzed in the middle of the 19th century. It was accepted that two parallel straight lines cross at infinity. Despite a complete absence of evidence for this statement, the status of an axiom was attached to it. Mankind paid dearly for such an agreement among the scientists. All theories based on this axiom proved to be faulty, the physical theories of the 20th century principal among them.

In order to understand the complicated situation that has formed, one has to return to the Euclidean axioms and assess their completeness. It turns out that among Euclid’s axioms there are none which reflect the properties of the primary elements of the universe: space, matter and time. There are no phenomena in nature which could compress, stretch or distort space; that is why space is absolute. There are no phenomena in nature which change the rate of the passing of time. Time does not depend on anything; that is why we have every reason to consider time absolute. The absolute nature of space and time has been acknowledged by scientists since Euclidean times. But when Euclid’s axiom concerning the parallelism of straight lines was disputed, the ideas of the relativity of space and time appeared, along with the new theories based on these ideas, which proved (as we noted) to be faulty.

A law of acknowledgement of new scientific achievements was introduced by Max Planck. He formulated it in the following way: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it”. Our attempt to report the reliability of this law to the authorities is, as the history of science shows, an unnecessary intention.

Certainly, time appeared in space only after matter. But still we do not know of a source that produces elementary particles – the building blocks of the material world. That is why we have no reason to consider matter absolute. But this does not prevent us from paying attention to the interconnection of the primary elements of the universe: space, matter and time. They exist only together, not independently of each other. This fact is evident, and we have every reason to consider the indivisible existence of space, matter and time as axiomatic, and to call the axiom which reflects this fact the Unity axiom.

The philosophic essence of this axiom was noted long ago, but the practitioners of the exact sciences have failed to notice that it is implemented in the experimental and analytical processes of cognition of the world. When material bodies move, the mathematical description of this motion should be based on the Unity axiom. It follows from this axiom that the coordinates of any moving object are functions of time. Almost all physical theories of the 20th century are in conflict with the Unity axiom. It is painful to write about it in detail.

Let us go on analyzing the role of postulates as scientific authenticity criteria. First of all, let us recall the famous postulate by Niels Bohr concerning the orbital motion of the electrons in atoms. This catchy model of the interaction of electrons in atoms continues to be instilled in the minds of school pupils despite the fact that its impropriety was proven more than 10 years ago.

The role of Niels Bohr’s generalized postulate is great. Practically, it is used in the whole of modern chemistry and the larger part of physics. This postulate is based on the calculation of the spectrum of the hydrogen atom. But it is impossible to calculate the spectrum of the first orbit of the helium atom (which occupies the second place in Mendeleev’s table) with Bohr’s postulate, to say nothing of the spectra of more complicated atoms and ions. This alone was enough to cast doubt on the authenticity of Bohr’s postulate, and for some reason the mission of doubting it has fallen to our lot. Two years were devoted to decoding the spectrum of the first electron of the helium atom. As a result, the law of formation of the spectra of atoms and ions has been established, as well as the law of the change of the binding energy of the electron with the protons of the nuclei when energy jumps take place in the atoms. It has turned out that there is no energy of orbital motion of the electrons in these laws; there are only the energies of their linear interaction with the protons of the nuclei.
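For readers who want to see what “the calculation of the spectrum of the hydrogen atom” amounts to in its simplest form, here is an illustrative sketch of the Rydberg formula that Bohr’s model reproduces. This is standard textbook material, not Kanarev’s own calculation, and the constant is rounded.

```python
# Illustrative sketch: the hydrogen lines that Bohr's postulate reproduces,
# from the Rydberg formula 1/lambda = R_H * (1/n1**2 - 1/n2**2).
R_H = 1.09678e7  # Rydberg constant for hydrogen, 1/m (approximate)

def wavelength_nm(n1, n2):
    """Wavelength (nm) of the photon emitted in the n2 -> n1 transition."""
    inv_lambda = R_H * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv_lambda

# Balmer series (transitions down to n = 2): the visible hydrogen lines
for n2 in (3, 4, 5):
    print(f"n = {n2} -> 2 : {wavelength_nm(2, n2):.1f} nm")
# ~656, 486 and 434 nm, matching the observed Balmer lines
```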

Thereafter, it has become clear that only elementary particle models can play the role of the scientific result authenticity criteria in cognition of the micro-world. From the analysis of behaviour of these models, one should derive the mathematical models, which have been ascertained analytically long ago, and describe their behaviour in the experiments that have been carried out earlier.

The ascertained models of the photons of all frequencies, the electron, the proton and the neutron meet the above-mentioned requirements. They are interconnected with each other by such a large set of theoretical and experimental information, whose impropriety cannot be proven. This is the main feature of the proximity to reality of the ascertained models of the principle elementary particles. Certainly, the process of their generation has begun from a formulation of the hypothesis concerning their structures. Sequential development of the description of these structures and their behaviour during the interactions extended the range of experimental data where the parameters of the elementary particles and their interactions were registered. For example, the formation and behaviour of electrons are governed by more than 20 constants.

We have every reason to state that the models of the photons, the electron, the proton and the neutron, which have been ascertained by us, as well as the principles of formation of the nuclei, the atoms, the ions, the molecules and the clusters already occupy a foundation for the postulates, and new scientific knowledge will cement its strength.

Science has a rather complete list of criteria in order to estimate the authenticity of scientific investigative results. The axioms (obvious statements, which require no experimental check and have no exceptions) occupy the first place; the second place is occupied by the postulates. If a new theory is in conflict with at least one axiom, it will be rejected immediately by the scientific community without discussion. If experimental data appear which are in conflict with any postulate (as happened, for example, with Newton’s first law), the future scientific community, having learned a lesson from the scientific cowardice of the academic elite of the 20th century, will submit such a postulate to a collective analysis of its authenticity.

Kanarev Conclusion

To the academicians who have made many mistakes in the fields of physics and chemistry, we wish that they recover their sight in old age and be glad that these mistakes have already been amended. It is time to understand that continuing to stuff the heads of young people with faulty knowledge is similar to a crime, one that will be taken to heart emotionally in the near future.

The time has ended, when a diploma confirming higher education was enough in order to get a job. Now it is not a convincing argument for an employer; in order to be on the safe side, he hires a young graduate as a probationer at first as he wants to see what the graduate knows and what he is able to do. A new system of higher education has almost nullified a possibility for the student to have the skills of practical work according to his specialty and has preserved a requirement to have moronic knowledge, i.e. the knowledge which does not reflect reality.

My Summary

In Science, authenticity requires fidelity to axioms and postulates describing natural realities. It also means insisting that hypotheses be validated by experimental results. Climate science claims are not scientifically authentic unless or until confirmed by observations, and not simply projections from a family of divergent computer models. And despite all of the social support for climate hysteria, those fears are again more stuffing of nonsense into heads of youth and of the scientifically illiterate.

See Also Degrees of Climate Truth

Earth and Universe As Never Seen Before

This is an introduction to amazing graphics done by Eleanor Lutz (no relation) at her website Tabletop Whale, an original science illustration blog. Above is a data-based view of Earth’s seasons. If you watch in full screen, the four corners show views of the cycle from top, bottom, and sides. Below is her map of the solar system, showing how much scientific information is represented in the illustration (H/T Real Clear Science)

An Orbit Map of the Solar System
JUNE 10 2019 · Link to the Open-Source Code

This week’s map shows the orbits of more than 18000 asteroids in the solar system. This includes everything we know of that’s over 10km in diameter – about 10000 asteroids – as well as 8000 randomized objects of unknown size. This map shows each asteroid at its exact position on New Year’s Eve 1999.

All of the data for this map is shared by NASA and open to the public. However, the data is stored in several different databases so I had to do a decent amount of data cleaning. I’ve explained all of the steps in detail in my open-source code and tutorial, so I’ll just include a sketch of the process here in this blog post:
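Her full pipeline is in the linked open-source code. As a rough illustration of the core step, placing an asteroid on the map means converting its Keplerian orbital elements into a position, which comes down to solving Kepler’s equation. The sketch below is a generic textbook version under simplified assumptions (two dimensions, no perturbations), not Lutz’s actual code.

```python
import math

def position_in_orbit(a_au, e, mean_anomaly_rad, tol=1e-10):
    """(x, y) in the orbital plane (AU) from semi-major axis, eccentricity
    and mean anomaly, solving Kepler's equation M = E - e*sin(E) by
    Newton's method."""
    E = mean_anomaly_rad  # initial guess for the eccentric anomaly
    for _ in range(50):
        dE = (E - e * math.sin(E) - mean_anomaly_rad) / (1 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    x = a_au * (math.cos(E) - e)
    y = a_au * math.sqrt(1 - e * e) * math.sin(E)
    return x, y

# Example: a Ceres-like orbit (a ~ 2.77 AU, e ~ 0.08) at mean anomaly 1.0 rad
print(position_in_orbit(2.77, 0.08, 1.0))
```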


To see details, open the image in a new tab, then click on it to enlarge. Then browse the solar system to your heart’s content.

Historic Revision of Weights and Measures

Today, 16 November 2018, representatives from 60 nations agreed unanimously to an historic revision of the International System of Units (SI). The press kit for the 26th General Conference on Weights and Measures (CGPM) provides the details and implications. Excerpts in italics with my bolds.

Why is the SI important?  The name International System of Units, with the abbreviation SI, was given to the system in 1960. The SI units form a foundation for measurement across the world to ensure consistency and reliability. They are the basis of trading, manufacturing, innovation and scientific discovery around the world.

SI units can provide new opportunities for innovation. Some examples where greater accuracy is supporting better methods and understanding with a positive impact on society include:

The accurate measurement of temperature: This will support the ability to identify and measure reliably very small changes across large time periods with greater accuracy. Therefore, it will allow for precise monitoring and better predictions for climate change.

The accurate administration of drugs: The pharmaceutical industry needs to use a standard for very small amounts of mass in order to make dosages of medication even more appropriate for patients.

SI units can help us support innovation into the future. As our ability to measure properties improves, the standards we have for measurement will need to keep up. The accuracy of services like the Global Positioning System (GPS) are limited by our ability to use standard units, in this case the second to measure time. We can track our locations effectively because we can establish time using the SI definition of a second, which can be realized by an atomic clock. This advancement was made possible because society had defined the second more accurately well before we had even discovered what it could be used for. The atomic clock was made before computing really took off. Now, accurate timing is a fundamental part of the industry; without it, the internet, mobile phones and other technologies could not work reliably.

How are the units of measurement defined? Originally, measurement units were defined by physical objects or properties of materials. For example, the metre was originally defined by a metal bar exactly one metre in length.

However, these physical representations can change over time or in different environments, and are no longer accurate enough for today’s research and technological applications. Over the last century, scientists measured constants of nature, such as the speed of light in a vacuum and the Planck constant, with increasing accuracy. They discovered that these constants are more stable than physical objects, and fixed exact numerical values to them. Because these natural constants do not vary, they are at least one million times more stable.

This revision of the SI will, for the first time, see all base units in the SI defined by the constants of physical science that we use to describe nature. Using the constants we have found in nature as our universal basis for measurement allows not only scientists, but also industry and society, to have a measurement system that is more reliable, consistent, and scalable across quantities, from very large to very small.

There are two key ways the SI will change to create a more stable and future-proof basis for measurement:

It will take physical artifacts out of the equation: the kilogram is still defined by a physical object equal to the mass of the International Prototype of the Kilogram (IPK), an artifact stored at the International Bureau of Weights and Measures (BIPM) in France. This revision will finally remove the need for this last artifact.

For over a century, a kilogram has been defined by a lump of metal held securely in a Paris vault (AP)

For the first time, all the definitions will be separate from their realizations: instead of definitions becoming outdated as we find better ways to realize units, definitions will remain constant and future-proof. For example, the ampere is currently defined as “the magnetic force between two wires at a certain distance apart”, which means that it uses the realization of a measurement to define it. However, advancements like the advent of the Josephson and quantum Hall effects, have revealed better ways of realizing the ampere, making the original approach obsolete.

Which are changing?
The new definitions affect four of the base units: the kilogram (kg), the ampere (A), the kelvin (K), and the mole (mol):
The kilogram in terms of the Planck constant (h)
The ampere in terms of the elementary charge (e)
The kelvin in terms of the Boltzmann constant (k)
The mole in terms of the Avogadro constant (NA)
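The numerical values fixed by the revision are exact by definition, which is what makes the new system stable. The snippet below simply records them; the kT example at the end is my own illustration, not part of the press kit.

```python
# Exact defining constants in the revised SI (in force from 20 May 2019).
# These are definitions, not measurements, so they carry no uncertainty.
PLANCK_H = 6.62607015e-34           # J s    -> defines the kilogram
ELEMENTARY_CHARGE = 1.602176634e-19  # C     -> defines the ampere
BOLTZMANN_K = 1.380649e-23          # J/K    -> defines the kelvin
AVOGADRO_NA = 6.02214076e23         # 1/mol  -> defines the mole

# Example: thermal energy kT at 300 K, now exact in terms of k
print(f"kT at 300 K = {BOLTZMANN_K * 300:.6e} J")
```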

Defining the kilogram in terms of fundamental physical constants will ensure its long-term stability, and hence its reliability, which is at present in doubt.

What about the definitions of the other units? The definitions of the second (s), metre (m), and candela (cd), will not change, but the way the definitions are written will be revised to make them consistent in form with the new definitions for the kilogram (kg), ampere (A), kelvin (K), and mole (mol). These new wordings are also expected to be approved at the 26th CGPM in November 2018 and to come into force on 20 May 2019.

What impact does the redefinition have on the realization of the kilogram? The kilogram will be defined in terms of the Planck constant, guaranteeing long-term stability of the SI mass scale. The kilogram can then be realized by any suitable method (for example the Kibble (watt) balance or the Avogadro (X-ray crystal density) method). Users will be able to obtain traceability to the SI from the same sources used at present (the BIPM, national metrology institutes and accredited laboratories). International comparisons will ensure their consistency.

The value of the Planck constant will be chosen to ensure that there will be no change in the SI kilogram at the time of redefinition. The uncertainties offered by NMIs to their calibration customers will also be broadly unaffected.

What impact does the redefinition have on the realization of the ampere? The ampere and other electrical units, as practically realized at the highest metrological level, will become fully consistent with the definitions of these units. The transition from the 1990 convention to the revised SI will result in small changes to all disseminated electrical units.

For the vast majority of measurement users, no action need be taken as the volt will change by about 0.1 parts per million and the ohm will change by even less. Practitioners working at the highest level of accuracy may need to adjust the values of their standards and review their measurement uncertainty budgets.
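The “about 0.1 parts per million” figure can be checked with simple arithmetic from the newly exact constants and the conventional 1990 Josephson constant. The sketch below is my own back-of-envelope check, not a calculation from the press kit.

```python
# Back-of-envelope check of the ~0.1 ppm shift in the volt.
h = 6.62607015e-34    # J s, exact in the revised SI
e = 1.602176634e-19   # C, exact in the revised SI

K_J = 2 * e / h        # Josephson constant implied by the revised SI, Hz/V
K_J90 = 483597.9e9     # conventional value adopted in 1990, Hz/V

shift_ppm = abs(K_J90 - K_J) / K_J * 1e6
print(f"relative shift in the volt ~ {shift_ppm:.2f} ppm")  # ~0.1 ppm
```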

What impact does the redefinition have on the realization of the kelvin? The kelvin will be redefined with no immediate effect on temperature measurement practice or on the traceability of temperature measurements, and for most users, it will pass unnoticed. The redefinition lays the foundation for future improvements. A definition free of material and technological constraints enables the development of new and more accurate techniques for making temperature measurements traceable to the SI, especially at extremes of temperature.

After the redefinition, the guidance on the practical realization of the kelvin will support its worldwide dissemination by describing primary methods for measurement of thermodynamic temperature and equally through the defined scales ITS-90 and PLTS-2000.

What impact does the redefinition have on the realization of the mole? The mole will be redefined with respect to a specified number of entities (typically atoms or molecules) and will no longer depend on the unit of mass, the kilogram. Traceability to the mole can still be established via all previously employed approaches including, but not limited to, the use of mass measurements along with tables of atomic weights and the molar mass constant Mu.

Atomic weights will be unaffected by this change in definition and Mu will still be 1 g/mol, although now with a measurement uncertainty. This uncertainty will be so small that the revised definition of the mole will not require any change to common practice.

Will there be any change to the realization of the second, the metre and the candela? No.
The second will continue to be defined in terms of the hyperfine transition frequency of the caesium 133 atom. The traceability chain to the second will not be affected. Time and frequency metrology will not be impacted.
The metre in the revised SI will continue to be defined in terms of the speed of light, one of the fundamental constants of physics. Dimensional metrology practice will not need to be modified in any way and will benefit from the improved long-term stability of the system.
The candela will continue to be defined in terms of Kcd, a technical constant for photometry and will therefore continue to be linked to the watt. Traceability to the candela will still be established with the same measurement uncertainty via radiometric methods using absolutely-calibrated detectors.

Summary

The new definitions will use ‘the rules of nature to create the rules of measurement’ linking measurements at the atomic and quantum scales to those at the macroscopic level. As science and technology progress, the demands for measurements to underpin new products and services will increase. Metrology is a dynamic branch of science and the steps taken by the BIPM and the wider metrology community to advance the SI in 2018 will underpin these requirements ensuring that scientists can study it and engineers can improve it. And, since science and engineering play an important role in our lives, measurement matters for everyone.

Background:  

For an informative and also a whimsical look at measuring units see Origins of Science

Greenland Viking Science in Depth

 

Eric the Red slept here: Qassiarsuk features replicas of a Viking church and longhouse. (Ciril Jazbec)

Update August 9 2018

With an article just published in the South China Morning Post and reblogged at GWPF, I am reposting this more in-depth discussion of the Greenland Vikings. It was originally published in 2017, with information and graphics drawn from a fine essay in the Smithsonian Magazine.

It is refreshing to come across scientists researching a question without the corrupting need to scare the public or to confirm some personal, professional or moral fear of the future. In this case I refer to a wonderful Smithsonian article on the question: Why Did Greenland’s Vikings Vanish? Newly discovered evidence is upending our understanding of how early settlers made a life on the island — and why they suddenly disappeared.

Some excerpts below give the flavor of this persistent effort by researchers unrewarded by the availability of huge grants that now flow to the once-lowly climatologists.  The whole article is fascinating to anyone with curiosity.

The Mystery of Greenland Vikings

But the documents are most remarkable—and baffling—for what they don’t contain: any hint of hardship or imminent catastrophe for the Viking settlers in Greenland, who’d been living at the very edge of the known world ever since a renegade Icelander named Erik the Red arrived in a fleet of 14 longships in 985. For those letters were the last anyone ever heard from the Norse Greenlanders.

They vanished from history.

Europeans didn’t return to Greenland until the early 18th century. When they did, they found the ruins of the Viking settlements but no trace of the inhabitants. The fate of Greenland’s Vikings—who never numbered more than 2,500—has intrigued and confounded generations of archaeologists.

Those tough seafaring warriors came to one of the world’s most formidable environments and made it their home. And they didn’t just get by: They built manor houses and hundreds of farms; they imported stained glass; they raised sheep, goats and cattle; they traded furs, walrus-tusk ivory, live polar bears and other exotic arctic goods with Europe. “These guys were really out on the frontier,” says Andrew Dugmore, a geographer at the University of Edinburgh. “They’re not just there for a few years. They’re there for generations—for centuries.”

So what happened to them?

The Conventional Wisdom

Thomas McGovern used to think he knew. An archaeologist at Hunter College of the City University of New York, McGovern has spent more than 40 years piecing together the history of the Norse settlements in Greenland. With his heavy white beard and thick build, he could pass for a Viking chieftain, albeit a bespectacled one. Over Skype, here’s how he summarized what had until recently been the consensus view, which he helped establish: “Dumb Norsemen go into the north outside the range of their economy, mess up the environment and then they all die when it gets cold.”

Thomas McGovern (with Viking-era animal bones); The Greenlanders’ end was “grim.” (Reed Young)

Accordingly, the Vikings were not just dumb, they also had dumb luck: They discovered Greenland during a time known as the Medieval Warm Period, which lasted from about 900 to 1300. Sea ice decreased during those centuries, so sailing from Scandinavia to Greenland became less hazardous. Longer growing seasons made it feasible to graze cattle, sheep and goats in the meadows along sheltered fjords on Greenland’s southwest coast. In short, the Vikings simply transplanted their medieval European lifestyle to an uninhabited new land, theirs for the taking.

But eventually, the conventional narrative continues, they had problems. Overgrazing led to soil erosion. A lack of wood—Greenland has very few trees, mostly scrubby birch and willow in the southernmost fjords—prevented them from building new ships or repairing old ones. But the greatest challenge—and the coup de grâce—came when the climate began to cool, triggered by an event on the far side of the world.

In 1257, a volcano on the Indonesian island of Lombok erupted. Geologists rank it as the most powerful eruption of the last 7,000 years. Climate scientists have found its ashy signature in ice cores drilled in Antarctica and in Greenland’s vast ice sheet, which covers some 80 percent of the country. Sulfur ejected from the volcano into the stratosphere reflected solar energy back into space, cooling Earth’s climate. “It had a global impact,” McGovern says. “Europeans had a long period of famine”—like Scotland’s infamous “seven ill years” in the 1690s, but worse. “The onset was somewhere just after 1300 and continued into the 1320s, 1340s. It was pretty grim. A lot of people starving to death.”

Amid that calamity, so the story goes, Greenland’s Vikings—numbering 5,000 at their peak—never gave up their old ways. They failed to learn from the Inuit, who arrived in northern Greenland a century or two after the Vikings landed in the south. They kept their livestock, and when their animals starved, so did they. The more flexible Inuit, with a culture focused on hunting marine mammals, thrived.

An aerial photograph of southern Greenland. (Ciril Jazbec)

New Evidence Overturns Past Conceptions

But over the last decade a radically different picture of Viking life in Greenland has started to emerge from the remains of the old settlements, and it has received scant coverage outside of academia. “It’s a good thing they can’t make you give your PhD back once you’ve got it,” McGovern jokes. He and the small community of scholars who study the Norse experience in Greenland no longer believe that the Vikings were ever so numerous, or heedlessly despoiled their new home, or failed to adapt when confronted with challenges that threatened them with annihilation.

“It’s a very different story from my dissertation,” says McGovern. “It’s scarier. You can do a lot of things right—you can be highly adaptive; you can be very flexible; you can be resilient—and you go extinct anyway.” And according to other archaeologists, the plot thickens even more: It may be that Greenland’s Vikings didn’t vanish, at least not all of them.

A New Understanding How Vikings Lived on Greenland

 

The Vikings established two outposts in Greenland: one along the fjords of the southwest coast, known historically as the Eastern Settlement, where Gardar is located, and a smaller colony about 240 miles north, called the Western Settlement. Nearly every summer for the last several years, Konrad Smiarowski has returned to various sites in the Eastern Settlement to understand how the Vikings managed to live here for so many centuries, and what happened to them in the end.

“Probably about 50 percent of all bones at this site will be seal bones,” Smiarowski says as we stand by the drainage ditch in a light rain. He speaks from experience: Seal bones have been abundant at every site he has studied, and his findings have been pivotal in reassessing how the Norse adapted to life in Greenland. The ubiquity of seal bones is evidence that the Norse began hunting the animals “from the very beginning,” Smiarowski says. “We see harp and hooded seal bones from the earliest layers at all sites.”

A seal-based diet would have been a drastic shift from beef-and-dairy-centric Scandinavian fare. But a study of human skeletal remains from both the Eastern and Western settlements showed that the Vikings quickly adopted a new diet. Over time, the food we eat leaves a chemical stamp on our bones—marine-based diets mark us with different ratios of certain chemical elements than terrestrial foods do. Five years ago, researchers based in Scandinavia and Scotland analyzed the skeletons of 118 individuals from the earliest periods of settlement to the latest. The results perfectly complement Smiarowski’s fieldwork: Over time, people ate an increasingly marine diet, he says.
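The “chemical stamp” referred to here is typically the carbon isotope ratio (δ13C) of bone collagen, which lies between characteristic terrestrial and marine endmember values. The sketch below shows the simple two-endmember mixing arithmetic behind such estimates; the endmember values and the sample value are illustrative assumptions, not figures from the study.

```python
def percent_marine(delta13c, terrestrial=-21.0, marine=-12.5):
    """Estimate the marine fraction of diet from bone-collagen delta-13C
    (per mil) by linear mixing between two endmembers. The endmember
    values here are illustrative assumptions, not the study's numbers."""
    frac = (delta13c - terrestrial) / (marine - terrestrial)
    return max(0.0, min(1.0, frac)) * 100

# A hypothetical late-period skeleton measuring -15 per mil
print(f"{percent_marine(-15.0):.0f}% marine diet")  # ~71%
```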

Judging from the bones Smiarowski has uncovered, most of the seafood consisted of seals—few fish bones have been found. Yet it appears the Norse were careful: They limited their hunting of the local harbor seal, Phoca vitulina, a species that raises its young on beaches, making it easy prey. (The harbor seal is critically endangered in Greenland today due to overhunting.) “They could have wiped them out, and they didn’t,” Smiarowski says. Instead, they pursued the more abundant—and more difficult to catch—harp seal, Phoca groenlandica, which migrates up the west coast of Greenland every spring on the way from Canada. Those hunts, he says, must have been well-organized communal affairs, with the meat distributed to the entire settlement—seal bones have been found at homestead sites even far inland. The regular arrival of the seals in the spring, just when the Vikings’ winter stores of cheese and meat were running low, would have been keenly anticipated.

The Vikings Were Players in the Ivory Trade

The Norse harnessed their organizational energy for an even more important task: annual walrus hunts. Smiarowski, McGovern and other archaeologists now suspect that the Vikings first traveled to Greenland not in search of new land to farm—a motive mentioned in some of the old sagas—but to acquire walrus-tusk ivory, one of medieval Europe’s most valuable trade items. Who, they ask, would risk crossing hundreds of miles of arctic seas just to farm in conditions far worse than those at home? As a low-bulk, high-value item, ivory would have been an irresistible lure for seafaring traders.

After hunting walruses to extinction in Iceland, the Norse must have sought them out in Greenland. They found large herds in Disko Bay, about 600 miles north of the Eastern Settlement and 300 miles north of the Western Settlement. “The sagas would have us believe that it was Erik the Red who went out and explored [Greenland],” says Jette Arneborg, a senior researcher at the National Museum of Denmark, who, like McGovern, has studied the Norse settlements for decades. “But the initiative might have been from elite farmers in Iceland who wanted to keep up the ivory trade—it might have been in an attempt to continue this trade that they went farther west.”

A bishop’s ring and top of his crosier from the Gardar ruins. (Ciril Jazbec)

How profitable was the ivory trade? Every six years, the Norse in Greenland and Iceland paid a tithe to the Norwegian king. A document from 1327, recording the shipment of a single boatload of tusks to Bergen, Norway, shows that that boatload, with tusks from 260 walruses, was worth more than all the woolen cloth sent to the king by nearly 4,000 Icelandic farms for one six-year period.

Archaeologists once assumed that the Norse in Greenland were primarily farmers who did some hunting on the side. Now it seems clear that the reverse was true. They were ivory hunters first and foremost, their farms only a means to an end. Why else would ivory fragments be so prevalent among the excavated sites? And why else would the Vikings send so many able-bodied men on hunting expeditions to the far north at the height of the farming season? “There was a huge potential for ivory export,” says Smiarowski, “and they set up farms to support that.” Ivory drew them to Greenland, ivory kept them there, and their attachment to that toothy trove may be what eventually doomed them.

A New Theory Why Viking Greenland Settlements Failed

For all their intrepidness, though, the Norse were far from self-sufficient, and imported grains, iron, wine and other essentials. Ivory was their currency. “Norse society in Greenland couldn’t survive without trade with Europe,” says Arneborg, “and that’s from day one.”

Then, in the 13th century, after three centuries, their world changed profoundly. First, the climate cooled because of the volcanic eruption in Indonesia. Sea ice increased, and so did ocean storms—ice cores from that period contain more salt from oceanic winds that blew over the ice sheet. Second, the market for walrus ivory collapsed, partly because Portugal and other countries started to open trade routes into sub-Saharan Africa, which brought elephant ivory to the European market. “The fashion for ivory began to wane,” says Dugmore, “and there was also the competition with elephant ivory, which was much better quality.” And finally, the Black Death devastated Europe. There is no evidence that the plague ever reached Greenland, but half the population of Norway—which was Greenland’s lifeline to the civilized world—perished.

The Norse probably could have survived any one of those calamities separately. After all, they remained in Greenland for at least a century after the climate changed, so the onset of colder conditions alone wasn’t enough to undo them. Moreover, they were still building new churches—like the one at Hvalsey—in the 14th century. But all three blows must have left them reeling. With nothing to exchange for European goods—and with fewer Europeans left—their way of life would have been impossible to maintain. The Greenland Vikings were essentially victims of globalization and a pandemic.

Summary

So there is a climate angle to the story of Greenland Vikings. Unlike climate alarmists, these scientists looked deeper and found a more complicated truth. Of course, even this explanation is provisional, because we are talking about science, after all.


Head, Heart and Science Updated

A man who has not been a socialist before 25 has no heart. If he remains one after 25 he has no head.—King Oscar II of Sweden

H/T to American Elephants for linking to this Jordan Peterson video:  The Fatal Flaw in Leftist Thought.  He has an outstanding balance between head and heart, and also applies scientific analysis to issues, in this case the problem of identity politics and leftist ideology.

As usual Peterson makes many persuasive points in this talk.  I was struck by his point that we have established the boundary of extremism on the right, but no such boundary exists on the left.  Our society rejects right wingers who cross the line and assert racial superiority.  Conservative voices condemn that position along with the rest.

We know from the Soviet excesses that the left can go too far, but what is the marker?  Left wingers have the responsibility to set the boundary and sanction the extremists.  Peterson suggests that the fatal flaw is the attempt to ensure equality of outcomes for identity groups, and explains why that campaign is impossible.

From Previous Post on Head, Heart and Science

Recently I had an interchange with a friend from high school days, and he got quite upset with this video by Richard Lindzen. So much so, that he looked up attack pieces in order to dismiss Lindzen as a source. This experience impressed some things upon me.

Climate Change is Now Mostly a Political Football (at least in USA)

My friend attributed his ill humor to the current political environment. He readily bought into slanderous claims and references to Lindzen being bought and paid for by the Koch brothers. At this point, Bernie and Hillary only disagree about who is the truest believer in Global Warming. Once we get into the general election process, “Fighting Climate Change” will intensify as a wedge issue, wielded by smug righteous believers on the left against the anti-science neanderthals on the right.

So it is a hot label for social-media driven types to identify who is in the tribe (who can be trusted) and the others who can not.  For many, it is not any deeper than that.

The Warming Consensus is a Timesaver

My friend acknowledged that his mind was made up on the issue because 95+% of scientists agreed. It was extremely important for him to discredit Lindzen as untrustworthy to maintain the unanimity. When a Warmist uses: “The Scientists say: ______” , it is much the same as a Christian reference: “The Bible says: _______.” In both cases, you can fill in the blank with whatever you like, and attribute your idea to the Authority. And most importantly, you can keep the issue safely parked in a No Thinking Zone. There are plenty of confusing things going on around us, and no one wants one more ambiguity requiring time and energy.

Science Could Lose the Delicate Balance Between Head and Heart

Decades ago Arthur Eddington wrote about the tension between attitudes of artists and scientists in their regarding nature. On the one hand are people filled with the human impulse to respect, adore and celebrate the beauty of life and the world. On the other are people driven by the equally human need to analyze, understand and know what to expect from the world. These are Yin and Yang, not mutually exclusive, and all of us have some of each.

Most of us can recall the visceral response in the high school biology lab when assigned to dissect a frog. Later on, crayfish were preferred (less disturbing to artistic sensibilities). For all I know, recent generations have been spared this rite of passage, to their detriment. For in the conflict between appreciating things as they are, and the need to know why and how they are, we are exposed to deeper reaches of the human experience. If you have ever witnessed, as I have, a human body laid open on an autopsy table, then you know what I mean.

Anyone, scientist or artist, can find awe in contemplating the mysteries of life. There was a time when it was feared that the march of science was so advancing the boundaries of knowledge that the shrinking domain of the unexplained left ever less room for God and religion. Practicing scientists knew better. Knowing more leads to discovering more unknowns; answers produce cascades of new questions. The mystery abounds, and the discovery continues. Eddington:

It is pertinent to remember that the concept of substance has disappeared from fundamental physics; what we ultimately come down to is form. Waves! Waves!! Waves!!! Or for a change — if we turn to relativity theory — curvature! Energy which, since it is conserved, might be looked upon as the modern successor of substance, is in relativity theory a curvature of space-time, and in quantum theory a periodicity of waves. I do not suggest that either the curvature or the waves are to be taken in a literal objective sense; but the two great theories, in their efforts to reduce what is known about energy to a comprehensible picture, both find what they require in a conception of “form”.

What do we really observe? Relativity theory has returned one answer — we only observe relations. Quantum theory returns another answer — we only observe probabilities.

It is impossible to trap modern physics into predicting anything with perfect determinism because it deals with probabilities from the outset.
― Arthur Stanley Eddington

Works by Eddington on Science and the Natural World are here.

Summary

The problem with science today lies not with the scientists themselves, but with those attempting to halt its progress for the sake of political power and wealth.

Eddington:
Religious creeds are a great obstacle to any full sympathy between the outlook of the scientist and the outlook which religion is so often supposed to require … The spirit of seeking which animates us refuses to regard any kind of creed as its goal. It would be a shock to come across a university where it was the practice of the students to recite adherence to Newton’s laws of motion, to Maxwell’s equations and to the electromagnetic theory of light. We should not deplore it the less if our own pet theory happened to be included, or if the list were brought up to date every few years. We should say that the students cannot possibly realise the intention of scientific training if they are taught to look on these results as things to be recited and subscribed to. Science may fall short of its ideal, and although the peril scarcely takes this extreme form, it is not always easy, particularly in popular science, to maintain our stand against creed and dogma.
― Arthur Stanley Eddington

But enough about science. It’s politicians we need to worry about:

Footnote:

“Asked in 1919 whether it was true that only three people in the world understood the theory of general relativity, [Eddington] allegedly replied: ‘Who’s the third?’”

Postscript:  For more on how we got here see Warmists and Rococo Marxists.

The Sky is Not Falling

Bjorn Lomborg brings perspective to doomsday hyperbole in his article The Sky Is Not Falling.  Excerpts in italics below with my bolds.

Main Point: Long, slow, positive trends don’t make it to the front page or to water-cooler conversations. So we develop peculiar misperceptions, especially the idea that a preponderance of things are going wrong.

When I published The Skeptical Environmentalist in 2001, I pointed out that the world was getting better in many respects. Back then, this was viewed as heresy, as it punctured several common and cherished misperceptions, such as the idea that natural resources were running out, that an ever-growing population was leaving less to eat, and that air and water were becoming ever-more polluted.

In each case, careful examination of the data established that the gloomy scenarios prevailing at the time were exaggerated. While fish stocks, for example, are depleted because of a lack of regulation, we can actually eat more fish than ever, thanks to the advent of aquaculture. Worries that we are losing forests overlook the reality that as countries become richer, they increase their forest cover.

Since I wrote the book, the world has only become better, according to many important indicators. We have continued to see meaningful reductions in infant mortality and malnutrition, and there have been massive strides toward eradication of polio, measles, malaria, and illiteracy.

By focusing on the most lethal environmental problem – air pollution – we can see some of the reasons for improvement. As the world developed, deaths from air pollution have declined dramatically, and that trend is likely to continue. Looking at a polluted city in a country like China might suggest otherwise, but the air inside the homes of most poor people is about ten times more polluted than the worst outdoor air in Beijing. The most serious environmental problem for humans is indoor air pollution from cooking and heating with dirty fuels like wood and dung – which is the result of poverty.

In 1900, more than 90% of all air pollution deaths resulted from indoor air pollution. Economic development has meant more outdoor pollution, but also much less indoor pollution. Reductions in poverty have gone hand in hand with a four-fold reduction in global air pollution mortality. Yet more people today still die from indoor air pollution than from outdoor pollution. Even in China, while outside air has become a lot more polluted, poverty reduction has caused a lower risk of total air pollution death. And as countries become richer, they can afford to regulate and cut even outdoor air pollution.

Two hundred years ago, almost every person on the planet lived in poverty, and a tiny elite in luxury. Today just 9.1% of the population, or almost 700 million people, lives on less than $1.90 per day (or what used to be one dollar in 1985). And just in the last 20 years, the proportion of people living in extreme poverty has almost halved. Yet few of us know this. The Gapminder foundation surveyed the UK and found that just 10% of people believe poverty has decreased. In South Africa and in Sweden, more people believe extreme poverty has doubled than believe – correctly – that it has plummeted.

How do we continue our swift progress? There has been no shortage of well-intentioned policy interventions, so we have decades of data showing what works well and what doesn’t.

In the latter category, even well-considered ideas from the world’s most eminent thinkers can fall short. The ambitious Millennium Villages concept was supposed to create simultaneous progress on multiple fronts, producing “major results in three or fewer years,” according to founder Jeffrey D. Sachs. But a study by the United Kingdom’s Department for International Development shows the villages had “moderately positive impacts,” and “little overall impact on poverty.”

It’s more constructive to focus on what works. Global analysis of development targets for Copenhagen Consensus by a panel of Nobel laureate economists showed where more money can achieve the most. They concluded that improved access to contraception and family-planning services would reduce maternal and child mortality, and also – through a demographic dividend – increase economic growth.

Likewise, research assessing the best development policies for Haiti found that focusing on improvements in nutrition through the use of fortified flour would transform the health of young children, creating lifelong benefits.

And the most powerful weapon in the fight against poverty is the one that got us where we are today: broad-based economic growth. Over the past 30 years, China’s growth spurt alone lifted an unprecedented 680 million people above the poverty line.

Humanity’s success in reducing poverty is an extraordinary achievement, and one that we are far too reticent about acknowledging. We need to make sure that we don’t lose sight of what got us this far – and what justifies the hope of an even better future.

Background: Why climate activism has become a doomsday cult; Clexit; Gloom and Doom

Astronomy is Science. Climatology Not.

A nice tongue-in-cheek essay appeared in the Atlantic, The Eclipse Conspiracy: Something doesn’t add up.

It is a whimsical spoof of anyone skeptical that the solar eclipse will happen tomorrow. (Excerpts)

Meanwhile the scientists tell us we can’t look at it without special glasses because “looking directly at the sun is unsafe.”

That is, of course, unless we wear glasses that are on a list issued by these very same scientists. Meanwhile, corporations like Amazon are profiting from the sale of these eclipse glasses. Is anyone asking how many of these astronomers also, conveniently, belong to Amazon Prime?

Let’s follow the money a little further. Hotels along the “path of totality”—a region drawn up by Obama-era NASA scientists—have been sold out for months. Some of those hotels are owned and operated by large multinational corporations. Where else do these hotels have locations? You guessed it: Washington, D.C.

In fact the entire politico-scientifico-corporate power structure is aligned behind the eclipse. This includes the mainstream media. How many news stories have you read about how the eclipse won’t happen?

That’s a great example of “conspiracy ideation” and a subtle dig at people who don’t trust NASA on climate matters. In fact, many of the real NASA scientists are extremely critical of NASA’s participation in climate activism.  Journalists or Senators who raise NASA as evidence of climate change should be directed to The Right Climate Stuff, where esteemed NASA scientists give plenty of good reasons to doubt NASA on this topic.

Bottom Line: A Real Science Makes Predictions that Come True.

The article, perhaps unwittingly, shows why Astronomy is a real science we can trust while Climatology is faith-based, like Astrology. When the eclipse happens, it confirms that astronomers have knowledge about the behavior of planetary bodies. When numerous predictions of climate catastrophes go unfulfilled, it demonstrates scientists’ lack of knowledge about our climate system. Anyone claiming certainty about the climate is exercising their religious freedom, but not doing science.

How Science Is Losing Its Humanity

The Closing of the Scientific Mind is a plea for scientists to celebrate and enhance humanity rather than belittle human life.  Author David Gelernter is a professor of computer science at Yale. His book Subjectivism: The Mind from Inside will be published by Norton later this year.  Excerpts below.

The huge cultural authority science has acquired over the past century imposes large duties on every scientist. Scientists have acquired the power to impress and intimidate every time they open their mouths, and it is their responsibility to keep this power in mind no matter what they say or do. Too many have forgotten their obligation to approach with due respect the scholarly, artistic, religious, humanistic work that has always been mankind’s main spiritual support. Scientists are (on average) no more likely to understand this work than the man in the street is to understand quantum physics. But science used to know enough to approach cautiously and admire from outside, and to build its own work on a deep belief in human dignity. No longer.

Belittling Humanity.

Today science and the “philosophy of mind”—its thoughtful assistant, which is sometimes smarter than the boss—are threatening Western culture with the exact opposite of humanism. Call it roboticism. Man is the measure of all things, Protagoras said. Today we add, and computers are the measure of all men.

Many scientists are proud of having booted man off his throne at the center of the universe and reduced him to just one more creature—an especially annoying one—in the great intergalactic zoo. That is their right. But when scientists use this locker-room braggadocio to belittle the human viewpoint, to belittle human life and values and virtues and civilization and moral, spiritual, and religious discoveries, which is all we human beings possess or ever will, they have outrun their own empiricism. They are abusing their cultural standing. Science has become an international bully.

The Closing of the Scientific Mind.

That science should face crises in the early 21st century is inevitable. Power corrupts, and science today is the Catholic Church around the start of the 16th century: used to having its own way and dealing with heretics by excommunication, not argument.

Science is caught up, also, in the same educational breakdown that has brought so many other proud fields low. Science needs reasoned argument and constant skepticism and open-mindedness. But our leading universities have dedicated themselves to stamping them out—at least in all political areas. We routinely provide superb technical educations in science, mathematics, and technology to brilliant undergraduates and doctoral students. But if those same students have been taught since kindergarten that you are not permitted to question the doctrine of man-made global warming, or the line that men and women are interchangeable, or the multiculturalist idea that all cultures and nations are equally good (except for Western nations and cultures, which are worse), how will they ever become reasonable, skeptical scientists? They’ve been reared on the idea that questioning official doctrine is wrong, gauche, just unacceptable in polite society. (And if you are president of Harvard, it can get you fired.)

Beset by all this mold and fungus and corruption, science has continued to produce deep and brilliant work. Most scientists are skeptical about their own fields and hold their colleagues to rigorous standards. Recent years have seen remarkable advances in experimental and applied physics, planetary exploration and astronomy, genetics, physiology, synthetic materials, computing, and all sorts of other areas.

But we do have problems, and the struggle of subjective humanism against roboticism is one of the most important.

The moral claims urged on man by Judeo-Christian principles and his other religious and philosophical traditions have nothing to do with Earth’s being the center of the solar system or having been created in six days, or with the real or imagined absence of rational life elsewhere in the universe. The best and deepest moral laws we know tell us to revere human life and, above all, to be human: to treat all creatures, our fellow humans and the world at large, humanely. To behave like a human being (Yiddish: mensch) is to realize our best selves.

No other creature has a best self.

This is the real danger of anti-subjectivism, in an age where the collapse of religious education among Western elites has already made a whole generation morally wobbly. When scientists casually toss our human-centered worldview in the trash with the used coffee cups, they are re-smashing the sacred tablets, not in blind rage as Moses did, but in casual, ignorant indifference to the fate of mankind.

A world that is intimidated by science and bored sick with cynical, empty “postmodernism” desperately needs a new subjectivist, humanist, individualist worldview. We need science and scholarship and art and spiritual life to be fully human. The last three are withering, and almost no one understands the first.

At first, roboticism was just an intellectual school. Today it is a social disease. Some young people want to be robots (I’m serious); they eagerly await electronic chips to be implanted in their brains so they will be smarter and better informed than anyone else (except for all their friends who have had the same chips implanted). Or they want to see the world through computer glasses that superimpose messages on poor naked nature. They are terrorist hostages in love with the terrorists.

All our striving for what is good and just and beautiful and sacred, for what gives meaning to human life and makes us (as Scripture says) “just a little lower than the angels,” and a little better than rats and cats, is invisible to the roboticist worldview. In the roboticist future, we will become what we believe ourselves to be: dogs with iPhones. The world needs a new subjectivist humanism now—not just scattered protests but a growing movement, a cry from the heart.

Footnote: A related post provides additional background: Head, Heart and Science

Mind-Blowing Science

Cometh the man: Francis Bacon’s insight was that the process of discovery was inherently algorithmic. Photo courtesy NPG/Wikipedia

In refreshing contrast to Science Marches promoting slogans and tenets of climate dogma, here is an insightful look at a fruitful future for the scientific endeavor.

The article is Science has outgrown the human mind and its limited capacities by Ahmed Alkhateeb, a molecular cancer biologist at Harvard Medical School. (bolded text is my emphasis)

It starts with a great quote:

The duty of man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads and … attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency.
– Ibn al-Haytham (965-1040 CE)

First, the author reminds readers of the current sorry state of scientific research: an overwhelming quantity of papers of diminishing quality (bogus findings, unreplicable studies, sloppy methodology, etc.). He then raises an intriguing question:

One promising strategy to overcome the current crisis is to integrate machines and artificial intelligence in the scientific process. Machines have greater memory and higher computational capacity than the human brain. Automation of the scientific process could greatly increase the rate of discovery. It could even begin another scientific revolution. That huge possibility hinges on an equally huge question: Can scientific discovery really be automated?

Alkhateeb then turns to Bacon’s formalization of the scientific process:

The Baconian method attempted to remove logical bias from the process of observation and conceptualisation, by delineating the steps of scientific synthesis and optimizing each one separately. Bacon’s vision was to leverage a community of observers to collect vast amounts of information about nature and tabulate it into a central record accessible to inductive analysis. In Novum Organum, he wrote: ‘Empiricists are like ants; they accumulate and use. Rationalists spin webs like spiders. The best method is that of the bee; it is somewhere in between, taking existing material and using it.’

The Baconian method is rarely used today. It proved too laborious and extravagantly expensive; its technological applications were unclear. However, at the time the formalization of a scientific method marked a revolutionary advance. Before it, science was metaphysical, accessible only to a few learned men, mostly of noble birth. By rejecting the authority of the ancient Greeks and delineating the steps of discovery, Bacon created a blueprint that would allow anyone, regardless of background, to become a scientist.

Bacon’s insights also revealed an important hidden truth: the discovery process is inherently algorithmic. It is the outcome of a finite number of steps that are repeated until a meaningful result is uncovered. Bacon explicitly used the word ‘machine’ in describing his method. His scientific algorithm has three essential components:

  • First, observations have to be collected and integrated into the total corpus of knowledge.
  • Second, the new observations are used to generate new hypotheses.
  • Third, the hypotheses are tested through carefully designed experiments.

If science is algorithmic, then it must have the potential for automation. This futuristic dream has eluded information and computer scientists for decades, in large part because the three main steps of scientific discovery occupy different planes. Observation is sensual; hypothesis-generation is mental; and experimentation is mechanical. Automating the scientific process will require the effective incorporation of machines in each step, and in all three feeding into each other without friction. Nobody has yet figured out how to do that.
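
To make the algorithmic claim concrete, here is a minimal sketch (in Python) of the observe / hypothesise / test loop described above. It is illustrative only: the collect_observations, generate_hypotheses, and run_experiment callables are hypothetical placeholders for the sensual, mental, and mechanical steps, not real library functions.

```python
# A minimal sketch of the three-step discovery loop described above.
# The callables passed in (collect_observations, generate_hypotheses,
# run_experiment) are hypothetical placeholders, not real library functions.

def baconian_discovery(corpus, collect_observations, generate_hypotheses,
                       run_experiment, max_iterations=100):
    """Repeat observe -> hypothesise -> test until a hypothesis is confirmed."""
    for _ in range(max_iterations):
        # Step 1: fold new observations into the total corpus of knowledge.
        corpus = corpus | collect_observations()

        # Step 2: use the enlarged corpus to propose new hypotheses.
        for hypothesis in generate_hypotheses(corpus):
            # Step 3: test each hypothesis with a designed experiment.
            if run_experiment(hypothesis):
                return hypothesis, corpus  # a meaningful result was uncovered

    return None, corpus  # nothing confirmed within the iteration budget
```

The point of the sketch is only that each step is a repeatable procedure; the hard part, as the article goes on to say, is building machines that can actually fill in those three placeholders and feed into one another without friction.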

Experimentation has seen the most substantial recent progress. For example, the pharmaceutical industry commonly uses automated high-throughput platforms for drug design.

Automated hypothesis-generation is less advanced, but the work of Don Swanson in the 1980s provided an important step forward. He demonstrated the existence of hidden links between unrelated ideas in the scientific literature; using a simple deductive logical framework, he could connect papers from various fields with no citation overlap. In this way, Swanson was able to hypothesise a novel link between dietary fish oil and Raynaud’s syndrome without conducting any experiments or being an expert in either field.
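
Swanson’s hidden-links idea is simple enough to illustrate with a toy example. The sketch below is my own construction with an invented mini-corpus: it surfaces candidate A-C connections via intermediate terms B that co-occur with both A and C, even though A and C never appear together in any paper.

```python
# A toy sketch of Swanson-style literature-based discovery: find "hidden"
# A-C links via shared intermediate terms B, where A and C never co-occur.
# The mini-corpus of term sets below is invented purely for illustration.

papers = [
    {"fish oil", "blood viscosity"},           # field 1
    {"blood viscosity", "raynauds syndrome"},  # field 2 (no citation overlap)
    {"fish oil", "platelet aggregation"},
    {"platelet aggregation", "raynauds syndrome"},
]

def cooccurring(term, papers):
    """All terms that appear in the same paper as `term`."""
    return {t for p in papers if term in p for t in p} - {term}

def hidden_links(a, c, papers):
    """Intermediate terms linking a and c, provided a and c never co-occur."""
    if any(a in p and c in p for p in papers):
        return set()  # the link is already explicit in the literature
    return cooccurring(a, papers) & cooccurring(c, papers)

print(hidden_links("fish oil", "raynauds syndrome", papers))
# -> {'blood viscosity', 'platelet aggregation'}
```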

The most challenging step in the automation process is how to collect reliable scientific observations on a large scale. There is currently no central data bank that holds humanity’s total scientific knowledge on an observational level. Natural-language processing has advanced to the point at which it can automatically extract not only relationships but also context from scientific papers. However, major scientific publishers have placed severe restrictions on text-mining. More important, the text of papers is biased towards the scientist’s interpretations (or misconceptions), and it contains synthesised complex concepts and methodologies that are difficult to extract and quantify.
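
As a rough illustration of what such relationship extraction looks like in practice, here is a small sketch using spaCy’s dependency parser to pull crude subject-verb-object triples out of sentence text. spaCy is my choice of tool for the example; the article names no specific library, and real literature-mining pipelines are far more sophisticated than this.

```python
# A minimal sketch of mining relationships from paper text with spaCy's
# dependency parser (my choice of tool; the article names no library).
# Requires: pip install spacy && python -m spacy download en_core_web_sm

import spacy

nlp = spacy.load("en_core_web_sm")

def extract_relations(text):
    """Yield rough (subject, verb, object) triples from the text."""
    doc = nlp(text)
    for token in doc:
        # A nominal subject attached to a verb anchors a candidate relation.
        if token.dep_ == "nsubj" and token.head.pos_ == "VERB":
            verb = token.head
            for child in verb.children:
                if child.dep_ == "dobj":
                    yield (token.text, verb.lemma_, child.text)

abstract = "Dietary fish oil reduces blood viscosity in patients."
print(list(extract_relations(abstract)))
# Expected output is roughly [('oil', 'reduce', 'viscosity')],
# depending on the parsing model.
```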

Summary

Nevertheless, recent advances in computing and networked databases make the Baconian method practical for the first time in history. And even before scientific discovery can be automated, embracing Bacon’s approach could prove valuable at a time when pure reductionism is reaching the edge of its usefulness.

Such an approach would enable us to generate novel hypotheses that have higher chances of turning out to be true, to test those hypotheses, and to fill gaps in our knowledge. It would also provide a much-needed reminder of what science is supposed to be: truth-seeking, anti-authoritarian, and limitlessly free.