Coronavirus 101

The best overview I have seen comes from Rud Istvan at Wuhan Coronavirus–a WUWT Scientific Commentary. Excerpts in italics with my bolds.

Basic Virology

What follows perhaps oversimplifies an unavoidably complex topic, like sea level rise or atmospheric feedbacks to CO2 in climate science.

There are three main types of human infectious microorganisms: bacteria, fungi, and viruses. (I skip important complicating stuff like malaria or giardia.) Most human bacteria are helpful; the best example is the vast gut biome. Some bacteria (typhoid, plague, tetanus, gangrene, sepsis, strep) and certain classes of fungi (candida yeasts) can cause serious human disease, as do some human viruses (polio, smallpox, measles, yellow fever, Zika, Ebola).

There are two basic forms of bacteria (Prokaryotes and Archaea, neither having a genetic cell nucleus). Methanogens are exclusively Archaean; most methanotrophs are Prokaryotes. Membrane-bound photosynthetic organelle-containing cyanobacteria are the evolutionary transition from bacteria to all Eukaryotes (cells having a separate membrane-bound genetic nucleus) like phytoplankton, fungi, and us. Both Prokaryote and Eukaryote single-cell (and all higher) life forms have a basic thing in common—they can reproduce by themselves in an appropriate environment.

Viruses are none of the above. They are not ‘alive’; they are genetic parasites. They can only reproduce by infecting a living cell that can already reproduce itself. The ‘nonliving’ viral genetic machinery hijacks the reproductive machinery of a living host cell and uses it to replicate virions (individual virus particles) until the host cell ‘bursts’ and the new virions bud out in search of new hosts.

There are two basic virus forms, and two basic genetics.

Form

1. Viruses are either ‘naked’ or ‘enveloped’ (see image at top). A naked virus has just two structural components: an inner genetic code (only the two basic types–DNA and RNA–are important for this comment) and an outer protective ‘capsid’ viral protein coat. An example is the cold-producing rhinovirus in the family picornavirus (which also includes polio).

2. Enveloped viruses like influenza and corona (Wuhan) include a third outer lipid membrane layer outside the capsid, studded with partly viral and partly host proteins acquired from the host cell at budding. These proteins are used to infect the next host cell by binding to its surface proteins. The classic example is influenza (internal genetic machinery A or B), designated HxNy for the flavor of the (H) hemagglutinin and (N) neuraminidase protein variants on the lipid membrane surface.

Genetic Type

The second major distinction is the basic genetics. Viral genetic machinery can be either RNA based or DNA based. There is a huge difference. All living cells (the viral hosts) have evolved DNA copy-error correction machinery, but not RNA copy-error correction machinery. That means RNA based viruses accumulate enormous ‘transcription’ errors with each budding. As an actual virology estimate, a single rhinovirus-infected mucosal cell might produce 100,000 HRV virion copies before budding. But say 99% are defective, unviable transcription errors. That math still says each mucosal cell infected by a single HRV virion will produce about 1,000 infective virions despite the severe RNA mutation problem. The practical clinical implication is that when you first ‘catch’ an HRV cold, the onset to clinical symptoms (runny nose) is very fast, usually less than 24 hours.
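A quick back-of-envelope check of that arithmetic (my own sketch, not Istvan’s; the figures are the ones quoted above):

```python
# Rough check of the RNA copy-error arithmetic quoted above.
# Figures from the text; the 99% defective share is the author's "say" estimate.
virions_per_cell = 100_000  # HRV virion copies produced before budding
defective_share = 0.99      # assumed fraction lost to transcription errors

infective = virions_per_cell * (1 - defective_share)
print(f"~{infective:,.0f} infective virions per infected cell")  # ~1,000
```

So even with 99% of copies unviable, one infected cell still seeds on the order of a thousand new infective virions, which is why onset is so fast.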

This also explains why adenovirus is not very infective. It is a DNA virus, so it mutates slowly, so the immune memory is longer lasting. In fact, in 2011 the FDA approved (for military use only) a vaccine against adenoviral pharyngoconjunctivitis, which was a big problem in basic training. (AKA PCF, or PC Fever: highly contagious, very debilitating, and, unlike the similar high-fever strep throat, untreatable with antibiotics.) In the first two years of mandatory PCF vaccine use, military PCF disease incidence fell 100-fold.

Upper Respiratory Tract Viral Infections

So-called URIs have only two causes in humans: common colds and influenza. Colds have three distinguishing symptoms–runny nose, sore throat, and cough—all caused not by the virus but by the immune system response to it. Influenza adds two more symptoms: fever and muscular ache. Physicians know this well, almost never test for the actual virus serotype, and prescribe aspirin for flu but not colds. Much of what follows in this section is based on somewhat limited actual data, since there has been little clinical motivation to do extensive research. A climate analogy would be sea surface temperature and ocean heat content before ARGO. Are there estimates? Yes. Are there good estimates? No.

Common cold URIs stem from three viral types: RNA rhinoviruses (of which there are about 99 serotypes, but nobody knows for sure), causing about 75% of all common colds; RNA coronaviruses, for which (excluding SARS, MERS, and Wuhan) there are only 4 known human serotypes, causing about 20% of common colds; and DNA adenoviruses (about 60 human serotypes, including many non-cold serotypes causing conjunctivitis—pink eye—and pharyngoconjunctivitis), causing about 5% of common colds.

Available data says rhinovirus serotypes are ubiquitous but individually not terribly infective, coronavirus serotypes are few but VERY infective, and adenoviruses are neither. This explains, given the previous RNA mutation problem, why China and the US are undertaking strict Wuhan quarantine measures.

This also explains why there is no possibility of a common cold vaccine: too many viral targets. You catch a cold, you get temporary (RNA viruses are constantly mutating) immunity to that virus. Your next cold is simply a different virus, which is why the average adult has 2-4 colds per year.

A clinical sidebar about URIs. Both colds and flus are worse in winter, because people are more often indoors in closer infectious proximity. But colds have much less seasonality than flus. Summer colds are common. Summer flus aren’t.

There is a differential route-of-transmission explanation for this empirical observation. Colds are spread primarily by contact, while flus are spread primarily by inhalation. You have a cold, you politely (as taught) cover your sneeze or cough with a hand, then open a door using its doorknob, depositing your fresh virions on it. The person behind you opens the door, picking up your virions, then touches mouth or nose (or eyes) before washing hands. That person is now probably infected. This is also why alcohol hand sanitizers have been clinically proven ineffective against colds. They will denature enveloped viruses like corona and influenza, but have basically no effect on the far more prevalent naked rhinos.

There is an important corollary to this contact transmission fact. Infectivity via the contact route of transmission depends on how long a virion remains infective on an inanimate surface. This depends on the virion, the surface (hard doorknob or ‘soft’ cardboard packaging), and the environment (humidity, temperature). The general epidemiological rule of thumb for common colds and flus is at most 4 days viability. This corollary is crucial for Wuhan containment, discussed below.

The main flu infection route is inhalation of infected aspirate. This does not require a cough, merely an infected person breathing in your vicinity. In winter, when you breathe out below freezing, the visible ‘smoke’ is just aspirate that ‘freezes’. Football aficionados see this at Soldier and Lambeau Fields every winter watching Bears and Packers games. The residence time of the very fine micro-droplets in the air depends on humidity. With higher humidity, they don’t dry out as fast, so they remain heavier and sink faster to where they don’t get inhaled—typically minutes. In typical winter indoor low humidity, they dry rapidly and remain circulating in the air much longer—typically hours. This is also why alcohol hand sanitizers are ineffective against influenza; the main route of flu transmission has nothing to do with hands.
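As a rough physics illustration of that size effect (my own sketch, not from the post), Stokes’ law gives the terminal settling speed of a small droplet; dried-out droplet nuclei are far smaller than fresh droplets, so they settle orders of magnitude more slowly. The droplet sizes and fall height below are illustrative assumptions, and the model ignores evaporation and air currents:

```python
# Illustrative only: Stokes-law settling times for respiratory droplets.
# Assumptions: droplets ~ water density, still air, 1.5 m fall height.
G = 9.81           # gravity, m/s^2
RHO = 1000.0       # droplet density, kg/m^3 (water)
MU_AIR = 1.8e-5    # dynamic viscosity of air, Pa*s
FALL_HEIGHT = 1.5  # roughly breathing height to floor, m

def settle_time_s(diameter_m: float) -> float:
    """Time to fall FALL_HEIGHT at Stokes terminal velocity."""
    v = RHO * G * diameter_m ** 2 / (18 * MU_AIR)  # terminal velocity, m/s
    return FALL_HEIGHT / v

for d_um in (100, 10, 1):  # fresh large droplet, small droplet, dried nucleus
    t = settle_time_s(d_um * 1e-6)
    print(f"{d_um:>3} micron droplet: ~{t:,.0f} s (~{t / 60:,.1f} min)")
```

A 100-micron droplet falls out of the air in seconds, a 10-micron droplet in minutes, and a 1-micron dried nucleus takes hours—consistent with the minutes-versus-hours contrast described above.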

[Note: The flu virus is contained in droplets that become airborne by sneezing or coughing.  Unless you inhale the air sneezed or coughed by an infected person, the main risk is direct skin contact with a surface on which the droplets landed.]

Wuhan Coronavirus

As of this writing, there are a reported 37,500 confirmed infections and 811 deaths. Those numbers are about as reliable as GAST in climate change. Many people do not have access to definitive diagnostic kits; China has a habit of reporting an underlying comorbidity (emphysema, COPD, asthma) as the cause of death; and the now-known disease progression means deaths lag diagnoses by 2-3 weeks. A climate analogy is the US surface temperature measurement problems uncovered by the WUWT Surface Stations project.

There are a number of important general facts we DO now know, which together provide directional guidance about whether anyone should be concerned or alarmed. The information is pulled from reasonably reliable sources like WHO, CDC, NIH, and JAMA or NEJM case reports. Plus, we have an inadvertent cruise ship laboratory experiment presently underway in Japan.

The incubation period is about 10-14 days until symptoms (fever, cough) appear. That is VERY BAD news, because it has been demonstrated beyond question (Germany, Japan, US) that human-to-human transmission PRECEDES symptoms by about a week. This is unlike SARS, where all air travelers got a fever screening (mine was to and from a medical conference in Panama City). Since SARS transmission did not precede symptoms, fever screening sufficed; with Wuhan, fever screening is futile. That is why all the 14-day quarantines were imposed last week; the only way to quarantine Wuhan coronavirus with certainty is to wait for symptoms to appear or not. Quarantine is disruptive and expensive, but very effective.

Once symptoms appear, disease progression is now predictable from sufficient hundreds of case reports—the usual corona cold progression for about 7-10 days. But then there is a bifurcation. 75-80% of patients start improving. The other 20-25% begin a rapid decline into lower respiratory pneumonia. It is in a subset of these that the deaths occur, with or without ICU intervention. And as whistleblower Dr. Li’s death in Wuhan proves, ICU intervention is no panacea. He was an otherwise healthy 34-year-old doctor.

We also now know, from a JAMA report Friday 2/7/2020 analyzing the spread of Wuhan coronavirus inside a Wuhan hospital, that 41% of patients were infected within the hospital—meaning the ubiquitous surgical masks DO NOT work as prevention. The shortage of masks is symptomatic of panic, not efficacy.

Scientists last week also traced the source. There are two clues. The Wuhan virus is now known to be 96% genetically similar to an endemic Asian bat corona. Like SARS and ‘Spanish flu’, it jumped to humans via an intermediate mammal species. No bats were sold in the Huanan wet market in Wuhan. But pangolins were, and as of Friday there is a 99% genetic match between pangolin corona and Wuhan human corona. Trade in wild pangolins is illegal, but the meat is considered a delicacy in China and Vietnam, and pangolins WERE sold in the Wuhan wet market. This is similar to SARS in 2003, when a bat corona jumped to humans via live civets in another Chinese wet market. Xi’s ‘simple’ permanent SARS/Wuhan coronavirus solution is to ban Chinese wet markets.

Conclusions

Should the world be concerned? Perhaps.

Will there be a terrible Wuhan pandemic? Probably not.

Again, the analogy to climate change alarm is striking: alarm based on lack of underlying scientific knowledge plus unfounded worst-case projections.

Proven human to human transmissibility and the likely (since proven) ineffectiveness of surgical masks were real early concerns. But the Wuhan virus will probably not become pandemic, or even endemic.

We know it can be isolated and transmission stopped with 14-day quarantine followed by symptomatic clinical isolation and ICU treatment if needed.

We know from infectivity duration on surfaces that it cannot be spread from China via ship cargo. And cargo ship crews can simply be denied shore leave until their symptomless ocean transit time plus port time passes 14 days.

Eliminating Chinese wet markets and the illegal trade in pangolins prevents another such outbreak from ever emerging from the wild—unlike Ebola, unfortunately, which offers no comparable fix.

Footnote:  This is of particular interest to me since my wife and I are presently on a cruise in the Indian Ocean ending in Singapore.  We were supposed to fly from there to Shanghai connecting to Air Canada back to Montreal.  Those AC flights were cancelled for February and unlikely to be available for our transit.

Activist-Legal Complex Perverts Science

This article was published at the American Council on Science and Health: Activist-Legal Complex Will Destroy American Science And Industry, by Alex Berezow and Josh Bloom. Excerpts in italics with my bolds and added images.

American science and industry are under threat from this complex, an unholy alliance of activists and trial lawyers who deploy various pseudoscientific tricks to score multibillion-dollar lawsuits against large companies. No industry is safe from these deceptions.

In his Farewell Address, President Eisenhower warned of the military-industrial complex, a partnership between the military and defense industry that was financially incentivized to promote war over peace. Today, we face a different threat – the “activist-legal complex,” which is responsible for scoring multibillion-dollar verdicts against some of America’s biggest companies.

One partner in this unholy alliance is the activists, who falsely claim that the food we eat, the water we drink, the air we breathe, and the products we use are all secretly killing us. They pervert scientific uncertainty to nefarious ends by magnifying hypothetical risks and downplaying relevant facts, such as level of exposure.

They exploit widespread misunderstanding of science and a general hatred of “corporations” – especially those that manufacture chemicals, drugs, or consumer products – to instill fear into the public.

The other partner is the legal industry, which relies on activist scaremongering to win jackpot verdicts. They identify sympathetic patients, often suffering from cancer or some other debilitating disease, and blame their maladies on a company with deep pockets. They buy television commercials to recruit more “victims” for the inevitable class-action lawsuit.

This formula works nearly every time, and the result is always the same: A giant bag of money. In this way, the activist-legal complex recently won a $4.7 billion lawsuit against Johnson & Johnson’s baby powder for causing ovarian cancer and a $2 billion lawsuit (subsequently reduced to merely $87 million) against Monsanto’s glyphosate for non-Hodgkin’s lymphoma.

There is no credible scientific evidence in support of either verdict.

But the absence of genuine scientific evidence is typically irrelevant in trials of this type. With the aid of flawed or cherry-picked toxicological and epidemiological studies – often published by activists in low-quality journals – the activist-legal complex can subvert science using well-established pseudoscientific tricks.

The first involves undermining long-held truths about toxicity. Thanks to Paracelsus, it has been known since the 16th Century that “the dose makes the poison.” Yet, the activist-legal complex promotes an alternate theory, namely that the mere presence of a chemical is an indicator of its potential harm. It is not.

Given advances in analytical instrumentation, it is now possible to detect almost any chemical in your body or in the environment at levels as minute as “one part per trillion,” which is roughly equivalent to a drop in an Olympic-sized swimming pool. There are very few, if any, chemicals on Earth that pose a health risk at such a low concentration.
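The pool analogy checks out to within an order of magnitude; here is my own quick arithmetic (the drop size is an assumption), not the authors’:

```python
# Sanity check of the "drop in an Olympic pool" analogy for one part per trillion.
DROP_ML = 0.05               # a typical drop (~20 drops per mL), an assumption
POOL_ML = 2_500_000 * 1_000  # Olympic pool: ~2,500 m^3 = 2,500,000 L, in mL

fraction = DROP_ML / POOL_ML
print(f"one drop / pool = {fraction:.0e}")            # ~2e-11
print(f"~ {fraction * 1e12:.0f} parts per trillion")  # ~20 ppt
```

A single drop works out to roughly 20 parts per trillion of the pool—the same order of magnitude, so “roughly equivalent” is fair.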

But using the activist-legal complex’s doctrine – that we are constantly swimming in a sea of harmful chemicals – it is easy for lawyers to argue that any exposure to a potential carcinogen could be responsible for a cancer that develops decades later. Usually, the chemicals that are blamed have been used for decades and have been present in our bodies in tiny amounts all along without causing health concerns.

The second trick is to play on society’s belief that regulators and activists are righteous, unbiased people with no conflicts of interest. For example, jurors in the Monsanto glyphosate trial heard that the International Agency for Research on Cancer (IARC), a subsidiary of the World Health Organization, classified glyphosate as a probable human carcinogen. What they did not hear is that one of the key members of the IARC panel received £120,000 from trial lawyers who stood to benefit financially from the classification.

The third trick is to foment conspiracy theories, usually involving a few old, obscure documents or emails taken out of context. The activist-legal complex uses this tactic to convince jurors, already eager to “punish” Big Business, that the company was engaged in malfeasance.

Game, set, match. The only question left is how big the bag of money is going to be.

Where will the activist-legal complex strike next? It could be anywhere. Maybe there will be a class action lawsuit against Coca-Cola for obesity in America. Perhaps lawyers will go after Facebook for making its social media platform too addictive. Or maybe Apple’s iPhone will be blamed for causing car accidents due to distracted driving.

As long as a company has a sufficiently large bank account, quite literally anything is possible. No industry is safe from the activist-legal complex.

Postscript:

The article points to jackpot justice in general.  A number of posts here have discussed how the same dynamic is at work in Climate Litigation (link is to posts so tagged).

Nature Mag Favors Diversity Over Merit

Lubos Motl writes at his blog The Reference Frame, reviewing Nature Mag’s proclamation of its top 10 scientists for 2019.  His article is Nature’s shocking “top ten” scientists.  Excerpts below with my bolds.

Fer137 has told us about an incredible list published at Nature, Nature’s 10, which is supposed to enumerate the most influential people in science of the year. As Alex correctly said, Nature basically became a new brand of toilet paper. How will they compare to Presto!?

Well, there have been numerous indications of this “evolution of purpose” of that journal but now they have jumped the shark, indeed.

As Nature openly admits, Ricardo Galvão was chosen for his being a Latin American “Amazon” activist and for his frictions with Brazil’s president, Jair Bolsonaro, whom the leftists at Nature consider politically incorrect. He clearly didn’t do anything revolutionary in the science of forests or in biology in general. In fact, he is a physicist!

Victoria Kaspi was clearly chosen for her failure to be male in a field that is overwhelmingly advanced by males, astrophysics. You should look for “fast radio bursts” at Google Scholar to become sure that she isn’t really a leader of this subfield. Even if you add CHIME, the name of her key experiment, to the query, it doesn’t become better.

Nenad Šestan was chosen for the good old left-wing “atheist” reasons. This guy works on the fuzziness of “brain death” so he can take people from God, thus proving the ill-definedness of the religious concepts including death itself. This would be a preferred scientific topic of the leftists some 20 years ago but these days, it’s no longer too hot. And incidentally, Nature just copied the name from the New York Times, a left-wing daily, that promoted Šestan in the summer. At any rate, he is one of the 3 or so actual star scientists in the list.

Sandra Díaz is a hot Venezuelan model. OK, they meant this Sandra Díaz, who is somewhat less pretty. She is both female and associated with the “biodiversity” hysteria. Clearly, no important advances in the “science of biodiversity” took place in the recent year or several years, and she wasn’t key in those that took place earlier.

Jean-Jacques Muyembe-Tamfum is Congolese and a racially pure black. At least, he is an actual co-discoverer of Ebola, a disease he still fights against. How important was he in the discovery of Ebola? Well, in 1976 the disease first appeared in Sudan and then in Zaire. In Zaire, Muyembe-Tamfum was just in charge of the doctors who were supposed to respond. Among other obvious things, he sent blood samples to Peter Piot. Clearly Piot was far closer to being the actual discoverer of Ebola: Muyembe-Tamfum’s role is similar to that of Rosalind Franklin (or perhaps even to the unknown miner-in-chief in Jáchymov, Bohemia who sent the radium samples to Marie Sklodowska). The situations really are analogous. I am not the only one who sees it in this way. Wikipedia mentions:

In 2012, Piot published a book entitled “No Time to Lose” [see the clickable image] which chronicles his professional work, including the discovery of the Ebolavirus. He mentions Muyembe in passing rather than as a co-discoverer.

But Piot is a white man so, according to the fanatical racists at Nature, he must be censored and destroyed, right? In fact, even Piot’s claim that a mention in passing was a mention in passing was a heresy, because the man mentioned in passing was black. Why would someone confuse a true scientist with someone who sent blood samples by the USPS? It’s like Penny’s discovery of a comet.

Yohannes Haile-Selassie found an old skull somewhere – one of many old skulls – but he is Ethiopian so he must automatically make it to the top ten as well, right? At least he has done some real research into the African hominids.

Wendy Rogers is both female and an activist talking about organ transplants in China; I didn’t have enough motivation to see what she says or wants because I don’t believe it’s important. Also, I wasn’t able to add a Wikipedia link because I think that her page doesn’t even exist. You may find a Republican politician and an actress of the name much more easily than this organ transplant activist. One paper with her name and “organ” has 28 citations, others are below 10. In the field focused on “organs” where she was named a member of “top ten”, she’s technically an unknown scientist according to the high energy physics criteria.

Deng Hongkui is arguably a real HIV-focused Chinese immunologist with quite some results.

John Martinis leads Google’s “quantum supremacy” advances in quantum computing. He clearly deserves to be there. Nature probably failed to notice that he is a white supremacist according to another article in Nature.

Greta Thunberg… doesn’t really surprise us. She is the role model for everything that is bad about the interactions between science and the general society in 2019. She is a whining spoiled brat who refuses to go to school and who is correspondingly scientifically illiterate because of that and who, with quite some success, persuades other people that her hateful hysterical outbursts may compensate for her laziness and caution. She is the exact opposite of a young person who is close to science. Every teenager who does at least 10% of the things that Greta does should be spanked for several hours so that he cannot sit on his bottom for a week.

Nature also adds a “list whom to watch in science in 2020” that starts with António Guterres, the boss of the United Nations, who completely lost his mind and who has become a little puddy of Greta Thunberg’s. Even if he weren’t Greta’s puddy, it would be shocking to claim that being such a politically appointed bureaucrat makes one a top scientist.

At any rate, it’s terribly disappointing to see that a journal that used to be good – although it has played no role in my interest in science whatsoever – chooses way over 50% of its “best scientists” according to some extremist political or identity politics criteria. The individuals at Nature who are responsible for this outrageous page are harmful agents and should be treated as harmful agents.

Let Science Students Handle Doubt and Diversity

Jerry Ravetz writes at Nature: Stop the science training that demands ‘don’t ask’. Jerry Ravetz is an associate fellow at the Institute for Science, Innovation and Society, University of Oxford, UK. Excerpts in italics with my bolds and images.

It’s time to trust students to handle doubt and diversity in science.

As a child, I realized that my parents spoke in Yiddish when they didn’t want me to know what they were talking about, so I became aware that some knowledge was intended only for grown-ups — don’t ask. In college, I was taught an elegant theory of chemical combination based on excess electrons going into holes in the orbital shell of a neighbouring atom. But what about diatomic compounds like oxygen gas? Don’t ask; students aren’t ready to know. In physics, I learnt that Newton’s second law of motion is not an empirical, approximate relation such as Boyle’s and Hooke’s laws, and instead has a universal application; but what about the science of statics, in which forces are balanced and there is no acceleration? Don’t ask. Mere students are not worthy of an answer. Yet when I was moonlighting in the social sciences and humanities, I found my questions and opinions were respected, even if only as part of my learning experience.

Observant students will notice that social problems surrounding science are seldom mentioned in official curricula. And now, these pupils are starting to act. They have shamed their seniors into including more diverse contributors as faculty members and role models. Young scholars insolently ask their superiors why they fail to address the extinction crises elucidated by their research. Such subversions are reminiscent of the mass-produced heretical pamphlets circulated by Martin Luther’s supporters at the start of the Protestant Reformation in sixteenth-century Europe.

The philosopher Thomas Kuhn once compared taught science to orthodox theology. A narrow, rigid education does not prepare anyone for the complexities of scientific research, applications and policy. If we discourage students from inquiring into the real nature of scientific truths, or exploring how society shapes the questions that researchers ask, how can we prepare them to maintain public trust in science in our ‘post-truth’ world?

Diversity and doubt produce creativity; we must make room for them, and stop funnelling future scientists into narrow specialties that value technique over thought.

In the 1990s, Silvio Funtowicz, a philosopher of science, and I developed the concept of ‘post-normal science’, building on the Kuhnian terms ‘normal’ and ‘revolutionary’ science. It outlines how to use science in a society confronted with high-stakes decisions, where both facts and values are uncertain; it requires drawing on a broad community with broad inquiries. Suppressing questions from budding scientists is sure to suppress promising ideas and solutions.

As a nonagenarian and former historian of science, I know that even foundational building blocks can be questioned. The unifying patterns of the periodic table are now seen, under closer scrutiny, to be riddled with anomalies and paradoxes (E. Scerri Nature 565, 557–559; 2019). Some scientists now wonder whether the concept of biological ‘species’ contributes more confusion than insight, and whether it should therefore be abandoned (see go.nature.com/2offaav). However, such a decision would affect conservation policy, in which identification of endangered species is crucial — so it is not just an issue for basic science.

Science students generally remain unaware that concepts such as elements and species are contested or are even contestable. In school, college and beyond, curricula highlight the technical and hide the reflective. Public arguments among scientists often presume that every problem has just one solution. When they were students, these researchers had never learnt that they have a right to be wrong.

And when scientists advise on policy, they are pressured to become attached to official stances on issues, or to shun the responsibility entirely. They then find it difficult to resist dismissing all critics as cranks or ‘denialists’, whose rejection of ‘facts’ is a sign of their depravity. (To be sure, much of science denial is cynical and self-serving.)

Nonetheless, vacillating advice on complex issues, most obviously nutrition, should be a warning that, from a future perspective, today’s total scientific consensus on some policy issue might have been the result of obduracy, a conflict of interest or worse.

Trust in established science will not be protected by exhortations, denunciations and absolutism. Just as a healthy democracy accommodates dissent and dissonance, the collective consciousness of science would do well to embrace doubt and diversity. This could start with teaching science as a great, flawed, ongoing human achievement, rather than as a collection of cut-and-dried eternal truths. There is plenty of material for such a Socratic education in science: physics and cosmology now enjoy creative ignorance; the digital and life sciences abound in moral mazes; and environmental and sustainability sciences demand recognition of complexities. The established ‘facts’ can function as tools for ongoing dialogues.

I recall a legendary chemistry professor who was inept at getting classroom demonstrations to work — but discussing what went wrong helped his students to thrive. A mathematician friend ran his classes like those in an Athenian agora: pupils discussed every statement in the textbook until all were satisfied. They did very well in exams, and taught themselves when he was absent. Treating people at all levels as committed thinkers, whose asking teaches us all, is the key to tackling the challenges to science in the post-trust age.

Footnote:  Contrast what Ravetz says with the Italian proposal to indoctrinate students with climate change dogma and activism.  Lubos Motl reports Italian schools: 33 mandatory hours of climate hysteria a year

This week, the media announced that starting September 2020, ten months from now, all Italian public schools will require education in “climate science”. It was ordered by the Italian minister of education, Lorenzo Fioramonti.

If I understand well, this absolutely ludicrous new subject should be taught every year. If you spend 8 years at school and multiply by 33 hours a year, you will be exposed to 264 hours’ worth of climate science education.

This is just a breathtaking amount of time. It is very clear that the most famous person associated with the climate hysteria today, Prophet Greta Thunberg, doesn’t know even 26.4 minutes’ worth of climate education – assuming that the teacher doesn’t okay the idea that 27 minutes of screaming “how dare you” counts as climate science. How can an average Italian schoolkid meaningfully learn 33 hours’ worth of climate science every year? It just doesn’t make the slightest sense.

Which Comes First: Story or Facts?


Facts vs Stories is written by Steven Novella at Neurologica. Excerpts in italics with my bolds.

There is a common style of journalism, that you are almost certainly very familiar with, in which the report starts with a personal story, then delves into the facts at hand often with reference to the framing story and others like it, and returns at the end to the original personal connection. This format is so common it’s a cliche, and often the desire to connect the actual new information to an emotional story takes over the reporting and undermines the facts.

This format reflects a more general phenomenon – that people are generally more interested in and influenced by a good narrative than by dry facts. Or are we? New research suggests that while the answer is still generally yes, there is some more nuance here (isn’t there always?). The researchers did three studies in which they compared the effects of strong vs weak facts presented either alone or embedded in a story. In the first two studies the information was about a fictitious new phone. The weak fact was that the phone could withstand a fall of 3 feet. The strong fact was that the phone could withstand a fall of 30 feet. What they found in both studies is that the weak fact was more persuasive when presented embedded in a story than alone, while the strong fact was less persuasive.

They then did a third study about a fictitious flu medicine, and asked subjects if they would give their e-mail address for further information. People are generally reluctant to give away their e-mail address unless it’s worth it, so this was a good test of how persuasive the information was. When a strong fact about the medicine was given alone, 34% of the participants were willing to provide their e-mail. When embedded in a story, only 18% provided their e-mail.  So, what is responsible for this reversal of the normal effect that stories are generally more persuasive than dry facts?

The authors suggest that stories may impair our ability to evaluate factual information.

This is not unreasonable, and is suggested by other research as well. To a much greater extent than you might think, cognition is a zero-sum game. When you allocate resources to one task, those resources are taken away from other mental tasks (this basic process is called “interference” by psychologists). Further, adding complexity to brain processing, even if this leads to more sophisticated analysis of information, tends to slow down the whole process. And also, parts of the brain can directly suppress the functioning of other parts of the brain. This inhibitory function is actually a critical part of how the brain works together.

Perhaps the most dramatic relevant example of this is a study I wrote about previously, in which fMRI scans were used to study subjects listening to a charismatic speaker who was either from the subject’s religion or not. When a charismatic speaker matched the subject’s religion, the critical thinking part of the brain was literally suppressed. In fact this study also found opposite effects depending on context.

The contrast estimates reveal a significant increase of activity in response to the non-Christian speaker (compared to baseline) and a massive deactivation in response to the Christian speaker known for his healing powers. These results support recent observations that social categories can modulate the frontal executive network in opposite directions corresponding to the cognitive load they impose on the executive system.

So when listening to speech from a belief system we don’t already believe, we engaged our executive function. When listening to speech from within our existing belief system, we suppressed our executive function.

In regards to the current study, is something similar going on? Does processing the emotional content of stories impair our processing of factual information, which is a benefit for weak facts but actually a detriment to the persuasive power of strong facts that are persuasive on their own?

Another potential explanation occurs to me, however (showing how difficult it can be to interpret the results of psychological research like this). It is a reasonable premise that a strong fact is more persuasive on its own than a weak fact – being able to survive a 3 foot fall is not as impressive as a 30 foot fall. But the more impressive fact may also trigger more skepticism. I may simply not believe that a phone could survive such a fall. If that fact, however, is presented in a straightforward fashion, it may seem somewhat credible. If it is presented as part of a story that is clearly meant to persuade me, then that might trigger more skepticism. In fact, doing so is inherently sketchy. The strong fact is impressive on its own; why are you trying to persuade me with this unnecessary personal story – unless the fact is BS?

There is also research to support this hypothesis. When a documentary about a fringe topic, like UFOs, includes the claim that, “This is true,” that actually triggers more skepticism. It encourages the audience to think, “Wait a minute, is this true?” Meanwhile, including a scientist who says, “This is not true,” may actually increase belief, because the audience is impressed that the subject is being taken seriously by a scientist, regardless of their ultimate conclusion. But the extent of such backfire effects remains controversial in psychological research – it appears to be very context dependent.

I would summarize all this by saying that – we can identify psychological effects that relate to belief and skepticism. However, there are many potential effects that can be triggered in different situations, and interact in often complex and unpredictable ways. So even when we identify a real effect, such as the persuasive power of stories, it doesn’t predict what will happen in every case. In fact, the net statistical effect may disappear or even reverse in certain contexts, because it is either neutralized or overwhelmed by another effect. I think that is what is happening here.

What do you do when you are trying to be persuasive, then? The answer has to be – it depends. Who is your audience? What claims or facts are you trying to get across? What is the ultimate goal of the persuasion (public service, education, political activism, marketing)? I don’t think we can generate any solid algorithm, but we do have some guiding rules of thumb.

First, know your audience, or at least those you are trying to persuade. No message will be persuasive to everyone.

If the facts are impressive on their own, let them speak for themselves. Perhaps put them into a little context, but don’t try to wrap them up in an emotional story. That may backfire.

Depending on context, your goal may be to not just provide facts, but to persuade your audience to reject a current narrative for a better one. In this case the research suggests you should both argue against the current narrative, and provide a replacement that provides an explanatory model.

So you can’t just debunk a myth, conspiracy theory, or misconception. You need to provide the audience with another way to make sense of their world.

When possible find common ground. Start with the premises that you think most reasonable people will agree with, then build from there.

Now, it’s not my goal to outline how to convince people of things that are not true, or that are subjective but in your personal interest. That’s not what this blog is about. I am only interested in persuading people to apportion their belief to the logic and evidence. So I am not going to recommend ways to avoid triggering skepticism – I want to trigger skepticism. I just want it to be skepticism based on science and critical thinking, not emotional or partisan denial, nihilism, cynicism, or just being contrarian.

You also have to recognize that it can be difficult to persuade people. This is especially true if your message is constrained by facts and reality. Sometimes the real information is not optimized for emotional appeal, and it has to compete against messages that are so optimized (and are unconstrained by reality). But at least knowing the science of how people process information and form their beliefs is useful.

Postscript:  Hans Rosling demonstrates how to use data to tell the story of our rising civilization.

Bottom Line:  When it comes to science, the rule is to follow the facts.  When the story is contradicted by new facts, the story changes to fit the facts, not the other way around.

See also:  Data, Facts and Information

Too Many People, or Too Few?

A placard outside the UN headquarters in New York City, November 2011

Some years ago I read the book Boom, Bust and Echo. It described how planners for public institutions like schools and hospitals often fail to anticipate demographic shifts. The authors described how in North America, the baby Boom after WWII overcrowded schools, and governments struggled to build and staff more facilities. Just as they were catching up came the sexual revolution and the drop in fertility rates, resulting in a population Bust in children entering the education system. Now the issue was to close schools and retire teachers due to overcapacity, not easy to do with sentimental attachments. Then as the downsizing took hold came the Echo. Baby boomers began bearing children, and even at a lower birth rate, it still meant an increased cohort of students arriving at a diminished system.

The story is similar to what is happening today with world population. Zachary Karabell writes in Foreign Affairs The Population Bust: Demographic Decline and the End of Capitalism as We Know It. Excerpts in italics with my bolds.

For most of human history, the world’s population grew so slowly that for most people alive, it would have felt static. Between the year 1 and 1700, the human population went from about 200 million to about 600 million; by 1800, it had barely hit one billion. Then, the population exploded, first in the United Kingdom and the United States, next in much of the rest of Europe, and eventually in Asia. By the late 1920s, it had hit two billion. It reached three billion around 1960 and then four billion around 1975. It has nearly doubled since then. There are now some 7.6 billion people living on the planet.
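Turning those milestones into implied average annual growth rates (my own arithmetic, not Karabell’s) shows just how static the early period would have felt:

```python
# Compound annual growth rates implied by the population milestones above.
milestones = [  # (year, population)
    (1, 200e6), (1700, 600e6), (1800, 1e9),
    (1927, 2e9), (1960, 3e9), (1975, 4e9), (2019, 7.6e9),
]

for (y0, p0), (y1, p1) in zip(milestones, milestones[1:]):
    rate = (p1 / p0) ** (1 / (y1 - y0)) - 1  # compound annual growth
    print(f"{y0:>4}-{y1}: {rate:.2%} per year")
```

Growth averages well under 0.1% per year for seventeen centuries, then climbs past 1% per year in the mid-twentieth century.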

Just as much of the world has come to see rapid population growth as normal and expected, the trends are shifting again, this time into reverse. Most parts of the world are witnessing sharp and sudden contractions in either birthrates or absolute population. The only thing preventing the population in many countries from shrinking more quickly is that death rates are also falling, because people everywhere are living longer. These oscillations are not easy for any society to manage. “Rapid population acceleration and deceleration send shockwaves around the world wherever they occur and have shaped history in ways that are rarely appreciated,” the demographer Paul Morland writes in The Human Tide, his new history of demographics. Morland does not quite believe that “demography is destiny,” as the old adage mistakenly attributed to the French philosopher Auguste Comte would have it. Nor do Darrell Bricker and John Ibbitson, the authors of Empty Planet, a new book on the rapidly shifting demographics of the twenty-first century. But demographics are clearly part of destiny. If their role first in the rise of the West and now in the rise of the rest has been underappreciated, the potential consequences of plateauing and then shrinking populations in the decades ahead are almost wholly ignored.

The mismatch between expectations of a rapidly growing global population (and all the attendant effects on climate, capitalism, and geopolitics) and the reality of both slowing growth rates and absolute contraction is so great that it will pose a considerable threat in the decades ahead. Governments worldwide have evolved to meet the challenge of managing more people, not fewer and not older. Capitalism as a system is particularly vulnerable to a world of less population expansion; a significant portion of the economic growth that has driven capitalism over the past several centuries may have been simply a derivative of more people and younger people consuming more stuff. If the world ahead has fewer people, will there be any real economic growth? We are not only unprepared to answer that question; we are not even starting to ask it.

BOMB OR BUST?

At the heart of The Human Tide and Empty Planet, as well as demography in general, is the odd yet compelling work of the eighteenth-century British scholar Thomas Malthus. Malthus’ 1798 Essay on the Principle of Population argued that growing numbers of people were a looming threat to social and political stability. He was convinced that humans were destined to produce more people than the world could feed, dooming most of society to suffer from food scarcity while the very rich made sure their needs were met. In Malthus’ dire view, that would lead to starvation, privation, and war, which would eventually lead to population contraction, and then the depressing cycle would begin again.

Yet just as Malthus reached his conclusions, the world changed. Increased crop yields, improvements in sanitation, and accelerated urbanization led not to an endless cycle of impoverishment and contraction but to an explosion of global population in the nineteenth century. Morland provides a rigorous and detailed account of how, in the nineteenth century, global population reached its breakout from millennia of prior human history, during which the population had been stagnant, contracting, or inching forward. He starts with the observation that the population begins to grow rapidly when infant mortality declines. Eventually, fertility falls in response to lower infant mortality—but there is a considerable lag, which explains why societies in the modern world can experience such sharp and extreme surges in population. In other words, while infant mortality is high, women tend to give birth to many children, expecting at least some of them to die before reaching maturity. When infant mortality begins to drop, it takes several generations before fertility does, too. So a woman who gives birth to six children suddenly has six children who survive to adulthood instead of, say, three. Her daughters might also have six children each before the next generation of women adjusts, deciding to have smaller families.
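A toy cohort model makes that lag mechanism concrete (my own illustration with invented round numbers, not the book’s):

```python
# Toy model of the mortality-fertility lag: population is roughly stable while
# both fertility and infant mortality are high, surges when mortality falls
# first, and only stabilizes once fertility catches up. Numbers are invented.

def next_gen(women, births_per_woman, survival):
    # Surviving daughters, assuming half of births are girls.
    return women * births_per_woman * 0.5 * survival

women = 1000.0
regimes = [  # (births per woman, survival to adulthood)
    (6.0, 0.33),  # high fertility, high mortality: roughly stable
    (6.0, 0.90),  # mortality falls, fertility lags: surge
    (6.0, 0.90),  # lag persists: surge compounds
    (3.0, 0.90),  # fertility starts adjusting
    (2.1, 0.95),  # near replacement: stable again
]
for gen, (tfr, surv) in enumerate(regimes, 1):
    women = next_gen(women, tfr, surv)
    print(f"generation {gen}: ~{women:,.0f} women (TFR {tfr}, survival {surv})")
```

During the two lag generations the cohort grows roughly sevenfold before leveling off—the kind of surge Morland describes.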

The burgeoning of global population in the past two centuries followed almost precisely the patterns of industrialization, modernization, and, crucially, urbanization. It started in the United Kingdom at the end of the eighteenth century (hence the concerns of Malthus), before spreading to the United States and then France and Germany. The trend next hit Japan, India, and China and made its way to Latin America. It finally arrived in sub-Saharan Africa, which has seen its population surge thanks to improvements in medicine and sanitation but has not yet enjoyed the full fruits of industrialization and a rapidly growing middle class.

With the population explosion came a new wave of Malthusian fears, epitomized by the 1968 book The Population Bomb, by Paul Ehrlich, a biologist at Stanford University. Ehrlich argued that plummeting death rates had created an untenable situation of too many people who could not be fed or housed. “The battle to feed all of humanity is over,” he wrote. “In the 1970’s the world will undergo famines—hundreds of millions of people are going to starve to death in spite of any crash programs embarked on now.”

Ehrlich’s prophecy, of course, proved wrong, for reasons that Bricker and Ibbitson elegantly chart in Empty Planet. The green revolution, a series of innovations in agriculture that began in the early twentieth century, accelerated such that crop yields expanded to meet humankind’s needs. Moreover, governments around the world managed to remediate the worst effects of pollution and environmental degradation, at least in terms of daily living standards in multiple megacities, such as Beijing, Cairo, Mexico City, and New Delhi. These cities face acute challenges related to depleted water tables and industrial pollution, but there has been no crisis akin to what was anticipated.

Doesn’t anyone want my Green New Deal?

Yet visions of dystopic population bombs remain deeply entrenched, including at the center of global population calculations: in the forecasts routinely issued by the United Nations. Today, the UN predicts that global population will reach nearly ten billion by 2050. Judging from the evidence presented in Morland’s and Bricker and Ibbitson’s books, it seems likely that this estimate is too high, perhaps substantially. It’s not that anyone is purposely inflating the numbers. Governmental and international statistical agencies do not turn on a dime; they use formulas and assumptions that took years to formalize and will take years to alter. Until very recently, the population assumptions built into most models accurately reflected what was happening. But the sudden ebb of both birthrates and absolute population growth has happened too quickly for the models to adjust in real time. As Bricker and Ibbitson explain,

“The UN is employing a faulty model based on assumptions that worked in the past but that may not apply in the future.”

Population expectations aren’t merely of academic interest; they are a key element in how most societies and analysts think about the future of war and conflict. More acutely, they drive fears about climate change and environmental stability—especially as an emerging middle class numbering in the billions demands electricity, food, and all the other accoutrements of modern life and therefore produces more emissions and places greater strain on farms with nutrient-depleted soil and evaporating aquifers. Combined with warming-induced droughts, storms, and shifting weather patterns, these trends would appear to line up for some truly bad times ahead.

Except, argue Bricker and Ibbitson, those numbers and all the doomsday scenarios associated with them are likely wrong. As they write,

“We do not face the challenge of a population bomb but a population bust—a relentless, generation-after-generation culling of the human herd.”

Already, the signs of the coming bust are clear, at least according to the data that Bricker and Ibbitson marshal. Almost every country in Europe now has a fertility rate below the 2.1 births per woman that is needed to maintain a static population. The UN notes that in some European countries, the birthrate has increased in the past decade. But that has merely pushed the overall European birthrate up from 1.5 to 1.6, which means that the population of Europe will still grow older in the coming decades and contract as new births fail to compensate for deaths. That trend is well under way in Japan, whose population has already crested, and in Russia, where the same trends, plus high mortality rates for men, have led to a decline in the population.
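To see what sub-replacement fertility compounds to, here is a minimal projection (my own, not the authors’; it ignores migration and mortality shifts and treats 2.1 as exact replacement):

```python
# What constant fertility implies over ~30-year generations, indexed to 100.
for tfr in (2.1, 1.6):
    ratio = tfr / 2.1  # generation-over-generation size ratio
    sizes = [round(100 * ratio ** g) for g in range(4)]
    print(f"TFR {tfr}: generations 0-3 -> {sizes}")
```

At a constant TFR of 1.6, each generation is about three-quarters the size of the last; within three generations (roughly a century) the birth cohort falls to less than half its starting size.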

What is striking is that the population bust is going global almost as quickly as the population boom did in the twentieth century. Fertility rates in China and India, which together account for nearly 40 percent of the world’s people, are now at or below replacement levels. So, too, are fertility rates in other populous countries, such as Brazil, Malaysia, Mexico, and Thailand. Sub-Saharan Africa remains an outlier in terms of demographics, as do some countries in the Middle East and South Asia, such as Pakistan, but in those places, as well, it is only a matter of time before they catch up, given that more women are becoming educated, more children are surviving their early years, and more people are moving to cities.

Both books note that the demographic collapse could be a bright spot for climate change. Given that carbon emissions are a direct result of more people needing and demanding more stuff—from food and water to cars and entertainment—then it would follow that fewer people would need and demand less. What’s more, larger proportions of the planet will be aging, and the experiences of Japan and the United States are showing that people consume less as they age. A smaller, older population spells some relief from the immense environmental strain of so many people living on one finite globe.


That is the plus side of the demographic deflation. Whether the concomitant greening of the world will happen quickly enough to offset the worst-case climate scenarios is an open question—although current trends suggest that if humanity can get through the next 20 to 30 years without irreversibly damaging the ecosystem, the second half of the twenty-first century might be considerably brighter than most now assume.

The downside is that a sudden population contraction will place substantial strain on the global economic system.

Capitalism is, essentially, a system that maximizes more—more output, more goods, and more services. That makes sense, given that it evolved coincidentally with a population surge. The success of capitalism in providing more to more people is undeniable, as are its evident defects in providing every individual with enough. If global population stops expanding and then contracts, capitalism—a system implicitly predicated on ever-burgeoning numbers of people—will likely not be able to thrive in its current form. An aging population will consume more of certain goods, such as health care, but on the whole aging and then decreasing populations will consume less. So much of consumption occurs early in life, as people have children and buy homes, cars, and white goods. That is true not just in the more affluent parts of the world but also in any country that is seeing a middle-class surge.

But what happens when these trends halt or reverse? Think about the future cost of capital and assumptions of inflation. No capitalist economic system operates on the presumption that there will be zero or negative growth. No one deploys investment capital or loans expecting less tomorrow than today. But in a world of graying and shrinking populations, that is the most likely scenario, as Japan’s aging, graying, and shrinking absolute population now demonstrates. A world of zero to negative population growth is likely to be a world of zero to negative economic growth, because fewer and older people consume less. There is nothing inherently problematic about that, except for the fact that it will completely upend existing financial and economic systems. The future world may be one of enough food and abundant material goods relative to the population; it may also be one in which capitalism at best frays and at worst breaks down completely.

The global financial system is already exceedingly fragile, as evidenced by the 2008 financial crisis. A world with negative economic growth, industrial capacity in excess of what is needed, and trillions of dollars expecting returns when none is forthcoming could spell a series of financial crises. It could even spell the death of capitalism as we know it. As growth grinds to a halt, people may well start demanding a new and different economic system. Add in the effects of automation and artificial intelligence, which are already making millions of jobs redundant, and the result is likely a future in which capitalism is increasingly passé.

If population contraction were acknowledged as the most likely future, one could imagine policies that might preserve and even invigorate the basic contours of capitalism by setting much lower expectations of future returns and focusing society on reducing costs (which technology is already doing) rather than maximizing output.

But those policies would likely be met in the short term by furious opposition from business interests, policymakers, and governments, all of whom would claim that such attitudes are defeatist and could spell an end not just to growth but to prosperity and high standards of living, too. In the absence of such policies, the danger of the coming shift will be compounded by a complete failure to plan for it.

Different countries will reach the breaking point at different times. Right now, the demographic deflation is happening in rich societies that are able to bear the costs of slower or negative growth using the accumulated store of wealth that has been built up over generations. Some societies, such as the United States and Canada, are able to temporarily offset declining population with immigration, although soon, there won’t be enough immigrants left. As for the billions of people in the developing world, the hope is that they become rich before they become old. The alternative is not likely to be pretty: without sufficient per capita affluence, it will be extremely difficult for developing countries to support aging populations.

So the demographic future could end up being a glass half full, by ameliorating the worst effects of climate change and resource depletion, or a glass half empty, by ending capitalism as we know it. Either way, the reversal of population trends is a paradigm shift of the first order and one that is almost completely unrecognized. We are vaguely prepared for a world of more people; we are utterly unprepared for a world of fewer. That is our future, and we are heading there fast.

See Also Control Population, Control the Climate. Not.

Epic Media Science Fail: Fear Not Pollinator Collapse

Jon Entine returns to this topic writing at the Genetic Literacy Project: The world faces ‘pollinator collapse’? How and why the media get the science wrong time and again. Excerpts in italics with my bolds.

As I and others have detailed in the Genetic Literacy Project and as other news organizations such as the Washington Post and Slate have outlined, the pollinator-collapse narrative has been relentless and mostly wrong for more than seven years now.

It germinated with Colony Collapse Disorder, a freaky die-off of bees that began in 2006, lasted for a few years, and killed almost a quarter of the US honey bee population; its cause remains unknown. Versions of CCD have been occurring periodically for hundreds of years, according to entomologists.

Today, almost all entomologists are convinced that the ongoing bee health crisis is primarily driven by the nasty Varroa destructor mite. Weakened honey bees, trucked around the country as livestock, face any number of health stressors along with Varroa, including the miticides used to control the invasive mite, changing weather and land use, and some farm chemicals, all of which may lower the honeybee’s ability to fight off disease.

Still, the ‘bee crisis’ flew under the radar until 2012, when advocacy groups jumped in to provide an apocalyptic narrative after a severe winter led to a sharp, and as it turned out temporary, rise in overwinter bee deaths.

Colony loss numbers jumped in 2006 when CCD hit but have been steady and even improving since.

The alarm bells came with a spin, as advocacy groups blamed a class of pesticides known as neonicotinoids, which were introduced in the 1990s, well after the Varroa mite invasion infected hives and started the decline. The characterization was apocalyptic, with some activists claiming that neonics were driving honey bees to extinction.

In the lab evaluations, which are not considered state of the art—field evaluations replicate real-world conditions far better—honeybee mortality did increase. But that was also true of all the insecticides tested; after all, they are designed to kill harmful pests. Neonics are actually far safer than the pesticides they replaced, . . . particularly when their impact is observed under field-realistic conditions (i.e., the way farmers would actually apply the pesticide).

As the “science” supporting the bee-pocalypse came under scrutiny, the ‘world pollinator crisis’ narrative began to fray. Not only was it revealed that the initial experiments had severely overdosed the bees, but increasing numbers of high-quality field studies – which test how bees are actually affected under realistic conditions – found that bees can successfully forage on neonic-treated crops without noticeable harm.

Those determined to keep the crisis narrative alive were hardly deterred. Deprived of both facts and science to argue their case, many advocacy groups simply pounded the table by shifting their crisis argument dramatically. For example, in 2016, the Sierra Club (while requesting donations), hyped the honey bee crisis to no end.

But more recently, in 2018, the same organization posted a different message on its blog. Honeybees, the Sierra Club grudgingly acknowledged, were not threatened. Forget honeybees, the Sierra Club said, the problem is now wild bees, or more generally, all insect pollinators, which are facing extinction due to agricultural pesticides of all types (though neonics, they insisted, were especially bad).

So, once again, with neither the facts nor the science to back them up, advocacy groups have pulled a switcheroo and are again pounding the table. As they once claimed with honeybees, they now claim that the loss of wild bees and other insect pollinators imperils our food supply. A popular meme on this topic is the oft-cited statistic, which appears in the recent UN IPBES report on biodiversity, that “more than 75 per cent of global food crop types, including fruits and vegetables and some of the most important cash crops such as coffee, cocoa and almonds, rely on animal pollination.”

There’s a sleight of hand here. Most people (including most journalists) miss or gloss over the important point that this is 75 percent of crop types, or varieties, not 75 percent of all crop production. In fact, 60 percent of agricultural production comes from crops that do not rely on animal pollination, including cereals and root crops. As the GLP noted in its analysis, only about 7 percent of crop output is threatened by pollinator declines—not a welcome percentage, but far from an apocalypse.
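
To see the arithmetic behind that distinction, here is a minimal sketch with invented numbers (not the GLP’s actual data): a large share of crop types can depend on pollinators while those same types contribute only a small share of total tonnage.

```python
# Toy illustration (invented numbers) of why "75% of crop TYPES rely on
# pollinators" is compatible with only a small share of crop OUTPUT at risk.
crops = {
    # name: (production in arbitrary tonnes, relies_on_animal_pollination)
    "wheat":    (750,  False),
    "rice":     (500,  False),
    "maize":    (1100, False),
    "potatoes": (370,  False),
    "apples":   (85,   True),
    "almonds":  (3,    True),
    "coffee":   (10,   True),
    "cocoa":    (5,    True),
}

reliant_types = sum(1 for _, (tons, reliant) in crops.items() if reliant)
share_of_types = reliant_types / len(crops)

total = sum(tons for tons, _ in crops.values())
reliant_output = sum(tons for tons, reliant in crops.values() if reliant)
share_of_output = reliant_output / total

print(f"Share of crop types relying on pollinators: {share_of_types:.0%}")
print(f"Share of crop output from those types:      {share_of_output:.0%}")
# With these invented numbers: 50% of types, but only ~4% of output.
```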

And the word “rely” seems almost purposefully misleading. More accurately, most of these crops receive some marginal boost in yield from pollination. Few actually “rely” on it. A UN IPBES report on pollinators published in 2018 actually breaks this down in a convenient pie graph.

Many of these facts are ignored by advocacy groups sharpening their axes, and they’re generally lost on the “if it bleeds it leads” media, which consistently play up catastrophe scenarios of crashing pollinator communities and food supplies. Unfortunately, many scientists willingly go along. Some are activists themselves; others hope to elevate the significance of their findings to garner media attention and supercharge grant proposals.

As John Adams is alleged to have said, ‘facts are stubborn things.’ We can’t be simultaneously in the midst of a pollinator crisis threatening our ability to grow food and see continually rising yield productivity among those crops most sensitive to pollination.

With these claims of an impending wild bee catastrophe, as in the case of the original honeybee-pocalypse claims, few of the journalists, activists, scientists or biodiversity experts who regularly sound this ecological alarm have reviewed the facts in context. Advocacy groups consistently extrapolate from the declines of a handful of wild bee species (out of the thousands that we know exist) to claim that we are in the midst of a worldwide crisis. But just as with the ‘honey bee-mageddon,’ we are not.

Those of us who actually care about science and fact, however, might note the irony here: It is precisely the pesticides which the catastrophists are urging us to ban that, along with the many other tools in the modern farmer’s kit, have enabled us to grow more of these nutritious foods, at lower prices, than ever before in human history.

Scientific vs. Social Authenticity


This post was triggered by an essay in Scientific American Authenticity under Fire by Scott Barry Kaufman. He raises modern issues and expresses a social and psychological sense of authenticity that left me unsatisfied.  So following that, I turn to a scientific standard much richer in meaning and closer to my understanding.

Social Authenticity

Researchers are calling into question authenticity as a scientifically viable concept

Authenticity is one of the most valued characteristics in our society. As children we are taught to just “be ourselves”, and as adults we can choose from a large number of self-help books that will tell us how important it is to get in touch with our “real self”. It’s taken as a given by everyone that authenticity is a real thing and that it is worth cultivating.

Even the science of authenticity has surged in recent years, with hundreds of journal articles, conferences, and workshops. However, the more that researchers have put authenticity under the microscope, the more muddied the waters of authenticity have become.

Many common ideas about authenticity are being overturned.
Turns out, authenticity is a real mess.

One big problem with authenticity is that there is a lack of consensus among both the general public and among psychologists about what it actually means for someone or something to be authentic. Are you being most authentic when you are being congruent with your physiological states, emotions, and beliefs, whatever they may be?

Another thorny issue is measurement. Virtually all measures of authenticity rely on self-report. However, people often do not know what they are really like or why they actually do what they do, so asking people to report how authentic they are is unlikely to yield a truly accurate measure of their authenticity.

Perhaps the thorniest issue of them all though is the entire notion of the “real self”. The humanistic psychotherapist Carl Rogers noted that many people who seek psychotherapy are plagued by the question “Who am I, really?” While people spend so much time searching for their real self, the stark reality is that all of the aspects of your mind are part of you.

So what is this “true self” that people are always talking about? Once you take a closer scientific examination, it seems that what people refer to as their “true self” really is just the aspects of themselves that make them feel the best about themselves.

Even more perplexing, it turns out that most people’s feelings of authenticity have little to do with acting in accord with their actual nature. The reality appears to be quite the opposite. All people tend to feel most authentic when having the same experiences, regardless of their unique personality.

Another counterintuitive finding is that people actually tend to feel most authentic when they are acting in socially desirable ways, not when they are going against the grain of cultural dictates (which is how authenticity is typically portrayed). On the flip side, people tend to feel inauthentic when they are feeling socially isolated, or feel as though they have fallen short of the standards of others.

Therefore, what people think of as their true self may actually just be what people want to be seen as. According to social psychologist Roy Baumeister, we will report feeling highly authentic and satisfied when the way others think of us matches up with how we want to be seen, and when our actions “are conducive to establishing, maintaining, and enjoying our desired reputation.”

Conversely, Baumeister argues that when people fail to achieve their desired reputation, they will dismiss their actions as inauthentic, as not reflecting their true self (“That’s not who I am”). As Baumeister notes, “As familiar examples, such repudiation seems central to many of the public appeals by celebrities and politicians caught abusing illegal drugs, having illicit sex, embezzling or bribing, and other reputation-damaging actions.”

Kaufman Conclusion

As long as you are working towards growth in the direction of who you truly want to be, that counts as authentic in my book regardless of whether it is who you are at this very moment. The first step to healthy authenticity is shedding your positivity biases and seeing yourself for who you are, in all of your contradictory and complex splendor. Full acceptance doesn’t mean you like everything you see, but it does mean that you’ve taken the most important first step toward actually becoming the whole person you most wish to become. As Carl Rogers noted, “the curious paradox is that when I accept myself just as I am, then I can change.”

My Comment:
Kaufman describes contemporary ego-centric group-thinking, which leads to the philosophical dead end called solipsism. As an epistemological position, solipsism holds that knowledge of anything outside one’s own mind is unsure; the external world and other minds cannot be known and might not exist outside the mind.

His discussion proves the earlier assertion that authenticity (in the social or psychological sense) is indeed a mess. The author finds no objective basis to determine fidelity to reality, thus leaving everyone struggling over whether to be self-directed or other-directed. As we know from Facebook, most resolve that conflict by competing to see who can publish the most selfies while acquiring the most “friends.” This is the best Scientific American can do? The swamp is huge and deep indeed.

It reminds me of what Ross Pomeroy wrote at Real Science: “Psychology, as a discipline, is a house made of sand, based on analyzing inherently fickle human behavior, held together with poorly-defined concepts, and explored with often scant methodological rigor. Indeed, there’s a strong case to be made that psychology is barely a science.”

Scientific Authenticity

In contrast, let us consider some writing by Philip Kanarev, a practicing physicist concerned with the demise of scientific thinking and teaching who calls for a return to fundamentals. His essay is Scientific Authenticity Criteria by Ph. M. Kanarev in the General Science Journal.  Excerpts in italics with my bolds.

The accumulation of scientific results in the 21st century has reached a level that makes it possible to find and to systematize the criteria of scientific authenticity for the precise knowledge already gained by mankind.

Neither Euclid nor Newton gave precise definitions of the notions of an axiom, a postulate, and a hypothesis. As a result, Newton called his laws axioms, which conflicted with the Euclidean ideas concerning the essence of axioms. In order to eliminate these contradictions, it was necessary to define not only the notions of the axiom and the postulate but also the notion of the hypothesis. This necessity follows from the fact that any scientific research begins with an assumption about the reason causing the phenomenon or process being studied. The formulation of this assumption is a scientific hypothesis.

Thus, the axioms and the postulates are the main criteria of authenticity of any scientific result.

An axiom is an obvious statement, which requires no experimental check and has no exceptions. The absolute authenticity of an axiom follows from this definition; it is protected by its evident connection with reality. The scientific value of an axiom does not depend on its recognition; that is why disregarding an axiom as a criterion of scientific authenticity amounts to fruitless scientific work.

A postulate is a non-obvious statement whose reliability is proven by experiment or by a set of theoretical results originating from experiments. The reliability of a postulate is determined by the level of its acknowledgement by the scientific community; that is why its value is not absolute.

A hypothesis is an unproven statement which is not a postulate. A proof can be theoretical or experimental; neither may be at variance with the axioms and the recognized postulates. Only after that do hypothetical statements gain the status of postulates, and the statements which sum up a set of axioms and postulates gain the status of a trusted theory.

The first axioms were formulated by Euclid. Here are some of them:
1 – To draw a straight line from any point to any point.
2 – To produce a finite straight line continuously in a straight line.
3 – That all right angles equal one another.

Euclid’s formulation concerning the parallelism of two straight lines proved to be less concise. As a result, it was questioned and analyzed in the middle of the 19th century, and it came to be accepted that two parallel straight lines cross at infinity. Despite a complete absence of evidence for this statement, the status of an axiom was attached to it. Mankind paid a lot for such an agreement among the scientists: all theories based on this axiom proved to be faulty, and the physical theories of the 20th century proved to be the principal ones among them.

In order to understand the complicated situation that has formed, one has to return to the Euclidean axioms and assess their completeness. It turns out that among Euclid’s axioms there are none which reflect the properties of the primary elements of the universe: space, matter and time. There are no phenomena in nature which could compress space, stretch it or distort it; that is why space is absolute. Nor are there phenomena which change the rate of the passing of time; time does not depend on anything, and that is why we have every reason to consider time absolute. The absolute nature of space and time has been acknowledged by scientists since Euclidean times. But when Euclid’s axiom concerning the parallelism of straight lines was disputed, there appeared the ideas of the relativity of space and time, as well as the new theories based on these ideas, which (as we noted) proved to be faulty.

A law of acknowledgement of new scientific achievements was introduced by Max Planck, who formulated it in the following way: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” The history of science shows that trying to persuade the authorities of the reliability of this law is a needless intention.

Certainly, time appeared in space only after matter. But we still do not know of a source that produces elementary particles, the building blocks of the material world; that is why we have no reason to consider matter absolute. This does not prevent us from paying attention to the interconnection of the primary elements of the universe: space, matter and time exist only together, not separately from one another. This fact is vivid, and we have every reason to consider the indivisible existence of space, matter and time as axiomatic, and to call the axiom which reflects this fact the Unity axiom. The philosophic essence of this axiom was noted long ago, but the practitioners of the exact sciences have failed to pay attention to the fact that it is implemented in the experimental and analytical processes of cognition of the world. When material bodies move, the mathematical description of this motion should be based on the Unity axiom; it follows from this axiom that the axis of motion of any object is a function of time. Almost all physical theories of the 20th century are in conflict with the Unity axiom. It is painful to write about it in detail.

Let us go on analyzing the role of postulates as criteria of scientific authenticity. First of all, let us recall the famous postulate by Niels Bohr concerning the orbital motion of the electrons in atoms. This catchy model of the interaction of the electrons in the atoms continues to be formed in the minds of school pupils despite the fact that its impropriety was proven more than 10 years ago.

The role of Niels Bohr’s generalized postulate is great: practically, it is used in the whole of modern chemistry and the larger part of physics. This postulate is based on the calculation of the spectrum of the hydrogen atom. But it is impossible to calculate the spectrum of the first orbit of the helium atom (which occupies the second place in Mendeleev’s table) with Bohr’s postulate, to say nothing of the spectra of more complicated atoms and ions. It would have been enough for anyone to dispute the authenticity of Bohr’s postulate, but for some reason the mission of doubt has fallen to our lot. Two years were devoted to decoding the spectrum of the first electron of the helium atom. As a result, the law of formation of the spectra of atoms and ions has been found, as well as the law of the change of the binding energy of the electron with the protons of the nuclei when energy jumps take place in the atoms. It has turned out that there is no energy of orbital motion of the electrons in these laws; there are only the energies of their linear interaction with the protons of the nuclei.
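
For readers unfamiliar with what “calculating the spectrum of the hydrogen atom” involves, here is the standard textbook Bohr-model computation, the one result Kanarev concedes works for hydrogen but says cannot be extended to helium. This is ordinary mainstream physics, not his alternative model.

```python
# Standard Bohr-model / Rydberg calculation of hydrogen emission lines,
# shown only to illustrate what "calculating the spectrum" means here.
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m

def wavelength_nm(n_upper: int, n_lower: int) -> float:
    """Photon wavelength for an electron dropping from n_upper to n_lower,
    via 1/lambda = R_H * (1/n_lower^2 - 1/n_upper^2)."""
    inv_lam = R_H * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inv_lam  # metres -> nanometres

# Balmer series (transitions down to n=2) falls in the visible range:
for n in (3, 4, 5, 6):
    print(f"n={n} -> 2: {wavelength_nm(n, 2):.1f} nm")
# ~656.5, 486.3, 434.2, 410.3 nm -- the observed H-alpha..H-delta lines.
```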

Thereafter, it became clear that only models of the elementary particles can serve as criteria of the authenticity of scientific results in cognition of the micro-world. From the analysis of the behaviour of these models one should be able to derive the mathematical models which were ascertained analytically long ago, and to describe their behaviour in the experiments that have been carried out earlier.

The ascertained models of the photons of all frequencies, the electron, the proton and the neutron meet the above-mentioned requirements. They are interconnected with each other by so large a set of theoretical and experimental information that its impropriety cannot be proven. This is the main feature of the proximity to reality of the ascertained models of the principal elementary particles. Certainly, the process of their generation began with the formulation of hypotheses concerning their structures. Sequential development of the description of these structures and of their behaviour during interactions extended the range of experimental data in which the parameters of the elementary particles and their interactions were registered. For example, the formation and behaviour of electrons are governed by more than 20 constants.

We have every reason to state that the models of the photons, the electron, the proton and the neutron which we have ascertained, as well as the principles of formation of the nuclei, the atoms, the ions, the molecules and the clusters, already form a foundation of postulates, and new scientific knowledge will cement its strength.

Science thus has a rather complete list of criteria for estimating the authenticity of scientific investigative results. The axioms (obvious statements which require no experimental check and have no exceptions) occupy the first place; the second place is occupied by the postulates. If a new theory is in conflict with at least one axiom, it will be rejected immediately by the scientific community without discussion. If experimental data appear which are in conflict with any postulate (as happened, for example, to Newton’s first law), the future scientific community, having learned a lesson from the scientific cowardice of the academic elite of the 20th century, will submit such a postulate to a collective analysis of its authenticity.

Kanarev Conclusion

To the academicians who have made many mistakes in the fields of physics and chemistry, we wish that they recover their sight in old age and be glad that those mistakes have already been amended. It is time to understand that continuing to stuff the heads of young people with faulty knowledge is similar to a crime, one that will be taken to heart in the near future.

The time has ended when a diploma confirming higher education was enough to get a job. Now it is not a convincing argument for an employer; to be on the safe side, he hires a young graduate as a probationer at first, wanting to see what the graduate knows and what he is able to do. The new system of higher education has almost nullified the possibility for the student to gain the skills of practical work in his specialty, while preserving the requirement to hold moronic knowledge, i.e. knowledge which does not reflect reality.

My Summary

In Science, authenticity requires fidelity to axioms and postulates describing natural realities. It also means insisting that hypotheses be validated by experimental results. Climate science claims are not scientifically authentic unless or until confirmed by observations, and not simply projections from a family of divergent computer models. And despite all of the social support for climate hysteria, those fears are again more stuffing of nonsense into heads of youth and of the scientifically illiterate.

See Also Degrees of Climate Truth

False Beliefs about Human Genes

Carl Zimmer writes at Skeptical Inquirer Seven Big Misconceptions About Heredity. Excerpts in italics with my bolds.

It’s been seven decades since scientists demonstrated that DNA is the molecule of heredity. Since then, a steady stream of books, news programs, and episodes of CSI have made us comfortable with the notion that each of our cells contains three billion base pairs of DNA, which we inherited from our parents. But we’ve gotten comfortable without actually knowing much at all about our own genomes.

If you want to get your entire genome sequenced—all three billion base pairs in your DNA—a company called Dante Labs will do it for $699. You don’t need whole genome sequencing to learn a lot about your genes, however. The 20,000 genes that encode our proteins make up less than 2 percent of the human genome. That fraction of the genome—the “exome”—can be yours for just a few hundred dollars. The cheapest insights come from “genotyping”—in which scientists survey around a million spots in the genome known to vary a lot among people. Genotyping—offered by companies such as 23andMe and Ancestry—is typically available for under a hundred dollars.

Thanks to these falling prices, the number of people who are getting a glimpse at their own genes is skyrocketing. By 2019, over twenty-five million worldwide had gotten genotyped or had their DNA sequenced. At its current pace, the total may reach 100 million by 2020.

There’s a lot we can learn about ourselves in these test results. But there’s also a huge opportunity to draw the wrong lessons.

Many people have misconceptions about heredity—how we are connected to our ancestors and how our inheritance from them shapes us. Rather than dispelling those misconceptions, our growing fascination with our DNA may only intensify them. A number of scientists have warned of a new threat they call “genetic astrology.” It’s vitally important to fight these misconceptions about heredity, just as we must fight misconceptions about other fields of science, such as global warming, vaccines, and evolution. Here are just a few examples.

Misconception #1: Finding a Special Ancestor Makes You Special

You can join the Order of the Crown of Charlemagne if you can prove that the Holy Roman Emperor is your ancestor. It’s a thrill to discover we have a genealogical link to someone famous—perhaps because that link seems to make us special, too.

But that’s an illusion. I could join the Mayflower Society, for example, because I’m descended from John Howland, a servant aboard the ship. Howland’s one claim to fame is that he fell out of the Mayflower. Fortunately for me, he got fished out of the water and reached Massachusetts. But I’m not the only fortunate one; by one estimate, there are two million people who descend from him alone.

Mathematicians have analyzed the structure of family trees, and they’ve found that the further back in time you go, the more descendants people had. (This is only true of people who have any living descendants at all, it should be noted.) This finding has an astonishing implication. Since we know Charlemagne has living descendants (thank you, Order of the Crown!), he is likely the ancestor of every living person of European descent.
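
A minimal sketch of that family-tree arithmetic: the number of ancestor “slots” in your pedigree doubles every generation, so it quickly exceeds any plausible historical population, forcing the same individuals to appear in everyone’s tree. The generation length and population figure below are rough assumptions of mine, not figures from Zimmer’s article.

```python
# Why distant ancestors are shared: a pedigree has 2**n "slots"
# n generations back, which soon dwarfs any real historical population.
# Assumed (illustrative) numbers: ~30-year generations, Charlemagne
# roughly 40 generations back, medieval European population ~50 million.
MEDIEVAL_POP = 50_000_000

for n in (10, 20, 30, 40):
    slots = 2 ** n
    ratio = slots / MEDIEVAL_POP
    print(f"{n:2d} generations (~{30*n} yrs): "
          f"{slots:,} ancestor slots ({ratio:.3g}x the assumed population)")
# By ~40 generations there are ~1.1 trillion slots for ~50 million people,
# so the same individuals must fill many slots in every pedigree.
```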

Misconception #2: You Are Connected to All Your Ancestors by DNA

But genetics does not equal genealogy. It turns out that practically none of the Europeans who descend from Charlemagne inherited any of his DNA. All humans, in fact, have no genetic link to most of their direct ancestors.

The reason for this disconnect is the way that DNA gets passed down from one generation to the next. Every egg or sperm randomly ends up with one copy of each chromosome, coming either from a person’s mother or father. As a result, we inherit about a quarter of our DNA from each grandparent—but only on average.

If you go back a few generations more, that contribution can drop all the way to zero. . . While it is true that you inherit your DNA from your ancestors, that DNA is only a tiny sampling of the genes in your family tree.
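
The dilution Zimmer describes can be made concrete with a deliberately crude model. The sketch below assumes 22 autosomes passed on whole, with no recombination; that is my simplification, not Zimmer’s (real recombination chops chromosomes into segments and makes retaining at least some DNA somewhat more likely), but it shows how the chance of carrying zero DNA from a given ancestor grows with each generation.

```python
# Toy model of "genetics is not genealogy": the probability that NONE of a
# particular ancestor's DNA survives to you, g generations later.
# Simplifying assumption: 22 autosomes passed whole, no recombination;
# each ancestral chromosome survives a given meiosis with probability 1/2.

def prob_no_dna(generations: int, chromosomes: int = 22) -> float:
    # The ancestor's child carries all 22 ancestral chromosomes; each then
    # has to survive (generations - 1) further meioses at 0.5 apiece.
    p_survive = 0.5 ** (generations - 1)
    return (1 - p_survive) ** chromosomes

for g in (2, 4, 6, 8, 10):
    print(f"{g:2d} generations back: "
          f"P(no DNA from that ancestor) = {prob_no_dna(g):.3f}")
# Grandparents (g=2) essentially always contribute; by roughly 8-10
# generations, inheriting zero DNA from a given ancestor becomes likely.
```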

Even without a genetic link, though, your ancestors remain your ancestors. They did indeed help shape who you are—not by giving you a gene for some particular trait, but by raising their own children, who then raised their own children in turn, passing down a cultural inheritance along with a genetic one.

Misconception #3: Ancestry Tests Are as Reliable as Medical Tests

Millions of people are getting ancestry reports based on their DNA. My own report informs me that I’m 43 percent Ashkenazi Jewish, 25 percent Northwestern European, 23 percent South/Central European, 6 percent Southwestern European, and 2.2 percent North Slavic. Those percentages sound impressive, even definitive. It’s easy to conclude that ancestry reports are as reliable as stepping on a scale at the doctor’s office to get your height and weight measured.

That is a mistake, and one that can cause a lot of heartbreak. To estimate ancestry, researchers compare each customer to a database of thousands of people from around the world. . . They can identify stretches of DNA that are likely to have originated in a particular part of the world. While some matches are clear-cut, others are less so. As a result, ancestry estimates always have margins of error—which often go missing in the reports customers get.

These estimates are going to get better with time, but there’s a fundamental limit to what they can tell us about our ancestry. . . Researchers are getting glimpses of those older peoples by retrieving DNA from ancient skeletons. And they’re finding that our genetic history is far more tumultuous than previously thought. Time and again, researchers find that the people who have lived in a given place in recent centuries have little genetic connection to the people who lived there thousands of years ago. All over the world, populations have expanded and migrated, coming into contact with other populations. . . If you want to find purity in your ancestry, you’re on a fool’s errand.

Misconception #4: There’s a Gene for Every Trait You Inherit

Mendel is a great place to start learning about heredity but a bad place to stop. There are some traits that are determined by a single gene. Whether Mendel’s peas were smooth or wrinkled was determined by a gene called SBEI. Whether people develop sickle cell anemia or not comes down to a single gene called HBB. But many traits do not follow this so-called Mendelian pattern—even ones that we may have been told in school are Mendelian.

Consider your ear lobes. For decades, teachers taught that they could either hang free or be attached to the side of our heads. The sort of ear lobes you had was a Mendelian trait, determined by a single gene. In fact, our ear lobes typically fall somewhere between the two extremes, from strongly attached to fully free. In 2017, a team of researchers compared the ear lobes of over 74,000 people to their DNA. They looked for genetic variants that were common in people at either end of the ear-lobe spectrum. They pinpointed forty-nine genes that appear to play a role in determining how attached our ear lobes are to our heads. There may well be more waiting to be discovered.

The genetics of ear lobes is actually very simple compared to other traits. Studying height, for example, scientists have identified thousands of genetic variants that appear to play a role. The same holds true for our risk of developing diabetes, heart disease, and other common disorders. We can’t expect to find a single gene in our DNA tests that determines whether we’ll die of a heart attack. Nor should we expect easy fixes for such complex diseases by repairing single genes.

Misconception #5: The Genes You Inherit Explain Exactly Who You Are

Take, for example, a recent study on how long people stay in school. Researchers examined DNA from 1.1 million people and found over 1,200 genetic variants that were unusually common either in people who left school early or in people who went on to college or graduate school. They then used the genetic differences in their subjects to come up with a predictive score, which they then tried out on another group of subjects. They found that in the highest-scoring 20 percent of these subjects, 57 percent finished college. In the lowest-scoring 20 percent, only 12 percent did.
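
For readers wondering what such a “predictive score” is mechanically, a polygenic score is essentially a weighted sum over genetic variants. The sketch below is purely illustrative: the weights and genotypes are random numbers I generate, not the study’s actual values.

```python
# Minimal sketch of a polygenic score: a weighted sum over variants.
# All numbers here are invented for illustration; the real study
# estimated effects for its ~1,200 variants from data.
import random

random.seed(0)
N_VARIANTS = 1200
weights = [random.gauss(0, 0.01) for _ in range(N_VARIANTS)]  # per-variant effects

def polygenic_score(genotype: list) -> float:
    """genotype[i] = count of 'effect' alleles (0, 1, or 2) at variant i."""
    return sum(w * g for w, g in zip(weights, genotype))

person = [random.choice([0, 1, 2]) for _ in range(N_VARIANTS)]
print(f"Polygenic score: {polygenic_score(person):+.3f}")
# Scores are then binned (e.g., into quintiles); the study compared college
# completion rates across the top and bottom 20 percent of scorers.
```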

But these results don’t mean that how long you stayed in school was determined before birth by your genes. Getting your children’s DNA tested won’t tell you if you should save up money for college tuition or not. Plenty of people in the educational attainment study who got high genetic scores dropped out of high school. Plenty of people who got low scores went on to get PhDs. And many more got an average amount of education in between those extremes. For any individual, these genetic scores make predictions that are barely better than guessing at random.

This confusing state of affairs is the result of how genes and the environment interact. Scientists call a trait such as how long people stay in school “moderately heritable.” In other words, a modest amount of the variation in educational attainment is due to genetic variation. Lots of other factors matter, too—the neighborhoods where people live, the quality of their schools, the stability of their family life, their income, and so on. What’s more, a gene that may have an influence on how long people stay in school in one environment may have no influence at all in another.
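
The textbook definition behind “moderately heritable” (standard quantitative genetics, not anything specific to this study, and assuming genetic and environmental contributions add independently) is the fraction of total trait variance attributable to genetic variance:

```latex
h^2 = \frac{\mathrm{Var}(G)}{\mathrm{Var}(P)}
    = \frac{\mathrm{Var}(G)}{\mathrm{Var}(G) + \mathrm{Var}(E)}
```

A moderately heritable trait has an h² well below 1, which is why even a well-calibrated genetic score leaves most of the individual variation unexplained.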

Misconception #6: You Have One Genome

According to this assumption, you will find an identical sequence of DNA in any cell you examine. But there are many ways in which we can end up with different genomes within our bodies.

Lydia Fairchild is known as a chimera. She developed inside her mother alongside a fraternal twin. That twin embryo died in the womb, but not before exchanging cells with Fairchild. Her body was thus made up of two populations of cells, each of which multiplied and developed into different tissues. In Fairchild’s case, her blood arose from one population, while her eggs arose from another.

It’s unclear how many people are chimeras. Once they were considered bizarre rarities. Scientists became aware of them only in cases such as Lydia Fairchild’s, when their mixed identity made itself known. In recent years, researchers have been carrying out small-scale surveys that suggest that perhaps a few percent of twins are chimeras, but the true number could be higher. As for chimeric mothers, they may be the rule rather than the exception. In a 2017 study, researchers examined brain tumors taken from women who had sons. Eighty percent of them had Y-chromosome-bearing cells in their tumors.

Chimerism is not the only way we can end up with different genomes. Every time a cell in our body divides, there’s a tiny chance that one of the daughter cells may gain a mutation. At first, these new aberrations—called somatic mutations—seemed important only for cancer. But that view has changed as new genome-sequencing technologies have made it possible for scientists to study somatic mutations in many healthy tissues. It now turns out that every person’s body is a mosaic, made up of populations of cells with many different mutations.

Misconception #7: Genes Don’t Matter Because of Epigenetics

The notion that our genes are our destiny can trigger an equally false backlash: that genes don’t matter at all. And very often, those who push against the importance of genetics invoke a younger, more tantalizing field of research: epigenetics.

Our cells use many layers of control to make proper use of their genes. They can quickly turn some genes on and off in response to quick changes in their environment. But they can also silence genes for life. Women, for example, have two copies of the X chromosome, but in early development, each of their cells produces a swarm of RNA molecules and proteins that clamp down on one copy. The cell then only uses the other X chromosome. And if the cell divides, its daughter cells will silence the same copy again.

One of the most tantalizing possibilities scientists are now exploring is whether certain epigenetic “marks” can be inherited not just by daughter cells but by daughters—and sons. If people experience trauma in their lives and it leaves an epigenetic mark on their genes, for example, can they pass down those marks to future generations?

If you’re a plant, the answer is definitely yes. Plants that endure droughts or attacks by insects can reprogram their seeds, and these epigenetic changes can get carried down several generations. The evidence from animals is, for now, still a mixed bag. . . But skeptics have questioned how epigenetics can transmit these traits through the generations, suggesting that the results are just statistical flukes. That hasn’t stopped a cottage industry of epigenetic self-help from springing up. You can join epigenetic yoga classes to rewrite your epigenetic marks or go to epigenetic psychotherapy sessions to overcome the epigenetic legacy you inherited from your grandparents.

On Sexual Brains: Vive La Difference!

As Jordan Peterson has pointed out, an ideology takes a partial truth and asserts it as the whole truth, and nothing but the truth.  With global warming/climate change, we see how a complex, poorly understood natural system is reduced to a simplistic tweet:  “Ninety-seven percent of scientists agree: climate change is real, man-made and dangerous.”  That is the work of a small but dedicated group of ideologues who captured and overturned climate science so that it now only functions as a tool of political operatives.

The post shows how decades of painstaking work in neurological science are being attacked by gender ideologues, who cannot tolerate any biological differences between men and women.

Larry Cahill writes at Quillette, Denying the Neuroscience of Sex Differences. Excerpts in italics with my bolds.

For decades neuroscience, like most research areas, overwhelmingly studied only males, assuming that everything fundamental to know about females would be learned by studying males. I know — I did this myself early in my career. Most neuroscientists assumed that differences between males and females, if they exist at all, are not fundamental, that is, not essential for understanding brain structure or function. Instead, we assumed that sex differences result from undulating sex hormones (typically viewed as a sort of pesky feature of the female), and/or from different life experiences (“culture”). In either case, they were dismissable in our search for the fundamental. In truth, it was always a strange assumption, but so it was.

Gradually however, and inexorably, we neuroscientists are seeing just how profoundly wrong — and in fact disproportionately harmful to women — that assumption was, especially in the context of understanding and treating brain disorders. Any reader wishing to confirm what I am writing can easily start by perusing online the January/February 2017 issue of the Journal of Neuroscience Research, the first ever of any neuroscience journal devoted to the topic of sex differences in its entirety. All 70 papers, spanning the neuroscience spectrum, are open access to the public.

In statistical terms, something called effect size measures the size of the influence of one variable on another. Although some believe that sex differences in the brain are small, in fact, the average effect size found in sex differences research is no different from the average effect size found in any other large domain of neuroscience. So here is a fact: It is now abundantly clear to anyone honestly looking, that the variable of biological sex influences all levels of mammalian brain function, down to the cellular/genetic substrate, which of course includes the human mammalian brain.
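
For readers unfamiliar with the term, here is one common effect-size measure, Cohen’s d: the difference between two group means expressed in units of their pooled standard deviation. The article does not say which measure the sex-differences literature uses in each case; this is simply the most familiar one, with illustrative data of my own.

```python
# Cohen's d: difference of two group means in pooled-SD units.
# Data below are invented solely to show the computation.
import statistics

def cohens_d(group_a: list, group_b: list) -> float:
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

a = [5.1, 4.8, 5.6, 5.3, 4.9, 5.4]
b = [4.7, 4.5, 5.0, 4.9, 4.4, 5.1]
print(f"Cohen's d = {cohens_d(a, b):.2f}")
# Prints ~1.42 for these samples -- a large effect by Cohen's rough
# benchmarks (0.2 small, 0.5 medium, 0.8 large).
```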

The mammalian brain is clearly a highly sex-influenced organ. Both its function and dysfunction must therefore be sex influenced to an important degree. How exactly all of these myriad sex influences play out is often hard, or even impossible to pinpoint at present (as it is for almost every issue in neuroscience). But that they must play out in many ways, both large and small, having all manner of implications for women and men that we need to responsibly understand, is now beyond debate — at least among non-ideologues.

Recognizing our obligation to carefully study sex influences in essentially all domains (not just neuroscience), the National Institutes of Health on January 25, 2016 adopted a policy (called “Sex as a Biological Variable,” or SABV for short) requiring all of its grantees to seriously incorporate the understanding of females into their research. This was a landmark moment, a conceptual corner turned that cannot be unturned.

But the remarkable and unprecedented growth in research demonstrating biologically-based sex influences on brain function triggered 5-alarm fire bells in those who believe that such biological influences cannot exist.

Since Simone de Beauvoir in the early 1950s famously asserted that “One is not born, but rather becomes, a woman,” and John Money at Johns Hopkins shortly thereafter introduced the term “gender” (borrowed from linguistics) to avoid the biological implications of the word “sex,” a belief that no meaningful differences exist in the brains of women and men has dominated U.S. culture. And God help you if you suggest otherwise! Gloria Steinem once called sex differences research “anti-American crazy thinking.” Senior colleagues warned me as an untenured professor around the year 2000 that studying sex differences would be career suicide. This new book by Rippon marks the latest salvo by a very small but vocal group of anti-sex difference individuals determined to perpetuate this cultural myth.

A book like this is very difficult for someone knowledgeable about the field to review seriously. It is so chock-full of bias that one keeps wondering why one is bothering with it. Suffice it to say it is replete with tactics that are now standard operating procedure for the anti-sex difference writers. The most important tactic is a comically biased, utterly non-representative view of the enormous literature of studies ranging from humans to single neurons. Other tactics include magnifying or inventing problems with disfavored studies, ignoring even fatal problems with favored studies, dismissing what powerful animal research reveals about mammalian brains, hiding uncomfortable facts in footnotes, pretending not to be denying biologically based sex-influences on the brain while doing everything possible to deny them, pretending to be in favor of understanding sex differences in medical contexts yet never offering a single specific research example of why the issue is important for medicine, treating “brain plasticity” as a magic talisman with no limitations that can explain away sex differences, presenting a distorted view of the “stereotype” literature and what it really suggests, and resurrecting 19th century arguments almost no modern neuroscientist knows of, or cares about. Finally, there is the catchy name used to slander those who dare to be good scientists and investigate potential sex influences in their research despite the profound biases against the topic (“neurosexists!”). These tactics work quite well with those who know little or nothing about the neuroscience.

The book is downright farcical when it comes to modern animal research, simply ignoring the vast majority of it. The enormous power of animal research, of course, is that it can establish sex influences in particular on mammalian brain function (such as sex differences in risk-taking, play behavior, and responses to social defeat as just three examples) that cannot be explained by human culture, (although they may well be influenced in humans by culture.) Rippon engages in what is effectively a denial of evolution, implying to her reader that we should ignore the profound implications of animal research (“Not those bloody monkeys again!”) when trying to understand sex influences on the human brain. She is right only if you believe evolution in humans stopped at the neck.

Rippon tries to convince you (and may even believe herself) that it is impossible to disentangle biology from culture when investigating sex differences in humans. This is false. I encourage the interested reader to see the discussion of the excellent work doing exactly this by a sociologist named J. Richard Udry in an article I wrote in 2014 for the Dana Foundation’s “Cerebrum,” free online.

Rippon does not mention Udry’s work, or its essential replication by Udry’s harshest critic, a leading sociologist who has described herself as a “feminist” who now “wrestles” with testosterone. (The Dana paper “Equal ≠ Same” also deconstructs the specious “brain plasticity” argument on which Rippon’s narrative heavily rests.)

Of course, Rippon is completely correct in arguing that neuroscientists (and the general public) should remember that “nature” interacts with “nurture,” and should not run wild with implications of sex difference findings for brain function and behavior. We must also reject the illogical conclusion that sex influences on the brain will mean that women are superior, or that men are superior. I genuinely do not know a single neuroscientist who disagrees with these arguments. But she studiously avoids an equally important truth: That neuroscientists should not deny that biologically-based sex differences exist and likely have important implications for understanding brain function and behavior, nor should they fear investigating them.

You may ask: What exactly are people like Rippon so afraid of? She cites potential misuse of the findings for sexist ends, which has surface plausibility. But by that logic we should also stop studying, for example, genetics. The potential to misuse new knowledge has been around since we discovered fire and invented the wheel. It is not a valid argument for remaining ignorant.

After almost 20 years of hearing the same invalid arguments (like Bill Murray in “Groundhog Day” waking up to the same song every day), I have come to see clearly that the real problem is a deeply ingrained, implicit, very powerful yet 100 percent false assumption that if women and men are to be considered “equal,” they have to be “the same.” Conversely, the argument goes, if neuroscience shows that women and men are not the same on average, then it somehow shows that they are not equal on average. Although this assumption is false, it still creates fear of sex differences in those operating on it. Ironically, forced sameness where two groups truly differ in some respect means forced inequality in that respect, exactly as we see in medicine today.

Women are not treated equally with men in biomedicine today because overwhelmingly they are still being treated the same as men (although this is finally changing). Yet astoundingly, and despite claiming she is not anti-sex difference, Rippon says “perhaps we should just stop looking for [sex] differences altogether?” Such dumbfounding statements from a nominal expert make me truly wonder whether the Rippons of the world even realize that, by constantly denying and trivializing and even vilifying research into biologically-based sex influences on the brain they are in fact advocating for biomedical research to retain its male subject-dominated status quo so disproportionately harmful to women.

So are female and male brains the same or different? We now know that the correct answer is “yes”: They are the same or similar on average in many respects, and they are different, a little to a lot, on average in many other respects. The neuroscience behind this conclusion is now remarkably robust, and not only won’t be going away, it will only grow. And yes, we, of course, must explore sex influences responsibly, as with all science. Sadly, the anti-sex difference folks will doubtless continue their ideological attacks on the field and the scientists in it.

Thus one can at present only implore thinking individuals to be wary of ideologues on both sides of the sex difference issue — those who want to convince you that men and women are always as different as Mars and Venus (and that perhaps God wants it that way), and those who want to convince you of the demonstrably false idea that the brains of women and men are for all practical purposes the same (“unisex”), that all differences between women and men are really due to an arbitrary culture (a “gendered world”), and that you are essentially a bad person if you disagree.

No one seems to have a problem accepting that, on average, male and female bodies differ in many, many ways. Why is it surprising or unacceptable that this is true for the part of our body that we call “brain”? Marie Curie said, “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.” Her sage advice applies perfectly to discussions about the neuroscience of sex differences in 2019.

Larry Cahill is a professor in the Department of Neurobiology and Behavior at the University of California, Irvine and an internationally recognized leader on the topic of sex influences on brain function.

Footnote:  This video uses humor to look at sexual brains based on observed human behavior: “Why Men and Women Think Differently.”

See Also Gender Ideology and Science, including excerpts from Jordan Peterson