Which Comes First: Story or Facts?


Facts vs Stories is written by Steven Novella at Neurologica. Excerpts in italics with my bolds.

There is a common style of journalism, that you are almost certainly very familiar with, in which the report starts with a personal story, then delves into the facts at hand often with reference to the framing story and others like it, and returns at the end to the original personal connection. This format is so common it’s a cliche, and often the desire to connect the actual new information to an emotional story takes over the reporting and undermines the facts.

This format reflects a more general phenomenon – that people are generally more interested in and influenced by a good narrative than by dry facts. Or are we? New research suggests that while the answer is still generally yes, there is some more nuance here (isn’t there always?). The researchers did three studies in which they compared the effects of strong vs weak facts presented either alone or embedded in a story. In the first two studies the information was about a fictitious new phone. The weak fact was that the phone could withstand a fall of 3 feet. The strong fact was that the phone could withstand a fall of 30 feet. What they found in both studies is that the weak fact was more persuasive when presented embedded in a story than alone, while the strong fact was less persuasive.

They then did a third study about a fictitious flu medicine, and asked subjects if they would give their e-mail address for further information. People are generally reluctant to give away their e-mail address unless it’s worth it, so this was a good test of how persuasive the information was. When a strong fact about the medicine was given alone, 34% of the participants were willing to provide their e-mail. When embedded in a story, only 18% provided their e-mail.  So, what is responsible for this reversal of the normal effect that stories are generally more persuasive than dry facts?

The authors suggest that stories may impair our ability to evaluate factual information.

This is not unreasonable, and is suggested by other research as well. To a much greater extent than you might think, cognition is a zero-sum game. When you allocate resources to one task, those resources are taken away from other mental tasks (this basic process is called “interference” by psychologists). Further, adding complexity to brain processing, even if this leads to more sophisticated analysis of information, tends to slow down the whole process. And also, parts of the brain can directly suppress the functioning of other parts of the brain. This inhibitory function is actually a critical part of how the brain works together.

Perhaps the most dramatic relevant example of this is a study I wrote about previously, in which fMRI scans were used to study subjects listening to a charismatic speaker who was either from the subject’s religion or not. When a charismatic speaker who matched the subject’s religion was speaking, the critical thinking part of the brain was literally suppressed. In fact, this study also found opposite effects depending on context.

The contrast estimates reveal a significant increase of activity in response to the non-Christian speaker (compared to baseline) and a massive deactivation in response to the Christian speaker known for his healing powers. These results support recent observations that social categories can modulate the frontal executive network in opposite directions corresponding to the cognitive load they impose on the executive system.

So when listening to speech from a belief system we don’t already believe, we engaged our executive function. When listening to speech from within our existing belief system, we suppressed our executive function.

In regards to the current study, is something similar going on? Does processing the emotional content of stories impair our processing of factual information, which is a benefit for weak facts but actually a detriment to the persuasive power of strong facts that are persuasive on their own?

Another potential explanation occurs to me, however (showing how difficult it can be to interpret the results of psychological research like this). It is a reasonable premise that a strong fact is more persuasive on its own than a weak fact – being able to survive a 3 foot fall is not as impressive as a 30 foot fall. But the more impressive fact may also trigger more skepticism. I may simply not believe that a phone could survive such a fall. If that fact, however, is presented in a straightforward fashion, it may seem somewhat credible. If it is presented as part of a story that is clearly meant to persuade me, then that might trigger more skepticism. In fact, doing so is inherently sketchy. The strong fact is impressive on its own, so why are you trying to persuade me with this unnecessary personal story – unless the fact is BS?

There is also research to support this hypothesis. When a documentary about a fringe topic, like UFOs, includes the claim that, “This is true,” that actually triggers more skepticism. It encourages the audience to think, “Wait a minute, is this true?” Meanwhile, including a scientist who says, “This is not true,” may actually increase belief, because the audience is impressed that the subject is being taken seriously by a scientist, regardless of their ultimate conclusion. But the extent of such backfire effects remains controversial in psychological research – it appears to be very context dependent.

I would summarize all this by saying that – we can identify psychological effects that relate to belief and skepticism. However, there are many potential effects that can be triggered in different situations, and interact in often complex and unpredictable ways. So even when we identify a real effect, such as the persuasive power of stories, it doesn’t predict what will happen in every case. In fact, the net statistical effect may disappear or even reverse in certain contexts, because it is either neutralized or overwhelmed by another effect. I think that is what is happening here.

What do you do when you are trying to be persuasive, then? The answer has to be: it depends. Who is your audience? What claims or facts are you trying to get across? What is the ultimate goal of the persuasion (public service, education, political activism, marketing)? I don’t think we can generate any solid algorithm, but we do have some guiding rules of thumb.

First, know your audience, or at least those you are trying to persuade. No message will be persuasive to everyone.

If the facts are impressive on their own, let them speak for themselves. Perhaps put them into a little context, but don’t try to wrap them up in an emotional story. That may backfire.

Depending on context, your goal may be to not just provide facts, but to persuade your audience to reject a current narrative for a better one. In this case the research suggests you should both argue against the current narrative, and provide a replacement that provides an explanatory model.

So you can’t just debunk a myth, conspiracy theory, or misconception. You need to provide the audience with another way to make sense of their world.

When possible find common ground. Start with the premises that you think most reasonable people will agree with, then build from there.

Now, it’s not my goal to outline how to convince people of things that are not true, or that are subjective but in your personal interest. That’s not what this blog is about. I am only interested in persuading people to proportion their belief to the logic and evidence. So I am not going to recommend ways to avoid triggering skepticism – I want to trigger skepticism. I just want it to be skepticism based on science and critical thinking, not emotional or partisan denial, nihilism, cynicism, or just being contrarian.

You also have to recognize that it can be difficult to persuade people. This is especially true if your message is constrained by facts and reality. Sometimes the real information is not optimized for emotional appeal, and it has to compete against messages that are so optimized (and are unconstrained by reality). But at least knowing the science about how people process information and form their beliefs is useful.

Postscript:  Hans Rosling demonstrates how to use data to tell the story of our rising civilization.

Bottom Line:  When it comes to science, the rule is to follow the facts.  When the story is contradicted by new facts, the story changes to fit the facts, not the other way around.

See also:  Data, Facts and Information


Too Many People, or Too Few?

A placard outside the UN headquarters in New York City, November 2011

Some years ago I read the book Boom, Bust and Echo. It described how planners for public institutions like schools and hospitals often fail to anticipate demographic shifts. The authors described how in North America, the baby Boom after WWII overcrowded schools, and governments struggled to build and staff more facilities. Just as they were catching up came the sexual revolution and the drop in fertility rates, resulting in a population Bust in children entering the education system. Now the issue was to close schools and retire teachers due to overcapacity, not easy to do with sentimental attachments. Then as the downsizing took hold came the Echo. Baby boomers began bearing children, and even at a lower birth rate, it still meant an increased cohort of students arriving at a diminished system.

The story is similar to what is happening today with world population. Zachary Karabell writes in Foreign Affairs The Population Bust: Demographic Decline and the End of Capitalism as We Know It. Excerpts in italics with my bolds.

For most of human history, the world’s population grew so slowly that for most people alive, it would have felt static. Between the year 1 and 1700, the human population went from about 200 million to about 600 million; by 1800, it had barely hit one billion. Then, the population exploded, first in the United Kingdom and the United States, next in much of the rest of Europe, and eventually in Asia. By the late 1920s, it had hit two billion. It reached three billion around 1960 and then four billion around 1975. It has nearly doubled since then. There are now some 7.6 billion people living on the planet.

Just as much of the world has come to see rapid population growth as normal and expected, the trends are shifting again, this time into reverse. Most parts of the world are witnessing sharp and sudden contractions in either birthrates or absolute population. The only thing preventing the population in many countries from shrinking more quickly is that death rates are also falling, because people everywhere are living longer. These oscillations are not easy for any society to manage. “Rapid population acceleration and deceleration send shockwaves around the world wherever they occur and have shaped history in ways that are rarely appreciated,” the demographer Paul Morland writes in The Human Tide, his new history of demographics. Morland does not quite believe that “demography is destiny,” as the old adage mistakenly attributed to the French philosopher Auguste Comte would have it. Nor do Darrell Bricker and John Ibbitson, the authors of Empty Planet, a new book on the rapidly shifting demographics of the twenty-first century. But demographics are clearly part of destiny. If their role first in the rise of the West and now in the rise of the rest has been underappreciated, the potential consequences of plateauing and then shrinking populations in the decades ahead are almost wholly ignored.

The mismatch between expectations of a rapidly growing global population (and all the attendant effects on climate, capitalism, and geopolitics) and the reality of both slowing growth rates and absolute contraction is so great that it will pose a considerable threat in the decades ahead. Governments worldwide have evolved to meet the challenge of managing more people, not fewer and not older. Capitalism as a system is particularly vulnerable to a world of less population expansion; a significant portion of the economic growth that has driven capitalism over the past several centuries may have been simply a derivative of more people and younger people consuming more stuff. If the world ahead has fewer people, will there be any real economic growth? We are not only unprepared to answer that question; we are not even starting to ask it.

BOMB OR BUST?
At the heart of The Human Tide and Empty Planet, as well as demography in general, is the odd yet compelling work of the eighteenth-century British scholar Thomas Malthus. Malthus’ 1798 Essay on the Principle of Population argued that growing numbers of people were a looming threat to social and political stability. He was convinced that humans were destined to produce more people than the world could feed, dooming most of society to suffer from food scarcity while the very rich made sure their needs were met. In Malthus’ dire view, that would lead to starvation, privation, and war, which would eventually lead to population contraction, and then the depressing cycle would begin again.

Yet just as Malthus reached his conclusions, the world changed. Increased crop yields, improvements in sanitation, and accelerated urbanization led not to an endless cycle of impoverishment and contraction but to an explosion of global population in the nineteenth century. Morland provides a rigorous and detailed account of how, in the nineteenth century, global population reached its breakout from millennia of prior human history, during which the population had been stagnant, contracting, or inching forward. He starts with the observation that the population begins to grow rapidly when infant mortality declines. Eventually, fertility falls in response to lower infant mortality—but there is a considerable lag, which explains why societies in the modern world can experience such sharp and extreme surges in population. In other words, while infant mortality is high, women tend to give birth to many children, expecting at least some of them to die before reaching maturity. When infant mortality begins to drop, it takes several generations before fertility does, too. So a woman who gives birth to six children suddenly has six children who survive to adulthood instead of, say, three. Her daughters might also have six children each before the next generation of women adjusts, deciding to have smaller families.
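My note: Morland’s lag can be made concrete with a toy calculation. The sketch below is my own illustration, not from the book; the 25-year generations, survival rates, and fertility schedule are invented, chosen only to show how a population explodes when child survival improves a few generations before fertility falls.

```python
# Toy model of the demographic transition lag (illustrative assumptions only):
# discrete generations, child survival jumps immediately, fertility falls later.

def next_generation(women, births_per_woman, survival):
    """Adult women in the next generation (assume half of surviving children are girls)."""
    return women * births_per_woman * 0.5 * survival

women = 1000.0                        # adult women in generation 0
fertility = [6, 6, 6, 4, 3, 2.1]      # births per woman: declines only after a lag
survival  = [0.33, 0.9, 0.9, 0.9, 0.9, 0.9]   # child survival improves right away

for gen, (f, s) in enumerate(zip(fertility, survival)):
    print(f"generation {gen}: {women:,.0f} adult women")
    women = next_generation(women, f, s)
```

With high mortality the population is roughly stable; once survival improves, the same six births per woman produce explosive growth until fertility catches up.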

The burgeoning of global population in the past two centuries followed almost precisely the patterns of industrialization, modernization, and, crucially, urbanization. It started in the United Kingdom at the end of the eighteenth century (hence the concerns of Malthus), before spreading to the United States and then France and Germany. The trend next hit Japan, India, and China and made its way to Latin America. It finally arrived in sub-Saharan Africa, which has seen its population surge thanks to improvements in medicine and sanitation but has not yet enjoyed the full fruits of industrialization and a rapidly growing middle class.

With the population explosion came a new wave of Malthusian fears, epitomized by the 1968 book The Population Bomb, by Paul Ehrlich, a biologist at Stanford University. Ehrlich argued that plummeting death rates had created an untenable situation of too many people who could not be fed or housed. “The battle to feed all of humanity is over,” he wrote. “In the 1970’s the world will undergo famines—hundreds of millions of people are going to starve to death in spite of any crash programs embarked on now.”

Ehrlich’s prophecy, of course, proved wrong, for reasons that Bricker and Ibbitson elegantly chart in Empty Planet. The green revolution, a series of innovations in agriculture that began in the early twentieth century, accelerated such that crop yields expanded to meet humankind’s needs. Moreover, governments around the world managed to remediate the worst effects of pollution and environmental degradation, at least in terms of daily living standards in multiple megacities, such as Beijing, Cairo, Mexico City, and New Delhi. These cities face acute challenges related to depleted water tables and industrial pollution, but there has been no crisis akin to what was anticipated.

Doesn’t anyone want my Green New Deal?

Yet visions of dystopic population bombs remain deeply entrenched, including at the center of global population calculations: in the forecasts routinely issued by the United Nations. Today, the UN predicts that global population will reach nearly ten billion by 2050. Judging from the evidence presented in Morland’s and Bricker and Ibbitson’s books, it seems likely that this estimate is too high, perhaps substantially. It’s not that anyone is purposely inflating the numbers. Governmental and international statistical agencies do not turn on a dime; they use formulas and assumptions that took years to formalize and will take years to alter. Until very recently, the population assumptions built into most models accurately reflected what was happening. But the sudden ebb of both birthrates and absolute population growth has happened too quickly for the models to adjust in real time. As Bricker and Ibbitson explain,

“The UN is employing a faulty model based on assumptions that worked in the past but that may not apply in the future.”

Population expectations aren’t merely of academic interest; they are a key element in how most societies and analysts think about the future of war and conflict. More acutely, they drive fears about climate change and environmental stability—especially as an emerging middle class numbering in the billions demands electricity, food, and all the other accoutrements of modern life and therefore produces more emissions and places greater strain on farms with nutrient-depleted soil and evaporating aquifers. Combined with warming-induced droughts, storms, and shifting weather patterns, these trends would appear to line up for some truly bad times ahead.

Except, argue Bricker and Ibbitson, those numbers and all the doomsday scenarios associated with them are likely wrong. As they write,

“We do not face the challenge of a population bomb but a population bust—a relentless, generation-after-generation culling of the human herd.”

Already, the signs of the coming bust are clear, at least according to the data that Bricker and Ibbitson marshal. Almost every country in Europe now has a fertility rate below the 2.1 births per woman that is needed to maintain a static population. The UN notes that in some European countries, the birthrate has increased in the past decade. But that has merely pushed the overall European birthrate up from 1.5 to 1.6, which means that the population of Europe will still grow older in the coming decades and contract as new births fail to compensate for deaths. That trend is well under way in Japan, whose population has already crested, and in Russia, where the same trends, plus high mortality rates for men, have led to a decline in the population.

What is striking is that the population bust is going global almost as quickly as the population boom did in the twentieth century. Fertility rates in China and India, which together account for nearly 40 percent of the world’s people, are now at or below replacement levels. So, too, are fertility rates in other populous countries, such as Brazil, Malaysia, Mexico, and Thailand. Sub-Saharan Africa remains an outlier in terms of demographics, as do some countries in the Middle East and South Asia, such as Pakistan, but in those places, as well, it is only a matter of time before they catch up, given that more women are becoming educated, more children are surviving their early years, and more people are moving to cities.

Both books note that the demographic collapse could be a bright spot for climate change. Given that carbon emissions are a direct result of more people needing and demanding more stuff—from food and water to cars and entertainment—then it would follow that fewer people would need and demand less. What’s more, larger proportions of the planet will be aging, and the experiences of Japan and the United States are showing that people consume less as they age. A smaller, older population spells some relief from the immense environmental strain of so many people living on one finite globe.


That is the plus side of the demographic deflation. Whether the concomitant greening of the world will happen quickly enough to offset the worst-case climate scenarios is an open question—although current trends suggest that if humanity can get through the next 20 to 30 years without irreversibly damaging the ecosystem, the second half of the twenty-first century might be considerably brighter than most now assume.

The downside is that a sudden population contraction will place substantial strain on the global economic system.

Capitalism is, essentially, a system that maximizes more—more output, more goods, and more services. That makes sense, given that it evolved coincidentally with a population surge. The success of capitalism in providing more to more people is undeniable, as are its evident defects in providing every individual with enough. If global population stops expanding and then contracts, capitalism—a system implicitly predicated on ever-burgeoning numbers of people—will likely not be able to thrive in its current form. An aging population will consume more of certain goods, such as health care, but on the whole aging and then decreasing populations will consume less. So much of consumption occurs early in life, as people have children and buy homes, cars, and white goods. That is true not just in the more affluent parts of the world but also in any country that is seeing a middle-class surge.

But what happens when these trends halt or reverse? Think about the future cost of capital and assumptions of inflation. No capitalist economic system operates on the presumption that there will be zero or negative growth. No one deploys investment capital or loans expecting less tomorrow than today. But in a world of graying and shrinking populations, that is the most likely scenario, as Japan’s aging, graying, and shrinking absolute population now demonstrates. A world of zero to negative population growth is likely to be a world of zero to negative economic growth, because fewer and older people consume less. There is nothing inherently problematic about that, except for the fact that it will completely upend existing financial and economic systems. The future world may be one of enough food and abundant material goods relative to the population; it may also be one in which capitalism at best frays and at worst breaks down completely.

The global financial system is already exceedingly fragile, as evidenced by the 2008 financial crisis. A world with negative economic growth, industrial capacity in excess of what is needed, and trillions of dollars expecting returns when none is forthcoming could spell a series of financial crises. It could even spell the death of capitalism as we know it. As growth grinds to a halt, people may well start demanding a new and different economic system. Add in the effects of automation and artificial intelligence, which are already making millions of jobs redundant, and the result is likely a future in which capitalism is increasingly passé.

If population contraction were acknowledged as the most likely future, one could imagine policies that might preserve and even invigorate the basic contours of capitalism by setting much lower expectations of future returns and focusing society on reducing costs (which technology is already doing) rather than maximizing output.

But those policies would likely be met in the short term by furious opposition from business interests, policymakers, and governments, all of whom would claim that such attitudes are defeatist and could spell an end not just to growth but to prosperity and high standards of living, too. In the absence of such policies, the danger of the coming shift will be compounded by a complete failure to plan for it.

Different countries will reach the breaking point at different times. Right now, the demographic deflation is happening in rich societies that are able to bear the costs of slower or negative growth using the accumulated store of wealth that has been built up over generations. Some societies, such as the United States and Canada, are able to temporarily offset declining population with immigration, although soon, there won’t be enough immigrants left. As for the billions of people in the developing world, the hope is that they become rich before they become old. The alternative is not likely to be pretty: without sufficient per capita affluence, it will be extremely difficult for developing countries to support aging populations.

So the demographic future could end up being a glass half full, by ameliorating the worst effects of climate change and resource depletion, or a glass half empty, by ending capitalism as we know it. Either way, the reversal of population trends is a paradigm shift of the first order and one that is almost completely unrecognized. We are vaguely prepared for a world of more people; we are utterly unprepared for a world of fewer. That is our future, and we are heading there fast.

See also Control Population, Control the Climate. Not.

Epic Media Science Fail: Fear Not Pollinator Collapse

Jon Entine returns to this topic writing at the Genetic Literacy Project: The world faces ‘pollinator collapse’? How and why the media get the science wrong time and again. Excerpts in italics with my bolds.

As I and others have detailed in the Genetic Literacy Project and as other news organizations such as the Washington Post and Slate have outlined, the pollinator-collapse narrative has been relentless and mostly wrong for more than seven years now.

It germinated with Colony Collapse Disorder, which began in 2006 and lasted for a few years—a freaky die-off that killed almost a quarter of the US honey bee population, though its cause remains unknown. Versions of CCD have been occurring periodically for hundreds of years, according to entomologists.

Today, almost all entomologists are convinced that the ongoing bee health crisis is primarily driven by the nasty Varroa destructor mite. Weakened honey bees, trucked around the country as livestock, face any number of health stressors along with Varroa, including the miticides used to control the invasive mite, changing weather and land use, and some farm chemicals, which may lower the honey bee’s ability to fight off disease.

Still, the ‘bee crisis’ flew under the radar until 2012, when advocacy groups jumped in to provide an apocalyptic narrative after a severe winter led to a sharp, and as it turned out temporary, rise in overwinter bee deaths.

Colony loss numbers jumped in 2006 when CCD hit but have been steady and even improving since.

The alarm bells came with a spin, as advocacy groups blamed a class of pesticides known as neonicotinoids, which were introduced in the 1990s, well after the Varroa mite invasion infected hives and started the decline. The characterization was apocalyptic, with some activists claiming that neonics were driving honey bees to extinction.

In the lab evaluations, which are not considered state of the art—field evaluations replicate real-world conditions far better—honeybee mortality did increase. But that was also true of all the insecticides tested; after all, they are designed to kill harmful pests. Neonics are actually far safer than the pesticides they replaced, . . . particularly when their impact is observed under field-realistic conditions (i.e., the way farmers would actually apply the pesticide).

As the “science” supporting the bee-pocalypse came under scrutiny, the ‘world pollinator crisis’ narrative began to fray. Not only was it revealed that the initial experiments had severely overdosed the bees, but increasing numbers of high-quality field studies – which test how bees are actually affected under realistic conditions – found that bees can successfully forage on neonic-treated crops without noticeable harm.

Those determined to keep the crisis narrative alive were hardly deterred. Deprived of both facts and science to argue their case, many advocacy groups simply pounded the table by shifting their crisis argument dramatically. For example, in 2016, the Sierra Club (while requesting donations) hyped the honey bee crisis to no end.

But more recently, in 2018, the same organization posted a different message on its blog. Honeybees, the Sierra Club grudgingly acknowledged, were not threatened. Forget honeybees, the Sierra Club said, the problem is now wild bees, or more generally, all insect pollinators, which are facing extinction due to agricultural pesticides of all types (though neonics, they insisted, were especially bad).

So, once again, with neither the facts nor the science to back them up, advocacy groups have pulled a switcheroo and are again pounding the table. As they once claimed with honeybees, they now claim that the loss of wild bees and other insect pollinators imperils our food supply. A popular meme on this topic is the oft-cited statistic, which appears in the recent UN IPBES report on biodiversity, that “more than 75 per cent of global food crop types, including fruits and vegetables and some of the most important cash crops such as coffee, cocoa and almonds, rely on animal pollination.”

There’s a sleight of hand here. Most people (including most journalists) miss or gloss over the important point that this is 75 percent of crop types, or varieties, not 75 percent of all crop production. In fact, 60 percent of agricultural production comes from crops that do not rely on animal pollination, including cereals and root crops. As the GLP noted in its analysis, only about 7 percent of crop output is threatened by pollinator declines—not a welcome percentage, but far from an apocalypse.
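My note: the types-versus-output distinction is easy to show with arithmetic. The numbers below are invented for illustration (they are not the GLP or IPBES figures); they simply mimic a world where most crop varieties get some pollinator benefit while most tonnage comes from wind- or self-pollinated staples.

```python
# Hypothetical crops: (name, production in arbitrary tonnes, benefits from animal pollination?)
crops = [
    ("wheat",       750, False),
    ("rice",        740, False),
    ("maize",      1100, False),
    ("apples",       85, True),
    ("almonds",       3, True),
    ("coffee",       10, True),
    ("cocoa",         5, True),
    ("melons",       30, True),
    ("squash",       25, True),
    ("cherries",      4, True),
    ("blueberries",   1, True),
]

total_output = sum(t for _, t, _ in crops)
type_share   = sum(1 for _, _, p in crops if p) / len(crops)
output_share = sum(t for _, t, p in crops if p) / total_output

print(f"crop types that benefit from pollinators: {type_share:.0%}")
print(f"share of total output from those crops:   {output_share:.0%}")
```

Counting varieties and weighting by tonnage answer two different questions, which is exactly the sleight of hand described above.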

And the word “rely” seems almost purposefully misleading. More accurately, most of these crops receive some marginal boost in yield from pollination. Few actually “rely” on it. A UN IPBES report on pollinators published in 2018 actually breaks this down in a convenient pie graph.

Many of these facts are ignored by advocacy groups sharpening their axes, and they’re generally lost on the “if it bleeds it leads” media, which consistently play up catastrophe scenarios of crashing pollinator communities and food supplies. Unfortunately, many scientists willingly go along. Some are activists themselves; others hope to elevate the significance of their findings to garner media attention and supercharge grant proposals.

As John Adams is alleged to have said, ‘facts are stubborn things.’ We can’t be simultaneously in the midst of a pollinator crisis threatening our ability to grow food and see continually rising yield productivity among those crops most sensitive to pollination.

With these claims of an impending wild bee catastrophe, as in the case of the original honeybee-pocalypse claims, few of the journalists, activists, scientists or biodiversity experts who regularly sound this ecological alarm have reviewed the facts in context. Advocacy groups consistently extrapolate from the declines of a handful of wild bee species (out of the thousands that we know exist) to claim that we are in the midst of a worldwide crisis. But just as with the ‘honey bee-mageddon’, we are not.

Those of us who actually care about science and fact, however, might note the irony here: It is precisely the pesticides which the catastrophists are urging us to ban that, along with the many other tools in the modern farmer’s kit, have enabled us to grow more of these nutritious foods, at lower prices, than ever before in human history.

Scientific vs. Social Authenticity

Credit: Stanislaw Pytel / Getty Images

This post was triggered by an essay in Scientific American Authenticity under Fire by Scott Barry Kaufman. He raises modern issues and expresses a social and psychological sense of authenticity that left me unsatisfied.  So following that, I turn to a scientific standard much richer in meaning and closer to my understanding.

Social Authenticity

Researchers are calling into question authenticity as a scientifically viable concept

Authenticity is one of the most valued characteristics in our society. As children we are taught to just “be ourselves”, and as adults we can choose from a large number of self-help books that will tell us how important it is to get in touch with our “real self”. It’s taken as a given by everyone that authenticity is a real thing and that it is worth cultivating.

Even the science of authenticity has surged in recent years, with hundreds of journal articles, conferences, and workshops. However, the more that researchers have put authenticity under the microscope, the more muddied the waters of authenticity have become.

Many common ideas about authenticity are being overturned.
Turns out, authenticity is a real mess.

One big problem with authenticity is that there is a lack of consensus among both the general public and among psychologists about what it actually means for someone or something to be authentic. Are you being most authentic when you are being congruent with your physiological states, emotions, and beliefs, whatever they may be?

Another thorny issue is measurement. Virtually all measures of authenticity involve self-report measures. However, people often do not know what they are really like or why they actually do what they do. So a test that asks people to report how authentic they are is unlikely to be a truly accurate measure of their authenticity.

Perhaps the thorniest issue of them all though is the entire notion of the “real self”. The humanistic psychotherapist Carl Rogers noted that many people who seek psychotherapy are plagued by the question “Who am I, really?” While people spend so much time searching for their real self, the stark reality is that all of the aspects of your mind are part of you.

So what is this “true self” that people are always talking about? Once you take a closer scientific examination, it seems that what people refer to as their “true self” really is just the aspects of themselves that make them feel the best about themselves.

Even more perplexing, it turns out that most people’s feelings of authenticity have little to do with acting in accord with their actual nature. The reality appears to be quite the opposite. All people tend to feel most authentic when having the same experiences, regardless of their unique personality.

Another counterintuitive finding is that people actually tend to feel most authentic when they are acting in socially desirable ways, not when they are going against the grain of cultural dictates (which is how authenticity is typically portrayed). On the flip side, people tend to feel inauthentic when they are feeling socially isolated, or feel as though they have fallen short of the standards of others.

Therefore, what people think of as their true self may actually just be what people want to be seen as. According to social psychologist Roy Baumeister, we will report feeling highly authentic and satisfied when the way others think of us matches up with how we want to be seen, and when our actions “are conducive to establishing, maintaining, and enjoying our desired reputation.”

Conversely, Baumeister argues that when people fail to achieve their desired reputation, they will dismiss their actions as inauthentic, as not reflecting their true self (“That’s not who I am”). As Baumeister notes, “As familiar examples, such repudiation seems central to many of the public appeals by celebrities and politicians caught abusing illegal drugs, having illicit sex, embezzling or bribing, and other reputation-damaging actions.”

Kaufman Conclusion

As long as you are working towards growth in the direction of who you truly want to be, that counts as authentic in my book regardless of whether it is who you are at this very moment. The first step to healthy authenticity is shedding your positivity biases and seeing yourself for who you are, in all of your contradictory and complex splendor. Full acceptance doesn’t mean you like everything you see, but it does mean that you’ve taken the most important first step toward actually becoming the whole person you most wish to become. As Carl Rogers noted, “the curious paradox is that when I accept myself just as I am, then I can change.”

My Comment:
Kaufman describes contemporary ego-centric group-thinking, which leads to the philosophical dead end called solipsism. As an epistemological position, solipsism holds that knowledge of anything outside one’s own mind is unsure; the external world and other minds cannot be known and might not exist outside the mind.

His discussion proves the early assertion that authenticity (in the social or psychological sense) is indeed a mess. The author finds no objective basis to determine fidelity to reality, thus leaving everyone struggling whether to be self-directed or other-directed. As we know from Facebook, most resolve that conflict by competing to see who can publish the most selfies while acquiring the most “friends.” This is the best Scientific American can do? The swamp is huge and deep indeed.

It reminds me of what Ross Pomeroy wrote at Real Science: “Psychology, as a discipline, is a house made of sand, based on analyzing inherently fickle human behavior, held together with poorly-defined concepts, and explored with often scant methodological rigor. Indeed, there’s a strong case to be made that psychology is barely a science.”

Scientific Authenticity

In contrast, let us consider some writing by Philip Kanarev, a practicing physicist concerned with the demise of scientific thinking and teaching who calls for a return to fundamentals. His essay is Scientific Authenticity Criteria by Ph. M. Kanarev in the General Science Journal.  Excerpts in italics with my bolds.

A conjunction of scientific results in the 21st century has reached a level that provides an opportunity to find and to systematize the scientific authenticity criteria of precise knowledge already gained by mankind.

Neither Euclid, nor Newton gave precise definitions of the notions of an axiom, a postulate and a hypothesis. As a result, Newton called his laws the axioms, but it was in conflict with the Euclidean ideas concerning the essence of the axioms. In order to eliminate these contradictions, it was necessary to give a definition not only to the notions of the axiom and the postulate, but also to the notion of the hypothesis. This necessity is stipulated by the fact that any scientific research begins with an assumption regarding the reason causing a phenomenon or process being studied. A formulation of this assumption is a scientific hypothesis.

Thus, the axioms and the postulates are the main criteria of authenticity of any scientific result.

An axiom is an obvious statement, which requires no experimental check and has no exceptions. Absolute authenticity of an axiom appears from this definition. It protects it by a vivid connection with reality. A scientific value of an axiom does not depend on its recognition; that is why disregarding an axiom as a scientific authenticity criterion is similar to ineffectual scientific work.

A postulate is a non-obvious statement, its reliability being proven in the way of experiment or a set of theoretic results originating from the experiments. The reliability of a postulate is determined by the level of acknowledgement by the scientific community. That’s why its value is not absolute.

An hypothesis is an unproven statement, which is not a postulate. A proof can be theoretical and experimental. Both proofs should not be at variance with the axioms and the recognized postulates. Only after that, hypothetical statements gain the status of postulates, and the statements, which sum up a set of axioms and postulates, gain the status of a trusted theory.

The first axioms were formulated by Euclid. Here are some of them:
1 – To draw a straight line from any point to any point.
2 – To produce a finite straight line continuously in a straight line.
3 – That all right angles equal one another.

Euclidean formulation concerning the parallelism of two straight lines proved to be less concise. As a result, it was questioned and analyzed in the middle of the 19th century. It was accepted that two parallel straight lines cross at infinity. Despite a complete absence of evidence of this statement, the status of an axiom was attached to it. Mankind paid a lot for such an agreement among the scientists. All theories based on this axiom proved to be faulty. The physical theories of the 20th century proved to be the principal ones among them.

In order to understand the complicated situation being formed, one has to return to Euclidean axioms and assess their completeness. It has turned out that there are no axioms, which reflect the properties of the primary elements of the universe (space, matter and time), among those of Euclid. There are no phenomena, which could compress space, stretch it or distort it, in the nature; that is why space is absolute. There are no phenomena, which change the rate of the passing of time in nature. Time does not depend on anything; that’s why we have every reason to consider time absolute. The absolute nature of space and time has been acknowledged by scientists since Euclidean times. But when his axiom concerning the parallelism of straight lines was disputed, the ideas of relativity of space and time as well as the new theories, which were based on these ideas and proved (as we noted) to be faulty, appeared.

A law of acknowledgement of new scientific achievements was introduced by Max Planck. He formulated it in the following way: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it”. Our attempt to report the reliability of this law to the authorities is in the history of science an unnecessary intention. Certainly, time appeared in space only after matter. But still we do not know of a source that produces elementary particles – building blocks of the material world. That’s why we have no reason to consider matter absolute. But it does not prevent us from paying attention to an interconnection of the primary elements of the universe: space, matter and time. They exist only together and regardless of each other. This fact is vivid, and we have every reason to consider an indivisible existence of space, matter and time as an axiomatic one, and to call the axiom, which reflects this fact, the Unity axiom. The philosophic essence of this axiom has been noted long ago, but the practitioners of the exact sciences have failed to pay attention to the fact that it is implemented in the experimental and analytical processes of cognition of the world. When material bodies move, the mathematical description of this motion should be based on the Unity axiom. It appears from this axiom, that an axis of motion of any object is the time function. Almost all physical theories of the 20th century are in conflict with the Unity axiom. It is painful to write about it in detail.

Let us go on analyzing the role of postulates as scientific authenticity criteria. First of all, let us recollect the famous postulate by Niels Bohr concerning the orbital motion of the electrons in atoms. This catchy model of the process of the interaction of the electrons in the atoms goes on being formed in the mind of the pupils in school despite the fact that its impropriety was proven more than 10 years ago.

The role of Niels Bohr’s generalized postulate is great. Practically, it is used in the whole of modern chemistry and the larger part of physics. This postulate is based on the calculation of the spectrum of the hydrogen atom. But it is impossible to calculate the spectrum of the first orbit of the helium atom (which occupies the second place in Mendeleev’s table,) with Bohr’s postulate, to say nothing of the spectra of more complicated atoms and ions. It was enough to dispute the authenticity of Bohr’s postulate, but the mission of doubt has fallen to our lot for some reason. Two years were devoted to decoding the spectrum of the first electron of the helium atom. As a result, the law of formation of the spectra of atoms and ions has taken place as well as the law of the change of binding energy of the electron with the protons of the nuclei when energy-jumps take place in the atoms. It has turned out that there is no energy of orbital motion of the electrons in these laws; there are only the energies of their linear interaction with the protons of the nuclei.

Thereafter, it has become clear that only elementary particle models can play the role of the scientific result authenticity criteria in cognition of the micro-world. From the analysis of behaviour of these models, one should derive the mathematical models, which have been ascertained analytically long ago, and describe their behaviour in the experiments that have been carried out earlier.

The ascertained models of the photons of all frequencies, the electron, the proton and the neutron meet the above-mentioned requirements. They are interconnected with each other by such a large set of theoretical and experimental information, whose impropriety cannot be proven. This is the main feature of the proximity to reality of the ascertained models of the principle elementary particles. Certainly, the process of their generation has begun from a formulation of the hypothesis concerning their structures. Sequential development of the description of these structures and their behaviour during the interactions extended the range of experimental data where the parameters of the elementary particles and their interactions were registered. For example, the formation and behaviour of electrons are governed by more than 20 constants.

We have every reason to state that the models of the photons, the electron, the proton and the neutron, which have been ascertained by us, as well as the principles of formation of the nuclei, the atoms, the ions, the molecules and the clusters already occupy a foundation for the postulates, and new scientific knowledge will cement its strength.

Science has a rather complete list of criteria in order to estimate the authenticity of scientific investigative results. The axioms (the obvious statements, which require no experimental check and have no exceptions) occupy the first place; the second place is occupied by the postulates. If the new theory is in conflict with at least one axiom, it will be rejected immediately by the scientific community without discussion. If the experimental data, which are in conflict with any postulate (as it happened, for example, to Newton’s first law), appear, the future scientific community, which has learned a lesson from scientific cowardice of the academic elite of the 20th century, will submit such a postulate to a collective analysis of its authenticity.

Kanarev Conclusion

To the academicians who have made many mistakes in knowledge of the fields of physics and chemistry, we wish them to recover their sight in old age and be glad that these mistakes are already amended. It is time to understand that a prolongation of stuffing the heads of young people with faulty knowledge is similar to a crime that will be taken to heart emotionally in the near future.

The time has ended, when a diploma confirming higher education was enough in order to get a job. Now it is not a convincing argument for an employer; in order to be on the safe side, he hires a young graduate as a probationer at first as he wants to see what the graduate knows and what he is able to do. A new system of higher education has almost nullified a possibility for the student to have the skills of practical work according to his specialty and has preserved a requirement to have moronic knowledge, i.e. the knowledge which does not reflect reality.

My Summary

In Science, authenticity requires fidelity to axioms and postulates describing natural realities. It also means insisting that hypotheses be validated by experimental results. Climate science claims are not scientifically authentic unless or until confirmed by observations, and not simply projections from a family of divergent computer models. And despite all of the social support for climate hysteria, those fears are again more stuffing of nonsense into heads of youth and of the scientifically illiterate.

See Also Degrees of Climate Truth

False Beliefs about Human Genes

Carl Zimmer writes at Skeptical Inquirer Seven Big Misconceptions About Heredity. Excerpts in italics with my bolds.

It’s been seven decades since scientists demonstrated that DNA is the molecule of heredity. Since then, a steady stream of books, news programs, and episodes of CSI have made us comfortable with the notion that each of our cells contains three billion base pairs of DNA, which we inherited from our parents. But we’ve gotten comfortable without actually knowing much at all about our own genomes.

If you want to get your entire genome sequenced—all three billion base pairs in your DNA—a company called Dante Labs will do it for $699. You don’t need whole genome sequencing to learn a lot about your genes, however. The 20,000 genes that encode our proteins make up less than 2 percent of the human genome. That fraction of the genome—the “exome”—can be yours for just a few hundred dollars. The cheapest insights come from “genotyping”—in which scientists survey around a million spots in the genome known to vary a lot among people. Genotyping—offered by companies such as 23andMe and Ancestry—is typically available for under a hundred dollars.

Thanks to these falling prices, the number of people who are getting a glimpse at their own genes is skyrocketing. By 2019, over twenty-five million worldwide had gotten genotyped or had their DNA sequenced. At its current pace, the total may reach 100 million by 2020.

There’s a lot we can learn about ourselves in these test results. But there’s also a huge opportunity to draw the wrong lessons.

Many people have misconceptions about heredity—how we are connected to our ancestors and how our inheritance from them shapes us. Rather than dispelling those misconceptions, our growing fascination with our DNA may only intensify them. A number of scientists have warned of a new threat they call “genetic astrology.” It’s vitally important to fight these misconceptions about heredity, just as we must fight misconceptions about other fields of science, such as global warming, vaccines, and evolution. Here are just a few examples.

Misconception #1: Finding a Special Ancestor Makes You Special

You can join the Order of the Crown of Charlemagne if you can prove that the Holy Roman Emperor is your ancestor. It’s a thrill to discover we have a genealogical link to someone famous—perhaps because that link seems to make us special, too.

But that’s an illusion. I could join the Mayflower Society, for example, because I’m descended from a servant aboard the ship named John Howland. Howland’s one claim to fame is that he fell out of the Mayflower. Fortunately for me, he got fished out of the water and reached Massachusetts. But I’m not the only fortunate one; by one estimate, there are two million people who descend from him alone.

Mathematicians have analyzed the structure of family trees, and they’ve found that the further back in time you go, the more descendants people had. (This is only true of people who have any living descendants at all, it should be noted.) This finding has an astonishing implication. Since we know Charlemagne has living descendants (thank you, Order of the Crown!), he is likely the ancestor of every living person of European descent.
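My note: a back-of-the-envelope calculation shows why. The sketch below is my own; the 25-year generation length and the medieval population figure are rough assumptions, not from the article.

```python
# Count nominal ancestor "slots" on a family tree going back to Charlemagne's era.
years_back = 2020 - 800              # Charlemagne was crowned emperor in 800 AD
generations = years_back // 25       # assume roughly 25 years per generation
ancestor_slots = 2 ** generations    # parents, grandparents, ... double each generation

medieval_population = 40_000_000     # rough order of magnitude for Europe, an assumption

print(f"generations back: {generations}")
print(f"naive ancestor slots: {ancestor_slots:.2e}")
print(f"slots per person alive then: {ancestor_slots / medieval_population:.1e}")
```

The tree has vastly more slots than there were people, so the same individuals fill many slots over and over (genealogists call this pedigree collapse). Anyone from that era who left any surviving line at all therefore tends to end up on everyone’s tree.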

Misconception #2: You Are Connected to All Your Ancestors by DNA

But genetics do not equal genealogy. It turns out that practically none of the Europeans who descend from Charlemagne inherited any of his DNA. All humans, in fact, have no genetic link to most of their direct ancestors.

The reason for this disconnect is the way that DNA gets passed down from one generation to the next. Every egg or sperm randomly ends up with one copy of each chromosome, coming either from a person’s mother or father. As a result, we inherit about a quarter of our DNA from each grandparent—but only on average.

If you go back a few generations more, that contribution can drop all the way to zero. . . While it is true that you inherit your DNA from your ancestors, that DNA is only a tiny sampling of the genes in your family tree.
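My note: a crude simulation illustrates both halves of this point, namely that the average share halves every generation and that the chance of inheriting nothing at all from a particular ancestor grows quickly. The model below is my own toy version (22 autosomes, exactly one crossover per chromosome per meiosis, nothing else), not the method used by the researchers Zimmer describes; real genomes recombine more than this, so the true zero-DNA probabilities are somewhat lower.

```python
import random

def surviving_fraction(generations_back, chromosomes=22):
    """Fraction of a descendant's genome inherited from one specific ancestor."""
    total = 0.0
    for _ in range(chromosomes):
        segments = [(0.0, 1.0)]   # the ancestor's child carries a full ancestor chromosome
        # Each later meiosis keeps only the part copied from the lineage homolog.
        for _ in range(generations_back - 1):
            x = random.random()                              # crossover point
            keep = (0.0, x) if random.random() < 0.5 else (x, 1.0)
            segments = [(max(a, keep[0]), min(b, keep[1]))
                        for a, b in segments
                        if min(b, keep[1]) > max(a, keep[0])]
        total += sum(b - a for a, b in segments)
    # Only one of your two haploid sets descends through this line of the family tree.
    return total / chromosomes / 2.0

random.seed(1)
for g in (2, 4, 6, 8, 10):
    runs = [surviving_fraction(g) for _ in range(2000)]
    zero = sum(r == 0.0 for r in runs) / len(runs)
    print(f"{g} generations back: mean share {sum(runs)/len(runs):.3%}, "
          f"no surviving DNA in {zero:.0%} of runs")
```

Even in this simplified model, a grandparent contributes about a quarter of your genome on average, while an ancestor ten generations back usually contributes a sliver or nothing at all.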

Even without a genetic link, though, your ancestors remain your ancestors. They did indeed help shape who you are—not by giving you a gene for some particular trait, but by raising their own children, who then raised their own children in turn, passing down a cultural inheritance along with a genetic one.

Misconception #3: Ancestry Tests Are as Reliable as Medical Tests

Millions of people are getting ancestry reports based on their DNA. My own report informs me that I’m 43 percent Ashkenazi Jewish, 25 percent Northwestern European, 23 percent South/Central European, 6 percent Southwestern European, and 2.2 percent North Slavic. Those percentages sound impressive, even definitive. It’s easy to conclude that ancestry reports are as reliable as stepping on a scale at the doctor’s office to get your height and weight measured.

That is a mistake, and one that can cause a lot of heartbreak. To estimate ancestry, researchers compare each customer to a database of thousands of people from around the world. . . They can identify stretches of DNA that are likely to have originated in a particular part of the world. While some matches are clear-cut, others are less so. As a result, ancestry estimates always have margins of error—which often go missing in the reports customers get.

These estimates are going to get better with time, but there’s a fundamental limit to what they can tell us about our ancestry. . . Researchers are getting glimpses of those older peoples by retrieving DNA from ancient skeletons. And they’re finding that our genetic history is far more tumultuous than previously thought. Time and again, researchers find that the people who have lived in a given place in recent centuries have little genetic connection to the people who lived there thousands of years ago. All over the world, populations have expanded and migrated, coming into contact with other populations. . . If you want to find purity in your ancestry, you’re on a fool’s errand.

Misconception #4: There’s a Gene for Every Trait You Inherit

Mendel is a great place to start learning about heredity but a bad place to stop. There are some traits that are determined by a single gene. Whether Mendel’s peas were smooth or wrinkled was determined by a gene called SBEI. Whether people develop sickle cell anemia or not comes down to a single gene called HBB. But many traits do not follow this so-called Mendelian pattern—even ones that we may have been told in school are Mendelian.

Consider your ear lobes. For decades, teachers taught that they could either hang free or be attached to the side of our heads. The sort of ear lobes you had was a Mendelian trait, determined by a single gene. In fact, our ear lobes typically fall somewhere between the two extremes of strongly attached to fully free. In 2017, a team of researchers compared the ear lobes of over 74,000 people to their DNA. They looked for genetic variants that were common in people at either end of the ear-lobe spectrum. They pinpointed forty-nine genes that appear to play a role in determining how attached our ear lobes are to our heads. There may well be more waiting to be discovered.

The genetics of ear lobes is actually very simple compared to other traits. Studying height, for example, scientists have identified thousands of genetic variants that appear to play a role. The same holds true for our risk of developing diabetes, heart disease, and other common disorders. We can’t expect to find a single gene in our DNA tests that determines whether we’ll die of a heart attack. Nor should we expect easy fixes for such complex diseases by repairing single genes.

Misconception #5: The Genes You Inherit Explain Exactly Who You Are

Take, for example, a recent study on how long people stay in school. Researchers examined DNA from 1.1 million people and found over 1,200 genetic variants that were unusually common either in people who left school early or in people who went on to college or graduate school. They used these genetic differences to build a predictive score, which they then tried out on a separate group of subjects. They found that in the highest-scoring 20 percent of that group, 57 percent finished college. In the lowest-scoring 20 percent, only 12 percent did.
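
For readers curious what such a predictive score looks like mechanically, it is essentially a weighted sum over genetic variants. The sketch below is purely illustrative; the variant IDs, weights, and genotype are invented for this example and are not taken from the study.

# Minimal, illustrative polygenic score: a weighted sum of how many copies
# of each trait-associated variant a person carries (0, 1 or 2).
# Variant IDs and weights are made up for this example.
weights = {"rs0001": 0.020, "rs0002": -0.010, "rs0003": 0.015}
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

score = sum(weights[variant] * genotype[variant] for variant in weights)
print(f"polygenic score: {score:.3f}")

Real scores of this kind sum over hundreds or thousands of variants, which is why they predict group averages far better than they predict any individual.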

But these results don’t mean that how long you stayed in school was determined before birth by your genes. Getting your children’s DNA tested won’t tell you if you should save up money for college tuition or not. Plenty of people in the educational attainment study who got high genetic scores dropped out of high school. Plenty of people who got low scores went on to get PhDs. And many more got an average amount of education in between those extremes. For any individual, these genetic scores make predictions that are barely better than guessing at random.

This confusing state of affairs is the result of how genes and the environment interact. Scientists call a trait such as how long people stay in school “moderately heritable.” In other words, a modest amount of the variation in educational attainment is due to genetic variation. Lots of other factors matter, too—the neighborhoods where people live, the quality of their schools, the stability of their family life, their income, and so on. What’s more, a gene that may have an influence on how long people stay in school in one environment may have no influence at all in another.

Misconception #6: You Have One Genome

According to this assumption, you will find an identical sequence of DNA in any cell you examine. But there are many ways in which we can end up with different genomes within our bodies.

Lydia Fairchild is known as a chimera. She developed inside her mother alongside a fraternal twin. That twin embryo died in the womb, but not before exchanging cells with Fairchild. As a result, her body was made up of two populations of cells, each of which multiplied and developed into different tissues. In Fairchild’s case, her blood arose from one population, while her eggs arose from another.

It’s unclear how many people are chimeras. Once they were considered bizarre rarities. Scientists became aware of them only in cases such as Lydia Fairchild’s, when their mixed identity made itself known. In recent years, researchers have been carrying out small-scale surveys that suggest that perhaps a few percent of twins are chimeras, but the true number could be higher. As for chimeric mothers, they may be the rule rather than the exception. In a 2017 study, researchers examined brain tumors taken from women who had sons. Eighty percent of the women had Y-chromosome-bearing cells in their tumors.

Chimerism is not the only way we can end up with different genomes. Every time a cell in our body divides, there’s a tiny chance that one of the daughter cells may gain a mutation. At first, these new aberrations—called somatic mutations—seemed important only for cancer. But that view has changed as new genome-sequencing technologies have made it possible for scientists to study somatic mutations in many healthy tissues. It now turns out that every person’s body is a mosaic, made up of populations of cells with many different mutations.

Misconception #7: Genes Don’t Matter Because of Epigenetics

The notion that our genes are our destiny can trigger an equally false backlash: that genes don’t matter at all. And very often, those who push against the importance of genetics invoke a younger, more tantalizing field of research: epigenetics.

Our cells use many layers of control to make proper use of their genes. They can quickly turn some genes on and off in response to changes in their environment. But they can also silence genes for life. Women, for example, have two copies of the X chromosome, but in early development, each of their cells produces a swarm of RNA molecules and proteins that clamp down on one copy. The cell then uses only the other X chromosome. And if the cell divides, its daughter cells will silence the same copy again.

One of the most tantalizing possibilities scientists are now exploring is whether certain epigenetic “marks” can be inherited not just by daughter cells but by daughters—and sons. If people experience trauma in their lives and it leaves an epigenetic mark on their genes, for example, can they pass down those marks to future generations?

If you’re a plant, the answer is definitely yes. Plants that endure droughts or attacks by insects can reprogram their seeds, and these epigenetic changes can get carried down several generations. The evidence from animals is, for now, still a mixed bag. . . But skeptics have questioned how epigenetics can transmit these traits through the generations, suggesting that the results are just statistical flukes. That hasn’t stopped a cottage industry of epigenetic self-help from springing up. You can join epigenetic yoga classes to rewrite your epigenetic marks or go to epigenetic psychotherapy sessions to overcome the epigenetic legacy you inherited from your grandparents.

On Sexual Brains: Vive La Difference!

As Jordan Peterson has pointed out, an ideology takes a partial truth and asserts it as the whole truth, and nothing but the truth. With global warming/climate change, we see how a complex, poorly understood natural system is reduced to a simplistic tweet: “Ninety-seven percent of scientists agree: climate change is real, man-made and dangerous.” That is the work of a small but dedicated group of ideologues who captured and overturned climate science so that it now functions only as a tool of political operatives.

The post shows how decades of painstaking work in neuroscience are being attacked by gender ideologues, who cannot tolerate any biological differences between men and women.

Larry Cahill writes at Quillette: Denying the Neuroscience of Sex Differences. Excerpts in italics with my bolds.

For decades, neuroscience, like most research areas, overwhelmingly studied only males, assuming that everything fundamental to know about females would be learned by studying males. I know — I did this myself early in my career. Most neuroscientists assumed that differences between males and females, if they exist at all, are not fundamental, that is, not essential for understanding brain structure or function. Instead, we assumed that sex differences result from undulating sex hormones (typically viewed as a sort of pesky feature of the female), and/or from different life experiences (“culture”). In either case, they were dismissible in our search for the fundamental. In truth, it was always a strange assumption, but so it was.

Gradually, however, and inexorably, we neuroscientists are seeing just how profoundly wrong — and in fact disproportionately harmful to women — that assumption was, especially in the context of understanding and treating brain disorders. Any reader wishing to confirm what I am writing can easily start by perusing online the January/February 2017 issue of the Journal of Neuroscience Research, the first issue of any neuroscience journal devoted in its entirety to the topic of sex differences. All 70 papers, spanning the neuroscience spectrum, are open access to the public.

In statistical terms, something called effect size measures the size of the influence of one variable on another. Although some believe that sex differences in the brain are small, in fact, the average effect size found in sex differences research is no different from the average effect size found in any other large domain of neuroscience. So here is a fact: It is now abundantly clear to anyone honestly looking that the variable of biological sex influences all levels of mammalian brain function, down to the cellular/genetic substrate, which of course includes the human mammalian brain.
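
For readers unfamiliar with the statistic, the most common effect-size measure for a difference between two groups is Cohen’s d: the difference between the group means divided by the pooled standard deviation. Here is a minimal sketch in Python; the group values are invented purely for illustration.

import statistics

def cohens_d(group_a, group_b):
    # Effect size for a difference in means: (mean_a - mean_b) / pooled standard deviation.
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical measurements for two groups; values are made up for illustration only.
print(round(cohens_d([5.1, 4.8, 5.5, 5.0], [4.2, 4.6, 4.1, 4.4]), 2))

By convention, d around 0.2 is called small, 0.5 medium, and 0.8 large, though such labels are rough guides rather than thresholds.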

The mammalian brain is clearly a highly sex-influenced organ. Both its function and dysfunction must therefore be sex influenced to an important degree. How exactly all of these myriad sex influences play out is often hard, or even impossible to pinpoint at present (as it is for almost every issue in neuroscience). But that they must play out in many ways, both large and small, having all manner of implications for women and men that we need to responsibly understand, is now beyond debate — at least among non-ideologues.

Recognizing our obligation to carefully study sex influences in essentially all domains (not just neuroscience), the National Institutes of Health on January 25, 2016, adopted a policy (called “Sex as a Biological Variable,” or SABV for short) requiring all of its grantees to seriously incorporate the understanding of females into their research. This was a landmark moment, a conceptual corner turned that cannot be unturned.

But the remarkable and unprecedented growth in research demonstrating biologically-based sex influences on brain function triggered 5-alarm fire bells in those who believe that such biological influences cannot exist.

Since Simone de Beauvoir in the early 1950s famously asserted that “One is not born, but rather becomes, a woman,” and John Money at Johns Hopkins shortly thereafter introduced the term “gender” (borrowed from linguistics) to avoid the biological implications of the word “sex,” a belief that no meaningful differences exist in the brains of women and men has dominated U.S. culture. And God help you if you suggest otherwise! Gloria Steinem once called sex differences research “anti-American crazy thinking.” Senior colleagues warned me as an untenured professor around the year 2000 that studying sex differences would be career suicide. A new book by Gina Rippon marks the latest salvo by a very small but vocal group of anti-sex difference individuals determined to perpetuate this cultural myth.

A book like this is very difficult for someone knowledgeable about the field to review seriously. It is so chock-full of bias that one keeps wondering why one is bothering with it. Suffice to say it is replete with tactics that are now standard operating procedure for the anti-sex difference writers. The most important tactic is a comically biased, utterly non-representative view of the enormous literature of studies ranging from humans to single neurons. Other tactics include magnifying or inventing problems with disfavored studies, ignoring even fatal problems with favored studies, dismissing what powerful animal research reveals about mammalian brains, hiding uncomfortable facts in footnotes, pretending not to be denying biologically based sex-influences on the brain while doing everything possible to deny them, pretending to be in favor of understanding sex differences in medical contexts yet never offering a single specific research example why the issue is important for medicine, treating “brain plasticity” as a magic talisman with no limitations that can explain away sex differences, presenting a distorted view of the “stereotype” literature and what it really suggests, and resurrecting 19th century arguments almost no modern neuroscientist knows of, or cares about. Finally, there is the use of a catchy name to slander those who dare to be good scientists and investigate potential sex influences in their research despite the profound biases against the topic (“neurosexists!”). These tactics work quite well with those who know little or nothing about the neuroscience.

The book is downright farcical when it comes to modern animal research, simply ignoring the vast majority of it. The enormous power of animal research, of course, is that it can establish sex influences in particular on mammalian brain function (such as sex differences in risk-taking, play behavior, and responses to social defeat as just three examples) that cannot be explained by human culture (although they may well be influenced in humans by culture). Rippon engages in what is effectively a denial of evolution, implying to her reader that we should ignore the profound implications of animal research (“Not those bloody monkeys again!”) when trying to understand sex influences on the human brain. She is right only if you believe evolution in humans stopped at the neck.

Rippon tries to convince you (and may even believe herself) that it is impossible to disentangle biology from culture when investigating sex differences in humans. This is false. I encourage the interested reader to see the discussion of the excellent work doing exactly this by a sociologist named J. Richard Udry in an article I wrote in 2014 for the Dana Foundation’s “Cerebrum,” free online.

Rippon does not mention Udry’s work, or its essential replication by Udry’s harshest critic, a leading sociologist who has described herself as a “feminist” who now “wrestles” with testosterone. (The Dana paper “Equal ≠ Same” also deconstructs the specious “brain plasticity” argument on which Rippon’s narrative heavily rests.)

Of course, Rippon is completely correct in arguing that neuroscientists (and the general public) should remember that “nature” interacts with “nurture,” and should not run wild with implications of sex difference findings for brain function and behavior. We must also reject the illogical conclusion that sex influences on the brain will mean that women are superior, or that men are superior. I genuinely do not know a single neuroscientist who disagrees with these arguments. But she studiously avoids an equally important truth: That neuroscientists should not deny that biologically-based sex differences exist and likely have important implications for understanding brain function and behavior, nor should they fear investigating them.

You may ask: What exactly are people like Rippon so afraid of? She cites potential misuse of the findings for sexist ends, which has surface plausibility. But by that logic we should also stop studying, for example, genetics. The potential to misuse new knowledge has been around since we discovered fire and invented the wheel. It is not a valid argument for remaining ignorant.

After almost 20 years of hearing the same invalid arguments (like Bill Murray in “Groundhog Day” waking up to the same song every day), I have come to see clearly that the real problem is a deeply ingrained, implicit, very powerful yet 100 percent false assumption that if women and men are to be considered “equal,” they have to be “the same.” Conversely, the argument goes, if neuroscience shows that women and men are not the same on average, then it somehow shows that they are not equal on average. Although this assumption is false, it still creates fear of sex differences in those operating on it. Ironically, forced sameness where two groups truly differ in some respect means forced inequality in that respect, exactly as we see in medicine today.

Women are not treated equally with men in biomedicine today because overwhelmingly they are still being treated the same as men (although this is finally changing). Yet astoundingly, and despite claiming she is not anti-sex difference, Rippon says “perhaps we should just stop looking for [sex] differences altogether?” Such dumbfounding statements from a nominal expert make me truly wonder whether the Rippons of the world even realize that, by constantly denying and trivializing and even vilifying research into biologically-based sex influences on the brain they are in fact advocating for biomedical research to retain its male subject-dominated status quo so disproportionately harmful to women.

So are female and male brains the same or different? We now know that the correct answer is “yes”: They are the same or similar on average in many respects, and they are different, a little to a lot, on average in many other respects. The neuroscience behind this conclusion is now remarkably robust, and not only won’t be going away, it will only grow. And yes, we, of course, must explore sex influences responsibly, as with all science. Sadly, the anti-sex difference folks will doubtless continue their ideological attacks on the field and the scientists in it.

Thus one can at present only implore thinking individuals to be wary of ideologues on both sides of the sex difference issue — those who want to convince you that men and women are always as different as Mars and Venus (and that perhaps God wants it that way), and those who want to convince you of the demonstrably false idea that the brains of women and men are for all practical purposes the same (“unisex”), that all differences between women and men are really due to an arbitrary culture (a “gendered world”), and that you are essentially a bad person if you disagree.

No one seems to have a problem accepting that, on average, male and female bodies differ in many, many ways. Why is it surprising or unacceptable that this is true for the part of our body that we call “brain”? Marie Curie said, “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.” Her sage advice applies perfectly to discussions about the neuroscience of sex differences in 2019.

Larry Cahill is a professor in the Department of Neurobiology and Behavior at the University of California, Irvine and an internationally recognized leader on the topic of sex influences on brain function.

Footnote: This video uses humor to look at sexual brains based on observed human behavior: “Why Men and Women Think Differently.”

See Also Gender Ideology and Science, including excerpts from Jordan Peterson

Climatism vs. Eugenics: Which is Worse?

Ralph B. Alexander writes at his blog Science Under Attack: Belief in Catastrophic Climate Change as Misguided as Eugenics Was 100 Years Ago. H/T Yen Makabenta, Manila Times. Excerpts in italics with my bolds.

Last October’s landmark report by the UN’s IPCC (Intergovernmental Panel on Climate Change), which claims that global temperatures will reach catastrophic levels unless we take drastic measures to curtail climate change by 2030, is as misguided as eugenics was 100 years ago. Eugenics was the shameful but little-known episode in the early 20th century characterized by the sterilization of hundreds of thousands of people considered genetically inferior, especially the mentally ill, the physically handicapped, minorities and the poor.

Although ill-conceived and even falsified as a scientific theory in 1917, eugenics became a mainstream belief with an enormous worldwide following that included not only scientists and academics, but also politicians of all parties, clergymen and luminaries such as U.S. President Teddy Roosevelt and famed playwright George Bernard Shaw. In the U.S., where the eugenics movement was generously funded by organizations such as the Rockefeller Foundation, a total of 27 states had passed compulsory sterilization laws by 1935 – as had many European countries.

Eugenics fell into disrepute only with the discovery after World War II of the horrors perpetrated by the Nazi regime in Germany, including the Holocaust as well as the more than 400,000 people sterilized against their will. The subsequent global recognition of human rights led to eugenics being declared a crime against humanity.

The so-called science of catastrophic climate change is equally misguided. Whereas modern eugenics stemmed from misinterpretation of Mendel’s genetics and Darwin’s theory of evolution, the notion of impending climate disaster results from misrepresentation of the actual empirical evidence for a substantial human contribution to global warming, which is shaky at best.

Instead of the horrors of eugenics, the narrative of catastrophic anthropogenic (human-caused) global warming conjures up the imaginary horrors of a world too hot to live in. The new IPCC report paints a grim picture of searing yearly heatwaves, food shortages and coastal flooding that will displace 50 million people, unless draconian action is initiated soon to curb emissions of greenhouse gases from the burning of fossil fuels. Above all, insists the IPCC, an unprecedented transformation of the world’s economy is urgently needed to avoid the most serious damage from climate change.

But such talk is utter nonsense. First, the belief that we know enough about climate to control the earth’s thermostat is preposterously unscientific. Climate science is still in its infancy and, despite all our spectacular advances in science and technology, we still have only a rudimentary scientific understanding of climate. The very idea that we can regulate the global temperature to within 0.9 degrees Fahrenheit (0.5 degrees Celsius) through our own actions is absurd.

Second, the whole political narrative about greenhouse gases and dangerous anthropogenic warming depends on faulty computer climate models that were unable to predict the recent slowdown in global warming, among other failings. The models are based on theoretical assumptions; science, however, takes its cue from observational evidence. To pretend that current computer models represent the real world is sheer arrogance on our part.

And third, the empirical climate data that is available has been exaggerated and manipulated by activist climate scientists. The land warming rates from 1975 to 2015 calculated by NOAA (the U.S. National Oceanic and Atmospheric Administration) are distinctly higher than those calculated by the other two principal guardians of the world’s temperature data. Critics have accused the agency of exaggerating global warming by excessively cooling the past and warming the present, suggesting politically motivated efforts to generate data in support of catastrophic human-caused warming.

Source: Tony Heller, Real Climate Science

Exaggeration also shows up in the setting of new records for the “hottest year ever” – declarations deliberately designed to raise alarm. But with the global temperature currently creeping upwards at a rate of only a few hundredths of a degree every 10 years, the establishment of new records is unsurprising. If the previous record was set in the last 10 or 20 years, a temperature only several hundredths of a degree above the old record will set a new one.
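
This arithmetic point can be made concrete with a toy simulation: even a small upward trend buried in year-to-year noise keeps producing new “hottest year” records. The trend and variability numbers below are arbitrary and chosen only for illustration, not taken from any temperature dataset.

import random

random.seed(1)
trend_per_year = 0.003   # assumed slow trend, degrees per year (illustrative only)
noise_sd = 0.05          # assumed year-to-year variability (illustrative only)

record = float("-inf")
record_years = 0
for year in range(100):
    temperature = trend_per_year * year + random.gauss(0.0, noise_sd)
    if temperature > record:
        record = temperature
        record_years += 1

print(f"{record_years} new 'hottest year' records in 100 simulated years")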

Eugenics too was rooted in unjustified human hubris, false science, and exaggeration in its methodology. Just like eugenics, belief in apocalyptic climate change and in the dire prognostications of the IPCC will one day be abandoned.

Ralph B. Alexander is a retired physicist and a science writer who puts science above political correctness. He is the author of Science Under Attack: The Age of Unreason and Global Warming False Alarm. Ralph grew up in Perth, Western Australia and received his PhD in physics from the University of Oxford.  Dr. Alexander has held a variety of positions in research, academia and industry over the course of his scientific career, and now lives in California. 

See also:  On the Hubris of Climatism

Control Population, Control the Climate. Not.

“Hottest Year” Misdirection

Man Made Warming from Adjusting Data

Our Bent Elite Delivers Admissions Cheating and Climate Virtue


Matthew Continetti writes (March 15, 2019) at the Washington Free Beacon on Our Bankrupt Elite: Operation Varsity Blues and the hypocrisy of Hollywood liberals. Excerpts in italics with my bolds and images.

Every element of the college admissions scandal, aka “Operation Varsity Blues,” is fascinating.

There are the players: the Yale dad who, implicated in a securities fraud case, tipped the feds off to the caper; a shady high school counselor turned admissions consultant; the 36-year-old Harvard grad who sold his talents for standardized testing to the highest bidder; the comely actresses from Full House and Desperate Housewives; the fashion designer; the casino magnate. Who would have thought that one of the major headlines of 2019 would be “Lori Loughlin released on bond”?

There are the children: the social media influencer (yes this is a thing) who was told of her parents’ arrest while vacationing on the yacht of a USC trustee; the mom who submitted doctored photographs to USC to portray her son as a championship pole-vaulter; the place kicker for a high school with no football team; and the rap artist from the Upper East Side who defended his mom and dad to the press while smoking a blunt.

There are the means: paying tens of thousands of dollars to Rick Singer, Trinity ’86, who bribed athletic directors and coaches, doctored student résumés, and arranged for clients to take college admittance exams alongside a “proctor” who answered the questions for them. The icing on the cake: Some payments were made to a charitable foundation so the parents could get the tax write-off. What a country.

There is the objective: placement at a high-profile school. Why? Social signaling, status games, but also because the wage premium for a college degree has become so large that parents are apparently willing to break federal law to earn it. Not for what the students learn at college—they hardly learn anything. Loughlin’s daughter, the influencer, spoke for most undergraduates when she said, “I do want the experience of game days, partying—I don’t really care about school, as you guys all know.” Oh, we know. Otherwise your mom wouldn’t be looking for a defense attorney.

It’s not what happens in class that matters. The university has long been corrupted by athletics, politicization of the curriculum, identity politics, grade inflation, affirmative action, the death of the humanities, and ideological bias among faculty. What matters is the chit you receive at graduation.

Finally, there are the lessons to be drawn from this story. It’s the media’s vocation, drawing lessons. I’ve heard it said that the parents ought to have been concerned about the lesson they were teaching their children—though right now I’d wager they are more concerned with avoiding jail time. Others say this is the latest example of the falsity of meritocracy. For progressives, the affair reveals the classism and racism of our society, its rampant white privilege.

Which is a funny thing to say about the academic world. Colleges exert tremendous energy to be as diverse and inclusive and woke as possible, to the point where Asian-American students are discriminated against lest they ruin the schemes of college admissions officers. A scandal over which the media seems far less upset.

Lessons? Here are two. First the good news: We are shocked by the actions of these parents precisely because there is so little corruption in America. If the problems were as systemic as some on the Internet believe, they would hardly raise such an outcry. Denizens of countries where bribery is a way of life look at us and say, “Amateurs.”

The second lesson is not as comforting. Operation Varsity Blues is further evidence of the bankruptcy of American elites. For over a decade now, the legitimacy of elites in politics, foreign policy, central banking, journalism, religion, and economics has crumbled as reality failed to match their rhetoric. Education is the latest sphere where elites have betrayed our country’s institutions and our country’s people by using wealth and connections to rig the rules of the game.

The scandal also points to the flagrant hypocrisy of Hollywood liberalism. No class is more moralistic, more hectoring, more obnoxiously activist than the Hollywood left. They barrage Americans with displays of their virtue, their calls to humanitarianism, their paeans to multiculturalism and feminism, their slanders of President Trump, Vice President Pence, Republicans in general, and conservatives in particular. And they have great sway in national politics. A Democrat’s future depends on the beneficence of Hollywood donors—donors who were well represented among the individuals charged in Operation Varsity Blues.

The entertainment industry liberals talk a good game. But look at their actions. Harvey Weinstein and Kevin Spacey are synonymous with predation. Jussie Smollett was a B-list celebrity until he faked a hate crime against himself and blamed it on supporters of Trump. Now we have actors breaking the law so their kids can go to USC.

Why on Earth should we take political cues from these people? By what right do they portray themselves as enlightened, as advanced, as more sophisticated than half the country, even while they lie, cheat, steal, and assault? Plenty of baddies doing nasty things understand that donations to the Democratic Party and its interest groups insulate them from scrutiny and criticism—right until the moment they go to jail. These people aren’t interested in the common good. They are interested in themselves.

“Devoid of all collective attachment except membership in its own club,” writes Christophe Guilluy in Twilight of the Elites (2019), “the new bourgeoisie merrily surfs the surging waves of the market, reinforcing its class position, capturing the economic benefits of globalization, and building up a portfolio of real estate holdings that soon will rival that of the old bourgeoisie.” Guilluy is describing contemporary France. He might as well be talking about Aunt Becky.

Conclusion:

Roger L. Simon writes at Real Clear Politics. Excerpts in italics with my bolds.

What made these people, among the most privileged in our society, act this way? Did they not think that they were either teaching their children to lie or, almost as bad, plunging them into situations where they were doomed to fail? Or were they relying on the current spate of grade inflation to save the day for their underqualified offspring?

Whatever the case, what accounts for this particularly repellent version of do what I say, not as I do? Is it just an insatiable desire for status by an insecure community, this time on the backs of their children?

Hollywood is rampant with this excessive public moral posturing, which disguises often equally excessive private amorality or even immorality. The biggest liberal or progressive stars are frequently the most avaricious and nasty people in their personal lives. It’s a form of split personality cum self-hypnosis that has been employed successfully by the entertainment industry for some time, but the college admissions scandal is bringing it unpleasantly to the surface, as did the recent #MeToo controversy.

Hollywood, however, is far from alone in deserving blame for the admissions scandal. Although the FBI has not taken legal action against the colleges involved, they should be considered at minimum unindicted co-conspirators. Our universities have come under increasing criticism of late for political bias — in one study, only 39 percent of colleges had even one Republican professor — suppression of freedom of speech, and their own covert form of racial discrimination. Asian-Americans, with justification, are currently suing Harvard for admissions bias against them.

These days our colleges seem as much, if not more, bent on social engineering as they are on education. This encourages many students to compete in what is, in essence, a victimhood derby under the trendy rubric of intersectionality. Besides being a waste of educational time and money, this does not augur well for the future of our country.

What we have in the college admissions scandal is corrupt people applying to an already corrupted system. If the attention that glamorous Hollywood usually attracts brings more notice to this problem, it is all to the good. And if it helps to begin to solve it, better yet.

Footnote:

Never forget these people are paid for pretending. As one of them said: “If you can fake sincerity, you’ll go a long way in this town.”

Overheating About Global Warming

Mar 13, 2019: Bjørn Lomborg writes about the overheated discourse that has children taking to the streets on the advice of adults who should know better. His essay Overheating About Global Warming was published that day at Project Syndicate. Excerpts in italics with my bolds and images.

Decades of climate-change exaggeration in the West have produced frightened children, febrile headlines, and unrealistic political promises. The world needs a cooler approach that addresses climate change smartly without scaring us needlessly and that pays heed to the many other challenges facing the planet.

Across the rich world, school students have walked out of classrooms and taken to the streets to call for action against climate change. They are inspired by 16-year-old Swedish activist Greta Thunberg, who blasts the media and political leaders for ignoring global warming and wants us to “panic.” A global day of action is planned for March 15.

Although the students’ passion is admirable, their focus is misguided. This is largely the fault of adults, who must take responsibility for frightening children unnecessarily about climate change. It is little wonder that kids are scared when grown-ups paint such a horrific picture of global warming.

For starters, leading politicians and much of the media have prioritized climate change over other issues facing the planet. Last September, United Nations Secretary-General António Guterres described climate change as a “direct existential threat” that may become a “runaway” problem. Just last month, The New York Times ran a front-page commentary on the issue with the headline “Time to Panic.” And some prominent politicians, as well as many activists, have taken the latest report from the United Nations Intergovernmental Panel on Climate Change (IPCC) to suggest the world will come to an end in just 12 years.

This normalization of extreme language reflects decades of climate-change alarmism. The most famous clip from Al Gore’s 2006 film An Inconvenient Truth showed how a 20-foot rise in sea level would flood Florida, New York, the Netherlands, Bangladesh, and Shanghai – omitting the fact that this was seven times worse than the worst-case scenario.

A separate report that year described how such alarmism “might even become secretly thrilling – effectively a form of ‘climate porn.’” And in 2007, The Washington Post reported that “for many children and young adults, global warming is the atomic bomb of today.”

When the language stops being scary, it gets ramped up again. British environmental campaigner George Monbiot, for example, has suggested that the term “climate change” is no longer adequate and should be replaced by “catastrophic climate breakdown.”

Educational materials often don’t help, either. One officially endorsed geography textbook in the United Kingdom suggests that global warming will be worse than famine, plague, or nuclear war, while Education Scotland has recommended The Day After Tomorrow as suitable for climate-change education. This is the film, remember, in which climate change leads to a global freeze and a 50-foot wall of water flooding New York, man-eating wolves escape from the zoo, and – spoiler alert – Queen Elizabeth II’s frozen helicopter falls from the sky.

Reality would sell far fewer newspapers. Yes, global warming is a problem, but it is nowhere near a catastrophe. The IPCC estimates that the total impact of global warming by the 2070s will be equivalent to an average loss of income of 0.2-2% – similar to one recession over the next half-century. The panel also says that climate change will have a “small” economic impact compared to changes in population, age, income, technology, relative prices, lifestyle, regulation, and governance.

And while media showcase the terrifying impacts of every hurricane, the IPCC finds that “globally, there is low confidence in attribution of changes in [hurricanes] to human influence.” What’s more, the number of hurricanes that make landfall in the United States has decreased, as has the number of strong hurricanes. Adjusted for population and wealth, hurricane costs show “no trend,” according to a new study published in Nature.

Another Nature study shows that although climate change will increase hurricane damage, greater wealth will make us even more resilient. Today, hurricanes cost the world 0.04% of GDP, but in 2100, even with global warming, they will cost half as much, or 0.02% of GDP. And, contrary to breathless media reports, the relative global cost of all extreme weather since 1990 has been declining, not increasing.

Perhaps even more astoundingly, the number of people dying each year from weather-related catastrophes has plummeted 95% over the past century, from almost a half-million to under 20,000 today – while the world’s population has quadrupled.

Meanwhile, decades of fearmongering have gotten us almost nowhere. What they have done is prompt grand political gestures, such as the unrealistic cuts in carbon dioxide emissions that almost every country has promised under the 2015 Paris climate agreement. In total, these cuts will cost $1-2 trillion per year. But the sum total of all these promises is less than 1% of what is needed, and recent analysis shows that very few countries are actually meeting their commitments.

In this regard, the young protesters have a point: the world is failing to solve climate change. But the policy being pushed – even bigger promises of faster carbon cuts – will also fail, because green energy still isn’t ready. Solar and wind currently provide less than 1% of the world’s energy, and already require subsidies of $129 billion per year. The world must invest more in green-energy research and development eventually to bring the prices of renewables below those of fossil fuels, so that everyone will switch.

And although media reports describe the youth climate protests as “global,” they have taken place almost exclusively in wealthy countries that have overcome more pressing issues of survival. A truly global poll shows that climate change is people’s lowest priority, far behind health, education, and jobs.

In the Western world, decades of climate-change exaggeration have produced frightened children, febrile headlines, and grand political promises that aren’t being delivered. We need a calmer approach that addresses climate change without scaring us needlessly and that pays heed to the many other challenges facing the planet.

Bjørn Lomborg, a visiting professor at the Copenhagen Business School, is Director of the Copenhagen Consensus Center. His books include The Skeptical Environmentalist, Cool It, How to Spend $75 Billion to Make the World a Better Place, The Nobel Laureates’ Guide to the Smartest Targets for the World, and, most recently, Prioritizing Development. In 2004, he was named one of Time magazine’s 100 most influential people for his research on the smartest ways to help the world.

Sciencing Vs. Scientism


What is Scientific Truth? Previous posts here have discussed the difference between science as a process of discovery (“sciencing” if you will), and science as a catalog of answers to how the world works (“scientism” in this sense). On this issue, I am following Richard Feynman, and also Arthur Eddington, who is quoted at the end.

This post dives into the struggle over truth and science in contemporary society. It also discusses some underlying philosophical confusions leading to distortions of scientific processes and discoveries. Michela Massimi is Professor of Philosophy of Science at the University of Edinburgh in Scotland. She works in history and philosophy of science and was the recipient of the 2017 Wilkins-Bernal-Medawar Medal from the Royal Society, London, UK. Her article recently published at Aeon is entitled Getting it right. Excerpts in italics with my bolds and images. My takeaway: Science matters only because Truth matters. But do read her entire essay for your own edification. Title is link to essay.

Truth is neither absolute nor timeless. But the pursuit of truth remains at the heart of the scientific endeavour

Think of the number of scenarios in which truth matters in science. We care to know whether increased CO2 emission levels cause climate change, and how fast. We care to know whether smoking tobacco increases the risk of lung cancer. We care to know whether poor diet exposes children to the risk of developing obesity, or whether forecasts of economic growth are correct. Truth in science is not esoteric dilly-dallying. It shapes climate science, medicine, public health, the economy and many other worldly endeavours.

That truth matters to science is hardly news. For a long time, people have looked to science for truths about the world. The Scientific Revolution was nothing if not the triumph of Galileo’s scientific truth – hard-won through his telescopic observations – over centuries of dogma about the geocentric system. With its system of epicycles and deferents, Ptolemaic astronomy was at once sophisticated and false. It served to, at best, ‘save the appearances’ about how planets seemed to move in the sky. It did not tell the truth about planetary motion until the discovery of the Copernican explanation. Or consider the Chemical Revolution at the end of the 18th century. We no longer, after all, believe in phlogiston – the fictional imponderable fluid that Georg Ernst Stahl, Joseph Priestley and other natural philosophers at the time believed to be at work in combustion and calcination phenomena. Antoine Lavoisier’s scientific truth about oxygen prevailed over false beliefs about phlogiston.

The main actors of these scientific revolutions often fostered this way of thinking about science as an enquiry leading to the inevitable triumph of truth over past errors. Two centuries after Galileo’s successful defence of the heliocentric system, this idea of the course of scientific truth continued to inspire philosophers. In his Cours de philosophie positive (1830-42), Auguste Comte saw the evolution of human knowledge in three main stages: ‘the Theological, or fictitious; the Metaphysical, or abstract; and the Scientific, or positive’. In the ‘positive’, the third and last stage, ‘an explanation of facts is simply the establishment of a connection between single phenomena and some general facts, the number of which continually diminishes with the progress of science’.

In some scientific quarters, this Comtean notion of how science evolves and progresses remains common currency. But philosophers of science, over the past half-century, have turned against the representation of science as a ceaseless forward march toward truth. It is just not how science works, how it moves through history. It flies in the face of the wonderful and subtle historical nuances of how scientific revolutions have in fact occurred. It does not accommodate how some of the greatest scientific minds held dearly to some false beliefs. It wilfully ignores the many voices, disagreements and controversies through which scientific knowledge has often advanced and progressed over time.

However, many (and legitimate in their own right) criticisms against this naive view of science have committed a similar mistake. They have offered a portrait of science purged of any commitment to truth. They see truth as an inconvenient and disposable feature of science. Fraught as the ideal and pursuit of truth is with tendencies to petty doctrinairism, it is nonetheless a mistake to try to purge it. The fallacy of positivist philosophy was to think of science as coming in stages of some sort, or following a particular path, or historical cycles. The anti-truth trend in the philosophy of science has often ended up repeating this same misstep. It is important to move beyond the sterile dichotomy between the old (quasi-positivist) view of truth in science and the rival anti-truth trend of recent decades.

Let us start with some genuine philosophical questions about truth in science. Here are three: 1) Does science aim at truth? 2) Does science tell us the truth? 3) Should we expect science to tell us the truth?

In each of these questions, ‘science’ is a generic placeholder for whichever scientific discipline we are interested in questioning. Question one might strike us as otiose but, in fact, it triggered one of the liveliest debates of the past 40 years. Bas van Fraassen launched this debate as to whether science aims at truth with his pioneering book The Scientific Image (1980). Does science aim to tell us a true story about nature? Or does it aim only at saving the observable phenomena (namely, providing an account that makes sense of what we can observe, without expecting it to be the true account about nature)?

There are philosophers today who embrace the view that science does not need to be true in order to be good. They argue that asking for truth is risky because it commits one to believing in things (be it epicycles, phlogiston, ether or something else) that might prove false in the future. In their view, ‘empirically adequate’ theories, theories that ‘save the observable phenomena’, are good enough for science. For example, one might take the Standard Model in high-energy physics not as aiming at the truth about whether the world is really carved up into quarks, leptons and force carriers; whether these entities really have the properties that the Standard Model says they have; and so on.

When it comes to the second question – does science tell us the truth? – scientific realists and anti-realists of various stripes have debated it. Leaving aside the aim of science, let us concentrate on its track record instead. Has science told us the truth? Looking at the history of science, does it amount to a persuasive story of truth accumulated over the centuries? Philosophers, historians, sociologists and science-studies scholars have all challenged a simple affirmative answer to this question.

This decades-long, multi-pronged, disenchantment-with-truth trend in philosophy of science starts by rejecting the idea that there are facts about nature that make our scientific claims true or false. Fact-constructivism is only one aspect of it. Outlandish as this might sound, its defenders claim that there is not a single, objective way that the world is; there are rather many different and ‘equally true descriptions of the world, and their truth is the only standard of their faithfulness’, in the words of the philosopher Nelson Goodman. For example, he claimed that we do make facts, but not like, say, a baker makes bread, or a sculptor makes a statue. In Goodman’s view, we make facts any time we construct what he called a ‘version’ of the world (via works of art, of music, of poetry, or of science).

We do this all the time, for example, with stars and constellations. As the philosopher Hilary Putnam expresses it: ‘Nowadays, there is a Big Dipper up there in the sky, and we, so to speak, “put” a Big Dipper up there in the sky by constructing that version.’ Goodman’s world-making view has severe implications for truth in science. ‘Truth,’ he wrote, ‘far from being a solemn and severe master, is a docile and obedient servant. The scientist who supposes that he is single-mindedly dedicated to the search for truth deceives himself … He as much decrees as discovers the laws he sets forth, as much designs as discerns the patterns he delineates.’

Fact-constructivism sounds too radical to many philosophers, and alienating to most scientists. So here is another approach against factual truth, well-known among philosophers of science. Over the past 40 years, they have produced an extraordinary amount of work on models in science. The role of abstractions and idealisations in scientific models, they maintain, is to select and to distort aspects of the relevant target system. The billiard-ball model of Brownian motion, for example, represents the motion of molecules by idealising them as perfectly spherical billiard balls. Moreover, the model abstracts, or removes, molecules from their actual environment, which is of course where collisions among molecules take place.

Studying modelling practices in science has led some to argue that science does not tell the truth but it does provide important non-factive understanding. Consider, for instance, Boyle’s gas law, which captures the relation between pressure p and volume v in an ideal gas at constant temperature. At best, Boyle’s law is true ceteris paribus (ie, all else being equal) in highly idealised and contrived circumstances. There simply is no ideal gas with perfectly spherical molecules displaying ‘atomic facts’ (in a quasi-Wittgensteinian sense) that make Boyle’s law true. Despite being true of nothing real, the billiard-ball model of Brownian motion and Boyle’s ideal gas law do nonetheless provide important non-factual understanding of the behaviour of real gases. For they allow scientists to understand the relation between decreasing volume and increasing pressure in any gas, even if there are no atomic facts in nature about perfectly spherical molecules corresponding to such idealisations.
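
For reference, the idealisation being discussed here is Boyle’s law: for a fixed amount of ideal gas at constant temperature, pressure and volume are inversely proportional,

$$ pV = \text{constant}, \qquad \text{equivalently} \quad p_1 V_1 = p_2 V_2, $$

so halving the volume doubles the pressure. Real gases obey this only approximately, which is precisely the point about idealisation.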

Anti-dogmatic and anti-monist approaches to science have also questioned the value, as well as the facticity, of truth. From the 1960s, science-studies scholars began to see the word ‘truth’ as evoking unpalatable petty doctrinairism and intracultural battles in the wake of the Vietnamese war, postmodernism and, later on, what became known as the ‘science wars’. Many saw the physicist Thomas Kuhn as the forefather of a new historicist trend that dismantled what they perceived as the naive view that science aims at or tracks truth. Kuhn saw himself as ‘a fact lover and a truth seeker’. Yet in the final remarks to his classic The Structure of Scientific Revolutions (1962), he made a prescient, almost ominous, warning:

Does it really help to imagine that there is some one full, objective, true account of nature and that the proper measure of scientific achievement is the extent to which it brings us closer to the ultimate goal? … Successive stages in that developmental process are marked by an increase in articulation and specialisation. And the entire process might have occurred, as we now suppose biological evolution did, without benefit of a set goal, a permanent fixed scientific truth, of which each stage in the development of scientific knowledge is a better exemplar.

For Kuhn, truth is not an overarching aim of science across scientific revolutions. Nor do scientific revolutions (eg, from Ptolemaic to Copernican astronomy) track truth either. What they do, at best, is to increase our ability to solve anomalies that beset the previous paradigm (as when we eventually discovered that retrograde motion was only an illusion, and not something that needed epicycles and deferents to be explained).

We see the spirit of Kuhn’s warning in discussions today. Truth itself is not enough to settle or even guide debates about expertise, trust, consensus and dissent in science. The philosophers of science Inmaculada de Melo-Martín and Kristen Intemann have described the matter well in their book The Fight Against Doubt (2018). When it comes to the role of science in policymaking, the key is ‘engaging in discussions with all relevant parties about the values at stake, rather than the truth of particular scientific claims’. Policymaking involves politics and values, and ‘disagreement about values cannot, and should not, be decided by scientists alone’ or by just scientific evidence.

The third question is whether we should expect science to tell us the truth, or whether truth (or at least the notion of factual truth) is best left to logicians and metaphysicians.

While critical analyses of factual truth are indeed best left to logicians and metaphysicians, philosophers of science should not abdicate their responsibility to talk about truth in science. The quasi-Wittgensteinian myth of atomic facts as the truth-makers of scientific claims has proved inadequate to even scratch the surface of very complex practices in science. But that is not a good reason (or pretext) for forgoing truth altogether. Nor is it a reason for concluding that science should not be expected to tell us the truth.

But whose truth? By whose lights? Some might be tempted at this point by a Jamesian pragmatist theory of truth. American pragmatism has traditionally provided an alternative way of thinking about truth, which some philosophers of science see as more congenial to capturing the complex nuances and the power structure of scientific practice.

In James’s words: ‘“The true” … is only the expedient in the way of our thinking, just as “the right” is only the expedient in the way of our behaving.’ Stripped of its rhetorical flourishes, for James to be true is (to a good approximation) to work successfully. A scientific model is true – on a loosely Jamesian view – if it successfully facilitates and enables activities (be they epistemic or not). If the billiard-ball model of Brownian motion helps scientists to predict the behaviour of gas molecules, for example, the model is (pragmatically) true. The falseness of the presumption of perfectly spherical molecules does not matter.

The risk with a James-inspired conception of truth, as I see it, is that it is too malleable to resist the tides of time and the stresses of social forces endlessly at work in science. A James-inspired view of truth abdicates the expectation that science tells us the truth in the name of a non-better-qualified kind of success of a scientific practice. But how to tell apart cases where success does indeed track truth from cases where it does not? More to the point, when it comes to matters such as climate change, the benefit of vaccinating children, or economic forecasts, we seem to need more than a malleable Jamesian conception of truth for the sake of scientifically informed decisions that do not bow to pressure from powerful lobbies and political agendas (in the name of what ‘might work’). But, someone might reply, how can truth and pluralism go hand in hand if not by opting for a Jamesian conception of truth (if we really care about truth at all)?

There is another way of thinking about how truth and pluralism might go hand in hand, without reducing matters of truth to calculations of what is pragmatically good to individuals or communities sharing a scientific perspective at some point in time. First, it is necessary to understand the key term ‘scientific perspective’ and how it impinges on scientific pluralism. In its original use by the philosopher Ronald Giere in 2006, ‘scientific perspective’ is akin to Kuhn’s disciplinary matrix: a set of scientific models (including the relevant experimental instruments to gather data). In broader terms, scientific perspective is the disciplinary practice of a real scientific community at any given historical time. It includes the knowledge they produce, and the theoretical, technological and experimental resources they use, or that guide their work.

The time for a defence of truth in science has come. It begins with a commitment to get things right, which is at the heart of the realist programme, despite mounting Kuhnian challenges from the history of science, considerations about modelling, and values in contemporary scientific practice. In the simple-minded sense, getting things right means that things are as the relevant scientific theory says that they are. Climate science is true if what it says about CO2 emissions (and their effects on climate change) corresponds to the way that things are in nature. For the sake of powerful economic interests, sociopolitical consequences or simply different economic principles, one can try to discount, mitigate, compensate for, disregard or ignore altogether the way that things are. But doing so is to forgo the normative nature of the realist commitment in science. The scientific world, we have seen, is too complex and messy to be represented by any quasi-Wittgensteinian picture of atomic facts. Nor can the naive image of Comte’s positive science render justice to it. But acknowledging complexity and historical nuances gives no reason (or justification) for forgoing truth altogether; much less for concluding that science trades in falsehoods of some kind. It is part of our social responsibility as philosophers of science to set the record straight on such matters.

We should expect science to tell us the truth because, by realist lights, this is what science ought to do. Truth – understood as getting things right – is not the aim of science, because it is not what science (or, better, scientists) should aspire to (assuming one has realist leanings). Instead, it is what science ought to do by realist lights. Thus, to judge a scientific theory or model as true is to judge it as one that ‘commands our assent’. Truth, ultimately, is not an aspiration; a desirable (but maybe unachievable) goal; a figment in the mind of the working scientist; or, worse, an insupportable and dispensable burden in scientific research. Truth is a normative commitment inherent in scientific knowledge.

Constructive empiricists, instrumentalists, Jamesian pragmatists, relativists and constructivists do not share the same commitment. They do not share with the realist a suitable notion of ‘rightness’. As an example, compare the normative commitment to get things right with the view of the philosopher Richard Rorty, in whose hands Putnam’s truth as ‘idealised warranted assertibility’ reduces to what is acceptable to ‘us as we should like to be … us educated, sophisticated, tolerant, wet liberals, the people who are always willing to hear the other side, to think out all the implications’.

Getting things right is not a norm about us at our best, ‘educated, sophisticated, tolerant, wet liberals’. It is a norm inherent in scientific knowledge. To claim to know something in science (or about a scientific topic or domain) is to claim that the relevant beliefs about that topic or domain are true.

Thinking of truth as a normative commitment inherent in the very notion of scientific knowledge brings some benefits. It overcomes a false dichotomy between atomic facts and non-factive, non-truth-conducive inferences. And it makes realism compatible with perspectivism. Scientific communities that endorse historically and culturally situated scientific perspectives (either across the history of science or in contemporary science, across different fields or different scientific programmes) share – and indeed ought to share – a normative commitment to get things right. That is a minimum requirement to pass the bar of what we count as ‘scientific knowledge’.

Getting the evidence right, in the first instance – via accurate measurements, sound non-ad-hoc procedures, and robust inferential strategies – defines any research programme that is worth being called ‘scientific’. The realist commitment to get things right must begin with getting the evidence right. No perspective worthy of being called ‘scientific’ survives fudging the evidence, massaging or altering the data or discarding evidence.

Scientists ought to share rules for cross-perspectival assessment. That our knowledge is situated and perspectival does not make scientific truths relativised to perspectives. Often enough, scientific perspectives themselves provide the rules for cross-perspectival assessment. Those rules can be as simple as translating the 10 degree Celsius temperature in Edinburgh today into the 50 degree equivalent on the Fahrenheit scale. Or they can be as complex as retrieving the viscosity of a fluid in statistical mechanics, where fluids are treated as statistical ensembles of a large number of discrete molecules.
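As an illustrative aside (not part of the author’s text): the simple cross-perspectival rule cited above is just the standard linear Celsius-to-Fahrenheit conversion, which for Edinburgh’s 10 degrees works out as

$$ F = \tfrac{9}{5}\,C + 32, \qquad \tfrac{9}{5}\times 10 + 32 = 18 + 32 = 50\ ^\circ\mathrm{F}. $$

The point is that the conversion rule itself is available from within either perspective, which is what makes the cross-perspectival comparison possible at all.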

Let there be no doubt: scientific knowledge is the product of our getting it right across our perspectival multicultural scientific history. Scientific knowledge is not a prerogative of our Western cultural perspective (and its discipline-specific scientific perspectives) but the outcome of a plurality of historically and culturally situated scientific perspectives that, over millennia, have reliably produced knowledge with the tools, resources and concepts respectively available to each and every one of them.

Scientific truths are the resilient and robust outcome of a plurality of scientific perspectives that, over time, have meshed with one another in their (tacit, implicit and often survival-adaptive) normative commitment to reliably produce scientific knowledge for us as humankind. That is why, far from being an insufferable hindrance to scientific pluralism, truth is in fact its best safeguard in tolerant, open and democratic societies that are genuinely committed to the advancement of scientific knowledge in the very many forms it comes in.

Footnote: 

Religious creeds are a great obstacle to any full sympathy between the outlook of the scientist and the outlook which religion is so often supposed to require … The spirit of seeking which animates us refuses to regard any kind of creed as its goal. It would be a shock to come across a university where it was the practice of the students to recite adherence to Newton’s laws of motion, to Maxwell’s equations and to the electromagnetic theory of light. We should not deplore it the less if our own pet theory happened to be included, or if the list were brought up to date every few years. We should say that the students cannot possibly realise the intention of scientific training if they are taught to look on these results as things to be recited and subscribed to. Science may fall short of its ideal, and although the peril scarcely takes this extreme form, it is not always easy, particularly in popular science, to maintain our stand against creed and dogma.
― Arthur Stanley Eddington

See Also: 

Data, Facts and Information

Three Wise Men Talking Climate

Head, Heart and Science

Post-Truth Climatism

How Science Is Losing Its Humanity