EPA Plans for a Bright Environmental Future

EPA Administrator Andrew Wheeler delivered an address laying out the agency's vision for fulfilling its mission. Excerpts in italics with my formatting and bolds.

EPA’s mission has been straightforward since its founding: protect human health and the environment. Doing this ensures that all Americans – regardless of their zip code – have clean air to breathe, clean water to drink, and clean land to live, work, and play upon. Under President Trump, we have done this as well as, if not better than, any recent administration. This is great news, and like most great news, you rarely read about it in the press.

  • During the first three years of the Trump Administration, air pollution in this country fell 7 percent.
  • Last year, EPA delisted 27 Superfund sites, the most in a single year since 2001.
  • And agency programs have contributed more than $40 billion to clean water infrastructure investment during President Trump’s first term.

For much of the latter part of the 20th century, there was bipartisan understanding on what environmental protection meant. Some of it was captured in legislation and some of it by established practice. These principles formed a consensus about how the federal government did its job of protecting the environment.

But unfortunately, in the past decade or so, some members of former administrations and progressives in Congress have elevated single-issue advocacy – in many cases focused just on climate change – to virtue-signal to foreign capitals, over the interests of communities within their own country. Communities deserve better than this, but in the recent past, EPA has forgotten important parts of its mission. It’s my belief that we misdirect a lot of resources that could be better used to help communities across this country.

So, if this is where we are – with misdirected policies, misused resources, and a more partisan political environment – and we want an EPA for the next 50 years – how do we get there? One way to do this – and I’ve spent more than 25 years thinking about this problem – is to focus on helping communities become healthier in a more comprehensive manner.

Communities that deal with the worst pollution in this country – which tend to be low-income and minority – face multiple environmental problems that need solving. Many of the sites EPA has responsibility for are in some of the most disadvantaged communities in this country. And I will point out a truism: neglect is a form of harm, and it’s not fair for these communities to be abandoned just because they don’t have enough political power to stop the neglect.

So where does this put us as a country in 2020? The truth is this country is facing a lot of environmental and social problems that have not been dealt with the right way up until now. And while the focus of the next 50 years should not be like the last 50, it should be informed by it.

Many towns and cities in the United States are using the same water infrastructure they’ve used for over 100 years, and many schools use lead water pipes long after such pipes were banned from new buildings. The American public views our pesticide program through the lens of the trial lawyers who advertise on television instead of the way we manage the program. And the Superfund Program – which celebrates its 40th anniversary this year – has become focused on process rather than project completion.

These issues are challenging and would be difficult for any administration in office. But they would be easier to solve if people in power were more aware of the consequences of poor environmental policies.

It’s very disappointing to see governors on the East Coast, such as Governor Cuomo, unilaterally block pipelines that would take natural gas from Pennsylvania to New York and New England. These poor choices subject Americans to imports of gas from places like Russia, even in the face of evidence that U.S. natural gas has a much cleaner emissions profile than imported gas from Europe. Governor Cuomo is doing this in the name of climate change, but the carbon footprint of natural gas to New England through pipeline is much smaller than transporting it across the ocean. It also forces citizens in Vermont, New Hampshire and Maine to use more polluting wood and heating oil to heat their homes because of gas shortages in the winter months, which in turn creates very poor local air quality.

And there are many examples of poor environmental outcomes here in California, despite its environmental reputation. It should go without saying that dumping sewage into San Francisco Bay without disinfection, indeed without any chemical or biological treatment, is a bad idea, but that’s what has been happening for many years, against federal law.

And just last month, the rolling blackouts created by California’s latest electricity crisis – the result of policies against power plants being fueled by natural gas – spilled 50,000 gallons of raw sewage into the Oakland Estuary when back-up wastewater pumps failed. As state policymakers push more renewables onto a grid that must deliver power at times of the day when renewables aren’t available, these environmental accidents will happen more often. CARB seems to have no appreciation for baseload power generation. Or at least their regulations don’t.

Instead of confusing words with actions, and choosing empty symbolism over doing a good job, we can focus our attention and resources on helping communities help themselves. Doing this will strengthen this country from its foundation up – and start to solve the environmental problems of tomorrow. We could do a lot of good if the federal government, through Congress, puts resources to work with a fierce focus on community-driven environmentalism that promotes community revitalization on a greater scale.

This will do more for environmental justice than all the rhetoric in political campaigns.

Over the next four years the Trump Administration is going to reorganize how it approaches communities so it can take action and address the range of environmental issues that need to be addressed for people and places in need. In President Trump’s second term, we will help communities across this country take control and reshape themselves through the following five priorities.

  • Creating a Community-Driven Environmentalism that Promotes Community Revitalization.
  • Meeting the 21st Century Demands for Water.
  • Reimagining Superfund as a Project-Oriented Program.
  • Reforming the Permitting Process to Empower States. And,
  • Creating a Holistic Pesticide Program for the Future.

Traditionally, EPA has focused on environmental issues in a siloed manner, looking at air, water, and land separately, and states and local communities end up doing the same. We will change this, looking at Brownfields grants, environmental justice issues, and air quality in each community at the same time, and encourage them to do the same.

Since EPA’s Brownfields Program began in 1995, nearly $1.6 billion in grants have been spent to clean up contaminated sites and return blighted properties to productive reuse. To date, communities participating in the Program have been able to attract an additional $33.3 billion in cleanup and redevelopment funding after receiving Brownfields funds.

And when combined with the Opportunity Zones created in the landmark 2017 Trump tax bill, economic development, job creation and environmental improvements can truly operate together at the same time. A study published last month found that Opportunity Zones, which have only been in existence since 2018, have attracted about $75 billion in private investment, which in turn has lifted about one million people out of poverty through job creation in a very short time. While all the economic data isn’t available yet for 2019, it’s possible that Opportunity Zones are one of the biggest reasons black unemployment in this country fell to its lowest recorded levels ever in 2019.

One other way we are going to help communities is by creating one consolidated grant program that combines several smaller grants from multiple programs. It will encourage local communities to view environmental problems holistically, and it will help refocus EPA.

We can meet the 21st Century Demands for Clean Water by creating an integrated planning approach using WIFIA loans, our Water Reuse Action Plan, and our Nutrient Trading Initiative to improve water quality and modernize legal frameworks that have been around since the 19th Century. Over 40 percent of water utility workers are eligible to retire. We need to do a better job recruiting and training for 21st century threats to the water utilities industry.

And we can reinvigorate the Superfund Program. Roughly 16 percent of the U.S. population lives within 3 miles of a Superfund site today. That’s over 50 million Americans. EPA has allowed litigation and bureaucracy to dictate the pace of Superfund projects, instead of focusing on improving the environmental indicators and moving sites to completion. We need to fully implement the recommendations of the 2018 Superfund Task Force and reimagine the approach to clean up sites using the latest technologies and best practices.

We can improve the way we handle pesticide regulation. We do a good job approving pesticides on an individual basis, but we have not excelled in explaining to the public our holistic approach to pesticide management. The media and the courts tend to view our individual pesticide decisions in a one-off fashion, which has left the American public uninformed on our science-based process.

We will take into account biotech advances and better examinations of new active ingredients. Just this week, we announced a proposed rule that would remove onerous and expensive regulation of gene-edited plant protectants. We will safeguard pollinators to support the agriculture industry. And we can decrease reliance on animal testing to a point where no animal testing takes place for any of the agency’s programs by 2035.

Here are five things EPA is doing – five new pillars that have gone largely unnoticed by the public – that are changing the way the agency operates today.

The first pillar is our Cost-Benefit Rulemaking.
We are creating cost-benefit rules for every statute that governs EPA. The American public deserves to know what the costs and the benefits are for each of our rules. We are starting with the Clean Air Act, which will provide much better clarity to local communities, industry and stakeholders. And we will implement a cost-benefit regulation for all our environmental statutes by 2022.

Our second major pillar is Science Transparency.

The American public has a right to know the scientific justification behind a regulation. We are creating science transparency rules that are applied consistently. This will bring much-needed sunlight into our regulatory process. Some people oppose it, calling it a Secret Science rule. Those who oppose it want regulatory decisions to be made behind closed doors. They are the people who say, “Trust us, we know what’s best for you.” I want to bring our environmental decision-making process out of the proverbial smoke-filled back room. The Cost-Benefit and Science Transparency rules will go a long way in delivering that. After finalizing the Science Transparency rule later this year, EPA will conduct a statute-by-statute rulemaking, much like the Cost-Benefit rule.

Guidance documents are the third pillar of agency change, and it’s an area where we’ve made a lot of progress and shined even more light.

For years the agency was criticized for not making guidance documents – which have almost the force of law – available for public review. The cost of uncovering guidance documents became a major barrier for anyone wanting to improve their communities. Last year, EPA went through all our guidance documents from the agency’s beginnings, and we put all 10,000 documents into a searchable database. We also rescinded 1,000 guidance documents. Now all our guidance documents are available to the public for the first time. This is a huge change in administrative procedures at EPA, perhaps the biggest change in at least a generation.

The fourth pillar is our reorganization of all 10 of our regional offices to mirror our headquarters structure.

All the regional offices across the country now have an air division, a water division, a lands division, and a chemical division. This was a change that was needed for decades.

As the fifth pillar of EPA’s fundamental change, we have implemented a Lean Management System that tracks real metrics with which the agency can measure success or failure.

There is a lot of good news in these changes, but the best news is this: the problems I’ve highlighted are structural, and when a problem is structural or organizational, an agency can be changed. Until the Trump administration, EPA was not able to track how long it took to complete a permit, a grant process, or a state implementation plan, or really any meaningful task the agency had before it. Organizations do change; it can be hard, but they do change, and when they change, it’s usually for the better.

Conclusion:
As I said at the beginning, EPA data points to 2020 air quality being the best on record. Here in California, where the modern environmental movement began – and from where President Nixon brought it to the rest of the country – it’s important to acknowledge the role states have in being laboratories for democracy, and in this case, laboratories for environmental policy.

But for environmental policy to work nationally, the federal government and states must work together as partners, not as adversaries. To do this involves a new vision, and for a country searching for a new consensus, on the environment as well as on many other things, this can seem tough. But I believe we can find a new consensus, if we strive to.

I believe that by focusing EPA toward communities in the coming years, our agency can change the future for people living in this country who have been left behind simply for living in polluted places. We are a nation made up of communities, and communities are the foundation of this nation, not the other way around.

If we can do the work before us – break down the silos between us as an agency and elsewhere – I believe we can both protect the places we love and bring back the places that have been hurt by pollution – and make them even better than they were before.

I see EPA beginning its second half century with big challenges, but ones that can be overcome with the same skill and tenacity that helped this agency, and this country, overcome the challenges of the last 50 years.  I hope everyone can support our agency as we work to deliver this vision of a great environmental future for all Americans – regardless of where they live.

Thank you.

Silly Science Questions

Here at RealClearScience, a lazy blogging day can prompt a torrent of laughter! That’s because we occasionally return to the well of humor available at a crudely named subreddit of the popular website Reddit to bring you “hilariously stupid science questions”. Be prepared to drown in terrible puns, painful fallacies, and poor logic. Should you survive (and somehow enjoy the experience), you can check out some of the other installments in this recurring series. H/T Ross Pomeroy

Attempts to “Connect the Dots” With Few or No Clues

If we lose net neutrality, will the net become acidic or basic?
If global warming was real, wouldn’t the ice wall melt and let the oceans drain away? So then why is the sea level rising?
Why do meteorites always land in craters?
My pizza says to bake for 18-21 minutes, how do I bake something for -3 minutes?
Are children actually small or are they just far away?
The first dog in space died of stress. Was that because of all the vacuums up there?
If Mercury is so close to the sun how come we can get it inside thermometers???
Why are so many products harmful only to Californians?
How much higher would the sea level be if there were no sponges?
If setting off nukes creates “nuclear winters”, why don’t we set off a few nukes to offset global warming?
If electricity always follows the path of least resistance, why doesn’t lightning only strike in France?
What happens if a very stoppable force meets a very movable object?
If Pi is never ending, why is there still world hunger?
Is HIV considered a “retro virus” because it started to be a problem in the 80s?
Why does alcohol need proofs? Shouldn’t we just take their word for it?
Do strippers in the southern hemisphere spin around their poles in the opposite direction as strippers in the northern hemisphere?
If sound can’t travel through vacuums, why are they so loud?
How can we trust atoms if they make up everything?
If the human body is ~90% water, why can’t we put out fires with our bodies?
If there’s a new moon every month. Where does the old one go?
Why did ancient people bury so many buildings?
How can fish hold their breath for so long underwater?
If Corn Oil is made from corn, and Olive Oil is made from olives, where does Baby Oil come from?
Before light bulbs were invented, how did people get ideas?
Does it take 18 months for twins to be born?
I just found out I am bipolar. Should I avoid magnets?
From which sheep do we get steel wool?
When will the gorilla at the zoo turn into a person?
Is the water bug the natural predator of the firefly?
Did Schrödinger ever consider the fact that his cat had nine lives?
If oxygen was discovered in 1783 by Antoine Lavoisier, how did people breathe before then?

Hydroxychloroquine: A Morality Tale

Norman Doidge writes in Tablet: A startling investigation into how a cheap, well-known drug became a political football in the midst of a pandemic. Excerpts in italics with my bolds.

We live in a culture that has uncritically accepted that every domain of life is political, and that even things we think are not political are so, that all human enterprises are merely power struggles, that even the idea of “truth” is a fantasy, and really a matter of imposing one’s view on others. For a while, some held out hope that science remained an exception to this. That scientists would not bring their personal political biases into their science, and they would not be mobbed if what they said was unwelcome to one faction or another. But the sordid 2020 drama of hydroxychloroquine—which saw scientists routinely attacked for critically evaluating evidence and coming to politically inconvenient conclusions—has, for many, killed those hopes.

Phase 1 of the pandemic saw the near collapse of the credible authority of much of our public health officialdom at the highest levels, led by the exposure of the corruption of the World Health Organization. The crisis was deepened by the numerous reversals on recommendations, which led to the growing belief that too many officials were interpreting, bending, or speaking about the science relevant to the pandemic in a politicized way. Phase 2 is equally dangerous, for it shows that politicization has started to penetrate the peer review process, and how studies are reported in scientific journals, and of course in the press.

What is unique about the hydroxychloroquine discussion is that it is a story of “unwishful thinking”—to coin a term for the perverse hope that some good outcome that most sane people would earnestly desire will never come to pass. It’s about how, in the midst of a pandemic, thousands started earnestly hoping—before the science was really in—that a drug, one that might save lives at a comparatively low cost, would not actually do so. Reasonably good studies were depicted as sloppy work, fatally flawed. Many have excelled in making counterfeit bills that look real, but few have excelled at making real bills look counterfeit. As such, as we sort this out, we shall observe not only some “tricks” about how to make bad studies look like good ones, but also how to make good studies look like bad ones. And why should anyone facing a pandemic wish to discredit potentially lifesaving medications? Well, in fact, this ability can come in very handy in the midst of a plague, when many medications and vaccines are competing to Save the World—and for the billions of dollars that will go along with that.

So this story is twofold. It’s about the discussion that unfolded (and is still unfolding) around hydroxychloroquine, but if you’re here for a definitive answer to a narrow question about one specific drug (“does hydroxychloroquine work?”), you will be disappointed. Because what our tale is really concerned with is the perilous state of vulnerability of our scientific discourse, models, and institutions—which is arguably a much bigger, and more urgent problem, since there are other drugs that must be tested for safety and effectiveness (most complex illnesses like COVID-19 often require a group of medications) as well as vaccines, which would be slated to be given to billions of people. “This misbegotten episode regarding hydroxychloroquine will be studied by sociologists of medicine as a classic example of how extra-scientific factors overrode clear-cut medical evidence,” Yale professor of epidemiology Harvey A. Risch recently argued. Why not start studying it now?

Norman Doidge tells the story in some detail (see article link in red at the top):

  • the history of quinine, chloroquine, and HCQ medical effectiveness;
  • how HCQ was used against SARS-CoV-2 early on;
  • how Raoult was the one in his lab who came up with the idea of combining the two older drugs, HCQ and azithromycin, for COVID-19;
  • the criticisms of the French studies exemplifying “unwishful thinking”;
  • Trump’s interest in HCQ and the media backlash against the medicine;
  • the failure of ICU treatment protocols with ventilators and no alternatives to off-label prescribing;
  • the insistence upon Randomized Controlled Trials (RCTs) as the only valid test for HCQ;
  • the confounding factors in such studies and the problems replicating RCT results; and,
  • the publication in high-profile journals of studies structured for HCQ to fail to help infected patients.

Conclusion from Doidge

Lots and lots of COVID-19 studies will come out—several hundred are in the works. People will hope more and more accumulating numbers—and more big data—will settle it. But big data, interpreted by people who have never treated any of the patients involved can be dangerous, a kind of exalted nonsense. It’s an old lesson: Quantity is not quality.

On this, I favor the all-available-evidence approach, which understands that large studies are important, but also that the medication that might be best for the largest number of people may not be the best one for an individual patient. In fact, it would be typical of medicine that a number of different medications will be needed for COVID-19, and that some will interact with a patient’s existing medications or conditions, so that the more medications we have to choose from, the better. We should give individual clinicians on the front lines the usual latitude to take account of their individual patient’s condition and preferences, and encourage these physicians to bring to bear everything they have learned and read (they have been trained to read studies), and continue to read, but also what they have seen with their own eyes. Unlike medical bureaucrats or others who issue decrees from remote places, physicians are literally on our front lines—actually observing the patients in question, and bound by a Hippocratic Oath to serve them—and not the Lancet or WHO or CNN.

As contentious as this debate has been, and as urgent as the need for informed and timely information seems now, the reason to understand what happened with HCQ is for what it reflects about the social context within which science is now produced:

  • a landscape overly influenced by technology and its obsession with big data abstraction over concrete, tangible human experience;
  • academics who increasingly see all human activities as “political” power games, and so in good conscience can now justify inserting their own politics into academic pursuits and reporting;
  • extraordinarily powerful pharmaceutical companies competing for hundreds of billions of dollars;
  • politicians competing for pharmaceutical dollars as well as public adoration—both of which come these days too much from social media; and,
  • the decaying of the journalistic and scholarly super-layers that used to do much better holding everyone in this pyramid accountable, but no longer do, or even can.

If you think this year’s controversy is bad, consider that hydroxychloroquine is given to relatively few people with COVID-19, all sick, many with nothing to lose. It enters the body, and leaves fairly quickly, and has been known to us for decades. COVID vaccines, which advocates will want to be mandatory and given to all people—healthy and not, young and old—are being rushed past their normal safety precautions and regulations, and the typical five-to-10-year observation period is being waived to get “Operation Warp Speed” done as soon as possible.

This is being done with the endorsement of public health officials—the same ones, in many cases who are saying HCQ is suddenly extremely dangerous.

Philosophically, and psychologically, it is a fantastic spectacle to behold, a reversal, the magnitude and the chutzpah of which must inspire awe: a public health establishment, showing extraordinary risk aversion to medications and treatments that are extremely well known and have been used by billions, suddenly throwing caution to the wind and endorsing the rollout of treatments that are entirely novel—and about which we literally can’t possibly know anything as regards their long-term effects. Their manufacturers know this well themselves, which is why they have aimed for, insisted on, and already been granted indemnification—a guarantee by those same public health officials and governments that they will not be held legally accountable should their product cause injury.

From unheard of extremes of caution and “unwishful thinking,” to unheard of extremes of risk-taking, and recklessly wishful thinking, this double standard, this about-face, is not happening because this issue of public safety is really so complex a problem that only our experts can understand it; it is happening because there is, right now, a much bigger problem: with our experts, and with the institutions that we had trusted to help solve our most pressing scientific and medical problems.

Unless these are attended to, HCQ won’t be remembered simply as that major medical issue that no one could agree on, and which left overwhelming controversy, confusion, and possibly unnecessary deaths of tens of thousands in its wake; it will be one of many in a chain of such disasters.

Norman Doidge, a contributing writer for Tablet, is a psychiatrist, psychoanalyst, and author of The Brain That Changes Itself and The Brain’s Way of Healing.


Kneeling to Experts Not Advisable

Taking an opinion “under advisement” means seriously considering it but retaining the independence to weigh it against other considerations.  Charles Lipson explains the importance of not bowing to expert recommendations in his article Reopening Schools and the Limits of Expertise.  Excerpts in italics with my bolds.

The last thing you want to hear from your brain surgeon (aside from “Oops”) is “Wow, I’ve always wanted to do one of these.” You’ll feel a lot better hearing, “I’ve done 30 operations like this over the past month and published several articles about them.”

Expertise like that is essential for brain surgery, building rockets, constructing skyscrapers, and much, much more. Our modern world is built upon it. We need such expert advice as we decide whether to open schools this fall, and we should turn to educators, physicians, and economists to get it. But ultimately we, as citizens and the local officials we elect, should make the choices. These are not technical decisions but political ones that incorporate technical issues and projections.

We should hold our representatives, not the experts, responsible for the choices they make.

When we listen to experts, we should remember Clint Eastwood’s comment in “Magnum Force”: “A man’s got to know his limitations.” Even the best authorities have them, and one, ironically, is that they seldom admit them, even to themselves. It is important for us both to appreciate expert advice and to recognize its limits every time we’re told to “be quiet and do what they say.” We should listen, think it over, and then make our own decisions as citizens, parents, teachers, business owners, workers, retirees — and voters.

The best way to understand why we need experts but also why we need to weigh their advice, not swallow it whole and uncooked, is to consider this illustration: Should we build a hydroelectric dam in a beautiful valley? If we construct it, we certainly need the best engineers and construction workers. We need engineering firms to project the cost and economists to project the price of its energy and potable water. Their expertise is essential.

But they cannot tell us whether it is wise to destroy California’s Hetch Hetchy Valley to build that dam. The world’s top experts on wildlife conservation and regional economic growth cannot give us the definitive answer, either. They would give us, at best, different answers, reflecting their different expertise. The conservationist would tell us it is a terrible idea to destroy such beautiful, irreplaceable habitat and kill endangered species. The economist would tell us we need the energy and fresh water if Northern California is to grow. What no economist could have predicted, decades ago, is that the entire world’s income would vastly increase because of technological advances from Silicon Valley, which had the resources needed to grow.

The hydroelectric example illustrates a more general point: complex questions involve experts in multiple fields, but there is no supra-expert to aggregate their differing advice. Even if we assume all experts within a field give similar advice, who can aggregate it across fields? No one. There is no “expert of experts.” In the example of the hydroelectric dam, the policy decision depends on how much we weigh conservation versus growth and how well we can predict future options and alternatives, such as the price of solar power or prospective growth from Palo Alto to San Jose.

Sorting out the answers is ultimately a question for voters and their representatives, not for experts in hydroelectric engineering, wildlife conservation, or regional economics. We need the best advice, but only we, as citizens, can weigh it and make a final decision. In a representative democracy, we elect officials to make those decisions. If democracy is to work, we must hold them accountable. One criticism of the growing regulatory state is that it is impossible to hold the decision makers accountable. Some of that criticism should be directed at legislators, who avoid responsibility by writing vague laws and then off-loading hard decisions onto bureaucrats and judges.

We should be especially skeptical when experts predict distant outcomes.

Their record is none too impressive. We should be skeptical, too, when laws and regulations set one definitive criterion, such as preserving the endangered snail darter, at the expense of all other considerations. That might be the best decision, or it might not, but it is ultimately a political choice. Right now, federal judges have awarded themselves extensive — and unilateral — power to make it.

These problems, which combine technical expertise and political judgment, are essential to understanding our dilemmas about reopening K-12 schools during the COVID-19 pandemic. Epidemiologists are saying, “Resuming in-person instruction too soon could spread the disease. Although children are at low risk, they will bring it home to parents and grandparents.” Pediatricians, by contrast, say it is important for children’s overall health to get them back in school. Online learning is not very effective, they say, and losing a year’s classroom instruction and socialization will be extremely harmful. Economists focus on different issues, such as parents who cannot return to full-time employment because they must care for children at home. That constraint is especially harmful to one-parent households and low-income, hourly workers, whose children also have less access to computers and fast internet connections.

Notice that these experts are not the self-interested voices of interest groups such as teachers’ unions or small businesses. They are specialists in economics, education, and public health. Each has its own “silo of expertise.” Each silo produces a different answer because its experts focus on their own subset of issues and weigh them most heavily.

As we listen to these experts, we need to remember that even the best, most disinterested advice has its limitations. Reopening schools, like other big policy questions, involves multiple silos and hundreds of moving parts. It is impossible to predict what all those parts will do, how much weight to give each one, or what effects they might have, now and in the distant future. It was only through trial and error that we learned how inadequate online instruction really is. We entered this massive national experiment with some optimism and now trudge forward with pessimism.

We should be humble about what we still don’t know.

Our success in reopening schools and businesses depends on things we cannot know with certainty. How quickly will our biotechnology companies discover effective therapeutics and vaccines? How quickly will the American population develop “herd immunity?” How soon will customers return, en masse, to shopping malls, indoor dining, and cross-country travel?

Predicting the secondary and tertiary effects of policy choices is especially hard.

Keeping businesses closed, for instance, sharply reduces local tax revenues, which probably means reducing essential services such as garbage collection and local policing. Those cuts harm public health and safety. But how much? No expert is smart enough to predict all these knock-on effects, much less aggregate them and give an overall conclusion. As it happens, experts are no better at predicting these effects than well-informed laymen. The main difference, according to studies, is that experts are more confident in their (often-wrong) predictions.

The point here is not that experts are irrelevant. We need them, and we need to pay attention to their data, logic, and conclusions. But we also need to remember that

  • Even the best current knowledge has its limits, and
  • There are no “supra-experts” to weigh the best advice from different fields and aggregate it to reach the “definitive” answer.

Sorting out this expert advice is not a technical question. It is a political one. Mayors, governors, and school boards across the country understand that crucial point as they decide whether to open schools this fall for in-person instruction. The voters understand it, too. They should listen to the experts, see what other jurisdictions decide, and check out their varied results. Then, they should walk into the voting booth and hold their representatives to account.

Charles Lipson is the Peter B. Ritzma Professor of Political Science Emeritus at the University of Chicago, where he founded the Program on International Politics, Economics, and Security.

In Praise of Science Skeptics

Pandemic Panic: Play or Quit? Only a skeptic gives you a choice.

Peter St. Onge writes at Mises Wire, The COVID-19 Panic Shows Us Why Science Needs Skeptics. Excerpts in italics with my bolds and images.

The dumpster fire of COVID predictions has shown exactly why it’s important to sustain and nurture skeptics, lest we blunder into scientific monoculture and groupthink. And yet the explosion of “cancel culture” intolerance of any opinion that doesn’t fit a shrinking “3 x 5 card” of right-think risks destroying the very tolerance and science that sustains our civilization.

Since World War II, America has suffered two respiratory pandemics comparable to COVID-19: the 1958 “Asian flu,” then the 1969 “Hong Kong flu.” In neither case did we shut down the economy—people were simply more careful. Not all that careful, of course—Jimi Hendrix was playing at Woodstock in the middle of the 1969 pandemic, and social distancing wasn’t really a thing in the “Summer of Love.”

And yet COVID-19 was very different thanks to a single “buggy mess” of a computer prediction from one Neil Ferguson, a British epidemiologist given to hysterical overestimates of deaths, from mad cow to bird flu to H1N1.

For COVID-19, Ferguson predicted 3 million deaths in America unless we basically shut down the economy. Panicked policymakers took his prediction as gospel, dressed as it was in the cloak of science.

Now, long after governments plunged half the world into a Great Depression, those panicked predictions are being quietly revised down by an order of magnitude, now suggesting a final tally comparable to 1958 and 1969.

COVID-19 would have been a deadly pandemic with or without Ferguson’s fantasies, but had we known the true scale and parameters of the threat we might have chosen better-tailored means to safeguard the elderly and at-risk while sustaining the wider economy. After all, economists have long known that mass unemployment and widespread bankruptcies carry enormous health consequences that are very real to the victims suffering drained life savings, ruined businesses, broken families, widespread mental and physical health deterioration, even suicide. Decisions involve tradeoffs.

COVID-19 has illustrated the importance of free and robust inquiry. After all, panicked politicians facing media accusations of “killing grandma” aren’t in a very good position to evaluate these tradeoffs, and they need intellectual ammunition. Not only to show them which path is best, but to bolster them when a left-wing media establishment attacks.

Moreover, voters need this ammunition so they can actually tell the politicians what to do. This means two things: debate that is transparent, and debate that is tolerant of skeptics.

Transparency means data and computer code open to public scrutiny as the minimum requirement for any study that is used to justify policy, from lockdowns to carbon taxes to whatever comes next. These studies must be based on verifiable facts, code that does what it says it does, and the ensuing decision-making process must be transparent and open to the public.

One former Indian bureaucrat put it well: “Emergency situations like this pandemic should require a far higher—and not lower—level of scrutiny,” since policy choices have such tremendous impact. “This suggests a need for democracies to strengthen their critical thinking capacity by creating an independent ‘Black Hat’ institution whose purpose would be to question any technical foundations of government decisions.”

Even more important than transparency, debate must be tolerant of alternative opinions. This means ideas that are wrong, offensive, even dangerous, have to be tolerated, even celebrated. By all means, refute them—most alternative hypotheses are completely wrong, so it shouldn’t be hard to simply refute them without censorship. This, after all, is the essence of science—to generate hypotheses testable by anybody, not just licensed “experts.”

Whether we are faced with a new crisis, a new policy innovation, or simply designing a better mousetrap, groupthink and censorship are recipes for disaster and stagnation, while transparency and tolerance of new ideas are the very essence of progress. Indeed, it is largely this scientific tolerance that allowed us to rise up from the long, brutal darkness of poverty.

As Francis Bacon observed four hundred years ago, innovation and new knowledge do not come from prestigious “learned” insiders; rather, progress comes from the questioner, the tinkerer, the skeptic.

Indeed, every major scientific advance challenged the “settled science” of its day, and was often denounced as pernicious and false, even dangerous. The modern blood transfusion, for example, was developed in the late 1600s, then banned for nearly a century by a hostile medical establishment, “canceling” tens of millions of lives at the altar of groupthink and hostility to skeptics.

It’s comforting to know that our problems are old ones, and also encouraging that our solution is both time-tested and simple: transparency and tolerance. After all, the very reason our culture elevates science is because it is built on a millennia-long evolutionary “battle of ideas” in which theories are constantly tested and retested in a delightfully endless search for ever better understanding.

This implies there is no such thing as “settled science”—the phrase itself is contrary to the scientific method. In reality, science is not some billion-dollar gleaming palace in Bethesda, rather it’s a gnarled mutant sewer rat that takes all comers because it’s been burned, cut, run over, crushed, run through the wood chipper, and survived. That ugly beast is our salvation, not the gleaming palace where we bow down to whichever random guy has the biggest degree in the room.

Only with free inquiry for the most unpopular, offensive, dangerous, and, yes, wrong ideas imaginable is that power sustained. And if we break that, we can expect a series of rapid catastrophes that, like failed golden ages of the past, return us to the nasty, brutish, and very short lives that have been humanity’s norm.

Whether pandemic, climate change, “institutional racism,” or whatever new crisis they conjure next, we have a fundamental right to tenaciously defend the transparency and tolerance that constitutes science itself so that it remains among humanity’s crowning achievements, and so that we preserve this golden age that would astound our ancestors.

Update: Stories vs. Facts

This post revisits a previous discussion of how public discourse is increasingly governed by stories at the expense of facts.  The recent street violence provides another example.  NYT columnist Bari Weiss provides an insider’s look at how the media produces stories instead of reports.

Bari Weiss Twitter Thread

The civil war inside The New York Times between the (mostly young) wokes and the (mostly 40+) liberals is the same one raging inside other publications and companies across the country. The dynamic is always the same. (Thread.)

The Old Guard lives by a set of principles we can broadly call civil libertarianism. They assumed they shared that worldview with the young people they hired who called themselves liberals and progressives. But it was an incorrect assumption.

The New Guard has a different worldview, one articulated best by @JonHaidt and @glukianoff. They call it “safetyism,” in which the right of people to feel emotionally and psychologically safe trumps what were previously considered core liberal values, like free speech.

Perhaps the cleanest example of this dynamic was in 2018, when David Remnick, under tremendous public pressure from his staffers, disinvited Steve Bannon from appearing on stage at the New Yorker Ideas Festival. But there are dozens and dozens of examples.

I’ve been mocked by many people over the past few years for writing about the campus culture wars. They told me it was a sideshow. But this was always why it mattered: The people who graduated from those campuses would rise to power inside key institutions and transform them.

I’m in no way surprised by what has now exploded into public view. In a way, it’s oddly comforting: I feel less alone and less crazy trying to explain the dynamic to people. What I am shocked by is the speed. I thought it would take a few years, not a few weeks.

Here’s one way to think about what’s at stake: The New York Times motto is “all the news that’s fit to print.” One group emphasizes the word “all.” The other, the word “fit.”

W/r/t Tom Cotton’s oped and the choice to run it: I agree with our critics that it’s a dodge to say “we want a totally open marketplace of ideas!” There are limits. Obviously. The question is: does his view fall outside those limits? Maybe the answer is yes.

If the answer is yes, it means that the views of more than half of Americans are unacceptable. And perhaps they are. https://theweek.com/speedreads/917760/plurality-democrats-support-calling-military-aid-police-during-protests-poll-shows

“A plurality of Democrats would support calling in the U.S. military to aid police during protests,…
President Trump on Monday threatened to call in the United States military in an effort to curtail protests across the United States, and it turns out most Americans — even some of those who think the president is doing a poor job of handling the demonstrations against police brutality — would support such an action.”

Background from Previous Post

Facts vs Stories is written by Steven Novella at Neurologica. Excerpts in italics with my bolds.

There is a common style of journalism, that you are almost certainly very familiar with, in which the report starts with a personal story, then delves into the facts at hand often with reference to the framing story and others like it, and returns at the end to the original personal connection. This format is so common it’s a cliche, and often the desire to connect the actual new information to an emotional story takes over the reporting and undermines the facts.

This format reflects a more general phenomenon – that people are generally more interested in and influenced by a good narrative than by dry facts. Or are we? New research suggests that while the answer is still generally yes, there is some more nuance here (isn’t there always?). The researchers did three studies in which they compared the effects of strong vs weak facts presented either alone or embedded in a story. In the first two studies the information was about a fictitious new phone. The weak fact was that the phone could withstand a fall of 3 feet. The strong fact was that the phone could withstand a fall of 30 feet. What they found in both studies is that the weak fact was more persuasive when presented embedded in a story than alone, while the strong fact was less persuasive.

They then did a third study about a fictitious flu medicine, and asked subjects if they would give their e-mail address for further information. People are generally reluctant to give away their e-mail address unless it’s worth it, so this was a good test of how persuasive the information was. When a strong fact about the medicine was given alone, 34% of the participants were willing to provide their e-mail. When embedded in a story, only 18% provided their e-mail.  So, what is responsible for this reversal of the normal effect that stories are generally more persuasive than dry facts?

The authors suggest that stories may impair our ability to evaluate factual information.

This is not unreasonable, and is suggested by other research as well. To a much greater extent than you might think, cognition is a zero-sum game. When you allocate resources to one task, those resources are taken away from other mental tasks (this basic process is called “interference” by psychologists). Further, adding complexity to brain processing, even if this leads to more sophisticated analysis of information, tends to slow down the whole process. And also, parts of the brain can directly suppress the functioning of other parts of the brain. This inhibitory function is actually a critical part of how the brain works together.

Perhaps the most dramatic relevant example of this is a study I wrote about previously, in which fMRI scans were used to study subjects listening to a charismatic speaker who was either from the subjects’ religion or not. When a charismatic speaker who matched the subject’s religion was speaking, the critical thinking part of the brain was literally suppressed. In fact, this study also found opposite effects depending on context.

The contrast estimates reveal a significant increase of activity in response to the non-Christian speaker (compared to baseline) and a massive deactivation in response to the Christian speaker known for his healing powers. These results support recent observations that social categories can modulate the frontal executive network in opposite directions corresponding to the cognitive load they impose on the executive system.

So when listening to speech from a belief system we don’t already believe, we engaged our executive function. When listening to speech from within our existing belief system, we suppressed our executive function.

In regards to the current study, is something similar going on? Does processing the emotional content of stories impair our processing of factual information, which is a benefit for weak facts but actually a detriment to the persuasive power of strong facts that are persuasive on their own?

Another potential explanation occurs to me, however (showing how difficult it can be to interpret the results of psychological research like this). It is a reasonable premise that a strong fact is more persuasive on its own than a weak fact – being able to survive a 3 foot fall is not as impressive as a 30 foot fall. But the more impressive fact may also trigger more skepticism. I may simply not believe that a phone could survive such a fall. If that fact, however, is presented in a straightforward fashion, it may seem somewhat credible. If it is presented as part of a story that is clearly meant to persuade me, then that might trigger more skepticism. In fact, doing so is inherently sketchy. The strong fact is impressive on its own, so why are you trying to persuade me with this unnecessary personal story – unless the fact is BS?

There is also research to support this hypothesis. When a documentary about a fringe topic, like UFOs, includes the claim that, “This is true,” that actually triggers more skepticism. It encourages the audience to think, “Wait a minute, is this true?” Meanwhile, including a scientist who says, “This is not true,” may actually increase belief, because the audience is impressed that the subject is being taken seriously by a scientist, regardless of their ultimate conclusion. But the extent of such backfire effects remains controversial in psychological research – it appears to be very context dependent.

I would summarize all this by saying that – we can identify psychological effects that relate to belief and skepticism. However, there are many potential effects that can be triggered in different situations, and interact in often complex and unpredictable ways. So even when we identify a real effect, such as the persuasive power of stories, it doesn’t predict what will happen in every case. In fact, the net statistical effect may disappear or even reverse in certain contexts, because it is either neutralized or overwhelmed by another effect. I think that is what is happening here.

What do you do when you are trying to be persuasive, then? The answer has to be: it depends. Who is your audience? What claims or facts are you trying to get across? What is the ultimate goal of the persuasion (public service, education, political activism, marketing)? I don’t think we can generate any solid algorithm, but we do have some guiding rules of thumb.

First, know your audience, or at least those you are trying to persuade. No message will be persuasive to everyone.

If the facts are impressive on their own, let them speak for themselves. Perhaps put them into a little context, but don’t try to wrap them up in an emotional story. That may backfire.

Depending on context, your goal may be to not just provide facts, but to persuade your audience to reject a current narrative for a better one. In this case the research suggests you should both argue against the current narrative, and provide a replacement that provides an explanatory model.

So you can’t just debunk a myth, conspiracy theory, or misconception. You need to provide the audience with another way to make sense of their world.

When possible find common ground. Start with the premises that you think most reasonable people will agree with, then build from there.

Now, it’s not my goal to outline how to convince people of things that are not true, or that are subjective but in your personal interest. That’s not what this blog is about. I am only interested in persuading people to apportion their belief to the logic and evidence. So I am not going to recommend ways to avoid triggering skepticism – I want to trigger skepticism. I just want it to be skepticism based on science and critical thinking, not emotional or partisan denial, nihilism, cynicism, or just being contrarian.

You also have to recognize that it can be difficult to persuade people. This is especially true if your message is constrained by facts and reality. Sometimes the real information is not optimized for emotional appeal, and it has to compete against messages that are so optimized (and are unconstrained by reality). But at least knowing the science about how people process information and form their beliefs is useful.

Postscript:  Hans Rosling demonstrates how to use data to tell the story of our rising civilization.

Bottom Line:  When it comes to science, the rule is to follow the facts.  When the story is contradicted by new facts, the story changes to fit the facts, not the other way around.

See also:  Data, Facts and Information

Media Turn Math Dopes into Dupes

Those who have investigated global warming/climate change discovered that the numbers don’t add up. But if you don’t do the math you won’t know that, because in the details is found the truth (the devilish contradictions to sweeping claims). Those without numerical literacy (including apparently most journalists) are at the mercy of the loudest advocates. Social policy then becomes a matter of going along with herd popularity. Shout out to AOC!

Now we get the additional revelation regarding pandemic math and the refusal to correct over-the-top predictions. It’s the same dynamic but accelerated by the more immediate failure of models to forecast contagious reality. Sean Trende writes at Real Clear Politics The Costly Failure to Update Sky-Is-Falling Predictions. Excerpts in italics with my bolds.

On March 6, Liz Specht, Ph.D., posted a thread on Twitter that immediately went viral. As of this writing, it has received over 100,000 likes and almost 41,000 retweets, and was republished at Stat News. It purported to “talk math” and reflected the views of “highly esteemed epidemiologists.” It insisted it was “not a hypothetical, fear-mongering, worst-case scenario,” and that, while the predictions it contained might be wrong, they would not be “orders of magnitude wrong.” It was also catastrophically incorrect.

The crux of Dr. Specht’s 35-tweet thread was that the rapid doubling of COVID-19 cases would lead to about 1 million cases by May 5, 4 million by May 11, and so forth. Under this scenario, with a 10% hospitalization rate, we would expect approximately 400,000 hospitalizations by mid-May, which would more than overwhelm the estimated 330,000 available hospital beds in the country. This would combine with a lack of protective equipment for health care workers and lead to them “dropping from the workforce for weeks at a time,” to shortages of saline drips and so forth. Half the world would be infected by the summer, and we were implicitly advised to buy dry goods and to prepare not to leave the house.
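The thread’s arithmetic is easy to reproduce. Here is a minimal sketch of the doubling-time projection it relied on, assuming a constant three-day doubling time and the 10% hospitalization rate cited above (all figures are the thread’s claims, not verified data):

```python
# Sketch of the naive doubling-time projection behind the viral thread.
# Assumptions (taken from the thread, not from verified data):
#   - cases double every 3 days, indefinitely
#   - 10% of cases require hospitalization
#   - ~330,000 hospital beds available nationwide

def project_cases(initial_cases: int, days: int, doubling_time: float = 3.0) -> float:
    """Project cases forward assuming unchecked exponential growth."""
    return initial_cases * 2 ** (days / doubling_time)

# Starting from the thread's 1 million cases on May 5:
cases_may_11 = project_cases(1_000_000, days=6)   # two doublings -> 4,000,000
hospitalized = 0.10 * cases_may_11                # -> 400,000, vs ~330,000 beds

print(f"Projected cases by May 11: {cases_may_11:,.0f}")
print(f"Projected hospitalizations: {hospitalized:,.0f}")
```

The failure mode, as the article goes on to argue, is not the arithmetic itself but the assumption that the doubling time stays constant regardless of behavior, immunity, and policy.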

Interestingly, this thread was wrong not because we managed to bend the curve and stave off the apocalypse; for starters, Dr. Specht described the cancellation of large events and workplace closures as something that would shift things by only days or weeks.

Instead, this thread was wrong because it dramatically understated our knowledge of the way the virus worked; it fell prey to the problem, common among experts, of failing to address adequately the uncertainty surrounding its point estimates. It did so in two opposing ways. First, it dramatically understated the rate of spread. If serological tests are to be remotely believed, we likely hit the apocalyptic milestone of 2 million cases quite some time ago. Not in the United States, mind you, but in New York City, where 20% of residents showed positive COVID-19 antibodies on April 23. Fourteen percent of state residents showed antibodies, suggesting 2.5 million cases in the Empire State alone; since antibodies take a while to develop, this was likely the state of affairs in mid-April or earlier.
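The seroprevalence arithmetic behind that claim is similarly simple. A hedged sketch: the 14% statewide antibody rate is from the article, while the roughly 19.5 million population figure is an outside assumption; the result lands near the article’s 2.5 million estimate, with the exact number depending on which population base is used.

```python
# Back-of-envelope implied-infection count from an antibody survey.
# The 14% statewide antibody rate comes from the article; the
# ~19.5 million New York State population figure is an assumption.

def implied_cases(population: int, seroprevalence: float) -> float:
    """Estimate total infections implied by an antibody survey rate."""
    return population * seroprevalence

ny_state_pop = 19_500_000
estimate = implied_cases(ny_state_pop, 0.14)   # ~2.7 million infections
print(f"Implied NY State infections: {estimate:,.0f}")
```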

But in addition to being wrong about the rate of spread, the thread was also very wrong about the rate of hospitalization. While New York City found its hospital system stretched, it avoided catastrophic failure, despite having within its borders the entire number of cases predicted for the country as a whole, a month earlier than predicted. Other areas of the United States found themselves with empty hospital beds and unused emergency capacity.

One would think that, given the amount of attention this was given in mainstream sources, there would be some sort of revisiting of the prediction. Of course, nothing of the sort occurred.

This thread has been absolutely memory-holed, along with countless other threads and Medium articles from February and March. We might forgive such forays on sites like Twitter and Medium, but feeding frenzies from mainstream sources are also passed over without the media ever revisiting to see how things turned out.

Consider Florida. Gov. Ron DeSantis was castigated for failing to close the beaches during spring break, and critics suggested that the state might be the next New York. I’ve written about this at length elsewhere, but Florida’s new cases peaked in early April, at which point it was a middling state in terms of infections per capita. The virus hasn’t gone away, of course, but the five-day rolling average of daily cases in Florida is roughly where it was in late March, notwithstanding the fact that testing has increased substantially. Taking increased testing into account, the positive test rate has gradually declined since late March as well, falling from a peak of 11.8% on April 1 to a low of 3.6% on May 12.

Notwithstanding this, the Washington Post continues to press stories of public health officials begging state officials to close beaches (a more interesting angle at this point might be why these health officials were so wrong), while the New York Times noted a few days ago (misleadingly, and grossly so) that “Florida had a huge spike in cases around Miami after spring break revelry,” without providing the crucial context that the caseload mimicked increases in other states that did not play host to spring break. Again, perhaps the real story is that spring breakers passed COVID-19 among themselves and seeded it when they got home. I am sure some of this occurred, but it seems exceedingly unlikely that they would have spread it widely among themselves and not also spread it widely to bartenders, wait staff, hotel staff, and the like in Florida.

Florida was also one of the first states to experiment with reopening. Duval County (Jacksonville) reopened its beaches on April 19 to much national skepticism. Yet daily cases are lower today than they were the day it reopened; there was a recent spike in cases associated with increased testing, but it is now receding.

Or consider Georgia, which one prominent national magazine claimed was engaging in “human sacrifice” by reopening. Yet, after nearly a month, a five-day average of Georgia’s daily cases looks like this: [chart omitted]

What about Wisconsin, which was heavily criticized for holding in-person voting? It has had an increased caseload, but that is largely due to increased testing (up almost six-fold since early April) and an idiosyncratic outbreak in its meatpacking plants. The latter is tragic, but it is not related to the election; in fact, a Milwaukee Journal-Sentinel investigation failed to link any cases to the election; this has largely been ignored outside of conservative media sites such as National Review.

We could go on – after being panned for refusing to issue a stay-at-home order, South Dakota indeed suffered an outbreak (once again, in its meatpacking plants), but deaths there have consistently averaged less than three per day, to little fanfare – but the point is made. Some “feeding frenzies” have panned out, but many have failed to do so; rather than acknowledging this failure, the press typically moves on.

This is an unwelcome development, for a few reasons. First, not everyone follows this pandemic closely, and so a failure to follow up on how feeding frenzies end up means that many people likely don’t update their views as often as they should. You’d probably be forgiven if you suspected hundreds of cases and deaths followed the Wisconsin election.

Second, we obviously need to get policy right here, and to be sure, reporting bad news is important for producing informed public opinion. But reporting good news is equally important. Third, there are dangers to forecasting with incredible certitude, especially with a virus that was detected less than six months ago. There really is a lot we still don’t know, and people should be reminded of this. Finally, among people who do remember things like this, a failure to acknowledge errors foments cynicism and further distrust of experts.

The damage done to this trust is dangerous, for at this time we desperately need quality expert opinions and news reporting that we can rely upon.

Addendum:  Tilak Doshi makes the comparison to climate crisis claims Coronavirus And Climate Change: A Tale Of Two Hysterias writing at Forbes.  Excerpts in italics with my bolds.

It did not take long after the onset of the global pandemic for people to observe the many parallels between the covid-19 pandemic and climate change. An invisible novel virus of the SARS family now represents an existential threat to humanity. As does CO2, a colourless trace gas constituting 0.04% of the atmosphere which allegedly serves as the control knob of climate change. Lockdowns are to the pandemic what decarbonization is to climate change. Indeed, lockdowns and decarbonization curtail much the same things, from tourism and international travel to shopping and having a good time. It would seem that Greta Thunberg’s dreams have come true, and perhaps that is why CNN announced on Wednesday that it is featuring her on a coronavirus town-hall panel alongside health experts.

But, beyond being a soundbite and means of obtaining political cover, ‘following the science’ is neither straightforward nor consensual. The diversity of scientific views on covid-19 became quickly apparent in the dramatic flip-flop of the UK government. In the early stages of the spread in infection, Boris Johnson spoke of “herd immunity”, protecting the vulnerable and common sense (à la Sweden’s leading epidemiologist Professor Johan Giesecke) and rejected banning mass gatherings or imposing social distancing rules. Then, an unpublished bombshell March 16th report by Professor Neil Ferguson of Imperial College, London, warned of 510,000 deaths in the country if the country did not immediately adopt a suppression strategy. On March 23, the UK government reversed course and imposed one of Europe’s strictest lockdowns. For the US, the professor had predicted 2.2 million deaths absent similar government controls, and here too, Ferguson’s alarmism moved the federal government into lockdown mode.

Unlike climate change models that predict outcomes over a period of decades, however, it takes only days and weeks for epidemiological model forecasts to be falsified by data. Thus, by March 25th, Ferguson’s predicted half-million fatalities in the UK had been adjusted downward to “unlikely to exceed 20,000”, a reduction by a factor of 25. This drastic reduction was credited to the UK’s lockdown which, however, had been imposed only two days previously, before any social distancing measures could possibly have had enough time to work.

For those engaged in the fraught debates over climate change over the past few decades, the use of alarmist models to guide policy has been a familiar point of contention. Much as Ferguson’s model drove governments to impose Covid-19 lockdowns affecting nearly 3 billion people on the planet, Professor Michael Mann’s “hockey stick” model was used by the IPCC, mass media and politicians to push the man-made global warming (now called climate change) hysteria over the past two decades.

As politicians abdicate policy formulation to opaque expertise in highly specialized fields such as epidemiology or climate science, a process of groupthink emerges as scientists generate ‘significant’ results which reinforce confirmation bias, affirm the “scientific consensus” and marginalize sceptics.

Rather than allocating resources and efforts towards protecting the vulnerable old and infirm while allowing the rest of the population to carry on with their livelihoods with individuals taking responsibility for safe socializing, most governments have opted to experiment with top-down economy-crushing lockdowns. And rather than mitigating real environmental threats such as the use of traditional biomass for cooking indoors that is a major cause of mortality in the developing world or the trade in wild animals, the climate change establishment advocates decarbonisation (read de-industrialization) to save us from extreme scenarios of global warming.

Taking the wheels off entire economies on the basis of wildly exaggerated models is not the way to go.

Footnote: Mark Hemingway shows how commonplace uncorrected media falsehoods have become in his article When Did the Media Stop Running Corrections? Excerpts in italics with my bolds.

Vanity Fair quickly recast Sherman’s story without acknowledging its error: “This post has been updated to include a denial from Blackstone, and to reflect comments received after publication by Charles P. Herring, president of Herring Networks, OANN’s parent company.” In sum, Sherman based his piece on a premise that was wrong, and Vanity Fair merely acted as if all the story needed was a minor update.

Such post-publication “stealth editing” has become the norm. Last month, The New York Times published a story on the allegation that Joe Biden sexually assaulted a former Senate aide. After publication, the Times deleted the second half of this sentence: “The Times found no pattern of sexual misconduct by Mr. Biden, beyond the hugs, kisses and touching that women previously said made them uncomfortable.”

In an interview with Times media columnist Ben Smith, Times’ Executive Editor Dean Baquet admitted the sentence was altered at the request of Biden’s presidential campaign. However, if you go to the Times’ original story on the Biden allegations, there’s no note saying how the story was specifically altered or why.

It’s also impossible not to note how this failure to issue proper corrections and penchant for stealth editing goes hand-in-hand with the media’s ideological preferences.

In the end the media’s refusal to run corrections is a damnable practice for reasons that have nothing to do with Christianity. In an era when large majorities of the public routinely tell pollsters they don’t trust the media, you don’t have to be a Bible-thumper to see that admitting your mistakes promptly, being transparent about trying to correct them, and when appropriate, apologizing and asking for forgiveness – are good secular, professional ethics.


On Following the Science

H/T to Luboš Motl for posting at his blog Deborah Cohen, BBC, and models vs theories. Excerpts in italics with my bolds.

Dr Deborah Cohen is an award-winning health journalist who holds a doctorate – which actually seems to be related to the medical sciences – and who is now working for BBC Newsnight. I think that the 13-minute-long segment above is an excellent piece of journalism.

It seems to me that she primarily sees that the “models” predicting half a million dead Britons have spectacularly failed, and that is something an honest health journalist simply must be interested in. Second, she seems to see through some of the “more internal” defects of bad medical (and not only medical) science. Her PhD almost certainly helps in that. Someone whose background is purely in the humanities or in PR-or-communication gibberish simply shouldn’t be expected to be on par with a real PhD.

So she has talked to the folks at the “Oxford evidence-based medicine” institute and others who understand the defects of “computer models” as the basis of science or policymaking. Unsurprisingly, she is more or less led to the conclusion that the lockdown (in the U.K.) was a mistake.

If your equation – or computer model – assumes that 5% of those who contract the virus die (i.e. the probability is 5% that they die within a week of getting the virus), while the actual case fatality rate is 0.2% – and it is something comparable to that – then your predicted fatality count may be inflated by a factor of 25. It should be common sense that if someone makes a factor-of-25 error in the choice of this parameter, his predictions may be wrong by a factor of 25, too. It doesn’t matter if the computer program looks like SimCity, with 66.666 million Britons represented in the giant RAM of a supercomputer. This brute force obviously cannot compensate for a fundamental ignorance or error in your choice of the fatality rate.
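
To make the arithmetic concrete, here is a minimal sketch (my own illustration, not Ferguson’s actual model; the population figure, attack rate, and both fatality rates are assumptions chosen only to mirror the numbers in the text). A naive projection multiplies infections by the assumed fatality rate, so an error in that one parameter passes straight through to the forecast:

```python
def projected_deaths(population, attack_rate, fatality_rate):
    """Naive projection: people infected times probability an infected person dies."""
    return population * attack_rate * fatality_rate

POPULATION = 66_600_000   # roughly the 66.666 million Britons mentioned above
ATTACK_RATE = 0.60        # assumed fraction of the population eventually infected

high = projected_deaths(POPULATION, ATTACK_RATE, 0.05)   # 5% assumed fatality rate
low = projected_deaths(POPULATION, ATTACK_RATE, 0.002)   # 0.2% assumed fatality rate

print(round(high))      # roughly 2 million with the 5% assumption
print(round(low))       # roughly 80 thousand with the 0.2% assumption
print(round(high / low))  # the factor-of-25 parameter error reappears in the output
```

No amount of computing power changes this: the forecast is linear in the fatality rate, so whatever error is in that input is exactly the error in the output.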

I would think that most 3-year-old kids get this simple point. Nevertheless, most adults seem to be completely braindead today, and they don’t get it. When they are told that something was calculated by a computer, they worship the predictions. They don’t ask “whether the program was based on a realistic or scientifically supported theory”. Just the brute power of a pile of silicon seems to amaze them.

So we always agreed, e.g. with Richard Lindzen, that an important part of the degeneration of climate science was the drift away from proper “theory” towards “modeling”. A scientist may lean more towards doing experiments – finding facts and measuring parameters with her hands (and much of experimental climate science remained OK; after all, Spencer and Christy are still measuring the temperature by satellites etc.) – or be a theorist, for whom the brain is (even) more important than for the experimenter. The experimenters have sort of continued to do their work. However, it is mainly the “theorists” who have hopelessly degenerated in climate science, under the influence of toxic ideology, politics, and corruption.

The real problem is that proper theorists – those who actually understand the science, can solve basic equations in their heads, and are aware of all the intricacies in the process of finding the right equations, the equivalence and inequivalence of equations, universal behavior, statistical effects etc. – were replaced by “modelers”, i.e. people who don’t really have a clue about science, who write computer-game-like code, worship their silicon, and mindlessly promote whatever comes out of this computer game. It is a catastrophe for the field – and the same was obviously happening in “theoretical epidemiology”, too.

“Models” and “good theory” aren’t just orthogonal. The culture of “models” is actively antiscientific because it comes with the encouragement to mindlessly trust what happens in computer games. This isn’t just “different and independent from” the genuine scientific method; it directly contradicts the scientific method. In science, you can’t ever mindlessly trust something just because expensive hardware was used or a high number of operations was performed by the CPU. These things are really negative for the trustworthiness and expected accuracy of the science, not positive. In science, you want to make things as simple as possible (because the proliferation of moving parts increases the probability of glitches) but not simpler; and you want to solve the maximum fraction of the issues analytically, not numerically or by a “simulation”.

Science is a systematic framework to figure out which statements about Nature are correct and which are incorrect.

And according to quantum mechanics, the truth values of propositions must be probabilistic. Quantum mechanics only predicts the “similarity [of propositions] to the truth” which is the translation of the Czech word for probability (pravděpodobnost).

It is the truth values (or probabilities) that matter in science – the separation of statements to right and wrong ones (or likely and unlikely ones). Again, I think that I am saying something totally elementary, something that I understood before I was 3 and so did many of you. But it seems obvious that the people who need to ask whether Leo’s or Stephen’s pictures are “theories of everything” must totally misunderstand even this basic point – that science is about the truth, not just representation of objects.

See also: The Deadly Lockdowns and Covid19 Linked to Affluence

Footnote:  Babylon Bee Has Some Fun with this Topic.

‘The Science On Climate Change Is Settled,’ Says Man Who Does Not Believe The Settled Science On Gender, Unborn Babies, Economics

PORTLAND, OR—Local man Trevor J. Gavyn pleaded with his conservative coworker to “believe the science on climate change,” though he himself does not believe the science on the number of genders there are, the fact that unborn babies are fully human, and that socialism has failed every time it has been tried.

“It’s just like, the science is settled, man,” he said in between puffs on his vape. “We just need to believe the scientists and listen to the experts here.”

“Facts don’t care about your feelings on the climate, bro,” he added, though he ignores the fact that there are only two biological genders. He also hand-waves away the science that an unborn baby is 100% biologically human the moment it is conceived and believes economics is a “conservative hoax foisted on us by the Illuminati and Ronald Reagan.”

“That whole thing is, like, a big conspiracy, man,” he said.

The conservative coworker, for his part, said he will trust the science on gender, unborn babies, and economics while simply offering “thoughts and prayers” for the climate.

Jimbob Does Coronavirus

Humor is important as a means of poking holes in narratives that assert beliefs contrary to reality. Jimbob has become a force skewering notions of climate change, as well as other distorted ideas comprising the “woke” PC canon. Those inside the believer bubble will not be affected, but the important audience is those who are ignorant of or agnostic about the so-called “progressive, post-modern agenda.”

A previous post Best Cartoons Madebyjimbob provided an introduction to this artist, along with his point of view.  This post presents his more recent images related to present pandemic foibles.

Another Way Carbon Makes Life Better


With wall thicknesses of about 160 nanometers, a closed-cell, plate-based nanolattice structure designed by researchers at UCI and other institutions is the first experimental verification that such arrangements reach the theorized limits of strength and stiffness in porous materials. Credit: Cameron Crook and Jens Bauer / UCI

Brian Bell, University of California, Irvine, writes at phys.org announcing a new way that carbon will serve humanity in years to come.  It’s another example of scientific progress making human life better. The article is Team designs carbon nanostructure stronger than diamonds. Excerpts in italics with my bolds.

Researchers at the University of California, Irvine and other institutions have architecturally designed plate-nanolattices—nanometer-sized carbon structures—that are stronger than diamonds as a ratio of strength to density.

In a recent study in Nature Communications, the scientists report success in conceptualizing and fabricating the material, which consists of closely connected, closed-cell plates instead of the cylindrical trusses common in such structures over the past few decades.

“Previous beam-based designs, while of great interest, had not been so efficient in terms of mechanical properties,” said corresponding author Jens Bauer, a UCI researcher in mechanical & aerospace engineering. “This new class of plate-nanolattices that we’ve created is dramatically stronger and stiffer than the best beam-nanolattices.”

According to the paper, the team’s design has been shown to improve on the average performance of cylindrical beam-based architectures by up to 639 percent in strength and 522 percent in rigidity.

Members of the architected materials laboratory of Lorenzo Valdevit, UCI professor of materials science & engineering as well as mechanical & aerospace engineering, verified their findings using a scanning electron microscope and other technologies provided by the Irvine Materials Research Institute.

Bauer said the team’s achievement rests on a complex 3-D laser printing process called two-photon lithography direct laser writing. As an ultraviolet-light-sensitive resin is added layer by layer, the material becomes a solid polymer at points where two photons meet. The technique is able to render repeating cells that become plates with faces as thin as 160 nanometers.

One of the group’s innovations was to include tiny holes in the plates that could be used to remove excess resin from the finished material. As a final step, the lattices go through pyrolysis, in which they’re heated to 900 degrees Celsius in a vacuum for one hour. According to Bauer, the end result is a cube-shaped lattice of glassy carbon that has the highest strength scientists ever thought possible for such a porous material.

Nanolattices hold great promise for structural engineers, particularly in aerospace, because it’s hoped that their combination of strength and low mass density will greatly enhance aircraft and spacecraft performance.

Other co-authors on the study were Anna Guell Izard, a UCI graduate student in mechanical & aerospace engineering, and researchers from UC Santa Barbara and Germany’s Martin Luther University of Halle-Wittenberg. The project was funded by the Office of Naval Research and the German Research Foundation.

Footnote: This material adds to the many ways our lives are already enriched by carbon-based materials.