Climate Delusional Disorder (CDD)

 

WebMD tells what you need to know about this condition in Delusions and Delusional Disorder. Excerpts in italics with my bolds.

Delusions are the main symptom of delusional disorder. They’re unshakable beliefs in something that isn’t true or based on reality. But that doesn’t mean they’re completely unrealistic. Delusional disorder involves delusions that aren’t bizarre, having to do with situations that could happen in real life, like being followed, poisoned, deceived, conspired against, or loved from a distance. These delusions usually involve mistaken perceptions or experiences. But in reality, the situations are either not true at all or highly exaggerated.

People with delusional disorder often can continue to socialize and function normally, apart from the subject of their delusion, and generally do not behave in an obviously odd or bizarre manner. This is unlike people with other psychotic disorders, who also might have delusions as a symptom of their disorder. But in some cases, people with delusional disorder might become so preoccupied with their delusions that their lives are disrupted.

What Are the Complications of Delusional Disorder?

  • People with delusional disorder might become depressed, often as the result of difficulties associated with the delusions.
  • Acting on the delusions also can lead to violence or legal problems. For example, a person with an erotomanic delusion who stalks or harasses the object of the delusion could be arrested.
  • Also, people with this disorder can become alienated from others, especially if their delusions interfere with or damage their relationships.

Treatment most often includes medication and psychotherapy (a type of counseling). Delusional disorder can be very difficult to treat, in part because those who have it often have poor insight and do not know there’s a psychiatric problem. Studies show that close to half of patients treated with antipsychotic medications show at least partial improvement.

Delusional disorder is typically a chronic (ongoing) condition, but when properly treated, many people can find relief from their symptoms. Some recover completely, while others have bouts of delusional beliefs with periods of remission (lack of symptoms).

Unfortunately, many people with this disorder don’t seek help. It’s often hard for people with a mental disorder to know they aren’t well. Or they may credit their symptoms to other things, like the environment. They also might be too embarrassed or afraid to seek treatment. Without treatment, delusional disorder can be a lifelong illness.

An example of CDD

H. Sterling Burnett and James Taylor write at Epoch Times in United Nations Misleads About Food Production and Climate Change. Excerpts in italics with my bolds.

There is no better way to describe the arguments contained in the U.N. Intergovernmental Panel on Climate Change’s (IPCC) new report, “Climate Change and Land,” released just in time to influence discussions at the United Nations’ 68th Civil Society Conference. Citing anecdotal evidence instead of hard data, IPCC’s new report paints a dark, disturbing picture about the current and future state of crop production and food availability.

“Climate change, including increases in frequency and intensity of extremes, has adversely impacted food security and terrestrial ecosystems as well as contributed to desertification and land degradation in many regions,” the report claims.

“Warming compounded by drying has caused yield declines in parts of Southern Europe. Based on indigenous and local knowledge, climate change is affecting food security in drylands, particularly those in Africa, and high mountain regions of Asia and South America,” the report continues.

Here, climate alarmists in the United Nations are doing nothing more than “pounding the table,” hoping fear will drive the public to demand “climate action now!”

Of course, the fake news media eagerly amplified the alarmist report. For example, an Aug. 8 NBC News headline reads, “Climate change could trigger a global food crisis, new U.N. report says.” Many other major media outlets published similar stories.

The biggest problem is the report’s thesis and “facts” are totally wrong—and that’s quite a problem!

For instance, the United Nations’ own data shows farmers throughout the world are setting new production records virtually every year. In fact, the U.N. Food and Agriculture Organization reports new records were set in each of the past five years for global cereal production, which is composed of the Big Three food staples: corn, wheat, and rice.

Indeed, World-Grain.com reports that in 2016 world cereal production broke records for the third straight year, exceeding the previous record yield, recorded in 2015, by 1.2 percent and topping the record yield in 2014 by 1.5 percent. These facts should not surprise anyone because hundreds of studies and experiments conclusively demonstrate plants do better under conditions of higher carbon dioxide and modestly warmer temperatures.

The ongoing record crop production perfectly illustrates the difference between the Climate Delusion perpetrated by IPCC and other government-funded alarmists and what is actually happening in the real world. To make the news gloomy, IPCC’s report nefariously engages in semantic tricks to give readers a false impression of declining global crop production. The report cites anecdotal evidence crop yields are declining in “parts” of Southern Europe, ignoring copious data showing crop yields are rising across the globe, including throughout Southern Europe.

Instead of highlighting this welcome development, IPCC focuses on what it claims are yield reductions in some small regions of Southern Europe. Readers who are not paying close attention will be led to believe, incorrectly, that crop yields are declining throughout Southern Europe. In reality, the exact opposite is true!

IPCC claims “indigenous and local knowledge” indicates food production is declining “in drylands” in Africa, Asia, and South America. However, such indigenous and local knowledge does not trump objective data, which are readily available to IPCC’s authors and show crop yields are increasing throughout Africa, Asia, and South America as a whole, including in dryland areas.

Tragically, IPCC’s misleading claims result in people who dare to point out crop production continues to set new records being accused of “denying” climate change and attacking science. Climate change is real and record crop production is in fact consistent with it. In fact, record crop production is partly due to climate change.

This is just the latest example of the ongoing Climate Delusion, as radical environmental activists, government bureaucrats, socialists, and a biased news media, looking to transform U.S. society, repeatedly make ridiculous climate claims with no basis in real environmental conditions. They hope the constant drumbeat of authoritative-sounding claims will fool people into stampeding politicians to give governments more power over the economy to combat the false climate crisis.

Fortunately, we can avoid this fate. Factual data showing the truth about global food supplies and other climate conditions are readily available to anyone willing to search the internet. Let’s hope the public accesses the facts. Enacting policies that restrict the use of abundant energy supplies will rob people of choice and harm the economy. This won’t hurt the global elite, but it will result in everyone else living poorer, more precarious lives.

See also Alarmists Anonymous


Which Comes First: Story or Facts?


Facts vs Stories is written by Steven Novella at Neurologica. Excerpts in italics with my bolds.

There is a common style of journalism, that you are almost certainly very familiar with, in which the report starts with a personal story, then delves into the facts at hand often with reference to the framing story and others like it, and returns at the end to the original personal connection. This format is so common it’s a cliche, and often the desire to connect the actual new information to an emotional story takes over the reporting and undermines the facts.

This format reflects a more general phenomenon – that people are generally more interested in and influenced by a good narrative than by dry facts. Or are we? New research suggests that while the answer is still generally yes, there is some more nuance here (isn’t there always?). The researchers did three studies in which they compared the effects of strong vs weak facts presented either alone or embedded in a story. In the first two studies the information was about a fictitious new phone. The weak fact was that the phone could withstand a fall of 3 feet. The strong fact was that the phone could withstand a fall of 30 feet. What they found in both studies is that the weak fact was more persuasive when presented embedded in a story than alone, while the strong fact was less persuasive.

They then did a third study about a fictitious flu medicine, and asked subjects if they would give their e-mail address for further information. People are generally reluctant to give away their e-mail address unless it’s worth it, so this was a good test of how persuasive the information was. When a strong fact about the medicine was given alone, 34% of the participants were willing to provide their e-mail. When embedded in a story, only 18% provided their e-mail.  So, what is responsible for this reversal of the normal effect that stories are generally more persuasive than dry facts?

The authors suggest that stories may impair our ability to evaluate factual information.

This is not unreasonable, and is suggested by other research as well. To a much greater extent than you might think, cognition is a zero-sum game. When you allocate resources to one task, those resources are taken away from other mental tasks (this basic process is called “interference” by psychologists). Further, adding complexity to brain processing, even if this leads to more sophisticated analysis of information, tends to slow down the whole process. And also, parts of the brain can directly suppress the functioning of other parts of the brain. This inhibitory function is actually a critical part of how the brain works together.

Perhaps the most dramatic relevant example of this is a study I wrote about previously in which fMRI scans were used to study subjects listening to a charismatic speaker who either was or was not from the subject’s religion. When a charismatic speaker who matched the subject’s religion was speaking, the critical thinking part of the brain was literally suppressed. In fact this study also found opposite effects depending on context.

The contrast estimates reveal a significant increase of activity in response to the non-Christian speaker (compared to baseline) and a massive deactivation in response to the Christian speaker known for his healing powers. These results support recent observations that social categories can modulate the frontal executive network in opposite directions corresponding to the cognitive load they impose on the executive system.

So when listening to speech from a belief system we don’t already believe, we engaged our executive function. When listening to speech from within our existing belief system, we suppressed our executive function.

In regards to the current study, is something similar going on? Does processing the emotional content of stories impair our processing of factual information, which is a benefit for weak facts but actually a detriment to the persuasive power of strong facts that are persuasive on their own?

Another potential explanation occurs to me, however (showing how difficult it can be to interpret the results of psychological research like this). It is a reasonable premise that a strong fact is more persuasive on its own than a weak fact – being able to survive a 3-foot fall is not as impressive as a 30-foot fall. But the more impressive fact may also trigger more skepticism. I may simply not believe that a phone could survive such a fall. If that fact, however, is presented in a straightforward fashion, it may seem somewhat credible. If it is presented as part of a story that is clearly meant to persuade me, that might trigger more skepticism. In fact, doing so is inherently sketchy. The strong fact is impressive on its own; why are you trying to persuade me with this unnecessary personal story, unless the fact is BS?

There is also research to support this hypothesis. When a documentary about a fringe topic, like UFOs, includes the claim that, “This is true,” that actually triggers more skepticism. It encourages the audience to think, “Wait a minute, is this true?” Meanwhile, including a scientist who says, “This is not true,” may actually increase belief, because the audience is impressed that the subject is being taken seriously by a scientist, regardless of their ultimate conclusion. But the extent of such backfire effects remains controversial in psychological research – it appears to be very context dependent.

I would summarize all this by saying that – we can identify psychological effects that relate to belief and skepticism. However, there are many potential effects that can be triggered in different situations, and interact in often complex and unpredictable ways. So even when we identify a real effect, such as the persuasive power of stories, it doesn’t predict what will happen in every case. In fact, the net statistical effect may disappear or even reverse in certain contexts, because it is either neutralized or overwhelmed by another effect. I think that is what is happening here.

What do you do when you are trying to be persuasive, then? The answer has to be: it depends. Who is your audience? What claims or facts are you trying to get across? What is the ultimate goal of the persuasion (public service, education, political activism, marketing)? I don’t think we can generate any solid algorithm, but we do have some guiding rules of thumb.

First, know your audience, or at least those you are trying to persuade. No message will be persuasive to everyone.

If the facts are impressive on their own, let them speak for themselves. Perhaps put them into a little context, but don’t try to wrap them up in an emotional story. That may backfire.

Depending on context, your goal may be to not just provide facts, but to persuade your audience to reject a current narrative for a better one. In this case the research suggests you should both argue against the current narrative, and provide a replacement that provides an explanatory model.

So you can’t just debunk a myth, conspiracy theory, or misconception. You need to provide the audience with another way to make sense of their world.

When possible find common ground. Start with the premises that you think most reasonable people will agree with, then build from there.

Now, it’s not my goal to outline how to convince people of things that are not true, or that are subjective but in your personal interest. That’s not what this blog is about. I am only interested in persuading people to apportion their belief to the logic and evidence. So I am not going to recommend ways to avoid triggering skepticism – I want to trigger skepticism. I just want it to be skepticism based on science and critical thinking, not emotional or partisan denial, nihilism, cynicism, or just being contrarian.

You also have to recognize that it can be difficult to persuade people. This is especially true if your message is constrained by facts and reality. Sometimes the real information is not optimized for emotional appeal, and it has to compete against messages that are so optimized (and are unconstrained by reality). But at least knowing the science about how people process information and form their beliefs is useful.

Postscript:  Hans Rosling demonstrates how to use data to tell the story of our rising civilization.

Bottom Line:  When it comes to science, the rule is to follow the facts.  When the story is contradicted by new facts, the story changes to fit the facts, not the other way around.

See also:  Data, Facts and Information

Yes, Global Warming is a matter of opinion in Canada

Canada Survey Mostly Human

The map above shows the results of a survey in 2015 to measure the distribution of public opinion regarding global warming. A previous post is reprinted below explaining the methods. This post is about the media ruckus due to Elections Canada reminding environmental activists that climate advocacy during the upcoming Parliamentary campaign could be partisan politicking. From the Star:

Ghislain Desjardins, a spokesman for Elections Canada, confirmed in an interview with me on Monday that yes, environmental groups were warned in a recent webinar that what they see as a fact — climate change — could become seen as a matter of mere belief in the heat of an election campaign. That’s a real possibility, since Bernier has used social media to muse along those lines in the past.

Elections Canada stresses that no one is gagging the environmentalists from stating the facts on climate change before or during the campaign. But if the existence of climate change becomes an election issue, some charities will have to be very careful about what they say in any advertising. Otherwise, they may be forced to register as “third parties” in the campaign, which could put their charitable status at risk.

Beliefs, however, aren’t the same as facts. That distinction is going to be important, if not crucial in this fall’s campaign — on climate change, but also on potentially hot topics such as immigration or refugee policy.

Thanks to Elections Canada and a warning it recently delivered to environmental activists, we’re seeing just how shaky the ground may get between facts and beliefs when the official campaign gets under way in a few weeks.

As the map above shows, it is a minority in most of Canada thinking that the earth is warming due mostly to human activity.  Below is a post explaining how this finding was obtained.

Update August 20, 2019

See also Lorrie Goldstein writing in the Toronto Sun: For climate alarmists ‘free speech’ exists only for them
Ironically, in 2015 the environmental charity, Ecojustice, urged Canada’s Competition Bureau, on behalf of six “prominent” Canadians, including former Ontario NDP leader and UN ambassador Stephen Lewis, to investigate Friends of Science, the International Climate Science Coalition and the Heartland Institute for climate denial.

A woman walks past a map showing the elevation of the sea in the last 22 years during the World Climate Change Conference 2015 near Paris. A new study asked 5,000 Canadians their opinions on the cause of climate change. (Stephane Mahe/Reuters)

As a Canadian living near Montreal, I was of course curious about this survey:
The distribution of climate change public opinion in Canada
Mildenberger et al. 2015 (here)

CBC created some controversy by switching headlines on its report of the findings.
First the title was:
Climate change: Majority of Canadians don’t believe it’s caused by humans
Then it changed to:
Canadians divided over human role in climate change, study suggests

I’m wondering what really was learned from this survey.

What Was Asked and Answered

With any survey, it is important to look at the actual questions asked and answered. While we do not have access to specific responses, the script for the telephone interviews is available. The first two questions asked about global warming (not climate change).

Survey Questionnaire

1. “From what you’ve read and heard, is there solid evidence that the average temperature on earth has been getting warmer over the past four decades?”
Yes
No
Don’t Know (volunteered)

2. [If yes, solid evidence] “Is the earth getting warmer mostly because of human activity such as burning fossil fuels or mostly because of natural patterns in the earth’s environment?”

Human Activity
Natural Patterns
Combination (volunteered)
Not sure / Refused (volunteered)

The finding reported in the Study:

Our results reveal, for the first time, the enormous diversity of Canadian climate and energy opinions at the local level.

At the national level, 79% of Canadians believe climate change is happening but only 44% think climate change is caused mostly by human activities.

So the 79% who said there’s solid evidence of warming over the last 40 years got a follow-up question: mostly caused by human activity or mostly natural? Slightly more than half said mostly human, thus a result of 44% believing both that it is warming and that humans are mostly to blame.

Now some people were unwilling to decide between mostly human and mostly natural, and volunteered that it was a combination. This fraction of respondents was recorded as partially human caused, and they added 17% to bring the number up to 61%. The remaining 39% combines people who don’t accept evidence on warming and those who think warming is mostly natural or are uncertain about both issues.
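
To see how these figures fit together, here is a minimal arithmetic sketch in Python; it only restates the percentages quoted above, it is not new data.

```python
# Reconstructing the headline numbers quoted from the survey report.

total = 100.0          # all respondents, in percent

warming_yes = 79.0     # said there is solid evidence of warming
mostly_human = 44.0    # said warming is mostly caused by human activity

# Share of the "warming is happening" group who answered "mostly human"
share_of_believers = mostly_human / warming_yes
print(f"{share_of_believers:.1%} of those accepting warming chose 'mostly human'")   # about 55.7%

combination = 17.0     # volunteered "combination", counted as partially human-caused
human_or_partial = mostly_human + combination
print(f"Mostly plus partially human-caused: {human_or_partial:.0f}%")                # 61%

remainder = total - human_or_partial
print(f"No warming, mostly natural, or unsure: {remainder:.0f}%")                    # 39%
```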

From having done opinion surveys in the past, I suspect that many who were uncertain between human or natural causes didn’t want to say “don’t know”, and instead said it was a “combination”. Thus the group counted as “partially human-caused” is a soft number.

My suspicions are reinforced by a question that was asked and not included in this report: “How much do you feel you know about global warming?” Typically about 25% say they know a lot, 60% say they know a little, and the rest less than a little. As we know from other researchers, more climate knowledge increases skepticism for many, so it is likely the soft number includes many who feel they really don’t know.

This process does determine a survey result about the size of the population who believes warming is happening and mostly caused by humans.  Everything else is subject to interpretation, including how much is due to land use, urbanization or fossil fuel emissions.  The solid finding is displayed in the diagram below:

Canada Survey Mostly Human

Yes, the map shows I am living in a hotbed of global warming believers around Montreal; at 55%, it is as high as it gets in Canada.

Responses on Carbon Pricing
Now consider the script for the last two questions, on policy options:

3. “There is a proposed system called cap and trade where the government issues permits limiting the amount of greenhouse gases companies can put out. If a company exceeds their limit, they will have to buy more permits. If they don’t use all of their permits, they will be able to sell or trade them to others who exceed their cap. The idea is that companies will find ways to put out less greenhouse gases because that would be cheaper than buying permits.

Do you strongly support, somewhat support, somewhat oppose or strongly oppose this type of system for your province?”

Strongly support
Somewhat support
Somewhat oppose
Strongly oppose
Not sure / Refused (volunteered)

4. “Another way to lower greenhouse gas emissions is to increase taxes on carbon based fuels such as coal, oil, gasoline and natural gas. Do you strongly support, somewhat support, somewhat oppose or strongly oppose this type of system?”

Strongly support
Somewhat support
Somewhat oppose
Strongly oppose
Not sure / Refused (volunteered)

And the finding is (from the report):
Despite this variation in core beliefs about climate change, we find widespread public support for climate policies. Support is greatest and most consistent for emissions trading. . . The overall pattern is clear: there is majority support for emissions trading in every Canadian district.

We find larger variation in support for a carbon tax across the country. At the national level, support for carbon taxation at 49% is just below a majority, with opposition at 44%.

Now here is the underlying motivation for the survey: to determine the level of support in the Canadian population for government action to increase the price of carbon-based energy. Not surprisingly, people who mostly know only a little about this like the sound of companies footing the bill for policies more than the government raising their taxes. With a little more knowledge they will understand that cap and trade increases the cost of energy within all of the products and services they use, and therefore raises the price of pretty much everything. It is a hidden tax completely without accountability.

I described in some detail how this is already at work in Quebec by virtue of the province joining California’s carbon market: https://rclutz.wordpress.com/2015/04/15/quebec-joins-california-carbon-market/

Conclusion

No one should be surprised that those conducting this survey think they know the correct answers and want the population to agree with them. The sponsors include numerous organizations advocating for carbon pricing:

Thanks to the Social Sciences and Humanities Research Council of Canada, the Fonds de Recherche du Québec – Société et Culture, the Skoll Global Threats Fund, the Energy Foundation, and the Grantham Foundation for the Protection of the Environment for financial support. Funding for individual survey waves was provided by the Ministère des Relations internationales et de la Francophonie, the Public Policy Forum, Sustainable Prosperity, Canada 2020, l’Institut de l’énergie Trottier and la Chaire d’études politiques et économiques américaines.

And as we have seen with virtually all marketing-type surveys, opinion-makers know that conducting surveys is itself an intervention to raise awareness and concern about the issue.

Footnote:

Participants were asked in 2015: “From what you’ve read and heard, is there solid evidence that the average temperature on earth has been getting warmer over the past four decades?”

(Graph: UAH lower-troposphere temperatures since 1995)

Looks to me that the evidence for warming in the first 20 years was solid, but the evidence since 1995 is not.

On Stable Electric Power: What You Need to Know

nzrobin commented on my previous post Big Wind Blacklisted that he had more to add. So this post provides excerpts from a 7-part series Anthony wrote at kiwithinker on Electric Power System Stability. Excerpts are in italics with my bolds to encourage you to go read the series of posts at kiwithinker.

1. Electrical Grid Stability is achieved by applying engineering concepts of power generation and grids.

Some types of generation provide grid stability, other types undermine it. Grid stability is an essential requirement for power supply reliability and security. However, there is insufficient understanding of what grid stability is and the risk that exists if stability is undermined to the point of collapse. Increasing grid instability will lead to power outages. The stakes are very high.

2. Electric current is generated ‘on demand’. There is no stored electric current in the grid.

The three fundamental parts of a power system are:

its generators, which make the power,
its loads, which use the power, and
its grid, which connects them together.

The electric current delivered when you turn on a switch is generated from the instant you operate the switch. There is no store of electric current in the grid. Only certain generators can provide this instant ‘service’.

So if there is no storage in the grid the amount of electric power being put into the grid has to very closely match that taken out. If not, voltage and frequency will move outside of safe margins, and if the imbalance is not corrected very quickly it will lead to voltage and frequency excursions resulting in damage or outages, or both.

3. A stable power system is one that continuously responds to power/frequency disturbances and completes the required compensating adjustments within an acceptable timeframe.

Voltage is an important performance indicator and it should of course be kept within acceptable tolerances. However voltage excursions tend to be reasonably local events. So while voltage excursions can happen from place to place and they cause damage and disruption, it turns out that voltage alone is not the main ‘system wide’ stability indicator.

The key performance indicator of an acceptably stable power system is its frequency being within a close margin from its target value, typically within 0.5 Hz from either 50 Hz or 60 Hz, and importantly, the rise and fall rate of frequency deviations need to be managed to achieve that narrow window.

An increasing frequency indicates more power is entering the system than is being taken out. Likewise, a reducing frequency indicates more is being taken out than is entering. For a power supply system to be stable it is necessary to control the frequency. Control systems continuously observe the frequency, and the rate of change of the frequency. The systems control generator outputs up or down to restore the frequency to the target window.

Of course energy imbalances of varying size are occurring all the time. Every moment of every day the load is continuously changing, generally following a daily load curve. These changes tend to be gradual and lead to a small rate of change of frequency. Now and then however faults occur. Large power imbalances mean a proportionately faster frequency change occurs, and consequently the response has to be bigger and faster, typically within two or three seconds if stability is to be maintained. If not – in a couple blinks of an eye the power is off – across the whole grid.

If the system can cope with the range of disturbances thrown at it, it is described as ‘stable’. If it cannot cope with the disturbances it is described as ‘unstable’.

4. There are two main types of alternating current machines used for the generation of electricity: synchronous and asynchronous. The difference between them begins with the way the magnetic field of the rotor interacts with the stator. Both types of machine can be used as either a generator or motor.

There are two key differences affecting their contribution to stability.

The kinetic energy of the synchronous machine’s rotor is closely coupled to the power system and therefore available for immediate conversion to power. The rotor kinetic energy of the asynchronous machine is decoupled from the system by virtue of its slip and is therefore not easily available to the system.

Synchronous generators are controllable by governors which monitor system frequency and adjust prime mover input to bring correction to frequency movements. Asynchronous generators are typically used in applications where the energy source is not controllable, eg: wind turbines. These generators cannot respond to frequency movements representing a system energy imbalance. They are instead a cause of energy imbalance.

Short-term stability

The spinning kinetic energy in the rotors of the synchronous machines is measured in megawatt seconds. Synchronous machines provide stability under power system imbalances because the kinetic energy of their rotors (and prime movers) is locked in synchronism with the grid through the magnetic field between the rotor and the stator. The provision of this energy is essential to short duration stability of the power system.

Longer-term stability

Longer term stability is managed by governor controls. These devices monitor system frequency (recall that the rate of system frequency change is proportional to energy imbalance) and automatically adjust machine power output to compensate for the imbalance and restore stability.

5. For a given level of power imbalance the rate of rise and fall of system frequency is directly dependent on synchronously connected angular momentum.

The rotational form of Newton’s second law of motion (Force = Mass * Acceleration) describes the power flow between the rotating inertia (rotational kinetic energy) of a synchronous generator and the power system. It applies for the first few seconds after the onset of a disturbance, i.e. before the governor and prime mover have had opportunity to adjust the input power to the generator.

Pm – Pe = M * dw/dt

Pm is the mechanical power being applied to the rotor by the prime mover. We consider this is a constant for the few seconds that we are considering.

Pe is the electrical power being taken from the machine. This is variable.

M is the angular momentum of the rotor and the directly connected prime mover. We can also consider M a constant, although strictly speaking it isn’t constant because it depends on w. However as w is held within a small window, M does not vary more than a percent or so.

dw/dt is the rate of change of rotor speed, which relates directly to the rate of increasing or reducing frequency.

The machine is in equilibrium when Pm = Pe. This results in dw/dt being 0, which represents the rotor spinning at a constant speed. The frequency is constant.

When electrical load has been lost, Pe is less than Pm and the machine will accelerate, resulting in increasing frequency. Alternatively, when electrical load is added, Pe is greater than Pm and the machine will slow down, resulting in reducing frequency.

Here’s the key point: for a given level of power imbalance, the rate of rise and fall of system frequency is directly dependent on synchronously connected angular momentum, M.

It should now be clear how central a role that synchronously connected angular momentum plays in power system stability. It is the factor that determines how much time generator governors and automatic load shedding systems have to respond to the power flow variation and bring correction.
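
As a rough numerical illustration of that point, here is a minimal sketch in Python of the relationship Pm – Pe = M * dw/dt; the inertia, generation, and load figures are assumptions chosen only to show how a power deficit translates into a frequency fall rate.

```python
# A minimal sketch of Pm - Pe = M * dw/dt treated at the system level.
# With M expressed in MW·s per Hz, the frequency change rate is df/dt = (Pm - Pe) / M.
# All numbers are assumptions chosen only for illustration.

M = 1000.0     # synchronously connected angular momentum, MW·s/Hz (assumed)
Pm = 3400.0    # total mechanical power driving the synchronous generators, MW (assumed)
Pe = 3600.0    # total electrical load drawn from the grid, MW (assumed)

f = 50.0       # nominal frequency, Hz
dt = 1.0       # time step, s

# Integrate for a few seconds with Pm held constant (no governor response yet).
for t in range(1, 6):
    dfdt = (Pm - Pe) / M     # -0.2 Hz/s while 200 MW more is taken out than put in
    f += dfdt * dt
    print(f"t = {t} s   f = {f:.2f} Hz")
```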

 

6. Generation Follows Demand. The machine governor acts in the system as a feedback controller. The governor’s purpose is to sense the shaft rotational speed, and the rate of speed increase/decrease, and to adjust machine input via a gate control.

The governor’s job is to continuously monitor the rotational speed w of the shaft and the rate of change of shaft speed dw/dt, and to control the gate(s) to the prime mover. In the example below, a hydro turbine, the control applied is to adjust the flow of water into the turbine, increasing or reducing the mechanical power Pm to compensate for the increase or reduction in electrical load, i.e. to approach equilibrium.

It should be pointed out that while the control systems aim for equilibrium, true equilibrium is never actually achieved. Disturbances are always happening and they have to be compensated for continuously, every second of every minute of every hour, 24 hours a day, 365 days a year, year after year.

The discussion has been for a single synchronous generator, whereas of course the grid has hundreds of generators. In order for each governor-controlled generator to respond fairly and proportionately to a network power imbalance, governor control is implemented with what is called a ‘droop characteristic’. Without a droop characteristic, governor-controlled generators would fight each other, each trying to control the frequency to its own setting. A droop characteristic provides a controlled increase in generator output, in inverse proportion to a small drop in frequency.
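
As a small illustration of how a droop characteristic shares a correction among machines, here is a minimal sketch in Python; the ratings, droop settings, and frequency dip are assumptions for illustration, not data from any real system.

```python
# A minimal droop-sharing sketch: each governor raises its generator's output in
# proportion to the frequency drop and in inverse proportion to its droop setting.
# Ratings, droop values and the frequency dip are assumptions for illustration.

f_nom = 50.0    # nominal frequency, Hz
f_now = 49.9    # frequency after a load increase, Hz (assumed)

generators = [
    {"name": "Hydro A",   "rating_mw": 300.0, "droop": 0.04},   # 4% droop
    {"name": "Thermal B", "rating_mw": 600.0, "droop": 0.05},   # 5% droop
]

total_pickup = 0.0
for g in generators:
    df_pu = (f_nom - f_now) / f_nom                   # per-unit frequency deviation
    delta_p = df_pu / g["droop"] * g["rating_mw"]     # extra output called for by the governor
    total_pickup += delta_p
    print(f'{g["name"]}: picks up {delta_p:.1f} MW')

print(f"Total pickup at {f_now} Hz: {total_pickup:.1f} MW")
```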

In New Zealand the normal operational frequency band is 49.8 to 50.2 Hz. An under frequency event is an event where the frequency drops to 49.25 Hz. It is the generators controlled by governors with a droop characteristic that pick up the load increase and thereby maintain stability. If it happens that the event is large and the governor response is insufficient to arrest the falling frequency, under frequency load shedding relays turn off load.

Here is a record of an under frequency event earlier this month, where a power station tripped.

The generator tripped at point A which started the frequency drop. The rate of drop dw/dt is determined by size of the power imbalance divided by the synchronous angular momentum (Pm – Pe)/M. In only 6 seconds the frequency drop was arrested at point B by other governor controlled generators and under frequency load shedding, and in about 6 further seconds additional power is generated, once again under the control of governors, and the frequency was restored to normal at point C. The whole event lasting merely 12 seconds.

So why would we care about a mere 12-second dip in frequency of less than 1 Hz? The reason is that without governor action and under-frequency load shedding, a mere 12-second dip would instead be a complete power blackout of the North Island of New Zealand.

Local officials standing outside a substation in Masterton, NZ.

7. An under-frequency event on the North Island of New Zealand demonstrates how critical electrical system stability is.

The graph below which is based on 5 minute load data from NZ’s System Operator confirms that load shedding occurred. The North Island load can be seen to drop 300 MW, from 3700 MW at 9.50 to 3400 MW at 9.55. The load restoration phase can also be observed from this graph. From 10.15 through 10.40 the shed load is restored in several steps.

The high resolution data that we’ll be looking at more closely was recorded by a meter with power quality and transient disturbance recording capability. It is situated in Masterton, Wairarapa, about 300 km south of the power station that tripped. The meter is triggered to capture frequency excursions below 49.2 Hz. The graph below shows the captured excursion on June 15th. The graph covers a total period of only one minute. It shows the frequency and Masterton substation’s load. I have highlighted and numbered several parts of the frequency curve to help with the discussion.

The first element we’ll look at is element 1 to 2. The grid has just lost 310 MW generation and the frequency is falling. No governors nor load shedding will have responded yet. The frequency falls 0.192 Hz in 0.651 seconds giving a fall rate df/dt of -0.295 Hz/s. From this df/dt result and knowing the lost generation is 310 MW we can derive the system angular momentum M as 1,052 MWs/Hz from -310 = M * -0.295.

It is interesting (and chilling) to calculate how long it would take for blackout to occur if no corrective action is taken to restore system frequency and power balance. 47 Hz is the point where cascade tripping is expected. Most generators cannot operate safely below 47 Hz, and under frequency protection relays disconnect generators to protect them from damage. This sets 47 Hz as the point at which cascade outage and complete grid blackout is likely. A falling frequency of -0.295 Hz/s would only take 10.2 seconds to drop from 50 to 47 Hz. That’s not very long and obviously automated systems are required to arrest the decline. The two common automatic systems that have been in place for decades are governor controlled generators and various levels of load shedding.

The fall arrest between 4 and 5 has been due to automatic load shedding. New Zealand has a number of customers contracted to disconnect load at 49.2 Hz. From these figures we can estimate a net shed load of 214 MW (114 MW + 100 MW).

From 7 to 8 the frequency is increasing with df/dt of 0.111 Hz/s and the system has a surplus of 117 MW of generation. At point 8 the system reached 50 Hz again, but the system then over shoots a little and governor action works to reduce generation to control the overshoot between 8 and 9.
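
The figures in this event analysis can be checked with a few lines of arithmetic; here is a minimal sketch in Python using only the numbers quoted above.

```python
# Back-of-envelope check of the under-frequency event figures quoted above.

lost_generation = 310.0   # MW tripped at point A
df = -0.192               # Hz fall between points 1 and 2
dt = 0.651                # seconds over which that fall was measured

fall_rate = df / dt
print(f"df/dt = {fall_rate:.3f} Hz/s")                     # about -0.295 Hz/s

# Pm - Pe = M * df/dt  ->  M = (Pm - Pe) / (df/dt)
M = -lost_generation / fall_rate
print(f"System angular momentum M = {M:.0f} MW·s/Hz")      # about 1050 MW·s/Hz

# Time to fall from 50 Hz to the 47 Hz cascade-trip point if nothing responds
time_to_blackout = (50.0 - 47.0) / abs(fall_rate)
print(f"Time to reach 47 Hz = {time_to_blackout:.1f} s")   # about 10.2 s
```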

This analysis shows how system inertia, under frequency load shedding and governor action work together to maintain system stability.

Summary: The key points

  • The system needs to be able to maintain stability second by second, every minute, every hour, every day, year after year. Yet when a major disturbance happens, the time available to respond is only a few seconds.
  • This highlights the essential role of system inertia in providing this precious few seconds. System inertia defines the relationship between power imbalance and frequency fall rate. The less inertia the faster the collapse and the less time we have to respond. Nearly all system inertia is provided by synchronous generators.
  • Control of the input power to the generators by governor action is essential to control frequency and power balance, bringing correction to maintain stability. This requires control of the prime mover; typically only hydro and thermal stations provide this.
  • When the fall rate is too fast for governor response, automatic load shedding can provide a lump of very helpful correction, which the governors later tidy up by fine tuning the response.

Epic Media Science Fail: Fear Not Pollinator Collapse

Jon Entine returns to this topic writing at the Genetic Literacy Project: The world faces ‘pollinator collapse’? How and why the media get the science wrong time and again. Excerpts in italics with my bolds.

As I and others have detailed in the Genetic Literacy Project and as other news organizations such as the Washington Post and Slate have outlined, the pollinator-collapse narrative has been relentless and mostly wrong for more than seven years now.

It germinated with Colony Collapse Disorder that began in 2006 and lasted for a few years—a freaky die off of bees that killed almost a quarter of the US honey bee population, but its cause remains unknown. Versions of CCD have been occurring periodically for hundreds of years, according to entomologists.

Today, almost all entomologists are convinced that the ongoing bee health crisis is primarily driven by the nasty Varroa destructor mite. Weakened honey bees, trucked around the country as livestock, face any number of health stressors along with Varroa, including the use of miticides used to control the invasive mite, changing weather and land and the use of some farm chemicals, which may lower the honeybee’s ability to fight off disease.

Still, the ‘bee crisis’ flew under the radar until 2012, when advocacy groups jumped in to provide an apocalyptic narrative after a severe winter led to a sharp, and as it turned out temporary, rise in overwinter bee deaths.

Colony loss numbers jumped in 2006 when CCD hit but have been steady and even improving since.

The alarm bells came with a spin, as advocacy groups blamed a class of pesticides known as neonicotinoids, which were introduced in the 1990s, well after the Varroa mite invasion infected hives and started the decline. The characterization was apocalyptic, with some activists claiming that neonics were driving honey bees to extinction.

In the lab evaluations, which are not considered state of the art—field evaluations replicate real-world conditions far better—honeybee mortality did increase. But that was also true of all the insecticides tested; after all, they are designed to kill harmful pests. Neonics are actually far safer than the pesticides they replaced, . . . particularly when their impact is observed under field-realistic conditions (i.e., the way farmers would actually apply the pesticide).

As the “science” supporting the bee-pocalypse came under scrutiny, the ‘world pollinator crisis’ narrative began to fray. Not only was it revealed that the initial experiments had severely overdosed the bees, but increasing numbers of high-quality field studies – which test how bees are actually affected under realistic conditions – found that bees can successfully forage on neonic-treated crops without noticeable harm.

Those determined to keep the crisis narrative alive were hardly deterred. Deprived of both facts and science to argue their case, many advocacy groups simply pounded the table by shifting their crisis argument dramatically. For example, in 2016, the Sierra Club (while requesting donations), hyped the honey bee crisis to no end.

But more recently, in 2018, the same organization posted a different message on its blog. Honeybees, the Sierra Club grudgingly acknowledged, were not threatened. Forget honeybees, the Sierra Club said, the problem is now wild bees, or more generally, all insect pollinators, which are facing extinction due to agricultural pesticides of all types (though neonics, they insisted, were especially bad).

So, once again, with neither the facts nor the science to back them up, advocacy groups have pulled a switcheroo and are again pounding the table. As they once claimed with honeybees, they now claim that the loss of wild bees and other insect pollinators imperils our food supply. A popular meme on this topic is the oft-cited statistic, which appears in the recent UN IPBES report on biodiversity, that “more than 75 per cent of global food crop types, including fruits and vegetables and some of the most important cash crops such as coffee, cocoa and almonds, rely on animal pollination.”

There’s a sleight of hand here. Most people (including most journalists) miss or gloss over the important point that this is 75 percent of crop types, or varieties, not 75 percent of all crop production. In fact, 60 percent of agricultural production comes from crops that do not rely on animal pollination, including cereals and root crops. As the GLP noted in its analysis, only about 7 percent of crop output is threatened by pollinator declines—not a welcomed percentage, but far from an apocalypse.

And the word “rely” seems almost purposefully misleading. More accurately, most of these crops receive some marginal boost in yield from pollination. Few actually “rely” on it. A UN IPBES report on pollinators published in 2018 actually breaks this down in a convenient pie graph.

Many of these facts are ignored by advocacy groups sharpening their axes, and they’re generally lost on the “if it bleeds it leads” media, which consistently play up catastrophe scenarios of crashing pollinator communities and food supplies. Unfortunately, many scientists willingly go along. Some are activists themselves; others hope to elevate the significance of their findings to garner media attention and supercharge grant proposals.

As John Adams is alleged to have said, ‘facts are stubborn things.’ We can’t be simultaneously in the midst of a pollinator crisis threatening our ability to grow food and see continually rising yield productivity among those crops most sensitive to pollination.

With these claims of an impending wild bee catastrophe, as in the case of the original honeybee-pocalypse claims, few of the journalists, activists, scientists or biodiversity experts who regularly sound this ecological alarm have reviewed the facts in context. Advocacy groups consistently extrapolate from the declines of a handful of wild bee species (out of the thousands that we know exist), to claim that we are in the midst of a worldwide crisis. But just as with the ‘honey bee-mageddon’, we are not.

Those of us who actually care about science and fact, however, might note the irony here: It is precisely the pesticides which the catastrophists are urging us to ban that, along with the many other tools in the modern farmer’s kit, have enabled us to grow more of these nutritious foods, at lower prices, than ever before in human history.

Science 101: Null Test All Claims

Francis Menton provides some essential advice for non-scientists in his recent essay at Manhattan Contrarian You Don’t Need To Be A Scientist To Know That The Global Warming Alarm “Science” Is Fake. Excerpts in italics with my bolds.

When confronted with a claim that a scientific proposition has been definitively proven, ask the question: What was the null hypothesis, and on what basis has it been rejected?

As Menton explains, you don’t need the skills to perform the null test yourself, just the boldness to check how the null hypothesis was dismissed.

Consider first a simple example, the question of whether aspirin cures headaches. Make that our scientific proposition: aspirin cures headaches. How would this proposition be established? You yourself have taken aspirin many times, and your headache always went away. Doesn’t that prove that the aspirin worked? Absolutely not. The fact that you took aspirin 100 times and the headache went away 100 times proves nothing. Why? Because there is a null hypothesis that must first be rejected. Here the null hypothesis is that headaches will go away just as quickly on their own. How do you reject that? The standard method is to take some substantial number of people with headaches, say 2000, and give half of them the aspirin and the other half a placebo. Two hours later, of the 1000 who took the aspirin, 950 feel better and only 50 still have the headache; and of the 1000 who took the placebo, 500 still have the headache. Now you have very, very good proof that aspirin cured the headaches.
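
For readers who want to see that rejection done numerically, here is a minimal sketch in Python using the made-up counts from the aspirin example above; a simple two-proportion z-test stands in for whatever analysis a real trial would use.

```python
import math

# Made-up counts from the aspirin example above.
n_aspirin, recovered_aspirin = 1000, 950   # 95% of the aspirin group recovered
n_placebo, recovered_placebo = 1000, 500   # 50% of the placebo group recovered

p1 = recovered_aspirin / n_aspirin
p2 = recovered_placebo / n_placebo

# Two-proportion z-test of the null hypothesis that aspirin makes no difference,
# i.e. headaches go away on their own at the same rate in both groups.
p_pool = (recovered_aspirin + recovered_placebo) / (n_aspirin + n_placebo)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_aspirin + 1 / n_placebo))
z = (p1 - p2) / se

print(f"Recovery: aspirin {p1:.0%}, placebo {p2:.0%}")
print(f"z = {z:.1f}")   # far beyond any conventional threshold, so the null is rejected
```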

The point to focus on is that the most important evidence — the only evidence that really proves causation — is the evidence that requires rejection of the null hypothesis.

Over to climate science. Here you are subject to a constant barrage of information designed to convince you of the definitive relationship between human carbon emissions and global warming. The world temperature graph is shooting up in hockey stick formation! Arctic sea ice is disappearing! The rate of sea level rise is accelerating! Hurricanes are intensifying! June was the warmest month EVER! And on and on and on. All of this is alleged to be “consistent” with the hypothesis of human-caused global warming.

But, what is the null hypothesis, and on what basis has it been rejected? Here the null hypothesis is that some other factor, or combination of factors, rather than human carbon emissions, was the dominant cause of the observed warming.

Once you pose the null hypothesis, you immediately realize that all of the scary climate information with which you are constantly barraged does not even meaningfully address the relevant question. All of that information is just the analog of your 100 headaches that went away after you took aspirin. How do you know that those headaches wouldn’t have gone away without the aspirin? You don’t know unless someone presents data that are sufficient to reject the null hypothesis. Proof of causation can only come from disproof of the null hypothesis or hypotheses, that is, disproof of other proposed alternative causes. This precept is fundamental to the scientific method, and therefore fully applies to “climate science” to the extent that that field wishes to be real science versus fake science.

Now, start applying this simple check to every piece you read about climate science. Start looking for the null hypothesis and how it was supposedly rejected. In mainstream climate literature — and I’m including here both the highbrow media like the New York Times and also the so-called “peer reviewed” scientific journals like Nature and Science — you won’t find that. It seems that people calling themselves “climate scientists” today have convinced themselves that their field is such “settled science” that they no longer need to bother with tacky questions like worrying about the null hypothesis.

When climate scientists start addressing the alternative hypotheses seriously, then it will be real science. In the meantime, it’s fake science.

Summary

The null test can be applied to any scientific claim.  If there is no null hypothesis considered, then you can add the report  to the file “Unproven Claims,” or “Unfounded Suppositions.”  Some researchers call them SWAGs: Scientific Wild Ass Guesses.  These are not useless, since any discovery starts with a SWAG.  But you should avoid believing that they describe the way the world works until alternative explanations have been tested and dismissed.

See Also: No “Gold Standard” Climate Science

No GHG Warming Fingerprints in the Sky

Scientific vs. Social Authenticity


This post was triggered by an essay in Scientific American Authenticity under Fire by Scott Barry Kaufman. He raises modern issues and expresses a social and psychological sense of authenticity that left me unsatisfied.  So following that, I turn to a scientific standard much richer in meaning and closer to my understanding.

Social Authenticity

Researchers are calling into question authenticity as a scientifically viable concept

Authenticity is one of the most valued characteristics in our society. As children we are taught to just “be ourselves”, and as adults we can choose from a large number of self-help books that will tell us how important it is to get in touch with our “real self”. It’s taken as a given by everyone that authenticity is a real thing and that it is worth cultivating.

Even the science of authenticity has surged in recent years, with hundreds of journal articles, conferences, and workshops. However, the more that researchers have put authenticity under the microscope, the more muddied the waters of authenticity have become.

Many common ideas about authenticity are being overturned.
Turns out, authenticity is a real mess.

One big problem with authenticity is that there is a lack of consensus among both the general public and among psychologists about what it actually means for someone or something to be authentic. Are you being most authentic when you are being congruent with your physiological states, emotions, and beliefs, whatever they may be?

Another thorny issue is measurement. Virtually all measures of authenticity involve self-report measures. However, people often do not know what they are really like or why they actually do what they do. So a test that asks people to report how authentic they are is unlikely to be a truly accurate measure of their authenticity.

Perhaps the thorniest issue of them all though is the entire notion of the “real self”. The humanistic psychotherapist Carl Rogers noted that many people who seek psychotherapy are plagued by the question “Who am I, really?” While people spend so much time searching for their real self, the stark reality is that all of the aspects of your mind are part of you.

So what is this “true self” that people are always talking about? Once you take a closer scientific examination, it seems that what people refer to as their “true self” really is just the aspects of themselves that make them feel the best about themselves.

Even more perplexing, it turns out that most people’s feelings of authenticity have little to do with acting in accord with their actual nature. The reality appears to be quite the opposite. All people tend to feel most authentic when having the same experiences, regardless of their unique personality.

Another counterintuitive finding is that people actually tend to feel most authentic when they are acting in socially desirable ways, not when they are going against the grain of cultural dictates (which is how authenticity is typically portrayed). On the flip side, people tend to feel inauthentic when they are feeling socially isolated, or feel as though they have fallen short of the standards of others.

Therefore, what people think of as their true self may actually just be what people want to be seen as. According to social psychologist Roy Baumeister, we will report feeling highly authentic and satisfied when the way others think of us matches up with how we want to be seen, and when our actions “are conducive to establishing, maintaining, and enjoying our desired reputation.”

Conversely, Baumeister argues that when people fail to achieve their desired reputation, they will dismiss their actions as inauthentic, as not reflecting their true self (“That’s not who I am”). As Baumeister notes, “As familiar examples, such repudiation seems central to many of the public appeals by celebrities and politicians caught abusing illegal drugs, having illicit sex, embezzling or bribing, and other reputation-damaging actions.”

Kaufman Conclusion

As long as you are working towards growth in the direction of who you truly want to be, that counts as authentic in my book regardless of whether it is who you are at this very moment. The first step to healthy authenticity is shedding your positivity biases and seeing yourself for who you are, in all of your contradictory and complex splendor. Full acceptance doesn’t mean you like everything you see, but it does mean that you’ve taken the most important first step toward actually becoming the whole person you most wish to become. As Carl Rogers noted, “the curious paradox is that when I accept myself just as I am, then I can change.”

My Comment:
Kaufman describes contemporary ego-centric group-thinking, which leads to the philosophical dead end called solipsism. As an epistemological position, solipsism holds that knowledge of anything outside one’s own mind is unsure; the external world and other minds cannot be known and might not exist outside the mind.

His discussion proves the early assertion that authenticity (in the social or psychological sense) is indeed a mess. The author finds no objective basis to determine fidelity to reality, thus leaving everyone struggling whether to be self-directed or other-directed. As we know from Facebook, most resolve that conflict by competing to see who can publish the most selfies while acquiring the most “friends.” This is the best Scientific American can do? The swamp is huge and deep indeed.

It reminds me of what Ross Pomeroy wrote at RealClearScience: “Psychology, as a discipline, is a house made of sand, based on analyzing inherently fickle human behavior, held together with poorly-defined concepts, and explored with often scant methodological rigor. Indeed, there’s a strong case to be made that psychology is barely a science.”

Scientific Authenticity

In contrast, let us consider some writing by Philip Kanarev, a practicing physicist concerned with the demise of scientific thinking and teaching, who calls for a return to fundamentals. His essay is Scientific Authenticity Criteria by Ph. M. Kanarev in the General Science Journal. Excerpts in italics with my bolds.

The accumulation of scientific results in the 21st century has reached a level that makes it possible to identify and systematize the criteria of scientific authenticity for the precise knowledge mankind has already gained.

Neither Euclid nor Newton gave precise definitions of the notions of an axiom, a postulate and a hypothesis. As a result, Newton called his laws axioms, which conflicted with Euclid's ideas concerning the essence of an axiom. In order to eliminate these contradictions, it was necessary to define not only the notions of axiom and postulate, but also the notion of hypothesis. This necessity arises because any scientific research begins with an assumption about the cause of the phenomenon or process being studied. The formulation of this assumption is a scientific hypothesis.

Thus, the axioms and the postulates are the main criteria of authenticity of any scientific result.

An axiom is an obvious statement that requires no experimental check and has no exceptions. The absolute authenticity of an axiom follows from this definition; its evident connection with reality protects it. The scientific value of an axiom does not depend on its recognition, which is why disregarding an axiom as a criterion of scientific authenticity amounts to ineffectual scientific work.

A postulate is a non-obvious statement whose reliability is proven by experiment or by a set of theoretical results derived from experiments. The reliability of a postulate is determined by the degree of its acknowledgement by the scientific community; that is why its value is not absolute.

A hypothesis is an unproven statement that is not yet a postulate. Its proof may be theoretical or experimental, and neither may be at variance with the axioms and the recognized postulates. Only then do hypothetical statements gain the status of postulates, and statements that sum up a set of axioms and postulates gain the status of a trusted theory.

The first axioms were formulated by Euclid. Here are some of them:
1 – To draw a straight line from any point to any point.
2 – To produce a finite straight line continuously in a straight line.
3 – That all right angles equal one another.

The Euclidean formulation concerning the parallelism of two straight lines proved to be less concise. As a result, it was questioned and analyzed in the middle of the 19th century. It was accepted that two parallel straight lines cross at infinity, and despite a complete absence of evidence for this statement, it was granted the status of an axiom. Mankind paid dearly for this agreement among scientists: all theories based on this axiom proved to be faulty, chief among them the physical theories of the 20th century.

In order to understand the complicated situation that has formed, one has to return to Euclid's axioms and assess their completeness. It turns out that among them there are no axioms reflecting the properties of the primary elements of the universe: space, matter and time. There are no phenomena in nature that can compress, stretch or distort space; that is why space is absolute. There are no phenomena in nature that change the rate of the passing of time; time depends on nothing, so we have every reason to consider time absolute. The absolute nature of space and time had been acknowledged by scientists since Euclid's era. But when his axiom concerning the parallelism of straight lines was disputed, the ideas of the relativity of space and time appeared, along with the new theories based on those ideas, which, as we noted, proved to be faulty.

A law of acknowledgement of new scientific achievements was introduced by Max Planck. He formulated it in the following way: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it”. Given the history of science, attempting to convince the authorities of the reliability of this law is an unnecessary undertaking. Certainly, time appeared in space only after matter. But we still do not know of a source that produces elementary particles, the building blocks of the material world; that is why we have no reason to consider matter absolute. This does not prevent us from paying attention to the interconnection of the primary elements of the universe: space, matter and time. They exist only together, not independently of one another. This fact is evident, and we have every reason to treat the indivisible existence of space, matter and time as axiomatic, and to call the axiom that reflects this fact the Unity axiom. The philosophical essence of this axiom was noted long ago, but practitioners of the exact sciences have failed to notice that it is implemented in the experimental and analytical processes of cognition of the world. When material bodies move, the mathematical description of this motion should be based on the Unity axiom. It follows from this axiom that the coordinate along the axis of motion of any object is a function of time. Almost all physical theories of the 20th century are in conflict with the Unity axiom. It is painful to write about this in detail.

Let us go on analyzing the role of postulates as scientific authenticity criteria. First of all, let us recall the famous postulate by Niels Bohr concerning the orbital motion of electrons in atoms. This catchy model of the interaction of electrons in atoms continues to be formed in the minds of pupils at school, despite the fact that its impropriety was demonstrated more than 10 years ago.

The role of Niels Bohr's generalized postulate is great: practically, it is used throughout modern chemistry and in the larger part of physics. This postulate is based on the calculation of the spectrum of the hydrogen atom. But it is impossible to calculate the spectrum of the first orbit of the helium atom (which occupies the second place in Mendeleev's table) with Bohr's postulate, to say nothing of the spectra of more complicated atoms and ions. That alone should have been enough to cast doubt on the authenticity of Bohr's postulate, yet for some reason the mission of doubting has fallen to our lot. Two years were devoted to decoding the spectrum of the first electron of the helium atom. As a result, the law of formation of the spectra of atoms and ions was established, as well as the law governing the change in the binding energy of the electron with the protons of the nucleus when energy jumps take place in atoms. It turned out that there is no energy of orbital motion of the electrons in these laws; there are only the energies of their linear interaction with the protons of the nuclei.
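For context (my addition, not Kanarev's), the textbook Bohr result he is disputing is the familiar description of the hydrogen spectrum:

```latex
E_n = -\frac{13.6\ \text{eV}}{n^{2}},
\qquad
\frac{1}{\lambda} = R_H\left(\frac{1}{n_1^{2}} - \frac{1}{n_2^{2}}\right),
\qquad
R_H \approx 1.097 \times 10^{7}\ \text{m}^{-1}
```

This reproduces hydrogen's line spectrum to good accuracy but, as the passage above notes, does not carry over directly to helium or to heavier atoms and ions.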

Thereafter, it became clear that only models of the elementary particles can serve as criteria of the authenticity of scientific results in cognition of the micro-world. From the analysis of the behaviour of these models, one should be able to derive the mathematical models that were ascertained analytically long ago, and to describe their behaviour in the experiments that have already been carried out.

The ascertained models of the photons of all frequencies, the electron, the proton and the neutron meet the above-mentioned requirements. They are interconnected by such a large body of theoretical and experimental information that its impropriety cannot be proven. This is the main indication that the ascertained models of the principal elementary particles are close to reality. Certainly, the process of their generation began with the formulation of hypotheses concerning their structures. Sequential development of the description of these structures, and of their behaviour during interactions, extended the range of experimental data in which the parameters of the elementary particles and their interactions were registered. For example, the formation and behaviour of electrons are governed by more than 20 constants.

We have every reason to state that the models of the photons, the electron, the proton and the neutron that we have ascertained, as well as the principles of formation of the nuclei, the atoms, the ions, the molecules and the clusters, already form a foundation for the postulates, and new scientific knowledge will cement its strength.

Science has a rather complete list of criteria for estimating the authenticity of the results of scientific investigation. The axioms (obvious statements that require no experimental check and have no exceptions) occupy the first place; the second place is occupied by the postulates. If a new theory is in conflict with even one axiom, it will be rejected immediately by the scientific community without discussion. If experimental data appear that conflict with a postulate (as happened, for example, with Newton's first law), a future scientific community that has learned the lesson of the scientific cowardice of the 20th-century academic elite will submit such a postulate to a collective analysis of its authenticity.

Kanarev Conclusion

To the academicians who have made many mistakes in the fields of physics and chemistry, we wish that they recover their sight in old age and take comfort that these mistakes have already been amended. It is time to understand that continuing to stuff the heads of young people with faulty knowledge is akin to a crime, one whose consequences will be felt keenly in the near future.

The time when a diploma confirming higher education was enough to get a job has ended. It is no longer a convincing argument for an employer; to be on the safe side, he first hires a young graduate as a probationer, wanting to see what the graduate knows and what he is able to do. The new system of higher education has almost nullified the student's opportunity to acquire practical skills in his specialty, while preserving the requirement to hold moronic knowledge, i.e. knowledge that does not reflect reality.

My Summary

In science, authenticity requires fidelity to axioms and postulates describing natural realities. It also means insisting that hypotheses be validated by experimental results. Climate science claims are not scientifically authentic unless and until they are confirmed by observations, not simply projected from a family of divergent computer models. And despite all of the social support for climate hysteria, those fears amount to yet more stuffing of nonsense into the heads of the young and the scientifically illiterate.

See Also Degrees of Climate Truth

Trudeau’s Empty Plastic Gesture

Bjorn Lomborg writes in the Globe and Mail about Canadian PM Justin Trudeau showing off by proposing to ban single-use plastics. Sorry, banning plastic bags won’t save our planet. Excerpts in italics with my bolds.

Last week, Prime Minister Justin Trudeau announced a plan to reduce plastic pollution, which will include a ban on single-use plastics as early as 2021. This is laudable: plastics clog drains and cause floods, litter nature and kill animals and birds.

Of course, plastic also makes our lives better in a myriad of ways. In just four decades, plastic packaging has become ubiquitous because it keeps everything from cereals to juice fresher and reduces transportation losses, while single-use plastics in the medical sector have made syringes, pill bottles and diagnostic equipment safer.

Going without disposable plastic entirely would leave us worse off, so we need to tackle the problems without losing all of the benefits.

The simplest action for consumers is to ensure that plastic is collected and used, so a grocery bag, for example, has a second life as a trash bag, and is then used for energy.

But we need to be honest about how much consumers can achieve. As with other environmental issues, instead of tackling the big-picture problems to actually reduce the plastic load going into oceans, we focus on relatively minor changes involving consumers, meaning we only ever tinker at the margins.

More than 20 countries have taken the showy action of banning plastic bags, including even an al-Qaeda-backed terrorist group which said plastic bags pose “a serious threat to the well-being of humans and animals alike.”

But even if every country banned plastic bags it would not make much of a difference, since plastic bags make up less than 0.8 per cent of the mass of plastic currently afloat on the world’s oceans.

Rather than trying to save the oceans with such bans in rich countries, we need to focus on tackling the inferior waste management and poor environmental policies in developing regions.

Research from 2015 shows that less than 5 per cent of land-based plastic waste going into the ocean comes from OECD countries, with half coming from just four countries: China, Indonesia, Philippines and Vietnam. While China already in 2008 banned thin plastic bags and put a tax on thicker ones, it is estimated to contribute more than 27 per cent of all marine plastic pollution originating from land.

Moreover, banning plastic bags can have unexpected, inconvenient results. A new study shows California’s ban eliminates 40 million pounds of plastic annually. However, many banned bags would have been reused for trash, so consumption of trash bags went up by 12 million pounds, reducing the benefit. It also increased consumption of paper bags by twice the saved amount of plastic – 83 million pounds. This will lead to much larger emissions of CO₂.
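Taking the study's numbers as quoted above, a quick back-of-the-envelope tally (my arithmetic, not the study's own analysis) shows how much the substitutions erode the headline figure:

```python
# Figures as reported above, in millions of pounds per year (illustrative arithmetic only)
plastic_bags_eliminated = 40
trash_bags_added = 12
paper_bags_added = 83

net_plastic_reduction = plastic_bags_eliminated - trash_bags_added
paper_per_plastic_saved = paper_bags_added / plastic_bags_eliminated

print(f"Net plastic reduction: {net_plastic_reduction} million lb/yr")           # 28
print(f"Paper added per pound of plastic saved: {paper_per_plastic_saved:.1f}")  # ~2.1
```

In other words, only about 28 million pounds of the plastic saving survives, while paper consumption rises by roughly twice the plastic avoided.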

When Kenya banned plastic bags, people predictably shifted to thicker bags made of synthetic fabric – which now may be banned. But Kenya had to relent and exempt plastics used to wrap fresh foods such as meat and other products.

We also need to consider the wider environmental impact of our bag choices. A 2018 study by the Danish Ministry of Environment and Food looked not just at plastic waste, but also at climate-change damage, ozone depletion, human toxicity and other indicators. It found you must reuse an organic cotton shopping bag 20,000 times before it will have less climate damage than a plastic bag.

If we use the same shopping bag every single time we go to the store, twice every week, it will still take 191 years before the overall environmental effect of using the cotton bag is less than if we had just used plastic.
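That figure is straightforward arithmetic from the 20,000-reuse threshold; here is a minimal check, assuming two shopping trips per week:

```python
# Rough check of the figure quoted above.
# Assumptions: the Danish study's 20,000-reuse threshold for an organic cotton bag,
# and 2 shopping trips per week for 52 weeks a year ("twice every week").
reuses_needed = 20_000
trips_per_year = 2 * 52

years_needed = reuses_needed / trips_per_year
print(f"{years_needed:.0f} years")   # ~192, consistent with the ~191 years quoted
```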

Even a simple paper bag requires 43 reuses to be better for the environment – far beyond the point at which the bag will be fit for the purpose.

The study clearly shows that a simple plastic bag, reused as a trash bag, has the smallest environmental impact of any of the choices.

If we want to reduce the impact of plastic bags while still allowing for their efficient use, a tax seems like a much better idea. A 2002 levy in Ireland reduced plastic bag use from 328 bags a person per year to just 21 bags.

And if we really want to make a meaningful impact on ocean plastics coming from land, we should focus on the biggest polluters such as China, Indonesia, Philippines and Vietnam, and emphasize the most effective ways to cut the plastic load, namely better waste management in the developing world.

We should also recognize that more than 70 per cent of all plastics floating on oceans today – about 190,000 tonnes – come from fisheries, with buoys and lines making up the majority. That tells us clearly that concerted action is needed to clean up the fishing industry.

If our goal is to get a cleaner ocean, we should by all means think about actions we can take as consumers in rich countries to reduce our use of unnecessary plastic bags. But we need to keep a sense of proportion and, if we’re serious, focus on change where it’s really needed.

Bjorn Lomborg is president of the Copenhagen Consensus Center.

See Also Plastic Trash Talking

Waste Management Saves the Ocean

Epic Science Fraud by Inept Journalist

Alex Berezow reminds why we cannot trust today’s journalists to tell the truth, especially regarding anything scientific. His article at American Council on Science and Health (ACSH) is La Croix And BPA: Journalist Celebrates That She Caused Millions In Losses. Excerpts in italics with my bolds.

Journalism is thoroughly inept and corrupt. The quality of journalism has gotten so bad that I have whittled down my trusted sources to merely a handful. Even then, when it comes to science, these sources often get it wrong.

The reason is two-fold: First, journalists aren’t experts in anything. Many of them went to journalism school, which taught them absolutely nothing useful. An editor at The Economist once told me that the newspaper did not hire journalism majors, preferring people who majored in “something real.” The craft of journalism can be learned on the job. Besides, as science communicator Mary Mangan once wrote, “Every crank in the crankosphere has either a politics degree or a journalism degree.”

Second, too many journalists believe their primary job is to “change the world” rather than “report the facts.” If it seems like many journalists behave like partisans or activists, it’s because they really are partisans and activists. Truth matters less than fulfilling an ideological mission. This attitude was summed up best by Michael Wolff, who once said, “If it rings true, it is true.” Really, who needs facts when you have feelings?

Putting this all together, we shouldn’t be surprised when a journalist goes on social media to celebrate when her (poor) reporting causes a company to lose hundreds of millions of dollars in market capitalization.

Business Insider Journalist Celebrates a Massive Loss of Wealth

La Croix is a popular beverage that I refuse to drink because I think it tastes like fizzy horse urine. But plenty of other people like it, which is one reason why its parent company, the National Beverage Corporation, has a market cap well over $2 billion.

Like several other high-profile companies, La Croix has been the target of a junk science lawsuit. The company was accused of using synthetic chemicals instead of natural ones as advertised, a distinction without a difference, as my colleague Dr. Josh Bloom explains.

Now, they are the subject of another lawsuit, this time revolving around bisphenol A (BPA), a chemical that is used as a liner to protect the integrity of cans. There are no known health effects caused by the tiny doses to which humans are exposed, and the FDA declares BPA “safe.” So, what’s the basis of the lawsuit?

According to court filings reported by Business Insider, the president of the National Beverage Corp. planned to lie about the BPA content of its products. Specifically, he allegedly planned to announce that the company no longer used BPA months before the cans would actually be BPA-free. When a high-level executive voiced opposition, he was fired. The lawsuit, then, is for wrongful termination.

There are two facets to this story: (1) The science of BPA; and (2) The conditions surrounding the termination of the employee. As already discussed, (1) is perfectly clear. Yet, despite the fact that the FDA has declared BPA “safe,” Business Insider originally called the chemical “toxic.” The article was eventually updated to remove that completely inaccurate descriptor.

The exact details of (2) are unknown. The only thing we know is that allegations have been made in a lawsuit. But Hayley Peterson, the author of the Business Insider piece, not only seems to have concluded erroneously that BPA is dangerous, but that the company is guilty of wrongful termination. How else can we explain her decision to go on LinkedIn and brag that her reporting cost the company 10% of its market cap?

How a Responsible Journalist Would Cover the La Croix Lawsuit

Instead of just cursing the darkness, I will attempt to light a candle. Here’s how a responsible journalist would cover the La Croix lawsuit.

First, it would be discussed in-depth that the entire basis of the controversy — namely, the presence of BPA — is entirely misguided because it’s a safe product. Whether the company engaged in wrongful termination is far less important than the larger discussion about BPA, which is used in many different products. Second, it would be made clear, if it is eventually found that the president planned to lie and that he acted illegally by terminating an employee, that this has no bearing on the safety of BPA. BPA is safe whether or not the president is a jerk. Third, a responsible journalist wouldn’t go on social media and brag about how they destroyed hundreds of millions of dollars of wealth.

Essentially, I’m asking that journalists be competent, well-informed, and well-behaved. I know that’s asking a lot in 2019.

Earth and Universe As Never Seen Before

This is an introduction to amazing graphics done by Eleanor Lutz (no relation) at her website Tabletop Whale, an original science illustration blog. Above is a data-based view of Earth’s seasons. If you watch in full screen, the four corners show views of the cycle from top, bottom, and sides. Below is her map of the solar system, showing how much scientific information is represented in the illustration (H/T Real Clear Science).

An Orbit Map of the Solar System
JUNE 10 2019 · Link to the Open-Source Code

This week’s map shows the orbits of more than 18,000 asteroids in the solar system. This includes everything we know of that’s over 10 km in diameter – about 10,000 asteroids – as well as 8,000 randomized objects of unknown size. The map shows each asteroid at its exact position on New Year’s Eve 1999.

All of the data for this map is shared by NASA and open to the public. However, the data is stored in several different databases so I had to do a decent amount of data cleaning. I’ve explained all of the steps in detail in my open-source code and tutorial, so I’ll just include a sketch of the process here in this blog post:

To see details, open image in new tab, then click on it to enlarge.
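Her actual pipeline is in the linked open-source code and tutorial. Purely to give a flavor of the kind of computation such a map involves, here is a minimal sketch of my own (not her code), assuming a cleaned table of Keplerian elements with hypothetical values for semi-major axis, eccentricity and mean anomaly at the chosen epoch:

```python
import numpy as np
import matplotlib.pyplot as plt

def eccentric_anomaly(M, e, tol=1e-10, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E using Newton's method."""
    E = np.array(M, dtype=float)  # start the iteration from the mean anomaly
    for _ in range(max_iter):
        delta = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= delta
        if np.all(np.abs(delta) < tol):
            break
    return E

# Hypothetical cleaned inputs (illustrative values only): semi-major axis a in AU,
# eccentricity e, and mean anomaly M in radians at the chosen epoch.
a = np.array([2.77, 2.36, 3.14])
e = np.array([0.08, 0.09, 0.23])
M = np.array([1.2, 4.0, 0.5])

E = eccentric_anomaly(M, e)
# In-plane position of each asteroid; a real map would also rotate each orbit
# by its inclination, longitude of node and argument of perihelion.
x = a * (np.cos(E) - e)
y = a * np.sqrt(1.0 - e**2) * np.sin(E)

plt.scatter(x, y, s=4)
plt.gca().set_aspect("equal")
plt.xlabel("AU")
plt.ylabel("AU")
plt.show()
```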

To see details, open the image in a new tab, then click on it to enlarge. Then browse the solar system to your heart’s content.