In recent weeks climate activists have turned to the courts to advance their cause. An assembly of international supreme court judges discussed issuing a ruling to establish consensus science as legal fact. The UN's proposed agreement for the Paris COP includes an International Climate Tribunal “to oversee, control and sanction the fulfilment [sic] of and compliance with the obligations” under the agreement. A letter was sent to US justice officials appealing for the RICO statute to be used to silence dissenters from consensus climate science. And a plan hatched long ago was activated to catch Exxon in tobacco-style litigation. More on the latter is here.
Noting these events, Judith Curry had a discussion about the role of the courts regarding climate science. It seems to me that any legal proceeding would bog down at the first testimony by a consensus witness, since any opposing counsel brighter than a fence post should repeatedly object: “Objection, asserting facts not in evidence.” (Probably wishful thinking on my part.)
A separate event yesterday attracted my attention to the topic of climate evidence. At his swearing-in as Canadian Prime Minister, Justin Trudeau appointed a minister to the new Environment and Climate Change portfolio. In his general comments, not specific to that department, he said his government would deliver “evidence-based policy”. Naturally I am wondering what that could mean regarding climate policies.
What Does “Evidence-Based” Mean?
Robert Sutton has written extensively on the notion of evidence-based management, and says this in the Harvard Business Review here.
We’ve just suggested no less than six substitutes that managers, like doctors, often use for the best evidence—obsolete knowledge, personal experience, specialist skills, hype, dogma, and mindless mimicry of top performers—so perhaps it’s apparent why evidence-based decision making is so rare. At the same time, it should be clear that relying on any of these six is not the best way to think about or decide among alternative practices.
Sutton talks about some of the elements that make up an evidence-based approach. For example,
Start with an Answerable Question:
The decision-making process used at Oxford’s Centre for Evidence-Based Medicine starts with a crucial first step—the situation confronting the practitioner must be framed as an answerable question. That makes it clear how to compile relevant evidence.
When people in the organization see senior executives spending the time and mental energy to unpack the underlying assumptions that form the foundation for some proposed policy, practice, or intervention, they absorb a new cultural norm.
Treat the organization as an unfinished prototype.
For some questions in some businesses, the best evidence is to be found at home—in the company’s own data and experience rather than in the broader-based research of scholars. Companies that want to promote more evidence-based management should get in the habit of running trial programs, pilot studies, and small experiments, and thinking about the inferences that can be drawn from them.
Embrace the attitude of wisdom.
Something else, something broader, is more important than any single guideline for reaping the benefits of evidence-based management: the attitude people have toward business knowledge. At least since Plato’s time, people have appreciated that true wisdom does not come from the sheer accumulation of knowledge, but from a healthy respect for and curiosity about the vast realms of knowledge still unconquered.
The approach is summarized here.
Five Principles of EBM
1. Face the hard facts, and build a culture in which people are encouraged to tell the truth, even if it is unpleasant.
2. Be committed to “fact based” decision making — which means being committed to getting the best evidence and using it to guide actions.
3. Treat your organization as an unfinished prototype — encourage experimentation and learning by doing.
4. Look for the risks and drawbacks in what people recommend — even the best medicine has side effects.
5. Avoid basing decisions on untested but strongly held beliefs, what you have done in the past, or on uncritical “benchmarking” of what winners do.
The Medical Paradigm of Evidence
Throughout this essay you will see references to medical decision making, since the evidence-based idea originated in that arena. The practice of medicine is where the notion took root: treatment choices should be based on the data of historical results. And in the courts, it was often medical cases that drove the development of protocols for arguing whether a medicine, treatment or environmental condition caused someone harm.
I used above the classical image of Justice being blind in weighing evidence, the idea being that the defendant’s wealth or social status has no bearing on the decision of guilt or innocence. Medical science goes one step further to eliminate bias: double-blind randomized controlled trials (RCTs) are the gold standard for evidence.
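To see why randomization is the gold standard, here is a minimal simulation (with invented numbers, not from any real trial) showing how random assignment balances a hidden confounder between the treatment and control arms, so that an outcome difference can be attributed to the treatment itself:

```python
import random

random.seed(42)

# Hypothetical cohort: each subject carries a hidden confounder (say, an
# older age group) that could influence outcomes independently of treatment.
subjects = [{"id": i, "older": random.random() < 0.4} for i in range(1000)]

# Random assignment: each subject flips a fair coin.
for s in subjects:
    s["treated"] = random.random() < 0.5

treated = [s for s in subjects if s["treated"]]
control = [s for s in subjects if not s["treated"]]

def share(group):
    """Fraction of the group carrying the confounder."""
    return sum(s["older"] for s in group) / len(group)

# With random assignment the confounder ends up nearly equally represented
# in both arms, which is what blinds the comparison to that bias.
print(f"older share, treated: {share(treated):.2f}")
print(f"older share, control: {share(control):.2f}")
```

The same logic is why a non-randomized comparison can mislead: whatever sorted subjects into groups may also drive the outcome.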
Rules for Scientific Evidence in Court
A court of law is first and foremost an evidence-based proceeding, and detailed rules are applied when submitting and accepting anything as evidence for the purpose of reaching a decision. Without going into the complexities (I am not a lawyer), it is instructive to see how courts handle scientific evidence, as background for what a climate case might entail.
Much of the following information comes from Nathan Schachtman here.
Proper epidemiological methodology begins with published study results which demonstrate an association between a drug and an unfortunate effect. Once an association has been found, a judgment as to whether a real causal relationship between exposure to a drug and a particular birth defect really exists must be made. This judgment requires a critical analysis of the relevant literature applying proper epidemiologic principles and methods. It must be determined whether the observed results are due to a real association or merely the result of chance. Appropriate scientific studies must be analyzed for the possibility that the apparent associations were the result of chance, confounding or bias. It must also be considered whether the results have been replicated.
Step 1: Establish an association between two variables.
Proper epidemiologic method requires surveying the pertinent published studies that investigate whether there is an association between the medication use and the claimed harm. The expert witnesses must, however, do more than write a bibliography; they must assess any putative associations for “chance, confounding or bias”:
Step 2: Rule out chance as an explanation
The appropriate and generally accepted methodology for accomplishing this step of evaluating a putative association is to consider whether the association is statistically significant at the conventional level.
“Generally accepted methodology considers statistically significant replication of study results in different populations because apparent associations may reflect flaws in methodology.”
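Step 2 can be made concrete with a toy calculation. The sketch below uses illustrative numbers (not from any actual study) to test whether an apparent association between an exposure and an adverse outcome clears the conventional significance threshold, via a two-proportion test with the normal approximation:

```python
import math

# Hypothetical 2x2 study results: (outcome count, no-outcome count)
exposed   = (30, 970)   # 3.0% adverse-outcome rate among 1,000 exposed
unexposed = (15, 985)   # 1.5% rate among 1,000 unexposed

def two_proportion_p_value(a, b):
    """Two-sided p-value for the difference between two proportions,
    using the pooled normal approximation to the binomial."""
    n1, n2 = sum(a), sum(b)
    p1, p2 = a[0] / n1, b[0] / n2
    pooled = (a[0] + b[0]) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

p = two_proportion_p_value(exposed, unexposed)
print(f"p-value: {p:.4f}")

# The conventional rule: the association is "statistically significant"
# if p < 0.05, i.e., a difference this large would arise by chance alone
# less than 5% of the time.
print("significant at 0.05" if p < 0.05 else "not significant")
```

Note that clearing this bar only rules out chance for one study; as the quote above says, accepted methodology also looks for replication in different populations.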
Step 3: Rule out bias or confounding factors.
The studies must be structured to analyze and rule out other factors or influences, such as non-random sampling, or intervening variables such as demographic or socio-economic differences.
Step 4: Infer Causation by Applying Accepted Causative Factors
Legal proceedings most often follow the Bradford Hill factors, which are delineated here.
What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation?
(1) Strength. First upon my list I would put the strength of the association.
(2) Consistency: Next on my list of features to be specially considered I would place the consistency of the observed association. Has it been repeatedly observed by different persons, in different places, circumstances and times?
(3) Specificity: One reason, needless to say, is the specificity of the association, the third characteristic which invariably we must consider. If as here, the association is limited to specific workers and to particular sites and types of disease and there is no association between the work and other modes of dying, then clearly that is a strong argument in favor of causation.
(4) Temporality: My fourth characteristic is the temporal relationship of the association – which is the cart and which is the horse? This is a question which might be particularly relevant with diseases of slow development. Does a particular diet lead to disease or do the early stages of the disease lead to those particular dietetic habits?
(5) Biological gradient: Fifthly, if the association is one which can reveal a biological gradient, or dose-response curve, then we should look most carefully for such evidence.
(6) Plausibility: It will be helpful if the causation we suspect is biologically plausible. But this is a feature I am convinced we cannot demand. What is biologically plausible depends upon the biological knowledge of the day.
(7) Coherence: On the other hand the cause-and-effect interpretation of our data should not seriously conflict with the generally known facts of the natural history and biology of the disease – in the expression of the Advisory Committee to the Surgeon-General it should have coherence.
(8) Experiment: Occasionally it is possible to appeal to experimental, or semi-experimental, evidence. For example, because of an observed association some preventive action is taken. Does it in fact prevent? The dust in the workshop is reduced, lubricating oils are changed, persons stop smoking cigarettes. Is the frequency of the associated events affected? Here the strongest support for the causation hypothesis may be revealed.
(9) Analogy: In some circumstances it would be fair to judge by analogy. With the effects of thalidomide and rubella before us we would surely be ready to accept slighter but similar evidence with another drug or another viral disease in pregnancy.
None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question – is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?
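Several of these factors lend themselves to simple numerical checks. As an illustration of factors (1) and (5), strength and biological gradient, here is a sketch with made-up dose and response figures that tests whether response rises monotonically with dose, the pattern a dose-response relationship would show:

```python
# Hypothetical exposure levels and observed response rates (illustrative only).
doses     = [0, 10, 20, 40, 80]              # exposure level
responses = [0.01, 0.02, 0.04, 0.07, 0.12]   # fraction of subjects affected

# Factor (5), biological gradient: response should increase with dose.
monotonic = all(r2 > r1 for r1, r2 in zip(responses, responses[1:]))

# Factor (1), strength: a crude effect size comparing the highest and
# lowest exposure groups.
relative_risk = responses[-1] / responses[0]

print(f"monotonic dose-response: {monotonic}")
print(f"relative risk (high vs. low dose): {relative_risk:.1f}")
```

As Hill himself cautions in the passage above, passing such checks is support, not proof; no single factor is a sine qua non.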
How Does This Apply to Climate Policy?
The legal methodology above is used to decide whether a causal relationship exists between two variables. Clearly, in climate science the starting question is: do rising fossil fuel emissions cause temperatures to rise? Those who have been following the issue know that there are many questions underneath: Are temperatures always rising along with CO2? Has chance been eliminated? Could natural factors be confounding the association? And so on.
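To see what Step 1, establishing an association, would even mean here, the sketch below computes a Pearson correlation between two synthetic series standing in for CO2 and temperature. The numbers are invented for illustration; a real analysis would also have to address autocorrelation, natural confounders, and the direction of causation:

```python
import math

# Synthetic annual series (illustrative only, not observational data):
co2  = [340, 345, 351, 356, 362, 368, 375, 381, 388, 395]             # ppm-like
temp = [0.10, 0.18, 0.12, 0.25, 0.22, 0.31, 0.28, 0.40, 0.36, 0.45]   # anomaly-like

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(co2, temp)
print(f"correlation: {r:.2f}")

# A high correlation establishes association (Step 1) but not causation:
# chance, confounding and bias (Steps 2 and 3) still have to be ruled out
# before the Bradford Hill factors can even be applied.
```

In the legal framework sketched earlier, this is only the opening move; the contested ground lies in the steps that follow.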
But that question is only the beginning when considering an evidence-based climate policy. Daniel Roberts has provided a simple, comprehensive framework of questions, showing that the answers to each question affect the others.
When governments speak of evidence-based policies, they usually mean allocating scarce public funds to programs that have shown value for money. Cost-benefit analysis is inescapable, along with definitions of outcomes, outputs, service activities, and the metrics to assess performance for the sake of funding priorities. Is that what PM Trudeau has in mind? Will that discipline be applied regarding climate change?
If I had used a term like “evidence-based” in a schoolboy essay, I would have gotten a red circle with a GG alongside (“Glittering Generality”). I wonder whether today’s teachers are as discerning and demanding of rigor, or whether they let it go if it is politically correct. Justin Trudeau was formerly a schoolteacher, so I guess we will find out.