In Praise of Richard Lindzen

Earlier this month, atmospheric physicist and MIT professor emeritus Richard Lindzen was interviewed by radio host William Frezza; here is a transcript of that interview. Thanks to Alec Cull and Climate Scepticism for the transcript.

Below are some excerpts, but the whole interview is worth reading.

Update February 24

Another powerful speech by Lindzen just appeared:

http://icecap.us/images/uploads/Global_Warming_and_the_Irrelevance_of_Science-Erice-mod1.pdf

On Temperature Data

If you want a daily measurement, do you take a 6 pm minus 6 am or 12 versus 12, or so on? It all makes a difference – doesn’t make a big difference for the purpose for which these measurements were made, which was not climate.

It was for weather forecasting. And if you look at a weather forecast, you don’t care if it changed two tenths of a degree – you couldn’t measure that, you couldn’t feel that. You want to know: did it go up 10 degrees, 20 degrees, you know – is a cold front coming through? So, for those purposes, for weather forecasts and so on, for people’s lives, these measurements were adequate.
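To make the point concrete, here is a small sketch with synthetic hourly temperatures (hypothetical numbers, not station data): the choice of daily-mean convention shifts the result by a few tenths of a degree, which is negligible for a forecast but comparable to the trends at issue in the climate record.

```python
import numpy as np

# Synthetic diurnal cycle in degrees C: a dominant daily wave plus a small
# asymmetry, standing in for one day of hourly readings at a hypothetical station.
hours = np.arange(24)
theta = (hours - 9) / 24 * 2 * np.pi
temps = 10 + 8 * np.sin(theta) + 0.4 * np.cos(2 * theta)

# Three conventions for a "daily mean" temperature:
mean_24h = temps.mean()                        # average of all 24 hourly readings
mean_minmax = (temps.max() + temps.min()) / 2  # (Tmax + Tmin) / 2, the usual min/max-thermometer convention
mean_fixed = (temps[7] + temps[19]) / 2        # average of two fixed observation hours (07:00 and 19:00)

print(f"24-hour mean:        {mean_24h:.2f} C")     # 10.00
print(f"(Tmax + Tmin) / 2:   {mean_minmax:.2f} C")  # 9.60
print(f"07:00/19:00 average: {mean_fixed:.2f} C")   # 10.20
```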

On Global Mean Temperature

By definition, if they’re reporting on global mean temperature anomaly, which is what they use, of course it involves adjustments. You have to process this, you have to take the average, you have to move from it. They also do have adjustments – we know that urban areas introduce warmth and they have formulas that they design to quote correct for it. And again, the problem is not that this is illegitimate but that if you’re worried about tenths of a degree, it’s totally inadequate.

The fact of the matter is, if you have adjustments of a few tenths of a degree, it means that they weren’t good to that accuracy.

The virtue of the satellites is of course they have global coverage. The thermometers have very poor coverage over the oceans – 70% of the Earth. They are not measuring exactly the same thing. They are more consistent over time, but even there, there are many things to correct for – the orbital decay, the other things – and so they also have their own corrections. They are more nearly, I would say, corrections than adjustments, but, you know, there’s semantics mixed in.
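Both the surface and satellite products ultimately report anomalies rather than absolute temperatures. As a rough sketch of the processing Lindzen is referring to (illustrative numbers only; the real products involve thousands of stations plus gridding and homogenisation), a global mean anomaly is built by differencing each station against its own baseline and then averaging:

```python
import numpy as np

# Hypothetical annual mean temperatures (degrees C) for three stations over six years.
stations = np.array([
    [14.1, 14.3, 14.2, 14.5, 14.4, 14.6],  # station A
    [ 8.9,  9.0,  9.2,  9.1,  9.3,  9.2],  # station B
    [22.4, 22.3, 22.6, 22.5, 22.7, 22.8],  # station C
])

# Step 1: subtract each station's own baseline (here, its first three years) so that
# stations with very different absolute temperatures become comparable anomalies.
baseline = stations[:, :3].mean(axis=1, keepdims=True)
anomalies = stations - baseline

# Step 2: average the station anomalies into a single "global" value per year.
global_anomaly = anomalies.mean(axis=0)
print(np.round(global_anomaly, 2))  # e.g. [-0.09 -0.02  0.11  0.14  0.24  0.31]
```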

On Climate Models

You see, the existing models, for instance, if you restrict yourself to this global mean temperature anomaly – one variable, the others may be way off, but let’s take that one – if they predict too much increase in temperature, they have thus far added aerosols and said those cancel it. So they adjust it to look like the period they – it’s a little like taking an exam and being told the answer in advance.

But the bigger test is: run models forward. And if you do that, virtually every model used by the UN, from 1978 to the present, is overestimating the observed change in temperature.

On the “Pause”

Look, you look at the temperature records from the ground, from the satellites, for anything. And what you see is something flopping around a few tenths of a degree, but no obvious trend for at least 18 years. Now, people are then saying “Well, if I take 2015 and it’s a tenth or two higher than ’98, or something like that, now I can draw a trend line through this that makes it look like it went up a tenth or two of a degree.” The problem with that is: if something is flopping around with a zero mean, and you pick your end points selectively, you can get it to go up, get it to go down… It’s a distraction.
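The endpoint effect described here is easy to reproduce. A minimal sketch, using synthetic zero-mean noise rather than actual temperature data: fit a straight line between selectively chosen start and end years of a trendless series, and the resulting "trend" swings widely depending on the endpoints.

```python
import numpy as np

rng = np.random.default_rng(0)

# A trendless synthetic series: 18 "years" of anomalies flopping around a zero mean
# by a few tenths of a degree. Illustrative noise only, not real data.
years = np.arange(1998, 2016)
anoms = rng.normal(loc=0.0, scale=0.15, size=years.size)

def trend_per_decade(start, end):
    """Least-squares slope (degrees C per decade) of the series between two endpoint years."""
    mask = (years >= start) & (years <= end)
    return 10 * np.polyfit(years[mask], anoms[mask], 1)[0]

# Scan every start/end pair at least 8 years apart and report the extremes.
# The fitted slope depends heavily on the endpoints, even though the series
# has no trend at all by construction.
pairs = [(int(s), int(e)) for s in years for e in years if e - s >= 8]
slopes = {pair: trend_per_decade(*pair) for pair in pairs}
hi_pair = max(slopes, key=slopes.get)
lo_pair = min(slopes, key=slopes.get)
print(f"Largest fitted slope:  {hi_pair}  {slopes[hi_pair]:+.2f} C/decade")
print(f"Smallest fitted slope: {lo_pair}  {slopes[lo_pair]:+.2f} C/decade")
```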

On the Consensus

So all scientists agree that it’s probably warmer now than it was at the end of the Little Ice Age. Almost all scientists agree that adding CO2 should give you some warming, though it might be very little. But it is propagandists who translate that into “It is dangerous – we must reduce CO2”, etc. That doesn’t even come from the IPCC scientific assessment.

On the Climate Debate

But within the science community, the real division is much more subtle. So I would say IPCC Working Group I, which is the scientific assessment – the general position they adopt is that there is warming, it is mostly due to man in recent years – meaning since about 1960, 1970, not before – and it is potentially dangerous. Okay. And the sceptical position is: there are many causes of the change and it doesn’t look like the sensitivity is enough for it to be serious. So, you know, this is a discussable issue. Neither side is saying catastrophe is round the corner.

On the Funding Monopoly

Government has a monopoly. Science in this country is funded by the government, and that has its implications. Dwight Eisenhower picked this up, many many years ago, when he said, you know, one of the dangers of this is a government contract might be a replacement for scientific results. And indeed, you know, when you get letters asking for letters of recommendation for promotion, some things like that, very often the question is “What kind of fund-raising can we expect from this person?” So these are by no means minor considerations, and young people know that, that they have to bring in funds. This becomes even more important in modern universities, where the area of major growth has been administration.

Conclusion

Good Health and Long Life, Dr. Richard Lindzen.  We need your wisdom and character now more than ever.

 


5 comments

  1. Hifast · February 23, 2016

    Reblogged this on Climate Collections and commented:
    Lindzen makes a fascinating point at 19:13 about the next generation of students. https://youtu.be/F-fXj-ANWRk?t=19m13s


  2. Ken McMurtrie · February 24, 2016

    Reblogged this on The GOLDEN RULE and commented:
    If you don’t believe my conclusions on the invalid alarmist “science”, maybe you will believe his!


  3. Pethefin · February 24, 2016

    Richard Lindzen has been observing the evolution of the AGW movement throughout its history and has given several illuminating lectures on the issue; here is one of them:


  4. Frederick Colbourne · February 25, 2016

    “By definition, if they’re reporting on global mean temperature anomaly, which is what they use, of course it involves adjustments. You have to process this, you have to take the average, you have to move from it.”

    Yes, but which average? As I understand it, the average used is the arithmetic average, which would be appropriate if the values averaged are normally distributed. But, if the values are lognormally distributed, the appropriate statistic would be the geometric mean (the antilog of the mean of the logarithms, which is the median of the distribution).

    I do not have the data for the projection of the 100 or so GCMs (“the models”) but when I eyeballed the graph prepared by John Christy, the ensemble seemed to me to be lognormally distributed.

    If so, the mean of the ensemble should be the geometric mean, not the arithmetic mean. For a normal distribution the mean coincides with the median. For a lognormal distribution, taking the antilog of the mean of the logarithms gives the median, which is lower than the arithmetic mean. The gap between the arithmetic and geometric means depends on the standard deviation of the logarithms.

    If the results from GCMs are lognormally distributed, reporting the arithmetic instead of the geometric mean of an ensemble of GCMs will bias the result in the direction of higher values, away from the densest end of the distribution.

    Further, if a distribution is assumed to be normal but is actually lognormal, the error bars will be set too high, because they are placed relative to the arithmetic mean instead of the median.

    The only issue to be resolved is whether a given ensemble of GCM results is distributed normally or lognormally. The best test I know is the gap between the mean and the median and the direction of skewness, which for lognormal distributions is to the right (mean above the median).

    Reference: http://mathworld.wolfram.com/LogNormalDistribution.html

    Aitchison, J. and Brown, J. A. C. The Lognormal Distribution, with Special Reference to Its Use in Economics. New York: Cambridge University Press, 1957.

    Balakrishnan, N. and Chen, W. W. S. Handbook of Tables for Order Statistics from Lognormal Distributions with Applications. Amsterdam, Netherlands: Kluwer, 1999.

    Crow, E. L. and Shimizu, K. (Eds.). Lognormal Distributions: Theory and Applications. New York: Dekker, 1988.
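    A minimal numerical check of the argument above, using a synthetic lognormal sample rather than actual GCM ensemble values: the arithmetic mean sits above the geometric mean and the median, and a simple mean-versus-median comparison separates the lognormal case from the normal one.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic stand-in for an ensemble of results: a lognormal sample with
    # log-mean mu and log-standard-deviation sigma (not actual GCM output).
    mu, sigma = 0.3, 0.5
    sample = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

    arith_mean = sample.mean()                # average of the raw values
    geo_mean = np.exp(np.log(sample).mean())  # antilog of the mean of the logarithms
    median = np.median(sample)

    print(f"arithmetic mean: {arith_mean:.3f}")  # close to exp(mu + sigma**2 / 2), about 1.53
    print(f"geometric mean:  {geo_mean:.3f}")    # close to exp(mu), about 1.35
    print(f"median:          {median:.3f}")      # essentially equal to the geometric mean

    # The quick test described above: for a right-skewed (lognormal) sample the mean sits
    # noticeably above the median; for a normal sample with the same spread they nearly coincide.
    normal_sample = rng.normal(loc=arith_mean, scale=sample.std(), size=100_000)
    print(f"lognormal mean - median: {arith_mean - median:+.3f}")
    print(f"normal    mean - median: {normal_sample.mean() - np.median(normal_sample):+.3f}")
    ```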


    • Ron Clutz · February 25, 2016

      I don’t think Lindzen was talking about GCM ensembles at that point. Speaking broadly, he noted that a lot of processing is involved in calculating a global anomaly, including averaging of anomalies. His main point: the size of the adjustments is on the order of the size of the signal claimed.

