12th JUNE 2004

CERTAINTY AND UNCERTAINTY

Benjamin Franklin considered that the only certainties were death and taxes. David Hume went even further, insisting that induction never leads to certainty. All statements about the future are uncertain; the best we can do is attach levels of probability to them. While death and taxes might be fairly close to 100% certain, the public, and scientists, still find it difficult to come to terms with the undoubted truth of Hume's conclusion. Most statements and conclusions fail to admit that they are uncertain, or to say by how much.

The advances in the science of statistics in the earlier years of the last century provided the prospect of actually supplying measurements of uncertainty. These procedures are, however, inadequately understood, even by scientists, so that the "uncertainties" involved in using these measures are often neglected.

A recent article in the "Economist" (3 June 2004) illustrates my point. It ends with the following:

"Far too many scientists have only a shaky grasp of the statistical techniques they are using. They employ them as an amateur chef employs a cook book, believing the recipes will work without understanding why. A more cordon bleu attitude to the maths involved might lead to fewer statistical soufflés failing to rise."

Most of us are familiar with the most widely practised means of measuring uncertainty through its use in public opinion surveys. A recent attempt by the "New Zealand Listener" to describe this procedure was rather garbled.
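The "margin of error" quoted alongside such polls is, in essence, the half-width of a 95% confidence interval for a proportion. A minimal sketch in Python (the sample size and proportion here are invented for illustration):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval
    for a proportion p estimated from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1000 people with 50% support:
moe = margin_of_error(0.5, 1000)
print(round(moe * 100, 1))  # → 3.1
```

For the common case of a poll of about a thousand people, this gives the familiar "plus or minus 3%". Quadrupling the sample only halves the margin, which is why polls rarely get much bigger.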

In the past, when a set of measurements was made, the "range" of the results was given. Although the range is now regarded as an obsolete and unreliable measure of uncertainty, it is used by the IPCC to indicate the possible change in global surface temperature by the year 2100 in "Climate Change 2001".
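The weakness of the range is easy to demonstrate: it depends entirely on the two most extreme values, so a single rogue reading transforms it. A small Python illustration (the readings are invented):

```python
def value_range(xs):
    """The old-fashioned 'range': largest value minus smallest."""
    return max(xs) - min(xs)

readings = [10.1, 10.3, 9.9, 10.2, 10.0]
print(round(value_range(readings), 1))            # → 0.4
print(round(value_range(readings + [14.0]), 1))   # → 4.1
```

One bad reading multiplies the apparent uncertainty tenfold, which is one reason the standard deviation has displaced the range.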

Another example is the IPCC opinion of the rate of global sea level rise during the 20th century, said to be "in the range of 1.0 to 2.0 mm/yr with a central value of 1.5 mm/yr". This figure is based on tide-gauge data, which are extremely unrepresentative of the world's oceans, being mainly confined to Northern Hemisphere port cities.


The more modern method of expressing uncertainty is to assume that the results follow some form of distribution curve, which can, with luck, be given a mathematical form, so that levels of uncertainty can be calculated. If the curve is symmetrical, then the "average" will be close to the most likely result.
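The point about symmetry can be seen with a few invented numbers: for a symmetrical set the average sits at the most likely value, while a single large value in a skewed set drags the average well away from it.

```python
def mean(xs):
    return sum(xs) / len(xs)

symmetric = [1, 2, 3, 4, 5]   # the average falls at the centre value
skewed = [1, 1, 1, 2, 20]     # one large value drags the average upward

print(mean(symmetric))  # → 3.0
print(mean(skewed))     # → 5.0, far from the most common value, 1
```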

Although real-life distribution curves can be quite varied, and may not be symmetrical, it is usual to assume that they follow the simplest distribution curve of all, called the "Gaussian curve". When this is so, it is possible to calculate a quantity called the "standard deviation" or "standard error", which tells you the likelihood that a particular result might deviate from the average. For the Gaussian curve, roughly two observations in three are within one standard deviation of the mean, nineteen out of twenty are within two standard deviations, and 997 out of a thousand are within three standard deviations.
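The coverage of the Gaussian curve can be checked directly: the fraction of results lying within k standard deviations of the mean is erf(k/√2), which Python's standard library evaluates:

```python
import math

def within_k_sigma(k):
    """Fraction of a Gaussian distribution lying within k
    standard deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, round(within_k_sigma(k), 4))
# → 1 0.6827
#   2 0.9545
#   3 0.9973
```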

One in three is pretty poor odds, but people who have shonky results often use it to exaggerate the accuracy of their data. For example, the ocean temperature data of Levitus et al (Science 2000 287 2225-2229) are given with only one standard deviation, since they would look unconvincing if the conventional two standard deviations were supplied.

Most working scientists, including public opinion gatherers and medical researchers, choose two standard deviations as their measure of uncertainty. The public may not realise that this still leaves a one in twenty chance that an individual result lies outside these limits. These are not really very long odds, and three standard deviations, covering about 99.7% of the data, may often be justified.
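In practice one computes the sample mean and standard deviation and then quotes a two- or three-standard-deviation band. A self-contained sketch in Python (the readings are invented):

```python
import math

def mean_and_sd(xs):
    """Sample mean and sample standard deviation (n - 1 divisor)."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, math.sqrt(var)

data = [9.8, 10.1, 10.0, 10.3, 9.9, 10.2, 10.1, 9.7]
m, s = mean_and_sd(data)
print(f"{m:.2f} +/- {2 * s:.2f}  (two standard deviations, ~95% coverage)")
print(f"{m:.2f} +/- {3 * s:.2f}  (three standard deviations, ~99.7% coverage)")
```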

Many of us possess a "scientific" calculator, and others have computers with an "Excel" spreadsheet, which allow us to calculate a "linear regression". This uses a mathematical technique called the "method of least squares", which enables you to draw the best straight line through a plot of one quantity against another, and to calculate the "correlation coefficient", a measure of how well the line fits.
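For those without the calculator or spreadsheet, the least-squares slope, intercept and correlation coefficient are only a few lines of arithmetic. A self-contained Python sketch:

```python
import math

def linear_regression(xs, ys):
    """Least-squares straight line y = a + b*x, plus the correlation
    coefficient r, which measures how well the line fits."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    b = sxy / sxx                    # slope
    a = my - b * mx                  # intercept
    r = sxy / math.sqrt(sxx * syy)  # correlation coefficient
    return a, b, r

# Points lying exactly on y = 2x + 1 give a perfect fit, r = 1.
a, b, r = linear_regression([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b, r)  # → 1.0 2.0 1.0
```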

The procedure can only be justified if there is reason to suppose that the quantities plotted really are related by a straight line. This requirement is routinely ignored by climate change scientists. The plots of "globally averaged temperature anomalies" against year, so frequently displayed, are very far from straight-line behaviour, yet regression lines are drawn, and extrapolated to predict the future. Admittedly "Climate Change 2001" in Chapter 2 goes so far as to break its graph up into sections, each of which has its individual straight line. The satellite global temperature record from 1979 is so wobbly that it is really absurd to try to fit a straight line to it, but the scientists still do it.
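The danger is easily demonstrated with synthetic data: fit a straight line to points that actually follow a curve, and the correlation coefficient can look perfectly respectable while the extrapolation is wildly wrong. A Python sketch (the data are invented, following y = x²):

```python
import math

# Synthetic curved data: y = x squared, for x = 1..10.
xs = list(range(1, 11))
ys = [x ** 2 for x in xs]

# Ordinary least-squares fit of a straight line y = a + b*x.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)
b = sxy / sxx                    # slope
a = my - b * mx                  # intercept
r = sxy / math.sqrt(sxx * syy)  # correlation coefficient

print(f"fit: y = {a:.1f} + {b:.1f}x, r = {r:.2f}")
print(f"extrapolated to x = 20: {a + b * 20:.0f}, actual: {20 ** 2}")
```

The fit gives r ≈ 0.97, yet at x = 20 the line predicts 198 where the true value is 400. A high correlation coefficient is no licence to extrapolate.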

Strangely, the one climate change quantity which actually does appear to conform with a straight-line function is the global atmospheric carbon dioxide concentration, which has increased in a linear fashion since 1976. Since there is no theory that can explain this, the fact is ignored.

Much of the work of the IPCC is based on guesswork (called by them "judgmental estimates"), without any scientifically valid measures of uncertainty. They feel awfully guilty about this, so they try to apply quantitative figures to their guesses.

Thus "virtually certain" means a greater than 99% chance of being true; "very likely", a 90-99% chance; "likely", 66-90%; "medium likelihood", 33-66%; "unlikely", 10-33%; "very unlikely", 1-10%; and "exceptionally unlikely", less than a 1% chance. It is no surprise to find that there are no results below the "likely" level. The "judgmental estimates" are always made by those who originate the calculation.
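The scale amounts to a simple lookup from probability to phrase. A sketch in Python (the handling of results falling exactly on a boundary is my assumption, since the IPCC does not specify it):

```python
def ipcc_term(p):
    """Map a probability (0 to 1) to the IPCC's verbal likelihood scale.
    Boundary handling at the exact cut-offs is an assumption."""
    scale = [
        (0.99, "virtually certain"),
        (0.90, "very likely"),
        (0.66, "likely"),
        (0.33, "medium likelihood"),
        (0.10, "unlikely"),
        (0.01, "very unlikely"),
    ]
    for threshold, term in scale:
        if p > threshold:
            return term
    return "exceptionally unlikely"

print(ipcc_term(0.95))  # → very likely
print(ipcc_term(0.5))   # → medium likelihood
```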

The masterpiece of statistical uncertainty from the IPCC is the figure which summarises the factors involved in global mean radiative forcing since 1750. This figure appears no fewer than three times in "Climate Change 2001", and is the fundamental basis of the entire claim that there is "global warming" caused by an increase in net radiative forcing.

This figure, to begin with, omits the most important components of "radiative forcing", namely water vapour and clouds, which are tucked under the carpet as merely "feedbacks". If they were included they would overwhelm the quantities pictured.

Then each quantity has "error bars".

The IPCC warn that you cannot just add and subtract these quantities, since they interact with one another, but it is surely obvious that the net value of "radiative forcing", the additional radiation at the troposphere since 1750, is simply unknown. It is not known whether it is going up or down.
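The standard rule for combining uncertainties makes the difficulty concrete: only for independent quantities can error bars be combined at all, by adding in quadrature; interacting quantities require covariance terms that are not supplied. A sketch in Python (the error-bar values are invented for illustration):

```python
import math

def combined_uncertainty(sigmas):
    """Standard uncertainty of a sum of INDEPENDENT quantities:
    add the individual uncertainties in quadrature. Correlated
    (interacting) quantities need additional covariance terms."""
    return math.sqrt(sum(s ** 2 for s in sigmas))

# Illustrative (invented) error bars for three forcing terms:
print(round(combined_uncertainty([0.3, 0.4, 1.2]), 2))  # → 1.3
print(round(0.3 + 0.4 + 1.2, 2))  # naive straight addition gives 1.9
```

Even on the charitable assumption of independence, the quadrature sum differs markedly from straight addition; without knowing the interactions, neither figure can be trusted.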

The climate models are, if this is possible, even worse. No estimates of uncertainty are ever given, apart from the "judgmental estimates" (i.e. guesses).

All future projections of the IPCC are given without scientifically justified measures of uncertainty. All the evidence indicates that if these were available, the uncertainties would be so great that the projections are meaningless, and no basis for promoting the economic disruption involved in the proposed controls on emissions.

Vincent Gray

75 Silverstream Road

Crofton Downs

Wellington 6004

New Zealand

Phone/Fax (064) 4 9735939

Email vinmary.gray@paradise.net.nz

"It's not the things you don't know that fool you.

It's the things you do know that ain't so"

Josh Billings