Do We Really Know Earth’s Temperature?

Guest article by Pat Frank
We’ve all read the diagnosis, for example here, that the global climate has suffered “unprecedented warming,” since about 1900. The accepted increase across the 20th century is 0.7 (+/-)0.2 C. As an experimental chemist, I always wondered at that “(+/-)0.2 C.” In my experience, it seemed an awfully narrow uncertainty, given the exigencies of instruments and outdoor measurements.

When I read the literature, going right back to such basics as Phil Jones’ early papers [1, 2], I found no mention of instrumental uncertainty in their discussions of sources of error.
The same is true in Jim Hansen’s papers, e.g. [3]. It was as though the instrumental readings themselves were canonical, and the only uncertainties were in inhomogeneities arising from such things as station moves, instrumental changes, change in time of observation, and so on.

But Phil Brohan’s paper in 2006 [4], on which Phil Jones was a co-author, discussed error analysis more thoroughly than had been done previously. Warwick has posted here and here on the change that occurred in 2005, when the UK Met Office took over from the Climatic Research Unit of the UEA in compiling the global average temperature record. So maybe Phil Brohan decided to be more forthcoming about their error models.

The error analysis in Brohan, 2006, revealed that they’d been taking a signal averaging approach to instrumental error. The assumption was that all the instrumental error was random, independent, identically distributed (iid) error. This kind of error averages out to zero when large numbers of measurements are averaged together.
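As a minimal illustration of why that assumption matters (my own sketch, with made-up numbers, not anything from the papers): random iid error shrinks as 1/sqrt(N) under averaging, but a shared systematic bias survives no matter how many readings are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
true_temp = 15.0   # C, hypothetical true air temperature
bias = 0.3         # C, hypothetical systematic offset shared by all readings

for n in (10, 100, 10_000):
    # each reading = truth + shared bias + random iid noise of +/-0.2 C scale
    readings = true_temp + bias + rng.normal(0.0, 0.2, size=n)
    print(f"N = {n:6d}: mean error = {readings.mean() - true_temp:+.3f} C")
# The random part averages toward zero, but the mean error converges
# to the +0.3 C bias; averaging cannot remove systematic error.
```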

To make the long story short, it turned out that no one had ever surveyed the temperature sensors of climate stations to see whether the assumption of uniformly iid measurement errors could be empirically validated.

That led to my study, and the study led to the paper that is just out in Energy and Environment [5]. Here’s the title and the abstract:

Title: “Uncertainty in the Global Average Surface Air Temperature Index: A Representative Lower Limit”

Abstract: “Sensor measurement uncertainty has never been fully considered in prior appraisals of global average surface air temperature. The estimated average (+/-)0.2 C station error has been incorrectly assessed as random, and the systematic error from uncontrolled variables has been invariably neglected. The systematic errors in measurements from three ideally sited and maintained temperature sensors are calculated herein. Combined with the (+/-)0.2 C average station error, a representative lower-limit uncertainty of (+/-)0.46 C was found for any global annual surface air temperature anomaly. This (+/-)0.46 C reveals that the global surface air temperature anomaly trend from 1880 through 2000 is statistically indistinguishable from 0 C, and represents a lower limit of calibration uncertainty for climate models and for any prospective physically justifiable proxy reconstruction of paleo-temperature. The rate and magnitude of 20th century warming are thus unknowable, and suggestions of an unprecedented trend in 20th century global air temperature are unsustainable.”

Here’s the upshot of the study in graphical form: Figure 3 from the paper, showing the 20th century average surface air temperature trend, with the lower limit of instrumental uncertainty as grey error bars.
[Figure 3 from Pat Frank’s E&E paper]
Figure Legend: (•), the global surface air temperature anomaly series through 2009, as updated on 18 February 2010, (data.giss.nasa.gov/gistemp/graphs/). The grey error bars show the annual anomaly lower-limit uncertainty of (+/-)0.46 C.

The lower limit of error was based in part on the systematic error displayed by the Maximum-Minimum Temperature System (MMTS) under ideal site conditions. I chose the MMTS because that sensor has been the replacement instrument of choice brought into the USHCN since about 1990.

This lower limit of instrumental uncertainty implies that Earth’s fever is indistinguishable from zero Celsius, at the 1σ level, across the entire 20th century.
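For readers who want a feel for the arithmetic, here is a hedged back-of-envelope sketch. It assumes, purely for illustration, that the (+/-)0.2 C station error and the systematic sensor error combine in quadrature; the paper itself gives the actual derivation of the (+/-)0.46 C lower limit.

```python
import math

station_error = 0.2   # C, average station error treated as random
lower_limit = 0.46    # C, the paper's representative lower-limit uncertainty

# If the combination were a simple root-sum-square (an assumption here),
# the implied systematic component would be:
systematic = math.sqrt(lower_limit**2 - station_error**2)
print(f"implied systematic component ~ +/-{systematic:.2f} C")
# ~ +/-0.41 C, i.e. the systematic term dominates the random station error.
```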

References:
1. Jones, P.D., Raper, S.C.B., Bradley, R.S., Diaz, H.F., Kelly, P.M. and Wigley, T.M.L., Northern Hemisphere Surface Air Temperature Variations: 1851-1984, Journal of Climate and Applied Meteorology, 1986, 25 (2), 161-179.

2. Jones, P.D., Raper, S.C.B. and Wigley, T.M.L., Southern Hemisphere Surface Air Temperature Variations: 1851-1984, Journal of Climate and Applied Meteorology, 1986, 25 (9), 1213-1230.

3. Hansen, J. and Lebedeff, S., Global Trends of Measured Surface Air Temperature, J. Geophys. Res., 1987, 92 (D11), 13345-13372.

4. Brohan, P., Kennedy, J.J., Harris, I., Tett, S.F.B. and Jones, P.D., Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850, J. Geophys. Res., 2006, 111 D12106 1-21; doi:10.1029/2005JD006548; see www.cru.uea.ac.uk/cru/info/warming/.

5. Frank, P., Uncertainty in the Global Average Surface Air Temperature Index: A Representative Lower Limit, Energy & Environment, 2010, 21 (8), 969-989.

38 thoughts on “Do We Really Know Earth’s Temperature?”

  1. In my view the errors are underestimated. Errors are additive. In testing a material, e.g. concrete or a bore core, there are at least three sets of errors: a) sampling, b) sample preparation and c) testing. In the case of temperature measurement I suggest the following error sets: 1/ location (Anthony Watts has plenty of photos of poorly located measuring stations); this error has a bias on the high side but is also variable, depending on wind direction, whether some nearby equipment such as an air conditioner is on or off, etc. I suggest the error is +2.0 to 0.5 C. 2/ instrument error; my experience with calibration of thermocouples and various other types of thermometers is +/-0.5 C. 3/ recording, transcription and timing problems; I estimate +/-0.5 C (note sign problems with data around 0 C in Canadian stations). The total errors are then +3.5 C to -1.0 C. The estimates of global temperature are thus pure nonsense.

  2. Good luck with your paper Dr. Frank. I am not surprised that no one did any proper error analysis. Climate science seems to be criminally sloppy.

    Dave N., Jeff Id at the Air Vent (noconsensus.wordpress.com/) did a post on that a while back showing how you can draw a horizontal line straight through the noise proving there is no trend at all, even though he thinks the trend is slightly positive.

    He also did a post even further back that showed how the last 1500 years looks relatively flat if you actually use a graph with +/- 10C. AGW is such BS, but it is costing us a lot of money.

  3. Thanks to everyone for their interest.

    #1 cementafriend, errors usually sum as the sqrt(sum of squares), but I agree with you that a full accounting of error in the global temperature record will be much higher than usually admitted. We’re all waiting for Anthony Watts and Joe D’Aleo to publish their study of site errors in the USHCN network. No doubt but that it’ll be a stunning revelation. I just concentrated on the instrumental error, under ideal conditions, to get a handle on the minimal expected measurement uncertainty. That, by itself, turns out to be enough to render the global temperature trend moot.

    #2, Dave, you’re right and that’s the whole conclusion.

    #3, thanks, woodNfish. And pattoh, #4: looking at your graph we can follow the Idsos and observe that there’s not much evidence of global warming there.

    To all, I mistakenly sent Warwick a prior version of the article abstract. I’m guessing that in a day or so he’ll replace it with the correct one that I just sent him. Just so you know there’s no fancy two-step going on. 🙂

  4. There is a further error-measurement study by Jane Warne of the BOM at Broadmeadows, north of Melbourne:

    www.geoffstuff.com/Jane%20Warne%20thermometry%20Broadmeadows.pdf

    There is a quite fundamental question which I have never seen addressed. In reconciling temperatures over the land or sea surface, should one measure 1 mm, 1 cm, 1 m, 1 km or 10 km above the surface, or at some intermediate value determined by experiment?

    If the experiment has been done, what was its purpose and how was it established as the “right” altitude?

    It seems that we are adopting initial conditions for complicated model projections like GCMs based on the convenience of being able to read a thermometer at about eye level.

    Is that a scientific approach?

  5. Dr Frank I’m posting links on other sites so I hope you get more informed comments.

    and Geoff thanks for that link; I’m going to do the same with it

  6. a link to Dr McKitrick’s paper

    Socioeconomic Patterns in Climate Data
    rossmckitrick.weebly.com/uploads/4/8/0/8/4808045/final_jesm_dec2010.formatted.pdf
    “Overall we find that the evidence for contamination of climatic data is robust across numerous data sets, it is not undermined by controlling for spatial autocorrelation, and the patterns are not explained by climate models. Consequently we conclude that important data products used for the analysis of climate change over global land surfaces may be contaminated with socioeconomic patterns related to urbanization and other socioeconomic processes.”

  7. When it comes to systematic errors, some of them can be measured and others are estimated. The general rule of thumb in such cases is to take the root-mean-square of the measured systematic errors and then sum that number with the estimated systematic errors to get the total error.

    Most scientists seem to rms all the errors, and, as such, underestimate the total error.
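    A minimal sketch of that rule of thumb, with made-up numbers, reading “root-mean-square” as the usual quadrature (root-sum-square) combination:

```python
import math

measured = [0.15, 0.20, 0.10]   # C, systematic errors that were measured
estimated = [0.25]              # C, systematic errors that are only estimated

# measured terms combine in quadrature; estimated terms then add linearly
quadrature = math.sqrt(sum(e**2 for e in measured))
total = quadrature + sum(estimated)
print(f"quadrature of measured terms: +/-{quadrature:.2f} C")  # +/-0.27 C
print(f"total error: +/-{total:.2f} C")                        # +/-0.52 C
```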

  8. Pat, have you seen the following? wattsupwiththat.com/2011/01/16/the-past-is-not-what-it-used-to-be-gw-tiger-tale/#more-31814 GISS’s own data is used to estimate a bias of 0.3 C.

    Geoff Sherrington, one cannot use 1 km or 10 km heights because they would be affected by the lapse rate (the environmental lapse rate is 6.49 K or C per 1000 m). Ground-level temperature depends very much on the emissivity of the surface and the incoming radiation from the sun. I imagine that the height of a Stevenson screen was determined for the convenience of manual reading. This then became a standard for comparison. Some 70% of the earth’s surface is water. I may be wrong, but I do not think the temperatures over water are measured at the same height as on land. That would mean an additional error in supposed average global temperature measurements.
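    To put rough numbers on the lapse-rate point, here is a quick sketch using the standard environmental lapse rate (near the ground the real gradient can be much steeper, so this understates the sensitivity to measurement height):

```python
LAPSE_RATE = 6.49e-3   # C per metre, standard environmental lapse rate

for height_m in (0.01, 1.5, 1000.0, 10_000.0):
    offset = LAPSE_RATE * height_m
    print(f"{height_m:>9.2f} m above the surface: ~{offset:.3f} C cooler")
# Screen height (~1.5 m) implies only ~0.01 C of lapse-rate offset,
# while 1 km implies ~6.5 C -- far larger than the trends being measured.
```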

  9. #10, cementafriend, I’d seen that and was glad it was brought out. But if Hansen had to explain the 1999-2011 difference, he’d probably cite the cause as processing improvements rather than as errors in the actual temperature magnitudes.

    By the way, if anyone would like an official reprint of the article, send an email to: pfrank830 AT earthlink DOT net.

  10. The instrumental accuracy of Australia’s AWS thermometers is quoted to be 0.3 deg C.
    When quoting a measured difference in temperature, the accuracies of the individual readings are additive. Therefore the worst-case error in a change in temperature recorded by an AWS is 0.6 deg C.
    Conversely, for a temperature change to be quoted with an accuracy of 0.2 deg C would require each individual temperature reading to have an accuracy of 0.1 deg C.
    I might be wrong, but I’m pretty sure no thermometers used for weather observation exhibit this level of accuracy.
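    A small sketch of that arithmetic (the +/-0.3 deg C figure is the commenter’s; the quadrature line shows the alternative that applies if the two reading errors are independent):

```python
import math

acc = 0.3                                   # C, quoted accuracy per reading
worst_case = acc + acc                      # errors add linearly
independent = math.sqrt(acc**2 + acc**2)    # errors add in quadrature
print(f"worst case:  +/-{worst_case:.2f} C")    # +/-0.60 C
print(f"independent: +/-{independent:.2f} C")   # +/-0.42 C
# Either way, the error of a difference exceeds the oft-quoted +/-0.2 C.
```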

  11. What sort of thermometers do they use? Sorry to ask but I’m new to this.

    You can get calibrated (mercury) thermometers to 0.1 deg C, but they are quite expensive. I can’t see the typical Govt. Purchasing Dept. buying them [even at less than $150 each in bulk] when they could buy the “equivalent” at $1. Unless the respective BOMs bought them themselves, or kept instructing each new Purchasing Officer on what to buy, the cheap ones would be bought.

    Use of (or substitution/replacement with) ordinary thermometers introduces an error of over 0.5 deg. [if you’ve ever checked a box of them you’d know]. If they use thermocouples and automatic recording, how well are these calibrated at the start, and after time?

    And then there is the well-known problem of getting people to read them accurately. It is well known that in the USSR, the colder the temperature reported, the more heating oil was allocated by the Central Bureau. A certain tendency to “underestimate” developed, but it disappeared after the break-up, when there was no point in doing so. But have those records ever been corrected? If not, from 1985 to 1995 there would have been a jump in the supposed temperature regardless of any actual trend.

  12. I see there’s another great post by Ira Glickstein on WUWT.
    It’s about data bias:
    “This posting is about how the official climate Team has (mis)adjusted past temperature data to exaggerate warming, and how the low quality of measurement stations and their encroachment by urban heat island (UHI) developments have distorted the historical record.”
    wattsupwiththat.com/2011/01/16/the-past-is-not-what-it-used-to-be-gw-tiger-tale/#more-31814
    Ira Glickstein is the expert who had a previous post on WUWT on the race between 1934 and 1998 to be the highest temperature; I recall he had a ski slide for 1934 and a ski lift for 1998, and itemised the adjustments made to the two temperatures, with the result that 1934 lost the race; in his words, ‘bad luck to the old timer’.
    CONCLUSIONS:
    “It seems to me that my estimate of 0.3ºC for Data Bias and Station Quality is fully justified, but I am open to hearing the opinions of WUWT readers who may think I have over- (or under-) estimated this component of the supposed 0.8ºC rise in global temperatures since 1880.”

    Here’s the other Ira Glickstein post
    wattsupwiththat.com/2010/12/25/do-we-care-if-2010-is-the-warmist-year-in-history/
    selective quote
    “OOPS, the hot race continued after the FOIA email! I checked the tabular data at GISS Contiguous 48 U.S. Surface Air Temperature Anomaly (C) today and, guess what? Since the Sato FOIA email discussed above, GISS has continued their taxpayer-funded work on both 1998 and 1934. The Annual Mean for 1998 has increased to 1.32ºC, a gain of a bit over an 11th of a degree (+0.094ºC), while poor old 1934 has been beaten down to 1.2ºC, a loss of about a 20th of a degree (-0.049ºC). So, sad to say, 1934 has lost the hot race by about an eighth of a degree (0.12ºC). Tough loss for the old-timer.”

  13. #13, Graeme, most 20th century readings were made using specialized mercury-style thermometers inside a shelter — typically a Stevenson screen (aka Cotton Region Shelter). Over the last twenty years or so, these have been systematically replaced in North America and Europe with precision resistance thermometers inside gilled chambers, often aspirated. There’s a good run-down here of the various types of thermometers and shelters in use.

    In the laboratory, the best thermometers can be calibrated to (+/-)0.1 C, although the older mercury thermometers varied in precision and may not have markings every 0.1 C. However, the real question is precision and accuracy in the field, rather than in the lab.

    The screens and shelters help prevent sun and wind (among other factors) from distorting the temperature readings. But they’re not perfect, and there is systematic error in the temperature measurements.

    It’s pretty clear that climate scientists have simply assumed that all the measurement errors average away. But they’ve never surveyed the thermometers and sensors in the field to test this assumption and demonstrate its validity. After my own look at published material, the evidence is that this assumption doesn’t hold at all. But in any case, such negligence is hardly the way to do experimental science, and certainly no way to justify forcing huge economic dislocations.

  14. Nice analysis, Pat.

    @#5 You only sum uncertainties in quadrature if you know that they are uncorrelated. They are additive if they are totally correlated. Otherwise, you have to understand the covariance and use that for the combination.
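    A minimal sketch of that covariance point, with illustrative numbers: the combined uncertainty runs from quadrature (rho = 0) up to straight addition (rho = 1).

```python
import math

def combined_sigma(s1, s2, rho):
    """Uncertainty of x1 + x2 when the errors have correlation rho."""
    return math.sqrt(s1**2 + s2**2 + 2.0 * rho * s1 * s2)

s1, s2 = 0.2, 0.3   # C, example uncertainties
for rho in (0.0, 0.5, 1.0):
    print(f"rho = {rho:.1f}: +/-{combined_sigma(s1, s2, rho):.3f} C")
# rho = 0.0 gives 0.361 C (quadrature); rho = 1.0 gives 0.500 C (simple sum).
```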

    I have actually been concerned with what a “mean global temperature” is. I know it can be computed, but does it have any real meaning to the climate?
    depriest-mpu.blogspot.com/2008/05/it-doesnt-add-up.html

  15. #15 Pat Frank – thanks. Were the thermometers in use in, say, 1910 as precise as those in 1980? Were they ever re-calibrated? I know it was done with one set of data from the 1700s (I think Yale Uni.) with the original thermometer(s?), and the temperature records were modified because of the big difference between the original and modern thermometers. [sorry, I’ve lost the reference; I think H. Lamb].

    Unless the instruments are/were calibrated regularly, then any instrumental errors are “locked into the record”. I don’t disagree at all with your statement about the possible error range. It is more than likely that with human error it could be larger, e.g. the policeman who checked the temperature at the same time every morning at 9 a.m. It turned out in practice to be after breakfast, which varied in time with the seasons. Equally, other readings were dependent on him not being busy at that time, even to the extent of records for 3 days when his diary showed him a hundred miles away from the station house. See also the Russian note in my original.

    We do know that the temperature (at least in Northern Europe) went up between 1880 and 1940, because the Icelanders were able to resume growing oats/barley in the 1920s after a hiatus of 400 years or more. But to claim that the “temperature of the Earth” (whatever that is) can be measured to hundredths of a degree, as is now claimed, is ludicrous. They are making the old error of “4 decimal places must be more accurate than 1 place” when they should (as you say) be wondering how accurate the original readings are.

  16. #16, Russ, I don’t dispute your point about accounting for the correlation of errors. One typical way errors can be correlated is in parametrized fits to serial data (e.g., time series or energy series), where the fitted parameters are correlated. Often systematic and statistical uncertainties are separated out and reported separately, rather than added into a total uncertainty. Which way one goes on this seems to be a convention rather than a rule.

    But in the article I was concerned with single station temperatures, and the uncertainty per measurement. I chose to combine the data on a per-measurement basis to minimize the total uncertainty, in pursuit of a lower limit. When these data are combined among the several thousand stations there shouldn’t be correlation of error across the surface stations.

    Your question about the “meaning” of a global average temperature really goes to the heart of the science. Chris Essex wrote a paper with Ross McKitrick and Bjarne Andresen, titled “Does a Global Temperature Exist?” (2007) J. Non-Equil. Thermodyn. 32, 1-27, in which they show that “global temperature” is a statistic with no physical meaning. Naturally, that fundamental flaw in current climate science has been roundly ignored by the AGW players.

    Here’s the abstract to their paper: “Physical, mathematical, and observational grounds are employed to show that there is no physically meaningful global temperature for the Earth in the context of the issue of global warming. While it is always possible to construct statistics for any given set of local temperature data, an infinite range of such statistics is mathematically permissible if physical principles provide no explicit basis for choosing among them. Distinct and equally valid statistical rules can and do show opposite trends when applied to the results of computations from physical models and real data in the atmosphere. A given temperature field can be interpreted as both “warming” and “cooling” simultaneously, making the concept of warming in the context of the issue of global warming physically ill-posed.”

  17. #17, Graeme, not only were the thermometers in 1910 generally less accurate than those in 1980, but often the shelter in which the thermometer was housed was non-standard. The Cotton Region Shelter (aka the Stevenson Screen) spread into general use after 1880 or so, but installation was not globally systematic. Non-standard shelters produce non-standard systematic errors.

    Calibration records are sparse for early thermometers, and not very good through to about the last third of the 20th century. Remember, it was only after about 1980 that people began really trying to use surface temperatures for global climate studies, rather than for local meteorology. So, there are attempts to make rational judgments about the uncertainty of, say, pre-1940 temperature measurements, but they’re all models and not really based on hard information. Mostly the available data are combed for station moves, data “inhomogeneities” and outliers.

    So, really, the uncertainty in temperature measurements before about 1980 is a bit of a crap-shoot. The assumption that measurement errors average away to near zero, ‘so we needn’t worry about them,’ seems to have governed the thinking throughout, and so that error has been pretty much ignored.

    IMO, the only way to know that climate has warmed is through observational information, such as you cited, along with the timing of Spring migrations, the advance of the northern tree line, etc. But putting a physically defensible number on it seems impossible.

  18. Pat, thank you for your patience. I can’t help wondering about the 19th century warming, which is usually shown as negligible, yet the glaciers (outside Antarctica) melted quite a lot, almost as much as in the 20th century.
    There was also the poor spread of thermometers, which makes it hard to think that the given figure is anything other than “politically correct”.

    Re #14: The 1998 figure would include the Russian faked figures. When some faked lower temperatures to get more heating oil, it punished the honest, who would have faced community pressure to undercut their next readings. Even if only 30% reduced their annual figures by 2.5 deg., at 12.5% of the total that means a gain (1985 – 1995) of 0.094 deg.
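    (Checking that arithmetic, on the assumption that the 12.5% is the affected stations’ weight in the global mean: 0.30 × 2.5 deg. × 0.125 ≈ 0.094 deg.)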

    Lastly, trading on your patience – would the rising CO2 also help the trees spread North into Canada and Siberia?
    Regards

  19. Graeme, thanks for your interest. 🙂

    I can’t speak much to your questions. From what I’ve read, glaciers often advance and retreat according to their own internal dynamic and local effects. It’s hard to draw large conclusions from the relatively few glaciers that have been monitored.

    You raise an interesting question about Russians possibly faking low temperatures to get more heating oil. I wouldn’t doubt it, but don’t know of it. Warwick knows a lot about northern Russian temperatures, though. Do you know of any fakery going on, Warwick? 🙂

    I know that CO2 will help with drought resistance, but haven’t seen any literature about CO2 helping with cold tolerance. It’s conceivable that higher CO2 might help a tree lay down deeper bark against freezing, and may also help conserve energy (as ATP) by requiring less effort to get the necessary CO2. But I’ve not researched the question.

    Sorry I’m not much help here. Searching Google scholar might help you answer the question though.

  20. Pat,
    I’m very happy to see that you have addressed the issue of measurement error. In my work, our customers ask us frequently to perform gauge R & R studies on our instruments. The lack of concern about error in the field of climate science has left me wondering whether the subject is no longer taught in universities.

  21. That remains an amazing story, Warwick. As with the UEA data, you were clearly early on to that one, too. How does it feel to be a prophet? 🙂 But the story seems to have disappeared. Is there important stuff going on behind closed doors, or will the scandal of the Russian sites just evaporate away, like so much of the abuse of science in that field?

    #23, thanks, Brooks. You raise a good point. After my experience submitting my paper first to JAMC, I was left wondering whether a lab on instrumental methods (and error) is any part of the climate science curriculum.

  22. Pat, I seem to be making heavy weather of this (if you will pardon the pun).

    1. The reference to Russian errors was to internal faking, not that in the UK. I don’t have the original reference – it occurs in Plimer, Heaven + Earth, page 379, but without a reference. There is also some comment in “Fabricating Temperatures on the DEW Line” on ‘Watts Up…’ – see the comments by Bob Tisdale (but his link is for sea water temperature) and Leebert.

    2. The CO2 query was based on the (in)famous bristlecone pine data in the Hockey Stick Marks 1, 2 & 3, where it was incorrectly used as ‘proof’ of warming, rather than extra fertilizer helping pines to grow more in a marginal climate. Hence the possibility on the edge of the tree line. I found nothing pertinent on Google Scholar.

    3. While on WattsUp there are some nice things said about your paper. There is also a paper, ‘Metrology of Thermometers’, and a reference to Dr A Burns’ comment (January 20, 2011 at 4:52 pm): “Temperatures are now measured electronically to 0.1 degrees but have been recorded to only +/- 0.5 degrees F” – and to www.srh.noaa.gov/ohx/dad/coop/EQUIPMENT.pdf, page 11 (through to page 16). I read this with disbelieving eyes. I was going to chide you for using ‘crapshoot’, but the first 4 letters apply to the NOAA effort. I won’t waste your time, but I do like their comment about “using a good thermometer” – as distinct from an immoral one? And I didn’t know that temperatures read to +/- 1 deg. F (with error of +/- 2 deg. minimum) could be described as “accurate” and averaged to 3 decimal places.

    From personal experience I know that commercial (0-110 C, full immersion, CALIBRATED) thermometers can be accurate at 0 & 100 C, but inaccurate at 25 C (see also ‘Metrology of …’). But in my case most read low: one +0.3, two +0.2, three +/-0.1, seven more than -0.5 (one down to 23.7 deg!). That was out of 20, in 2 batches, from the same manufacturer (not Chinese). Were these good thermometers? I know not of their morals.
    I put 11 aside as inaccurate (>0.2) and marked the packet as such, yet other chemists used them anyway, on the basis that ‘a thermometer is a thermometer’. So what hope has a non-technical person of understanding?
    I hope that this is of interest.
    Regards and thanks for your patience.

  23. Thanks for the commentary, Graeme. WRT the thermometer issue, I guess so long as one doesn’t use them to false accuracy, and has only modest needs, everything is OK. The problem, of course, is reporting false accuracy as real, which seems to be the endemic problem in surface air temperature studies.

    In my conversations on WUWT and by email, it appears that systematic instrumental error has never been seriously considered in appraisals of air temperature, and no one seems to realize that it can be correlated between adjacent climate stations.

    Lastly, “crapshoot” is an Americanism referring to the random outcome of a game of thrown dice (craps), not to firing a rifle at fecal throws. 🙂

  24. Warwick, I’ve been looking today at your Greenhouse Warming Scorecard

    (Updated 4/2/2006)

    www.warwickhughes.com/hoyt/scorecard.htm#http_58__47__47_climatesci_46_atmos_46_colostate_46_edu_47_2006_47_03_47_27_47_f

    I’m particularly interested in the surface temperature trend

    Here’s your table for that aspect
    Type of prediction: 1900-2000 surface temperature trend

    Model prediction: 1.1 to 3.3 C warming if all greenhouse gases are included (IPCC 2001)

    Actual measurements: Surface temperature warming of 0.6 C

    Comments: Predicted warming is 2 to 5 times greater than observed warming. Lindzen says it is 4 times too large. Alternative and additional sources of warming include the sun, UHI and land use changes, soot on snow, and other reasons. More on land use changes here. More on the warm bias in surface observations here.
    (end of quote)

    Are you intending to update that aspect at any time in the immediate future, to include data up to and including 2010?

    Just another comment – not all the links work. Is there some other place where what you previously had linked could be made available?

  25. I’ve just had a guest post on Jeffid’s the Air Vent, showing that between 1988 and 2010 there is a strange mutability in the trend of global air temp as produced by GISS, under Jim Hansen’s by-line.

    Folks here might be interested. According to GISS, in 1988 the early 20th century warmed at about the same rate as the late 20th century. By 1999, the late 20th century warmed 2.3 times faster, increasing to 2.8 times faster by 2010.

    This increase in rate wasn’t due to an accelerating late 20th century trend. It’s mostly due to modifications of the 1880-1920 record.

    Do the systematic changes show an increasingly sophisticated understanding of early 20th century natural variability? A better perception, perhaps, of UHI effects or station site inhomogeneities? None of that seems likely.

    Rather, it seems more likely that anthropogenic climate change has much more to do with the climate data than it does with the climate itself.
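    For anyone who wants to reproduce that comparison, here is a minimal sketch; the anomaly series below is only a zero placeholder, where the published GISS annual series for a given data-set vintage would go.

```python
import numpy as np

def decadal_trend(years, temps):
    """Least-squares slope, converted to C per decade."""
    return np.polyfit(years, temps, 1)[0] * 10.0

years = np.arange(1880, 2011)
anomalies = np.zeros(years.size)   # placeholder for a published GISS series

early = (years >= 1880) & (years <= 1940)
late = (years >= 1960) & (years <= 2010)
print(f"early trend: {decadal_trend(years[early], anomalies[early]):.3f} C/decade")
print(f"late trend:  {decadal_trend(years[late], anomalies[late]):.3f} C/decade")
```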

  26. Pat, thanks for your post. I was extremely interested in your global temperature post; I wonder, for lay people like me, would you be able to place a summary of your assessment (or findings) at the end of your posts?
    I can read what you say, but I feel more assured passing on your posts (or any other expert’s, for that matter) if I have a summary I can copy and post.
    Sorry, from a layman who has taken an interest for a few years but is not confident to make her own assessments from what a science expert says.
    But in that respect I’m similar to a number of sceptics.
    WE NEED SUMMARIES OF THE RESULTS YOU EXPERTS FIND

  27. Hi Val — you’re right, and I apologize for not thinking of that. There’s an implicit summary of the tAV essay in post #28, right above your post. Start with the second sentence in paragraph 2, “According to GISS…” and go all the way to the end.

    The last sentence means that, since at least 1996, the GISS global temperature data set seems to me to have been modified step-by-step so that the global climate looks like it warmed faster after 1960 than it did between 1880 and 1940.

    In 1988, the rates were the same. By 2010, the late rate was 2.8 times faster than the earlier rate.

  28. I am in no doubt, Pat, that there can be huge human impacts on climate data.
    Take the amazing agreement I found for UAH satellites vs CRUT2 over the 48 States. Not possible without assiduous stroking & tweaking.
    Then I had another post on the subject last year, “Surface minus satellites – some differences look political”, showing very large differences over China and Russia.
    I like your discovery in Fig 4 – showing the trend tweaked up a tad in CRUT2 compared to Jones 1994.

  29. Warwick, I was unaware of your previous posts, thanks. You’ve found some very strange things. And I agree that the amazingly small difference between the HadCRUT2 and UAH trends over the US is very difficult to reconcile with accident.

    You also mentioned the protests of Russian scientists, following Climategate, of HadCRU’s uncritical use of the warm northern Russian station data. Since then we’ve had the now-classical climatological approach of remaining studiously silent about any data that contradicts the AGW explanation.

    If someone ever toted up your work, with the work of Steve McIntyre, Ross McKitrick, Pat Michaels, Willie Soon, Demetris Koutsoyiannis, and so many others, the weight of contradiction would be overwhelming. But their work is all greeted with silence, passed over, and left unremarked to disappear into the journals. The MSM never discusses them, they’re never assessed or referenced in the AGW climate literature, the surface temperature trend remains uncorrected, GCMs continue to be used naively to propagate scare stories, and generally the AGW beat just goes determinedly on, falsification be damned.

    It’s worse than shameful.

  30. Pat and Warwick, thanks for your replies. I have problems with graphs; well, I can see what the graphs outline, but I always have a problem with that word ‘anomalies’, so for me I need verbiage. But thanks for taking the time to explain.

  31. I have argued on pro-warming blogs on numerous occasions. I had a particularly frank exchange of views with Michael Mann some years ago, some of which has been cited on CA. In short, I’m not a strong pro-warmer.

    However, I’ve been disappointed by the lack of comment on the WUWT surfacestations project on ‘sceptic’ sites – and particularly on sites like this which claim a special interest in UHI and related siting issues.

    For some time I’ve thought sceptics were barking up the wrong tree regarding the UHI effect, and that Anthony’s surfacestations project would not reveal a significant UHI influence. The surface temperature record is not perfect, but I am confident that the overall trend is sound. From a GW (not necessarily AGW) perspective it is the trend that is important.

  32. The stupid efforts of Government
    news.ninemsn.com.au/national/8248531/carbon-price-backed-by-lower-house

    The Gillard government’s bid to introduce a carbon tax has been boosted with the lower house of parliament backing the idea of a carbon price.

    Labor MP Stephen Jones moved a motion calling on the House of Representatives to acknowledge a carbon price as an “essential step in reducing carbon pollution”.

    It also noted the efforts already under way by government and business in developing green jobs.

    Independents Andrew Wilkie, Rob Oakeshott and Tony Windsor, and Australian Greens MP Adam Bandt, backed the motion, which passed the lower house on Thursday night.

    The coalition and independent Bob Katter voted against the motion, which passed 74 votes to 72.

    WA independent Tony Crook was absent from the vote, but has previously indicated he is open to supporting the carbon tax that Labor wants in place by mid 2012.

    The government needs the support of at least four crossbenchers to get the measure through the lower house of parliament.

  33. #34, John, take a look at my E&E paper, free pdf download here, by the generosity of Bill Hughes at Multi-Science.

    No one compiling the surface temperature record has ever paid attention to systematic instrumental error. Systematic sensor error follows solar irradiation, albedo, and wind speed — the same physical factors that influence surface air temperature. There can be little doubt that systematic sensor error correlates over distance and across surface temperature stations in much the same way that the recorded temperatures themselves do.

    The surface air temperature trend is badly contaminated with systematic, not random, error. We can infer a global warming trend from other observables, such as changes in growing season and the migration of the northern treeline, but no one knows the magnitude of the warming trend or the rate.

    I have another paper coming out at E&E pretty soon (haven’t gotten the proofs yet). Apart from other things, it shows that the natural variability over 1900-2000 is about (+/-)0.28 C. Two sigma of that accounts for the entire change in 20th century global average air temperature.

    If one wanted to ignore systematic error and credit the air temperature numbers, the centennial trend itself can be interpreted as little more than the meanderings of a red noise random process with some underlying periodicity (e.g., PDO, AMO, etc.)

  34. I’ve posted a comment on the Air Vent updating my analysis of published GISS global air temperature trends. It goes like this: from 1987 through 1996, Hansen and GISS used the Monthly Climatic Data of the World (MCDW) data set, and used the same processing methods described in Hansen and Lebedeff, 1987. In 1996, Hansen, et al., discussed SSTs but didn’t include them in their 1880-1995 anomaly data set.

    In 1999, Hansen, et al., combined GHCN data with MCDW data into a single set, but used the same data cleaning and processing methods as before.

    The 1988, 1996, and 1999 data sets I show here are all restricted to land-surface station data. So, all of the differences between the 1988, 1996, and 1999 data sets are real. Whatever choices they made to combine the various stations ineluctably resulted in steepening the difference between the 1880-1940 warming and the 1960-1988 (-1996) (-1999) warming.

    In the same 1999 paper, however, Hansen reported combining SST data into his land-station data set. He discussed the differences between the GISS SST+land and GISS land-only trends.

    Hansen plotted his land+SST data only as 5-year means, rather than annual means, and so in digitizing the published data sets I didn’t want to bother with those.

    However, to follow up in detail, tonight I digitized the land+SST plot (Plate 3b) from GISS 1999, and lo and behold, the difference between the 1999 land-only anomalies and the 1999 land+sea anomalies shows the same residual periodicity as the 2010 minus 1999 (land-only) anomalies shown by the blue difference line (and cosine fit) in my Figure 3.

    After 1999, GISS included SSTs in their global data set, and also switched to using the GHCN land surface station data. So, it seems that the residual periodicity entered the data set with the SSTs. This looks to be a clear signature of air temperature oscillations arising from the net thermal phase changes in the world ocean.

    It’s interesting that Plate A1b in the Appendix of the GISS 1999 paper showed the [CRU 1999] minus [GISS (land-only) 1999] difference anomalies.

    CRU had included SSTs in their global data and the (CRU 1999 minus GISS(land) 1999) difference showed the same residual periodicity as I show in Figure 3 for GISS 2010 minus GISS 1999. But this periodicity between the data sets is passed over in silence by Hansen, et al., 1999.

    After all that, I now understand something more about the time-wise changes in the GISS global air anomaly data sets.

    It remains true that every shift in methodology of station choice between 1988 and 2010 ended up making the late 20th century warming appear ever larger than the early 20th century warming. That was the original message of my post, and there is no reason to change it.

    Some other interesting things came out of this extended examination of published global temperature trends, which might be the subject of another post at the Air Vent (Jeffid willing).

  35. I now have another analysis of the global surface air temperature trend posted at the Air Vent.

    This one shows that both the GISS and the CRU 1880-2010 anomaly trend can be represented to a high degree as a single cosine-like oscillation, with a period of about 60 years, and a uniformly linear trend. That, plus the year-to-year temperature wiggles.

    The cosine-like oscillation is probably the net sum of all the thermal periods of the world ocean (PDO+AMO+IOO+…, etc.). It turns out to be responsible for all the steep temperature slope changes during the last 130 years. Subtract it out, and all that’s left is a quite linear trend of about 0.058 C/decade.
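    For the curious, a minimal sketch of that kind of fit. The series here is synthetic, generated with roughly the reported shape, since the GISS/CRU data can’t be reproduced inline; the point is the cosine-plus-line decomposition itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, amp, period, phase, slope, intercept):
    # cosine-like oscillation plus a uniform linear trend
    return amp * np.cos(2 * np.pi * t / period + phase) + slope * t + intercept

years = np.arange(1880, 2011, dtype=float)

# synthetic stand-in with a ~60-yr oscillation and ~0.058 C/decade trend
rng = np.random.default_rng(1)
anomalies = (model(years, 0.12, 60.0, 1.0, 0.0058, -11.0)
             + rng.normal(0.0, 0.05, years.size))

popt, _ = curve_fit(model, years, anomalies,
                    p0=[0.1, 60.0, 0.0, 0.005, -10.0])
print(f"period ~ {popt[1]:.0f} yr, trend ~ {popt[3] * 10:.3f} C/decade")
```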

    The first question is: if the late 20th century/early 21st century has warmed at about the same rate as the early 20th century when climate was driven purely by nature, then where is the extra warming due to GHGs?

    But the analysis could be extended to separate early-period/late-period analyses. When that was done, the period 1960-2010 turned out to have warmed about 0.03 C/decade faster than the period 1880-1940.

    This was a wonderful discovery. One could now take the GHG forcing during that same period, which I happened to have already calculated for my Skeptic paper, and plot the 1960-2010 increasing GHG forcing against the 1960-2010 linear temperature change. The slope will directly yield the climate sensitivity of Earth to GHG forcing, in Celsius/W-m^-2. This number is the holy grail of AGW climate science.

    Temperature is supposed to increase linearly with forcing, so one would expect to plot up a straight line correlation between extra temperature and extra forcing.

    Nope. It turned out to be a concave downward line. A linear fit, though, produced an estimate of the sensitivity. This turned out to be 0.09 C/W-m^-2, which is about 11% of the IPCC mid-range estimate.
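    The slope extraction itself is just a linear regression of extra temperature on extra forcing; here is a sketch with placeholder numbers built to return the reported 0.09 C/W-m^-2 (the real 1960-2010 forcing and temperature series would replace them):

```python
import numpy as np

forcing = np.linspace(0.0, 1.5, 51)   # hypothetical extra GHG forcing, W/m^2
delta_t = 0.09 * forcing              # placeholder matching the reported slope

sensitivity = np.polyfit(forcing, delta_t, 1)[0]
print(f"sensitivity ~ {sensitivity:.2f} C per W/m^2")
```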

    So, Earth seems to be contradicting the IPCC. Yet again.

    For the full analysis and discussion, please go to Jeffid’s tAv, linked in the first sentence.

    Finally, I’d like to publicly thank Jeff for giving me the opportunity to guest-post an essay on his site.

    And thank-you, too, Warwick, for being here. 🙂
