Archive for the ‘statistics’ Category

Adjusting multi-site and single-site temperature data

2009 December 8 2 comments

NIWA offer, as their explanation for the temperature adjustments, the paper

  • Rhoades, D.A. and Salinger, M.J., 1993: Adjustment of temperature and rainfall measurements for site changes. International Journal of Climatology 13, 899–913.

They do not, however, link to it or give its digital object identifier (doi:10.1002/joc.3370130807).

The abstract states

Methods are presented for estimating the effect of known site changes on temperature and rainfall measurements. Parallel cumulative sums of seasonally adjusted series from neighbouring stations are a useful exploratory tool for recognizing site-change effects at a station that has a number of near neighbours. For temperature data, a site-change effect can be estimated by a difference between the target station and weighted mean of neighbouring stations, comparing equal periods before and after the site change. For rainfall the method is similar, except for a logarithmic transformation. Examples are given. In the case of isolated stations, the estimation is necessarily more subjective, but a variety of graphical and analytical techniques are useful aids for deciding how to adjust for a site change. (Emphasis added)

I did not fully follow all the maths in the paper. It was not particularly complex but I would need to spend some time doing examples to completely grasp it.

In the introduction they define “site change”,

We use the term site change to mean any sudden change of non-meteorological origin. Gradual changes can seldom be assigned with any certainty to non-meteorological causes. Where long-term homogeneous series are required, for example, for studies of climate change, it is best to choose stations that are unlikely to have been affected by gradual changes in shading or urbanization. This is no easy task. Karl et al. (1988) have concluded that urban effects on temperature are detectable even for small towns with a population under 10000.

…This paper is concerned with the estimation of site-change effects when the times of changes are known a priori, such as when the station was moved or the instrument replaced.

The paper predominantly discusses adjustments to data when there are site changes and there are surrounding overlapping data sets (nearby thermometers) that can be used to assess whether there needs to be adjustment.
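The neighbour-based estimate they describe (a difference between the target station and a mean of neighbouring stations, over equal periods before and after the site change) can be sketched roughly as follows. This is my own illustration, not the paper's code: the station series, the equal-weight neighbour mean, and all numbers are invented, and a real analysis would weight the neighbours and seasonally adjust the series first.

```python
# Sketch of the neighbour-comparison estimate described above. Illustration
# only: the data are invented, and real analyses weight the neighbouring
# stations and deseasonalise the series before differencing.

def site_change_effect(target, neighbour_mean, change_index, window):
    """Estimate a site-change effect as the shift in the (target - neighbours)
    difference series, comparing equal periods before and after the change."""
    diffs = [t - n for t, n in zip(target, neighbour_mean)]
    before = diffs[change_index - window:change_index]
    after = diffs[change_index:change_index + window]
    return sum(after) / len(after) - sum(before) / len(before)

# Invented annual means: the target station reads 0.5 degC high from index 5 on.
target = [12.0, 12.1, 11.9, 12.0, 12.1, 12.6, 12.5, 12.7, 12.6, 12.5]
neighbours = [12.0, 12.1, 11.9, 12.0, 12.1, 12.1, 12.0, 12.2, 12.1, 12.0]
print(round(site_change_effect(target, neighbours, 5, 5), 2))  # → 0.5
```

Note that any real shift in climate during the comparison window is folded into the estimate, which is exactly the Christchurch Airport problem the authors describe for isolated stations.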

Later in the paper when discussing sites that have no overlapping data the authors state,

Such an adjustment involves much greater uncertainty than the adjustment of a station with many neighbours. A greater degree of subjectivity is inevitable. In the absence of corroborating data there is no way of knowing whether an apparent shift that coincides with a site change is due to the site change or not. However, several statistical procedures can be used alongside information on station histories to assist in the estimation of the effect of a site change. These include graphical examination of the data, simple statistical tests for detecting shifts applied to intervals of different length before and after the site change, and identification of the most prominent change points in the series independently of known site changes. Finally, a subjective judgement must be made whether to adjust the data or not, taking into account the consistency of all the graphical and analytical evidence supporting the need for an adjustment and any other relevant information.

Moreover, when they apply this adjustment to a station in Christchurch to demonstrate their method, comparing it with the more accurate method used earlier in the paper, they significantly overestimate the difference:

The 1975 site change at Christchurch Airport is somewhat overestimated, when compared with the neighbouring stations analysis. The contrast between the estimates based on 2 years data before and after this site change is particularly marked. For the neighbouring stations analysis the estimate is 0.45°C (Table II); for the isolated station analysis the estimate is 1.58°C (Table V). This is to be expected when a site change coincides with an actual shift in temperature, as occurred in this case. The isolated station analysis then estimates the sum of the site change effect and the actual shift.

In their conclusion they note,

Adjustments for site changes can probably never be done once and for all. For stations with several neighbours, the decision to adjust for a site change usually can be taken with some confidence. The same cannot be said for isolated stations. However, large shifts can be recognized and corrected, albeit with some uncertainty. Ideally, for isolated stations, tests for site change effects would be incorporated into the estimation of long-term trends and periodicities as suggested by Ansley and Kohn (1989). This is not practicable at present on a routine basis, but may be in the future.

And

Whatever adjustment procedures are used, the presence of site changes causes an accumulating uncertainty when comparing observation that are more distant in time. The cumulative uncertainties associated with site change effects, whether adjustments are made or not, are often large compared with effects appearing in studies of long-term climate change. For this reason it is a good idea to publish the standard errors of site change effects along with homogenized records, whether adjustments are made or not. This would help ensure that, in subsequent analyses, not too much reliance is placed on the record of any one station. (Emphasis added)
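The accumulating uncertainty the authors highlight is easy to illustrate: if each site-change adjustment carries an independent standard error, the uncertainty in comparing observations separated by several site changes grows as the root sum of squares. A minimal sketch, with invented standard errors:

```python
import math

# Illustration only: hypothetical standard errors (degC) for three independent
# site-change adjustments lying between two observation dates. The combined
# uncertainty of the comparison grows as the root sum of squares.
standard_errors = [0.2, 0.3, 0.25]
combined = math.sqrt(sum(se ** 2 for se in standard_errors))
print(round(combined, 3))  # → 0.439
```

This is presumably the sense in which the authors say the cumulative uncertainties are "often large compared with effects appearing in studies of long-term climate change".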

Ironically, the methods suggested in this paper do not include the method used by NIWA in defending their Wellington data.


NIWA defends its adjustment of data

2009 November 27 Leave a comment

NIWA have released a statement confirming that the data showing a warming trend in New Zealand over the last 100 years was adjusted.

NIWA’s analysis of measured temperatures uses internationally accepted techniques, including making adjustments for changes such as movement of measurement sites.

The paper (and my post yesterday) suggested adjustment was the likely explanation. However, the graph and the surrounding paragraph fail to mention that the data is adjusted. I read a significant number of scientific papers, and they always reference both the raw and the adjusted data, labelling each. Some of those papers have statistical issues, but mislabelled data is not one of them.

NIWA go on to say,

Such site differences are significant and must be accounted for when analysing long-term changes in temperature. The Climate Science Coalition has not done this.

NIWA climate scientists have previously explained to members of the Coalition why such corrections must be made. NIWA’s Chief Climate Scientist, Dr David Wratt, says he’s very disappointed that the Coalition continue to ignore such advice and therefore to present misleading analyses.

Unfortunately this comment fails to identify, and thus address, the issue: "why" is not the question the Coalition is asking; it is "what" and "how". What is the adjustment, and how was it done? Treadgold (an author of the paper) writes,

We cannot account for adjustments, because we don’t know what they are. We ask only to know the adjustments that have been made, in detail, for all seven stations, and why.

Transparency demands that the specific reasons for data adjustment be given.

  • What stations have been adjusted?
  • When were they adjusted?
  • Is the adjustment stepwise or a trend?
  • Is there overlap of data when stations are shifted?
  • Does the overlapped data show good correlation?
  • Have adjustments been modified in subsequent years? Why?
  • What is the computer code that applies the adjustment?

This sort of information allows others to review the legitimacy of such decisions. And various groups can argue for and against these reasons, and the weight various reasons should be given.

Why the secrecy? The refusal to be open with data and theories is looked upon with suspicion, and rightly so.

Gareth Renowden writes a post explaining why adjustments are made to the data. The excessive rhetoric notwithstanding, the argument is plausible. But it still leaves questions unanswered. While the Wellington station may just be used as an example, what of the other 6 stations? Wellington may show a rise after adjustment, but this will be diluted when averaged across all the stations unless they all showed a rise. If they did, what is the explanation for them?

I am not fully convinced by NIWA's explanation, though. The Airport and Kelburn temperatures seem well correlated, with Kelburn cooler as it is at a higher altitude. And Thorndon and the Airport are both at the same elevation (sea level). But no correlation has been established between Thorndon and the other 2 locations.



Elevation is not the sole determiner of temperature. There may be other considerations that make Thorndon and the Airport different temperatures. If so, then the downward adjustment of the Thorndon data may be excessive. It should be easy to set up further measurements at Thorndon now and see how they correlate with Kelburn and the Airport. If they all correlate well, then we can establish a more accurate correction factor for the pre-1930 Thorndon data.
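A sketch of that check, with invented overlap data (in this toy example the Thorndon series sits a constant 0.8°C above Kelburn, so the correlation comes out as exactly 1 and the mean difference is the correction factor; real overlap data would be noisier):

```python
# Illustration only: invented monthly means for a hypothetical overlap period,
# not real Wellington data.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

thorndon = [15.2, 14.1, 12.3, 10.4, 8.9, 8.1, 8.8, 10.2, 11.9, 13.5, 14.6, 15.0]
kelburn  = [14.4, 13.3, 11.5, 9.6, 8.1, 7.3, 8.0, 9.4, 11.1, 12.7, 13.8, 14.2]

r = pearson(thorndon, kelburn)
offset = sum(t - k for t, k in zip(thorndon, kelburn)) / len(thorndon)
print(round(r, 3), round(offset, 2))  # → 1.0 0.8
```

If the correlation over a real overlap were high, the mean offset would be a defensible correction factor; if it were poor, that would suggest the sites differ in more than elevation.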

New Zealand not warming?

2009 November 26 18 comments

It seems to residents that the country has not been getting warmer over the last decade; so much so that advocates of global warming prefer the term climate change, so that any weather anomaly can be attributed to anthropogenic global warming. And people are willing to parrot claims that some parts of the world will get colder (this may be a prediction of the theory, but it should encourage one to consider such claims cautiously).

The New Zealand National Institute of Water and Atmospheric Research (NIWA) do not show significant change since 2000, but they do show an increase over the last century, as seen in this graph.

Graph. Mean annual temperature over New Zealand, from 1853 to 2008 inclusive, based on between 2 (from 1853) and 7 (from 1908) long-term station records. The blue and red bars show annual differences from the 1971 – 2000 average, the solid black line is a smoothed time series, and the dotted line is the linear trend over 1909 to 2008 (0.92°C/100 years).

Yesterday the New Zealand Climate Science Coalition released an article challenging this rise using NIWA’s own data. They plotted the temperatures from the NIWA source data and got this graph.

Whereas the first shows a rise of ~1°C per century, the second shows no discernible rise. The difference between the 2 graphs? The second uses raw data; the first (probably) has adjusted data.

About half the adjustments actually created a warming trend where none existed; the other half greatly exaggerated existing warming.

There are legitimate reasons why data can and should be adjusted. Cities grow and hence warm, so later temperatures may be warmer, especially overnight. Different thermometers may be used that show a consistent, measurable difference. But there are 2 comments to make about adjusting data. Firstly, adjusted data should be labelled as such, with the unadjusted data displayed alongside it and the factors it was adjusted for.

Secondly, it makes a difference whether adjusting data removes or produces an association. Frequently differences in data are seen because the attributes of the data sets are different. If we compare test scores between high schools to create a league table, it may be reasonable to correct for the number of children in different grades, as some schools may have more students at higher levels, or one school may only let its brightest children sit the test. But we should be more cautious about accepting an association that only appears after adjustment. It is not that there can be no difference; rather, it is that enough statistical manipulation can show a difference, and the reasons for the adjusted variables are then argued after the fact.

If you do find a difference after adjustment, you need to check that your adjustment factors are not associated with the variable under consideration (in this case you cannot adjust for time, as changes over time are what is being looked for), and you must validate your adjustment with an independent data set.

On top of the release of emails and computer code from the now infamous Climatic Research Unit at the University of East Anglia, UK, perhaps there might be some room for debate around the issues of climate change. Is it happening? Are humans responsible? Would it be detrimental? Should we pay attention to scientists who refuse to reveal their data and formulae?

Adjusting multiple choice examinations

2009 October 10 6 comments

Multiple choice examinations have several benefits. They have no intra- or inter-marker variability. In fact, marking can be automated. And I wouldn’t be surprised if they are as effective as any other system at evaluating material.

They need to be well written.

  • The correct answer needs to be clearly more correct than other options.
  • The correct answer should not be able to be guessed by the construction of the question.
  • The order of the answer options should be random.
  • A reasonable number of options needs to be given.
    • And the same number of options for every question.
  • A significant number of questions needs to be included.
    • The problem with multiple choice questions is the chance element. This can be reduced by increasing the number of questions.

If we have 20 questions with 4 options for each question, then random guessing will lead to people getting 5 correct on average (exam mark = 25%); that is, 20 / 4. However, the range of correct answers will be quite wide: some will get 1 correct (5%), others 10 (50%). Whereas 200 questions will mean people get 50 correct on average (exam mark still = 25%), but with a much narrower range: some may get 40 correct (20%), others 60 correct (30%).

Thus both exams when taken by people ignorant of the topic will give an average mark of 25%, but the chance of any particular individual getting a high mark is much greater with a smaller number of questions.

This seems obvious based on the examples above. Mathematically, the spread of marks is inversely related to the number of questions: the standard deviation of the mark is inversely proportional to the square root of the number of questions.
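This is just the binomial standard deviation: the number of correct guesses is Binomial(N, p) with p = 1/R, so the mark X/N has standard deviation √(p(1 − p)/N). A quick check for the 20- and 200-question exams above:

```python
import math

def guess_mark_sd(n_questions, n_options=4):
    """Standard deviation (in percent) of the mark from pure guessing.
    Correct answers are Binomial(n, p) with p = 1/n_options, so the
    mark X/n has standard deviation sqrt(p * (1 - p) / n)."""
    p = 1 / n_options
    return 100 * math.sqrt(p * (1 - p) / n_questions)

print(round(guess_mark_sd(20), 1))   # → 9.7
print(round(guess_mark_sd(200), 1))  # → 3.1
```

So with 20 questions a pure guesser's mark has a standard deviation of about 9.7 percentage points around the 25% mean, but with 200 questions only about 3.1, consistent with the ranges suggested above.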

The other issue is standardising the results. Because people are likely to get 25% of the answers correct by chance (with 4 options), one could subtract 25% from the final mark. So if you get 25% as a raw mark, you likely didn’t know the answer to any of the questions; that is, your knowledge is 0%. So we subtract 25% from your mark to get your adjusted mark, which is 0%.

However if you get 100%, it is unlikely you knew 75% and got the other 25% correct by chance. Rather you get the ones you know correct, and you tend to get about a quarter of the ones you don’t know correct. So if you know 50% of the questions you will get 50% plus a quarter of the remaining 50%, that is 12.5%, which gives you a total of 50% + 12.5% = 62.5%. So a raw mark of 62.5% needs to be scaled back to 50%. And 100% means you know all the answers and does not need to be scaled back at all.

So we need to adjust the raw marks linearly to get adjusted marks.

  • Let N be the number of questions.
  • Let R be the number of options.
  • Let X be the number of questions correct.
  • Let Y be the adjusted number of questions correct.

Then

  • X/N is the raw mark.
  • Y/N is the adjusted mark.
  • N/R is the chance number of correct answers.

When X = N/R the mark needs to be adjusted to zero, i.e. Y = 0.
When X = N the mark needs no adjustment, i.e. Y = N and Y/N = 1 (= 100%).

The number of questions correct equals the number of questions known plus the remaining number of questions divided by the number of options.

X = Y + (N – Y)/R

Rearranging for Y we get

Y = (RX – N)/(R – 1)

Or as a mark

Y/N = 100% × (RX – N)/(N(R – 1))

And any negative adjusted marks are set to zero.
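The whole adjustment can be written as a short function, a direct implementation of the formula above with negatives floored at zero:

```python
def adjusted_mark(x, n, r):
    """Adjusted mark Y/N as a percentage, using Y = (R*X - N) / (R - 1)
    from the derivation above, with negative results floored at zero."""
    y = (r * x - n) / (r - 1)
    return max(0.0, 100 * y / n)

# Sanity checks against the worked cases in the text (N = 100, R = 4):
print(adjusted_mark(25, 100, 4))    # chance-level raw mark → 0.0
print(adjusted_mark(62.5, 100, 4))  # → 50.0
print(adjusted_mark(100, 100, 4))   # → 100.0
```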

Categories: examination, statistics

Bible statistics

2009 March 26 3 comments

Number of books, chapters, verses and words in the Bible. These are based on the main text of the English Standard Version (ESV), 2007 edition. Footnotes, chapter and verse numbers, and book names are excluded; Psalm titles are included. I realise the chapters and verses are an artificial construct.

Chapters were inserted into the Bible circa 1228 by Stephen Langton.

The Old Testament was possibly divided into verses by Isaac Nathan circa 1448 and the New Testament by Robert Stephanus in 1551.

Book Chapters Words
Old Testament
Genesis 50 36909
Exodus 40 31271
Leviticus 27 23602
Numbers 36 31211
Deuteronomy 34 27845
Joshua 24 18060
Judges 21 18556
Ruth 4 2475
1 Samuel 31 24536
2 Samuel 24 20074
1 Kings 22 23741
2 Kings 25 23088
1 Chronicles 29 18590
2 Chronicles 36 24938
Ezra 10 6915
Nehemiah 13 9902
Esther 10 5519
Job 42 17825
Psalms 150 42420
Proverbs 31 14566
Ecclesiastes 12 5344
Song of Solomon 8 2537
Isaiah 66 35555
Jeremiah 52 40975
Lamentations 5 3286
Ezekiel 48 37524
Daniel 12 11314
Hosea 14 4997
Joel 3 1913
Amos 9 4134
Obadiah 1 606
Jonah 4 1318
Micah 7 3012
Nahum 3 1114
Habakkuk 3 1366
Zephaniah 3 1574
Haggai 2 1096
Zechariah 14 6145
Malachi 4 1769
OT Total 929 587622
New Testament
Matthew 28 23137
Mark 16 14642
Luke 24 25104
John 21 19322
Acts 28 23744
Romans 16 9534
1 Corinthians 16 9316
2 Corinthians 13 6061
Galatians 6 3116
Ephesians 6 3017
Philippians 4 2144
Colossians 4 1936
1 Thessalonians 5 1842
2 Thessalonians 3 1065
1 Timothy 6 2318
2 Timothy 4 1633
Titus 3 927
Philemon 1 457
Hebrews 13 6953
James 5 2333
1 Peter 5 2397
2 Peter 3 1553
1 John 5 2492
2 John 1 298
3 John 1 303
Jude 1 607
Revelation 22 11559
NT Total 260 177810
Bible
Grand total 1189 765432

I haven’t been able to find the ESV summary data on the internet. The ESV exists in database form in various places, so the information should be easy to calculate; I just don’t have the facilities. I will repost this if I get the verse data.

This information may be more useful for the Hebrew and Greek text, though there would still be disagreement over which text-type or critical text to use.

Greek New Testament ~138000 words.

If I have my calculations correct, I note the number of words in the ESV Bible (765432) is easy to remember. If the book titles are included, the number of words is 765517.

Categories: Bible, statistics