John Aldrich, University of Southampton, Southampton, UK. (home) December 2003. Most recent changes April 2006.

Likelihood and Probability in R. A. Fisher’s Statistical Methods for Research Workers

http://www.economics.soton.ac.uk/staff/aldrich/fisherguide/prob+lik.htm

Introduction

There is a passage in Chapter 1 of Fisher’s Statistical Methods for Research Workers describing the proper roles of probability and likelihood. It criticises the Bayesian misuse of probability (termed “inverse probability”) and seems to anticipate the 1960s discussion of the likelihood principle. (See the entries Bayesian, Likelihood and Likelihood Principle in Probability & Statistics on the Earliest Uses Pages.) The purpose of this presentation is to make the passage available and, more especially, to show the changes Fisher made in the later editions of the book.

The book was first published in 1925, and the division of responsibility between probability and likelihood that it lays down follows what Fisher had written in On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample (1921). This paper not only describes the division (p. 24) but also shows how to do likelihood inference, or at least how to construct likelihood intervals (see p. 25). However, the wide-ranging and masterly On the Mathematical Foundations of Theoretical Statistics (1922) was not a likelihood text in the same sense: it too presents maximum likelihood, but the properties it emphasises are properties in ‘repeated sampling.’ The Theory of Statistical Estimation (1925) takes the same approach. Two New Properties of Mathematical Likelihood (1934) added a conditional dimension to Fisher’s theory of estimation, but the concern was still with repeated sampling properties. Although likelihood inference was proclaimed as a fundamental form of inference in all editions of Statistical Methods for Research Workers, it was not developed any further for more than 30 years. Likelihood inference re-appears in Statistical Methods and Scientific Inference (1956), and the discussion there picks up from where the 1921 paper and Statistical Methods for Research Workers had left off: “The likelihood supplies a natural order of preference among the possibilities under consideration” (chapter III, §6).

Although Fisher argued with such Bayesians (as they would now be called) as Harold Jeffreys (see Harold Jeffreys as a Statistician), these controversies did not lead to any rewriting of the 1925 passage. The rewriting was done primarily to accommodate the fiducial argument, which gave probability a role in inference that Fisher regarded as legitimate, unlike the Bayesian pretence of one. The changes began to appear in the 1932 (4th) edition, following Fisher’s Inverse Probability (1930), which introduced the fiducial argument. Fisher slowly fiducialised the book, rewriting the passage as well as making changes elsewhere. He did not revise his account of likelihood, except to describe the power function of Neyman and Pearson as a specialised application of it!

Primary Literature
Works by Fisher

Most of the relevant documents can be found on the web: Fisher’s articles from the University of Adelaide’s R. A. Fisher Digital Archive and the first edition of SMRW on Christopher Green’s site. These sites give the full references.

·  Statistical Methods for Research Workers (The first edition is available in Christopher Green’s Classics in the History of Psychology.)

·  Statistical Methods for Research Workers (The 14th edition is in print as part of Statistical Methods, Experimental Design and Scientific Inference, Oxford: Oxford University Press, 1990.)

·  On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample. (1921)

·  On the Mathematical Foundations of Theoretical Statistics (1922)

·  Theory of Statistical Estimation (1925)

·  Inverse Probability. (1930)

·  Two New Properties of Mathematical Likelihood. (1934)

·  Statistical Methods and Scientific Inference, 3 editions, 1956/59/74, Edinburgh: Oliver & Boyd. (The 3rd edition is in print as part of Statistical Methods, Experimental Design and Scientific Inference, Oxford: Oxford University Press, 1990.)

Work on the likelihood principle by others

George Barnard developed Fisher’s division between probability and likelihood in a 1949 paper but the work had little impact. The same cannot be said of Allan Birnbaum’s effort, which appeared when there was a strong revival of interest in foundations. Edwards’s book develops a system of statistics based on likelihood. Berger & Wolpert provide a survey.

·  G. A. Barnard (1949) Statistical Inference, Journal of the Royal Statistical Society. Series B (Methodological), 11, 115-149. JSTOR. (For a personal appreciation of Barnard see Lindley's George Barnard (1915-2002).)

·  Barnard sent Fisher a copy of his paper and in reply Fisher congratulated him on his “enterprise”: see pp. 5-6 of J. H. Bennett (1990) (ed) Statistical Inference and Analysis: Selected Correspondence of R. A. Fisher, Oxford: Oxford University Press.

·  Allan Birnbaum (1962) On the Foundations of Statistical Inference, Journal of the American Statistical Association, 57, 269-306 (with comments by L. J. Savage, G. A. Barnard, J. Cornfield, Irwin Bross, G. E. P. Box, I. J. Good, D. V. Lindley, C. W. Clunies-Ross, J. W. Pratt, H. Levene, T. Goldman, A. P. Dempster, O. Kempthorne and reply by Birnbaum, 307-326). JSTOR

·  Between 1958 and 1960 Birnbaum sent Fisher copies of his papers, but Fisher was not interested in Birnbaum’s project of improving Neyman’s theory. Birnbaum did not send Fisher a copy of his likelihood paper, which was presented at a meeting in December 1961. However, the paper may have been enclosed with a letter from Kempthorne to which Fisher replied on 19th February 1962; Fisher described Birnbaum as “a very bewildered type.” In a letter to H. E. Kyburg dated 14th May Fisher mentioned that “Likelihood Statements” had “recently been rediscovered by Birnbaum”: see pp. 188-9 of J. H. Bennett (1990) (ed). Birnbaum's paper appeared in the June issue of JASA. On 29th July 1962 Fisher died.

·  A. W. F. Edwards (1972/1992) Likelihood (the second expanded edition reprints the first with some related articles and a new preface), Baltimore: Johns Hopkins University Press.

·  J. O. Berger, R. L. Wolpert (1988) The Likelihood Principle (2nd. Edition), Hayward, Calif.: Institute of Mathematical Statistics.

Secondary Literature

Edwards (1999) provides a nice introduction to the topic and its history. Aldrich (1997) gives a detailed account of Fisher’s ideas on likelihood to 1922 while Aldrich (2000) describes the development of the fiducial argument and the consequences of this development for the likelihood/probability division. The Guide provides additional information on Fisher, including references on the fiducial argument.

·  A. W. F. Edwards (1999) “Likelihood” preliminary version of IESBS entry.

·  John Aldrich (1997) R. A. Fisher and the Making of Maximum Likelihood 1912-22, Statistical Science, 12, 162-176. Project Euclid JSTOR

·  John Aldrich (2000) Fisher’s “Inverse Probability” of 1930, International Statistical Review, 68, 155-172.

·  John Aldrich A Guide to R. A. Fisher

Likelihood and Probability in R. A. Fisher’s Statistical Methods for Research Workers

http://www.economics.soton.ac.uk/staff/aldrich/fisherguide/SECT1.htm

{Below is the passage as it appeared in the 14 editions of Statistical Methods for Research Workers. The trail of red ink shows the changes.}

2nd & 3rd editions 1928/30

The deduction of inferences respecting samples, from assumptions respecting the populations from which they are drawn, shows us the position in Statistics of the Theory of Probability. For a given population we may calculate the probability with which any given sample will occur, and if we can solve the purely mathematical problem presented, we can calculate the probability of occurrence of any given statistic calculated from such a sample. The problems of distribution may in fact be regarded as applications and extensions of the theory of probability. Three of the distributions with which we shall be concerned, Bernoulli's binomial distribution, Laplace's normal distribution, and Poisson's series, were developed by writers on probability. For many years, extending over a century and a half, attempts were made to extend the domain of the idea of probability to the deduction of inferences respecting populations from assumptions (or observations) respecting samples. Such inferences are usually distinguished under the heading of Inverse Probability, and have at times gained wide acceptance. This is not the place to enter into the subtleties of a prolonged controversy; it will be sufficient in this general outline of the scope of Statistical Science to express my personal conviction, which I have sustained elsewhere, that the theory of inverse probability is founded upon an error, and must be wholly rejected. Inferences respecting populations, from which known samples have been drawn, cannot be expressed in terms of probability, except in the trivial case when the population is itself a sample of a super-population the specification of which is known with accuracy.

This is not to say that we cannot draw, from knowledge of a sample, inferences respecting the corresponding population. Such a view would entirely deny validity to all experimental science. What is essential is that the mathematical concept of probability is inadequate to express our mental confidence or diffidence in making such inferences, and that the mathematical quantity which appears to be appropriate for measuring our order of preference among different possible populations does not in fact obey the laws of probability. To distinguish it from probability, I have used the term "Likelihood" to designate this quantity; since both the words "likelihood" and "probability" are loosely used in common speech to cover both kinds of relationship.

______


4th & 5th editions 1932/34

The deduction of inferences respecting samples, from assumptions respecting the populations from which they are drawn, shows us the position in Statistics of the classical Theory of Probability. For a given population we may calculate the probability with which any given sample will occur, and if we can solve the purely mathematical problem presented, we can calculate the probability of occurrence of any given statistic calculated from such a sample. The problems of distribution may in fact be regarded as applications and extensions of the theory of probability. Three of the distributions with which we shall be concerned, Bernoulli's binomial distribution, Laplace's normal distribution, and Poisson's series, were developed by writers on probability. For many years, extending over a century and a half, attempts were made to extend the domain of the idea of probability to the deduction of inferences respecting populations from assumptions (or observations) respecting samples. Such inferences are usually distinguished under the heading of Inverse Probability, and have at times gained wide acceptance. This is not the place to enter into the subtleties of a prolonged controversy; it will be sufficient in this general outline of the scope of Statistical Science to express my personal conviction, which I have sustained elsewhere, that the theory of inverse probability is founded upon an error, and must be wholly rejected. Inferences respecting populations, from which known samples have been drawn, cannot, by this method be expressed in terms of probability, except in the trivial case when the population is itself a sample of a super-population the specification of which is known with accuracy.

The probabilities established by those tests of significance, which we shall later designate by t and z, are, however, entirely distinct from statements of inverse probability, and are free from the objections which apply to these latter. Their interpretation as probability statements respecting populations constitutes an application unknown to the classical writers on probability.

The rejection of the theory of inverse probability should not be taken to imply that we cannot draw, from knowledge of a sample, inferences respecting the corresponding population. Such a view would entirely deny validity to all experimental science. What is essential is that the mathematical concept of probability is, in most cases, inadequate to express our mental confidence or diffidence in making such inferences, and that the mathematical quantity which appears to be appropriate for measuring our order of preference among different possible populations does not in fact obey the laws of probability. To distinguish it from probability, I have used the term "Likelihood" to designate this quantity; since both the words "likelihood" and "probability" are loosely used in common speech to cover both kinds of relationship.

______


6th 7th & 8th editions 1936/38/41

The deduction of inferences respecting samples, from assumptions respecting the populations from which they are drawn, shows us the position in Statistics of the classical Theory of Probability. For a given population we may calculate the probability with which any given sample will occur, and if we can solve the purely mathematical problem presented, we can calculate the probability of occurrence of any given statistic calculated from such a sample. The problems of distribution may in fact be regarded as applications and extensions of the theory of probability. Three of the distributions with which we shall be concerned, Bernoulli's binomial distribution, Laplace's normal distribution, and Poisson's series, were developed by writers on probability. For many years, extending over a century and a half, attempts were made to extend the domain of the idea of probability to the deduction of inferences respecting populations from assumptions (or observations) respecting samples. Such inferences are usually distinguished under the heading of Inverse Probability, and have at times gained wide acceptance. This is not the place to enter into the subtleties of a prolonged controversy; it will be sufficient in this general outline of the scope of Statistical Science to express my personal conviction, which I have sustained elsewhere, that the theory of inverse probability is founded upon an error, and must be wholly rejected. Inferences respecting populations, from which known samples have been drawn, cannot, by this method be expressed in terms of probability, except in the trivial case when the population is itself a sample of a super-population the specification of which is known with accuracy.

The probabilities established by those tests of significance, which we shall later designate by t and z, are, however, entirely distinct from statements of inverse probability, and are free from the objections which apply to these latter. Their interpretation as probability statements respecting populations constitutes an application unknown to the classical writers on probability. To distinguish such statements as to the probability of causes from the earlier attempts now discarded, they are known as statements of Fiducial Probability.