Fisher, R. A.
Ronald Aylmer Fisher (1890-1962) achieved world-wide recognition during his lifetime as a statistician and geneticist. He continued the work begun in England by Karl Pearson at the beginning of the twentieth century, but he developed it in new directions. Others also contributed to the tremendous surge in the development of statistical techniques and their application in biology; but these two men, by their energetic research and example, in turn held the distinction of dominating the statistical scene for a generation.
Fisher was born in East Finchley, near London. Apart from a twin brother who did not live long, he was the youngest of seven children. His father, George Fisher, was an auctioneer; no particular scientific ability is evident in the achievements of his relatives, except perhaps those of an uncle who, like Ronald Fisher, was a wrangler in the mathematical tripos at Cambridge. Fisher attended school at Stanmore Park and then went to Harrow, where he was encouraged in his mathematical interests and won a scholarship to Gonville and Caius College, Cambridge. His leanings in mathematics followed the English tradition in natural philosophy, and his university student years, from 1909 to 1913, culminated in his receiving first a distinction in optics for his degree papers in 1912 and then a studentship in physics during his postgraduate year. He had, however, already noticed from studying Karl Pearson’s Mathematical Contributions to the Theory of Evolution that natural philosophy need not stop with the physical sciences.
After leaving Cambridge, Fisher spent a short time with the Mercantile and General Investment Company. When World War I broke out in 1914, his very bad myopia prevented him from joining the army, and he taught mathematics and physics for four years at various English public schools. In 1917 he married Ruth Eileen Guinness, who was to bear him eight children.
Fisher did not really begin his full-time statistical and biological career until 1919, when he became statistician at Rothamsted Experimental Station, an agricultural research institute in Harpenden, Hertfordshire. His earlier years had, however, been a valuable gestation period. In a short paper published in 1912 while he was still at Cambridge, he had already proposed the method of maximum likelihood for fitting frequency curves. Two more solid papers established his permanent reputation for research. The first was his remarkable paper on the sampling distribution of the correlation coefficient, published in Karl Pearson’s journal Biometrika in 1915, in which his geometrical powers of reasoning were first fully displayed. The second, published in 1918, examined the correlation between relatives on the basis of Mendelian inheritance and exhibited his ability to resolve crucial problems of statistical genetics. Fisher received from Karl Pearson an offer of a post at University College at the same time that he received the Rothamsted offer, but he wisely chose Rothamsted, largely because of the much greater scope and independence of this new statistical post but also, perhaps, because his contacts with Pearson had not been particularly promising. Pearson was apt to bulldoze his way into research problems without worrying unduly about territorial rights. Having been previously stymied by the correlation distribution problem, he took over Fisher’s solution with enthusiasm but without the further close consultation that professional etiquette would seem to require. Fisher was aggrieved by this treatment, and it may well have been the start of the long and bitter feud that developed over the years. Fisher had reason to criticize much of Karl Pearson’s work, but the personal animosity that developed between them was something more than a substantive disagreement. As late as 1950, when a selection of Fisher’s best statistical papers was published, the omission of the 1915 Biometrika paper was a silent reminder of Fisher’s feelings.
The period at Rothamsted, from 1919 to 1933, was the most brilliant and productive of Fisher’s career. The institute, with its teams of biologists and congenial research atmosphere, was precisely the environment Fisher needed. His own wide range of biological interests enabled him to understand his colleagues’ problems and to discuss their statistical aspects constructively with them. His statistical activities were represented by the publication of his best-known book, Statistical Methods for Research Workers (1925a), which has been published in 13 English editions and also translated into several foreign languages. Fisher’s varied accomplishments included doing much of his own computing and initiating many of his own genetical experiments on poultry, snails, and mice, although it is for his creative theoretical ability that he will be remembered. By 1929 he had been elected a fellow of the Royal Society of London. He published his classic on population genetics, The Genetical Theory of Natural Selection (1930a), in which he did much to reconcile Darwin’s theory of evolution by natural selection with Mendel’s genetical principles, which were unknown to Darwin. In the later chapters of this work he discussed the theory, first suggested by Francis Galton, of the evolution of a genetic association between infertility and ability. Such an association could arise from marriages between those who owe their social success to high innate ability and those who owe it to inherited advantages, the latter typically the children of relatively infertile parents who were able to concentrate their material resources on one or two offspring. His energetic views included proposals for family allowances proportional both to size of family and to size of income, which would offset the penalty imposed on children by parental fertility.
Fisher’s genetic and eugenic interests were soon to be reflected in his move to London in 1933, when he was appointed Galton professor at University College as successor to Karl Pearson. This move, however, no doubt fanned the flames of their feud. When Pearson retired, the college isolated the teaching of statistics in a new department of statistics, under his son Egon; the Galton professor was left with eugenics and biometry. In spite of Egon Pearson’s greater tolerance and appreciation of Fisher’s new statistical techniques, which emphasized precise methods of analysis in small samples, Fisher felt frustrated, as he indicated at the time in a letter to W. S. Gosset. Moreover, Jerzy Neyman, who held a post in the statistics department from 1935 to 1938, incurred Fisher’s wrath by publishing work that Fisher regarded as unnecessary or misguided; their proximity in the same building at University College exacerbated Fisher’s sense of injury. The recurrence of feuds of this kind was by now beginning to be as much a manifestation of Fisher’s own temperament as of his antagonists’. His wide interests and strong personality made him a charming and lively companion when he chose to be and a generous colleague to those who were in sympathy with his work, as many have testified. But his emotions as well as his intellect were too bound up in his work for him to tolerate criticism, to which he replied in vigorous and sometimes quite unfair terms. Apart from such lapses from objectivity, Fisher proceeded to consolidate his scientific reputation both by the development of the study of genetics (especially human genetics) in his department and by the continued publication of statistical works, such as The Design of Experiments (1935a), Statistical Tables for Biological, Agricultural, and Medical Research (with F. Yates, in 1938), and further original papers in his departmental journal (the Annals of Eugenics) and elsewhere.
The third main phase of Fisher’s scientific career was his appointment to the Arthur Balfour chair of genetics at Cambridge, from 1943 until his retirement in 1957. During this time he wrote two more books—The Theory of Inbreeding, in 1949, and Statistical Methods and Scientific Inference, in 1956—and also edited the collection of his papers published in 1950 (see 1920-1945); but most of his important work was already under way. Honors continued to accumulate, including three medals from the Royal Society (a royal medal in 1938, a Darwin medal in 1948, and a Copley medal in 1955) and a knighthood in 1952. Shortly before he retired, he became master of his Cambridge college. He was an honorary member of the American Academy of Arts and Sciences, a foreign associate of the National Academy of Sciences, a member of the Pontifical Academy of Sciences, and a foreign member of the Royal Danish Academy of Sciences and Letters and of the Royal Swedish Academy of Sciences. After retirement he visited the division of mathematical statistics of the Commonwealth Scientific and Industrial Research Organisation in Adelaide, Australia, where he was a research fellow at the time of his death.
Contributions to statistics and genetics
Statistics. To turn in somewhat more detail to Fisher’s original work, a formal item-by-item listing of his contributions to statistics would understate the strength of their impact, which lay in their simultaneous variety and depth. Moreover, a formal listing is unsatisfactory since the intimate relation of Fisher’s contributions to the practical problems arising from his professional environment sometimes meant that their academic presentation was incomplete or late, or both. Work of great value, such as the technique of analysis of variance, received inadequate discussion in Statistical Methods for Research Workers because it had hardly reached any degree of finality when this book was published in 1925, and it was still rather cluttered with ideas of intraclass correlation; apparently, Fisher never bothered to redraft the discussion for the later editions.
Excluding Fisher’s mathematical work in genetics, it is nevertheless convenient to try to list his chief contributions to statistical theory under two main headings: (1) fundamental work in statistical inference, and (2) statistical methodology and technique. The first group would include his important work on statistical estimation, mainly represented by two papers, one published in the Philosophical Transactions of the Royal Society (1922), and the other published in the Proceedings of the Cambridge Philosophical Society (1925b). Before writing these papers, Fisher had already been much concerned with precise inference in small samples for familiar quantities, such as the correlation coefficient, chi-square, etc., and had produced a steady flow of papers on their sampling distributions, of which his 1915 paper on the correlation coefficient is the best known. He was very careful to distinguish between an unknown population parameter and its sample estimate. When the sampling distribution of the estimate was available in numerical form, a test of the significance of any hypothetical value of the parameter became possible. These precise tests of significance—for example, of an apparent correlation r in the sample on the “null hypothesis” of no real correlation (ρ = 0)—were particularly valuable at the time because of the tendency among biologists and other research workers not to bother with them. Fisher’s own emphasis on them was, however, rather inconsistent with his subsequent attack on the Neyman-Pearson theory of testing hypotheses, especially since an unthinking use of these significance tests by some workers, as in the failure to recognize that a nonsignificant result does not imply the truth of the hypothesis tested (e.g., ρ = 0), caused some reaction against their use later on.
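The kind of exact small-sample test at issue can be illustrated with a minimal sketch (modern code with hypothetical numbers, not Fisher’s notation): under the null hypothesis ρ = 0, the quantity t = r√((n − 2)/(1 − r²)) follows Student’s t distribution on n − 2 degrees of freedom, a result consistent with Fisher’s 1915 distribution of r.

```python
# Exact test of an observed sample correlation r against rho = 0.
# Hypothetical numbers; under H0 the statistic is Student-t on n - 2 df.
from math import sqrt
from scipy.stats import t as t_dist

def correlation_test(r, n):
    """Return (t, two-sided p) for H0: rho = 0, given r from n pairs."""
    t = r * sqrt((n - 2) / (1 - r * r))
    return t, 2 * t_dist.sf(abs(t), df=n - 2)

t_stat, p_value = correlation_test(r=0.45, n=20)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A non-significant p does not establish that rho = 0 -- the misuse
# noted above.
```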
It is evident that Fisher had begun to think about his general theory of estimation before 1922; apart from his advocacy of maximum likelihood in 1912, the notion of sufficiency had also arisen in the special case of the root mean square deviation as an estimate of the true standard deviation σ in the case of a normal, or Gaussian, sample. Nevertheless, the general theory was first systematically developed in the two papers cited, and it included a discussion of the concept of consistency and a heuristic derivation of the asymptotic properties of maximum likelihood estimates in large samples [See ESTIMATION, article on POINT ESTIMATION]. It also included a crystallization of the concept of information on a parameter θ in the formula

$$ I(\theta) = E\left[\left(\frac{\partial L}{\partial \theta}\right)^{2}\right] = -E\left[\frac{\partial^{2} L}{\partial \theta^{2}}\right], $$

where L = log p(S|θ) is the logarithm of the probability of the sample S when the parameter has true value θ. The importance of this work lay in (1) examining the actual sampling properties of maximum-likelihood estimates, particularly in large samples (the method of maximum likelihood goes back quite a long way, at least as it is analogous to maximum a posteriori probability by Bayes’ inverse-probability theorem on the assumption of a uniform a priori distribution); and (2) emphasizing that a sample provides, in some appropriate statistical sense, a definite amount of information on a parameter. Fisher’s concept of information, preceding Shannon’s, which was introduced in quite a different context (see Shannon 1948), was especially appropriate for large samples because of the possible ordering of normally distributed estimates in terms of their variances (squared standard deviations). [See INFORMATION THEORY.] Fisher justified his concept more generally in the use of small samples by thinking in terms of many such samples; it is curious that he missed the exact inequality relating the information function and the variance of an unbiased estimate known as the Cramér-Rao inequality. In any case, however, the arbitrariness of the variance and unbiasedness remains; and in small samples Fisher introduced the general notion of a sufficient statistic, which, by rendering conditional distributions of any other sample quantities independent of the unknown parameter, exhausted the “information” in the sample [See SUFFICIENCY].
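A compact worked instance (a standard textbook case, not drawn from the 1922 or 1925 papers themselves) may help: for n independent observations from a normal distribution with known variance σ², the log-probability of one observation has ∂L/∂θ = (x − θ)/σ², so that

$$ I_{1}(\theta) = E\left[\frac{(x-\theta)^{2}}{\sigma^{4}}\right] = \frac{1}{\sigma^{2}}, \qquad I_{n}(\theta) = \frac{n}{\sigma^{2}}, $$

and the Cramér-Rao bound var(θ̂) ≥ σ²/n is attained by the sample mean, which is also sufficient in Fisher’s sense.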
The next remarkable contribution in this general area came with Fisher’s brief paper entitled “Inverse Probability” (1930b). Fisher had always been scornful of the estimates and inferences resulting from the Bayes inverse-probability approach. He felt that a unique and more objective system of inferences should be possible in fields where statistical probabilities operate and noted that an exact sampling distribution involving a sample quantity or statistic T and an unknown parameter θ leads (under appropriate regularity and monotonicity properties) to the feasibility of assigning what he termed a “fiducial interval,” with a known fiducial probability that the parameter θ is contained in the interval. The interpretation accepted at the time, and implied by Fisher’s own wording in this paper, was that this interval, which is necessarily a function of the statistic T, is in consequence a random interval and that fiducial probability is a statistical probability with the usual frequency connotation. Referring to the case of a true correlation coefficient ρ, he said, “We know that if we take a number of samples of 4, from the same or different populations, and for each calculate the fiducial 5 per cent value for ρ, then in 5 per cent of cases, the true value of ρ will be less than the value we found” ([1930b] 1950, p. 22.535). With some restrictions (for example, to sufficient statistics) it was thus apparently identical with the theory of confidence intervals developed about the same time by Neyman (1937). [See ESTIMATION, article on CONFIDENCE INTERVALS AND REGIONS; FIDUCIAL INFERENCE.]
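The flavor of such interval statements can be conveyed in a short sketch; it uses Fisher’s z-transformation of r (approximately normal with variance 1/(n − 3)) rather than the exact distribution of r that the 1930 paper worked with, and the numbers are hypothetical.

```python
# Interval of the confidence/fiducial type for a true correlation rho,
# via Fisher's z = artanh(r); a large-sample approximation, used here
# only to keep the illustration short.
from math import atanh, tanh, sqrt
from scipy.stats import norm

def correlation_interval(r, n, level=0.95):
    z = atanh(r)
    half = norm.ppf(0.5 + level / 2) / sqrt(n - 3)
    return tanh(z - half), tanh(z + half)

low, high = correlation_interval(r=0.45, n=20)
print(f"95% interval for rho: ({low:.3f}, {high:.3f})")
# Interpreted as in the quotation above: over repeated samples, intervals
# constructed this way cover the true rho with the stated frequency.
```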
On inductive inference questions, Fisher often did not make it clear what he was claiming; but it should be stressed that regardless of what his interpretation was or of its relevance to the problem at hand, it still was formulated in terms of an assumed statistical framework. Nevertheless, its formal bypassing of Bayes’ theorem was a masterly stroke which received attention outside statistical circles (cf. Eddington’s remark, “We can never be sure of particular inferences; therefore we should aim at a system of inference that will give conclusions of which in the long run not more than a stated proportion, say 1/q, will be wrong” [1935, p. 126]).
Later, Fisher attempted to extend fiducial theory to more than one parameter. His first paper (1935b) discussing this extension took as one example the problem of inferring the difference in population means of two samples coming from normal populations with different variances. The difficulty here (which Fisher may not have realized at the time, since he himself never examined in detail the logical relations of sufficient statistics in the case of more than one parameter) is that the effect of the unknown variance ratio cannot be segregated in the absence of a “sufficient” quantity for it that does not involve unwanted parameters, such as the individual population means. In rejecting a criticism along these lines by the present author (Bartlett 1936; for further details see, for example, Bartlett 1965), Fisher explicitly gave up the orthodox frequency interpretation for fiducial probability which he appeared to have assumed earlier. He and others attempted to formulate a theory for several parameters that would be both unique and self-consistent, but this has yet to be achieved in any generality, and to many this search is misguided in that it does not eliminate from fiducial theory the arbitrariness that Fisher had so strongly criticized in the Bayes approach.
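The two-means problem at the center of this dispute is easy to state concretely. The sketch below (hypothetical data) computes the natural statistic d = (x̄₁ − x̄₂)/√(s₁²/n₁ + s₂²/n₂), whose null distribution depends on the unknown variance ratio; Welch’s approximate degrees of freedom, used here, is a later frequentist treatment and expressly not Fisher’s fiducial solution.

```python
# The Behrens-Fisher situation: two normal samples with unequal,
# unknown variances.  scipy's equal_var=False option applies Welch's
# approximation, one pragmatic (non-fiducial) treatment.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
x1 = rng.normal(0.0, 1.0, size=15)   # hypothetical sample 1
x2 = rng.normal(0.5, 3.0, size=40)   # hypothetical sample 2, larger variance

t_stat, p_value = ttest_ind(x1, x2, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```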
A rather different and somewhat more technical estimation problem that Fisher solved in 1928 is the derivation of sample statistics that are unbiased estimates of the corresponding population quantities and of the sampling moments of these statistics. The population quantities are the cumulants or semi-invariants first introduced by Thiele (1903), and Fisher’s combinatorial rules for obtaining the appropriate sample statistics and their own cumulants constituted a striking example of Fisher’s intuitive mathematical powers. Another paper published in 1934 is worth noting as an original and independent contribution to the theory of games developed about the same time by von Neumann and Morgenstern. [See GAME THEORY, article on THEORETICAL ASPECTS.]
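The defining property of these sample statistics (Fisher’s “k-statistics”) is easy to verify numerically, even though their derivation is combinatorial. The following sketch, a hypothetical simulation, checks that k₂ and k₃ are unbiased for the second and third cumulants (for a unit exponential distribution, κ₂ = 1 and κ₃ = 2).

```python
# Simulation check of the unbiasedness of Fisher's k2 and k3.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 200_000
x = rng.exponential(size=(reps, n))            # kappa_2 = 1, kappa_3 = 2

dev = x - x.mean(axis=1, keepdims=True)
m2 = (dev ** 2).mean(axis=1)                   # sample central moments
m3 = (dev ** 3).mean(axis=1)
k2 = n / (n - 1) * m2                          # k-statistic: E[k2] = kappa_2
k3 = n ** 2 / ((n - 1) * (n - 2)) * m3         # k-statistic: E[k3] = kappa_3

print(k2.mean(), k3.mean())                    # close to 1 and 2
```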
Fisher’s work on the design of experiments is so important logically as well as practically that it may be regarded as one of his most fundamental contributions to the science of statistical inference. It is, however, convenient to consider it in the second general area of statistical methodology and technique, in conjunction with analysis of variance. Fisher perceived the simultaneous simplicity and efficiency of balanced and orthogonal experimental designs in agriculture. Replication of the same treatment in different plots is essential if any statistical assessment of error is to be made, and normally equal numbers of plots per treatment are desirable. However, simplification in the statistical analysis is illusory if the analysis is not valid. When observations are collected haphazardly, the most sensible assumptions about statistical variability have to be made. In controlled experiments there is the opportunity for deliberately introducing randomness into the design so that systematic variation can be separated from purely random error. This is the first vital point Fisher made; the second naturally accompanied it. With the analysis geared to the design, all variation not attributable to the treatments does not have to inflate the error. With equal numbers of plots per treatment, each complete replication can be contained in a separate block, and only variability among plots in the same block is a source of error; variability between different blocks can be automatically removed in the analysis as irrelevant. The third point arose from treatment combinations, such as different fertilizer ingredients. For example, if nitrogenous fertilizer (N) and phosphate (P) are to be tested, the recommended set of treatment combinations is
Control (no fertilizer), N, P, NP,
where NP denotes the treatment consisting of both the ingredients N and P (each in the same amount as when given alone). This design maintains simplicity and may improve efficiency, for if phosphate has no effect, or even if its effect is purely additive, the plots are balanced for nitrogen and doubled in number, and similarly for phosphate. Moreover, if both ingredients do not act additively, an interaction term can be defined that measures the difference in effect of N (or P) in the presence and absence of the other ingredient. Such a definition, although to some extent arbitrary, completes the specification of the treatment effects; and the whole technique of factorial experimentation typified by the above example is of the utmost importance both in principle and in practice. As Fisher put it:
The modifications possible to any complicated apparatus, machine, or industrial process must always be considered as potentially interacting with one another, and must be judged by the probable effects of such interactions. If they have to be treated one at a time this is not because to do so is an ideal scientific procedure, but because to test them simultaneously would sometimes be too troublesome, or too costly. In many instances . . . this impression is greatly exaggerated. (1935a, p. 97)
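The contrasts in the nitrogen-phosphate example can be written down directly. The sketch below uses hypothetical plot-mean yields for the four treatment combinations; the main effect of N averages its effect over both levels of P, and the interaction is taken as half the difference between the N effect with and without P.

```python
# Main effects and interaction in the 2x2 factorial example above.
# Yields are hypothetical plot means.
yields = {"control": 20.0, "N": 28.0, "P": 23.0, "NP": 35.0}

main_N = ((yields["N"] - yields["control"]) + (yields["NP"] - yields["P"])) / 2
main_P = ((yields["P"] - yields["control"]) + (yields["NP"] - yields["N"])) / 2
interaction = ((yields["NP"] - yields["P"]) - (yields["N"] - yields["control"])) / 2

print(main_N, main_P, interaction)   # 10.0, 5.0, 2.0
```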
A further device that naturally arose in factorial designs was that of confounding, by which some of the higher-order interaction effects in designs with three or more factors are assumed to be unimportant and are deliberately arranged to coincide in the analysis with particular block contrasts. This enables the number of plots per block to be smaller and the accuracy of the remaining treatment effects thereby to be increased.
To a large extent the practical value of these experimental methods was not dependent on the statistical analysis, but the simplicity and clarity of the analysis greatly contributed to the worldwide popularity of these designs. This analysis was in principle classical least-squares theory, but the orthogonality of the design rendered the estimation problem trivial, and the concomitant assessment of error was systematized by the technique of “analysis of variance.” Basically, this technique is a breakdown of the total sum of squares of the observations into relevant additive parts containing any systematic terms ascribable to treatments, blocks, and so on. Once the technique was established and the appropriate tests of significance were available (on the assumption of normality and of homogeneity of error variance) from Fisher’s derivation and tabulation of the “variance-ratio” distribution, it could handle more complicated least-squares problems, such as more complex and even non-orthogonal experimental designs or linear and curvilinear regression problems. One useful extension was the adjustment of observed experimental quantities, such as final agricultural yield, by some observed quantity measured prior to the application of treatments. This technique was referred to as analysis of covariance, although this last term seems more appropriate for the simultaneous analysis of two or more variables—that is, the technique of multivariate analysis.
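The basic identity behind the technique can be shown in a minimal numerical sketch (hypothetical yields for a small randomized block design): the total sum of squares splits additively into treatment, block, and residual parts.

```python
# Analysis-of-variance decomposition for a randomized block design.
# Rows are blocks (complete replications), columns are treatments.
import numpy as np

y = np.array([[24.0, 29.0, 31.0],
              [22.0, 26.0, 30.0],
              [27.0, 31.0, 34.0]])
grand = y.mean()

ss_total = ((y - grand) ** 2).sum()
ss_treat = y.shape[0] * ((y.mean(axis=0) - grand) ** 2).sum()
ss_block = y.shape[1] * ((y.mean(axis=1) - grand) ** 2).sum()
ss_error = ss_total - ss_treat - ss_block

print(ss_treat, ss_block, ss_error)
# The variance-ratio (F) test compares the treatment and error mean
# squares on (t - 1) and (t - 1)(b - 1) degrees of freedom.
```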
Fisher was active in the development of multivariate analysis. Earlier workers had of course encountered multivariate problems in various contexts, and Fisher had followed up his geometrical derivation of the distribution of the correlation coefficient with a derivation of the distribution of the multiple correlation coefficient (1928b), again brilliantly using his geometrical approach. The problem exercising Harold Hotelling in the United States and P. C. Mahalanobis in India, as well as Fisher, was the efficient use of several correlated variables for discriminatory and regression problems. Fisher’s name is particularly associated with the concept of the discriminant function: a function of the measured variables designed to distinguish efficiently, from the measurements on a single individual, which of two different populations that individual came from.
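A brief sketch of the two-group computation (simulated measurements; the weight vector w ∝ S_w⁻¹(m̄₁ − m̄₂), with S_w the pooled within-group covariance, is the standard form associated with Fisher’s discriminant):

```python
# Fisher's linear discriminant for two groups, on simulated data.
import numpy as np

rng = np.random.default_rng(1)
g1 = rng.normal([0.0, 0.0], 1.0, size=(50, 2))   # hypothetical group 1
g2 = rng.normal([2.0, 1.0], 1.0, size=(50, 2))   # hypothetical group 2

m1, m2 = g1.mean(axis=0), g2.mean(axis=0)
s_w = np.cov(g1, rowvar=False) + np.cov(g2, rowvar=False)  # pooled scatter
w = np.linalg.solve(s_w, m1 - m2)                # discriminant weights

x = np.array([1.5, 0.5])                         # a new individual
score, midpoint = w @ x, w @ (m1 + m2) / 2
print("group 1" if score > midpoint else "group 2")
```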
Contributions to genetics. Fisher’s work in genetics was comparable in importance to his purely statistical contributions and equally reflected his originality and independence of outlook. In the first decade of the twentieth century, Mendelian genetics was still a new subject, and its quantitative consequences were not yet properly appreciated. It was in dispute whether they were consistent with Darwin’s theory of evolution by natural selection or even with the observed inheritance of metrical characters. Fisher took the second and lesser problem first, and in his 1918 paper he gave a penetrating theoretical analysis of correlation, breaking it down into nongenetic effects, additive gene action, and further complications, such as genic interaction and dominance. He was thus able to demonstrate the consistency of Mendelian principles with the observed correlations between sibs or between parents and offspring. Then, in his book on the genetic theory of natural selection, he tackled the larger problem. He pointed out that the atomistic character of gene segregation (in contrast to Darwin’s hypothetical “blending” theory of heredity) is essential for maintaining variability, which in turn is the basis of the process of natural selection. He clarified the theoretical role of mutations, showing that mutations provide a reservoir of variability from which eventually only the favorable ones survive. He emphasized the possibility of the selection of modifier genes, for example, in rendering the action of many mutant genes recessive by modifying the heterozygote phenotypically toward the natural wild type. The relative importance of this theory of the evolution of dominance was queried, for example, by Sewall Wright, among others, but this did not prevent it from being a relevant thesis that stimulated further research. The effect of modifier genes was also shown to be important in the phenomenon of mimicry.
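The flavor of the 1918 decomposition can be indicated in modern notation (standard quantitative-genetics results in the line of Fisher’s paper, not quotations from it). Writing the phenotypic variance as V_P = V_A + V_D + V_E, with additive, dominance, and environmental components, the covariances between relatives under random mating are

$$ \operatorname{cov}(\text{parent, offspring}) = \tfrac{1}{2}V_{A}, \qquad \operatorname{cov}(\text{full sibs}) = \tfrac{1}{2}V_{A} + \tfrac{1}{4}V_{D}, $$

so that observed correlations between relatives serve to estimate the relative contributions of these components.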
At the Galton laboratory, Fisher’s work in human genetics included linkage studies and the initiation of serology research on the human blood groups. An exciting moment in the work on serology came when G. L. Taylor and R. R. Race studied the Rhesus blood groups. Fisher was able to predict, from the experimental results to date, the effective triple structure of the gene, and hence two more anti-sera and one more allele, which were soon successfully traced. Such predictions may be compared with Fisher’s earlier theoretical predictions in the theory of evolution, for example, on the evolution of dominance. They made possible the maintenance of a healthy link with experimental and observational work, which was often initiated or encouraged by Fisher himself. It was in this spirit that he collaborated for several years with E. B. Ford in sampling studies of natural populations.
In retrospect, Fisher’s wholehearted immersion in his own research problems, fundamental and broad as these were, did cause him to ignore some important theoretical trends. His neglect of purely mathematical probability, which had been rigorously formulated by A. N. Kolmogorov (1933), seemed to extend to developments in the theory of random or stochastic processes, although these were very relevant to some of his own problems in evolutionary genetics. In England, A. G. McKendrick had published some brilliant papers on stochastic processes in medicine, and G. U. Yule on the analysis of time series, but Fisher never appeared to appreciate this work; in particular, his own papers on the statistical analysis of data recorded in time sometimes showed a rather over-rigid adherence to classical and unduly narrow assumptions. In appraising Fisher’s work, one must consider, in addition to these general boundaries that demarcate it, his occasional specific errors and, more importantly, his temperamental bias in controversy. Fisher’s scientific achievements are, however, so varied and so penetrating that such lapses cannot dim their luster or reduce his ranking as one of the great scientists of this century.
M. S. BARTLETT
[For the historical context of Fisher’s work, see STATISTICS, article on THE HISTORY OF STATISTICAL METHOD; and the biographies of GALTON; GOSSET; PEARSON; YULE. For discussion of the subsequent development of Fisher’s ideas, see ESTIMATION; EXPERIMENTAL DESIGN; FIDUCIAL INFERENCE; HYPOTHESIS TESTING; LINEAR HYPOTHESES, article on ANALYSIS OF VARIANCE; MULTIVARIATE ANALYSIS.]
WORKS BY FISHER
1912 On an Absolute Criterion for Fitting Frequency Curves. Messenger of Mathematics New Series 41:155-160.
1915 Frequency Distribution of the Values of the Correlation Coefficient in Samples From an Indefinitely Large Population. Biometrika 10:507-521.
1918 The Correlation Between Relatives on the Supposition of Mendelian Inheritance. Royal Society of Edinburgh, Transactions 52:399-433.
(1920-1945) 1950 Contributions to Mathematical Statistics. New York: Wiley.
(1922) 1950 On the Mathematical Foundations of Theoretical Statistics. Pages 10.308a-10.368 in R. A. Fisher, Contributions to Mathematical Statistics. New York: Wiley. ⇒ First published in Volume 222 of the Philosophical Transactions, Series A, of the Royal Society of London.
(1925a) 1958 Statistical Methods for Research Workers. 13th ed., rev. New York: Hafner. ⇒ Previous editions were also published by Oliver & Boyd.
(1925b) 1950 Theory of Statistical Estimation. Pages 11.699a-11.725 in R. A. Fisher, Contributions to Mathematical Statistics. New York: Wiley. ⇒ First published in Volume 22 of the Proceedings of the Cambridge Philosophical Society.
(1928a) 1950 Moments and Product Moments of Sampling Distributions. Pages 20.198a-20.237 in R. A. Fisher, Contributions to Mathematical Statistics. New York: Wiley. ⇒ First published in Volume 30 of the Proceedings of the London Mathematical Society.
(1928b) 1950 The General Sampling Distribution of the Multiple Correlation Coefficient. Pages 14.653a-14.763 in R. A. Fisher, Contributions to Mathematical Statistics. New York: Wiley. ⇒ First published in Volume 121 of the Proceedings of the Royal Society of London.
(1930a) 1958 The Genetical Theory of Natural Selection. 2d ed., rev. New York: Dover.
(1930b) 1950 Inverse Probability. Pages 22.527a-22.535 in R. A. Fisher, Contributions to Mathematical Statistics. New York: Wiley. ⇒ First published in Volume 26 of the Proceedings of the Cambridge Philosophical Society.
1934 Randomization and an Old Enigma of Card Play. Mathematical Gazette 18:294-297.
(1935a) 1960 The Design of Experiments. 7th ed. New York: Hafner. ⇒ Previous editions were also published by Oliver & Boyd.
(1935b) 1950 The Fiducial Argument in Statistical Inference. Pages 25.390a-25.398 in R. A. Fisher, Contributions to Mathematical Statistics. New York: Wiley. ⇒ First published in Volume 6, part 4 of the Annals of Eugenics.
(1938) 1963 FISHER, R. A.; and YATES, FRANK Statistical Tables for Biological, Agricultural, and Medical Research. 6th ed., rev. & enl. New York: Hafner.
1949 The Theory of Inbreeding. Edinburgh: Oliver & Boyd; New York: Hafner.
(1956) 1959 Statistical Methods and Scientific Inference. 2d ed., rev. New York: Hafner. ⇒ Previous editions were also published by Oliver & Boyd.
SUPPLEMENTARY BIBLIOGRAPHY
BARTLETT, M. S. 1936 The Information Available in Small Samples. Cambridge Philosophical Society, Proceedings 32:560-566.
BARTLETT, M. S. 1965 R. A. Fisher and the Last Fifty Years of Statistical Methodology. Journal of the American Statistical Association 60:395-409.
EDDINGTON, ARTHUR STANLEY 1935 New Pathways in Science. New York: Macmillan.
KOLMOGOROV, ANDREI N. (1933) 1956 Foundations of the Theory of Probability. New York: Chelsea. ⇒ First published in German.
NEYMAN, JERZY 1937 Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability. Royal Society of London, Philosophical Transactions Series A 236:333-380.
NEYMAN, JERZY 1967 R. A. Fisher (1890-1962): An Appreciation. Science 156:1456-1462. ⇒ Includes a two-page “Footnote” by William G. Cochran.
SHANNON, C. E. 1948 A Mathematical Theory of Communication. Bell System Technical Journal 27:379-423, 623-656.
THIELE, THORWALD N. 1903 Theory of Observations. London: Layton.
YATES, F.; and MATHER, K. 1963 Ronald Aylmer Fisher. Volume 9, pages 91-129 in Royal Society of London, Biographical Memoirs of the Fellows of the Royal Society. London: The Society. ⇒ Contains a bibliography on pages 120-129.