Random Numbers

Definitions

Uses of random numbers

Sources of random numbers

History

Tables of random digits

Mathematical generation processes

Random permutations

BIBLIOGRAPHY

Random numbers are numbers generated by a mechanism that produces irregularity, in a sense that will be made precise. There are four major uses of random numbers: (1) To protect against selective bias in the acquisition of information from sample surveys and experiments. In these same contexts, random numbers provide a known probability structure for statistical calculations. (2) To gain insight, by simulation, into the behavior of complex mechanisms or models. (3) To study theoretical properties of statistical procedures, such as efficiency of estimation and power of statistical tests. (4) To obtain approximate solutions to other mathematical problems. In each of these applications, random numbers are used to simulate observations from an arbitrary probability distribution. It is often useful to think of random numbers as an idealization of the successive outcomes of a carefully made gambling device, say a roulette wheel.

Definitions

A definition of random numbers is a specification of the probabilistic properties of the process that generates them, rather than a specification of the properties of the numbers themselves. One can only partially test and verify the numbers themselves; they are only specific outcomes of the process. To clarify the distinction between a process and an outcome of the process, consider the example of coin tossing. In stating that the probability of getting a head in flipping an honest coin is ½, one implies that before the coin is flipped the process of flipping gives probability ½ of observing a head. Once the coin is flipped, however, either a head or a tail shows; the situation is determined and not probabilistic.

Two concepts are involved in specifying the process generating random numbers: independence and a specific type of nondeterminism. In terms of the coin-flipping process, independence implies that the process that flips the coin is not influenced by prior flips and nondeterminism implies that one cannot state with certainty whether a head or a tail will occur on a flip. In the case of an honest coin one can only say that the probability is ½ that a head will occur.

A process is said to generate random numbers in a strict, or theoretical, sense when the process provides random variables that are (1) independent in the sense of probability and (2) governed by the same continuous uniform distribution. Independence implies that knowledge of some numbers generated by the process cannot help in predicting other numbers. The uniform distribution requirement ensures that the numbers generated lie within an interval; the probability that a random number lies in a subinterval is given by the length of the subinterval divided by the length of the entire interval [see Distributions, statistical; Probability].

A process generates random (decimal) digits when it provides random variables that are independent and take the values 0, 1, …, 9 with equal probability, 1/10, that is, the random variables follow a discrete uniform distribution. Similar definitions apply to random digits for bases other than 10, for any set of contiguous integers, or, in general, for any set following a discrete uniform distribution.

No real process can truly generate random numbers in the strict sense, since real processes must always generate rational numbers. The term “random numbers” is, however, often applied to processes that generate independent, equispaced, equally probable rational numbers when those numbers are so numerous that the discrete process may be considered an approximation to a continuous one. It is necessary, nonetheless, to keep this distinction between theory and practice in mind. Similarly the terms “random digits” and “random numbers” are in practice applied to real processes that presumably can never provide values that are exactly independent or uniform. Further, the term “random numbers” is often loosely used to apply to any of the above possibilities. Finally, these terms are even sometimes used to mean any number-producing process—probabilistic or not—where the produced values appear approximately equally often and where there is no evident pattern in the series of outcomes. See Hammersley and Handscomb (1964), Shreider (1962), or Young (1962) for additional discussion. This terminological thicket is not so tangled as might appear at first, since the meaning intended is usually clear from context.

Uses of random numbers

Random numbers, digits, etc. permit the construction of probability samples, which in turn permit the use of present-day statistical inference. In sample surveys, units must be selected from the sampled population by probability sampling if statistical analysis, in the ordinary sense, is properly to be done. The simplest case is that in which each unit of the population has an identification number (from 1 to N, say). Random integers (in the interval 1 to N) are chosen and the corresponding units form the sample. In practice, sampling without replacement is usually used, and here random permutations are needed; these are discussed below. Also in practice, sampling designs of various degrees of complexity are common, using devices of stratification, clustering, etc. [see Sample surveys].
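
To make the simplest case concrete, the following sketch (in Python, used here purely for illustration and not part of the original article) selects units by drawing random integers in the interval 1 to N; the population list and the function name are hypothetical.

```python
import random

def select_units(units, sample_size, seed=None):
    """Simplest case: draw random integers in the interval 1 to N and take
    the units bearing those identification numbers.  Duplicates are possible
    here; sampling without replacement, via random permutations, is treated
    later in the article."""
    rng = random.Random(seed)
    N = len(units)
    chosen_ids = [rng.randint(1, N) for _ in range(sample_size)]
    return [units[i - 1] for i in chosen_ids]

# A hypothetical population of N = 20 numbered units.
population = ["unit %d" % i for i in range(1, 21)]
print(select_units(population, sample_size=4, seed=5))
```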

In a similar way, random integers or permutations are used in experimental design for deciding which experimental treatments are to be applied to the various physical units: mice, men, fields, etc. In both these cases, the use of a random device serves to protect against selective bias, for example, assigning the healthiest subjects to one experimental therapy. Of equal importance, however, is the fact that the use of a random device provides a firm foundation for statistical inference [see Experimental design].

Random numbers can also be very useful in studying models of complex physical mechanisms or abstract theories, whenever a model involves variables with probabilistic (stochastic) elements from arbitrary probability distributions. For example, in models of fatigue or of learning one can simulate the performance of many individuals by generating observations from a hypothetical probability distribution depicting performance and then using the generated results to estimate actual performance. In essence, one is sampling from the hypothetical population to estimate how the “real” population can be expected to perform. Random numbers are used to obtain observations from an arbitrary probability distribution function. If observations {x} are desired from a probability distribution F(x), and if {u} denotes values of random numbers in the unit interval (0,1), that is, observations from the unit uniform distribution, then an observation value x from F(x) is obtained by generating a value for u and solving for x in the relationship x = F⁻¹(u). For the derivation and conditions necessary for this result, see, for example, Mood (1950, p. 107). More economical ways of converting random numbers to observational values of F(x) can often be achieved. For example, several methods exist for obtaining observations from the normal distribution function (see Box & Muller 1958; Muller 1958; Teichroew 1965; Hull & Dobell 1962).
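
As a concrete illustration of the inversion relationship, the following hedged sketch draws observations from an exponential distribution, for which F(x) = 1 − e^(−λx) and hence x = F⁻¹(u) = −ln(1 − u)/λ; the choice of distribution, the parameter name lam, and the sample size are assumptions of the example, not part of the original text.

```python
import math
import random

def exponential_observations(lam, size, seed=None):
    """Draw observations by inversion: if u is uniform on (0, 1), then
    x = F^{-1}(u) = -ln(1 - u) / lam follows F(x) = 1 - exp(-lam * x)."""
    rng = random.Random(seed)
    observations = []
    for _ in range(size):
        u = rng.random()                 # a random number in the unit interval
        x = -math.log(1.0 - u) / lam     # solve u = F(x) for x
        observations.append(x)
    return observations

# Example: five observations from an exponential distribution with mean 1/2.
print(exponential_observations(lam=2.0, size=5, seed=1))
```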

The use of random numbers has also been considered for solving physical or mathematical problems that appear to be completely nonprobabilistic. For example, assume that a mathematical function is given and it is desired to find (by integration) the area under its curve between specified limits of the independent variable. The solution to this problem can be approximated by use of random numbers. Imagine, for convenience, that the area to be estimated lies within the unit square; then by generating pairs of random numbers that lie in the unit square and comparing the number of pairs that lie under the curve to the total number of pairs generated, one obtains an estimate of the area under the curve. The general basis for this approach is considered, for example, by George W. Brown (1956, p. 283).
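
A minimal sketch of this area-estimation idea, assuming as the curve the quarter circle y = √(1 − x²) over the unit interval (true area π/4 ≈ 0.785); the particular function and the number of trials are illustrative choices only.

```python
import math
import random

def estimate_area(f, trials, seed=None):
    """Estimate the area under y = f(x) inside the unit square by generating
    pairs of random numbers and counting how many fall under the curve."""
    rng = random.Random(seed)
    under = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()   # a random point in the unit square
        if y <= f(x):
            under += 1                      # the pair lies under the curve
    return under / trials

# Quarter circle: the true area is pi/4, approximately 0.785.
estimate = estimate_area(lambda x: math.sqrt(1.0 - x * x), trials=100000, seed=1)
print(estimate, math.pi / 4)
```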

With the advent of modern digital computers this approach to mathematical problems has been given considerable attention and is referred to as the Monte Carlo (sampling) approach (see Florida, Univ. of 1956; Hammersley & Handscomb 1964; Shreider 1962). Indeed, the Monte Carlo approach can be used to explore theoretical questions of statistics that are not readily solved by direct analysis (see Teichroew 1965). Without going into detail, it can be stated that random numbers can also be used to mix strategies in complex decision processes, an application of growing usefulness [see Decision theory].

Sources of random numbers

Physical and mathematical processes have both been employed to generate random numbers. It is very difficult to determine whether these sources provide satisfactory numbers; the closeness of agreement between the real-world process and the hypothetical model has been measured by rather general purpose statistical tests, which at best can give only partial answers.

Until recently, the major sources of random numbers have been published tables derived primarily by using physical devices, some of which have been improved with the aid of mathematical processing. A list of prepared tables of random numbers is presented below.

The physical processes used include (1) drawing capsules from a bowl, with replacement and remixing after each drawing; (2) flashing a beam of light at irregular time intervals on a sectioned rotating disk, so as to obtain section numbers; (3) recording digitized electronic pulses whose time distribution of occurrences is supposedly random; and (4) counting the number of output pulses generated by the radioactive decay of cobalt 60 during constant time intervals.

Physical processes have several disadvantages as sources of random numbers. Unless one stores the numbers generated by a physical process one cannot have them available and cannot regenerate the specific sequence used if it becomes necessary to check or repeat a computation. Furthermore, the early types of physical processes were difficult to maintain and use. Interest has shifted to the use of mathematical processes that can be programmed for digital computers, since large tables cannot as yet be economically entered and stored in the high-speed memories of computers. Most such mathematical processes center about a recurrence relation, where the next number of a sequence is derived from one or more prior numbers of the sequence.

These mathematical processes are deterministic, that is, any number of a sequence is a specified function of prior numbers of the sequence. For this reason, one could easily raise objections to the use of these processes and prefer the use of physical processes. However, as will be noted below, there is a strong pragmatic justification for these mathematical procedures.

Several methods use the output of a physical process as initial “random” numbers, which are improved by use of mathematical relationships such as those provided by Horton and Smith (1949). These techniques have been called compound randomization techniques, and they appear to hold much promise.

History

Tables of random numbers and random digits are a relatively recent development, having been available only since 1927. This is surprising, considering that mechanical methods and shuffling devices for producing random outcomes had already been employed to examine probability problems almost a century earlier. Samples for studying theoretical properties of various statistics had been reported in 1908 by “Student” (W. S. Gosset), who investigated the distribution of the standard deviation and the ratio of the mean to the standard deviation of samples by shuffling and drawing at “random” from a population of 3,000 cards (see Gosset 1907–1937). This study and earlier sampling studies using dice to estimate correlations among events—for example, Darbishire in 1907 and Weldon in 1906—are reviewed in Teichroew (1965).

Even after it was recognized that random numbers would provide a reliable and economical tool in selecting samples, a satisfactory method to generate them still remained to be found. Karl Pearson was instrumental in encouraging L. H. C. Tippett to develop a table of random digits. In the introduction to that table, Pearson mentioned that Tippett found that drawing numbered cards from a bag was unsatisfactory and that better results were obtained by using digits selected from a 1925 census report. With this approach, Tippett (1927) provided a table of 41,600 digits. Subsequently several analyses of this table were made and larger tables using other methods have since appeared. With the introduction of punched-card equipment, calculators, and digital computers, other methods of creating random numbers have been considered. Interest has focused on additional ways to test for the adequacy of numbers produced by various methods. Recognizing that in long sequences of random numbers, one might observe patterns of numbers that might be objectionable for a specific use, Kendall and Smith (1938, p. 153) provided criteria of local randomness by which to judge whether a set of numbers is acceptable for a specific application. One could raise objections to the use of the concept of local randomness to include or exclude specific sequences of numbers because each has one or more sets of subsequences that do or do not occur in some desired order. The practical and philosophical questions here are not solved and they raise doubt in some minds about the very foundations of probability theory. For example, G. Spencer Brown (1957) has given considerable discussion to the apparent paradox resulting from the exclusion of specific patterns of numbers.

The first of the mathematical generation processes, which is discussed below, appears to have been suggested by John von Neumann, who proposed the middle square method; this was followed by a multiplicative congruence method by Lehmer, which appeared in 1951.

In passing it should be noted that although Fisher did not directly contribute to the development of techniques for random numbers until 1938, when he and Yates provided their tables, his influence in suggesting the use of randomization in experimental design undoubtedly stimulated much interest in the subject.

Tables of random digits

Since the publication of the table of 41,600 digits by Tippett in 1927, several major tables have appeared. In 1938–1939 two were published: the first, by Fisher and Yates (1938), contains 15,000 digits based on entries in a table of logarithms; the second, by Kendall and Smith (1938; 1939), contains 100,000 digits based on readings from a rotating disk. In conjunction with the publication of their table, Kendall and Smith introduced their four tests of randomness and the concept of local randomness. Several workers have examined these three tables. In 1953, Good raised the question of the need for a modification of one of the tests by Kendall and Smith. From 1939 through the middle of the 1950s several other tables were developed. The most ambitious and largest was the one developed by workers at the RAND Corporation, using the output of an electronic pulse generator and a compound randomization technique to “improve the numbers” (see Brown 1951). These tables originally had limited distribution and were also available on punched cards. In 1955, a table appeared as a book under the title A Million Random Digits With 100,000 Normal Deviates (RAND Corp. 1955). This book is now more or less the standard source. The table of normal deviates was derived by converting random numbers to observations from the unit normal distribution by the method of inversion mentioned earlier. Several earlier workers had also published tables of normal deviates based upon, for example, Tippett’s table. See Section 13 of Greenwood and Hartley (1962) for an extensive list of tables on random digits and random observations from other distributions.

Each publication of random digits contains, in addition, instructions for the use of the tables. There are unresolved questions about the consequences of the repeated use of the same table.

As previously mentioned, mathematical methods for producing approximately random numbers were introduced for reasons of convenience and efficiency. When a sequence of random numbers is used in a computation, it is often desirable to be able to reproduce the sequence exactly so that comparisons can be made among related computations that use the same sequence. The mathematical generation processes that have been provided to date all satisfy the requirement of reproducibility; that is, given a definite starting point in the process, the same sequence of subsequent numbers can be obtained in repeated applications of the process. The mathematical processes that have been introduced reflect the fact that the electromechanical calculators and early digital computers had very limited memories and generally could perform only one arithmetical or logical operation at a time. It was, therefore, not economical to store large volumes of random numbers, and the generation processes, to conserve memory and computer time, usually used at each step only one or two prior numbers of the process, which were combined by simple computer operations. It remains to be seen whether more involved generation processes will be introduced to take fuller advantage of advances in the speed and capacity of computers.

Mathematical generation processes

A mathematical process for obtaining sequences of numbers in a deterministic manner cannot generate sequences of truly random numbers. Despite this limitation, however, deterministic processes have a practical advantage over random processes, in that a random process can generate very undesirable, although unlikely, sequences of numbers, which would provide questionable samples or experimental designs. One can, on the other hand, sometimes design a deterministic process that generates numbers in a sufficiently haphazard arrangement to satisfy criteria of approximate randomness essential to an application. Processes can be designed so as to give special importance or weight to specific sequences of numbers. Such weighting techniques are similar to stratification techniques used in sampling applications; these ideas are considered, for example, by George W. Brown (1956), Daniel Teichroew (1965), and H. A. Meyer (Florida, Univ. of 1956). Hammersley and Handscomb (1964, p. 27) use the term “quasi random numbers” when it is known that a sequence of numbers is nonrandom but possesses particular desirable statistical properties.

To prevent misunderstanding about the interpretation of numbers generated by a deterministic mathematical process, the process will be referred to as one that generates pseudo random numbers when the numbers are produced with the intent to use them as if they were random numbers. Use of the term “pseudo random numbers” relates to the intent rather than the performance of the process, and the suitability of a process that generates pseudo random numbers must be judged relative to each specific application. For example, assume that in a specific application it is necessary to be able to generate even and odd numbers with the same probability. Further, assume there exists some method for generating statistically independent numbers, with r as the probability that a number generated will be odd and q = 1 − r as the probability that it will be even. It is desired that r equal q. The following device can be employed to satisfy the requirement: Generate and use the numbers in pairs. If the pair is odd-odd (with probability r2) or even-even (with probability q2), ignore the pair. If the pair is odd-even, call it an odd occurrence; if the pair is even-odd, call it an even occurrence. Note that the probabilities of an odd occurrence or an even occurrence are now equal; namely, r(1 − r) = (1 − r)r so that the conditional probabilities are both ½.
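
The pairing device just described can be sketched as follows; the biased source below, which produces an odd number with probability r, is a hypothetical stand-in for whatever generation method is actually available, and the function names are illustrative.

```python
import random

def biased_parity(r, rng):
    """Hypothetical source: returns an odd number with probability r and an
    even number with probability q = 1 - r."""
    return 1 if rng.random() < r else 2

def balanced_parity(r, rng):
    """Use the biased source in pairs: discard odd-odd and even-even pairs;
    call an odd-even pair an 'odd' occurrence and an even-odd pair an 'even'
    occurrence.  Each occurrence then has conditional probability 1/2."""
    while True:
        a, b = biased_parity(r, rng), biased_parity(r, rng)
        if a % 2 != b % 2:                  # keep only the mixed pairs
            return "odd" if a % 2 == 1 else "even"

rng = random.Random(7)
draws = [balanced_parity(r=0.7, rng=rng) for _ in range(10000)]
print(draws.count("odd") / len(draws))      # close to 0.5 despite the bias
```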

If the generation process is being performed to obtain p-position numbers in a given number system (the number 725 is a three-position decimal number), then the largest number of distinct numbers possible is the base of the number system raised to the pth power. For example, in the decimal base the limit is 10^p; thus the number of distinct numbers that can be specified using three positions is limited to 10^3 = 1,000. Although it follows that the number of possible distinct pseudo random numbers is limited by the number system and the number of positions used, this limitation is not necessarily serious. Some of the numbers will be repeated whenever a sufficiently long sequence of p-position numbers is generated. If the sequence being generated begins to consist of numbers in a cycle pattern that will be repeated without interruption, then the length of this cycle is called the period of the process. The early generation methods were designed to provide sequences with a period as large as possible for a given p. This was partially motivated by the belief that it might help ensure adequate haphazardness of the numbers. The period of a specific process can be either smaller or larger than the number of distinct p-position numbers that can be generated. The determination of the period, if any, of a process can be very difficult and can depend on how many prior numbers are used in determining the next number to be generated. It is often felt that it is important for the period to be longer than the number of pseudo random numbers needed for a specific application or experiment.

In addition to seeking a long period, it is customary to check that a process does not produce the same number too often. Some processes also need to be checked to ensure that they do not degenerate, that is, begin to repeat a single number pattern with a short period. In spite of the deterministic nature of a process, it may be very difficult to analyze its behavior except by subjecting actual outcomes to statistical tests. For example, the behavior of the middle square method, which was proposed by von Neumann, has been difficult to analyze. The method proposed by Lehmer, however, is much more tractable to analysis by number theory, and it has features in common with many other methods proposed by subsequent workers, such as the multiplicative congruence method described below. Hull and Dobell (1962) have compiled a very comprehensive survey of all mathematical methods and studies made through the middle of 1962.

Multiplicative congruence. The term “congruence” means the following: two integers A and B are said to be congruent modulo M, written A ≡ B (mod M), if A and B have the same remainder when divided by M. A sequence x1, x2, … can be generated by a multiplicative congruence relation such as xn+1 ≡ kxn (mod M), as follows: Assume that k is a given constant and it is multiplied by the first number of the sequence. The result of the multiplication is divided by M and the remainder of this division process is specified to be the next number of the sequence, x2. Thereafter x3 is obtained by repeating the multiplication and division process using x2 in place of x1. Studies of this process have been made concerning the selection of k and x1 relative to M to ensure a large period. As mentioned by Lehmer, the particular relation given here has the undesirable feature that the right-hand digit positions of the generated numbers are not satisfactory individually as random digits. To overcome this limitation and to try to get a period as large as M, Rotenberg suggested modifying the relation by adding a constant, that is, xn+1 ≡ kxn + c (mod M). However, the choice of k and c relative to M not only influences the period but also influences the extent of correlation between successive numbers of the sequence.
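
A sketch of the congruential relations just described; with c = 0 the recurrence is the multiplicative form, and a nonzero c gives Rotenberg's modification. The particular constants in the example call (M = 2^31 − 1, k = 16807, and the starting value) are a widely cited later choice used only for illustration, not values prescribed in the article.

```python
def congruential_sequence(x1, k, M, c=0, length=10):
    """Generate x1, x2, ... by the recurrence x_{n+1} = (k * x_n + c) mod M.
    With c = 0 this is the multiplicative congruence method; a nonzero c
    gives the modified (Rotenberg) form.  Dividing each term by M rescales
    the integers to pseudo random numbers in the unit interval."""
    xs = [x1]
    for _ in range(length - 1):
        xs.append((k * xs[-1] + c) % M)
    return xs

# Illustrative constants only: M = 2**31 - 1 with k = 16807 is a widely
# cited multiplicative choice; the starting value x1 is arbitrary.
sequence = congruential_sequence(x1=12345, k=16807, M=2**31 - 1)
print(sequence)
print([x / (2**31 - 1) for x in sequence])   # rescaled to the unit interval
```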

Other recurrence relations. Other types of recurrence relations, such as using two prior numbers instead of one, have been suggested; and these are reviewed in the paper by Hull and Dobell (1962), as is the method suggested by Rotenberg. However, much more work remains to be done before the behavior of the methods that have been proposed is fully understood. Although pseudo random numbers that pass the statistical tests applied to them have been produced, specific digit positions of these numbers have failed to be satisfactory for use as pseudo random digits. One can, however, get pseudo random digits by judicious use of pseudo random numbers or by putting pseudo random numbers through a compound randomization procedure of the type suggested by Horton and Smith (1949) or Walsh (1949). Studies have been made of the digit positions of irrational numbers such as π and e as pseudo random digits, since their expansions continue indefinitely without repeating. For example, Metropolis, Reitwiesner, and von Neumann (1950) studied some of the properties of the first 2,000 digits of π and e, and Stoneham (1965) studied the first 60,000 digits of e. It is important to keep in mind that although one can pose and apply a large number of meaningful tests for randomness, any given sequence of numbers or digits cannot be expected to meet all such tests with an arbitrary level of statistical significance.

Random permutations

Random permutations represent a special application of random digits in the design of experiments. Here “permutation” is used to indicate that a set of objects has been assigned a specific order. Random permutations are used when it is desired, for example, to assign (order) a set of n different objects to n positions on a line at random. The number of different arrangements possible for n different objects is n!. For example, three objects labeled 1, 2, 3 can be arranged in 3! = 3 · 2 · 1 = 6 different ways: 123, 132, 213, 231, 312, 321.

A process is said to generate random permutations of size n if each of the n! different possible permutations is independently generated with equal probability. The probability of observing a specific permutation will be 1/n!. Random permutations of size n may be based on random digits as follows. If n is a p-position number, then p-position integers will be selected from a table of random digits with the following restrictions: If the number selected is equal to or less than n and if it has not already been selected for the permutation, then it should be included as part of the permutation; otherwise it should be rejected. Continue this process until all n places in the permutation have been filled. Although this method is practicable when n is quite small, there are better methods that do not reject so many random digits (see Cochran & Cox 1950, p. 569; Fisher & Yates 1938, p. 34; Walsh 1957, p. 355).
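
The rejection procedure just described can be sketched as follows, with a programmed generator standing in for a table of random digits; the exchange shuffle shown for comparison is one example of the more economical methods referred to above, and the function names are illustrative.

```python
import random

def permutation_by_rejection(n, rng):
    """Build a random permutation of 1..n as described above: draw p-position
    integers from random digits, rejecting any value larger than n or already
    included in the permutation."""
    p = len(str(n))                                      # digit positions needed
    permutation = []
    while len(permutation) < n:
        digits = [rng.randrange(10) for _ in range(p)]   # p random digits
        value = int("".join(str(d) for d in digits))
        if 1 <= value <= n and value not in permutation:
            permutation.append(value)                    # accept; otherwise reject
    return permutation

def permutation_by_exchange(n, rng):
    """A more economical method: exchange each position in turn with a
    randomly chosen later position, wasting no random digits."""
    items = list(range(1, n + 1))
    for i in range(n - 1):
        j = rng.randrange(i, n)
        items[i], items[j] = items[j], items[i]
    return items

rng = random.Random(3)
print(permutation_by_rejection(9, rng))
print(permutation_by_exchange(16, rng))
```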

The idea of having random permutations available as a table is due to George W. Snedecor. Tables of random permutations of size 9 and 16 can be found in Cochran and Cox (1950, p. 577) and Moses and Oakford (1963). For other sources, see Greenwood and Hartley (1962, p. 460).

Sampling without replacement. Sampling without replacement is a type of probability sampling of a finite population where each item of the population has an equal and independent probability of being selected but each item is allowed to be included in a sample only once. A random permutation of size n represents the extreme situation of sampling without replacement where the entire population of size n is included in the sample in a specified order. If one is interested in selecting, without replacement, random samples of size r from a population of size n, then random permutations of size n can be used to accomplish the selection by using the first r numbers of each permutation. There are other methods of using random numbers to select samples without replacement. For example, a recent set of techniques that can be used on digital computers is due to Fan, Muller, and Rezucha (1962, p. 387). With respect to the direct use of random digits to select samples without replacement, Jones (1959) has studied how many numbers must be used to obtain a sample of given size, taking into account the need to reject duplicate and undesired numbers.
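
A brief sketch of this selection rule, taking the first r entries of a random permutation of 1 to n as the sample drawn without replacement; the library shuffle is used here for brevity in place of the permutation routines sketched earlier.

```python
import random

def sample_without_replacement(n, r, seed=None):
    """Select r of n identification numbers by taking the first r numbers
    of a random permutation of 1..n."""
    rng = random.Random(seed)
    identification_numbers = list(range(1, n + 1))
    rng.shuffle(identification_numbers)        # a random permutation of 1..n
    return identification_numbers[:r]

# Example: a sample of 5 units from a population numbered 1 to 100.
print(sample_without_replacement(n=100, r=5, seed=2))
```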

Mervin E. Muller

[Other relevant material may be found in Errors, article on nonsampling errors, and in the biographies of Fisher, R. A.; and Gosset.]

BIBLIOGRAPHY

Box, G. E. P.; and Muller, Mervin E. 1958 A Note on the Generation of Random Normal Deviates. Annals of Mathematical Statistics 29:610–611.

Brown, G. Spencer 1957 Probability and Scientific Inference. London: Longmans.

Brown, George W. 1951 History of RAND’s Random Digits: Summary. Pages 31–32 in U.S. National Bureau of Standards, Applied Mathematics Series, Monte Carlo Method. Edited by Alston S. Householder et al. Washington: Government Printing Office.

Brown, George W. 1956 Monte Carlo Methods. Pages 279–303 in Edwin F. Beckenbach (editor), Modern Mathematics for the Engineer. New York: McGraw-Hill.

Cochran, William G. (1953) 1963 Sampling Techniques. 2d ed. New York: Wiley.

Cochran, William G.; and Cox, Gertrude M. (1950) 1957 Experimental Designs. 2d ed. New York: Wiley.

Darbishire, A. D. 1907 Some Tables for Illustrating Statistical Correlation. Manchester Literary and Philosophical Society, Memoirs and Proceedings 51, no. 16.

Fan, C. T.; Muller, M. E.; and Rezucha, I. 1962 Development of Sampling Plans by Using Sequential (Item by Item) Selection Techniques and Digital Computers. Journal of the American Statistical Association 57:387–402.

Fisher, Ronald A.; and Yates, Frank (1938) 1963 Statistical Tables for Biological, Agricultural, and Medical Research. 6th ed., rev. & enl. New York: Hafner. → Previous editions were published by Oliver and Boyd.

Florida, University of, Gainesville, Statistical Laboratory 1956 Symposium on Monte Carlo Methods, Held at the University of Florida … March 16 and 17, 1954. Edited by Herbert A. Meyer. New York: Wiley.

Good, I. J. 1953 The Serial Test for Sampling Numbers and Other Tests for Randomness. Cambridge Philosophical Society, Proceedings 49:276–284.

[Gosset, William S.] (1907–1937) 1943 “Student’s” Collected Papers. Edited by E. S. Pearson and John Wishart. London: Biometrika Office, University College. → William S. Gosset wrote under the pseudonym “Student.”

Greenwood, Joseph A.; and Hartley, H. O. 1962 Guide to Tables in Mathematical Statistics. Princeton Univ. Press. → See especially pages 454–468, “Tables of Random Samples.”

Hammersley, John M.; and Handscomb, David C. 1964 Monte Carlo Methods. New York: Wiley.

Horton, H. Burke; and Smith, R. Tynes 1949 A Direct Method for Producing Random Digits in Any Number System. Annals of Mathematical Statistics 20:82–90.

Hull, T. E.; and Dobell, A. B. 1962 Random Number Generators. SIAM Review 4:230–254.

Jones, Howard L. 1959 How Many of a Group of Random Numbers Will Be Usable in Selecting a Particular Sample? Journal of the American Statistical Association 54:102–122.

Kendall, M. G.; and Smith, B. Babington 1938 Randomness and Random Sampling Numbers. Journal of the Royal Statistical Society Series A 101:147–166.

Kendall, M. G.; and Smith, B. Babington 1939 Tables of Random Sampling Numbers. Cambridge Univ. Press.

Lehmer, D. H. 1951 Mathematical Methods in Large-scale Computing Units. Pages 141–146 in Symposium on Large-scale Digital Calculating Machinery, Second, 1949, Proceedings. Cambridge, Mass.: Harvard Univ. Press.

Metropolis, N. C.; Reitwiesner, G.; and Von Neumann, J. 1950 Statistical Treatment of Values of First 2,000 Decimal Digits of e and π Calculated on the ENIAC. Mathematical Tables and Other Aids to Computation 4:109–111.

Mood, Alexander M. 1950 Introduction to the Theory of Statistics. New York: McGraw-Hill.

Moses, Lincoln E.; and Oakford, Robert V. 1963 Tables of Random Permutations. Stanford (Calif.) Univ. Press.

Muller, Mervin E. 1958 An Inverse Method for the Generation of Random Normal Deviates on Large-scale Computers. Mathematical Tables and Other Aids to Computation 12:167–174.

RAND Corporation 1955 A Million Random Digits With 100,000 Normal Deviates. Glencoe, Ill.: Free Press.

Shreider, Iulii A. (editor) (1962) 1964 Method of Statistical Testing: Monte Carlo Method. Amsterdam and New York: Elsevier. → First published in Russian.

Stoneham, R. G. 1965 A Study of 60,000 Digits of Transcendental e. American Mathematical Monthly 72:483–500.

Teichroew, Daniel 1965 A History of Distribution Sampling Prior to the Era of the Computer and Its Relevance to Simulation. Journal of the American Statistical Association 60:27–49.

Tippett, Leonard H. C. (1927) 1952 Random Sampling Numbers. Cambridge Univ. Press.

Von Neumann, John 1951 Various Techniques Used in Connection With Random Digits. Pages 36–38 in U.S. National Bureau of Standards, Applied Mathematics Series, Monte Carlo Method. Edited by Alston S. Householder. Washington: Government Printing Office.

Walsh, John E. 1949 Concerning Compound Randomization in the Binary System. Annals of Mathematical Statistics 20:580–589.

Walsh, John E. 1957 An Experimental Method for Obtaining Random Digits and Permutations. Sankhya: The Indian Journal of Statistics 17:355–360.

Weldon, W. F. R. 1906 Inheritance in Animals and Plants. Pages 81–109 in T. B. Strong (editor), Lectures on the Method of Science. Oxford: Clarendon.

Young, Frederick H. 1962 Random Numbers, Mathematical Induction, Geometric Numbers. Boston: Ginn.

random numbers

random numbers Numbers that are drawn using a random sampling technique from a set of permissible numbers. True random numbers are difficult to obtain, and programs using them are difficult to debug.

Attempts to produce random numbers using the arithmetic properties of a computer result in pseudorandom numbers, since in principle the numbers generated depend on their predecessors. However, their frequency distribution may be assumed to correspond to a given theoretical form, and they may be assumed to be independent of each other. Basic pseudorandom numbers are uniformly distributed in the range (0,1), and may be transformed to provide other distributions.

Many methods for their generation have been proposed over the years, one of the earliest being the middle square method proposed by von Neumann: the previous random number is squared and the middle digits extracted from the result to form the next number in the sequence. More successful methods are based upon the linear congruential method, in which a sequence of numbers is generated using the formula Xn+1 = aXn + c (mod m) for particular choices of a, c, and m.
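
A sketch of the middle square method just described, assuming four-digit numbers; the seed and the digit width are arbitrary illustrative choices.

```python
def middle_square(seed, width=4, length=10):
    """von Neumann's middle square method: square the previous number, pad
    the square to 2 * width digits, and take the middle `width` digits as
    the next number in the sequence."""
    numbers = [seed]
    for _ in range(length - 1):
        squared = str(numbers[-1] ** 2).zfill(2 * width)
        middle = squared[width // 2: width // 2 + width]
        numbers.append(int(middle))
    return numbers

# An arbitrary four-digit seed; such sequences eventually fall into short
# cycles, which is one of the method's known weaknesses.
print(middle_square(seed=5735))
```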

Pseudorandom numbers are used in a number of applications: in Monte Carlo methods of numerical integration, to sample a large set and so gain insight into the set, and to simulate natural phenomena such as the collision of nuclear particles.
