Mathematics: Probability and Statistics



Introduction

Probability and statistics are the mathematical fields concerned with uncertainty and randomness. Since uncertainty is part of all measurement—no instrument is perfect—and all scientific knowledge is ultimately tested against measurements, probability and statistics necessarily pervade science. They are also found throughout engineering, economics, business, government, artificial intelligence, and medicine. Indeed, in today's world most people grapple with statistical concepts and claims in everyday life. For example, if the weather forecast predicts a 30% chance of rain, should I grab an umbrella as I go out? If a study reports that eating a certain fast food as often as I do increases the probability of my contracting heart disease by 10%, should I change my diet? What if the claimed risk increase is 80%?

Although human beings have always weighed probabilities intuitively when making decisions with uncertain outcomes, the application of mathematical methods to uncertainty is only a few centuries old. Early mathematicians did not see uncertainty as a natural topic for mathematics, which was thought of as the science of pure certainty, of absolute knowledge: The idea of deliberately introducing uncertainty into mathematics would probably have struck early philosophers as ridiculous or repulsive. Nevertheless, probability and statistics are today essential aspects of applied mathematics. For example, the behavior of the elementary particles of which the universe is composed is, according to the interpretation most common among physicists today, random in its essence and can only be described using probabilistic concepts.

Historical Background and Scientific Foundations

The Beginnings of Probability

The first mathematical treatments of probability arose from a desire to understand and to triumph in games of chance. Suppose, for example, that two gamblers play a dice-rolling game in which the rule is that the first player to win the game three times claims the wager. Now suppose the game is interrupted after it has begun but before one of the players has won three times. How should the wager be divided in order to fairly recognize the current positions of the two players? In other words, if the first player has won two games and the second player has won one game, what portion of the wager belongs to the first and what portion to the second?

This question was posed to French mathematician Blaise Pascal (1623–1662) by Antoine Gombaud, Chevalier de Méré (1607–1684), a fellow Frenchman and writer with a thirst for gambling. Similar problems had been addressed by various fifteenth and sixteenth century mathematicians, but without significant success. Pascal himself communicated the problem to another brilliant French mathematician, Pierre de Fermat (1601–1665). Both men proposed solutions to the problem and exchanged a series of philosophical letters during the summer of 1654. In this correspondence, Pascal and Fermat inaugurated the modern study of mathematical probability.
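Fermat's solution, in essence, counted the equally likely ways the remaining games could play out. A minimal sketch of that counting argument for the situation described above (first to three wins, with the score at two games to one):

```python
from itertools import product

# Problem of points: the first player to 3 wins takes the wager.
# Current score: player A has 2 wins, player B has 1 win.
# At most 2 more games are needed; enumerate all equally likely ways they could go.
target = 3
wins_a, wins_b = 2, 1

a_share = 0
outcomes = list(product("AB", repeat=2))
for outcome in outcomes:
    a, b = wins_a, wins_b
    for game in outcome:
        if a < target and b < target:      # play only until someone reaches 3 wins
            if game == "A":
                a += 1
            else:
                b += 1
    if a == target:
        a_share += 1

print(f"Player A's fair share: {a_share}/{len(outcomes)}")   # 3/4 of the wager
```

Three of the four equally likely continuations end with the leading player winning, so a 3:1 division of the wager is the fair split.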

A Dutch scientist and mathematician, Christiaan Huygens (1629–1695), learned of the Pascal-Fermat correspondence and published the first printed version of the new ideas developing in mathematical probability in a 1657 book called De Ratiociniis in Ludo Aleae (On Reasoning in Games of Chance).

Pascal also provided us with one of the most ambitious claims ever based on mathematical probability—the claim that belief in God is mathematically justified. “Pascal's Wager,” as his argument is called, is an early attempt to apply what mathematicians today call decision theory. Pascal claimed one could choose either to believe in God or not to believe in God. Now, presumably God exists or does not. If one believes in God yet God does not exist, Pascal argued, no true loss is incurred. And if one does not believe in God and God does not exist, again nothing is lost. However, if one chooses to believe in God and God does exist, the rewards are infinite, and if one chooses not to believe in God yet God does exist, the penalty is immense. Pascal's conclusion: one should choose to believe in God.

It should be noted that Pascal's Wager is not the purely logical argument that it may seem at first sight. It depends, rather, on a number of assumptions about ethics, psychology, and theology, including the doctrine that God (if there is such a being) punishes disbelief, which not all religious groups affirm. In the case of Pascal's Wager, as with many arguments in the fields of game theory and decision-making that employ the mathematics of probability, the conclusion actually depends not only on mathematical reasoning but on implied claims about facts and values that may be questionable. In fact, so common is the cloaking of flawed arguments in the jargon and machinery of probability and statistics that the phrase “lies, damned lies, and statistics” has become a cliché. Experts and non-experts alike should scrutinize statistical arguments carefully, whether they happen to find their conclusions pleasing or not.

Basic Discoveries in the Theory of Probability

To modern persons, the probabilities associated with tossing a coin may seem obvious. If the coin is fair (not weighted to come up one way more often than the other), then the probability of getting “heads” on any given flip is 1/2 (0.5) and the probability of a “tails” is also 0.5. (Probabilities for given events are expressed as numbers between 0 and 1, with 0 denoting the probability of an impossible event and 1 the probability of a certain event.) Therefore, if one were to flip a fair coin 10 times one would expect to see about five heads and five tails. The word “about” is important: If you actually took a coin (presumed to be fair) out of your pocket and flipped it ten times, you would not think it bizarre to observe, say, six heads and four tails instead of five of each. If, on the other hand, you flipped a coin a million times and recorded about 600,000 heads and 400,000 tails, this would probably strike you as showing that the coin is not fair, even though the proportion of heads or tails to the total number of throws is the same as in the 10-flip case. Intuitively, we expect the fraction of heads to settle closer and closer to exactly one half the more times a fair coin is flipped. The mathematical truth behind this intuition is termed the law of large numbers, which was first formalized by the Swiss mathematician Jacob Bernoulli (1654–1705). The law of large numbers states, in essence, that as an experiment such as a coin toss is repeated a large number of times, the relative frequency of an event converges to the probability of that event. Although this may seem obvious to many modern observers, understanding probability as an expression of relative frequencies over large numbers of trials was a breakthrough in the quest to create a mathematics of probability.
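A quick simulation makes the point concrete; this is only an illustrative sketch, and the exact fractions will vary from run to run:

```python
import random

def heads_fraction(num_flips):
    """Simulate num_flips fair coin tosses and return the fraction of heads."""
    heads = sum(random.random() < 0.5 for _ in range(num_flips))
    return heads / num_flips

# The observed fraction of heads drifts toward the true probability 0.5
# as the number of flips grows.
for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} flips: fraction of heads = {heads_fraction(n):.4f}")
```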

Bernoulli's statement of the law of large numbers was part of his attempt to define what he called “moral certainty.” For Bernoulli, the achievement of moral certainty meant taking into account enough data or observations so that one could approach certainty as to probable outcomes. In his highly influential work, Ars Conjectandi (The Art of Conjecture), Bernoulli argued that if one were presented with an urn containing a large number of white pebbles and black pebbles, one could calculate the probable ratio of black pebbles to white pebbles by recording the color of pebbles drawn from the urn one at a time (with replacement of each pebble after it is drawn out and then restirring the urn). If the number of pebbles drawn were made large enough, the actual ratio of black to white pebbles could be approximated to any desired level of accuracy. Bernoulli chose to define a probability of at least 0.999 (one chance in a thousand of being in error) as the level required to conclude the outcome was correct with moral certainty. He proceeded to show mathematically how many trials were needed to produce such an outcome in a given situation. Today, the law of large numbers is central to many of the most important applications of probability theory.
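Bernoulli's thought experiment is easy to mimic on a computer. The sketch below assumes, purely for illustration, an urn holding 3,000 white and 2,000 black pebbles and draws from it with replacement:

```python
import random

# Hypothetical urn for illustration: 3,000 white and 2,000 black pebbles,
# so the true proportion of black pebbles is 0.4.
urn = ["white"] * 3000 + ["black"] * 2000

def estimate_black_proportion(num_draws):
    """Draw with replacement num_draws times and estimate the proportion of black pebbles."""
    black = sum(random.choice(urn) == "black" for _ in range(num_draws))
    return black / num_draws

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} draws: estimated proportion of black = {estimate_black_proportion(n):.4f}")
```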

Conditional Probability and Other Discoveries

Intuition can often be misleading in questions of probability. For example, if you have tossed a fair coin 6 times and it has come up heads every time (a very unlikely event), what is the chance, on the seventh throw, that it will come up heads yet again? The answer: 1/2. If the coin really is fair, then the chances of heads on any single toss are always 1/2, regardless of what low-probability series of throws may have come before. Another way of saying this is that the probability of getting heads on any given toss is not conditioned on the results of previous throws: It is independent of them. On the other hand, some probabilities are conditioned by other events: The probability that you will drown on a given day is greater if you go swimming that day. The mathematical treatment of conditional probabilities was first developed by the British minister Thomas Bayes (1702–1761). Bayes's work was published in 1764 in a paper called “An Essay towards Solving a Problem in the Doctrine of Chances.” Bayes's ideas have developed into the field known as Bayesian analysis.
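A standard modern illustration of conditional probability is Bayes's rule applied to a diagnostic test; the prevalence and accuracy figures below are invented for the example:

```python
# Bayes's rule: P(A | B) = P(B | A) * P(A) / P(B).
# Illustrative (invented) numbers: a disease affects 1% of a population,
# a test detects it 95% of the time, and gives a false positive 5% of the time.
p_disease = 0.01
p_pos_given_disease = 0.95
p_pos_given_healthy = 0.05

# Total probability of a positive test result.
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Probability of actually having the disease, given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")   # about 0.16
```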

JACOB BERNOULLI (1654–1705)

Jacob Bernoulli was born in Basel, Switzerland, in 1654. Although he was pressured by his father to study philosophy and theology, Jacob's true interest was in mathematics. After obtaining degrees in philosophy and theology from the University of Basel, he set out to study mathematics under some of the most important mathematicians in Europe. The University of Basel eventually offered Jacob the chair of mathematics, a position he would hold until his death in 1705.

Jacob was a member of perhaps the most remarkable mathematical family in history. He and his brother Johann made vital contributions to a wide array of mathematics in the late seventeenth and early eighteenth centuries. Three of Jacob's nephews were prominent mathematicians, as were two grand-nephews. The Bernoulli name was dominant in mathematics for over a century.

Jacob's work in the theory of probability provided a foundation for that discipline during its formative years. However, it is Bernoulli's work in calculus, another branch of mathematics that was then brand new, for which he is most often remembered today.

Another important problem in mathematical probability was proposed and solved by the French naturalist Georges-Louis Leclerc (1707–1788, also known as the Comte de Buffon). Suppose a needle of a given length is tossed onto a floor marked by equidistant parallel lines, such as the cracks in a hardwood floor. What is the probability that the needle will come to rest across one of the cracks? The solution of this question turns out to be related to the geometrical constant π.

Buffon's needle problem is an example of a seemingly esoteric question that has implications for many practical problems. Each drop of the needle can be seen as the acquisition of a single random sample or snapshot of a landscape or space in which one is searching for something (lines on the floor, submarines in the sea, astronomical objects in the sky, animals in a landscape, or other targets) but that is too large to search completely. The event of the needle's landing on or “finding” a line is comparable to the event of one's sample or snapshot detecting a target object in a large search space. Using geometrical probability methods descended from Buffon's needle problem, scientists can say how many samples are needed to characterize the number of targets in a search space to any desired level of accuracy. (Whether it is practical to make the necessary measurements is another matter, to be decided case by case.)
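For a needle no longer than the spacing between the lines, the crossing probability works out to 2L/(πd), where L is the needle's length and d the spacing, which is why the experiment can be run in reverse to estimate π. A brief simulation sketch:

```python
import math
import random

def buffon_crossing_fraction(length, spacing, num_drops):
    """Drop a needle num_drops times and return the fraction of drops that cross a line."""
    crossings = 0
    for _ in range(num_drops):
        center = random.uniform(0, spacing / 2)   # distance from the needle's center to the nearest line
        angle = random.uniform(0, math.pi / 2)    # acute angle between the needle and the lines
        if center <= (length / 2) * math.sin(angle):
            crossings += 1
    return crossings / num_drops

length, spacing, drops = 1.0, 2.0, 1_000_000
fraction = buffon_crossing_fraction(length, spacing, drops)
# Theory: fraction is about 2 * length / (pi * spacing), so pi is about 2 * length / (fraction * spacing).
print(f"Estimated pi: {2 * length / (fraction * spacing):.4f}")
```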

Other concepts of probability arose in unusual places during the early development of the field. For example, the reliability of testimony and other legal questions was put to the test by the new theories developed in the eighteenth century. In France, the Marquis de Condorcet (1743–1794) discussed in his Essay on the Application of Probability Analysis to Decisions Arrived at by Plurality of Votes such topics as the mathematical expectation of receiving fair treatment in a courtroom. Such questions remain relevant today: statistics showing that black defendants are more likely than white defendants to receive the death penalty in U.S. courts for similar murders have been cited in recent years by persons arguing for a moratorium or ban on the death penalty.

Condorcet is also known for what is called the “Condorcet Paradox,” which states that it is possible for a majority to prefer candidate A over candidate B, and candidate B over candidate C, but not prefer candidate A over candidate C. The paradox can arise because the majorities preferring A over B and B over C may not be made up of all the same individuals.
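The standard three-voter illustration can be checked by brute force; the ballots below are the usual textbook example rather than anything from Condorcet's own essay:

```python
# Three voters with cyclic preferences: the standard textbook illustration.
ballots = [
    ["A", "B", "C"],   # voter 1 prefers A > B > C
    ["B", "C", "A"],   # voter 2 prefers B > C > A
    ["C", "A", "B"],   # voter 3 prefers C > A > B
]

def majority_prefers(x, y):
    """Return True if a majority of voters rank candidate x above candidate y."""
    votes_for_x = sum(ballot.index(x) < ballot.index(y) for ballot in ballots)
    return votes_for_x > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"Majority prefers {x} over {y}: {majority_prefers(x, y)}")
# All three lines print True: the majority preference runs in a cycle A > B > C > A.
```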

Statistics

Probability theory is the mathematical study of uncertainty and may refer to purely abstract quantities; statistics is the collection and analysis, using mathematical tools from probability theory, of numerical data about the real world. Modern statistics began to develop in the late seventeenth century, as governments, municipalities, and churches, especially in Britain, began collecting more data on the births, lives, and deaths of their citizens, and interested persons began analyzing this data in the attempt to find patterns and trends. One such man, John Graunt (1620–1674), a wealthy London cloth merchant, collected and organized mortality figures; in the process he founded the science of demography, the study of human populations using statistics. His book Observations on the Bills of Mortality (1662) was perhaps the first book on statistics. Its importance lies in the fact that for the first time someone established the uniformity of certain social statistics when taken in very large numbers. His work represents a precursor to the modern life tables so vital to insurance calculations.

Another early application of statistics was found in the emerging field of actuarial science, the study of risks in the insurance business. In the seventeenth century, this application of statistics was developing into an important social and financial tool. Among the first to apply such methods to risk assessment was the Dutch political leader, Johan de Witt (1625–1672). In his book A Treatise on Life Annuities, De Witt laid out the fundamental idea of expectation in risk management. Today, actuaries (specialists in assessing insurance risks) employ advanced methods in statistics on behalf of insurance companies.

One of the most influential eighteenth-century writers on probability and statistics was Abraham de Moivre (1667–1754). De Moivre was born in France but fled to England as a young man due to religious persecution. De Moivre published two books on the subject of statistics, The Doctrine of Chances and Annuities on Lives, both of which went through many editions. In The Doctrine of Chances, de Moivre defined the crucial notion of statistical independence. Events are independent if the outcome of one event does not influence the likely outcome of the other. For example, successive flips of a coin are independent events, because each flip is a new experiment and is not influenced by earlier outcomes. But when pulling cards one by one from a deck, the results of earlier draws do influence what can happen in later draws: Later draws are, therefore, not independent of earlier draws. A person's chance of contracting lung cancer is independent of the color of their shoes, but may be dependent on whether they smoke. The concepts of dependence and independence are fundamental in modern probability and statistics.
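The definition can be verified with a little arithmetic; a sketch contrasting independent coin flips with dependent card draws:

```python
# Independence means P(A and B) = P(A) * P(B).

# Coin flips are independent: knowing the first flip tells us nothing about the second.
p_heads = 0.5
print("P(second flip heads | first flip heads) =", p_heads)   # still 0.5

# Card draws without replacement are not independent.
p_first_ace = 4 / 52
p_second_ace_given_first_ace = 3 / 51   # one ace has already left the deck
p_second_ace = 4 / 52                   # unconditional probability, by symmetry
print(f"P(second card ace | first card ace) = {p_second_ace_given_first_ace:.4f}")
print(f"P(second card ace)                  = {p_second_ace:.4f}")   # the two differ
```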

De Moivre capitalized on his friendship with Edmund Halley (of Halley's Comet fame) to expand on Halley's work on mortality statistics and the calculation of annuities. Purchase of an annuity is the exchange of a lump sum of money by a person for a guarantee of regular payments until the person's death. The longer the person lives, the more payout they receive. This is a better deal for the buyer who receives the annuity and a worse one for the seller who pays it out. On average, a seller of annuities must pay out no more than they take in, or they will go out of business. Before de Moivre, pricing of annuities was at best a hit-or-miss proposition, with experience playing the most important role in determining value. De Moivre applied the new methods of probability to the calculation of annuities to ensure a fair price to the annuitant (the person paying for the annuity) and the seller of the annuity. His calculations resulted in tables showing the value of annuities for annuitant ages up to 86.
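De Moivre simplified the arithmetic by assuming that deaths fall evenly across the years up to a limiting age of 86. The sketch below follows that uniform-mortality assumption; the 5 percent interest rate and the starting age of 60 are illustrative choices, not de Moivre's own figures:

```python
# Expected present value of an annuity of 1 per year under de Moivre's
# simplifying assumption: deaths are spread uniformly up to a limiting age.
LIMITING_AGE = 86       # de Moivre's assumed maximum age
INTEREST_RATE = 0.05    # illustrative interest rate (an assumption for this sketch)

def annuity_value(age):
    """Expected present value of 1 paid at the end of each year the annuitant survives."""
    years_remaining = LIMITING_AGE - age
    value = 0.0
    for t in range(1, years_remaining + 1):
        prob_alive = (years_remaining - t) / years_remaining   # survival probability under uniform mortality
        value += prob_alive / (1 + INTEREST_RATE) ** t
    return value

print(f"Value of a life annuity of 1 per year bought at age 60: {annuity_value(60):.2f}")
```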

Perhaps de Moivre's most important contribution to statistics was his description of the properties of the normal curve, also called the bell curve because it is shaped like the cross-section of a bell. One of the most important distributions in statistics, it is used by statisticians for many purposes, such as employing samples to approximate the behavior of populations. More precisely, the normal curve can tell statisticians how much error to expect from their sample if they are using the sample to predict the behavior of a population.
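For instance, the familiar “margin of error” quoted with opinion polls comes from the normal curve; a minimal sketch, with the sample size and sample proportion invented for illustration:

```python
import math

# Margin of error for an estimated proportion, using the normal approximation.
sample_size = 1000      # illustrative sample size
proportion = 0.52       # illustrative sample proportion

standard_error = math.sqrt(proportion * (1 - proportion) / sample_size)
margin_95 = 1.96 * standard_error   # about 95% of the normal curve lies within 1.96 standard deviations
print(f"95% margin of error: +/- {margin_95:.3f}")   # roughly +/- 0.031
```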

Probability and Statistics in the Physical Sciences

One of the most important statistical techniques, the method of least squares, also happens to possess one of the more interesting and colorful histories. The method of least squares is a procedure by which a curve based on an equation (the curve may be as simple as a straight line, and often is) is fitted to a set of measured data points. Such fits are attempted when there is reason to believe that the physical or social process being measured by the data points is well described by the theoretical curve, but the actual data have been scattered somewhat by random factors such as noise or random measurement error. The goal is to recover the best possible mathematical description or prediction of the underlying process. The best fit between curve and data is found when the curve is adjusted (say, the slope of the line is changed) until the total error or distance between all the data points and the points on the curve is least, that is, at a minimum. The method of least squares is one way of adjusting the curve shape to achieve this least error or best fit. German mathematician Carl Friedrich Gauss (1777–1855) discovered the method of least squares while still a teenager, but did not publish his results. The discovery resulted from his work in astronomy; Gauss wanted to come up with a system that accounted for various observational errors made in the science. More than ten years after Gauss's discovery, another prominent mathematician, the Frenchman Adrien-Marie Legendre (1752–1833), independently discovered the method of least squares (also while working on astronomy) and promptly published. A bitter dispute arose between the two men over who should be given priority for the discovery.
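For the simplest case, fitting a straight line y = a + bx, the least-squares slope and intercept have a closed form; a short sketch with made-up data points:

```python
# Ordinary least squares fit of a line y = a + b*x to data points.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]          # made-up measurements
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# The slope that minimizes the sum of squared vertical distances to the line.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(f"Best-fit line: y = {a:.3f} + {b:.3f} x")   # close to y = 2x for these data
```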

Probability theory was shaped into something resembling its present-day form with the publication of Théorie Analytique des Probabilités (Analytical Theory of Probabilities, 1812) by French mathematician Pierre-Simon Laplace (1749–1827). Laplace's work incorporated most of the results in probability and statistics already known, as well as advances made by Laplace himself. Laplace's new results included a proof of the central limit theorem, which states that a random variable that is the sum of a large number of other random variables will always have a distribution of the particular type called normal (if the summed variables all have identical properties). Laplace's book, issued in many editions, influenced the study of probability throughout the nineteenth century.
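The theorem is easy to see numerically; a sketch that sums many independent uniform random numbers and checks how normal-like the results are:

```python
import random
import statistics

# Each sample is the sum of 50 independent uniform(0, 1) random variables.
# The central limit theorem says these sums should be approximately normal,
# with mean 50 * 0.5 = 25 and variance 50 * (1/12).
sums = [sum(random.random() for _ in range(50)) for _ in range(100_000)]

mean = statistics.fmean(sums)
stdev = statistics.stdev(sums)
within_one_sd = sum(abs(s - mean) < stdev for s in sums) / len(sums)

print(f"mean = {mean:.2f}, standard deviation = {stdev:.2f}")
print(f"fraction within one standard deviation: {within_one_sd:.3f}")   # near 0.68 for a normal curve
```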

Statistical methods eventually led to deep insights into formerly baffling physical problems. For example, three physicists, working independently in three different countries, discovered that statistical models could be used to predict the behavior of seemingly chaotic particles in samples of gas. These three men, James Clerk Maxwell (1831–1879) of Scotland, Ludwig Boltzmann (1844–1906) of Austria, and Josiah Willard Gibbs (1839–1903) of the United States, made fundamental contributions to what would be called statistical mechanics. (Mechanics is the study of the motions of physical objects. Statistical mechanics is the study of the group behavior of large numbers of objects, such as atoms or molecules, moving randomly.)

IN CONTEXT: EUGENICS

Idiot, imbecile, degenerate, moron, dullard, feeble-minded—in the nineteenth and early twentieth century, all these words were attached by some scientists to people with actual or supposed limited mental abilities. Various scientists and social activists believed that the inheritance of inferior intelligence, along with flawed character traits believed to be inherited by criminals, prostitutes, illegitimate children, and the poor, was a major social problem. American sociologist Richard Dugdale (1841–1883) brought the problem to a head when, in 1875, he published a history of a New York family he called the Jukes. Dugdale studied the history of 709 members of the Juke family and found a high rate of criminal activity, prostitution, disease, and poverty. Dugdale and like-minded researchers assumed that such a pattern must be due to inherited genes, not inherited poverty. Similar research projects followed, with similar conclusions, supposedly based on scientific, statistical proofs. Entire families exhibited socially undesirable tendencies that cost governments millions of dollars in services.

Thus was born eugenics, widely considered a science in its day and advocated by many persons considering themselves enlightened or progressive. Eugenicists sought to improve the human gene pool by controlling human breeding. Forced sterilization of criminals and other persons deemed undesirable became the goal of many eugenicists. In the United States, many states passed bills legalizing such methods in the early twentieth century. The atrocities of the Nazis, who had admired the American sterilization laws and implemented far more violent and sweeping measures in the name of eugenics, caused eugenics to quickly fall out of favor in the 1930s and 1940s. The statistical claims of eugenics have been examined by latter-day scientists such as American evolutionary biologist Stephen Jay Gould (1941–2002) to expose the ways in which systematic errors in data collection and analysis, along with the framing of such work in untested assumptions about race, intelligence, gender, and criminality, enabled ideas with no basis in fact to achieve, for a while, the status of received scientific truths.


Probability and Statistics in the Social Sciences

While physics in the nineteenth century was being revolutionized by statistical methods, statistical concepts were also being applied to the social sciences, where some of them had originated. Lambert Adolphe Jacques Quetelet (1796–1874), a Belgian scientist originally trained as an astronomer, introduced the science of “social physics” in his book Sur l'homme et le développement de ses facultés (On Man and the Development of His Faculties, 1842). In this book, Quetelet applied statistical techniques to the study of various human traits (as measured in a large sample of Scottish soldiers) to arrive at what he called the “average man.” These traits included not only physical measurements such as height and weight, but also other supposed characteristics relevant to society, such as criminal propensity. (The notion that a propensity to criminal behavior was innate in the individual was widespread among scientists during the nineteenth century.) Quetelet grouped the measurements of each trait around the average, or mean, value for that trait on a normal curve. He then treated deviations from the average as “errors.”

One of the results of Quetelet's studies was a formula that is seen today in almost every doctor's office in the country: the Body Mass Index, which measures obesity by comparing weight and height to average figures. However, many medical scientists have argued recently that the Body Mass Index is not a useful tool for promoting human health because its assumptions about human variation, and its reliance on only two parameters (height and weight), are too simplistic.
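The index itself is a one-line formula: weight divided by the square of height. A sketch in metric units, with illustrative values:

```python
def body_mass_index(weight_kg, height_m):
    """Quetelet's index, now called BMI: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

# Illustrative values only.
print(f"BMI: {body_mass_index(70.0, 1.75):.1f}")   # 70 kg at 1.75 m gives about 22.9
```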

Others also soon began to study the social sciences using statistical methods, sometimes with questionable results based on untested assumptions. For example, English scientist Francis Galton (1822–1911) applied statistical methods to human intelligence. Inspired by the work of his cousin, Charles Darwin, Galton became convinced that intelligence, conceived of as a single, measurable quantity inherent in each individual, was inherited. Galton therefore pursued the study of eugenics, the attempt to improve the human species by selective breeding (including, sometimes, sterilization of persons deemed unfit). Although eugenics was widely rejected after the horrors of Nazi Germany, which implemented eugenic ideas in extreme and horrific forms, Galton advanced the field of statistics with his development of the concepts of regression and correlation as well as his pioneering work in identifying people by their fingerprint patterns.

Galton's work inspired another English scientist, Karl Pearson (1857–1936, also a eugenicist). Pearson, a talented mathematician, became interested in the statistical laws of heredity and was instrumental in building the young science of biometrics, the application of statistical methods to individual human characteristics. He developed the idea of standard deviation, a central aspect of modern statistics, and he introduced the chi-squared statistic (named because its standard mathematical notation employs the Greek letter chi, pronounced “kie”), a tool for judging how well observed counts agree with the counts expected under a given hypothesis. The standard deviation of a collection of numbers is a measure of how spread out the data are around their central average: roughly, their typical distance from the average (more precisely, the square root of the average squared distance). In addition to many mathematical contributions to statistics, Pearson was Director of the Biometrics Laboratory at University College, London, and a co-founder of the journal Biometrika.
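Both quantities amount to short computations; a sketch using invented data for the standard deviation and for Pearson's chi-squared comparison of observed and expected counts:

```python
import math

# Standard deviation: the square root of the average squared distance from the mean.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]     # invented data
mean = sum(data) / len(data)
variance = sum((x - mean) ** 2 for x in data) / len(data)   # population variance
print(f"standard deviation = {math.sqrt(variance):.2f}")    # 2.00 for these numbers

# Pearson's chi-squared statistic: sum of (observed - expected)^2 / expected.
observed = [48, 35, 17]          # counts in three categories (invented)
expected = [50, 30, 20]          # counts expected under some hypothesis
chi_squared = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(f"chi-squared = {chi_squared:.2f}")
```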

Karl Pearson's son, Egon Pearson (1895–1980), teamed with another mathematician, Jerzy Neyman (1894–1981), to lay the foundations of the modern method of hypothesis testing. A hypothesis test is a procedure for deciding whether the observed data provide enough evidence to reject a proposed idea, termed the null hypothesis, or whether the results could plausibly have arisen by chance. For instance, say a pharmaceutical company claims that its new drug increases the rate of recovery from a certain disease. Furthermore, assume that 40% of patients with this disease recover without the drug. In a test of the drug on a sample of patients with the disease, 46% recover. Obviously 46% is larger than 40%, but is the difference significant; that is, is it due to the drug? Might it be merely a coincidence whereby a few more patients in this sample just happened to get better than one would expect on average? How large does such a difference have to be before we are, say, 90% sure that it is a real difference, not a random one?

Hypothesis testing seeks to answer such questions. In this example, a hypothesis test would be used to determine whether the drug played a significant role in the recovery or whether the increase could be attributed to simple coincidence. Pearson and Neyman teamed up to introduce ideas such as the alternative hypothesis, type I and type II errors, and the critical region into the hypothesis testing process. Later, Neyman introduced the idea of a confidence interval, another process very familiar to modern students of statistics. Both hypothesis tests and confidence intervals may be used to compare means, proportions, or standard deviations in a wide array of applications in psychology, medicine, marketing, and countless other disciplines.
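A sketch of such a test for the drug example above; because the passage gives no sample size, the figure of 500 patients is an assumption made purely for illustration:

```python
import math

# One-proportion z-test using the normal approximation.
# Null hypothesis: the recovery rate with the drug is still 0.40.
p_null = 0.40
p_observed = 0.46
n = 500                      # assumed sample size, chosen only for illustration

standard_error = math.sqrt(p_null * (1 - p_null) / n)
z = (p_observed - p_null) / standard_error
print(f"z statistic = {z:.2f}")          # about 2.74 for these numbers

# One-sided p-value: probability of a result at least this large if the null hypothesis is true.
p_value = 0.5 * math.erfc(z / math.sqrt(2))
print(f"one-sided p-value = {p_value:.4f}")   # about 0.003, well below the usual 0.05 cutoff
```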

The early development of modern information technology was stimulated by the need to handle large quantities of statistical data. An American engineer, Herman Hollerith (1860–1929), while working for the United States Census Bureau, first began to think about building a machine that could mechanically tabulate the huge amounts of data collected during the census every ten years. At MIT, and later while employed by the United States Patent Office, Hollerith began to experiment with designs based on the punch-card system used with the Jacquard loom, an automated weaving machine used throughout much of the nineteenth century in textile production. Hollerith's machine won a competition to be used by the Census Office for the 1890 census. It was a smashing success, tabulating the census data in three months instead of the two years that would have been required for tabulation by hand. The company Hollerith formed to market his calculating machine, the Tabulating Machine Company, would, after several mergers and name changes, eventually become International Business Machines, known to everyone today as IBM. Until punch cards were completely replaced by keyboards and video screens for computer programming in the early 1980s, the cards used to feed lines of program code to computers, one line per card, were known as Hollerith cards. Hollerith cards and machines supplied by U.S. companies were used by the Nazi regime to keep systematic track of people sent to extermination camps.

Twentieth Century Advances

Probability and statistics advanced greatly in the twentieth century. New theories and new applications continued to appear in disciplines as varied as physics, economics, and the social sciences. One of the most influential statisticians of the twentieth century was Englishman R.A. Fisher (1890–1962). Fisher continued the work of Galton, Karl Pearson, and others in biometrics. He was especially interested in the application of statistical processes to Mendelian genetics. Fisher's work was important in reconciling Mendel's genetics with Darwin's theory of natural selection, making possible what has been called the neo-Darwinian Synthesis. This is the union of genetics, mathematics, and traditional Darwinian theories of natural selection into a single theory of evolutionary change that has been highly successful in explaining the history of life. Fisher also produced groundbreaking results in sampling theory and in the theory of estimation. Among many important publications, Fisher's Statistical Methods for Research Workers (1925) provided a valuable guide in the application of statistics for researchers in various fields.

Student's t-statistic, a tool used to estimate means for small samples, was employed by Fisher and others in the early twentieth century in the development of experimental design. The hypothesis-testing methods that came from Fisher's work have found application in almost every branch of science, from agriculture to zoology. Fisher realized that to make a claim about any matter of fact, one needs to be sufficiently sure that the result is not a chance occurrence. In one of Fisher's own examples, a lady claims that she can taste the difference in tea depending on whether the milk is added to the tea or the tea is added to the milk. Fisher proposed an experiment in which the lady is given eight cups of tea, with four made each way. He then calculated the probability that the lady could guess correctly on each cup of tea without really discerning the difference. A definite probability of occurrence by chance can then be assigned to each possible outcome of the tea-tasting test. Although this example is trivial, similar insight is needed in real-world research; the application of such methods to studies of new medical tests and treatments is routine.
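For the tea experiment, the chance of identifying all four milk-first cups by pure guessing is one in the number of ways of choosing four cups out of eight; a quick check:

```python
from math import comb

# Eight cups, four made each way; the lady must say which four had the milk added first.
ways_to_choose = comb(8, 4)              # 70 equally likely ways to pick 4 cups out of 8
p_all_correct_by_chance = 1 / ways_to_choose
print(f"P(perfect score by guessing) = 1/{ways_to_choose} = {p_all_correct_by_chance:.4f}")   # about 0.014
```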

Specific mathematical questions that arose in the process of building the first atomic bomb in the 1940s led to the development of a statistical method called the Monte Carlo method. The Monte Carlo method is the use of random numbers to create a statistical experiment: that is, to see how some real-world process with random inputs (such as decaying radioactive atoms) might behave, one generates long lists of random numbers to specify those inputs, feeds them into equations representing the behavior of the system, and examines the system's behavior. The method was named after the roulette wheels in the famed Monte Carlo gambling center in Monaco. With the advent of electronic computers, the Monte Carlo method has become an important tool for physicists, engineers, mathematicians, climatologists, and many other professionals who deal with statistical phenomena.
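A small example in the same spirit, using random numbers to estimate an integral that has no simple closed form (the particular function is chosen only for illustration):

```python
import math
import random

# Monte Carlo estimate of the integral of exp(-x^2) from 0 to 1:
# average the function at many random points and multiply by the interval length.
num_samples = 1_000_000
total = sum(math.exp(-random.random() ** 2) for _ in range(num_samples))
estimate = total / num_samples            # interval length is 1

print(f"Monte Carlo estimate: {estimate:.4f}")   # the true value is about 0.7468
```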

The twentieth century also marked a time of renewed interest in the mathematical foundations of probability. One man, Russian mathematician Andrei Nikolaevich Kolmogorov (1903–1987), made the most significant contributions to this endeavor. In his book, Foundations of the Theory of Probability, Kolmogorov sought to make probability a rigorous, axiomatic branch of mathematics just as Euclid had done with geometry twenty-three centuries earlier. In doing so, Kolmogorov provided mathematically rigorous definitions of many statistical terms. His work also formed the foundation of the study of stochastic processes.
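Stated informally, Kolmogorov's axioms say that probabilities are never negative, that the certain event has probability one, and that the probability of a union of mutually exclusive events is the sum of their individual probabilities. In standard modern notation (a paraphrase, not a quotation from Kolmogorov):

```latex
\begin{align*}
&\text{1. } P(A) \ge 0 \quad \text{for every event } A,\\
&\text{2. } P(\Omega) = 1, \quad \text{where } \Omega \text{ is the set of all possible outcomes},\\
&\text{3. } P\!\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)
  \quad \text{for mutually exclusive events } A_1, A_2, \ldots
\end{align*}
```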

Modern Cultural Connections

Today, methods from probability and statistics are used in a wide array of settings. Psychologists use statistics to make inferences about individual and group behavior. Probability plays an important role in the study of evolutionary processes, both biological and social, as evolution is a mixture of random processes (genetic mutation) with nonrandom natural selection (survival of the fittest). Probability and statistics methods allow economists to build financial models and meteorologists to build weather models. Statistical analysis in medicine aids in diagnostics as well as in medical research.

With the birth of quantum physics in the twentieth century, probability became an integral part of the way scientists strive to understand and describe our physical world. According to the standard interpretation of quantum physics, events at the smallest size scales are truly random, not merely apparently random in the way a roll of dice is (random only because we cannot account for all the forces causing the outcome). Albert Einstein, reflecting on the uncertainty introduced into science by discoveries indicating the existence of a true or ontological randomness in nature, uttered his famous phrase, “God does not play dice.” Yet apparently, in some sense, God (or the underlying principles of the physical universe) does, at least at the quantum scale. However, the correctness of the standard interpretation is still debated in the scientific community.

One of the hottest new applications of statistics in the early twenty-first century was the field of data mining. Data mining combines statistics, artificial intelligence, database management, and other computer-related studies to search for useful patterns in large collections of data. Data mining has applications in business, economics, politics, and military intelligence, to name a few. For example, retailers have begun to use data mining to track customers' buying preferences and to use this information for more precise marketing strategies.

Primary Source Connection

Arguments about matters of public policy, such as the Iraq War, often lead to brawls over statistical claims. In this text, left-leaning media critics attack a statistical argument made by a prominent conservative.

ARE 2,000 U.S. DEATHS ‘NEGLIGIBLE’?

On the October 13 broadcast of Special Report, the show he regularly hosts, Fox News Channel anchor Brit Hume said of U.S. deaths in Iraq, “by historic standards, these casualties are negligible.”

On August 26, 2003, Hume conjured up a bizarre mathematical formula to show that U.S. casualties were not a big deal:

“Two hundred seventy-seven U.S. soldiers have now died in Iraq, which means that statistically speaking U.S. soldiers have less of a chance of dying from all causes in Iraq than citizens have of being murdered in California, which is roughly the same geographical size. The most recent statistics indicate California has more than 2,300 homicides each year, which means about 6.6 murders each day. Meanwhile, U.S. troops have been in Iraq for 160 days, which means they're incurring about 1.7 deaths, including illness and accidents each day.”

Hume's geographic comparison was meaningless, since the total population of California is far greater than the number of U.S. troops in Iraq (approximately 240 times greater). If Californians were being killed at the same rate that Hume cited for U.S. soldiers, there would be more than 400 murders per day, not six. When Washington Post reporter Howard Kurtz asked Hume about this point, Hume said: “Admittedly it was a crude comparison, but it was illustrative of something.”

FAIR (Fairness and Accuracy in Reporting), “Are 2,000 U.S. Deaths ‘Negligible’?” October 25, 2005. http://www.fair.org/index.php?page=2706 (accessed October 12, 2007).
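The correction in the excerpt is itself a small piece of arithmetic; checking it with only the figures quoted above:

```python
# Figures quoted in the excerpt: U.S. troop deaths ran at about 1.7 per day,
# and California's population was described as roughly 240 times the troop level.
deaths_per_day = 1.7
population_ratio = 240

# Scaling the troops' death rate up to a California-sized population:
equivalent_murders_per_day = deaths_per_day * population_ratio
print(f"Equivalent murders per day: {equivalent_murders_per_day:.0f}")   # about 408, versus the 6.6 actually observed
```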

See Also Mathematics: The Specialization of Mathematics; Mathematics: Trigonometry.

Bibliography

Books

Daston, Lorraine J. Classical Probability in the Enlightenment. Princeton, NJ: Princeton University Press, 1988.

David, F.N. Games, Gods and Gambling. London: Griffin, 1962.

Gigerenzer, Gerd, et al., eds. The Empire of Chance: How Probability Changed Science and Everyday Life. Cambridge: Cambridge University Press, 1989.

Gould, Stephen Jay. The Mismeasure of Man. New York: Norton, 1981.

Hacking, Ian. The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference. Cambridge, UK: Cambridge University Press, 1975.

Hacking, Ian. The Taming of Chance. Cambridge, UK: Cambridge University Press, 1990.

Hald, Anders. A History of Probability and Statistics and Their Applications Before 1750. New York: Wiley, 1990.

Johnson, Norman Lloyd, and Samuel Kotz, eds. Leading Personalities in Statistical Sciences: From the Seventeenth Century to the Present. New York: Wiley, 1997.

Krüger, Lorenz, Lorraine J. Daston, and Michael Heidelberger, eds. The Probabilistic Revolution. 2 vols. Cambridge, MA: MIT Press, 1987.

Owen, D.B. On the History of Statistics and Probability. New York: Dekker, 1976.

Porter, Theodore M. The Rise of Statistical Thinking, 1820–1900. Princeton, NJ: Princeton University Press, 1986.

Stigler, Stephen M. The History of Statistics: The Measurement of Uncertainty Before 1900. Cambridge, MA: The Belknap Press of Harvard University Press, 1986.

Periodicals

Box, Joan Fisher. “Gosset, Fisher, and the t Distribution.” American Statistician 35 (1981): 61–66.

Cowan, R.S. “Francis Galton's Statistical Ideas: The Influence of Eugenics.” Isis 63 (1972): 509–528.

Dutka, Jacques. “On Gauss' Priority in the Discovery of Least Squares.” Archive for History of Exact Sciences 49 (1996): 355–370.

Ore, Øystein. “Pascal and the Invention of Probability Theory.” American Mathematical Monthly 67 (1960): 409–419.

Van Brakel, J. “Some Remarks on the Prehistory of Statistical Probability.” Archive for the History of Exact Sciences 16 (1976): 119–136.

Web Sites

FAIR (Fairness and Accuracy in Reporting), “Are 2,000 U.S. Deaths ‘Negligible’?” October 25, 2005. http://www.fair.org/index.php?page=2706 (accessed October 12, 2007).

Todd Timmons
