Mathematics: Foundations of Mathematics
Introduction
Physical science is based on the direct or indirect observation of objects or events. Mathematics, however, is the study of non-material objects or relationships—sets, equations, lines, and the like—that do not exist outside of the human mind, at least not according to many modern mathematicians and philosophers of mathematics. Several basic questions about mathematics can be asked: How can mathematicians and scientists be sure that mathematical theories are true? Why should mathematics say one thing rather than another? And why is mathematics so useful in describing the physical universe?
Part of the answer to the last question is that not all mathematics is, in fact, useful in describing the physical universe. Only some systems of geometry, for example, describe the geometry of the universe we observe.
But other geometries are not false or incorrect simply because they describe nothing physically real. A system of mathematical relationships is considered valid or true if it is consistent with itself, that is, if it works on its own terms: it need not correspond to anything in the world. In mathematics, correctness is judged by purely mathematical standards, not by the experimental standards of physical science.
Historically, the first foundation of mathematics was common sense, the intuitive human sense of arithmetic and geometry as tested against experience. The earliest forms of mathematics were, therefore, (a) the counting or natural numbers (1, 2, 3, etc.) and their associated operations (addition, subtraction, etc.) and (b) Euclidean geometry, the geometry still used in daily life, engineering, and most science because it provides a close approximation to the geometry of much of the physical world.
However, as mathematics became more abstract and complex in recent centuries, mathematicians and philosophers began to question the nature of mathematics itself. How do we know that a given mathematical system of statements is valid or true, whatever those terms might mean? Modern mathematicians, logicians, and philosophers have sought to answer this question by discerning a theoretical foundation or bedrock for explaining and understanding mathematical truth. The field of study termed “foundations of mathematics” examines the nature of mathematics and its methods using various theoretical tools, especially mathematics itself.
Historical Background and Scientific Foundations
The questions with which inquiry into the foundations of mathematics starts are philosophical: that is, they are addressed to the nature of the ideas on which other ideas depend. Typical examples are: What are mathematical objects (e.g., variables, matrices, equations)? How do we have knowledge of mathematical objects, and on what grounds do we believe (if we do believe) that this constitutes true knowledge? What is a proof? When do we know that a mathematical statement is provable?
A theory that provides answers to such questions is called a foundational theory. Any such theory must be able to account for a large part, preferably all, of mathematics, starting from a small number of base assumptions and principles. No foundational theory produced as of the early 2000s was fully satisfactory on these terms or had convinced a large majority of mathematicians and philosophers of mathematics.
The aim of foundations of mathematics is to organize all aspects of mathematics in such a way that at the base are the most fundamental concepts, assumptions and principles, and all other aspects depend on this base. There then arises the question of why certain fundamental notions are accepted rather than others; this is a matter of philosophical inquiry. The tools of foundational mathematics are mostly those of mathematical logic, a closely related discipline and, as its name indicates, a form of mathematics.
Foundational Crises before the Twentieth Century
The problem of giving secure foundations to mathematics arose with particular vigor at the beginning of the twentieth century due to the discovery of various contradictions at the basis of Cantor's theory of infinite sets; this upheaval in mathematical thinking is known today as the foundational crisis. But mathematics, in its long history, had known other foundational crises connected with new discoveries or inventions that raised doubts about what had formerly been taken to be the unshakeable basis of mathematical thought.
The discovery of irrational numbers by ancient Greek mathematicians may have caused the first foundational crisis. As the story goes, around 500 BC, Hippasus of Metapontum, a disciple of Pythagoras, produced a geometric proof that the square root of 2 is an irrational number—that is, a number that cannot be expressed as the ratio of two whole numbers or as a terminating or repeating decimal. Pythagoras is said to have discovered this fact earlier, but to have kept it secret; however, many such stories are mythical, since few contemporary records survive. Pythagoras believed that all things are numbers—that is, that numbers are the basis of all reality—and the existence of a number that could not be expressed as a ratio of two whole numbers was, supposedly, disturbing to him. When Hippasus revealed this secret knowledge of the Pythagoreans, his brethren, according to legend, threw him off a ship and drowned him.
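The argument attributed to Hippasus was geometric and does not survive; the following is the standard modern algebraic version of the claim, written in LaTeX notation and offered purely as an illustrative sketch.

% Sketch: the standard modern proof that sqrt(2) is irrational
% (not Hippasus's geometric argument)
Suppose $\sqrt{2} = p/q$, where $p$ and $q$ are whole numbers with no common factor.
Then
\[
  p^2 = 2q^2 ,
\]
so $p^2$ is even, hence $p$ is even; write $p = 2r$. Substituting gives
\[
  4r^2 = 2q^2 \quad\Longrightarrow\quad q^2 = 2r^2 ,
\]
so $q$ is even as well. Then $p$ and $q$ share the factor $2$, contradicting the
assumption that they have no common factor. Hence $\sqrt{2}$ cannot be written
as a ratio of whole numbers.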
Another crisis in mathematics happened around 1850 and concerned the discovery of non-Euclidean geometries. In his book The Elements, the Greek mathematician Euclid (325–265 BC) organized geometry by providing an axiomatic system, that is, a limited number of basic assumptions (axioms) from which he thought it possible to derive all other true geometric propositions. His system was based on five axioms, propositions regarded as self-evident, which he thought could not be proved from simpler ones. One of the five axioms, the fifth, was problematic in the sense that it seemed not so self-evident as the others. This is the parallel postulate, which says that if a line A intersects two other lines, forming two interior angles (angles facing each other) on the same side that sum to less than 180°, then the two lines, if extended indefinitely, must meet (intersect) on that side of A. In Euclidean geometry, it is assumed that on the other side of A the two lines will diverge, that is, get farther and farther apart and never intersect. Or, if the angles sum to 180° exactly, then the lines are parallel, and never intersect on either side of A. What is problematic with the parallel postulate is the involvement of the notion of infinity. The axiom posits the possibility of an intersection that may happen at an infinite distance and therefore cannot be observed. For this reason the fifth axiom was never considered, even by Euclid himself, as being as self-evident as the others.
Due to these considerations, mathematicians of the nineteenth century thought that Euclid's fifth postulate was not an axiom but a theorem, that is, a proposition that could be proved, in some way not yet discovered, from the other four axioms, which were considered self-evident. Several attempts were made to prove that the fifth postulate was a theorem, but all failed. The result of these efforts, however, was a breakthrough: it was realized that while the first four axioms plus the fifth form a logically consistent system, Euclidean geometry, the first four axioms plus a statement that contradicts Euclid's fifth axiom can also form a logically consistent system, a non-Euclidean geometry. Today logicians express this fact by saying that the fifth postulate is independent of the other four postulates. The first person to realize this was the German mathematician and scientist Johann Carl Friedrich Gauss (1777–1855), but he never published his work. Euclidean geometry was still assumed at that time to be the geometry of real space, but mathematicians were realizing that non-Euclidean geometries could be constructed that were not self-contradictory.
The discovery of the possibility of non-Euclidean geometries, which contradict human intuition (e.g., in a non-Euclidean geometry the shortest distance between two points may not be a straight line), diminished mathematicians' confidence in intuition (one's sense of rightness or obviousness) as a foundation of mathematical knowledge and so promoted the study of mathematics using formal logic. It was logic that had shown that geometries other than Euclidean geometry (which scientists and mathematicians still thought described real space) were possible; now more emphasis was put on the study of the properties of different axiomatic systems, giving birth to a variety of new geometries.
Until this time, Euclidean geometry had been given a special status among mathematical disciplines, as it seemed to be directly justified by intuition based on spatial experience. For this reason, concepts in mathematical analysis were given a geometrical interpretation. But in the nineteenth century, work by many mathematicians, including the German Karl Weierstrass (1815–1897), showed that it was possible to interpret notions such as limit, integral, and derivative (from calculus) in terms of statements about the real numbers rather than geometrically. By the end of the century, it had also been shown that the real numbers themselves could be constructed from the rational numbers (for example, as limits of sequences of rationals). As rational numbers were known to be representable as pairs of natural numbers, the problem that was left was to explain what natural numbers are. For some, this question was simply not answerable: natural or counting numbers were fundamental, not further reducible. This was the opinion, for example, of Leopold Kronecker (1823–1891), who said that “God created the whole numbers; everything else is the work of man.” Others, like German mathematician Richard Dedekind (1831–1916) and German mathematician and philosopher Gottlob Frege (1848–1925), strove to find a further reduction by logical means.
Birth of Modern Logic
Logic was already an old discipline, part of philosophy since its invention by the Greek philosopher Aristotle (384–322 BC) as the science of valid inference. It remained mostly unchanged until the nineteenth century when, thanks to the discovery of non-Euclidean geometries, the great potential of logical reasoning became clear.
IN CONTEXT: RUSSELL'S PARADOX
In 1901, British philosopher Bertrand Russell (1872–1970) discovered a simple contradiction in Gottlob Frege's (1848–1925) logical system, which attempted to show that all mathematics can be reduced to logic. Russell wrote to Frege on June 16, 1902, announcing the news. Frege promptly replied, relating the paradox to the system in his work The Basic Laws of Arithmetic. In particular, Frege diagnosed the problem as concerning his Law V.
Law V says that any two concepts G and F are identical if exactly the same objects fall under them. For example, the number 2 is the only number falling under the concept positive square root of 4 and the only number falling under the concept even prime number; thus, by Law V, the two concepts are identical.
Implicit in Law V was the assumption that the things falling under a concept form a set. This opens the possibility of assigning a set of objects to any linguistically defined concept: namely, the set of the things to which the concept applies. For example, the set {2}, whose only element is the number 2, is the extension of the concept positive square root of 4.
What Russell's paradox reveals is that there is not a set corresponding to every linguistically defined concept.
The second volume of the Grundgesetze was already in press, and Frege could only add an appendix in which he admitted that his Law V was not so evident as the others. That was a classic understatement: in fact, the flaw undermined his whole effort to establish mathematics entirely on a foundation of pure logic.
Despite Frege's failure to establish a logicist foundation for mathematics, his mathematical logic became a standard tool for foundational research, and its invention is today recognized as one of the major achievements of Western philosophy, not only because of its application to the foundations of mathematics but because it triggered a series of developments that led to the invention of modern digital computers.
A major advancement of logic, one fated to greatly influence not only foundations of mathematics research but also mathematics and philosophy in general, was achieved by Frege. In 1879, Frege, whose work was not widely recognized during his life, published a pamphlet called “The Concept Script” (Begriffsschrift in German). This work, in which he sought to describe what he saw as the necessary laws of thought, is generally regarded as the birth of modern mathematical logic. For Frege, logic had the same role as the microscope had for scientific research: it was a more refined, precise way of seeing. In studying the fundamental properties of mathematics one cannot use natural language: it is too imprecise. Frege therefore invented an artificial language, a new ideography (system of signs standing for ideas), with the intent of rigorously describing logical concepts. His hope was to avoid the disadvantages of natural languages, such as vagueness and ambiguity. These characteristics of natural language were handicaps if one wanted to express and study the links between formulas in a mathematical demonstration. In 1884, Frege published The Foundations of Arithmetic (Grundlagen der Arithmetik) with the purpose of showing that arithmetic (considered the most fundamental part of mathematics) was founded on logic alone, that arithmetical truths did not need support from empirical facts, and that mathematical intuitions were not in need of empirical confirmation. His thesis that arithmetic could be reduced to logic came to be known as logicism. According to the logicist thesis, all the things that mathematics speaks about can be treated as purely logical entities.
In his attempt to demonstrate that mathematics can be treated as a branch of logic, Frege implicitly used the informal notion of class, which corresponds in more modern mathematics to the concept of set. The notion of class was considered by Frege a logical notion, not a mathematical one. Using classes he was able to provide a definition of the most fundamental notion of arithmetic, that of number. For Frege, numbers were (or could be reduced to) classes: for example, the number two is the class of all classes that have exactly two elements; the number three is the class of all classes having exactly three elements, and so on.
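In modern set-theoretic notation, which postdates Frege's own symbolism, the idea can be sketched roughly as follows.

% The Frege-Russell definition of number, sketched in modern notation
\[
  2 \;=\; \{\, X \mid \exists x\,\exists y\,( x \neq y \;\wedge\; X = \{x,y\} ) \,\}
\]
That is, $2$ is the class of all classes having exactly two elements; in general,
the number of a class $C$ is the class of all classes that can be put in
one-to-one correspondence with $C$.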
In his Basic Laws of Arithmetic (Grundgesetze der Arithmetik), published in two volumes in 1893 and 1903, Frege finally presented his reconstruction of arithmetic based on logic alone, setting out in a rigorous way logical laws and rules from which it was possible to demonstrate, step by step and without appeal to any extra assumptions, arithmetical truths.
Frege's logicist program, if successful, would have proved that the certainty of mathematical knowledge does not derive from intuition or from empirical fact.
This thesis contrasted with the opinion of almost all mathematicians and philosophers of that time, who, influenced by the thought of the German philosopher Immanuel Kant (1724–1804), believed that mathematics rested on intuitions of space (in the case of geometry) and of time (in the case of arithmetic).
Frege's program failed. Just before the publication of the second volume, British mathematician and philosopher Bertrand Russell (1872–1970) found a contradiction in his system, the flaw known today as Russell's paradox. The contradiction was due to Frege's use of the informal notion of class.
The Birth of Set Theory
In his logical system, Frege used the notion of “class,” what today is called a “set” (a collection of things), and it was the way in which this notion was used that turned out to be paradoxical, an unavoidable flaw in Frege's effort to reduce mathematics to logic. Russell's paradox exposed a flaw in the naïve conception of class. At that time the notion of set was also beginning to be used by other mathematicians, mainly as a tool for solving mathematical problems, as in the case of the German mathematician Georg Cantor (1845–1918). Cantor, besides employing sets to solve mathematical problems, initiated the study of set-theoretical concepts themselves, giving birth to what is today known as set theory.
Cantor's set-theoretical language turned out to be a powerful tool, as many problems in mathematics could be formulated as problems involving sets. Yet Cantor's theory was also affected by the discovery of Russell's paradox. Both Frege's and Cantor's theories were based on the possibility of connecting linguistic expressions with sets, while Russell's paradox shows that the notion of set they used is paradoxical (in some circumstances it contradicts itself). Russell showed that not every concept definable in words, no matter how abstract, corresponds to a set of objects. For example, there is no set corresponding to the concept of “set of all sets” or to the concept of “set of all sets that have the property of not being members of themselves.” Suppose such a set (let us call it A) is given. A contains all the sets that have the property of not being members of themselves. Russell asked: Is A an element of A (that is, is it an element of itself)? Both the assumption that A is a member of A and the assumption that A is not a member of A (the only two possible answers) lead to a contradiction: if A is a member of itself, then A must have the defining property of not being a member of itself (a contradiction); and if A is not a member of A, then A has the defining property of not being a member of itself and so is a member of itself—again, a contradiction.
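In modern notation the paradox can be stated in a single line; the following compact formulation is the usual textbook rendering.

% Russell's paradox in modern notation
\[
  A = \{\, x \mid x \notin x \,\}
  \qquad\Longrightarrow\qquad
  A \in A \iff A \notin A ,
\]
a contradiction; hence no such set $A$ can exist.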
Russell was not the first to discover the paradoxicality of the Cantorian and Fregean notion of set; Cantor himself was aware of the fact even before Russell's discovery. However, Russell's was the simplest of the paradoxes thus far formulated and the most difficult to resolve.
The lesson drawn was that the notion of set must be regulated by axioms that state which sets exist or can be constructed starting from previously given sets. In 1908, German mathematician Ernst Zermelo (1871–1953) gave for the first time an axiomatic system for Cantor's set theory. Later on, many other mathematicians extended Zermelo's system or gave different axiomatizations of set theory. German mathematician Abraham Fraenkel (1891–1965), Hungarian mathematician John von Neumann (1903–1957), Swiss mathematician Paul Bernays (1888–1977), and Austrian mathematician and logician Kurt Gödel (1906–1978) are all important figures in the development of axiomatic set theory.
Cantor's theory of sets introduced a new way of doing mathematics and a new philosophy of the infinite. One of the most fascinating ideas that Cantor introduced into mathematics, one that has been important in foundational research, is that there are different orders of infinity: that is to say, not all infinite sets are equivalent in size. In 1874, Cantor published his proof that there are as many rational and algebraic numbers as natural numbers, but that the set of real numbers is strictly bigger in size—although all these sets contain an infinite number of members. This was a striking result, as all infinite collections had previously been considered the same size. Even more striking than the result itself was the way Cantor arrived at his conclusion, that is, by showing that it is impossible to enumerate all real numbers. In short, he proved that something was true by showing that something else was impossible.
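Cantor's 1874 proof used nested intervals; his better-known diagonal argument of 1891 reaches the same conclusion more directly and is sketched below in modern notation, as an illustration of proving a truth by showing an impossibility.

% Cantor's diagonal argument (1891 version), sketched
Suppose the real numbers in $(0,1)$ could be listed as $r_1, r_2, r_3, \ldots$,
with decimal expansions
\[
  r_n = 0.d_{n1}\,d_{n2}\,d_{n3}\ldots
\]
Define $x = 0.e_1 e_2 e_3\ldots$ by choosing each digit $e_n$ different from
$d_{nn}$ (say $e_n = 5$ if $d_{nn} \neq 5$, and $e_n = 6$ otherwise). Then $x$
differs from every $r_n$ in the $n$-th decimal place, so $x$ does not appear in
the list. No enumeration of the real numbers can be complete.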
Kronecker, who was a member of the editorial staff of Crelle's Journal, where Cantor's proof was published, did not like the revolutionary new ideas contained in Cantor's article and tried to prevent Cantor's later work from being published in the journal. Kronecker, like many mathematicians of his time, only accepted mathematical objects that could be constructed starting from the numbers that he believed were intuitively given, namely the natural or counting numbers. Cantor's ideas were, in Kronecker's eyes, meaningless because they were about objects that for him did not exist.
In developing his theory of infinite cardinalities—orders of infinity, some larger than others—Cantor conjectured that the set of real numbers (the continuum) has the smallest infinite size that is strictly bigger than that of the set of natural numbers. This conjecture is termed the continuum hypothesis. Cantor sought to prove that it was correct, but all his attempts were unsuccessful, and the question was left for future mathematicians to resolve.
A few decades later, using the techniques of mathematical logic, Gödel proved that the continuum hypothesis is consistent with the axioms of what is nowadays considered the standard axiomatic system for set theory, namely ZFC (Zermelo-Fraenkel set theory with the axiom of choice). This means that if we add to ZFC, as a further axiom, the statement that the continuum hypothesis is true, we do not obtain any contradiction. Later on, American mathematician Paul Cohen (1934–2007) proved that the negation of the continuum hypothesis plus ZFC also constitutes a consistent mathematical system. Together these results mean that the question of the truth or falsity of the continuum hypothesis cannot be settled by means of the ZFC axioms.
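In the notation of cardinal arithmetic, the hypothesis and the two independence results can be summarized as follows (a standard modern formulation rather than Cantor's own).

% The continuum hypothesis (CH) and its independence from ZFC
\[
  \text{CH:}\qquad 2^{\aleph_0} = \aleph_1 ,
\]
i.e., no set is strictly larger than the set of natural numbers and strictly
smaller than the set of real numbers.
Gödel (1938): if ZFC is consistent, then so is ZFC $+$ CH.
Cohen (1963): if ZFC is consistent, then so is ZFC $+$ $\neg$CH.
Hence ZFC neither proves nor refutes CH.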
GEORG CANTOR (1845–1918), CREATOR OF TRANSFINITE SET THEORY
Georg Ferdinand Ludwig Philipp Cantor (1845–1918) was born in St. Petersburg, Russia, and died in Halle, Germany. He started to study mathematics at the Polytechnicum in Zürich in 1863, but soon, after the death of his father, moved to Berlin, where he attended the lectures of Weierstrass and Kronecker, among others. At the University of Berlin, he completed his dissertation on number theory in 1867. In 1869 he was appointed to the University of Halle, where he presented his thesis, again on number theory, and received his professorial teaching qualification.
At Halle, Cantor turned his attention to the branch of mathematics known as analysis, which deals with all questions relating to convergence and limits. He did so because his senior colleague Heine asked him to prove a difficult problem on which he had worked, unsuccessfully, himself: the uniqueness of representation of a function as a trigonometric series. The problem was solved by Cantor by April 1870. It was this work that prompted the discovery of transfinite numbers. In 1874, he published an article in Crelle's Journal that is considered to have founded set theory, and between 1879 and 1884 he published a series of six papers in Mathematische Annalen providing a basic introduction to set theory.
The year 1884 was one of crisis for Cantor, the year of his first serious mental breakdown. The crisis may have been due to his inability to prove the continuum hypothesis, or to stress: in any case, it lasted about one month. After that Cantor changed his attitude, becoming more interested in philosophy and seeking to teach philosophy instead of mathematics. Further breakdowns occurred, and in 1899 Cantor was again hospitalized for mental instability. The last period of his life he spent in a sanatorium.
The Foundational Crisis of the Twentieth Century
By the beginning of the twentieth century, the mathematical and philosophical landscape had been greatly changed by the invention of mathematical logic and set theory, which offered two new paradigms for doing mathematics. Logic, no longer seen as merely a part of philosophy, became a standard tool in mathematical investigations. Mathematics, because of the introduction of set-theoretical concepts and techniques, became more abstract and no longer dealt only with numbers. These innovations were accompanied by a complex foundational crisis driven by the paradoxes that affected the basic notion of set.
These paradoxes, far from halting the progress of mathematics, had the effect of promoting the growth of modern logic. The language and methods of logic began to be extensively employed to give a precise and economical shape to mathematical theories. The expression of large parts of mathematics in a formal, axiomatic framework had many advantages: it was possible to make explicit all assumptions involved in mathematical reasoning, diminishing or even dismissing completely the importance of intuitive processes of thought. Intuition was thus excluded from formal mathematics in the sense that it could not be used to justify mathematical concepts or to prove the truth of mathematical results.
Around 1900, Italian mathematician Giuseppe Peano (1858–1932) devised a system of five postulates from which the entire arithmetic of the natural numbers could be derived (a brief illustration of how arithmetic is built on these postulates follows the list):
- 0 is a natural number (i.e., a counting number: non-negative whole number).
- Every natural number has a successor.
- No natural number has 0 as its successor.
- Distinct natural numbers have distinct successors (i.e., no two natural numbers have the same successor).
- (Induction axiom) If a property holds for 0, and holds for the successor of every natural number for which it holds, then the property holds for all natural numbers.
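As a small illustration (not part of Peano's original presentation), addition can be defined by recursion on the successor function S, and particular facts such as 2 + 2 = 4 then follow purely by unfolding the definitions.

% Addition defined by recursion on the successor function S
\[
  x + 0 = x , \qquad x + S(y) = S(x + y) .
\]
Writing $1 = S(0)$, $2 = S(1)$, $3 = S(2)$, $4 = S(3)$:
\[
  2 + 2 \;=\; 2 + S(1) \;=\; S(2 + 1) \;=\; S(2 + S(0)) \;=\; S(S(2 + 0))
        \;=\; S(S(2)) \;=\; S(3) \;=\; 4 .
\]
General laws, such as the commutativity of addition, are then proved using the induction axiom.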
According to Bertrand Russell, Peano's postulates implicitly define what we mean by a natural number. However, French mathematician Henri Poincaré (1854–1912) maintained that Peano's axioms only defined the natural numbers if they were consistent, that is, if no self-contradictory statement of the form “P and not P” could be proved inside the system. If such a proof exists, then the axioms are inconsistent and cannot be said to define anything.
David Hilbert (1862–1943), the leading personality in mathematics in this period, formulated a foundational program aiming to show that Peano's system was free from contradictions. Hilbert posed the problem of proving the consistency of arithmetic. In his famous list of unsolved mathematical problems, put forth at the International Congress of Mathematicians in Paris in 1900, the problem of the consistency of arithmetic was second.
Hilbert's Program and Gödel's Incompleteness Theorems
Some mathematicians felt that the new non-intuitive methods, and the mathematics that resulted from their application, were devoid of mathematical meaning. But for most mathematicians, especially for Hilbert, it would have been absurd to renounce the powerful methods made available by Cantor's invention. Still, the paradoxes remained. In 1922, Hilbert launched a foundational program, wanting, as he said, to settle the question of foundations once and for all. Stressing the importance of the use of formal axiomatic systems, he hoped to establish the consistency (freedom from contradiction) of the formalized mathematical theories. The peculiarity of his program, which has come to be known as formalism, was that it required that a proof of consistency be produced by methods involving only combinatory relations among the linguistic symbols used to express mathematical statements. Such a proof would have secured the result against the radical criticism of those who, like Dutch mathematician Luitzen Egbertus Jan Brouwer (1881–1966), required for mathematics a more concrete, computational meaning.
It was the young Gödel who proved that Hilbert's dream was not realizable, at least not in the form Hilbert desired. In 1931, Gödel proved a pair of theorems, called the first and second incompleteness theorems, which together set a strict limit on the power of formal axiomatic systems. The second theorem particularly affected Hilbert's program: it states that the consistency of Peano's axiom system cannot be proved using means whose strength is weaker than or equal to that of the system itself. In other words, no consistent formal system strong enough to express arithmetic can prove its own consistency. This result had important foundational repercussions, as it showed that Hilbert's program could not be carried out in its original form.
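In modern terminology the two theorems are usually stated roughly as follows (a standard contemporary formulation rather than Gödel's original 1931 wording).

% Gödel's incompleteness theorems, informal modern statement
Let $T$ be a consistent, effectively axiomatized formal system strong enough to
express elementary arithmetic.
First theorem: there is a sentence $G_T$ of the language of $T$ such that $T$
proves neither $G_T$ nor $\neg G_T$.
Second theorem: $T$ does not prove $\mathrm{Con}(T)$, the arithmetical sentence
expressing ``$T$ is consistent.''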
Modern Cultural Connections
The effort to found mathematics on a solid basis has continued to this day, as mathematicians develop and extend the essential innovations of the beginning of the twentieth century. Today foundational studies overlap with mathematical logic as articulated in model theory, axiomatic set theory, proof theory, and recursion (or computability) theory. Foundational studies are also of great importance for philosophy. Different positions concerning the foundations of mathematics have been defended, among them logicism (carrying on Frege's foundational work) and formalism (carrying on Hilbert's). Besides these, we have Platonism, a widespread position among mathematicians and scientists according to which one should act as if mathematical entities enjoyed actual existence; intuitionism, introduced by Brouwer, which maintains that mathematics is a construction of the human mind and that the primary source of mathematical knowledge is intuition rather than formal logic or experience; and mathematics-as-language, according to which mathematics is produced by the imagination and operates like a language. Today there is still no universal agreement on what mathematics “is”: mathematics remains mysterious, even as mathematicians continue to produce new proofs, invent new types of mathematics, and gain new insights into ancient problems. A pragmatic common-ground position might be that mathematics is simply what mathematicians do—although it may be more.
In a sense, foundations of mathematics remain irrelevant to modern science. Physicists, engineers, and other scientists who employ mathematics do not need to know what mathematics is: they only need to know what kind of mathematics works for them and describes the behavior of the world. Nevertheless, physicists are often compelled to wonder why mathematics describes the physical world so well, a puzzle that Hungarian physicist Eugene Wigner (1902–1995) called the “unreasonable effectiveness of mathematics in the natural sciences.” Repeatedly, modern physicists have sought mathematical systems to describe subtleties of the physical world, especially in quantum physics and relativity, only to find that mathematicians had already produced such systems for purely mathematical reasons, without any regard to their possible uses in physics. For example, non-Euclidean geometries were developed before Albert Einstein (1879–1955) proposed, in his theory of general relativity (1915), that the geometry of the physical world is actually non-Euclidean (though it closely approximates Euclidean geometry over short distances).
Foundations of mathematics, although having no direct application in physics or technology, have driven progress in mathematics itself by forcing mathematicians to examine exactly what they are doing. The resulting advances in mathematics are, if history is any guide, likely to have tangible benefits for the physical sciences sooner or later. As has already been noted, the development of mathematical logic was an essential precursor to exploiting the full potential of the modern digital computer. Simple mechanical calculators could be, and in fact were, constructed before the development of mathematical logic and set theory, but modern computer science relies essentially on many branches of mathematics that draw upon logic, set theory, number theory, computability theory, and the like.
Among the questions that are actively considered in foundational studies today, some, like the continuum hypothesis, were formulated by the founders of mathematical logic and set theory. However, the field of foundations of mathematics has expanded and diversified, taking in elements of formal philosophy and a number of mathematical fields. The foundational crisis of the early twentieth century has not resulted in a clear victory for any one of the contending points of view (logicism, Platonism, formalism, intuitionism, or mathematics-as-language); rather, these views have continued to contend, but with a lessened sense of urgency, as it has become clear that regardless of which of these views (if any) ever prevails, mathematics itself will continue, both on its own terms and as an essential tool for all other scientific disciplines.
See Also Mathematics: The Specialization of Mathematics.
Bibliography
Books
Dauben, Joseph W. Georg Cantor: His Mathematics and Philosophy of the Infinite. Princeton, NJ: Princeton University Press, 1979.
Hatcher, William S. Foundations of Mathematics. Philadelphia: W.B. Saunders Company, 1968.
Mancosu, Paolo. From Brouwer to Hilbert: The Debate on the Foundations of Mathematics in the 1920s. Oxford: Oxford University Press, 1998.
Russell, Bertrand. The Autobiography of Bertrand Russell. Crows Nest, New South Wales, Australia: Allen & Unwin, 1967.
Periodicals
Sarukkai, Sundar. “Revisiting the ‘Unreasonable Effectiveness’ of Mathematics.” Current Science 88 (2005): 415–423.
Wigner, Eugene. “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Communications on Pure and Applied Mathematics 13 (1960): 1–14.
Giuseppina Ronzitti