Computer Science: Artificial Intelligence
Introduction
Artificial intelligence (AI) is a branch of computer science that seeks to build machines that carry out tasks which, when performed by humans, require intelligence.
AI techniques made possible some self-guided navigation by the twin Mars Exploration Rovers (on Mars since early 2004), allowing the robot rovers to explore more of Mars than if they were steered entirely by commands radioed from Earth. AI programs for chess-playing and limited recognition of faces, text, images, and spoken words are commonplace today in industrial, military, and domestic computer systems. However, AI has so far not produced machines capable of handling everyday human language, translating fluently between human languages, or performing most other intelligent tasks that are routine for human beings. Whether this failure arises from the fundamental nature of thought, or whether AI research has simply not yet progressed far enough to build such systems, is one of the many philosophical and technological debates that rage around AI. Other questions often raised in the context of AI include the following: What is thinking? Whatever it is, can machines do it? Why or why not? How would we know? Are people machines?
Historical Background and Scientific Foundations
In the seventeenth century, German mathematician Gottfried Leibniz (1646–1716) invented the binary number system, the foundation of digital computation. He expressed the belief that his system of notation could be used to formalize all knowledge: to every relevant object in the world, he said, one would assign a “determined characteristic number,” as he called it, and by manipulating these numbers one could resolve any question. “If someone would doubt my results,” Leibniz wrote, “I would say to him, ‘Let us calculate, Sir,’ and thus by taking pen and ink, we should settle the question.”
Leibniz did not anticipate decision-making by mechanical means rather than by pen and ink—that is, artificial intelligence—but more than a century later British mathematician George Boole (1815–1864) described the logical rules, today called Boolean algebra, by which true-or-false statements can be identified with binary numbers and manipulated with mathematical rigor. At about the same time, the idea of mechanized calculation was advanced by British mathematician and inventor Charles Babbage (1791–1871), who designed what he called an Analytical Engine to perform mathematical and logical operations. His work was supported by Lady Ada Byron (1815–1852), later Countess of Lovelace, who wrote the world's first computer program for the Analytical Engine (which was never completed). The possibility of AI was apparent even at this early period: in one letter, Ada Byron asked Babbage whether his Analytical Engine would “think.”
World War II (1939–1945) boosted the development of digital computers for code-breaking and other military purposes. By the late 1940s, mathematicians and philosophers had real computers to ponder and addressed the question of artificial intelligence with fresh clarity. In 1950 British mathematician Alan Turing (1912–1954) published one of the most famous papers in computer-science history, “Computing Machinery and Intelligence.” In it he took up the already old question “Can machines think?” and proposed that it could be answered by the now-famous Turing test, which he called the imitation game. The imitation game would work as follows: if a human interrogator communicating with both a human being and a computer in another room, say by exchanging typed messages, could not reliably tell which was the human and which the computer, even after an extended exchange—that is, if the computer could imitate unrestricted human conversation—then it would be reasonable to say that the computer thinks and is intelligent.
By 1950 early electronic computers could manipulate numbers at blinding speed. It looked as if Leibniz's old dream of reducing all thinking to computation was about to be realized. All that remained, some scientists thought, was to discover the hidden rules on which human thought was assumed to depend, code them in binary form, supply the computer with a mass of digitally encoded facts about how the world works, and run the programs. In this heady atmosphere, many over-optimistic claims were made about progress in AI.
For example, AI pioneer Herbert Simon (1916–2001) predicted in 1957 that “within ten years a digital computer will be the world's chess champion.” (A computer did not beat the world chess champion until 1997, 30 years behind schedule.) In 1965 Simon predicted that “Machines will be capable, within twenty years, of doing any work that a man can do.” (They still cannot.) In 1968, Stanley Kubrick's hit movie 2001: A Space Odyssey depicted a conversational computer, the HAL-9000, as being a reality in 2001. The film was not meant to be pure fiction: AI expert Marvin Minsky (1927–), hired as a technical consultant for the film, assured the director that the existence of computers like HAL by 2001 was a sure thing. As of 2007 no such computer was even close to being constructed. As one AI textbook put it in 2005, “The problem of over-promising has been a recurring one for the field of AI.”
IN CONTEXT: WHAT IS INTELLIGENCE?
Jobs that seem hard to people, like playing chess or handling large sets of numbers rapidly, are easy for computers, while things that seem easy to most people, like having a conversation or cleaning house, are hard—extremely hard—for computers.
The reason is that tasks like chess and arithmetic can be performed by applying a few strictly defined rules to pieces of coded information. In contrast, most daily activities of human beings are richly connected with a physical world full of endlessly various objects, persons, and meanings. The sheer number of facts that any normal person knows, and the number of ways in which they apply those facts in performing a typical task of daily life, including speech, is simply too large for even a modern computer to handle (even assuming that human intelligence can be understood in terms of applying rules to facts, which is a matter of dispute). A human being washing dishes by hand must deal simultaneously with the mechanical properties of arms and hands, caked-on food, grease, water, soap, scrubbers, utensils of scores of different shapes, plastics, metals, and ceramics, and so on. They must start with a disorganized heap of dirty tableware and finish with a pile of properly draining or towel-dried dishes, all acceptably clean, without injuring themselves or flooding the kitchen, and breaking a dish only rarely. To describe such a job in terms of chess-like rules that a computer can follow has turned out to be far more difficult than the pioneers of artificial intelligence thought in the 1950s and 1960s.
Speculation about thinking machines was common in the fiction of the mid-twentieth century, especially after the publication of Czech writer Karel Čapek's play R.U.R. (Rossum's Universal Robots), which introduced the word “robot” into the English language in 1923. Long before computers were commonplace in actual life, they saturated the popular imagination through hundreds of representations in science fiction—mostly sinister. During the 1950s and 1960s, popular interest in machine intelligence became pervasive, rivaled only by fear of the atomic bomb. Computers and robots starred in such hit movies as The Day the Earth Stood Still (1951), Forbidden Planet (1956), and 2001: A Space Odyssey (1968). The press often presented startling claims that some new computerized device could think, learn, compose music, or perform some other human task. In 1970, for example, Life magazine announced excitedly that a wheeled robot called Shakey, which could navigate a simple indoor environment, was the “first electronic person,” and promised its readers that by 1985 at the latest we would “have a machine with the general intelligence of an average human being.”
The Science
AI research restricted to specific problems such as pattern identification, question-answering, navigation, and the like is often called “weak” AI. AI research and philosophy concerned with the possibility of producing artificial minds comparable (or superior) to human minds is called “strong” AI. Strong AI has produced few real-world results; weak AI has produced many.
AI can also be divided into what is sometimes called “good old-fashioned artificial intelligence” (GOFAI) and neural networks. GOFAI seeks to produce computer programs that apply symbolic rules to coded information in order to make decisions about how to manipulate objects, identify the content of sounds or images, steer vehicles, aim weapons, or the like. These programs run on ordinary digital computers. The other basic approach is the connectionist or neural network approach, which seeks to mimic the way animal nervous systems achieve intelligent behavior: by linking together anywhere from dozens to billions of separate nerve cells—neurons—into a network. A neural network may be produced by building electronic neurons and linking them, or by simulating such a network on an ordinary digital computer (the more common approach). Neural networks may also be combined with rule-based GOFAI systems to create hybrid systems.
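To make the contrast concrete, the following sketch (not from the original article, written in Python purely for illustration) decides the same toy question in both styles: once with an explicit hand-written rule in the GOFAI manner, and once with a single simulated neuron whose hand-picked weights stand in for what a real network would learn from examples.

```python
# Illustrative sketch only: a symbolic rule versus a single simulated neuron,
# both answering the invented question "is this fruit an apple?" from two
# coded features. The features, weights, and threshold are assumptions.

def rule_based_classifier(is_round: bool, is_red: bool) -> bool:
    """GOFAI style: an explicit, human-written rule applied to coded facts."""
    return is_round and is_red

def neuron_classifier(is_round: bool, is_red: bool) -> bool:
    """Connectionist style: a weighted sum of inputs compared to a threshold.
    The weights are fixed by hand here; in practice they would be learned."""
    weights = (0.6, 0.6)          # one weight per input feature
    threshold = 1.0               # firing threshold of the simulated neuron
    activation = weights[0] * is_round + weights[1] * is_red
    return activation >= threshold

for fruit, features in {"apple": (True, True), "banana": (False, False)}.items():
    print(fruit, rule_based_classifier(*features), neuron_classifier(*features))
```

Both functions give the same answers on this toy input; the difference lies in where the knowledge resides, in an explicit rule or in numerical weights spread across a network.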
The mathematical theory behind AI is complex, but as a simplification it can be said that there are three basic elements to the handling of information in AI systems.
The first is knowledge representation: ways of organizing and annotating factual information in computer memory. The second is searching, the means by which a computer sifts through a database (such as a list of possible disease diagnoses) or a set of calculated alternatives (such as chessboard positions) to find a solution to a problem; the set of all possible solutions is often called a “problem space.” The third is the use of heuristics, rules of thumb that guide the search process and make it more efficient, which matters because problem spaces are often far too large to search completely.
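A minimal, hypothetical Python sketch can show all three elements at once: a small knowledge base of diagnoses and symptoms (knowledge representation), a scan over the candidate diagnoses (search), and a prevalence-based ordering rule (a heuristic) that examines the likeliest conditions first. The diseases, symptoms, and prevalence figures below are invented purely for illustration.

```python
# Hypothetical example: the diagnoses, symptoms, and prevalence values are
# made up for illustration and carry no medical meaning.

KNOWLEDGE_BASE = {
    # diagnosis: (set of expected symptoms, rough prevalence used as a heuristic)
    "common cold": ({"cough", "runny nose"}, 0.30),
    "influenza":   ({"cough", "fever", "aches"}, 0.10),
    "measles":     ({"fever", "rash"}, 0.01),
}

def plausible_diagnoses(observed_symptoms):
    # Heuristic: examine high-prevalence diagnoses first, so likely matches
    # are found sooner if the search were cut short.
    ordered = sorted(KNOWLEDGE_BASE.items(), key=lambda item: -item[1][1])
    matches = []
    for name, (expected, _prevalence) in ordered:
        if expected <= observed_symptoms:   # all expected symptoms were observed
            matches.append(name)
    return matches

print(plausible_diagnoses({"cough", "fever", "aches", "runny nose"}))
# -> ['common cold', 'influenza'] under these invented facts
```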
State space search is a class of search methods used widely in AI. It defines a problem as a “space” in a mathematical rather than a physical sense: a linked network of possible states, or nodes. Operators (computational functions) allow transitions from each state to others. The resulting network of states is called a graph. Nodes or states that correspond to acceptable solutions to the problem are called goal states.
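As a concrete illustration (not drawn from the article itself), the classic two-jug puzzle can be cast as a state space search: each state records how much water is in a 4-liter and a 3-liter jug, the operators are fill, empty, and pour moves, and any state with exactly 2 liters in the larger jug is a goal state. The Python sketch below explores the resulting graph with a breadth-first search.

```python
from collections import deque

# Two-jug puzzle as a state space search (assumed example, not from the article).
# A state is (amount_in_4l_jug, amount_in_3l_jug); operators generate neighbor
# states; breadth-first search returns a path of states from start to a goal.

def operators(state):
    a, b = state                          # contents of the 4 l and 3 l jugs
    pour_a_to_b = min(a, 3 - b)
    pour_b_to_a = min(b, 4 - a)
    return {
        (4, b), (a, 3),                   # fill either jug
        (0, b), (a, 0),                   # empty either jug
        (a - pour_a_to_b, b + pour_a_to_b),   # pour large jug into small jug
        (a + pour_b_to_a, b - pour_b_to_a),   # pour small jug into large jug
    }

def is_goal(state):
    return state[0] == 2                  # 2 liters left in the 4-liter jug

def breadth_first_search(start=(0, 0)):
    frontier = deque([[start]])           # queue of paths through the state graph
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path                   # sequence of states from start to goal
        for nxt in operators(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(breadth_first_search())
```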
The mathematical discipline known as graph theory is used to reason about ways of searching state spaces for goal states. For example, the problem of a robot rover seeking a route through a field of boulders might be cast as a state space search. Acceptable goal states would be points on the far side of the field, attained with the rover undamaged, untrapped, and standing at an acceptable degree of tilt. Intermediate positions of the rover in the boulder field would be the nodes of the graph; operators would correspond to maneuvers between intermediate positions. (This is not necessarily how autonomous rover navigation is actually calculated, but it serves as an illustrative example.)

Specialized problem-solving languages have been developed for handling heuristics, state space searches, and other AI operations. These languages include LISP (for “list processing”) and PROLOG (for programmation en logique, French for “programming in logic”).
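Returning to the boulder-field example above, the hypothetical Python sketch below casts rover navigation as a search over a small grid. It is purely illustrative, as the text notes: grid positions are nodes, moves to adjacent boulder-free cells are operators, any cell on the far side is a goal state, and the heuristic is the number of columns remaining, used by an A*-style best-first search.

```python
import heapq

# Purely illustrative grid version of the rover example; the field layout and
# cost model are assumptions. '#' marks a boulder, '.' a passable cell.

FIELD = [
    "..#...",
    ".#..#.",
    "......",
]
ROWS, COLS = len(FIELD), len(FIELD[0])

def neighbors(pos):
    """Operators: moves to adjacent boulder-free cells."""
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS and FIELD[nr][nc] != "#":
            yield (nr, nc)

def heuristic(pos):
    return (COLS - 1) - pos[1]            # columns remaining to the far side

def a_star(start=(0, 0)):
    frontier = [(heuristic(start), 0, start, [start])]   # (estimate, cost, node, path)
    best_cost = {start: 0}
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos[1] == COLS - 1:            # reached a goal state on the far side
            return path
        for nxt in neighbors(pos):
            new_cost = cost + 1
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(
                    frontier,
                    (new_cost + heuristic(nxt), new_cost, nxt, path + [nxt]),
                )
    return None                           # no route through the boulder field

print(a_star())
```

Because the heuristic never overestimates the remaining distance, this best-first search examines far fewer positions than an exhaustive sweep of the field while still returning a shortest route when one exists.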
Modern Cultural Connections
Programs exploiting techniques developed in the AI field are now commonplace in commercial and military applications. Hundreds of thousands of industrial robots are used worldwide on assembly lines (more in Japan than any other country). Neural-network programs running on conventional digital computers are used by credit-card companies to search for anomalous card uses that might signal identity theft. Governments use such programs to search the Internet for behaviors that they consider illicit, including terrorism or forbidden politics; for example, China, named as a routine abuser of human rights by the U.S. State Department and nongovernmental human-rights organizations such as Amnesty International, began research on a nationwide digital surveillance project called Golden Shield in 2001. Golden Shield depends heavily on artificial intelligence techniques to recognize faces in video recorded by surveillance cameras and to decipher recorded telephone conversations. The system will, according to the International Centre for Human Rights and Democratic Development, be “effectively applying artificial intelligence routines to data analysis via complex algorithms, which enable automatic recognition and tracking. Such automation not only widens the surveillance net, it narrows the mesh.”
As with other technologies, AI can cause social stresses even when there is no malicious intent. The RAND Corporation, a private strategic think tank often hired by the U.S. military, reported in 2001 that “[t]he increasing sophistication of robotics, coupled with software advances (e.g., in artificial intelligence and speech understanding) removes jobs from the marketplace, both in low-skilled, entry-level positions and more sophisticated specialties.” Military applications of AI are already in use, including the partially self-guided weapons termed “smart bombs” and autonomous navigation by unmanned airplanes and submarines. Military research programs are directed toward the eventual development of fully autonomous weapons that would function without direct human supervision.
AI systems are also finding application in medicine, industry, game-playing, and numerous other areas. Wherever computers are interacting with complex environments, AI techniques are often being applied, usually in inconspicuous or built-in ways that are not apparent to users.
In the culture at large, AI participates prominently in unresolved debates about human nature, determinism, free will, and morality. Some thinkers argue that human beings are only “lumbering robots” (Richard Dawkins, 1941–) programmed by their DNA, and that the human brain is a carbon-based neural net composed of billions of neural mini-robots, not essentially different from any other computer (Daniel Dennett, 1942–). In popular culture, Captain Jean-Luc Picard informs viewers of the TV drama Star Trek: The Next Generation that human beings “are machines—just machines of a different type.” Others, such as physicist Roger Penrose (1931–) and philosophers John Searle (1932–) and Hubert Dreyfus (1929–), argue that the strong-AI equation of mechanical and human thought is based on fallacious assumptions about thinking, information, and physical systems.
See Also Computer Science: Artificial Intelligence and Economics; Computer Science: Information Science and the Rise of the Internet; Computer Science: The Computer.
Bibliography
Books
Anderson, Alan Ross, ed. Minds and Machines. Englewood Cliffs, NJ: Prentice-Hall, 1964.
Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books, 1993.
Dreyfus, Hubert L. What Computers Can't Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press, 1992.
Padhy, N.P. Artificial Intelligence and Intelligent Systems. New Delhi, India: Oxford University Press, 2005.
Penrose, Roger. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. New York: Oxford University Press, 1989.
Roland, Alex. Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983–1993. Cambridge, MA: MIT Press, 2002.
Scientific American. Understanding Artificial Intelligence. New York: Warner Books, 2002.
Von Neumann, John. The Computer and the Brain. New Haven, CT: Yale University Press, 1958.
Larry Gilman