Computationalism

Computer science has been notably successful in building devices capable of performing sophisticated intellectual tasks. Impressed by these successes, many philosophers of mind have embraced a computational account of the mind. Computationalism, as this view is called, is committed to the literal truth of the claim that the mind is a computer: Mental states, processes, and events are computational states, processes, and events.

The Basic Idea

What exactly are computational states, processes, and events? Most generally, a physical system, such as the human brain, implements a computation if the causal structure of the system, at a suitable level of description, mirrors the formal structure of the computation. This requires a one-to-one mapping of formal state-types of the computation to physical state-types of the system. The mapping from formal state-types to physical state-types can be called an interpretation function I. I allows a sequence of physical state-transitions to be seen as a computation.

An example should make the central idea clear. A physical system computes the addition function if there exists a one-to-one mapping from numbers to physical state-types of the system such that any numbers n, m, and n+m related as addends and sums are mapped to physical state-types related by a causal state-transition relation. In other words, whenever the system goes into the physical state specified under the mapping as n, and then goes into the physical state specified under the mapping as m, it is caused to go into the physical state specified under the mapping as n+m.
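
The point can be made concrete with a minimal sketch, in Python, using entirely hypothetical state names and transitions (it is not drawn from any particular system). A toy "physical system" is given as a causal state-transition table, and an interpretation function I maps its physical state-types to numbers; the system implements a fragment of the addition function under I just in case states interpreted as n and m always cause a state interpreted as n+m.

```python
# A toy "physical system": its causal structure, given as a transition table.
# Keys are pairs of physical state-types; values are the state-type the
# system is caused to enter next. The state names are hypothetical.
transitions = {
    ("s1", "s1"): "s2",
    ("s1", "s2"): "s3",
    ("s2", "s1"): "s3",
    ("s2", "s2"): "s4",
}

# An interpretation function I from physical state-types to numbers.
I = {"s1": 1, "s2": 2, "s3": 3, "s4": 4}

def implements_addition(transitions, I):
    """True if, whenever the system is in states interpreted as n and then m,
    it is caused to go into a state interpreted as n + m."""
    return all(I[a] + I[b] == I[c] for (a, b), c in transitions.items())

print(implements_addition(transitions, I))  # True: under I, the system adds
```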

Traditionally, computational processes have been understood as rule-governed manipulations of internal symbols or representations (what computer scientists call data structures). Though these representations typically have meaning or semantic content, the rules apply to them solely in virtue of their structural properties, in the same way that the truth-preserving rules of formal logic apply to the syntax or formal character of natural language sentences, irrespective of their semantic content. Computationalism thus construes thinking as a type of mechanical theorem-proving.
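
A minimal sketch may help fix ideas; the rule below is purely illustrative (the representation scheme is hypothetical and not a claim about any actual cognitive architecture). The rule fires on any pair of data structures of the right shape, attending only to their structure and never to what the symbols mean.

```python
# Internal "symbols" represented as nested tuples (data structures).
# The rule below inspects only their form, never their meaning.

def modus_ponens(premises):
    """From a structure ('IF', p, q) together with p, derive q by shape alone."""
    derived = set()
    for premise in premises:
        if isinstance(premise, tuple) and len(premise) == 3 and premise[0] == "IF":
            _, p, q = premise
            if p in premises:
                derived.add(q)
    return derived

premises = {("IF", "RAINING", "STREETS_WET"), "RAINING"}
print(modus_ponens(premises))  # {'STREETS_WET'}, derived without consulting meanings
```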

Computationalism has been the predominant paradigm in cognitive psychology since the demise of behaviorism in the early 1960s. The failure of behaviorism can be traced in no small part to its refusal to consider the inner causes of behavior, in particular, the capacity of intelligent organisms to represent their environment and use their representations in controlling and modulating their interactions with the environment. Computationalism avoids this failing, explaining intelligent behavior as the product of internal computational processes that manipulate (construct, store, retrieve, and so on) symbolic representations of the organism's environment.

Many philosophers of mind find computationalism attractive for two reasons. First, it promises a physicalistic account of mind; specifically, it promises to explain mental phenomena without positing any mysterious nonphysical substances, properties, or events. Computational states are physically realized in the computer; they are just the physical states specified by the mapping I. Computational operations, as noted, are purely mechanical, applying to the objects in their domain (typically, symbols) in virtue of their structural properties. Moreover, computationalism, if true, would show how it is possible for mental states to have both causal and representational properties: to function as the causes of behavior, and to be about things other than themselves. Mental states, on this view, are relations to internal symbols, and symbols have a dual character: They are both physically constituted, hence causally efficacious, and bearers of meaning.

A second reason why philosophers of mind find computationalism attractive is that it promises a nonreductive account of the mind. A serious problem with reductive physicalist programs, such as the central state identity theory, is that they are overly chauvinistic; they seek to identify mental state-types, such as pain, with their specific physical (i.e., neural) realization in humans, thus denying mentality to systems that lack human physiology. Functionalists, by contrast, take it to be a contingent fact about mental states and processes that they are realized in neural matter; these same mental processes may, in other creatures or devices, be realized in other ways (e.g., in silicon-based circuitry). According to functionalism, it is the causal organization of a system, rather than its intrinsic physical makeup, that is crucial to its mentality. Computationalism endorses, and affords a precise specification of, the basic idea of functionalism. The computational characterization given by I provides an abstract characterization of that causal organization for a given system. Computational explanation is itself a species of functional explanation; it provides an analysis of a cognitive capacity in terms of the organized interaction of distinct components of the system, which are themselves functionally characterized, that is, described abstractly in terms of what they do rather than what they are made of.

A commitment to computationalism by philosophers of mind has frequently taken the form of a commitment to a computational construal of the Representational Theory of Mind (hereafter, RTM-C), which is an account of propositional attitudes (such states as beliefs, desires, hopes, fears, and so on). According to RTM-C, propositional attitudes are relations to internal representations; for example, to believe that P is to bear a certain relation to a token of an internal representation that means that P. Each attitude type is construed as a distinct computationally characterizable relation to an internal representation; thus, believing is one type of computational relation, and desiring another. The RTM-C has been advertised by, for example, Jerry Fodor in Psychosemantics (1987), as a scientific vindication of the commonsense practice of explaining a subject's behavior by appealing to his or her propositional attitudes. If true, it would underwrite the practice of individuating propositional attitudes along two distinct dimensions, by attitude and by content. Subjects can hold various attitudes toward a single proposition; they may believe, doubt, or fear that the conflict in the Middle East will never be resolved. And subjects bear the same relation (belief, say) to many different propositions. On the RTM-C, the various attitudes correspond to distinct computational operations, and distinct data structure-types over which these operations are defined have distinct contents. The transparency of the relation between the commonsense explanatory scheme and the underlying computational realization of human psychology is an attractive feature of the view. However, it may also seem rather surprising that the two explanatory structures are virtually isomorphic. (Imagine if commonsense physics had anticipated the basic explanatory structure of quantum physics.)
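
The two dimensions of individuation can be pictured with a small sketch (a deliberately crude "box" metaphor, not RTM-C itself): the attitude corresponds to which computational relation a representation stands in, and the content corresponds to which representation it is.

```python
# A crude illustration of RTM-C's two dimensions of individuation.
# The "boxes" stand in for distinct computational relations (attitudes);
# the tuples stand in for internal representations with distinct contents.
belief_box, desire_box, fear_box = set(), set(), set()

P = ("the conflict in the Middle East will never be resolved",)
Q = ("it will rain tomorrow",)

belief_box.add(P)   # believing that P
fear_box.add(P)     # fearing that P: same content, different attitude
belief_box.add(Q)   # believing that Q: same attitude, different content

print(P in belief_box and P in fear_box)  # True
```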

The RTM-C is a heavily committed empirical hypothesis about the nature of the mind. Unlike computationalism, which claims simply that the mind is a computer, RTM-C purports to specify in broad outline the computational architecture of the mental processes that produce behavior. Computationalism is therefore compatible with the falsity of RTM-C. Is there any reason to believe RTM-C? It has been claimed that existing work in computational cognitive science provides empirical support for the RTM-C. Fodor (1987) points out that computational models of human cognitive capacities construe such capacities as involving the manipulation of internal representations. Psycholinguistic theories, for example, explain linguistic processing as the construction and transformation of structural descriptions, or parse trees, of the public language sentence being processed. It should be noted, however, that in order for a psychological theory to provide support for the RTM-C (for the claim that to have an attitude A toward a proposition P is to bear a computational relation R to an internal structure that means that P), it is not sufficient that the theory posits computational operations defined over internal representations. The posited representations must have appropriate contents; in particular, they must be interpreted in the theory as the contents of attitudes that one is prepared to ascribe to subjects independently of any commitment to the RTM.

For example, consider a psycholinguistic theory that explains a subject's understanding of the sentence "the dog bit the boy" as involving the construction of a parse tree exhibiting the constituent structure of the sentence. The theory supports the RTM only if there are independent grounds for attributing to the subject the content ascribed to the parse tree. There may be grounds for attributing to the subject a belief in a certain distal state of affairs (that a specific dog bit a specific boy), but this is not the content ascribed to the parse tree by the psycholinguistic theory. The parse tree's content is not even of the right sort. It does not represent a distal state of affairs; it represents the constituent structure of the sentence comprehended. The psycholinguistic theory supports the RTM only if the subject has propositional attitudes about the grammatical constituents of the sentence, such things as noun phrases and determiners. Such attitudes may be attributed to subjects as a consequence of the acceptance of the RTM-C, but these attitudes would not provide independent empirical support for the view.

Some General Issues

Developments in computer science in the 1980s, in particular the construction of connectionist machines (devices capable of performing cognitive tasks but without fixed symbols over which their operations are defined), have necessitated a broadened understanding of computation. Connectionist processes are not naturally interpretable as manipulations of internal symbols or data structures. Rather, connectionist networks consist of units or nodes whose activation increases or decreases the activation of other units to which they are connected until the ensemble settles into a stable configuration. Because connectionist networks lack symbols, they lack the convenient "hooks" to which, in the more traditional classical models, semantic interpretations or meanings are attached. Semantic interpretations, in connectionist models, are assigned either to individual units (in localist networks), or, more commonly, to patterns of activation over an ensemble of units (in distributed networks). Therefore, representation in connectionist devices is, in one respect, not as straightforward as it is in classical devices, because it is not as transparent which states or structures of the device count as representations. But issues concerning how the interpreted internal states or structures acquire their meaning (in other words, how a given semantic interpretation is justified) are fundamentally the same for the two kinds of machines.
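
A minimal sketch of the settling process may be useful (the units, weights, and update rule here are hypothetical and chosen only for illustration): each unit repeatedly adjusts its activation in response to its weighted connections until no unit changes, and the resulting stable pattern of activation is what receives a semantic interpretation in a distributed network.

```python
# A tiny network of three units with symmetric, hand-picked connection weights.
weights = {
    (0, 1): 1.0, (1, 0): 1.0,     # units 0 and 1 excite each other
    (1, 2): -1.0, (2, 1): -1.0,   # units 1 and 2 inhibit each other
    (0, 2): -1.0, (2, 0): -1.0,   # units 0 and 2 inhibit each other
}

def settle(activations, max_passes=20):
    """Update each unit from its neighbors until the ensemble is stable."""
    for _ in range(max_passes):
        changed = False
        for i in range(len(activations)):
            net = sum(w * activations[j] for (j, k), w in weights.items() if k == i)
            new_state = 1 if net >= 0 else 0
            if new_state != activations[i]:
                activations[i], changed = new_state, True
        if not changed:
            break
    return activations

print(settle([1, 0, 1]))  # settles into the stable pattern [0, 0, 1]
```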

Connectionist networks have had some success modeling various learning tasks, most notably pattern-recognition tasks. There is continuing discussion within cognitive science about whether connectionist models will succeed in providing adequate explanations of more complex human cognitive capacities without simply implementing a classical or symbol-based architecture. One issue, originally raised by Jerry Fodor and Zenon Pylyshyn (1988), has turned on whether connectionist networks have the resources to explain a putative property of thought: that cognitive capacities are systematically related. For example, subjects can think the thought the dog bit the boy only if they can think the thought the boy bit the dog. A classical explanation of systematicity would appeal to the constituent structure of representations over which the operations involved in these capacities are defined: these representations contain the same constituents, just differently arranged. Connectionists, of course, cannot provide this sort of explanation; their models do not contain structured representations of the sort that the explanation requires.
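
The classical explanation can be illustrated with a small sketch (the representation scheme is hypothetical and greatly simplified): because the thought is encoded as an arrangement of recombinable constituents, the capacity to token one structure guarantees the capacity to token its systematic variant, built from the very same constituents.

```python
# A classically structured representation: constituents arranged in a structure.
dog_bit_boy = ("S", ("NP", "the dog"), ("V", "bit"), ("NP", "the boy"))

def systematic_variant(thought):
    """Rearrange the same constituents: agent and patient trade places."""
    label, agent, verb, patient = thought
    return (label, patient, verb, agent)

print(systematic_variant(dog_bit_boy))
# ('S', ('NP', 'the boy'), ('V', 'bit'), ('NP', 'the dog'))
```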

Whether systematicity constitutes a decisive reason to prefer classical over connectionist cognitive models depends on several unresolved issues: (1) how pervasive the phenomenon really is. It is certainly not true generally that if one can entertain a proposition of the form aRb, then one can entertain bRa. One can think the thought the boy parsed the sentence but not the sentence parsed the boy; (2) whether classical cognitive models are in fact able to provide real explanations of the systematic relations holding among cognitive capacities, rather than simply a sketch of the form such explanations would take in classical models. A real explanation of the phenomenon would require, at least, the specification of a compositional syntax for the internal system of representation, something that classicists have so far been unable to provide; and (3) whether connectionist models are in fact unable to explain the systematic relations that do hold among cognitive capacities.

While strong claims have been made on both sides of this dispute, the question remains open. If it turns out that the mind has a connectionist architecture, then it would be expected that a perspicuous account of this architecture would reveal many cognitive capacities to be systematically related. For example, a characterization of the state of the network that consists in an English speaker's understanding of the sentence "the dog bit the boy" would cite the activation levels of various nodes of the network. The subject's understanding of the sentence "the boy bit the dog" would, presumably, activate many of the same nodes, and the explanation for the systematic relation between these two states would appeal to a dynamical account of the network's state transitions. These nodes would not be constituents of the subject's thought(s), in the sense required by classical models. And yet the relation between the two thoughts would not be merely accidental but instead would be a lawful consequence of general features of the network's architecture.

Questions such as whether connectionist devices will prove capable of modeling a wide range of complex cognitive capacities, and whether the best explanation of human cognitive capacities will advert to connectionist or classical computational processes, are properly understood as issues within computationalism. It should be noted, however, that the majority of philosophers committed to computationalism tend to interpret computation in classical terms, claiming that mental processes are manipulations of symbols in an internal code or language of thought. (See Jerry Fodor's The Language of Thought [1975] for the most explicit account of this view.) For this reason, and for ease of exposition, this entry will continue to refer to computational processes as manipulations of internal representations.

Computationalism requires a psychosemantics, that is, an account of how the postulated internal representations (or, in connectionist devices, states of the network) acquire their meaning. In virtue of what fact does a particular data structure mean snow is white rather than 2+2=4? The meanings of natural language sentences are fixed by public agreement, but internal symbols must acquire their meanings in some other way. Philosophers committed to computationalism (and, hence, typically to physicalism) have assumed that an appropriate semantics for the language of thought must respect a "naturalistic constraint," the requirement that the conditions for a mental representation's having a particular meaning must be specifiable in nonintentional and nonsemantic terms. There have been various proposals for a naturalistic semantics. Information-based theories identify the meaning of a mental representation with the cause of its tokening in certain specifiable circumstances. Teleological theories hold that the meaning of a mental representation is determined by its biological function, what it was selected for.

No proposal is without serious problems, and the difficulty of accounting for the possibility that thoughts can misrepresent is the most widely discussed. But the difficulty of specifying naturalistic conditions for mental representation does not undermine computationalism itself. Cognitive scientists engaged in the business of developing computational models of cognitive capacities seem little concerned with the naturalistic constraint, and their specifications of semantic interpretations for these models do not obviously respect it. (See Frances Egan [1995] for argument.) There is no reason to think that the physicalistic bona fides of computational models are thereby impugned.

Successes and Obstacles

As a hypothesis about the nature of mind, computationalism is not uncontentious. Important aspects of the mental have so far resisted computational analysis, and computational theorists have had little to say about the nature of conscious experience. While computers perform many intellectual tasks impressively, no one has succeeded in building a computer that can plausibly be said to feel pain or experience joy. It is possible that consciousness requires an explanation in terms of the biochemistry of the brain. In other words, the computational strategy of prescinding from the neural details of mental processes may mean that conscious phenomena will escape its explanatory net.

If conscious mental phenomena resist computational analysis, then the computational model of mind cannot be said to provide a general account of the human mind; however, the model may still provide the basis for a theory of those cognitive capacities that do not involve consciousness in any essential way. Cognitive psychologists have applied the computational model to the study of language processing, memory, vision, and motor activity, often with impressive results. Domain-specific processes, such as syntactic processing and early vision, have proved most amenable to computational analysis. It is likely that the information available to these processes is tightly constrained. So-called modular processes lend themselves to computational treatment precisely because they can be studied independently of the rest of the cognitive system. One does not need to know how the whole mind works to characterize the relatively simple interactions involved in these processes. The idea that perceptual processes are modular, at least up to a certain point, is well supported. Modular processes tend to be more reliable: they take account of information in the input before being influenced by the system's beliefs and expectations. This is especially important for the perception of novel input. And modular processes are faster: the process does not have to find and retrieve relevant information from memory for the processing to proceed. Ultimately, of course, perceptual processes will have to be integrated with the rest of the system if they are to provide a basis for reasoning, belief formation, and action.

Perceptual mechanisms, as characterized by computational accounts, typically rely on physical constraints, that is, on general information true of the subject's environment, to aid the recovery of perceptible properties of that environment. This information is assumed to be available only to the process in question; it is not stored in memory, and hence not available to the system for reasoning tasks. For example, the structure from motion visual mechanism, characterized by Shimon Ullman in The Interpretation of Visual Motion (1979), computes the structure of objects in the scene from information obtained from relative motion. The mechanism computes the unique rigid structure compatible with relatively minimal input data (three distinct views of four non-coplanar points in the object), in effect making use of the fact that objects are rigid in translation. Without the assumption of rigidity, more data is required to compute an object's shape. Whether or not Ullman's model accurately describes the human visual system, the general strategy of positing innate assumptions about the environment that simplify the processing is methodologically sound, given that perceptual mechanisms may be assumed to be adaptations to that environment.
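
Schematically (this is a gloss on the general shape of such a rigidity constraint, not Ullman's own formulation), the assumption is that the three-dimensional distances between the imaged points are the same in every view; with the image coordinates given and the depths unknown, the requirement sharply constrains the structures compatible with the data:

```latex
% Schematic rigidity constraint: for points i and j with image coordinates
% (x_i, y_i) and unknown depths z_i, and for any two views v and w,
\[
  (x_i^{v} - x_j^{v})^2 + (y_i^{v} - y_j^{v})^2 + (z_i^{v} - z_j^{v})^2
  \;=\;
  (x_i^{w} - x_j^{w})^2 + (y_i^{w} - y_j^{w})^2 + (z_i^{w} - z_j^{w})^2 .
\]
```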

Domain-general processes, such as decision making and the rational revision of belief in response to new information, have so far resisted computational treatment. Their intractability is due in part to the fact that general constraints on the information that may be relevant to solutions are difficult, if not impossible, to specify. A system capable of passing the Turing test (the requirement that it convince an interlocutor, for a short period of time, that it is a person) must have access to a vast store of information about the world, about how agents typically interact with that world, and about the conventions governing conversation among agents in the world. All this information must be stored in the system's memory. At any point in the conversation, the system must be capable of bringing that information to bear on the selection of an appropriate response from the vast number of meaningful responses available to it. Human agents, of course, have no trouble doing this. The relevant information is somehow just there when it is needed. The task for the computational theorist is to characterize how this vast store of information is represented in the system's memory in such a way that relevant information can be accessed efficiently when needed. This formidable technical problem is known in the field of Artificial Intelligence (AI) as the knowledge representation problem.

A related problem, known in AI circles as the frame problem, concerns how a system is able to continuously update its knowledge store as the world around it changes. Every change has a large number of consequences. For example, the typing of the previous sentence on this author's computer requires the provision of a plausible example of the generalization just typed. It also changes the arrangement of subatomic particles in the room, yet it doesn't affect the Dow Jones industrial average or the price of crude oil. The author needs to keep track of some of these consequences, but most can and should be ignored. How, then, does the author update her knowledge store to take account of just those changes that are relevant (for her) while ignoring the vast number that are not? Unless the frame problem can be solved, or otherwise sidestepped, computationalism has a slim chance of providing a general account of human cognitive capacities.

General Objections

Opponents of computationalism have offered arguments purporting to show that the human mind cannot be a computer. One class of objection, typified by Roger Penrose's The Emperor's New Mind (1989), takes as its starting point Kurt Gödel's result that any consistent formal system powerful enough to do arithmetic yields a sentence that is undecidable, that is, a sentence such that neither it nor its negation is provable within the system. A human observer, the argument continues, can see that the undecidable sentence is true; therefore, the human's cognitive abilities outstrip those of the machine. For the argument to establish that human minds are not machines, it would have to demonstrate that human cognitive abilities simultaneously transcend the limits of all machines. No version of the argument has succeeded in establishing this strong claim.

A second objection claims that any physical system, including a rock or a piece of cheese, may be described as computing any function; thus computationalism's claim that the human mind is a computer is utterly trivial. If everything is a computer, then computationalism reveals nothing interesting about the nature of mind. The following is John Searle's version of the argument in The Rediscovery of the Mind (1992). Recall that to characterize a physical system as a computer is to specify a mapping from formal states of a computation to physical state-types of the system. Take some arbitrary function, say the addition function, and some physical system, say a particular wall. Though the wall appears to be in a constant state, it is known that the wall is made up of atoms in continuous motion. Its physical state is constantly changing. The microphysical state of the wall at time t1 can be interpreted as two, its microphysical state at time t2 as three, and its microphysical state at time t3 as five. And similarly for other combinations of addends and sums. Under this interpretation the physical state transitions of the wall implement the addition function. The wall is an adder!
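
A minimal sketch makes the move explicit (the "wall states" and the mapping below are, of course, entirely made up): the interpretation is simply read off the wall's successive microstates so that the observed sequence comes out as an addition, and all the arithmetic is done in choosing the mapping rather than by anything the wall does.

```python
# Three arbitrary "microstates" of the wall, one per instant (hypothetical labels).
wall_states = ["microstate_at_t1", "microstate_at_t2", "microstate_at_t3"]

# A post hoc interpretation, gerrymandered so that the observed state sequence
# "implements" 2 + 3 = 5 under it.
I = {
    "microstate_at_t1": 2,
    "microstate_at_t2": 3,
    "microstate_at_t3": 5,
}

s1, s2, s3 = wall_states
print(I[s1] + I[s2] == I[s3])  # True: the wall is an "adder" under I
```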

It is possible, in the above sense, to describe any physical system as computing any function. This does nothing, however, to damage computationalism's claim that the mind is a computer. There are significant differences between the interpretation function under which the wall is an adder and the interpretation function under which a hand calculator is an adder. One important difference is that in order to know how to interpret the wall's states as sums, one has to compute the addition function oneself. The triviality argument does point to an important task for theorists concerned with the foundations of computational cognitive science, namely, specifying the adequacy conditions on interpretation that allow a computational characterization of a physical system to be predictive and explanatory of the system's behavioral capacities.

A third objection to computationalism has been made by John Searle in his 1980 article "Minds, Brains, and Programs." According to Searle's Chinese Room argument, genuine understanding cannot be a computational process. The manipulation of symbols according to rules that operate only on their structural properties is, according to Searle, a fundamentally unintelligent process. The argument, which many have found unconvincing, is formulated explicitly for classical computational models, yet if Searle is right it would apply to any mechanical model of the mind, and hence to connectionist models as well.

It is unlikely that a philosophical argument of the sort discussed in this section will prove computationalism false. Computationalism is a bold empirical hypothesis about the nature of mind that will be evaluated by the explanatory fruit it bears. There is reason for cautious optimism, though substantial progress needs to be made on some formidable technical issues before theorists of mind can be confident that computationalism is true.

See also Artificial Intelligence; Chinese Room Argument; Cognitive Science; Machine Intelligence; Psychology.

Bibliography

Copeland, Jack. Artificial Intelligence: A Philosophical Introduction. Oxford: Blackwell, 1993.

Cummins, Robert. Meaning and Mental Representation. Cambridge, MA: MIT Press, 1989.

Egan, Frances. "Computation and Content." The Philosophical Review 104 (1995): 443–459.

Fodor, Jerry. The Language of Thought. New York: Thomas Y. Crowell, 1975.

Fodor, Jerry. Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, MA: MIT Press, 1987.

Fodor, Jerry, and Zenon Pylyshyn. "Connectionism and Cognitive Architecture: A Critical Analysis." Cognition 28 (1988): 3–71.

Penrose, Roger. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford: Oxford University Press, 1989.

Searle, John. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3 (1980): 417424.

Searle, John. The Rediscovery of the Mind. Cambridge, MA: MIT Press, 1992.

Sterelny, Kim. The Representational Theory of Mind: An Introduction. Oxford: Blackwell, 1990.

Ullman, Shimon. The Interpretation of Visual Motion. Cambridge, MA: MIT Press, 1979.

Frances Egan (2005)
